no-problem/0001/astro-ph0001195.html
# Physical morphology and triggers of starburst galaxies

## 1 Introduction

The large number of galaxies at high redshifts ($`z`$) undergoing intense star-formation (Steidel et al. 1996, Lowenthal et al. 1997) suggests that starburst galaxies were a dominant phase of early galaxy evolution. While the rate and specific intensity of star formation in these distant galaxies are certainly higher than in typical, nearby starbursts (Weedman et al. 1999), local starbursts are similar to star-forming high-$`z`$ galaxies in terms of their structural and stellar characteristics (Giavalisco et al. 1996; Hibbard & Vacca 1997; Heckman et al. 1998; Conselice et al. 2000a). Related issues include determining how starbursts are triggered, and whether the triggering mechanisms change with $`z`$. Starburst triggering mechanisms include interactions and mergers (Schweizer 1987; Jog & Das 1992), bar instabilities (Shlosman et al. 1990), and kinematic effects from SNe and stellar winds (e.g. Heckman et al. 1990). While merging is expected to be more common at earlier epochs, the first epoch of star-formation could occur as a result of the initial collapse of individual gas clouds. For nearby, luminous galaxies, it is usually possible to determine what triggers a starburst by examining kinematic and pan-chromatic structural information. This can be quite expensive in telescope time, particularly for high-$`z`$ starbursts, where detailed spectroscopic information is difficult to obtain at present. While some starbursts are undergoing interactions or mergers, it is difficult to quantify the strength and youth of such events. A method of determining starburst triggers based on morphology or other easily observable properties of a galaxy would be ideal. In this paper we present a method to determine objectively whether a starburst is triggered by a galaxy interaction, based on its color and $`R`$-band asymmetry.

## 2 The Sample and Optical Data

The sample consists of five UV-bright examples of nearby galaxies chosen from a study of northern hemisphere starbursts: Markarian 8, NGC 3310, NGC 3690, NGC 7673, and NGC 7678. These galaxies were imaged in several bands with the WIYN 3.5m telescope (the WIYN Observatory is a joint facility of the University of Wisconsin-Madison, Indiana University, Yale University, and the National Optical Astronomy Observatories), located at the Kitt Peak National Observatory, using a 2048<sup>2</sup>-pixel thinned SB2K CCD with a $`6.8\times 6.8`$ arcmin<sup>2</sup> field of view and a scale of 0.2 arcsec per pixel. The seeing during the observations was on average $`1^{\prime \prime }`$ FWHM. The $`R`$-band images used here are bias subtracted, flat-fielded, and cleared of foreground stars and background galaxies. These contaminating objects can cause rather high asymmetries if not properly removed. The (B-V) colors for all galaxies are from the RC3 catalog, except for NGC 3690, where we adopt Weedman’s (1973) value.

The starbursts in our sample are benchmarks; i.e. they are relatively well studied and understood. Markarian 8 hosts an intense starburst, with very blue colors (Huchra 1977). This galaxy contains several distinct ‘pieces’ with visible tidal tails, and has long been recognized as a merger, or a strongly interacting double galaxy (Casini & Heidmann 1976; Keel & van Soest 1992). The interaction/merger between the multiple components of this galaxy is responsible for triggering the star-formation in the disk.
NGC 3310, classified as a barred spiral (RC3), has its very young starburst in a 1 kpc diameter ring around the nucleus. The bar and the size of the ringed structure suggest this starburst was triggered by a bar instability (Athanassoula 1992; Piner et al. 1995). Faint outer ripples are evidence for a minor merger or interaction with another galaxy, probably a dwarf (e.g. Balick & Heckman 1981; Schweizer & Seitzer 1988); this plausibly produced the bar instability which led to the starburst. NGC 3690 and Markarian 8 are the most extreme interactions/mergers in our sample. The second ‘half’ of NGC 3690 is IC 694 (e.g. Gehrz et al. 1983). There is no disk structure to this galaxy, which is populated by very luminous, high surface-brightness star forming regions (Soifer et al. 1989). NGC 7673 morphologically consists of an inner disturbed spiral structure, faint outer ripples (Homeier & Gallagher 1999), and huge, blue star-forming clumps embedded in a disturbed H I disk (Nordgren et al. 1997). These features are clues that this galaxy recently interacted with another galaxy, triggering the starburst. NGC 7678, classified as a barred spiral, contains a starburst located in a roughly symmetrical spiral pattern, similar to NGC 3310. This starburst consists of several bright H II regions (Goncalves et al. 1998) and contains a Seyfert nucleus (Kazarian 1993). NGC 7678 contains a large, massive blue arm where much of the starburst is located (Vorontsov-Velyaminov 1977). Most of these galaxies show evidence for an interaction, but in various degrees and intensities. NGC 3310 and NGC 7678 are probably minor mergers, or interactions that occurred in the distant past, while NGC 3690, NGC 7673, and Markarian 8 are obvious collisions that contain very disturbed structures.

## 3 Asymmetry

The asymmetry method used here, described in detail by Conselice, Bershady & Jangren (2000), gives a simple, quantified measure of how a galaxy deviates from perfect axisymmetry. Like Abraham et al. (1996), our algorithm consists of rotating a galaxy by 180$`^{\circ }`$ about a center, subtracting the rotated image from the original, and dividing the sum of the absolute value of the pixels in the residual image by the sum of pixel values in the original image. The higher the intensity of the residuals, the larger the asymmetry. In our implementation, however, (a) we repeat the asymmetry computation using different center estimates until a minimum asymmetry is found, and (b) we make a correction for noise. The rotational asymmetry of normal galaxies increases with the ‘lateness’ of their morphological type (Conselice, 1997), indicating that at least some component of their asymmetry is associated with the flocculent appearance of star-formation within galaxies. Asymmetry may also arise, however, from large-scale, dynamical perturbations. Other methods of asymmetry measurement (e.g. Zaritsky & Rix, 1997; Kornreich, Haynes, & Lovelace, 1998) are particularly sensitive to this dynamical component. In contrast, our rotational measurement is sensitive to both dynamical and flocculent components of asymmetry. As we show in the next section, interacting and irregular galaxies can be distinguished on the basis of the relative amplitudes of these two asymmetry components via the color-asymmetry diagram. Asymmetry is a powerful quantitative morphological parameter, and in conjunction with the color of a galaxy, it can be used to determine the physical nature of a galaxy.
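To make the measurement concrete, the following is a minimal Python/NumPy sketch of the basic rotate-subtract-normalize estimator described above. The function name, the brute-force grid of trial centers, and the simple blank-sky noise correction are illustrative assumptions for this sketch; the actual Conselice, Bershady & Jangren (2000) implementation treats the center minimization and the noise correction more carefully.

```python
import numpy as np

def rotational_asymmetry(image, sky_patch=None, search=2):
    """Minimal 180-degree rotational asymmetry estimator (illustrative sketch).

    image     : 2D array holding a single, cleaned galaxy image
    sky_patch : optional blank-sky cutout of the same shape, for a crude noise correction
    search    : half-width, in pixels, of the grid of trial centers
    """
    total_flux = image.sum()
    best = np.inf
    for dy in range(-search, search + 1):        # brute-force center search; the real
        for dx in range(-search, search + 1):    # method iterates toward the minimum
            shifted = np.roll(image, (-dy, -dx), axis=(0, 1))
            rotated = shifted[::-1, ::-1]        # rotate by 180 degrees about the array center
            asym = np.abs(shifted - rotated).sum() / total_flux
            best = min(best, asym)
    if sky_patch is not None:                    # subtract the asymmetry a blank field would give
        best -= np.abs(sky_patch - sky_patch[::-1, ::-1]).sum() / total_flux
        best = max(best, 0.0)
    return best
```

On a cleaned $`R`$-band cutout, such an estimator returns a value near zero for a smooth, symmetric galaxy and grows for strongly disturbed systems.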
## 4 The Color-Asymmetry Diagram (CAD)

The CAD for a sample of 113 nearby galaxies (Frei et al. 1996; Conselice et al. 2000b) and the starburst sample is shown in Figure 1. A correlation between asymmetry and color can be seen for most Hubble types: early type galaxies (E, S0) populate the symmetric, red corner of the diagram; later type galaxies become progressively bluer and more asymmetric. This ‘fiducial’ trend with Hubble type is indicated by the dashed line in Figure 1. Some objects do not lie on the fiducial color-asymmetry sequence. Most of these are highly-inclined systems, as discussed by Conselice et al. (2000b). However, visual inspection of the most extreme outliers at blue colors of B-V$`<`$0.7 (i.e., NGC 3079, Arp 18, and NGC 4254) indicates they are too asymmetric for their colors because they are dynamically disturbed. It appears to be possible to distinguish whether a galaxy is interacting or merely undergoing normal star formation (as in irregular galaxies) based simply on its position in the CAD.

Where do the starbursts lie in the CAD? Their CAD positions are mostly consistent with a strong interaction, or merger origin (Figure 1). Moreover, the amount of deviation from the fiducial galaxy sequence appears to be correlated roughly with the degree of interaction/merger. For example, NGC 3690 and Markarian 8 are major mergers between galaxies of similar sizes, and have two of the most deviant positions. NGC 3310, on the other hand, almost fits along the fiducial galaxy sequence; this starburst is believed to be produced by a bar instability or minor interaction (Conselice et al. 2000a). The fact that NGC 3310 does not fit exactly along this sequence is probably due to the fact that it was involved in a minor merger in the past (Balick & Heckman 1981).

We can quantify and test our interpretation of the CAD by comparing a starburst’s deviance in the CAD with a kinematic indicator of deviance. A variety of HI studies show that interacting galaxies often have unusual and asymmetric global HI emission line profiles, often with extended velocity tails (e.g., Gallagher, Knapp, & Faber 1981, Bushouse 1987, Mirabel & Sanders 1988, Sancisi 1997). Here we introduce the ratio of the HI line-widths at 20% and 50% of maximum, as extracted from the Lyon-Meudon Extragalactic Database (LEDA), as a new dynamical indicator. High values of W<sub>20</sub> / W<sub>50</sub> imply shallower rising HI profiles or wings, and hence should be an indicator of a recent dynamical disturbance. As Figure 2 shows, this line-width ratio is large for the starburst galaxies with asymmetries that deviate from the fiducial sequence. Galaxies shown by the color-asymmetry diagram to be strongly interacting or merging, namely NGC 7673, NGC 3690 and Markarian 8, all have the largest HI line width ratios. NGC 3310 and NGC 7678, which have symmetric inner spiral structures and are probably older starbursts perhaps triggered by bar instabilities, have smaller line-width ratios and are less deviant from the fiducial sequence in color and asymmetry. Hence our interpretation of starburst origin based on the color-asymmetry diagram is corroborated. Finally, we note that the starburst galaxies with high asymmetries have blue UBV colors (Figure 3), but do not lie outside the range for normal galaxies (rectangle, Figure 3, adopted from Huchra, 1977). Larson and Tinsley (1978) showed that galaxies with tidal features, such as tidal tails, had a large scatter in UBV color-color plots.
This is not evident for our small sample, which indicates that the color-asymmetry method is more sensitive to identifying merger-induced starbursts than colors alone.

## 5 Implications for High-Redshift Galaxies

The origins and triggering mechanisms of the increasingly abundant and luminous starbursts observed at intermediate and high $`z`$ are still an issue of debate. Merging is invoked as a likely candidate for triggering because the physical volume of the universe decreases at earlier times; however, this has not been demonstrated directly. The physical-morphological method outlined in this paper can be used to determine if merging is indeed the culprit. If these star-forming galaxies at high $`z`$ are triggered by interactions/mergers then their positions in the CAD would be similar to those of the starbursts presented in this paper. If the interactions/mergers are minor, then their locations would fall near the normal galaxy fiducial sequence, similar to NGC 3310. In principle, the high resolution of the Hubble Space Telescope allows detailed morphological studies of distant galaxies. For example, asymmetry has previously been used in conjunction with the concentration of light for galaxies in the Hubble Deep Field (Abraham et al. 1996) and other WFPC-2 images (Jangren et al. 1999). The method used to compute the asymmetries in Abraham et al.’s study differs in several important respects from the method used here and by Jangren et al. In particular, Conselice et al. (2000b) demonstrated that high angular resolution ($`<`$ 0.1 arcsec FWHM) is critical for the study of distant galaxy asymmetry. While deep NICMOS images of the Hubble Deep Field allow an unprecedented opportunity to study the rest frame optical morphology of high-$`z`$ galaxies, adaptive optics on large, ground-based telescopes or the Next Generation Space Telescope may ultimately prove critical for the study of asymmetries in distant galaxies.

## 6 Conclusions

In this letter we have demonstrated a morphological method of deciphering the triggering mechanisms of starburst galaxies through the use of a color-asymmetry diagram. Based on a sample of nearby starbursts, we have demonstrated that those starbursts generally regarded as triggered by interactions or mergers are located in a special region of the CAD characterized by large optical asymmetry at a given color. In contrast, the two starbursts in our sample (NGC 3310 and NGC 7678) that are probably not triggered by a major interaction/merger fall nearly on the fiducial color-asymmetry sequence of normal galaxies. While our sample is small, we suggest that the color-asymmetry diagram can be used to separate starbursts triggered by mergers/interactions from those triggered by other causes. We confirm this interpretation by comparing the degree of deviation from the fiducial color-asymmetry sequence to HI line-width ratios, which serve as a dynamical indicator of strong gravitational interactions. This physical-morphological method is quantitative and appears to be superior to analyses of outliers in, e.g., two-color plots such as $`U-V`$ and $`B-V`$. This physical-morphological method is also well-suited for studying the nature and evolution of distant galaxies, where it is difficult to gather information beyond images, magnitudes, and redshifts.

This work was funded by NASA contract WAS7-1260 to JPL; AR7539, AR7518, GO7875, and GO7339 from STScI, which is operated by AURA, Inc. under NASA contract NAS5-26555; NASA LTSA contract NAG5-6032; and NSF contract AST-9970780.
JSG and CJC thank the WFPC-2 team.
no-problem/0001/nucl-th0001019.html
# COMMENT on Yuan et al.

Fan Wang, Center for Theoretical Physics and Department of Physics, Nanjing University, Nanjing, 210008, P. R. China

T. Goldman, Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA

LA-UR-00-51, nucl-th/0001019

It has come to our attention that the work of Yuan et al.$`^{\text{[1]}}`$ is being represented as definitive regarding the existence of a particular nonstrange, isoscalar, spin three dibaryon, $`d^{}`$, which we$`^{\text{[2]}}`$, among others$`^{\text{[3, 4, 5, 6, 8]}}`$, have proposed exists. We therefore wish to comment on this paper and its relation to ours and those of others. In our work$`^{\text{[2]}}`$, the $`d^{}`$ dibaryon is constructed in terms of its constituent quark wavefunctions, which include significant distortion from those found in isolated baryons. It should be noted that these wavefunctions do not collapse into a simple, spherical system as proposed, for example, by Jaffe$`^{\text{[9]}}`$ for the $`H`$ dibaryon. Our techniques have allowed us to demonstrate that this picture adequately reproduces known low energy baryon-baryon scattering amplitudes with only one or two fitting parameters. At the other extreme are theories of baryon interactions without any quark substructure whatsoever. These also, in terms of effective potentials or meson exchanges, adequately reproduce known low energy baryon-baryon scattering amplitudes, albeit with a multitude of fitting parameters. Such approaches tend not to predict deeply bound dibaryons, with the exception of Ref.. Rather, the states examined are generally found to have a character not very dissimilar from the deuteron, with small binding energies predicted$`^{\text{[10]}}`$, if any. Approaches such as that of Yuan et al.$`^{\text{[1]}}`$ and similarly those of Thomas et al.$`^{\text{[5]}}`$ or Wilets et al.$`^{\text{[7]}}`$ take an intermediate view, including quark substructure of distinct baryons, but variously restricting interactions between quarks to exchanges of mesons or other collective fields, exchanges of gluons, or both. Some change of internal baryon structure is allowed in these models. Interestingly, the approaches that include or are restricted to only meson exchanges may be characterized as generally predicting dibaryon states with binding energies intermediate between the two approaches referred to above. They can also describe low energy baryon-baryon scattering amplitudes or nuclei well, albeit again with a significant number of parameters which must be fit to data. We wish to suggest that the proper scientific response to this range of models and results is not to make a theoretical judgment about which is more likely to be correct. Rather, we feel that it behooves us all to acknowledge that our understanding is limited and to recognize two issues: One is the question of the range of scales over which nucleons and mesons may be viewed as having rigid internal structures uninfluenced by their surroundings. The second is the question of whether or not quark propagation, at least at low energies, can be coherent over ranges larger than about one fermi, or conversely, whether their propagation must always be re-expressed in terms of composite, colorless, degrees of freedom. Yuan et al. also criticize the manner in which we use a nonrelativistic form of the confinement potential.
However, as we have explained in Ref., our nonrelativistic model Hamiltonian is an extended, effective matrix element approach rather than simply a potential model. It is sufficient to define a quantum mechanical model if all of the matrix elements of the Hamiltonian in the model Hilbert space have been fixed. Their concern regarding the nonorthogonality of the left and right centered orbits in our approach is misplaced since the fixing of the matrix elements by our model assumption is clearly defined. As has been emphasized before, our understanding of confinement is limited so it is perhaps unwise to narrowly restrict model approaches to any description of confinement. Our view is that it is indeed fortunate that this range of models with differing physics assumptions produces a range of predicted masses for the $`d^{}`$. This allows the experimental search for such a state, and the mass determination if it is observed, to distinguish among pictures which a priori have similar strengths for their claims of accuracy. This makes experiments, such as the proton induced excitation discussed by Wong$`^{\text{[8]}}`$ and possible experiments involving electron scattering$`^{\text{[12]}}`$, extremely important to carry out. Whatever the result of these experiments, they will have much to say about how we should view the realm of low energy strong interactions in terms of quarks. This research is supported in part by the National Science Foundation of China and in part by the Department of Energy under contract W-7405-ENG-36.
no-problem/0001/cond-mat0001056.html
## I Introduction

Computer simulations have played an important role in understanding tribological processes. They allow controlled numerical “experiments” where the geometry, sliding conditions and interactions between atoms can be varied at will to explore their effect on friction, lubrication, and wear. Unlike laboratory experiments, computer simulations enable scientists to follow and analyze the full dynamics of all atoms. Moreover, theorists have no other general approach to analyze processes like friction and wear. There is no known principle like minimization of free energy that determines the steady state of non-equilibrium systems. Even if there were, simulations would be needed to address the complex systems of interest, just as in many equilibrium problems.

Tremendous advances in computing hardware and methodology have dramatically increased the ability of theorists to simulate tribological processes. This has led to an explosion in the number of computational studies over the last decade, and allowed increasingly sophisticated modeling of sliding contacts. Although it is not yet possible to treat all the length scales and time scales that enter the friction coefficient of engineering materials, computer simulations have revealed a great deal of information about the microscopic origins of static and kinetic friction, the behavior of boundary lubricants, and the interplay between molecular geometry and tribological properties. These results provide valuable input to more traditional macroscopic calculations. Given the rapid pace of developments, simulations can be expected to play an expanding role in tribology.

In the following chapter we present an overview of the major results from the growing simulation literature. The emphasis is on providing a coherent picture of the field, rather than a historical review. We also outline opportunities for improved simulations, and highlight unanswered questions. We begin by presenting a brief overview of simulation techniques and focus on special features of simulations for tribological processes. For example, it is well known that the results of tribological experiments can be strongly influenced by the mechanical properties of the entire system that produces sliding. In much the same way, the results from simulations depend on how relative motion of the surfaces is imposed, and how heat generated by sliding is removed. The different techniques that are used are described, so that their influence on results can be understood in later sections.

The complexities of realistic three-dimensional systems can make it difficult to analyze the molecular mechanisms that underlie friction. The third section focuses on dry, wearless friction in less complex systems. The discussion begins with simple one-dimensional models of friction between crystalline surfaces. These models illustrate general results for the origin and trends of static and kinetic friction, such as the importance of metastability and the effect of commensurability. Then two-dimensional studies are described, with an emphasis on the connection to atomic force microscope experiments and detailed measurements of the friction on adsorbed monolayers. In the fourth section, simulations of the dry sliding of crystalline surfaces are addressed. Studies of metal/metal interfaces, surfactant coated surfaces, and diamond interfaces with various terminations are described. The results can be understood from the simple pictures of the previous section.
However, the extra complexity of the interactions in these systems leads to a richer variety of processes. Simple examples of wear between metal surfaces are also discussed. The fifth section describes how the behavior of lubricated systems begins to deviate from bulk hydrodynamics as the thickness of the lubricant decreases to molecular scales. Deviations from the usual no-slip boundary condition are found in relatively thick films. These are described, and correlated to structure induced in the lubricant by the adjoining walls. As the film thickness decreases, the effective viscosity increases rapidly above the bulk value. Films that are only one or two molecules thick typically exhibit solid behavior. The origins of this liquid/solid transition are discussed, and the possibility that thin layers of adventitious carbon are responsible for the prevalence of static friction is explored. The section concludes with descriptions of simulations using realistic models of hydrocarbon boundary lubricants between smooth and corrugated surfaces. The sixth section describes work on the common phenomenon of stick-slip motion, and microscopic models for its origins. Atomic-scale ratcheting is contrasted with long-range slip events, and the structural changes that accompany stick-slip transitions in simulations are described. The seventh and final section describes work on friction at extreme conditions such as high shear rates or large contact pressures. Simulations of tribochemical reactions, machining, and the evolution of microstructure in sliding contacts are discussed.

## II Atomistic Computer Simulations

The simulations described in this chapter all use an approach called classical molecular dynamics (MD) that is described extensively in a number of review articles and books, including Allen and Tildesley (1987) and Frenkel and Smit (1996). The basic outline of the method is straightforward. One begins by defining the interaction potentials. These produce forces on the individual particles whose dynamics will be followed, typically atoms or molecules. Next the geometry and boundary conditions are specified, and initial coordinates and velocities are given to each particle. Then the equations of motion for the particles are integrated numerically, stepping forward in time by discrete steps of size $`\mathrm{\Delta }t`$. Quantities such as forces, velocities, work, heat flow, and correlation functions are calculated as a function of time to determine their steady-state values and dynamic fluctuations. The relation between changes in these quantities and the motion of individual molecules is also explored. When designing a simulation, care must be taken to choose interaction potentials and ensembles that capture the essential physics that is to be addressed. The potentials may be as simple as ideal spring constants for studies of general phenomena, or as complex as electronic density-functional calculations in quantitative simulations. The ensemble can also be tailored to the problem of interest. Solving Newton’s equations yields a constant energy and volume, or microcanonical, ensemble. By adding terms in the equation of motion that simulate heat baths or pistons, simulations can be done at constant temperature, pressure, lateral force, or velocity. Constant chemical potential can also be maintained by adding or removing particles using Monte Carlo methods or explicit particle baths.
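As a concrete illustration of this recipe, here is a minimal sketch of the MD step sequence in Python: Lennard-Jones pair forces (the potential defined in the Model Potentials subsection below) integrated with the velocity-Verlet algorithm in reduced units. It omits everything a production code needs (cutoffs, neighbor lists, periodic boundaries, thermostats), and all function names and parameter values are illustrative.

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces and potential energy (O(N^2), no cutoff)."""
    forces = np.zeros_like(pos)
    energy = 0.0
    for i in range(len(pos) - 1):
        rij = pos[i] - pos[i + 1:]                    # vectors to all later particles
        r2 = (rij ** 2).sum(axis=1)
        sr6 = (sigma ** 2 / r2) ** 3
        energy += (4.0 * eps * (sr6 ** 2 - sr6)).sum()
        # the pair force on particle i is 24*eps*(2*sr6^2 - sr6)/r^2 times rij
        fij = (24.0 * eps * (2.0 * sr6 ** 2 - sr6) / r2)[:, None] * rij
        forces[i] += fij.sum(axis=0)
        forces[i + 1:] -= fij                          # Newton's third law
    return forces, energy

def velocity_verlet(pos, vel, mass=1.0, dt=0.005, steps=1000):
    """Integrate Newton's equations with the velocity-Verlet algorithm."""
    forces, _ = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * forces / mass               # half kick
        pos += dt * vel                               # drift
        forces, _ = lj_forces(pos)
        vel += 0.5 * dt * forces / mass               # second half kick
    return pos, vel
```

In the reduced units used here, the step `dt=0.005` corresponds to the convenient choice of $`0.005t_{\mathrm{LJ}}`$ quoted below for Lennard-Jones liquids and solids.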
The amount of computational effort typically grows linearly with both the number of particles, $`N`$, and the number of time-steps $`M`$. The prefactor increases rapidly with the complexity of the interactions, and substantial ingenuity is required to achieve linear scaling with $`N`$ for long-range interactions or density-functional approaches. Complex interactions also lead to a wide range of characteristic frequencies, such as fast bond-stretching and slow bond-bending modes. Unfortunately, the time step $`\mathrm{\Delta }t`$ must be small ($``$ 2%) compared to the period of the fastest mode. This means that many time steps are needed before one obtains information about the slow modes. The maximum feasible simulation size has increased continuously with advances in computer technology, but remains relatively limited. The product of $`N`$ times $`M`$ in the largest simulations described below is about $`10^{12}`$. A cubic region of $`10^6`$ atoms would have a side of about 50nm. Such linear dimensions allow reasonable models of an atomic force microscope tip, the boundary lubricant in a surface force apparatus, or an individual asperity contact on a rough surface. However $`10^6`$ time steps is only about 10 nanoseconds, which is much smaller than experimental measurement times. This requires intelligent choices in the problems that are attacked, and how results are extrapolated to experiment. It also limits sliding velocities to relatively high values, typically meters per second or above. A number of efforts are underway to increase the accessible time scale, but the problem remains unsolved. Current algorithms attempt to follow the deterministic equations of motion, usually with the Verlet or predictor-corrector algorithms (Allen and Tildesley, 1987). One alternative approach is to make stochastic steps. This would be a non-equilibrium generalization of the Monte Carlo approach that is commonly used in equilibrium systems. The difficulty is that there is no general principle for determining the appropriate probability distribution of steps in a non-equilibrium system. In the following we describe some of the potentials that are commonly used, and the situations where they are appropriate. The final two subsections describe methods for maintaining constant temperature and constant load. ### A Model Potentials A wide range of potentials has been employed in studies of tribology. Many of the studies described in the next section use simple ideal springs and sine-wave potentials. The Lennard-Jones potential gives a more realistic representation of typical inter-atomic interactions, and is also commonly used in studies of general behavior. In order to model specific materials, more detail must be built into the potential. Simulations of metals frequently use the embedded atom method, while studies of hydrocarbons use potentials that include bond-stretching, bending, torsional forces and even chemical reactivity. In this section we give a brief definition of the most commonly used models. The reader may consult the original literature for more detail. The Lennard-Jones (LJ) potential is a two-body potential that is commonly used for interactions between atoms or molecules with closed electron shells. It is applied not only to the interaction between noble gases, but also to the interaction between different segments on polymers. 
In the latter case, one LJ particle may reflect a single atom on the chain (explicit atom model), a CH<sub>2</sub> segment (united atom model) or even a Kuhn segment consisting of several CH<sub>2</sub> units (coarse-grained model). United atom models (Ryckaert and Bellemans, 1978) have been shown by Paul et al. (1995) to successfully reproduce explicit atom results for polymer melts, while Tschöp et al. (1998a, 1998b) have successfully mapped chemically detailed models of polymers onto coarse-grained models and back. The 12-6 LJ potential has the form $$U(r_{ij})=4ϵ\left[\left(\frac{\sigma }{r_{ij}}\right)^{12}-\left(\frac{\sigma }{r_{ij}}\right)^6\right]$$ (1) where $`r_{ij}`$ is the distance between particles $`i`$ and $`j`$, $`ϵ`$ is the LJ interaction energy, and $`\sigma `$ is the LJ interaction radius. The exponents 12 and 6 used above are very common, but depending on the system, other values may be chosen. Many of the simulation results discussed in subsequent sections are expressed in units derived from $`ϵ`$, $`\sigma `$, and a characteristic mass of the particles. For example, the standard LJ time unit is defined as $`t_{\mathrm{LJ}}=\sqrt{m\sigma ^2/ϵ}`$, and would typically correspond to a few picoseconds. A convenient time step is $`\mathrm{\Delta }t=0.005t_{\mathrm{LJ}}`$ for a LJ liquid or solid at external pressures and temperatures that are not too large.

Most realistic atomic potentials can not be expressed as two-body potentials. For example, bond angle potentials in a polymer are effectively three-body interactions and torsional potentials correspond to four-body interactions. Classical models of these interactions (Flory, 1988; Binder, 1995) assume that a polymer is chemically inert and interactions between different molecules are modeled by two-body potentials. In the most sophisticated models, bond-stretching, bending and torsional energies depend on the position of neighboring molecules and bonds are allowed to rearrange (Brenner, 1990). Such generalized model potentials are needed to model friction-induced chemical interactions.

For the interaction between metals, a different approach has proven fruitful. The embedded atom method (EAM), introduced by Daw and Baskes (1984), includes a contribution in the potential energy associated with the cost of “embedding” an atom in the local electron density $`\rho _i`$ produced by surrounding atoms. The total potential energy $`U`$ is approximated by $$U=\sum _i\tilde{F}_i(\rho _i)+\sum _i\sum _{j<i}\varphi _{ij}(r_{ij}).$$ (2) where $`\tilde{F}_i`$ is the embedding energy, whose functional form depends on the particular metal. The pair potential $`\varphi _{ij}(r_{ij})`$ is a doubly-screened short-range potential reflecting core-core repulsion. The computational cost of the EAM is not substantially greater than pair potential calculations because the density $`\rho _i`$ is approximated by a sum of independent atomic densities. When compared to simple two-body potentials such as Lennard-Jones or Morse potentials, the EAM has been particularly successful in reproducing experimental vacancy formation energies and surface energies, even though the potential parameters were only adjusted to reproduce bulk properties. This feature makes the EAM an important tool in tribological applications, where surfaces and defects play a major role.

### B Maintaining Constant Temperature

An important issue for tribological simulations is temperature regulation.
The work done when two walls slide past each other is ultimately converted into random thermal motion. The temperature of the system would increase indefinitely if there were no way for this heat to flow out of the system. In an experiment, heat flows away from the sliding interface into the surrounding solid. In simulations, the effect of the surrounding solid must be mimicked by coupling the particles to a heat bath.

Techniques for maintaining constant temperature $`T`$ in equilibrium systems are well-developed. Equipartition guarantees that the average kinetic energy of each particle along each Cartesian coordinate is $`k_BT/2`$ where $`k_B`$ is Boltzmann’s constant. (This assumes that $`T`$ is above the Debye temperature so that quantum statistics are not important; the applicability of classical MD decreases at lower $`T`$.) To thermostat the system, the equations of motion are modified so that the average kinetic energy stays at this equilibrium value. One class of approaches removes or adds kinetic energy to the system by multiplying the velocities of all particles by the same global factor. In the simplest version, velocity rescaling, the factor is chosen to keep the kinetic energy exactly constant at each time step. However, in a true constant temperature ensemble there would be fluctuations in the kinetic energy. Improvements, such as the Berendsen and Nosé-Hoover methods (Nosé, 1991), add equations of motion that gradually scale the velocities to maintain the correct average kinetic energy over a longer time scale. Another approach is to couple each atom to its own local thermostat (Schneider and Stoll, 1978; Grest and Kremer, 1986). The exchange of energy with the outside world is modeled by a Langevin equation that includes a damping coefficient $`\gamma `$ and a random force $`\vec{f}_i(t)`$ on each atom $`i`$. The equations of motion for the $`\alpha `$ component of the position $`x_{i\alpha }`$ become: $$m_i\frac{d^2x_{i\alpha }}{dt^2}=-\frac{\partial }{\partial x_{i\alpha }}U-m_i\gamma \frac{dx_{i\alpha }}{dt}+f_{i\alpha }(t),$$ (3) where $`U`$ is the total potential energy and $`m_i`$ is the mass of the atom. To produce the appropriate temperature, the forces must be completely random, have zero mean, and have a second moment given by $$\langle \delta f_{i\alpha }(t)^2\rangle =2k_\mathrm{B}Tm_i\gamma /\mathrm{\Delta }t.$$ (4) The damping coefficient $`\gamma `$ must be large enough that energy can be removed from the atoms without substantial temperature increases. However, it should be small enough that the trajectories of individual particles are not perturbed too strongly.

The first issue in non-equilibrium simulations is what temperature means. Near equilibrium, hydrodynamic theories define a local temperature in terms of the equilibrium equipartition formula and the kinetic energy relative to the local rest frame (Sarman et al., 1998). In $`d`$ dimensions, the definition is $$k_BT=\frac{1}{dN}\sum _im_i\left[\frac{d\vec{x}_i}{dt}-\vec{v}(\vec{x})\right]^2,$$ (5) where the sum is over all $`N`$ particles and $`\vec{v}(\vec{x})`$ is the mean velocity in a region around $`\vec{x}`$. As long as the change in mean velocity is sufficiently slow, $`\vec{v}(\vec{x})`$ is well-defined, and this definition of temperature is on solid theoretical ground. When the mean velocity difference between neighboring molecules becomes comparable to the random thermal velocities, temperature is not well-defined.
An important consequence is that different strategies for defining and controlling temperature give very different structural order and friction forces (Evans and Morriss, 1986; Loose and Ciccotti, 1992; Stevens and Robbins, 1993). In addition, the distribution of velocities may become non-Gaussian, and different directions $`\alpha `$ may have different effective temperatures. Care should be taken in drawing conclusions from simulations in this extreme regime. Fortunately, the above condition typically implies that the velocities of neighboring atoms differ by something approaching 10% of the speed of sound. This is generally higher than any experimental velocity, and would certainly lead to heat buildup and melting at the interface. In order to mimic experiments, the thermostat is often applied only to those atoms that are at the outer boundary of the simulation cell. This models the flow of heat into surrounding material that is not included explicitly in the simulation. The resulting temperature profile is peaked at the shearing interface (e.g. Bowden and Tabor, 1986; Khare et al., 1996). In some cases the temperature rise may lead to undesirable changes in the structure and dynamics even at the lowest velocity that can be treated in the available simulation time. In this case, a weak thermostat applied throughout the system may maintain the correct temperature and yield the dynamics that would be observed in longer simulations at lower velocities. The safest course is to couple the thermostat only to those velocity components that are perpendicular to the mean flow. This issue is discussed further in Sec. III E. There may be a marginal advantage to local Langevin methods in non-equilibrium simulations because they remove heat only from atoms that are too hot. Global methods like Nosé-Hoover remove heat everywhere. This can leave high temperatures in the region where heat is generated, while other regions are at an artificially low temperature. ### C Imposing Load and Shear The magnitude of the friction that is measured in an experiment or simulation may be strongly influenced by the way in which the normal load and tangential motion are imposed (Rabinowicz, 1965). Experiments almost always impose a constant normal load. The mechanical system applying shear can usually be described as a spring attached to a stage moving at controlled velocity. The effective spring constant includes the compliance of all elements of the loading device, including the material on either side of the interface. Very compliant springs apply a nearly constant force, while very stiff springs apply a nearly constant velocity. In simulations, it is easiest to treat the boundary of the system as rigid, and to translate atoms in this region at a constant height and tangential velocity. However this does not allow atoms to move around (Sec. III D) or up and over atoms on the opposing surface. Even the atomic-scale roughness of a crystalline surface can lead to order of magnitude variations in normal and tangential force with lateral position when sliding is imposed in this way (Harrison et al., 1992b, 1993; Robbins and Baljon, 2000). The difference between constant separation and pressure simulations of thin films can be arbitrarily large, since they produce different power law relations between viscosity and sliding velocity (Sec. V B). One way of minimizing the difference between constant separation and pressure ensembles is to include a large portion of the elastic solids that bound the interface. 
This extra compliance allows the surfaces to slide at a more uniform normal and lateral force. However, the extra atoms greatly increase the computational effort. To simulate the usual experimental ensemble more directly, one can add equations of motion that describe the position of the boundary (Thompson et al., 1990b, 1992, 1995), much as equations are added to maintain constant temperature. The boundary is given an effective mass that moves in response to the net force from interactions with mobile atoms and from the external pressure and lateral spring. The mass of the wall should be large enough that its dynamics are slower than those of individual atoms, but not too much slower, or the response time will be long compared to the simulation time. The spring constant should also be chosen to produce an appropriate response time and ensemble. ## III Wearless Friction in Low Dimensional Systems ### A Two Simple Models of Crystalline Surfaces in Direct Contact Static and kinetic friction involve different aspects of the interaction between surfaces. The existence of static friction implies that the surfaces become trapped in a local potential energy minimum. When the bottom surface is held fixed, and an external force is applied to the top surface, the system moves away from the minimum until the derivative of the potential energy balances the external force. The static friction $`F_s`$ is the maximum force the potential can resist, i.e. the maximum slope of the potential. When this force is exceeded, the system begins to slide, and kinetic friction comes in to play. The kinetic friction $`F_k(v)`$ is the force required to maintain sliding at a given velocity $`v`$. The rate at which work is done on the system is $`vF_k(v)`$ and this work must be dissipated as heat that flows away from the interface. Thus simulations must focus on the nature of potential energy minima to probe the origins of static friction, and must also address dissipation mechanisms to understand kinetic friction. Two simple ball and spring models are useful in illustrating the origins of static and kinetic friction, and in understanding the results of detailed simulations. Both consider two clean, flat, crystalline surfaces in direct contact (Fig. 1a). The bottom solid is assumed to be rigid, so that it can be treated as a fixed periodic substrate potential acting on the top solid. In order to make the problem analytically tractable, only the bottom layer of atoms from the top solid is retained, and the interactions within the top wall are simplified. In the Tomlinson model (Fig. 1b), the atoms are coupled to the center of mass of the top wall by springs of stiffness $`k`$, and coupling between neighboring atoms is ignored (Tomlinson, 1929; McClelland and Cohen, 1990). In the Frenkel-Kontorova model (Fig. 1c), the atoms are coupled to nearest-neighbors by springs, and the coupling to the atoms above is ignored (Frenkel and Kontorova, 1938). Due to their simplicity, these models arise in a number of different problems and a great deal is known about their properties. McClelland (1989) and McClelland and Glosli (1992) have provided two early discussions of their relevance to friction. Bak (1982) has reviewed the Frenkel-Kontorova model and the physical systems that it models in different dimensions. ### B Metastability and Static Friction in One Dimension Many features of the Tomlinson and Frenkel-Kontorova models can be understood from their one-dimensional versions. 
One important parameter is the ratio $`\eta `$ between the lattice constants of the two surfaces, $`\eta \equiv b/a`$. The other is the strength of the periodic potential from the substrate relative to the spring stiffness $`k`$ that represents interactions within the top solid. If the substrate potential has a single Fourier component, then the periodic force can be written as $$f(x)=f_1\mathrm{sin}\left(\frac{2\pi }{a}x\right).$$ (6) The relative strength of the potential and springs can be characterized by the dimensionless constant $`\lambda \equiv 2\pi f_1/ka`$. In the limit of infinitely strong springs ($`\lambda \rightarrow 0`$), both models represent rigid solids. The atoms of the top wall are confined to lattice sites $`x_l^0=x_{\mathrm{CM}}+lb`$, where the integer $`l`$ labels successive atoms, and $`x_{\mathrm{CM}}`$ represents a rigid translation of the entire wall. The total lateral or friction force is given by summing Eq. 6 $$F=f_1\sum _{l=1}^{N}\mathrm{sin}\left[\frac{2\pi }{a}(lb+x_{\mathrm{CM}})\right],$$ (7) where $`N`$ is the number of atoms in the top wall. In the special case of equal lattice constants ($`\eta =b/a=1`$), the forces on all atoms add in phase, and $`F=Nf_1\mathrm{sin}(2\pi x_{\mathrm{CM}}/a)`$. The maximum of this restraining force gives the static friction $`F_s=Nf_1`$. Unless there is a special reason for $`b`$ and $`a`$ to be related, $`\eta `$ is most likely to be an irrational number. Such surfaces are called incommensurate, while surfaces with a rational value of $`\eta `$ are commensurate. When $`\eta `$ is irrational, atoms on the top surface sample all phases of the periodic force with equal probability and the net force (Eq. 7) vanishes exactly. When $`\eta `$ is a rational number, it can be expressed as $`p/q`$ where $`p`$ and $`q`$ are integers with no common factors. In this case, atoms only sample $`q`$ different phases. The net force from Eq. 7 still vanishes because the force is a pure sine wave and the phases are equally spaced. However, the static friction is finite if the potential has higher harmonics. A Fourier component with wavevector $`q\cdot 2\pi /a`$ and magnitude $`f_q`$ contributes $`Nf_q`$ to $`F_s`$. Studies of surface potentials (Bruch et al., 1997) show that $`f_q`$ drops exponentially with increasing $`q`$ and thus imply that $`F_s`$ will only be significant for small $`q`$.
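The cancellation argument behind Eq. 7 is easy to check numerically. The short sketch below, with illustrative parameter choices, scans a rigid overlayer across one substrate period and reports the maximum restraining force per atom: it approaches $`f_1`$ for $`\eta =1`$ and falls toward zero for an incommensurate ratio such as the golden mean.

```python
import numpy as np

def rigid_wall_friction(eta, n_atoms=1000, a=1.0, f1=1.0, n_scan=2000):
    """Maximum restraining force per atom on a rigid overlayer (Eq. 7)."""
    l = np.arange(n_atoms)                       # indices of the atoms in the top wall
    x_cm = np.linspace(0.0, a, n_scan)           # rigid displacements spanning one period
    phases = 2.0 * np.pi / a * (l[None, :] * eta * a + x_cm[:, None])
    F = f1 * np.sin(phases).sum(axis=1)          # Eq. 7 for every displacement x_cm
    return np.abs(F).max() / n_atoms             # static friction per atom, in units of f1

golden = (1.0 + 5.0 ** 0.5) / 2.0
print(rigid_wall_friction(1.0))     # commensurate, eta = 1: prints 1.0, i.e. F_s = N f1
print(rigid_wall_friction(golden))  # incommensurate: essentially zero, and -> 0 as n_atoms grows
```

Increasing `n_atoms` drives the incommensurate value further toward zero, while a rational $`\eta =p/q`$ with the pure sine-wave potential used here also averages to zero, as stated above.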
As the springs become weaker, the top wall is more able to deform into a configuration that lowers the potential energy. The Tomlinson model is the simplest case to consider, because each atom can be treated as an independent oscillator within the upper surface. The equations of motion for the position $`x_l`$ of the $`l^{th}`$ atom can be written as $$m\ddot{x}_l=-\gamma \dot{x}_l-f_1\mathrm{sin}\left(\frac{2\pi }{a}x_l\right)-k(x_l-x_l^0)$$ (8) where $`m`$ is the atomic mass and $`x_l^0`$ is the position of the lattice site. Here $`\gamma `$ is a phenomenological damping coefficient, like that in a Langevin thermostat (Sec. II B), that allows work done on the atom to be dissipated as heat. It represents the coupling to external degrees of freedom such as lattice vibrations in the solids. In any steady-state of the system, the total friction can be determined either from the sum of the forces exerted by the springs on the top wall, or from the sum of the periodic potentials acting on the atoms (Eq. 7). If the time average of these forces differed, there would be a net force on the atoms and a steady acceleration (Thompson and Robbins, 1990a; Matsukawa and Fukuyama, 1994).

The static friction is related to the force in metastable states of the system where $`\ddot{x_l}=\dot{x_l}=0`$. This requires that spring and substrate forces cancel for each $`l`$, $$k(x_l-x_l^0)=-f_1\mathrm{sin}\left(\frac{2\pi }{a}x_l\right).$$ (9) As shown graphically in Fig. 2a, there is only one solution for weak interfacial potentials and stiff solids ($`\lambda <1`$). In this limit, the behavior is essentially the same as for infinitely rigid solids. There is static friction for $`\eta =1`$, but not for incommensurate cases. Even though incommensurate potentials displace atoms from lattice sites, there are exactly as many displaced to the right as to the left, and the force sums to zero. A new type of behavior sets in when $`\lambda `$ exceeds unity. The interfacial potential is now strong enough compared to the springs that multiple metastable states are possible. These states must satisfy both Eq. 9 and the condition that the second derivative of the potential energy is positive: $`1+\lambda \mathrm{cos}\left(2\pi x_l/a\right)>0`$. The number of metastable solutions increases as $`\lambda `$ increases. As illustrated in Fig. 2b, once an atom is in a given metastable minimum it is trapped there until the center of mass moves far enough away that the second derivative of the potential vanishes and the minimum becomes unstable. The atom then pops forward very rapidly to the nearest remaining metastable state. This metastability makes it possible to have a finite static friction even when the surfaces are incommensurate. If the wall is pulled to the right by an external force, the atoms will only sample the metastable states corresponding to the thick solid portion of the substrate potential in Fig. 2b. Atoms bypass other portions as they hop to the adjacent metastable state. The average over the solid portion of the curve is clearly negative and thus resists the external force. As $`\lambda `$ increases, the dashed lines in Fig. 2b become flatter and the solid portion of the curve becomes confined to more and more negative forces. This increases the static friction which approaches $`Nf_1`$ in the limit $`\lambda \rightarrow \infty `$ (Fisher, 1985).

A similar analysis can be done for the one-dimensional Frenkel-Kontorova model (Frank et al., 1949; Bak, 1982; Aubry, 1979, 1983). The main difference is that the static friction and ground state depend strongly on $`\eta `$. For any given irrational value of $`\eta `$ there is a threshold potential strength $`\lambda _c`$. For weaker potentials, the static friction vanishes. For stronger potentials, metastability produces a finite static friction. The transition to the onset of static friction was termed a breaking of analyticity by Aubry (1979) and is often called the Aubry transition. The metastable states for $`\lambda >\lambda _c`$ take the form of locally commensurate regions that are separated by domain walls where the two crystals are out of phase. Within the locally commensurate regions the ratio of the periods is a rational number $`p/q`$ that is close to $`\eta `$. The range of $`\eta `$ where locking occurs grows with increasing potential strength ($`\lambda `$) until it spans all values. At this point there is an infinite number of different metastable ground states that form a fascinating “Devil’s staircase” as $`\eta `$ varies (Aubry, 1979, 1983; Bak, 1982).
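The onset of multiple metastable states in the Tomlinson model, Eq. 9 above, can also be seen directly with a few lines of code. This sketch counts force-balance solutions with positive curvature for a single oscillator; the dense grid search, the function name, and the parameter values are illustrative simplifications rather than the method used in the literature cited here.

```python
import numpy as np

def count_metastable_states(lam, a=1.0, k=1.0, x_site=0.37, n_grid=200001):
    """Count solutions of Eq. (9) that are also local minima, for one atom.

    lam = 2*pi*f1/(k*a); x_site is the spring anchor (lattice site) position.
    A dense grid search stands in for a proper root finder to keep the sketch short.
    """
    f1 = lam * k * a / (2.0 * np.pi)
    x = np.linspace(x_site - 5 * a, x_site + 5 * a, n_grid)
    # residual of the force balance k(x - x0) = -f1 sin(2 pi x / a)
    g = k * (x - x_site) + f1 * np.sin(2.0 * np.pi * x / a)
    stable = 1.0 + lam * np.cos(2.0 * np.pi * x / a) > 0.0   # positive curvature condition
    roots = (g[:-1] * g[1:] < 0.0) & stable[:-1]             # sign change brackets a root
    return int(roots.sum())

for lam in (0.5, 2.0, 5.0, 10.0):
    print(lam, count_metastable_states(lam))   # one state for lam < 1, more as lam grows
```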
Weiss and Elmer (1996) have performed a careful study of the 1D Frenkel-Kontorova-Tomlinson model where both types of springs are included. Their work illustrates how one can have a finite static friction at all rational $`\eta `$ and an Aubry transition at all irrational $`\eta `$. They showed that the magnitude of the static friction is a monotonically increasing function of $`\lambda `$ and decreases with decreasing commensurability. If $`\eta =p/q`$ then the static friction rises with corrugation only as $`\lambda ^q`$. Successive approximations to an irrational number involve progressively larger values of $`q`$. Since $`\lambda _c<1`$, the value of $`F_s`$ at $`\lambda <\lambda _c`$ drops closer and closer to zero as the irrational number is approached. At the same time, the value of $`F_s`$ rises more and more rapidly with $`\lambda `$ above $`\lambda _c`$. In the limit $`q\rightarrow \infty `$ one has the discontinuous rise from zero to finite values of $`F_s`$ described by Aubry. Weiss and Elmer also considered the connection between the onsets of static friction, of metastability, and of a finite kinetic friction as $`v\rightarrow 0`$ that is discussed in the next section. Their numerical results showed that all these transitions coincide. Work by Kawaguchi and Matsukawa (1998) shows that varying the strengths of competing elastic interactions can lead to even more complex friction transitions. They considered a model proposed by Matsukawa and Fukuyama (1994) that is similar to the one-dimensional Frenkel-Kontorova-Tomlinson model. For some parameters the static friction oscillated several times between zero and finite values as the interaction between surfaces increased. Clearly the transitions from finite to vanishing static friction continue to pose a rich mathematical challenge.

### C Metastability and Kinetic Friction

The metastability that produces static friction in these simple models is also important in determining the kinetic friction. The kinetic friction between two solids is usually fairly constant at low center of mass velocity differences $`v_{\mathrm{CM}}`$. This means that the same amount of work must be done to advance by a lattice constant no matter how slowly the system moves. If the motion were adiabatic, this irreversible work would vanish as the displacement was carried out more and more slowly. Since it does not vanish, some atoms must remain very far from equilibrium even in the limit $`v_{\mathrm{CM}}\rightarrow 0`$. The origin of this non-adiabaticity is most easily illustrated with the Tomlinson model. In the low velocity limit, atoms stay near the metastable solutions shown in Fig. 2. For $`\lambda <1`$ there is a unique metastable solution that evolves continuously. The atoms can move adiabatically, and the kinetic friction vanishes as $`v_{\mathrm{CM}}\rightarrow 0`$. For $`\lambda >1`$ each atom is trapped in a metastable state. As the wall moves, this state becomes unstable and the atom pops rapidly to the next metastable state. During this motion the atom experiences very large forces and accelerates to a peak velocity $`v_\mathrm{p}`$ that is independent of $`v_{\mathrm{CM}}`$. The value of $`v_\mathrm{p}`$ is typically comparable to the sound and thermal velocities in the solid and thus can not be treated as a small perturbation from equilibrium. Periodic pops of this type are seen in many of the realistic simulations described in Sec. IV. They are frequently referred to as atomic-scale stick-slip motion (Secs. IV B and VI), because of the oscillation between slow and rapid motion (Sec. VI). The dynamic equation of motion for the Tomlinson model (Eq. 8) has been solved in several different contexts.
It is mathematically identical to simple models for Josephson junctions (McCumber, 1968), to the single-particle model of charge-density wave depinning (Grüner et al., 1981), and to the equations of motion for a contact line on a periodic surface (Raphael and de Gennes, 1989; Joanny and Robbins, 1990). Fig. 3 shows the time-averaged force as a function of wall velocity for several values of the interface potential strength in the overdamped limit. (Since each atom acts as an independent oscillator, these curves are independent of $`\eta `$.) When the potential is much weaker than the springs ($`\lambda <1`$), the atoms can not deviate significantly from their equilibrium positions. They go up and down over the periodic potential at constant velocity in an adiabatic manner. In the limit $`v_{\mathrm{CM}}\rightarrow 0`$ the periodic potential is sampled uniformly and the kinetic friction vanishes, just as the static friction did for incommensurate walls. At finite velocity the kinetic friction is just due to the drag force on each atom and rises linearly with velocity. The same result holds for all spring constants in the Frenkel-Kontorova model with equal lattice constants ($`\eta =1`$).

As the potential becomes stronger, the periodic force begins to contribute to the kinetic friction of the Tomlinson model. There is a transition at $`\lambda =1`$, and at larger $`\lambda `$ the kinetic friction remains finite in the limit of zero velocity. The limiting $`F_k(v=0)`$ is exactly equal to the static friction for incommensurate walls. The reason is that as $`v_{\mathrm{CM}}\rightarrow 0`$ atoms spend almost all of their time in metastable states. During slow sliding, each atom samples all the metastable states that contribute to the static friction and with exactly the same weighting. The solution for commensurate walls has two different features. The first is that the static friction is higher than $`F_k(0)`$. This difference is greatest for the case $`\lambda <1`$ where the kinetic friction vanishes, while the static friction is finite. The second difference is that the force/velocity curve depends on whether the simulation is done at constant wall velocity (Fig. 3) or constant force. The constant force solution is independent of $`\lambda `$ and equals the constant velocity solution in the limit $`\lambda \rightarrow \infty `$.

The only mechanism of dissipation in the Tomlinson model is through the phenomenological damping force, which is proportional to the velocity of the atom. The velocity is essentially zero except in the rapid pops that occur as a state becomes unstable and the atom pops to the next metastable state. In the overdamped limit, atoms pop with peak velocity $`v_\mathrm{p}\sim f_1/\gamma `$ – independent of the average velocity of the center of mass. Moreover, the time of the pop is nearly independent of $`v_{\mathrm{CM}}`$, and so the total energy dissipated per pop is independent of $`v_{\mathrm{CM}}`$. This dissipated energy is of course consistent with the limiting force determined from arguments based on the sampling of metastable states given above (Fisher, 1985; Raphael and de Gennes, 1989; Joanny and Robbins, 1990). The basic idea that kinetic friction is due to dissipation during pops that remain rapid as $`v_{\mathrm{CM}}\rightarrow 0`$ is very general, although the phenomenological damping used in the model is far from realistic. A constant dissipation during each displacement by a lattice constant immediately implies a velocity independent $`F_k`$, and vice versa.
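The force-velocity behavior just described can be reproduced by integrating the overdamped limit of Eq. 8 for a single dragged oscillator, as in the sketch below. In that limit the inertial term is dropped, so $`\gamma \dot{x}`$ balances the spring and substrate forces at every instant. The forward-Euler update, the function name, and all parameter values are illustrative choices and are not those used to generate Fig. 3.

```python
import numpy as np

def tomlinson_overdamped(v_cm, lam=3.0, a=1.0, k=1.0, gamma=1.0, dt=0.01, periods=20):
    """Time-averaged spring force on the wall for one overdamped Tomlinson atom.

    lam = 2*pi*f1/(k*a) measures the substrate strength relative to the spring;
    units with k = a = gamma = 1 are illustrative choices.
    """
    f1 = lam * k * a / (2.0 * np.pi)
    x = 0.0                                   # atom position
    steps = int(periods * a / (v_cm * dt))    # slide the support over many lattice periods
    f_sum, n_avg = 0.0, 0
    for n in range(steps):
        x0 = v_cm * n * dt                    # lattice site dragged at constant velocity
        f_spring = -k * (x - x0)              # spring force on the atom (cf. Eq. 8)
        f_sub = -f1 * np.sin(2.0 * np.pi * x / a)
        x += dt * (f_spring + f_sub) / gamma  # overdamped: gamma * xdot = f_spring + f_sub
        if n >= steps // 2:                   # discard the initial transient
            f_sum += f_spring                 # force transmitted to the drive by the spring
            n_avg += 1
    return f_sum / n_avg

for v in (0.01, 0.03, 0.1, 0.3):
    print(v, tomlinson_overdamped(v))         # F_k stays finite as v -> 0 because lam > 1
```

Rerunning with `lam` below 1 makes the average force fall toward zero linearly in `v_cm`, while for `lam` above 1 it saturates at a finite value, in line with the discussion above.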
### D Tomlinson Model in Two-Dimensions: Atomic Force Microscopy

Gyalog et al. (1995) have studied a generalization of the Tomlinson model where the atoms can move in two dimensions over a substrate potential. Their goal was to model the motion of an atomic-force microscope (AFM) tip over a surface. In this case the spring constant $`k`$ reflects the elasticity of the cantilever, the tip, and the substrate. It will in general be different along the scanning direction than along the perpendicular direction. The extra degree of freedom provided by the second dimension means that the tip will not follow the nominal scanning direction, but will be deflected to areas of lower potential energy. This distorts the image and also lowers the measured friction force. The magnitude of both effects decreases with increasing stiffness. As in the one-dimensional model there is a transition from smooth sliding to rapid jumps with decreasing spring stiffness. However, the transition point now depends on sliding direction and on the position of the scan line along the direction normal to the nominal scan direction. Rapid jumps tend to occur first near the peaks of the potential, and extend over greater distances as the springs soften. The curves defining the unstable points can have very complex, anisotropic shapes. Hölscher et al. (1997) have used a similar model to simulate scans of MoS<sub>2</sub>. Their model also includes kinetic and damping terms in order to treat the velocity dependence of the AFM image. They find marked anisotropy in the friction as a function of sliding direction, and also discuss deviations from the nominal scan direction as a function of the position and direction of the scan line. Rajasekaran et al. (1997) considered a simple elastic solid of varying stiffness that interacted with a single atom at the end of an AFM tip with Lennard-Jones potentials. Unlike the other calculations mentioned above, this paper explicitly includes variations in the height of the atom and maintains a constant normal load. The friction rises linearly with load in all cases, but the slope depends strongly on sliding direction, scan position and the elasticity of the solid. The above papers and related work show the complexities that can enter from treating detailed surface potentials and the full elasticity of the materials and machines that drive sliding. All of these factors can influence the measured friction and must be included in a detailed model of any experiment. However, the basic concepts derived from 1D models carry forward. In particular, 1) static friction results when there is sufficient compliance to produce multiple metastable states, and 2) a finite $`F_k(0)`$ arises when energy is dissipated during rapid pops between metastable states. All of the above work considers a single atom or tip in a two-dimensional potential. However, the results can be superimposed to treat a pair of two-dimensional surfaces in contact, because the oscillators are independent in the Tomlinson model. One example of such a system is the work by Glosli and McClelland (1993) that is described in Sec. IV B. Generalizing the Frenkel-Kontorova model to two dimensions is more difficult.

### E Frenkel-Kontorova Model in Two Dimensions: Adsorbed Monolayers

The two-dimensional Frenkel-Kontorova model provides a simple model of a crystalline layer of adsorbed atoms (Bak, 1982).
However, the behavior of adsorbed layers can be much richer because atoms are not connected by fixed springs, and thus can rearrange to form new structures in response to changes in equilibrium conditions (e.g. temperature) or due to sliding. Overviews of the factors that determine the wide variety of equilibrium structures, including fluid, incommensurate and commensurate crystals, can be found in Bruch et al. (1997) and Taub et al. (1991). As in one dimension, both the structure and the strength of the lateral variation or “corrugation” in the substrate potential are important in determining the friction. Variations in potential normal to the substrate are relatively unimportant (Persson and Nitzan, 1996; Smith et al., 1996). Most simulations of the friction between adsorbed layers and substrates have been motivated by the pioneering Quartz Crystal Microbalance (QCM) experiments of Krim et al. (1988, 1990, 1991). The quartz is coated with metal electrodes that are used to excite resonant shear oscillations in the crystal. When atoms adsorb onto the electrodes, the increased mass causes a decrease in the resonant frequency. Sliding of the substrate under the adsorbate leads to friction that broadens the resonance. By measuring both quantities, the friction per atom can be calculated. The extreme sharpness of the intrinsic resonance in the crystal makes this a very sensitive technique. In most experiments the electrodes were the noble metals Ag or Au. Deposition produces fcc crystallites with close-packed (111) surfaces. Scanning tunneling microscope studies show that the surfaces are perfectly flat and ordered over regions at least 100nm across. At larger scales there are grain boundaries and other defects. A variety of molecules have been physisorbed onto these surfaces, but most of the work has been on noble gases. The interactions within the noble metals are typically much stronger than the van der Waals interactions between the adsorbed molecules. Thus, to a first approximation, the substrate remains unperturbed and can be replaced by a periodic potential (Smith et al., 1996; Persson et al., 1998). However, the mobility of substrate atoms is important in allowing heat generated by the sliding adsorbate to flow into the substrate. This heat transfer into substrate lattice vibrations or phonons can be modeled by a Langevin thermostat (Eq. 3). If the surface is metallic, the Langevin damping should also include the effect of energy dissipated to the electronic degrees of freedom (Schaich and Harris, 1981; Persson, 1991; Persson and Volokitin, 1995). With the above assumptions, the equation of motion for an adsorbate atom can be written as $$m\ddot{x}_\alpha =-\gamma _\alpha \dot{x}_\alpha +F_\alpha ^{ext}-\frac{\partial U}{\partial x_\alpha }+f_\alpha (t)$$ (10) where $`m`$ is the mass of an adsorbate atom, $`\gamma _\alpha `$ is the damping rate from the Langevin thermostat in the $`\alpha `$ direction, $`f_\alpha (t)`$ is the corresponding random force, $`\stackrel{}{F}^{ext}`$ is an external force applied to the particles, and $`U`$ is the total energy from the interactions of the adsorbate atoms with the substrate and with each other. Interactions between noble gas adsorbate atoms have been studied extensively, and are reasonably well described by a Lennard-Jones potential (Bruch et al., 1997). The form of the substrate interaction is less well-known.
However, if the substrate is crystalline, its potential can be expanded as a Fourier series in the reciprocal lattice vectors $`\stackrel{}{Q}`$ of the surface layer (Bruch et al., 1997). Steele (1973) has considered Lennard-Jones interactions with substrate atoms and shown that the higher Fourier components drop off exponentially with increasing $`|\stackrel{}{Q}|`$ and height $`z`$ above the substrate. Thus most simulations have kept only the shortest wavevectors, writing: $$U_{sub}(\stackrel{}{r},z)=U_0(z)+U_1(z)\sum _l\mathrm{cos}[\stackrel{}{Q}_l\cdot \stackrel{}{r}]$$ (11) where $`\stackrel{}{r}`$ is the position within the plane of the surface, and the sum is over symmetrically equivalent $`\stackrel{}{Q}`$. For the close-packed (111) surface of fcc crystals there are 6 equivalent lattice vectors of length $`4\pi /(\sqrt{3}a)`$ where $`a`$ is the nearest neighbor spacing in the crystal. For the (100) surface there are 4 equivalent lattice vectors of length $`2\pi /a`$. Cieplak et al. (1994) and Smith et al. (1996) used Steele’s potential with an additional 4 shells of symmetrically equivalent wavevectors in their simulations. However, they found that their results were almost unchanged when only the shortest reciprocal lattice vectors were kept. Typically the Lennard-Jones $`ϵ`$, $`\sigma `$ and $`m`$ are used to define the units of energy, length and time, as described in Sec. II A. The remaining parameters in Eq. 10 are the damping rates, external force, and the substrate potential which is characterized by the strength of the adsorption potential $`U_0(z)`$ and the corrugation potential $`U_1(z)`$. The Langevin damping for the two directions in the plane of the substrate is expected to be the same and will be denoted by $`\gamma _{\parallel }`$. The damping along $`z`$, $`\gamma _{\perp }`$, may be different (Persson and Nitzan, 1996). The depth of the minimum in the adsorption potential can be determined from the energy needed to desorb an atom, and the width is related to the frequency of vibrations along $`z`$. In the cases of interest here, the adsorption energy is much larger than the Lennard-Jones interaction or the corrugation. Atoms in the first adsorbed layer sit in a narrow range of $`z`$ near the minimum $`z_0`$. If the changes in $`U_1`$ over this range are small, then the effective corrugation for the first monolayer is $`U_1^0\equiv U_1(z_0)`$. As discussed below, the calculated friction in most simulations varies rapidly with $`U_1^0`$ but is insensitive to other details in the substrate potential. The simplest case is the limit of weak corrugation and a fluid or incommensurate solid state of the adsorbed layer. As expected based on results from 1D models, such layers experience no static friction, and the kinetic friction is proportional to velocity: $`F_k=\mathrm{\Gamma }v`$ (Persson, 1993a; Cieplak et al., 1994). The constant of proportionality $`\mathrm{\Gamma }`$ gives the “slip time” $`t_s\equiv m/\mathrm{\Gamma }`$ that is reported by Krim and coworkers (1988, 1990, 1991). This slip time represents the time for the transfer of momentum between adsorbate and substrate. If atoms are set moving with an initial velocity, the velocity will decay exponentially with time constant $`t_s`$. Typical measured values are of order nanoseconds for rare gases. This is surprisingly large when compared to the picosecond time scales that characterize momentum transfer in a bulk fluid of the same rare gas. (The latter is directly related to the viscosity.)
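As a concrete illustration of Eqs. 10 and 11, the short Python sketch below builds the six shortest reciprocal lattice vectors of an fcc (111) substrate, evaluates the corrugation energy and force, and advances independent adsorbate atoms with a simple Euler-Maruyama Langevin step. All numerical values (lattice spacing, corrugation amplitude, damping, temperature, time step) are placeholders chosen for illustration, and the adsorbate-adsorbate Lennard-Jones forces used in the cited studies are omitted for brevity.

```python
import numpy as np

# Substrate corrugation, Eq. (11), for an fcc (111) surface: the six shortest
# reciprocal lattice vectors have length 4*pi/(sqrt(3)*a) and are 60 deg apart.
# All parameter values below are illustrative placeholders.
a = 1.0                        # nearest-neighbor spacing of the substrate
U1 = 0.1                       # effective corrugation amplitude U1(z0)
Q_len = 4.0 * np.pi / (np.sqrt(3.0) * a)
angles = np.arange(6) * np.pi / 3.0
Q = Q_len * np.column_stack((np.cos(angles), np.sin(angles)))   # shape (6, 2)

def corrugation_energy(r):
    """U1 * sum_l cos(Q_l . r) for in-plane positions r of shape (N, 2)."""
    return U1 * np.cos(r @ Q.T).sum(axis=1)

def corrugation_force(r):
    """-dU/dr for the corrugation term, shape (N, 2)."""
    return (U1 * np.sin(r @ Q.T)) @ Q

def langevin_step(r, v, dt=0.005, m=1.0, gamma_par=0.5, kT=0.1,
                  F_ext=np.zeros(2)):
    """One Euler-Maruyama step of Eq. (10) for independent adsorbate atoms."""
    noise = np.sqrt(2.0 * gamma_par * kT / dt) * np.random.randn(*v.shape)
    force = -gamma_par * v + F_ext + corrugation_force(r) + noise
    v = v + dt * force / m
    r = r + dt * v
    return r, v

# Example: relax a few adsorbate atoms (adsorbate-adsorbate forces omitted).
np.random.seed(0)
r = np.random.uniform(0.0, 3.0 * a, size=(10, 2))
v = np.zeros_like(r)
for _ in range(2000):
    r, v = langevin_step(r, v)
print("mean corrugation energy per atom:", corrugation_energy(r).mean())
```

The random-force amplitude follows the usual fluctuation-dissipation relation for the damping term in Eq. 10; applying a small $`\stackrel{}{F}^{ext}`$ and measuring the mean drift velocity is one of the routes to the slip time described next.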
The value of $`t_s`$ can be determined from simulations in several different ways. All give consistent results in the cases where they have been compared, and should be accurate if used with care. Persson (1993a), Persson and Nitzan (1996), and Liebsch et al. (1999) have calculated the average velocity as a function of $`\stackrel{}{F}^{ext}`$ and obtained $`\mathrm{\Gamma }`$ from the slope of this curve. Cieplak et al. (1994) and Smith et al. (1996) used this approach and also mimicked experiments by finding the response to oscillations of the substrate. They showed $`t_s`$ was constant over a wide range of frequency and amplitude. The frequency is difficult to vary in experiment, but Mak and Krim (1998) found that $`t_s`$ was independent of amplitude in both fluid and crystalline phases of Kr on Au. Tomassone et al. (1997) have used two additional techniques to determine $`t_s`$. In both cases they used no thermostat ($`\gamma _\alpha =0`$). In the first method all atoms were given an initial velocity and the exponential decay of the mean velocity was used to determine $`t_s`$. The second method made use of the fluctuation-dissipation theorem, and calculated $`t_s`$ from equilibrium velocity fluctuations. A coherent picture has emerged for the relation between $`t_s`$ and the damping and corrugation in Eqs. 10 and 11. In the limit where the corrugation vanishes, the substrate potential is translationally invariant and can not exert any friction on the adsorbate. The value of $`\mathrm{\Gamma }`$ is then just equal to $`\gamma _{\parallel }`$. In his original 2D simulations Persson (1993a) used relatively large values of $`\gamma _{\parallel }`$ and reported that $`\mathrm{\Gamma }`$ was always proportional to $`\gamma _{\parallel }`$. Later work by Persson and Nitzan (1996) showed that this proportionality only held for large $`\gamma _{\parallel }`$. Cieplak et al. (1994), Smith et al. (1996), and Tomassone et al. (1997) considered the opposite limit, $`\gamma _{\parallel }=0`$, and found a nonzero $`\mathrm{\Gamma }_{ph}`$ that reflected dissipation due to phonon excitations in the adsorbate film. Smith et al. (1996) found that including a Langevin damping along the direction of sliding produced a simple additive shift in $`\mathrm{\Gamma }`$. This relation has been confirmed in extensive simulations by Liebsch et al. (1999). All of their data can be fit to the relation $$\mathrm{\Gamma }=\gamma _{\parallel }+\mathrm{\Gamma }_{ph}=\gamma _{\parallel }+C(U_1^0)^2$$ (12) where the constant $`C`$ depends on temperature, coverage, and other factors. Cieplak et al. (1994) and Smith et al. (1996) had previously shown that the damping increased quadratically with corrugation and developed a simple perturbation theory for the prefactor $`C`$ in Eq. 12. Their approach follows that of Sneddon et al. (1982) for charge-density waves, and of Sokoloff (1990) for friction between two semi-infinite incommensurate solids. It provides the simplest illustration of how dissipation occurs in the absence of metastability, and is directly relevant to studies of flow boundary conditions discussed in Sec. V A. The basic idea is that the adsorbate monolayer acts like an elastic sheet. The atoms are attracted to regions of low corrugation potential and repelled from regions of high potential. This produces density modulations $`\rho (\stackrel{}{Q}_l)`$ in the adsorbed layer with wavevector $`\stackrel{}{Q}_l`$. When the substrate moves underneath the adsorbed layer, the density modulations attempt to follow the substrate potential.
In the process, some of the energy stored in the modulations leaks out into other phonon modes of the layer due to anharmonicity. The energy dissipated to these other modes eventually flows into the substrate as heat. The rate of energy loss can be calculated to lowest order in a perturbation theory in the strength of the corrugation if the layer is fluid or incommensurate. Equating this to the average energy dissipation rate given by the friction relation gives an expression for the phonon contribution to dissipation. The details of the calculation can be found in Smith et al. (1996). The final result is that the damping rate is proportional to the energy stored in the density modulations and to the rate of anharmonic coupling to other phonons. To lowest order in perturbation theory the energy is proportional to the square of the density modulation and thus the square of the corrugation as in Eq. 12. This quantity is experimentally accessible by measuring the static structure factor $`S(\stackrel{}{Q})`$ $$\frac{S(\stackrel{}{Q})}{N_{ad}}\equiv |\rho (\stackrel{}{Q})|^2$$ (13) where $`N_{ad}`$ is the number of adsorbed atoms. The rate of anharmonic coupling is the inverse of an effective lifetime for acoustic phonons, $`t_{\mathrm{phon}}`$, that could also be measured in scattering studies. One finds $$\mathrm{\Gamma }_{ph}/m=\frac{cS(Q)}{N_{ad}}\frac{1}{t_{\mathrm{phon}}}$$ (14) where $`c`$ is half of the number of symmetrically equivalent $`\stackrel{}{Q}_l`$. For an fcc crystal $`c=3`$ on the (111) surface and $`c=2`$ on the (100) surface. In both cases the damping is independent of the direction of sliding, in agreement with simulations by Smith et al. (1996). Smith et al. performed a quantitative test of Eq. 14 showing that values of $`S(Q)`$ and $`t_{\mathrm{phon}}`$ from equilibrium simulations were consistent with non-equilibrium determinations of $`\mathrm{\Gamma }_{ph}`$. The results of Liebsch et al. (1999) provide the first comparison of (111) and (100) surfaces. Data for the two surfaces collapse onto a single curve when divided by the values of $`c`$ given above. Liebsch et al. (1999) noted that the barrier for motion between local minima in the substrate potential is much smaller for (111) than (100) surfaces and thus it might seem surprising that $`\mathrm{\Gamma }_{ph}`$ is 50% higher on (111) surfaces. As they state, the fact that the corrugation is weak means that atoms sample all values of the potential and the energy barrier plays no special role. The major controversy between different theoretical groups concerns the magnitude of the substrate damping $`\gamma _{\parallel }`$ that should be included in fits to experimental systems. A given value of $`\mathrm{\Gamma }`$ can be obtained with an infinite number of different combinations of $`\gamma _{\parallel }`$ and corrugation (Robbins and Krim, 1998; Liebsch et al., 1999). Unfortunately both quantities are difficult to calculate and to measure. Persson (1991, 1998) has discussed the relation between electronic contributions to $`\gamma _{\parallel }`$ and changes in surface resistivity with coverage. The basic idea is that adsorbed atoms exert a drag on electrons that increases resistivity. When the adsorbed atoms slide, the same coupling produces a drag on them. The relation between the two quantities is somewhat more complicated in general because of disorder and changes in electron density due to the adsorbed layer. In fact adsorbed layers can decrease the resistivity in certain cases.
However, there is a qualitative agreement between changes in surface resistivity and the measured friction on adsorbates (Persson, 1998). Moreover, the observation of a drop in friction at the superconducting transition of lead substrates is clear evidence that electronic damping is significant in some systems (Dayo et al., 1998). There is general agreement that the electron damping is relatively insensitive to the number of adsorbed atoms per unit area or coverage. This is supported by experiments that show the variation of surface resistivity with coverage is small (Dayo and Krim, 1998). In contrast, the phonon friction varies dramatically with increasing density (Krim et al., 1988, 1990, 1991). This makes fits to measured values of friction as a function of coverage a sensitive test of the relative size of electron and phonon friction. Two groups have found that calculated values of $`\mathrm{\Gamma }`$ with $`\gamma _{\parallel }=0`$ can reproduce experiment. Calculations for Kr on Au by Cieplak et al. (1994) are compared to data from Krim et al. (1991) in Fig. 4 (a). Fig. 4(b) shows the comparison between fluctuation-dissipation simulations and experiments for Xe on Ag from Tomassone et al. (1997). In both cases there is a rapid rise in slip time with increasing coverage $`n_{ad}`$. At liquid nitrogen temperatures krypton forms islands of uncompressed fluid for $`n_{ad}<0.055`$Å<sup>-2</sup> and the slip time is relatively constant. As the coverage increases from 0.055 to 0.068 Å<sup>-2</sup>, the monolayer is compressed into an incommensurate crystal. Further increases in coverage lead to an increasingly dense crystal. The slip time increases by a factor of seven during the compression of the monolayer. For low coverages, Xe forms solid islands on Ag at T=77.4K. The slip time drops slightly with increasing coverage, presumably due to increasing island size (Tomassone et al., 1997). There is a sharp rise in slip time as the islands merge into a complete monolayer that is gradually compressed with increasing coverage. Fig. 4 shows that the magnitude of the rise in $`t_s`$ varies from one experiment to the next. The calculated rise is consistent with the larger measured increases. The simulation results of the two groups can be extended to nonzero values of $`\gamma _{\parallel }`$, using Eq. 12. This would necessarily change the ratio between the slip times of the uncompressed and compressed layers. The situation is illustrated for Kr on Au in Fig. 4(a). The dashed lines were generated by fitting the damping of the compressed monolayer with different ratios of $`\gamma _{\parallel }`$ to $`\mathrm{\Gamma }_{ph}`$. As the importance of $`\gamma _{\parallel }`$ increases, the change in slip time during compression of the monolayer decreases substantially. The comparison between theory and experiment suggests that $`\gamma _{\parallel }`$ is likely to contribute less than 1/3 of the friction in the compressed monolayer, and thus less than 5% in the uncompressed fluid. The measured increase in slip time for Xe on Ag is smaller and the variability noted in Fig. 4b makes it harder to place bounds on $`\gamma _{\parallel }`$. Tomassone et al. (1997) conclude that their results are consistent with no contribution from $`\gamma _{\parallel }`$. When they included a value of $`\gamma _{\parallel }`$ suggested by Persson and Nitzan (1996) they still found that phonon friction provided 75% of the total. Persson and Nitzan had concluded that phonons contributed only 2% of the friction in the uncompressed monolayer. Liebsch et al. (1999) have reached an intermediate conclusion.
They compared calculated results for different corrugations to a set of experimental data and chose the corrugation that matched the change in friction with coverage. They conclude that most of the damping at high coverages is due to $`\gamma _{\parallel }`$ and most of the damping at low coverages is due to phonons. However, the data they fitted had only a factor of 3 change with increasing coverage and some of the data in Fig. 4b change by a factor of more than 5. Fitting to these sets would decrease their estimate of the size of $`\gamma _{\parallel }`$. The behavior of commensurate monolayers is very different from that of the incommensurate and fluid layers described so far. As expected from studies of one dimensional models, simulations of commensurate monolayers show that they exhibit static friction. Unfortunately, no experimental results have been obtained because the friction is too high for the QCM technique to measure. In one of the earliest simulation studies, Persson (1993a) considered a two-dimensional model of Xe on the (100) surface of Ag. Depending on the corrugation strength he found fluid, 2x2 commensurate, and incommensurate phases. He studied $`1/\mathrm{\Gamma }`$ as the commensurate phase was approached by lowering temperature in the fluid phase, or decreasing coverage in the incommensurate phase. In both cases he found that $`1/\mathrm{\Gamma }`$ went to zero at the boundary of the commensurate phase, implying that there was no flow in response to small forces. When the static friction is exceeded, the dynamics of adsorbed layers can be extremely complicated. In the model just described, Persson (1993a, 1993b, 1995) found that sliding caused a transition from a commensurate crystal to a new phase. The velocity was zero until the static friction was exceeded. The system then transformed into a sliding fluid layer. Further increases in force caused a first order transition to the incommensurate structure that would be stable in the absence of any corrugation. The velocity in this phase was also what would be calculated for zero corrugation, $`F=\gamma _{\parallel }v`$ (dashed line). Decreasing the force led to a transition back to the fluid phase at essentially the same point. However, the layer did not return to the initial commensurate phase until the force dropped well below the static friction. The above hysteresis in the transition between commensurate and fluid states is qualitatively similar to that observed in the underdamped Tomlinson model or the equivalent case of a Josephson junction (McCumber, 1968). As in these cases, the magnitude of the damping affects the range of the hysteresis. The major difference is the origin of the hysteresis. In the Tomlinson model, hysteresis arises solely because the inertia of the moving system allows it to overcome potential barriers that a static system could not. This type of hysteresis would disappear at finite temperature due to thermal excitations (Braun et al., 1997a). In the adsorbed layers, the change in the physical state of the system has also changed the nature of the potential barriers. Similar sliding induced phase transitions were observed earlier in experimental and simulation studies of shear in bulk crystals (Ackerson et al., 1986; Stevens et al., 1991, 1993) and in thin films (Gee et al., 1990; Thompson and Robbins, 1990b). The relation between such transitions and stick-slip motion is discussed in Section VI. Braun and collaborators have considered the transition from static to sliding states at coverages near a commensurate value.
They studied one (Braun et al., 1997b; Paliy et al., 1997) and two (Braun et al., 1997a, 1997c) dimensional Frenkel-Kontorova models with different degrees of damping. If the corrugation is strong, the equilibrium state consists of locally commensurate regions separated by domain walls or kinks. The kinks are pinned because of the discreteness of the lattice, but this Peierls-Nabarro pinning potential is smaller than the substrate corrugation. In some cases there are different types of kinks with different pinning forces. The static friction corresponds to the force needed to initiate motion of the most weakly pinned kinks. As a kink moves through a region, atoms advance between adjacent local minima in the substrate potential. Thus the average velocity depends on both the kink velocity and the density of kinks. If the damping is strong, there may be a series of sudden transitions as the force increases. These may reflect depinning of more strongly pinned kinks, or creation of new kink-antikink pairs. At high enough velocity the kinks become unstable, and a moving kink generates a cascade of new kink-antikink pairs that lead to faster and faster motion. Eventually the layer decouples from the substrate and there are no locally commensurate regions. As in Persson (1995), the high velocity state looks like an equilibrium state with zero corrugation. The reason is that the atoms move over the substrate so quickly that they can not respond. Although this limiting behavior is interesting, it would only occur in experiments between flat crystals at velocities comparable to the speed of sound.

## IV Dry Sliding of Crystalline Surfaces

The natural case of interest to tribologists is the sliding interface between two three-dimensional objects. In this section we consider sliding of bare surfaces. We first discuss general issues related to the effect of commensurability, focusing on strongly adhering surfaces such as clean metal surfaces. Then simulations of chemically-passivated surfaces of practical interest are described. The section concludes with studies of friction, wear and indentation in single-asperity contacts.

### A Effect of Commensurability

The effect of commensurability in three dimensional systems has been studied by Hirano and Shinjo (1990, 1993). They noted that even two identical surfaces are likely to be incommensurate. As illustrated in Fig. 5, unless the crystalline surfaces are perfectly aligned, the periods will no longer match up. Thus one would expect almost all contacts between surfaces to be incommensurate. Hirano and Shinjo (1990) calculated the condition for static friction between high symmetry surfaces of fcc and bcc metals. Many of their results are consistent with the conclusions described above for lower dimensions. They first showed that the static friction between incommensurate surfaces vanishes exactly if the solids are perfectly rigid. They then allowed the bottom layer of the top surface to relax in response to the atoms above and below. The relative strength of the interaction between the two surfaces and the stiffness of the top surface plays the same role as $`\lambda `$ in the Tomlinson model. As the interaction becomes stronger, there is an Aubry transition to a finite static friction. This transition point was related to the condition for multi-stability of the least stable atom.
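The statement that perfectly rigid, incommensurate surfaces exert no static friction on one another can be illustrated with a one-dimensional toy calculation (this is only a sketch of the cancellation argument, not the three-dimensional fcc/bcc calculation of Hirano and Shinjo, and all values are illustrative): sum the lateral substrate forces on a rigid chain of $`N`$ atoms with spacing $`\eta b`$ resting on a sinusoidal potential of period $`b`$, and maximize over rigid displacements. For identical lattice constants ($`\eta =1`$) the force per atom stays at the full corrugation amplitude, while for an irrational spacing ratio it decays toward zero as $`N`$ grows.

```python
import numpy as np

def max_static_force_per_atom(eta, N, f1=1.0, b=1.0, n_phase=400):
    """Maximum over rigid displacements x0 of the total lateral substrate
    force per atom on a rigid 1D chain of N atoms with spacing a = eta*b
    resting on a sinusoidal potential of period b (toy illustration only)."""
    i = np.arange(N)
    best = 0.0
    for x0 in np.linspace(0.0, b, n_phase, endpoint=False):
        total = np.sum(f1 * np.sin(2.0 * np.pi * (x0 + i * eta * b) / b))
        best = max(best, abs(total))
    return best / N

golden = (np.sqrt(5.0) - 1.0) / 2.0    # irrational spacing ratio
for N in (10, 100, 1000):
    fc = max_static_force_per_atom(1.0, N)      # commensurate, identical spacing
    fi = max_static_force_per_atom(golden, N)   # incommensurate
    print(f"N={N:5d}   commensurate: {fc:.4f}   incommensurate: {fi:.4f}")
```

Once the walls are allowed to deform, this cancellation can be defeated if the interfacial interaction is strong enough to create multiple metastable states, which is the Aubry transition referred to above.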
To test whether realistic potentials would be strong enough to produce static friction between incommensurate surfaces, Hirano and Shinjo (1990) applied their theory to noble and transition metals. Contacts between various surface orientations of the same metal (e.g. (111) and (100) or (110) and (111)) were tested. In all cases the interactions were too weak to produce static friction. Shinjo and Hirano (1993) extended this line of work to dynamical simulations of sliding. They first considered the undamped Frenkel-Kontorova model with ideal springs between atoms. The top surface was given an initial velocity and the evolution of the system was followed. When the corrugation was small, the kinetic friction vanished, and the sliding distance increased linearly with time. This “superlubric” state disappeared above a threshold corrugation. Sliding stopped because of energy transfer from the center of mass motion into vibrations within the surface. The transition point depended on the initial velocity, since that set the amount of energy that needed to be converted into lattice vibrations. Note that the kinetic friction only vanishes in these simulations because atoms are connected by ideal harmonic springs (Smith et al., 1996). The damping due to energy transfer between internal vibrations (e.g. Eq. 14) is zero because the phonon lifetime is infinite. More realistic anharmonic potentials always lead to an exponential damping of the velocity at long times. Simulations for two dimensional surfaces were also described (Shinjo and Hirano, 1993; Hirano and Shinjo, 1993). Shinjo and Hirano noted that static friction is less likely in higher dimensions because of the ability of atoms to move around maxima in the substrate potential, as described in Sec. III D. A particularly interesting feature of their results is that the Aubry transition to finite static friction depends on the relative orientation of the surfaces (Hirano and Shinjo, 1993). Over an intermediate range of corrugations, the two surfaces slide freely in some alignments and are pinned in others. This extreme dependence on relative alignment has not been seen in experiments, but strong orientational variations in friction have been seen between mica surfaces (Hirano et al., 1991) and between a crystalline AFM tip and substrate (Hirano et al., 1997). Hirano and Shinjo’s conclusion that two flat, strongly adhering but incommensurate surfaces are likely to have zero static friction has been supported by two other studies. As described in more detail in Section IV C, S$`ø`$rensen et al. (1996) found that there was no static friction between a sufficiently large copper tip and an incommensurate copper substrate. Müser and Robbins (1999) studied a simple model system and found that interactions within the surfaces needed to be much smaller than the interactions between surfaces in order to get static friction. Müser and Robbins (1999) considered two identical but orientationally misaligned triangular surfaces similar to Fig. 5C. Interactions within each surface were represented by coupling atoms to ideal lattice sites with a spring constant $`k`$. Atoms on opposing walls interacted through a Lennard-Jones potential. The walls were pushed together by an external force ($`3`$ MPa) that was an order of magnitude less than the adhesive pressure from the LJ potential. The bottom wall was fixed, and the free diffusion of the top wall was followed at a low temperature ($`T=0.1ϵ/k_B`$).
For $`k\le 10ϵ\sigma ^{-2}`$, the walls were pinned by static friction for all system sizes investigated. For $`k\ge 25ϵ\sigma ^{-2}`$, $`F_s`$ vanished, and the top wall diffused freely in the long time limit. By comparison, Lennard-Jones interactions between atoms within the walls would give rise to $`k\approx 200ϵ\sigma ^{-2}`$. Hence, the adhesive interactions between atoms on different surfaces must be an order of magnitude stronger than the cohesive interactions within each surface in order to produce static friction between the flat, incommensurate walls that were considered. The results described above make it clear that the static friction between ideal crystals can be expected to vanish in many cases. This raises the question of why static friction is observed so universally in experiments. One possibility is that roughness or chemical disorder pins the two surfaces together. Theoretical arguments indicate that disorder will always pin low dimensional objects (e.g. Grüner et al., 1988). However, the same arguments show that the pinning between three-dimensional objects is exponentially weak (Caroli and Nozières, 1996; Persson and Tosatti, 1996; Volmer and Nattermann, 1997). This suggests that other effects like mobile atoms between the surfaces may play a key role in creating static friction. This idea is discussed below in Sec. V C.

### B Chemically Passivated Surfaces

The simulations just described aimed at revealing general aspects of friction. There is also a need to understand the tribological properties of specific materials on the nanoscale. Advances in the chemical vapor deposition of diamond hold promise for producing hard protective diamond coatings on a variety of materials. This motivated Harrison et al. (1992b) to perform molecular-dynamics simulations of atomic-scale friction between diamond surfaces. Two orientationally-aligned, hydrogen-terminated diamond (111) surfaces were placed in sliding contact. Potentials based on the work of Brenner (1990) were used. As discussed in Section VII C, these potentials have the ability to account for chemical reactions, but none occurred in the work described here. The lattices contained ten layers of carbon atoms and two layers of hydrogen atoms, and each layer consisted of 16 atoms. The three outermost layers were treated as rigid units, and were displaced relative to each other at constant sliding velocity and constant separation. The atoms of the next five layers were coupled to a thermostat. Energy dissipation mechanisms were investigated as a function of load, temperature, sliding velocity, and sliding direction. At low loads, the top wall moved almost rigidly over the potential from the bottom wall, and the average friction was nearly zero. At higher loads, colliding hydrogen atoms on opposing surfaces locked into a metastable state before suddenly slipping past each other. As in the Tomlinson model, energy was dissipated during these rapid pops. The kinetic friction was smaller for sliding along the grooves between nearest neighbor hydrogen terminations, \[1$`\overline{1}`$0\], than in the orthogonal direction, \[11$`\overline{2}`$\], because hydrogen atoms on different surfaces could remain farther apart. In a subsequent study, Harrison et al. (1993) investigated the effect of atomic scale roughness by randomly replacing one eighth of the hydrogen atoms on one surface with methyl, ethyl or n-propyl groups. Changing hydrogen to methyl had little effect on the friction at a given load.
However, a new type of pop between metastable states was observed: Methyl groups rotated past each other in a rapid turnstile motion. Further increases in the length of the substituted molecules led to much smaller $`F_k`$ at high loads. These molecules were flexible enough to be pushed into the grooves between hydrogen atoms on the opposing surface, reducing the number of collisions. Note that Harrison et al. (1992b, 1993) and Perry and Harrison (1996, 1997) might have obtained somewhat different trends using a different ensemble and/or incommensurate walls. Their case of constant separation and velocity corresponds to a system that is much stiffer than even the stiffest AFM. Because they used commensurate walls and constant velocity, the friction depended on the relative displacement of the lattices in the direction normal to the velocity. The constant separation also led to variations in normal load by up to an order of magnitude with time and lateral displacement. To account for these effects, Harrison et al. (1992b, 1993) and Perry and Harrison (1996, 1997) presented values for friction and load that were averaged over both time and lateral displacement. Studies of hydrogen-terminated silicon surfaces (Robbins and Mountain) indicate that changing to a constant load and lateral force ensemble allows atoms to avoid each other more easily. Metastability sets in at higher loads than in a constant separation ensemble, the friction is lower, and variations with sliding direction are reduced. Glosli and coworkers have investigated the sliding motion between two ordered monolayers of longer alkane chains bound to commensurate walls (McClelland and Glosli, 1992; Glosli and McClelland, 1993; Ohzono et al., 1998). Each chain contained six alkane monomers with fixed bond lengths. Next-nearest neighbors and third-nearest neighbors on the chain interacted via bond bending and torsional potentials, respectively. One end of each chain was harmonically coupled to a site on the $`6\times 6`$ triangular lattices that made up each wall. All other interactions were Lennard-Jones (LJ) potentials between CH<sub>3</sub> and CH<sub>2</sub> groups (the united atom model of Sec. II A). The chain density was high enough that chains pointed away from the surface they were anchored to. A constant vertical separation of the walls was maintained, and the sliding velocity $`v`$ was well below the sound velocity. Friction was studied as a function of $`T`$, $`v`$, and the ratio of the LJ interaction energies between endgroups on opposing surfaces, $`ϵ_1`$, to that within each surface, $`ϵ_0`$. Many results of these simulations correspond to the predictions of the Tomlinson model. Below a threshold value of $`ϵ_1/ϵ_0`$ (0.4 at $`k_BT/ϵ_0=0.284`$), molecules moved smoothly, and the force decreased to zero with velocity. When the interfacial interactions became stronger than this threshold value, “plucking motion” due to rapid pops between metastable states was observed. Glosli and McClelland (1993) showed that at each pluck, mechanical energy was converted to kinetic energy that flowed away from the interface as heat. Ohzono et al. (1998) showed that a generalization of the Tomlinson model could quantitatively describe the sawtooth shape of the shear stress as a function of time. The instantaneous lateral force did not vanish in any of Glosli and McClelland’s (1993) or Ohzono et al.’s (1998) simulations. This shows that there was always a finite static friction, as expected between commensurate surfaces.
For both weak ($`ϵ_1/ϵ_0=0.1`$) and strong ($`ϵ_1/ϵ_0=1.0`$) interfacial interactions, Glosli and McClelland (1993) observed an interesting maximum in the $`T`$-dependent friction force. The position of this maximum coincided with the rotational “melting” temperature $`T_M`$ where orientational order at the interface was lost. It is easy to understand that $`F`$ drops at $`T>T_M`$ because thermal activation helps molecules move past each other. The increase in $`F`$ with $`T`$ at low $`T`$ was attributed to increasing anharmonicity that allowed more of the plucking energy to be dissipated.

### C Single Asperity Contacts

Engineering surfaces are usually rough, and friction is generated between contacting asperities on the two surfaces. These contacts typically have diameters of order a $`\mu `$m or more (e.g. Dieterich and Kilgore, 1996). This is much larger than atomic scales, and the models above may provide insight into the behavior within a representative portion of these contacts. However, it is important to determine how finite contact area and surface roughness affect friction. Studies of atomic-scale asperities can address these issues, and also provide direct models of the small contacts typical of AFM tips. S$`ø`$rensen et al. performed simulations of sliding tip-surface and surface-surface contacts consisting of copper atoms (S$`ø`$rensen et al., 1996). Flat, clean tips with (111) or (100) surfaces were brought into contact with corresponding crystalline substrates (Fig. 6). The two exterior layers of tip and surface were treated as rigid units, and the dynamics of the remaining mobile layers was followed. Interatomic forces and energies were calculated using semiempirical potentials derived from effective medium theory (Jacobsen et al., 1987). At finite temperatures, the outer mobile layer of both tip and surface was coupled to a Langevin thermostat. Zero temperature simulations gave similar results. To explore the effects of commensurability, results for crystallographically aligned and misoriented tip-surface configurations were compared. In the commensurate Cu(111) case, S$`ø`$rensen et al. observed atomic-scale stick-slip motion of the tip. The trajectory had a zig-zag form which could be related to jumps of the tip’s surface between fcc and hcp positions. Similar zig-zag motion is seen in shear along (111) planes of bulk fcc solids (Stevens and Robbins, 1993). Detailed analysis of the slips showed that they occurred via a dislocation mechanism. Dislocations were nucleated at the corner of the interface, and then moved rapidly through the contact region. Adhesion led to a large static friction at zero load: The static friction per unit area, or critical yield stress, dropped from 3.0GPa to 2.3GPa as $`T`$ increased from 0 to 300K. The kinetic friction increased linearly with load with a surprisingly small differential friction coefficient $`\stackrel{~}{\mu }_\mathrm{k}\equiv \partial F_\mathrm{k}/\partial L\approx 0.03`$. In the load regime investigated, $`\stackrel{~}{\mu }_\mathrm{k}`$ was independent of temperature and load. No velocity dependence was detectable up to sliding velocities of $`v=5`$ m/s. At higher velocities, the friction decreased. Even though the interactions between the surfaces are identical to those within the surfaces, no wear was observed. This was attributed to the fact that (111) surfaces are the preferred slip planes in fcc metals. Adhesive wear was observed between a commensurate (100) tip and substrate (Fig. 6).
Sliding in the (011) direction at either constant height or constant load led to inter-plane sliding between (111) planes inside the tip. As shown in Fig. 6, this plastic deformation led to wear of the tip, which left a trail of atoms in its wake. The total energy was an increasing function of sliding distance due to the extra surface area. The constant evolution of the tip kept the motion from being periodic, but the saw-toothed variation of force with displacement that is characteristic of atomic-scale stick-slip was still observed. Nieminen et al. (1992) observed a different mechanism of plastic deformation in essentially the same geometry, but at higher velocities (100m/s vs. 5m/s) and with Morse potentials between Cu atoms. Sliding took place between (100) layers inside the tip. This led to a reduction of the tip by two layers that was described as the climb of two successive edge dislocations, under the action of the compressive load. Although wear covered more of the surface with material from the tip, the friction remained constant at constant normal load. The reason was that the portion of the surface where the tip advanced had a constant area. While the detailed mechanism of plastic deformation is very different from that in S$`ø`$rensen et al. (1996), the main conclusions of both papers are similar: When two commensurate surfaces with strong adhesive interactions are slid against each other, wear is obtained through formation of dislocations that nucleate at the corners of the moving interface. S$`ø`$rensen et al. (1996) also examined the effect of incommensurability. An incommensurate Cu(111) system was obtained by rotating the tip by 16.1<sup>o</sup> about the axis perpendicular to the substrate. For a small tip (5x5 atoms) they observed an Aubry transition from smooth sliding with no static friction at low loads, to atomic-scale stick-slip motion at larger loads. Further increases in load led to sliding within the tip and plastic deformation. Finite systems are never truly incommensurate, and pinning was found to occur at the corners of the contact, suggesting it was a finite-size effect. Larger tips (19x19) slid without static friction at all loads. Similar behavior was observed for incommensurate Cu(100) systems. These results confirm the conclusions of Hirano and Shinjo (1990) that even bare metal surfaces of the same material will not exhibit static friction if the surfaces are incommensurate. They also indicate that contact areas as small as a few hundred atoms are large enough to exhibit this effect. Many other tip-substrate simulations of bare metallic surfaces have been carried out. Mostly, these simulations concentrated on indentation, rather than on sliding or scraping (see Sec. VII B). Among the indentation studies of metals are simulations of a Ni tip indenting Au(100) (Landman et al., 1990), a Ni tip coated with an epitaxial gold monolayer indenting Au(100) (Landman et al., 1992), an Au tip indenting Ni(001) (Landman and Luedtke, 1989, 1991), an Ir tip indenting a soft Pb substrate (Raffi-Tabar et al., 1992), and an Au tip indenting Pb(110) (Tomagnini et al., 1993). These simulations have been reviewed in detail within this series by Harrison et al. (1999). In general, plastic deformation occurs mainly in the softer of the two materials, typically Au or Pb in the cases above. Fig. 7 shows the typical evolution of the normal force and potential energy during an indentation at high enough loads to produce plastic deformation (Landman and Luedtke, 1991).
As the Au tip approaches the Ni surface (upper line), the force remains nearly zero until a separation of about 1 Å. The force then becomes extremely attractive and there is a jump to contact (A). During the jump to contact, Au atoms in the tip displace by 2 Å within a short time span of 1 ps. This strongly adhesive contact produces reconstruction of the Au layers through the fifth layer of the tip. When the tip is withdrawn, a neck is pulled out of the substrate. The fluctuations in force seen in Fig. 7 correspond to periodic increases in the number of layers of gold atoms in the neck. Nanoscale investigations of indentation, adhesion and fracture of non-metallic diamond (111) surfaces have been carried out by Harrison et al. (1992a). A hydrogen-terminated diamond tip was brought in contact with a (111) surface that was either bare or hydrogen-terminated. The tip was constructed by removing atoms from a (111) crystal until it looked like an inverted pyramid with a flattened apex. The model for the surface was similar to that described in Section IV B, but one layer contained 64 atoms. The indentation was performed by moving the rigid layers of the tip in steps of 0.15 Å. The system was then equilibrated before observables were calculated. Unlike the metal/metal systems (Fig. 7), the diamond/diamond systems (Fig. 8) did not show a pronounced jump to contact (Harrison et al., 1992a). This is because the adhesion between diamond (111) surfaces is quite small if at least one is hydrogen-terminated (Harrison et al., 1991). For effective normal loads up to 200 nN (i.e. small indentations), the diamond tip and surface deformed elastically and the force-distance curve was reversible (Fig. 8 (A)). A slight increase to 250nN led to plastic deformation that produced hysteresis and steps in the force-distance curve (Fig. 8 (B)) (Harrison et al., 1992a).

## V Lubricated Surfaces

Hydrodynamics and elasto-hydrodynamics have been very successful in describing lubrication by micron-thick films (Dowson and Higginson, 1968). However, these continuum theories begin to break down as atomic structure becomes important. Experiments and simulations reveal a sequence of dramatic changes in the static and dynamic properties of fluid films as their thickness decreases from microns down to molecular scales. These changes have important implications for the function of boundary lubricants. This section describes simulations of these changes, beginning with changes in flow boundary conditions for relatively thick films, and concluding with simulations of submonolayer films and corrugated walls.

### A Flow boundary conditions

Hydrodynamic theories of lubrication need to assume a boundary condition (BC) for the fluid velocity at solid surfaces. Macroscopic experiments are generally well-described by a “no-slip” BC; that is, the tangential component of the fluid velocity equals that of the solid at the surface. The one prominent exception is contact line motion, where an interface between two fluids moves along a solid surface. This motion would require an infinite force in hydrodynamic theory, unless slip occurred near the contact line (Huh and Scriven, 1971; Dussan, 1979). The experiments on adsorbed monolayers described in Section III E suggest that slip may occur more generally on solid surfaces. As noted, the kinetic friction between the first monolayer and the substrate can be orders of magnitude lower than that between two layers in a fluid.
The opposite deviation from no-slip is seen in some Surface Force Apparatus experiments – a layer of fluid molecules becomes immobilized at the solid wall (Chan and Horn, 1985; Israelachvili, 1986). In pioneering theoretical work, Maxwell (1867) calculated the deviation from a no-slip boundary condition for an ideal gas. He assumed that at each collision molecules were either specularly reflected or exchanged momentum to emerge with a velocity chosen at random from a thermal distribution. The calculated flow velocity at the wall was non-zero, and increased linearly with the velocity gradient near the wall. Taking $`z`$ as the direction normal to the wall and $`u_{\parallel }`$ as the tangential component of the velocity relative to the wall, Maxwell found $$u_{\parallel }(z_0)=𝒮\left(\frac{\partial u_{\parallel }}{\partial z}\right)_{z_0}$$ (15) where $`z_0`$ is the position of the wall. The constant of proportionality, $`𝒮`$, has units of length and is called the slip length. It represents the distance into the wall at which the velocity gradient would extrapolate to zero. Calculations with a fictitious wall at this position and no-slip boundary conditions would reproduce the flow in the region far from the wall. $`𝒮`$ also provides a measure of the kinetic friction per unit area between the wall and the adjacent fluid. The shear stress must be uniform in steady state, because any imbalance would lead to accelerations. Since the velocity gradient times the viscosity $`\mu `$ gives the stress in the fluid, the kinetic friction per unit area is $`u_{\parallel }(z_0)\mu /𝒮`$. Maxwell found that $`𝒮`$ increased linearly with the mean free path and that it also increased with the probability of specular reflection. Early simulations used mathematically flat walls and phenomenological reflection rules like those of Maxwell. For example, Hannon et al. (1988) found that the slip length was reduced to molecular scales in dense fluids. This is expected from Maxwell’s result, since the mean free path becomes comparable to an atomic separation at high densities. However work of this type does not address how the atomic structure of realistic walls is related to collision probabilities and whether Maxwell’s reflection rules are relevant. This issue was first addressed in simulations of moving contact lines where deviations from no-slip boundary conditions have their most dramatic effects (Koplik et al., 1988, 1989; Thompson and Robbins, 1989). These papers found that even when no-slip boundary conditions held for single fluid flow, the large stresses near moving contact lines led to slip within a few molecular diameters of the contact line. They also began to address the relation between the flow boundary condition and structure induced in the fluid by the solid wall. The most widely studied type of order is layering in planes parallel to the wall. It is induced by the sharp cutoff in fluid density at the wall and the pair correlation function $`g(r)`$ between fluid atoms (Abraham, 1978; Toxvaerd, 1981; Nordholm and Haymet, 1980; Snook and van Megen, 1980; Plischke and Henderson, 1986). An initial fluid layer forms at the preferred wall-fluid spacing. Additional fluid molecules tend to lie in a second layer, at the preferred fluid-fluid spacing. This layer induces a third, and so on. Some of the trends that are observed in the degree and extent of layering are illustrated in Fig. 9 (Thompson and Robbins, 1990a).
The fluid density is plotted as a function of the distance between walls for a model considered in almost all the studies of flow BC’s described below. The fluid consists of spherical molecules interacting with a Lennard-Jones potential. They are confined by crystalline walls containing discrete atoms. In this case the walls were planar (001) surfaces of an fcc crystal. Wall and fluid atoms also interact with a Lennard-Jones potential, but with a different binding energy $`ϵ_{\mathrm{wf}}`$. The net adsorption potential from the walls (Eq. 11) can be increased by raising $`ϵ_{\mathrm{wf}}`$ or by increasing the density of the walls $`\rho _w`$ so that more wall atoms attract the fluid. Fig. 9 shows that both increases lead to increases in the height of the first density peak. The height also increases with the pressure in the fluid (Koplik et al., 1989; Barrat and Bocquet, 1999a) since that forces atoms into steeper regions of the adsorption potential. The height of subsequent density peaks decreases smoothly with distance from the wall, and only four or five well-defined layers are seen near each wall in Fig. 9. The rate at which the density oscillations decay is determined by the decay length of structure in the bulk pair-correlation function of the fluid. Since all panels of Fig. 9 have the same conditions in the bulk, the decay rate is the same. The adsorption potential only determines the initial height of the peaks. The pair correlation function usually decays over a few molecular diameters except under special conditions, such as near a critical point. For fluids composed of simple spherical molecules, the oscillations typically extend out to a distance of order 5 molecular diameters (Magda et al., 1985; Schoen et al., 1987; Thompson and Robbins, 1990a). For more complex systems containing fluids with chain or branched polymers, the oscillations are usually negligible beyond $`\sim `$ 3 molecular diameters (Bitsanis and Hadziioannou, 1990; Thompson et al., 1995; Gao et al., 1997a, 1997b). Some simulations with realistic potentials for alkanes show more pronounced layering near the wall because the molecules adopt a rod-like conformation in the first layer (Ribarsky et al., 1992; Xia et al., 1992). Solid surfaces also induce density modulations within the plane of the layers (Landman et al., 1989; Schoen et al., 1987, 1988, 1989; Thompson and Robbins, 1990a, 1990b). These correspond directly to the modulations induced in adsorbed layers by the corrugation potential (Sec. III E), and can also be quantified by the two-dimensional static structure factor at the shortest reciprocal lattice vector $`Q`$ of the substrate. When normalized by the number of atoms in the layer, $`N_l`$, this becomes an intensive variable that would correspond to the Debye-Waller factor in a crystal. The maximum possible value, $`S(Q)/N_l=1`$, corresponds to fixing all atoms exactly at crystalline lattice sites. In a strongly ordered case such as $`\rho _w=\rho `$ in Fig. 9(c), the small oscillations about lattice sites in the first layer only decrease $`S(Q)/N_l`$ to 0.71. This is well above the value of 0.6 that is typical of bulk 3D solids at their melting point and indicates that the first layer has crystallized onto the wall. This was confirmed by computer simulations of diffusion and flow (Thompson and Robbins, 1990a). The values of $`S(Q)/N_l`$ in the second and third layers are 0.31 and 0.07, respectively, and atoms in these layers exhibit typical fluid diffusion.
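The in-plane order parameter quoted above is straightforward to evaluate from atomic positions. The sketch below uses a toy configuration (all parameters illustrative, not from the cited work) and computes $`S(Q)/N_l`$ for one layer as $`|\sum _j\mathrm{exp}(i\stackrel{}{Q}\cdot \stackrel{}{r}_j)|^2/N_l^2`$, which equals 1 when every atom sits exactly on a wall lattice site and decreases as random displacements grow.

```python
import numpy as np

def layer_order_parameter(r, Q):
    """S(Q)/N_l = |sum_j exp(i Q . r_j)|^2 / N_l^2 for one fluid layer.

    r : (N_l, 2) in-plane positions of the atoms in the layer
    Q : (2,) shortest reciprocal lattice vector of the wall
    Equals 1 if every atom sits exactly on a wall lattice site."""
    phase = np.exp(1j * (r @ Q))
    return np.abs(phase.sum())**2 / len(r)**2

# Toy example: a commensurate square layer with Gaussian displacements of
# increasing amplitude (all numbers illustrative only).
a = 1.0                                   # wall lattice constant
Q = np.array([2.0 * np.pi / a, 0.0])      # shortest reciprocal lattice vector
nx = ny = 20
X, Y = np.meshgrid(np.arange(nx) * a, np.arange(ny) * a)
sites = np.column_stack((X.ravel(), Y.ravel()))

rng = np.random.default_rng(1)
for amp in (0.0, 0.05, 0.1, 0.2, 0.4):
    r = sites + rng.normal(scale=amp, size=sites.shape)
    print(f"rms displacement {amp:.2f} a  ->  S(Q)/N_l = {layer_order_parameter(r, Q):.3f}")
```

The printed values fall off roughly like a Debye-Waller factor with increasing displacement amplitude, mirroring the drop from 0.71 to 0.07 across the first three layers quoted above.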
There is some correlation between the factors that produce strong layering and those that produce strong in-plane modulations. For example, chain molecules have several conflicting length scales that tend to frustrate both layering and in-plane order (Thompson et al., 1995; Gao et al., 1997a, 1997b; Koike and Yoneya, 1998, 1999). Both types of order also have a range that is determined by $`g(r)`$ and a magnitude that decreases with decreasing $`ϵ_{\mathrm{wf}}/ϵ`$. However, the dependence of in-plane order on the density of substrate atoms is more complicated than for layering. When $`\rho _w=\rho `$, the fluid atoms can naturally sit on the sites of a commensurate lattice, and $`S(Q)`$ is large. When the substrate density $`\rho _w`$ is increased by a factor of 2.52, the fluid atoms no longer fit easily into the corrugation potential. The degree of induced in-plane order drops sharply, although the layering becomes stronger (Fig. 9). Sufficiently strong adsorption potentials may eventually lead to crystalline order in the first layers, and stronger layering. However, this may actually increase slip, as shown below. Fig. 9 also illustrates the range of flow boundary conditions that have been seen in many studies (Heinbuch and Fischer, 1989; Koplik et al., 1989; Thompson and Robbins, 1990a; Bocquet and Barrat, 1994; Mundy et al., 1996; Khare et al., 1996; Barrat and Bocquet, 1999a). Flow was imposed by displacing the walls in opposite directions along the $`x`$-axis with speed $`U`$ (Thompson and Robbins, 1990a). The average velocity $`V_x`$ was calculated within each of the layers defined by peaks in the density (Fig. 9), and normalized by $`U`$. Away from the walls, all systems exhibit the characteristic Couette flow profile expected for Newtonian fluids. The value of $`V_x`$ rises linearly with $`z`$, and the measured shear stress divided by $`\partial V_x/\partial z`$ equals the bulk viscosity. Deviations from this behavior occur within the first few layers, in the region where layering and in-plane order are strong. In some cases the fluid velocity remains substantially less than $`U`$, indicating slip occurs. In others, one or more layers move at the same velocity as the wall, indicating they are stuck to it. Applying Maxwell’s definition of slip length Eq. 15 to these systems is complicated by uncertainty in where the plane of the solid surface $`z_0`$ should be defined. The wall is atomically rough, and the fluid velocity can not be evaluated too near to the wall because of the pronounced layering. In addition, the curvature evident in some flow profiles represents a varying local viscosity whose effect must be included in the boundary condition. One approach is to fit the linear flow profile in the central region and extrapolate to the value of $`z^{\prime }`$ where the velocity would reach $`+U`$. The slip length can then be defined as $`𝒮=z^{\prime }-z_{tw}`$ where $`z_{tw}`$ is the height of the top wall atoms. This is equivalent to applying Maxwell’s definition (Eq. 15) to the extrapolated flow profile at $`z_{tw}`$. The no-slip condition corresponds to a flow profile that extrapolates to the wall velocity at $`z_{tw}`$ as illustrated by the dashed line in Fig. 9(e). Slip produces a smaller velocity gradient and a positive value of $`𝒮`$. Stuck layers lead to a larger velocity gradient and a negative value of $`𝒮`$. The dependence of slip length on many parameters has been studied. All the results are consistent with a decrease in slip as the in-plane order increases.
Numerical results for $`\rho _w=\rho `$ and $`ϵ_{\mathrm{wf}}=0.4ϵ`$ (Fig. 9(d)) are very close to the no-slip condition. Increasing $`ϵ_{\mathrm{wf}}`$ leads to stuck layers (Koplik et al., 1988, 1989; Thompson and Robbins, 1989, 1990a; Heinbuch and Fischer, 1989), and decreasing $`ϵ_{\mathrm{wf}}`$ can produce large slip lengths (Thompson and Robbins, 1990a; Barrat and Bocquet, 1999a). Increasing pressure (Koplik et al., 1989; Barrat and Bocquet, 1999a) or decreasing temperature (Heinbuch and Fischer, 1989; Thompson and Robbins, 1990a) increases structure in $`g(r)`$ and leads to less slip. These changes could also be attributed to increases in layering. However, increasing the wall density $`\rho _w`$ from $`\rho `$ to 2.52 $`\rho `$ increases slip in Fig. 9. This correlates with the drop in in-plane order, while the layering actually increases. Changes in in-plane order also explain the pronounced increase in slip when $`ϵ_{\mathrm{wf}}/ϵ`$ is increased to 4 in the case of dense walls (Fig. 9(f)). The first layer of fluid atoms becomes crystallized with a very different density than the bulk fluid. It becomes the “wall” that produces order in the second layer, and it gives an adsorption potential characterized by $`ϵ`$ rather than $`ϵ_{\mathrm{wf}}`$. The observed slip is consistent with that for dense walls displaced to the position of the first layer and interacting with $`ϵ`$. Thompson and Robbins (1990a) found that all of their results for $`𝒮`$ collapsed onto a universal curve when plotted against the structure factor $`S(Q)/N_l`$. When one or more layers crystallized onto the wall, the same collapse could be applied as long as the effective wall position was shifted by a layer and the $`Q`$ for the outer wall layer was used. The success of this collapse at small $`S(Q)/N_l`$ can be understood from the perturbation theory for the kinetic friction on adsorbed monolayers (Eq. 14). The slip length is determined by the friction between the outermost fluid layer and the wall. This depends only on $`S(Q)/N_l`$ and the phonon lifetime for acoustic waves. The latter changes relatively little over the temperature range considered, and hence $`𝒮`$ is a single-valued function of $`S(Q)`$. The perturbation theory breaks down at large $`S(Q)`$, but the success of the collapse indicates that there is still a one-to-one correspondence between it and the friction. In a recent paper Barrat and Bocquet (1999b) have derived an expression relating $`𝒮`$ and $`S(Q)`$ that is equivalent to Eq. 14. However, in describing their numerical results (they considered almost the same parameter range as Thompson and Robbins (1990a), but at a lower temperature, $`k_BT/ϵ=0.7`$ vs. 1.1 or 1.4, and at a single wall density, $`\rho =\rho _w`$) they emphasize the correlation between increased slip and decreased wetting (Barrat and Bocquet, 1999a, 1999b). In general the wetting properties of fluids are determined by the adsorption term of the substrate potential $`U_0`$ (Eq. 11). This correlates well with the degree of layering, but has little to do with in-plane order. In the limit of a perfectly structureless wall one may have complete wetting, and yet there is also complete slip. The relation between wetting and slip is very much like that between adhesion and friction. All other things being equal, a greater force of attraction increases the effect of corrugation in the potential and increases the friction. However, there is no one-to-one correspondence between them.
In earlier work Bocquet and Barrat (1993, 1994) provided a less ambiguous resolution to the definition of the slip length than that of Thompson and Robbins (1990a). They noted that the shear rate in the central region of Couette flow depended only on the sum of the wall position and the slip length. Thus one must make a somewhat arbitrary choice of wall position to fix the slip length. However, if one also fits the flow profile for Poiseuille flow, unique values of slip length and the effective distance between the walls $`h`$ are obtained (Barrat and Bocquet, 1999a). Bocquet and Barrat (1993, 1994) also suggested and implemented an elegant approach for determining both $`𝒮`$ and $`h`$ using equilibrium simulations and the fluctuation-dissipation theorem. This is one of the first applications of the fluctuation-dissipation theorem to boundary conditions. It opens up the possibility of calculating flow boundary conditions directly from equilibrium thermodynamics, and Bocquet and Barrat (1994) were able to derive Kubo relations for $`z_0`$ and $`𝒮`$. Analytic results for these relations are not possible in general, but in the limit of weak interactions they give an expression equivalent to Eq. 14 for the drag on the wall as noted above (Barrat and Bocquet, 1999b). Mundy et al. (1996) have proposed a non-equilibrium simulation method for calculating these quantities directly. In all of the work described above, care was taken to ensure that the slip boundary condition was independent of the wall velocity. Thus both the bulk of the fluid and the interfacial region were in the linear response regime where the fluctuation-dissipation theorem holds. This linear regime usually extends to very high shear rates ($`>10^{10}\mathrm{s}^{-1}`$ for spherical molecules). However, Thompson and Troian (1997) found that under some conditions the interfacial region exhibits nonlinear behavior at much lower shear rates than the bulk fluid. They also found a universal form for the deviation from a linear stress/strain-rate relationship at the interface. The fundamental origin for this non-linearity is that there is a maximum stress that the substrate can apply to the fluid. This stress roughly corresponds to the maximum of the force from the corrugation potential times the areal density of fluid atoms. The stress/velocity relation at the interface starts out linearly and then flattens as the maximum stress is approached. The shear rate in the fluid saturates at the value corresponding to the maximum shear stress and the amount of slip at the wall grows arbitrarily large with increasing wall velocity. Similar behavior was observed for more realistic potentials by Koike and Yoneya (1998, 1999).

### B Phase Transitions and Viscosity Changes in Molecularly Thin Films

One of the surprising features of Fig. 9 is that the viscosity remains the same even in regions near the wall where there is pronounced layering. Any change in viscosity would produce a change in the velocity gradient since the stress is constant. However, the flow profiles in panel (d) remain linear throughout the cell. The profile for the dense walls in panel (f) is linear up to the last layer, which has crystallized onto the wall. From Fig. 9 it is apparent that density variations by at least a factor of seven can be accommodated without a viscosity change. The interplay between layering and viscosity was studied in detail by Bitsanis et al. (1987), although their use of artificial flow reservoirs kept them from addressing flow BC’s.
They were able to fit detailed flow profiles using only the bulk viscosity evaluated at the average local density. This average was taken over a distance of order $`\sigma `$ that smeared out the rapid density modulations associated with layering, but not slower variations due to the adsorption potential of the walls or an applied external potential. In subsequent work, Bitsanis et al. (1990) examined the change in viscosity with film thickness. They found that results for film thicknesses $`h>4\sigma `$ could be fit using the bulk viscosity for the average density. However, as $`h`$ decreased below $`4\sigma `$, the viscosity diverged much more rapidly than any model based on bulk viscosity could explain. These observations were consistent with experiments on nanometer thick films of a wide variety of small molecules. These experiments used the Surface Force Apparatus (SFA) which measures film thickness with Å resolution using optical interferometry. Films were confined between atomically flat mica sheets at a fixed normal load, and sheared with a steady (Gee et al., 1990) or oscillating (Granick, 1992) velocity. Layering in molecularly thin films gave rise to oscillations in the energy, normal force and effective viscosity (Horn and Israelachvili, 1981; Israelachvili, 1991; Georges et al., 1993) as the film thickness decreased. The period of these oscillations was a characteristic molecular diameter. As the film thickness decreased below 7 to 10 molecular diameters, the effective viscosity increased dramatically (Gee et al., 1990; Granick, 1992; Klein and Kumacheva, 1995). Most films of one to three molecular layers exhibited a yield stress characteristic of solid-like behavior, even though the molecules form a simple Newtonian fluid in the bulk. Pioneering grand canonical Monte Carlo simulations by Schoen et al. (1987) showed crystallization of spherical molecules between commensurate walls separated by up to 6 molecular diameters. However, the crystal was only stable when the thickness was near to an integral number of crystalline layers. At intermediate $`h`$, the film transformed to a fluid state. Later work (Schoen et al., 1988, 1989) showed that translating the walls could also destabilize the crystalline phase and lead to periodic melting and freezing transitions as a function of displacement. However, these simulations were carried out at equilibrium and did not directly address the observed changes in viscosity. SFA experiments can not determine the flow profile within the film, and this introduces ambiguity in the meaning of the viscosity values that are reported. Results are typically expressed as an effective viscosity $`\mu _{\mathrm{eff}}\equiv \tau _s/\dot{\gamma }_{\mathrm{eff}}`$ where $`\tau _s`$ is the measured shear stress and $`\dot{\gamma }_{\mathrm{eff}}\equiv v/h`$ represents the effective shear rate that would be present if the no-slip condition held and walls were displaced at relative velocity $`v`$. Deviations from the no-slip condition might cause $`\mu _{\mathrm{eff}}`$ to differ from the bulk viscosity by an order of magnitude. However, they could not explain the observed changes of $`\mu _{\mathrm{eff}}`$ by more than five orders of magnitude or the even more dramatic changes by 10 to 12 orders of magnitude in the characteristic viscoelastic relaxation time determined from the shear rate dependence of $`\mu _{\mathrm{eff}}`$ (Gee et al., 1990; Hu et al., 1991). Thompson et al.
(1992) found very similar changes in viscosity and relaxation time in simulations of a simple bead-spring model of linear molecules (Kremer and Grest, 1990). Some of their results for the effective viscosity vs. effective shear rate are shown in Fig. 10. In (a), the normal pressure $`P_{\perp }`$ was fixed and the number of atomic layers, $`m_l`$, was decreased from 8 to 2. In (b), the film was confined by increasing the pressure at fixed particle number. Both methods of increasing confinement lead to dramatic changes in the viscosity and relaxation time. The shear rate dependence of $`\mu _{\mathrm{eff}}`$ in Fig. 10 has the same form as in experiment (Hu et al., 1991). A Newtonian regime with constant viscosity $`\mu _0`$ is seen at the lowest shear rates in all but the uppermost curve in each panel. Above a characteristic shear rate $`\dot{\gamma }_c`$ the viscosity begins to decrease rapidly. This shear thinning is typical of viscoelastic media and indicates that molecular rearrangements are too slow to respond to the sliding walls at $`\dot{\gamma }_{\mathrm{eff}}>\dot{\gamma }_c`$. As a result, the structure of the fluid begins to change in a way that facilitates shear. A characteristic time for molecular rearrangements in the film can be associated with $`1/\dot{\gamma }_c`$. Increasing confinement by decreasing $`m_l`$ or increasing pressure increases the Newtonian viscosity and relaxation time. For the uppermost curve in each panel, the relaxation time is longer than the longest simulation runs ($`>10^6`$ time steps) and the viscosity continues to increase at the lowest accessible shear rates. Note that the ranges of shear rate covered in experiment and simulations differ by orders of magnitude. However, the scaling discussed below suggests that the same behavior may be operating in both. For the parameters used in Fig. 10, studies of the flow profile in four layer films showed that one layer of molecules was stuck to each wall and shear occurred in the middle layers. Other parameter sets produced varying degrees of slip at the wall/film interface, yet the viscoelastic response curves showed the same behavior. Later work by Baljon and Robbins (1996, 1997) shows that lowering the temperature through the bulk glass transition also produces similar changes in viscoelastic response. This suggests that the same glass transition is being produced by changes in thickness, pressure or temperature. Following the analogy to bulk glass transitions, Thompson et al. (1993, 1995) have shown that changes in equilibrium diffusion constant and $`\dot{\gamma }_c`$ can be fit to a free volume theory. Both vanish as $`\mathrm{exp}(-h_0/(h-h_c))`$ where $`h_c`$ is the film thickness at the glass transition. Moreover, at $`h<h_c`$, they found behavior characteristic of a solid. Films showed a yield stress and no measurable diffusion. When forced to slide, shear localized at the wall/film interface and $`\mu _{\mathrm{eff}}`$ dropped as $`1/\dot{\gamma }`$, implying that the shear stress is independent of sliding velocity. This is just the usual form of kinetic friction between solids. The close relation between bulk glass transitions and those induced by confinement can perhaps best be illustrated by using a generalization of time-temperature scaling. In bulk systems it is often possible to collapse the viscoelastic response onto a universal curve by dividing the viscosity by the Newtonian value, $`\mu _0`$, and dividing the shear rate by the characteristic rate, $`\dot{\gamma }_c`$.
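The rescaling just described can be made concrete with a few lines of code. The sketch below is purely illustrative: the Carreau-like model curve and its parameters are assumptions chosen for the demonstration, not the functional form found in the simulations. For each hypothetical "confinement" it estimates $`\mu _0`$ from the low-rate plateau and $`\dot{\gamma }_c`$ from where the viscosity has dropped by a factor of two, then rescales both axes; the rescaled curves coincide.

```python
import numpy as np

def mu_model(rate, mu0, rate_c, x=2.0 / 3.0):
    """Illustrative Carreau-like curve: Newtonian plateau mu0 followed by
    power-law shear thinning with exponent -x above rate_c.  This functional
    form is an assumption used only to demonstrate the rescaling."""
    return mu0 * (1.0 + (rate / rate_c) ** 2) ** (-x / 2.0)

rates = np.logspace(-4, 2, 1000)

# Hypothetical "confinements": stronger confinement -> larger mu0, smaller rate_c.
params = [(1.0, 1.0), (10.0, 0.1), (100.0, 0.01)]
scaled_grid = np.logspace(-2, 1, 7)          # common grid of scaled shear rates

for mu0, rate_c in params:
    mu = mu_model(rates, mu0, rate_c)
    mu0_est = mu[0]                                            # low-rate plateau
    rate_c_est = rates[np.argmin(np.abs(mu - 0.5 * mu0_est))]  # drop by a factor of 2
    collapsed = np.interp(scaled_grid, rates / rate_c_est, mu / mu0_est)
    print(np.round(collapsed, 2))             # rows (nearly) coincide: data collapse
```

With real simulation or SFA data, $`\mu _0`$ and $`\dot{\gamma }_c`$ would of course be estimated from the measured curves in the same way before rescaling.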
Demirel and Granick (1996a) found that this approach could be used to collapse data for the real and imaginary parts of the elastic moduli of confined films at different thicknesses. Fig. 10(c) shows that simulation results for the viscosity of thin films can also be collapsed using Demirel and Granick’s approach (Robbins and Baljon, 2000). Data for different thicknesses, normal pressures, and interaction parameters taken from all parameters considered by Thompson et al. (1992, 1995) collapse onto a universal curve. Also shown on the plot (circles) are data for different temperatures that were obtained for longer chains in films that are thick enough to exhibit bulk behavior (Baljon and Robbins, 1996, 1997). The data fit well onto the same curve, providing a strong indication that a similar glass transition occurs whether thickness, normal pressure, or temperature is varied. The high shear rate region of the universal curve shown in Fig. 10(c) exhibits power law shear thinning: $`\mu _{\mathrm{eff}}\propto \dot{\gamma }^x`$ with a best fit exponent $`x=-0.69\pm 0.02`$. In SFA experiments, Hu et al. (1991) found shear thinning of many molecules was consistent with $`x`$ near -2/3. However, the response of some fluids followed power laws closer to -0.5 as they became less confined (Carson et al., 1992). One possible explanation for this is that these measurements fall onto the crossover region of a universal curve like Fig. 10(c). The apparent exponent obtained from the slope of this log-log plot varies from 0 to -2/3 as $`\mu _{\mathrm{eff}}`$ drops from $`\mu _0`$ to about $`\mu _0/30`$. Many of the experiments that found smaller exponents were only able to observe a drop in $`\mu `$ by an order of magnitude. As confinement was increased, and a larger drop in viscosity was observed, the slope increased toward -2/3. It would be interesting to attempt a collapse of experimental data on a curve like Fig. 10(c) to test this hypothesis. Another possibility is that the shear thinning exponent depends on some detail of the molecular structure. Chain length alone does not appear to affect the exponent, since results for chains of length $`16`$ and $`6`$ are combined in Fig. 10(c). Manias et al. (1996) find that changing the geometry from linear to branched has little effect on shear thinning. In most simulations of simple spherical molecules, crystallization occurs before the viscosity can rise substantially above the bulk value. However, Hu et al. (1996) have found a set of conditions where spherical molecules follow a $`-2/3`$ slope over two decades in shear rate. Stevens et al. (1997) have performed simulations of confined films of hexadecane using a detailed model of the molecular structure and interactions. They found that films crystallized before the viscosity rose much above bulk values. This prevented them from seeing large power law scaling regimes, but the apparent exponents were consistently less than those for bead spring models. It is not clear whether the origin of this discrepancy is the inability to approach the glass transition, or whether structural changes under shear lead to different behavior. Hexadecane molecules have some tendency to adopt a linear configuration and become aligned with the flow. This effect is not present in simpler bead-spring models. Shear thinning typically reflects changes in structure that facilitate shear. Experiments and the above simulations were done at constant normal load.
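The crossover argument above can be checked numerically. The sketch below uses an assumed broad-crossover master curve (not the actual curve in Fig. 10(c)) and evaluates the local slope of the log-log plot as a function of how far the viscosity has fallen below $`\mu _0`$; the apparent exponent measured over a limited window is much smaller in magnitude than the asymptotic -2/3.

```python
import numpy as np

# Assumed broad-crossover master curve, mu/mu0 = (1 + rate/rate_c)^(-2/3),
# in reduced units with mu0 = rate_c = 1.  Chosen only for illustration.
rates = np.logspace(-2, 5, 4000)
mu = (1.0 + rates) ** (-2.0 / 3.0)

# Apparent exponent = local slope of the log-log plot, d(log mu)/d(log rate).
slope = np.gradient(np.log(mu), np.log(rates))

for drop in (2, 5, 10, 30, 100):
    i = np.argmin(np.abs(mu - 1.0 / drop))
    print(f"mu_eff = mu0/{drop:<3d} -> apparent exponent {slope[i]:+.2f}")
```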
In bead-spring models the dominant structural change is a dilation of the film that creates more room for molecules to slide past each other. The dilations are relatively small, but have been detected in some experiments (Dhinojwala and Granick). When simulations are done at constant wall spacing, the shear-thinning exponent drops to $`x=-0.5`$ (Thompson et al., 1992, 1995; Manias et al., 1996). Kröger et al. (1993) find the same exponent in bulk simulations of these molecules at constant volume, indicating that a universal curve like that in Fig. 10(c) might also be constructed for constant volume instead of constant pressure. Several analytic models have been developed to explain the power law behavior observed in experiment and simulations. All the models find an exponent of -2/3 in certain limits. However, they start from very different sets of assumptions and it is not clear if any of these correspond to the simulations and experiments. Two of the models yield an exponent of -2/3 for constant film thickness (Rabin and Hersht, 1993; Urbakh et al., 1995) where simulations give $`x=-1/2`$. Urbakh et al. (1995) also find that the exponent depends on the velocity profile, while simulations do not. The final model (de Gennes) is based on scaling results for the stretching of polymers under shear. While it may be appropriate for thick films, it can not describe the behavior of films which exhibit plug-like flow. It remains to be seen if the -2/3 exponent has a single explanation or arises from different mechanisms in different limits. The results described in this section have interesting implications for the function of macroscopic bearings. Some bearings may operate in the boundary lubrication regime where the separation between asperities decreases to molecular dimensions. The dramatic increase in viscosity due to confinement may play a key role in preventing squeeze-out of the lubricant and direct contact between asperities. Although the glassy lubricant layer would not have a low frictional force, the yield stress would be lower than that for asperities in contact. More importantly, the amount of wear would be greatly reduced by the glassy film. Studies of confined films may help to determine what factors control a lubricant’s ability to form a robust protective layer at atomic scales. As we now discuss, they may also help explain the pervasive observation of static friction.

### C Submonolayer Lubrication

Physisorbed molecules, such as the short hydrocarbon chains considered above, can be expected to sit on any surface exposed to atmospheric conditions. Even in ultra-high-vacuum, special surface treatments are needed to remove strongly physisorbed species from surfaces. Recent work shows that the presence of these physisorbed molecules qualitatively alters the tribological behavior between two incommensurate walls (He et al., 1999; Müser and Robbins, 1999) or between two disordered walls (Müser and Robbins). As noted in Sec. IV, the static friction is expected to vanish between most incommensurate surfaces. Under similar conditions, the static friction between most amorphous, but flat, interfaces vanishes in the thermodynamic limit (Müser and Robbins). In both cases, the reason is that the density modulations on two bare surfaces can not lock into phase with each other unless the surfaces are unrealistically compliant. However, a sub-monolayer of molecules that form no strong covalent bonds with the walls can simultaneously lock to the density modulations of both walls.
This gives rise to a finite static friction for all surface symmetries: commensurate, incommensurate, and amorphous (Müser and Robbins). A series of simulations were performed in order to elucidate the influence of such “between-sorbed” particles on tribological properties (He et al., 1999; Müser and Robbins, 1999). A layer of spherical or short “bead-spring” (Kremer and Grest, 1990) molecules was confined between two fcc (111) surfaces. The walls had the orientations and lattice spacings shown in Fig. 5, and results are labeled by the letters in this figure. Wall atoms were bound to their lattice sites with harmonic springs as in the Tomlinson model. The interactions between atoms on opposing walls, as well as fluid-fluid and fluid-wall interactions, had the LJ form. Unless noted, the potential parameters for all three interactions were the same. The static friction per unit contact area, or yield stress $`\tau _\mathrm{s}`$, was determined from the lateral force needed to initiate steady sliding of the surfaces at fixed pressure. When there were no molecules between the surfaces, there was no static friction unless the surfaces were commensurate. As illustrated in Fig. 11(a), introducing a thin film led to static friction in all cases. Moreover, all incommensurate cases (B-D) showed nearly the same static friction, and $`\tau _s`$ was independent of the direction of sliding relative to crystalline axes (e.g. along $`x`$ or $`y`$ for case D). (Although perfectly incommensurate walls are not consistent with periodic boundary conditions, the effect of residual commensurability was shown to be negligible; Müser and Robbins, 1999.) Most experiments do not control the crystallographic orientation of the walls relative to each other or to the sliding direction, yet the friction is fairly reproducible. This is hard to understand based on models of bare surfaces which show dramatic variations in friction with orientation (Hirano and Shinjo, 1993; Sørensen et al., 1996; Robbins and Smith, 1996). Fig. 11 shows that a thin layer of molecules eliminates most of this variation. In addition, the friction is insensitive to chain length, coverage, and other variables that are not well controlled in experiments (He et al., 1999). The kinetic friction is typically about 10 to 20% lower than the static friction in all cases (He and Robbins). Of course experiments do observe changes in friction with surface material. The main factor that changed $`\tau _s`$ in this simple model was the ratio of the characteristic length for wall-fluid interactions $`\sigma _{\mathrm{wf}}`$ to the nearest-neighbor spacing on the walls, $`d`$. As shown in Fig. 11(b), increasing $`\sigma _{\mathrm{wf}}/d`$ decreases the friction. The reason is that larger fluid atoms are less able to penetrate between wall atoms and thus feel less surface corrugation. Using amorphous, but flat, walls also produced a somewhat larger static friction. Note that $`\tau _s`$ rises linearly with the imposed pressure in all cases shown in Fig. 11. This provides a microscopic basis for the phenomenological explanation of Amontons’ laws that was proposed by Bowden and Tabor (1986). The total static friction is given by the integral of the yield stress over areas of the surface that are in molecular contact. If $`\tau _s=\tau _0+\alpha P`$, then the total force is $`F_s=\alpha L+\tau _0A_{\mathrm{real}}`$ where $`L`$ is the load and $`A_{\mathrm{real}}`$ is the total contact area.
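A minimal numerical sketch of this integrated yield-stress picture is given below. All of the values (the slope $`\alpha `$, offset $`\tau _0`$, loads, and the assumed constant mean contact pressure) are hypothetical and serve only to show how a local stress law that is linear in pressure leads to a total friction force proportional to load.

```python
import numpy as np

# Hypothetical parameters for the local yield stress tau_s = tau_0 + alpha * P.
alpha = 0.05                 # dimensionless slope
tau_0 = 2.0e6                # offset (Pa)
P_bar = 0.5e9                # assumed constant mean contact pressure (Pa)

loads = np.array([1.0, 2.0, 5.0, 10.0]) * 1e-3   # normal loads L (N)
A_real = loads / P_bar                            # real contact area grows with load

# Integrating tau_s over the contacts gives F_s = alpha*L + tau_0*A_real.
F_s = alpha * loads + tau_0 * A_real
for L, F in zip(loads, F_s):
    print(f"L = {1e3 * L:5.1f} mN   F_s = {1e3 * F:6.3f} mN   F_s/L = {F / L:.4f}")
```

Because the contact area was taken to grow in proportion to the load, the ratio F_s/L printed in the last column is the same for every load.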
The coefficient of friction is then $`\mu _s=\alpha +\tau _0/\overline{P}`$ where $`\overline{P}=L/A_{\mathrm{real}}`$ is the mean contact pressure. Amontons’ laws say that $`\mu _s`$ is independent of load and the apparent area of the surfaces in contact. This condition is satisfied if $`\tau _0`$ is small or if $`\overline{P}`$ is constant. The latter condition is expected to hold for both ideal elastic (Greenwood and Williamson, 1966) and plastic (Bowden and Tabor, 1986) surfaces. The above results suggest that adsorbed molecules and other “third-bodies” may prove key to understanding macroscopic friction measurements. It will be interesting to extend these studies to more realistic molecular potentials and to rough surfaces. To date, realistic potentials have only been used between commensurate surfaces, and we now describe some of this work. The effect of small molecules injected between two sliding hydrogen-terminated (111) diamond surfaces on kinetic friction was investigated by Perry and Harrison (1996, 1997). The setup of the simulation was similar to the one described in Sec. IV B. Two sets of simulations were performed. In one set, bare surfaces were considered. In another set, either two methane (CH<sub>4</sub>) molecules, one ethane (C<sub>2</sub>H<sub>6</sub>) molecule, or one isobutane (CH<sub>3</sub>)<sub>3</sub>CH molecule was introduced into the interface between the sliding diamond surfaces. Experiments show that the friction between diamond surfaces goes down as similar molecules are formed in the contact due to wear (Hayward, 1991). Perry and Harrison found that these third bodies also reduced the calculated frictional force between commensurate diamond surfaces. The reduction of the frictional force with respect to the bare hydrogen-terminated case was most pronounced for the smallest molecule, methane. The molecular motions were analyzed in detail to determine how dissipation occurred. Methane produced less friction because it was small enough to roll in grooves between the terminal hydrogen atoms without collisions. The larger ethane and isobutane collided frequently. As for the simple bead-spring model described above, the friction increased roughly linearly with load. Indeed, Perry and Harrison’s data for all third bodies correspond to $`\alpha \approx 0.1`$, which is close to that for commensurate surfaces in Fig. 11(a). One may expect that the friction would decrease if incommensurate walls were used. However, the rolling motion of methane might be prevented in such geometries. Perry and Harrison (1996, 1997) compared their results to earlier simulations (Harrison et al., 1993) where hydrogen terminations on one of the two surfaces were replaced by chemisorbed methyl (-CH<sub>3</sub>), ethyl (-C<sub>2</sub>H<sub>5</sub>), or $`n`$-propyl (-C<sub>3</sub>H<sub>7</sub>) groups (Sec. IV B). The chemisorbed molecules were seen to give a considerably smaller reduction of the friction with respect to the physisorbed molecules. Perry and Harrison note that the chemisorbed molecules have fewer degrees of freedom, and so are less able to avoid collisions that dissipate energy.

### D Corrugated Surfaces

The presence of roughness can be expected to alter the behavior of lubricants, particularly when the mean film thickness is comparable to the surface roughness. Gao et al. (1995, 1996) and Landman et al. (1996) used molecular dynamics to investigate this thin film limit.
Hexadecane ($`n`$-C<sub>16</sub>H<sub>34</sub>) was confined between two gold substrates exposing only (111) surfaces (Fig. 12). The two outer layers of the substrates were completely rigid and were displaced laterally with a constant relative velocity of 10 to 20 m/s at constant separation. The asperities on both walls were modeled by flat-topped pyramidal ridges with initial heights of 4 to 6 atomic layers. The united atom model (Sec. II A) was used for the interactions within the film, and the embedded atom method for Au-Au interactions. All other interactions were modeled with suitable 6-12 Lennard-Jones potentials. The alkane molecules and the gold atoms in the asperities were treated dynamically using the Verlet algorithm. The temperature was kept constant at $`T=350`$ K by rescaling the velocities every fiftieth time step. Simulations were done in three different regimes, which can be categorized according to the separation $`\mathrm{\Delta }h_{\mathrm{aa}}`$ that the outer surfaces of the asperities would have if they were placed on top of one another without deforming elastically or plastically. The cases were (1) large separation of the asperities $`\mathrm{\Delta }h_{\mathrm{aa}}=17.5`$ Å, (2) a near-overlap regime with $`\mathrm{\Delta }h_{\mathrm{aa}}=4.6`$ Å, and (3) an asperity-overlap regime with $`\mathrm{\Delta }h_{\mathrm{aa}}=-6.7`$ Å. Some selected atomic and molecular configurations obtained in a slice through the near-overlap system are shown in Fig. 12. In all cases, the initial separation of the walls was chosen such that the normal pressure was zero for large lateral asperity separations. One common feature of all simulations was the formation of lubricant layers between the asperities as they approached each other (Fig. 12). This is just like the layering observed in equilibrium between flat walls (e.g. Fig. 9), but the layers form dynamically. The number of layers decreased with decreasing lateral separation between the asperities, and the lateral force showed strong oscillations as successive layers were pushed out. This behavior is shown in Fig. 13 for the near-overlap case. For large separation, case (1), four lubricant layers remained at the point of closest approach between the asperities, and no plastic deformation occurred. In case (2), severe plastic deformation occurred after local shear and normal stresses exceeded a limiting value of close to 4 GPa. This deformation led to direct intermetallic junctions, which were absent in simulations under identical conditions but with no lubricant molecules in the interface. The junctions eventually broke upon continued sliding, resulting in transfer of some metal atoms between the asperities. As in the overlap case (3), great densification and pressurization of the lubricant in the asperity region occurred, accompanied by a significant increase in the effective viscosity in that region. For the near-overlap system, local rupture of the film in the region between the departing asperities was seen. A nanoscale cavitated zone of length scale $`\sim 30`$ Å was observed that persisted for about 100 ps. The Deborah number $`D`$ was studied as well. $`D`$ can be defined as the ratio of the relaxation time of the fluid to the time of passage of the fluid through a characteristic distance $`l`$. $`D\approx 0.25`$ was observed for the near-overlap system, which corresponds to a viscoelastic response of the lubricant.
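The arithmetic behind such a Deborah number is a one-line estimate. The values below are hypothetical (only the sliding velocity is taken from the setup described above) and are chosen merely to show how a ratio of order 0.25 arises for nanometer-scale gaps.

```python
# Deborah number D = tau_relax / (l / v): relaxation time of the lubricant
# divided by its transit time past a characteristic length l.
tau_relax = 2.5e-11      # assumed relaxation time of the confined fluid (s)
l = 1.0e-9               # assumed characteristic length of the asperity region (m)
v = 10.0                 # sliding velocity (m/s), lower end of the range above

transit_time = l / v
D = tau_relax / transit_time
print(f"transit time = {transit_time:.1e} s,  D = {D:.2f}")   # -> D = 0.25
```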
The increased confinement in the overlap system resulted in $`D=2.5`$, which can be associated with highly viscoelastic behavior, perhaps even elasto-plastic or waxy response. Gao et al. (1996) observed extreme pressures of 150 GPa in the near-overlap case when the asperities were treated as rigid units. Allowing only the asperities to deform reduces the peak pressure by almost two orders of magnitude. However, even these residual pressures are still quite large, and one may expect that a full treatment of the elastic response of the substrates might lead to further dramatic decreases in pressure and damage. Tutein et al. (2000) have recently compared friction between monolayers of anchored hydrocarbon molecules and rigid or flexible nanotubes. Studies of elasticity effects in larger asperities confining films that are several molecules thick are currently in progress (Persson and Ballone).

## VI Stick-Slip Dynamics

The dynamics of sliding systems can be very complex and depend on many factors, including the types of metastable states in the system, the times needed to transform between states, and the mechanical properties of the device that imposes the stress. At high rates or stresses, systems usually slide smoothly. At low rates the motion often becomes intermittent, with the system alternately sticking and slipping forward (Rabinowicz, 1965; Bowden and Tabor, 1986). Everyday examples of such stick-slip motion include the squeak of hinges and the music of violins. The alternation between stuck and sliding states of the system reflects changes in the way energy is stored. While the system is stuck, elastic energy is pumped into the system by the driving device. When the system slips, this elastic energy is released into kinetic energy, and eventually dissipated as heat. The system then sticks once more, begins to store elastic energy, and the process continues. Both elastic and kinetic energy can be stored in all the mechanical elements that drive the system. The whole coupled assembly must be included in any analysis of the dynamics. The simplest type of intermittent motion is the atomic-scale stick-slip that occurs in the multistable regime ($`\lambda >1`$) of the Tomlinson model (Fig. 2(b)). Energy is stored in the springs while atoms are trapped in a metastable state, and converted to kinetic energy as they pop to the next metastable state. This phenomenon is quite general and has been observed in several of the simulations of wearless friction described in Sec. IV as well as in the motion of atomic force microscope tips (e.g. Carpick and Salmeron, 1997). In these cases, motion involves a simple ratcheting over the surface potential through a regular series of hops between neighboring metastable states. The slip distance is determined entirely by the periodicity of the surface potential. Confined films and adsorbed layers have a much richer potential energy landscape due to their many internal degrees of freedom. One consequence is that stick-slip motion between neighboring metastable states can involve microslips by distances much less than a lattice constant (Thompson and Robbins, 1990b; Baljon and Robbins, 1997; Robbins and Baljon, 2000). An example is seen at $`t/t_{\mathrm{LJ}}=620`$ in Fig. 14(b). Such microslips involve atomic-scale rearrangements within a small fraction of the system. Closely related microslips have been studied in granular media (Nasuno et al., 1997; Veje et al., 1999) and foams (Gopal and Durian, 1995).
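The atomic-scale stick-slip of the Tomlinson model mentioned above is easy to reproduce in a few lines. The sketch below integrates an overdamped Tomlinson model in reduced units; all parameter values are arbitrary assumptions chosen only to put the system in the multistable regime, where the spring force exhibits the characteristic sawtooth of repeated sticking and slipping.

```python
import numpy as np

# Overdamped Tomlinson model: one atom pulled through a sinusoidal substrate
# potential (amplitude U0, period a) by a spring of stiffness k whose other end
# moves at constant velocity v.  Reduced units; damping constant gamma = 1.
a, U0, k, v, gamma = 1.0, 1.0, 3.0, 0.05, 1.0
lam = 4.0 * np.pi ** 2 * U0 / (k * a ** 2)
print(f"lambda = {lam:.1f} (> 1, so the dynamics are multistable)")

dt, steps = 1.0e-3, 200_000
x, t = 0.0, 0.0
spring_force = np.empty(steps)
for i in range(steps):
    f_substrate = -(2.0 * np.pi * U0 / a) * np.sin(2.0 * np.pi * x / a)  # -dU/dx
    f_spring = k * (v * t - x)
    x += dt * (f_substrate + f_spring) / gamma      # overdamped Euler step
    t += dt
    spring_force[i] = f_spring

# Coarse-grained force trace: slow linear loading interrupted by sudden drops
# each time the atom pops forward to the next metastable minimum.
print(np.round(spring_force[::10_000], 2))
```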
Many examples of stick-slip involve a rather different type of motion that can lead to intermittency and chaos (Ruina, 1983; Heslot et al., 1994). Instead of jumping between neighboring metastable states, the system slips for very long distances before sticking. For example, Gee et al. (1990) and Yoshizawa et al. (1993) observed slip distances of many microns in their studies of confined films. This distance is much larger than any characteristic periodicity in the potential, and varied with velocity, load, and the mass and stiffness of the SFA. The fact that the SFA does not stick after moving by a lattice constant indicates that sliding has changed the state of the system in some manner, so that it can continue sliding even at forces less than the yield stress. Phenomenological theories of stick-slip often introduce an unspecified “state” variable to model the evolving properties of the system (Dieterich, 1979; Ruina, 1983; Batista and Carlson, 1998). One simple property that depends on past history is the amount of stored kinetic energy. This can provide enough inertia to carry a system over potential energy barriers even when the stress is below the yield stress. Inertia is readily included in the Tomlinson model and has been thoroughly studied in the mathematically equivalent case of an underdamped Josephson junction (McCumber, 1968). One finds a hysteretic response function where static and moving steady-states coexist over a range of forces between $`F_{\mathrm{min}}`$ and the static friction $`F_s`$. There is a minimum stable steady-state velocity $`v_{\mathrm{min}}`$ corresponding to $`F_{\mathrm{min}}`$. At lower velocities, the only steady state is linearly unstable because $`\partial v/\partial F<0`$: pulling harder slows the system. It is well-established that this type of instability can lead to stick-slip motion (Bowden and Tabor, 1986; Rabinowicz, 1965). If the top wall of the Tomlinson model is pulled at an average velocity less than $`v_{\mathrm{min}}`$ by a sufficiently compliant system, it will exhibit large-scale stick-slip motion. Confined films have structural degrees of freedom that can change during sliding, and this provides an alternative mechanism for stick-slip motion (Thompson and Robbins, 1990b). Some of these structural changes are illustrated in Fig. 14 which shows stick-slip motion of a two layer film of simple spherical molecules. The bounding walls were held together by a constant normal load. A lateral force was applied to the top wall through a spring $`k`$ attached to a stage that moved with fixed velocity $`v`$ in the $`x`$ direction. The equilibrium configuration of the film at $`v=0`$ is a commensurate crystal that resists shear. Thus at small times, the top wall remains pinned at $`x_W=0`$. The force grows linearly with time, $`F=kvt`$, as the stage advances ahead of the wall. When $`F`$ exceeds $`F_s`$, the wall slips forward. The force drops rapidly because the slip velocity $`\dot{x}_W`$ is much greater than $`v`$. When the force drops sufficiently, the film recrystallizes, the wall stops, and the force begins to rise once more. One structural change that occurs during each slip event is dilation by about 10% (Fig. 14(c)). Dhinojwala and Granick have recently confirmed that dilation occurs during slip in SFA experiments. The increased volume makes it easier for atoms to slide past each other and is part of the reason that the sliding friction is lower than $`F_s`$.
The system may be able to keep sliding in this dilated state as long as it takes more time for the volume to contract than for the wall to advance by a lattice constant. Dilation of this type plays a crucial role in the yield, flow and stick-slip dynamics of granular media (Thompson and Grest, 1991; Jaeger et al., 1996; Nasuno et al., 1997). The degree of crystallinity also changes during sliding. As in Secs. III E and V A, deviations from an ideal crystalline structure can be quantified by the Debye-Waller factor $`S(Q)/N`$ (Fig. 14d), where $`Q`$ is one of the shortest reciprocal lattice vectors and $`N`$ is the total number of atoms in the film. When the system is stuck, $`S(Q)/N`$ has a large value that is characteristic of a 3D crystal. During each slip event, $`S(Q)/N`$ drops dramatically. The minimum values are characteristic of simple fluids that would show a no-slip boundary condition (Sec. V A). The atoms also exhibit rapid diffusion that is characteristic of a fluid. The periodic melting and freezing transitions that occur during stick-slip are induced by shear and not by the negligible changes in temperature. Shear-melting transitions at constant temperature have been observed in both theoretical and experimental studies of bulk colloidal systems (Ackerson et al., 1986; Stevens and Robbins, 1993). While the above simulations of confined films used a fixed number of particles, Lupkowski and van Swol (1991) found equivalent results at fixed chemical potential. Very similar behavior has been observed in simulations of sand (Thompson and Grest, 1991), chain molecules (Robbins and Baljon, 2000), and incommensurate or amorphous walls (Thompson and Robbins, 1990b). These systems transform between glassy and fluid states during stick-slip motion. As in equilibrium, the structural differences between glass and fluid states are small. However, there are strong changes in the self-diffusion and other dynamic properties when the film goes from the static glassy to sliding fluid state. In the cases just described, the entire film transforms to a new state, and shear occurs throughout the film. Another type of behavior is also observed. In some systems shear is confined to a single plane, either a wall/film interface or a plane within the film (Baljon and Robbins, 1997; Robbins and Baljon, 2000). There is always some dilation at the shear plane to facilitate sliding. In some cases there is also in-plane ordering of the film to enable it to slide more easily over the wall. This ordering remains after sliding stops, and provides a mechanism for the long-term memory seen in some experiments (Gee et al., 1990; Yoshizawa et al., 1993; Demirel and Granick, 1996b). Buldum and Ciraci (1997) found stick-slip motion due to periodic structural transformations in the bottom layers of a pyramidal Ni(111) tip sliding on an incommensurate Cu(110) surface. The dynamics of the transitions between stuck and sliding states are crucial in determining the range of velocities where stick-slip motion is observed, the shape of the stick-slip events, and whether stick-slip disappears in a continuous or discontinuous manner. Current models are limited to energy balance arguments (Robbins and Thompson, 1991; Thompson and Robbins, 1993) or phenomenological models of the nucleation and growth of “frozen” regions (Yoshizawa et al., 1993; Heslot et al., 1994; Batista and Carlson, 1998; Persson, 1998). Microscopic models and detailed experimental data on the sticking and unsticking process are still lacking. Rozman et al.
(1996, 1997, 1998) have taken an interesting approach to unraveling this problem. They have performed detailed studies of stick-slip in a simple model of a single incommensurate chain between two walls. This model reproduces much of the complex dynamics seen in experiments and helps to elucidate what can be learned about the nature of structural changes within a contact using only the measured macroscopic dynamics.

## VII Strongly Irreversible Tribological Processes

Sliding at high pressures, high rates, or for long times can produce more dramatic changes in the structure and even chemistry of the sliding interface than those discussed so far. In this concluding chapter, we describe some of the more strongly irreversible tribological processes that have been studied with simulations. These include grain boundary formation and mixing, machining and tribochemical reactions.

### A Plastic Deformation

For ductile materials, plastic deformation is likely to occur throughout a region of some characteristic width about the nominal sliding interface (Rigney and Hammerberg, 1998). Sliding induced mixing of material from the two surfaces and sliding induced grain boundaries are two of the experimentally observed processes that lack microscopic theoretical explanations. In an attempt to get insight into the microscopic dynamics of these phenomena, Hammerberg et al. (1998) performed large-scale simulations of a two-dimensional model for copper. The simulation cell contained $`256\times 256`$ Cu atoms that were subject to a constant normal pressure $`P_{\perp }`$. Two reservoir regions at the upper and lower boundaries of the cell were constrained to move at opposite lateral velocities $`\pm u_\mathrm{p}`$. The initial interface was midway between the two reservoirs. The friction was measured at $`P_{\perp }=30`$ GPa as a function of the relative sliding velocity $`v`$. Different behavior was seen at velocities above and below about 10% of the speed of transverse sound. At low velocities, the interface welded together and the system formed a single work-hardened object. Sliding took place at the artificial boundary with one of the reservoirs. At higher velocities the friction was smaller, and decreased steadily with increasing velocity. In this regime, intense plastic deformation occurred at the interface. Hammerberg et al. (1998) found that the early time-dynamics of the interfacial structure could be reproduced with a Frenkel-Kontorova model. As time increased, the interface was unstable to the formation of a fine-grained polycrystalline microstructure, which coarsened with distance away from the interface as a function of time. Associated with this microstructure was the mixing of material across the interface.

### B Wear

Large scale, two and three-dimensional molecular dynamics simulations of the indentation and scraping of metal surfaces were carried out by Belak and Stowers (1992). Their simulations show that tribological properties are strongly affected by wear or the generation of debris. A blunted carbon tip was first indented into a copper surface and then pulled over the surface. The tip was treated as a rigid unit. Interactions within the metal were modeled with an embedded atom potential and Lennard-Jones potentials were used between Si and Cu atoms. In the two-dimensional simulation, indentation was performed at a constant velocity of about 1 m/s. The contact followed Hertzian behavior up to a load $`L\approx 2.7`$ nN and an indentation of about 3.5 Cu layers.
The surface then yielded on one side of the tip, through the creation of a single edge dislocation on one of the easy slip planes. The load needed to continue indenting decreased slightly until an indentation of about five layers. Then the load began to rise again as stress built up on the side that had not yet yielded. After an indentation of about six layers, this side yielded, and further indentation could be achieved without a considerable increase in load. The hardness, defined as the ratio of load to contact length (area), slightly decreased with increasing load once plastic deformation had occurred. After indentation was completed, the carbon tip was slid parallel to the original Cu surface. The work to scrape off material was determined as a function of the tip radius. A power law dependence was found at small tip radii that did not correspond to experimental findings for micro-scraping. However, at large tip radii, the power law approached the experimental value. Belak and Stowers found that this change in power law was due to a change in the mechanism of plastic deformation from intragranular to intergranular plastic deformation. In the three-dimensional (3D) simulations, the substrate contained as many as 36 layers or 72,576 atoms. Hence long-range elastic deformations were included. The surface yielded plastically after an indentation of only 1.5 layers, through the creation of a small dislocation loop. The accompanying release of load was much bigger than in 2D. Further indentation to about 6.5 layers produced several of these loading-unloading events. When the tip was pulled out of the substrate, both elastic and plastic recovery were observed. Surprisingly, the plastic deformation in the 3D studies was confined to a region within a few lattice spacings of the tip, while dislocations spread several hundred lattice spacings in the 2D simulations. Belak and Stowers concluded that dislocations were not a very efficient mechanism for accommodating strain at the nanometer length scale in 3D. When the tip was slid laterally at $`v=100`$ m/s during indentation, the friction or “cutting” force fluctuated around zero as long as the substrate did not yield (Fig. 15). This nearly frictionless sliding can be attributed to the fact that the surfaces were incommensurate and the adhesive force was too small to induce locking. Once plastic deformation occurred, the cutting force increased dramatically. Fig. 15 shows that the lateral and normal forces are comparable, implying a friction coefficient of about one. This large value was expected for cutting by a conical asperity with small adhesive forces (Suh, 1986).

### C Tribochemistry

The extreme thermomechanical conditions in sliding contacts can induce chemical reactions. This interaction of chemistry and friction is known as tribochemistry (Rabinowicz, 1965). Tribochemistry plays important roles in many processes, the best known example being the generation of fire through sliding friction. Other examples include the formation of wear debris and adhesive junctions which can have a major impact on friction. Harrison and Brenner (1994) were the first to observe tribochemical reactions involving strong covalent bonds in molecular dynamics simulations. A key ingredient of their work is the use of reactive potentials that allow breaking and formation of chemical bonds. Two (111) diamond surfaces terminated with hydrogen atoms were brought into contact as in Sec. IV B.
In some simulations, two hydrogen atoms from the upper surface were removed, and replaced with ethyl (-CH<sub>2</sub>CH<sub>3</sub>) groups. The simulations were performed for 30 ps at an average normal pressure of about 33 GPa. The sliding velocity was 100 m/s along either the $`(1\overline{1}0)`$ or $`(11\overline{2})`$ crystallographic direction. Sliding did not produce any chemical changes in the hydrogen-terminated surfaces. However, wear and chemical reactions were observed when ethyl groups were present. For sliding along the $`(11\overline{2})`$ direction, wear was initiated by the shearing of hydrogen atoms from the tails of the ethyl groups. The resulting free hydrogen atoms reacted at the interface by combining with an existing radical site or abstracting a hydrogen from either a surface or a radical. If no combination with a free hydrogen atom occurred, the reactive radicals left on the tails of the chemisorbed molecules abstracted a hydrogen from the opposing surface, or they formed a chemical bond with existing radicals on the opposing surface (Fig. 16). In the latter case, the two surface bonds sometimes broke simultaneously, leaving molecular wear debris trapped at the interface. It is interesting to note that the wear debris, in the form of an ethylene molecule CH<sub>2</sub>CH<sub>2</sub>, did not undergo another chemical reaction for the remainder of the simulation. Similarly, methane CH<sub>4</sub>, ethane C<sub>2</sub>H<sub>6</sub>, and isobutane (CH<sub>3</sub>)<sub>3</sub>CH were not seen to undergo chemical reactions when introduced into a similar interface composed of hydrogen terminated (111) diamond surfaces (Sec. V C) at normal loads up to about 0.8 nN/atom (Perry and Harrison, 1996, 1997; Harrison and Perry, 1998). At higher loads, only ethane reacted. In some cases a hydrogen broke off the ethane. The resulting free H atom then reacted with an H atom from one surface to make an H<sub>2</sub> molecule. The remaining C<sub>2</sub>H<sub>5</sub> could then form a carbon-carbon bond with that surface when dragged close enough to the nascent radical site. The C-C bond of the ethane was also reported to break occasionally. However, due to the proximity of the nascent methyl radicals and the absence of additional reactive species, the bond always reformed. Sliding along the $`(1\overline{1}0)`$ direction produced other types of reaction between surfaces with ethyl terminations. In some cases, tails of the ethyl groups became caught between hydrogen atoms on the lower surface. Continued sliding sheared the entire tail from the rest of the ethyl group, leaving a chemisorbed CH<sub>2</sub> group and a free CH<sub>3</sub> species. The latter group could form a bond with an existing radical site, it could shear a hydrogen from a chemisorbed ethyl group, or it could recombine with the chemisorbed CH<sub>2</sub>.

## VIII Acknowledgement

Support from the National Science Foundation through Grant No. DMR-9634131 and from the German-Israeli Project Cooperation “Novel Tribological Strategies from the Nano to Meso Scales” is gratefully acknowledged. We thank Gang He, Marek Cieplak, Miguel Kiwi, Jean-Louis Barrat, Patricia McGuiggan and especially Judith Harrison for providing comments on the text. We also thank Jean-Louis Barrat for help in improving the density oscillation data in Fig. 9, and Peter A. Thompson for many useful conversations and for his role in creating Fig. 14.

Abraham, F. F.
# X-ray total mass estimate for the nearby relaxed cluster A3571 ## 1. Introduction Measuring cluster masses has significant implications for cosmology. Assuming that the cluster mass content represents that of the Universe, measured total and baryonic mass distributions in a cluster, combined with the Big Bang nucleosynthesis calculations and observed light element abundances, can be used to constrain the cosmological density parameter (White et al. 1993). A measured cluster mass function could be used to constrain cosmological parameters via the Press-Schechter formalism. However, the mass cannot be observed directly since most of the cluster mass is dark matter, and one must rely on observed quantities like gas temperature or galaxy velocities and assume the state of the cluster to derive the mass. In this paper, we estimate the total mass for the A3571 cluster under the assumptions of hydrostatic equilibrium and thermal pressure support. Until recently, most hydrostatic X-ray mass estimates have been made assuming that the gas is isothermal at the average broad beam temperature. However, the total mass within large radii is only as accurate as the local temperature at that radius. ASCA observations provide spatially resolved temperature data for hot clusters and yield their 2D temperature structure. A large number of ASCA clusters show that the temperature declines with increasing radius (Markevitch et al. (1998)), in qualitative accordance with hydrodynamic cluster simulations (e.g. Evrard et al. (1996), Bryan & Norman (1997), Burns et al. (1999)). This implies that the total mass within small radii is greater, while at large cluster radii it is smaller than that derived assuming isothermality, as has also been observed in A2256 (Markevitch & Vikhlinin 1997b), A2029 (Sarazin, Wise & Markevitch (1998)), A496 and A2199 (Markevitch et al. (1999)) and A401 (Nevalainen et al. 1999a). Consequently, the gas mass fraction within a large radius, where the cluster may be a fair sample of the universe, is larger than that derived assuming isothermality, which further aggravates the “baryon catastrophe” (e.g. White et al. (1993), White & Fabian (1995), Ettori & Fabian (1999), Mohr et al. (1999)). A3571 (z = 0.040) is suitable for measuring the dark and total mass distributions, since it is bright and hot ($`\sim 7`$ keV), allowing accurate temperature determinations with ASCA. Indeed, the A3571 temperature profile used in this work (see Markevitch et al. (1998)) is among the most accurate for all hot clusters. The three ASCA pointings cover the cluster to $`r_{500}`$ (the radius where the mean interior density equals 500 times the critical density, approximately the radius inside which hydrostatic equilibrium holds, according to simulations of Evrard et al. (1996)). A3571 has a cooling flow (Peres et al. (1998)), but it is weak enough not to introduce large uncertainties in the temperature determination. We use $`H_0=50h_{50}\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, $`\mathrm{\Omega }=1`$ and report 90% confidence intervals throughout the paper. ## 2. ROSAT ANALYSIS We processed the ROSAT data, consisting of a PSPC pointing rp800287, using Snowden’s Soft X-Ray Background programs (Snowden et al. (1994)), which reduced the total exposure by 25% to 4.5 ks. The spatial analysis was restricted to the energy band of 0.73 - 2.04 keV (Snowden’s bands R6-R7) to improve sensitivity over the X-ray background.
The surface brightness contour map (smoothed by a Gaussian with $`\sigma `$ = 1) is shown in Figure 1a. The data show no obvious substructures and no deviations from azimuthal symmetry, except for a slight ellipticity. Fabricant et al. (1984) showed that for A2256, whose X-ray brightness distribution is more elliptical than that of A3571, the true elliptical total mass is very close to the hydrostatic total mass derived assuming spherical symmetry. Furthermore, Vikhlinin et al. (1999) divided the PSPC data of A3571 into several sectors, fitted the brightness with an azimuthally symmetric model and found that the azimuthal variation of the gas density gradient (due to ellipticity) was 6% of the global value, which would indicate a similar error in the total mass, negligible compared to the total mass errors obtained with the spherical model. Therefore, in the following analysis we assume the cluster to be azimuthally symmetric. We excluded point sources and generated a radial surface brightness profile in concentric annuli of width ranging from $`15^{\prime \prime }`$ at the center to $`7^{\prime }`$ at a radial distance of $`40^{\prime }`$. In the radial range of ASCA pointings ($`r<35^{\prime }`$), we included only the ROSAT data from the sky areas covered by ASCA. A cooling flow with a mass flow rate of 40 - 130 $`M_{\odot }`$/yr and a cooling radius of 1.2 - 2.2 arcmin has been detected in the center of A3571 by Peres et al. (1998). Our data show a significant central brightness excess over the $`\beta `$-model (Figure 2), in agreement with the reported cooling flow. Therefore we excluded the data within the central $`r<3^{\prime }`$ from the fit. We fitted the observed profile with a $`\beta `$-model $$I(b)=I_0\left(1+\left(\frac{b}{a_x}\right)^2\right)^{(-3\beta +\frac{1}{2})}+CXRB$$ (1) (Cavaliere & Fusco-Femiano (1976)), where $`b`$ is the projected radius. We fixed the cosmic X-ray background (CXRB) to $`1.5\times 10^{-4}`$ counts s<sup>-1</sup> arcmin<sup>-2</sup>, found from the outer part of the image, and included 5% of the background value as a systematic error, due to variation on the sky. We used XSPEC to convolve the surface brightness model through a spatial response matrix (constructed from the ROSAT PSF at 1 keV, for an azimuthally symmetric source centered on-axis) and to compare the convolved profile with the data. We find an acceptable fit in the radial range $`3^{\prime }`$-$`43^{\prime }`$ (see Figure 2 and Table 1), with best fit parameters $`a_x=3.85\pm 0.35`$ arcmin (= $`310\pm 30h_{50}^{-1}`$ kpc) and $`\beta =0.68\pm 0.03`$, with $`\chi ^2`$ = 74.1 for 87 degrees of freedom. Our values of $`a_x`$ and $`\beta `$ are consistent with another study of the ROSAT PSPC data of A3571 (Vikhlinin et al. 1999), who also excluded the cooling flow area from the fit. In yet another study of the ROSAT PSPC data of A3571 (Mohr et al. 1999), smaller values, inconsistent with ours, were found for $`a_x`$ and $`\beta `$. This is a consequence of including the data of the cooling flow in the profile fit in that work. If we assume that the intracluster gas is isothermal and spherically symmetric, the best-fit parameters $`a_x`$ and $`\beta `$ determine the shape of the gas density profile as: $$\rho _{gas}(r)=\rho _{gas}(0)\left(1+\left(\frac{r}{a_x}\right)^2\right)^{-\frac{3}{2}\beta }$$ (2) The observed temperature variation in A3571 from 7 keV to 4 keV with radius will introduce at most a 2% effect on the gas mass (e.g. Mohr et al. 1999), which is negligible compared to other components in our error budget.
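For concreteness, the fitted shapes are trivial to evaluate numerically. The short Python sketch below (not part of the original analysis; the normalizations `I0` and `rho0` are arbitrary placeholders) implements the β-model surface brightness of Eq. (1) and the gas density shape of Eq. (2) with the best-fit parameters quoted above:

```python
import numpy as np

A_X = 3.85       # best-fit core radius in arcmin (~310 kpc for h_50 = 1)
BETA = 0.68
CXRB = 1.5e-4    # fixed cosmic X-ray background, counts/s/arcmin^2

def surface_brightness(b, I0=1.0):
    """Eq. (1): projected beta-model plus the constant background."""
    return I0 * (1.0 + (b / A_X) ** 2) ** (-3.0 * BETA + 0.5) + CXRB

def gas_density(r, rho0=1.0):
    """Eq. (2): the corresponding three-dimensional gas density shape."""
    return rho0 * (1.0 + (r / A_X) ** 2) ** (-1.5 * BETA)

b = np.array([3.0, 10.0, 20.0, 43.0])   # arcmin, spanning the fitted radial range
print(surface_brightness(b))
print(gas_density(b))
```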
We obtained the normalization of the gas density profile (as in Vikhlinin et al. (1999)), $`\rho _{gas}(0)=1.5\times 10^{14}`$ $`M_{\odot }`$ Mpc<sup>-3</sup>, or 1.0 $`\times 10^{-26}`$ g cm<sup>-3</sup>, by equating the emission measure calculated from the above equation with an observed value of 8.1 $`\times 10^{67}`$ cm<sup>-3</sup> inside a cylinder with $`r=0.12`$ $`h_{50}^{-1}`$ Mpc radius, centered at the cluster brightness peak ($`r=0.1`$ Mpc encompasses the cooling flow excluded from all our analyses). ## 3. TEMPERATURE DATA We used the temperature profile data presented in Markevitch et al. (1998), who combined three ASCA pointings to derive the emission weighted, cooling flow-corrected temperature kT = $`6.9\pm 0.2`$ keV. Outside the cooling radius, the temperature values were measured in radial bins $`2^{\prime }`$-$`6^{\prime }`$-$`13^{\prime }`$-$`22^{\prime }`$-$`35^{\prime }`$ (0.13-0.39-0.85-1.43-2.28 $`h_{50}^{-1}`$ Mpc). The central bin, which is affected by a significant cooling flow component, is not used in the analysis below. The temperature errors were determined by generating Monte Carlo data sets which properly account for the statistical and systematic uncertainties (including those of the PSF, effective area and background). As in most nearby clusters, the ASCA data reveal a temperature decline with radius. The ROSAT PSPC data on A3571 in the 0.2–2 keV band were also analyzed by Irwin et al. (1999), who derive a temperature profile consistent with a constant up to $`20^{\prime }`$. However, those authors did not include the PSPC calibration uncertainties that dominate the ROSAT temperature errors for hot clusters such as A3571 (see e.g. Markevitch & Vikhlinin 1997a); inclusion of these uncertainties should make their results consistent with the ASCA profile. The ASCA temperature profile for A3571 is similar to profiles of a large sample of nearby ASCA clusters (Markevitch et al. (1998)), when scaled to physically meaningful units of the radii of fixed overdensity. Therefore it appears unlikely that the observed decline is due to an unknown instrumental effect. A more detailed discussion about the validity of the ASCA spatially resolved temperature data can be found in Nevalainen et al. (1999a). Hydrodynamic simulations predict a qualitatively similar radial temperature behavior in relaxed clusters (e.g., Evrard et al. (1996); Eke, Navarro, & Frenk 1997; Bryan & Norman (1997)), although there are differences in detail between the simulations and observations as well as between the different simulation techniques (e.g., Frenk et al. 1999). ## 4. VALIDITY OF THE HYDROSTATIC EQUILIBRIUM Quintana & de Souza (1993) report preliminary results of an optical study of the galaxies in A3571. They find a suggestion that the galaxy distribution in A3571 is irregular and forms several velocity subgroups, but they did not perform quantitative statistical analyses of the galaxy distribution, due to the small number of observed galaxies (see Figure 1 for the distribution of A3571 member galaxies from the NASA Extragalactic Database). The central giant galaxy MCG05-33-002 has an extensive optical halo with dimensions of 0.2 $`\times `$ 0.6 $`h_{50}^{-1}`$ Mpc, elongated along the major axis of the core region of this galaxy (Kemp & Meaburn (1991)). The galaxy distribution of A3571 is also aligned in the same direction (Kemp & Meaburn (1991)). As discussed by Quintana & de Souza (1993), the optical data suggest that the cD galaxy formed during the original collapse of the central part of the cluster and that the cluster may not yet be virialized.
However, as Quintana & de Souza (1993) state, their galaxy distribution results are only tentative. Furthermore, galaxies are not the best measure of the relaxation, since clusters form within intersecting filaments on larger scales and superpositions can give the appearance of asymmetries, substructure, and superposed groups. In X-rays, the ROSAT PSPC data of A3571 (Figure 1) show that the gas is azimuthally symmetric (except for a slight ellipticity, see Section 2) and that there is no substructure and no correlation between the galaxy and gas distributions at large radii. Furthermore, the ASCA gas temperature map of A3571 (Markevitch et al. (1998)) shows no asymmetric variation that would indicate dynamic activity. Neumann & Arnaud (1999) found evidence in their ROSAT cluster sample that cooling flows are a recurrent phenomenon that may be turned off by mergers, in accordance with a hierarchical clustering scenario (Fabian et al. 1994). Since A3571 has a considerable cooling flow (Peres et al. 1998), any merger must have been either not very strong or sufficiently far in the past for the gas to reestablish equilibrium and a cooling flow. A3571 is a member of the Shapley supercluster (Raychaudhury et al. 1991) and is therefore likely to have more frequent mergers and may not be typical of more isolated clusters. However, all the X-ray evidence consistently argues against any significant ongoing merger in A3571. Since the optical evidence does not significantly contradict the X-ray evidence against a merger, we assume that hydrostatic equilibrium is valid in A3571. ## 5. MASS FITTING ### 5.1. Method For the details of the mass calculation, we refer to our similar analysis of the cluster A401 (Nevalainen et al. 1999a). Briefly, we model the dark matter density with a constant core model $$\rho _{dark}\propto \left(1+\frac{r^2}{a_d^2}\right)^{-\alpha /2},$$ (3) and with the central cusp profile: $$\rho _{dark}\propto \left(\frac{r}{a_d}\right)^{-\eta }\left(1+\frac{r}{a_d}\right)^{\eta -\alpha }.$$ (4) We fix $`\eta `$ = 1 in the cusp models, as suggested by numerical simulations (Navarro et al. (1997), hereafter NFW), but vary the other parameters. We solve the hydrostatic equilibrium equation $$M_{tot}(r)=-3.70\times 10^{13}M_{\odot }\frac{T(r)}{\mathrm{keV}}\frac{r}{\mathrm{Mpc}}\left(\frac{d\mathrm{ln}\rho _{gas}}{d\mathrm{ln}r}+\frac{d\mathrm{ln}T}{d\mathrm{ln}r}\right),$$ (5) (e.g. Sarazin 1988, using $`\mu =0.60`$) for the temperature, in terms of the dark matter and gas density profile parameters. We fix the gas density to that found from the ROSAT data above, calculate the 3-dimensional temperature profile model corresponding to given dark matter parameters, project it on the ASCA annuli, compare these values to the observed temperatures and iteratively determine the dark matter distribution parameters. To propagate the errors of the temperature profile data to our mass values, we repeat the procedure for a large number of Monte Carlo temperature profiles with added random errors. We reject unphysical models that give infinite temperatures at large radii, and those models that are convectively unstable (that is, correspond to a polytropic index $`>\frac{5}{3}`$ in the radial range of the temperature data, outside the cooling flow region $`r=3^{\prime }`$-$`35^{\prime }`$). From the distribution of the acceptable Monte Carlo models, we determine the 1 $`\sigma `$ confidence intervals of the mass values as a function of radius. We convert these values to 90% confidence values, assuming a Gaussian probability distribution.
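To make Eq. (5) concrete, the fragment below evaluates the hydrostatic mass at a single radius from the β-model density slope of Section 2 and an assumed local temperature and temperature slope. It is only an illustrative sketch of one evaluation inside the fitting procedure described above, not the Monte Carlo machinery itself, and the local values `t_local` and `t_slope` are placeholders chosen to be roughly in line with the ASCA profile:

```python
A_X_MPC = 0.31    # beta-model core radius in Mpc (3.85 arcmin for h_50 = 1)
BETA = 0.68

def dln_rho_gas_dln_r(r_mpc):
    """Logarithmic slope of the beta-model gas density, Eq. (2)."""
    return -3.0 * BETA * r_mpc ** 2 / (r_mpc ** 2 + A_X_MPC ** 2)

def hydrostatic_mass(r_mpc, t_kev, dln_t_dln_r):
    """Eq. (5): enclosed total mass in solar masses, for mu = 0.60."""
    return -3.70e13 * t_kev * r_mpc * (dln_rho_gas_dln_r(r_mpc) + dln_t_dln_r)

r, t_local, t_slope = 1.7, 4.5, -0.5      # Mpc, keV, assumed values near r_500
print(f"M_tot({r} Mpc) ~ {hydrostatic_mass(r, t_local, t_slope):.1e} M_sun")
# of order 7e14 M_sun, i.e. the magnitude derived in Sec. 5.2
```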
We cannot constrain all dark matter model parameters independently due to the limited accuracy of the temperature data. However, the models with steeper dark matter density slopes (higher $`\alpha `$) require larger dark matter core radii (higher $`a_d`$) to produce similar shapes of the temperature profile, and due to this correlation the corresponding mass values vary within a relatively narrow range. We also propagate the estimate of the uncertainty of the local gas density gradient to the total mass values, as in Nevalainen et al. (1999a). In Figure 3 we show representative density profiles of forms (3) and (4), and the corresponding model temperature profiles. Both functional forms give acceptable fits to the data and yield masses consistent within the 90% confidence errors. Our final 90% confidence intervals of the total mass, at each radius, include the 90% confidence intervals of both models, and the average of the two models is used as the best value. As can be seen in Figures 3c and 3d, in the radial range $`3^{\prime }`$-$`15^{\prime }`$, the Monte-Carlo densities are lower than the best fit values. This asymmetry is due to the fact that the polytropic index criterion effectively rejects the most massive models, as was also found for A401 (Nevalainen et al. 1999a). ### 5.2. Results The final mass profile is shown in Fig. 4. The overdensity, or the mean interior density in units of the critical density, calculated from our best fit models, is 240 at $`r=35^{\prime }`$, the largest radius covered by the ASCA data. Simulations (e.g. Evrard et al. (1996)) suggest that within $`r_{500}`$, where the overdensity is 500, hydrostatic equilibrium is valid. For A3571, $$r_{500}=25.9^{\prime }=1.7h_{50}^{-1}\mathrm{Mpc}$$ (6) and the mass within this radius $$M_{tot}(r_{500})=7.8_{-2.2}^{+1.4}\times 10^{14}h_{50}^{-1}M_{\odot }.$$ (7) The mass values within several interesting radii are given in Table 1. At large radii our mass errors are quite large because they cover the values allowed by two different models, and include the uncertainty of $`\beta `$. Therefore the isothermal mass is consistent with our results, but compared to our best values, the isothermal ones are greater by factors of 1.1 and 1.3 at radii of $`r_{500}`$ and $`35^{\prime }`$ (see Figure 4). This difference is a natural consequence of the real temperatures being lower than the average temperature at large radii, similarly to other clusters with measured temperature profiles (Markevitch & Vikhlinin 1997b, Nevalainen et al. 1999a, Markevitch et al. (1999)). However, at small radii ($`r=a_x`$), unlike in the above clusters, the isothermal mass in A3571 is about equal to the value obtained with the observed temperature profile, due to the nearly constant temperature up to $`13^{\prime }`$ in A3571. The deprojection method, with the isothermal assumption, gives a total mass value of $`6.22\times 10^{14}h_{50}^{-1}M_{\odot }`$ inside a radius of 0.91 $`h_{50}^{-1}`$ Mpc (Ettori et al. (1997)), whereas our value at that radius is significantly smaller, $`4.9_{-0.9}^{+0.6}\times 10^{14}h_{50}^{-1}M_{\odot }`$. This behaviour is similar to that found for A401 (Nevalainen et al. 1999a). The frequently used mass-temperature scaling law obtained in cosmological simulations (Evrard et al. (1996)) predicts that A3571, which has kT = $`6.9\pm 0.2`$ keV (Markevitch et al. 1998), will have $`r_{500}=2.1\pm 0.14h_{50}^{-1}\mathrm{Mpc}`$ and $`M_{tot}(r_{500})=1.3\pm 0.19\times 10^{15}h_{50}^{-1}M_{\odot }`$, whereas our measured values above are smaller by factors of 1.2 and 1.6.
Similar behaviour has been found in several other hot clusters with temperature profiles measured with ASCA, i.e. A2256 (Markevitch & Vikhlinin 1997b), A401 (Nevalainen et al. 1999a), A496 and A2199 (Markevitch et al. (1999)). Temperature profiles of the cooler groups NGC5044 and HCG62 and of the galaxy NGC507, measured with ROSAT, also give similar results (Nevalainen et al. 1999b). These comparisons suggest that the above simulations produce too low a temperature for a given mass. The virial theorem analysis of the galaxy velocity distribution in A3571 (Girardi et al. 1998) gives $`R_{vir}=4.18h_{50}^{-1}`$ Mpc and $`M_{vir}=1.63_{-0.72}^{+0.79}\times 10^{15}h_{50}^{-1}M_{\odot }`$, whereas our value extrapolated to this radius is $`1.2\pm 0.7\times 10^{15}h_{50}^{-1}M_{\odot }`$, consistent within the large errors. ## 6. DISCUSSION ### 6.1. NFW profile With $`\eta =1`$ and $`\alpha =3`$ the cusp model (Eq. 4) corresponds to the NFW “universal” mass profile. For A3571 there is a significant detection of a cooling flow in the center and the polytropic $`\gamma \le \frac{5}{3}`$ constraint is applicable only beyond the cooling flow region. At those radii the temperature gradient of the cusp model is not strong, and the model is convectively stable. This model also gives an acceptable fit to the A3571 data, and is consistent with our mass profile within the errors (see Figure 4). Using the best fit NFW profile for A3571, we obtain the concentration parameter $`c\equiv r_{200}/a_d=5.3`$ and $`M_{200}=1.3\times 10^{15}h_{50}^{-1}M_{\odot }`$. These values are consistent with the NFW simulations in the SCDM and CDM$`\mathrm{\Lambda }`$ cosmological models. In the hydrostatic equilibrium scheme, since the observed gas density and temperature profiles are similar in different clusters when scaled by their estimated virial radii (Vikhlinin et al. (1999) and Markevitch et al. (1998), respectively), a similar total mass distribution is implied, and that is what we observe. The shapes of the total mass profiles at large radii in other hot clusters with measured ASCA temperature profiles, A2256 (Markevitch & Vikhlinin 1997b), A2029 (Sarazin, Wise & Markevitch (1998)), A496 and A2199 (Markevitch et al. (1999)) and A401 (Nevalainen et al. 1999a), are also consistent with the NFW model. The NFW profile also describes well the mass profiles of the cool groups NGC5044 and HCG62 and of the galaxy NGC507, derived using ROSAT PSPC temperature profiles (David et al. 1994, Ponman & Bertram 1993, Kim & Fabbiano 1995, respectively; see the discussion of NFW models for these objects in Nevalainen et al. 1999b). These consistencies suggest that the NFW profile may indeed be universal. ### 6.2. Gas mass fraction The dark matter density in the best fit model falls as r<sup>-4</sup> at large radii, whereas the gas density falls as r<sup>-2</sup>. This causes the gas mass fraction to increase rapidly at large radii, to a value of $$f_{gas}(r_{500})=0.19_{-0.03}^{+0.06}h_{50}^{-3/2}.$$ (8) (see Figure 4). This value is consistent with those for A2256 (Markevitch & Vikhlinin 1997b), A401 (Nevalainen et al. 1999a), A496 and A2199 (Markevitch et al. (1999)) obtained using ASCA temperature profiles, implying that the gas and dark matter distributions are similar in different clusters. Our value is also consistent with results for samples of clusters analyzed assuming isothermality: $`f_{gas}(r_{500})=0.168_{-0.056}^{+0.065}h_{50}^{-3/2}`$ (Ettori & Fabian (1999)) and $`f_{gas}(r_{500})=0.212\pm 0.006h_{50}^{-3/2}`$ (for clusters with $`kT>5`$ keV, Mohr et al. (1999)).
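Returning briefly to the NFW parametrization of Section 6.1: the quoted concentration and $`M_{200}`$ fully specify the profile, so the corresponding scale radius and enclosed mass at other radii can be reconstructed with a few lines of Python. The critical density value and the conversion from $`M_{200}`$ to $`r_{200}`$ used below are standard assumptions for $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, not numbers taken from the paper:

```python
import numpy as np

RHO_CRIT = 6.9e10   # critical density in M_sun / Mpc^3 for H_0 = 50 km/s/Mpc

def nfw_enclosed_mass(r, a_d, rho_s):
    """Enclosed mass of rho(r) = rho_s * (r/a_d)**-1 * (1 + r/a_d)**-2."""
    x = r / a_d
    return 4.0 * np.pi * rho_s * a_d ** 3 * (np.log(1.0 + x) - x / (1.0 + x))

c, m200 = 5.3, 1.3e15                       # values quoted in Sec. 6.1
r200 = (m200 / (200.0 * RHO_CRIT * 4.0 * np.pi / 3.0)) ** (1.0 / 3.0)
a_d = r200 / c
rho_s = m200 / (4.0 * np.pi * a_d ** 3 * (np.log(1.0 + c) - c / (1.0 + c)))
print(f"r_200 = {r200:.2f} Mpc, a_d = {a_d:.2f} Mpc")
print(f"M(<1.7 Mpc) = {nfw_enclosed_mass(1.7, a_d, rho_s):.2e} M_sun")
```

The enclosed mass returned at $`r_{500}`$ = 1.7 Mpc lies within the 90% interval of Eq. (7), illustrating the consistency with the NFW model discussed above.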
At larger radii the temperature profile analysis would probably yield still higher values compared to the isothermal ones. Following the method of White et al. (1993; see also Nevalainen et al. 1999a for its application to A401), using the $`f_{gas}`$ value above, we compute an estimate for the cosmological density parameter $`\mathrm{\Omega }_m=<\rho >/\rho _{crit}`$ as $$\mathrm{\Omega }_m=\mathrm{\Upsilon }\mathrm{\Omega }_b\left(f_{gas}+\frac{M_{gal}}{M_{tot}}\right)^{-1},$$ (9) at $`r_{500}`$ for A3571, where $`\mathrm{\Upsilon }`$ is the local baryon diminution, for which we use a value 0.90 as suggested by simulations (Frenk et al. 1999). Due to the lack of a reliable estimate of the total galaxy mass in A3571 within $`r_{500}`$ we compute only the upper limit for $`\mathrm{\Omega }_m`$, using $`M_{gal}=0`$. We take $`\mathrm{\Omega }_b=0.076\pm 0.007`$ (Burles et al. (1998)). Using a reasonable lower limit for the Hubble constant, $`H_0>`$ 60 km s<sup>-1</sup> Mpc<sup>-1</sup> (e.g. Nevalainen & Roos 1998), we obtain $`\mathrm{\Omega }_m<0.4`$, consistent with several independent current $`\mathrm{\Omega }_m`$ estimates (Freedman 1999, Roos & Harun-or-Rashid 1999). ## 7. CONCLUSIONS We have constrained the dark matter distribution in A3571, using accurate ASCA gas temperature data. The dark matter density in the best fit model scales as r<sup>-4</sup> at large radii, and the NFW profile also provides a good description of the dark matter density distribution. The total mass within $`r_{500}`$ (1.7 $`h_{50}^{-1}`$ Mpc) is $`7.8_{-2.2}^{+1.4}\times 10^{14}h_{50}^{-1}M_{\odot }`$ at 90% confidence, 1.1 times smaller than the isothermal value and 1.6 times smaller than that predicted by the scaling law based on simulations (Evrard et al. 1996), which is qualitatively similar to the results for other clusters with accurate temperature profiles. The gas density profile in A3571, proportional to $`r^{-2.1}`$ at large radii, is shallower than that of the dark matter. Hence the gas mass fraction increases with radius, with $`f_{gas}(r_{500})=0.19_{-0.03}^{+0.06}h_{50}^{-3/2}`$ (90% errors) at $`r_{500}`$, consistent with results for A2256, A401, A496, and A2199. Assuming that this is a lower limit of the primordial baryonic fraction, we obtain $`\mathrm{\Omega }_m<0.4`$ at 90% confidence. However, $`f_{gas}`$ is still strongly increasing at $`r_{500}`$, so that we obviously have not reached the universal value of the baryon fraction, which would make $`\mathrm{\Omega }_m`$ even smaller. JN thanks the Harvard-Smithsonian Center for Astrophysics for its hospitality. JN thanks the Smithsonian Institution for a Predoctoral Fellowship, and the Finnish Academy for a supplementary grant. We are indebted to Dr. A. Vikhlinin for several helpful discussions. We thank Prof. M. Roos for his help. WF and MM acknowledge support from NASA contract NAS8-39073. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. We thank the referee for a careful report on our paper.
# Intermittency effects in Burgers equation driven by thermal noise. ## Abstract For the Burgers equation driven by thermal noise, the leading asymptotics of pair and high-order correlators of the velocity field are found for finite times and large distances. It is shown that intermittency takes place: some correlators are much larger than their reducible parts. (Pis’ma v ZhETF, v.71, n.1, January 2000, in press, English translation in JETP Letters, v.71, January 2000). Intermittency implies strong non-Gaussianity of the statistics of fluctuating fields. This phenomenon is exhibited by hydrodynamical systems in a state of developed turbulence. In such far-from-equilibrium situations intermittency appears as the prevalence of the irreducible parts of some fourth-order simultaneous correlators over the reducible ones. As for thermal equilibrium, the irreducible parts of simultaneous correlators of local fluctuating fields turn out to be of the same order as their Gaussian parts even in the critical region. This property is inherent in the renormalization group method, which takes care of the interaction of fluctuations through renormalization of the local field and the effective Hamiltonian. In a recent paper V.V. Lebedev showed that the picture can change drastically when we pass to time-dependent correlations of thermally fluctuating quantities. He found that in the low-temperature phase of two-dimensional systems of the Berezinskii-Kosterlitz-Thouless kind, different-time correlation functions of the vortex charge density may greatly exceed their own Gaussian part. In the same paper the physical cause of such behaviour is pointed out: at low temperatures the non-simultaneous correlation functions of every order in the vicinity of a given point are defined by a single rare fluctuation. One can conclude from this interpretation that intermittency effects may emerge in the equilibrium dynamics of a wide range of systems. In the present paper I consider a one-dimensional velocity field evolving according to the Burgers equation with a thermal noise term: $$u_t+uu_x-\nu u_{xx}=\xi (t,x).$$ (1) Here $`\nu `$ is the dissipation constant and $`\xi (t,x)`$ is random noise with Gaussian statistics and the pair correlator: $$\langle \xi (t,x)\xi (t_1,x_1)\rangle =-\nu \beta ^{-1}\delta ^{\prime \prime }(x-x_1)\delta (t-t_1).$$ (2) We will consider $`\nu `$ as being vanishingly small. The parameter $`\beta `$ plays the role of inverse temperature, so the simultaneous stationary distribution function $`𝒫[u]`$ has the Gibbs form: $$𝒫[u]=𝒩\mathrm{exp}\left\{-\beta \mathcal{H}[u]\right\},\mathcal{H}[u]=\int 𝑑xu^2(x).$$ (3) Here $`𝒩`$ is the normalization constant. The expression $$\langle u(t,x)u(t,x^{\prime })\rangle =(2\beta )^{-1}\delta (x-x^{\prime }),$$ (4) following from (3), corresponds to a total absence of velocity correlation between spatially distant points at a given time moment. In the present paper some asymptotics of various non-simultaneous correlators of the field $`u(t,x)`$ are found. The results obtained here show the presence of intermittency effects in the equilibrium dynamics of the system (1). The dynamical scaling exponent $`z=3/2`$ for the problem (1)-(2) was discovered in the paper using dimensional analysis and utilizing the Galilean invariance. In the paper the absence of logarithmic divergencies for the spectrum $`\omega \sim k^{3/2}`$ was checked in every order of renormalized perturbation theory. Thus the function $`F_2(T,x)=<u(T,x)u(0,0)>`$ has $`\beta x^3/T^2`$ as a dimensionless argument. First we find the main (exponential) part of the asymptotics of the function $`F_2`$ at $`\beta x^3/T^2\gg 1`$ and $`\nu \rightarrow 0`$.
The latter limit means that the diffusion cannot set up correlation of velocity at the points $`0`$ and $`x`$ in a time $`T`$. The role of the noise in the dynamics on the time interval $`(0,T)`$ is also negligible. Thus we can consider $`u(0,y)`$ as a functional of $`u(T,x)`$ and vice versa. The velocity statistics at the time moment $`T`$ is Gaussian, which allows us to represent the non-simultaneous correlator $`F_2`$ in the form: $$F_2(T,x)=(2\beta )^{-1}\left\langle \frac{\delta u(0,0)}{\delta u(T,x)}\right\rangle .$$ (5) The variational derivative $`\mathrm{\Theta }(t,y)=\delta u(t,y)/\delta u(T,x)`$ for $`\nu \rightarrow 0`$ satisfies the continuity equation: $$\mathrm{\Theta }_t+u\mathrm{\Theta }_y+u_y\mathrm{\Theta }=0,$$ (6) and the condition $`\mathrm{\Theta }(T,y)=\delta (x-y)`$. This Cauchy problem is solved by the method of characteristics, and we arrive at the expression for $`F_2(T,x)`$: $$F_2(T,x)=(2\beta )^{-1}\left\langle \mathrm{\Theta }(0,0)\right\rangle =(2\beta )^{-1}\left\langle \delta \left(x-y(T)\right)\left(\frac{\partial y(T,\zeta )}{\partial \zeta }\right)_{\zeta =0}\right\rangle ,$$ (7) (see ). Here $`y(T,\zeta )`$ is the coordinate of the Lagrangian particle that started at the instant $`t=0`$ from the point $`\zeta `$: $$\dot{y}=u(t,y),y(0,\zeta )=\zeta ,$$ (8) and $`y(T)=y(T,0)`$. If $`u(t,y)`$ is discontinuous, then equation (8) requires a regularization. We use the physically evident condition that a particle on a shock wave front moves with the velocity of this front. Its formal justification starting from a small finite viscosity can be found in . The expression (7) tells us that the correlator $`F_2`$ in the limit being considered is determined by the most probable initial velocity fluctuation $`u_0(y)`$ which, evolving, carries the particle from the point $`0`$ to the point $`x`$ in the time $`T`$. The probabilities of initial configurations are defined by the functional (3). The desired optimal fluctuation $`u_0(y)`$ minimizes $`\mathcal{H}[u_0]`$ under the condition $`y(T)=x`$. Let us show that it is the linear profile: $$u_0(y)=u_0^{*}(y)=x/T-y/T,\;0<y<x;\qquad u_0^{*}(y)=0,\;y<0,\;y>x.$$ (9) First, it is evident that the function $`u_0(y)`$ must have a maximum at $`y=0`$. It is also easy to understand why $`u_0(y)`$ must equal 0 for $`y<0`$ and $`y>x`$. Indeed, a deviation of $`u_0(y)`$ from zero outside the interval $`(0,x)`$ does not affect the trajectory $`y(t)`$ but increases $`\mathcal{H}[u_0]`$. The left edge of the distribution $`u(t,x)`$ for $`\nu \rightarrow 0`$ will be a straight line with the slope $`\sigma =1/t`$. Such a time dependence can be checked by direct substitution into the Burgers equation; see also . At $`t=T`$ the coordinate of the most rapid particle will be equal to $`x`$. The coordinates of the other particles from the interval $`(0,x)`$ will be precisely the same. Thus, for the class of initial distributions $`u_0(y)`$ described above, the plot of the final function $`u(T,y)`$ has the form of a triangle: $$u(T,y)=y/T,\;0<y<x;\qquad u(T,y)=0,\;y<0,\;y>x.$$ (10) Now let us note that from the Burgers equation it follows that: $$d\mathcal{H}[u(t,y)]/dt=-2\nu \int 𝑑yu_y^2\le 0,$$ (11) which means that: $$\mathcal{H}[u_0(y)]\ge \mathcal{H}[u(T,y)].$$ (12) This inequality becomes a strict one, even for $`\nu \rightarrow 0`$, if shock waves were formed during the evolution. Consequently, the minimal admissible value of the functional $`\mathcal{H}`$ is equal to: $$\mathcal{H}[u(T,y)]=x^3/3T^2.$$ (13) The value of $`\mathcal{H}`$ on the function $`u_0^{*}(y)`$ coincides with (13). The exclusion of shocks in the time interval $`(0,T)`$, justified above, makes the expression (9) the only possibility.
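The geometry behind this argument is easy to verify numerically. The toy Python check below (arbitrary units; an illustration, not part of the original derivation) uses the fact that, as long as no shock has formed, each Lagrangian particle moves ballistically with its initial velocity: the profile (9) then delivers every particle of the interval $`(0,x)`$ to the point $`x`$ exactly at $`t=T`$, and its action equals $`x^3/3T^2`$, in agreement with Eq. (13):

```python
import numpy as np

x, T = 1.0, 1.0                        # target distance and time, arbitrary units
zeta = np.linspace(0.0, x, 11)         # initial positions of Lagrangian particles

u0 = (x - zeta) / T                    # the linear optimal profile of Eq. (9)
y_at_T = zeta + u0 * T                 # ballistic motion, valid until shocks form
print(np.allclose(y_at_T, x))          # True: all particles arrive at x at t = T

yy = np.linspace(0.0, x, 100001)
action = np.mean(((x - yy) / T) ** 2) * x   # crude quadrature of int_0^x u0^2 dy
print(action, x ** 3 / (3.0 * T ** 2))      # both ~ 0.3333, matching Eq. (13)
```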
The probability of the initial fluctuation (9), proportional to $`\mathrm{exp}\left(-\beta \mathcal{H}[u_0(y)]\right)`$, defines the exponential part of the asymptotics of the pair correlator $`F_2`$: $$F_2(T,x)\propto \mathrm{exp}\left(-\frac{\beta x^3}{3T^2}\right).$$ (14) It is worth noting that the multiplier $`\left(\partial y(T,\zeta )/\partial \zeta \right)_{\zeta =0}`$ of the $`\delta `$-function in the formula (7) vanishes on the configuration (9), but it becomes different from zero under a small variation of $`u_0(y)`$. In other words, this factor, along with the unknown pre-exponential factor in the expression (14) as a whole, is determined by integration over variations $`\delta u`$ of the initial velocity field with respect to $`u_0^{*}(y)`$. The essential values of $`\delta u`$ are small compared with $`u_0^{*}(y)`$; the parameter of this smallness is $`(\beta x^3/T^2)^{-1}`$. However, the integration over $`\delta u`$ cannot be reduced to a Gaussian one even in the limit $`\beta x^3/T^2\gg 1`$. The point is that at $`\nu \rightarrow 0`$ the functional $`\mathcal{H}[u]`$ is not analytic on the class of initial velocity fields $`u(y)`$ obeying the constraint $`y(T)=x`$. The variation $`\delta \mathcal{H}`$ turns out to be of the first order in $`\delta u`$ despite the fact that the inequality $`\delta \mathcal{H}\ge 0`$ holds. $`\mathcal{H}[u]`$ can be expanded in a functional Taylor series in $`\delta u`$ only for $`\delta u\lesssim \nu /x`$. The corresponding analysis will be given in another paper, and here we restrict ourselves to the exponential asymptotics. Noting that the initial linear profile (9) transfers all the internal points of the interval $`(0,x)`$ by the time $`t=T`$ into the point $`x`$, we conclude that, up to a pre-exponential factor: $$F_{n+2}=\left\langle u(T,x)\underset{j=1}{\overset{n}{\prod }}u(0,y_j)u(0,0)\right\rangle \propto F_2(T,x)\propto \mathrm{exp}\left(-\frac{\beta x^3}{3T^2}\right).$$ (15) Here $`0<y_1<y_2<\mathrm{\dots }<y_n`$. It is obvious that the reducible part of this correlator is equal to zero. The same initial fluctuation $`u_0^{*}(y)`$ determines the leading asymptotics of the correlator $`\mathrm{\Phi }_4=\langle u(T,x)u(T,x+a_1)u(0,a)u(0,0)\rangle `$ at $`0<a<x`$ and $`0<a_1\ll a`$: $$\mathrm{\Phi }_4\propto \mathrm{exp}\left(-\frac{\beta x^3}{3T^2}\right)\gg \mathrm{\Phi }_{4,Gauss}\propto \mathrm{exp}\left(-\frac{2\beta x^3}{3T^2}\right).$$ (16) Here $`\mathrm{\Phi }_{4,Gauss}`$ denotes the reducible part of $`\mathrm{\Phi }_4`$. To find $`\mathrm{\Phi }_4`$ as a function of the parameter $`a`$ it is necessary to analyse the evolution of the perturbed linear profile. In this case the formation of shock waves becomes inevitable, which makes the problem more difficult. It is worth adding that the $`a`$-dependence of the correlator $`\mathrm{\Phi }_4`$ may be related directly to the probability distribution function of velocity field gradients. In it was shown that the latter is defined by the forming shocks. Proportionality of the asymptotics of high-order correlation functions to the asymptotics of the pair correlator is characteristic of turbulent-like problems and was noted in such a context in . Correlation functions of the field $`u(t,x)`$ may be represented in the form of functional integrals (see e.g., ). In the present paper such integrals were computed, in essence, by the saddle-point method, with the large parameter $`\beta x^3/T^2\gg 1`$ contained in the object to be averaged rather than in the action. This approach goes back to the works of I.M. Lifshits (see ). Later it was generalized to find high-order correlators in equilibrium and strongly non-equilibrium problems . The optimal fluctuation is also called an instanton, by analogy with quantum field theory.
In the paper the long-time asymptotics of the current autocorrelation function was computed for a disordered contact, and the large observation time was used as a saddle-point parameter. I am greatly indebted to Michael Chertkov and Vladimir Lebedev for numerous discussions and advice. I am grateful to Gregory Falkovich and Thomas Spencer for supporting the investigations leading to this work. I would like to thank Michael Stepanov for useful remarks.
no-problem/0001/cond-mat0001184.html
ar5iv
text
# Transition into a low temperature superconducting phase of unconventional pinning in Sr<sub>2</sub>RuO<sub>4</sub> Fig. 1 (caption): Initial creep rates vs $`T`$. The inset shows decays of M<sub>rem</sub> for $`T`$ = 6.7 mK (closed circles), 26 mK (open circles), 45 mK (closed diamonds), 100 mK (open diamonds), 200 mK (closed triangles), 400 mK (open triangles), 600 mK (closed squares), and 800 mK (crosses). A.C. Mota<sup>a,</sup><sup>1</sup><sup>1</sup>1Corresponding author. Present address: Laboratorium für Festkörperphysik, ETH Hönggerberg, CH 8093 Zürich, Switzerland. Fax - +41 1 633 10 77 E-mail: mota@solid.phys.ethz.ch , E. Dumont<sup>a</sup>, A. Amann<sup>a,</sup><sup>2</sup><sup>2</sup>2Present address: UCSD, IPAPS-0360, La Jolla, California 92093-0360 , and Y. Maeno<sup>b</sup> <sup>a</sup>Laboratorium für Festkörperphysik, ETH Hönggerberg, 8093 Zürich, Switzerland <sup>b</sup> Department of Physics, Kyoto University, Kyoto 606-01, Japan ## Abstract We have found a sharp transition in the vortex creep rates at a temperature $`T^{*}=0.05T_c`$ in a single crystal of Sr<sub>2</sub>RuO<sub>4</sub> ($`T_c=1.03`$ K) by means of magnetic relaxation measurements. For $`T<T^{*}`$, the initial creep rates drop to undetectably low levels. One explanation for this transition into a phase with such extremely low vortex creep is that the low-temperature phase of Sr<sub>2</sub>RuO<sub>4</sub> breaks time reversal symmetry. In that case, degenerate domain walls separating discretely degenerate states of a superconductor can act as very strong pinning centers. Keywords: Ruthenates; Heavy Fermions; Superconductivity The discovery of superconductivity in Sr<sub>2</sub>RuO<sub>4</sub> in 1994, a material structurally similar to the high-$`T_c`$ superconductor (La<sub>1-x</sub>Sr<sub>x</sub>)<sub>2</sub>CuO<sub>4</sub>, provided the first example of a layered perovskite without copper which becomes superconducting. The strong interest in this material is based on the suggestion that Sr<sub>2</sub>RuO<sub>4</sub> could constitute the first example of odd parity ($`l=1`$) superconductivity. The suggestion was based on the fact that in the normal state above $`T_c`$, Sr<sub>2</sub>RuO<sub>4</sub> behaves like a quasi-2D Landau Fermi liquid with many-body enhancements of the specific heat and the Pauli spin susceptibility similar to another Landau Fermi liquid, namely normal liquid <sup>3</sup>He below about $`T=100`$ mK. At present, there is no experimental evidence for triplet pairing in Sr<sub>2</sub>RuO<sub>4</sub>. Some hints of unconventional pairing come from measurements of the specific heat. In the cleanest samples it is found that in the superconducting state, the residual electronic specific heat remains at about 50% of its normal value. Furthermore, NQR measurements show no indication of a Hebel–Slichter peak in $`1/T_1T`$, and $`T_c`$ is strongly depressed by non-magnetic impurities. Recently we investigated the magnetic properties of the unconventional superconductor UPt<sub>3</sub> by means of magnetic relaxation measurements on high quality single crystals. We found that in the low temperature B–phase, where a small spontaneous magnetic field has been observed in $`\mu `$SR experiments, no creep could be detected from any metastable configuration for about the first 10<sup>4</sup> seconds. Above the temperature at which the second jump in the specific heat occurs, we observed a different vortex regime. 
In this regime, the initial vortex creep is finite with a rate that increases rapidly as the temperature approaches the transition temperature $`T_c`$. We interpreted the zero initial creep rate in the low–temperature, low–field B–phase of UPt<sub>3</sub> as resulting from an intrinsic pinning mechanism where fractional vortices get strongly trapped in domain walls between domains of degenerate superconducting phases. These experimental results show that the widely different pinning strengths can be used as indirect information on the character of a given superconducting phase. Superconducting phases that break time reversal symmetry might then be identified by their lack of vortex creep or their anomalously strong pinning. We present here similar measurements of the relaxation of the remanent magnetization on a single crystal of Sr<sub>2</sub>RuO<sub>4</sub>. The experimental arrangement has been described in a previous publication . The single crystal has a transition temperature $`T_c=1.03`$ K. The magnetic field in these measurements was applied at an angle of $`15^{\circ }`$ from the basal plane. All the values of M<sub>rem</sub> were taken with the specimen cycled to sufficiently high fields, so that the sample was in the fully critical state. In the inset of Fig. 1 we give decays of the remanent magnetization normalized to the value of M<sub>rem</sub> at $`t=1`$ s at different temperatures. We observe that in the first couple of thousand seconds M<sub>rem</sub> relaxes following a logarithmic law. At longer times one observes a more rapid relaxation, similar to what we found in UPt<sub>3</sub>. This long-time rapid relaxation is due to surface vortices. Here we only discuss the initial slopes of the decays, which are determined by creep of bulk vortices. The normalized creep rates $`S_{initial}=\partial \mathrm{ln}M/\partial \mathrm{ln}t`$ for Sr<sub>2</sub>RuO<sub>4</sub> are given in Fig. 1 as a function of temperature. We observe two different regimes of vortex creep separated by a rather sharp transition around $`T\approx 50`$ mK. For $`T<50`$ mK the creep rates fall to zero within our sensitivity ($`\partial \mathrm{ln}M/\partial \mathrm{ln}t\lesssim 10^{-5}`$). Above $`T\approx 50`$ mK the creep rates are finite and increase rapidly as the temperature is increased. In Fig. 2 we display the same data on a double logarithmic scale. For comparison we have also plotted vortex creep rates of a YBa<sub>2</sub>Cu<sub>4</sub>O<sub>8</sub> single crystal, where the creep rates are much stronger and tend to a finite value for $`T\to 0`$ on account of quantum tunneling. Based on our previous result of no observable creep in the low temperature superconducting phase of UPt<sub>3</sub>, we propose that the sharp transition in Sr<sub>2</sub>RuO<sub>4</sub> at $`T\approx 50`$ mK into a phase with very strong pinning might have a similar physical origin. Work on more samples is under way to confirm these preliminary results.
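As an aside, the initial normalized creep rate quoted above is simply the logarithmic slope of the early-time decay; a minimal sketch of how it could be extracted from M<sub>rem</sub>(t) data is given below. This is not from the paper — the decay constant used to generate the synthetic data is an arbitrary illustration.

```python
import numpy as np

# Synthetic decay with an initial logarithmic regime, M(t)/M(1 s) = 1 - S*ln(t);
# S = 0.01 is an arbitrary illustrative value, not a measured number.
t = np.logspace(0.0, 3.3, 200)          # time in seconds
M = 1.0 - 0.01*np.log(t)

# initial normalized creep rate S_initial = dlnM/dlnt from the first ~2000 s
early = t < 2.0e3
slope = np.polyfit(np.log(t[early]), np.log(M[early]), 1)[0]
print("S_initial ~", round(abs(slope), 4))   # ~0.01 for this synthetic decay
```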
no-problem/0001/cond-mat0001237.html
ar5iv
text
# Untitled Document 1. Introduction Hierarchical lattice models have played an important part in understanding the statistical mechanics of phase transitions. Such lattices differ, both in topology and in geometry, from the Bravais lattices . One major feature of such structures is their scale invariance, which enables an exact implementation of real space renormalization group (RSRG) techniques \[1-4\]. Apart from statistical mechanics, studies of the electronic spectrum of such lattices have also revealed striking properties, not shared by common crystalline structures \[5-7\]. For example, Domany et. al. solved the Schrödinger equation on a variety of hierarchical lattices using an exact recursive scheme. In the thermodynamic limit, the energy levels were found to be discrete, very closely spaced and highly degenerate . The electron localization problem however, is not easy to understand on hierarchical lattices. The fluctuating local environment around each lattice point is likely to localize the electronic wavefunctions in most situations , though for some hierarchical or fractal structures like the Seirpinski gasket and the Vicsek fractal, “extended” electronic states have been reported . This aspect makes the study of electronic properties of hierarchical lattices interesting. Normally, the “extended” character of electronic wavefunction is associated with translational invariance of the underlying lattice and, a hierarchical lattice (like the present one) does not have any translational periodicity by virtue of its construction (see Fig. 1). However, we are familiar with several one dimensional examples, viz. the one-dimensional random dimer or quasiperiodic lattice models where one comes across situations in which local positional correlation between constituent ‘atoms’ gives rise to a finite or infinite number of “extended” electronic states even though these lattices are not periodic. In hierarchical lattices, such local positional correlation is not always obvious. On the other hand, the topology of the lattice in certain earlier cases has been shown to play an important role in sustaining “extended” electronic wavefunctions . Such states sometimes exhibit anomalous transport properties, in the sense that though the amplitude of the wavefunction remains non-zero even at distant parts of arbitrarily large lattices, the end-to-end transmission of any incident wavepacket sometimes displays a power law decay . Above exact results are available for hierarchical models in which there is a finite variety of the nearest neighbour environment around a lattice site (strictly speaking, in any hierarchical lattice all sites are inequivalent if one looks beyond the nearest neighbours). The situation may thus become quite challenging if one tries to explore a case where the the range of the coordination number of the lattice points increases with the generation of bigger and bigger lattices. One such example is the well known diamond hierarchical lattice in which the coordination number of the vertices range from $`z=2`$ to $`z_{max}=2^{N1}`$ for an $`N^{th}`$ generation lattice. The thermodynamics of spin models on a diamond lattice has been studied exactly by RSRG methods . But its electronic properties has not been really studied in detail, a part of which we intend to address in the present communication. Our interest in the diamond lattice is two-fold. 
Firstly, we wish to investigate analytically, if there exists any “extended” electronic eigenstate though there is no translational periodicity in this lattice. Moreover, if such states do exist then, to our mind, it would be interesting to study their amplitude profiles and also to study the transmission properties of arbitrarily large finite lattices at those special values of the electron-energy for which the distribution of the amplitudes is non-trivial. Secondly, we note that similar hierarchical structures have recently been proposed by Samukhin et. al. as a possible basic structure of stretched polymers. In stretched polyacetylene the polymer network is constructed by coupled polymer chains oriented along some direction . The $`(m,n)`$ hierarchical pseudolattice that serves as the model for such polymers may be composed, according to Ref. , by taking $`n`$ bonds forming a chain of $`(n+1)`$ atoms and then joining $`m`$ such chains in parallel. It is then quite obvious that our diamond hierarchical structure is a $`(2,2)`$ model lattice in this group (see Fig. 1). So, an analytical approach to study the transport properties of a diamond structure seems to be an interesting step towards the understanding of the properties of the general $`(m,n)`$ structure. Recently, Zhu et. al. , numerically calculated the transmission coefficient of the $`(m,n)`$ polyacetylene model as a function of electron energy for different $`(m,n)`$ values and at different stage numbers. Interestingly, they found, among very fragmented patterns, many energy values for which the transmission coefficient turns out to be unity (or very nearly so) for upto fifth generation structures. Also, they have reported (numerically obtained) scaling behaviour of conductance for the $`(m,n)`$ structures, though its precise form is not known. In view of this we consider the simplest $`(2,2)`$ version, i.e., the diamond lattice and make an analytical attempt to see if such a structure ever becomes completely or partially transparent to an incoming electron. Most interestingly, we find that it is possible to get a special energy eigenvalue for which arbitrarily large sized diamond hierarchical lattices will have transmission coefficient equal to unity. We work out a scheme based on an intuitive approach to determine the distribution of the amplitudes of the wave function $`(\psi _i)`$ for this special energy on a small sized lattice and then use an RSRG approach to work out the distribution in lattices for higher generations. The central idea is not too difficult to extend to the general $`(m,n)`$ case. Using the same RSRG formalism we evaluate a whole hierarchy of energy eigenvalues for which we again have non-trivial distribution of the amplitudes $`\psi _i`$ over the entire lattice. The end-to-end transmission across lattices of gradually increasing size is now found to decay for this second group of eigenvalues. We explicitly calculate the scaling behaviour for one such case and obtain a precise form of the power law followed by the transmission coefficient. Scaling forms for other energies can be obtained following the scheme prescribed by us. In Sec. 2 we describe our model and the method, in Sec. 3 the transmission coefficient is discussed and we draw conclusions in Sec. 4. 2. The Model and the Method In Fig. 1, we show the first three stages of construction of the diamond lattice following Berker and Ostlund . A detailed discussion on the construction and topology of lattices belonging to this class is given in Ref. . 
The atomic sites sit at every vertex of the lattice. To study the electronic properties of the diamond lattice, we work with the tight-binding Hamiltonian in the nearest-neighbour approximation: $$H=\underset{i}{\sum }ϵ_i|i\rangle \langle i|+\underset{<ij>}{\sum }t_{ij}|i\rangle \langle j|$$ (1) where $`ϵ_i`$ is the on-site potential and $`t_{ij}`$ is the nearest neighbour (n.n) hopping integral. As we are primarily interested in the topological aspect of the system we set identical values for all the site energies, i.e., $`ϵ_i=ϵ`$, and all the hopping integrals are taken to be the same, i.e. $`t_{ij}=t`$ (chosen to be the scale of energy). However, to facilitate the renormalization group calculations we shall designate the site energy of a site with coordination number $`z`$ as $`ϵ_z`$. Therefore, in an $`N^{th}`$ generation lattice the $`ϵ_i`$’s range from $`ϵ_2`$ to $`ϵ_{z_{max}}`$, where $`z_{max}=2^{N-1}`$. Let us, first of all, try to give a simple intuitive picture of how to construct a wavefunction that will have a non-trivial distribution of amplitude over the entire lattice, irrespective of its size. We work with the difference equation version of the Schrödinger equation, $$(E-ϵ_i)\psi _i=t\underset{j(\mathrm{n}.\mathrm{n}.\ \mathrm{of}\ i)}{\sum }\psi _j$$ (2) where $`\psi _j`$ is the amplitude of the wave function at the $`j^{th}`$ site. We shall be looking for energy eigenvalues that solve the above equation consistently all over the lattice and yet yield a non-trivial distribution of the $`\psi _j`$’s. We begin with $`N=3`$, i.e., where we have at least one class of sites with $`z>2`$ (here, $`z_{max}`$ is 4). By inspection we can see that if we choose $`E=ϵ_2`$, then the Schrödinger equation is satisfied consistently at all sites if we demand that the amplitude of the wave function vanishes at sites with coordination number four. Extending this idea we find that indeed we can satisfy the Schrödinger equation locally at all sites in any arbitrarily large lattice provided we set the amplitude $`\psi _i`$ at a site with coordination number $`z`$ equal to zero for all $`z>2`$. That is, in this scheme, the electron cannot “feel” the presence of the sites with $`z>2`$. This helps in attaining an “extended” character of the wave function. It may be mentioned here that a similar technique was used earlier in the case of a Vicsek fractal . Here, it is to be noted that the solutions of the Schrödinger equation on such a lattice will be highly degenerate and we are exhibiting one such case only. In Figs. 2a and 2b, the amplitude distributions for $`E=ϵ_2=0`$ are given on a lattice with $`N=3`$ and $`N=4`$ respectively. The pattern of distribution of the amplitudes for the $`N=4`$ case has been obtained by joining the extremities $`A`$ and $`B`$ (Fig. 2a) of the basic $`N=3`$ plaquette (which now becomes the building block for the next generation) side by side such that the vertex $`B`$ of one plaquette falls on the vertex $`A`$ of the next (Fig. 2b). Following this strategy we can use the $`N=4`$ plaquette as the basic unit and determine the amplitude distribution on an $`N=5`$ lattice by joining the $`N=4`$ plaquettes at the vertices having $`z=8`$, where the amplitude of the wavefunction is zero. The method can be extended easily to construct the distribution pattern for higher generations. The above idea can now be coupled to an RSRG scheme to extract other possible energy eigenvalues. Starting from any arbitrarily large finite version of the hierarchical diamond lattice, we can renormalize the lattice $`n`$ times. 
The recursion relations for the site energies and the hopping integral are respectively given by : $$ϵ_z(n)=ϵ_{2z}(n-1)+\frac{2zt^2(n-1)}{[E-ϵ_2(n-1)]}$$ (3) and $$t(n)=\frac{2t^2(n-1)}{[E-ϵ_2(n-1)]}$$ (4) Here, $`ϵ_z(n)`$ and $`t(n)`$ denote respectively the values of the on-site potential and the hopping integral at the $`n^{th}`$ stage of renormalization. Now, suppose we start with an $`N^{th}`$ generation lattice. We renormalize this lattice $`n=N-3`$ times and bring it down to an “effectively” third generation lattice where $`z=2`$ and $`4`$. The site energies and the hopping integral for this renormalized version are calculated using Eqns. $`(3)`$ and $`(4)`$. We then apply the earlier trick on this renormalized lattice, i.e., we make the amplitudes $`\psi _i`$ vanish at the $`z=4`$ sites (on the rescaled version). This happens for $`E=ϵ_2(n)`$. This is a polynomial equation in $`E`$. Though it is difficult to provide a complete proof, we have performed explicit calculations up to $`N=5`$ and for $`n=1`$ and $`2`$. In each case it turns out that the solutions of the equation $`E=ϵ_2(n)`$ satisfy the Schrödinger equation all over the lattice with a non-trivial distribution of the amplitudes of the wavefunction. Therefore, it is tempting to make the conjecture that the real roots of the equation $`E=ϵ_2(n)`$ will correspond to the “extended” eigenstates in the general case, if we follow the same prescription. When mapped onto the original lattice the wavefunction will vanish at sites from some value of $`z`$ onwards and will remain finite at all other lattice points with lower values of $`z`$. Let us clarify this idea by discussing two specific situations. First, we choose $`E=ϵ_2(1)`$. Now we should have at least an $`N=4`$ lattice as our starting structure, where $`z_{max}=8`$. We then renormalize it once to cast it into an effective $`N=3`$ stage lattice with $`z_{max}=4`$. We find that the solutions of the equation $`E=ϵ_2(1)`$ are $`E=\pm 2`$, with all $`ϵ_z=0`$ initially and $`t=1`$. Each of these energy values consistently satisfies the Schrödinger equation everywhere on the renormalized lattice provided we fix $`\psi _i=0`$ at the $`z=4`$ sites on this renormalized structure. When mapped onto the original $`N=4`$ stage hierarchical structure, we see that the amplitude of the wavefunction vanishes only at the vertex with the highest coordination number, i.e., 8, and it is non-zero at all other points. The rule for constructing such a distribution may be formulated as follows. We consider two plaquettes, type $`I`$ and type $`II`$, each being an $`N=3`$ diamond lattice. In Fig. 3a, the plaquette $`I`$ is shown. In type $`II`$ each non-zero amplitude of the wavefunction is of opposite sign to that at the corresponding vertex in type $`I`$. In both of them, the Schrödinger equation is satisfied at each vertex for $`E=2`$. These two plaquettes, $`I`$ and $`II`$, are then joined end-to-end in an alternating fashion such that the vertices with $`\psi _i=0`$ fall on each other (Fig. 3b). In this way the $`N=4`$ structure is built, and by selecting the $`N=4`$ structure as the basic unit we can construct the pattern for an $`N=5`$ lattice by joining them suitably at the extreme points (with $`\psi _i=0`$). The scheme can go on for other higher order lattices. It is to be appreciated that, for higher generations, $`\psi _i`$ will remain zero at all vertices with $`z=8,16,32,\dots `$ . 
The even number of nearest neighbours helps maintain the value of $`\psi _i`$ equal to zero at these sites. For the other sites with $`z<4`$ at any generation, the Schrödinger equation is locally satisfied everywhere, once it is satisfied at a lower generation. Second, we consider $`E=ϵ_2(2)`$. Here, we need to start from an $`N=5`$ structure. We map this lattice onto an $`N=3`$ version with renormalized parameters and force $`\psi _i=0`$, as before, at the sites with $`z=4`$ on this renormalized lattice. The effective $`N=3`$ lattice with the distribution of the amplitudes is presented in Fig. 4a. When unfolded to retrieve the original $`N=5`$ lattice, we now find that the amplitude is zero only at the vertices with $`z=16`$, and three new amplitudes $`\pm a`$, $`\pm b`$ and $`\pm c`$ appear at the $`z=2`$ and $`4`$ sites. One quarter of the full $`N=5`$ lattice is shown in Fig. 4b with the distribution of the values $`a`$, $`b`$ and $`c`$. The complementary portion with $`-a`$, $`-b`$ and $`-c`$ is not shown, but can easily be conceived of. In order to satisfy the Schrödinger equation consistently at each vertex, we see that $`a`$, $`b`$ and $`c`$ should take the values $`(E^2-8)/8E`$, $`(E^2-8)/8`$ and $`E/8`$ respectively, provided the energy $`E`$ is a solution of the equation $`E^4-12E^2+16=0`$. This is precisely the polynomial equation that is obtained by setting $`E=ϵ_2(2)`$. We thus confirm that for every root of this equation we are able to construct extended eigenstates consistent with the Schrödinger equation. We expect that, reasoning in the same manner as in the above cases, we are likely to uncover a whole set of eigenvalues by solving the equation $`E=ϵ_2(n)`$ with $`n=1`$, $`2`$, $`3`$, …, in an arbitrarily large lattice for which the wavefunction will be “extended” in the sense described earlier. Before ending this section we point out two other possibilities of getting “extended”-type states: (i) We consider the third generation lattice, where $`z_{max}=4`$. It is quite obvious that by choosing $`E=ϵ_4`$ (which happens to be equal to $`ϵ_2`$ in the bare length scale in our model), we can have a consistent solution of the Schrödinger equation on this lattice in which the amplitude of the wavefunction is zero on all vertices with $`z=2`$ and alternates between $`\pm 1`$ on the vertices with $`z=4`$. This is, of course, one of the possible configurations. But we can carry on the process of construction for other eigenvalues by setting $`E=ϵ_4(n)`$ for an $`N=n+3`$ generation lattice, and by demanding that $`\psi _i`$ vanishes identically on all $`z=2`$ vertices on the $`n`$ step renormalized version of the same lattice. In Fig. 5, we exhibit the distribution of amplitudes for $`E=ϵ_4(1)`$ on a fourth generation lattice. Distributions for higher generations and other eigenvalues can be obtained by extending the earlier ideas. Similar results can be obtained for the general case with $`E=ϵ_{2z}(n)`$. (ii) The other possibility refers to a specific initial choice of the site energies. By looking at the recursion relations $`(3)`$ and $`(4)`$ for the on-site term and the hopping integral respectively, we find that if we start with a model where $`ϵ_{2z}=ϵ_z-zt`$, then for $`E=ϵ_2+2t`$ each on-site term and the hopping integral exhibit a fixed point behaviour. It implies that the nearest neighbour hopping integral does not flow to zero under iteration and we have a non-vanishing ‘connection’ between nearest neighbouring sites at all length scales. 
This is a clear signature of the corresponding eigenstate being extended . We will now describe how to investigate the transmission characteristics of a diamond hierarchical lattice. We will emphasize the analytical treatment of the recursion relations and will discuss the behaviour of the transmission coefficient $`T(E)`$ for the two cases with $`E=ϵ_2`$ and $`E=ϵ_2(1)`$ respectively, for which the transmission coefficient displays totally opposite characteristics. $`T(E)`$ for the other energies $`(E=ϵ_{2z}(n))`$ can be obtained by following the method adopted for these cases. 3. Analysis of the Transmission Coefficient For calculating the transmission coefficient $`T(E)`$, we attach two semi-infinite perfectly ordered leads to the two ‘diametrically’ opposite vertices having the maximum coordination number in any generation $`N`$. The original lattice is then renormalized $`n(=N-2)`$ times, so that we are left with a basic $`N=2`$ rhombus with four vertices, each having an effective site energy $`ϵ_2(n)`$ and the nearest neighbour hopping integral $`t(n)`$ \[Fig. 6\]. This elementary rhombus is now folded into a ‘dimer’ with site energy and nearest neighbour hopping integral respectively given by $$\stackrel{~}{ϵ}=ϵ_2(n)+\frac{2t^2(n)}{[E-ϵ_2(n)]}$$ and $$\stackrel{~}{t}=\frac{2t^2(n)}{[E-ϵ_2(n)]}$$ Following the standard procedure it is then easy to show that $$T(E)=\frac{4\mathrm{sin}^2k}{[P_{21}-P_{12}+(P_{22}-P_{11})\mathrm{cos}k]^2+(P_{11}+P_{22})^2\mathrm{sin}^2k}$$ (5) where the elements of the matrix $`P`$ are: $$P_{11}=\left[\left(\frac{E-\stackrel{~}{ϵ}}{\stackrel{~}{t}}\right)^2-1\right]\frac{\stackrel{~}{t}}{t_0}$$ (6) $$P_{12}=-\left(\frac{E-\stackrel{~}{ϵ}}{\stackrel{~}{t}}\right)=-P_{21}$$ (7) and $$P_{22}=-\frac{t_0}{\stackrel{~}{t}}$$ (8) Here, $`k=\mathrm{cos}^{-1}[(E-ϵ_0)/2t_0]`$, where $`ϵ_0`$ and $`t_0`$ refer to the on-site potential and the hopping integral, respectively, of the ordered lead. In order to understand the behaviour of $`T(E)`$ for large systems at any particular energy, we must analyze how the matrix elements $`P_{ij}`$ behave for a large number of RG iterations $`n`$. As a lattice of any generation is finally reduced to a basic $`N=2`$ rhombus having only $`ϵ_2(n)`$ and $`t(n)`$ (see Fig. 6), we must analyse the flow patterns of $`ϵ_2`$ and $`t`$ under successive RSRG iterations. The evolution of these two parameters ultimately controls $`\stackrel{~}{ϵ}`$ and $`\stackrel{~}{t}`$, and hence the matrix elements $`P_{ij}`$. Let us discuss it for two specific cases. Throughout the analysis we will set all $`ϵ_z=0`$ and $`t=1`$. Case (i) $`E=ϵ_2=0`$ From direct calculations we find that as $`E\to 0`$, the leading behaviours (in $`E`$) of $`t`$ and $`ϵ_2`$ are given by, $$t(n)\simeq \frac{2t^2}{E}$$ (9) $$ϵ_2(n)\simeq \frac{4t^2}{E}$$ (10) for $`n=2,3,4`$ and $`5`$. We thus assume these forms to be true for any arbitrary value of $`n`$, viz., $`n=m`$. Then, proceeding according to the standard method of induction, we can prove, using the recursion relations $`(3)`$ and $`(4)`$, that Eqns. $`(9)`$ and $`(10)`$ indeed hold good for $`n=m+1`$ as well. Therefore, we accept the above forms as the leading terms in the expressions for $`t(n)`$ and $`ϵ_2(n)`$ for $`E\to 0`$ and for any arbitrary value of $`n`$ with $`n\ge 2`$. 
It is now easy to work out an expression for $`(E-\stackrel{~}{ϵ})/\stackrel{~}{t}`$, which is given, to the leading order in $`E`$, by $$\frac{E-\stackrel{~}{ϵ}}{\stackrel{~}{t}}=\frac{E^2}{2t^2}+1$$ (11) A direct substitution of the above result in the expressions for $`P_{ij}`$ shows that $`P_{11}=P_{22}\to 0`$ and $`P_{21}=-P_{12}=1`$ as $`E\to 0`$. The expression for $`T(E=0)`$ now becomes : $$T(E=0)=\frac{4\mathrm{sin}^2k}{[P_{21}-P_{12}]^2}=\mathrm{sin}^2k=1-\frac{ϵ_0^2}{4t_0^2}$$ (12) If we select $`ϵ_0=0`$, then $`T(E=0)`$ is unity, and any arbitrarily large diamond hierarchical lattice becomes completely transparent to an incoming electron with $`E=0`$. Case (ii) $`E=ϵ_2(1)=2`$ The central idea of the analysis for case $`(i)`$ can now easily be extended to study the general situation, where $`E=ϵ_2(n)`$. The results, however, turn out to be totally different, as we have checked numerically by solving the equation $`E-ϵ_2(n)=0`$ for several values of $`n`$. We present below analytical results for $`E=ϵ_2(1)=2`$ for steps $`n\ge 2`$. Once again, we observe the behaviour of $`t`$ and $`ϵ_2`$ around $`E=2`$ for successive iterations. We set $`E=2+\delta `$, $`\delta `$ being infinitesimally small, and find, by direct calculation, that for $`n\ge 2`$ $$ϵ_2(n)=\frac{2}{\delta }+f_n+𝒪(\delta )$$ (13) $$t(n)=\frac{1}{\delta }+g_n+𝒪(\delta )$$ (14) up to leading order in $`\delta `$, where $`f_n`$ and $`g_n`$ are respectively given by $$f_n=\frac{2^{2n}}{9}+\frac{2n}{3}-\frac{11}{18}$$ and $$g_n=\frac{2^{2n-2}}{9}-\frac{n}{3}+\frac{35}{36}$$ The above forms set in, as in case (i), after the first iteration, and hold perfectly well for $`n=2,3,4`$ and $`5`$. We now make use of the recursion relations (3) and (4) to find that the result is true for the $`(n+1)^{th}`$ stage as well. Thus, we take Eqns. (13) and (14) to represent the general $`n`$ behaviour of $`ϵ_2(n)`$ and $`t(n)`$ for any $`n\ge 2`$. The leading $`\delta `$ behaviour of $`\stackrel{~}{ϵ}`$ and $`\stackrel{~}{t}`$ is obtained to be, $$\stackrel{~}{ϵ}=\frac{1}{\delta }+\frac{f_{n+1}}{2}$$ and $$\stackrel{~}{t}=\frac{1}{\delta }+g_{n+1}$$ respectively, which lead to the equation $$\frac{E-\stackrel{~}{ϵ}}{\stackrel{~}{t}}=\frac{2^{2n}}{3}-\frac{4}{3}$$ (15) However, it should be noted that in order for the present analysis to be valid, we must have a finite (although large) number of iterations $`n`$, such that $`f_n`$ and $`g_n`$ are also finite quantities and are small compared to $`1/\delta `$ as $`\delta \to 0`$. The matrix elements now read (neglecting terms of the order of $`\delta ^2`$ and taking the limit $`\delta \to 0`$), $`P_{11}=2(4^n-4)/3`$ and, $`P_{22}=0`$, $`P_{12}=P_{21}=1`$. For large (but finite) values of $`n`$ one can now show, using Eqn. (5), that $$T_n(E=2)\sim 2^{-4n}$$ It is quite obvious that, depending on the value of the energy, one has to select the on-site term and the hopping integral for the lead suitably, so that the energy does not fall beyond the “band” of the ordered lead or even coincide with the “band-edge”. In both cases the use of the expression (5) will not be meaningful. We have also numerically calculated $`T(E)`$ for $`E=ϵ_2(2)`$, $`E=ϵ_2(3)`$ and $`E=ϵ_4(1)`$ for lattices starting from $`N=3`$ up to $`N=6`$. We observe a gradual attenuation in the value of the transmission coefficient, in accordance with our expectation. 
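As an illustration of the $`E=ϵ_2=0`$ result, both the real-space construction of Sect. 2 and the RSRG route to $`T(E)`$ are easy to check numerically. The sketch below is not part of the original paper: it assumes $`ϵ_i=0`$, $`t=1`$, leads with $`ϵ_0=0`$ and $`t_0=1`$, uses a small offset $`E=10^{-8}`$ to avoid the division by $`E-ϵ_2(0)`$ at the first decimation step, and adopts the transfer-matrix sign conventions of Eqs. (5)–(8) as reconstructed above. It builds a finite diamond lattice, verifies that the $`\pm 1`$/$`0`$ amplitude pattern of Fig. 2 satisfies the difference equation (2) at $`E=0`$, and confirms $`T(E\to 0)\to 1`$ for several generations.

```python
import numpy as np
from functools import lru_cache
from itertools import count

def diamond_edges(N):
    """Bond list of an N-generation diamond lattice (N >= 2); every bond is
    replaced by a four-bond diamond at each generation.  Also returns the
    pairs of 'middle' sites created in the last step (the z = 2 vertices)."""
    edges, ids, middles = [(0, 1)], count(2), []
    for _ in range(N - 1):
        new_edges, middles = [], []
        for a, b in edges:
            m1, m2 = next(ids), next(ids)
            new_edges += [(a, m1), (m1, b), (a, m2), (m2, b)]
            middles.append((m1, m2))
        edges = new_edges
    return edges, middles

# --- check of the E = eps_2 = 0 extended state (Sect. 2, Fig. 2) ------------
N = 4
edges, middles = diamond_edges(N)
nsites = 1 + max(max(e) for e in edges)
H = np.zeros((nsites, nsites))
for a, b in edges:                    # eps_i = 0, t = 1
    H[a, b] = H[b, a] = 1.0
psi = np.zeros(nsites)
for m1, m2 in middles:                # +1/-1 on the two middles of each plaquette,
    psi[m1], psi[m2] = 1.0, -1.0      # zero on every vertex with z > 2
print("max |H psi| at E = 0:", np.max(np.abs(H @ psi)))      # prints 0.0

# --- T(E -> 0) from the RSRG recursion (3)-(4) and Eq. (5) ------------------
def transmission(E, N, eps0=0.0, t0=1.0):
    n = N - 2                         # decimations down to a single rhombus

    @lru_cache(maxsize=None)
    def hop(m):                       # t(m)
        return 1.0 if m == 0 else 2.0*hop(m-1)**2/(E - eps(2, m-1))

    @lru_cache(maxsize=None)
    def eps(z, m):                    # eps_z(m)
        return 0.0 if m == 0 else eps(2*z, m-1) + 2.0*z*hop(m-1)**2/(E - eps(2, m-1))

    e2, t = eps(2, n), hop(n)
    tt = 2.0*t**2/(E - e2)            # effective dimer hopping (tilde t)
    et = e2 + tt                      # effective dimer site energy (tilde eps)
    x = (E - et)/tt
    P11, P22, P12, P21 = (x*x - 1.0)*tt/t0, -t0/tt, -x, x
    k = np.arccos((E - eps0)/(2.0*t0))
    den = (P21 - P12 + (P22 - P11)*np.cos(k))**2 + (P11 + P22)**2*np.sin(k)**2
    return 4.0*np.sin(k)**2/den

print([round(transmission(1e-8, N), 6) for N in (3, 4, 5, 6)])   # -> [1.0, 1.0, 1.0, 1.0]
```

The same routine, evaluated at the fixed point $`E=ϵ_2+2t`$ of possibility (ii) with suitably chosen $`t_0`$, reproduces the size-independent transmission quoted in Eq. (16) below.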
The present results thus provide an example of what we may call an “atypical” extended state where, though the wavefunction displays non-zero amplitudes even at the farthest portions of an arbitrarily large lattice, the end-to-end transmission decays with increasing lattice size. We expect similar behaviour of $`T(E)`$ for other cases (with $`E=ϵ_{2z}(1)`$) also. The fixed point behaviour of T(E) : Before ending this section, we discuss the case where we have a fixed-point behaviour of the Hamiltonian parameters. For this, we take a model with $`ϵ_{2z}=ϵ_z-zt`$ and set $`E=ϵ_2+2t`$. All parameters then remain unaltered under RSRG and we find that for $`ϵ_2=0`$ and $`t=1`$, the numerical value of the transmission coefficient is given by, $$T(t_0)=\frac{4(t_0^2-1)}{t_0^4}$$ (16) where we have set the site energy of the lead $`ϵ_0`$ equal to zero. Naturally, we have to choose a suitable value for $`t_0`$ so that the above energy remains within the “band” of the ordered lead. The above expression for $`T`$ remains fixed for arbitrarily large versions of a diamond lattice. The wave function is definitely extended, as $`t`$ does not flow to zero under RSRG. 4. Conclusion We have presented a hierarchical lattice model where the coordination number of the lattice points ranges from $`2`$ to $`2^{N-1}`$ depending on the generation index $`N`$. In such a lattice there exist “extended”-type electronic states, some of which have been identified, and the corresponding eigenvalues have been calculated using renormalization group ideas. We also presented an exact analysis of the end-to-end transmission coefficient to reveal that the lattice, irrespective of its size, becomes completely transparent to an electron with energy $`E=0`$, while for other energies the transmission coefficient has a scaling behaviour. We obtained an exact form of the scaling for a specific energy, and the other forms can be obtained using the same methodology, though we did not present the other analytical results here. References e-mail addresses : <sup>(1)</sup> papluchakrabarti@hotmail.com <sup>(2)</sup> bibhas@cmp.saha.ernet.in <sup>(3)</sup> arunava@klyuniv.ernet.in and rkm@cmp.saha.ernet.in R. B. Griffiths and M. Kaufman, Phys. Rev. B 26, 5022 (1982). A. N. Berker and S. Ostlund, J. Phys. C 45, 4961 (1979). Y. Gefen, B. Mandelbrot and A. Aharony, Phys. Rev. Lett. 45, 855 (1980). S. Alexander and R. Orbach, J. Phys. Lett. 43, L625 (1982); J. R. Banavar and M. Cieplak, Phys. Rev. B 28, 3813 (1983). E. Domany, S. Alexander, D. Bensimon and L. P. Kadanoff, Phys. Rev. B 48, 3110 (1983). R. Rammal and G. Toulouse, Phys. Rev. B 49 ,1194 (1982). W. Schwalm and M. Schwalm, Phys. Rev. B 39, 12872 (1989); W. Schwalm and M. Schwalm, Phys. Rev. B 47, 7848 (1993); X. R. Wang, Phys. Rev. B 51, 9310 (1995) ; A. Chakrabarti, J. Phys.:Condens. Matter 8, 10951 (1996). A. Chakrabarti and B. Bhattacharyya , Phys. Rev. B 54, R12625 (1996); A. Chakrabarti, J. Phys.: Condens. Matter 8, L99 (1996). D. H. Dunlap, H. -L. Wu and P. Phillips, Phys. Rev. Lett. 65, 88 (1990). A. Chakrabarti, S. N. Karmakar and R. K. Moitra, Phys. Rev. B 50 , 13276 (1994). A. N. Samukhin, V. N. Prigodin and L. Jastrabik, Phys. Rev. Lett. 78, 326 (1997). C. P. Zhu, S. J. Xiong and T. Chen, Phys. Rev. B 52, 12848 (1998). B. W. Southern, A. A. Kumar, P. D. Loly and A. -M. S. Tremblay, Phys. Rev. B 27, 1405 (1983). A. Douglas Stone, J. D. Joannopoulos and D. J. Chadi, Phys. Rev. B 62, 5583 (1981). Figure Captions Fig. 
1: First three stages (i.e., $`N=1,2,3`$) of the construction of a diamond hierarchical lattice. Fig. 2: Distribution of the amplitudes of an extended wavefunction at $`E=0`$ on (a) an $`N=3`$ lattice and (b) an $`N=4`$ lattice. All $`ϵ_i`$’s have been set equal to zero and $`t=1`$. $`\psi _i`$’s take on values $`-1`$, $`0`$ and $`1`$ on different sites ($`i`$). Fig. 3: Amplitudes of a wavefunction on (a) the basic plaquette $`I`$ which acts as a building block of an $`N=4`$ lattice and (b) an $`N=4`$ lattice for $`E=ϵ_2(1)`$. All $`ϵ_i`$’s have been set equal to zero and $`t=1`$. $`\psi _i`$’s take on values $`-1`$, $`-1/2`$, $`0`$, $`1/2`$ and $`1`$ on different sites ($`i`$). Fig. 4: Amplitude distribution on (a) an effectively $`N=3`$ plaquette obtained by renormalizing an $`N=5`$ lattice twice and (b) one quarter of the original $`N=5`$ version for $`E=ϵ_2(2)`$. Dashed lines at the two extreme vertices indicate the presence of complementary plaquettes. All $`ϵ_i`$’s have been set equal to zero and $`t=1`$. The values of $`a`$, $`b`$ and $`c`$ are given in the text. Fig. 5: Amplitudes of a wavefunction for $`E=ϵ_4(1)`$ on an $`N=4`$ lattice. All $`ϵ_i`$’s have been set equal to zero and $`t=1`$. $`\psi _i`$’s take on values $`-2\sqrt{2}`$, $`-1`$, $`0`$, $`1`$ and $`2\sqrt{2}`$ on different sites ($`i`$). Fig. 6: Reduction of an $`n`$ times renormalized lattice to an effective dimer. The leads are shown as dashed lines.
no-problem/0001/astro-ph0001250.html
ar5iv
text
# Extragalactic H2 and its variable relation to CO ## 1 Introduction The difficulty of directly observing molecular hydrogen ( H<sub>2</sub>), the major constituent of the interstellar medium in galaxies, and ways of doing so indirectly are reviewed elsewhere in this volume (Combes 2000). Usually, H<sub>2</sub> cloud properties are derived by extrapolation from more easily conducted CO observations. For instance, observed CO cloud sizes and velocity widths yield total molecular gas masses under the assumption of virial equilibrium. However, in extragalactic systems especially, this method is beset by pitfalls (see Israel, 1997, hereafter Is97) and requires high linear resolutions (i.e. use of interferometer arrays). More seriously, the fundamental assumption of virialization appears to be false. As individual components (‘clumps’) have velocities of only a few km s<sup>-1</sup> and CO complex sizes are 50–100 pc, crossing times are comparable to CO complex lifetimes of only a few times 10<sup>7</sup> years or less (Leisawitz et al. 1989; Fukui et al. 2000; see also Elmegreen 2000). As equilibrium cannot be reached in a single crossing time or less, the virial theorem is not applicable to such complexes. Indeed, the elongated and interconnected filamentary appearance of many large CO cloud complexes do not suggest virialized systems (see also Maloney 1990). The observed CO intensity is the weighted product of CO brightness temperature and emitting surface area; actual CO column densities are completely hidden by high optical depths. However, in large beams CO cloud ensembles may be assumed to be statistically identical so that CO intensities scale with CO mass within the beam, i.e. beam-averaged CO column density. If we can determine the proportionality, the H<sub>2</sub>-to-CO conversion factor $`X`$, subsequent CO measurements can be used to find the appropriate H<sub>2</sub> column density and mass. In the Milky Way, the calibration of $`X`$ is controversial by a factor of about two (cf. Combes 2000), and frequently based on application of the virial theorem (but see preceding paragraph …). In other extragalactic environments, the assumption of statistical CO cloud ensemble similarity becomes questionable. Very clumpy, even fractal molecular clouds are very sensitive to e.g. variations in radiation field intensity and metallicity. As H<sub>2</sub> and CO, supposedly tracing H<sub>2</sub>, react differently to such variations, $`X`$ is also very sensitive to them (Maloney $`\&`$ Black 1988). The determination of the dependence of $`X`$ on metallicity and radiation field intensity, needed to correctly estimate amounts of H<sub>2</sub> in environments (dwarf galaxies, galaxy centers) different from the Solar Neighbourhood thus requires H<sub>2</sub> mass determinations independent of CO. ## 2 H<sub>2</sub> determinations from dust continuum emission Fortunately, H<sub>2</sub> and HI column densities are traced by optically thin continuum emission from associated dust particles. Unfortunately, dust emissivities depend strongly on temperature, dust particle properties are not accurately known and dust-to-gas ratios are frequently uncertain. The effect of these uncertainties are minimized if we can avoid the need for determining absolute values of the dust column density and the dust-to-gas ratio. 
Far-infrared/submillimeter continuum fluxes and HI intensities from spatially nearby positions, preferably in dwarf galaxies that lack strong temperature or metallicity gradients, can be used to obtain reasonably accurate H<sub>2</sub> column densities (Is97). The ratio of dust continuum emission to HI column density at locations lacking substantial molecular gas provides a measure for the dust-to-gas column density ratio. Without requiring its absolute value, we can apply this measure to a nearby location rich in molecular gas to find the total hydrogen column density and, after subtraction of HI, the H<sub>2</sub> column density. Division by the CO intensity yields the local value of $`X`$ in absolute units with an accuracy better than a factor of two (Is97). Individual molecular cloud complexes in the nearby Magellanic Clouds were used by Is97 to determine the effects of radiation field intensity (as sampled by far-infrared surface brightness) on $`X`$. Over a large range of intensities, $`X`$ is linearly proportional to the radiative energy available per nucleon ($`\sigma `$). Quiescent regions in the LMC yield $`X`$ values close to those of the Solar Neighbourhood, whereas a value 40 times higher is obtained for the radiation-saturated 30 Doradus region. The more metal-poor SMC exhibits higher $`X`$ values, but again linearly proportional to $`\sigma `$. ## 3 Dependence of $`X`$ on metallicity To further study the relation between $`X`$ and metallicity, we have added several recent results to the database given by Is97. These include data for NGC 7331 (3 points; Israel $`\&`$ Baas 1999), the Milky Way center and the center of NGC 253 (both from Dahmen et al. 1998), NGC 891 (Guélin et al. 1993; Israel et al. 1999), NGC 3079 (Braine et al. 1997; Israel et al. 1998a) as well as IC 10 (Madden et al. 1997) and D 478 in M 31 (Israel et al. 1998b). Although they were obtained somewhat differently from those in Is97, they are quite comparable (Figs. 1 and 2). In Fig. 1, radiation-corrected values $`X^{\prime }`$ = $`X/\sigma `$ are plotted against metallicity \[O\]/\[H\]. In Fig. 2, values of $`X`$ are plotted in the usual form. Figs. 1 and 2 yield the relations: $$\mathrm{log}X^{\prime }=\mathrm{log}X/\sigma =-4\mathrm{log}([O]/[H])+33.9$$ (3.1) and $$\mathrm{log}X=-2.5\mathrm{log}([O]/[H])+12.2$$ (3.2) With a larger sample size, these results differ only slightly from those published by Is97. The points representing high-metallicity regions in NGC 7331 extend rather well along the relation defined by the low-metallicity dwarfs, as do those representing the galaxy centers with a larger scatter. Both correlations are highly significant. Thus, eqn. (3.2) should in general be used to convert CO intensities observed in large beams to obtain H<sub>2</sub> column densities within a factor of about two. Note that the result may greatly differ from that obtained by applying ‘standard’ Milky Way conversion factors (i.e. lower by a factor of 4–10 for high-metallicity galaxy centers and higher by a factor of 10–100 for low-metallicity irregular dwarf galaxies). In Fig. 2, we have also included $`X`$ values derived by virial theorem application to CO clouds mapped with interferometer arrays, taken from Wilson (1995 – replacing her M 31 and M 33 metallicities by those from Garnett et al. 1999), Taylor $`\&`$ Wilson (1998) and Taylor et al. (1999). These points define a different dependence of $`X`$ on metallicity, with a much shallower slope of only -1.0. 
Generally, these $`X`$ values are much lower than those in Is97. ## 4 Discussion A steep dependence of $`X`$ on metallicity can be understood within the context of photon-dominated regions (PDRs). In weak radiation fields and at high metallicities, neither H<sub>2</sub> or CO suffers much from photo-dissociation, and the CO volume will fill practically the whole H<sub>2</sub> volume. However, when radiation fields become intense, CO photo-dissociates more rapidly than H<sub>2</sub> because it is less strongly selfshielding. Thus, the projected CO emitting projected area will shrink and no longer fill that of H<sub>2</sub>. The observed CO intensity, proportional to the shrinking emitting area, then requires use of a higher $`X`$ factor to obtain the correct, essentially unchanged H<sub>2</sub> mass. We have found that at constant metallicity, $`X`$ must be increased linearly with radiation field intensity. We may somewhat quantitatively estimate the effects of metallicity on CO (self)shielding and thereby on $`X`$. From Garnett et al. (1999) we find that over the range covered by Figs. 1 and 2, log \[C\]/\[H\] $``$ 1.7 log \[O\]/\[H\]. Thus, CO abundances drop significantly more rapidly than metallicity \[O\]/\[H\], as do dust abundances given by $`M_{dust}/M_{gas}`$ $``$ 2 log \[O\]/\[H\] (Lisenfeld $`\&`$ Ferrara 1998). Thus, a ten times lower metallicity (cf. Figs 1 and 2) implies a \[CO\]/\[ H<sub>2</sub>\] ratio lower by a factor of 50, and less CO shielding by a factor of 5000! The precise effect on $`X`$ depends on the nature of the cloud ensemble, but at lower metallicities PDR effects very quickly increase in magnitude. In a standard H<sub>2</sub> column, there is less CO to begin with, and this smaller amount is even less capable of resisting further erosion by photodissociation. With decreasing metallicity, CO is losing both its selfshielding and its dustshielding, so that even modestly strong radiation fields completely dissociate extended but relatively low-density diffuse CO gas, leaving only embedded smaller higher-density CO clumps intact. As CO intensities primarily sample emitting surface area, the loss of extended diffuse CO strongly reduces them, even when actual CO mass loss is still modest. Further metallicity decreases cause further erosion and molecular clumps of ever higher column density lose their CO gas. CO is thus occupying an ever-smaller fraction of the H<sub>2</sub> cloud which still fills most of the PDR. Its destruction releases a large amount of atomic carbon which is ionized and forms a large and bright cloud of CII filling the entire PDR. This and the expected anticorrelation between CO and CII intensities is indeed observed in the Magellanic Clouds and in IC 10 (Is97; Israel et al. 1996; Madden et al. 1997; Bolatto et al. 2000). As the strongly selfshielding H<sub>2</sub> is still filling most of the PDR (cf. Maloney $`\&`$ Black 1988), the appropriate value of $`X`$ becomes ever higher. In the extreme case of total CO dissociation, any amount of H<sub>2</sub> left defines an infinitely large value of $`X`$! In contrast, use of e.g. interferometer maps to find resolved CO clouds for the determination of $`X`$ introduces a strong bias in low-metallicity environments. In the PDR, only those subregions are selected which have most succesfully resisted CO erosion, with CO still filling a relatively large fraction of the local H<sub>2</sub> volume. 
The relatively low $`X`$ values thus derived, although appropriate for the selected PDR subregions, are not at all valid for the remaining PDR volume where CO has been much weakened or has disappeared; the PDR has a much higher overall $`X`$ value than the selected subregion. It is because of this bias that the array-derived points in Fig. 2 are much lower than the large-beam points and exhibit a much weaker dependence on metallicity. Incidentally, it also explains the suggested dependence of $`X`$ on observing beamsize (Rubio et al. 1993).
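As a practical footnote, relation (3.2) is trivial to apply. The sketch below is not from the paper; the oxygen abundances fed to it are round illustrative numbers (roughly solar-neighbourhood and SMC-like), and $`X`$ is assumed to be in the customary cm<sup>-2</sup> (K km s<sup>-1</sup>)<sup>-1</sup> units.

```python
def x_factor(twelve_log_oh):
    """H2-to-CO conversion factor from relation (3.2): log X = -2.5 log([O]/[H]) + 12.2."""
    log_oh = twelve_log_oh - 12.0            # log([O]/[H])
    return 10.0**(-2.5*log_oh + 12.2)

for label, abundance in [("solar-like, 12+log(O/H)=8.9", 8.9),
                         ("SMC-like,   12+log(O/H)=8.0", 8.0)]:
    print(f"{label}:  X ~ {x_factor(abundance):.1e}")
# the ~180-fold ratio between the two cases illustrates the steep rise of X at low metallicity
```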
no-problem/0001/astro-ph0001274.html
ar5iv
text
# Neutrino decay and the thermochemical equilibrium of the interstellar medium ## 1 Introduction The diffuse interstellar medium (ISM) is observed to be inhomogeneous with cold ($`T10^2`$ K) clouds embedded in a warmer ($`T10^4`$ K) intercloud gas (see, for example, the review of Kulkarni & Heiles 1988). The theoretical explanation for this structure was provided by Field et al. (1969) who showed that two thermally stable phases can coexist in pressure equilibrium over a limited range of pressures, close to those observed in the ISM. Since the two-phase model of Field et al. (1969), the thermal and ionization equilibrium of the ISM and its stability have been studied by many authors including different heating and ionizing processes (Black 1987; Kulkarni & Heiles 1988). The response of the ISM to variations in physical processes and parameters are of interest both to a better understanding of the ISM behavior and because of its central role in star formation and galaxy evolution models. Effects on the ISM equilibrium of variations in, for example, X-ray and far UV radiation fields, cosmic ray ionization and metal abundance have been studied by several authors (Shull & Woods 1985; Parravano 1987; Wolfire et al. 1995; Parravano & Pech 1997). In this work, we are interested in a particular process: the flux of ionizing photons coming from the radiative decay of neutrinos (Sciama 1990). Notwithstanding this is a speculative theory, Sciama has argued in a set of papers (Sciama 1993, 1995, 1997a, 1997b, 1998) that it can explain the widespread ionization far from the galactic disk and many other observational results. The simplification in Sciama’s work is to assume a temperature of $`10^4`$ K without explicitly solving the thermal equilibrium. The former was made by Dettmar & Schulz (1992) who showed that heat input associated with neutrino decay is too small to account for the observed ISM temperature. However, their conclusion that neutrino decay cannot be a dominant source of ionization could be mistaken, as was pointed out by Sciama (1993), because they neglected the existence of other known heating mechanisms. Our goal here is to analize, in a more complete and self-consistent way, the effect of ionization due to neutrino decay on the thermochemical equilibrium of the ISM. In Sect. 2 we discuss the physical processes included in this work and provide the basic equations. Sect. 3 is dedicated to a discussion of the results, and the main conclusions are summarized in Sect. 4. ## 2 Basic equations In order to analize the effect of neutrino decay photons on the thermochemical equilibrium, a simple model for the ISM is used. The included cooling mechanisms are: a) cooling by collisions of electrons with C<sup>+</sup>, Si<sup>+</sup>, Fe<sup>+</sup>, O<sup>+</sup>, S<sup>+</sup> and N ($`\mathrm{\Lambda }_e`$); b) cooling by collisions of neutral hydrogen with Si<sup>+</sup>, Fe<sup>+</sup> and C<sup>+</sup> ($`\mathrm{\Lambda }_H`$); and c) cooling due to Ly-$`\alpha `$ excitation by electrons ($`\mathrm{\Lambda }_{Ly}`$). All the cooling rates and the relative abundances were taken from Dalgarno & McCray (1972). 
We consider the following heating mechanisms: a) Interaction of cosmic rays with hydrogen atoms and electrons: $`\mathrm{\Gamma }_{cr}=\zeta _{cr}\left[5\times 10^{-12}(1+\mathrm{\Phi })n(1-\chi )+5.1\times 10^{-10}n_e\right]`$ $`\mathrm{ergs}\mathrm{cm}^{-3}\mathrm{s}^{-1},`$ (1) where $`n=n(\mathrm{HI})+n(\mathrm{HII})`$ is the total number density of hydrogen, $`n_e`$ is the number density of electrons and $`\chi =n(\mathrm{HII})/n`$ is the ionization degree of hydrogen. The number of secondary ionizations ($`\mathrm{\Phi }`$) was taken from Dalgarno & McCray (1972), and the primary ionization rate is assumed to be $`\zeta _{cr}=10^{-17}\mathrm{s}^{-1}`$ (Spitzer 1978; Black et al. 1990; Webber 1998). b) Heating by $`\mathrm{H}_2`$ formation on dust grains (Spitzer 1978): $$\mathrm{\Gamma }_H=4.4\times 10^{-29}n^2(1-\chi )\mathrm{ergs}\mathrm{cm}^{-3}\mathrm{s}^{-1}.$$ (2) c) Photoelectric heating from small grains and PAHs (Bakes & Tielens 1994): $$\mathrm{\Gamma }_{pe}=10^{-24}ϵG_on\mathrm{ergs}\mathrm{cm}^{-3}\mathrm{s}^{-1},$$ (3) where the heating efficiency ($`ϵ`$) is given by $`ϵ`$ $`=`$ $`{\displaystyle \frac{4.87\times 10^{-2}}{\left[1+4\times 10^{-3}\left(G_oT^{1/2}/n_e\right)^{0.73}\right]}}+`$ (4) $`{\displaystyle \frac{3.65\times 10^{-2}\left(T/10^4\right)^{0.7}}{\left[1+2\times 10^{-4}\left(G_oT^{1/2}/n_e\right)\right]}},`$ and $`G_o`$ is the far UV field normalized to its solar neighborhood value. We only consider ionization/recombination for hydrogen. The recombination rate is given by $$X^+=\chi n_e\alpha (T)\mathrm{s}^{-1},$$ (5) where $`\alpha (T)`$ is the recombination coefficient to all states except the ground one, and it was taken from Spitzer (1978). The rate of ionization by cosmic rays is given by $$X_{cr}^{-}=\zeta _{cr}(1+\varphi )(1-\chi )\mathrm{s}^{-1}.$$ (6) We also use the simple analytic fits provided by Wolfire et al. (1995) to estimate the ionization ($`X_{XR}^{-}`$) and heating ($`\mathrm{\Gamma }_{XR}`$) due to the soft X-ray background as functions of the column density ($`N_w`$) and the electron fraction ($`n_e/n`$). In this work we adopt $`N_w=10^{19}\mathrm{cm}^{-2}`$. In addition to the above sources of ionization, we also consider the photons produced by neutrino decay. The ionization due to this mechanism can be written in the form (Sciama 1990): $$X_\nu ^{-}=F_\nu \sigma (1-\chi )\mathrm{s}^{-1},$$ (7) where $`\sigma =6.3\times 10^{-18}\mathrm{cm}^2`$ is the absorption cross section of hydrogen and $`F_\nu `$ is the flux of hydrogen-ionizing photons produced by neutrino decay. In this work $`F_\nu `$ is a free parameter, although Sciama (1997a) estimated that a value of $`F_\nu \approx 3\times 10^4`$ photons cm<sup>-2</sup> s<sup>-1</sup> is necessary to produce an electron density $`n_e\approx 0.05`$ cm<sup>-3</sup> in the intercloud medium. The most recent (but still uncertain) estimation was $`F_\nu \approx 10^5`$ photons cm<sup>-2</sup> s<sup>-1</sup> (Sciama 1998). ## 3 Results and discussion The thermochemical equilibrium is calculated by solving simultaneously the equations $`\mathrm{\Lambda }=\mathrm{\Gamma }`$ and $`X^+=X^{-}`$, where $`\mathrm{\Lambda }=\mathrm{\Lambda }_e+\mathrm{\Lambda }_H+\mathrm{\Lambda }_{Ly}`$ is the total cooling rate, $`\mathrm{\Gamma }=\mathrm{\Gamma }_{cr}+\mathrm{\Gamma }_H+\mathrm{\Gamma }_{pe}+\mathrm{\Gamma }_{XR}`$ is the total heating rate, and $`X^{-}=X_{cr}^{-}+X_{XR}^{-}+X_\nu ^{-}`$ is the total ionization rate. 
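For orientation, the heating terms (1)–(4) are straightforward to evaluate; the sketch below simply transcribes them and is not part of the paper. The sample values of $`n`$, $`T`$, $`\chi `$ and $`G_o`$ are arbitrary illustrations, and the number of secondary ionizations $`\mathrm{\Phi }`$ is set to zero rather than taken from Dalgarno & McCray (1972).

```python
import numpy as np

def heating_rates(n, T, chi, G0=1.0, zeta_cr=1e-17, Phi=0.0):
    """Cosmic-ray, H2-formation and photoelectric heating, Eqs. (1)-(4),
    in erg cm^-3 s^-1; n in cm^-3, chi = n(HII)/n, G0 in solar-neighbourhood units."""
    ne = chi*n
    cr = zeta_cr*(5e-12*(1.0 + Phi)*n*(1.0 - chi) + 5.1e-10*ne)
    h2 = 4.4e-29*n**2*(1.0 - chi)
    y = G0*np.sqrt(T)/ne
    eff = 4.87e-2/(1.0 + 4e-3*y**0.73) + 3.65e-2*(T/1e4)**0.7/(1.0 + 2e-4*y)
    pe = 1e-24*eff*G0*n
    return cr, h2, pe

print(heating_rates(n=0.3, T=8000.0, chi=0.05))
```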
Fig. 1a shows the equilibrium pressure-density relations for $`G_o=1`$ and for three different values of $`F_\nu `$ ($`0`$, $`10^2`$ and $`10^4`$ cm<sup>-2</sup> s<sup>-1</sup>). The corresponding electron fractions ($`n_e/n`$) are shown in Fig. 1b, where it can be seen that, as expected, $`n_e/n`$ increases as $`F_\nu `$ increases. Most of the ionization for the cases $`F_\nu >10`$ cm<sup>-2</sup> s<sup>-1</sup> is due to photons coming from neutrino decay, and thus neutrino decay is a very efficient ionization mechanism. An increase in $`F_\nu `$ (and the consequent increase in the electron density) enhances the cooling by electron collisions ($`\mathrm{\Lambda }_e`$). Additionally, the dominant heating mechanism is always photoelectrons from grains and PAHs ($`\mathrm{\Gamma }_{pe}`$), which is almost unaffected by the flux $`F_\nu `$. Consequently, for a given density, when $`F_\nu `$ increases the thermal equilibrium is reached at lower temperatures in the regions where $`\mathrm{\Lambda }_e`$ dominates the cooling (high densities), and the equilibrium curve is shifted down (see Fig. 1a). In contrast, at low densities ($`n\lesssim 10^{-2}`$ cm<sup>-3</sup>), the dominant cooling mechanism is $`\mathrm{\Lambda }_{Ly}`$, which decreases when $`F_\nu `$ increases and, in this case, the equilibrium is reached at higher temperatures. Fig. 1a also shows that two regions of thermal stability, i.e., where the slope is positive (Field 1965), always exist, and two phases can coexist in pressure equilibrium if the interstellar pressure $`P\equiv p/k`$ is between a minimum ($`P_{min}`$) and a maximum ($`P_{max}`$) value. For the case $`F_\nu =0`$ we obtain $`P_{max}\approx 1100`$ K cm<sup>-3</sup> and $`P_{min}\approx 490`$ K cm<sup>-3</sup>; and if we assume an equilibrium pressure of $`P\approx 10^3`$ K cm<sup>-3</sup>, then there can be gas with $`T\approx 8800`$ K, $`n\approx 0.1`$ cm<sup>-3</sup> and $`n_e/n\approx 4.7\times 10^{-2}`$, and gas with $`T\approx 100`$ K, $`n\approx 10`$ cm<sup>-3</sup> and $`n_e/n\approx 1.7\times 10^{-3}`$ coexisting in equilibrium. These results agree roughly with observational estimations of the warm and cold neutral phases in the local ISM (Kulkarni & Heiles 1988). Furthermore, it can be seen in Fig. 1a that $`P_{max}`$, the maximum pressure value over which only the cold phase can exist, decreases as $`F_\nu `$ increases. For $`F_\nu =10^2`$ cm<sup>-2</sup> s<sup>-1</sup> and for $`F_\nu =10^4`$ cm<sup>-2</sup> s<sup>-1</sup> we obtain $`P_{max}\approx 640`$ K cm<sup>-3</sup> and $`P_{max}\approx 500`$ K cm<sup>-3</sup>, respectively; but observations indicate that the pressure in most regions of the Galactic plane is $`\approx 10^3`$ K cm<sup>-3</sup> (Jenkins et al. 1983). Therefore, there seems to be an inconsistency between the observed pressure in a multi-phase medium and high $`F_\nu `$ values. The basic reason for this behavior (that high $`F_\nu `$ values imply too low $`P_{max}`$ values) is that neutrino decay is a poor heating agent, while other processes can ionize and heat the ISM. For instance, when the cosmic ray ionization rate ($`\zeta _{cr}`$) is changed (keeping $`F_\nu =0`$ fixed), we find that $`P_{max}`$ decreases as $`\zeta _{cr}`$ increases, but for $`\zeta _{cr}\gtrsim 10^{-16}`$ s<sup>-1</sup> the heating by cosmic rays ($`\mathrm{\Gamma }_{cr}`$) becomes more important than heating by photoelectrons from grains and PAHs ($`\mathrm{\Gamma }_{pe}`$), and then $`P_{max}`$ begins to increase as $`\zeta _{cr}`$ is increased. The minimum $`P_{max}`$ value is $`\approx 900`$ K cm<sup>-3</sup>. 
However, more efficient heating mechanisms than those considered here can raise the equilibrium curve, increasing $`P_{max}`$. In order to illustrate this effect, we have plotted in Fig. 2 $`P_{max}`$ as a function of $`F_\nu `$ for three different values of $`G_o`$ (1, 10 and 20) and, in consequence, three different values of $`\mathrm{\Gamma }_{pe}`$ (believed to be an important heating mechanism in the ISM). We can see that as $`F_\nu `$ is increased $`P_{max}`$ decreases until a minimum value ($`\approx 480`$ K cm<sup>-3</sup> for $`G_o=1`$, $`\approx 590`$ K cm<sup>-3</sup> for $`G_o=10`$ and $`\approx 660`$ K cm<sup>-3</sup> for $`G_o=20`$) and after that remains constant. This occurs when $`n_e/n\approx 1`$ in the warm gas and, therefore, additional increases in $`F_\nu `$ do not produce additional changes in this phase. Fig. 2 shows that if an ISM pressure of $`10^3`$ K cm<sup>-3</sup> is assumed, a two-phase medium is possible for $`G_o=1`$ only if $`F_\nu \lesssim 10`$ cm<sup>-2</sup> s<sup>-1</sup>, and for $`G_o=20`$ only if $`F_\nu \lesssim 200`$ cm<sup>-2</sup> s<sup>-1</sup>. We conclude that high fluxes of neutrino decay photons ($`\gtrsim 10^3`$ cm<sup>-2</sup> s<sup>-1</sup>) can be consistent with a two-phase medium only if more efficient heating sources are acting on the gas. Sciama (1997a) estimated that $`F_\nu \approx 3\times 10^4`$ cm<sup>-2</sup> s<sup>-1</sup> is necessary to produce $`n_e\approx 0.05`$ cm<sup>-3</sup> at $`T\approx 6000`$ K. Fig. 3 shows $`n_e`$ as a function of $`F_\nu `$ for $`T=6000`$ K and for the same three values of $`G_o`$ given in Fig. 2. The desired electron density at this temperature can be reached at high $`F_\nu `$ values only if $`G_o\gtrsim 20`$ and, again, more efficient heating sources seem to be necessary. An interesting consequence has to be noted: neutrino beams from neighboring regions may induce the condensation of cold clouds, stimulating the formation of stars. The importance of star formation triggered by previously formed stars has been recognized by many authors (see the review of Elmegreen 1992). Triggering mechanisms are usually related to compression of the ISM by shock waves from nearby supernovae, because the transition warm gas $`\to `$ cold gas is promoted if the ISM pressure rises above $`P_{max}`$. Although this kind of mechanism can act only over short distances (compared with the Galaxy size) it can propagate over large scales, and the idea of self-propagated star formation has been used to study the formation of spatial patterns in galactic disks (Mueller & Arnett 1976; Gerola & Seiden 1978; Seiden & Gerola 1979; Schulman & Seiden 1990; Jungwiert & Palous 1994). However, the phase transition warm gas $`\to `$ cold clouds can also be obtained by decreasing $`P_{max}`$ under $`P`$ (assumed constant) if the local flux of decaying neutrinos increases due, for example, to an increase in the supernova explosion rate. This triggering mechanism depends on the propagation of neutrinos, and therefore can act over large distances in short time intervals. On the other hand, star formation can also inhibit the condensation of warm gas in different ways, self-regulating the star formation process even over large distances (Cox 1983; Franco & Shore 1984; Struck-Marcell & Scalo 1987; Parravano 1988, 1989). It has been shown that star formation inhibition (rather than stimulation) can also contribute to the formation and maintenance of spatial patterns in galaxies (Freedman & Madore 1984; Chappell & Scalo 1997). 
Stimulation and inhibition mechanisms of star formation must be acting simultaneously in the Galaxy, but it is not yet clear which spatial scales are important for each one. The effect of non-local star formation stimulation on the formation of spiral patterns in galaxies should be analysed in future models.

## 4 Conclusions

The thermochemical equilibrium of the ISM, including the decay of neutrinos into an ionizing photon flux $`F_\nu `$, was calculated. Over the range $`0\le F_\nu \le 10^5`$ cm<sup>-2</sup> s<sup>-1</sup> the equilibrium curve always shows two regions of stability (a warm and a cold phase) that can coexist in equilibrium if the ISM pressure is below a threshold value ($`P_{max}`$). The high $`F_\nu `$ values ($`\simeq 3\times 10^4`$ cm<sup>-2</sup> s<sup>-1</sup>) estimated by Sciama (1997a) to produce $`n_e\simeq 0.05`$ cm<sup>-3</sup> at $`T\simeq 6000`$ K can be consistent with observed ISM pressures only if more efficient processes are heating the gas. It was also shown that an increase in the neutrino flux (due, for example, to an increase in the supernova explosion rate) may stimulate the condensation of cold gas (and probably star formation) by decreasing $`P_{max}`$ below the ISM pressure value.

###### Acknowledgements. This work has been partially supported by CONDES of Universidad del Zulia. The authors are very grateful to an anonymous referee for several helpful suggestions, and to Cesar Mendoza for his assistance in the preparation of the manuscript.

Figure captions

* Thermal pressure as a function of the total hydrogen density in equilibrium for $`G_o=1`$ and for $`F_\nu =0`$ (solid line), $`F_\nu =10^2`$ (dashed line) and $`F_\nu =10^4`$ cm<sup>-2</sup> s<sup>-1</sup> (dot-dashed line).
* The electron fraction as a function of the total hydrogen density in equilibrium for $`G_o=1`$ and for $`F_\nu =0`$ (solid line), $`F_\nu =10^2`$ (dashed line) and $`F_\nu =10^4`$ cm<sup>-2</sup> s<sup>-1</sup> (dot-dashed line).
* The maximum pressure $`P_{max}`$ for the coexistence of warm and cold gas as a function of $`F_\nu `$ for $`G_o=1`$ (solid line), $`G_o=10`$ (dashed line) and $`G_o=20`$ (dot-dashed line).
* The electron density as a function of $`F_\nu `$ for $`T=6000`$ K and for $`G_o=1`$ (solid line), $`G_o=10`$ (dashed line) and $`G_o=20`$ (dot-dashed line).
# Stochastic Processes and Thermodynamics on Curved Spaces

## 1 Diffusion, Kinetics and Thermodynamics in Locally Isotropic and Anisotropic Spacetimes

We generalized the stochastic calculus on Riemannian manifolds for anisotropic processes and for fiber bundles provided with a nonlinear connection structure. Lifts into the total space of linear frame bundles were used in order to consider Brownian motions, Wiener processes and Langevin equations in a covariant fashion. The concept of thermodynamic Markovicity and the Chapman–Kolmogorov equations were analyzed in connection with the possibility of obtaining information about pair-correlation functions on curved spaces. Covariant Fokker–Planck type equations were derived for both locally isotropic and anisotropic gravitational and matter field interactions. Stability of equilibrium and nonequilibrium states, evolution criteria, fluctuations and dissipation are examined from the viewpoint of a general stochastic formalism on curved spaces. The interrelation between classical statistical mechanics, thermodynamics and kinetic theory (the Bogolyubov–Born–Green–Kirkwood–Yvon hierarchy, and the derivation of the Vlasov and Boltzmann equations) was studied on Riemannian manifolds and vector bundles. The covariant diffusion and hydrodynamical approximations, the kinematics of relativistic processes, the transfer and production of entropy, dynamical equations and thermodynamic relations were consequently defined. Relativistic formulations and anisotropic generalizations were considered for extended irreversible thermodynamics.

## 2 Thermodynamics of Black Holes with Local Anisotropy

The formalism outlined in the previous section was applied to cosmological models and black holes with local spacetime anisotropy. We analyzed the conditions under which the Einstein equations with cosmological constant and matter (in general relativity and in low-dimensional and extended variants of gravity) describe generic locally anisotropic (la) spacetimes. Following the De Witt approach, we set up a method for deriving energy-momentum tensors for locally anisotropic matter. We speculated on black la-hole solutions induced by locally anisotropic splittings from tetradic, spinor, gauge and generalized Kaluza–Klein–Finsler models of gravity. Possible extensions of la-metrics to string and brane models were considered. The thermodynamics of (2+1)-dimensional black la-holes was discussed in connection with a possible statistical mechanics background based on locally anisotropic variants of Chern–Simons theories. We proposed a variant of irreversible thermodynamics for black la-holes. We also considered constructions and calculations of the thermodynamic parameters of black la-holes, in the framework of approaches to thermodynamic geometry for nearly equilibrium states, and the effects of local nonequilibrium and questions of stability were analyzed by using thermodynamic metrics and curvatures.

Acknowledgements: The author thanks the Organizers and Deutsche Forschungsgemeinschaft for kind hospitality and support of his participation at Journees Relativistes 99.
# Patterns and localized structures in bistable semiconductor resonators

## Abstract

We report experiments on the spatial switching dynamics and steady-state structures of passive nonlinear semiconductor resonators of large Fresnel number. Extended patterns and switching front dynamics are observed and investigated. Evidence for the localization of structures is given.

It has recently become apparent that pattern formation in optics is related in many ways to other fields of physics. One simple optical pattern-forming system is a nonlinear resonator, the subject of recent investigations. The analogy of resonator optics with fluids, in particular, suggests a variety of phenomena not considered for optics before. For active resonators (lasers, lasers with nonlinear absorber, 4-wave mixing oscillators) predicted phenomena, such as vortices and spatial solitons, have already been demonstrated experimentally. On the other hand, experimental results for passive resonators, recently studied extensively theoretically with a view to possible applications, are limited to the early demonstration of structures in resonators containing liquid crystals or alkali vapours as the nonlinear medium.

We report here the first experimental investigations of structure formation in passive nonlinear semiconductor resonators. These systems, apart from their possible usefulness in applications, show phenomena in optics analogous to those found in other fields of nonlinear physics (e.g. optical Turing instability or competition between pattern formation and switching). Our observations include regular (hexagonal) pattern formation and, in the presence of optical bistability (OB), space- and time-resolved switching waves. We also present observations which indicate mutual locking of OB switching waves to form localized structures (spatial solitons). Further evidence of localization effects is given by independently switching bright spots of a hexagonal structure.

The resonator used for the experiment consists of GaAs/GaAlAs multiple quantum well (MQW) material (18 periods of GaAs/Ga<sub>0.5</sub>Al<sub>0.5</sub>As with 10 nm/10 nm thickness) between Bragg mirrors of about 99.5 $`\%`$ reflectivity on a GaAs substrate. Properties of these microresonators have been described elsewhere. The optical resonator length is approximately 3 $`\mu `$m with a corresponding free spectral range of 50 THz. The resonator thickness varies over the usable sample area (10 x 20 mm). So, by choosing a particular position on the sample, it is possible to vary the cavity resonance wavelength such that its downshift $`\mathrm{\Delta }\lambda `$ below the exciton line lies in the range 0 $`<`$ $`\mathrm{\Delta }\lambda `$ $`<`$ 25 nm. The absorption of the MQW material depends on $`\mathrm{\Delta }\lambda `$. A typical finesse of the resonator far below the exciton energy is 200, which corresponds to 125 GHz half width at half maximum (HWHM) of the cavity resonance. The radiation source used is a continuous-wave Ar<sup>+</sup>-pumped Ti:Al<sub>2</sub>O<sub>3</sub> laser. To realize a reasonable Fresnel number, the radiation has a spot width on the sample of about 60 $`\mu `$m. The nonlinearity used is predominantly dispersive and defocusing. The characteristic (bistable/monostable) and shape of the resonator response depend on both $`\mathrm{\Delta }\lambda `$ and the detuning $`\delta \lambda `$ of the laser field from the resonator resonance. As the substrate for the resonator is opaque, all observations are done in reflection.
The OB characteristic describing the reflection returned from the input surface is "N-shaped", and complementary to the intracavity field characteristic, which has the familiar "S-shape". To avoid confusion, we will therefore refer to "switched" and "unswitched", rather than "upper" and "lower", states. Within the illuminated area, smaller areas ($`\approx `$ 8 $`\mu `$m) can be irradiated by short pulses ($`\approx `$ 0.1 $`\mu `$s) to initiate local switching. The light of these "injection" pulses is polarized perpendicularly to, and is "incoherent" with, the "background" or "holding" beam. Address pulses thus produce an injection of photocarriers, locally changing the optical properties of the resonator. All observations are made within times lasting a few microseconds, repeated at 1 kHz, to eliminate thermal effects. Acousto-optic modulators (AOM) serve for fast intensity modulation of the injected optical fields, with a time resolution of 50 ns. They can be programmed to produce complex pulse shapes, and to synchronize the drive and address pulses. A variety of detection equipment is employed. A CCD camera records two-dimensional images, but with slow time response. Images are thus most useful for stable steady-state structures. Observations of intrapulse dynamics are done with a fast detector (2 ns), which monitors the incident power, and measures the reflected light intensity with a spatial resolution of 4 $`\mu `$m. The good repeatability of the spatio-temporal dynamics allows the intensity dynamics on a diameter of the illuminated area to be mapped, by successively imaging the points of the diameter onto the detector while recording the intensity in each point as a function of time.

At large $`\mathrm{\Delta }\lambda `$ and small negative $`\delta \lambda `$ ($`|\delta \lambda |\lesssim `$ 1 HWHM) hexagonal structures form (Fig. 1). Hexagonal patterns appear in many different fields of physics, and have also been predicted, but not previously observed, for nonlinear resonators. The period of the hexagonal lattice is about 20 $`\mu `$m, consistent with typical predictions for semiconductor models. Our interpretation of Fig. 1 is that it is the result of a supercritical modulational instability of the unswitched branch. There is no observable threshold intensity for this pattern, however. We attribute this to a blurring of the threshold due to spatial and temporal variations and fluctuations of the input beam. Note that Fig. 1 is the reflected signal, so the imaged negative hexagons (lattice of dark spots) correspond in terms of intracavity intensity to positive hexagons (lattice of bright spots). The lattice period is measured to scale linearly with $`1/\sqrt{\delta \lambda }`$, as expected for tilted waves, which we expect to be the basic mechanism of this hexagon formation. We observe a change from negative to positive hexagons with decreasing $`\delta \lambda `$. The contrast of the pattern decreases with increasing $`\delta \lambda `$. Above $`\delta \lambda `$ = 1.5 HWHM, OB switching occurs before a pattern with notable contrast develops when the intensity is increased. Since the illumination is with a Gaussian beam, the central part of the field switches first. The switched domain is separated from the surrounding unswitched area by a stationary switching front. Such a switching front moves if the incident light intensity is different from the "Maxwellian" intensity, the intensity for which the potential maxima for the lower and upper branch are equally deep.
The front moves into the unswitched area if the local incident intensity is larger than the Maxwellian intensity, and vice versa. It is stationary only on the contour on which the incident light intensity equals the Maxwellian intensity. Thus switching fronts can be moved by changing the incident light intensity, as Fig. 2 shows. Fig. 2a gives the incident and reflected intensity at the center of the Gaussian beam as a function of time; 2b and 2c show the intensity along a diameter of the incident and reflected beam respectively as a function of time in the form of equiintensity contours. The recording in Fig. 2d gives the reflectivity of the resonator along the same diameter of the Gaussian beam. The light intensity is programmed here by the AOM as described above in order to study switching-wave dynamics. An initial peak (2a) switches most of the beam cross section to the low-reflectivity state. During the rapid initial outward motion of the switching front (as well as the inward motion at switch-off) the position of the switching front is probably not adiabatically controlled by the light field. As predicted, for adiabatic motion the switching front follows an intensity contour of the varying incident light, as is evident by comparing 2b and 2c.

As opposed to the extended patterns (Fig. 1), localized structures (spatial cavity solitons) have been predicted. Such structures can form due to interaction between switching fronts, in which context they have been called "diffractive autosolitons". If switching fronts have "oscillating tails", the gradients associated with these oscillations can mutually trap two switching fronts. In 2D such "oscillating tails", which can occur due to a nearby modulational instability, may stabilize a circular switching front. This is equivalent to a localized structure or spatial soliton, which is free to move as a whole unless constrained by boundary effects. This mechanism of formation of such spatial solitons was studied theoretically and has already been demonstrated experimentally on a system with phase bistability. Oscillating tails are readily observable in the present system by choosing appropriate $`\delta \lambda `$ and $`\mathrm{\Delta }\lambda `$, as seen in Fig. 3. Similar front-locking may thus occur in systems with intensity bistability such as the present one. The stabilization of front distances can thus be used as one criterion for the existence of spatial solitons. Another criterion is evidently the moveability of localized structures, which, furthermore, implies bistability of the structure. Fig. 4 shows a corresponding observation. The intensity on a diameter of the illuminated field is recorded as a function of time. The background illumination is set in the middle of the bistability range. At some distance from the beam center a narrow (8 $`\mu `$m) injection pulse can locally switch a small area to the upswitched branch. Fig. 4 shows how this switched spot moves to the center of the illuminated area (due to the intensity gradient of the background field). It would appear that the two points of the circular switching front, which can be followed in Fig. 4, move in parallel, suggesting a moving stable structure. After the structure has reached the center of the field, where the intensity is maximal, it remains there stationarily. When there are two or more local intensity maxima in the background illumination, the final position of the up-switched structure can be at any of the maxima. Fig.
5 shows a background illumination which has an intensity saddle at the center of the picture and local maxima at the center of the left and right half of the picture. Depending on whether the injection is (anywhere) in the right or the left half of the picture, the final position of the up-switched structure will be the left or right intensity maximum respectively (Fig. 5a,b). Equally, up-switched structures can exist simultaneously at the two maxima (Fig. 5c). At small $`\delta \lambda `$, when the background intensity is increased beyond the appearance of a high-contrast hexagonal pattern, a noticeably small spot ($`\approx `$ 10 $`\mu `$m) appears as the switched-up structure, Fig. 6. For a better measurement signal-to-noise ratio we have recorded Figs. 5 to 8 in a plane 400 $`\mu `$m below the sample surface. In this plane the structures appear bright and regular, with high contrast compared to the structures in the sample plane, which allowed the recording averaging times to be reduced. Because of the small size and high brightness of the structure in Fig. 6b, further tests were done on its soliton properties.

The first test (bistability) is shown in Fig. 7. Fig. 7a shows intensity contours of the incident light in the same way as Fig. 2b. Fig. 7b shows the light intensity reflected from the sample. The initial background intensity is chosen in the middle of the bistability range. A short (pulsed) increase of intensity (at ca. 0.5 $`\mu `$s) is then seen to switch the sample up. The small bright structure remains up-switched when the intensity is returned to its initial value below the switching limit. The small structure is switched back off by a momentary decrease of the illumination to below the lower bistability limit (at ca. 1.5 $`\mu `$s). This is clear proof of bistability, as expected for a spatial soliton. If this structure were just a small up-switched domain, then the switching fronts surrounding it should follow an equiintensity contour of the background field. If, on the other hand, there is "locking" of the switching front, it should not follow precisely the intensity changes. The result of the test is shown in Fig. 8: 8a and 8b give intensity contours for the incident and reflected light respectively, and 8c gives the reflectivity. At the beginning the small structure is created by an initial pulse above the switching threshold, followed by a reduction of the input intensity to a value in the middle of the bistability range. The test for robustness is then done by variation of the intensity (within the bistability range). Fig. 8b,c shows that the spatial variation of the reflected intensity, and thus of the switching front, is much less marked than that of the incident light intensity, in contrast to what we observe in Fig. 2. This indicates a "robustness" of the structure indicative of self-localization.

Fig. 9 gives evidence that in hexagonal patterns the individual bright spots can have the properties of localized structures or spatial solitons. 9a shows a hexagonal structure seemingly similar to Fig. 1. The intensity of the background field is in the middle of the bistability range. Injection with a narrow beam in a short pulse, aimed at the bright spot marked "a" in 9a, switches it, as seen in 9b, to a dark "defect". When the injection beam is aimed at the adjacent spot (marked "b"), this adjacent spot is switched off, all other spots remaining unchanged. To demonstrate that the switched spot is stable, we show in Fig.
9d the output from that region as a function of time during the 2 $`\mu `$s duration of the main input pulse. In the upper trace, the address pulse is too weak to induce switching, and the output recovers within 100 ns to its original steady value. In the lower trace, switching does occur, and the output remains almost constant at a level less than 20 $`\%`$ below its original value, until the holding light is returned to zero. Thus, unlike a coherent extended hexagon pattern, the structure of Fig. 9a behaves like a collection of localized structures, which can be switched independently.

Concluding, we have shown for the first time the formation of hexagon patterns, as predicted for non-linear passive resonators. Evidence was given of the soliton properties of small spatial structures.

Acknowledgements This work was supported by ESPRIT LTR project PIANOS. Discussions with the project partners are acknowledged. We thank W.J.Firth for valuable suggestions. We also thank K.Staliunas for developing clarifying concepts such as the linear filtering of spatial noise by a 2D resonator.
# Dynamics of the X-ray clusters Abell 222, Abell 223 and Abell 520 based on observations made at ESO-La Silla (Chile), at the Canada France Hawaii Telescope and at the Pic du Midi Observatory (France).

## 1 Introduction

Clusters of galaxies are the largest gravitationally bound systems, with the following components contributing to their total mass: dark matter, which is the dominant component; the hot X-ray emitting gas, which is the dominant baryonic component; and the stars and gas in galaxies. A valuable approach to determining the distribution of these components is offered by studying the relation between the global cluster properties which can be directly measured, such as the velocity dispersion $`\sigma `$, the total luminosity $`L`$, effective radii $`R_e`$, and the morphological type distribution. Correlations between these intrinsic parameters have been found for many galaxy clusters, e.g. between richness and velocity dispersion (Danese, de Zotti & di Tullio 1980, Cole 1989), and between radius and luminosity (West, Oemler & Deckel 1989). Evidence pointing towards a close link between morphology and environment is provided by the morphological segregation (Dressler 1980) and the correlation between type and velocity dispersion (Sodré et al. 1989), which we call kinematical segregation. Knowing whether these phenomena are due to initial conditions, environmental effects or both is one of the main questions to be answered in the study of these structures.

Morphological types, however, can be expensive to obtain. An interesting alternative is to use spectral classification to obtain spectral types (Sodré & Cuevas 1994, 1997; Folkes, Lahav & Maddox 1996). This procedure is based on a principal component analysis (PCA) of the spectra and allows one to define a spectral classification that presents some advantages over the usual morphological classification: it provides quantitative, continuous and well defined types, avoiding the ambiguities of the intrinsically more qualitative and subjective morphological classification. This method has also been applied to the study of the ESO-Sculptor Survey (Galaz & de Lapparent 1998) and to the Las Campanas Redshift Survey (Bromley et al. 1998), and it was found that the PCA allows galaxies to be classified in an ordered and continuous spectral sequence, which is strongly correlated with the morphological type.

In this paper we present an analysis of three medium-distant X-ray clusters, A222, A223 and A520, all of them belonging to the Butcher, Oemler & Wells (1983, hereafter BOW83) photometric sample. For these clusters, very few redshifts exist, as found in the NED database<sup>1</sup><sup>1</sup>1The NASA/IPAC Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. We have obtained new spectra which have enabled us to perform a preliminary study of their dynamical properties. The projected galactic densities of the clusters were compared to the X-ray emission images in order to search for substructures inside these systems. From the observed spectra we have performed the spectral classification of the galaxies, allowing us, for the first time, to study the morphological and kinematical segregations present in these clusters.

In Section 2 we describe the observations, instrumentation, data reduction techniques and comparisons with previous measurements.
In Section 3 the spatial distribution and kinematical properties are analyzed and discussed, together with mass and mass-to-light ratio estimates. In Section 4 we analyze the spectral classification of the galaxies. Section 5 discusses the morphological and kinematical segregations. Finally, in Section 6 we present a summary of our main results.

## 2 Observations and Data Reductions

The analysis of the dynamical state of the clusters discussed here is based on a large set of velocities for the cluster galaxies. Multi-object spectroscopy has been performed at CFHT in November 1993 and at the ESO 3.60m telescope in December 1995. The instrumentation used at CFHT was the Multi Object Spectrograph (MOS) using the grism O300 with a dispersion of 240 Å/mm and the STIS CCD of 2048x2048 pixels of 21$`\mu `$m, leading to a dispersion of 5 Å/pixel. The instrumentation used at ESO was the ESO Faint Object Spectrograph and Camera (EFOSC) with the grism O300 yielding a dispersion of 230 Å/mm and the TEK512 CCD chip of 512x512 pixels of 27$`\mu `$m, giving a resulting dispersion of 6.3 Å/pixel. We completed the observations during an observing run at the 2.0m Bernard Lyot telescope at the Pic du Midi observatory in January 1997, using the ISARD spectrograph in its long-slit mode with a dispersion of 233 Å/mm and with the TEK chip of 1024x1024 pixels of 25$`\mu `$m, corresponding to 5.8 Å/pixel. Typically, two exposures of 2700s each were taken for fields across the cluster. Wavelength calibration was done using arc lamps before each exposure (Helium-Argon lamps at CFHT, Helium-Neon at ESO and Mercury-Neon at Pic du Midi).

The data reduction was carried out with IRAF<sup>2</sup><sup>2</sup>2IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. using the MULTIRED package (Le Fèvre et al. 1995). The sky spectrum has been removed from the data in each slit by using measurements at each side of the galaxy spectra. Radial velocities have been determined using the cross-correlation technique (Tonry and Davis 1979) implemented in the RVSAO package (Kurtz et al. 1991, Mink & Wyatt 1995), with radial velocity standards obtained from observations of late-type stars and previously well-studied galaxies. From the total set of data we have retained 78 successful spectra of objects (28 for A222, 25 for A223 and 25 for A520) with a signal-to-noise ratio high enough to derive the measurement of the radial velocity with good confidence, as indicated in Table 1 by the R value of Tonry and Davis (1979). Note that since the templates used in the reduction were the same for all spectra, the R values contained in Table 1 are proportional to the signal-to-noise ratio of the spectra. Star contamination was very low (only 1 of the selected targets turned out to be a star). To help the reader appreciate the kind of data discussed in this paper, we present in Figure 1 two spectra, one with a high signal-to-noise ratio (R=6.59) and the other with a low signal-to-noise ratio (R=3.96).

Table 1 lists positions and heliocentric velocities for the 78 individual galaxies in the clusters. For each galaxy we also give $`f`$ and $`j`$ band photometry from BOW83. The table is completed with a few galaxies observed by Newberry, Kirshner & Boroson (1988) and Sandage, Kristian & Westphal (1976). The table entries are:

1. galaxy number
2. right ascension (hour, min, sec)
3. declination (degree, minute, second)
4. $`f_{57}`$ magnitude from Butcher et al. (1983).
5. $`j-f_{57}`$ color from Butcher et al. (1983).
6. heliocentric radial velocity with its error in $`\mathrm{km}\,\mathrm{s}^{-1}`$
7. R-value derived from Tonry & Davis (1979).
8. instrumentation and notes, c: 3.60m CFHT telescope, e: 3.60m ESO telescope, l: Newberry, Kirshner & Boroson (1988), p: 2.0m BL telescope, s: Sandage, Kristian and Westphal (1976).

In order to test the external accuracy of our velocities, we compared our redshift determinations ($`V_P`$) with data available in the literature ($`V_L`$) for 5 galaxies observed in common (4 from Newberry, Kirshner & Boroson 1988, and 1 from Sandage, Kristian and Westphal 1976). The mean value of $`(V_P-V_L)`$ is very small, 27 km s<sup>-1</sup>, and the null hypothesis that these two sets have the same variance but significantly different means can be rejected at the 99% level.

## 3 Spatial distribution and kinematical properties

### 3.1 The binary cluster A222 + A223

As already noticed by Sandage, Kristian & Westphal (1976), these two neighboring clusters have nearly the same redshift and probably constitute an interacting system which is going to merge in the future. Both are dominated by a particularly bright cD galaxy. They have a richness class R=3 and are X-ray luminous, with $`L_X(7\mathrm{keV})=3.7\pm 0.7\times 10^{44}\,\mathrm{erg}\,\mathrm{s}^{-1}`$ and $`1.5\pm 0.6\times 10^{44}\,\mathrm{erg}\,\mathrm{s}^{-1}`$ for A222 and A223, respectively (Lea and Henry, 1988). The BOW83 sample covers only the central regions of these two clusters and, in order to study the galaxy distribution in these systems, as well as to estimate the projected density for the galaxies in our sample (see below), we have built a more extensive, although shallower, galaxy catalog, covering a region of $`45^{\prime}\times 45^{\prime}`$ centered on the median position of the two clusters. This catalogue, with 356 objects, was extracted from Digital Sky Survey (DSS) images, using the software SExtractor (Bertin & Arnouts 1996). It is more than 90% complete down to BOW83 magnitudes $`f_{57}\simeq 19`$. Figure 2 displays the significance map for the projected densities of galaxies in the region (cf. Biviano et al. 1996 for details on this type of map), as derived from the DSS sample. The overall distribution of galaxies is elongated along the direction defined by the two main clusters, showing extensions on both sides and suggesting that both clusters belong to the same – probably still collapsing – structure. In Figures 3a and 3b we display the isophotes of a wavelet reconstruction (Rué & Bijaoui, 1997) of a ROSAT HRI X-ray image, superposed on the significance maps of the projected density of galaxies (dashed lines). In contrast with Figure 2 above, these maps were constructed by taking galaxies from the deeper BOW83 catalog, which is complete up to $`f_{57}\simeq 22`$. This resulted in many more features in the density maps than above, due to the introduction of the faint galaxy population of the clusters. The X-ray emission roughly follows the density contours of these maps. However, the apparent regularity of the X-ray isophotes may be due to the smoothing effect of the wavelet reconstruction, which favors larger scales against the smallest ones.
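For illustration, a projected-density map of the kind shown in Figs. 2 and 3 can be sketched with a fixed Gaussian kernel and a crude scrambling test. This is only a simplified stand-in for the adaptive significance maps of Biviano et al. (1996); the positions, field size and kernel width are assumed inputs:

```python
import numpy as np

def density_map(x, y, extent, npix=128, sigma=2.0):
    """Projected galaxy density on a grid, smoothed with a fixed Gaussian kernel.
    x, y are positions (e.g. in arcmin); sigma is the kernel width in pixels."""
    xmin, xmax, ymin, ymax = extent
    grid, _, _ = np.histogram2d(x, y, bins=npix,
                                range=[[xmin, xmax], [ymin, ymax]])
    k = np.arange(npix) - npix // 2
    gx, gy = np.meshgrid(k, k, indexing="ij")
    kernel = np.exp(-0.5 * (gx**2 + gy**2) / sigma**2)
    kernel /= kernel.sum()
    # Circular convolution through FFTs is adequate for a smoothing sketch
    return np.real(np.fft.ifft2(np.fft.fft2(grid) *
                                np.fft.fft2(np.fft.ifftshift(kernel))))

def significance_map(x, y, extent, nrand=200, **kw):
    """Compare the observed smoothed density with maps built from uniformly
    scrambled positions; the result is in units of the scrambled-field scatter."""
    obs = density_map(x, y, extent, **kw)
    xmin, xmax, ymin, ymax = extent
    rng = np.random.default_rng(0)
    rand = np.array([density_map(rng.uniform(xmin, xmax, x.size),
                                 rng.uniform(ymin, ymax, y.size),
                                 extent, **kw) for _ in range(nrand)])
    return (obs - rand.mean(axis=0)) / rand.std(axis=0)
```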
Figures 3a and 3b show that the general structure of both clusters, A222 and A223, is extremely complex, presenting several clumps of galaxies in projection on their central regions, the reality of which is hard to assess in the absence of much more radial velocity data than provided in this paper. Moreover, the fact that this complexity is not seen in the brighter DSS projected density indicates that the projected clumps are mainly populated by faint galaxies. The X-ray emission is centered, for both clusters, on their main galaxy concentrations, but this does not correspond to the location of their brightest members, as is usually observed in nearby rich clusters of galaxies. All these pieces of data support the view that we are facing a dynamically unrelaxed, young system. For the more X-ray luminous cluster A223 (Figure 3b), we notice the presence of an extended emission to the North, centered near the position of the brightest galaxy of the cluster (the northeastern one). This emission is almost coincident with a projected substructure of galaxies delineated by the isopleth curves, suggesting that this is a real feature of the cluster.

We have used the ROSTAT statistical package (Beers et al. 1990; see also Ribeiro et al. 1998 and references therein) to analyze the velocity distributions obtained in this paper. ROSTAT provides several robust estimators for the location, scale and shape of one-dimensional data sets. It includes a variety of normality tests as well as a conservative unimodality test of the distribution (the Dip test, see Hartigan & Hartigan 1985). The shape estimators given by ROSTAT are the Tail Index (TI) and the Asymmetry Index (AI) (for a thorough discussion of these two indexes, see Bird & Beers 1993). Since we will be dealing with poor samples, we have restricted ourselves to the use of the so-called biweighted estimators of location and scale (see Beers et al. 1990 for a definition), which generally perform better in these cases. We notice, however, that the values of the biweighted estimators obtained here differ negligibly from those obtained using other commonly used estimators, such as the conventional mean and dispersion obtained from recursive $`3\sigma `$ clipping (Yahil & Vidal, 1977), or median estimators.

Our radial velocity sample for the A222 and A223 system consists of 53 spectroscopically measured galaxies, to which we added another 9 taken from the literature (see Table 1). Although by no means complete, this sample is spatially reasonably well distributed, allowing some preliminary analysis. Figure 4 shows the corresponding wedge velocity diagram in right ascension and declination for A222 and A223. After removing some obvious background and foreground galaxies – also confirmed by the recursive $`3\sigma `$ clipping – we obtained a sample of 50 galaxies with measured velocities corresponding to the main peak seen in the inset of the upper panel of Figure 5, which displays the radial velocity distribution for the whole observed sample. The normality tests provided by the ROSTAT package fail to reject a Gaussian parent population for this sample. However, the Dip statistic has a value of 0.067, which is enough to reject the null hypothesis of unimodality at significance levels better than 10%. This is understandable, for this sample refers to both components of the binary system A222 and A223. Its mean velocity is $`V_{bi}=63833\pm 165`$ km s<sup>-1</sup>, with dispersion $`\sigma _{bi}=1157\pm 119`$ km s<sup>-1</sup>.
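A minimal sketch of the biweight estimators and of the recursive clipping used above is given below. The actual analysis relies on the ROSTAT package; the formulas follow Beers, Flynn & Gebhart (1990), and the velocity array is an assumed input (in km/s):

```python
import numpy as np

def biweight_location(v, c=6.0):
    """Biweight estimate of the central velocity (Beers, Flynn & Gebhart 1990)."""
    m = np.median(v)
    mad = np.median(np.abs(v - m))
    u = (v - m) / (c * mad)
    w = np.abs(u) < 1.0
    return m + np.sum((v[w] - m) * (1 - u[w]**2)**2) / np.sum((1 - u[w]**2)**2)

def biweight_scale(v, c=9.0):
    """Biweight estimate of the velocity dispersion."""
    m = np.median(v)
    mad = np.median(np.abs(v - m))
    u = (v - m) / (c * mad)
    w = np.abs(u) < 1.0
    num = np.sqrt(np.sum((v[w] - m)**2 * (1 - u[w]**2)**4))
    den = np.abs(np.sum((1 - u[w]**2) * (1 - 5 * u[w]**2)))
    return np.sqrt(v.size) * num / den

def sigma_clip(v, nsig=3.0, niter=10):
    """Recursive 3-sigma clipping (Yahil & Vidal 1977) to remove interlopers."""
    keep = np.ones(v.size, dtype=bool)
    for _ in range(niter):
        mu, sig = v[keep].mean(), v[keep].std(ddof=1)
        new = np.abs(v - mu) < nsig * sig
        if new.sum() == keep.sum():
            break
        keep = new
    return keep
```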
These values place the system at redshift $`z=0.21292`$, slightly higher than the value quoted by Struble & Rood (1987). The lower panels of Figure 5 display the separate velocity distributions for the southern subsample (30 galaxies), which corresponds to A222, and for the northern one (20 galaxies), corresponding to A223 (see Figure 2). The normality tests do not reject Gaussian parent populations for either of these subsamples. The A222 galaxies have slightly higher velocities than those of A223: the mean velocities are $`V_{bi}=64242\pm 194`$ km s<sup>-1</sup> for A222 and $`V_{bi}=63197\pm 266`$ km s<sup>-1</sup> for A223. However, if we remove 4 galaxies belonging to the bridge connecting the two clusters (see Figure 2), the mean velocity of A223 increases to $`V_{bi}=63348\pm 295`$ km s<sup>-1</sup>, slightly reducing the significance of the velocity difference. Note that 3 of these galaxies lie in the low-velocity tail of A223, as displayed in Figure 5. The velocity dispersions of the two clusters are about the same: $`\sigma _{bi}=1013\pm 150`$ km s<sup>-1</sup> for A222 and $`\sigma _{bi}=1058\pm 160`$ km s<sup>-1</sup> for A223 ($`1123\pm 191`$ when the bridge galaxies are removed). Table 2 gives mass and mass-to-light ratio estimates for each cluster separately, obtained with the virial and projected mass estimators given by Heisler et al. (1985), for the case where all the cluster mass is assumed to be contained in the galaxies, and by Bahcall & Tremaine (1981), under the hypothesis that the galaxies are test particles orbiting in a spherical dark matter potential. Total $`j`$ luminosities were estimated from the BOW83 catalog, which is complete up to $`j=22`$.

### 3.2 The cluster A520

The analysis of the A520 data proceeded along the same lines as that of the A222+A223 system. Figure 6 displays the significance map of the projected densities of the DSS galaxies in the field of A520. This figure also displays the positions of 21 galaxies having measured radial velocities and belonging to the cluster, as discussed below. As in the case of A222/3, here also we can see that the main concentration has two extensions, possibly due to infalling clumps of galaxies. In Figure 7 we display the X-ray isophotes of a wavelet reconstruction of the ROSAT/HRI image of A520, superposed on the significance map of the projected density of galaxies (dashed lines). As before, this map was constructed by taking galaxies from the BOW83 catalog, showing that the cluster may be much more complex than could be noticed from the DSS map, although the reality of the substructures shown here cannot be ascertained in view of the paucity of radial velocity data. As can be seen from this figure, although the main X-ray emission roughly follows the projected density of galaxies, it seems displaced relative to the main concentration of A520, located near the center of this field. The peak X-ray emissivity comes from a very compact region which may be consistent with a point-like source, almost coincident with a “blue” galaxy ($`j-f_{57}=1.22`$), originally assigned by Sandage, Kristian & Westphal (1976) as one of the brightest A520 members (it ranks 7th in $`j`$ magnitude but only 19th in $`f_{57}`$). In fact, as displayed in Figure 7, the $`f_{57}`$-brightest members of A520 do not seem to belong to any of the main galactic clumps observed, a situation which is similar to that already noticed in the case of A222.
This means that most of the clumps are constituted by the faint galaxy population, which is not present in the DSS sample. We may conclude that, unless we are facing a serious case of background contamination, the A520 cluster, like A222, may be an example of a dynamically young system where clumps of galaxies are still in the process of collapsing onto its dark matter gravitational well, probably located at the mean center of the X-ray emission region seen in Figure 7. Unfortunately there are no X-ray spectra available for A520 (nor for A222/3), which hinders a more detailed diagnosis of the evolutionary dynamical stage of the cluster.

Our sample of spectroscopically measured objects in the field of A520 (Table 1) has 28 galaxies, with 25 coming from the observations reported here and 3 others from the literature (Sandage, Kristian & Westphal 1976; Newberry, Kirshner & Boroson 1987). Figure 8 shows the wedge velocity diagram in right ascension and declination for A520. The $`3\sigma `$ clipping of the total radial velocity distribution leaves 21 galaxies kinematically linked to the cluster. This sample is consistent with normality under all the statistical tests included in the ROSTAT routine. For comparison, we applied the same tests to a sample including the least discrepant galaxy, in velocity space, excluded by the $`3\sigma `$ clipping. Although the tests fail to reject normality for this sample, it is skewed towards higher velocities, as indicated by the Asymmetry Index obtained: AI = 0.85. Figure 9 displays the velocity distribution of the 21 retained galaxies as well as that of the whole sample (inset). The mean velocity for this sample is $`V_{bi}=60127\pm 284`$ km s<sup>-1</sup>, with dispersion $`\sigma _{bi}=1250\pm 189`$ km s<sup>-1</sup>, placing the cluster at redshift $`z=0.20056`$. The values of the mass and mass-to-light ratio, calculated under the same hypotheses as for A222 and A223, are given in Table 2.

## 4 Spectral Classification

Spectral classification has been performed through a Principal Component Analysis (PCA) of the spectra. This technique makes use of all the information contained in the spectra (except in the emission lines; see below) and, in this sense, it can provide a classification scheme more powerful than those based on the amplitude of individual absorption lines. Here we apply the method to a sample of 51 CFHT spectra of galaxies that are probably members of the clusters A222, A223 and A520 (Section 3) to obtain spectral types. The point to be stressed is that the spectra of normal galaxies form a sequence, the spectral sequence, in the spectral space spanned by the $`M`$-dimensional vectors that contain the spectra, each vector being the flux of a galaxy (or a scaled version of it) sampled at $`M`$ wavelengths (Sodré & Cuevas 1994, 1997; Connolly et al. 1995; Folkes et al. 1996). The spectral sequence correlates well with the Hubble morphological sequence, and we define the “spectral type” (hereafter ST) of a galaxy from its position along the spectral sequence. Following Sodré & Cuevas (1997), we associate the spectral type ST of a galaxy with its value of the first principal component. Note that, since we are working with uncalibrated spectra, only a relative classification is possible, that is, we are only able to know whether a galaxy has an earlier or later spectral type than the others in the sequence (Cuevas, Sodré & Quintana 2000).
We have jointly analyzed the spectra of the galaxies in the three clusters because the differences in their redshifts are small and the observed spectra sample essentially the same rest-frame wavelength interval. The PCA was applied to a pre-processed version of the 51 CFHT uncalibrated galaxy spectra (33 spectra of galaxies in A222 and A223 and 18 in A520). Firstly, the spectra were shifted to the rest frame and re-sampled in the wavelength interval from 3440 Å to 5730 Å, in equal-width bins of 2 Å. Secondly, we removed from the analysis 8 regions of $`\simeq `$40 Å each, centered at the wavelengths of \[OII\] $`\lambda `$3727, NeIII $`\lambda `$3869, H$`\delta `$ $`\lambda `$4102, H$`\gamma `$ $`\lambda `$4340, HeII $`\lambda `$4686, H$`\beta `$ $`\lambda `$4861, \[OIII\] $`\lambda `$4959 and \[OIII\] $`\lambda `$5007. This was done in order to avoid the inclusion of emission lines in the analysis, which increases the dispersion of the spectra in the principal plane (mainly due to an increase in the second principal component). The spectra, now sampled at $`M=980`$ wavelength intervals, were then normalized to the same mean flux ($`\sum _\lambda f_\lambda =1`$). Finally, we subtracted the mean spectrum from the spectrum of each galaxy and used the PCA to obtain the principal components. This procedure is equivalent to the conventional PCA on the covariance matrix (that is, the basis vectors are the eigenvectors of a covariance matrix).

Figure 10 shows the projection of the spectra of the 51 galaxies of the three clusters onto the plane defined by the first two principal components. They contain only 24% of the total variance, mainly due to the low signal-to-noise ratio of several spectra (the median S/N is $`\simeq `$5.8 in the interval between 4500 Å and 5000 Å). Indeed, in this figure different symbols correspond to different signal-to-noise intervals (see the figure caption), and the scatter in the second principal component seems to increase as the signal-to-noise ratio decreases. On the other hand, numerical simulations indicate that the noise does not introduce any significant bias in the spectral classification (Sodré & Cuevas 1997). Note that, in this figure, early-type galaxies are at the left side, and increasing values of ST (or, equivalently, of the first principal component) correspond to later-type galaxies.

The low variance accounted for by the first two principal components may raise doubts about whether we are indeed measuring meaningful spectral types through the first principal component. A possible approach is to compare our classification with the spectral classification presented in Newberry, Kirshner & Boroson (1988). These authors classified a few galaxies in A222, A223, and A520 as red or blue according to their colors and position in a color-magnitude diagram, or from the strength of some absorption features present in the spectra. Unfortunately we have only four cluster galaxies in common with Newberry, Kirshner & Boroson (1988). Nevertheless our results are encouraging, because the three galaxies classified as red by Newberry, Kirshner & Boroson (1988) have spectral types equal to or smaller than $`\sim `$1.5, while the only galaxy classified as blue by them has a spectral type of 2.46. Additionally, as we will show, with these spectral types we are able to recover both the morphological and kinematical segregations for the galaxies in these clusters, indicating that our spectral types are indeed carrying useful morphological information.
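A minimal sketch of this classification step is given below, assuming the spectra have already been de-redshifted and rebinned onto the 2 Å grid. The array shapes, and the use of an SVD to perform the PCA, are choices made here for illustration, not a description of the actual reduction scripts:

```python
import numpy as np

# Rest-frame wavelength grid and the ~40 A emission-line windows excluded above
wave = np.arange(3440.0, 5730.0, 2.0)
lines = [3727.0, 3869.0, 4102.0, 4340.0, 4686.0, 4861.0, 4959.0, 5007.0]
keep = np.ones(wave.size, dtype=bool)
for lam in lines:
    keep &= np.abs(wave - lam) > 20.0

def spectral_types(flux):
    """flux: array (n_galaxies, wave.size) of rest-frame, rebinned spectra.
    Returns the first principal component (the spectral type ST of each galaxy)
    and the variance fractions of the first two components."""
    f = flux[:, keep]
    f = f / f.sum(axis=1, keepdims=True)    # normalize to the same mean flux
    f = f - f.mean(axis=0)                  # subtract the mean spectrum
    u, s, vt = np.linalg.svd(f, full_matrices=False)
    projections = f @ vt.T                  # components along the eigenvectors
    var_frac = s**2 / np.sum(s**2)
    return projections[:, 0], var_frac[:2]
```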
It is worth emphasizing that we base our spectral classification only on the properties of the stellar populations that are contained in the continuum and absorption lines, and that the emission lines enter in no way into the classification scheme. It is important to point out, however, that the emission lines of normal galaxies do correlate with spectral types; see Sodré & Stasińska (1999) for a detailed discussion of this subject.

## 5 Morphological and Kinematical Segregation

Now we use the spectral types of the galaxies to study whether the morphology-density relation (Dressler 1980) is present in these clusters. We have computed the projected local density from the 6 nearest (projected) neighbors of each of the galaxies in our spectroscopic sample with the estimator (Casertano & Hut 1985): $$\rho _{proj}=\frac{5}{\pi r_6^2}$$ where $`r_6`$ is the projected distance of the $`6^{th}`$ nearest galaxy. We have used the catalogue obtained from the DSS (see section 3) to estimate the local density. Figure 11 shows the logarithm of the projected local density, normalized by the median density of each cluster, versus the spectral type. This figure shows that, for A222, A223 and A520, early-type galaxies tend to be located in denser regions than late-type galaxies, indicating that the morphology-density relation (Dressler 1980), as inferred using spectral types, was already established in clusters at $`z\sim 0.2`$. The correlation shown in Figure 11 is significant: the Spearman rank-order correlation coefficient is $`r_s`$ = -0.41 and the two-sided significance level of its deviation from zero is $`p`$ = 0.002.

Nearby clusters also present a “kinematical segregation”: the velocity dispersion of early-type galaxies is lower than that of late-type galaxies (Sodré et al. 1989). This may be evidence that late-type galaxies have arrived recently in the cluster and are not yet virialized, while the early-type galaxies constitute a relaxed system with a low velocity dispersion. We present in Figure 12, as a function of spectral type, the absolute value of the galaxy velocities relative to the cluster mean velocity, normalized by the velocity dispersion of each cluster. The points with error bars in this figure are the median values taken in bins of equal galaxy number; the vertical error bars correspond to the quartiles of the distribution, while the horizontal ones indicate the interval of ST corresponding to each bin. The data in Figure 12 indicate that early-type galaxies tend to have lower relative velocities than galaxies of later types; the Spearman rank-order correlation coefficient $`r_s`$ is now 0.39 and the two-sided significance level of its deviation from zero is 0.005. Hence, these clusters seem to present the same kind of kinematical segregation detected in low-redshift galaxy clusters. A minimal sketch of these density and rank-correlation computations is given below.
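The sketch assumes arrays of projected positions, spectral types, velocities relative to the cluster mean, and cluster dispersions; scipy's Spearman test stands in for whatever statistics routine was actually used:

```python
import numpy as np
from scipy import stats
from scipy.spatial import cKDTree

def local_density(x, y):
    """Projected local density from the 6th nearest neighbour
    (Casertano & Hut 1985): rho_proj = 5 / (pi * r6**2)."""
    pts = np.column_stack([x, y])
    # k=7 because the closest "neighbour" returned by the tree is the point itself
    r6 = cKDTree(pts).query(pts, k=7)[0][:, 6]
    return 5.0 / (np.pi * r6**2)

# st: spectral types; dens: local densities at the spectroscopic galaxies;
# dv: velocities relative to the cluster mean; sig: cluster velocity dispersions.
def segregation_tests(st, dens, dv, sig):
    rho_norm = dens / np.median(dens)
    r1, p1 = stats.spearmanr(st, np.log10(rho_norm))   # morphology-density relation
    r2, p2 = stats.spearmanr(st, np.abs(dv) / sig)     # kinematical segregation
    return (r1, p1), (r2, p2)
```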
## 6 Summary

We have presented here an analysis of three medium-redshift clusters, A222, A223, and A520, based on new observations of radial velocities in the field of these clusters. Through observations made at the Canada-France-Hawaii Telescope, the European Southern Observatory, and the Pic du Midi Observatory, we obtained a set of 78 new redshifts, 71 of them corresponding to member galaxies of these clusters. From these observations, together with velocity and X-ray data from the literature, we concluded that A222 and A223 have similar radial velocities and velocity dispersions, and will probably merge in the future, as already suggested by Sandage, Kristian & Westphal (1976). A520 also seems to be undergoing strong dynamical evolution, since its cD galaxy is not located at the center of the galaxy distribution (which is also coincident with the X-ray emission).

We have used spectra taken at CFHT to obtain, through a Principal Component Analysis, spectral types for a subset of 51 galaxies in these clusters. We have shown that galaxies of “early” spectral types tend to be found in regions with densities larger than those where “late” spectral type galaxies are found, suggesting that the morphology-density relation was already established at $`z\sim 0.2`$. We have also found that galaxies with “early” spectral types tend to have lower velocity dispersions when compared with “late” spectral type galaxies, showing that the kinematical segregation was also already established at intermediate redshifts. These results are interesting because, despite the fact that these clusters are probably in a stage of strong evolution, they already show features that are expected for relaxed structures, as is the case for the segregations mentioned above.

###### Acknowledgements. We thank Christian Vanderriest for his collaboration in the CFHT observations and the CFHT, ESO and TBL staff. BTL, HC, HVC, and LSJ have benefited from the support provided by FAPESP, CNPq and PRONEX/FINEP to their work. We also thank an anonymous referee for useful comments that allowed us to improve the paper.

References

Bahcall J.N., Tremaine S., 1981, ApJ, 244, 805.
Beers T.C., Flynn K., Gebhart K., 1990, AJ, 100, 32.
Bertin E., Arnouts S., 1996, A&AS, 117, 393.
Bird C.M., Beers T.C., 1993, AJ, 105, 1596.
Biviano A., Durret F., Gerbal D., le Fevre O., Lobo C., Mazure A., Slezak E., 1996, A&A, 311, 95.
Bromley B.C., Press W.H., Lin H., Kirshner R.P., 1998, ApJ, 505, 25.
Butcher H., Oemler A., Wells D.C., 1983, ApJS, 52, 183 (BOW83).
Casertano S., Hut P., 1985, ApJ, 298, 80.
Cole S., 1989, PhD Thesis.
Connolly A.J., Szalay A.S., Bershady M.A., Kinney A.L., Calzetti D., 1995, AJ, 110, 1071.
Cuevas H., Sodré L., Quintana H., 2000, in preparation.
Danese L., De Zotti G., di Tullio G., 1980, A&A, 82, 322.
Dressler A., 1980, ApJS, 42, 565.
Folkes S.R., Lahav O., Maddox S.J., 1996, MNRAS, 283, 651.
Galaz G., de Lapparent V., 1998, A&A, 332, 459.
Hartigan J.A., Hartigan P.M., 1985, Annals of Stat., 13, 70.
Heisler J., Tremaine S., Bahcall J.N., 1985, ApJ, 298, 8.
Kurtz M.J., Mink D.J., Wyatt W.F., Fabricant D.G., Torres G., Kriss G.A., Tonry J.L., 1991, ASP Conf. Ser., 25, 432.
Lea S.M., Henry J.P., 1988, ApJ, 332, 81.
Le Fèvre O., Crampton C., Lilly S.J., Hammer F., Tresse L., 1995, ApJ, 455, 60.
Mink D.J., Wyatt W.F., 1995, ASP Conf. Ser., 77, 496.
Newberry M.V., Kirshner R.P., Boroson T.A., 1988, ApJ, 335, 629.
Ribeiro A.L.B., de Carvalho R.R., Capelato H.V., Zepf S.E., 1998, ApJ, 497, 72.
Rué F., Bijaoui A., 1997, Experim. Astron., 7, 129.
Sandage A., Kristian J., Westphal J.A., 1976, ApJ, 205, 688.
Sodré L., Capelato H.V., Steiner J.E., Mazure A., 1989, AJ, 97, 1279.
Sodré L., Cuevas H., 1994, Vistas in Astronomy, 38, 287.
Sodré L., Cuevas H., 1997, MNRAS, 287, 137.
Sodré L., Stasińska G., 1999, A&A, 345, 391.
Struble M.F., Rood H.J., 1987, ApJS, 63, 543.
Tonry J., Davis M., 1979, AJ, 84, 1511.
West M.J., Oemler A., Dekel A., 1989, ApJ, 346, 539.
Yahil A., Vidal N.V., 1977, ApJ, 214, 347.
# An Open Universe from Valley Bounce

## 1 Introduction

Recent observations suggest that the matter density of the universe is less than the critical density. Hence, it is desirable to have a model for an open universe, say $`\mathrm{\Omega }_0\sim 0.3`$. The realization of an open universe is difficult in the ordinary inflationary scenario. This is because if the universe expands enough to solve the horizon problem, the universe becomes almost flat. One attempt to realize an open universe in the inflationary scenario is to consider the inside of a bubble created by false vacuum decay. The scenario is as follows. Consider a potential which has two minima. One is the false vacuum, which has non-zero energy, and the other is the true vacuum. Initially the field is trapped in the false vacuum. Due to the potential energy, the universe expands exponentially and a large fraction of the universe becomes homogeneous. As the false vacuum is unstable, it decays and creates a bubble of the true vacuum. If the decay process is well suppressed, the interior of the bubble is still homogeneous. The decay is described by the $`O(4)`$ symmetric configuration in Euclidean spacetime. Then, analytic continuation of this configuration to Lorentzian spacetime describes the evolution of the bubble, which looks from the inside like an open universe. Unfortunately, since the bubble radius cannot be greater than the Hubble radius, the created universe is curvature dominated even if the whole energy of the false vacuum is converted to the energy of the matter inside the bubble. Thus, a second inflation inside the bubble is needed. If this second inflation stops when $`\mathrm{\Omega }<1`$, our universe becomes a homogeneous open universe.

Though the basic idea is simple, the realization of this scenario in a simple model has been recognized to be difficult. The difficulty is usually explained as follows. Consider a model involving one scalar field. For a polynomial potential like $`V(\varphi )=m^2\varphi ^2-\delta \varphi ^3+\lambda \varphi ^4`$, the tunneling should occur at sufficiently large $`\varphi `$ to ensure that the second inflation gives the appropriate density parameter. Then, the curvature around the barrier which separates the false and the true vacuum is small compared with the Hubble scale, which is determined by the energy of the false vacuum. In this case, the field jumps up onto the top of the barrier due to quantum diffusion. When the field begins to roll down from the top of the barrier, large fluctuations are formed due to the quantum diffusion at the top of the barrier. Then the whole scenario fails. This problem is rather generic. To avoid jumping up, the curvature around the barrier should be large compared with the Hubble scale, $`V^{\prime \prime }>H^2`$. On the other hand, to realize the second inflation, the field should roll down slowly, and then we need $`V^{\prime \prime }<H^2`$. These two conditions are incompatible.

There are several attempts to overcome this problem. Recently Linde constructed a potential which has a sharp peak near the false vacuum. In this potential, the tunneling occurs and at the same time slow rolling is allowed after tunneling, so the second inflation can be realized. But it is still unclear what the physical mechanism is for the appearance of the sharp peak in the potential. We will reconsider this problem from a different perspective. The point is the understanding of the tunneling process.
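The tension between the two conditions can be made concrete for the quartic potential above. The sketch below uses arbitrary illustrative couplings in units with $`H=1`$ (they are not taken from any specific model); it simply locates the barrier top and compares $`V^{\prime \prime }`$ there with $`H^2`$:

```python
import numpy as np

# Illustrative couplings (units H = 1) for V(phi) = m2*phi^2 - delta*phi^3 + lam*phi^4
m2, delta, lam, H = 1.0, 3.0, 1.9, 1.0

V   = lambda p: m2 * p**2 - delta * p**3 + lam * p**4
d2V = lambda p: 2 * m2 - 6 * delta * p + 12 * lam * p**2

# V'(phi) = phi * (4*lam*phi^2 - 3*delta*phi + 2*m2); the barrier top and the
# true-vacuum side are the two roots of the quadratic factor
disc = np.sqrt(9 * delta**2 - 32 * lam * m2)
phi_top, phi_tv = (3 * delta - disc) / (8 * lam), (3 * delta + disc) / (8 * lam)

print("barrier top:  phi =", phi_top, " V''/H^2 =", d2V(phi_top) / H**2)
print("true vacuum:  phi =", phi_tv,  " V(phi_tv) =", V(phi_tv))
```

With these particular couplings the curvature at the barrier top comes out of order $`H^2`$, illustrating how the two requirements squeeze the allowed parameter space.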
In the imaginary-time path-integral formalism, tunneling is described by a solution of the Euclidean field equation. This solution gives the saddle point of the path integral and determines the semi-classical exponent of the decay rate, $`\mathrm{exp}(-S_E(\varphi _B))`$, where $`S_E`$ is the Euclidean action. In the case where the curvature around the barrier is small compared with the Hubble scale, the solution is given by the Hawking–Moss (HM) solution, which stays at the top of the barrier through the whole Euclidean time. Recently Rubakov and Sibiryakov gave an interpretation of this tunneling mode using the constrained instanton method. They show that the HM solution does not represent the false vacuum decay if one takes into account the analytic continuation to Lorentzian spacetime. This is because this solution does not satisfy the boundary condition that the field lies in the false vacuum in the infinite past. However, this does not imply that the decay does not occur. One should consider a family of almost saddle-point configurations instead of the true solution of the Euclidean field equation. They show that although the decay rate is determined by the HM solution, the structure of the field after tunneling is determined by another configuration, which is one of the almost saddle-point solutions. In this method, one must choose the constraint so that the family of almost saddle-point solutions covers well the region which is expected to dominate the path integral. One way to realize this is to cover the valley region of the functional space of the action. Along the valley line, the action varies most gently. Then it is reasonable to take the configurations on the valley line as the family of almost saddle-point configurations. We will call a configuration on the valley line of the action a valley bounce $`\varphi _V`$. This analysis gives a possibility to overcome the problem. Even if the curvature around the barrier is small compared with the Hubble scale, there is a possibility that tunneling described by the valley bounce occurs. If the field appears sufficiently far from the top, one can avoid the large fluctuations. During the tunneling, fluctuations of the tunneling field are generated. These fluctuations are stretched during the second inflation and observed in the open universe. They should be compatible with observations. Once this is confirmed, there is no difficulty in constructing a one-bubble open inflationary model in a simple model with a polynomial potential. In this paper, we show this is true as long as the tunneling is described by the valley bounce. We clarify the structure of the valley bounce by extending the method developed by Aoyama et al. to de Sitter spacetime. We show that the fluctuations can be reconciled with the observations.

## 2 Valley method in de Sitter spacetime

First we review the formalisms which are necessary to describe the false vacuum decay in de Sitter space. We want to examine the case in which gravity comes to play a role. Unfortunately, we do not yet know how to deal with quantum gravity effects. So, we study the case in which we can treat gravity at the semi-classical level. That is, we treat the problem within the framework of field theory in a fixed curved spacetime. The potential relevant to the tunneling is given by $$V(\varphi )=ϵ+V_T(\varphi ).$$ (1) We assume $`ϵ`$ is of the order of $`M_{*}^4`$ and $`V_T(\varphi )`$ is of the order of $`M^4`$.
We study the case in which $`M`$ is small compared to $`M_{*}`$, $`M\ll M_{*}`$. Then the geometry of the spacetime is fixed to de Sitter spacetime with $`H=M_{*}^2/M_p`$, where $`M_p^{-2}=8\pi G/3`$. We consider the situation in which the potential $`V_T(\varphi )`$ has the false vacuum at $`\varphi =\phi _F`$ and the top of the barrier at $`\varphi =\phi _T`$. Since the background metric is fixed, we can shift the origin of the energy freely; we choose $`V_T(\varphi _F)=0`$. In the following we work in units with $`H=1`$. The decay rate is given by the imaginary part of the path integral $$Z=\int [d\varphi ]\mathrm{exp}\left(-S_E(\varphi )\right),$$ (2) where $`S_E`$ is the Euclidean action relevant to the tunneling. The dominant contribution to this path integral comes from configurations which have $`O(4)`$ symmetry . So we assume the background metric and the field to have the form $`ds^2`$ $`=`$ $`d\sigma ^2+a(\sigma )^2\left(d\rho ^2+\mathrm{sin}^2\rho d\mathrm{\Omega }^2\right),`$ $`\varphi `$ $`=`$ $`\varphi (\sigma ),`$ (3) where $`a(\sigma )=\mathrm{sin}\sigma `$. The Euclidean action of $`\varphi (\sigma )`$ is then given by $$S_E=2\pi ^2\int d\sigma \,a^3\left(\frac{1}{2}\varphi ^{\prime 2}+V_T(\varphi )\right).$$ (4) The saddle point of this path integral is determined by the Euclidean field equation $`\delta S_E/\delta \varphi `$=0; $$\varphi ^{\prime \prime }+3\mathrm{cot}\sigma \varphi ^{\prime }-V_T^{\prime }(\varphi )=0.$$ (5) We impose regularity conditions at the times when $`a(\sigma )=0`$, $$\varphi ^{\prime }(\sigma =0)=\varphi ^{\prime }(\sigma =\pi )=0.$$ (6) We denote the solution of this equation by $`\varphi _B(\sigma )`$. If the fluctuations around this solution have a negative mode, they give an imaginary part to the path integral and this solution dominates the decay. The decay rate $`\mathrm{\Gamma }`$ is then evaluated as $$\mathrm{\Gamma }\sim \mathrm{exp}(-S_E(\varphi _B)).$$ (7) The equation has two types of solutions depending on the shape of the potential. If the curvature around the barrier is large compared with the Hubble scale, both the Coleman–De Luccia (CD) solution and the Hawking–Moss (HM) solution exist . In this case the decay is described by the CD solution, whose analytic continuation to Lorentzian spacetime describes a bubble of the true vacuum. On the other hand, when the curvature around the barrier is small compared with the Hubble scale, only the HM solution exists. This is the trivial solution $`\varphi =\phi _T`$. The meaning of the HM solution is somewhat ambiguous, and there have been several attempts to interpret this tunneling mode. One way is to use the stochastic approach . Within this approach it has been demonstrated that the decay rate given by eq.(7) coincides with the probability of jumping from the false vacuum $`\phi _F`$ onto the top of the barrier $`\phi _T`$ due to quantum fluctuations. Recently, Rubakov and Sibiryakov gave an interpretation of the HM solution using the constrained instanton method . The main idea is to consider a family of almost saddle-point configurations instead of the exact solution of the Euclidean field equation, i.e. the HM solution. The motivation comes from the boundary condition: they require that the state of the quantum fluctuations above the classical false vacuum is the conformal vacuum. In this case they show that the field should not be constant at $`0<\sigma <\pi `$, so the HM solution is excluded by this boundary condition.
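For orientation, the boundary-value problem (5)–(6) can also be explored numerically. The following is a minimal shooting-method sketch in Python; the tilted double-well potential, its parameters and the scan range are illustrative assumptions (chosen so that $`|V_T^{\prime \prime }|`$ at the barrier top is well above 4 in units $`H=1`$), not the model studied in this paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Shooting-method sketch for the O(4) bounce problem (5)-(6), in units H = 1.
# The potential is NOT the one of this paper: it is an arbitrary tilted double
# well with |V_T''| well above 4 at the barrier top, so that a Coleman-De Luccia
# type solution can exist; parameters and the scan range are illustrative only.
beta, alpha = 20.0, 0.1
def Vp(phi):                                # V_T'(phi) for V_T = beta*((phi^2 - 1)^2/4 - alpha*phi)
    return beta * (phi * (phi**2 - 1.0) - alpha)

def rhs(sigma, y):                          # y = (phi, phi')
    phi, dphi = y
    return [dphi, Vp(phi) - 3.0 / np.tan(sigma) * dphi]

def runaway(sigma, y):                      # stop integrating once the field runs away
    return abs(y[0]) - 3.0
runaway.terminal = True

def endpoint(phi0, eps=1e-4):
    # regular start near sigma = 0 from the series phi ~ phi0 + V_T'(phi0) sigma^2 / 8
    y0 = [phi0 + Vp(phi0) * eps**2 / 8.0, Vp(phi0) * eps / 4.0]
    sol = solve_ivp(rhs, (eps, np.pi - eps), y0, rtol=1e-8, events=runaway)
    return sol.y[0, -1]

# scan phi(0) between the barrier top and the true vacuum; a bounce sits at a
# value of phi(0) where the end state flips between the two runaway directions,
# and can then be refined by bisecting on phi(0)
for phi0 in np.linspace(0.2, 1.0, 9):
    print(f"phi(0) = {phi0:4.2f}   field near sigma = pi -> {endpoint(phi0): .2f}")
```

For a barrier curvature below $`4H^2`$ one expects such a search to find no non-trivial regular solution, in line with the statement above that only the HM solution exists in that regime.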
Then one should seek other configurations which obey the boundary condition and contribute dominantly to the path integral. In the functional integral the saddle-point solution gives the most dominant contribution, but the contribution from a family of almost saddle-point configurations, whose action is almost the same as that of the saddle-point solution, should also be included. To realize this in the functional integral, one introduces the identity $`1=\int d\alpha \,\delta (𝒞-\alpha )`$ into the path integral for some constraint $`𝒞`$. First choose one $`\alpha `$. This selects a subspace of the functional space, in which we can perform the integral over the field by the saddle-point method under the constraint. The minimum in this subspace satisfies the equation of motion with the constraint instead of the field equation, and corresponds to an almost saddle-point configuration $`\varphi _\alpha `$ which is slightly deformed from the HM solution. As $`\alpha `$ is changed, these configurations form a trajectory, and we can evaluate the path integral by integrating over $`\alpha `$ along this trajectory. Since along this trajectory the HM solution gives the minimum action, integrating over $`\alpha `$ gives the decay rate determined by the HM solution. But the structure of the field after tunneling can be determined by another configuration on this trajectory, $`\varphi _\alpha `$. They found a configuration which describes a bubble of the true vacuum when continued to Lorentzian spacetime. They therefore conclude that, even in the case where only the HM solution exists, the result of the tunneling process can be a bubble of the true vacuum, described by one of the almost saddle-point configurations. In this formalism the validity of the method depends on the choice of the constraint . This is because, in practice, we perform a Gaussian integral around the almost saddle-point solutions. To evaluate the path integral properly we should choose the constraint so that the family of almost saddle-point solutions covers well the region which is expected to dominate the path integral. Since the action varies most gently along the valley line, one way to achieve this is to cover the valley region of the action . One can identify the configurations on the valley line and perform the Gaussian integral around these configurations. In view of this, it is desirable to analyze the structure not only of the solution of the Euclidean field equation but also of the configurations on the valley line. One way to define the configurations on this valley line is to use the valley method developed by Aoyama et al. . To obtain an intuitive understanding of this method, consider a system of fields $`\varphi _i`$. Here $`i`$ stands for the discretized coordinate label and we take the metric on field space to be $`\delta _{ij}`$. In the valley method the equation which identifies the valley line in the functional space is given by $$D_{ij}\partial _jS=\lambda \partial _iS,\qquad D_{ij}=\partial _i\partial _jS,$$ (8) where $`\partial _i=\partial /\partial \varphi _i`$. Since this equation contains one parameter $`\lambda `$, it defines a trajectory in the space of $`\varphi `$. The parameter $`\lambda `$ is one of the eigenvalues of the matrix $`D_{ij}`$. On this trajectory the gradient vector $`\partial _iS`$ is orthogonal to all the eigenvectors of $`D_{ij}`$ except for the eigenvector with eigenvalue $`\lambda `$. This equation can be rewritten as $$\partial _i\left(\frac{1}{2}(\partial _jS)^2-\lambda S\right)=0.$$ (9) This allows an interpretation of the solution of the equation.
It extremizes the norm of the gradient vector $`\partial _iS`$ under the constraint $`S=`$const., where $`\lambda `$ plays the role of the Lagrange multiplier. Such a solution can be found on each hypersurface of constant action, so the solutions of the equation form a line in the functional space. If we take $`\lambda `$ to be the smallest eigenvalue, the gradient vector is minimized; in this case the action varies most gently along this line. This is a plausible definition of the valley line. We will call a configuration on the valley line of the action a valley bounce $`\varphi _V`$, and the trajectory they form the valley trajectory. In the following we formulate this method in de Sitter spacetime. The most convenient way is to use the variational form, eq.(9). We define the valley action by $$S_V=S_E-\frac{1}{2\lambda }\int d\sigma \sqrt{g}\left(\frac{1}{\sqrt{g}}\frac{\delta S_E}{\delta \varphi }\right)^2.$$ (10) The valley bounce is obtained by varying this action. The equation which determines the valley bounce, $`\delta S_V/\delta \varphi =0`$, is a fourth-order differential equation. We introduce an auxiliary field $`f`$ to cancel the fourth-derivative term ; $$S_f=\frac{1}{2\lambda }\int d\sigma \sqrt{g}\left(f-\frac{1}{\sqrt{g}}\frac{\delta S_E}{\delta \varphi }\right)^2.$$ (11) Then the valley action becomes $$S_V+S_f=S_E+\frac{1}{2\lambda }\int d\sigma \sqrt{g}f^2-\frac{1}{\lambda }\int d\sigma f\frac{\delta S_E}{\delta \varphi }.$$ (12) Taking the variation of this action with respect to $`f`$ and $`\varphi `$, we obtain the equations for $`\varphi `$ and $`f`$; $`{\displaystyle \frac{1}{\sqrt{g}}}{\displaystyle \frac{\delta S_E}{\delta \varphi }}`$ $`=`$ $`f,`$ $`{\displaystyle \int d\sigma ^{\prime }\frac{\delta ^2S_E}{\delta \varphi (\sigma )\delta \varphi (\sigma ^{\prime })}f(\sigma ^{\prime })}`$ $`=`$ $`\lambda \sqrt{g}f(\sigma ).`$ (13) Using $`a(\sigma )=\mathrm{sin}\sigma `$, the valley equations which determine the structure of the valley bounce are $`\varphi ^{\prime \prime }+3\mathrm{cot}\sigma \varphi ^{\prime }-V_T^{\prime }(\varphi )`$ $`=`$ $`-f,`$ $`f^{\prime \prime }+3\mathrm{cot}\sigma f^{\prime }-V_T^{\prime \prime }(\varphi )f`$ $`=`$ $`-\lambda f.`$ (14) We analyze the structure of the valley bounce for the case in which only the HM solution exists. We construct a piece-wise quadratic potential for which the valley equations can be solved analytically. The potential we study is $`V_T(\varphi )=\{\begin{array}{cc}\frac{1}{2}m_F^2(\varphi -\phi _F)^2,\hfill & -\mathrm{\infty }<\varphi <0,\hfill \\ & \\ -\frac{1}{2}m_T^2(\varphi -\phi _T)^2+\eta ,\hfill & 0\le \varphi <\mathrm{\infty },\hfill \end{array}`$ (18) where $`\eta `$ is of the order $`M^4`$. For $`m_T^2<4`$, only the HM solution exists. As an example we take $`m_T^2=2`$, $`m_F^2=0.5`$ and $`\eta =0.1M^4`$. The HM solution has one negative eigenvalue, $`\rho _{HM,-}=-2`$, and the smallest positive eigenvalue is $`\rho _{HM,+}=2`$. The generic features of the valley bounce can be understood from the simple analysis of the case in which the valley bounce stays within one parabola. First consider the valley trajectory associated with the negative eigenvalue. The solution of the valley equation essentially has the form $`f=\lambda (\varphi -\phi _T)=\text{const}`$. This solution does not represent tunneling, so we seek the trajectory associated with the smallest positive eigenvalue, $`\lambda (\varphi _{HM})=\rho _{HM,+}`$. The solution of the valley equation is then given by $`\varphi -\phi _T\propto \mathrm{cos}\sigma `$ and $`f=\lambda (\varphi -\phi _T)`$ (Fig.1).
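The quoted spectrum can be cross-checked numerically. With the sign conventions used in eq.(14), the fluctuation operator around the HM solution is $`-d^2/d\sigma ^2-3\mathrm{cot}\sigma \,d/d\sigma +V_T^{\prime \prime }(\phi _T)`$ with $`V_T^{\prime \prime }(\phi _T)=-m_T^2`$ from eq.(18); the short sketch below (an illustration only) verifies that the constant and $`\mathrm{cos}\sigma `$ modes reproduce the eigenvalues $`-2`$ and $`+2`$ for $`m_T^2=2`$.

```python
import numpy as np

# Check that the constant and cos(sigma) modes are eigenmodes of the fluctuation
# operator around the HM solution,
#   L f = -f'' - 3 cot(sigma) f' + V_T''(phi_T) f,  with V_T''(phi_T) = -m_T^2,
# reproducing the eigenvalues -2 and +2 quoted above for m_T^2 = 2.
m_T2 = 2.0
sig = np.array([0.5, 1.0, 2.0, 2.6])          # a few interior points (avoid 0, pi/2, pi)
modes = [("constant (n=0) ", np.ones_like(sig), np.zeros_like(sig), np.zeros_like(sig)),
         ("cos sigma (n=1)", np.cos(sig), -np.sin(sig), -np.cos(sig))]
for name, f, df, ddf in modes:
    Lf = -ddf - 3.0 / np.tan(sig) * df - m_T2 * f
    print(name, "->", np.round(Lf / f, 12))   # expect -2.0 and 2.0 at every point
```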
On this trajectory the HM solution gives the minimum of the action (Fig.2). The horizontal coordinate is the norm of the field, $`\mathrm{\Phi }=\sqrt{\int d\sigma \,a(\sigma )^3|\varphi (\sigma )-\phi _T|^2}`$. The action grows as the variation of the field becomes large, but this increase is relatively gentle. Although the HM solution gives the dominant contribution to the path integral, it does not satisfy the boundary condition for false vacuum decay, as shown by Rubakov and Sibiryakov . Making the analytic continuation to Lorentzian spacetime at $`\sigma =0`$ ($`z=1`$), the field moves according to the field equation. If the field reaches $`\phi _F`$, the solution represents false vacuum decay. The behavior of the field in the Lorentzian spacetime is determined by the initial position of the field, which in turn is determined by the behavior of the field at $`\sigma =0`$ in the Euclidean region. Provided that this initial position is different from $`\phi _T`$, the boundary condition can be satisfied; for this reason the HM solution does not satisfy the boundary condition, while the valley bounce does. Furthermore, the fluctuations around the valley bounce should have one negative mode to ensure that the valley bounce plays the role of the bounce instead of the HM solution. The valley bounce has a lowest eigenvalue $`\rho _{V,-}<\lambda (\varphi _V)`$, and we find that it is negative on this trajectory. Since this is the unique negative eigenvalue, the Gaussian integration of the fluctuations around this valley bounce gives an imaginary part to the path integral. The valley bounce therefore contributes to the false vacuum decay and describes the creation of a bubble of the true vacuum. ## 3 An open universe from valley bounce We now show that an open inflation model can be constructed using the valley bounce. In the following we restore the Hubble scale $`H`$. Since the radius of the bubble $`R`$ is small compared with the Hubble horizon , the curvature scale is greater than the energy of the matter inside the bubble $`\rho _M`$ even if the whole energy of the false vacuum is converted to it, $`\rho _M/M_p^2\sim H^2<1/R^2`$ . Thus we need a second inflation inside the bubble. To realize the second inflation inside the bubble, the field should roll slowly down the potential, which means that the curvature of the potential must be small compared with the Hubble scale. To avoid an $`ad`$ $`hoc`$ fine-tuning of the potential, we will assume this is true over the whole of the potential. In this case, since $`m_T<H`$, the solution of the Euclidean equation is the HM solution and the valley bounce is as shown in Fig.7. We attach a linear potential at the point $`\varphi =\varphi _{*}`$ where the field appears after the tunneling, $$V(\varphi )=V_{*}-\mu ^3(\varphi -\varphi _{*}),(\varphi >\varphi _{*}).$$ (19) We demand that the potential and its derivative are connected smoothly at the connection point $`\varphi _{*}`$.
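This smooth matching can be checked symbolically; a small sketch (the symbol names below are ours, with `phi_s` standing for $`\varphi _{*}`$, and the quadratic branch of eq.(18) shifted by $`ϵ`$ as in eq.(1)):

```python
import sympy as sp

# Symbolic check of the matching of V and V' at phi = phi_* (phi_s below).
phi, phi_s, phi_T, m_T, eps, eta, V_s, mu3 = sp.symbols(
    'phi phi_s phi_T m_T epsilon eta V_s mu3', real=True)

V_top = eps + eta - sp.Rational(1, 2) * m_T**2 * (phi - phi_T)**2   # quadratic branch of (18) plus epsilon
V_lin = V_s - mu3 * (phi - phi_s)                                   # linear branch (19), mu3 = mu^3

match = [sp.Eq(V_top.subs(phi, phi_s), V_lin.subs(phi, phi_s)),                                   # V continuous
         sp.Eq(sp.diff(V_top, phi).subs(phi, phi_s), sp.diff(V_lin, phi).subs(phi, phi_s))]       # V' continuous
print(sp.solve(match, [V_s, mu3]))
# expect V_s = epsilon + eta - m_T**2*(phi_s - phi_T)**2/2 and mu3 = m_T**2*(phi_s - phi_T)
```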
Then we obtain $`V_{*}`$ $`=`$ $`ϵ+\eta -{\displaystyle \frac{1}{2}}m_T^2(\varphi _{*}-\phi _T)^2,`$ $`\mu ^3`$ $`=`$ $`m_T^2(\varphi _{*}-\phi _T).`$ (20) The initial conditions of the field are given by the valley bounce, $$\varphi (t=0)=\varphi _0(z=1)=\varphi _{*},\dot{\varphi }(t=0)=0.$$ (21) If the field obeys the classical field equation, $$\ddot{\varphi }+3\mathrm{coth}t\dot{\varphi }+V^{\prime }(\varphi )=0,$$ (22) then the solution for $`\varphi `$ satisfies $$\dot{\varphi }(t)=\mu ^3\frac{\mathrm{cosh}^3t-3\mathrm{cosh}t+2}{3\mathrm{sinh}^3t}.$$ (23) At small $`t`$ this behaves as $`(1/4)\mu ^3t`$. The classical motion during one expansion time is $`|\dot{\varphi }|H^{-1}`$. On the other hand, the amplitude of the quantum fluctuations is $`\delta \varphi \sim H`$. The curvature perturbation $`\mathcal{R}`$ produced by the quantum fluctuations is approximately given by the ratio of these two quantities; $$\mathcal{R}\sim \frac{\delta \varphi }{|\dot{\varphi }|H^{-1}}\sim \frac{H^3}{\mu ^3}\sim \frac{H^2}{m_T^2}\left(\frac{H}{\varphi _{*}-\phi _T}\right).$$ (24) This should be of the order $`10^{-5}`$ from the observation of the cosmic microwave background (CMB) anisotropies. If $`|\varphi _{*}-\phi _T|<H`$, as in the case where the HM solution describes the tunneling, $`\mathcal{R}>1`$ and the scenario cannot work. This is because at $`\varphi _{*}\sim \phi _T`$ the field experiences quantum diffusion rather than the classical potential force. Fluctuations in this diffusion-dominated epoch produce an inhomogeneous delay of the start of the classical motion and thus large fluctuations. Fortunately, from Fig.7 we see that for appropriate $`\lambda `$ the valley bounce gives the initial condition $`|\varphi _{*}-\phi _T|\sim O(1)\times (M^2/m_T)`$, which is larger than the Hubble scale if $`M>H`$. In this case the potential force operates and the field rolls slowly down the potential, so we expect the curvature perturbation to be suppressed for the valley bounce. In fact, we find that the power of the curvature perturbations is given by $$\underset{p\to \mathrm{\infty }}{lim}\frac{p^3}{2\pi ^2}P_{\mathcal{R}}(p,\lambda )=\frac{1}{4\pi ^2}\left(\frac{3H^3}{\mu ^3}\right)^2\left(\frac{M_{*}^2}{M_pM}\right)^4\left(\frac{H}{m_T}\right)^2.$$ (25) Here we use the fact that the valley bounce gives the initial condition $`|\varphi _{*}-\phi _T|\sim M^2/m_T`$, so that $`\mu ^3=m_TM^2`$. This quantity should be of the order $`10^{-10}`$ from observation, which can be achieved by taking $`(M_{*}^2/M)\sim M_p`$. ## 4 Conclusion It is difficult to construct a model which solves the horizon problem and at the same time leads to an open universe in the context of the usual inflationary scenario. In the one-bubble open inflationary scenario, the horizon problem is solved by the first inflation and the second inflation creates a universe with the appropriate $`\mathrm{\Omega }_0`$. Many works have been done within this framework, and it is recognized that the scenario requires additional fine-tuning . The defect is thought to arise because the curvature around the barrier should be larger than the Hubble scale to avoid large fluctuations, which contradicts the requirement that the curvature of the potential should be small in order to realize the second inflation inside the bubble. Thus, to complete the scenario, this problem must be solved. The main claim of this paper is that this problem can be solved in a simple model with a polynomial potential. We reconsidered the tunneling process from a different perspective.
If the curvature of the potential around the barrier is small, the tunneling is described by one of a family of almost saddle-point solutions . This is because the true saddle-point solution, the Hawking–Moss solution, does not satisfy the boundary condition for false vacuum decay. The main idea is that an almost saddle-point solution can give the appropriate initial condition for the second inflation. A family of almost saddle-point solutions generally forms a valley line in the functional space, and we called the configurations on the valley line valley bounces. To identify the valley bounces we applied the valley method developed by Aoyama et al. . In this method these configurations can be identified using the fact that the trajectory they form in the functional space corresponds to the line along which the action varies most gently. We formulated this method in de Sitter spacetime and clarified the structure of the valley bounces. We found a valley bounce which gives the appropriate initial condition for the second inflation even if the curvature around the barrier is small compared with the Hubble scale. If this valley bounce describes the tunneling, the field can appear sufficiently far from the top of the barrier after the tunneling, and the large fluctuations can be avoided. Hence, using the valley bounce, we can solve the problem which arises in the open inflationary scenario, apart from the usual fine-tuning of the inflationary scenario, and the one-bubble open inflation model can be constructed without difficulty. ## Acknowledgements The work of J.S. was supported by Monbusho Grant-in-Aid No.10740118 and the work of K.K. was supported by JSPS Research Fellowships for Young Scientists No.04687
no-problem/0001/cond-mat0001348.html
ar5iv
text
# Structure of Electrorheological Fluids ## Abstract Specially synthesized silica colloidal spheres with fluorescent cores were used as model electrorheological fluids to experimentally explore structure formation and evolution under conditions of no shear. Using Confocal Scanning Laser Microscopy we measured the location of each colloid in three dimensions. We observed an equilibrium body-centered tetragonal phase and several non-equilibrium structures such as sheet-like labyrinths and isolated chains of colloids. The formation of non-equilibrium structures was studied as a function of the volume fraction, electric field strength and starting configuration of the colloid. We compare our observations to previous experiments, simulations and calculations. INTRODUCTION Electrorheological (ER) fluids are suspensions of dielectric particles (usually of size 1-100$`\mu `$m) in non-conducting or weakly conducting solvents. For particles with radii below several $`\mu `$m Brownian motion is still important and such dispersions are called colloidal. When electric fields are applied across these suspensions they tend to show altered viscous behaviour above a critical value of the electric field, with the apparent viscosities increasing by several orders of magnitude at low shear rates. Above this critical electric field, at low shear stresses the suspensions behave like solids, and at stresses greater than a ‘yield stress’ the suspensions flow with enhanced viscosity. The rheological response is observed to occur in milliseconds, and is reversible. This combination of electrical and rheological properties has led to many proposals for applications of ER fluids in such devices as hydraulic valves, clutches, brakes, and recently in photonic devices . Application of an electric field results in structural transitions in the colloidal suspension because the interparticle electrostatic interactions due to polarization are stronger than Brownian forces. The tendency of particles in suspension to form structures such as chains upon application of an electric field was reported centuries ago by scientists such as Franklin and Priestly . Quantitative experiments on the electrorheological effect were first performed by Winslow in 1949, when he reported that suspensions of silica gel particles in low-viscosity oils tend to fibrillate upon application of electric fields, with fibers forming parallel to the field . Winslow reported that at fields larger than $``$3kV/mm the suspensions behaved like a solid, which flowed like a viscous fluid above a yield stress that was proportional to the square of the applied electric field. Particle association is the main cause of the altered rheological behaviour of ER fluids. The nature of the field induced structures are an important factor in determining the yield stress and flow behaviour. Moreover, theoretical studies of the rheological properties of ER fluids are typically performed by subjecting possible field induced structures to shear stresses and obtaining stress-strain relationships. Therefore it is important to experimentally study the nature of the particle aggregates quantitatively in real-space. In this paper we describe an experimental study of structure formation in a model ER fluid in the absence of shear fields, by direct visualization of the fluid. A recent and comprehensive survey of ER fluids, where the issue of particle aggregation is addressed, is provided by Parthasarathy and Klingenberg . 
Tao and Sun have predicted the (zero-temperature) ground state for ER fluids to be a body centered tetragonal (BCT) structure (see figure 1). Their result was obtained for suspensions of uniform spheres by treating the ER fluid as a suspension of point dipoles in a dielectric fluid and by only taking energy considerations (neglecting entropy considerations) into account . An experimental verification of this proposed ground state structure has been provided by laser diffraction studies of ER systems consisting of 20$`\mu `$m diameter glass spheres suspended in silicone oil, where entropy considerations are indeed not important . Halsey and Toor have described the evolution of structure as occurring in two identifiable stages. The particles first chain along the electric field, and then aggregate into dense structures that take the form of columns aligned with the electric field. In recent computer simulation studies, Martin, Anderson and Tigges have observed the formation of two-dimensional ‘sheets’ of particles (described later) as an intermediate state in structure evolution . In their simulations, Martin and coworkers describe the evolution of structure in an ER fluid consisting of 10,000 particles over a concentration range of 10-50% volume fraction. They study the mechanics of coarsening and the emergence of crystallinity, and use various methods of characterizing the structures that evolve, including pair correlation functions, microcrystallinity and coordination number. Several aspects of their simulation results are evocative of our observations, but there are several notable exceptions as described below. METHODS The model ER fluid used in our experiments was a solution of monodisperse silica spheres (0.525$`\mu `$m radius, polydispersity in size 1.8%) in a mixture of water and glycerol. The spheres were charged and this prevented irreversible aggregation. The spheres had fluorescent cores of 0.4$`\mu `$m diameter and were labeled by fluorescein isothiocyanate (FITC). The colloids were the same as used in previous work in which particle coordinates were obtained in bulk samples . The non-fluorescent layer surrounding the core made it possible to distinguish individual particles even when they were in contact with each other. The synthesis of this kind of spheres is described in ref. . Interparticle attraction was reduced by matching the refractive index of the solvent to that of the particles, minimizing van der Waals forces. The matching of refractive indices also enabled visualization in the bulk of concentrated samples by reducing the multiple scattering of light. The silica spheres had a refractive index of 1.45, and index matching was achieved by using a solvent mixture of 16 w.% water (refractive index 1.33) and 84 w.% glycerol (refractive index 1.48). The viscosity of such a mixture (84.4 cP) also allows a larger time window in which the dynamics of structure formation can be followed with confocal microscopy. We estimate that the silica spheres had a dielectric constant of 3.7 (at 500kHz) while the solvent had a dielectric constant of $``$49 (at 500kHz), values which are close to the values at 0Hz. While the refractive indices were approximately matched at light frequencies, we could still obtain a dielectric mismatch between the spheres and the solvent at frequencies in the 500kHz range in which we applied AC electric fields. 
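As a rough consistency check on these material parameters, a point-dipole estimate of the field-induced pair energy at contact relative to $`kT`$ can be made. The sketch below is illustrative only: it assumes a field of 1 kV/mm (the field strength used in the experiments described below) and room temperature, and neglects multipolar and conduction effects; for these numbers the contact energy comes out at several thousand $`kT`$, consistent with the later statement that the electrostatic energy at contact is many $`kT`$.

```python
import numpy as np

# Point-dipole estimate of the field-induced pair energy at contact, compared
# with kT, for the parameters quoted above.  Assumptions: E ~ 1 kV/mm, T ~ 300 K,
# and multipolar/conduction corrections neglected.
eps0 = 8.854e-12            # F/m
a = 0.525e-6                # sphere radius (m)
eps_p, eps_f = 3.7, 49.0    # particle and solvent dielectric constants at 500 kHz
E = 1.0e6                   # applied field (V/m)
kT = 1.381e-23 * 300.0      # J

beta = (eps_p - eps_f) / (eps_p + 2.0 * eps_f)        # Clausius-Mossotti factor
p = 4.0 * np.pi * eps0 * eps_f * a**3 * beta * E      # induced dipole moment (SI)

# two dipoles aligned with the field and touching along it (separation 2a)
U_contact = 2.0 * p**2 / (4.0 * np.pi * eps0 * eps_f * (2.0 * a)**3)
print(f"|U_contact|/kT ~ {U_contact / kT:.0f}")       # of order 10^3-10^4 here
```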
The ER fluid was placed in a cell, which consisted of parallel microscope slides that were coated with indium tin oxide (ITO), a transparent conductor. The ITO coating had an electrical resistance of 100 $`\mathrm{\Omega }`$ per square inch. One slide was $``$1.2mm thick, while the other was $``$150$`\mu `$m thick, and served as a cover slip. An insulating sheet of Kapton (DuPont) was placed between the electrodes, serving as a means for adjusting the electrode gap (between 5-100$`\mu `$m). The ER fluid was placed in a $``$3mm diameter hole cut out of the Kapton sheet, which served to confine the fluid, and the electrodes were aligned horizontally. The electrodes were connected to a power supply providing a uniform electric field. Electric fields of strength $``$1kV/mm and frequency $``$500kHz were used. A Krohn-Hite (Avon, MA) Model 7602M wideband power amplifier and a waveform generator from Wavetek (San Diego, CA) were used to produce the electric fields. The electric field strength was of the same order of magnitude as that used in previous experiments on ER fluids. The frequency was chosen so that effects due to the polarization of the double layer could be neglected, and the polarization of the spheres could be attributed to the dielectric mismatch between the spheres and the solvent. The observations were made using confocal scanning laser microscopy (CSLM) which is a technique that images individual planes in a sample by rejecting most of the fluorescent light emitted by particles out of the imaging plane. Additionally, confocal microscopy yields better resolution than conventional light microscopy both along the optical axis and in the image plane. The resolution produced is $`0.6\mu `$m along the optical axis, and $`0.2\mu `$m in the image plane. In our experiments we used CSLM to obtain sequences of digitized 2-dimensional images of the sample at planes separated by $`0.1\mu `$m. These images were then computer-analyzed to obtain the 3-dimensional coordinates of the particles present in the sample. These coordinates could then be used to analyze the structure, and to render the sample as a 3-dimensional graphical object. Typically it took several minutes to obtain a three dimensional data set of 512x512x100 voxels (e.g., $`40\mu \text{m}\times 40\mu \text{m}\times 20\mu `$m) and several seconds to obtain a single 2D image plane of 1024x1024 pixels (e.g., $`40\mu \text{m}\times 40\mu \text{m}`$) either as a plane perpendicular or parallel to the optic axis. The experiments were performed on a system consisting of a Leica inverted microscope with a 100x1.4NA oil lens with a Leica TCS confocal attachment. The process of obtaining the 3-dimensional coordinates of the particles from the confocal microscope images was similar to that described in , and is illustrated schematically in figure 2. The stacks of images were of sequential planes perpendicular to the optical axis, about 0.1$`\mu `$m apart. The (spherical) particles imaged through the confocal system appear as circular regions in the digitized images (see figures 3,4,5) because of the circular symmetry of the confocal microscope point spread function(psf) in the plane perpendicular to the optical axis (the xy plane). Because of the finite extent of the psf along the optical axis (z axis), a given particle is imaged in several consecutive image planes as circular regions of varying intensity and size. Each image was first analyzed to find the centers of each region present. 
This was achieved by identifying each region above a chosen threshold of intensity as resulting from an individual particle, and identifying the intensity-weighted average position (xy coordinate) of each region as the center (care was taken to identify overlapping particle regions). Neighbouring planes were then looked at, and region centers with approximately the same xy coordinates were identified as belonging to a single particle, forming a ‘string’ of centers for a single particle. The final xyz coordinates of each particle was obtained by finding the center of intensity along the ‘strings’. Since there is a distribution in the sizes of the cores there is also a distribution in the fluorescent intensities of the particles. Furthermore, the fluorescence photobleaches, and the detected intensities diminish in the bulk of the sample due to optical abberations. Therefore the algorithm had to be applied at many iterations of the threshold value. The initial threshold value was set at the intensity of the most weakly fluorescing particles. These were then identified as distinct particles using the above described algorithm, while particles of higher intensity blended together. Once the weakest particles were identified, they were removed from the dataset, the threshold was increased, and the search for particles was repeated. This was repeated at several iterations of the threshold value. The particle coordinates found were verified by marking the calculated coordinates in the raw data and visually inspecting the results. The data was rendered in 3 dimensions on a Silicon Graphics Indigo<sup>TM</sup> platform using programs implementing native graphics library routines. Data was also visualized through web-based 3d browsers using the VRML format (specified at http://www.vrml.org). IDL, a programming environment useful for visual data analysis (from Research Systems Inc., Boulder, CO), was used extensively for the manipulation of images. OBSERVATIONS We found the structure formation to proceed through a sequence of nonequilibrium structures depending on the initial conditions of the suspension and the strength of the applied field. The structure formation occurred most rapidly at early times, within a few seconds. We found it most useful to describe the structure development according to the volume fraction of spheres used. The following is a summary of our observations. At low fields, below $``$100V/mm, where the interparticle electrostatic interaction energies were low compared to thermal energies, no significant particle association was observed. The spheres tended to sediment to the bottom electrode, having a density larger than the glycerol-water solvent. The structural observations were performed at field strengths of $``$1000V/mm where field induced structures, such as chains of touching particles, that formed were not observed to break up due to thermal fluctuations, implying that the electrostatic energy at contact was many kT. However, we observed significant Brownian motion even in the final crystalline states. A particle in a BCT crystal with an applied field of 0.5kV/mm typically had a measured mean square displacement (in the plane perpendicular to the electric field) of about 5% of the lattice spacing. At the lowest observed particle volume fractions of about 10%, and with the particles initially distributed throughout the volume of the solvent, rapid application of fields of strength $``$1000V/mm resulted in the formation of field-aligned chains and ‘sheets’. 
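Before describing the individual structures, the particle-location procedure outlined under METHODS above can be summarized in a short sketch. This is not the authors' IDL code: the iterative-threshold refinement is reduced to a single pass, the linking tolerance and array layout are assumptions, and x, y are left in pixel units.

```python
import numpy as np
from scipy import ndimage

def locate_particles(stack, threshold, dz=0.1, xy_tol=0.3):
    """Toy version of the procedure above.  `stack[k]` is the 2-d intensity image of
    plane k (plane spacing dz, in micron); x, y are returned in pixel units."""
    centers = []                                            # (x, y, plane index, summed intensity)
    for k, plane in enumerate(stack):
        labels, n = ndimage.label(plane > threshold)        # bright regions in this plane
        coms = ndimage.center_of_mass(plane, labels, list(range(1, n + 1)))
        for idx, com in enumerate(coms, start=1):
            weight = plane[labels == idx].sum()
            centers.append((com[1], com[0], k, weight))     # (x, y) = (col, row)
    # link centroids with nearly the same (x, y) in consecutive planes into "strings"
    strings = []
    for c in sorted(centers, key=lambda c: c[2]):
        for s in strings:
            last = s[-1]
            if c[2] == last[2] + 1 and abs(c[0] - last[0]) < xy_tol and abs(c[1] - last[1]) < xy_tol:
                s.append(c)
                break
        else:
            strings.append([c])
    # final particle coordinate: intensity-weighted average along each string
    out = []
    for s in strings:
        w = np.array([c[3] for c in s], dtype=float)
        out.append((np.average([c[0] for c in s], weights=w),
                    np.average([c[1] for c in s], weights=w),
                    np.average([c[2] for c in s], weights=w) * dz))
    return out
```

Run repeatedly with increasing thresholds, removing the particles already found at each pass, this mimics the iterative scheme described above.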
Chains are linear associates of touching spheres aligned along the field, with a wide variation in length. Sheets are hexagonally ordered, 2-dimensional structures, aligned with the field direction (figure 3). At this particle concentration the number of chains formed initially was larger than the number of sheets, and as the concentration was increased, the presence of chains decreased relative to the presence of sheets. Sheets appeared within seconds of the field being turned on, and rapidly formed into a complex, interconnected labyrinthine formation (figure 4a). They formed initially at the electrodes and grew away from both electrodes towards the middle of the electrode gap. The chains appeared initially throughout the sample, with more in the middle of the electrode gap, where they rapidly transformed into BCT structures by attracting each other. We determined the structures to be BCT by direct measurement of the crystal dimensions (figures 5,6). The sheets transformed into BCT structures over a time of a few hours by annealing together, beginning in the regions away from the electrodes and growing towards the electrodes. The BCT structures that formed rearranged themselves into a network of misaligned BCT regions such as seen in figure 4b and only coarsened very slowly, by collective motions, over the observation time of 1-2 days. The first 2-3 layers of spheres at the electrodes remained in hexagonal planes parallel to the electrodes throughout the observation (figure 6). An explanation may be that the spheres at the electrodes are strongly attracted to their image dipoles created by the conducting electrodes. We did not have enough particle-coordinate data to characterize the structure in terms of a local order parameter and thereby quantify the structure formation over time, although in principle this is possible to do. Table 1 shows calculated values of the dipolar energy per particle at large electrode gaps for various particle arrangements (from refs and ). The energy per particle is in units of $`p^2/a^3ϵ_f`$, where $`p=a^3ϵ_fE(ϵ_p-ϵ_f)/(ϵ_p+2ϵ_f)`$. $`ϵ_p`$ and $`ϵ_f`$ are the dielectric constants of the particle and solvent respectively, and E is the magnitude of the electric field. The BCT structure is the most favourable structure, while sheets have a value in between that of chains and BCT. The value given for sheets is that for large hexagonally packed sheets and is independent of the arrangement of the spheres within the sheets. Neglecting the presence of the electrodes, because of the symmetry, the dipolar energy per particle in a hexagonally packed planar field-aligned sheet is independent of its orientation. However, in our observations we saw a distinctive orientation for the sheets. As seen in figure 3, the sheets we observed can be considered to be composed of strings of nearest-neighbour spheres that are tilted by 30° with respect to the electric field axis (as opposed to our expectation of hexagonal packing created by a series of offset sphere chains aligned with the field). This observed orientation for the sheets is probably created by the presence of the electrodes. Because of the strong attraction between spheres and their image dipoles at the electrodes, a layer of spheres forms on the electrode, which nucleates the growth of hexagonal sheets having the configuration observed.
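The kind of sums behind Table 1 can be illustrated with a crude truncated-cluster estimate. The sketch below is illustrative only: distances are in units of the sphere radius $`a`$ and energies in the units $`p^2/a^3ϵ_f`$ defined above, the finite cluster sizes are arbitrary choices, and the output therefore only approximates the converged infinite-lattice values.

```python
import numpy as np

# Crude truncated-cluster estimate of the dipolar energy per particle (cf. Table 1).
# Lengths in units of the sphere radius a, energies in units of p^2/(a^3 eps_f),
# dipoles aligned with the field (z axis); pair energy (1 - 3 cos^2 theta)/r^3.
# Finite clusters only, so these are rough numbers, not converged lattice sums.
def pair_u(d):
    r = np.linalg.norm(d)
    return (1.0 - 3.0 * (d[2] / r) ** 2) / r ** 3

def central_energy(sites):
    """Half the interaction energy of the particle closest to the origin with all others."""
    sites = np.asarray(sites, dtype=float)
    i0 = np.argmin(np.linalg.norm(sites, axis=1))
    return 0.5 * sum(pair_u(s - sites[i0]) for j, s in enumerate(sites) if j != i0)

# (i) an isolated chain of touching spheres along the field (spacing 2a)
chain = [(0.0, 0.0, 2.0 * n) for n in range(-40, 41)]

# (ii) a BCT bundle: conventional cell sqrt(6) x sqrt(6) x 2 (in units of a) with a body centre
bct = []
for i in range(-4, 5):
    for j in range(-4, 5):
        for k in range(-40, 41):
            bct.append((np.sqrt(6) * i, np.sqrt(6) * j, 2.0 * k))
            bct.append((np.sqrt(6) * (i + 0.5), np.sqrt(6) * (j + 0.5), 2.0 * (k + 0.5)))

print("chain:", round(central_energy(chain), 3))   # close to -0.30 for a long chain
print("BCT  :", round(central_energy(bct), 3))     # expected below the chain value, as in Table 1
```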
Figure 7 shows a calculation of the energy per dipole (in units of $`p^2`$$`/`$$`a^3`$$`ϵ_f`$) for various structures where the calculations include the presence of the electrodes by including interactions with image dipoles. The calculations assume fixed identical dipoles interacting with the external field and each other. For each structure considered, the value given is that for a dipole located in the center of the structure. Two configurations of sheets are considered, the observed configuration and that formed by a close packed arrangement of chains. For electrode gaps smaller than $``$10 sphere diameters the observed sheet structure shows the lowest energy. At larger electrode gaps the BCT structure has the lowest energy, while the difference between the values for the two sheet configurations decreases. Our experiments agree with this simple calculation which neglects entropy contributions. For electrode gaps less than $``$12 sphere diameters we observed that the sheets (and a smaller population of chains) that formed upon application of the field persisted over the duration of observation (2 days) and were the dominant structure present, with a small fraction of spheres forming into BCT crystals. Thus experimentally the sheets seemed to be the equilibrium structures at these electrode gaps. It is interesting to note that while it would seem natural for chains of particles to aggregate together to form hexagonal sheets, we did not observe this in our experiments. Thus the small energy gain (due to the presence of the electrodes) in forming our observed sheets, while decreasing with increasing electrode gap, was most likely sufficient to select a preferred orientation for the sheets, even in very thick samples. As the particle volume fractions in our samples were increased from 10% to 15%, the presence of chains decreased, and increasing proportions of the sample adopted the metastable sheet state. The time for the sheets to transform into BCT structures increased as the applied E-field value was increased. We did not have enough time resolution to observe the mechanics of the formation of sheets, which formed within seconds. When the E-field was turned off, the structures disassociated, driven by Brownian motion, and the spheres returned to a dispersed state. At sphere volume fractions $``$25% the structure formation was investigated under two different starting conditions. In one case, the particles were allowed to sediment under gravity (figure 8a) in the absence of a field. The bottom layers of the sediment were observed to be randomly stacked hexagonal close-packed planes , and the upper layers of spheres were in a fluid-like state. When an electric field ($``$1kV/mm, 500kHz) was applied across the electrode gap of $``$70$`\mu `$m, the spheres at the interface of the sediment and solvent began to form field aligned chains that eventually reached the upper electrode. This occurred within a few (2-3) minutes. Over the next several hours the spheres in the entire sediment rearranged into field aligned chains that attracted each other to form columns that spanned the electrodes (figure 8b). The sphere arrangement within the columns was a BCT structure. The columns themselves were bridged together by domains of BCT crystals. After coarsening for a few hours further development of the column-structures stopped and no further evolution occurred over a period of 2 days. Lowering the electric field slowed down the structure formation. 
We did not see sheet-like structures under these initial conditions. When the electric field was applied across a solution with volume fraction $``$30%, but with the spheres initially dispersed through the solvent, we observed a pattern of structure formation similar to that seen at lower concentrations with similar starting configurations, where labyrinths of small sheet-like structures, along with isolated chains of spheres developed within seconds. The sheets evolved into a collection of small sections of interconnected BCT structures that retained the labyrinthine appearance of the sheets. When the field was turned off, the structures disappeared within seconds, as was the case in all our observations. Observations were made at higher concentrations, volume fraction $``$45% where the spheres crystallize. In the absence of electric fields the spheres were arranged in FCC stacked hexagonal layers parallel to the electrodes, with the top few layers being liquid-like . As is the case in our observations, it has been observed in other experiments that, except for the hard-sphere limiting case (thin steric stabilizer layer or very thin double layer with 0.1 M salt), the crystal structure formed upon sedimentation is FCC . When an electric field $``$1kV/mm and 500kHz was applied across the electrode gap ($``$80$`\mu `$m), defects appeared in the hexagonal structure, and areas of BCT formed in the bulk of the sample over the first $``$10 minutes. After a few hours it was observed that sections of the colloidal crystal had transformed into BCT order in the bulk of the sample, with the bottom 5-6 layers remaining hexagonal, and the top 2-3 layers remaining disordered. There were no sheets observed at this concentration. Figure 9 shows the transition from hexagonal ordering in the absence of a field, to a mix of BCT and hexagonal order after the field was on for $``$6 hours. It should be noted that, as indicated in the upper portion of Fig.9b that there is free space between BCT crystals of different orientation. When the field was switched off the BCT crystals stayed in the same symmetry but expanded to become 100 oriented FCC crystals, as opposed to the initial 111 FCC symmetry seen before the field was turned on. When the field is switched on again the crystal goes through a martensitic transition back to BCT. This ability to tune the crystal structure by using an electric field could have applications in the field of photonic crystals. DISCUSSION It is interesting to consider other studies that address the issue of structure formation. Labyrinthine structures similar to sheets have been observed in ferrofluids, but the structure within the sheets was not ascertained . The presence of hexagonal sheets has not been reported in previous experimental observations on ER fluids, but has been observed in simulations . The simulations of Martin, Anderson and Tigges show that for sphere concentrations less than $``$30% a sudden application of the field first induces formation of short chains parallel to the field. The chains then attract each other, forming sheets, which often are bent into tube and spiral-like forms, or form thick walls . This was in contrast to our observations, where the sheets remained two-dimensional for long periods of time. Very few BCT domains were found in the simulations that neglected thermal motion , probably because the sheets were prevented from annealing into the ground state BCT crystal. 
This is in contrast to our experiments, where large BCT crystals formed after long exposures to the field. In their simulations that considered thermal effects , Martin, Anderson and Tigges observed increased order, crystallinity, and larger domain sizes compared to their simulations neglecting thermal motion . However a sheet to BCT transition was not seen, possibly because of the insufficient duration of the simulation. The structure evolution we observed followed the general pattern of the simulations , although we did not observe the sheets to form from the association of chains. However, the time resolution of our observations was not sufficient to study the initial association in more detail. Halsey and Toor studied structure evolution by considering that the shape of a particle aggregate would be a droplet, modeled as a prolate spheroid. The droplet is assumed to grow with time, in a quasi-equilibrium manner, as individual particles attach to it. It elongates towards the electrodes as it grows, with its shape being determined by balancing the bulk and surface electrostatic energies due to the dipoles it contains, forming into a column spanning the electrodes. Individual columns will aggregate over time towards a bulk phase segregation. This model assumes the concentration of particles is low, and that the droplet is always in equilibrium, which requires that the electric field is low compared to thermal energies. They point out that the equilibrium droplet model fails at high fields, where columns will form rapidly in a non-equilibrium manner but growth will become arrested before equilibrium bulk phase separation occurs. We did not observe droplet-like structures. At the volume fractions we used, and with the rapidly quenched relatively high electric fields we used, structure formation did not occur particle by particle in a quasi-equilibrium manner as in the droplet model, but rather as a association of chains and sheets. We observed column formation at volume fractions of $``$25% and with the colloid initially sedimented, but the columns did not coarsen to allow bulk phase separation over a period of observation of 2 days. Martin and colleagues have performed two-dimensional light scattering studies on a model ER fluid using particles and field strengths similar to those used by us . They observed a two-stage, chain to column structure formation under conditions of no shear. Although sheet formation might be expected under the conditions used, they did not observe sheets. This may be because light scattering methods alone are insufficient to detect and interpret sheet-like structures. Melrose has performed Brownian dynamics simulations on ER fluids of particle volume fractions ranging from $``$10% to 50%. For the case where no shear was applied on the ER fluid, particles were observed to form into strings, which subsequently aggregated together. At 10% volume fraction, strings and small aggregates of strings are seen. At 30% v.f. a kinetically arrested gel is seen, where the gel is composed of hexagonal sheets of particles. The hexagonal arrangement within the sheets is not described. The sheets form into a labyrinthine structure similar to that seen in our observations. The simulation remains trapped in a local potential minimum, and is not seen to evolve further towards a crystalline ground state. In this simulation, Brownian motion was turned off upon application of the electric field. 
A similar kinetically trapped gel-like state was observed in the simulations of Hass , who did not observe the ER fluid to evolve into a regular lattice. Brownian effects were neglected in this simulation. A Brownian dynamics simulation by Tao and Jiang on a system consisting of 122 particles (with volume fraction $``$20%), which considers thermal motion during structure formation, shows the ER fluid rapidly forming chains, which aggregate into thick columns. These columns consist of polycrystalline BCT lattice grains that are aligned along the field direction, but misaligned in the xy direction. We observed similar column-like structures at $``$25% volume fraction. When thermal forces are neglected, the ER fluid is described to be trapped in a local minimum energy state. However, a sheet-like state is not reported as an intermediate state during structure development with thermal forces included. CONCLUSION We have verified through direct visualization that our model ER fluid reaches the BCT structure as a ground state under most conditions, and have been able to describe structure formation as a function of concentration. At the lowest concentrations observed (about 10%), BCT crystals were seen primarily to form through chains of particles attracting each other, while a smaller fraction of particles formed sheets that transformed into BCT structures by annealing together. As the concentration was increased beyond 15% the presence of chains decreased and sheets dominated at early times, forming into complex labyrinthine structures which lasted for hours before annealing into BCT crystals. At high concentrations (larger than 40%), where the initial structure was FCC, BCT crystals were formed via defects appearing in the existing hexagonal structures. The unusual symmetry of the BCT crystals and the martensitic FCC-BCT crystal transition both promise applications in photonics. Unlike magnetorheological fluids, ER fluids can be made from non light-absorbing materials. The martensitic crystal switching we observed was on structures of size of the order of the wavelength of light, as opposed to the FCC-BCT transition of $``$45$`\mu `$m sized spheres described recently . The column-like structures observed at intermediate concentrations ($``$25%), were interlinked in complex ways through strings of particles and BCT crystals, and had wide variation in size. The structure within the columns was that of domains of BCT which were misaligned in the plane normal to the electric field. We observed that at a low concentration there was a sample thickness (about 10 sphere diameters) below which the preferred state was that of strings and sheets, and BCT crystals did not form. Data and analyzed images of our observations are available on the world wide web at http://www.elsie.brandeis.edu. Acknowledgment: This work was supported by the United States Department of Energy under Grant No. DE-FG02-94ER45522 and NSF International Travel Grant INT-9113312 and is part of the research program of the “stichting voor Fundamenteel Onderzoek der Materie (FOM)”, which is financially supported by the “Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO)”. FIG. 1. The three-dimensional BCT structure. The spheres have radius $`a`$ (the spheres are not drawn to scale). The crystal can be regarded as a collection of close-packed planes. One such plane is indicated by the shaded spheres. 
Each close-packed plane is a collection of chains of touching spheres, where neighbouring chains are offset from each other by a particle radius. In the ground state of ER fluids the chains are aligned with the electric field, so a section perpendicular to the electric field shows a square arrangement of spheres (see fig.5b). The volume fraction of a BCT crystal is 0.698. FIG. 2. Particle finding algorithm. A diagram showing the method of obtaining particle coordinates from the confocal micrographs. The z axis is along the optical axis and the electric field. (a) Each particle gives rise to roughly circular regions of varying intensity in neighbouring image planes, which are separated by ∼0.1$`\mu `$m. The xy centroid coordinate of each region is obtained by averaging coordinates weighted by intensity. (b) The regions belonging to an individual particle are identified. (c) The final xyz coordinate is found by an intensity-weighted average along each axis of the regions identified in (b). FIG. 3. Sheet structure. The field strength is 1.2 kV/mm and the field has been on for ∼30 minutes. (a) x-z confocal micrograph (raw data). There is a sheet of particles seen in the left side of the image, with a single chain of particles to the right of it. The bottom of the image is closest to the objective lens of the microscope. It should be noted that since only the cores of the silica spheres fluoresce, touching spheres are seen as separated. It is the anisotropic point spread function that causes the spherical fluorescent cores to appear ellipsoidal. Note also that the intensity and resolution diminish away from the lens due to spherical aberration. (b) View of a sheet seen face-on. The image is taken from a rendering of reconstructed 3-d raw data such as seen in (a). The white scale bar is 2$`\mu `$m. The bottom row of spheres is touching the glass electrode (the electrode is perpendicular to the image), and the electric field is upwards. The spheres are hexagonally close packed within a sheet. The smallest angle between nearest neighbours and the field is ±30°, except for the first two layers adjacent to the electrode, where nearest neighbours are aligned along the field. (c) Side view of the same sheet object. FIG. 4. Labyrinth of sheets. (a) View of a labyrinth of sheets looking down the electric field. The images are raw x-y confocal micrographs. The point spread function is symmetric in the plane (xy) perpendicular to the objective. The view is that seen after ∼3 minutes of the sample (volume fraction 15%) being in the E-field. The sheets form within seconds of the E-field being turned on, and evolve into the BCT structure over hours, indicated by (i) in the image above. The structure of a sheet such as indicated by (ii) in the image is shown in figures 3b and 3c. The white bar is 4$`\mu `$m. Note that the raw data only shows the (fluorescent) cores of the spheres, making it possible to distinguish touching spheres. Spheres within $`\pm 0.5`$$`\mu `$m of the image plane contribute to the image. The image is from a plane 20$`\mu `$m from an electrode in a sample with an electrode gap of 70$`\mu `$m. (b) After several hours, the sheets such as seen in (a) anneal together and form collections of BCT structures. The field is perpendicular to the image. Each BCT cluster extends long distances in the field direction. FIG. 5. Body Centered Tetragonal crystal.
Raw 3D data set of a BCT crystal consisting of $`13\mu \text{m}\times 13\mu \text{m}`$ x-y planes separated in the z-direction by 0.09$`\mu `$m. The image (a) is a digital interpolation of data from a sequence of x-y planes such as seen in (b). (a) shows a view along a plane parallel to the E-field, showing the centered rectangular lattice of dimension 2$`\sqrt{3}a\times 2a`$ (110 plane) of the BCT lattice, where the spheres have radius $`a`$. In this plane the BCT structure consists of chains of spheres aligned along the field direction. Neighbouring chains are offset along the field by one particle radius. (b) shows a view looking down the E-field showing the square $`\sqrt{6}a\times \sqrt{6}a`$ lattice (001 plane) of the BCT lattice. The plane (b) is a section orthogonal to (a), such as along the line indicated in (a). The alternating intensity pattern seen in (b) arises because adjacent chains of spheres are offset in and out of the plane by one particle radius. FIG. 6. BCT Crystal. Two views of a BCT crystal rendered in 3D after obtaining coordinates of the centers of each particle from raw data such as seen in Figure 5. The view is of a portion of a larger crystalline region. (a) shows a view looking in a plane parallel to the E-field. The particles connected with a line show a chain parallel to the E-field. The view shows the hexagonal 110 plane of the BCT crystal. As in (b), the x and $``$ show neighbouring chains of particles that are out of register by a particle radius. The spheres in the first few layers adjacent to the electrode are arranged in random stacked hexagonal planes parallel to the electrode. (b) shows a view perpendicular to the E-field. The $``$ and x symbols represent chains of particles aligned with the E-field, and chains marked with a $``$ are out of register with the x chains by one particle radius. Table 1: Dipolar energy per particle for various infinite sized lattices. Energy is in units of $`p^2/a^3ϵ_f`$, where the dipole moment $`p=a^3ϵ_fE(ϵ_p-ϵ_f)/(ϵ_p+2ϵ_f)`$. $`ϵ_p`$ and $`ϵ_f`$ are the dielectric constants of the particle and solvent respectively, and E is the magnitude of the electric field. FIG. 7. Dipole energy per particle vs. electrode gap. Image charges are included in this calculation. The energy per particle is in units of $`p^2/a^3ϵ_f`$, where $`p=a^3ϵ_fE(ϵ_p-ϵ_f)/(ϵ_p+2ϵ_f)`$. $`ϵ_p`$ and $`ϵ_f`$ are the dielectric constants of the particle and solvent respectively, and E is the magnitude of the electric field. ‘Observed sheets’ are sheets with the configuration observed in the experiments, while ‘chain sheets’ are close-packed planar sheets consisting of chains of spheres aligned along the electric field, where neighbouring chains are offset along the electric field direction by one particle radius. At electrode spacings of less than 10 particle diameters the observed sheets have a lower energy than the BCT crystal. FIG. 8. Field response of sedimented colloid. (a) At a particle volume fraction of ∼25% the colloid is left to sediment to the bottom electrode before an E-field is applied. The horizontal white lines represent the transparent electrodes. (b) Shows the sedimented sample as shown in (a) transforming into a column-like structure upon application of an E-field. The image is taken 2 minutes after the field was turned on. Chains of particles are seen to form along the field. (c) After a few hours the spheres arrange into cross-linked columns formed parallel to the field.
(d) A view of the columns from a plane perpendicular to the field. The square-like arrangement of the particles within the columns indicates that the structure within the columns is BCT. If the sphere-rich region and solvent are treated as dielectric fluids with different dielectric constants placed between the plates of a parallel plate capacitor, then the configuration seen in (c) (and shown schematically in the upper left corner of (c)), where the fluids are separated into regions with their interfaces parallel to the field, has lower electrostatic energy than the configuration in (a), where the fluids are separated in the sedimented state. This explains the tendency to form columns. FIG. 9. Field induced solid-solid transition. The image shows raw confocal microscope data of a sample of volume fraction $``$45%. (a) Shows a plane parallel to the electrodes before the E-field was turned on. The spheres are arranged as hexagonal planes with many defects, parallel to the electrodes. The structure is FCC. The image is from a plane 20$`\mu `$m from an electrode, and the electrode gap is 80$`\mu `$m. (b) Shows the same area about 6 hours after an E-field is applied (perpendicular to the image plane). Large areas of the crystal have transformed into BCT order, identified by the square configurations.
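The energy argument in the caption of Fig. 8(c) can be illustrated with a back-of-the-envelope estimate. The sketch below is not part of the original analysis: it simply treats the particle-rich region and the pure solvent as two dielectrics between the plates and compares their effective dielectric constant when stacked along the field (the sedimented state) with the column arrangement parallel to the field; the dielectric constants and volume fraction are made-up example numbers.

```python
# Illustrative estimate (not from the paper) of why columns form: at fixed applied
# voltage the arrangement with the larger effective dielectric constant has the lower
# electrostatic free energy (-C V^2 / 2). Example values only.

def eps_series(eps_a, eps_b, phi):
    """Slabs stacked along the field (sedimented state): harmonic mean."""
    return 1.0 / (phi / eps_a + (1.0 - phi) / eps_b)

def eps_parallel(eps_a, eps_b, phi):
    """Columns parallel to the field: arithmetic mean."""
    return phi * eps_a + (1.0 - phi) * eps_b

eps_rich, eps_solvent, phi = 6.0, 2.0, 0.25   # made-up dielectric constants, volume fraction
print("series   (sedimented layer):", eps_series(eps_rich, eps_solvent, phi))
print("parallel (columns)         :", eps_parallel(eps_rich, eps_solvent, phi))
# The arithmetic mean always exceeds the harmonic mean, so the column configuration has
# the larger capacitance and is favoured once the field is applied, as seen in Fig. 8(c).
```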
no-problem/0001/hep-ph0001010.html
ar5iv
text
# From Neutrino Masses to Proton Decay ## 1 Neutrino Story Once it became apparent that the spectrum of $`\beta `$ electrons was continuous , something drastic had to be done! In December 1930, in a letter that starts with typical panache, $`\mathrm{`}\mathrm{`}`$Dear Radioactive Ladies and Gentlemen…”, W. Pauli puts forward a “desperate” way out: there is a companion neutral particle to the $`\beta `$ electron. Thus earthlings became aware of the neutrino, so named in 1933 by Fermi (Pauli’s original name, neutron, superseded by Chadwick’s discovery of a heavy neutral particle), implying that there is something small about it, specifically its mass, although nobody at that time thought it was that small. Fifteen years later, B. Pontecorvo proposes the unthinkable, that neutrinos can be detected: an electron neutrino that hits a $`{}_{}{}^{37}Cl`$ atom will transform it into the inert radioactive gas $`{}_{}{}^{37}Ar`$, which can be stored and then detected through radioactive decay. Pontecorvo did not publish the report, perhaps because of the times, or because Fermi thought the idea ingenious but not immediately relevant. In 1956, using a scintillation counter experiment they had proposed three years earlier , Cowan and Reines discover electron antineutrinos through the reaction $`\overline{\nu }_e+pe^++n`$. Cowan passed away before 1995, the year Fred Reines was awarded the Nobel Prize for their discovery. There emerge two lessons in neutrino physics: not only is patience required but also longevity: it took $`26`$ years from birth to detection and then another $`39`$ for the Nobel Committee to recognize the achievement! This should encourage physicists to train their children at the earliest age to follow their footsteps at the earliest possible age, in order to establish dynasties of neutrino physicists. Perhaps then Nobel prizes will be awarded to scientific families? In 1956, it was rumored that Davis , following Pontecorvo’s proposal, had found evidence for neutrinos coming from a pile, and Pontecorvo , influenced by the recent work of Gell-Mann and Pais, theorized that an antineutrino produced in the Savannah reactor could oscillate into a neutrino and be detected. The rumor went away, but the idea of neutrino oscillations was born; it has remained with us ever since. Neutrinos give up their secrets very grudgingly: its helicity was measured in 1958 by M. Goldhaber , but it took 40 more years for experimentalists to produce convincing evidence for its mass. The second neutrino, the muon neutrino is detected in 1962, (long anticipated by theorists Inouë and Sakata in 1943 ). This time things went a bit faster as it took only 19 years from theory (1943) to discovery (1962) and 26 years to Nobel recognition (1988). That same year, Maki, Nakagawa and Sakata introduce two crucial ideas: neutrino flavors can mix, and their mixing can cause one type of neutrino to oscillate into the other (called today flavor oscillation). This is possible only if the two neutrino flavors have different masses. In 1964, using Bahcall’s result of an enhanced capture rate of $`{}_{}{}^{8}B`$ neutrinos through an excited state of $`{}_{}{}^{37}Ar`$, Davis proposes to search for $`{}_{}{}^{8}B`$ solar neutrinos using a $`100,000`$ gallon tank of cleaning fluid deep underground. Soon after, R. Davis starts his epochal experiment at the Homestake mine, marking the beginning of the solar neutrino watch which continues to this day. 
In 1968, Davis et al reported a deficit in the solar neutrino flux, a result that stands to this day as a truly remarkable experimental tour de force. Shortly after, Gribov and Pontecorvo interpreted the deficit as evidence for neutrino oscillations. In the early 1970's, the idea of quark-lepton symmetry suggests that the proton could be unstable. This brings about the construction of underground detectors, large enough to monitor many protons, and instrumented to detect the Čerenkov light emitted by their decay products. By the middle 1980's, several such detectors are in place. They fail to detect proton decay, but in a remarkable serendipitous turn of events, 150,000 years earlier, a supernova erupted in the Large Magellanic Cloud, and in 1987, its burst of neutrinos was detected in these detectors! All of a sudden, proton decay detectors turn their attention to neutrinos, while still waiting, to this day, for their protons to decay! Today, these detectors have shown great success in measuring the effects of solar and atmospheric neutrinos. They continue their unheralded watch for signs of proton decay, reassured in the knowledge that lepton number and baryon number violations are connected in most theories, leading to correlations between neutrino masses and proton decay rates. ## 2 Standard Model Neutrinos The standard model of electro-weak and strong interactions contains three left-handed neutrinos. The three neutrinos are represented by two-component Weyl spinors, $`\nu _i`$, $`i=e,\mu ,\tau `$, each describing a left-handed fermion (right-handed antifermion). As the upper components of weak isodoublets $`L_i`$, they have $`I_{3W}=1/2`$, and a unit of the global $`i`$th lepton number. These standard model neutrinos are strictly massless. The only Lorentz scalar made out of these neutrinos is the Majorana mass, of the form $`\nu _i^t\nu _j`$; it has the quantum numbers of a weak isotriplet, with third component $`I_{3W}=1`$, as well as two units of total lepton number. A Higgs isotriplet with two units of lepton number could generate neutrino Majorana masses, but there is no such Higgs in the Standard Model: there are no tree-level neutrino masses in the standard model. Quantum corrections, however, are not limited to renormalizable couplings, and it is easy to make a weak isotriplet out of two isodoublets, yielding the $`SU(2)\times U(1)`$ invariant $`L_i^t\vec{\tau }L_jH^t\vec{\tau }H`$, where $`H`$ is the Higgs doublet. As this term is not invariant under lepton number, it is not generated in perturbation theory. Thus the important conclusion: The standard model neutrinos are kept massless by global chiral lepton number symmetry. The detection of neutrino masses is therefore a tangible indication of physics beyond the standard model. ## 3 Experimental Issues From the solar neutrino deficit to the spectacular result from SuperKamiokande, experiments suggest that neutrinos have masses, providing the first credible evidence for physics beyond the standard model. As we stand at the end of this century, there remain several burning issues in neutrino physics that can be settled by future experiments: * The origin of the Solar Neutrino Deficit This is currently being addressed by SuperK, in their measurement of the shape of the $`{}_{}{}^{8}B`$ spectrum, of the day-night asymmetry and of the seasonal variation of the neutrino flux. Their reach will soon be improved by lowering their threshold energy. 
SNO is joining the hunt, and is expected to provide a more accurate measurement of the Boron flux. Its raison d’être, however, is the ability to measure neutral current interactions. If there are no sterile neutrinos, we might have a flavor independent measurement of the solar neutrino flux, while measuring at the same time the electron neutrino flux! This experiment will be joined by BOREXINO, designed to measure neutrinos from the $`{}_{}{}^{7}Be`$ capture. These neutrinos are suppressed in the small angle MSW solution, which could explain the results from the $`pp`$ solar neutrino experiments and those that measure the Boron neutrinos. * Atmospheric Neutrinos Here, there are several long baseline experiments to monitor muon neutrino beams and corroborate the SuperK results. The first, called K2K, already in progress, sends a beam from KEK to SuperK. Another, called MINOS, will monitor a FermiLab neutrino beam at the Soudan mine, 730 km away. A third experiment under consideration would send a CERN beam towards the Gran Sasso laboratory (also about 730 km away!). Eventually, these experiments hope to detect the appearance of a tau neutrino. This brief survey of upcoming experiments in neutrino physics is intended to give a flavor of things to come. These experiments will not only measure neutrino parameters (masses and mixing angles), but will help us answer fundamental questions about the nature of neutrinos. But the future of neutrino detectors may be even brighter. Many of us expect them to detect proton decay, thus realizing the kinship between leptons and quarks. There is even increasing talk of producing intense neutrino beams in muon storage rings, and at this workshop of building a mammoth proton decay/neutrino detector! ## 4 Neutrino Masses Neutrinos must be extraordinarily light: experiments indicate $`m_{\nu _e}<10\mathrm{eV}`$, $`m_{\nu _\mu }<170\mathrm{keV}`$, $`m_{\nu _\tau }<18\mathrm{MeV}`$ , and any model of neutrino masses must explain this suppression. The natural way to generate neutrinos masses is to introduce for each one its electroweak singlet Dirac partner, $`\overline{N}_i`$. These appear naturally in the Grand Unified group $`SO(10)`$ where they complete each family into its spinor representation. Neutrino Dirac masses will then be generated by the couplings $`L_i\overline{N}_jH`$ after electroweak breaking. However, unless there are extraordinary suppressions, these couplings generate masses that are way too big, of the same order of magnitude as the masses of the charged elementary particles $`m\mathrm{\Delta }I_w=1/2`$. Based on recent ideas from string theory, it has been proposed that the world of four dimensions is in fact a “brane” immersed in a higher dimensional space. In this view, all fields with electroweak quantum numbers live on the brane, while standard model singlet fields can live on the “bulk” as well. One such field is the graviton, others could be the right-handed neutrinos. Their couplings to the brane are reduced by geometrical factors, and the smallness of neutrino masses is due to the naturally small coupling between brane and bulk fields. In the absence of any credible dynamics for the physics of the bulk, we think that “one neutrino on the brane is worth two in the bulk”. We take the more conservative approach where the bulk does opens up, but at much shorter scales. One indication of such a scale is that at which the gauge couplings unify, the other is given by the value of neutrino masses. 
This is achieved by introducing Majorana mass terms $`\overline{N}_i\overline{N}_j`$ for the right-handed neutrinos. The masses of these new degrees of freedom are arbitrary, as they have no electroweak quantum numbers, $`M\mathrm{\Delta }I_w=0`$. If they are much larger than the electroweak scale, the neutrino masses are suppressed relative to that of their charged counterparts by the ratio of the electroweak scale to that new scale: the mass matrix (in $`3\times 3`$ block form) is $$\left(\begin{array}{cc}0& m\\ m& M\end{array}\right),$$ (1) leading, for each family, to one small and one large eigenvalue $$m_\nu m\frac{m}{M}\left(\mathrm{\Delta }I_w=\frac{1}{2}\right)\left(\frac{\mathrm{\Delta }I_w=\frac{1}{2}}{\mathrm{\Delta }I_w=0}\right).$$ (2) This seesaw mechanism provides a natural explanation for small neutrino masses as long as lepton number is broken at a large scale $`M`$. With $`M`$ around the energy at which the gauge couplings unify, this yields neutrino masses at or below tenths of eVs, consistent with the SuperK results. The lepton flavor mixing comes from the diagonalization of the charged lepton Yukawa couplings, and of the neutrino mass matrix. From the charged lepton Yukawas, we obtain $`𝒰_e`$, the unitary matrix that rotates the lepton doublets $`L_i`$. From the neutrino Majorana matrix, we obtain $`𝒰_\nu `$, the matrix that diagonalizes the Majorana mass matrix. The $`6\times 6`$ seesaw Majorana matrix can be written in $`3\times 3`$ block form $$=𝒱_\nu ^t𝒟𝒱_\nu \left(\begin{array}{cc}𝒰_{\nu \nu }& ϵ𝒰_{\nu N}\\ ϵ𝒰_{N\nu }^t& 𝒰_{NN}\end{array}\right),$$ (3) where $`ϵ`$ is the tiny ratio of the electroweak to lepton number violating scales, and $`𝒟=\mathrm{diag}(ϵ^2𝒟_\nu ,𝒟_N)`$, is a diagonal matrix. $`𝒟_\nu `$ contains the three neutrino masses, and $`ϵ^2`$ is the seesaw suppression. The weak charged current is then given by $$j_\mu ^+=e_i^{}\sigma _\mu 𝒰_{MNS}^{ij}\nu _j,$$ (4) where $$𝒰_{MNS}=𝒰_e𝒰_\nu ^{},$$ (5) is the Maki-Nakagawa-Sakata (MNS) flavor mixing matrix, the analog of the CKM matrix in the quark sector. In the seesaw-augmented standard model, this mixing matrix is totally arbitrary. It contains, as does the CKM matrix, three rotation angles, and one CP-violating phase. In the seesaw scenario, it also contains two additional CP-violating phases which cannot be absorbed in a redefinition of the neutrino fields, because of their Majorana masses (these extra phases can be measured only in $`\mathrm{\Delta }=2`$ processes). Unfortunately, theoretical predictions of lepton hierarchies and mixings depend very much on hitherto untested theoretical assumptions. In the quark sector, where the bulk of the experimental data resides, the theoretical origin of quark hierarchies and mixings is a mystery, although there exits many theories, but none so convincing as to offer a definitive answer to the community’s satisfaction. It is therefore no surprise that there are more theories of lepton masses and mixings than there are parameters to be measured. Nevertheless, one can present the issues as questions: * Do the right handed neutrinos have quantum numbers beyond the standard model? * Are quarks and leptons related by grand unified theories? * Are quarks and leptons related by anomalies? * Are there family symmetries for quarks and leptons? 
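Returning to the seesaw matrix of Eq. (1), a minimal numerical sketch can make the suppression of Eq. (2) concrete. The values of $`m`$ and $`M`$ below are illustrative assumptions (kept moderate so that the double-precision diagonalization remains accurate), not numbers quoted in the text.

```python
# Minimal numerical check of the seesaw mechanism of Eqs. (1)-(2): diagonalize the
# per-family mass matrix [[0, m], [m, M]] and compare the light eigenvalue with m^2/M.
import numpy as np

m = 100.0      # Dirac mass, GeV (order the electroweak scale; illustrative)
M = 1.0e8      # heavy Majorana mass, GeV (illustrative stand-in for a very large scale)

eigenvalues = np.linalg.eigvalsh(np.array([[0.0, m], [m, M]]))
light, heavy = sorted(abs(eigenvalues))

print(f"light eigenvalue : {light:.3e} GeV")
print(f"m^2/M            : {m**2 / M:.3e} GeV")
print(f"heavy eigenvalue : {heavy:.3e} GeV (close to M)")
# The light state is suppressed by the ratio m/M relative to the Dirac mass m; pushing M
# up toward the unification scale drives the light eigenvalue down to the sub-eV range
# discussed above.
```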
The measured numerical value of the neutrino mass difference (barring any fortuitous degeneracies), suggests through the seesaw mechanism, a mass for the right-handed neutrinos that is consistent with the scale at which the gauge couplings unify. Is this just a numerical coincidence, or should we view this as a hint for grand unification? Grand unified theories, originally proposed as a way to treat leptons and quarks on the same footing, imply symmetries much larger than the standard model’s. Implementation of these ideas necessitates a desert and supersymmetry, but also a carefully designed contingent of Higgs particles to achieve the desired symmetry breaking. That such models can be built is perhaps more of a testimony to the cleverness of theorists rather than of Nature’s. Indeed with the advent of string theory, we know that the best features of grand unified theories can be preserved, as most of the symmetry breaking is achieved by geometric compactification from higher dimensions . An alternative point of view is that the vanishing of chiral anomalies is necessary for consistent theories, and their cancellation is most easily achieved by assembling matter in representations of anomaly-free groups. Perhaps anomaly cancellation is more important than group structure. Below, we present two theoretical frameworks of our work, in which one deduces the lepton mixing parameters and masses. One is ancient , uses the standard techniques of grand unification, but it had the virtue of predicting the large $`\nu _\mu \nu _\tau `$ mixing observed by SuperKamiokande. The other is more recent, and uses extra Abelian family symmetries to explain both quark and lepton hierarchies. It also predicted large $`\nu _\mu \nu _\tau `$ mixing, while both schemes predict small $`\nu _e\nu _\mu `$ mixings. ### 4.1 A Grand Unified Model The seesaw mechanism was born in the context of the grand unified group $`SO(10)`$, which naturally contains electroweak neutral right-handed neutrinos. Each standard model family appears in two irreducible representations of $`SU(5)`$. However, the predictions of this theory for Yukawa couplings is not so clear cut, and to reproduce the known quark and charged lepton hierarchies, a special but simple set of Higgs particles had to be included. In the simple scheme proposed by Georgi and Jarlskog , the ratios between the charged leptons and quark masses is reproduced, albeit not naturally since two Yukawa couplings, not fixed by group theory, had to be set equal. This motivated us to generalize their scheme to $`SO(10)`$, where it is (technically) natural, which meant that we had an automatic window into neutrino masses through the seesaw. The Yukawa couplings were of the Higgs-heavy, with $`\mathrm{𝟏𝟐𝟔}`$ representations, but the attitude at the time was “damn the Higgs torpedoes, and see what happens”. A modern treatment would include non-renormalizable operators , but with similar conclusion. The model yielded the mass relations $$m_dm_s=3(m_em_\mu );m_dm_s=m_em_\mu ;$$ (6) as well as $$m_b=m_\tau ,$$ (7) and mixing angles $$V_{us}=\mathrm{tan}\theta _c=\sqrt{\frac{m_d}{m_s}};V_{cb}=\sqrt{\frac{m_c}{m_t}}.$$ (8) While reproducing the well-known lepton and quark mass hierarchies, it predicted a long-lived $`b`$ quark, contrary to the lore of the time. It also made predictions in the lepton sector, namely maximal $`\nu _\tau \nu _\mu `$ mixing, small $`\nu _e\nu _\mu `$ mixing of the order of $`(m_e/m_\mu )^{1/2}`$, and no $`\nu _e\nu _\tau `$ mixing. 
The neutral lepton masses came out to be hierarchical, but heavily dependent on the masses of the right-handed neutrinos. The electron neutrino mass came out much lighter than those of $`\nu _\mu `$ and $`\nu _\tau `$. Their numerical values depended on the top quark mass, which was then supposed to be in the tens of GeVs! Given the present knowledge, some of the features are remarkable, such as the long-lived $`b`$ quark and the maximal $`\nu _\tau \nu _\mu `$ mixing. On the other hand, the actual numerical value of the $`b`$ lifetime was off a bit, and the $`\nu _e\nu _\mu `$ mixing was too large to reproduce the small angle MSW solution of the solar neutrino problem. The lesson should be that the simplest $`SO(10)`$ model that fits the observed quark and charged lepton hierarchies, reproduces, at least qualitatively, the maximal mixing found by SuperK, and predicts small mixing with the electron neutrino . ### 4.2 A Non-grand-unified Model There is another way to generate hierarchies, based on adding extra family symmetries to the standard model, without invoking grand unification. These types of models address only the Cabibbo suppression of the Yukawa couplings, and are not as predictive as specific grand unified models. Still, they predict no Cabibbo suppression between the muon and tau neutrinos. Below, we present a pre-SuperK model with those features. The Cabibbo supression is assumed to be an indication of extra family symmetries in the standard model. The idea is that any standard model-invariant operator, such as $`𝐐_i\overline{𝐝}_jH_d`$, cannot be present at tree-level if there are additional symmetries under which the operator is not invariant. Simplest is to assume an Abelian symmetry, with an electroweak singlet field $`\theta `$, as its order parameter. Then the interaction $$𝐐_i\overline{𝐝}_jH_d\left(\frac{\theta }{M}\right)^{n_{ij}}$$ (9) can appear in the potential as long as the family charges balance under the new symmetry. As $`\theta `$ acquires a $`vev`$, this leads to a suppression of the Yukawa couplings of the order of $`\lambda ^{n_{ij}}`$ for each matrix element, with $`\lambda =\theta /M`$ identified with the Cabibbo angle, and $`M`$ is the natural cut-off of the effective low energy theory. As a consequence of the charge balance equation $$X_{if}^{[d]}+n_{ij}X_\theta =0,$$ (10) the exponents of the suppression are related to the charge of the standard model-invariant operator , the sum of the charges of the fields that make up the the invariant. This simple Ansatz, together with the seesaw mechanism, implies that the family structure of the neutrino mass matrix is determined by the charges of the left-handed lepton doublet fields. Each charged lepton Yukawa coupling $`L_i\overline{N}_jH_u`$, has an extra charge $`X_{L_i}+X_{Nj}+X_H`$, which gives the Cabibbo suppression of the $`ij`$ matrix element. Hence, the orders of magnitude of these couplings can be expressed as $$\left(\begin{array}{ccc}\lambda ^{l_1}& 0& 0\\ 0& \lambda ^{l_2}& 0\\ 0& 0& \lambda ^{l_3}\end{array}\right)\widehat{Y}\left(\begin{array}{ccc}\lambda ^{p_1}& 0& 0\\ 0& \lambda ^{p_2}& 0\\ 0& 0& \lambda ^{p_3}\end{array}\right),$$ (11) where $`\widehat{Y}`$ is a Yukawa matrix with no Cabibbo suppressions, $`l_i=X_{L_i}/X_\theta `$ are the charges of the left-handed doublets, and $`p_i=X_{N_i}/X_\theta `$, those of the singlets. The first matrix forms half of the MNS matrix. 
Similarly, the mass matrix for the right-handed neutrinos, $`\overline{N}_i\overline{N}_j`$ will be written in the form $$\left(\begin{array}{ccc}\lambda ^{p_1}& 0& 0\\ 0& \lambda ^{p_2}& 0\\ 0& 0& \lambda ^{p_3}\end{array}\right)\left(\begin{array}{ccc}\lambda ^{p_1}& 0& 0\\ 0& \lambda ^{p_2}& 0\\ 0& 0& \lambda ^{p_3}\end{array}\right).$$ (12) The diagonalization of the seesaw matrix is of the form $$L_iH_u\overline{N}_j\left(\frac{1}{\overline{N}\overline{N}}\right)_{jk}\overline{N}_kH_uL_l,$$ (13) from which the Cabibbo suppression matrix from the $`\overline{N}_i`$ fields cancels, leaving us with $$\left(\begin{array}{ccc}\lambda ^{l_1}& 0& 0\\ 0& \lambda ^{l_2}& 0\\ 0& 0& \lambda ^{l_3}\end{array}\right)\widehat{}\left(\begin{array}{ccc}\lambda ^{l_1}& 0& 0\\ 0& \lambda ^{l_2}& 0\\ 0& 0& \lambda ^{l_3}\end{array}\right),$$ (14) where $`\widehat{}`$ is a matrix with no Cabibbo suppressions. The Cabibbo structure of the seesaw neutrino matrix is determined solely by the charges of the lepton doublets! As a result, the Cabibbo structure of the MNS mixing matrix is also due entirely to the charges of the three lepton doublets. This general conclusion depends on the existence of at least one Abelian family symmetry, which we argue is implied by the observed structure in the quark sector. The Wolfenstein parametrization of the CKM matrix , $$\left(\begin{array}{ccc}1& \lambda & \lambda ^3\\ \lambda & 1& \lambda ^2\\ \lambda ^3& \lambda ^2& 1\end{array}\right),$$ (15) and the Cabibbo structure of the quark mass ratios $$\frac{m_u}{m_t}\lambda ^8\frac{m_c}{m_t}\lambda ^4;\frac{m_d}{m_b}\lambda ^4\frac{m_s}{m_b}\lambda ^2,$$ (16) can be reproduced by a simple family-traceless charge assignment for the three quark families, namely $$X_{𝐐,\overline{𝐮},\overline{𝐝}}=(2,1,1)+\eta _{𝐐,\overline{𝐮},\overline{𝐝}}(1,0,1),$$ (17) where $``$ is baryon number, $`\eta _{\overline{𝐝}}=0`$, and $`\eta _𝐐=\eta _{\overline{𝐮}}=2`$. Two striking facts are evident: * the charges of the down quarks, $`\overline{𝐝}`$, associated with the second and third families are the same, * $`𝐐`$ and $`\overline{𝐮}`$ have the same value for $`\eta `$. To relate these quark charge assignments to those of the leptons, we need to inject some more theoretical prejudices. Assume these family-traceless charges are gauged, and not anomalous. Then to cancel anomalies, the leptons must themselves have family charges. Anomaly cancellation generically implies group structure. In $`SO(10)`$, baryon number generalizes to $``$, where $``$ is total lepton number, and in $`SU(5)`$ the fermion assignment is $`\overline{\mathrm{𝟓}}=\overline{𝐝}+L`$, and $`\mathrm{𝟏𝟎}=𝐐+\overline{𝐮}+\overline{e}`$. Thus anomaly cancellation is easily achieved by assigning $`\eta =0`$ to the lepton doublet $`L_i`$, and $`\eta =2`$ to the electron singlet $`\overline{e}_i`$, and by generalizing baryon number to $``$, leading to the charges $$X_{𝐐,\overline{𝐮},\overline{𝐝},L,\overline{e}}=()(2,1,1)+\eta _{𝐐,\overline{𝐮},\overline{𝐝}}(1,0,1),$$ (18) where now $`\eta _{\overline{𝐝}}=\eta _L=0`$, and $`\eta _𝐐=\eta _{\overline{𝐮}}=\eta _{\overline{e}}=2`$. The charges of the lepton doublets are simply $`X_{L_i}=(2,1,1)`$. We have just argued that these charges determine the Cabibbo structure of the MNS lepton mixing matrix to be $$𝒰_{MNS}\left(\begin{array}{ccc}1& \lambda ^3& \lambda ^3\\ \lambda ^3& 1& 1\\ \lambda ^3& 1& 1\end{array}\right),$$ (19) implying no Cabibbo suppression in the mixing between $`\nu _\mu `$ and $`\nu _\tau `$. 
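The statement that the Cabibbo structure of the seesaw and MNS matrices follows from the lepton-doublet charges alone can be made concrete in a few lines. The sketch below takes, as an assumption for illustration, the family-traceless doublet charges $`l=(2,-1,-1)`$ (an assignment consistent with the $`\lambda ^3`$ pattern of Eq. (19)) and a Cabibbo parameter $`\lambda =0.22`$, with all order-one prefactors set to unity.

```python
# Sketch: build the order-of-magnitude structure of the light-neutrino mass matrix,
# Eq. (14), and of the MNS matrix, Eq. (19), from the lepton-doublet charges alone.
# Each mass-matrix entry scales as lambda^(l_i + l_j); each MNS entry as lambda^|l_i - l_j|.
import numpy as np

lam = 0.22                  # Cabibbo angle
l = np.array([2, -1, -1])   # lepton-doublet charges (illustrative assumption)

mass_structure = lam ** (l[:, None] + l[None, :])
mns_structure = lam ** abs(l[:, None] - l[None, :])

np.set_printoptions(precision=4, suppress=True)
print("light-neutrino mass matrix structure (normalized to the 3-3 entry):")
print(mass_structure / mass_structure[2, 2])
print("MNS mixing structure:")
print(mns_structure)
# The 2-3 block is O(1), i.e. there is no Cabibbo suppression of nu_mu - nu_tau mixing,
# while every entry involving the first family is suppressed by lambda^3.
```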
This large mixing is consistent with the SuperK discovery and with the small angle MSW solution to the solar neutrino deficit. One also obtains a much lighter electron neutrino, and Cabibbo-comparable masses for the muon and tau neutrinos. Notice that these predictions are subtly different from those of grand unification, as they yield $`\nu _e-\nu _\tau `$ mixing. On the other hand, the scale of the neutrino mass values depends on the family trace of the family charge(s). Here we simply quote the results of our model . The masses of the right-handed neutrinos are found to be of the following orders of magnitude $$m_{\overline{N}_e}\sim M\lambda ^{13};\qquad m_{\overline{N}_\mu }\sim m_{\overline{N}_\tau }\sim M\lambda ^7,$$ (20) where $`M`$ is the scale of the right-handed neutrino mass terms, assumed to be the cut-off. The seesaw mass matrix for the three light neutrinos comes out to be $$m_0\left(\begin{array}{ccc}a\lambda ^6& b\lambda ^3& c\lambda ^3\\ b\lambda ^3& d& e\\ c\lambda ^3& e& f\end{array}\right),$$ (21) where we have added for future reference the prefactors $`a,b,c,d,e,f`$, all of order one, and $$m_0=\frac{v_u^2}{M\lambda ^3},$$ (22) where $`v_u`$ is the $`vev`$ of the Higgs doublet. This matrix has one light eigenvalue $$m_{\nu _e}\sim m_0\lambda ^6.$$ (23) Without a detailed analysis of the prefactors, the masses of the other two neutrinos come out to be both of order $`m_0`$. The mass difference announced by SuperK cannot be reproduced without going beyond the model, by taking into account the prefactors. The two heavier mass eigenstates and their mixing angle are written in terms of $$x=\frac{df-e^2}{(d+f)^2},\qquad y=\frac{d-f}{d+f},$$ (24) as $$\frac{m_{\nu _2}}{m_{\nu _3}}=\frac{1-\sqrt{1-4x}}{1+\sqrt{1-4x}},\qquad \mathrm{sin}^22\theta _{\mu \tau }=1-\frac{y^2}{1-4x}.$$ (25) If $`4x\approx 1`$, the two heaviest neutrinos are nearly degenerate. If $`4x\ll 1`$, a condition easy to achieve if $`d`$ and $`f`$ have the same sign, we can obtain an adequate split between the two mass eigenstates. For illustrative purposes, when $`0.03<x<0.15`$, we find $$4.4\times 10^{-6}\le \mathrm{\Delta }m_{\nu _e\nu _\mu }^2\le 10^{-5}\mathrm{eV}^2,$$ (26) which yields the correct non-adiabatic MSW effect, and $$5\times 10^{-4}\le \mathrm{\Delta }m_{\nu _\mu \nu _\tau }^2\le 5\times 10^{-3}\mathrm{eV}^2,$$ (27) for the atmospheric neutrino effect. These were calculated with a cut-off, $`10^{16}\mathrm{GeV}<M<4\times 10^{17}\mathrm{GeV}`$, and a mixing angle, $`0.9<\mathrm{sin}^22\theta _{\mu \tau }<1`$. This value of the cut-off is compatible not only with the data but also with the gauge coupling unification scale, a necessary condition for the consistency of our model, and more generally for the basic ideas of grand unification. ### 4.3 Proton Decay We have seen in the previous section that the ultraviolet cut-off $`M`$ appears directly in the seesaw masses. Now that it is determined by experiment, we can use it to estimate the strength of other interactions, in particular those that generate proton decay. In a supersymmetric theory with no R-parity violation, proton decay is caused by two types of operators that appear in the superpotential as $$W=\frac{1}{M}[\kappa _{112i}𝐐_1𝐐_1𝐐_2𝐋_i+\overline{\kappa }_{1jkl}\overline{𝐮}_1\overline{𝐮}_j\overline{𝐝}_k\overline{𝐞}_l]$$ (28) where for the first operator the flavor index $`i=1,2`$ if there is a charged lepton in the final state and $`i=1,2,3`$ if there is a neutrino, and $`j=2,3`$, $`k,l=1,2`$. 
Operators that involve only one family, such as $`𝐐_1𝐐_1𝐐_1𝐋_i`$, and $`\overline{𝐮}_1\overline{𝐮}_1\overline{𝐝}_1\overline{𝐞}_l`$ are forbidden by symmetry. The reasons are that the combination $`𝐐_1𝐐_1𝐐_1`$ vanishes identically in the color singlet channel, and the combination $`\overline{𝐮}_1\overline{𝐮}_1`$ transforms as a color sextet, and cannot make a color invariant with the addition of an extra antiquark. This is the well-known statement that in supersymmetric theories, proton decay products will necessarily involve strange particles. The conventional decay into first family members is still there but not dominant. It would be most amusing if the first experimental manifestation of supersymmetry were to be the detection of proton decay into kaons! These interactions lead to dimension-five four-body interactions between two quarks and two sparticles (two squarks or two sleptons). After gaugino exchange, the two sparticles are turned into particles, leading to baryon number violating four fermion interactions, among them proton decay. The existing bounds on proton decay put severe constraints on the couplings $`\kappa _{112i}`$ and $`\overline{\kappa }_{1jkl}`$. In theories where the Cabibbo suppression of operators is related to their charges, we expect these operators to be highly Cabibbo-suppressed. This is because of sum rules which relate their charges to those of standard model invariants. Under the assumptions of a tree-level top quark mass, zero $`\mu `$-term charge, and of the Green-Schwarz relation $`C_{\mathrm{color}}=C_{\mathrm{weak}}`$, the family-independent charges satisfy $$X_{𝐐_1𝐐_1𝐐_2𝐋_i}=X_{\overline{𝐮}_1\overline{𝐮}_j\overline{𝐝}_k\overline{𝐞}_l}=X_{𝐐_1\overline{𝐮}_1H_u}.$$ (29) Also, the branching ratios between different proton decay modes are determined by the $`U(1)`$ charges that are flavor dependent. In our model , the least suppressed operator is $`𝐐_1𝐐_1𝐐_2𝐋_{2,3}`$, with $$\kappa _{1122}\sim \kappa _{1123}\sim \lambda ^{11},$$ (30) leading to the estimate (with $`M`$ set by the neutrino mass values), $$\mathrm{\Gamma }(p\rightarrow K^0+\mu ^+)\sim 10^{-32}\mathrm{yr}^{-1},$$ (31) at the same level as the SuperK limits presented at this workshop by L. Sulak. It is unfortunate that these models yield only order-of-magnitude estimates, but it should be clear that those decay rates are tantalizingly close to the experimental bounds. Thus it is important to build a larger proton decay detector and improve the bounds by at least one order of magnitude. ## 5 Outlook Theoretical predictions of neutrino masses and mixings depend on developing a credible theory of flavor. We have presented two flavor schemes, which predicted not only maximal $`\nu _\mu -\nu _\tau `$ mixing, but also small $`\nu _e-\nu _\mu `$ mixings. Neither scheme includes sterile neutrinos . The present experimental situation is somewhat unclear: the LSND results imply the presence of a sterile neutrino; and SuperK favors $`\nu _\mu \rightarrow \nu _\tau `$ oscillation over $`\nu _\mu \rightarrow \nu _{\mathrm{sterile}}`$. The origin of the solar neutrino deficit remains a puzzle, with several possible explanations. One is the non-adiabatic MSW effect in the Sun, which our theoretical ideas seem to favor, but it is an experimental question which is soon to be answered by the continuing monitoring of the $`{}_{}{}^{8}B`$ spectrum by SuperK, and the advent of the SNO detector. 
If neutrino masses reflect (through the seesaw) the value of the ultraviolet cut-off, they set the scale for the strength of proton decay interactions, implying that observation may not be far in the future. Neutrino physics has given us a first glimpse of physics at very short distances, and proton decay cannot be too far behind. ## 6 Acknowledgements I wish to thank Professors C. K. Jung and M. V. Diwan for inviting me to this important and very stimulating workshop. This research was supported in part by the Department of Energy under grant DE-FG02-97ER41029.
no-problem/0001/cond-mat0001077.html
ar5iv
text
# Quantum Theory of the Smectic Metal State in Stripe Phases ## Abstract We present a theory of the electron smectic fixed point of the stripe phases of doped layered Mott insulators. We show that in the presence of a spin gap three phases generally arise: (a) a smectic superconductor, (b) an insulating stripe crystal and (c) a smectic metal. The latter phase is a stable two-dimensional anisotropic non-Fermi liquid. In the abscence of a spin gap there is also a more conventional Fermi-liquid-like phase. The smectic superconductor and smectic metal phases (or glassy versions thereof) may have already been seen in Nd-doped LSCO. In the past few years very strong experimental evidence has been found for static or dynamic charge inhomogeneity in several strongly correlated electronic systems, in particular in high-temperature superconductors , manganites, and quantum Hall systems. In $`d`$-dimensions, the charge degrees of freedom of a doped Mott insulator are confined to an array of self-organized ($`d1`$)-dimensional structures. In $`d=2`$ these structures are linear and are known as stripes. Stripe phases may be insulating or conducting. We have recently proposed that quite generally the quantum mechanical ground states, and the thermodynamic phases which emerge from them, can on the basis of broken symmetries, be characterized as electronic liquid crystal states. Specifically, a conducting stripe ordered phase is an electronic smectic state, while a state with only orientational stripe order (such as is presumably observed in quantum Hall systems) is an electronic nematic state. Here, we use a perturbative renormalization group analysis which is asymptotically exact in the limit of weak inter-stripe coupling, to reexamine the stability of the electronic phases of a stripe ordered system in $`d=2`$ and $`T0`$. The results are summarized in Figs. 1 and 2. In addition to an insulating stripe crystal phase, a variant of a Wigner crystal, we prove that there exist stable smectic phases: 1) An anisotropic smectic metal (non Fermi-liquid) state, which is a new phase of matter. 2) A stripe ordered smectic superconductor. We consider the cases of both spin-gap and spin-$`1/2`$ electrons. One-dimensional correlated electron systems are Luttinger liquids, which are quintessential non-Fermi liquids, and are scale invariant, so that their correlation functions exhibit power law behavior, typically with anomalous exponents. The problem of the stability of arrays of Luttinger liquids has recently been reexamined following a proposal by Anderson that the fermionic excitations of a Luttinger liquid are confined and consequently that inter-chain transport is incoherent. However perturbative studies of the effects of interchain couplings at the decoupled Luttinger liquid fixed point have invariably concluded that such systems always order at low temperatures, or cross over to a higher-dimensional Fermi liquid state, i.e. that the Luttinger behavior is restricted to a high-energy crossover regime. In particular, in the important case in which the interactions within a chain are repulsive, the most divergent susceptibility within a single chain, especially when there is a spin gap, is associated with 2$`k_F`$ or 4$`k_F`$ charge-density wave fluctuations, i.e. the decoupled Luttinger fixed point is typically unstable to two-dimensional crystallization. There is however a loophole in this argument. 
The decoupled Luttinger fixed point is not the most general scale-invariant theory compatible with the symmetries of an electron smectic. In particular, the long-wavelength density-density and/or current-current interactions between neighboring Luttinger liquids are exactly marginal operators, and should be included in the fixed point Hamiltonian (Eq. 2), which we call the generalized smectic non-Fermi liquid fixed point. Our principal results follow from a straightforward analysis of the perturbative stability of this fixed point. To the best of our knowledge, the model presented here is the first explicit example of a system with stable non-Fermi liquid behavior (albeit very anisotropic) in more than one dimension and which exhibits “confinement of coherence”. Sliding phases, which are classical analogs of the smectic metal state in $`3D`$ stacks of coupled $`2D`$ planes with XY, crystalline, or smectic order, have, however, been investigated . The low energy Luttinger liquid behavior of an isolated system of spinless interacting fermions is described by the fixed-point Hamiltonian of a bosonic phase field, $`\varphi (x,\tau )`$, whose dynamics is governed by the Lagrangian density (in imaginary time $`\tau `$) $$=\frac{w}{2}\left[\frac{1}{v}\left(\frac{\varphi }{\tau }\right)^2+v\left(\frac{\varphi }{x}\right)^2\right]$$ (1) where $`w`$ (the inverse of the conventional Luttinger parameter $`K`$) and the velocity of the excitations $`v`$ are non-universal functions of the coupling constants and depend on microscopic details. For repulsive interactions we expect $`w1`$ and, for weak interactions, $`w`$ and $`v`$ are determined by the backward and forward scattering amplitudes $`g_2`$ and $`g_4`$ . Physical observables such as the long wavelength components of the charge density fluctuations $`j_0`$ and the charge current $`j_1`$, are given by the bosonization formula $`j_\mu =\frac{1}{\sqrt{\pi }}ϵ_{\mu \nu }^\nu \varphi `$ where $`ϵ_{\mu \nu }`$ is the Levi-Civita tensor. If both spin and charge are dynamical degrees of freedom, there are two Luttinger parameters ($`K_c`$, $`K_s`$), and two velocities ($`v_c,v_s`$). The one-dimensional correlated electron fluids in the stripe phases of high-temperature superconductors are coupled to an active environment, and so are expected to have gapped spin excitations . As such they are best described as Luttinger liquids in the Luther-Emery regime whose low-energy physics is described by a single Luttinger liquid for charge. The same is true of the stripe states of the 2DEG in magnetic fields, which are (in almost all cases of interest) spin polarized. Now consider a system with $`N`$ stripes, each labeled by an integer $`a=1,\mathrm{},N`$. We will consider first the phase in which there is a spin gap. Here, the spin fluctuations are effectively frozen out at low energies. Nevertheless each stripe $`a`$ has two degrees of freedom: a transverse displacement field which describes the local dynamics of the configuration of each stripe, and the phase field $`\varphi _a`$ for the charge fluctuations on each stripe. The action of the generalized Luttinger liquid which describes the smectic charged fluid of the stripe state is obtained by integrating out the local shape fluctuations associated with the displacement fields. These fluctuations give rise to a finite renormalization of the Luttinger parameter and velocity of each stripe. 
More importantly, the shape fluctuations, combined with the long-wavelength inter-stripe Coulomb interactions, induce inter-stripe density-density and current-current interactions, leading to an imaginary time Lagrangian density of the form $$_{\mathrm{smectic}}=\frac{1}{2}\underset{a,a^{},\mu }{}j_\mu ^a(x)\stackrel{~}{W}_\mu (aa^{})j_\mu ^a^{}(x).$$ (2) These operators are marginal, i.e. have scaling dimension $`2`$, and preserve the smectic symmetry $`\varphi _a\varphi _a+\alpha _a`$ (where $`\alpha _a`$ is constant on each stripe) of the decoupled Luttinger fluids. Whenever this symmetry is exact, the charge-density-wave order parameters of the individual stripes do not lock with each other, and the charge density profiles on each stripe can slide relative to each other without an energy cost. In other words, there is no rigidity to shear deformations of the charge configuration on nearby stripes. This is the smectic metal phase. The fixed point action for a generic smectic metal phase thus has the form (in Fourier space) $`S`$ $`={\displaystyle \underset{Q}{}}{\displaystyle \frac{1}{2}}\left\{W_0(Q)\omega ^2+W_1(Q)k^2\right\}|\varphi (Q)|^2`$ (4) $`={\displaystyle \underset{Q}{}}{\displaystyle \frac{1}{2}}\left\{{\displaystyle \frac{\omega ^2}{W_1(Q)}}+{\displaystyle \frac{k^2}{W_0(Q)}}\right\}|\theta (Q)|^2`$ where $`Q=(\omega ,k,k_{})`$, and $`\theta `$ is the field dual to $`\varphi `$. Here $`k`$ is the momentum along the stripe and $`k_{}`$ perpendicular to the stripes.The kernels $`W_0(Q)`$ and $`W_1(Q)`$ are analytic functions of $`Q`$ whose form depends on microscopic details, e. g. at weak coupling they are functions of the inter-stripe Fourier transforms of the forward and backward scattering amplitudes $`g_2(k_{})`$ and $`g_4(k_{})`$, respectively. Thus, we can characterize the smectic fixed point by an effective (inverse) Luttinger function $`w(k_{})=\sqrt{W_0(k_{})W_1(k_{})}`$ and an effective velocity function $`v(k_{})=\sqrt{W_1(k_{})/W_0(k_{})}`$. In the presence of a spin gap, single electron tunneling is irrelevant, and the only potentially relevant interactions involving pairs of stripes $`a,a^{}`$ are singlet pair (Josephson) tunneling, and the coupling between the CDW order parameters. These interactions have the form $`_{\mathrm{int}}=_n\left(_{\mathrm{SC}}^n+_{\mathrm{CDW}}^n\right)`$ for $`a^{}a=n`$, where $`_{\mathrm{SC}}^n=`$ $`\left({\displaystyle \frac{\mathrm{\Lambda }}{2\pi }}\right)^2{\displaystyle \underset{a}{}}𝒥_n\mathrm{cos}[\sqrt{2\pi }(\theta _a\theta _{a+n})]`$ (5) $`_{\mathrm{CDW}}^n=`$ $`\left({\displaystyle \frac{\mathrm{\Lambda }}{2\pi }}\right)^2{\displaystyle \underset{a}{}}𝒱_n\mathrm{cos}[\sqrt{2\pi }(\varphi _a\varphi _{a+n})].`$ (6) Here $`𝒥_n`$ are the inter-stripe Josephson couplings, $`𝒱_n`$ are the $`2k_F`$ component of the inter-stripe density-density (CDW) interactions, and $`\mathrm{\Lambda }`$ is an ultra-violet cutoff, $`\mathrm{\Lambda }1/a`$ where $`a`$ is a lattice constant. A straightforward calculation, yields the scaling dimensions $`\mathrm{\Delta }_{1,n}\mathrm{\Delta }_{\mathrm{SC},n}`$ and $`\mathrm{\Delta }_{1,n}\mathrm{\Delta }_{\mathrm{CDW},n}`$ of $`_{\mathrm{SC}}^n`$ and $`_{\mathrm{CDW}}^n`$: $`\mathrm{\Delta }_{\pm 1,n}={\displaystyle _\pi ^\pi }{\displaystyle \frac{dk_{}}{2\pi }}\left[\kappa (k_{})\right]^{\pm 1}\left(1\mathrm{cos}nk_{}\right),`$ (7) where $`\kappa (k_{})w(0,0,k_{})`$. 
Since $`\kappa (k_{\perp })`$ is a periodic function of $`k_{\perp }`$ with period $`2\pi `$, $`\kappa (k_{\perp })`$ has a convergent Fourier expansion of the form $`\kappa (k_{\perp })=\sum _n\kappa _n\mathrm{cos}nk_{\perp }`$. We will parametrize the fixed point theory by the coefficients $`\kappa _n`$, which are smooth non-universal functions. In what follows we shall discuss the behavior of the simplified model with $`\kappa (k_{\perp })=\kappa _0+\kappa _1\mathrm{cos}k_{\perp }`$. Here, $`\kappa _0`$ can be thought of as the intra-stripe inverse Luttinger parameter, and $`\kappa _1`$ is a measure of the nearest neighbor inter-stripe coupling. For stability we require $`\kappa _0>\kappa _1`$. Since it is unphysical to consider longer range interactions in $`H_{int}`$ than are present in the fixed point Hamiltonian, we treat only perturbations with $`n=1`$, whose dimensions are $`\mathrm{\Delta }_{\mathrm{SC},1}\equiv \mathrm{\Delta }_{\mathrm{SC}}=\kappa _0-\frac{\kappa _1}{2}`$, and $`\mathrm{\Delta }_{\mathrm{CDW},1}\equiv \mathrm{\Delta }_{\mathrm{CDW}}=2/\left(\kappa _0-\kappa _1+\sqrt{\kappa _0^2-\kappa _1^2}\right)`$. For a more general function $`\kappa (k_{\perp })`$, operators with larger $`n`$ must also be considered, but the results are qualitatively unchanged . In Figure 1 we present the phase diagram of this model. The dark $`AB`$ curve is the set of points where $`\mathrm{\Delta }_{\mathrm{CDW}}=\mathrm{\Delta }_{\mathrm{SC}}`$, and it is a line of first order transitions. To the right of this line the inter-stripe CDW coupling is the most relevant perturbation, indicating an instability of the system to the formation of a 2D stripe crystal. To the left, Josephson tunneling (which still preserves the smectic symmetry) is the most relevant, so this phase is a 2D smectic superconductor. (Here we have neglected the possibility of coexistence since a first order transition seems more likely.) Note that there is a region of $`\kappa _0>1`$, and large enough $`\kappa _1`$, where the global order is superconducting although, in the absence of inter-stripe interactions (which roughly corresponds to $`\kappa _1=0`$), the SC fluctuations are subdominant. There is also a (strong coupling) regime above the curve $`CB`$ where both Josephson tunneling and the CDW coupling are irrelevant at low energies. Thus, in this regime the smectic metal state is stable. This phase is a 2D smectic non-Fermi liquid in which there is coherent transport only along the stripes. The phase transitions from the smectic metal to the $`2D`$ smectic superconductor and the stripe crystal are continuous. The three phase boundaries meet at the bicritical point $`B`$, where $`\kappa _0\approx 4`$ and $`\kappa _1\approx 0.97\kappa _0`$. While the details of the phase diagram are nonuniversal, the basic properties of this model are quite general: the inter-stripe long wavelength density-density coupling rapidly increases the scaling dimension of the inter-stripe CDW coupling while the scaling dimension of the inter-stripe Josephson coupling is less strongly affected. Although for this model the smectic metal has a small region of stability, we expect it to grow for longer range interactions. The transport properties of isolated Luttinger liquids have been studied extensively, and many of these results can be applied in this context. 
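The closed-form dimensions quoted above for the one-harmonic model can be checked by integrating Eq. (7) numerically; the short sketch below does this at an illustrative point $`(\kappa _0,\kappa _1)`$ chosen near the bicritical region, and the same routine can be scanned over couplings to trace the first-order line $`\mathrm{\Delta }_{\mathrm{SC}}=\mathrm{\Delta }_{\mathrm{CDW}}`$ of Figure 1.

```python
# Numerical check of the n = 1 scaling dimensions for kappa(k_perp) = kappa_0 + kappa_1 cos(k_perp):
# integrate Eq. (7) directly and compare with the closed forms quoted above,
# Delta_SC = kappa_0 - kappa_1/2 and Delta_CDW = 2/(kappa_0 - kappa_1 + sqrt(kappa_0^2 - kappa_1^2)).
import numpy as np

def dimensions(kappa0, kappa1, npts=200_000):
    k = np.linspace(-np.pi, np.pi, npts, endpoint=False)   # transverse Brillouin zone
    kappa = kappa0 + kappa1 * np.cos(k)
    weight = 1.0 - np.cos(k)                                # n = 1 perturbation
    # the zone average equals (1/2pi) * integral dk
    return np.mean(kappa * weight), np.mean(weight / kappa)

kappa0, kappa1 = 4.0, 3.9      # illustrative point just inside the smectic-metal region
d_sc, d_cdw = dimensions(kappa0, kappa1)
print(f"Delta_SC : numeric {d_sc:.4f}  closed form {kappa0 - kappa1 / 2:.4f}")
print(f"Delta_CDW: numeric {d_cdw:.4f}  closed form "
      f"{2 / (kappa0 - kappa1 + np.sqrt(kappa0**2 - kappa1**2)):.4f}")
# At this point both dimensions exceed 2, so neither inter-stripe coupling is relevant
# and the smectic metal survives; lowering kappa_1 makes one of them relevant again.
```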
At temperatures well above any ordering transition, we can use perturbation theory about the smectic fixed point in powers of the scaling variables $`X(𝒥/v)(\mathrm{\Lambda }v/T)^{2\mathrm{\Delta }_{\mathrm{SC}}}`$ and $`Y(𝒱/v)(\mathrm{\Lambda }v/T)^{2\mathrm{\Delta }_{\mathrm{CDW}}}`$, and for weak disorder, we can similarly employ perturbation theory in powers of the backscattering interaction, $`V_{\mathrm{back}}`$. (Electron-phonon coupling produces results similar to those of disorder, although with a temperature dependent effective $`V_{\mathrm{back}}`$.) However, because $`\sigma _{xx}`$ and $`\sigma _{xy}`$ are highly singular in the limit $`V_{\mathrm{back}}0`$ (when the system is Galilean invariant along the stripes), we must resum the naive perturbation expansion of the Kubo formula to obtain perturbative expressions for the component of the resistivity tensor along a stripe $`\rho _{xx}`$, the Hall resistance $`\rho _{xy}`$, and the conductivity transverse to the stripe, $`\sigma _{yy}`$. As is well known, $`\rho _{xx}=0`$ for $`V_{\mathrm{back}}=0`$, and develops a calculable power-law temperature dependence which, to leading order in $`V_{\mathrm{back}}`$ is $`\rho _{xx}={\displaystyle \frac{\mathrm{}}{e^2n_sv}}{\displaystyle \frac{|V_{\mathrm{back}}|^2}{T^2}}\left({\displaystyle \frac{T}{v\mathrm{\Lambda }}}\right)^{\overline{\mathrm{\Delta }}_{\mathrm{CDW}}}f_{xx}(X^2,Y^2)+\mathrm{},`$ (8) where $`f_{xx}(X,Y)`$ is a scaling function and $`f_{xx}(0,0)1`$. Here, $`n_s`$ is the density of stripes, and $`\overline{\mathrm{\Delta }}_{\mathrm{CDW}}\mathrm{\Delta }_{\mathrm{CDW},\mathrm{}}`$ is the dimension of the CDW order parameter. Whether the inter-stripe Josephson coupling, $`𝒥`$, is irrelevant or relevant, so long as the temperature is not too low, the component of the conductivity tensor transverse to the stripe direction can be obtained from a perturbative evaluation of the Kubo formula to lowest order in powers of the leading coupling $`𝒥`$. Combining this result with a simple scaling analysis we find (to zeroth order in $`V_{\mathrm{back}}`$) $$\sigma _{yy}=\frac{e^2}{h}n_sb^2\mathrm{\Lambda }\left(\frac{𝒥}{v}\right)^2\left(\frac{T}{\mathrm{\Lambda }v}\right)^{2\mathrm{\Delta }_{SC}3}f_{yy}(X^2,Y^2),$$ (9) where $`b`$ is the spacing between stripes, $`f_{yy}`$ is a scaling function and $`f_{yy}(0,0)1`$. An interesting aspect of this expression is that, in the perturbative (high-temperature) regime, the temperature derivative of $`\sigma _{yy}`$ changes from positive to negative at a critical value of $`\mathrm{\Delta }_{\mathrm{SC}}=3/2`$, whereas the actual superconductor to (CDW) insulator transition occurs somewhere in the range $`1<\mathrm{\Delta }_{\mathrm{SC}}<2`$, depending on the value of $`\kappa _0/\kappa _1`$. For a system with Galilean invariance along the stripes $`\sigma _{xy}=n^{\mathrm{eff}}ec/B`$, and, to leading order in $`V_{\mathrm{back}}`$, $$\rho _{xy}=B/n^{\mathrm{eff}}ec+\mathrm{}$$ (10) The physics governing $`n_{\mathrm{eff}}`$ is rather subtle - neglecting irrelevant couplings, the fixed point Hamiltonian is actually particle-hole symmetric, which implies $`\rho _{xy}=0`$. Thus $`n^{\mathrm{eff}}`$ is determined by the leading irrelevant couplings which break particle-hole symmetry, terms of the form $`(_x\varphi )^3`$ and $`(_x\theta )^2_x\varphi `$. Generically, $`1/n^{\mathrm{eff}}`$ approaches a non-zero constant value at low temperatures. However, in special cases (e.g. 
the quarter-filled Hubbard chain in the infinite $`U`$ limit) where there is an effective “particle-hole symmetry” at low energy, $`\rho _{xy}`$ will vanish as a power of $`T`$. Let us now discuss what happens if both charge and spin excitations are gapless on the stripes. We now have two Luttinger fluids on each stripe, for charge and spin respectively, represented by the fields $`\varphi _c`$ and $`\varphi _s`$. $`SU(2)`$ spin invariance requires $`K_s=1`$ whereas $`K_c=K`$ as in the spin gap case. Here we will discuss a system in which there is only a coupling of the charge densities between neighboring stripes and no exchange coupling. Since both spin and charge are gapless, electron tunneling has to be considered in addition to CDW coupling and Josephson tunneling. The dimensions of the most relevant CDW and Josephson interactions in the gapless spin case are $`\mathrm{\Delta }_{\mathrm{CDW}}=1+\mathrm{\Delta }_{\mathrm{CDW}}^{(\mathrm{Gap})}`$, and $`\mathrm{\Delta }_{\mathrm{SC}}=1+\mathrm{\Delta }_{\mathrm{SC}}^{(\mathrm{Gap})}`$, where $`\mathrm{\Delta }_{\mathrm{CDW}}^{(\mathrm{Gap})}`$ and $`\mathrm{\Delta }_{\mathrm{SC}}^{(\mathrm{Gap})}`$ are their dimensions in the spin gap case, Eq. (5). The dimension of the nearest-neighbor single electron tunneling operator is $`\mathrm{\Delta }_e=\frac{1}{4}\left(\mathrm{\Delta }_{\mathrm{SC}}^{(\mathrm{Gap})}+\mathrm{\Delta }_{\mathrm{CDW}}^{(\mathrm{Gap})}+2\right)`$. It is also easy to check that the dimensions of the $`2k_F`$ charge density wave (CDW) and spin density wave (SDW) operators satisfy $`\mathrm{\Delta }_{\mathrm{CDW}}=\mathrm{\Delta }_{\mathrm{SDW}}`$. Similarly, the triplet and singlet superconductor couplings have the same dimension. We can now derive the phase diagram for the spin gapless case, shown in Figure 2. There is a large region of the phase diagram in which the electron tunneling operator is relevant, shown in Figure 2 as the region below the curve $`ABC`$ (defined by the marginality condition $`\mathrm{\Delta }_{e,1}=2`$). In this regime the system initially flows towards a 2D Fermi liquid fixed point, which will itself exhibit a BCS instability in the presence of residual attractive interactions ($`\kappa _0<1`$). For stronger inter-stripe couplings the system crystallizes, and there are also strong coupling smectic metal (non-Fermi liquid) and superconducting phases. The non-Fermi liquid smectic metal phase is a remarkable state of matter. Because inter-stripe tunneling of any type is irrelevant, the transport across the stripes is incoherent, whereas transport is coherent (and large) inside each stripe. Recently, evidence of the existence of a “metallic” stripe ordered state, which we identify as such a smectic, has been observed in $`\mathrm{La}_{1.4-x}\mathrm{Nd}_{0.6}\mathrm{Sr}_x\mathrm{CuO}_4`$: Glassy stripe order has been confirmed by neutron and X-ray scattering studies; the in-plane transport remains metallic (with at most a logarithmic increase) down to low temperatures while the inter-plane resistivity (which is perpendicular to the stripes) appears to diverge as $`T\rightarrow 0`$. On the same system photoemission experiments have found strong evidence for one-dimensional electronic structure. Strikingly, Noda et al. have found that for $`x\le 1/8`$, $`\rho _{xy}`$ vanishes (roughly linearly) as $`T\rightarrow 0`$, while for $`x>1/8`$, although $`\rho _{xy}`$ still decreases strongly at low temperatures, it appears to approach a finite value. This behavior was taken by Noda et 
al. to indicate a crossover from one- to two-dimensional metallic conduction at $`x=1/8`$. We propose, instead, that the system is a smectic for a range of $`x`$, and that the crossover indicates that the stripes are nearly quarter filled, and have an approximate particle-hole symmetry for $`x<1/8`$, while particle-hole symmetry is broken for $`x>1/8`$. Finally, the present results suggest the existence of a smectic metal state of the 2DEG in large magnetic fields, a result conjectured previously by us and by Fertig, although microscopic calculations still yield conflicting conclusions. We thank S. Bacci, D. Barci, H. Esaki, M. P. A. Fisher, and Z. X. Shen for useful discussions. EF and SAK are grateful to S. C. Zhang and the Dept. of Physics of Stanford University for their hospitality. This work was supported in part by the NSF, grants DMR98-08685 (SAK), DMR98-17941 (EF), DMR97-30405 (TCL), and by DMS, USDOE contract DE-AC02-76CH00016 (VJE).
no-problem/0001/hep-lat0001019.html
ar5iv
text
# Fast methods for computing the Neuberger Operator ## 1 Introduction Quantum Chromodynamics (QCD) is a theory of strong interactions, where the chiral symmetry plays a major role. There are different starting points to formulate a lattice theory with exact chiral symmetry, but all of them must obey the Ginsparg-Wilson condition : $$\gamma _5D^{-1}+D^{-1}\gamma _5=a\gamma _5\alpha ^{-1},$$ (1) where $`a`$ is the lattice spacing, $`D`$ is the lattice Dirac operator and $`\alpha ^{-1}`$ is a local operator and trivial in the Dirac space. A candidate is the overlap operator of Neuberger : $$D=1-A(A^{\dagger }A)^{-1/2},\qquad A=M-aD_W$$ (2) where $`M`$ is a shift parameter in the range $`(0,2)`$, which I have fixed at one, and $`D_W`$ is the Wilson-Dirac operator, $$D_W=\frac{1}{2}\sum _\mu [\gamma _\mu (\nabla _\mu ^{*}+\nabla _\mu )-a\nabla _\mu ^{*}\nabla _\mu ]$$ (3) and $`\nabla _\mu `$ and $`\nabla _\mu ^{*}`$ are the nearest-neighbor forward and backward difference operators, which are covariant, i.e. the shift operators pick up a unitary 3 by 3 matrix with determinant one. These small matrices are associated with the links of the lattice and are oriented positively. A set of such matrices forms a “configuration”. $`\gamma _\mu ,\mu =1,\ldots ,5`$ are 4 by 4 matrices related to the spin of the particle. Therefore, if there are $`N`$ lattice points, the matrix is of order $`12N`$. A residual symmetry of the matrix $`A`$ that comes from the continuum is the so-called $`\gamma _5`$-symmetry, which is the Hermiticity of the $`\gamma _5A`$ operator. The computation of the inverse square root of a matrix is reviewed in . In the context of lattice QCD there are several sparse matrix methods which have been developed recently . I will focus here on a Lanczos method similar to . For a more general case of functions of matrices I refer to the talk of H. van der Vorst, and for a Chebyshev method I refer to the talk of K. Jansen, both included in these proceedings. ## 2 The Lanczos Algorithm The Lanczos iteration is known to approximate the spectrum of the underlying matrix in an optimal way and, in particular, it can be used to solve linear systems . Let $`Q_n=[q_1,\ldots ,q_n]`$ be the set of orthonormal vectors, such that $$A^{\dagger }AQ_n=Q_nT_n+\beta _nq_{n+1}(e_n^{(n)})^T,\qquad q_1=\rho _1b,\qquad \rho _1=1/\|b\|_2$$ (4) where $`T_n`$ is a tridiagonal and symmetric matrix, $`b`$ is an arbitrary vector, and $`\beta _n`$ a real and positive constant. $`e_m^{(n)}`$ denotes the unit vector with $`n`$ elements in the direction $`m`$. By writing down the above decomposition in terms of the vectors $`q_i,i=1,\ldots ,n`$ and the matrix elements of $`T_n`$, I arrive at a three term recurrence that allows one to compute these vectors in increasing order, starting from the vector $`q_1`$. This is the Lanczos Algorithm: $$\begin{array}{c}\beta _0=0,\;\rho _1=1/\|b\|_2,\;q_0=o,\;q_1=\rho _1b\hfill \\ \text{for }i=1,\ldots \hfill \\ v=A^{\dagger }Aq_i\hfill \\ \alpha _i=q_i^{\dagger }v\hfill \\ v:=v-q_i\alpha _i-q_{i-1}\beta _{i-1}\hfill \\ \beta _i=\|v\|_2\hfill \\ \text{if }\beta _i<tol,\;n=i,\;\text{end for}\hfill \\ q_{i+1}=v/\beta _i\hfill \end{array}$$ (5) where $`tol`$ is a tolerance which serves as a stopping condition. The Lanczos Algorithm constructs a basis for the Krylov subspace : $$\text{span}\{b,A^{\dagger }Ab,\ldots ,(A^{\dagger }A)^{n-1}b\}$$ (6) If the Algorithm stops after $`n`$ steps, one says that the associated Krylov subspace is invariant. In floating point arithmetic, there is a danger that once the Lanczos Algorithm (polynomial) has approximated well some part of the spectrum, the iteration reproduces vectors which are rich in that direction . 
As a consequence, the orthogonality of the Lanczos vectors is spoiled, with an immediate impact on the history of the iteration: whereas the algorithm would stop after $`n`$ steps in exact arithmetic, in the presence of round-off errors the loss of orthogonality keeps it going. ## 3 The Lanczos Algorithm for solving $`A^\dagger Ax=b`$ Here I will use this algorithm to solve linear systems, where the loss of orthogonality will not play a role in the sense that I will use a different stopping condition. I seek the solution in the form $$x=Q_ny_n$$ (7) By projecting the original system onto the Krylov subspace I get: $$Q_n^\dagger A^\dagger Ax=Q_n^\dagger b$$ (8) By construction, I have $$b=Q_ne_1^{(n)}/\rho _1,$$ (9) Substituting $`x=Q_ny_n`$ and using (4), my task is now to solve the system $$T_ny_n=e_1^{(n)}/\rho _1$$ (10) Therefore the solution is given by $$x=Q_nT_n^{-1}e_1^{(n)}/\rho _1$$ (11) This way, using the Lanczos iteration, one reduces the size of the matrix to be inverted. Moreover, since $`T_n`$ is tridiagonal, one can compute $`y_n`$ by short recurrences. If I define: $$r_i=b-A^\dagger Ax_i,q_i=\rho _ir_i,y_i=\rho _ix_i$$ (12) where $`i=1,\dots `$, it is easy to show that $$\begin{array}{c}\rho _{i+1}\beta _i+\rho _i\alpha _i+\rho _{i-1}\beta _{i-1}=0\hfill \\ q_i+y_{i+1}\beta _i+y_i\alpha _i+y_{i-1}\beta _{i-1}=0\hfill \end{array}$$ (13) Therefore the solution can be updated recursively and I have the following Algorithm1 for solving the system $`A^\dagger Ax=b`$: $$\begin{array}{c}\beta _0=0,\rho _1=1/\|b\|_2,q_0=o,q_1=\rho _1b\hfill \\ \text{for }i=1,\dots \hfill \\ v=A^\dagger Aq_i\hfill \\ \alpha _i=q_i^\dagger v\hfill \\ v:=v-q_i\alpha _i-q_{i-1}\beta _{i-1}\hfill \\ \beta _i=\|v\|_2\hfill \\ q_{i+1}=v/\beta _i\hfill \\ y_{i+1}=-\frac{q_i+y_i\alpha _i+y_{i-1}\beta _{i-1}}{\beta _i}\hfill \\ \rho _{i+1}=-\frac{\rho _i\alpha _i+\rho _{i-1}\beta _{i-1}}{\beta _i}\hfill \\ r_{i+1}:=q_{i+1}/\rho _{i+1}\hfill \\ x_{i+1}:=y_{i+1}/\rho _{i+1}\hfill \\ \text{if }\frac{1}{|\rho _{i+1}|}<tol,n=i,\text{end for}\hfill \end{array}$$ (14) ## 4 The Lanczos Algorithm for solving $`(A^\dagger A)^{1/2}x=b`$ Now I would like to compute $`x=(A^\dagger A)^{-1/2}b`$ and still use the Lanczos Algorithm. In order to do so I make the following observations: Let $`(A^\dagger A)^{-1/2}`$ be expressed by a matrix-valued function, for example the integral formula : $$(A^\dagger A)^{-1/2}=\frac{2}{\pi }\int _0^{\infty }dt(t^2+A^\dagger A)^{-1}$$ (15) From the previous section, I use the Lanczos Algorithm to compute $$(A^\dagger A)^{-1}b=Q_nT_n^{-1}e_1^{(n)}/\rho _1$$ (16) It is easy to show that the Lanczos Algorithm is shift-invariant, i.e. if the matrix $`A^\dagger A`$ is shifted by a constant, say $`t^2`$, the Lanczos vectors remain invariant. Moreover, the corresponding Lanczos matrix is shifted by the same amount. This property allows one to solve the system $`(t^2+A^\dagger A)x=b`$ by using the same Lanczos iteration as before. Since the matrix $`(t^2+A^\dagger A)`$ is better conditioned than $`A^\dagger A`$, it can be concluded that once the original system is solved, the shifted one is solved too. Therefore I have: $$(t^2+A^\dagger A)^{-1}b=Q_n(t^2+T_n)^{-1}e_1^{(n)}/\rho _1$$ (17) Using the above integral formula and putting everything together, I get: $$x=(A^\dagger A)^{-1/2}b=Q_nT_n^{-1/2}e_1^{(n)}/\rho _1$$ (18) There are some remarks to be made here: a) As before, by applying the Lanczos iteration on $`A^\dagger A`$, the problem of computing $`(A^\dagger A)^{-1/2}b`$ reduces to the problem of computing $`y_n=T_n^{-1/2}e_1^{(n)}/\rho _1`$, which is typically a much smaller problem than the original one.
But since $`T_n^{-1/2}`$ is full, $`y_n`$ cannot be computed by short recurrences. It can be computed, for example, by using the full decomposition of $`T_n`$ into its eigenvalues and eigenvectors; in fact this is the method I have employed too, for its compactness and the small overhead for moderate $`n`$. b) The method is not optimal, as it would have been had one applied it directly to the matrix $`(A^\dagger A)^{1/2}`$. By using $`A^\dagger A`$ the condition number is squared, and one loses a factor of two compared to the theoretical case! c) From the derivation above, it can be concluded that the system $`(A^\dagger A)^{1/2}x=b`$ is solved at the same time as the system $`A^\dagger Ax=b`$. d) To implement the result (18), I first construct the Lanczos matrix and then compute $$y_n=T_n^{-1/2}e_1^{(n)}/\rho _1$$ (19) To compute $`x=Q_ny_n`$, I repeat the Lanczos iteration. I save the scalar products, though it is not necessary. Therefore I have the following Algorithm2 for solving the system $`(A^\dagger A)^{1/2}x=b`$: $$\begin{array}{c}\beta _0=0,\rho _1=1/\|b\|_2,q_0=o,q_1=\rho _1b\hfill \\ \text{for }i=1,\dots \hfill \\ v=A^\dagger Aq_i\hfill \\ \alpha _i=q_i^\dagger v\hfill \\ v:=v-q_i\alpha _i-q_{i-1}\beta _{i-1}\hfill \\ \beta _i=\|v\|_2\hfill \\ q_{i+1}=v/\beta _i\hfill \\ \rho _{i+1}=-\frac{\rho _i\alpha _i+\rho _{i-1}\beta _{i-1}}{\beta _i}\hfill \\ \text{if }\frac{1}{|\rho _{i+1}|}<tol,n=i,\text{end for}\hfill \\ \\ \text{Set }(T_n)_{i,i}=\alpha _i,(T_n)_{i+1,i}=(T_n)_{i,i+1}=\beta _i,\text{otherwise }(T_n)_{i,j}=0\hfill \\ y_n=T_n^{-1/2}e_1^{(n)}/\rho _1=U_n\mathrm{\Lambda }_n^{-1/2}U_n^Te_1^{(n)}/\rho _1\hfill \\ \\ q_0=o,q_1=\rho _1b,x_0=o\hfill \\ \text{for }i=1,\dots ,n\hfill \\ x_i=x_{i-1}+q_iy_n^{(i)}\hfill \\ v=A^\dagger Aq_i\hfill \\ v:=v-q_i\alpha _i-q_{i-1}\beta _{i-1}\hfill \\ q_{i+1}=v/\beta _i\hfill \end{array}$$ (20) where by $`o`$ I denote a vector with zero entries and $`U_n,\mathrm{\Lambda }_n`$ are the matrices of the eigenvectors and eigenvalues of $`T_n`$. Note that there are only four large vectors necessary to store: $`q_{i-1},q_i,v,x_i`$. ## 5 Testing the method I propose a simple test: I solve the system $`A^\dagger Ax=b`$ by applying Algorithm2 twice, i.e. I solve the linear systems $$(A^\dagger A)^{1/2}z=b,(A^\dagger A)^{1/2}x=z$$ (21) in the above order. For each approximation $`x_i`$, I compute the residual vector $$r_i=b-A^\dagger Ax_i$$ (22) The method is tested for an SU(3) configuration at $`\beta =6.0`$ on an $`8^3\times 16`$ lattice, corresponding to a complex matrix $`A`$ of order $`98304`$. In Fig. 1 I show the norm of the residual vector decreasing monotonically. The stagnation of $`\|r_i\|_2`$ for small values of $`tol`$ may come from the accumulation of round off error in the $`64`$-bit precision arithmetic used here. This example shows that the tolerance line is above the residual norm line, which confirms the expectation that $`tol`$ is a good stopping condition for Algorithm2. ## 6 Inversion Having computed the operator, one can invert it by applying iterative methods based on the Lanczos algorithm. Since the operator $`D`$ is normal, it turns out that the Conjugate Residual (CR) algorithm is the optimal one . In Fig. 2 I show the convergence history of CR on $`30`$ small $`4^4`$ lattices at $`\beta =6`$. The large number of multiplications with $`D_W`$ suggests that the inversion of the Neuberger operator is a difficult task and may bring the complexity of quenched simulations in lattice QCD to the same order of magnitude as dynamical simulations with Wilson fermions. Therefore, other ideas are needed.
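Before turning to those alternatives, a sketch of Algorithm2 and of the two-pass test of Section 5 may be useful. It reuses the `lanczos` helper sketched earlier, forms $`T_n^{-1/2}`$ through the eigendecomposition of $`T_n`$, and checks the residual of $`A^\dagger Ax=b`$ after two applications. The small dense random matrix is only a stand-in for the lattice operator, and a plain $`\beta _i`$-based stopping rule is used inside the helper instead of the $`1/|\rho _{i+1}|`$ test of (20).

```python
import numpy as np

def inv_sqrt_apply(matvec, b, nmax=150, tol=1e-10):
    """Sketch of Algorithm2: x = (A^dagger A)^{-1/2} b, using the eigendecomposition
    T_n = U Lambda U^T of the Lanczos matrix to form T_n^{-1/2}."""
    Q, T = lanczos(matvec, b, nmax, tol)        # helper sketched above
    rho1 = 1.0 / np.linalg.norm(b)
    evals, U = np.linalg.eigh(T)
    e1 = np.zeros(T.shape[0])
    e1[0] = 1.0 / rho1
    y = U @ ((U.T @ e1) / np.sqrt(evals))       # y_n = T_n^{-1/2} e_1 / rho_1
    return Q @ y                                # x   = Q_n y_n

# Two-pass test in the spirit of Section 5: applying the inverse square root twice
# should solve (A^dagger A) x = b, so the residual r = b - A^dagger A x must be small.
rng = np.random.default_rng(0)
M = rng.standard_normal((60, 60)) + 1j * rng.standard_normal((60, 60))
AdA = M.conj().T @ M + 0.5 * np.eye(60)         # toy Hermitian positive-definite operator
matvec = lambda v: AdA @ v
b = rng.standard_normal(60) + 1j * rng.standard_normal(60)

z = inv_sqrt_apply(matvec, b)
x = inv_sqrt_apply(matvec, z)
print(np.linalg.norm(b - AdA @ x) / np.linalg.norm(b))
```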
The essential point is the large number of small eigenvalues of $`A`$ that make the computation of $`D`$ time consuming. Therefore, one may try to project out these modes and invert them directly . Also, one may try $`5`$-dimensional implementations of the Neuberger operator, such that its condition improves . I have also tried to reformulate the theory in 5 dimensions by using the corresponding approximate inversion as a coarse grid solution in a multigrid scheme . The scheme is tested and the results are shown in Fig. 2, where the multigrid pattern of the residual norm is clear. The gain with respect to CR is about a factor $`10`$. Note that to invert the “big” matrix I have used the BiCGstab2 algorithm, which is almost optimal in most cases for non-normal matrices, as is the matrix here . ## 7 Acknowledgement The author would like to thank the organizers of this Workshop for the kind hospitality at Wuppertal.
no-problem/0001/hep-ph0001006.html
ar5iv
text
# Neutrino Masses and Lepton-Quark Symmetries ## 1 Neutrino Story The Neutrino Story starts with the experiment of O. Von Bayer, O. Hahn, and L. Meitner who measured the spectrum of electrons in $`\beta `$ radioactivity, and found it to be discrete! In 1914, Chadwick , then in Geiger’s laboratory in Berlin came to the correct conclusion of a continuous electron spectrum, but was interned for the duration of the Great War. After much controversy, the issue was settled in 1927 by C.D. Ellis and W. A. Wooster , who found the mean energy liberated in $`\beta `$ decay accounted to be only $`1/3`$ of the allowed energy. The stage was set for W. Pauli’s famous 1930 letter. In December of that year, in a letter that starts with typical panache, $`\mathrm{`}\mathrm{`}`$Dear Radioactive Ladies and Gentlemen…”, W. Pauli puts forward a “desperate” way out: there is a companion particle to the $`\beta `$ electron. Undetected, it must be electrically neutral, and in order to balance the $`NLi^6`$ statistics, it carries spin $`1/2`$. He calls it the neutron, but sees no reason why it could not be massive. In 1933, E. Fermi in his $`\beta `$ decay paper gave it its final name, the little neutron or neutrino, as it is clearly much lighter than Chadwick’s neutron which had just been discovered. In 1945, B. Pontecorvo proposes the unthinkable: neutrinos can be detected, through the following observation: an electron neutrino that hits a $`{}_{}{}^{37}Cl`$ atom will transform it into the inert radioactive gas $`{}_{}{}^{37}Ar`$, which then can be stored and be detected through its radioactive decay. Pontecorvo did not publish the report, perhaps because of its secret classification, or because Fermi thought the idea ingenious but not immediately achievable. In 1954, Davis follows up on Pontecorvo’s original proposal, by setting a tank of cleaning fluid outside a nuclear reactor. In 1956, using a scintillation counter experiment they had proposed three years earlier , Cowan and Reines discover electron antineutrinos through the reaction $`\overline{\nu }_e+pe^++n`$. Cowan passed away before 1995, the year Fred Reines was awarded the Nobel Prize for their discovery. There emerge two lessons in neutrino physics: not only is patience required but also longevity: it took $`26`$ years from birth to detection and then another $`39`$ for the Nobel Committee to recognize the achievement! This should encourage future physicists to train their children at the earliest age to follow their footsteps, in order to establish dynasties of neutrino physicists. In 1956, it was rumored that Davis had found evidence for neutrinos coming from a pile, and Pontecorvo , influenced by the recent work of Gell-Mann and Pais, theorized that an antineutrino produced in the Savannah reactor could oscillate into a neutrino and be detected by Davis. The rumor went away, but the idea of neutrino oscillations was born; it has remained with us ever since, and proven the most potent tool in hunting for neutrino masses. Having detected neutrinos, there remained to determine its spin and mass. Its helicity was measured in 1958 by M. Goldhaber , but convincing evidence for its mass had, until SuperK’s bombshell, eluded experimentalists. After the 1957 Lee and Yang proposal of parity violation, the neutrino is again at the center of the action. 
Unlike the charged elementary particles which have both left- and right-handed components, weakly interacting neutrinos are purely left-handed (antineutrinos are right-handed), which means that lepton-number is chiral. The second neutrino, the muon neutrino, is detected in 1962 (long anticipated by theorists Inouë and Sakata in 1943 ). This time things went a bit faster as it took only 19 years from theory (1943) to discovery (1962) and 26 years to Nobel recognition (1988). That same year, Maki, Nakagawa and Sakata introduce two crucial ideas; one is that these two neutrinos can mix, and the second is that this mixing can cause one type of neutrino to oscillate into the other (called today flavor oscillation). This is possible only if the two neutrino flavors have different masses. In 1963, the Astrophysics group at Caltech, Bahcall, Fowler, Iben and Sears, puts forward the most accurate calculation of neutrino fluxes from the Sun. Their calculations included the all-important boron decay spectrum, which produces neutrinos with the right energy range for the Chlorine experiment. In 1964, using Bahcall’s result of an enhanced capture rate of $`{}_{}{}^{8}B`$ neutrinos through an excited state of $`{}_{}{}^{37}Ar`$, Davis proposes to search for $`{}_{}{}^{8}B`$ solar neutrinos using a $`100,000`$ gallon tank of cleaning fluid deep underground. Soon after, R. Davis starts his epochal experiment at the Homestake mine, marking the beginning of the solar neutrino watch which continues to this day. In 1968, Davis et al. reported a deficit in the solar neutrino flux, a result that has withstood scrutiny to this day, and stands as a truly remarkable experimental tour de force. Shortly after, Gribov and Pontecorvo interpreted the deficit as evidence for neutrino oscillations. In the early 1970’s, with the idea of quark-lepton symmetries comes the idea that the proton could be unstable. This brings about the construction of underground (to avoid contamination from cosmic-ray by-products) detectors, large enough to monitor many protons, and instrumented to detect the Čerenkov light emitted by their decay products. By the mid-1980’s, several such detectors are in place. They fail to detect proton decay, but in a serendipitous turn of events, 150,000 years earlier, a supernova erupted in the Large Magellanic Cloud, and in 1987, its burst of neutrinos was detected in these detectors! All of a sudden, proton decay detectors turn their attention to neutrinos, and to this day are still waiting for their protons to decay! As we all know, these detectors routinely monitor neutrinos from the Sun, as well as neutrinos produced by cosmic ray collisions. ## 2 Standard Model Neutrinos The standard model of electro-weak and strong interactions contains three left-handed neutrinos. The three neutrinos are represented by two-component Weyl spinors, $`\nu _i`$, $`i=e,\mu ,\tau `$, each describing a left-handed fermion (right-handed antifermion). As the upper components of weak isodoublets $`L_i`$, they have $`I_{3W}=1/2`$, and a unit of the global $`i`$th lepton number. These standard model neutrinos are strictly massless. The only Lorentz scalar made out of these neutrinos is the Majorana mass, of the form $`\nu _i^t\nu _j`$; it has the quantum numbers of a weak isotriplet, with third component $`I_{3W}=1`$, as well as two units of total lepton number.
A Higgs isotriplet with two units of lepton number could generate neutrino Majorana masses, but there is no such Higgs in the Standard Model: there are no tree-level neutrino masses in the standard model. Quantum corrections, however, are not limited to renormalizable couplings, and it is easy to make a weak isotriplet out of two isodoublets, yielding the $`SU(2)\times U(1)`$ invariant $`L_i^t\vec{\tau }L_jH^t\vec{\tau }H`$, where $`H`$ is the Higgs doublet. As this term is not invariant under lepton number, it is not generated in perturbation theory. Thus the important conclusion: the standard model neutrinos are kept massless by global chiral lepton number symmetry. The detection of non-zero neutrino masses is therefore a tangible indication of physics beyond the standard model. ## 3 Neutrino Mass Models Neutrinos must be extraordinarily light: experiments indicate $`m_{\nu _e}<10\mathrm{eV}`$, $`m_{\nu _\mu }<170\mathrm{keV}`$, $`m_{\nu _\tau }<18\mathrm{MeV}`$ , and any model of neutrino masses must explain this suppression. We do not discuss generating neutrino masses without new fermions, by breaking lepton number through interactions of lepton-number-carrying Higgs fields. The natural way to generate neutrino masses is to introduce for each one its electroweak singlet Dirac partner, $`\overline{N}_i`$. These appear naturally in the Grand Unified group $`SO(10)`$, where they complete each family into its spinor representation. Neutrino Dirac masses stem from the couplings $`L_i\overline{N}_jH`$ after electroweak breaking. Unfortunately, these Yukawa couplings yield masses which are too big, of the same order of magnitude as the masses of the charged elementary particles, $`m`$ $`(\mathrm{\Delta }I_w=1/2)`$. The situation is remedied by introducing Majorana mass terms $`\overline{N}_i\overline{N}_j`$ for the right-handed neutrinos. The masses of these new degrees of freedom are arbitrary, as they have no electroweak quantum numbers, $`M`$ $`(\mathrm{\Delta }I_w=0)`$. If they are much larger than the electroweak scale, the neutrino masses are suppressed relative to those of their charged counterparts by the ratio of the electroweak scale to that new scale: the mass matrix (in $`3\times 3`$ block form) is $$\left(\begin{array}{cc}0& m\\ m& M\end{array}\right),$$ (1) leading, for each family, to one small and one large eigenvalue $$m_\nu \simeq m\frac{m}{M}\sim \left(\mathrm{\Delta }I_w=\frac{1}{2}\right)\left(\frac{\mathrm{\Delta }I_w=\frac{1}{2}}{\mathrm{\Delta }I_w=0}\right).$$ (2) This seesaw mechanism provides a natural explanation for small neutrino masses as long as lepton number is broken at a large scale $`M`$. With $`M`$ around the energy at which the gauge couplings unify, this yields neutrino masses at or below tenths of eVs, consistent with the SuperK results. The lepton flavor mixing comes from the diagonalization of the charged lepton Yukawa couplings, and of the neutrino mass matrix. From the charged lepton Yukawas, we obtain $`𝒰_e`$, the unitary matrix that rotates the lepton doublets $`L_i`$. From the neutrino Majorana matrix, we obtain $`𝒰_\nu `$, the matrix that diagonalizes the Majorana mass matrix. The $`6\times 6`$ seesaw Majorana matrix can be written in $`3\times 3`$ block form $$=𝒱_\nu ^t𝒟𝒱_\nu \left(\begin{array}{cc}𝒰_{\nu \nu }& ϵ𝒰_{\nu N}\\ ϵ𝒰_{N\nu }^t& 𝒰_{NN}\end{array}\right),$$ (3) where $`ϵ`$ is the tiny ratio of the electroweak to lepton number violating scales, and $`𝒟=\mathrm{diag}(ϵ^2𝒟_\nu ,𝒟_N)`$ is a diagonal matrix.
$`𝒟_\nu `$ contains the three neutrino masses, and $`ϵ^2`$ is the seesaw suppression. The weak charged current is then given by $$j_\mu ^+=e_i^\dagger \sigma _\mu 𝒰_{MNS}^{ij}\nu _j,$$ (4) where $$𝒰_{MNS}=𝒰_e𝒰_\nu ^\dagger ,$$ (5) is the Maki-Nakagawa-Sakata (MNS) flavor mixing matrix, the analog of the CKM matrix in the quark sector. In the seesaw-augmented standard model, this mixing matrix is totally arbitrary. It contains, as does the CKM matrix, three rotation angles, and one CP-violating phase. In the seesaw scenario, it also contains two additional CP-violating phases which cannot be absorbed in a redefinition of the neutrino fields, because of their Majorana masses (these extra phases can be measured only in $`\mathrm{\Delta }L=2`$ processes). These additional parameters of the seesaw-augmented standard model need to be determined by experiment. ## 4 Theories Theoretical predictions of lepton hierarchies and mixings depend very much on hitherto untested theoretical assumptions. In the quark sector, where the bulk of the experimental data resides, the theoretical origin of quark hierarchies and mixings is a mystery; although there exist many theories, none is so convincing as to offer a definitive answer to the community’s satisfaction. It is therefore no surprise that there are more theories of lepton masses and mixings than there are parameters to be measured. Nevertheless, one can present the issues as questions: * Do the right-handed neutrinos have quantum numbers beyond the standard model? * Are quarks and leptons related by grand unified theories? * Are quarks and leptons related by anomalies? * Are there family symmetries for quarks and leptons? The measured numerical value of the neutrino mass difference (barring any fortuitous degeneracies) suggests, through the seesaw mechanism, a mass for the right-handed neutrinos that is consistent with the scale at which the gauge couplings unify. Is this just a numerical coincidence, or should we view this as a hint for grand unification? Grand Unified Theories, originally proposed as a way to treat leptons and quarks on the same footing, imply symmetries much larger than the standard model’s. Implementation of these ideas necessitates a desert and supersymmetry, but also a carefully designed contingent of Higgs particles to achieve the desired symmetry breaking. That such models can be built is perhaps more of a testimony to the cleverness of theorists than to Nature’s. Indeed, with the advent of string theory, we know that the best features of grand unified theories can be preserved, as most of the symmetry breaking is achieved by geometric compactification from higher dimensions . An alternative point of view is that the vanishing of chiral anomalies is necessary for consistent theories, and their cancellation is most easily achieved by assembling matter in representations of anomaly-free groups. Perhaps anomaly cancellation is more important than group structure. Below, we present two theoretical frameworks of our work, in which one deduces the lepton mixing parameters and masses. One is ancient ; it uses the standard techniques of grand unification, but it had the virtue of predicting the large $`\nu _\mu \nu _\tau `$ mixing observed by SuperKamiokande. The other is more recent, and uses extra Abelian family symmetries to explain both quark and lepton hierarchies. It also predicts large $`\nu _\mu \nu _\tau `$ mixing. Both schemes imply small $`\nu _e\nu _\mu `$ mixings.
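As a numerical illustration of the seesaw suppression of Section 3 and of the definition (5), the sketch below diagonalizes a one-family seesaw matrix and forms a toy MNS matrix from two assumed rotations. The mass values and angles are placeholders, not fits to data, and the hierarchy is deliberately compressed so that double precision resolves the light eigenvalue (the physical $`M`$ would sit near the unification scale).

```python
import numpy as np

# One-family seesaw: Dirac mass m from electroweak breaking, Majorana mass M
# for the singlet N.  Illustrative numbers with a compressed hierarchy.
m, M = 100.0, 1.0e8                       # GeV (placeholders)
mass_matrix = np.array([[0.0, m],
                        [m,   M]])
light, heavy = sorted(abs(np.linalg.eigvalsh(mass_matrix)))
print(light, m**2 / M)                    # light eigenvalue is ~ m^2 / M
print(heavy, M)                           # heavy eigenvalue is ~ M

# MNS matrix from the two diagonalizations, as in eq. (5): U_MNS = U_e U_nu^dagger.
# The rotations below are toy stand-ins for the charged-lepton and neutrino sectors.
def rot(theta, i, j, n=3):
    R = np.eye(n)
    R[i, i] = R[j, j] = np.cos(theta)
    R[i, j], R[j, i] = np.sin(theta), -np.sin(theta)
    return R

U_e  = rot(0.05, 0, 1)                    # small rotation in the charged-lepton sector
U_nu = rot(np.pi / 4, 1, 2)               # near-maximal 2-3 rotation in the neutrino sector
U_MNS = U_e @ U_nu.conj().T
print(np.round(U_MNS, 3))
```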
### 4.1 A Grand Unified Model The seesaw mechanism was born in the context of the grand unified group $`SO(10)`$, which naturally contains electroweak neutral right-handed neutrinos. Each standard model family is contained in two irreducible representations of $`SU(5)`$. However, the predictions of this theory for Yukawa couplings is not so clear cut, and to reproduce the known quark and charged lepton hierarchies, a special but simple set of Higgs particles had to be included. In the simple scheme proposed by Georgi and Jarlskog , the ratios between the charged leptons and quark masses is reproduced, albeit not naturally since two Yukawa couplings, not fixed by group theory, had to be set equal. This motivated us to generalize their scheme to $`SO(10)`$, where their scheme was (technically) natural, which meant that we had an automatic window into neutrino masses through the seesaw. The Yukawa couplings were of the form $$[A\mathrm{𝟏𝟔}_1\mathrm{𝟏𝟔}_2+B\mathrm{𝟏𝟔}_3\mathrm{𝟏𝟔}_3]\mathrm{𝟏𝟐𝟔}_1+[a\mathrm{𝟏𝟔}_1\mathrm{𝟏𝟔}_2+b\mathrm{𝟏𝟔}_3\mathrm{𝟏𝟔}_3](\mathrm{𝟏𝟎}_1+i\mathrm{𝟏𝟎}_2)$$ (6) $$+c\mathrm{𝟏𝟔}_2\mathrm{𝟏𝟔}_2\overline{\mathrm{𝟏𝟐𝟔}}_2+d\mathrm{𝟏𝟔}_2\mathrm{𝟏𝟔}_3\overline{\mathrm{𝟏𝟐𝟔}}_3.$$ (7) This is of course Higgs-heavy, but the attitude at the time was “damn the Higgs torpedoes, and see what happens”. This assignment was “technically” natural, enforced by two discrete symmetries. A modern treatment would include non-renormalizable operators , rather than introducing the $`\mathrm{𝟏𝟐𝟔}`$ representations, which spoil asymptotic freedom. The Higgs vacuum values produced the resultant masses $$m_b=m_\tau ;m_dm_s=m_em_\mu ;m_dm_s=3(m_em_\mu ).$$ (8) and mixing angles $$V_{us}=\mathrm{tan}\theta _c=\sqrt{\frac{m_d}{m_s}};V_{cb}=\sqrt{\frac{m_c}{m_t}}.$$ (9) While reproducing the well-known lepton and quark mass hierarchies, it predicted a long-lived $`b`$ quark, contrary to the lore of the time. It also made predictions in the lepton sector, namely maximal $`\nu _\tau \nu _\mu `$ mixing, small $`\nu _e\nu _\mu `$ mixing of the order of $`(m_e/m_\mu )^{1/2}`$, and no $`\nu _e\nu _\tau `$ mixing. The neutral lepton masses came out to be hierarchical, but heavily dependent on the masses of the right-handed neutrinos. The electron neutrino mass came out much lighter than those of $`\nu _\mu `$ and $`\nu _\tau `$. Their numerical values depended on the top quark mass, which was then supposed to be in the tens of GeVs! Given the present knowledge, some of the features are remarkable, such as the long-lived $`b`$ quark and the maximal $`\nu _\tau \nu _\mu `$ mixing. On the other hand, the actual numerical value of the $`b`$ lifetime was off a bit,and the $`\nu _e\nu _\mu `$ mixing was too large to reproduce the small angle MSW solution of the solar neutrino problem. The lesson should be that the simplest $`SO(10)`$ model that fits the observed quark and charged lepton hierarchies, reproduces, at least qualitatively, the maximal mixing found by SuperK, and predicts small mixing with the electron neutrino . ### 4.2 A Non-grand-unified Model There is another way to generate hierarchies, based on adding extra family symmetries to the standard model, without invoking grand unification. These types of models address only the Cabibbo suppression of the Yukawa couplings, and are not as predictive as specific grand unified models. Still, they predict no Cabibbo suppression between the muon and tau neutrinos. Below, we present a pre-SuperK model with those features. 
The Cabibbo suppression is assumed to be an indication of extra family symmetries in the standard model. The idea is that any standard model-invariant operator, such as $`𝐐_i\overline{𝐝}_jH_d`$, cannot be present at tree-level if there are additional symmetries under which the operator is not invariant. The simplest possibility is an Abelian symmetry, with an electroweak singlet field $`\theta `$ as its order parameter. Then the interaction $$𝐐_i\overline{𝐝}_jH_d\left(\frac{\theta }{M}\right)^{n_{ij}}$$ (10) can appear in the potential as long as the family charges balance under the new symmetry. As $`\theta `$ acquires a $`vev`$, this leads to a suppression of the Yukawa couplings of the order of $`\lambda ^{n_{ij}}`$ for each matrix element, with $`\lambda =\theta /M`$ identified with the Cabibbo angle, and $`M`$ is the natural cut-off of the effective low energy theory. As a consequence of the charge balance equation $$X_{ij}^{[d]}+n_{ij}X_\theta =0,$$ (11) the exponents of the suppression are related to the charge of the standard model-invariant operator , the sum of the charges of the fields that make up the invariant. This simple Ansatz, together with the seesaw mechanism, implies that the family structure of the neutrino mass matrix is determined by the charges of the left-handed lepton doublet fields. Each neutrino Yukawa coupling $`L_i\overline{N}_jH_u`$ has an extra charge $`X_{L_i}+X_{N_j}+X_H`$, which gives the Cabibbo suppression of the $`ij`$ matrix element. Hence, the orders of magnitude of these couplings can be expressed as $$\left(\begin{array}{ccc}\lambda ^{l_1}& 0& 0\\ 0& \lambda ^{l_2}& 0\\ 0& 0& \lambda ^{l_3}\end{array}\right)\widehat{Y}\left(\begin{array}{ccc}\lambda ^{p_1}& 0& 0\\ 0& \lambda ^{p_2}& 0\\ 0& 0& \lambda ^{p_3}\end{array}\right),$$ (12) where $`\widehat{Y}`$ is a Yukawa matrix with no Cabibbo suppressions, $`l_i=X_{L_i}/X_\theta `$ are the charges of the left-handed doublets, and $`p_i=X_{N_i}/X_\theta `$, those of the singlets. The first matrix forms half of the MNS matrix. Similarly, the mass matrix for the right-handed neutrinos, $`\overline{N}_i\overline{N}_j`$, will be written in the form $$\left(\begin{array}{ccc}\lambda ^{p_1}& 0& 0\\ 0& \lambda ^{p_2}& 0\\ 0& 0& \lambda ^{p_3}\end{array}\right)\left(\begin{array}{ccc}\lambda ^{p_1}& 0& 0\\ 0& \lambda ^{p_2}& 0\\ 0& 0& \lambda ^{p_3}\end{array}\right).$$ (13) The diagonalization of the seesaw matrix is of the form $$L_iH_u\overline{N}_j\left(\frac{1}{\overline{N}\overline{N}}\right)_{jk}\overline{N}_kH_uL_l,$$ (14) from which the Cabibbo suppression matrix from the $`\overline{N}_i`$ fields cancels, leaving us with $$\left(\begin{array}{ccc}\lambda ^{l_1}& 0& 0\\ 0& \lambda ^{l_2}& 0\\ 0& 0& \lambda ^{l_3}\end{array}\right)\widehat{}\left(\begin{array}{ccc}\lambda ^{l_1}& 0& 0\\ 0& \lambda ^{l_2}& 0\\ 0& 0& \lambda ^{l_3}\end{array}\right),$$ (15) where $`\widehat{}`$ is a matrix with no Cabibbo suppressions. The Cabibbo structure of the seesaw neutrino matrix is determined solely by the charges of the lepton doublets! As a result, the Cabibbo structure of the MNS mixing matrix is also due entirely to the charges of the three lepton doublets. This general conclusion depends on the existence of at least one Abelian family symmetry, which we argue is implied by the observed structure in the quark sector.
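A small numerical sketch makes this cancellation explicit. The doublet and singlet charges below are illustrative, not the paper's actual assignment; the doublet charges are chosen only so that their differences (3, 3, 0) reproduce the $`\lambda ^3`$ pattern quoted further on. The $`\lambda ^{p_i}`$ factors then drop out of the seesaw combination, leaving a texture fixed by the doublet charges alone.

```python
import numpy as np

lam = 0.22                                     # Cabibbo-sized expansion parameter
l = np.array([4, 1, 1])                        # illustrative doublet charges (differences 3, 3, 0)
p = np.array([3, 1, 0])                        # illustrative singlet (N-bar) charges

rng = np.random.default_rng(1)
Yhat = rng.uniform(0.5, 2.0, (3, 3))           # O(1) coefficients in front of the powers of lambda
Mhat = rng.uniform(0.5, 2.0, (3, 3))
Mhat = 0.5 * (Mhat + Mhat.T)                   # Majorana coefficient matrix is symmetric

Dirac    = np.diag(lam**l) @ Yhat @ np.diag(lam**p)     # structure of eq. (12)
Majorana = np.diag(lam**p) @ Mhat @ np.diag(lam**p)     # structure of eq. (13)

# Seesaw combination of eq. (14): the lam**p factors cancel between the Dirac
# couplings and the inverse Majorana matrix, as in eq. (15).
seesaw   = Dirac @ np.linalg.inv(Majorana) @ Dirac.T
expected = np.abs(np.diag(lam**l) @ (Yhat @ np.linalg.inv(Mhat) @ Yhat.T) @ np.diag(lam**l))
print(np.round(np.abs(seesaw) / expected, 6))  # ratios are 1 up to roundoff: no p dependence
```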
The Wolfenstein parametrization of the CKM matrix , $$\left(\begin{array}{ccc}1& \lambda & \lambda ^3\\ \lambda & 1& \lambda ^2\\ \lambda ^3& \lambda ^2& 1\end{array}\right),$$ (16) and the Cabibbo structure of the quark mass ratios $$\frac{m_u}{m_t}\sim \lambda ^8,\frac{m_c}{m_t}\sim \lambda ^4;\frac{m_d}{m_b}\sim \lambda ^4,\frac{m_s}{m_b}\sim \lambda ^2,$$ (17) can be reproduced by a simple family-traceless charge assignment for the three quark families, namely $$X_{𝐐,\overline{𝐮},\overline{𝐝}}=(2,1,1)+\eta _{𝐐,\overline{𝐮},\overline{𝐝}}(1,0,1),$$ (18) where $``$ is baryon number, $`\eta _{\overline{𝐝}}=0`$, and $`\eta _𝐐=\eta _{\overline{𝐮}}=2`$. Two striking facts are evident: * the charges of the down quarks, $`\overline{𝐝}`$, associated with the second and third families are the same, * $`𝐐`$ and $`\overline{𝐮}`$ have the same value for $`\eta `$. To relate these quark charge assignments to those of the leptons, we need to inject some more theoretical prejudices. Assume these family-traceless charges are gauged, and not anomalous. Then to cancel anomalies, the leptons must themselves have family charges. Anomaly cancellation generically implies group structure. In $`SO(10)`$, baryon number generalizes to $``$, where $``$ is total lepton number, and in $`SU(5)`$ the fermion assignment is $`\overline{\mathrm{𝟓}}=\overline{𝐝}+L`$, and $`\mathrm{𝟏𝟎}=𝐐+\overline{𝐮}+\overline{e}`$. Thus anomaly cancellation is easily achieved by assigning $`\eta =0`$ to the lepton doublet $`L_i`$, and $`\eta =2`$ to the electron singlet $`\overline{e}_i`$, and by generalizing baryon number to $``$, leading to the charges $$X_{𝐐,\overline{𝐮},\overline{𝐝},L,\overline{e}}=()(2,1,1)+\eta _{𝐐,\overline{𝐮},\overline{𝐝}}(1,0,1),$$ (19) where now $`\eta _{\overline{𝐝}}=\eta _L=0`$, and $`\eta _𝐐=\eta _{\overline{𝐮}}=\eta _{\overline{e}}=2`$. It is interesting to note that $`\eta `$ is at least in $`E_6`$. The origin of such charges is not clear, as it implies, in the superstring context, rather unconventional compactification. As a result, the charges of the lepton doublets are simply $`X_{L_i}=(2,1,1)`$. We have just argued that these charges determine the Cabibbo structure of the MNS lepton mixing matrix to be $$𝒰_{MNS}\left(\begin{array}{ccc}1& \lambda ^3& \lambda ^3\\ \lambda ^3& 1& 1\\ \lambda ^3& 1& 1\end{array}\right),$$ (20) implying no Cabibbo suppression in the mixing between $`\nu _\mu `$ and $`\nu _\tau `$. This is consistent with the SuperK discovery and with the small angle MSW solution to the solar neutrino deficit. One also obtains a much lighter electron neutrino, and Cabibbo-comparable masses for the muon and tau neutrinos. Notice that these predictions are subtly different from those of grand unification, as they yield $`\nu _e\nu _\tau `$ mixing. On the other hand, the scale of the neutrino mass values depends on the family trace of the family charge(s). Here we simply quote the results of our model . The masses of the right-handed neutrinos are found to be of the following orders of magnitude $$m_{\overline{N}_e}\sim M\lambda ^{13};m_{\overline{N}_\mu }\sim m_{\overline{N}_\tau }\sim M\lambda ^7,$$ (21) where $`M`$ is the scale of the right-handed neutrino mass terms, assumed to be the cut-off.
The seesaw mass matrix for the three light neutrinos comes out to be $$m_0\left(\begin{array}{ccc}a\lambda ^6& b\lambda ^3& c\lambda ^3\\ b\lambda ^3& d& e\\ c\lambda ^3& e& f\end{array}\right),$$ (22) where we have added for future reference the prefactors $`a,b,c,d,e,f`$, all of order one, and $$m_0=\frac{v_u^2}{M\lambda ^3},$$ (23) where $`v_u`$ is the $`vev`$ of the Higgs doublet. This matrix has one light eigenvalue $$m_{\nu _e}\sim m_0\lambda ^6.$$ (24) Without a detailed analysis of the prefactors, the masses of the other two neutrinos come out to be both of order $`m_0`$. The mass difference announced by SuperK cannot be reproduced without going beyond the model, by taking into account the prefactors. The two heavier mass eigenstates and their mixing angle are written in terms of $$x=\frac{df-e^2}{(d+f)^2},y=\frac{d-f}{d+f},$$ (25) as $$\frac{m_{\nu _2}}{m_{\nu _3}}=\frac{1-\sqrt{1-4x}}{1+\sqrt{1-4x}},\mathrm{sin}^22\theta _{\mu \tau }=1-\frac{y^2}{1-4x}.$$ (26) If $`4x\simeq 1`$, the two heaviest neutrinos are nearly degenerate. If $`4x<1`$, a condition easy to achieve if $`d`$ and $`f`$ have the same sign, we can obtain an adequate split between the two mass eigenstates. For illustrative purposes, when $`0.03<x<0.15`$, we find $$4.4\times 10^{-6}\lesssim \mathrm{\Delta }m_{\nu _e\nu _\mu }^2\lesssim 10^{-5}\mathrm{eV}^2,$$ (27) which yields the correct non-adiabatic MSW effect, and $$5\times 10^{-4}\lesssim \mathrm{\Delta }m_{\nu _\mu \nu _\tau }^2\lesssim 5\times 10^{-3}\mathrm{eV}^2,$$ (28) for the atmospheric neutrino effect. These were calculated with a cut-off, $`10^{16}\mathrm{GeV}<M<4\times 10^{17}\mathrm{GeV}`$, and a mixing angle, $`0.9<\mathrm{sin}^22\theta _{\mu \tau }<1`$. This value of the cut-off is compatible not only with the data but also with the gauge coupling unification scale, a necessary condition for the consistency of our model, and more generally for the basic ideas of Grand Unification. ## 5 Outlook Exact predictions of neutrino masses and mixings depend on developing a credible theory of flavor. In the absence of such a theory, we have presented two schemes, which predicted not only maximal $`\nu _\mu \nu _\tau `$ mixing, but also small $`\nu _e\nu _\mu `$ mixings. Neither scheme includes sterile neutrinos. The present experimental situation is somewhat unclear: the LSND results imply the presence of a sterile neutrino; at this conference we heard that SuperK favors $`\nu _\mu \nu _\tau `$ oscillation over $`\nu _\mu \nu _{\mathrm{sterile}}`$, and the origin of the solar neutrino deficit remains a puzzle with several possible explanations. One is the non-adiabatic MSW effect in the Sun, which our theoretical ideas seem to favor. However, it is an experimental question which is soon to be answered by the continuing monitoring of the $`{}_{}{}^{8}B`$ spectrum by SuperK, and the advent of the SNO detector. Neutrino physics is at an exciting stage, and experimentally vibrant, as upcoming measurements will help us test our basic ideas about fundamental interactions. ## 6 Acknowledgments I would like to thank Professors G. Domokos and S. Kövesi-Domokos for their usual superb hospitality and the high scientific quality of this workshop. This research was supported in part by the Department of Energy under grant DE-FG02-97ER41029.
no-problem/0001/astro-ph0001099.html
ar5iv
text
# Biased Estimates of Ω from Comparing Smoothed Predicted Velocity Fields to Unsmoothed Peculiar Velocity Measurements ## 1 Introduction One of the most popular approaches to constraining the mass density parameter $`\mathrm{\Omega }`$, the ratio of the average matter density to the critical density, is based on comparisons between the galaxy density field mapped by redshift surveys and the galaxy peculiar velocity field inferred from distance-indicator surveys (see the review by Strauss & Willick (1995)). While the numerous implementations of this approach differ in many details, they are all motivated by the linear theory formula for the peculiar velocity field, $$𝐯(𝐱)=\frac{H_0f(\mathrm{\Omega })}{4\pi }\int \delta (𝐱^{})\frac{(𝐱^{}-𝐱)}{|𝐱^{}-𝐱|^3}d^3x^{},$$ (1) or its divergence $$\vec{\nabla }𝐯(𝐱)=-a_0H_0f(\mathrm{\Omega })\delta (𝐱),$$ (2) where $`\delta (𝐱)\equiv \rho (𝐱)/\overline{\rho }-1`$ is the mass density contrast, $`f(\mathrm{\Omega })\simeq \mathrm{\Omega }^{0.6}`$, $`H_0`$ is the Hubble parameter, and $`a_0`$ is the present value of the expansion factor (Peebles (1980)).<sup>1</sup><sup>1</sup>1Because galaxy distances are inferred from their redshifts via Hubble’s law, uncertainties in $`H_0`$ and $`a_0`$ do not introduce any uncertainty in peculiar velocity predictions; if one adopts km s<sup>-1</sup> distance units in place of Mpc, then $`H_0`$ and $`a_0`$ do not appear in equation (1) or (2). “Velocity-velocity” comparisons start from the observed galaxy density field, predict peculiar velocities via equation (1) or some non-linear generalization of it, and compare to estimated peculiar velocities (e.g., Kaiser et al. (1991); Strauss & Willick (1995); Davis, Nusser, & Willick (1996); Willick et al. (1997), 1998; Blakeslee et al. (1999)). “Density-density” comparisons start from the observed radial peculiar velocity field, infer the 3-dimensional velocity field using the POTENT method of Bertschinger & Dekel (1989), and compare the velocity divergence to the observed galaxy density field using equation (2) or a non-linear generalization of it (e.g., Dekel et al. (1993); Hudson et al. (1995); Sigad et al. (1998); Dekel et al. (1999)). Because the radial velocity field must be smoothed before computing the 3-dimensional velocity field via POTENT, density-density comparisons in practice always compare the smoothed galaxy density field to predictions derived from the smoothed peculiar velocity field. Velocity-velocity comparisons, on the other hand, usually smooth the galaxy density field to suppress non-linear effects and shot noise, but compare the velocity predictions from these smoothed density fields directly to the estimated peculiar velocities of individual galaxies or groups. (The spherical harmonic analysis of Davis et al. 1996 is an important exception in this regard.) The avoidance of smoothing the data is often seen as an advantage of the velocity-velocity approach, since smoothing a noisy estimated velocity field can introduce statistical biases that are difficult to remove. However, in this paper we show that comparing smoothed velocity predictions to unsmoothed velocity measurements generally leads to biased estimates of $`f(\mathrm{\Omega })`$, even when the galaxy positions and velocities are known perfectly.
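For orientation, the linear-theory prediction of equations (1)-(2) is straightforward to evaluate on a periodic grid with FFTs. The sketch below, with illustrative argument names, also shows where a Gaussian smoothing of the density field enters, which is the operation whose consequences this paper examines; it is a minimal illustration, not the analysis pipeline used here.

```python
import numpy as np

def linear_velocity(delta, box_size, f_omega, H0=100.0, R_s=None):
    """Linear-theory peculiar velocity field of eq. (1)/(2):
    v_k = i H0 f(Omega) delta_k k / k^2, optionally computed from a density
    field smoothed with a Gaussian window exp(-k^2 R_s^2 / 2).
    `delta` is a periodic density-contrast grid; velocities come out in km/s
    if H0 is in km/s/Mpc and box_size, R_s are in Mpc (or h^-1 Mpc with H0=100)."""
    n = delta.shape[0]
    kfreq = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                          # avoid division by zero; mode zeroed below
    delta_k = np.fft.fftn(delta)
    if R_s is not None:
        delta_k *= np.exp(-0.5 * k2 * R_s**2)  # Gaussian smoothing in k-space
    factor = 1j * H0 * f_omega * delta_k / k2
    factor[0, 0, 0] = 0.0                      # no mean flow from the k = 0 mode
    v = [np.fft.ifftn(factor * ki).real for ki in (kx, ky, kz)]
    return np.stack(v, axis=-1)                # shape (n, n, n, 3): one velocity per cell
```

Interpolating such a field to galaxy positions and fitting the slope of the true-versus-predicted velocity relation is the comparison whose bias is quantified below.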
The reason for this bias is fairly simple: the errors in the predicted velocities are correlated with the predicted velocities themselves, violating the conventional assumption that an individual galaxy’s velocity can be modeled as a “large scale” contribution predicted from the smoothed density field plus an uncorrelated “small scale” contribution. Galaxy redshift surveys map the galaxy density field $`\delta _g(𝐱)`$ rather than the mass density field $`\delta (𝐱)`$, so inferences from velocity-velocity and density-density comparisons often assume a linear relation between galaxy and mass density contrasts, $`\delta _g(𝐱)=b\delta (𝐱)`$, and therefore constrain the quantity $`\beta f(\mathrm{\Omega })/b`$ rather than $`f(\mathrm{\Omega })`$ itself. The results reported in this paper emerged from a more general investigation of the effects of complex galaxy formation models on estimates of $`\beta `$ (Berlind, Narayanan & Weinberg 1999; Berlind, Narayanan & Weinberg, in preparation ). However, the statistical bias in $`f(\mathrm{\Omega })`$ that we find applies even when galaxies trace mass exactly, so here we focus on this simpler case. (Throughout this paper we use the term “bias” to refer to systematic statistical errors rather than the relation between the distributions of galaxies and mass.) We further restrict our investigation to the case in which galaxy positions and velocities are known perfectly, ignoring the additional complications that arise in analyses of observational data. ## 2 Results We have carried out N-body simulations of three different cosmological models, all based on inflation and cold dark matter (CDM). The first is an $`\mathrm{\Omega }=1`$, $`h=0.5`$ model ($`hH_0/100\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1`$), with a tilted power spectrum of density fluctuations designed to satisfy both COBE and cluster normalization constraints. The cluster constraint requires $`\sigma _80.55`$ (White, Efstathiou & Frenk (1993)), where $`\sigma _8`$ is the rms linear density fluctuation in spheres of radius $`8h^1`$Mpc. Matching the COBE-DMR constraint and $`\sigma _8=0.55`$ with $`h=0.5`$ requires an inflationary spectral index $`n=0.803`$ if one incorporates the standard inflationary prediction for gravitational wave contributions to the COBE anisotropies (see Cole et al. (1997) and references therein). The other two models have $`\mathrm{\Omega }=0.2`$ and $`0.4`$, with a power spectrum shape parameter $`\mathrm{\Gamma }=0.25`$ (in the parameterization of Efstathiou, Bond & White (1992)) and cluster-normalized fluctuation amplitude $`\sigma _8=0.55\mathrm{\Omega }^{0.6}`$. We ran four independent simulations for each of the three cosmological models, and the results we show below are averaged over these four simulations. All simulations were run with a particle-mesh (PM) N-body code written by C. Park, which is described and tested by Park (1990). Each simulation uses a $`400^3`$ force mesh to follow the gravitational evolution of $`200^3`$ particles in a periodic cube $`400h^1`$Mpc on a side, starting at $`z=23`$ and advancing to $`z=0`$ in 46 steps of equal expansion factor $`a`$. We form the mass density field by cloud-in-cell (CIC) binning the evolved mass distribution onto a $`200^3`$ grid. We smooth this density field with a Gaussian filter of radius $`R_s`$ and derive the linear-theory predicted velocity field using equation (1). 
Finally, we linearly interpolate this velocity field to the galaxy positions to derive predicted galaxy peculiar velocities $`𝐯_{\mathrm{pred}}`$. Figure 1 compares the true velocities of particles ($`𝐯_{\mathrm{true}}`$) from one of the $`\mathrm{\Omega }=1`$ simulations to the velocities predicted ($`𝐯_{\mathrm{pred}}`$) by equation (1) from the mass density field smoothed with Gaussian filters of radius $`R_s=3,5,10,`$ and $`15h^1\mathrm{Mpc}`$ (panels a-d, respectively). The points in Figure 1 show one Cartesian component of the particles’ velocities. If we make the assumption, common to most velocity-velocity comparison schemes, that each galaxy’s velocity consists of a large scale contribution predicted from the density field plus an uncorrelated small scale contribution, then the best-fit slope of the $`𝐯_{\mathrm{true}}𝐯_{\mathrm{pred}}`$ relation should yield the parameter $`f(\mathrm{\Omega })`$, in this case $`f(\mathrm{\Omega })=1`$, with the scatter about this line yielding the dispersion of the small scale contribution. However, it is clear from Figure 1 that this slope increases systematically with increasing $`R_s`$. (We note that the best-fit line, which minimizes $`|𝐯_{\mathrm{true}}𝐯_{\mathrm{pred}}|^2`$, is shallower than the line one would naively draw through these data points by eye, since it is vertical scatter rather than perpendicular scatter that must be minimized.) The filled points in Figure 2 show the estimated $`f(\mathrm{\Omega })`$ as a function of $`R_s`$ for the $`\mathrm{\Omega }=1`$ (circles) and $`\mathrm{\Omega }=0.2`$ (squares) cosmological models. The solid lines show the true value of $`f(\mathrm{\Omega })`$. In both cases, the estimated value of $`f(\mathrm{\Omega })`$ is quite sensitive to the smoothing scale: it is slightly underestimated at small scales, but increasingly overestimated at large scales. The $`\mathrm{\Omega }=0.4`$ model yields similar results, so we do not plot it separately. We also investigated $`\mathrm{\Omega }=1`$ simulations with a factor of two lower force resolution ($`200^3`$ force mesh instead of $`400^3`$) and found identical results, so even at small smoothing scales our results are not affected by the simulations’ limited gravitational resolution. The breakdown of linear theory at small scales is not surprising; however, the systematic failure of this method at large smoothing scales has not, to our knowledge, been previously discussed. The dependence of the estimated $`f(\mathrm{\Omega })`$ on the smoothing scale used for velocity predictions is our principal result. We can understand the origin of the large scale bias in $`f(\mathrm{\Omega })`$ by considering the case in which galaxy peculiar velocities are given exactly by linear theory. In this case, $$𝐯_{\mathrm{true}}(𝐱)=(2\pi )^{3/2}H_0f(\mathrm{\Omega })e^{i𝐤𝐱}\frac{i\delta _𝐤𝐤}{|𝐤|^2}𝑑𝐤,$$ (3) where $`\delta _𝐤`$ are the Fourier modes of the density field and the integral extends over all of $`𝐤`$-space. Predicted velocities, however, are estimated from the density field smoothed with a window function $`W(r)`$ of characteristic scale $`R_s`$. Therefore, $$𝐯_{\mathrm{pred}}(𝐱)=(2\pi )^{3/2}H_0f(\mathrm{\Omega })\stackrel{~}{W}(kR_s)e^{i𝐤𝐱}\frac{i\delta _𝐤𝐤}{|𝐤|^2}𝑑𝐤,$$ (4) where $`\stackrel{~}{W}(kR_s)`$ is the Fourier transform of the window function. 
The error in the predicted velocity of a galaxy at position $`𝐱`$ is therefore, $$\mathrm{\Delta }𝐯(𝐱)=𝐯_{\mathrm{true}}𝐯_{\mathrm{pred}}=(2\pi )^{3/2}H_0f(\mathrm{\Omega })[1\stackrel{~}{W}(kR_s)]e^{i𝐤𝐱}\frac{i\delta _𝐤𝐤}{|𝐤|^2}𝑑𝐤.$$ (5) Note that in equation (4) we have defined $`𝐯_{\mathrm{pred}}`$ to be the velocity that would be predicted assuming the correct value of $`\mathrm{\Omega }`$. In practice, since we do not know the value of $`f(\mathrm{\Omega })`$ beforehand, we derive its value from the slope of the $`𝐯_{\mathrm{true}}`$ vs. $`f^1𝐯_{\mathrm{pred}}`$ relation (this is equivalent to assuming $`\mathrm{\Omega }=1`$ when computing $`𝐯_{\mathrm{pred}}`$). If $`\mathrm{\Delta }𝐯`$ were uncorrelated with $`𝐯_{\mathrm{pred}}`$, then the slope of the $`𝐯_{\mathrm{true}}`$ vs. $`f^1𝐯_{\mathrm{pred}}`$ relation would be an unbiased estimator of $`f(\mathrm{\Omega })`$. However, if $`\mathrm{\Delta }𝐯`$ is positively correlated with $`𝐯_{\mathrm{pred}}`$, then the slope of the relation is no longer $`f(\mathrm{\Omega })`$, since points preferentially scatter above the line for positive $`𝐯_{\mathrm{pred}}`$ and below the line for negative $`𝐯_{\mathrm{pred}}`$. This steepening of the $`𝐯_{\mathrm{true}}𝐯_{\mathrm{pred}}`$ relation is just the behavior seen in Figure 1. Equations (4) and (5) show that $`\mathrm{\Delta }𝐯`$ and $`𝐯_{\mathrm{pred}}`$ will be correlated as long as some Fourier modes contribute to both integrals, which happens for any smoothing function other than a step function in $`𝐤`$-space. We can quantitatively understand this bias by considering how $`f(\mathrm{\Omega })`$ is measured. For an ensemble of $`N`$ points ($`𝐯_{\mathrm{true},i}`$, $`f^1𝐯_{\mathrm{pred},i}`$), the slope of the best-fit line (assuming $`𝐯_{\mathrm{true}}=𝐯_{\mathrm{pred}}=0`$) is $`\mathrm{slope}`$ $`=`$ $`{\displaystyle \frac{(f^1𝐯_{\mathrm{true},i}𝐯_{\mathrm{pred},i})}{(f^2𝐯_{\mathrm{pred},i}𝐯_{\mathrm{pred},i})}}`$ (6) $`=`$ $`f(\mathrm{\Omega }){\displaystyle \frac{\frac{1}{N}[(𝐯_{\mathrm{true},i}𝐯_{\mathrm{pred},i})𝐯_{\mathrm{pred},i}+(𝐯_{\mathrm{pred},i}𝐯_{\mathrm{pred},i})]}{\frac{1}{N}(𝐯_{\mathrm{pred},i}𝐯_{\mathrm{pred},i})}}`$ $`=`$ $`f(\mathrm{\Omega })\left[1+{\displaystyle \frac{\mathrm{\Delta }𝐯𝐯_{\mathrm{pred}}}{𝐯_{\mathrm{pred}}𝐯_{\mathrm{pred}}}}\right].`$ Equation (6) shows how a non-zero cross-correlation between $`\mathrm{\Delta }𝐯`$ and $`𝐯_{\mathrm{pred}}`$ changes the measured slope of the velocity-velocity relation. We can compute this effect in the linear regime for a given power spectrum of density fluctuations $`P(k)`$ and window function $`\stackrel{~}{W}(kR_s)`$. Using equations (4) and (5) we have $`{\displaystyle \frac{\mathrm{\Delta }𝐯𝐯_{\mathrm{pred}}}{𝐯_{\mathrm{pred}}𝐯_{\mathrm{pred}}}}`$ $`=`$ $`{\displaystyle \frac{_0^{\mathrm{}}\stackrel{~}{W}(kR_s)[1\stackrel{~}{W}(kR_s)]P(k)𝑑k}{_0^{\mathrm{}}\stackrel{~}{W}^2(kR_s)P(k)𝑑k}}.`$ (7) For Gaussian and top hat window functions and a range of CDM power spectra, we find that the bias given by equation (7) is always positive and is always an increasing function of $`R_s`$. The dashed lines in Figure 2 show the slope computed (from eqs. 6 and 7) using the linear mass power spectra of the simulations and the same Gaussian window functions that were used to measure $`f(\mathrm{\Omega })`$ (solid points). 
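Equation (7) is easy to evaluate numerically. The sketch below does so for a Gaussian window and a toy CDM-like power spectrum (a BBKS-style transfer function with the shape parameter set to 0.25 and arbitrary normalization, which drops out of the ratio); the dashed curves in Figure 2 use the actual linear spectra of the simulations, which are not reproduced here.

```python
import numpy as np

def slope_bias(R_s, P, kmin=1e-4, kmax=1e2, npts=4096):
    """Linear-theory bias of eq. (7) for a Gaussian window W(kR_s) = exp(-k^2 R_s^2 / 2).
    Returns <dv . v_pred> / <v_pred . v_pred>, so the fitted slope of eq. (6)
    is f(Omega) times (1 + returned value)."""
    k = np.logspace(np.log10(kmin), np.log10(kmax), npts)
    trapz = lambda y: np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(k))  # simple trapezoid rule
    W = np.exp(-0.5 * (k * R_s) ** 2)
    return trapz(W * (1.0 - W) * P(k)) / trapz(W**2 * P(k))

def P_cdm(k, gamma=0.25):
    """Toy CDM-like linear spectrum (BBKS-style transfer function, n = 1,
    arbitrary normalization); a stand-in for the simulations' actual spectra."""
    q = k / gamma
    T = (np.log(1.0 + 2.34 * q) / (2.34 * q) *
         (1.0 + 3.89 * q + (16.1 * q)**2 + (5.46 * q)**3 + (6.71 * q)**4) ** -0.25)
    return k * T**2

for R_s in (3.0, 5.0, 10.0, 15.0):              # h^-1 Mpc, as in Figure 1
    print(R_s, 1.0 + slope_bias(R_s, P_cdm))    # multiplicative bias on the f(Omega) estimate
```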
The striking similarity on large smoothing scales between the N-body data and this linear theory calculation supports our conclusion that the large scale bias is indeed caused by the cross-correlation between $`\mathrm{\Delta }𝐯`$ and $`𝐯_{\mathrm{pred}}`$, which, in turn, is caused by the comparison of a smoothed prediction to unsmoothed data. From equation (7) it is evident that the linear theory cross-correlation between $`\mathrm{\Delta }𝐯`$ and $`𝐯_{\mathrm{pred}}`$ will be equal to zero if there is no smoothing at all, or if the smoothing function is a step function in $`𝐤`$-space, in which case the product $`\stackrel{~}{W}(kR_s)[1\stackrel{~}{W}(kR_s)]`$ is always equal to zero. The open symbols in Figure 2 show results of a velocity-velocity analysis of the same simulations, with the linear theory velocities now predicted from a density field smoothed with a sharp, low-pass $`𝐤`$-space filter. Specifically, we set to zero all Fourier modes with $`k>k_{\mathrm{cut}}`$ and plot the new estimates of $`f(\mathrm{\Omega })`$ at the values of $`R_s`$ for which a Gaussian filter falls to half its peak value at $`k=k_{\mathrm{cut}}`$ (i.e., $`e^{k_{\mathrm{cut}}^2R_s^2/2}=0.5`$). Using the sharp $`𝐤`$-space filter causes the bias to vanish completely on large scales, yielding estimates of $`f(\mathrm{\Omega })`$ that are correct and independent of smoothing length. This result further supports our interpretation of the cause of the large scale velocity-velocity bias. Figure 2 shows that $`f(\mathrm{\Omega })`$ is underestimated at small scales in the N-body simulations. The linear theory bias discussed above and shown by the dashed line in Figure 2 is always positive. Therefore, there must be a countervailing effect that biases $`f(\mathrm{\Omega })`$ estimates in the opposite direction on small scales. In highly non-linear regions of the density field, such as the cores of galaxy clusters, linear theory velocity predictions have large errors. However, errors caused by virial motions are uncorrelated with the predicted velocities because these virial motions have random directions. Such errors add random scatter to the velocity-velocity relation, but they do not change its slope. In mildly non-linear regions of the density field, on the other hand, galaxy velocities still follow coherent flows, but these flows may no longer be accurately predicted by linear theory. In the case of a galaxy falling towards a large over-density, linear theory will correctly predict the direction of motion, but it will overestimate the infall speed because it incorrectly assumes that the over-density has grown at the linear theory rate over the history of the universe, while in reality the over-density grows to large amplitude only at late times when it becomes non-linear. In such regions, $`\mathrm{\Delta }𝐯`$ will be opposite in sign to $`𝐯_{\mathrm{pred}}`$, causing an anti-correlation between the two quantities. The opposite happens in under-dense regions, but since fewer galaxies reside in these regions and the velocity errors are smaller in magnitude, the net effect is still an anti-correlation between $`\mathrm{\Delta }𝐯`$ and $`𝐯_{\mathrm{pred}}`$. In order to show how these different effects come into play, we adopt a fluid dynamics description and divide an individual galaxy’s velocity into a mean flow $`\overline{𝐯}`$ and a random “thermal” velocity $`𝝈`$, so that $`𝐯_{\mathrm{true}}=\overline{𝐯}+𝝈`$. 
Here $`\overline{𝐯}(𝐱)`$ is the average velocity of galaxies at spatial position $`𝐱`$, and therefore $`𝝈\overline{𝐯}=`$ 0 by definition. Let $`𝐯_{\mathrm{lin}}`$ denote the velocity predicted in linear theory from the unsmoothed density field (eq. 1). Equation (5) applies to the case where the velocity field is exactly linear, $`𝐯_{\mathrm{true}}=𝐯_{\mathrm{lin}}`$, but more generally, $`𝐯_{\mathrm{true}}`$ $`=`$ $`\overline{𝐯}+𝝈`$ (8) $`=`$ $`𝐯_{\mathrm{pred}}+(𝐯_{\mathrm{lin}}𝐯_{\mathrm{pred}})+(\overline{𝐯}𝐯_{\mathrm{lin}})+𝝈,`$ and, therefore, $$\mathrm{\Delta }𝐯=𝐯_{\mathrm{true}}𝐯_{\mathrm{pred}}=(𝐯_{\mathrm{lin}}𝐯_{\mathrm{pred}})+(\overline{𝐯}𝐯_{\mathrm{lin}})+𝝈.$$ (9) This equation shows the three possible sources of error in the smoothed linear theory prediction of galaxy velocities. The first term represents the effect caused by comparing a smoothed quantity with an unsmoothed quantity in linear theory and is given by equation (5). The second term represents the inadequacy of using a linear theory velocity estimator in regions where non-linear effects are important. The third term represents errors caused by galaxies’ random thermal motions. As shown in equation (6), the bias in $`f(\mathrm{\Omega })`$ depends on the cross-correlation of these errors with $`𝐯_{\mathrm{pred}}`$, $$\mathrm{\Delta }𝐯𝐯_{\mathrm{pred}}=(𝐯_{\mathrm{lin}}𝐯_{\mathrm{pred}})𝐯_{\mathrm{pred}}+(\overline{𝐯}𝐯_{\mathrm{lin}})𝐯_{\mathrm{pred}}+𝝈𝐯_{\mathrm{pred}}.$$ (10) The first term is positive and causes an overestimate of $`f(\mathrm{\Omega })`$ for nearly all smoothing functions. Our calculation of this effect via equation (7) shows that it is zero for no smoothing and increases monotonically with smoothing scale. We have argued above that the second term is generally negative and causes an underestimate of $`f(\mathrm{\Omega })`$. Since this effect arises from the non-linearity in the density field, it should dominate on small scales and vanish with increased smoothing of the density field. Finally, the third term is equal to zero because the thermal velocities have random directions. A combination of the first two terms of equation (10) explains the scale dependence of $`f(\mathrm{\Omega })`$ estimates in Figure 2. For large smoothing of the density field, the first term dominates and we overestimate $`f(\mathrm{\Omega })`$, whereas for small smoothing the second term dominates and we underestimate $`f(\mathrm{\Omega })`$. The estimate of $`f(\mathrm{\Omega })`$ is unbiased at the smoothing scale where these two effects cancel, but this scale should itself depend on the specifics of the underlying cosmological model. The numerical results in Figure 2 confirm this prediction: the $`f(\mathrm{\Omega })`$ estimate is unbiased at $`R_s=5h^1\mathrm{Mpc}`$ in the $`\mathrm{\Omega }=0.2`$ model (with $`\sigma _8=1.44`$; squares) and at $`R_s=4h^1\mathrm{Mpc}`$ in the $`\mathrm{\Omega }=1`$ model (with $`\sigma _8=0.55`$; circles). The smoothing scale for unbiased estimates could also depend on the assumed relation between galaxies and mass, a point we will investigate in future work. It is therefore, not possible to remove this bias simply by choosing the right smoothing scale in a model-independent way. If we had adopted a higher-order perturbative expansion for predicting velocities from the smoothed density field, then equation (10) would still hold with $`𝐯_{\mathrm{lin}}`$ replaced by $`𝐯_{\mathrm{per}}`$, the perturbative prediction in the absence of smoothing. 
The first term on the right hand side would still be positive, since some Fourier modes would contribute to both ($`𝐯_{\mathrm{per}}𝐯_{\mathrm{pred}}`$) and $`𝐯_{\mathrm{pred}}`$. The second term could be positive or negative depending on the approximation and the smoothing scale. However, while a higher-order approximation might reduce the magnitude of the second term relative to the linear approximation, it would not necessarily reduce the net bias in $`f(\mathrm{\Omega })`$, since this depends on the relative magnitude and sign of the first two terms. ## 3 Discussion The implications of our results for existing estimates of $`f(\mathrm{\Omega })`$ (or, more generally, of $`\beta `$) are probably limited. As already mentioned, density-density comparisons via POTENT are not influenced by the effects discussed here, because they compare density and velocity divergence fields smoothed at the same scale. The analysis of Davis et al. (1996), a mode-by-mode comparison of density and velocity fields, is also not affected, since the two fields are again compared at the same effective “smoothing”. If the observed velocities are unsmoothed, a comparison in which velocities are predicted using a truncated spherical harmonic expansion of the density field (e.g., Blakeslee et al. (1999)) may behave rather like our sharp $`𝐤`$-space filter analysis (open symbols in Figure 2), since for a Gaussian field the different spherical harmonic components are statistically uncorrelated (A. Nusser, private communication; Fisher et al. (1995)). Among recent velocity-velocity studies, our procedure here is closest to the VELMOD analyses of Willick et al. (1997) and Willick & Strauss (1998), who used a $`3h^1\mathrm{Mpc}`$ Gaussian filter to compute the predicted velocity field. These authors chose their smoothing scale partly on the basis of tests on N-body mock catalogs, and our results in Figure 2 suggest that biases in $`f(\mathrm{\Omega })`$ should indeed be small for this smoothing. However, we have shown that the disappearance of the bias in $`f(\mathrm{\Omega })`$ at this smoothing scale occurs because of a cancellation between positive and negative biases, and that the scale at which this cancellation occurs depends at least to some degree on the underlying cosmological model. As improvements in observational data reduce the statistical uncertainties in peculiar velocity data, control of the systematic uncertainties that arise from comparing smoothed velocity predictions to unsmoothed data will become essential to obtaining robust estimates of the density parameter. We thank Adi Nusser and Michael Strauss for helpful input and comments and Marc Davis and Jeff Willick for comments on the draft manuscript. This work was supported by NSF grant AST-9802568. VKN acknowledges support by the Presidential Fellowship from the Graduate School of The Ohio State University.
# The variability analysis of PKS 2155-304 ## 1 Introduction BL Lac objects are a special subclass of active galactic nuclei (AGNs) showing some extreme properties: rapid and large variability, high and variable polarization, and no or only weak emission lines in the classical definition. BL Lac objects are variable not only in the optical band, but also in radio, infrared, X-ray, and even $`\gamma `$-ray bands. In some BL Lac objects the spectral index changes with the brightness of the source (Bertaud et al. 1973; Brown et al. 1989; Fan 1993); generally, the spectrum flattens when the source brightens, but a different behaviour has also been found (Fan et al. 1999). The nature of AGNs is still an open problem; the study of AGN variability can yield valuable information about their nature, and the implications for quasar modeling are extremely important (see Fan et al. 1998a). PKS 2155-304, the prototype of the X-ray selected BL Lac objects and a TeV $`\gamma `$-ray emitter (Chadwick et al. 1999), is one of the brightest and best studied objects. Its spectrum from $`\lambda 3600`$ to $`\lambda 6800`$ appears blue (B-V$`<`$0.1) and featureless (Wade et al. 1979). A redshift of 0.17 was claimed from a probable detection of a weak \[O III\] emission feature (Charles et al. 1979), which was not detected in the Miller & McAlister (1983) observation. Later, a redshift of 0.117 was obtained from several discrete absorption features (Bowyer et al. 1984). PKS 2155-304 varies at all observed frequencies and is one of the most extensively studied objects for both space-based observations in UV and X-ray bands (Treves et al. 1989; Urry et al. 1993; Pian et al. 1996; Giommi et al. 1998) and multiwavelength observations (Pesce et al. 1997). Variation over a time scale of one day has been observed (Miller & Carini 1991), and variation on a time scale as short as 15 minutes has also been reported by Paltani et al. (1997) in the optical band. Different brightness-dependent spectral properties have been found (see Miller & McAlister 1983; Smith & Sitko 1991; Urry et al. 1993; Courvoisier et al. 1995; Xie et al. 1996; Paltani et al. 1997). In this paper, we will investigate the periodicity in the light curve and discuss the variation as well. The paper is arranged as follows: in section 2, the variations are presented and the periodicities are searched for; in section 3, some discussion and a brief conclusion are given. ## 2 Variation ### 2.1 Light curves The optical data used here are from the literature: Brindle et al. (1986); Carini & Miller (1992); Courvoisier et al. (1995); Griffiths et al. (1979); Hamuy & Maza (1987); Jannuzi et al. (1993); Mead et al. (1990); Miller & McAlister (1983); Pesce et al. (1997); Smith & Sitko (1991); Treves et al. (1989); Urry et al. (1993); Xie et al. (1996) and shown in Fig. 1a-e. From the data, the largest amplitude variabilities in the UBVRI bands are found: $`\mathrm{\Delta }U=1^m.5(11^m.87-13^m.37)`$; $`\mathrm{\Delta }B=1^m.65(12^m.55-14^m.20)`$; $`\mathrm{\Delta }V=1^m.85(12^m.27-14^m.13)`$; $`\mathrm{\Delta }R=1^m.25(11^m.96-13^m.21)`$; $`\mathrm{\Delta }I=1^m.14(11^m.55-12^m.69)`$; and the color indices are found to be: $`(B-V)=0.30\pm 0.06`$ (N=140 pairs); $`(U-B)=0.72\pm 0.08`$ (N=105 pairs); $`(B-R)=0.62\pm 0.07`$ (N=90 pairs); $`(V-R)=0.32\pm 0.04`$ (N=98 pairs), where the uncertainty is the 1$`\sigma `$ dispersion. ### 2.2 Periodicity The photometric observations of PKS 2155-304 indicate that it is variable on time scales ranging from days to years (Miller & McAlister 1983). 
Is there any periodicity in the light curve? To answer this question, the Jurkevich (1971) method is used to search for the periodicity in the V light curve since there are more observations in this band. The Jurkevich method (Jurkevich 1971, also see Fan et al. 1998a) is based on the expected mean square deviation and it is less inclined to generate spurious periodicity than the Fourier analysis. It tests a run of trial periods around which the data are folded. All data are assigned to $`m`$ groups according to their phases around each trial period. The variance $`V_i^2`$ for each group and the sum $`V_m^2`$ of all groups are computed. If a trial period equals the true one, then $`V_m^2`$ reaches its minimum. So, a “good” period will give a much reduced variance relative to the nearly constant values given by false trial periods. To show the significance of the trial periodicity, we adopted the $`F`$-test (see Press et al. 1992). When the Jurkevich method is applied to the V measurements, the results shown in Fig. 2 ($`m=10`$) are obtained; the figure shows several minima corresponding to trial periods of less than 4.0 years and two broad minima corresponding to averaged periods of (4.16 $`\pm `$ 0.2) and (7.0 $`\pm `$ 0.16) years respectively. For the periods smaller than 4.0 years, we found that the decrease of $`V_m^2`$ is less than 3 times the noise, suggesting that it is difficult to take them as real signatures of periods, i.e., those periods should be re-examined with more observations. For the two broad minima, the $`F`$-test is used to check their reality. The significance level is 93.8$`\%`$ for the 4.16-year period and 96.2$`\%`$ for the 7.0-year period. ## 3 Discussion PKS 2155-304 was observed more than 100 years ago. Griffiths et al. (1979) constructed the annually averaged B light curve up to the 1950’s from the Harvard photographic collection. But there are only a few observations during the period of 1950-1970. The periodicities obtained here (see Fig. 2) are based on the post-1977 data. For comparison, we applied the DCF (Discrete Correlation Function) method to the V measurements. The DCF method, described in detail by Edelson & Krolik (1988) (also see Fan et al. 1998b), is intended for analyses of the correlation of two data sets. This method can indicate the correlation of two variable temporal series with a time lag, and can be applied to the periodicity analysis of a single temporal data set. If there is a period, $`P`$, in the light curve, then the DCF should show clearly whether the data set is correlated with itself with time lags of $`\tau `$ = 0 and $`\tau `$ = $`P`$. It can be done as follows. Firstly, we have calculated the set of unbinned correlations (UDCF) between data points in the two data streams $`a`$ and $`b`$, i.e. $$UDCF_{ij}=\frac{(a_i-\overline{a})\times (b_j-\overline{b})}{\sqrt{\sigma _a^2\times \sigma _b^2}},$$ (1) where $`a_i`$ and $`b_j`$ are points in the data sets, $`\overline{a}`$ and $`\overline{b}`$ are the average values of the data sets, and $`\sigma _a`$ and $`\sigma _b`$ are the corresponding standard deviations. Secondly, we have averaged the points sharing the same time lag by binning the $`UDCF_{ij}`$ in suitably sized time-bins in order to get the $`DCF`$ for each time lag $`\tau `$: $$DCF(\tau )=\frac{1}{M}\mathrm{\Sigma }UDCF_{ij}(\tau ),$$ (2) where $`M`$ is the total number of pairs. 
The standard error for each bin is $$\sigma (\tau )=\frac{1}{M-1}\{\mathrm{\Sigma }[UDCF_{ij}-DCF(\tau )]^2\}^{0.5}.$$ (3) The resulting DCF is shown in Fig. 3. Correlations are found with time lags of (4.20 $`\pm `$ 0.2) and (7.31 $`\pm `$ 0.16) years. In addition, there are signatures of correlation with time lags of less than 3.0 years. If we consider the two minima on both the right and left sides of the 7.0-year minimum, then we can say that the periods of 4.16 and 7.0 years found with the Jurkevich method are consistent with the time lags of 4.2 and 7.3 years found with the DCF method. These two periods are used to simulate the light curve (see the solid curve in Fig. 4). It is clear that the solid curve does not fit the observations very well. One of the reasons is that there are probably more than two periods ($`\sim `$ 4.2 and $`\sim `$ 7.0 years) in the light curve, as the results in Figs. 2 and 3 indicate. Another reason is that the derived periods are not highly significant, as Press (1978) pointed out. Press argued that periods of the order of one third of the time span have a large probability of appearing if longer-term variations exist. The data used here have a time coverage of about 16.0 years, i.e., about 3 times the derived periods. Therefore, these are only tentative and should be confirmed by independent work. From the data, the largest amplitude variations are found for UBVRI bands with I and R bands showing smaller amplitude variations. One of the reasons is that there are fewer observations in those two bands; another is perhaps the effect of the host galaxy, which affects these two bands more strongly. In this paper, the post-1970 UBVRI data are compiled for PKS 2155-304 to discuss the spectral index properties and to search for the periodicity. Possible periods of 4.16 and 7.0 years are found. We are grateful to the referee for his/her comments and suggestions! This work is supported by the National Pan Deng Project of China and the National Natural Scientific Foundation of China. Figure Captions Fig. 1: a: The long-term U light curve of PKS 2155-304; b: The long-term B light curve of PKS 2155-304; c: The long-term V light curve of PKS 2155-304; d: The long-term R light curve of PKS 2155-304; e: The long-term I light curve of PKS 2155-304. Fig. 2: Plot of $`V_m^2`$ vs. trial period, $`P`$, in years Fig. 3: DCF for the V band data. It shows that the V light curve is self-correlated with time lags of 4.2 and 7.31 years. In addition, there are also correlations with time lags of less than 4.0 years. Fig. 4: The observed V light curve (filled points) and the simulated V light curve (solid curve) with the periods of 4.16 and 7.0 years taken into account.
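For readers who wish to reproduce this kind of period search, a minimal sketch of the two statistics used above, the Jurkevich $`V_m^2`$ folding statistic and the DCF of equations (1)-(3), might look like the following; the light curve, trial periods, and lag bins below are invented placeholders rather than the actual PKS 2155-304 data:

```python
import numpy as np

def jurkevich_vm2(t, mag, trial_period, m=10):
    """Fold the light curve on trial_period, split it into m phase groups, and
    return the summed squared deviations about each group mean (Jurkevich 1971)."""
    phase = (t / trial_period) % 1.0
    group = np.minimum((phase * m).astype(int), m - 1)
    vm2 = 0.0
    for g in range(m):
        x = mag[group == g]
        if x.size > 1:
            vm2 += np.sum((x - x.mean())**2)
    return vm2

def dcf(t, mag, lag_edges):
    """Self-DCF of a single light curve following equations (1)-(2):
    normalize, form all pairwise products, then average them in lag bins."""
    a = (mag - mag.mean()) / mag.std()
    lags = t[:, None] - t[None, :]          # all pairwise time lags
    udcf = a[:, None] * a[None, :]          # unbinned correlations, eq. (1)
    out = []
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        vals = udcf[(lags >= lo) & (lags < hi)]
        out.append(vals.mean() if vals.size else np.nan)   # eq. (2)
    return np.array(out)

# Placeholder light curve: a 4.2-yr sinusoid plus noise, irregularly sampled.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 16.0, 400))            # years
mag = 13.0 + 0.5 * np.sin(2 * np.pi * t / 4.2) + rng.normal(0.0, 0.1, t.size)

periods = np.linspace(0.5, 8.0, 300)
vm2 = np.array([jurkevich_vm2(t, mag, p) for p in periods])
print("deepest V_m^2 minimum at trial period:", round(periods[np.argmin(vm2)], 2), "yr")

lag_edges = np.arange(0.0, 10.5, 0.5)
print("DCF values:", np.round(dcf(t, mag, lag_edges), 2))
```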
# STATUS OF EXPERIMENTS AND RECENT RESULTS FROM CMD-2 DETECTOR AT VEPP-2M ## 1 Introduction The investigation of the reaction of $`e^+e^{-}`$ annihilation into hadrons at low energies has about a thirty-year history of experimental studies. Nevertheless, the understanding of the field is still rather far from complete. More precise measurements of the $`\rho `$-, $`\omega `$\- and $`\varphi `$-meson parameters are needed, as well as of the properties of the continuum, which provide unique information about the interaction of light quarks and the spectroscopy of their bound states. The knowledge of the total cross section of $`e^+e^{-}`$ annihilation into hadrons at low energies and the magnitude of the exclusive cross sections is also necessary for precise calculations of various quantities. One of them is the strong interaction contribution to the anomalous magnetic moment of the muon, $`(g-2)_\mu `$. Tab. 1 shows contributions of various channels to $`(g-2)_\mu `$ (details of the calculations can be found in and). One can see that the main contribution (about 87%) comes from the energy region below 1.4 GeV and the dominant contribution (71%) in this region is from the channel $`e^+e^{-}\to \pi ^+\pi ^{-}`$. The energy behavior of the cross section of the process $`e^+e^{-}\to hadrons`$ at low energies is rather complicated. It is characterized by various resonances ($`\rho `$, $`\omega `$, $`\varphi `$ and their recurrences) and by the onsets of the separate hadronic channels. Thus, to determine the value of the total cross section of $`e^+e^{-}`$ annihilation into hadrons, one needs to measure the individual channels one by one and study the decay modes of the $`\omega `$ and $`\varphi `$ mesons. These physical tasks became the goal of the general-purpose detector CMD-2, which has been running at the VEPP-2M $`e^+e^{-}`$ collider in Novosibirsk since 1992, studying the c.m. energy range from the threshold of hadron production to 1.4 GeV. The CMD-2 detector is described in detail elsewhere. It is a general purpose detector consisting of a drift chamber (DC) with about 250 $`\mu `$m resolution in the plane transverse to the beam axis and a multiwire proportional chamber (ZC) with an accurate measurement ($`\sim `$0.5 mm) of the z-coordinate of the particle track along the beam direction. Both chambers are inside a thin (0.38 $`X_0`$) superconducting solenoid with a field of 1 T. The barrel calorimeter is placed outside the solenoid and consists of 892 CsI crystals of $`6\times 6\times 15`$ $`cm^3`$ size. The crystals are arranged in eight octants. The light readout is performed by PMTs. The energy resolution is about $`8\%`$ for photons with energies above 100 MeV. Both the azimuthal and polar angle resolutions are about 0.02 radian. The endcap calorimeter consists of 680 BGO crystals of $`2.5\times 2.5\times 15`$ $`cm^3`$ size. The light readout is performed by vacuum phototriodes placed on the crystals. The energy and angular resolutions were found to be $`\sigma _E/E=4.6\%/\sqrt{E(GeV)}`$ and $`\sigma _{\varphi ,\theta }=2\times 10^{-2}/\sqrt{E(GeV)}`$ radians respectively. The solid angle covered by both parts of the calorimeter is about 96% of 4$`\pi `$. The muon range system consists of two double layers of streamer tubes operating in a self-quenching mode and is intended to separate pions and muons. The inner and outer parts of this system are arranged in 8 modules each and cover 55% and 48% of the solid angle respectively. 
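As a trivial numerical illustration of the calorimeter resolution formulas quoted above (reading the endcap angular resolution as $`2\times 10^{-2}/\sqrt{E(GeV)}`$ radians), the expected resolutions at a few representative photon energies can be evaluated directly; the energies chosen are arbitrary:

```python
import numpy as np

def bgo_energy_resolution(e_gev):
    """sigma_E / E = 4.6% / sqrt(E[GeV]) for the BGO endcap calorimeter."""
    return 0.046 / np.sqrt(e_gev)

def bgo_angular_resolution(e_gev):
    """sigma_(phi,theta) = 2e-2 / sqrt(E[GeV]) radians."""
    return 2e-2 / np.sqrt(e_gev)

for e in (0.1, 0.3, 1.0):   # illustrative photon energies in GeV
    print(f"E = {e:.1f} GeV: sigma_E/E = {100 * bgo_energy_resolution(e):.1f}%, "
          f"sigma_angle = {1e3 * bgo_angular_resolution(e):.0f} mrad")
```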
## 2 Measurement of the pion form factor and $`\rho `$, $`\omega `$ meson parameters A large data sample of about 2 million $`e^+e^{}\pi ^+\pi ^{}`$ events was collected by CMD-2 detector in the energy range from 0.360 to 1.370 GeV. Analysis is completed for 10% of the data only. The beam energy was measured by the resonance depolarization technique at almost all energies. The pion form factor presented in fig.2 is based on the data sample at 53 energy points in the energy range from 0.37 to 0.96 MeV. The obtained $`\rho `$ meson parameters based on Gounaris-Sakurai parametrization were found to be: $`𝑴_𝝆`$ = $`\mathbf{775.28}\mathbf{\pm }\mathbf{0.61}\mathbf{\pm }\mathbf{0.20}`$ MeV, $`𝚪_𝝆`$ = $`\mathbf{147.70}\mathbf{\pm }\mathbf{1.29}\mathbf{\pm }\mathbf{0.40}`$ MeV, $`𝚪_{𝝆\mathbf{}𝒆^\mathbf{+}𝒆^{\mathbf{}}}`$ =$`\mathbf{6.93}\mathbf{\pm }\mathbf{0.11}\mathbf{\pm }\mathbf{0.10}`$ keV, $`𝑩𝒓\mathbf{(}𝝎\mathbf{}𝝅^\mathbf{+}𝝅^{\mathbf{}}\mathbf{)}`$=$`\mathbf{(}\mathbf{1.31}\mathbf{\pm }\mathbf{0.23}\mathbf{\pm }\mathbf{0.02}\mathbf{)}\mathbf{\%}`$. Here and below the first errors are statistical and the second are systematic. More details about the pion form factor can be found in. The energy range around the $`\omega `$ meson has been scanned at 13 energy points with a total integrated luminosity of about 1.5 $`\text{pb}^1`$, but the detailed analysis was performed for $`10\%`$ of the data. The $`\omega `$ meson parameters were measured with high accuracy using the $`\omega \pi ^+\pi ^{}\pi ^0`$ decay mode. The following parameters have been obtained from the fit: $`𝑴_𝝎`$ = $`\mathbf{782.71}\mathbf{\pm }\mathbf{0.07}\mathbf{\pm }\mathbf{0.04}`$ MeV, $`𝝈_\mathrm{𝟎}`$ = $`\mathrm{𝟏𝟒𝟖𝟐}\mathbf{\pm }\mathrm{𝟐𝟑}\mathbf{\pm }\mathrm{𝟐𝟓}`$ nb, $`𝚪_𝝎`$ = $`\mathbf{8.68}\mathbf{\pm }\mathbf{0.23}\mathbf{\pm }\mathbf{0.10}`$ MeV, $`𝚪_{𝒆^\mathbf{+}𝒆^{\mathbf{}}}`$ = $`\mathbf{0.605}\mathbf{\pm }\mathbf{0.014}\mathbf{\pm }\mathbf{0.010}`$ keV. The common excitation curve for the $`\omega `$ and $`\varphi `$ meson is presented in fig.2. ## 3 Measurements of $`\varphi `$ meson parameters The $`\varphi `$-meson parameters were measured using data on the four major decay modes of $`\varphi `$ $`K_SK_L`$, $`K^+K^{}`$, $`3\pi ,`$ $`\eta \gamma `$. The first results based on a relatively small integrated luminosity of about 300 $`\text{nb}^1`$ were published in. The new more precise results were obtained for the channel $`\varphi K_L^0K_S^0`$ when $`K_S^0`$ decays into a $`\pi ^+\pi ^{}`$. The data sample was collected in four scans of the energy range from 984 to 1040 MeV with the integrated luminosity of 2.37 $`\text{pb}^1`$ and contains $`2.97\times 10^5`$ of selected $`K_L^0K_S^0`$. Fig.3 shows energy dependence of the cross section of the reaction $`e^+e^{}K_L^0K_S^0`$ and the excitation curve of the $`\varphi `$ meson. 
The following parameters have been obtained from the fit: $`𝝈_\mathrm{𝟎}\mathbf{(}\mathit{\varphi }\mathbf{}𝑲_𝑳^\mathrm{𝟎}𝑲_𝑺^\mathrm{𝟎}\mathbf{)}\mathbf{=}\mathrm{𝟏𝟑𝟏𝟐}\mathbf{\pm }\mathrm{𝟕}\mathbf{\pm }\mathrm{𝟑𝟑}`$ nb, $`𝑴_\mathit{\varphi }\mathbf{=}\mathbf{1019.470}\mathbf{\pm }\mathbf{0.013}\mathbf{\pm }\mathbf{0.018}`$ MeV, $`𝚪_\mathit{\varphi }\mathbf{=}\mathbf{4.51}\mathbf{\pm }\mathbf{0.04}\mathbf{\pm }\mathbf{0.02}`$ MeV, $`𝚪_{\mathit{\varphi }\mathbf{}𝒆𝒆}\mathbf{}𝑩𝒓\mathbf{(}\mathit{\varphi }\mathbf{}𝑲_𝑳^\mathrm{𝟎}𝑲_𝑺^\mathrm{𝟎}\mathbf{)}`$ = $`\mathbf{(}\mathbf{4.181}\mathbf{\pm }\mathbf{0.024}\mathbf{\pm }\mathbf{0.084}\mathbf{)}\mathbf{\times }\mathrm{𝟏𝟎}^\mathbf{}\mathrm{𝟒}`$ MeV. ## 4 Study of $`\varphi \eta \gamma \pi ^+\pi ^{}\pi ^0\gamma `$ decay The radiative magnetic dipole transition of $`\varphi `$ into $`\eta `$ has been studied with the integrated luminosity of about 1.9 $`\text{pb}^1`$. Events with two charged particles and one recoil photon with the energy more than 250 MeV were selected. The direction of the recoil photon should be close to the opposite direction of two charged pions. The reconstructed invariant mass of all other photons in this system (fig.5) forms a peak near the $`\pi ^0`$ mass or near zero corresponding to the events of the $`\eta `$ decay into $`\pi ^+\pi ^{}\gamma `$. The small fraction of the background comes from $`\varphi `$ decays into $`\pi ^+\pi ^{}\pi ^0,\omega \pi ^0,`$ $`K_L^0K_S^0`$ and was subtracted according to the simulation results. Fig.5 shows the energy behavior of the cross section of the process $`e^+e^{}\varphi \eta \gamma `$. Using the branching ratio for the $`\varphi e^+e^{}`$ decay from PDG, the following value for branching ratio has been determined: $`𝑩𝒓\mathbf{(}\mathit{\varphi }\mathbf{}𝜼𝜸\mathbf{)}\mathbf{=}\mathbf{(}\mathbf{1.18}\mathbf{\pm }\mathbf{0.03}\mathbf{\pm }\mathbf{0.06}\mathbf{)}\mathbf{\times }\mathrm{𝟏𝟎}^\mathbf{}\mathrm{𝟒}`$. ## 5 Process $`e^+e^{}\eta \gamma \pi ^0\pi ^0\pi ^0\gamma `$ The reaction $`e^+e^{}\eta \gamma `$ when $`\eta `$ decays into $`3\pi ^0`$ has been studied in the energy range from 0.6 to 1.4 GeV with the integrated luminosity about 21 $`\text{pb}^1`$. The preliminary results of measurement of the cross section of the process are shown in fig.7. The curve in this figure shows the fit of the energy dependence of the cross section which takes into account the interference of $`\rho `$, $`\omega `$ and $`\varphi `$ mesons in the intermediate state. The following branching ratios were obtained from the fit: $`𝑩𝒓\mathbf{(}\mathit{\varphi }\mathbf{}𝜼𝜸\mathbf{)}\mathbf{=}\mathbf{(}\mathbf{1.24}\mathbf{\pm }\mathbf{0.02}\mathbf{\pm }\mathbf{0.08}\mathbf{)}\mathbf{}\mathrm{𝟏𝟎}^\mathbf{}\mathrm{𝟐}`$, $`𝑩𝒓\mathbf{(}𝝎\mathbf{}𝜼𝜸\mathbf{)}\mathbf{=}\mathbf{(}\mathbf{5.6}_{\mathbf{}\mathbf{1.1}}^{\mathbf{+}\mathbf{1.2}}\mathbf{)}\mathbf{}\mathrm{𝟏𝟎}^\mathbf{}\mathrm{𝟒}`$, $`𝑩𝒓\mathbf{(}𝝆\mathbf{}𝜼𝜸\mathbf{)}\mathbf{=}\mathbf{(}\mathbf{2.1}_{\mathbf{}\mathbf{0.5}}^{\mathbf{+}\mathbf{0.6}}\mathbf{)}\mathbf{}\mathrm{𝟏𝟎}^\mathbf{}\mathrm{𝟒}`$ ## 6 Observation of $`\varphi \eta ^{}\gamma `$ decay A search of this rare radiative decay was performed with the integrated luminosity of about 14 $`mboxpb^1`$ at 14 energy points around the $`\varphi `$ meson when $`\eta ^{}`$ decays into $`\pi ^+\pi ^{}\eta `$. The analysis of events has been performed using three different decay modes of $`\eta `$: a. $`\eta \gamma \gamma `$, b. $`\eta \pi ^+\pi ^{}\gamma `$ and c. $`\eta \pi ^+\pi ^{}\pi ^0`$. 
For the first case (a), there are two charged pions and three photons in the final state. The monochromatic recoil photon has a fixed energy of 60 MeV. The invariant mass of the two other (harder) photons should equal $`M_\eta `$. Fig.7 shows the distribution of the invariant mass of the two hard photons $`M_{12}`$ versus the softest photon energy $`\omega _3`$. The main source of the background comes from the decay $`\varphi \to \eta \gamma `$ when $`\eta `$ decays into $`\pi ^+\pi ^{-}\pi ^0`$. In this case the final state has the same particles but their kinematics is drastically different. The hardest photon is monochromatic with the energy 362 MeV and the invariant mass of the two others is $`M_{\pi ^0}`$. The decay $`\varphi \to \eta \gamma `$ is two orders of magnitude more probable. The branching ratio $`Br(\varphi \to \eta ^{\prime }\gamma )`$ was calculated relative to $`Br(\varphi \to \eta \gamma )`$. This ratio is not sensitive to systematic uncertainties from luminosity, detector inefficiency, resolution and so on. Using the values of all the needed branching ratios from the PDG, the following result has been obtained: $`𝑩𝒓\mathbf{(}\mathit{\varphi }\mathbf{\to }𝜼^{\mathbf{\prime }}𝜸\mathbf{)}\mathbf{=}\mathbf{(}\mathbf{0.82}_{\mathbf{-}\mathbf{0.19}}^{\mathbf{+}\mathbf{0.21}}\mathbf{\pm }\mathbf{0.11}\mathbf{)}\mathbf{\times }\mathrm{𝟏𝟎}^{\mathbf{-}\mathrm{𝟒}}`$. For the second (b) and third (c) decay modes of $`\eta `$ there are four charged particles and two or three photons in the final state. The softest photon is monochromatic with the energy of 60 MeV. One of the combinations of two particles with opposite charges has to give a missing mass equal to $`M_{\pi ^0}`$ or zero. A kinematically constrained fit with additional angular cuts was applied to select events with the best $`\chi ^2`$. The main source of the background comes from the decays $`\varphi \to K_S^0K_L^0`$ when $`K_S^0\to \pi ^+\pi ^{-}`$ and $`K_L\to \pi ^+\pi ^{-}\pi ^0`$. The number of these background events was subtracted according to the simulation results. The following result has been obtained: $`𝑩𝒓\mathbf{(}\mathit{\varphi }\mathbf{\to }𝜼^{\mathbf{\prime }}𝜸\mathbf{)}\mathbf{=}\mathbf{(}\mathbf{0.58}\mathbf{\pm }\mathbf{0.18}\mathbf{\pm }\mathbf{0.15}\mathbf{)}\mathbf{\times }\mathrm{𝟏𝟎}^{\mathbf{-}\mathrm{𝟒}}`$. ## 7 Direct observation of $`K_S^0\to \pi e\nu `$ decay While the semileptonic decays of $`K_L^0`$ have been well measured, the information on the similar decays of $`K_S^0`$ is extremely scarce. PDG evaluates the corresponding decay rate indirectly using the $`K_L^0`$ semileptonic decays and assuming the rule: $`\mathrm{\Delta }`$S=$`\mathrm{\Delta }`$Q. We present results of the direct measurement of the branching ratio for the $`K_S^0\to \pi e\nu `$ decay using the unique opportunity to study events containing a pure $`K_L^0K_S^0`$ system in the final state produced in the reaction $`e^+e^{-}\to \varphi \to K_L^0K_S^0`$. The data with the integrated luminosity of 14.8 $`\text{pb}^{-1}`$ were used for this analysis. Fig.8 shows the distribution of the selected events over the parameter $`DPE=p-E_{loss}-E_{CsI}`$, where $`p`$ is the particle momentum measured in the drift chamber, $`E_{CsI}`$ is the energy deposition in the CsI calorimeter and $`E_{loss}`$ is the ionization energy loss in the material in front of the CsI calorimeter. An enhancement in this distribution around zero corresponds to the electrons from the decay $`K_S^0\to \pi e\nu `$. After background subtraction the corresponding number of the events was found to be: $`N=75\pm 13`$. 
Using the $`K_S^0\pi ^+\pi ^{}`$ decay for normalization, the following branching ratio was obtained: $`𝑩𝒓\mathbf{(}𝑲_𝑺^\mathrm{𝟎}\mathbf{}𝝅𝒆𝝂\mathbf{)}`$ = $`\mathbf{(}\mathbf{7.19}\mathbf{\pm }\mathbf{1.35}\mathbf{)}\mathbf{\times }\mathrm{𝟏𝟎}^\mathbf{}\mathrm{𝟒}`$. This result is consistent with the PDG value obtained by recalculation from $`K_L^0`$ semileptonic rates. More details on the analysis can be found in. ## 8 Study of the conversion decays Conversion decays, when a virtual photon is converted into a lepton pair, are closely related to corresponding radiative decays. The branching ratios for conversion decays $`\varphi \eta e^+e^{}`$, $`\varphi \pi ^0e^+e^{}`$ as well as Dalitz decay $`\eta e^+e^{}\gamma `$ were determined using a data sample with the integrated luminosity of 15.5 $`\text{pb}^1`$. The decay $`\varphi \eta e^+e^{}`$ was detected via the mode $`\eta \gamma \gamma `$ and $`\eta 3\pi ^0`$, the decay $`\varphi \pi ^0e^+e^{}`$ – via the $`\pi ^0\gamma \gamma `$ and the decay $`\eta e^+e^{}\gamma `$ – via the mode $`\varphi \eta \gamma `$. The process $`\varphi \eta \gamma ,\eta \pi ^+\pi ^{}\gamma `$ was used to determine the number of $`\varphi `$-mesons. Events were selected with two charged particles in DC and photons in the calorimeter. These events were subject to the kinematic fit with energy-momentum conservation. The conversion decays have a peculiar feature of their kinematics: the angle between $`e^+`$ and $`e^{}`$ is as a rule close to zero. The significant background for these events comes from the $`\gamma `$-quantum conversion in the detector material. The detection efficiencies for these processes were determined by simulation. The decay $`\varphi \pi ^0e^+e^{}`$ has background from $`\varphi \pi ^+\pi ^{}\pi ^0`$ via the same final state. This background was suppressed by using the information about energy deposition by electrons and pions in the calorimeter. Fig.9 shows the distribution over the invariant mass of pair of photons for the events of the process $`\varphi \pi ^0e^+e^{}`$. As a preliminary result the following branching ratios were obtained: $`𝑩𝒓\mathbf{(}\mathit{\varphi }\mathbf{}𝜼𝒆^\mathbf{+}𝒆^{\mathbf{}}\mathbf{)}`$ = $`\mathbf{(}\mathbf{1.01}\mathbf{\pm }\mathbf{0.14}\mathbf{\pm }\mathbf{0.15}\mathbf{)}\mathbf{\times }\mathrm{𝟏𝟎}^\mathbf{}\mathrm{𝟒}`$ when $`\eta \gamma \gamma `$, $`𝑩𝒓\mathbf{(}\mathit{\varphi }\mathbf{}𝜼𝒆^\mathbf{+}𝒆^{\mathbf{}}\mathbf{)}`$ = $`\mathbf{(}\mathbf{1.20}\mathbf{\pm }\mathbf{0.22}\mathbf{\pm }\mathbf{0.18}\mathbf{)}\mathbf{\times }\mathrm{𝟏𝟎}^\mathbf{}\mathrm{𝟒}`$ when $`\eta \pi ^0\pi ^0\pi ^0`$, $`𝑩𝒓\mathbf{(}\mathit{\varphi }\mathbf{}𝝅^\mathrm{𝟎}𝒆^\mathbf{+}𝒆^{\mathbf{}}\mathbf{)}`$ = $`\mathbf{(}\mathbf{1.23}\mathbf{\pm }\mathbf{0.33}\mathbf{\pm }\mathbf{0.20}\mathbf{)}\mathbf{\times }\mathrm{𝟏𝟎}^\mathbf{}\mathrm{𝟓}`$, $`𝑩𝒓\mathbf{(}𝜼\mathbf{}𝒆^\mathbf{+}𝒆^{\mathbf{}}𝜸\mathbf{)}`$ = $`\mathbf{(}\mathbf{6.85}\mathbf{\pm }\mathbf{0.60}\mathbf{\pm }\mathbf{1.00}\mathbf{)}\mathbf{\times }\mathrm{𝟏𝟎}^\mathbf{}\mathrm{𝟑}`$. The obtained results are in agreement with the theoretical predictions and have better statistical accuracy than previous measurements quoted by PDG. ## 9 Reactions $`e^+e^{}\pi ^+\pi ^{}\pi ^+\pi ^{}`$ and $`e^+e^{}\pi ^+\pi ^{}\pi ^0\pi ^0`$ The reaction of $`e^+e^{}`$ annihilation into four pions (with two possible channels $`\pi ^+\pi ^{}\pi ^+\pi ^{}`$ and $`\pi ^+\pi ^{}\pi ^0\pi ^0`$) was studied in the energy range 1.05–1.38 GeV. 
Simultaneous analysis of both modes allowed to establish that the final state $`\pi ^+\pi ^{}\pi ^0\pi ^0`$ is dominated by a mixture of $`\omega \pi ^0`$ and $`a_1(1260)\pi `$ mechanisms whereas only the latter contributes to the $`\pi ^+\pi ^{}\pi ^+\pi ^{}`$ final state. The reaction $`e^+e^{}\pi ^+\pi ^{}\pi ^+\pi ^{}`$ was also studied in the energy range 0.6–0.97 GeV. The energy dependence of the cross section in this range agrees with the assumption of the $`a_1(1260)\pi `$ intermediate state. Fig.11 shows the energy behavior of the cross section of the reaction $`e^+e^{}\pi ^+\pi ^{}\pi ^+\pi ^{}`$ in the energy range 0.6 to 2 GeV. Also shown in this figure are the measurements of other groups. For the first time $`\pi ^+\pi ^{}\pi ^+\pi ^{}`$ events were observed at the $`\rho `$ meson energy. Under the assumption that all these events come from $`\rho `$ meson decay, the following value of the decay width was obtained: $`𝚪\mathbf{(}𝝆^\mathrm{𝟎}\mathbf{}𝝅^\mathbf{+}𝝅^{\mathbf{}}𝝅^\mathbf{+}𝝅^{\mathbf{}}\mathbf{)}\mathbf{=}\mathbf{(}\mathbf{2.8}\mathbf{\pm }\mathbf{1.4}\mathbf{\pm }\mathbf{0.5}\mathbf{)}`$ keV or the branching ratio: $`𝑩𝒓\mathbf{(}𝝆^\mathrm{𝟎}\mathbf{}𝝅^\mathbf{+}𝝅^{\mathbf{}}𝝅^\mathbf{+}𝝅^{\mathbf{}}\mathbf{)}\mathbf{=}\mathbf{(}\mathbf{1.8}\mathbf{\pm }\mathbf{0.9}\mathbf{\pm }\mathbf{0.3}\mathbf{)}\mathbf{}\mathrm{𝟏𝟎}^\mathbf{}\mathrm{𝟓}`$ Fig.11 shows the preliminary results of measurement of the cross section of the process $`e^+e^{}\pi ^+\pi ^{}\pi ^+\pi ^{}`$ near $`\varphi `$ meson. A signal of the decay $`\varphi \pi ^+\pi ^{}\pi ^+\pi ^{}`$ is well seen in this figure. The following branching ratio was obtained from the fit: $`𝑩𝒓\mathbf{(}\mathit{\varphi }\mathbf{}𝝅^\mathbf{+}𝝅^{\mathbf{}}𝝅^\mathbf{+}𝝅^{\mathbf{}}\mathbf{)}\mathbf{=}\mathbf{(}\mathbf{5.4}\mathbf{\pm }\mathbf{1.6}\mathbf{\pm }\mathbf{2.0}\mathbf{)}\mathbf{}\mathrm{𝟏𝟎}^\mathbf{}\mathrm{𝟔}`$ ## 10 Reaction $`e^+e^{}\pi ^+\pi ^{}\pi ^+\pi ^{}\pi ^0`$ The energy dependence of the cross section of the process $`e^+e^{}\pi ^+\pi ^{}\pi ^+\pi ^{}\pi ^0`$ was measured. The dominance by the contributions from the $`\eta \pi ^+\pi ^{}`$ and $`\omega \pi ^+\pi ^{}`$ states was shown. The reaction $`e^+e^{}\eta \pi ^+\pi ^{}`$ was also studied when $`\eta `$ decays into $`\gamma \gamma `$. The results of measurements are shown in fig.13 and 13. ## 11 Conclusion New interesting results were obtained with CMD-2 detector on VEPP-2M collider. Among them are the high precision measurement of the cross section of $`e^+e^{}`$ annihilation into hadrons in the energy range from the threshold of hadron production to 1.4 GeV, investigation of exclusive hadron channels as well as decays of $`\rho `$, $`\omega `$ and $`\varphi `$ mesons. Some of these results are outlined in the tab.2. Analysis is in progress to produce final results with a low systematic uncertainty to meet the original goals of CMD-2. This work is supported in part by the grants: RFBR-98-02-17851, RFBR-99-02-17053, RFBR-99-02-17119, INTAS 96-0624.
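Many of the branching ratios reported above are measured relative to a well-known reference mode, so that the luminosity and the common detector inefficiencies cancel in the ratio. A schematic sketch of that bookkeeping is given below; every number in it is an invented placeholder, not a CMD-2 measurement:

```python
# All numbers below are invented placeholders, not CMD-2 measurements.
n_signal, n_reference = 50.0, 5000.0       # background-subtracted event counts
eff_signal, eff_reference = 0.12, 0.25     # detection efficiencies from simulation
br_reference_chain = 1.3e-2                # branching fraction of the reference chain

# Ratio of produced decays; luminosity and the common phi yield cancel here.
produced_ratio = (n_signal / eff_signal) / (n_reference / eff_reference)
br_signal = produced_ratio * br_reference_chain
print(f"Br(signal mode) ~ {br_signal:.2e}")
```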
# 1 Introduction ## 1 Introduction Asymptotic giant branch (AGB) stars of all populations have basically the same interior structures, with shell fusion zones of He and H surrounding C-O cores. Most AGB stars will undergo episodes of mass loss that eject their outer envelopes, leaving the exposed cores to fade away as white dwarfs. Thus low mass, metal-poor halo AGB stars and their higher mass, metal-rich disk counterparts exist in the same evolutionary domains and share the same eventual fates; all these stars appear to be theoretically quite similar. Observationally however, the properties of disk and halo population AGB stars are quite distinct. The high mass, high metallicity AGB stars are both extremely luminous and extremely cool. Sometimes they are surrounded by substantial gas/dust shells of their own making, and thus present unique photometric signatures (especially in the infrared). Often they exhibit spectroscopic peculiarities (strong carbon-containing molecular and neutron-capture element features) indicative of nuclear processing in their He fusion zones. These are the stars that are treated in most of the contributions to this workshop. In contrast, the low mass, low metallicity stars that can be positively associated with the AGB are photometrically and spectroscopically somewhat difficult to distinguish from first-ascent red giant branch (RGB) stars. In globular cluster colour-magnitude (c-m) diagrams the AGB is a thinly-populated stream of stars connecting the red end of the horizontal branch (HB) and the end of the RGB. Over the brightest 1–2 magnitudes, the AGB and RGB are separated by less than 0.2 in V magnitude at a given B–V colour (a particularly clear example is the M3 c-m diagram of Buonanno et al. 1986). Globular cluster AGB stars will not become extremely luminous because they are former HB stars, whose masses cannot in theory exceed $``$ $``$ 0.6$``$; the second-ascent AGB tip effectively merges with the first-ascent RGB tip. Unfortunately, for many globular clusters, photometry precise enough to cleanly separate the AGB from the RGB for most candidate stars still does not exist. The spectra of globular cluster AGB stars also do not differ radically from those of RGB stars. CH stars, those possessing spectroscopic evidence of having possibly mixed He shell burning products (carbon, neutron-capture elements) to their surfaces, are apparently very rare in globular clusters; only a handful have been discovered (e.g., McClure & Norris 1977; Cowley & Crampton 1985; Vanture & Wallerstein 1992; Côte et al. 1997). In recent years photometrists and spectroscopists have combined efforts to substantially increase the quantity and quality of data on AGB stars in globular clusters. In this paper, we look for chemical composition differences between AGB and RGB stars in three globular clusters, concluding that there is some evidence suggesting that AGB stars have less chemically evolved surface layers. This suggestion is then related to the “second parameter problem” of globular clusters. ## 2 Inter- and Intra-Cluster Chemical Inhomogeneities: A Brief Sketch Several decades of spectroscopic investigations have established the reality of large-scale star-to-star abundance variations among light elements in globular cluster stars. The variations are not of the same magnitude in all clusters, and indeed each cluster seems to have a chemical composition signature that is not repeated exactly in other clusters. 
Most of the abundance inhomogeneities observed in globular clusters involve some aspects of so-called “proton-capture” nucleosynthesis. Extensive reviews of these abundance variations have been published by e.g., Kraft (1979), Freeman & Norris (1981), Smith (1987), Suntzeff (1993), Briley et al. (1994), Kraft (1994), and Sneden (1998,1999). Some general statements about cluster nucleosynthesis are summarized here without attribution to specific papers, and the reader is strongly encouraged to consult the reviews and the original papers quoted in them for details on these abundance trends. The CN cycle: The chief products of ordinary CN cycle fusion are observed at the surfaces of most RGB and AGB stars. That is, the carbon isotope ratios are uniformly low (4 $``$ $`{}_{}{}^{12}\mathrm{C}/^{13}\mathrm{C}`$ $``$ 10), carbon abundances are usually low (–0.3 $``$ \[C/Fe\] $``$ –1.3), and nitrogen abundances are correspondingly very high (+0.5 $``$ \[N/Fe\] $``$ +1.5). However, the N overabundances are sometimes far greater than the amounts that would be predicted from simple C$``$N conversion. The ON cycle: Globular cluster giants, unlike almost all halo field giants, often exhibit very depleted oxygen abundances (–1.0 $``$ \[O/Fe\] $``$ +0.4). This suggests that the ON cycle, which requires higher temperatures ($`T`$ $``$ 40$`\times `$10<sup>6</sup> K) in hydrogen fusion zones than does the CN cycle, has been active either in the giants that are being observed or in an earlier cluster generation. This cycle’s major net effect is O$``$N conversion, and can therefore account for the anomalously large N abundances mentioned above. Finally, in nearly all cluster giants with complete CNO abundance data, the C+N+O abundance sum appears to be conserved, adding further weight to the idea that the variations in these elements are simply due to the combined CN and ON element re-shufflings. The NeNa cycle: Sodium abundances also vary widely among globular cluster giants (–0.3 $``$ \[Na/Fe\] $``$ +0.4). The same globular cluster giants that have low O abundances almost invariably have high Na abundances; an anticorrelation between these abundances apparently occurs in all lower metallicity clusters (\[Fe/H\] $`<`$ –1) studied to date. This anticorrelation suggests that the NeNa proton-fusion cycle, which can work efficiently at the same temperatures as does the ON cycle, has at some time in globular cluster histories converted Ne (undetectable in cluster giant spectra) into Na. The MgAl cycle: Aluminum abundances also have large star-to-star variations that are anticorrelated with O abundances, and in some well-studied clusters the anticorrelation extends also to Mg abundances. Again, proton-capture fusion leading to Mg$``$Al conversion is the probable culprit (Shetrone 1996), but the burning temperature requirements (T $``$ 70$`\times `$10<sup>6</sup> K) are large enough that it is difficult to imagine low mass globular cluster giants performing the MgAl cycle, unless such such transmutations occur as the result of a thermal instability of the H or He shell source (Langer et al. 1997, Powell 1999). Alternatively, stars with abnormally large Al abundances might either have been born with them, created in previous higher mass stars, or have accreted them from the winds of higher mass AGB stars. Other Nucleosynthesis effects: Some significant cluster-to-cluster abundance differences are seen in heavier elements that cannot be altered in proton-capture synthesis reactions. 
For example, the very heavy elements Ba, La, and Eu can have very different abundance ratios in different clusters, indicating varying contributions of slow and rapid neutron-capture synthesis reactions to the creation of these elements. Among the elements that participate in the major nuclear fusion chains, silicon should only be altered during the last stages of very high mass stars. But its mean abundance varies from cluster to cluster; some globular clusters have Si abundances nearly a factor of two larger than those of typical halo field giants. And in addition to the star-to-star variations of Al abundances within individual clusters, the Al mean abundance level also differs substantially from cluster to cluster. All of these abundance anomalies point to nucleosynthesis contributions of multiple generations of stars in a given cluster, either from stars that died before the present stars were born or during their formation. Also, the relatively small numbers of high mass stars that must have existed in or preceded formation of each cluster probably produced supernovae of different masses in each cluster, creating distinct “initial” abundance distributions in each cluster. ## 3 CN Bandstrengths in RGB and AGB Stars of NGC 6752 Perhaps the first suggestion that AGB and RGB stars in some clusters might on average have different compositions was made by Norris et al. (1981). In a large-sample study of CN bandstrengths among giants of NGC 6752, they found that there is a bi-modal distribution of CN bandstrengths that is nearly independent of RGB position. But they suggested that there is a nearly uni-modal set of CN bandstrengths among the AGB stars: their CN bands are almost all weak. Norris et al. presented this situation in their Figure 3, plotting the CN absorption index S(3839) as a function of V magnitude and B–V colour. We have used the formula developed by Norris et al. to convert S(3839) to a CN bandstrength indicator that is independent of stellar temperature/gravity effects, and in Figure 1 we show “boxplots” that illustrate the ranges in CN bandstrength found in RGB and AGB stars. The lower CN strengths of the AGB stars on average are obvious, but just as important is the near total lack of any CN-strong AGB stars in this cluster. For comparison, we also show similar data for two other clusters, M4 and M13. The M4 CN bandstrength data are taken from either Norris (1981) or Suntzeff and Smith (1991), or the mean of both, where the variation of S(3839) with position in the c-m diagram has been removed according to Norris’ formula. The evolutionary status of the stars is that determined by Ivans et al. in their H-R diagram of Figure 12, which illustrates the reddening-free positions of the stars. For the M13 data, we referred to Suntzeff (1981), where we converted the photometric $`m`$(CN) indices to relative photometric bandstrengths $`\delta `$$`m`$(CN) using Suntzeff’s suggested relationship of the lower limit of $`m`$(CN) to B–V colour index. We further transformed the $`\delta `$$`m`$(CN) values to $`\delta `$S(3839) relative bandstrengths employing the relationship we derived for $`\delta `$$`m`$(CN) and $`\delta `$S(3839) found using stars in common between the studies of NGC 6752 stars by Langer et al. (1992), who used $`m`$(CN), and Norris et al. (1981) who used S(3839). 
The evolutionary status of the M13 stars were those determined by Suntzeff (1981) and, in the cases where the photometry made the status ambiguous, we supplemented the information using the stars in common studied by Pilachowski et al. (1996b). Thus, the distributions shown in Figure 1 are all, in effect, on the $`\delta `$S(3839) system of Norris et al. (1981) and only include the stars for which AGB vs RGB designations are unambiguous. Norris et al. (1981) offered two possible explanations for the relatively weak CN bandstrengths in NGC 6752 AGB stars; both explanations involve an inability of the strong-CN RGB stars to ascend the giant branch a second time after HB evolution. In one scenario, some cluster stars would have been born with abnormally large C and/or N abundances, accompanied by larger-than-average He/H ratios (presumably from the CN and/or ON cycles). Stellar evolution computations (e.g., Lee et al. 1994, and references therein) have shown that RGB stars with higher He contents will, after they undergo the He flash, take up residence in bluer parts of the HB than do otherwise identical stars with lower He contents. In fact, these stars may arrive at such a blue HB position that they may eventually evolve directly to the white dwarf track, entirely avoiding the AGB stage. In the other scenario, larger internal mixing in CN-strong stars during RGB evolution might drive large amounts of mass loss, leading to lower-than-average envelope masses after the He flash. Again, such stars would wind up on the bluer end of the HB, possibly never to return as AGB stars. Thus the stars observed on the AGB of NGC 6752 may have weak CN bands because they are the former RGB stars that had little mixing of CN cycle products (N and He) into their envelopes; they are the ones that survived the HB stage to rise again toward the giant branch tip. This latter hypothesis can be tested by comparing abundances of light proton-capture elements (C, N, O, Na, Mg, Al) in AGB and RGB stars of NGC 6752. Unfortunately, the extant high resolution spectroscopic studies of NGC 6752 giants (Gratton 1987, Norris & Da Costa 1995, Minniti et al. 1996, Shetrone 1998) were only able to include the brightest stars near the RGB tip, where the distinction between RGB and AGB stars cannot be made. ## 4 Sodium Abundance Variations in M13 Giants Pilachowski et al. (1996b) derived sodium abundances for 130 giants in M13; their program stars ranged from those at the RGB tip to ones about as faint as the HB. They showed that Na abundances of most M13 giants are greater than those of similar-metallicity halo field stars, but there are some significant differences between RGB and AGB stars in this cluster. We illustrate this situation in Figure 2 with another boxplot, in which we compare \[Na/Fe\] ratios for lower luminosity M13 RGB stars (those with log $`g`$ $`>`$ 1), RGB tip stars (log $`g`$ $`<`$ 1), and AGB stars, along with \[Na/Fe\] ratios for field stars in the metallicity range –1.2 $`>`$ \[Fe/H\] $`>`$ –1.9 (Pilachowski et al. 1996a). The higher Na abundances of M13 giants is obvious in this figure, but it is also clear that the AGB stars have lower mean Na abundances than do the RGB tip stars, and that they have a narrower range in Na. It is possible that the Na abundances for the very cool RGB tip stars must be corrected downward somewhat to correct for departures from LTE (e.g., Gratton et al. 1999). 
But oxygen abundances in cluster giants are always determined from the \[O I\] transitions, which do not suffer substantial departures from LTE. And in M13 not only do the RGB stars exhibit on average the largest Na abundances but they also have the lowest O abundances (Kraft et al. 1997). Therefore the difference between the mean levels of Na in AGB and RGB stars in M13 is probably real. Pilachowski et al. (1996) followed a line of reasoning similar to that of Norris et al. (1981) in supposing that the presently observed AGB stars in M13 are those whose envelope He contents remained relatively low when they were RGB stars; the RGB stars with elevated He took up residence on the blue part of the HB and never arrived on the AGB. However, the Pilachowski et al. scenario differed from that of Norris et al. in one important respect: the RGB stars with elevated He were those that had contaminated their atmospheres with material that had been processed through the CNO hydrogen-burning shell, in accordance with the deep mixing scenario and nuclear transmutation calculations of Langer et al. (1993), Langer & Hoffman (1995) and Cavallo et al. (1998). Sweigart (1997a,b) showed that such “deep mixed” stars could indeed be moved sharply to the blue in their subsequent evolution onto the HB, largely as a result of increased mass loss prior to the helium core flash. Pilachowski et al. also noted that since M13 has the most extreme cases of Na and Al enhancements and O depletions among RGB stars, it also probably has a higher percentage of high-He stars than other globular clusters. If so, then the AGB of M13 ought to be relatively unpopulated. This view is supported by the statistics of Caputo et al. (1978) and Buzzoni et al. (1983), from which one finds that M13 has the lowest ratio (by a factor of $`\sim `$2) of AGB to RGB stars among the 16 clusters studied. ## 5 Some Additional Comments M13 and NGC 6752 represent the clearest cases for chemical composition differences between AGB and RGB stars in globular clusters. But truth in advertising compels us to admit that the situation is probably far more complex than we have suggested so far. Smith & Norris (1993) suggested that the AGB stars of M5 have a different CN bandstrength distribution: “… the observations reported in this paper yield no consistent picture of the CN distributions among stars in more advanced stages of evolution. The asymptotic giant branch appears to be deficient in CN-weak stars for M5, but deficient in CN-strong stars for NGC 6752.” Consideration of these differences has been made possible by the existence of very large bandstrength or abundance samples in these two clusters. Unfortunately, most other globulars have not been studied in sufficient detail to assess the chemical compositions of AGB stars. In their study of a large number of bright giants in M4, Ivans et al. (1999) found some of the same correlated variations in proton-capture elements that have been seen in other clusters. Their data were most extensive for the determination of oxygen abundances, and they concluded that the mean oxygen abundance of M4 AGB stars is slightly larger than that of the RGB stars. This provides mild further support for the suggestion that AGB stars in globular clusters are on average less chemically evolved in the proton-capture elements than are RGB stars. This problem cannot be effectively dealt with until stellar samples in many globular clusters include at least 10 AGB stars, as well as many more RGB stars over a large luminosity range. 
Enough detailed high resolution, large wavelength coverage spectroscopic studies of individual stars in selected globulars have now been carried out to make it clear that the proton-capture phenomenon is “universal”. Thus in addition to the continued full-scale abundance analyses of the brightest cluster members, it will be especially fruitful to now survey cluster giant branches with multi-object spectrometers (in the manner of Pilachowski et al. 1996b) that concentrate on fairly complete descriptions of the abundance trends of just one or two elements that will stand as surrogates for the behaviour of the whole set of proton-capture elements. ## Acknowledgements We thank Raffaele Gratton for helpful discussions on this work. This research was supported by NSF grants AST-9217970 to RPK and AST-9618364 to CS. Travel support given by the Rome Observatory to CS is gratefully acknowledged.
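The comparisons behind Figures 1 and 2 reduce to grouping a bandstrength or abundance measurement by evolutionary class and summarizing each group's distribution. A minimal sketch of that bookkeeping, using made-up $`\delta `$S(3839) values rather than the published measurements, could be:

```python
import numpy as np

# Made-up delta S(3839) values standing in for Figure 1 style measurements.
rng = np.random.default_rng(1)
rgb = np.concatenate([rng.normal(0.05, 0.05, 30),    # CN-weak RGB group
                      rng.normal(0.45, 0.08, 25)])   # CN-strong RGB group
agb = rng.normal(0.05, 0.06, 12)                     # mostly CN-weak AGB stars

def five_number_summary(x):
    """Minimum, quartiles and maximum, i.e. the quantities a boxplot displays."""
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return x.min(), q1, med, q3, x.max()

for name, sample in (("RGB", rgb), ("AGB", agb)):
    lo, q1, med, q3, hi = five_number_summary(sample)
    print(f"{name}: min={lo:+.2f}  Q1={q1:+.2f}  median={med:+.2f}  "
          f"Q3={q3:+.2f}  max={hi:+.2f}")
```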
# The 9.7 Micron Silicate Dust Absorption Toward the Cygnus A Nucleus and the Inferred Location of the Obscuring Dust Data presented here were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. ## 1 Introduction A wide range of the observed properties of active galactic nuclei (AGNs) can be explained by differing viewing angles toward a dusty torus with an inner radius of $`<`$10 pc around an accreting supermassive blackhole (Antonucci 1993). However, an important question about AGNs that has not yet been answered is “how common are type 2 quasars?” where we define type 2 quasars as highly luminous AGNs (bolometric luminosity of $`>`$10$`{}_{}{}^{12}L_{}^{}`$, or 2–10 keV hard X-ray luminosity of $`>`$10<sup>44</sup> ergs s<sup>-1</sup>) <sup>1</sup><sup>1</sup>1 We adopt $`H_0`$ = 75 km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0`$ = 0.5 throughout this paper. that are highly obscured ($`A_\mathrm{V}`$ $`>`$ 50 mag) by a dusty torus. To explain a significant fraction of the huge infrared luminosity of ultra-luminous infrared galaxies ($`L_{\mathrm{IR}}`$ $`>`$ 10$`{}_{}{}^{12}L_{}^{}`$; Sanders & Mirabel 1996) by dust emission powered by obscured AGN activity, the presence of such type 2 quasars is required. However, it has been suggested that dust around a highly luminous AGN is quickly expelled by strong radiation pressure and outflow activities, and therefore virtually no type 2 quasars exist (Halpern, Turner, & George 1999). One way to search for a type 2 quasar is to find an X-ray source with a large hard X-ray to soft X-ray flux ratio (i.e., an indication of soft X-ray attenuation by absorption), and then perform optical follow-up spectroscopy to derive the redshift, and hence estimate the intrinsic hard X-ray luminosity (e.g., Sakano et al. 1998). If both the intrinsic 2–10 keV hard X-ray luminosity and X-ray absorption are estimated to be high (e.g., $`L_\mathrm{X}`$(2–10 keV) $`>`$ 10<sup>44</sup> ergs s<sup>-1</sup> and $`N_\mathrm{H}`$ $`>`$ 10<sup>23</sup> cm<sup>-2</sup>), the source is a candidate type 2 quasar. However, X-ray absorption ($`N_\mathrm{H}`$) is caused both by gas and dust, and the $`N_\mathrm{H}`$/$`A_\mathrm{V}`$ ratio toward an AGN is often higher than that of the Galaxy ($`N_\mathrm{H}`$/$`A_\mathrm{V}`$ = 1.8 $`\times `$ 10<sup>21</sup> cm<sup>-2</sup> mag<sup>-1</sup>; Predehl & Schmitt 1995), as much as 10 times higher in some cases (Alonso-Herrero, Ward, & Kotilainen 1997; Simpson 1998). Hence, the estimation of dust obscuration ($`A_\mathrm{V}`$) from hard X-ray absorption is highly uncertain. If the optical spectrum has sufficient spectral resolution and sensitivity, we can investigate dust obscuration by looking for the absence of a broad ($`>`$ 2000 km s<sup>-1</sup> in full width at half-maximum; FWHM) component of the rest-frame optical hydrogen emission lines (e.g., Ohta et al. 1996; Georgantopoulos et al. 1999). Its absence, however, indicates only that dust obscuration is at least several mag in $`A_\mathrm{V}`$. Such a small amount of dust is insufficient to explain the huge amount of infrared emission by dust heated by AGN activity. 
Therefore, the question as to whether there are type 2 quasars that can explain the huge infrared luminosity by dust heated by obscured AGN activity cannot be answered explicitly by this method. Study of thermal infrared regions (3–30 $`\mu `$m) is a powerful tool in the search for such a type 2 quasar. Firstly, since thermal infrared emission is visible through high dust obscuration, as high as $`A_\mathrm{V}`$ $`>`$ 50 mag, we can find a sample of highly luminous and highly dust obscured ($`A_\mathrm{V}`$ $`>`$ 50 mag) AGNs. Secondly, we can examine the location of the obscuring dust. Obscuration can be either by dust in a dusty torus in the vicinity of a central engine ($`<`$10 pc in inner radius), or by dust in the host galaxy ($`>`$ a few 100 pc scale). Needless to say, dust must be present in the location of the former in order for it to be heated by AGN activity. We can distinguish between these two possibilities by looking for the presence of a temperature gradient in the obscuring dust. If a dusty torus is responsible for the obscuration, a temperature gradient is predicted to occur, with the temperature of the dust decreasing with increasing distance from the central engine (Pier & Krolik 1992). The temperature of the innermost dust is expected to be $``$1000 K, close to the dust sublimation temperature. Since emission at 3 $`\mu `$m is dominated by dust at $``$1000 K, the extinction estimated using the $``$3 $`\mu `$m data should reflect the value toward the innermost dust around the central engine. On the other hand, the extinction estimated using $``$10 $`\mu `$m data should be lower, because the dust at $``$300 K, a dominant emission source at $``$10 $`\mu `$m, is located further out than the $``$1000 K dust, and thus the $``$10 $`\mu `$m data can only trace the extinction toward the outer region. In fact, if there is a temperature gradient, the optical depth of the 9.7 $`\mu `$m silicate dust absorption is predicted to be lower than the actual column density toward the central engine (Pier & Krolik 1992). Such a temperature gradient is not thought to occur in the $`>`$ a few 100 pc scale dust in the host galaxy. Hence, comparing extinction estimates at $``$3 $`\mu `$m and $``$10 $`\mu `$m can provide useful information on the location of obscuring dust. Cygnus A (3C 405; $`z`$ $`=`$ 0.056) has a highly luminous radio-loud AGN with a bolometric luminosity of $`>`$10<sup>45</sup> ergs s<sup>-1</sup> (Stockton, Ridgway, & Lilly 1994) and with an extinction-corrected 2–10 keV hard X-ray luminosity of 1–5 $`\times `$ 10<sup>44</sup> ergs s<sup>-1</sup> (Ueno et al. 1994; Sambruna, Eracleous, & Mushotzky 1999). The dust extinction toward the background $`L`$-band ($``$3.5 $`\mu `$m) emission region of the nucleus is estimated to be $`A_\mathrm{V}`$ $``$ 150 mag, based on the comparison of the observed $`L`$-band luminosity to the predictions from the optical \[OIII\] emission line and the extinction-corrected 2–10 keV hard X-ray luminosities (Ward 1996). Both results suggest that Cygnus A is a candidate type 2 quasar, but a large scale central dust lane (e.g., Thompson 1984) rather than a dusty torus could be responsible for the high dust extinction. Although the 8–13 $`\mu `$m spectrum of Cygnus A has been presented by Ward (1996), neither the presence of the 9.7 $`\mu `$m silicate dust absorption feature nor its optical depth are clear due to limited signal-to-noise ratios. 
We conducted much more sensitive 8–13 $`\mu `$m spectroscopy to estimate the optical depth of the 9.7 $`\mu `$m silicate dust absorption feature, thereby to investigate the location of the obscuring dust. ## 2 Observation and Data Analysis The 8–13 $`\mu `$m spectroscopy was conducted on the night of 1999, August 21 (UT) at the Keck I Telescope using the Long Wavelength Spectrometer (LWS; Jones & Puetter 1993) under photometric sky conditions. The seeing measured from a star was $`0\stackrel{}{\mathrm{.}}5`$ in FWHM. The LWS used a 128 $`\times `$ 128 Si:As array. A low-resolution grating was used with a $`0\stackrel{}{\mathrm{.}}5`$ wide slit and with an N-wide filter (8.1–13 $`\mu `$m). The resulting spectral resolution was $``$50. We utilized a “chop and nod” technique (e.g., Miyata et al. 1999) to cancel the first order gradient of the sky emission variation and the difference in background signal between different chopping beams. The frame rate was 20 Hz. Following the measurements at Mauna Kea by Miyata et al. (1999), the chopping and nodding frequencies were set to 5 Hz and 1/30 Hz, respectively, to achieve a background-limited sensitivity. In the actual data, a background-limited sensitivity may not have been achieved, since stripe-like noise patterns were recognizable in the array. Since the emission region at 10 $`\mu `$m of Cygnus A and the standard star Vega (=$`\alpha `$ Lyrae, =HR7001) were spatially unresolved, $`0\stackrel{}{\mathrm{.}}5`$ in FWHM, we set the chopping amplitude at 3<sup>′′</sup> so as to maximize the observing efficiency by placing the objects on the array all the time in both the chopping and the nodding beams. The total on-source integration time of Cygnus A was 1650 sec. Vega was observed just before the Cygnus A observation, with an air mass difference less than 0.1, and was used as a spectroscopic standard star. We followed a standard data analyzing procedure, using IRAF <sup>2</sup><sup>2</sup>2 IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc. (AURA), under cooperative agreement with the National Science Foundation.. We first defined high dark current pixels and low-sensitivity pixels using, respectively, the dark and blackbody frames taken just after the Cygnus A observation. We replaced the data of these pixels with the interpolated signals of the surrounding pixels. The slit positions were nearly the same for Cygnus A and Vega, but the signal positions on the slit were slightly different. We corrected for the pixel variation of quantum efficiency along the slit by using the blackbody frame, whose flux was uniform along the slit. We extracted the spectra of the target and the standard star using an optimal extraction algorithm. The wavelengths were calibrated using sky lines. The Earth’s atmospheric transmission shows small and narrow dips at 11.7 $`\mu `$m and 12.6 $`\mu `$m (Tokunaga 1998) that are easily discernible in raw images as local maxima of sky background emission. We examined the positions of these two local maxima and confirmed that the wavelength per pixel was 0.043 arcsec pixel<sup>-1</sup>, consistent with the designed value. We therefore calibrated the wavelengths by assuming a linear relationship between wavelength and pixel of 0.043 arcsec pixel<sup>-1</sup> throughout the detector. In the wavelength-calibrated data, a broad local maximum of sky background emission is found at 9.3–9.7 $`\mu `$m. 
This 9.3–9.7 $`\mu `$m range corresponds to the broad local minimum of the Earth’s atmospheric transmission (Tokunaga 1998). This wavelength calibration is believed to be accurate to within 0.05 $`\mu `$m. Although Vega is known to show an infrared excess at $`>`$20 $`\mu `$m, the excess is not appreciable at $`<`$15 $`\mu `$m (Heinrichsen, Walker, & Klaas 1998). We divided the signals of Cygnus A by those of Vega and multiplied the result by a blackbody profile of 9400 K (Cohen et al. 1992). In our spectroscopy, because the slit width was comparable to the seeing, some signal was possibly lost due to slight tracking errors; thus a large uncertainty in the flux calibration may be introduced. We calibrated the flux of Cygnus A in such a way that our spectrum between 8.1 $`\mu `$m and 13 $`\mu `$m agreed with the $`N`$-band photometric data ($`N`$ = 0.18 Jy or 5.7 mag; Rieke & Low 1972; Heckman et al. 1983). The flux measurement from our data agrees with this value to within a factor of $`\sim `$2.
## 3 Results
The flux-calibrated spectrum of Cygnus A is shown in Figure 1. A broad absorption-like feature is seen at a peak wavelength of $`\sim `$10 $`\mu `$m. This wavelength is consistent with the peak wavelength of the 9.7 $`\mu `$m silicate dust absorption feature redshifted to z = 0.056. Hence, the broad absorption-like feature is likely to originate from silicate dust absorption. It has been suggested that, if a galaxy is powered strongly by star formation activity and the polycyclic aromatic hydrocarbon (PAH) emission features at 7.7 $`\mu `$m, 8.6 $`\mu `$m, and 11.3 $`\mu `$m are strong, then the mid-infrared spectra can mimic those with 9.7 $`\mu `$m silicate dust absorption (Genzel et al. 1998). In the case of Cygnus A, however, the extinction-corrected 2–10 keV hard X-ray luminosity ($`L_\mathrm{X}`$(2–10 keV) $`=`$ 1–5 $`\times `$ 10<sup>44</sup> ergs s<sup>-1</sup>) relative to the 40–500 $`\mu `$m far-infrared luminosity ($`L_{\mathrm{FIR}}`$ $`=`$ 0.7–1.6 $`\times `$ 10<sup>45</sup> ergs s<sup>-1</sup>) <sup>3</sup><sup>3</sup>3 We use the formula $`L_{\mathrm{FIR}}`$ = 2.1 $`\times `$ 10<sup>39</sup> $`\times `$ D(Mpc)<sup>2</sup> $`\times `$ (2.58 $`\times `$ $`f_{60}`$ \+ $`f_{100}`$), where $`f_{60}`$ and $`f_{100}`$ are, respectively, the IRAS 60 $`\mu `$m and 100 $`\mu `$m fluxes in Jy (Sanders & Mirabel 1996). The $`f_{60}`$ and $`f_{100}`$ fluxes of Cygnus A are 2.329 Jy and $`<`$8.278 Jy, respectively. is $`\gtrsim `$0.1, as high as in galaxies powered predominantly by AGN activity. Furthermore, the wavelength coverage of our spectrum (8.1–13.0 $`\mu `$m) is 7.7–12.3 $`\mu `$m in the rest-frame and hence covers the 11.3 $`\mu `$m PAH emission feature (10.9–11.6 $`\mu `$m in the rest-frame; Rigopoulou et al. 1999). We find no detectable 11.3 $`\mu `$m PAH emission feature at 11.9 $`\mu `$m in the observed frame ($`<`$6 $`\times `$ 10<sup>-17</sup> W m<sup>-2</sup> in flux or $`<`$4 $`\times `$ 10<sup>41</sup> ergs s<sup>-1</sup> in luminosity). The 11.3 $`\mu `$m PAH to far-infrared 40–500 $`\mu `$m luminosity ratio is $`<`$6$`\times `$10<sup>-4</sup>, more than an order of magnitude smaller than that found for galaxies powered by star formation activity (0.009$`\pm `$0.003; Smith, Aitken, & Roche 1989). Hence, it is very unlikely that the short-wavelength side of our mid-infrared spectrum is dominated by PAH emission features from star formation activity. We therefore ascribe the broad absorption-like feature fully to 9.7 $`\mu `$m silicate dust absorption.
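The luminosity ratios quoted above follow directly from the numbers given in the text and in footnote 3; the short check below simply reproduces them. The luminosity distance D ≈ 230 Mpc is an assumption made here for illustration (the value actually adopted by the authors is not stated in this excerpt).

```python
# Arithmetic check of the luminosity ratios quoted in the Results section.
D = 230.0                                  # Mpc, illustrative luminosity distance for z = 0.056
f60, f100 = 2.329, 8.278                   # IRAS fluxes in Jy; f100 is an upper limit

def l_fir(f100_val):
    """Sanders & Mirabel (1996) FIR luminosity in erg/s (footnote 3)."""
    return 2.1e39 * D**2 * (2.58 * f60 + f100_val)

L_fir_lo, L_fir_hi = l_fir(0.0), l_fir(f100)
print("L_FIR   ~ %.1e - %.1e erg/s   (text: 0.7-1.6e45)" % (L_fir_lo, L_fir_hi))

L_x_lo, L_x_hi = 1e44, 5e44                # extinction-corrected 2-10 keV luminosity
print("L_X/L_FIR ~ %.2f - %.2f           (text: of order 0.1 or more, AGN-like)"
      % (L_x_lo / L_fir_hi, L_x_hi / L_fir_lo))

L_pah = 4e41                               # 11.3 um PAH upper limit, erg/s
print("L_PAH/L_FIR < %.0e                (star-forming galaxies: ~9e-3)"
      % (L_pah / L_fir_lo))
```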
## 4 Discussion ### 4.1 The Small Optical Depth of Silicate Dust Absorption Since the Galactic dust extinction toward the Cygnus A nucleus is estimated to be small, $`A_\mathrm{V}`$ $``$ 1 mag (Spinrad & Stauffer 1982; van den Bergh 1976), the 9.7 $`\mu `$m silicate dust absorption is believed to be attributed to dust in the Cygnus A galaxy. If we adopt the ratio of visual extinction to the optical depth of the 9.7 $`\mu `$m silicate dust absorption found in the Galactic interstellar medium ($`A_\mathrm{V}`$/$`\tau _{9.7}`$ $`=`$ 9–19; Roche & Aitken 1985), a dust extinction of $`A_\mathrm{V}`$ $``$ 150 mag should provide $`\tau _{9.7}`$ = 7.9–16.6. In this case, the flux at the absorption peak should be attenuated by a factor of 2.7$`\times `$10<sup>3</sup> – 1.6$`\times `$10<sup>7</sup>, and hence should be saturated in the spectrum. The observed spectrum, however, shows no such saturation at all. If we use the following formula by Aitken & Jones (1973), $`\tau _{9.7}`$ $`=`$ ln \[$`\frac{F_\lambda (8)+F_\lambda (13)}{2\times F_\lambda (9.7)}`$\] (rest-frame), then $`\tau _{9.7}`$ is only $``$1, much smaller than that expected from $`A_\mathrm{V}`$ $``$ 150 mag. This is the predicted trend when obscuring dust exists so close to a central engine that a temperature gradient occurs. Before pursuing this possibility in more detail, we review another possibility that could explain the small $`\tau _{9.7}`$. Interstellar dust consists mainly of silicate and carbonaceous dust (Mathis, Rumpl, & Nordsieck 1977; Mathis & Whiffen 1989). If the contribution of silicate dust to $`A_\mathrm{V}`$ is much smaller than that in the Galactic interstellar medium, $`\tau _{9.7}`$ could be small even in the case of high $`A_\mathrm{V}`$. However, since carbonaceous dust is more fragile than silicate dust (Draine & Salpeter 1979), carbonaceous dust should have been more destroyed than silicate dust around the Cygnus A nucleus, where a radiation field is expected to be much stronger and more energetic than that in the Galactic interstellar medium. The depletion of silicate dust relative to carbonaceous dust around Cygnus A is very unlikely. We therefore argue that a temperature gradient is the most likely explanation that could reproduce the observed small $`\tau _{9.7}`$. ### 4.2 Modeling In this subsection, we examine whether a temperature gradient can quantitatively explain the small $`\tau _{9.7}`$ in spite of a high $`A_\mathrm{V}`$ toward the background $`L`$-band emission region. In addition to our mid-infrared spectrum, we incorporate data points at 3–30 $`\mu `$m to constrain our model parameters. This is because dust emission powered by obscured AGN activity is strong at 3–30 $`\mu `$m and thus emission in this wavelength range can provide useful information on the dust distribution around a central engine. The photometric data at 3–30 $`\mu `$m used are summarized in Table 1. We do not incorporate data at $`<`$3 $`\mu `$m and at $`>`$30 $`\mu `$m, because stellar emission dominates the flux of Cygnus A at $`<`$3 $`\mu `$m (Djorgovski et al. 1991), while emission from cold dust in the host galaxy could contribute significantly at $`>`$30 $`\mu `$m. We estimate the contribution of (1) synchrotron emission and (2) stellar emission to the 3–30 $`\mu `$m emission of Cygnus A. 
Firstly, the spectral energy distribution of the Cygnus A nucleus (without hot-spot emission) shows a clear flux excess in the infrared region compared to the extrapolation of the synchrotron emission component at longer wavelengths (Haas et al. 1998). In fact, if we extrapolate the data at 0.45–2.0 mm using F<sub>ν</sub> $`\propto `$ $`\nu ^{0.6}`$ (Robson et al. 1998) and assume an extinction of $`A_\mathrm{V}`$ $`=`$ 150 mag toward the synchrotron emission region of the Cygnus A nucleus, then we find that the synchrotron emission component contributes less than 1/10 of the observed flux at 3–30 $`\mu `$m. Secondly, the contribution from stellar emission can be estimated from the near-infrared $`K`$-band (2.2 $`\mu `$m) photometric data ($`K`$ = 13.78$`\pm `$0.06 mag), which are dominated by stellar emission in the case of Cygnus A (Djorgovski et al. 1991). If we assume the $`K-L`$ and $`K-L^{\prime }`$ colors of late-type stellar populations in normal galaxies (0.2–0.4; Willner et al. 1984), the stellar contribution at $`L`$ and $`L^{\prime }`$ is 13.4–13.6 mag. This is less than $`\sim `$30% of the observed flux at $`L`$ and $`L^{\prime }`$. At wavelengths longer than $`L^{\prime }`$, the stellar contribution decreases and becomes negligible. As a consequence, the 3–30 $`\mu `$m data of Cygnus A should be dominated by dust emission powered by obscured AGN activity. We use the code DUSTY developed by Ivezic, Nenkova, & Elitzur (1999) to investigate whether the observed small $`\tau _{9.7}`$ as well as the spectral energy distribution at 3–30 $`\mu `$m can be quantitatively explained by emission from dust in the vicinity of a central AGN. The code DUSTY solves the radiative transfer equation for a source embedded in a spherically symmetric dusty envelope. It creates an output spectrum as the sum of the attenuated input radiation, dust emission, and scattered radiation. For Cygnus A, the presence of radio hot spots (Wright & Birkinshaw 1984), the detection of strong optical \[OIII\] emission (Osterbrock & Miller 1975), and centrosymmetric polarization patterns (Tadhunter, Scarrott, & Rolph 1990; Ogle et al. 1997) indicate that the dusty envelope is torus-like, with dust expelled, over an unknown solid angle, along an unknown direction roughly perpendicular to our line of sight. Hence, strictly speaking, the assumption of spherical symmetry is not valid. If we view the torus-like structure from a face-on direction, where the unattenuated hot ($`\sim `$1000 K) dust emission from the innermost envelope is seen directly, the output dust emission spectrum is expected to be quite different from the one from a spherical dusty envelope around a central engine, because all the hot dust emission is attenuated in the spherical model. However, it is most likely in the case of Cygnus A that we are viewing the torus from an almost edge-on direction, where no unattenuated emission from the innermost envelope is seen. Furthermore, because our aim is to explain the small $`\tau _{9.7}`$, the presence of a temperature gradient in the obscuring dust along our line of sight in front of a background emitting source is the most important factor. The geometry of the dust perpendicular to our line of sight is not crucial. Hence, for simplicity, we assume a spherically symmetric dust envelope. Thanks to the general scaling properties of the radiative transfer mechanism (Rowan-Robinson 1980; Ivezic & Elitzur 1997), there are only a few free parameters.
Firstly, we take the input radiation to be a blackbody with a temperature of 40,000 K, following Rowan-Robinson & Efstathiou (1993). This choice is not critical, as explained by Rowan-Robinson (1980). Next, we set the dust sublimation temperature to 1000 K, following Rowan-Robinson & Efstathiou (1993) and Dudley & Wynn-Williams (1997). Since, in this model, the extinction estimated using $`\sim `$3 $`\mu `$m data is believed to be nearly the same as the extinction toward the central engine ($`\mathrm{\S }`$ 1), we set the optical depth at 0.55 $`\mu `$m ($`\simeq `$ $`A_\mathrm{V}`$/1.08) toward the central engine to $`\tau `$ $`=`$ 140. We adopt the standard dust size distribution of Mathis et al. (1977) and the standard interstellar dust mixture as defined in DUSTY. The ratio of the outer to the inner radius of the dusty envelope is set as a free parameter. We only try a simple power-law radial density profile ($`\propto `$ r<sup>-γ</sup>), and the value of $`\gamma `$ is also set as a free parameter. When searching for parameters that can fit the observed data, we allow the model output spectrum and the observed data to differ by up to a factor of a few, particularly at shorter wavelengths. This is because the period over which the data in Table 1 were taken spans 10 years. Since emission at shorter wavelengths is from warm dust located in the inner part of the dusty envelope, variability over a time scale of 10 years could be significant. In Figure 2, we show the results of a calculation that provides a reasonable fit to the observed data, particularly at longer wavelengths. The outer-to-inner radius ratio is 200, and the power-law index of the dust radial distribution is 2.5. If we adopt a value of $`\sim `$10<sup>45</sup> ergs s<sup>-1</sup> as the luminosity of the central radiation source, the physical scale of the inner radius of the dusty envelope is 2.25 pc. An outer-to-inner radius ratio of 80–500 and a power-law index of 2.5–3.0 for the dust radial distribution produce an output spectrum similar to that shown in Figure 2. In summary, the observed small $`\tau _{9.7}`$ as well as the spectral energy distribution at 3–30 $`\mu `$m of Cygnus A can be quantitatively explained by a model in which the emission at 3–30 $`\mu `$m originates from thermal emission by dust in the vicinity of a central AGN. Combining this result with the huge AGN luminosity of Cygnus A ($`L_\mathrm{X}`$(2–10 keV) $`>`$ 10<sup>44</sup> ergs s<sup>-1</sup>), we argue that the observed data are consistent with the picture of Cygnus A being a type 2 quasar, that is, a highly luminous AGN that is highly obscured ($`A_\mathrm{V}`$ $`>`$ 50 mag) not by $`>`$ a few 100 pc scale dust in the host galaxy, but by a dusty torus with an inner radius of $`<`$10 pc.
### 4.3 Another type 2 quasar
Although we demonstrated that the observed data are consistent with the picture of Cygnus A being a type 2 quasar, Cygnus A is a radio-loud AGN (the minor AGN population) and is a cD galaxy at the center of a cluster of galaxies (Spinrad & Stauffer 1982). It may therefore be thought that Cygnus A is an unusual example. In this subsection, we briefly mention the emission properties of a radio-quiet AGN (the major AGN population), IRAS 08572+3915. IRAS 08572+3915 (z = 0.058) is one of the nearby ultra-luminous infrared galaxies ($`L_{\mathrm{IR}}`$ $`>`$ 10<sup>12</sup> $`L_{\odot }`$; Kim, Veilleux, & Sanders 1998). Its radio 20 cm to far-infrared 40–500 $`\mu `$m flux ratio is similar to that of radio-quiet AGNs (Crawford et al. 1996).
It displays a strong absorption-like feature at 10 $`\mu `$m (Dudley & Wynn-Williams 1997). Although the interpretation of the absorption-like feature at $`\sim `$10 $`\mu `$m is sometimes difficult ($`\mathrm{\S }`$ 3), this source displays a very strong 3.4 $`\mu `$m carbonaceous dust absorption feature and no detectable 3.3 $`\mu `$m PAH emission feature in the 3–4 $`\mu `$m spectrum (Wright et al. 1996), strongly suggesting that IRAS 08572+3915 is powered by a highly embedded AGN and not by star formation activity <sup>4</sup><sup>4</sup>4Dudley & Wynn-Williams (1997) argue that the 8–22 $`\mu `$m spectrum of Arp 220 and that of IRAS 08572+3915 share similar properties. However, another interpretation of the 10 $`\mu `$m spectrum of Arp 220 has been proposed by Genzel et al. (1998). Given the absence of a high-quality 3–4 $`\mu `$m spectrum of Arp 220, it is unknown which interpretation is correct. In this paper we tentatively regard only IRAS 08572+3915 as a convincing example of a type 2 quasar.. Hence, the strong absorption-like feature at $`\sim `$10 $`\mu `$m must be fully attributed to 9.7 $`\mu `$m silicate dust absorption. Based on the optical depth ratio between the 9.7 $`\mu `$m and 18 $`\mu `$m silicate dust absorption features, the presence of a temperature gradient in the obscuring dust around a very compact energy source (= AGN) has been argued (Dudley & Wynn-Williams 1997). If we adopt $`\tau _{9.7}`$ $`=`$ 5.2 (Dudley & Wynn-Williams 1997) and the Galactic optical depth ratio of the 3.4 $`\mu `$m carbonaceous to the 9.7 $`\mu `$m silicate dust absorption ($`\tau _{3.4}`$/$`\tau _{9.7}`$ $`=`$ 0.06–0.07; Pendleton et al. 1994; Roche & Aitken 1985), then $`\tau _{3.4}`$ $`\sim `$ 0.35 is expected. The observed $`\tau _{3.4}`$ is $`\sim `$ 0.9 (Pendleton 1996), more than a factor of 2 higher than expected. This means that the extinction estimate at $`\sim `$3 $`\mu `$m is higher than that at $`\sim `$10 $`\mu `$m, and thus supports the presence of a temperature gradient in the obscuring dust around the AGN. All these results are consistent with the picture of IRAS 08572+3915 being a radio-quiet type 2 quasar. IRAS 08572+3915 has a LINER-type optical spectrum (Kim et al. 1998). No broad ($`>`$ 2000 km s<sup>-1</sup> in FWHM) emission line has been detected in the 2 $`\mu `$m spectrum (Veilleux, Sanders, & Kim 1999). The detection of 2–10 keV hard X-ray flux by the Advanced Satellite for Cosmology and Astrophysics (ASCA) is at most marginal (based on our quick look into the ASCA archive). Namely, no sign of strong AGN activity has been detected at $`<`$2 $`\mu `$m or even at hard X-ray energies. The weak hard X-ray flux is not surprising, given the high obscuration toward the nucleus. The estimated $`A_\mathrm{V}`$ toward the 3 $`\mu `$m emission region (that is, toward the innermost part of the obscuring dust) from the observed $`\tau _{3.4}`$ is 130–220 mag, if we adopt the relation $`\tau _{3.4}`$/$`A_\mathrm{V}`$ $`=`$ 0.004–0.007 found in the Galactic interstellar medium (Pendleton et al. 1994). If the gas-to-dust ratio is more than a factor of five higher than the Galactic value ($`N_\mathrm{H}`$/$`A_\mathrm{V}`$ = 1.8 $`\times `$ 10<sup>21</sup> cm<sup>-2</sup> mag<sup>-1</sup>), as observed in some AGNs (see $`\mathrm{\S }`$ 1), then $`N_\mathrm{H}`$ is higher than 10<sup>24</sup> cm<sup>-2</sup>, and direct 2–10 keV hard X-ray emission is completely blocked. Unlike Cygnus A, the signs of a type 2 quasar in IRAS 08572+3915 were first recognized through study of the thermal infrared region.
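The extinction bookkeeping for IRAS 08572+3915 involves only the ratios quoted above; the short sketch below just chains them together (no new data are introduced, and the Galactic conversion factors are the ones cited in the text).

```python
# Chain of extinction estimates for IRAS 08572+3915, using only quoted ratios.
tau_97 = 5.2                                         # observed 9.7 um silicate depth
tau_34_expected = (0.06 * tau_97, 0.07 * tau_97)     # from Galactic tau_3.4/tau_9.7
print("expected tau_3.4 = %.2f - %.2f  (observed ~0.9)" % tau_34_expected)

tau_34_obs = 0.9
A_V = (tau_34_obs / 0.007, tau_34_obs / 0.004)       # mag, toward the 3 um region
print("A_V(3 um) = %.0f - %.0f mag" % A_V)

N_H = tuple(1.8e21 * av for av in A_V)               # cm^-2, Galactic gas-to-dust
print("N_H = %.1e - %.1e cm^-2; a factor ~5 higher gas-to-dust ratio pushes this"
      " above 1e24 cm^-2, enough to block direct 2-10 keV emission" % N_H)
```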
We suggest that other type 2 quasars that are not recognizable at $`<`$2 $`\mu `$m or in the hard X-ray region may also be found through detailed study of the thermal infrared region.
## 5 Summary
Our main results are the following.
1. We detected a 9.7 $`\mu `$m silicate dust absorption feature toward the Cygnus A nucleus.
2. The optical depth of the absorption feature ($`\tau _{9.7}`$ $`\sim `$ 1) is smaller by a large factor than that expected from $`A_\mathrm{V}`$ toward the background $`L`$-band emission region ($`\sim `$150 mag).
3. We demonstrated that the small optical depth, together with the spectral energy distribution at 3–30 $`\mu `$m, can be quantitatively explained by emission from a dusty envelope in the vicinity ($`<`$10 pc in inner radius) of a central AGN.
4. Combining this finding with the huge AGN luminosity of Cygnus A, we argued that the observed data are consistent with the picture of Cygnus A being a type 2 quasar.
We thank R. Campbell and T. Stickel for their support during the Keck observing run, Dr. C. C. Dudley for his useful comments on the manuscript, and L. Good for her proofreading of this paper. Drs. A. T. Tokunaga and H. Ando supported MI’s stay at the University of Hawaii. MI is financially supported by the Japan Society for the Promotion of Science during his stay at the University of Hawaii.
# UR-1599 ER/40685/944 December 1999 BFKL Monte Carlo for Dijet Production at Hadron Colliders
Presented at the Fermilab Run II Workshop, QCD and Weak Boson Physics, June 3–4, 1999.
## 1 MONTE CARLO APPROACH TO BFKL
Fixed-order QCD perturbation theory fails in some asymptotic regimes where large logarithms multiply the coupling constant. In those regimes resummation of the perturbation series to all orders is necessary to describe many high-energy processes. The Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation performs such a resummation for virtual and real soft gluon emissions in such processes as dijet production at large rapidity difference in hadron-hadron collisions. BFKL resummation gives a subprocess cross section that increases with rapidity difference as $`\widehat{\sigma }\sim \mathrm{exp}(\lambda \mathrm{\Delta })`$, where $`\mathrm{\Delta }`$ is the rapidity difference of the two jets with comparable transverse momenta $`p_{T1}`$ and $`p_{T2}`$. Experimental studies of these processes have recently begun at the Tevatron $`p\overline{p}`$ and HERA $`ep`$ colliders. Tests so far have been inconclusive; the data tend to lie between fixed-order QCD and analytic BFKL predictions. However the applicability of analytic BFKL solutions is limited by the fact that they implicitly contain integrations over arbitrary numbers of emitted gluons with arbitrarily large transverse momentum: there are no kinematic constraints included. Furthermore, the implicit sum over emitted gluons leaves only leading-order kinematics, including only the momenta of the ‘external’ particles. The absence of kinematic constraints and energy-momentum conservation cannot, of course, be reproduced in experiments. While the effects of such constraints are in principle sub-leading, in fact they can be substantial and should be included in predictions to be compared with experimental results. The solution is to unfold the implicit sum over gluons and to implement the result in a Monte Carlo event generator. This is achieved as follows. The BFKL equation contains separate integrals over real and virtual emitted gluons. We can reorganize the equation by combining the ‘unresolved’ real emissions — those with transverse momenta below some minimum value (chosen to be small compared to the momentum threshold for measured jets) — with the virtual emissions. Schematically, we have $$\int _{virtual}+\int _{real}=\int _{virtual+real,unres.}+\int _{real,res.}$$ (1) We perform the integration over virtual and unresolved real emissions analytically. The integral containing the resolvable real emissions is left explicit. We then solve by iteration, and we obtain a differential cross section that contains a sum over emitted gluons along with the appropriate phase space factors. In addition, we obtain an overall form factor due to virtual and unresolved emissions. The subprocess cross section is $$d\widehat{\sigma }=d\widehat{\sigma }_0\times \underset{n0}{\sum }f_n$$ (2) where $`f_n`$ is the iterated solution for $`n`$ real gluons emitted and contains the overall form factor. It is then straightforward to implement the result in a Monte Carlo event generator. Because emitted real (resolved) gluons appear explicitly, conservation of momentum and energy, as well as evaluation of parton distributions, is based on exact kinematics for each event. In addition, we include the running of the strong coupling constant. See for further details.
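The structure of Eq. (2) — an explicit sum over resolved emissions multiplied by a form factor — lends itself to a very simple toy illustration. The sketch below is emphatically not the generator described in the text: it uses a fixed coupling, assumes a Poisson multiplicity of resolved gluons with transverse momenta drawn flat in ln k_T, and ignores kinematic constraints, form-factor weights, and parton distributions; every numerical parameter is invented. Its only purpose is to show how explicit gluon emissions decorrelate the two leading jets in azimuth, the observable discussed in the next section.

```python
# Schematic toy of resolved-emission sampling and azimuthal decorrelation.
import numpy as np

def toy_event(delta, pT1=20.0, mu=5.0, alpha_s=0.17, kT_max=20.0, rng=np.random):
    """One toy dijet event at rapidity separation delta; returns cos(dphi)."""
    abar = 3.0 * alpha_s / np.pi
    nbar = abar * delta * np.log(kT_max**2 / mu**2)   # rough mean gluon multiplicity
    n = rng.poisson(nbar)                             # Poisson form is a toy assumption
    qx, qy = pT1, 0.0                                 # leading jet along +x
    for _ in range(n):
        kT = mu * (kT_max / mu) ** rng.random()       # flat in ln kT between mu and kT_max
        phi = 2.0 * np.pi * rng.random()
        qx, qy = qx + kT * np.cos(phi), qy + kT * np.sin(phi)
    # The second jet balances jet 1 plus all emitted gluons; in the convention used
    # in Section 2, cos(dphi) = 1 means exactly back-to-back jets.
    return qx / np.hypot(qx, qy)

rng = np.random.default_rng(0)
for delta in (1.0, 3.0, 5.0):
    vals = [toy_event(delta, rng=rng) for _ in range(2000)]
    print("Delta =", delta, "  <cos dphi> ~", round(float(np.mean(vals)), 3))
```

As the rapidity separation grows, more gluons are emitted on average and the mean cos(Δφ) drops below unity, which is the qualitative behavior shown in Figure 1.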
## 2 DIJET PRODUCTION AT HADRON COLLIDERS At hadron colliders, the BFKL increase in the dijet subprocess cross section with rapidity difference $`\mathrm{\Delta }`$ is unfortunately washed out by the falling parton distribution functions (pdfs). As a result, the BFKL prediction for the total cross section is simply a less steep falloff than obtained in fixed-order QCD, and tests of this prediction are sensitive to pdf uncertainties. A more robust prediction is obtained by noting that the emitted gluons give rise to a decorrelation in azimuth between the two leading jets. This decorrelation becomes stronger as $`\mathrm{\Delta }`$ increases and more gluons are emitted. At lowest order in QCD, in contrast, the jets are back-to-back in azimuth and the (subprocess) cross section is constant, independent of $`\mathrm{\Delta }`$. This azimuthal decorrelation is illustrated in Figure 1 for dijet production at the Tevatron $`p\overline{p}`$ collider, with center of mass energy 1.8 TeV and jet transverse momentum $`p_T>20\mathrm{GeV}`$. The azimuthal angle difference $`\mathrm{\Delta }\varphi `$ is defined such that $`\mathrm{cos}\mathrm{\Delta }\varphi =1`$ for back-to-back jets. The solid line shows the analytic BFKL prediction. The BFKL Monte Carlo prediction is shown as crosses. We see that the kinematic constraints result in a weaker decorrelation due to suppression of emitted gluons, and we obtain improved agreement with preliminary measurements by the DØ collaboration, shown as diamonds in the figure. In addition to studying the azimuthal decorrelation, one can look for the BFKL rise in dijet cross section with rapidity difference by considering ratios of cross sections at different center of mass energies at fixed $`\mathrm{\Delta }`$. The idea is to cancel the pdf dependence, leaving the pure BFKL effect. This turns out to be rather tricky, because the desired cancellations occur only at lowest order. Therefore we consider the ratio $$R_{12}=\frac{d\sigma (\sqrt{s}_1,\mathrm{\Delta }_1)}{d\sigma (\sqrt{s}_2,\mathrm{\Delta }_2)}$$ (3) with $`\mathrm{\Delta }_2`$ defined such that $`R_{12}=1`$ at lowest order in QCD. The result is shown in Figure 2, and we see that the kinematic constraints strongly affect the predicted behavior, not only quantitatively but sometimes qualitatively as well. More details can be found in . ## 3 CONCLUSIONS In summary, we have developed a BFKL Monte Carlo event generator that allows us to include subleading effects such as kinematic constraints and the running of $`\alpha _s`$. We have applied this Monte Carlo to dijet production at large rapidity separation at the Tevatron. We found that kinematic constraints, though nominally subleading, can be very important. In particular they lead to suppression of gluon emission, which in turn suppresses some of the behavior that is considered to be characteristic of BFKL physics. It is clear therefore that reliable BFKL tests can only be performed using predictions that incorporate kinematic constraints.
# THE ΔI=1/2 RULE AND ε′/ε IN THE CHIRAL QUARK MODEL
## Abstract
I discuss the role of the $`\mathrm{\Delta }I=1/2`$ selection rule in $`K\to \pi \pi `$ decays for the theoretical calculations of $`\epsilon ^{\prime }/\epsilon `$. Lacking reliable “first principle” calculations, phenomenological approaches may help in understanding correlations among different contributions and available experimental data. In particular, in the chiral quark model approach the same dynamics which underlies the $`\mathrm{\Delta }I=1/2`$ selection rule in kaon decays appears to enhance the $`K\to \pi \pi `$ matrix elements of the gluonic penguins, thus driving $`\epsilon ^{\prime }/\epsilon `$ into the range of the recent experimental measurements. The results announced this year by the KTeV and NA48 collaborations have marked a great experimental achievement, establishing, 35 years after the discovery of CP violation in the neutral kaon system, the existence of a much smaller violation acting directly in the decays. While the Standard Model (SM) of strong and electroweak interactions provides an economical and elegant understanding of indirect ($`\epsilon `$) and direct ($`\epsilon ^{\prime }`$) CP violation in terms of a single phase, the detailed calculation of the size of these effects implies mastering strong interactions at a scale where perturbative methods break down. In addition, CP violation in $`K\to \pi \pi `$ decays is the result of a destructive interference between two sets of contributions, which may inflate the uncertainties on the individual hadronic matrix elements of the effective four-quark operators by up to an order of magnitude. This makes predicting $`\epsilon ^{\prime }/\epsilon `$ a complex and subtle theoretical challenge. In Fig. 1 I summarize the comparison of the theoretical predictions available before the KTeV announcement early this year with the present experimental data. The gray horizontal band shows the one-sigma average of the old NA31 (CERN) and E731 (Fermilab) data and the new KTeV and NA48 results. The vertical lines show the ranges of the published theoretical $`predictions`$ (before February 1999), identified with the cities where most members of the groups reside. The range of the naive Vacuum Saturation Approximation (VSA) is shown for comparison. Considering the complexity of the problem, the theoretical calculations reported in Fig. 1 show a remarkable agreement, all of them pointing to a non-vanishing positive effect in the SM. On the other hand, if we focus our attention on the central values, the München (phenomenological $`1/N`$) and Rome (lattice) calculations definitely prefer the $`10^{-4}`$ regime, contrary to the Trieste result which is above $`10^{-3}`$. Without entering the details of the calculations, it is important to emphasize that the above-mentioned difference is mainly due to the different size of the hadronic matrix element of the gluonic penguin $`Q_6`$ obtained in the various approaches. While the München and Rome calculations assume for $`Q_6`$ values in the neighborhood of the leading $`1/N`$ result (naive factorization), the Trieste calculation, based on the effective Chiral Quark Model ($`\chi `$QM) and chiral expansion, finds a substantial enhancement of the $`I=0`$ $`K\to \pi \pi `$ amplitudes, which affects both current-current and penguin operators.
The bulk of such an enhancement can be simply understood in terms of chiral dynamics (final-state interactions) relating the $`\epsilon ^{\prime }/\epsilon `$ prediction to the phenomenological embedding of the $`\mathrm{\Delta }I=1/2`$ selection rule. The $`\mathrm{\Delta }I=1/2`$ selection rule in $`K\to \pi \pi `$ decays has been known for some 45 years; it expresses the experimental fact that kaons are about 400 times more likely to decay into the $`I=0`$ two-pion state than into the $`I=2`$ state. This rule is not justified by any general symmetry consideration and, although it is common understanding that its explanation must be rooted in the dynamics of strong interactions, there is to date no derivation of this effect from first-principles QCD. As summarized by Martinelli at this conference, lattice QCD cannot at present provide reliable calculations of the $`I=0`$ penguin operators relevant to $`\epsilon ^{\prime }/\epsilon `$, nor of the $`I=0`$ components of the hadronic matrix elements of the tree-level current-current operators (penguin contractions), which are relevant for the $`\mathrm{\Delta }I=1/2`$ selection rule. In the München approach the $`\mathrm{\Delta }I=1/2`$ rule is used in order to determine phenomenologically the matrix elements of $`Q_{1,2}`$ and, via operatorial relations, some of the matrix elements of the left-handed penguins. Unfortunately, the approach does not allow for a phenomenological determination of the matrix elements of the penguin operators which are most relevant for $`\epsilon ^{\prime }/\epsilon `$, namely the gluonic penguin $`Q_6`$ and the electroweak penguin $`Q_8`$. In the $`\chi `$QM approach, the hadronic matrix elements can be computed as an expansion in the external momenta in terms of three parameters: the constituent quark mass, the quark condensate and the gluon condensate. The Trieste group has computed the $`K\to \pi \pi `$ matrix elements of the $`\mathrm{\Delta }S=1,2`$ effective lagrangian up to $`O(p^4)`$ in the chiral expansion. Hadronic matrix elements and short distance Wilson coefficients are then matched at a scale of $`0.8`$ GeV as a reasonable compromise between the ranges of validity of perturbation theory and of the chiral lagrangian. By requiring the $`\mathrm{\Delta }I=1/2`$ rule to be reproduced within a 20% uncertainty one obtains a phenomenological determination of the three basic parameters of the model. This step is crucial in order to make the model predictive, since there is no a priori argument for the consistency of the matching procedure. As a matter of fact, all computed observables turn out to be very weakly scale (and renormalization scheme) dependent in a few hundred MeV range around the matching scale. Fig. 2 shows an anatomy of the various contributions which finally lead to the experimental value of the $`\mathrm{\Delta }I=1/2`$ selection rule. Point (1) represents the result obtained by neglecting QCD and taking the factorized matrix element for the tree-level operator $`Q_2`$, which is the leading electroweak contribution. The ratio $`A_0/A_2`$ is found equal to $`\sqrt{2}`$: far off the experimental point (8). Step (2) includes the effects of perturbative QCD renormalization on the operators $`Q_{1,2}`$. Step (3) shows the effect of including the gluonic penguin operators. Electroweak penguins are numerically negligible in the CP conserving amplitudes and are responsible for the very small shift in the $`A_2`$ direction. Perturbative QCD and factorization lead us from (1) to (4).
Non-factorizable gluon-condensate corrections, a crucial model-dependent effect entering at leading order in the chiral expansion, produce a substantial reduction of the $`A_2`$ amplitude (5), as first observed by Pich and de Rafael. Moving the analysis to $`O(p^4)`$, the chiral loop corrections, computed on the LO chiral lagrangian via dimensional regularization and minimal subtraction, lead us from (5) to (6), while the finite parts of the NLO counterterms calculated via the $`\chi `$QM approach lead us to point (7). Finally, step (8) represents the inclusion of $`\pi `$-$`\eta `$-$`\eta ^{\prime }`$ isospin breaking effects. This model-dependent anatomy shows the relevance of non-factorizable contributions and higher-order chiral corrections. The suggestion that chiral dynamics may be relevant to the understanding of the $`\mathrm{\Delta }I=1/2`$ selection rule goes back to the work of Bardeen, Buras and Gerard in the $`1/N`$ framework using a cutoff regularization. This approach has recently been revived and improved by the Dortmund group, with particular attention to the matching procedure. A pattern similar to that shown in Fig. 2 for the chiral loop corrections to $`A_0`$ and $`A_2`$ was previously obtained in an NLO chiral lagrangian analysis, using dimensional regularization, by Missimer, Kambor and Wyler. The $`\chi `$QM approach allows us to further investigate the relevance of chiral corrections for each of the effective quark operators of the $`\mathrm{\Delta }S=1`$ lagrangian. The NLO contributions to the electroweak penguin matrix elements have been thoroughly studied for the first time by the Trieste group. Fig. 3 shows the individual contributions to the CP conserving amplitude $`A_0`$ of the relevant operators, providing us with a finer anatomy of the NLO chiral corrections. From Fig. 3 we notice that, because of the chiral loop enhancement, the $`Q_6`$ contribution to $`A_0`$ is about 20% of the total amplitude. As we shall see, the $`O(p^4)`$ enhancement of the $`Q_6`$ matrix element is what drives $`\epsilon ^{\prime }/\epsilon `$ in the $`\chi `$QM to the $`10^{-3}`$ ballpark. A commonly used way of comparing the estimates of hadronic matrix elements in different approaches is via the so-called $`B`$ factors, which represent the ratio of the model matrix elements to the corresponding VSA values. However, care must be taken in the comparison of different models due to the scale dependence of the $`B`$’s and the values used by different groups for the parameters that enter the VSA expressions. An alternative pictorial and synthetic way of analyzing different outcomes for $`\epsilon ^{\prime }/\epsilon `$ is shown in Fig. 4, where a “comparative anatomy” of the Trieste and München estimates is presented. From the inspection of the various contributions it is apparent that the final difference in the central value of $`\epsilon ^{\prime }/\epsilon `$ is almost entirely due to the difference in the $`Q_6`$ component. The nature of the $`Q_6`$ enhancement is apparent in Fig. 5, where the various penguin contributions to $`\epsilon ^{\prime }/\epsilon `$ in the Trieste analysis are further separated into LO (dark histograms) and NLO components—chiral loops (gray histograms) and tree level counterterms (dark histograms). It is clear that chiral-loop dynamics plays a subleading role in the electroweak penguin sector ($`Q_{8}`$–$`Q_{10}`$) while enhancing by 60% the gluonic penguin ($`I=0`$) matrix elements.
As a consequence, the $`\chi `$QM analysis shows that the same dynamics that is relevant to the reproduction of the CP conserving $`A_0`$ amplitude (Fig. 3) is at work also in the CP violating sector (gluonic penguins). In order to ascertain whether these model features represent real QCD effects we must wait for future improvements in lattice calculations. Indications for such a dynamics arise also from $`1/N`$ calculations and recent studies of analytic properties of the $`K\to \pi \pi `$ amplitudes. As a matter of fact, one should expect in general an enhancement of $`\epsilon ^{\prime }/\epsilon `$, with respect to the naive VSA estimate, due to final-state interactions. In two-body decays, the $`I=0`$ final states feel an attractive interaction, of a sign opposite to that of the $`I=2`$ components. This feature is at the root of the enhancement of the $`I=0`$ amplitude over the $`I=2`$ one. Recent dispersive analyses of $`K\to \pi \pi `$ amplitudes show how a (partial) resummation of final state interactions increases substantially the size of the $`I=0`$ components, while slightly depleting the $`I=2`$ components. It is important to notice, however, that the size of the effect so derived is generally not enough to fully account for the $`\mathrm{\Delta }I=1/2`$ rule. Other non-factorizable contributions are needed, especially to reduce the large $`I=2`$ amplitude obtained from perturbative QCD and factorization. In the $`\chi `$QM approach the fit of the $`\mathrm{\Delta }I=1/2`$ rule is due to the interplay of FSI (at NLO) and non-factorizable soft gluonic corrections (at LO in the chiral expansion). It must be mentioned that the idea of a connection between the $`\mathrm{\Delta }I=1/2`$ selection rule and $`\epsilon ^{\prime }/\epsilon `$ goes back a long way, although at the GeV scale, where we can trust perturbative QCD, penguins are far from providing the dominant contribution to the CP conserving amplitudes. I conclude by summarizing the relevant remarks:
- $`I=2`$ amplitudes: (semi-)phenomenological approaches which fit the $`\mathrm{\Delta }I=1/2`$ selection rule in $`K\to \pi \pi `$ decays generally agree in the pattern and size of the $`\mathrm{\Delta }S=1`$ hadronic matrix elements with the existing lattice calculations.
- $`I=0`$ amplitudes: the $`\mathrm{\Delta }I=1/2`$ rule forces upon us large deviations from the naive VSA: $`B`$-factors of $`O(10)`$ for $`\langle Q_{1,2}\rangle _0`$ (lattice calculations presently suffer from large systematic uncertainties). In the $`\chi `$QM calculation, the fit of the CP conserving $`K\to \pi \pi `$ amplitudes feeds down to the penguin sectors, showing a substantial enhancement of the $`Q_6`$ matrix element, such that $`B_6/B_8^{(2)}\simeq 2`$. Similar indications stem from $`1/N`$ and dispersive approaches. Promising work is in progress on the lattice.
- Theoretical error: up to 40% of the present uncertainty in the $`\epsilon ^{\prime }/\epsilon `$ prediction arises from the uncertainty in the CKM elements $`\text{Im}(V_{ts}^{*}V_{td})`$, which is presently controlled by the $`\mathrm{\Delta }S=2`$ parameter $`B_K`$. A better determination of the unitarity triangle from B-physics is expected from the B-factories and hadronic colliders. From K-physics, $`K_L\to \pi ^0\nu \overline{\nu }`$ gives the cleanest “theoretical” determination of $`\text{Im}\lambda _t`$.
- New Physics: in spite of recent clever proposals (mainly SUSY), it is premature to invoke physics beyond the SM in order to fit $`\epsilon ^{\prime }/\epsilon `$.
A number of ungauged systematic uncertainties presently affect all theoretical estimates, and, most of all, every attempt to reproduce $`\epsilon ^{\prime }/\epsilon `$ must also address the puzzle of the $`\mathrm{\Delta }I=1/2`$ rule, which is hardly affected by short-distance physics. Is the “anomalously” large $`\epsilon ^{\prime }/\epsilon `$ the “penguin projection” of $`A_0/A_2\simeq 22`$?
# COMPARING THE EVOLUTION OF THE GALAXY DISK SIZES WITH CDM MODELS: THE HUBBLE DEEP FIELD
## 1 INTRODUCTION
The understanding of galaxy formation has recently undergone appreciable progress. Observationally, this is due to photometric and spectroscopic information in deep galaxy fields. A corresponding theoretical progress has been achieved by the development of semi-analytical approaches incorporating the gas cooling and star formation processes into the well-developed hierarchical clustering theories for dark matter (DM) halos; this makes it possible to relate the galaxy DM circular velocities to observable, luminous properties. In this context, the Tully-Fisher (TF) relation represents a typical test for the present theoretical models, as it relates the total luminosity of a disk galaxy to its halo circular velocity. However, from an observational point of view, the measurement of circular velocities is limited to bright nearby spirals and a few very bright galaxies at intermediate redshifts (e.g. Vogt et al. 1997). Extending the TF relation to fainter/distant spirals is essential to test the evolution in the $`L`$, $`z`$ plane of the $`M/L`$ ratio predicted by CDM theories. An alternative statistical approach to connect the luminous and dynamical properties of galaxy disks is based on the size-luminosity relation, once a specific model is assumed to connect size to circular velocity. This has the advantage of exploring the dynamical evolution over a wide range of luminosities and redshifts. In a previous paper (Poli et al. 1999, Paper I) we applied this novel approach to the ESO-NTT Deep Field (Arnouts et al. 1999) to derive morphological information for the galaxies in the field down to $`I=25`$ after appropriate seeing deconvolution. The derived intrinsic angular sizes were then converted into physical sizes adopting photometric redshifts for each galaxy in the catalog (Fontana et al. 1999a). The distribution of sizes in the $`L`$, $`z`$ plane was compared with predictions of CDM semi-analytic models of galaxy formation (e.g. Cole et al. 1994, Baugh et al. 1998) complemented with the specific size-velocity relation worked out by Mo, Mao, & White (1998) for rotationally supported disks. This analysis showed an excess of small-size low luminosity galaxies at small-intermediate redshifts. However, the sample magnitude limit did not allow an assessment of this excess at higher redshifts. Here we want to extend the study to higher $`z`$ and lower $`L`$ using the available data in the Hubble Deep Field North, where morphological information on the faint galaxy sample is available in the literature (Abraham et al. 1996; Odewahn et al. 1996; Driver et al. 1998; Marleau & Simard 1999). This will allow us to assess whether the excess observed at intermediate redshifts is an evolutionary effect or whether it is present also at higher $`z`$, indicating the presence of remarkable physical processes not included in the standard CDM models.
## 2 THE GALAXY CATALOG
The morphological information on the galaxies in the Hubble Deep Field North was obtained using the galaxy catalog by Marleau & Simard (1999), where the structural parameter values were derived from intensity profile fitting. Exponential profiles were assumed for the disk component and a final characteristic disk radius in arcsec was computed for all the galaxies up to $`I\simeq 26`$. The HDF-N galaxy catalog has been joined with the NTT Deep Field catalog used in Paper I, limited to $`I\le 25`$.
To each galaxy in the joined catalog, a photometric redshift estimate was assigned with the same best-fitting procedure applied in Paper I and in Giallongo et al. (1998). This was obtained through a comparison of the observed colors with those predicted by spectral synthesis models (Bruzual & Charlot 1996) including UV absorption by the intergalactic medium and dust reddening. The aperture magnitudes used to estimate the galaxy colors and the total I magnitude for each galaxy were extracted from the catalog by Fernandez-Soto et al. (1999). The catalog of the photometric redshifts is presented in Fontana et al. (1999a). The resulting typical redshift accuracy is $`\mathrm{\Delta }z\simeq 0.06`$ up to $`z\simeq 1.5`$ and $`\mathrm{\Delta }z\simeq 0.3`$ at larger redshifts. As derived in other fields of similar depth, the bulk of the galaxies is concentrated at intermediate redshifts $`z\simeq 0.5`$–1, with a tail in the distribution up to $`z\sim 6`$. In order to verify the effects of the background noise on the measured sizes of the faint galaxies in the HDF we have performed a set of simulations specifically designed to reproduce the typical conditions of real data. The same test has been performed in Paper I for the galaxies in the NTT Deep Field. The intensity profiles were reproduced as in the observed HDF images assuming the same pixel sampling. Assuming an intrinsic exponential profile, a series of synthetic images were constructed using different half-light radii ranging from $`r_{hl}=0.1`$” to $`r_{hl}=0.9`$” with a step of 0.1”. An average of 25 random objects were computed for each radius, assuming a total magnitude of $`I=26`$, which is the limiting magnitude of the morphological galaxy sample. The background-subtracted image of a bright star was selected in the field to reproduce the instrumental PSF. Its normalized profile was then convolved with the synthetic images of the disk galaxies. The convolved two-dimensional profiles were randomly inserted in regions of the HDF far from very bright objects to reproduce the observed HDF galaxies with the appropriate pixel size and noise levels. Finally, the multigaussian deconvolution technique was applied to the synthetic data as in Paper I. We first notice that there is no selection bias against galaxies with large size ($`r_{hl}\sim 1`$ arcsec) and low surface brightness down to $`I=26`$, since all the synthetic objects were detected. The results are shown in Fig. 1, where the error bars represent the dispersion around the mean due to noise in the background subtraction. A good match between the intrinsic and measured half-light radii was obtained up to $`r_{hl}\simeq 0.7`$ arcsec. For larger values, a slight underestimate in the measured values appears at the sample limiting magnitude. In any case it can be seen that, even for the faint galaxies with $`I\simeq 26`$, the overall correlation between intrinsic and measured half-light radii is preserved, in such a way that an intrinsically large, faint object, e.g. with $`r_{hl}\sim 0.7^{\prime \prime }`$, cannot be detected as a small-sized one, e.g. with $`r_{hl}\sim 0.1^{\prime \prime }`$. The simulation shows that the fraction of small-size galaxies present in the HDF morphological catalog is real and is not due to intrinsically larger objects which have been shrunk by noise effects.
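A stripped-down version of the test described above can be sketched in a few lines: build a synthetic exponential disk of known half-light radius, blur it with a PSF, add background noise, and re-measure the half-light radius from the curve of growth. The Gaussian PSF and the noise level below are illustrative stand-ins only; the actual test used the observed stellar PSF, the real HDF background, and the multigaussian deconvolution of Paper I.

```python
# Minimal sketch: recover the half-light radius of a noisy, PSF-blurred exponential disk.
import numpy as np
from scipy.ndimage import gaussian_filter

def exponential_disk(npix=128, scale_pix=6.0, total_flux=1.0):
    y, x = np.indices((npix, npix))
    r = np.hypot(x - npix / 2, y - npix / 2)
    img = np.exp(-r / scale_pix)
    return total_flux * img / img.sum()

def half_light_radius(img):
    """Half-light radius in pixels from the cumulative curve of growth."""
    cy, cx = np.array(img.shape) / 2
    y, x = np.indices(img.shape)
    r = np.hypot(x - cx, y - cy).ravel()
    order = np.argsort(r)
    growth = np.cumsum(img.ravel()[order])
    return r[order][np.argmax(growth >= 0.5 * growth[-1])]

rng = np.random.default_rng(0)
true_disk = exponential_disk(scale_pix=6.0)      # r_hl ~ 1.68 x scale length ~ 10 pixels
blurred = gaussian_filter(true_disk, sigma=2.5)  # illustrative PSF
noisy = blurred + rng.normal(0.0, 2e-5, blurred.shape)
print("input r_hl [pix]:    ", round(float(half_light_radius(true_disk)), 1))
print("recovered r_hl [pix]:", round(float(half_light_radius(noisy)), 1))
```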
## 3 THE DISTRIBUTION OF THE GALAXY SIZES IN LUMINOSITY AND REDSHIFT: A COMPARISON WITH CDM MODELS
We have computed the disk linear size $`R_d`$ and the absolute blue magnitude for each galaxy in the catalog using the color-estimated redshifts as discussed in the previous section. In Fig. 2 we plot the distribution of the observed sizes for the HDF galaxies as a function of luminosity in four different redshift intervals. The filled circles represent HDF spirals with bulge-to-total ratio in the range $`0.05<B/T<0.75`$ while asterisks represent galaxies with $`B/T<0.05`$, most of which have irregular morphology (Marleau & Simard 1998; Schade et al. 1998). HDF galaxies with $`B/T>0.75`$ are excluded since they are bulge-dominated systems. The NTTDF galaxies of Paper I are also shown as empty squares. We also show in the shaded area the prediction of our rendition of the standard semi-analytical CDM models. This relates the luminous properties of galaxies to their circular velocity including the hierarchical merging of dark matter halos, the merging of galaxies inside the halos, the gas cooling, the star formation and the supernova feedback associated with the galaxies. Finally, the circular velocity of the halos is connected to the disk scale length using the Mo, Mao & White (1998) model; the latter correlation depends on the dimensionless angular momentum $`\lambda `$ whose lognormal distribution $`p(\lambda )`$ is given in Mo et al. (1998). The shaded area in Fig. 2 corresponds to that allowed for $`0.025<\lambda <0.1`$, the values corresponding to the 10% and 90% points of $`p(\lambda )`$. The solid line corresponds to $`\lambda =0.05`$, the 50% point of $`p(\lambda )`$. The full $`p(\lambda )`$ distribution is taken into account in the differential size distribution of galaxies with $`I<26`$ (normalized to the total number) shown in Fig. 3 for different redshift bins. A tilted CDM power spectrum of perturbations with $`n=0.8`$ in an $`\mathrm{\Omega }=1`$ Universe with $`H_0=50`$ km/s/Mpc has been used. The full details of our semi-analytic model are given in Appendix A of Paper I together with the adopted set of star formation and feedback parameters. The latter set was chosen so as to optimize the matching to the local I-band Tully-Fisher relation for bright galaxies and the B-band galaxy luminosity function. Note that, since the disk velocity is $`\sim `$20% higher than that of the DM, a small offset ($`\sim `$0.5 mag) between the predicted and the observed Tully-Fisher relation is present (see Fig. 9 of Paper I). Fig. 2 shows that at $`z<1`$ and for faint magnitudes ($`M_B>-19`$), the observed sizes tend to occupy preferentially the small-size region below the 50% locus of the angular momentum distribution. Correspondingly, Fig. 3 shows the excess of small-size ($`R_d<2`$ kpc) galaxies with respect to the CDM predictions. Indeed, for $`M_B>-19`$, the predicted average disk size is 2.1 kpc while the observed one is 1.4 kpc. These results are similar to those presented in Paper I, although extended down to $`M_B<-16`$ and with a larger statistics. The excess becomes less evident at brighter magnitudes, in agreement with recent studies (Lilly et al. 1998, Simard et al. 1999) which indicate little evolution in the morphological properties of bright spirals in the CFRS sample up to $`z\sim 1`$.
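Before moving to higher redshifts, it may help to make the size–velocity mapping used for the shaded regions more concrete. The sketch below uses the simplest form of the Mo, Mao & White (1998) relation, R_d ≈ (λ/√2) r_200 with r_200 = V_c/(10 H(z)) for an Ω = 1 universe, i.e. an isothermal halo with no adiabatic-contraction or disk self-gravity corrections; it therefore only approximates the relation actually implemented in the semi-analytic model, and the circular velocities chosen in the example are illustrative.

```python
# Simplified Mo, Mao & White (1998) disk scale length vs. halo circular velocity.
import numpy as np

H0 = 50.0                          # km/s/Mpc, as adopted in the text

def H(z):
    """Hubble rate for an Einstein-de Sitter (Omega = 1) universe."""
    return H0 * (1.0 + z) ** 1.5

def disk_scale_length_kpc(v_c, z, lam=0.05):
    """Exponential disk scale length in kpc for circular velocity v_c [km/s]."""
    r200_mpc = v_c / (10.0 * H(z))             # virial radius in Mpc
    return 1.0e3 * lam / np.sqrt(2.0) * r200_mpc

for z in (0.5, 1.0, 2.0, 3.0):
    sizes = [round(disk_scale_length_kpc(v, z), 2) for v in (100.0, 200.0)]
    print("z =", z, "  R_d(100, 200 km/s) =", sizes, "kpc")
```

The roughly (1+z)^{-3/2} shrinking of R_d at fixed V_c, together with the spread in λ, is what produces the sloped shaded bands in Fig. 2.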
At $`z\ge 1`$, the larger statistics available with the present sample (with respect to that used in Paper I) shows that the excess persists and involves brighter galaxies ($`M_B\sim -20`$), with an observed average $`R_d\simeq 1.3`$ kpc with respect to the predicted $`R_d\simeq 1.9`$ kpc. In addition, the excess appears for all the galaxies in the sample regardless of their morphological classification and so does not depend on the selection procedure adopted for the spiral sample. Furthermore, the above excesses cannot be due to the offset (only $`\sim `$0.5 mag) between the observed and the theoretical Tully-Fisher relation, as confirmed by the good agreement of the $`R_d`$–$`M_B`$ relation at the bright end. In summary, Figs. 2 and 3 indicate that small-size galaxies appear smaller and/or brighter than predicted by CDM at all $`z`$ (indeed, even more at high $`z`$). Within the framework of the adopted standard scenario of disk formation, which assumes the conservation of the specific baryonic angular momentum (Mo, Mao & White 1999), a viable solution consists in introducing a brightening of small-size galaxies. In particular, we note that shifting the predicted curves toward higher luminosities results in a better fit to the data. At $`z\sim 1`$ the best-fitting shift is $`\sim `$1 mag, while at larger $`z`$ the best-fitting shift is $`\sim `$1.5 mag (Fig. 2).
## 4 CONCLUSIONS AND DISCUSSION
The present analysis, performed on a larger and deeper sample, confirms our previous findings at $`z\le 1`$ (Paper I), where an excess of faint ($`M_B>-19`$), small-size ($`R_d<2`$ kpc) galaxies with respect to the CDM predictions was found. The results presented here show that the excess persists even at higher redshifts ($`1<z<3.5`$) and for brighter galaxies ($`M_B>-20`$). Several processes may be responsible for the above excess (see Paper I), such as the non-conservation of the gas angular momentum during the collapse in the Mo et al. model. Alternatively, within the Mo et al. framework for disk formation, a possible explanation can be sought in luminosity-dependent effects related to the physical mechanisms involved in star formation activity already at high $`z`$. Indeed, a shift of the shaded region (the CDM prediction) by $`\sim `$1 mag at $`z<1`$ and $`\sim `$1.5 mag at $`z>1.5`$ is sufficient to reconcile the CDM predictions with the observations. Such a shift could be due to the starburst brightening of the numerous small-size galaxies subject to close encounters/interactions. This brightening would have the advantage of explaining at the same time the flat shape of the global cosmological star formation rate $`\dot{M}_{*}`$ observed at $`z>1.5`$ in deep surveys (Steidel et al. 1999; Fontana et al. 1999b), which turns out to be a factor 5–10 higher than predicted by CDM. Since the SFR is proportional to the UV-B luminosity, a brightening of the predicted luminosities in small-size galaxies by 1–2 mag is needed in both cases. The interaction rate needed to reconcile the CDM evolutionary scenario with the various observables can be derived from the following simple considerations. The SFR in a galaxy halo from a cool gas of mass $`M_{cool}`$ can be written as $`\dot{M}_{*}\simeq f_{gas}M_{cool}/\tau _i+(1-f_{gas})M_{cool}/\tau _{*}`$, where $`f_{gas}`$ is the fraction of gas converted into stars due to interactions, $`\tau _i`$ is the timescale for interactions and $`\tau _{*}`$ is the quiescent star formation time scale.
While only the latter term is usually included in the CDM models, we note that for $`f_{gas}\simeq 0.1`$ a $`\tau _i`$ shorter than $`\tau _{*}`$ by about a factor of 100 would be implied to obtain an SFR consistent with the high-$`z`$ data; for the population of small galaxies (dominating at high $`z`$) with a circular velocity $`v_c\sim 100`$ km/s (corresponding to $`\tau _{*}\simeq 5`$ Gyr, see Cole et al. 1994) this would imply $`\tau _i\simeq 0.1`$ Gyr, in fact close to the dynamical time scale of these systems at $`z\simeq 2`$. Note that for larger ($`v_c>200`$ km/s) disk galaxies, $`\tau _{*}\propto v_c^{-1.5}`$ becomes smaller while $`\tau _i\propto 1/N_g`$ remains large due to their small number density $`N_g`$ (Cavaliere & Vittorini 1999), so that the quiescent star formation mode prevails for these systems; in addition, since $`N_g\propto (1+z)^3`$ for all the galaxies, $`\tau _i`$ becomes long and the interaction-driven mode ineffective at small $`z`$. Detailed implementation of the interaction-driven star formation mode in semi-analytical models will soon provide a more quantitative test for the importance of this physical mechanism in determining the galaxy properties at high $`z`$.
FIGURE CAPTIONS
Fig. 1. Deconvolved half-light radii as a function of true values in simulated data of the HDF. Error bars are one-sigma confidence intervals.
Fig. 2. Distribution of galaxies in the luminosity-size plane in four redshift intervals. The disk radii are in kpc. Empty squares are NTTDF galaxies; filled circles are HDF spiral galaxies with $`0.05<B/T<0.75`$; stars are galaxies with $`B/T<0.05`$, most of which have irregular morphology. The shaded region represents the region allowed by the model. The upper and lower lines correspond to the 90% and 10% points of the angular momentum distribution. The solid line corresponds to the 50% point of the same distribution (see the text for details).
Fig. 3. Size distribution of the low and high luminosity spiral galaxies in the HDF and NTTDF shown in Fig. 2. The corresponding curves are the distributions predicted by the CDM model. An excess of small-size galaxies with respect to the CDM predictions is apparent at $`R_d\sim 1`$ kpc.
Figure 1: Initial, final and saddle point configurations. These are for zero shear stress, boundaries fixed in the $`x`$-direction. Circles indicate the dislocation.
# DISLOCATION MOBILITY IN TWO-DIMENSIONAL LENNARD-JONES MATERIAL
N. P. BAILEY\*, J. P. SETHNA\* AND C. R. MYERS\**
\* Physics Department, Cornell University, 117 Clark Hall, Ithaca, NY 14853
\** Cornell Theory Center, Cornell University, Ithaca, NY 14853
ABSTRACT
In seeking to understand at a microscopic level the response of dislocations to stress we have undertaken to study as completely as possible the simplest case: a single dislocation in a two-dimensional crystal. The intention is that results from this study will be used as input parameters in larger length scale simulations involving many defects. We present atomistic simulations of defect motion in a two-dimensional material consisting of atoms interacting through a modified Lennard-Jones potential. We focus on the regime where the shear stress is smaller than its critical value, where there is a finite energy barrier for the dislocation to hop one lattice spacing. In this regime motion of the dislocation will occur as single hops through thermal activation over the barrier. Accurate knowledge of the barrier height is crucial for obtaining the rates of such processes. We have calculated the energy barrier as a function of two components of the stress tensor in a small system, and have obtained good fits to a functional form with only a few adjustable parameters.
INTRODUCTION
This paper is concerned with the motion of a single dislocation. Thus there is no dislocation-dislocation interaction; the interaction is between the dislocation and the applied stress. Furthermore we work in two dimensions, which eases the computational burden and aids visualization. Having simplified the problem to this extent, we have a chance of understanding it in detail. Once such understanding is developed, it will then make sense to proceed to more realistic, though computationally more expensive, cases (e.g. three dimensions, realistic potentials, etc.). Our system consists of a relatively small ($`<100`$) number of atoms in two dimensions (2D) interacting through a Lennard-Jones potential which has been truncated and made to go smoothly to zero at the cutoff distance ($`2.7\sigma `$; this is large enough for third neighbor interactions to be included). The system has periodic boundary conditions in the vertical direction, and rigid ‘walls’ on the sides. The walls are simply lines of atoms which are constrained to move as rigid bodies. In addition the atoms in each of the next-to-outermost columns are constrained to move rigidly in the $`x`$-direction, and independently in the $`y`$-direction. This system for the boundaries is due to Tomasi . If a shear stress is applied to the boundary walls the dislocation will move by glide, but only if the shear stress is above a certain critical value $`\sigma _c`$. The applied shear stress is the resolved shear stress in this geometry. Note that the critical resolved shear stress for dislocation motion depends on the other components of stress, hence knowledge of the resolved shear stress alone is not enough to decide whether a given dislocation will move or not. At zero temperature, with $`\sigma _{xy}<\sigma _c`$, the dislocation cannot move, but at a finite temperature motion still occurs as thermally activated hops over an energy barrier. This barrier corresponds to the Peierls barrier for an edge dislocation in three dimensions.
Our task was to calculate this barrier as a function of all three components of the stress tensor ($`\sigma _{xx},\sigma _{xy},\sigma _{yy}`$). However, so far we have only dealt with the dependence on the first two, since we have only begun to incorporate the techniques necessary for applying a constant stress in the direction in which periodic boundary conditions are imposed. We have considered only one size of system; finite size effects are important. Extensions to all three components of stress and extrapolations to large sizes are in progress.

THEORY

Thermal Activation

We concentrate on calculating the energy barrier to hopping for shear stress less than the critical shear stress. For these values of shear, there exist so-called fixed points of the dynamics. These are associated with local minima in the potential. Two nearby minima are separated by a barrier in the energy landscape (note that the saddle point of the barrier is also a fixed point, albeit an unstable one). When the energy barrier is large compared to the temperature, the transition rate between the states will have the form $$R=\nu \mathrm{exp}\left(-\frac{E_B}{k_BT}\right)$$ (1) where $`E_B`$ is the barrier height and $`\nu `$ is an attempt frequency which can be calculated from the curvature of the potential landscape near the minimum and near the barrier top. Because of the exponential, however, the rate is much more sensitive to $`E_B`$ than it is to $`\nu `$. Hence it is important to know $`E_B`$ accurately to be able to calculate such rates reasonably. For the dislocation, the rate of hopping is proportional to the velocity, and hence to its mobility.

The barrier height is defined as follows. For any path between the two minima we find the point along the path where the potential energy is greatest; call this value $`E_{max}`$. We then consider all paths and take the one whose $`E_{max}`$ is smallest. This is the minimum energy path. The barrier height is $`\mathrm{min}_{\mathrm{paths}}\{E_{max}\}-E_{\mathrm{initial\ state}}`$. The location of the maximum energy along the minimum energy path corresponds to a saddle point in the potential landscape. Several methods exist for finding barrier heights. We use one which finds the whole minimum energy path, called the ‘Nudged Elastic Band’ method.

Model and Potential

We model a small piece of two-dimensional material containing a single edge dislocation atomistically, using methods of molecular dynamics. We use a classical pair potential defined as follows: Lennard-Jones (6-12, with standard parameters $`ϵ`$ and $`\sigma `$) for $`r<r_{cut1}=2.41308788\sigma `$, a quadratic in $`r^2`$ for $`r_{cut1}<r<r_{cut2}=2.7\sigma `$, and zero for $`r>r_{cut2}`$. This potential was formulated by Chen. It is continuous and smooth everywhere. Extension to other potentials and other forms of the cutoff is planned. The units for the simulation are determined by the parameters $`ϵ`$ and $`\sigma `$ in the potential, which are set to unity, hence all energies are in units of $`ϵ`$, and distances in units of $`\sigma `$. Units of time, stress etc. all follow from these. Often one makes a connection with physical systems by matching the parameters to those of Argon, for which Lennard-Jones is a good potential. So, $`ϵ=119.8\mathrm{K}k_B`$ or $`0.01`$ eV, and $`\sigma =0.341`$ nm. However, since we have a 2D system, there is little useful quantitative comparison to be made with experiment (for one thing, stress has different units in 2D than in 3D).
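As a concrete illustration of this piecewise form, the Python sketch below evaluates a truncated Lennard-Jones potential bridged to zero by a quadratic in $`r^2`$. The bridge coefficients here are fixed by continuity of value and slope at $`r_{cut1}`$ and a zero value at $`r_{cut2}`$; the actual coefficients of Chen's formulation may differ, so this is a sketch rather than the potential used in the runs.

```python
import numpy as np

EPS, SIG = 1.0, 1.0          # Lennard-Jones units, as in the simulation
R1, R2 = 2.41308788, 2.7     # inner and outer cutoff radii (units of sigma)

def lj(r):
    """Plain 6-12 Lennard-Jones pair potential."""
    x6 = (SIG / r) ** 6
    return 4.0 * EPS * (x6 * x6 - x6)

def lj_deriv(r):
    """dV/dr of the plain potential."""
    return 4.0 * EPS * (-12.0 * SIG**12 / r**13 + 6.0 * SIG**6 / r**7)

# Bridge V(r) = a + b*r^2 + c*r^4 (a quadratic in r^2) on R1 < r < R2.  The
# three coefficients are fixed here by continuity of value and slope at R1 and
# by V(R2) = 0; the published smoothing may use slightly different conditions.
A = np.array([[1.0, R1**2,    R1**4],
              [0.0, 2.0 * R1, 4.0 * R1**3],
              [1.0, R2**2,    R2**4]])
a, b, c = np.linalg.solve(A, np.array([lj(R1), lj_deriv(R1), 0.0]))

def v_smooth(r):
    """Truncated Lennard-Jones potential, bridged smoothly toward zero at R2."""
    r = np.asarray(r, dtype=float)
    out = np.where(r < R1, lj(r), a + b * r**2 + c * r**4)
    return np.where(r > R2, 0.0, out)

print(v_smooth(np.array([1.12, 2.0, R1, 2.69, 3.0])))
```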
SIMULATION We simulate an ‘$`N\times N`$’ system, where $`N`$ is the number of rows on the left of the dislocation; there are $`N1`$ on the right. Two extra columns are added to each side to form the boundaries. Typically $`N=7`$, which corresponds to 71 atoms. Simulation runs consist of the following procedure: A set of atoms is configured as two lattices of slightly differing lattice constants, and correspondingly different numbers of rows, placed together. The atoms are relaxed by evolving the system using Langevin dynamics, after which there was a localized dislocation. Next, a shear stress is applied which is just of sufficient strength and duration to cause the dislocation to move one lattice spacing, after which the stress is reset to zero and further relaxation is done. Copies are made of the relaxed system before and after the move. The main part of the simulation consists of a loop in which stress is applied to these copies, their energy is minimized using the ‘MDmin’ procedure, taken from Ref. , and they are passed to the barrier finding routine, which uses the Nudged Elastic Band method . In this method a chain of replicas of the system is created forming a line in configuration space between the local minima. Forces from the potential, and between replicas are applied, with certain corrections, and the whole chain relaxed until it lies along the minimum energy path. Once the barrier is calculated, the stress is incremented and the loop repeats, minimizing the two copies now with a different stress, and so on. Fig. 1 shows initial, final and saddlepoint configurations of the $`7\times 7`$ system. Curve-fitting: Finding the top. In general there will not be a replica right at the saddle point, so given the energies of the replicas it is necessary to do curve fitting to find the actual maximum energy along the path, and hence the barrier height itself. The information returned from the routine includes the configurations of the replicas and the Euclidean distance along the chain for each one, called the reaction coordinate, as well as the energies. We must subtract the work done by the external stress to find the relevant energy-quantity: at finite temperature it would be the Gibbs free energy; at zero temperature it is equal to the enthalpy, and is the quantity that is minimized in equilibrium when a constant force is applied. The set of distance-enthalpy points can be plotted in order to visualize the shape of the barrier. To find the height, a cubic is fitted to the top four points, see Fig 2; the position of the maximum can then be simply calculated. An estimate of the uncertainty can be got from considering the top five points and fitting to a quartic, and taking the difference of the two results. The difference appeared only in the fifth digit, though close to critical shear, where convergence was not as good, the relative error became large. Constant pressure simulations The first runs were done with the boundary walls free only to move in the $`y`$-direction, and fixed $`L_y`$, though it was intended that we would eventually have constant $`\sigma _{xx}`$ and constant $`\sigma _{yy}`$. A material is generally under fixed stress not fixed strain, so incorporating conditions of constant stress, and hence fluctuating boundaries, is a more realistic approach. In fact we found direct evidence of the necessity for constant stress simulations when we looked at the system-size dependence of our results. 
The height of the barrier for $`\sigma _{xy}=0`$ depended strongly on system size—it decreased by a factor of two upon going from a $`7\times 7`$ system to a $`13\times 13`$ one. Since we would like to believe that we can in fact get away with simulating such small systems this was a worrying fact. Including constant $`\sigma _{xx}`$ was straightforward: we let the boundary walls move and put a force in the $`x`$-direction on them. When the system was relaxed in the initial stage of the simulation, the final separation of the boundaries was about half a lattice constant larger than the fixed separation we had been using—the dislocation liked to take up more space than an uninterrupted column of atoms, and hence there was a significant sideways pressure in the fixed boundary simulations. When the system size was increased this pressure decreased and since it was already clear that the barrier height depended on sideways pressure, this would explain the dependence on system size in the earlier simulations. So increasing the system size at fixed $`\sigma _{xx}`$ should have a much smaller effect on the barrier. In fact the barrier height was an increasing function of the system size when $`\sigma _{xx}`$ was held fixed. This was thought to be due to the vertical dimension of the sample being held fixed (the dimension in which periodic boundary conditions were operating), hence there was a varying effective pressure in this direction. Since there is no rigid boundary here as on the sides, a more sophisticated technique must be used, similar to Parrinello-Rahman dynamics . Here, the lengths of the simulation cell in the different directions are allowed to vary dynamically. The equations of motion are suitably modified to include this extra degree of freedom. In the present case this only had to be done for the vertical direction, with the variable $`L_y`$ being introduced. However there are subtleties associated with combining this technique with Nudged Elastic Band, not least the issue of defining an angle in a space which has one axis corresponding to a length and the rest to dimensionless positions. RESULTS The energy barrier as a function of shear stress $`\sigma _{xy}`$ with fixed $`\sigma _{xx}`$, for several values of $`\sigma _{xx}`$ is shown in Fig 4. The dots are data points from barrier calculations; the solid lines are three-parameter fits to a series expansion obtained by considering the one-dimensional barrier problem. For stress larger than the critical value, there is no fixed point, and the defect slides with periodically varying velocity. For stress smaller than the critical value, there are two fixed points, a stable one corresponding to the local minimum, and an unstable one corresponding to the barrier top (of course there are many more really, due to the periodicity of the lattice). The appearance of two fixed points as the stress goes below $`\sigma _c`$ (or equivalently their disappearance as stress goes above $`\sigma _c`$) is a saddle-node bifurcation. Note that we also have points for negative shear; this corresponds to hopping in the opposite direction for positive shear, see Fig. 3. For large negative shear the barrier becomes simply the energy difference between the two minima. These local minima only exist for $`|\sigma _{xy}|<\sigma _c`$ (beyond which the dislocation starts to slide in the appropriate direction), hence our data covers the range $`\sigma _c`$ to $`\sigma _c`$, for several values of $`\sigma _{xx}`$. 
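Returning to the curve-fitting step described above, the sketch below shows how the barrier top can be located by fitting a cubic to the four highest-enthalpy replicas, with a quartic fit to five points as a rough error estimate. The (reaction coordinate, enthalpy) values are invented for illustration only.

```python
import numpy as np

# Hypothetical (reaction coordinate, enthalpy) samples near the barrier top,
# standing in for the replicas returned by the Nudged Elastic Band chain.
s = np.array([0.8, 1.0, 1.2, 1.4, 1.6])
h = np.array([0.140, 0.168, 0.174, 0.160, 0.125])

def top_of_barrier(s_pts, h_pts, degree):
    """Fit a polynomial to the points and return the maximum of the fit on the
    sampled interval (the barrier height then follows by subtracting the
    initial-state enthalpy)."""
    coeffs = np.polyfit(s_pts, h_pts, degree)
    roots = np.roots(np.polyder(coeffs))             # stationary points of the fit
    roots = roots[np.isreal(roots)].real
    roots = roots[(roots > s_pts.min()) & (roots < s_pts.max())]
    candidates = np.concatenate([roots, s_pts[[0, -1]]])
    return np.polyval(coeffs, candidates).max()

e_cubic = top_of_barrier(s[:4], h[:4], 3)    # cubic through the top four points
e_quartic = top_of_barrier(s, h, 4)          # quartic through the top five
print(e_cubic, abs(e_cubic - e_quartic))     # barrier top and a rough uncertainty
```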
CONCLUSIONS We have calculated the barrier height for dislocation hopping for range of both $`\sigma _{xx}`$ and $`\sigma _{xy}`$. We have shown how this data can be parametrized to reasonable accuracy with a fit that needs only a few parameters. These parameters could be used for example in a simulation whose primitive objects are dislocations which move in a stress field. The present results, augmented by $`\sigma _{xx}`$ dependence, could be used to calculate the motion of each dislocation given the values of the components of the stress tensor at its location (the stress field would be calculated from the elastic fields of the other dislocations plus any external sources of stress). The next development in the simulation will be to include the dependence on the third component of the stress tensor, $`\sigma _{yy}`$, which corresponds to pressure on the top and bottom of the system. We have just started to be able to do runs at constant $`\sigma _{yy}`$, though there are subtleties in combining this with the Nudged Elastic Band method. Once we are satisfied that we can do this well, we will repeat with different inter-atomic potentials. Another aspect of simulating this system is to consider shear stress above $`\sigma _c`$, where the dislocation slides continuously, if not quite steadily. Preliminary studies indicate interesting behaviour, including a delocalization of the core structure at high velocity. We will add finite temperature later as well, and eventually carry over this work into three dimensions and real potentials. ACKNOWLEDGMENTS This project grew out of a collaboration with Jeff Tomasi, who originated our boundary condition method. It was supported by NSF grant number DMR 9873214, and was done using the Intel/NT Velocity Cluster at the Cornell Theory Center. We had helpful discussions with Tejs Vegge, Enrique Batista and Markus Rauscher.
no-problem/0001/nlin0001057.html
ar5iv
text
# Numerical Replication of Computer Simulations: Some Pitfalls and How To Avoid Them January 25, 2000 draft; submitted to GECCO 2000. Comments welcome! ## 1 INTRODUCTION When we perform computer simulations, such as genetic algorithms, it is often useful to be able to replicate a run exactly, so that those results of the second run that we care about are exactly the same as those of the first. This kind of replication is called numerical replication . For instance, if we notice a strange result in a run, it is useful to be able to redo the run exactly, using the same parameter settings and random number seeds, but this time collecting additional data or perturbing the course of the run in order to test hypotheses about what is causing the strange results. Numerical replication can also be used to verify that experimental results are not due to a bug or human error, and to perform regression testing after making changes to a program. However, a program that uses IEEE standard floating-point arithmetic may produce different results on two different computers, even if the same input and random number seeds are used. In fact, there is no guarantee that it will produce the same results when run twice on the same computer, or even that a subexpression will have the same value when evaluated at two different points during a single run. This is because the calculations may be performed at different precisions each time, and the programmer has little control over what precision is used . This can cause numerical replication to fail unexpectedly. In the worst case, this can lead us to believe that two different sets of results are the same, and thereby cause us to draw incorrect conclusions. Luckily, if we are aware of these pitfalls, we can reliably avoid them in practice. We should not simply assume that two sets of data are the same because we used the same input and random number seeds; instead, we should always verify this empirically. Furthermore, we should always record the computer platform and run-time and compile-time parameters that we use along with the simulation data. This will make numerical replication easier to achieve. We need tools to make both of these tasks easy and automatic. Finally, we need to compile a knowledgebase of heuristics for achieving numerical replication. In the remainder of this paper, I discuss the problem further and justify these recommendations. ## 2 THE PROBLEM Computers perform arithmetic mainly on two kinds of numbers: integers (such as 42) and real numbers (such as 3.14159). There are various possible ways to represent real numbers in a computer; almost all modern computers use binary floating-point representations . This representation is essentially the same as scientific notation, except in binary; besides representing real numbers, it can also be used to represent very large integers. Floating-point numbers are represented in the form $`(1)^s\times 1.f\times 2^x`$. The part $`s`$ is the sign bit ($`0`$ or $`1`$), $`1.f`$ is called the significand (an older term is mantissa), $`f`$ is called the fraction, where $`0f<1`$, and $`x`$ is called the exponent. Almost all computer platforms used today use IEEE 754 standard floating-point arithmetic. The only important exceptions are the Cray X-MP, Y-MP, C90, and J90, the IBM /370 and 3090, and the DEC VAX; most of these are disappearing rapidly . This standard has caused floating-point arithmetic to be much more reliable, predictable, and portable. 
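To make the representation concrete, the following Python sketch unpacks the sign bit, exponent and fraction of a double-precision number directly from its bit pattern (Python floats are IEEE doubles on essentially all current platforms; denormals and zero would need a small extra case not handled here).

```python
import struct

def ieee754_parts(x):
    """Split a Python float (an IEEE double) into its sign bit, unbiased
    exponent and 52-bit fraction field."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63
    exponent = ((bits >> 52) & 0x7FF) - 1023   # remove the exponent bias
    fraction = bits & ((1 << 52) - 1)          # the stored bits of f in 1.f
    return sign, exponent, fraction

print(ieee754_parts(-0.5))      # (1, -1, 0): -0.5 = (-1)^1 * 1.0 * 2^-1
print(ieee754_parts(3.14159))   # sign 0, exponent 1, plus a 52-bit fraction
```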
However, the standard does not guarantee that a program will produce the same results when run on two different computers . This is because different computers may perform floating point calculations differently, even if the computers all follow the IEEE standard. The standard specifies three different precisions for floating-point arithmetic: single precision (32 bits long), double precision (64 bits) and double extended precision (also called simply extended precision, 80 or more bits). Different computer platforms support these precisions to different extents. We may get different results, for instance, when we run a simulation once on a Hewlett-Packard (HP) PA-RISC workstation running HPUX and once on an Intel x86 PC running Linux or MS Windows. The HP workstation uses single-precision and double-precision floating point arithmetic, while the x86 uses IEEE 80-bit extended-precision floating-point arithmetic by default. (The Motorola 680x0 (m68k) is another CPU family that uses 80-bit extended precision; it was used in the first Macintosh computers.) Figure 1 shows an example program that may produce a different result on each platform, depending on the compiler and the compile-time settings. An HP workstation will print “Equal”, while an x86 computer may print either “Equal” or “Not Equal”, depending on the compiler and compile-time options that are used . If the results of a simulation depend on many floating-point calculations, this difference in precision may cause the two runs to produce wildly different results. This is particularly likely in simulations of complex systems, such as a genetic algorithm, where the simulation’s precise trajectory is highly sensitive to the initial conditions and to the stream of random numbers. Even if the different runs produce the same qualitative results, the numeric results may differ. This may occur with any program that uses native IEEE floating-point arithmetic, written in any language, on any computer or operating system. Discrepancies may also occur in integer arithmetic, but only if a program makes unwarranted assumptions about the size or representation of integer variables (for example, assuming that C variables of type int are 32 bits long). Both the x86’s and the m68k’s floating-point unit (FPU) can be switched into “single-precision” or “double-precision” mode (see Figure 2). This solves this particular problem on the m68k. Unfortunately, even when the x86 is in one of these modes, it will still produce different results than an HP or similar workstation would, since its internal registers will still use more bits of precision for the exponents (15 bits instead of 8 or 11) . To reduce the exponent range to be the same as that in “pure” single or double precision, the result must be stored to memory from the x86 FPU’s internal registers, and then reloaded from memory into the FPU. This will cause the computation to be two to four times slower than native floating-point arithmetic. If the gcc compiler is being used, this can be accomplished by using the -ffloat-store compiler option. However, there may still be a discrepancy on the x86 in the last bit of about $`10^{324}`$ because of double rounding, if the floating-point operation is a multiplication or division. 
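The double-rounding effect itself is easy to demonstrate. The Python sketch below uses the decimal module at artificially low precisions so the effect is visible by eye; the binary case on the x86 involves 64-bit versus 53-bit significands, but the mechanism is the same: rounding first to a wider intermediate format and then to the target format can give a different answer than rounding once.

```python
from decimal import Decimal, getcontext, ROUND_HALF_EVEN

getcontext().rounding = ROUND_HALF_EVEN          # round-to-nearest, ties-to-even
a, b = Decimal("0.625000005"), Decimal("2")      # exact product is 1.25000001

getcontext().prec = 2
once = a * b            # rounded once, straight to 2 digits: 1.3

getcontext().prec = 6
wide = a * b            # rounded first to a wider 6-digit format: 1.25000
getcontext().prec = 2
twice = +wide           # unary plus re-rounds to the current precision: 1.2

print(once, twice, once == twice)   # 1.3 1.2 False
```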
To avoid this, one of the operands must be scaled down before the operation by $`2^{x_{\mathrm{max}_{\mathrm{extended}}}x_{\mathrm{max}_{\mathrm{double}}}}`$, where $`x_{\mathrm{max}_{\mathrm{extended}}}`$ is the maximum possible exponent for extended precision and $`x_{\mathrm{max}_{\mathrm{double}}}`$ is the maximum possible exponent for double precision, and the result must be scaled back up by the same amount afterwards. This additional scaling adds only marginally to the computation time . By using this technique, replication problems can be made much less likely, at the expense of computation speed. However, it will not guarantee that such problems will not occur. In fact, a program may produce different results when run twice on the same computer, even if the same input and random number seeds are used. This is because the results produced by a program depend not only on the computer’s floating-point unit and operating system but also on the compiler, the compile-time options, the compile-time and run-time libraries installed, and the input (here I include the date, the run-time environment, and the random number seeds). For instance, the discrepancy may occur if we run a simulation twice on a x86 computer, where the simulation is compiled the first time to store floating-point results to memory, and the second time to keep the results in the FPU’s internal registers. Also, the libraries of mathematical functions such as $`\mathrm{log}`$ and $`\mathrm{sin}`$ may produce different results on different platforms and may also differ from version to version on the same platform. (The IEEE standard only contains specifications for the square root function $`\sqrt{x}`$.) The IEEE standard also does not completely specify the accuracy conversion between binary and decimal representations. It is even technically possible that the results may depend on what other programs are running on the computer, or on bugs in the program, compiler, or libraries — this is especially true if the program is not carefully designed and implemented. Therefore, each time a simulation is run, it is prudent to act as if it were run on a different computer, even if the computer is in fact always exactly the same. A related issue is that if a floating-point expression occurs more than once in different locations in a program, it may be evaluated to different precisions each time it is used during a single run . For example, on an x86 computer the compiler may choose whether to keep a result in extended precision in the FPU or store it in double precision to memory based on the optimization level, the number of free floating-point registers, whether the result will be used as the argument to a function, and many other factors. (The forthcoming C99 ANSI/ISO C standard will guarantee that if the expression is stored in a variable, the same precision will be used whenever the variable’s value is evaluated.) Besides complicating numerical replication, this may cause problems if the program assumes that the expression always evaluates to the same value during the course of a run. The fcmp package implements Knuth’s suggestions for safer floating-point comparisons, which can be used to avoid this. Finally, some CPUs, such as the PowerPC, provide an operation called fused multiply-add that can perform the operation $`\pm ax\pm b`$ in a single instruction. If this instruction is used, a different result may be produced than if it is not used, since there is one less rounding step . 
Also, in expressions such as $`\pm ax\pm by`$ it is ambiguous which side is evaluated first (and hence rounded). Therefore, this instruction must not be used in certain algorithms, for instance when multiplying a complex number by its conjugate . Unfortunately, many compilers make it difficult for the programmer to specify whether this instruction should be allowed or inhibited in a program. Guaranteeing that two runs of a program will produce exactly the same results is extremely difficult and may be impossible in practice. Every component which might affect the results would have to be guaranteed to be the same for both runs; none of these components could ever be changed or upgraded unless the new version could be shown to have no effect on the results. On the one hand, determining the version of every component on a computer and recording all of this information with the simulation data would be extremely expensive in time and storage. On the other hand, it will be extremely difficult to weed out false positive results when testing whether two computers have different components: The fact that one of two otherwise identical computers has a copy of the game Quake installed probably will not affect whether a simulation will produce identical results on the two machines, but it will be difficult to prove this. Finally, the date is always changing, and this might have unforeseen effects on a program’s behavior. (Consider the recent Y2k problem, or the bug that depended on the phase of the moon .) ## 3 RECOMMENDATIONS If guaranteeing that we can numerically replicate a run is not an option, what can we do? I suggest that instead of asking how we can guarantee replication, we should ask two different questions: First, what is the worst-case result that can occur because of this problem, and how can we avoid it? Secondly, how can we make numerical replication easier to achieve and more reliable? The worst thing that can happen when we try to numerically replicate a run is that we mistakenly believe that the replicated results are exactly the same as the original, when they are in fact different. Our main concern should be to avoid this mistake. Luckily, there is an easy way to avoid it: Simply compare the data sets. If they are empirically identical, we are done. (Of course, if we do not record enough data from each run, it is possible that the runs’ actual trajectories may be different, even though the data are the same.) Therefore, we need a set of easy-to-use tools to compare results from two runs, and we should use these tools even if the runs were done on the same computer, as a sanity check. In some cases, where entire files need to match exactly, a utility such as the Unix diff command may suffice. In other cases, I suggest using Rivest’s MD5 message digest algorithm. This algorithm produces a short string (called a hash) that is easy to store with the data that it is computed from. Instead of comparing entire files, only the hash string from each file needs to be compared. If the data files clearly mark comments and other data that we do not need to replicate, such as the date of the run, then it is easy to write a short Perl program to compute an MD5 hash string from a data file, ignoring such extraneous information. (One common convention for marking comments in text files is to put a pound sign ‘#’ at the beginning of each comment line.) 
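A minimal Python version of such a comment-skipping hash (the text suggests Perl; any language with an MD5 library will do) might look like the sketch below; the file names in the usage line are hypothetical.

```python
import hashlib

def data_md5(path):
    """MD5 hash of a data file, ignoring comment lines beginning with '#'
    (such as the date of the run), so that only replicable data is compared."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for line in f:
            if not line.lstrip().startswith(b"#"):
                digest.update(line)
    return digest.hexdigest()

# Two runs replicate, as far as the recorded data can tell, iff the hashes match:
# print(data_md5("run1.dat") == data_md5("run2.dat"))
```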
If it is necessary to ensure that a dataset has not been tampered with in any way, there are cryptographically secure methods, such as signing the data set with PGP or GnuPG , or using another message digest algorithm, such as RIPEMD-160 . (MD5 should not be used for this purpose !) Often, using the same input and random number seed will be all that is necessary to numerically replicate a run. Sometimes, however, this will not suffice. In this case, we can almost always replicate the results by tweaking a few special compile-time or run-time parameters (such as what precision the FPU uses). Experience suggests that numerical replication is usually easy to achieve in practice, even though it may be impossible to guarantee. In some cases, it may be necessary to rerun the simulation on the same computer platform that was used originally. For instance, if a simulation is run on an x86 platform using extended precision, it will be difficult to numerically replicate the results on any platform other than an x86 or m68k using extended precision. In addition to the techniques for comparing results mentioned previously, we need a set of heuristics for numerical replication, such as a list of compile-time and run-time parameters that often need to be tweaked. One such heuristic is the technique for emulating double-precision floating-point on x86 computers described in Section 2. To make numerical replication easier, the compile-time and run-time parameters that were used should be stored with a simulation’s results, along with information such as the program and compiler versions, the date, the name of the machine being used, the platform and operating system, etc. In addition to tools for comparing simulation results, we also need tools that make storing this kind of information easy and automatic. (Perl is an example of a program that stores a great deal of configuration information at compile-time; the information is accessible under Unix by running perl -V.) A researcher can then use this information when trying to numerically replicate the run. For example, if a simulation is run on an x86 computer using extended precision, it is important to record this fact. A future release of Drone will include tools for recording and comparing MD5 hashes of data files and for recording compile-time and run-time parameters of simulation programs. I hope that making these tools available will encourage researchers to use them every time they run a simulation. ## 4 CONCLUSION In summary, these are problems that everyone doing computer simulations should be aware of, but they are not insurmountable. In practice, a few simple techniques should be sufficient to avoid problems. First, we should never assume that the results from two simulation runs are identical because they used the same parameters and random number seed, even if they are run on the same computer. We should always verify this, either by comparing the relevant results directly or by comparing the MD5 hash strings of the two datasets. This verification process should be made so convenient that there is no reason not to do it. Secondly, we should compile a knowledgebase of likely parameters that can be tweaked to achieve numerical replication, if simply redoing a run with the same input and random seeds does not suffice. Finally, we should always store the compile-time and run-time parameters that we use. We need tools to make this convenient and automatic, as well. 
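As a sketch of the kind of record-keeping tool advocated here, the Python fragment below collects some of the relevant run-time information and emits it as a '#'-prefixed comment line, so it is stored with the data but ignored by the data hash. The set of fields is illustrative, not exhaustive.

```python
import json, platform, sys, time

def run_metadata(params, seed):
    """Collect information needed to attempt numerical replication of a run later.
    The set of fields is illustrative rather than exhaustive."""
    return {
        "date": time.strftime("%Y-%m-%d %H:%M:%S"),
        "hostname": platform.node(),
        "platform": platform.platform(),
        "machine": platform.machine(),
        "python_version": sys.version.split()[0],
        "random_seed": seed,
        "parameters": params,
    }

# Emit as a '#'-prefixed comment line: stored with the data, ignored by the hash.
print("# " + json.dumps(run_metadata({"pop_size": 100, "generations": 500}, seed=42)))
```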
## 5 FURTHER READING For a technical discussion of this problem, see Priest and the Java Grande Forum Numerics Working Group’s draft report . For a gentle introduction to floating-point arithmetic in general, see Patterson and Hennessy or Goldberg ; for a more technical discussion, see Goldberg or Knuth . The IEEE 754 floating-point standard is published in ; for a readable account see Cody, et al. . Cody and Coonen give C algorithms to support features of the standard. Kahan and Darcy and Darcy argue that it is undesirable to enforce exact replicability across all computing platforms, and Kahan gives an example of differences in floating-point arithmetic in different versions of Matlab on various platforms. Axtell, et al. discuss the differing degrees to which a simulation can be replicated. See Rivest and Robshaw for information on the MD5 message digest algorithm; for a Perl interface to MD5, see Winton . For information on RIPEMD-160, a more secure replacement for MD5, see Bosselaers or Dobbertin . ## 6 ACKNOWLEDGMENTS I am grateful to the members of the egcs and gcc mailing lists for answering questions and providing information and references. I also thank the other members of the University of Michigan Royal Road group (John Holland, Rick Riolo, Bob Lindsay, Leeann Fu, Tom Bersano-Begey, and Chien-Feng Huang) for their comments and encouragement.
no-problem/0001/astro-ph0001319.html
ar5iv
text
# 1 Introduction ## 1 Introduction Clusters of galaxies are X-ray bright to the extent that the ROSAT All-Sky Survey (RASS) allows sizeable, statistically complete cluster samples to be compiled . Large-scale structure (LSS) studies using clusters as tracers do not duplicate, but rather complement those using galaxies because clusters mark the locations of the deepest potential wells whereas galaxies probe primarily the low-density field. To date, most dynamical analyses of large-scale flows have been compared with the IRAS density fields . However, the distribution of rich clusters is significantly different from that of IRAS-selected galaxies as the latter are mostly spirals. There is evidence from dynamical modeling that mass congregates in clusters with much higher $`M/L`$ values than associated with field galaxies . Historically, optical searches for clusters of galaxies, and thus also the resulting LSS studies, were forced to avoid a wide band of the sky centered on the Galactic plane because of severe extinction and stellar obscuration at $`\left|b\right|<20^{}`$. With the advent of X-ray astronomy, this restriction is greatly relaxed. Rather than dust extinction and stellar confusion, it is now the X-ray absorbing equivalent Hydrogen column density, $`n_\mathrm{H}`$, that is the limiting factor. As shown in Figure 1, $`n_\mathrm{H}`$ rises only slowly toward the Galactic plane allowing an X-ray cluster survey to penetrate the plane to much lower latitude. ## 2 CIZA: closing the gap In an attempt to overcome the limitations of existing cluster samples, we have initiated a program aimed at the construction of a statistical sample of galaxy clusters in what used to be the zone of avoidance. Our survey is based on X-ray sources detected in the RASS as listed in the ROSAT Bright Source Catalog . In a first phase of our project which focuses on the X-ray brightest clusters we apply three selection criteria: $`\left|b\right|<20^{}`$, $`f_\mathrm{X}3\times 10^{12}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, and a spectral hardness ratio cut to discriminate against softer, non-cluster X-ray sources. Cross-correlations with databases of Galactic and extragalactic objects as well as follow-up observations with the University of Hawai‘i’s 2.2m telescope have, so far, resulted in 73 spectroscopically confirmed galaxy clusters at $`0.02z0.34`$; only 15 of which were previously known (see Figure 1). Highlights of the survey so far include the discovery of a distant, extremely X-ray luminous cluster which acts as a gravitational lens, and, at the opposite end of the redshift scale, the discovery of a cluster at $`z=0.022`$ at $`\left|b\right|=0.3^{}`$ (i.e., in the mid-plane of the Milky Way) which is very likely part of the Perseus-Pegasus complex.
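Schematically, the first-phase selection amounts to a simple cut on each RASS Bright Source Catalog entry, as in the Python sketch below; the dictionary keys and the hardness-ratio threshold are placeholders, since the actual catalog format and spectral cut are not spelled out here.

```python
def ciza_candidate(source):
    """First-phase CIZA-style cuts on a catalog entry: low Galactic latitude,
    X-ray flux above 3e-12 erg/cm^2/s, and a hard spectrum.  The dictionary
    keys and the hardness-ratio threshold (0.0) are placeholders."""
    return (abs(source["galactic_b"]) < 20.0
            and source["flux_x"] >= 3e-12
            and source["hardness_ratio"] > 0.0)

sample = [
    {"galactic_b": 5.2,  "flux_x": 4.1e-12, "hardness_ratio": 0.6},
    {"galactic_b": 35.0, "flux_x": 9.0e-12, "hardness_ratio": 0.7},
    {"galactic_b": -3.1, "flux_x": 1.0e-12, "hardness_ratio": 0.5},
]
print([ciza_candidate(s) for s in sample])   # [True, False, False]
```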
no-problem/0001/astro-ph0001324.html
ar5iv
text
# Observing the effects of strong gravity with future X-ray missions ## 1. Introduction As we have heard in this meeting, X-ray spectroscopy with ASCA and BeppoSAX are providing probes of the region very close to the supermassive black holes in active galactic nuclei (AGN). In particular, detailed observations and modeling of the broad $`K\alpha `$ fluorescence emission line of iron, which is thought to originate from the surface layers of the inner accretion disk, allow us to probe the inner disk structure and strong-field gravity in completely new ways. The current observational status of this field has been summarized in Dr. Nandra’s contribution in this volume. In this paper, I will discuss what there still is to learn, and how observations with future X-ray missions will help us understand the environment near an accreting supermassive black hole (SMBH). As one might expect, the region close to an accreting SMBH is complex, with many basic issues still unknown to us. At a fundamental level, the mass of most active SMBHs is very uncertain. Furthermore, there are essentially no robust indicators of black hole spin. Many models for the radio-loud/radio-quiet dichotomy of AGN postulate that the black hole mass and, especially, the spin are the control parameters that determine the radio-loudness of the object. However, without observational signatures of black hole masses and spins, it will be difficult or impossible to test such models. In additional, the physics governing the interaction of the accreting matter with the SMBH is far from clear. Some of the outstanding questions are: 1. Does the inner accretion disk in some objects become hot and geometrically-thick (see Dr. Sambruna’s contribution in this volume for a suggestion that this might be the case in broad line radio galaxies)? 2. Is the violently variable X-ray emission due to magnetic flares on the accretion disk surface, or changes within a central corona sitting within the cold accretion disk? 3. What happens within the radius of marginal stability? Does this region have observational relevance? For example, Krolik (1999) recently suggested that the magnetic field becomes very strong in this region, and as a result Alfén waves might plausibly transport significant amounts of energy from this region into an inner corona or the rest of the disk. 4. How are jets launched from the black hole region and collimated, and what contribution do they make to the emissions observed from non-blazar AGN. This article describes how future X-ray observations may attempt to disentangle these phenomena. ## 2. Current uncertainties and pure spectral studies The accretion disk model is highly successful at explaining the X-ray reprocessing spectrum observed in many AGN. A small number of AGN (MCG–6-30-15, Tanaka et al. 1995; NGC 3516, Nandra et al. 1999; NGC 4151, Wang et al. 1999) have been the subject of very long integrations with ASCA yielding high quality iron line profiles which match the predictions of the accretion disk model well (Fabian et al. 1995). However, there are still ambiguities present. Firstly, a time-averaged iron line profile contains no information about the mass of the central black hole. All parameters relevant to determining the line profile scale with the gravitational radius. Secondly, and more interesting from an astrophysics point of view, the line profile is sensitive to the X-ray source geometry, accretion disk structure (including the region inside the innermost stable orbit), and the spin of the SMBH. 
Degeneracies exists in the sense that different astrophysical assumptions and space-time geometries can produce very similar iron line profiles. The best studied example of this degeneracy is the case of the very-broad state of the iron line in MCG–6-30-15 found by Iwasawa et al. (1996). Making the standard assumptions that the line emission is axisymmetric, and there is only emission from outside of the radius of marginal stability, Iwasawa et al. (1996) suggested that the SMBH in this object must be rapidly rotating to produce a line as broad and redshifted as that seen. Dabowski et al. (1996) computed grids of iron line profiles for various values of the SMBH spin with the same assumptions and set a formal limit of $`a>0.94`$ on the spin of this SMBH. However, Reynolds & Begelman (1997) showed that the same iron line profile can result from a non-rotating SMBH if a high-latitude X-ray source illuminates disk material within the radius of marginal stability. This is an explicit demonstration of how uncertainties in the assumed astrophysics (e.g. the X-ray source geometry) leads to the degeneracy between models with very different space-time geometries (i.e. Schwarzschild vs. extremal Kerr). In a rather different vain, Weaver & Yaqoob (1998) showed that non-axisymmetric obscuration of the line emitting region could also reproduce these data. The first question to address is whether better spectroscopy with much higher signal-to-noise and/or larger bandpass than ASCA and BeppoSAX will remove these degeneracies. Returning to the example of MCG–6-30-15, Young, Fabian & Ross (1998) showed that iron fluorescence from material within the radius of marginal stability would be accompanied by a large iron edge. While it is questionable whether the current ASCA data are of sufficient quality to rule out the presence of such an edge, one might think that this would be a tell-tale signature that could be used to distinguish the Schwarzschild and extremal-Kerr models for this object. However, it is important to realize that such conclusions are at the mercy of extra epicycles of astrophysical theory. Both the Reynolds & Begelman (1997) and Young et al. (1998) models assume a smooth accretion flow within the radius of marginal stability. But strong magnetic fields in that region will inevitably produce clumping of the material which will in turn lower the ionization parameter of the material which produces the X-ray reflection signatures (Armitage & Reynolds 2000; also see Fig. 1). This, in turn, may diminish the depth of the iron edge that one would expect in the spectrum. ## 3. Iron line variability Spectral variability, and in particular variability of the broad iron line, is a powerful probe of AGN central engines. Many of the degeneracies described above can be broken by considering line variability. In this section, I shall distinguish three types of line variability and discuss how the study of each may help unravel the complexities of these systems. ### 3.1. Structural changes in the source As has already been mentioned above, ASCA has already seen broad iron line variability in several objects, e.g. MCG–6-30-15 (Iwasawa et al. 1996) and NGC 4051 (Wang et al. 1999). Figure 2 shows the line variability in MCG–6-30-15 in which the line changed from its ‘normal’ state (shown with open squares) to a very broad and strong state (shown by filled circles). 
This change in line profile accompanied a sharp drop in the continuum flux level during an event that lasted at least 60 ksec (which is greater than the dynamical timescale $`t_{\mathrm{dyn}}`$ for the inner accretion disk by a factor of $`100`$ or more for any plausible SMBH mass; Reynolds 1999). Unless the occultation scenario of Weaver & Yaqoob (1998) is correct, some dramatic change in the structure of the accretion disk and/or the geometry of the illuminating X-ray source is required to produce such dramatic and long-lived line changes. Changes in the thermal structure of the disk, which occur on a timescale of $`t_{\mathrm{th}}t_{\mathrm{dyn}}/\alpha `$ (where $`\alpha 0.10.01`$ is the standard viscosity parameter), may produce this type of variability. Even given the long-lived nature of these events, ASCA cannot produce high signal-to-noise line profiles in the different states. This hampers our ability to probe details of the disk/corona variability using these line changes. XMM will completely change this situation. With an effective area at iron line energies more than a factor of $`10`$ greater than ASCA, very high quality iron line profiles will be obtained at different times as a source such as MCG–6-30-15 undergoes one of these events. While I dare not predict what these observations will find, these studies will undoubtedly revolutionize our understanding of the kind of instabilities suffered by the inner accretion disk and X-ray emitting corona. ### 3.2. Orbiting flares The X-ray emission from most AGN is observed to be highly variable on timescales down to (our best estimate for) the dynamical timescale. Whether the X-ray emission is due to magnetic flares exploding out of the accretion disk or some other instability in a hot disk corona, the instantaneous X-ray emission is likely to be non-axisymmetric. If these non-axisymmetric structures are long lived (i.e. survive at least a couple of dynamical timescales), the iron line will be observed to undergo distinct profile changes as the system orbits the central SMBH. The computation of observables from an orbiting hot-spot on an accretion disk around a black hole is a classical problem and has been worked on my many authors (e.g. Ginzburg & Ozernoi 1977, Bao et al. 1994, Bromley et al. 1997). Most recently, Ruszkowski (1999; also see contribution in this volume) has computed the observed iron line variability when it is powered by an X-ray flare that is co-rotating with the disk. XMM should be able to track these profile changes and measure several key parameters. Firstly, the period and amplitude of energy variations in the peak energy of the iron line are an easy and robust way of determining the black hole mass. Note that the inclination can be measured from the time-averaged iron line profile and so is a known quantity in this calculation. Secondly, departures from sinusoidal time-dependence of the iron line peak can be attributed to relativistic effects and used to probe, for example, the spin parameter of the black hole. Such observations may yield signatures of a spinning black hole: if iron line variations are found that imply a flare orbiting on a circular orbit at a radius less the Schwarzschild radius of marginal stability ($`r=6GM/c^2`$), a rapidly rotating black hole is will be implied. ### 3.3. Reverberation If some X-ray flares are very short lived, or activate rapidly (as compared to the light-crossing time of the inner accretion disk), line profile changes due to the finite speed of light will occur. 
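The natural unit for such light-travel-time effects is the gravitational light-crossing time $`GM/c^3`$, evaluated in the short Python sketch below; for $`10^8\mathrm{M}_{\odot }`$ it comes out near 500 s, consistent with the reverberation timescale quoted in the next paragraph.

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg

def light_crossing_time(mass_in_1e8_msun):
    """Gravitational light-crossing time GM/c^3 for a mass given in units of
    10^8 solar masses; the natural unit of iron-line reverberation delays."""
    return G * (mass_in_1e8_msun * 1e8 * M_SUN) / C**3

print(light_crossing_time(1.0))   # ~490 s for a 10^8 solar-mass black hole
```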
This then raises the possibility of performing ‘reverberation mapping’ of the central regions of the SMBH accretion disk (Stella 1990; Reynolds et al. 1999). In principal, reverberation provides powerful diagnostics of the space-time geometry and the geometry of the X-ray source. When attempting to understand reverberation, the basic unit to consider is the point-source transfer function, which gives the response of the observed iron line to an X-ray flash at a given location. As a starting point, one could imagine studying the brightest flares in real AGN and comparing the line variability to these point-source transfer functions in an attempt to measure the SMBH mass, spin and the location of the X-ray flare. By studying such transfer functions, it is found that a characteristic signature of rapidly rotating black holes is a ‘red-tail’ on the transfer function. This feature corresponds to highly redshifted and delayed line emission that originates from an inwardly moving ring of illumination/fluorescence that asymptotically freezes at the horizon (see Reynolds et al. 1999 for a discussion of this feature). The primary observational difficulty in characterizing iron line reverberation will be obtaining the required signal-to-noise. One must be able to measure an iron line profile on a timescale of $`t_{\mathrm{reverb}}GM/c^3500M_8\mathrm{s}`$, where we have normalized to a mass of $`10^8\mathrm{M}_{}`$. This requires an instrument such as Constellation-X. Figure 3 shows that Constellation-X can indeed detect reverberation from a bright AGN with a mass of $`10^8\mathrm{M}_{}`$. Furthermore, the signatures of black hole spin may well be within reach of Constellation-X (Young & Reynolds 1999). Although these simulations make the somewhat artificial assumption that the X-ray flare is instantaneous and located on the axis of the system, it provides encouragement that reverberation signatures may be observable in the foreseeable future. Of course, the occurrence of multiple, overlapping flares will also hamper the interpretation of iron line reverberation. The best way to disentangle these flares is still the subject of current work. However, Constellation-X may have the required signal to noise to allow the direct fitting of multiple transfer functions to real data (see Young & Reynolds 1999). ## 4. Direct Imaging of black hole accretion disks I will end by briefly discussing an exciting idea which will allow us to image the central regions of nearby AGN with sufficient angular resolution to probe structure on scales smaller than the size of the event horizon. By combining diffraction limited X-ray optics with the interferometric technologies that are currently being developed for the Space Interferometer Mission (SIM), it is within our technological reach to construct an X-ray interferometer capable of achieving sub-microarcsecond resolution (this concept has become known as MAXIM, the Micro-arcsec X-ray Interferometer Mission; see http://maxim.gsfc.nasa.gov/). As well as the obvious appeal of directly imaging an accreting black hole, an observatory capable of achieving $`0.1\mu \mathrm{arcsec}`$ would yield major scientific return. The geometry of the X-ray source (and the spatial nature of the X-ray flares) would be open to direct imaging studies. X-ray activity or fluorescence from within the radius of marginal stability could be easily seen (this region would be well resolved). 
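To put the required angular resolution in context, the sketch below evaluates the angular diameter subtended by the region within the radius of marginal stability; the black hole mass and distance are illustrative values only, not taken from any particular source discussed here.

```python
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
C = 2.998e8         # m/s
M_SUN = 1.989e30    # kg
MPC = 3.086e22      # m
RAD_TO_MUAS = math.degrees(1.0) * 3600.0 * 1e6   # radians -> micro-arcseconds

def angular_diameter_muas(mass_msun, radius_in_rg, distance_mpc):
    """Angular diameter of a region of the given radius (in units of GM/c^2)
    around a black hole of the given mass, seen from the given distance."""
    r = radius_in_rg * G * mass_msun * M_SUN / C**2
    return 2.0 * r / (distance_mpc * MPC) * RAD_TO_MUAS

# Region inside the radius of marginal stability (6 GM/c^2) of a 10^8 solar-mass
# hole at an illustrative distance of 100 Mpc: about 0.12 micro-arcseconds across.
print(angular_diameter_muas(1e8, 6.0, 100.0))
```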
We might also expect there to be X-ray emission from the base of the jet in the region where the magnetic field couples to the black hole spin via the Blandford-Znajek process. Such emission could be imaged, thereby providing the first look at these exotic physical mechanisms at work. If an interferometer can be constructed with sufficient effective area, we will be able to use the fluorescent iron line to make detailed velocity maps across the image. These velocity fields would provide direct constraints of the black hole mass and spin, and implicitly provide a stringent test of strong field General Relativity. ## 5. Conclusions The immediate environment of an accreting supermassive black hole is extremely exotic. Broad iron lines provide us with the best tool to date for studying these regions. ASCA and BeppoSAX observations have already shown us that the accretion disk in at least some AGN extends very close to the black hole (and maybe so close as to suggest that the black hole must be rotating). Furthermore, the detection of broad iron line variability by ASCA is most likely tracking structural changes in the accretion disk and/or X-ray emitting corona. However, large effective area detectors are required to make further progress. XMM will allow these structural changes to be characterized in detail, thereby probing the instabilities that affect the inner accretion disk/corona. Furthermore, XMM will allow us to study iron line variability caused by the accretion disk rotation, allowing us to measure the mass of the black hole and constrain the location/lifetime of the X-ray flares. Eventually, Constellation-X will allow us to search for iron line reverberation. The detection of reverberation will give robust signatures of black hole spin and provide the tools to study the inner disk structure in unprecedented detail. Further in the future, direct imaging of the inner disk and black hole region in nearby AGN will be possible using X-ray interferometry. This will provide the ultimate observational probe of black hole astrophysics. ## ACKNOWLEDGEMENTS CSR appreciates support from Hubble Fellowship grant HF-01113.01-98A. This grant was awarded by the Space Telescope Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555. ## REFERENCES Armitage P. J., Reynolds C. S., submitted Bao G., Hadrava P., Ostgaard E., 1994, ApJ, 423, 63 Bromley B. C., Chen K., Miller W. A., 1997, ApJ, 475, 57 Dabrowski Y., Fabian A. C., Iwasawa K., Lasenby A. N., Reynolds C. S., 1997, MNRAS, 288, L11 Fabian A. C. et al. 1995, MNRAS, 277, L11 Ginzburg V. L., Ozernoi L. M., 1977, Ap&SS, 48, 401 Krolik J. H., 1999, ApJL, 515, L73 Iwasawa K. et al., 1996, MNRAS, 282, 1038 Nandra K., George I. M., Mushotzky R. F., Turner T. J., Yaqoob T., 1999, ApJL, 525, 17 Reynolds C. S., 1999, ApJ, submitted Reynolds C. S., Begelman M. C., 1997, ApJ, 488, 109 Reynolds C. S., Young A. J., Begelman M. C., Fabian A. C., 1999, ApJ, 514, 164 Ruszkowski M., 1999, MNRAS, submitted (astro-ph/9906397) Stella L., 1990, Nat, 344, 747 Tanaka Y. et al., 1995, Nat, 375, 659 Wang J. X, Zhou Y. Y., Wang T. G., 1999, ApJL, 523, 129 Wang J. X., Zhou Y. Y., Xu H. G., Wang T. G., 1999, ApJL, 516, 65 Weaver K. A., Yaqoob T., 1998, ApJ, 502, L139 Young A. J., Reynolds C. S., 1999, ApJ, in press Young A., Fabian A. C., Ross R. R., 1998, MNRAS, 300, L11
no-problem/0001/astro-ph0001420.html
ar5iv
text
# Screw Instability and Blandford-Znajek Mechanism ## 1 Introduction When a magnetic accretion disk surrounds a black hole, the magnetic field lines frozen in the disk drift toward the black hole as the disk plasma is slowly accreted onto the black hole. After the accreted plasma particles get into the black hole, the magnetic field lines which were frozen to the plasma are released and then thread the black hole’s horizon. So the black hole is magnetized and an approximately stationary and axisymmetric magnetic field is formed around it (Thorne, Price, & Macdonald 1986). The existence of a disk is necessary for confining the magnetic field lines threading the black hole. If the black hole is rotating, the magnetic field lines threading it are twisted and toroidal components of magnetic field and thus poloidal electric currents are generated. If the other ends of the magnetic field lines connect with remote non-rotating astrophysical loads, the rotating black hole exerts a torque on the astrophysical loads and the rotational energy of the black hole is extracted and transported to the astrophysical loads via Poynting flux. This is the so-called Blandford-Znajek mechanism (Blandford & Znajek 1977). For a long time the Blandford-Znajek mechanism has been considered to be a plausible way to power the extragalactic jets (Rees et al 1982; Begelman, Blandford, & Rees 1984; Ferrari 1998). \[Recently some people have argued that the Blandford-Znajek mechanism may be less efficient since the electromagnetic power from the accretion disk may dominate the electromagnetic power from the black hole (Ghosh & Abramowicz 1997; Livio, Ogilvie, & Pringle 1999; Li 1999). Blandford and Znajek have also mentioned this possibility in their original paper (Blandford & Znajek 1977). But the situation may be different for a black hole with a geometrically thick disk (Armitage & Natarajan 1999).\] For magnetic field configurations with both poloidal components and toroidal components, the screw instability plays a very important role (Kadomtsev 1966; Bateman 1978; Freidberg 1987). If the toroidal magnetic field is so strong that from one end to the other end the magnetic field lines wind around the symmetry axis once or more, the magnetic field and the plasma confined by it become unstable against long-wave mode perturbations, and the plasma column quickly twists into a helical shape. To maintain a stable magnetic field structure around the black hole, which is necessary for the working of the Blandford-Znajek mechanism, the toroidal components of the magnetic field and the poloidal currents cannot exceed the limits set by the Kruskal-Shafranov condition which is the criterion for the screw instability (Kadomtsev 1966; Freidberg 1987). Recently Gruzinov has shown that if some magnetic field lines connect a black hole with an accretion disk, the black hole’s rotation twists the magnetic field lines sufficiently to excite the screw instability. This can make the black hole to produce quasi-periodical flares, and is argued to be a new mechanism for extracting the rotational energy of black hole (Gruzinov 1999). In this paper we show that the screw instability of magnetic field play a very important role in the Blandford-Znajek mechanism. To make the magnetic field and the plasma confined by it safe against the screw instability, the toroidal components of the magnetic field and thus the poloidal electric currents cannot be too big, which significantly lower the power of the Blandford-Znajek mechanism. 
The screw instability puts a stringent upper bound on the power of the Blandford-Znajek mechanism if the distance from the black hole to the astrophysical loads exceeds some critical value. The implications of the results for the scenario of extragalactic jets powered by the Blandford-Znajek mechanism are discussed. It is argued that jets powered by the Blandford-Znajek mechanism cannot be dominated by Poynting flux at large scales and cannot be collimated by their own toroidal magnetic fields.

## 2 Power of the Blandford-Znajek Mechanism

The power of the Blandford-Znajek mechanism is $`P\approx {\displaystyle \frac{\mathrm{\Omega }_F\left(\mathrm{\Omega }_H-\mathrm{\Omega }_F\right)}{4\pi }}r_H^2B_n\mathrm{\Psi }_H,`$ (1) where $`\mathrm{\Omega }_H`$ is the angular velocity of the black hole, $`\mathrm{\Omega }_F`$ is the angular velocity of the magnetic field lines, $`r_H`$ is the radius of the black hole horizon, $`B_n`$ is the poloidal magnetic field on the black hole horizon (on the horizon the poloidal magnetic field has only normal components), and $`\mathrm{\Psi }_H\approx B_n\pi r_H^2`$ is the magnetic flux through the northern hemisphere of the black hole horizon (Macdonald & Thorne 1982). (Throughout the paper we use geometric units with $`G=c=1`$.)

On the horizon of the black hole, the toroidal magnetic field $`B_H`$ is related to the poloidal electric current $`I`$ flowing into the horizon by $`B_H\approx {\displaystyle \frac{2I}{r_H}},`$ (2) assuming the magnetic field is aligned with the rotation axis of the black hole. The current is related to the poloidal magnetic field on the horizon by $`I\approx {\displaystyle \frac{1}{2}}(\mathrm{\Omega }_H-\mathrm{\Omega }_F)r_H^2B_n.`$ (3) Inserting Eq. (3) into Eq. (2), we obtain a relation between the toroidal magnetic field and the poloidal magnetic field on the horizon $`B_H\approx (\mathrm{\Omega }_H-\mathrm{\Omega }_F)r_HB_n,`$ (4) which can also be written as $`\mathrm{\Omega }_F\approx \mathrm{\Omega }_H-{\displaystyle \frac{1}{r_H}}{\displaystyle \frac{B_H}{B_n}}.`$ (5) Inserting Eq. (4) into Eq. (1) and using $`\mathrm{\Psi }_H\approx B_n\pi r_H^2`$, we get $`P\approx {\displaystyle \frac{1}{4}}B_nB_Hr_H^3\mathrm{\Omega }_F.`$ (6)

## 3 The Screw Instability of Magnetic Field

For a cylindrical magnetic field with both poloidal and toroidal components (the so-called screw pinch configuration), the kink safety factor is defined as $`q={\displaystyle \frac{2\pi RB_{\parallel }}{LB_{\perp }}},`$ (7) where $`L`$ is the length of the cylinder, $`R`$ is the radius of the cylinder, $`B_{\parallel }`$ is the poloidal component of the magnetic field which is parallel to the axis of the cylinder, and $`B_{\perp }`$ is the toroidal component of the magnetic field which is perpendicular to the axis of the cylinder. The screw instability turns on when $`q<1,`$ (8) which is called the Kruskal-Shafranov criterion (Kadomtsev 1966, Freidberg 1987). Though this criterion is usually proved only in flat spacetime, Gruzinov has shown that it also holds for the force-free magnetosphere around a Kerr black hole (Gruzinov 1999). The screw instability is very important for tokamaks and pinches (Bateman 1978). Since it is a kind of long-wave mode instability, the screw instability can quickly disrupt the global structure of the magnetic field.

Now consider a quasi-cylindrical magnetic field configuration above the northern hemisphere of the black hole horizon. Then the kink safety factor is given by Eq. (7).
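As a numerical sketch of the criterion, the following Python fragment evaluates the kink safety factor of Eq. (7) for a dimensionless flux tube; the field strengths and geometry are illustrative values only.

```python
import math

def kink_safety_factor(radius, length, b_parallel, b_perp):
    """Kink safety factor q = 2*pi*R*B_parallel / (L*B_perp) of Eq. (7); the
    Kruskal-Shafranov criterion says the screw instability sets in for q < 1."""
    return 2.0 * math.pi * radius * b_parallel / (length * b_perp)

# Dimensionless illustration: a flux tube ten times longer than its radius.
print(kink_safety_factor(radius=1.0, length=10.0, b_parallel=1.0, b_perp=0.5))  # ~1.26, stable
print(kink_safety_factor(radius=1.0, length=10.0, b_parallel=1.0, b_perp=0.7))  # ~0.90, unstable
```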
If the magnetic flux and electric current in the cylinder are conserved, we have $`B_{}R^2B_nr_H^2,B_{}RB_Hr_H.`$ (9) Then the kink safety factor can be expressed with $`B_n`$ and $`B_H`$ $`q={\displaystyle \frac{2\pi r_HB_n}{LB_H}}.`$ (10) From Eq. (10) and Eq. (8), for the magnetic field to be safe against the screw instability (thus we require $`q>1`$), we must require $`B_H<{\displaystyle \frac{2\pi r_H}{L}}B_n.`$ (11) Eq. (11) gives an upper bound on the toroidal components of the magnetic field for a stable magnetic configuration. Using Eq. (2), the constraint on the induced poloidal electric current is $`I<\pi {\displaystyle \frac{r_H^2}{L}}B_n.`$ (12) From Eq. (5) and Eq. (11), the constraint on the angular velocity of the magnetic field lines is $`\mathrm{\Omega }_H{\displaystyle \frac{2\pi }{L}}<\mathrm{\Omega }_F<\mathrm{\Omega }_H,`$ (13) where $`\mathrm{\Omega }_F<\mathrm{\Omega }_H`$ is required by that energy is extracted from the black hole. The angular velocity of the magnetic field lines (and thus the toroidal components of the magnetic field and the induced poloidal electric current) is determined by the rotation of the black hole and the inertia of the remote astrophysical loads. In the optimal case, i.e. when the power takes its maximum, $`\mathrm{\Omega }_F=\mathrm{\Omega }_H/2`$ (which is called the impedance matching condition) (Macdonald & Thorne 1982). From Eq. (13), for the impedance matching case the magnetic field is stable only if $`L<4\pi /\mathrm{\Omega }_H=8\pi r_H(M/a)`$, where $`M`$ is the mass of the black hole and $`a`$ is the angular momentum per unit mass of the black hole \[$`\mathrm{\Omega }_H=a/(2Mr_H)`$\]. In other words, the impedance matching condition can be achieved only if $`L<8\pi r_H\left(\frac{a}{M}\right)^1`$. ## 4 Upper Bounds on the Power of the Blandford-Znajek Mechanism From Eq. (6), Eq. (11), and $`\mathrm{\Omega }_F<\mathrm{\Omega }_H`$, we obtain an upper bound on the power of the Blandford-Znajek mechanism immediately: $`P<\frac{1}{4}B_n^2r_H^3\mathrm{\Omega }_H\left(\frac{2\pi r_H}{L}\right)`$. With more detailed analysis \[Using Eq. (5) for $`\mathrm{\Omega }_F`$ instead of simply replacing $`\mathrm{\Omega }_F`$ with $`\mathrm{\Omega }_H`$ in Eq. (6)\], we obtain $`P<{\displaystyle \frac{2}{\alpha }}\left(1{\displaystyle \frac{1}{2\alpha }}\right)P_0(\alpha >1),`$ (14) where $`\alpha {\displaystyle \frac{L}{8\pi r_H}}{\displaystyle \frac{a}{M}},`$ (15) which measures the distance that the Blandford-Znajek process can work on; $`P_0{\displaystyle \frac{1}{64}}\left({\displaystyle \frac{a}{M}}\right)^2B_n^2r_H^2,`$ (16) which is the power of the Blandford-Znajek process in the optimal case ($`\mathrm{\Omega }_F=\mathrm{\Omega }_H/2`$). \[Eq. (16) differs from the result of Macdonald et al (Thorne, Price, & Macdonald 1986) by a factor of $`1/2`$ but this is not important for our current purposes.\] When $`\alpha <1`$ \[i.e. $`L<8\pi r_H(M/a)`$\], the impedance matching condition can be achieved and thus the upper bound on the power is given by $`P_0`$ \[Eq. (16)\]. Therefore, we obtain the upper bounds on the power of the Blandford-Znajek mechanism $`P_{\mathrm{max}}=\{\begin{array}{cc}\frac{2}{\alpha }\left(1\frac{1}{2\alpha }\right)P_0\hfill & (\alpha >1)\hfill \\ P_0\hfill & (\alpha <1)\hfill \end{array}.`$ (19) If $`\alpha 1`$, $`P_{\mathrm{max}}\frac{2}{\alpha }P_0`$, the power of Blandford-Znajek mechanism is significantly lowered by the screw instability of magnetic field (Fig. 6). 
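For illustration, the bound (19) can be evaluated directly. The short Python sketch below is only a numerical illustration of Eq. (19); the function name and the sample values of $`\alpha `$ are arbitrary choices.

```python
def p_max_over_p0(alpha):
    """Upper bound on the Blandford-Znajek power in units of the optimal
    power P_0, following Eq. (19): the bound is P_0 itself for alpha < 1
    and (2/alpha)(1 - 1/(2 alpha)) P_0 for alpha > 1, where
    alpha = (L / 8 pi r_H) (a / M)."""
    if alpha <= 1.0:
        return 1.0
    return (2.0 / alpha) * (1.0 - 1.0 / (2.0 * alpha))

if __name__ == "__main__":
    for alpha in (0.5, 1.0, 2.0, 10.0, 100.0):
        print(f"alpha = {alpha:7.1f}   P_max / P_0 = {p_max_over_p0(alpha):.4f}")
    # For alpha >> 1 the bound approaches 2/alpha, i.e. the allowed power
    # falls off roughly as 1/L once the loads are far from the black hole.
```

The bound is continuous at $`\alpha =1`$ and falls off as $`2/\alpha `$ for distant loads, which quantifies how strongly the screw instability suppresses the extractable power.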
The power limit arising from the screw instability can also be stated as follow: for a given power $`P`$ the Blandford-Znajek process can only operate within the distance $`L<8\pi r_H\left({\displaystyle \frac{a}{M}}\right)^1f(\eta ),f(\eta ){\displaystyle \frac{1}{\eta }}\left(1+\sqrt{1\eta }\right),`$ (20) where $`\eta P/P_0`$ is the efficiency of the Blandford-Znajek mechanism ($`0<\eta 1`$ always). Eq. (20) shows an interesting anti-correlation between the power of the central engine (black hole) and the maximum distance that the central engine can work at. For the Blandford-Znajek process to work efficiently (i.e. $`\eta 1`$), the distance $`L`$ cannot be larger than $`8\pi r_H\left(\frac{a}{M}\right)^1`$ (Fig. 6). ## 5 Implications for Extragalactic Jets Powered by the Blandford-Znajek Mechanism Extragalactic jets are usually thought to be associated with very powerful energetic processes in active galactic nuclei (AGN). The Blandford-Znajek process associated with a rapidly rotating supermassive black hole has been considered to be a possible mechanism for powering extragalactic jets since it provides a practical way for extracting the vast rotational energy of the black hole (Rees et al 1982; Begelman, Blandford, & Rees 1984). From the discussions in previous sections, the screw instability of magnetic field prevents the Blandford-Znajek mechanism from working at large distances from the black hole. For the Blandford-Znajek mechanism to work efficiently (i.e. $`\eta =P/P_01`$), the distance within which the Blandford-Znajek mechanism works must satisfy \[see Eq. (20)\] $`L<8\pi r_H\left({\displaystyle \frac{a}{M}}\right)^12.4\times 10^3\mathrm{pc}\left({\displaystyle \frac{M}{10^9M_{\mathrm{}}}}\right)\left({\displaystyle \frac{a}{M}}\right)^1.`$ (21) Typical extragalactic jets have lengths from several kiloparsecs to several hundred kiloparsecs (Bridle & Perley 1984), which are much larger than the limit given by Eq. (21). If the Blandford-Znajek process really works in AGN, it can only work within the distance limited by Eq. (21) unless it has an extremely low efficiency. Beyond that distance, either magnetic flux or poloidal current (or both) must fail to be conserved within the tube of fluid lines. \[Remember that as we move from Eq. (7) to Eq. (10) we have only used the conservation of magnetic flux and poloidal current.\] This implies that the magnetic field and poloidal current originating from the central black hole cannot extend to large distances in the jets, the astrophysical loads in the Blandford-Znajek mechanism have to be located at a very short distance from the central black hole \[the distance is limited by Eq. (21)\]. So, the bulk of Poynting flux from the central black hole is converted into the kinetic energy of plasma particles at the very beginning of jets — in fact the distance given by Eq. (21) is too close to the black hole to be resolved with current observations (Junor, Biretta, & Livio 1999). The jets so produced cannot be dominated by Poynting flux at large radii and cannot be collimated by their own magnetic fields. Therefore, the screw instability of magnetic field gives significant constraints on the scenario of jets powered by the Blandford-Znajek mechanism. The transition from a Poynting flux dominated jet to a matter dominated one could happen very close to the central black hole, but the collimation of such a jet at large radii might be problematic. 
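The scale in Eq. (21) is also easy to check numerically. The sketch below evaluates Eq. (20) in physical units; it assumes the Kerr horizon radius $`r_H=M(1+\sqrt{1-(a/M)^2})`$ (with $`M`$ in gravitational units $`GM/c^2`$) and rounded physical constants, so the numbers are illustrative only.

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
C_LIGHT = 2.998e8    # m s^-1
M_SUN = 1.989e30     # kg
PARSEC = 3.086e16    # m

def f_eta(eta):
    """f(eta) = (1/eta)(1 + sqrt(1 - eta)) from Eq. (20); eta = P/P_0."""
    return (1.0 + math.sqrt(1.0 - eta)) / eta

def l_max_pc(mass_msun, spin, eta=1.0):
    """Maximum distance (in pc) at which the BZ process can operate, Eq. (20),
    using the Kerr horizon radius r_H = M (1 + sqrt(1 - (a/M)^2))."""
    m_grav = G * mass_msun * M_SUN / C_LIGHT**2        # gravitational radius in metres
    r_h = m_grav * (1.0 + math.sqrt(1.0 - spin**2))    # horizon radius in metres
    return 8.0 * math.pi * r_h / spin * f_eta(eta) / PARSEC

if __name__ == "__main__":
    for spin in (0.1, 0.5, 0.9, 0.998):
        print(f"M = 1e9 Msun, a/M = {spin:5.3f}, eta = 1: "
              f"L_max = {l_max_pc(1e9, spin):.2e} pc")
    # In the slow-rotation limit (r_H -> 2M) this reduces to the
    # ~2.4e-3 pc (M / 1e9 Msun) (a/M)^-1 scale of Eq. (21), many orders of
    # magnitude smaller than kiloparsec-scale jets.
```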
Though current observations cannot tell whether extragalactic jets are Poynting flux dominated, some observations show that much of the particle acceleration occurs at 1–10 pc from the central black hole (Heinz & Begelman 1997). ## 6 Conclusions When the screw instability of the magnetic field is taken into account, the Blandford-Znajek mechanism can only work within a finite distance from the black hole. Beyond that distance, either the conservation of poloidal current or the conservation of magnetic flux (or both) must break down; otherwise the screw instability sets in and the global magnetic field structure is disrupted. The distance from the black hole to the boundary within which the Blandford-Znajek mechanism can work is $`L_{\mathrm{max}}\approx 8\pi r_H\left({\displaystyle \frac{a}{M}}\right)^{-1}\left({\displaystyle \frac{P}{P_0}}\right)^{-1}\left(1+\sqrt{1-{\displaystyle \frac{P}{P_0}}}\right),`$ (22) where $`P\le P_0`$ always. If $`P\approx P_0`$, $`L_{\mathrm{max}}\approx 8\pi r_H\left(\frac{a}{M}\right)^{-1}`$. Thus, the Blandford-Znajek mechanism can only work efficiently in the close neighborhood of the black hole. In applying our results to extragalactic jets, we have found that the screw instability significantly constrains the scenario of extragalactic jets powered by the Blandford-Znajek mechanism. The jets produced by the Blandford-Znajek mechanism cannot be dominated by Poynting flux at large distances from the central engine and cannot be collimated by their own magnetic fields. Though various magnetohydrodynamic instabilities have been discussed for the propagation and acceleration of jets (Appl & Camenzind 1992; Begelman 1998 and references therein), the constraint on the power of the central engine has not previously been considered. The results in this paper have shown that the screw instability of the magnetic field significantly constrains the power of the central engine. The discussion can also be extended to the case in which jets are produced by electromagnetic processes associated with an accretion disk (Blandford & Payne 1982). Since the radius of the disk is usually larger than the radius of the black hole, and since jets produced by the disk are less likely to be Poynting flux dominated because the coronae of accretion disks are loaded with mass, jets produced by an accretion disk seem to be less vulnerable to the screw instability than jets produced by the black hole. I am grateful to Bohdan Paczyński, Paul Wiita, Christian Fendt, and Julian Krolik for helpful discussions. I am also grateful to the anonymous referee for valuable comments. This work was supported by the NSF grant AST-9819787.
# Exact solution of site and bond percolation on small-world networks ## Abstract We study percolation on small-world networks, which has been proposed as a simple model of the propagation of disease. The occupation probabilities of sites and bonds correspond to the susceptibility of individuals to the disease and the transmissibility of the disease respectively. We give an exact solution of the model for both site and bond percolation, including the position of the percolation transition at which epidemic behavior sets in, the values of the two critical exponents governing this transition, and the mean and variance of the distribution of cluster sizes (disease outbreaks) below the transition. In the late 1960s, Milgram performed a number of experiments which led him to conclude that, despite there being several billion human beings in the world, any two of them could be connected by only a short chain of intermediate acquaintances of typical length about six . This result, known as the “small-world effect”, has been confirmed by subsequent studies and is now widely believed to be correct, although opinions differ about whether six is an accurate estimate of typical chain length . The small-world effect can be easily understood in terms of random graphs for which typical vertex–vertex distances increase only as the logarithm of the total number of vertices. However, random graphs are a poor representation of the structure of real social networks, which show a “clustering” effect in which there is an increased probability of two people being acquainted if they have another acquaintance in common. This clustering is absent in random graphs. Recently, Watts and Strogatz have proposed a new model of social networks which possesses both short vertex–vertex distances and a high degree of clustering. In this model, sites are arranged on a one-dimensional lattice of size $`L`$, and each site is connected to its nearest neighbors up to some fixed range $`k`$. Then additional links—“shortcuts”—are added between randomly selected pairs of sites with probability $`\varphi `$ per link on the underlying lattice, giving an average of $`\varphi kL`$ shortcuts in total. The short-range connections produce the clustering effect while the long-range ones give average distances which increase logarithmically with system size, even for quite small values of $`\varphi `$. This model, commonly referred to as the “small-world model,” has attracted a great deal of attention from the physics community. A number of authors have looked at the distribution of path lengths in the model, including scaling forms and exact and mean-field results , while others have looked at a variety of dynamical systems on small-world networks . A review of recent developments can be found in Ref. . One of the most important consequences of the small-world effect is in the propagation of disease. Clearly a disease can spread much faster through a network in which the typical person-to-person distance is $`\mathrm{O}(\mathrm{log}L)`$ than it can through one in which the distance is $`\mathrm{O}(L)`$. Epidemiology recognizes two basic parameters governing the effects of a disease: the susceptibility—the probability that an individual exposed to a disease will contract it—and the transmissibility—the probability that contact between an infected individual and a healthy but susceptible one will result in the latter contracting the disease. Newman and Watts studied a model of disease in a small-world which incorporates these variables. 
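To make the construction concrete, the network just described can be generated in a few lines of code. The following Python sketch is only an illustration (the parameter values are arbitrary and the system is much smaller than those used in the simulations reported below): it builds the ring with range-$`k`$ connections, adds roughly $`\varphi kL`$ shortcuts, and samples a few vertex–vertex distances by breadth-first search.

```python
import random
from collections import deque

def small_world(L, k, phi, rng):
    """Ring of L sites, each joined to its neighbours up to range k, plus
    shortcuts added with probability phi per underlying bond
    (about phi*k*L shortcuts in total)."""
    adj = [set() for _ in range(L)]
    for i in range(L):
        for j in range(1, k + 1):
            adj[i].add((i + j) % L)
            adj[(i + j) % L].add(i)
    n_shortcuts = 0
    for _ in range(k * L):
        if rng.random() < phi:
            a = rng.randrange(L)
            b = rng.randrange(L)
            while b == a:
                b = rng.randrange(L)
            adj[a].add(b)
            adj[b].add(a)
            n_shortcuts += 1
    return adj, n_shortcuts

def distance(adj, src, dst):
    """Shortest-path (hop) distance between two vertices, by breadth-first search."""
    if src == dst:
        return 0
    seen = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen[v] = seen[u] + 1
                if v == dst:
                    return seen[v]
                queue.append(v)
    return None

if __name__ == "__main__":
    rng = random.Random(0)
    L, k, phi = 20000, 1, 0.05
    adj, n_shortcuts = small_world(L, k, phi, rng)
    print(f"shortcuts added: {n_shortcuts} (expected about {phi * k * L:.0f})")
    pairs = [(rng.randrange(L), rng.randrange(L)) for _ in range(10)]
    print("sample vertex-vertex distances:", [distance(adj, a, b) for a, b in pairs])
```

Percolation enters when the sites or bonds of such a network are occupied only with some probability, as described next.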
In this model a randomly chosen fraction $`p`$ of the sites or bonds in the small-world model are “occupied” to represent the effects of these two parameters. A disease outbreak which starts with a single individual can then spread only within a connected cluster of occupied sites or bonds. Thus the problem of disease spread maps onto a site or bond percolation problem. At some threshold value $`p_c`$ of the percolation probability, the system undergoes a percolation transition which corresponds to the onset of epidemic behavior for the disease in question. Newman and Watts gave an approximate solution for the position of this transition on a small-world network. In this paper, we give an exact solution for both site and bond percolation on small-world networks using a generating function method. Our method gives not only the exact position of the percolation threshold, but also the values of the two critical exponents governing behavior close to the transition, the complete distribution of the sizes of disease outbreaks for any value of $`p`$ below the transition, and closed-form expressions for the mean and variance of the distribution. A calculation of the value of $`p_c`$ only, using a transfer-matrix method, has appeared previously as Ref. . The basic idea behind our solution is to find the distribution of “local clusters”—clusters of occupied sites or bonds on the underlying lattice—and then calculate how the shortcuts join these local clusters together to form larger ones. We focus on the quantity $`P(n)`$, which is the probability that a randomly chosen site belongs to a connected cluster of $`n`$ sites. This is also the probability that a disease outbreak starting with a randomly chosen individual will affect $`n`$ people. It is not the same as the distribution of cluster sizes for the percolation problem, since the probability of an outbreak starting in a cluster of size $`n`$ increases with cluster size in proportion to $`n`$, all other things being equal. The cluster size distribution is therefore proportional to $`P(n)/n`$, and can be calculated easily from the results given in this paper, although we will not do so. We start by examining the site percolation problem, which is the simpler case. Since $`P(n)`$ is difficult to evaluate directly, we turn to a generating function method for its calculation. We define $$H(z)=\underset{n=0}{\overset{\mathrm{}}{}}P(n)z^n.$$ (1) For all $`p<1`$, as we show below, the distribution of local clusters falls off with cluster size exponentially, so that every shortcut leads to a different local cluster for $`L`$ large: the probability of two shortcuts connecting the same pair of local clusters falls off as $`L^1`$. This means that any complete cluster of sites consists of a local cluster with $`m0`$ shortcuts leading from it to $`m`$ other clusters. Thus $`H(z)`$ satisfies the Dyson-equation-like iterative condition illustrated graphically in Fig. Exact solution of site and bond percolation on small-world networks, and we can write it self-consistently as $$H(z)=\underset{n=0}{\overset{\mathrm{}}{}}P_0(n)z^n\underset{m=0}{\overset{\mathrm{}}{}}P(m|n)[H(z)]^m.$$ (2) In this equation $`P_0(n)`$ is the probability of a randomly chosen site belonging to a local cluster of size $`n`$, which is $$P_0(n)=\{\begin{array}{cc}1p\hfill & \text{for }n=0\hfill \\ npq^{n1}(1q)^2\hfill & \text{for }n1\text{,}\hfill \end{array}$$ (3) with $`q=1(1p)^k`$. 
$`P(m|n)`$ is the probability of there being exactly $`m`$ shortcuts emerging from a local cluster of size $`n`$. Since there are $`2\varphi kL`$ ends of shortcuts in the network, $`P(m|n)`$ is given by the binomial $$P(m|n)=\left(\genfrac{}{}{0pt}{}{2\varphi kL}{m}\right)\left[\frac{n}{L}\right]^m\left[1\frac{n}{L}\right]^{2\varphi kLm}.$$ (4) Using this expression Eq. (2) becomes $$H(z)=\underset{n=0}{\overset{\mathrm{}}{}}P_0(n)z^n\left[1+\left(H(z)1\right)\frac{n}{L}\right]^{2\varphi kL}=\underset{n=0}{\overset{\mathrm{}}{}}P_0(n)\left[z\mathrm{e}^{2k\varphi (H(z)1)}\right]^n,$$ (5) for $`L`$ large. The remaining sum over $`n`$ can now be performed conveniently by defining $$H_0(z)=\underset{n=0}{\overset{\mathrm{}}{}}P_0(n)z^n=1p+pz\frac{(1q)^2}{(1qz)^2},$$ (6) where the second equality holds in the limit of large $`L`$ and we have made use of (3). $`H_0(z)`$ is the generating function for the local clusters. Now we notice that $`H(z)`$ in Eq. (5) is equal to $`H_0(z)`$ with $`zz\mathrm{e}^{2k\varphi (H(z)1)}`$. Thus $$H(z)=H_0\left(z\mathrm{e}^{2k\varphi (H(z)1)}\right).$$ (7) $`H(z)`$ can be calculated directly by iteration of this equation starting with $`H(z)=1`$ to give the complete distribution of sizes of epidemics in the model. It takes $`n`$ steps of the iteration to calculate $`P(n)`$ exactly. The first few steps give $`P(0)`$ $`=`$ $`1p,`$ (8) $`P(1)`$ $`=`$ $`p(1q)^2\mathrm{e}^{2k\varphi p},`$ (9) $`P(2)`$ $`=`$ $`p(1q)^2\left[2q+2k\varphi p(1q)^2\right]\mathrm{e}^{4k\varphi p}.`$ (10) It is straightforward to verify that these are correct. We could also iterate Eq. (7) numerically and then estimate $`P(n)`$ using, for instance, forward differences at $`z=0`$. Unfortunately, like many calculations involving numerical derivatives, this method suffers from severe machine-precision problems which limit us to small values of $`n`$, on the order of $`n20`$. A much better technique is to evaluate $`H(z)`$ around a contour in the complex plane and calculate the derivatives using the Cauchy integral formula: $$P(n)=\frac{1}{n!}\frac{\mathrm{d}^nH}{\mathrm{d}z^n}|_{z=0}=\frac{1}{2\pi \mathrm{i}}\frac{H(z)}{z^{n+1}}dz.$$ (11) A good choice of contour in the present case is the unit circle $`|z|=1`$. Using this method we have been able to calculate the first thousand derivatives of $`H(z)`$ without difficulty. In Fig. Exact solution of site and bond percolation on small-world networks we show the distribution of outbreak sizes as a function of $`n`$ calculated from Eq. (11) for a variety of values of $`p`$. On the same plot we also show the distribution of outbreaks measured in computer simulations of the model on systems of $`L=10^7`$ sites. As the figure shows, the agreement between the two is excellent. We can also calculate any moment of the distribution in closed form using Eq. (7). For example, the mean outbreak size is given by the first derivative of $`H`$: $$n=H^{}(1)=\frac{H_0^{}(1)}{12k\varphi H_0^{}(1)}=\frac{p(1+q)}{1q2k\varphi p(1+q)},$$ (12) and the variance is given by $`n^2n^2`$ $`=`$ $`H^{\prime \prime }(1)+H^{}(1)[H^{}(1)]^2,`$ (13) $`=`$ $`{\displaystyle \frac{p[1+3q3q^2q^3p(1q)(1+q)^2+2k\varphi p^2(1+q)^3]}{[1q2k\varphi p(1+q)]^3}}.`$ (14) In the inset of Fig. Exact solution of site and bond percolation on small-world networks we show Eq. (12) for various values of $`\varphi `$ along with numerical results from simulations of the model, and the two are again in good agreement. The mean outbreak size diverges at the percolation threshold $`p=p_c`$. 
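The whole calculation can be carried out in a few lines of code. The sketch below is an illustration (the parameter values are arbitrary, and the iteration count and number of contour points are simple choices that work well below the transition): it iterates Eq. (7) on the unit circle, recovers $`P(n)`$ through the Cauchy formula (11) by a discrete Fourier transform, compares the resulting mean with the closed form (12), and locates the occupation probability at which the denominator of Eq. (12) vanishes, i.e. where the mean outbreak size diverges.

```python
import numpy as np
from scipy.optimize import brentq

def h0_site(z, p, k):
    """Local-cluster generating function for site percolation, Eq. (6)."""
    q = 1.0 - (1.0 - p)**k
    return 1.0 - p + p * z * (1.0 - q)**2 / (1.0 - q * z)**2

def outbreak_distribution(p, k, phi, n_points=4096, n_iter=400):
    """P(n) for n = 0 ... n_points-1: iterate the self-consistency Eq. (7)
    at points on the unit circle, then invert via the Cauchy formula (11),
    implemented here as a discrete Fourier transform."""
    z = np.exp(2j * np.pi * np.arange(n_points) / n_points)
    H = np.ones(n_points, dtype=complex)
    for _ in range(n_iter):
        H = h0_site(z * np.exp(2.0 * k * phi * (H - 1.0)), p, k)
    return np.fft.fft(H).real / n_points

def mean_outbreak_size(p, k, phi):
    """Closed-form mean outbreak size, Eq. (12)."""
    q = 1.0 - (1.0 - p)**k
    return p * (1.0 + q) / (1.0 - q - 2.0 * k * phi * p * (1.0 + q))

def site_threshold(phi, k):
    """Occupation probability at which the mean (12) diverges."""
    def denom(p):
        q = 1.0 - (1.0 - p)**k
        return 1.0 - q - 2.0 * k * phi * p * (1.0 + q)
    return brentq(denom, 1e-9, 1.0 - 1e-9)

if __name__ == "__main__":
    p, k, phi = 0.25, 1, 0.1                          # well below the transition
    P = outbreak_distribution(p, k, phi)
    print("P(0), P(1), P(2):", P[0], P[1], P[2])      # compare with Eqs. (8)-(10)
    n = np.arange(len(P))
    print("mean from P(n)   :", float(np.sum(n * P)))
    print("mean from Eq.(12):", mean_outbreak_size(p, k, phi))
    print("p_c for this phi :", site_threshold(phi, k))
```

As $`p`$ approaches this value the mean outbreak size grows without bound.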
This threshold marks the onset of epidemic behavior in the model and occurs at the zero of the denominator of Eq. (12). The value of $`p_c`$ is thus given by $$\varphi =\frac{1q_c}{2kp_c(1+q_c)}=\frac{(1p_c)^k}{2kp_c(2(1p_c)^k)},$$ (15) in agreement with Ref. . The value of $`p_c`$ calculated from this expression is shown in the left panel of Fig. Exact solution of site and bond percolation on small-world networks for three different values of $`k`$. The denominator of Eq. (12) is analytic at $`p=p_c`$ and has a non-zero first derivative with respect to $`p`$, so that to leading order the divergence in $`n`$ goes as $`(p_cp)^1`$ as we approach percolation. Defining a critical exponent $`\sigma `$ in the conventional fashion $`n(p_cp)^{1/\sigma }`$, we then have $$\sigma =1.$$ (16) Near $`p_c`$ we expect $`P(n)`$ to behave as $$P(n)n^\tau \mathrm{e}^{n/n^{}}\text{as }n\mathrm{}\text{.}$$ (17) It is straightforward to show that both the typical outbreak size $`n^{}`$ and the exponent $`\tau `$ are governed by the singularity of $`H(z)`$ closest to the origin: $`n^{}=(\mathrm{log}z^{})^1`$, where $`z^{}`$ is the position of the singularity, and $`\tau =\alpha +1`$, where $`H(z)(z^{}z)^\alpha `$ close to $`z^{}`$. In general, the singularity of interest may be either finite or not; the order of the lowest derivative of $`H(z)`$ which diverges at $`z^{}`$ depends on the value of $`\alpha `$. In the present case, $`H(z^{})`$ is finite but the first derivative diverges, and we can use this to find $`z^{}`$ and $`\alpha `$. Although we do not have a closed-form expression for $`H(z)`$, it is simple to derive one for its functional inverse $`H^1(w)`$. Putting $`H(z)w`$ and $`zH^1(w)`$ in Eq. (7) and rearranging we find $$H^1(w)=H_0^1(w)\mathrm{e}^{2k\varphi (1w)}.$$ (18) The singularity in $`H(z)`$ corresponds to the point $`w^{}`$ at which the derivative of $`H^1(w)`$ is zero, which gives $`2k\varphi z^{}H_0^{}(z^{})=1`$, making $`z^{}=\mathrm{e}^{1/n^{}}`$ a real root of the cubic equation $$(1qz)^32k\varphi pz(1q)^2(1+qz)=0.$$ (19) The second derivative of $`H^1(w)`$ is non-zero at $`w^{}`$, which implies that $`H(z)(z^{}z)^{1/2}`$ and hence $`\alpha =\frac{1}{2}`$ and the outbreak size exponent is $$\tau =\frac{3}{2}.$$ (20) A power-law fit to the simulation data for $`P(n)`$ shown in Fig. Exact solution of site and bond percolation on small-world networks gives $`\tau =1.501\pm 0.001`$ in good agreement with this result. The values $`\sigma =1`$ and $`\tau =\frac{3}{2}`$ put the small-world percolation problem in the same universality class as percolation on a random graph , which seems reasonable since the effective dimension of the small-world model in the limit of large system size is infinite just as it is for a random graph. We close our analysis of the site percolation problem by noting that Eq. (7) is similar in structure to the equation $`H(z)=ze^{H(z)}`$ for the generating function of the set of rooted, labeled trees. This leads us to conjecture that it may be possible to find a closed-form expression for the coefficients of the generating function $`H(z)`$ using the Lagrange inversion formula . Turning to bond percolation, we can apply the same formalism as above with only two modifications. First, the probability $`P_0(n)`$ that a site belongs to a local cluster of size $`n`$ is different for bond percolation and consequently so is $`H_0(z)`$ (Eq. (6)). For the case $`k=1`$ $$P_0(n)=np^{n1}(1p)^2,$$ (21) where $`p`$ is now the bond occupation probability. 
This expression is the same as Eq. (3) for the site percolation case except that $`P_0(0)`$ is now zero and $`P_0(n1)`$ contains one less factor of $`p`$. $`H_0(z)`$ for $`k=1`$ is $$H_0(z)=z\frac{(1p)^2}{(1pz)^2}.$$ (22) For $`k>1`$, calculating $`P_0(n)`$ is considerably more complex, and in fact it is not clear whether a closed-form solution exists. However, it is possible to write down the form of $`H_0(z)`$ directly using the method given in Ref. . For $`k=2`$, for instance, $$H_0(z)=\frac{z(1p)^4\left(12pz+p^3(1z)z+p^2z^2\right)}{14pz+p^5(23z)z^2p^6(1z)z^2+p^4z^2(1+3z)+p^2z(4+3z)p^3z\left(1+5z+z^2\right)}.$$ (23) The second modification to the method is that in order to connect two local clusters a shortcut now must not only be present (which happens with probability $`\varphi `$) but must also be occupied (which happens with probability $`p`$). This means that every former occurrence of $`\varphi `$ is replaced with $`\varphi p`$. The rest of the analysis follows through as before and we find that $`H(z)`$ satisfies the recurrence relation $$H(z)=H_0\left(z\mathrm{e}^{2k\varphi p(H(z)1)}\right),$$ (24) with $`H_0`$ as above. Thus, for example, the mean outbreak size is now $$n=H^{}(1)=\frac{H_0^{}(1)}{12k\varphi pH_0^{}(1)},$$ (25) and the percolation transition occurs at $`2k\varphi pH_0^{}(1)=1`$, which gives $$\varphi =\frac{1p_c}{2p_c(1+p_c)}$$ (26) for $`k=1`$ and $$\varphi =\frac{(1p_c)^3(1p_c+p_c^2)}{4p_c(1+3p_c^23p_c^32p_c^4+5p_c^52p_c^6)}$$ (27) for $`k=2`$. As in the site percolation case, the critical exponents are $`\sigma =1`$ and $`\tau =\frac{3}{2}`$. In the right panel of Fig. Exact solution of site and bond percolation on small-world networks we show curves of $`p_c`$ as a function of $`\varphi `$ for the bond percolation model for $`k=1`$ and $`k=2`$, along with numerical results for the same quantities. The agreement between the exact solution and the simulation results is good. We can also apply our method to the case of simultaneous site and bond percolation, by replacing $`P_0(n)`$ with the appropriate distribution of local cluster sizes and making the replacement $`\varphi \varphi p_{\mathrm{bond}}`$ as above. The developments are simple for the case $`k=1`$ but the combinatorics become tedious for larger $`k`$ and so we leave these calculations to the interested (and ambitious) reader. To conclude, we have studied the site and bond percolation problems in the Watts–Strogatz small-world model as a simple model of the spread of disease. Using a generating function method we have calculated exactly the position of the percolation transition at which epidemics first appear, the values of the two critical exponents describing this transition, and the sizes of disease outbreaks below the transition. We have confirmed our results with extensive computer simulations of disease spread in small-world networks. Finally, we would like to point out that the method described here can in principle be extended to small-world networks built on underlying lattices of higher dimensions . Only the generating function for the local clusters $`H_0(z)`$ needs to be recalculated, although this is no trivial task; such a calculation for a square lattice with $`k=1`$ would be equivalent to a solution of the normal percolation problem on such a lattice, something which has not yet been achieved. Even without a knowledge of $`H_0(z)`$, however, it is possible to deduce some results. 
For example, we believe that the critical exponents will take the values $`\sigma =1`$ and $`\tau =\frac{3}{2}`$, just as in the one-dimensional case, for the exact same reasons. It would be possible to test this conjecture numerically. The authors are grateful to Michael Renardy for pointing out Eq. (18), and to Keith Briggs, Noam Elkies, Philippe Flajolet, and David Rusin for useful comments. This work was supported in part by the Santa Fe Institute and DARPA under grant number ONR N00014–95–1–0975.
# Neutrino Detection using Lead Perchlorate ## 1 Introduction Due to its large cross section and relatively low cost, Pb has attracted the interest of a number of groups as a target for neutrino interactions to study supernovae or oscillations. As a result, several cross section calculations have been carried out recently. The interesting neutrino interactions on Pb consist of: $`\begin{array}{cccc}\nu _e+{}^{208}Pb\hfill & \rightarrow & {}^{208}Bi^{*}+e^{-}\hfill & \hfill (CC)\\ & & \rightarrow {}^{207}Bi+x\gamma +yn\hfill & \\ \nu _x+{}^{208}Pb\hfill & \rightarrow & {}^{208}Pb^{*}+\nu _x^{\prime }\hfill & \hfill (NC)\\ & & \rightarrow {}^{207}Pb+x\gamma +yn\hfill & \end{array}`$ The number of neutrons emitted (0, 1, or 2) depends on the neutrino energy and on whether the interaction is via the charged current (CC) or neutral current (NC). The nuclear physics of this system is described in Ref. . A lead-based neutrino detector must have an appreciable density of Pb atoms and the capability of detecting the electrons, gammas and neutrons produced in the reaction. Lead perchlorate (Pb(ClO₄)₂) has a very high solubility in water (500 g Pb(ClO₄)₂ per 100 g H₂O), and the saturated solution appears transparent to the eye. The cost of an 80% solution is approximately $10,000/tonne in quantities of 100 tonne . This raises the possibility that a cost-effective, Pb-based, liquid Čerenkov detector can be constructed. The presence of ³⁵Cl provides a nucleus with a high cross section for neutron capture, with the subsequent emission of capture $`\gamma `$ rays totaling 8.4 MeV. Using Pb as a target would make a powerful supernova detector . The average energies of neutrinos emitted by a supernova are expected to follow a hierarchy: $`E_{\nu _e}<E_{\overline{\nu }_e}<E_{\nu _{\mu ,\tau }}`$. The observation of high-energy $`\nu _e`$ would therefore be an indication of $`\nu _{\mu ,\tau }`$ oscillations. The large cross section and delayed-coincidence $`\nu _e`$ signature of Pb could provide a high-statistics oscillation experiment at a beam stop, where a short-duration beam spill such as at ISIS allows the temporal separation of any monoenergetic $`\nu _e`$ resulting from $`\nu _\mu `$ oscillation. The hydrogen content of the Pb(ClO₄)₂ solution also makes the detector sensitive to $`\overline{\nu }_\mu \rightarrow \overline{\nu }_e`$ oscillations. Finally, measuring the cross section for neutrino interactions in Pb is also of importance to supernova modelers investigating the explosion mechanism and the transmutation of nuclei. ## 2 Physical Properties Some relevant properties of Pb(ClO₄)₂ are given in Table 1. To build a large Čerenkov detector viewed by photomultiplier tubes from the periphery, the attenuation of the light must be minimal. Data on the refractive index, spectral transmission and attenuation length of various Pb(ClO₄)₂ solutions were obtained using an 80% solution from a commercial source . No attempt to filter suspended particulates or purify the solution was made. The strength of the solution was reduced using deionized water with an attenuation length greater than 20 meters. The spectral transmission, referenced to a deionized water sample, is given in Figure 2. There are no obvious absorption regions in the Pb(ClO₄)₂ sample between 300 and 600 nm, the sensitive region of most PMTs. 
Figure 3 shows the attenuation of light at 430 nm in an 80% solution. These data were obtained by passing a monochromatic, collimated beam of light through a column of liquid and measuring the transmittance using a PMT. The length of the column could be varied from 0 to $`\sim `$100 cm. ## 3 Discussion The lack of absorption lines in the transmission spectrum of Pb(ClO₄)₂ is encouraging. However, the current attenuation length of 43 cm in an 80% solution is too small to realize a conventional Čerenkov neutrino detector. Furthermore, diluting the solution with high-purity water resulted in a significant reduction of the attenuation length while the transmission spectra were unaffected. This suggests that the loss of light is due to scattering, perhaps from the formation of Pb salts or polymeric molecules such as Pb₄(OH)₄, possibly as a result of reactions with dissolved O₂ and CO₂. The science of building massive, low-background water Čerenkov detectors is well understood. To demonstrate the feasibility of a Pb(ClO₄)₂ Čerenkov detector, it remains to investigate the chemistry pertinent to light transmission in the solution.
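The measurement described above amounts to fitting an exponential (Beer–Lambert) decay of the transmitted intensity with column length. The sketch below shows such a fit; the transmittance values in it are invented purely for illustration (chosen only to be roughly consistent with the 43 cm attenuation length quoted above) and are not the measured data behind Figure 3.

```python
import numpy as np
from scipy.optimize import curve_fit

def transmitted(x_cm, i0, att_len_cm):
    """Beer-Lambert law: I(x) = I0 * exp(-x / attenuation_length)."""
    return i0 * np.exp(-x_cm / att_len_cm)

# Hypothetical relative transmittance at 430 nm versus column length
# (illustrative numbers only, not the measured values).
x_cm = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
signal = np.array([1.00, 0.63, 0.40, 0.25, 0.155, 0.10])

popt, pcov = curve_fit(transmitted, x_cm, signal, p0=(1.0, 50.0))
att_len, att_len_err = popt[1], np.sqrt(pcov[1, 1])
print(f"fitted attenuation length: {att_len:.1f} +/- {att_len_err:.1f} cm")
```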
# Presence of Many Stable Nonhomogeneous States in an Inertial Car-Following Model ## Abstract A new single-lane car-following model of traffic flow is presented. The model is inertial and free of collisions. It demonstrates experimentally observed features of traffic flow such as the existence of three regimes: free, fluctuative (synchronized) and congested (jammed) flow; bistability of free and fluctuative states in a certain range of densities, which causes hysteresis in transitions between these states; jumps in the density-flux plane in the fluctuative regime; and a gradual spatial transition from synchronized to free flow. Our model suggests that in the fluctuative regime there exist many stable states with different wavelengths, and that the velocity fluctuations in the congested flow regime decay approximately according to a power law in time. In recent years, growing effort has been devoted to understanding traffic flow dynamics. Recent experiments show that traffic flow demonstrates complex physical phenomena, among which are: * The existence of three states: free flow, "synchronized" (or "fluctuative") flow and traffic jams (at low, intermediate and high densities, respectively). The second state has two essential features: synchronization of the flow in different lanes (for multilane traffic) and fluctuations performed by the system in the density-flux plane. Since our model is single-lane, we will refer to this state as "fluctuative". * Hysteresis, which is observed in transitions between the free and the fluctuative flow. * The long survival time of traffic jams. Modeling of traffic flow is traditionally performed using two approaches: the microscopic, or car-following, approach, which describes the nearest-neighbor interaction between two consecutive cars and investigates its influence on the flow (see e.g. \[3-5\]), and the macroscopic, or continuous, approach, which represents the flowing traffic as a continuous medium and describes it using hydrodynamical partial differential equations (see e.g. \[7-9\]). Wide surveys of these models are given in \[10-12\]. In this Letter we introduce an inertial single-lane car-following model which is free of collisions. We study the model both numerically and analytically and find the existence of three regimes in traffic flow: a free flow regime at low densities (where each car moves with an almost constant velocity), a fluctuative flow regime at intermediate densities (where stable periodic oscillations of the velocities of all cars are observed) and a congested or jammed flow regime at high densities (where, due to the high density, all the cars tend to move with the same, relatively small velocity). Our model predicts the existence of many inhomogeneous stable states in the fluctuative regime and demonstrates hysteresis in transitions between the free and fluctuative regimes. The experimentally observed long survival time of jams may be explained by our finding that the fluctuations in the congested flow regime decay slowly, according to a power law. To formulate the model we assume that car acceleration is affected by three factors: * aspiration to keep the safety time gap from the car ahead, * pre-braking if the car ahead is much slower, * aspiration not to exceed significantly the permitted velocity. 
In mathematical description, the acceleration of the $`n`$th car $`a_n`$ is given by a sum of three terms depending on its coordinate $`x_n`$, velocity $`v_n`$, distance to the car ahead $`\mathrm{\Delta }x_n=x_{n+1}x_n`$ and the velocities difference $`\mathrm{\Delta }v_n=v_{n+1}v_n`$ : $$a_n=A(1\frac{\mathrm{\Delta }x_n^0}{\mathrm{\Delta }x_n})\frac{Z^2(\mathrm{\Delta }v_n)}{2(\mathrm{\Delta }x_nD)}kZ(v_nv_{per}),$$ (1) where $`A`$ is a sensitivity parameter, $`D`$ is the minimal distance between consecutive cars, $`v_{per}`$ is the permitted velocity and $`k`$ is a constant. The safety distance $`\mathrm{\Delta }x_n^0=v_nT+D`$ depends on $`T`$, which is the safety time gap constant. The function $`Z`$ is defined as $`Z(x)=(x+|x|)/2`$. Note that Eq.(1) can be generalized by adding a noise term. In the following we discuss in more details the terms in the right side of (1): * The first term plays an important role when velocity difference between consequtive cars is relatively small. In this case the $`n`$th car accelerates if $`\mathrm{\Delta }x_n>\mathrm{\Delta }x_n^0`$ and brakes if $`\mathrm{\Delta }x_n<\mathrm{\Delta }x_n^0`$. The choice of function in this term is not unique. Replacing it by other functions of $`\mathrm{\Delta }x_n`$ which are increasing, equal to zero if $`\mathrm{\Delta }x_n=\mathrm{\Delta }x_n^0`$ and tend to $`\mathrm{}`$ if $`\mathrm{\Delta }x_n0`$, such as $`A\mathrm{log}(\mathrm{\Delta }x_n/\mathrm{\Delta }x_n^0)`$, gives similar results. * The second term plays an important role when $`v_nv_{n+1}`$. A car getting close to a much slower car starts braking even if $`\mathrm{\Delta }x_n>\mathrm{\Delta }x_n^0`$. This term corresponds to the negative acceleration needed to reduce $`|\mathrm{\Delta }v_n|`$ to $`0`$ as $`\mathrm{\Delta }x_nD`$. * The dissipative third term is a repulsive force acting when the velocity exceeds the permitted velocity. Unlike optimal velocity models the acceleration in our model depends explicitly on $`\mathrm{\Delta }x`$ which enables us to make the flow free of collisions. The motion of cars is therefore described by the following system of ordinary differential equations $$\{\begin{array}{ccc}\dot{x}_n\hfill & =& v_n,\hfill \\ & & \\ \dot{v}_n\hfill & =& A(1\frac{v_nT+D}{x_{n+1}x_n})\hfill \\ & & \\ & & \frac{Z^2(v_nv_{n+1})}{2(x_{n+1}x_nD)}kZ(v_nv_{per}),\hfill \end{array}$$ (2) $`n=1,2,\mathrm{}N`$ with periodic boundary conditions $$x_{N+1}=x_1+\frac{N}{\rho },v_{N+1}=v_1.$$ A solution of Eqs. (2) which corresponds to homogeneous flow is $$v_n^0=v^0=\{\begin{array}{c}\frac{A(1D\rho )+kv_{per}}{A\rho T+k},\rho \frac{1}{D+Tv_{per}},\hfill \\ \\ \frac{1D\rho }{\rho T},\rho \frac{1}{D+Tv_{per}},\hfill \end{array}$$ (3) $$x_n^0=\frac{n1}{\rho }+v^0t.$$ In the following numerical results we use parameters values $`v_{per}=25(m/s)`$, $`T=2(s)`$, $`D=5(m)`$, $`1A5(m/s^2)`$ and $`k=2(s^1)`$. The flux-density relation (often called the fundamental diagram) for the homogeneous flow is shown in Fig.1(a) as a dashed line. Comparison of this curve with the fundamental diagrams (solid lines) obtained by the numerical solution of equations (2) for different values of $`A`$ starting from nonhomogeneous initial conditions indicates that for values of $`\rho `$ smaller than some critical value $`\rho _1`$ or greater than another critical value $`\rho _2`$ the flux is the same, while for the intermediate values of density ($`\rho _1<\rho <\rho _2`$) the measured flux is considerably lower than the homogeneous solution flux. 
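Equations (2) form a set of $`2N`$ ordinary differential equations and can be integrated with any standard solver. The sketch below is only an illustration (the choice of $`A`$, the number of cars, the initial perturbation and the integrator settings are arbitrary, and it is not the scheme used to produce the figures). Because the pre-braking term diverges as $`\mathrm{\Delta }x_n\rightarrow D`$, the equations become stiff when cars approach each other closely, so a stiff-capable integrator and tight tolerances are used here.

```python
import numpy as np
from scipy.integrate import solve_ivp

V_PER, T_SAFE, D_MIN, A_SENS, K_DAMP = 25.0, 2.0, 5.0, 4.0, 2.0   # parameters quoted above

def Z(x):
    """Z(x) = (x + |x|)/2, i.e. the positive part of x."""
    return np.maximum(x, 0.0)

def homogeneous_velocity(rho):
    """Homogeneous-flow velocity v0(rho), Eq. (3)."""
    if rho >= 1.0 / (D_MIN + T_SAFE * V_PER):
        return (1.0 - D_MIN * rho) / (rho * T_SAFE)
    return (A_SENS * (1.0 - D_MIN * rho) + K_DAMP * V_PER) / (A_SENS * rho * T_SAFE + K_DAMP)

def rhs(t, y, n_cars, road_length):
    """Right-hand side of Eqs. (2) with periodic boundary conditions."""
    x, v = y[:n_cars], y[n_cars:]
    dx = np.roll(x, -1) - x
    dx[-1] += road_length                    # car N sees car 1 one lap ahead
    dv = v - np.roll(v, -1)                  # v_n - v_{n+1}
    acc = (A_SENS * (1.0 - (v * T_SAFE + D_MIN) / dx)
           - Z(dv)**2 / (2.0 * (dx - D_MIN))
           - K_DAMP * Z(v - V_PER))
    return np.concatenate([v, acc])

def simulate(rho, n_cars=50, t_end=500.0, seed=0):
    """Evolve a slightly perturbed homogeneous state at global density rho."""
    rng = np.random.default_rng(seed)
    road_length = n_cars / rho
    x0 = np.arange(n_cars) / rho + rng.uniform(-0.05, 0.05, n_cars) / rho
    v0 = np.full(n_cars, homogeneous_velocity(rho)) + rng.uniform(-0.1, 0.1, n_cars)
    sol = solve_ivp(rhs, (0.0, t_end), np.concatenate([x0, v0]),
                    args=(n_cars, road_length), method="LSODA",
                    rtol=1e-8, atol=1e-8)
    return sol.y[n_cars:, -1]                # final velocities

if __name__ == "__main__":
    v = simulate(rho=0.06)                   # an intermediate (fluctuative) density
    print("mean velocity:", v.mean(), "  sigma_v:", v.std())
```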
Plotting the variance of velocities $`\sigma _v=[\frac{1}{N}\underset{n=1}{\overset{N}{}}(v_nv)^2]^{1/2}`$ (where $`v`$ is the average velocity) against $`\rho `$ (Fig.1(b)) shows the existence of velocity fluctuations for $`\rho _1<\rho <\rho _2`$. We can therefore define three regimes in traffic flow: the free flow regime ($`\rho <\rho _1`$), the fluctuative flow regime ($`\rho _1<\rho <\rho _2`$) and the congested flow regime ($`\rho >\rho _2`$). Note that the flow in the first and the last regimes is homogeneous. Note also that for small values of $`A`$ $`\rho _2`$ is greater than the maximal possible density $`\rho _{max}=1/D`$ and the congested flow regime does not exist. See Fig. 1(b) for $`A=2`$. This finding is supported by the analytical results shown below. In order to estimate the values of $`\rho _1`$ and $`\rho _2`$ we analyse the stability of the homogeneous flow solution. The linearization of Eqs. (2) near the homogeneuos flow solution (3) in variables $`\xi _n=x_nx_n^0`$ has the form $$\ddot{\xi }_n=p\dot{\xi }_n+q(\xi _{n+1}\xi _n),n=1,\mathrm{},N,$$ (4) $$\xi _{N+1}=\xi _1,$$ where $`p=AT\rho +k`$, $`q=\frac{AT+kTv_{per}+kD}{AT\rho +k}A\rho ^2`$ for $`\rho \frac{1}{D+Tv_{per}}`$ and $`p=AT\rho `$, $`q=A\rho `$ otherwise. A solution of equation (4) can be written as $$\xi _n=\mathrm{exp}\{i\alpha n+zt\},$$ (5) where $`\alpha =\frac{2\pi }{N}\kappa `$ ($`\kappa =0,\mathrm{},N1`$) and $`z`$ \- a complex number. Substituting (5) into (4) we obtain the algebraic equation for $`z`$ $$z^2+pzq(e^{i\alpha }1)=0.$$ (6) Each of the $`N`$ equations (6) has two solutions. These $`2N`$ different complex numbers are the eigenvalues of system (4). One of them (which corresponds to $`\kappa =0`$) is equal to zero regardless of values of parameters. In this case all $`\xi _n`$ in (5) are equal to a constant and belong to the one-dimensional subspace of equilibria of system (4) (defined by equations $`\xi _1=\mathrm{}=\xi _N`$, $`\dot{\xi }_1=\mathrm{}=\dot{\xi }_N=0`$). This indicates that the disturbed state $`x_n`$ for $`z=0`$ is also homogeneous. For $`z0`$ $`\xi _n`$ in (5) is a wave with increasing or decreasing amplitude. Therefore, if we find conditions under which other $`2N1`$ eigenvalues have negative real parts (the magnitude of wave (5) decreases with time) we can say that under these conditions the homogeneous flow solution (3) is stable. Following the approach of we can derive this condition as $`\frac{p^2}{q}>2`$ or $`S(\rho )>2`$, where $$S(\rho )=\{\begin{array}{c}\frac{(AT\rho +k)^3}{\rho ^2A(AT+kv_{per}T+kD)},\rho \frac{1}{D+Tv_{per}},\hfill \\ \\ A\rho T^2,\rho \frac{1}{D+Tv_{per}}.\hfill \end{array}$$ (7) A qualitative plot of $`S(\rho )`$ is sketched in Fig.1(c). From this figure it follows that depending on $`\rho `$ we have three regimes of stability/instability of the homogeneous flow solution. If $`\rho <\rho ^{}`$ (free flow) or $`\rho >\rho ^{\prime \prime }`$ (congested flow) the homogeneous flow solution is stable and if $`\rho ^{}<\rho <\rho ^{\prime \prime }`$ it is unstable, where $`\rho ^{}=\frac{1}{D+Tv_{per}}`$ and $`\rho ^{\prime \prime }=\frac{2}{AT^2}`$. Note that there are possible sets of parameters under which the minimum of the left part of $`S(\rho )`$ can be less than 2 and the flow can have five different regimes of stability/instability. Nevertheless, under the set of parameters specified above we have up to three regimes, where the third regime does not exist for $`\rho ^{\prime \prime }\rho _{max}`$ ($`A2D/T^2`$). 
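With the parameter values quoted above, these stability boundaries can be evaluated directly. The small check below (an illustration; the densities probed are arbitrary) also reproduces the remark that for small $`A`$ the congested regime disappears because $`\rho ^{\prime \prime }`$ exceeds $`\rho _{max}=1/D`$.

```python
V_PER, T_SAFE, D_MIN, K_DAMP = 25.0, 2.0, 5.0, 2.0     # parameter values from the text

def S(rho, A):
    """Stability function of Eq. (7); the homogeneous flow is stable when S(rho) > 2."""
    rho_star = 1.0 / (D_MIN + T_SAFE * V_PER)
    if rho <= rho_star:
        p = A * T_SAFE * rho + K_DAMP
        return p**3 / (rho**2 * A * (A * T_SAFE + K_DAMP * V_PER * T_SAFE + K_DAMP * D_MIN))
    return A * rho * T_SAFE**2

if __name__ == "__main__":
    rho_prime = 1.0 / (D_MIN + T_SAFE * V_PER)   # lower boundary rho'
    rho_max = 1.0 / D_MIN                         # maximal (bumper-to-bumper) density
    for A in (2.0, 4.0):
        rho_dprime = 2.0 / (A * T_SAFE**2)        # upper boundary rho''
        print(f"A = {A}: rho' = {rho_prime:.4f} /m, rho'' = {rho_dprime:.3f} /m, "
              f"congested regime exists: {rho_dprime < rho_max}")
    print("S(rho=0.06, A=4) =", S(0.06, 4.0), " (< 2, so the homogeneous flow is unstable)")
```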
Our numerical simulations show that $`\rho _2\rho ^{\prime \prime }`$, but $`\rho ^{}`$ is considerably greater than $`\rho _1`$, thus we expect that for $`\rho _1<\rho <\rho ^{}`$ both homogeneous and fluctuative states are stable. In the fluctuative regime ($`\rho _1<\rho <\rho _2`$) the flow is characterized by presence of humps (dense regions) moving backwards or forwards. When the flow has stabilized the humps are equidistant and the evolution of traffic in time and space resembles the spreading of a wave. The existence of a fluctuative regime was predicted by other car-following (e.g. where it was called ”jammed flow”) and continuous (e.g. , where it was called ”recurring humps state”) models and measured experimentally . Simulations of our model show that the fluctuative flow state is not unique. Figs. 2(a-c) present the cars velocities after the fluctuative flow regime has stabilized for three different initial conditions. It can be seen that the ”wavelengths” of these states are different. Fig. 2(d) presents the convergence of flux in these experiments to distinct values. Our simulations also show the existence of solutions with other ”wavelengths” and flux values. Fig. 2(e) shows the fundamental diargams for three different wavelengths. Consequently, depending on initial conditions different stable fluctuative states emerge with different values of flux and distances between neighboring humps. This indicates that for $`\rho _1<\rho <\rho _2`$ system (2) has many stable periodic (in $`\mathrm{\Delta }x_n`$, $`v_n`$ variables) solutions, and hence in the $`2N`$-dimensional space of variables $`\mathrm{\Delta }x_n`$, $`v_n`$ there exist many attractive limit cycles. As follows from above for $`\rho _1<\rho <\rho ^{}`$ not only fluctuative flow solutions are stable, but also the homogeneous flow solution. This bistability is the origin of hysteresis in transitions between free and fluctuative flow regimes. Such bistability was observed experimentally and was found in other models . Fig. 1(d) shows a hysteresis loop in the density-flux plane. The upper curve is obtained by increasing the density of cars adiabatically preserving the road length $`L`$ . It can be seen that up to the value of density $`\rho ^{}`$ the homogeneous flow is preserved. The lower curve was obtained by adiabatically decreasing the density in the same manner. While decreasing the density the flow remains fluctuative even for $`\rho <\rho ^{}`$. Fig.1(e) presents the hysteresis loop in the global density - velocities fluctuations plane. Our results also illustrate the well-known phenomenon of jumps which the system performs in the density-flux plane in the fluctuative flow regime when the density and the flux are measured locally. In our numerical simulation (Fig. 1(f)) we started from a value of density below $`\rho ^{}`$, increased it gradually in the described above manner up to a value greater than $`\rho ^{}`$ and decreased it back. These jumps may be explained by our finding of many stable states in the fluctuative regime. Our model also demonstrates the gradual spatial transition from the fluctuative to free flow in the downstream direction which was measured by . The results of local measurements of density and flux at different distances from an on-rump are shown in Fig.3. which is in good agreement with Fig.3 of . In the congested flow regime the only stable solution is the homogeneous flow solution. 
We have not found evidence for the existence of bistability or hysteresis in transitions between the fluctuative and congested flow regimes. Starting from random initial conditions, we observe that the initial fluctuations of the velocity seem to decay according to a power law $$\sigma _v\sim \{\begin{array}{c}t^{-\beta },\quad t\ll t^{}\hfill \\ e^{-t/\tau },\quad t\gg t^{}.\hfill \end{array}$$ (8) where $`t^{}`$ is the crossover time between the power-law and exponential decay. We find $`t^{}\sim L^z`$ and $`\tau \sim L^z`$ with $`z=2.0\pm 0.1`$. These results are qualitatively similar to those obtained by for a cellular automata model , but with different values of the exponents. The result $`z\approx 2`$ seems to be in agreement with random walk arguments of . For the parameter values $`A=4,\rho =0.15`$ we get $`\beta \approx 0.21\pm 0.04`$ (Fig.4). In summary, we present a single-lane car-following model which explains important features of traffic observed experimentally. The model predicts the existence of many stable periodic states in the fluctuative (synchronized) flow regime. We wish to thank S. Schwarzer for useful discussions.
# Constraints on Photometric Calibration from Observations of High-Redshift Type Ia Supernovae ## 1 The type Ia supernovae results The type Ia supernova (SNIa) Hubble Diagrams established by the high-redshift SNIa search and photometry teams (Perlmutter et al 1997; Garnavich et al 1998; Schmidt et al 1998; Riess et al 1998; Perlmutter et al 1999) stand as a great astronomical achievement of this decade. These studies provide a tremendous confirmation of the expanding Universe and big-bang cosmology. Along with massive searches for microlensing events (eg, Alcock et al 1998; Beaulieu et al 1995; Udalski et al 1994), they show that large, coordinated surveys can be established to routinely make discoveries and follow them up uniformly. Along with soon-to-be completed studies of the cosmic background radiation, they hold the promise of making direct, precise measurements of the Universe’s kinematics. In fact, at the time of writing, the SNIa results already favor an accelerating Universe, eg, with $`(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })(0.3,0.7)`$ (Riess et al 1998; Perlmutter et al 1999). One widely overlooked conclusion which can be drawn from the SNIa results is that astronomical photometric calibration systems and techniques are basically correct. In order to make a precise cosmological measurement, the SNIa must span many magnitudes in flux. This requires that it be possible to measure, at the few-percent level, the relative flux between two sources separated by four orders of magnitude. Experiments with this kind of dynamic range are notoriously difficult in any field of study, but particularly in astronomy, where very different instrumentation, techniques, and sources of experimental error become relevant at different magnitude levels. Furthermore, because the SNIa must also span a large redshift interval, different SNIa are observed in different rest-frame bandpasses. This requires that the absolute spectral energy distribution (SED) shapes of the standard stars be known to better than the accuracy of the SNIa measurements (by a factor of at least $`\sqrt{N}`$). Given the tremendous care with which our photometric standards have been established and studied, it may not be surprising that the SNIa results are so good. However, it is important to note that the SNIa provide a crucial independent and qualitatively different approach to calibrating photometric measurements. Before the SNIa were established as standard (or standardizable; eg, Riess et al 1996) candles, and before surveys for them spanned the magnitude and redshift ranges they currently span, there were no precise tests of the photometric system by any technique fundamentally different from those by which the system was initially constructed. No astronomical results are secure until they are independently confirmed by qualitatively different techniques. The subject of this manuscript is the quantitative constraints placed by the SNIa results on photometric calibration. ## 2 Type Ia supernovae as standard stars The standard star system currently spans roughly $`0<V<16\mathrm{mag}`$ (Johnson & Morgan 1953; Kron et al 1953; Landolt 1973, 1983, 1992). The system is constructed by performing relative observations of groups of stars spanning small overlapping magnitude ranges (typically $`5\mathrm{mag}`$ each) at successively fainter magnitudes. The ranges are reconciled with one another to create the $`16\mathrm{mag}`$ range currently in use. 
This has created a “magnitude ladder,” with some analogy to the distance ladder, where very faint standards are tied to slightly brighter standards, which are tied in turn to brighter still. It is possible for systematic error to creep in. Of course most of the SNIa are much fainter than the end of the magnitude ladder; there are more opportunities for systematic errors in measuring relative fluxes between the $`15\mathrm{mag}`$ standard stars and SNIa as faint as $`25\mathrm{mag}`$. In principle the standard star ladder could be made irrelevant to the SNIa projects if all SNIa were compared with the same few faint standard stars. In practice, unfortunately, the brightest SNIa were compared with brighter standards, because the fainter standards had not been established. This dependence on brighter standards will become less important when new, bright SNIa are discovered, as long as the new SNIa are compared with the faint standards used with the faint SNIa. Possible sources for systematic photometry errors, in the standard star system or in the comparison of SNIa with standards, include: Detector linearity Detector linearity is generally well established for both the photomultiplier tubes employed in calibration and the CCDs employed in SNIa studies, so this is not expected to have a big effect. On the other hand, many CCDs (including the well-studied CCDs in the HST/WFPC2 instrument; Stetson 1998, Whitmore et al 1999) show a “charge transfer efficiency” problem which leads to flux underestimation which itself is a monotonic function of flux. This is exactly the kind of bias which could tilt the flux ladder at the very faint end, although it is only a problem at the few-percent level in HST/WFPC2 and will typically be an even smaller effect in high-background ground-based observations, even with similar instrumentation. Exposure time differences Generally the SNIa are measured with different exposure times than the standard stars in the SNIa studies (in part to avoid saturation); also bright standard stars are measured with different exposure times than faint ones; it is possible that there are biases in camera shutter controls. This is probably well calibrated for most instruments, at the few-percent level or better. (Also, exposure time changes affect the relative contributions of dark current, read noise, and sky counts in the image; it is not clear that such changes naturally lead to systematic errors.) Beam switching Standard star calibration measurements which include differencing of on- and off-source counts require that the off-source fields for faint standards be “cleaner” than those for bright standards. In imaging data, such problems are not likely to be bigger than the inverse signal-to-noise ratio ($`S/N`$) at which the standards are taken, since that is the level at which nearby faint companions can be observed. This problem therefore ought to be no worse than a few percent per $`5`$ mag range. Angular correlations of stars Stars are correlated on the sky, and this correlation will no doubt depend on stellar type and magnitude. These correlations could lead to biases in photometric measurements from the very faint stars correlated with their brighter neighboring standard star. Again, this is a problem proportional to the inverse $`S/N`$ and therefore ought not to be worse than a few percent per $`5`$ mag range. This is not a problem at all if SNIa projects use exactly the same focal-plane aperture as those used in the standard-star calibration programs. 
Image combination There can be up to tens-of-percent biases introduced in photometry when multiple images are combined by median filtering or averaging with sigma-clipping (eg, Steidel & Hamilton 1993). Difference imaging SNIa tend to be observed in time-separated difference images (ie, with and without the SNIa) whereas the standards tend to be observed in on- and off-source difference images. Some sources of noise are very different in these two different kinds of difference, including time variability in the detector and sky for the former, and the numbers and locations of background sources in the latter. Many CCD cameras have few-percent sensitivity variations with temperature and time. Sky brightness Standard stars tend to be taken at the beginning and end of the night, SNIa during the darkest hours. This changes the relative contributions of dark current, read noise and sky counts to the images. Of course it is not clear that such changes naturally lead to biases. (However, extinction changes which evolve over the night can lead to scatter, if not biases, when the standards are not interleaved into the observing program.) Bandpasses Filters of the same name on different detectors at different telescopes will be at least slightly different. This can lead to color terms in the photometric systems established with one detector but used to study SNIa with another. The simple fact that the slopes of the sensitivity-wavelength relationships are different at the tens of percent level for different detectors will lead to few-percent differences in broad bandpasses even when identical filters are employed. Clouds and atmosphere SNIa measurements may be made with less, or at any rate different, attention paid to atmospheric conditions than the standard star calibration measurements. Furthermore, SNIa measurements and standard star calibration have been done at different sites. Even at a fixed site, extinction coefficients for different bandpasses vary with time by factors of a few, and change in color (Landolt 1992). These color changes will affect the shape of the total throughput, telescope plus atmosphere, at the ten-percent level; it will affect relative calibration only at the few-percent level, because standard stars and SNIa are compared through the same bandpass. The magnitude of the problem depends on the differences between the SED shapes of the SNIa and the standards. Signal-to-noise SNIa, comparison standards, and the stars in the magnitude ladder are all measured at different $`S/N`$; some biases depend on $`S/N`$ alone (Hogg & Turner 1998). These are proportional to inverse $`S/N`$; they can only affect the very faintest SNIa at the five to ten-percent level. It is not clear that any of these possible sources of systematic error will in fact be significant. However, there are enough of them that it is a testament to the care of those who build and calibrate instruments, calibrate the photometric system, and collect and study SNIa that the listed effects do not ruin the SNIa Hubble Diagram. In fact, the SNIa Hubble Diagram is consistent with a set of cosmological world models within the reasonable range $`0<\mathrm{\Omega }_M<1`$ and $`0<\mathrm{\Omega }_\mathrm{\Lambda }<1`$ (Riess et al 1998; Perlmutter et al 1999). 
Since this reasonable range spans a magnitude difference of $`\pm 0.5\mathrm{mag}`$ (when tied down to the fluxes of the low-redshift SNIa), the SNIa Hubble Diagram constrains the drift or systematic error in the magnitude system to be less than $`\pm 0.5\mathrm{mag}`$ over $`11\mathrm{mag}`$, or less than $`0.045\mathrm{mag}`$ per magnitude. If the accumulated systematic error is treated as a tilt in the magnitude vs log flux diagram, the SNIa constraint corresponds to the statement that magnitude $`m`$ is related to flux $`f`$ by $`m=-(2.50\pm 0.11)\mathrm{log}_{10}f+C`$. Constraining systematic error functions more complicated than a linear tilt is difficult with the current sample of known SNIa, which has very few in the redshift range $`0.1<z<0.4`$. This range is crucial for investigating the magnitude-dependence of any systematic errors, since it spans a large range in magnitude but is not strongly affected by changes in the world model. It is possible to remove the world-model uncertainty by considering only the $`6`$ mag range of $`z<0.1`$ SNIa observations, whose interpretation is relatively independent of cosmological world model. Although the interpretation of these SNIa has less dependence on world model, the constraint on the photometric system is weaker, because the magnitude baseline is shorter. A similar constraint on the magnitude system can be derived from photometry of Cepheids in the water-maser galaxy NGC 4258 (Maoz et al 1999), where the absolute distance is known from the kinematics of the water masers near the nucleus of the galaxy (Herrnstein et al 1999). The comparison with the Large Magellanic Cloud spans 11 mag, and the uncertainty, including both the NGC 4258 and LMC distance uncertainties, is on the order of $`0.3`$ mag. Taken at face value, the SNIa results currently favor an accelerating Universe with $`\mathrm{\Omega }_\mathrm{\Lambda }>0`$. In the SNIa Hubble Diagram, these world models are separated from non-accelerating world models with $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ by only $`0.1\mathrm{mag}`$. (At fixed $`\mathrm{\Omega }_M`$, accelerating and non-accelerating world models are separated by more than $`0.1\mathrm{mag}`$. However, the closest non-accelerating world model to any accelerating one is as close as $`0.1\mathrm{mag}`$.) Until there is independent empirical evidence that relative photometry techniques are linear to much better than $`0.1\mathrm{mag}`$ over that $`11`$ mag range, the SNIa will not particularly favor accelerating ($`\mathrm{\Omega }_\mathrm{\Lambda }>0`$) world models over non-accelerating ($`\mathrm{\Omega }_\mathrm{\Lambda }=0`$) ones.

## 3 Type Ia supernovae as SED-shape calibrators

Observations of SNIa currently span much of the redshift range $`0<z<1`$, so observations in a particular wavelength bandpass span a range of emitted wavelengths. For this reason, even if the underlying SED shapes of SNIa are unknown, the mere fact that they are standard (or standardizable) candles implies that they can be used to calibrate the relative sensitivities of different bandpasses. Usually all observations are carried out in a particular, fixed set of observational bandpasses, so the magnitudes must be k-corrected. The k-correction is the difference between the observed magnitude of a redshifted source and the magnitude which would have been observed for the source at the same distance but zero redshift.
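As a concrete (if oversimplified) example of the k-correction just defined, a source with a power-law SED $`f_\nu \propto \nu ^\alpha `$ observed through a fixed bandpass has the standard analytic k-correction used in the sketch below; the slope $`\alpha =-2`$ is an arbitrary stand-in, not a measured SNIa SED.

```python
import numpy as np

def k_correction_powerlaw(z, alpha):
    """K-correction (mag) for a power-law SED f_nu ~ nu**alpha observed through
    a fixed bandpass: K = -2.5 (1 + alpha) log10(1 + z).  With the sign
    convention of the definition above, positive K means the redshifted source
    is observed to be fainter than the same source at the same distance at z = 0."""
    return -2.5 * (1.0 + alpha) * np.log10(1.0 + z)

for z in (0.1, 0.5, 1.0):
    print(z, round(k_correction_powerlaw(z, alpha=-2.0), 3))
# alpha = -1 gives K = 0 at all redshifts; a real SNIa SED is nothing like a
# power law, which is why the k-correction inherits the SED-shape uncertainties.
```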
The k-correction depends on the individual SED shape of the source being observed because it is a logarithmic ratio of absolute fluxes in different bandpasses (observed and emitted). Clearly the k-correction is only as good as our knowledge of the SED shape of the source. Although the source can be compared very accurately to standard stars such as Vega, the SED shape can only be known as well as the SED shapes of the standard stars. In principle a SNIa project could be designed such that sources at different redshifts are observed in different bandpasses, matched so that the observed fluxes of the SNe are observed at the same emitted wavelengths at all redshifts. This technique is also dependent on the SED shapes of the standard stars, because SNIa at different redshifts will have to be calibrated against different parts of the standard stars' SEDs. The SED shapes of Vega and other standard stars are measured by comparison with laboratory blackbodies of known temperatures. The blackbody is close to the telescope (relative to the standard stars!), so airmass corrections have to be extrapolated from zero airmass to the airmasses of the stellar observations (Hayes 1970; Oke & Schild 1970). An alternative method of absolute calibration makes use of synthetic photometry of model stellar atmospheres (eg, Colina & Bohlin, 1994). Possible sources for systematic errors in standard star SEDs include:

* Laboratory blackbody temperatures: The inferred SED shapes are really relative to the laboratory blackbody SED shape, so errors in temperature lead to SED shape errors. However, the laboratory blackbodies are very precise, so there is unlikely to be much temperature uncertainty; certainly $`\mathrm{\Delta }T/T<10^{-3}`$ (Hayes 1970; Oke & Schild 1970).
* Illumination geometry: The blackbodies are point sources near the telescope, calibrated in luminosity, whereas stars are point sources at infinity and are being calibrated in flux. The two will not illuminate the telescope and its instrumentation identically. The experiments are done carefully, so this error is not likely to be bigger than the angle the telescope aperture subtends to the blackbody, or on the order of a few percent.
* Absorption layers in the atmosphere: The extrapolation of the blackbody observations from zero to finite airmass depends on an extrapolation of airmass corrections from observations at airmasses of, say, 1 to 2 down to zero. This extrapolation is not trivial if there are non-uniform absorbing layers in the atmosphere, or if the absorption at some wavelengths happens mainly at low altitude. The extrapolation has been tested at the ten-percent level (Stebbins & Kron 1964).
* Deviations of bandpass shapes: The SNIa and standard stars are compared in finite bandpasses, not through spectrophotometry. If any aspect of bandpass estimation (telescope optics transmission, detector efficiency, filter curve) is uncertain, an uncertainty is introduced into the locations and widths of the bandpasses in wavelength space. This problem is not likely to be big for the SNIa projects, which have gone to great pains to assess their photometric systems (eg, Kim et al 1996).
* Atmospheric extinction variations: Extinction coefficients for different bandpasses vary with time by factors of a few, and change in color, even at a fixed telescope site (Landolt 1992). These color changes will affect the shape of the total throughput, telescope plus atmosphere, at the ten-percent level; it will affect SED-shape inference at the few-percent level.
* Incorrect model spectra: In the case of synthetic photometric calibration, the accuracy of the result is directly related to the accuracy of the model spectra. This is hard to assess, since the only calibration-independent tests of model spectra are the equivalent widths of lines and fractional strengths of spectral breaks, while it is the absolute level of the continuum that is involved in the calibration. However, there are some astronomical sources which are thought to be very accurately modeled astrophysically. Synthetic and blackbody calibrations may disagree at the five-percent level (eg, Colina & Bohlin, 1994).

Again, it is not clear that any of these possible sources of systematic error is significant, but it is nonetheless impressive that these effects do not ruin the SNIa Hubble Diagram. The fact that the SNIa Hubble Diagram is consistent with a set of cosmological world models within the reasonable range $`0<\mathrm{\Omega }_M<1`$ and $`0<\mathrm{\Omega }_\mathrm{\Lambda }<1`$ constrains the SED error to be less than $`\pm 0.5\mathrm{mag}`$ over the wavelength range spanned by the redshift range $`0<z<1`$, ie, over a factor of two in wavelength. If SED shapes could be off by a significant fraction of 10 percent over that factor of two in wavelength, then the SNIa do not particularly favor accelerating world models over non-accelerating ones.

## 4 Conclusions

The reasonableness of the SNIa results shows that relative photometric calibration is good to within $`\pm 0.5\mathrm{mag}`$ over $`11\mathrm{mag}`$ and that the SED shapes of standard stars are known to $`\pm 0.5\mathrm{mag}`$ over a factor of two in wavelength. Although perhaps these constraints are not surprising, they testify to the quality of the photometric calibration, both of the standard star system and of the SNIa projects. These constraints are important because they are completely independent of the astronomical techniques used to construct the calibration in the first place. If the calibration is uncertain at the few to ten-percent level over the same magnitude or wavelength range, then there is no more SNIa evidence for an accelerating Universe. Standard candles provide an invaluable resource for testing or, perhaps, in the future, even establishing systems of calibration. Unfortunately, they are rare. However, it is conceivable that certain kinds of calibration verification similar to that described here could be performed with massive, uniform sky surveys such as the Sloan Digital Sky Survey (SDSS). Because the SDSS collects uniform data on a huge range of galaxies over a range of redshifts, it will be possible to constrain certain aspects of photometric calibration. For example, if the $`r`$-band absolute calibration were low by ten percent, then all populations of extragalactic objects would appear to brighten at rest-frame 7000 Å in going from redshift $`z=0.4`$ to $`z=0.0`$ but fade at rest-frame 5000 Å over the same redshift interval. Intercomparison of the evolutionary behaviors of different extragalactic populations may therefore constrain many aspects of calibration. Like the SNIa constraints on calibration, these would also be independent of the standard star system. This future project stands as possibly the least glamorous goal of the SDSS. Unfortunately, the prospects for finding new alternatives for independent verification of photometric calibration are not good.
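To make the SDSS-style consistency test sketched above a little more concrete, the snippet below simply tracks which rest-frame wavelength a single observed band samples at two redshifts; the band centre (6200 Å) and the ten-percent zero-point offset are assumed round numbers, not SDSS calibration values.

```python
# A fixed observed bandpass samples different rest-frame wavelengths at
# different redshifts, so a zero-point error in one band shows up as spurious,
# wavelength-dependent "evolution" when populations at different redshifts are
# compared.  All numbers below are illustrative assumptions.
band_center = 6200.0      # assumed effective wavelength of the band, Angstroms
zp_error_frac = 0.10      # assumed 10% error in the band's absolute calibration
dm = 2.5 * 0.4343 * zp_error_frac   # ~0.11 mag offset implied by a 10% flux error

for z in (0.0, 0.4):
    rest = band_center / (1.0 + z)
    print(f"z = {z:.1f}: this band samples rest-frame ~{rest:.0f} A, "
          f"where the {zp_error_frac:.0%} error mimics a ~{dm:.2f} mag offset")
```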
The main approach to improving relative photometry is, and should be, increased testing of detector and instrument linearity and repeatability, and continued calibration of standards at fainter levels and higher signal-to-noise ratio. Some tests of telescope linearity could involve “stopping down” a large telescope, perhaps with a randomly perforated entrance cover (since any neutral-density filter is as hard to calibrate as the photometry itself!). A stopped-down telescope would permit some differential tests of photometry that remove many (though not all) of the aforementioned systematic problems. A radical idea, fraught with a new set of observational difficulties, is to observe, at aphelion and perihelion, asteroids on highly elliptical orbits around the Sun (B. Paczynski. private communication). As for constraining cosmological world models, the SNIa projects will become much less sensitive to photometric calibration as they push to higher redshifts, where differing world models make very different predictions, which are themselves different from the expected flux-dependence of most of the possible systematic errors. It is a pleasure to thank the astrophysicists at the Institute for Advanced Study in 1999, especially Daniel Eisenstein and John Bahcall, for lunchtime discussions which culminated in this study. Comments from Alex Filippenko, John Gizis, Jim Gunn, Gerry Neugebauer, Jeff Newman, Bev Oke, Bohdan Paczynski, Jim Peebles, Michael Richmond, Adam Riess, Tom Soifer and Steve Thorsett were also extremely helpful. Support was provided by Hubble Fellowship grant HF-01093.01-97A from STScI, which is operated by AURA under NASA contract NAS 5-26555. This research made use of the NASA ADS Abstract Service.
# Low-frequency peak in the magnetoconductivity of a non-degenerate 2D electron liquid \[ ## Abstract We study the frequency-dependent magnetoconductivity of a strongly correlated nondegenerate 2D electron system in a quantizing magnetic field. We first restore the single-electron conductivity from calculated 14 spectral moments. It has a maximum for $`\omega \gamma `$ ($`\mathrm{}\gamma `$ is the disorder-induced width of the Landau level), and scales as a power of $`\omega `$ for $`\omega 0`$, with a universal exponent. Even for strong coupling to scatterers, the electron-electron interaction modifies the conductivity for low and high frequencies, and gives rise to a nonzero static conductivity. We analyze the full many-electron conductivity, and discuss the experiment. \] One of the most interesting problems in physics of low-dimensional systems is the effect of the electron-electron interaction (EEI) on electron transport. In many cases the EEI is the major factor, fractional quantum Hall effect (QHE) being an example. At the same time, single-electron picture is often also used for interpreting transport, as in the integer QHE. Another closely related example is magnetotransport of a low-density two-dimensional electron system (2DES) on helium surface. For strong quantizing magnetic fields, experimental data on electron transport in this system are reasonably well described by the single-electron theory based on the self-consistent Born approximation (SCBA). This theory does not take into account the interference effects that lead to electron localization in the random potential of scatterers. Such a description appears to contradict the phenomenology of the integer QHE, where all but a finite number of single-particle states in the random potential are localized. The static single-electron magnetoconductivity $`\sigma _{xx}(0)`$ must vanish, as illustrated in Fig. 1, since the statistical weight of the extended states is equal to zero. In this paper we discuss the case where the EEI is strong and the electrons are correlated, as for 2DES on helium and in fractional QHE. Yet the characteristic force on an electron from the short-range random potential may exceed the force from other electrons. The interrelation between the forces determines the effective strength of the coupling to scatterers. The analysis allows us to understand the strong coupling limit and the crossover to weak coupling, and to resolve the apparent contradiction between localization of single-electron states and the experimental data for electrons on helium. We show that, for strong coupling to scatterers, the low-frequency magnetoconductivity $`\sigma _{xx}(\omega )`$ of a nondegenerate 2DES becomes nonmonotonic: it has a maximum at a finite frequency $`\omega _{\mathrm{max}}0.3\gamma `$, where $`\mathrm{}\gamma `$ is the SCBA level broadening \[Fig. 1\]. For small but not too small $`\omega /\gamma `$, the conductivity scales as $`\omega ^\mu `$ with a universal exponent $`\mu 0.215`$. Whereas the onset of the peak is a single-electron effect, the nonzero value of the static conductivity and the form of $`\sigma _{xx}(\omega )`$ for big $`\omega /\gamma `$ are determined entirely by the EEI. 
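A minimal numerical illustration of the shape just described: taking the quoted low-frequency power law (exponent 0.215) together with the Gaussian cutoff derived later in the text, and treating the slowly varying prefactor as a constant, already puts the maximum of the conductivity near a few tenths of the level width.

```python
import numpy as np

# Sketch of the nonmonotonic low-frequency conductivity described above:
# a power law x**mu (mu ~ 0.215) multiplied by the Gaussian cutoff exp(-2 x**2),
# with x = omega/gamma.  The slowly varying factor G(x) of the full calculation
# is replaced by a constant here, so the peak position is only indicative.
mu = 0.215
x = np.linspace(1e-4, 2.0, 4001)
sigma = x**mu * np.exp(-2.0 * x**2)
x_peak = x[np.argmax(sigma)]
print(f"maximum of x**mu * exp(-2 x**2) at omega/gamma ~ {x_peak:.2f}")
# Analytically this simplified form peaks at x = sqrt(mu/4) ~ 0.23; keeping the
# slowly varying prefactor moves the maximum up toward omega ~ 0.3 gamma.
```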
We obtain an estimate for $`\sigma _{xx}(0)`$ and analyze the overall shape of $`\sigma _{xx}(\omega )`$ in the parameter range where $`\mathrm{exp}(\mathrm{}\omega _c/k_BT)1`$ and $`k_BT\mathrm{}\gamma `$ ($`\omega _c`$ is the cyclotron frequency), the conditions usually met in strong-field experiments on electrons on helium. The single-electron conductivity at low frequencies is determined by the correlation function of the velocity of the guiding center $`𝐑`$ of the electron cyclotron orbit in the potential of scatterers. For $`\omega k_BT/\mathrm{}`$ and $`\mathrm{exp}(\mathrm{}\omega _c/k_BT)1`$, it can be written as $`\sigma _{xx}(\omega )=(ne^2l^2\gamma /8k_BT)\stackrel{~}{\sigma }(\omega ),`$ where $`n`$ is the electron density, $`l=(\mathrm{}/m\omega _c)^{1/2}`$ is the magnetic length, and $`\stackrel{~}{\sigma }`$ is the reduced conductivity, $`\stackrel{~}{\sigma }(\omega )=`$ $`{\displaystyle \frac{2\mathrm{}\gamma }{m\omega _c}}{\displaystyle _{\mathrm{}}^{\mathrm{}}}𝑑te^{i\omega t}{\displaystyle _{𝐪,𝐪^{}}}\left(𝐪𝐪^{}\right)`$ (2) $`\times \stackrel{~}{V}_𝐪\stackrel{~}{V}_𝐪^{}\mathrm{exp}\left[i𝐪𝐑(t)\right]\mathrm{exp}\left[i𝐪^{}𝐑(0)\right].`$ Here, $``$ stands for thermal averaging followed by the averaging over realizations of the random potential of defects $`V(𝐫)`$, and $`\stackrel{~}{V}_𝐪=(V_𝐪/\mathrm{}\gamma )\mathrm{exp}(l^2q^2/4)`$ are proportional to the Fourier components $`V_𝐪`$ of $`V(𝐫)`$. We will assume that $`V(𝐫)`$ is Gaussian and delta-correlated, $$V(𝐫)V(𝐫^{})=v^2\delta (𝐫𝐫^{}),$$ (3) in which case $`\mathrm{}\gamma =(2/\pi )^{1/2}v/l`$. Time evolution of the guiding center $`𝐑(X,Y)`$ in Eq. (2) is determined by the dynamics of a 1D quantum particle with the generalized momentum and coordinate $`X`$ and $`Y`$, and with the Hamiltonian $$H=\mathrm{}\gamma _𝐪\stackrel{~}{V}_𝐪\mathrm{exp}(i\mathrm{𝐪𝐑}),[X,Y]=il^2.$$ (4) Because of the Landau level degeneracy in the absence of random potential, the problem of dissipative conductivity is to some extent similar to the problem of the absorption spectra of Jahn-Teller centers in solids, which are often analyzed using the method of spectral moments. This method can be applied to the conductivity (2) as well. It allows, at least in principle, to restore $`\sigma _{xx}(\omega )`$. In addition, the moments $$M_k=\frac{1}{2\pi \gamma }_{\mathrm{}}^{\mathrm{}}𝑑\omega (\omega /\gamma )^k\stackrel{~}{\sigma }(\omega )$$ (5) can be directly found from measured $`\sigma _{xx}(\omega )`$, and therefore are of interest by themselves. For $`\omega ,\gamma k_BT/\mathrm{}`$, the states within the broadened lowest Landau level are equally populated and the reduced conductivity is symmetric, $`\stackrel{~}{\sigma }(\omega )=\stackrel{~}{\sigma }(\omega )`$. Then odd moments vanish, $`M_{2k+1}=0`$. For even moments, we obtain from Eqs. (2), (5) $`M_{2k}=`$ $`2l^2{\displaystyle (𝐪_1𝐪_{2k+2})\stackrel{~}{V}_{𝐪_1}\mathrm{}\stackrel{~}{V}_{𝐪_{2k+2}}}`$ (7) $`\times [[\mathrm{}[e^{i𝐪_1𝐑},e^{i𝐪_2𝐑}],\mathrm{}],e^{i𝐪_{2k+1}𝐑}]e^{i𝐪_{2k+2}𝐑},`$ where the sum is taken over all $`𝐪_1,\mathrm{},𝐪_{2k+2}`$. The commutators (7) can be evaluated recursively using $$[e^{i\mathrm{𝐪𝐑}},e^{i𝐪^{}𝐑}]=2i\mathrm{sin}\left(\frac{1}{2}l^2𝐪𝐪^{}\right)e^{i(𝐪+𝐪^{})𝐑}.$$ (8) From Eq. (3), $`\stackrel{~}{V}_𝐪\stackrel{~}{V}_𝐪^{}=(\pi l^2/2S)\mathrm{exp}(l^2q^2/2)\delta _{𝐪+𝐪^{}}`$, where $`S`$ is the area. The evaluation of the $`2k`$th moment comes then to choosing pairs $`(𝐪_i,𝐪_i)`$ and integrating over $`k+1`$ independent $`𝐪_i`$. From Eq. 
(8), the integrand is a (weighted with $`𝐪_1𝐪_{2k+2}`$) exponential of the quadratic form $`(l^2/2)𝐪_i\widehat{A}_{ij}𝐪_j`$, where $`i,j=1,\mathrm{},k+1`$. The matrix elements $`\widehat{A}_{ij}`$ are themselves $`2\times 2`$ matrices, $`\widehat{A}_{ij}=\widehat{I}\delta _{ij}+a_{ij}\widehat{\sigma }_y`$, where $`\widehat{\sigma }_y`$ is the Pauli matrix, and $`a_{ij}=a_{ji}=0,\pm 1`$. Because of the structure of the matrices $`\widehat{A}`$, the moments $`M_{2k}`$ are given by rational numbers. For $`k=0,1,\mathrm{},7`$ we obtain $`M_{2k}=`$ $`1;{\displaystyle \frac{3}{8}};{\displaystyle \frac{443}{1152}};{\displaystyle \frac{25003}{38400}};{\displaystyle \frac{13608949709}{8941363200}};`$ (10) $`4.47809;\mathrm{\hspace{0.17em}15.7244};\mathrm{\hspace{0.17em}63.7499}`$ (we give approximate values of $`M_{2k}`$ for $`k5`$). To restore the conductivity $`\stackrel{~}{\sigma }(\omega )`$ from the calculated finite number of moments, we need its asymptotic form for $`\omega \gamma `$. It can be found from the method of optimal fluctuation, by calculating the thermal average in Eq. (2) on the exact eigenstates $`|n`$ of the lowest Landau band of the disordered system. All states $`|n`$ are equally populated for $`k_BT\mathrm{}\gamma `$. Their energies $`E_n`$ are symmetrically distributed around the band center ($`E=0`$), with the density of states $`\rho (E)\mathrm{exp}(4E^2/\mathrm{}^2\gamma ^2)`$. For large $`\omega /\gamma `$, the conductivity is formed by transitions between states $`|n,|m`$ with large and opposite in sign energies $`E_{n,m}`$ ($`|E_nE_m|=\mathrm{}\omega `$). The major contribution comes from $`E_n=E_m`$. Only those configurations of $`V(𝐫)`$ are significant, where the states $`|n,|m`$ are spatially close. However, the overlap matrix elements affect only the prefactor in $`\stackrel{~}{\sigma }`$ , and to logarithmic accuracy, $$\stackrel{~}{\sigma }(\omega )[\rho (\mathrm{}\omega /2)]^2\mathrm{exp}(2\omega ^2/\gamma ^2).$$ (11) Since the tail of the conductivity is Gaussian, one is tempted to restore $`\stackrel{~}{\sigma }(\omega )`$ from the moments $`M_n`$ using a standard expansion in Hermite polynomials, $`\stackrel{~}{\sigma }(\gamma x)=_nc_nH_n(\sqrt{2}x)\mathrm{exp}(2x^2)`$. From (5), the coefficients $`c_n`$ are recursively related to the moments $`M_k`$ with $`kn`$. However, for the moments values (10), such an expansion does not show convergence. This indicates possible nonanalyticity of the conductivity at $`\omega =0`$. For $`\omega 0`$, the conductivity can be found from scaling arguments by noticing that it is formed by states within a narrow energy band $`|E|\mathrm{}\gamma `$. The spatial extent of low-energy states is of the order of the localization length $`\xi l|\epsilon |^\nu `$, where $`\epsilon =E/\mathrm{}\gamma `$ and $`\nu =2.33\pm 0.03`$ is the localization exponent. The frequency $`\omega `$, on the other hand, sets a “transport” length $`L_\omega l(\gamma /\omega )^{1/2}`$. It is the distance over which an electron would diffuse in the random field $`V(𝐫)`$ over time $`1/\omega `$, with a characteristic diffusion coefficient $`D=l^2\gamma `$, if there were no interference effects. For large $`\xi ,L_\omega l`$, the scaling parameter can be chosen as $`g=(L_\omega /\xi )^{1/\nu }|\epsilon |(\omega /\gamma )^{1/2\nu }`$ . The conductivity $`\stackrel{~}{\sigma }(\omega )`$ is determined by the states within the energy band where $`g1`$. 
For high $`T`$, all these states contribute nearly equally, and $$\stackrel{~}{\sigma }(\omega )\omega ^\mu (\omega 0),\mu =(2\nu )^10.215.$$ (12) With Eqs. (11), (12), the conductivity can be written as $$\stackrel{~}{\sigma }(\omega )=x^\mu G(x)\mathrm{exp}(2x^2),x=|\omega |/\gamma .$$ (13) The function $`G(x)`$ ($`x0`$) can be expanded in Laguerre polynomials $`L_n^{(\mu 1)/2}(2x^2)`$, which are orthogonal for the weighting factor in Eq. (13). We have restored the corresponding expansion coefficients from the moments (10). The resulting conductivity is shown in Fig. 1 with solid line. The expansion for $`\stackrel{~}{\sigma }`$ converges rapidly for $`\mu `$ between 0.19 and 0.28 (as illustrated in Fig. 1 for $`\mu =0.215`$), whereas outside this region the convergence deteriorates. The electron-electron interaction (EEI) can strongly affect the magnetoconductivity even for low electron densities, where the 2DES is nondegenerate. Of particular interest for both theory and experiment are many-electron effects for densities and temperatures where $`\mathrm{\Gamma }e^2(\pi n)^{1/2}/k_BT1`$. The 2DES is then strongly correlated and forms a nondegenerate electron liquid or, for $`\mathrm{\Gamma }>130`$, a Wigner crystal. The motion of an electron is mostly thermal vibrations about the (quasi)equilibrium position inside the “cell” formed by other electrons. For strong $`B`$, the characteristic vibration frequency is $`\mathrm{\Omega }_p=2\pi e^2n^{3/2}/m\omega _c,\mathrm{\Omega }_p\omega _c`$ (for a Wigner crystal, $`\mathrm{\Omega }_p`$ is the zone-boundary frequency of the lower phonon branch). We will assume that $`k_BT\mathrm{}\mathrm{\Omega }_p`$. Then the vibrations are quasiclassical, with amplitude $`\delta _{\mathrm{fl}}(k_BT/e^2n^{3/2})^{1/2}l`$. The restoring force on an electron is determined by the electric field $`𝐄_{\mathrm{fl}}`$ from other electrons. The distribution of this field is Gaussian, except for far tails, and $`E_{\mathrm{fl}}^2=F(\mathrm{\Gamma })n^{3/2}k_BT`$, with $`F(\mathrm{\Gamma })`$ varying only slightly, from $`8.9`$ to $`10.5`$, in the whole range $`\mathrm{\Gamma }20`$ . Since $`\delta _{\mathrm{fl}}l`$, the field $`𝐄_{\mathrm{fl}}`$ is uniform over the electron wavelength $`l`$. The electron motion can be thought of as a semiclassical drift of an electron wave packet in the crossed fields $`𝐄_{\mathrm{fl}}`$ and $`𝐁`$, with velocity $`cE_{\mathrm{fl}}/B`$. In the presence of defects, moving electrons will collide with them. If the density of defects is small and their potential $`V(𝐫)`$ is short-range \[cf. Eq. (3)\], the duration of a collision is $$t_e=l(B/c)E_{\mathrm{fl}}^1(\mathrm{}/el)n^{3/4}(k_BT)^{1/2},$$ (14) and the scattering cross-section is $`\gamma ^2`$. For $`\gamma t_e1`$, electron-defect collisions occur independently and successively in time. This corresponds to weak coupling to the defects, and allows one to use a single-electron type transport theory, with the collision rate $`\tau ^1`$ calculated for the electron velocity $`cE_{\mathrm{fl}}/B`$ determined by the EEI, $`\tau ^1\gamma ^2t_e`$. The many-electron weak-coupling results have been fully confirmed by experiments. For $`\gamma t_e1`$, collisions with defects “overlap” in time, which corresponds to the strong coupling limit. In this case, from Eqs. (3), (14), the characteristic force on an electron from the random field of defects $`F_{\mathrm{rf}}=\mathrm{}\gamma /leE_{\mathrm{fl}}`$. 
One might expect therefore that the EEI does not affect the conductivity, and the single-electron theory discussed above would apply. It turns out, however, that this is not the case for the low- and high-frequency conductivity. As a result of the EEI, the energy of an electron in the potential of defects $`V(𝐫)`$ is no longer conserved. The motion of each electron gives rise to modulation of energies of all other electrons. The overall change of the Coulomb energy of the electron system over a small time interval is given by $`_ne(𝐄_n\delta 𝐫_n)`$, where $`\delta 𝐫_n`$ is the displacement of the $`n`$th electron due to the potential of defects, and $`𝐄_n`$ is the electric field on the $`n`$th electron from other electrons. Clearly, $`𝐄_n`$ and $`\delta 𝐫_n`$ are statistically independent. This allows us to relate the coefficient of energy diffusion of an electron $`D_ϵ`$ to the characteristic coefficient $`D=\gamma l^2`$ of spatial diffusion in the potential $`V(𝐫)`$, $$D_ϵ=(e^2/2)E_{\mathrm{fl}}^2D\gamma (\mathrm{}/t_e)^2.$$ (15) Energy diffusion eliminates electron localization which caused vanishing of the single-electron static conductivity. The low-frequency boundary $`\omega _l`$ of the range of applicability of the single-electron approximation can be estimated from the condition that the diffusion over the energy layer of width $`\delta \epsilon _l=(\omega _l/\gamma )^\mu `$ \[which forms the single-electron conductivity (12) at frequency $`\omega _l\gamma `$\] occurred over the time $`1/\omega _l`$. For $`\mu =1/(2\nu )`$, this gives $$\omega _{\mathrm{}}/\gamma =C_\mathrm{l}(\gamma t_e)^{2\nu /(\nu +1)},C_\mathrm{l}1.$$ (16) All states with energies $`|\epsilon |\delta ϵ_l`$ contribute to the conductivity for frequencies $`\omega <\omega _l`$. Therefore the many-electron conductivity may only weakly depend on $`\omega `$ for $`\omega <\omega _l`$, as shown in Fig. 1, and the static conductivity $$\sigma _{xx}(0)\sigma _{xx}(\omega _l)(ne^2\gamma l^2/k_BT)(\gamma t_e)^{1/(\nu +1)}.$$ (17) We note that there is a similarity between the EEI-induced energy diffusion, which we could quantitatively characterize for a correlated nondegenerate system, and the EEI-induced phase breaking in QHE . The cutoff frequency $`\omega _{\mathrm{}}`$ can be loosely associated with the reciprocal phase breaking time. The EEI also changes the high-frequency tail of $`\sigma _{xx}(\omega )`$ in the range $`\omega \omega _c`$. In the many-electron system, the tail is formed by processes in which a guiding center of the electron cyclotron orbit shifts in the field $`𝐄_{\mathrm{fl}}`$ (by $`\delta 𝐑`$). The energy $`\mathrm{}\omega `$ goes into the change of the potential energy of the electron system $`e𝐄_{\mathrm{fl}}\delta 𝐑`$, whereas the recoil momentum $`\mathrm{}\delta R/l^2`$ goes to defects. For large $`\omega `$, it is necessary to find optimal $`\delta 𝐑`$ and $`𝐄_{\mathrm{fl}}`$. For weak coupling to defects, $`\gamma t_e1`$, the correlator (2) can be evaluated to the lowest order in $`\gamma `$, which gives $$\stackrel{~}{\sigma }(\omega )=\gamma \omega t_e^2\mathrm{exp}\left[(2/\pi )^{1/2}\omega t_e\right].$$ (18) The exponential tail (18) is determined by the characteristic many-electron time (14), and the exponent is just linear in $`\omega `$. For larger $`\omega `$, the decay of $`\stackrel{~}{\sigma }`$ slows down to $`|\mathrm{ln}\stackrel{~}{\sigma }(\omega )|(\omega t_e)^{2/3}/[\mathrm{ln}(\omega /\gamma )]^{1/3}`$, provided $`n^{1/2}\delta _{\mathrm{fl}}(\omega t_e)^{1/3}1`$ . 
This asymptotics results from anomalous tunneling due to multiple scattering by defects. It also applies for strong coupling to defects, $`\gamma t_e1`$, and replaces the much steeper single-electron Gaussian asymptotics (11). We note that the overall frequency dependence of $`\sigma _{xx}(\omega )`$ is qualitatively different for strong and weak coupling to scatterers. In the latter case, $`\sigma _{xx}`$ is maximal for $`\omega =0`$ and decreases monotonously with the increasing $`\omega `$, in contrast to the behavior of $`\sigma _{xx}(\omega )`$ in the strong-coupling case shown in Fig. 1. Both $`\gamma `$ and $`t_e`$ increase with the magnetic field, and by varying magnetic field, electron density, and temperature one can explore the crossover between the limits of strong and weak coupling. It is interesting that both the static and the high-frequency conductivities are many-electron even for $`\gamma t_e1`$, where the coupling to defects is strong. It follows from Eq. (17) (see also Fig. 1) that the many-electron $`\sigma _{xx}(0)`$ is of the order of the single-electron SCBA conductivity $`\sigma _{xx}^{\mathrm{SCBA}}(0)=(4/3\pi )ne^2\gamma l^2`$ for not extremely large $`\gamma t_e`$. This is a consequence of the very steep frequency dependence of the full single-electron conductivity (12) for $`\omega 0`$. It follows from the above arguments that the random potential of defects does not eliminate self-diffusion in 2DES for $`\mathrm{\Gamma }<130`$, where the electrons form a nondegenerate liquid. For electrons on bulk helium, the results on the static conductivity apply also for $`\mathrm{\Gamma }>130`$, where electrons form a Wigner crystal. In this case the random field comes from thermally excited ripplons or (for $`T1`$ K) from helium vapor atoms. Ripplons, although they are extremely slow, do not pin the Wigner crystal (we note that, for scattering by ripplons, $`\gamma T^{1/2}`$). Random potential of vapor atoms is time-dependent (and also non-pinning). Vapor atoms stay within the electron layer only for a time $`t_v=a_B/v_T`$, where $`a_B`$ is the layer thickness and $`v_T`$ is the thermal velocity of the atoms. For strong magnetic fields one can have $`\gamma t_v1`$, and then if $`\gamma t_e1`$, coupling to the vapor atoms is strong, as observed in Refs. . The presented strong-coupling theory describes the conductivity for arbitrary $`t_e/t_v`$ provided the low-frequency cutoff of the single-electron theory $`\omega _{\mathrm{}}`$ (16) is replaced by min($`t_v^1,\omega _{\mathrm{}}`$). In conclusion, we have analyzed the magnetoconductivity of a nondegenerate 2D electron liquid in quantizing magnetic field. This is a simple and well-studied experimentally strongly correlated system, where effects of the electron-electron interaction on transport can be characterized qualitatively and quantitatively. It follows from our results that, whereas for weak coupling to short-range scatterers the conductivity $`\sigma _{xx}(\omega )`$ monotonically decays with increasing $`\omega `$ ($`\omega \omega _c`$), for strong coupling it becomes nonmonotonic. Even for strong coupling, the static conductivity is determined by many-electron effects, through energy diffusion. It is described in terms of the critical exponents known from the scaling theory of the QHE. The frequency dispersion of $`\sigma _{xx}`$ disappears for $`\omega \omega _{\mathrm{}}T^{\nu /(\nu +1)}`$, for temperature-independent disorder. 
In a certain range of magnetic fields and electron densities, the value of $`\sigma _{xx}(0)`$ (17) is reasonably close numerically to the result of the self-consistent Born approximation, which provides an insight into numerous experimental observations for electrons on helium surface. We are grateful to M. M. Fogler and S. L. Sondhi for useful discussions. Work at MSU was supported in part by the Center for Fundamental Materials Research and by the NSF through Grant no. PHY-972205. L.P. was supported in part by DOE grant DE-FG02-90ER40542
# External Shock Model for Gamma-Ray Bursts during the Prompt Phase ## I Introduction An important question in GRB studies is whether the GRB engine produces a single impulsive collapse and ejection event, or instead operates over a period of time much longer than the $``$ ms dynamical time scale of the central engine. In the external shock modelmr93 ; dm99 , a single relativistic shell is ejected by the GRB engine and energized by interactions with the surrounding medium. Variability in the light curves is attributed to interactions with an inhomogeneous surrounding medium. In the colliding shell (or internal shock) modelrm94 ; kps97 , collisions between a succession of shells in a relativistic wind are thought to produce the variability observed in GRB light curves. If a conclusive resolution to this problem is obtained, then physical information can be extracted directly from GRB light curves. In the case of the external shock model, variations in GRB light curves reveal the distribution of circumstellar material near the sources of GRBs. In the case of the internal shock model, GRB light curves reflect the structure of and accretion processes operating within the putative disk of material that is accreted by the newly formed collapsed object to energize the relativistic wind. Here we review work focusing on the external shock model in the prompt $`\gamma `$-ray luminous phase. We find that the extensive phenomenology of GRBs can be explained with this model, so that the addition of multiple relativistic shells and the numerous parameters associated with a hybrid internal/external shock model are unnecessary. The fewer number of free parameters in the external shock model places definite constraints on the number and type of fireballs needed to explain GRB statistics. The most important implication is that classes of clean and dirty fireballs with well-defined properties must exist, and that the fireball event rate is much larger than previously estimated on the basis of detected GRBs. ## II Numerical Simulation of Light Curves When a relativistic blast wave with Lorentz factor $`\mathrm{\Gamma }`$ encounters an external medium, charged particles will be captured by the blast-wave shell even if the shell has only a very weak entrained magnetic field. A captured particle in the shell frame gets Lorentz factor $`\mathrm{\Gamma }`$. This internal energy derives from the directed energy of the relativistic shell, causing the shell to decelerate. We have developed a numerical simulation model cd99 ; dcm99 for a GRB blast wave that interacts with an external medium. The model treats synchrotron, Compton, and adiabatic processes, and blast-wave deceleration is self-consistently calculated. The parameters that enter the numerical model are those of the standard blast wave model. The macroscopic variables are the implied isotropic energy release $`E=10^{54}E_{54}`$ ergs/(4$`\pi `$ sr) and the initial Lorentz factor $`\mathrm{\Gamma }_0=300\mathrm{\Gamma }_{300}`$ of the blast wave. The environmental variables are the external density $`n(x)n_0x^\eta `$, where $`x`$ is the distance from the center of the explosion. We let $`n_0=10^2n_2`$ cm<sup>-3</sup> and consider a uniform surrounding medium ($`\eta =0`$). (Inhomogeneities in the external medium are considered in §V.) We also let the opening half-angle of the outflow $`\psi =10^{}`$, corresponding to a beaming factor $`f=0.76`$%. 
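As a quick check of the quoted beaming factor, the solid angle of a one-sided cone of half-angle $`\psi `$ covers a fraction $`(1-\mathrm{cos}\psi )/2`$ of the sky, which for $`\psi =10^{}`$ indeed gives 0.76%:

```python
import numpy as np

# Beaming factor for a one-sided cone of half-angle psi: the covered solid
# angle is 2*pi*(1 - cos(psi)), i.e. a fraction (1 - cos(psi))/2 of 4*pi sr.
psi = np.deg2rad(10.0)
f = (1.0 - np.cos(psi)) / 2.0
print(f"f = {100 * f:.2f}%")   # -> 0.76% for psi = 10 degrees
```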
As long as $`\psi \mathrm{\Gamma }_0^1`$, the collimation has little effect on $`\gamma `$-ray emission during the prompt phase if the observer’s line-of-sight falls within an angle $`\theta \stackrel{<}{}\mathrm{\Gamma }_0^1`$ of the jet axis. The microscopic variables are the fraction of energy $`ϵ_e`$ that is transferred from the swept-up protons to the swept-up electrons, and the injection index $`p`$ of the assumed power-law electron energy distribution. A parameter $`ϵ_{\mathrm{max}}`$ is defined in terms of a maximum Lorentz factor $`\gamma _{\mathrm{max}}`$ obtained by balancing the minimum acceleration time scale and the synchrotron loss time scale, giving $`\gamma _{\mathrm{max}}=4\times 10^7ϵ_{\mathrm{max}}/[B(\mathrm{G})]^{1/2}`$. The magnetic field $`B`$ is specified by a magnetic-field parameter $`ϵ_B`$ through the relation $`B^2/(8\pi )=4ϵ_Bm_pc^2n(x)\beta (\mathrm{\Gamma }^2\mathrm{\Gamma })`$, where $`\beta c=(1\mathrm{\Gamma }^2)^{1/2}c`$ is the speed of the blast wave. Standard values used here are $`ϵ_e=0.5`$, $`p=2.5`$, $`ϵ_{\mathrm{max}}=1`$, and $`ϵ_B=10^4`$. The low value of $`ϵ_B`$ is required to avoid forming cooling spectra, which are not commonly observed in GRBs pea98 . We also note that the microscopic variables are assumed to be constant in time. The left panel in Fig. 1 shows calculations of light curves and spectral indices at different observing energies for a model GRB with standard parameters. For comparison, we also show a typical GRB with a smooth light curve. Several effects are apparent here. The first is that the generic Fast Rise, Exponential Decay (FRED) profile found in some 20-30% of all GRB light curves is reproduced (FRED is actually a misnomer, as the decay law is more closely approximated by a power law). The second is that the peaks are sharper at higher energies and broader at lower energies. Another is a hardness-intensity relation and a hard-to-soft evolution of the GRB light curves, so that the well-known correlations are reproduced. A prediction of the model is that the peaks are aligned at $`\gamma `$-ray energies, but lag at X-ray energies dbc99 . This prediction seems to be confirmed by observations with Beppo-SAX fea99 which has spectral coverage in the 2-700 keV range. Fig. 2a shows model GRB spectra at different observing times, and Fig. 2b shows the calculated relationship between $`E_{\mathrm{pk}}`$, flux, and fluence. At X-ray energies, the photon spectral index approaches a value $`\alpha 2/3`$, corresponding to the nonthermal synchrotron emissivity spectrum from an uncooled electron distribution with a low-energy cutoff. The spectrum turns over and approaches the value $`1+(p/2)`$ associated with a cooling electron distribution at the highest energies. Fig. 2b shows that the qualitative behavior of the $`E_{\mathrm{pk}}`$-fluence relationship observed in GRBs cea99 is reproduced. The spectral aging inferred from the decay of $`E_{\mathrm{pk}}`$ values in smooth GRB light curves is a natural consequence of the external shock model. The external shock model therefore accounts for the best established phenomenological correlations of FRED-type GRBs pm98 ; dbc99 . ## III Statistical Properties of GRBs Even if beaming is neglected, seven parameters enter into a blast-wave model calculation with an assumed uniform surrounding medium. We carried out a parameter study dcb99 showing that GRB observables are most sensitive to the value of the initial Lorentz factor (or baryon-loading parameter) $`\mathrm{\Gamma }_0`$ of the explosion. 
The typical duration of a GRB in the prompt phase varies as $`(E/\mathrm{\Gamma }_0^8n_0)^{1/3}`$ at observing energies $`\stackrel{>}{}_0`$. The quantity $`_0=E_{\mathrm{pk}}(t=0)`$ is the photon energy of the peak of the $`\nu F_\nu `$ spectrum at early times, and $`_0qn_0^{1/2}\mathrm{\Gamma }_0^4`$, where $`q`$ is a parameter related to the magnetic field and Lorentz factor of the lowest energy electrons. The power $`\mathrm{\Pi }_0`$ at photon energy $`_0`$ varies as $`(\mathrm{\Gamma }_0^8E_0^2/n_0)^{1/3}`$. These relations show that the mean duration, peak photon energy, and peak power output of a GRB are most sensitive to the value of $`\mathrm{\Gamma }_0`$. A central criticism of a relativistic beaming scenario has been to explain the apparent paradox between a model involving relativistically beamed outflows, and observations showing that $`E_{\mathrm{pk}}`$ is narrowly confined to an energy range near a few hundred keV. Brainerd’s Compton attenuation model jb94 , for example, was specifically designed to account for this fact, but the large column densities required by this model make it unable to explain rapid variability in GRB light curves bea99 . The beaming paradox is resolved by the external shock model dbc99 when the spectral behavior implied by blast wave physics is convolved with detector response. A dirty fireball with $`\mathrm{\Gamma }_0300`$ will have a $`\nu F_\nu `$ peak at low energies, and will rarely be detected because the blast wave energy is radiated over a long period of time ($`\mathrm{\Gamma }_0^{8/3}`$); thus its peak power is very weak ($`\mathrm{\Pi }_0\mathrm{\Gamma }_0^{8/3}`$). The flux in the BATSE range is even lower than implied by this relation because BATSE would be sensitive to only the soft, high-energy portion of the spectrum. A clean fireball, by contrast, would produce a brief, very luminous GRB, but BATSE would sample the very hard portion of the spectrum below the $`\nu F_\nu `$ peak where the received flux is not so great. Fig. 3a illustrates this behavior. Dirty fireballs would rarely trigger BATSE because the flux is so weak in the BATSE triggering range, and clean fireballs with $`\mathrm{\Gamma }_0300`$ would be so brief that the total fluence measured within the BATSE window would not be sufficient to trigger it. Fig. 3b shows the relationship between $`E_{\mathrm{pk}}`$ and fluence for a model calculation when only the parameter $`\mathrm{\Gamma }_0`$ is varied. When $`\mathrm{\Gamma }_0\stackrel{<}{}500`$, the external shock model predicts a positive correlation between $`E_{\mathrm{pk}}`$ and fluence, as has been recently reported lpm99 . The dirty fireballs with $`\mathrm{\Gamma }_0\stackrel{<}{}100`$ would not normally be detected and, as just described, there would also be biases against detecting the clean fireballs with $`\mathrm{\Gamma }_0300`$. It is necessary, however, to convolve temporal and spectral model results through a simulation of the detector response before drawing conclusions about the viability of the model. The BATSE instrument has provided the largest and most uniform data base on GRBs. It nominally triggers on 64, 256, and 1024 ms timescales when the flux in at least two detectors exceeds 5.5$`\sigma `$ over background. The data points in Fig. 4 show the peak photon-flux size distribution, the $`t_{50}`$ duration and the $`E_{\mathrm{pk}}`$ distributions measured with BATSE. 
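The sketch below simply evaluates the scalings quoted above (duration varying as the inverse 8/3 power of $`\mathrm{\Gamma }_0`$, peak photon energy as the 4th power, and peak power as the 8/3 power, with all other parameters held fixed) for a dirty, a standard, and a clean fireball; the particular values $`\mathrm{\Gamma }_0=30`$ and 3000 are arbitrary illustrations.

```python
# Relative duration, nuF_nu peak energy, and peak power for different initial
# Lorentz factors, normalized to the fiducial Gamma_0 = 300, using the scalings
# quoted in the text (everything else held fixed).
fiducial = 300.0
for gamma0 in (30.0, 300.0, 3000.0):
    r = gamma0 / fiducial
    print(f"Gamma_0 = {gamma0:6.0f}: duration x {r**(-8.0/3.0):9.3g}, "
          f"E_pk x {r**4:9.3g}, peak power x {r**(8.0/3.0):9.3g}")
# Dirty fireballs (small Gamma_0) are long, soft, and weak; clean fireballs
# (large Gamma_0) are brief and hard.  Both directions move events away from
# the BATSE triggering window.
```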
The observable $`t_{50}`$ is the time interval over which the integrated counts range from 25% to 75% of the total counts over background. For comparison with statistical data, we developed an analytic model for the temporally evolving GRB spectrum dcb99 based on the detailed numerical calculations. To make a valid comparison between the external shock model and the observed statistical properties of GRBs, we have modeled detector triggering criteria. Model results were integrated over time to determine if the peak 50-300 keV flux exceeded the BATSE threshold so that the simulated BATSE detector would be triggered bd00 . Trigger efficiencies were explicitly taken into account, which is important for GRBs with fluxes near threshold. The underlying assumption of our statistical model is that the event rate of fireballs follows the star formation history of the universe mpd98 . We bd00 found that it was not possible to fit simultaneously the size, $`t_{50}`$ and $`E_{\mathrm{pk}}`$ distributions with a monoparametric model. Broad distributions of explosion energy $`E`$ and initial Lorentz factor $`\mathrm{\Gamma }_0`$ are needed to fit these distributions. The model fits shown in Fig. 4 are based upon power-law distributions of $`E`$ and $`\mathrm{\Gamma }_0`$, where $`dN/dEE^{1.52}`$ for $`10^{48}E(\mathrm{ergs})10^{54}`$, and $`dN/d\mathrm{\Gamma }_0\mathrm{\Gamma }_0^{0.25}`$ for $`\mathrm{\Gamma }_0260`$. The upper limit to $`\mathrm{\Gamma }_0`$ corresponds to a density of $`n_0=10^2`$ cm<sup>-3</sup>; the analytic model is degenerate in the quantity $`n_0\mathrm{\Gamma }_0^8`$. As can be seen from Fig. 4, the model provides reasonable fits to the peak-flux, $`E_{\mathrm{pk}}`$ and $`t_{50}`$ distribution of the long-duration ($`\stackrel{>}{}1`$ s) GRBs. The short hard GRBs must arise from a separate component. The implied redshift distribution of GRBs detected with BATSE is also shown in Fig. 4. We predict that most GRBs detected with BATSE lie in the redshift range $`0.2\stackrel{<}{}z\stackrel{<}{}1.2`$, with a tail of GRBs extending to high redshifts. The predicted number and distribution of high-$`z`$ GRBs detected with BATSE is quite uncertain, because the star-formation rate at high redshifts is poorly known, and the fit depends on the unproven assumption that the comoving space density of fireball transients follows the star formation rate. Moreover, the distribution of explosion energies is assumed to be described by a power-law function with a discrete cutoff. This distribution might instead have a tail extending to very high values. ## IV Dirty and Clean Fireballs An overall normalization factor for the fireball event rate per unit comoving volume is implied by the joint fits to the statistical properties of GRBs shown in Fig. 4. If no beaming is assumed, this normalization corresponds to a local event rate of $`440`$ yr<sup>-1</sup> Gpc<sup>-3</sup>, which is equivalent to a local GRB rate of $`90`$ Galactic events per Myr. This is a factor $`4000`$ greater than the result of Wijers et al.wea98 , who fit the combined BATSE/PVO peak-flux distributions only. This difference is due to an approach to GRB statistics where we abandon a standard candle assumption for the luminosity and rely on blast wave physics and detector response properties to determine whether a fireball transient would be detected with BATSE. 
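The equivalence of the two rate units quoted above can be checked with one line of arithmetic; the space density of Milky-Way-like galaxies used below (0.005 per Mpc³) is an assumed round number, not a value taken from the fits.

```python
# Convert ~440 events/yr/Gpc^3 into events per Milky-Way-like galaxy per Myr,
# assuming a galaxy space density of ~0.005 Mpc^-3 (an assumption).
rate_gpc3_yr = 440.0
n_gal_mpc3 = 0.005
per_galaxy_per_myr = (rate_gpc3_yr / 1.0e9) / n_gal_mpc3 * 1.0e6
print(f"~{per_galaxy_per_myr:.0f} events per galaxy per Myr")   # ~90
```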
Most crucially, we do not assume that there is a preference in nature to make fireballs with a specific energy release $`E`$ and baryon-loading parameter $`\mathrm{\Gamma }_0`$ (which would also entail a typical density of the surrounding medium) that would produce radiation that would trigger BATSE; any such assumption is highly artificial. The consequence of this approach is that fireballs with a wide range of energies and baryon-loading parameters are formed in nature, the bulk of which are not detected and for which we have no evidence except for the limits implied by surveysg99 ; gea99 . Only a very few nearby fireball transients with low values of $`E`$ would be detected, and fireballs with $`\mathrm{\Gamma }_0\stackrel{<}{}10^2`$ would be invisible to BATSE because most of the dirty fireball radiation is emitted at X-ray energies and below. The dirty fireball transients have longer durations and lower $`E_{\mathrm{pk}}`$ values than standard GRBs, and are difficult to detect because they are lost in the glow of the luminous diffuse X-ray background for wide field-of-view instruments. The X-ray transient events discovered with the Beppo-SAX WFC and reported at this meeting h00 might be fireball transients with a baryon load that is large enough that such events would not normally trigger a burst detector at hard X-ray energies. The number of clean fireball transients is not well constrained, but our results show that there must be a break or cutoff in the $`\mathrm{\Gamma }_0`$-distribution at high values of $`\mathrm{\Gamma }_0`$. Clean fireballs have shorter durations and $`E_{\mathrm{pk}}`$ values extending to MeV and GeV energies, and require sensitive, wide field-of-view gamma-ray telescopes to be detected dcb99 ; dc00 . If there are many more fireball events than implied by direct observations of GRBs, then a number of important implications follow: * The hypothesis that ultra-high energy cosmic rays are produced by GRBs remains viable. This hypothesis has been questioned s99 in light of redshift measurements of GRB counterparts that suggest a much lower event rate within the GZK radius than formerly thought. * The identification of X-ray hot spots in M101 with GRB remnants wang99 appears more probable. These associations seemed unlikely given the event rate inferred directly from GRB observations. * GRB explosions could leave many more observable Galactic remnants such as stellar arcs and HI holes, and produce greater biological effects than has been estimated sw99 . ## V Inhomogeneous External Medium Several arguments have been advanced to the effect that an external shock model cannot reproduce the short timescale variability observed in GRB light curves. We address these point by point. 1. An external shock model will display short timescale variability only if the radiative efficiency is poor. The analytic argumentsp97 assumes that density inhomogeneities (or “clouds”) located at an angle $`\theta \mathrm{\Gamma }^1`$ to the line-of-sight make the dominant contribution to variability. Clouds at $`\theta \mathrm{\Gamma }^1`$ actually make much stronger contributions to large-amplitude flux variability because of the combined effects of Doppler beaming and the much shorter observer timescale over which on-axis clouds radiate their emissiondm99 . 2. A condition of local spherical symmetry in radiating blast wave produces pulses in light curves which spread with time, contrary to the observationsfmn96 . 
An external shock model breaks the condition of local spherical symmetry if clouds with radius $`rR/\mathrm{\Gamma }_0`$ are present, as must be assumed to make the short timescale variability. Here $`R`$ is the distance of the cloud from the explosion center. 3. A decelerating blast wave produces spreading pulses, contrary to the observations. Only the portion of the blast wave that interacts with a cloud experiences strong deceleration, and its energy is dissipated by the interaction. The rest of the blast wave does not undergo significant deceleration until it intercepts another cloud, so no spreading from deceleration results. Thus it is not surprising that there is no spreading of peaks in GRB 990123frw99 , because different portions of the blast wave are producing the distinct pulses and peaks in the light curve. 4. Gaps and precursors are not possible due to the interference between a large number of causally disconnected regions. If there are shells of material from winds of GRB progenitor stars, as seems likely if GRB sources are associated with the collapse of massive stars, then gaps in the light curves can be formed. 5. A low-density confining medium will produce a low level of emission unless the density contrast between the clouds and the confining medium is very large. First, it is not necessary to have a confining medium if the massive star progenitor ejects material. Even if there is a low-density confining medium, the standard blast wave model implies that this residual emission will be radiated in a different energy band than the radiation emitted from the blast-wave/cloud interaction. Finally, we note difficulties in a colliding shell scenario. The efficiency for dissipating internal energy in a relativistic shell is maximized for collisions between a shell and a stationary external medium, and is much poorer in collisions between relativistic shells. It is simple to get $`\stackrel{>}{}10`$% radiative efficiency in the BATSE band for an external shock model, but efficiencies $`1`$% are more likely in an internal shock modelkumar99 , which calls into question the validity of the internal shock model for GRB 970508 pac99 . A colliding shell scenario must contend with spreading profiles unless pairs of shells collide only once near the burst source, which would mean an additional loss of efficiency. GRBs with widely separated pulses generally have $`\nu F_\nu `$ peak photon energies within a factor of 2-3 of each other. This is natural for an external shock model, where a blast wave with a single Lorentz factor collides with different clouds within the Doppler cone, but requires fine-tuning of the speeds between pairs of shells in a colliding shell model. ## VI Summary The original motivation for an external shock model was that it provided a simple explanation for the mean duration of GRBsrm92 . This duration roughly corresponds to the time scale $`t_{\mathrm{dec}}`$ where the relativistic shell has swept up a sufficient amount of matter to cause the shell to decelerate. For a GRB source at redshift $`z`$, $$t_{\mathrm{dec}}10(1+z)\left(\frac{E_{54}}{n_2\mathrm{\Gamma }_{300}^8}\right)^{1/3}\mathrm{s},$$ (1) which is comparable to the mean duration of GRBs observed with BATSE (see Fig. 4). This equation does not explain, however, why $`\mathrm{\Gamma }_0300`$. We now know the answer to this problem – fireballs do not have to have $`\mathrm{\Gamma }_0300`$. 
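Equation (1) can be evaluated directly; the sketch below uses the fiducial parameters of that equation and shows the steep dependence on the baryon-loading parameter exploited in the next paragraph.

```python
def t_dec(E54=1.0, n2=1.0, Gamma300=1.0, z=1.0):
    """Deceleration time of Eq. (1), in seconds:
    t_dec ~ 10 (1+z) [E54 / (n2 * Gamma300**8)]**(1/3)."""
    return 10.0 * (1.0 + z) * (E54 / (n2 * Gamma300**8)) ** (1.0 / 3.0)

print(t_dec())                 # ~20 s for the fiducial parameters at z = 1
print(t_dec(Gamma300=2.0))     # ~3 s: doubling Gamma_0 shortens t_dec by 2**(8/3)
print(t_dec(Gamma300=0.5))     # ~130 s: halving Gamma_0 lengthens it by the same factor
```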
But if the baryon-loading is significantly different from this value, then a detector like BATSE will not be triggered. Dirty fireballs with $`\mathrm{\Gamma }_0300`$ will make long duration X-ray transients that will, in general, be too weak to trigger BATSE, and clean fireballs with $`\mathrm{\Gamma }_0300`$ make brief high-energy $`\gamma `$-ray transients with insufficient fluence in the BATSE band to be detected. The implication is that there are many fireball transients that will be detected with more sensitive telescopes employing appropriate triggering properties and scanning strategies. No explanation has been given within the context of the colliding shell/internal shock model as to why GRB durations should range from a fraction of a seconds to hundreds of seconds. There seems to be no reason why intermittent or dealyed accretion of a massive ring of material around a collapsed star should not take place over long time scales, particularly given the unusual behavior that the accretion process must display if it is to produce the variability observed in GRB light curves. The observation of a single GRB that recurs after several hours, days, or months would falsify the external shock model. No convincing case of recurrence has been observed. ###### Acknowledgements. I acknowledge discussions, collaborations, and joint work with M. Böttcher, J. Chiang, and K. Mitman.
# Off-diagonal disorder in the Anderson model of localization ## Abstract We examine the localization properties of the Anderson Hamiltonian with additional off-diagonal disorder using the transfer-matrix method and finite-size scaling. We compute the localization lengths and study the metal-insulator transition (MIT) as a function of diagonal disorder, as well as its energy dependence. Furthermore we investigate the different influence of odd and even system sizes on the localization properties in quasi one-dimensional systems. Applying the finite-size scaling approach in conjunction with a nonlinear fitting procedure yields the critical parameters of the MIT. In three dimensions, we find that the resulting critical exponent of the localization length agrees with the exponent for the Anderson model with pure diagonal disorder. Introduction. The electron localization properties of disordered solids are of high interest both experimentally and theoretically. A simple approach to this problem is given by the Anderson model of localization. In this model one considers a single electron on a lattice with $`N`$ sites described by the Hamiltonian $$H=\underset{ij}{\overset{N}{}}t_{ij}|ij|+\underset{i}{\overset{N}{}}ϵ_i|ii|,$$ (1) where $`|i`$ denotes the Wannier states at site $`i`$. Disorder is usually introduced by varying the onsite potential energies $`ϵ_i`$ randomly in the interval $`[W/2,W/2]`$. In this work we report on the effects of additional off-diagonal disorder , i.e., with random hopping elements $`t_{ij}[cw/2,c+w/2]`$ between nearest neighbor sites. Thus $`c`$ represents the center and $`w`$ the width of the uniform off-diagonal disorder distribution. The energy scale is set by keeping $`w=1`$ fixed. This kind of disorder is similar to the random flux model , but here the modulus of the $`t_{ij}`$ is random while the phase is constant. The Hamiltonian (1) has a chiral symmetry on a bipartite lattice, i.e., under the operation $`\psi _n(1)^n\psi _n`$, the Hamiltonian $`H`$ changes it sign and as a consequence, if $`ϵ_n`$ is an eigenvalue of $`H`$, then so is $`ϵ_n`$. We calculate the localization length $`\lambda `$ of the electronic wave function by the transfer-matrix method (TMM), which is based on an iterative reformulation of Eq. (1) as $`\left(\begin{array}{c}\psi _{n+1}\hfill \\ \psi _n\hfill \end{array}\right)=T_n\left(\begin{array}{c}\psi _n\hfill \\ \psi _{n1}\hfill \end{array}\right)=\left(\begin{array}{cc}𝐭_{n+1}^1(E\mathrm{𝟏}𝐇_{})& 𝐭_{n+1}^1𝐭_n\\ \mathrm{𝟏}& \mathrm{𝟎}\end{array}\right)\left(\begin{array}{c}\psi _n\hfill \\ \psi _{n1}\hfill \end{array}\right),`$ (10) where $`E`$ is the energy, $`𝐇_{}`$ is the Hamiltonian in the $`n`$th slice of the TMM bar, $`\mathrm{𝟏}`$ and $`\mathrm{𝟎}`$ are unit and zero matrix, respectively , and the diagonal matrix $`𝐭_n`$ represents the hopping elements connecting the $`n`$th with the $`(n1)`$th slice. The cross section of the bar is $`1`$, $`M`$, and $`M^2`$ for spatial dimension $`d=1,2`$, and $`3`$, respectively, and we always assume periodic boundary conditions for the TMM. Starting with two initial states $`(\psi _1,\psi _0)`$ one iterates along the quasi 1D bar until the desired accuracy is obtained. The localization length $`\lambda _d=1/\gamma _{\mathrm{min}}`$ is computed from the smallest Lyapunov exponent $`\gamma _{\mathrm{min}}`$ of the eigenvalues $`\mathrm{exp}(\pm \gamma _i)`$ of $`lim_n\mathrm{}(\tau _n^{}\tau _n)^{1/2n}`$ with $`\tau _n=T_nT_{n1}\mathrm{}T_2T_1`$. 
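As a minimal, self-contained illustration of the method, the sketch below iterates the 1D version of the transfer matrix for a chain with both kinds of disorder and estimates the localization length from the growth rate of the iterated vector; the chain length, energies, and disorder values are illustrative choices, not the parameters used for the figures.

```python
import numpy as np

rng = np.random.default_rng(0)

def loc_length_1d(E, W, c, w=1.0, n_steps=200_000):
    """Localization length of a 1D chain with site energies eps_n drawn from
    [-W/2, W/2] and hopping elements t_n drawn from [c - w/2, c + w/2],
    estimated as the inverse of the (only) positive Lyapunov exponent of the
    transfer-matrix product.  The vector is renormalized every step to avoid
    numerical overflow."""
    t = c + w * (rng.random(n_steps + 1) - 0.5)   # hopping elements
    eps = W * (rng.random(n_steps) - 0.5)         # site energies
    psi = np.array([1.0, 0.0])                    # (psi_n, psi_{n-1})
    log_norm = 0.0
    for n in range(n_steps):
        psi_next = ((E - eps[n]) * psi[0] - t[n] * psi[1]) / t[n + 1]
        psi = np.array([psi_next, psi[0]])
        norm = np.hypot(psi[0], psi[1])
        log_norm += np.log(norm)
        psi /= norm
    return n_steps / log_norm                     # lambda_1 = 1 / gamma

# With purely off-diagonal disorder (W = 0) the localization length should grow
# (roughly logarithmically) as E approaches the band centre, as in Eq. (11);
# adding diagonal disorder suppresses it.
for E in (0.2, 0.1, 0.05):
    print(E, round(loc_length_1d(E, W=0.0, c=0.0), 1))
print("W=1:", round(loc_length_1d(E=0.05, W=1.0, c=0.0), 1))
```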
Assuming the validity of one-parameter scaling close to the MIT, we expect the reduced localization lengths $`\lambda _d(M)/M`$ to scale onto a scaling curve $`\lambda _d(M)/M=f_d(\xi /M)`$ with scaling parameter $`\xi `$. Localization lengths at the 1D Dyson singularity. In 1D it has been shown that the density of states (DOS) of the disordered model diverges at $`E=0`$ , if $`W=0`$ on a bipartite lattice . This singularity is intimately related to a divergence of the localization length . Assuming that the localization properties of the wave function can be described as usual by an exponential decay, one finds analytically $$\lambda _1(E)=\frac{v_F}{D}\mathrm{log}\left|\frac{D}{E}\right|$$ (11) with $`v_F`$ the Fermi velocity and $`D`$ an energy parameter depending on the strength of the disorder. In Fig. 1 we show that this energy dependence is convincingly reproduced in numerical data of the model: $`\lambda _1`$ diverges at $`E=0`$ as in Eq. (11) for $`W0`$. Note that the smallest values for the localization length are obtained for $`c=0.25`$. We remark that the validity of the exponential form of the wave function has been questioned in previous studies presenting evidence for power-law localization usually related to critical states. Odd/even effects in quasi 1D systems. In 2D, the DOS also has a sharp peak at the band center and it was shown by the TMM that the localization length again diverges . A finite-size-scaling (FSS) analysis of the TMM data together with studies of the participation numbers and multifractal properties of the eigenstates revealed that the states at $`E=0`$ show critical behavior like in the 3D Anderson model at the MIT . Renewed interest in the study of quasi 1D systems with off-diagonal disorder stems from the fact that the functional form of the divergence depends on whether $`M`$ is odd or even as shown by a Fokker-Planck approach in Ref. . This question is intimately linked to the presence or absence of bipartiteness for the given lattice and boundary conditions. In Ref. numerical data in support of the analytical results had already been shown. Here we will broaden the investigation by studying how the localization length at $`E=0`$ is influenced by the odd/even effects. Eilmes et al. observed that the reduced localization length $`\lambda _2/M`$ for the 2D Anderson model with random hopping is independent of the system sizes at $`E=0`$ for bipartite lattices up to $`M=200`$. Motivated by the results of Refs. , we show the behavior of the localization lengths for non-bipartite systems with odd $`M`$ as well as bipartite systems with even $`M`$ in Figs. 2 and 3. We see that the values of the reduced localization length differ for odd and even $`M`$. Nevertheless, $`\lambda _2/M`$ at $`E=0`$ remains constant and thus critical in the sense of Ref. for odd and even $`M`$ up to $`M=101`$ and $`180`$, respectively. Critical exponents in 3D. For the 3D model we computed the critical parameters at the MIT when either $`E`$ or $`W`$ are varied across the transition, i.e., $`\xi |EE_c|^{\nu _E}`$ or $`\xi |WW_c|^{\nu _W}`$ at fixed $`c`$. The left panel of Fig. 4 shows the reduced localization length $`\lambda _3/M`$ for even $`M`$ up to $`14`$ at $`E=0`$ and $`c=0`$. The transition from extended states for $`W4.05`$ to localized states for $`W4.05`$ is clearly visible as $`\lambda _3/M`$ decreases (increases) with increasing $`M`$ for the extended (localized) case. In the right graph the resulting scaling function $`f_3(\xi /M)`$ is plotted. 
It was obtained by a non-linear fit taking non-linear and non-universal corrections to FSS into account. Comparing the exponents $`\nu _E=1.61\pm 0.07`$ and $`\nu _W=1.54\pm 0.03`$ of the transitions obtained from the fits at $`W=0`$, $`c=0`$ and $`E=0`$, $`c=0`$, respectively, with recent results for the Anderson model with pure diagonal disorder we find good agreement. Conclusion. In this work we have studied the localization properties of the Anderson model of localization with off-diagonal disorder. We find non-localized states in 1D and 2D only at the band center. In quasi 1D we examined odd/even system size effects for the localization length. We showed that the wave functions at $`E=0`$ in 2D remain critical up to the system sizes considered regardless of the odd/even effects. Our numerical results match reasonably well with the predictions . In the 3D case, we obtained the critical exponents of the MIT using TMM and FSS. Their values agree with recent results for the model with only diagonal disorder . Thus although the off-diagonal case exhibits some unusual features, its physics is nevertheless accurately described by the orthogonal universality class within the scaling theory of localization . Acknowledgements. We thank C. M. Soukoulis and J. Stolze for pointing out Refs. to us.
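To make the nonlinear finite-size-scaling fit used above concrete, the sketch below performs a stripped-down version of it: the reduced localization length is modeled as $`\Lambda =\Lambda _c+a_1x+a_2x^2`$ with $`x=(W-W_c)M^{1/\nu }`$, i.e., without the nonlinearity and irrelevant-variable corrections included for the quoted exponents. The data are synthetic, generated from parameter values close to those quoted above purely to exercise the fitting step; with real TMM data the same call would return the measured $`W_c`$ and $`\nu `$.

```python
import numpy as np
from scipy.optimize import curve_fit

# One-parameter scaling near the MIT, without irrelevant-variable corrections:
# Lambda(M, W) = Lambda_c + a1*x + a2*x**2,  with  x = (W - Wc) * M**(1/nu)
def fss_model(MW, Lambda_c, a1, a2, Wc, nu):
    M, W = MW
    x = (W - Wc) * M ** (1.0 / nu)
    return Lambda_c + a1 * x + a2 * x ** 2

# synthetic stand-in for the TMM data Lambda = lambda_3 / M (illustrative only)
M_vals = np.array([6.0, 8.0, 10.0, 12.0, 14.0])
W_vals = np.linspace(3.5, 4.6, 12)
M, W = [g.ravel() for g in np.meshgrid(M_vals, W_vals)]
truth = (0.58, -0.25, 0.05, 4.05, 1.55)
Lam = fss_model((M, W), *truth)
Lam *= 1.0 + 0.01 * np.random.default_rng(3).standard_normal(Lam.size)

popt, pcov = curve_fit(fss_model, (M, W), Lam, p0=(0.6, -0.2, 0.0, 4.0, 1.5))
err = np.sqrt(np.diag(pcov))
print("W_c = %.3f +/- %.3f,  nu = %.2f +/- %.2f" % (popt[3], err[3], popt[4], err[4]))
```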
no-problem/0001/cond-mat0001208.html
ar5iv
text
# On the electronic structure of CaCuO2 and SrCuO2
no-problem/0001/math-ph0001001.html
ar5iv
text
# Normal ordering for deformed boson operators and operator-valued deformed Stirling numbers ## 1 Introduction The transformation of a second-quantized operator into a normally ordered form, in which each term is written with the creation operators preceding the annihilation operators, has been found to simplify quantum mechanical calculations in a large and varied range of situations. Techniques for the accomplishment of this ordering have been developed and are widely utilized . A particular subclass of problems and techniques involves situations in which the operators of interest commute with the number operator. More specifically, one is interested in transforming an operator which is a function of the number operator into a normally ordered form, or transforming an operator each of whose terms has an equal number of creation and annihilation operators corresponding to each degree of freedom, into an equivalent operator expressed in terms of the number operator only. In the present article we consider the corresponding problem for the deformed bosons which have been investigated very extensively in the last three years in connection with the recent interest in the properties and applications of quantum groups. ## 2 Stirling and deformed Stirling numbers The Stirling numbers of the first $`(s)`$ and second $`(S)`$ kinds were introduced in connection with the expression for a descending product of a variable $`x`$ as a linear combination of integral and positive powers of that variable, and the inverse relation, respectively $$x(x1)\mathrm{}(xk+1)=\underset{m=1}{\overset{k}{}}s(k,m)x^m$$ (1) $$x^m=\underset{k=1}{\overset{m}{}}S(m,k)x(x1)\mathrm{}(xk+1).$$ (2) Using these defining relations it is easy to show that the Stirling numbers satisfy the recurrence relations $$s(k+1,m)=s(k,m1)ks(k,m)$$ (3) and $$S(m+1,k)=S(m,k1)+kS(m,k),$$ (4) with the initial values $`s(1,1)=S(1,1)=1`$ and the “boundary conditions” $`s(i,j)=S(i,j)=0`$ for $`i<1`$, $`j<1`$ and for $`i<j`$. The combinatorial significance of the Stirling numbers has been amply discussed . Several generalisations of the Stirling numbers appeared in the mathematical literature \[7-12\]. In anticipation of further development we shall refer to them generically as deformed Stirling numbers. In this context we wish to distinguish between the two widely used forms of “deformed numbers” $`[x]_M=\frac{q^x1}{q1}`$, the usual choice in the mathematical literature on $`q`$-analysis , and $`[x]_P=\frac{q^xq^x}{qq^1}`$, which is common to the recent physical literature and to the literature on quantum groups. A generalisation was recently proposed by Wachs and White , which can be written in the form $`[x]_G=\frac{q^xp^x}{qp}`$. This form contains $`[x]_M`$ and $`[x]_P`$ as special cases, corresponding to the choices $`p=1`$ and $`p=q^1`$, respectively. We shall write $`[x]_{M(q)}`$, $`[x]_{P(q)}`$ and $`[x]_{G(p,q)}`$ instead of the symbols introduced above whenever the choice of the parameters $`q`$ and/or $`p`$ will have to be explicated. The identities $$[x]_{P(q)}=q^{x+1}[x]_{M(q^2)}[x]_{G(p,q)}=p^{x1}[x]_{M(q/p)}[x]_{G(p,q)}=(\sqrt{pq})^{x1}[x]_{P(\sqrt{q/p})}$$ (5) illustrate the notation and exhibit some of the elementary properties of these deformed numbers. 
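Both recurrences are easy to implement, which gives a quick machine check of the defining relations (1) and (2). The short sympy sketch below (illustrative; the cutoff $`k=6`$ is arbitrary) builds $`s(k,m)`$ and $`S(m,k)`$ from Eqs. (3) and (4) and verifies both expansions symbolically.

```python
from sympy import symbols, expand, simplify

x = symbols('x')
K = 6

def falling(y, k):                      # y*(y-1)*...*(y-k+1)
    out = 1
    for i in range(k):
        out *= (y - i)
    return out

# Stirling numbers of the first (s) and second (S) kind from Eqs. (3)-(4):
# s(k+1,m) = s(k,m-1) - k*s(k,m),   S(m+1,k) = S(m,k-1) + k*S(m,k)
s = {(1, 1): 1}
S = {(1, 1): 1}
for a in range(1, K):
    for b in range(1, a + 2):
        s[(a + 1, b)] = s.get((a, b - 1), 0) - a * s.get((a, b), 0)
        S[(a + 1, b)] = S.get((a, b - 1), 0) + b * S.get((a, b), 0)

# Eq. (1):  x(x-1)...(x-K+1) = sum_m s(K,m) x**m
lhs1 = expand(falling(x, K))
rhs1 = expand(sum(s.get((K, m), 0) * x ** m for m in range(1, K + 1)))
print(simplify(lhs1 - rhs1) == 0)       # True

# Eq. (2):  x**K = sum_k S(K,k) x(x-1)...(x-k+1)
rhs2 = expand(sum(S.get((K, k), 0) * falling(x, k) for k in range(1, K + 1)))
print(simplify(x ** K - rhs2) == 0)     # True
```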
One of the generalisations of the Stirling numbers involves a descending product of M-type deformed numbers expressed in terms of the powers of the M-type deformed number $`[x]_M`$ $$[x]_M[x1]_M\mathrm{}[xk+1]_M=\underset{m=1}{\overset{k}{}}s_q(k,m)[x]_M^m$$ (6) and the corresponding inverse relation $$[x]_M^m=\underset{k=1}{\overset{m}{}}S_q(m,k)[x]_M[x1]_M\mathrm{}[xk+1]_M.$$ (7) Using the defining relations it is easy to show that the deformed Stirling numbers $`s_q(k,m)`$ and $`S_q(m,k)`$, which are referred to in the mathematical literature as $`q`$-Stirling numbers of the first and second kind, respectively, satisfy the recurrence relations $$s_q(k+1,m)=q^k\left(s_q(k,m1)[k]_Ms_q(k,m)\right)$$ (8) and $$S_q(m+1,k)=q^{k1}S_q(m,k1)+[k]_MS_q(m,k),$$ (9) with “boundary conditions” and initial values identical with those specified above for the conventional Stirling numbers. A slight modification in the form of the descending product, replacing the factors $`[xi]_M`$ by $`[x]_M[i]_M`$, results in the relations \[8-10\] $$[x]_M([x]_M[1]_M)\mathrm{}([x]_M[k1]_M)=\underset{m=1}{\overset{k}{}}\stackrel{~}{s}_q(k,m)[x]_M^m$$ (10) and $$[x]_M^m=\underset{k=1}{\overset{m}{}}\stackrel{~}{S}_q(m,k)[x]_M([x]_M[1]_M)\mathrm{}([x]_M[k1]_M).$$ (11) Starting with these defining relations and using the identity $$[a]_M[b]_M=q^b[ab]_M$$ (12) we obtain the recurrence relations $$\stackrel{~}{s}_q(k+1,m)=\stackrel{~}{s}_q(k,m1)[k]_M\stackrel{~}{s}_q(k,m)$$ (13) and $$\stackrel{~}{S}_q(m+1,k)=\stackrel{~}{S}_q(m,k1)+[k]_M\stackrel{~}{S}_q(m,k),$$ (14) where the “boundary conditions” and initial values are, again, as above. Note that $$\stackrel{~}{s}_q(k,m)=q^{k(k1)/2}s_q(k,m)\stackrel{~}{S}_q(m,k)=q^{k(k1)/2}S_q(m,k).$$ (15) The two sets of deformed Stirling numbers of the first and second kinds, as well as the conventional Stirling numbers to which they reduce in the limit $`q1`$, satisfy the following dual relations $$\underset{m=1}{\overset{k}{}}s_q(k,m)S_q(m,k^{})=\delta (k,k^{})$$ (16) and $$\underset{k=1}{\overset{m}{}}S_q(m,k)s_q(k,m^{})=\delta (m,m^{}).$$ (17) An additional set of deformed Stirling numbers of the second kind was recently introduced by Wachs and White . Their definition is motivated by combinatorial considerations and has no algebraic origin. Their recurrence relation reads $$S_{p,q}(m+1,k)=p^{k1}S_{p,q}(m,k1)+[k]_GS_{p,q}(m,k)$$ (18) and it reduces to (14) for $`p=1`$. ## 3 Some algebraic properties of deformed boson operators In the context of recent interest in quantum groups and their realization, three types of deformed boson operators have been introduced . The most straightforward definition starts by postulating a Fock space on which creation ($`a`$), annihilation ($`a^{}`$) and number ($`\widehat{n}`$) operators are defined in analogy with the conventional boson operators. The general form postulated is $$a|l>=\sqrt{[l]}|l1>a^{}|l>=\sqrt{[l+1]}|l+1>\widehat{n}|l>=l|l>.$$ (19) It follows immediately that $`a^{}a=[\widehat{n}]`$ and $`aa^{}=[\widehat{n}+1]`$. The two widely used forms of the deformed bosons are obtained by choosing either $`[l]=[l]_M=\frac{q^l1}{q1}`$ or $`[l]=[l]_P=\frac{q^lq^l}{qq^1}`$. A generalisation was recently proposed by Chakrabarti and Jagannathan . We shall adhere to the notation introduced by Wachs and White and write this generalisation in the form $`[l]=[l]_G=\frac{q^lp^l}{qp}`$, which is trivially modified relative to that introduced in Ref. . 
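The deformed recurrences can be checked the same way. In the sketch below (again illustrative), $`s_q`$ and $`S_q`$ are built as rational functions of $`q`$ with sympy, reading the prefactor in Eq. (8) as $`q^{-k}`$, which is what the defining relation (6) together with identity (12) gives, and the dual relations (16) and (17) are then verified symbolically up to a small cutoff.

```python
from sympy import symbols, simplify

q = symbols('q')
N = 5

def qint(k):                            # [k]_M = 1 + q + ... + q**(k-1)
    return sum(q ** i for i in range(k))

# q-Stirling numbers from the recurrences of Eqs. (8)-(9):
# s_q(k+1,m) = q**(-k) * ( s_q(k,m-1) - [k]_M * s_q(k,m) )
# S_q(m+1,k) = q**(k-1) * S_q(m,k-1) + [k]_M * S_q(m,k)
s = {(1, 1): 1}
S = {(1, 1): 1}
for a in range(1, N):
    for b in range(1, a + 2):
        s[(a + 1, b)] = simplify(q ** (-a) * (s.get((a, b - 1), 0)
                                              - qint(a) * s.get((a, b), 0)))
        S[(a + 1, b)] = simplify(q ** (b - 1) * S.get((a, b - 1), 0)
                                 + qint(b) * S.get((a, b), 0))

# biorthogonality, Eqs. (16)-(17):  sum_m s_q(k,m) S_q(m,k') = delta(k,k')
for k in range(1, N + 1):
    for kp in range(1, N + 1):
        tot = simplify(sum(s.get((k, m), 0) * S.get((m, kp), 0)
                           for m in range(1, k + 1)))
        assert tot == (1 if k == kp else 0)
print("biorthogonality verified up to order", N)
```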
As a consequence of a remark made in the previous section, this third type of deformed boson contains the first two as special cases. The deformed bosons as defined by Eq. (19) are not associated with any a priori specification of a (possibly deformed) commutation relation. Choosing a parameter $`Q`$, which does not have to be related to the two parameters $`p`$ and $`q`$ so far introduced, the deformed bosons are found to satisfy the deformed commutation relation $$[a,a^{}]_Q=aa^{}Qa^{}a=\varphi (\widehat{n})=\frac{1}{qp}\left(q^{\widehat{n}}(qQ)+p^{\widehat{n}}(Qp)\right).$$ (20) Since the choice of $`Q`$ is arbitrary we can opt to be guided by the requirement that the form of $`\varphi (\widehat{n})`$ be as simple as possible or by some other relevant criterion. The conventional choice $`Q=q`$, to which we will eventually adhere, results in $$aa^{}qa^{}a=\varphi _M(\widehat{n})=1,$$ (21) $$aa^{}qa^{}a=\varphi _P(\widehat{n})=q^{\widehat{n}}$$ (22) and $$aa^{}qa^{}a=\varphi _G(\widehat{n})=p^{\widehat{n}}$$ (23) for the M-type, P-type and G-type bosons, respectively. We do not label the creation and annihilation operators by indices such as M, P or G because the nature of these operators is always obvious from the context. The choice $`Q=p`$ results in $`\varphi (\widehat{n})=q^{\widehat{n}}`$ for all the three cases. For the M-type bosons ($`p=1`$) this choice implies $`Q=1`$, i.e., the deformed commutation relation becomes $`aa^{}a^{}a=q^{\widehat{n}}`$. For the P-type bosons ($`p=q^1`$) this choice is the familiar alternative to Eq. (22), namely $`aa^{}q^1a^{}a=q^{\widehat{n}}`$. In a recent study of the extension of the Campbell-Baker-Hausdorff formula to deformed bosons , it was noted that the choice $`Q=q`$ is the most suitable one for the M-type bosons, but that $`Q=q+q^11`$ seems to have some advantages for the P-type bosons. From the same point of view, one would choose $`Q=q+p1`$ for the G-type bosons. We shall also need the relation $$[a^k,a^{}]_{Q^k}=\mathrm{\Phi }(k,\widehat{n})a^{k1}$$ (24) which can be viewed as an extension of Eq. (20) in the sense that $`\mathrm{\Phi }(1,\widehat{n})=\varphi (\widehat{n})`$. One easily finds that $$\mathrm{\Phi }(k,\widehat{n})=\frac{1}{qp}\left(q^{\widehat{n}}(qQ)[k]_{G(Q,q)}+p^{\widehat{n}}(Qp)[k]_{G(Q,p)}\right).$$ (25) We shall retain the conventional choice $`Q=q`$ for the three cases specified above. With this choice we get $$\mathrm{\Phi }_M(k,\widehat{n})=[k]_M\mathrm{\Phi }_P(k,\widehat{n})=[k]_Pq^{\widehat{n}}\mathrm{\Phi }_G(k,\widehat{n})=[k]_Gp^{\widehat{n}}.$$ (26) ## 4 Normal ordering of powers of the deformed number operator The relevance of the ordinary Stirling numbers to the normal ordering of powers of the boson number operator was demonstrated in Ref. . In the present section we consider some normal ordering properties of the deformed bosons specified by the parameter choice $`p=1`$ and $`Q=q`$, which corresponds to the M-type boson operators and to the deformed commutation relation (21). Up to a trivial interchange of $`p`$ and $`q`$ this is the only combination of parameters for which the deformed commutator does not depend on $`\widehat{n}`$. The other types of deformed boson operators are considered in the following section where it is found that they differ in a significant respect from the case presently considered. 
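A convenient numerical cross-check of these relations (not part of the original derivation) is to realize the deformed annihilation, creation and number operators as matrices on a truncated Fock space and test Eq. (23) directly. In the Python sketch below the values $`p=0.7`$, $`q=1.3`$ and the truncation dimension are arbitrary.

```python
import numpy as np

def deformed_number(l, p, q):
    """[l]_G = (q**l - p**l)/(q - p); p=1 gives [l]_M, p=1/q gives [l]_P."""
    return (q ** l - p ** l) / (q - p)

def boson_matrices(D, p, q):
    """Annihilation a, creation a_dag and number operator n on the truncated
    Fock space |0>, ..., |D-1>, following Eq. (19)."""
    a = np.zeros((D, D))
    for l in range(1, D):
        a[l - 1, l] = np.sqrt(deformed_number(l, p, q))
    return a, a.T, np.diag(np.arange(D, dtype=float))

p, q, D = 0.7, 1.3, 8
a, a_dag, n = boson_matrices(D, p, q)

# Eq. (23):  a a_dag - q a_dag a = p**n_hat   (with Q = q); the equality is
# exact except in the highest Fock state, where the truncation is felt.
lhs = a @ a_dag - q * a_dag @ a
rhs = np.diag(p ** np.arange(D, dtype=float))
print(np.allclose(lhs[:-1, :-1], rhs[:-1, :-1]))   # True
```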
In order to express an integral power of $`[\widehat{n}]_M`$ in a normally ordered form we can either formally write such an expansion and obtain a recurrence relation for the coefficients by applying Eq. (21) or use the deformed Stirling numbers of the second kind directly. We shall present both approaches because of the intrinsic interest of each one of them. In the direct approach, we start from the expansion $$[\widehat{n}]_M^m=(a^{}a)^m=\underset{k=1}{\overset{m}{}}c(m,k)(a^{})^ka^k.$$ (27) Expressing $`(a^{}a)^{m+1}`$ by means of Eq. (27) and using Eq. (21), we obtain a recurrence relation which is identical with the one satisfied by $`S_q(m,k)`$, Eq. (9). Moreover, it is obvious from the defining equation (27) that $`c(1,1)=S_q(1,1)=1`$. Thus, $`c(m,k)=S_q(m,k)`$. A different derivation can be obtained by using the identity $$\underset{i=0}{\overset{k1}{}}[\widehat{n}i]_M=(a^{})^ka^k.$$ (28) This identity follows by noting that application of both sides of Eq. (28) on any member of the complete set $`\{|l>;l=0,\mathrm{\hspace{0.17em}1},\mathrm{}\}`$ of eigenstates of the number operator results in $`_{i=0}^{k1}[li]_M`$. Using Eq. (7) we obtain $$[\widehat{n}]_M^m=\underset{k=1}{\overset{m}{}}S_q(m,k)\underset{i=0}{\overset{k1}{}}[\widehat{n}i]_M$$ (29) and substituting Eq. (28) we get the desired normally ordered expansion $$[\widehat{n}]_M^m=\underset{k=1}{\overset{m}{}}S_q(m,k)(a^{})^ka^k.$$ (30) We note in passing that an equivalent expansion could have been obtained starting from the identity $$\underset{i=0}{\overset{k1}{}}([\widehat{n}]_M[i]_M)=q^{k(k1)/2}(a^{})^ka^k.$$ (31) This identity can be proved either by induction or by considering the effect of both sides on the complete set of eigenstates of the number operator. Using (11) and (31), we obtain the normally ordered expansion of $`[\widehat{n}]_M^m`$ in the form $$[\widehat{n}]_M^m=\underset{k=1}{\overset{m}{}}\stackrel{~}{S}_q(m,k)q^{k(k1)/2}(a^{})^ka^k$$ (32) which is related to (30) by Eq. (15). In order to obtain the inverse relation, expressing a normally ordered product as a function of the number operator, we note that Eqs. (6) and (28) lead to $$(a^{})^ka^k=[\widehat{n}]_M[\widehat{n}1]_M\mathrm{}[\widehat{n}k+1]_M=\underset{m=1}{\overset{k}{}}s_q(k,m)[\widehat{n}]_M^m.$$ (33) ## 5 Operator-valued deformed Stirling numbers In the present section, we attempt to derive the normally ordered expansion of a power of the number operator for arbitrarily deformed bosons. Allowing $`p`$, $`q`$ and $`Q`$ to be arbitrary, we demand $$[\widehat{n}]_G^m=\underset{k=1}{\overset{m}{}}(a^{})^k\widehat{S}(m,k,\widehat{n})a^k.$$ (34) Using the general relation (24), we derive the recurrence relation $$\widehat{S}(m+1,k,\widehat{n})=Q^{k1}\widehat{S}(m,k1,\widehat{n}+1)+\widehat{S}(m,k,\widehat{n})\mathrm{\Phi }(k,\widehat{n}).$$ (35) The “boundary conditions” and initial values, for all values of $`\widehat{n}`$, are the same as those following Eq. (4). The M-type bosons ($`p=1`$), with the choice $`Q=q`$ which yields $`\mathrm{\Phi }_M(k,\widehat{n})=[k]_M`$, were studied in section 4. For this case, $`\widehat{S}(m,k,\widehat{n})`$ does not depend on $`\widehat{n}`$. More specifically, Eq. (35) then reduces to Eq. (9). 
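The expansion (30) can likewise be verified numerically: with truncated Fock-space matrices restricted to $`p=1`$ (M-type bosons) and the $`S_q(m,k)`$ generated from recurrence (9), both sides of Eq. (30) agree to machine precision. A self-contained sketch with an arbitrary numeric $`q`$ and arbitrary cutoffs:

```python
import numpy as np

q, D, m = 1.2, 12, 4                    # deformation, Fock cutoff, power to test

def qint(k):                            # [k]_M for numeric q
    return (q ** k - 1.0) / (q - 1.0)

# q-Stirling numbers of the second kind from recurrence (9)
S = {(1, 1): 1.0}
for i in range(1, m):
    for k in range(1, i + 2):
        S[(i + 1, k)] = q ** (k - 1) * S.get((i, k - 1), 0.0) \
                        + qint(k) * S.get((i, k), 0.0)

# M-type boson matrices (p = 1) on a truncated Fock space
a = np.zeros((D, D))
for l in range(1, D):
    a[l - 1, l] = np.sqrt(qint(l))
adag = a.T
nM = np.diag([qint(l) for l in range(D)])          # the operator [n_hat]_M

# Eq. (30):  [n_hat]_M**m = sum_k S_q(m,k) (a_dag)**k a**k
lhs = np.linalg.matrix_power(nM, m)
rhs = sum(S.get((m, k), 0.0)
          * np.linalg.matrix_power(adag, k) @ np.linalg.matrix_power(a, k)
          for k in range(1, m + 1))
print(np.allclose(lhs, rhs))            # True; (a_dag)**k a**k is diagonal,
                                        # so the truncation introduces no error
```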
For the G-type bosons, we found in section 3 that by choosing $`Q=q`$ we obtain $`\mathrm{\Phi }_G(k,\widehat{n})=[k]_Gp^{\widehat{n}}`$ ; consequently, we have $$\widehat{S}_G(m+1,k,\widehat{n})=q^{k1}\widehat{S}_G(m,k1,\widehat{n}+1)+\widehat{S}_G(m,k,\widehat{n})[k]_Gp^{\widehat{n}}.$$ (36) Note that in the general case $`\widehat{S}_G(m,k,\widehat{n})`$ depends on the operator $`\widehat{n}`$. The special cases $`p=1`$ and $`p=q^1`$ are contained in Eq. (36). The dependence of $`\widehat{S}_G(m,k,\widehat{n})`$ on $`\widehat{n}`$ for all cases except $`p=1`$ can be taken to imply that we have actually failed to obtain a normally ordered expansion for $`[\widehat{n}]_G^m`$ in terms of a finite sum in $`(a^{})^ka^k`$ with $`k=1,\mathrm{\hspace{0.17em}2},\mathrm{},m`$. The structure of the recurrence relation (36) indicates that the dependence on $`\widehat{n}`$ of the deformed Stirling numbers $`\widehat{S}_G(m,k,\widehat{n})`$ can be expressed in terms of the factor $`p^{(mk)\widehat{n}}`$. Defining the ($`\widehat{n}`$-independent) reduced Stirling numbers of the second kind $`\mathrm{\Xi }(m,k)`$ through $$\widehat{S}_G(m,k,\widehat{n})=q^{k(k1)/2}p^{(mk)\widehat{n}}\mathrm{\Xi }(m,k)$$ (37) we obtain the recurrence relation $$\mathrm{\Xi }(m+1,k)=p^{mk+1}\mathrm{\Xi }(m,k1)+[k]_G\mathrm{\Xi }(m,k)$$ (38) with the initial condition $`\mathrm{\Xi }(1,1)=1`$. To obtain the “inverse relation” to (34), expressing a normally ordered term $`(a^{})^ka^k`$ by means of a polynomial in $`[\widehat{n}]_G`$, we need the “G-arithmetic” identity $$[ab]_G=q^b([a]_Gp^{ab}[b]_G),$$ (39) which follows from the two identities $$[a+b]_G=q^b[a]_G+p^a[b]_G$$ (40) and $$[b]_G=(pq)^b[b]_G.$$ (41) We now proceed to obtain the desired relation $$(a^{})^ka^k=\underset{m=1}{\overset{k}{}}\widehat{s}_G(k,m,\widehat{n})[\widehat{n}]_G^m.$$ (42) Since $`(a^{})^{k+1}a^{k+1}=(a^{})^k[\widehat{n}]_Ga^k=(a^{})^ka^k[\widehat{n}k]_G`$, we can use Eqs. (39) and (42) to obtain the recurrence relation $$\widehat{s}_G(k+1,m,\widehat{n})=q^k\left(\widehat{s}_G(k,m1,\widehat{n})p^{\widehat{n}k}[k]_G\widehat{s}_G(k,m,\widehat{n})\right).$$ (43) Note that for $`p=1`$ this recurrence relation reduces to Eq. (8). Introducing the ($`\widehat{n}`$-independent) reduced Stirling numbers of the first kind $`\xi (k,m)`$ such that $$\widehat{s}_G(k,m,\widehat{n})=q^{k(k1)/2}p^{(km)\widehat{n}}\xi (k,m)$$ (44) in Eq. (43), we obtain the recurrence relation $$\xi (k+1,m)=\xi (k,m1)p^k[k]_G\xi (k,m).$$ (45) The exponential dependence on $`\widehat{n}`$ of the deformed Stirling numbers of the first kind, $`\widehat{s}_G(k,m,\widehat{n})`$, means that we have not been able to express $`(a^{})^ka^k`$ as a polynomial in $`\widehat{n}`$ but we did express it as a function of $`\widehat{n}`$. In order to derive the bi-orthogonality relations between the deformed Stirling numbers of the first and second kinds, we first rewrite Eq. (34) in the form $$[\widehat{n}]_G^m=\underset{k=1}{\overset{m}{}}(a^{})^ka^k\widehat{S}_G(m,k,\widehat{n}k).$$ (46) Using Eq. (37) we obtain $$\widehat{S}_G(m,k,\widehat{n}k)=p^{k(km)}\widehat{S}_G(m,k,\widehat{n}).$$ (47) Defining $`\mathrm{\Xi }^{}(m,k)=p^{k(km)}\mathrm{\Xi }(m,k)`$, we obtain relations of the form of Eqs. (16) and (17) with $`\mathrm{\Xi }^{}(m,k)`$ replacing $`S_q(m,k)`$ and $`\xi (k,m)`$ replacing $`s_q(k,m)`$. ## 6 A generating function for the deformed Stirling numbers of the first kind We start by transforming the $`q`$-binomial theorem into a G-binomial theorem. 
By introducing the symbol $$(\lambda ;x)^{(l)}=(\lambda +x)(p\lambda +qx)(p^2\lambda +q^2x)\mathrm{}(p^{l1}\lambda +q^{l1}x)$$ (48) we have $$(\lambda ;x)^{(l)}=\underset{i=0}{\overset{l}{}}\left[\begin{array}{ccc}& l& \\ & i& \end{array}\right]_Gp^{i(i1)/2}q^{(li)(li1)/2}\lambda ^ix^{li},$$ (49) where $$\left[\begin{array}{ccc}& l& \\ & i& \end{array}\right]_G=\frac{[l]_G!}{[i]_G![li]_G!}$$ (50) is a G-binomial coefficient and $`[k]_G!=[1]_G[2]_G\mathrm{}[k]_G`$. Equation (49) can be proved by induction, using the G-binomial coefficient recurrence relation $$\left[\begin{array}{ccc}l& +& 1\\ & i& \end{array}\right]_G=p^{l+1i}\left[\begin{array}{ccc}& l& \\ i& & 1\end{array}\right]_G+q^i\left[\begin{array}{ccc}& l& \\ & i& \end{array}\right]_G,$$ (51) which follows from the definition of the G-binomial coefficient on using the G-arithmetic relation (40). Now, from the identity $$\frac{(a^{})^ka^k}{[k]_G!}|l>=\left[\begin{array}{ccc}& l& \\ & k& \end{array}\right]_G|l>$$ (52) we obtain $$\underset{k=0}{\overset{m}{}}p^{k(k1)/2}q^{(lk)(lk1)/2}\lambda ^k\frac{(a^{})^ka^k}{[k]_G!}|l>=(\lambda ;1)^{(l)}|l>$$ (53) which can be written as an operator identity $$\underset{k=0}{\overset{\mathrm{}}{}}p^{k(k1)/2}q^{(\widehat{n}k)(\widehat{n}k1)/2}\lambda ^k\frac{(a^{})^ka^k}{[k]_G!}=(\lambda ;1)^{(\widehat{n})}.$$ (54) To obtain an expression for $`(a^{})^ka^k`$ as a function of the number operator $`\widehat{n}`$, we have to expand the right-hand side of Eq. (54) in powers of $`\lambda `$. The coefficient of $`\lambda ^k`$ can be extracted by writing $$(a^{})^ka^k=\frac{[k]_G!}{k!}p^{k(k1)/2}q^{(\widehat{n}k)(\widehat{n}k1)/2}\frac{^k}{\lambda ^k}(\lambda ;1)^{(\widehat{n})}|_{\lambda =0}.$$ (55) The identities $$[m]_{G(p^k,q^k)}=\frac{[km]_{G(p,q)}}{[k]_{G(p,q)}}$$ (56) and $$[km]_{G(p,q)}=\underset{i=1}{\overset{k}{}}\left(\begin{array}{ccc}& k& \\ & i& \end{array}\right)(qp)^{i1}[m]_{G(p,q)}^ip^{m(ki)}$$ (57) are found to be useful when implementing Eq. (55). (To avoid possible confusion we point out that the symbol appearing in Eq. (57) is the conventional binomial coefficient.) Note that for the conventional bosons, for which $`p=q=1`$, Eq. (55) reduces to an expression which can be related to the well-known generating function for the conventional Stirling numbers of the first kind . ## 7 Discussion In the present article we found that the normal ordering formulae for powers of the boson number operator can be extended in a simple and natural way to the M-type bosons, which satisfy $`[a,a^{}]_q=1`$. However, for the P-type bosons, which satisfy $`[a,a^{}]_q=q^{\widehat{n}}`$, as well as for the more general G-type bosons, we found that the extension of the conventional boson analysis results in “normal-ordering” expressions with $`\widehat{n}`$-dependent coefficients. The marked difference between the M-type bosons and all the others has already been noted before, in the context of the extension of the Campbell-Baker-Hausdorff formula for products of exponential operators . 
While the observations pointed out above set apart the M-type bosons, the following may be taken to set apart the P-type bosons disfavourably, within the general set of G-type bosons: Taking the Hamiltonian of the deformed harmonic oscillator to be $`=\frac{\mathrm{}\omega _0}{2}(a^{}a+aa^{})`$ and expanding in powers of $`s=\mathrm{ln}q`$ and $`t=\mathrm{ln}p`$ (which we assume to be sufficiently small), we find that $$=\mathrm{}\omega _0[\frac{s+t}{8}+(1\frac{s+t}{2})(\widehat{n}+\frac{1}{2})+\frac{s+t}{2}(\widehat{n}+\frac{1}{2})^2+\mathrm{}].$$ (58) Apart from an irrelevant shift of the energy zero and a renormalization of the frequency into $`\omega =\omega _0(1\frac{s+t}{2})`$ this Hamiltonian contains a quadratic anharmonicity unless $`s=t`$, i.e., unless $`p=q^1`$. It is true that a quadratic anharmonicity will emerge even for the P-type oscillator ($`p=q^1`$) as a residue of the fourth order term, but it will be associated with a fourth order anharmonicity which may well be inconsistent with the experimental spectrum of some system of interest, such as a diatomic molecule. We finally point out that a coordinate and a conjugate momentum can be defined for the deformed oscillator by means of the relations $`\widehat{x}=(a^{}+a)/\sqrt{2}`$ and $`\widehat{p}=i(a^{}a)/\sqrt{2}`$. Application of Eq. (20) with the choice $`Q=1`$ results in (for $`\mathrm{}\omega _0=1`$) $$[\widehat{x},\widehat{p}]=i[a,a^{}]=i[1+\left(s+t\frac{(s+t)^2}{2}\right)\widehat{n}+\frac{s^2+st+t^2}{2}\widehat{n}(\widehat{n}+1)+\mathrm{}],$$ (59) from which follows the deformed uncertainty relation. Acknowledgements One of the authors (JK) would like to thank the Région Rhône-Alpes for a visiting fellowship and the Institut de Physique Nucléaire de Lyon for its kind hospitality.
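The expansion (58) is easy to sanity-check numerically (the check below is ours, not the authors'): for small $`s`$ and $`t`$ the difference between the exact Hamiltonian, with the overall prefactor set to unity, and the displayed terms of Eq. (58) should shrink quadratically, which the sketch confirms for an arbitrary level $`n=3`$.

```python
import numpy as np

def G(x, p, q):                          # [x]_G = (q**x - p**x)/(q - p)
    return (q ** x - p ** x) / (q - p)

def H_exact(n, s, t):                    # ([n]_G + [n+1]_G)/2  with q = e**s, p = e**t
    p, q = np.exp(t), np.exp(s)
    return 0.5 * (G(n, p, q) + G(n + 1, p, q))

def H_expansion(n, s, t):                # the terms displayed in Eq. (58)
    u = s + t
    return u / 8 + (1 - u / 2) * (n + 0.5) + (u / 2) * (n + 0.5) ** 2

n = 3
for eps in (1e-2, 1e-3, 1e-4):           # residual should fall off roughly as eps**2
    print(eps, H_exact(n, eps, 0.5 * eps) - H_expansion(n, eps, 0.5 * eps))
```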
no-problem/0001/astro-ph0001146.html
ar5iv
text
# The modified dynamics is conducive to galactic warp formation ## 1 introduction It is not yet known what exactly induces and maintains galactic warps. It may well be that a number of mechanisms–from among those proposed already, or some new ones–act together or alone to produce this ubiquitous phenomenon. Some proposed mechanisms rely on a dark galactic halo as a direct actuator or as a mediator of perturbations (for reviews with extensive references see Briggs (1990), Binney (1992), and Binney $`\&`$ Merrifield (1998)). The modified dynamics (MOND) repudiates dark halos, but it offers a new mechanism that increases the warping efficacy of external perturbers over and above their possible tidal effects, which, notoriously, are too weak. This results from the nonlinearity of MOND and is most clearly demonstrated in the case where a system (a galaxy) falls in an external field that by itself is approximately constant in space. In a linear theory, such as Newtonian gravity, the constant external field has no effect on the internal dynamics of the system (motions with respect to its center of mass); in MOND it very much does. When the external field dominates the internal field of the system it is easy to deduce what its effects are, as discussed e.g. in Bekenstein $`\&`$ Milgrom (1984), and Milgrom (1986). In the present context the external acceleration is small compared with the internal ones at the position of the warp, which necessitates numerical studies. For our mechanism to work in field galaxies, one or more perturbers must be present. There is, indeed, growing evidence that the appearance of a warp in a galaxy is strongly correlated with the presence of nearby perturbers (see e.g. Reshetnikov $`\&`$ Combes (1998)). Even galaxies that had been thought to be isolated might, in fact, not be so (Shang $`\&`$ al (1998)). Of course, perturber companions have always been suspected, but their direct tidal effects on disks seem to be too small. The purpose of this letter is to demonstrate, by numerical solutions of simplified galaxy-perturber systems, that, with reasonable parameter values, this MOND effect produces galactic warps of the magnitude observed. We have not included effects due to variations in the external field (due to the motion of the perturber, or to the motion of the galaxy in a parent cluster). From the symmetry of our model problem the warps we produce have a straight line of nodes. The method is described in section 2; the results are detailed in sections 3 and 4; conclusions are drawn in section 5. ## 2 Method We use the nonrelativistic, modified-gravity formulation of MOND suggested by Bekenstein $`\&`$ Milgrom (1984). The acceleration field $`\stackrel{}{g}=\stackrel{}{}\varphi `$ produced by a mass distribution $`\rho `$ is derived from a potential $`\varphi `$ that satisfies $$\stackrel{}{}[\mu (|\stackrel{}{}\varphi |/a_0)\stackrel{}{}\varphi ]=4\pi G\rho $$ (1) instead of the usual Poisson equation $`\stackrel{}{}\stackrel{}{}\varphi =4\pi G\rho `$, where $`\mu (x)x`$ for $`x1`$, and $`\mu (x)1`$ for $`x1`$, and $`a_0`$ is the acceleration constant of MOND. The form $`\mu (x)=x/\sqrt{1+x^2}`$ has been used in all rotation curve analyses, and we also use it here. This nonlinear potential equation is solved numerically using multi-grid methods as detailed in Brada (1996), and adumbrated in Brada $`\&`$ Milgrom (1999). We consider two classes of models. 
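Before turning to the numerical models it is worth recalling the one case in which Eq. (1) needs no numerics: in spherical symmetry the field equation integrates to the algebraic relation $`\mu (g/a_0)g=g_N`$, with $`g_N`$ the Newtonian acceleration, and for the interpolation function used here this inverts in closed form. The short Python sketch below illustrates this (function name and test values are arbitrary); in the disk-plus-perturber geometries studied in this letter no such shortcut exists, hence the numerical solutions described next.

```python
import numpy as np

def mond_g(gN, a0=1.0):
    """Invert mu(g/a0)*g = gN for mu(x) = x/sqrt(1+x**2); valid only in
    spherical symmetry, where the curl term of the field equation vanishes."""
    y = gN / a0
    x2 = 0.5 * (y ** 2 + y * np.sqrt(y ** 2 + 4.0))  # root of x**4 - y**2*x**2 - y**2 = 0
    return a0 * np.sqrt(x2)

# deep-MOND limit: for gN << a0 the result approaches sqrt(gN * a0)
for gN in (1e-2, 1e-4, 1e-6):
    print(gN, mond_g(gN), np.sqrt(gN))   # last column: sqrt(gN * a0) with a0 = 1

# Newtonian limit: for gN >> a0 the correction becomes negligible
print(mond_g(1e3) / 1e3)                 # close to 1
```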
To simulate a far away companion, or the effect of the mean field of a cluster on a member galaxy, we solve for the field of a rigid disk in the presence of a given external acceleration field $`\stackrel{}{g}_{ex}`$. In this case the field equation is solved subject to the boundary condition at infinity $`\varphi _{\mathrm{}}(\stackrel{}{r})=\stackrel{}{r}\stackrel{}{g}_{ex}`$. Then $`\stackrel{}{g}_{ex}`$ is subtracted from $`\stackrel{}{}\varphi `$ to get the field relative to the galaxy. This latter determines the galaxy’s internal dynamics, warps, etc. To simulate the effect of a nearby companion, exemplified here by the effect of the Magellanic clouds (MC) on the Milky-Way (MW), we solve fully for a disk-plus-perturber system (in which case $`\stackrel{}{}\varphi 0`$ at infinity). Then, the center-of-mass acceleration of the galaxy is computed using the surface-integral method given by eq.(14) of Bekenstein $`\&`$ Milgrom (1984), and is subtracted from the acceleration field to get the internal dynamics. In each case, after the acceleration field relative to the center of mass of the galaxy is found, we find closed, nearly circular, nearly planar, test-particle orbits. The orbits are integrated for many periods to insure that, within our accuracy and patience, they are closed. Thus, inasmuch as adiabaticity is a good approximation, these are non-precessing orbits. They are also found to be stable under small changes in their initial conditions. These are taken to trace a warp, in the spirit of the tilted-ring model (Rogstad, Lockhart, $`\&`$ Wright (1974)). ## 3 An exponential disk in a constant external field We take the model galaxy to be an exponential disk smoothly truncated at a radius that we use as our unit length, $`R_{cut}=1`$, and with a scale length of $`h=0.2`$ in these units. The surface density is of the form $`\mathrm{\Sigma }_0\mathrm{exp}(R/h)(1R^4)`$ for $`0R1`$. The disk lies in the $`xy`$ plane. To optimize the warping effect we take the external field to lie $`45^o`$ from the $`x`$ axis in the third quadrant of the $`xz`$ plane. Its absolute value is taken as $`g_{ex}=0.01`$ in units of $`a_0`$. (We work in units where $`a_0=1`$, $`G=1`$, so masses are given in units of $`a_0R_{cut}^2/G`$.) In a more extensive study we plan to calculate the effect as a function of the field direction (relative to the disk axis) and also to follow the test particle orbits as the external field changes with time to mimic the relative motion of the galaxy and perturber. For the present, pilot study we ran models with two values of the disk mass $`m=0.01`$ and $`m=0.04`$. The (MOND) accelerations of the isolated disk models at $`R=1`$ are $`m^{1/2}=0.1,0.2`$; i.e. respectively, ten and twenty times larger than the external field. Both accelerations are small compared with 1 ($`a_0`$) so we are rather deep in the MOND regime in the warp region. In the deep-MOND regime the theory has obvious scaling properties so the above parameters represent a family of models spanned e.g. by scaling by the same factor $`m^{1/2}`$ and $`g_{ex}`$, or $`m`$ and $`R_{cut}^2`$ (with $`h/R_{cut}`$ fixed). The results are summarized in Figures 1 and 2. We first show for each model a plot of the absolute value of the torque $`T|\stackrel{}{r}\times \stackrel{}{}\varphi _c|`$ in the $`xz`$ plane containing the disk axis and the external field ($`\stackrel{}{}\varphi _c`$ is the field in the center-of-mass frame). 
This is a quantity that brings out clearly the departure of the field from both axisymmetry and left-right symmetry. In the spherical case $`T=0`$ everywhere; in the isolated-disk case the $`T=0`$ line is the $`x`$ axis (and the $`z`$ axis). This torque plot is also useful for homing in on closed orbits of the potential whose center is near the galactic center, because these should cross the $`xz`$ plane near the zero-torque line. The orbits are found by actual integration, starting from a set of initial conditions. The projections of some such orbits are then shown. We surmise that in the spirit of the tilted-ring model they delineate the shape of the warp. ## 4 A disk-plus-companion system–the effect of the Magellanic Clouds on the Milky Way. The disk of the Milky Way is known to be warped beyond the solar circle (Burke (1957); Kerr (1957); Henderson, Jackson, $`\&`$ Kerr (1982), and for a recent description and references Binney $`\&`$ Merrifield (1998)). At galactic longitude $`(l90^{})`$ the HI disk curls steadily away from the plane. At $`(l270^{})`$ the disk curves southward before turning back towards the plane (see Binney $`\&`$ Merrifield (1998) for an analytic expression that approximates the warp shape beyond 11 kpc). The line of nodes in the tilted-ring picture is straight within the uncertainties, and is nearly perpendicular to the plane spanned by the inner-disk axis and the radius vector to the Magellanic clouds. This makes the cloud system a prime suspect in producing the warp. It was, however, appreciated long ago that the tidal field of the Clouds, in their present position, is too small to distort the disk to the extent observed. For example, Hunter $`\&`$ Toomre (1969) estimated that a cloud mass $`10^{10}M_{}`$ would generate a warp of amplitude $``$ 70 pc at a radius of 16 kpc. It has, however, been suggested by Weinberg (1995) that a “live” halo that actively responds to the perturbation of the clouds might augment the effect to produce a warp of the observed magnitude and geometry. MOND, as we said, excludes a dynamically important halo, but might lead to a large enough warp due to the non-linear effect discussed above. We model the system as follows. The MW is taken as a pure disk in the $`xy`$ plane, centered at the origin, with the cutoff, exponential surface-density law described in section 3 (with $`h=0.2`$); its dimensionless mass is $`M_{disk}=0.04`$. The Magellanic clouds are represented by one point mass $`M_{sat}`$ at a position whose dimensionless coordinates are (2.52, 0, -1.63). This is at $`15h`$ from the center of the Galaxy, and at the correct galactic latitude of the LMC. This ratio would correspond for example to $`h=3`$ kpc (Binney $`\&`$ Merrifield (1998)) and an LMC distance of 45 Mpc (Mould $`\&`$ al (1999)). The uncertainties in these parameters are still large. Two mass ratios were considered: $`M_{sat}/M_{disk}=0.1,0.2`$. (The B luminosity ratio of the clouds to the MW is about 0.2. Since the baryonic $`M/L`$ values of the two might be different, reasonable values of the mass ratio lie between 0.1 and 0.4.) Other nearby galaxies are expected to have a smaller effect than the LMC; for example, M31, despite its higher mass, causes a rather smaller acceleration near the MW. 
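The orbit-finding step itself is elementary once an acceleration field is available. As a stand-in for the numerically computed disk-plus-satellite field, which is not reproduced here, the sketch below integrates a test particle with a leapfrog scheme in the spherical MOND field of a point mass $`M=0.04`$ (the disk mass used above); at $`R=1`$ this field has a magnitude of about 0.2, i.e., $`m^{1/2}`$, matching the scale quoted in section 3. The time step and initial conditions are arbitrary.

```python
import numpy as np

def accel(r_vec, M=0.04, a0=1.0, G=1.0):
    """Spherical MOND acceleration of a point mass M, from the algebraic
    inversion of mu(g/a0) g = G M / r**2 with mu(x) = x/sqrt(1+x**2)."""
    r = np.linalg.norm(r_vec)
    y = G * M / (a0 * r ** 2)
    g = a0 * np.sqrt(0.5 * (y ** 2 + y * np.sqrt(y ** 2 + 4.0)))
    return -g * r_vec / r

def integrate(r0, v0, dt=1e-3, n_steps=50_000):
    """Kick-drift-kick leapfrog integration of a test-particle orbit."""
    r, v = np.array(r0, float), np.array(v0, float)
    path = np.empty((n_steps, 3))
    for i in range(n_steps):
        v += 0.5 * dt * accel(r)
        r += dt * v
        v += 0.5 * dt * accel(r)
        path[i] = r
    return path

v_circ = 0.04 ** 0.25            # deep-MOND circular speed, (G*M*a0)**(1/4)
orbit = integrate([1.0, 0.0, 0.0], [0.0, v_circ, 0.0])
radii = np.linalg.norm(orbit, axis=1)
print(radii.min(), radii.max())  # stays close to R = 1 for a near-circular orbit
```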
In Figure 3 we show, for each mass ratio, two closed, quasi-circular, stable orbits beyond the cutoff radius of the disk–presumed to delineate our calculated warp–together with a representation of the observed warp (as given by the formula in Binney $`\&`$ Merrifield (1998)). ## 5 Conclusions and discussion We see that for a constant external field whose ratio to the field of the isolated disk at $`5h`$ is only (5-10)$`\%`$, a noticeable warp beyond this radius is indicated by test-particle orbits. This acceleration ratio increases in proportion to the galactocentric radius, but, still, rotational velocities will be affected only little even up to $`(1015)h`$. Note that the warp is not symmetric but is less pronounced on the attracting side of the field. We used an external-field inclination that is favorable for an S-shape warp. When the field is in the disk plane, the axisymmetry is broken but the up-down one is not, so no warp will be induced. If the field is perpendicular to the disk, axisymmetry is preserved but not the up-down one; this might induce bowl-shaped warps. (In the limit of a highly dominant, perpendicular, external field, the analytic results in Milgrom (1986) show that the geometry remains up-down symmetric, but for a weak perturbing field this is not so.) Regarding the results for the MW-MC system, we see that even for a mass ratio of 0.1 a satellite at the position of the MC produces enough field distortion to accommodate inclined, quasi-circular orbit that rise to $`0.25h`$ at $`R=6h`$ on one side, and to a height of $`0.2h`$ at $`R=5.5h`$ on the other. With a mass ratio of 0.2 the amplitude of the warp is close to that observed for the MW. Our analysis requires various improvements, which we hope to include in a future, more extensive analysis. 1. A larger volume of the parameter space has to be surveyed. This includes more values of the relative strength of the perturbation, different disk-perturber alignments (leading perhaps to a wider varieties of warp shapes), more complex perturbations such as two or more satellites, which would bend the line of nodes at larger radii, where we cannot approximate the combined effect by a constant field. We would also have to study other galaxy mass distributions. For example, we expect that if a considerable fraction of the galaxy mass is put in a round bulge, a warp will form more easily. For the same reason, if $`h`$ is smaller (but the MC distance remains the same) the warp will be stronger at the same position. 2. Viewing the warp as an envelope of test-particle orbits may be too naive. Certainly in more complicated geometries we expect orbits to cross and gas dynamics must be considered. 3. We must reckon with the fact that in many relevant cases the geometry of the perturbation changes considerably during the response time of the disk (say the orbital period at the position of the warp). This is true of galaxies moving in or near the core of galaxy clusters; it is also true for the MW-MC system. The warp geometry will thus not just follow the perturbation adiabatically but, at larger radii, the geometry may reflect the past history of the perturbation (leading, among other things, to curvature of the line of nodes). According to proper-motion observations and models of the MC and the Magellanic stream motion (see e.g. Lin, Jones, $`\&`$ Klemola (1995)), the MC binary is moving on a nearly polar orbit around the galaxy with a tangential velocity that is now comparable with the rotational velocity of the galaxy. 
This means that the radius vector to the MC changes its angle with the Galaxy’s axis by $`90^o`$ during the Galaxy’s orbital period at about 15 kpc. This means that the adiabaticity assumption we have made might be broken, and more and more so at larger radii. Our subtraction of a constant center-of-mass acceleration is then also not valid. This could lead to a more complicated warp geometry than the integral-sign shape that we get with adiabaticity. Because the MC orbit is nearly polar we expect the line of nodes to remain straight. 4. We think self-gravity of the mass in the warped region is not so important. This is because the relative contribution of the warped mass to the acceleration field is small everywhere, even within the warp itself. (The surface density in the warp is smaller than the integrated surface density there.) This contribution can then be treated as a perturbation–linearizing in it the MOND field equation–and in MOND such density perturbations produce an even weaker effect than in Newtonian dynamics (hence the added stability in MOND). So we can at least expect that the nonlinearity of MOND will not beget some peculiar amplification of self gravity. But we do not really know what these effects might be–a point that has to be checked numerically. We plan to perform $`(N+1)`$-body simulations whereby the $`N`$-body warped disk and the point-mass perturber orbit each other. This will account for non-adiabaticity and for self gravity in the disk, and also partly for point 2 above. We thank James Binney for helpful suggestions and for comments on the manuscript, and the referee for improving suggestions.
no-problem/0001/astro-ph0001384.html
ar5iv
text
# Tests of the Accelerating Universe with Near-Infrared Observations of a High-Redshift Type Ia Supernova ## 1 Introduction Recent observations of high-redshift ($`z>0.3`$) Type Ia supernovae (SNe Ia) provide the backbone of the body of evidence that we live in an accelerating Universe whose content is dominated by vacuum energy (Riess et al. 1998; Perlmutter et al. 1999). The observational evidence for an accelerating Universe is that high-$`z`$ SNe Ia are $``$30% dimmer than expected in an open Universe (i.e., $`\mathrm{\Omega }_M`$=0.3, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$). The two most likely sources to obscure distant SNe Ia and affect their interpretation are dust and evolution. The tell-tale signature of extinction by Galactic-type dust, reddening, has not been detected in the amount required to provide $`A_V=0.3`$ mag for high-$`z`$ SNe Ia (Riess et al. 1998; Perlmutter et al. 1999). Yet the cosmological implications of the observed faintness of high-$`z`$ SNe Ia are so exotic as to merit the consideration of dust with more unusual properties. A physical model of dust composed of larger grains ($`>`$ 0.1 $`\mu `$m) has been posited by Aguirre (1999a,b) to provide a non-cosmological source of extinction with less reddening. An interstellar component of this so-called “gray” dust, if neglected, would add too much dispersion to be consistent with the observed luminosities (Riess et al. 1998). However, Aguirre (1999a,b) has shown that a uniformly distributed component of intergalactic gray dust with a mass density of $`\mathrm{\Omega }_{dust}5\times 10^5`$ could explain the faintness of high-$`z`$ SNe Ia without detectable reddening and without overproducing the far-infrared (far-IR) background. Previous data do not rule out this possibility. Indeed, significant interstellar extinction in the hosts of high-$`z`$ SNe Ia is still favored by some (Totani & Kobayashi 1999). Rest-frame evolution is the other potential pitfall in using high-$`z`$ SNe Ia to measure the cosmological parameters. The lack of a complete theoretical model of SNe Ia including the identification of their progenitor systems makes it difficult to access the expected evolution between $`z=0`$ and 0.5 (Livio 1999; Umeda et al. 1999; Höflich, Wheeler, & Thielemann 1998). An impressive degree of similarity has been observed between the spectral and photometric properties of nearby and high-$`z`$ SNe Ia (Schmidt et al. 1998; Perlmutter et al. 1998, 1999; Riess et al. 1998; Filippenko et al. 2000; but see also Riess et al. 1999c; Drell, Loredo, & Wassermann 1999). However, it is not known what kind or degree of change in the observable properties of SNe Ia would indicate a change in the expected peak luminosity by 30%. For that reason it has been necessary to compare a wide range of observable characteristics of nearby and high-$`z`$ SNe Ia to search for a complement to a luminosity evolution. Near-IR observations of high-$`z`$ SNe Ia can provide constraints on both sources of cosmological contamination. A physical model of gray intergalactic dust, such as that proposed by Aguirre (1999a,b), still induces some reddening of SN light which can be detected in the wavelength range between optical and near-IR light. In addition, near-IR observations provide a view of the behavior of high-redshift SNe Ia in a window previously unexplored. Specifically, normal, nearby SNe Ia exhibit a second infrared maximum about a month after the initial peak. 
We can increase our confidence that high-$`z`$ SNe Ia have not evolved by observing this second maximum; its absence would indicate a change in the physics SNe Ia across redshift with potentially important cosmological consequences. We obtained ground-based $`J`$-band and space-based optical observations of SN 1999Q ($`z=0.46`$) to initiate a study of the systematic effects of dust and evolution on high-$`z`$ SNe Ia. In §2 we descibe our observations, in §3 their analysis, and in §4 their interpretation. ## 2 Observations Our High-$`z`$ Supernova Search Team (HZT) has an ongoing program to discover and monitor high-redshift SNe Ia (Schmidt et al. 1998). SN 1999Q was discovered on Jan 18, 1999 using the CTIO 4-m Blanco Telescope with the Bernstein-Tyson Camera (http://www.astro.lsa.umich.edu/btc/btc.html) as part of a 3-night program to search for high-$`z`$ SNe Ia using well-established methods. High signal-to-noise ratio spectra of SN 1999Q obtained with the Keck-II telescope indicated that this was a typical SN Ia at $`z`$=0.46 shortly before $`B`$-band maximum (Garnavich et al. 1999; Filippenko et al. 2000). Rest-frame $`B`$ and $`V`$-band photometry using custom filters was obtained for SN 1999Q from observatories around the world. The Hubble Space Telescope (HST) monitored the $`B`$ and $`V`$ light curves of SN 1999Q from $``$ 10 to 35 days after $`B`$ maximum (rest-frame) using the WFPC2 and the F675W and F814W filters in the course of 6 epochs. Combined, these data provide excellent coverage of the rest-frame $`B`$ and $`V`$ light curves from a few days before to 60 days after $`B`$ maximum (in the rest frame; Clocchiatti et al. 2000). In addition, the SN was observed in the near-IR ($`J`$-band) for 5 epochs between 5 and 45 days after $`B`$ maximum (in the rest frame). The first observation employed the European Southern Observatory’s 3.5-m New Technology Telescope equipped with the Son of Isaac (SOFI) infrared camera spectrograph (http://www.ls.eso.org); subsequent observations used the Keck II Telescope equipped with the Near-Infrared Camera (NIRC; Matthews & Soifer 1994). Due to the high sky brightness in the IR, many dithered, short images were obtained and combined to avoid saturating the detector. Care was taken to maintain a detected sky flux level of $``$10,000 counts, a regime where both SOFI and NIRC exhibit less than 0.5% non-linearity (http://www.ls.eso.org; http://www.keck.hawaii.edu). Using the procedure described by Garnavich et al. (1998), we subtracted an empirical point-spread function (PSF) scaled to the brightness of the SN from each HST observation. A coadded image of total length 7200 seconds in both F675W and F814W revealed no trace of host galaxy light to more than 5 mag below the peak brightness of the supernova (i.e., $`m_B,m_V>`$ 27). The host of SN 1999Q is likely to be intrinsically faint or of very low surface brightness similar to the host of SN 1997ck (Garnavich et al. 1998). Due to the negligible contribution of host galaxy light to the images, we have made no correction for contamination to the measured supernova light. Assuming that the restframe $`VI`$ color of the host galaxy is no redder than that of early-K-type dwarfs ($`VI`$=1.0 for K0), this same practice is well justified for our measurements of SN light in the $`J`$ band. We conservatively adopt a systematic uncertainty of 0.03 mag in the SN photometry (and 0.02 mag uncertainty in the colors) to account for any remaining bias. The procedure described by Schmidt et al. 
(1998), Garnavich et al. (1998), and Riess et al. (1998) was followed to calibrate the measured magnitudes of SN 1999Q on the Johnson $`B`$ and $`V`$ passband system. Similar steps were performed to calibrate the observed $`J`$-band magnitudes of SN 1999Q onto the rest-frame Cousins $`I`$-band system, though a few exceptions are noted here. On three photometric nights we observed the secondary near-IR standards of Persson et al. (1998). Because these secondary standards are solar analogues (0.4 $`<`$ $`BV`$ $`<`$ 0.8), one can transform these stars from the “Persson system” to that of NIRC or SOFI by calculating the photometric difference of spectrophotometry of the Sun between these systems. We found these differences to be quite small ($`<`$0.02 mag) and this correction negligible. In practice the true transmission curve in $`J`$ is dictated by the natural opacity of atmospheric H<sub>2</sub>O and nightly variations are generally larger than differences between different facility $`J`$-passbands. For this reason we observed secondary standards in close temporal proximity to the SN field. Due to the inherent non-linearity of airmass extinction corrections in $`J`$, the field of SN 1999Q was observed at airmasses within 0.05 of the Persson et al. (1998) standards to avoid the need for airmass corrections. Assuming typical $`J`$-band extinction of 0.1 mag per airmass (Krisciunas et al. 1987), errors of $``$ 0.005 mag are introduced without explicit extinction corrections. Cross-band $`K`$-corrections (Kim, Goobar, & Perlmutter 1996) were calculated using spectrophotometry of SN 1994D (which extend redward of 9100 Å; Richmond et al. 1995) to transform the $`J`$-band magnitudes of SN 1999Q to the Cousins $`I`$-band. At the redshift of SN 1999Q ($`z=0.46`$), observed $`J`$-band light is an excellent match to rest-frame Cousins $`I`$, and the $`K`$-correction was determined to be $``$0.93 $`\pm 0.02`$ mag with no apparent dependence on supernova phase or color. The rest-frame $`I`$ photometry of SN 1999Q is given in Table 1. Of fundamental importance is our ability to correctly transform the observed $`J`$-band photometry to restframe $`I`$-band. Schmidt et al. (1998) discusses the derivations of optical zeropoints from numerous spectrophotometric stars which also have UBVRI photometry. Applying the same spectrophotometry to cross-band $`K`$-corrections removes the dependence of the transformed photometry on the observed band’s zeropoint. Unfortunately, this type of data is not available for the J-band. We have therefore calibrated our $`J`$-band data using Persson et al. standards, who are fundamentally tied to the Elias (1982) standards and we adopt an appropriate $`J`$-band zeropoint uncertainty of $`1\sigma `$=0.05 mag. ## 3 Analysis ### 3.1 Second IR Maximum A unique photometric signature of typical SNe Ia is a resurrection of the luminosity at infrared wavelengths about a month after the initial maximum. This feature is present (with the most exquisite photometry) in the $`V`$ band, grows into a “shoulder” in $`R`$, and increases to a second local maximum in $`I`$ (Ford et al. 1993; Suntzeff 1996; Hamuy et al. 1996a; Riess et al. 1999a). This second maximum is also readily apparent at near-IR wavelengths ($`J,H,`$ and $`K`$; Elias et al. 1981; Jha et al. 1999). No other type of SN exhibits this feature. The secondary maximum is thought to result from the escape of radiation from the core of the supernova at long wavelengths. 
Resonant scattering from lines is the dominant source of opacity. At short wavelengths (i.e., $`<`$ 5000 Å) line blanketing traps radiation; the resonance lines at longer wavelengths are fewer and further between providing escape routes for the trapped radiation (Spyromilio, Pinto, & Eastman 1994). Wheeler et al.(1998) argue that this effect in itself would not explain the nonmonotic behaviour of the J-band light curve. No model as yet fully explains the shape and timing of the infrared light curves of SNe. The location and strength of the second $`I`$-band maximum is a diagnostic of the intrinsic luminosity of SNe Ia (Riess, Press, & Kirshner 1996; Hamuy et al. 1996b). SNe Ia with typical peak luminosity (i.e., $`M_V=19.4`$ mag) crest again in $`I`$ about 30 days after $`B`$ maximum. Dimmer SNe Ia reach their second peak earlier. For example, SN 1992bo was $``$0.5 mag fainter than a typical SN Ia and reached its second peak in $`I`$ at $``$20 days after $`B`$ maximum (Maza et al. 1994). For very subluminous SNe Ia this second maximum is completely absent, merging into the phase of the initial decline (e.g., SN 1991bg; Filippenko et al. 1992). The physics detailing the formation of this feature also indicates that its magnitude and timing are sensitive to explosion parameters (e.g., ejecta composition) which determine the peak luminosity (Spyromilio, Pinto, & Eastman 1994). In Figure 1 the relative rest-frame $`I`$-band magnitudes of SN 1999Q ($`z=0.46`$) are compared to a luminosity sequence of nearby SNe Ia. The phase of the observations of SN 1999Q was determined by multicolor light-curve shape (MLCS; Riess et al. 1996; Riess et al. 1998) fits to the $`B`$ and $`V`$ light curves and have an uncertainty ($`1\sigma `$) of less than 2 days. The observation times were also corrected for $`1+z`$ time dilation (Leibundgut et al. 1996; Goldhaber et al. 1997; Riess et al. 1997). Although the precision and sampling of the rest-frame $`I`$-band data are not high, they are sufficient to indicate that this high-$`z`$ SN Ia retains significant luminosity at $``$30 days after $`B`$ maximum, consistent with the phase of the second $`I`$-band peak of typical SNe Ia and inconsistent with either very subluminous or moderately subluminous SNe Ia. Using the $`B`$ and $`V`$ light-curve shapes of SN 1999Q as a luminosity indicator, we find its distance modulus to be $`\mu _0`$=42.67$`\pm 0.22`$ mag, consistent with previous SNe Ia favoring a cosmological constant (Riess et al. 1998). If instead we consider the shape of the $`I`$-band light curve as an independent luminosity indicator, we find that this high-$`z`$ SN Ia (and presumably other high-$`z`$ SNe Ia) is not consistent with being subluminous by 0.5-0.6 mag as needed to indicate a Universe closed by ordinary matter. More precise data will be needed to differentiate between an open and $`\mathrm{\Lambda }`$-dominated Universe solely on the basis of $`I`$-band light curve shapes. ### 3.2 IR Color Excess To measure the $`BI`$ colors of SN 1999Q we used the MLCS fits to the $`B`$ light curve to determine the expected $`B`$ magnitudes at the time of the IR observations. Due to the exquisite HST photometry in rest-frame $`B`$, this process adds little uncertainty to the $`BI`$ magnitudes. The Milky Way (MW) dust maps from Schlegel, Finkbeiner, & Davis (1998) predict a reddening of $`E_{BV}`$=0.021 mag in the direction of SN 1999Q. 
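The colors analyzed in this section rest on the cross-band $`K`$-correction described in §2. Its structure (Kim, Goobar, & Perlmutter 1996) is easy to state in code; the sketch below is schematic only, using flat placeholder spectra and boxcar filters rather than calibrated spectrophotometry and ignoring photon-counting subtleties, but it shows how an observed $`J`$ magnitude maps onto rest-frame Cousins $`I`$, with observation times additionally compressed by $`1+z`$.

```python
import numpy as np

def integ(y, x):                         # simple trapezoid rule
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def cross_band_K(wave, F_rest, Sx, Sy, Z, z):
    """Schematic cross-band K-correction K_xy (rest-frame band x, observed
    band y), structured after Kim, Goobar & Perlmutter (1996), so that
    m_y(observed) = M_x + mu + K_xy  and  m_x(rest) = m_y(observed) - K_xy.
    F_rest is the rest-frame SED, Sx/Sy the filter transmissions, Z the
    zero-point (e.g. Vega) spectrum, all sampled on the common grid `wave`."""
    num = integ(F_rest * Sx, wave)
    F_in_obs_band = np.interp(wave / (1.0 + z), wave, F_rest, left=0.0, right=0.0)
    den = integ(F_in_obs_band * Sy, wave)
    zp = integ(Z * Sy, wave) / integ(Z * Sx, wave)
    return 2.5 * np.log10((1.0 + z) * num / den * zp)

# toy inputs purely to exercise the mechanics (flat SED, boxcar filters)
wave = np.linspace(3000.0, 14000.0, 4000)
F = np.ones_like(wave)
Z = np.ones_like(wave)
S_I = ((wave > 7000) & (wave < 9000)).astype(float)     # stand-in rest-frame I
S_J = ((wave > 10800) & (wave < 13200)).astype(float)   # stand-in observed J
print(cross_band_K(wave, F, S_I, S_J, Z, z=0.46))
# rest-frame phase of an observation:  (t_obs - t_Bmax) / (1 + z)
```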
We subtracted the expected Galactic reddening of the rest-frame $`BI`$ light (observed as $`RJ`$) of SN 1999Q, 0.037 mag, from the measured colors. Any remaining reddening results from extragalactic sources. In Figure 2 the measured $`BI`$ magnitudes of SN 1999Q are compared to a custom $`BI`$ curve predicted from the MLCS fits to the $`B`$ and $`V`$ light-curve shapes (Riess et al. 1996, 1998). The smaller uncertainties shown here result from photon statistics and were determined empirically (Schmidt et al. 1998). A significant, additional source of uncertainty is the intrinsic dispersion of SNe Ia $`BI`$ colors around their custom MLCS model. This intrinsic dispersion is determined empirically by measuring the variance of 30 nearby SNe Ia around their MLCS fits (Riess et al. 1996, 1998) and varies from 0.1 to 0.3 mag depending on the SN Ia age. Although the observed residuals from the model prediction are correlated for time separations of less than 3 days, correlated errors are insignificant for the larger differences in time between the observations of SN 1999Q. The larger uncertainties shown in Figure 2 for the $`BI`$ photometry of SN 1999Q include the intrinsic uncertainties. The measured $`E_{BI}`$ for SN 1999Q is $``$0.09$`\pm 0.10`$ mag. The error includes the systematic uncertainties of the $`K`$-corrections and the zeropoint of the $`J`$-band system, although the dominant sources of error are the photometry noise and the intrinsic dispersion in SN Ia $`BI`$ colors. This value is consistent with no reddening of this high-$`z`$ SN Ia. If Galactic-type dust rather than a cosmological constant were the sole reason that $`z0.5`$ SNe Ia are 30% fainter than expected for an open Universe (i.e., $`\mathrm{\Omega }_M=0.3,\mathrm{\Omega }_\mathrm{\Lambda }=0.0`$), then the $`E_{BI}`$ of SN 1999Q should be 0.25 mag (Savage & Mathis 1979). This alternative to an accelerating Universe (see Totani & Kobayashi 1999) is inconsistent with the data at the 99.9% confidence level (3.4$`\sigma `$). The reddening required for the SNe Ia data to be consistent with a Universe closed by matter is ruled out at the $`>`$99.99% (5.1$`\sigma `$) confidence level. Despite the low precision of this data set, the wavelength range of the $`BI`$ colors results in the ability to rule out extinction by Galactic-type dust from SN 1999Q alone with similar confidence as from the entire set of $`BV`$ color data of Riess et al. (1998) and Perlmutter et al. (1999). The reduced amount of reddening by “gray” dust grains (i.e., $`>`$ 0.1 $`\mu `$m) as proposed by Aguirre (1999a,b) is more difficult to detect. The amount of gray dust needed to supplant the cosmological constant as the cause of the dimming of high-$`z`$ SNe Ia would result in an $`E_{BI}`$=0.17 or 0.14 mag for a composition of graphite or graphite/silicate, respectively (Aguirre 1999a,b). These possibilities are moderately inconsistent with the data at the 99.0% (2.6$`\sigma `$) and 97.7% (2.3$`\sigma `$) confidence levels, respectively. The reddening provided by enough of such dust to change the cosmological forecast to favor a Universe closed by matter is ruled out at the 99.97%(3.7$`\sigma `$) and 99.90%(3.3$`\sigma `$) confidence levels, respectively. The weakest constraint comes from assuming the smallest amount of the grayest type of dust which is consistent at the 68% (1$`\sigma `$) confidence level with an open Universe (i.e., $`A_V`$=0.2 mag). 
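(The significance levels quoted in this section follow from simple Gaussian arithmetic; the short check below, ours rather than the authors', reproduces them to within rounding from the measured color excess, its uncertainty, and the hypothesized reddenings.)

```python
from math import erf, sqrt

def exclusion(measured, sigma, hypothesis):
    """Sigma separation of a hypothesized value from the measurement and the
    corresponding two-sided Gaussian confidence with which it is disfavored."""
    nsig = abs(hypothesis - measured) / sigma
    return nsig, erf(nsig / sqrt(2.0))

measured, sigma = -0.09, 0.10            # E(B-I) of SN 1999Q and its error, from the text
for label, ebi in [("Galactic-type dust (open universe)", 0.25),
                   ("gray dust, graphite", 0.17),
                   ("gray dust, graphite/silicate", 0.14)]:
    nsig, conf = exclusion(measured, sigma, ebi)
    print("%-35s %.1f sigma   %.2f%%" % (label, nsig, 100.0 * conf))
```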
This dust is inconsistent with the data at the 94% (1.9 $`\sigma `$) confidence level (although the true inconsistency of this scenario is derived from the product of the two individual likelihoods, i.e., 98% or 2.3$`\sigma `$). Although these results disfavor the existence of the proposed levels of gray dust, more data are needed to strengthen this important test. Because it is difficult to assess all sources of uncertainty in our model for the SN Ia $`BI`$ color evolution, we also performed a Monte Carlo simulation of the measurement of $`E_{BI}`$ for SN 1999Q. Using all nearby SNe Ia which are not spectroscopically peculiar (see Branch, Fisher, & Nugent 1993) nor photometrically extreme ($`0.9<\mathrm{\Delta }m(B)_{15}<1.6`$; Phillips 1993) and whose $`BI`$ colors were well observed, we generated a standard, unreddened $`BI`$ template curve using individual reddening estimates from Phillips et al. (1999) and Schlegel et al. (1998) for nearby SNe Ia. We then randomly selected five observations from a random member of the sample and perturbed the observations to match the photometric noise in the SN 1999Q observations. From 10,000 such synthetic measurements we generated a distribution of measured $`E_{BI}`$ whose shape should match the probability density function for the single $`E_{BI}`$ measurement of SN 1999Q. Compared to the $`BI`$ template curve, SN 1999Q has an $`E_{BI}`$ = $``$0.12 mag. The distribution of synthetic $`E_{BI}`$ values is asymmetric and implies an uncertainty in the measurement for SN 1999Q of $`+1\sigma =0.11`$ mag and $`1\sigma =0.17`$ mag (including the systematic uncertainties from $`K`$-corrections and the $`J`$-band zeropoint). The results are consistent with no extragalactic reddening and inconsistent with Galactic and gray dust reddening at nearly identical (though marginally higher) confidence levels as the MLCS fits. The strength of this method is that it samples real SN Ia data in the same manner as the observations of SN 1999Q and therefore incorporates the intrinsic and correlated uncertainties in the $`BI`$ colors of SNe Ia. ## 4 Discussion Two teams have independently concluded that the observed faintness of high-$`z`$ SNe Ia indicates that the expansion of the Universe is accelerating and that dark energy dominates the energy density of the Universe (Riess et al. 1998; Perlmutter et al. 1999). However, as a well-known adage reminds us, “extraordinary claims require extraordinary evidence.” Alternative explanations such as evolution in supernova luminosities or dust are no more exotic than a cosmological constant and must be rigorously tested. ### 4.1 Dust A $``$30% opacity of visual light by dust is the best quantified and therefore most readily testable alternative to a cosmological constant (Aguirre 1999a,b; Totani & Kobayashi 1999). Measurements of $`BV`$ colors indicate that this quantity of Galactic-type dust is not obscuring high-$`z`$ SNe Ia (Riess et al. 1998; Perlmutter et al. 1999) and the $`BI`$ observations presented here bolster this evidence. However, observations of neither SNe Ia nor other astrophysical objects previously ruled out a similar opacity by intergalactic gray dust (Aguirre 1999a,b). The observations presented here do disfavor a gray intergalactic medium providing this opacity, but additional data are needed to strengthen these conclusions. 
Indeed, a more precise measurement of $`E_{BI}`$ or $`E_{UI}`$ could constrain either the total optical depth of dust in the intergalactic medium or alternately push the minimum size of such grains into an unphysical domain (Aguirre 1999a,b). It may even be possible to use such measurements to constrain the contribution to the far-IR background by emission from the intergalactic medium (Aguirre 1999a,b). Measurements of gravitational lens systems have also been used as a probe of the high-$`z`$ extinction law and disfavor significant interstellar gray dust (Falco et al. 1999; McLeod 1999-except in molecular clouds). ### 4.2 Evolution To date, our inability to formulate a complete theoretical description of SNe Ia makes it impossible to either predict the degree of expected luminosity evolution between $`z=0`$ and 0.5 or to identify an observation which would conclusively determine whether the luminosity of SNe Ia are evolving (but see Hoeflich, Thielemann & Wheeler 1998). An empirical recourse is to compare all observable properties of nearby and high-$`z`$ SNe Ia with the assumption that if the luminosity of SNe Ia has evolved by $``$30% other altered characteristics of the explosion would be visible as well. The detection of such a change would cast doubt on the reliability of the luminosity distances from high-$`z`$ SNe Ia. A continued failure to measure any difference between SN Ia near and far would increase our confidence (though never prove) that evolution does not contaminate the cosmological measurements from high-$`z`$ SNe Ia. Having clearly stated our approach, it is now appropriate to review the current status of the ongoing efforts to determine if SNe Ia are evolving. Comparisons of high signal-to-noise ratio spectra of nearby and high-$`z`$ SNe Ia have revealed remarkable similarity (Riess et al. 1998; Perlmutter et al. 1998, 1999; Filippenko et al. 2000). Because the spectrum provides a detailed record of the conditions of the supernova in the atmosphere (i.e., temperature, abundances, and ejecta velocities), spectral comparisons are expected to be particularly meaningful probes of evolution. Further, comparisons of time sequences of spectra reveal no apparent differences as the photosphere recedes in mass (Filippenko et al. 2000), indicating that the striking resemblence between distant and nearby SNe Ia is not merely superficial, but endures at deeper layers. However, these comparisons still require the rigor of a quantitative approach to determine whether or not the two samples are statistically consistent. The distributions of light-curve shapes at high and low redshift are statistically consistent (Riess et al. 1998; Perlmutter et al. 1999). However, Drell et al. (1999) have noted that different approaches to quantifying the shape of the light curves may not be statistically consistent, so more attention needs to be focused on these light-curve shape comparisons. The colors of pre-nebular supernovae should provide a useful probe of luminosity evolution, indicating changes in the approximate temperature and hence the thermal output of the explosion. The $`BV`$ colors of nearby and high-$`z`$ SNe Ia were found to be consistent by Perlmutter et al. (1999). The same consistency was found here for the $`BI`$ colors. However, neither this work nor the $`BV`$ color measurements by Riess et al. (1998) can rule out the possibility that high-$`z`$ SNe Ia could be excessively blue (Falco et al. 1999); more data are needed to explore this possibility. 
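For reference, the correspondence between the quoted significances and confidence levels in the dust tests above follows from assuming Gaussian errors and a two-sided criterion; a minimal check:

```python
import math

def two_sided_confidence(n_sigma):
    """Fraction of a Gaussian distribution contained within +/- n_sigma."""
    return math.erf(n_sigma / math.sqrt(2.0))

# Galactic-type dust sufficient to mimic the cosmological constant would give
# E_(B-I) = 0.25 mag, versus the measured -0.09 +/- 0.10 mag (Sec. 3.2).
n_sigma = (0.25 - (-0.09)) / 0.10
print(n_sigma, two_sided_confidence(n_sigma))   # 3.4 sigma, ~0.999 (the quoted 99.9%)

# The gray-dust cases quoted in Sec. 3.2 correspond roughly to:
print(two_sided_confidence(2.6), two_sided_confidence(2.3))   # ~0.99 and ~0.98
```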
The time interval between explosion and maximum light (i.e., the risetime) is expected to be a useful probe of the ejecta opacity and the distribution of <sup>56</sup>Ni . The initial comparison of the risetime of nearby (Riess et al. 1999b) and high-redshift SNe Ia (Goldhaber 1998; Groom 1998) found an apparent inconsistency (Riess et al. 1999c). Further analysis of the SCP high-redshift data by Aldering, Nugent, & Knop (2000), however, concludes that the high-redshift risetime was somewhat larger and far more uncertain than found by Groom (1998) and that the remaining difference in the risetime could be no more than a $``$2.0 $`\sigma `$ chance occurence. The weight of the evidence suggests no significant evolution of the observed SNe Ia, but more observations are needed to allay remaining reasonable doubts. Perhaps the best indication that SNe Ia provide reliable distances at high redshifts comes from SNe Ia in nearby early-type and late-type galaxies. These galaxies span a larger range of metallicity, stellar age, and interstellar environments than is expected to occur for galaxies back to $`z=0.5`$. Yet after correction for the light-curve-shape/luminosity relationship and extinction, no significant Hubble diagram residuals are seen which correlate with host galaxy morphology. This suggests that our distance estimates are insensitive to variations in the supernova progenitor environment (Schmidt et al. 1998). However, the evidence remains circumstantial and does not rule out the possibility that a characteristic of all progenitors of nearby SNe Ia differs for high-$`z`$ SNe Ia. Further observations, especially those in the near-IR bands, can better constrain the potential contamination of the cosmological conclusions from SNe Ia posed by dust and evolution. Further, rest-frame $`I`$ band measurements of nearby SNe Ia show less dispersion in intrinsic luminosity and extinction making this an attractive band for future observations (Hamuy et al. 1996b). Measurements of SNe Ia at $`z>1`$ should even discriminate between the effects of a cosmological constant and those of a monotonically increasing, but unidentified systematic errors (Filippenko & Riess 1999). Continuing studies of high-$`z`$ SNe Ia should ultimately provide the extraordinary evidence required to accept (or refute) the accelerating Universe. We wish to thank Alex Athey and S. Elizabeth Turner for their help in the supernova search at CTIO. We have benefited from helpful discussions with Anthony Aguirre, Stefano Casertano, and Ed Moran. We thank the following for their observations or for attempts to obtain useful data A. Dey, W. Danchi, S. R. Kulkarni, & P. Tuthill. The work at U.C. Berkeley was supported by the Miller Institute for Basic Research in Science, by NSF grant AST-9417213, and by grant GO-7505 from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for AC was provided by the National Science Foundation through grant #GF-1001-95 from AURA, Inc., under NSF cooperative agreement AST-8947990 and AST-9617036, and from Fundación Antorchas Argentina under project A-13313. This work was supported at Harvard University through NSF grants AST-9221648, AST-9528899, and an NSF Graduate Research Fellowship. CS acknowledges the generous support of the Packard Foundation and the Seaver Institute. Based in part on observations collected at the European Southern Observatory, Chile, under proposal 62.H-0324. 
References Aguirre, A. 1999a, ApJ, 512, 19 Aguirre, A. 1999b, astro-ph/990439, accepted ApJ Aldering, G., Nugent, P. E., & Knop, R. 2000, submitted ApJ Branch, D., Fisher, A., & Nugent, P. 1993, AJ, 106, 2383 Clocchiatti, A., et al. 2000, in preparation Drell, P. S., Loredo, T. J., & Wasserman, I. 1999, astro-ph/9905027, accepted ApJ Elias, J. H., Frogel, J. A., Hackwell, J. A., & Persson, S. E. 1981, ApJ, 251, 13 Elias, J. H., 1982, AJ, 87, 1029 Falco, E. et al. 1999, ApJ, 523, 617 Filippenko, A. V. et al. 1992, AJ, 104, 1543 Filippenko, A. V. et al. 2000, in preparation Filippenko, A. V., & Riess, A. G. 1999, in Type Ia Supernovae: Observations and Theory. ed. J. Niemeyer and J. Truran (Cambridge: Cambridge Univ. Press), in press Ford, C. et al. 1993, AJ, 106, 1101 Garnavich, P., et al. 1998, ApJ, 493, 53 Garnavich, P., et al. 1999, IAUC 7097 Goldhaber, G. 1998, B.A.A.S., 193, 4713 Goldhaber, G., et al. 1997, in Thermonuclear Supernovae, eds. P. Ruiz-Lapuente, R. Canal, & J. Isern (Dordrecht: Kluwer), p. 777 Groom, D. E. 1998, B.A.A.S., 193, 11102 Hamuy, M., et al. 1996a, AJ, 112, 2408 Hamuy, M., et al. 1996b, AJ, 112, 2438 Höflich, P., Wheeler, J. C., & Thielemann, F. K. 1998, ApJ, 495, 617 Jha, S., et al. 1999, ApJS, in press Kim, A., Goobar, A., & Perlmutter, S. 1996, PASP, 108, 190 Krisciunas, K., et al. 1987, PASP, 99, 887 Leibundgut, B., et al. 1996, ApJ, 466, L21 Livio, M. 1999, astro-ph/9903264 Matthews, K., & Soifer, B. T. 1994, in Infrared Astronomy with Arrays: the Next Generation, ed. I. S. McLean (Dordrecht: Kluwer), p. 239 Maza, J., Hamuy, M., Phillips, M., Suntzeff, N., & Aviles, R. 1994, ApJ, 424, L107 Perlmutter, S., et al. 1998, Nature, 391, 51 Perlmutter, S., et al. 1999, ApJ, 517, 565 Persson, S. E., Murphy, D. C., Krzeminski, W., Roth, M., & Rieke, M. J., 1998, AJ, 116, 2475 Phillips, M. M. 1993, ApJ, L105, 413 Phillips, M. M. et al. 1999, AJ, in press Richmond, M. W. et al., 1995, AJ, 109, 2121 Riess, A. G., Press, W.H., & Kirshner, R.P. 1996, ApJ, 473, 88 Riess, A. G., et al. 1997, AJ, 114, 722 Riess, A. G., et al. 1998, AJ, 116, 1009 Riess, A. G., et al. 1999b, astro-ph/9907037, accepted AJ Riess, A. G., et al. 1999a, AJ, 117, 707 Riess, A. G., Filippenko, A. V., Li, W., & Schmidt, B. P. 1999c, astro-ph/9907038, accepted AJ Savage, B. D., & Mathis, J. S. 1979, ARAA, 17, 73 Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525 Schmidt, B. P., et al. 1998, ApJ, 507, 46 Spyromilio, J., Pinto, P., & Eastman, R. 1994, MNRAS, 266, 17 Suntzeff, N. B. 1996, in Supernovae and Supernovae Remnants, ed. R. McCray & Z. Wang (Cambridge: Cambridge Univ. Press), p. 41 Totani, T., & Kobahashi, C. astro-ph/9910038, accepted ApJ Umeda, H. et al. 1999, astro-ph/9906192 Wheeler, J. C., Hoeflich, P., Harkness, R. P., & Spyromilio, J., 1998, ApJ, 496, 908 FIGURE CAPTIONS: Fig 1.-Relative rest-frame $`I`$-band light curve of a high-redshift SN Ia, SN 1999Q ($`z=0.46`$), and a sequence of 3 nearby SNe Ia with different light-curve shapes and peak luminosities. The light curve of SN 1999Q is consistent with that of typical nearby SNe Ia which reach their second maxima at $``$30 days after $`B`$ maximum (e.g., SN 1995D; diamonds, Riess et al. 1999a). The data are inconsistent with SNe Ia which are subluminous at visual peak by $``$0.5 mag and reach the second maximum at $``$20 days after $`B`$ maximum (e.g., SN 1992bo; asterisks, Maza et al. 1994). 
SN 1999Q bears no resemblance to the rapid decline (without a second maximum) of very subluminous SNe Ia (e.g., SN 1991bg; circles, Filippenko et al. 1992b). Fig 2.-The color evolution, $`BI`$, and the color excess, $`E_{BI}`$, of a high-redshift SN Ia, SN 1999Q, compared to the custom MLCS template curve with no dust and with enough dust (of either Galactic-type or grayer) to nullify the cosmological constant. The smaller error bars are from photometry noise; the larger error bars include all sources of uncertainty such as the intrinsic dispersion of SN Ia $`BI`$ color, $`K`$-corrections, and photometry zeropoints. The data for SN 1999Q are consistent with no reddening by dust, moderately inconsistent with $`A_V`$=0.3 mag of gray dust (i.e., graphite dust with minimum size $`>0.1\mu `$m; Aguirre 1999a,b) and $`A_V`$=0.3 mag of Galactic-type dust (Savage & Mathis 1979).
# The Discovery of a Second Field Methane Brown Dwarf from Sloan Digital Sky Survey Commissioning Data1footnote 11footnote 1Based on observations obtained with the Sloan Digital Sky Survey and the Apache Point Observatory 3.5-meter telescope, which are owned and operated by the Astrophysical Research Consortium, and with the United Kingdom Infrared Telescope. ## 1 INTRODUCTION Over the last five years, the study of brown dwarfs has evolved from a theoretical notion to a long-awaited discovery to the classification and modeling of a rapidly increasing known population. Dozens of candidate and bona fide brown dwarfs have now been identified using a variety of techniques, including coronagraphic imaging of nearby stars (Nakajima et al. 1994), spectroscopic tests for primordial lithium (Basri, Marcy, & Graham 1996; Rebolo et al. 1996), searches of young open clusters (Hambly 1998, and references therein), optical and near-infrared sky surveys (Tinney et al. 1998; Kirkpatrick et al. 1999; Strauss et al. 1999; Burgasser et al. 1999), and deep-field studies (Cuby et al. 1999). Until recently, the coolest known brown dwarf was Gliese 229B, a companion to a nearby M1 dwarf (Nakajima et al. 1995). The spectrum of Gliese 229B was singularly remarkable for its exhibition of $`H`$\- and $`K`$-band absorption features attributable to methane (Oppenheimer et al. 1998; Geballe et al. 1996). Under equilibrium conditions, methane (CH<sub>4</sub>) becomes the dominant carbon-bearing molecule for $`T_{\mathrm{eff}}<1200`$ K (Fegley & Lodders 1996; Burrows et al. 1997). Models of Gliese 229B’s infrared spectrum indicate an effective temperature of 900–1000 K for the brown dwarf (Allard et al. 1996; Marley et al. 1996; Tsuji et al. 1996; Leggett et al. 1999). Strauss et al. (1999) recently reported the discovery of a Gliese 229B-like brown dwarf from spectroscopic observations of a candidate identified from commissioning data of the Sloan Digital Sky Survey (SDSS). The optical and near-infrared spectrum of this object, SDSSp J162414.37+002915.6 (hereafter SDSS 1624+00), exhibits strong absorption by H<sub>2</sub>O and CH<sub>4</sub> and closely mimics the spectrum of Gliese 229B. Unlike Gliese 229B, which is a companion to a nearby star, SDSS 1624+00 is isolated in the field. Assuming that SDSS 1624+00 has an effective temperature and luminosity identical to those of Gliese 229B, Strauss et al. estimated a distance of 10 pc to SDSS 1624+00. Within three weeks of the discovery of SDSS 1624+00, a second field methane brown dwarf, SDSSp J134646.45–003150.4 (hereafter SDSS 1346–00), was discovered from the same SDSS commissioning data. (SDSS uses J2000 coordinates, and “p” indicates that the astrometric solution is preliminary.) Shortly thereafter, the discoveries of five more methane brown dwarfs were announced – four from the Two-Micron All Sky Survey (2MASS; Burgasser et al. 1999) and one from the New Technology Telescope (NTT) Deep Field project (Cuby et al. 1999). In this Letter, we report the discovery of SDSS 1346–00, present the first medium-resolution $`J`$-band spectrum of a methane brown dwarf, and comment on the space density of methane brown dwarfs in the solar neighborhood. ## 2 OBSERVATIONS The Sloan Digital Sky Survey project is described in detail by Gunn & Weinberg (1995<sup>2</sup><sup>2</sup>2see http://www.astro.princeton.edu/PBOOK/). Here, we briefly outline the characteristics of SDSS that are relevant to this work. SDSS images are obtained with a very large format CCD camera (Gunn et al. 
(1998)) attached to a dedicated 3 field of view 2.5 m telescope<sup>3</sup><sup>3</sup>3see http://www.astro.princeton.edu/PBOOK/telescop/telescop.htm at the Apache Point Observatory, New Mexico. The sky is imaged in drift-scan mode through five broa-dband filters spanning 0.33–1.05 $`\mu `$m: $`u^{}`$, $`g^{}`$, $`r^{}`$, $`i^{}`$, and $`z^{}`$, with central wavelengths/effective widths of 3540Å/599Å, 4770Å/1379Å, 6222Å/1382Å, 7632Å/1535Å and 9049Å/1370Å, respectively (Fukugita et al. (1996)). The exposure time in each band is 54.1 s. The photometric calibration is obtained through contemporaneous observations of a large set of standard stars with an auxiliary $`20^{\prime \prime }`$ telescope<sup>4</sup><sup>4</sup>4see http://www.astro.princeton.edu/PBOOK/photcal/photcal.htm at the same site. The data is processed through an automated pipeline at the Fermi National Accelerator Laboratory, where the software performs photometric and astrometric calibrations, and finds and measures properties of all objects in the images<sup>5</sup><sup>5</sup>5see http://www.astro.princeton.edu/PBOOK/datasys/datasys.htm. To date, a number of 2$`\stackrel{}{\mathrm{.}}`$5-wide strips centered on the Celestial Equator have been imaged as part of the SDSS commissioning program. The region spanning right ascensions $`12^h`$ and $`16^h30^m`$ has been imaged twice, first in June 1998 and again in March 1999. Parts of the region not imaged in June 1998 were imaged twice in March 1999. Because the network of primary standard stars was not fully established during commissioning, the absolute photometric calibration of these images remains uncertain at the 5% level. Cross-correlating the data from the twice-imaged region removes any uncertainty regarding the identification of faint and very red objects, especially those detected in only one bandpass. Using this technique, we identified all point sources with $`i^{}z^{}>2.5`$, including sources detected through $`z^{}`$ only. (See §2.2 for explanation of superscripts.) After inspecting the images of each source, we found that the two reddest sources, SDSS 1346–00 and SDSS 1624+00, were also the most credible brown dwarf candidates. SDSS 1346–00 was detected only in the $`z^{}`$ images recorded on UT 1999 March 20 and 22. We note that the astrometric positions of SDSS 1346–00 in the two runs are consistent with one another. Figure 1 shows the finding chart for SDSS 1346–00. Table 1 lists the SDSS magnitudes and uncertainties for SDSS 1346–00. We indicate the preliminary photometric measurements with asterisks, but retain the primes for the filters themselves. The SDSS magnitudes are in the AB system (Fukugita et al. (1996)) and are given as asinh values (Lupton, Gunn, & Szalay (1999)). The $`u^{}`$, $`g^{}`$, $`r^{}`$, and $`i^{}`$ values all represent non-detections – 5$`\sigma `$ detections of a point source with 1<sup>′′</sup> FWHM images correspond to $`u^{}=22.3`$, $`g^{}=23.3`$, $`r^{}=23.1`$, $`i^{}=22.5`$, and $`z^{}=20.8`$. The two $`z^{}`$ measurements for SDSS 1346–00 agree to within 1 $`\sigma `$. Its $`i^{}`$$`z^{}4`$ is consistent with the $`i^{}`$$`z^{}=3.77\pm 0.21`$ measured for SDSS 1624+00 (Strauss et al. 1999). Note that M and L dwarfs are not expected to be redder than $`i^{}`$$`z^{}2.5`$ (Fan et al. (2000)). Near-infrared photometry of SDSS 1346–00 was obtained on UT 1999 May 23 using the IRCAM $`256\times 256`$ InSb array and the United Kingdom Infrared Telescope (UKIRT). 
The plate scale was 0$`\stackrel{}{\mathrm{.}}`$28 pixel<sup>-1</sup>, and the exposure times at $`J`$, $`H`$, and $`K`$ were 5 min, 14 min, and 18 min, respectively. The conditions were photometric, and the seeing was 0$`\stackrel{}{\mathrm{.}}`$8. The object was imaged using the standard dither technique, and the images were calibrated using observations of UKIRT faint standards (Casali & Hawarden (1992)). The UKIRT magnitudes of SDSS 1346–00 are listed in Table 1. (Vega has magnitude zero in all UKIRT bandpasses.) The $`JK`$ and $`HK`$ colors of SDSS 1346–00 are redder by $`0.1`$ mag than those of Gliese 229B (Leggett et al. (1999)) and SDSS 1624+00 (Strauss et al. 1999). The $`z^{}J`$ color is about the same for two SDSS methane dwarfs. SDSS 1346–00 is fainter than Gliese 229B and SDSS 1624+00 by $`\mathrm{\Delta }J=1.5`$ and $`\mathrm{\Delta }J=0.3`$, respectively. An optical spectrum of SDSS 1346–00 was obtained on UT 1999 May 10 using the Double Imaging Spectrograph (DIS) on the Apache Point 3.5 m telescope. The spectra were taken using the low resolution gratings, providing a spectral coverage of 0.4–1.05 $`\mu `$m with dispersions of 6.2 Å pixel<sup>-1</sup> on the blue side and 7.1 Å pixel<sup>-1</sup> on the red side, and a 2$`\stackrel{}{\mathrm{.}}`$0 slit. The exposure time was 30 minutes. The conditions were non-photometric, and the seeing was $`1`$$`\stackrel{}{\mathrm{.}}`$5. The initial flux calibration and removal of atmospheric absorption bands were achieved through observations of the spectrophotometric standard BD +26$`^{}`$2606 (F subdwarf, Oke & Gunn (1983)) over several nights. The final flux calibration, however, was obtained by matching the optical spectrum with the near-infrared spectrum in the overlapping region near 1 $`\mu `$m (see below). The calibrated optical spectrum is included in Fig. 2. Although the spectrum is significantly noisier than that of SDSS 1624+00 (Strauss et al. 1999), it shows similar characteristics. The spectrum rises steeply toward the near-infrared, and its shape matches the SDSS photometry well. A distinct H<sub>2</sub>O absorption band centered at $`0.94\mu `$m remains after subtraction of the telluric absorption feature at the same wavelength. No flux was detected shortward of $`0.8\mu `$m. Spectra covering the $`J`$, $`H`$, and $`K`$ bands were obtained on the nights of UT 1999 May 23 and UT 1999 June 2 with the facility grating spectrometer CGS4 (Mountain et al. (1990)) at UKIRT. The instument was configured with a 300 mm camera, a 40 l mm<sup>-1</sup> grating, and a 256$`\times `$256 InSb array. The 1$`\stackrel{}{\mathrm{.}}`$2 slit projected onto two detector pixels, providing a spectral resolving power $`R`$ in the range 300 to 500. The $`JHK`$ spectral range was spanned by five overlapping spectra with the following central wavelengths and total exposure times: 0.95 $`\mu `$m (48 min), 1.1 $`\mu `$m (56 min), 1.4 $`\mu `$m (33 min), 1.8 $`\mu `$m (48 min), and 2.2 $`\mu `$m (21 min). The individual spectra were obtained by nodding the object 7$`\stackrel{}{\mathrm{.}}`$32 (12 detector rows) along the slit. The final co-added spectrum has a resolution of 0.0025 $`\mu `$m across the $`J`$ and $`H`$ bands, and 0.0050 $`\mu `$m in the $`K`$ band. Spectra of Kr, Ar, and Xe lamps were used for wavelength calibration, and is accurate to $`0.001\mu `$m. 
Spectra of bright F dwarfs were obtained repeatedly throughout the observations for initial flux calibration (after removal of prominent H absorption features) and subtraction of telluric absorption lines. The individual spectra were then combined and scaled to match the near-infrared photometry. The resultant spectrum is shown in Fig. 2. On UT 1999 June 7 and June 10, we obtained two higher resolution (150 l mm<sup>-1</sup> grating, $`\mathrm{R}3000`$) CGS4 spectra of SDSS 1346–00 over the wavelength region $`1.235<\lambda <1.290\mu `$m. The observing technique was similar to the one described above, with a total exposure time of 52 min. This wavelength region spans the peak of the emergent energy spectrum of the brown dwarf. The inset in Fig. 2 shows a smoothed (by 1.5 pixel) average of the two spectra. The resolution of the smoothed spectrum is $`0.0005\mu `$m, which is the highest yet reported for a cool brown dwarf. The individual spectra, including the many narrow lines at the red end of the spectrum, matched well before being combined. The error bars for the resultant spectum are $``$ 7% everywhere, but increase to about twice that value near $`1.27\mu `$m, where telluric lines of O 1 are strong and variable. The two broad absorption features at $`1.243\mu `$m and $`1.252\mu `$m are due to K 1 (Kirkpatrick et al. 1993). ## 3 DISCUSSION The 0.8–2.5 µm spectrum of SDSS 1346–00 looks astonishingly like that of Gliese 229B, as recalibrated by Leggett et al. (1999), and that of SDSS 1624+00 (Strauss et al. 1999). Strong absorption bands of H<sub>2</sub>O and CH<sub>4</sub> dominate the spectrum, and the absorption lines of H<sub>2</sub>O at 2.0–2.1 $`\mu `$m discussed by Geballe et al. (1996) are also apparent. Note that while the zero-point of Gliese 229B’s spectrum is slightly uncertain due to a possible miscorrection for scattered light from Gliese 229A, no such uncertainty exists for our spectrum. Flux is not detected at the bottom of the H<sub>2</sub>O band at 1.36–1.40 $`\mu `$m, but is detected in the deepest parts of the H<sub>2</sub>O bands at 1.15 $`\mu `$m and 1.8–1.9$`\mu `$m and the CH<sub>4</sub> band at 2.2–2.5 $`\mu `$m. The only significant differences between the spectrum of SDSS 1346–00 and those of Gliese 229B and SDSS 1624+00 are SDSS 1346–00’s somewhat stronger absorption lines of K 1 at 1.2436 $`\mu `$m and 1.2536 $`\mu `$m and the slight excess of flux around 1.7 $`\mu `$m and 2.1 µm. The latter excess is also reflected in the slightly redder $`J`$$`K`$ and $`H`$$`K`$ colors of SDSS 1346–00 compared with those of SDSS 1624+00 and Gliese 229B. Figure 3 illustrates the differences between the $`K`$-band spectra of these three methane brown dwarfs. Burgasser et al. (1999) have also noted differences in the $`H`$-to-$`K`$ flux ratios of the 2MASS “T” dwarfs and Gliese 229B, and they use these ratios to establish a preliminary spectral sequence for those five brown dwarfs. Following their example, we infer that SDSS 1346–00 is somewhat warmer than SDSS 1624+00 and Gliese 229B. However, accurate modelling of the spectra is required to confirm and calibrate this assessment. The widths of the K 1 absorption doublet (EW $``$ 6 and 9 Å, FWHM = $`820\pm 50`$ km s<sup>-1</sup>) correspond to a rotation rate that greatly exceeds the escape velocity from even the most massive brown dwarf. Thus, rotational broadening alone cannot account for the width of the K 1 lines. 
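For orientation, the quoted FWHM corresponds to a substantial wavelength width at the position of the doublet; this is a simple $`\lambda v/c`$ conversion using the values given above:

```python
# Wavelength width implied by the measured K I FWHM of 820 km/s.
C_KM_S = 2.998e5
FWHM_KMS = 820.0
for lam_angstrom in (12436.0, 12536.0):            # doublet wavelengths from the text
    print(lam_angstrom, round(lam_angstrom * FWHM_KMS / C_KM_S, 1))   # ~34 Angstrom each
```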
As the dust-free photospheres of cool brown dwarfs are transparent in this wavelength range to depths with brightness temperatures of $`1700`$ K (Matthews et al. 1996) and pressures of $`30`$ bar (Marley et al. 1996), the observed widths of the K 1 doublet are probably caused by pressure broadening. Accurate modelling of this higher-resolution spectrum should significantly constrain the gravity, temperature, and pressure profiles of cool brown dwarfs. Although the S/N of the optical spectrum of SDSS 1346–00 is insufficient for a rigorous assessment, the overall shape of the continuum may be linked to the pressure-broadened K 1 lines. The optical spectrum of SDSS 1346–00 is remarkably similar to those of Gliese 229B (Schultz et al. 1998; Oppenheimer et al. 1998) and SDSS 1624+00 (Strauss et al. 1999). Gliese 229B’s optical flux is lower by 1–2 dex than the fluxes predicted by the models of dust-free photospheres that reproduce well its near-IR spectrum (Schultz et al. 1998; Golimowski et al. 1998). Possible explanations of this large discrepancy include absorption by aerosols produced photochemically by radiation from Gliese 229A (Griffith, Yelle, & Marley 1998), a warm dust layer deep in the photosphere (Tsuji, Ohnaka, & Aoki 1999), and extreme pressure-broadening of the K 1 doublet at $`0.76\mu `$m (Tsuji, Ohnaka, & Aoki 1999; Burrows, Marley, & Sharp 1999). The similarity between the optical spectra of Gliese 229B and the SDSS field methane dwarfs discourages the notion that photochemically induced aerosols are the absorbing agent. Absorption by warm dust and extreme pressure broadening of K 1 both remain viable and observationally testable hypotheses, however. Given the similarity of the colors and spectra of SDSS 1346–00, SDSS 1624+00, and Gliese 229B, it is reasonable to assume that these three brown dwarfs have similar luminosities. Using this argument and the measured distance to Gliese 229B of 5.8 pc (ESA 1997), Strauss et al. (1999) estimated a distance to SDSS 1624+00 of 10 pc. The apparent magnitude differences between SDSS 1346–00 and SDSS 1624+00 are $`\mathrm{\Delta }m`$ = +0.26 ($`z^{}`$), +0.29 ($`J`$), +0.28 ($`H`$), and +0.14 ($`K`$). The average difference (excluding $`K`$) of +0.3 mag puts SDSS 1346–00 at a distance of 11.5 pc. This estimate must be treated with caution, however, since SDSS 1346–00’s larger flux around 2.1 µm may reflect a slightly higher temperature (and hence luminosity, since the models indicate that the radii of these objects are essentially independent of temperature or mass) than those of Gliese 229B and SDSS 1624+00. The SDSS commissioning data obtained to date cover approximately 400 deg<sup>2</sup>, or $`\sim `$1%, of the sky. To boost our confidence in the one-band detections at faint magnitudes, we have searched only the twice-imaged area of the sky for objects with $`z^{}\le 19.8`$ ($`\sim `$12$`\sigma `$ detection) and $`i^{}-z^{}>2.5`$. This strategy restricts the searched area of the survey to 130 deg<sup>2</sup>. The two reddest candidates in this restricted area, SDSS 1346–00 and SDSS 1624+00, have been spectroscopically identified as methane brown dwarfs. Recognizing the danger of statistical inferences based on a sample of two objects, we estimate 635 such objects on the sky (of which $`\sim `$ 1/4 will be discovered by SDSS because of its sky coverage) that satisfy our photometric-search criteria. This implies a surface density of 0.015 deg<sup>-2</sup>. 
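The distance and surface-density figures above follow from simple scalings; a short numerical check, using only the numbers quoted in the text:

```python
import math

# Distance from the ~+0.3 mag offset relative to SDSS 1624+00 (assumed at 10 pc),
# and the whole-sky extrapolation of two methane dwarfs in the 130 deg^2 search area.
d_ref_pc, delta_m = 10.0, 0.3
distance_pc = d_ref_pc * 10.0 ** (delta_m / 5.0)        # ~11.5 pc

ALL_SKY_DEG2 = 4.0 * math.pi * (180.0 / math.pi) ** 2   # ~41,253 deg^2
n_found, area_deg2 = 2, 130.0
n_all_sky = n_found * ALL_SKY_DEG2 / area_deg2          # ~635 objects
surface_density = n_found / area_deg2                   # ~0.015 per deg^2

print(round(distance_pc, 1), round(n_all_sky), round(surface_density, 3))
```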
Using our detection limit of $`z^{}=19.8`$, our search area of 130 deg<sup>2</sup>, and Gliese 229B as a standard candle, we estimate our search volume to be $`\sim `$40 pc<sup>3</sup> and the space density of Gliese 229B-like brown dwarfs to be 0.05 pc<sup>-3</sup>. Our surface-density estimate is $`\sim `$3 times larger than that derived by Strauss et al. (1999). This discrepancy is due to our reduction by 68% of the search area and the doubling of the number of detected methane dwarfs. Based on the four objects identified from 1784 deg<sup>2</sup> of 2MASS, Burgasser et al. (1999) estimate $`\sim `$90 “T” dwarfs on the sky brighter than a $`10\sigma `$ detection limit of $`J=16`$. This number corresponds to a surface density of 0.0022 deg<sup>-2</sup> and a space density of $`\sim `$0.01 pc<sup>-3</sup>. For brown dwarfs with colors like those of SDSS 1346–00, the 2MASS detection limit is equivalent to $`z^{}\sim 19.5`$, i.e., slightly brighter than our selection criterion of $`z^{}<19.8`$. Despite the nearly equal sensitivity of 2MASS and SDSS to such brown dwarfs, our estimates of surface and space density are larger than the 2MASS estimates by factors of $`\sim `$7 and $`\sim `$5, respectively. Cuby et al. (1999) infer a space density of 1 pc<sup>-3</sup> from one confirmed methane brown dwarf in the 2$`\stackrel{}{\mathrm{.}}`$3 $`\times `$ 2$`\stackrel{}{\mathrm{.}}`$3 NTT Deep Field. This value is $`\sim `$20 times higher than our estimate for the same type of object. The large dispersion in the SDSS, 2MASS, and NTT estimates is almost certainly a result of small-sample statistics. We look forward to the imminent routine operation of SDSS as a means of improving these very preliminary statistics.
The Sloan Digital Sky Survey (SDSS) is a joint project of the University of Chicago, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, the Max-Planck-Institute for Astronomy, Princeton University, the United States Naval Observatory, and the University of Washington. Apache Point Observatory, site of the SDSS, is operated by the Astrophysical Research Consortium. Funding for the project has been provided by the Alfred P. Sloan Foundation, the SDSS member institutions, the National Aeronautics and Space Administration, the National Science Foundation, the U.S. Department of Energy, and the Ministry of Education of Japan. The SDSS Web site is http://www.sdss.org/. We also thank Karen Gloria for her expert assistance at the Apache Point Observatory. UKIRT is operated by the Joint Astronomy Centre on behalf of the U.K. Particle Physics and Astronomy Research Council. We are grateful to the staff of UKIRT for its support, to A. J. Adamson for use of UKIRT Director’s time and to Tom Kerr for obtaining the medium resolution $`J`$-band spectrum.

Table 1. Photometry of SDSS J134646.45–003150.4

| $`u^{}`$ | $`g^{}`$ | $`r^{}`$ | $`i^{}`$ | $`z^{}`$ | $`J`$ | $`H`$ | $`K`$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $`24.11\pm 0.38`$ | $`24.20\pm 0.40`$ | $`24.54\pm 0.62`$ | $`23.26\pm 0.65`$ | $`19.29\pm 0.06`$ | $`15.82\pm 0.05`$ | $`15.85\pm 0.05`$ | $`15.84\pm 0.07`$ |
| $`24.08\pm 0.39`$ | $`24.27\pm 0.43`$ | $`24.07\pm 0.42`$ | $`23.58\pm 0.75`$ | $`19.23\pm 0.08`$ |  |  |  |
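As a quick consistency check, the broadband colors discussed in §2 and the space density quoted above follow directly from the Table 1 photometry and the $`\sim `$40 pc<sup>3</sup> search volume:

```python
# Colors of SDSS 1346-00 from Table 1 (first-epoch SDSS values and UKIRT J, H, K),
# and the implied space density of Gliese 229B-like brown dwarfs.
i1, z1, j, h, k = 23.26, 19.29, 15.82, 15.85, 15.84

print(round(i1 - z1, 2))            # i* - z* ~ 4
print(round(z1 - j, 2))             # z* - J ~ 3.5
print(round(j - k, 2), round(h - k, 2))   # J - K ~ -0.02, H - K ~ +0.01

n_found, search_volume_pc3 = 2, 40.0
print(n_found / search_volume_pc3)  # ~0.05 methane dwarfs per pc^3
```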
# Large negative velocity gradients in Burgers turbulence ## I Introduction We consider the random forced Burgers equation $$u_t+uu_x\nu u_{xx}=\varphi $$ (1) that describes weak 1D acoustic perturbations in the reference frame moving with the sound velocity . The external force $`\varphi `$ in this frame is generally short-correlated in time, so let us assume that $$\varphi (x_1,t_1)\varphi (x_2,t_2)=\delta (t_1t_2)\chi (x_1x_2).$$ (2) Then the statistics of $`\varphi `$ can be thought Gaussian and therefore is completely characterized by (2). We are interested in turbulence with a large value of Reynolds number $`\mathrm{Re}=(\chi (0)L^4)^{1/3}/\nu `$, where $`L`$ is the characteristic scale of the stirring force correlator $`\chi (x)`$. This problem was intensively studied during the last years . The main feature of Burgers turbulence is the formation of shock waves with large negative velocity gradient inside and small viscous width of the front. The positive velocity gradients are decreased by the dynamics of Burgers equation due to self-advection of velocity. On the contrary the increasing of negative gradients could be stopped only by viscosity. The motion of shock waves leads to a strong intermittency, the PDF of velocity gradients $`𝒫(u_x)`$ is strongly non-Gaussian. The one way to describe the intermittency is to study rare events with large fluctuations of velocity, that give the main contribution to the high momenta $`u_x^n`$ or to the PDF tails. The right tail (positive large $`u_x`$) of PDF $`\mathrm{ln}𝒫(u_x)u_x^3`$ was first found by Feigel’man for the problem of charge density wave in an impurity potential. Later it was recovered using operator product expansion (see also ), instanton calculus , minimizers approach and mapping closure . The left tail in the inviscid limit seems to be algebraic, probably $`𝒫(u_x)|u_x|^{7/2}`$ (see also ). Due to viscosity the very far left tail is stretched exponential: $`\mathrm{ln}𝒫(u_x)\nu ^3|u_x/\nu |^\beta `$. The large negative gradients exist practically only inside the shock waves. The maximal value of gradient is proportional to the square of the velocity jump on the shock wave: $`|u_x|_{\mathrm{max}}=(\mathrm{\Delta }u)^2/8\nu `$. Then roughly the tail of the shock wave amplitude PDF has the form $`\mathrm{ln}𝒫_{\mathrm{shock}}(\mathrm{\Delta }u)\nu ^{32\beta }(\mathrm{\Delta }u)^{2\beta }`$. The analysis of the instanton structure predicts the value $`\beta =3/2`$ . This prediction is consistent with the assumption, that the tails of $`𝒫_{\mathrm{shock}}(\mathrm{\Delta }u)`$ should not depend on the viscousity $`\nu `$. We are interested in the statistics of large values of gradients $`u_xu_{\mathrm{rms}}/L(\chi (0)/L^2)^{1/3}`$. The velocity field configurations $`u(x,t)`$ that make a contribution to the probability $`𝒫(a)`$ of the equality $`u_x(0,0)=a`$ have the gradient greater or equal to $`a`$ somewhere. The probability $`𝒫(a)`$ decays very fast while $`a`$ increases, i.e. the contribution of events with gradient greater than $`a`$ somewhere is highly suppressed. Then one believes that only some specific field configurations $`u(x,t,a)`$ (“optimal fluctuations” or instantons) make contribution to $`𝒫(a)`$ at large $`a>(u_{\mathrm{rms}}/L)\mathrm{Re}`$. Under this assumption to calculate $`𝒫(a)`$ one should find this optimal field configuration $`u(x,t,a)`$ and estimate the probability of its realization. All instantons of this type are posed at the far tail of the statistical weight of averaging $`\mu [\varphi (x,t)]`$. 
Indeed, to produce large fluctuation of $`u`$ the stirring force $`\varphi `$ also should be large, and the probability of such fluctuation $`\varphi `$ is low. The weight $`\mu [\varphi ]`$ may not contain a large parameter, but it should have fast tails, e.g. exponential ones. Then the concurrence between statistical weight and the value of calculated quantity makes the contributing realizations of $`\varphi (x,t)`$ rather determined. This approach was introduced by Lifshitz . Later it was applyed to determine high order correlation functions in field theory and in the systems of hydrodynamic type: simultaneous (see, e.g. ) and non-simultaneous ones. The paper is organized as follows. In Sec. II we derive the equations for the instanton. Sec. III is devoted to the detailed description of our scheme of numerical calculations. In Sec. IV we discuss the numerical results and describe the behavior of the solution of instanton equations at large times. ## II Saddle-point approximation The velocity gradients PDF $`𝒫(a)`$ can be written as the path integral $`𝒫(a)=\delta (u_x(0,0)a)_\varphi `$ (3) $`={\displaystyle 𝒟u𝒟p\underset{i\mathrm{}}{\overset{i\mathrm{}}{}}𝑑\mathrm{exp}\left(𝒮+4\nu ^2(u_x(0,0)a)\right)},`$ (4) where the effective action $`𝒮`$ has the form $`𝒮=`$ $`{\displaystyle \frac{1}{2}}{\displaystyle \underset{\mathrm{}}{\overset{0}{}}}𝑑t{\displaystyle 𝑑x_1𝑑x_2p(x_1,t)\chi (x_1x_2)p(x_2,t)}`$ (6) $`i{\displaystyle \underset{\mathrm{}}{\overset{0}{}}}𝑑t{\displaystyle 𝑑xp\left(u_t+uu_x\nu u_{xx}\right)}.`$ The integration over $``$ gives rise to $`\delta (u_x(0,0)a)`$, and the factor $`4\nu ^2`$ was chosen for our convenience. Note that if retarded regularization of the path integral (4) is used then $`𝒟u𝒟p\mathrm{exp}(𝒮)=1`$ and we have no normalizing $`u`$-dependent denominators in (4). One can find some analogies between the appearance of the second field $`p`$ and technique that was developed by Keldysh for nonequilibrium dynamics description. We are interested in the tails of PDF $`𝒫(a)`$, i.e. the parameter $`a`$ in the integral (4) is large. The asymptotics of $`𝒫(a)`$ at large $`|a|(\chi (0)L)^{2/3}/\nu `$ is determined by the saddle-point configuration of fields $`u(x,t)`$, $`p(x,t)`$ (and also parameter $``$), near which the variation of the integrand is equal to zero . The saddle-point configuration (sometimes called classical trajectory or instanton) is governed by the following equations $`u_t+uu_x\nu u_{xx}=i\chi p,`$ (7) $`p_t+up_x+\nu p_{xx}=4i\nu ^2\delta (t)\delta ^{}(x).`$ (8) where $`\chi p`$ is the convolution $$(\chi p)(x)=𝑑x^{}\chi (xx^{})p(x^{}).$$ (9) The solution should satisfy boundary conditions $`\underset{t\mathrm{}}{lim}u(x,t)=0,\underset{t+0}{lim}p(x,t)=0,`$ (10) $`\underset{|x|\mathrm{}}{lim}u(x,t)=0,\underset{|x|\mathrm{}}{lim}p(x,t)=0.`$ (11) The value of $``$ is tuned in such a way that the condition $`u_x(0,0)=a`$ holds. The quantity $``$ is a Lagrange multiplier for finding the extremum of $`𝒮`$ with the condition $`u_x(0,0)=a`$. The equation for $`p`$ should be solved moving back in time because of the signs at $`p_t`$ and $`p_{xx}`$ in the instanton equation (8). The convolution $`i\chi p`$ is the optimal configuration of external force $`\varphi `$ that produces large negative gradient. In what follows we will measure the length in $`L`$ units, i.e. we set $`L=1`$. 
Rescaling the time $`t`$ and fields $`u`$, $`p`$ one can exclude the parameter $`\nu `$ from the instanton equations: $`t=T/2\nu ,u=2\nu U,p=4i\nu ^2P,a=2\nu A,`$ (12) $`U_T+UU_x\frac{1}{2}U_{xx}={\displaystyle 𝑑x^{}\chi (xx^{})P(x^{})},`$ (13) $`P_T+UP_x+\frac{1}{2}P_{xx}=\delta (T)\delta ^{}(x),`$ (14) at $`T=0`$ one has $`U_x(0,0)=A`$. The only parameter in the instanton equations is $`A=a/2\nu `$. Note that the steady-state kink solution of Burgers equation with the negative gradient $`a`$ is $$u=\sqrt{2\nu |a|}\mathrm{tanh}\left(\sqrt{|a|/2\nu }x\right).$$ (15) Thus the physical meaning of $`|A|`$ is the square of the ratio of pumping scale $`L=1`$ and the kink width $`w_{\mathrm{kink}}=1/\sqrt{|A|}`$. The effective action $`𝒮_{\mathrm{extr}}`$ at the instanton that gives the right exponent: $$\mathrm{ln}𝒫(a)𝒮_{\mathrm{extr}}(a),$$ (16) is equal to $`𝒮_{\mathrm{extr}}={\displaystyle \frac{1}{2}}{\displaystyle \underset{\mathrm{}}{\overset{0}{}}}𝑑t{\displaystyle 𝑑x_1𝑑x_2p(x_1,t)\chi (x_1x_2)p(x_2,t)}`$ (17) $`=4\nu ^3{\displaystyle \underset{\mathrm{}}{\overset{0}{}}}𝑑T{\displaystyle 𝑑x_1𝑑x_2P(x_1,T)\chi (x_1x_2)P(x_2,T)}.`$ (18) The freedom of rescaling the fields $`u`$, $`p`$ and the time $`t`$ with appropriate change of $`\nu `$ gives us the following relation: $$𝒮_{\mathrm{extr}}(a)=8\nu ^3S(a/2\nu )=(2\nu )^3S(A),$$ (19) with the function $`S(A)`$ to be determined. One can prove by straitforward calculation the following relation between functions $`(A)`$ and $`S(A)`$: $$(A)=\frac{dS(A)}{dA}.$$ (20) The relations of such sort are well-known in classical mechanics; here $`A`$ and $``$ are conjugate variables, and saddle-point configuration is the trajectory of extremal action. The instanton equations (13,14) are Hamiltonian: $`U_T(x,T)={\displaystyle \frac{\delta }{\delta P(x,T)}},P_T(x,T)={\displaystyle \frac{\delta }{\delta U(x,T)}},`$ (21) $`={\displaystyle 𝑑xP\left(UU_x\frac{1}{2}U_{xx}\frac{1}{2}\chi P\right)}.`$ (22) The Hamiltonian $``$ is the integral of motion, i.e. $`d/dT=0`$. Since both $`U`$ and $`P`$ tend to zero at $`T\mathrm{}`$ we have $`=0`$. From the instanton equations and the condition $`=0`$ we get $$S=\frac{A}{2}+\frac{1}{4}𝑑T𝑑xP_xU^2=\frac{A}{3}+\frac{1}{6}𝑑T𝑑xP_xU_x.$$ (23) The last term is due to viscousity. At the right tail it is unimportant, and we have $`dS/dA=3S/A`$, i.e. $`SA^3`$. At the viscous left tail its contribution to the action is of the same order as other terms. If $`\mathrm{ln}𝒫(a)`$ is a powerlike function: $`\mathrm{ln}𝒫(a)|a|^\beta `$, then one has $`𝑑T𝑑xP_xU_x=2(3\beta )S`$. The high momenta can be calculated by the instanton method in a following way. Because $`a^n𝒫(a)`$ is a narrow function for large $`n`$, and only narrow velocity interval, which position depends on $`n`$, contributes to $`a^n`$. The position of this interval is exactly the saddle-point in the integral $`a^n𝑑aa^n\mathrm{exp}(𝒮_{\mathrm{extr}}(a))`$ (see (16)), that satisfies the equation $$n=a\frac{d𝒮_{\mathrm{extr}}(a)}{da}=8\nu ^3A\frac{dS(A)}{dA}.$$ (24) Combining it with $`=n/8\nu ^3A`$ we again get (20). To get the instanton equations for the average $`a^n`$ one should only substitute $``$ in (14) for $`n/8\nu ^3A`$. Then the instanton equations become the same as in . One also should consider fluctuations near the instanton as a background. 
The way how the fluctuations can be taken into account is unknown yet but their influence to $`\mathrm{ln}𝒫(a)`$ due to their phase volume is small in comparison with $`𝒮_{\mathrm{extr}}`$ while $`a(u_{\mathrm{rms}}/L)\mathrm{Re}`$. At smaller gradients the fluctuations essentially change the answer and we have the algebraic tail . ## III Numerical calculations The preliminary calculations that were made in $`x`$, $`T`$ variables have shown that the width of the instanton equations solution grows with $`|T|`$ and is proportional to $`|T|^{1/2}`$, while its amplitude is proportional to $`|T|^{1/2}`$. To avoid the necessity of treating simultaneously narrow structure at small $`T`$ and wide one at large $`T`$ we used the following variables: $`x=\xi \sqrt{T_0T},T=T_0\left(1e^\tau \right),`$ (25) $`U=\stackrel{~}{U}/\sqrt{T_0T},P=\stackrel{~}{P}/\sqrt{T_0T},`$ (26) where $`T_0`$ is some constant of the order of unity. The instanton equations in these variables take the form $`\stackrel{~}{U}_\tau +\frac{1}{2}\left(\xi \stackrel{~}{U}\stackrel{~}{U}_\xi \right)_\xi +\stackrel{~}{U}\stackrel{~}{U}_\xi =\stackrel{~}{\chi }(\tau )\stackrel{~}{P},`$ (27) $`\stackrel{~}{P}_\tau +\frac{1}{2}\left(\xi \stackrel{~}{P}+\stackrel{~}{P}_\xi \right)_\xi +\stackrel{~}{U}\stackrel{~}{P}_\xi =\delta (\tau )\delta ^{}(\xi )/\sqrt{T_0},`$ (28) where $`\stackrel{~}{\chi }(\xi ,\tau )=(T_0T)^{3/2}\chi (x)`$. The boundary conditions for $`\stackrel{~}{U}`$, $`\stackrel{~}{P}`$ are analogous to (11). Let us describe now the general structure of the numerical scheme that finds the solution of our boundary problem. The diffusion terms $`\stackrel{~}{U}_{\xi \xi }`$, $`\stackrel{~}{P}_{\xi \xi }`$ in instanton equations (27,28) have opposite signs. If one considers these equations as two linked Cauchy problems, then the natural direction of time in (27) is positive, while in (28) the direction is negative. Assume that at a given value of $``$ the approximate solution $`\stackrel{~}{U}_{\mathrm{old}}(\xi ,\tau )`$ is known. Let us try to make it closer to the true solution of the problem. For this purpose let us solve the Cauchy problem for (28) starting from $`\tau =+0`$ and moving up to large enough $`\tau _{\mathrm{min}}<0`$. Then using $`\stackrel{~}{P}`$ that we have got in the previous step we solve the Cauchy problem for (27) moving from $`\tau =\tau _{\mathrm{min}}`$ up to $`\tau =0`$. As a result we get the new values $`\stackrel{~}{U}_{\mathrm{new}}(\xi ,\tau )`$. Further we will use the sign $`f`$ for the mapping $`\stackrel{~}{U}_{\mathrm{old}}\stackrel{~}{U}_{\mathrm{new}}`$. The stationary point $`\stackrel{~}{U}`$ of the mapping $`f`$ and the corresponding function $`\stackrel{~}{P}`$ are the desired solution of (27,28). The numerical experiments have shown that iterations $$\stackrel{~}{U}^{(i+1)}=f\left[\stackrel{~}{U}^{(i)}\right],\stackrel{~}{U}^{(0)}0$$ (29) converge if $`>_{}0.96`$. While $`<_{}`$ the simple iterations (29) are divergent. Curve 1 at the Fig. 1(a) shows how the value of the gradient $`\stackrel{~}{A}=\stackrel{~}{U}(\xi ,\tau )/\xi |_{\xi =0,\tau =0}`$ depends on the number of iteration for $`=2`$. It can be seen that the stationary point of $`f`$ is unstable, but the mapping $`f\left[f\left[\stackrel{~}{U}\right]\right]`$ has two stable stationary points. 
The stability properties of the iterations are determined by the spectrum of the linearization $`\widehat{K}`$ of $`f`$ near a stationary point: $$f\left[\stackrel{~}{U}+V\right]=\stackrel{~}{U}+\widehat{K}V+\mathrm{\dots }$$ (30) The iteration process is convergent if the modulus of all eigenvalues of the linear part $`\widehat{K}`$ is less than 1. The period doubling indicates that while $``$ passes through $`_{}`$ one of $`\widehat{K}`$’s eigenvalues passes through $`-1`$. Let us denote this eigenvalue as $`\lambda `$. This knowledge allows us to construct a new mapping whose stable stationary point coincides with that of the mapping $`f`$. Let us perform the following iterations: $$\stackrel{~}{U}^{(i+1)}=f_c\left[\stackrel{~}{U}^{(i)}\right]\equiv cf\left[\stackrel{~}{U}^{(i)}\right]+(1-c)\stackrel{~}{U}^{(i)},$$ (31) where $`c`$ is some constant. It is easy to check that the stationary points of the mappings $`f`$ and $`f_c`$ coincide. The linear part of $`f_c`$ is equal to $`\widehat{K}_c=c\widehat{K}+(1-c)\widehat{1}`$, and its eigenvalue corresponding to the unstable $`\lambda `$ is $`\lambda _c=c\lambda +(1-c)`$. If we take the value of $`c`$ inside the interval $`0<c<2/(1-\lambda )<1`$, then $`|\lambda _c|<1`$, and the iterations (31) are convergent. Since we do not know $`\lambda `$ a priori, the value of $`c`$ that provides convergence of the iterations was determined experimentally. Fig. 1(a) illustrates the influence of decreasing $`c`$ on the dependence of $`\stackrel{~}{A}`$ on the iteration number. In the final version of the computer code the value of $`c`$ was changed in an adaptive way: each time the value of $`|\stackrel{~}{A}|`$ decreased after an iteration, $`c`$ was multiplied by $`0.9`$. One can compare in Fig. 1(b) the iteration runs for $`c=0.1`$, $`c=0.05`$ and for adaptive decreasing of $`c`$, all three for $`=2`$. The initial and final values of $`c`$ in the case of adaptive change were equal to $`0.1`$ and $`0.1\times 0.9^5=0.05905\mathrm{\dots }`$, respectively. It is clear that for the two last cases the iterations converge to the same solution.
### A Grid parameters
The solution of the Cauchy problems is found numerically using the method of finite differences. The grid covers the rectangular domain $`0<\xi <\xi _{\mathrm{max}}`$, $`\tau _{\mathrm{min}}<\tau <0`$. In the numerical calculations, some of the boundary conditions (11), which in principle are posed at infinity, were imposed at (large enough) $`\xi _{\mathrm{max}}`$ and $`\tau _{\mathrm{min}}`$. Typical values used: $`\xi _{\mathrm{max}}=10`$ and $`\tau _{\mathrm{min}}=-30`$. The grid had uniform mesh intervals in $`\xi `$; the typical number of grid sites along the $`\xi `$ axis was equal to $`1024`$. The first calculations showed that the solution changes rapidly in the vicinity of $`\tau =0`$, while at large $`\tau `$ it varies slowly. Because of this we used a nonuniform grid in the variable $`\tau `$. The time step was smaller inside the interval $`\tau _1<\tau <0`$; a typical value used was $`\tau _1=-0.4`$. The number of grid sites inside this interval varied from $`2000`$ to $`4000`$. The available computer power limited the total number of time steps in the grid to $`5000`$. During all calculations we used $`T_0=1`$.
### B Cauchy problem for $`\stackrel{~}{P}`$
The equation (28) should be solved backward in time. The source term on the right-hand side of equation (28) provides the initial condition $`\stackrel{~}{P}(\tau =0)\propto \delta ^{}(\xi )`$. 
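A minimal sketch of the outer loop described at the beginning of this section, i.e. the damped iteration (31) with the adaptive reduction of $`c`$. The two Cauchy solvers of Secs. III B and III C are assumptions, referenced only by name, and the grid layout is schematic:

```python
import numpy as np

def find_instanton(solve_P_backward, solve_U_forward, xi, n_tau, F,
                   c=0.1, n_iter=200, tol=1e-6):
    """Damped iteration U^(i+1) = c f[U^(i)] + (1 - c) U^(i), eq. (31).

    solve_P_backward(U, F) and solve_U_forward(P) stand for the Cauchy solvers
    of Secs. III B and III C (fields on the same (tau, xi) grid, rows ordered
    from tau_min up to tau = 0); F is the source amplitude of eq. (28).
    """
    U = np.zeros((n_tau, len(xi)))                # starting guess U^(0) = 0
    P = np.zeros_like(U)
    A_prev = 0.0
    for _ in range(n_iter):
        P = solve_P_backward(U, F)                # integrate (28) down from tau = 0
        U_new = c * solve_U_forward(P) + (1.0 - c) * U   # damped map f_c
        A = np.gradient(U_new[-1], xi)[0]         # gradient at xi = 0, tau = 0
        if abs(A) < abs(A_prev):                  # |A| dropped: damp more strongly
            c *= 0.9
        converged = np.max(np.abs(U_new - U)) < tol
        U, A_prev = U_new, A
        if converged:
            break
    return U, P
```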
This means that at small times the field $`\stackrel{~}{P}`$ is localized in a very narrow interval centered at $`\xi =0`$. Such initial condition can not be accurately discretized, so we used another way to represent $`\stackrel{~}{P}`$ at small times. For small $`\tau `$ the field $`\stackrel{~}{P}`$ is very narrow. At its support we can approximate the velocity $`\stackrel{~}{U}`$ by a linear profile: $`\stackrel{~}{U}=\stackrel{~}{A}(\tau )\xi `$. The evolution of $`\stackrel{~}{P}`$ in such a velocity field is described by the derivative of Gaussian contour: $`\stackrel{~}{P}(\xi ,\tau )={\displaystyle \frac{\xi P_{\mathrm{amp}}(\tau )}{\sqrt{2\pi T_0D^3(\tau )}}}\mathrm{exp}\left({\displaystyle \frac{\xi ^2}{2D(\tau )}}\right),`$ (32) $`P_{\mathrm{amp}}(0)=1,D(0)=0.`$ (33) $`D(\tau )={\displaystyle \underset{\tau }{\overset{0}{}}}𝑑\tau ^{}\mathrm{exp}\left({\displaystyle \underset{\tau ^{}}{\overset{\tau }{}}}𝑑\tau ^{\prime \prime }(2\stackrel{~}{A}(\tau ^{\prime \prime })+1)\right),`$ (34) $`P_{\mathrm{amp}}(\tau )=\mathrm{exp}\left({\displaystyle \underset{\tau }{\overset{0}{}}}𝑑\tau ^{}(2\stackrel{~}{A}(\tau ^{})+1/2)\right).`$ (35) We use such a representation for $`\stackrel{~}{P}`$ for $`\tau _0<\tau <0`$. The value of $`\tau _0`$ is chosen in such a way, that the velocity field $`\stackrel{~}{U}`$ is still linear at the width of $`\stackrel{~}{P}`$. From the other hand, $`\stackrel{~}{P}`$ at $`\tau =\tau _0`$ already becomes wide in comparison with mesh interval $`\mathrm{\Delta }\xi `$. Typical value of $`\tau _0`$ used: $`\tau _0=1/1500`$. For times $`\tau <\tau _0`$ the solution is found by fully implicit scheme $`{\displaystyle \frac{\stackrel{~}{P}_i^{n+1}\stackrel{~}{P}_i^n}{\mathrm{\Delta }\tau }}+{\displaystyle \frac{1}{2}}\stackrel{~}{P}_i^{n+1}+D_i^{n+1}{\displaystyle \frac{\stackrel{~}{P}_{i+1}^{n+1}2\stackrel{~}{P}_i^{n+1}+\stackrel{~}{P}_{i1}^{n+1}}{\mathrm{\Delta }\xi ^2}}`$ (36) $`+r_{+,i}^{n+1}{\displaystyle \frac{\stackrel{~}{P}_{i+1}^{n+1}\stackrel{~}{P}_i^{n+1}}{\mathrm{\Delta }\xi }}+r_{,i}^{n+1}{\displaystyle \frac{\stackrel{~}{P}_i^{n+1}\stackrel{~}{P}_{i1}^{n+1}}{\mathrm{\Delta }\xi }}=0,`$ (37) where $`r_{\pm ,i}^n`$, $`D_i^n`$ are equal to $`r_{\pm ,i}^n=0.5\left(r(\xi _i,\tau _n)\pm |r(\xi _i,\tau _n)|\right),`$ (38) $`D_i^n={\displaystyle \frac{1}{1+0.5|r(\xi _i,\tau _n)|\mathrm{\Delta }\xi }}.`$ (39) The function $`r(\xi ,\tau )`$ is expressed via velocity field: $`r(\xi ,\tau )=0.5\xi +\stackrel{~}{U}(\xi ,\tau )`$. Here $`\mathrm{\Delta }\xi >0`$, $`\mathrm{\Delta }\tau <0`$ — mesh intervals, and $`\xi _i`$, $`\tau _n`$ — site coordinates. The numerical scheme used is monotonous and stable, it is of the first order of accuracy in $`\mathrm{\Delta }\tau `$ and of the second order in $`\mathrm{\Delta }\xi `$ . ### C Cauchy problem for $`\stackrel{~}{U}`$ At this stage we use the initial condition $`\stackrel{~}{U}(\tau =\tau _{\mathrm{min}})0`$. The viscousity, source and self-advection terms are treated by splitting technique . At each time step it is first calculated the change of $`\stackrel{~}{U}`$ due to the source, then — to the viscousity, and at the last — to the nonlinearity. 
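For concreteness, one implicit step of the scheme (37)-(39) for $`\stackrel{~}{P}`$ can be assembled and solved as a tridiagonal system. This is a schematic transcription (with the boundary values simply pinned to zero), not the code actually used:

```python
import numpy as np
from scipy.linalg import solve_banded

def implicit_P_step(P_old, U, xi, dtau):
    """One fully implicit step of scheme (37)-(39); dtau < 0 (backward in time)."""
    dxi = xi[1] - xi[0]
    r = 0.5 * xi + U                              # r(xi, tau) = xi/2 + U
    r_plus = 0.5 * (r + np.abs(r))                # eq. (38)
    r_minus = 0.5 * (r - np.abs(r))
    D = 1.0 / (1.0 + 0.5 * np.abs(r) * dxi)       # eq. (39)

    # Coefficients of P_{i-1}, P_i, P_{i+1} in the implicit operator of (37).
    sub = D / dxi**2 - r_minus / dxi
    diag = 1.0 / dtau + 0.5 - 2.0 * D / dxi**2 - r_plus / dxi + r_minus / dxi
    sup = D / dxi**2 + r_plus / dxi
    rhs = P_old / dtau

    # Dirichlet boundaries: P = 0 at both ends of the xi grid.
    diag[0] = diag[-1] = 1.0
    sup[0] = sub[-1] = 0.0
    rhs[0] = rhs[-1] = 0.0

    n = len(xi)
    ab = np.zeros((3, n))                         # banded storage for solve_banded
    ab[0, 1:] = sup[:-1]                          # super-diagonal
    ab[1, :] = diag                               # main diagonal
    ab[2, :-1] = sub[1:]                          # sub-diagonal
    return solve_banded((1, 1), ab, rhs)
```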
From calculated already grid layer $`\stackrel{~}{U}^n=\stackrel{~}{U}(\tau =\tau _n)`$ the next layer $`\stackrel{~}{U}^{n+1}`$ is found in a following order: first, the equation $`\stackrel{~}{U}_\tau =\stackrel{~}{\chi }(\tau )\stackrel{~}{P}`$, is solved: $$\frac{\stackrel{~}{U}_i^{n+1}\stackrel{~}{U}_i^n}{\mathrm{\Delta }\tau }=(\stackrel{~}{\chi }(\tau )\stackrel{~}{P})_n.$$ (40) The convolution $`\stackrel{~}{\chi }\stackrel{~}{P}`$ is calculated as the result of inverse fast Fourier transform (FFT) acting on the product of $`\stackrel{~}{\chi }`$’s and $`\stackrel{~}{P}`$’s FFT images. The external force correlator during all calculations was equal to $`\chi (x)=(1x^2)e^{x^2/2}=d^2e^{x^2/2}/dx^2`$. The numbers $`\stackrel{~}{U}_i^{n+1}`$ that are found in such a way are not a final solution for the layer $`\stackrel{~}{U}^{n+1}`$, since only the source term has been taken into account yet. We use them as an input at the next step, we will denote them as $`\stackrel{~}{U}_i^n`$ (note that they do not coincide with $`\stackrel{~}{U}_i^n`$ in (40)). Next, the viscousity and linear part of advection are taken into account, according to the equation $`\stackrel{~}{U}_\tau +\frac{1}{2}\left(\xi \stackrel{~}{U}\stackrel{~}{U}\right)_\xi =0`$. The fully implicit scheme was used analogously to (37): $`{\displaystyle \frac{\stackrel{~}{U}_i^{n+1}\stackrel{~}{U}_i^n}{\mathrm{\Delta }\tau }}+{\displaystyle \frac{1}{2}}\stackrel{~}{U}_i^{n+1}D_i{\displaystyle \frac{\stackrel{~}{U}_{i+1}^{n+1}2\stackrel{~}{U}_i^{n+1}+\stackrel{~}{U}_{i1}^{n+1}}{\mathrm{\Delta }\xi ^2}}`$ (41) $`+{\displaystyle \frac{1}{2}}{\displaystyle \frac{\stackrel{~}{U}_i^{n+1}\stackrel{~}{U}_{i1}^{n+1}}{\mathrm{\Delta }\xi }}=0,`$ (42) here $`D_i=1/(1+0.25\xi _i\mathrm{\Delta }\xi )`$. Again, the numbers $`\stackrel{~}{U}_i^{n+1}`$ do not form a final solution, and we send them to the next step with a $`\stackrel{~}{U}_i^n`$ notation. The nonlinear part of the equation $`\stackrel{~}{U}_\tau +\stackrel{~}{U}\stackrel{~}{U}_\xi =0`$ was solved by explicit conservative scheme : $$\frac{\stackrel{~}{U}_i^{n+1}\stackrel{~}{U}_i^n}{\mathrm{\Delta }\tau }+\frac{\stackrel{~}{U}_{i+1}^n+\stackrel{~}{U}_i^n+\stackrel{~}{U}_{i1}^n}{3}\frac{\stackrel{~}{U}_{i+1}^n\stackrel{~}{U}_{i1}^n}{2\mathrm{\Delta }\xi }=0,$$ (43) that finally gives us the next layer of the velocity field $`\stackrel{~}{U}^{n+1}`$. This scheme is of the first order of accuracy in $`\mathrm{\Delta }\tau `$ and of the second one in $`\mathrm{\Delta }\xi `$. ## IV Viscous instanton In this section we represent the results of our calculations, that show the structure of the instanton and its change with $`||`$. The minimal value of $``$ at which the reliable results in numerics were obtained is $`=2`$. The general features of the instanton structure change with $``$ can be obtained from Fig. 2 that shows the level curves of $`\stackrel{~}{U}(\xi ,\tau )`$ for three values of $``$. Since $`\stackrel{~}{U}`$ and $`\stackrel{~}{P}`$ are the odd functions of $`\xi `$, we draw only the region where $`\xi >0`$. The calculations were done in a rectangular $`0<\xi <10`$, $`30<\tau <0`$, whose dimensions are a bit larger than it is shown at the figure. One can see that the instanton life-time and the maximal value of $`|\stackrel{~}{U}|`$ rapidly increase with the growth of $`||`$. The growth leads to the deformation of level curves near $`\tau =0`$ because of the influence of nonlinearity, that is weak at $`=0.9`$ (Fig. 2(a)) and very strong at $`=2.0`$ (Fig. 2(c)). 
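Two of the three splitting substeps admit a compact sketch: the spectral evaluation of the source convolution and the explicit conservative step (43). The implicit viscous/linear-advection substep is analogous to the $`\stackrel{~}{P}`$ solver sketched above and is omitted; the periodic boundary handling and the precomputed transform of the correlator are simplifying assumptions:

```python
import numpy as np

def source_step(U, P, chi_hat, dxi, dtau):
    """U_tau = (chi * P): spatial convolution with the forcing correlator,
    evaluated with FFTs.  chi_hat is the FFT (rfft) of chi sampled on the
    xi grid in wrap-around order; dxi supplies the quadrature weight."""
    conv = np.fft.irfft(chi_hat * np.fft.rfft(P), n=len(P)) * dxi
    return U + dtau * conv

def nonlinear_step(U, dxi, dtau):
    """Explicit conservative scheme (43) for U_tau + U U_xi = 0
    (periodic ends used here for brevity)."""
    Up, Um = np.roll(U, -1), np.roll(U, 1)
    return U - dtau * (Up + U + Um) / 3.0 * (Up - Um) / (2.0 * dxi)
```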
### A Structure Detailed analysis of the instanton solution based on the results of numerical calculations allows us to distinguish five different regimes in the instanton time evolution. Below we discuss them consecutively from $`t=0`$ to $`t=-\mathrm{\infty }`$. The first regime consists of the viscous smearing of the field $`p`$ up to the scale of the kink width $`w_{\mathrm{kink}}=\sqrt{2\nu /|a|}=1/\sqrt{|A|}`$ (see (15), Fig. 3). Since the viscosity plays the crucial role in this regime, we will also use dimensional variables. At $`t=0`$ we have $`p(x,t)\propto \delta ^{\prime }(x)`$, and the width of the kink in the velocity profile is equal to $`w_{\mathrm{kink}}=\sqrt{2\nu /|a|}`$ (see (15)). Since $`p`$ is very narrow, the viscosity dominates the evolution. The width of $`p`$ obeys the diffusion law and equals $`\sqrt{2\nu |t|}`$. These two widths become comparable at time $`|t|=1/|a|`$, or $`|T|=1/|A|`$. This means that during the whole time of smearing of $`p`$ by viscosity the width of the kink is of the order of $`w_{\mathrm{kink}}`$. Indeed, if the shape of the velocity profile deviates from the steady-state kink solution (15), then the change of the kink width during this time period would be of the order of $`ut\sim \sqrt{2\nu |a|}/|a|=\sqrt{2\nu /|a|}=w_{\mathrm{kink}}`$. In this regime the source term $`\chi \ast P`$ in the instanton equation (13) is unimportant. When the width of $`p`$ becomes of the order of $`w_{\mathrm{kink}}`$, the rate of expansion of $`p`$ due to the velocity gradient becomes comparable with the rate due to viscosity (such a balance determines the width of the kink). The next (second) regime was exhaustively studied in . It consists of the dilation of the fields $`U`$, $`P`$ up to the pump scale $`L=1`$. The fields are advected by the velocity $`U`$, and, considering the evolution back in time, they are expanded by it since $`U_x|_{x=0}<0`$. The time needed for the expansion is $`T_{}\sim L/U\sim 1/\sqrt{|A|}`$. During the 3rd–5th regimes the width of the $`U`$ and $`P`$ fields is much greater than $`L=1`$. Then it is natural to substitute $`\chi (x)`$ by $`\chi _2\delta ^{\prime \prime }(x)`$. The instanton equations take the following simple form: $$U_T+UU_x+\frac{1}{2}V_{xx}=0,V_T+UV_x+\frac{1}{2}U_{xx}=0,$$ (44) where $`V=2\chi _2P-U`$, $`\chi _2=\frac{1}{2}\int dxx^2\chi (x)`$. While moving to large negative times, the numerical solution has a tendency to fall into $`U=V`$ (see Fig. 4, curves $`\tau =-4`$ and $`\tau =-8`$). Then the equations for $`U`$, $`V`$ reduce to the Burgers equations with the evolution back in time. Such a substitution is not always possible. During such an evolution shock waves occur (see, e.g., the level curves in Fig. 2(c) near $`\tau =-5`$, and Fig. 4, curves 1 and 2). For the transition to equations (44) to be valid the width of these shock waves should be larger than $`L=1`$. Otherwise the substitution of $`\chi `$ by $`\delta ^{\prime \prime }`$ is not valid. The width of the shock wave in dimensionless variables is greater than $`1`$ only if its height is smaller than $`1`$. However, right after the 2nd regime, when the width of the $`U`$ and $`P`$ fields becomes greater than the pumping force correlation length $`L=1`$, the amplitude of the velocity field $`U`$ is of the order of $`\sqrt{|A|}\gg 1`$. The amplitude of $`U`$ becomes small only at very large times (it is shown below that such a crossover happens at $`T\sim -\sqrt{|A|}`$). This means that there is an intermediate regime following the 2nd one, where the substitution of $`\chi `$ by $`\delta ^{\prime \prime }`$ is inapplicable.
In this (third) regime the fields $`U`$ and $`P`$ are smooth functions in an interval wider than $`1`$. At the ends of this interval they contain shock waves — the values of $`U`$ and $`P`$ rapidly go to zero (as shown schematically in Fig. 5). We will use the word “shock” for these structures at the ends of the interval, while for the narrow structure in the velocity field $`U`$ near $`x=0`$, $`T=0`$ we will use the word “kink”. Now we consider the structure of the shocks in detail. Let us denote the heights of the shocks in the $`U(x,T)`$ and $`P(x,T)`$ fields by $`H_U(T)`$ and $`H_P(T)`$, respectively. We will denote their position by $`x_{\mathrm{shock}}(T)`$ ($`x_{\mathrm{shock}}>0`$, the shocks are located at $`x=\pm x_{\mathrm{shock}}`$). In the third regime the width of the shocks in the field $`P`$ is determined by the competition between squeezing by the velocity $`U`$ and spreading by viscosity (see Fig. 6). Since we are in a strongly nonlinear regime the viscosity is weak, and the width of $`P`$’s shocks is very small. The shocks in $`P`$ are stationed at the center of $`U`$’s shocks, i.e. in an almost linear velocity profile with gradient $`U_x\sim H_U`$. The width of $`P`$’s shocks can be estimated as $`1/\sqrt{H_U}\ll L`$. Then a good approximation for $`P(x,T)`$ near the shock, $`x\approx x_{\mathrm{shock}}`$, is $`P(x,T)\approx H_P(T)\theta (x_{\mathrm{shock}}(T)-x)`$, where $`\theta (x)`$ is the step function. During the evolution forward in time the shocks in $`U`$ do not break down because of the source term $`\chi \ast P`$. The source prevents the destructive effect of the advection term $`UU_x`$. The shocks in $`P`$ should carry the $`U`$’s shocks of height $`H_U`$, i.e. of strength $`UU_x\sim H_U^2`$. Thus we should have $`H_P\sim H_U^2`$ in this regime. Now we show it more carefully. Let us write the instanton equation (13) in the reference frame of the shock near the point $`x=x_{\mathrm{shock}}`$ (see Fig. 6). We have two contributions to the time derivative $`U_T`$: from the growth of $`H_U`$ in time (of order $`H_U/T`$) and from the motion of the shock (of order $`H_U^2`$). Neglecting the first one, we write the following equation for the velocity $`U(x,T)`$: $$\frac{1}{2}\left(U(x)-U(x_{\mathrm{shock}})\right)_x^2=H_PX^{\prime }(x-x_{\mathrm{shock}}),$$ (45) where the new function $`X(x)`$ is determined by the equation $`\chi (x)=X^{\prime \prime }(x)`$ with the condition $`X\to 0`$ as $`x\to \pm \mathrm{\infty }`$. Integrating this equation once we obtain $$\left(U(x)-U(x_{\mathrm{shock}})\right)^2=2H_P\left(X(0)-X(x-x_{\mathrm{shock}})\right).$$ (46) Since $`U(x_{\mathrm{shock}})=H_U/2`$ we have $`H_P=H_U^2/8X(0)`$. The next step consists of finding the solution of (13,14) between the shocks located at $`x=\pm x_{\mathrm{shock}}`$, treating the fields $`U`$, $`P`$ as smooth and using the boundary condition $$P(\pm x_{\mathrm{shock}}-0,T)=U^2(x_{\mathrm{shock}}-0,T)/8X(0).$$ (47) The instanton equations (13,14) take the form $$U_T+UU_x=0,P_T+UP_x=0.$$ (48) Here we approximate the shocks as jump discontinuities, and the condition (47) relates the heights of the jumps (see Fig. 5). Here the diffusion terms and the term $`\chi \ast P`$ are omitted. One can check that they are negligible since the characteristic $`x`$-scale of the solution is large enough. The equations (48) can be integrated by characteristics (or Lagrangian trajectories). The velocity of the shocks is equal to $`\pm H_U/2`$, i.e. all the trajectories disappear at the shocks (if we consider the evolution back in time).
The value of $`U`$ (or $`P`$) is conserved in time if we follow a Lagrangian trajectory. This means that the relation $`P=U|U|/8X(0)`$ holds everywhere between the shocks. Due to self-advection the velocity field $`U`$ becomes more and more linear as a function of $`x`$ while $`|T|`$ increases. This happens at the border between the 2nd and the 3rd regimes. In the third regime we can take $`U(x,T)=x/T`$ between the shocks. The field $`P`$ is equal to $`P(x,T)=x|x|/8X(0)T^2`$. The velocity $`U`$ simply squeezes or expands the field $`P`$ without changing its shape. Since $`H_P\sim H_U^2`$, the field $`P`$ should have the same scaling as $`U^2`$, i.e. $`P\propto x^2`$. The concave form of $`P`$ (as one can see, $`=2`$ is not yet good enough for a clear picture) is shown in Fig. 7. Let us determine the time dependence of $`x_{\mathrm{shock}}`$. We have $$\frac{dx_{\mathrm{shock}}}{dT}=-\frac{1}{2}H_U(T)=\frac{x_{\mathrm{shock}}}{2T}.$$ (49) Solving this equation we get $`x_{\mathrm{shock}}(T)=B\sqrt{-T}`$. Since $`x_{\mathrm{shock}}\sim 1`$ at $`|T|\sim 1/\sqrt{|A|}`$, we get $`B\sim |A|^{1/4}`$. The shock heights are equal to $`H_U(T)=B/\sqrt{-T}`$, $`H_P(T)=B^2/8X(0)|T|`$. The width of the shocks in the $`P`$ field is of the order of $`1/\sqrt{H_U}`$. The shocks in the velocity field $`U`$ have a width of the order of $`1`$, since $`U`$ is pumped by $`\chi \ast P`$. At large times, $`T\sim -\sqrt{|A|}`$, the height $`H_U`$ (and consequently the width of the shocks in $`P`$) becomes of the order of $`1`$. This indicates the end of the third regime and the beginning of the fourth. Going further back in time we finally enter the domain of validity of the equations (44). The solution falls into $`U=V`$. Again, the solution has two shocks between which it is a smooth function. The shock position satisfies $`x_{\mathrm{shock}}\propto \sqrt{-T}`$, and between the shocks $`U=V=x/T`$ holds. This regime exactly corresponds to the self-similar solution $`u(x,t)=\theta (t-Cx^2)x/t`$ of the inviscid Burgers equation $`u_t+uu_x-0u_{xx}=0`$ (here $`C`$ is a parameter). At the far tail of the instanton, due to viscous dissipation, the solution $`U=V`$ transforms to the derivative of a Gaussian — the self-similar solution of the diffusion equation (see Fig. 4, $`\tau =-16`$). During the fifth regime the advection term $`UU_x`$ becomes irrelevant. In $`\xi `$, $`\tau `$ variables the solution tends to $`\stackrel{~}{U}\propto \xi \mathrm{exp}(\tau /2-\xi ^2/2)`$, which was observed in the numerical calculations. During the 4th and the 5th regimes the amplitude of the velocity field $`U`$ is less than unity. This means that the amplitude of $`U`$ is below the level of the typical statistical fluctuations, and the saddle-point approximation is meaningless there. The typical events that demonstrate large negative gradients start from some velocity configuration $`U(x)`$ with an amplitude of the order of unity and are governed by the 3rd regime first. The action of these events and their further evolution almost do not depend on the initial velocity field $`U(x)`$, and the dependence of $`𝒫(a)`$ on $`a`$ remains unaltered by the averaging over all possible configurations $`U(x)`$. We considered the 4th and the 5th regimes since they are parts of the whole solution of our nonlinear boundary problem. Let us now run the whole evolution forward in time. At the beginning (5th regime) the field $`U`$ is pumped by the very wide $`P`$. The pumping force is proportional to $`P_{xx}`$. During the 4th regime the source is localised at the shocks in $`P`$, which leads to the formation of shocks in $`U`$.
The $`U`$’s shocks want to break down because of self-advection, but the source term $`\chi \ast P`$ keeps them going. When the growing height of $`P`$ becomes larger than unity, the shocks in $`P`$ become narrow. The balance between the terms $`UU_x`$ and $`\chi \ast P`$ in (13) changes a little, which results in a change of the form of $`U`$’s shocks — it is now determined by the shape of $`\chi (x)`$. The distance between the $`P`$’s shocks decreases in time and eventually it becomes comparable with unity. After this $`P`$ becomes even narrower and the efficiency of the source term begins to fall. The self-advection of the velocity destroys the shocks and leads to the formation of the kink at $`x=0`$, while $`P`$ transforms to $`\delta ^{\prime }(x)`$. The kink shape at $`=2`$ is shown in Fig. 3. The time evolution of the instanton is illustrated schematically in Fig. 8. ### B Action One can present the action $`S(A)`$ in the form $`S=\int _{-\mathrm{\infty }}^0dTs(T)`$, as in expression (18). For $`=2`$ the action density $`s(T)`$ obtained from the numerical calculations is shown in Fig. 9. While $`T<T_{}=-1/\sqrt{|A|}`$ the convolution $`\chi \ast p`$ is localized at the shocks, so $`s(T)\sim H_P^2(T)\sim B^4/T^2`$. The maximum of $`s(T)`$ is located at $`T\approx T_{}`$. Further increase of $`T`$ leads to a decrease of the density $`s(T)`$ because $`P(x)`$ becomes more and more narrow without an adequate growth of its amplitude. This region of small times $`T>T_{}`$ was studied in . It was shown that the contribution $`S_{T>T_{}}`$ to the extremal action from this interval is of order $`|A|^{3/2}`$, and the main contribution to the action from it comes from the region $`T\approx T_{}`$ — the border between the 2nd and the 3rd regimes. Exactly these two regimes determine the optimal configuration of the noise providing an event with a large negative gradient. The contribution of the time region $`-\sqrt{|A|}=T_B<T<T_{}=-1/\sqrt{|A|}`$ (3rd regime) to the extremal action $`S(A)`$ can be estimated as $$S_{T<T_{}}\sim \int _{T_B}^{T_{}}dTH_P^2(T)\sim B^4/T_{}\sim |A|^{3/2}.$$ (50) Note that the value of $`S_{T<T_{}}`$ is again accumulated in the region $`T\approx T_{}`$. The crucial point is that the contribution to the action $`S(A)`$ from the tail of the instanton (or large times $`T<T_{}`$) is finite, i.e. the integral (50) converges (the addition to the action from the interval $`T<T_B`$ is negligible). Also this contribution is not dominant, i.e. it is not much greater than the contribution of the order $`|A|^{3/2}`$ from small times ($`T>T_{}`$). This means that in our case the instanton is sufficiently localized in time. Its long-time dynamics does not destroy the fact that it is the main fluctuation determining the statistics of large negative gradients. In Fig. 10 the function $`d(\mathrm{ln}S)/d(\mathrm{ln}A)=AS^{\prime }/S`$ obtained from the numerical calculations is shown. We used different grid parameters for the calculations of the instanton structure and for this figure. Here we used $`\tau _{\mathrm{min}}=-4`$ with the boundary condition $`\stackrel{~}{U}(\xi ,\tau _{\mathrm{min}})=\chi _2\stackrel{~}{P}(\xi ,\tau _{\mathrm{min}})`$. This boundary condition was used as the initial condition for $`\stackrel{~}{U}`$ during the iterations. It turned out that, e.g.,
for $`=2`$ we get the following values of $`AS^{\prime }/S`$ and $`A`$ for different values of $`\tau _{\mathrm{min}}`$:

| $`\tau _{\mathrm{min}}`$ | $`AS^{\prime }/S`$ | $`A`$ |
| --- | --- | --- |
| $`-4`$ | $`1.441`$ | $`-148.1`$ |
| $`-30`$ | $`1.437`$ | $`-102.6`$ |

Although, when calculating with $`\tau _{\mathrm{min}}=-30`$, the condition $`\stackrel{~}{U}(\xi ,-4)=\chi _2\stackrel{~}{P}(\xi ,-4)`$ holds to within 15% (as one can see from Fig. 4), we prefer to use the grid shorter in time ($`\tau _{\mathrm{min}}=-4`$) in order to have a smaller time step. The value of $`A`$ strongly depends on the time step; this was observed in the numerical experiments. Such sensitivity is also characteristic of the Burgers equation. Although the calculations with small $`|\tau _{\mathrm{min}}|`$ give us worse accuracy at the tail of the instanton, the smallness of the time step allows us to accurately describe the main part of the instanton where the nonlinearity level is high. One can see the cubic asymptotics $`S\propto A^3`$ at $`A>0`$. The instanton structure for $`A>0`$ that was described in  was confirmed by our numerical calculations. The case $`A<0`$, corresponding to the PDF’s left tail, is more complicated. The function $`AS^{\prime }/S`$ has its minimal value at $`A\approx -12`$. As $`A`$ decreases further, it starts to grow and finally tends to the value $`3/2`$. In this case the coefficient $`S/|A|^{3/2}`$ is small. ## V Conclusion We have examined the remote left tail of the velocity gradient PDF $`𝒫(u_x)`$ in forced Burgers turbulence. The possibility of directly solving the instanton equations numerically by iterations has been demonstrated. Numerical calculations and the analysis of the instanton behavior at times large compared with its lifetime $`t_{}\sim 1/\sqrt{\nu |u_x|}`$, together with the solution at small times from , show that $`\mathrm{ln}𝒫(u_x)\propto -|u_x|^{3/2}`$. ## ACKNOWLEDGMENTS We are grateful to G.E. Falkovich, A.V. Fouxon, I.V. Kolokolov, V.V. Lebedev and E.V. Podivilov for useful discussions. This work was partially supported by the Russian Foundation for Basic Research (gr. 98-02-17814), by INTAS (M.S., gr. 96-0457) within the program of the International Center for Fundamental Physics in Moscow, by grants of the Minerva Foundation, Germany, and by the Mitchell Research Fund (M.S.).
no-problem/0001/astro-ph0001491.html
ar5iv
text
# Better Astrometric Deblending of Gravitational Microlensing Events by Using the Difference Image Analysis Method ## 1 Introduction Searches for Galactic dark matter by detecting flux variations of source stars caused by gravitational microlensing have been and are being carried out by several groups (MACHO: Alcock et al. 1993; EROS: Aubourg et al. 1993; OGLE: Udalski et al. 1993; DUO: Alard & Guibert 1997). To increase the event rate, these searches are being conducted towards very dense star fields such as the Galactic bulge and the Magellanic Clouds. While searches towards these dense star fields result in an increased event rate, they also imply that the observed light curves are affected by the unwanted flux from unresolved nearby stars: the blending effect. The light curve of a microlensing event with an isolated source star is represented by $$F=A_0F_0,$$ (1) where $`F_0`$ is the unlensed flux of the source star (baseline flux). The gravitational amplification is related to the lens-source separation $`u`$, normalized by the angular Einstein ring radius, by $$A_0=\frac{u^2+2}{u\sqrt{u^2+4}};u=\left[\beta _0^2+\left(\frac{t-t_0}{t_{\mathrm{E},0}}\right)^2\right]^{1/2},$$ (2) where the lensing parameters $`\beta _0`$, $`t_0`$, and $`t_{\mathrm{E},0}`$ represent the lens-source impact parameter, the time of maximum amplification, and the Einstein ring radius crossing time scale (Einstein time scale), respectively. These lensing parameters are determined by fitting theoretical light curves to the observed one. One can obtain information about the lens mass $`M`$ because the Einstein time scale is proportional to the square root of the lens mass, i.e. $`t_{\mathrm{E},0}\propto M^{1/2}`$. When an event is affected by blended light, on the other hand, its light curve differs from that of the unblended event and is given by $$F_{\mathrm{PSF}}=A_0F_0+B,$$ (3) where $`B`$ represents the flux from blended stars. Then, to fit the observed light curve of a blended event, one should include $`B`$ as a fourth parameter in addition to the three fitting parameters ($`\beta _0`$, $`t_0`$, and $`t_{\mathrm{E},0}`$) of an unblended event. As a result, the uncertainties in the determined Einstein time scale and the corresponding lens mass for a blended event are significantly larger than those for an unblended event (Di Stefano & Esin 1995; Woźniak & Paczyński 1997; Han 1997; Alard 1997). To resolve the blending problem, a newly developed technique to detect and measure light variations caused by gravitational microlensing was proposed by Tomaney & Crotts (1996), Alard & Lupton (1998), and Alard (1999). This so-called Difference Image Analysis (DIA) method measures the variation of source star flux by subtracting observed images from a normalized reference image, i.e. $$F_{\mathrm{DIA}}=F_{\mathrm{obs}}-F_{\mathrm{ref}}=F_0(A_0-1),$$ (4) where $`F_{\mathrm{obs}}=A_0F_0+B`$ and $`F_{\mathrm{ref}}=F_0+B`$ represent the source star fluxes measured from the image obtained during the progress of the event and from the reference image, respectively. Since not only the baseline flux of the lensed source star but also the flux from blended stars is subtracted by the DIA method, the light variation measured from the subtracted image is free from the effect of blending. Since photometric precision is improved by removing the blended light, the DIA method was adopted by the MACHO group and actually applied to microlensing searches (Alcock et al. 1999a, 1999b).
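To make Eqs. (1)–(4) concrete, the short sketch below (ours, purely illustrative; the flux units are arbitrary and the parameter values are chosen to match the example event discussed in § 2) computes the blended PSF light curve and the blending-free DIA flux variation for the same event.

```python
import numpy as np

def amplification(t, beta0, t0, tE0):
    """Standard point-lens amplification A_0(t) from Eq. (2)."""
    u = np.sqrt(beta0**2 + ((t - t0) / tE0)**2)
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def lightcurves(t, F0, B, beta0=0.5, t0=0.0, tE0=1.0):
    """Blended PSF flux (Eq. 3) and blending-free DIA flux variation (Eq. 4)."""
    A0 = amplification(t, beta0, t0, tE0)
    F_psf = A0 * F0 + B          # what PSF photometry measures
    F_dia = F0 * (A0 - 1.0)      # what image subtraction measures
    return F_psf, F_dia

# Example: a source with F0 = 0.5 blended with B = 0.5 (arbitrary flux units).
t = np.linspace(-2.0, 2.0, 201)
F_psf, F_dia = lightcurves(t, F0=0.5, B=0.5)
```

Note that the DIA flux (4) contains no $`B`$: image subtraction removes the blended light, but it also removes the information on the baseline flux $`F_0`$, which is the root of the difficulty discussed next.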
However, even with the DIA method a dramatic reduction of the uncertainties in the determined Einstein time scales of gravitational microlensing events will be difficult. This is because the DIA method, by its nature, has difficulties in measuring the baseline flux $`F_0`$ of a source star. Unless the blended light fraction of the source star flux measured in the reference image, and thus the baseline flux $`F_0=F_{\mathrm{ref}}-B`$, is determined by some other means, one still has to include $`B`$ as an additional fitting parameter.<sup>1</sup><sup>1</sup>1Since higher photometric precision is expected by using the DIA method, the uncertainties of the determined lens parameters will be smaller than those of lens parameters determined by using the current method based on PSF photometry. Han (2000) showed that for $`30\%`$ of high amplification events, one can determine $`F_0`$ with uncertainties less than 50%. Therefore, detecting the blending effect and estimating the blended light fraction in the observed source star flux is still an important issue to be resolved (see more discussion in § 2). There have been several methods proposed for the detection of the blending effect. For a high amplification event, one can determine the unblended baseline flux of the source star from the shape of the light curve obtained by using the DIA method itself (Han 2000). In addition, if the color difference between the lensed and blended stars is large, the effect of blending can be detected by measuring the color changes during the event (Buchalter, Kamionkowski, & Rich 1996). One can also identify the lensed source among blended stars by using high resolution images obtained from Hubble Space Telescope (HST) observations (Han 1997). In addition, Han & Kim (1999) showed that the effect of blending can be detected from astrometric observations of an event by using high resolution interferometers such as the Space Interferometry Mission (SIM). However, these methods either have limited applicability, being restricted to several special cases of microlensing events, or are impractical because they require highly demanding instruments for space observations. A much more practical method for the detection of the blending effect that is applicable to general microlensing events is provided by measuring the linear shift of the source star image centroid towards the lensed source star (hereafter centroid shift) during gravitational amplification (Alard, Mao, & Guibert 1995; Alard 1996; Goldberg 1998, see more detail in § 3). Goldberg & Woźniak (1997) actually applied this method to the OGLE-1 database and demonstrated the efficiency of this method by detecting centroid shifts greater than $`0^{\prime \prime }.2`$ for nearly half of the total tested events (seven out of 15 events). However, even with this method the blending effect for an important fraction of blended events, especially for low amplification events, cannot be detected due to their small centroid shifts (Han, Jeong, & Kim 1998). In this paper, we show that if the blending effect is investigated by detecting the shift of a source star image centroid, the DIA method will allow one to detect the blending effect with a significantly enhanced efficiency compared to that of the current method based on PSF photometry (PSF method).
This is because for a given event the centroid shift measurable by using the DIA method, $`\delta \theta _{\mathrm{c},\mathrm{DIA}}`$, is always larger than the centroid shift measurable by using the PSF method, $`\delta \theta _{\mathrm{c},\mathrm{PSF}}`$. We find that the ratio $`\delta \theta _{\mathrm{c},\mathrm{DIA}}/\delta \theta _{\mathrm{c},\mathrm{PSF}}`$ rapidly increases with increasing fraction of blended light. In addition, for events affected by the same fraction of blended light, the ratio $`\delta \theta _{\mathrm{c},\mathrm{DIA}}/\delta \theta _{\mathrm{c},\mathrm{PSF}}`$ is larger for events with lower amplification. Therefore, centroid shift measurements by using the DIA method will be an efficient way to detect the blending effect especially of highly blended events, for which the uncertainties in the determined Einstein time scale are large, as well as of low amplification events, for which the current method is highly inefficient. ## 2 Degeneracy Problem Even with the blending-free flux variations of a gravitational microlensing event measured by using the DIA method, it will be difficult to know whether the event is affected by the blending effect or not. This is because the best-fit light curve obtained under the wrong assumption that the event is not affected by the blending effect matches well with the observed light curve. The relations between the best-fit lensing parameters of a microlensing event resulting from the wrong determination of its source star baseline flux and their corresponding true values are provided by the analytic equations derived by Han (1999). If a blended event is misunderstood as an unblended event, the baseline flux of the source star is overestimated to be $`F_0+B`$, causing mis-normalization of the amplification curve.<sup>2</sup><sup>2</sup>2 The term ‘amplification curve’ represents the changes in the amplification of the source star flux as a function of time. Then the best-fit impact parameter $`\beta `$ determined from the mis-normalized amplification curve differs from the true value $`\beta _0`$ by $$\beta =\left[2(1-A_\mathrm{p}^{-2})^{-1/2}-2\right]^{1/2};A_\mathrm{p}=\frac{A_{\mathrm{p},0}+\eta }{1+\eta },$$ (5) where $`A_{\mathrm{p},0}=(\beta _0^2+2)/(\beta _0\sqrt{\beta _0^2+4})`$ and $`A_\mathrm{p}`$ represent the peak amplifications of the true and the mis-normalized amplification curves, and $`\eta =\mathrm{\Delta }F_0/F_0=B/F_0`$ is the fractional deviation of the mis-determined baseline flux. Mis-normalization of the amplification curve makes the best-fit Einstein time scale also differ from the true value by $$t_\mathrm{E}=t_{\mathrm{E},0}\left(\frac{\beta _{\mathrm{th}}^2-\beta _0^2}{\beta _{\mathrm{th},0}^2-\beta ^2}\right)^{1/2}.$$ (6) Here $`\beta _{\mathrm{th},0}=1.0`$ and $`A_{\mathrm{th},0}=3/\sqrt{5}`$ represent the threshold impact parameter and the corresponding threshold amplification for event detection, respectively. However, due to the mis-normalization of the amplification curve, the actually applied threshold amplification and the corresponding threshold impact parameter differ from $`A_{\mathrm{th},0}`$ and $`\beta _{\mathrm{th},0}`$ by $$A_{\mathrm{th}}=A_{\mathrm{th},0}(1+\eta )-\eta $$ (7) and $$\beta _{\mathrm{th}}=\left[2(1-A_{\mathrm{th}}^{-2})^{-1/2}-2\right]^{1/2}.$$ (8) Figure 1 shows the degeneracy problem in the light curve measured by using the DIA method.
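As a quick numerical check of the relations (5)–(8), the sketch below (ours, for illustration only) evaluates the best-fit parameters of a blended event fitted as if it were unblended; applied to the example event of Figure 1, it returns $`\beta \approx 0.756`$ and $`t_\mathrm{E}\approx 0.742t_{\mathrm{E},0}`$, the values quoted below.

```python
import numpy as np

def u_of_A(A):
    """Invert A = (u**2+2)/(u*sqrt(u**2+4)) for the impact parameter u."""
    return np.sqrt(2.0 / np.sqrt(1.0 - A**-2) - 2.0)

def misnormalized_fit(beta0, tE0, eta):
    """Best-fit (beta, tE) when a blended event (eta = B/F0) is fit as unblended,
    following Eqs. (5)-(8) with beta_th,0 = 1."""
    A_p0 = (beta0**2 + 2.0) / (beta0 * np.sqrt(beta0**2 + 4.0))
    A_p = (A_p0 + eta) / (1.0 + eta)              # Eq. (5)
    beta = u_of_A(A_p)
    A_th0 = 3.0 / np.sqrt(5.0)                    # threshold amplification
    A_th = A_th0 * (1.0 + eta) - eta              # Eq. (7)
    beta_th = u_of_A(A_th)                        # Eq. (8)
    tE = tE0 * np.sqrt((beta_th**2 - beta0**2) / (1.0 - beta**2))   # Eq. (6)
    return beta, tE

# Example event of Fig. 1: beta0 = 0.5, tE0 = 1.0, F0 = B = 0.5 (eta = 1)
beta, tE = misnormalized_fit(0.5, 1.0, 1.0)   # -> beta ~ 0.756, tE ~ 0.742
```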
In the figure, the solid curve presents the light variation curve when an example event with lensing parameters $`\beta _0=0.5`$ and $`t_{\mathrm{E},0}=1.0`$ is observed by using the DIA method. The event has a baseline flux of $`F_0=0.5`$ and is affected by a blended flux of $`B=0.5`$. The dotted curve represents the best-fit light curve obtained by assuming that the event is not affected by the blending effect. The best-fit lensing parameters of the mis-normalized light curve determined by using the relations in equations (5)–(8) are $`\beta =0.756`$ and $`t_\mathrm{E}=0.742`$, respectively. One finds that the two light curves match very well, implying that it will not be easy to detect the blending effect only from the light variation curve obtained by using the DIA method. ## 3 Centroid Shifts by Using the PSF and DIA Methods In the previous section, we showed that detection of the blending effect for general microlensing events will be difficult even with the blending-free light variation curve obtained by using the DIA method. However, we show in this section that if the blending effect of a microlensing event is investigated by detecting the centroid shift of a source star image, the DIA method will allow one to detect the blending effect with a significantly higher efficiency than the current method based on PSF photometry. This is because for a given event the centroid shift measurable by using the DIA method is always larger than the shift measurable by the PSF method. For the comparison of the centroid shifts of an event measurable by the DIA and the PSF methods, let us consider a blended source star image within which multiple stars with individual positions and fluxes $`𝐱_i`$ and $`F_{0,i}`$ are included. Gravitational amplification occurs only for one of these stars.<sup>3</sup><sup>3</sup>3For Galactic microlensing events, the typical lens-source separation is of the order of milli-arcsec, while the average separation between stars is $`𝒪(10^{-1})`$ arcsec. Therefore, simultaneous lensing of multiple source stars hardly ever happens. Since the position of the lensed star usually differs from the centroid of the blended image, the centroid of the source star image is shifted toward the lensed source during the event. If the lensed source star with a baseline flux $`F_{0,j}`$ is located at a position $`𝐱_j`$, the amount of this centroid shift, which is measurable by using the current PSF method, is $$\stackrel{}{\delta \theta }_{\mathrm{c},\mathrm{PSF}}=𝒟\left(\overline{𝐱}-𝐱_j\right);𝒟=\frac{f(A_0-1)}{f(A_0-1)+1},$$ (9) where $`f=F_{0,j}/\mathrm{\Sigma }_iF_{0,i}`$ represents the fractional flux of the lensed source out of the total flux of the blended stars (including $`F_{0,j}`$) within the integrated seeing disk and $`\overline{𝐱}=\mathrm{\Sigma }_i𝐱_iF_{0,i}/\mathrm{\Sigma }_iF_{0,i}`$ is the position of the blended image centroid before the gravitational amplification, i.e. the position of the blended source image centroid on the reference frame (Goldberg 1998). On the other hand, the position of the centroid measured on the subtracted image by using the DIA method is the true position of the lensed source star, i.e. $`𝐱_j`$. Therefore, the centroid shift measurable by using the DIA method is $$\stackrel{}{\delta \theta }_{\mathrm{c},\mathrm{DIA}}=\overline{𝐱}-𝐱_j.$$ (10) Then the ratio between the centroid shifts measurable by the two methods is $`\delta \theta _{\mathrm{c},\mathrm{DIA}}/\delta \theta _{\mathrm{c},\mathrm{PSF}}=𝒟^{-1}`$.
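A minimal numerical illustration of this ratio (ours; the blend fraction and impact parameter below are arbitrary example values) follows directly from Eq. (9):

```python
import numpy as np

def amplification_at_peak(beta0):
    """Peak amplification for impact parameter beta0."""
    return (beta0**2 + 2.0) / (beta0 * np.sqrt(beta0**2 + 4.0))

def centroid_shift_ratio(f, A0):
    """delta_theta_DIA / delta_theta_PSF = 1/D, with D from Eq. (9).

    f  : fraction of the blended baseline flux contributed by the lensed star
    A0 : amplification of the lensed star
    """
    D = f * (A0 - 1.0) / (f * (A0 - 1.0) + 1.0)
    return 1.0 / D

# Ratio at peak for a heavily blended event (1 - f = 0.9) with beta0 = 0.5
ratio = centroid_shift_ratio(f=0.1, A0=amplification_at_peak(0.5))
```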
Since $`f(A_0-1)>0`$, and thus $`𝒟<1`$, for a given event the centroid shift measurable by the DIA method is always larger than the shift measurable by the PSF method. In Figure 2, we illustrate the centroid shifts measurable by the DIA and the PSF methods for visualization. On the left side, we present the contours (measured at an arbitrary flux level) of two source stars with identical baseline fluxes (the inner two dotted circles with their centers marked by ‘+’) and their integrated images (the outer solid curve centered at ‘x’) before (upper part) and during (lower part) gravitational amplification. Among the two stars within the blended image, the right one is lensed. Between the two left contours of the integrated image, we mark the centroid shifts measurable by the PSF ($`\stackrel{}{\delta \theta }_{\mathrm{c},\mathrm{PSF}}`$) and the DIA ($`\stackrel{}{\delta \theta }_{\mathrm{c},\mathrm{DIA}}`$) methods. To better show the centroid shifts, the region enclosed by a dot-dashed line is expanded and presented on the right side. One finds that $`\delta \theta _{\mathrm{c},\mathrm{DIA}}>\delta \theta _{\mathrm{c},\mathrm{PSF}}`$. Then, how much larger is the centroid shift measurable by the DIA method than the shift measurable by the PSF method? The ratio $`\delta \theta _{\mathrm{c},\mathrm{DIA}}/\delta \theta _{\mathrm{c},\mathrm{PSF}}`$ depends not only on the blended light fraction $`1-f`$, but also on the amplification $`A_0`$ of an event. To see these dependencies, we present in Figure 3 the relations between the ratio $`\delta \theta _{\mathrm{c},\mathrm{DIA}}/\delta \theta _{\mathrm{c},\mathrm{PSF}}`$ and the fraction of blended light for events with various impact parameters. From the figure, one finds the following two important trends. First, the ratio $`\delta \theta _{\mathrm{c},\mathrm{DIA}}/\delta \theta _{\mathrm{c},\mathrm{PSF}}`$ increases rapidly with increasing fraction of blended light. This implies that, compared to the PSF method, the DIA method will be able to better detect the blending effect of highly blended events, for which the uncertainties in the determined Einstein time scales are large. Second, if events are affected by the same fraction of blended light, the ratio $`\delta \theta _{\mathrm{c},\mathrm{DIA}}/\delta \theta _{\mathrm{c},\mathrm{PSF}}`$ is larger for events with lower amplification, implying that the blending effect of low amplification events can be better detected by using the DIA method. For events with low amplifications, the expected centroid shifts measurable by the PSF method are very small, making it difficult to detect the blending effect by this method. Therefore, by increasing the detection efficiency for centroid shifts of low amplification events, the DIA method will allow one to detect the blending effect for a significant fraction of Galactic microlensing events. ## 4 Summary We analyze the shifts of the source star image centroid of gravitational microlensing events caused by the blending effect. The findings from the comparison of the centroid shifts measurable by the current method based on PSF photometry and by the newly developed DIA method are summarized as follows. 1. For a given event the centroid shift measurable by using the DIA method is always larger than the shift measurable by using the PSF method, allowing one to better detect the blending effect of microlensing events. 2.
The ratio between the centroid shifts measurable by using the DIA and the PSF methods, $`\delta \theta _{\mathrm{c},\mathrm{DIA}}/\delta \theta _{\mathrm{c},\mathrm{PSF}}`$, rapidly increases as the fraction of the blended light increases. Therefore, detection of centroid shifts by using the DIA method is an efficient way to detect the blending effect, especially for highly blended events, which cause large uncertainties in the determined Einstein time scales. 3. If the blended light fraction is the same, the ratio $`\delta \theta _{\mathrm{c},\mathrm{DIA}}/\delta \theta _{\mathrm{c},\mathrm{PSF}}`$ is larger for events with a lower amplification. Therefore, the DIA method enables one to better detect the source star image centroid shifts of low amplification events, for which detection of the blending effect by using the PSF method is very difficult due to their small expected shifts. This work was supported by the grant (1999-2-113-001-5) from the Korea Science & Engineering Foundation (KOSEF). Figure 1: The degeneracy problem in the light variation curve of a gravitational microlensing event observed by the DIA method. The solid curve represents the light variation curve which is expected when an example event with $`\beta _0=0.5`$, $`t_{\mathrm{E},0}=1.0`$, and $`F_0=0.5`$ is observed by using the DIA method. The dot-dashed curve represents the best-fit curve obtained under the assumption that the flux of the source star is not affected by the blending effect, despite the fact that the source star flux in the reference image is influenced by blended light of amount $`B=0.5`$. Figure 2: Illustration of the source star image centroid shifts measurable by using the PSF ($`\delta \theta _{\mathrm{c},\mathrm{PSF}}`$) and the DIA ($`\delta \theta _{\mathrm{c},\mathrm{DIA}}`$) methods. On the left side, the contours (measured at an arbitrary flux level) of two source stars with identical baseline fluxes (the inner two dotted circles with their centers marked by ‘+’) and their integrated images (the outer solid curve centered at ‘x’) before (upper part) and during (lower part) gravitational amplification are presented. Among the two stars within the blended image, the right one is lensed. Between the two left contours of the integrated image, the centroid shifts measurable by the PSF ($`\stackrel{}{\delta \theta }_{\mathrm{c},\mathrm{PSF}}`$) and the DIA ($`\stackrel{}{\delta \theta }_{\mathrm{c},\mathrm{DIA}}`$) methods are marked. To better show the centroid shifts, the region enclosed by a dot-dashed line is expanded and presented on the right side. Figure 3: The relations between the centroid shift ratio $`\delta \theta _{\mathrm{c},\mathrm{DIA}}/\delta \theta _{\mathrm{c},\mathrm{PSF}}`$ and the fraction of blended light $`1-f`$ for events with various impact parameters. To better show the relations with $`(1-f)0.7`$, the region is enlarged and presented in a separate box.
no-problem/0001/astro-ph0001271.html
ar5iv
text
# Probing the Cosmic Dark Age in X–rays ## 1. Introduction The cosmic dark age ended when the first gas clouds condensed out of the primordial fluctuations at redshifts $`z=1020`$ (Peacock 1992; Rees 1996). These condensations are likely the sites where the first clusters of stars and the first quasar black holes appeared, giving birth to the first “mini–galaxies” or “mini–quasars” in the Universe. Despite the lack of observational data, this epoch has become a subject of intense theoretical study in the past few years. The recent interest can be attributed to forthcoming instruments: NGST could directly image sub–galactic objects at $`z>\mathrm{\hspace{0.33em}10}`$ in the infrared, while microwave satellites such as MAP or Planck could measure signatures from the reionization of the intergalactic medium (IGM). Currently, bright quasars are detected out to $`z5`$ (Fan et al. 1999). Although the abundance of optically and radio bright quasars declines at $`z>\mathrm{\hspace{0.33em}2.5}`$ (Schmidt et al. 1995; Shaver et al. 1996), a recent determination of the X–ray luminosity function (LF) of quasars from ROSAT data (Miyaji et al. 1998a) has not confirmed this decline. In this contribution, we point out that future X–ray observations might provide yet another probe of the first quasars and the end of the dark age at $`z10`$, and that X–ray data might be uniquely useful in distinguishing quasars from stellar systems. ## 2. The Appearance of the First Quasars and Stars In popular Cold dark matter (CDM) cosmologies,the first baryonic objects appear near the Jeans mass ($`10^6\mathrm{M}_{}`$) at redshifts as high as $`z30`$ (Haiman & Loeb 1999b, and references therein). At any redshift, the mass function of collapsed dark halos is given to within a factor of two by the Press–Schechter formalism. Following collapse, the gas in the first baryonic condensations is virialized by a strong shock (Bertschinger 1985). Provided it is able to cool on a timescale shorter than the Hubble time, the shock–heated gas continues to contract. Depending on the details of the cooling and angular momentum transport, the gas then either fragments into stars, or forms a central black hole exhibiting quasar activity. Although the actual fragmentation process is likely to be rather complex, the average fraction $`f_{\mathrm{star}}`$ of the collapsed gas converted into stars can be calibrated empirically so as to reproduce the average metallicity observed in the Universe at $`z3`$. The observed ratio, inferred from CIV absorption lines in Ly$`\alpha `$ forest clouds, is between $`10^3`$ and $`10^2`$ of the solar value (Songaila 1997 and references therein). If the carbon produced in the early mini–galaxies is uniformly mixed with the rest of the baryons in the Universe, this implies $`f_{\mathrm{star}}`$2–20% for a Scalo stellar mass function. An even smaller fraction of the cooling gas might condense at the center of the potential well of each cloud and form a massive black hole, exhibiting quasar activity. In the simplest scenario, the peak luminosity of each black hole is proportional to its mass, and the light–curve, expressed in Eddington units, is a universal function of time. Indeed, for a sufficiently high fueling rate, quasars are likely to shine at their maximum possible luminosity, which is some constant fraction of the Eddington limit, for a time which is dictated by their final mass and radiative efficiency. 
Here we assume that the black hole mass fraction $`r\equiv M_{\mathrm{bh}}/M_{\mathrm{halo}}`$ obeys a log-Gaussian probability distribution, $`p(r)=\mathrm{exp}[-(\mathrm{log}r-\mathrm{log}r_0)^2/2\sigma ^2]`$, with $`\mathrm{log}r_0=-3.5`$ and $`\sigma =0.5`$ (Haiman & Loeb 1999b). These values roughly reflect the distribution of black hole to bulge mass ratios found in a sample of 36 local galaxies (Magorrian et al. 1998) for a baryonic mass fraction of $`(\mathrm{\Omega }_\mathrm{b}/\mathrm{\Omega }_0)\approx 0.1`$. We further postulate that each black hole emits a time–dependent bolometric luminosity in proportion to its mass, $`L_\mathrm{q}M_{\mathrm{bh}}f_\mathrm{q}=M_{\mathrm{bh}}L_{\mathrm{Edd}}\mathrm{exp}(-t/t_0)`$, where $`L_{\mathrm{Edd}}=1.5\times 10^{38}(M_{\mathrm{bh}}/M_{\odot })\mathrm{erg}\mathrm{s}^{-1}`$ is the Eddington luminosity, $`t`$ is the time elapsed since the formation of the black hole, and $`t_0=10^6`$ yr is the characteristic quasar lifetime. Finally, we assume that the shape of the emitted spectrum follows the mean spectrum of known quasars (Elvis et al. 1994) up to a photon energy of 10 keV. We extrapolate the spectrum up to $`50`$ keV, assuming a spectral slope of $`\alpha `$=0 (or a photon index of -1). For reference, Figure 1 shows the adopted spectrum of quasars, assuming a black hole mass $`M_{\mathrm{bh}}=10^8M_{\odot }`$, placed at two different redshifts, $`z_\mathrm{s}=11`$ and $`z_\mathrm{s}=6`$, and processed through the IGM; we assumed that reionization occurred at $`z_\mathrm{r}=10`$ and that at higher redshifts the IGM was homogeneous and fully neutral. At lower redshifts, $`0<z<z_\mathrm{r}`$, we included the hydrogen opacity of the Ly$`\alpha `$ forest given by Madau (1995), extrapolating his fitting formulae for the evolution of the number density of absorbers beyond $`z=5`$ when necessary. As Figure 1 shows, the minimum black hole mass detectable by the $`2\times 10^{-16}\mathrm{erg}\mathrm{s}^{-1}\mathrm{cm}^{-2}`$ flux limit of CXO (see below) is $`M_{\mathrm{bh}}\approx 10^8M_{\odot }`$ at $`z=10`$ and $`M_{\mathrm{bh}}\approx 2\times 10^7M_{\odot }`$ at $`z=5`$. In our model, the corresponding halo masses are $`M_{\mathrm{halo}}\approx 3\times 10^{11}M_{\odot }`$, and $`M_{\mathrm{halo}}\approx 6\times 10^{10}M_{\odot }`$, respectively. ## 3. Infrared: Expected Counts with NGST The Next Generation Space Telescope<sup>1</sup><sup>1</sup>1see http://www.ngst.nasa.gov (NGST) will be able to detect the early population of mini–galaxies and mini–quasars. NGST is scheduled for launch in 2008, and is expected to reach an imaging sensitivity of $`1`$ nJy (S/N=10 at spectral resolution $`\lambda /\mathrm{\Delta }\lambda =3`$) for extended sources after several hours of integration in the wavelength range of 1–3.5$`\mu `$m. Figure 2 shows the predicted number counts in the mini–galaxy and mini–quasar models described above, in a $`\mathrm{\Lambda }`$CDM cosmology with ($`\mathrm{\Omega }_0,\mathrm{\Omega }_\mathrm{\Lambda },\mathrm{\Omega }_\mathrm{b},h,\sigma _{8h^{-1}},n`$)=(0.35, 0.65, 0.04, 0.65, 0.87, 0.96), normalized to a $`5^{\prime }\times 5^{\prime }`$ field of view. This figure shows separately the number per logarithmic flux interval of all objects with redshifts $`z>5`$ (thin lines), and $`z>10`$ (thick lines). As the figure shows, NGST will be able to probe about $`100`$ quasars at $`z>10`$, and $`200`$ quasars at $`z>5`$ per field of view.
The bright–end tail of the number counts approximately follows a power law, with $`dN/dF_\nu \propto F_\nu ^{-2.5}`$. The dashed lines show the corresponding number counts of mini–galaxies, assuming that each halo undergoes a starburst that converts a fraction of 2% (long–dashed) or 20% (short–dashed) of the gas into stars. These lines indicate that NGST would detect 40–300 mini–galaxies at $`z>10`$ per field of view, and 600–$`10^4`$ mini–galaxies at $`z>5`$. Unlike quasars, galaxies could in principle be resolved if they extend over a scale comparable to the virial radius of their dark matter halos (Haiman & Loeb 1997; Barkana and Loeb 1999). The supernovae and $`\gamma `$-ray bursts in these galaxies might outshine their hosts and may also be directly observable (Miralda-Escudé & Rees 1997). Finally, we note that recent data in the $`J`$ and $`H`$ infrared bands from deep NICMOS observations of the HDF (Thompson et al. 1999) could already be useful to constrain mini–quasar and mini–galaxy models. ## 4. Optical: Constraints from the Hubble Deep Field Although the infrared wavelengths are best suited to detect the redshifted UV–emission from objects at $`z\sim 10`$, present data in the optical already yield a constraint on quasar models of the type described above. The properties of faint extended sources found in the HDF (Madau et al. 1996) agree with detailed semi–analytic models of galaxy formation (Baugh et al. 1998). On the other hand, the HDF has revealed only a handful of faint unresolved sources, but none with the colors expected for high redshift quasars (Conti et al. 1999). The simplest mini–quasar model described above predicts the existence of $`\sim 10`$ B–band “dropouts” in the HDF, inconsistent with the lack of detection of such dropouts up to the $`50\%`$ completeness limit at $`V\approx 29`$ in the HDF. To reconcile the models with the data, a mechanism is needed for suppressing the formation of quasars in halos with circular velocities $`v_{\mathrm{circ}}<\mathrm{\hspace{0.33em}50}`$–$`75\mathrm{km}\mathrm{s}^{-1}`$ (see Figure 3 for the counts). This suppression naturally arises due to the photo-ionization heating of the intergalactic gas by the UV background after reionization (Thoul & Weinberg 1996; Navarro & Steinmetz 1997). Alternative effects could help reduce the quasar number counts, such as a change in the background cosmology, a shift in the “big blue bump” component of the quasar spectrum to higher energies due to the lower black hole masses in mini–quasars, or a nonlinear black hole to halo mass relation; however, these effects are too small to account for the lack of detections in the HDF (Haiman, Madau & Loeb 1999). ## 5. X–Rays: Predictions for CXO Quasars can be best distinguished from star forming galaxies at high redshifts by their X-ray emission. Detections of high-$`z`$ quasars would therefore be highly valuable: detections or upper limits would help in answering the important question of whether the IGM at $`z<\mathrm{\hspace{0.33em}6}`$ was reionized by stars or quasars, by yielding constraints on the ionizing photon rate from high–$`z`$ quasars. The simple quasar–model described above was constructed to accurately reproduce the evolution of the optical luminosity function in the B–band (Pei 1995) at redshifts $`z>\mathrm{\hspace{0.33em}2.2}`$ (Haiman & Loeb 1998). However, it yields good agreement with the data on the X–ray LF, as demonstrated in Haiman & Loeb (1999c), and shown here in Figure 4.
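As a rough consistency check of the flux scales involved (ours, not taken from the paper), the sketch below converts the peak Eddington luminosity of a black hole of mass $`M_{\mathrm{bh}}`$ into an observed flux at redshift $`z`$ in the adopted $`\mathrm{\Lambda }`$CDM cosmology and inverts the CXO point-source limit of $`2\times 10^{-16}\mathrm{erg}\mathrm{s}^{-1}\mathrm{cm}^{-2}`$. The band fraction x_band is a placeholder assumption; the paper instead uses the full Elvis et al. template spectrum, redshifted and attenuated by the IGM.

```python
import numpy as np
from scipy.integrate import quad

# Flat Lambda-CDM parameters quoted in Sec. 3: Omega_0 = 0.35, Omega_Lambda = 0.65, h = 0.65
Om, OL, h = 0.35, 0.65, 0.65
c_kms, H0 = 2.998e5, 100.0 * h               # km/s and km/s/Mpc
Mpc_cm = 3.086e24

def lum_dist_cm(z):
    """Luminosity distance in a flat Lambda-CDM cosmology, in cm."""
    integrand = lambda zp: 1.0 / np.sqrt(Om * (1.0 + zp)**3 + OL)
    d_c = (c_kms / H0) * quad(integrand, 0.0, z)[0]   # comoving distance, Mpc
    return (1.0 + z) * d_c * Mpc_cm

def min_detectable_mbh(z, flux_limit=2e-16, x_band=0.03):
    """Black-hole mass whose peak (Eddington) luminosity just reaches the flux limit.

    x_band is the assumed fraction of the bolometric output falling into the observed
    band (a rough placeholder, not a number from the paper)."""
    L_needed = 4.0 * np.pi * lum_dist_cm(z)**2 * flux_limit / x_band   # bolometric, erg/s
    return L_needed / 1.5e38                                           # in solar masses

m_z10 = min_detectable_mbh(10.0)
```

For a band fraction of a few per cent this gives a minimum detectable mass of order $`10^8M_{\odot }`$ at $`z=10`$, roughly consistent with the estimate quoted in § 2.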
We regard this model as a minimal toy model which successfully reproduces the existing data, and use a straightforward extrapolation of this model to predict the X–ray number counts. In Figure 5, we show the predicted counts in the 0.4–6keV energy band of the CCD Imaging Spectrometer (ACIS) of CXO. Note that these curves are insensitive to our extrapolation of the template spectrum beyond 10 keV. The figure is normalized to the $`17^{}\times 17^{}`$ field of view of the imaging chips. The solid curves show that of order a hundred quasars with $`z>5`$ are expected per field at the CXO sensitivity of $`2\times 10^{16}\mathrm{erg}\mathrm{s}^1\mathrm{cm}^2`$ for a 5$`\sigma `$ detection of a point source. Note that CXO’s arcsecond resolution will ease the separation of these point sources from background noise. The abundance of quasars at higher redshifts declines rapidly; however, a few objects per field are still detectable at $`z8`$. The dashed lines show the results for a minimum circular velocity of the host halos of $`v_{\mathrm{circ}}100\mathrm{km}\mathrm{s}^1`$, and imply that the model predictions for the CXO satellite are not sensitive to such a change in the host velocity cutoff. This is because the halos shining at the CXO detection threshold are relatively massive, $`M_{\mathrm{halo}}10^{11}\mathrm{M}_{}`$, and possess a circular velocity above the cutoff. In principle, the number of predicted sources would be lower if we had assumed a steeper spectral slope. For example, as figure 6 shows below, our model falls short of predicting the hard X–ray background, by about an order of magnitude at 10 keV. The difference could be explained by a change in our template spectrum to include a population of quasars with hard, but highly absorbed spectra (caused by the denser, and more gas rich hosts at high redshift). We note, however, that the agreement between the LF predicted by our model at $`z3.5`$ and that inferred from ROSAT observations would be upset by such a change, and require a modification of the model that would in turn tend to counter-balance the decrease in the predicted counts. ## 6. The X–ray Background Existing estimates of the X–ray background (XRB) provide another useful check on our quasar model. Figure 6 shows the predicted spectrum of the XRB in our model at $`z=0`$ (solid lines). The unresolved background flux is shown, obtained by summing the emission of all quasars whose individual observed flux at $`z=0`$ is below the ROSAT PSPC detection limit for discrete sources of $`2\times 10^{15}\mathrm{erg}\mathrm{cm}^2\mathrm{s}^1`$ (Hasinger & Zamorani 1997). The short dashed lines show the predicted fluxes assuming a steeper spectral slope beyond 10 keV ($`\alpha =0.5`$, or a photon index of -1.5). The long dashed line shows the 25% unresolved fraction of the soft XRB observed with ROSAT (Miyaji et al. 1998b; Fabian & Barcons 1992). This fraction represents the observational upper limit on the component of the soft XRB that could in principle arise from high-redshift quasars. As the figure shows, our quasar model predicts an unresolved flux just below this limit in the 0.5-3 keV range. The model also predicts that most ($`>\mathrm{\hspace{0.33em}90}\%`$) of this yet unresolved fraction arises from quasars beyond $`z=5`$. The power spectrum of the unresolved background therefore might carry information on quasars at $`z>5`$, and be useful in constraining the models (Haiman & Hui 1999, in preparation). 
The correlations in the background have recently been measured by Soltan et al. (1999, see also this Proceedings). ## 7. Discussion We have demonstrated that state–of–the–art X-ray observations could yield more stringent constraints on quasar models than currently available from the Hubble Deep Field (Haiman, Madau, & Loeb 1999). The X–ray data might provide the first probe of the earliest quasars, complementing subsequent infrared and microwave observations. More specifically, we have found that forthcoming X–ray observations with the CXO satellite might detect of order a hundred quasars per field of view in the redshift interval $`5<z<\mathrm{\hspace{0.33em}10}`$. Our numerical estimates are based on the simplest toy model for quasar formation in a hierarchical CDM cosmology, that satisfies all the current observational constraints on the optical and X-ray luminosity functions of quasars. Although a more detailed analysis is needed in order to assess the modeling uncertainties in our predictions, the importance of related observational programs with CXO is evident already from the present analysis. Other future instruments, such as the HRC or the ACIS-S cameras on CXO, or the EPIC camera on XMM, which has a collective area 3–10 times larger than that of CXO, will also be useful in searching for high–redshift quasars. The relation between the black hole and halo masses may be more complicated than linear. With the introduction of additional free parameters, a non–linear (mass and redshift dependent) relation between the black–hole and halo masses can also lead to acceptable fits (Haehnelt et al. 1998) of the observed quasar LF near $`z3`$. Such fits, when extrapolated to higher redshift, can result in different predictions for the abundance of high–redshift quasars. From the point of view of selecting between these alternative models, even a non–detection by CXO would be invaluable. It is hoped further that either observations of the clustering properties of $`z3`$ quasars in the Sloan Digital Sky Survey, or a measurement of the power spectrum of the soft X–ray background, would break model degeneracies (Haiman & Hui 1999, in preparation). Quasars emit a broad spectrum which extends into the UV and includes strong emission lines, such as Ly$`\alpha `$. For quasars near the CXO detection threshold, the fluxes at $`1\mu `$m are expected to be relatively high, $`0.5`$$`0.8\mu `$Jy. Therefore, infrared spectroscopy of X–ray selected quasars with the Space Infrared Telescope Facility (SIRTF) or NGST can identify the redshifts of the faint X–ray point-like sources detected by the CXO satellite. Such an approach could prove to be a particularly useful approach for unraveling the reionization history of the intergalactic medium at $`z>\mathrm{\hspace{0.33em}5}`$. At present, the best constraints on hierarchical models of the formation and evolution of quasars originate from the Hubble Deep Field. However, HST observations are only sensitive to a limiting magnitude of $`V29`$, and cannot probe the earliest quasars, beyond $`z6`$. The combination of X-ray data from the CXO satellite and infrared spectroscopy from SIRTF and NGST could potentially resolve one of the most important open questions about the thermal history of the Universe, namely whether the intergalactic medium was reionized by stars or by accreting black holes. ## ACKNOWLEDGEMENTS I thank A. Loeb for his advice and guidance throughout many projects, M. Rees and P. Madau for many useful discussions, and N. 
White for the invitation to this stimulating conference. Support for this work was provided by the DOE and NASA through grant NAG 5-7092 at Fermilab, and a Hubble Fellowship, awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy for NASA under contract NAS 5-26555. ## REFERENCES Barkana, R., & Loeb, A. 1999, ApJ, in press, astro-ph/9906398 Baugh, C. M., Cole, S., Frenk, C. S. & Lacey, C. G. 1998, ApJ, 498, 504 Bertschinger, E. 1985, ApJS, 58, 39 Conti, A., Kennefick, J. D., Martini, P., & Osmer, P. S. 1999, AJ, 117, 645 Elvis, M., Wilkes, B. J., McDowell, J. C., Green, R. F., Bechtold, J., Willner, S. P., Oey, M. S., Polomski, E., & Cutri, R. 1994, ApJS, 95, 1 Fabian, A. C. & Barcons, X. 1992, ARA&A, 30, 429 Fan, X. et al. (SDSS collaboration) 1999, AJ, 118, 1 Haehnelt, M. G., Natarajan, P. & Rees, M. J. 1998, MNRAS, 300, 817 Haiman, Z., & Loeb, A. 1997, in Proc. of “Science with the Next Generation Space Telescope”, eds. E. Smith & A. Koratkar, p. 251 —————————– 1998, ApJ, 503, 505 —————————– 1999a, ApJ, 519, 479 —————————– 1999b, in Proc. of “After the Dark Ages: When Galaxies Were Young (the Universe at $`2<z<5`$)”, eds. S. Holt & E. Smith, p. 34 —————————– 1999c, ApJ, 521, 9 Haiman, Z., Madau, P., & Loeb, A. 1999, ApJ, 514 Hasinger, G., & Zamorani, G. 1997, in ”Festschrift for R. Giacconi’s 65th birthday”, World Scientific Publishing Co., H. Gursky, R. Ruffini, L. Stella eds., in press, astro-ph/9712341 Madau, P. 1995, ApJ, 441, 18 Madau, P., Ferguson, H. C., Dickinson, M. E., Giavalisco, M., Steidel, C. C., & Fruchter, A. 1996, MNRAS, 283, 1388 Magorrian, J., et al. 1998, AJ, 115, 2285 Miralda-Escudé, J. 1998, ApJ, 501, 15 Miralda-Escudé, J., & Rees, M. J. 1997, ApJ, 478, L57 Miyaji, T., Hasinger, G., & Schmidt, M. 1998a, Proceedings of “Highlights in X-ray Astronomy”, astro-ph/9809398 Miyaji, T., Ishisaki, Y., Ogasaka, Y., Ueda Y., Freyberg, M. J., Hasinger, G., & Tanaka, Y. 1998b, A&A 334, L13 Navarro, J. F., & Steinmetz, M. 1997, ApJ, 478, 13 Peacock, J. 1992, Nature, 355, 203 Pei, Y. C. 1995, ApJ, 438, 623 Press, W. H., & Schechter, P. L. 1974, ApJ, 181, 425 Rees, M. J. 1996, preprint astro-ph/9608196 Schmidt, M., Schneider, D. P., & Gunn, J. E. 1995, AJ, 110, 68 Shaver, P. A., et al. 1996, Nature, 384, 439 Songaila, A. 1997, ApJL, 490, 1 Soltan, A., et al. 1999, A&A, 349, 354 Thompson, R., et al. 1999, AJ, 117, 17 Thoul, A. A., & Weinberg, D. H. 1996, ApJ, 465, 608
no-problem/0001/hep-ph0001238.html
ar5iv
text
# Moduli constraints on primordial black holes Martin Lemoine<sup>*</sup><sup>*</sup>*email: Martin.Lemoine@obspm.fr DARC, UMR–8629 CNRS, Observatoire de Paris-Meudon, F–92195 Meudon Cédex, France Abstract. The amount of late decaying massive particles (e.g., gravitinos, moduli) produced in the evaporation of primordial black holes (PBHs) of mass $`M_{\mathrm{BH}}<10^9`$g is calculated. Limits imposed by big-bang nucleosynthesis on the abundance of these particles are used to constrain the initial PBH mass fraction $`\beta `$ (ratio of PBH energy density to critical energy density at formation), as: $`\beta <5\times 10^{-19}(x_\varphi /\mathrm{6\hspace{0.17em}10}^{-3})^{-1}(M_{\mathrm{BH}}/10^9\mathrm{g})^{-1/2}(\overline{Y_\varphi }/10^{-14})`$; $`x_\varphi `$ is the fraction of PBH luminosity going into gravitinos or moduli, $`\overline{Y_\varphi }`$ is the upper bound imposed by nucleosynthesis on the number density to entropy density ratio of gravitinos or moduli. This notably implies that such PBHs should never come to dominate the cosmic energy density. PACS numbers: 98.80.Cq 1. Introduction – The spectrum of locally supersymmetric theories generically contains fields whose interactions are gravitational, and whose mass $`m_\varphi \sim 𝒪(100\mathrm{GeV})`$. The Polonyi and gravitino fields of supergravity theories, or the moduli of string theories, are such examples. This leads to well-known cosmological difficulties: quite notably, such particles (hereafter generically noted $`\varphi `$ and termed moduli) decay on a timescale $`\tau _\varphi \sim M_{\mathrm{Pl}}^2/m_\varphi ^3\sim 10^8\mathrm{s}(m_\varphi /100\mathrm{GeV})^{-3}`$, i.e., after big-bang nucleosynthesis (BBN), and the decay products may drastically alter the light element abundances . The success of BBN predictions provides in turn a stringent upper limit on the number density to entropy density ratio ($`Y_\varphi `$) of these moduli, generically $`Y_\varphi <10^{-14}`$ (see Sec. 3). It is argued in this letter that these same constraints can be translated into stringent constraints on the abundance of primordial black holes (PBHs) with mass $`M_{\mathrm{BH}}<10^9`$g. In effect, moduli are expected to be part of the Hawking radiation of an evaporating black hole as soon as the temperature of the black hole exceeds (roughly speaking) the rest-mass $`m_\varphi `$; and indeed, the Hawking temperature of a PBH reads $`T_{\mathrm{BH}}\sim m_{\mathrm{Pl}}^2/M_{\mathrm{BH}}\simeq 10^4\mathrm{GeV}(M_{\mathrm{BH}}/10^9\mathrm{g})^{-1}`$ . Primordial black holes are liable to form in the early Universe at various epochs, e.g., when a density fluctuation re-enters the horizon with an overdensity of order unity , or when the speed of sound vanishes (as may occur in phase transitions). As a consequence, constraints on the abundance of PBHs can be translated into constraints on the structure of the very early Universe . Until recently, the only existing constraint on PBHs of mass $`M_{\mathrm{BH}}<10^9`$g relied on the assumption that via evaporation, PBHs leave behind stable Planck mass relics . However, recent work from the perspective of string theories seems to indicate that this is not the case , in particular that evaporation proceeds fully.
Nevertheless, Green has pointed out recently that such PBHs would also produce supersymmetric particles, and consequently, cosmological constraints on the lightest supersymmetric particle (LSP) density could be turned into constraints on the initial PBH mass fraction $`\beta `$ (defined as the ratio of PBH energy density to critical energy density at formation). This constraint relies on the assumption that the LSP is stable, i.e. $`R`$parity is a valid symmetry; and, as attractive as $`R`$parity is, it is not of a vital necessity altogether. The constraint related to the production of gravitinos or moduli, to be derived below, is thus complementary to this $`R`$parity constraint, and it also turns out to be more stringent. Hereafter, units are $`\mathrm{}=k_\mathrm{B}=c=1`$, and $`m_{\mathrm{Pl}}M_{\mathrm{Pl}}/(8\pi )^{1/2}2.4\times 10^{18}\mathrm{GeV}`$ is the reduced Planck mass. 2. Moduli production – Although one is generally interested in $`Y_\varphi `$ itself, and not in its momentum dependence, it will prove necessary in a first approach to keep track of $`dY_\varphi /dk`$ (where $`k`$ is the momentum) integrated over the black hole lifetime. In effect, during their evaporation, PBHs produce moduli over a whole spectrum of momenta, with high Lorentz factors, and the existing constraints on $`Y_\varphi `$ depend strongly on the (cosmic) time at which moduli decay ($`\tau _\varphi `$ is the decay timescale in the modulus rest frame), hence on whether they are relativistic or not. More quantitatively, the mass and temperature of a PBH evolve with time $`t`$ during evaporation as: $`M(t)=M_{\mathrm{BH}}\left[1(tt_i)/\tau _{\mathrm{BH}}\right]^{1/3}`$ and $`T(t)=T_{\mathrm{BH}}\left[1(tt_i)/\tau _{\mathrm{BH}}\right]^{1/3}`$ . Here, $`t_i`$ denotes the time of formation, $`t_i\tau _{\mathrm{BH}}`$, with $`\tau _{\mathrm{BH}}`$ the PBH lifetime: $`\tau _{\mathrm{BH}}0.14\mathrm{s}(M_{\mathrm{BH}}/10^9\mathrm{g})^3`$ The lifetime of a black hole depends on the number of degrees of freedom $`g_s`$ in each spin $`s`$ in the radiation , i.e. $`\tau _{\mathrm{BH}}=6.2\mathrm{s}f(M_{\mathrm{BH}})^1(M_{\mathrm{BH}}/10^9\mathrm{g})^3`$, with $`f(M_{\mathrm{BH}})0.267g_0+0.147g_{1/2}+0.06g_1+0.02g_{3/2}+0.007g_2`$. Here the particle content of the minimal supersymmetric standard model (MSSM) with unbroken supersymmetry has been used, $`g_0=98`$, $`g_{1/2}=122`$, $`g_1=24`$, $`g_{3/2}=2`$, $`g_2=2`$.. Toward the end of the evaporation process, the temperature increases without apparent bound, although the standard analysis breaks down at $`Tm_{\mathrm{Pl}}`$ (see Ref. for a discussion of the end point of evaporation). Once the black hole temperature $`Tm_\varphi `$, moduli can be considered as massless. Then the number of moduli emitted per PBH, with momentum $`k`$ between $`k`$ and $`k+dk`$, and per unit of time, is, for a Schwarzschild black hole : $`q_\varphi (k,t)=(2\pi )^1\mathrm{\Gamma }_\varphi (M(t),k)/\left[\mathrm{exp}(k/T(t))(1)^{2s}\right]`$. The absorption coefficient $`\mathrm{\Gamma }_\varphi `$ is a non-trivial function of $`M`$, $`k`$ and $`s`$ which has to be calculated numerically , and $`s`$ is the spin of $`\varphi `$. As announced any PBH will thus produce moduli at some point, and, moreover, these moduli will be produced over a whole range in momentum. 
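As an illustrative aside, the lifetime and temperature figures quoted above can be reproduced from the spin-weighted degree-of-freedom count in a few lines. The Python sketch below is only a consistency check: the coefficients 0.267, 0.147, 0.06, 0.02, 0.007 and the MSSM multiplicities are taken from the footnote above, while the unit conversions (1 g in GeV, the value of the reduced Planck mass) are standard and supplied here.

```python
# Consistency check of the PBH lifetime and Hawking temperature quoted in the text.

GEV_PER_GRAM = 5.61e23       # 1 g expressed in GeV (natural units)
M_PL_REDUCED = 2.4e18        # reduced Planck mass [GeV]

def f_of_M(g0=98, g12=122, g1=24, g32=2, g2=2):
    """Spin-weighted count f(M_BH) entering the lifetime (MSSM multiplicities)."""
    return 0.267*g0 + 0.147*g12 + 0.06*g1 + 0.02*g32 + 0.007*g2

def tau_BH_seconds(M_grams):
    """PBH lifetime, tau_BH = 6.2 s * f(M_BH)^-1 * (M_BH / 1e9 g)^3."""
    return 6.2 / f_of_M() * (M_grams / 1e9)**3

def T_BH_GeV(M_grams):
    """Hawking temperature, T_BH ~ m_Pl^2 / M_BH (reduced Planck mass)."""
    return M_PL_REDUCED**2 / (M_grams * GEV_PER_GRAM)

print("f(M_BH)       =", round(f_of_M(), 1))                    # ~45.6
print("tau_BH(1e9 g) =", round(tau_BH_seconds(1e9), 3), "s")    # ~0.14 s
print("T_BH(1e9 g)   =", f"{T_BH_GeV(1e9):.2e}", "GeV")         # ~1e4 GeV
```

Running it returns f(M_BH) ≈ 45.6, τ_BH ≈ 0.14 s and T_BH ≈ 10⁴ GeV for M_BH = 10⁹ g, in agreement with the values used throughout.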
To give an example of the sensitivity of the constraints on $`Y_\varphi `$ on the time of decay: if $`\varphi `$ decays into photons, pair creation on the cosmic background (of temperature $`T_\gamma `$) suppresses cascade photons whose energy $`E>m_e^2/22T_\gamma `$; since $`T_\gamma 1\mathrm{MeV}(t/1\mathrm{s})^{1/2}`$, at early times $`<10^4`$s, the cut-off lies below the threshold of deuterium photo-dissociation ($`2`$MeV), and the constraints on $`Y_\varphi `$ are evaded, while at later times, the cut-off is pushed above this threshold, and photo-dissociation becomes highly effective. Finally, since a modulus carrying momentum $`k`$ at cosmic time $`\tau _\varphi `$ will decay at time $`t\tau _\varphi \mathrm{max}[(k/m_\varphi )^{2/3},1]`$, it is necessary to follow $`dY_\varphi /dk`$ as a function of time. As an aside, this will permit the calculation of $`Y_\varphi `$ produced by PBHs such that $`T_{\mathrm{BH}}<m_\varphi `$. This calculation is carried out below in the following limits. As a first approximation, it is sufficient to assume that all $`\varphi `$ particles are emitted at the same average energy, parametrized as $`\alpha T(t)`$; $`\alpha `$ is a constant which depends on $`s`$, with $`\alpha 2.8`$ for $`s=0`$, $`\alpha 4`$ for $`s=1/2`$, and $`\alpha 78`$ for $`s=3/2`$ The value of $`\alpha `$ for $`s=3/2`$ is based on extrapolation of the results of Ref. for other spins, while the fraction of luminosity emitted in spin $`s=3/2`$ (noted $`x_\varphi `$ in the following) is given in Ref. . It does not seem that a detailed study of Hawking radiation of gravitinos has ever been performed. Here it is assumed that the helicity states $`\pm 1/2`$ and $`\pm 3/2`$ of the gravitino are produced with values of $`\alpha `$ and $`x_\varphi `$ as quoted for generic spin $`s=1/2`$ and $`s=3/2`$ respectively. .This approximation suffices as the energy at peak flux corresponds to the average energy to within $`10`$, and since the injection spectrum cuts-off exponentially for $`k>\alpha T`$, and as a power-law for $`k<\alpha T`$. The initial mass fraction of PBHs is approximated to a delta function centered on $`M_{\mathrm{BH}}`$. Although recent considerations tend to indicate otherwise , this remains a standard and simple approximation; moreover, the extension of the results to a more evolved mass fraction is easy to carry out. Finally, it is also assumed that the Universe is radiation dominated all throughout the evaporation process, which implicitly implies that black holes never dominate the energy density. This latter assumption will be justified in Section 3. Then the distribution $`f_\varphi (k,t)s^1dn_\varphi /dk=dY_\varphi /dk`$, where $`s`$ denotes the radiation entropy density, at times $`\tau _{\mathrm{BH}}<t<\tau _\varphi `$ reads: $$f_\varphi (k,t)=Y_{\mathrm{BH}}_{t_i}^{\tau _{\mathrm{BH}}}q_\varphi (k^{},t^{})\frac{dk^{}}{dk}𝑑t^{}.$$ (1) In this expression, $`q_\varphi (k^{},t^{})`$ is the injection spectrum per black hole as above, $`Y_{\mathrm{BH}}n_{\mathrm{BH}}/s`$, where $`n_{\mathrm{BH}}`$ represents the PBH number density, and $`k^{}ka(t)/a(t^{})`$, where $`a`$ is the scale factor. The factor $`dk^{}/dk`$ results from redshifting of $`k^{}`$ at injection time $`t^{}`$ down to $`k`$ at time $`t`$. Equation (1) can be derived as the solution of the transport equation: $`_tf_\varphi =H_k(kf_\varphi )+Y_{\mathrm{BH}}q(k,t)`$, where $`H`$ is the Hubble scale at time $`t`$, and the first term on the r.h.s accounts for redshift losses. This equation and its solution Eq. 
(1) are valid for $`t\tau _\varphi `$, when the decay of $`\varphi `$ particles can be neglected. It should be recalled that in the range of masses $`m_\varphi `$ and $`M_{\mathrm{BH}}`$ considered, indeed $`\tau _{\mathrm{BH}}\tau _\varphi `$. Equation (1) also neglects the entropy injected in the plasma by PBH evaporation, which remains a good approximation as long as PBHs carry only a small fraction of the total energy density at all times. For mono-energetic injection at $`k^{}=\alpha T(t^{})`$: $$q_\varphi (k^{},t^{})=\frac{x_\varphi }{\alpha T(t^{})}\left|\frac{dM}{dt^{}}\right|\delta [k^{}\alpha T(t^{})].$$ Here $`x_\varphi `$ denotes the fraction of PBH luminosity $`\left|dM/dt^{}\right|`$ carried away by moduli; for the MSSM content, $`x_\varphi 6\times 10^3`$ for $`s=0`$ with one degree of freedom (e.g., a modulus field), $`x_\varphi 6\times 10^3`$ for $`s=1/2`$ with two degrees of freedom (e.g., helicity $`\pm 1/2`$ states of the gravitino), and $`x_\varphi 9\times 10^4`$ for $`s=3/2`$ with 2 degrees of freedom (e.g., helicity $`\pm 3/2`$ states of the gravitino) (see also previous footnote). The $`\delta `$ distribution can be rewritten as a function of $`t`$, using the identity: $`\delta [f(t)]=|df/dt|^1\delta (tt_s)`$, where $`t_s`$ is such that $`f(t_s)=0`$ (here $`t_s`$ is uniquely and implicitly defined in terms of $`k`$, $`k^{}`$). Equation (1) can be integrated in the limits $`kk_0`$ and $`kk_0`$, where $`k_0=\alpha T_{\mathrm{BH}}(t/\tau _{\mathrm{BH}})^{1/2}`$ is the momentum at time $`t`$ of a particle injected at time $`\tau _{\mathrm{BH}}`$ with momentum $`\alpha T_{\mathrm{BH}}`$. In particular, modes with $`kk_0`$ were injected with energy $`\alpha T_{\mathrm{BH}}`$ at time $`t^{}t(k/\alpha T_{\mathrm{BH}})^2\tau _{\mathrm{BH}}`$, while modes with $`kk_0`$ were produced in the final stages at time $`t^{}\tau _{\mathrm{BH}}`$ with momentum $`k^{}\alpha T(t^{})\alpha T_{\mathrm{BH}}`$. One obtains: $`f_\varphi (k,t)`$ $``$ $`{\displaystyle \frac{2}{3}}x_\varphi {\displaystyle \frac{M_{\mathrm{BH}}}{(\alpha T_{\mathrm{BH}})^2}}{\displaystyle \frac{k}{\alpha T_{\mathrm{BH}}}}{\displaystyle \frac{t}{\tau _{\mathrm{BH}}}}Y_{\mathrm{BH}}(kk_0),`$ (2) $`f_\varphi (k,t)`$ $``$ $`x_\varphi {\displaystyle \frac{M_{\mathrm{BH}}}{(\alpha T_{\mathrm{BH}})^2}}\left({\displaystyle \frac{k}{\alpha T_{\mathrm{BH}}}}\right)^3\left({\displaystyle \frac{t}{\tau _{\mathrm{BH}}}}\right)^1Y_{\mathrm{BH}}(kk_0),`$ (3) and both expressions agree to within a factor $`3/2`$ at $`k=k_0`$. If initially $`T_{\mathrm{BH}}<m_\varphi `$, moduli are produced only in the final stages for $`k^{}m_\varphi `$ at injection. Hence the above spectrum should remain valid if a low–momentum cut-off $`k_cm_\varphi (t/\tau _{\mathrm{BH}})^{1/2}>k_0`$ is introduced. The total number of $`\varphi `$ particles produced (hence the constraint on $`\beta `$) is thus suppressed (weakened) by a factor $`(m_\varphi /T_{\mathrm{BH}})^2`$, after integration of $`f_\varphi (k,t)`$ over $`k>k_c`$, if $`T_{\mathrm{BH}}<m_\varphi `$, i.e., if $`M_{\mathrm{BH}}>10^9\mathrm{g}(m_\varphi /10\mathrm{TeV})^1`$. Since this mass range $`M_{\mathrm{BH}}>10^9`$g is moreover strongly constrained by the effects on BBN of quarks directly produced in the evaporation , it will be ignored in the following. 
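A quick numerical check of the two limiting forms (2) and (3) confirms the factor 3/2 agreement quoted above. The short Python sketch below assumes the decaying forms of the exponents (i.e. the $`k\gg k_0`$ branch falling as $`k^{-3}(t/\tau _{\mathrm{BH}})^{-1}`$ and $`k_0\propto (t/\tau _{\mathrm{BH}})^{-1/2}`$), which is what is needed for the two branches to meet; the fiducial numbers are arbitrary and only the ratio at $`k=k_0`$ matters.

```python
# Matching of the low-k and high-k limits of f_phi(k,t) at k = k0.
# All fiducial values are arbitrary; only the ratio of the two branches at k0 is checked.

x_phi   = 6e-3      # fraction of PBH luminosity in moduli (spin-0 value from the text)
alphaT  = 1.0       # alpha * T_BH, arbitrary energy units
M_BH    = 1.0       # PBH mass, arbitrary units
Y_BH    = 1.0       # PBH number-to-entropy ratio, arbitrary normalisation
t_over_tau = 100.0  # some time t >> tau_BH, in units of tau_BH

k0 = alphaT * t_over_tau**(-0.5)   # redshifted momentum of quanta emitted at tau_BH
prefac = x_phi * M_BH / alphaT**2 * Y_BH

def f_low(k):    # Eq. (2), valid for k << k0
    return (2.0/3.0) * prefac * (k/alphaT) * t_over_tau

def f_high(k):   # Eq. (3), valid for k >> k0
    return prefac * (k/alphaT)**(-3) * t_over_tau**(-1)

print("f_low(k0)/f_high(k0) =", f_low(k0) / f_high(k0))   # -> 2/3, i.e. agreement within 3/2
```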
For PBHs such that $`T_{\mathrm{BH}}>m_\varphi `$, it is a very good approximation to consider that emitted moduli carry at time $`t`$ a momentum $`k_0`$, since $`kf_\varphi (k,t)=dY_\varphi /d\mathrm{ln}(k)`$ behaves as $`k^2`$ for $`kk_0`$, and as $`k^2`$ for $`kk_0`$. Moreover at time $`t=\tau _\varphi `$: $$\frac{k_0}{m_\varphi }0.01\alpha \left(\frac{m_\varphi }{1\mathrm{TeV}}\right)^{1/2}\left(\frac{M_{\mathrm{BH}}}{10^9\mathrm{g}}\right)^{1/2},$$ (4) and therefore the $`\varphi `$ particles decay at rest (in the plasma rest frame), at time $`\tau _\varphi M_{\mathrm{Pl}}^2/m_\varphi ^3`$, in the range of masses considered, $`m_\varphi <10`$TeV and $`M_{\mathrm{BH}}<10^9`$g. One then seeks the total number of moduli present at that time, which is given by: $$Y_\varphi \frac{x_\varphi }{2}\frac{M_{\mathrm{BH}}}{\alpha T_{\mathrm{BH}}}Y_{\mathrm{BH}}.$$ (5) This result can be obtained as a solution of the transport equation$`_tY_\varphi =x_\varphi Y_{\mathrm{BH}}\left|dM(t)/dt\right|/\alpha T(t)`$, or by integrating $`f_\varphi (k,t)`$ over $`k`$ in Eqs. (2), (3) above; all three results agree to within a factor $`3/2`$. Equation (5) has a simple interpretation: within a factor $`2`$ it corresponds to the instantaneous evaporation of black holes at time $`\tau _{\mathrm{BH}}`$, with total conversion of their mass $`M_{\mathrm{BH}}`$ in particles of energy $`\alpha T_{\mathrm{BH}}`$, a fraction $`x_\varphi `$ of which is moduli. This result can be rewritten in terms of more conventional parameters. The mass $`M_{\mathrm{BH}}`$ is taken to be a fraction $`\delta `$ of the mass within the horizon at the time of formation $`t_i`$: $`M_{\mathrm{BH}}4\pi \delta m_{\mathrm{Pl}}^2/H_i`$, where $`H_i`$ denotes the Hubble scale at time $`t_i`$, and $`\delta 𝒪(1)`$ is expected . Furthermore, instead of $`Y_{\mathrm{BH}}`$, one generally uses the mass fraction $`\beta n_{\mathrm{BH}}M_{\mathrm{BH}}/\rho _\mathrm{c}`$ defined at the time of PBH formation $`t_i`$, with $`\rho _\mathrm{c}=3H_i^2m_{\mathrm{Pl}}^2`$ the critical energy density at that time. Using $`s=(2\pi ^2/45)g_{}T_\gamma ^3`$, with $`T_\gamma `$ the cosmic background temperature, $`T_\gamma 0.5g_{200}^{1/4}H_i^{1/2}m_{\mathrm{Pl}}^{1/2}`$, and $`g_{200}=g_{}/200`$ ($`g_{}`$ number of degrees of freedom), one obtains: $$\beta 3\times 10^{21}g_{200}^{1/4}\delta ^{1/2}\left(\frac{M_{\mathrm{BH}}}{10^9\mathrm{g}}\right)^{3/2}Y_{\mathrm{BH}},$$ (6) and therefore: $$Y_\varphi 2\times 10^4\delta ^{1/2}g_{200}^{1/4}\left(\frac{x_\varphi }{\mathrm{6\hspace{0.17em}10}^3}\right)\left(\frac{\alpha }{3}\right)^1\left(\frac{M_{\mathrm{BH}}}{10^9\mathrm{g}}\right)^{1/2}\beta ,$$ (7) which constitutes the main result of this section. If the Universe went through a matter dominated era between times $`t_a`$ and $`t_b`$, with $`t_i<t_a<t_b\tau _{\mathrm{BH}}`$, then the r.h.s. of Eq. (7) must be multiplied by the factor $`(H_b/H_a)^{1/2}`$, where $`H_{a,b}`$ is the Hubble scale at time $`t_{a,b}`$, and the constraint on $`\beta `$ Eq. (8) below is weakened consequently. 3. Discussion – As mentioned previously, the most stringent constraints on $`Y_\varphi `$ result from the effect of the decay products of $`\varphi `$ on BBN . These studies assume monoenergetic injection at energy $`m_\varphi `$ at time $`\tau _\varphi `$, and their results can be used safely, since the moduli emitted by PBHs decay when non-relativistic. One usually considers production of hadrons or photons in $`\varphi `$ decay. 
The constraint due to hadron injection is in principle very significant for $`m_\varphi >1`$ TeV, but it is not obvious that $`\varphi `$ can decay hadronically, and moreover it relies on assumptions on the cosmic evolution of helium-3 (see, e.g., Ref. ), which have now been proven uncertain (see, e.g., Ref. and references therein). Therefore, in the following, only constraints on photon injection are used; the bounds presented will thus be slightly conservative. Holtmann et al. obtain in this case: $`Y_\varphi <10^{-15}`$ for $`m_\varphi \simeq 100\,\mathrm{GeV}`$, $`Y_\varphi <10^{-14}`$ for $`m_\varphi \simeq 300\,\mathrm{GeV}`$, $`Y_\varphi <5\times 10^{-13}`$ for $`m_\varphi \simeq 1`$ TeV, and $`Y_\varphi <5\times 10^{-10}`$ for $`m_\varphi \simeq 3`$ TeV. The error on the upper limit is a factor $`\sim 4`$. It results from the uncertainty in the fudge factors that enter the $`\tau _\varphi (m_\varphi )`$ relationship when the constraints of Holtmann et al., given in the plane $`m_\varphi Y_\varphi `$-$`\tau _\varphi `$, are translated into the plane $`Y_\varphi `$-$`m_\varphi `$. Note that these constraints assume that $`\varphi `$ decays into photons with a branching ratio of unity, and should be scaled accordingly. However, these limits should also be strengthened by a factor as high as $`\sim 50`$ to avoid ⁶Li overproduction, if one assumes that ⁶Li has not been destroyed in the stars in which it has been observed. This constraint is ignored in what follows, as it relies on as yet unproven assumptions on stellar evolution; but this only makes the above constraints more conservative. Finally, the observational upper limit on the amount of $`\mu `$-distortion in the cosmic microwave background implies: $`Y_\varphi <10^{-13}(m_\varphi /100\,\mathrm{GeV})^{1/2}`$ for $`20\,\mathrm{GeV}<m_\varphi <500\,\mathrm{GeV}`$. Note that the most stringent constraint on $`\beta `$ results from the production of the lightest of all moduli-like particles in the theory, whose mass would likely be at most a few hundred GeV. Overall it seems that $`Y_\varphi <10^{-14}`$ represents a reasonable generic upper limit from BBN. Using Eq. (7), this can be rewritten as a limit on $`\beta `$: $$\beta <5\times 10^{-19}\,\delta ^{-1/2}g_{200}^{1/4}\left(\frac{x_\varphi }{6\times 10^{-3}}\right)^{-1}\left(\frac{\alpha }{3}\right)\left(\frac{M_{\mathrm{BH}}}{10^9\,\mathrm{g}}\right)^{-1/2}\left(\frac{\overline{Y}_\varphi }{10^{-14}}\right),$$ (8) where $`\overline{Y}_\varphi `$ denotes the upper limit on $`Y_\varphi `$. This result does not depend on whether PBHs are shrouded in a photosphere, as suggested by Heckler, since moduli are not expected to interact with it due to their gravitational interaction cross-section. On the contrary, other astrophysical constraints on $`\beta `$ for $`M_{\mathrm{BH}}>10^9`$ g are in principle sensitive to the presence of a photosphere, as they rely on the direct emission of photons and quarks. If $`R`$-parity holds, the constraint on the LSP mass density $`\mathrm{\Omega }_{\mathrm{LSP}}<1`$ today implies: $`\beta <2\times 10^{-17}\,\delta ^{-1/2}g_{200}^{1/4}(\alpha /3)(x_{\mathrm{LSP}}/0.6)^{-1}(M_{\mathrm{BH}}/10^9\,\mathrm{g})^{-1/2}(m_{\mathrm{LSP}}/100\,\mathrm{GeV})^{-1}`$. This constraint has been adapted from the study of Ref. and Eq. (7) above. The fraction of luminosity carried away by the LSP is $`x_{\mathrm{LSP}}\simeq 0.6`$, since each spartner produced by a PBH will produce at least one LSP in its decay.
This LSP constraint on $`\beta `$ is thus less stringent than the moduli constraint, provided at least one modulus of the theory has mass $`<1`$TeV. These results have several implications. First of all, the approximation made in Sec. 2, namely $`\mathrm{\Omega }_{\mathrm{BH}}1`$ at all times is justified. In effect, $`\mathrm{\Omega }_{\mathrm{BH}}=\beta (t/t_i)^{1/2}`$ at time $`t`$ in a radiation-dominated Universe, since PBHs behave as non-relativistic matter, and therefore at time $`\tau _{\mathrm{BH}}`$, $`\mathrm{\Omega }_{\mathrm{BH}}2.3\times 10^{14}\delta ^{1/2}(M_{\mathrm{BH}}/10^9\mathrm{g})\beta `$. Consequently, if $`\beta `$ verifies the above upper limits, indeed $`\mathrm{\Omega }_{\mathrm{BH}}1`$ at all times. However, since Eq. (7) is not valid if $`\mathrm{\Omega }_{\mathrm{BH}}=1`$ at some time $`t_{}<\tau _{\mathrm{BH}}`$, one needs to consider this case as well. An order of magnitude of $`Y_\varphi `$ in this case can be obtained as follows. If $`t_{}\tau _{\mathrm{BH}}`$, the radiation present subsequent to PBH evaporation has been produced in the evaporation process itself. Assuming total conversion of the PBH mass $`M_{\mathrm{BH}}`$ at time $`\tau _{\mathrm{BH}}`$ into particles (moduli and radiation) of energy $`\alpha T_{\mathrm{BH}}`$, one finds $`n_\varphi x_\varphi \rho _{\mathrm{BH}}/\alpha T_{\mathrm{BH}}`$, and $`s(4/3)\rho _{\mathrm{BH}}/T_{\mathrm{RH}}`$, where $`\rho _{\mathrm{BH}}`$ is the PBH energy density at evaporation, $`T_{\mathrm{RH}}2\mathrm{MeV}g_{10}^{1/4}(M_{\mathrm{BH}}/10^9\mathrm{g})^{3/2}`$ is the reheating temperature, and $`g_{10}=g_{}/10`$. Therefore $`Y_\varphi 3\times 10^{10}g_{10}^{1/4}(x_\varphi /6\times 10^3)(\alpha /3)^1(M_{\mathrm{BH}}/10^9\mathrm{g})^{5/2}`$, well above the previous limits. Note that one would naively expect $`Y_\varphi x_\varphi `$ since $`x_\varphi `$ is the fraction of PBH luminosity carried away by $`\varphi `$ particles. However the photons emitted by PBHs carry high energy $`\alpha T_{\mathrm{BH}}`$ and small number density $`\rho _{\mathrm{BH}}/\alpha T_{\mathrm{BH}}`$, and their thermalization leads to many soft photons carrying high entropy. Nevertheless, this discussion shows that PBHs of any mass should never come to dominate the energy density; if this were to happen, PBHs with $`M_{\mathrm{BH}}<10^9`$g would produce too many moduli, while the evaporation of PBHs with $`M_{\mathrm{BH}}>10^9`$g would lead to too low a reheating temperature. In particular, scenarios of reheating of the post-inflationary Universe by black hole evaporation, as put forward, e.g., in Ref. , are forbidden. This result was also envisaged in Ref. . Finally, the present constraints on $`\beta `$ exclude the possibility of generating the baryon asymmetry of the Universe through PBH evaporation. Indeed Barrow et al. have performed a detailed computation of the baryon number to entropy density ratio $`n_\mathrm{b}/s`$ produced in PBH evaporation, and find: $`n_\mathrm{b}/s7\times 10^4ϵ(x_\mathrm{H}/0.01)g_{200}^{1/4}(M_{\mathrm{BH}}/10^9\mathrm{g})^{1/2}\delta ^{1/2}\beta `$, where $`ϵ`$ is the baryon violation parameter, defined as the net baryon number created in each baryon-violating boson decay, $`x_\mathrm{H}`$ is the fraction of PBH luminosity carried away by such bosons, and other notations are as above. 
Unless all moduli-like particles are heavier than $`3`$ TeV, and $`R`$-parity does not hold, the above constraints on $`\beta `$ imply $`n_\mathrm{b}/s<10^{-12}ϵ`$, which does not suffice since BBN indicates $`n_\mathrm{b}/s\simeq (4-7)\times 10^{-11}`$.
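As a closing numerical cross-check, the chain $`Y_{\mathrm{BH}}\rightarrow Y_\varphi \rightarrow \beta `$ and the $`\mathrm{\Omega }_{\mathrm{BH}}(\tau _{\mathrm{BH}})`$ estimate of the discussion can be re-evaluated directly for the fiducial case $`M_{\mathrm{BH}}=10^9`$ g, $`x_\varphi =6\times 10^{-3}`$, $`\alpha =3`$, $`\delta =1`$, $`g_{}=200`$. The Python sketch below is a rough order-of-magnitude estimate, not a substitute for the full derivation; the unit conversions and the choice $`t_i\simeq 1/(2H_i)`$ are assumptions made here.

```python
import math

# Order-of-magnitude re-evaluation of Eqs. (5)-(8) and of Omega_BH(tau_BH).
GEV_PER_GRAM = 5.61e23
m_pl  = 2.4e18                      # reduced Planck mass [GeV]
M_BH  = 1e9 * GEV_PER_GRAM          # 1e9 g in GeV
x_phi, alpha, delta, g_star = 6e-3, 3.0, 1.0, 200.0

T_BH  = m_pl**2 / M_BH                                   # Hawking temperature
H_i   = 4*math.pi*delta*m_pl**2 / M_BH                   # Hubble scale at formation
T_i   = (90.0/(math.pi**2*g_star))**0.25 * math.sqrt(H_i*m_pl)   # plasma temperature
s_i   = (2*math.pi**2/45.0) * g_star * T_i**3            # entropy density
rho_c = 3*H_i**2*m_pl**2                                 # critical density at formation

beta_over_YBH  = s_i * M_BH / rho_c                      # Eq. (6): beta / Y_BH
Yphi_over_beta = 0.5*x_phi*M_BH/(alpha*T_BH) / beta_over_YBH   # Eqs. (5)+(6), i.e. Eq. (7)

print(f"beta/Y_BH                 ~ {beta_over_YBH:.1e}")        # ~3e21
print(f"Y_phi/beta                ~ {Yphi_over_beta:.1e}")       # ~2e4
print(f"beta bound (Ybar=1e-14)   ~ {1e-14/Yphi_over_beta:.1e}") # ~5e-19

# Omega_BH at evaporation: beta*(tau_BH/t_i)^(1/2), with t_i ~ 1/(2 H_i)
tau_BH_s = 0.14                           # s, from the lifetime estimate
t_i_s    = 0.5/H_i * 6.58e-25             # s (hbar = 6.58e-25 GeV s)
print(f"Omega_BH(tau_BH)/beta     ~ {math.sqrt(tau_BH_s/t_i_s):.1e}")  # ~2e14
```

The printed coefficients (roughly $`3\times 10^{21}`$, $`2\times 10^4`$, $`5\times 10^{-19}`$ and $`2\times 10^{14}`$) reproduce the figures quoted in Eqs. (6)-(8) and in the discussion above.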
# Quantum Evolution of the Bianchi Type I Model ## Abstract The behaviour of the flat anisotropic model of the Universe with a scalar field is explored within the framework of quantum cosmology. The principal moment of the account of an anisotropy is the presence either negative potential barrier or positive repelling wall. In the first case occur the above barrier reflection of the wave function of the Universe, in the second one there is bounce off a potential wall. The further evolution of the Universe represents an exponential inflating with fast losses of an anisotropy and approach to the standard cosmological scenario. 1. Introduction One of the basic problems of the standard cosmological scenario is the presence of an initial singularity. For its elimination are used the various approaches, one of which is a quantum cosmology originated by DeWitt more than thirty years ago. In last one the whole Universe is described with the help of the wave function. Until recently main part of papers was devoted to the consideration of the process of quantum creation of the closed Universe. In this direction by number of authors was developed the scenario of the spontaneous creation of the closed Universe from ”nothing”, where ”nothing” means the state with absence not only matter, but also the spacetime in classical understanding. Thus it is necessary to note that the closed Universe has a zero total energy and charge therefore there is no violation of any conservation laws. At describing of process of creation of the Universe the so-called instanton method is widely used. In the last one the closure of the Universe is a necessary requirement of the quantum creation, otherwise the action is equal to infinity and, hence, relevant probability of the creation is near to zero. However, recently observant data have appeared according to which our Universe probably is open. A number of models of the quantum nucleating of the open Universe were proposed in this connection. One of its is the Hawking-Turok instanton , for which the opportunity of an analytic continuation of the instanton solution not only in the direction of creation of the closed Universe, but also in the direction of the open Universe is shown. This result, being interesting on itself, is not unexpected. For a long time it is known (see e.g. ) that dividing by various ways the four-dimensional de Sitter Universe on space and time we can obtain the various de Sitter Universes: closed, flat or open. The quantum creation of the open Universe can seem not actual in view of the noted above infinite value of action. But here it is possible to remember the theory of nucleation of the open Universe from bubble filled by false vacuum, when the Universe inside of this bubble looks finite from the point of view of an external observer and open, that is infinite, from the point of view of an inside observer. Therefore, for an external observer the process of the instanton nucleation of the open Universe is quite possible and the growth of the bubble size is carried out by transferring of energy from the surrounding de Sitter space. 2. Basic classical equations We shall consider an anisotropic Bianchi type I model. For it the synchronous form of metric can be expressed as (velocity of light is equal 1): $$ds^2=dt^2a_1^2(t)dx^2a_2^2(t)dy^2a_3^2(t)dz^2,$$ (1) where by $`a_1,a_2`$ and $`a_3`$ the scale factors on directions $`x`$, $`y`$, and $`z`$ accordingly are designated. 
This model is an anisotropic generalization of the Friedmann model with Euclidean spatial geometry. Three scale factors $`a_1,a_2`$ and $`a_3`$ are determined via Einstein’s equations. For convenience of realization of the analytical calculations we can write them as follows : $$a_1=r(t)q_1,a_2=r(t)q_2,a_3=r(t)q_3,$$ (2) where $`q_1,q_2,q_3`$ are dimensionless variable subordinated to the following requirements: $$\underset{\alpha =1}{\overset{3}{}}q_\alpha =1,\underset{\alpha =1}{\overset{3}{}}\left(\dot{q}_\alpha /q_\alpha \right)=0,$$ (3) (dot means derivative with respect to time $`t`$); whence follows that $`\underset{\alpha =1}{\overset{3}{}}a_\alpha =r^3`$. For the line element (1) with the account of (2) the components of the Ricci tensor write as: $`R_0^0`$ $`=`$ $`3{\displaystyle \frac{\ddot{r}}{r}}+{\displaystyle \underset{\alpha =1}{\overset{3}{}}}\left({\displaystyle \frac{\dot{q}_\alpha }{q_\alpha }}\right)^2,`$ $`R_\alpha ^\alpha `$ $`=`$ $`{\displaystyle \frac{\ddot{r}}{r}}+2\left({\displaystyle \frac{\stackrel{.}{r}}{r}}\right)^2+3{\displaystyle \frac{\dot{r}}{r}}{\displaystyle \frac{\dot{q}_\alpha }{q_\alpha }}+\left({\displaystyle \frac{\dot{q}_\alpha }{q_\alpha }}\right)^\text{.},`$ (4) $`R`$ $`=`$ $`6\left[{\displaystyle \frac{\ddot{r}}{r}}+\left({\displaystyle \frac{\dot{r}}{r}}\right)^2\right]+{\displaystyle \underset{\alpha =1}{\overset{3}{}}}\left({\displaystyle \frac{\dot{q}_\alpha }{q_\alpha }}\right)^2.`$ Let’s use the last one for obtaining (1-1) and (2-2) components of the Einstein tensor: $$G_1^1=2\frac{\ddot{r}}{r}+\left(\frac{\dot{r}}{r}\right)^23\frac{\dot{r}}{r}\frac{\dot{q}_1}{q_1}\left(\frac{\dot{q}_1}{q_1}\right)^\text{.}+\frac{1}{2}\underset{\alpha =1}{\overset{3}{}}\left(\frac{\dot{q}_\alpha }{q_\alpha }\right)^2,$$ $$G_2^2=2\frac{\ddot{r}}{r}+\left(\frac{\dot{r}}{r}\right)^23\frac{\dot{r}}{r}\frac{\dot{q}_2}{q_2}\left(\frac{\dot{q}_2}{q_2}\right)^\text{.}+\frac{1}{2}\underset{\alpha =1}{\overset{3}{}}\left(\frac{\dot{q}_\alpha }{q_\alpha }\right)^2.$$ Subtracting from $`G_1^1`$ the component $`G_2^2`$ one obtains: $$3\frac{\dot{r}}{r}\left(\frac{\dot{q}_2}{q_2}\frac{\dot{q}_1}{q_1}\right)+\left(\frac{\dot{q}_2}{q_2}\frac{\dot{q}_1}{q_1}\right)^\text{.}=0.$$ Entering in the last equation the notification $`Q_{\alpha \beta }=\left(\dot{q}_2/q_2\dot{q}_1/q_1\right)`$ let’s have: $$3\frac{\dot{r}}{r}+\frac{\dot{Q}_{\alpha \beta }}{Q_{\alpha \beta }}=0,$$ that after integration gives: $$Q_{\alpha \beta }=C_{\alpha \beta }/r^3,$$ where $`C_{\alpha \beta }`$ are integration constants. From here we get: $$\frac{\dot{q}_\alpha }{q_\alpha }=\frac{C_\alpha }{r^3},$$ (5) and according to requirements (3), $`\underset{\alpha =1}{\overset{3}{}}C_\alpha =0`$. Thus integrating the last equation one finds: $$q_\alpha =A_\alpha \mathrm{exp}\left\{C_\alpha \frac{dt}{r^3}\right\},$$ (6) where $`A_\alpha `$ are integration constants and $`\underset{\alpha =1}{\overset{3}{}}A_\alpha =1`$. Now, using the relation (5), from (Quantum Evolution of the Bianchi Type I Model) is got: $`R_0^0`$ $`=`$ $`3{\displaystyle \frac{\ddot{r}}{r}}+{\displaystyle \frac{1}{r^6}}{\displaystyle \underset{\alpha =1}{\overset{3}{}}}C_\alpha ^2,`$ $`R`$ $`=`$ $`6\left[{\displaystyle \frac{\ddot{r}}{r}}+\left({\displaystyle \frac{\dot{r}}{r}}\right)^2\right]+{\displaystyle \frac{1}{r^6}}{\displaystyle \underset{\alpha =1}{\overset{3}{}}}C_\alpha ^2,`$ (7) $`\underset{\alpha =1}{\overset{3}{}}C_\alpha ^2`$ determines an anisotropy of the given model. 3. 
Quantum evolution of the Universe The possibility of the nucleation of an open Universe circumscribed in Introduction gives the basis for the consideration of the quantum creation of a flat Universe. As is known the basic equation of the quantum cosmology is the Wheeler-DeWitt (WDW) equation. For its making we shall consider the theory of a scalar field $`\phi `$ with Lagrangian $$L=R/2+\left(_\mu \phi \right)^2/2V(\phi )$$ (8) or using the expression for scalar curvature $`R`$ from (Quantum Evolution of the Bianchi Type I Model) one obtains $$L=3r\dot{r}^2+\frac{r^3}{2}\underset{\alpha =1}{\overset{3}{}}\left(\frac{\dot{q}_\alpha }{q_\alpha }\right)^2+r^3\left[\frac{1}{2}\dot{\phi }^2V(\phi )\right]$$ (9) (accurate to complete derivative with respect to $`t`$). The relevant conjugate momentums are equal $`p_\phi `$ $`=`$ $`{\displaystyle \frac{L}{\dot{\phi }}}=r^3\dot{\phi },p_r={\displaystyle \frac{L}{\dot{r}}}=6r\dot{r},`$ $`p_{q_\alpha }`$ $`=`$ $`{\displaystyle \frac{L}{\dot{q}_\alpha }}=r^3{\displaystyle \frac{\dot{q}_\alpha }{q_\alpha ^2}}`$ (10) and the Hamiltonian of the system is $$H=p_\phi \dot{\phi }+p_r\dot{r}+\underset{\alpha =1}{\overset{3}{}}p_{q_\alpha }\dot{q}_\alpha L.$$ (11) Let’s note, that for deriving the exact equations it was necessary to use the expression (Quantum Evolution of the Bianchi Type I Model) for scalar curvature instead of (Quantum Evolution of the Bianchi Type I Model). Using the last one is impossible whereas in this one the integration for elimination of $`\dot{q}_\alpha /q_\alpha `$ is already yielded. It is intolerable as actually there is the deletion of variables $`q_\alpha `$ and thus truncation of the Hamiltonian. Let’s note also, that if to use the last of relations (Quantum Evolution of the Bianchi Type I Model) and expression $`\dot{q}_\alpha /q_\alpha =C_\alpha /r^3`$ from (5) exchanging simultaneously $`p_{q_\alpha }\widehat{p}_{q_\alpha }`$ (here $`\widehat{p}_{q_\alpha }=i/q_\alpha `$), it is easy to obtain that $`q_\alpha \widehat{p}_{q_\alpha }\mathrm{\Psi }=C_\alpha \mathrm{\Psi }`$. The last means that in our case $`\mathrm{\Psi }`$ is the eigenfunction of the operator $`\widehat{p}_{q_\alpha }`$. It allows with account (5) to write the Hamiltonian (11) as: $$H=\frac{1}{2}\frac{p_\phi ^2}{r^3}\frac{p_r^2}{12r}+r^3V(\phi )+\left(\underset{\alpha =1}{\overset{3}{}}C_\alpha ^2\right)/2r^3.$$ (12) Quantizing (12) by replacement of momentums $`p_\phi `$ and $`p_r`$ on $`i/\phi `$ and $`i/r`$ accordingly and also using the rescaling $`\phi \sqrt{6}\mathrm{\Phi }`$ we obtain the Klein-Gordon equation $$\left[\frac{1}{r^p}\frac{}{r}\left(r^p\frac{}{r}\right)\frac{1}{r^2}\frac{^2}{\mathrm{\Phi }^2}U_{ef}\right]\psi (r,\mathrm{\Phi })=0,$$ $$U_{ef}=6\left(\underset{\alpha =1}{\overset{3}{}}C_\alpha ^2\right)/r^212r^4V(\mathrm{\Phi }).$$ (13) It is the required WDW equation in minisuperspace of the variables $`r`$ and $`\mathrm{\Phi }`$. In this equation the parameter $`p`$ represents the ambiguity in the ordering of noncommuting operators $`r`$ and $`p_r`$. Let’s emphasize, that the wave function of the Universe $`\mathrm{\Psi }`$ does not depend on time. This circumstance is valid for the closed Universe by virtue of equality to zero of its total energy remains valid and for the flat Universe on the basis of given in Introduction reasonings. 
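Before turning to the quantum problem, it is worth noting that the classical counterpart of the constraint (12), taken here with $`p_\phi =0`$ and a constant potential $`V=\epsilon _v`$, already displays the rapid isotropization invoked in the Introduction: the anisotropy term $`\underset{\alpha }{}C_\alpha ^2/2r^6`$ redshifts away once the vacuum term dominates. The Python sketch below integrates this classical constraint in the standard Bianchi type I form $`3(\dot{r}/r)^2=\epsilon _v+\underset{\alpha }{}C_\alpha ^2/2r^6`$; the units (Planck mass set to one), the value of the anisotropy constant and the vacuum energy are arbitrary illustrative choices, not values taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classical limit of the constraint (12) with p_Phi = 0 and constant V = eps_v:
#   3 (dr/dt / r)^2 = eps_v + K / (2 r^6),   K = sum_alpha C_alpha^2
# (standard Bianchi type I form; Planck units, arbitrary parameter values).

eps_v = 1.0    # vacuum energy density
K     = 10.0   # anisotropy constant, sum of C_alpha^2

def rhs(t, y):
    r = y[0]
    H = np.sqrt((eps_v + K/(2.0*r**6)) / 3.0)   # expansion rate from the constraint
    return [H*r]

sol = solve_ivp(rhs, (0.0, 5.0), [1.0], dense_output=True, rtol=1e-8)

for t in (0.0, 1.0, 2.0, 5.0):
    r = sol.sol(t)[0]
    shear_fraction = (K/(2*r**6)) / (eps_v + K/(2*r**6))
    print(f"t = {t:3.1f}   r = {r:10.3e}   shear/total = {shear_fraction:.3e}")
```

The shear fraction drops by many orders of magnitude within a few e-folds of expansion, which is the classical picture behind the statement that the bounced or reflected Universe quickly loses its anisotropy.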
The transformation of the wave function $$\psi =r^{p/2}\mathrm{\Psi }$$ allows to eliminate first derivative in (13) $$\left[\frac{^2}{r^2}\frac{1}{r^2}\frac{^2}{\mathrm{\Phi }^2}U_{ef}\right]\psi (r,\mathrm{\Phi })=0,$$ $$U_{ef}=\frac{p}{2}\left(1\frac{p}{2}\right)\frac{1}{r^2}\frac{6\underset{\alpha =1}{\overset{3}{}}C_\alpha ^2}{r^2}12r^4V(\mathrm{\Phi }).$$ (14) Let’s note, that in expression for an effective potential energy there is an addend of ”centrifugal energy” $`p\left(1p/2\right)/2r^2`$. Is explored Eq. (14) with a various form of the potential $`V(\mathrm{\Phi })`$ and by choice of parameter $`p`$. A. Above barrier reflection of the wave function of the Universe 1. de Sitter minisuperspace. Let’s consider the simplest case of minisuperspace model, when the factor ordering $`p=0`$ and potential $`V(\mathrm{\Phi })`$ represents constant vacuum energy $`\epsilon _v`$ creating an effective cosmological constant. Then WDW Eq. (14) will take the form of the one-dimensional Schrödinger equation $$\left[\frac{d^2}{dr^2}+U_{ef}\right]\mathrm{\Psi }(r)=0,$$ $$U_{ef}=6\left(\underset{\alpha =1}{\overset{3}{}}C_\alpha ^2\right)/r^2H^2r^4,$$ (15) where $`H^2=12\epsilon _v`$ is the Hubble parameter. Entering $`\rho =H^{1/3}r`$ and $`\gamma =6\underset{\alpha =1}{\overset{3}{}}C_\alpha ^2`$ we can rewrite (15) as $$\left[\frac{d^2}{d\rho ^2}+U_{ef}\right]\mathrm{\Psi }(\rho )=0,U_{ef}=\gamma /\rho ^2\rho ^4.$$ (16) Eq. (16) describes the motion of a ”particle” with zero-point energy in the field of the effective potential $`U_{ef}`$. The interesting feature of the given potential is that near to the origin of coordinates it is approach infinity under the law $`U_{ef}\gamma /\rho ^2`$ (that is we can neglect the second term in potential). As is known , this case is intermediate between when there are usual stationary states and the cases when happens the ”collapse” of a particle in the origin of coordinates. Therefore, it is necessary to carry out the additional analysis here. For this purpose we shall search near $`\rho =0`$ for solution of $$\left[\frac{d^2}{d\rho ^2}+\frac{\gamma }{\rho ^2}\right]\mathrm{\Psi }(\rho )=0$$ (17) as $`\mathrm{\Psi }\rho ^s`$ that at substitution in (17) gives: $$s^2s+\gamma =0$$ with roots $$s_1=1/2+\sqrt{1/4\gamma },s_2=1/2\sqrt{1/4\gamma }.$$ Further, we select around the origin of coordinates small area of radius $`\rho _0`$ and is exchanged in it the function $`\gamma /\rho ^2`$ by the constant $`\gamma /\rho _0^2`$. Having defined wave functions in such ”cut off” field then we shall look what happens at the passage to the limit $`\rho _00`$. Let’s assume at first that $`\gamma <1/4`$. Then $`s_1>s_2>0`$ and at $`\rho >\rho _0`$ the general solution of Eq. (17) looks like (at small $`\rho `$) $$\mathrm{\Psi }=A\rho ^{s_1}+B\rho ^{s_2}$$ (18) ( $`A,B`$ are the constants). At $`\rho <\rho _0`$ the solution of $$\frac{d^2\mathrm{\Psi }}{d\rho ^2}+\frac{\gamma }{\rho ^2}\mathrm{\Psi }=0,$$ finiteness in the origin of coordinates looks like $$\mathrm{\Psi }=C\mathrm{sin}(k\rho ),k=\sqrt{\gamma }/\rho _0.$$ At $`\rho =\rho _0`$ function $`\mathrm{\Psi }`$ and its derivative should be continuous functions. 
Therefore, we can write one of requirements as the requirement of the continuity of the logarithmic derivative with respect to $`\mathrm{\Psi }`$ that gives in $$\sqrt{\gamma }ctg\sqrt{\gamma }=\frac{s_1\rho _0^{s_1s_2}+(B/A)s_2}{\rho _0^{s_1s_2}+(B/A)},$$ or solving rather $`(B/A)`$ we obtain $$B/A=const\rho _0^{s_1s_2}.$$ (19) Passing on to the limit $`\rho _00`$ we find that $`B/A0`$. Thus from two solutions (18) remains $$\mathrm{\Psi }=A\rho ^{s_1}.$$ (20) Let now $`\gamma >1/4`$. Then $`s_1`$ and $`s_2`$ are complex: $$s_1=1/2+i\sqrt{\gamma 1/4},s_2=s_1^{}.$$ By analogy to the previous reasonings we again come to equality (19) which at substitution of values $`s_1`$ and $`s_2`$ gives $$B/A=const\rho _0^{i\sqrt{4\gamma 1}}.$$ (21) At $`\rho _00`$ this expression does not approach to any limit so the direct passage to the limit $`\rho _00`$ is impossible. The general form of the real solution of (17) can be represented as following: $$\mathrm{\Psi }=const\sqrt{\rho }\mathrm{cos}\left(\sqrt{\gamma 1/4}\mathrm{ln}\left(\rho /\rho _0\right)+const\right).$$ (22) This function has zeros which number unlimited grows with decreasing of $`\rho _0`$. Then at any finite value of the total energy $`E`$ the ”normal state” of a ”particle” in given field corresponds to the energy $`E=\mathrm{}`$. As ”particle” is in infinitesimal area around of the origin of coordinates there is the ”collapse” of the ”particle” on centre. Further, we find the vector of the probability density flux near to zero. In our one-dimensional case we have: $$𝐣=\frac{i}{2}\left(\mathrm{\Psi }^{}\frac{\mathrm{\Psi }}{\rho }\mathrm{\Psi }\frac{\mathrm{\Psi }^{}}{\rho }\right).$$ It is easy to obtain from here that: 1) in the case $`\gamma <1/4`$ at use $`\mathrm{\Psi }`$ from (20) relevant probability density flux $`𝐣=0`$ (as well as for any real wave function). 2) At $`\gamma >1/4`$ with the account $`\mathrm{\Psi }`$ from (22) we have $`j=\sqrt{\gamma 1/4}`$. The upper sign corresponds to the ingoing and the lower one to the outgoing wave. The obtained result means that there is the constant probability density flux near to the origin of coordinates. Thus we have two types of the behaviour of the wave function of a ”particle” at various values of the parameter of an anisotropy $`\gamma `$: 1) at $`\gamma <1/4`$ the wave function near to the origin of coordinates tent to zero; 2) at $`\gamma >1/4`$ happens the ”collapse” of a ”particle” on centre. Let’s note that the probleme of falling of a particle on centre has been already studied in isotropic model with the matter equation of state $`p=ϵ`$ in . Let’s consider further the motion of a ”particle” in the semiclassical approximation. From the form of potential (16) and equality to zero of the total energy of the Universe follows that the examination of Eq. (16) is reduced to the one-dimensional problem about above barrier reflection (Fig. 1), i.e. to reflection of a ”particle” with energy exceeding height of the barrier. In our case ”particle” goes down to some ”turning point” $`\rho _2`$ in which it changes the direction of motion on an inverse. The given point represents a complex solution of the equation $`U_{ef}=0`$, namely $`\rho _2=\gamma ^{1/6}\mathrm{exp}\left(i\pi (2n+1)/6\right)`$. Then the required reflectance $`R`$ is found as : $$R=\mathrm{exp}\left(4\mathrm{Im}_{\rho _1}^{\rho _2}\left[U_{ef}\right]^{1/2}\right)$$ (23) (here $`\rho _1`$ is any point on the real axis). 
In the considered case using (16) and (23) we have: $$R=\mathrm{exp}\left(\frac{2}{3}\sqrt{\gamma }\pi \right).$$ (24) The last expression is obtained with use of the semiclassical approximation. It is useful also to find the exact solution of Eq. (16) which looks like: $$\mathrm{\Psi }(\rho )=C_1\sqrt{\rho }J(\frac{1}{3}\sqrt{1/4\gamma },\frac{1}{3}\rho ^3)+C_2\sqrt{\rho }Y(\frac{1}{3}\sqrt{1/4\gamma },\frac{1}{3}\rho ^3),$$ (25) where $`C_1`$ and $`C_2`$ are integration constants, $`J(\nu ,z)`$, $`Y(\nu ,z)`$ \- the Bessel functions of the first and second kind accordingly, $`\nu ,z`$ are index and argument of these functions. For the finding of the reflectance it is necessary to consider the behaviour of this solution on major distances from the origin of coordinates. In this case $`\mathrm{\Psi }(\rho )`$ describes ingoing and reflected waves and the reflectance will be equal to the ratio of amplitudes of these waves. The asymptotic form of the function $`\mathrm{\Psi }(\rho )`$ at $`\rho \mathrm{}`$ is $$\mathrm{\Psi }(\rho )\sqrt{\frac{6}{\pi \rho ^2}}\mathrm{cos}\left(\frac{1}{3}\rho ^3\frac{1}{6}\pi \sqrt{1/4\gamma }\frac{\pi }{4}\right).$$ (26) It is clear from here that: $$\text{for}\gamma <1/4:R=1$$ $$\text{for}\gamma >1/4:R=\mathrm{exp}\left(\frac{2}{3}\pi \sqrt{\gamma 1/4}\right).$$ (27) The second expression from (27) at $`\gamma 1/4`$ coincides with reflectance from the semiclassical approximation (24). Being based on results obtained at the analysis of behaviour of the wave function of the Universe near to the origin of coordinates is concluded that at $`\gamma <1/4`$ does not happen the accumulation of the wave function at $`\rho 0`$ and consequently takes place the complete reflection of the wave function from the barrier ($`R=1`$). In the case $`\gamma >1/4`$ which corresponds to collapse of a ”particle” on centre the reflectance $`R`$ becomes less than 1. It happens because there is nonzero probability density flux in the infinitesimal area around of the origin of coordinates. Let’s note that at the approach to zero the problem, generally speaking, ceases to be stationary. It gives that the wave function can accumulate in this area. 2. Variable scalar field. Let’s consider further the case of Eq. (14) when the factor ordering $`p`$ is equal to zero as before, but potential of the scalar field is variable: $`V(\phi )=m^2\phi ^2/2`$. Introducing the rescaling $`\phi \sqrt{6}\mathrm{\Phi }`$ and $`m\mu /6`$ we shall write the WDW equation as: $$\left[\frac{^2}{r^2}+\frac{1}{r^2}\frac{^2}{\mathrm{\Phi }^2}+U_{ef}\right]\mathrm{\Psi }=0,$$ $$U_{ef}=\frac{\gamma }{r^2}\mu ^2r^4\mathrm{\Phi }^2.$$ (28) The finding of the analytical solution of the obtained equation represents a complex problem. Therefore, for its examination we shall search the WKB solution as $`\mathrm{\Psi }_c=e^{iS}`$. The relevant equation for action $`S(r,\mathrm{\Phi })`$ is $$\left(\frac{S}{r}\right)^2+\frac{1}{r^2}\left(\frac{S}{\mathrm{\Phi }}\right)^2U_{ef}=0.$$ (29) For finding of the solution of this nonlinear differential equation it is possible to reduce it to system of the ordinary differential equations called the characteristic system of the given partial equation. Using this system it is possible to construct an integrated surface of Eq. (29), consisting from the characteristics. 
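Before writing out that characteristic system explicitly, it is useful to check numerically how quickly the semiclassical reflectance (24) approaches the exact result (27). The Python sketch below assumes the decaying form of both exponents, $`R=\mathrm{exp}[-(2/3)\pi \sqrt{\gamma }]`$ and $`R=\mathrm{exp}[-(2/3)\pi \sqrt{\gamma -1/4}]`$, as required for $`R\le 1`$; it is purely an illustrative comparison.

```python
import numpy as np

# Comparison of the WKB reflectance (24) with the exact Bessel-function result (27),
# assuming the decaying (negative) exponents required for R <= 1.

def R_wkb(gamma):
    return np.exp(-(2.0/3.0) * np.pi * np.sqrt(gamma))

def R_exact(gamma):
    if gamma <= 0.25:
        return 1.0                       # complete reflection for gamma < 1/4
    return np.exp(-(2.0/3.0) * np.pi * np.sqrt(gamma - 0.25))

for g in (0.3, 0.5, 1.0, 5.0, 25.0):
    print(f"gamma = {g:5.2f}   R_wkb = {R_wkb(g):.3e}   R_exact = {R_exact(g):.3e}")
# The two expressions agree once gamma >> 1/4, as stated after Eq. (27).
```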
The required system of the characteristics written with respect to arbitrary parameter $`t`$ has a form : $`r^{}(t)`$ $`=`$ $`2p,\mathrm{\Phi }^{}(t)={\displaystyle \frac{2}{r^2}}q,`$ $`S^{}(t)`$ $`=`$ $`2\left({\displaystyle \frac{\gamma }{r^2}}+\mu ^2r^4\mathrm{\Phi }^2\right),`$ $`p^{}(t)`$ $`=`$ $`{\displaystyle \frac{2}{r^3}}q^2+2{\displaystyle \frac{\gamma }{r^3}}4\mu ^2r^3\mathrm{\Phi }^2,`$ (30) $`q^{}(t)`$ $`=`$ $`2\mu ^2r^4\mathrm{\Phi },`$ where denote a derivative with respect to $`t`$ and introducing denotations $`p=S/r`$, $`q=S/\mathrm{\Phi }`$. The obtained system of equations describes an one-dimensional motion of a ”particle” along the characteristic. In this case monotonically varying parameter $`t`$ can play a role of time in due course there is an evolution of the Universe. The obtained system of characteristics can be use for finding of dependence of coefficient of above barrier reflection $`R`$ of wave function of the Universe from value of the field $`\mathrm{\Phi }`$. But before we shall note one useful possibility of simplification of making of a calculation. Expand an effective potential $`U_{ef}`$ from (16) near to a maximum in a Taylor series: $$U_{ef}=U_{\mathrm{max}}+\frac{\alpha }{2}\left(\rho \rho _{\mathrm{max}}\right)^2,$$ (31) here $`U_{\mathrm{max}}`$ and $`\rho _{\mathrm{max}}`$ are a maximal value of potential and relevant value of $`\rho `$, and $`\alpha `$ is a value of a second derivative with respect to $`U_{ef}`$ from (16) in the point $`\rho _{\mathrm{max}}`$. The values of specified quantities are easily finding from (16): $$U_{\mathrm{max}}=3\left(\frac{\gamma }{2}\right)^{2/3},\rho _{\mathrm{max}}=\left(\frac{\gamma }{2}\right)^{1/6}$$ $$\alpha =24\left(\frac{\gamma }{2}\right)^{1/3}.$$ Then using (23) and (31) we have $$R=\mathrm{exp}\left(\frac{1}{2}\sqrt{\frac{3}{2}}\sqrt{\gamma }\pi \right),$$ (32) that approximately coincides with (24). Thus in an one-dimensional case there is an possibility to use approximate expression (31) instead of exact one. Further, we shall search for the solution of system (Quantum Evolution of the Bianchi Type I Model) numerically. Using (23) and (31) it is possible to obtain the following form of dependence $`R(\mathrm{\Phi })`$ (Fig. 2). It is obvious that the reflectivity promptly decreases with increasing of value of the field $`\mathrm{\Phi }`$. After reflection of the wave function the following necessary stage of evolution of the Universe is the inflationary period providing the ”stretch” of linear size of the Universe with Planck up to macroscopic. For ensuring of the sufficiently long period of inflation it is necessary the realization of two requirements on scalar field (see e.g. ): 1) it should be a Planck order; 2) its values should vary slowly with time. The realization first of these requirements corresponds rather small, but nevertheless distinct from zero reflectivity $`R`$ (see Fig. 2). Solving system (Quantum Evolution of the Bianchi Type I Model) will be easily convinced that is fulfilled second of the posed requirements also. B. The wave function bouncing off a potential wall Now we shall consider Eq. (14) in a case when the parameter $`p`$ is different from zero and there is a potential of the scalar field $`V(\phi )=m^2\phi ^2/2`$. At such setting of a problem we have two essentially various variant of the effective potential $`U_{ef}`$. 
At realization of a requirement $`0p2`$ the form of $`U_{ef}`$ from (14) as a matter of fact by nothing differs from the case when $`p=0`$ and the relevant consideration will be made by analogy to the problem studied in the previous section. In a case omissions of the mentioned above requirement the qualitatively new statement of a problem is possible. For this purpose the realization of one requirement is necessary: $`\left|p^2/2+p/2\right|>\gamma `$, which ensures occurrence in the effective potential of a positive factor before $`1/r^2`$ greater than $`\gamma `$. Thus there is a possibility of occurrence of a repelling potential wall. Then the effective potential in (14) will be: $$U_{ef}=\frac{\left|p^2/2+p/2\right|}{r^2}\frac{\gamma }{r^2}\mu ^2r^4\mathrm{\Phi }^2\epsilon .$$ (33) Thus the influence of a massless scalar field (e. g. photon gas) with energy density $`\epsilon `$ also is taken into account, the sense of which introduction will be explained below. The form of potential is shown on Fig.3. Then Eq. (14) describes a motion of a ”particle” with a zero-point energy in potential $`U_{ef}(r,\mathrm{\Phi })`$. As is obvious from Fig.3 the effective potential parts space on two areas: classically forbidden (interior) and classically allowed (exterior). At transition of a ”particle” in the I-st area its wave function will be exponentially decay (as $`U_{ef}(r,\mathrm{\Phi })`$ approach infinity at $`r\mathrm{}`$) and the relevant probability of realization of the given state tends to zero. On the other hand, the wave function $`\mathrm{\Psi }`$ describes the wave incident on the barrier $`U_{ef}(r,\mathrm{\Phi })`$ on the part of major $`r`$ and the wave reflex from the barrier. The physical sense of the given statement consists that the incident wave describes the contracting Universe and reflex - expanding one. We shall consider an evolution of the scalar field $`\mathrm{\Phi }`$ at stages of contraction and expansion. For this purpose we shall search the WKB solution of the equation (14) with potential (33) in classically allowed range as $`\mathrm{\Psi }_c=e^{iS}`$. The relevant equation for action $`S(r,\mathrm{\Phi })`$ will be: $$\left(\frac{S}{r}\right)^2+\frac{1}{r^2}\left(\frac{S}{\mathrm{\Phi }}\right)^2U_{ef}=0.$$ (34) By analogy to the case of Eq. (29) we shall make system of the characteristics with respect to arbitrary parameter $`t`$: $`r^{}(t)`$ $`=`$ $`2p,\mathrm{\Phi }^{}(t)={\displaystyle \frac{2}{r^2}}q,`$ $`S^{}(t)`$ $`=`$ $`2\left({\displaystyle \frac{\left|p^2/2+p/2\right|}{r^2}}{\displaystyle \frac{\gamma }{r^2}}\mu ^2r^4\mathrm{\Phi }^2\epsilon \right),`$ $`p^{}(t)`$ $`=`$ $`{\displaystyle \frac{2}{r^3}}q^22{\displaystyle \frac{\left|p^2/2+p/2\right|}{r^3}}+2{\displaystyle \frac{\gamma }{r^3}}4\mu ^2r^3\mathrm{\Phi }^2,`$ (35) $`q^{}(t)`$ $`=`$ $`2\mu ^2r^4\mathrm{\Phi }.`$ As well as in case of Eq. (29) system of equations (Quantum Evolution of the Bianchi Type I Model) is describes an one-dimensional motion of a ”particle” along the characteristic. Thus the behaviour of a ”particle” is similar on bounce off a potential barrier $`\mathrm{\Phi }(r)`$ made at cross of the effective potential with the plane $`\mathrm{\Phi }r`$ (that is when $`U_{ef}=0`$). Therefore, a semiclassical wave function $`\mathrm{\Psi }_c`$ describes an ensemble of classical universes evolving along the characteristics $`S`$. Then the ensemble of these characteristics can be considered as the trajectories of a motion with the various initial conditions. 
We shall note here that introduced earlier the massless scalar field with the energy density $`\epsilon `$ is necessary that the Universe having a zero total energy always remained in classically allowed area (Fig. 4). As it was specified above the evolution of the Universe is described by two stages: at first regime of contracting and then, after bounce off, regime of expansion. At the stage of contracting the field makes oscillations with increasing amplitude. After bounce off two variants are possible: 1) the field $`\mathrm{\Phi }`$ amount to rather major value and after reflection varies slowly that corresponds to an inflationary period and further transfers on scalaron stage (Fig. 5); 2) $`\mathrm{\Phi }`$ hasn’t amount to major values and having reflected at once makes fast oscillations losing the energy. Thus, naturally, there is no inflationary stage (or it is too short). It is necessary apart to note that in considered model the presence of a repelling potential wall does not allow the Universe to collapse. 4. Conclusions In the submitted paper we have considered an anisotropic cosmological Bianchi type I model with the scalar field. Distinctive feature at the solution of the given problem was representation of three scale factors in such manner that the final Einstein’s equations have turned out dependent only from one function $`r(t)`$. In the quantum approach the basic equation of the quantum cosmology - WDW equation (13) was obtained. The examination of the latter was reduced to considering the following problems: 1. The simplest model with constant scalar field playing a role of an effective cosmological constant was studied. It has allowed to reduce the WDW equation to the one-dimensional Schrödinger equation. The presence of the parameter of an anisotropy $`\gamma `$ in the effective potential gives in interesting feature: there is some critical value of this parameter at which there is partitioning the problem into two variants. In the first case at $`\gamma >1/4`$ the collapse of the wave function on centre takes place. Thus there is the constant nonzero probability density flux $`j`$ that means an possibility of accumulation of the wave function in close to the origin of coordinates area. In the second case ($`\gamma <1/4`$) the collapse of the ”particle” on centre misses and $`j=0`$. Further, the problem of finding of coefficient of above barrier reflection $`R`$ from the cosmological singularity of the wave function of the Universe $`\mathrm{\Psi }`$ was solved. For this purpose two approaches were used: 1) semiclassical approximation and 2) finding $`R`$ as the ratio of amplitudes of wave functions of reflected and ingoing waves on infinity. Both approaches give identical results at $`\gamma 1/4`$. Thus $`R<1`$ that means partial penetration of the wave function into close to zero area and its further collapse on centre. In case of $`\gamma <1/4`$ the second approach gives $`R=1`$ as against semiclassical one. It speaks about an inapplicability the latter in such situation and about necessity to use asymptoticses of the exact solutions. Let’s note that on coefficient of above barrier reflection the influence of Hubble parameter has not an effect in any way because it is only renormalaze the scale factor. 2. The model with a variable scalar field with potential $`V(\phi )=m^2\phi ^2/2`$ was considered. In view of complexity of finding of an exact analytical solution the given problem was explored numerically. With this purpose the semiclassical solution of Eq. 
(28) was found, therefore Eq. (29) is obtained for which the system of the characteristics (Quantum Evolution of the Bianchi Type I Model) was obtained. The last one was used for finding of relationship of the coefficient of the above barrier reflection $`R`$ from value of the field $`\mathrm{\Phi }`$. The obtained relationship shows that $`R`$ is rather great at Planck field. It means that after reflection there is an possibility of an output of the Universe on a rather long inflationary stage providing increase of its size with Planck up to macroscopic and a further output on the standard cosmological scenario. 3. In case of nonzero factor ordering there is essentially new possibility of a ”bounce off” a wave function of the Universe from a repelling potential wall ensured with a form of effective potential (33). By analogy to the previous case the solution of Eq. (34) was found with the help of system of the characteristics (Quantum Evolution of the Bianchi Type I Model). The obtained results show that the Universe beginning the evolution with small initial value of $`\mathrm{\Phi }`$ at the stage of contraction gathers a field up to Planck. After bounce off the field will increase still and further Universe transfers to an inflationary stage (see Fig. 5). Note that in considered cases above barrier reflection and bounce of a wave function of the Universe feature process of the quantum creation of the Universe. The further evolution of the model represents a stage of expansion with prompt losses of an anisotropy and transition into Friedmann Universe. Acknowledgements We are grateful to A.A. Starobinsky and I.B. Khriplovich for useful discussions of results. This work was supported by the research grant KR-154 of International Science and Technology Centre (ISTC).
# Bubbling and bistability in two parameter discrete systems ## Abstract We present a graphical analysis of the mechanisms underlying the occurrences of bubbling sequences and bistability regions in the bifurcation scenario of a special class of one dimensional two parameter maps. The main result of the analysis is that whether it is bubbling or bistability is decided by the sign of the third derivative at the inflection point of the map function. PACS numbers: 05.45. +b, 05.40.+j 1. Introduction. The studies related to onset of chaos in one- dimensional discrete systems modeled by nonlinear maps, have been quite intense and exhaustive during the last two decades. Such a system normally supports a sequence of period doublings leading to chaos. It is also possible to take it back to periodicity through a sequence of period halvings by adding perturbations or modulations to the original system. This has, most often, been reported as a mechanism for control of chaos. In addition, there are features like tangent bifurcations, intermittency, crises etc., that occur inside the chaotic regime and are not of immediate relevance to the present work. However, if the system is sufficiently nonlinear, there are other interesting phenomena like bubble structures and bistability that have invited comparatively less attention. The simplest cases where these are realised are maps with at least two control parameters, one that controls the nonlinearity and the other which is a constant additive one. i.e., maps of the type, $$X_{n+1}=f(X_n,a,b)=f_1(X_n,a)+b$$ (1) In these maps, if $`a`$ is varied for a given $`b`$, the usual period doubling route to chaos is observed. But when $`a`$ is kept at a point beyond the first period doubling point $`a_1`$, and $`b`$ is varied, the first period doubling is followed by a period halving forming a closed loop like structure called the primary bubble in the bifurcation diagram(Fig(1.a)). If $`a`$ is kept beyond the second bifurcation point $`a_2`$, and $`b`$ is tuned, secondary bubbles appear on the arms of the primary bubble. Thus as we shift the map along the $`a`$-axis and drift it along the $`b`$-axis, the complete bubbling scenario develops in the different slices of the space $`(X,a,b)`$. This accumulates into what is known as bimodal chaos- chaos restricted or confined to the arms of the primary bubble. This can be viewed as a separate scenario to chaos in such systems. It has been confirmed that the Feigenbaum indices for this scenario with $`a`$ as control parameter would be the same as the $`\alpha `$ and $`\delta `$ of the normal period doubling route to chaos. However, detailed RG analysis by Oppo and Politi, involving the parameter $`b`$ also, indicates that if $`b`$ is kept at a critical value, $`b_c`$, where bimodal chaos just disappears, then there is a slowing down in the convergence rate leading to an index which is $`(\delta )^{1/2}`$. This has been experimentally verified in a $`CO_2`$ laser system with modulated losses. The bubbling scenario is seen in the bifurcation diagrams of many nonlinear systems like coupled driven oscillators, oscillatory chemical reactions, diode circuits, lasers, insect populations, cardiac cell simulations, coupled or modulated maps, quasi-periodically forced systems, DPCM transmission system and traffic flow systems etc. 
The very fact that this phenomenon appears in such a wide variety of systems makes it highly relevant to investigate and expose the common factor(s) in them, i.e., the underlying basic features that make them support bubbles in their bifurcation scenario. The above mentioned continuous systems require maps with at least two parameters of type (1) to model them, the second additive parameter being the coupling strength, secondary forcing amplitude etc. We note that in all the above referred papers no specific mention is made regarding the mechanism of formation of bubbles, probably because the authors were addressing other aspects of the problem. However, there have been a number of isolated attempts to analyse the criteria for bubble formation in a few typical systems. According to Bier and Bountis, the two criteria are that the map must possess some symmetry and that the first period doubling should occur facing the symmetry line. Later, Stone makes these a little more explicit by stating that the map should have an extending tail (with a consequent inflection point) and that the inflection point should occur to the right of the critical point of the map. It is clear that this applies only to maps with one critical point. The relation of the extending tail to bubbling has also been discussed briefly elsewhere. Bistability is an equally interesting and common feature associated with many nonlinear systems like a ring laser and a variety of electronic circuits. A recent renewal of interest in such systems arises from the fact that they form ideal candidates for studies related to stochastic resonance phenomena. The bistable behaviour in two parameter maps is shown in the bifurcation diagram in Fig(1.b). To the best of our knowledge, conditions for the occurrence of bistability have so far not been reported in the literature. Our motivation in the present work is to generalise the criteria reported earlier for bubbling and put them together with more clarity and simplicity. As a byproduct, we succeed in stating the conditions for bistability along similar lines for systems of type (1), even though the two phenomena are mutually exclusive as far as their regimes of occurrence are concerned. We provide a detailed graphical analysis, which leads to a simple and comprehensible explanation for both. The paper is organized as follows. In section 2, the criteria for bistability and bubbling are stated, followed by a brief explanation. The graphical analysis taking two simple cubic maps as examples is included in section 3, and the concluding comments in section 4.

2. The dynamics of bubbling and bistability. For the special class of maps given in (1), the basic criteria for bubbling / bistability can be stated as follows. The non-linearity in $`f(X,a,b)`$ must be more than quadratic. This implies that $`f^{\prime }(X,a,b)`$ (the prime indicating the derivative with respect to $`X`$) is non-monotonic in $`X`$ and there exists at least one inflection point $`X_i`$, where $$f^{\prime \prime }(X_i,a,b)=0.$$ (2) Then we consider the following two cases. Case (i) $$f^{\prime \prime \prime }(X_i,a,b)>0$$ (3) We define a value of $`a`$ as $`a_1`$ through the relation $$f^{\prime }(X_i,a_1,b)=-1,$$ (4) the first period doubling point of the system. For a value of $`a`$ close to $`a_1`$ but greater than $`a_1`$, by adjusting the additive parameter $`b`$, the system can be taken through a bubble structure in the bifurcation scenario.
Case (ii) $$f^{\prime \prime \prime }(X_i,a,b)<0$$ (5) Here $`a_1`$ is defined as the tangent bifurcation point of the system, through the relation $$f^{\prime }(X_i,a_1,b)=+1$$ (6) Then by fixing $`a`$ greater than $`a_1`$, but close to $`a_1`$, and tuning $`b`$, a bistability region can be produced in the system. For case (i), the value of $`a`$ chosen to be greater than $`a_1`$ makes $`f^{\prime }(X_i,a,b)<-1`$, or $`\left|f^{\prime }(X_i,a,b)\right|>1`$. Moreover, conditions (2) and (3) imply that $`X_i`$ is a minimum for $`f^{\prime }`$, which is concave upwards on both sides of $`X_i`$. Hence a fixed point $`X_{-}^{*}`$ which is to the left of $`X_i`$, but in the immediate neighborhood of $`X_i`$, has $`\left|f^{\prime }(X_{-}^{*},a,b)\right|<1`$, and hence is stable. Similarly, a fixed point $`X_{+}^{*}`$ to the right of $`X_i`$, but near to $`X_i`$, has $`\left|f^{\prime }(X_{+}^{*},a,b)\right|<1`$ and is stable. Now, the second parameter $`b`$ is simply additive for the class of maps under consideration and hence $`f^{\prime }`$ is independent of $`b`$. By adjusting $`b`$, the fixed point can be shifted such that $`f^{\prime }(X_{-}^{*},a,b)`$ becomes equal to -1, the period doubling point of the map. Then $`X_{-}^{*}`$ will give rise to a 2-cycle with elements $`X_1^{*}`$ and $`X_2^{*}`$. Since these are in the neighborhood of $`X_i`$, $`f^{\prime }(X_1^{*})`$ and $`f^{\prime }(X_2^{*})`$ will be negative so that the product $`f^{\prime }(X_1^{*})f^{\prime }(X_2^{*})`$ is positive. With further increase of $`b`$, a period merging takes place for the 2-cycle, with $`X_1^{*}`$ and $`X_2^{*}`$ collapsing into $`X_{+}^{*}`$, which is just stable at the point where $`f^{\prime }(X_{+}^{*})=-1`$. Thus in the parameter window $`(b_1,b_2)`$, a bubble structure is formed. The situation is exactly reversed for case (ii). Here conditions (2) and (5) make $`X_i`$ a maximum of $`f^{\prime }`$, which falls off on both sides of $`X_i`$. At a value of $`a>a_1`$, where $`a_1`$ is defined by (6), $`f^{\prime }(X_i,a,b)>+1`$. Then in the neighborhood of $`X_i`$, a fixed point $`X_{-}^{*}`$, to the left of $`X_i`$, can be stable since $`\left|f^{\prime }(X_{-}^{*},a,b)\right|<1`$. Similarly $`X_{+}^{*}`$ on the right of $`X_i`$ also will be stable. By adjusting the second parameter $`b`$, these will be shifted to their respective tangent bifurcation points, i.e., $`b_1`$ where $`X_{+}^{*}`$ is born and $`b_2`$ where $`X_{-}^{*}`$ disappears. Then a bistability window is seen in the interval $`(b_1,b_2)`$. 3. Graphical analysis. The mechanism of occurrence of bubbling and bistability explained above for maps satisfying the conditions in case (i) and case (ii) respectively can be made more transparent through a detailed graphical analysis. For this we plot the curve C1=$`f^{\prime }(X)`$, the 1-cycle fixed point curve C2=$`f(X)-X`$ and the 2-cycle curve C3=$`f(f(X))-X`$ simultaneously as functions of $`X`$, for chosen values of $`a`$ and $`b`$. The zeroes of C2 give the 1-cycle fixed point $`X^{*}`$ while those of C3 give the elements of the 2-cycle. Their stability can be checked from the same graph, since the value of the derivative at the fixed points can be read off. We fix the value of $`a`$ to be greater than $`a_1`$, which helps to position the curve C1 in the proper way. By plotting the three curves for different values of $`b`$, bistability regions or bubbling sequences can be traced for any given map function of type (1). For further discussion, we consider two specific forms of maps of the cubic type, which are simple but typical examples for cases (i) and (ii).
They are, $$M1:X_{n+1}=b-aX_n+X_n^3$$ (7) $$M2:X_{n+1}=b+aX_n-X_n^3$$ (8) For M1, there are two critical points, $`X_{c1}=-\sqrt{a/3}`$, which is a maximum, and $`X_{c2}=+\sqrt{a/3}`$, which is a minimum. The inflection point in between occurs at $`X_i=0`$, where $`f^{\prime \prime \prime }=6`$. Hence it belongs to case (i) and $`a_1`$ as defined by (4) is 1. In Fig(2), the three curves mentioned above are plotted for this map at $`a=1.3`$. We start from a value of $`b=-1.34`$ (Fig(2.a)), where the fixed point $`X_{-}^{*}`$ is just born via tangent bifurcation since $`f^{\prime }(X_{-}^{*})`$ here is +1, and the curves C2 and C3 just touch the zero line on the left of $`X_i`$ at $`X_{-}^{*}`$. Though C2 has a zero on the right, the slope there is larger than 1 and hence it is unstable for this value of $`b`$. Since $`b`$ is only additive, an increase in the value of $`b`$ shifts C2 upwards, resulting in a slow drift of $`X_{-}^{*}`$ from left to right. Thus as $`b`$ is increased to -0.7 (Fig(2.b)), $`f^{\prime }(X_{-}^{*})=-1`$ and $`X_{-}^{*}`$ bifurcates into $`X_1^{*}`$ and $`X_2^{*}`$. At $`b=0.3`$ (Fig(2.c)), the 2-cycle is stable with $`f^{\prime }(X_1^{*})`$ and $`f^{\prime }(X_2^{*})`$ both negative, so that their product is positive but less than 1. Note that the curve C3 has developed a maximum and a minimum on both sides of $`X_{-}^{*}`$, which is now unstable, cutting the zero line again at $`X_1^{*}<X_{-}^{*}`$ and $`X_2^{*}>X_{-}^{*}`$. As $`b`$ is further increased, they move apart. Since the value chosen is within the stability window of the 2-cycle, no further period doubling takes place. As $`X_{-}^{*}`$ crosses $`X_i`$, $`X_1^{*}`$ and $`X_2^{*}`$ move towards each other and merge together at $`b=0.7`$ (Fig(2.d)), coinciding with the fixed point $`X_{+}^{*}`$. Further, $`X_{+}^{*}`$ disappears by a reverse tangent bifurcation at $`b=1.34`$, when $`f^{\prime }(X_{+}^{*})`$ becomes equal to +1. Thus the above events lead to the formation of a primary bubble in the window (-0.7,0.7). By keeping $`a`$ at a value beyond the second period doubling point $`a_2`$ of the map, the merging tendency starts only after the second period doubling and hence secondary bubbles are seen on the arms of the primary bubble. These can be repeated until, at $`a>a_{\infty }`$, the system is taken to chaos. Now the above analysis is repeated for map M2, which satisfies the conditions in case (ii) (Fig(3)). Here, of the two critical points of the map, $`X_{c1}=-\sqrt{a/3}`$ is the minimum and $`X_{c2}=+\sqrt{a/3}`$ is the maximum, with a positive slope at the point of inflection $`X_i`$. $`a_1`$ in this case is also 1. Hence in Fig(3), $`a`$ is chosen to be 1.4. Fig(3.a) shows the situation for $`b=-0.35`$, where $`f^{\prime }(X^{*})=-1`$ and hence the 1-cycle fixed point $`X_{-}^{*}`$ period doubles into a 2-cycle. For lower values of $`b`$, we expect the full period doubling scenario since $`f^{\prime }`$ is monotonic beyond this point (Fig(3.b)). However, as $`b`$ is increased to -0.1, $`f^{\prime }(X_{+}^{*})=+1`$, where the other 1-cycle, $`X_{+}^{*}`$ to the right of $`X_i`$, is born by tangent bifurcation (Fig(3.c)). Note that at this point $`X_{-}^{*}`$ is still stable with $`\left|f^{\prime }(X_{-}^{*})\right|<1`$. This continues until $`b=+0.1`$, where $`f^{\prime }(X_{-}^{*})=+1`$ and hence $`X_{-}^{*}`$ disappears (Fig(3.d)). The birth of $`X_{+}^{*}`$ is concurrent with the maximum of C2 touching the zero line $`(b=b_1)`$ while the disappearance of $`X_{-}^{*}`$ occurs as the minimum of C2 touches the zero line $`(b=b_2)`$.
As $`b`$ is increased and C2 moves up, it is clear that the former will take place at a lower $`b`$ value than the latter, since the maximum of C2 occurs for $`X>X_i`$ and the minimum at $`X<X_i`$ (the slope being positive at $`X_i`$). Hence $`b_1<b_2`$, or there is a window $`(b_1,b_2)`$ where bistability exists, which in our graph is (-0.1,0.1) for $`a=1.4`$. $`X_{+}^{*}`$ is stable beyond this point also and it period doubles as $`b`$ is increased to $`b=+0.35`$, where $`f^{\prime }(X_{+}^{*})=-1`$. The full Feigenbaum scenario then develops for higher values of $`b`$. By keeping $`a`$ at higher values and tuning $`b`$, the bistability can be taken to 2-cycle, 4-cycle and even chaotic regions. The stability regions of the different types of dynamical behaviour possible for M1 can be marked out in a parameter space plot in the $`(a,b)`$ plane (Fig(4.a)). The cone-like region on the left is the stability zone of the 1-cycle fixed point (periodicity, p=1) and it is separated from the escape region by the tangent bifurcation line on both sides. The parabola-like curve inside it marks out the 2-cycle (p=2) region, while the smaller parabolas indicate curves along which 4-cycles (p=4) and other higher periodic cycles become stable until chaos is reached. The line parallel to the $`b`$-axis at a value of $`a>a_1`$, along which the primary bubble is formed, is shown by the dotted line. It is clear that along this line, the system is taken from escape → 1-cycle → 2-cycle → 1-cycle → escape. Similarly, secondary bubbles are formed along a line drawn at $`a>a_2`$ etc. The parameter space plot for M2 is shown in Fig(4.b). The quadrilateral-like region marked as (I) beyond $`a>a_1`$ is the bistable region for the 1-cycle, while quadrilateral (II) is that for the 2-cycle etc. The area marked with p=1 is the stability region of the 1-cycle, while p=2 is that for the 2-cycle etc. When the system is taken along the dotted line beyond $`a_1`$, bistability is seen in the central region, followed by period doubling bifurcations to both sides, until chaos is reached. 4. Conclusion. Although the above discussion is confined to two simple cubic maps, the analysis has been repeated for a large number of maps of type (1) chosen from a wide variety of situations covering different functional forms like exponential, trigonometric and polynomial maps. We find that the qualitative behaviour in all cases remains the same and depends only on the criteria (2)-(6). Hence the pattern of the scenario detailed in this paper can be taken to be typical as far as maps of the form (1) are concerned. The criteria for bistability reported here are certainly novel, while those for bubbling are more rigorous and general in nature compared to earlier studies. They can be used as a test to identify maps in which bistability or bubbling is possible and also to isolate the regions in the parameter space $`(a,b)`$ where they occur. Our main result is that whether it is bistability or bubbling is decided by the sign of the third derivative of the map function at the inflection point. If $`f^{\prime \prime \prime }(X_i)`$ is positive, because of the concave nature of the derivative, tangent bifurcation will precede period doubling as $`b`$ is increased. Hence a bubbling structure is possible. Similarly, when $`f^{\prime \prime \prime }(X_i)`$ is negative, the curve of $`f^{\prime }`$ is convex and hence period doubling precedes tangent bifurcation, leading to bistability. In case $`f^{\prime \prime \prime }(X_i)=0`$, higher derivatives must be considered for deciding the behaviour.
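Because the two example maps and their working points (M1 at $`a=1.3`$, M2 at $`a=1.4`$) are fully specified above, this main result can be checked numerically. The following Python sketch is ours, not part of the original paper: it sweeps $`b`$ for M1 and counts the points on the attractor, which should reproduce the escape → 1-cycle → 2-cycle → 1-cycle → escape sequence of the primary bubble, and it then iterates M2 from starting points on either side of $`X_i=0`$ to exhibit the bistability window around (-0.1,0.1). The transient lengths, tolerances and escape threshold are arbitrary choices.

```python
import numpy as np

def M1(x, a, b):      # Eq. (7): f''' = +6 at X_i = 0, case (i)
    return b - a * x + x**3

def M2(x, a, b):      # Eq. (8): f''' = -6 at X_i = 0, case (ii)
    return b + a * x - x**3

def attractor(f, x, a, b, n_transient=5000, n_keep=64, tol=1e-4, escape=50.0):
    """Discard the transient and return the distinct points visited on the
    attractor reached from x; an empty list means the orbit escaped."""
    for _ in range(n_transient):
        x = f(x, a, b)
        if abs(x) > escape:
            return []
    points = []
    for _ in range(n_keep):
        x = f(x, a, b)
        if all(abs(x - p) > tol for p in points):
            points.append(x)
    return points

def label(points):
    return "escape" if not points else f"{len(points)}-cycle"

# Case (i): the primary bubble of M1 at a = 1.3 (2-cycle window roughly -0.7 < b < 0.7).
a, x = 1.3, 0.1                      # follow the attractor by continuation from b = 0
print("M1, a = 1.3:")
for b in np.arange(0.0, 1.45, 0.05):
    pts = attractor(M1, x, a, b)
    print(f"  b = {b:+.2f}: {label(pts)}")
    if not pts:
        break                        # past the tangent bifurcation near b = 1.34
    x = pts[-1]
# The same sequence occurs for b < 0, since M1 is equivariant under (X,b) -> (-X,-b),
# so the full b axis shows escape -> 1-cycle -> 2-cycle -> 1-cycle -> escape.

# Case (ii): bistability of M2 at a = 1.4 (window roughly -0.1 < b < +0.1).
a = 1.4
print("M2, a = 1.4:")
for b in (-0.2, -0.12, -0.05, 0.0, 0.05, 0.12, 0.2):
    left = attractor(M2, -0.8, a, b)     # start to the left of X_i = 0
    right = attractor(M2, +0.8, a, b)    # start to the right of X_i = 0
    bistable = bool(left) and bool(right) and abs(left[0] - right[0]) > 1e-2
    l = f"{left[0]:+.3f}" if left else "escape"
    r = f"{right[0]:+.3f}" if right else "escape"
    print(f"  b = {b:+.2f}: x_left -> {l}, x_right -> {r}, bistable = {bistable}")
```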
Bubbling can be looked upon as an extreme case of incomplete period doublings and the latter has been often associated with positive Schwarzian derivative. But for the system under study, it is easy to check that this is always negative (independent of the form of the map function), because of properties (2), (3) and (5). In fact, a few such maps have been reported earlier though in a totally different context. The bubbling scenario in maps of the type M1, leads to bimodal chaos that is restricted to the arms of the primary bubble. Such confined chaos or even low periodic behavior prior to that, makes them better models in population dynamics of eco systems than the usual logistic type maps. Attempts to extend the criteria to continuous and higher dimensional systems are underway and will be reported elsewhere. Acknowledgements SNV thanks the UGC, New Delhi for financial assistance through a junior research fellowship and GA acknowledges the warm hospitality and computer facility at IUCAA, Pune. References $`\left[1\right]`$ S Parthasarathy & S Sinha, Phys. Rev E51, 6239(1995) $`\left[2\right]`$ P R Krishnan Nair, V M Nandakumaran & G Ambika, Pramana (J.Phys.) 43, 421 (1994) $`\left[3\right]`$ M Bier & T Bountis, Phys. Lett. A104, 239(1984) $`\left[4\right]`$ G L Oppo & A Politi, Phys. Rev. A30, 435(1984) $`\left[5\right]`$ C Lepers, J Legrand & P Glorieux, Phys.Rev. A43, 2573(1991) $`\left[6\right]`$ T Hogg & B A Huberman, Phys. Rev. A29, 275(1984) $`\left[7\right]`$ J Kozlowski, U Parlitz & W Lauterborn, Phys. Rev. E51,1861(1995) $`\left[8\right]`$ C S Wang, Y H Kao, J C Huang & Y S Gou, Phys.Rev. A45, 3471(1992) $`\left[9\right]`$ K Coffman, W D McCormick & H L Swinney, Phys.Rev.Lett. 56, 999(1986) $`\left[10\right]`$ T S Bellows, J.Anim. Ecol., 50, 139(1981) $`\left[11\right]`$ M R Guevara, L Glass & A Shrier, Science, 214, 1350(1981) $`\left[12\right]`$ P R Krishnan Nair, V M Nandakumaran & G Ambika,Computational Aspects in Nonlinear dynamics & Chaos, eds.- G Ambika & V M Nandakumaran (Wiley Eastern Pub.Ltd., New Delhi),144, (1994) $`\left[13\right]`$ P P Saratchandran,V M Nandakumaran & G Ambika,Pramana(J.Phys.) 47, 339(1996) $`\left[14\right]`$ Z Qu, G Hu, G Yang & G Qiu, Phys. Rev. Lett. 74, 1736(1995) $`\left[15\right]`$ C Uhl & D Fournier-Prunaret, Int.J.Bif.& Chaos,5, 1033(1995) $`\left[16\right]`$ X Zhang & D F Jarett, Chaos 8, 503(1998) $`\left[17\right]`$ L Stone, Nature, 365, 617(1993) $`\left[18\right]`$ S Sinha & P Das, Pramana (J.Phys.) 48, 87(1997) $`\left[19\right]`$ R Roy & L Mahdel, Opt.Commun. 34, 133(1980) $`\left[20\right]`$ L O Chua & K A Stromsmoe, Int.J.of Engg.Science,9, 435(1971) $`\left[21\right]`$ G Nicolis, C Nicolis & D McKerman, J.Stat.Phys. 70, 125(1993) $`\left[22\right]`$ D Singer, Int. J. Appl. Math.35, 260(1978) $`\left[23\right]`$ H E Nusse & J A Yorke, Phys. Rev. Lett. 27, 328(1988) $`\left[24\right]`$ S Sinha & S Parthasarathy, Proc.Natl.Acad. Sci.USA 93,1504(1996) Figure Captions Fig.1:- Bifurcation diagram showing (a) bubble structure and (b) bistable behaviour, for a fixed value of $`a`$, with $`b`$ as the control parameter. Fig.2:- The derivative curve C1, the 1-cycle solution curve C2 and the 2-cycle solution curve C3 plotted with the value of $`a`$ at 1.3 for the map M1. In (a), $`b=1.34`$ shows the point where the 1-cycle $`X_{}^{}`$ is just born, with $`f^{}(X_{}^{})=+1`$. 
(b) With $`b=b_1=0.7`$, $`f^{}(X_{}^{})=1`$ hence the $`X_{}^{}`$ becomes unstable and the 2-cycle is just born.(c) $`b=0.3`$, shows the elements of the stable 2-cycle with $`X_1^{}`$ to the left and $`X_2^{}`$ to the right of the $`X_{}^{}`$, which is unstable now and (d) $`b=b_2=+0.7`$, the 1-cycle fixed point $`X_+^{}`$ becomes stable after the merging of $`X_1^{}`$ and $`X_2^{}`$. Fig.3:- Here the curves C1 ,C2 and C3 for the map M2 defined in (8) with $`a=1.4`$ plotted. (a) At $`b=0.5`$, it is clear from the figure that the 1-cycle solution is unstable and the 2-cycle is stable. (b) $`b=0.35`$ gives the first period doubling point i.e., here $`f^{}(X_{}^{})=1`$. (c) At $`b=b_1=0.1`$, $`f^{}(X_+^{})=+1`$, i.e., the creation of a new fixed point $`X_+^{}`$ by tangent bifurcation. Note that still $`X_{}^{}`$ is stable and (d) $`b=b_2=0.1`$, $`f^{}(X_{}^{})`$ is +1. Hence the existing fixed point $`X_{}^{}`$ disappears. Thus $`(b_1,b_2)`$ gives the bistability window. Fig.4:- Parameter space plot in $`(a,b)`$ plane (a) for map M1 and (b) for map M2.
no-problem/0001/astro-ph0001161.html
ar5iv
text
# Hubble Space Telescope Imaging of the Young Planetary Nebula GL 618 ## 1. Introduction GL 618 is a young, bipolar planetary nebula (PN). Ground-based optical and near-IR imaging of this object reveal two lobes of emission (each about 3<sup>′′</sup> in extent) separated by a dark lane. The central regions of the nebula are hidden from direct view at optical wavelengths by the lane of obscuring material. The spectrum of GL 618 is composed of a faint continuum and a variety of low-excitation emission lines. Trammell, Dinerstein, & Goodrich (1993) used spectropolarimetry to study GL 618 and found that the continuum and part of the permitted line emission are reflected from deep in the nebula. The low-excitation, forbidden line flux and remainder of the permitted line emission are produced in the bipolar lobes. The emission produced in the bipolar lobes is indicative of shock heating ($`V_s`$ = 50$``$100 kms<sup>-1</sup>). Long-slit optical spectroscopy of GL 618 confirms that the shock emission is associated with out-flowing gas (Carsenty & Solf 1982) and near-IR spectroscopy of GL 618 has revealed the presence of thermally excited H<sub>2</sub> emission (Thronson 1981; Latter et al. 1992). GL 618 exhibits \[Fe II\] emission also thought to be associated with the shock-heated gas (Kelly, Latter, & Rieke 1992) and more recent observations establish that this emission is associated with an outflow (Kelly, Hora, & Latter 1999). Shock-excited emission dominates the spectra of the lobes of GL 618. We present WFPC2 images of this object that demonstrate that the source of this shock-excited emission is a set of highly collimated outflows originating in the central regions of the object. ## 2. Observations We have obtained WFPC2 images of GL 618 as part of a HST Cycle 6 program. GL 618 was centered in the Planetary Camera which has a 36<sup>′′</sup> $`x`$ 36<sup>′′</sup> field of view and a plate scale of 0.0455<sup>′′</sup>per pixel<sup>-1</sup>. Images were obtained through four filters: F631N (isolating \[O I\]$`\lambda `$6300 line emission), F656N (isolating H$`\alpha `$ line emission), F673N (isolating \[S II\] $`\lambda `$$`\lambda `$6717,31 line emission), and F547M (a continuum band). These filters were chosen so that we could study the morphology of the shock-excited emission in the lobes of GL 618. The images were processed through the HST data reduction pipeline procedures and cosmic rays were removed by combining several exposures of each object. The images of GL 618 were obtained on 23 October 1998 and exposure times ranged from 15 to 45 minutes. ## 3. Results The overall morphologies seen in the \[S II\] and \[O I\] images are similar (Figure 1, panels (a) and (b)). These images trace the morphology of the shock-excited forbidden line emission in GL 618. Three highly collimated outflows, or jets, are seen in both images. The brightest emission occurs near the tip of each of the outflows and there is no forbidden line emission seen in the central regions of GL 618. Ripple-like morphology is seen in the outflows in both the \[S II\] and \[O I\] images. These ripples might be the result of instabilities in the flow and/or an interaction with the surrounding nebular material. The morphology observed in the H$`\alpha `$ image (Figure 1, panel (c)) differs slightly from the forbidden line morphology. H$`\alpha `$ emission is seen associated with the outflows, but in addition, a significant amount of H$`\alpha `$ emission is seen towards the central regions of GL 618. 
Spectropolarimetric observations indicate that part of the H$`\alpha `$ emission is reflected and part of this emission is produced by shocks in the lobes (Trammell et al. 1993). We have spatially separated these components in the HST images. The H$`\alpha `$ emission associated with the central regions of the object is probably reflected emission from an H II region buried deep in the nebula. A high density H II region has been observed at the center of GL 618 at radio wavelengths (e.g. Kwok & Feldman 1981) and in the reflected optical spectrum (Trammell et al. 1993). The H$`\alpha `$ emission coincident with the outflows is the shock-excited component of the permitted line emission. The \[O I\] to \[S II\] line ratios in the bullet-like structures at the tips of the outflows are approximately 3.0$``$3.5. By comparing these observed line ratios with the predictions of planar shock models (Hartigan, Raymond, & Hartmann 1987), we estimate the shock velocity in these regions to be approximately 80 kms<sup>-1</sup>. This is consistent with the range in shock velocities estimated from previous spectropolarimetric observations (Trammell et al. 1993). Careful examination of the bullet-like structures at the tip of the outflow in the upper lobe in Figure 1 reveals an excitation gradient across this region. H$`\alpha `$ is brightest on the side of the spot facing away from the central regions of GL 618. \[S II\] and \[O I\] are brighter on the side closest to the central source. This type of gradient is expected for a jet flowing away from the central source and impinging on the surrounding nebular material. The bright spots near the tops of the outflows are not clumps of material being overrun by a wind or outflow. ## 4. Discussion HST observations (e.g. Trammell & Goodrich 1996; Sahai & Trauger 1998) and ground-based imaging surveys (e.g. Balick 1987; Schwarz, Corradi, & Melnick 1992) have revealed the presence of collimated outflows, FLIERs, and a myriad of other small-scale structures in PN. The origins of these structures and their role in the overall development of PN remain puzzling. The debate concerning the origin of these small-scale structures, and also the formation of aspherical PN in general, centers on whether binary or single stars are responsible for producing aspherical mass loss. Both models of binary star interaction (e.g. Soker & Livio 1994) and magnetic confinement (e.g. Garcia-Sergura 1997), while providing a scheme for producing the overall aspherical structure in PN, may also provide mechanisms to produce the highly collimated outflows. The complex, mulitpolar outflow geometry seen in GL 618 may be difficult for either of these types of models to explain. Our observations demonstrate that jets can be present during the early phases of PN development and may play an important role in the early shaping of these objects. Futher, these collimated outflows may set the stage for the development of other small scale structures seen in more evolved objects. ## References Balick, B. 1987, AJ, 94, 671 Carsenty, U. & Solf, J. 1982, A&A, 106, 307 Garcia-Sergura, G. 1997, ApJ, 489, L189 Hartigan, P., Raymond, J., & Hartmann, L. 1987, ApJ, 316, 323 Kelly, D. M., Hora, J. L., & Latter, W. B. 1999, in preparation Kelly, D. M., Latter, W. B., & Rieke, G. H. 1992, ApJ, 395, 174 Kwok, S. & Feldman, P. A. 1981, ApJ, 247, L67 Latter, W. B., Maloney, P. R., Kelly, D. M., Black, J. H., Rieke, G. H., & Rieke, M. J. 1992, ApJ, 389, 347 Sahai, R. & Trauger, J. T. 1998, AJ, 116, 1357 Schwarz, H. 
E., Corradi, R. L. M., & Melnick, J. 1992, A&AS, 96, 23 Soker, N. & Livio, M. 1994, ApJ, 421, 219 Thronson, H. A. 1981, ApJ, 248, 984 Trammell, S. R. & Goodrich, R. W. 1996, ApJ, 468, L107 Trammell, S. R., Dinerstein, H. L., & Goodrich, R. W.1993, ApJ, 402, 249 Support for this research was provided by NASA through grant number GO-06761 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555.
no-problem/0001/cond-mat0001324.html
ar5iv
text
# Increments of Uncorrelated Time Series Can Be Predicted With a Universal 75% Probability of Success ## 1 Introduction Predicting the future evolution of a system from the analysis of past time series is the quest of many disciplines, with a wide range of useful potential applications including natural hazards (volcanic eruptions, earthquakes, floods, hurricanes, global warming, etc.), medicine (epileptic seizure, cardiac arrest, parturition, etc.) and stock markets (economic recessions, financial crashes, investments, etc.). The absolute fundamental prerequisite is that the (possibly spatio-temporal) time series $`x_1,x_2,\ldots `$ possess some dependence of the future on the past. If absent, the best prediction of the future is captured by the mathematical concept of a martingale: the expectation $`\mathrm{E}(x_{t+1}|\mathrm{past})`$ of the future conditioned on the past is the last realisation $`x_t`$. In many applications, one is interested in the variation $`x_{t+1}-x_t`$ of the time series. The result we present below is, in one sense, obvious and, in another, quite counter-intuitive. Starting from a completely uncorrelated time series, we know by definition that future values cannot be better predicted than by a random coin toss. However, we show that the sign of the increments of future values can be predicted with a remarkably high success rate of up to $`75\%`$ for symmetric time series. The derivation is straightforward but the counter-intuitive result warrants, we believe, its exposition. This little exercise illustrates how tricky the assessment of predictive power and statistical testing can be. ## 2 First derivation Consider a time series $`x(t)`$ sampled at discrete times $`t_1,t_2,\ldots `$ which can be equidistant or not. We denote by $`x_1,x_2,\ldots `$ the corresponding measurements. We assume that the measurements $`x_1,x_2,\ldots `$ are i.i.d. (independent identically distributed). Consider first the simple case where $`x_1,x_2,\ldots `$ are uniformly and independently drawn in the interval $`[0,1]`$ and the average value or expectation is $`\mathrm{E}(x)=1/2`$. ### 2.1 Prediction scheme We ask the following question: based on previous values up to $`x_i`$, what is the best predictor for the increment $`x_{i+1}-x_i`$? A naive answer would be that, since the $`x`$’s are independent and uncorrelated, their increments are also independent and the best predictor for the increment $`x_{i+1}-x_i`$ is zero (martingale choice). This turns out to be wrong. While indeed the expectation of the increment is given by $$\mathrm{E}(x_{i+1}-x_i)=\mathrm{E}(x_{i+1})-\mathrm{E}(x_i)=1/2-1/2=0,$$ (1) the conditional expectation $`\mathrm{E}(x_{i+1}-x_i|x_i)`$, conditioned on the last realization $`x_i`$, is given by $$\mathrm{E}(x_{i+1}-x_i|x_i)=\mathrm{E}(x_{i+1}|x_i)-\mathrm{E}(x_i|x_i)=\frac{1}{2}-x_i,$$ (2) where the term $`1/2`$ uses the independence between $`x_{i+1}`$ and $`x_i`$ ($`\mathrm{E}(x_{i+1}|x_i)=\mathrm{E}(x_{i+1})=1/2`$) and the last term in the r.h.s. uses the identity $`\mathrm{E}(x_i|x_i)=x_i`$. We thus see that the sign of the increment has some predictability: * if $`x_i>1/2`$, the expectation is that $`x_{i+1}`$ will be smaller than $`x_i`$; * if $`x_i<1/2`$, the expectation is that $`x_{i+1}`$ will be larger than $`x_i`$.
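As a concrete illustration of this prediction scheme (anticipating the success-rate calculation of the next subsection), here is a minimal Monte Carlo sketch of ours, not part of the original paper, that applies the rule to i.i.d. uniform variables and estimates how often the sign of $`x_{i+1}-x_i`$ is guessed correctly; the sample size and random seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.random(n)                        # i.i.d. uniform on [0, 1], mean 1/2

increments = np.diff(x)                  # x_{i+1} - x_i
predicted_sign = np.sign(0.5 - x[:-1])   # guess: the increment has the sign of 1/2 - x_i
hits = np.sign(increments) == predicted_sign

print(f"success rate p+ = {hits.mean():.4f}")   # settles near 0.75, as derived in Sec. 2.2
```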
This predictability can be seen from the fact that the increments of $`x(t)`$ are anti-correlated: $$\mathrm{E}\left(\left(x_{i+1}-x_i\right)\left(x_i-x_{i-1}\right)\right)=\mathrm{E}(x_{i+1}x_i)-\mathrm{E}(x_{i+1}x_{i-1})-\mathrm{E}(x_i^2)+\mathrm{E}(x_ix_{i-1})=\frac{1}{4}-\frac{1}{4}-\frac{1}{3}+\frac{1}{4}=-\frac{1}{12}.$$ (3) This anti-correlation leads indeed to the predictability mentioned above, namely that the best predictor for $`x_{i+1}-x_i`$ is that $`x_{i+1}-x_i`$ be of the sign opposite to $`x_i-1/2`$. Another way to understand where the predictability of the increments of uncorrelated variables comes from is to realize that increments are discrete realizations of the differentiation operator. Under its action, a flat (white noise) spectrum becomes colored towards the “blue” (which is the opposite of the well-known action of integration which “reddens” white noise) and there is thus a short-range correlation appearing in the increments. ### 2.2 Probability for a successful prediction A natural question is to determine the success rate $`p_+`$ of this strategy, i.e. the probability that the sign of the increment $`x_{i+1}-x_i`$ is, as predicted, equal to the sign of $`1/2-x_i`$. To address this question, we study the following quantity $$ϵ\equiv \mathrm{E}\left(\mathrm{sign}\left(x_{i+1}-x_i\right)\mathrm{sign}\left(\frac{1}{2}-x_i\right)\right),$$ (4) where the product of signs inside the expectation operator is $`+1`$ if the prediction is borne out by the data and $`-1`$ in the other case. The relationship between $`ϵ`$ and $`p_+`$ is $$ϵ=(+1)p_++(-1)(1-p_+)=2p_+-1\Rightarrow p_+=\frac{1}{2}+\frac{ϵ}{2}.$$ (5) Expression (5) shows that $`ϵ`$ quantifies the deviation from the random coin toss result $`p_+=50\%`$. From the definition (4), we have $$ϵ=\int _0^{\frac{1}{2}}dx_i(+1)\left(\int _0^{x_i}dx_{i+1}(-1)+\int _{x_i}^1dx_{i+1}(+1)\right)+\int _{\frac{1}{2}}^1dx_i(-1)\left(\int _0^{x_i}dx_{i+1}(-1)+\int _{x_i}^1dx_{i+1}(+1)\right),$$ (6) which gives $$ϵ=1/2,\quad \mathrm{and\ thus}\quad p_+=75\%.$$ (7) Figure 1 shows a numerical simulation which evaluates $`p_+`$ as a function of the cumulative number of realisations with the strategy that $`x_{i+1}-x_i`$ is predicted to be of the opposite sign to $`x_i-1/2`$, using a pseudo-random number generator with values uniformly distributed between $`0`$ and $`1`$. ## 3 General derivation for arbitrary distributions This result is actually quite general. Consider an arbitrary random variable $`x_i`$ with arbitrary probability density distribution $`P(x)`$ with average $`\langle x\rangle `$. We form the centered variable $$\delta _i\equiv x_i-\langle x\rangle $$ (8) with zero mean $`\langle \delta \rangle =0`$ and pdf $`P(\delta )`$. Similarly to (2), we study the conditional expectation of its increments $`\delta _{i+1}-\delta _i`$, given the last realization $`\delta _i`$: $$\mathrm{E}(\delta _{i+1}-\delta _i|\delta _i)=\mathrm{E}(\delta _{i+1}|\delta _i)-\mathrm{E}(\delta _i|\delta _i)=-\delta _i,$$ (9) where we have used the fact that the $`\delta _i`$’s are uncorrelated. Thus, the best predictor of the sign of the increment of the $`\delta `$’s is the opposite of the sign of the last realization. We then quantify the probability of prediction success through the quantity defined similarly to (4) as $$ϵ\equiv \mathrm{E}\left(\mathrm{sign}\left(\delta _{i+1}-\delta _i\right)\mathrm{sign}\left(-\delta _i\right)\right),$$ (10) which is related to the success probability $`p_+`$ by (5).
It is easily calculated as $$ϵ=\int _{-\infty }^0d\delta _iP(\delta _i)(+1)\left[\int _{-\infty }^{\delta _i}d\delta _{i+1}P(\delta _{i+1})(-1)+\int _{\delta _i}^{+\infty }d\delta _{i+1}P(\delta _{i+1})(+1)\right]+\int _0^{+\infty }d\delta _iP(\delta _i)(-1)\left[\int _{-\infty }^{\delta _i}d\delta _{i+1}P(\delta _{i+1})(-1)+\int _{\delta _i}^{+\infty }d\delta _{i+1}P(\delta _{i+1})(+1)\right].$$ (11) It is convenient to introduce the cumulative distribution $$F(x)\equiv \int _{-\infty }^xd\delta P(\delta ),$$ (12) and the probabilities $`F_{-}=F(0)`$ (resp. $`F_+=1-F(0)`$) that $`\delta `$ be less (resp. larger) than $`0`$. Expression (11) transforms into $$ϵ=F_{-}-F_+-\left(F(0)\right)^2+\left(F(+\infty )\right)^2-\left(F(0)\right)^2,$$ (13) where we have used the identity $`2\int _{-\infty }^ydxP(x)F(x)=[F(y)]^2`$. Using the definition of $`F_{-}`$ and $`F_+`$ and the normalization $`F(+\infty )=1`$ leads to $$ϵ=2F_+(1-F_+)\quad \mathrm{and}\quad p_+=\frac{1}{2}+F_+(1-F_+).$$ (14) For symmetric distributions and for those distributions such that $`F_+=1/2`$, we retrieve the previous result (7). This result is thus seen to be very general and independent of the shape of the distribution of the i.i.d. variables as long as $`F_+=1/2`$ (attained in particular but not exclusively for symmetric distributions). Note that the value $`p_+=75\%`$ is the largest possible result, attained for $`F_+=1/2`$. For $`F_+\ne 1/2`$, $`0.5\le p_+<0.75`$. Figure 2 shows the estimation of $`p_+`$ used on the thirty-year US treasury bond TYX from Oct. 29, 1993 till Aug. 9, 1999. Specifically, we start from the daily close quotes $`q(t)`$ and construct the price variations $`\delta q(t)=q(t)-q(t-1)`$. We try to predict the variation of $`\delta q(t)`$ with the strategy that $`\delta q(t+1)-\delta q(t)`$ is predicted to be of the opposite sign to $`\delta q(t)-\langle \delta q\rangle `$. The corresponding success probability $`p_+`$ is plotted as a function of time by cumulating the realizations to estimate $`p_+`$. As expected, at the beginning, large fluctuations express the limited statistics. As the statistics improves, $`p_+`$ converges to the predicted value $`75\%`$. We note that, in comparison to the pseudo-random number series shown in figure 1, the convergence seems to occur at a similar rate, suggesting that there are no appreciable global short-range correlations, in agreement with many previous statistical tests. ## 4 Discussion This paradoxical result tells us that one can get on average a success rate of three out of four in guessing the sign of the increment of uncorrelated random variables. This is quite surprising a priori but, as we explained above, stems from the action of the differential operator which makes the spectrum “bluish”, thus introducing short-range correlations. This predictive skill does not lead to any anomalies. Consider for instance the time series of price returns of a stock market. According to the efficient market hypothesis (ref. and references therein) and the random walk model, successive (say daily) price returns of liquid organized markets are essentially independent with approximately symmetric distributions. Our result (14) then shows that we can predict with a $`75\%`$ accuracy the sign of the increment of the daily returns (and not the sign of the returns, which are proportional to the increment of the prices themselves). This predictive skill is not associated with an arbitrage opportunity in market trading. This can be seen as follows.
For simplicity of language, we consider price returns $`\delta `$’s relative to their average so that we deal with uncorrelated variables with zero mean as defined in (8). In addition, we restrict our discussion to the optimal case where $`F_+=1/2`$. Consider first the situation where $`\delta _i`$ is positive and quite large (say two standard deviations above zero). We expect that any typical realization, and in particular the next one $`\delta _{i+1}`$, to be positive or negative but close to zero to within say one standard deviation. This implies that we expect with a large probability $`\delta _{i+1}`$ to be smaller than $`\delta _i`$. This is the guess that is compatible and in fact constructs the result (14). Consider now the second situation where $`\delta _i`$ is positive but very small and close to zero. We then have by construction of the process that $`\delta _{i+1}`$ will be larger or smaller than $`\delta _i`$ with probability close to $`1/2`$. In this case, we loose any predictive skill. What the result (14) quantifies mathematically is that all these types of realizations averages out to a global probability of $`75\%`$ for the sign of the increment to be predicted by the sign of $`\delta _i`$. This large value is not giving us any “martingale” (in the common sense of the word). Actually, it states simply that, for independent realizations, large values have to be followed by smaller ones. This analysis relies fundamentally on the independence between successive occurrence of the variables $`\delta _i`$. Predicting with $`75\%`$ probability the sign of $`\delta _{i+1}\delta _i`$ does not improve in any way our success rate for prediction the sign of $`\delta _{i+1}`$ (which would be the real arbitrage opportunity). Deviations from $`p_+=75\%`$, and in particular results larger than $`75\%`$ which is a maximum in the uncorrelated case (see (14), signal the presence of correlations. An instance is shown in figure 3 which plots $`p_+`$ for the prediction of the variations of the isotopic deuterium time series from the Vostok (south Pole) ice core sample, which is a proxy for the local temperature from about 220 ky in the past to present. The data is taken from . We observe that $`p_+`$ remains above $`75\%`$ showing a significant genuine anti-correlation. Acknowledgements: We thank P. Yiou for providing the temperature time series and S. Gluzman for stimulating discussions.
no-problem/0001/quant-ph0001034.html
ar5iv
text
# Inequalities for dealing with detector inefficiencies in Greenberger-Horne-Zeilinger-type experiments Copyright (2000) by the American Physical Society. To appear in Phys. Rev. Lett. ## Abstract In this article we show that the three-particle GHZ theorem can be reformulated in terms of inequalities, allowing imperfect correlations due to detector inefficencies. We show quantitatively that taking into account these inefficiencies, the published results of the Innsbruck experiment support the nonexistence of local hidden variables that explain the experimental results. The issue of the completeness of quantum mechanics has been a subject of intense research for almost a century. Recently, Greenberger, Horne and Zeilinger (GHZ) proposed a new test for quantum mechanics based on correlations between more than two particles . What makes the GHZ proposal distinct from Bell’s inequalities is that they use perfect correlations that result in mathematical contradictions. The argument, as stated by Mermin in , goes as follows. We start with a three-particle entangled state $$|\psi =\frac{1}{\sqrt{2}}(|+_1|+_2|_3+|_1|_2|+_3).$$ This state is an eigenstate of the following spin operators: $`\widehat{𝐀}`$ $`=`$ $`\widehat{\sigma }_{1x}\widehat{\sigma }_{2y}\widehat{\sigma }_{3y},\widehat{𝐁}=\widehat{\sigma }_{1y}\widehat{\sigma }_{2x}\widehat{\sigma }_{3y},`$ $`\widehat{𝐂}`$ $`=`$ $`\widehat{\sigma }_{1y}\widehat{\sigma }_{2y}\widehat{\sigma }_{3x},\widehat{𝐃}=\widehat{\sigma }_{1x}\widehat{\sigma }_{2x}\widehat{\sigma }_{3x}.`$ From the above we have that the expected correlations $`E(\widehat{𝐀})=E(\widehat{𝐁})=E(\widehat{𝐂})=1.`$ However, $`\widehat{𝐃}=\widehat{𝐀}\widehat{𝐁}\widehat{𝐂},`$ and we also obtain that, according to quantum mechanics, $`E(\widehat{𝐃})=E(\widehat{𝐀}\widehat{𝐁}\widehat{𝐂})=1.`$ It is easy to show that these correlations yield a contradiction if we assume that spin exist independent of the measurement process. GHZ’s proposed experiment, however, has a major problem. How can one verify experimentally predictions based on perfect correlations? This was also a problem in Bell’s original paper. To “avoid Bell’s experimentally unrealistic restrictions”, Clauser, Horne, Shimony and Holt derived a new set of inequalities that would take into account imperfections in the measurement process. A main purpose of this article is to derive a set of inequalities for the experimentally realizable GHZ correlations. We show that the following four inequalities are both necessary and sufficient for the existence of a local hidden variable, or, equivalently , a joint probability distribution of $`𝐀`$, $`𝐁`$, $`𝐂`$, and $`\mathrm{𝐀𝐁𝐂}`$, where $`𝐀,𝐁,𝐂`$ are three $`\pm 1`$ random variables. $$2E(𝐀)+E(𝐁)+E(𝐂)E(\mathrm{𝐀𝐁𝐂})2,$$ (1) $$2E(𝐀)+E(𝐁)+E(𝐂)+E(\mathrm{𝐀𝐁𝐂})2,$$ (2) $$2E(𝐀)E(𝐁)+E(𝐂)+E(\mathrm{𝐀𝐁𝐂})2,$$ (3) $$2E(𝐀)+E(𝐁)E(𝐂)+E(\mathrm{𝐀𝐁𝐂})2.$$ (4) For the necessity argument we assume there is a joint probability distribution consisting of the eight atoms $`abc,\mathrm{},\overline{a}\overline{b}\overline{c}`$, where we use a notation where $`a`$ is $`𝐀=1`$, $`\overline{a}`$ is $`𝐀=1`$, and so on. Then, $`E(𝐀)=P(a)P(\overline{a})`$, where $`P(a)=P(abc)+P(a\overline{b}c)+P(ab\overline{c})+P(a\overline{b}\overline{c})`$, and $`P(\overline{a})=P(\overline{a}bc)+P(\overline{a}\overline{b}c)+P(\overline{a}b\overline{c})+P(\overline{a}\overline{b}\overline{c})`$, and similar equations hold for $`E(𝐁)`$ and $`E(𝐂)`$. Next we do a similar analysis of $`E(\mathrm{𝐀𝐁𝐂})`$ in terms of the eight atoms. 
Corresponding to (1), we now sum over the probability expressions for the expectations $`F=E(𝐀)+E(𝐁)+E(𝐂)E(\mathrm{𝐀𝐁𝐂})`$, and obtain $`F`$ $`=`$ $`2[P(abc)+P(\overline{a}bc)+P(a\overline{b}c)+P(ab\overline{c})]`$ $`2[P(\overline{a}\overline{b}\overline{c})+P(\overline{a}\overline{b}c)+P(\overline{a}b\overline{c})+P(a\overline{b}\overline{c})].`$ Since all the probabilities are nonnegative and sum to $`1`$, we infer (1) at once. The derivation of the other three inequalities is similar. To prove the converse, i.e., that these inequalities imply the existence of a joint probability distribution, is slightly more complicated. We restrict ourselves to the symmetric case $`P(a)=P(b)=P(c)p`$, $`P(\mathrm{𝐀𝐁𝐂}=1)q`$ and thus $`E(𝐀)=E(𝐁)=E(𝐂)=2p1,`$ $`E(\mathrm{𝐀𝐁𝐂})=2q1.`$ In this case, (1) can be written as $`03pq2,`$ while the other three inequalities yield just $`0p+q2`$. Let $`xP(\overline{a}bc)=P(a\overline{b}c)=P(ab\overline{c})`$, $`yP(\overline{a}\overline{b}c)=P(\overline{a}b\overline{c})=P(a\overline{b}\overline{c})`$, $`zP(abc)`$ and $`wP(\overline{a}\overline{b}\overline{c})`$. It is easy to show that on the boundary $`3p=q`$ defined by the inequalities the values $`x=0,y=\frac{q}{3},z=0,w=1q`$ define a possible joint probability distribution, since $`3x+3y+z+w=1`$. On the other boundary, $`3p=q+2`$ a possible joint distribution is $`x=\frac{(1q)}{3},y=0,z=q,w=0`$. Then, for any values of $`q`$ and $`p`$ within the boundaries of the inequality we can take a linear combination of these distributions with weights $`\frac{3pq}{2}`$ and $`1\frac{3pq}{2}`$ and obtain the joint probability distribution, $`x=(1\frac{3pq}{2})\frac{1q}{3},y=\frac{3pq}{2}\frac{q}{3},z=(1\frac{3pq}{2})q,w=\frac{3pq}{2}(1q)`$, which proves that if the inequalities are satisfied a joint probability distribution exists, and therefore a local hidden variable as well. The generalization to the asymmetric case is tedious but straightforward. The correlations present in the GHZ state are so strong that even if we allow for experimental errors, the non-existence of a joint distribution can still be verified. Let $`E(𝐀)=E(𝐁)=E(𝐂)1ϵ`$, $`E(\mathrm{𝐀𝐁𝐂})1+ϵ`$, where $`ϵ`$ represents a decrease of the observed correlations due to experimental errors. To see this, let us compute the value of $`F`$ defined above, $`F=3(1ϵ)(1+ϵ).`$ But the observed correlations are only compatible with a local hidden variable theory if $`F2`$, hence $`ϵ<\frac{1}{2}.`$ Then, in the symmetric case, there cannot exist a joint probability distribution of $`𝐀,𝐁`$ and $`𝐂`$ satisfying (i) and (ii) if $`ϵ<1/2.`$ We will give an analysis of what happens to the correlations when the detectors have efficiency $`d[0,1]`$ and a probability $`\gamma `$ of detecting a dark photon within the window of observation when no real photon is detected. Our analysis will be based on the experiment of Bouwmeester et al. . In their experiment, an ultraviolet pulse hits a nonlinear crystal, and pairs of correlated photons are created. There is also a small probability that two pairs are created within a window of observation, making them indistinguishable. When this happens, by restricting to states where only one photon is found on each output channel to the detectors, we obtain the following state, $$\frac{1}{\sqrt{2}}|+_T(|+_1|+_2|_3+|_1|_2|+_3),$$ where the subscripts refer to the detectors and $`+`$ and $``$ to the linear polarization of the photon. 
Hence, if a photon is detected at the trigger $`T`$ (located after a polarizing beam splitter) the three-photon state at detectors $`D_1,D_2`$, and $`D_3`$ is a GHZ-correlated state (see FIG. 1). We will assume that double pairs created have the expected GHZ correlation, and the probability negligible of having triple pair produtions or of having fourfold coincidence registered when no photon is generated. (Our analysis is different from that of Żukowski , who considered only ideal detectors.) Two possibilities are left: i) a pair of photons is created at the parametric down converter; ii) two pairs of photons are created. We will denote by $`p_1p_2`$ the pair creation, and by $`p_1\mathrm{}p_4`$ the two-pair creation. We will assume that the probabilities add to one, i.e. $`P\left(p_1\mathrm{}p_4\right)+P\left(p_1p_2\right)=1.`$ We start with two photons. $`p_1p_2`$ can reach any of the following combinations of detectors: $`TD_1,`$ $`TD_2,`$ $`TD_3,`$ $`D_1D_1,`$ $`D_1D_2,`$ $`D_1D_3,`$ $`D_2D_2,`$ $`D_2D_3,`$ $`D_3D_3,`$ $`TT`$. For an event to be counted as being a GHZ state, all four detectors must fire (this conditionalization is equivalent to the enhancement hypothesis). We take as our set of random variables $`𝐓,𝐃_1,𝐃_2,𝐃_3`$ which take values $`1`$ (if they fire) or $`0`$ (if they don’t fire). We will use $`t,d_1,d_2,d_3`$ ($`\overline{t},\overline{d}_1,\overline{d}_2,\overline{d}_3`$) to represent the value 1 (0). We want to compute $`P\left(td_1d_2d_3p_1p_2\right),`$ the probability that all detectors $`T,D_1,D_2,D_3`$ fire simultaneously given that only a pair of photons has been created at the crystal. We start with the case when the two photons arrive at detectors $`T`$ and $`D_3.`$ Since the efficiency of the detectors is $`d`$, the probability that both detectors detect the photons is $`d^2,`$ the probability that only one detects is $`2d(1d)`$ and the probability that none of them detect is $`(1d)^2.`$ Taking $`\gamma `$ into account, then the probability that all four detectors fire is $$P\left(td_1d_2d_3p_1p_2=TD_3\right)=\gamma ^2\left(d+\gamma (1d)\right)^2,$$ where $`p_1p_2=TD_3`$ represents the simultaneous (i.e. within a measurement window) arrival of the photons a the trigger $`T`$ and at $`D_3.`$ Similar computations can be carried out for $`p_1p_2=TD_1,`$ $`TD_2,`$ $`D_1D_3,`$ $`D_1D_2,`$ $`D_2D_3.`$ For $`p_1p_2=D_iD_i`$ the computation of $`P\left(td_1d_2d_3p_1p_2=D_iD_i\right)`$ is different. The probability that exactly one of the photons is detected at $`D_i`$ is $`d(1d)`$ and the probability that none of them are detected is $`(1d)^2.`$ Then, it is clear that $$P\left(td_1d_2d_3p_1p_2=D_iD_i\right)=d\left(1d\right)\gamma ^3+(1d)^2\gamma ^4,$$ and we have at once that $`P\left(td_1d_2d_3p_1p_2\right)`$ $`=`$ $`6\gamma ^2\left(d+\gamma (1d)\right)^2`$ $`+4\gamma ^3\left(1d\right)\left(d+\gamma \right).`$ We note that the events involving $`P\left(td_1d_2d_3p_1p_2\right)`$ have no spin correlation, contrary to GHZ events. We now turn to the case when four photons are created. The probability that all four are detected is $`d^4,`$ that three are detected is $`4d^3(1d),`$ that two are detected is $`6d^2(1d)^2,`$ that one is detected is $`4d(1d)^3,`$ and that none is detected is $`(1d)^4.`$ If all four are detected, we have a true GHZ-correlated state detected. However, one can again have four detections due to dark counts. 
We will write $`p_1\mathrm{}p_4=GHZ`$ to represent having the four GHZ photons detected, and $`p_1\mathrm{}p_4=\overline{GHZ}`$ as having the four detections as a non-GHZ state. We can write that $$P\left(td_1d_2d_3p_1\mathrm{}p_4=GHZ\right)=d^4+\gamma \left(1d\right)d^3$$ (5) and $$P\left(td_1d_2d_3p_1\mathrm{}p_4=\overline{GHZ}\right)=3\gamma d^3(1d)+6\gamma ^2d^2(1d)^2+4\gamma ^3d(1d)^3+\gamma ^4(1d)^4.$$ The last term in (5) comes from the unique role of the trigger $`T,`$ that needs to detect a photon but not necessarily one that has a GHZ correlation. How do the non-GHZ detections change the GHZ expectations? What is measured in the laboratory is the conditional correlation $`E\left(𝐒_1𝐒_2𝐒_3td_1d_2d_3\right)`$, where $`𝐒_1,`$ $`𝐒_2`$ and $`𝐒_3`$ are random variables with values $`\pm 1,`$ representing the spin measurement at $`D_1,D_2`$ and $`D_3`$ respectively. We can write it as $$E\left(𝐒_1𝐒_2𝐒_3td_1d_2d_3\right)=\frac{E\left(𝐒_1𝐒_2𝐒_3td_1d_2d_3\&GHZ\right)P(GHZ)}{P(GHZ)+P(\overline{GHZ})}.$$ since for non-GHZ states we expect a correlation zero for the term $$\frac{E\left(𝐒_1𝐒_2𝐒_3td_1d_2d_3\&\overline{GHZ}\right)P(\overline{GHZ})}{P(GHZ)+P(\overline{GHZ})}.$$ Neglecting terms of higher order than $`\gamma ^2`$, using $`\gamma d`$, and $`P(p_1p_2)P(p_1\mathrm{}p_4),`$ we obtain, from $`P(\overline{GHZ})=6P(p_1p_2)\gamma ^2d^2+3P(p_1\mathrm{}p_4)\gamma (1d)d^3`$ and $`P\left(GHZ\right)=P(p_1\mathrm{}p_4)\left[d^4+\gamma \left(1d\right)d^3\right],`$ that $$E\left(𝐒_1𝐒_2𝐒_3td_1d_2d_3\right)=\frac{E(𝐒_1𝐒_2𝐒_3td_1d_2d_3\&GHZ)}{\left[1+6\frac{P(p_1p_2)}{P(p_1\mathrm{}p_4)}\frac{\gamma ^2}{d^2}\right]}.$$ (6) This value is the corrected expression for the conditional correlations if we have detector efficiency taken into account. The product of the random variables $`𝐒_1𝐒_2𝐒_3`$ can take only values $`+1`$ or $`1.`$ Then, if their expectation is $`E\left(𝐒_1𝐒_2𝐒_3td_1d_2d_3\right)`$ we have $$P\left(𝐒_1𝐒_2𝐒_3=1td_1d_2d_3\right)=\frac{1+E\left(𝐒_1𝐒_2𝐒_3td_1d_2d_3\right)}{2}.$$ The variance $`\sigma ^2`$ for a random variable that assumes only $`1`$ or $`1`$ values is $`4P(1)\left(1P(1)\right).`$ Hence, in our case we have as a variance $$\sigma ^2=1\left[E\left(𝐒_1𝐒_2𝐒_3td_1d_2d_3\right)\right]^2.$$ We will estimate the values of $`\gamma `$ and $`d`$ to see how much $`E\left(𝐒_1𝐒_2𝐒_3td_1d_2d_3\right)`$ would change due to experimental errors. For that purpose, we will use typical rates of detectors for the frequency used at the Innsbruck experiment, as well as their reported data . First, modern detectors usually have $`d0.5`$ for the wavelengths used at Innsbruck. We assume a dark-count rate of about $`3\times 10^2`$ counts/s. With a time window of coincidence measurement of $`2\times 10^9`$ s, we then have that the probability of a dark count in this window is $`\gamma 6\times 10^7.`$ From we use that the ratio $`P(p_1p_2)/P(p_1\mathrm{}p_2)`$ is on the order of $`10^{10}.`$ Substituting this three numerical values in (6) we have $`E\left(𝐒_1𝐒_2𝐒_3td_1d_2d_3\right)0.9.`$ From this expression it is clear that the change in correlation imposed by the dark-count rates is significant for the given parameters. However, it is also clear that the value of the correlation is quite sensitive to changes in the values of both $`\gamma `$ and $`d.`$ We can now compare the values we obtained with the ones observed by Bouwmeester et al. for GHZ and $`\overline{GHZ}`$ states . In their case, they claim to have obtained a ratio of $`1:12`$ between $`\overline{GHZ}`$ and GHZ states. 
In this case the correlations are $`E\left(𝐒_1𝐒_2𝐒_3td_1d_2d_3\right)0.92.`$ It is clear that a detailed analysis of the parameters would be necessary to fit the experimental result to the predicted correlations that take the inefficiencies into account, but at this point one can see that values close to an experimentally measured $`0.92`$ can be obtained with appropriate choices of the parameters $`d`$ and $`\gamma `$ (see FIG. 2). This expected correlation also satisfies $$E\left(𝐒_1𝐒_2𝐒_3td_1d_2d_3\right)>1\frac{1}{2}.$$ (7) This result is enough to prove the nonexistence of a joint probability distribution. We should note that the standard deviation in this case is $$\sigma \sqrt{\left(1+0.92\right)\left(10.92\right)}=0.39.$$ (8) As a consequence, since $`0.920.39=0.53,`$ the result $`0.92`$ is bounded away from the classical limit $`0.5`$ by more than one standard deviation (see FIG. 3). We showed that the GHZ theorem can be reformulated in a probabilistic way to include experimental inefficiencies. The set of four inequalities (1)-(4) sets lower bounds for the correlations that would prove the nonexistence of a local hidden-variable theory. Not surprisingly, detector inefficiencies and dark-count rates can change considerably the correlations. How do these results relate to previous ones obtained in the large literature of detector inefficiencies in experimental tests of local hidden-variable theories. We start with Mermin’s paper , where an inequality for $`F`$ similar to ours but for the case of $`n`$-correlated particles is derived. Mermin does not derive a minimum correlation for GHZ’s original setup that would imply the non-existence of a hidden-variable theory, as his main interest was to show that the quantum mechanical results diverge exponentially from a local hidden-variable theory if the number of entangled particles increase. Braunstein and Mann take Mermin’s results and estimate possible experimental errors that were not considered here. They conclude that for a given efficiency of detectors the noise grows slower than the strong quantum mechanical correlations. Reid and Munru obtained an inequality similar to our first one, but there are sets of expectations that satisfy their inequality and still do not have a joint probability distribution. In fact, as we mentioned earlier, our complete set of inequalities is a necessary and sufficient condition to have a joint probability distribution. We have used an enhancement hypothesis, namely, that we only counted events with all four simultaneous detections, and showed that with the coincidence constraint a joint probability did not exist in the Innsbruck experiment. Enhancement hypotheses have to be used when detector efficiencies are low, but they may lead to loopholes in the arguments about the nonexistence of local hidden-variable theories. Loophole-free requirements for detector inefficiencies are based on the analysis of for the Bell case and for for the GHZ experiment without enhancement. However, in the Innsbruck setup enhancement is necessary, as the ratio of pair to two-pair production is of the order of $`10^{10}`$ . Until experimental methods are found to eliminate the use of enhancement in GHZ experiments, no loophole-free results seem possible. FIG. 3 shows the number of standard deviations, as computed above, by which the existence of a joint distribution is violated. 
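To make the numbers behind FIG. 3 concrete, the following Python sketch (ours, not from the paper) evaluates the corrected correlation of Eq. (6) assuming an ideal GHZ value $`E=-1`$ for the relevant analyzer setting, the single-event standard deviation of Eq. (8), and the number of such standard deviations separating the result from the classical bound $`-0.5`$, as a function of the dark-count rate. The detector efficiency, coincidence window and pair-production ratio are the representative values quoted in the text; FIG. 3 itself may use a different error model, so this is only an order-of-magnitude illustration.

```python
def corrected_correlation(dark_rate_per_s,
                          efficiency=0.5,          # d, detector efficiency
                          window_s=2e-9,           # coincidence window
                          pair_ratio=1e10,         # P(p1 p2) / P(p1 ... p4)
                          e_ghz=-1.0):             # assumed ideal GHZ correlation
    gamma = dark_rate_per_s * window_s             # dark-count probability per window
    dilution = 1.0 + 6.0 * pair_ratio * gamma**2 / efficiency**2   # Eq. (6)
    return e_ghz / dilution

for rate in (300.0, 50.0):                         # dark counts per second
    e = corrected_correlation(rate)
    sigma = (1.0 - e**2) ** 0.5                    # Eq. (8), single-event std. dev.
    n_sigma = (abs(e) - 0.5) / sigma               # distance from the classical bound 0.5
    print(f"dark rate {rate:5.0f}/s :  E = {e:+.3f},  sigma = {sigma:.3f},  "
          f"{n_sigma:.1f} sigma beyond |E| = 0.5")
# ~300/s reproduces E ~ -0.92 and roughly one standard deviation of separation;
# lowering the dark-count rate to ~50/s pushes the separation to several sigma.
```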
We can see that if we change the experiment such that we reduce the dark-count rate to 50 per second, instead of the assumed 300, a large improvement in the experimental result would be expected. Detectors with this dark-count rate and the assumed efficiency are available . We emphasize that there are other possible experimental manipulations that would increase the observed correlation, e.g. the ratio $`P(p_1p_2)/P(p_1\mathrm{}p_2),`$ but we cannot enter into such details here. The point to hold in mind is that FIG. 3 provides an analysis that can absorb any such changes or other sources of error, not just the dark-count rate, to give a measure of reliability. We would like to thank Prof. Sebastião J. N. de Pádua for comments, as well as the anonymous referees.
no-problem/0001/astro-ph0001101.html
ar5iv
text
# CCD PHOTOMETRY OF THE GLOBULAR CLUSTER 𝜔 CENTAURI. I. METALLICITY OF RR LYRAE STARS FROM 𝐶⁢𝑎⁢𝑏⁢𝑦 PHOTOMETRY ## 1 INTRODUCTION For dating globular clusters and several other important problems (e.g., measuring distances to Population II objects), it is essential to know the luminosity of the RR Lyrae stars, M<sub>bol</sub>(RR), and how it varies with metal abundance (see Sandage 1990b; Lee, Demarque, and Zinn 1990, hereafter LDZ). The variation of M<sub>bol</sub>(RR) with \[Fe/H\] affects the age - metallicity relation of the Galactic globular cluster system, and thus provides constraints on the scenarios of the Galaxy formation. However, due to the variety of different techniques used, the particular data set chosen, and the reddening corrections adopted, there is no consensus on the size of the dependency of M<sub>bol</sub>(RR) upon \[Fe/H\] (Layden et al. 1996). To investigate and resolve the problem of the dependence of M<sub>bol</sub>(RR) on \[Fe/H\], one needs a large sample of RR Lyrae stars, spanning a wide range of \[Fe/H\], for which precise measurements of relative luminosity and \[Fe/H\] exist. The RR Lyrae stars in $`\omega `$ Cen are an ideal sample for this study. In $`\omega `$ Cen, there is a wide range in \[Fe/H\], and clearly the relative values of M<sub>bol</sub>(RR) can be inferred straightforwardly from their mean apparent visual magnitudes since they are all located at the same distance and are all reddened by the same amount. However, investigations by Freeman & Rodgers(1975), Butler et al. (1978, hereafter BDE), Sandage (1982), and Gratton et al. (1986, hereafter GTO) have revealed that the M<sub>bol</sub>(RR) - \[Fe/H\] correlation in $`\omega `$ Cen is peculiar: a few metal-rich $`([\mathrm{Fe}/\mathrm{H}]>1.1)`$ RR Lyrae stars in their sample are fainter than the more metal-poor ones, but no obvious correlation exists among the metal-poor $`([\mathrm{Fe}/\mathrm{H}]<1.4)`$ RR Lyrae stars. This and the lack of a period-shift - \[Fe/H\] correlation amongst the variables was recognized by Sandage (1982) as a possible contradiction to his steep correlation between M<sub>bol</sub>(RR) and \[Fe/H\]. In general these “discrepant” observational results were simply considered to be yet another anomaly of the stellar population of $`\omega `$ Cen (see also Smith 1995). Recent advances in our understanding of the evolution of horizontal-branch (HB) stars are throwing new light on this long-standing problem. In particular, the HB evolutionary models by Lee (1990) suggest that M<sub>bol</sub>(RR) depends on HB morphology as well as metallicity, especially when the HB morphology is extremely blue, due to the effect of redward evolution off the zero-age horizontal-branch (ZAHB). Using these model calculations, Lee (1991) has shown that the observed nonlinear behavior of M<sub>bol</sub>(RR) with \[Fe/H\] in $`\omega `$ Cen is not something peculiar, but is in fact predicted. 
The detailed model calculations suggest that two effects are responsible for the observed behavior of M<sub>bol</sub>(RR) with \[Fe/H\] in $`\omega `$ Cen, and are: (1) the abrupt increase in M<sub>bol</sub>(RR) near \[Fe/H\] = -1.5 as RR Lyrae stars become highly evolved stars from the blue side of the instability strip as HB morphology gets bluer with decreasing \[Fe/H\], and (2) the nonmonotonic behavior of the HB morphology with decreasing \[Fe/H\], which together with the first effect makes the correlation between M<sub>bol</sub>(RR) and \[Fe/H\] looks like a step function, because M<sub>bol</sub>(RR) depends sensitively on HB morphology. Despite the lack of a complete understanding of why HB morphology changes as it does, the definite conclusions from Lee’s (1991) work are: (1) The correlation between M<sub>bol</sub>(RR) and \[Fe/H\] in the halo of our Galaxy is probably not linear due to the effect of HB morphology (evolution). (2) The use of a simple linear relationship between M<sub>bol</sub>(RR) and \[Fe/H\] in deriving the distances to blue HB clusters should be avoided. This suggests that when the distances to the population II objects are to be estimated using the RR Lyrae stars, the HB type of the stellar population, as well as metallicity, must be known. Although the $`\omega `$ Cen data do appear to support this model, a definite conclusion was not possible because of the uncertainty in \[Fe/H\] and of the lack of metal-rich stars in the available data. In order to provide a more complete and homogeneous sample of RR Lyrae stars with relatively well-measured metallicity, we obtained \[Fe/H\] abundances for most of the RR Lyrae stars in $`\omega `$ Cen, using the $`Caby`$ photometric system. The $`Caby`$ photometric system is an expansion of the standard $`uvby`$ system with the inclusion of a fifth filter, $`Ca`$, centered on the K and H lines of Ca II (90 $`\AA `$ FWHM). The $`hk`$ index is defined as $`hk`$ = $`(Cab)(by)`$, and is found to be much more sensitive to metal abundance than the $`Str`$$`\ddot{o}`$$`mgren`$ $`m_1`$ index (Anthony-Twarog et al. 1991; Twarog & Anthony-Twarog 1991, 1995; Anthony-Twarog & Twarog 1998). The sensitivity of the $`hk`$ index to metallicity changes is high at all \[Fe/H\] for hotter stars and also for cooler stars more metal-poor than \[Fe/H\] = -1.0. It is about three times more sensitive than the $`m_1`$ index (see Fig. 9 of Twarog & Anthony-Twarog 1995). Baird (1996, hereafter B96) extended $`Caby`$ photometry to RR Lyrae stars of known metallicity and showed that the $`hk`$ index retains good sensitivity even at the hottest phases of pulsation. It was demonstrated that isometallicity lines formed in the $`hk/(by)`$ diagram are single valued with respect to both $`by`$ and $`hk`$. Therefore, the $`hk/(by)`$ diagram gives consistent metallicities throughout a star’s pulsational cycle, including during rising light and near maximum light, when $`\mathrm{\Delta }`$$`S`$ results are unreliable, and so precise knowledge of light curve phase is unnecessary. An additional advantage of the photometric approach is that standard crowded-field techniques can be used to measure stars even in rich cluster centers. In this paper we present the results of a new $`Caby`$ photometric survey of 131 RR Lyrae stars in $`\omega `$ Cen, from which metal abundances are derived via the $`hk`$ index. In section 2, we describe the observations and the reduction procedures. The adopted metallicity calibration procedures are outlined in section 3. 
In section 4, we present the results of our metallicity determinations for field RR Lyrae stars and $`\omega`$ Cen RR Lyrae stars, with a comparison with the previous $`\mathrm{\Delta }S`$ measurements. Finally, in section 5 we discuss the impact of our new metallicity measurements on the M<sub>V</sub>(RR) - \[Fe/H\] and period-shift - \[Fe/H\] relations. The color-magnitude diagram resulting from the $`Caby`$ photometry, and a discussion of the metallicity distribution of giant branch stars, will be presented in a future paper of this series.

## 2 OBSERVATIONS AND DATA REDUCTIONS

All the observations were made with the CTIO 0.9 m telescope and Tektronix 2048 No. 3 CCD during three nights of an observing run in March 1997. We covered $`\omega`$ Cen with a 3 $`\times`$ 3 grid and observed one sequence of this grid each night. The field of each grid point was 13.6 $`\times`$ 13.6 arcmin with a pixel scale of 0.40 arcsec. Our program field, centered on the cluster, covers approximately 40 $`\times`$ 40 arcmin, which roughly corresponds to the area enclosed within half the tidal radius of $`\omega`$ Cen. Typical exposure times were 1400 s for $`Ca`$, 360 s for $`b`$, and 180 s for $`y`$, with the CCD being read out simultaneously through all four amplifiers using an Arcon CCD controller. The observation log for the program fields is presented in Table 1. Two to four frames were taken in each band and each field. The frames were calibrated with twilight or dawn sky flats and zero-level exposures, using the IRAF QUADPROC routines. Calibration frames were made by combining several individual exposures. All exposure times were sufficiently long that the center-to-corner shutter timing error was negligible. These procedures produced object frames with the sky flat to better than 1% in all filters. The IRAF routine COSMICRAYS was used to remove nearly all of the cosmic-ray events in each frame, with conservative parameters set to avoid corrupting the stellar profiles. Photometry of $`\omega`$ Cen stars was carried out with DAOPHOT II and ALLSTAR (Stetson 1987, 1995). For each frame, a Moffat-function PSF, varying cubically with radial position, was constructed from 100 to 200 bright, isolated, and unsaturated stars. The PSF was improved iteratively by subtracting faint nearby companions of the PSF stars. Aperture corrections were calculated using the program DAOGROW (Stetson 1990). The final aperture corrections were made by adjusting the ALLSTAR magnitudes of all stars by the weighted mean of the difference between the total aperture magnitude and the profile-fitting ALLSTAR magnitude for selected stars (e.g., the PSF stars). After the aperture correction, we used DAOMATCH/DAOMASTER (Stetson 1992) to match the stars of all frames covering the same field, and derived average instrumental magnitudes and colors on the same photometric scale. For each frame, the magnitude offset with respect to each master frame in $`Ca`$, $`b`$, and $`y`$ was calculated, and the photometry of the two to four frames of the same field was transformed to a common instrumental system. On each night, five to seven standards from the list of Twarog & Anthony-Twarog (1995) were observed and, because of the small sample size, the results from the three nights were combined.
Comparison of the instrumental magnitudes for the final 15 observations in each filter with the standard values allowed the construction of linear transformations of the observed $`y`$, $`b-y`$, and $`hk`$ magnitudes from the instrumental to the standard system. The standard stars observed cover a color range of 0.1 - 0.7 in $`b-y`$ and 0.2 - 1.4 in $`hk`$, and an air-mass range of 1.0 - 1.6. Extinction coefficients for all the filters were determined from standard stars observed over a wide range of airmass. The final transformation equations were obtained by linear least-squares fits. They are
$$b-y=0.956(b-y)_i-0.013,$$
$$hk=0.891\,hk_i-1.013,$$
$$y=y_i+0.026(b-y)_i-5.007,$$
where $`b-y`$, $`hk`$, and $`y`$ are the color indices and visual magnitude in the standard $`Caby`$ system, and $`(b-y)_i`$, $`hk_i`$, and $`y_i`$ refer to instrumental magnitudes corrected for extinction. No other trends in the residuals were noticeable, and therefore no additional terms in the transformation equations appear to be necessary. The calibration equations relate observed to standard values of $`y`$, $`b-y`$, and $`hk`$ with standard deviations of 0.01, 0.01, and 0.02, respectively. During the observing runs, six field RR Lyrae "standard" stars (four RR$`ab`$ stars and two RR$`c`$ stars) were observed in order to make a comparison between our results and those of B96, as discussed in the next section. Throughout, we have corrected for reddening using the reddening ratios $`E(b-y)/E(B-V)`$ = 0.75 and $`E(hk)/E(b-y)`$ = -0.1 adopted by B96.

## 3 METALLICITY CALIBRATION

B96 provided \[Fe/H\] vs. $`hk_o`$ calibrations for two values of $`(b-y)_o`$ = 0.15 and 0.30, from eight RR$`ab`$ stars and two RR$`c`$ stars. Using these relations, it is possible to determine the metallicity of any RR Lyrae star for which there is $`Caby`$ photometry at either of these colors. However, in order to find the metallicity of RR Lyrae stars at arbitrary phase, it is necessary to find the relations between \[Fe/H\] and $`hk_o`$ for various values of $`(b-y)_o`$ and ultimately produce a set of isometallicity lines that are continuous across the full range of $`(b-y)_o`$. In addition to the two calibrations for $`(b-y)_o`$ = 0.15 and 0.30, Baird & Anthony-Twarog (1999) added a new set of calibrations for a more complete grid of $`(b-y)_o`$ values \[i.e., $`(b-y)_o`$ = 0.20, 0.25, and 0.35\] from high-quality photometric data for 14 RR$`ab`$ stars, combined with the previous data from B96. As did B96, we adopted the metallicity values of Layden (1994) because they provide a uniform set of values for all the field RR$`ab`$ stars, and they are based on the Zinn & West (1984; hereafter ZW) metallicity scale for Galactic globular clusters. Layden's (1994) metallicities for RR$`ab`$ stars are based on the relative strengths of the Ca II K line and the H<sub>δ</sub>, H<sub>γ</sub>, and H<sub>β</sub> Balmer lines. The \[Fe/H\] values of the RR$`c`$ stars were adopted from Kemper (1982) and transformed to the ZW scale with Layden's (1994) equation. In the following discussion, we denote by \[Fe/H\]<sub>spec</sub> the metallicity measured spectroscopically for the RR$`ab`$ and RR$`c`$ standard stars used in our calibration. The final \[Fe/H\]<sub>hk</sub> vs. $`hk_o`$ relations were obtained by straight-line fits (Baird & Anthony-Twarog 1999).
They are
$$[Fe/H]_{hk}=8.11\,hk_o-3.37\ \ (\sigma _{rms}=0.110)\ \ \mathrm{for}\ (b-y)_o=0.15,$$
$$[Fe/H]_{hk}=7.75\,hk_o-3.28\ \ (\sigma _{rms}=0.055)\ \ \mathrm{for}\ (b-y)_o=0.20,$$
$$[Fe/H]_{hk}=7.45\,hk_o-3.36\ \ (\sigma _{rms}=0.035)\ \ \mathrm{for}\ (b-y)_o=0.25,$$
$$[Fe/H]_{hk}=6.44\,hk_o-3.36\ \ (\sigma _{rms}=0.040)\ \ \mathrm{for}\ (b-y)_o=0.30,$$
$$[Fe/H]_{hk}=5.06\,hk_o-3.13\ \ (\sigma _{rms}=0.074)\ \ \mathrm{for}\ (b-y)_o=0.35.$$
The $`\sigma _{rms}`$ are root-mean-square deviations calculated in the sense \[Fe/H\]<sub>spec</sub> - \[Fe/H\]<sub>hk</sub>, where \[Fe/H\]<sub>hk</sub> is the value calculated from the observed $`hk_o`$ values using the above relations. The $`\sigma _{rms}`$ values are highest at the extreme colors, i.e., at $`(b-y)_o`$ = 0.15 and 0.35, where the number of calibrating points is lowest. Figure 1 shows the derived \[Fe/H\]<sub>hk</sub> vs. $`hk_o`$ relations for the five values of $`(b-y)_o`$, along with the photometric indices of the 14 field RR Lyrae stars that define the relations. At warmer temperatures the sensitivity of $`hk_o`$ to \[Fe/H\]<sub>hk</sub> drops, and the slope of the \[Fe/H\]<sub>hk</sub> vs. $`hk_o`$ relation becomes steeper. For stars hotter than $`(b-y)_o`$ = 0.25, the slopes of the \[Fe/H\]<sub>hk</sub> vs. $`hk_o`$ relations are nearly the same, indicating that \[Fe/H\]<sub>hk</sub> is a function of $`hk_o`$ only, as suggested by B96. $`Caby`$ photometry is useful for stars as blue as $`(b-y)_o`$ = 0.10, but at higher temperatures the sensitivity of $`Caby`$ photometry to metallicity will certainly decrease, and the contamination by the H<sub>ϵ</sub> line should become quite substantial (Baird & Anthony-Twarog 1999). We have calculated the metal abundances of the RR Lyrae stars in the field and in $`\omega`$ Cen using the above \[Fe/H\]<sub>hk</sub> vs. $`hk_o`$ relations. Additional \[Fe/H\]<sub>hk</sub> vs. $`hk_o`$ relations were derived as necessary by interpolating these relations within the range 0.15 $`<`$ $`(b-y)_o`$ $`<`$ 0.35 (a minimal sketch of this procedure is given below). However, for stars with $`(b-y)_o`$ $`<`$ 0.15, which lie outside the limits of the current \[Fe/H\]<sub>hk</sub> vs. $`hk_o`$ relations, we applied the relation for $`(b-y)_o`$ = 0.15, because at these warm temperatures the isometallicity lines are horizontal in the $`hk_o/(b-y)_o`$ diagram, as described above. We estimate that, with these procedures, the introduced uncertainties will be less than 0.1 dex at any point of the $`hk_o/(b-y)_o`$ diagram.

## 4 RESULTS

### 4.1 Field RR Lyrae Stars

To check the validity of our \[Fe/H\]<sub>hk</sub> calibration, we compare our observations of field RR Lyrae stars with those of B96. Our measured values of \[Fe/H\]<sub>hk</sub> for the field RR Lyrae stars are listed in Table 2, and a comparison between the spectroscopic metallicity, \[Fe/H\]<sub>spec</sub> (B96), and our \[Fe/H\]<sub>hk</sub> is shown in Figure 2. The \[Fe/H\]<sub>spec</sub> and the reddening of the stars were taken from Table 1 of B96. For the RR$`ab`$ stars, our \[Fe/H\]<sub>hk</sub> is in excellent agreement with \[Fe/H\]<sub>spec</sub>, with an rms scatter of 0.12 dex. On the other hand, the \[Fe/H\]<sub>hk</sub> for the two RR$`c`$ stars show larger scatter than that for the RR$`ab`$ stars. For V535 Mon, $`(b-y)_o`$ is very small, and the sensitivity of the $`hk`$ index is lower than at redder colors, which might account for at least some of the discrepancy. The reason for the large deviation of AU Vir is not clear; however, it anticipates the difficulties we have with the $`\mathrm{\Delta }S`$ measurements for the $`\omega`$ Cen RR$`c`$ stars, discussed below in sections 4.3 and 4.5.
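For readers who want to trace the abundance pipeline numerically, the following is a minimal sketch (in Python) of the dereddening and \[Fe/H\]<sub>hk</sub> calibration described above. The coefficients are those quoted in the text; the linear interpolation of slope and zero point between the tabulated $`(b-y)_o`$ grid points, and the use of the 0.15 relation blueward of the grid (and of the 0.35 relation redward of it), are our reading of the procedure rather than code from the original work.

```python
import numpy as np

# [Fe/H]_hk = slope * hk_o + zero, on the (b-y)_o grid of Baird & Anthony-Twarog (1999),
# with the coefficients quoted in the text above.
GRID_BY = np.array([0.15, 0.20, 0.25, 0.30, 0.35])
SLOPE   = np.array([8.11, 7.75, 7.45, 6.44, 5.06])
ZERO    = np.array([-3.37, -3.28, -3.36, -3.36, -3.13])

def deredden(by, hk, ebv=0.12):
    """Deredden observed b-y and hk with E(b-y) = 0.75 E(B-V) and E(hk) = -0.1 E(b-y)."""
    eby = 0.75 * ebv
    ehk = -0.1 * eby
    return by - eby, hk - ehk

def feh_hk(hk_o, by_o):
    """[Fe/H]_hk for one dereddened (hk_o, (b-y)_o) pair.

    The relation is interpolated linearly in (b-y)_o between the calibrated grid
    points; outside the grid the nearest relation is used (the text specifies
    this only for (b-y)_o < 0.15, where the isometallicity lines are nearly
    horizontal -- applying it at the red end as well is an assumption).
    """
    by_c = np.clip(by_o, GRID_BY[0], GRID_BY[-1])
    slope = np.interp(by_c, GRID_BY, SLOPE)
    zero = np.interp(by_c, GRID_BY, ZERO)
    return slope * hk_o + zero

# Example: a hypothetical star observed at b-y = 0.32, hk = 0.29 with E(B-V) = 0.12
by_o, hk_o = deredden(0.32, 0.29)
print(round(float(feh_hk(hk_o, by_o)), 2))
```

Used this way, a single call gives one \[Fe/H\]<sub>hk</sub> estimate per observation; the per-star values quoted in Table 5 are then averages of the two to four such estimates available for each star.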
Using the CTIO 4 m telescope in 1997 December, Walker (1999) observed 14 $`Caby`$ standard stars and three RR$`ab`$ stars (U Lep, RY Col, HH Pup) from B96's list. We reduced these data in the same way as described above, and list the derived \[Fe/H\]<sub>hk</sub> for the RR Lyrae stars in Table 3. These stars are also plotted in Fig. 2, and they are in excellent agreement both with our 0.9 m results and with \[Fe/H\]<sub>spec</sub>. For all our RR$`ab`$ stars and those of Walker (1999), the rms scatter of \[Fe/H\]<sub>hk</sub> corresponds to 0.10 dex.

### 4.2 $`\omega`$ Cen RR Lyrae Stars

In our program field of $`\omega`$ Cen, we measured 131 RR Lyrae stars, consisting of 74 RR$`ab`$ and 57 RR$`c`$ stars, which can be compared with the total of 180 $`\omega`$ Cen RR Lyrae stars known to date (Hogg 1973; Kaluzny et al. 1997b). For each RR Lyrae star we obtained two to four measurements of $`b-y`$ and $`hk`$, and dereddened them using the reddening ratios given in section 2. We adopted the reddening value E($`B-V`$) = 0.12 from Harris (1996). BDE used E($`B-V`$) = 0.11, which is essentially identical to the independent determination of Dickens & Saunders (1965). Whitney et al. (1998) adopted E($`B-V`$) = 0.15 for their analysis of the hot stellar population of $`\omega`$ Cen. However, the effect of a small uncertainty in E($`B-V`$) is negligible for our metallicity determination ($`\mathrm{\Delta }`$\[Fe/H\] $`<`$ 0.02 dex). An additional correction for the interstellar contribution to the K line was ignored ($`\mathrm{\Delta }`$\[Fe/H\] $`\approx`$ 0.03 dex, see GTO). Both effects change only the mean cluster \[Fe/H\] value, not the star-to-star scatter. Table 4 lists the dereddened values, $`(b-y)_o`$ and $`hk_o`$, and their photometric errors for each RR Lyrae star. After obtaining the individual values of \[Fe/H\]<sub>hk</sub> for each RR Lyrae star, we calculated the mean \[Fe/H\]<sub>hk</sub> by weighting with the photometric error of the $`hk`$ value (illustrated in the sketch below). A number of data points tagged as poor measurements were rejected, and some data points that showed a large deviation from their isometallicity line in the $`hk_o/(b-y)_o`$ diagram were also excluded. Table 5 lists our final weighted mean \[Fe/H\]<sub>hk</sub> values in column (3). Column (5) gives the number of independent measurements used in the calculation of the mean \[Fe/H\]<sub>hk</sub>. For the error of the mean \[Fe/H\]<sub>hk</sub> value, we adopted the standard deviation of the mean of the individual \[Fe/H\]<sub>hk</sub> measures. This error is listed as $`\sigma `$<sub>\[Fe/H\]</sub> in column (4). For stars with only one data point, the $`\sigma `$<sub>\[Fe/H\]</sub> entry is left blank. The typical value of $`\sigma `$<sub>\[Fe/H\]</sub> is about 0.20 dex. For those stars where the scatter is larger than typical, it is not clear whether this is due to observational error, or to some small non-repeatability and/or phase dependence in the $`hk_o/(b-y)_o`$ diagram, as suggested by B96. We do not have sufficient observations per star to clarify this, and we encourage more observations of the field "standard stars".<sup>1</sup> <sup>1</sup>During the rapid rise to maximum light of RR$`ab`$ stars, one may question whether the sequence of exposure times for $`Ca`$, $`b`$, and $`y`$, spanning more than half an hour, will cause errors in the metallicity determinations. However, for the few data points identified on the rising branch, we did not find severe deviations from the other data points in the $`hk_o/(b-y)_o`$ diagram.
Furthermore, since our \[Fe/H\]<sub>hk</sub> corresponds to the mean of the individual measures and the typical error of the mean \[Fe/H\]<sub>hk</sub> value is small (about 0.2 dex), this effect should be negligible. As a reference, we estimate the typical frame-to-frame scatter for HB stars as 0.02 and 0.03 mag in $`b-y`$ and $`hk`$, respectively. This scatter in $`hk`$ corresponds to an error of less than 0.20 dex in \[Fe/H\] at any $`b-y`$.

### 4.3 Comparison with Previous $`\mathrm{\Delta }S`$ Observations

Among the 131 RR Lyrae stars in our $`\omega`$ Cen field, 56 stars are in common with the previous $`\mathrm{\Delta }S`$ observations of BDE and GTO, and we make a comparison between these values, \[Fe/H\]<sub>ΔS</sub> \[column (6) of Table 5\], and those from our $`Caby`$ photometry, \[Fe/H\]<sub>hk</sub> \[column (3) of Table 5\]. Most values of \[Fe/H\]<sub>ΔS</sub> come from BDE, but for the few stars also observed by GTO, new values have been calculated by averaging the measurements of BDE and GTO. All \[Fe/H\]<sub>ΔS</sub> values have been corrected to the ZW metallicity scale using the relation obtained by Layden (1994) (i.e., \[Fe/H\]<sub>ZW</sub> = 0.90\[Fe/H\]<sub>ΔS</sub> - 0.34), so that all the \[Fe/H\] data are placed on a consistent metallicity scale. Figure 3 shows the residuals, in the sense \[Fe/H\]<sub>hk</sub> - \[Fe/H\]<sub>ΔS</sub>, as a function of \[Fe/H\]<sub>ΔS</sub>. The closed circles are RR$`ab`$ stars, while the open circles are RR$`c`$ stars. The larger symbols represent stars with smaller observational error ($`\sigma `$<sub>\[Fe/H\]</sub> $`\leq`$ 0.2 dex) in \[Fe/H\]<sub>hk</sub>. It is apparent that a significant, metallicity-dependent difference between \[Fe/H\]<sub>hk</sub> and \[Fe/H\]<sub>ΔS</sub> is present.<sup>2</sup> <sup>2</sup>Since Freeman & Rodgers (1975) used a larger telescope, at higher dispersion, than did BDE, it is worthwhile to compare our results with theirs as well. However, we found that the rms scatter between these two sets of metallicities for 16 RR$`ab`$ stars is still large (0.37 dex). The residuals for the RR$`c`$ and RR$`ab`$ stars appear similar, although the \[Fe/H\]<sub>hk</sub> for most RR$`c`$ stars is metal-rich compared with \[Fe/H\]<sub>ΔS</sub>. In order to see the metallicity differences between \[Fe/H\]<sub>hk</sub> and \[Fe/H\]<sub>ΔS</sub> more clearly in the $`hk_o/(b-y)_o`$ diagram, we introduce $`hk_{o,\mathrm{\Delta }S}`$, the value of $`hk_o`$ expected from \[Fe/H\]<sub>ΔS</sub>, and so construct an $`hk_{o,\mathrm{\Delta }S}/(b-y)_o`$ diagram. We calculate $`hk_{o,\mathrm{\Delta }S}`$ by inserting \[Fe/H\]<sub>ΔS</sub> into the inverses of our final \[Fe/H\] vs. $`hk_o`$ relations, as sketched below. For the calculation of $`hk_{o,\mathrm{\Delta }S}`$, we retained our observed value of $`(b-y)_o`$. In Figure 4, we compare our observed $`hk_o/(b-y)_o`$ diagram with the $`hk_{o,\mathrm{\Delta }S}/(b-y)_o`$ diagram for the 56 RR Lyrae stars. In each diagram, we present schematic isometallicity lines, constructed from the five \[Fe/H\] vs. $`hk_o`$ relations with a step size of 0.5 dex. It should be noted that our observed $`hk_o`$ distribution for the RR$`ab`$ stars is slightly more compressed than that of $`hk_{o,\mathrm{\Delta }S}`$ at all $`(b-y)_o`$. In the case of the RR$`c`$ stars, and for some RR$`ab`$ stars with $`(b-y)_o`$ $`<`$ 0.2, the distribution of $`hk_o`$ is shifted in the metal-rich direction, by about 0.5 dex in the mean, compared with that of $`hk_{o,\mathrm{\Delta }S}`$.
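As an illustration of the two operations just described, the error-weighted mean \[Fe/H\]<sub>hk</sub> of section 4.2 and the $`hk_{o,\mathrm{\Delta }S}`$ values constructed for Figure 4, a minimal sketch follows. The inverse-variance weighting on the $`hk`$ errors is an assumption consistent with, but not spelled out in, the text; `GRID_BY`, `SLOPE`, and `ZERO` refer to the calibration arrays of the sketch given earlier.

```python
import numpy as np

def weighted_mean_feh(feh_vals, hk_errs):
    """Error-weighted mean [Fe/H]_hk and the standard deviation of the mean.

    Weights of 1/sigma_hk^2 are assumed; the text says only that the mean is
    weighted by the photometric error of the hk value.
    """
    feh = np.asarray(feh_vals, dtype=float)
    w = 1.0 / np.asarray(hk_errs, dtype=float) ** 2
    mean = np.sum(w * feh) / np.sum(w)
    n = len(feh)
    sigma_mean = np.sqrt(np.sum((feh - mean) ** 2) / (n * (n - 1))) if n > 1 else float("nan")
    return mean, sigma_mean

def feh_zw_from_ds(feh_ds):
    """Place a Delta-S metallicity on the Zinn-West scale (Layden 1994)."""
    return 0.90 * feh_ds - 0.34

def hk_expected(feh_zw, by_o, grid_by, slope, zero):
    """Invert [Fe/H]_hk = slope*hk_o + zero to get hk_(o,DeltaS) at the observed (b-y)_o."""
    s = np.interp(by_o, grid_by, slope)
    z = np.interp(by_o, grid_by, zero)
    return (feh_zw - z) / s
```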
These comparisons confirm that there are systematic differences between \[Fe/H\]<sub>hk</sub> and \[Fe/H\]<sub>ΔS</sub>.

### 4.4 Comparison with Other Metallicity Determinations

For 48 RR$`ab`$ stars in $`\omega`$ Cen, Jurcsik (1998, hereafter J98) determined empirical \[Fe/H\] values from the light-curve parameters, using the observations of Kaluzny et al. (1997b). Comparing the empirical \[Fe/H\] values with the $`\mathrm{\Delta }S`$ measurements of the BDE and GTO samples for RR$`ab`$ stars, J98 found significant discrepancies and suspected that the $`\mathrm{\Delta }S`$ data of BDE and GTO were inaccurate. Schwarzenberg-Czerny & Kaluzny (1998; hereafter SK98) independently compared their empirical \[Fe/H\] with the $`\mathrm{\Delta }S`$ metallicities, \[Fe/H\]<sub>ΔS</sub>, of BDE for 11 RR$`ab`$ stars and found no obvious correlation between their empirical \[Fe/H\] and \[Fe/H\]<sub>ΔS</sub>. These results encouraged us to check the consistency between our \[Fe/H\]<sub>hk</sub> and the empirical \[Fe/H\] values of J98 and SK98. Using the list of 47 RR$`ab`$ stars employed by J98, we find that the rms scatter between the metallicities of J98, \[Fe/H\]<sub>J</sub>, and \[Fe/H\]<sub>ΔS</sub> for 23 RR$`ab`$ stars is large (0.48 dex), whereas that between \[Fe/H\]<sub>J</sub> and our \[Fe/H\]<sub>hk</sub> for 47 RR$`ab`$ stars is much smaller (0.23 dex). Comparing the empirical metallicities independently obtained by SK98, \[Fe/H\]<sub>SK98</sub>, with the $`\mathrm{\Delta }S`$ observations for the 11 RR$`ab`$ stars in common, we find significant discrepancies, with an rms scatter of 0.44 dex. However, in the comparison between \[Fe/H\]<sub>SK98</sub> and our \[Fe/H\]<sub>hk</sub> for 10 RR$`ab`$ stars, the scatter is reduced to 0.28 dex rms. In summary, both sets of empirical metallicities, from J98 and from SK98, show larger deviations from the \[Fe/H\]<sub>ΔS</sub> of BDE and GTO than they do from our photometric \[Fe/H\]<sub>hk</sub>. Considering the assumed accuracy of the empirical metallicities, 0.10 - 0.15 dex (Jurcsik & Kovács 1996; J98 and references therein), this is strong evidence that the $`\mathrm{\Delta }S`$ measurements of BDE and GTO are subject to larger errors than the authors state.

### 4.5 Metallicity Differences between \[Fe/H\]<sub>hk</sub> and \[Fe/H\]<sub>ΔS</sub>

What causes the systematic discrepancies between our \[Fe/H\]<sub>hk</sub> and the \[Fe/H\]<sub>ΔS</sub> of BDE and GTO? We checked that there is no color dependence, which might be the case if our transformations as a function of color were incorrect. We also found that the metallicity residuals between \[Fe/H\]<sub>hk</sub> and \[Fe/H\]<sub>ΔS</sub> show a similar pattern for the inner and outer regions of our program field, demonstrating that there is no dependence on image crowding. We now discuss the evidence supporting large errors in the $`\mathrm{\Delta }S`$ measurements of BDE and GTO. First, the excellent agreement between \[Fe/H\]<sub>hk</sub> and \[Fe/H\]<sub>spec</sub> (which is itself consistent with the $`\mathrm{\Delta }S`$ system; see Layden 1994) for the four field RR$`ab`$ stars provides the most direct evidence of the accuracy of the present work and of the large errors in the $`\mathrm{\Delta }S`$ results of BDE and GTO (see section 4.1). Second, there are non-negligible discrepancies between the results of BDE and GTO.
GTO claimed that the internal errors are about 0.2 dex in both the BDE and GTO $`\mathrm{\Delta }S`$ measurements, and that their system is thus not far from the standard $`\mathrm{\Delta }S`$ system. However, as GTO already noted, a few stars (V32, V39, and V72) show large deviations (more than 0.5 dex) from BDE's results, probably due to observations at phases far from minimum light (see Fig. 2 of GTO). Furthermore, the rms scatter of the difference between the $`\mathrm{\Delta }S`$ measurements of BDE and GTO corresponds to 0.34 dex, which is certainly not negligible. Third, J98 obtained an unexpectedly large (0.52 dex) rms scatter between her empirical metallicity values and the $`\mathrm{\Delta }S`$ metallicities of BDE, whereas the comparison of the empirical data with GTO's observations gives a smaller rms scatter of 0.38 dex. Therefore, it is suspected that the $`\mathrm{\Delta }S`$ measurements for $`\omega`$ Cen, especially BDE's data, are inaccurate. Both sets of empirical metallicities, from J98 and SK98, show smaller rms scatter with respect to our \[Fe/H\]<sub>hk</sub> than with respect to the $`\mathrm{\Delta }S`$ metallicities, \[Fe/H\]<sub>ΔS</sub> (see section 4.4). This suggests that our \[Fe/H\]<sub>hk</sub> are more accurate than the \[Fe/H\]<sub>ΔS</sub> of BDE and GTO. Fourth, despite a more extensive sample than that of the previous $`\mathrm{\Delta }S`$ observations, the relation between magnitude and metallicity from our observations shows a smaller scatter. We also note the consistency with the model predictions of Lee (1991) (see section 5.1 and Fig. 6). Finally, as we will see in section 4.6, the metallicity distribution of our observations for RR$`ab`$ stars is more consistent with that of the giant stars of Suntzeff & Kraft (1996, hereafter SK) than is that of the previous $`\mathrm{\Delta }S`$ observations. It was also suggested by SK that the large population of very metal-poor stars found from the $`\mathrm{\Delta }S`$ measurements is spurious. According to stellar evolution theory, the RR Lyrae stars are an intrinsically abundance-biased population, owing to the low probability that extremely metal-rich (red) or extremely metal-poor (blue) HB stars evolve through the instability strip (Lee & Demarque 1990). Therefore, it is unreasonable to expect the metallicity distribution of the RR Lyrae stars to be wider than that of their progenitor stars. Consequently, we conclude that the systematic discrepancies between our \[Fe/H\]<sub>hk</sub> and the \[Fe/H\]<sub>ΔS</sub> of BDE and GTO are caused by the large uncertainties of the $`\mathrm{\Delta }S`$ measurements. Finally, we discuss the difference in metallicity distribution between the RR$`ab`$ and RR$`c`$ stars in our results. Our finding of metal-rich RR$`c`$ stars is difficult to understand from the standpoint of standard metal-rich HB evolutionary tracks, which do not penetrate into the hotter regions of the instability strip (Lee & Demarque 1990). While no definite resolution of this disagreement can be offered, we suggest that the apparent metal enhancement of the RR$`c`$ stars may be due to contamination of Ca II H by H<sub>ϵ</sub>. For hotter stars, inclusion of the H<sub>ϵ</sub> feature will weaken the metallicity sensitivity, because the weakening of the Ca II H line can be partially compensated by the growth of the Balmer line (Anthony-Twarog et al. 1991). Although we used the \[Fe/H\] vs. $`hk_o`$ relation at $`(b-y)_o`$ = 0.15 for stars with $`(b-y)_o`$ $`<`$ 0.15 (see section 3), we should treat these data with caution until the \[Fe/H\] vs.
$`hk_o`$ relations at higher temperatures are confirmed. Furthermore, because the \[Fe/H\] vs. $`hk_o`$ relation at high temperature \[e.g., $`(by)_o`$ = 0.15\], as shown in Fig. 1, does not extend to metallicity higher than \[Fe/H\] = -1.0, the metal-rich end of the RR$`c`$ stars may also be suspect. More calibration data for RR$`c`$ stars will be needed to resolve this problem. For this reason, we regard our results for RR$`c`$ stars to be tentative, and we will restrict our analysis to RR$`ab`$ stars in the following discussions. ### 4.6 The Metallicity Distribution Considering the homogeneity and large sample size of the present database, it is worthwhile to investigate the metallicity distribution of RR Lyrae stars and compare it with the earlier results for RR Lyrae stars and giant stars (BDE; Dickens 1989; Norris et al. 1996; SK). However, care must be taken when comparing the metallicity distribution of RR Lyrae stars and giant stars, since as mentioned above RR Lyrae stars are intrinsically an abundance-biased sample due to the failure of the extremely metal-rich red HB stars to penetrate into the instability strip. Furthermore, the frequency of RR Lyrae stars at a given metallicity depends on the HB morphology as well as the metallicity distribution of the underlying stellar population (e.g., Lee 1992 and Walker & Terndrup 1991 for RR Lyrae stars in Baade’s window). Figure 5 presents the metallicity distributions for the RR$`ab`$ stars and giant stars. All of the earlier studies and our results agree on the non-Gaussian shape of the metallicity distribution which contains a sharp rise from the low metallicity side, a modal value of \[Fe/H\] $``$ -1.8 and a tail of metal-rich stars reaching at least \[Fe/H\] $``$ -0.9. For the 161 - star bright giant (BG) sample and the 199 - star subgiant (SGB) sample of SK (Fig. 5c), the metallicity distribution is narrower than that of 34 - star RR$`ab`$ sample obtained from the $`\mathrm{\Delta }`$$`S`$ method (Fig. 5a). SK suggested that the large population of very metal-poor stars found in the $`\mathrm{\Delta }`$$`S`$ measurement is due to the large rms error of 0.4 dex of the old $`\mathrm{\Delta }`$$`S`$ study and probably the strong low-metallicity tail of the error distribution is spurious. The more complete sample of RR$`ab`$ stars from our $`hk`$ method (Fig. 5b) also shows the paucity of very metal-poor stars. Consequently, contrary to the case of the $`\mathrm{\Delta }S`$ observations, the range of the metallicity distribution of our observations for RR$`ab`$ stars is consistent with that of the giant stars of SK. While a detailed discussion of the origin of the abundance distribution is outside the scope of this paper, we wish to point out that in the analysis of a $`B,V`$ CMD for $`\omega `$ Cen containing 130,000 stars, Lee et al. (1999) found several distinct red giant branches (RGBs). They also showed from population models that the most metal-rich RGB is about 2 Gyr younger than the dominant metal-poor component, suggesting that $`\omega `$ Cen has enriched itself over this timescale. An extensive study of the metallicity distribution for an homogeneous and nearly complete sample of $`\omega `$ Cen giant branch stars, now underway, will place this result on a firmer footing. The RR Lyrae stars, being clearly representatives of the oldest populations in $`\omega `$ Cen, will be an important part of any enrichment model for the cluster. 
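The comparison of the RR$`ab`$ and giant-star metallicity distributions in Figure 5 is made visually; a quantitative cross-check could use a simple two-sample test, as in the sketch below. This is purely illustrative (it is not a procedure used in the text), and the binning is an arbitrary assumption.

```python
import numpy as np
from scipy.stats import ks_2samp

def compare_feh_distributions(feh_rrab, feh_giants, bins=np.arange(-2.4, -0.5, 0.1)):
    """Normalized histograms of two [Fe/H] samples plus a two-sample KS test.

    The 0.1 dex binning is an assumption; the KS statistic is only an
    illustrative check of whether the two samples could share a parent
    distribution.
    """
    h_rrab, _ = np.histogram(feh_rrab, bins=bins, density=True)
    h_giant, _ = np.histogram(feh_giants, bins=bins, density=True)
    stat, pvalue = ks_2samp(feh_rrab, feh_giants)
    return h_rrab, h_giant, stat, pvalue
```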
## 5 DISCUSSION ### 5.1 The M<sub>V</sub>(RR) - \[Fe/H\] Relation Given our homogeneous metallicity measurements for nearly the whole sample of $`\omega `$ Cen RR Lyrae stars, we can turn to a discussion of the magnitude-metallicity relation. We will use the intensity mean magnitude values, $`<V>`$, given in BDE and Kaluzny et al. (1997b). For the stars whose $`<V>`$ values are available from both sources the mean values have been adopted. Column (7) of Table 5 lists the $`<V>`$ for each RR Lyrae stars. While the photometry of BDE is restricted to the outer region of the cluster, that of Kaluzny et al. (1997b) covers a larger area including the central region from their extensive observations (Kaluzny et al. 1996, 1997a). Although Kaluzny et al. (1997b) have claimed field-to-field differences on the level of a few hundredth’s of magnitude and uncertainties when combining photometry obtained in different fields for the same variables, we adopted the averaged value of magnitude for stars with multiple entries in their Table 1. From intercomparison of 66 RR Lyrae stars between BDE and Kaluzny et al. (1997b), we found a zeropoint offset of $`<V>`$ as -0.03 $`\pm `$ 0.06 in the sense BDE minus Kaluzny et al. (1997b). However, considering the intrinsic spread (or scatter) and random error of $`<V>`$ for $`\omega `$ Cen RR Lyrae stars, this small offset would have only a small or negligible effect on the discussion of the M<sub>V</sub>(RR) - \[Fe/H\] relation. We are presently reducing $`BV`$ photometry for $`\omega `$ Cen RR Lyrae stars, which in the future will provide a more homogeneous and consistent dataset of $`<V>`$. Using the data in Table 5, the observed correlation between M<sub>V</sub>(RR) and \[Fe/H\] is presented in Figure 6, where panel (a) is based on \[Fe/H\] determined by the previous $`\mathrm{\Delta }`$$`S`$ measurements while panel (b) is based on our new $`Caby`$ photometry. In the transformation to the absolute magnitude, we adopted a distance modulus of V - M<sub>V</sub> = 14.1 based on the recent evolutionary models of M<sub>V</sub>(RR) by Demarque et al. (1999). In Fig. 6b, closed circles are stars which overlap with the sample of BDE and GTO (i.e., Fig. 6a), while triangles represent stars only observed in our study. The large symbols are for stars with smaller observational error ($`\sigma `$<sub>\[Fe/H\]</sub> $``$ 0.2 dex) of the \[Fe/H\]<sub>hk</sub> with the same criterion of Fig. 3. It appears that the random errors of \[Fe/H\]<sub>hk</sub> are smaller than those of \[Fe/H\]<sub>ΔS</sub> in the M<sub>V</sub>(RR) - \[Fe/H\] distribution. In particular, V5 (\[Fe/H\]<sub>ΔS</sub> = -2.32) and V56 (\[Fe/H\]<sub>ΔS</sub> = -1.82), which are fainter (about 0.2 mag.) than similarly metal-poor RR Lyrae stars in the M<sub>V</sub>(RR) - \[Fe/H\]<sub>ΔS</sub> diagram, are moved to relatively metal-rich \[Fe/H\]<sub>hk</sub>. Their new metallicities (\[Fe/H\]<sub>hk</sub> = -1.35 and -1.26) are more consistent with their intrinsic luminosity, following a general trend shown in Fig. 6b. We have superimposed the model correlations of Lee (1991), which were constructed based on his HB population models under two assumptions regarding the variation of HB type with metallicity. The solid (age = 13.5 Gyr) and short-dashed (age = 15.0 Gyr) lines are for the case that the HB type follows the nonmonotonic behavior with decreasing \[Fe/H\] similar to that observed in the Galactic globular cluster system \[see Fig. 3 of Lee (1991)\]. 
Lee (1991) suggested that this nonmonotonic behavior of HB morphology is perhaps due either to the highly nonlinear relationship between mass loss and \[Fe/H\] or to some combination of the effects of mass loss and enhanced $`\alpha `$-elements, although the complete understanding is still lacking. The long-dashed line is a simple model locus, with fixed mass loss, age, and $`\alpha `$-elements, which fails to reproduce the observed nonmonotonic behavior of HB type with decreasing \[Fe/H\]. The sudden upturn in M<sub>V</sub>(RR) of model loci can be explained by a series of HB population models (see Fig. 5 of Lee 1993), where one can see how sensitively the population of the instability strip changes with decreasing \[Fe/H\]. As \[Fe/H\] decreases, there is a certain point where the zero age portion of the HB just crosses the blue edge of the instability strip. Then, only highly evolved stars from the blue HB can penetrate back into the instability strip, and the mean RR Lyrae luminosity increases abruptly (Lee 1993). As shown in Fig. 6b, the correlation predicted from the model loci, including the sudden upturn in M<sub>V</sub>(RR), agree better with our new M<sub>V</sub>(RR) - \[Fe/H\]<sub>hk</sub> distribution. Note that the choice of HB evolutionary tracks has little effect on this conclusion, as Demarque et al. (1999) recently showed that new synthetic HB models based on evolutionary tracks with improved input physics (Yi et al. 1997) produce qualitatively the same results. Lee (1991) noted that the solid line in the M<sub>V</sub>(RR) - \[Fe/H\]<sub>ΔS</sub> diagram of Fig. 6a does not pass through a few stars at \[Fe/H\]<sub>ΔS</sub> $``$ -1.4, and he suspected this deviation is perhaps due either to the observational errors or to the zero point uncertainty of the metallicity scale. Alternatively, he suggested that a better match is obtained by the older model locus of 15.0 Gyr (i.e., short-dashed line of Fig. 6). In our new diagram of Fig. 6b, we can see that these deviant stars are now moved to \[Fe/H\]<sub>hk</sub> $``$ -1.5 and are, therefore, more well matched to the model locus. With our new data, the best match with the models is expected somewhere between the solid and dashed lines (i.e., $``$ 14.3 Gyr). Note that the absolute ages in these models are based on the assumption that the mean age of the inner halo clusters is $``$ 14.5 Gyr, thus this result suggests $`\omega `$ Cen is comparable in age with other inner halo clusters. If we remove stars in the range -1.9 $`<`$ \[Fe/H\]<sub>hk</sub> $`<`$ -1.5, where most of the variables are believed to be extremely evolved stars (see section 5.4 below), then we obtain $`\mathrm{\Delta }`$M<sub>V</sub>(RR)/$`\mathrm{\Delta }`$\[Fe/H\] = 0.24 $`\pm `$ 0.04, which is consistent, to within the errors, with the slopes obtained by LDZ and Lee (1990) from the evolutionary models, excluding the clusters in this metallicity range. Consequently, RR Lyrae stars in $`\omega `$ Cen and their nonlinear M<sub>V</sub>(RR) - \[Fe/H\] relations from our observations provide a strong support for the LDZ and Lee (1990) evolutionary models. This non-linearity, which also implies that the relation between period-shift and metallicity is not linear (see section 5.3 below), would clarify some of the disagreements with other investigators because fits of straight lines to different data sets produce significantly different slopes. 
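A minimal sketch of the slope determination quoted above (ΔM<sub>V</sub>(RR)/Δ\[Fe/H\] after excluding the range -1.9 $`<`$ \[Fe/H\]<sub>hk</sub> $`<`$ -1.5) is given below. The distance modulus of 14.1 and the exclusion window follow the text; the unweighted least-squares fit is an assumption, and reproducing the quoted ±0.04 uncertainty would need an error analysis not shown here.

```python
import numpy as np

def mv_feh_slope(mean_v, feh, dist_mod=14.1, excluded=(-1.9, -1.5)):
    """Slope dM_V(RR)/d[Fe/H] from a plain least-squares fit.

    mean_v : intensity-mean <V> magnitudes
    feh    : [Fe/H]_hk values
    Stars inside the excluded metallicity window (dominated by highly evolved
    stars, see section 5.4) are removed before fitting.
    """
    mean_v = np.asarray(mean_v, dtype=float)
    feh = np.asarray(feh, dtype=float)
    mv = mean_v - dist_mod                      # adopted distance modulus
    keep = ~((feh > excluded[0]) & (feh < excluded[1]))
    slope, intercept = np.polyfit(feh[keep], mv[keep], 1)
    return slope, intercept
```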
### 5.2 The m<sub>bol</sub> - logT<sub>eff</sub> Diagram of RR Lyrae Stars

In order to test the metallicity dependence of the luminosity of RR Lyrae stars more clearly, we constructed the bolometric magnitude (m<sub>bol</sub>) - temperature (T<sub>eff</sub>) diagram for RR$`ab`$ stars of different metallicities in Figure 7. Panels (a) and (b) contain 27 and 34 RR$`ab`$ stars, for which \[Fe/H\] has been determined from the $`\mathrm{\Delta }S`$ method and from our $`Caby`$ photometry, respectively. For $`B-V`$, we used the equilibrium color defined by Bingham et al. (1984), $`(B-V)_{eq}=\frac{2}{3}<B-V>+\frac{1}{3}(<B>-<V>)`$. In the calculations of $`T_{eff}`$ and the bolometric correction, we adopted the color-temperature relation used in the construction of the Revised Yale Isochrones (Green et al. 1987; see Green 1988), for consistency with the work of Lee (1991). The color information, $`<B-V>`$ and $`<B>-<V>`$, for the $`\omega`$ Cen variables was taken from Sandage (1981). Fig. 7a shows that the relationship between metallicity and bolometric magnitude is not clear when the metallicities determined from the previous $`\mathrm{\Delta }S`$ observations are used (Dickens 1989). Not all of the faintest stars are the most metal-rich, and some metal-rich stars are apparently as bright as the metal-poor stars. On the other hand, as shown in Fig. 7b, the metallicity dependence of the RR Lyrae magnitude becomes more distinct when we use our new \[Fe/H\]<sub>hk</sub> metallicities. Now most metal-rich RR Lyrae stars lie below (i.e., are fainter than) the RR Lyrae stars that are relatively more metal-poor. The cases of V5 and V56 were discussed above in this context. The magnitude gap between the metal-rich and metal-poor RR Lyrae stars near m<sub>bol</sub> $`\approx`$ 14.6 is due to the abrupt change in luminosity at approximately \[Fe/H\] = -1.5 (see Fig. 6b).

### 5.3 The Period Shift Effect

If the relationship between M<sub>V</sub>(RR) and \[Fe/H\]<sub>hk</sub> is not linear, as noted above, we expect a similar correlation between period shift and \[Fe/H\] for the $`\omega`$ Cen RR Lyrae stars. In order to confirm this, we obtained the period shifts of the $`\omega`$ Cen RR$`ab`$ stars at fixed T<sub>eff</sub> from the deviations of the period of each $`\omega`$ Cen RR$`ab`$ star from the M3 fiducial line in the logP - logT<sub>eff</sub> plane (a schematic sketch of this procedure is given below). Periods have been obtained mainly from Kaluzny et al. (1997b), but for some stars we adopted the values of BDE. Column (8) of Table 5 gives the period of each RR Lyrae star. As in section 5.2, we used $`(B-V)_{eq}`$, calculated from the photometry of Sandage (1981), and the $`(B-V)`$ - T<sub>eff</sub> relation of Green et al. (1987) in the calculation of $`T_{eff}`$ for the M3 RR Lyrae stars. We transformed the observed periods of the M3 RR$`c`$ stars to fundamental periods by adding 0.125 to their logarithms (Bingham et al. 1984; LDZ) to obtain the logP - log$`T_{eff}`$ relationship for all the M3 RR Lyraes. The correlation between period shift, $`\mathrm{\Delta }`$log$`P(T_{eff})`$, and \[Fe/H\] is shown in Figure 8. Assuming no large differences between P and P′, even for the $`\approx`$ 0.2 mag range in M<sub>bol</sub> of the $`\omega`$ Cen RR Lyrae stars, we superimposed the model loci of $`\mathrm{\Delta }`$logP(T<sub>eff</sub>) - \[Fe/H\] from Lee (1993; see his Fig. 6b) for comparison with the observations.
Here P′ is the "reduced period", which is corrected for the differing luminosities within the cluster by normalizing to the mean magnitude of the RR Lyrae stars (see LDZ). As shown in Fig. 8a, for the 27 RR$`ab`$ stars whose \[Fe/H\] has been determined by the $`\mathrm{\Delta }S`$ method, there is no distinct $`\mathrm{\Delta }`$logP(T<sub>eff</sub>) - \[Fe/H\]<sub>ΔS</sub> correlation. It should be noted that the metal-rich stars show period shifts similar to, or even larger than, those of the metal-poor stars. This metallicity dependence of the period shift is much weaker than that found in the period-shift - \[Fe/H\] relationship of Oosterhoff I and Oosterhoff II clusters, as well as of the field RR Lyrae stars covering a similar range of metallicity (LDZ; Lee 1990, 1993). However, when we adopt our new metallicity, \[Fe/H\]<sub>hk</sub>, a clearer correlation between $`\mathrm{\Delta }`$logP(T<sub>eff</sub>) and \[Fe/H\]<sub>hk</sub> emerges (Fig. 8b), which follows the model locus of Lee (1993), despite some scatter among the metal-poor stars. This is expected: the $`\omega`$ Cen RR Lyrae stars should show more scatter than the models for the globular cluster system, because post-ZAHB luminosity evolution causes scatter in the period shifts, and because only single determinations of period and \[Fe/H\] exist for individual $`\omega`$ Cen RR Lyrae stars, whereas those for the clusters represent averages over many stars (see Lee 1990). As in the M<sub>V</sub>(RR) - \[Fe/H\]<sub>hk</sub> diagram, V5 and V56 now belong to the metal-rich stars in the $`\mathrm{\Delta }`$logP(T<sub>eff</sub>) - \[Fe/H\]<sub>hk</sub> diagram. The RR Lyrae stars with -1.9 $`<`$ \[Fe/H\] $`<`$ -1.5, which are considered to be highly evolved stars arising from the bluest HBs, are shifted in period relative to M3 variables of the same T<sub>eff</sub> by approximately the same (or even larger) amounts as the more metal-poor RR Lyrae stars (see section 5.4 below). Consequently, our new correlation between period shift and \[Fe/H\]<sub>hk</sub> shows roughly the same trend as the M<sub>V</sub>(RR) - \[Fe/H\]<sub>hk</sub> relation, and is in better agreement with the model locus.

### 5.4 Highly Evolved RR Lyrae Stars

The effect of post-ZAHB evolution plays a key role in our understanding of the M<sub>V</sub>(RR) - \[Fe/H\] relation and other related problems, such as the Sandage period-shift effect (see also Lee 1993 and references therein). In particular, the evolutionary models of Lee (1990) suggest that the RR Lyrae stars in very blue HB clusters within the range -2.0$`<`$\[Fe/H\]$`<`$-1.6 are highly evolved stars from the bluest HBs, with significantly brighter magnitudes and longer periods than those near the ZAHB (see also Figs. 6 and 8). Highly evolved RR Lyrae stars can be identified from their positions in the period (logP) - blue amplitude ($`A_B`$) diagram, by comparing them with RR Lyrae stars in clusters of similar metallicity but redder HB morphology (Jones et al. 1992; Cacciari et al. 1992; Clement & Shelton 1999). Assuming that $`A_B`$ depends on \[Fe/H\] as well as $`T_{eff}`$ (LDZ; Caputo 1988), at a fixed metallicity relative $`A_B`$ values are reliable indicators of relative $`T_{eff}`$. Therefore, highly evolved RR Lyrae stars in $`\omega`$ Cen can be detected from a series of logP - $`A_B`$ diagrams covering the range of metallicities.
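Returning to the period-shift analysis of section 5.3, the sketch below shows one way to fundamentalize the RR$`c`$ periods and to measure $`\mathrm{\Delta }`$logP(T<sub>eff</sub>) against an M3 fiducial. The +0.125 offset in logP is from the text; representing the M3 fiducial as tabulated (logT<sub>eff</sub>, logP) arrays, interpolating linearly along it, and the sign convention (star minus fiducial) are assumptions.

```python
import numpy as np

def fundamentalized_logp(period_days, rr_type):
    """log10 of the fundamentalized period; 0.125 is added for c-type stars."""
    logp = np.log10(period_days)
    return logp + 0.125 if rr_type == "c" else logp

def period_shift(logp_star, logteff_star, logteff_m3, logp_m3):
    """Delta logP(T_eff): the star's logP minus the M3 fiducial logP at the
    star's logT_eff.

    logteff_m3, logp_m3 : the M3 fiducial line (must be supplied, e.g. from the
    M3 photometry discussed in the text); linear interpolation along the
    fiducial is an assumption.
    """
    order = np.argsort(logteff_m3)
    fid = np.interp(logteff_star,
                    np.asarray(logteff_m3, dtype=float)[order],
                    np.asarray(logp_m3, dtype=float)[order])
    return logp_star - fid
```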
Figure 9 shows a logP - $`A_B`$ diagram for $`\omega `$ Cen RR$`ab`$ stars at three metallicity groups with \[Fe/H\]<sub>hk</sub> $`<`$ -1.9, -1.9 $``$ \[Fe/H\]<sub>hk</sub> $`<`$ -1.5, and \[Fe/H\]<sub>hk</sub> $``$ -1.5, respectively. The A<sub>B</sub> values are from Sandage (1981) as given in column (9) of Table 5. For comparison, we also plotted RR$`ab`$ stars in M15 (\[Fe/H\] = -2.17; Lee, Demarque, & Zinn 1994), M3 (\[Fe/H\] = -1.66), and M4 (\[Fe/H\] = -1.28) for each metallicity group, respectively, with data from Sandage (1990b). The solid line represents the fiducial line of the lower envelope to the M3 distribution of logP - $`A_B`$ (Sandage 1990a). For the most metal-poor (Fig. 9a) and the most metal-rich stars (Fig. 9c), the majority of $`\omega `$ Cen RR Lyrae stars do not, respectively, show deviations from the M15 and M4 variables. This would indicate that the evolutionary stages of these $`\omega `$ Cen variables are not significantly different from those for variables in M15 and M4, respectively. However, most $`\omega `$ Cen RR$`ab`$ stars in the range -1.9 $``$ \[Fe/H\] $`<`$ -1.5 are obviously deviant when compared to the M3 variables. These stars have much longer periods than M3 variables of similar $`A_B`$, thus most of them are probably evolved with higher luminosities. In order to provide a reference for highly evolved stars, in Fig. 9b, we include two field RR$`ab`$ stars, SU Dra and SS Leo (open triangles), which have similar metallicity to M3, but are considered to be in a highly evolved and luminous state (Jones et al. 1992). The open square represents a M3 RR$`ab`$ star (V65), which is in a more advanced evolutionary state than the majority of M3 RR$`ab`$ stars (Kaluzny et al. 1998; Clement & Shelton 1999). Kaluzny et al. (1998) noted two other highly evolved M3 RR$`ab`$ stars, V14 and V104. The similarity of all of these stars to those in $`\omega `$ Cen confirms that most $`\omega `$ Cen RR$`ab`$ stars in the range -1.9 $``$ \[Fe/H\] $`<`$ -1.5 are in a highly evolved stage of their HB evolution. Recently, Clement & Shelton (1999) re-examined the logP - $`V`$ amplitude (A<sub>V</sub>) relation of RR Lyrae stars in globular clusters of both Oosterhoff types by applying the test of Jurcsik & Kovács (1996) to identify and remove Blazhko variables. They concluded that the logP - A<sub>V</sub> relation for “normal” RR$`ab`$ stars is not a function of metal abundance, but rather, related to the Oosterhoff type. Along with the discovery of three bright M3 RR$`ab`$ stars in a more advanced evolutionary state, they also concluded that the Oosterhoff dichotomy has something to do with evolution off the ZAHB. This is consistent with our result presented here, and these observations provide a support to the LDZ hypothesis that evolution away from the ZAHB plays a crucial role in the Oosterhoff period dichotomy (see also Lee & Carney 1999). ## 6 SUMMARY AND CONCLUSIONS We present new metallicity measurements of 131 RR Lyrae stars in the $`\omega `$ Cen using the $`hk`$ index of the $`Caby`$ photometric system. From our study, we draw the following conclusions: (1) We provide the most complete and homogeneous metallicity data to date, with a typical internal error of 0.20 dex, based on the \[Fe/H\] vs. $`hk_o`$ calibrations of Baird & Anthony-Twarog (1999). 
(2) For RR Lyrae stars in common with the $`\mathrm{\Delta }`$$`S`$ observations of BDE and GTO, we find that our metallicity values, \[Fe/H\]<sub>hk</sub>, are systematically deviant from the $`\mathrm{\Delta }`$$`S`$ metallicities, \[Fe/H\]<sub>ΔS</sub>, whereas the \[Fe/H\]<sub>hk</sub> for well observed field RR$`ab`$ stars are consistent with previous spectroscopic measurements. With some supporting evidence, we find that this discrepancy is due to errors in the BDE and GTO results. (3) The M<sub>V</sub>(RR) - \[Fe/H\] and period-shift - \[Fe/H\] relations from our observations show a tight distribution with a nearly step function change in luminosity near \[Fe/H\] = -1.5. This is consistent with the model predictions of Lee (1991), which suggest that the luminosity of RR Lyrae stars depends on evolutionary status as well as metallicity. (4) From a series of logP - $`A_B`$ diagrams at a range of metallicities, we also identify highly evolved RR$`ab`$ stars in the range of -1.9 $``$ \[Fe/H\]<sub>hk</sub> $`<`$ -1.5, as predicted from the synthetic HB models. Therefore, this gives support to LDZ’s hypothesis that evolution away from the ZAHB plays a role in the Oosterhoff dichotomy. Some work remains to be done in the future. As noted already, because the \[Fe/H\] vs. $`hk_o`$ relation at high temperature \[e.g., $`(by)_o`$ = 0.15\] shown in Fig. 1 does not extend to metallicities higher than \[Fe/H\] = -1.0, the metal-rich calibration for the RR$`c`$ stars may be suspect. More calibration data are needed to resolve this problem. Furthermore, more RR$`c`$ stars should be observed to check whether there is any difference between the RR$`ab`$ and RR$`c`$ calibrations. Additionally, in order to test the viability of the field RR Lyrae stars calibration, it would be valuable to observe samples of RR Lyrae stars in a number of globular clusters with various and well-determined metallicities. Then we can determine if the calibrations for field RR Lyrae stars are consistent with those for the globular cluster RR Lyrae stars. On the theoretical side, it would be useful to study the relationship between \[Fe/H\] and $`hk`$ using synthetic spectra, and in particular, clarify the problem of the contamination of Ca II H by H$`ϵ`$ for hotter stars. Finally, with its distinct advantages such as ease of observations and analysis, the $`hk`$ method should supersede the old $`\mathrm{\Delta }S`$ method in determining the metallicity of RR Lyrae stars, despite the need for more accurate calibrations. We would like to thank A. Jurcsik and N. Suntzeff for providing electronic copies of their datasets, and an anonymous referee for a careful review and useful comments. S.-C.R. is grateful to Suk-Jin Yoon for his helpful efforts in some model calculations. Support for this work was provided by the Creative Research Initiatives Program of Korean Ministry of Science & Technology, and in part by the Korea Science & Engineering Foundation through grant 95-0702-01-01-3. ## Appendix A NOTES ON INDIVIDUAL RR LYRAE STARS $`V24`$.$``$V24 (\[Fe/H\]<sub>hk</sub> = -1.86) has a very small period-shift value \[$`\mathrm{\Delta }`$logP(T<sub>eff</sub>) = -0.09\], compared with normal RR$`ab`$ stars (see Fig. 8b). Considering its light-curve characteristics, such as period (0.4623 day; Kaluzny et al. 1997b) and blue amplitude (A<sub>B</sub> = 0.47; Sandage 1981) (see Fig. 9b), and sinusoidal light-curve shape (Kaluzny et al. 1997b), V24 is likely to be an RR$`c`$ star. $`V52`$.$``$Kaluzny et al. 
(1997b) suggested that V52, which is the brightest RR$`ab`$ star ($`<V>`$ = 13.95), is actually a BL Her variable. However, its period (0.66 day) is significantly shorter than the 0.75 day short-period limit found for Pop. II Cepheids (Wallerstein & Cox 1984). $`V7`$, $`V116`$, and $`V149`$. Unlike the other metal-rich stars, V116 and V149 show brighter magnitudes, comparable to those of the relatively metal-poor stars (see Fig. 6b). Considering its large deviation in the logP - A<sub>B</sub> diagram (see Fig. 9), V149 is probably a highly evolved RR Lyrae star. According to its period (0.72 day) and $`V`$ amplitude, A<sub>V</sub> (0.54 mag, Kaluzny et al. 1997b), V116 is likely to be in a similar evolutionary state. Although its luminosity is not as high as that of V116 and V149, given its large deviation in the logP - A<sub>B</sub> diagram and its metallicity (\[Fe/H\]<sub>hk</sub> = -1.46), close to the boundary for the evolved stars, it is not unreasonable to consider V7 a highly evolved star as well.
# Ultra High Energy Cosmic Rays from Cosmological Relics

## 1 Introduction

Ultra High Energy Cosmic Rays (UHECR) are a puzzle of modern physics, whose solution requires new ideas in astrophysics or in elementary particle physics. The problem of UHECR has been known for more than 30 years. It consists in the observation of primary particles with energies up to $`(2-3)\times 10^{20}`$ eV. If these particles are extragalactic protons and their sources are distributed uniformly in the Universe, their spectrum must show a steepening, which starts at energy $`E\approx 3\times 10^{19}`$ eV, due to interaction with the microwave photons. This steepening is known as the Greisen-Zatsepin-Kuzmin (GZK) cutoff. The GZK cutoff is not seen in the observed spectrum. The spectrum of UHECR according to the AGASA observations is shown in Fig. 1, together with the spectrum calculated for a uniform distribution of the sources in the Universe under the assumption that the generation spectrum is proportional to $`E^{-2.3}`$. The excess of the observed events above the GZK cutoff is clearly seen. The observational data for UHECR ($`E\gtrsim 1\times 10^{19}`$ eV) can be summarized as follows.

* At $`E\gtrsim 10^{19}`$ eV the spectrum is flatter than at lower energies, and it extends up to $`(2-3)\times 10^{20}`$ eV (the maximum observed energies).
* The chemical composition favours protons, though UHE photons are not excluded as primaries.
* The data are consistent with isotropy, but close angular pairs (doublets) and triplets comprise about $`20\%`$ of all events at $`E\gtrsim 4\times 10^{19}`$ eV (22 events in doublets and triplets out of 92 in total).

Galactic origin of UHECR due to acceleration by sources located in the Galactic disc is excluded. Numerical simulations of the propagation of UHECR in the magnetic fields of the disc and halo of the Galaxy predict strong anisotropy for particles with rigidity $`E/Z>1\times 10^{19}`$ eV. Extragalactic protons, if their sources are distributed uniformly in the universe, should show the GZK cutoff due to pion production on the microwave radiation (see Fig. 1). Extragalactic nuclei exhibit a cutoff at the same energy, $`E\approx 3\times 10^{19}`$ eV, mainly due to $`e^+e^-`$-pair production on the microwave radiation. Nearby sources must form a compact group with a large overdensity of sources to avoid the GZK cutoff. The Local Supercluster (LS), with a typical size $`R_{LS}\sim 10`$ Mpc, is a natural candidate for such a group. The calculations show that for the absence of the GZK cutoff an LS overdensity $`\delta _{LS}>10`$ is needed, while the observed one is $`\delta _{LS}\approx 1.4`$. Note that diffusive propagation in magnetic fields cannot help to soften the GZK cutoff. A nearby single source can provide the absence of the GZK cutoff. The idea is that powerful sources of UHECR in the Universe are very rare, and by chance we live near one of them. Such a case has been studied numerically for the burst generation of UHECR and their non-stationary diffusive propagation. The anisotropy can be small even at energies exceeding $`1\times 10^{20}`$ eV. The calculated cutoff at $`E_c\approx 1\times 10^{20}`$ eV is, however, questioned by the existence of two events with energy $`\approx 2\times 10^{20}`$ eV. An interesting case of single-source UHECR origin was recently proposed. The physical essence of this model can be explained in the following way. The nearby single source is the powerful radio galaxy M87 in the Virgo cluster. UHE particles from it fall into a gigantic magnetic halo of our Galaxy (with a height of about 1.5 Mpc), where the azimuthal magnetic field diminishes as $`1/r`$.
The magnetic field in the halo focuses the highest energy particles towards the Sun in such a way that the arriving particles have an isotropic distribution. Numerical simulations of the trajectories in this magnetic field, similar to that of a galactic wind, confirm the model. This interesting proposal should be studied further, taking into account such phenomena as the diffuse radio, X-ray and gamma radiation produced by high energy electrons diffusing from the Galactic disc. Calculations of these processes limit the size of the magnetic halo to 35 kpc. Acceleration of UHECR is a problem for astrophysical scenarios. Shock acceleration and unipolar induction are the "standard" mechanisms of acceleration to UHE considered in the literature (see for a review). A comprehensive list of possible sources with shock acceleration was thoroughly studied in ref. (), with the conclusion that the maximum acceleration energy does not exceed $`10^{19}-10^{20}`$ eV (see also ref. () with a similar conclusion). The most promising source from this list is a hot spot in a radio galaxy produced by a jet, where the maximum energy can reach $`10^{20}`$ eV. The radio galaxy M87, considered above, belongs to this class of sources. Gamma Ray Burst (GRB) models offer two new mechanisms of acceleration to UHE. The first one is acceleration by an ultrarelativistic shock. A reflected particle gains in one reflection the energy $`E\approx \mathrm{\Gamma }_{sh}^2E_i`$, where $`\mathrm{\Gamma }_{sh}\sim 10^2-10^3`$ is the Lorentz factor of the shock and $`E_i`$ is the initial energy of the particle. A second cycle of such acceleration has an extremely low probability of occurring, and therefore, to produce particles with $`E\sim 10^{20}`$ eV, this mechanism must operate in a region filled with pre-accelerated particles with energies $`E_i>10^{14}`$ eV. The second mechanism works in the model with multiple shocks. The collisions of the shocks produce turbulence, in which the particles are accelerated by the second-order Fermi mechanism. The turbulent velocities are mildly relativistic in the fireball rest frame. The maximum energy in the rest frame, $`E_{max}^{\prime }\approx eH_0^{\prime }l_0^{\prime }`$, is boosted by the Lorentz factor $`\mathrm{\Gamma }`$ of the fireball in the laboratory frame (here $`l_0^{\prime }`$ is the maximum linear scale of the turbulence and $`H_0^{\prime }`$ is the coherent magnetic field on that scale). Taking the equipartition value for $`H_0^{\prime }`$, one obtains $`E_{max}\sim 10^{20}`$ eV in the laboratory frame (a numerical sketch of this estimate is given at the end of this section). This mechanism faces two problems. First, the maximum energy is somewhat less than $`1\times 10^{20}`$ eV if the acceleration time is evaluated more realistically; this lowers the energy of the GZK cutoff in the diffuse spectrum, because the cutoff is formed by particles with production energies higher than the observed ones. The most serious problem, however, is that the flux of accelerated particles suffers adiabatic energy losses. In summary, the acceleration (astrophysical) scenarios are somewhat disfavoured, but not excluded. Apart from them, many elementary-particle solutions have been proposed to solve the UHECR puzzle. Among them are such extreme proposals as breaking of Lorentz invariance, a light gluino as the lightest supersymmetric particle and UHE carrier, UHE neutrinos producing UHECR through resonant interaction with dark-matter neutrinos, and some other suggestions. In this paper I will review the two most conservative non-accelerator sources of UHECR: Superheavy Dark Matter (SHDM) and Topological Defects (TD).
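As a rough numerical check of the turbulent-acceleration estimate quoted above, the sketch below evaluates $`E_{max}\sim \mathrm{\Gamma }\,eH_0^{\prime }l_0^{\prime }`$ using the standard Gaussian-units conversion $`eBl=300\,B[\mathrm{G}]\,l[\mathrm{cm}]`$ eV. The particular values of $`B^{\prime }`$, $`l^{\prime }`$, and $`\mathrm{\Gamma }`$ are illustrative placeholders chosen only to reproduce the $`\sim 10^{20}`$ eV quoted in the text; they are not taken from the source.

```python
def e_max_ev(b_prime_gauss, l_prime_cm, gamma):
    """Laboratory-frame maximum energy: comoving e*B'*l' boosted by the fireball
    Lorentz factor.  300 eV = e * (1 Gauss) * (1 cm) in Gaussian units."""
    return 300.0 * b_prime_gauss * l_prime_cm * gamma

# Placeholder parameters (assumed, not from the source): B' ~ 1e4 G, l' ~ 1e11 cm, Gamma ~ 300
print(f"E_max ~ {e_max_ev(1e4, 1e11, 300):.1e} eV")   # ~ 9e19 eV, i.e. of order 1e20 eV
```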
## 2 UHECR from Superheavy Dark Matter Superheavy Dark Matter (SHDM) as a source of UHECR was first suggested in refs.(). SHDM particles with masses larger than $`10^{13}GeV`$ are accumulated in the Galactic halo with overdensity $`10^5`$ and hence UHECR produced in their decays do not exhibit the GZK cutoff. The other observational signatures of this model are dominance of UHE photons and anisotropy connected with non-central position of the Sun in the Galactic halo . Production of SHDM SHDM particles are very efficiently produced by the various mechanisms at post-inflationary epochs. This common feature has a natural explanation. The SHDM particles due to their tremendous mass had never been in the thermal equilibrium in the Universe and never were relativistic. Their mass density diminished as $`1/a^3`$, while for all other particles it diminishes much faster as $`1/a^4`$, where $`a`$ is the scaling factor of the Universe. When normalized at inflationary epoch, $`a_i=1`$, $`a(t)`$ reaches enormous value at large $`t`$. It is enough to produce negligible amount of superheavy particles in the post-inflationary epoch in order to provide $`\mathrm{\Omega }_X1`$ now. Actually, in most cases one meets a problem of overproduction of SHDM particles (further on we shall refer to them as to X-particles). One very general mechanism of X-particle production is given by creation of particles in time-variable classical field. In our case it can be inflaton field $`\varphi `$ or gravitational field. In case of inflaton field the direct coupling of X-particle (or some intermediate particle $`\chi `$) with inflaton is needed,e.g. $`g^2\varphi ^2X^2`$ or $`g^2\varphi ^2\chi ^2`$. The intermediate particle $`\chi `$ then decays to X-particle. In case of time-variable gravitational field no coupling of X to inflaton or any other particles is needed: X-particles are produced due to their masses. For the review of above-mentioned mechanisms and references see . Super-heavy particles are very efficiently produced at preheating . This stage, predecessor of reheating, is caused by oscillation of inflaton field after inflation near the minimum of the potential. Such oscillating field can non-perturbatively (in the regime of broad parametric resonance) produce the intermediate bosons $`\chi `$, which then decay to X-particles. The mass of X-particles can be one-two orders of magnitude larger than inflaton mass $`m_\varphi `$, which should be about $`10^{13}GeV`$ to provide the amplitude of density fluctuations observed by COBE. Another mechanism, more efficient than parametric resonance and operating in its absence, is so-called instant preheating . It works in the specific models, where mass of $`\chi `$ particles is proportional to inflaton field, $`m_\chi =g\varphi `$. When inflaton goes through minimum of potential $`\varphi =0`$ $`\chi `$-particles are massless and they are very efficiently produced. When $`|\varphi |`$ increases, $`m_\chi `$ increases too and can reach the value close to $`m_{Pl}`$. Another possible mechanisms of SHDM particle production are non-equilibrium thermal production at reheating and by early topological defects . The latter can be produced at reheating . Gravitational production of particles occurs due to time variation of gravitational field during expansion of the universe . 
For particles with the conformal coupling to gravity, $`(1/6)RX^2`$, where $`R`$ is the space-time curvature of the expanding universe, the particle mass itself couples the particle to the gravitational field, and no other couplings are needed. The $`X`$ particles can even be sterile! Nor is inflation needed for this production; it rather limits the gravitational production of the particles. Since this production is governed by the time variation of the Hubble parameter $`H(t)`$, only particles with masses $`m_X\lesssim H(t)`$ can be produced. In the inflationary scenario $`H(t)\lesssim m_\varphi `$, where $`m_\varphi `$ is the mass of the inflaton. This results in the limit $`m_X\lesssim 10^{13}GeV`$ on the mass of the produced particles . The gravitational production of superheavy particles was recently studied in refs. (see for a review). It is remarkable that for the mass $`m_X\sim 10^{13}GeV`$ the relic density is $`\mathrm{\Omega }_X\sim 1`$ without any additional assumptions. This makes superheavy particles very natural candidates for Cold DM.

Lifetime. Superheavy particles are expected to be very short-lived. Even gravitational interactions (e.g. those described by dimension-5 operators suppressed by the Planck mass) result in a lifetime much shorter than the age of the Universe $`t_0`$. The superheavy particles must be protected from fast decay by some symmetry, respected even by the gravitational interaction, and such symmetries are known: they are discrete gauge symmetries. They can be very weakly broken, e.g. by wormhole or instanton effects, to provide the needed lifetime. A systematic analysis of broken discrete gauge symmetries is given in ref.. For the group $`Z_{10}`$ the lifetime of an X-particle with $`m_X\sim 10^{13}`$–$`10^{14}GeV`$ was found to lie in the range $`10^{11}`$–$`10^{26}yr`$. Realistic elementary-particle models for such long-lived particles have been suggested .

Spectrum of UHECR. Quarks and gluons produced in the decay of a superheavy particle initiate a QCD cascade, similar to that from $`Z^0`$ decay. The resulting spectrum of hadrons can be calculated using standard QCD techniques . The spectrum of hadrons is not a power law; its most spectacular feature is a Gaussian peak at small x. Photons dominate the primary spectrum by a factor $`\sim 6`$. Calculations of the spectrum were performed in ref. (HERWIG MC simulation for ordinary QCD) and in ref. (analytic MLLA calculations for SUSY QCD).

Observational predictions. The overdensity $`\delta `$ of SHDM particles in the Galactic halo is the same as for any other form of CDM; numerically it is given by the ratio of the CDM density observed in the halo to the CDM density in extragalactic space ($`\delta \sim 10^5`$). The spectrum of UHECR produced by decaying X-particles in the Galactic halo and beyond is shown in Fig.2. One can see that the UHE photon flux appreciably dominates over that of protons. An anisotropy is caused by the non-central position of the Sun in the halo. The most notable effect, the difference between the fluxes in the directions of the Galactic center and anticenter, cannot be observed by the existing arrays. The calculated phase and amplitude of the first harmonic of the anisotropy are compared with observations in Fig.3. In spite of the visual agreement, one may only conclude that the predicted anisotropy does not contradict the observations: within $`1.5\sigma `$ the AGASA data are compatible with isotropy. Angular clustering in UHECR arrival directions (doublet and triplet events) can be due to the clumpiness of the DM halo. Numerical N-body simulations show the presence of dense DM clouds in the halo.
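Before looking at these clumps more closely, it is worth putting numbers to two statements made above: the halo overdensity $`\delta \sim 10^5`$ and the claim that only a negligible relic abundance of X-particles is needed for $`\mathrm{\Omega }_X\sim 1`$. The sketch below is purely illustrative; the specific density values are standard round numbers that I am assuming here, not numbers taken from this paper.

```python
# Back-of-envelope check of two numbers quoted above: the halo overdensity
# delta ~ 1e5 and the "negligible" relic abundance needed for Omega_X ~ 1.
# All density values below are assumed round numbers, not inputs of the paper.

rho_crit   = 5.2e-6      # critical density, GeV/cm^3 (for h ~ 0.7)
omega_cdm  = 0.3         # CDM density parameter (assumed)
rho_halo   = 0.3         # local halo DM density, GeV/cm^3 (canonical value)
n_gamma    = 411.0       # CMB photon number density, cm^-3
m_X        = 1e13        # SHDM particle mass, GeV

rho_cdm_mean = omega_cdm * rho_crit
delta = rho_halo / rho_cdm_mean
print(f"halo overdensity  delta ~ {delta:.1e}")        # ~ 2e5

# Number of X particles per CMB photon needed to supply the mean CDM density:
n_X_over_n_gamma = rho_cdm_mean / (m_X * n_gamma)
print(f"n_X / n_gamma     ~ {n_X_over_n_gamma:.1e}")   # ~ 4e-22, i.e. "negligible"
```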
Returning to the simulated clumps: the high-resolution simulations of ref. predict, for example, about 500 DM clouds with masses $`M\sim 10^8M_{\odot }`$ in the halo of our Galaxy. The baryonic content of these clouds should be low , and therefore one cannot expect to identify all the sources of UHECR doublets and triplets with observed clouds. The smallest clumps resolved so far in the high-resolution simulations reach $`M_{cl}\sim 10^6M_{\odot }`$, i.e. they fall into the range of globular-cluster masses. It could be that some of the doublet/triplet UHECR sources are globular clusters. The high-resolution simulations demonstrate the early origin of the clumps ($`z\gtrsim 5`$) , and therefore the core overdensity, as compared with the present mean density, is $`(1+z)^3\gtrsim 200`$. The extended halos of the DM clouds can be stripped away by tidal interactions when the clouds cross the galactic disc. The formation of dense compact DM objects was discussed in refs.. Assuming a typical distance of a dense compact cloud from the Sun of $`r\sim 1kpc`$, one can estimate the fraction of UHE particles arriving at us from one of these objects as $`f\sim (M_{cl}/M_h)(R_h^2/r^2)`$, where $`R_h\sim 100kpc`$ is the size of the halo, and $`M_{cl}`$ and $`M_h`$ are the masses of the clump and of the halo, respectively. For $`M_{cl}\sim 10^6M_{\odot }`$, $`M_h\sim 10^{12}M_{\odot }`$ and $`r\sim 1kpc`$, one obtains $`f\sim 0.01`$, i.e. about ten such sources can provide the doublets and triplets observed by AGASA and other detectors. Part of these sources can be globular clusters. A more detailed discussion will be presented in a forthcoming publication by A. Vilenkin and the author.

## 3 Topological defects.

Topological defects, TD, (for a review see ) can naturally produce particles of ultrahigh energies (UHE). The pioneering observation of this possibility was made by Hill, Schramm and Walker (for a general analysis of TD as UHE CR sources see ). In many cases TD become unstable and decompose into their constituent fields, superheavy gauge and Higgs bosons (X-particles), which then decay producing UHECR. This can happen, for example, when two segments of ordinary string, or a monopole and an antimonopole, touch each other, when the electric current in a superconducting string reaches the critical value, and in some other cases. In most cases the problem with UHECR from TD is not the maximum energy, but the fluxes. One very general reason for the low fluxes is the large distance between defects. The natural scale for this distance is the Hubble distance $`H_0^{-1}`$; only in some rather exceptional cases is this scale multiplied by a small dimensionless value $`r`$. If the distance between TD is larger than the UHE proton attenuation length, then the flux at UHE is typically exponentially suppressed. The following TD have been discussed as potential sources of UHE particles: superconducting strings , ordinary strings , magnetic monopoles , or more precisely bound monopole-antimonopole pairs (monopolonium and monopole-antimonopole pairs connected by strings ), networks of monopoles connected by strings , necklaces , and vortons . Monopolonia, monopole-antimonopole pairs connected by strings, and vortons cluster in the Galactic halo, and their observational signatures for UHECR are identical to those of the SHDM particles discussed above.

(i) Superconducting strings. As was first noted by Witten, in a wide class of elementary particle models strings behave like superconducting wires. Moving through cosmic magnetic fields, such strings develop electric currents.
Superconducting strings produce X particles when the electric current in the string reaches the critical value. Superconducting strings produce too small a flux of UHE particles and are thus disfavoured as sources of the observed UHECR.

(ii) Ordinary strings. There are several mechanisms by which ordinary strings can produce UHE particles. For a special choice of initial conditions, an ordinary loop can collapse to a double line, releasing its total energy in the form of X-particles. However, the probability of this mode of collapse is extremely small, and its contribution to the overall flux of UHE particles is negligible. String loops can also produce X-particles when they self-intersect (e.g. ). Each intersection, however, gives only a few particles, and the corresponding flux is very small . Superheavy particles with large Lorentz factors can be produced in the annihilation of cusps, when the two cusp segments overlap . The energy released in a single cusp event can be quite large, but again the resulting flux of UHE particles is too small to account for the observations . It has recently been argued that long strings lose most of their energy not by production of closed loops, as is generally believed, but by direct emission of heavy X-particles. If correct, this claim would dramatically change the standard picture of string evolution. It has also been suggested that the decay products of particles produced in this way can explain the observed flux of UHECR . However, as argued in ref. , the numerical simulations described in allow an alternative interpretation not connected with UHE particle production. But even if the conclusions of were correct, the particle production mechanism suggested in that paper could not explain the observed flux of UHE particles. If particles are emitted directly from long strings, then the distance $`D`$ between UHE particle sources is of the order of the Hubble distance, $`D\sim H_0^{-1}\gg R_p`$, where $`R_p`$ is the proton attenuation length in the microwave background radiation. In this case the UHECR flux has an exponential cutoff at energy $`E\approx 3\times 10^{10}GeV`$. In the case of accidental proximity of a string to the observer, the flux is strongly anisotropic. A fine-tuning in the position of the observer is needed to reconcile both requirements.

(iii) Monopolonium and $`M\overline{M}`$ pairs connected by strings. Monopole-antimonopole pairs ($`M\overline{M}`$) can form bound states . Spiraling along classical orbits they fall towards each other and annihilate, producing superheavy particles. The lifetime of this system depends on the initial (classical) radius, and can be larger than the age of the Universe $`t_0`$ . Production of UHECR by monopolonia was studied in ref. (the clustering of monopolonia in the Galactic halo was not noticed in that paper and was pointed out later in ref.). Recently it was demonstrated that friction of monopoles in the cosmic plasma results in a monopolonium lifetime much shorter than $`t_0`$. Instead of monopolonium, the authors suggested a similar object, an $`M\overline{M}`$ pair connected by a string, as a candidate source of UHECR. This TD is produced in the sequence of symmetry breakings $`G\to H\times U(1)\to H`$. At the first symmetry breaking monopoles are produced; at the second one each $`M\overline{M}`$ pair becomes connected by a string. For light strings the lifetime of this TD is larger than $`t_0`$.
$`M\overline{M}`$ pairs connected by strings are accumulated in the halo as CDM and have the same observational signatures as SHDM particles.

(iv) Networks of monopoles connected by strings. The sequence of phase transitions
$$G\to H\times U(1)\to H\times Z_N$$ (1)
results in the formation of monopole-string networks in which each monopole is attached to N strings. Most of the monopoles and most of the strings belong to one infinite network. The evolution of the networks is expected to be scale-invariant, with a characteristic distance between monopoles $`d=\kappa t`$, where $`t`$ is the age of the Universe and $`\kappa =const`$. The production of UHE particles is considered in . Each string attached to a monopole pulls it with a force equal to the string tension, $`\mu \sim \eta _s^2`$, where $`\eta _s`$ is the symmetry-breaking vev of the strings. The monopoles then have a typical acceleration $`a\sim \mu /m`$, energy $`E\sim \mu d`$ and Lorentz factor $`\mathrm{\Gamma }_m\sim \mu d/m`$, where $`m`$ is the mass of the monopole. A monopole moving with acceleration can, in principle, radiate gauge quanta, such as photons, gluons and weak gauge bosons, if the mass of the gauge quantum (or the virtuality $`Q^2`$ in the case of a gluon) is smaller than the monopole acceleration. The typical energy of the radiated quanta in this case is $`ϵ\sim \mathrm{\Gamma }_Ma`$. This energy can be much higher than what is observed in UHECR. However, the produced flux (see ) is much smaller than the observed one.

(v) Vortons. Vortons are charge- and current-carrying loops of superconducting string stabilized by their angular momentum . Although classically stable, vortons decay by gradually losing charge carriers through quantum tunneling. Their lifetime, however, can be greater than the present age of the universe, in which case the escaping $`X`$-particles will produce a flux of cosmic rays. The $`X`$-particle mass is set by the symmetry breaking scale $`\eta _X`$ of string superconductivity. The number density of vortons formed in the early universe is rather uncertain. According to the analysis in ref., vortons are overproduced in models with $`\eta _X>10^9GeV`$, so all such models have to be ruled out. In that case, vortons cannot contribute to the flux of UHECR. However, an alternative analysis suggests that the excluded range is $`10^9GeV<\eta _X<10^{12}GeV`$, while for $`\eta _X\gtrsim 10^{12}GeV`$ vorton formation is strongly suppressed. This leaves a window of potentially interesting vorton densities with $`\eta _X\sim 10^{12}`$–$`10^{13}GeV`$. The production of ultra-high energy particles by decaying vortons was studied in ref.. Like monopoles connected by strings and superheavy relic particles, vortons cluster in the Galactic halo, and the UHECR production and spectra are identical in these three cases.

(vi) Necklaces. Necklaces are hybrid TD corresponding to the case $`N=2`$ in Eq.(1), i.e. to the case when each monopole is attached to two strings. This system resembles "ordinary" cosmic strings, except that the strings look like necklaces with monopoles playing the role of beads. The evolution of necklaces depends strongly on the parameter
$$r=m/\mu d,$$ (2)
where $`d`$ is the average separation between monopoles and antimonopoles along the strings. As argued in ref. , necklaces might evolve towards configurations with $`r\gg 1`$, though numerical simulations are needed to confirm this conclusion. Monopoles and antimonopoles trapped in the necklaces inevitably annihilate in the end, producing first the heavy Higgs and gauge bosons ($`X`$-particles) and then hadrons.
The rate of $`X`$-particle production can be estimated as
$$\dot{n}_X\sim \frac{r^2\mu }{t^3m_X}.$$ (3)
The restriction due to the e-m cascade radiation demands that the cascade energy density satisfy $`\omega _{cas}\lesssim 2\times 10^{-6}eV/cm^3`$. The cascade energy density produced by necklaces can be calculated as
$$\omega _{cas}=\frac{1}{2}f_\pi r^2\mu \int _0^{t_0}\frac{dt}{t^3}\frac{1}{(1+z)^4}=\frac{3}{4}f_\pi r^2\frac{\mu }{t_0^2},$$ (4)
where $`f_\pi \approx 0.5`$ is the fraction of the total energy release transferred to the cascade. The separation between necklaces is given by $`D\sim r^{-1/2}t_0`$ for large $`r`$. Since $`r^2\mu `$ is limited by the cascade radiation, Eq.(4), one can obtain a lower limit on the separation $`D`$ between necklaces as
$$D\gtrsim \left(\frac{3f_\pi \mu }{4t_0^2\omega _{cas}}\right)^{1/4}t_0>10(\mu /10^6GeV^2)^{1/4}kpc,$$ (5)
Thus, necklaces give a realistic example of the case when the separation between sources is small and the Universe can be assumed to be uniformly filled by the sources. The fluxes of UHE protons and photons are shown in Fig.4 according to the calculations of ref.. Due to the absorption of UHE photons, the proton-induced EAS from necklaces strongly dominate over those induced by photons at all energies except $`E>3\times 10^{11}GeV`$, where photon-induced showers can comprise an appreciable fraction of the total rate.

## 4 Conclusions

At $`E\approx 1\times 10^{19}eV`$ a new component of cosmic rays with a flat spectrum is observed. According to the Fly's Eye and Yakutsk data the chemical composition is better described by protons than by heavy nuclei. The AGASA data are consistent with isotropy in the arrival directions of the particles, but about $`20\%`$ of the particles at $`E\ge 4\times 10^{19}eV`$ arrive as doublets and triplets within $`24^{}`$. The galactic origin of UHECR from conventional sources is disfavoured: the maximum observed energies are higher than those calculated for galactic sources, and a strong Galactic-disc anisotropy is predicted even for extreme magnetic fields in the disc and halo. The signature of extragalactic UHECR is the GZK cutoff. The position of the steepening is a model-dependent quantity. For a Universe uniformly filled with sources, the steepening starts at $`E_{bb}\approx 3\times 10^{19}eV`$ and has $`E_{1/2}\approx 6\times 10^{19}eV`$ (the energy at which the spectrum becomes a factor of two lower than a power-law extrapolation from lower energies). The spectra of UHE nuclei exhibit a steepening at approximately the same energy as protons. UHE photons have a small absorption length due to interaction with the radio background radiation. The extragalactic astrophysical sources theoretically studied so far have either too small an $`E_{max}`$ or are located too far away. The Local Supercluster (LS) model can give a spectrum with $`E_{1/2}\sim 10^{20}eV`$, if the overdensity of the sources is larger than 10. However, IRAS galaxy counts give an overdensity $`\delta =1.4`$. GRBs and a nearby single source (e.g. M87) remain potential candidates for the observed UHECR. Superheavy Dark Matter can be the source of the observed UHECR. These objects can be relic superheavy particles or topological defects such as $`M\overline{M}`$ pairs connected by strings or vortons. These objects are accumulated in the halo and thus the resulting spectrum of UHECR does not have the GZK cutoff. In this case UHECR are a signal from the inflationary epoch, because both superheavy particles and topological defects are most probably produced during reheating.
The observational signatures of UHECR from SHDM are (i) the absence of the GZK cutoff, (ii) UHE photons as the primaries and (iii) an anisotropy due to the non-central position of the Sun in the halo. Angular clustering is possible due to the clumpiness of the DM in the halo. Topological Defects naturally produce particles with extremely high energies, much in excess of what is presently observed. However, the fluxes from most known TD are too small. Only necklaces, $`M\overline{M}`$ pairs connected by strings, and vortons remain candidates for the sources of the observed UHECR. Necklaces give so far the only known example of extragalactic TD as sources of UHECR. Their signature is the presence of a photon component in the primary radiation and its dominance at the highest energies $`E>10^{20}eV`$.

## 5 Acknowledgments

I am grateful to my co-authors Pasquale Blasi, Michael Kachelriess and Alex Vilenkin for many useful discussions.
no-problem/0001/math0001067.html
ar5iv
text
# Untitled Document This paper is withdrawn by the authors due to a gap found in the proof.
no-problem/0001/physics0001021.html
ar5iv
text
# Evolution in the Multiverse

## 1 Introduction

The Many Worlds Interpretation (MWI) of Quantum Mechanics has become increasingly favoured in recent years over its rivals, with a recent straw poll of eminent physicists [18, pp170–1] showing more than 50% support for it. David Deutsch provides a convincing argument in favour of MWI, and the multiverse in the title is due to him. Tegmark has somewhat waggishly suggested that a Principle of Plenitude (alternatively the All Universes Hypothesis — AUH), coupled with the Anthropic Principle (AP), could be the ultimate theory of everything (TOE). (The Anthropic Principle is a statement that the universe we observe must be consistent with the existence of us as observers. In the all-universes hypothesis, the anthropic principle acts to select those universes that are "interesting", i.e. capable of supporting self-aware consciousness. In this all-universes picture, the distinction between the weak and strong forms of the anthropic principle is meaningless, so we will simply refer to the Anthropic Principle throughout this paper.) Tegmark's Plenitude consists of all mathematically consistent logical systems, the principle of plenitude according each of these systems physical existence; however, by the anthropic principle, we should only expect to find ourselves in a system capable of supporting self-aware substructures, i.e. consciousness. Alternative Plenitudes have been suggested, for example Schmidhuber's set of all possible programs for a universal Turing machine. I have argued elsewhere that the quantum mechanical subset of the Plenitude, namely the Multiverse, is the most likely system to be observed by conscious beings. In this paper, we accept the MWI or Multiverse as a working hypothesis, and consider what the implications are for evolutionary systems. An evolutionary system consists of a means of producing variation, and a means of selecting amongst those variations (natural selection). Now variations are produced by chance, and in the Multiverse picture this corresponds to a branching of histories, whereby a particular entity's offspring will have different forms in different histories. The measure of each variant is related to the proportions in which the variants are formed, and the measure of each variant evolves in time through a strictly deterministic application of Schrödinger's equation. What, then, determines which organisms we see today, given that a priori any possible history, and hence any mix of organisms, may correspond to our own? Is natural selection completely meaningless? The first principle we need to apply is the anthropic principle, i.e. only those histories leading to complex, self-aware substructures will be selected. We also need to apply the self sampling assumption (SSA). The SSA is that each observer should regard itself as a random sample drawn from the set of all observers. It is the implicit assumption used in Carter and Leslie's Doomsday argument, and in much other anthropic reasoning. Stated another way, as observers we should expect to see a world that is nearly maximal in measure, subject to it being consistent with our existence. In this picture, natural selection is a process that differentiates the measure attributed to each variant organism.

## 2 Complexity Growth in Evolution

As I argued elsewhere, lawful universes with simple initial states by far dominate the set consistent with the AP.
So the AP fixes the end point of our evolutionary history (the existence of complex, self-aware organisms), and the SSA fixes the beginning (evolutionary history most likely started with the simplest organisms). We should therefore expect to see an increase in complexity through time. What about living systems not governed by the anthropic principle? Examples include extraterrestrial life (within our own universe, if it exists) and artificial life systems. Nonhuman terrestrial life is governed by the AP, since one expects that the evolutionary process that produced us will also produce the numerous other organisms found on Earth. A system of life that has evolved completely independently of Earth has no requirement to produce intelligent beings, and unless complexity growth is inevitable given the laws of physics and chemistry, no requirement to produce complex life forms. Proponents of SETI (the Search for Extra-Terrestrial Intelligence) believe in an inevitability of the evolution of intelligent life, given the laws of physics. The anthropic principle does indeed ensure that the laws of physics are compatible with the evolution of intelligence, but does not mandate that this should be likely (excepting, obviously, our own case). Hanson has studied a model of evolution based on easy and hard steps to make predictions about what the distribution of such steps should be within the fossil record. He finds that the fossil record is consistent with there being 4–5 hard steps in getting to intelligent life on Earth. By hard steps, he means steps whose expected duration greatly exceeds the present age of the universe. The hard steps include

* origin of the first replicator
* origin of sex
* origin of eukaryotic cells
* origin of multicellularity
* possibly the origin of self-aware conscious entities

This would imply that intelligent life is fairly unique within our own universe, to the chagrin of the SETI proponents, but simple prokaryotic life may well be ubiquitous. Of course, it is also true that a single example of extraterrestrial intelligence would be an important counterexample to these arguments based on the AP and SSA, so SETI is not by itself a fruitless exercise. Likewise, for artificial life, it would seem plausible that a series of easy and hard steps is required to climb the complexity ladder. Already, the first such hard transition (the creation of replicators from the primeval soup) has been observed, but equivalents of the other transitions (e.g. the transition to sexual reproduction, prokaryote to eukaryote, or multicellularity) have not been observed to date. Ray is leading a major experiment designed to probe the transition to multicellularity — success in this experiment will provide remarkable constraints on just how finely tuned the physics and chemistry need to be in order for the system to pass through a hard transition. Adami and co-workers examined the Avida alife system for evidence of complexity growth during evolution. They did find this, although it is largely a matter of the artificial organisms learning how to solve arithmetic problems that have been imposed artificially on the system. An analogous study by myself of Tierra showed no such increase in complexity over time — if anything the trend was towards greater simplicity. This work is still in progress.

## 3 Evolutionary Physics?

Returning to the picture of the "All Universes Hypothesis", we can see that our current universe is made up from contingency and necessity.
The necessity comes from the requirements of the anthropic principle; however, when a particular aspect of the universe is not constrained by the AP, its value must be decided by chance (according to the SSA) the first time it is "measured" by self-aware beings (this measurement may well be indirect — properties of the microscopic or cosmic worlds will need to be consistent with our everyday observations at the macroscopic level, so may well be determined prior to the first direct measurements). Evolution is also described as a mixture of contingency and necessity. When understood in terms of the AP supplying the necessary, and the SSA supplying the rationale for resolving chance, the connection between the selection of physical laws and the selection of organisms in evolution is made clear. It is as though the laws of physics and chemistry have themselves evolved. Perhaps applying evolutionary principles to the underlying physico-chemical laws of an alife system will result in an alife system that can pass through these hard transitions.
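As a purely illustrative aside on the hard-steps picture invoked in Section 2 (this sketch is mine, not part of the paper's argument): in such a model each hard step is a waiting time whose mean greatly exceeds the available window, and conditioning on all steps completing in time makes the surviving step durations look statistically similar, however different their underlying difficulties. A toy Monte Carlo, with illustrative numbers, shows the effect.

```python
import numpy as np

# Toy "hard steps" model: sequential exponential waiting times whose means all
# exceed the window T, keeping only histories in which every step finishes
# within T. Three steps are used purely to keep rejection sampling fast; the
# same logic applies to the 4-5 steps discussed above.
rng = np.random.default_rng(1)
T = 1.0
means = np.array([4.0, 12.0, 40.0])            # illustrative "hard" means, all >> T
samples = rng.exponential(means, size=(2_000_000, len(means)))
accepted = samples[samples.sum(axis=1) < T]

print("accepted histories:", len(accepted))
print("conditional mean duration of each step:", accepted.mean(axis=0).round(2))
# All steps come out near T/(n_steps+1) ~ 0.25, despite unconditional means that
# differ by a factor of ten: conditioned on success, hard steps look alike,
# which is the kind of regularity Hanson's model looks for in the record.
```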
no-problem/0001/astro-ph0001038.html
ar5iv
text
## 1 Introduction

Clusters of galaxies are likely to be dynamically young systems and a promising way to constrain cosmological models arises from the study of their substructures (; ). Among the methods suggested to quantify the degree of inhomogeneity in clusters (; ; ), one of the most promising is based on the so-called power ratios $`\mathrm{\Pi }^{\left(m\right)}`$ (, hereafter VGB99; ). It amounts to a multipole expansion accounting for the angular dependence of the cluster X–ray surface brightness, limited to the first few multipole terms and at fixed scales (typically of the order of a Mpc). This is an effective and synthetic way to discriminate cluster features, and the $`\mathrm{\Pi }^{\left(m\right)}`$ are found to depend on the cosmological model, enabling one to discriminate among different cosmologies.

## 2 Power ratios: definition and evolution

The number of photons collected by the ROSAT PSPC does not depend on the temperature $`T`$, if it is $`\gtrsim 1`$ keV. We can then assume that the X–ray surface brightness is $`\mathrm{\Sigma }_X=\mathrm{\Lambda }\int \rho _b^2\left(𝐫\right)𝑑z`$. Here $`\mathrm{\Lambda }\approx const`$ and the integration runs along the line of sight. The procedure to work out the $`\mathrm{\Pi }^{\left(m\right)}`$ is as follows (see, e.g., , hereafter G98; VGB99; ): (i) $`\rho _b^2\left(𝐫\right)`$ is projected along a line of sight onto a (random) plane to yield the $`X`$–ray surface brightness $`\mathrm{\Sigma }(R,\phi )`$; the centroid is used as origin. (ii) By solving the Poisson equation $`\nabla ^2\mathrm{\Phi }=\mathrm{\Sigma }(R,\phi )`$ we obtain the pseudo–potential $`\mathrm{\Phi }(R,\phi )`$. (iii) The coefficients of the expansion of $`\mathrm{\Phi }`$ in plane harmonics are used to build the power ratios $`\mathrm{\Pi }^{\left(m\right)}\left(R_{ap}\right)=\mathrm{log}_{10}\left(P_m/P_0\right)`$. Here, $`P_m\left(R\right)=\left(\alpha _m^2+\beta _m^2\right)/2m^2,P_0=\left[\alpha _0\mathrm{ln}\left(R/\mathrm{kpc}\right)\right]^2`$, while
$$\alpha _m=\int _0^1𝑑ss^{m+1}\int _0^{2\pi }𝑑\phi \left[\mathrm{\Sigma }(sR,\phi )R^2\right]\mathrm{cos}\left(m\phi \right),$$ (1)
and $`\beta _m`$ has an identical definition, with sin instead of cos. Owing to the definition of the centroid, $`\mathrm{\Pi }^{\left(1\right)}`$ vanishes. We shall restrict our analysis to $`\mathrm{\Pi }^{\left(m\right)}`$ ($`m=2,3,4`$), to account for substructures on scales not much below $`R`$ itself. We consider three different aperture radii $`R_{ap}=0.4,0.8,1.2h^{-1}`$Mpc. Because of its evolution, a cluster moves along a curve in the 3–dimensional space spanned by these $`\mathrm{\Pi }^{\left(m\right)}`$'s; this curve is called the evolutionary track. Quite generally, a cluster starts from a configuration away from the origin, corresponding to a large amount of internal structure, and evolves towards isotropization and homogenization. Actual data, of course, do not follow the motion of a given cluster along the evolutionary track. Different clusters, however, lie at different redshifts and can be used to describe a succession of evolutionary stages.

## 3 The simulated and the observed cluster sample

We consider three spatially flat cosmological models: CDM, $`\mathrm{\Lambda }`$CDM with a cosmological constant accounting for $`70\%`$ of the critical density, and CHDM with one massive neutrino of mass $`m_\nu =4.65`$eV, yielding a HDM density parameter $`\mathrm{\Omega }_h=0.20`$.
We set $`h=0.5`$ for CDM and CHDM and $`h=0.7`$ for $`\mathrm{\Lambda }`$CDM; for all models the primeval spectral index is $`n=1`$ and the baryon density parameter is selected to give $`\mathrm{\Omega }_bh^2=0.015`$. All models were normalized so as to reproduce the present observed cluster abundance (, ). In order to achieve a safe statistical basis for our analysis, for each cosmological model we select the 40 most massive clusters from an N–body P3M simulation. For each of them, we perform a hydrodynamical TREESPH simulation (VGB99, G98; see also and ). Clusters are distributed in redshift so as to reproduce the same redshift distribution as the observed cluster sample. Our observed data set is the same as used by , including nearby ($`z\lesssim 0.2`$) clusters observed with the ROSAT PSPC instrument (see VGB99, G98 for details). The resulting sample is partially incomplete, but clusters were not selected for reasons related to their morphology, and the missing clusters are expected to have a distribution of power ratios similar to the observed one.

## 4 Results and Conclusions

For simulated clusters, the power ratios $`\mathrm{\Pi }^{\left(m\right)}`$ have been computed from the gas distribution. A visual inspection of how the $`\mathrm{\Pi }^{\left(m\right)}`$ are distributed can be obtained from Fig.1, whose histograms show the fraction of clusters with a given $`\mathrm{\Pi }^{\left(m\right)}`$. Here we show the distribution of $`\mathrm{\Pi }^{\left(3\right)}`$ for $`R_{ap}=0.8h^{-1}`$Mpc for each cosmological model and for the ROSAT data sample; distributions for the other $`\mathrm{\Pi }^{\left(m\right)}`$ and the other apertures show a similar behavior. Quite generally, we can conclude that while CDM and CHDM are marginally consistent with the data, $`\mathrm{\Lambda }`$CDM is far below them. In order to quantify these differences, we used the Student t–test, the F–test and the Kolmogorov–Smirnov (KS) test. For example, according to the t–test, the probability $`p_t`$ that the simulated and observed power-ratio distributions originate from the same process is roughly in the ranges 0.11–0.60 for CDM, 0.03–0.37 for CHDM and $`0.15\times 10^{-4}`$–$`0.87\times 10^{-2}`$ for $`\mathrm{\Lambda }`$CDM. The other statistical tests provide similar probabilities. Such figures seem to exclude $`\mathrm{\Lambda }`$CDM as a reasonable approximation to the data. The best score belongs to CDM, but CHDM is also not fully excluded and different mixtures could certainly perform better. An inspection of the model clusters actually shows that the $`\mathrm{\Lambda }`$CDM model does produce fewer substructures than the other models do. A possible interpretation of this result is that the actual amount of substructure is governed by $`\mathrm{\Omega }_0`$ rather than by the shape of the power spectra. According to the same tests, if cosmological models are compared with data on the basis of the DM $`\mathrm{\Pi }^{\left(m\right)}`$, the values are shifted, indicating an increase in the amount of substructure for DM with respect to the gas. This is to be ascribed to the smoothing effect of the interactions among gas particles, which erase anisotropies and structures, while the DM $`\mathrm{\Pi }^{\left(m\right)}`$ scarcely feel dissipative processes. Hence, using DM $`\mathrm{\Pi }^{\left(m\right)}`$ leads to biased scores: CDM and CHDM models keep too many substructures and are no longer consistent with the data; on the contrary, the increase of substructure pushes $`\mathrm{\Lambda }`$CDM into agreement with the ROSAT sample.
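The kind of two-sample comparison described above can be reproduced with standard tools. In the sketch below the arrays are placeholders for the simulated and observed $`\mathrm{\Pi }^{\left(3\right)}`$ values (the actual samples are not listed in the text), so only the procedure, not the numbers, is meaningful; the Welch variant of the t-test is my own choice.

```python
import numpy as np
from scipy import stats

# Placeholder power-ratio samples; substitute the 40 simulated and the observed
# Pi^(3) values at R_ap = 0.8 h^-1 Mpc here.
pi3_sim = np.random.default_rng(0).normal(loc=-7.0, scale=0.6, size=40)
pi3_obs = np.random.default_rng(1).normal(loc=-6.6, scale=0.8, size=30)

t_stat, p_t = stats.ttest_ind(pi3_sim, pi3_obs, equal_var=False)
ks_stat, p_ks = stats.ks_2samp(pi3_sim, pi3_obs)
print(f"Student t-test : p = {p_t:.3f}")
print(f"KS test        : p = {p_ks:.3f}")
# A small p-value would indicate that the simulated and observed Pi^(3)
# distributions are unlikely to originate from the same parent population.
```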
We also considered the cluster distribution in the 3–dimensional parameter space with axes given by the $`\mathrm{\Pi }^{\left(m\right)}`$ ($`m=2,3,4`$), as well as projections of such distributions onto planes. Comparing such distributions for data and models, we find a significantly stronger correlation of the $`\mathrm{\Pi }^{\left(m\right)}`$ in the models than in the data. The distributions for simulated clusters show a linear trend, while the distributions of observed clusters tend to be more scattered than the simulated points. The degree of correlation depends on the model, but in all cases it seems to be in disagreement with the data. Model clusters tend to indicate a significantly faster evolution than the data. The cosmological model which seems closest to the data is CHDM, and it is possible that different CHDM mixtures can lead to further improvements. Also $`\mathrm{\Lambda }`$ models with $`\mathrm{\Omega }_m>0.5`$ might deserve to be explored. Virialized clusters had their turn–around at a time $`<t_o/3`$ ($`t_o`$: present age of the Universe). In turn, $`\mathrm{\Omega }_\mathrm{\Lambda }`$ becomes dominant at $`1+z=\left(\mathrm{\Omega }_\mathrm{\Lambda }/\mathrm{\Omega }_m\right)^{1/3}`$. If this redshift corresponds to a time $`\gtrsim t_o/3`$, we expect the results from $`\mathrm{\Lambda }`$ models to be closer to observations.
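For completeness, here is a minimal pixel-level sketch of how the moments in Eq. (1) and the power ratios $`\mathrm{\Pi }^{\left(m\right)}`$ can be evaluated on a surface-brightness image. It is only meant to make the definitions concrete: the array names, pixel scale and centroiding choice are illustrative assumptions of mine, and the Poisson-equation step of the full procedure is bypassed by working directly with the interior moments of $`\mathrm{\Sigma }`$ to which Eq. (1) reduces.

```python
import numpy as np

def power_ratios(image, pix_kpc, R_ap_kpc, m_max=4):
    """Power ratios Pi^(m) = log10(P_m/P_0) inside an aperture, following Eq. (1).

    image    : 2-D array of X-ray surface brightness (arbitrary units)
    pix_kpc  : pixel size in kpc
    R_ap_kpc : aperture radius in kpc
    """
    ny, nx = image.shape
    y, x = np.indices((ny, nx), dtype=float)
    # Centroid of the emission is used as origin (so P_1 vanishes by construction);
    # here the centroid of the whole image is taken for simplicity.
    tot = image.sum()
    xc, yc = (x * image).sum() / tot, (y * image).sum() / tot
    r = np.hypot(x - xc, y - yc) * pix_kpc
    phi = np.arctan2(y - yc, x - xc)
    inside = r <= R_ap_kpc
    dA = pix_kpc ** 2

    alpha0 = image[inside].sum() * dA
    P0 = (alpha0 * np.log(R_ap_kpc)) ** 2       # ln(R/kpc) as in the text
    ratios = {}
    for m in range(2, m_max + 1):
        w = image[inside] * r[inside] ** m * dA / R_ap_kpc ** m
        a_m = (w * np.cos(m * phi[inside])).sum()
        b_m = (w * np.sin(m * phi[inside])).sum()
        P_m = (a_m ** 2 + b_m ** 2) / (2.0 * m ** 2)
        ratios[m] = np.log10(P_m / P0)
    return ratios

# Example on a slightly elliptical mock cluster (purely illustrative):
yy, xx = np.indices((256, 256), dtype=float)
mock = np.exp(-(((xx - 128) / 30.0) ** 2 + ((yy - 128) / 24.0) ** 2))
print(power_ratios(mock, pix_kpc=10.0, R_ap_kpc=800.0))
```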
no-problem/0001/astro-ph0001106.html
ar5iv
text
## 1 AMI and SZ cluster skies

The Sunyaev-Zel'dovich (1972, SZ) effect, unlike optical and X-ray cluster surveys, is not affected by redshift because it measures the integrated line-of-sight intracluster gas pressure via its Compton scattering of cosmic microwave background (CMB) photons. During the last 10 years, interferometric techniques have been developed which are providing firm detections of known clusters (e.g. Jones et al. 1993, Carlstrom, Joy and Grego 1996). The technology and expertise are now available to survey the sky to discover clusters. The construction of a CMB telescope for the observation of the CMB on angular scales of one to several arcminutes and with a sensitivity of a few micro-Kelvin, comparable to the Planck Surveyor, has been proposed in Cambridge. The Arcminute MicroKelvin Imager (AMI) will consist of a small compact array of 4 meter dish antennas combined with the existing Ryle telescope antennas in an extended array, with a new receiving system and a novel correlator. This design achieves optimal sensitivity to the cluster SZ effect and separation from other components such as the primary CMB and radio sources. Although the instrument is dedicated to the study of clusters, it will generally probe the structure of the CMB on sub-Planck and super-ALMA scales. Therefore it is also sensitive to other phenomena such as inhomogeneous ionisation, density–velocity correlations (Vishniac and Ostriker effect), filaments and topological defects, if they exist, which are all of immense interest as well. In the following, however, we demonstrate the ability of AMI to discover clusters. To produce simulated SZ cluster sky maps, the Press-Schechter expression (1974) is used to create a list of cluster masses and redshifts having an abundance consistent with the local cluster temperature function (Eke et al. 1996). These clusters are placed at random angular positions within a $`5\times 5`$ sky map with 40 arcsec pixels. To model the cluster SZ signal, template maps have been created from the hydrodynamical simulations of Eke, Navarro and Frenk (1998), and these are pasted, suitably scaled, onto the cluster positions. This procedure is performed for two cases, a low present density ($`\mathrm{\Omega }=0.3`$) and a high density ($`\mathrm{\Omega }=1`$) universe, both with a Hubble constant of 70 km s^-1 Mpc^-1. The gas fraction is fixed at 10% in both cases rather than at the primordial nucleosynthesis value, which would have introduced an $`\mathrm{\Omega }`$ dependence, enhancing the differences. Our value for the gas fraction is at the low end of the values estimated from X-ray clusters (Ettori and Fabian (1999) and Mohr, Mathiesen and Evrard (1999) find it to be 0.1–0.25 at the 95% CL), and the model SZ number counts would increase with a less conservative choice. We also produced corresponding X-ray maps in the 0.5–2 and 2–10 keV bands, and have checked that they are complete to an X-ray flux limit of $`1\times 10^{-15}`$ erg cm^-2 s^-1 [0.5–2 keV].

## 2 An SZ survey and X-ray / optical follow-up

The sensitivity of future instruments (we use AMI-specific numbers) is high enough to allow a survey of the sky for SZ clusters. To make our simulation of the observation process computationally feasible we simplified the observation by running a compensated beam with carefully chosen shape and amplitude over the cluster map.
This procedure was gauged against detailed simulations of the interferometer response, including the specific noise properties of a field observation, and through the recovery process. The resulting number counts are shown in figure 1. There is a turn-over at the confusion limit for a 4.5 arcmin beam. Also shown in figure 1 are sensitivity lines, each corresponding to a fixed observation time and displayed as the inverse of the survey area against the flux limit. The ratio between the number counts and the sensitivity lines is the number of clusters detected in the respective time. The maximum detection number lies at a limiting observed flux of about 100 $`\mu `$Jy, or a corresponding survey area of 5 deg^2; about 10 clusters for the high-density case and tens of clusters for the low-density case are expected for a 6 month observation. Since the optimum is shallow it will also be possible to adopt different surveying strategies at a small cost in efficiency. This will probe the slope of the number counts, which is a function of $`\mathrm{\Omega }`$. The simulation parameters which have the largest effects on the number counts are listed, with their present uncertainties, in Table 1. Their effect is considerable, but substantial improvements can be expected in the future, in particular from the new X-ray missions. Due to the $`\left(1+z\right)^4`$ dimming of the bolometric X-ray flux compared to the Compton scattering process, the X-ray and SZ flux-limited cluster samples have very different redshift distributions, even for an X-ray limit of $`5\times 10^{-15}`$ erg cm^-2 s^-1 [0.5–2 keV] in the case of the currently favoured low density model. With scaling laws we find that the ratio of SZ to X-ray flux scales as $`\left(1+z\right)^{5/2}`$, with only a weak dependence on $`\mathrm{\Omega }`$ and the cluster temperature. When observing the same field in microwaves and X-rays this "photometric" redshift effect will immediately allow a crude measure of the redshift distribution (see figure 2). A limit on the redshift will be obtained for the SZ clusters undetected in X-rays. A major uncertainty in determining $`\mathrm{\Omega }`$ from this distribution comes from $`f_g`$. However, when optical redshifts are obtained, we find that for the flux limit reached with AMI the median redshift of only 20 clusters will allow the two cases to be distinguished with more than 99% confidence. Going back to the X-ray and SZ data with an estimate of $`\mathrm{\Omega }`$, the gas physics, for example $`f_g\left(z\right)`$, can be studied.

Table 1: Model parameters affecting SZ number counts

| parameter | change in percent | fractional change in $`N\left(>Y\right)`$ |
| --- | --- | --- |
| $`h`$ | 20 % | 1.3 |
| $`f_g`$ | 30 % | 1.5 |
| $`\sigma _8`$ | 7 % (1$`\sigma `$) | 1.5 |
|  | 14 % (2$`\sigma `$) | 3.2 |

Compare this to $`N\left(\mathrm{\Omega }=0.3\right)/N\left(\mathrm{\Omega }=1\right)\approx 3.5`$.

## 3 Conclusions

Instruments to survey the sky for SZ clusters can be built. The expected number of clusters is greater than 10 for a half-year observation and depends strongly on the matter density. The SZ data in combination with X-ray and optical data can be used to constrain $`\mathrm{\Omega },f_g`$ and $`\sigma _8`$, and, from distance measurements, $`H_0`$ and most likely $`\mathrm{\Omega }_\mathrm{\Lambda }`$, if enough suitable clusters are present at high redshift.
SZ cluster surveys will be particularly useful as pathfinders for future X-ray missions such as Constellation-X and XEUS, which will have small fields of view. With the cluster SZ effect a sample of massive clusters at high redshift can be provided as targets for a detailed study of the plasma physics in X-rays, interesting for the understanding of cluster formation and cosmology. Acknowledgements. I am grateful to my collaborators Mike Jones and Vincent Eke, and acknowledge financial support from an EU Marie Curie Fellowship.
no-problem/0001/hep-lat0001033.html
ar5iv
text
# The 1 Teraflops QCDSP computer

## 1 Introduction

The search for the smallest constituents of matter has led to the discovery of many sub-atomic particles and the development of the standard model of particle physics. This model is based on the principle of "local gauge invariance", first seen in Maxwell's theory of electromagnetism, where it constrains the types of interactions possible between photons and electrons. The standard model includes the strong, weak and electromagnetic forces, providing a description of virtually all experimental phenomena seen to date. It is a theory of generalized force-carrying particles of spin one interacting with matter that is either fermionic (spin one-half) or bosonic (spin zero). The principle of local gauge invariance is an important abstract idea, similar to the concept of evolution in biology, but as embodied in the standard model it also leads to a quantitative theory which describes particle interactions precisely. Many comparisons between standard model predictions and experiment have been made, primarily involving the weak and electromagnetic part of the model or the strong force at high energies. At low energies (up to a few GeV) the strong force (described by a part of the standard model known as quantum chromodynamics, QCD) is not analytically tractable in any reliable approximation, making quantitative predictions in this region solely accessible by computational techniques. As a simple example, most of the proton's properties are completely determined by QCD, but cannot be calculated from first principles. The lack of precise predictions from QCD is currently a restriction on further tests of the standard model and on our ability to understand new phenomena to be probed by experiment. QCD is formulated in terms of quarks (spin one-half particles) and gluons, which mediate the strong force. The electroweak interactions of quarks are given by the standard model, but many of the manifestations of these interactions are only visible in physical particles (hadrons) which are bound states of quarks. Precise predictions from QCD for electroweak processes require knowing precise information about the quark content of hadrons. In addition, the RHIC (Relativistic Heavy Ion Collider) at Brookhaven National Laboratory will soon begin colliding nuclei at high energies to probe nuclear matter at high temperatures and densities. Once again analytic calculations here are limited, even though the underlying physical formulation is presumed understood.

## 2 Lattice Gauge Theory

The need for accurate calculational results from QCD is vital for many areas of research in particle physics. QCD is also of intrinsic interest as a theory in its own right and as a prototype for other, more fundamental theories of nature. For almost 20 years, QCD has been the subject of numerical investigations. As a calculational problem, QCD is particularly straightforward, since only a few free parameters (the strength of the strong force at some distance scale and masses for the quarks) completely define the theory. It is, however, very computationally intensive. When QCD is formulated on a space-time grid, it is generally referred to as lattice QCD . Most numerical work on QCD uses the Feynman path integral approach, where the quantum mechanical nature of the system is exhibited by summing over all possible configurations of quarks and gluons, weighted by the classical action for such a configuration.
In this sum over configurations, we measure values for various observables, which are related to the physical quantities of interest. To evaluate an observable $`O_i`$ we must calculate the multi-dimensional integral
$$\left\langle O_i\right\rangle =\frac{\int \prod _{n,\mu }dU(n,\mu )det\{D_{([U],m)}\}\mathrm{exp}(-\beta S[U]/\hbar )O_i}{\int \prod _{n,\mu }dU(n,\mu )det\{D_{([U],m)}\}\mathrm{exp}(-\beta S[U]/\hbar )}$$ (1)
where $`n`$ runs over all space-time points, $`\mu =0`$ to 3 runs over the 4 directions in space-time, $`U`$ is an SU(3) matrix, $`dU`$ is the gauge invariant Haar measure on the group SU(3), $`D_{([U],m)}`$ is one of the possible lattice Dirac operators, $`m`$ is the quark mass, $`S[U]`$ is the classical action for a gauge field $`U`$, $`\beta `$ is the inverse of the coupling constant squared and $`\hbar `$ is Planck's constant. As written, the path integral over the matrices $`U`$ represents the integration over all gluon degrees of freedom. The quark integrations have already been done and their effects are included through the determinant factor above. The largest part of the computational load in lattice QCD comes from evaluating this determinant for a fixed background gauge field $`U`$. The lattice Dirac operator $`D`$ is a linear operator, which depends on $`U`$, and has a dimensionality greater than the number of space-time points. (A $`32^3\times 64`$ space-time volume contains $`10^7`$ points.) Discretizing the continuum Dirac operator for lattice simulations produces a variety of different lattice operators which should all give the same physics back in the continuum limit. Common lattice Dirac operators are the Wilson , staggered , domain wall and overlap/Neuberger operators. The first two are in wide use by the community; the third will be described further later in this article and the fourth is described in another paper in this series. The importance sampling algorithms (hybrid molecular dynamics and hybrid Monte Carlo) generally employed in lattice QCD do not require a calculation of the determinant. By writing the determinant as the exponential of the trace of the logarithm of the matrix, only the trace of the Dirac operator is needed. The trace of the Dirac operator is in turn found with a stochastic estimator, which means solving a linear system involving the Dirac (quark) matrix. This matrix is large, but sparse, with only $`O(10)`$ non-zero entries per row or column. It is easy to parallelize this linear equation problem, since the data flow is regular and known. On each local processor, one must be able to efficiently multiply $`3\times 3`$ complex matrices with a complex 3-vector. Ultimately, better algorithms may be found, but the presence of fermions has hindered progress in this area for many years.

## 3 QCDSP

The design of the QCDSP computer began in the spring of 1993 . At that time, a number of dedicated, special purpose QCD computers were in operation (including ACP-MAPS at Fermilab, GF11 at IBM, APE in Italy, QCD-PAX in Tsukuba and the Columbia 256 node machine). Sustained speeds of $`\sim 5`$ Gigaflops were being achieved, but most of the simulations were studying quenched QCD, i.e. QCD with the determinant factor in the path integral set to 1. This is an uncontrolled approximation to the full theory, which would require computers on the Teraflops scale to remove.
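As an aside on the numerical task described in the previous section: the stochastic trace estimation mentioned there is easy to illustrate on a toy matrix. The sketch below uses a small random Hermitian positive-definite matrix in place of a real lattice Dirac operator, so the size, spectrum and number of noise vectors are purely illustrative assumptions; on a real lattice each solve would be done iteratively (e.g. by conjugate gradient) rather than with a dense solver.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = A @ A.conj().T + n * np.eye(n)          # Hermitian positive-definite stand-in

# Noise (Hutchinson-style) estimator of Tr(M^-1): average eta^dagger M^-1 eta
# over random Z2 noise vectors; each term costs one linear solve.
n_noise = 50
est = 0.0
for _ in range(n_noise):
    eta = rng.choice([-1.0, 1.0], size=n)
    x = np.linalg.solve(M, eta)
    est += eta @ x
est /= n_noise

exact = np.trace(np.linalg.inv(M)).real
print(f"stochastic estimate: {est.real:.4f}   exact: {exact:.4f}")
```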
Sustained speeds of $`5`$ Gigaflops were being achieved, but most of the simulations were studying quenched QCD, i.e. QCD with the determinant factor in the path integral set to 1. This is an uncontrolled approximation to the full theory, which would require computers on the Teraflops scale to remove. Costs for a general purpose commercial teraflops scale machine at this time were estimated at around $100 million US dollars, for delivery in two years. A joint commercial/academic effort in the US estimated a few tens of millions of US dollars for a commercial computer slightly customized to run QCD at a Teraflops scale . This project never materialized, with one cause being the large cost. The very successful CP-PACS project in Tsukuba, Japan followed a commercial/academic path, culminating in the 600 Gflop Hitachi computer and is described elsewhere in this volume. The major design goal of QCDSP was maximum sustained performance for QCD per unit cost in a machine which could scale to a peak performance of at least a Teraflops. Initial estimates, detailed below, gave a price for parts of 3 million US dollars. Low costs required inexpensive processors and the Teraflops goal required the machine to scale to very large numbers of processors (10,000 or more). Another important part of cost effectiveness is the ratio of money spent on processors, memory and communications hardware. Since the communications patterns for the currently preferred QCD algorithms are very regular and dominantly involve transfer of data between nearest neighbor space-time points, a machine with a grid based communications network works very well for QCD. A grid based network is quite straightforward to design and inexpensive to build. Since no routing information is needed for nearest neighbor transfers (the hardware directly connects nearest neighbors) the network has very little startup latency. This is important if most data transfers (as is the case for QCD on this type of machine) involve sending many small amounts of data. The nearest neighbor grid architecture of course allows general communications to be done by hopping data between processors, which requires routing information and decreases network bandwidth. A four dimensional grid-based communication network was chosen for QCDSP. This is natural, since our problem is one in four-dimensional space-time (although the domain wall fermions described below are naturally thought of in five dimensions). This makes the mapping of the problem to the machine particularly simple; each processor is responsible for the data storage of variables for a particular space-time volume. The communication between processors is then dominantly nearest neighbor (except for global sums which are described in more detail below). One can always run a four dimensional problem on a lower dimensional regular grid (still using the natural mapping) by making some dimensions local to the processor. Another advantage of the four-dimensional nearest neighbor communications network is that no single dimension need be large for a machine with a very large number of processors. For example, 10,000 processors are contained in a four-dimensional hypercubic lattice with 10 processors in each dimension. Since the natural mapping described above implies that the size of the physics problem in each dimension is greater than or equal to the number of processors in each dimension, it is important to be able to keep the processor grid of roughly equal size in each dimension. 
### 3.1 QCDSP Architecture: Processor Nodes

In 1993, to build a Teraflops machine for a few million dollars required an inexpensive processing node with very low power consumption. Digital Signal Processors (DSPs) are commercial floating point chips which are used in devices with these constraints and in 1993 were expected to provide $1/Megaflops performance within 2 years. At that time the first Pentium and DEC Alpha processors were available, with better performance per chip, but much larger cost per Megaflops and power consumption. Today, 670 Megaflops DSPs are available, along with 1 Gigaflops Alpha processors, but they still differ widely in cost and power consumption. DSPs allow very dense packing, due to their low power. Since memory bandwidth was (and still is) always a problem for microprocessor based machines, using processors, like the DSP, with relatively modest performance made for a better match with existing DRAM speeds. In addition, many high-performance processors are not single chips, but rather chip sets, where cache and a memory controller are commonly separate chips from the processors. Without the full complement of chips high-performance microprocessors can perform very modestly. These general considerations led to the processor nodes diagramed in Figure 1. This node contains a Texas Instruments (TI) Digital Signal Processor (DSP), 2 Mbytes of EDC corrected DRAM and a custom Application Specific Integrated Circuit (ASIC), which we call the Node Gate Array (NGA). This processing node is about the size of a credit card, has a peak speed of 50 Mflops, costs $68 in quantity in 1997 and uses about 3 watts of total power. By current standards, the DRAM size is small. However, DRAM prices were very firm during our research and development cycle and at the prices available then, 2 MBytes of DRAM was close to 40% of the total parts cost. We now describe the DSP and NGA in some detail.

#### 3.1.1 DSP Description

The original DSPs began as general CPUs with only integer arithmetic capability and found use as generally programmable controllers. As they achieved fixed point and then floating point capabilities, they became useful for any application requiring fast arithmetic capability for low cost. They are currently used in cell phones, modems, microwave ovens, stereo equipment, etc., anywhere that low cost, low electrical power floating point intensive algorithms are implemented. The 50 Mflops TI TMS320C31 DSP we use in QCDSP cost about $50 in 1995 ($38 in 1997) and uses about 1 watt of power. It is a single precision processor where double precision can be done in software with a large performance penalty. The low power consumption makes it possible to pack the processors close together, and the cooling system for the computer need not be more than air circulated through the machine and through radiators fed with chilled water. Also the entire power bill for a 10,000 processor machine is in the range of $50,000 per year. DSPs generally have a smaller number of internal registers than a conventional microprocessor and at the time QCDSP was being built, they only contained single arithmetic units without the complicated conditional scheduling common in high end microprocessors. On the C31 we use, the small number of registers is offset by the presence of 2 kilowords of on-chip memory. While not identical to additional registers, this memory can be accessed by the CPU without any delay and is vital to getting high performance for QCD from this DSP.
From a programming point of view, writing programs (say in C or C++) for the DSP is identical to writing for a microprocessor. The limitations of the DSP (small register set, modest speed) appear as parts of the code which perform relatively more slowly, not as something which cannot be done. For straightforward floating point applications, which account for the dominant part of the time in QCD applications, the DSP performs very well. To support a multi-user operating system, like UNIX, the DSP would have to do substantial swapping to memory every time a different user input was received, due to the small number of internal registers.

#### 3.1.2 NGA Description

The NGA is the only custom integrated circuit in QCDSP; all other components are standard commercial products (although we do have a few Programmable Array Logic (PAL) chips which are programmed for specific QCDSP tasks). The NGA has a look-ahead cache (called the circular buffer), EDC circuitry and controllers to handle the physical transfers to the eight nearest neighbor processors in our four dimensional grid. The NGA is described in . Achieving high sustained bandwidth to memory is a major difficulty in all microprocessor based computers. QCD calculations make this problem somewhat easier, since for the most floating point intensive part of the calculation, the pattern of memory fetching is regular and each fetched floating point number is used twice. Also, the number of words written to memory is much smaller than the number of words read (generally 25% or less). This results from the dominant calculation involving multiplication of a complex $`3\times 3`$ matrix times a complex 3-vector:
$$R_i=\sum _{j=0}^2M_{ij}V_j$$ (2)
One can easily see that every real number on the right is used twice in calculating the result on the left, since this is complex arithmetic. Thus even though the DSP can only fetch one real number every 25 MHz cycle, and it needs two per cycle to run at full speed, we can still achieve a large fraction of peak speed through a strategy we now describe.

1. Locate the program which does $`M\times V`$ in on-chip memory.
2. Copy the vector $`V`$ into on-chip memory.
3. Do the multiply, fetching one code word from on-chip memory, one operand from on-chip memory and one operand from DRAM every machine cycle.
4. Write the result out to DRAM.

This strategy yields over 40% of peak speed, assuming the program is pre-loaded into on-chip memory. Note that it makes extensive use of the on-chip memory and assumes that DRAM can provide one operand per cycle. This one-operand-per-cycle capability is made possible by the circular buffer. The circular buffer is a 32 word deep cache, which is given rules about the next transfer by loading a register. For the example above, the circular buffer is set to fetch a maximum of 18 words from DRAM, starting at the first address that is read (the address of $`M_{00}`$ in this case). The circular buffer is also told that subsequent addresses accessed by the DSP will never skip by more than 2 words for this transfer. Thus once the first fetch of $`M_{00}`$ is made, the circular buffer will immediately begin getting the remaining 17 words from DRAM. The circular buffer will provide these to the DSP without delay, so the full input bandwidth can be achieved. Since the circular buffer stores all 18 words internally, the user can also jump back to previously fetched words, a vital feature when the multiply requires the Hermitian conjugate of a matrix.
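To make Eq. (2) and the operand-reuse argument concrete, here is the arithmetic of the complex matrix-vector kernel written out in plain Python. This is only an illustration of the data flow (each real component of V entering the arithmetic twice), not the DSP assembly kernel itself, and the function name is my own.

```python
# Complex 3x3 matrix times complex 3-vector, the innermost kernel of Eq. (2).
# Written in real arithmetic to show that every real component of V is used
# twice (once against a real part of M, once against an imaginary part), which
# is what makes the one-operand-per-cycle memory strategy sufficient.

def mat_vec_3x3(M, V):
    """M: 3x3 list of (re, im) tuples, V: list of 3 (re, im) tuples."""
    R = []
    for i in range(3):
        re = im = 0.0
        for j in range(3):
            mr, mi = M[i][j]
            vr, vi = V[j]
            re += mr * vr - mi * vi     # vr and vi each appear once here ...
            im += mr * vi + mi * vr     # ... and a second time here
        R.append((re, im))
    return R

# About 66 useful flops per call against 24 real words read and 6 written
# (the ~25% write-to-read ratio quoted above), i.e. nearly 3 flops per fetch.
M = [[(1.0, 0.0) if i == j else (0.0, 0.0) for j in range(3)] for i in range(3)]  # identity
print(mat_vec_3x3(M, [(1.0, 2.0), (3.0, 4.0), (5.0, 6.0)]))
```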
The NGA also implements the four-dimensional nearest neighbor communications network in a subsystem called the SCU (Serial Control Unit). The only parts of the network that are not in the NGA are the wires connecting neighboring nodes and the transceivers which are used to drive wires which leave a motherboard. The four-dimensional network does physical transfers over a bit-serial connection that runs at 25 or 50 MHz. The 50 MHz connections have proved to be stable and reliable and are used primarily. The SCU does automatic hardware resends when single bit parity errors are detected. The SCU has direct access to DRAM, without going through the circular buffer, and requires only 2 registers to be loaded to start a transfer. This allows for very low startup latency in communications. Users only specify the starting address in DRAM for a transfer and the total number of words (which can be divided into blocks with a fixed stride). The conversion of 32 bit words into a bit-serial stream is handled by the SCU. Each link between two processors runs independently of all other links in the machine; no global synchronicity is needed or achieved. The two processors at each end of the link must understand which one sends and which one receives, which for lattice QCD is trivially implemented by a shift left/shift right approach to most data transfers. However, the hardware supports, more generally, asynchronous message passing over the nearest neighbor grid, and one general message passing scheme has been implemented. QCD also requires efficient calculation of global sums across the entire machine (for example, to know the dot product of two vectors distributed over the entire machine). The bit-serial communication links cause a large overhead if global sums are done by sending a word to a node and adding the received value to the local value and iterating. (The overhead comes since each 32-bit word must be entirely received by a neighboring node before that node can use its DSP to perform the sum.) To avoid this, the SCU can do a global sum by adding together, as each bit arrives, the data coming in on any set of the communications links and from local memory. This bit-wise sum is then sent out over a selected wire to another node which repeats the process. By choosing an appropriate tree path through the machine, a single node holds the required global sum, which it broadcasts (also handled by the SCU with small latency) to all other nodes. ### 3.2 QCDSP Architecture: Motherboards For a computer with such a large number of nodes, simplicity and ease of repair are very important. To meet these requirements, the node diagrammed in Figure 1 is contained on a single printed circuit board (called a daughterboard). The daughterboards are attached to the motherboards with standard 40-pin SIMM connectors, just as DRAM is generally connected to a PC. Each motherboard holds 63 daughterboards and a 64th processor node is soldered to the motherboard. This 64th processor (called node 0 on a motherboard) is attached to an NGA, 8 MBytes of DRAM, two SCSI buses, an EPROM, a DSP serial connection to each of the 63 daughterboard DSPs and other electronics which controls low-level setup parameters for the motherboard. The 64 processors are arranged in a $`4\times 4\times 2\times 2`$ processor mesh. Node 0 on each motherboard plays a special role during booting of QCDSP and when I/O is done to the host workstation. The motherboards are connected to each other through a tree made out of the SCSI bus connections.
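For orientation, these per-node and per-motherboard figures can be combined into machine-level numbers with a few lines of arithmetic. The sketch below uses only quantities quoted in this paper (50 Mflops and roughly 3 watts per node, 64 nodes per motherboard, and the 0.4 and 0.6 Teraflops machines discussed later); the node counts used are simply those peak speeds divided by the per-node speed.

```cpp
// Back-of-the-envelope check of the machine sizes quoted in this paper,
// using only numbers stated in the text: 50 Mflops and ~3 W per node,
// 64 nodes per motherboard, and the 0.4 and 0.6 Teraflops machine sizes.
#include <cstdio>

int main()
{
    const double mflops_per_node = 50.0;
    const double watts_per_node  = 3.0;
    const int    nodes_per_motherboard = 64;

    const double gflops_per_motherboard =
        mflops_per_node * nodes_per_motherboard / 1.0e3;        // 3.2 Gflops

    const int columbia_nodes = 8192;    // 0.4 Tflops / 50 Mflops per node
    const int riken_nodes    = 12288;   // 0.6 Tflops / 50 Mflops per node

    std::printf("peak per motherboard : %.1f Gflops\n", gflops_per_motherboard);
    std::printf("Columbia  machine    : %d nodes = %d motherboards, %.2f Tflops\n",
                columbia_nodes, columbia_nodes / nodes_per_motherboard,
                columbia_nodes * mflops_per_node / 1.0e6);
    std::printf("RIKEN-BNL machine    : %d nodes = %d motherboards, %.2f Tflops\n",
                riken_nodes, riken_nodes / nodes_per_motherboard,
                riken_nodes * mflops_per_node / 1.0e6);
    std::printf("combined             : %d nodes, %.3f Tflops, ~%.0f kW\n",
                columbia_nodes + riken_nodes,
                (columbia_nodes + riken_nodes) * mflops_per_node / 1.0e6,
                (columbia_nodes + riken_nodes) * watts_per_node / 1.0e3);
    return 0;
}
```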
Figure 2 shows the configuration of the networks for a two-dimensional cross-section of the machine. Node 0 on each motherboard is equivalent to all the others when a production physics calculation is underway and I/O is not being done. In addition, each node 0 can drive a standard SCSI disk, giving a large bandwidth to disk since each SCSI bus is independent and the bandwidth adds. ### 3.3 QCDSP Architecture: Crates and Racks The low power of the DSP allows for close packing of the daughterboards, without having to use more than forced-air cooling. Eight motherboards fit into a crate, where the backplane of the crate provides power, the clock, the reset signal, and interrupts to the motherboards. Three slots in the crate can be set to run motherboards as individual machines. Two crates (a total of 16 motherboards) fit in a rack, which is about four feet high, two feet wide and three feet deep. For pictures of the hardware, please see the Web site http://www.phys.columbia.edu/~cqft/ and links therein. The extent of the four dimensional processor mesh is determined by ribbon cables connected to each motherboard through the backplane. One periodic dimension of extent 4 processors is completely contained on the motherboard. The remaining three dimensions are connected with the external cables into a periodic processor mesh that is an integral multiple of $`4\times 2\times 2`$ in each dimension. Changing the size of the machine requires recabling and generally can be done in a few hours. ## 4 QCDSP Software Another important design objective was to make QCDSP programming as straightforward as possible. Although QCD algorithms are quite well established, there are continual improvements (the domain wall fermions described below are an example) and new techniques which must be added. Even leaving aside new techniques, implementing the existing body of algorithms necessary for a complete QCD simulation environment is sufficient effort that a reasonable programming environment is necessary. One major advantage to using a commercial processor in a custom computer like QCDSP is having many software tools available. TI provides both an assembler and C compiler for the DSP. In addition, we purchased a C++ compiler from Tartan, which has since been bought out by TI. The majority of lines of our programs are written in C++, with the kernels for the floating point intensive parts written in assembly. These are single node compilers, which do not do parallelization for the user. Also for the single node case, we have commercial debuggers, evaluation modules (commercial DSP boards hosted by a workstation or PC) and hardware emulators that we used in code development and hardware debugging. It is an enormous simplification to not be responsible for developing all these tools. QCDSP is a fully MIMD (multiple instruction, multiple data) computer. Each processor can have a different program running on it, although this situation requires a general communication protocol running on each processor if inter-processor communication is required by the programs. For lattice gauge theory simulations, the same program is loaded to each processor with conditional branching depending on the processor coordinates in the four-dimensional processor grid. ### 4.1 Operating System The QCDSP operating system was written completely at Columbia. Since the memory per processor is limited, it was not possible to consider porting Linux, for example, to QCDSP.
In addition, our application does not require a full multitasking operating system, since whenever the machine is doing a physics calculation, that is all it is doing. Also, in its high performance mode, the circular buffer state is altered by interrupts. Therefore, a multitasking operating system which insisted on occasionally tending to its own housekeeping chores, would have to be switched to a non-multitasking mode during high-performance parts of the QCD programs. It is also much easier to debug hardware if the software is not throwing interrupts at random times. In order to preserve ease of programming, the operating system provides a UNIX-like environment to a C or C++ programmer. Many standard C library calls are implemented (printf, fopen, fclose, …). (We have not implemented the C++ iostreams system at this time.) In addition, there are system calls to functions which return the grid coordinates for a processor, check the hardware status of the local processor, return the machine size, etc. Other system calls handle data transfers over the nearest neighbor network. We did not implement MPI (message passing interface), since our hardware directly supports only a subset of MPI calls. It would not be very difficult to port MPI to QCDSP. There are two major components to the operating system. One part resides on the host SUN workstation and other runs on QCDSP. When QCDSP is booted, the host begins to query the machine and determines the total number of motherboards, daughterboards and the four-dimensional configuration of the machine. Recabling the machine does not require any software changes; the new configuration is determined on the subsequent boot. During this process, the host builds the appropriate tables which it uses to route information to any particular node. The operating system also contains a number of features for testing and locating faulty hardware. While booting QCDSP the host does various hardware tests to determine whether there are any problems with the machine. In particular, the QCDSP run-time kernels are not loaded until the local nodes have passed a DRAM test. It is very important that the operating system be able to do a substantial amount of diagnostic work automatically on a machine with so many nodes. When user code begins executing, the host workstation becomes a slave to QCDSP, providing services as QCDSP requests them. Currently, users have access to the host file system from QCDSP and can send output to the host console from QCDSP. More features are planned, including the QCDSP disk system. During program execution, system calls can be made to determine whether any hardware errors have occurred (parity errors on SCU transfers, single or double bit errors in data read from DRAM). At the end of user program execution, the operating system scans the machine to check the hardware state. The operating system currently uses about 1/4 of the available memory per node. About half of this is used for buffers to store the operating system log and the results of users printf(…) calls on each node. Users can retrieve detailed information about their program from each node by retrieving the print buffer contents after program completion. ### 4.2 Application Software Over the last several years, a large lattice QCD software package for QCDSP has been written in C++ and assembly. The vast majority of code is in C++, with the implementation of the various lattice Dirac operators written in assembly, along with SU(3) matrix and vector routines. 
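To give a flavor of what such a C++-on-top, assembly-underneath organization can look like, here is a minimal sketch. The class and function names are invented purely for illustration and are not the actual QCDSP library interfaces, and the numerical kernels are reduced to placeholders.

```cpp
// Schematic of the kind of C++ organization described in this section; the
// class and method names are invented for illustration and are not the
// actual QCDSP library interfaces.
#include <cstdio>
#include <vector>

using Field = std::vector<float>;   // stand-in for a lattice fermion field

// Abstract Dirac operator: each fermion formulation supplies its own apply().
struct DiracOperator {
    virtual ~DiracOperator() = default;
    virtual const char* name() const = 0;
    // y = D x ; in the real code this kernel would be the assembly routine.
    virtual void apply(const Field& x, Field& y) const = 0;
};

struct WilsonDirac : DiracOperator {
    const char* name() const override { return "Wilson"; }
    void apply(const Field& x, Field& y) const override { y = x; /* placeholder */ }
};

struct StaggeredDirac : DiracOperator {
    const char* name() const override { return "staggered"; }
    void apply(const Field& x, Field& y) const override { y = x; /* placeholder */ }
};

// A single conjugate-gradient driver works for any operator handed to it,
// which is the sense in which the correct solver is selected automatically.
void conjugate_gradient(const DiracOperator& D, const Field& src, Field& sol)
{
    std::printf("CG solve with the %s Dirac operator\n", D.name());
    D.apply(src, sol);   // real CG iterations omitted in this sketch
}

int main()
{
    Field src(16, 1.0f), sol(16, 0.0f);
    WilsonDirac dw;
    StaggeredDirac ds;
    conjugate_gradient(dw, src, sol);
    conjugate_gradient(ds, src, sol);
    return 0;
}
```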
Interprocessor communication is done by a set of library routines which handle the normal transfers required in lattice QCD. While QCDSP was being built, programs to solve the Dirac equation for Wilson and staggered fermions were written. These programs were used to test the NGA design and are used as tests on the silicon wafers when NGAs are made. Our Wilson and staggered fermion inverters sustain between 20 and 30% of peak speed depending on the local volume. Now that we have a quite complete implementation of algorithms for lattice QCD, we plan to spend more time on the Dirac equation solvers. Performance between 40 and 50% of peak is achievable. C++ has proved very useful for organization of this fairly large software system. The C++ class structure is used, for example, to guarantee that the correct conjugate gradient solver is called for the kind of fermions you are currently working with. With around 10 collaborators working on the Columbia QCDSP machine and these 10 plus another 8 working on the RIKEN-BNL QCDSP machine, we need software organization to be able to effectively share code written by others. Generic C++ code runs at the few percent of peak level on QCDSP. This is primarily due to low memory bandwidth when program instructions and data are being accessed from different areas of memory. These kinds of accesses are slowed by the delays suffered when one changes DRAM pages. Function calls are also slow for similar reasons, since pushing (popping) register contents onto the stack requires writing (reading) to (from) DRAM. QCDSP is a general computer that has been optimized for lattice QCD. There should be other grid-based problems which would work well on this architecture. A completely different physics calculation has been done on QCDSP and other applications are being considered. ## 5 QCDSP Status The QCDSP machine at Columbia was finished in April, 1998. It has now been running production physics calculations for about a year. During the first few months of running, we removed processors which logged occasional hardware errors (primarily parity errors on SCU transfers or DRAM access). There were also nodes which would cause the machine to hang. Since communication on each link is independent of all other links, if one communication transaction does not complete, eventually all other processors will stop as the effects of the one frozen link cause successive neighbors to stall waiting for data to arrive. These kinds of errors are generally tackled by keeping a running log of the state of each link in DRAM, so that after a hang the offending link can be determined. The RIKEN-BNL 0.6 Teraflops QCDSP machine was completed in October, 1998 and most of it has been in production running since then. We are finding the burn-in time for this machine comparable to the QCDSP machine at Columbia. We expect the entire machine to be running production physics very soon. This machine was awarded the Gordon Bell prize in the performance per dollar category at SC ’98 in Orlando, Florida. ## 6 Domain Wall Fermion Physics from QCDSP As mentioned above, during the development of QCDSP a new lattice Dirac operator was developed, the domain wall fermion operator. The original idea was due to Kaplan and was pursued by Shamir and Neuberger and Narayanan. Here we discuss the boundary variant of domain wall fermions due to Shamir. When the continuum Dirac operator is discretized, one can easily change some of its properties.
In particular, until recently, all known discretizations destroyed the chiral symmetry of the Dirac operator. (The chiral symmetries return as the lattice spacing is taken small, provided the parameters are adjusted appropriately.) In the continuum, chiral symmetry says that the Dirac operator does not couple left- and right-handed quarks to each other. (Handedness refers to whether the spin and momentum are parallel or antiparallel.) For massless quarks in the continuum, the left- and right-handed components only couple through the dynamics of QCD, a process known as chiral symmetry breaking. If the discretized Dirac operator breaks chiral symmetry, then it is hard to separate the chiral symmetry breaking due to QCD and that due to the discretization. Chiral symmetry breaking is one of the dominant characteristics of the theory at low energies and it is important to have its effects clearly represented. Domain wall fermions are a discretization which preserves the chiral symmetry of the theory at finite lattice spacing. Domain wall fermions employ a five-dimensional fermionic field, coupled to the four-dimensional gauge (gluon) field. The boundary conditions at each end of the fifth dimension are chosen so that a surface state (a mode that propagates in four dimensions) appears which is chiral. In particular, a right-handed, four dimensional quark appears at one end of the fifth dimension and a left-handed four dimensional quark at the other. These states are the four dimensional chiral quarks we desire. As the extent of the fifth dimension, $`L_s`$, is taken large, the domain wall Dirac operator breaks chiral symmetry with terms of order $`\mathrm{exp}(-\alpha L_s)`$ where $`\alpha `$ is a constant. Computationally, domain wall fermions cost a factor of $`L_s`$ more than other approaches. Simulations done by the Columbia group and others show that for smaller lattice spacing, $`L_s=16`$ is likely sufficient, while at larger lattice spacing, $`L_s=32`$ or more may be necessary. How much one gains from having chiral symmetry must be balanced against this additional cost. ### 6.1 QCD Thermodynamics When QCD is heated up, the quarks and gluons which compose hadronic matter are liberated into a quark-gluon plasma. Lattice simulations have found this temperature to be $`\sim 160`$ MeV. However, the detailed properties of the phase transition are expected to be controlled by the symmetries of the theory, including the chiral symmetries. Since domain wall fermions have the correct chiral symmetries even at finite lattice spacing, it is important to see if our understanding of the critical region of the QCD phase transition changes with this formulation. The group at Columbia has been actively studying the finite temperature QCD phase transition with domain wall fermions using QCDSP. At the large lattice spacings where current thermodynamics studies can be done, large values for $`L_s`$ are required. Figure 3 shows the dependence of the chiral condensate on the length of the fifth dimension. (The chiral condensate should go to zero with the quark mass in the quark-gluon phase and be non-zero in the normal hadronic phase.) One can see the expected exponential fall-off for the $`L_s`$-dependent effects. We have done first simulations of the phase transition region for QCD using domain wall fermions and $`L_s=24`$. This is not a large enough value for $`L_s`$ to completely remove the exponentially small chiral symmetry breaking effects.
However, we did find the temperature for the QCD phase transition with domain wall fermions to be $`\sim 170`$ MeV, which is consistent with other techniques. We are currently doing more simulations with smaller chiral symmetry breaking effects. The power of the QCDSP computers is vital for these studies. ### 6.2 Weak Matrix Elements Part of testing the standard model of particle physics involves knowing precisely the effects of weak interactions on hadronic states. We must use computational techniques to make the hadrons and then measure the weak interaction effects in these hadrons made on the computer. The process of inserting the weak interaction effects into the hadrons is much more controlled if chiral symmetry is intact. Without chiral symmetry, different effects can become intermixed. In addition, chiral symmetry tells us that the behavior of weak interactions in certain hadronic states becomes small as the quark mass becomes small. Without chiral symmetry to enforce this condition, one ends up calculating a small number by subtracting two large numbers. This is very costly and introduces large errors. In the realm of weak interactions in QCD systems, the topic of CP violation is of particular importance. The standard model allows the combined symmetry of charge conjugation and parity (CP symmetry) to be violated, due to the presence of one complex parameter in the theory. CP violation was first measured experimentally in 1964 by Cronin and Fitch in the kaon system (the kaon is a bound state of a quark and an antiquark, one of which carries the strange flavor). This measurement of CP violation through what is called mixing has recently been joined by a new experimental announcement of CP violation in decays. Without detailed QCD calculations, one cannot know if both effects are consistent with a single value for the complex parameter in the standard model. The first work on weak matrix elements with domain wall fermions was done by who calculated a parameter related to CP violation by mixing for quenched QCD. (This had previously been done by other groups using Wilson and staggered fermions.) They found that for moderately small lattice spacings, an $`L_s`$ of $`16`$ effectively restored chiral symmetry for the domain wall fermions. In a joint work using the QCDSP computer at the RIKEN-BNL Research Center, a collaboration of the RIKEN-BNL, BNL and Columbia lattice groups (which includes the authors of ) is using domain wall fermions to calculate the CP violation in mixing and decays, for quenched QCD. CP violation in decays has been worked on for some time using other fermion formulations. The lack of the full symmetries of the theory has made these calculations very difficult, a problem which is solved by the domain wall fermions. The domain wall fermions make the calculation many times more computationally expensive, but may solve enough other problems that a final answer is possible. We are anxiously awaiting the completion of this calculation. ## 7 Conclusions The QCDSP computer is a very cost-effective computer for calculations in lattice QCD. This computer was designed and constructed by a small group of people, primarily physicists, over five years with a total parts cost of $`4`$ million dollars, including research and development. (Salaries add an additional $`1.5`$ million dollars to the cost.) The final machine has a cost performance of about $10/Megaflops and won the 1998 Gordon Bell prize in the cost performance category at SC ’98 in Orlando, Florida.
Including the Columbia and RIKEN-BNL computers, 20,480 processors with a peak speed of over 1 Teraflops are available for QCD calculations. Physicists at Columbia, BNL and RIKEN-BNL are aggressively using these machines to study QCD, focusing on using the new domain wall fermion formalism. Calculations for QCD thermodynamics and weak matrix elements, among others, are well underway. Acknowledgements The QCDSP computer was developed with funds provided by the United States Department of Energy. Funds for the 0.4 Teraflops QCDSP computer were provided by the US DOE, while the 0.6 Teraflops computer was funded by the RIKEN-BNL Research Center. The work discussed here is the cumulative effort of many individuals over a number of years. They are: Ping Chen, Norman Christ, George Fleming, Tim Klassen, Robert Mawhinney, Gabi Siegert, ChengZhong Sui, Pavlos Vranas, Lingling Wu and Yuri Zhestkov; Igor Arsenin, Dong Chen, Chulwoo Jung, Adrian Kaehler, Yubing Luo and Catalin Malureanu; Alan Gara and John Parsons; Michael Creutz, Chris Dawson and Amarjit Soni; Tom Blum, Shigemi Ohta, Shoichi Sasaki and Matthew Wingate; Robert Edwards and Tony Kennedy (now at Edinburgh); Greg Kilcup; Jim Sexton; and Sten Hanson.
no-problem/0001/astro-ph0001341.html
ar5iv
text
# Radio pulsar death line revisited: is PSR J2144-3933 anomalous? ## 1. Introduction Copious pair production in pulsar inner magnetospheres has long been conjectured as an essential condition for the radio emission of pulsars since the pioneering work of Sturrock (1971)<sup>1</sup><sup>1</sup>1Although some authors argued that pair production might not be an essential condition for pulsar radio emission (e.g. Weatherall & Eilek 1997), no such pairless radio emission theory is fully constructed.. Generally speaking, pair production can create a dense plasma which may allow various coherent instabilities to grow, so as to account for the high brightness temperature observed from the pulsars. The secondary pairs (rather than primary particles) with $`\gamma 10^210^4`$ can produce typical radio-band emission within various models (e.g. Ruderman & Sutherland 1975, hereafter RS75; Melrose 1978; Qiao & Lin 1998). As a consequence, the so-called radio pulsar “death line” is defined as a line in a two-dimensional pulsar parameter phase space ($`P\dot{P}`$ diagram, $`PB_s`$ diagram, or $`P\mathrm{\Phi }`$ diagram, where $`P`$ is the pulsar period, $`\dot{P}`$ the pulsar spin-down rate, $`B_s`$ the surface magnetic field, and $`\mathrm{\Phi }`$ the polar cap potential), which separates the pulsars which can support pair production in their inner magnetospheres from those which cannot (RS75; Arons & Scharlemann 1979, hereafter AS79; Chen & Ruderman 1993, hereafter CR93; Rudak & Ritter 1994; Qiao & Zhang 1996; Björnsson 1996; Weatherall & Eilek 1997; Arons 1998, 2000). Present pulsar death line theories require some degree of anomalous field line configurations (multipole component or offset dipole) to interpret known pulsar data. However, a recently discovered long-period (8.5s) pulsar PSR J2144-3933 (Young, Manchester & Johnston 1999) is clearly located well beyond the conventional death valley (CR93), unless special neutron star equation-of-state or even ad hoc magnetic field configurations are assumed. This challenges the widely accepted belief that pair production is essential for pulsar radio emission. Arons (1999) found that this pulsar is located within the death valley of an inverse Compton controlled space-charge-limited flow model with frame-dragging included, but some degree of the point dipole offset is needed. ## 2. Death lines in various models The death line has been defined (e.g. RS75) by the condition that the potential drop across the accelerator ($`\mathrm{\Delta }V`$) required to produce enough pairs per primary to screen out the parallel electric field is larger than the maximum potential drop ($`\mathrm{\Phi }_{\mathrm{max}}`$) available from the pulsar, in which case no secondary pairs would be produced <sup>2</sup><sup>2</sup>2Note that there are some additional criteria to constrain the death lines for the millisecond pulsars (Rudak & Ritter 1994; Qiao & Zhang 1996; Björnsson 1996) and the pulsars with strong surface magnetic fields (Baring & Harding 1998), and we will not address them in this paper.. It is worth noting that pulsar death lines are actually model-dependent. 
Besides the model-dependent $`\mathrm{\Phi }_{\mathrm{max}}`$ (see and ), the form of $`E_{}`$ within the accelerator (which depends on the boundary conditions and on whether the general relativistic frame-dragging effect is taken into account), the typical energy of the $`\gamma `$-ray photons (which depends on whether their origin is curvature radiation (CR) or inverse Compton scattering (ICS)), and the strength of the perpendicular magnetic field the $`\gamma `$-ray photon encounters (which depends on the field strength and the field line curvature near the neutron star surface) can change $`\mathrm{\Delta }V`$ considerably and alter the death lines. Furthermore, the obliquity of the pulsar (which changes $`E_{}`$ and $`\mathrm{\Phi }_{\mathrm{max}}`$) and the equation-of-state of the neutron star (which changes the moment of inertia $`I=10^{45}I_{45}`$ and the radius $`R=10^6R_6`$ of the star) will also influence the location of the death lines. As a result, the phase space for pulsars to die should be a “valley” rather than a single line. CR93 defined a death valley, within the framework of the RS75 vacuum gap model, as the phase space range between the death line of the dipolar field configuration and the death line of some special multipolar field configurations. Here we will also adopt such a death valley, and regard it as also including the scatter of obliquities for different pulsars<sup>3</sup><sup>3</sup>3Strictly speaking, there could be different death lines for pulsars with different obliquities. The death lines could be quite different for the extreme cases of aligned and orthogonal rotators.. Modification of the equation-of-state or of the polar cap radius will also modify the death valleys systematically. The surface magnetic field for the star-centered dipolar configuration is $`B_d=6.4\times 10^{19}(P\dot{P})^{1/2}I_{45}^{1/2}R_6^{-3}`$ G regardless of the internal field geometry (Shapiro & Teukolsky 1983; Usov & Melrose 1995). Thus the only model dependence of the dipolar surface field is the offset of the dipole center from the star center. Two subgroups of inner gap models were proposed by adopting different boundary conditions. The vacuum-like gap model (V model, RS75) assumes strong binding of ions at the neutron star surface ($`E_{}(z=0)\ne 0`$), while the space-charge-limited flow model (SCLF model, AS79) assumes free emission of particles from the surface ($`E_{}(z=0)=0`$). Both of these models were originally proposed by assuming that CR of the primary particles is the main mechanism to create $`\gamma `$-ray seeds to ignite the pair production cascades, and by neglecting general relativistic effects. They were improved later by different authors to include ICS and the inertial frame-dragging effects. The role of frame-dragging in pulsar physics was first explored by Muslimov & Tsygan (1992, hereafter MT92) and updated by Muslimov & Harding (1997, hereafter MH97) and Harding & Muslimov (1998, hereafter HM98) within the framework of the SCLF model.
MT92 noted that since stellar rotation actually drags the local inertial frame (LIF) to rotate with an angular velocity $`\mathrm{\Omega }_{\mathrm{LIF}}\simeq \mathrm{\Omega }_{}\kappa _g(R/r)^3`$ ($`\mathrm{\Omega }_{}`$ is the angular velocity of the star), the electric field required to bring a charged particle into corotation is weaker than that in the flat spacetime, since such a field only needs to compensate the angular velocity difference between $`\mathrm{\Omega }_{}`$ and $`\mathrm{\Omega }_{\mathrm{LIF}}`$ rather than the difference of $`\mathrm{\Omega }_{}`$ and the angular velocity at infinity (which is 0). As a result, near the star surface, the Goldreich-Julian density becomes<sup>4</sup><sup>4</sup>4For an explicit expression for $`\eta _R`$, see eqs., and of HM98. $$\eta _R\simeq \frac{(𝛀_{}-𝛀_{\mathrm{LIF}})\cdot 𝐁}{2\pi c\alpha }\simeq \frac{𝛀_{}\cdot 𝐁}{2\pi c\alpha }\left[1-\kappa _g\left(\frac{R}{r}\right)^3\right],$$ (1) where $`\kappa _g=(r_g/R)(I/MR^2)\simeq 0.15-0.27`$ (HM98; we will adopt a typical value of 0.15 hereafter), $`\alpha =(1-r_g/R)^{1/2}\simeq 0.78`$ is the redshift factor, $`r_g`$ is the gravitational radius, and $`M`$ is the mass of the neutron star. ### 2.1. Vacuum gap model (V model) In principle, pulsar $`E_{}`$ arises from the deviation of the local charge density ($`\eta `$) from the Goldreich-Julian density ($`\eta _R`$). If the binding energy of the positive ions is large enough to prevent the ions from thermionic or field emission ejection, a vacuum-like gap (RS75) will form right above the neutron star surface, in which the charge depletion is very large (of the order of $`\eta _R`$ itself) so that a very strong $`E_{}`$ is built up right above the surface. This picture was questioned since later calculations showed that the ion binding energy is actually not high enough (for recent reviews, see, e.g. Usov & Melrose 1995). Recent observations indicate that some pulsars may favor the existence of such vacuum gaps (Vivekanand & Joshi 1999; Deshpande & Rankin 1999), and some ideas to solve the binding energy problem have been proposed (e.g. Xu, Qiao & Zhang 1999). The inclusion of ICS in such a model was carried out by Zhang & Qiao (1996) and Zhang et al. (1997), and the corresponding death line was examined by Qiao & Zhang (1996). The influence of the frame-dragging effect on the model has not been examined before. The maximum potential available for the V model is just the homopolar generator potential, which is the potential difference between the pole and the edge of the polar cap at the star surface and reads $$\mathrm{\Phi }_{\mathrm{max}}(\mathrm{V})\simeq (1-\kappa _g)\frac{B_dR^3\mathrm{\Omega }_{}^2}{2c^2}\simeq 5.6\times 10^{12}(\mathrm{Volts})B_{d,12}P^{-2}R_6^3.$$ (2) Note frame-dragging modifies the flat spacetime result by a factor of $`\zeta =(1-\kappa _g)\simeq 0.85`$, which arises from $`(\mathrm{\Omega }_{}-\mathrm{\Omega }_{\mathrm{LIF}})`$. The solution of the one-dimensional Poisson’s equation (the infinitesimal gap in RS75) then gives $`E_{}(z)=\zeta (2\mathrm{\Omega }_{}B/c)(h-z)`$ and $`\mathrm{\Delta }V=\zeta (\mathrm{\Omega }_{}B/c)h^2`$, which are essentially RS75’s results with the correction factor $`\zeta `$.
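To get a feeling for the numbers entering these expressions, the short sketch below evaluates the star-centered dipole field quoted above and the maximum potential of Eq. (2) (with $`R_6=1`$) for a short-period pulsar and for a slow pulsar like PSR J2144-3933. The period derivatives used are only illustrative round values, not measurements quoted in this paper; the point is simply how steeply $`\mathrm{\Phi }_{\mathrm{max}}`$ falls with period.

```cpp
// Rough numerical illustration of Eq. (2): how the maximum (homopolar)
// potential of the vacuum-gap model falls with period.  The formulas are the
// ones quoted in the text; the (P, Pdot) pairs are illustrative values, with
// the Pdot adopted for PSR J2144-3933 only approximate.
#include <cmath>
#include <cstdio>

// Dipole surface field in units of 10^12 G, from B_d = 6.4e19 (P Pdot)^(1/2) G.
double B12(double P, double Pdot) { return 6.4e19 * std::sqrt(P * Pdot) / 1.0e12; }

// Phi_max(V) ~ 5.6e12 Volts * B_{d,12} * P^{-2}   (R_6 = 1 assumed)
double PhiMaxV(double P, double Pdot) { return 5.6e12 * B12(P, Pdot) / (P * P); }

int main()
{
    struct { const char* label; double P, Pdot; } psr[] = {
        { "young pulsar (P = 0.1 s)",           0.1, 1.0e-14 },
        { "PSR J2144-3933 (P = 8.5 s, approx)", 8.5, 5.0e-16 },
    };
    for (const auto& p : psr)
        std::printf("%-37s  B_d ~ %.1e G   Phi_max(V) ~ %.1e V\n",
                    p.label, 1.0e12 * B12(p.P, p.Pdot), PhiMaxV(p.P, p.Pdot));
    return 0;
}
```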
The gap height depends on three length scales, i.e., $`l_{\mathrm{acc}}`$, the acceleration scale before the primary electron or positron achieves a high enough energy $`\gamma _c`$; $`l_ec[\dot{\gamma }(\gamma _cmc^2)/E_c]^1`$, the mean free path of the electron/positron with $`\gamma _c`$ to emit one $`\gamma `$-ray quanta with energy $`E_c`$; and $`l_{ph}=\chi \rho (B_{cri}/B_s)(2mc^2/E_c)`$, the mean free path of the $`\gamma `$-photon before being absorbed, where $`B_{cri}=m^2c^3/\mathrm{}e4.4\times 10^{13}`$G is the critical magnetic field, and $`\chi 0.1`$ is the key parameter to describe $`\gamma B`$ absorption coefficient (Erber 1966). The electron mean free path $`l_e`$ should not exceed the gap height $`h=l_{\mathrm{acc}}+l_{ph}`$. For the V model, one gets $`\gamma _c=2\zeta (e/mc^2)(\mathrm{\Omega }_{}B/c)l_{\mathrm{acc}}(hl_{\mathrm{acc}}/2)`$ with the form of $`E_{}`$. For the CR-induced cascade model, we have $`E_c=(3/2)(\mathrm{}c/\rho )\gamma _c^3`$, and $`l_e=(9/4)(\mathrm{}c/e^2)\rho \gamma _c^1l_{ph}l_{\mathrm{acc}}h`$. To treat the gap breakdown process, we are actually looking for the minumum of $`h`$, which could be obtained by setting the derivative $`h`$ with respect to $`l_{\mathrm{acc}}`$ to zero. With some approximations, we finally get the gap parameters as $`h=5.0\times 10^3(\mathrm{cm})\zeta ^{3/7}P^{3/7}B_{12}^{4/7}\rho _6^{2/7}`$, and $`\mathrm{\Delta }V=1.6\times 10^{12}(\mathrm{Volts})\zeta ^{1/7}P^{1/7}B_{12}^{1/7}\rho _6^{4/7}`$, which are analogous to RS75 (their eqs., except for the $`\zeta `$ correction), who treated the problem by simply adopting $`hl_{ph}`$. By setting $`\mathrm{\Delta }V=\mathrm{\Phi }_{\mathrm{max}}`$(V), we get the death lines of this model ($`\zeta 0.85`$ has been adopted) (cf. eqs., of CR93) $`\mathrm{log}\dot{P}=(11/4)\mathrm{log}P14.62`$ $`[\mathrm{I}]`$ (3) $`\mathrm{log}\dot{P}=(9/4)\mathrm{log}P16.58+\mathrm{log}\rho _6`$ $`[\mathrm{I}^{^{}}]`$ (4) for the dipolar and the $`\rho R`$, $`B_sB_d`$ multipolar field configuration, respectively. For the resonant ICS-induced gap, we have $`E_c=2\gamma \mathrm{}(eB/mc)`$ (Zhang et al. 1997; HM98), and $`\dot{\gamma }_{res}4.92\times 10^{11}B_{12}^2T_6\gamma _c^1`$ (Dermer 1990), so that $`l_e0.00276\gamma _c^2B_{12}^1T_6^1`$. Since $`l_{\mathrm{acc}}l_el_{ph}`$ for the ICS case, we solve the gap height by setting $`hl_{ph}l_e`$. Also treating $`T_6`$ self-consistently through self-sustained polar cap heating, i.e., $`T=(e\mathrm{\Delta }V\dot{N}/\sigma \pi r_p^2)^{1/4}`$, where $`\dot{N}=(\mathrm{\Omega }B/4\pi e)\pi r_p^2`$ is the polar cap luminosity, and $`r_p`$ is the polar cap radius, we finally get the gap height $`h=2.6\times 10^4(\mathrm{cm})P^{1/7}B_{s,12}^{11/7}\rho _6^{4/7}\zeta ^{1/14}`$, the gap potential $`\mathrm{\Delta }V=4.2\times 10^{13}(\mathrm{Volts})P^{5/7}B_{s,12}^{15/7}\rho _6^{8/7}\zeta ^{6/7}`$, and the surface temperature $`T_6=5.9B_{12}^{2/7}P^{3/7}\rho _6^{2/7}\zeta ^{3/14}`$ (Note that such a high polar cap temperature conflicts with the observations). The death lines of this model are then<sup>5</sup><sup>5</sup>5The typical ICS photon energy adopted here is the one for resonant scattering. In the cases of high temperatures, the scatterings above the resonance with the photons at Planck’s peak may become important (Zhang et al. 1997), and the death lines could be lower. 
$`\mathrm{log}\dot{P}=(2/11)\mathrm{log}P13.07`$ $`[\mathrm{II}]`$ (5) $`\mathrm{log}\dot{P}=(2/11)\mathrm{log}P14.50+(8/11)\mathrm{log}\rho _6`$ $`[\mathrm{II}^{^{}}]`$ (6) for dipolar and multipolar configurations, respectively. ### 2.2. Space-charge-limited flow model (SCLF model) If the charged particles (electrons or ions) can actually be pulled out freely from the neutron star surface (which is favored by ion binding energy calculations), a space-charge-limited flow is a natural picture, with $`E_{}=0`$ at the surface. The $`E_{}`$ at higher altitudes then arises from the small imbalance of the local charge density $`\eta `$ from $`\eta _R`$ (eq.) due to the flow of the charged particles along field lines. The conservation of current requires $`\eta r^3`$, thus the deviation $`(\eta \eta _R)`$ arises from the extra $`r`$-dependence of $`\eta _R`$ (besides $`B`$ declination, which is $`r^3`$). In flat spacetime, this dependence is just the “flaring” of the field lines, on which the AS79’s pioneering SCLF model is based. Since such a deviation is so small, it takes a long length scale for a particle to be accelerated to pair producing energy via a gradual built-up of $`E_{}`$, so that the gap shape is usually narrow and long. The maximum potential available in this model is much smaller than the one available in V models (see footnote 9), so that the death lines in this model are very high (see eq., of AS79). The ICS-induced version of such a SCLF model was presented by Luo (1996). HM98 explicitly studied both CR- and ICS- controlled SCLF accelerators with the frame-dragging effect included, and such a model is very good in interpreting high energy radiation luminosities of the spin-powered pulsars (Zhang & Harding 2000, hereafter ZH00). The inclusion of the general relativistic frame-dragging effect in such SCLF models (MT92; MH97) is essential in two respects. First, besides the $`B`$ declination, $`\eta _R`$ in curved spacetime has an extra $`(R/r)^3`$ dependence (see second term in the bracket of eq.), while the current conservation requirement leads to $`\eta \frac{𝛀_{}𝐁}{2\pi c\alpha }(1\kappa _g)`$ near the surface. As a result, $`E_{}`$ is built up much faster. Secondly, the maximum potential available is much larger than the flat spacetime value (but smaller than that in V models), which is<sup>6</sup><sup>6</sup>6For a full expression of $`\mathrm{\Phi }_{\mathrm{max}}(\mathrm{SCLF})`$, see eq. of HM98. What we adopted here is the $`\mathrm{cos}\chi `$ term, which is much larger than the $`\mathrm{sin}\chi `$ term (the maximum available for the AS79 model). $$\mathrm{\Phi }_{\mathrm{max}}(\mathrm{SCLF})\kappa _g\frac{B_dR^3\mathrm{\Omega }_{}^2}{2c^2}1.0\times 10^{12}(\mathrm{Volts})B_{d,12}P^2R_6^3.$$ (7) As a result, the death lines are considerably lower than the AS79 model. By introducing an upper $`E_{}=0`$ boundary at the pair formation front, HM98 have presented the explicit formalism and detailed numerical simulation of $`E_{}`$ within SCLF accelerators. Unfortunately, simple analytic formulae applicable for all the cases are not available. However, we notice that near the death lines, the height of the accelerators ($`h`$) are all larger than the polar cap radius $`r_{pc}=[\mathrm{\Omega }_{}R/cf(1)]^{1/2}R`$ ($`f(1)1.4`$ is the factor of curved spacetime), and that the $`E_{}`$ in the accelerators has achieved saturation (see ZH00 for discussions of different regimes of acceleration $`E_{}`$ and the criterion to separate the two regimes, their eq.). 
In such a case, we can adopt the following approximate acceleration picture: near the star surface, $`E_{}`$ grows approximately linearly with respect to the height $`z`$ (thus is analogous to V model in this regime) with the form $`E_{}(1)[3\kappa _g/(1ϵ)](\mathrm{\Omega }_{}B/c)z157B_{12}P^1z`$ (eq.A of HM98), where $`ϵ=r_g/R0.4`$, and saturates above $`zr_{pc}/3`$ at a value of $`E_{}(2)(3\kappa _g/2)(𝛀_{}𝐁/c)r_{pc}(r_{pc}/R)(1\xi ^2)3.5\times 10^3B_{12}P^2R_6^2`$ (eq.\[A5\] of HM98), where $`f(1)=1.4`$, $`\xi =0.7`$ have been adopted. To study the death lines, we assume that the accelerators are located at the surface, though HM98 have argued that accelerators could be $`(0.51)R`$ above the surface in young pulsars due to anisotropy of the upward versus downward ICS. In old pulsars, the returning positron fraction is smaller, and the lower pair formation front may not exist, so that both the CR- and the ICS- induced SCLF accelerators could be formed at the surface. For the CR-induced model, we again have $`l_el_{\mathrm{acc}}l_{ph}`$. Since $`h>r_{pc}`$ near the death lines (ZH00), we can adopt $`\gamma _c(e/mc^2)E_{}(2)l_{\mathrm{acc}}`$. Following the same procedure as for the V model, but using $`\mathrm{\Delta }V=E_{}(2)h`$ (recall the quadratic form of V model and note the difference), we get $`h=3.3\times 10^5(\mathrm{cm})P^{3/2}B_{12}^1\rho _6^{1/2}R_6^{3/2}`$ and $`\mathrm{\Delta }V=3.5\times 10^{11}(\mathrm{Volts})P^{1/2}\rho _6^{1/2}R_6^{1/2}`$. Equating $`\mathrm{\Delta }V`$ with $`\mathrm{\Phi }_{\mathrm{max}}`$(SCLF) (eq.), we get the death lines $`\mathrm{log}\dot{P}=(5/2)\mathrm{log}P14.56`$ $`[\mathrm{III}]`$ (8) $`\mathrm{log}\dot{P}=2\mathrm{log}P16.52+\mathrm{log}\rho _6`$ $`[\mathrm{III}^{^{}}]`$ (9) for dipolar and multipolar field configurations, respectively. For the resonant ICS-induced SCLF model, we again have $`l_{\mathrm{acc}}l_el_{ph}`$ and $`h>r_{pc}`$. This brings an important difference with the CR-induced case, that is, one should adopt the linear $`E_{}`$ form, e.g., $`\gamma _c=(e/mc^2)_0^{l_{\mathrm{acc}}}E_{}(1)𝑑z`$ to describe the acceleration phase, but adopt the saturated $`E_{}`$ form to describe the final potential, i.e., $`\mathrm{\Delta }V=E_{}(2)h`$. Since the SCLF model has a much lower charge deficit than the V model, the number of reversed positrons required to screen the field is only a factor of $`f\left|(E_{}/r)/(8\pi \rho _{_{GJ}})\right|`$ of the Goldreich-Julian density (AS79; ZH00), so that one gets less polar cap heating, i.e., $`T=(e\mathrm{\Delta }V\dot{N}f/\sigma \pi r_p^2)^{1/4}`$, in this model. For the saturated accelerators, this factor is roughly $`f5.7\times 10^5P^1`$ near the surface (eq. of ZH00). A self consistent treatment of $`T`$ finally leads to $`h=9.7\times 10^4(\mathrm{cm})P^{4/13}B_{12}^{22/13}\rho _6^{8/13}R_6^{2/13}`$, $`\mathrm{\Delta }V=1.0\times 10^{11}(\mathrm{Volts})P^{22/13}B_{12}^{9/13}\rho _6^{8/13}R_6^{24/13}`$, and $`T_6=0.11P^{12/13}B_{12}^{1/13}\rho _6^{2/13}R_6^{6/13}`$, so that the death lines are $`\mathrm{log}\dot{P}=(3/11)\mathrm{log}P15.36`$ $`[\mathrm{IV}]`$ (10) $`\mathrm{log}\dot{P}=(7/11)\mathrm{log}P16.79+(8/11)\mathrm{log}\rho _6.`$ $`[\mathrm{IV}^{^{}}]`$ (11) In this model, the polar cap temperature is sustained by the thermal energy released from the crust deposited by the reverse flow of positrons and high energy photons. The primary beam fluctuations (typical timescale $`h/c3\times 10^6`$s) may result in discontinuous illumination of the polar cap. 
However, because of the “inertia” of photon diffusion from the relatively deep layers to the photosphere (e.g. Eichler & Cheng 1989), this process is unlikely to affect the average polar cap temperature. For example, it takes $`6\times 10^3T_5^{1.5}`$ s for the photons to diffuse up through the surface from a depth of 100 $`\mathrm{g}\mathrm{cm}^{-2}`$. ## 3. Conclusion and discussions The death lines of different models are plotted in Fig.1. A remarkable fact is that, though PSR J2144-3933 is beyond the death valleys of both the CR- and ICS-induced V models and the CR-induced SCLF model, it is well above the star-centered dipolar death line of the ICS-induced SCLF model. Thus one does not need to introduce a special neutron star equation-of-state or anomalous field configurations at all to maintain strong pair formation in this pulsar. The ICS-SCLF death lines of Fig.1 imply a very large phase space for radio emission in the long period regime, thought previously to be radio forbidden. In fact, Young et al. (1999) argued that the population of pulsars with parameters similar to PSR J2144-3933 is very large, since the detectability of such pulsars is very small due to their small polar caps. We expect more pulsars to be detected in this region. A general trend in Fig.1 is that death lines in the ICS-induced models have much flatter slopes than those in the CR-induced models. This arises from the very different $`P`$- and $`B_s`$-dependences of both $`l_{ph}`$ and $`l_e`$ for the resonant ICS processes. The saturation of $`E_{}`$ in long period pulsars is critical in lowering the death lines in the SCLF models relative to those of the V models<sup>7</sup><sup>7</sup>7V models also have $`E_{}`$ saturation, but only near the death lines when $`hr_{pc}`$ (RS75). SCLF models, however, can have $`E_{}`$ saturation much farther away from the death lines and thus allow a greater distance for pair production before $`\mathrm{\Phi }_{\mathrm{max}}`$ is reached.. The difference is more prominent in ICS-induced models. For the ICS-SCLF model, our death lines are lower than the ones derived in Arons (2000). The difference might be due to the different polar cap temperature treatments. We thank the referee for good comments and suggestions and Demosthenes Kazanas, G.J.Qiao, R.X.Xu, and Z.Zheng for interesting discussions or helpful comments.
no-problem/0001/hep-ph0001070.html
ar5iv
text
# Preprint MPI-PhT/2000-03Dark Matter at the Galactic Center ## Abstract Particle dark matter near the galactic center is accreted by the central black hole into a dense spike, strongly enhancing its annihilation rate. Searching for its annihilation products may give us information on the presence or absence of a central cusp in the dark halo profile. This is a summary of a paper of ours, ref. , in which we use the absence of neutrino signals from the galactic center to bound the steepness of a possible central cusp in a dark matter halo made of neutralinos. This summary updates the bounds published in by using the current upper limit by the MACRO collaboration on the neutrino emission from the galactic center.<sup>1</sup><sup>1</sup>1Thanks to Francesco Ronga for communicating the new upper limit. The evidence is mounting for a massive black hole at the galactic center. Ghez et al. have confirmed and sharpened the Keplerian behavior of the star velocity dispersion in the inner 0.1 pc of the galaxy found by Eckart and Genzel . These groups estimate the mass of the black hole to be $`M=2.6\pm 0.2\times 10^6M_{}`$. If cold dark matter is present at the galactic center, as in current models of the dark halo, it is accreted by the central black hole into a dense spike. Particle dark matter then annihilates strongly inside the spike, making it a compact source of photons, electrons, positrons, protons, antiprotons, and neutrinos. The spike luminosity depends on the density profile of the inner halo: halos with finite cores have unnoticeable spikes, while halos with inner cusps may have spikes so bright that the absence of a detected neutrino signal from the galactic center already places interesting upper limits on the density slope of the inner halo. Figure 1 illustrates the two classes of halo models. The “empirical” models are fit to the data and have a central region of constant density, called the core. The “theoretical” models arise from the results of numerical N-body simulations and have a power law density profile in the inner region, dubbed the cusp. Actually, even the highest resolution results presently available extend inwards only to $`1`$ kpc, but we have boldly extrapolated the cusp to the inner parsec. The effect of the central black hole on the dark matter distribution in its neighborhood can be found in the following way. Before the formation of the black hole, the dark matter density within its radius of influence ($`0.2`$ pc) can be assumed to be either constant or a power law of index $`\gamma `$ ($`\rho r^\gamma `$). The dark matter density after the formation of the black hole is obtained by assuming that the black hole grows slowly and hence the dark matter distribution evolves adiabatically. Conservation of the three adiabatic invariants – phase-space density, angular momentum, and radial action – then gives the final dark matter density. A power-law density profile results around the black hole, with an index $`\gamma _{\mathrm{spike}}`$ that depends on the initial index $`\gamma `$ and on the analytical properties of the initial profile, i.e. core or cusp. We call “spike” this density enhancement close to the black hole, to distinguish it from the cusp further out. The maximum density in the spike is reached either at the small distance of $`10`$ Schwarzschild radii within which dark matter is captured by the black hole, or at a larger radius where the annihilation time becomes equal to the age of the black hole and within which the density is constant. 
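The plateau density inside that radius follows from setting the annihilation time $`m_\chi /(\rho \sigma v)`$ equal to the black-hole age, i.e. $`\rho _{\mathrm{core}}\simeq m_\chi /(\sigma vt_{\mathrm{BH}})`$. The short sketch below evaluates this for one representative parameter set; the neutralino mass, annihilation cross section and black-hole age adopted there are illustrative assumptions, not values taken from this paper.

```cpp
// Order-of-magnitude sketch of the "annihilation plateau": the density at
// which the annihilation time m_chi/(rho <sigma v>) equals the black-hole age,
// so that the spike density saturates.  The particle mass, cross section and
// age used here are illustrative assumptions, not values taken from the text.
#include <cstdio>

int main()
{
    const double m_chi_GeV = 100.0;                      // assumed neutralino mass
    const double m_chi_g   = m_chi_GeV * 1.783e-24;      // grams
    const double sigma_v   = 3.0e-26;                    // cm^3/s, typical thermal relic value
    const double t_bh_yr   = 1.0e10;                     // assumed age of the black hole
    const double t_bh_s    = t_bh_yr * 3.156e7;

    const double rho_core_g   = m_chi_g / (sigma_v * t_bh_s);   // g/cm^3
    const double rho_core_GeV = rho_core_g / 1.783e-24;         // GeV/cm^3
    const double g_cm3_to_Msun_pc3 = 1.477e22;
    std::printf("plateau density ~ %.1e g/cm^3 = %.1e GeV/cm^3 = %.1e Msun/pc^3\n",
                rho_core_g, rho_core_GeV, rho_core_g * g_cm3_to_Msun_pc3);
    return 0;
}
```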
Examples of spike density profiles are given in . Annihilation signals, which increase with the square of the density, are enhanced dramatically. The enhancement increases with increasing initial slope $`\gamma `$, and this allows an upper limit to be set on the value of $`\gamma `$ given an upper limit on some of the annihilation signals from the galactic center. In we considered the neutrino emission. High energy neutrinos from the galactic center could be detected with a neutrino telescope in the Northern hemisphere through their conversion to muons in a charge current interaction in the rock surrounding the detector. The current bound on the neutrino emission from the galactic center is 1104 neutrino-induced muons $`>1`$ GeV per km<sup>2</sup> per year. We impose this bound on the emission expected from neutralino dark matter in the minimal supersymmetric model, calculated using the DarkSUSY code . We use the database of points in supersymmetric parameter space built in refs. , namely the 35121 points in which the neutralino is a good cold dark matter candidate, in the sense that its relic density satisfies $`0.025<\mathrm{\Omega }_\chi h^2<1`$. The upper limit comes from the age of the Universe, the lower one from requiring that neutralinos are a major fraction of galactic dark halos. For each point in parameter space, we can then obtain a separate upper bound $`\gamma _{\mathrm{max}}`$ on the inner halo slope. These bounds are plotted in figure 2a. (Plotted values of $`\gamma _{\mathrm{max}}>2`$ are unphysical extrapolations but are shown for completeness.) Present bounds are of the order of $`\gamma _{\mathrm{max}}0.5`$, right in the ballpark of current results from N-body calculations. Future neutrino telescopes observing the galactic center could probe the inner structure of the dark halo, or indirectly find the nature of dark matter. For example, with a muon energy threshold of 25 GeV, the neutrino flux from the spike after imposing the current constraints could still be over 2 orders of magnitude above the atmospheric background (Fig. 3), allowing to probe $`\gamma `$ as low as 0.05 (Fig. 2b). In conclusion, we have shown that if the galactic dark halo is cusped, as favored in recent N-body simulations of galaxy formation, a bright dark matter spike would form around the black hole at the galactic center. A search of a neutrino signal from the spike could either set upper bounds on the density slope of the inner halo or clarify the nature of dark matter.
no-problem/0001/hep-ph0001052.html
ar5iv
text
# 𝑆⁢𝑂⁢(10) and Large 𝜈_𝜇-𝜈_𝜏 Mixing ## References 1. Weinberg, S. Trans. N.Y. Acad. Sci. 38, 185 (1977); Wilczek, F. and Zee, A. Phys. Lett. B70, 418 (1977); Fritzsch, H. Phys. Lett. B70, 436 (1977). 2. Fritzsch, H. Phys. Lett. 73B, 317 (1978). 3. Albright, C.H. and Barr, S.M. Phys. Rev. D58, 013002 (1998). (9712488) 4. Albright, C.H., Babu, K.S. and Barr, S.M. Phys. Rev. Lett. 81, 1167 (1998) (hep-ph/9802314). Albright, C.H. and Barr, S.M. Phys. Lett. B452, 287 (1999). 5. Albright, C.H. and Barr, S.M. Phys. Lett. B461, 218 (1999). 6. Sato, J. and Yanagida, T. Phys. Lett. B430, 127 (1998). (hep-ph/9710516) 7. Irges, N., Lavignac, S. and Ramond, P. Phys. Rev. D58, 035003 (1998). (hep-ph/9802334) 8. Hagiwara, K. and Okamura, N. Nucl. Phys. B548, 60 (1999) (hep-ph/9811495). Altarelli, G. and Feruglio, F. Phys. Lett. B451, 388 (1999) (hep-ph/9812475). Berezhiani, Z. and Rossi, A. hep-ph/9907397. 9. Babu, K.S., Pati, J. and Wilczek, F. hep-ph/9812538. 10. Barr, S.M. and Raby, S. Phys. Rev. Lett. 79, 4748 (1997).
no-problem/0001/astro-ph0001049.html
ar5iv
text
# The Rise Times of High and Low Redshift Type Ia Supernovae are Consistent ## 1 Introduction Two independent research groups have presented compelling evidence for an accelerating universe from the observation of high-redshift Type Ia supernovae (SNe Ia) (Perlmutter et al., 1999; Riess et al., 1998). These findings have such important ramifications for cosmology that every effort must be made to thoroughly test the calibrated standard candles on which they are based. Indeed, these groups, and others, are pursuing additional observations at both high- and low-redshift to confirm these results. There are programs in place aimed at reducing the statistical errors, testing systematic errors, limiting the amount of absorption due to grey dust (Aguirre, 1999), and searching for signs of evolution as a function of redshift in SNe Ia. Recently Riess et al. (1999a) attempted to examine the question of whether the rise times of SN Ia evolve. They used new low-redshift SNe Ia light-curve photometry from Riess et al. (1999b) to compare the mean rise time of these SNe Ia to a preliminary rise time for high-redshift SNe Ia given in a conference abstract by Groom (1998) and based on a composite light curve derived from Supernova Cosmology Project (SCP) observations. Riess et al. noted a 5.8-$`\sigma `$ difference between the rise times from the low-redshift data and from the Groom (1998) preliminary analysis of high-redshift data, with the high-redshift supernovae having shorter rise times by 2.4 days. Based on this result, they suggested the possibility that SNe Ia undergo sufficient evolution to account for what has been interpreted as evidence for an accelerating universe. In what follows, we address major shortcomings of these earlier analyses which fundamentally alter the conclusion of Riess et al. (1999a). Specifically, the analysis method used in Groom (1998) to produce a high-redshift rise-time estimate is very different than that used to produce the low-redshift rise-time estimate of Riess et al. (1999a). Furthermore, both analyses neglected correlated uncertainties in the light-curve fit parameters, and amongst the light-curve data points, so neither of these analyses is complete. We also examine, in $`\mathrm{\S }3`$, the role of light-curve sampling differences between the low-redshift and high-redshift SN Ia observations and how they can conspire with systematic deviations from the fitted reference template — seen for normal SNe Ia — to shift the inferred rise time. In $`\mathrm{\S }4`$ we briefly discuss the (small) impact on the cosmological application of SNe Ia resulting from light-curve variations. We conclude in $`\mathrm{\S }5`$ with a summary of our results and a discussion intended to help guide future work on the question of whether SNe Ia evolve. ## 2 Statistical Analysis of SNe Ia Rise Times ### 2.1 Description of the Problem Figure 1 illustrates the full SN Ia template, $`\psi (t)`$, normally used by the SCP, which is a modified version of the Leibundgut template (Leibundgut, 1988; Perlmutter et al., 1997). The light-curve fitting parameters are the peak flux, $`f_{max}`$, time of maximum, $`t_{max}`$, and light curve stretch, $`s`$. (Note that all time dependent quantities refer to the rest frame of the supernova.) Goldhaber (1998) has demonstrated the remarkable fact that the stretch method applies to the rising portion of SN Ia light curves as well as it applies to the declining portion (up to +25 days after maximum) to better than 2% of the peak flux. 
This has been confirmed for nearby SNe Ia by Riess et al. (1999b). One can represent the flux light curve, $`f(t)`$, as follows: $$f(t)=f_{max}\psi ((t-t_{max})/s)$$ This approach works well in the $`U`$, $`B`$ and $`V`$ bands over the range $`-20\mathrm{days}<t-t_{max}<+25\mathrm{days}`$ (see both Perlmutter et al. (1999) and Perlmutter et al. (1997) for a full explanation of the use of this approach). A meaningful comparison of rise times for low- and high-redshift supernovae requires that both datasets be fit with the same template, and that the fits be performed in a manner which fully accounts for the covariance between the light-curve fitting parameters and the calculated rise time. For the high-redshift data, accounting for covariance in the light-curve fitting parameters is especially important since the uncertainties on individual data points are relatively large. Such uncertainties allow the fitted date of maximum light, $`t_{max}`$, the peak brightness, $`f_{max}`$, and the light-curve width, $`s`$, to be changed in compensating ways to yield similarly good fits. Thus, these parameters are correlated, and since determination of the rise time or explosion date, $`t_{exp}`$, involves both $`s`$ and $`t_{max}`$, it is incorrect to fit for these parameters while holding $`s`$ and $`t_{max}`$ fixed. Take for example the case where the fitted value of $`t_{max}`$, $`t_{max}^{}`$, is too early by 1 day. The fitted value of $`s`$, $`s^{}`$, will suffer a compensating increase by roughly $`1/15`$ in an effort to fit the data on the fast-declining, well-sampled portion of the light curve at $`+10<t-t_{max}<+20\mathrm{days}`$. The effective stretch-corrected epoch, $`t_s=(t-t_{max})/s`$, of a point nominally at $`t-t_{max}=-20`$ days and for $`s=1`$ would be incorrect by: $$\mathrm{\Delta }t_s=(t-t_{max})/s-(t-t_{max}^{})/s^{}$$ $$=\frac{-20}{1.00}-\frac{-19}{1.07}=-2.2\mathrm{days}.$$ Likewise, if $`t_{max}^{}`$ were 1 day after $`t_{max}`$, $`s^{}`$ would be smaller than the true $`s`$, changing $`\mathrm{\Delta }t_s`$ by roughly $`+2.5`$ days. This is the principal mechanism by which uncertainties in the light-curve fit parameters propagate into increased uncertainty in SN Ia rise times. (Our Monte Carlo simulations in §3 bear this out.) If the uncertainties in $`t_{max}`$ and $`s`$ had simply been propagated as if they were independent, the assigned uncertainty would be 1.7 days, and the correlated nature of the uncertainties would be lost. It is true that a point at $`t-t_{max}=-20`$ days may also play some role in constraining $`s`$. However, for the datasets considered here the observations on the rising portion of the light curves are generally much less certain than those on the declining portion. The analyses presented in Groom (1998) and Goldhaber (1998) were designed to test the efficacy of the stretch technique when applied to the rising portion of the light curves of high-redshift supernovae, and to attempt to improve that portion of the SCP light curve template. The high-redshift data from the SCP were aligned to stretch-corrected epochs, $`t_s`$, using $`t_{max}`$ and $`s`$ for each supernova determined from individual light-curve fits without exclusion of data from any light-curve epoch. Then a $`t^2`$ rise-time model was fit to the ensemble pre-max data, with the final result quoted for $`t^2`$ fits covering rest-frame epochs $`-21`$ to $`-10`$ days with respect to $`t_{max}`$.
None of the uncertainty due to the light-curve fitting parameters was propagated into the final quoted rise-time uncertainty (Goldhaber, 1999, private communication). The resulting $`t^2`$ fit was then used to develop a revised template, and the individual SNe Ia light curves were then re-fit to this revised template. Riess et al. (1999a) analyzed the low-redshift data very differently: they aligned their low-redshift data using $`t_{max}`$ and $`s`$ for each supernova as in the preliminary high-redshift analysis, but they used only data from $`-10`$ to $`+35`$ days to fit the light curves. After aligning the light curves, a $`t^2`$ rise-time model was fit to the ensemble pre-max data, with the final result quoted for $`t^2`$ fits covering rest-frame epochs $`-23`$ to $`-10`$ days. Following Riess et al. (1999b), the uncertainty in $`t_{max}`$ and $`s`$ was accounted for in Riess et al. (1999a) by increasing the uncertainties on the stretch-corrected light-curve photometry points. The modest contribution due to correlated uncertainties was not included. Both of these studies fixed $`t_{max}`$ and $`s`$ for the individual SNe Ia before fitting the $`t^2`$ model from which explosion dates were inferred. They propagated the uncertainties in the light-curve fit parameters in an incomplete and approximate way. Since this approach does not allow each individual supernova’s light-curve fit parameters, $`f_{max}`$, $`t_{max}`$, and $`s`$, to adjust to give the best fit as different rise times are tested, the uncertainties quoted in these studies are likely to be underestimates. In addition, since the two studies fit to different time intervals of data, a comparison of the central values may not be self-consistent. ### 2.2 Fitting Method The most assumption-free means of accounting for how the uncertainties in the fits to individual SNe Ia light curves affect the value and uncertainty of the rise time is to explicitly test various rise times to see how well the SNe Ia are able to adjust to give fits of similar quality. This is more accurate than, and avoids difficulties associated with, attempting to propagate uncertainties based on the covariance matrix determined at the best-fit value when dealing with complex parameter probability spaces, such as those which occur for some SNe Ia light curves dealt with here. This approach requires that a family of templates with different rise times be defined and fit to the entire photometric dataset for each SN Ia. Unfortunately, at present, very little light-curve data are available for determining a suitable early-epoch template for a SN Ia. Therefore we have constructed a grid of templates consisting of $`t^2`$ models starting with zero flux at an explosion epoch, $`t_{exp}`$, and joined to the modified Leibundgut template at epoch $`t_{join}`$. A $`t^2`$ model can be justified under the conditions of uniform expansion and constant effective temperature from simple physics (see also Arnett (1982)). Two examples from this family of $`t^2`$-model, $`t_{exp}`$, $`t_{join}`$ templates are shown in Figure 1, with the epochs $`t_{exp}`$ and $`t_{join}`$ labeled. These can be compared to the modified Leibundgut template, which is known to be a reasonable approximation to the light curves of many SNe Ia (with the timescale stretched or contracted).
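A minimal sketch of this two-parameter template family (ours, for illustration; the smooth late-time shape below is only a stand-in for the modified Leibundgut template, which is not reproduced here) makes the construction explicit: zero flux before $`t_{exp}`$, a $`t^2`$ rise whose normalization is fixed by continuity at $`t_{join}`$, and the late-time template thereafter.

```python
import numpy as np

def rise_template(t_s, t_exp, t_join, base):
    """A (t_exp, t_join) member of the template family: zero flux before the
    explosion epoch, a t^2 rise whose amplitude is fixed by continuity with
    the late-time template `base` at t_join, and `base` itself afterwards.
    `base` stands in for the modified Leibundgut template (unit peak, t_s in
    rest-frame days relative to maximum)."""
    t_s = np.asarray(t_s, dtype=float)
    rise = base(t_join) * (np.clip(t_s - t_exp, 0.0, None) / (t_join - t_exp)) ** 2
    return np.where(t_s < t_join, rise, base(t_s))

# purely illustrative stand-in for the modified Leibundgut shape near peak
def toy_base(t_s, width=13.0):
    return np.exp(-0.5 * (np.asarray(t_s, dtype=float) / width) ** 2)

# e.g. a template with a 20-day rise joined to the late-time shape at -10 d
ts = np.linspace(-25.0, 25.0, 11)
print(np.round(rise_template(ts, -20.0, -10.0, toy_base), 3))
```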
Note that the use of $`t_{exp}`$, $`t_{join}`$ to describe the early-epoch light curve is simply a reparameterization of the $`\alpha ,t_{exp}^2`$ models (i.e., $`f(t)=\alpha (t-t_{exp})^2`$) used in previous studies, with the added constraint of continuity where the $`\alpha ,t_{exp}^2`$ model ends and the modified Leibundgut template begins. Riess et al. (1999a) did not impose a continuity constraint since the fitting to the early stretch-corrected light curve with an $`\alpha ,t_{exp}^2`$ model was performed after (some portion of) the original light curve was fit with another template. Groom (1998) and Goldhaber (1998) have an implicit continuity constraint in that they mated their best-fit $`t_{exp}^2`$ model to the remainder of their light curve when constructing each new template. In this paper the fit for the rise time and the overall light-curve parameters is performed simultaneously. An added benefit of our parameterization is that $`t_{exp}`$, $`t_{join}`$ are more nearly orthogonal than $`\alpha ,t_{exp}^2`$. This is because the already-established modified Leibundgut template provides a strong constraint on the amplitude of an $`\alpha `$, $`t_{exp}^2`$ model at the point it crosses the modified Leibundgut template. $`\alpha `$ simply adjusts itself to satisfy this constraint as $`t_{exp}`$ is changed. (This leads to the narrow, but strongly tilted, confidence regions in Figure 1 of Riess et al. (1999a)). The fitting method we use integrates the probability \[$`P\propto \mathrm{exp}(-\chi ^2/2)`$; see Eq. 28.22 in Ceolin et al. (1998) \] over the parameters $`f_{max}`$, $`t_{max}`$, and $`s`$ separately for each supernova, at each value of $`t_{exp}`$, $`t_{join}`$. The fits are performed in flux (rather than magnitudes); this allows the use of non-detections, these being the principal source of early-epoch data for the Perlmutter et al. (1999) high-redshift supernovae. Two alternative methods are used to perform the integrations over $`f_{max}`$, $`t_{max}`$, and $`s`$. In the first, the integral over $`f_{max}`$ is performed analytically and the subsequent integration over $`t_{max}`$ and $`s`$ uses the adaptive integration algorithm of Berntsen et al. (1991). The second method uses a grid of $`\mathrm{\Delta }f_{max}=0.01`$, $`\mathrm{\Delta }t_{max}=0.1`$ days and $`\mathrm{\Delta }s=0.01`$ centered on the averages of the best fit values over $`t_{exp}`$ for the high-redshift SNe Ia. To account for the tightly constrained parameters of the low-redshift SNe Ia a hybrid technique is used in which the integral over $`f_{max}`$ is done analytically and a grid of $`\mathrm{\Delta }t_{max}=0.01`$ days and $`\mathrm{\Delta }s=0.0005`$ is used to integrate over $`t_{max}`$ and $`s`$. In each, the limits are chosen such that the probabilities are negligible at the boundaries. We find excellent agreement among these methods. The end product is a map of probability over $`t_{exp}`$, $`t_{join}`$ for each supernova. These probability maps are then multiplied, then renormalized, for an ensemble of supernovae, e.g., the high-redshift supernovae from Perlmutter et al. (1999) or the low-redshift supernovae of Riess et al. (1999a), to determine the joint probability distribution function over $`t_{exp}`$, $`t_{join}`$, $`P(t_{exp},t_{join})`$; or after normalizing over $`t_{exp}`$ for each $`t_{join}`$, the conditional probability distribution function for $`t_{exp}`$ given $`t_{join}`$, $`P(t_{exp}|t_{join})`$.
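The structure of the grid-based integration can be sketched as follows for a single supernova and a single trial template (an illustration only, not the code used in the analysis; the analytic step exploits the fact that the model is linear in $`f_{max}`$, and a uniform grid stands in for the adaptive algorithm of Berntsen et al.):

```python
import numpy as np

def template_probability(t_obs, flux, err, template, tmax_grid, s_grid):
    """P(data | template) with P proportional to exp(-chi^2/2), integrated
    analytically over the peak flux f_max and on a grid over t_max and s.
    `template` is a unit-peak shape psi(t_s); the model for an observed flux
    point is f_max * psi((t - t_max)/s)."""
    w = 1.0 / np.asarray(err) ** 2
    total = 0.0
    for t_max in tmax_grid:
        for s in s_grid:
            m = template((np.asarray(t_obs) - t_max) / s)
            smm = np.sum(w * m * m)
            if smm <= 0.0:
                continue
            f_best = np.sum(w * flux * m) / smm          # best-fit amplitude
            chi2_min = np.sum(w * (flux - f_best * m) ** 2)
            # Gaussian integral over f_max about f_best
            total += np.exp(-0.5 * chi2_min) * np.sqrt(2.0 * np.pi / smm)
    return total * (tmax_grid[1] - tmax_grid[0]) * (s_grid[1] - s_grid[0])

# Evaluating this for every (t_exp, t_join) template and every supernova, and
# then multiplying the per-supernova maps, gives the joint P(t_exp, t_join).
```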
### 2.3 Supernova Light-Curve Samples What we will hereafter refer to as the “low-redshift SNe Ia” sample consists of SN 1990N, SN 1994D, SN 1996bo, SN 1996bv, SN 1996by, SN 1997bq, SN 1998aq, SN 1998bu, and SN 1998ef, for which early-epoch light-curve photometry transformed to $`B`$-band from unfiltered CCD images has been reported by Riess et al. (1999b). The early-epoch photometry was supplemented with data from Lira et al. (1998); Patat et al. (1996); Meikle et al. (1996); Riess et al. (1999c); Suntzeff et al. (1999); Jha et al. (1999); Riess et al. (1999b) to produce full $`B`$-band light curves extending over peak and beyond. Riess et al. (1999b) reports four early-epoch light-curve points (one an upper limit) for SN 1998dh; however, we were unable to include this supernova since the subsequent light-curve photometry was unavailable. What we will hereafter refer to as the “high-redshift SNe Ia” sample consists of the 30 SNe Ia from Perlmutter et al. (1999) having redshift $`0.35<z<0.65`$, with the exception of SN 1997aj<sup>1</sup><sup>1</sup>1SN 1997aj was excluded due to the presence of several highly deviant points in its light curve (including large deviations within a given night) which for some combinations of $`t_{exp}`$ and $`t_{join}`$ produced fits with greatly improved values of $`\chi ^2`$, but unacceptably large values of stretch. Inclusion of SN 1997aj gave longer rise times, in better agreement with Riess et al. (1999a), and reduced the rise-time difference by $`\sim `$ 0.9 days compared to our results in §2.4. Thus, although SN 1997aj was found to reinforce the findings discussed below, the most conservative choice was to eliminate this SN.. As defined, this sample satisfies the requirements that at least 60% of the light in the $`R`$-band comes from the rest-frame $`B`$-band and that at least 60% of the rest-frame $`B`$-band light is included in the $`R`$-band. Redshift limits satisfying these conditions were determined using the $`B`$-band and $`R`$-band filter responses given in Bessel (1990), along with spectra of normal SNe Ia as a function of light-curve epoch constructed by Nugent et al. (2000). These restrictions allow comparison with the low-redshift $`B`$-band photometry of Riess et al. (1999a) while minimizing the potential uncertainties inherent in making large cross-filter K-corrections. Even so, K-correction uncertainties will be present for those supernovae near these redshift limits, as well as at very early times where few spectra are available from which K-corrections can be calculated. Note that most of the other eleven supernovae from Perlmutter et al. (1999) have complete light curves only in rest-frame $`V`$-band or $`U`$-band, and therefore are unsuitable for determination of the $`B`$-band light-curve parameters. Also note that not all 30 high-redshift SNe Ia have equal value in determining the rise time. Only those that were fortuitously caught on the rise in the reference images of the search run can constrain this region of the light curve. ### 2.4 Results of the Statistical Analysis Templates were generated for $`-29.9<t_{exp}<-10.1`$ days, in steps of 0.2 days, and for $`-20<t_{join}<-4`$ days, in 1 day steps. Fitted templates were required to have $`t_{exp}`$ earlier than $`t_{join}`$. Figure 2 presents the results of these fits; shown are the 1–, 2–, and 3–$`\sigma `$ confidence regions for the conditional probability, $`P(t_{exp}|t_{join})`$, for the high-redshift SNe Ia sample.
Also shown are points which mark the most probable value of $`t_{exp}`$ at each $`t_{join}`$ for the low-redshift SNe Ia sample. Figure 3 distills the $`t_{exp}`$ differences taken from Figure 2 into equivalent Gaussian standard deviations for the difference in $`t_{exp}`$ between the high-redshift and low-redshift SNe Ia samples. These plots demonstrate that for $`t_{join}<-10`$, the high-redshift and low-redshift SNe Ia samples agree at the 1-$`\sigma `$ level or better. For $`t_{join}`$ less than $`-15`$ days, the high-redshift SNe Ia sample is unable to place meaningful constraints on $`t_{exp}`$. The rise-time value quoted in Riess et al. (1999a) of $`t_{exp}=-19.98\pm 0.15`$ — compared to $`t_{exp}=-20.08\pm 0.19`$ (statistical) obtained from our analysis — was determined for $`t_{join}=-10`$ days, and is plotted in Figure 2. Even at this reference epoch the disagreement between the high-redshift and low-redshift SNe Ia samples is only 1.5-$`\sigma `$, not the 5.8-$`\sigma `$ difference found by Riess et al. (1999a). The value of $`t_{exp}=-17.6\pm 0.4`$ days given in preliminary analysis of the high-redshift sample by Groom (1998) is also plotted in Figure 2. As Figure 2 shows, the main difference between our finding and that of Riess et al. (1999a) lies in different best-fit values and larger uncertainties for the high-redshift SNe Ia sample (differing by $`0.7`$ days at $`t_{join}=-10`$ days). The uncertainties are larger, especially for the high-redshift SNe Ia sample, when uncertainties in the light-curve fit parameters, $`f_{max}`$, $`t_{max}`$, $`s`$ (and to a lesser extent amongst the photometry points) are fully taken into account. These larger uncertainties come about because the individual SNe Ia are given the proper freedom to adjust to templates away from the global best-fit template. Previous analyses have artificially suppressed this freedom, and have therefore underestimated the uncertainty on $`t_{exp}`$. Given the large uncertainty in $`t_{exp}`$, potential perturbations from the systematic effects discussed in the next section, and the fair to good agreement in $`t_{exp}`$ between the low- and high-redshift SNe Ia for reasonable values of $`t_{join}`$, we consider a detailed analysis of the best $`t_{join}`$ unwarranted. Riess et al. (1999b) found that $`\chi ^2`$ per degree of freedom deteriorated for their fits for $`t_{join}>-8`$ days, indicating that the simple $`\alpha ,t_{exp}^2`$ model is not appropriate later than $`-8`$ days. A cursory examination of the joint probability, $`P(t_{exp},t_{join})`$, for our fits showed that the low-redshift SNe Ia sample prefers $`t_{join}\sim -8`$ days, where our analysis finds a modest disagreement between the low-redshift and high-redshift supernovae. However, the early low-redshift SNe Ia observations prefer a slightly different $`t_{join}`$; $`P(t_{exp},t_{join})`$ based on observations having $`t-t_{max}<-6`$ days gives a preferred $`t_{join}\sim -12`$ days, where high- and low-redshift rise times agree quite well. This mild tension within the low-redshift SNe Ia sample with regard to the preferred $`t_{join}`$ is somewhat less than the 2–$`\sigma `$ level. A similar, but weaker, situation is found for the high-redshift SNe Ia sample. This is not a complete surprise; as the following section demonstrates, there are systematic variations in the late-time light-curve behavior of SNe Ia (such as SN 1994D from the low-redshift SNe Ia sample) which can affect the preferred rise time.
Furthermore, a best fit value of $`t_{join}`$ depends not only on the rise-time behavior, but also the accuracy of the modified Leibundgut template for $`t_{join}tt_{max}4`$ days (the latest $`t_{join}`$ tested). The relative probabilites at different values of $`t_{join}`$ include a contribution from the $`\alpha ,t_{exp}^2`$ model for $`t<t_{join}`$ and from the Leibundgut template for $`t>t_{join}`$. Because the parameters ($`t_{max}`$ and $`s`$) for the modified Leibundgut template are driven largely by points with $`t>4`$ days, any early-time mismatch between the modified Leibundgut template and the data will degrade the quality of the fit a different amount for different values of $`t_{join}`$. This effect should only be of importance for $`t_{join}`$ later than about $`10`$ days, where the data are better and where the best-fit $`\alpha ,t_{exp}^2`$ curves begin to depart from the (full) modified Leibundgut template. As things stand, the goodness of fit changes imperceptibly with $`t_{join}`$ for the high-redshift SNe Ia sample. ## 3 Systematic Effects Given these findings from the statistical analysis it is clear that there is a reasonable consistency between the rise times of the high- and low-redshift SNe Ia. However, it is important to explore the possibility of systematic effects which have the potential to drive a fit to another location and/or increase the error bars further. One such effect arises from application of the stretch relationship when fitting an observed light curve with a given template. As mentioned in $`\mathrm{\S }2.1`$, the stretch method works particularly well up to $`t+25`$ days past maximum. After this point the light curve of a SN Ia leaves the photospheric phase and enters into the nebular phase. This is marked by a bend in the light curve between +25 and +35 days after maximum light where the rapid drop from peak brightness slows down into an exponential decline of the light curve. Since this exponential decline is governed mostly by the radioactive decay of <sup>56</sup>Co to <sup>56</sup>Fe one would not expect it to “stretch” like the earlier portion of the light curve. In fact, as seen in Leibundgut (1988), the slopes of the declines are very similar for a wide range of SNe Ia light-curve widths. This highlights one of the current limitations of the stretch method; the entire template, regardless of epoch, is stretched to fit the data. This is not just a problem for the stretch method, but for any of the current SN Ia template fitting methods, which all employ a one-to-one correlation between peak brightness and the shape of the light curve. This is a small effect compared to the peak flux and the typical photometric uncertainties in current low- and high-redshift data sets. However, it is important to consider its effect specifically on the measurement of the rise time. The amplitude with respect to peak of the aforementioned exponential decline varies among SNe Ia. It turns out that the stretch method can compensate somewhat for these differing amplitudes, providing better fits in the $`\chi ^2`$ sense, but at the expense of introducing a possible bias in $`s`$. Since the amplitude variations during the exponential decline become apparent at brightnesses similar to those on the rising portion of the light curve being studied here, and since the data are generally much better for the later portion of the light curve, the late-time light-curve behavior may bias determination of the rise time. 
The effect of this bias on the template fitting method was studied via a Monte Carlo simulation, as described below. Figure 4a shows the modified Leibundgut template along with two other templates derived from the SNe Ia 1986G and 1994D (Phillips et al., 1987; Meikle et al., 1996; Patat et al., 1996). These supernovae were chosen because, among those SNe Ia with good late-time data, they produced the largest deviations from the modified Leibundgut template in the tail of the light curve. To produce these templates for the Monte Carlo simulations the data through day +15 for SNe 1986G and 1994D were adjusted to fit the modified, unity-stretch, Leibundgut template. The resulting adjustments were then applied to the data beyond day +15 using the stretch method. These adjusted late-time data were fit with a smooth curve through the bend in the light curve, followed by an exponential decline. These late-time curves were then mated to the modified, unity-stretch, Leibundgut template for $`t<+15`$ days to form complete templates. Figure 4b shows the normalized ensemble photometric error for both the high-redshift and low-redshift SN Ia samples in 7 day bins from $`-35<t-t_{max}<+75`$. This indicates how accurately a light curve would have been measured had all the observations come from just one supernova. Similarly, provided the stretch method works sufficiently well, and $`f_{max}`$, $`t_{max}`$, and $`s`$ are known, this would be the accuracy of a stretch-corrected composite light curve. Note that the high-redshift data are of consistent quality through $`50`$ days after maximum light, which enables the high-redshift SNe Ia sample data to constrain the fit to a template over a large range in time with nearly equal weight. However, this makes the high-redshift SNe Ia data susceptible to a systematic bias on the rise time due to possible deviations from the stretch fitting method for $`t>40`$ days for deviant light curves like those shown in Figure 4a. The Monte Carlo simulation performed to test for such a bias created simulated light-curve photometry data for the three sets of supernovae based on the templates seen in Figure 4a. Each set was comprised of $`100`$ different realizations of each of the supernovae in the high-redshift SNe Ia sample based on their individual temporal sampling and associated photometry errors. All of the generated supernovae were created with the following input parameters: $`s=1.0`$, $`t_{join}=-10.0`$ days, $`t_{exp}=-20.0`$ days, $`t_{max}=0.0`$ and $`f_{max}=1.0`$. The resultant light curves produced in each set were fit with the modified Leibundgut template. $`\chi ^2`$ surfaces of $`t_{exp}`$ and $`t_{join}`$ were created for each of the fits, and within a set these surfaces were added together to find the global minimum. The results of these simulations are given in Table 1. It is apparent that given a set of SN Ia observations like those available from the high-redshift SNe Ia sample, a fit for $`t_{exp}`$ can be biased by 2–3 days in either direction if all the observed SNe Ia have deviant late-time light curves like SN 1986G or SN 1994D. To allow direct comparison with Figure 2, these same simulations were used to determine the best values of $`t_{exp}`$ for input templates with $`t_{join}`$ fixed at $`-10.0`$ days. For this case we found $`t_{exp}=-19.8`$, $`-17.5`$, and $`-22.5`$, when the data were simulated using the Leibundgut, SN 1986G, and SN 1994D templates, respectively.
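A skeleton of such a simulation is sketched below (ours, heavily simplified: the generating and fitting template functions and the per-supernova sampling patterns are placeholders, and $`f_{max}`$, $`t_{max}`$ and $`s`$ are held at their generated values rather than refit as in the full procedure):

```python
import numpy as np

def rise_time_bias(epochs_by_sn, errs_by_sn, gen_template, fit_template,
                   n_real=100, seed=0):
    """Generate n_real synthetic realizations of each SN at its actual
    observation epochs and errors from `gen_template` (e.g. an SN 1986G- or
    SN 1994D-like light curve generated with t_exp = -20 d, t_join = -10 d,
    s = 1, t_max = 0 and f_max = 1), refit them with the standard template
    family `fit_template(t, t_exp, t_join)` over a (t_exp, t_join) grid,
    sum the chi^2 surfaces, and return the location of the global minimum."""
    rng = np.random.default_rng(seed)
    texp_grid = np.arange(-25.0, -14.9, 0.2)
    tjoin_grid = np.arange(-14.0, -5.9, 1.0)
    chi2 = np.zeros((texp_grid.size, tjoin_grid.size))
    for _ in range(n_real):
        for t_obs, err in zip(epochs_by_sn, errs_by_sn):
            flux = gen_template(t_obs) + rng.normal(0.0, err)
            for i, te in enumerate(texp_grid):
                for j, tj in enumerate(tjoin_grid):
                    if te >= tj:                  # explosion must precede the join
                        chi2[i, j] += 1e9
                        continue
                    resid = (flux - fit_template(t_obs, te, tj)) / err
                    chi2[i, j] += np.sum(resid ** 2)
    i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
    return texp_grid[i], tjoin_grid[j]   # compare the recovered t_exp to the input -20 d
```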
This shows that systematic errors in $`t_{exp}`$ are large even when $`t_{join}`$ is held fixed. While the SNe Ia template light curves used to simulate the high-redshift data can be thought of as extreme cases, at present the exact nature and frequency of such deviations at this light-curve phase is poorly quantified due to a lack of high-quality, well-sampled observations over peak and through day $`\sim `$ +60 for nearby supernovae. Therefore, this result should be taken as a rough upper limit on the systematic error on $`t_{exp}`$ due to temporal sampling and our current limited understanding of how the stretch relationship should be applied at late times. ## 4 Cosmological Implications Assuming all SNe Ia have rise times similar to that found by Riess et al. (1999b) from good early-time photometry, a light-curve template with $`t_{exp}\sim -20`$ days and $`t_{join}\sim -10`$ days might be a better template for use in fitting the light curves of SNe Ia at all redshifts. This raises the question of whether such a change from the modified Leibundgut template to a Riess–like template would alter the corrected peak magnitudes determined in Perlmutter et al. (1999). In comparing our fits to the high-redshift SNe Ia sample using these two alternative templates, we find no measurable change in the ensemble mean corrected peak magnitudes. We also find that no individual SN Ia changed by more than 0.02 magnitudes. Another obvious question, addressed by the simulations of §3, is whether systematic variations in late-time light-curve behavior can affect the cosmological results of Perlmutter et al. (1999). In the last column of Table 1 we list $`\mathrm{\Delta }M_B^{corr}`$, the change in the ensemble stretch-corrected peak magnitude for each dataset determined using the stretch-luminosity relation of Perlmutter et al. (1999). These changes ($`-0.039<\mathrm{\Delta }M_B^{corr}<0.020`$) are small, and less than the systematic biases already considered in Perlmutter et al. (1999) (0.05 mag). Given the fact that these simulations represent the most extreme deviations encountered with our fitting method, we conclude that this bias has no effect on the determination of the cosmological parameters from SNe Ia. ## 5 Conclusions & Discussion We find no compelling statistical evidence for a rise-time difference between nearby and distant SNe Ia, and therefore no evidence for evolution of SNe Ia. We do find that for the high-redshift SNe Ia sample, temporal sampling coupled with real deviations of SNe Ia light curves at late times could systematically bias the inferred rise time by 2–3 days. Even if present, these biases cannot dim the peak magnitudes by more than 0.02 magnitudes nor brighten them by more than 0.04 magnitudes even in the extreme cases that all the distant SNe Ia have late-time light curves like SN 1994D or SN 1986G, respectively. This leaves the cosmological results of Perlmutter et al. (1999) unchanged. Due to the large statistical uncertainties and possible systematic effects, we conclude that the extant photometry of high-redshift SNe Ia is in fact poorly suited for placing meaningful constraints on SN Ia evolution from their rise times. If future studies using better early-epoch data (such as that expected from the SNAP satellite<sup>2</sup><sup>2</sup>2See http://snap.lbl.gov for information pertaining to the SuperNova Acceleration Probe.) were to find significant rise-time differences between nearby and distant SNe Ia, would this invalidate the use of SNe Ia as calibrated standard candles?
This is a very complicated question. However, at least some models suggest that variations in the early rise-time behavior may be very sensitive to the spatial distribution of <sup>56</sup>Ni immediately after the explosion. Such differences would diminish as the SN Ia expands and the photosphere recedes, meaning that rise-time variations wouldn’t necessarily translate into differences in peak brightness (Pinto, 1999, private communication). Careful measurement of the rise time and the peak spectral energy distribution of individual SNe Ia will have to be carried out to address this question (see Nugent et al. (1995a, b) for a full description of the interplay between the rise time and the spectral energy distribution on the peak brightness of a SN Ia). It may even prove possible to use the rise time as an additional parameter to improve the standardization of SNe Ia. We close with some general observations concerning the issue of SN Ia evolution. The peak brightnesses of SNe Ia are determined at some level by the underlying physical parameters of metallicity and progenitor mass, whose mean values can be expected to evolve with redshift. Nonetheless, there should exist nearby analogs for most distant SNe Ia since there is active star formation and a wide range of metallicities within nearby galaxies (Henry and Worthey, 1999; Kobulnicky and Zaritsky, 1999). The existing empirical relations between intrinsic luminosity and light-curve shape are able to homogenize almost all nearby SNe Ia. This implies that SNe Ia with some finite (but as yet poorly quantified) range of metallicities and progenitor masses can be used as calibrated standard candles. This forms the basis for using SNe Ia at high-redshift to probe the cosmology. If there is a dominant population of SNe Ia whose members are underluminous for their light curve shape at $`z0.5`$, as would be required to explain current observations in terms of evolution, there should be nearby examples of these SNe Ia. Such SNe Ia are not predominant among nearby SNe Ia, as almost all nearby SNe Ia obey a width-brightness relation. For such SNe Ia to predominate at $`z0.5`$ while being rare nearby requires a large reduction in their rate. Searches for SNe Ia conducted using exactly the same CCD-based wide-area blind-search methods used by the SCP find that the SNe Ia rate per comoving volume element does not change significantly between $`z<0.1`$ (Aldering, 2000), $`z0.5`$ (Pain et al., 1996, 2000, in preparation), and $`z1.2`$ (Aldering et al., 2000, in preparation). For the global rates to stay roughly constant while the rate of such hypothetical subluminous SNe Ia changes by an order of magnitude would be remarkable. For instance, a shift from Pop II progenitors at $`z0.5`$ to Pop I progenitors nearby would result in suppressed rates at $`z0.5`$. This is due to the fact that Pop II stars are a minor contributor to the luminosity density out to $`z0.5`$ (Shimasaku and Fukugita, 1998). Quantifying these arguments is beyond the scope of this paper, so we do not claim they as yet place a bound on SN Ia evolution. However, such arguments should be borne in mind when weighing the likelihood that the calibrated peak brightnesses of SNe Ia evolve. These arguments can also provide a partial basis for rigorous testing of the SN Ia evolution hypothesis. We would like to thank our colleagues in the Supernova Cosmology Project for their support and contributions to this work. 
In particular, we thank Gerson Goldhaber, Don Groom, and Saul Perlmutter, for discussions on the ongoing SCP effort that provides the larger context for the analysis presented here. The analysis of the data and the Monte Carlo simulations presented in this paper were performed on the National Energy Research Scientific Computing Center’s T3E supercomputer and we thank them for a generous allocation of computing time. We would also like to thank Bill Saphir and the NERSC PC Cluster Project for additional computational time. The computing support was made possible by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC03-76SF00098.
no-problem/0001/hep-ph0001050.html
ar5iv
text
# The colour dipole approach to small 𝑥 processes ## Introduction In the high-energy (small-$`x`$) limit of photon-proton processes there is a well-known (and very old ) factorization of time-scales that allows diffraction to occur. In the proton’s rest frame, Deep Inelastic Scattering can be described by long lived photonic fluctuations, dominated by the qq̄ Fock state, which scatter off the proton over much shorter time-scales than those governing the formation of the fluctuations or the subsequent formation of the hadronic final state. When there is no net exchange of colour between the photon system and the proton, a large rapidity gap is observed between the two remnant systems and for small-$`t`$ the proton is likely to remain intact. The kinematics are such that virtual partonic states of the photon (vector mesons, real photons or the continuum) may be ‘diffracted into existence’ by the relatively soft interaction with the colour fields of the proton (for recent reviews of diffraction in photon-proton collisions and extensive references see ). The experimental observation of such diffractive processes constitutes one of the major achievements of the HERA experiments. The qq̄ Fock state is a colour dipole and may be described by specifying the photon’s light-cone wavefunction, $`\psi _\gamma (r,\alpha )`$, which depends on the transverse radius $`r`$ and the fraction, $`\alpha `$, of the photon’s energy carried by e.g. the quark. The kinematics at small-$`x`$ are such that the dipole is frozen on the time-scale of the interaction, which may be described by the scattering probability of a dipole in a given configuration. Because of this factorization of time-scales, it is reasonable to assume that the interaction itself is to a good approximation independent of the (much slower) formation of the dipole and of the hadronization process. This leads to the concept of a dipole cross section, a universal quantity which may be used not only for all small-$`x`$ exclusive processes but also the inclusive photon-proton scattering via the optical theorem. This universal interaction is driven by the soft (i.e. low momentum fraction, or sea) structure of the proton, which for perturbative interactions corresponds mainly to gluons. The suppression of large dipoles in the photon wave function at large $`Q^2`$ means that the small $`r`$ and large $`r`$ regions correspond roughly to perturbative and non-perturbative contributions respectively to the overall process. It is not clear whether the two regimes may be separated at small $`x`$. A number of authors have extracted the dipole cross section (DCS) from data and make predictions for other process rates . In this paper, we discuss assumptions which influence the various choices of parameterizations of the DCS. ## A comparison of models for the Dipole Cross Section The DCS is related to the inclusive photon-proton cross sections via an integral over all dipole configurations weighted by the square of the light-cone wavefunction of the photon of the appropriate polarization: $$\sigma _{\gamma ^{},p}^{L,T}=\text{d}\alpha \text{d}^2r|\psi _\gamma ^{L,T}(\alpha ,r)|^2\sigma _{q\overline{q}}(s,r,\alpha )$$ (1) Forshaw et al have fitted a parameterization of the DCS $$\sigma _{q\overline{q}}(s,r)=a\frac{P_s^2(r)}{1+P_s^2(r)}(r^2s)^{\lambda _s}+bP_h^2(r)\mathrm{exp}(\nu _h^2r)(r^2s)^{\lambda _h}$$ (2) where $`P_s(r)`$ and $`P_h(r)`$ are polynomials in r. 
The first term representing the ‘soft’ contribution with energy exponent $`\lambda _s=0.06`$ has a polynomial fraction that saturates at large $`r`$; the second, ‘hard’ term with energy exponent $`\lambda _h=0.39`$ provides the steep, small $`x`$ rise at large $`Q^2`$ and dies away at large $`r`$. Both have an $`r^2`$ dependence at small $`r`$. For the photon wave function itself, the tree level QED expression is used, modified by a Gaussian peak factor to represent non-perturbative effects in line with those observed in a generalized vector dominance analysis . Golec-Biernat and Wusthoff suggested the following simple $`x`$-dependent saturating form for the dipole cross section: $$\sigma _{q\overline{q}}(x,r)=\sigma _0(1-\mathrm{exp}[-r^2Q_0^2/4(x/x_0)^\lambda ])$$ which has the sensible feature that it is proportional to $`r^2`$ at small $`r`$. It is constant at large $`r`$. A fit to the HERA data on DIS with $`x<0.01`$, excluding charm and assuming $`Q_0=1.0`$ GeV, produced the following values: $`\sigma _0=23`$ mb, $`x_0=3.0\times 10^{-4}`$, $`\lambda =0.29`$ and a reasonable $`\chi ^2`$. It seems intuitively clear that the DCS cannot depend on scaling variable $`x=Q^2/W^2`$ for very large dipoles (of the order of the pion radius and above). Either a flat energy ($`W^2`$) dependence, as above, or one in agreement with the slow power-like Donnachie-Landshoff growth observed in hadronic cross sections would seem reasonable once the ‘dipole’ reaches the size of the hadron (it is very unlikely that it will literally be a dipole for such large $`r`$: here one should think of $`r`$ as the typical transverse size of the complicated non-perturbative system). The small $`r`$ behaviour is expected on geometrical grounds: the cross section grows linearly with the transverse area of the dipole. In fact, at small $`r`$ one may go further. It is well known from perturbative QCD, and has been applied to hard vector meson production (see e.g. ), that the dipole cross section is related, to leading-log accuracy, to the LO gluon density: $$\sigma _{q\overline{q}}(x,r)=\frac{\pi ^2r^2}{3}\alpha _s(\overline{Q}^2)x^{}g(x^{},\overline{Q}^2)$$ where $`x^{}\gtrsim x`$. A model which exploits this fact has recently been developed . It was found that the large magnitude and steep small-$`x`$ rise of the LO gluon densities extracted from conventional global fits imply that unitarity will be violated by this QCD-improved form at small-$`x`$ because the magnitude of the dipole cross section becomes as large as the typical meson-proton cross section ($`\sigma _{\pi p}\sim 25`$ mb). Depending on the precise form of the ansatz used this will happen either within, or just beyond, the HERA region. This fact calls into question the use of the usual DGLAP leading-twist analysis for the analysis of small-$`x`$ structure functions for moderate photon virtualities (in the range $`Q_0^2<Q^2<10`$ GeV<sup>2</sup>, which is usually considered a safe region in which to apply DGLAP). What should be used for $`\overline{Q}^2(r^2)`$ in a particular hard process? To leading-log accuracy all choices of hard scales are equivalent, but can we make an inspired choice based on our knowledge of the $`r`$-integral, pending a full NLO calculation? In Ref. the $`r`$-space expression for $`F_L`$ is used to set the relationship between transverse dipole size and four-momentum scales using the ansatz: $`Q^2<r>_L^2=\lambda `$, where the constant $`\lambda \sim 10`$ is assumed to be universal.
This clearly requires a definition of what is meant by an ‘average’: for example one may use a median or a mode average. For the median average, it was found in that the value of $`\lambda `$ shows some $`x,Q^2`$ variation. However, for hard enough processes the principal $`r`$-behaviour of the DCS comes from the $`r^2`$ piece in equation (4) since $`\alpha _sxg`$ is a rather weak function of its argument, providing the latter is large, which implies a weak $`r`$-dependence. Figure (1) illustrates the point showing this function, at fixed $`x=10^3`$, for a variety of PDF sets. For large arguments the DCS varies little with scale, which corresponds to a weak dependence on the precise value of $`\lambda `$. However it does inherit the steep energy rise from the gluon. This observation would seem to support the steep small-$`x`$ rise implied at small $`r`$ in equation (3), as well as the recent two-Pomeron picture of Donnachie and Landshoff in which the soft Pomeron is ‘higher-twist’ (i.e. its influence dies off with increasing $`Q^2`$, relative to the ‘hard Pomeron’ which has a steeper energy dependence.) The scale at which to sample $`\alpha _sxg`$ under the integral is $`\overline{Q}^2=\lambda /r^2`$ and the typical average scale changes from process to process depending on the weighting provided by the appropriate light-cone wavefunctions under the integral in $`r`$. Once the relationship between transverse sizes and $`Q^2`$-scales has been set it is necessary to decide how to extrapolate the QCD-improved DCS into the non-perturbative region (see for more details). This inevitably leads to some model-dependence of predictions for particular small-x processes which is intimately related to the interplay of long and short distance Physics. ## Hard exclusive diffractive processes This question of scale setting is very important in making pQCD predictions for hard exclusive diffractive processes such as Hard Vector Meson Production and Deeply Virtual Compton Scattering (for which first data will soon be available). In these processes $`\sigma _{q\overline{q}}`$ appears in the amplitude and the issue of skewedness of the amplitude also plays a role and in general modifies the gluon density in equation (4) to the skewed gluon density and thereby restricting the universality of the DCS to some extent. A topical review of recent HERA data on Vector Mesons may be found in the preprint by Crittenden . The recent review of Teubner presents a rather optimistic view of the ability of two-gluon model of perturbative QCD to make clear predictions for such hard diffractive processes. The predictions are clear in the asymptotically hard limit. However, it is not clear how close the measured data are to this limit. The analysis of the relatively light $`J/\psi `$-meson family will play a vital role because the typical scales involved in the gluon density lie in precisely the dangerous region of rather low $`\overline{Q}^2`$, where, at fixed $`x`$, $`\alpha _sxg`$ has strong sensitivity to its argument (see figure (1)). At low scales this function starts off roughly flat in energy and steepens as the scale hardens. This leads to very large uncertainties in both the normalization and energy dependence of processes which sample the gluon density at relatively low scales. As such, one of us (MM) strongly disagrees with the popular statement that diffractive $`J/\psi `$ photo- and electroproduction is well understood from the point of view of perturbative QCD. Experimental evidence backs up this caution. 
For example, as recently pointed out by Hoyer and Peigne , the pQCD predictions for the ratio of $`\psi (2s)/\psi (1s)`$ in photoproduction are too small by a factor as much as five. This conclusion may be directly inferred by extending the analysis of the scaling issue in , to smaller effective scales. In the opinion of one of the authors (MM) a complete, careful reanalysis of the $`J/\psi `$-family is urgently required, given a reasonable ansatz for the DCS. ## Inclusive Diffraction In exclusive processes, such as the diffractive structure functions, higher order Fock state are known to contribute. However for specific diffractive final states such as exclusive Heavy Vector Meson Production and DVCS their effects might be expected to be minimal. The qq̄ dipole and qq̄g higher Fock state contributions to $`F_2^{D(3)}`$ have been calculated using expressions derived from a momentum space treatment, with an effective two gluon dipole description for the latter . Plots of these quantities are compared with H1 and ZEUS 1994 data in figure (2). Agreement is good for the H1 data, even at low $`\beta `$ where the qq̄g term dominates. The ZEUS data also give good agreement overall but with deviations at larger $`Q^2`$ values for small and moderate $`\beta `$. ## Conclusions In conclusion, it is still unclear whether saturation in energy is a feature of $`\sigma _{q\overline{q}}`$ at present energies. The best way to settle the issues surrounding the precise form of the DCS is to perform global analysis of all available small-$`x`$ processes: inclusive and diffractive structure functions, exclusive vector meson production, DVCS, etc. Different processes are sensitive to different regions in $`r`$, a global analysis of this kind would therefore be very valuable in differentiating between the different ansätze for the dipole cross section. Obviously, the increased precision of the small-x data which should result from the forthcoming luminosity upgrade will be vital in addressing the Physics issues discussed here. GRK would like to thank PPARC for a Studentship. MM wishes to thank Mark Strikman and Jochen Bartels for useful discussions. ## References
no-problem/0001/hep-ex0001011.html
ar5iv
text
# Studies of Avalanche Photodiode Performance in a High Magnetic Field ## 1 Introduction The avalanche photodiode (APD) is a solid state photodiode with internal gain. It has been chosen as the baseline photodetector for the electromagnetic crystal calorimeter (ECAL) of the Compact Muon Solenoid (CMS) Detector at the Large Hadron Collider at CERN in Geneva, Switzerland. The ECAL consists of some 60,000 lead tungstate (PbWO<sub>4</sub>) crystals, each to be read out by a pair of APD’s. Among the reasons for choosing the APD are high quantum efficiency, a weak response to minimum ionizing particles, otherwise known as the nuclear counter effect, and, as we show here, an insensitivity to high magnetic fields. At the CMS experiment, the magnetic field provided by the superconducting solenoid will be 4 T. Here we present the results of tests carried out to investigate the performance of a Hamamatsu APD, similar to the one chosen for CMS, in the presence of a 7.9 T field. While it has been widely believed that such a magnetic field would have little or no effect on APD’s, this is, as far as we know, the first explicit measurement of APD performance in a strong magnetic field with modern devices. ## 2 APD Studied The APD studied was an experimental model developed by Hamamatsu for CMS. The gain (M) of the APD, determined at bias voltage V<sub>b</sub>, was calculated using Eq. 1. I<sub>d</sub>, I<sub>ill</sub>, and I<sub>ph</sub> are the dark current, the illuminated current, and the photocurrent, respectively. The photocurrent is the difference between the illuminated and dark current. Illumination was provided by the light-emitting diode (LED) described in Section 4. $$M(V_b)=\frac{I_{ill}(V_b)I_d(V_b)}{I_{ill}(30V)I_d(30V)}=\frac{I_{ph}(V_b)}{I_{ph}(30V)}$$ (1) The photocurrent at 30 V was used to determine the gain because in this region of bias voltage, the photocurrent is essentially constant and the gain is assumed to be equal to one. The working bias voltage for a gain of 50 is approximately 355 V at a temperature of 25 degrees C . Other parameters describing the APD are found in Table 1 . ## 3 The Magnetic Field The 7.9 T field was produced by a solenoidal magnet in the High-Field EPR Laboratory at Northeastern University. The magnet, produced by NMR Magnex Scientific Inc., features a horizontal room-temperature bore design that allows convenient optical access to the region of highest field homogeneity. The accessible region of the magnet is a cylindrical volume of diameter 60.5 mm and length 0.650 m. The field possesses homogeneity typical of magnets used in NMR applications, with a maximum inhomogeneity of 0.2 ppm. The region of maximum homogeneity is a cylindrical volume in the center of 1 cm in diameter and 1 cm in length. Measurements were performed with the APD in this cylindrical volume. ## 4 APD Test ### 4.1 Experimental Set-Up and Procedure The APD was mounted inside a light-tight container and was connected to a blue light-emitting diode (LED) with optical fiber. The blue LED was a model NSPB320BS produced by Nichia America Corporation and emits light at a peak wavelength of 460 nm. The set-up allowed for the surface of the APD to be oriented parallel and perpendicular to the direction of the magnetic field. At each of these orientations, the APD was inserted into the center of the solenoid and the dark current (I<sub>d</sub>) and the illuminated current (I<sub>ill</sub>) were measured as the bias voltage (V<sub>b</sub>) was increased from 30 to 360 volts. 
From these values, the gain (M) as a function of V<sub>b</sub> was obtained using Eq. 1. For comparison, this procedure was also repeated with the APD outside of the field. The bias source voltage was provided by a Keithley 2410 Sourcemeter; the current measurements were performed with this device as well. ### 4.2 Analysis and Results The gain values for each value of the bias voltage for the runs conducted outside of the field (five runs total) were averaged and appear in Fig. 1. For each run conducted inside the field (one at each orientation), and for each value of the bias voltage, the gain was divided by the average value of the gain from the runs conducted outside of the field. These values appear in Figs. 2-3. The plots indicate that there appears to be no effect on the performance of the APD in the presence of the magnetic field. ## 5 Summary An experimental avalanche photodiode (APD) produced by Hamamatsu was exposed to a 7.9 T magnetic field. The surface of the APD was oriented both parallel and perpendicular to the field. At each orientation, the dark current (I<sub>d</sub>) and illuminated current (I<sub>ill</sub>) were measured as the bias voltage (V<sub>b</sub>) was increased from 30 to 360 volts. From these values the photocurrent (I<sub>ph</sub>) and gain for each value of the bias voltage were obtained. For comparison, this procedure was also performed with the APD outside of the magnetic field. From this comparison, we find that APD gain is unaffected by the presence of a 7.9 T magnetic field. We thank Y. Musienko for valuable advice and assistance, and gratefully acknowledge the National Science Foundation for financial support. We would also like to thank our colleagues on CMS.
no-problem/0001/hep-ph0001227.html
ar5iv
text
# From scalar to string confinement ## I Introduction Going beyond the non-relativistic potential model of quark confinement means that more than the static interaction energy must be specified. In the language of potential models the Lorentz nature of the interaction is needed. To agree with the observed spin-orbit splitting it was proposed long ago that the large distance (confining) potential is a Lorentz scalar. In this case there is no magnetic field to influence the quarks’ spins and the only spin-orbit interaction is the kinematic “Thomas term.” The Thomas type spin-orbit interaction partially cancels that of the short range one-gluon exchange, in agreement with the observed spectrum. Some insight into the use of the scalar potential was given by Buchmüller . His argument is that at large distances one expects the QCD field of the quarks to become string- or flux-tube-like. The QCD flux tube is purely chromoelectric in its rest frame, and hence in the rest frame of each quark there is no chromomagnetic field to provide a spin-orbit interaction. The scalar interaction yields this same result by fiat; there is no magnetic field anywhere because it is not a vector-type interaction. This provides some justification for using the scalar potential but does not establish a direct connection. It agrees only in having the same spin-orbit interaction at long range as QCD. Subsequently it was shown that for slowly moving quarks, QCD predicts both spin-dependent and spin-independent relativistic corrections. The long-range spin dependence is just the Thomas type spin-orbit interaction. The spin-independent corrections differ from those of scalar confinement . It also has been established that the QCD predictions at long distance are the same as those of a string or flux tube interaction . Lattice simulations also favor the Thomas interaction . Since spin-independent effects are difficult to identify from the data, scalar confinement remains phenomenologically successful. As scalar confinement is also relatively simple computationally, it continues to be a popular and useful tool in hadron physics. It should be pointed out that its use in the Salpeter equation leads to cancellations in the ultra-relativistic limit, resulting in a very non-linear Regge trajectory . Although scalar confinement has been used for a long time in hadron physics, its relation to QCD has never been clarified. It is the purpose of this paper to place scalar confinement in relation to QCD and in particular to the QCD string. In section II we point out that there is a certain four-vector potential that is isomorphic to a scalar potential. In section III we compare this four-vector potential to the QCD string. Noting certain similarities and differences, we propose a model intermediate between the string and scalar confinement. The semi-relativistic reductions for scalar, time-component vector, intermediate, and string confinements are compared in section IV. Although by construction, all these confinement models have the same non-relativistic limit, their relativistic reductions differ. In section V we explore the “ultra-relativistic” Regge sector with a massless quark via semi-classical quantization. The Regge behavior of the different confinement models show some remarkable similarities and differences. Finally, in section VI we present our conclusions and summarize our work. 
## II The four-vector potential isomorphic to the scalar potential The action for a scalar (spinless) quark moving in Lorentz scalar and four-vector potentials, $`\varphi (x)`$, and $`A_\mu (x)`$ respectively, is $$S=𝑑\tau \left[m+\varphi (x)u^\mu A_\mu (x)\right],$$ (1) where $`m`$ is the rest-mass of the quark, $`u^\mu `$ is the quark’s four-velocity, and $`d\tau `$ is the proper time element $`dt/\gamma `$. The quark four-velocity, $`u^\mu =(\gamma ,\gamma 𝐯)`$, with $`\gamma =(1𝐯^2)^{1/2}`$, satisfies $`u^\mu u_\mu =1`$. When $`A_\mu (x)0`$, the action (1) reduces to the usual scalar potential action. On the other hand, when $`\varphi (x)=0`$, the action (1) describes a quark moving in an “electromagnetic” ($`U(1)SU(3)_{\mathrm{color}}`$) color field. It was pointed out by Buchmüller that in the rest frame of the QCD flux tube there is no color magnetism so that the only spin orbit interaction is Thomas precession. If we want to implement Buchmüller’s criterion we may assume that in the quark rest frame $$A^\mu ^{}(x)=(\varphi (r),\mathrm{𝟎}),$$ (2) where $`\varphi (r)A^0^{}(x)`$ is the time component of $`A^\mu ^{}(x)`$. In the laboratory frame, where the quark velocity is $`𝐯`$, the four-vector potential is $$A^\mu =u^\mu \varphi (r)=(\gamma ,\gamma 𝐯)\varphi (r).$$ (3) We note that the components depend on both position and velocity. The vector potential contributes to the action (1) as $$u^\mu A_\mu =\varphi (r).$$ (4) The resulting contribution is exactly the same as the scalar potential in Eq. (1). The four-vector potential corresponding to $`\varphi (r)=ar`$ was discussed by us earlier in Ref. . By this simple demonstration we have shown that there are two Lorentz type potentials that have identical consequences. The four-vector version is apparently more closely related to QCD. As we will see, we can quite closely draw similarities and differences. ## III Comparing scalar and string confinement - an intermediate model emerges For a spinless quark moving relative to a heavy quark at the origin, the action can be written as the time integral of a function of the light quark’s position and velocity, $$S=𝑑tL(𝐫,𝐯).$$ (5) If we consider the quark as a particle of mass $`m`$ moving in a linear scalar confining potential $`\varphi (r)=ar`$, its Lagrangian is $`L_{\mathrm{scalar}}`$ $`=`$ $`\gamma ^1\left(m+\varphi (r)\right)`$ (6) $`=`$ $`m\sqrt{1v^2}ar\sqrt{1v^2}.`$ (7) At large distances, QCD is thought to resemble a Nambu-Goto string or flux tube model. For a scalar quark at the end of a straight flux tube, the corresponding Lagrangian is $$L_{\mathrm{string}}=m\sqrt{1v^2}ar_0^1𝑑\sigma \sqrt{1\sigma ^2v_{}^2},$$ (8) where $`v_{}`$ is the quark velocity transverse to the string. Comparing the scalar and string interactions, we see there are two evident differences. The first is that the string energy is spread along the length of the string whereas in the scalar potential case the energy may be thought of as being concentrated at the quark coordinate. The second is that because of the reparametrization invariance of the Nambu-Goto action (which physically is the invariance of an electric field to boosts along its direction), from which Eq. (8) follows, only the transverse velocity of the string may appear in the interaction energy. The first distinction can be considered as a quantitative one which leaves the basic structure unchanged. 
This difference changes the velocity dependence of the additional three-momentum due to the interaction from $`𝐩=ar𝐯`$ in the scalar case to $`𝐩=\frac{ar}{2v_{}}\left[\frac{\mathrm{arcsin}v_{}}{v_{}}\sqrt{1v_{}^2}\right]\widehat{𝐯}_{}`$ for the string. The second distinction has far-reaching consequences. In a non-rotating ($`s`$-wave) system, the scalar interaction contributes to the momentum whereas the string does not. The string Hamiltonian contributes only as the time component of a vector potential (vector-like) while the scalar Hamiltonian remains scalar. It is instructive to construct a confinement model in which one of the above distinctions is removed. We will briefly consider the intermediate model having Lagrangian $$L_{\mathrm{Int}}=m\sqrt{1v^2}ar\sqrt{1v_{}^2}.$$ (9) We note that although the interaction is concentrated at the quark position, it depends only on the transverse velocity. This Lagrangian will lead to a Hamiltonian having characteristics of the string while remaining algebraically tractable. In the usual way, the Hamiltonian corresponding to Eq. (9) is found to be $$H_{\mathrm{Int}}=m\gamma +ar\gamma _{},$$ (10) and the angular momentum, $`J=L_{\mathrm{Int}}/\omega `$, with $`v_{}=\omega r`$, is $$J=m\gamma v_{}r+ar^2\gamma _{}v_{}.$$ (11) Unlike in the string system, the velocities here can be eliminated in favor of the momenta, making this model much more tractable. From the definition of radial momentum $$p_r=\frac{L_{\mathrm{Int}}}{\dot{r}}=m\gamma \dot{r},$$ (12) the useful identity $$m\gamma =W_r\gamma _{},$$ (13) with $$W_r\sqrt{p_r^2+m^2},$$ (14) follows. Using the identity (13), we find that $`H_{\mathrm{Int}}`$ and $`J`$ of Eqs. (10), (11) become $`H_{\mathrm{Int}}`$ $`=`$ $`(W_r+ar)\gamma _{},`$ (15) $`J`$ $`=`$ $`rv_{}\gamma _{}(W_r+ar).`$ (16) We can solve Eq. (16) for $`\gamma _{}`$ using $`v_{}^2=1\gamma _{}^2`$, and substituting into Eq. (15) to obtain $$H_{\mathrm{Int}}=\sqrt{\frac{J^2}{r^2}+(W_r+ar)^2}.$$ (17) ## IV Comparing relativistic corrections of spinless confinement models As we have seen, there are several types of confinement models, even for spinless quarks. In this section we will enumerate and compare the relativistic reductions of various models. We first consider the relativistic reductions of the classic static potential models. ### A Scalar confinement From the scalar interaction Lagrangian (6) with $`\varphi =ar`$, we find the canonical three-momentum to be $$𝐩=(m+ar)\gamma 𝐯,$$ (18) which results in the Hamiltonian $$H=\sqrt{p^2+(m+ar)^2}.$$ (19) For $`mar`$ and $`mp`$, we expand to obtain the relativistic corrections $`H`$ $``$ $`\sqrt{p^2+m^2}+ar{\displaystyle \frac{a}{2m^2}}p^2r+\mathrm{}`$ (20) $`=`$ $`\sqrt{p^2+m^2}+ar{\displaystyle \frac{ap_r^2r}{2m^2}}{\displaystyle \frac{aJ^2}{2m^2r}}+\mathrm{}.`$ (21) Even though scalar confinement will yield, for spin-1/2 quarks, the spin-orbit interaction consistent with experiment, lattice, and QCD, the spin-independent terms in Eq. (21) are inconsistent with QCD . ### B Time component vector confinement In time component vector confinement models, the potential $`ar`$ is taken to be the (laboratory frame) time component of a vector potential $`A^\mu `$; $`A^\mu =(ar,\mathrm{𝟎})`$. The quark Lagrangian then is $$L_{\mathrm{vector}}=m\sqrt{1v^2}ar.$$ (22) The canonical three-momentum following from this Lagrangian, $$𝐩=_𝐯L=m\gamma 𝐯,$$ (23) leads to the Hamiltonian $$H=\sqrt{m^2+p^2}+ar.$$ (24) There are no relativistic corrections other than kinetic energy corrections. 
Vector confinement is disfavored since the associated spin-orbit interaction adds to the short range spin-orbit interaction giving spin-orbit splittings that are too large when compared to experimental values or lattice simulations. ### C Intermediate model The Hamiltonian for this model was given in Eq. (17). The relativistic reduction for $`mar`$ and $`mp`$ is $$H_{\mathrm{Int}}\sqrt{p^2+m^2}+ar\frac{aJ^2}{2m^2r}+\mathrm{}.$$ (25) Comparing to $`H_{\mathrm{scalar}}`$ in Eq. (21), we see the same reduction except for the missing $`p_r`$ term. This might be expected since the interaction does not contribute to the radial momentum. We discuss this result further in the following subsection. ### D String confinement The reduction of the string is discussed in Ref. , where it was shown that the string contributes a rotational energy equal to that of a uniform rod of length $`r`$ and mass $`ar`$. This energy is $`E_R`$ $`=`$ $`{\displaystyle \frac{1}{2}}I\omega ^2,`$ (26) $`=`$ $`{\displaystyle \frac{1}{2}}k(ar)r^2\left({\displaystyle \frac{J}{mr^2}}\right)^2,`$ (27) $`=`$ $`{\displaystyle \frac{kaJ^2}{2m^2r}},`$ (28) where the geometrical factor $`k=\frac{1}{3}`$ for a uniform rod. If all of the “mass” of the string is concentrated at the position of the moving quark end, then $`k=1`$. The “kinetic” energy term, when expanded, yields $$\sqrt{p^2+m^2}m+\frac{p^2}{2m}\frac{p^4}{8m^3}+\mathrm{}.$$ (29) In the semi-relativistic regime the momentum is mostly that of the quark with a small contribution from the “interaction.” $`p^2`$ $`=`$ $`p_r^2+{\displaystyle \frac{1}{r^2}}(J_q+J_{\mathrm{in}})^2`$ (30) $``$ $`\left(p_r^2+{\displaystyle \frac{J_q^2}{r^2}}\right)+{\displaystyle \frac{2J_qJ_{\mathrm{in}}}{r^2}}`$ (31) $``$ $`p_q^2+{\displaystyle \frac{2J_q}{r^2}}\left({\displaystyle \frac{J_q}{mr^2}}\right)\left(kar^3\right),`$ (32) $`p^2`$ $``$ $`p_q^2+4mE_R,`$ (33) and hence, $$\sqrt{p^2+m^2}\sqrt{p_q^2+m^2}+2E_R.$$ (34) So, if one separates the Hamiltonian into the quark’s energy plus an interaction energy $`ar+E_R`$, then $$H\sqrt{p^2+m^2}+arE_R.$$ (35) This is exactly what one finds in the intermediate models with $`k=1`$. The string Hamiltonian is then the same, only with $`k=\frac{1}{3}`$. $$H\sqrt{p^2+m^2}+ar\frac{aJ^2}{6m^2r}.$$ (36) This result follows systematically from the string invariants (59) and (60) in the large mass expansion . ## V Comparing Regge Structures of Spinless Confinement Models In this section we explore both the analytic and the numerical solutions for the Regge spectroscopy expected from the previously considered models. In particular, we investigate the ultra-relativistic limit when the “light” quark has zero mass. The extension to two light quarks is straightforward. It is in this “massless” limit where straight Regge trajectories with evenly spaced daughter trajectories are obtained in many confinement models and a close correspondence to observed light and heavy-light mesons is expected. In our analytical work we will usually assume that the orbital excitations are large compared to the radial excitation. We may consequently expect the semi-classical quantization scheme to be quite accurate. Quantization is carried out by performing the phase-space integral, $$2\pi (n+\mathrm{\Gamma })=p_r𝑑r=2_r_{}^{r_+}p_r𝑑r,$$ (37) where $`r_\pm `$ are the classical turning points and $`\mathrm{\Gamma }`$ is a constant that depends upon the problem.<sup>*</sup><sup>*</sup>*Roughly, $`\mathrm{\Gamma }`$ depends on the nature of the potential at the turning point. 
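As a quick numerical consistency check (added here for illustration, with arbitrary parameter values), the exact spinless Hamiltonians of Eqs. (19) and (17) can be compared with their semi-relativistic reductions, Eqs. (21) and (25); the residuals shrink as the quark mass is increased.

```python
import numpy as np

def exact_vs_expanded(m, a, r, p_r, J):
    """Compare the exact spinless Hamiltonians for scalar and 'intermediate'
    confinement, Eqs. (19) and (17), with their large-mass reductions,
    Eqs. (21) and (25).  Units: GeV, with r and 1/p in GeV^-1."""
    p2 = p_r ** 2 + (J / r) ** 2
    E = np.sqrt(p2 + m ** 2)
    W_r = np.sqrt(p_r ** 2 + m ** 2)
    scalar_exact = np.sqrt(p2 + (m + a * r) ** 2)
    scalar_red = E + a * r - a * p2 * r / (2 * m ** 2)
    inter_exact = np.sqrt((J / r) ** 2 + (W_r + a * r) ** 2)
    inter_red = E + a * r - a * J ** 2 / (2 * m ** 2 * r)
    return (scalar_exact, scalar_red), (inter_exact, inter_red)

# residuals between exact and reduced Hamiltonians fall rapidly with m;
# a = 0.2 GeV^2, r = 2 GeV^-1, p_r = 0.3 GeV, J = 2 are illustrative choices
for m in (2.0, 4.0, 8.0):
    (se, sr), (ie, ir) = exact_vs_expanded(m, 0.2, 2.0, 0.3, 2.0)
    print(f"m = {m}:  scalar {se - sr:+.5f}   intermediate {ie - ir:+.5f}")
```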
For two smooth turning points $`\mathrm{\Gamma }=\frac{1}{2}`$, and for two rigid walls $`\mathrm{\Gamma }=1`$. For the mixed case of one of each, $`\mathrm{\Gamma }=\frac{3}{4}`$. As shown by Langer , the classical angular momentum $`J`$ must be replaced by $`J+\frac{1}{2}`$ in the expression for the radial momentum $`p_r`$. In all cases considered here, the quantization integral can be written, or accurately approximated by, $$_r_{}^{r_+}p_r𝑑r=C_y_{}^{y_+}\frac{dy}{y}\sqrt{(y_+y)(yy_{})},$$ (38) where $`y`$ is either $`r`$ or $`r^2`$ and $`C`$ is a constant. This integral can be carried out to yield the the semi-classical quantization relation $$n+\mathrm{\Gamma }=\frac{C}{2}\left[y_++y_{}2\sqrt{y_+y_{}}\right].$$ (39) ### A Scalar confinement We first consider the scalar case because of its simplicity and its central role in this paper. The square scalar Hamiltonian, (21), with the light quark massless is $$H^2=p^2+a^2r^2.$$ (40) This is equivalent to the three-dimensional harmonic oscillator and its eigenvalues are well-known to be $$M^2=2a\left(J+2n+\frac{3}{2}\right),J,n=0,1,2,3,\mathrm{},$$ (41) where $`J`$ is now the angular momentum quantum number. To connect with the analytic solutions to the remaining confinement models we compute the semi-classical solution for this interaction. Semi-classical quantization starts with the separation of the momentum into angular and radial pieces, $`p^2=p_r^2+\frac{J^2}{r^2}`$, and hence $$p_r^2=M^2\frac{J^2}{r^2}a^2r^2.$$ (42) The classical turning points ($`p_r=0`$) satisfy $`r_+^2+r_{}^2`$ $`=`$ $`\left({\displaystyle \frac{M}{a}}\right)^2,`$ (43) $`r_+r_{}`$ $`=`$ $`{\displaystyle \frac{J}{a}},`$ (44) $`{\displaystyle \frac{a}{r}}\sqrt{(r_+^2r^2)(r^2r_{}^2)}`$ $`=`$ $`p_r.`$ (45) Comparing this last relation to Eq. (38), we read off $`C=\frac{a}{2}`$, and $`y=r^2`$, and by Eq. (39) with $`\mathrm{\Gamma }=\frac{1}{2}`$ and $`JJ+\frac{1}{2}`$, we find $$n+\frac{1}{2}=\frac{a}{4}\left[\frac{M^2}{a^2}\frac{2}{a}\left(J+\frac{1}{2}\right)\right],$$ (46) which yields $$M^2=2a\left[J+2n+\frac{3}{2}\right],$$ (47) identical to the exact solution (41). In Fig. 1 we show the Regge plot for pure scalar confinement. The dots represent the exact numerical solution by the variational method, for instance see the appendix in Ref. . The numerical solutions correspond to the unsquared Hamiltonian (19) with $`m=0`$. The lines are the analytic solution, Eq. (41) or (47). We note that states of even (or odd) $`J`$ are degenerate. This is unique among combinations of scalar and time-component vector potential confinement . It is important to note that the “ultra-relativistic” limit where the quark mass vanishes is in fact not ultra-relativistic for scalar confinement. From the Hamiltonian (40) with $`p^2=p_r^2+J^2/r^2`$, the circular orbit condition is $$\frac{H^2}{r}|_J=0,$$ (48) which implies a circular orbit radius of $$r_0^2=\frac{J}{a}.$$ (49) The circular velocity is then given by $$v_0=r_0\frac{H}{J}|_{r=r_0}=\frac{1}{\sqrt{2}}.$$ (50) The massless quark moves at a velocity less than unity because the scalar interaction contributes an effective mass of $`ar_0`$. ### B Time-component vector confinement The Hamiltonian (24), with $`m=0`$ and the replacement $`p^2=p_r^2+\frac{J^2}{r^2}`$, becomes $`p_r^2`$ $`=`$ $`(Mar)^2{\displaystyle \frac{J^2}{r^2}}`$ (51) $`=`$ $`\left(Mar{\displaystyle \frac{J}{r}}\right)\left(Mar+{\displaystyle \frac{J}{r}}\right).`$ (52) The first factor contains the classical turning points and the second has only distant zeros. 
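As an aside, the quantization scheme of Eqs. (37)-(39) can be checked by brute force on the scalar case just treated. The sketch below (illustrative only; the value of $`a`$ is an arbitrary choice) integrates $`p_r`$ of Eq. (42) numerically between the turning points, with the Langer replacement $`J\to J+\frac{1}{2}`$ and $`\mathrm{\Gamma }=\frac{1}{2}`$, and solves the resulting condition for $`M`$ at each $`(J,n)`$.

```python
# Sketch: brute-force semi-classical quantization for massless scalar confinement,
# H^2 = p^2 + a^2 r^2, compared with the closed form M^2 = 2a(J + 2n + 3/2).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

a = 0.2                                     # string tension in GeV^2 (illustrative)

def radial_action(M, J):
    """Integral of p_r between the classical turning points, Langer-corrected."""
    L2 = (J + 0.5) ** 2                     # (J + 1/2)^2 replaces J^2
    disc = M**4 - 4.0 * a**2 * L2
    if disc <= 0.0:
        return 0.0                          # no classically allowed region yet
    y_m = (M**2 - np.sqrt(disc)) / (2.0 * a**2)   # turning points in y = r^2
    y_p = (M**2 + np.sqrt(disc)) / (2.0 * a**2)
    pr = lambda r: np.sqrt(max(M**2 - L2 / r**2 - a**2 * r**2, 0.0))
    return quad(pr, np.sqrt(y_m), np.sqrt(y_p), limit=200)[0]

print(" J  n   M^2 (numerical)   2a(J+2n+3/2)")
for J in range(5):
    for n in range(3):
        target = np.pi * (n + 0.5)          # Eq. (37) with Gamma = 1/2
        M = brentq(lambda M: radial_action(M, J) - target, 1e-3, 20.0)
        print(f"{J:2d} {n:2d}   {M**2:14.6f}   {2*a*(J + 2*n + 1.5):12.6f}")
```

The numerical masses reproduce $`M^2=2a(J+2n+\frac{3}{2})`$, Eq. (47), to quadrature accuracy. We now return to the time-component vector case.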
To good approximation, we may use the zero condition $`Mar=J/r`$ from the first term in the second, and obtain $$p_r^2\frac{2Ja}{r^2}\left(r^2+\frac{M}{a}r\frac{J}{a}\right).$$ (53) This is of the form of our general phase-space integrand in Eq. (38) with $`C=\sqrt{2Ja}`$, $`y=r`$, where the turning points satisfy $`r_++r_{}`$ $`=`$ $`{\displaystyle \frac{M}{a}},`$ (54) $`r_+r_{}`$ $`=`$ $`{\displaystyle \frac{J}{a}}.`$ (55) The quantization condition Eq. (39) becomes $$n+\frac{1}{2}=\frac{\sqrt{2Ja}}{2}\left[\frac{M}{a}2\sqrt{\frac{J}{a}}\right].$$ (56) Solving for $`M^2`$, dropping the small squared radial excitation energy and making Langer’s replacement of $`J`$ by $`J+\frac{1}{2}`$, we find $$M^2=4a\left(J+\sqrt{2}n+\frac{1}{2}+\frac{1}{\sqrt{2}}\right).$$ (57) Fig. 2 shows the Regge spectrum of time component vector confinement. The semi-classical quantization method yields the correct slope, radial excitation energy, and even nearly the correct $`J=0`$ intercept. ### C Intermediate model From the intermediate model Hamiltonian, Eq. (17), the Regge spectrum can be exactly computed numerically, which we show in Fig. 3. The Regge trajectories are neither straight, nor equally spaced. The radial excitation energy is several times larger than the scalar confinement potential. A comparison of the intermediate and scalar Hamiltonians reveals that they coincide in the classical circular orbit limit. It is in radial excitation that the two models differ qualitatively. Of course, even the quantized $`n=0`$ radial state has some radial excitation. A semi-classical quantization can also be done in this case and yields a complicated transcendental relationship between $`M^2`$ and $`J`$. ### D String confinement In this subsection we find that the Regge structure of the confining string, with a massless quark at its end, resembles almost exactly scalar confinement once the energy is rescaled. This, despite the anomalous Regge trajectories of the “intermediate” model which was supposed to mimic the string. We will later discuss the reason for the occurrence. We begin with the string Lagrangian (8). The conserved quantities $`H`$ and $`J=\frac{L}{\omega }`$ are $`{\displaystyle \frac{J}{r}}`$ $`=`$ $`W_r\gamma _{}v_{}`$ (58) $`+`$ $`{\displaystyle \frac{ar}{2v_{}}}\left({\displaystyle \frac{\mathrm{arcsin}v_{}}{v_{}}}\sqrt{1v_{}^2}\right),`$ (59) $`H`$ $`=`$ $`W_r\gamma _{}+ar{\displaystyle \frac{\mathrm{arcsin}v_{}}{v_{}}},`$ (60) where the “radial energy” $`W_r=\sqrt{p_r^2+m^2}`$ was defined in Eq. (14) and $`v_{}=\omega r`$. For circular orbits in the massless quark limit the end of the string approaches the speed of light ($`v_{}1`$). Since this is the limit we are interested in for the Regge structure we set $`v_{}=1`$ in the string quantities in Eq. (59) and (60) to obtain $`{\displaystyle \frac{J}{r}}`$ $`=`$ $`W_r\gamma _{}v_{}+{\displaystyle \frac{a\pi r}{4}},`$ (61) $`H`$ $`=`$ $`W_r\gamma _{}+{\displaystyle \frac{a\pi r}{2}},`$ (62) We do not set $`v_{}=1`$ in the quark terms since a delicate limiting process occurs. In this limit, all of the angular momentum and energy resides in the string and all of the radial momentum is carried by the quark. Next, we consider the difference of the squares of $`H`$ and $`J/r`$ $$H^2\frac{J^2}{r^2}=W_r^2+\frac{a\pi r}{2}W_r\gamma _{}+\frac{3}{4}\left(\frac{a\pi r}{4}\right)^2.$$ (63) Using Eq. 
(61) to eliminate $`W_r\gamma _{}`$, after a little simplification we find $$H^2=p_r^2+\frac{J^2}{r^2}+\frac{a\pi J}{2}+\left(\frac{a\pi r}{4}\right)^2,$$ (64) where $`W_r^2=p_r^2`$ in the massless limit. If we define $`H_0^2`$ $`=`$ $`H^2{\displaystyle \frac{a\pi J}{2}},`$ (65) $`a_0`$ $`=`$ $`{\displaystyle \frac{\pi a}{4}},`$ (66) the square of the string Hamiltonian appears to be a harmonic oscillator $$H_0^2=p_r^2+\frac{J^2}{r^2}+a_0^2r^2,$$ (67) which is very similar in form to the squared scalar confinement Hamiltonian (40). The squared string Hamiltonian in Eq. (64) has a critical difference from the harmonic oscillator, as we now demonstrate. The circular orbit occurs where $$\frac{H^2}{r}|_J=0,$$ (68) which implies that the circular orbit radius is $$r_0^2=\frac{4J}{a\pi }.$$ (69) The associated circular orbit velocity is $$v_0=r_0\omega =r_0\frac{H}{J}|_{r=r_0}=1.$$ (70) Thus, as we mentioned previously, the massless quark moves at the speed of light in a circular orbit. For radial excitation the quark moves in the effective potential of Eq. (64). From the limiting form (61) of the angular momentum (59), we see that for radial motion the radius cannot exceed $`r_0`$ because $`W_r\gamma _{}v_{}`$ cannot be negative. The $`r=r_0`$ coordinate represents a horizon or “impenetrable barrier” and the quark moves in the “half harmonic oscillator” potential shown in Fig. 4. The semi-classical quantization of the string motion is equivalent to a half harmonic oscillator shifted by an amount $`\frac{a\pi J}{2}=2a_0J`$. The half harmonic quantization condition is $$\pi \left(n+\frac{3}{4}\right)=\frac{a_0}{2}_y_{}^{y_0}\frac{dy}{y}\sqrt{(y_+y)(yy_{})},$$ (71) where $`y=r^2`$, $`y_0=r_0^2`$, and $`\mathrm{\Gamma }=\frac{3}{4}`$, corresponding to one smooth turning point. The integral is not precisely one-half of the full harmonic oscillator integral but the difference vanishes for large $`J`$. The result is $$\pi \left(n+\frac{3}{4}\right)=\frac{\pi }{8a_0}\left[M_0^22a_0\left(J+\frac{1}{2}\right)\right],$$ (72) or $$M_0^2=2a_0\left(J+4n+\frac{3}{4}+\frac{1}{2}\right).$$ (73) Finally, we rewrite Eq. (73) in terms of $`M^2=M_0^2+\frac{a\pi J}{2}`$ and $`a=\frac{4a_0}{\pi }`$ to obtain $$M^2=a\pi \left(J+2n+\frac{7}{4}\right).$$ (74) We observe that the combination of the shift and the half oscillator reproduces the $`J+2n`$ pattern of excitation seen in the harmonic oscillator, and hence in scalar confinement. We can check the intercept ($`J=0`$) by directly quantizing the $`s`$-wave states. From Eq. (60) with $`\gamma _{}=1`$, we have $$H=p_r+arM.$$ (75) The quantization integral, $$\pi \left(n+\frac{1}{2}\right)=_0^{M/a}𝑑r(Mar)=\frac{M^2}{a}\frac{M^2}{2a},$$ (76) directly yields $$M^2=\pi a\left(2n+1+\frac{1}{2}\right),$$ (77) where the $`\frac{1}{2}`$ is the Langer correction for the radial equation. The result indicates the 3D harmonic oscillator. We conclude that the true intercept that should appear in Eq. (74) ought to lie between $`\frac{3}{2}`$ and $`\frac{7}{4}`$. In Fig. 5 we show the exact numerical string Regge excitations with quark mass $`m=0`$. The numerical solutions of Eqs. (59) and (60) have been discussed earlier . The lines are the analytic solution (74) but with intercept $`\frac{3}{2}`$ from Eq. (77). Similar solutions obtained from different points of view have been obtained previously . ## VI Conclusions and Summary The concept of scalar confinement has been an important ingredient in hadron model building for over two decades. 
Its primary motivation was the resulting pure Thomas type spin-orbit interaction which partially cancels the vector type short range spin-orbit contributions. Despite its phenomenological success, scalar confinement has always had an uncertain relationship with fundamental theory. As pointed out by Buchmüller , the desired spin terms follow if the color magnetic field vanishes in the quark rest frame. This situation assumes no interaction with the quark color magnetic moment and occurs naturally in the usual color electric flux tube expected from QCD. This observation originally was proposed to justify the use of scalar confinement . We emphasize here that this does not imply that the scalar potential follows from QCD, only that they share a common spin-orbit interaction. In this paper we have demonstrated that a four-vector confinement interaction we found previously is equivalent to scalar confinement. This vector type interaction bears a close resemblance to the QCD string, although there are significant differences. We have primarily considered here a class of confinement models that share the same Thomas spin dependence. Our comparison of scalar and string/flux tube confinement has shown some interesting differences and similarities even with spinless quarks. We introduced an intermediate model that has aspects of both scalar confinement and the QCD string. In this intermediate model the energy depends only on the transverse quark velocity as expected in a straight string model. The interaction energy is effectively concentrated at the quark as in scalar potential interaction. The spin independent relativistic corrections of scalar and string confinement differ, as has been known for some time . The relativistic corrections of the intermediate model are as if an extra transverse mass $`ar`$ were concentrated at the quarks position. In the string case this same mass is distributed along the string. It is in the massless limit where interesting distinctions arise. For pure linear scalar confinement the meson mass is exactly given by $`M^2=2a(J+2n+3/2)`$, where $`J`$ and $`n`$ are the rotational and radial quantum numbers. The result, shown on the Regge plot in Fig. 1, is a series of straight lines with an excitation pattern $`J+2n`$. That is, there are degenerate mass towers of states of even or odd parities. The (laboratory frame) time-component vector confinement again produces linear Regge trajectories, shown in Fig. 2, but with no tower structure, owing to the excitation pattern $`J+\sqrt{2}n`$ with incommensurate contributions from the rotational and radial quantum numbers. Although one might expect that QCD, being a vector interaction like QED, would have a time-component interaction, it is evidently not time-component in the laboratory frame. This is precisely because the QCD field in which the quark moves is not chromoelectrostatic (purely chromoelectric and time-independent in the laboratory frame). Instead, the QCD field is dynamical because the quark drags a chromoelectric flux tube along with it as it moves. In this respect there are no “test charges” in QCD. The QCD field is purely chromoelectric in its rest frame, leading to time-component vector interaction in the quark’s rest frame, which we have shown is mathematically equivalent to a scalar interaction. Neglect of the spatial distribution of the QCD field energy thus leads directly to scalar confinement. The string/flux tube picture is the result of taking into account the distribution of the field energy and momentum. 
The intermediate model has a Regge structure very different from any of the other models studied here, with somewhat curved trajectories and an uneven pattern of radial excitation, as shown in Fig. 3. Evidently, the modification of the interaction that removes interaction contributions to the radial momentum but leaves all the interaction energy and momentum at the quark’s position makes the intermediate model less, rather than more, string-like in its consequences. The string Regge spectroscopy, Fig. 5, again is similar to that of scalar confinement, except with a different Regge slope. Due to the distribution of energy along the string, the quark now moves at the speed of light in the massless limit. This creates a horizon barrier so the quark appears to move in a half oscillator. The net effect is to give an energy spectrum $`M^2=\pi a(J+2n+3/2)`$ with the same tower of states structure as in the scalar case. Though the primary difference between the two theories is the manner in which the energy and momentum of the QCD field are distributed, the close relationship between their Regge structures appears to be accidental. We have pointed out a close, but not exact, relationship between scalar confinement and the QCD string. One might wonder whether one could change the string tension and make the two even more similar. The answer lies in the static potential, which determines the low-lying states. Presumably, because they both arise from QCD, the string tension and the short-range potential are correlated, as should be confirmed by experiment and lattice simulation. One cannot redefine the string tension arbitrarily. Even were scalar confinement and the string/flux tube to yield the same Regge slope, their static potential and semi-relativistic reductions are different. ## Acknowledgments This work was supported in part by the US Department of Energy under Contract No. DE-FG02-95ER40896.
no-problem/0001/nucl-th0001003.html
ar5iv
text
# Angular momentum projected analysis of Quadrupole Collectivity in {^{30,32,34}}𝑀⁢𝑔 and {^{32,34,36,38}}𝑆⁢𝑖 with the Gogny interaction. ## Abstract A microscopic angular momentum projection after variation is used to describe quadrupole collectivity in $`{}_{}{}^{30,32,34}Mg`$ and $`{}_{}{}^{32,34,36,38}Si`$. The Hartree-Fock-Bogoliubov states obtained in the quadrupole constrained mean field approach are taken as intrinsic states for the projection. Excitation energies of the first $`2^+`$ states and the $`B(E2,0^+2^+)`$ transition probabilities are given. A reasonable agreement with available experimental data is obtained. It is also shown that the mean field picture of those nuclei is strongly modified by the projection. Neutron-rich nuclei with $`N20`$ are spectacular examples of shape coexistence between spherical and deformed states. Experimental evidence for an island of deformed nuclei near $`N=20`$ has been found in the fact that $`{}_{}{}^{31}Na`$ and $`{}_{}{}^{32}Na`$ are more tightly bound than could be explained with spherical shapes . Additional support comes from the unusually low excitation energy of the $`2^+`$ state in $`{}_{}{}^{32}Mg`$ . A large ground state deformation has also been inferred from intermediate energy Coulomb excitation studies in $`{}_{}{}^{32}Mg`$. Quadrupole collectivity of $`{}_{}{}^{3238}Si`$ has also been studied in . Very recently this region has been the subject of detailed experimental spectroscopic studies at ISOLDE . From a theoretical point of view, deformed ground states have been predicted for nuclei with $`N20`$ . In those calculations the rotational energy correction is the essential ingredient for the stabilization of the deformed configuration. On the other hand, some calculations have predicted a spherical ground state for $`{}_{}{}^{32}Mg`$ but it has also been found that deformation effects may appear as a result of dynamical correlations. Some shell model calculations, even with restricted configuration spaces, have been able to explain the increased quadrupole collectivity at $`N=20`$ as a result of neutron $`2p2h`$ excitations into the $`fp`$ shell, see for example . Recently, a mean field study has explored the suitability of several Skyrme parameterizations in the description of this and other regions of shape coexistence. The mean field description of nuclei is usually a good starting point as it provides a qualitative, and in many cases quantitative, understanding of the nuclear properties. This is the case when the mean field solution corresponds to a well defined minimum. However, in regions of shape coexistence where two minima are found at a comparable energy, the correlation effects stemming from the restoration of broken symmetries and/or collective motion can dramatically alter the energy landscape thus changing the mean field picture. For this reason, we have included in our mean field calculations the effects related to the restoration of the broken rotational symmetry by performing, for the first time with the Gogny force, angular momentum projected calculations of the energies and other relevant quantities. The reason for choosing to restore rotational symmetry is that the zero point energy associated with this restoration is somehow proportional to deformation and ranges, in this region, from a few KeV for nearly spherical configurations to several MeV for well deformed ones. This energy range is comparable to the energy differences found between different shapes in nuclei of this region. 
Therefore, in addition to the mean field results, both angular momentum projected $`I=0`$ and $`I=2`$ surfaces were computed for the nuclei $`{}_{}{}^{30,32,34}Mg`$ and $`{}_{}{}^{32,34,36,38}Si`$ and angular momentum projected transition probabilities $`B(E2,0^+2^+)`$ among different configurations. The calculation proceeds in two steps: in the first one we perform a set of constrained Hartree- Fock- Bogoliubov (HFB) calculations using the D1S parameterization of the Gogny force and the mass quadrupole operator $`\widehat{Q}_{20}=z^2\frac{1}{2}(x^2+y^2)`$ as the constraining operator in order to obtain a set of “intrinsic” wave functions $`\varphi (q_{20})`$. The self-consistent symmetries imposed in the calculation were axial symmetry, parity and time reversal. The two body kinetic energy correction was fully taken into account in the minimization process. On the other hand, the Coulomb exchange term was replaced by the local Slater approximation and neglected in the variational process. The Coulomb pairing term as well as the contribution to the pairing field from the spin-orbit interaction were neglected. A harmonic oscillator (HO) basis of 10 major shells was used to expand the quasi-particle operators and the two oscillator lengths defining the axially symmetric HO basis were kept equal for all the values of the quadrupole moment. The reason for choosing the basis this way was that we wanted a basis closed under rotations (i.e. an arbitrary rotation of the basis elements always yields wave functions that can be solely expressed as linear combinations of the elements of the basis) in order to avoid the technical difficulties discussed in when a non-closed basis is used. In the second step we compute the angular momentum projected energy for each intrinsic wave function $`\varphi (q_{20})`$ obtaining in this way a set of energy curves $`E_I(q_{20})`$ for each value of $`I=0,2,\mathrm{}`$ The minima of each curve provide us with the energies and wave functions of the $`I=0^+,2^+,\mathrm{}`$ yrast and isomeric states. The theoretical background for angular momentum projection is very well described in and therefore we will not dwell on the details here. However, a few remarks concerning the peculiarities of our calculation are in order: first, and due to the axial symmetry imposed in the HFB wave functions, the angular momentum projected energy is given by $$E_I(q_{20})=\frac{_0^{\frac{\pi }{2}}𝑑\beta sen\beta d_{00}^I(\beta )\varphi (q_{20})\widehat{H}^{}[\rho _\beta (\stackrel{}{r})]e^{i\beta \widehat{J}_y}\varphi (q_{20})}{_0^{\frac{\pi }{2}}𝑑\beta sen\beta d_{00}^I(\beta )\varphi (q_{20})e^{i\beta \widehat{J}_y}\varphi (q_{20})}$$ (1) with $`\widehat{H}^{}[\rho _\beta (\stackrel{}{r})]=\widehat{H}[\rho _\beta (\stackrel{}{r})]\lambda _\pi (\widehat{N}_\pi Z)\lambda _\nu (\widehat{N}_\nu N)`$. The term $`\lambda _\pi (\widehat{N}_\pi Z)\lambda _\nu (\widehat{N}_\nu N)`$ is included to account for the fact that the projected wave function does not have the right number of particles on the average. The previous term would correspond to the application of first order perturbation theory if the chemical potentials used were the derivatives of the projected energy with respect to the number of particles. In our calculations we have simply used the chemical potentials obtained in the HFB theory<sup>1</sup><sup>1</sup>1This recipe has been previously used in the context of angular momentum projection and in Generator Coordinate Method (GCM) calculations . 
In both cases it has been found that the present recipe works very well.. This simplification is justified by the fact that the deviations induced in the number of particles due to the angular momentum projection are always small and so are their effects on the projected energies. For the computation of the matrix elements of the rotation operator in a HO basis we have used the results of ref. . Another relevant point to be discussed is the prescription to use for the density dependent part of the Gogny force. In the calculation of the energy functional $`E[\varphi ]=\varphi \left|\widehat{H}\right|\varphi `$ the density appearing in the density dependent part of the force is simply $`\rho (\stackrel{}{r})=\varphi \left|\widehat{\rho }\right|\varphi `$ rendering the energy a functional of the density and the pairing tensor but with a dependence on the density different from the canonical quadratic one of the standard HFB theory. On the other hand, the energy overlap $`E[\varphi ,\varphi ^{}]=\varphi \left|\widehat{H}\right|\varphi ^{}/\varphi |\varphi ^{}`$ can be evaluated using the extended Wick theorem. The final expression is the same as the HFB functional $`E[\varphi ]`$ but replacing the density matrix by $`\overline{\rho }_{ij}=\varphi \left|c_j^+c_i\right|\varphi ^{}/\varphi |\varphi ^{}`$ and the pairing tensor by $`\overline{\kappa }_{ij}=\varphi \left|c_jc_i\right|\varphi ^{}/\varphi |\varphi ^{}`$ and $`\overline{\stackrel{~}{\kappa }}_{ij}=\varphi \left|c_ic_j\right|\varphi ^{}/\varphi |\varphi ^{}`$. As a consequence, it seems rather natural to use the density $`\overline{\rho }\left(\stackrel{}{r}\right)=\varphi \left|\widehat{\rho }\right|\varphi ^{}/\varphi |\varphi ^{}`$ in the evaluation of the density dependent term of the force in $`E[\varphi ,\varphi ^{}]`$. In our case, this leads to the introduction of a density dependent term depending on $`\overline{\rho }(\stackrel{}{r},\beta )=\varphi \left|\widehat{\rho }e^{i\beta \widehat{J}_y}\right|\varphi /\varphi \left|e^{i\beta \widehat{J}_y}\right|\varphi `$. This density dependence seems to yield to bizarre consequences like having a non-hermitian and non rotationally invariant hamiltonian. These apparent inconsistencies can be overcome if we think of a density dependent force, not as an operator to be added to the kinetic energy in order to obtain a hamiltonian, but rather as a device to get energy functionals like $`E[\varphi ]`$ and $`E[\varphi ,\varphi ^{}]`$ with the property of yielding an energy that is a real quantity and independent of the orientation of the reference frame. The density dependence just mentioned fulfills these two requirements as can be readily checked. In addition, when the intrinsic wave function is strongly deformed and the Kamlah expansion can be used to obtain an approximate expression for the projected energy (the cranking model) the above density dependence yields the correct expression for the angular velocity $`\omega `$ including the “rearrangement” term . A more elaborated argumentation in favor of the density dependence just mentioned will be given elsewhere. As an example of the results obtained we show in Fig. 1 the HFB and projected energies as a function of $`q_{20}`$ for the nucleus $`{}_{}{}^{34}Mg`$. In contrast with the HFB result, the $`I=0`$ energy surface shows two pronounced minima in the prolate and oblate side which are rather close to each other in energy being the prolate minimum slightly deeper than the oblate one. 
Therefore, it is difficult to assign a given character to the $`I=0`$ state until a configuration mixing calculation is performed, although it is very likely that the predominant configuration for the $`I=0`$ state is going to be the prolate one. For $`I=2`$ there is a well developed prolate minimum. Let us also mention that for configurations with a $`q_{20}`$ value close to zero (i.e. close to the spherical configuration $`q_{20}=0`$ which is a pure $`I=0`$ state) it is very difficult to compute the $`I=2`$ projected energy due to numerical instabilities related to the smallness of $`\varphi (q_{20})\left|\widehat{P}_{00}^I\right|\varphi (q_{20})`$. In the inset of Fig. 1 we have plotted the energy difference $`E_{ROT}(I)=E_{HFB}E_I`$ as a function of $`q_{20}`$ for $`I=0`$ (full line) in order to compare it with the rotational energy correction $`E_{ROT}^{App}=J_y^2/𝒥_Y`$ often used in mean field calculations (dashed line). The Yoccoz moment of inertia $`𝒥_Y`$ has been computed, as it is usually done, in an approximate way by neglecting the two body quasiparticle interaction term of the hamiltonian (the same kind of approximation yields to the Inglis-Belyaev moment of inertia instead of the Thouless-Valatin one). We notice that $`E_{ROT}^{App}`$ agrees qualitatively well with $`E_{ROT}(0)`$ for $`q_{20}`$ values greater than $`100fm^2`$ and smaller than $`50fm^2`$ as expected: these are regions of strong deformation where the validity conditions for $`E_{ROT}^{App}`$ to be a good approximation to $`E_{ROT}`$ (Kamlah expansion) are satisfied. On the other hand, the behavior of $`E_{ROT}^{App}`$ is completely wrong in the inner region. One prescription to extend the rotational formula to weakly deformed states is the one of based on results with the Nilsson model. The prescription multiplies $`E_{ROT}^{App}`$ by a function of $`J_y^2`$ with the property of going to zero (one) for $`J_y^2`$ going to zero (infinity). The resulting rotational energy is also depicted in the inset of Fig. 1 (dotted line) and, although the qualitative agreement with the exact result improves somewhat, the quantitative one is far from satisfactory in the nucleus considered. The rotational energy correction formula is based on the assumption that the quantity $`h(\beta )=\widehat{H}e^{i\beta \widehat{J}_y}/e^{i\beta \widehat{J}_y}`$ can be very well approximated by a quadratic function $`h(\beta )h(0)+\frac{1}{2}h^{\prime \prime }(0)\beta ^2`$ where $`h^{\prime \prime }(0)`$ is related to the exact Yoccoz moment of inertia by the expression $`𝒥_Y=\widehat{J}_y^2^2/h^{\prime \prime }(0)`$. It is well known that this assumption is justified for deformed heavy nuclei. However, we have checked that it is not the case for the nuclei studied here even for the largest deformations considered. Therefore, we conclude that the exact restoration of the rotational symmetry is fundamental for a qualitative and quantitative description of the rotational energies in these light nuclei. The main outcomes of the calculation are summarized in Fig. 2 where we show, on the left hand side panel, the HFB potential energy surfaces for Mg and Si isotopes as a function of the mass quadrupole moment. These surfaces have been shifted accordingly to fit them in the plot. We observe that only in the nuclei $`{}_{}{}^{34}Mg`$ and $`{}_{}{}^{38}Si`$ we obtain a prolate minimum at $`\beta _2`$ deformations of 0.4 and 0.35 respectively. For the other nuclei, the minimum corresponds to the spherical configuration. 
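The projected curves discussed here are, according to Eq. (1), ratios of $`\beta `$-quadratures weighted by $`d_{00}^I(\beta )=P_I(\mathrm{cos}\beta )`$. As an illustration of that quadrature only, the following sketch replaces the Gogny-HFB kernels by toy kernels built from an axial, time-reversal-even intrinsic state containing even-$`J`$ rigid-rotor components, for which the projected energies are known in advance; all numbers are placeholders.

```python
# Sketch of the angular-momentum-projection quadrature of Eq. (1) with TOY kernels.
# The intrinsic state is a superposition of even-J rotor eigenstates, so the exact
# projected energies E_I = I(I+1)/(2*MOI) are known and can be used as a check.
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import eval_legendre

MOI = 6.0                                     # toy moment of inertia (hbar^2 / MeV)
Js  = np.arange(0, 13, 2)                     # even J only (axial + time-reversal symmetry)
wJ  = np.exp(-0.08 * Js * (Js + 1.0))         # toy weights |c_J|^2 in the intrinsic state
wJ /= wJ.sum()
EJ  = Js * (Js + 1.0) / (2.0 * MOI)           # rotor energies carried by each component

def norm_kernel(beta):                        # <phi| exp(-i beta J_y) |phi>
    return sum(w * eval_legendre(int(J), np.cos(beta)) for J, w in zip(Js, wJ))

def ham_kernel(beta):                         # <phi| H exp(-i beta J_y) |phi>
    return sum(w * E * eval_legendre(int(J), np.cos(beta)) for J, w, E in zip(Js, wJ, EJ))

def projected_energy(I, npts=48):
    """Eq. (1): ratio of beta-quadratures on [0, pi/2] with d^I_00(beta) = P_I(cos beta)."""
    x, gw = leggauss(npts)
    beta = 0.25 * np.pi * (x + 1.0)           # map [-1, 1] -> [0, pi/2]
    gw   = 0.25 * np.pi * gw
    dI   = eval_legendre(I, np.cos(beta))
    num  = np.sum(gw * np.sin(beta) * dI * ham_kernel(beta))
    den  = np.sum(gw * np.sin(beta) * dI * norm_kernel(beta))
    return num / den

print("intrinsic <H> =", round(float(np.dot(wJ, EJ)), 4), "MeV")
for I in (0, 2, 4):
    print(f"E_{I} (projected) = {projected_energy(I):8.4f} MeV,  rotor value = {I*(I+1)/(2*MOI):8.4f} MeV")
```

The projected $`I=0`$ energy falls below the intrinsic average, and the difference plays the role of the rotational energy correction $`E_{ROT}(0)`$ discussed above. Returning to the potential energy surfaces of Fig. 2: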
For all the nuclei considered the energy curves are very flat around the corresponding minimum indicating that further correlations can substantially modify the energy landscape and therefore the conclusions obtained from the raw HFB results. On the middle and right hand side panels of Fig. 2 we show the angular momentum projected $`I=0`$ and $`I=2`$ potential energy surfaces for all the nuclei considered. These surfaces have also been shifted to fit them in the plot. For $`I=0`$, apart from the nucleus $`{}_{}{}^{34}Mg`$ that shows a rather clear prolate minimum, the general trend for the ground state is to show shape coexistence. For $`I=2`$ we have prolate minima for $`{}_{}{}^{3234}Mg`$ and oblate minima for $`{}_{}{}^{3234}Si`$ whereas the other nuclei are examples of shape coexistent structures. The results just shown indicate that, for a quantitative description of the ground and $`2^+`$ states in all these nuclei, a configuration mixing calculation (GCM) using the mass quadrupole moment as generating coordinate is needed. In spite of this, we present in Table 1 the $`0^+2^+`$ energy differences for the four possible configurations with the $`0^+`$ in the prolate (P) or oblate (O) minimum and the $`2^+`$ also in the P or O minimum. The energies written in boldface correspond to the predictions obtained by strictly using the criterion of the absolute minimum of $`E_I(q_{20})`$ to assign the $`0^+`$ and $`2^+`$ states. Comparison of these predictions with the experimental results indicates a reasonable agreement except for $`{}_{}{}^{36}Si`$. The inclusion of configuration mixing will presumably improve the agreement as it always yields to a mixed configuration with an energy lower than the energies of the states being mixed. Therefore, if the $`0^+`$configurations strongly mix (shape coexistence) but the $`2^+`$ones do not ( there is a well established minima) the $`0^+2^+`$ energy difference will increase whereas it will decrease if the opposite situation takes place. On the other hand, if configuration mixing is important for both the $`0^+`$ and $`2^+`$ states anything can happen to the excitation energy. Therefore, we expect that configuration mixing is going to increase the excitation energies in all cases except in $`{}_{}{}^{30}Mg`$ and $`{}_{}{}^{36,38}Si`$ where the behavior is unpredictable. In Table 2 we present the results obtained for the $`B(E2,0^+2^+)`$ transition probabilities for the four possible combinations. As in the previous table, the results obtained by choosing for the $`0^+`$ and $`2^+`$ states the ones corresponding to the absolute minima of the projected energies are written in boldface. For the nuclei $`{}_{}{}^{32}Mg`$ and $`{}_{}{}^{34}Mg`$ we obtain very collective values for the $`B(E2)`$ which, in the case of $`{}_{}{}^{32}Mg`$, are in rather good agreement with the experiment. For both nuclei, we expect a contamination of the ground state wave function by the oblate $`0^+`$ state that will yield to a reduction of the $`B(E2)`$ values (see column two for the $`B(E2,0_O^+2_P^+)`$) that will bring the theoretical predictions in closer agreement with the experimental data. For the $`{}_{}{}^{32}Si`$, $`{}_{}{}^{34}Si`$ and $`{}_{}{}^{38}Si`$ isotopes we underestimate the $`B(E2)`$ values but, presumably, admixtures of the $`0_P^+2_P^+`$ transition will help to bring the theoretical results in closer agreement with the experiment, specially for the $`{}_{}{}^{38}Si`$ nucleus. 
Concerning $`{}_{}{}^{36}Si`$ we can only conclude that a strong $`0_P^+2_P^+`$ component has to be present in the evaluation of the $`B(E2).`$ In Table 3 the HFB and projected ground state energies for the nuclei under consideration are shown and compared to the experimental data taken from . The inclusion of the zero point energy stemming from the restoration of the rotational symmetry clearly improves the theoretical description of the binding energies. In conclusion, we have computed several properties of neutron rich $`Mg`$ and $`Si`$ isotopes using the HFB theory and exact angular momentum projection. In the calculations the finite range density dependent Gogny force has been used. The results for the excitation energies $`0^+2^+`$ and $`B(E2,0^+2^+)`$ transition probabilities obtained from the angular momentum projected wave functions are in reasonable agreement with the experiment. The analysis of the projected energy surfaces and also the discrepancies found between theory and experiment indicate that configuration mixing is an important ingredient in these nuclei. Work is in progress in order to incorporate such configuration mixing. One of us (R. R.-G.) kindly acknowledges the financial support received from the Spanish Instituto de Cooperacion Iberoamericana (ICI). This work has been supported in part by the DGICyT (Spain) under project PB97/0023.
no-problem/0001/cond-mat0001191.html
ar5iv
text
# Scaling exponents for Barkhausen avalanches in polycrystals and amorphous ferromagnets ## Abstract We investigate the scaling properties of the Barkhausen effect, recording the noise in several soft ferromagnetic materials: polycrystals with different grain sizes and amorphous alloys. We measure the Barkhausen avalanche distributions and determine the scaling exponents. In the limit of vanishing external field rate, we can group the samples in two distinct classes, characterized by exponents $`\tau =1.50\pm 0.05`$ or $`\tau =1.27\pm 0.03`$, for the avalanche size distributions. We interpret these results in terms of the depinning transition of domain walls and obtain an expression relating the cutoff of the distributions to the demagnetizing factor which is in quantitative agreement with experiments. The Barkhausen noise is an indirect measure of complex microscopic magnetization processes and is commonly employed as a tool to investigate ferromagnetic materials . In recent years, the interest for this phenomenon has grown considerably due to the connections with disordered systems and non-equilibrium critical phenomena. Experiments have shown that the histogram of Barkhausen jump (avalanche) sizes follows a power law distribution , suggesting the presence of an underlying critical point . This hypothesis implies that the statistical properties of the noise should be described by universal scaling laws, with critical exponents that are independent of the material microstructure, at least for some class of materials. However, the exact nature of the critical behavior is still debated. Several authors have analyzed the dynamics of flexible domain walls in random media relating the Barkhausen exponents to the scaling expected at the depinning transition. Other theoretical explanations involve a critical point tuned by the disorder in the the framework of disordered spin models . The experiments reported in the literature can hardly resolve the theoretical issues since the measured exponents span a relatively wide range and it is difficult to confirm whether universality holds. In particular, there has been no extensive and systematic measurement of critical exponents in different materials with homogeneous and controlled experimental conditions and reliable statistics. For example, experimental evidence of universality has recently been reported for acoustic emission avalanches recorded during martensitic transformations, by analyzing in detail the behavior of several alloys with different compositions . The precise dependence of the Barkhausen characteristic sizes on experimentally tunable parameters is still a debated question and a complete agreement between theory, simulations and experiments is still lacking. Understanding this point is crucial in order to link the material microstructure to the noise properties, or conversely to use the noise to obtain information on the structure of the material. In the past, this problem, which has important technological relevance, has been mainly addressed using phenomenological models where the material properties (i.e. grain size , internal stresses ) are accounted for by effective fitting parameters. The main limit of this approach is that there is no systematic way to implement the program, without a precise understanding of the microscopic dynamics. 
In this letter, we report experimental data for a large set of materials suggesting the existence of two distinct universality classes and show that the results are in quantitative agreement with the theory of domain wall depinning transition. We perform Barkhausen noise measurements in six different ferromagnetic materials under similar experimental conditions, averaging the distributions over a large number ($`10^510^6`$) of events, carefully testing the effect of the magnetic field rate on the exponents. The cutoff of the distributions is tuned by the demagnetizing factor and defines two new critical exponents, which we measure for two materials belonging to the different classes. Using scaling arguments, we predict values for these exponents that are in good agreement with experiments. We record the Barkhausen noise using standard inductive methods, described in details in Refs. . A long solenoid provides an homogeneous low frequency triangular driving field, while a secondary pickup around the sample cross section gets the induced flux. The solenoid is 60 cm long, with a value of 1450 turns/meter, ensuring an homogeneous field up to 55 cm long samples with peak amplitude of about 150 A/m. The pickup is made of 50 isolated copper turns, wounded within 1 mm. Such a small width is required to avoid spurious effects due to demagnetizing fields. All the measurements are performed only in the central part of the hysteresis loop around the coercive field, where domain wall motion is the relevant magnetization mechanism . We take special care to reduce excess external noise during the measurements of avalanches distributions, as the evaluation of critical exponents is strongly affected by spurious noise. In this respect, the most appropriate cutoff frequency of the low pass pre-amplifier filter is chosen in the 3-20 kHz range, roughly half of the sampling frequency, as usual in noise measurements. The analysis of Barkhausen avalanche distribution is performed following the procedure discussed in Ref.. We impose a reference level for $`v_r`$ for the signal $`v(t)`$, chosen above the background noise. The duration $`T`$ of the Barkhausen avalanches is defined as the interval within two successive intersections of the signal with the $`v=v_r`$ line. The avalanche size $`s`$ is calculated as the integral of the signal between the same points. We observe that the avalanche distributions follow a power law $$P(s)=s^\tau f(s/s_0),P(T)=T^\alpha g(T/T_0),$$ (1) where $`s_0`$ and $`T_0`$ indicate the position of the cutoff to the power law behavior. The critical exponents result to be independent of the reference level for a reasonable range of $`v_r`$ . We employ several different soft magnetic materials, both polycrystalline and amorphous: an Fe-Si 7.8 wt.% strip (30 cm $`\times `$ 0.5 cm $`\times `$ 60 $`\mu `$m) produced by plan flow casting, annealed several times around 950C to obtain grains of average dimension of 25 $`\mu `$m; two strips of Fe-Si 6.5 wt.% (30 cm $`\times `$ 0.5 cm $`\times `$ 45 $`\mu `$m), one annealed for 2h at 1200C, with grains of 160 $`\mu `$m, and the other annealed for 2h at 1050C, with grains of 35 $`\mu `$m . The amorphous samples have composition of the type Fe<sub>x</sub>Co<sub>85-x</sub>B<sub>15</sub> and we employ Fe<sub>21</sub>Co<sub>64</sub>B<sub>15</sub> as cast (20 cm $`\times `$ 1 cm $`\times `$ 22 $`\mu `$m), Fe<sub>64</sub>Co<sub>21</sub>B<sub>15</sub> as cast (28 cm $`\times `$ 1 cm $`\times `$ 23 $`\mu `$m). 
With these highly magnetostrictive alloys ($`\lambda _s3050\times 10^6`$) a tensile stress of $`\sigma 100`$ MPa is applied during the measurement. The applied stress is found to enhance the signal-noise ratio, reducing biases in the distributions, but does not change the exponents . A partially crystallized Fe<sub>64</sub>Co<sub>21</sub>B<sub>15</sub> (22 cm $`\times `$ 1 cm $`\times `$ 23 $`\mu `$m) is also employed, with annealing for 30 min at 350C and then for 4h at 300C under an applied tensile stress of 500 MPa. This induces the formation of $`\alpha `$-Fe crystals of about 50 nm, with a crystal fraction of $`5\%`$ . In Fig. 1a we show the avalanche size distribution, obtained for the smallest available magnetic field rates ($`f`$ = 3-5 mHz). We clearly see that the data can be grouped in two universality classes with $`\tau =1.50\pm 0.05`$ and $`\tau =1.27\pm 0.03`$. The first class includes all the Si-Fe polycrystals and the partially crystallized amorphous alloy, while the amorphous alloys under stress belong to the second class. For the materials in the first class, we observed a linear decrease of the exponents on the frequency $`f`$ of the external magnetic field, in agreement with earlier findings . The material in the second class do not show any noticeable dependence of the exponents on the field rate. We note that $`\tau 1.3`$, independent of the frequency, was previously measured in Perminvar . Next, we measure the distribution of avalanche durations (see Fig. 1b) and find $`\alpha =2.0\pm 0.2`$ and $`\alpha =1.5\pm 0.1`$ for the two classes, with a quite large error bar due to the limited range of scaling, and the presence of unavoidable excess external noise at low durations. Also in this case, $`\alpha `$ decreases linearly with $`f`$ for the materials belonging to the first class. The scaling of the cutoff of Barkhausen avalanche distributions has been the object of an intense debate in the literature . In Ref. the control parameter was identified with the demagnetizing factor $`k`$. We thus measure the Barkhausen avalanche distributions varying $`k`$, using samples of different aspect ratios. In particular, we use the same sample and cut it progressively in shorter pieces, recording the noise always in the same region, whose size is limited by the pickup coil width. In this way only $`k`$ is varied, while stress, internal disorder and system size are kept constant. The demagnetizing factor is estimated as $`k=1/\mu _c1/\mu _i`$ where $`\mu _c`$ is the linear permeability around the coercive field and $`\mu _i`$ is the intrinsic permeability (i.e. in an infinite strip) estimated using a magnetic yoke . We perform the measurements on the Fe-Si 6.5 wt.% 1200C (with lengths spanning from 28 to 10 cm) and the Fe<sub>21</sub>Co<sub>64</sub>B<sub>15</sub> (from 27 to 8 cm) under constant tensile stress. In Fig. 2 we report the avalanche size distribution for different $`k`$ for Fe<sub>21</sub>Co<sub>64</sub>B<sub>15</sub> in order to show the increase of the cutoff. Data collapse analysis yields $`s_0k^{1/\sigma _k}`$ with $`1/\sigma _k0.78`$ (see the inset of Fig. 2). Similarly the duration distribution cutoff scales as $`s_0k^{\mathrm{\Delta }_k}`$ with $`\mathrm{\Delta }_k0.4`$. In the case of Fe-Si, the analysis is complicated by the frequency dependence of the exponents, therefore we fit the cutoff for different values of $`f`$ and extrapolate the results for $`f0`$. The results for $`s_0`$ and $`T_0`$ for both materials are reported in Fig. 3 and Table I. 
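For completeness, the avalanche definition used in this analysis (reference level $`v_r`$, duration between two successive crossings, size as the integral of the signal in between) can be written in a few lines. The sketch below is a schematic re-implementation run on a synthetic signal; the sampling interval, threshold and burst shapes are placeholders rather than the experimental values.

```python
# Sketch: extract Barkhausen avalanche sizes and durations from a sampled signal v(t).
import numpy as np

def avalanches(v, dt, v_r):
    """Sizes and durations of the excursions of v(t) above the reference level v_r."""
    above = v > v_r
    starts = np.where(~above[:-1] & above[1:])[0] + 1     # upward crossings
    ends   = np.where(above[:-1] & ~above[1:])[0] + 1     # downward crossings
    if above[0] and len(ends) and (len(starts) == 0 or ends[0] < starts[0]):
        ends = ends[1:]                                   # drop a truncated first excursion
    n = min(len(starts), len(ends))
    sizes     = np.array([v[i0:i1].sum() * dt for i0, i1 in zip(starts[:n], ends[:n])])
    durations = (ends[:n] - starts[:n]) * dt
    return sizes, durations

# Synthetic stand-in signal: rectified background noise plus randomly placed bursts.
rng = np.random.default_rng(0)
dt, N = 1.0e-5, 200_000
v = np.abs(rng.normal(0.0, 0.02, N))
for c in rng.integers(1_000, N - 1_000, 60):
    w = int(rng.integers(20, 400))
    v[c:c + w] += rng.uniform(0.2, 2.0) * np.hanning(w)

s, T = avalanches(v, dt, v_r=0.1)
print(f"{len(s)} avalanches,  <s> = {s.mean():.3e} V*s,  <T> = {T.mean():.3e} s")
```

Histogramming the resulting `sizes` and `durations` with logarithmic bins yields the distributions of Eq. (1) analyzed in Figs. 1 and 2.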
To interpret the experimental results we use the model of domain wall depinning discussed in Ref. . A single $`180^{}`$ domain wall is described by its position $`h(\stackrel{}{r})`$, dividing two regions of opposite magnetization directed along the $`x`$ axis. The total energy for a given configuration is the sum of different contributions due to magnetostatic, ferromagnetic and magneto-crystalline interactions, and gives rise to the following equation of motion $`\mathrm{\Gamma }{\displaystyle \frac{h(\stackrel{}{r},t)}{t}}=2\mu _0M_sHk{\displaystyle d^2r^{}h(\stackrel{}{r}^{},t)}+\gamma _w^2h(\stackrel{}{r},t)`$ $$+d^2r^{}J(\stackrel{}{r}\stackrel{}{r}^{})(h(\stackrel{}{r}^{})h(\stackrel{}{r}))+\eta (\stackrel{}{r},h),$$ (2) where $`\mathrm{\Gamma }`$ is the viscosity, $`M_s`$ is the saturation magnetization, $`H`$ is the applied field, $`k`$ is the demagnetizing factor, $`\gamma _w`$ is the surface tension of the wall, $`J`$ is the kernel due to dipolar interactions given by $$J(\stackrel{}{r}\stackrel{}{r}^{})=\frac{\mu _0M_s^2}{2\pi |\stackrel{}{r}\stackrel{}{r}^{}|^3}\left(1+\frac{3(xx^{})^2}{|\stackrel{}{r}\stackrel{}{r}^{}|^2}\right),$$ (3) and $`\eta (\stackrel{}{r},h)`$ is a Gaussian uncorrelated random field taking into account all the possible effects of dislocations, residual stress and non-magnetic inclusions. The critical behavior of Eq. 2 has been understood using renormalization group methods , which show that at large length scales the critical exponents take mean-field values . This result is due to the linear dependence on the momentum of the interaction kernel (Eq. 3) in Fourier space . In general, if we consider an interface whose interaction kernel in momentum space scales as $`J(q)=J_0|q|^\mu `$, the upper critical dimension is given by $`d_c=2\mu `$ and the values of the exponents depend on $`\mu `$ (see Table I). In particular, Eq. 2 yields $`\tau =3/2`$ and $`\alpha =2`$ (i.e. $`\mu =1`$), or $`\tau 1.27`$ and $`\alpha 1.5`$ if dipolar interactions are neglected (i.e. $`\mu =2`$) . Numerical simulations confirm the linear dependence of the exponents on the driving frequency $`f`$ for $`\mu =1`$, and no dependence for $`\mu =2`$ . The experimental results are in perfect agreement with the values predicted using Eq. 2 and suggest that the dipolar interactions are stronger than surface tension effects in polycrystals, or whenever small grains are present, while in amorphous alloys under stress the surface tension is much stronger. Magnetostriction can be one of the sources of this effect, since the domain wall surface tension increases with stress $`\sigma `$ as $`\gamma _w\sqrt{K_0+3/2\lambda _s\sigma }`$, where $`K_0`$ is the zero applied stress anisotropy and $`\lambda _s`$ is the saturation magnetostriction . Micromagnetic calculations for these particular materials are needed to have a final confirmation of this effect. We derive the dependence of the cutoff on $`k`$ from Eq. 2 noting that the demagnetizing field acts as a restoring force for the interface motion and is responsible for the cutoff in the avalanche sizes. The interface can not jump over distances larger than $`\xi `$, defined as the length for which the interaction term $`J_0|q|^\mu `$ is overcome by the restoring force (i.e. $`J_0h\xi ^\mu k\xi ^dh`$) which implies $$\xi (k/J_0)^{\nu _k}\nu _k=1/(\mu +d).$$ (4) The avalanche size and duration distributions cutoff can be obtained using the scaling relations reported in Ref. 
: $$s_0D(k/J_0)^{1/\sigma _k},1/\sigma _k=\nu _k(d+\zeta )$$ (5) and similarly $$T_0D(k/J_0)^{\mathrm{\Delta }_k},\mathrm{\Delta }_k=z\nu _k,$$ (6) where $`D\sqrt{\eta ^2}`$ denotes the typical fluctuation of the disorder. The dynamic exponent $`z`$ and the interface roughness exponent $`\zeta `$ define the spatio-temporal scaling of the domain wall width $`\sqrt{h^2h^2}=\xi ^\zeta F(t/\xi ^z)`$. Inserting in Eqs. (5-6) the renormalization group results $`\zeta =(2\mu d)/3`$ and $`z=\mu (4\mu 2d)/9`$, we obtain $`1/\sigma _k=2/3`$ and $`\mathrm{\Delta }_k=(\mu (4\mu 2d)/9)/(\mu +d)`$. We have performed extensive numerical simulations in $`d=2`$ using the model described in Ref. in order to test these results (see Table I). Our results also agree with earlier numerical simulations in $`d=1`$ for $`1<\mu <2`$ where $`s_0k^{0.65}`$ independent of $`\mu `$ and with the result reported in Ref. (i.e. $`s_0L^{1.4}`$) obtained using $`k1/L^2`$. The scaling of the cutoff is different for a local demagnetizing field $`kh(x,t)`$ , which yields $`\nu _k=1/\mu `$ . However, a non-local term is more appropriate to describe the demagnetizing field, due to the long-range of magnetostatic interactions , as it is confirmed by the agreement between experiments and theory. In conclusions, our experiments suggest that the Barkhausen effect can be described by universal scaling functions and that materials can be classified in different universality classes. The theory of interface depinning can be used to obtain a quantitative explanation of the experiments, providing a natural framework to understand the properties of soft magnetic materials.
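To see how Eq. (2) produces this phenomenology in practice, a minimal quasi-static lattice version can be written down. The sketch below works in $`d=1`$ with the dipolar kernel dropped (the $`\mu =2`$, surface-tension-only case) and the non-local demagnetizing term $`-k\overline{h}`$; it is not the code behind the $`d=2`$ simulations quoted in Table I, and all parameter values are illustrative.

```python
# Sketch: quasi-static automaton for a pinned elastic wall h(x) with a demagnetizing
# restoring force -k*mean(h), adiabatically driven by the field H.
import numpy as np

rng = np.random.default_rng(1)
L, k, gamma = 128, 0.05, 1.0                 # sites, demagnetizing factor, wall stiffness
h   = np.zeros(L)                            # wall displacement h(x)
eta = rng.normal(0.0, 1.0, L)                # quenched pinning force at the current positions

def force(H):
    lap = np.roll(h, 1) + np.roll(h, -1) - 2.0 * h      # discrete Laplacian (periodic)
    return gamma * lap + H - k * h.mean() + eta

sizes, H = [], 0.0
for _ in range(2000):                        # collect this many avalanches
    H += -force(H).max() + 1e-9              # adiabatic drive: tip the weakest site
    unstable = force(H) > 0.0
    size = 0
    while unstable.any():                    # relax at fixed H
        idx = np.where(unstable)[0]
        h[idx] += 1.0                        # unstable segments jump forward by one unit
        eta[idx] = rng.normal(0.0, 1.0, idx.size)       # fresh disorder ahead of the wall
        size += idx.size
        unstable = force(H) > 0.0
    sizes.append(size)

sizes = np.array(sizes)
print(f"mean avalanche size {sizes.mean():.1f}, largest {sizes.max()}")
```

Lowering $`k`$ in this sketch pushes the avalanche size cutoff to larger values, in line with Eqs. (4)-(5).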
no-problem/0001/hep-ph0001324.html
ar5iv
text
# Model-independent Determination of $`|V_{ub}|`$ ## 1 Problems and Solutions The fundamental Cabibbo-Kobayashi-Maskawa (CKM) matrix element $`|V_{ub}|`$ has been determined from inclusive charmless semileptonic decays of $`B`$ mesons, $`B\to X_u\ell \nu `$. However, there are experimental and theoretical problems that obstruct a precise determination of $`|V_{ub}|`$. Experimentally, it is very difficult to separate signals of the rare $`B\to X_u\ell \nu `$ decay from the large $`B\to X_c\ell \nu `$ background. Theoretically, QCD uncertainties arise in calculations that relate the measured quantity to $`|V_{ub}|`$. The potential theoretical uncertainties from perturbative and nonperturbative QCD can be comparable. The solutions for these problems are provided by a novel method. It has been proposed to use the kinematic cut on the variable $`\xi _u=(q^0+|𝐪|)/M_B`$ ($`q`$ is the momentum transfer to the lepton pair) above the kinematic limit for $`B\to X_c\ell \nu `$, $`\xi _u>1-M_D/M_B`$, to separate the $`B\to X_u\ell \nu `$ signal from the $`B\to X_c\ell \nu `$ background. Most $`B\to X_u\ell \nu `$ events pass the above cut. This kinematic requirement provides a very efficient way to suppress the background. $`|V_{ub}|`$ can then be extracted from the weighted integral of the measured $`\xi _u`$ spectrum via the sum rule for inclusive charmless semileptonic decays of $`B`$ mesons with little theoretical uncertainty. The sum rule is derived from the light-cone expansion and beauty quantum number conservation. Thus a model-independent determination of $`|V_{ub}|`$ can be achieved, minimizing the overall (experimental and theoretical) error. ## 2 Sum Rule Because of the large $`B`$ meson mass, the light-cone expansion is applicable to inclusive $`B`$ decays that are dominated by light-cone singularities. For inclusive charmless semileptonic decays of $`B`$ mesons, the light-cone expansion and beauty quantum number conservation lead to the sum rule $$S\equiv \int _0^1d\xi _u\frac{1}{\xi _u^5}\frac{d\mathrm{\Gamma }}{d\xi _u}(B\to X_u\ell \nu )=|V_{ub}|^2\frac{G_F^2M_B^5}{192\pi ^3}.$$ (1) This sum rule has the following advantages: * Independent of phenomenological models * No perturbative QCD uncertainty * Dominant hadronic uncertainty avoided The sum rule (1) establishes a relationship between $`|V_{ub}|`$ and the observable quantity $`S`$ in the leading twist approximation of QCD. The only remaining theoretical uncertainty in the relation comes from higher-twist corrections to the sum rule, which are suppressed by a power of $`\mathrm{\Lambda }_{\mathrm{QCD}}^2/M_B^2`$. ## 3 The $`\xi _u`$ Spectrum Now let me explain why the decay distribution of the kinematic variable $`\xi _u`$ is unique and why the kinematic cut on $`\xi _u`$ is very efficient in discriminating between the $`B\to X_u\ell \nu `$ signal and the $`B\to X_c\ell \nu `$ background. Without QCD corrections, the tree-level $`\xi _u`$ spectrum in the free quark decay $`b\to u\ell \nu `$ in the $`b`$-quark rest frame is a discrete line at $`\xi _u=m_b/M_B`$. This is simply a consequence of kinematics, which fixes $`\xi _u`$ to the single value $`m_b/M_B`$; no other values of $`\xi _u`$ are kinematically allowed in $`b\to u\ell \nu `$ decays. This discrete line at $`\xi _u=m_b/M_B\approx 0.9`$ lies well above the charm threshold, $`\xi _u>1-M_D/M_B=0.65`$. The $`O(\alpha _s)`$ perturbative QCD correction to the $`\xi _u`$ spectrum has been calculated. The $`\xi _u`$ spectrum remains a discrete line at $`\xi _u=m_b/M_B`$, even if virtual gluon emission occurs. 
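Before turning to the physical spectrum, it may help to see the sum rule (1) at work numerically. The sketch below (not from the original analysis) builds a toy $`\xi _u`$ spectrum peaked near $`m_b/M_B`$, normalized to an assumed value of $`|V_{ub}|`$, and recovers that value from the weighted integral $`S`$; the integral is split at the charm threshold to show which portion is directly measurable. The spectrum shape, the input $`|V_{ub}|`$ and the constants are placeholders.

```python
# Sketch: closing the loop on the sum rule (1) with a toy xi_u spectrum.
import numpy as np
from scipy.integrate import quad

GF, MB, MD = 1.16637e-5, 5.2792, 1.8693      # GeV^-2, GeV, GeV
xi_c = 1.0 - MD / MB                         # charm threshold in xi_u, about 0.65
Vub_true = 3.5e-3                            # |V_ub| assumed when building the toy spectrum

# Toy spectrum: xi^5 times a bump centred near xi_u ~ m_b/M_B ~ 0.9, normalised so
# that the right-hand side of Eq. (1) holds exactly for Vub_true.
bump  = lambda x: np.exp(-0.5 * ((x - 0.9) / 0.05) ** 2)
total = Vub_true**2 * GF**2 * MB**5 / (192.0 * np.pi**3)
norm  = quad(bump, 0.0, 1.0)[0]
dGamma_dxi = lambda x: total * x**5 * bump(x) / norm

# Weighted integral S, split at the charm threshold: only the upper part is free of
# charm background; the lower part is what the extrapolation has to supply.
S_above = quad(lambda x: dGamma_dxi(x) / x**5, xi_c, 1.0)[0]
S_below = quad(lambda x: dGamma_dxi(x) / x**5, 0.0, xi_c)[0]
S = S_above + S_below
Vub = np.sqrt(192.0 * np.pi**3 * S / (GF**2 * MB**5))
print(f"fraction of S above xi_u = {xi_c:.2f}: {S_above / S:.3f}")
print(f"extracted |V_ub| = {Vub:.2e}   (input {Vub_true:.2e})")
```

With this bookkeeping in mind, we return to the shape of the physical spectrum.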
Gluon bremsstrahlung generates a small tail below the parton-level endpoint $`\xi _u=m_b/M_B`$. To calculate the real physical decay distribution in $`BX_u\mathrm{}\nu `$, we must also account for hadronic bound-state effects. In the framework of the light-cone expansion, the leading nonperturbative QCD effect is incorporated in the $`b`$-quark distribution function $$f(\xi )=\frac{1}{4\pi }\frac{d(yP)}{yP}e^{i\xi yP}B|\overline{b}(0)y/𝒫exp[ig_s_y^0𝑑z^\mu A_\mu (z)]b(y)|B|_{y^2=0},$$ (2) where $`𝒫`$ denotes path ordering. Although several important properties of it are known in QCD, the form of the distribution function has not been completely determined. The distribution function $`f(\xi )`$ has a simple physical interpretation: It is the probability of finding a $`b`$-quark with momentum $`\xi P`$ inside the $`B`$ meson with momentum $`P`$. The real physical spectrum is then obtained from a convolution of the hard perturbative spectrum with the soft nonperturbative distribution function: $$\frac{d\mathrm{\Gamma }}{d\xi _u}(BX_u\mathrm{}\nu )=_{\xi _u}^1𝑑\xi f(\xi )\frac{d\mathrm{\Gamma }}{d\xi _u}(bu\mathrm{}\nu ,p_b=\xi P),$$ (3) where the $`b`$-quark momentum $`p_b`$ in the perturbative spectrum is replaced by $`\xi P`$. The interplay between nonperturbative and perturbative QCD effects has been accounted for. Bound-state effects lead to the extension of phase space from the parton level to the hadron level, also stretch the spectrum downward below $`m_b/M_B`$, and are solely responsible for populating the spectrum upward in the gap between the parton-level endpoint $`\xi _u=m_b/M_B`$ and the hadron-level endpoint $`\xi _u=1`$. The interplay between nonperturbative and perturbative QCD effects eliminates the singularity at the endpoint of the perturbative spectrum, so that the physical spectrum shows a smooth behaviour over the entire range of $`\xi _u`$, $`0\xi _u1`$. Although the monochromatic $`\xi _u`$ spectrum at tree level is smeared by gluon bresstrahlung and bound-state effects around $`\xi _u=m_b/M_B`$, about $`80\%`$ of $`BX_u\mathrm{}\nu `$ events remain above the charm threshold. The uniqueness of the decay distribution of the kinematic variable $`\xi _u`$ implies that the kinematic cut on $`\xi _u`$ is very efficient in disentangling $`BX_u\mathrm{}\nu `$ signal from $`BX_c\mathrm{}\nu `$ background. ## 4 How Do You Measure $`S`$? To measure the observable $`S`$ defined in Eq. (1), one needs to measure the weighted $`\xi _u`$ spectrum $`\xi _u^5d\mathrm{\Gamma }(BX_u\mathrm{}\nu )/d\xi _u`$, using the kinematic cut $`\xi _u>1M_D/M_B`$ against $`BX_c\mathrm{}\nu `$ background. $`S`$ can then be obtained from an extrapolation of the weighted spectrum measured above the charm threshold to low $`\xi _u`$. While the normalization of the weighted spectrum given by the sum rule (1) does not depend on the $`b`$-quark distribution function $`f(\xi )`$, thus being model-independent, the shape of the weighted spectrum does. The detailed analysis is presented in Ref. . Gluon bremsstrahlung and hadronic bound-state effects strongly affect the shape of the weighted $`\xi _u`$ spectrum. However, the shape of the weighted $`\xi _u`$ spectrum is insensitive to the value of the strong coupling $`\alpha _s`$, varied in a reasonable range. The overall picture appears to be that the weighted $`\xi _u`$ spectrum is peaked towards larger values of $`\xi _u`$ with a narrow width. The contribution below $`\xi _u=0.65`$ is small and relatively insensitive to forms of the distribution function. 
## 5 Summary The kinematic cut on $`\xi _u`$, $`\xi _u>1-M_D/M_B`$, and the semileptonic $`B`$ decay sum rule, Eq. (1), offer an outstanding opportunity for the precise determination of $`|V_{ub}|`$ from the observable $`S`$. This method is both exceptionally clean theoretically and very efficient experimentally in background suppression. There remain two kinds of theoretical error in the model-independent determination of $`|V_{ub}|`$. First, higher-twist (or power suppressed) corrections to the sum rule cause an error of order $`O(\mathrm{\Lambda }_{\mathrm{QCD}}^2/M_B^2)\approx 1\%`$ in $`|V_{ub}|`$. Second, the extrapolation of the weighted $`\xi _u`$ spectrum to low $`\xi _u`$ gives rise to a systematic error in the measurement of $`S`$. The size of this error depends on how well the weighted spectrum can be measured, since the measured spectrum would directly determine the form of the distribution function. In addition, the form of the universal distribution function can also be determined directly by a measurement of the $`B\to X_s\gamma `$ photon energy spectrum. The experimental determination of the distribution function would provide a model-independent way to make the extrapolation, allowing this error to be reduced. Eventually, the error in $`|V_{ub}|`$ determined by this method would mainly depend on how well the observable $`S`$ can be measured. To measure $`S`$ experimentally one needs to be able to reconstruct the neutrino, which poses an experimental challenge. The unique potential of determining $`|V_{ub}|`$ warrants a feasibility study for such an experiment. ## Acknowledgments I would like to thank Hai-Yang Cheng and Wei-Shu Hou for the very stimulating and enjoyable conference. This work was supported by the Australian Research Council.
no-problem/0001/math0001113.html
ar5iv
text
# A Remark on the Chisini Conjecture In this note, we establish the following consequence of Kulikov’s results on the Chisini conjecture . ###### Theorem 1 A generic ramified covering $`f:S^2`$ of degree at least $`12`$ is uniquely determined by its branch curve in $`^2.`$ In other words, the Chisini conjecture holds for generic morphisms of degree $`12`$. We discuss the assumptions of this theorem and the Chisini conjecture itself in section 1. Then, in section 2, we present a proof of Theorem 1 and observe a few related applications of our approach. 1. Generic morphisms of surfaces and the Chisini conjecture. A ramified covering $`f:S^2`$ is a finite morphism of a non-singular irreducible projective surface onto the projective plane. The branch curve of $`f`$ is defined as the set of points over which $`f`$ is not étale. A finite morphism $`f:S^2`$ of degree $`\mathrm{deg}f3`$ is said to be generic if the following holds: * the branch curve $`B^2`$ is irreducible and has ordinary cusps and nodes only; * $`f^{}B=2R+C`$, where the ramification divisor $`R`$ is irreducible and non-singular, and $`C`$ is reduced; * $`f|_R:RB`$ is the normalization of $`B`$. Two generic morphisms $`f_1:S_1^2`$ and $`f_2:S_2^2`$ are called equivalent if there exists an isomorphism $`\phi :S_1S_2`$ such that $`f_1=f_2\phi `$. The precise assertion of Theorem $`1`$ is therefore that two generic morphisms (of a priori different surfaces!) having the same branch curve are equivalent provided that at least one of them has degree $`12`$. The above definition of genericity is parallel to the case of Riemann surfaces. Recall that, according to Riemann and Hurwitz, a ramified covering $`f:\mathrm{\Sigma }^1`$ is called generic if over each point in $`^1`$ there is at most one quadratic ramification point of $`f`$. On the other hand, Theorem 1 shows that the complex surface case is essentially rigid. Note that every algebraic surface $`S`$ admits a generic morphism $`f:S^2`$ or, in other words, can be represented as a generic ramified covering over $`^2`$. For instance, if $`S^r`$, then almost every projection $`^r^2`$ yields a generic morphism $`p:S^2`$. Therefore Theorem 1 might be used to understand the moduli spaces of complex surfaces in terms of the geometry of plane curves. The conjecture that a generic morphism of degree at least $`5`$ is completely determined by its branch curve was proposed by O. Chisini , who also gave an alleged proof of this statement. The assumption $`\mathrm{deg}f5`$ is necessary because of the following example due to Chisini and Catanese . Let $`B=C^{}`$ be the dual curve of a non-singular plane cubic $`C`$. ($`B`$ is a sextic with $`9`$ cusps). Then there exist four non-equivalent generic morphisms with branch curve $`B`$. Three of them, of degree $`4`$, are maps from $`^2`$ given by projections of the Veronese surface. The fourth map, of degree $`3`$, is the projection on $`^2`$ of the elliptic ruled surface obtained as the preimage of $`C`$ in the incidence variety $`^2\times ^2`$. So far this example and its fiber products with ramified coverings are the only known examples of non-uniqueness of generic morphisms with a given branch curve. In the important paper , Moishezon proved the Chisini conjecture for branch curves of generic projections of smooth hypersurfaces in $`^3`$ by introducing and analyzing the braid presentations of the fundamental group of $`^2B`$. Recently this approach was substantially developed by Vik. Kulikov . 
He proved that, for a given branch curve $`B`$, the generic morphism is unique provided that its degree is greater than a certain function depending on the curve $`B`$ (the explicit expression is given in formula (1) below). This allowed him to prove the Chisini conjecture for a wide class of generic morphisms (for instance, for pluri-canonical morphisms of surfaces of general type). Our (rather modest) contribution is that one can estimate Kulikov’s function from above by using the Bogomolov–Miyaoka–Yau (BMY) inequalities. In what follows we say that the Chisini conjecture holds for a class of generic morphisms if every morphism in this class is determined by its branch curve up to equivalence. 2. Cusps of branch curves and BMY inequalities. Consider a generic morphism $`f:S\to \mathbb{P}^2`$ of degree $`\mathrm{deg}f=N`$ with branch curve $`B\subset \mathbb{P}^2`$. Denote by $`2d`$ the degree of $`B`$ (it is always even), by $`g`$ the genus of the desingularization of $`B`$, and by $`c`$ the number of cusps of $`B`$. It was proved in that the morphism $`f:S\to \mathbb{P}^2`$ is uniquely determined by $`B`$ if $$N>\frac{4(3d+g-1)}{2(3d+g-1)-c}.$$ (1) We wish to apply the Bogomolov-Miyaoka-Yau inequality on the algebraic surface $`S`$ to estimate the number of cusps of the branch curve $`B`$ via $`g`$ and $`d`$, which leads to an a priori upper bound for the right hand side of (1). To this end we shall need the following formulas (cf. Lemmas 6 and 7 in ) for the self-intersection of the canonical class and the topological Euler characteristic of $`S`$: $$K_S^2=9N-9d+g-1,\qquad e(S)=3N+2(g-1)-c.$$ (2) Proof of Theorem 1. Let us assume first that $`S`$ satisfies the BMY inequality $$K_S^2\le 3e(S).$$ (3) (This means essentially that $`S`$ is not a blow-up of an irrational ruled surface of irregularity $`>1`$.) Plugging (2) into the BMY inequality we obtain $$9N-9d+g-1\le 9N+6(g-1)-3c,$$ and therefore $$c\le 3d+\frac{5}{3}(g-1).$$ It follows that $$\frac{4(3d+g-1)}{2(3d+g-1)-c}\le \frac{12d+4(g-1)}{3d+\frac{1}{3}(g-1)}=4+\frac{8(g-1)}{9d+(g-1)}<12,$$ (4) which proves the Theorem in this case. Now, if $`S`$ is (a blow-up of) an irrational ruled surface, it satisfies the inequality $`K_S^2\le 2e(S)`$. (The BMY inequality used above does not follow from this because both sides may be negative.) Arguing in the same way as before, we obtain $$c\le -\frac{3}{2}N+\frac{9}{2}d+\frac{3}{2}(g-1)<\frac{3}{2}(3d+g-1).$$ This gives us the (sharper) estimate $$\frac{4(3d+g-1)}{2(3d+g-1)-c}<8,$$ which completes the proof. $`\mathrm{}`$ In fact, all surfaces of non-general type except $`\mathbb{P}^2`$ satisfy $`K_S^2\le 2e(S)`$. Moreover, by Theorem 3 in the Chisini conjecture holds for generic endomorphisms of $`\mathbb{P}^2`$ of degree $`\ge 5`$. Hence, the second part of our proof yields the following result. ###### Theorem 2 The Chisini conjecture holds for generic morphisms of degree at least $`8`$ of surfaces of non-general type. Returning to the case of surfaces satisfying the “general” BMY inequality (3), we see that, in certain cases, the last inequality in (4) can be improved. From the formula for $`K_S^2`$ we have $$9d=-K_S^2+9N+(g-1).$$ Plugging this into (4) we deduce the following result (including by the way the case of $`\mathbb{P}^2`$). ###### Corollary. The Chisini conjecture holds for generic morphisms $`f:S\to \mathbb{P}^2`$ of degree $`\ge 8`$ such that $`K_S^2<9\mathrm{deg}f`$. Another complementary result can be obtained under additional assumptions on the branch curve. ###### Corollary.
The Chisini conjecture holds for generic morphisms of degree $`\ge 8`$ such that the genus of the branch curve is less than $`64`$. Proof. It follows from the Riemann–Hurwitz formula applied to the preimage of a projective line that $`\mathrm{deg}f\le d+1`$ (cf. Lemma 1 in ). However, if $`d\ge 7`$ and $`g\le 63`$ (so that $`9d>g-1`$), then estimate (4) improves to $`<8`$. $`\mathrm{}`$ Remarks 1° Viktor Kulikov has also obtained the following result: the Chisini conjecture holds for generic morphisms of degree $`\ge 5`$ whose branch curves are cuspidal, i.e. have no nodes. 2° The estimates for the number of cusps of branch curves of generic morphisms obtained above are slightly stronger than the bounds known for arbitrary curves with simple singularities. It should be noted that the best estimates so far were obtained by applying the logarithmic BMY inequality to double coverings (see ). Acknowledgements. The author is deeply grateful to Viktor Kulikov for sharing his results and expertise. This paper was written during the author’s stay at the Max-Planck-Institut für Mathematik in Bonn. It is a pleasure to acknowledge the support of this hospitable institution. The author was also supported by RFBR grant 99-01-00969. Steklov Mathematical Institute, 117966 Moscow, Russia E-mail address: stefan@mccme.ru
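As a quick sanity check of the estimates used in the proofs above, the following Python sketch (an illustration added to this text, not part of the original note) evaluates Kulikov's bound (1) and the BMY cusp bound $`c\le 3d+\frac{5}{3}(g-1)`$; the values $`d=3`$, $`g=1`$, $`c=9`$ correspond to the dual-cubic sextic of Chisini and Catanese mentioned in section 1.

```python
def kulikov_bound(d, g, c):
    """Right-hand side of inequality (1): the morphism is unique if N exceeds this."""
    return 4.0 * (3 * d + g - 1) / (2 * (3 * d + g - 1) - c)

def max_cusps_bmy(d, g):
    """Cusp bound c <= 3d + (5/3)(g - 1) following from K_S^2 <= 3 e(S)."""
    return 3 * d + (5.0 / 3.0) * (g - 1)

# Dual of a smooth cubic: a sextic (2d = 6) with nine cusps and geometric genus 1.
print(kulikov_bound(3, 1, 9))      # 4.0 -- (1) is silent about the degree-3/4 coverings

for d, g in [(3, 1), (10, 20), (50, 400)]:
    c_max = max_cusps_bmy(d, g)
    print(d, g, kulikov_bound(d, g, c_max))   # always below 12, as in Theorem 1
```

For the dual-cubic example the bound equals exactly 4, so (1) says nothing about the known degree-3 and degree-4 coverings, consistent with the counterexample; for any number of cusps allowed by BMY the bound stays below 12, which is the content of Theorem 1.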
no-problem/0001/astro-ph0001417.html
ar5iv
text
# OGLE Cepheids have Lower Amplitudes in SMC than in LMC ## 1 Introduction The period - luminosity relation for cepheids is the foundation of the HST Key Project (cf. Freedman 1999, and references therein), and it is used to determine distances to galaxies which are up to 23 Mpc away. Recently, Mochejska et al. (1999) demonstrated that ground based photometry of cepheids in M31 is strongly affected by blending: the true apparent luminosity of cepheids, as measured with high resolution HST images, is systematically lower than the ground based value, as numerous blends are not resolved from the ground. This finding made Mochejska et al. suggest that blending may affect the HST Key Project photometry, and may lead to an underestimate of the distances based on the period - luminosity relation. Stanek & Udalski (1999), using the OGLE ground based data for cepheids in LMC and SMC, and adopting a model for the HST photometry, concluded that the effect may reach up to 0.3 mag at the distance of 20 Mpc at the HST resolution. The correctness of their approach has been disputed by Ferrarese et al. (1999), who claim that the systematic error due to crowding does not exceed 0.02 magnitudes for the HST Key Project photometry. Gibson et al. (1999) used Type Ia supernovae and the Tully - Fisher relation to check for the effects of blending with inconclusive (in our view) results: the Tully - Fisher relation could not discriminate between the strong-blending and the no-blending hypotheses, while Type Ia supernovae were only marginally in favor of no blending. The issue of blending may not be settled for some time, as it is difficult to model. Note that a significant contribution to blending may be due to physical companions, which are common among young stars. The star - star correlation function is strong for young stars (Harris & Zaritsky 1999), and an estimate based on randomly placed ‘artificial stars’, often used in blending tests (e.g. Ferrarese et al. 1999), may be inadequate. Hence, it is useful to explore an approach which is not affected by any blending. A simple way to overcome the blending problem altogether is to use the AC signal from cepheids, i.e. the period - flux amplitude relation (cf. Paczyński 1999). The recently developed image subtraction software (e.g. Alard & Lupton 1998, cf. its applications by Alard 1999a,b, Olech et al. 1999, Woźniak et al. 1999) provides the light variations of point sources as the only directly measurable quantity. Of course, the image subtraction does not provide a measure of the DC signal. For the period - flux amplitude relation to be useful it has to be verified empirically, as theoretical models do not provide reliable values of cepheid amplitudes. The recently published OGLE database of about $`8\times 10^5`$ photometric measurements in standard BVI bands for over 3,000 cepheids in both Magellanic Clouds (Udalski et al. 1999a,b, cf. http://www.astrouw.edu.pl/~ftp/ogle/ogle2/cepheids/query.html ) offers an opportunity to test the universality of the period - flux amplitude relation. ## 2 Results Only bright, i.e. long period cepheids are useful for distance determination. As OGLE data are affected by CCD saturation for LMC cepheids with periods longer than 30 days, we selected only those with periods shorter than $`P_{max}=10^{1.4}`$ days. On the short period side a resonance complicates light curves of cepheids with periods near 10 days.
Therefore, we selected only those with periods longer than $`P_{min}=10^{1.1}`$ days, and which were in the narrow band of the observed period - luminosity relation defined by Udalski et al. (1999a,b). There were 33 such objects in the LMC and 35 in the SMC OGLE database. These numbers will increase in the future when OGLE covers a larger area of both Magellanic Clouds, and the longest period cepheids will be measured in both using shorter exposure times. The OGLE public domain database provides over 200 I-band data points per cepheid, and typically 15 data points in V and B bands. A visual inspection of the 68 very accurate I-band light curves revealed an unpleasant surprise: cepheids in the SMC had smaller amplitudes than those in the LMC. The median I-band amplitude is 0.56 mag in the LMC and only 0.46 mag in the SMC. In order to quantify this effect we approximated the I-band flux variation of every cepheid with a truncated Fourier series: $$F_I=\langle F_I\rangle +\sum _{i=1}^{n}\left[a_{is}\mathrm{sin}(2\pi it/P)+a_{ic}\mathrm{cos}(2\pi it/P)\right],$$ $`(2)`$ where all the coefficients were calculated so as to minimize the rms deviation between the observed data points, $`F_{I,k}`$, and the formula (2). The power in the first four harmonics was calculated as $$F_4=\left[\sum _{i=1}^{4}\left(a_{is}^2+a_{ic}^2\right)\right]^{1/2},$$ $`(3)`$ and we defined the relative amplitude as $$f\equiv F_4/\langle F_I\rangle .$$ $`(4)`$ These values were tabulated for the 33 LMC and 35 SMC cepheids, and they are shown in Fig. 1 as a function of pulsation period. Next, we made 1,000 random drawings from these samples, with replacement. The average values and the variances of the medians were found to be $$f_{LMC}=0.2466\pm 0.0076,\quad f_{SMC}=0.2086\pm 0.0052,$$ $$f_{LMC}-f_{SMC}=0.0380\pm 0.0092,\quad (4.1\sigma ).$$ $`(5)`$ The difference in amplitude between the LMC and SMC cepheids is a 4 $`\sigma `$ effect. In order to verify the extent to which the difference is affected by the outliers we removed cepheids with $`f<0.14`$ from the sample; 3 of these were in the SMC and 1 was in the LMC. The same procedure was repeated, and we obtained $$f_{LMC}=0.2480\pm 0.0052,\quad f_{SMC}=0.2116\pm 0.0076,$$ $$f_{LMC}-f_{SMC}=0.0364\pm 0.0092,\quad (3.9\sigma ).$$ $`(6)`$ The difference in amplitudes remained a 4 $`\sigma `$ effect. We found a similar difference in the V-band amplitudes; it was a 3 $`\sigma `$ effect, presumably because of the vastly smaller number of photometric measurements. It is clear that no matter how the pulsation amplitude is estimated there is a significant difference between the two Magellanic Clouds, with the amplitudes of LMC cepheids larger by $`\approx 18\%`$. Being at the 4 $`\sigma `$ level the effect is very unlikely to be a result of a random fluctuation. Of course, it will be useful to check it when the sample of LMC and SMC cepheids becomes larger in a year or two. We make no attempt to interpret the difference in amplitudes, though the most natural reason seems to be the difference in the metal content. If this is a metallicity effect, then galactic cepheids, as well as those in M31 and M33, may have even larger amplitudes than those in the LMC. Unfortunately, ground based M31 and M33 data are known to be affected by serious blending, while there are relatively few galactic cepheids in the period range $`10^{1.1}\le P\le 10^{1.4}`$ days to make a comparison useful.
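The amplitude statistic of Eqs. (2)–(4) and the bootstrap of the medians are straightforward to reproduce. The sketch below is an illustration written for this text (it is not the OGLE pipeline), and the assumption is simply that the time stamps, fluxes and period of a cepheid are available as arrays and a float.

```python
import numpy as np

def relative_amplitude(t, flux, period, n_harmonics=4):
    """Least-squares fit of the truncated Fourier series (2); returns f = F_4 / <F_I>."""
    phase = 2.0 * np.pi * np.outer(t / period, np.arange(1, n_harmonics + 1))
    design = np.hstack([np.ones((len(t), 1)), np.sin(phase), np.cos(phase)])
    coeffs, *_ = np.linalg.lstsq(design, flux, rcond=None)
    mean_flux = coeffs[0]
    a_sin, a_cos = coeffs[1:n_harmonics + 1], coeffs[n_harmonics + 1:]
    return np.sqrt(np.sum(a_sin**2 + a_cos**2)) / mean_flux

def bootstrap_median(values, n_draws=1000, seed=0):
    """Mean and scatter of the sample median under resampling with replacement."""
    rng = np.random.default_rng(seed)
    medians = [np.median(rng.choice(values, size=len(values), replace=True))
               for _ in range(n_draws)]
    return np.mean(medians), np.std(medians)
```

Applying `bootstrap_median` to the two lists of $`f`$ values and differencing the results gives error estimates of the kind quoted in Eqs. (5)–(6).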
The unfortunate consequence of our finding is that the period - flux amplitude relation is not universal, and cannot be used for accurate distance determination. This work was supported by the NSF grant AST-9820314.
no-problem/0001/gr-qc0001104.html
ar5iv
text
# The Angular Scale of Topologically-Induced Flat Spots in the Cosmic Microwave Background Radiation ## 1 Introduction Recently, there has been considerable interest in the prospect that the universe has non-trivial topology. This heightened awareness of the possibility of topology has been driven by the ever strengthening case against a flat, matter dominated universe. The two alternatives which best fit the data are either a flat, cosmological constant dominated universe, or a negatively curved, matter dominated universe. If the universe has negative curvature, then the curvature scale is also the natural scale on which one would generically expect topology. Since the curvature scale of a negatively curved universe $$R_c\approx \frac{3000h^{-1}}{\sqrt{1-\mathrm{\Omega }}}\mathrm{Mpc}$$ (1) is considerably less than the radius of the observable universe, one might hope to be able to observe evidence for the non-trivial topology, and comparisons of the COBE satellite’s observations of fluctuations in the Cosmic Microwave Background Radiation (CMBR) temperature to predicted fluctuations for closed hyperbolic manifolds have been made . Much effort has gone into uncovering good signatures for such topology . The effort is complicated by the fact that there are infinitely many different possible topologies for negatively curved manifolds. Moreover, a large, perhaps infinite number of these have small enough closed loops (in at least some directions) that they could in principle yield observable consequences. Therefore the best signatures must allow general searches for topology and not just allow us to tell whether the universe is a particular manifold. One such generic topology search algorithm utilizes the very robust observation that the existence of topology will result in the existence of pairs of circles on the sky which have highly correlated patterns of CMBR temperature fluctuations . Another suggestion, made by Levin et al , is that flat spots (regions with suppressed long-wavelength fluctuations in temperature) will appear in the CMBR sky, in particular in compact hyperbolic manifolds. Levin et al examined one particular hyperbolic manifold, the so-called horn topology (which is not compact), and discovered that down the direction of the horn such flat spots do indeed appear. To understand the suggestion that these flat spots are generic one must realize that hyperbolic manifolds of non-trivial topology correspond to tilings of the covering space of hyperbolic geometry, the usual “open” $`𝐇^3`$. A compact hyperbolic manifold is then a tiling whose fundamental or Dirichlet domain does not extend to spatial infinity. Such compact hyperbolic manifolds are, however, constructed by a process of Dehn surgery on so-called cusped manifolds, which extend to infinity at a finite number of isolated points, called cusps. The cusp portions of the cusped manifolds are very much like the horn topology in that in both cases the cross-section of the manifold narrows exponentially as one moves down the horn/cusp toward spatial infinity. As the manifold narrows, geodesics can readily wrap around the horn/cusp a large number of times and so smooth out any features. The suggestion that the flat spots seen in the horn topology may be more generic assumes that the Dehn surgery required to turn the cusped manifold into the compact manifold is sufficiently gentle so as to preserve this evidence of the cusps of the parent cusped manifold.
In this paper we will try to see just how far the analogy between cusped and horned manifolds can take us – that is, how flat the flat spots which would appear in the cusped manifold would be. This represents the flattest that one could expect the flat spots in the daughter cusp-free manifold to be. We will show that although the cusps do produce flat spots they are generically not quite so prominent as those produced in the horn topology. ## 2 Modes on the Horosphere In a small volume cusped hyperbolic manifold, it is difficult to calculate the eigenmodes of the wave operator which contribute to variations in the CMBR (although see ). However, since all that interests us is whether or not there is a flat spot on the CMBR in the vicinity of a cusp, we need not solve the full problem. Instead, we find how the topology affects the modes on the surface of last scattering (SLS) near the cusp. To compute these modes, we need to choose a model of $`𝐇^3`$ in which to work. There are many models of $`𝐇^3`$. The most common representation of $`𝐇^3`$ is Poincaré’s model, which is the unit ball in $`𝐑^3`$ with the metric $`ds^2=\frac{4}{(1-r^2)^2}dx^2`$. Here $`dx^2`$ is the normal metric of $`𝐑^3`$ and $`r`$ is the distance from the origin. In this model, geodesics are diameters of the unit sphere and circular arcs perpendicular to the surface of the unit sphere . Of more use to us will be the hyperboloid model of $`𝐇^3`$, which is the set of points in $`𝐑^{1+3}`$ on the upper sheet of the hyperboloid $`-1=-x_0^2+x_1^2+x_2^2+x_3^2`$. The distance $`d`$ between two points $`x,y`$ in this model is $`d=`$arccosh$`(-x\cdot y)`$, where $`x\cdot y=-x_0y_0+x_1y_1+x_2y_2+x_3y_3`$ is the Lorentz dot product of two points in $`𝐑^{1+3}`$. Geodesics in the hyperboloid model have the form $`\lambda (t)=x\mathrm{cosh}(t)+y\mathrm{sinh}(t)`$, where $`x`$ is a point on the hyperboloid and $`y`$ is a unit vector in $`𝐑^{1+3}`$ orthogonal to it . Finally, we will also make use of the Klein model of $`𝐇^3`$. This is obtained from the hyperboloid model by projecting the point $`(x_0,x_1,x_2,x_3)`$ to the point $`(\frac{x_1}{x_0},\frac{x_2}{x_0},\frac{x_3}{x_0})`$. Geodesics in the Klein model are open chords of the unit ball. We now use a horosphere to find the modes near the cusp. A horosphere is a sphere inside and tangent to the unit sphere in the Poincaré model of $`H^3`$. We consider the horosphere tangent at the cusp that goes through the point on the SLS in the direction of the cusp. On the horosphere, the transformation group of the manifold restricts to a Euclidean similarity group. We calculate the modes of this group, and use them as an approximation to the modes on the SLS. By comparing the density of these modes to those of a corresponding patch of open sky, we get an estimate of any suppression caused by the topology. We did this calculation on a particular cusped manifold – number m003 from the SnapPea census of cusped manifolds. This manifold is obtained by gluing the faces of two ideal tetrahedra<sup>1</sup><sup>1</sup>1An ideal polyhedron is one with only ideal vertices. An ideal vertex in the Poincaré or Klein model is a vertex located on the unit sphere. A finite vertex is a vertex which is not ideal. together and has a volume $`V\approx 2.0299`$, in units of the curvature radius cubed. The Dirichlet domain we considered (cf. figure 1) is centered on one of the tetrahedra, with the other tetrahedron split into quarters which are attached to the faces of the first tetrahedron.
The resulting figure has four ideal vertices, four finite vertices, eighteen edges, and twelve faces. Numbering the ideal vertices 1, 2, 3, and 4, we can associate each finite vertex with the ideal vertices which it shares edges with. The finite vertices are numbered 5, 6, 7, and 8, with vertex 5 forming edges with vertices 1, 2, and 3, vertex 6 forming edges with vertices 1, 2, and 4, vertex 7 forming edges with vertices 2, 3, and 4, and vertex 8 forming edges with vertices 1, 3, and 4. Each face can be identified by its three vertices. In m003, the faces are glued in the pattern 125-347, 237-348, 138-246, 148-135, 146-247, and 126-235. There are then six classes of faces, six classes of edges and two classes of vertices. In the Klein model, the vertices of m003 are $`v_1`$ $`=({\displaystyle \frac{1}{\sqrt{3}}},{\displaystyle \frac{1}{\sqrt{3}}},{\displaystyle \frac{1}{\sqrt{3}}}),v_2=({\displaystyle \frac{1}{\sqrt{3}}},{\displaystyle \frac{1}{\sqrt{3}}},{\displaystyle \frac{1}{\sqrt{3}}}),`$ $`v_3`$ $`=({\displaystyle \frac{1}{\sqrt{3}}},{\displaystyle \frac{1}{\sqrt{3}}},{\displaystyle \frac{1}{\sqrt{3}}}),v_4=({\displaystyle \frac{1}{\sqrt{3}}},{\displaystyle \frac{1}{\sqrt{3}}},{\displaystyle \frac{1}{\sqrt{3}}}),`$ (2) $`v_5`$ $`=({\displaystyle \frac{\sqrt{3}}{5}},{\displaystyle \frac{\sqrt{3}}{5}},{\displaystyle \frac{\sqrt{3}}{5}}),v_6=({\displaystyle \frac{\sqrt{3}}{5}},{\displaystyle \frac{\sqrt{3}}{5}},{\displaystyle \frac{\sqrt{3}}{5}}),`$ $`v_7`$ $`=({\displaystyle \frac{\sqrt{3}}{5}},{\displaystyle \frac{\sqrt{3}}{5}},{\displaystyle \frac{\sqrt{3}}{5}}),\mathrm{and}v_8=({\displaystyle \frac{\sqrt{3}}{5}},{\displaystyle \frac{\sqrt{3}}{5}},{\displaystyle \frac{\sqrt{3}}{5}}).`$ The six generators of the transformation group are, in the hyperboloid model, $`a_0=\left(\begin{array}{cccc}\frac{7}{4}& \frac{\sqrt{3}}{4}& \frac{3\sqrt{3}}{4}& \frac{\sqrt{3}}{4}\\ \frac{\sqrt{3}}{4}& \frac{1}{4}& \frac{3}{4}& \frac{3}{4}\\ \frac{3\sqrt{3}}{4}& \frac{3}{4}& \frac{5}{4}& \frac{3}{4}\\ \frac{\sqrt{3}}{4}& \frac{3}{4}& \frac{3}{4}& \frac{1}{4}\end{array}\right),a_1=\left(\begin{array}{cccc}\frac{7}{4}& \frac{\sqrt{3}}{4}& \frac{3\sqrt{3}}{4}& \frac{\sqrt{3}}{4}\\ \frac{3\sqrt{3}}{4}& \frac{3}{4}& \frac{5}{4}& \frac{3}{4}\\ \frac{\sqrt{3}}{4}& \frac{3}{4}& \frac{3}{4}& \frac{1}{4}\\ \frac{\sqrt{3}}{4}& \frac{1}{4}& \frac{3}{4}& \frac{3}{4}\end{array}\right),`$ (11) $`a_2=\left(\begin{array}{cccc}\frac{7}{4}& \frac{\sqrt{3}}{4}& \frac{\sqrt{3}}{4}& \frac{3\sqrt{3}}{4}\\ \frac{\sqrt{3}}{4}& \frac{1}{4}& \frac{3}{4}& \frac{3}{4}\\ \frac{\sqrt{3}}{4}& \frac{3}{4}& \frac{1}{4}& \frac{3}{4}\\ \frac{3\sqrt{3}}{4}& \frac{3}{4}& \frac{3}{4}& \frac{5}{4}\end{array}\right),a_3=\left(\begin{array}{cccc}\frac{7}{4}& \frac{\sqrt{3}}{4}& \frac{\sqrt{3}}{4}& \frac{3\sqrt{3}}{4}\\ \frac{3\sqrt{3}}{4}& \frac{3}{4}& \frac{3}{4}& \frac{5}{4}\\ \frac{\sqrt{3}}{4}& \frac{1}{4}& \frac{3}{4}& \frac{3}{4}\\ \frac{\sqrt{3}}{4}& \frac{3}{4}& \frac{1}{4}& \frac{3}{4}\end{array}\right),`$ (20) $`a_4=\left(\begin{array}{cccc}\frac{7}{4}& \frac{\sqrt{3}}{4}& \frac{\sqrt{3}}{4}& \frac{3\sqrt{3}}{4}\\ \frac{3\sqrt{3}}{4}& \frac{3}{4}& \frac{3}{4}& \frac{5}{4}\\ \frac{\sqrt{3}}{4}& \frac{1}{4}& \frac{3}{4}& \frac{3}{4}\\ \frac{\sqrt{3}}{4}& \frac{3}{4}& \frac{1}{4}& \frac{3}{4}\end{array}\right)\mathrm{and}a_5=\left(\begin{array}{cccc}\frac{7}{4}& \frac{3\sqrt{3}}{4}& \frac{\sqrt{3}}{4}& \frac{\sqrt{3}}{4}\\ \frac{\sqrt{3}}{4}& \frac{3}{4}& \frac{3}{4}& \frac{1}{4}\\ \frac{3\sqrt{3}}{4}& \frac{5}{4}& \frac{3}{4}& 
\frac{3}{4}\\ \frac{\sqrt{3}}{4}& \frac{3}{4}& \frac{1}{4}& \frac{3}{4}\end{array}\right).`$ (29) These generators are orientation preserving and satisfy the six relations $`a_0a_1^1a_5^1=1,a_3a_2a_4^1=1,`$ $`a_1a_2a_4=1,a_0a_1a_3=1,`$ (30) $`a_5a_0a_4^1=1,\mathrm{and}a_2a_5a_3^1=1.`$ The Euclidean similarity group of a horosphere centered on a cusp is a tiling of the Euclidean plane with hexagons. There are four elements in the tiling, one from each ideal vertex of the domain. The tiling is shown in figure 1. To calculate the eigenmodes of the wave operator on the tiling, the rectangular domain and the axes shown in figure 1 were used. The similarity group of this tiling is generated by the two transformations $`T_1(x,y)=(x+L,y)`$, $`T_2(x,y)=(x+\frac{L}{2},y+\frac{\sqrt{3}L}{2})`$, and the modes $`\varphi (x,y)`$ of the tiling are solutions of the Helmholtz (wave) equation under the two boundary conditions $`\varphi (T_i(x,y))=\varphi (x,y)`$ for $`i=1,2`$. A straightforward argument shows that the normal modes that satisfy these boundary conditions have wavevectors $`\stackrel{}{k}`$ of the form $$\stackrel{}{k}=\frac{2\pi }{L}\left[n(1,\frac{1}{\sqrt{3}})+m(0,\frac{2}{\sqrt{3}})\right]$$ (31) with $`n`$, $`m`$ arbitrary integers. The shortest nonzero $`\stackrel{}{k}`$ modes have $`k_{min}=\frac{2\pi }{L}\frac{2}{\sqrt{3}}`$. This corresponds to a maximum wavelength $`\lambda _{max}=\frac{2\pi }{k_{min}}=\frac{\sqrt{3}L}{2}`$ for solutions with these boundary conditions. ## 3 From Horosphere to Sphere of Last Scatter We have now found the wavelengths of the modes on the portion of the SLS that is tiled like the horosphere. To find $`L`$ and calculate how much of the SLS is tiled, we computed the intersections of the SLS with the edges of the domains in the tiling. The size and maximum wavelength of flat spots on the CMB varies from point to point in the manifold, depending on how far down the cusp a point is. To account for this, we chose to consider the SLS of three points: the point $`(1,0,0,0)`$ in the hyperboloid model, the point at the radius of half volume ($`0.61`$ in units of the curvature radius $`R_{curv}`$) towards the cusp from $`(1,0,0,0)`$, and the point at a distance equal to the curvature radius towards the cusp from $`(1,0,0,0)`$. The point $`(1,0,0,0)`$ is at the center of the Dirichlet domain we considered, at a distance $`0.58R_{curv}`$ from the nearest face. It is positioned as far from the small regions down the cusp as possible, so it has the smallest amount of its SLS tiled and the largest cusp wavelength cutoff of the points in the manifold. The radius of half volume is calculated by determining the radius of a sphere centered on $`(1,0,0,0)`$ whose intersection with the Dirichlet domain has a volume half that of the manifold. The point at the radius of half volume is closer to the small cusp regions than approximately half of the points in the manifold, so it has a spot size and wavelength cutoff which we take to be representative of an average point of the manifold. It is a distance $`0.33R_{curv}`$ from the nearest face. The point at the curvature radius distance is well down the cusp, with a distance of only $`0.22R_{curv}`$ from the nearest face, so we will use it to estimate the spot size and shortest wavelength cutoff of points in the manifold which are further down the cusp than average. 
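Before turning to the intersections with the SLS, note that the mode counting of Section 2 can be checked numerically. The short sketch below (added for illustration; it is not the authors' computation) builds the two translations $`T_1`$, $`T_2`$ of the hexagonal tiling, solves $`b_i\cdot T_j=2\pi \delta _{ij}`$ for the reciprocal vectors, and recovers $`k_{min}=\frac{2\pi }{L}\frac{2}{\sqrt{3}}`$ and $`\lambda _{max}=\frac{\sqrt{3}L}{2}`$ quoted above.

```python
import numpy as np

L = 1.0
T = np.array([[L, 0.0],
              [L / 2.0, np.sqrt(3.0) * L / 2.0]])   # rows: T1, T2
B = 2.0 * np.pi * np.linalg.inv(T).T                # rows: reciprocal vectors b1, b2

n, m = np.meshgrid(np.arange(-3, 4), np.arange(-3, 4))
k = n[..., None] * B[0] + m[..., None] * B[1]       # allowed wavevectors, cf. Eq. (31)
k_norm = np.linalg.norm(k, axis=-1)
k_min = k_norm[k_norm > 1e-12].min()

print(k_min / (2 * np.pi / L))       # 2/sqrt(3) ~ 1.155
print((2 * np.pi / k_min) / L)       # lambda_max / L = sqrt(3)/2 ~ 0.866
```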
The radius of the last scattering surface in units of the curvature scale is a function of $`\mathrm{\Omega }_0`$: $$R_{SLS}R_{curv}\mathrm{arccosh}\left(\frac{2\mathrm{\Omega }_0}{\mathrm{\Omega }_0}\right).$$ (32) Using $`\mathrm{\Omega }_0=0.3`$ and $`R_{curv}1`$, we get $`R_{SLS}2.4`$. (This is actually the radius of the particle horizon, which is marginally larger than the radius of the SLS.) In the domain centered on $`(1,0,0,0)`$ in the hyperboloid model, a point $`p`$ on the edge between the ideal vertices $`v_i`$ and $`v_j`$ satisfies the equation $`p(r)=e_{ij}\mathrm{cosh}r+t_{ij}\mathrm{sinh}r`$, where $`e_{ij}`$ is the center of the edge $`ij`$ and $`t_{ij}`$ is the unit vector tangent to the hyperboloid pointing along the edge. The $`e_{ij}`$ and $`t_{ij}`$ are: $$\begin{array}{cc}e_{12}=(\sqrt{\frac{3}{2}},0,\sqrt{\frac{1}{2}},0),\hfill & t_{12}=(0,\sqrt{\frac{1}{2}},0,\sqrt{\frac{1}{2}})\hfill \\ e_{13}=(\sqrt{\frac{3}{2}},0,0,\frac{1}{\sqrt{2}})\hfill & t_{13}=(0,\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}},0)\hfill \\ e_{14}=(\sqrt{\frac{3}{2}},\frac{1}{\sqrt{2}},0,0)\hfill & t_{14}=(0,0,\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})\hfill \\ e_{23}=(\sqrt{\frac{3}{2}},\frac{1}{\sqrt{2}},0,0),\hfill & t_{23}=(0,0,\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})\hfill \\ e_{24}=(\sqrt{\frac{3}{2}},0,0,\frac{1}{\sqrt{2}})\hfill & t_{24}=(0,\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}},0)\hfill \\ e_{34}=(\sqrt{\frac{3}{2}},0,\frac{1}{\sqrt{2}},0)\hfill & t_{34}=(0,\frac{1}{\sqrt{2}},0,\frac{1}{\sqrt{2}})\hfill \end{array}$$ (33) The edges between a finite vertex $`v_i`$ and an ideal vertex $`v_j`$ satisfy the equation $`p(r)=v_i\mathrm{cosh}r+t_{ij}\mathrm{sinh}r`$, where $`v_i`$ is the finite vertex and $`t_{ij}`$ is the unit vector tangent to the hyperboloid in the direction of the ideal vertex $`v_j`$. The $`v_i`$ are stated above in equation 2. The $`t_{ij}`$ are: $$\begin{array}{cc}t_{51}=(\frac{1}{4},\frac{7}{4\sqrt{3}},\frac{1}{4\sqrt{3}},\frac{1}{4\sqrt{3}}),\hfill & t_{52}=(\frac{1}{4},\frac{1}{4\sqrt{3}},\frac{1}{4\sqrt{3}},\frac{7}{4\sqrt{3}})\hfill \\ t_{53}=(\frac{1}{4},\frac{1}{4\sqrt{3}},\frac{7}{4\sqrt{3}},\frac{1}{4\sqrt{3}}),\hfill & t_{61}=(\frac{1}{4},\frac{1}{4\sqrt{3}},\frac{1}{4\sqrt{3}},\frac{7}{4\sqrt{3}})\hfill \\ t_{62}=(\frac{1}{4},\frac{7}{4\sqrt{3}},\frac{1}{4\sqrt{3}},\frac{1}{4\sqrt{3}}),\hfill & t_{64}=(\frac{1}{4},\frac{1}{4\sqrt{3}},\frac{7}{4\sqrt{3}},\frac{1}{4\sqrt{3}})\hfill \\ t_{72}=(\frac{1}{4},\frac{1}{4\sqrt{3}},\frac{7}{4\sqrt{3}},\frac{1}{4\sqrt{3}}),\hfill & t_{73}=(\frac{1}{4},\frac{1}{4\sqrt{3}},\frac{1}{4\sqrt{3}},\frac{7}{4\sqrt{3}})\hfill \\ t_{74}=(\frac{1}{4},\frac{7}{4\sqrt{3}},\frac{1}{4\sqrt{3}},\frac{1}{4\sqrt{3}}),\hfill & t_{81}=(\frac{1}{4},\frac{1}{4\sqrt{3}},\frac{7}{4\sqrt{3}},\frac{1}{4\sqrt{3}})\hfill \\ t_{83}=(\frac{1}{4},\frac{7}{4\sqrt{3}},\frac{1}{4\sqrt{3}},\frac{1}{4\sqrt{3}}),\hfill & t_{84}=(\frac{1}{4},\frac{1}{4\sqrt{3}},\frac{1}{4\sqrt{3}},\frac{7}{4\sqrt{3}})\hfill \end{array}$$ Every domain in the tiling can be obtained by applying an element of the transformation group of the tiling to the domain centered at $`(1,0,0,0)`$. Applying the transformation of a domain to these edge parametrizations yields a parametrization of the edges of that domain. The intersection of any edge with the SLS of any point $`P_c`$ can now be found by numerically solving the equation $`D(p(r),P_c)=2.4`$, where $`D(x,y)`$ is the distance between two points in the hyperboloid model and $`p(r)`$ is the parametrization of the edge. 
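A minimal version of this intersection search is sketched below. It is an illustration written for this text rather than the authors' code, and it assumes the $`(-,+,+,+)`$ convention for the Lorentz product; the edge data $`e`$ and $`t`$ would be taken from the parametrizations listed above, transformed by the appropriate group element.

```python
import numpy as np

def lorentz_dot(x, y):
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def dist(x, y):
    """Hyperbolic distance between two points on the upper hyperboloid sheet."""
    return np.arccosh(max(1.0, -lorentz_dot(x, y)))

def edge_point(e, t, r):
    """Point p(r) = e cosh r + t sinh r on an edge with midpoint e and tangent t."""
    return e * np.cosh(r) + t * np.sinh(r)

def sls_crossing(e, t, center, r_sls, r_lo=0.0, r_hi=10.0, tol=1e-10):
    """Bisect on D(p(r), center) - r_sls; returns None if there is no sign change."""
    g = lambda r: dist(edge_point(e, t, r), center) - r_sls
    if g(r_lo) * g(r_hi) > 0:
        return None
    while r_hi - r_lo > tol:
        r_mid = 0.5 * (r_lo + r_hi)
        if g(r_lo) * g(r_mid) > 0:
            r_lo = r_mid
        else:
            r_hi = r_mid
    return edge_point(e, t, 0.5 * (r_lo + r_hi))

omega0 = 0.3
r_sls = np.arccosh((2.0 - omega0) / omega0)    # ~2.4 in units of the curvature radius
```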
All intersections of edges with the SLS of each of the three points in the vicinity of cusp 1 of the domain centered at $`(1,0,0,0)`$ were calculated. The results for the point $`(1,0,0,0)`$ are shown in figure 2, which plots the pattern of domain edge intersections as seen on the night sky. The connected points are intersections of the edges of the Dirichlet domain with the SLS. The large dot in the center is the geodesic traveling straight down the cusp. The line segments from the cusp show the scale of the diagram, and are of length $`4.8`$ and $`8.8`$ degrees. The diagram shows that the tiling of the sky within a half-angle of $`4.8`$ degrees has little distortion and will have modes similar to the horosphere. The outer hexagons shown are falling back into the large part of the manifold, and can no longer be approximated by tiled horosphere hexagons. The side length of the central hexagon is $`0.65`$ degrees. The half volume point and the curvature radius point have disks with radii of $`5.4`$ and $`6.2`$ degrees and central hexagons with side lengths of $`0.35`$ and $`0.24`$ degrees. We used these hexagon side lengths $`l_h`$ for the scale $`L=2\sqrt{3}l_h`$ of the tiling of the horosphere to find $`k_{min}=\frac{2\pi }{L}\frac{2}{\sqrt{3}}`$ and $`\lambda _{max}=\frac{2\pi }{k_{min}}`$ for the modes on the horosphere. The point $`(1,0,0,0)`$ has a $`\lambda _{max}=2.0`$ degrees, so the longest wavelength mode on the SLS for the point $`(1,0,0,0)`$ in the vicinity of the cusp is approximately $`2.0`$ degrees. In the absence of non-trivial topology, this region will have the modes of a disc of radius $`4.8`$ degrees, which have wavelengths $`\lambda _n=\frac{2\pi \times 4.8}{J_{m,n}}`$ degrees, where $`J_{m,n}`$ is the $`n`$th zero of the $`m`$th cylindrical Bessel function. The longest wavelength mode will have $`\lambda _{max}=12.5`$ degrees. Comparing the longest wavelength of the drum, $`12.5`$ degrees, and the longest wavelength of the horosphere, $`2.0`$ degrees, shows that the topology reduces the longest wavelength to about $`0.16`$ of its expected value. So the SLS of the point $`(1,0,0,0)`$ exhibits a flat spot of approximately $`4.8`$ degrees. The half volume point has a longest horosphere wavelength of $`1.05`$ degrees and a longest disc wavelength of $`14.1`$ degrees. Here the longest wavelength is reduced to about $`0.07`$ of its expected value on a spot of half angle $`5.4`$ degrees. The curvature radius point has horosphere modes of longest wavelength $`0.72`$ degrees and disc modes of longest wavelength $`16.1`$ degrees. The longest wavelength is reduced to $`0.04`$ of its normal value on a spot of half angle $`6.2`$ degrees. These results depend on having a relatively small value of $`\mathrm{\Omega }_0`$. As an example of this, the corresponding calculations using this method with $`\mathrm{\Omega }_0=0.9`$, for which $`R_{SLS}\approx 0.65R_{curv}`$, yield a null result for flattening. The point $`(1,0,0,0)`$ has only one hexagon in its tiling, with a side length of $`40`$ degrees. The half volume point has seven hexagons in its tiling, with a central side length of $`19`$ degrees and serious distortions of the outer hexagons. The curvature radius point also has a distorted tiling of seven hexagons, with a central side length of $`11.4`$ degrees. None of these points exhibits an extensive regular tiling of the SLS in the direction of the cusp, so our calculations do not predict a flat spot due to the cusp for $`\mathrm{\Omega }_0=0.9`$.
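The suppression factors quoted above follow from two numbers per observation point: the longest wavelength on the tiled horosphere patch, $`\lambda _{max}=3l_h`$, and the longest mode of an untiled disc of the same angular radius, set by the first zero of $`J_0`$. The sketch below (an added illustration, assuming SciPy is available; the half-angles and hexagon side lengths are copied from the text) reproduces the ratios 0.16, 0.07 and 0.04.

```python
import numpy as np
from scipy.special import jn_zeros

j01 = jn_zeros(0, 1)[0]                        # first zero of J_0, ~2.405

points = {                                     # (spot half-angle, hexagon side) in degrees
    "(1,0,0,0)":        (4.8, 0.65),
    "half volume":      (5.4, 0.35),
    "curvature radius": (6.2, 0.24),
}
for name, (radius, l_hex) in points.items():
    lam_disc = 2.0 * np.pi * radius / j01      # longest mode without topology
    lam_horo = 3.0 * l_hex                     # longest mode on the tiled patch
    print(f"{name}: {lam_horo:.2f} vs {lam_disc:.1f} deg, ratio {lam_horo / lam_disc:.2f}")
```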
Finally, Gaussian random fields do have flat spots, arising purely from statistical fluctuations. However, in order for a statistical flat spot to be confused with one of topological origin, many modes would have to have an amplitude much smaller than the mean. For example, in order to create a spot in which the wavelength of the longest observed mode is only 7% of the expected value approximately $`600`$ modes would have to have statistically small amplitudes. (Since $`\pi \times \left(\frac{100}{7}\right)^2600`$.) ## 4 Conclusion In reference , the horn topology was shown to have flat spots which could, in principle cover a large portion of the CMBR. By estimating the modes on the surface of last scatter for a point at half volume down the cusp, we have found that the cusped manifold m003 from the Snappea census has a flat spot of about five degrees with longest wavelengths cut to about $`0.07`$ of normal when $`\mathrm{\Omega }_0=0.3`$ for an average observer. Calculations with two other points show that the flat spot is larger and the wavelength cutoff is more pronounced at points farther down the cusp and smaller and less pronounced for points nearer the center of the manifold. The calculations suggest that similar spots will be seen in any cusped manifold at points which are close enough to a cusp. A cusped manifold will only be able to avoid having spots by being large enough that most points are far from cusps, so any small volume cusped hyperbolic manifold should have observable flat spots. Such flat spots are unlikely to be mere statistical fluctuations of the temperature field. This supports visible flat spots in the CMBR fluctuation maps as a likely, though not necessarily automatic, feature in a hyperbolic universe with non-trivial topology.
no-problem/0001/hep-lat0001001.html
ar5iv
text
# 1 Introduction ## 1 Introduction If we construct a lattice fermion formulation, there are a number of goals to be considered: doubling should be avoided; even at finite lattice spacing $`a`$, we want to represent chiral symmetry in a sound way; and we are seeking a good scaling behavior. Conceptually we have to require locality (the lattice Dirac operator $`D(x,y,U)`$ has to decay at least exponentially in $`|xy|`$ ). In addition, for practical purposes we desire a high level of locality, i.e. a fast exponential decay or even ultralocality (which means that the couplings in $`D`$ drop to zero beyond a finite number of lattice spacings). A further issue is a good approximation to rotation invariance. Last but not least, the formulation should be simple enough to allow for efficient simulations. Here we report on a construction, which is designed to do justice to all of these goals. ## 2 Ginsparg-Wilson fermions (an unconventional introduction) For a lattice Dirac operator $`D`$, full chiral invariance ( $`\{D,\gamma _5\}=0`$ ) is incompatible with other basic requirements (Hermiticity, locality, absence of doublers, discrete translation invariance) . Therefore we only implement a modified chiral symmetry, which does allow $`D`$ to fulfill those requirements. For such a modified chiral transformation we start from the ansatz $$\overline{\psi }\overline{\psi }(1+ϵ[1F]\gamma _5),\psi (1+ϵ\gamma _5[1G])\psi ,$$ (1) $`ϵ`$ being an infinitesimal transformation parameter. The transformation, and therefore $`F`$ and $`G`$ should be local, and $`F,G=O(a)`$, so that we reproduce the full chiral symmetry in the (naive) continuum limit. <sup>1</sup><sup>1</sup>1For convenience, we set $`a=1`$ in the formulae (on an isotropic Euclidean lattice), but we classify the terms nevertheless by the order of $`a`$ that they (would) belong to. Invariance of the Lagrangian $`\overline{\psi }D\psi `$ holds to $`O(ϵ)`$ if <sup>2</sup><sup>2</sup>2In our short-hand notation, the ’products’ are convolutions in c-space. $$\{D,\gamma _5\}=F\gamma _5D+D\gamma _5G.$$ (2) This implies a continuous modified chiral symmetry, which has the full number of generators. It may be compared to the remnant chiral symmetry of staggered fermions: there the doubling problem is not solved, and one is only left with a $`U(1)U(1)`$ symmetry, which does, however, protect the mass from additive renormalization. The same can be shown here if we assume “$`\gamma _5`$-Hermiticity”, $`D^{}=\gamma _5D\gamma _5`$, and we choose $`F=DR`$, $`G=RD`$, where $`R`$ is local again, non-trivial and $`[R,\gamma _5]=0`$ (this generalizes Ref. ). Then eq. (2) turns into the Ginsparg-Wilson relation (GWR) <sup>3</sup><sup>3</sup>3$`\gamma _5`$-Hermiticity is essentially inevitable for any sensible solution, but if we want to formulate the GWR even without this assumption, then it reads $`\{D,\gamma _5\}=2DR\gamma _5D`$. This follows from the immediately obvious prescription $`\{D^1,\gamma _5\}=2R\gamma _5`$. Alternatively, if we require $`D`$ to be normal we arrive at $`\{D,\gamma _5\}=2RD\gamma _5D`$ . However, for the results presented in Sec. 3, 4 this doesn’t matter, since we always use $`R_{x,y}\delta _{x,y}`$. $$D+D^{}=2D^{}RD,$$ (3) and it implies the absence of additive mass renormalization (see also Ref. 
) since $$(\sqrt{2R}D^{}\sqrt{2R}1)(\sqrt{2R}D\sqrt{2R}1)=1.$$ (4) As another illustration we can write the GWR as $`\{D^1,\gamma _5\}=2R\gamma _5`$, and we see that a local term $`R`$ does not shift the poles in $`D^1`$ (in contrast to the cases where $`\{D,\gamma _5\}/2`$ is local, such as a mass or a Wilson term). <sup>4</sup><sup>4</sup>4An analogous treatment is also conceivable in the continuum. For comments related to the Pauli-Villars regularization, see Ref. . In dimensional regularization $`\gamma _5`$ is a notorious trouble-maker. It may be useful to substitute it by operators $`(1F)\gamma _5`$ resp. $`\gamma _5(1G)`$ at the suitable places. In $`d+\epsilon `$ dimensions ($`d`$ even), $`F`$ and $`G`$ could take the form $`\epsilon DR/\mu `$ resp. $`\epsilon RD/\mu `$ (where $`R`$ is some local term, and $`\mu `$ is the usual scale in dimensional regularization). We expect the chiral anomaly to be reproduced correctly as $`\epsilon 0`$. \[With respect to the general ansatz, we have to require the right-hand side of $`D^1+D^1=D^1\gamma _5F\gamma _5+GD^1=D^1F+\gamma _5G\gamma _5D^1`$ to be local.\] $`\gamma _5`$-Hermiticity implies $`R^{}=R`$. If we now start from some lattice Dirac operator $`D_0`$ (obeying the assumption of the Nielsen-Ninomiya theorem such as absence of doublers, but otherwise quite arbitrary), we can construct a Ginsparg-Wilson operator $`D`$ from it by enforcing eq. (4) as $$D=\frac{1}{\sqrt{2R}}\left[1+\frac{A}{\sqrt{A^{}A}}\right]\frac{1}{\sqrt{2R}},A:=\sqrt{2R}D_0\sqrt{2R}1.$$ (5) This is the generalization of the “overlap formula”, which uses the “standard GW kernel” $$R_{x,y}^{(st)}:=\frac{1}{2}\delta _{x,y},$$ (6) and which leads from the Wilson fermion $`D_0=D_W`$ to the Neuberger fermion $`D=D_{Ne}`$ . In the general solution of the GWR, eq. (5), we can obviously vary the parameters in $`D_0`$ or in $`R`$ (or in both) in many ways, without violating their required properties. This shows that there exists a continuous set of GWR solutions $`D`$ in the space of coupling parameters. Any solution of the GWR is related to a fully chirally invariant Dirac operator $`D_\chi =D(1RD)^1`$, which is, however, non-local (in the free case, $`D_\chi (p)`$ has poles, cf. eq. (4)). Vice versa, if we start from some $`D_\chi `$ with this type of non-locality (such as the Rebbi fermion , for example) we can construct a GW solution $`D=D_\chi (1+RD_\chi )^1`$, which is local, at least in the free and weakly interacting case. The mechanism of providing locality by inserting a local term $`R0`$ is known from the framework of perfect actions, where the factor $`R^1`$ occurs in a Gaussian block variable renormalization group transformation term of the fermions . Hence $`R0`$ corresponds to a $`\delta `$ function block variable transformation, and the corresponding perfect action has a Rebbi-type non-locality. The transition to locality requires the superficial breaking of the full chiral symmetry, $`R0`$: chirality is manifest in the action only in the sense of the GWR, but it is fully present in the physical observables . <sup>5</sup><sup>5</sup>5Such a superficial symmetry breaking in the transformation term is not necessary in order to preserve supersymmetry in a RGT, an hence in a perfect action . A. Thimm also commented on an analogous treatment of further symmetries . In contrast to the Rebbi fermion , the axial anomaly is correctly reproduced for the perfect action at any local term $`R`$, including the perfect $`D_\chi `$ (for $`R=0`$). 
This should also be checked if one generally wants to use $`D_\chi `$ in an indirect way , by measuring the right-hand side of $`D_\chi ^1=D^1R`$. By introducing a non-trivial kernel $`R`$ we have relaxed the condition of chiral symmetry somewhat — without doing harm to the physical properties related to chirality — and this allows for locality of $`D`$ (as well as the absence of doublers etc.) <sup>6</sup><sup>6</sup>6This hold at least as long as the gauge background is smooth. At very strong coupling, locality is uncertain, and also the doubling problem can return, see Subsec. 4.2. without contradiction to the Nielsen-Ninomiya theorem. In the case of the Neuberger fermion $`D_{Ne}`$, locality has been demonstrated in a smooth gauge background. In particular, zero eigenvalues in $`A^{}A`$ are excluded if the inequality (in $`d`$ dimensions) $$1P<\frac{b}{d(d1)}$$ (7) holds for any plaquette variable $`P`$ and a suitable bound $`b`$. From Ref. we obtain $`b=0.4`$, which has recently been improved to $`b=(1+1/\sqrt{2})^10.586`$ . Still, this constraint is somewhat inconvenient; for instance, at least one eigenvalue of $`A^{}A`$ has to cross zero if we want to change the topological sector. Furthermore, the GWR allows for locality only in the sense that the couplings in $`D(x,y,U)`$ decay exponentially, but not for ultralocality. This was conjectured intuitively in Ref. . To demonstrate this No-Go rule for GW fermions, it is sufficient to show it for the free fermion. A proof, which was specifically restricted to $`R=R^{(st)}`$, has been given in Ref. . By now, a complete proof covering all (local) GW kernels $`R`$ has been added, hence this rule is completely general . In that context, it is amusing to reconsider the ordinary Wilson fermion. From the mass shift we know that it is certainly not a GW fermion in general, and according to our No-Go rule not even the free Wilson fermions can obey any GWR. Indeed, if we insert the free $`D_W`$ into the GWR and solve for $`R_{x,y}`$, we find that it decays $`|xy|^4`$ in $`d=2`$, and $`|xy|^6`$ in $`d=4`$, which is nonlocal and therefore not a GW kernel. The exponential decay of $`D`$ is satisfactory from the conceptual point of view, but the existence of couplings over an infinite range is a problem for practical purposes. One would hope for a high degree of locality at least, i.e. for a fast exponential decay. Almost all the literature on overlap fermions solely deals with the Neuberger fermion, but it turns out that the couplings in $`D_{Ne}`$ do not decay as fast as one would wish, see Fig. 5. Moreover, also other properties listed in Sec. 1 — most importantly scaling, but also approximate rotational invariance — are unfortunately rather poor. This is obvious even from the free fermion, see Figs. 2, 5. On the level of the action, there are generally no $`O(a)`$ artifacts for GW fermions, because any additional clover term violates the GWR , but from the free case we see already that the $`O(a^2)`$ scaling artifacts in $`D_{Ne}`$ are large. <sup>7</sup><sup>7</sup>7Our study of the 2d and 4d free fermion, as well as the 2d interaction fermion, show consistently that the artifacts in $`D_{Ne}`$ are even worse than those in $`D_W`$. Ref. is more optimistic about the scaling of the Neuberger fermion (but it does not include a comparison to other types of lattice fermions). 
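The decay of the couplings can be quantified with a simple point-source test of the kind underlying Fig. 5: apply the operator under study to a unit source and record the maximal response at each lattice distance. The sketch below is a generic illustration added here — `apply_D` stands for whatever (matrix-free) implementation of the operator one wants to probe and is not defined in the original text.

```python
import numpy as np

def maximal_response(apply_D, lattice_shape, source_site, n_spin=4):
    """Return pairs (r, max_y |psi(y)|) with psi = D applied to a point source at x."""
    src = np.zeros(lattice_shape + (n_spin,), dtype=complex)
    src[source_site + (0,)] = 1.0
    psi = apply_D(src)
    profile = {}
    for site in np.ndindex(*lattice_shape):
        # shortest separation from the source on the periodic lattice
        d = np.linalg.norm([min(abs(s - x), n - abs(s - x))
                            for s, x, n in zip(site, source_site, lattice_shape)])
        r = round(float(d), 6)
        profile[r] = max(profile.get(r, 0.0), float(np.max(np.abs(psi[site]))))
    return sorted(profile.items())
```

An exponential fall-off of this profile is the locality statement above; a slow fall-off is precisely the long-range tail that makes the Neuberger fermion comparatively unpleasant to work with.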
And yet its simulation is tedious (the quenched case requires already a similar effort as simulating $`D_W`$ with dynamical fermions ), allowing only for the use of small lattices. However, $`D_{Ne}`$ arises only from a very special choice in a large class of GWR solutions described by eq. (5), namely $`D_0=D_W`$ and $`R=R^{(st)}`$. The message of this report is that there are better options, and we are going to show in Sec. 3, 4 how improved overlap fermions can be constructed, tested and applied. This generalization and the improvement concept for overlap fermions was introduced in Ref. . It was extensively tested in the framework of the Schwinger model . ### 2.1 THE CONCEPT OF IMPROVING OVERLAP FERMIONS We first summarize the main idea: if $`D_0`$ happens to be a GW operator already (with respect to a fixed term $`R`$), then eq. (5) yields $`D=D_0`$; the operator reproduces itself. Therefore, any GW fermion — such as the perfect or the classically perfect fermion — is automatically an overlap fermion too. The construction of a classically perfect action for asymptotically free models only requires minimization — no (numeric) functional integral, in contrast to the perfect action — and it still has the additional virtues of excellent scaling and rotation invariance. Unfortunately, a really powerful quasi-perfect action is not available so far for interacting fermions in $`d=4`$. However, if we manage to construct at least an approximate GW fermion, then we can expect it to change only modestly if we insert it as $`D_0`$ in the overlap formula (for the corresponding $`R`$), $`DD_0`$. If our approximate GW fermion is in addition short-ranged, then we can expect $`D`$ to have a high degree of locality, since the long distance couplings are turned on just a little in $`D`$. (Also $`D_0=D_W`$ is short-ranged, but since this is far from a GW fermion, it changes a lot in the overlap formula, and those long distance couplings cannot be predicted to be tiny). Similarly, if $`D_0`$ scales well, then we can expect this quality to be essentially preserved in $`D`$ if $`DD_0`$, and the same argument applies to the approximate rotation invariance. In Sec. 3 we discuss examples for promising approximate GW operators. In Sec. 4 they are transformed into exact GW fermions, and the above predictions are verified. ## 3 Short-ranged approximate Ginsparg-Wilson fermions Perfect free fermions can be constructed and parameterized in c-space explicitly . If we choose the term $`R`$ such that the locality is optimal at mass zero, then we arrive at the standard form $`R^{(st)}`$. To make such a fermion tractable, first of all its couplings have to be truncated to a short range. A truncation to couplings inside a unit hypercube (“hypercube fermion”, HF) was performed in Ref. by means of periodic boundary conditions. (Alternative 4d truncated perfect HFs can be found in Refs. .) Truncation causes some scaling artifacts, but they are small, so that the free HF is still strongly improved over the Wilson fermion. This can be observed from the dispersion relation as well as thermodynamic scaling ratios . At the same time, truncation also implies a (small) violation of the GWR, in agreement with the absence of ultralocal GW fermions. The spectrum of a GW fermion with $`R_{x,y}=\delta _{x,y}/(2\mu )`$, ($`\mu >0`$) is situated on a circle in $`CI`$ with center and radius $`\mu `$ (GW circle), see eq. (4). 
Hence we can test the quality of our approximate GW fermion (with respect to $`R^{(st)}`$) by checking how close its spectrum comes to a unit circle. This is shown in Fig. 1 for the 4d HF of Ref. on a $`20^4`$ lattice. We see that we have a good approximation, especially in the physically important regime of eigenvalues close to 0. In $`d=2`$ we start off from a similar massless HF, which is optimized for its scaling behavior; its set of couplings is given in Ref. , and its strong improvement over the Wilson fermion is visible in Fig. 2. <sup>8</sup><sup>8</sup>8As an alternative, one could directly optimized the chiral properties by minimizing the violation of the GWR . However, those properties can still be corrected later on by means of the overlap formula, whereas no such tool is available to correct the scaling behavior. Hence it is important to optimize scaling — rather than chirality — from the beginning. Moreover, if one constructs a truncated approximate GW fermions in a short range, there is no direct control over the level of locality of the full GW fermion that one approximates. Actually it is not even certain if the GW solution that one would obtain by gradually extending the range to infinity is local at all, since also non-local solutions of the GWR exist. A simple example for that can be constructed from the SLAC fermion . The Dirac operator $`D=D_{SLAC}(1+RD_{SLAC})^1`$ solves the GWR, but it is non-local. The free spectrum of the “scaling optimal hypercube fermion” (SO-HF) is again close to a unit circle, on the same level as the 4d spectrum in Fig. 1. There are also other ways to see that we are in the vicinity of a GW fermion, for instance by summing over its violation (squared) in each site , or by inserting the HF into the GWR and solving for $`R`$: for the SO-HF, $`R_{x,y}`$ decays about 6 times faster than the corresponding (pseudo-)$`R`$ for $`D_W`$. If we compare the truncated perfect HF to $`D_W`$, then this factor even amounts to $`71`$ in $`d=2`$, and to $`75`$ in $`d=4`$. We now proceed to the 2-flavor Schwinger model, and we gauge the SO-HF by attaching the couplings to the shortest lattice paths only. When there are several shortest paths, the coupling is split and attached to them in equal parts. Moreover we add a clover term with coefficient 1. For the gauge part we use the standard plaquette action (which is perfect for 2d pure $`U(1)`$ gauge theory ). This simple “gauging by hand” causes a further deviation from the GWR, which is increasingly manifest if the gauge background becomes rougher. In Fig. 3 we show the spectra for typical configurations at $`\beta =2`$ and at $`\beta =6`$. It turns out, however, that the SO-HF is indeed an approximate GW fermion up to a considerable couplings strength. Next we test the scaling behavior in the presence of gauge interaction. Our simulation results (here and below) were obtained in collaboration with I. Hip. They are based on 5000 quenched configurations on a $`16\times 16`$ lattice at $`\beta =6`$ (we use the same set of configurations for all types of fermions), but the evaluation does include the fermion determinant, following Ref. . Fig. 4 shows the dispersion relations for the two meson-type states, a massless triplet and one massive mode , which we denote as $`\pi `$ and $`\eta `$ (by analogy). Again the SO-HF is drastically improved over the Wilson fermion (at $`\kappa _c=0.25927`$, from Ref. ). 
The SO-HF even reaches the same level as a (very mildly truncated) classically perfect action , which was parameterized by 123 independent couplings per site, whereas only 6 such couplings are used for the SO-HF. Therefore, it is realistic to extend the HF formulation to QCD, and in fact it has been shown already that minimally gauged 4d HFs can indeed be applied in QCD simulations . However, in QCD as well as in the Schwinger model, we observed a strong additive mass renormalization as an unpleasant feature of the directly applied HF. Using the SO-HF, even at $`\beta =6`$ the $`\pi `$ mass is renormalized from 0 to 0.13, as Fig. 4 shows. This corresponds to a lowest real eigenvalue around 0.03, cf. Fig. 3. As an even more striking example, the 4d massless HF, minimally gauged by hand and applied to QCD at $`\beta =5`$, leads to a “pion mass” of 3.0 , and at $`\beta =6`$ the critical bare HF mass amounts to $`-0.92`$ (which is unfavorable for the level of locality before truncation, and hence for the magnitude of the truncation effects). We overcome this problem in the next section by inserting the SO-HF into the overlap formula. \[As a further alternative to negative bare mass and overlap, we can reach the exact chiral limit solely by the use of fat links . This is essentially equivalent to gauging the HF such that the interacting GWR is just violated modestly. As yet another possibility one may attach an amplification factor $`>1`$ to each link .\] ## 4 Improved overlap fermions We now perform the second step in our program and insert the SO-HF — which is an approximate GW fermion with an excellent scaling behavior — into the overlap formula (5) with $`R=R^{(st)}`$ given in eq. (6). This leads to an exact GW fermion and therefore to all the nice properties related to chirality, which are extensively discussed in the recent literature on GW fermions: correct anomalies; no renormalization of the zero mass, of the vector current, or of the flavor non-singlet axial vector current; no mixing of weak matrix elements; no exceptional configurations ; correctly reproduced chiral symmetry breaking etc. Our first prediction was that the level of locality should be improved over the Neuberger fermion, and this is clearly confirmed both in the free and in the interacting case. Fig. 5 compares the decay of the couplings of the free fermion (left) and the decay of the “maximal correlation” $`f`$ over a certain distance $`r`$ — as suggested in Ref. — at $`\beta =6`$ (right). <sup>9</sup><sup>9</sup>9One puts a unit source at some site $`x`$ and defines $`f(r):=\mathrm{max}_y\{|\psi (y)|\text{ for }|x-y|=r\}`$. (In the latter figure one could still try to improve the locality in both cases by deviating from $`R^{(st)}`$ and tuning $`R`$ for optimal locality.) Next we want to verify if the good scaling quality survives the modification due to the overlap formula. For the free fermions, this is confirmed in an impressive way, see Fig. 2. <sup>10</sup><sup>10</sup>10The dispersion curves of the overlap fermions just stop inside the Brillouin zone due to the square root. The end-points can be shifted by choosing $`R_{x,y}=\delta _{x,y}/(2\mu )`$ and varying $`\mu `$. We show $`\mu =1`$ which is a reasonable choice; if we push the end-point too far, the danger of doubling increases. On the other hand, the regime of small momenta is needed, of course.
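The “maximal correlation” test of footnote 9 is easy to mock up in the free case. The sketch below (mine, free field only, small lattice) constructs the free 2d Wilson operator as a matrix, turns it into the Neuberger operator exactly, applies it to a unit source and prints $`f(r)`$ as a function of the taxi-driver distance $`r`$; the weights fall off exponentially, and repeating the exercise with an approximately GW $`D_0`$ in place of $`D_W`$ is what produces the faster decay shown in Fig. 5.

```python
# Free-field toy version of the locality test of footnote 9 (illustration only).
import numpy as np

L = 12
g = [np.array([[0, 1], [1, 0]], complex), np.array([[0, -1j], [1j, 0]], complex)]
S = np.roll(np.eye(L), -1, axis=0)                 # (S psi)(x) = psi(x + 1), periodic
Ix = np.eye(L)
kron3 = lambda a, b, c: np.kron(a, np.kron(b, c))
N = 2 * L * L

# D_W = 2 - 1/2 sum_mu [ (1 - gamma_mu) T_mu + (1 + gamma_mu) T_mu^dagger ]
T = [kron3(S, Ix, np.eye(2)), kron3(Ix, S, np.eye(2))]     # shifts in x and y (spin last)
DW = 2.0 * np.eye(N, dtype=complex)
for mu in range(2):
    G = kron3(Ix, Ix, g[mu])
    DW -= 0.5 * ((np.eye(N) - G) @ T[mu] + (np.eye(N) + G) @ T[mu].conj().T)

A = DW - np.eye(N)
w, V = np.linalg.eigh(A.conj().T @ A)              # A^dagger A > 0 in the free case
Dov = np.eye(N) + A @ ((V * w**-0.5) @ V.conj().T)  # Neuberger overlap operator

src = np.zeros(N); src[0] = 1.0                    # unit source at the origin
psi = np.abs(Dov @ src).reshape(L, L, 2).sum(axis=2)
for r in range(L // 2 + 1):
    f = max(psi[x, y] for x in range(L) for y in range(L)
            if min(x, L - x) + min(y, L - y) == r)
    print("r = %2d   f(r) = %.3e" % (r, f))
```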
On fine lattices, the scaling is practically identical before and after the use of the overlap formula; only when the lattice becomes really coarse does the overlap do some harm to the scaling quality at some point. In the presence of gauge interaction, we consider again the “meson” dispersions, and we observe again the persistence of the improvement, see Fig. 6. Furthermore, continuous rotation invariance is approximated much better for the HF than for the Wilson fermion (after all, also this holds exactly for the perfect fermion ). In Ref. this was also tested in the interacting case by measuring how smoothly correlations decay with the Euclidean distance, and we observed that the SO-HF is again by far superior to the Wilson fermion; once more it reaches the same level as the classically perfect action. The overlap formula (5) suggests that this property is essentially inherited for the overlap fermions, and indeed we observed a similar level for the Wilson fermion and the Neuberger fermion on one hand, and for the SO-HF and the overlap SO-HF on the other hand. This confirms that also the strongly improved approximate rotational invariance of the SO-HF survives if it is turned into an overlap fermion. For the free fermion, this progress can also be observed in Fig. 5 (left) from the width of the “cones”. ### 4.1 Chiral correction in terms of a power series In $`d=2`$ we can afford an exact evaluation of the notorious square root in the overlap formula, but in QCD this is not feasible any more. In the recent literature, a number of iterative procedures have been suggested for the Neuberger fermion . In Ref. we presented a new method, which is very simple and robust, and which is especially designed for the case where $`D_0`$ is an approximate GW fermion already. We evaluate the square root in $$D=1+\frac{A}{\sqrt{A^{\dagger }A}}\,,\qquad A:=D_0-1$$ (8) as a power series in $`\epsilon :=A^{\dagger }A-1`$. For $`D_0=D_{HF}`$ we have $`\epsilon \ll 1`$ (if the configuration is not extremely rough), hence the expansion converges rapidly. On the other hand, for the case of the Neuberger fermion, i.e. for $`D_0=D_W`$, this expansion fails to converge even in the free case, which is presumably the reason why it had not been considered in the earlier literature. We call this method a “perturbative chiral correction”, where the perturbative expansion refers to the GWR violation $`\epsilon `$ (and not to the coupling $`g`$). It yields a Dirac operator of the form $$D_{p\chi c}=1+AY.$$ (9) For the chiral correction to $`O(\epsilon ^n)`$, $`Y`$ is a polynomial in $`A^{\dagger }A`$ of order $`n`$, for instance, $$Y=[3-A^{\dagger }A]/2\quad \mathrm{for}\ n=1,$$ (10) $$Y=[15-10A^{\dagger }A+3(A^{\dagger }A)^2]/8\quad \mathrm{for}\ n=2,\ \mathrm{etc}.$$ Hence the computational effort amounts roughly to $`1+2n`$ matrix-vector multiplications (the matrix being $`A`$ or $`A^{\dagger }`$), i.e. it increases only linearly (though the convergence is also just linear). Actually this represents a fermion with couplings of range $`1+2n`$ in each component, and it would be very tedious to implement it explicitly, even for $`n=1`$. However, due to its specific form we never need to do so; $`D_{p\chi c}`$ can always be evaluated by iteration of the above matrix-vector products. The crucial question now is if the first few orders are sufficient already to do most of the chiral correction. We first look at the free SO-HF, and Fig. 7 (left) shows that the first order alone does practically the full job.
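The convergence claim is easy to check on a toy matrix. The sketch below (mine, not production code) builds a random $`A`$ with $`A^{\dagger }A`$ close to 1 — mimicking an approximate GW operator — and compares $`1+AY`$ at orders $`n=1,2`$ of eq. (10) with the exactly evaluated overlap operator of eq. (8).

```python
# Toy check of the perturbative chiral correction, eqs. (8)-(10) (illustration only).
import numpy as np

rng = np.random.default_rng(0)
n = 40

def random_unitary(m):
    Q, _ = np.linalg.qr(rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m)))
    return Q

# A = U S with S hermitian and eigenvalues near 1, so eps = A^dag A - 1 = S^2 - 1 is small.
U, V = random_unitary(n), random_unitary(n)
S = (V * (1.0 + 0.1 * rng.uniform(-1, 1, size=n))) @ V.conj().T
A = U @ S

AA = A.conj().T @ A
w, W = np.linalg.eigh(AA)
D_exact = np.eye(n) + A @ ((W * w**-0.5) @ W.conj().T)      # exact square root

Y1 = (3 * np.eye(n) - AA) / 2                               # O(eps)
Y2 = (15 * np.eye(n) - 10 * AA + 3 * AA @ AA) / 8           # O(eps^2)
for k, Y in ((1, Y1), (2, Y2)):
    err = np.linalg.norm(np.eye(n) + A @ Y - D_exact, 2)
    print("order %d:  ||D_pchic - D_exact|| = %.1e" % (k, err))
```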
In this context, we also obtain a geometric picture of the effect of the overlap formula: for $`R_{x,y}=\delta _{x,y}/(2\mu )`$ it can be viewed as a projection of the eigenvalues onto the circle with center and radius $`\mu `$. This projection is often close to radial. In the interacting case, the convergence to the circle under iteration is slowest in the arc around 0. As an example, we show in Fig. 7 (right) a histogram of the small real eigenvalues at $`\beta =6`$. We see that the mass renormalization is removed almost completely if we proceed to $`O(\epsilon ^2)`$. ### 4.2 Behavior in extremely rough gauge configurations For smooth configurations, it is easy to find a parameter $`\mu `$ — and hence a GW circle where the spectrum is mapped on — such that the small (large) real eigenvalues are mapped on 0 ($`2\mu `$). This leaves the index unchanged (though it is defined by exact zero modes now) and it provides a sensible definition of the topological charge via the index theorem . It also means that the doubling problem is safely avoided for all typical configurations at moderate or large $`\beta `$. However, if we dare proceeding to extremely strong coupling, then it is not possible any more to find such a center $`\mu `$ of a GW circle, which does the right mapping for all typical configurations. In fact, for extremely rough QCD test configurations (on very small lattices) it was observed explicitly that all the eigenvalues of the minimally gauged $`D_{HF}`$ are close to the arc of the GW circle, which is opposite to 0 . For $`D_W`$ the eigenvalues are densely scattered over a wide area with a large real part. Examples for such spectra of an extremely rough configuration are shown in Fig. 8, which was provided by N. Eicker, I. Hip and Th. Lippert. At a coupling strength where such configurations are frequent, the doubling problem is back for those overlap fermions, which are constructed from some simple $`D_0`$ (for the Schwinger model, this problem sets in around $`\beta 1`$ ). This agrees with the result of a strong coupling expansion (in the Hamiltonian formulation) which applies to $`D_{Ne}`$ . <sup>11</sup><sup>11</sup>11For a Euclidean strong coupling hopping parameter expansion of $`D_{Ne}`$, see Ref. . In such cases, the construction of $`D_{Ne}`$ maps all (almost) real eigenvalues onto the arc close to 2, hence additive mass renormalization is back as well — again in agreement with Ref. . In view of the latter point one could be tempted to just use a large mass parameter $`\mu >1`$; this does not help, however, with respect to the doubling problem. At very strong coupling, only the (classically) perfect action would help. As we mentioned earlier, also locality is in danger in that regime , and we should therefore keep away from it. As one more advantage of choosing $`D_0`$ to be an approximate GW fermion, the regime of $`\beta `$ where we are (statistically) on safe grounds is enlarged compared to the Neuberger fermion. Of course, in the safe regime where $`\beta `$ is large enough (in QCD this includes $`\beta =6`$ for sure ) the chiral correction of $`D_0=D_{HF}`$ can also be carried out by iteration methods different from the one described in the previous Subsection, for alternative experiments see Ref. . The efficiency in QCD is still to be compared, but for sure in any method the convergence will be much faster for $`D_0=D_{HF}`$ than for $`D_0=D_W`$. ## 5 Conclusions Our program outlined in Sec. 
2.1 has been realized in the Schwinger model, and the properties of a resulting improved overlap fermion have been tested extensively. They are all clearly superior to those of the Neuberger fermion, confirming our prediction: the overlap SO-HF scales much better, it is more local and it comes much closer to rotation invariance. The question now is the applicability of this program in $`d=4`$. The 4d HF formulation is worked out already, and the corresponding improved overlap fermion is currently under investigation in QCD by the SESAM collaboration in Jülich and Wuppertal. Acknowledgment Most of the results presented here are based on my collaboration with Ivan Hip, and I would like to thank him for his crucial contributions. Furthermore I am indebted to P. Damgaard, T. DeGrand, P. Hernández, A. Hoferichter, K. Jansen, J. Jersák, F. Klinkhamer, Th. Lippert, K.F. Liu, M. Lüscher, J. Nishimura, P. Rakow, K. Schilling and A. Thimm for useful comments. Finally I would like to thank the organizers of this workshop, in particular V. Mitrjushkin, for their kind hospitality in beautiful Dubna.
# Paramagnetic Meissner effect in mesoscopic samples. ## Abstract Using the non-linear Ginzburg–Landau (GL) theory, we study the magnetic response of different-shaped samples in the field–cooled (FC) regime. For high external magnetic fluxes, the conventional diamagnetic response under cooling down can be followed by the paramagnetic Meissner effect (PME). A second-order transition from a giant vortex state to a multi–vortex state, with the same vorticity, occurs at the second critical field, which leads to the suppression of PME. PACS number(s): 74.24.Ha, 74.60.Ec, 73.20.Dx The Meissner effect is considered the most important characteristic property of superconductivity. When a superconductor is cooled down in the presence of an external magnetic field, the field is expelled and it behaves as a diamagnet. However, some samples show a paramagnetic response under cooling. The finding of PME (or Wohlleben effect) in high-$`T_c`$ superconductors initiated the appearance of several models interpreting PME as evidence of non-conventional superconductivity in these materials (e.g. see Ref.). However, numerous observations of PME in conventional macroscopic and mesoscopic superconductors indicate the existence of another mechanism, which may be explained within GL theory. Based on results from axial-symmetric solutions of the GL equations by Fink and Presson , Cruz et al. proposed that PME in their experiments on $`Pb_{99}Tl_{01}`$ cylinders is caused by a temperature variation of the superconducting density in a giant vortex state with fixed angular momentum . In such a state, the superconducting current, which shields the magnetic field in the vicinity of the sample boundary (essentially the Meissner effect), changes its direction, pushing magnetic field into the sample, which can lead to the PME. Thereafter this idea, often called flux compression, was exploited in , but a quantitative analysis of PME within GL theory is still missing and several principal questions remain to be answered: 1) is the vorticity of the giant vortex state fixed during cooling down as was assumed in Refs. ?, 2) if yes, can this lead to the appearance of PME?, and, finally, 3) can the proposed mechanism explain the PME in recent experiments with conventional macroscopic and mesoscopic samples? In this Letter, we follow the GL approach and address these questions by studying the magnetic response of different–shaped samples in the FC regime. We consider a defect–free superconducting disk (and cylinder) immersed in an insulating medium with a perpendicular (along the cylinder axis) uniform magnetic field $`H_0`$. The behaviour of the superconductor is characterized by its radius $`R`$ (and thickness $`d`$ for the disk case), the magnetic field $`H_0`$, the coherence length $`\xi (T)=\xi (0)(1-T/T_c)^{-1/2}`$ and the penetration length $`\lambda (T)=\lambda (0)(1-T/T_c)^{-1/2}`$, where $`T_c`$ is the critical temperature. To reduce the number of independent variables we measure the distance in units of the sample radius, the vector potential $`\vec{A}`$ in $`c\mathrm{\hbar }/2eR`$, and the order parameter $`\mathrm{\Psi }`$ in $`\sqrt{-\alpha /\beta }`$ with $`\alpha `$, $`\beta `$ being the GL coefficients .
In these units the GL equations become $$\left(-i\vec{\nabla }_{2D}-\vec{A}\right)^2\mathrm{\Psi }=\frac{R^2}{\xi ^2}\mathrm{\Psi }(1-|\mathrm{\Psi }|^2),$$ (1) $$\mathrm{\Delta }_{3D}\vec{A}=-\frac{R^2}{\lambda ^2}f(z)\vec{j}_{2D}.$$ (2) Here, the indices $`2D`$, $`3D`$ refer to two-dimensional and three-dimensional operators; $$\vec{j}_{2D}=\frac{1}{2i}\left(\mathrm{\Psi }^{*}\vec{\nabla }_{2D}\mathrm{\Psi }-\mathrm{\Psi }\vec{\nabla }_{2D}\mathrm{\Psi }^{*}\right)-|\mathrm{\Psi }|^2\vec{A},$$ (3) is the density of superconducting current in the plane ($`x,y`$), and the external magnetic field is directed along the $`z`$-axis. The boundary conditions to Eqs. (1-2) correspond to zero superconducting current at the sample boundary and uniform magnetic field far from the sample. The order parameter and superconducting current are assumed to be independent of the $`z`$ coordinate, which is valid for cylinders as well as for thin ($`d\ll \xi ,\lambda `$) disks . Then $`f(z)=1`$ and $`f(z)=d\delta (z)`$ for the cylindrical and disk geometry, respectively. Superconducting disks and cylinders placed in a magnetic field and cooled down transit from a normal to a superconducting state at the critical temperature $`T^{*}`$, which depends both on $`H_0`$ and $`R`$. However, the unitless parameters $`H_0/H_{c2}(T^{*})`$, $`R/\xi (T^{*})`$, and the angular momentum of the giant vortex superconducting state in the nucleation point $`T^{*}`$ depend only on the magnetic flux $`\mathrm{\Phi }=\pi R^2H_0`$ piercing through the sample . Here, $`H_{c2}=\mathrm{\Phi }_0/2\pi \xi ^2`$ and $`\mathrm{\Phi }_0=hc/2e`$ are the second critical field and the flux quantum, respectively. With further cooling down, the magnetic response is characterized by only two independent variables $`H_0/H_{c2}`$ and $`\mathrm{\Phi }`$. Our numerical approach for solving Eqs. (1-2) is described in . It turns out that an accurate simulation of a multi-vortex state is a hard task in the case of large vorticity $`L`$. The latter corresponds to the total angular momentum and the number of vortices in the giant-vortex and the multi–vortex state, respectively. To improve the accuracy we apply a non-uniform rectangular space grid condensing in the vicinity of the sample boundary. However, due to the tremendous computational expenses (e.g. the total number of grid points in the plane ($`x,y`$) was about 160000 for $`L=30`$) we could only treat the vortex state with large $`L`$ in the cylindrical case, when the vector potential is uniform in the $`z`$-direction. In the disk case, we restrict our considerations to the axially symmetric solutions $`\mathrm{\Psi }=\psi (\rho )\mathrm{exp}(iL\varphi )`$ ($`\rho `$, $`\varphi `$ are the cylindrical coordinates), which are shown to be stable in the region $`H_0>H_{c2}`$. When the sample is cooled down below the critical temperature $`T^{*}`$, a giant vortex state appears with angular momentum $`L`$, which is determined by the magnetic flux $`\mathrm{\Phi }`$ . Starting from this state we mimic the FC regime by slowly decreasing (increasing) the value of $`H_0/H_{c2}\propto 1/(1-T/T_c)`$ ($`R^2/\xi ^2\propto (1-T/T_c)`$) such that the system evolves along a path with fixed external magnetic flux (e.g. see Fig. 1). Using the superconducting state found at the previous step as input, we find the next steady–state solution to Eqs. (1-2). Doing so we consider only stable solutions and neglect thermal fluctuations, which could lead to possible transitions between metastable states.
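For orientation, the statement that a field-cooling run is a path of constant flux with a decreasing ratio $`H_0/H_{c2}(T)`$ can be made explicit with a trivial helper (my own illustration; the numerical values are arbitrary and not taken from the calculations reported here):

```python
# FC path in the (H0/Hc2, Phi/Phi0) plane: Phi = pi R^2 H0 is T-independent,
# while Hc2(T) = Phi0 / (2 pi xi(T)^2) grows as the sample is cooled.
import numpy as np

def control_parameters(t, H0, R, xi0, Phi0=1.0):
    """t = T/T_c; returns (H0/Hc2(T), Phi/Phi0)."""
    Hc2 = Phi0 * (1.0 - t) / (2.0 * np.pi * xi0**2)      # xi(T) = xi0 / sqrt(1 - t)
    return H0 / Hc2, np.pi * R**2 * H0 / Phi0

for t in (0.5, 0.4, 0.3, 0.2):                           # cooling down at fixed H0
    h, phi = control_parameters(t, H0=0.0955, R=10.0, xi0=1.0)
    print("T/Tc = %.1f  ->  H0/Hc2 = %.2f,  Phi/Phi0 = %.1f" % (t, h, phi))
```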
The neglect of thermal fluctuations is valid for normal superconductors where the barriers separating the metastable states exceed the sample temperature by far, except near points in which a state becomes unstable , e.g. near the saddle points. When calculating the dipole magnetic moment, we can neglect non-linear effects in the vicinity of the nucleation point. The quantum angular momentum $`L`$ increases almost proportionally to the magnetic flux $`\mathrm{\Phi }`$ but always remains smaller than $`\mathrm{\Phi }/\mathrm{\Phi }_0`$. The supervelocity $`v_\varphi =\rho ^{-1}(L-\mathrm{\Phi }\rho ^2/\mathrm{\Phi }_0R^2)`$, which is oriented along the azimuthal direction, changes its sign at $`\rho ^{*}=R\sqrt{L\mathrm{\Phi }_0/\mathrm{\Phi }}`$. Therefore, both diamagnetic ($`\rho >\rho ^{*}`$) and paramagnetic ($`\rho <\rho ^{*}`$) currents exist in any giant vortex state. However, the magnetic moment, which can be estimated in the lowest Landau level (LLL) approximation as $`D\propto \int d\rho \rho ^2v_\varphi |\psi _L|^2`$ with $`\psi _L`$ being the lowest eigenfunction of the linearized first GL equation, turns out to be always diamagnetic both for disks and cylinders. As long as the superconducting density $`|\mathrm{\Psi }|^2`$ remains small, the magnetic moment almost linearly increases in absolute value with decreasing $`H_0/H_{c2}`$ (inset (a) in Fig. 1). With further cooling down, the LLL approximation breaks down. Due to non-linear effects (mainly from the second term in the RHS of Eq. (1)) the order parameter increases more rapidly in the inner region, which leads to an increase of the paramagnetic component (Figs. 1,2). The resulting magnetic moment crucially depends on the ratio between $`L`$ and $`\mathrm{\Phi }/\mathrm{\Phi }_0`$. With increasing magnetic field, the switching $`L\to L+1`$ of the angular momentum of the nucleated state occurs at a certain $`\mathrm{\Phi }_L`$ . For given $`L`$, the magnetic moment reaches its maximum and minimum value at $`\mathrm{\Phi }_{L-1}`$ and $`\mathrm{\Phi }_L`$, respectively. Due to angular momentum quantization the magnetic moment exhibits a strong oscillating behaviour as function of the magnetic field (see inset of Fig. 3) which agrees with Geim’s observations . With decreasing temperature, the dipole magnetic moment becomes zero at a certain ratio $`H_0/H_{c2}\propto 1/(1-T/T_c)`$ and the original diamagnetic response becomes paramagnetic. This magnetic field is shown in Fig. 3 for $`\mathrm{\Phi }=\mathrm{\Phi }_{L-1}`$ and $`\mathrm{\Phi }=\mathrm{\Phi }_L`$. Note that: 1) for the cylinder geometry a larger GL parameter $`\kappa =\lambda /\xi `$ favours PME, 2) while an increase of the effective penetration length $`\lambda ^2/d`$ suppresses PME in disks, and 3) an increase of $`\mathrm{\Phi }`$ favours PME both in cylinders and disks. The reason is that the point $`\rho ^{*}`$, where the supervelocity changes its direction, shifts towards the sample boundary with increasing angular momentum, which is related to the total magnetic flux by $`(L-1)\mathrm{\Phi }_0\lesssim \mathrm{\Phi }\lesssim \mathrm{\Phi }_0(L+L^{1/2})`$ . In cylinders, the magnetization is directly proportional to the magnetic moment. In disks: 1) the diamagnetic currents flowing near the sample boundary give larger contributions to the magnetization than inner paramagnetic currents and consequently, the paramagnetic contribution to the magnetization will be strongly suppressed (inset (b) in Fig. 1), 2) but the smaller trapped magnetic flux, in disks as compared to cylinders, decreases the diamagnetic response.
This is the reason why thicker disks show the onset of PME at larger $`H_0/H_{c2}`$ (Fig. 3, open symbols). When the second critical field $`H_{c2}(T)`$ becomes smaller than the applied magnetic field, the giant–vortex state transits to the multi–vortex state with the same vorticity (Figs. 4,5). This second-order transition is not followed by any jumps in the magnetization or the magnetic moment. Just after the transition, all vortices are arranged in a ring (0:L). Note that the magnetic moment of the state (0:L) continues to increase with decreasing $`H_0/H_{c2}(T)`$ but with a smaller slope (Fig. 5). With further decreasing $`H_0/H_{c2}(T)`$ a pair of vortices moves to the inner region and the state (2:L-2) appears for $`L=20`$ (see Fig. 5). This first-order transition is followed by a weak jump in the magnetic moment. The derivative $`dD/dT`$ changes sign and further cooling down results in the disappearance of PME. The magnetic moment of the diamagnetic state with smaller angular momentum $`L=19`$ and $`\mathrm{\Phi }=\mathrm{\Phi }_L`$ is also affected by the transition to the multi–vortex state, which increases the diamagnetic response (inset (b) in Fig. 4). As the temperature decreases, vortices continue to move from the outer to the inner shell. Note that the corresponding weak jumps in the magnetic moment are practically not visible on the scale used in Fig. 4. Although the state with $`L=19`$ is energetically more favourable than the one with $`L=20`$, no vortex exits the system, which is in agreement with observations . Since the vorticity remains unchanged under cooling down, the magnetic response will be almost reversible. The hysteresis caused by the first–order transitions between different multi–vortex states with the same vorticity is weak (see inset (a) of Fig. 4). Starting from the point $`H/H_{c2}=0.9`$ and warming up the system to $`H/H_{c2}=1.1`$ we find the magnetic moment, which coincides on the scale of Fig. 4 with that obtained by cooling down. Although the magnetization curves, shown in Fig. 4, agree qualitatively with those from experiments , a number of issues remain unclear: 1) in experimental observations of macroscopic samples a weak hysteresis is found in cooling down and subsequent warming up, and 2) the observed maxima in the magnetic moment are weaker than those from our calculations, which may be due to the presence of vortex pinning centres. Note that within GL theory we found a weak PME which is caused by the competition of large diamagnetic and paramagnetic responses of the outer and inner part of the sample, respectively. Any mechanism which slightly influences either of the two responses may strongly influence the total magnetic behavior. As an example, in macroscopic disks PME disappears after mechanically abrading the top and bottom surfaces . A number of samples made from the same material as those demonstrating PME exhibited only diamagnetic behaviour . This indicates the important role played by the sample structural inhomogeneity. For mesoscopic disks, experiments show PME for rather small angular momenta, which does not agree with our simulations for flat circular mesoscopic disks. To address some of these sample structural issues we consider the influence of the superconductor shape on the magnetic moment by varying radially the thickness of the disk. We limit ourselves to the case of a strong type-$`II`$ superconductor ($`\kappa \gg 1`$) and solve, therefore, only Eq. (1). The dipole magnetic moment of different-shaped samples is shown in Fig. 6
for $`H_0=H_{c2}(T)`$, where the scale of the thickness variation is apparent from the insets of Fig. 6. The magnetic moment becomes more negative (positive) in magnifying-glass (crown) like samples as compared to flat disks. An increase of the local thickness near the sample boundary (see Fig. 6(c)) increases the PME. This strongly suggests a possible non-flat geometry of the disks of Ref. . In summary, the giant–vortex state remains stable under cooling down and transits to the multi–vortex state with the same vorticity at $`H_0\approx H_{c2}`$. The paramagnetic response is caused by a more rapid growth of the superconducting electron density in the inner region of the sample, due to non-linear effects, where paramagnetic currents flow. The appearance of the multi–vortex state suppresses PME and the maximum of the paramagnetic response corresponds to $`H_0\approx H_{c2}`$. We showed that within the GL theory a paramagnetic response is possible for large magnetic fluxes, which is in agreement with experimental findings on disks with large radii, but does not agree with the experimental results on mesoscopic disks of Geim et al . After finishing this work, we became aware of a preprint by Palacios who used the LLL approximation to study the FC regime and failed to find any paramagnetic response. As shown above, one has to go beyond the LLL approximation in order to find PME, which is caused by non-linear effects. We thank A.K. Geim for useful discussions. This work is supported by the Flemish Science Foundation (FWO-Vl) and the “Interuniversity Poles of Attraction Program - Belgian State, Prime Minister’s Office - Federal Office for Scientific, Technical and Cultural Affairs”. One of us (VAS) was supported by a DWTC fellowship and (FMP) is a research director with the FWO-Vl.
# 1 Introduction ## 1 Introduction Although $`Q^2`$ dependence of parton distributions is calculated by perturbative QCD and it has been confirmed by experiments, the distributions themselves cannot be calculated without relying on nonperturbative methods. Therefore, determination of the parton distributions is important for testing nucleon structure models. However, it is also important for finding any exotic physics signature in hadron reactions possibly beyond the current theoretical framework. It is a good idea to study such distributions at the proposed 50-GeV facility in Japan. At this stage, secondary-beam experiments have been mainly focused in the proposal, but there are not extensive studies on primary-beam projects. However, it could be a valuable facility in investigating large-$`x`$ parton distributions by using the primary beam. In particular, if the proton polarization is attained, the facility is a unique one for investigating polarized parton distributions in the medium- and large-$`x`$ regions. Although the RHIC-Spin project, for example, investigates the spin structure of the proton, it measures mainly on the smaller-$`x`$ region. In this sense, the 50-GeV facility is compatible with the RHIC-Spin and other high-energy projects, and it is important for understanding the nucleon structure in the whole $`x`$ range. Because the unpolarized antiquark flavor asymmetry, nuclear structure functions, and the $`g_1`$ structure function are discussed by other speakers , the author would like to address himself to different topics. In Sec. 2, comments are given on interesting large-$`x`$ physics. Then, polarized proton-deuteron (pd) Drell-Yan process is discussed for studying unmeasured structure functions for a spin-1 hadron in Sec. 3. Using a polarized pd Drell-Yan formalism, we discuss the possibility of extracting flavor asymmetry in polarized light-antiquark distributions in Sec. 4. The summary is given in Sec. 5. ## 2 Comments on large-$`x`$ physics There are three interesting topics on the large-$`x`$ physics as far as the author is aware. First, the counting rule is usually used for predicting parton distributions at large $`x`$, so that the large-$`x`$ measurements are valuable for testing the idea. Second, because nuclear corrections are generally large in such a $`x`$ region, the experiments provide important information on nuclear models at high energies. This topic is related to the first one, e.g. in the studies of $`F_2^n/F_2^p`$ , because the deuteron and <sup>3</sup>He targets are used for measuring the neutron structure function. Third, it is crucial to know the details of the parton distributions for finding new exotic signatures. The first two topics are discussed in other publications, so that the interested reader may read for example Ref. . Because the third topic would be much important in relation to other fields of particle physics, we discuss more details. In the recent years, anomalous events were reported at Fermilab and DESY in the very large $`Q^2`$ region. We cannot judge precisely whether or not these are really “anomalous” in the sense that the parton distributions, especially the gluon distribution, are not well known in the large-$`x`$ region. For example. the CDF anomalous jet data originally indicated that the perturbative QCD could not explain the data. However, noting the gluon subprocesses play an important role in the large-$`E_T`$ region, we could explain the data by adjusting the gluon distribution at large $`x`$ . 
However, nobody knows whether this is the right treatment because there is no independent experiment for probing the gluon distribution at such large $`x`$. This topic was partly discussed in connection with a possible low-energy facility . This example suggests the importance of the 50-GeV facility for the following reason. In order to find any new physics possibly beyond QCD, we need to increase the “resolution” $`Q^2`$ significantly. Because the momentum fraction $`x`$ is given by $`x=Q^2/(2p\cdot q)`$, for example in lepton scattering, large $`Q^2`$ roughly corresponds to large $`x`$. However, the large-$`x`$ parton distributions are not necessarily well known, as is obvious from the interpretation of the above CDF events. What we have been doing first is to determine the parton distributions at fixed $`Q^2`$ ($`Q_0^2\approx 1`$ GeV<sup>2</sup>) from various high-energy-reaction data with typical $`Q^2=1`$ to a few hundred GeV<sup>2</sup>. Then, DGLAP evolution equations are used for calculating the variation of the distributions from $`Q_0^2`$ to the large-$`Q^2`$ points, where the anomalous data are taken. Therefore, it is crucial to determine accurate distributions at $`Q_0^2`$ and especially at large $`x`$. Because present high-energy accelerators focus inevitably on the small-$`x`$ region, they are not well suited to such physics. The 50-GeV facility should be a unique one in studying the larger-$`x`$ region. We believe that the primary-beam experiments could have impact on other fields of particle physics if they are properly used. ## 3 Polarized proton-deuteron Drell-Yan process Spin structure of the proton has been investigated mainly by polarized lepton-nucleon scattering and will also be studied by polarized proton-proton scattering at RHIC. There are already many data on the structure function $`g_1`$ and we have a rough idea of the polarized parton distributions . It is desirable to use different observables in order to test our understanding of hadron spin structure. Additional spin structure functions for the deuteron could be suitable quantities. Theoretically, this topic has been investigated in the last ten years, and it is known that there exists a new leading-twist structure function $`b_1`$ . Because it has not been measured at all, it should be a good idea to test theoretical predictions in comparison with future lepton scattering data. In addition, a theoretical formalism had been completed recently for the polarized pd Drell-Yan process . The results suggested that there exist many new structure functions which are associated with the deuteron tensor structure. There are two major reasons for studying the polarized pd Drell-Yan process. The first purpose is, as mentioned above, to investigate new structure functions which do not exist in the spin-1/2 proton. The second one is to investigate antiquark flavor asymmetry as discussed in Sec. 4. In this section, we explain the major consequences of the polarized pd formalism without discussing the details. The polarized proton-proton (pp) Drell-Yan process has been investigated theoretically for a long time and the studies are the basis of the RHIC-Spin project. Reference extended these studies to the polarized pd Drell-Yan by taking into account the tensor structure of the deuteron. A general formalism of the pd Drell-Yan was first studied in Ref. by using spin-density matrices and the Ralston-Soper type analysis. Then, it was found that many new structure functions exist due to the spin-1 nature of the deuteron.
The process was also analyzed in a quark model. The hadron tensor is first written for the annihilation process $`q+\overline{q}\to \mathrm{}^++\mathrm{}^{-}`$ in terms of correlation functions. They are expanded in terms of the sixteen $`4\times 4`$ matrices: $`\mathrm{𝟏},\gamma _5,\gamma ^\mu ,\gamma ^\mu \gamma _5,\sigma ^{\mu \nu }\gamma _5`$ together with kinematically possible vectors under the conditions of Hermiticity, parity conservation, and time-reversal invariance. We found in the analysis that there exists only one spin asymmetry in addition to those of the pp Drell-Yan case, and it was called the unpolarized-quadrupole $`Q_0`$ asymmetry: $$A_{UQ_0}=\frac{\sum _ae_a^2\left[f_1(x_1)\overline{b}_1(x_2)+\overline{f}_1(x_1)b_1(x_2)\right]}{\sum _ae_a^2\left[f_1(x_1)\overline{f}_1(x_2)+\overline{f}_1(x_1)f_1(x_2)\right]}.$$ (1) Here, $`f_1(x)`$ and $`\overline{f}_1(x)`$ are unpolarized quark and antiquark distributions, and $`b_1(x)`$ and $`\overline{b}_1(x)`$ are tensor-polarized distributions. The momentum fractions are denoted as $`x_1`$ and $`x_2`$ for partons in the hadron 1 (proton) and 2 (deuteron), respectively. This asymmetry is measured by using an unpolarized proton and a tensor-polarized deuteron. It should provide us with new information on the tensor-polarized distributions because the unpolarized distributions are well known in the proton and deuteron. If the large-$`x_F`$ region is considered, Eq. (1) becomes $$A_{UQ_0}\approx \frac{\sum _ae_a^2f_1(x_1)\overline{b}_1(x_2)}{\sum _ae_a^2f_1(x_1)\overline{f}_1(x_2)}\qquad \text{at large }x_F.$$ (2) This equation suggests that antiquark tensor distributions should be obtained rather easily from the quadrupole spin asymmetry in the polarized pd Drell-Yan. The $`x`$ dependence of $`b_1`$ has been investigated in quark models. The $`b_1`$ vanishes in any model with only the S wave; therefore, it should probe orbital-motion effects which are related to the tensor structure. Because the tensor spin structure is completely different from the present longitudinal spin physics, experimental data should provide challenging information for theorists. Furthermore, it has not been measured at all, so that the proposed 50-GeV facility has an opportunity of significant contributions to a new area of high-energy spin physics. ## 4 Polarized light-antiquark flavor asymmetry It became clear in the last ten years, through the Gottfried-sum-rule violation and Drell-Yan experiments, that the light-antiquark distributions are not flavor symmetric. In particular, the Fermilab Drell-Yan experiments clarified the $`x`$ dependence of $`\overline{u}/\overline{d}`$ by using the difference between the pp and pd cross sections. In the same way, the difference between the polarized pp and pd cross sections should be useful for determining the flavor asymmetry in polarized light-antiquark distributions. There could be two issues in extracting the longitudinal one $`\mathrm{\Delta }\overline{u}/\mathrm{\Delta }\overline{d}`$ and the transversity one $`\mathrm{\Delta }_T\overline{u}/\mathrm{\Delta }_T\overline{d}`$. First, there was no theoretical formalism of the polarized pd Drell-Yan before Ref. , so that we were not sure how to formulate the spin asymmetry in terms of polarized distributions. Now, this issue has been clarified and we can discuss the cross section ratio pd/pp by using the results. Second, nuclear corrections should exist in the deuteron. However, they are not the essential part, so that they are neglected in the present studies.
If experimental data are obtained in the future, such corrections should be taken into account carefully. If higher-twist effects are neglected in the longitudinal and transverse spin asymmetries, the expressions for the cross-section ratios are given by $$R_{pd}\equiv \frac{\mathrm{\Delta }_{(T)}\sigma _{pd}}{2\mathrm{\Delta }_{(T)}\sigma _{pp}}=\frac{\sum _ae_a^2\left[\mathrm{\Delta }_{(T)}q_a(x_1)\mathrm{\Delta }_{(T)}\overline{q}_a^d(x_2)+\mathrm{\Delta }_{(T)}\overline{q}_a(x_1)\mathrm{\Delta }_{(T)}q_a^d(x_2)\right]}{2\sum _ae_a^2\left[\mathrm{\Delta }_{(T)}q_a(x_1)\mathrm{\Delta }_{(T)}\overline{q}_a(x_2)+\mathrm{\Delta }_{(T)}\overline{q}_a(x_1)\mathrm{\Delta }_{(T)}q_a(x_2)\right]},$$ (3) where $`\mathrm{\Delta }_{(T)}=\mathrm{\Delta }`$ or $`\mathrm{\Delta }_T`$ depending on the longitudinal or transverse case. If the valence-quark distributions satisfy $`\mathrm{\Delta }_{(T)}u_v(x_1)\gg \mathrm{\Delta }_{(T)}d_v(x_1)`$ at large $`x_F=x_1-x_2`$, Eq. (3) becomes $$R_{pd}(x_F\to 1)=1-\left[\frac{\mathrm{\Delta }_{(T)}\overline{u}(x_2)-\mathrm{\Delta }_{(T)}\overline{d}(x_2)}{2\mathrm{\Delta }_{(T)}\overline{u}(x_2)}\right]_{x_2\to 0}=\frac{1}{2}\left[\mathrm{\hspace{0.17em}1}+\frac{\mathrm{\Delta }_{(T)}\overline{d}(x_2)}{\mathrm{\Delta }_{(T)}\overline{u}(x_2)}\right]_{x_2\to 0}.$$ (4) Namely, the deviation from one indicates the difference between $`\mathrm{\Delta }_{(T)}\overline{u}`$ and $`\mathrm{\Delta }_{(T)}\overline{d}`$ directly. On the other hand, if the other limit $`x_F\to -1`$ is taken, the ratio becomes $$R_{pd}(x_F\to -1)=\frac{1}{2}\left[\mathrm{\hspace{0.17em}1}+\frac{\mathrm{\Delta }_{(T)}\overline{d}(x_1)}{4\mathrm{\Delta }_{(T)}\overline{u}(x_1)}\right]_{x_1\to 0}.$$ (5) The factor of 1/4 suggests that the ratio in this region is not as sensitive as the one in the large-$`x_F`$ region. We discuss numerical results for the ratio $`R_{pd}`$ in Fig. 1 . For the parton distributions, we use a recent parametrization in Ref. at $`Q^2`$=1 GeV<sup>2</sup>. The transversity distributions are assumed to be the same as the longitudinally-polarized ones. Then, three ratios are assumed for the antiquark distributions: $`r_{\overline{q}}\equiv \mathrm{\Delta }_{(T)}\overline{u}/\mathrm{\Delta }_{(T)}\overline{d}`$ = 0.7, 1.0, or 1.3. Then, they are evolved to $`Q^2=M_{\mu \mu }^2`$=25 GeV<sup>2</sup> by the LO-DGLAP evolution equations. The numerical results indicate that the obtained ratios differ considerably depending on the flavor-asymmetry ratio $`r_{\overline{q}}`$, particularly in the large-$`x_F`$ region. Therefore, the measurement of this ratio could determine $`r_{\overline{q}}`$ and it should be an important test of theoretical models for explaining the unpolarized flavor asymmetry. ## 5 Summary We first discussed why the 50-GeV PS facility is important for structure-function studies in the large-$`x`$ region. Then, specific topics were discussed using a polarized deuteron. We explained that the polarized proton-deuteron Drell-Yan process is interesting in two respects. First, it should be valuable for finding new tensor structure functions in the deuteron. Second, it could be used for studying flavor asymmetry in the polarized light-antiquark distributions.
Because these topics are not investigated by other facilities, possible 50-GeV data should provide important information on hadron spin structure. ## Acknowledgments S.K. was partly supported by the Grant-in-Aid for Scientific Research from the Japanese Ministry of Education, Science, and Culture under the contract number 10640277. * Email: kumanos@cc.saga-u.ac.jp. Information on his research is available at http://www-hs.phys.saga-u.ac.jp.
# REFERENCES We can know more in double-slit experiment Gao Shan Institute of Quantum Mechanics 11-10, NO.10 Building, YueTan XiJie DongLi, XiCheng District Beijing 100045, P.R.China E-mail: gaoshan.iqm@263.net ## Abstract We show that we can know more than the orthodox view does, as one example, we make a new analysis about double-slit experiment, and demonstrate that we can measure the objective state of the particles passing through the two slits while not destroying the interference pattern, the measurement method is to use protective measurement. Double-slit experiment has been widely discussed, and nearly all textbooks about quantum mechanics demonstrated the weirdness of quantum world using it as one example, as Feynman said, it contains all mysteries of quantum mechanics, but have we disclosed these mysteries and understood the weirdness in double-slit experiment? as we think, the answer is definitely No. When discussing double-slit experiment, the most notorious question is which slit the particle passes through in each experiment, it is just this problem that touches our sore spots in understanding quantum mechanics, according to the widely-accepted orthodox view, this question is actually meaningless, let’s see how it gets this bizarre answer, it assumes that only an measurement can give an answer to the above question, then detectors need to be put near both slits to measure which slit the particle passes through, but when this is done the interference pattern will disappear, thus the orthodox view asserts that the above question is meaningless since we can not measure which slit the particle passes through while not destroying the interference pattern. In fact, the above question is indeed meaningless, and at it happens the orthodox answer is right, but its reason is by no means right, the genuine reason is that if the particle passes through only one slit in each experiment, the interference pattern will not be formed at all<sup>*</sup><sup>*</sup>*Here we assume the only existence of particle, thus Bohm’s hidden-variable theory is not considered., thus it is obviously wrong to ask which slit the particle passes through in each experiment, it does not pass through a single slit at all! On the other hand, we can still ask the following meaningful question, namely how the particle passes through the two slits to form the interference pattern? now as to this question, the deadly flaw of the orthodox view is clearly unveiled, what is its answer? as we know, its answer will be there does not exist any objective motion picture of the particle, the question is still meaningless, but how can it get this conclusion? it can’t! and no one can. 
Since we have known that the particle does not pass through a single slit in each experiment, the direct position measurement near both slits is obviously useless for finding the objective motion state of the particle passing through the two slits, and it will also destroy the objective motion state of the particle, then the operational basis of the orthodox view disappears, it also ruins, thus the orthodox demonstrations can’t compel us to reject the objective motion picture of the particleWhy we can’t detect which slit the particle passes through when not destroying the interference pattern is not because there does not exist any objective motion picture of the particle, but because the particle does not pass through a single slit at all., it only requires that the motion picture of classical continuous motion should be rejected, this is undoubtedly right, since the motion of microscopic particle will be not classical continuous motion at all, it will be one kind of completely different motion. Once the objective motion picture of the particle can’t be essentially rejected, we can first have a look at it using the logical microscope, since the particle does not pass through a single slit in each experiment, it must pass through both slits during passing through the two slits, it has no other choices! this kind of bizarre motion is not impossible since it will take a period of time for the particle to pass through the slits, no matter how short this time interval is, so far as it is not zero, the particle can pass through both slits during this finite time interval, what it must do is just discontinuously move, nobody can prevent it from moving in such a way! in fact, as we have demonstrated, this is just the natural motion of particle. On the other hand, in order to find and confirm the objective motion picture of the particle passing through the two slits, which will be very different from classical continuous motion, we still need a new kind of measurement, which will be very different from the position measurement, fortunately it has been found several year ago, its name is protective measurement, since we know the state of the particle beforehand in double-slit experiment, we can protectively measure the objective motion state of the particle when it passes through the two slits, while the state of the particle will not be destroyed after such protective measurement, and the interference pattern will not be destroyed either, thus by use of this kind of measurement we can find the objective motion picture of the particle passing through the two slits while not destroying the interference pattern, and the measurement results will reveal that the particle indeed passes through both slits as we see using the logical microscope. Now, the above analysis has strictly demonstrated that we can know more than the orthodox view does in double-slit experiment, namely we know that the particle passes through both slits to form the interference pattern, while the orthodox view never knows this.
# Double exchange-driven spin pairing at the (001) surface of manganites ## Abstract The (001) surface of La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> system in various magnetic orderings is studied by first principle calculations. A general occurrence is that $`z^2`$ dangling bond charge – which is “invisible” in the formal valence picture – is promoted to the bulk gap/Fermi level region. This drives a double-exchange-like process that serves to align the surface Mn spin with its subsurface neighbor, regardless of the bulk magnetic order. For heavy doping, the locally “ferromagnetic” coupling is very strong and the moment enhanced by as much as 30% over the bulk value. Although most efforts on the colossal magnetoresistance (CMR) materials typified by the La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> (LCMO) system are still concentrated on bulk properties, growing interest is being shown in the surface behavior. Knowledge of surface properties is essential not only to develop a perovskite manganite-based technology but also to determine fundamental phenomena and mechanisms of magnetoelectronic behavior. Indeed, the CMR effect occurs at high temperature, around the magnetic ordering temperature, and a magnetic field of several Tesla is required to suppress the thermal magnetic disorder and produce the change in resistivity. Since high magnetic fields are generally unavailable in applications, alternative ways to trigger large low-field MR were considered, such as with trilayer junctions and polycrystalline samples. The junctions are epitaxially grown along the direction, and are made of a central insulating thin film of SrTiO<sub>3</sub> (the barrier), sandwiched by two metallic layers of La<sub>0.67</sub>Sr<sub>0.33</sub>MnO<sub>3</sub> (LSMO). Applying a low magnetic field, the tunneling conductivity can be switched by inducing a parallel (switch on) or anti-parallel (switch off) spin-orientation in the two electrodes. Taking advantage of their half metallicity gives a very large tunneling MR (TMR). Large low-field intergrain MR (IMR) over a large temperature range has been observed in polycrystalline samples of LSMO, CrO<sub>2</sub>, and the double perovskite systems Sr<sub>2</sub>Fe(Mo,Re)O<sub>6</sub>, all of which are expected to be half metallic magnets. Magnetotunneling across grain boundaries, in which the relative orientation of the magnetization of neighboring grains is manuipulated by an applied field, is believed to be the mechanism. In the IMR process, which may be the most promising for MR applications, there is mounting evidence that the state of the surfaces of the grains is important in the intergrain tunneling process. For TMR it has long been clear that tunneling characteristics are strongly influenced, perhaps even dominated, by the electronic and magnetic structure at the interface, and for IMR surface states have been suggested to play the central role. In the few experimental works present in the literature intrinsic difficulties have been reported in the process of obtaining clean, bulk-truncated surfaces, due to surface segregation that occurs during growth at high temperature, and strain effects induced by film-substrate mismatch. Structural and electronic properties of the low-index surfaces (including the possibility of reconstructions) are still unknown, in spite of their importance in establishing the half metallic nature of the CMR materials using photoelectron emission. 
However, advancements in epitaxial growth and surface uniformity are being reported, so a first fundamental step towards describing real surfaces consists in understanding how the intrinsic properties of the ideal unreconstructed surfaces differ from the respective bulk properties, i.e. how the bulk truncation in itself modifies the physics of this compounds. First-principle calculations are ideally suited for this aim, and in this paper we focus on surface spin ordering of the Mn-terminated (001) surface of La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub>. Although this system shows an extremely rich variety of magnetic phases for different level of doping, we identify a general, robust mechanism that should dominate the surface spin order for any doping level $`x`$. In the regime of heavy doping $`x1/2`$, the surface-to-subsurface magnetic coupling is much stronger than in bulk, and the surface moment is enhanced by as much as 30%. Based on the growing understanding of the double exchange (DEX) process in bulk manganites, it can be expected that the surface spin alignment will be strongly dependent on the Mn $`e_g`$ occupation. At the (001) surface, however, the $`e_g`$ degeneracy is broken: the $`x^2y^2`$ orbital remains very strongly $`dp\sigma `$ hybridized with neighboring (in surface layer) O ions, but the $`z^2`$ orbital is left “dangling.” The implications of this symmetry breaking were first glimpsed in the simple, undoped $`x`$=1 member CaMnO<sub>3</sub>, which has G-type AFM bulk ordering due to standard AFM superexchange between filled t<sub>2g</sub> shells. These $`t_{2g}`$ shells contain the nominal three electrons assigned to Mn<sup>4+</sup> in the formal valence picture, with the $`e_g`$ formally empty. As it has been pointed out in other contexts,, the amount of actual $`d`$ charge is not at all identical to the formal $`d^n`$ charge. For bulk phenomena however, this idealization usually gives a reliable broad picture of general behavior, including spin, charge, and orbital order. For the CaMnO<sub>3</sub> (001) surface, however, G-type spin order does not survive at the surface. Instead, a flip of all spins in the surface layer occurs, driven by the appearance of Mn $`z^2`$ charge that drives a double exchange (DEX) process that strives to align spins. This $`z^2`$ charge is present in the bulk, resulting from $`dp\sigma `$ mixing that draws a truly significant amount of $`e_g`$ charge into the “O 2$`p`$” bands: the 18 O $`p`$ bands actually contain on the order of 1.5-2 electrons of Mn $`e_g`$ character. Some of this becomes a dangling bond band at the surface, lying in the bulk gap and driving the surface spin to flip. What we show in this paper is that this mechanism survives, and in fact is enhanced, as doping occurs. First-principles calculations have been performed within local-density approximation (LDA), employing a plane wave basis and Vanderbilt pseudopotentials. A 30 Ryd cut-off energy and the exchange-correlation potential of Perdew and Zunger was used. For the La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> (001) surfaces we have studied, we used slab of nine atomic layers. For $`x`$=1/2, the stacking along $`\widehat{z}`$ is made by alternating layers of La and Ca (see Fig. 1), retaining a mirror symmetry with respect to the central Mn layer. 
The artificial ordering of La and Ca layers (which must be done somehow in a finite supercell) should not affect our conclusions, since they simply become ionized by contributing their valence electrons to the O and Mn bands. For the planar lattice constant of bulk La<sub>1/2</sub>Ca<sub>1/2</sub>MnO<sub>3</sub> we obtain by energy minimization a<sub>0</sub> = 7.21 a.u., which is a reasonable value between the experimental 7.35 a.u. for La<sub>2/3</sub>Ca<sub>1/3</sub>MnO<sub>3</sub>, and 7.05 a.u. for CaMnO<sub>3</sub>. We find the AFM phase favored by 15 meV/Mn over the FM, which has a nearly half-metallic density of states. In Table I the calculated relative surface energies for $`x`$=1/2 are reported. As can be seen from Fig. 1, there are two kinds of Mn-terminated surfaces, i.e. one with La in the subsurface layer (indicated as Mn-La), and one with Ca instead (Mn-Ca). Since (Table I) they give equivalent results, we will quote specific results for just one of them (La-Mn). Treating an inner region with the true spin, charge, and orbital order for $`x`$=1/2 is well beyond computational capabilities, but the behavior we identify is so robust that we expect it to be independent of bulk order. For each of the geometries of our nine-layer slab, four spin arrangements on Mn are possible, labeled in Table I by arrows on central (C), subsurface (SS) and surface (S) Mn, in this order. These are: surface and subsurface parallel, aligned or antialigned to the central layer, and surface and subsurface antiparallel, with the subsurface aligned or antialigned with the central layer. We find that S-SS spin alignment is strongly favored, with the most stable configuration having both the surface and subsurface layer spins antiparallel to the central Mn spin: (from third to top layer) $`\uparrow \downarrow \downarrow `$. The energies can be mapped onto an interlayer Ising model with three independent effective exchange constants: J<sub>S-SS</sub>, J<sub>SS-C</sub> and J<sub>S-C</sub>, the latter being a second-neighbor coupling. J<sub>SS-C</sub> = -18 meV (AFM) is close to the exchange parameter obtained directly from the bulk calculation (J<sub>bulk</sub> = -15 meV). The interaction between Mn on first and third layers, J<sub>C-S</sub> = 8 meV, is FM in sign and is related to the d$`_{z^2}`$ surface state discussed below. The most striking result for $`x`$=1/2 (Table I) is the positive, unusually large value of J<sub>S-SS</sub> = 53 meV, more than three times larger than, and opposite in sign to, the bulk AFM coupling. For comparison, for CaMnO<sub>3</sub> ($`x`$ = 1) the interlayer exchange constant at the surface was 29 meV. (The bulk coupling for $`x`$=1 is also different from $`x`$=1/2, with J<sub>bulk</sub> = -26 meV.) This large FM coupling, for both $`x`$=1 and $`x`$=1/2, is the consequence of a very general characteristic of (001) surface formation. In Fig. 2 the orbital-resolved density of states (DOS) of the Mn ions for the (001) surface in the most stable spin configuration (i.e. $`\uparrow \downarrow \downarrow `$) is shown. Two surface Mn d$`_{z^2}`$ DOS peaks straddle the Fermi energy (E<sub>F</sub> = 0), with a tail of occupied states that extends down to $`\sim `$ -1.5 eV. These states are also visible on subsurface Mn and, for the occupied peak, on central Mn as well. Thus the surface has a deep surface d$`_{z^2}`$ resonance extending to the fifth layer below the surface.
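The mapping onto the interlayer Ising model mentioned above amounts to solving a small linear system. The sketch below (my own; the "energies" are generated from the couplings quoted in the text rather than taken from Table I) makes the bookkeeping explicit and also recovers the most stable ↑↓↓ arrangement.

```python
# Map four slab spin configurations onto E = E0 - J_SSS sS sSS - J_SSC sSS sC - J_SC sS sC.
import numpy as np

configs = [(1, 1, 1), (1, 1, -1), (1, -1, 1), (1, -1, -1)]     # (s_C, s_SS, s_S)

def row(c):
    sC, sSS, sS = c
    return [1.0, -sS * sSS, -sSS * sC, -sS * sC]               # coefficients of (E0, J_S-SS, J_SS-C, J_S-C)

M = np.array([row(c) for c in configs])
J_in = np.array([0.0, 53.0, -18.0, 8.0])                       # meV, couplings quoted in the text
E = M @ J_in                                                   # synthetic configuration energies

J_out = np.linalg.solve(M, E)                                  # invert: energies -> couplings
for name, val in zip(("E0", "J_S-SS", "J_SS-C", "J_S-C"), J_out):
    print("%7s = %6.1f meV" % (name, val))
print("most stable (s_C, s_SS, s_S):", configs[int(np.argmin(E))])
```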
In the majority channel of the central (‘bulk’) Mn ion, d$`_{z^2}`$ and d$`_{x^2-y^2}`$ orbitals contribute to the DOS at E<sub>F</sub>, whereas in the minority channel the only contribution comes from $`t_{2g}`$ states. The d$`_{z^2}`$ dangling bond discussed above leads to the formation of the surface resonance as it does in CaMnO<sub>3</sub>. It is also apparent that the d<sub>xy</sub> bands are shifted upward in energy, so that the minority channel is depleted (i.e. the Mn at the surface is fully polarized) and the d$`_{xy}`$ surface bands contribute to the DOS at E<sub>F</sub>. The magnetic moment on the surface Mn (3.23 $`\mu _B`$) is 10% larger than on subsurface Mn (2.97 $`\mu _B`$) and 30% larger than in ‘bulk’ central Mn (2.50 $`\mu _B`$), but the total charge on Mn ($`\sim `$ 5.3 electrons using our definition) is nearly the same at the surface and in the bulk. The increase of magnetization is due mostly to the d$`_{z^2}`$ polarization, with some contribution from the depletion of d$`_{xy}`$ states around E<sub>F</sub>. Also, a small intra-atomic charge readjustment occurs from d$`_{x^2-y^2}`$ and d<sub>xy</sub> to the polarized d$`_{z^2}`$ orbital on the surface Mn ion. The resulting surface polarization can be visualized from the isosurfaces of the magnetization displayed in Fig.3. Contributions coming from states that lie in the region within 0.3 eV below E<sub>F</sub> are shown, i.e. the “core” $`t_{2g}`$ moments are not included in the subsurface and central Mn, whereas the surface Mn shows a combination of d$`_{z^2}`$ and d<sub>xy</sub> spins; on subsurface Mn the d$`_{z^2}`$ magnetization is mixed with some d$`_{x^2-y^2}`$. The double exchange effect between d$`_{z^2}`$ orbitals on surface and subsurface Mn comes into play and leads to the strong FM coupling J<sub>S-SS</sub>=53 meV responsible for the spin alignment. On the central Mn with its antialigned spin, the magnetization is d$`_{x^2-y^2}`$-like. (Unfortunately, present computational limitations do not allow us to study an eleven-layer slab, for which the central layer should be more bulklike.) Also evident in Fig. 3 is that a remarkably large fraction of this surface-induced magnetization lies in the O $`p_\pi `$ orbitals of the surface layer. Polarization of the O ion in FM bulk environments in manganites has been emphasized elsewhere. The change of the Mn d$`_{z^2}`$ orbital from broad, strongly $`dp\sigma `$ hybridized in the bulk to an atomic-like, narrow in energy, surface state is a very specific feature of this (001) surface formation, and this surface dehybridization generally should be described well by LDA. We suggest that this effect is strong enough to turn the AFM spin coupling into FM for any doping level. At least two arguments support this hypothesis. First, the spin-pairing occurs for the (001) surface of CaMnO<sub>3</sub>, which should be the most unfavorable case, since in the bulk (nominally) only the majority t<sub>2g</sub> orbitals are occupied and thus their AFM character is dominant. Nevertheless, the partially occupied d$`_{z^2}`$ surface state reverses the magnetic coupling. Second, the very large change of the exchange interaction parameter (from -15 meV in bulk to +53 meV at the surface) would overcome an AFM bulk coupling even stronger than the one considered here. A crucial case is the $`x`$=0 member LaMnO<sub>3</sub>, which is A-type AFM in the bulk. The spin-pairing argument applied to the surface parallel to the FM (001) layers predicts a spin-flip of the surface Mn layer. 
The AFM spin coupling along the $`\widehat{z}`$ axis is robust and explained by a well established picture: the in-plane FM coupling is stabilized by the ordering of Mn $`e_g`$ orbitals, so that occupied d$`_{x^2}`$ (d$`_{y^2}`$) orbitals alternate with empty d$`_{x^2}`$ (d$`_{y^2}`$) orbitals on neighboring Mn. Thus, all the $`e_g`$-type charge fills in-plane orbitals, and the d$`_{z^2}`$ orbitals are empty and higher in energy. As a consequence, the AFM interactions between neighboring t<sub>2g</sub>’s dominate in the orthogonal direction. A realistic first-principles calculation of the LaMnO<sub>3</sub> surface is at present beyond the reach of detailed calculations, since it would require a $`\sqrt{2}\times \sqrt{2}`$ lateral enlargement of the cell as well as additional thickness to treat the tilting of the MnO<sub>6</sub> octahedra, and the Jahn-Teller distortion at the surface would have to be determined. However, the formation of the d$`_{z^2}`$ surface state within the bulk LaMnO<sub>3</sub> gap seems to be beyond doubt, based on the behavior of the d$`_{z^2}`$ dangling bond for $`x`$=1 and $`x`$=1/2. The question is whether this would be able to overcome the t<sub>2g</sub> AFM contribution. In Ref. it is shown that the t<sub>2g</sub> contribution for bulk LaMnO<sub>3</sub> increases linearly in magnitude with the distortion between in-plane and inter-planar lattice constants at fixed volume, i.e. the AFM coupling between (001) planes increases linearly by shortening the interplanar distance, likely due to the electrostatic repulsion that further depletes the d$`_{z^2}`$ orbitals. The t<sub>2g</sub> contribution to J<sub>bulk</sub> has been calculated in Ref. as a function of the lattice distortion. It is in the range of $`\sim `$ 20-30 meV, i.e. not large enough to overcome the value of J<sub>S-SS</sub>, thus we definitely expect the occurrence of a spin-flip process at the (001) LaMnO<sub>3</sub> surface. In Fig.4 we show the $`e_g`$ orbitals on the surface and subsurface layers, and indicate the expected filling and orbital ordering after the formation of the surface state. The orbitals are ordered in “FM” fashion both in-plane and orthogonally to the surface, as a result of the surface formation, which fills the Mn surface d$`_{z^2}`$ orbital, no longer degenerate with the d$`_{z^2}`$ orbital of the underlying subsurface Mn. To summarize, we have found that terminating (001) surfaces of La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> with the Mn ion exposed results in partial filling of the d$`_{z^2}`$ orbital that drives a double-exchange-like ordering of the surface and subsurface layers of Mn ions. We have shown this effect explicitly for $`x`$=1/2 and (previously) for undoped CaMnO<sub>3</sub>. A comparison between these two cases indicates that it is stronger in doped systems. This result has important implications (1) for surface studies, where this effect tends to ensure that surfaces of the CMR materials ($`x\sim 1/3`$) will remain ferromagnetically aligned and half metallic as well, as supported by photoemission studies, and (2) for the intergrain magnetoresistance effect, where the magnetic structure of the grain surfaces can strongly affect the device characteristics. This behavior, which is strongly related to band filling but much less dependent on ion size effects, should also hold for the La<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> and La<sub>1-x</sub>Ba<sub>x</sub>MnO<sub>3</sub> systems. This research was supported by National Science Foundation grant DMR-9802076. 
Calculations were done at the Maui High Performance Computing Center.
no-problem/0001/math-ph0001020.html
ar5iv
text
# On representation of the P–Q pair solution at the singular point neighborhood ## 1. Introduction The inverse monodromic (or isomonodromic) transformation (IMT) method is a powerful tool for studying the class of nonlinear ordinary differential equations (ODE’s) representable as the compatibility condition of an overdetermined linear system (P–Q pair). The IMT method reduces the initial value problem for the system of nonlinear ODE’s to solving the inverse problem for the associated isomonodromic linear equation. This inverse problem is formulated in terms of the monodromy data, which are constructed using the asymptotic expansions of the solution of the P–Q pair in the neighborhood of the singular points. It was shown for particular cases that the matrix coefficient of the isomonodromic equation can be uniquely specified from the monodromy properties of its global solution . The existence of the global solution can be investigated in the framework of the theory of the Riemann–Hilbert problem . This work is devoted to obtaining, in closed form, the series expansion of the solution of a P–Q pair in the neighborhood of the singular points. The problem considered here is important for developing the IMT method for P–Q pairs of arbitrary matrix dimension with different types of singularities of both equations forming the pair. In particular, the compatibility of the equations that are imposed on the remainder term of the asymptotic expansion of the P–Q pair solution is suggested in implementing the direct problem of the IMT method for the monodromy data to be determined. The irregular and regular singularities of the second equation of the P–Q pair (Q–equation) are studied in Sec.III and Sec.IV respectively. We require no additional conditions on the coefficient of the Q–equation, such as the inequality (modulo integers for the regular singularity) of the eigenvalues of the leading coefficients of the expansions at the singular points (cf. ). Besides, the independent variable of the first equation of the P–Q pair is not supposed to be immediately connected with the subset of the ”deformation parameters” of the monodromy data (see Ref.). The theorems establishing the existence of the compatible series expansion, in the singular point neighborhood, of the solutions of the P–Q pair equations are proven. The conservation laws for systems of nonlinear ODE’s admitting the compatibility condition representation are derived from the series expansions of the solution of the corresponding P–Q pair. ## 2. P–Q pair Nonlinear ODE’s considered in the framework of the IMT method can be written in the form $$Q_x-P_\lambda +[Q,P]=0$$ (1) with matrices $`P=P(x,\lambda )`$ and $`Q=Q(x,\lambda )`$ depending rationally on the variable $`\lambda `$. This equation is the compatibility condition of the overdetermined linear system $$\mathrm{\Psi }_x=P\mathrm{\Psi },$$ (2) $$\mathrm{\Psi }_\lambda =Q\mathrm{\Psi }.$$ (3) The six Painlevé equations are the most famous nonlinear ODE’s that admit representation (1) . Let at least the Q–equation of the P–Q pair (2,3) in the vicinity of the point $`x=x_0`$ have a singularity at the point $`\lambda =0`$. We also suppose that the coefficients of the P–Q pair are expanded in series in the neighborhood of the point $`(x=x_0,\lambda =0)`$ as $$P=\sum _{i=m}^{\infty }\lambda ^iP^{(i)},\qquad Q=\sum _{i=n}^{\infty }\lambda ^iQ^{(i)}$$ (4) ($`m\le 0`$, $`n<0`$), where the matrix coefficients $`P^{(i)}`$ and $`Q^{(i)}`$ are holomorphic in the variable $`x`$. 
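For completeness, Eq.(1) can be checked by cross-differentiating system (2,3): using Eqs.(2,3) one finds $$\mathrm{\Psi }_{\lambda x}=\left(Q_x+QP\right)\mathrm{\Psi },\qquad \mathrm{\Psi }_{x\lambda }=\left(P_\lambda +PQ\right)\mathrm{\Psi },$$ so that the equality of the mixed derivatives for a nondegenerate $`\mathrm{\Psi }`$ requires $`Q_x-P_\lambda +[Q,P]=0`$, i.e. Eq.(1).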
Substituting expansions (4) into Eq.(1) and equating to zero the coefficients at different powers of $`\lambda `$, one obtains an infinite set of equations $$Q_x^{(i)}-(i+1)P^{(i+1)}+\sum _{j=-\infty }^{\infty }[Q^{(j)},P^{(i-j)}]=0\qquad (i\ge m+n).$$ (5) It is assumed hereafter that $`Q^{(i)}=0`$ if $`i<n`$ and $`P^{(i)}=0`$ if $`i<m`$. We do not refer to system (2,3) as a Lax pair, to distinguish it from the overdetermined systems for nonlinear partial differential equations integrable in the framework of the inverse scattering (or spectral) transformation method . The P–Q pairs whose coefficients are represented as $$P=\sum _{i=m}^{\infty }(\lambda +f(x))^iP^{(i)}(x),\qquad Q=\sum _{i=n}^{\infty }(\lambda +f(x))^iQ^{(i)}(x),$$ are evidently led to the form considered here by means of changing the independent variables $`(x,\lambda )\to (x,\lambda +f(x))`$. ## 3. Irregular singularity of Q–equation If the point $`\lambda =0`$ is an irregular singularity of the second equation of the P–Q pair, we have the following Theorem 1. Let the asymptotic series expansion of the solution of Eq.(3) for fixed $`x=x_0`$ in a neighborhood of the irregular singular point $`\lambda =0`$ be represented in the closed form $$\mathrm{\Psi }=\sum _{i=0}^{\infty }\lambda ^iC^{(i)}\mathrm{\Lambda },$$ (6) where $`C^{(0)}=E`$, $`\mathrm{\Lambda }`$ is a nondegenerate solution of the equation $$\mathrm{\Lambda }_\lambda =\sum _{i=n}^{-1}\lambda ^i\mathrm{\Omega }^{(i)}\mathrm{\Lambda }$$ (7) at the point $`x=x_0`$, and the coefficients $`C^{(i)}`$ $`(i>0)`$ and $`\mathrm{\Omega }^{(i)}`$ $`(n\le i<0)`$ are defined from the infinite set of equations $$(i+1)C^{(i+1)}+\sum _{j=n}^{-1}C^{(i-j)}\mathrm{\Omega }^{(j)}-\sum _{j=n}^{\infty }Q^{(j)}C^{(i-j)}=0\qquad (i\ge n,\ C^{(i)}=0\ \text{if}\ i<0)$$ (8) with $`x=x_0`$. Then the compatible asymptotic series expansion of the solutions of both equations of the P–Q pair (2,3) in the neighborhood of the point $`(x=x_0,\lambda =0)`$ is represented in form (6), where $`\mathrm{\Lambda }`$ is a nondegenerate solution of Eq.(7) and of the equation $$\mathrm{\Lambda }_x=\sum _{i=m}^{0}\lambda ^i\mathrm{\Phi }^{(i)}\mathrm{\Lambda },$$ (9) and the coefficients $`C^{(i)}`$ satisfy Eqs.(8), which define simultaneously the coefficients $`\mathrm{\Omega }^{(i)}`$, and the set of equations $$C_x^{(i)}+\sum _{j=m}^{0}C^{(i-j)}\mathrm{\Phi }^{(j)}=\sum _{j=m}^{\infty }P^{(j)}C^{(i-j)}\qquad (i\ge m)$$ (10) that also define the coefficients $`\mathrm{\Phi }^{(i)}`$ $`(m\le i\le 0)`$. There exist matrices $`C^{(i)}`$, $`\mathrm{\Phi }^{(i)}`$ and $`\mathrm{\Omega }^{(i)}`$, which solve Eqs.(8,10), such that the overdetermined system of equations (7,9) is compatible. Proof. From the sets of equations (8) and (10) for $`n\le i<0`$ and $`m\le i\le 0`$ respectively we have the recurrent definitions of the matrices $`\mathrm{\Omega }^{(i)}`$ and $`\mathrm{\Phi }^{(i)}`$: $$\mathrm{\Omega }^{(i)}=Q^{(i)}+\sum _{j=n}^{i-1}\left(Q^{(j)}C^{(i-j)}-C^{(i-j)}\mathrm{\Omega }^{(j)}\right),$$ (11) $$\mathrm{\Phi }^{(i)}=P^{(i)}+\sum _{j=m}^{i-1}\left(P^{(j)}C^{(i-j)}-C^{(i-j)}\mathrm{\Phi }^{(j)}\right).$$ (12) The normal system of ODE’s for the coefficients $`C^{(i)}`$ $$C_x^{(i)}=\sum _{j=m}^{i}P^{(j)}C^{(i-j)}-\sum _{j=m}^{0}C^{(i-j)}\mathrm{\Phi }^{(j)}$$ (13) follows from Eqs.(10) if $`i>0`$. We assume for convenience that $`\mathrm{\Phi }^{(i)}=0`$ if $`i<m`$ or $`i>0`$, and $`\mathrm{\Omega }^{(i)}=0`$ if $`i<n`$ or $`i\ge 0`$. 
This agreement yields immediately two sets of useful identities: $$\mathrm{\Omega }^{(i)}=Q^{(i)}+\sum _{j=-\infty }^{i-1}\left(Q^{(j)}C^{(i-j)}-C^{(i-j)}\mathrm{\Omega }^{(j)}\right)\qquad (i<0),$$ (14) $$\mathrm{\Phi }^{(i)}=P^{(i)}+\sum _{j=-\infty }^{i-1}\left(P^{(j)}C^{(i-j)}-C^{(i-j)}\mathrm{\Phi }^{(j)}\right)\qquad (i\le 0)$$ (15) from Eqs.(11,12). Using the notation $$H^{(i)}=\mathrm{\Omega }_x^{(i)}-(i+1)\mathrm{\Phi }^{(i+1)}+\sum _{j=-\infty }^{-1}[\mathrm{\Omega }^{(j)},\mathrm{\Phi }^{(i-j)}]\qquad (i<0),$$ (16) the system of equations arising from the compatibility condition of Eq.(7) and Eq.(9) is written in the following manner: $$H^{(i)}=0\qquad (m+n\le i<0).$$ (17) This equation is obviously valid for $`i<m+n`$. Let the matrices $`C^{(i)}`$ $`(i>0)`$ satisfy system (13) in a neighborhood of the point $`x=x_0`$. Substitution of identities (14,15) into the first two terms on the right-hand side of Eqs.(16) leads, after cumbersome calculations, to the formulas: $$H^{(i)}=\sum _{j=-\infty }^{i}\left(F^{(i-j)}\mathrm{\Phi }^{(j)}-P^{(j)}F^{(i-j)}\right)-\sum _{j=-\infty }^{i-1}C^{(i-j)}H^{(j)},$$ (18) where $$F^{(i)}=(i+1)C^{(i+1)}+\sum _{j=-\infty }^{-1}C^{(i-j)}\mathrm{\Omega }^{(j)}-\sum _{j=-\infty }^{i}Q^{(j)}C^{(i-j)}\qquad (i\ge 0).$$ (19) It should be stressed that the expression for $`F^{(i)}`$ is nothing but the left-hand side of Eq.(8) for $`i\ge 0`$. Differentiation of Eqs.(19) gives, taking into account Eqs.(5,13,16), the system of ODE’s: $$F_x^{(i)}=\sum _{j=-\infty }^{i}\left(P^{(j)}F^{(i-j)}-F^{(i-j)}\mathrm{\Phi }^{(j)}\right)+\sum _{j=-\infty }^{-1}C^{(i-j)}H^{(j)}.$$ (20) Since Eqs.(8) are valid for fixed $`x=x_0`$, we can choose the initial values of the matrices $`C^{(i)}`$ $`(i>0)`$ at this point such that $$F^{(i)}=0$$ (21) for $`x=x_0`$ (see the remark after Eqs.(19)). Then Eqs.(21) are fulfilled in a neighborhood of the point $`x=x_0`$ due to Eqs.(18,20). The vanishing of the right-hand side of Eqs.(18) provides for the compatibility of the overdetermined linear system (7,9), whose coefficients are defined by Eqs.(11,12). It follows next from Eqs.(11,19,21) that the matrices $`C^{(i)}`$ and $`\mathrm{\Omega }^{(i)}`$ satisfy Eqs.(8). Finally, it is checked by direct substitution that expansion (6) formally yields the solution of the P–Q pair. If $`m=0`$ the proof of the theorem can be carried out in a simpler way by supposing that the matrices $`\mathrm{\Omega }^{(i)}`$ are solutions of Eqs.(17). Eqs.(13) can be solved recurrently in this case. Choosing initial values for $`C^{(i)}`$ and $`\mathrm{\Omega }^{(i)}`$ to satisfy Eqs.(8), one deduces from Eqs.(16,17) the matrix conservation laws of the system of nonlinear ODE’s which admits representation (1). By means of a nondegenerate matrix solution $`\mathrm{\Psi }_0`$ of the linear equation $$\mathrm{\Psi }_{0,x}=P^{(0)}\mathrm{\Psi }_0$$ (22) the conservation laws $`J^{(i)}`$ $`(n\le i<0)`$ are written in the following form: $$J^{(i)}=\mathrm{\Psi }_0^{-1}\mathrm{\Omega }^{(i)}\mathrm{\Psi }_0.$$ For $`m<0`$ the matrix conservation law of the corresponding system of nonlinear ODE’s will be given by the matrix $`J^{(-1)}`$ if $`\mathrm{\Psi }_0`$ is a solution of Eq.(22) with the coefficient $`\mathrm{\Phi }^{(0)}`$ instead of $`P^{(0)}`$. The conservation laws $`J^{(-1)}`$ and $`\mathrm{\Psi }_0^{-1}Q^{(-1)}\mathrm{\Psi }_0`$, which correspond to the regular singularity discussed in the sequel, are connected with the ”exponent matrix of formal monodromy” (see Ref.) if the eigenvalues of the matrix $`Q^{(n)}`$ are distinct and $`n<m\le 0`$. 
An explicit expression for the matrix $`\mathrm{\Lambda }`$ can then be obtained, and the independent variable $`x`$ can be directly associated with the subset of the deformation parameters of the monodromy data . ## 4. Regular singularity of Q–equation In this section we consider the case $`n=-1`$. The series expansion of the solution of Eq.(3) for fixed $`x=x_0`$ in a neighborhood of the singular point $`\lambda =0`$ is represented in the closed form: $$\mathrm{\Psi }=\sum _{i=0}^{\infty }\sum _{j=0}^{N-1}\lambda ^i(\mathrm{ln}\lambda )^jC^{(i,j)}\mathrm{\Lambda }.$$ (23) Here $`N`$ is the matrix dimension of the P–Q pair, $`C^{(0,0)}=E`$, $`\mathrm{\Lambda }`$ is a nondegenerate solution of the equation $$\mathrm{\Lambda }_\lambda =\lambda ^{-1}Q^{(-1)}\mathrm{\Lambda }$$ (24) at the point $`x=x_0`$, and the coefficients $`C^{(i,j)}`$ ($`i\ge 0`$, $`0\le j<N`$, $`i^2+j^2\ne 0`$) are defined from the infinite set of equations $$(i+1)C^{(i+1,j)}+[C^{(i+1,j)},Q^{(-1)}]+(j+1)C^{(i+1,j+1)}-\sum _{k=0}^{\infty }Q^{(k)}C^{(i-k,j)}=0$$ (25) ($`i\ge -1`$, $`0\le j<N`$, $`C^{(i,j)}=0`$ if $`i<0`$ or $`j\ge N`$) with $`x=x_0`$. The theorem of the previous section is valid if the series expansion for fixed $`x=x_0`$ in a neighborhood of the regular singular point $`\lambda =0`$ has the form (6). It means that the conditions $`C^{(i,j)}=0`$ ($`i\ge 0,j>0`$) will be kept under the evolution of the coefficients $`P^{(i)}`$ and $`Q^{(i)}`$ governed by compatibility condition (5). The logarithmic terms can enter the expansion in this case through the matrix $`\mathrm{\Lambda }`$. The proof of an analogous theorem for expansion (23), containing the logarithmic terms explicitly, encounters difficulties since the expansion possesses internal degrees of freedom. Nevertheless, in the case $`m=0`$ we come to Theorem 2. The compatible series expansion of the solutions of both equations of the P–Q pair (2,3) in the neighborhood of the point $`(x=x_0,\lambda =0)`$ is represented by Eq.(23), in which $`\mathrm{\Lambda }`$ is a nondegenerate solution of Eq.(24) and of the equation $$\mathrm{\Lambda }_x=P^{(0)}\mathrm{\Lambda },$$ (26) and the coefficients $`C^{(i,j)}`$ satisfy Eqs.(25) and the set of equations $$C_x^{(i,j)}+[C^{(i,j)},P^{(0)}]=\sum _{k=1}^{\infty }P^{(k)}C^{(i-k,j)}\qquad (i\ge 0,\ 0\le j<N).$$ (27) Proof. It is seen from Eqs.(5) that the overdetermined system of equations (24) and (26) is compatible. If $`\mathrm{\Psi }_0`$ is a nondegenerate solution of Eq.(22), then the matrix $`\mathrm{\Psi }_0^{-1}Q^{(-1)}\mathrm{\Psi }_0`$ is the conservation law of the corresponding system of ODE’s arising from the compatibility condition of the P–Q pair (2,3). Substitution of expansion (23) into the P–Q pair gives the two sets of equations (25,27). Let the matrices $`C^{(i,j)}`$ satisfy system (27) in a neighborhood of the point $`x=x_0`$. Introducing the notation for the expression on the left-hand side of Eqs.(25) $$F^{(i,j)}=(i+1)C^{(i+1,j)}+[C^{(i+1,j)},Q^{(-1)}]+(j+1)C^{(i+1,j+1)}-\sum _{k=0}^{\infty }Q^{(k)}C^{(i-k,j)}$$ ($`i\ge -1`$, $`0\le j<N`$), we obtain $$F_x^{(i,j)}+[F^{(i,j)},P^{(0)}]=\sum _{k=1}^{i+1}P^{(k)}F^{(i-k,j)},$$ taking into account Eqs.(5,27). So, the initial values of the matrices $`C^{(i,j)}`$ at the point $`x=x_0`$ can be chosen to satisfy the condition $$F^{(i,j)}=0$$ in a neighborhood of this point. The problem remaining open is the representation in closed form of the compatible series expansion of the solutions of both P–Q pair equations if $`m<0`$ and some coefficients $`C^{(i,j)}`$ ($`j>0`$) are nonzero.
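As a minimal numerical illustration of such recurrences, consider the simplest regular-singularity case $`n=-1`$ without logarithmic terms at fixed $`x=x_0`$; Eq.(25) with $`j=0`$ then determines $`C^{(i+1)}`$ from the lower coefficients through a Sylvester-type linear equation. The sketch below uses random placeholder matrices and implicitly assumes non-resonant eigenvalues of $`Q^{(-1)}`$ (in the resonant case the logarithmic terms of Eq.(23) are required).

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Sketch: recursive construction of the expansion coefficients C^{(i)} at fixed x = x_0
# for a regular singular point (n = -1), assuming no logarithmic terms, cf. Eq.(25) with
# j = 0:  (i+1) C^{(i+1)} + [C^{(i+1)}, Q^{(-1)}] = sum_{k=0..i} Q^{(k)} C^{(i-k)}.
rng = np.random.default_rng(0)
N, n_terms = 2, 6
Q = {k: rng.normal(size=(N, N)) for k in range(-1, n_terms)}   # toy coefficients Q^{(k)}

C = {0: np.eye(N)}     # C^{(0)} = E; Eq.(8) at i = -1 gives Omega^{(-1)} = Q^{(-1)}
for i in range(n_terms - 1):
    rhs = sum(Q[k] @ C[i - k] for k in range(i + 1))
    # (i+1) X + X Q^{(-1)} - Q^{(-1)} X = rhs  <=>  ((i+1) I - Q^{(-1)}) X + X Q^{(-1)} = rhs
    C[i + 1] = solve_sylvester((i + 1) * np.eye(N) - Q[-1], Q[-1], rhs)

# C[1], C[2], ... are the coefficients of Psi = sum_i lambda^i C^{(i)} Lambda,
# with Lambda_lambda = lambda^{-1} Q^{(-1)} Lambda at x = x_0.
```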
no-problem/0001/quant-ph0001064.html
ar5iv
text
# Quantum interfaces ## Reality construction by “knowables” Otto Rössler, in a thoughtful book, has pointed to the significance of object-observer interfaces, a topic which had also been investigated in other contexts (cf., among others, refs. ). By taking up this theme, the following investigation is on the epistemology of interfaces, in particular of quantum interfaces. The informal notions of “cartesian cut” and “interface” are formalized. They are then applied to observations of quantum and virtual reality systems. A generic interface is presented here as any means of communication or information exchange between some “observer” and some observed “object.” The “observer” as well as the “object” are subsystems of some larger, all-encompassing system called “universe.” Generic interfaces are totally symmetric. There is no principal, a priori reason to call one subsystem “observer” and the other subsystem “object.” The denomination is arbitrary. Consequently, “observer” and “object” may switch identities. Take, for example, an impenetrable curtain separating two parts of the same room. Two parties — call them Alice and John — are merely allowed to communicate by sliding papers through below the curtain. Alice, receiving the memos emanating from John’s side of the curtain, thereby effectively constructs a “picture” or representation of John and vice versa. The cartesian cut spoils this total symmetry and arbitrariness. It defines a distinction between “observer” and “object” beyond doubt. In our example, one agent — say Alice — becomes the observer while the other agent becomes the observed object. That, however, may be a very arbitrary convention which does not necessarily reflect the configuration properly. A cartesian cut may presuppose a certain sense of “rationality,” or even “consciousness,” on the “observer’s” side. We shall assume that some observer or agent exists which, endowed with rational intelligence, draws conclusions on the basis of certain premises, in particular the agent’s state of knowledge, or “knowables,” to (re)construct “reality.” Thereby, we may imagine the agent as some kind of robot, some mechanistic or algorithmic entity. (From now on, “observer” and “agent” will be used as synonyms.) Note that the agent’s state of knowledge may not necessarily coincide with a complete description of the observed system, nor may the agent be in the possession of a complete description of its own side of the cut. Indeed, it is not unreasonable to speculate that certain things, although knowable “from the outside” of the observer-object system, are principally unknowable to an intrinsic observer. Although we shall come back to this issue later, the notion of “consciousness” will not be reviewed here. We shall neither speculate exactly what “consciousness” is, nor what may be the necessary and sufficient conditions for an agent to be ascribed “consciousness”. Let it suffice to refer to two proposed tests of consciousness by Turing and Greenberger. With regard to the type of symbols exchanged, we shall differentiate between two classes: classical symbols, and quantum symbols. The cartesian cuts mediating classical and quantum symbols will be called “classical” or “quantum” (cartesian) cuts, respectively. 
## Formalization of the cartesian cut The task of formalizing the heuristic notions of “interface” and “cartesian cut” is, at least to some extent, analogous to the formalization of the informal notions of “computation” and “algorithm” by recursive function theory via the Church-Turing thesis. In what follows, the informal notions of interface and cartesian cut will be formalized by symbolic exchange; i.e., by the mutual communication of symbols of a formal alphabet. In this model, an object and an observer alphabet will be associated with the observed object and with the observer, respectively. Let there be an object alphabet $`𝒮`$ with symbols $`s\in 𝒮`$ associated with the outcomes or “messages” of an experiment’s possible results. Let there be an observer alphabet $`𝒯`$ with symbols $`t\in 𝒯`$ associated with the possible inputs or “questions” an observer can ask. At this point we would like to keep the observer and object alphabets as general as possible, allowing also for quantum bits to be transferred. Such quantum bits, however, have no direct operational meaning, since they cannot be completely specified. Only classical bits have an (at least in principle) unambiguous meaning, since they can be completely specified, copied and measured. We shall define an interface next. * An interface $`I`$ is an entity forming the common boundary between two parts of a system, as well as a means of information exchange between those parts. * By convention, one part of the system is called “observer” and the other part “object.” * Information between the observer and the object via the interface is exchanged by symbols. The corresponding functional representation of the interface is a map $`I:𝒯\to 𝒮`$, where $`𝒯`$ and $`𝒮`$ are the observer and the object alphabets, respectively. Any such information exchange is called “measurement.” * The interface is total in the sense that the observer receives all symbols emanating from the object. (However, the object need not receive all symbols emanating from the observer.) * Types of interface include purely classical, quasi-classical, and purely quantum interfaces. + Classical scenario I: A classical interface is an interface defined in a classical system, for which the symbols in $`𝒮`$ and $`𝒯`$ are classical states encodable by classical bits “$`0`$” and “1” corresponding to “$`\mathrm{𝚝𝚛𝚞𝚎}`$” and “$`\mathrm{𝚏𝚊𝚕𝚜𝚎}`$,” respectively. This kind of binary code alphabet corresponds to yes-no outcomes to dichotomic questions; experimental physics in-a-nutshell. An example of a dichotomic outcome is “there is a click in a counter” or “there is no click in a counter,” respectively. + Quasi-classical scenario II: a quasi-classical interface is an interface defined in a quantum system, whereby the symbols in $`𝒮`$ and $`𝒯`$ are classical states encoded by classical bits. This is the picture most commonly used for measurements in quantum mechanics. + Quantum scenario III: A quantum interface is an interface defined in a quantized system. In general, the quantum symbols in $`𝒮`$ and $`𝒯`$ are quantum states. Informally, in a measurement, the object “feels” the observer’s question (in $`𝒯`$) and responds with an answer (in $`𝒮`$) which is felt by the observer (cf. Fig. 1). The reader is encouraged to view the interface not as a static entity but as a dynamic one, through which information is constantly piped back and forth between the observer and the object, and the resulting time flow may also be viewed as the dynamic evolution of the system as a whole. 
In what follows it is important to stress that we shall restrict our attention to cases for which the interface is total; i.e., the observer receives all symbols emanating from the object. ## One-to-one quantum state evolution and “haunted” measurements On a microphysical scale, we do not wish to restrict quantum object symbols to classical states. The concept pursued here is rather that of the quantum scenario III: a uniform quantum system with unitary, and thus reversible, one-to-one evolution. Any process within the entire system evolves according to a reversible law represented by a unitary time evolution $`U^{-1}=U^{\dagger }`$. As a result, the interface map $`I`$ is one-to-one; i.e., it is a bijection. Stated pointedly, we take it for granted that the wave function of the entire system—including the observer and the observed object separated by the cartesian cut or interface—evolves one-to-one. Thus, in principle, previous states can be reconstructed by proper reversible manipulations. In this scenario, what is called “measurement” is merely an exchange of quantum information. In particular, the observer can “undo” a measurement by proper input of quantum information via the quantum interface. In such a case, no information, no knowledge about the object’s state can remain on the observer’s side of the cut; all information has to be “recycled” completely in order to be able to restore the wave function of the object entirely in its previous form. Experiments of the above form have been suggested and performed under the name “haunted measurement” and “quantum eraser” . These matters are very similar to the opening, closing and reopening of Schrödinger’s catalogue of expectation values \[12, p. 823\]: At least up to a certain magnitude of complexity, any measurement can be “undone” by a proper reconstruction of the wave-function. A necessary condition for this to happen is that all information about the original measurement is lost. In Schrödinger’s terms, the prediction catalog (the wave function) can be opened only at one particular page. We may close the prediction catalog before reading this page. Then we can open the prediction catalog at another, complementary, page again. In no way can we open the prediction catalog at one page, read and (irreversibly) memorize the page, close it; then open it at another, complementary, page. (Two non-complementary pages which correspond to two co-measurable observables can be read simultaneously.) ## Where exactly is the interface located? The interface has been introduced here as a scaffolding, an auxiliary construction to model the information exchange between the observer and the observed object. One could quite justifiably ask (and this question has indeed been asked by Professor Bryce deWitt), “where exactly is the interface in a concrete experiment, such as a spin state measurement in a Stern-Gerlach apparatus?” We take the position here that the location of the interface very much depends on the physical proposition which is tested and on the conventions assumed. Let us take, for example, a statement like “the electron spin in the $`z`$-direction is up.” In the case of a Stern-Gerlach device, one could locate the interface at the apparatus itself. Then, the information passing through the interface is identified with the path the particle took. One could also locate the interface at two detectors at the end of the beam paths. 
In this case, the information penetrating through the interface corresponds to which one of the two detectors (assumed lossless) clicks (cf. Fig. 2). One could also situate the interface at the computer interface card registering this click, or at an experimenter who presumably monitors the event (cf. Wigner’s friend ), or at the persons of the research group to whom the experimenter reports, to their scientific peers, and so on. Since there is no material or real substrate which could be uniquely identified with the interface, in principle it could be associated with or located at anything which is affected by the state of the object. The only difference is the reconstructibility of the object’s previous state (cf. below): the “more macroscopic” (i.e., many-to-one) the interface becomes, the more difficult it becomes to reconstruct the original state of the object. ## From one-to-one to many-to-one If the quantum evolution is reversible, how is it that observers usually experience irreversibility in measurement processes? We take the position here that the concept of irreversible measurement is no deep principle but merely originates in the practical inability to reconstruct a quantum state of the object. Restriction to classical state or information exchange across the quantum interface—the quasi-classical scenario II—effectively implements the standard quantum description of the measurement process by a classical measurement apparatus: there exists a clear distinction between the “internal quantum box,” the quantum object—with unitary, reversible, one-to-one internal evolution—and the classical symbols emanating from it. Such a reduction from the quantum to the classical world is accompanied by a loss of internal information “carried with the quantum state.” This effectively induces a many-to-one transition associated with the measurement process, often referred to as “wave function collapse.” In such a case, one and the same object symbol could have resulted from many different quantum states, thereby giving rise to irreversibility and entropy increase. But also in the case of a uniform one-to-one evolution (scenario I), just as in classical statistical physics, reconstruction greatly depends on the possibility to “keep track” of all the information flow directed at and emanating from the object. If this flow is great and spreads quickly with respect to the capabilities of the experimenter, and if the reverse flow of information from the observer to the object through the interface cannot be suitably controlled, then the chances for reconstruction are low. This is particularly true if the interface is not total: in such a case, information flows off the object to regions which are (maybe permanently) outside of the observer’s control. The possibility to reconstruct a particular state may widely vary with technological capabilities which often boil down to financial commitments. Thus, irreversibility of quantum measurements by interfaces appears as a gradual concept, depending on conventions and practical necessities, and not as a principal property of the quantum. In terms of coding theory, the quantum object code is sent to the interface but is not properly interpreted by the observer. Indeed, the observer might only be able to understand a “higher,” macroscopic level of physical description, which subsumes several distinct microstates under one macro-symbol (cf. below). As a result, such macro-symbols are not a unique encoding of the object symbols. 
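The point can be made concrete with a toy sketch, not tied to any particular physical system: a reversible micro-evolution combined with a many-to-one macro-description.

```python
from itertools import product

# Toy sketch: microstates are 3-bit strings, the micro-evolution is a reversible
# permutation (a cyclic bit rotation), and the observer's macro-symbol is the bit sum,
# i.e. a many-to-one description in the sense discussed above.
microstates = [''.join(bits) for bits in product('01', repeat=3)]

def evolve(m):          # one-to-one on the micro level (an invertible rotation)
    return m[-1] + m[:-1]

def macro(m):           # many-to-one: several microstates share one macro-symbol
    return m.count('1')

# The micro-evolution is a bijection ...
assert sorted(evolve(m) for m in microstates) == sorted(microstates)
# ... but the macro-description is not invertible: e.g. macro-symbol 1 subsumes three
# microstates, so a record of macro-symbols alone does not determine the microstate.
print([m for m in microstates if macro(m) == 1])    # ['001', '010', '100']
```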
Thus effectively the interface map $`I`$ becomes many-to-one. This also elucidates the question why there should be any meaningful concept of classical information if there is merely quantum information to begin with: in such a scenario, classical information appears as an effective entity on higher, intermediate levels of description. Yet, the most fundamental level is quantum information. ### Do conscious observers “unthink”? Because of the one-to-one evolution, a necessary condition for reconstruction of the object wave function is the complete restoration of the observer wave function as well. That is, the observer’s state is restored to its previous form, and no knowledge, no trace whatsoever can be left behind. An observer would not even know that a “measurement” has taken place. This is hard to accept, in particular if one assumes that observers have a consciousness which is a detached entity from, and not a mere function of, the quantum brain. Thus, in the latter case, one might be convinced that conscious observers “unthink” the measurement results in the process of complete restoration of the wave function. In that case, consciousness might “carry away” the measurement result via a process distinct from the quantum brain. (Cf. Wigner’s friend .) But even in this second, dualistic, scenario, the conscious observer, after reconstruction of the wave function, would have no direct proof of the “previously measured fact,” although subsequent measurements might confirm his allegations. This amounts to a proposal of an experiment involving a conscious observer (not merely a rational agent) and a quantized object. The experiment tests the metaphysical claim that consciousness exists beyond matter. As sketched above, the experiment involves four steps. * Step I: The conscious observer measures some quantum observable on the quantized object, the outcome of which occurs irreducibly randomly according to the axioms of quantum theory. As a consequence, the observer “is aware of” the measurement result and ascribes to it an “element of physical reality” . * Step II: The original quantum state of the quantized object is reconstructed. Thereby, all physical information about the measurement result is lost. This is also true for the brain of the conscious observer. Let us assume that the observer “is still aware of” the measurement result. In this case, the observer ascribes to it an “element of metaphysical reality.” * Step III: The observer guesses or predicts the outcome of the measurement despite the fact that no empirical evidence about the outcome of the previous measurement exists. * Step IV: The measurement is “re-done” and the actual measurement result is compared with the conscious observer’s prediction in step III. If the prediction and the actual outcome do not coincide, the hypothesis of a consciousness beyond matter is falsified. As an analogy, one might think of a player in a virtual reality environment. Although, at the observation level of the virtual reality, the measurement is undone, the player himself “knows” what has been there before. This knowledge, however, has been passed on to another interface which is not immanent with respect to the virtual reality. That is, it cannot be defined by intrinsic (endo-) means. Therefore, it can be called a transcendent interface with respect to the virtual reality. However, if we start with the real universe of the player, then the same interface becomes intrinsically definable. 
The hierarchical structure of meta-worlds has been the subject of conceptual and visual art and literature. ### Parallels in statistical physics: from reversibility to irreversibility The issue of “emergence” of irreversibility from reversible laws is an old one and a subject of scientific debate at least since Boltzmann’s time. We shall briefly review an explanation in terms of the emergence of many-to-one (irreversible) evolution relative to a “higher” macroscopic level of description from one-to-one (reversible) evolution at a more fundamental microscopic “complete” level of description. These considerations are based on the work of Jaynes , Katz and Hobson , among others. See Buček et al. for a detailed review with applications. In this framework, the many-to-one and thus irreversible evolution is a simple consequence of the fact that many different microstates, i.e., states on the fundamental “complete” level of physical description, are mapped onto a single macroscopic state (cf. Fig. 3). Thereby, knowledge about the microphysical state is lost, making the later reconstruction of the microphysical state from the macroscopic one impossible. (In the example drawn in Fig. 3, observation of the “macrostate” $`II`$ could mean that the system is either in microstate $`1`$ or $`2`$.) The evolution thus becomes irreversible on some intermediate, “higher” level of physical description, whereas it remains reversible on the complete description level. Here, just as in the quantum interface case, irreversibility in statistical physics is a gradual concept, very much depending on the observation level, which depends on conventions and practical necessities. Yet again, in principle the underlying complete level of description is one-to-one. As a consequence, this would for example make possible the reconstruction of the Library of Alexandria if one takes into account all smoky emanations thereof. The task of “reversing the gear,” of reconstructing the past and constructing a different future, is thus not entirely absurd. Yet fortunately or unfortunately, for all practical purposes it remains impossible. ## Principle of information conservation In another scenario (closely related to scenario I), classical information is a primary entity. The quantum is obtained as an effective theory to represent the state of knowledge, the “knowables,” of the observer about the object. Thereby, quantum information appears as a derived theoretical entity, very much in the spirit of Schrödinger’s perception of the wave function as a catalogue of expectation values (cf. above). The following circular definitions are assumed. * An elementary object carries one bit of (classical) information. * $`n`$ elementary objects carry $`n`$ bits of (classical) information. The information content present in the physical system is exhausted by the $`n`$ bits given; nothing more can be gained by any perceivable procedure. * Throughout temporal evolution, the amount of (classical) information measured in bits is conserved. One immediate consequence seems to be a certain kind of irreducible randomness associated with requesting from an elementary object information which has not been previously encoded therein. We may, for instance, think of an elementary object as an electron which has been prepared in spin state “up” in some direction. If the electron’s spin state is measured in another direction, this must give rise to randomness since the particle “is not supposed to know” about this property. 
Yet, we may argue that in such a case the particle might respond with no answer at all, and not with the type of irreducible randomness which, as we know from the computer sciences, is such a preciously expensive quality. One way to avoid this problem is to assume that the apparent randomness does not originate from the object but is a property of the interface: the object always responds to the question it has been prepared to answer; but the interface “translates” the observer’s question into the appropriate form suitable for the object. In this process, indeterminism comes in. As a result of the assumption of the temporal conservation of information, the evolution of the system has to be one-to-one and, for finite systems, a permutation. Another consequence of the conservation of information is the possibility to define continuity equations. In analogy to magnetostatics or thermodynamics we may represent the information flow by a vector which gives the amount of information passing per unit area and per unit time through a surface element at right angles to the flow. We call this the information flow density $`𝐣`$. The amount of information flowing across a small area $`\mathrm{\Delta }A`$ in a unit time is $$𝐣\cdot 𝐧\mathrm{\Delta }A,$$ where $`𝐧`$ is the unit vector normal to $`\mathrm{\Delta }A`$. The information flow density is related to the average flow velocity $`v`$ of information. In particular, the information flow density associated with an elementary object of velocity $`v`$ per unit time is given by $`𝐣=\rho v`$ bits per second, where $`\rho `$ stands for the information density (measured in bits/$`m^3`$). For $`N`$ elementary objects per unit volume carrying one bit each, $$𝐣=Nvi.$$ Here, $`i`$ denotes the elementary quantity of information measured in bit units. The information flow $`I`$ is the total amount of information passing per unit time through any surface $`A`$; i.e., $$I=\int _A𝐣\cdot 𝐧𝑑A.$$ We have assumed that the cut is on a closed surface $`𝒜_c`$ surrounding the object. The conservation law of information requires the following continuity equation to be valid: $$\oint _{𝒜_c}𝐣\cdot 𝐧𝑑A=-\frac{d}{dt}(\mathrm{Information}\mathrm{inside})$$ or, by defining an information density $`\rho `$ and applying Gauss’ law, $$\nabla \cdot 𝐣=-\frac{d\rho }{dt}.$$ To give a quantitative account of the present ability to reconstruct the quantum wave function of single photons, we analyze the “quantum eraser” paper by Herzog, Kwiat, Weinfurter and Zeilinger. The authors report an extension of their apparatus of $`x=0.13`$ m, which amounts to an information flow through a sphere of radius $`x`$ of $$I_{\mathrm{qe}}=4\pi x^2ci=6\times 10^7\mathrm{bits}/\mathrm{second}.$$ Here, $`𝐣=ci`$ ($`c`$ stands for the velocity of light in vacuum) has been assumed. At this rate the reconstruction of the photon wave function was conceivable. We propose to consider $`I`$ as a measure for wave function reconstruction. In general, $`I`$ will be astronomically high because of the astronomical numbers of elementary objects involved. Yet, the associated diffusion velocity $`v`$ may be considerably lower than $`c`$. Let us finally come back to the question, “why should there be any meaningful concept of classical information if there is merely quantum information to begin with?” A tentative answer in the spirit of this approach would be that “quantum information is merely a concept derived from the necessity to formalize modes of thinking about the state of knowledge of a classical observer about a classical object. 
Although the interface is purely classical, it appears to the observer as if it were purely quantum or quasi-classical.” ## Virtual reality as a quantum double Just like quantum systems, virtual reality universes can have a one-to-one evolution. We shall briefly review reversible automata, which are characterized by the following properties: * a finite set $`S`$ of states, * a finite input alphabet $`I`$, * a finite output alphabet $`O`$, * temporal evolution function $`\delta :S\times I\to S`$, * output function $`\lambda :S\times I\to O`$. The combined transition and output function $`U`$ is reversible and thus corresponds to a permutation: $$U:(s,i)\to (\delta (s,i),\lambda (s,i)),$$ (1) with $`s\in S`$ and $`i\in I`$. Note that neither $`\delta `$ nor $`\lambda `$ needs to be a bijection. As an example, take the permutation matrix $$U=\left(\begin{array}{cccccc}1& 0& 0& 0& 0& 0\\ 0& 1& 0& 0& 0& 0\\ 0& 0& 0& 0& 1& 0\\ 0& 0& 0& 1& 0& 0\\ 0& 0& 0& 0& 0& 1\\ 0& 0& 1& 0& 0& 0\end{array}\right).$$ It can be realized by a reversible automaton which is represented in Table 1. Neither its evolution function $`\delta `$ nor its output function $`\lambda `$ is one-to-one, since for example $`\delta (s_1,3)=\delta (s_2,1)=s_2`$ and $`\lambda (s_1,2)=\lambda (s_1,3)=2`$. Its flow diagram throughout five evolution steps is depicted in Figure 3, where the microstates $`1,2,3,4`$ are identified by $`(s_1,1)`$, $`(s_1,2)`$, $`(s_2,1)`$ and $`(s_2,2)`$, respectively. ## Metaphysical speculations Although the contemporaries always attempt to canonize their relative status of knowledge about the physical world, from a broader historical perspective this appears sentimental at best and ridiculous at worst. The type of natural sciences which emerged from the Enlightenment is in a permanent scientific revolution. As a result, scientific wisdom is always transitory. Science is and needs to be in constant change. So, what about the quantum? Quantum mechanics challenges the conventional rational understanding in the following ways: * by allowing for randomness of single events, which collectively obey quantum statistical predictions; * by the feature of complementarity; i.e., the mutual exclusiveness of the measurement of certain observables termed complementary. Complementarity results in non-classical, non-distributive and thus non-boolean event structures; * by non-standard probabilities which are based on non-classical, non-boolean event structures. These quantum probabilities cannot be properly composed from their proper parts, giving rise to the so-called “contextuality.” I believe that, just like so many other formalisms before it, quantum theory will eventually give way to a more comprehensive understanding of fundamental physics, although at the moment it appears almost heretical to pretend that there is something “beyond the quantum”. Exactly what this progressive theory beyond the quantum will look like, nobody presently can say. (Otherwise, it would not be beyond anymore, but there would be another theory lurking beyond the beyond.) In view of the quantum challenges outlined before, it may be well worth speculating that the revolution will drastically change our perception of the world. It may well be that epistemic issues such as the ones reviewed here will play an important role therein. I believe that the careful analysis of conventions which are taken for granted and are never mentioned in standard presentations of the quantum and relativity theory will clarify some misconceptions. 
Are quantum-like and relativity-like theories consequences of the modes we use to think about and construct our world? Do they not tell us more about our projections than about an elusive reality? Of course, physical constants such as Planck’s constant or the velocity of light are physical input. But the structural form of the theories might be conventional. Let me also state that one-to-one evolution is a sort of “Borgesian” nightmare, a hermetic prison: the time evolution is a constant permutation of one and the same “message” which always remains the same but expresses itself through different forms. Information is neither created nor discarded but remains constant at all times. The implicit time symmetry spoils the very notion of “progress” or “achievement,” since what is a valuable output is purely determined by the subjective meaning the observer associates with it and is devoid of any syntactic relevance. In such a scenario, any gain in knowledge remains a merely subjective impression of ignorant observers.
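Returning to the reversible automaton of the “virtual reality” section, one consistent realization of $`\delta `$ and $`\lambda `$ can be read off the permutation matrix; since Table 1 is not reproduced here, the pair ordering $`(s_1,1),\mathrm{},(s_2,3)`$ and the row-to-column reading of $`U`$ in the sketch below are assumptions, chosen so that the two identities quoted above come out correctly.

```python
import numpy as np

# Sketch: read the 6x6 permutation matrix U as a map on the six (state, symbol) pairs,
# assuming the ordering (s1,1),(s1,2),(s1,3),(s2,1),(s2,2),(s2,3) and the convention
# that a 1 in row r, column c means U sends pair r to pair c.
U = np.array([[1,0,0,0,0,0],
              [0,1,0,0,0,0],
              [0,0,0,0,1,0],
              [0,0,0,1,0,0],
              [0,0,0,0,0,1],
              [0,0,1,0,0,0]])
pairs = [(1,1),(1,2),(1,3),(2,1),(2,2),(2,3)]     # (state index, input/output symbol)

assert (U.sum(axis=0) == 1).all() and (U.sum(axis=1) == 1).all()   # a permutation matrix

delta, lam = {}, {}
for r, (s, i) in enumerate(pairs):
    s_new, out = pairs[int(np.argmax(U[r]))]      # image of pair r under U
    delta[(s, i)], lam[(s, i)] = s_new, out

# The combined map (s,i) -> (delta, lambda) is one-to-one (a permutation) ...
assert len(set((delta[p], lam[p]) for p in pairs)) == len(pairs)
# ... while delta and lambda separately are many-to-one, as stated in the text:
print(delta[(1,3)], delta[(2,1)])   # both map to state 2 (s2)
print(lam[(1,2)], lam[(1,3)])       # both output symbol 2
```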
no-problem/0001/astro-ph0001278.html
ar5iv
text
# Abstract ## Abstract This paper aims to understand the continuum of Seyfert 2 galaxies. By fitting the single galaxies in the sample of Heckman et al. (1995) with composite models (shock+ photoionization), we show that five main components characterize the SED of the continuum. Emission in the radio range can be recognized as bremsstrahlung from relatively cold gas, or synchrotron radiation due to Fermi mechanism at the shock front. The bump in the IR is due to reradiation of the central radiation by dust and to mutual heating and cooling between dust and gas. In the optical-UV range the main component is due to bremsstrahlung from gas heated and ionized by the primary flux (the flux from the active center (AC) or/and the radiation from young stars), by diffuse radiation emitted by the hot slabs of gas, and by collisional ionization. Shocks play an important role since they produce a high temperature zone where soft X-rays are emitted. Finally, the harder X-ray radiation from the AC, seen through the clouds, is easily recognizable in the spectrum. Assuming that the NLR is powered by a power-law central ionizing radiation and by shocks, we discuss the optical-ultraviolet featureless continuum of Seyfert 2. We show that in this wavelength range, the slope of the NLR emission reproduces the observed values, and may be the main component of the featureless continuum. However, the presence of star forming regions cannot be excluded in the circumnuclear region of various Seyfert galaxies. Their photoionizing radiation may prevail in the outskirts of the galaxy where the power-law radiation from the AC is diluted. An attempt is made to find their fingerprints in the observed AGN spectra. Finally, it is demonstrated that multi-cloud models are necessary to interpret the spectra of single objects, even in the global investigation of a sample of galaxies. ## 1 Introduction Observational data in the ultraviolet range are now available for more and more galaxies. They complete the range of the observed frequencies and permit a better interpretation of the emitted spectra. The nature of the ultraviolet continuum in type 2 Seyfert galaxies was investigated by Heckman et al. (1995) on the basis of the IUE spectra of 20 galaxies (hereinafter, this sample is referred to as HS). Various possibilities were considered to explain the observed featureless continuum, as, for example, light from a hidden Seyfert 1 nucleus scattered by dust or warm electrons. The results show that no more than 20 % of the Seyfert 2 template’s continuum can be light from a hidden Seyfert 1 nucleus. The alternative favored by Heckman et al. is that most of the UV continuum in these galaxies is produced by a reddened circumnuclear starburst. Heckman et al. claim that the UV spectral slopes and the ratios of far IR to UV continuum fluxes are very similar to the corresponding properties of typical metal-rich, dusty starburst galaxies. The interpretation of the observed continuum (and line) spectra of the galaxies is difficult due to the complex structure of the emitting regions. Modeling is crucial to disentangle the different components contributing to a single galaxy spectrum. Previous papers on active galactic nuclei (AGN) and starburst galaxy emission spectra (Contini, Prieto, & Viegas 1998a,b, Viegas, Contini, & Contini 1999, Contini et al 1999) have shown that multi-cloud models are necessary to fit the observational data even in a single region of a galaxy. 
Moreover, the spectral energy distribution (SED) of AGN and starburst galaxy continua can be roughly decomposed into five components. Emission in the radio range can be recognized as bremsstrahlung from relatively cold gas, or synchrotron radiation due to the Fermi mechanism at the shock front. The bump in the IR is due to reradiation by dust of the central radiation. In the optical-UV range the main component is due to bremsstrahlung from the clouds photoionized by the flux from the active center (AC) and the radiation from young stars, by diffuse radiation emitted by the hot slabs of gas, as well as by collisional ionization. The shock plays an important role since it produces a high temperature zone emitting soft X-rays. Finally, the hard X-ray radiation from the AC, seen through the clouds or through the dust torus, is easily recognizable in the spectrum. Previous results (Contini, Prieto & Viegas 1998a,b) show that the SEDs of the continuum in the various frequency ranges are often correlated. In order to further investigate the nature of the continuum of HS galaxies, we have simulated the galaxy spectra by multi-cloud composite models. These models account for the photoionizing effect of the radiation from the AC source, as well as for shock effects on the emitting clouds. One of our goals is to search for starburst or AGN characteristics prevailing in single objects and see how these can be recognized by the analysis of the spectra. Actually, we focus on the continuum, looking for the features that are easily recognized in the observed spectra. We show that for consistency, however, both the continuum and line spectra must be considered. The physical conditions in single galaxies and in the whole sample are also investigated. Another goal is to verify if the continuum obtained from the components listed above reproduces the observed characteristics of the HS galaxies, in particular the slopes discussed by Tran (1995) and by Heckman et al. (1995). For all the objects of the sample, the fit of the observed continuum SED and the main characteristics of the models explaining the observed continuum are presented in §2. The line ratios are presented and discussed in §3. Concluding remarks follow in §4. ## 2 The SED of the continuum We consider composite models for the NLR which account consistently for the effects of an ionizing radiation flux from an external source and of the shocks due to cloud motions. The SUMA code (see, for instance, Viegas & Contini 1994) is used. The input parameters are the shock velocity, $`\mathrm{V}_\mathrm{s}`$, the preshock density, $`\mathrm{n}_0`$, the preshock magnetic field, $`\mathrm{B}_0`$, the ionizing radiation spectrum, the chemical abundances, the dust-to-gas ratio by number, d/g, and the geometrical thickness of the clouds, D. A power-law, characterized by the power index $`\alpha `$ and the flux, $`\mathrm{F}_\mathrm{H}`$, at the Lyman limit reaching the cloud (in units of cm<sup>-2</sup> s<sup>-1</sup> eV<sup>-1</sup>), is generally adopted. However, for some models a high temperature ($`\mathrm{T}_{*}`$=1-2 $`10^5`$ K) blackbody ionizing radiation is used, characterized by the ionization parameter U. This high blackbody temperature could be associated with an evolved stellar cluster (Terlevich & Melnick 1985, Cid-Fernandes, Dottori, Gruenwald & Viegas 1991). Some models with a blackbody spectrum at lower temperature ($`\mathrm{T}_{*}`$=5 $`10^4`$ K) are also considered, in order to mimic ionization by a starburst. 
For all the models, $`\mathrm{B}_0`$ = $`10^{-4}`$ gauss, $`\alpha `$ = 1.5, and cosmic abundances (Allen 1973) are adopted. The remaining input parameters are listed in Table 1. Notice that some of the models differ only in the dust-to-gas ratio. SUMA accounts for silicate grains with an initial radius of 0.2 $`\mu `$m. The grains are sputtered on entering the shock front (see Viegas & Contini 1994). The d/g ratio by mass in the Galaxy is $`4.1\times 10^{-4}`$ (Draine & Lee 1984), which corresponds to d/g $`10^{-14}`$ by number, adopting a silicate density of $`3`$ g $`\mathrm{cm}^{-3}`$. Previous results, obtained by self-consistently fitting the continuum and emission-line spectra of the Circinus galaxy and NGC 5252, confirm that velocities of 100-1000 $`\mathrm{km}\mathrm{s}^{-1}`$ are present in the narrow line region of Seyfert 2 galaxies, with preshock densities of 100-1000 $`\mathrm{cm}^{-3}`$. The ionizing flux from the AC is about $`10^{11}`$-$`10^{12}`$ photons $`\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{eV}^{-1}`$ at 1 Ryd.

### 2.1 Theoretical results: the optical-UV peak

The calculated optical-ultraviolet continuum depends on the temperature distribution across the emitting clouds. An illustration of the results is presented in Figure 1. The most significant models are chosen with the following criteria: shock velocities in the range 100-300 $`\mathrm{km}\mathrm{s}^{-1}`$, characterizing the velocity field of the NLR, with related pre-shock densities in the range 100-300 $`\mathrm{cm}^{-3}`$, in order to reproduce the observed densities. Such $`\mathrm{V}_\mathrm{s}`$ values produce postshock zones with temperatures of about $`10^5`$-$`10^6`$ K. Regarding the ionizing radiation, the characteristic log $`\mathrm{F}_\mathrm{H}`$ generally varies between 10 and 12.7. Assuming that starbursts can also be effective in AGN, we present the results of models with a blackbody temperature of $`5\times 10^4`$ K, which is in agreement with starburst values, and U between 0.01 and 1 (see Viegas et al. 1999). The results are shown in Figure 1a for a power-law radiation and in Figure 1b for the $`\mathrm{T}_{}`$= $`5\times 10^4`$ K blackbody radiation, with a dust-to-gas ratio equal to $`10^{-15}`$. In each figure, the left panel shows the temperature distribution across a cloud, where the left edge of the diagram corresponds to the shock front, while the right edge corresponds to the photoionized side. The right panel shows the corresponding SEDs. The particular case of a blackbody spectrum with a high ionization parameter (U = 100, Fig. 1b, left panel) shows that blackbody radiation hardly heats the gas to temperatures higher than T = $`10^4`$ K, since there are not enough high energy photons. On the other hand, for power-law models higher gas temperatures (T $`\sim 4\times 10^4`$ K) can be reached, depending on $`\mathrm{F}_\mathrm{H}`$. The peak of bremsstrahlung emission in the optical range of the SED depends on the temperature distribution across the clouds. Therefore, different sets of input parameters provide SEDs with different shapes. The peaks, shown in Figures 1a and 1b, correspond to gas at $`10^4`$ K, while the peaks in the soft X-rays are produced by the gas at higher temperatures in the post-shock region, and are clearly related to the shock velocity.

### 2.2 Comparison with the observations

In Figure 2 the spectral luminosities of all the galaxies of the sample (open squares) appear together. NGC 3393 is recognizable as the lower limit (crosses) and NGC 7582 as the upper limit (open triangles).
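Two of the numbers quoted in §2 can be cross-checked with back-of-the-envelope arithmetic: the conversion of the Galactic dust-to-gas ratio from mass to number (assuming, as stated, spherical 0.2 μm silicate grains of density about 3 g cm⁻³), and the link between shock velocity and postshock temperature. The sketch below uses a gas mass of 1.4 m_H per hydrogen nucleus and the standard strong adiabatic shock estimate with mean molecular weight μ ≈ 0.6; both are our assumptions, and SUMA of course computes the full downstream structure, so this is only an order-of-magnitude check.

```python
import math

# --- dust-to-gas: mass ratio -> number ratio --------------------------------
m_H = 1.6726e-24            # g
a_grain = 0.2e-4            # cm (0.2 micron silicate grain, as in the text)
rho_grain = 3.0             # g cm^-3
m_grain = 4.0 / 3.0 * math.pi * a_grain**3 * rho_grain   # ~1.0e-13 g per grain

dg_mass = 4.1e-4            # Galactic dust-to-gas ratio by mass
mu_gas = 1.4                # gas mass per hydrogen nucleus, in units of m_H (assumed)
dg_number = dg_mass * mu_gas * m_H / m_grain
print(f"d/g by number ~ {dg_number:.1e}")       # ~1e-14, consistent with the text

# --- postshock temperature for a strong adiabatic shock ----------------------
k_B = 1.3807e-16            # erg/K
mu = 0.6                    # mean molecular weight of the ionized postshock gas (assumed)
def T_post(v_s_kms):
    v = v_s_kms * 1.0e5     # cm/s
    return 3.0 / 16.0 * mu * m_H * v**2 / k_B

for v in (100, 300, 900):
    print(f"V_s = {v:4d} km/s  ->  T ~ {T_post(v):.1e} K")
# ~1.4e5 K, ~1.2e6 K and ~1.1e7 K: the 1e5-1e6 K range quoted for V_s = 100-300 km/s,
# and ~1e7 K for the high-velocity clouds discussed later in connection with soft X-rays.
```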
The references of the observational data for the continuum of the Seyfert 2 galaxies of HS, from infrared to X-rays appear in Table 2a. The 1.4 GHz data were taken from Heckman et al. (1995). A roughly common shape of the SED can be noticed. In the radio range the data show the band of the luminosities. Collectively, however, they cannot indicate any kind of slope, either with bremsstrahlung or with power-law characteristics (Fermi mechanism). On the basis of the previous analysis of the continuum and emission line spectra of individual Seyfert 2 galaxies, we have selected the least number of models which fit the continuum of the HS galaxies (see Table 1). The great number of different conditions obtained by the full modeling of individual objects shows that modeling a sample collectively can give only a rough idea of the real picture. #### 2.2.1 Fitting the continuum : the sample The observed and calculated continuum SEDs are compared in Figure 3. The SED of the continuum in the optical-UV range shows a complex nature. Notice that the data are, generally, not reddening corrected, however, the correction is small, even in the optical-UV range with E(B-V) between $``$ 0.0 and 0.1 (E(B-V)$``$ 0.35 for NGC 2110, McAlary et al 1983). One of the difficulties of the study of the AGN continuum is the extraction of the stellar population component. Extraction of the stellar continuum from long slit spectroscopic data of AGN are usually obtained using a template (see, for instance, a comprehensive analysis by Cid-Fernandes, Storchi-Bergmann & Schmitt 1998). For low dispersion data as used in this paper, however, a less sophisticated correction is usually applied; the stellar continuum is represented by a low temperature blackbody component. Here, for all the galaxies, a blackbody spectra with 3 $`10^3`$ K$``$ T $``$ 5 $`10^3`$ K is used to mimic the emission from the old-stellar galactic population which contributes to the nuclear continuum. Model results are then used to fit data, in the same frequency range, that are uncontaminated by the old stellar population. These results correspond mainly to bremsstrahlung from gas photoionized by the AC radiation. The diffuse secondary emission from the hot slabs of the shocked gas may also contribute to the optical-ultraviolet spectrum. For each galaxy the models representing the main components are given in Table 2b. The black body temperature corresponding to the old population stars is given in column 2 and the input parameters of the models in columns 3-7. In column 8 we give the weights (W) which are adopted to fit the data for each model. They represent the ratio of the emitting surface at the nebula to the surface at earth (4$`\pi \mathrm{d}^2`$ , where d is the distance to the galaxy). The covering factor ($`\eta `$) - corresponding to the models fitting the SED between $`10^{14}`$-5 $`10^{14}`$ Hz - is calculated assuming that the NLR is located $``$ 1 Kpc from the central source and is listed in column 9. Notice that the models which fit the SED at $`10^{14}`$-5 $`10^{14}`$ Hz generally show the highest $`\eta `$, thus the values listed in Table 2b are upper limits. Regarding the galaxies in the sample, Figure 3 shows that: 1) For most of the galaxies, the data available in the radio range are not sufficient to indicate a definitive slope characterizing the emission mechanism. 
However, for NGC 2992, Mrk 3, and Mrk 463 the radio emission correponds to synchrotron radiation produced by the Fermi mechanism at the shock front (Contini, Prieto, & Viegas 1998b). On the other hand, for NGC 2110, NGC 3393, NGC 4388, NGC 5135, NGC 5643, NGC 6221, NGC 7582, and Mrk 348 the radio emission is dominated by free-free radiation that can often be reduced by absorption. 2) Emission in the IR is due to reradiation by dust (see Kraemer & Harrington 1986). Generally, the observational data are better explained by a multi-cloud model (e.g. NGC 3393, NGC 4388). In these cases, the dust temperature is different in the different clouds and each component peaks at a different frequency. Thus, the resulting peak is flatter, better reproducing the observed data. For NGC 2110, NGC 3393, NGC 5506, Mrk 3, Mrk 34, Mrk 78, Mrk 348, IC 3639, and IC 5135 the dust-to-gas ratio in some of the clouds is particularly high ($`10^{13}`$). 3) Except for Mrk 477 and Mrk 573, an old stellar population, with temperatures in the range 3 to 5 $`10^3`$ K, seems to be contributing to the optical continuum of the galaxies in the sample, representing the underlying stellar population. 4) Soft X-ray data are fitted by gas with a high shock velocity ( $`>`$ 900 $`\mathrm{km}\mathrm{s}^1`$). This soft-X ray component originates from the post-shock zone where $`\mathrm{V}_\mathrm{s}`$$``$ 900 $`\mathrm{km}\mathrm{s}^1`$, corresponding to temperatures of $``$ 1.2 $`10^7`$ K. Such temperatures are in good agreement with the Raymond-Smith interpretation of the ASCA soft X-rays data for NGC 5506 (Wang et al. 1999) and NGC 7582 (Xue et al. 1998) which gives T $`10^7`$ K. Also for NGC 2110 temperatures of 6.8 $`10^6`$\- 1.5 $`10^7`$ K are deduced from ROSAT soft X-ray data (Weaver et al 1995). Dust grains can also be heated to relatively high temperatures (Viegas & Contini 1994) in the post shock region. Consequently, besides the soft X-ray emission, the shape of the mid-IR continuum may provide another key to the presence of high velocity clouds in AGN. In this case we expect the mid-infrared emission to be correlated with the soft X-ray component. For the galaxies analysed here, high velocity clouds ($`\mathrm{V}_\mathrm{s}`$= 900-1000 $`\mathrm{km}\mathrm{s}^1`$) invoked to explain the data in the soft X-ray and in the near IR generally have $`normal`$ dust-to-gas ratios ($`10^{13}10^{15}`$). 5) The d/g ranges from 5 $`10^{16}`$, in poor dusty cases, to $`>10^{13}`$, in dusty clouds (Table 1). Very different conditions can be found in different clouds of the same galaxy (e.g Mrk 477) . In fact, dust, which is generally present in star forming regions, can be destroyed by sputtering and evaporation. Therefore, the effect of the shock is crucial because sputtering and evaporation depend on the shock velocity and on the grain temperature, respectively. Finally, Fig. 3 shows that shock velocities of $``$ 200 $`\mathrm{km}\mathrm{s}^1`$ and preshock densities of $``$ 200 $`\mathrm{cm}^3`$ strongly prevail in the fit of the data in the optical range. On the other hand, the data in the optical-UV range, between 4250 Å and 1200 Å (corresponding to log $`\nu `$ = 14.85 - 15.4), are better fitted by model 1, characterized by $`\mathrm{V}_\mathrm{s}`$=100 $`\mathrm{km}\mathrm{s}^1`$ and a relatively high $`\mathrm{F}_\mathrm{H}`$. #### 2.2.2 Fitting the continuum : particular models $`\mathrm{𝑆𝑡𝑎𝑟𝑏𝑢𝑟𝑠𝑡𝑠}\mathrm{𝑖𝑛}\mathrm{𝑡ℎ𝑒}\mathrm{𝑐𝑖𝑟𝑐𝑢𝑚𝑛𝑢𝑐𝑙𝑒𝑎𝑟}\mathrm{𝑟𝑒𝑔𝑖𝑜𝑛}`$ The fit of NGC 5506 UV data with model 1 is not exact. 
Moreover, we have referred to the data in the UV for all galaxies considering that the flux drops at log $`\nu `$ $`>`$ 15.4. A better fit to NGC 5506 can be obtained either by a power-law (pl) dominated model with $`\mathrm{V}_\mathrm{s}`$=200 $`\mathrm{km}\mathrm{s}^1`$, $`\mathrm{n}_0`$=50 $`\mathrm{cm}^3`$, log $`\mathrm{F}_\mathrm{H}`$= 9.3, D= 6 $`10^{17}`$ cm, and d/g= 5 $`10^{15}`$ or with a black body (bb) dominated model with $`\mathrm{V}_\mathrm{s}`$= 200 $`\mathrm{km}\mathrm{s}^1`$, $`\mathrm{n}_0`$= 200 $`\mathrm{cm}^3`$, U = 0.01, $`\mathrm{T}_{}`$= 5 $`10^4`$ K, D= 5 $`10^{16}`$ cm, and d/g = $`10^{14}`$. In Fig. 3d, the contributions of these models to the continuum are represented by the long dash lines (thick) and the dash-dot lines (thick), respectively. The sum of the models will give an even better fit to the data. The data in the X-ray range are fitted by a model with $`\mathrm{V}_\mathrm{s}`$= 900 $`\mathrm{km}\mathrm{s}^1`$, $`\mathrm{n}_0`$ = 1000 $`\mathrm{cm}^3`$, D = 8 $`10^{17}`$ cm, d/g = 5 $`10^{13}`$. The bb model which fits the high frequency data represents the case where the bb flux from the stars reaches the very shock front of the clouds moving outwards from the AC. In other words, the dominant starbursts are located in the circumnuclear region. $`\mathrm{𝐶𝑜𝑚𝑝𝑎𝑟𝑖𝑠𝑜𝑛}\mathrm{𝑜𝑓}\mathrm{𝑏𝑙𝑎𝑐𝑘}\mathrm{𝑏𝑜𝑑𝑦}\mathrm{𝑎𝑛𝑑}\mathrm{𝑝𝑜𝑤𝑒𝑟}\mathrm{𝑙𝑎𝑤}\mathrm{𝑑𝑜𝑚𝑖𝑛𝑎𝑡𝑒𝑑}\mathrm{𝑚𝑜𝑑𝑒𝑙𝑠}`$ To distinguish starbursts from AGN we have run two models with shock parameters as model 5 and model 1, but with black body radiation with $`\mathrm{T}_{}`$= 5 $`10^4`$ K (U=1), corresponding to a starburst and $`\mathrm{T}_{}`$= 2 $`10^5`$ K (U=10), corresponding to a ”warmer” (Terlevich & Melnick 1985), respectively. The results are presented for NGC 3081 (Figure 3b). For models with $`\mathrm{V}_\mathrm{s}`$=200 $`\mathrm{km}\mathrm{s}^1`$ the shock prevails and there is no great difference between the pl model and the bb model. Diffuse radiation from the hot slabs of the gas downstream maintains the temperature of the gas at $``$ 1-2 $`10^4`$ K. On the other hand, in case of a low velocity shock ($`\mathrm{V}_\mathrm{s}`$=100 $`\mathrm{km}\mathrm{s}^1`$), even a strong bb radiation flux corresponding to a high temperature cannot heat the gas enough to shift the peak in the optical-UV to log ($`\nu `$) $`>`$ 15 (see §2.1). $`\mathrm{𝑇ℎ𝑒}\mathrm{𝑐𝑜𝑛𝑡𝑟𝑖𝑏𝑢𝑡𝑖𝑜𝑛}\mathrm{𝑜𝑓}\mathrm{𝑎𝑛}\mathrm{𝑖𝑛𝑡𝑒𝑟𝑚𝑒𝑑𝑖𝑎𝑡𝑒}\mathrm{𝑠𝑡𝑒𝑙𝑙𝑎𝑟}\mathrm{𝑝𝑜𝑝𝑢𝑙𝑎𝑡𝑖𝑜𝑛}`$ It is suggested by Heckman et al (1995) that the optical-UV SED of the continuum can be fitted by the black body radiation from relatively high temperature stars. In fact, a young stellar population is observed in some galaxies, e.g. Mrk 477 (see Heckman et al. 1997). In Figure 4 we present the fit of the Mrk 477 continuum by black body fluxes corresponding to different temperatures. The three bumps correspond to $`\mathrm{T}_{}`$= 4,000 K (dash-dot line), 10,000 K (long-dash line), and 20,000 K (short-dash line). The second and the third one correspond to intermediate population stars (B). The ratios of the weights adopted to fit the data are bb(4,000) : bb(10,000) : bb(20,000) = 3.2 $`10^{11}`$ : 5 $`10^{23}`$ : 1.6 $`10^{24}`$. These are not able to explain the observational evidence of starburst activity (T. Contini 1999, private communication). Moreover, we assume that the UV emission by a younger (T$`>\mathrm{5\hspace{0.17em}10}^4`$ K) star population is absorbed by the clouds and reemitted as bremsstrahlung (see for example Figure 3b). 
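The three blackbody bumps of Figure 4 can be located quantitatively just by evaluating Planck curves at the quoted temperatures; the fitted weights in the text then set their relative normalizations. The sketch below only finds the peak frequency of B_ν for each temperature (arbitrary normalization), as a minimal check that the components fall in the expected part of the optical-UV range; it is not the actual fit to Mrk 477.

```python
import numpy as np

h, k_B = 6.626e-27, 1.381e-16                 # cgs

def planck_nu(nu, T):
    """Blackbody intensity B_nu up to a constant factor (enough to locate the peak)."""
    x = h * nu / (k_B * T)
    return nu**3 / np.expm1(x)

nu = np.logspace(13.5, 15.5, 2000)            # Hz
for T in (4000.0, 1.0e4, 2.0e4):
    peak = nu[np.argmax(planck_nu(nu, T))]
    print(f"T = {T:7.0f} K : B_nu peaks near log(nu) = {np.log10(peak):.2f}")
# The three components peak at progressively higher frequency; their relative
# heights in Figure 4 are then fixed by the weight ratios quoted above.
```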
So, we conclude that although intermediate population stars contribute to the continuum, the bremsstrahlung from illuminated clouds prevails. This is also consistent with the results of line spectra calculations. ### 2.3 The spectral slopes One of the important observational features of optical-UV spectra in Seyfert 2 galaxies is the continuum slopes (Heckman et al. 1995). These suggest that the featureless continuum comes from a reddened starburst in the ranges 1200 - 2600 Å ($`F_\lambda `$ $``$ $`\lambda ^\beta `$) and 1910 - 4250 Å ($`F_\lambda `$ $``$ $`\lambda ^\gamma `$). The frequencies corresponding to these critical wavelengths are indicated in Figure 3 by vertical lines. In Table 3 the spectral slopes $`\gamma `$ and $`\beta `$ obtained from the models fitting each galaxy in the sample are compared with the values given by Heckman et al. (1995, Table 1). The agreeement is quite good, since our results depend on the fitting of the whole continuum spectra. It can be improved if more observational data become available in the different wavelength ranges. In particular, for NGC 2110 and MRK 34, a better agreement will probably be reached when data in the UV become available. Interestingly, the data in the different wavelengths correspond to different models. Table 3 shows that low $`\gamma `$ values are provided by data which correspond to a model with relatively low $`\mathrm{V}_\mathrm{s}`$, low $`\mathrm{n}_0`$ and log $`\mathrm{F}_\mathrm{H}`$= 12.7 (models 1 and 2). One important point regarding the slopes is the position of the optical-UV peak. This is highly dependent on the temperature across the cloud as shown in §2.1. The model dependence of the frequency corresponding to the peak position is illustrated in Figure 5, where the results for various models are plotted, with the curves shifted vertically for sake of clarity. Another argument favoring the origin of the ultraviolet continuum of Seyfert 2 galaxies in a reddened population of hot stars is the correlation between the ratio of the infrared to the ultraviolet flux ($`\mathrm{L}_{\mathrm{ir}}`$/$`\mathrm{L}_{\mathrm{uv}}`$) and the slope of the ultraviolet continuum (Heckman et al. 1995). These follow the behavior of observed starbursts (Meurer et al. 1995). In Figure 6 we show the plot for the Seyfert 2 galaxies in the sample. For each galaxy, we plot the far-IR/UV ratio (see Table 1 of Heckman et al.) versus the two $`\beta `$ values given in Table 3. The corresponding points are close, so the correlation is also present even if the continuum is not due to a reddened starburst but to the NLR emission. Notice that dust emission is strongly coupled to gas emission (see, for instance, Viegas & Contini 1994) through shock and radiation effects and that $`\mathrm{L}_{\mathrm{ir}}`$ also depends strongly on the dust-to-gas ratio. For both starburst and Seyfert galaxies the correlation shown in Figure 6 is usually used to show that the absorbed ultraviolet radiation is reemitted in the infrared. For Seyfert 2 galaxies, particularly, the relation between the central radiation source and infrared emission may not be direct, since both shocks and photoionization are powering the NLR, and, in our models, the observed UV continuum is not coming from the central source. The results above indicate that the continuum emission from the NLR clouds may be another explanation for the featureless continuum of Seyfert 2 galaxies. 
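The slopes of Table 3 are defined through $`F_\lambda `$ $``$ $`\lambda ^\beta `$ between 1200 and 2600 Å and $`F_\lambda `$ $``$ $`\lambda ^\gamma `$ between 1910 and 4250 Å. Given a model SED tabulated in frequency, as in Figure 3, extracting a slope is just a log-log gradient between the two anchor wavelengths. A minimal sketch is shown below; the power-law SED used there is a made-up stand-in for a real model, and none of this changes the conclusion just drawn that the NLR continuum itself can supply the featureless component.

```python
import numpy as np

c = 2.998e18  # speed of light in Angstrom/s

def F_lambda(lam_A, nu_grid, F_nu):
    """Convert a tabulated F_nu(nu) to F_lambda at wavelength lam_A (Angstrom)."""
    nu = c / lam_A
    return np.interp(nu, nu_grid, F_nu) * c / lam_A**2   # F_lambda = F_nu * c / lambda^2

def slope(lam1, lam2, nu_grid, F_nu):
    """Slope s defined by F_lambda ~ lambda**s between lam1 and lam2."""
    f1, f2 = F_lambda(lam1, nu_grid, F_nu), F_lambda(lam2, nu_grid, F_nu)
    return np.log10(f2 / f1) / np.log10(lam2 / lam1)

# Toy SED: F_nu ~ nu^-0.5 (purely illustrative, not one of the Table 1 models)
nu_grid = np.logspace(14.5, 15.6, 500)
F_nu = nu_grid**-0.5

beta  = slope(1200.0, 2600.0, nu_grid, F_nu)   # UV slope
gamma = slope(1910.0, 4250.0, nu_grid, F_nu)   # near-UV/optical slope
print(beta, gamma)   # both -1.5 here, since F_nu ~ nu^-a implies F_lambda ~ lambda^(a-2)
```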
Since this component is extended, galaxies where this component is dominant should show little or no dilution of stellar absorption lines as discussed by Cid-Fernandes et al. (1998). ## 3 Constraining the models : the line spectra In previous sections it was found that the SED of the continuum in the optical-UV range is mainly reprocessed radiation from heated gas clouds. Moreover, it was found that reradiation from clouds photoionized by black body radiation from young stars can hardly be distinguished from that from clouds photoionized by a power-law radiation flux. Three main cases are considered: 1) power-law radiation from the AC, 2) blackbody radiation from stars with $`\mathrm{T}_{}`$=5 $`10^4`$ K which represent the starburst case, and 3) blackbody radiation from stars with $`\mathrm{T}_{}`$=1-2 $`10^5`$ K which represents the Terlevich & Melnick (1985) case. So, the interpretation of the line spectra is essential for disentangling the domain of each mechanism. Our suggestion that the featureless continuum in Seyfert 2 galaxies could be due to the NLR continuum emission can be tested by the observed emission lines. Our previous analysis of the Circinus galaxy and NGC 5252 showed that a self-consistent model can only be obtained by simultaneous fitting of the continuum and emission-line spectra (Contini et al 1998a, 1998b). Although a full discussion of the emission-line spectra is out of the scope of this paper, it is important to show that the models adopted to fit the continuum of the galaxies in the sample are also consistent with the observed line ratios. Emission-line data for various objects were collected from the literature. Those for NGC 2110, NGC 2992, and NGC 5506 come from Shuder (1980); NGC 3081 comes from Durret & Bergeron (1986); NGC 3393 from Diaz, Prieto, & Wamsteker (1988); NGC 4388 from Pogge (1988); Mrk 3, Mrk 34, Mrk 78, Mrk 348, and Mrk 573 from Koski (1978); NGC 5135 and IC 5135 from Vaceli et al. (1997). The emission-line intensity, relative to H$`\beta `$, for the most indicative lines in the optical range is listed in Table 4 for models 2, 3, 6, 9, 12 (Table 1). Models 2(SD) and 6(SD) correspond to models 2 and 6, respectively, but are calculated in the shock dominated (SD) case, i.e. adopting $`\mathrm{F}_\mathrm{H}`$= 0. The minimum and the maximum observed values for the galaxies referred to above are given in the second column, whereas model results appear in columns 3 to 9. Data for NGC 2110, which shows line ratios rather different than those of the other galaxies, are also included. Various clouds of the NLR at different physical conditions must contribute to the continuum, as well as to the emission-line ratios. Therefore, multi-cloud model results obtained by averaged sums are also given in Table 4 (columns 10-12). The weights adopted in the averaged sums appear in the bottom of Table 4, as well as the calculated absolute values of H$`\beta `$ for the individual models. Notice that in order to have a good fit, the results of SD models are included. The weights of SD models are higher than those of radiation dominated models because the absolute fluxes are weaker (see Viegas et al. 1999). The averaged results are within the maximum and minimum observed ratios except for the \[N I\] emission-line, which is highly dependent on the geometrical depth. Concerning the \[N II\] 6584+ line, a slighter higher N/H abundance could provide better agreement (cf. Contini et al. 1999). 
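The averaged spectra are built exactly as described: each cloud contributes its line ratios weighted by its adopted weight times its absolute Hβ flux, which is why the shock-dominated clouds, being relatively faint in Hβ, need large weights to influence the sum. A minimal sketch of this bookkeeping follows; the weights, fluxes and line ratios are invented for illustration and are not the Table 4 values.

```python
# Each cloud: (weight w, absolute H-beta flux, {line: ratio relative to H-beta})
clouds = [
    (1.0,  5e-3, {"[OIII]5007": 12.0, "[NII]6584": 1.0}),   # radiation-dominated cloud
    (40.0, 1e-4, {"[OIII]5007":  2.0, "[NII]6584": 3.5}),   # shock-dominated cloud (F_H = 0)
]

def averaged_ratio(line, clouds):
    """Multi-cloud line ratio relative to H-beta, weighted by w * Hbeta of each cloud."""
    num = sum(w * hb * ratios[line] for w, hb, ratios in clouds)
    den = sum(w * hb for w, hb, ratios in clouds)
    return num / den

for line in ("[OIII]5007", "[NII]6584"):
    print(line, averaged_ratio(line, clouds))
```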
For sake of consistency, we present in Figure 7 the SEDs corresponding to the theoretical models AV1, AV2, and AV3. An hypothetical bb emission from the background old star population is also shown (long dash-dot lines) to better understand the diagrams. The SED maxima for shock-dominated models are determined by the shock velocity, whereas the optical-UV peaks in radiation-dominated models depend on the radiation flux.The model results are compared to the NGC 5643 data which are representative of the continuum shown by the galaxies in the sample. Notice that AV2 gives a better fit than AV1 and AV3, both in the near infrared and in the far ultraviolet. This suggests that the fit of the line spectra must include shock dominated models. ## 4 Concluding Remarks The aim of this paper is to understand the continuum SED of Seyfert galaxies. We show that composite models for the NLR of Seyfert 2 galaxies can explain the full range of the observed continuum, and, in particular, the optical-ultraviolet continuum. Comparison of theoretical results and observational data shows that $`\mathrm{V}_\mathrm{s}`$ of about 100-300 $`\mathrm{km}\mathrm{s}^1`$, and $`\mathrm{n}_0`$ of 200-300 $`\mathrm{cm}^3`$ must prevail in the NLR. Higher velocities may also be present in order to explain the soft X-ray emission. Multi-cloud models are necessary to interpret the spectra (both line and continuum) of single objects, even in the global investigation of a sample of galaxies. An important point is the characteristics of the featureless continuum of Seyfert 2 galaxies. Regarding the continuum slopes and the correlation between the far-infrared to ultraviolet ratio and the UV slope, both are reproduced by our models. This is an indication that the NLR continuum emission may be the main components of featureless continuum. The main result of our investigation is that the continuum observed in the HS sample is reprocessed radiation from the clouds of the NLR. These clouds are mainly powered by the central radiation, usually characterized by a power-law ionizing spectrum. Nevertheless, black body radiation from starbursts located in the outskirts of the nuclear region may, in some cases, contribute to the UV data. The results will be confirmed when further data in the far-UV become available. Acknowledgements. We are grateful to the referee for enlightening comments and to G. Drukier for reading the manuscript. This paper is partially supported by the Brazilian financial agencies: FAPESP (1997/13816-4), CNPq (304077/77-1), and PRONEX/Finep(41.96.0908.00). References Aaronson,M. et al. 1981, MNRAS, 195, 1; Allen, C.W. 1973 in ”Astrophysical Quantities” (Athlon) Allen,D.A. 1976, ApJ, 207, 367; Becker,R.M., White,R.L., & Edwards, A.L. 1991, ApJS, 75, 1; Boroson,T.A.,Strom,K.M., & Strom,S.E. 1983, ApJ, 274, 39; Cid-Fernandee, R., Dottori, H., Gruenwald, R. & Viegas, S. M. 1991, MNRAS 255, 165 Cid-Fernandes, R., Storchi-Bergmann, T. & Schmitt,H. 1998, MNRAS, 297, 579 Contini,M., Prieto,M.A., & Viegas,S.M. 1998a, ApJ, 492, 511 Contini,M., Prieto,M.A., & Viegas,S.M. 1998b, ApJ, 505, 621 Contini,M., Radovich,M., Rafanelli,P., & Richter,G. 1999, submitted De Vaucouleurs, A. & Longo, G. 1988, Catalogue of Visual and Infrared Photometry of Galaxies from 0.5 $`\mu `$m to 10 $`\mu `$m (1961-1985); De Vaucouleurs, G. et al. 1991 Third Reference Catalogue of Bright Galaxies, Diaz,A.I., Prieto, M.A., & Wamsteker,W. 1988, A&A, 195, 53 Doroshenko,V.T. & Terebezh,V.Yu 1979, SvAL, 5, 305; Drain,B.T., & Lee,H.M. 
1994, ApJ, 285, 89 Durret,F. & Bergeron,J. 1986, A&A, 156, 51 Fabbiano,G., Kim, D.-W.,& Trinchieri, G. 1992, ApJS, 80, 531; Frogel,J.F., Elias,J.H., & Phillips,M.M. 1982, ApJ, 260, 70; Glass,I.S. 1973, MNRAS, 164, 155; Glass,I.S. 1976, MNRAS, 175, 191; Glass,I.S. 1978, MNRAS, 183, 85; Glass,I.S. 1979, MNRAS, 186, 29; Glass,I.S. 1981, MNRAS, 197, 1067; Glass,I.S. et al. 1982, A&A, 107, 276; Gower,J.F.R., Scott,,P.F., & Wills,D. 1967, MmRAS, 71, 49; Gregory,P.C. & Condon,J.J. 1991, ApJS, 75, 1011; Gregory,P.C. et al. 1994, ApJS, 90, 173; Griersmith,D., Hyland, A.R., & Jones, T.J. 1982, AJ, 87, 1106; Griffith,M.R. et al. 1994, ApJS, 90, 179; Griffith, M.R. 1995, ApJS, 97, 347; Heckman,T., Krolik,J., Meurer,G., Calzetti,D., Kinney,A., Koratkar,A., Leitherer,C., Robert,C., & Wilson,A. 1995, ApJ, 452, 549 Heckman,T.M. et al. 1997, ApJ, 482, 114 Joyce,R.R. & Simon, M. 1976, PASP, 88, 870; Kinney,A.I. et al. 1993, ApJS, 86, 5; Kormendy,J. 1977, ApJ, 214, 359; Koski,A.T. 1978, 223, 56 Large,M.I. et al. 1981, MNRAS, 194, 693; Lauberts,A. & Valentijn,E.A. 1989, The Surface Photometry Catalogue of the ESO-Uppsala Galaxies, 1989, Garching Bei Munchen ESO; Leitherer,C., Robert,C., & Heckman, T. 1995 ApJS, 99, 173 Maddox,S.J. et al. 1990, MNRAS, 243, 692; Mathewson,D.S. & Ford, V.L. 1996, ApJS, 107, 97; McAlary,C.W.,McLaren,R.A., & Crabtree,D.R. 1979, ApJ, 234, 471; McAlary,C.W. et al. 1983, ApJS, 52, 341; Meurer, G., Heckman, T., Leitherer, C., Kinney, A., Robert, C., & Garnett, D. 1995, AJ, 110, 2665 Moshir, M. et al. 1990, Infrared Astronomical Satellite Catalogs, 1990,The Faint Source Catalog, Version 2.0; Mould,J.,Aaronson,M.,& Huchra,J. 1988, ApJ, 238, 458; Neugebauer,G. et al. 1976, ApJ, 205, 29; Pogge,R.W. 1988, ApJ, 332, 702 Rieke,G.H. 1978, ApJ, 226, 550; Rieke,G.H. & Low,F.J. 1972, ApJ, 176L, 95; Rudy,R.J.,Levan,P.D., & Rodriguez-Espinosa,J.M. 1982 Sandage,A. & Visvanathan,N., 1978, 223,707; Scoville,N.Z. et al. 1983, ApJ, 271, 512; Shuder,J.M. 1980, ApJ 240, 32 Soifer,B.T. et al. 1989, AJ, 98, 766; Stein,W.A. & Weedman,D.W. 1976, 205, 44; Terlevich, R. & Melnick, J. 1985, MNRAS, 213, 841 Tran, H. D. 1995, ApJ, 440, 578 Vaceli,M.S., Viegas,S.M., Gruenwald,R., & De Souza,R.E. 1997, AJ, 114, 1245 Viegas,S.M. & Contini,M. 1994, ApJ, 428, 113 Viegas,S.M., Contini,M., & Contini,T. 1999, A&A, 347, 112 Wang,T. et al. 1999, ApJ, 515, 567 Ward,M. et al. 1982, MNRAS, 199, 953; Weaver,K.A. et al. 1995, ApJ, 442, 597 White,R.L. & Becker,R.H. 1992, ApJS, 79, 331; Wright,A.E. et al 1996, ApJS, 103, 145; Wright,A.E. et al. 1994, ApJS, 91, 111; Wright,A. & Otrupcek,R. 1990, Parkes Catalogue, 1990, Australia Telscope National Facility; Xue,S.-J. et al. 1998, PASJ, 50, 519 Figure Captions Fig. 1 The results corresponding to a power-law ionizing radiation (a) and a 5 $`10^4`$ K blackbody radiation (b). In each figure, the left pannel show the temperature distribution across a cloud, where the left edge of the diagram corresponds to the shock front, while the right edge to the photoionized side. The thin vertical line in the middle of the diagrams indicates the separation of the cloud in two halves. The axis scales are logarithmic. The horizontal axis scale is symmetric in order to provide an equal view of the two sides of the cloud, that dominated by collisional ionization and the radiation dominated one. The right pannel shows the corresponding spectral energy distribution. 
The power-law results refer to $`\mathrm{V}_\mathrm{s}`$= 100 $`\mathrm{km}\mathrm{s}^1`$and $`\mathrm{n}_0`$ = 100 $`\mathrm{cm}^3`$ (thin lines) and to $`\mathrm{V}_\mathrm{s}`$= 300 $`\mathrm{km}\mathrm{s}^1`$and $`\mathrm{n}_0`$ = 300 $`\mathrm{cm}^3`$ (thick lines). Solid , short-dashed, and long-dashed lines correspond to log$`\mathrm{F}_\mathrm{H}`$= 12, 11, and 10 , respectively. The black body results were obtained for U=0.01 (long-dashed), 0.1 (short-dashed), and 1. (solid), and U=100. (dash-dot line). Fig. 2 The spectral luminosities of all the galaxies of the Heckman et al. (1995) sample. Fig. 3 The fit of the calculated to the observed SED for all the objects of the Heckman et al. sample. Solid lines : models 4, 11, 12, and 13; short dashed lines : models 5, 6, and 7 ; long dashed lines : models 1, 2; short dash-dot lines : models 3 and 8; dotted lines : models 14, 15 and 16; long dash-dot : the black body emission from the back ground old star population. Filled squares represent the data. Vertical thin lines define the crucial wavelengths at 4250 Å, 2600 Å, 1910 Å, and 1200 Å(see §2.3). Fig. 4 The fit of MRK 477 continuum SED by star population at different temperatures. The three bumps between 14 $`<`$ log($`\nu `$) $`<`$ 15 correspond to black body radiation with T = 4,000 K (dash-dot line), 10,000 K (long-dash line), and 20,000 K (short-dash line). Solid lines correspond to bremsstrahlung emission from the gas and to IR thermal emission from dust from the NLR clouds (see Figure 3i). Fig. 5 Comparison of the optical-UV peaks in the SED of the continuum calculated by models presented in Figs. 1a and 1b. 1 : $`\mathrm{V}_\mathrm{s}`$=100 $`\mathrm{km}\mathrm{s}^1`$, $`\mathrm{n}_0`$= 100 $`\mathrm{cm}^3`$, log$`\mathrm{F}_\mathrm{H}`$=12; 2 : $`\mathrm{V}_\mathrm{s}`$=100 $`\mathrm{km}\mathrm{s}^1`$, $`\mathrm{n}_0`$= 100 $`\mathrm{cm}^3`$, log$`\mathrm{F}_\mathrm{H}`$=11; 3 : $`\mathrm{V}_\mathrm{s}`$=300 $`\mathrm{km}\mathrm{s}^1`$, $`\mathrm{n}_0`$= 300 $`\mathrm{cm}^3`$, log$`\mathrm{F}_\mathrm{H}`$=12; 4 : $`\mathrm{V}_\mathrm{s}`$=300 $`\mathrm{km}\mathrm{s}^1`$, $`\mathrm{n}_0`$= 300 $`\mathrm{cm}^3`$, U=1; 5 : $`\mathrm{V}_\mathrm{s}`$=300 $`\mathrm{km}\mathrm{s}^1`$, $`\mathrm{n}_0`$= 300 $`\mathrm{cm}^3`$, U=0.1; 6 : $`\mathrm{V}_\mathrm{s}`$=300 $`\mathrm{km}\mathrm{s}^1`$, $`\mathrm{n}_0`$= 300 $`\mathrm{cm}^3`$, U=0.01; 7 : $`\mathrm{V}_\mathrm{s}`$=100 $`\mathrm{km}\mathrm{s}^1`$, $`\mathrm{n}_0`$= 100 $`\mathrm{cm}^3`$, U=1; 8 : $`\mathrm{V}_\mathrm{s}`$=100 $`\mathrm{km}\mathrm{s}^1`$, $`\mathrm{n}_0`$= 100 $`\mathrm{cm}^3`$, U=0.1; 9 : $`\mathrm{V}_\mathrm{s}`$=500 $`\mathrm{km}\mathrm{s}^1`$, $`\mathrm{n}_0`$= 500 $`\mathrm{cm}^3`$, U=1; Fig. 6 The far-infrared to ultraviolet luminosity ratio ($`\mathrm{L}_{\mathrm{ir}}`$/$`\mathrm{L}_{\mathrm{uv}}`$) versus the slope of the ultraviolet continuum $`\beta `$ (see §2.3). The circles refer to our results and the stars to Heckman et al. (1995), as listed in Table 3. The straight line represents the correlation obtained from starburst data (Meurer et al. 1995). Fig. 7 The SED of the continua referring to the multi-cloud models AV1, AV2, and AV3 (Tables 4). Shock dominated models are represented by dotted lines, model 3 by short dash-dot, model 6 by long dash, and model 12 by solid lines. Model 2 has a low weight and does not appear in the figures. An hypothetical bb emission from the background old star population is also shown (long dash-dot lines). Filled squares refer to NGC 5643 data.
## 1 Introduction

Today everybody believes that matter has both particle character and wave character. In the usual view, a wave is understood as a periodic, continuous change of some quantity in space and in time, with its own immanent cause. From the particle point of view this periodicity is puzzling, and it is taken as evidence of a wave. We think, however, that there is another way to understand nature more accurately and more consistently: we keep Newton's idea that "along with a ray of light there must be a manifestation of some periodicity" to explain the wave phenomena of light and of elementary particles. The article is organized as follows. In Section 2 we give an intuitive explanation of the interference of light. The interference of the electron is illustrated in Section 3. Conclusions are given in Section 4.

## 2 Interferential picture of the light

The electromagnetic field can exist independently, and so it includes invariant structures (particles). The electromagnetic field has periodicity, and this periodicity either accompanies each particle in some way or manifests itself in the distribution of particles in space, with this train of particles flying along a fixed ray, | $`|`$$`\lambda `$$`|`$ | $`\stackrel{}{c}`$ | | --- | --- | | $``$– – – – $``$– – – – $``$– – – – $``$ | – – – $``$ | This picture is similar to the so-called wave. Let us use this picture of light to explain how the interference pattern is created. Suppose that there is a gun which shoots one ball after each fixed interval of time $`\mathrm{\Delta }t`$. The balls are identical in every respect. They fly with the same velocity $`v`$ in an environment without resistance, and their weight is negligible. The "wave-length" - i.e. the distance between two successive balls - is $`\lambda =v\mathrm{\Delta }t`$. All balls carry a weak electric charge of the same sign, and they are covered with a sticky glue envelope, so that the electrostatic repulsion between them cannot overcome the stickiness of the glue when they touch at a sufficiently small meeting angle. A target with two parallel vertical slits (interstices) is set perpendicular to the axis of the gun and is also weakly charged, with the sign opposite to that of the balls. The distance between the two slits is not too large in comparison with the wave-length $`\lambda `$, and we assume that the glue of the balls has no effect on the target. Thus the probability that any given ball flies through one slit or the other is the same. (The slits are large enough that balls can fly through easily.) After flying past, the balls change their flying direction, with all possible angles in the horizontal plane occurring with a definite probability distribution. With such initial conditions the balls are "dephased" with respect to each other and, after passing the target, balls that did not fly through the same slit are able to meet each other with some non-zero probability. If the meeting angle is small enough for the stickiness to act, two such balls will couple together into a single system, change direction, and fly along the bisector of the meeting angle. If a reception screen is set parallel to the target behind it, at a distance $`d\gg \lambda `$, then on this screen we will collect the falling points of single balls and of coupled balls.
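For orientation it helps to recall the wave-language benchmark that this thought experiment is meant to reproduce. The sketch below only evaluates the standard two-slit path-difference condition, with the pulse spacing λ = vΔt playing the role of a wavelength; it is not a simulation of the sticky-ball dynamics described above, and every numerical value is a placeholder.

```python
import math

v, dt = 10.0, 0.5                 # ball speed and firing interval (arbitrary units, assumed)
lam = v * dt                      # "wavelength" = spacing between successive balls
d, L = 5.0 * lam, 200.0 * lam     # slit separation and target-to-screen distance (assumed)

# Positions where the path difference d*sin(theta) equals a whole number of spacings
for m in range(4):
    theta = math.asin(m * lam / d)
    y = L * math.tan(theta)       # position of the m-th "vein" on the screen
    print(f"m = {m}:  theta = {math.degrees(theta):5.2f} deg,  y = {y:7.2f}")
# At small angles neighbouring veins are separated by roughly lam*L/d.
```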
Argumentation and calculation show that single balls (missed interference) form a monotonous background of falling points, and couple balls (caused by interference) fall concentratively and create definite veins on the background, depending on dephasing degree of interfered balls. Hence, from Newton’s idea and using quantities such as “wave-length”, “dephase”, and so on results obtained is fitted in ones calculated using wave behaviour. Furthermore, they explain why, when amount of balls is not enough large (the time to do experiment is short), the picture of falling points seems chaotic, randum. Only with a large number of balls (the time to do experiment is long), interferential veins are really clear. This is one that, if using wave behaviour, is impossible to explain. Thus, if we consider the light as a system of particles that, in the most rudimentary level, are similar above balls: they are attractable together and radiated periodically, then the interference of the light is nothing groundless or difficult to understand when we refuse to explain it by using wave behaviour. Moreover, without wave conception the phenomena of the light is very bright, clear, and more unitary. Such a view-point of the light requires to imagine again many problems, simultaneously brings about new effects that need to prove in experiment. If there is a source that radiates continously separately light pulses of a fixed thickness and with a fixed distance between pulses (Fig. 1), we can carry out a following experiment (Fig. 2). With two continously radiative sources we direct light pulses together with an intersecting angle $`\alpha `$. Interference presents only in the area ABCD of Fig. 1. If the light is wave, then to see interference we should set a photographic plate in the area ABCD, because outside this area interference is impossible to present. Two light sources are independent from each other, the stable condition of interferential veins is not ensured, position of vains is changed incessantly, and the consequence is that on the photographic plate we cannot obtain interferential veins. But if the light is particle, then we always obtain interferential veins (in suitable polarization condition) though the photographic plate is set inside or outside the area ABCD. Carrying out experiments according to this diagram we can check interferential ability of the light of different wave-lengths. Given two radiative sources of different frequencies by that way, we are able to see whether photons of different frequencies is identical. A consequence of particle behaviour is that we can make mirror-holography with any reappear light source. This is drawn from the phenomenon that two photons interfered together change direction and fly on the the bisector of meeting angle. In the conception of particle, frequency of a light wave is understood as number of photons radiated in a unit of time to a definite direction. The photo-electric effect, therefore, is understood as follows: the more number of coming photons that collide to electrons in a unit of time is, the higher energy that electrons gain is. If momenta of all photons are of the same value, then it is possible that number of photons electrons gain is proportional to light frequency, and thus momentum of each photon is proportional to Plank’s constant $`\mathrm{}`$. With such understanding, energy that electron gain from interfered photons is higher than one from non-interferential photons. 
With the wave picture, if one imagines that the atom emits spherical waves, then the photo-electric effect is impossible to explain as described. But if one supposes instead that the atom radiates light directly along a ray, then, since radiation is a wave process, the atom does not remain at one place in space during the radiation, and it is difficult to maintain that the whole energy quantum is transmitted into space along a definite ray. The situation is the same for the absorption of energy quanta: an electron cannot stay at one place waiting to absorb a whole energy quantum and only then move to another position. Thus, there is no firm basis for saying that energy is absorbed in discrete quanta $`E=h\nu `$.

## 3 Interferential picture of elementary particle

Let us consider the interference of the electron. First of all we can state that no experiment would show electron interference if we used the conditions already described for light, because real electrons are not like the sticky balls above, and without this stickiness no interfering couples can form. The interference of the electron is completely different, if one insists on the word "interference". Suppose that an electron flies toward a block of matter made up of particles much heavier than the electron. Each heavy particle in the block of matter is a scattering center. Because of the interaction, after flying out of the influence region of a scattering center the electron has changed direction by an angle $`\alpha `$ with respect to its initial direction. Assume that the deviation angle $`\alpha `$ depends on the aim distance (impact parameter) $`\rho `$ according to law a or b as in the figures, so that the derivative of $`\alpha `$ with respect to $`\rho `$ is equal to $`0`$ at $`\rho =\rho _0`$. If all values of the aim distance $`\rho `$ are equally probable for $`e`$, then the probability that the particle $`e`$ is deviated from the incoming direction by the angle $`\alpha _0`$ is "infinitely large" in comparison with any other angle: $`d\rho /d\alpha |_{\alpha =\alpha _0}=\infty `$. It is necessary to note that $`\alpha _0`$ is inversely proportional to the momentum of $`e`$: the higher the momentum of $`e`$, the better the direction of its momentum vector is conserved, and the smaller the deviation angle $`\alpha `$. (That is just the basis of the de Broglie relation.) We introduce a quantity $`ϵ`$, the maximum divergence acceptable to the measurement, such that rays deviated from the angle $`\alpha `$ by up to $`ϵ`$ are still counted as belonging to that angle $`\alpha `$. Thus we can establish a probability density function of the form shown in Fig. 5. The sharper this distribution, the greater the contrast in probability density between the angle $`\alpha `$ and its neighbouring angles. Call the average distance between two scattering centers $`2R`$; then the influence region of any center (in terms of the aim distance) is $`\pi R^2`$. The region into which the particle $`e`$ must fall in order to be deviated by an angle ($`\alpha \pm ϵ`$) is proportional to $`\pi \left[(\rho +\mathrm{\Delta }\rho )^2-(\rho -\mathrm{\Delta }\rho )^2\right]=4\pi \rho \mathrm{\Delta }\rho `$. The probability that $`e`$ scatters into the angle $`\alpha `$ is therefore $`\frac{4\pi \rho \mathrm{\Delta }\rho }{\pi R^2}=\frac{4\rho \mathrm{\Delta }\rho }{R^2}`$ (Fig. 6).
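The pile-up of scattered particles at the angle α₀ where dα/dρ vanishes can be seen directly in a toy Monte Carlo: draw impact points uniformly over the disc of radius R, map ρ to α through any deflection law with an extremum, and histogram the deflection angles. This is the familiar "rainbow scattering" caustic. The quadratic deflection law and all constants below are invented stand-ins for the curves in the figures, not the authors' actual α(ρ).

```python
import numpy as np

rng = np.random.default_rng(1)
R, rho0, alpha0 = 1.0, 0.6, 0.3            # toy values (assumed, not from the text)

def alpha_of_rho(rho):
    """Toy deflection law with a smooth extremum at rho0."""
    return alpha0 + 0.8 * (rho - rho0) ** 2

# Impact points uniform over the disc of radius R  =>  rho = R * sqrt(u)
rho = R * np.sqrt(rng.random(200_000))
alpha = alpha_of_rho(rho)

hist, edges = np.histogram(alpha, bins=60)
peak_bin = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
print(f"most populated deflection angle ~ {peak_bin:.3f}   (alpha0 = {alpha0})")
# The underlying angular density diverges (integrably) at alpha0, which is the
# d(rho)/d(alpha) -> infinity behaviour invoked above; the histogram piles up there
# while all other angles form only a smooth background.
```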
The probability density for $`e`$ to scatter into the angle $`\alpha `$ in some particular direction is $`\frac{4\rho \mathrm{\Delta }\rho }{R^2}\frac{1}{2\pi }=\frac{2\rho \mathrm{\Delta }\rho }{\pi R^2}=P_{(\alpha )}`$. However, using this method to estimate the probability density over all directions in space for all scattering processes is very complicated and unwieldy. For this reason we are only interested here in the "relative" probabilities of the various directions, namely in the fact that the probability for $`e`$ to scatter into the angle $`\alpha _0`$ is "infinitely"<sup>2</sup><sup>2</sup>2It means $`ϵ\to 0`$. large in comparison with that for all other angles. In this comparison, all deviation angles different from $`\alpha _0`$ can be ignored, because they form only a monotonous background; this does not affect the qualitative accuracy of the law. Consider the $`i`$-th scattering experiment (in each experiment there is only one $`e`$, with a constant momentum, the same for all experiments), and assume that all scatterings are elastic. The scattering process of $`e`$ is easy to picture, as in Figures 7 and 8. After the 1-st scattering, $`e`$ can be deviated by any angle and fly in any direction, but the directions of highest probability ($`P_{\alpha _0}=\infty `$) form a cone with its apex at the scattering center and an opening angle ($`2\alpha _0`$). In the 2-nd scattering, the directions of highest probability of $`e`$'s trajectory form cones with angles ($`0`$, $`2(2\alpha _0)`$). This is explained as follows. The scattering centers in the block of matter are distributed at random with respect to $`e`$'s trajectory, and this randomness is constantly maintained by thermal fluctuations, inelastic scatterings, and so on. So, on the conic surface (1-2) there forms a brim - the locus of probable positions of the 2-nd scattering centers - plotted as the brim (2-2) in the figure. At each point of the brim a 2-nd scattering center exists with some probability. Thus, in the 2-nd scattering, at each point of the brim a new probability cone forms, similar to the one formed at the 1-st scattering center. These probability cones interfere with each other, forming two collective cones with opening angles $`0`$ and $`2(2\alpha _0)`$ respectively. Indeed, if we place a spherical surface of sufficiently large radius with its center coincident with the 1-st scattering center, the cones intersect the spherical surface and form circles as shown in Figure 9. For simplicity, we are not concerned with the curvature of the spherical surface. The probability for the particle to fall at any point of the circles is the same. Let us find the probability density of $`e`$'s falling points on this surface. From the figure it is clear that this density is proportional to the ratio $`\frac{\mathrm{\Delta }\mathrm{\ell }}{\mathrm{\Delta }S}`$, where $`\mathrm{\Delta }\mathrm{\ell }`$ is the total length of the circles within an elementary surface $`\mathrm{\Delta }S`$ ($`dS`$).
We solve this problem in polar coordinates, in which the equation of the circles is
$$\rho =C\mathrm{cos}\phi \pm \sqrt{R^2-C^2\mathrm{sin}^2\phi },$$ (1)
so that
$$d\rho =-C\mathrm{sin}\phi \left(1\pm \frac{C\mathrm{cos}\phi }{\sqrt{R^2-C^2\mathrm{sin}^2\phi }}\right)d\phi .$$ (2)
With $`dS\rho \,d\rho \,d\phi `$ and $`\mathrm{\Delta }\mathrm{\ell }2\sqrt{d\rho ^2+\rho ^2d\phi ^2}`$, the density is
$$M_\rho =\frac{\mathrm{\Delta }\mathrm{\ell }}{\mathrm{\Delta }S}=\frac{2\sqrt{d\rho ^2+\rho ^2d\phi ^2}}{\rho \,d\rho \,d\phi }.$$ (3)
By differentiating $`M_\rho `$ with respect to $`\rho `$ and setting the derivative to zero, or by substituting the values of $`\rho `$ and $`d\rho `$ from (1) and (2) into (3) and letting $`\phi \to 0`$, we obtain $`M_\rho \to \infty `$. Thus, at the maximum and minimum values of $`\rho `$ the probability density $`M_\rho `$ is infinitely large (in comparison with other values of $`\rho `$). The surface $`dS`$ is equivalent to the solid angle $`d\mathrm{\Omega }`$, and $`\mathrm{\Delta }\mathrm{\ell }`$ is equivalent to the probability for the particle to scatter into that solid angle. Therefore, in the 2-nd scattering all the directions of highest probability of the trajectories form two cones with opening angles $`0`$ and $`2(2\alpha _0)`$. By a similar argument, at the 3-rd scattering the directions of highest probability of $`e`$'s trajectories form cones with opening angles $`3(2\alpha _0)`$, $`1(2\alpha _0)`$, $`1(2\alpha _0)`$, $`1(2\alpha _0)`$. Generally, after the $`n`$-th scattering, probability cones of possible directions of $`e`$'s trajectories have formed with opening angles $`n(2\alpha _0)`$, $`(n-2)(2\alpha _0)`$, …, $`(n-2m)(2\alpha _0)`$. Now place a spherical surface as a catching screen, with its axis coincident with the initial direction of the particle before it reaches the target and its radius much larger than the thickness of the target; the center of the spherical surface is on the target, at the point where the scattering particle enters. The probability cones formed at the last scattering then intersect the catching screen and form circular brims - the loci of the falling points of maximum probability for $`e`$. If $`n`$ is even, we obtain an even number of brims; if $`n`$ is odd, an odd number. In any single experiment $`e`$ always scatters some definite number $`n`$ of times, so on the catching screen we obtain either an even or an odd number of brims, depending on the parity of $`n`$; of course, in one experiment $`n`$ has only one value. Thus, the intensity of the brims obtained after the complete set of experiments $`\mathrm{\Sigma }`$ depends not only on a law of decrease from inner to outer brims but also on the frequency of $`n`$, i.e. on the ratio $`\frac{i_n}{\mathrm{\Sigma }}`$, where $`i_n`$ is the frequency (number of occurrences) of $`n`$ in the total set of experiments $`\mathrm{\Sigma }`$. It is difficult to determine this frequency for every possible value of $`n`$. At the most rudimentary level, however, it is clear that the spectrum of values of $`n`$ is not large<sup>3</sup><sup>3</sup>3 We are only interested in particles that pass through the target. and that the probabilities of even and odd values of $`n`$ are equal, so the spectrum of diffraction brims is complete from ($`0`$) to ($`n_{\text{max}}`$). From the conditions of the experiment we see that the larger the matter density and the thickness of the target, the larger $`n_{\text{max}}`$ is.
Here, once again, we find that the diffraction picture is only clear when the number of particles taking part in the scattering is large enough. If there are too few, we see on the screen only a chaotic distribution of the marks of $`e`$; if, on the contrary, there are too many and the catching screen is a photographic plate, then all points on the screen are saturated and the diffraction brims are hidden. The scattering of light particles in a radial field is described in mechanics as follows:
$$\phi =\int _{r_0}^{r}\frac{M}{mr^2}\frac{dr}{\sqrt{\frac{2}{m}\left(E-\frac{M^2}{2mr^2}-U(r)\right)}},$$
where $`\phi `$ is the angle between the radius vector of the particle's position on the trajectory and the radius vector of the particle's extremal point ($`\stackrel{}{r}_0`$), $`M`$ is the angular momentum of the particle, $`E`$ is the energy and $`m`$ the mass of the particle, and $`U(r)`$ is the potential of the field. If the potential of the field has the form $`U(r)=\pm \frac{A}{r}`$, then from the above formula, letting the upper bound approach infinity, we can express $`\phi `$ as a function of the aim distance $`\rho `$, and thus the deviation angle $`\alpha =\pi -2\phi `$ is also a function of $`\rho `$. The calculation gives the result that the derivative of $`\phi (\rho )`$ with respect to $`\rho `$ is not equal to zero at any position. This means that the condition for "wave-like" diffraction is not satisfied. Moreover, if the potential of the nuclear field really had the form above, the probability for the electron to fall into the nucleus would be very large, which is not compatible with the facts<sup>4</sup><sup>4</sup>4 It was to avoid this that quantum mechanics was introduced.. We can therefore make a supplementary assumption (without appealing to quantum mechanics): at a distance $`a`$ from the attracting center there is a surface $`L`$ which changes the trajectory of the scattering particles. Because this surface is not absolutely hard (in the region of the surface the potential field has some finite form), a particle slides along it, and this sliding softens the variation of the deviation angle $`\alpha `$ of the trajectory, so that $`\alpha `$ remains a continuous function of $`\rho `$. For this the momentum of the scattering particle must not exceed some value, so that the elasticity of the surface is not broken. If, on the contrary, the momentum of the scattering particle is larger than this critical value, the surface $`L`$ is broken and light radiation is produced; some forms of such radiation may correspond to the Cherenkov or Compton effects. In summary, with this supplement, $`\alpha `$ is a function of $`\rho `$ with the dependence shown in Figure 10, where $`\rho _0`$ is of the order of $`a`$, and hence the condition for "wave-like" diffraction is satisfied.

## 4 Conclusion

Thus, we have shown in some detail several ideas based on the particle behaviour of matter. Using Newton's model of the light ray we have explained rather completely the "wave" behaviour of light. The polarization of light is an effect of particle behaviour: two photons interfere with each other, forming a system with axial symmetry. The experiment set up as in Figure 1 is important: carrying out this experiment will help to confirm the particle behaviour of light, and its detailed results will open up new ideas and new directions of research. The wave character of an elementary particle - the electron is typical - can be explained by pure particle behaviour. This gives us a similarity between the wave function of quantum mechanics and the vector function of particle behaviour.
The motion of any particle can be expressed as a vector, whose direction points out particle’s motion direction at a given point of trajectory, and whose module expresses probability amplitude of particle flying in that direction. Thus, the probability trajectory of particle is completely able to expressed as a vector function $$\psi =A(\alpha _{(t)})e^{i\alpha _{(t)}},$$ $`\alpha _{(t)}`$ is the deviation angle in comparison with the initial direction, is a function of time; $`A(\alpha )`$ is the probability of particle flying with the angle $`\alpha `$. But this does not mean that the particle expressed as above will have a really wave character. If we find a way to express the value of $`\alpha _{(t)}`$ by the value of energy-momentum of scattering particle, parameters of scattering environment, and simplification: $`\alpha _{(t)}`$ is continously independent of time but discontinously dependent on time (due to the fact that scattering centers are discontinous), then the vector function is not basically different from the wave function in quantum mechanics. Doing with appropriate operators for probability function, we can obtain correlative quantities. Hence, from the natural idea of particle behaviour of matter we can discover further natural phenomena. One of the most host problems today is inflation of the universe affirmed from the red-shift of Doppler’s effect. However, if the space between observer and light source is vacuum, then the explanation of the red-shift based on Doppler’s effect is fully satisfactory. But in fact the universe is filled with gravitational fields, macrometric and micrometric objects as stars and clusters. Therefore, these problems had been re-examined in further detail by us, with consideration actual influences of interstellar environment on frequency shift. This is only realized by foundation of particle behaviour of the light. The existence of cosmic dusts as motional scattering centers is an essential condition to happen scattering-interference processes of the light when it flies through the interstellar environment. In turn, these scattering-interference processes lead to the shift of light frequency. Our results give a reliable confirmation that the red-shift is not exhibit of the inflation of the universe. ## Acknowledgments The present article was supported in part by the Advanced Research Project on Natural Sciences of the MT&A Center.
# Strategy for discovering a low-mass Higgs boson at the Fermilab Tevatron ## I Introduction The success of the Standard Model (SM) of particle physics, which provides an accurate description of almost all particle phenomena observed so far, has been spectacular. However, one crucial aspect of it remains mysterious: the fundamental mechanism that underlies electro-weak symmetry breaking (EWSB) and the origin of fermion mass. Elucidating the nature of EWSB is the next major challenge of particle physics and will be the focus of upcoming experiments at the Fermilab Tevatron and the CERN Large Hadron Collider (LHC) during the early years of the twenty-first century. In many theories, EWSB occurs through the interaction of one or more doublets of scalar (Higgs) fields with the initially massless fields of the theory. An important goal over the next decade is to determine whether or not, in broad outline, this picture of EWSB is correct. In the Standard Model there is a single scalar doublet. The EWSB endows the weak bosons ($`W^\pm ,Z`$) with masses and gives rise to a single physical neutral scalar particle called the Higgs boson ($`H_{SM}`$). In minimal supersymmetric (SUSY) extensions of the SM, two Higgs doublets are required resulting in five physical Higgs bosons: two neutral CP-even scalars ($`h,H`$), a neutral CP-odd pseudo-scalar ($`A`$) and two charged scalars ($`H^\pm `$). Non-minimal SUSY theories generally posit more than two scalar doublets. Given this picture of EWSB, the direct and indirect measurements of the top quark and $`W`$ boson masses constrain the mass of the SM Higgs boson ($`M_{H_{SM}}`$), as indicated in Fig 1. A global fit to all electroweak precision data, including the top quark mass, gives a central value of $`M_{H_{SM}}=107_{45}^{+67}`$ GeV/c<sup>2</sup> and a 95% confidence level upper limit of 225 GeV/c<sup>2</sup>. In broad classes of SUSY theories the mass $`M_h`$ of the lightest CP-even neutral Higgs boson, $`h`$, is constrained to be less than 150 GeV/c<sup>2</sup>. In the minimal supersymmetric SM (MSSM), the upper bound on $`M_h`$ is lowered to about 130 GeV/c<sup>2</sup>. This bound is reasonably robust with respect to changes in the parameters of the theory. Furthermore, in the limit of large pseudo-scalar Higgs boson mass, $`M_A>>M_Z`$, where $`M_Z`$ is the mass of the $`Z`$ boson, the properties of the lightest MSSM Higgs boson $`h`$ are indistinguishable from those of the SM Higgs boson, $`H_{SM}`$. These intriguing indications of a low-mass Higgs boson motivate the study of strategies that maximize the potential for its discovery at the upgraded Tevatron. This paper describes a strategy that achieves this goal. The current 95% CL lower limit on the Higgs boson mass, from the CERN $`e^+e^{}`$ collider LEP, is 107.9 GeV/c<sup>2</sup> and is expected to reach close to 114 GeV/c<sup>2</sup> in the near future. We have therefore studied the mass range 90 GeV/c<sup>2</sup> $`<M_H<`$ 130 GeV/c<sup>2</sup>, where $`H`$, hereafter, denotes the SM Higgs boson, $`H_{SM}`$. The cross sections for SM Higgs boson production at the Fermilab Tevatron are shown in Fig 2. At $`\sqrt{s}=2`$ TeV, the dominant process for the production of Higgs bosons in $`p\overline{p}`$ collisions is $`ggH`$. The Higgs boson decays to a $`b\overline{b}`$ pair about 85% of the time. Unfortunately, even with maximally efficient $`b`$-tagging this channel is swamped by QCD di-jet production. 
The more promising channels are $`p\overline{p}WH\mathrm{}\nu b\overline{b}`$, $`p\overline{p}ZH`$ $`\mathrm{}^+\mathrm{}^{}`$ $`b\overline{b}`$ and $`p\overline{p}`$ $`ZH`$ $`\nu \overline{\nu }b\overline{b}`$, which are the ones we have studied. In $`WH`$ events the lepton can be lost because of deficiencies in the detector or the event reconstruction or the lepton energy being below the selection threshold. For such events the reconstructed final state would be indistinguishable from that arising from the process $`p\overline{p}`$ $`ZH`$ $`\nu \overline{\nu }b\overline{b}`$. We have therefore studied these processes in terms of the channels: single lepton ($`\mathrm{}`$ \+ $`E\text{/}_T`$ \+ $`b\overline{b}`$ from $`WH`$), di-lepton ($`\mathrm{}^+\mathrm{}^{}b\overline{b}`$ from $`ZH`$) and missing transverse energy ($`E\text{/}_T`$ \+ $`b\overline{b}`$ from $`ZH`$ and $`WH`$), where $`E\text{/}_T`$ denotes the missing transverse energy from all sources, including neutrinos. For each of these channels, we have carried out a comparative study of multivariate and conventional analyses of these channels in which we compare signal significance and the integrated luminosity needed for discovery. The paper is organized as follows: In Sec. II we describe our strategy in general terms. Sections III, IV and V, respectively, describe our analyses of the single lepton, di-lepton and missing transverse energy channels. Our conclusions are given in Sec. VI. ## II Optimal Event Selection In conventional analyses a cut is applied to each event variable, usually one variable at a time, after a visual examination of the signal and background distributions. Although analyses done this way are sometimes described as “optimized,” in practice, unless the signal and background distributions are well separated, the traditional procedure for choosing cuts is rarely optimal in the sense of minimizing the probability to mis-classify events. Since we wish to maximize the chance of discovering the Higgs boson we need to achieve the optimal separation between signal and background, while maximizing the signal significance. Given any set of event variables, optimal separation can always be achieved if one treats the variables in a fully multivariate manner. Given a set of event variables, it is useful to construct the discriminant function $`D`$ given by $$D=\frac{s(𝐱)}{s(𝐱)+b(𝐱)},$$ (1) where $`𝐱`$ is the vector of variables that characterize the events and $`s(𝐱)`$ and $`b(𝐱)`$, respectively, are the $`n`$dimensional probability densities describing the signal and background distributions. The discriminant function $`D=r/(1+r)`$ is related to the Bayes discriminant function which is proportional to the likelihood ratio $`rs(𝐱)/b(𝐱)`$. Working with $`D`$, instead of directly with $`𝐱`$, brings two important advantages: 1) it reduces a difficult $`n`$dimensional optimization problem to a trivial one in a single dimension and 2) a cut on $`D`$ can be shown to be optimal in the sense defined above. There is, however, a practical difficulty in calculating the discriminant $`D`$. We usually do not have analytical expressions for the distributions $`s(𝐱)`$ and $`b(𝐱)`$. What is normally available are large discrete sets of points $`𝐱_i`$, generated by Monte Carlo simulations. Fortunately, however, there are several methods available to approximate the discriminant $`D`$ from a set of points $`𝐱_i`$, the most convenient of which uses feed-forward neural networks. 
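As a purely illustrative sketch of this idea (the actual study used JETNET together with the SHW simulation, not the code below), the following trains a tiny feed-forward network by back-propagation on two made-up Monte Carlo samples, with target 1 for signal and 0 for background; with that training its output approximates D = s(x)/(s(x)+b(x)) and can be checked against the exact D computed from the known toy densities. The Gaussian toy distributions, the network size and the training parameters are all assumptions made for the example.

```python
# Toy sketch (not the actual analysis code): a small feed-forward network
# trained with back-propagation, signal target = 1, background target = 0.
# With this training its output approximates D = s(x) / (s(x) + b(x)).
# The 2-D Gaussian "event variables" below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
sig = rng.normal(loc=+1.0, scale=1.0, size=(n, 2))   # toy "signal" events
bkg = rng.normal(loc=-1.0, scale=1.0, size=(n, 2))   # toy "background" events
x = np.vstack([sig, bkg])
t = np.hstack([np.ones(n), np.zeros(n)])              # targets 1 / 0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# one hidden layer (9 nodes, an arbitrary choice) and one output node
W1 = rng.normal(scale=0.5, size=(2, 9)); b1 = np.zeros(9)
W2 = rng.normal(scale=0.5, size=(9, 1)); b2 = np.zeros(1)

eta = 0.05
for epoch in range(2000):                             # plain batch back-propagation
    h = sigmoid(x @ W1 + b1)                          # hidden layer
    y = sigmoid(h @ W2 + b2).ravel()                  # network output in (0, 1)
    d_out = (y - t)[:, None] / len(t)                 # cross-entropy gradient at the output
    dW2 = h.T @ d_out;  db2 = d_out.sum(axis=0)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    dW1 = x.T @ d_hid;  db1 = d_hid.sum(axis=0)
    W1 -= eta * dW1; b1 -= eta * db1
    W2 -= eta * dW2; b2 -= eta * db2

# exact discriminant for the toy densities, for comparison
def gauss(u, mu):
    return np.exp(-0.5 * np.sum((u - mu) ** 2, axis=1)) / (2 * np.pi)

test = rng.normal(size=(5, 2))
s_d, b_d = gauss(test, +1.0), gauss(test, -1.0)
exact_D = s_d / (s_d + b_d)
nn_D = sigmoid(sigmoid(test @ W1 + b1) @ W2 + b2).ravel()
print(np.round(exact_D, 2), np.round(nn_D, 2))        # the two should be roughly comparable
```

The same logic, with the real event variables in place of the toy Gaussians, is what the network-based discriminants used throughout this study provide.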
Neural networks are ideal in this regard because they approximate $`D`$ directly. Many neural network packages are available, any one of which can be used to calculate $`D`$. We have used the JETNET package to train three-layer (that is, input, hidden and output) feed-forward neural networks (NN). The training was done using the back-propagation algorithm, with the target output for the signal set to one and that for the background set to zero. In this paper we use the terms “neural network output” and “discriminant” interchangeably. However, the distinction between the exact discriminant $`D`$, as we have defined it above, and the network output, which provides an estimate of $`D`$, should be borne in mind. ## III Single Lepton Channel We have considered final states with a high $`p_T`$ electron (e) or muon ($`\mu `$) and a neutrino from $`W`$ decay and a $`b\overline{b}`$ pair from the decay of the Higgs boson. The $`WH`$ events were simulated using the PYTHIA program for Higgs boson masses of $`M_H`$ = 90, 100, 110, 120 and 130 GeV/c<sup>2</sup>. In Table I we list the cross section $`\times `$ branching ratio (BR) we have used for the process $`p\overline{p}WH\mathrm{}\nu b\overline{b}`$ where $`\mathrm{}=e,\mu `$, $`\tau `$. The processes $`p\overline{p}Wb\overline{b}`$, $`p\overline{p}WZ`$, $`p\overline{p}t\overline{t}`$, single top production—$`p\overline{p}W^{}tb`$ and $`p\overline{p}Wgtqb`$, which have the same signature, $`\mathrm{}\nu b\overline{b}`$, as the signal, are the most important sources of background. They have all been included in our study. The $`Wb\overline{b}`$ sample was generated using CompHEP, a parton level Monte Carlo program based on exact leading order (LO) matrix elements. The parton fragmentation was done using PYTHIA. The single top, $`t\overline{t}`$ and $`WZ`$ events were simulated using PYTHIA. To generate the s-channel process, $`W^{}tb`$, we forced the $`W`$ to be produced off-shell, with $`\sqrt{\widehat{s}}>m_t+m_b`$, and then selected the final state in which $`Wtb`$. The cross sections used for the background processes are given in Table I. To model the expected response of the CDF and DØ Run II detectors at Fermilab we used the SHW program, which provides a fast (approximate) simulation of the trigger, tracking, calorimeter clustering, event reconstruction and $`b`$-tagging. The SHW simulation predicts a di-jet mass resolution of about 14% at $`M_H`$ = 100 GeV/c<sup>2</sup>, varying only slightly over the mass range of interest. However, to allow for comparisons with the other $`WH`$ and $`ZH`$ studies at the Physics at Run II SUSY/Higgs workshop, some of which do not use SHW, we have re-scaled the di-jet mass variables for all signal and background events so that the resolution is 10% at each Higgs boson mass. The consensus of Run II workshop is that such a mass resolution can be achieved, albeit with considerable effort. In principle, multivariate methods can be applied at all stages of an analysis. However, in practice, experimental considerations, such as trigger thresholds and the need to restrict data to the phase space in which the detector response is well understood, dictate a set of loose cuts on the event variables. These cuts define a base sample of events. 
In our case, the base sample was determined by the following cuts: * the transverse momentum of the isolated lepton $`P_T^{\mathrm{}}>15`$ GeV/c * the pseudo-rapidity of the lepton $`|\eta _{\mathrm{}}|<2`$ * the missing transverse energy in the event $`E\text{/}_T>20`$ GeV * two or more jets in the event with $`E_T^{jet}>10`$ GeV and $`|\eta _{jet}|<2`$. Since the Higgs decays into a $`b\overline{b}`$ pair we impose the requirement that two jets be $`b`$-tagged. This of course does little to reduce the dominant $`Wb\overline{b}`$ background, due to the presence of the $`b\overline{b}`$ pair, but it becomes powerful when the invariant mass, $`M_{b\overline{b}}`$, of the $`b`$-tagged jets is used as an event variable. The di-jet mass distribution for the signal is expected to peak at the Higgs boson mass, whereas one expects a broad distribution for the background, with the exception of the $`WZ`$ background which peaks at the $`Z`$ boson mass. One of the $`b`$-tags was required to be tight and the other loose. A tight $`b`$-tag is defined by an algorithm that uses the silicon vertex detector, while a loose $`b`$-tag is defined by the same algorithm with looser cuts or by a soft lepton tag. The mean double $`b`$-tagging efficiency in SHW is about 45%. We searched for variables that discriminate between the signal and the backgrounds and arrived at the following set: * $`E_T^{b1},E_T^{b2}`$ – transverse energies of the $`b`$-tagged jets * $`M_{b\overline{b}}`$ – invariant mass of the $`b`$-tagged jets * $`H_T`$ – sum of the transverse energies of all selected jets * $`E_T^{\mathrm{}}`$ – transverse energy of the lepton * $`\eta _{\mathrm{}}`$ – pseudo-rapidity of the lepton * $`E\text{/}_T`$ – missing transverse energy * $`S`$ – sphericity ($`S=\frac{3}{2}(Q_1+Q_2)`$, where $`Q_1`$ and $`Q_2`$ are the two smallest eigenvalues obtained by diagonalizing the normalized momentum tensor $`M_{ab}=\sum _ip_{ia}p_{ib}/\sum _i|p_i|^2`$, the sums running over the final state particle momenta and the subscripts $`a`$ and $`b`$ referring to the spatial components of the momenta $`p_i`$) * $`\mathrm{\Delta }R(b_1,b_2)`$ – the distance, in the $`(\eta ,\varphi )`$-plane, between the two $`b`$-tagged jets, where $`\mathrm{\Delta }R=\sqrt{\mathrm{\Delta }\eta ^2+\mathrm{\Delta }\varphi ^2}`$ and $`\varphi `$ is the azimuthal angle * $`\mathrm{\Delta }R(b_1,\mathrm{})`$ – the $`\mathrm{\Delta }R`$ distance between the lepton and the first $`b`$-tagged jet. Most of the variables used are directly measured (reconstructed) kinematic quantities while some are deduced variables. The choice of $`M_{b\overline{b}}`$ as a discriminating variable is obvious, as discussed earlier. The variable $`H_T`$ is a measure of the “temperature” of the interaction; a large $`H_T`$ is a sign of the decay of massive objects. For example, $`WH`$ events would have larger $`H_T`$ (increasing with $`M_H`$) than the $`Wb\overline{b}`$ background, but smaller $`H_T`$ than the $`t\overline{t}`$ background. The $`WH`$ events are also more spherical than the $`Wb\overline{b}`$ events and have larger values of sphericity. The $`\mathrm{\Delta }R(b,\overline{b})`$ is smaller for the $`Wb\overline{b}`$ background, where the $`b`$-jets come mainly from $`g\to b\overline{b}`$, than in $`WH`$ events, where the $`b`$-jets come from the heavy object decay $`H\to b\overline{b}`$. For each Higgs boson mass we trained three networks to discriminate against the main backgrounds $`Wb\overline{b}`$, $`WZ`$ and $`t\overline{t}`$.
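For concreteness, the short sketch below illustrates how two of the less standard quantities in the list above, the sphericity and the $`\mathrm{\Delta }R`$ separation, could be computed from reconstructed momenta; the toy momenta are made-up numbers and these simple functions only stand in for what the SHW reconstruction actually provides.

```python
# Minimal sketch (illustrative only) of two of the event variables listed above:
# the sphericity S = 3/2 (Q1 + Q2), with Q1, Q2 the two smallest eigenvalues of
# the normalized momentum tensor, and the Delta-R separation between two objects.
import numpy as np

def sphericity(momenta):
    """momenta: (n, 3) array of (px, py, pz) for the final-state objects."""
    p = np.asarray(momenta, dtype=float)
    norm = np.sum(np.sum(p ** 2, axis=1))              # sum_i |p_i|^2
    m = p.T @ p / norm                                  # M_ab = sum_i p_ia p_ib / norm
    q = np.sort(np.linalg.eigvalsh(m))                  # eigenvalues, ascending
    return 1.5 * (q[0] + q[1])                          # 0 (pencil-like) ... 1 (spherical)

def delta_r(eta1, phi1, eta2, phi2):
    """Distance in the (eta, phi) plane, with the phi difference wrapped into (-pi, pi]."""
    dphi = (phi1 - phi2 + np.pi) % (2 * np.pi) - np.pi
    deta = eta1 - eta2
    return np.hypot(deta, dphi)

# made-up momenta for a handful of jets/leptons (GeV/c), purely illustrative
toy_event = [(40.0, 5.0, 10.0), (-35.0, 8.0, -20.0), (-5.0, -12.0, 60.0)]
print("S      =", round(sphericity(toy_event), 3))
print("DeltaR =", round(delta_r(0.4, 0.1, -0.6, 2.9), 3))
```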
The subsets of variables used to train the networks are listed in Table II while in Fig. 3(a-c) we show the distributions of some of these variables. Each network has 7 input variables, 9 hidden nodes and one output node. We calculated three discriminants $`D`$ for every signal and background event and for every Higgs boson mass. Figure 3(d) shows the distributions of the discriminants for signal and background calculated using the network trained to discriminate between signal events, with $`M_H`$ = 100 GeV/c<sup>2</sup>, and the specified background. We note that all backgrounds, with the exception of $`WZ`$, are well separated from the signal. For Higgs boson masses close to the $`Z`$ mass the $`WZ`$ background is kinematically identical to the signal and therefore difficult to deal with. But for Higgs boson masses well above the $`Z`$ mass the discrimination between $`WH`$ and $`WZ`$ improves, as does that between $`WH`$ and the other backgrounds. (In all figures, the signal histograms are shaded dark while the background histograms are shaded light.) The arrows in Fig. 3(d) indicate the cuts applied to the discriminants. The cuts were chosen to maximize $`S/\sqrt{B}`$, where $`S`$ and $`B`$ are the signal and background counts, respectively. The cuts to suppress the $`WZ`$ background vary from 0.18 to 0.80, increasing for higher Higgs boson masses; the cuts to suppress $`Wb\overline{b}`$ are generally about 0.8, while those for top events are in the range 0.35 to 0.75. At this stage it is instructive to compare the conventional and multivariate approaches, to assess what has been gained by using the latter approach. In Fig. 4 we compare the signal efficiency vs. background efficiency (given in terms of the number of events for 1 fb<sup>-1</sup>) for an ensemble of possible cuts on the three discriminants (using the random grid search technique) with the efficiencies obtained using the standard cuts defined by the Run II Higgs Workshop. Each dot corresponds to a particular set of cuts on the three discriminants; the triangular marker indicates what is achieved using the standard cuts, while the star indicates the results obtained from an optimal choice of cuts (which maximizes $`S/\sqrt{B}`$) on the three network outputs. Table III shows results for the $`WH`$ channel. ## IV Di-Lepton Channel For the di-lepton channel we followed a strategy similar to that described for the single lepton channel. The final state signature considered is: two high $`P_T`$ same-flavor leptons ($`ee`$ or $`\mu \mu `$) from $`Z`$ boson decay and two b-jets (from $`H\to b\overline{b}`$). The $`ZH`$ events were generated using PYTHIA for Higgs boson masses of 90, 100, 110, 120 and 130 GeV/c<sup>2</sup>. The principal backgrounds are due to $`ZZ`$, $`Zb\overline{b}`$, single top and $`t\overline{t}`$ production. The $`Zb\overline{b}`$ background sample was generated using CompHEP, with fragmentation done using PYTHIA, while all other samples were generated using PYTHIA. As before, the SHW program was used to simulate the detector response and we assumed that two jets are $`b`$-tagged (one tight and one loose). The cross sections for signal and background are shown in Table I. The base sample was determined by the following cuts: * $`P_T^{\mathrm{}}>10`$ GeV/c * $`|\eta _{\mathrm{}}|<2`$ * $`E\text{/}_T<10`$ GeV * at least two jets with $`E_T^{jet}>8`$ GeV and $`|\eta _{jet}|<2`$.
A network was trained for each Higgs boson mass and for each of the three backgrounds with the following variables: * $`E_T^{b1},E_T^{b2}`$ * $`P_T`$ of the two leptons * $`M_{b\overline{b}}`$ * $`M_\mathrm{}\overline{\mathrm{}}`$ – invariant mass of the leptons * $`H_T`$ * $`\mathrm{\Delta }R(b_1,\mathrm{})`$ between the first lepton and the first $`b`$-tagged jet. Distributions of these variables, as well as those of the network output, are shown in Fig. 5(a-d). The signal distributions are for $`M_H`$=100 GeV/c<sup>2</sup>. Our results for the di-lepton channels, after applying cuts on the three network outputs, are summarized in Table IV. ## V Missing Transverse Energy Channel This channel has contributions from both $`ZH\to \nu \overline{\nu }b\overline{b}`$ and $`WH\to (\mathrm{})\nu b\overline{b}`$ where $`(\mathrm{})`$ denotes the lepton that is lost. The event generation and detector simulation were carried out as described in the single lepton and di-lepton channel studies. The base sample was defined by the cuts * $`|\eta _{\mathrm{}}|<2`$ * $`E\text{/}_T>10`$ GeV * no isolated lepton with $`P_T^{\mathrm{}}>10`$ GeV/c * $`E_T^{jet3}<30`$ GeV * at least two jets with $`E_T^{jet}>8`$ GeV and $`|\eta _{jet}|<2`$. The three networks were trained with $`ZH\to \nu \overline{\nu }b\overline{b}`$ events as signal and $`Zb\overline{b}`$, $`ZZ`$ and $`t\overline{t}`$ as the three backgrounds, respectively. The same networks were used to evaluate contributions from $`WH`$ and the relevant backgrounds. We used the following variables to train the networks: * $`E_T^{b1},E_T^{b2}`$ * $`M_{b\overline{b}}`$ * $`H_T`$ * $`E\text{/}_T`$ * $`S`$ * $`𝒞`$ – centrality ($`\sum _{jets}E_T/\sum _{jets}E`$, with $`E_T^{jet}>15`$ GeV) * $`\frac{E\text{/}_T}{\sqrt{E_T^{b1}}}`$ * minimum $`\mathrm{\Delta }\varphi (jet,E\text{/}_T)`$. The centrality, $`𝒞`$, has a larger mean value (as is the case with $`S`$) for signal events than for backgrounds. The variable $`\frac{E\text{/}_T}{\sqrt{E_T^{b1}}}`$ is a measure of the significance of the missing transverse energy. The smallest of the azimuthal angles between $`E\text{/}_T`$ and the jets in the event is expected to be smaller for $`Wb\overline{b}`$ and $`Zb\overline{b}`$, as well as for high multiplicity $`t\overline{t}`$ events, than in signal events. We show the distributions of the variables and neural network outputs in Figs. 6(a-d). Again the signal distributions are for $`M_H`$=100 GeV/c<sup>2</sup>. The results for this channel, after optimized cuts on network outputs, are listed in Table V. ## VI Discussion and Summary In Table VI we compare the results of our multivariate analysis with those based on the standard cuts, while Table VII and Figs. 7 and 8 show our final results, where we have combined all channels. The striking feature of these results is the substantial reduction in integrated luminosity required to make a $`5\sigma `$ discovery of the Higgs boson if one adopts a multivariate approach instead of the traditional method based on univariate cuts. In each of the three channels, the signal significance, which we define as $`S/\sqrt{B}`$, is seen to be 20-60% higher for our multivariate analysis than for an optimal conventional analysis. For example, at $`M_H=110`$ GeV/c<sup>2</sup> we find that the required integrated luminosity for a $`5\sigma `$ observation decreases from 18.3 fb<sup>-1</sup> to 8.5 fb<sup>-1</sup>. The results in Table VII include statistical errors only. The dominant systematic error will likely be due to background modeling.
However, given the large data-sets expected by the end of Run II we can anticipate that a thorough experimental study of the relevant backgrounds will have been undertaken. Therefore, it is possible that systematic errors could, eventually, be reduced to well under $`10`$%. We can estimate the effect of systematic error by adding it in quadrature to the statistical error. If we assume a 10% systematic error on the total background the required integrated luminosity for a $`5\sigma `$ observation increases from 8.5 fb<sup>-1</sup> to 12.8 fb<sup>-1</sup>. Run II at the Tevatron with the CDF and DØ detectors will begin in early 2001. Recently the scope of Run II has been expanded. The goal (hope) is to collect about 15-20 fb<sup>-1</sup> per experiment in the period up to and including the start of the LHC. After 5 years of running, each experiment could see a 3$`\sigma `$-5$`\sigma `$ signal of a neutral Higgs boson with $`M_H`$ 130 GeV/c<sup>2</sup>. This exciting possibility for the Tevatron is the principal motivation for the recent important decision to expand the scope of Run II in order to accumulate as much data as possible. However, even with the expanded scope a discovery may be possible only if these data are analyzed with the most efficient methods available, such as the one we have described in this paper. It is important to note that the results we have presented are for a single experiment. That is, our conclusion is that each experiment has the potential of making an independent discovery. If the experiments combine their results the discovery of a low-mass Higgs boson at the Tevatron might be at hand a lot sooner. ###### Acknowledgements. We thank the members of the Run II Higgs Working Group and, in particular, Ela Barberis, Alexander Belyaev, John Conway, John Hobbs, Rick Jesik, Maria Roco and Weiming Yao for useful discussions and for help with the event simulation. The research was supported in part by the U.S. Department of Energy under contract numbers DE-AC02-76CHO3000 and DE-FG02-97ER41022. This work was carried out by the authors as part of the Higgs Working Group<sup>*</sup><sup>*</sup>*Run II Higgs Working Group (Run II SUSY/Higgs workshop). http://fnth37.fnal.gov/higgs.html. study at Fermilab.
no-problem/0001/hep-ex0001036.html
ar5iv
text
# Confidence belts on bounded parameters ## 1 Introduction In a recent paper , Feldman and Cousins have revisited the long-standing problem of confidence belts on bounded parameters, for which the standard method proposed by Neyman leads in some cases to null, or unphysical, results, and have proposed a new method. The advocated method takes advantage of the freedom left by the Neyman construction in the choice of ordering used to select the ensemble of values of the measurement with a given probability content. This new method is strictly classical -or frequentist- (that is, not using bayesian extensions to classical statistics), avoids overcoverage, gives a natural and non-biasing transition between upper limits and intervals, and, according to the authors, avoids null results. We show in this paper that the last point is not strictly met and that null results can only be avoided for confidence levels above a limit which depends upon the probability law under consideration. ## 2 The ”problem” and its solutions ### 2.1 Building confidence domains Let us consider a random variable $`x`$ of density probability $`f(x|\mu )`$ where $`\mu `$ is an unknown parameter. Given an observation $`x`$, one wishes to make a statement on $`\mu `$ with a given confidence level (noted CL in the following) $`\alpha `$. $`\alpha `$ is the probability that the statement is true. Let us first consider the case of an upper limit on $`\mu `$. The Neyman construction consists in defining for each value of $`\mu `$ the value $`x_m`$ such that $$F(x_m|\mu )=_{\mathrm{}}^{x_m}f(x|\mu )𝑑x=1\alpha $$ (1) Whatever $`\mu `$, $`x`$ has a probability $`\alpha `$ to be bigger than $`x_m(\mu )`$. $`x`$ being observed, the $`\alpha `$ CL limit $`\mu _M`$ on $`\mu `$ is obtained by solving $`x_m(\mu _M)=x`$. In the following, we will consider only the cases where this equation has a unique solution, implying that $`x_m(\mu )`$ is a monotonic increasing function of $`\mu `$ for any value $`\alpha `$. This is equivalent to stating that $$\frac{F(x|\mu )}{\mu }<0x,\mu $$ (2) One can equally define an $`\alpha `$ CL interval $`[\mu _m,\mu _M]`$ on $`\mu `$ by constructing for each $`\mu `$ an interval $`[x_m(\mu ),x_M(\mu )]`$ in $`x`$ of probability content $`\alpha `$ and, given the observation $`x`$, solve $`x_m(\mu _M)=x_M(\mu _m)=x`$. But contrary to upper (or lower) limits, the choice of interval in $`x`$ is not unique and an ordering prescription on $`x`$ is necessary. The usual prescription consists in defining $`x_m`$ and $`x_M`$ by $$_{\mathrm{}}^{x_m}f(x|\mu )𝑑x=_{x_M}^{\mathrm{}}f(x|\mu )𝑑x=(1\alpha )/2$$ (3) (the so-called central interval). This only works for 1-dimensional $`x`$, and a new ordering for n-dimensional $`x`$ has to be chosen. One generally uses the $`\chi ^2`$ between $`x`$ and the mean value of $`x`$, $`\overline{x}(\mu )`$, or the likelihood ratio $`R=f(x|\mu )/f(x_0|\mu )`$ where $`f(x_0|\mu )`$ is the maximum of $`f`$, and one defines a cut $`c`$ on $`\chi ^2`$ (resp R) so that the probability of $`\chi ^2<c`$ (resp $`R<c`$) is $`\alpha `$. These two methods (among others), when applied to the 1-dimensional case, would generally give non central intervals. ### 2.2 The case of bounded parameters The method outlined above can lead to null results on the parameter $`\mu `$ if this parameter is bounded ( for example, $`\mu >0`$ if $`\mu `$ is a mass, a variance, etc…). 
Such a case is illustrated on figure 1, where one sees that for low enough values of $`x`$, no upper limit on $`\mu `$ is obtained (or a negative limit is obtained in case $`f(x|\mu )`$ is defined even for negative $`\mu `$’s). For a statistician, this is not a problem, since an $`\alpha `$ CL statement has a $`(1-\alpha )`$ probability to be wrong. For unbounded parameters, it is impossible to know if a statement is true or false. For bounded parameters, a fraction of the physicists emitting a wrong statement are aware that their statement is wrong, and would rather like not to be in this uncomfortable situation. One should stress however that their result, which when expressed as a limit on $`\mu `$ seems to bring no information, is as legitimate and useful as any other result obtained by those physicists publishing a physical limit (which might be right or wrong). But the discomfort caused by such a situation is so strong that cures have been searched for to avoid publishing null results. ### 2.3 The bayesian solution The bayesian extension to classical statistics consists in building a probability density on the unknown parameter $`\mu `$ from the observation $`x`$ by applying the (classical) Bayes theorem on conditional probabilities, as if $`\mu `$ were a random variable. This gives a probability density (a degree of belief in bayesian language) $$g(\mu |x)=K(x)f(x|\mu )P(\mu )$$ (4) where K is a normalization coefficient ensuring that $$\int _{-\infty }^{\infty }g(\mu |x)d\mu =1$$ and $`P(\mu )`$ summarizes our knowledge on $`\mu `$ prior to the observation $`x`$. In particular, for bounded parameters, $`P(\mu )`$ will be null outside the physical region. An upper limit at "$`\alpha `$ CL" $`\mu _0`$ can then be built on $`\mu `$ by solving $$\int _{-\infty }^{\mu _0}g(\mu |x)d\mu =\alpha $$ (5) By construction, the selected range of physical $`\mu `$ values will never be empty, so that null results are impossible. This is shown on figure 1. This approach, although recommended by PDG till 1997, has been criticized by frequentist statisticians for the following reasons: 1. Near the parameter boundary, where null results happen with the standard method, this bayesian "$`\alpha `$ CL" limit has a classical CL higher than $`\alpha `$ (it is actually 1 if $`\mu `$ happens to be below $`\mu _1`$, see figure 1), leading to overcoverage and loss of predictive power. Note however that this overcoverage is a local property, and could well transform into undercoverage at higher $`\mu `$ values. 2. There is some arbitrariness in the choice of $`P(\mu )`$. PDG proposes to use a Heaviside function to describe the physical limit on $`\mu `$. Such a prescription has some drawbacks: it is not invariant under changes of parametrization, so that limits will actually depend on the parametrization used. Furthermore, there are cases where the integral in (4) is divergent unless $`P(\mu )`$ is adequately chosen. I would like, in the next section, to propose a cure to the second point. ### 2.4 A modified bayesian limit A classical upper limit on $`\mu `$ set at $`\mu _0`$ from an observation $`x`$ has a confidence level $`\alpha `$ equal to $`1-F(x|\mu _0)`$.
If one is willing to interpret $`P(\mu <\mu _0)=\alpha `$ as a probability statement on $`\mu `$, then the cumulative probability distribution for $`\mu `$ has to be $`1-F(x|\mu )`$, so that the probability density for $`\mu `$ deduced from the observation $`x`$ is given by: $$\widehat{g}(\mu |x)=-\frac{\partial }{\partial \mu }F(x|\mu )$$ (6) Contrary to the usual bayesian definition, $`\widehat{g}`$ is always defined and normalized to 1 by construction. Furthermore, when equation 2 is satisfied, which we have supposed, $`\widehat{g}`$ is positive. When $`\mu `$ is bounded, the definition of conditional probability can be applied to $`\widehat{g}`$ to get its restriction to physical values. If $`\mu >a`$, then (dropping $`x`$ in the notation): $$\stackrel{~}{g}(\mu |\mu >a)=\frac{\widehat{g}(\mu )}{\int _a^{\infty }\widehat{g}(\mu )d\mu }$$ (7) will be the probability distribution of $`\mu `$ restricted to its physical values. $`\stackrel{~}{g}`$ can then be used instead of g (defined in equation 4) to set an upper limit $`\mu _0`$ on $`\mu `$, by solving: $$\int _a^{\mu _0}\stackrel{~}{g}(\mu )d\mu =\alpha $$ (8) Using $`\stackrel{~}{g}`$ rather than $`g`$ has several advantages: * When $`\mu `$ is not bounded, the obtained limit is identical to the classical limit (for bounded parameters, it can be proven it gives overcoverage whatever the value of $`\mu `$ is, contrary to the usual bayesian method). * The limit is invariant under reparametrization. If we change $`\mu `$ to $`\lambda =r(\mu )`$, the limit $`\lambda _0`$ will be $`r(\mu _0)`$. * When prior knowledge is limited to the physical boundary, there is no need to invoke the ambiguous function $`P(\mu )`$, which is replaced by a Heaviside function irrespective of the parametrization. Let us note before closing this section that $`\widehat{g}(\mu |x)=f(x|\mu )`$ in some special cases. Among them are the normal law of mean $`\mu `$ and constant variance, and the Poisson law of mean $`\mu `$, so that for these cases, the limit given by $`\stackrel{~}{g}`$ becomes identical to the former PDG recommendation. ### 2.5 Feldman and Cousins solution Feldman and Cousins wanted to fulfill 3 conditions: 1. Avoid null results 2. Keep a frequentist approach and produce results with a classical CL equal to $`\alpha `$ with no overcoverage (except when $`x`$ is discrete, since this discreteness implies unavoidably some overcoverage) 3. Solve the "flip-flop" problem of how to switch from upper limit to confidence interval, as they have shown that a choice made according to the value of the observation $`x`$ leads, as is usually the case, to a biased result, namely an undercoverage (the actual CL being lower than the claimed one). This problem occurs in practice for parameters bounded from below, $`\mu \geq a`$, where $`\mu -a`$ is the strength of a hypothetical signal on which to make a statement, and observations also bounded from below ($`x>x_0`$). These authors propose an ordering principle on $`x`$ based on $$r(x)=\frac{f(x|\mu )}{f(x|\mu _{best})}$$ (9) where $`\mu _{best}`$ is the physical value of $`\mu `$ which maximizes the denominator. $`r(x)`$ varies between 0 and 1, and one selects for each $`\mu `$ the values of $`x`$ such that $`r(x)>r_c`$ and $$\int _{r>r_c}f(x|\mu )dx=\alpha $$ This construction, being classical, meets the second condition.
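A minimal numerical sketch of this ordering may help; it uses the Gaussian measurement of a mean bounded by $`\mu \geq 0`$, which is the benchmark case treated by Feldman and Cousins themselves, rather than the scale-family case analysed analytically in the next subsection, and the grids, confidence level and example observations are arbitrary illustrative choices.

```python
# Sketch of the likelihood-ratio ordering of Eq. (9) for x ~ N(mu, 1) with mu >= 0.
# For each mu, x values are accepted in decreasing order of r(x) until the
# acceptance region has probability content alpha; the confidence interval for
# an observation is the set of mu whose acceptance region contains it.
import numpy as np

alpha = 0.90
xs = np.linspace(-6.0, 10.0, 3201)          # grid of possible observations
dx = xs[1] - xs[0]
mus = np.linspace(0.0, 6.0, 601)            # grid of physical parameter values

def density(x, mu):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2.0 * np.pi)

accept = {}                                  # mu -> (x_min, x_max) of the belt
for mu in mus:
    f = density(xs, mu)
    mu_best = np.maximum(xs, 0.0)            # physically allowed maximum-likelihood mu
    r = f / density(xs, mu_best)             # ordering ratio of Eq. (9)
    order = np.argsort(-r)                   # include x values with largest r first
    keep = order[np.cumsum(f[order]) * dx <= alpha]
    accept[mu] = (xs[keep].min(), xs[keep].max())

def interval(x_obs):
    """Confidence interval: all mu whose acceptance region contains x_obs."""
    inside = [mu for mu, (lo, hi) in accept.items() if lo <= x_obs <= hi]
    return (min(inside), max(inside)) if inside else None

for x_obs in (-1.5, 0.5, 3.0):
    print(x_obs, "->", interval(x_obs))      # upper limits at low x, intervals at high x
```

In this Gaussian case the construction behaves as advertised; whether it always does so for bounded observations is precisely the question examined below.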
It will meet the first and third conditions in cases where the algorithm selects $`x`$ values between $`x_{min}(\mu )`$ and $`x_{max}(\mu )`$ such that $`x_{min}=x_0`$ for $`\mu <a+s_0`$, and $`x_{min}>x_0`$ for $`\mu >a+s_0`$, so that an upper limit is published when the observed value of $`x`$ is below $`x_{max}(a)`$, and an interval on $`\mu `$ (not necessarily central) for higher values of $`x`$. If that is not the case, the algorithm will lead to null results: * If $`x_0`$ is selected for all values of $`\mu `$, the algorithm will produce lower limits for $`x>x_{max}(a)`$ and "null" results below (all physical values of $`\mu `$ being accepted). * If $`x_0`$ is excluded for all values of $`\mu `$, it will give a null result for $`x<x_{min}(a)`$, an interval for $`x>x_{max}(a)`$ and an upper limit in between. We show in the next section that such a situation is likely to occur. ### 2.6 Study of the ordering algorithm In order to prove the above-mentioned fact, it is sufficient to exhibit cases where null results can be obtained. In the following, I will restrict myself to cases where $`\mu `$ and $`x`$ both take only positive values, and where $`f(x|\mu )`$ is, up to a normalization factor, a function $`g(y)`$ with $`y=x/\mu `$. If $`\int g(y)dy=1`$, the normalization factor is $`1/\mu `$, so that $`f(x|\mu )=g(y)/\mu `$. (Such cases can be met in real cases: $`y`$ can follow a $`\chi ^2`$ law and $`\mu `$ be the unknown variance, $`x`$ then being the unnormalized $`\chi ^2`$ from which one wishes to give a statement on the variance). Note that the limits on $`x`$ at any confidence level are just straight lines passing through the origin, and no problem of null results is encountered with the usual classical method. The problems appear only when, for some reason, one restricts $`\mu `$ to values higher than $`a`$, a strictly positive constant (in the $`\chi ^2`$ example, one knows that the variance is at least $`a`$, and one wishes to know if there is some other contribution to it). Note also that such distributions satisfy the condition of equation 2. When no bound is imposed on $`\mu `$ (other than being positive), $`\mu _{best}`$ as defined in (9) is $`\mu _0`$ given by $$\frac{\partial }{\partial \mu }f(x|\mu _0)=0$$ This equation is equivalent to $$\frac{d}{dy}h(y_0)=0$$ where $`h(y)=yg(y)`$ and $`y_0=x/\mu _0`$. Let us suppose that $`h(y)`$ is such that it has a unique maximum at $`y=y_0`$. To build $`r(x)`$ when $`\mu >a`$, two cases have to be considered: * when $`x>ay_0`$, $`\mu _0>a`$, $`\mu _{best}=\mu _0`$ and $`r(x)=h(y)/h(y_0)`$ * when $`x<ay_0`$, $`\mu _0<a`$, $`\mu _{best}=a`$ and $`r(x)=h(y)/h(\mu y/a)`$ Note that $`r(x_0=\mu y_0)=1`$, and $$r(0)=\underset{y\to 0}{lim}\frac{h(y)}{h(\mu y/a)}=\left(\frac{a}{\mu }\right)^{n+1}$$ where $`n`$ is the order of the first non-vanishing derivative of $`g(y)`$ at $`y=0`$. When $`\mu `$ is close to $`a`$, $`r(x)`$ is close to 1 for $`x`$ between 0 and $`ay_0`$, and decreases beyond. $`r(x)`$ is shown on figure 2. We will suppose in the following that $`r(x)`$ is a non-decreasing function of $`x`$ for $`x`$ between 0 and $`ay_0`$, as is the case in the explicit examples below (this is to avoid disconnected acceptance domains). Then, the acceptance domain for $`x`$ contains $`x_0=\mu y_0`$ and extends to both sides according to the chosen value of $`\alpha `$, including or not $`x=0`$. When $`\mu `$ approaches its lower bound $`a`$, the acceptance domain will or will not contain $`x=0`$, depending on the value of $`I=\int _0^{y_0}g(y)dy`$. Note that $`I\leq 1`$.
If $`I`$ is smaller than $`\alpha `$, the acceptance domain at $`\mu =a`$ will include $`x=0`$ and no null results are possible. If $`I`$ is bigger than $`\alpha `$, the acceptance domain at $`\mu =a`$ will NOT include 0 and null results will occur. One thus sees that the occurrence or not of null results with the Feldman and Cousins method will depend on the chosen CL. Contrary to the bayesian approach, the absence of null results is not ensured by construction. ### 2.7 Some examples #### 2.7.1 exponential law $`g(y)=e^{-y}`$ $`h(y)=ye^{-y}`$ has a unique maximum at $`y=1`$. $`I=\int _0^1g(y)dy=1-e^{-1}=0.63`$ A CL higher than 0.63 will avoid null results. This law is a $`\chi _2^2`$ law (2 degrees of freedom) for $`2y`$. More generally, $`\chi _N^2`$ laws give values of $`I`$ slowly decreasing to $`1-3e^{-2}=0.594`$ when $`N\to \infty `$. Other usual laws tend to give values of $`I`$ smaller than the usual choices of CL, $`0.9`$ or higher, so that in practical cases, null results are likely to be avoided. It is however possible, by a careful choice of $`g(y)`$, to obtain situations where $`I`$ can be made as close to 1 as one wishes. For example, the sigmoid-like function: $$g(y)=\frac{k}{e^{(y-b)}+1}$$ with $`y`$ and $`b`$ positive gives for large values of $`b`$ an integral $`I`$ whose value approaches $`1-\mathrm{log}(b)/b`$ so that it can be made higher than any given CL. The next example, purely academic, exhibits a case where null results are unavoidable. #### 2.7.2 flat distribution $`g(y)=1`$ for $`y`$ between 0 and 1. $`x`$ in this case is flat between 0 and an unknown value $`\mu `$ restricted to be bigger than $`a`$. $`r(x)`$ in this case is equal to $`a/\mu `$ for $`x\leq a`$, and equal to $`x/\mu `$ for $`x`$ between $`a`$ and $`\mu `$. The acceptance domain for $`x`$ goes from $`(1-\alpha )\mu `$ to $`\mu `$ (by connectedness of the interval, higher values of $`x`$ being selected first). Thus, whatever the chosen value for $`\alpha `$, $`x=0`$ will be excluded from the acceptance domain for all values of $`\mu `$, only lower limits will be obtained and null results will always occur. ## 3 Discussion We have shown in this paper that the newly proposed method to put confidence belts on bounded parameters fails to meet some of its advocated properties, at least in principle although not in practice for most cases. Why should such a method be preferred to others? (Note that PDG, in its 1998 edition, recommends this new method). Feldman and Cousins argue that their method disentangles estimation and hypothesis testing, but one can argue as well that they obtain estimations from an ordering based on a quantity used for hypothesis testing! In view of the failure to avoid null results, I don’t think any real argument can be given if one wishes to stick to classical statistics. In my opinion, different methods, as long as they give actual confidence levels equal to the announced one, are mathematically equally acceptable, and a comparison between them implies, consciously or not, some dose of "bayesian" input. The main problem at hand was to avoid null results. To cure this problem (completely for the bayesian solution, partly only with Feldman and Cousins), one is led to break the symmetry of central intervals, which equally separates the wrong statements between the victims of fluctuations towards high values of $`x`$ and the victims of fluctuations towards low values. To avoid null results, an asymmetry is introduced to favor the latter to the detriment of the former, who anyway will never know if they are right or wrong.
One could argue it is not fair! I would like to add that the good points of Feldman and Cousins’ method could be translated to the bayesian approach to get rid of all the criticisms it has received. We have already shown in section 2.4 how to address the criticism of arbitrariness by using a modified probability density for $`\mu `$. The only criticism left is that of overcoverage for bayesian upper limits. But what Feldman and Cousins have shown is that one should publish intervals, which happen to be upper limits when the interval starts at the physical boundary. Thus, it would be perfectly legitimate, in order to avoid overcoverage, to build the "$`(1+\alpha )/2`$" upper limit with our modified bayesian recipe; it will show overcoverage, but this can be removed by implementing a lower limit on $`\mu `$ obtained by defining $`x_{max}(\mu )`$ so that the probability content of selected $`x`$ be $`\alpha `$ for any $`\mu `$. Such a construction would be devoid of any criticism: no overcoverage, no null results, no arbitrariness, and it coincides with the classical central intervals of the Neyman construction for unbounded parameters. This is illustrated on figure 3. The results obtained for Poisson statistics (in the case of a signal over a known background) would be the same as the usual $`(1+\alpha )/2`$ (instead of $`\alpha `$, but this is the price to be paid to avoid flip-flop biasing) bayesian upper limit suitably complemented by a lower limit at high values of $`x`$, both curves coinciding with the usual Neyman central interval for zero background. This method, I think, would certainly satisfy the advocates of the bayesian approach, and be acceptable to classical statisticians. Its construction might look somewhat odd, but it uses, as the others do, the freedom left in the Neyman construction while fully responding to the anxiety created (with no good reason in my opinion) by the occurrence of null results. I will close this paper with a final remark. All the recent discussions on the choice of method to be used, linked to the fact that different conclusions could be reached (for example, whether or not the recent Karmen result excludes the LSND result on neutrino oscillations), have taken on such importance because of the tendency to publish results at lower and lower confidence levels. Let me recall that the probability that two results published at 90% CL are both right is only 0.81! And with 90% CL, a non-negligible fraction of experiments can obtain null results. It would be in my opinion much more reasonable to publish results with a CL of at least 99%, so that the intensity of discussions would considerably drop, because on the one hand the fraction of null results would reach a very low level and on the other hand the probability of two results being both right would reach 98%, whatever the algorithms used. Certainly results would look less spectacular, but they would gain in reliability.
no-problem/0001/astro-ph0001305.html
ar5iv
text
# Glimpses of a strange star (1) Abdus Salam ICTP, Trieste, Italy and Azad Physics Centre, Maulana Azad College, Calcutta 700013, India; email: azad@vsnl.com (2) 1/10 Prince Golam Md. Road, Calcutta 700 026, India; email: deyjm@giascl01.vsnl.net.in (3) Dept. of Physics, Presidency College, Calcutta 700 073, India and Abdus Salam ICTP, Trieste, Italy (4) Department of Astronomy and Astronomical and Astrophysical Center of East China, Nanjing University, Nanjing 210093, China; email: lixd@nju.edu.cn (5) Dipartimento di Fisica, Università di Pisa, via Buonarroti 2, I-56127 and INFN Sezione Pisa, Italy; email: bombaci@pi.infn.it There are about 2000 gamma ray burst (GRB) events known to us with data pouring in at the rate of one per day. While the afterglows of GRBs in radio, optical and X-ray bands are successfully explained by the fireball model, a significant difficulty with the proposed mechanisms for GRBs is that only a small amount ($`10^{-6}M_{\odot }`$) of baryons in the ejecta can be involved. There are very few models that fulfill this criterion together with other observational features; among these are the differentially rotating collapsed object model and the "supernova" model. These models generally invoke rapidly rotating neutron stars, and may be subject to uncertainties in the formation mechanisms and the equations of state of neutron stars. According to Spruit, the problem of making a GRB from an X-ray binary is reduced to finding a plausible way to make the star rotate differentially. We suggest that a strange star (SS) model can naturally explain many of these bursts, with not only their low baryon content but also the differential rotation which leads to an enhanced magnetic field that floats up to the surface and is responsible for GRBs. The model of SS that we have suggested for some compact objects has differential rotation as a natural consequence of its stratified structure. It is based on a stable point in the binding energy as a function of density, for charge neutral, beta stable strange quark matter at about 5 $`\rho _0`$ where $`\rho _0`$ is the normal nuclear matter density. It employs a quark-quark (qq) potential that has asymptotic freedom and a confinement–deconfinement mechanism built into it. At the high surface density of 5$`\rho _0`$, at the radius $`R`$ of the star, the qq interaction is already small compared to that in a hadron. Obviously the interaction is even smaller at the central density of $`15\rho _0`$. Further, we have a density-dependent mass for the quarks which at such high central density ensures that the quarks have nearly current masses. The denser inner parts of the star are composed of quarks which are asymptotically free and nearly massless, whereas the surface quarks are relatively more massive and interacting, leading to a peculiar structure which is different from that of a neutron star. The energy density as a function of the radius $`r`$ is shown in Fig. (1). To illustrate the peculiarity of the system we have also plotted the kinetic energy density (KE) of the quarks. The KE of the u and d quarks is each roughly half of the total, which is less than the KE from the strange quark. The potential energy is negative and cannot be separated into parts. The interesting point to see is that the potential energy increases a little from the surface to the centre but not as much as one would expect, considering that the number density in the centre is about five times more than that near the surface.
One should recall that the potential energy is a two-body term and thus is proportional to the square of the number. Using this density variation of we put the surface $`r=R`$ into rotation with a frequency $`\omega `$(R) about an axis. One can easily see that the central region on the equatorial plane perpendicular to this axis rotates more than 100 times faster than the outer parts Fig. (2) to conserve angular momentum. The polar regions will rotate with $`\omega `$(R). This natural differential rotation is the required one of the Kluźniak and Ruderman model . According to this model, in a differentially rotating strange star, the internal poloidal magnetic field ($`B_0`$) will be wound up into a toroidal configuration and amplified (to $`B_\varphi `$) as the interior part of the star rotates faster than the exterior. After $`N_\varphi `$ revolutions $`B_\varphi =2\pi B_0N_\varphi `$. The field thus amplified forms a toroid that encloses some strange quark matter. This magnetic toroid will float up from the deep interior only when a critical field value is reached that is sufficient to fully overcome the (approximately radial) stratification in the composition of the strange star. The model for strange quark matter that we have proposed is simple and is based on ’t Hooft’s pioneering work on large colour expansion . His idea was to consider the number of colours, $`N_c`$ in quantum chromodynamics to be a parameter of expansion for the field theoretic diagrams entering the expressions for variables like the energy of quark and gluon fields. Simple arguments then show that one can obtain a finite theory if one scales down the quark-gluon or gluon-gluon couplings by a factor of $`N_c^{1/2}`$. Then the quark loops are suppressed by a factor of $`\frac{1}{N_c}`$ compared to planar gluon loops and non-planar gluon loops are suppressed by $`\frac{1}{N_c^2}`$. As explained by Witten , ’t Hooft’s expansion gives only tree level interactions between valence quarks for baryons in leading order in the $`\frac{1}{N_c}`$ \- expansion scheme. The success of the large colour expansion model has also been stressed by and others over the years. In particular, using a model potential designed to fit the heavier mesons , as well as lighter ones like the $`\rho ,a,f`$ \- meson were able to fit baryons like the $`\mathrm{\Omega }_{}`$ and others using self consistent relativistic Hartree-Fock calculations. As already indicated the potential has asymptotic freedom and confinement-deconfinement built into it. This is done very simply and ingeniously by modifying logarithmic momentum dependence of the running coupling constant in the potential : $$V(q^2)=\frac{12\pi }{27}\frac{1}{ln(q^2/\mathrm{\Lambda }^2)}\frac{1}{q^2},$$ (1) to by replacing $`\frac{1}{q^2}ln(q^2/\mathrm{\Lambda }^2)`$ by $`\frac{1}{q^2}ln(1+q^2/\mathrm{\Lambda }^2)`$. For large $`q^2`$ the original coupling eq.(1) is recovered whereas for large distance interaction when the momentum transfer $`q^2`$ is small one gets a $`\frac{1}{q^4}`$ dependence equivalent to a string-like tension $`\mathrm{\Lambda }^2|\stackrel{}{r}_1\stackrel{}{r}_2|`$ in the particle coordinates. For dense systems the $`q^2`$ is replaced by $`q^2+D^2`$ where $`D^1`$ is the well known Debye screening factor. 
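As a small illustrative check of the limiting behaviour described above, the following sketch evaluates the modified effective coupling in the two regimes; the values of $`\mathrm{\Lambda }^2`$ and of the Debye mass are arbitrary illustrative numbers chosen by us, not the parameters fitted in the model.

```python
# Illustrative check (made-up parameter values) of the small- and large-q^2
# behaviour of the modified interaction described above:
#   alpha_eff(q^2) = (12*pi/27) / ln(1 + q^2/Lambda^2)
# At large q^2 it tends to the usual running coupling ~ 1/ln(q^2/Lambda^2)
# (asymptotic freedom); at small q^2, alpha_eff/q^2 ~ Lambda^2/q^4, i.e. a
# string-like (linearly confining) term in coordinate space.  Replacing q^2
# by q^2 + D^2 (Debye screening) removes that infrared growth in dense matter.
import math

LAMBDA2 = 0.1   # Lambda^2 in GeV^2, illustrative
D2 = 0.3        # Debye mass squared in GeV^2, illustrative

def alpha_eff(q2):
    return (12.0 * math.pi / 27.0) / math.log(1.0 + q2 / LAMBDA2)

for q2 in (1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0):
    vacuum = alpha_eff(q2) / q2                  # ~ Lambda^2/q^4 for q^2 << Lambda^2
    pert = (12.0 * math.pi / 27.0) / (math.log(q2 / LAMBDA2) * q2) if q2 > LAMBDA2 else float("nan")
    screened = alpha_eff(q2 + D2) / (q2 + D2)    # in-medium, screened behaviour
    print(f"q^2={q2:8.3f}  modified={vacuum:10.3g}  perturbative={pert:10.3g}  screened={screened:10.3g}")
```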
One further ingredient is very important and this has to do with the fact that QCD possesses approximate chiral symmetry in the sense that the current quark masses of u, d, s are small but in the grounds state this symmetry is broken leading to massive so - called constituent quarks. At high density chiral symmetry is believed to be restored and this has been parameterized by us with a single parameter $`\nu `$ in a form : $$M_i=m_i+310sech(\nu \frac{\rho }{\rho _0}).$$ (2) where $`m_i`$ = 4, 7 and 150 for i = u, d and s respectively (all in MeV). The suggestion that compact objects like SAX J1808.8-3658, Her X-1, 4U 1820-30 or 4U 1728-34 are strange stars give us the possibility to fix the chiral symmetry restoration parameter $`\nu `$ (eq. 2) from astronomical data. This amounts to constraining microscopic physics of light objects in terms of some of the densest objects known in the universe. Although our model is simple the basis is robust and we believe that the results will retain their validity even if more refined calculations are done in the future. We would like to point out that the strange star candidate SAX J1808.8-3658 is the fastest rotating X-ray pulsar with surface rotation frequency $`\omega (R)400`$ Hz, shown in the power spectrum as a very sharp line at that frequency in Wijnands and van der Klis . It was suggested that it may appear as an eclipsing radio pulsar during periods of X-ray quiescence by Chakraborty and Morgan . Recently this has been confirmed when radio signals were found to be present, a day after the X-ray flux suddenly deviated from exponential decay and began to decrease rapidly , suggesting that LMXB-s are progenitors of millisecond radio pulsars (MSR). This completes the following scenario : the birth of a strange star may be due to accretion from its binary partner - leading sometimes to such high rotational frequency that the star explodes to a GRB. Those that survive due to slower rotation become LMXB-s like the SAX J1808.8-3658. it may continue to prey on its partner and become closely related ‘black widow’ MSR which are evaporating their companions through irradiation as suggested in . From the stability of SAX J1808.8-3658 we can safely assert that only those strange stars rotating faster than a critical $`\omega (R)_{crit}>400`$ Hz may acquire the critical magnetic field and fly off to a GRB mode. There are several possible channels for strange star formation: type II/Ib supernovae, accretion-induced collapse of white dwarfs, and conversion from accreting neutron stars in binary systems . The new born strange stars could rotate at periods $`1`$ ms because of rapid rotation of the progenitor stars due to either contraction or mass accretion. Furthermore, they are not subject to the $`r`$mode instability which slows rapidly rotating, hot neutron stars to relatively long rotation periods via gravitational wave radiation. Thus differential rotation may naturally occur in the interiors of these strange stars as discussed above. The authors SR, MD and JD are grateful to Abdus Salam ICTP, the IAEA and the UNESCO for hospitality at Trieste, Italy, and to Dept. of Science and Technology, Govt. of India. We dedicate this letter to the memory of Dr. Bhaskar Datta who encouraged this work through many discussions and was collaborator to one of us (I.B.). XL was supported by National Natural Science Foundation of China.
no-problem/0001/cond-mat0001262.html
ar5iv
text
# Realistic simulations of Au(100): Grand Canonical Monte Carlo and Molecular Dynamics ## I Figure captions Fig. 1:(a) The initial state of Au(100): the top layer is perfectly square and unreconstructed; (b) the Monte Carlo creation of new particles starts modifying the structure; (c) the correct lateral density is achieved; (d) quasihexagonal order (equilibrium state) is obtained at surface after the Monte Carlo simulation. Color of atoms reflects their height, atoms in (d) are brighter due to vertical expansion (about $`20\%`$) connected to the reconstruction of the first layer. Fig. 2: Evolution of the particle occupancy of the initially first layer and of two layers growing on it (system (B)). Layer densities are normalised to bulk $`(100)`$ lateral density. The first layer decreases its density and becomes square with defects, and eventually perfectly square when two complete layers are adsorbed on it. The deconstruction of the underlying layer increases the growth rate of an adlayer. Fig. 3 (color): Top view of a detail of the simulated slab. Equilibrated step at $`800K`$, suddenly brought to a higher temperature of $`950K`$. Yellow atoms are top layer atoms, blue atoms belong to the second layer. Fig. 4 (color): Intermediate snapshots of the simulation (frames are separated by 1.4 ps). The step has retracted and a whole line of atoms (1-7) passed from the step rim to the second layer. Atoms 13, 14, 5, 6 and 7 are part of the reconstruction of the uncovered zone of the second layer. Fig. 5 (color): The final situation, with evident shrinking of the step (70 ps of simulation have been completed). The uncovered zone has become reconstructed, and atoms $`17`$, formerly at the step edge, as well as $`1115`$, formerly second line, have been incorporated into the lower terrace. Also note the new large wiggliness of the retracted step.
no-problem/0001/nlin0001015.html
ar5iv
text
# The statistical properties of the city transport in Cuernavaca (Mexico) and Random matrix ensembles \[ ## Abstract We analyze statistical properties of the city bus transport in Cuernavaca (Mexico) and show that the bus arrivals display probability distributions conforming to those given by the Unitary Ensemble of random matrices. \] It is well known that the statistical properties of coherent chaotic quantum systems are well described by the Wigner/Dyson random matrix ensembles. The fact that the spectral statistics of such chaotic systems is to a large extent generic - a phenomenon known as the universality of quantum chaos - has been confirmed both theoretically and experimentally. (See for instance for references.) The statistical distributions characterizing the ensembles of random matrices can be understood as minimizing the information contained in the system with the constraint that the matrices possess some discrete symmetry properties. Let $`P(x_1,x_2,\mathrm{},x_n)`$ denote the joint probability distribution of the eigenvalues $`x_1,x_2,\mathrm{},x_n`$ of the given matrices and $$I=\int P(x_1,x_2,\mathrm{},x_n)\mathrm{ln}(P(x_1,x_2,\mathrm{},x_n))dx_1\mathrm{}dx_n$$ (1) be its information content. Assuming for instance that the matrices are invariant with respect to a time reversal transformation, the information $`I`$ is minimized when the distribution $`P(x_1,x_2,\mathrm{},x_n)`$ describes the Orthogonal ensemble (GOE). If there is no external symmetry, the total minimum of the information $`I`$ is achieved for the Unitary ensemble (GUE), where the only constraint is that the matrices should be hermitian. It has been known for a long time that matrix ensembles are of relevance also for classical one dimensional interacting many particle systems, where the matrix eigenvalues $`x_1,x_2,\mathrm{},x_n`$ describe the positions of the particles. So the thermal equilibrium of a one-dimensional gas interacting via a Coulomb potential (Dyson gas) has statistical properties (depending on temperature) that are identical with those of random matrix ensembles. The same holds true also for other potentials. An example is the Pechukas gas where the one dimensional particles interact by a potential $`\lambda V(x)`$ with $`V(x)=1/|x|^2`$, $`x`$ being their mutual distance and $`\lambda `$ the relevant coupling constant. Regarding the couplings $`\lambda `$ as additive canonical variables, it has been shown by Pechukas and Yukawa that the statistical equilibrium of the related canonical ensemble is described by random matrix theory. It has to be stressed, however, that those results were obtained under special requirements on the dynamics of the variables $`\lambda `$, ensuring in fact full equivalence of the system to matrix diagonalization. Nevertheless the methods of statistical physics remain valid also for different shapes of the particle potential as well as for different dynamics of the coupling variables $`\lambda `$. It can be shown that the potential $$V(x)\propto 1/|x|^a$$ (2) with $`a`$ a positive constant, also leads to a random matrix distribution of the particle positions. The equivalence of the statistical properties of the particle positions of one dimensional interacting gases to random matrix ensembles, and the fact that GUE minimizes the information (1), lead us to speculate that, whenever the information contained in the gas is minimized, its properties are described by GUE. However, to the best of our knowledge, this fact has never been tested.
The one dimensional gas to be studied in the present letter is represented by buses that operate the city line number 4 in Cuernavaca (Mexico). We will show that the statistical properties of the bus arrivals are described by the Unitary Ensemble of random matrices. To explain the origin of the interaction between subsequent buses, several remarks are necessary. First of all it has to be stressed that there is no overall company responsible for organizing the city transport. Consequently, constraints such as a timetable, which would represent an external influence on the transport, do not exist. Moreover, each bus is the property of its driver. The drivers try to maximize their income and hence the number of passengers they transport. This leads to competition among the drivers and to their mutual interaction. It is clear that without interaction the probability distribution of the distances between subsequent buses will be Poissonian. (This is due to the rather complicated traffic conditions in the city, which work as an effective randomizer). A Poisson distribution implies, however, that the probability of close encounters of two buses is high (bus clustering), which is in conflict with the effort of the driver to maximize the number of transported passengers and accordingly maximize the distance to the preceding bus. In order to avoid the unpleasant clustering effect, the bus drivers in Cuernavaca engage people who record the arrival times of buses at significant places. Arriving at a checkpoint, the driver is told when the previous bus passed that place. Knowing the time interval to the preceding bus, the driver tries to optimize the distance to it either by slowing down or speeding up. In this way the obtained information leads to an interaction between buses and changes their statistical properties. We have collected records of the arrivals of the buses of line No. 4 close to the city center. The record contains altogether 3500 arrivals during a time period of 27 days, whereby the arrivals on different days are regarded as statistically independent. After unfolding the arrival times to account for peak periods, we evaluated the related probability distributions and compared them with the predictions of GUE. In particular we have focused on the bus spacing distribution, i.e. on the probability density $`P(s)`$ that the spacing between two subsequent buses equals $`s`$, and on the bus number variance N(T) measuring the fluctuations of the total number $`n(T)`$ of buses arriving at the place during the time interval $`T`$: $$N(T)=<\left(n(T)-T\right)^2>$$ (3) where $`<>`$ denotes the sample average. (Note that after unfolding the mean distance between buses equals 1.) According to the prediction of the unitary ensemble the spacing distribution and the number variance are given by $$P(s)=\frac{32}{\pi ^2}s^2\mathrm{exp}\left(-\frac{4}{\pi }s^2\right)$$ (4) and $$N(T)\approx \frac{1}{\pi ^2}\left(\mathrm{ln}2\pi T+\gamma +1\right)$$ (5) where $`\gamma `$ is Euler’s constant. Those predictions are compared with the obtained bus arrival data and displayed on the following figures: Figure 1 shows the bus interval distribution compared with the GUE prediction (4). The bus data are marked by $`(+)`$. The minor discrepancy between the GUE prediction and the bus data can be explained by taking into account the fact that the bus data do not represent the full record. Assuming that roughly $`0.8\%`$ of the bus arrivals were not recorded, and rejecting the same fraction of randomly chosen data from the random matrix eigenvalues, we get very satisfactory agreement.
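For readers who wish to reproduce these two statistics from a list of arrival times, a minimal sketch is given below. The synthetic "arrivals" are uncorrelated placeholders, so they follow Poisson rather than GUE statistics and the printed comparison will show the difference between the two; they are not the Cuernavaca record, and the window-sampling estimator of N(T) is one simple choice among several.

```python
# Sketch of the two statistics used above, computed from a list of arrival
# times and compared with the GUE formulas (4) and (5).  The "arrivals" below
# are synthetic, uncorrelated placeholders, not the actual Cuernavaca record.
import numpy as np

rng = np.random.default_rng(1)
arrivals = np.sort(rng.uniform(0.0, 600.0, size=200))   # one day's record, in minutes

# unfold: rescale so that the mean spacing equals one
spacings = np.diff(arrivals)
s = spacings / spacings.mean()

def p_gue(x):
    """Wigner surmise for the Unitary ensemble, Eq. (4)."""
    return (32.0 / np.pi ** 2) * x ** 2 * np.exp(-4.0 * x ** 2 / np.pi)

def n_var_gue(T):
    """Asymptotic GUE number variance, Eq. (5)."""
    return (np.log(2.0 * np.pi * T) + np.euler_gamma + 1.0) / np.pi ** 2

def number_variance(unfolded_positions, T, n_windows=500):
    """< (n(T) - T)^2 > over windows of length T placed along the sequence."""
    pos = np.asarray(unfolded_positions)
    starts = rng.uniform(pos[0], pos[-1] - T, size=n_windows)
    counts = np.searchsorted(pos, starts + T) - np.searchsorted(pos, starts)
    return np.mean((counts - T) ** 2)

positions = np.cumsum(s)                                  # unfolded "bus positions"
hist, edges = np.histogram(s, bins=np.linspace(0, 3, 16), density=True)
centres = 0.5 * (edges[1:] + edges[:-1])
for c, h in zip(centres, hist):
    print(f"s={c:4.2f}  data={h:5.2f}  GUE={p_gue(c):5.2f}")
print("N(2): data", round(number_variance(positions, 2.0), 2),
      " GUE", round(n_var_gue(2.0), 2))
```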
Due to the limited number of records available, the bus interval distribution is sensitive to the binning used in the evaluation of the probability density $`P(s)`$. This is why in the next figure we plot the integrated interval distribution $`I(s)=\int _0^sP(s^{})ds^{}`$, which is not subject to binning fluctuations. The agreement with the GUE distribution is evident. The next figure shows the number variance (5) obtained for GUE, compared with the bus data. Here the agreement is good up to time intervals $`T\approx 3`$. For larger $`T`$ the number variance of the bus arrivals lies significantly above the prediction given by (5). This indicates that the long-range correlations between more than three buses are weaker than predicted by the Unitary ensemble. The explanation is simple: knowing the time interval to the preceding bus, the driver tries to optimize his position. In doing so he has, however, to take into account also the expected interval to the bus behind him, since otherwise that bus would overtake him. Hence the driver tries to optimize his position between the preceding and the following bus, which leads to the observed correlations. The GUE properties of the bus arrival statistics can be understood by regarding the buses as a one-dimensional interacting gas. It was already mentioned that the exact GUE statistics is obtained for a Coulomb interaction between the gas particles, i.e. for the interaction potential $`V`$ given by

$$V=-\underset{i<j}{\sum }\mathrm{log}\left(|x_i-x_j|\right)+\frac{1}{2}\underset{i}{\sum }x_i^2$$ (6)

(In (6) the second term represents a force confining the gas close to the origin and is not important for our discussion. Equivalently one can discuss a one-dimensional gas on a circle, in which case the second term is absent.) The statistical properties of the particle positions of the Dyson gas are identical with those of the random matrix ensembles. In particular, the properties of the unitary ensemble are recovered by minimizing the information contained in the particle positions. It is of interest that a similar potential can indeed be found when studying the reaction of a driver to the traffic situation. Here we can use older results describing the behaviour of highway drivers. For one-dimensional models it was shown that the $`i`$-th driver accelerates according to

$$\frac{dv_i}{dt}\sim \frac{f(v_{i+1},v_i)}{x_{i+1}-x_i}$$ (7)

where $`x_{i+1}`$ and $`v_{i+1}`$ represent the position and velocity of the preceding car, respectively, and $`f(v_{i+1},v_i)`$ is a function depending on the car velocities only. Approximating $`f`$ by a constant (justified for low velocities), we find that the cars accelerate in the same way as described by the Coulomb interaction (6). The exact form of the potential is, however, not crucial for the result. Using the Metropolis algorithm we have numerically evaluated the equilibrium distributions of the positions of a one-dimensional gas interacting via the potential (2). When the exponent $`a`$ is fixed and $`a<2`$, the resulting equilibrium distributions belong to the same class as in the Dyson case (6). The numerical results show clearly that for a given $`a`$ one can always find a temperature of the gas such that the equilibrium distribution is given by GUE. Moreover, the fact that the original Dyson potential (6) contains interactions between all pairs of gas particles is also not essential. Numerical simulations show that good agreement with random matrix theory is obtained when the summation in (6) is restricted so that it involves three neighboring particles only.
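A minimal version of such a Metropolis simulation could look as follows. This is our own illustrative reconstruction (the paper does not give its particle number, temperature or boundary conditions), with particles on a ring of unit mean spacing that interact only with their two nearest neighbours through $`V(r)=1/r^a`$.

```python
import numpy as np

def metropolis_gas(n=200, a=1.0, beta=2.0, sweeps=4000, step=0.05,
                   rng=np.random.default_rng(2)):
    """Sample the equilibrium of a 1D gas on a ring of circumference n (unit mean spacing).
    Each particle interacts with its two nearest neighbours via V(r) = 1/r**a at inverse
    temperature beta.  Particle 0 is pinned at the origin, and moves that would reorder the
    particles are rejected, so the neighbour relations stay fixed.  Returns the n gaps."""
    L = float(n)
    x = np.sort(rng.uniform(0.0, L, size=n))
    x[0] = 0.0
    for _ in range(sweeps):
        for i in 1 + rng.permutation(n - 1):           # particle 0 stays fixed
            left = x[i - 1]
            right = x[i + 1] if i + 1 < n else L        # wrap around to the pinned particle
            new = x[i] + rng.normal(scale=step)
            if not (left < new < right):                # keep the ordering on the ring
                continue
            dE = ((new - left) ** (-a) + (right - new) ** (-a)
                  - (x[i] - left) ** (-a) - (right - x[i]) ** (-a))
            if dE <= 0.0 or rng.uniform() < np.exp(-beta * dE):
                x[i] = new
    return np.diff(np.append(x, L))                     # n gaps, mean spacing 1 by construction

# the simulated gaps can then be histogrammed against the GUE surmise, Eq. (4),
# scanning beta until the two agree (the "temperature" tuning described in the text)
gaps = metropolis_gas()
```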
The exact interaction between the buses in Cuernavaca is not known. However, the weak sensitivity of the statistical equilibrium to the exact form of the potential leads us to the conviction that the unitary ensemble is a good choice for describing the buses. We conclude that the statistical properties of the city bus transport in Cuernavaca can be described by the Gaussian Unitary Ensemble of random matrices. This behavior can be understood as the equilibrium state of an interacting one-dimensional gas under the assumption that the information contained in the positions of the individual gas particles is minimized. The agreement of the actual bus data with the GUE prediction is surprisingly good. We would like to thank Dr. Markus Mueller from the University in Cuernavaca, who helped us to collect the bus data, and Tomas Zuscak for his patience in entering the collected data into the computer. This work was supported by the Academy of Sciences of the Czech Republic under Grant No. A1048804 and by the “Foundation for Theoretical Physics” in Slemeno, Czech Republic.
no-problem/0001/hep-ph0001059.html
ar5iv
text
# 1 Introduction ## 1 Introduction An important issue in high energy physics is to understand the mechanism of mass generation. In the standard model, a fundamental complex Higgs scalar is introduced to break the electroweak symmetry and generate masses. However, arguments of triviality and naturalness suggest that the symmetry breaking sector of the standard model is just an effective theory. The top quark, with a mass of the order of the weak scale, is singled out to play a key role in probing the new physics beyond the standard model (SM) . The electroweak interactions of the top quark are particularly interesting and can be probed in the single top production and top decays. In this work we focus on the single top production at the Tevatron. Single top production at the Tevatron occurs within the SM in three different channels, the $`s`$-channel $`W^{}`$ production, $`q\overline{q}^{}W^{}t\overline{b}`$ , the $`t`$-channel $`W`$-exchange mode, $`bqtq^{}`$ (sometimes referred to as $`W`$-gluon fusion), and through $`tW^{}`$ production . The process $`q\overline{q}t\overline{b}`$, compared to the single top production via W-gluon fusion has the advantage that the cross section can be calculated reliably because the quark and antiquark structure functions at the relevant values of $`x`$ are better known than the gluon structure functions that enter in the calculation for the W-gluon cross section. Measurement of single top production cross section has been discussed in detail in Ref. In these references it is estimated that single top production can be measured with an experimental error, at the one sigma level, of $`\pm `$ 19 % at Run 2 (now called Run 2a) with an integrated luminosity of $`2fb^1`$. The measured cross section can then be used to extract the CKM element $`V_{tb}`$ with a combined theoretical and experimental error of $`\pm `$ 12-19% in Run 2a depending on how one estimates the theoretical error. In Ref it was mentioned that there may be a Run 3 producing $`30fb^1`$ of data and if only the $`s`$-channel $`W^{}`$ production, $`q\overline{q}^{}W^{}t\overline{b}`$ is used, then $`V_{tb}`$ could be extracted at Run 3 with an error (including theoretical error) of about $`\pm `$ 5%. At present Run 2a is expected to start next year and achieve ultimately an integrated luminosity of $`2fb^1`$. The run beyond an integrated luminosity of $`2fb^1`$ is no longer called Run 3 but is a continuation of Run 2 (Run 2b) and may achieve an integrated luminosity of $`15fb^1`$ or higher. Update of the estimate on the precision in single top measurement at Run 2 since Ref is not yet available . As a rough estimate for the errors in measuring $`V_{tb}`$ in Run 2b, operating at an integrated luminosity of $`15fb^1`$, one can multiply the estimate in “Run 3” presented in Ref by a factor of $`\sqrt{2}`$. The unitarity of the CKM matrix leads to a value of $`V_{tb}1`$. Hence a measurement of $`V_{tb}`$ which differs from unity would indicate presence of new physics. For instance a measurement of $`V_{tb}<1`$ is commonly taken to indicate the existence of new generation of fermions mixed with the third generation. Thus, it is possible that the effects of new physics will be revealed in single top production . In this paper we consider effects that extra dimension theories can produce in single top production at the Tevatron. If in such theories, the gauge fields of the Standard Model(SM) live in the bulk of the extra dimensions then they will have Kaluza-Klein(KK) excitations. 
The possibility that the masses of the lowest lying of these states could be as low as $``$ a few TeV or less (of the order of the inverse size of the compactification radius ) leads to a very rich and exciting phenomenology at future and, possibly, existing colliders. Limits on the masses of the lowest lying excitations obtained from direct $`Z^{}/W^{}`$ and dijet bump searches at the Tevatron from Run 1 indicate that they must lie above $`0.85`$ TeV. A null result for a search made with data from Run 2 will push this limit to $`1.1`$ TeV . Model dependent limits can also be placed on the masses of the excitation from low energy observables and precision electroweak measurements . For instance in Ref global fits to the electroweak observables, with certain assumptions, were found to provide lower bounds on the compactification scale, $`M_c`$, (which is equal to the mass of the first excited KK gauge boson) which were generically in the 2-5 TeV range depending on which standard model fermions, as well as the higgs boson, live in the bulk of the extra dimensions or are localized at different points of it. In fact Ref found scenarios where global electroweak fits give a 95% C.L upper and lower bounds on $`M_c`$ in the range $`0.95`$ TeV $`M_c3.44`$ TeV. Note the analysis of Ref assumed the standard model fermions to be stuck at the boundary of the extra dimension. In addition to the various assumptions, mentioned above, that are involved in putting bounds on $`M_c`$ from global electroweak fits there is another very important assumption made in all these analyses. In all these analyses it is assumed that the only new physics beyond the standard model arise from the the physics of the KK excitations of the standard model fields. For instance, as mentioned in Ref, in all these analyses the gravity induced processes are assumed not to significantly affect the electroweak observables. Note that, in general, the gravity induced processes will affect electroweak observables, changing the bound on $`M_c`$ from electroweak data, but will not affect single top production at tree level. In fact it is quite likely that there are additional new physics effects which may easily change the bounds on $`M_c`$ obtained from global fits to electroweak data. One can represent the effects of this additional new physics in terms of higher dimensional operators in the effective Lagrangian framework. Recent studies have clearly demonstrated that the presence of higher dimensional operators can significantly effect global fits to the electroweak observables . In light of the above discussion we do not strictly enforce the bounds on $`M_c`$ from global electroweak fits in a specific model but rather assume that $`M_c`$ is in the same ballpark as obtained from global electroweak fits. In other words we assume that $`M_c`$ TeV. In this work we consider the contribution of the first excited KK mode of the $`W`$, denoted by $`W_{KK}`$, on the s-channel mode for the single top production at the Tevatron. This channel is more sensitive to the presence of a new charged resonance than the t-channel $`W`$-gluon fusion mechanism as was discussed in Ref. This is because the momentum of the the s-channel resonance is time-like which leads to larger interference with the standard model amplitude than the t-channel process where the momentum of the $`W_{KK}`$ is space-like. For the s-channel process there can be a resonant enhancement of the amplitude which does not occur in the t-channel process. 
Note that the additional new physics effects, discussed above, which are not of gravitational origin may also affect single top production and has been extensively studied in Ref and we do not consider these effects in this work. However, if the additional new physics is from gravity induced processes then there is no effect, at the tree level, on the s-channel mode for single top production which is mediated by the exchange of a charged boson. The paper is organized as follows. In section II, we calculate the effects of the excited Kaluza-Klein $`W`$ state on the single top production. In section III, we present our results and conclusions. ## 2 Effect of KK excited $`W`$ in the single top production rate at Tevatron To study the physics of the KK excited $`W`$ we use a model which is based on a simple extension of the SM to 5 dimensions (5D) . However, as discussed above, we do not assume that this model represents all the physics beyond the standard model. The 5D SM is probably a part of a more fundamental underlying theory. In the 5D SM model the fifth dimension $`x_5`$ is compactified on the orbifold $`S^1/Z_2`$, a circle of radius $`R`$ with the identification $`x_5x_5`$. This is a segment of length $`\pi R`$ with two 4D boundaries, one at $`x_5=0`$ and another at $`x_5=\pi R`$ (the two fixed points of the orbifold). The SM gauge fields live in the 5D bulk, while the SM fermions, $`\psi `$, and the Higgs doublets, can either live in the bulk or be localized on the 4D boundaries. We do not consider gravity in our analysis. It is possible that gravity might propagate in more extra dimensions than the SM fields. We do not expect gravity to affect single top production at the Tevatron. If the standard model fields live in the bulk then they will have KK excitations. The fields living in the bulk can be Fourier-expanded as $`\mathrm{\Phi }_+(x_\mu ,x_5)`$ $`=`$ $`{\displaystyle \underset{n=0}{\overset{\mathrm{}}{}}}\mathrm{cos}{\displaystyle \frac{nx_5}{R}}\mathrm{\Phi }_+^{(n)}(x_\mu ),`$ $`\mathrm{\Phi }_{}(x_\mu ,x_5)`$ $`=`$ $`{\displaystyle \underset{n=1}{\overset{\mathrm{}}{}}}\mathrm{sin}{\displaystyle \frac{nx_5}{R}}\mathrm{\Phi }_{}^{(n)}(x_\mu ),`$ (1) where $`\mathrm{\Phi }_\pm ^{(n)}`$ are the KK excitations of the 5D fields and the fields have been defined to be even or odd under the $`Z_2`$-parity, i.e. $`\mathrm{\Phi }_\pm (x_5)=\pm \mathrm{\Phi }_\pm (x_5)`$. As mentioned above the gauge fields live in the bulk. They are assumed to be even under the $`Z_2`$ parity. Their (massless) zero modes correspond to the standard model gauge fields. If the Higgs boson lives in the bulk then it is assumed to be even under the $`Z_2`$ parity also. Fermions in 5D have two chiralities, $`\psi _L`$ and $`\psi _R`$, that can transform as even or odd under the $`Z_2`$. The precise assignment is a matter of definition. It is assumed that $`\psi _L`$ ($`\psi _R`$) components of fermions $`\psi `$, which are doublets (singlets) under SU(2)<sub>L</sub> have even $`Z_2`$ parity and consequently only the $`\psi _L`$ of SU(2)<sub>L</sub> doublets and $`\psi _R`$ of SU(2)<sub>L</sub> singlets have zero modes. The fermions in this model couple to the KK excited gauge bosons only if they are localized on the 4D boundaries. The Lagrangian along with additional details and low energy phenomenology of this model can be found in and will not be presented in this paper. 
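As a small numerical illustration of the decomposition in Eq. (1) (our own sketch, not part of the paper's analysis): the $`Z_2`$-even (cosine) and $`Z_2`$-odd (sine) towers are mutually orthogonal on the segment $`[0,\pi R]`$, and only the even tower contains a constant $`n=0`$ mode, which is why only the even components of the 5D fields have massless zero modes. The value of $`R`$ below is arbitrary.

```python
import numpy as np

# Orthogonality of the Kaluza-Klein mode functions of Eq. (1) on the interval [0, pi*R].
R = 1.0
x5 = np.linspace(0.0, np.pi * R, 20001)
dx = x5[1] - x5[0]

def even_mode(n):           # Z2-even: cos(n x5 / R); n = 0 is the constant (massless) mode
    return np.cos(n * x5 / R)

def odd_mode(n):            # Z2-odd: sin(n x5 / R); there is no n = 0 mode
    return np.sin(n * x5 / R)

def overlap(f, g):          # simple Riemann-sum approximation of the overlap integral
    return np.sum(f * g) * dx

print(overlap(even_mode(2), even_mode(3)))   # ~0: different even modes are orthogonal
print(overlap(even_mode(2), odd_mode(2)))    # ~0: the even and odd towers decouple
print(overlap(even_mode(2), even_mode(2)))   # ~pi*R/2: normalisation of an excited mode
print(overlap(even_mode(0), even_mode(0)))   # ~pi*R: the zero mode has twice that norm
```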
For simplicity we will assume that there is one higgs doublet which along with the fermions that participate in the s-channel process for the single top production are localized on the 4D boundary at $`x_5=0`$. The effective four dimensional Lagrangian can be obtained after integrating over the fifth dimension . The piece of this Lagrangian relevant to our calculation is the charged electroweak sector and is given by $$^{ch}=\underset{a=1}{\overset{2}{}}_a^{ch}+_{new}$$ (2) with $`_a^{ch}`$ $`=`$ $`{\displaystyle \frac{1}{2}}m_W^2W_aW_a+{\displaystyle \frac{1}{2}}M_c^2{\displaystyle \underset{n=1}{\overset{\mathrm{}}{}}}n^2W_a^{(n)}W_a^{(n)}`$ (3) $``$ $`gW_aJ_ag\sqrt{2}J_a^{KK}{\displaystyle \underset{n=1}{\overset{\mathrm{}}{}}}W_a^{(n)},`$ where $`m_W^2=g^2v^2/2`$, the weak angle $`\theta `$ is defined by $`e=gs_\theta =g^{}c_\theta `$, while the currents are $`J_{a\mu }`$ $`=`$ $`{\displaystyle \underset{\psi }{}}\overline{\psi }_L\gamma _\mu {\displaystyle \frac{\sigma _a}{2}}\psi _L,`$ $`J_{a\mu }^{KK}`$ $`=`$ $`{\displaystyle \underset{\psi }{}}\epsilon ^{\psi _L}\overline{\psi }_L\gamma _\mu {\displaystyle \frac{\sigma _a}{2}}\psi _L.`$ (4) Here $`\epsilon ^{\psi _L}`$ takes the value 1(0) for the $`\psi _L`$ living in the boundary(bulk). The mass of the $`n^{th}`$ excited KK state of the $`W`$ is given by $`nM_c=n/R`$ where R is the compactification radius. In this work we consider only the $`n=1`$ state. The term $`_{new}`$ represents the additional new physics beyond the 5 dimensional standard model the structure of which remains unknown till the full underlying theory is understood. The coupling of KK excited $`W`$ to the standard model is determined in terms of the Fermi coupling, $`G_F`$, up to corrections of $`O(m_Z^2/M_c^2)`$ . For $`M_c`$ TeV the $`O(m_Z^2/M_c^2)`$ effects are small for single top production and therefore we do not include these effects in our calculations. We have ignored the mixing of the $`W`$ with $`W_{KK}`$ which is also an $`O(m_Z^2/M_c^2)`$ effect. Thus, assuming the $`W_{KK}`$ decays only to standard model particles, the predicted effect of $`W_{KK}`$ on single top production depends, in addition to the SM parameters, only on the unknown mass of the $`W_{KK}`$. The cross section for $`p\overline{p}t\overline{b}X`$ is given by $`\sigma (p\overline{p}t\overline{b}X)`$ $`=`$ $`{\displaystyle 𝑑x_1𝑑x_2[u(x_1)\overline{d}(x_2)+u(x_2)\overline{d}(x_1)]\sigma (u\overline{d}t\overline{b})}.`$ (5) Here $`u(x_i)`$, $`\overline{d}(x_i)`$ are the $`u`$ and the $`\overline{d}`$ structure functions, $`x_1`$ and $`x_2`$ are the parton momentum fractions and the indices $`i=1`$ and $`i=2`$ refer to the proton and the antiproton. The cross section for the process $$u(p_1)+\overline{d}(p_2)W^{}\overline{b}(p_3)+t(p_4),$$ is given by $`\sigma `$ $`=`$ $`\sigma _{SM}\left[1+4{\displaystyle \frac{A}{D}}+4{\displaystyle \frac{C}{D}}\right],`$ $`A`$ $`=`$ $`(sM_W^2)(sM_{W_{KK}}^2)+M_WM_{W_{KK}}\mathrm{\Gamma }_W\mathrm{\Gamma }_{W_{KK}},`$ $`C`$ $`=`$ $`(sM_W^2)^2+(M_W\mathrm{\Gamma }_W)^2,`$ $`D`$ $`=`$ $`(sM_{W_{KK}}^2)^2+(M_{W_{KK}}\mathrm{\Gamma }_{W_{KK}})^2,`$ (6) and $`\sigma _{SM}`$ $`=`$ $`{\displaystyle \frac{g^4}{384\pi }}{\displaystyle \frac{(2s+M_t^2)(sM_t^2)^2}{s^2[(sM_W^2)^2+(M_W\mathrm{\Gamma }_W)^2]}}.`$ (7) Here $`s=x_1x_2S`$ is the parton center of mass energy while $`S`$ is the $`p\overline{p}`$ center of mass energy . To calculate the width of the $`W_{KK}`$ we will assume that it decays only to the standard model particles. 
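Before turning to the width, the parton-level pieces above are simple to evaluate numerically. The sketch below (ours) implements the Standard Model partonic cross section of Eq. (7); the numerical constants (masses, widths, the Fermi constant and the GeV-to-picobarn conversion) are typical late-1990s values assumed for illustration, not inputs quoted in the paper.

```python
import math

# Assumed illustrative inputs (not quoted in the paper)
G_F     = 1.16637e-5     # Fermi constant [GeV^-2]
M_W     = 80.4           # W mass [GeV]
GAMMA_W = 2.09           # W width [GeV]
M_T     = 175.0          # top quark mass [GeV]
GEV2_TO_PB = 3.894e8     # 1 GeV^-2 = 3.894e8 pb

def g2():
    """SU(2) gauge coupling squared from g^2 = 4*sqrt(2)*G_F*M_W^2 (tree level)."""
    return 4.0 * math.sqrt(2.0) * G_F * M_W**2

def sigma_sm_partonic(s):
    """Eq. (7): SM cross section for u dbar -> W* -> t bbar at squared parton energy s [GeV^2].
    Returns the result in pb; massless b quark, no QCD or Yukawa corrections."""
    if s <= M_T**2:
        return 0.0
    numerator = (2.0 * s + M_T**2) * (s - M_T**2) ** 2
    denominator = s**2 * ((s - M_W**2) ** 2 + (M_W * GAMMA_W) ** 2)
    return g2() ** 2 / (384.0 * math.pi) * numerator / denominator * GEV2_TO_PB

# a single point well above threshold; the hadronic value quoted later (0.30 pb at
# sqrt(S) = 2 TeV) requires folding this with the u and dbar parton distributions, Eq. (5)
print(sigma_sm_partonic(300.0 ** 2), "pb")
```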
The $`W_{KK}`$ will then have the same decays as the $`W`$ boson but in addition it can also decay to a top-bottom pair which is kinematically forbidden for the $`W`$ boson. The width of the $`W_{KK}`$, $`\mathrm{\Gamma }_{W_{KK}}`$, is then given by $`\mathrm{\Gamma }_{W_{KK}}`$ $``$ $`{\displaystyle \frac{2M_{W_{KK}}}{M_W}}\mathrm{\Gamma }_W+{\displaystyle \frac{2M_{W_{KK}}}{3M_W}}\mathrm{\Gamma }_WX,`$ $`X`$ $`=`$ $`(1{\displaystyle \frac{M_t^2}{M_{W_{KK}}^2}})(1{\displaystyle \frac{M_t^2}{2M_{W_{KK}}^2}}{\displaystyle \frac{M_t^4}{M_{W_{KK}}^4}}).`$ (8) where $`\mathrm{\Gamma }_W`$ is the width of the $`W`$ boson and we have neglected the mass of the $`b`$ quark along with the masses of the lighter quarks and the leptons. ## 3 Results In Fig. 1, we plot $`\mathrm{\Delta }\sigma /\sigma `$ versus $`M_{W_{KK}}`$ , the mass of the first excited KK $`W`$ state, where $`\mathrm{\Delta }\sigma `$ is the change in the single top production cross section in the presence of $`W_{KK}`$ and $`\sigma `$ is the standard model cross section<sup>6</sup><sup>6</sup>6 We have not included the QCD and Yukawa corrections to the single top quark production rate. They will enhance the total rate, but not change the percentage of the correction of new physics to the cross section. . We have used the CTEQ structure functions for our calculations and obtain a standard model cross section of 0.30 pb for the process $`p\overline{p}t\overline{b}X`$ at $`\sqrt{S}=2`$TeV. We observe from Fig. 1 that the presence of $`W_{KK}`$ can lower the cross section by as much as 25 % for $`M_{W_{KK}}1`$TeV. This has an important implication for the measurement of $`V_{tb}`$ using the s-channel mode at the Tevatron. It was pointed out in Ref that there could be models where the presence of an additional $`W`$( denoted as $`W^{}`$) could lead to a measurement of the cross section for the s-channel $`p\overline{p}t\overline{b}X`$ smaller than the standard model prediction. A specific example of such a model with a $`W^{}`$ that causes a significant decrease of the single top cross section can be found in Ref. This could, as pointed out in Ref , lead one to conclude that $`V_{tb}<1`$ which could then be wrongly interpreted as evidence for the existence of new generation(s) of fermions mixed with the third generation. Our work provides another specific example of such a model and our results clearly demonstrates that a measurement of the cross section for the s-channel $`p\overline{p}t\overline{b}X`$ smaller than the standard model prediction would not necessarily imply $`V_{tb}<1`$ or evidence of extra generation(s) of fermions mixed with the third generation. Note that, as mentioned above, the predicted effect of $`W_{KK}`$ on single top production depends, in addition to the SM parameters, only on the unknown mass of the $`W_{KK}`$, while in most other $`W^{}`$ models the predictions for single top production depend, in addition to the SM parameters, on the unknown mass of the $`W^{}`$ as well as on unknown mixing parameter(s). Note that the $`W_{KK}`$ can also be searched at the Tevatron through its decay into a high energy lepton and a neutrino if it couples to the leptons. Searches for this resonance at the Tevatron allow discovery at 1.11 TeV and 1.34 TeV with 2 $`fb^1`$ and 20 $`fb^1`$ for $`\sqrt{S}=2`$ TeV . In this energy range, as shown in Fig. 1, there will be significant effects on the single top production rate. 
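A short numerical sketch (ours, with the same assumed inputs as above) of Eqs. (6) and (8): the width of the first KK excitation and the resulting rescaling factor $`\sigma /\sigma _{SM}`$ at a fixed parton energy. Well below the resonance the interference term $`A/D`$ is negative and dominates, which is consistent with the reduction of the cross section discussed in this section.

```python
# Assumed illustrative inputs, as in the previous sketch (GeV)
M_W, GAMMA_W, M_T = 80.4, 2.09, 175.0

def gamma_w_kk(m_kk):
    """Eq. (8): width of the first KK excited W, assuming it decays only to SM fermions."""
    x = (1.0 - M_T**2 / m_kk**2) * (1.0 - M_T**2 / (2.0 * m_kk**2) - M_T**4 / m_kk**4)
    return 2.0 * m_kk / M_W * GAMMA_W + 2.0 * m_kk / (3.0 * M_W) * GAMMA_W * x

def kk_rescaling(s, m_kk):
    """Eq. (6): sigma / sigma_SM = 1 + 4*A/D + 4*C/D for u dbar -> t bbar at squared energy s."""
    g_kk = gamma_w_kk(m_kk)
    A = (s - M_W**2) * (s - m_kk**2) + M_W * m_kk * GAMMA_W * g_kk
    C = (s - M_W**2) ** 2 + (M_W * GAMMA_W) ** 2
    D = (s - m_kk**2) ** 2 + (m_kk * g_kk) ** 2
    return 1.0 + 4.0 * A / D + 4.0 * C / D

print(gamma_w_kk(1000.0))                # width of a 1 TeV W_KK, roughly 70 GeV with these inputs
print(kk_rescaling(300.0 ** 2, 1000.0))  # below 1 at sqrt(s) = 300 GeV: destructive interference
```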
If the SM leptons are allowed to live in the bulk then they will not couple to $`W_{KK}`$ and so it is no longer possible to search for this resonance through its decay to leptons. In such a scenario, single top production could be a very effective probe of the $`W_{KK}`$ resonance. In Fig.2 we show the $`t\overline{b}`$ invariant mass distribution $`\frac{d\sigma }{dM_{tb}}`$, where $`M_{tb}`$ is the invariant mass of the $`t\overline{b}`$ pair for various values of $`M_{W_{KK}}`$. We see a significant decrease in the signal at $`M_{W_{KK}}1`$ TeV for lower values of $`M_{tb}`$. For lower $`M_{tb}`$, the interference term ($`\frac{A}{D}`$) in Eq. (6) has a stronger effect than the direct term ($`\frac{C}{D}`$) which leads to a reduction of the signal. In Fig. 3 we show the $`t\overline{b}`$ invariant mass distribution $`\frac{d\sigma }{dM_{tb}}`$ for an extended range of $`M_{tb}`$ for $`M_{W_{KK}}=1`$TeV. As we go to larger values of $`M_{tb}`$, close to the resonance region, the direct term in Eq. (6) becomes dominant. This leads to a bump in the resonance region. However, the signal is considerably reduced because of smaller parton distributions. In conclusion, we have studied the effects of a KK excited $`W`$ on the cross section of the single top production at the Tevatron. The model of $`W_{KK}`$ considered in this work leads to a definite structure for its coupling to the standard model fields. Moreover, the prediction for single top production, up to very small corrections, depend only on one additional unknown parameter, the mass of the $`W_{KK}`$. This is unlike the usual $`W^{}`$ models, which require extending the standard model gauge group and the predictions for single top production depend on the unknown mass of the $`W^{}`$ as well as on additional unknown mixing parameter(s). Our results show that the cross section for s channel single top quark production can be significantly reduced, by about 25%, from the standard model value for $`M_{W_{KK}}1`$TeV. Therefore the s channel single top production could be a very effective probe of the $`W_{KK}`$ resonance with a mass $``$ TeV at the Tevatron. Acknowledgment: This work was supported in part by Natural Sciences and Engineering Research Council of Canada(A. Datta and P.J. O’Donnell) and by Chinese National Science Foundation (T. Huang, X. Zhang and Z.-H. Lin). We thank A. P. Heinson, W.-B. Lin and S.-H. Zhu for discussions.
no-problem/0001/math-ph0001015.html
ar5iv
text
# 1. Introduction ## 1. Introduction Let $`D^n,n2,`$ be a bounded domain with a sufficiently smooth boundary $`\mathrm{\Gamma }`$, not necessarily connected, but consisting of a finitely many connected components. Let $`D^{}:=^nD`$ be the exterior domain, $`k>0`$ a fixed wavelength, $`\alpha S^{n1}`$ a given unit vector, $`S^{n1}`$ the unit sphere. It is well known that the obstacle scattering problem: $$\mathrm{}^2u+k^2u=0\text{ in }D^{},$$ $`1`$ $$u_N=0\text{ on }\mathrm{\Gamma },$$ $`2`$ $$u=u_0+v,u_0:=\mathrm{exp}(ik\alpha x),$$ $`3`$ where $`v`$ satisfies the radiation condition $$\underset{r\mathrm{}}{lim}_{|x|=r}|v_rikv|^2𝑑s=0,$$ $`4`$ and $`N`$ is the exterior unit normal to $`\mathrm{\Gamma }`$ has been studied intensively and there are many ways known for proving the existence and uniqueness of its solution which is called the scattering solution . The function $`v`$ has the following asymptotics $$v=A(\alpha ^{},\alpha ,k)\gamma (r)+o(\frac{1}{r})\text{ as }r\mathrm{},\alpha ^{}:=x/r.$$ $`5`$ The coefficient $`A(\alpha ^{},\alpha ,k)`$ is called the scattering amplitude. We also consider the Robin boundary condition in place of (2): $$u_N+\sigma (s)u=0\text{ on }\mathrm{\Gamma },$$ $`6`$ where $`\sigma `$ is a continuous real-valued function on $`\mathrm{\Gamma }`$. In what follows, we denote by a subindex zero the quantity which is fixed. The inverse obstacle scattering problems (IOSP1-5) can be stated as follows: 1) Given $`A(\alpha ^{},\alpha _0,k)\alpha ^{}S^{n1},k[a,b],\mathrm{\hspace{0.17em}0}a<b,`$ find $`\mathrm{\Gamma }`$, or, if Robin’s condition is assumed, find $`\mathrm{\Gamma }`$ and $`\sigma `$; 2) Given $`A(\alpha ^{},\alpha ,k_0)\alpha ^{},\alpha S^{n1},`$ find $`\mathrm{\Gamma }`$, or, if Robin’s condition is assumed, find $`\mathrm{\Gamma }`$ and $`\sigma `$; 3) Given $`A(\alpha ^{},\alpha _0,k_0)\alpha ^{}S^{n1},`$ find $`\mathrm{\Gamma }`$, or, if Robin’s condition is assumed, find $`\mathrm{\Gamma }`$ and $`\sigma `$; 4) Given $`A(\alpha ,\alpha ,k_0)\alpha S^{n1},`$ find $`\mathrm{\Gamma }`$, or, if Robin’s condition is assumed, find $`\mathrm{\Gamma }`$ and $`\sigma `$; (backscattering data) 5) Given $`A(\alpha ,\alpha ,k)\alpha S^{n1},k[a,b],\mathrm{\hspace{0.17em}0}a<b,`$ find $`\mathrm{\Gamma }`$, or, if Robin’s condition is assumed, find $`\mathrm{\Gamma }`$ and $`\sigma `$; Of course, if IOSP4 is solved then IOSP5 is solved. In all these problems one can assume for uniqueness studies that the data are given on open subsets of $`S^{n1}`$, however small, since such data allow one to uniquely recover the data on all of $`S^{n1}`$ . In this paper we discuss only IOSP1-2. Uniqueness of the solution to other three problems has been (and still is) an open problem for several decades, although for IOSP5 uniqueness for convex obstacles follows from the results in . The reconstruction of $`\mathrm{\Gamma }`$ from the scattering data is not discussed here, see , and references therein. The history and various proofs of the uniqueness theorems for IOSP1,2 are given in , and references therein, and a new method of proof and its applications are given in -. Uniqueness of the solution for IOSP1 was proved by M.Schiffer (1962) for the Dirichlet boundary condition, while for IOSP2 it was proved by A.G.Ramm (1985) for the Dirichlet, Neumann and Robin boundary conditions (see for these proofs). In - a new method of proof was given. 
In this paper we discuss the technical question: the role of smoothness of the boundary in the various proofs of the uniqueness theorems for IOSP1-2. We justify the applicability of Green’s formula in the Schiffer’s and other proofs and point out that the question of whether the Neumann Laplacian has a discrete spectrum in a certain domain with non-smooth boundary can be avoided completely. This question arises in the Schiffer’s type of proofs. Furthermore, we generalize the uniqueness results for Lipschitz domains, i.e., for domains with Lipschitz boundaries. In section 2 the Schiffer’s type proof and the proof from , are presented, and the role of the non-smoothness of some of the domains, used in these proofs, is analyzed. An important role is played by the sets of finite perimeter and Green’s formula for such sets. The related theory is discussed in , and . We first assume in this paper that the boundary $`\mathrm{\Gamma }`$ is sufficiently smooth and then show that our argument is valid in Lipschitz domains. So, this paper deals with the technical problems. Recall that a Lipschitz domain is a bounded domain each point of whose boundary has a neighborhood in which the equation of the boundary in the local coordinates is given by a function satisfying a Lipschitz condition. Lipschitz domains are denoted as $`C^{0,1}`$ domains. In the potential theory results are given for Lipschitz domains. The definition of the solution to problem (1)-(3) in non-smooth domains is as follows: A function $`uH_{loc}^2H^1(D_R^{})`$ solves (1)-(3), iff it satisfies conditions (3) and (4), and the following identity: $$_D^{}(k^2u\varphi \mathrm{}u\mathrm{}\varphi )𝑑x=0\varphi H_{loc}^2H_c^1(D^{}).$$ $`7`$ Here $`H^l`$ is the Sobolev space, $`H_{loc}^2`$ is the space of functions which are in $`H^2(\stackrel{~}{D}^{})`$ for any compact strictly inner subdomain $`\stackrel{~}{D}^{}`$ of $`D^{},H_c^1(D^{})`$ is the space of functions which vanish near infinity (but not necessarily near $`\mathrm{\Gamma }`$), and $`H^1(D_R^{})`$ is the space of functions which for any sufficiently large $`R`$ belong to $`H^1(D^{}B_R)`$, where $`B_R`$ is the ball of radius $`R`$, centered at the origin. This definition does not require any smoothness of the boundary. The solution to (1), (6), (3) is a function in $`H_{loc}^2H^1(D_R^{})`$ which satisfies conditions (3) and (4), and the identity $$_D^{}(k^2u\varphi \mathrm{}u\mathrm{}\varphi )𝑑x+_\mathrm{\Gamma }\sigma u\varphi 𝑑s=0\varphi H_{loc}^2H_c^1(D^{}).$$ $`8`$ Here the Lipschitz boundary $`\mathrm{\Gamma }`$ is admissible because the imbedding theorem holds for such a boundary. In this paper we use the following notations: $`D_{12}:=D_1D_2,D^{12}:=D_1D_2,\mathrm{\Gamma }_{12}`$ is the boundary of $`D_{12}`$, $`\mathrm{\Gamma }^{12}`$ is the boundary of $`D^{12}`$, $`\mathrm{\Gamma }_1^{}`$ is the part of $`\mathrm{\Gamma }_1`$ which lies outside of $`D_2`$, and $`\mathrm{\Gamma }_2^{}`$ is defined likewise, $`\stackrel{~}{D}_1`$ is a connected component of $`D_1D^{12}`$, $`D_3:=D_{12}D^{12}`$. ## 2. Uniqueness results for IOSP with the Neumann and Robin boundary conditions ### 2.1.Uniqueness for IOSP1 Consider IOSP1 first. Let us outline a variant of the Schiffer’s type of proof, which allows us to deal with non-smooth boundaries of the domains arising in the proof. Assume that there are two different obstacles, $`D_j,j=1,2,`$ which generate the same scattering data for IOSP1. Let $`w:=u_1u_2`$, where $`u_j`$ are the corresponding scattering solutions. 
The function $`w`$ solves equation (1) in $`D_{12}^{}`$ and $`w=o(1/r)`$ because the scattering data are the same for $`D_1`$ and $`D_2`$. Thus, lemma \[3,p.25\] implies $`w=0`$ in $`D_{12}^{}`$. Let $`U:=u_1=u_2`$ in $`D_{12}^{}`$. Then $`U`$ can be continued analytically, as a solution to (1) , to the domains $`D_3`$ and $`(D^{12})^{}`$, because either $`u_1`$ or $`u_2`$ are defined in these domains and solve (1) there. We assume that $`(D^{12})^{}`$ is not empty. If it is, the argument is even simpler: $`V:=Uu_0`$ solves equation (1) in $`^n`$ and satisfies the radiation condition; thus, $`V=0`$ and $`U=u_0`$ in $`^n`$. Since $`u_0`$ does not satisfy the boundary condition (2), we have got a contradiciton. This contradiction proves that the assumption ($`(D^{12})^{}`$ is empty) is wrong. The domain $`D_3`$ is bounded since both $`D_j`$ are. The function $`U`$ solves equation (1) and satisfies the homogeneous boundary condition (2) on its boundary $`\mathrm{\Gamma }_3`$, except for, possibly, the set of $`(n1)`$-dimensional Hausdorff measure, namely, except for the set of points which belong to the intersection of $`\mathrm{\Gamma }_1`$ and $`\mathrm{\Gamma }_2`$. Since the scattering solutions in domains with smooth boundaries are uniformly bounded functions whose first derivatives are smooth (Lipschitz are sufficient for our argument), the function $`U`$ has the same properties. Therefore, for any $`k[a,b]`$, $`U`$ is in $`L^2(D_3)`$, and, as we prove below, the functions $`U`$, corresponding to different $`k`$, are orthogonal in $`L^2(D_3)`$. Since this Hilbert space is separable, we arrive at a contradiction: existence of a continuum of orthogonal non-trivial elements in the separable Hilbert space $`L^2(D_3)`$. This contradiction proves that $`D_1=D_2`$, and the uniqueness theorem is proved for IOSP1. The original Schiffer’s argument presented in the literature, uses discreteness of the spectrum of the Laplacian, corresponding to a boundary condition, in a bounded domain. The discreteness of the spectrum holds for any bounded domain for the Dirichlet Laplacian, but not necessarily for the Neumann one. This is why we want to avoid the reference to the discreteness of the spectrum of the Neumann Laplacian or the Robin Laplacian. To complete the proof, it is sufficient to prove the claim about the orthogonality of $`U`$ with different $`k`$. The proof of this claim goes along the usual line. The new point is the discussion of the applicability of Green’s formula, used in the argument, for non-smooth domains. Let $`U_j:=U(x,\alpha _0,k_j)`$, $`L:=\mathrm{}^2`$, and let the overline denote complex conjugate. Then: $$I:=_{D_3}(U_1L\overline{U_2}\overline{U_2}LU_1)𝑑x=(k_1^2k_2^2)_{D_3}U_1\overline{U_2}𝑑x.$$ $`9`$ We wish to prove that the right-hand side vanishes. This follows if $`I=0`$. The integral $`I`$ can be transformed formally by Green’s formula and, using the boundary condition, one concludes that $`I=0`$. The problem is to justify the applicability of Green’s formula in the domain $`D_3`$ with non-smooth boundary. The remaining part of the proof contains such a justification. Our starting point is the known (see ,, ) result: Green’s formula holds for the domains with finite perimeter and functions whose first derivatives are in the space $`BV`$, provided that their rough traces are summable on the reduced boundary of the domain (in our case $`D_3`$ is the domain) with respect to $`(n1)`$-dimensional Hausdorff measure. Let $`\mathrm{\Omega }^n`$ be a domain. 
Recall that the space $`BV(\mathrm{\Omega })`$ consists of functions whose first derivatives are signed measures locally in $`\mathrm{\Omega }`$ (, ). A set $`D_j`$ has finite perimeter if $`\chi _j`$, the characteristic function of this set, belongs to $`BV(^n)`$. The reduced boundary, denoted by $`\mathrm{\Gamma }^{}`$, is the set of points at which the exterior normal in the sense of Federer exists (see , , or for the definition of this normal and for that of the rough trace). It is proved in , that for the sets with finite perimeter the reduced boundary has full $`(n1)`$-dimensional Hausdorff measure, so that the normal in the sense of Federer is defined almost everywhere on $`\mathrm{\Gamma }`$ with respect to $`(n1)`$-dimensional Hausdorff measure (we will write $`s`$-almost everywhere for brevity). What we need is to check that: 1) the set $`D_3`$ has finite perimeter, 2) the function $`\mathrm{}\psi `$ is a measure in $`D_3`$, where $`\psi :=U_1\mathrm{}\overline{U_2}\overline{U}_2\mathrm{}U_1`$, and 3) $`\psi `$ has a summable rough trace on $`\mathrm{\Gamma }_3^{}`$, the reduced boundary of $`D_3`$. Note that the integrand in the first integral in formula (9) is of the form $`\mathrm{}\psi `$, and $`\mathrm{}\psi =(k_1^2k_2^2)U_1\overline{U}_2`$. First, let us prove that $`D_3`$ has finite perimeter. Note , that $`D_3`$ is not necessarily a Lipschitz domain, although $`D_1`$ and $`D_2`$ are. Let us denote by $`P(D)`$ the perimeter of $`D`$ and by $`\mathrm{}\chi `$ the norm of $`\chi `$ in the space $`BV(^n)`$, that is, the total variation of the vector measure $`\mathrm{}\chi `$. By definition, $`P(D)=\mathrm{}\chi `$. Let $`s(\mathrm{\Gamma })`$ denote the $`n1`$-dimensional Hausdorff measure of $`\mathrm{\Gamma }`$. It is known that $`P(D)s(\mathrm{\Gamma })`$ and the strict inequality is possible for non-smooth $`\mathrm{\Gamma }`$. Also, it can happen that $`P(D)<\mathrm{}`$, but $`s(\mathrm{\Gamma })=\mathrm{}`$. If $`P(D)<\mathrm{}`$, then $`\mathrm{\Gamma }^{}`$ is $`s`$-measurable and $`s(\mathrm{\Gamma }^{})=P(D)`$, see \[9, p.193\]. The set $`D_3`$ has finite perimeter iff $`\mathrm{}\chi _3<\mathrm{}`$. Clearly: $$\chi _3=\chi _{12}\chi ^{12},$$ $`10`$ $$\chi _{12}=\chi _1+\chi _2\chi ^{12},$$ $`11`$ $$\chi ^{12}=\chi _1\chi _2,$$ $`12`$ where $`\chi ^{12}`$, e.g., is the characteristic function of the domain $`D^{12}`$. By the assumption, $`\mathrm{}\chi _j<\mathrm{},j=1,2`$. The space $`BV`$ is linear. Therefore, by formulas (10)-(12), it follows that $`P(D_3)<\mathrm{}`$, if one checks that the function $`\chi _1\chi _2BV(^n)`$. This, however, is a direct consequence of the known formula \[9,p.189\] for the derivative of the product of bounded $`BV`$ functions: $`\mathrm{}(\chi _1\chi _2)=\widehat{\chi _2}\mathrm{}\chi _1+\widehat{\chi _1}\mathrm{}\chi _2`$, where $`\widehat{\chi }`$ denotes the averaged value of $`\chi `$ at the point $`x`$ (see \[9, p.189\] for the derivation of this formula). Note that the usual formula for the derivative of the product (the formula without the averaged values) is not valid for $`BV`$ functions, in particular, it is wrong for the characteristic functions. Let us now check that the function $`\mathrm{}\psi `$ is a signed measure in $`D_3`$. Since $`\mathrm{}\psi =(k_1^2k_2^2)U_1\overline{U}_2`$ and the functions $`U_1,U_2`$ belong to $`H^1(D_3)`$, it follows that $`U_1\overline{U}_2L^1(D_3)`$. Thus, $`\mathrm{}\psi `$ is a signed measure in $`D_3`$. Finally, $`\psi `$ has a summable rough trace on $`\mathrm{\Gamma }_3^{}`$. 
In fact, more holds: the summable trace $`\psi ^+`$ exists $`s`$-almost everywhere on $`\mathrm{\Gamma }_3^{}`$ and this implies existence of the summable rough trace. Recall that the trace $`\psi ^+`$ is defined at the point $`x\mathrm{\Gamma }`$ as the following limit (if it exists): $$\psi ^+(x)=\underset{r0}{lim}\frac{1}{meas_n(D_r(x))}_{D_r(x)}\psi (y)𝑑y,$$ where $`D_r(x):=\{y:yD_3,|xy|<r\}`$. Existence of the summable trace of the function $`\psi `$ on $`\mathrm{\Gamma }_3^{}`$ follows from \[10, lemma 5.7\]. One can see that the trace exists in yet stronger sense: $`U_j(x)`$ and $`\mathrm{}U_j(x)`$ have non-tangential limits as $`xt\mathrm{\Gamma }_3^{}`$, these limits are in $`L^2(\mathrm{\Gamma }_3^{},ds)`$ and therefore their product is in $`L^1(\mathrm{\Gamma }_3^{},ds)`$, that is , the trace of $`\psi `$ is summable. This completes the proof of the uniqueness theorem for IOSP1. Let us formulate the result: ###### Theorem 1 Assume that the obstacles $`D_j,j=1,2,`$ have the following properties: 1) they are Lipschitz domains, 2) $`A_1(\alpha ^{},\alpha _0,k)=A_2(\alpha ^{},\alpha _0,k)\alpha ^{}S^{n1},k[a,b],\mathrm{\hspace{0.17em}0}a<b`$. Then $`D_1=D_2`$ and, in the case of Robin’s boundary condition, $`\sigma _1=\sigma _2`$. ###### Demonstration Proof Only the last statement is not yet proved. However, since we have already established that $`D_1=D_2:=D`$ and $`u_1=u_2`$ in $`D^{}`$, it follows that $$\sigma _1=\frac{u_{1N}}{u_1}=\frac{u_{2N}}{u_2}=\sigma _2\text{ on }\mathrm{\Gamma }.\mathit{}$$ Another proof can be given. It is based on formula (13) and on the method, developed in section 2.2 below. If $`\mathrm{\Gamma }_j`$ are Lipschitz boundaries, then the existence and uniqueness of the scattering solutions can be established as in with the help of the potential theory for domains with Lipschitz boundaries . The details of this theory will be published elsewhere. In the next subsection we consider IOSP2 and use the method developed in - for the uniqueness proof. ### 2.2.Uniqueness for IOSP2 The starting point is the identity first established in : $$4\pi (A_1A_2)=_{\mathrm{\Gamma }_{12}}[u_1u_{2N}u_{1N}u_2]𝑑s,$$ $`13`$ where $`u_1:=u_1(x,\alpha ,k),u_2:=u_2(x,\alpha ^{},k)`$, $`u_N`$ denotes the normal derivative, as before, $`u_j`$ and $`A_j:=A_j(\alpha ^{},\alpha ,k)`$ are, respectively, the scattering solution and scattering amplitude, corresponding to the obstacle $`D_j,j=1,2.`$ Applications of this useful formula are given in -. If $`A_1=A_2`$ for the fixed energy data in IOSP2, then (13) yields: $$0=_{\mathrm{\Gamma }_{12}}[u_1(s,\alpha )u_{2N}(s,\alpha ^{})u_{1N}(s,\alpha )u_2(s,\alpha ^{})]𝑑s,\alpha ,\alpha ^{}S^{n1},$$ $`14`$ where we have dropped the dependence on the fixed energy $`k_0`$. Let $`G_j:=G_j(x,y,k)`$ denote Green’s function for the problem (1)-(3), or the exterior problem with the Robin boundary condition. It is proved in \[3,p.46\], that $$G_j=\gamma (r)[u_j(x,\alpha ,k)+O(\frac{1}{r})],r\mathrm{},y/r:=\alpha ,r:=|y|,$$ $`15`$ where $`\gamma (r)`$ is a known function (e.g., $`\gamma =\frac{exp(ikr)}{4\pi r}`$ if $`n=3`$), and the coefficient $`u_j`$ in (15) is the scattering solution. ###### Lemma 1 Equation (14) implies: $$0=_{\mathrm{\Gamma }_{12}}[G_1(s,x)G_{2N}(s,y)G_{1N}(s,x)G_2(s,y)]𝑑s,x,yD_{12}^{}.$$ $`16`$ ###### Demonstration Proof We give a proof for $`n=3`$. For other $`n`$ the proof is similar. 
First, let us derive the equation: $$W(y):=_{\mathrm{\Gamma }_{12}}[u_1(s,\alpha )G_{2N}(s,y)u_{1N}(s,\alpha )G_2(s,y)]𝑑s=0,yD_{12}^{},\alpha S^{n1}.$$ $`17`$ Indeed, $`W(y)`$ solves equation (1) in $`D_{12}^{}`$ and $`W=o(1/r)`$, as follows from (14) and (15). Thus, $`W=0`$ in $`D_{12}^{}`$, see \[3,p.25\]. Let us prove (16) now. Fix any $`yD_{12}^{}`$ and let $`w`$ denote the integral in (16). Then $`w`$ solves (1) in $`D_{12}^{}`$ and $`w=o(1/r)`$, as follows from (15) and (17). Thus, (16) follows and Lemma 1 is proved. We want to derive a contradiction from (16). This contradiction will prove that $`D_1=D_2`$. According to the argument given in the section 2.1, the set $`D_{12}`$ has finite perimeter, Green’s formula is applicable to (16) in the domain $`D_{12}^{}`$, and we get the following equation: $$0=G_1(y,x)G_2(x,y)x,yD_{12}^{},$$ $`18`$ where the radiation condition for $`G_1`$ and $`G_2`$ was used: it allowed us to neglect the integral over the large sphere, which appeared in Green’s formula. We now want to derive a contradiction from (18). Note that $`G_j(x,y)=G_j(y,x)`$ and consider, for instance, the Neumann condition (2). The Robin condition is treated similarly. Differentiate (18) with respect to $`y`$ along the normal $`N_t,t\mathrm{\Gamma }_2^{}`$, and let $`yt`$. This yields: $$0=G_{1N_t}(t,x)xD_{12}^{},t\mathrm{\Gamma }_2^{}.$$ $`19`$ The point $`t`$ belongs to $`D_1^{}`$. Therefore $$|G_{1N_t}(t,x)|\mathrm{}\text{ as }xt.$$ $`20`$ Equation (20) contradicts (19). This contradiction proves that $`D_1=D_2`$. We have proved the following result: ###### Theorem 2 Let the assumption 1) of Theorem 1 hold and assume that $`2^{}`$) $`A_1(\alpha ^{},\alpha ,k_0)=A_2(\alpha ^{},\alpha ,k_0),\alpha ,\alpha ^{}S^{n1}.`$ Then $`D_1=D_2`$ and, in the case of Robin boundary condition, $`\sigma _1=\sigma _2`$. This completes the discussion of the uniqueness theorem for IOSP2 for the case of Neumann and Robin boundary conditions. e-mail: rammmath.ksu.edu
no-problem/0001/hep-ex0001017.html
ar5iv
text
# References The WA102 collaboration has recently published a study of the centrally produced $`4\pi `$ final states . In this paper the production and decay properties of the resonances observed in these channels will be presented. In previous publications the properties of the $`f_1(1285)`$ , $`\eta _2(1645)`$ and $`\eta _2(1870)`$ have already been presented. In this paper the properties of the $`f_0(1370)`$, $`f_0(1500)`$, $`f_0(2000)`$ and $`f_2(1950)`$ will be discussed. In previous analyses it has been observed that when the centrally produced system has been analysed as a function of the parameter $`dP_T`$, which is the difference in the transverse momentum vectors of the two exchange particles , all the undisputed $`q\overline{q}`$ states (i.e. $`\eta `$, $`\eta ^{}`$, $`f_1(1285)`$ etc.) are suppressed at small $`dP_T`$ relative to large $`dP_T`$, whereas the glueball candidates $`f_0(1500)`$, $`f_0(1710)`$ and $`f_2(1950)`$ are prominent . In addition, an interesting effect has been observed in the azimuthal angle $`\varphi `$ which is defined as the angle between the $`p_T`$ vectors of the two outgoing protons. For the resonances studied to date which are compatible with being produced by DPE, the data are consistent with the Pomeron transforming like a non-conserved vector current . In order to determine the $`\varphi `$ dependence for the resonances observed, a spin analysis has been performed on the $`\pi ^+\pi ^{}\pi ^+\pi ^{}`$ and $`\pi ^+\pi ^{}\pi ^0\pi ^0`$ channels in four different $`\varphi `$ intervals each of 45 degrees. As an example, fig. 1 shows the $`J^{PC}`$ = $`0^{++}`$ $`\rho \rho `$ wave from the $`\pi ^+\pi ^{}\pi ^+\pi ^{}`$ channel in the four intervals. The waves have been fitted in each interval with the parameters of the resonances fixed to those obtained from the fits to the total data as described in ref . The distributions found are consistent for the two channels and the fraction of each resonance as a function of $`\varphi `$ from the $`\pi ^+\pi ^{}\pi ^+\pi ^{}`$ channel is plotted in fig. 2. The distributions observed for the $`f_0(1370)`$ and $`f_0(1500)`$ are similar to what was found in the analysis of the $`\pi ^+\pi ^{}`$ final state . In order to calculate the contribution of each resonance as a function of $`dP_T`$, the waves have been fitted in three $`dP_T`$ intervals with the parameters of the resonances fixed to those obtained from the fits to the total data as described in ref . Table 1 gives the percentage of each resonance in three $`dP_T`$ intervals together with the ratio of the number of events for $`dP_T`$ $`<`$ 0.2 GeV to the number of events for $`dP_T`$ $`>`$ 0.5 GeV for each resonance considered. The dependences found for the $`f_0(1370)`$ and $`f_0(1500)`$ are similar to what was found in the analysis of the $`\pi ^+\pi ^{}`$ final state . The fact that the $`f_0(1370)`$ and $`f_0(1500)`$ have different $`\varphi `$ and $`dP_T`$ dependences confirms that these are not simply $`J`$ dependent phenomena. This is also true for the $`J`$ = 2 states, where the $`f_2(1950)`$ has different dependences to the $`f_2(1270)`$ and $`f_2^{}(1520)`$ . In order to determine the four momentum transfer dependence ($`|t|`$) of the resonances observed in the $`\pi ^+\pi ^{}\pi ^+\pi ^{}`$ channel the waves have been fitted in 0.1 GeV<sup>2</sup> bins of $`|t|`$ with the parameters of the resonances fixed to those obtained from the fits to the total data as described in ref . Fig. 
2 shows the four momentum transfer from one of the proton vertices for these resonances. The distributions have been fitted with a single exponential of the form $`exp(-b|t|)`$ and the values of $`b`$ found are given in table 2. The values of $`b`$ for the $`f_0(1370)`$ and $`f_0(1500)`$ are similar to what was found in the analysis of the $`\pi ^+\pi ^-`$ final state . The $`\varphi `$ distribution and the $`dP_T`$ and $`t`$ dependences of the $`f_2(1950)`$ are different to what has been observed for other $`J^{PC}`$ = $`2^{++}`$ resonances, but are similar to what was observed for the $`\varphi \varphi `$ and $`K^{*}(892)\overline{K}^{*}(892)`$ final states, which were both found to have $`J^{PC}`$ = $`2^{++}`$. In order to see if the $`\varphi \varphi `$ and $`K^{*}(892)\overline{K}^{*}(892)`$ final states could be due to the $`f_2(1950)`$, the parameters of the $`f_2(1950)`$ have been used as input to a Breit-Wigner function which has been modified to take into account the different thresholds. Superimposed on the $`\varphi \varphi `$ mass spectrum in fig. 3a) is the distribution that could be due to the $`f_2(1950)`$. As can be seen, although the $`f_2(1950)`$ can describe most of the spectrum, there is an excess of events in the 2.3 GeV mass region. Including a Breit-Wigner to describe the $`f_2(2340)`$, which has previously been observed decaying to $`\varphi \varphi `$ , with M = 2330 $`\pm `$ 15 MeV and $`\mathrm{\Gamma }`$ = 130 $`\pm `$ 20 MeV gives the distribution in fig. 3b). Assuming that the $`f_2(1950)`$ has a $`\varphi \varphi `$ decay mode, and correcting for the unseen decay modes, the branching ratio of the $`f_2(1950)`$ to $`f_2(1270)\pi \pi /\varphi \varphi `$ was found to be 72 $`\pm `$ 9. Superimposed on the $`K^0\overline{K}^0`$ mass spectrum in fig. 3c) is the distribution that could be due to the $`f_2(1950)`$. As can be seen, the $`f_2(1950)`$ can describe all of the $`K^0\overline{K}^0`$ mass spectrum. Assuming that the $`f_2(1950)`$ has a $`K^0\overline{K}^0`$ decay mode, and correcting for the unseen decay modes, the branching ratio of the $`f_2(1950)`$ to $`f_2(1270)\pi \pi /K^0\overline{K}^0`$ was found to be 33 $`\pm `$ 4. In addition, the branching ratio of the $`f_2(1950)`$ to $`\varphi \varphi /K^0\overline{K}^0`$ above the $`\varphi \varphi `$ threshold is 0.8 $`\pm `$ 0.14. We have previously published a paper describing the decays of the $`f_0(1370)`$ and $`f_0(1500)`$ to $`\pi \pi `$ and $`K\overline{K}`$ . In ref. a fit has been performed to the $`\rho \rho `$ and $`\sigma \sigma `$ final states and the contributions of the $`f_0(1370)`$ and $`f_0(1500)`$ have been determined. After correcting for the unseen decay modes and the $`\sigma \sigma `$ decay mode, the branching ratio of the $`f_0(1500)`$ to $`4\pi /\pi \pi `$ is found to be 1.37 $`\pm `$ 0.16 . In the initial Crystal Barrel publication this value was 3.4 $`\pm `$ 0.8 . In the latest preliminary analysis of the Crystal Barrel data the value is 1.54 $`\pm `$ 0.6. Hence, although the experiments disagree about the relative amount of $`\rho \rho `$ and $`\sigma \sigma `$ in the $`4\pi `$ decay mode , the overall measured branching ratio is consistent. After correcting for the unseen decay modes and taking into account the above uncertainties, the branching ratio of the $`f_0(1370)`$ to $`4\pi /\pi \pi `$ is found to be $`34_{-9}^{+22}`$.
The large error is due to the fact that there is considerable uncertainty in the amount of $`f_0(1370)`$ in the $`\pi \pi `$ final state, owing to the possible contribution from the high mass side of the $`f_0(1000)`$. In the latest preliminary analysis of the Crystal Barrel data the value is 12.2 $`\pm `$ 5.4. A coupled channel fit of the $`\pi \pi `$, $`K\overline{K}`$, $`4\pi `$, $`\eta \eta `$ and $`\eta \eta ^{}`$ final states is in progress and will be reported in a future publication. In summary, the $`dP_T`$, $`\varphi `$ and $`|t|`$ distributions for the $`f_0(1370)`$, $`f_0(1500)`$, $`f_0(2000)`$ and $`f_2(1950)`$ have been presented. For the $`J`$ = 0 states the $`f_0(1370)`$ and $`f_0(2000)`$ have similar $`dP_T`$ and $`\varphi `$ dependences. These are different to the $`dP_T`$ and $`\varphi `$ dependences of the $`f_0(980)`$, $`f_0(1500)`$ and $`f_0(1710)`$. For the $`J`$ = 2 states the $`f_2(1950)`$ has different dependences to the $`f_2(1270)`$ and $`f_2^{\prime }(1520)`$. This shows that the $`dP_T`$ and $`\varphi `$ dependences are not simply $`J`$-dependent phenomena.

Acknowledgements

This work is supported, in part, by grants from the British Particle Physics and Astronomy Research Council, the British Royal Society, the Ministry of Education, Science, Sports and Culture of Japan (grants no. 07044098 and 1004100), the French Programme International de Cooperation Scientifique (grant no. 576) and the Russian Foundation for Basic Research (grants 96-15-96633 and 98-02-22032).
no-problem/0001/hep-ex0001030.html
ar5iv
text
# Problems and stoppers for 𝛾⁢𝛾,𝛾⁢𝜇,𝜇⁢𝑝 colliders using very high energy muons. Invited talk at the Workshop Studies on Colliders and Collider Physics at the Highest Energies: Muon Colliders at 10 TeV to 100 TeV, 27 September - 1 October, 1999 Montauk, New York, USA, be published by the American Institute of Physics. ## 1 Introduction Firstly, I would like to explain the origin of this talk. Two weeks ago the chairman of our workshop Bruce King have sent me e:mail with the request to give a plenary talk on “prospects for very high energy $`\gamma \gamma `$ or $`\gamma \mu `$ colliders driven by the muon beams”, he added that “even if it is impractical it would still be nice if you could give a brief explanation”. I have agreed to give such a talk but only without the word “prospects” in the title because I do not see any prospects here, only stoppers. Nevertheless, this physics is very interesting, and it is pleasure to me to tell briefly about high energy photon colliders based on e<sup>+</sup>e<sup>-</sup> (ee) linear colliders and explain why such photon colliders are completely impractical with muons. The third combination of colliding particles, $`\mu `$p, is also discussed here very briefly. ## 2 Photon Colliders based on linear ee colliders As you know, to explore the energy region beyond LEP-II, linear e<sup>+</sup>e<sup>-</sup> colliders (LC) in the range from a few hundred GeV to about 1.5 TeV and higher are under intense study around the world . Beside e<sup>+</sup>e<sup>-</sup> collisions, linear colliders provide a unique possibility for obtaining $`\gamma \gamma `$, $`\gamma `$e colliding beams with energies and luminosities comparable to those in e<sup>+</sup>e<sup>-</sup> collisions . High energy photons for these collisions can be obtained using Compton scattering of laser light on high energy electrons. This idea is based on the following facts: * Unlike the situation in storage rings, in linear colliders each beam is used only once. * Using an optical laser with reasonable parameters (flash energy of 1 to 5 J) one can “convert” almost all electrons to high energy photons; * The energy of scattered photons is close into the energy of initial electrons. Each one of these items is vital for obtaining $`\gamma \gamma `$, $`\gamma `$e collisions at energies and luminosities comparable to those in parental electron-electron collisions. <sup>1</sup><sup>1</sup>1Here we do not discuss “photon colliders” based on collisions of virtual photons. This possibility always exists, however the luminositis and energies are considerably smaller than those in parental ee collisions, see sect.3.6 The physics at high energy $`\gamma \gamma `$,$`\gamma `$e colliders is very rich and no less interesting than with pp or e<sup>+</sup>e<sup>-</sup> collisions. This option has been included in the pre-conceptual design reports of LC projects , and work on full conceptual designs is under way. Reports on the present status of photon colliders can be found elsewhere . Well, can we make similar photon colliders on the basis of muon colliders? What is the difference? ## 3 $`\gamma \gamma `$,$`\gamma \mu `$ colliders based on high energy $`\mu \mu `$ colliders ### 3.1 Multi-pass collisions At muon colliders two bunches are collided about 1000 times, which is one of the advantages over linear e<sup>+</sup>e<sup>-</sup> colliders where beams are collided only once. 
However, if one tries to convert muons into high energy photons (by whatever means), the resulting $`\gamma \gamma `$ luminosity will be smaller than that in $`\mu \mu `$ collisions at least by a factor of 1000. This argument alone is sufficient to give up the idea of $`\gamma \gamma `$ colliders based on high energy muon colliders. However, at this workshop F. Zimmermann proposed the idea of one-pass muon colliders. So, I will continue the enumeration of stoppers.

### 3.2 Laser wave length

The required wave length follows from the kinematics of Compton scattering. In the conversion region a laser photon with the energy $`\omega _0`$ scatters at a small collision angle (head-on) on a high energy electron (muon) with the energy $`E_0`$. The maximum energy of the scattered photons (in the direction of the initial particle) is given by

$$\omega _m=\frac{x}{x+1}E_0;x=\frac{4E_0\omega _0}{m^2c^4},$$ (1)

where $`m`$ is the mass of the charged particle. In order to obtain photons with an energy comparable to that of the initial particles, say 80%, one needs $`x\sim 4`$, or a laser photon energy

$$\omega _0\sim m^2c^4/E_0.$$ (2)

The corresponding laser wave length is then

$$\lambda \approx 5E_0[\text{TeV}]\mu \text{m}\text{ for electron beams;}$$ (3)

$$\lambda \approx 0.12E_0[\text{TeV}]\text{ nm}\text{ for muon beams}.$$ (4)

So, one can use optical lasers to make high energy photons by means of backward Compton scattering on electron beams, while at muon colliders one would have to use X-ray lasers!

### 3.3 Flash energy

The probability of Compton scattering for a beam particle in the laser target is $`p\sim n\sigma _Cl`$, where $`n,l,\sigma _C`$ are the density of the laser target, its length and the Compton cross section, respectively. The density is $`n\sim (A/lS)/\omega _0`$, where $`A`$ is the laser flash energy and $`S`$ is the cross section of the laser beam, which should be larger than that of the muon beam. <sup>2</sup><sup>2</sup>2In the case of the electron LC, where optical photons are used, the laser spot size is determined by diffraction: $`a_\gamma \sim \sqrt{\lambda l/4\pi }`$, which is several $`\mu `$m for LC electron beams. At muon colliders, the required wave length is much shorter and diffraction can be neglected. The Compton cross section for a muon at $`x=4`$ is about

$$\sigma _C(x=4)\approx \pi r_e^2\left(\frac{m_e}{m_\mu }\right)^2,$$ (5)

where $`r_e=e^2/m_ec^2`$ is the classical radius of the electron. From the above relations we get the required laser flash energy (for $`p\sim 1`$)

$$A\sim (S/\sigma _C)\omega _0=\frac{S}{\pi r_e^2E_0}\left(\frac{m_\mu }{m_e}\right)^4m_e^2c^4\approx 1.5\times 10^{-3}\frac{S[\mu \text{m}^2]}{E_0[\text{TeV}]}\left(\frac{m_\mu }{m_e}\right)^4\text{Joule}.$$ (6)

At a muon collider with $`E_0=50`$ TeV and $`S=1`$ $`\mu \text{m}^2`$ one needs an X-ray laser with a flash energy of $`\sim 10^5`$ J and a wave length of 6 nm (see eq. 2). This is certainly impossible. Beside this “technical” problem, there are even more fundamental stoppers for photon colliders based on muon beams, see below.

### 3.4 e<sup>+</sup>e<sup>-</sup> pair creation in the conversion region

Beside the Compton scattering in the conversion region, at muon colliders there is another, competing process: e<sup>+</sup>e<sup>-</sup> pair creation in the collision of laser photons with the high energy muons, $`\gamma \mu \to \mu e^+e^-`$.
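The numbers quoted above, and the pair-creation versus Compton competition evaluated next, follow from simple arithmetic; the sketch below is our own back-of-the-envelope check of Eqs. (1)-(6) (and of the ratio worked out in Eq. (7) below), using standard approximate constants.

```python
import math

HBARC_EV_NM = 1239.8                    # lambda [nm] = 1239.8 / (photon energy [eV])
M_E_EV, M_MU_EV = 0.511e6, 105.66e6     # electron and muon rest energies [eV]
M_E_J   = 8.187e-14                     # electron rest energy [J]
R_E_CM  = 2.818e-13                     # classical electron radius [cm]
ALPHA   = 1.0 / 137.036
M_RATIO = M_MU_EV / M_E_EV              # m_mu / m_e ~ 207

def laser_wavelength_nm(E0_TeV, m_eV, x=4.0):
    """Eqs. (1)-(2): wavelength of the laser photon needed to reach a given x."""
    omega0_eV = x * m_eV**2 / (4.0 * E0_TeV * 1e12)
    return HBARC_EV_NM / omega0_eV

def flash_energy_joule(E0_TeV, spot_um2=1.0):
    """Eq. (6): flash energy for one Compton scattering length, spot area S in micron^2."""
    S_cm2, E0_J = spot_um2 * 1e-8, E0_TeV * 1.602e-7
    return S_cm2 * M_E_J**2 * M_RATIO**4 / (math.pi * R_E_CM**2 * E0_J)

def pair_to_compton_ratio(x=4.0):
    """Eq. (7): sigma(gamma mu -> mu e+e-) / sigma(gamma mu -> gamma mu)
    ~ (28 alpha / 9 pi) * (m_mu/m_e)^2 * ln(x * m_mu/m_e)."""
    return 28.0 * ALPHA / (9.0 * math.pi) * M_RATIO**2 * math.log(x * M_RATIO)

print(laser_wavelength_nm(1.0, M_E_EV) / 1e3, "um")   # ~5 um for a 1 TeV electron beam
print(laser_wavelength_nm(50.0, M_MU_EV), "nm")       # ~6 nm for 50 TeV muons
print(flash_energy_joule(50.0), "J")                  # ~1e5 J for E_0 = 50 TeV, S = 1 um^2
print(pair_to_compton_ratio())                        # ~2000 at x = 4
```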
The ratio of the cross sections is $$\frac{\sigma _{\gamma \mu \to \mu e^+e^-}}{\sigma _{\gamma \mu \to \gamma \mu }}\approx \frac{\frac{28\alpha r_e^2}{9}\mathrm{ln}\frac{4E_0\omega _0}{m_em_\mu c^4}}{\pi r_e^2(m_e/m_\mu )^2}\approx 7\times 10^{-3}\left(\frac{m_\mu }{m_e}\right)^2\mathrm{ln}\left(\frac{m_\mu }{m_e}x\right)\approx 2000\quad \text{at }x=4.$$ (7) So, high energy photons are produced with a very small probability, less than 1/1000! In all other cases muons lose their energy via the creation of e<sup>+</sup>e<sup>-</sup> pairs. This effect alone suppresses the attainable $`\gamma \gamma `$ luminosity at muon colliders by a factor of more than $`10^6`$! ### 3.5 Coherent pair creation OK, the yield of high energy photons from the conversion region is very small, but this is not the whole story. What happens to the “happy” photons in the interaction region? They will be “killed” by the process of coherent e<sup>+</sup>e<sup>-</sup> pair creation in the field of the opposing muon beam. This process restricts the luminosity of photon colliders based on electron linear colliders .<sup>3</sup><sup>3</sup>3For an LC with an energy below about 1 TeV this effect is still not very important and one can obtain, in principle, $`L_{\gamma \gamma }>L_{e^+e^-}`$. The effective threshold of this process is $`\mathrm{\Upsilon }=\frac{\omega }{m_ec^2}\frac{B}{B_0}\gtrsim 1`$, where $`\omega `$ is the photon energy, $`B`$ is the beam field, and $`B_0=\alpha e/r_e^2\approx 4.4\times 10^{13}`$ Gauss. For the “evolutionary” $`2E=100`$ TeV muon collider (see B. King’s tables) with $`N=0.8\times 10^{12}`$, $`\sigma _z=2.5`$ mm, $`\sigma _{x,y}=0.2`$ $`\mu `$m and a photon energy of 40 TeV we have $`\mathrm{\Upsilon }\approx 180`$. Using formulae given in ref., one can find the probability of e<sup>+</sup>e<sup>-</sup> pair creation during the bunch collision: it is very high, about 200. This means that only about 1% of the high energy photons will survive in beam collisions and contribute to the $`\gamma \gamma `$ luminosity. ### 3.6 Summary on $`\gamma \gamma `$, $`\gamma \mu `$ colliders based on high energy muons 1) The laser required for the conversion of 50 TeV muons into high energy photons should have a flash energy $`A\sim 10^5`$ J and a wavelength $`\lambda \sim 5`$ nm. This is impossible. 2) The achievable $`\gamma \gamma `$ luminosity is $$L_{\gamma \gamma }/L_{\mu \mu }\approx \frac{1}{1000}\times \left(\frac{1}{2000}\right)^2\times \left(\frac{1}{100}\right)^2=2.5\times 10^{-14}!$$ (8) Here the first factor is due to the one-pass nature of photon colliders, the second one is due to the dominance of e<sup>+</sup>e<sup>-</sup> creation in the conversion region (instead of Compton scattering), and the third one is due to coherent pair creation at the interaction region. All is clear. One can forget about $`\gamma \gamma `$ (and $`\gamma \mu `$ too) colliders based on high energy muon beams. However, $`\gamma \gamma `$,$`\gamma \mu `$ interactions can be studied at muon colliders in collisions of virtual photons (without $`\mu \gamma `$ conversion).
The luminosities in such collisions are $$L_{\gamma ^{*}\gamma ^{*}}\approx 10^{-2}L_{\mu \mu }\quad \text{at }W_{\gamma \gamma }>0.1\times 2E_0$$ (9) $$L_{\gamma ^{*}\gamma ^{*}}\approx 10^{-4}L_{\mu \mu }\quad \text{at }W_{\gamma \gamma }>0.5\times 2E_0.$$ (10) $$L_{\gamma ^{*}\mu }\approx 0.15L_{\mu \mu }\quad \text{at }W_{\gamma \mu }>0.1\times 2E_0$$ (11) $$L_{\gamma ^{*}\mu }\approx 0.05L_{\mu \mu }\quad \text{at }W_{\gamma \mu }>0.5\times 2E_0.$$ (12) ## 4 $`\gamma \mu `$ collisions at LC–muon colliders One can also consider $`\gamma \mu `$ colliders where high energy photons are produced at an LC (on electrons) and then collided with high energy muon beams. This option also makes no sense, for several reasons: a) $`N_e\sim 10^{-2}N_\mu `$; b) loss of photons at the IP due to coherent e<sup>+</sup>e<sup>-</sup> pair creation; c) none of the LCs have the pulse structure of muon colliders (almost uniform in time), which results in a further factor of 100 loss in luminosity. All factors combined give $`L_{\gamma \mu }<10^{-5}L_{\mu \mu }`$. Such a $`\gamma \mu `$ collider makes no sense; besides, $`\gamma \mu `$ collisions can be studied for free, with much larger luminosities, in $`\gamma ^{*}\mu `$ collisions (see the end of the previous section). ## 5 $`\mu `$p colliders Let us first consider collisions of the LHC proton beams with the muon beams of a 100 TeV muon collider. Without special measures, the luminosity in such collisions is lower than that in pp collisions at the LHC due to the larger distance between bunches at muon colliders (smaller collision rate): $$L_{\mu p}\approx L_{pp}\times (\nu _\mu /\nu _p)\approx 10^{-3}L_{pp}\approx 10^{31}\text{cm}^{-2}\text{s}^{-1}.$$ (13) This is too small for the study of any good physics. However, one can think about a special proton source with several stages of electron cooling. If the parameters of the proton beam are the same as those of the muon beam, then the luminosity of the 100 TeV $`\mu p`$ collider is $`L_{\mu p}=L_{\mu \mu }\approx 10^{36}\text{cm}^{-2}\text{s}^{-1}`$. That is not easy to achieve, but such a possibility is not excluded. One of the problems at such colliders is the hadronic background. At $`L_{\mu p}=10^{36}`$ cm<sup>-2</sup>s<sup>-1</sup> and $`\nu =10^4`$ Hz the number of background $`\gamma p`$ reactions is about 5000/crossing. One can decrease the background by increasing the collision rate (up to a factor of 5–10). It is not excluded that even with such backgrounds one can extract interesting physics. This option certainly makes sense if a very high energy $`\mu \mu `$ collider is to be built. Its feasibility and potential problems should be studied in more detail.
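As a quick numerical cross-check, the short script below reproduces the main stoppers collected in Sect. 3 (the required laser wavelength and flash energy, the dominance of pair creation over Compton scattering in the conversion region, and the overall luminosity suppression of eq. (8)) directly from the formulae quoted above. It is an order-of-magnitude sketch only, using the representative beam parameters of the text.

```python
import math

# rest energies in TeV
me_c2  = 0.511e-6
mmu_c2 = 105.66e-6
ratio  = mmu_c2 / me_c2            # m_mu / m_e ~ 207

E0 = 50.0                          # TeV, one beam of the 2E = 100 TeV collider
x  = 4.0                           # Compton parameter giving omega_m ~ 0.8 E0

# eqs. (1)-(4): laser photon energy and wavelength needed for x = 4
omega0_eV = x * mmu_c2**2 / (4.0 * E0) * 1e12
lam_nm    = 1239.84 / omega0_eV    # lambda[nm] = 1239.84 / omega0[eV]
print(f"laser photon energy ~ {omega0_eV:.0f} eV, wavelength ~ {lam_nm:.1f} nm")

# eq. (6): flash energy for a 1 micron^2 laser spot
A = 1.5e-3 * 1.0 / E0 * ratio**4
print(f"required flash energy ~ {A:.1e} J")

# eq. (7): pair creation vs Compton scattering in the conversion region
r_pair = 7e-3 * ratio**2 * math.log(ratio * x)
print(f"sigma(gamma mu -> mu e+e-) / sigma(Compton) ~ {r_pair:.0f}")

# eq. (8): overall gamma-gamma luminosity suppression
print(f"L_gg / L_mumu ~ {(1/1000) * (1/r_pair)**2 * (1/100)**2:.1e}")
```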
# Nonadiabatic Landau Zener tunneling in Fe8 molecular nanomagnets ## Abstract The Landau Zener method allows to measure very small tunnel splittings $`\mathrm{\Delta }`$ in molecular clusters Fe<sub>8</sub>. The observed oscillations of $`\mathrm{\Delta }`$ as a function of the magnetic field applied along the hard anisotropy axis are explained in terms of topological quantum interference of two tunnel paths of opposite windings. Studies of the temperature dependence of the Landau Zener transition rate $`P`$ gives access to the topological quantum interference between excited spin levels. The influence of nuclear spins is demonstrated by comparing $`P`$ of the standard Fe<sub>8</sub> sample with two isotopically substituted samples. The need of a generalised Landau Zener transition rate theory is shown. During the last few decades, a large effort has been spent to understand the detailed dynamics of quantum systems that are exposed to time-dependent external fields and dissipative effects . It has been shown that molecular magnets offer an unique opportunity to explore the quantum dynamics of a large but finite spin. These molecules are the final point in the series of smaller and smaller units from bulk magnets to single magnetic moments. They are regularly assembled in large crystals where often all molecules have the same orientation. Hence, macroscopic measurements can give direct access to single molecule properties. The most prominent examples are a dodecanuclear mixed-valence manganese-oxo cluster with acetate ligands, Mn<sub>12</sub> , and an octanuclear iron(III) oxo- hydroxo cluster of formula \[Fe<sub>8</sub>O<sub>2</sub>(OH)<sub>12</sub>(tacn)<sub>6</sub>\]<sup>8+</sup>, Fe<sub>8</sub> , where tacn is a macrocyclic ligand. Both systems have a spin ground state of $`S=10`$, and an Ising-type magneto-crystalline anisotropy, which stabilises the spin states with the quantum numbers $`M=\pm 10`$ and generates an energy barrier for the reversal of the magnetisation of about 67 K for Mn<sub>12</sub> and 25 K for Fe<sub>8</sub> . Fe<sub>8</sub> is particular interesting for studies of quantum tunnelling because it shows a pure quantum regime, i.e. below 360 mK the relaxation is purely due to quantum tunnelling, and not to thermal activation . We showed recently that the Landau Zener method can be used to measure the very small tunnel splittings $`\mathrm{\Delta }`$ in Fe<sub>8</sub> . The observed oscillations of $`\mathrm{\Delta }`$ as a function of the magnetic field applied along the hard anisotropy axis are explained in terms of topological quantum interference of two tunnel paths of opposite windings which was predicted by Garg . This observation was the first direct evidence of the topological part of the quantum spin phase (Berry or Haldane phase ) in a magnetic system. Recently, we demonstrate the influence of nuclear spins, proposed by Prokof’ev and Stamp , by comparing relaxation and hole digging measurements of two isotopically substituted samples: (i) the hyperfine coupling was increased by the substitution of <sup>56</sup>Fe with <sup>57</sup>Fe, and (ii) decreased by the substitution of <sup>1</sup>H with <sup>2</sup>H. These measurements were supported quantitatively by numerical simulations taking into account the altered hyperfine coupling . In this letter, we present studies of the temperature dependence of the Landau Zener transition rate $`P`$ yielding a deeper insight into the spin dynamics of the Fe<sub>8</sub> cluster. 
By comparing the three isotopic samples we confirm the influence of nuclear spins on the tunneling mechanism and in particular on the lifetime of the first excited states. Our measurements show the need of a generalised Landau Zener transition rate theory taking into account environmental effects such as hyperfine and spin–phonon coupling . All measurements of this article were performed using a new technique of micro-SQUIDs where the sample is directly coupled with an array of micro-SQUIDs . The high sensitivity of this magnetometer allows us to study single Fe<sub>8</sub> crystals of the order of 10 to 500 $`\mu `$m. The crystals of the standard Fe8 cluster, <sup>st</sup>Fe<sub>8</sub> or Fe<sub>8</sub>, \[Fe<sub>8</sub>(tacn)<sub>6</sub>O<sub>2</sub>(OH)<sub>12</sub>\]Br<sub>8</sub>.9H<sub>2</sub>O where tacn = 1,4,7- triazacyclononane, were prepared as reported by Wieghardt et al. . For the synthesis of the <sup>57</sup>Fe-enriched sample, <sup>57</sup>Fe<sub>8</sub>, a 13 mg foil of 95$`\%`$ enriched <sup>57</sup>Fe was dissolved in a few drops of HCl/HNO<sub>3</sub> (3 : 1) and the resulting solution was used as the iron source in the standard procedure. The <sup>2</sup>H-enriched Fe<sub>8</sub> sample, <sup>D</sup>Fe<sub>8</sub>, was crystallised from pyridine-d<sub>5</sub> and D<sub>2</sub>O (99$`\%`$) under an inert atmosphere at 5C by using a non-deuterated Fe(tacn)Cl<sub>3</sub> precursor. The amount of isotope exchange was not quantitatively evaluated, but it can be reasonably assumed that the H atoms of H<sub>2</sub>O and of the bridging OH groups, as well as a part of those of the NH groups of the tacn ligands are replaced by deuterium while the aliphatic hydrogens are essentially not affected. The crystalline materials were carefully checked by elemental analysis and single-crystal X-ray diffraction. The simplest model describing the spin system of Fe<sub>8</sub> molecular clusters has the following Hamiltonian : $$H=DS_z^2+E\left(S_x^2S_y^2\right)+H_2g\mu _B\mu _0\stackrel{}{S}\stackrel{}{H}$$ (1) $`S_x`$, $`S_y`$, and $`S_z`$ are the three components of the spin operator, $`D`$ and $`E`$ are the anisotropy constants, $`H_2`$ takes into account weak higher order terms , and the last term of the Hamiltonian describes the Zeeman energy associated with an applied field $`\stackrel{}{H}`$. This Hamiltonian defines a hard, medium, and easy axes of magnetisation in $`x`$, $`y`$ and $`z`$ direction, respectively. It has an energy level spectrum with $`(2S+1)=21`$ values which, in first approximation, can be labelled by the quantum numbers $`M=10,9,\mathrm{}10`$. The energy spectrum, can be obtained by using standard diagonalisation techniques of the $`[21\times 21]`$ matrix describing the spin Hamiltonian $`S=10`$. At $`\stackrel{}{H}=0`$, the levels $`M=\pm 10`$ have the lowest energy. When a field $`H_z`$ is applied, the energy levels with $`M<<0`$ increase, while those with $`M>>0`$ decrease. Therefore, different energy values can cross at certain fields. This crossing can be avoided by transverse terms containing $`S_x`$ or $`S_y`$ spin operators which split the levels. The spin $`S`$ is in resonance between two states $`M`$ and $`M^{}`$ when the local longitudinal field is close to such an avoided energy level crossing ($`|H_z|<10^8`$ T for the avoided level crossing around $`H_z`$ = 0). The energy gap, the so-called tunnel spitting $`\mathrm{\Delta }_{M,M^{}}`$, can be tuned by an applied field in the $`xy`$plane via the $`S_xH_x`$ and $`S_yH_y`$ Zeeman terms. 
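A minimal numerical sketch of this procedure may be useful: the script below builds the 21×21 matrix of the spin Hamiltonian for S = 10 (with the conventional signs, i.e. an easy-axis term −DS_z² with D > 0) and returns the splitting of the lowest M = ±10 doublet as a function of the transverse field. The anisotropy constants are illustrative values only (they are not quoted in this letter), and the higher-order terms H_2, which fix the exact oscillation period, are omitted.

```python
import numpy as np

S = 10
m = np.arange(S, -S - 1, -1, dtype=float)      # Sz eigenvalues 10, 9, ..., -10
Sz = np.diag(m)
Sp = np.zeros((2*S + 1, 2*S + 1))
for i in range(2*S):                           # <m+1|S+|m> = sqrt(S(S+1) - m(m+1))
    Sp[i, i + 1] = np.sqrt(S*(S + 1) - m[i + 1]*(m[i + 1] + 1))
Sx = 0.5*(Sp + Sp.T)
Sy = -0.5j*(Sp - Sp.T)

D, E = 0.29, 0.05        # K; illustrative anisotropy constants, not fitted values
g_muB = 2.0 * 0.672      # g * mu_B in K/T

def tunnel_splitting(Hx, Hz=0.0):
    H = -D*Sz@Sz + E*(Sx@Sx - Sy@Sy) - g_muB*(Hx*Sx + Hz*Sz)
    w = np.linalg.eigvalsh(H)
    return w[1] - w[0]                         # splitting of the lowest doublet, in K

for Hx in np.linspace(0.0, 1.2, 13):
    print(f"Hx = {Hx:4.2f} T   Delta(10,-10) ~ {tunnel_splitting(Hx):.2e} K")
```

With the measured constants and the H_2 terms included, the same routine should reproduce the oscillating splitting displayed in Fig. 1.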
It turns out that a field in $`H_x`$ direction (hard anisotropy direction) can periodically change the tunnel spitting $`\mathrm{\Delta }`$ as displayed in Fig. 1 where $`H_2`$ in Eq. 1 was taken from . In a semi-classical description, these oscillations are due to constructive or destructive interference of quantum spin phases of two tunnel paths . A direct way of measuring the tunnel splittings $`\mathrm{\Delta }_{M,M^{}}`$ is by using the Landau- Zener model which gives the tunnelling probability $`P_{M,M^{}}`$ when sweeping the longitudinal field $`H_z`$ at a constant rate over an avoided energy level crossing : $$P_{M,M^{}}=1e^{\frac{\pi \mathrm{\Delta }_{M,M^{}}^2}{2\mathrm{}g\mu _B|MM^{}|\mu _0dH_z/dt}}$$ (2) Here, $`M`$ and $`M^{}`$ are the quantum numbers of the avoided energy level crossing, $`dH_z/dt`$ is the constant field sweeping rate, $`g2`$, $`\mu _B`$ the Bohr magneton, and $`\mathrm{}`$ is Planck’s constant. In order to apply the Landau-Zener formula (Eq. 2), we first cooled the sample from 5 K down to 0.04 K in a field of $`H_z`$ = -1.4 T yielding a negative saturated magnetisation state. Then, we swept the applied field at a constant rate over the zero field resonance transition and measured the fraction of molecules which reversed their spin. This procedure yields the tunnelling rate $`P_{10,10}`$ and thus the tunnel splitting $`\mathrm{\Delta }_{10,10}`$ (Eq. 2). The predicted Landau-Zener sweeping field dependence of $`P_{10,10}`$ can be checked by plotting $`\mathrm{\Delta }_{10,10}`$ as a function of the field sweeping rate which should show a constant which was indeed the case for sweeping rates between 1 and 0.001 T/s (fig. 2). The deviations at lower sweeping rates are mainly due to the hole-digging mechanism which slows down the relaxation. The comparison with the isotopically substituted Fe<sub>8</sub> samples shows a clear dependence of $`\mathrm{\Delta }_{10,10}`$ on the hyperfine coupling (Fig. 2). Such an effect has been predicted for a constant applied field by Tupitsyn et al. . All measurement so far were done in the pure quantum regime ($`T<0.36`$ K) where transition via excited spin levels can be neglected. We discuss now the temperature region of small thermal activation ($`T<0.7`$ K) where we should consider transition via excited spin levels . We make the Ansatz that only ground state tunnelling ($`M=\pm 10`$) and transitions via the first excited spin levels ($`M=\pm 9`$) are relevant for temperatures slightly above 0.36 K. We will see that this Ansatz describes well our experimental data but, nevertheless, it would be important to work out a complete theory . In order to measure the temperature dependence of the transition rate, we used the Landau–Zener method as described above with a phenomenological modification of the transition rate $`P`$ (for a negative saturated magnetisation): $$P=n_{10}P_{10,10}+P_{th}$$ (3) where $`P_{10,10}`$ is given by Eq. 2 , $`n_{10}`$ is the Boltzmann population of the $`M=10`$ spin level, and $`P_{th}`$ is the overall transition rate via excited spin levels. $`n_{10}1`$ for the considered temperature $`T<0.7`$ K and a negative saturated magnetisation of the sample. Fig. 3 displays the measured transition rate $`P`$ for <sup>st</sup>Fe<sub>8</sub> as a function of a transverse field $`H_x`$ and for several temperatures. The oscillation of $`P`$ are seen for all temperatures but the periods of oscillations decreases for increasing temperature (Fig. 4). This behaviour can be explained by the giant spin model (Eq. 
1) with higher order transverse terms ($`H_2`$). Indeed, the tunnel splittings of excited spin levels oscillate as a function of $`H_x`$ with decreasing periods (Fig. 1). Fig. 5 displays the transition rate via excited spin levels $`P_{th}=Pn_{10}P_{10,10}`$. Surprisingly, the periods of $`P_{th}`$ are temperature independent in the region $`T<0.7`$ K. This suggests that only transitions via excited levels $`M=\pm 9`$ are important in this temperature regime. This statement is confirmed by the following estimation , see also Ref. . Using Eq. 2, typical field sweeping rates of 0.1 T/s, and tunnel splittings from Fig. 1, one easily finds that the Landau Zener transition probability of excited levels are $`P_{M,M}1`$ for $`M<10`$ and $`\stackrel{}{H}0`$. This means that the relaxation rates via excited levels are mainly governed by the lifetime of the excited levels and the time $`\tau _{res,M}`$ during which these levels are in resonance. The later can be estimated by $$\tau _{res,M}=\frac{\mathrm{\Delta }_{M,M}}{g\mu _BM\mu _0dH_z/dt}.$$ (4) The probability for a spin to pass into the excited level $`M`$ can be estimated by $`\tau _M^1e^{E_{10,M}/k_BT}`$, where $`E_{10,M}`$ is the energy gap between the levels $`10`$ and $`M`$, and $`\tau _M`$ is the lifetime of the excited level $`M`$. We yield $$P_{th}\underset{M=9,8}{}\frac{\tau _{res,M}}{\tau _M}e^{E_{10,M}/k_BT}\underset{M=9,8}{}\frac{\mathrm{\Delta }_{M,M}}{\tau _Mg\mu _BM\mu _0dH_z/dt}e^{E_{10,M}/k_BT}.$$ (5) Note that this estimation neglects higher excited levels with $`|M|<8`$ . Fig. 6 displays the measured $`P_{th}`$ for the three isotopic Fe<sub>8</sub> samples. For 0.4 K $`<T<`$ 1 K we fitted Eq. 5 to the data leaving only the level lifetimes $`\tau _9`$ and $`\tau _8`$ as adjustable parameters. All other parameters are calculated using Eq. 1 . We obtain $`\tau _9=1.0,0.5,`$ and $`0.3\times 10^6`$s, and $`\tau _8=0.7,0.5,`$ and $`0.4\times 10^7`$s for <sup>D</sup>Fe<sub>8</sub>, <sup>st</sup>Fe<sub>8</sub>, and <sup>57</sup>Fe<sub>8</sub>, respectively. This result justifies our Ansatz of considering only the first excited level for 0.4 K $`<T<`$ 0.7 K. Indeed, the second term of the summation in Eq. 5 is negligible in this temperature interval. It is interesting to note that this finding is in contrast to hysteresis loop measurements on Mn<sub>12</sub> which suggested an abrupt transition between thermal assisted and pure quantum tunnelling . Furthermore, our result shows clearly the influence of nuclear spins which seem to decrease the level lifetimes $`\tau _M`$, i.e. to increase dissipative effects. The nuclear magnetic moment and not the mass of the nuclei seems to have the major effect on the dynamics of the magnetization. In fact the mass is increased in both isotopically modified samples whereas the effect on the the relaxation rate is opposite. On the other hand ac susceptibility measurements at $`T>`$ 1.5 K showed no clear difference between the three samples suggesting that above this temperature, where the relaxation is predominately due to spin-phonon coupling , the role of the nuclear spins is less important. Although the increased mass of the isotopes changes the spin–phonon coupling, this effect seems to be small. We can also exclude that the change of mass for the three isotopic samples has induced a significant change in the magnetic anisotropy of the clusters. 
In fact, the measurements at $`T<`$ 0.35 K, where the spin–phonon coupling is negligible, have shown that (i) the relative positions of the resonances as a function of the longitudinal field $`H_z`$ are unchanged , and (ii) all three samples have the same period of oscillation of $`\mathrm{\Delta }`$ as a function of the transverse field $`H_x`$ , a period which is very sensitive to any change of the anisotropy constants. In conclusion, we have presented detailed measurements based on the Landau Zener method which demonstrate again that molecular magnets offer a unique opportunity to explore the quantum dynamics of a large but finite spin. We believe that a more sophisticated theory is needed which describes the dephasing effects of the environment. D. Rovai and C. Sangregorio are acknowledged for help with the sample preparation. We are indebted to J. Villain for many fruitful discussions. This work has been supported by DRET and Rhône-Alpes.
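As a practical footnote, the conversion between a measured Landau-Zener transition probability and the tunnel splitting of Eq. (2) is a one-line formula; the sketch below evaluates it, and its inverse, for the zero-field resonance (|M - M'| = 20) over the range of sweeping rates used in this work. The splitting value entered is an illustrative order of magnitude, not one of the measured values.

```python
import numpy as np

hbar, muB, kB, g = 1.0546e-34, 9.274e-24, 1.3807e-23, 2.0   # SI units

def lz_probability(delta_K, dHdt, dM=20):
    """Eq. (2): P = 1 - exp(-pi*Delta^2 / (2*hbar*g*muB*|M-M'|*dH/dt)); Delta given in kelvin."""
    delta = delta_K * kB
    return 1.0 - np.exp(-np.pi * delta**2 / (2.0*hbar*g*muB*dM*dHdt))

def splitting_from_P(P, dHdt, dM=20):
    """Invert Eq. (2): tunnel splitting (in kelvin) from a measured probability."""
    return np.sqrt(-2.0*hbar*g*muB*dM*dHdt*np.log(1.0 - P)/np.pi) / kB

delta_K = 1e-7                                   # illustrative splitting, in kelvin
for rate in (1.0, 0.1, 0.01, 0.001):             # T/s
    P = lz_probability(delta_K, rate)
    print(f"dH/dt = {rate:6.3f} T/s   P = {P:.2e}   back-converted Delta = {splitting_from_P(P, rate):.2e} K")
```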
# Near–Infrared Classification Spectroscopy: J–Band Spectra of Fundamental MK Standards ## 1 Introduction Over the last several years, there has been an explosion of interest in near–infrared spectroscopy. Improvements in infrared array detectors have led to the construction of sensitive, high resolution spectrographs for this wavelength regime (e.g. Hinkle et al. 1998; McLean et al. 1999). In addition, large area photometric sky surveys such as 2MASS (Skrutskie et al. 1997) and DENIS (Epchtein, 1997) have produced large target lists which require follow–up spectroscopy. In order to provide a comprehensive set of uniform high quality infrared stellar spectra, we undertook a multi–wavelength survey of fundamental MK standards with the KPNO Mayall 4 m telescope utilizing the Fourier Transform Spectrograph. Wallace and Hinkle (1997; hereafter WH97) report observations of these stars in the K–band. Meyer et al. (1998; hereafter MEHS98) present the H–band data from our survey as well as outline a classification scheme based on several atomic and molecular indices. This third paper in the series is devoted to presenting the J–band spectra. Relatively little work has been done thus far in this wavelength range compared to the H– and K–bands. The first atlas of stellar spectra in the near–infrared was the pioneering work of Johnson and Mendez (1970). They present spectra of 32 stars from 1–4 $`\mu `$m with resolving power varying from 300–1000 for stars of spectral type A0–M7 as well as some carbon stars. A review of early work in infrared stellar spectroscopy is given by Merrill and Ridgeway (1979). More recently, Kirkpatrick et al. (1993) present spectra from 0.6–1.5 $`\mu `$m for a series of M dwarf standards from M2–M9. They present a detailed analysis of features useful for classification as well as compare the spectra with model atmosphere calculations for cool stars. Joyce et al. (1998a) has complemented this work with an atlas of spectra obtained from 1.0–1.3 $`\mu `$m at $`R=1100`$ for 103 evolved stars of S–, C–, and M–type. In this study, the dominant atomic and molecular absorption features were identified and compared to laboratory spectra. For a review of more recent work concerning the infrared spectra of stars in the K– and H–bands, see WH97 and MEHS98 respectively. Here we present a comprehensive atlas of stellar spectra from 7400–9500 cm<sup>-1</sup> (1.05–1.34 $`\mu `$m) to complement and extend the work described above. In section 2, we describe the observations and the data reduction. In section 3, we present the spectra and detail the identification of spectral features observed. We explore stellar classification based on J–band spectra in section 4, and discuss and summarize our results in section 5. ## 2 Data Acquisition & Reduction ### 2.1 Description of the Sample and Observations Our sample of 88 stars is drawn from lists of fundamental MK spectral standards as follows; i) Morgan, Abt, & Tabscott (1978) for stars O6–G0; ii) Keenan & McNeil (1989) for stars G0–M6; and iii) Kirkpatrick, Henry, & McCarthy (1991) for late–type dwarfs K5–M3. The goal was to observe bright well–established standard stars covering the full range of spectral types (26 bins) and luminosity classes (three bins) in the two–dimensional H–R diagram (78 bins total). Secondary standards were also added from compilations of Jaschek, Conde, & de Sierra (1964) and Henry, Kirkpatrick, and Simon (1994). 
Due to the sensitivity limits of the FTS, we were unable to observe dwarf star standards with spectral types later than M3. Our sample is nearly identical to that used in the H–band study of MEHS98 (82 stars in common). For details concerning the sample selection and stellar properties, see section II in MEHS98. In Table 1, we provide a journal of observations including the catalogue name of the source, its common name, the spectral type of the star, and the date of observation. We also indicate which stars analyzed are common to this sample and that of MEHS98, and which spectra are used in the equivalent width analysis presented in section 4. The observations were obtained with the Mayall 4m telescope at Kitt Peak National Observatory in Arizona. We employed the dual–output Fourier Transform Spectrometer (FTS) described by Hall et al. (1979) to collect spectra of our survey sample in the J– and H–band simultaneously. A dichroic beam–splitter was used to separate the portions of the incident flux longward and shortward of 1.5 $`\mu `$m and re–direct the beams toward detectors equipped with the appropriate filters. Each star was centered in a 3.8” aperture and simultaneous measurements of the sky were obtained through an identical aperture offset 50” in the east–west direction. The interferogram was sampled at 1 kHz in both the forward and backward scan directions as the path difference was continuously varied from $`\pm `$0.75 cm yielding an unapodized FWHM resolution of 0.8 cm<sup>-1</sup>. Data were obtained in a beam–switching mode (A–B–B–A), alternating the source position between the two input apertures. The interferograms were co–added keeping the opposite scan direction separate but combining data obtained in both apertures. Because of the novel design of the dual–input FTS, background emission from the night sky (obtained from the offset aperture) is subtracted from the interferogram of the star+sky spectrum in Fourier space as the data are collected. The resultant forward and backward scan pairs were transformed at KPNO with output spectra of relative flux as a function of wavenumber ($`\sigma `$ in cm<sup>-1</sup>). Further description can be found in MEHS98. ### 2.2 Analysis of the Data Following the production of the transformed spectra, the next step in the reduction is division by the spectrum of an essentially featureless reference star taken with the same instrumental set-up and interpolated to the same air mass. This has the effect of removing the filter response function and correcting for stable telluric absorbers in this spectral region (e.g. O<sub>2</sub>). Vega (A0 V) and Sirius (A1 V) were used as reference stars after interpolation over photospheric absorption features due to H P$`\beta `$, H P$`\gamma `$ and C I. This is different from the technique used by MEHS98 where the time–constant portion of the atmospheric opacity was derived from multiple observations of reference stars as a function of airmass. An example of removing the airmass–dependent portion of the telluric absorption is shown in the center panel of Fig. 1 for HR1713 (B8 Ia). This process compensates well for O<sub>2</sub> but only partially for H<sub>2</sub>O because it can vary over time and does not correlate strictly with airmass. For this reason MEHS98 separated the J– and H–band portions of the spectra and concentrated on the H–band data in their paper. We have taken the next step in correcting for the time variable H<sub>2</sub>O absorption, enabling a full analysis of these spectra. 
To obtain the H<sub>2</sub>O spectrum we ratioed two spectra of the same reference star, Vega, obtained at similar air mass on different nights, but preferably in the same observing run as the program star. This required experimentation with different spectra to obtain the largest H<sub>2</sub>O signal since some spectra showed similar amounts of H<sub>2</sub>O. This water spectrum was then stretched logarithmically to match the residual H<sub>2</sub>O in the program star spectrum remaining and divided out. The result of this second step for HR1713 is shown in the lower panel of Fig. 1. The influence of H<sub>2</sub>O is clearly reduced. These corrections appear to be very sensitive to drift in instrumental settings. Because of this, it was essential to obtain frequent spectra of reference stars. Weather problems interfered with the 1992 October run resulting in inadequate reference spectra and necessitating use of reference spectra from other runs. This appears to have introduced a continuum distortion noticeable in some of the spectra. This distortion, illustrated in the lower panel of Fig. 1, is the result of low frequency variations between the derived H<sub>2</sub>O spectrum and the program object. We suspect that the character of the band-pass filter may have been temperature sensitive, though a definitive cause has not been established. The full list of 88 reduced spectra is given in Table 1, which includes 12 that have been observed on multiple runs and three for which observations on different nights of a single run have been combined. In general, Vega was used as the reference star to achieve the telluric correction, the only exceptions being; i) HR2197, HR2943, and HR2985 for which an interpolation between Sirius and Vega was used; and ii) HR2827 for which Sirius alone was used. In Figures 2–5, we present the highest signal–to–noise ratio (SNR $`>`$ 25) spectra for each distinct spectral type in the survey: 18 supergiants, 19 giants, 9 subgiants, and 19 dwarf stars. These spectra have been apodized with a gaussian filter for a resulting resolution of 2.7 cm<sup>-1</sup> matching that of WH97 and MEHS98 (see WH97 for details concerning the apodization process). We utilize these 65 spectra in the analyses presented below. ## 3 Line Identifications In order to identify the dominant features observed, we concentrated on four spectra of highest SNR spanning a wide range of effective temperatures in luminosity classes I–II, III, and V. These twelve spectra are shown at a resolution of 2.7 cm<sup>-1</sup> with an expanded scale in Figures 6–8. The spectra have also been shifted to zero velocity with respect to the laboratory frequencies of H and He lines (for early–type stars) and Al I lines (for late–type stars). The identifications are based on our previous studies of the solar photosphere (Livingston & Wallace 1991; Wallace et al. 1993), and sunspot umbrae (Wallace & Livingston 1992; Wallace et al. 1998), as well as the high resolution Arcturus Atlas (Hinkle et al 1995). A list of the identified features is given in Table 2 along with references to relevant laboratory data. Many of these features are also labeled in Figures 6–8. At the resolution of our data, some features in the spectra appear to be blends from multiple species. For example, the feature at 8552 cm<sup>-1</sup> appears to be due primarily to Fe I in stars of spectral type F–G. However, for later spectral types K I absorption plays the dominant role in determining the feature strength. 
In some cases, it was not clear whether a particular species contributes to an observed spectral feature. For example, Sr II may play a role along with Mg II in determining the strength of the 9160 cm<sup>-1</sup> feature. However detailed modeling is required in order to determine which species dominates. Some of the spectra have high enough SNR that weaker features might also be identified but we stopped at what seems to be a defensible level. ## 4 Dependence on Temperature and Luminosity Class As illustrated in Figures 2–5 (& 6–8), the J–band spectral region contains a large number of features which are temperature and luminosity sensitive. In the earliest type stars, features of neutral hydrogen (Pa $`\beta `$ and Pa $`\gamma `$) and helium dominate the spectra. These features become weaker in stars of intermediate spectral type (A, F, G), whose spectra are dominated by lines of neutral metal species of lower ionization potential. For example, C I appears only in spectral types A through early G. Si I first appears in early F stars and fades by early M. Al I, Mn I, Fe I, and Mg I behave similarly to Si I but persist through the latest types observed. Na I at 7880 cm<sup>-1</sup> first appears in early G and grows in strength toward later types though it is relatively weak. K I, the lowest ionizational potential species observed in these spectra, first appears in early K stars. Even later type dwarfs ($`>`$ M5), which are not included in our survey, show very strong features due to FeH in this wavelength regime (cf. Kirkpatrick et al. 1993; Jones et al. 1994). The most striking luminosity sensitive feature in the spectra is the 0–0 band of CN at 9100 cm<sup>-1</sup>. This feature, observed in stars later than G6 for luminosity class III stars is barely detected in dwarf stars. The 0–0 band of TiO at 9060 cm<sup>-1</sup> (obvious in the spectrum of the M6 III HR7886) may also play a role in the latest type stars. Ti I appears in the spectra of K–M giants but is much weaker in luminosity classes IV–V. The Ca II features are observed in spectra of F–G supergiants, but are much weaker in stars of higher surface gravity. These features are inferior to the first– and second–overtone features of CO observed in the K– & H–bands, as well as the R– & I–band features of TiO as surface gravity diagnostics. In order to quantify the visual impressions made by the data, we have measured equivalent widths from these spectra for a number of prominent features observed. Following MEHS98, we have defined nine bandpasses and continuum regions with the intervals indicated in Table 3. We measured the feature strengths with respect to the continuum levels by linear interpolation between two nearby regions. Because the only available short wavenumber continuum region for CN is far from the feature (“redward” of a region of uncertain telluric correction 8650 to 9000 cm<sup>-1</sup>) we have estimated the equivalent width of CN using only a long wavenumber continuum reference. The features measured for Al I, Mn I, Si I, Fe I, and C I are sums over two or more lines of the atom. These nine equivalent widths are reported in Table 4 for the 65 stars shown in Figures 2–5. Typical SNR for these indices as measured from multiple observations of the same stars are $`>`$ 10 except for Mn I (SNR $``$ 5) and CN (SNR $``$ 3). 
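The measurement just described reduces to a short numerical recipe: average the flux in two nearby continuum windows, interpolate a straight continuum across the feature, and integrate 1 − F/F_c over the feature bandpass. A sketch is given below; the wavenumber intervals in the example are placeholders, not the Table 3 definitions.

```python
import numpy as np

def equivalent_width(sigma, flux, feature, cont_lo, cont_hi):
    """EW (cm^-1) of a feature band; the continuum is a straight line between the
    mean fluxes of two nearby continuum windows.  All intervals are (low, high) in cm^-1."""
    win = lambda lim: (sigma >= lim[0]) & (sigma <= lim[1])
    s1, f1 = sigma[win(cont_lo)].mean(), flux[win(cont_lo)].mean()
    s2, f2 = sigma[win(cont_hi)].mean(), flux[win(cont_hi)].mean()
    sel = win(feature)
    cont = f1 + (f2 - f1) * (sigma[sel] - s1) / (s2 - s1)
    depth = 1.0 - flux[sel] / cont
    return np.sum(0.5 * (depth[1:] + depth[:-1]) * np.diff(sigma[sel]))   # trapezoidal integral

# toy example: a gaussian absorption line on a unit continuum (intervals are illustrative)
sigma = np.linspace(7400.0, 9550.0, 4300)
flux = 1.0 - 0.3 * np.exp(-0.5 * ((sigma - 8700.0) / 4.0) ** 2)
print(equivalent_width(sigma, flux, (8685, 8715), (8650, 8675), (8725, 8750)))
```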
Adopting the spectral type to effective temperature conversion as a function of luminosity class given in MEHS98, we plot several of these equivalent width indices as a function of T<sub>eff</sub> in Figure 9. P $`\beta `$ shows the expected behavior as a function of temperature with a peak in strength near 10,000 K (A0). The strength of the Mn I feature increases consistently with cooler temperatures in the spectra of the giant stars. The Al I feature displays similar behavior in the dwarf star spectra, peaking in strength near 4000 K. Finally the CN index, although difficult to quantify because of uncertainties in establishing accurate continuum levels, crudely indicates the dependence on luminosity class, being much stronger in the giants and supergiants compared to the dwarfs. In many cases, the strength of these features measured from stellar spectra can be compared to our standards as a guide to assigning spectral types. However, it is often useful to examine diagnostic line ratios which are not as sensitive to continuum excess emission that can dilute straight equivalent widths. In an attempt to identify a two–dimensional classification plane similar to that outlined in MEHS98, we have investigated several ratios from among the features listed in Table 3. For dwarf stars of spectral type F5–M2, the ratio of the Al I equivalent width to that of the Mg I feature provides an estimate of the effective temperature. $$T_{eff}(V)=7100\pm 390(2050\pm 80)\frac{EW[AlI]}{EW[MgI]}$$ (1) Here EW is the equivalent width in cm<sup>-1</sup> for the indices identified in Table 3 and listed in Table 4. For late–type giants G0–M6, the ration of the Mn I feature strength to that of Mg I gives a better temperature estimate than the Al I feature. $$T_{eff}(III)=6170\pm 400(1860\pm 90)\times \frac{EW[MnI]}{EW[MgI]}$$ (2) Furthermore, the strength of the molecular features of CN and TiO can be used to estimate the surface gravity of the star, providing a rough estimate of the luminosity class. The temperature and luminosity dependence of these diagnostics is illustrated in Figure 10. Thus measurement of these line ratios provides an approximate classification of the spectral type and luminosity class for late–type stars. Based on the errors estimated from multiple observations of several stars with $`SNR>25`$, we conclude that crude spectral types can be estimated within $`\pm `$ 3 subclasses (500 K) for late–type stars based on these indices alone. ## 5 Discussion and Summary Because stars earlier than M5 (3000 K) have SEDs that peak at shorter wavelengths, features observed in the spectra of these stars over the J–band are intrinsically weak. However, there are several photospheric features present in this wavelength regime which are diagnostic and observing them in the J–band can help to penetrate large amounts of extinction. Absorption due to interstellar dust at the J–band is less than $`1/3`$ that observed as visual wavelengths (e.g. Rieke and Lebofsky, 1985). For many objects it makes sense to obtain spectra at even longer wavelengths such as the K–band (WH97). However some targets surrounded by circumstellar material (such as young stellar objects or evolved stars) exhibit continuum excess emission due to thermal dust at temperatures as high at 1500–2000 K (e.g. Meyer et al. 1997). Because the emission from such hot dust peaks at 1.5–2.0 $`\mu `$m, it may be advantageous to observe these objects in the J–band. 
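In practice the calibrations of equations (1) and (2) are applied directly to the measured indices: form the Al I/Mg I (or Mn I/Mg I) equivalent-width ratio and read off the effective temperature. The short sketch below encodes the two relations; the input ratios are illustrative numbers, not entries from Table 4.

```python
def teff_dwarf(ew_al, ew_mg):
    """Eq. (1): T_eff for F5-M2 dwarfs from the Al I / Mg I equivalent-width ratio."""
    return 7100.0 - 2050.0 * ew_al / ew_mg

def teff_giant(ew_mn, ew_mg):
    """Eq. (2): T_eff for G0-M6 giants from the Mn I / Mg I equivalent-width ratio."""
    return 6170.0 - 1860.0 * ew_mn / ew_mg

# illustrative ratios of order unity give temperatures typical of K stars
print(f"dwarf:  T_eff ~ {teff_dwarf(1.2, 1.0):.0f} K")
print(f"giant:  T_eff ~ {teff_giant(0.9, 1.0):.0f} K")
```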
Perhaps the greatest utility of J–band spectroscopy (not demonstrated in this paper) lies in the classification of very cool stars ($`>`$ M5), whose I– and J–bands spectra are dominated by very broad atomic and molecular features (e.g. Kirkpatrick et al. 1993; Jones et al. 1994). There are also technical reasons that make the J–band attractive: it can be observed with a warm spectrograph without any appreciable thermal background contributed by the optical train. However this requires a non-silicon based detector as standard CCD’s are not efficient beyond 1.0 $`\mu `$m. Joyce et al. (1998b) report detector–limited performance using a HgCdTe infrared detector in a dewar blocked with a cold J–filter in the focal plane of an ambient temperature spectrograph while the H–band observations were still limited by thermal radiation from the spectrograph. Further the intensity of the OH airglow emission, an important source of noise in low resolution 1–2.5 $`\mu `$m spectroscopic observations, is roughly twice as strong in the H–band compared with the J–band (Maihara et al. 1993). However, in deciding whether or not to utilize the J–band spectral region for stellar classification, it must be remembered that telluric water vapor absorption can be difficult to treat over significant portions of the J–band spectral region. We present J–band spectra from 7400–9550 cm<sup>-1</sup> (1.05–1.34 $`\mu `$m) for 88 fundamental MK spectral standards. The stars span a range of spectral types from O5–M6 and cover luminosity class I–V. Special care was taken to remove absorption due to time–variable telluric H<sub>2</sub>O features which prevented analysis of these data in a previously published paper (MEHS98). We have identified several features in the spectra which are temperature and luminosity sensitive. The absorption strengths of the nine most diagnostic features are tabulated over narrow–band indices. We also define a two–dimensional classification scheme based on these data, which makes use of diagnostic line ratios rather than the equivalent widths of the features alone. Using this scheme, J–band spectra of stars spectral type G–M taken at $`R3000`$ with $`SNR>25`$ can be classified within $`\pm `$ 3 subclasses. However, the very latest–type stars can be classified with great precision at much lower spectral resolution. We conclude that the J–band contains many spectral features of interest for a wide range of astrophysical studies and we hope that this contribution will facilitate their use. ## 6 Appendix A. Electronic Availability of the Data The 101 reduced spectra listed in Table 1 are available through the World Wide Web at http://www.noao.edu/archives.html. The data are organized into directories according to year and month of observation. Thus, the two spectra of HR1899 are in the directories 93mar and 94jan. These spectra have been corrected for telluric absorption as described above and scaled to unity in the region from 7900 to 8100 cm<sup>-1</sup>, but not further processed. The data will also be available through the Astronomical Data Center, NASA Goddard Space Flight Center, Code 631, Greenbelt, MD 20771 (tel: 301–286–8310; fax: 301–286–1771; or via the internet at http://adc.gsfc.nasa.gov). The raw FTS data are also available directly from NOAO (contact KHH for details). We thank Steve Strom for his expert assistance in guiding the 4m telescope which enabled the collection of data presented in this paper. 
This research made use of the SIMBAD database operated by CDS in Strasbourg, France as well as NASA’s Astrophysics Data System Abstract Service. Support for M.R.M. was provided by NASA through Hubble Fellowship grant HF–01098.01–97A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc. for NASA under contract NAS 5–26555. S.E. acknowledges support from the National Science Foundation’s Faculty Award for Women Program.
# Stabilization of A-type Layered Antiferromagnetic Phase in LaMnO3 by cooperative Jahn-Teller Deformations ## 1 Introduction Perovskite oxides containing Mn ions have been the object of intense interest in the recent years. In spite of being known for a very long time, these compounds have been reconsidered in great detail owing to their colossal magnetoresistive properties. Starting from the ”parent” phases $`LaMnO_3`$ (trivalent Mn) and $`CaMnO_3`$ (tetravalent Mn), substitutional doping has revealed an extremely rich phase diagram. Understanding this diagram requires at least the following ingredients : i) strong on-site Coulomb interactions; ii) the ”double exchange” mechanism due to the interplay of $`e_g`$ electron itineracy and Hund’s exchange with the more localized $`t_{2g}`$ electron spins, which favours ferromagnetism Zen ; And ; DeGen ; iii) superexchange between $`t_{2g}`$ electrons as well as between $`e_g`$ electrons on neighbouring sites; iv) large electron-lattice interactions, in particular due to Jahn-Teller (JT) effect on $`Mn^{3+}`$ ions Mil ; Zan . All these elements are necessary to understand the interplay between spin, charge and orbital ordering. The latter lifts the degeneracy of the $`e_g`$ orbitals by a cooperative Jahn-Teller lattice deformation and leads to tetragonal or orthorhombic deformations of the cubic structure. Although Goodenough Good provided long time ago a qualitative understanding of the phase diagram of the $`(La,Ca)MnO_3`$ family, a full microscopic description is still lacking. Especially the dramatic dependence of all physical properties with very fine tuning of the chemical composition requires a precise estimate of the various parameters, and clear identification of the dominant mechanism for every doping. Surprisingly enough, such an understanding is not yet reached in the insulating antiferromagnet $`LaMnO_3`$, although it seems essential before quantitatively studying the doped phases. This phase, when fully stoechiometric, presents a layered antiferromagnetic order, with ferromagnetic couplings (F) in two directions and antiferromagnetic (AF) coupling in the other Wol . The AF directions are associated to a shortening of the $`MnO`$ bonds, leading to tetragonal distortion, while in the F directions long and short bonds alternate, yielding the overall orthorhombic structure. In what follows, we shall neglect the tilting of the $`MnO`$ octahedra and concentrate only on the $`MnO`$ bond length deformations. These can be understood in terms of cooperative JT effect. The corresponding lifting of $`e_g`$ degeneracy can be viewed as an orbital ordering, with occupied $`d`$ orbitals pointing preferentially in the directions of long $`MnO`$ bonds. Several proposals have been made to explain layered antiferromagnetism in $`LaMnO_3`$. Goodenough Good used the picture of ”semi-covalence” where oxygen orbitals play an essential role in overlapping empty $`d`$ orbitals of $`Mn`$ ions. This picture, although useful for qualitative purposes, has not received confirmation by microscopic calculations and does not allow to write simple enough models, for instance based on a Hamiltonian involving only metal orbital electrons and their basic interactions. A microscopic description requires to identify clearly the dominant interactions in the problem. 
In pioneering works, Kugel and Khomskii KK , and Lacroix Lacr (see also earlier work by Roth Roth ), proposed that superexchange in the presence of $`e_g`$ orbital degeneracy results in ferromagnetism and orbital ordering : Hund’s rule favours in this case different orbitals on neighbouring sites and ferromagnetic coupling. Using a simplified model with equal hopping integrals between $`e_g`$ orbitals leads to the same ordering along the three cubic lattice directions : the resulting structure is an insulating ferromagnet, with ”antiferroorbital” ordering. However, taking properly into account the hopping integrals between $`d_{x^2y^2}`$ (denoted $`x`$) and $`d_{z^2}`$ (denoted $`z`$) orbitals, Kugel and Khomskii KK found the correct magnetic structure. Starting with degenerate $`e_g`$ orbitals, they performed a perturbative calculation in $`\frac{t}{U}`$ and $`\frac{J_H}{U}`$ where $`t`$, $`J_H`$ and $`U`$ are the typical hopping integral, the Hund coupling and the on-site repulsion in the order. Based on the weak electron-lattice coupling in the compound $`KCuF_3`$, they considered the JT couplings as a perturbation. As a result, orbital and magnetic ordering result from superexchange (SE) only: Intraorbital SE dominates in the c-direction (defined as the z-axis), leading to AF coupling, while interorbital SE dominates in the ab-directions, yielding F coupling. Occupied orbitals are dominantly $`d_{z^2x^2}`$ and $`d_{z^2y^2}`$, therefore, as Kugel and Khomskii remark, for $`Cu^{2+}`$ in $`KCuF_3`$ (hole orbital), JT coupling implies a shortening as experimentally observed ($`c/a<1`$). However, for $`Mn^{3+}`$ ions with one electron in the $`e_g`$ levels, they correctly point out that repulsion between metal and anion orbitals, together with JT coupling, would trigger a lengthening of the c-axis ($`c/a>1`$), in contradiction with the actual structure. In a recent work, Feiner and Oles Feiner reconsidered Kugel and Khomskii’s model, including both Hund’s coupling betwen $`e_g`$ and $`t_{2g}`$ orbitals and the antiferromagnetic superexchange interaction between $`t_{2g}`$ spins (equal to $`\frac{3}{2}`$ in the ground state). Their results confirms those of Ref.KK : They find the correct layered structure (which they call MOFFA), but only if the $`d_{z^2}`$ orbital has lower energy than the $`d_{x^2y^2}`$ one, contrarily to what happens for electron-like orbitals (case of $`LaMnO_3`$). This contradiction sets the limits of the Kugel-Khomskii model for $`LaMnO_3`$. We believe that the JT effect, on the contrary, has to be considered from the very beginning in the model. Essentially, the assumption that the $`e_g`$ degeneracy is lifted principally by superexchange may be justified in $`KCuF_3`$, but is definitely not correct in $`LaMnO_3`$. In fact, this could hold only if the typical JT splitting $`ϵ`$ was much smaller than the superexchange splitting, of order $`\frac{t^2}{U}`$. The latter (related to the magnetic transition temperatures) being of the order of a few $`meV`$, the former is much larger. Although there is no precise evaluation of this quantity, this is supported by experiment: On the one hand, the deformations of Mn-O bonds is extremely large, more than ten per cent, indicating that $`ϵ>k_BT`$. On the other hand, neutron scattering measurements show that the local distorsions persist above the orthorhombic-cubic transition at $`750K`$ Moussa2 . 
This temperature only marks the disappearance of cooperative JT ordering, while distorted $`MnO_6`$ octahedra still exist at higher temperatures. Photoemission dessaushen measurements indicate that JT splittings are as large as a few tenth of $`eV`$, comparable to the electronic hopping integrals between neighbouring sites. And optical conductivity analysis Jung also shows evidence of large splittings. In these conditions, the degenerate perturbation calculation of Ref. KK does not hold anymore. In a previous work we have reconsidered the problem wihin perturbation theory us , making the opposite assumption, i.e. $`ϵ>>\frac{t^2}{U}`$: This means that, given the crystal deformations, due to strong cooperative JT effect, the $`e_g`$ orbitals split so as to give a certain type of orbital ordering. The orbitals stabilized at each sites are different from the one predicted by pure superexchange. We have found that, depending on the values of the two JT modes $`Q_2`$ and $`Q_3`$, different magnetic ordering could be stabilized, among which the layered ”FFA”. This ordering is always stabilized if the $`Q_3`$ mode is positive, e.g. for dilatation in the c-direction. But in the real case $`Q_3<0`$, FFA order is realized only if the in-plane alternate $`Q_2`$ mode is sufficiently large and overcomes the contrary effect of $`Q_3`$. Looking at structural numbers, one checks that this is actually the case. Nevertheless the system is close to the point where the FFA order becomes unstable towards FFF. This results in the F exchange (along the ab-plane) being larger than the AF exchange (along the c-axis). This feature has been obtained from inelastic neutron scattering Moussa1 , and it cannot be explained by the Kugel-Khomskii model, which obtains on the contrary that the F superexchange is of order $`\frac{J_H}{U}`$ times the AF one, thus much smaller. The interplay between lattice distortions and magnetism has also been investigated from ab-initio calculations of the electronic structures Pick ; Sol ; Sawa . All conclude with a prominent role of those distortions to stabilize the actual magnetic order. In particular, Solovyev et al. Sol have found that the c-axis exchange is antiferromagnetic only if the JT distortion is sufficiently large. For $`LaMnO_3`$ with its very large distortion they obtain the layered antiferromagnetic structure, but it is close to the border between FFA and FFF phases. Very recent Monte Carlo calculations have also demonstrated the relevance of the JT interaction in stabilizing the FFA magnetic order HYMD . In the present work, we reconsider the problem, beyond any perturbation theory, by exact diagonalizations on pairs of $`Mn^{3+}`$ sites. The two $`e_g`$ orbitals are considered together with the quantum $`\frac{3}{2}`$-spins due to the electrons in the $`t_{2g}`$ levels. Our conclusions confirm the essential role of JT deformations, especially the $`Q_2`$ mode, to stabilize the layered AF order. They also demonstrate that it is essential to include Hund’s coupling with $`t_{2g}`$ orbitals, and that the role of the intrinsic $`t_{2g}`$ AF exchange is to slightly stabilize the FFA order with respect to the FFF one. ## 2 The Model ¿From the discussion of the preceding section it is clear that the basic physical ingredients required for a satisfactory description of the Manganites should involve both Coulomb and lattice (namely JT) interactions. 
Accordingly we consider the following model $$H=H_t+H_H+H_{UU^{}}+H_J+H_{JT}$$ (1) with $`H_t`$ $`=`$ $`{\displaystyle \underset{i𝐚\alpha \alpha ^{}\sigma }{}}t_{\alpha \alpha ^{}}^𝐚c_{i\alpha \sigma }^{}c_{i+𝐚\alpha ^{}\sigma }`$ $`H_H`$ $`=`$ $`J_H{\displaystyle \underset{i\alpha \sigma \sigma ^{}}{}}c_{i\alpha \sigma }^{}𝐬_{\sigma \sigma ^{}}c_{i\alpha \sigma ^{}}`$ $`\times `$ $`\left[𝐒_i+{\displaystyle \underset{\alpha ^{}\alpha \stackrel{~}{\sigma }\stackrel{~}{\sigma }^{}}{}}c_{i\alpha ^{}\stackrel{~}{\sigma }}𝐬_{\stackrel{~}{\sigma }\stackrel{~}{\sigma }^{}}c_{i\alpha ^{}\stackrel{~}{\sigma }^{}}\right]`$ $`H_{UU^{}}`$ $`=`$ $`U{\displaystyle \underset{i\alpha }{}}\left(c_{i\alpha }^{}c_{i\alpha }\right)\left(c_{i\alpha }^{}c_{i\alpha }\right)`$ $`+`$ $`U^{}{\displaystyle \underset{i\alpha \alpha ^{}\sigma \sigma ^{}}{}}\left(c_{i\alpha \sigma }^{}c_{i\alpha \sigma }\right)\left(c_{i\alpha ^{}\sigma ^{}}^{}c_{i\alpha ^{}\sigma ^{}}\right)`$ $`H_J`$ $`=`$ $`J_t{\displaystyle \underset{ij}{}}𝐒_i𝐒_j`$ $`H_{JT}`$ $`=`$ $`g{\displaystyle \underset{i}{}}\left(c_{i\alpha \sigma }^{}\tau _{}^{(\mathrm{𝟑})}{}_{\alpha \alpha ^{}}{}^{}c_{i\alpha ^{}\sigma }Q_{3i}+c_{i\alpha \sigma }^{}\tau _{}^{(\mathrm{𝟐})}{}_{\alpha \alpha ^{}}{}^{}c_{i\alpha ^{}\sigma }Q_{2i}\right),`$ The first term represents the kinetic energy with the electrons in the Manganese $`3d_{x^2y^2}`$ ($`\alpha =x`$) or $`3d_{3z^2r^2}`$ ($`\alpha =z`$) orbitals hopping from site $`i`$ to the nearest neighbor (nn) site $`i+𝐚`$ in the $`𝐚`$ lattice direction. Here $`𝐬`$ is the vector of Pauli matrices for spins and $`\tau `$ the vector of Pauli matrices for orbital pseudospins in the $`x,z`$ basis. Specifically, for a standard choice of the phases for the orbital wavefunctions, the hopping between the $`x`$ and the $`z`$ orbitals are given by $`t_{xx}^{𝐱,𝐲}`$ $`=`$ $`3t;t_{zz}^{𝐱,𝐲}=t;`$ $`t_{xz}^𝐱`$ $`=`$ $`\sqrt{3}t;t_{xz}^𝐲=\sqrt{3}t`$ $`t_{zz}^𝐳`$ $`=`$ $`4tt_{xx}^𝐳=t_{xz}^𝐳=0.`$ (2) Together with the Hund coupling given by $`H_H`$ the kinetic energy gives rise to the usual “double-exchange” itinerancy of the $`e_g`$ electrons. The (strong) on-site Coulomb interactions, are represented by the intraorbital repulsion $`U`$ and by the interorbital $`U^{}=U2J_H`$ term. For simplicity here and in the following we will not distinguish between the Hund exchange energy between electrons in the $`e_g`$ and $`t_{2g}`$ orbitals. The antiferromagnetic superexchange coupling between neighboring $`t_{2g}`$ spins is considered with $`H_J`$, while the JT interaction between the $`e_g`$ electrons and the (cooperative) lattice deformation is given by the last term $`H_{JT}`$. The Jahn-Teller modes are defined in terms of the short ($`s`$), medium ($`m`$) and long ($`l`$) $`MnO`$ bonds by $`Q_2=\sqrt{2}(ls)`$ and $`Q_3=\sqrt{2/3}(2mls)`$, the $`m`$ bonds lying in the $`z`$ direction and the $`s,l`$ ones in the $`x,y`$ planes. Since in the present work we will not attempt to perform any energy minimization by including the elastic interactions due to the lattice, we here disregard these energy terms by treating the JT deformations $`𝐐=(Q_2,Q_3)`$ as external fields imposed by a lattice ordering involving a much higher energy scale than the magnetic ones. Therefore in the following the various magnetic couplings will be determined in terms of assigned lattice deformations. 
This viewpoint, which already guided us in the perturbative analysis of the stability of FFA antiferromagnetism in the undoped LMO us is definitely justified by the experimental observation that the JT energy splitting is much larger than all magnetic couplings. We exactly diagonalize the Hamiltonian in Eq. (1) for a system of two sites with open boundary conditions. The two sites are located either on the same $`xy`$ plane or on adjacent planes and the suitable hopping matrix elements between the various orbitals have been considered according to expressions (2). The JT energy splitting $`ϵ=g\sqrt{Q_2^2+Q_3^2}`$ and the deformation anisotropy ratio $`rQ_2/Q_3`$ are given external parameters and are fixed for any diagonalization procedure. Once the ground state is found, the effective exchange coupling between the total spins on the two sites can be determined. Specifically, since the Hamiltonian conserves the total spin of the two-site cluster, we determine the ground states with total spin $`S_T=4,M_{S_T}=4`$ and $`S_T=3,M_{S_T}=3`$. Then the magnetic coupling is given by the energy difference $`E(S_T=4,M_{S_T}=4)E(S_T=3,M_{S_T}=3)=2J`$. Once the magnetic couplings (and particularly their sign) along the various lattice directions are found, the resulting magnetic phase is also determined. ## 3 Results In order to gain insight from the physical processes underlying the intersite magnetic couplings, we first carry out a comparison between the results of the perturbative analysis of the superexchange interactions (see Ref. us ) and the exact numerical calculations. The perturbative analysis not only was performed assuming very large local Coulomb interactions ($`U,U^{}`$ and $`J_H`$ much larger than $`t`$), but the additional assumption was made that the JT energy splitting $`ϵ`$ greatly exceeds the typical superexchange energy scale of order $`t^2/U`$. In this way the ground state can safely be assumed to be formed by just one singly occupied $`e_g`$ level. Accordingly the exact numerical calculations to be compared with the analytic results have been performed for $`Q_3<0`$ and $`t=0.2eV`$, $`U=8eV`$, $`J_H=1.2eV`$, $`ϵ=0.4eV`$, and $`J_t=0`$. Fig. 1 reports the superexchange interactions both in the planar and interplanar directions obtained with both the perturbative and the exact-diagonalization analysis. As it is apparent, the perturbative $`J_{xy}`$ and $`J_z`$ display the same qualitative behavior as in the exact calculation. This confirms that, at least in the $`ϵt^2/U`$ limit, a substantial part of the magnetic effective interactions is generated by the superexchange processes due to the hopping of electrons lying in the lower $`e_g`$ level on the same or on different nearest neighbor $`e_g`$ orbitals. On the other hand, the quantitative comparison indicates that the range of stability for the FFA phase (i.e. $`J_{xy}<0`$ and $`J_z>0`$) is modified. In fact a positive $`J_z`$ together with a negative $`J_{xy}`$ are obtained in the exact calculation on a somewhat larger range of lattice deformation anisotropies $`(Q_2/|Q_3|2.5)`$. In order to establish a tighter connection between the experimentally determined $`J`$’s and the observed deformations, and to investigate the role of the various interactions in the model, a more systematic analysis is required. 
Assuming the JT interaction to be relevant in stabilizing the FFA phase, we investigate the behavior of the exchange constants $`J_{xy}`$ (denoted ”intraplane”) and $`J_z`$ (denoted ”interplane”) in terms of $`ϵ`$ and the deformation ratio $`r`$. Figs. 2 and 3 report $`J_{xy}`$ and $`J_z`$ as functions of $`|r|`$ for the $`Q_3<0`$ case (the one relevant for LMO) at a large ($`ϵ\simeq 2t`$) and at a small ($`ϵ<0.1t`$) value of the JT splitting respectively. Different values of the Hund coupling $`J_H`$ are considered. One can first observe from Fig. 2 that in the large-$`ϵ`$ case the increase of the Hund coupling shifts downwards both the intraplane and the interplane magnetic couplings. This outcome can be rationalized in terms of perturbatively generated superexchange processes providing AF effective couplings of the form $$J_{xy,z}^{AF}\simeq \frac{A_{xy,z}}{U+(3/2)J_H}+\frac{B_{xy,z}}{U+ϵ}$$ (3) competing with the generated F interaction $$J_{xy,z}^F\simeq -\frac{C_{xy,z}}{U+ϵ-(5/2)J_H}.$$ (4) The numerical coefficients $`A,B`$, and $`C`$ stem from the different hopping matrix elements between the different orbitals in the different directions (Ref. us ). Specifically, while the $`A`$’s are related to the hopping processes between two nearest-neighbour lower-lying $`e_g`$ orbitals, the $`B`$ and $`C`$ coefficients are due to hoppings between one low-lying and one higher JT-split orbital (this is why the corresponding denominators involve $`ϵ`$). The $`A,B`$ and $`C`$ coefficients are independent of the Coulomb interactions, which only determine the energies of the virtual intermediate states in the superexchange processes. The above schematic expressions clearly show that, when $`J_H`$ is increased at fixed $`ϵ\gtrsim t`$, the F spin configuration becomes more favourable, since the F coupling becomes stronger, while the AF interaction weakens. We remark that purely electronic models such as in Refs. KK ; Feiner make use of degenerate perturbation theory. Then the orbital splitting is of the order of the exchange couplings $`J`$ and therefore those models become invalid if $`ϵ>J`$, which is the case in $`LaMnO_3`$. More seriously, the orbital order resulting from purely electronic interactions is at odds with that obtained from the actual Jahn-Teller distortions, showing that those distortions do not result from an orbital ordering of electronic origin, but are on the contrary the mere source of orbital ordering. Another quite generic effect, which can be interpreted in terms of perturbatively generated superexchange processes, is the tendency of $`J_z`$ to acquire a F (or at least a less AF) character at low values of $`Q_2/|Q_3|`$ (this can also be accompanied by an upturn of $`J_z`$ for $`|r|`$ tending to zero). This occurs because, for $`Q_3<0`$, the lowest $`e_g`$ level progressively loses its $`3d_{3z^2-r^2}`$ component: by schematically writing the lower and the upper $`e_g`$ states as $`|a\rangle \simeq |x\rangle +\eta |z\rangle `$ and $`|b\rangle \simeq -\eta |x\rangle +|z\rangle `$ respectively, $`\eta `$ vanishes as $`|r|\to 0`$. Now, the superexchange along $`z`$ is driven by the interplane hopping, which is only allowed between $`3d_{3z^2-r^2}`$ orbitals. Furthermore one can see (Ref. us ) that the ferromagnetic superexchange arises from $`|a\rangle \to |b\rangle `$ hoppings, which are of order $`\eta `$, while the antiferromagnetic coupling is mostly generated from intraorbital $`|a\rangle \to |a\rangle `$ hopping (the $`A`$ term in Eq. (3)).
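The orbital mixing just introduced can be made concrete with a two-level sketch. The fragment below diagonalizes a single-site JT matrix written, purely for illustration, in the conventional real form $`g(Q_3\sigma _3+Q_2\sigma _1)`$ in the $`(|x\rangle ,|z\rangle )`$ basis (which may differ from the $`\tau ^{(2)},\tau ^{(3)}`$ notation of Eq. (1) by a basis choice, so this is an assumption) and extracts the $`3d_{3z^2-r^2}`$ weight $`\eta `$ of the lower level; the ferromagnetic interplane channel scales as $`\eta `$ and the antiferromagnetic one as $`\eta ^2`$, as stated in the text.

```python
import numpy as np

def lower_orbital_mixing(Q2, Q3, g=1.0):
    """|<z|a>| for the lower JT-split e_g level, assuming the illustrative
    real coupling matrix g*(Q3*sigma3 + Q2*sigma1) in the (|x>, |z>) basis."""
    H = g * np.array([[Q3, Q2],
                      [Q2, -Q3]])
    vals, vecs = np.linalg.eigh(H)   # eigenvalues in ascending order
    return abs(vecs[1, 0])           # z-component of the lower eigenvector

# For Q3 < 0 the mixing (and with it the interplane superexchange) grows with |r|:
for r in (0.1, 1.0, 3.0):
    eta = lower_orbital_mixing(Q2=r, Q3=-1.0)
    print(f"|r| = {r:.1f}:  eta = {eta:.3f},  eta^2 = {eta**2:.3f}")
```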
Since this intraorbital contribution is of order $`\eta ^2`$, it is quite natural that in the low-$`|r|`$ region, as $`\eta `$ decreases, the superexchange along $`z`$ is ferromagnetic and vanishes with $`\eta `$. This ferromagnetic tendency is, however, contrasted (and actually overcome in Figs. 2 and 3) by the independent AF superexchange $`J_t`$ between the $`t_{2g}`$ spins, which becomes relatively more important. Of course, when $`|r|`$ increases, the $`\eta ^2`$ terms in the hopping become relevant, the intraorbital $`|a\rangle \to |a\rangle `$ hopping starts to dominate and $`J_z`$ eventually becomes (more) positive (i.e. AF). As far as the superexchange along the planes is concerned, at small $`|r|`$ this is instead dominated by the large hopping between $`3d_{x^2-y^2}`$ orbitals, which favors the $`|a\rangle \to |a\rangle `$ hopping and, consequently, produces an AF magnetic coupling. On the contrary, for large $`|r|`$, orbital ordering implies that the main superexchange contribution comes from hopping between different orbitals, thus favouring ferromagnetism KK . All the above arguments are obviously only valid as long as the conditions for the perturbation theory nearly hold. On the other hand, the simple perturbative approach between non-degenerate states breaks down when $`ϵ\sim t^2/U`$ as in Fig. 3 and the interpretation of the results is not so transparent. However, the effect of $`J_H`$ favoring ferromagnetism is still present. An important difference between the results in Figs. 2 and 3 is that the FFA phase is generically obtained in a broad range of parameters when $`ϵ\gtrsim t`$. In particular, for rather realistic values of $`J_H\simeq 5t\simeq 1`$ eV the deformation ratios required to generate negative (i.e. F) couplings in the $`xy`$ planes and positive ones in the $`z`$ direction are quite reasonable ($`|r|\simeq 2`$–3). The same does not hold in the case of small JT splitting, where $`J_{xy}`$ and $`J_z`$ have the same sign (FFF). Therefore a first result is that a sizable $`ϵ`$ is needed in order to obtain both the FFA phase and reasonable lattice distortion ratios $`Q_2/Q_3`$. This result is also confirmed by the calculation of $`J_{xy}`$ and $`J_z`$ as a function of $`ϵ`$, at a fixed value of the deformation anisotropy ratio $`r`$. Figs. 4 and 5 report the values of $`J_{xy}`$ and $`J_z`$ for $`r=3`$ and $`r=-3`$ respectively. While the positive-$`r`$ case is generic for perovskite materials with the lattice elongated in the $`z`$ direction ($`c/a>1`$), the latter choice is more pertinent to the case of the undoped LMO, where $`c/a<1`$. As already discussed by Kugel and Khomskii KK for a different model and as confirmed by the perturbative analysis of Ref. us , the JT deformation and the superexchange interactions cooperate when $`Q_3>0`$, as in KCuF<sub>3</sub>, so that it is not surprising that for all values of $`J_H`$ the FFA phase is realized over a much broader range of $`ϵ`$. On the other hand, for $`Q_3<0`$, Fig. 4 shows that the conditions for a FFA phase, $`J_{xy}<0`$ and $`J_z>0`$, are only realized for a smaller range of $`ϵ`$ values. In particular a sizeable minimum value of $`ϵ`$ is required to have an AF coupling along $`z`$, while exceedingly large values of $`ϵ`$ (of order $`J_H`$) produce an AF coupling also along the planes. Both the minimum and the maximum values of $`ϵ`$ for obtaining the FFA phase increase upon increasing $`J_H`$. However, the maximum value of $`ϵ`$ increases more rapidly and the overall effect is that, increasing $`J_H`$, the available range in $`ϵ`$ to obtain an FFA phase is enlarged.
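The competition between Eqs. (3) and (4) that underlies these trends can be explored with a toy evaluation. In the sketch below the coefficients $`A`$, $`B`$, $`C`$ are placeholders of order $`t^2`$ (their actual, direction-dependent values follow from the hopping amplitudes of Eq. (2) and are not computed here); it only illustrates that a larger $`J_H`$ pushes the total coupling towards ferromagnetism while a larger $`ϵ`$ suppresses the ferromagnetic part.

```python
def schematic_superexchange(U, JH, eps, A, B, C):
    """Schematic couplings of Eqs. (3)-(4):
    J_AF ~  A/(U + 1.5*JH) + B/(U + eps),
    J_F  ~ -C/(U + eps - 2.5*JH);
    returns their sum (same energy units as the inputs)."""
    J_AF = A / (U + 1.5 * JH) + B / (U + eps)
    J_F = -C / (U + eps - 2.5 * JH)
    return J_AF + J_F

# Placeholder coefficients of order t^2 (t = 0.2 eV -> t^2 = 0.04 eV^2):
A, B, C = 0.04, 0.02, 0.06
for JH in (0.6, 1.2):                 # eV
    for eps in (0.1, 0.4, 0.8):       # eV
        J = schematic_superexchange(U=8.0, JH=JH, eps=eps, A=A, B=B, C=C)
        print(f"JH={JH:.1f} eV  eps={eps:.1f} eV  J={1e3 * J:+.2f} meV")
```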
Again the behavior displayed in the exact calculations reported in Fig. 4 can easily be interpreted in terms of the perturbative superexchange processes schematically represented in Eqs. (3) and (4). First of all these expressions at once account for the increasing behavior of the couplings upon increasing $`ϵ`$: while only the interorbital part of $`J_{AF}`$ (the contribution proportional to $`B`$) decreases upon increasing $`ϵ`$, the whole ferromagnetic part in Eq. (4) is suppressed when $`ϵ`$ grows, so that the total coupling, although ferromagnetic at small JT energy splitting, eventually vanishes and becomes positive. Moreover it turns out that, for $`|r|\gtrsim 2`$–3, the hoppings generate smaller $`A,B,C`$ coefficients in the $`z`$ direction. This accounts for the more rapid rise of $`J_z`$ when $`ϵ`$ is increased. Finally, along the same line of the discussion of Fig. 2, one can easily observe that an increasing $`J_H`$ strengthens the ferromagnetic component and weakens the antiferromagnetic one, thus rationalizing the generic tendency of all curves to be shifted downwards when $`J_H`$ grows. Besides the above specific findings, the occurrence of the various magnetic phases can be cast in a phase diagram at zero temperature illustrating the stability region of these phases in terms of the JT energy splitting and the deformation ratio. In the light of their richer complexity and of the present interest in the manganites, we here consider in greater detail the case $`Q_3<0`$, of relevance for the undoped LMO, while the $`Q_3>0`$ case is only described in the inset of Fig. 5. Figs. 5 and 6 report the phase diagram for two different values of the Hund coupling. Both phase diagrams display the same qualitative features. In particular, at moderate and large values of $`ϵ`$ a Néel AAA phase is found for weak planar distortions (small $`r`$). As seen in the discussion of Fig. 2, in the very-small-$`r`$ region, $`J_{xy}`$ is naturally positive, while the superexchange between $`e_g`$ levels along $`z`$, although ferromagnetic, is small, so that the direct superexchange between $`t_{2g}`$ spins may easily dominate and give rise to the AAA phase (see Figs. 2 and 3). As can also be seen from Fig. 1, the AAA phase is replaced by the so-called $`C`$-like antiferromagnetic AAF phase in the $`J_t=0`$ case. At small-to-intermediate values of $`ϵ`$, a progressive increase of $`|r|`$ drives the system towards the AAF phase. In this phase $`J_{xy}`$ keeps its AF character, while the negative superexchange between $`e_g`$ levels along $`z`$ is small, but is no longer overcome by $`J_t`$. At larger values of $`ϵ`$ the AAF phase is not present, but the intimate nature of the AAA phase changes upon increasing $`|r|`$. In particular, while at low $`|r|`$ the AF along $`z`$ is determined by $`J_t`$, at larger $`|r|`$ the superexchange between $`e_g`$ levels along $`z`$ is itself AF and therefore the $`t_{2g}`$ superexchange contributes, but is not strictly necessary to the AF coupling along $`z`$. On the other hand, a further increase of $`|r|`$ promotes a F coupling along the planes and leads to the A-type antiferromagnetism FFA experimentally observed in undoped LMO. At small values of the JT splitting, the phase diagram is prominently occupied by an FFF phase. In this latter regard, from the comparison of Figs.
5 and 6, the important observation can be made that the FFF phase at low and moderate $`ϵ`$’s is greatly stabilized by the increase of the Hund coupling $`J_H`$, as previously expected. Within the present exact numerical treatment of the model in Eq. (1) it is also possible to attempt “precise” estimates of $`J_{xy}`$ and of $`J_z`$. As an example, we report here a realistic set of parameters (among many others) providing the values $`J_{xy}=-0.83`$ meV and $`J_z=0.58`$ meV experimentally observed with inelastic neutron scattering Moussa1 . Assuming $`Q_2/|Q_3|=3.2`$, a value largely confirmed by many groups Moussa2 , we take $`t=0.124eV`$, $`U=5.81eV`$, $`J_H=1.2eV`$, $`J_t=2.1`$ meV, and $`ϵ=0.325eV`$. The quite reasonable values of the model parameters needed to reproduce the measured magnetic couplings are an indirect test of the validity of the considered model. We emphasize that the ”anomalous” trend $`|J_{xy}|>|J_z|`$ is correctly reproduced, and that our fit is relatively flexible concerning the parameters $`U`$, $`J_H`$ or $`t`$, provided $`ϵ`$ is large enough. ## 4 Conclusions In this paper we presented the results of calculations based on the exact diagonalization of a model aiming to describe the stoichiometric LaMnO<sub>3</sub>. The model includes strong local Coulomb interactions as well as a JT coupling between the electrons and the $`Q_2`$ and $`Q_3`$ lattice deformations. Despite the smallness of our cluster, we believe that our determination of the magnetic couplings is not only qualitatively but also quantitatively significant. This is so because, in the presently considered undoped LMO, the coherent charge mobility is negligible due to the large on-site Coulomb repulsions and to the substantial JT deformations. As a consequence the magnetic interactions do not arise, e.g., from Fermi surface instabilities or other collective effects, but are rather determined by short-distance (incoherent) processes. One first relevant result is that, when the $`MnO_6`$ octahedron is compressed along $`z`$, a FFA phase is only obtained for a sizable (staggered) $`Q_2`$ deformation of the planar unit cell. This finding agrees with the ab initio calculations of Ref. Sol . Our analysis also points out the relevant role played by the Hund coupling, which generically emphasizes the ferromagnetic component of the superexchange processes. Quite relevant turns out to be also the Hund coupling between the $`e_g`$ electrons and the $`t_{2g}`$ spins. In this latter regard, we explicitly checked that keeping $`J_H`$ finite between the $`e_g`$ electrons, but decoupling them from the $`t_{2g}`$ spins, no longer gives rise to the FFF phase at low values of the JT splitting (cf. the phase diagrams in Figs. 6 and 7). Instead at $`ϵ\simeq 0`$ a FFA phase is found, in agreement with the results of Ref. KK for a model which only considered $`e_g`$ electrons and no JT splitting. This indicates that the determination of the stable phase (at least) at small values of the JT energy must duly take into account the Hund coupling, thereby including the $`t_{2g}`$ levels. Secondly, a quantitative determination of the stability region for the FFA phase and of the value of the magnetic couplings is subordinate to the consideration of the $`J_H`$ term. Our work shares with Ref. HYMD the generic result that JT distortions strongly affect the magnetic structure. Nevertheless it is worth pointing out some differences.
In a certain respect our work is less ambitious, insofar as it does not attempt to determine the JT distortions, but rather imposes them as external parameters of the calculation. Actually we do not believe that such deformations can easily be determined by microscopic models, which should incorporate complex effects such as long-range Coulomb interactions, cation and anion sizes and tilts of the $`MnO_6`$ octahedra. On the other hand, realistic deformations as obtained from experiments can easily be imposed and the consequent local electronic structure can be determined exactly: orbital ordering results essentially from cooperative Jahn-Teller deformations. Moreover, and quite importantly for a quantitative determination of the magnetic coupling and of the stability of the magnetic phases, we here also take into account the electronic Coulomb repulsion. This interaction is perforce larger than the JT interaction and contributes to the insulating behavior as well as to the numerical values of the exchange couplings. Finally we showed that, using reasonable parameters, the experimental values of the magnetic couplings can easily be reproduced. Of course precise estimates depend on the knowledge of the various couplings entering the model, which are not always available either from experiments or from reliable first-principles calculations. However, calculating the magnetic couplings for various parameters and matching the numerical results with the experimentally obtained values provides useful connections between the involved parameters and sets limits on the poorly known physical quantities.
# 1 Introduction ## 1 Introduction Many signals of interest for tests of the Standard Model and search for new physics at the Linear Collider will be given by many-particle final states. It is therefore important to develop the calculation techniques and the tools necessary for the physics analysis of these phenomena, taking into account all the background effects and keeping under control all the relevant final-state correlations. In particular, the six-fermion signatures will be relevant to several subjects, such as intermediate-mass Higgs boson production, top-quark physics and the analysis of anomalous quartic gauge couplings. These topics are addressed in the present contribution. Numerical results are presented and discussed. The numerical calculations have been performed by means of a computer code that involves the algorithm ALPHA , for the automatic calculation of the scattering amplitudes, and a Monte Carlo integration procedure derived from the four-fermion codes HIGGSPV and WWGENPV , and developed to deal with six-fermion processes. ## 2 Intermediate-mass Higgs boson The search for the Higgs boson, that is carried on presently at LEP and Tevatron, will be also in the physics programme of future high-energy colliders, where the whole range of mass values allowed by the general consistency conditions for the Standard Model, that is up to $`1`$ TeV, can be explored. The current lower bound on the Higgs mass deduced from direct search at LEP is 95.2 GeV at 95 $`\%`$ C.L. , while the upper bound given by fits to the precision data on electroweak observables is 245 GeV at 95 $`\%`$ C.L. . The Linear Collider will not only be able to discover the Higgs boson, but it will also provide the possibility of making precision studies on its properties. It is then of great interest to make accurate predictions on the processes in which the Higgs boson can be produced at the LC, and to develop the tools for making simulations. In the mass range favoured by the present experimental information, that is between 100 and 250 GeV, the relevant signatures are four-fermion final states if the Higgs mass is below 130-140 GeV, and six-fermion final states if the Higgs mass is greater than 140 GeV. The processes of the first kind have been extensively studied in connection with physics at LEP, while those of the second kind have only recently been addressed . In this contribution, complete electroweak tree-level calculations for the processes $`e^+e^{}q\overline{q}l^+l^{}\nu \overline{\nu }`$, with $`q=u,d,c,s`$, $`l=e,\mu ,\tau `$ and $`\nu =\nu _e,\nu _\mu ,\nu _\tau `$ are presented. These processes are characterized by the presence of both charged and neutral currents and of different mechanisms of Higgs production involving Higgs-strahlung and vector boson fusion; moreover, QCD backgrounds are absent. The total cross-section is shown in fig. 1 as a function of the center-of-mass (c.m.) energy for three values of the Higgs mass, with suitable kinematical cuts, to avoid the soft-pair singularities. The off-shellness effects due to the finite widths of the gauge bosons and of the Higgs boson have also been studied by comparing the result obtained by means of the signal diagrams with the one in the narrow-width approximation. Deviations of the order of $`1015\%`$ have been found . Various distributions have been studied, after generating samples of unweighted events. The analysis is restricted in this case to the processes with $`l=e`$ and a luminosity of 500 fb<sup>-1</sup> is assumed. 
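As a back-of-the-envelope complement to the event-rate statements in this section, the small sketch below converts a cross section and an integrated luminosity into an expected event yield with its Poisson uncertainty; the 4 fb cross section in the example is a placeholder, not a value taken from the calculation.

```python
import math

def expected_events(sigma_fb, luminosity_fb, efficiency=1.0):
    """Expected yield N = sigma * L * efficiency, for sigma in fb and the
    integrated luminosity in fb^-1, with a sqrt(N) Poisson uncertainty."""
    n = sigma_fb * luminosity_fb * efficiency
    return n, math.sqrt(n)

# Placeholder example: a 4 fb signal with 500 fb^-1 gives about 2000 events.
print(expected_events(sigma_fb=4.0, luminosity_fb=500.0))
```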
The invariant masses of different systems of four fermions are plotted in fig. 2, including the effects of initial-state radiation (ISR) and beamstrahlung (BS) . The different sets of fermions correspond to the Higgs boson in different signal diagrams. It is interesting to observe that at 800 GeV the $`qqe^+e^{}`$ invariant mass gives a clean signal, not affected by ISR and BS, that can be traced back to the $`WW`$ fusion signal diagram. Other distributions can be considered in order to reveal the presence of the Higgs boson and to measure its properties . As a conclusion, the processes under consideration turn out to be of interest for the study of intermediate Higgs bosons. Thanks to the sums over quark, charged lepton and neutrino flavours, as well as the combined action of different production mechanisms, assuming a luminosity of $`500`$ fb<sup>-1</sup>/yr and a Higgs mass of, say, $`185`$ GeV, more than $`1000`$ events can be expected at a c.m. energy of $`360`$ GeV and more than $`2000`$ at $`800`$ GeV (see fig. 1). The complete calculation shows the relevance of background and off-shellness effects, and it is possible to exploit the features of the different signal diagrams to find at different energies suitable distributions that are sensitive to the presence and to the properties of the Higgs boson. ## 3 Top-quark physics in six-quark processes The study of $`t\overline{t}`$ production both at threshold and above at the Linear Collider will give the opportunity of making significant tests of QCD and to get important information through the determination of the electroweak properties of the top quark. The production of a $`t\overline{t}`$ pair gives rise to six fermions in the final state. The $`6f`$ signatures relevant to the study of the top quark in $`e^+e^{}`$ collisions can be summarized as follows: $`b\overline{b}l\nu _ll^{}\nu _l^{}`$ (leptonic, $`10\%`$ of the total rate), $`b\overline{b}q\overline{q}^{}l\nu _l`$ (semi leptonic, $`45\%`$), $`b\overline{b}+4q`$ (hadronic, $`45\%`$). Semi leptonic signatures have been considered in refs. . It is then of great interest to carefully evaluate the size of the totally hadronic, six-quark ($`6q`$) contributions to integrated cross-sections and distributions as well as to determine their phenomenological features. The $`6q`$ signatures of the form $`b\overline{b}+4q`$, where $`q=u,d,c,s`$ are considered in the present study and the results of complete electroweak tree-level calculations are presented. In particular the rôle of electroweak backgrounds and of ISR and BS are studied and the shape of the events is analysed to the end of isolating the QCD backgrounds. The integrated cross-section has been studied in the energy range between 350 and 800 GeV for different Higgs masses and has been compared with the signal contribution alone and with the result in the narrow-width-approximation, showing that the background and off-shellness effects are of the order of several per cent . The total electroweak cross-section has also been studied at the threshold for $`t\overline{t}`$ production as a function of the Higgs mass. Although the dominant effects in this case come from QCD contributions, as is well known , the electroweak backgrounds turn out to give a sizeable uncertainty, of the order of $`10\%`$ of the pure electroweak contribution, in the intermediate range of Higgs masses (see fig. 3), which is related to the fact that the Higgs mass is not known. 
The topology of the events has been studied by means of various event-shape variables, in order to study the possibility of isolating the top-quark signal from the QCD backgrounds. The pure QCD contributions have been analysed in ref. . In fig. 4 the thrust distribution of the electroweak contribution is shown at a c.m. energy of 500 GeV and with a Higgs mass of 185 GeV in the Born approximation (dashed histogram) and with ISR and BS (the solid histogram; in this case, the distribution is calculated after going to the c.m. frame). A luminosity of 500 fb<sup>-1</sup> is assumed and the invariant masses of the $`b\overline{b}`$ pair and of all the pairs of quarks other than $`b`$ are required to be greater than 10 GeV. Remarkable effects due to ISR and BS can be seen in this plot, where the peak in the thrust distribution is strongly reduced with respect to the Born approximation and the events are shifted towards the lower values of $`T`$, which correspond to spherical events. As a conclusion, at 500 GeV, in view of the results of ref. , the thrust variable is very effective in discriminating pure QCD backgrounds, also in the presence of electroweak backgrounds and of ISR and BS. ## 4 Anomalous gauge couplings The situation in which a Higgs boson with mass below 1 TeV is not found can be described by means of the electroweak effective lagrangian, as discussed in ref. . Different models of electroweak symmetry breaking can be parameterized by this effective lagrangian. The contributions at lowest order in the chiral expansion, if the $`SU(2)`$ custodial symmetry is assumed, are model-independent. At next-to-leading order, dimension-four operators are present, with parameters that depend on the model of symmetry breaking adopted. These operators can give rise to trilinear and quadrilinear couplings of the massive gauge bosons, that modify the standard ones contained in the Yang-Mills lagrangian for the gauge bosons. The dimension-four operators that give only quartic vertices, usually indicated as $`_4`$, $`_5`$, $`_6`$, $`_7`$ and $`_{10}`$, have been implemented in ALPHA. Anomalous $`4W`$, $`WWZZ`$ and $`4Z`$ vertices are provided by these terms. In the following only the two $`SU(2)`$-custodial symmetry conserving operators $`_4`$ and $`_5`$ are discussed. Their expressions in the unitary gauge are : $`_4`$ $`=`$ $`\alpha _4g^4\left({\displaystyle \frac{1}{2}}W_\mu ^+W^{+\mu }W_\nu ^{}W^\nu +{\displaystyle \frac{1}{2}}(W_\mu ^+W^\mu )^2+{\displaystyle \frac{1}{c_W^2}}W_\mu ^+Z^\mu W_\nu ^{}Z^\nu +{\displaystyle \frac{1}{4c_W^4}}(Z_\mu Z^\mu )^2\right)`$ $`_5`$ $`=`$ $`\alpha _5\left((W_\mu ^+W^\mu )^2+{\displaystyle \frac{1}{c_W^2}}W_\mu ^+W^\mu Z_\nu Z^\nu +{\displaystyle \frac{1}{4c_W^4}}(Z_\mu Z^\mu )^2\right)`$ (1) These anomalous quartic couplings have been studied by several authors at the loop level, where they contribute to radiative corrections to electroweak observables , and at tree-level in processes of gauge boson scattering and real gauge boson production . In a more realistic approach, where the gauge bosons are not real, signatures with at least six fermions in the final state have to be considered. For the present study, the processes $`e^+e^{}2q+2q^{}\nu _e\overline{\nu }_e`$, with $`q=u,c`$ and $`q^{}=d,s`$, have been considered, and a full tree-level calculation has been performed, by using the effective lagrangian containing the dimension-four operators mentioned above. 
Through the study of some event samples, several variables that are sensitive to the parameters $`\alpha _4`$ and $`\alpha _5`$ have been found. A set of kinematical cuts has thus been deduced to enhance the effects of anomalous couplings. For example, the cross-section obtained with this set of cuts is shown in fig. 5 as a function of the parameter $`\alpha _5`$ in the range $`(0.01,0.01)`$. The variables involved in the cuts, as indicated in the figure, are the invariant mass of the system of four jets, $`M(WW)`$, the angle $`\theta (W)`$ of one reconstructed $`W`$ (where a simple procedure is used to identify the $`W`$ boson from the quarks) and the invariant mass of the pair of jets with lowest transverse momentum. The limits of the $`1\sigma `$ experimental uncertainty around the value at $`\alpha _5=0`$ are also shown in fig. 5, by assuming a luminosity of 1000 fb<sup>-1</sup>. The sensitivity to this parameter with this set of cuts can be seen to be of the order of $`10^2`$. The cross-sections and the variables used in the cuts have been analysed also in the presence of ISR and BS. The above conclusions on the sensitivity to the anomalous couplings should not be modified by the inclusion of such effects as long as variables not involving the missing momentum are considered. ## 5 Conclusions The six-fermion final states will be among the most relevant new signatures at future $`e^+e^{}`$ linear colliders. In particular they are interesting for Higgs bosons in the intermediate mass range, for $`t\overline{t}`$ production and for the study of quartic anomalous gauge couplings. These subjects are addressed in the studies that are presented in this contribution. A Monte Carlo event generator has been developed for complete tree-level calculations of such processes at the energies of the Linear Collider. This code, that makes use of ALPHA for the calculation of the scattering amplitudes, has been adapted to deal with a large variety of diagram topologies, including both charged and neutral currents, so as to keep under control all the relevant signals of interest as well as the complicated backgrounds that are involved in six-fermion processes where hundreds of diagrams contribute to the tree-level amplitudes. The effects of initial-state-radiation and beamstrahlung are also included. The studies of Higgs boson production in the intermediate mass range, of $`t\overline{t}`$ production and of anomalous gauge couplings have shown the importance of complete calculations to keep under control all the background and finite-width effects, and to obtain simulations of all kinds of final-state distributions such as invariant masses, angular correlations and event-shape variables, that are essential both for the detection of the signals of interest and for the analysis of the properties of the particles under study. Acknowledgements The authors wish to thank Thorsten Ohl for his interest in this work and for useful discussions.
# A Sociological Study of the Optically Emitting Isolated Neutron Stars ## 1. Introduction The sample of the optically emitting Isolated Neutron Stars (INS) is not a rapidly growing one. In spite of non negligible observational efforts, no new objects have been positively detected in the last few years. The last new identification dates back to 1997 when HST resolved the counterpart of PSR1055-52 (Mignani et al., 1997). Studying the optical behaviour of INSs is certainly a challenging task. There are neither routine nor serendipitous discovery such as in radio and, at least up to a point, in X-rays, where a numerous community has ample access to observing facilities. At variance with radio and X-ray wavelengths, in the optical domain there are no instruments dedicated to the study of INS. Moreover, the tiny kernel left by a SN explosion is, by and large, not perceived as a potentially interesting object by the community of the optical astronomers. ## 2. INS Sociology All astronomical objects score differently along the electromagnetic spectrum, but INSs seem to be a rather extreme case. Very prominent in the radio domain, they undergo a minimum in the optical while the rise again in X-rays, only to reach another maximum in high-energy gamma-rays. Neutron stars are not glamorous optical emitters: they tend to be faint, point-like sources with flat spectra (when at all measured) and no lines. But these balls of iron, ideal for studying physics under extreme conditions, have fostered more Nobel prizes than any other celestial object. However, with their ultrathin atmosphere, they are hardly considered stars any more. Lacking prominent lines, they defy the traditional tools of the optical trade and would rather require ad hoc, unconventional approaches to unveil their peculiarities such as, e.g. the behaviour of elongated atoms. Moreover, their study calls for the most powerful optical telescopes, whose observing time is very much in demand for other hot topics in astronomy. Thus, Isolated Neutron Stars get, at best, few percent of the precious observing time of a big optical telescope, to be compared with a significant fraction of the observing time (if not the totality) of a radio one and a 10-20 % in X-rays. No wonder the family of X-ray emitting neutron stars is more numerous than that of the optical ones. (see e.g. Becker and Trümper, 1997) Table 1 presents a summary of the data available on INSs with an optical identification either secured on the basis of timing or proper motion (PM) or proposed on the basis of positional coincidence (Pos). The grand total remains 9 and recent efforts on PSR 1706-44 (Mignani et al, 1999) and on the newly discovered 16 msec pulsar, PSR 0537-6910 (Mignani et al, 2000), have not yet yielded positive identifications. Quite a lot of work went also into the timing of PSR0656+14 and Geminga (Shearer et al 1997,1998) but the low S/N of the ground based data has severely hampered the statistical significance of the results gathered so far. The optical behaviour of INSs is composite, ranging from non-thermal emission for the younger objects (Crab, PSR 0540-69, Vela and , possibly, PSR 1509-58), to mostly thermal, for the remaining, older, ones. Caraveo (1998) and Mignani (1998) have comprehensively reviewed the subject. Now we want to tackle the problem from a different point of view : sociology vs. physics and astronomy. Table 1 shows that the optically identified NSs are few and generally faint. 
Are these unfavourable characteristics enough to explain the very limited interest (or lack thereof) enjoyed by INS in the optical domain? ### 2.1. Does the appeal of a class of celestial sources depend upon their number? High energy gamma-ray astronomy offers an interesting example. While the bona fide NSs detected as gamma-ray sources are seven (Thompson et al, 1997), the high energy Astronomy community considers neutron stars amongst the most interesting objects in the sky and finds it perfectly sound to devolve quite a lot of observating time (and effort) to their quest. A different example, in the optical domain, could be that of gravitational lensing systems, which attracted, quite correcly, an enormous interest when numbering in the few. The same is true also for the MACHO events. ### 2.2. …or their brightness? The sky is full of faint targets which get their share of astronomical attention, irrespective of their consistency as a class. In fact, ever since Galileo, in astronomy the faintest objects, the ones at the limits of every telescope, are always the newest and most exciting. Optically identified INSs are comparable both in number and in brighness to the optically identified GRBs, to mention a very recent hot topic. However, GRBs do have lines and thus the classical astronomical tools can be immediately applied to them, making it worthwhile to obtain spectra of 26 mv objects. An inspection to Table 1 shows that faintness is not even a limitation for neutron stars. The best studied neutron star is certainly Geminga, which is also one of the very faintest, while comparatively little has been done on the Crab, by far the brightest NS, and the only accessible also to small telescopes. ## 3. The brightest and the dimmest Here we shall use Crab and Geminga as test cases to show that the understanding of a NS behaviour depends more on the interest it arises in the community than on anything else. ### 3.1. Crab Soon after the discovery of the pulsating star, a rough evaluation of its spectrum, proper motion and secular decrease were announced (see Nasuti et al, 1996 for a complete list of the references). The proper motion was eventually nailed down in 1977 (Wyckoff and Murray, 1977) but nothing was done to obtain a decent spectrum of an easy target nor to better assess its secular decrease. While the Crab was extensively, and wrongly, used as a textbook example of the multiwavelength behaviour of INSs, two signatures of the emitting mechanism(s), namely the pulsar spectrum and the decrease of its total brightness, were totally neglected, in spite of their potential interest for the understanding of the physics of the Crab pulsar. A reasonable spectrum was eventually obtained by Nasuti et al.(1996), showing a real flat power low continuum. Does this measurement exhaust the interest in the spectral behaviour of the Crab? Crab is certainly the only NS bright enough to allow a higher resolution spectral study, with the aim of looking for something unique to the special physics of the pulsar and its surroundings. On the contrary, obtaining high resolution pictures of the Crab Nebula and its pulsar is a rewarding exercise. This lead to a series of HST pictures which have been recently used also to measure anew the pulsar proper motion (Caraveo and Mignani, 1999), suggesting a possible link between the pulsar proper motion and the X-ray jet structure. 
While the study of the interaction of the pulsar with the nearby medium has been extensively carried out, the precise photometry of the pulsar was never pursued. This has left the pulsar secular decrease as an open issue of the physics of the Crab (see Nasuti et al, 1996 for a complete discussion). ### 3.2. Geminga The long chase for Geminga has been reviewed by Bignami and Caraveo (1996, and references therein) from its discovery, back in 1973 by NASA’s SAS II, to the HST era. The milestones in the Geminga chase have been: 1981, confirmation and positioning by COS-B; 1983, discovery of a possible X-ray counterpart by the Einstein Observatory; 1987, pinpointing of its possible optical counterpart, dubbed G”; 1992, discovery of the 237 msec periodicity in X-rays, followed by similar results in the $`\gamma `$-ray domain, thus linking the X and $`\gamma `$-ray sources (ROSAT and EGRET); 1993, measurement of the proper motion of G”, thus proving its INS origin; 1996, improvement of the $`\gamma `$-ray light curve when taking into account the proper motion (Mattox et al. 1996), thus linking G” to the source of $`\gamma `$-rays. After the first HST refurbishment mission, in 1993, Geminga has been extensively imaged: first, to measure the source parallactic displacement (Caraveo et al. 1996), then to collect multiband photometry data. These HST observations, confirming and refining difficult measurements with ground-based instruments, have resulted in the spectral distribution of Fig. 1 (Mignani et al, 1998). A broad feature, centered at $`\lambda =5998\AA `$ and with a width of 1,300 $`\AA `$, appears superimposed on the Rayleigh-Jeans continuum, as extrapolated from the soft X-rays. If interpreted as ion-cyclotron emission, this implies, for a pure H atmosphere, a B field of $`3.8\times 10^{11}G`$ (or $`7.6\times 10^{11}G`$ in the case of He, see Jacchia et al, 1999), not too far from the value of $`1.5\times 10^{12}`$ G obtained, theoretically, using Geminga’s period and period derivative. This is the first time that the magnetic field of an INS is directly measured. Moreover, the phenomenology of the source at high energies has been considerably enriched, owing to the very precise positioning of the optical counterpart. The possibility to link HST data to the Hipparcos reference frame yielded the position of Geminga to an accuracy of 0.040 arcsec, a value unheard of for the optical position of a pulsar, or of any object this faint (Caraveo et al, 1998). This positional accuracy has made it possible to phase together data collected over more than 20 years by SAS-2, COS-B and EGRET (Mattox et al, 1998). The many ”firsts” of Geminga have been summarized by Bignami (1998). Quite surprisingly, some of the key parameters of Geminga are now known with an accuracy better than that available for the Crab pulsar. This is due in part to the remarkable stability of this object, which made it possible to phase together such a long time span of $`\gamma `$-ray data, and in part to the continuous attention this object has been receiving from the astronomical community at large.
The only way to tell is to start a chase on promising sources, possibly at middle galactic latitudes, to avoid far away objects, preferably in non crowded regions. Imaging X-ray instruments, good resolution and high troughput are mandatory to play the game with a reasonable efficiency. The ESA XMM telescope, with its EPIC cameras, could play a significant role on this, as Einstein did for Geminga, 20 years ago. Caraveo, Bignami and Trümper (1997) have further elaborated on this idea applying the template to unidentified X-ray sources and recognizing a handful of radio quiet INS candidates. Once again the numbers are small, but they are going to grow rapidly. On XMM, EPIC will certainly provide plenty of serendipitous sources awaiting an identification. ## 4. Making Neutron Stars Optically Appealing So far, the studies of the optical behaviour of NS have been carried out mostly by ”amateur” optical astronomers. Indeed, those who develop a taste for the optical observations of Isolated Neutron Stars usually stumble into optical astronomy as a part of their multiwavelength approach aimed at the understanding of these fascinating objects. Will it be possible to render the topic more appealing to the optical community, showing that NS are not just curious objects? Will the optical Time Allocation Committees become more generous and invest observing time on the subject? Will the community be able (or willing) to develop the needed tools? Instruments devoted to timing are, of course, important, but timing, is just but one aspect of a multi-facet problem. We have to apply all the tools of classical optical astronomy and possibly develop new ones. The case of Geminga shows that endurance pays but the process needs to be accelerated. ## References Becker, W. & Trümper, J. 1997 A&A, 326, 682 Bignami, G.F.& Caraveo, P.A. 1996 ARA&A, 34, 331 Bignami, G.F. 1998 Advances in Space Research, 21,243 Caraveo, P.A. et al. 1996 ApJ, 461, L91 Caraveo, P.A., Bignami, G.F. & Trümper 1997 Caraveo, P.A. et al. 1998 A&A, 329, L1 Caraveo, P.A. 1998 Advances in Space Research, 21, 187 Caraveo, P.A.& Mignani, R. 1999, A&A, 344, 367 Hartman, R.C. et al. 1999 ApJS, 123, 79 Jacchia A. et al. 1999 A&A, 347, 494 Mattox, J.R., Halpern J.P. & Caraveo, P.A. A&AS, 120C, 77 Mattox, J.R., Halpern J.P. & Caraveo, P.A. ApJ, 493, 891 Mignani, R., Caraveo, P.A. & Bignami, G.F. 1997a, ApJ, 474, L51 Mignani, R., Caraveo, P.A. & Bignami, G.F. 1997b, The Messenger, 87, 43 Mignani, R., Caraveo, P.A. & Bignami, G.F. 1998, A&A, 332, L37 Mignani, R. 1998, in ”Neutron Stars and Pulsars: Thirty Years after the Discovery” Eds. N. Shibazaki et al, Universal Academic Press 335 Mignani, R., Caraveo, P.A. & Bignami, G.F. 1999, A&A, 343, L5 Mignani, R. et al 2000, A&A, in press Nasuti F.P. et al 1996, A&A, 314, 849 Shearer A. et al. 1997, ApJ, 487, L181 Shearer A. et al. 1998, A&A, 335, L21 Thompson, D.J. et al. 1997 Proc. of the Fourth Compton Symposium , eds. Dermer C.D. et al. AIP Conference Proceedings, 410, 39 Wyckoff, S. & Murray, C.A. 1977, MNRAS, 180, 717
## 1 Introduction The identification breakdown of soft X–ray surveys has clearly established that AGN are by far the dominant population of the X–ray sky (see for the most recent update), implying that about 60–70 % of the soft XRB has already been resolved into AGN at a limiting 0.5–2 keV flux of $`\sim `$ 10<sup>-15</sup> $`\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ . Thanks to the imaging capabilities of the ASCA and BeppoSAX detectors, the number of hard X–ray selected, optically identified objects is increasing and several dozens of identifications are now available, confirming that most of these sources are indeed AGN (, ). Their contribution to the 2–10 keV XRB is of the order of 20–25 % at a flux limit of $`\sim `$ 5 $`\times `$ 10<sup>-14</sup> $`\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ . The present findings provide support to the XRB synthesis models which, in the framework of the unified scheme, assume that a mixture of unabsorbed and absorbed AGN can account for almost the entire XRB spectral intensity in the 2–100 keV energy range (, , , ). Even though the models proposed so far differ in several of the assumptions concerning the AGN luminosity function and the X–ray spectral properties, they all agree in predicting that the fraction of obscured AGN is rapidly increasing towards high energies and faint fluxes. It turns out that these sources can be efficiently discovered with sensitive hard X–ray surveys. In order to quantitatively test AGN synthesis models for the XRB we have carried out an X–ray survey in the hardest band accessible with the present imaging detectors: the 5–10 keV BeppoSAX High Energy Large Area Survey (HELLAS). ## 2 Source Counts About 80 square degrees of sky have been surveyed in the 5–10 keV band using several BeppoSAX MECS () high Galactic latitude ($`\left|b\right|>20`$ deg) fields. All MECS pointings cover different sky positions. The fields were selected among public data (as of March 1999) and our proprietary data, excluding those fields centered on extended sources and bright Galactic objects. A robust detection algorithm has been used on the coadded MECS1, MECS2 and MECS3 (or MECS2 plus MECS3 after the failure of MECS1 in May 1997) images and the quality of the detection has been checked interactively for all the 147 sources of the final sample. Background-subtracted count rates were converted to 5–10 keV fluxes assuming a power law spectrum with energy index $`\alpha _E`$ = 0.6. A detailed description of the survey and the detection procedure is presented in . The 5–10 keV logN–logS of the HELLAS sources is reported in Figure 1. The source surface density at the survey limit (4.8 $`\times 10^{-14}`$ $`\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ ) is 17$`\pm `$6 sources per square degree. The error bars on the binned integral counts account for both the statistical and systematic uncertainties, the latter due to the lack of information on the intrinsic spectrum of the faint sources. The systematic error has been estimated assuming a range of spectral shapes (0.2 $`<`$ $`\alpha _E`$ $`<`$ 1.0) to convert count rates into fluxes. The HELLAS counts are in very good agreement with the recent ASCA results (, ) obtained in the 2–10 keV band (Figure 1). Given that both the HELLAS and the ASCA counts have been computed assuming the same prescription for the spectral slope, the comparison is straightforward. The dashed line in Figure 1 corresponds to the best–fit to the ASCA counts computed by and converted to the 5–10 keV band with $`\alpha _E`$ = 0.6.
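A minimal sketch of how an integral logN–logS like that of Figure 1 can be tabulated from a list of detected fluxes is given below; a single, flux-independent sky coverage is assumed for simplicity (the real HELLAS counts use the proper flux-dependent coverage of the MECS pointings), and the toy flux list is randomly generated, not the actual source list.

```python
import numpy as np

def integral_counts(fluxes, area_deg2, grid=None):
    """Integral counts N(>S) per square degree from a list of 5-10 keV
    fluxes (erg cm^-2 s^-1), assuming one flux-independent sky coverage."""
    fluxes = np.asarray(fluxes)
    if grid is None:
        grid = np.logspace(np.log10(fluxes.min()), np.log10(fluxes.max()), 8)
    return grid, np.array([(fluxes >= s).sum() / area_deg2 for s in grid])

# Toy example: 147 fluxes drawn from a power-law-like distribution over 80 deg^2.
rng = np.random.default_rng(0)
toy_fluxes = 4.8e-14 * (1.0 + rng.pareto(1.5, size=147))
grid, n = integral_counts(toy_fluxes, area_deg2=80.0)
print(n[0])   # surface density above the faintest grid flux
```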
The cumulative flux of the HELLAS sources accounts for a significant fraction (20–30 %) of the 5–10 keV XRB. The main uncertainty on the resolved fraction is due to the still poorly understood normalization of the extragalactic XRB spectrum as measured by the different satellites (see and for more details). ## 3 Hardness ratios The HELLAS sources are too faint to perform a spectral fit. In order to study their spectral properties we have computed for each source, whenever possible, two X–ray colours defined as HR1 = (M$`-`$L)/(M+L) and HR2 = (H$`-`$M)/(H+M), where H, M and L are the number of counts in the H=4.5–10 keV, M=2.5–4.5 keV and L=1.3–2.5 keV energy ranges. A wide range of spectral properties is evident from the analysis of the color–color diagram reported in Figure 2. For example, extremely hard sources, with nearly all the photons detected only above 4.5 keV, populate the upper right part of the diagram. The X–ray colours depend on the intrinsic spectrum and on the source redshift. The colours expected for a range of spectral shapes at different redshifts have been computed following the same procedure adopted by for their ASCA survey. The big star at HR1 = 0.25 and HR2 = 0 represents an unabsorbed power law spectrum with $`\alpha _E`$ = 0.4. The rightmost solid curve connecting the open squares indicates the colours for an AGN–like power law ($`\alpha _E`$ = 0.8) absorbed by an increasing value of the column density: log $`N_H`$ = 0, 22, 22.7, 23, 23.7, 24 at z=0. The dashed curve is the same but at z=0.4. The innermost solid curves have been computed at z=0 assuming the same model but allowing a fraction of 10% and 1% respectively to be unabsorbed. These models should be considered as indicative and indeed they do not cover the entire diagram. As first noted by , more complicated spectral shapes, such as those characterizing the sources in the upper left portion of the plot, might be present. ASCA follow–up pointed observations of two relatively bright HELLAS sources allowed us to collect enough counts to perform a more detailed spectral analysis. In both cases the data are consistent with a hard absorbed power law spectrum, making us confident in the robustness of the hardness ratio results. ## 4 Optical Identifications The cross–correlation of the HELLAS sample with various source catalogs provided 25 coincidences (19 AGN: 7 radio–loud, 12 radio quiet and 6 clusters of galaxies). In addition, optical spectroscopic follow–up observations have been performed and 22 new identifications (18 AGN) are available (, ). A detailed discussion of the optical identifications is beyond the purposes of this paper and will be presented by . The average X–ray spectrum as inferred from the softness ratio value (S$`-`$H)/(S+H) (where S is the number of counts in the 1.3–4.5 keV energy range) is plotted in Figure 3 versus the source redshift and optical classification. The most important results are the following: • The fraction of type 2 objects (including in this class Seyfert types 1.8–1.9–2.0 and quasars with a red optical continuum) is of the order of 40–45 %. This percentage is higher than in other optically identified samples of X–ray selected sources from ROSAT and ASCA surveys. • The degree of obscuration as inferred from the hardness ratio analysis described in the previous section seems to be uncorrelated with the optical reddening indicators, such as the optical line widths and line ratios.
• The softness ratio of a few high-luminosity, broad-lined quasars with blue optical colours implies X–ray absorption by a substantial column density (log $`N_H>`$ 23). • Optical and near–infrared photometry of ten HELLAS sources carried out at the Italian National Telescope Galileo (TNG) indicates that the optical colors of type 1.8–2.0 and red AGN are dominated by the host galaxy, though the obscured AGN contributes to some of the infrared emission (, , ). ## 5 A quick comparison with XRB synthesis models The observed 5–10 keV logN–logS is compared with the AGN number counts predictions in the same energy range (Figure 4). Given that in the HELLAS band a column as high as log $`N_H`$ = 23 is needed to significantly reduce the photon flux, the contribution of sources with a different degree of obscuration is reported split into three classes: relatively unobscured (log $`N_H<`$ 23), obscured Compton thin (23 $`<`$ log$`N_H<`$ 24) and almost Compton thick (log $`N_H>`$ 24). According to the model predictions, relatively unobscured sources outnumber absorbed objects. The expected fraction of Compton thin sources ranges from 25 to 35 % while the number of Compton thick sources is always negligible. Given that about one third of the AGN reported in Figure 3 have a softness ratio value consistent with a column density 23 $`<`$ log$`N_H<`$ 24 and none is Compton thick, the agreement with the model predictions is remarkable. These findings are at variance with the relatively high space density of Compton thick AGN in the local Universe , . We note, however, that the fraction of Compton thick objects has been estimated only for optically selected, nearby Seyferts and thus might not be representative of the more distant X–ray selected population.
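For reference, the colour indices used in Sections 3 and 4 are straightforward to evaluate from the background-subtracted counts in the L=1.3–2.5 keV, M=2.5–4.5 keV and H=4.5–10 keV bands; the sketch below also attaches a simple Poisson error propagation, an addition made here only for illustration and not described in the text.

```python
import numpy as np

def xray_colours(L, M, H, S=None):
    """HR1 = (M-L)/(M+L), HR2 = (H-M)/(H+M) and the softness ratio
    (S-H)/(S+H), with S the 1.3-4.5 keV counts (S = L + M if not given).
    Errors are naive Poisson propagation, for illustration only."""
    if S is None:
        S = L + M
    def ratio(a, b):
        r = (a - b) / (a + b)
        err = 2.0 * np.sqrt(a * b ** 2 + b * a ** 2) / (a + b) ** 2
        return r, err
    return {"HR1": ratio(M, L), "HR2": ratio(H, M), "softness": ratio(S, H)}

print(xray_colours(L=12.0, M=25.0, H=31.0))
```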
Even though detailed studies have been carried out only for a small HELLAS subsample the results do clearly indicate that: $``$ The broad band optical (U,B,V,R,I) and near–infrared (J, K) colors of HELLAS AGN, spectroscopically classified as type 1.8–2.0 Seyferts or red quasars, are undistinguishable from those of normal passive galaxies. $``$ There is an increasing evidence that the X–ray absorption properties and the optical appearance of AGN change with redshift and/or luminosity, suggesting that high luminosity, highly obscured quasars are present among optically broad lined blue objects. If confirmed by future observations, these findings would imply that the X–ray obscured AGN responsible for a large fraction of the hard XRB energy density could be “hidden” among objects which would be optically classified either as normal galaxies or as “normal” blue quasars. It is also worth noting that the redshift of obscured AGN could be in principle obtained by deep photometric observations of their host galaxies. A step forward in the study of the sources of the hard XRB will be achieved by the foreseen XMM and Chandra surveys. In particular the X–ray data will allow to obtain an unbiased estimate of the absorption distribution (a key parameter of the XRB models), while optical and near–infrared follow–up observations are likely to provide new insights on the nature of the absorbing gas (such as the dust–to–gas ratio) and on the morphology of the host galaxies of obscured AGN. As a final remark we note that, while type 2 quasars are probably numerous, it is likely that they have been elusive so far because have been searched for in the optical rather than in X–rays. In most luminous and/or distant absorbed quasars the Narrow Lines Region may be also obscured or lacking altogether (NGC 6240 is a classical example ), and therefore they are missed in optical spectroscopic surveys. Deep Chandra and XMM surveys will hopefully settle this long–standing issue, and then provide a key test for XRB synthesis models. Acknowledgements. We thank the BeppoSAX SDC, SOC and OCC teams for the successful operation of the satellite and preliminary data reduction and screaning, P. Giommi, R. Maiolino, L.A. Antonelli, S. Molendi, M. Mignoli, R. Gilli, G. Risaliti (the “HELLAS boys”) for the fruitful collaboration, M. Salvati, G.C. Perola and G. Zamorani for useful discussions. Partial support from ASI contract ARS–99–75 and MURST grant Cofin98–02–32 is acknowledged.
# A NEAR INFRARED PHOTOMETRIC PLANE FOR ELLIPTICALS AND BULGES OF SPIRALS ## 1 INTRODUCTION Amongst the most important issues in studying the formation of galaxies are the epoch and physical mechanism of bulge formation. There are two competing scenarios for the formation of bulges. One assumes that the bulge and disk form independently, with the bulge preceding the disk (e.g. Andredakis, Peletier & Balcells 1995, hereafter APB95), while the other suggests that the disk forms first and the bulge emerges later from it by secular evolution (Courteau, de Jong & Broeils 1996). However, recent analysis of a complete sample of early type disk galaxies (Khosroshahi, Wadadekar & Kembhavi 2000, hereafter KWK) has shown that more than one mechanism of bulge formation may be at work. This is corroborated by recent HST observations, which show that distinct bulge formation mechanisms operate for large and small bulges. Recent studies have revealed that the bulges of early type disk galaxies are old (Peletier et al. 1999). This is in agreement with semi-analytical results which claim that the bulges of field as well as cluster disk galaxies are as old as giant elliptical galaxies in clusters (Baugh et al. 1998). However, the formation mechanism of the bulges of late type spiral galaxies is likely to be very different. For example, Carollo (1999) found that although a small bulge may form at early epochs, it is later fed by gas flowing into the galaxy core, possibly along a bar-like structure caused by instabilities in the surrounding disk. The situation becomes more complicated for intermediate-sized bulges, with some having formed at early epochs and some relatively recently from gas inflows. Correlations among global photometric parameters that characterize the bulge – such as colors, scale lengths etc. – can be used to differentiate between the competing bulge formation models. These have the advantage of being independent of spectroscopic parameters such as velocity dispersion which are difficult to measure for bulges. Some of the photometric parameters such as the colors are measured directly, while others like the scale lengths require elaborate bulge-disk decomposition using empirical models for the bulge and disk profiles. Conventionally, radial profiles of bulges (de Vaucouleurs 1959), like those of elliptical galaxies (de Vaucouleurs 1948), have been modeled by the $`r^{1/4}`$ law. These are fully described by two parameters determined from the best fit model – the central surface brightness $`\mu _\mathrm{b}(0)`$ and an effective radius $`r_\mathrm{e}`$, within which half the total light of the galaxy is contained. In recent years an additional parameter has been introduced, with the $`r^{1/4}`$ law replaced by an $`r^{1/n}`$ law (Sersic 1968), where $`n`$ is a free parameter (e.g. Caon et al. 1993, APB95, KWK). The so-called Sersic shape parameter $`n`$ is well correlated with other observables like luminosity, effective radius, the bulge-to-disk luminosity ratio and morphological type (APB95). In particular, it has been demonstrated that a tight correlation exists between $`\mathrm{log}n`$, $`\mathrm{log}r_\mathrm{e}`$ and $`\mu _\mathrm{b}(0)`$ (KWK). In this letter we show that $`\mathrm{log}n`$, $`\mathrm{log}r_\mathrm{e}`$ and $`\mu _\mathrm{b}(0)`$ for elliptical galaxies are tightly distributed about a plane in logarithmic space, and that this photometric plane for elliptical galaxies is indistinguishable from the analogous plane for the bulges of early type disk galaxies.
Throughout this paper we use $`H_0=50\mathrm{km}\mathrm{sec}^{-1}\mathrm{Mpc}^{-1}`$ and $`q_0=0.5`$. ## 2 THE DATA AND DECOMPOSITION METHOD The analysis in this study is based on the near-IR K-band images of 42 elliptical galaxies in the Coma cluster (Mobasher et al. 1999). This is combined with a complete magnitude and diameter limited sample of 26 early type disk galaxies in the field (Peletier and Balcells 1997) from the Uppsala General Catalogue (Nilson 1973). Details about the sample selection, observations and data reduction are given in the above references. We chose to work with K band images because the relative lack of absorption-related features in the band leads to smooth, featureless light profiles which are convenient for extraction of global parameters. Extracting the global bulge parameters of a galaxy requires the separation of the observed light distribution into bulge and disk components. This is best done using a 2-dimensional technique, which performs a $`\chi ^2`$ fit of the light profile model to the galaxy image. A scheme is used in which each pixel is weighted by its estimated signal-to-noise ratio (Wadadekar, Robbason and Kembhavi 1999). We decomposed all the galaxies in our sample into a bulge component which follows an $`r^{1/n}`$ law with $$I_{bulge}(r)=I_b(0)e^{-2.303b_n(r/r_\mathrm{e})^{1/n}},$$ (1) where $`b_n=0.8682n-0.1405`$, and $`r`$ is the distance from the center along the major-axis. The disk profile is taken to be an exponential $`I_\mathrm{d}(r)=I_\mathrm{d}(0)e^{-(r/r_\mathrm{d})}`$, where $`r_\mathrm{d}`$ is the disk scale length and $`I_\mathrm{d}(0)`$ is the disk central intensity. Apart from the five parameters mentioned here, the fit also involves the bulge and disk ellipticities. The model for each galaxy was convolved with the appropriate point spread function (PSF). Details of the procedure used in the decomposition of the disk galaxies are given in KWK. We obtained good fits for all 42 of the 48 elliptical galaxies in the complete sample of Mobasher et al. (1999) for which we have data, and for 26 of the 30 disk galaxies in the complete sample of APB95. Twelve of the 42 elliptical galaxies show a significant disk ($`D/B\ge 0.2`$). Four disk galaxies did not provide good fits because of their complex morphology (see KWK) and these galaxies have been excluded from our discussion. In Figure 1 we plot histograms showing the distribution of the shape parameter $`n`$ for the two samples. For elliptical galaxies, $`n`$ ranges from 1.7 to 4.7 with a clear peak around $`n=4`$. This observation is in agreement with the fact that de Vaucouleurs’ law has historically provided a reasonable fit to the radial profile of most (but not all) elliptical galaxies. For the disk galaxies $`n`$ ranges from 1.4 to 5 with an almost uniform distribution within this range. The rather flat distribution in $`n`$ for the disk galaxies implies that de Vaucouleurs’ law will provide a poor fit to the bulges of these galaxies. This is indeed the case as demonstrated in de Jong (1996). ## 3 THE PHOTOMETRIC PLANE Study of the correlations among the parameters describing photometric properties of elliptical galaxies and bulges of spiral galaxies is essential in constraining galaxy formation scenarios. We find that for the spiral galaxy sample, the bulge central surface brightness is well correlated with $`\mathrm{log}n`$, with a linear correlation coefficient of -0.88 (Figure 2). The corresponding coefficient for the elliptical galaxies, also shown in Figure 2, is -0.79.
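As an illustration of the decomposition model described above, the following is a minimal one-dimensional sketch. It is not the fitting code actually used, which works on the two-dimensional images, includes the bulge and disk ellipticities, and convolves the model with the PSF; the function name and example parameter values are purely illustrative.

```python
import numpy as np

def bulge_disk_profile(r, I_b0, r_e, n, I_d0, r_d):
    """Major-axis surface brightness of a Sersic bulge (Eq. 1) plus an
    exponential disk, in linear intensity units; all radii in arcsec."""
    b_n = 0.8682 * n - 0.1405                  # approximation quoted in the text
    bulge = I_b0 * np.exp(-2.303 * b_n * (r / r_e) ** (1.0 / n))
    disk = I_d0 * np.exp(-r / r_d)
    return bulge + disk

# Example: a de Vaucouleurs-like bulge (n = 4) with a faint exponential disk
r = np.linspace(0.5, 30.0, 60)                 # arcsec
I_model = bulge_disk_profile(r, I_b0=1.0e3, r_e=5.0, n=4.0, I_d0=50.0, r_d=10.0)
mu_model = -2.5 * np.log10(I_model)            # relative surface brightness (mag)
```

A $`\chi ^2`$ fit of the kind described above would compare such a model, pixel by pixel and weighted by the signal-to-noise ratio, with the observed image.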
These relations are significant at the $`>99.99`$% level as measured by Student’s $`t`$ test. There is a weak correlation between bulge effective radius and $`n`$ for the disk galaxies (KWK) but such a correlation does not exist for the elliptical galaxies. An anti-correlation between the effective radius and mean surface brightness within the effective radius – known as the Kormendy relation (Kormendy 1977, Djorgovski & Davis 1987) – has been reported in elliptical galaxies. In Figure 3 we plot the mean surface brightness within the effective radius against effective radius for the two samples. The elliptical galaxies are clustered around the best fit line, $`\mu _\mathrm{b}(r_\mathrm{e})=2.57\mathrm{log}r_\mathrm{e}+14.07`$, with an rms scatter of 0.59 in mean surface brightness. A weaker relation with larger scatter exists for the bulges of the early type spiral galaxies, suggesting a formation history similar to that of elliptical galaxies. As we demonstrated in KWK, the bulges of late type spiral galaxies do not show a Kormendy type relation, suggesting a different formation history. It is possible that some of the scatter seen in the Kormendy relation is caused by the effect of a third parameter, which can only be $`n`$ in our scheme. We have applied standard bivariate analysis techniques to obtain the best fit plane in the space of the three parameters $`\mathrm{log}n`$, $`\mu _\mathrm{b}(0)`$ and $`\mathrm{log}r_\mathrm{e}`$. We find that the least scatter around the best fit plane is obtained by expressing it in the form $`\mathrm{log}n=A\mathrm{log}r_\mathrm{e}+B\mu _\mathrm{b}(0)+\mathrm{constant}`$, and minimizing the dispersion of $`\mathrm{log}n`$ as measured by a least-squares fit. The equation of the best fit plane for the elliptical galaxies is $`\mathrm{log}n`$ $`=`$ $`(0.173\pm 0.025)\mathrm{log}r_\mathrm{e}-(0.069\pm 0.007)\mu _\mathrm{b}(0)+(1.18\pm 0.05),`$ (2) while for the bulges of the disk galaxies it is $`\mathrm{log}n`$ $`=`$ $`(0.130\pm 0.040)\mathrm{log}r_\mathrm{e}-(0.073\pm 0.011)\mu _\mathrm{b}(0)+(1.21\pm 0.11).`$ (3) The errors in the best fit coefficients here were obtained by fitting planes to synthetic data sets generated using the bootstrap method with random replacement (Fisher 1993). The scatter in $`\mathrm{log}n`$ for the above planes is 0.043 dex (corresponding to 0.108 magnitude) and 0.058 dex (corresponding to 0.145 magnitude) respectively. The angle between the two planes is $`2.41\pm 1.99\mathrm{deg}`$; this error was also obtained by the bootstrap technique. This angle is only slightly larger than its $`1\sigma `$ uncertainty, consistent with the two planes being identical. We therefore obtained a new equation for the common plane, combining the data for the two samples, which is: $`\mathrm{log}n`$ $`=`$ $`(0.172\pm 0.020)\mathrm{log}r_\mathrm{e}-(0.069\pm 0.004)\mu _\mathrm{b}(0)+(1.18\pm 0.04).`$ (4) The smaller errors here are due to the increased size of the combined sample. A face-on and two mutually orthogonal edge-on views of the best fit plane for the two samples are shown in Figure 4. $`\mathrm{K}_1,\mathrm{K}_2`$ and $`\mathrm{K}_3`$ are orthonormal vectors constructed from linear combinations of the parameters of the photometric plane. An additional representation of this best fit plane with $`\mathrm{log}n`$ as the ordinate, together with the data points used in the fit, is shown in Figure 5. The rms scatter in $`\mathrm{log}n`$ here is 0.050 dex, corresponding to 0.125 magnitude.
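The plane fit described above amounts to an ordinary least-squares problem in which the dispersion of $`\mathrm{log}n`$ is minimized, with bootstrap resampling for the coefficient errors. The following is a schematic re-implementation under those assumptions, not the authors' actual code:

```python
import numpy as np

def fit_photometric_plane(log_n, log_re, mu_b0, n_boot=1000, seed=0):
    """Fit log n = A*log r_e + B*mu_b(0) + C, minimizing the scatter in
    log n, and estimate coefficient errors by bootstrap resampling."""
    X = np.column_stack([log_re, mu_b0, np.ones_like(log_re)])
    coeff, *_ = np.linalg.lstsq(X, log_n, rcond=None)

    rng = np.random.default_rng(seed)
    boot = np.empty((n_boot, 3))
    for i in range(n_boot):
        idx = rng.integers(0, len(log_n), len(log_n))   # resample with replacement
        boot[i], *_ = np.linalg.lstsq(X[idx], log_n[idx], rcond=None)

    rms_log_n = np.std(log_n - X @ coeff)               # scatter about the plane
    return coeff, boot.std(axis=0), rms_log_n
```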
This is comparable to the rms error in the fitted values of $`\mathrm{log}n`$, so any intrinsic scatter about the plane is small. It is possible that some of the observed correlation is produced by correlations between the fitted parameters of the bulge-disk decomposition. We have examined the extent of such an induced correlation, using extensive simulations of model galaxies constructed from the observed distributions of $`n`$, $`\mu _\mathrm{b}(0)`$ and $`r_\mathrm{e}`$ for both samples. We chose at random a large number of $`n`$, $`\mu _\mathrm{b}(0)`$ and $`r_\mathrm{e}`$ triplets from these distributions, with the values in each triplet chosen independently of each other. Such a random selection ensured that there was no correlation between the input parameters. Other parameters needed to simulate a galaxy, like disk parameters, were also chosen at random from the range of observed values. We added noise at the appropriate level to the simulated images and convolved the models with a representative point spread function. We then extracted the parameters for these galaxies using the same procedure as we adopted for the observed sample. Results from the fit to the simulated data do not show significant univariate or bivariate correlations between the extracted parameters. This indicates that the correlations seen in the real data are not generated by correlated errors. ## 4 DISCUSSION It is tempting to investigate the use of Equation 4 as a distance indicator. The main source of uncertainty here is that the two distance independent parameters, $`\mathrm{log}n`$ and $`\mu _\mathrm{b}(0)`$, are in fact correlated, leading to an increased error in the best-fit photometric plane, and hence in the estimated $`\mathrm{log}r_\mathrm{e}`$ values. This gives an error of 53% in the derived distance, which is similar to the error in other purely photometric distance indicators, but is significantly larger than the ∼20% error found in distances from the near-IR fundamental plane, using both photometric and spectroscopic data (e.g. Mobasher et al. 1999; Pahre, Djorgovski & de Carvalho 1998). However, data for the photometric plane are easy to obtain, as no spectroscopy is involved, and it should be possible to get more accurate distances to clusters by independently measuring distances to several galaxies in the cluster. The elliptical galaxies seem to form a more homogeneous population than the bulges of spiral galaxies, as revealed by the distribution of their shape parameter, $`n`$. Considering that the near-infrared light measures the contribution from the old stellar population in galaxies (i.e., the integrated star formation) and since the near-infrared mass-to-luminosity ratio $`(M/L)_K`$ is expected to be constant among the galaxies, the relatively broad range covered by the shape parameter, $`n`$, for bulges of spirals reveals differences in the distribution of the old population among these bulges. Environmental factors could play an important role here, since the elliptical galaxies in our sample are members of the rich Coma cluster while the spiral galaxies are either in the field or are members of small groups. While elliptical galaxies and bulges appear to be different in the context of the Kormendy relation, they are unified onto a single plane when allowing for differences in their light distribution, as measured by the shape parameter, $`n`$.
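Returning to the distance-indicator application mentioned above, the idea can be made concrete as follows: since $`n`$ and $`\mu _\mathrm{b}(0)`$ are distance independent, Equation 4 predicts a physical effective radius, which can be compared with the measured angular one. Schematically (the zero point and the units in which $`r_\mathrm{e}`$ enters Equation 4 follow the authors' calibration, which is assumed here rather than restated),

$$\mathrm{log}r_\mathrm{e}^{\mathrm{pred}}=\frac{\mathrm{log}n+0.069\mu _\mathrm{b}(0)-1.18}{0.172},\qquad d=\frac{r_\mathrm{e}^{\mathrm{pred}}}{\theta _\mathrm{e}},$$

where $`\theta _\mathrm{e}`$ is the angular effective radius in radians. The ∼53% uncertainty quoted above then propagates directly into $`d`$, since the derived distance scales linearly with the predicted $`r_\mathrm{e}`$.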
This supports the use of $`n`$ as a fundamental parameter in studying elliptical galaxies and bulges of early type disk galaxies, similar to the velocity dispersion in the fundamental plane of elliptical galaxies. The existence of a photometric plane for ellipticals and bulges of early-type disk galaxies further supports an independent study by Peletier et al. (1999), which found that bulges in early type disk galaxies and ellipticals have similar stellar content and formation epochs. It will be important to see whether the photometric plane for lenticulars also coincides with the plane for elliptical galaxies and bulges of early type disk galaxies, to explore whether lenticulars indeed provide an evolutionary link between elliptical galaxies and early type disk galaxies. The observed tightness of the photometric plane provides a strong constraint on formation scenarios, and its physical basis therefore needs to be understood. Recently Lima Neto, Gerbal & Marquez (1999) have proposed that elliptical galaxies are stellar systems in a stage of quasi-equilibrium, which may, in principle, have a unique entropy per unit mass – the specific entropy. It is possible to compute the specific entropy assuming that elliptical galaxies behave as spherical, isotropic, one-component systems in hydrostatic equilibrium, obeying the ideal-gas equation of state. Using the specific entropy and an analytic approximation to the three-dimensional deprojection of the Sersic profile, they predict a relation between the three parameters of the Sersic law. This relation defines a plane in parameter space which they call the entropic plane. The parameters used in their fit are not identical to ours, and therefore a comparison is not straightforward. The photometric plane may be useful in probing the bulge formation mechanism in galaxies. In this context it will be interesting to see whether the bulges of late type disk galaxies also share a single plane with the bulges of early type disk galaxies and ellipticals. If they do, then a single mechanism for bulge formation in all types would be indicated. But if bulges in early and late type disk galaxies are formed differently (Peletier et al. 1999, Carollo 1999) then a single plane is not expected. It will also be of interest to compare scaling laws which follow from the photometric plane with those implied by the existence of the fundamental plane (Djorgovski & Davis 1987) of elliptical galaxies. We thank S. George Djorgovski for useful discussions and Y. C. Andredakis, R. F. Peletier and M. Balcells for making their data publicly available. We thank an anonymous referee for comments that helped improve this paper. One of us, HGK, would like to thank Y. Sobouti and J. V. Narlikar for their help and support during this project. REFERENCES Andredakis, Y.C., Peletier, R.F., & Balcells, M. 1995, MNRAS, 275, 874 (APB95) Baugh, C.M., Cole, S., Frenk, C.S. & Lacey, C.G. 1998, ApJ, 498, 504 Caon, N., Capaccioli, M. & D’Onofrio, M. 1993, MNRAS, 163, 1013 Carollo, C. M. 1999, ApJ, 523, 566 Courteau, S., de Jong, R. S. & Broeils, A. H. 1996, ApJ, 457, L73 de Jong R. S. 1996, A&AS, 118, 557 de Vaucouleurs, G. 1948, Ann. d’Astrophys., 11, 247 de Vaucouleurs, G. 1959, Hdb. d. Physik, 53, 311 Djorgovski, S.G. & Davis, M. 1987, ApJ, 313, 59 Fisher, N.I. 1993, Statistical analysis of circular data, (Cambridge: Cambridge University Press) Khosroshahi H. G., Wadadekar, Y. & Kembhavi A. 2000, ApJ, in press (astro-ph/9911402) (KWK) Kormendy, J. 1977, ApJ, 217, 406 Lima Neto, G.
B., Gerbal, D., & Marquez, I. 1999, MNRAS, 309, 481 Mobasher, B., Guzman, R., Aragon-Salamanca, A., & Zepf, S. 1999, MNRAS, 304, 225 Nilson, P. 1973, Uppsala General Catalogue of Galaxies, (Uppsala: Astronomiska Observatorium) Pahre, M. A., Djorgovski, S. G., & de Carvalho, R. R. 1998, AJ, 116, 1591 Peletier, R.F., & Balcells, M. 1997, New Astronomy, 1, 349 Peletier, R.F., Balcells, M., Davies, R.L., Andredakis, Y. 1999, MNRAS, in press (astro-ph/9910153) Sersic, J.L. 1968, Atlas de galaxies australes. Observatorio Astronomica, Cordoba Wadadekar Y., Robbason R., & Kembhavi, A. 1999, AJ, 117, 1219
## 1 Introduction The Milky Way is our local laboratory of star formation and ISM physics. Some observations have so far only been possible in the Milky Way, e.g. of turbulent motions and magnetic fields in ISM clouds, or the search for dark matter candidates. Also, many observable quantities are directly linked to the history and structure of the Milky Way, e.g. metallicity gradients (Friedli 1999), and the distribution of OH/IR stars (see below). A better understanding of the detailed mass distribution and gas flow will therefore benefit many other studies. Morphology and type of the Milky Way are hard to recognize because of dust obscuration and the location of the sun within the disk. Spiral arm tangents can be identified in optical and radio brightness distributions (Sofue 1973), as well as in molecular gas (e.g., Dame et al. 1987), HII regions (Georgelin & Georgelin 1976), and other tracers; see Vallée (1995) for a complete list. The 3-kpc-arm is unusual in this respect: it is very bright in CO, but not traced by HII regions. It is therefore possible that it does not form stars at the present time. Maybe this is because it has an enhanced turbulent velocity or a higher differential shear (Rohlfs & Kreitschmann 1987). The 3-kpc arm and other peculiar gas dynamics in the galactic center are evidence for a bar in the bulge region. The other spiral arms outside the bar region within the so-called molecular ring appear to be on almost circular orbits since they do not show large non-circular motion in front of the galactic center. A comparison of many publications indicates that the Milky Way most likely has 4 spiral arms with a pitch angle of 12° (Vallée 1995). Another important aspect is that the true rotation curve of the Milky Way can be measured more precisely by a combination of a stellar mass model and a hydrodynamical model. When the rotation curve is determined from the gas dynamics alone, an axisymmetric model yields an uncommonly sharp peak at $`0.5\mathrm{kpc}`$ (Clemens 1985). An axisymmetric model of the stellar mass distribution, however, implies a mostly flat rotation curve there (Kent 1992). A bar naturally solves this issue by explaining the sharp peak with non-circular motion caused by orbits elongated along the bar (Binney et al. 1991). It also explains the observed nuclear disk of molecular gas inside $`200\mathrm{pc}`$. Finally, having a better model for the mass distribution and rotation curve, one also obtains better constraints on the amount of dark matter in the solar neighborhood. Since a short review cannot give complete coverage of all Milky Way models in the literature, we refer the reader to the following papers for similar and alternative models: Lin, Yuan & Shu (1969), Mulder & Liem (1986), Amaral & Lépine (1997), Wada et al. (1994), Weiner & Sellwood (1999), and Fux (1999a). For reviews about spiral structure see: Wielen (1974), Toomre (1977), Binney & Tremaine (1987), and Bertin & Lin (1996). ## 2 Using observations to yield a detailed mass model An early axisymmetric mass model for the inner Galaxy was constructed by Kent, Dame & Fazio (1991), by fitting parametric models to the photometric near-IR maps at $`2.4\mu \mathrm{m}`$ (K-Band) which were obtained by the Spacelab Infrared Telescope (IRT) and have a resolution of 1°.
By assuming a constant mass-to-light ratio for each component, Kent (1992) found various possible combinations of bulge, disk, and halo components to fit the observed mass distribution and kinematics, as well as the gaseous rotation curve outside the bulge region. The bulge region was excluded from the fit, because the gaseous rotation curve from Clemens (1985) shows signs of non-circular motion due to the presence of a bar. Since the dark halo is only observable through its gravitational interaction, Kent used the dark halo component to compensate for the mismatch between the gaseous rotation curve and the rotation curve implied by the disk and the mass distribution in the inner galaxy. He also defined a maximum disk model, which minimizes the amount of dark matter required by maximizing the mass-to-light ratio for the disk (he found $`M/L=1.3`$). This model is particularly interesting because it gives a lower limit for the amount of dark matter in the solar neighborhood. One reason why there is large uncertainty about the relative contribution of dark halo and disk is that not enough constraints on the vertical distribution of mass in the galaxy are known (Dehnen & Binney 1998). A major improvement over the model of Kent was made possible by the near-IR maps obtained with the DIRBE experiment on board the COBE satellite. By using a foreground dust screen model, Dwek et al. (1995) found that the bulge light is best fitted by a triaxial light distribution. This bar appears to be elongated with axis ratios $`1:0.33:0.22`$, the nearer end at positive longitudes, and inclined by about 20°. Later, an improved dust correction and an improved parametric model were obtained by Freudenreich (1998). Another line of models comes from a non-parametric deprojection method introduced by Binney & Gerhard (1996). No parametric model distribution has to be prescribed, but an initial model has to be given. The method can also be applied to a part of the galaxy, while the model outside that part stays fixed as specified by the initial parametric model. Binney, Gerhard, & Spergel (1997) applied the method to the DIRBE data and found that the result is robust against changes in the initial model and can reliably recover the 3D light distribution in the bulge. They also found additional light concentrations on the minor axis of the bar, which they attribute to spiral arm heads in that area. While the deprojection method is not able to recover spiral arm structure, tests show that spiral arms could produce clumps on the minor axis like the ones found in the DIRBE data. The only free parameters in this model are the mass-to-light ratio and the orientation of the bar. See Gerhard (1996, 1999) for a comparison of these models and further evidence for the bar. For further modeling, we expanded the mass model from Binney, Gerhard, & Spergel (1997) into a series of spherical harmonics to calculate the gravitational potential. The radial density profile in the galactic center follows a power law $`\rho \propto r^{-1.85}`$ (Becklin & Neugebauer 1968), and this peak in the density is not reproduced by the deprojection because of finite resolution and smoothing effects. We therefore corrected the zeroth order spherical harmonic (monopole) to reflect the power law. The resulting rotation curve falls off beyond $`4.5\mathrm{kpc}`$, because no dark halo has been included yet.
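For reference, the rotation curve contributed by such a spherical power-law cusp alone follows directly from the enclosed mass (a standard result, shown here only to make the effect of the monopole correction explicit):

$$M(<r)={\int }_0^r4\pi r^2\rho (r)dr\propto r^{1.15},\qquad v_c^2(r)=\frac{GM(<r)}{r}\propto r^{0.15},$$

so the spherical cusp by itself produces only a slowly rising, nearly flat contribution ($`v_c\propto r^{0.075}`$). This is consistent with the point made above that the sharp observed peak in the gaseous rotation curve is not reproduced by an axisymmetric stellar mass model and instead requires non-circular, bar-driven motions.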
By construction, the model should reflect the contribution of all luminous mass in the galaxy without any assumptions about how bulge and disk are built together, assuming the same M/L. For some models, we changed the monopole to allow for a flat rotation curve. ## 3 Terminal rotation curve The rotation curve of the Milky Way is not directly observable. In external galaxies, the rotation curve can be obtained by an observation of the Doppler shift along a slit across the center. Using parametric models for disk, bulge and halo, the rotation curve can then be decomposed. The parametric model for disk and bulge must follow the observed surface brightness, while the dark matter halo is only constrained by the fit to the rotation curve. Where density wave theory is applicable, i.e. for tightly wound spirals, further constraints are available for the dark matter contribution (Fuchs, Möllenhoff & Heidt 1998). In the Milky Way, the rotation curve can only be inferred indirectly from the terminal velocity, i.e. the maximum observed radial velocity within the galactic plane at a given longitude. Historically, the terminal rotation curve has been used to infer the radial mass distribution by assuming circular rotation. Inside the solar circle, the terminal velocity is equal to the circular rotation curve minus the motion of the local standard of rest (LSR). The motion of the LSR can be inferred from the streaming of stars measured by Hipparcos (Feast & Whitelook 1997), or from the proper motion of Sgr A (Backer & Stramek 1999). Both methods agree reasonably well, and no $`m=1`$ mode seems to be present in the center of the galaxy. The high peak in the terminal curve, gas at apparently forbidden velocities, and the large radial velocity of the 3-kpc-arm indicate non-circular motion in the inner galaxy. This led to the idea that our galaxy is actually barred, and the presence of a bar implies non-circular motion which cannot be corrected for without modeling (Gerhard & Vietri 1986). Fortunately, the non-axisymmetric light distribution of the bar can be extracted from photometric imaging due to perspective effects. These effects are only strong enough if the diameter of the stellar system is comparable to its distance. This excludes application of similar models to distant objects in the near future. A non-parametric method for the deprojection has recently been developed by Binney & Gerhard (1996). Its main advantage over parametric methods is that it allows one to recover more structural details and the precise radial mass distribution. But an application of this method to the DIRBE data yielded controversial results (Binney, Gerhard & Spergel 1997; in the following: BGS). The recovered radial and vertical structure of bar and disk provides a realistic view of the inner galaxy. But in addition, mass concentrations on the minor axis have been found in the deprojection, which seem to indicate that the inner galaxy has significant spiral structure as well. In Fig. 1 we compare the axisymmetric part of the BGS model (short dashed line) with the axisymmetric model by Kent (1992, long dashed line) and the ¹²CO data from Clemens (1985) and HI data from Burton & Liszt (1993). Note that Kent’s model includes a dark halo and assumes an LSR motion of $`V_0=234\mathrm{km}\mathrm{s}^{-1}`$, while the BGS model was plotted for $`V_0=220\mathrm{km}\mathrm{s}^{-1}`$ and does not include a halo.
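For definiteness, the standard relations underlying the terminal-velocity discussion above (implicit in the text rather than stated there) are, for gas on circular orbits,

$$v_r(l)=R_0\mathrm{sin}l\left[\mathrm{\Omega }(R)-\mathrm{\Omega }_0\right],\qquad v_{\mathrm{term}}(l)=v_c(R_0\mathrm{sin}l)-V_0\mathrm{sin}l,$$

where $`R_0`$ and $`\mathrm{\Omega }_0=V_0/R_0`$ are the galactocentric radius and angular velocity of the LSR, and the terminal velocity arises at the tangent point $`R=R_0\mathrm{sin}l`$. Bar-induced deviations from circular motion break these relations, which is why the observed peak in the terminal curve need not reflect the axisymmetric mass distribution.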
Increasing the LSR motion increases the gap between the model and the observed terminal curve at the solar circle (l = 90°), but this can be compensated for with a dark matter component. Both axisymmetric models fail to explain the peak in the rotation curve. Between 15° and 40° the BGS model fits the data better; further out, the contribution of the dark halo, which is already included in Kent’s model, is missing. When we assume a constant rotation curve beyond $`5\mathrm{kpc}`$ (l = 40°), the BGS model also matches the data between 40° and 90°. Closed orbits in the full BGS potential match the high peak in the rotation curve (Fig. 2). This confirms that the deprojection indeed provides a good description of the bar potential. Further out, the combination of bar and mass concentration on the minor axis complicates the picture. A hydrodynamical model of the gas flow in the BGS model is presented below. When the terminal curve for this model is compared to the data (Fig. 1; solid line), the result depends on the resolution of the gas model. At low resolution, hydrodynamical forces tend to depopulate the orbits responsible for the peak in the rotation curve. A model with higher resolution matches the peak better (Fig. 1; dotted line). Whether bars and spirals are independent dynamical entities is not yet clear and may vary from galaxy to galaxy. Numerical $`N`$-body simulations show that a bar can coexist with a spiral of much lower pattern speed (Sellwood & Sparke 1988). On the other hand, detailed modelling of the observed gas dynamics in NGC 1365 was possible using bar and spiral perturbations with a single pattern speed (Lindblad, Lindblad & Athanassoula 1996). Moreover, the spiral mode is most likely growing and decaying within a few rotations of the galaxy (Lin & Shu 1964). For our gas model, we assumed that these additional light concentrations can be used as a first approximation to the contribution of spiral arms in this region and that the spiral pattern is rotating with the same speed as the bar and is stationary. A more advanced deprojection technique combined with spiral arm modeling will be used to shed more light on this issue shortly (Bissantz & Gerhard 2000). This will also allow us to study models with independently rotating bar and spiral modes.
In general, further ILRs may exist, but there is no evidence for them in the BGS potential. The transition at the ILR is not smooth, but is accompanied by a shock in the gas flow. In strong bars, the shocks are straight lines, which in real galaxies have been identified with dust lanes and velocity jumps. Due to the non-axisymmetric distribution of gas caused by the shocks or spiral arms, the bar imposes a gravitational torque on the gas, which causes the gas to flow in (out) when the trailing spiral arm is inside (outside) corotation. For leading spiral arms, the torque would be reversed. For this reason, gas is slowly depleted in the corotation region and accumulates within the ILR to form a ring or disk of gas on $`x_2`$ orbits. For the hydrodynamics we chose the smoothed particle hydrodynamics (SPH) method (Benz 1990; Steinmetz & Müller 1993). In SPH a continuous gas distribution is approximated by a spatially smeared out particle distribution. The advantage of this method is that self-gravity can easily be included. Further numerical details of this simulation are given in Englmaier & Gerhard (1999). In addition to the free parameters from the deprojection, our gas model needs only three additional parameters: the corotation radius or, equivalently, the pattern speed, the effective sound speed, and the LSR motion for the calculation of (lv) diagrams. Our best model is shown in Fig. 3. The corotation radius in our model is tightly constrained by the observed (lv)-diagram as follows. Since the 3-kpc-arm is clearly in non-circular rotation, it must be inside the corotation radius. By comparison, the 4-armed spiral pattern in the molecular ring between 4 and $`7\mathrm{kpc}`$ shows no large deviation from circular motion. This observation can only be reproduced in our model if the corotation radius is between 3 and $`4\mathrm{kpc}`$. The angular extent of the 3-kpc-arm is best reproduced for corotation at $`3.4\mathrm{kpc}`$. The $`M/L`$ ratio and the LSR motion are fixed by fitting the model to the observed terminal velocity curve. In practice, the circular rotation velocity of the LSR is fixed at $`200\mathrm{km}\mathrm{s}^{-1}`$ times a scaling constant which depends on $`M/L`$. The final value for the LSR motion is $`208\mathrm{km}\mathrm{s}^{-1}`$ after the best model fit has been determined. While this value is lower than the current best estimate for the LSR motion, we note that the $`M/L`$ value and other model details do not depend strongly on this parameter. The difference in the terminal curve can be compensated for by a dark halo. The gas flow model confirms the allowed range for the bar inclination found by BGS (25° ± 10°). Models with 20° or 25° yield a better overall fit than models with 15° or 30°. A better model for the stellar spiral arms may improve this situation. ## 5 Identification of spiral arms in the (lv)-diagram A full sky survey of emission from atomic hydrogen and molecular species like ¹²CO allows mapping of large scale structure such as spiral arms (Fig. 4). The longitude-velocity diagram (lv-diagram) shows traces of spiral arms as crowded regions. By assuming circular rotation, one could map the observed (lv)-positions back to real space; however, this leads to serious errors close to tangential points (Burton 1971). Nevertheless, with some success, the principal spiral arm structure has been inferred from lv-diagrams.
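A short numerical sketch of this (lv) mapping may be useful. This is a hypothetical helper, not part of the published model: the value $`R_0=8\mathrm{kpc}`$ is an assumed solar galactocentric radius, and a flat rotation curve at the adopted LSR speed is used purely for illustration.

```python
import numpy as np

R0 = 8.0                      # kpc, assumed solar galactocentric radius
V0 = 208.0                    # km/s, LSR circular speed adopted in the text

def v_lsr(l_deg, d_kpc, vc=lambda R: V0):
    """LSR radial velocity of gas at longitude l and distance d from the Sun,
    assuming purely circular rotation with speed vc(R)."""
    l = np.radians(l_deg)
    R = np.sqrt(R0**2 + d_kpc**2 - 2.0 * R0 * d_kpc * np.cos(l))
    return (vc(R) / R - V0 / R0) * R0 * np.sin(l)

# Terminal velocity at l = 30 deg: tangent point at d = R0*cos(l)
print(v_lsr(30.0, R0 * np.cos(np.radians(30.0))))   # 104 km/s for a flat curve

# Pattern speed implied by corotation at 3.4 kpc, if the curve is still
# roughly flat there at ~208 km/s (an assumption): ~61 km/s/kpc
print(V0 / 3.4)
```

The last number is consistent with the ∼60 km/s/kpc bar pattern speed quoted later in the text.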
First, this was done by Oort, Kerr & Westerhout (1958); later, the 4-armed structure was first claimed by Georgelin & Georgelin (1976) using the (lv)-diagram for large HII-regions. This picture was later refined by Caswell & Haynes (1987), Lockman (1989), and others. In Fig. 3 we compare various known HII-regions and giant molecular clouds from the literature with our model. HII-regions are a particularly good tracer of spiral arms, as is known from observations of external galaxies. The 4 spiral arms are also prominent in the so-called molecular ring between 4 and $`7\mathrm{kpc}`$. This area corresponds to a bright band of ¹²CO emission between about 30°–60° on the northern side of the galaxy (left in Fig. 4) and −(30°–60°) on the southern side. As Solomon et al. (1987) pointed out, the spiral arm tangent at around +30° is actually split into two components at 25° and 30°. A summary of available spiral arm tangent data is listed in Table 1. The molecular emission also shows an arm apparently not traced by HII-regions: the 3-kpc-arm. This arm shows large non-circular motion, as it passes in front of the galactic center (where it appears in absorption) with $`+54\mathrm{km}\mathrm{s}^{-1}`$ radial velocity. All other arms show much less radial velocity in front of the galactic center. Our gas model qualitatively reproduces many of the prominent features in the observed (lv)-diagram (Fig. 4). The 4 spiral arms are found to be embedded in the molecular ring with about the right tangential directions (see Table 1). Our model, however, assumes perfect point symmetry in the gas, while the galaxy is not perfectly symmetric. Hence a perfect match cannot be expected from such an idealized model. Inside the molecular ring, gas is depleted in the corotation region and the spiral arms show a gap (see Fig. 3). Within corotation the spiral pattern continues, followed by the usual gas flow configuration in a barred galaxy (see previous section). One of the spiral arms is similar to the 3-kpc-arm in the galaxy, alas with too small an expansion velocity. In this region, the gas flow may be disturbed by the stellar spiral arms. In a modified model, where we approximate the spiral arm gravity by the gaseous model spiral arms, but spatially smeared out to account for wider stellar spiral arms, the 3-kpc-arm can be reproduced quantitatively (Englmaier & Gerhard 1999). Our model also predicts the location of the arm corresponding to the 3-kpc-arm on the far side of the galaxy. It happens to fall close to the molecular ring, and indeed this arm tangent was found to be split into two components by Solomon et al. (1987) and is also visible at l = 25°, 30° in Fig. 4. A similar position for the far 3-kpc-arm was suggested by Sevenster (1999), but see Fux (1999a) for an alternative explanation. A nuclear disk with $`200\mathrm{pc}`$ radius is formed in the model by gas on $`x_2`$-orbits. The size and rotational velocity of this disk depend on the enclosed mass, which has been adjusted via the central density cusp profile. While the disk matches the observed CS emission (Stark et al. 1991), only part of the disk is occupied by gas dense enough to show CS emission. ## 6 OH/IR stars as a fossil dynamical record Sevenster (1997, 1999) made a complete sample of the OH/IR stars in the bulge region between −45° ≤ l ≤ 10° and |b| ≤ 3°. The OH/IR stars are giant stars that lose matter in the so-called asymptotic giant branch (AGB) superwind phase. While these stars are quite old, i.e.
$`1`$ to $`8\mathrm{Gyr}`$, they quickly go through an evolutionary phase characterized by OH maser emission in the IR. The maser emission lasts only for $`10^{5-6}\mathrm{yr}`$, which is short compared to dynamical timescales. The sample of OH/IR stars therefore provides a snapshot of the dynamically evolved star formation regions $`1`$ to $`8\mathrm{Gyr}`$ ago. Sevenster (1999) found that the lv-diagram for the sample shows a striking correlation between the OH/IR stars and the 3-kpc-arm observed in ¹²CO. A young group of about 100 to $`350\mathrm{Myr}`$ old OH/IR stars follows this arm very nicely, while the older OH/IR population is more evenly distributed. This appears to be in contradiction with dynamics, since a circular orbit at $`3\mathrm{kpc}`$ takes $`2\pi r/v\approx 80\mathrm{Myr}`$ to complete. Any trace of the spatial distribution of star formation would have been wiped out. However, if these stars were formed in one of the spiral arms, which itself rotates, then only the relative orbit between stars and arm would matter. For a constant pattern speed equal to the bar pattern speed, say $`\mathrm{\Omega }\approx 60\mathrm{km}\mathrm{s}^{-1}\mathrm{kpc}^{-1}`$, the orbit in the rotating frame of the arm would take about $`2\pi r/(v-\mathrm{\Omega }r)\approx 0.5\mathrm{Gyr}`$ to complete. This much longer timescale may allow stars formed at the same place to generate a distribution similar to the observed one. Furthermore, the 3-kpc-arm is not a strong shock, making it possible for stars formed within it to drift away rather slowly. In order to estimate the dynamical constraints of our model on the observed OH/IR star distribution, we picked our best model and selected gas particles which are in compressed areas, i.e. in spiral arm shocks (see Fig. 5). We evolved these particles as test particles in the background potential of the model, and plotted snapshots of the particle distribution at later times. It turns out that, although OH/IR stars may form a transient arm-like feature, it will only last for a few $`100\mathrm{Myr}`$. In Fig. 6, we show the lv-diagram for $`100\mathrm{Myr}`$ old stars. There is a 3-kpc-arm-like distribution of stars, but these stars come from the corotation region at the minor axis, not the 3-kpc-arm. Just inside corotation, a star formed from the gas moves faster than the bar pattern, and this gives rise to a Coriolis force pulling the star inwards. A group of freshly formed but isolated stars would form a stream which may be coincident with the 3-kpc-arm. As we discussed above, the 3-kpc-arm does not contain HII-regions and presumably does not form stars because it is dynamically hot. Stars which are formed in the 3-kpc-arm are quickly distributed on closed orbits. It is therefore possible that OH/IR stars are indeed distributed in rings indicating radii of more compressed gas in our model; these orbits, which are not circular, may be indirectly correlated with the 3-kpc-arm, because arms which are driven by a bar often start with a local density enhancement and strong star formation on the bar major axis. However, we would expect those rings to be inclined with respect to the 3-kpc-arm, unless the 3-kpc-arm is not stationary. A similar result was found by Fux (1999b), where the observed OH/IR star distribution could be reproduced for about $`40\mathrm{Myr}`$ old stars. The OH/IR stars in his model originate at the far end of the bar in an area of enhanced density.
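As a quick numerical check of the timescales quoted above (assuming, for illustration, a circular speed of about 220 km/s at 3 kpc and the ∼60 km/s/kpc pattern speed mentioned in the text):

```python
import numpy as np

KPC_KM = 3.086e16            # km per kpc
MYR_S  = 3.156e13            # seconds per Myr

r, v, omega_p = 3.0, 220.0, 60.0       # kpc, km/s, km/s/kpc (illustrative values)

t_inertial = 2 * np.pi * r * KPC_KM / v / MYR_S                  # ~ 80 Myr
t_pattern  = 2 * np.pi * r * KPC_KM / (v - omega_p * r) / MYR_S  # ~ 0.5 Gyr
print(round(t_inertial), round(t_pattern))                       # 84, 461 (Myr)
```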
Contrary to our model, Fux’s model has the additional advantage that the 3-kpc-arm changes rapidly and thus the gas is on more ballistic orbits. He did not show any evidence, however, that the coincidence of stars and gas can be maintained over multiple rotations, as required by the observations. While both models are optimized to fit various aspects of the Milky Way, they both suggest that OH/IR stars are not directly tracing spiral arms. We therefore consider it more likely that the stars formed close to corotation, where timescales are comparable to the age of these stars. It will be interesting to see whether further evidence can be found to show that OH/IR stars follow closed orbits and whether these favored orbits have changed with time. ## 7 Discussion We presented a coherent description of the structure and gas dynamics in the Milky Way, reaching from small ($`100\mathrm{pc}`$) to large (kpc) scales. Our model provides a link between photometric and gas dynamical observations, but is not intended to be a self-consistent treatment of the formation and evolution of the Milky Way. Idealized assumptions about symmetry and in the description of the ISM have been made to isolate generic from peculiar features. We find that the 3-kpc-arm can be explained qualitatively by the forcing of the bar. The bar may also be responsible for some of the spiral structure observed in the gas. Strong bars can drive spiral arms to and somewhat beyond the outer Lindblad resonance (Mulder 1986). The deprojection of the near-IR bulge light distribution introduced additional structure on the minor axis of the bar at around corotation. When this structure is included in the model, four spiral arms are driven in the gas, similar to the observed spiral arm structure in this region. The pattern is very similar to a model by Mulder & Liem (1986), but the location of the sun is different. Interestingly, Mulder (1986) mentions that this pattern can be driven by a strong bar or by a weak bar plus spiral arms beyond corotation. When the gravity of the spiral arms is included in the model, the 3-kpc-arm can also be explained quantitatively. This finding, and in addition our inability to find a fitting model for the 3-kpc-arm in models without the minor axis structure, demonstrates that the inner galaxy is affected by both bar and spiral arms. While spiral arms in barred galaxies usually start at the end of the bar, they do not need to be driven by the bar (Sellwood & Sparke 1988), and might be dynamically independent with a much lower pattern speed. No steady state, but maybe a periodic steady state, can be found in the gas flow of such a model. Independent stellar spiral arms further complicate the gas dynamics to some extent, because they cause non-linear velocity jumps in the gas and will not just add to the structure formed by the bar. From density wave theory (Lin & Shu 1964) we know that spiral arms are not stationary but form and decay in a few rotational periods. With a lower pattern speed, the spiral pattern would extend to larger radii. A model for the galaxy where spiral arms and bar have different pattern speeds has yet to be constructed. Considering all these objections, it is even more surprising how well our model explains the spiral structure. Both the bar and the spiral pattern impose similar constraints on the orientation of the pattern, i.e. the phase angle.
Nevertheless, this overlap might be a mere coincidence and a better treatment of bar and spiral mode is underway to investigate other possibilities (Bissantz et al., in prep.). ## Acknowledgments I want to thank O. Gerhard for his collaboration on this project. This work was supported by the Swiss NSF grants 21-40’464.94 and 20-43’218.95. References Amaral L.H., Lépine J.R.D., 1997, MNRAS, 286, 885 Backer D.C., Stramek R.A., 1999, ApJ, 524, 805 Becklin E.E., Neugebauer G., 1968, ApJ, 151, 145 Benz W., 1990, in The Numerical Modelling of Nonlinear Stellar Pulsations, ed. Buchler J.R., p. 269, Dordrecht, Kluwer Bertin G., Lin C.C., 1996, MIT Press, Cambridge, Mass. Beuermann K., Kanbach G., Berkhuijsen E.M., 1985, A&A, 153, 17 Binney J.J., Gerhard O.E., Stark A.A., Bally J., Uchida K.I., 1991, MNRAS, 252, 210 Binney J.J., Gerhard O.E., 1996, MNRAS, 279, 1005 Binney J.J., Gerhard O.E., Spergel D.N., 1997, MNRAS, 288, 365 Binney J.J., Tremaine S., 1987, Princeton University Press Bissantz N., Gerhard O.E., 2000, in prep. Bloemen J.B.G.M., Deul E.R., Thaddeus P., 1990, A&A, 233, 437 Burton W.B., 1971, A&A, 10, 76 Burton W.B., Shane W.W., 1970, IAU Symposium 38, eds. Becker W., Contopoulos G., Reidel, Dordrecht, p. 397 Burton W.B., Liszt H.S., 1993, A&A, 274, 765 Chen W., Gehrels N., Diehl R., Hartmann D., 1996, A&AS, 120, 315 Clemens D.P., 1985, ApJ, 295, 422 Caswell J.L., Haynes, R.F., 1987, A&A, 171, 261 Cohen R.J., Cong H., Dame T.M., Thaddeus P., 1980, ApJ, 239, L53 Dame T.M., Elmegreen B.G., Cohen R.S., Thaddeus P., 1986, ApJ, 305, 892 Dame T., et al., 1987, ApJ, 322, 706 Dehnen W., Binney J.J., 1998, MNRAS, 294, 429 Downes D., Wilson T.L., Bieging J., Wink J., 1980, A&AS, 40, 379 Dwek E., et al, 1995, ApJ, 445, 716 Englmaier P., Gerhard O.E., 1999, MNRAS, 304, 512 Feast M., Whitelook P., 1997, MNRAS, 291, 683 Freudenreich H.T., 1998, ApJ, 492, 495 Friedli D., 1999, in The Evolution of Galaxies on Cosmological Timescales, eds. Beckman J.E., Mahoney T., ASP Conf. Series, astro-ph/9903143 Fuchs B., Möllenhof C., Heidt J., 1998, A&A, 336, 878 Fux R., 1999a, A&A, 347, 77 Fux R., 1999b, in The Evolution of Galaxies on Cosmological Timescales, eds. Beckman J.E., Mahoney T., ASP Conf. Series, astro-ph/9908091 Georgelin Y.M., Georgelin Y.P., 1976, A&A, 49, 57 Gerhard O.E., 1996, in: Unsolved Problems of the Milky Way, IAU Symp. 169, eds. Blitz L., Teuben P., p. 79 Gerhard O.E., 1999, in Galaxy Dynamics, eds. Merritt D.R., Valluri M., Sellwood J.A., ASP Conf. Series, p. 307 Gerhard O.E., Vietri M., 1986, MNRAS, 223, 377 Grabelsky D.A., Cohen R.S., Bronfman L., Thaddeus P., 1987, ApJ, 315, 122 Hayakawa S. et al, 1981, A&A, 100, 116 Henderson A.P., 1977, A&A, 58, 189 Kent S.M., 1992, ApJ, 387, 181 Kent S.M., Dame T.M., Fazio G., 1991, ApJ, 378, 131 Lin C.C., Shu F.H., 1964, ApJ, 140, 646 Lin C.C., Yuan C., Shu F.H., 1969, ApJ, 155, 721 Lindblad P.A.B., Lindblad P.O., Athanassoula E., 1996, A&A, 313, 65 Lockman F.J., 1979, ApJ, 232, 761 Lockman F.J., 1989, ApJ Suppl., 71, 469 Mulder W.A., 1986, A&A, 156, 354 Mulder W.A., Liem B.T., 1986, A&A, 157, 148 Oort J.H., Kerr F.T., Westerhout G., 1958, MNRAS, 118, 379 Rohlfs K., Kreitschmann J., 1987, A&A, 178, 95 Schmidt M., 1965, Bull. Astr. Inst. 
Netherlands, 13, 15 Sevenster M., 1997, thesis, Leiden Sevenster M., 1999, MNRAS, in press, astro-ph/9907319 Sellwood J.A., Sparke L.S., 1988, MNRAS, 231, 25 Sofue, 1973, PASJ, 25, 207 Solomon P.M., Sanders D.B., Rivolo A.R., 1985, ApJ, 292, 19 Solomon P.M., Rivolo A.R., Barrett J., Yahil A., 1987, ApJ, 319, 730 Stark A.A., Bally J., Gerhard O.E., Binney J.J., 1991, MNRAS, 248, 14 Steinmetz M., Müller E., 1993, A&A, 268, 391 Toomre A., 1977, ARA&A, 15, 437 Vallée J.P., 1995, ApJ, 454, 119 Wada K., Taniguchi Y., Habe A., Hasegawa T., 1994, ApJ, 437L, 123 Weaver 1970, in: The Spiral Structure of Our Galaxy, IAU Symp. 38, eds. Becker W., Contopoulos G., Reidel, Dordrecht, p. 126 Weiner B.J., Sellwood J.A., 1999, ApJ, 524, 112 Wielen R., 1974, PASP, 86, 341
# The Angular Momentum Evolution of Very Low Mass Stars ## 1 Introduction The study of stellar rotation can provide insights into a variety of interesting subjects in the fields of star formation and stellar structure and evolution. The observed rotation velocities and rotation periods of open cluster stars as a function of mass and age yield clues about the star formation process, the internal transport of angular momentum, the loss of angular momentum through magnetized stellar winds, and the origin and generation of stellar magnetic fields. In addition, rotation can drive mixing not present in standard stellar models with important consequences for the observed surface abundances of stars. We are now able to observe significant samples of stars down to the hydrogen burning limit in open clusters (e.g. NGC 2420, von Hippel et al. 1996), globular clusters (e.g. 47 Tucanae, Santiago, Elson & Gilmore 1996), and the field (e.g. Tinney, Mould & Reid 1993). We have also been able to observe rotation in these stars, using both spectroscopy to determine $`v\mathrm{sin}i`$ (Kraft 1965, Stauffer et al. 1997a), and photometry to monitor spot modulation on the stars (Barnes et al. 1999, Prosser et al. 1995) and thereby determine rotational periods. This plethora of information about the rotation of low mass stars has been a great boon to the study of these stars for a number of reasons. First of all, the rotation rates of stars on the main sequence are determined by their pre-main sequence evolution, so that by studying the rotational evolution of low mass stars, we can investigate the early stages of stellar evolution. Second, stellar magnetic phenomena are related to stellar rotation. The evolution of the rotation rates is largely determined by angular momentum loss from a magnetized stellar wind (Kawaler 1988, Weber & Davis 1967). Stellar rotation is found to correlate with chromospheric activity and other magnetic tracers (for a review see Hartmann & Noyes 1987), which lends support to the idea that rotation plays a crucial role in the generation of stellar magnetic fields, through the operation of a dynamo. In recent years there have been a large number of observational and theoretical studies of the angular momentum evolution of low mass stars (see Krishnamurthi et al. 1997 for a review). These studies have focused on solar analogues which have the most extensive observational database. However, the majority of the moment of inertia of solar analogues is in the radiative interior, so theoretical predictions for their angular momentum evolution are strongly influenced by the treatment of internal angular momentum transport. In this paper we examine the rotational properties of stellar models of lower mass stars where the radiative core provides a smaller (or even nonexistent) fraction of the moment of inertia. We will show that the study of these low mass stars can provide valuable clues about angular momentum loss and the distribution of initial conditions that depend only weakly on the treatment of internal angular momentum transport. We begin with a discussion of the important ingredients for theoretical models of stellar angular momentum evolution. The rotational evolution of a low mass star is determined by four factors. The first factor is the star’s initial angular momentum. 
Young low mass stars are fully convective, and helioseismic data indicate that the rotation rate in the solar convective zone is independent of radius; as a result, the initial angular momentum is fully specified by the initial rotation period. Observed T Tauri star rotation periods are between 2 and 16 days, with an average of 9.54 days and a median of 8.5 days (Choi & Herbst 1996). The rate at which angular momentum is lost through a magnetic wind will also strongly influence the stellar rotation rate. In a linear dynamo the angular momentum loss rate is proportional to the surface angular velocity cubed (Weber & Davis 1967, Kawaler 1988). However, an angular momentum loss law of this form predicts rapid spin down of fast rotators in disagreement with observational data (Pinsonneault et al. 1990). More recent theoretical models use an angular momentum loss law that saturates at some critical rotation rate; the lower the saturation threshold, the longer that rapid rotation can persist. Previous work on solar analogues (Barnes & Sofia 1996, Krishnamurthi et al. 1997) has suggested that the saturation threshold depends on mass, and is inversely proportional to the global convective overturn timescale. This scaling is prompted by its relationship to the Rossby number, a measure of the strength of dynamo magnetic activity in the star, and supported by observational data on the relationship between chromospheric and coronal activity indicators, rotation, and mass (e.g. Noyes et al. 1984, Patten & Simon 1996, Krishnamurthi et al. 1998). The rotation history of a star is also influenced by the internal transport of angular momentum. If a torque is applied to the surface convection zone a shear will be generated between the convective envelope and radiative core; the angular momentum evolution will depend on the efficiency of the angular momentum coupling between the core and envelope. One straightforward model is to assume that the coupling time scale is extremely short, i.e. that solid body rotation is enforced at all times throughout the star. This is predicted in the limiting case of strong magnetic coupling between the core and envelope (see for example Charbonneau & MacGregor 1993), and models of this type have been investigated in the context of the angular momentum evolution of solar analogues (Collier-Cameron & Jianke 1994; Bouvier, Forestini, & Allain 1997; Krishnamurthi et al. 1997). Another approach is to solve for the internal transport of angular momentum by hydrodynamic mechanisms (see Pinsonneault 1997 for a review), internal gravity waves (Talon & Zahn 1998), magnetic fields (Charbonneau & MacGregor 1993), or hybrid models including more than one of these mechanisms (e.g. Barnes, Charbonneau, & MacGregor 1999). In these cases, angular momentum can be stored in a rapidly rotating core. The rotational evolution of models of this class can therefore be different than that of solid body models as that reservoir of angular momentum can be shielded from the strong angular momentum loss that accompanies rapid surface rotation. One of the most challenging features of the open cluster data for theoretical models to explain is the large number of young slowly rotating stars. 
A simple projection of T Tauri star rotation velocities with conservation of angular momentum would predict more rapid rotation than is observed, and the short time scale would not permit sufficient angular momentum loss given the modest predicted rotation rates (see for example Stauffer & Hartmann 1987, Keppens, MacGregor, & Charbonneau 1995). Königl (1991) proposed that the presence of an accretion disk, coupled to a protostar by a magnetic field, will force the protostar to rotate with the same period as the disk (see also Keppens, MacGregor, & Charbonneau 1995; Collier-Cameron, Campbell, & Quaintrell 1995). Only when the disk is disrupted can the star begin its normal angular momentum evolution (spin-up because of contraction modified by the processes described above). When the star is no longer locked to the disk, it begins to spin up as its radius contracts. Since the star is starting with a smaller moment of inertia, it can never spin up as much as a star which lost its disk at the birthline. This would produce a range of surface rotation rates from a range of accretion disk lifetimes and provided an attractive physical explanation for the observed distribution of stellar surface rotation rates. There is indirect evidence for the model (Edwards et al. 1993, Choi & Herbst 1996), in the sense that the distribution of rotation periods is different for young stars with infrared excesses than for young stars without them (but see also Stassun et al. 1999). The longest reasonable disk lifetime, as determined from studies of T Tauri stars, is about 10 Myr (Strom et al. 1989). We therefore wish to reproduce the slowest rotators in each cluster with models which have disk lifetimes of 10 Myr or less. This creates problems for solid body models, since they require much longer disk lifetimes and also have difficulty in explaining the observed spindown of slow rotators on the main sequence (Krishnamurthi et al. 1997, Allain 1998). The ingredients which determine the rotational evolution of low mass stars are not independent. For example, a rapidly rotating star could exist because it had a short initial period. It could be rotating rapidly because it has not lost much angular momentum since the wind saturation threshold is low. Or, it could have been locked to its disk for a very short period of time. However, this degeneracy of factors is not an insoluble problem. We can determine the angular momentum loss rate (the saturation threshold) by comparing rotation rates of the fastest rotators in clusters of different ages. These stars must have had the fastest initial periods and have not been locked to disks. Therefore, any change in their rotation rates will be caused by their contraction to the main sequence, plus any loss due to their wind. We have evidence from solar analogues that the magnetic wind saturation threshold depends on mass (Barnes & Sofia 1996, Krishnamurthi et al. 1997). By modeling the fastest rotators in a series of clusters with different ages, we can constrain this threshold for each mass. After we know the dependence of the saturation threshold on mass, we can attempt to model the slowest rotators in young open clusters by allowing them to retain their disks for long periods of time. By studying the lowest mass stars, we can simplify the problem. Stars with masses less than $``$ 0.5 M are fully convective throughout their pre-main sequence evolution, while stars with masses less than 0.25 M are fully convective for their entire lifetime. 
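The disk-locking argument above can be made quantitative with a simple scaling. Ignoring wind losses during the remaining contraction and any change in the dimensionless moment of inertia (so this is only an order-of-magnitude estimate, not the full model calculation), conservation of angular momentum after the disk is disrupted at radius $`R_{\mathrm{disk}}`$ gives

$$\frac{\omega _{\mathrm{ZAMS}}}{\omega _{\mathrm{disk}}}\simeq \frac{I_{\mathrm{disk}}}{I_{\mathrm{ZAMS}}}\approx \left(\frac{R_{\mathrm{disk}}}{R_{\mathrm{ZAMS}}}\right)^2,$$

so a star released from its disk later, when it has already contracted to a smaller $`R_{\mathrm{disk}}`$, reaches the main sequence rotating more slowly, which is the sense of the effect described above.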
Helioseismic data suggest that they will therefore always rotate as solid bodies, so the internal transport of angular momentum in such stars can be modeled very simply. We can use fully convective stars to diagnose the initial conditions in open clusters, such as the range of disk lifetimes and initial rotation periods, and the mass dependence of the angular momentum loss law. As discussed above, a number of researchers have investigated the angular momentum evolution of solar analogues (0.8 - 1.2 M). The major obstacles which prevented us from modeling very low mass stars accurately in the past have been the lack of adequate model atmospheres, opacities and equations of state for low temperatures (less than 4000 K). Lately, several groups (Alexander & Ferguson 1994; Allard & Hauschildt 1995; Saumon, Chabrier & Van Horn 1995) have made breakthroughs in the necessary physics. Improved evolutionary models of very low mass stars have been produced over the last few years (Baraffe et al. 1998). However, the effects of rotation have not been included in these recent low mass models, and neglecting rotation could lead to anomalous results. For example, rotation can modify the amount of lithium depletion in low mass stars, affecting the derived ages from lithium isochrone fitting (e.g. Stauffer, Schultz & Kirkpatrick 1998). In this work, we present the first rotational models of stars less massive than 0.5 M. We also include models for stars up to 1.1 M, for comparison with previous work. In section 2, we discuss the methods used to determine the rotational evolution of the low mass stars. We present the results in section 3, and discuss their implications in section 4. ## 2 Method We used the Yale Rotating Stellar Evolution Code (YREC) to construct models of the low mass stars. YREC is a Henyey code which solves the equations of stellar structure in one dimension (Guenther et al. 1992). The star is treated as a set of nested, rotationally deformed shells. The chemical composition of each shell is updated separately using the nuclear reaction rates of Gruzinov & Bahcall (1998). The initial chemical mixture is the solar mixture of Grevesse & Noels (1993), and our models have a metallicity of Z=0.0188. Gravitational settling of helium and heavy elements is not included in these models. We use the latest OPAL opacities (Iglesias & Rogers 1996) for the interior of the star down to temperatures of $`\mathrm{log}T(K)=4`$. For lower temperatures, we use the molecular opacities of Alexander & Ferguson (1994). For regions of the star which are hotter than $`\mathrm{log}T(K)\ge 6`$, we used the OPAL equation of state (Rogers, Swenson & Iglesias 1996). For regions where $`\mathrm{log}T(K)\le 5.5`$, we used the equation of state from Saumon, Chabrier & Van Horn (1995), which calculates particle densities for hydrogen and helium including partial dissociation and ionization by both pressure and temperature. In the transition region between these two temperatures, both formulations are weighted with a ramp function and averaged. The equation of state includes both radiation pressure and electron degeneracy pressure. For the surface boundary condition, we used the stellar atmosphere models of Allard & Hauschildt (1995), which include molecular effects and are therefore relevant for low mass stars. We used the standard Böhm-Vitense mixing length theory (Cox & Giuli 1968; Böhm-Vitense 1958) with $`\alpha `$=1.72.
The mixing length parameter $`\alpha =1.72`$, as well as the solar helium abundance, $`Y_{\odot }=0.273`$, was obtained by calibrating models against observations of the solar radius ($`6.9598\times 10^{10}`$ cm) and luminosity ($`3.8515\times 10^{33}`$ erg/s) at the present age of the Sun (4.57 Gyr). The structural effects of rotation are treated using the scheme derived by Kippenhahn & Thomas (1970) and modified by Endal & Sofia (1976). The details of this particular implementation are discussed in Pinsonneault et al. (1989). In summary, quantities are evaluated on equipotential surfaces rather than the spherical surfaces usually used in stellar models. The mass continuity equation is not altered by rotation: $$\frac{\partial M}{\partial r}=4\pi r^2\rho .$$ (1) The equation of hydrostatic equilibrium includes a term which takes into account the modified gravitational potential of the non-spherical equipotential surface: $$\frac{\partial P}{\partial M}=-\frac{GM}{4\pi r^4}f_P,$$ (2) where $$f_P=\frac{4\pi r^4}{GMS}\frac{1}{\langle g^{-1}\rangle },$$ (3) and $$\langle g^{-1}\rangle =\frac{1}{S}\int _{\psi =\mathrm{const}}g^{-1}d\sigma ,$$ (4) where $`S`$ is the surface area of an equipotential surface, and $`d\sigma `$ is an element of that surface. The factor $`f_P`$ is less than one for non-zero rotation, and approaches one as the rotation rate goes to zero. The radiative temperature gradient also depends on rotation: $$\frac{\partial \mathrm{ln}T}{\partial \mathrm{ln}P}=\frac{3\kappa }{16\pi acG}\frac{P}{T^4}\frac{L}{M}\frac{f_T}{f_P},$$ (5) where $$f_T=\left(\frac{4\pi r^2}{S}\right)^2\frac{1}{\langle g\rangle \langle g^{-1}\rangle },$$ (6) and $`\langle g\rangle `$ is analogous to $`\langle g^{-1}\rangle `$. $`f_T`$ has the same asymptotic behaviour as $`f_P`$, but is typically much closer to 1.0. The energy conservation equation retains its non-rotating form. Therefore, all the structural effects of rotation are limited to the equation of hydrostatic equilibrium and the radiative temperature gradient. This modified temperature gradient is used in the Schwarzschild criterion for convection: $$\frac{\partial \mathrm{ln}T}{\partial \mathrm{ln}P}=\mathrm{min}\left[\nabla _{ad},\nabla _{rad}\frac{f_T}{f_P}\right]$$ (7) where $`\nabla _{ad}`$ and $`\nabla _{rad}`$ are the normal spherical adiabatic and radiative temperature gradients. The Endal & Sofia scheme is valid across a wide range of rotation rates, for a restricted class of angular momentum distributions. This scheme requires that the rotational velocity is constant on equipotential surfaces, which does not allow for modeling of latitude-dependent rotational profiles. We assume that horizontal turbulence is sufficiently strong to enforce spherical rotation (Chaboyer & Zahn 1992). The other restriction on the Endal & Sofia scheme is that it assumes that the potential is conservative, which is not valid when the star is expanding or contracting. Therefore, it is necessary to take small timesteps during any phase of expansion or contraction (such as the pre-main sequence) to minimize the errors introduced by this limitation. This method, and most others used in rotational stellar evolution codes, does not include the horizontal transport of heat, which may be important in the most rapidly rotating stars, rotating very close to their breakup velocities. See Meynet & Maeder (1997) for a detailed discussion of the validity of this approach to the evolution of rotating stars.
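In code, rotation enters the temperature stratification only through the ratio $`f_T/f_P`$ applied to the radiative gradient before the Schwarzschild test, as in equation (7). A minimal sketch, with placeholder numbers rather than YREC output:

```python
def temperature_gradient(nabla_ad, nabla_rad, f_p, f_t):
    """Schwarzschild criterion for a rotationally deformed shell (eq. 7):
    the radiative gradient is scaled by f_T/f_P, the adiabatic one is not."""
    nabla_rad_rot = nabla_rad * f_t / f_p
    convective = nabla_rad_rot > nabla_ad
    return (nabla_ad if convective else nabla_rad_rot), convective

# Illustrative values only; f_P and f_T both approach 1 as the rotation rate goes to zero.
print(temperature_gradient(nabla_ad=0.4, nabla_rad=0.35, f_p=0.97, f_t=0.995))
print(temperature_gradient(nabla_ad=0.4, nabla_rad=0.50, f_p=1.00, f_t=1.000))
```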
To model the loss of angular momentum from the surface, we adopt a modified Kawaler angular momentum loss rate with an N=1.5 wind law (Chaboyer, Demarque & Pinsonneault 1995), given by $$\frac{dJ}{dt}=-K\omega ^3\left(\frac{R}{R_{\odot }}\right)^{0.5}\left(\frac{M}{M_{\odot }}\right)^{-0.5},\omega \le \omega _{crit}$$ (8) $$\frac{dJ}{dt}=-K\omega _{crit}^2\omega \left(\frac{R}{R_{\odot }}\right)^{0.5}\left(\frac{M}{M_{\odot }}\right)^{-0.5},\omega >\omega _{crit}$$ This represents the draining of angular momentum from the outer convection zone through a magnetic wind. The magnetic field is assumed to be proportional to the angular velocity while that velocity is small, but then saturates at $`\omega =\omega _{crit}`$. The constant $`K`$ is calibrated by reproducing the solar rotation rate (2 km/s) at the solar age (4.57 Gyr from the birthline) for a 1.0 $`M_{\odot }`$ model with an initial period of 10 days and no disk-locking. The value of $`\omega _{crit}`$ has been found to depend on mass (Krishnamurthi et al. 1997, Barnes & Sofia 1996). We have used the prescription for the variation of $`\omega _{crit}`$ with mass from Krishnamurthi et al. (1997). In this prescription, $`\omega _{crit}`$ is inversely proportional to the convective overturn timescale in the star at 200 Myr (Kim & Demarque 1996): $$\omega _{crit}=\omega _{crit,\odot }\frac{\tau _{\odot }}{\tau }$$ (9) The convective overturn times were linearly extrapolated for masses lower than 0.5 $`M_{\odot }`$. We have also considered models in which no angular momentum loss is allowed, to determine the maximum structural effects of rotation. We have calculated the rotational evolution of two classes of stellar models: models in which solid body rotation is enforced at all times, and models in which internal angular momentum transport is affected by hydrodynamic mechanisms. The rotation rate of the solid body models is determined from the moment of inertia of the model at a given time, and the total angular momentum as determined by the loss rate given in equation 8. For the second set of models, rigid rotation is enforced throughout the convection zone only, and the rotation in the interior is governed by the transport of angular momentum by hydrodynamic mechanisms. The chemical mixing associated with this angular momentum transport is computed using a set of diffusion equations (Pinsonneault et al. 1989); the amount of coupling between transport and mixing is calibrated by requiring that the amount of lithium depletion calculated by our model matches the observed value for the Sun. We start our models on the birthline of Palla & Stahler (1991), which is the deuterium-burning main sequence and corresponds to the upper envelope of T Tauri observations in the HR diagram. It has been shown (Barnes & Sofia 1996) that this physically realistic assumption for the initial conditions of stellar rotation models is crucial for accurate modeling of ultra-fast rotators in young clusters. The models with no angular momentum loss began with an initial rotation period of 8 days, corresponding to the mean classical T Tauri star rotation period (Choi & Herbst 1996). The models which included angular momentum loss began with initial rotation periods of either 4 or 10 days. We present models for solar metallicity stars between 0.1 and 1.1 $`M_{\odot }`$ in increments of 0.1 $`M_{\odot }`$. These models have been evolved from the birthline to an age of 10 Gyr. ## 3 Results ### 3.1 The Structural Effects of Rotation Evolutionary tracks for both rotating and non-rotating models are presented in figure 1.
The rotating models have initial periods of 10 days and experience no angular momentum loss. As expected (Sackmann 1970, Pinsonneault et al. 1989), the effect of rotation is to shift stars to lower effective temperatures and lower luminosities, mimicking a star of lower mass. This effect is most pronounced for the highest mass stars presented in this paper, and is reduced to a low level for stars less massive than 0.4 $`M_{\odot }`$. Since low mass stars are fully convective, their temperature gradient will be the adiabatic gradient, which does not depend on the rotation rate (equation 7). However, the structural effects of rotation are still apparent in fully convective stars, and diminish with decreasing mass. This suggests that an additional mechanism is also at work. As stars become less massive, their central pressure is provided less by thermal pressure and more by degeneracy pressure. The amount of degeneracy is determined by the density in the interior, which is not affected by rotation (see equation 1). Rotation provides an additional method of support for the star, but in stars with a significant amount of degeneracy, the rotational support is a smaller fraction of the total pressure. Therefore, the structure of the low mass stars is less affected by rotation than their higher mass counterparts. Figure 2 compares the evolutionary tracks for rotating stars under different assumptions about internal angular momentum transport. The solid tracks are stars which have differentially rotating radiative cores and rigidly rotating convection zones, while the dashed lines show the tracks for stars which are constrained to rotate as solid bodies. The two tracks for each mass have the same surface rotation rate at the zero-age main sequence. The low mass stars show no difference between the two assumptions, since these stars are fully convective for the entire 10 Gyr plotted here. Therefore, they always rotate as solid bodies. The higher mass stars begin their lives high on the pre-main sequence as fully convective, solid body rotators. As they contract and develop radiative cores, however, the difference in the two assumptions about angular momentum transport becomes apparent. Differential rotators have a higher total angular momentum than solid body rotators of the same surface rotation rate. As stars contract along the pre-main sequence, they become more centrally concentrated, which means that the core spins up more than the envelope does. The solid body rotators are forced to spread their angular momentum evenly throughout the star, so they have less total angular momentum for a given surface rotation rate. Therefore, the impact of rotation on the structure of the star is larger for differential rotators than for solid body rotators of the same surface rotation rate. However, at constant initial angular momentum, the solid body rotators are cooler at the zero age main sequence, and have longer pre-main sequence lifetimes, than differentially rotating stars of the same mass. When comparing the effects of rotation between different models, it is important to note whether the comparison is between stars with the same current surface rotation rate, or with the same initial angular momentum. We included the kinetic energy of rotation ($`T=\frac{1}{2}I\omega ^2`$) in our determination of the total luminosity in each shell of the star.
As the star changes its moment of inertia $`I`$ and its rotation rate $`\omega `$, the resulting change in its rotational kinetic energy can be included in the energy budget of the star. Most implementations of stellar rotation into stellar structure and evolution neglect this energy since it is expected that the amount of kinetic energy available is not enough to significantly affect the evolution of the star. Since very low mass stars have much lower luminosities than solar-mass stars, but their moments of inertia are not as significantly lower, it is plausible that the kinetic energy of rotation would contribute a significant fraction of the total luminosity of the star. As shown in figure 4, however, the change in the kinetic energy of rotation contributes no more than 6% of the total luminosity of the star in the 1.0 $`M_{\odot }`$ model, and that contribution lasts less than 50 Myr. As expected, the lowest mass star has the most significant contribution, lasting for about 1 Gyr, but at 4% or less. The positions of stars in the HR diagram are minimally affected by the inclusion of this source of energy. The kinetic energy of rotation reduces the luminosity at any given time by less than 0.02 dex in $`\mathrm{log}(L/L_{\odot })`$, and usually less than 0.005 dex. The timescales for evolution are equally unaffected. The models with no angular momentum loss will have the maximum possible effect of rotational kinetic energy. Since these models show no significant effect, we conclude that the change in the kinetic energy of rotation is at most a perturbation on the structure. The main structural effect of rotation is a reduction in the effective temperature of stars. Using our tracks, we have quantified the relationship between rotational velocity and the difference in effective temperature at the zero age main sequence. In figure 5 we present this relationship for stars of different masses, and for both the differentially rotating (solid lines) and solid body models (dashed lines). For low mass stars, the difference in temperature caused by rotation is of order a few tens of K (and reduces to less than 10 K for stars of 0.2 $`M_{\odot }`$). This difference is therefore negligible. However, the reduction in effective temperature is larger for the more massive stars, and can reach significant levels of a few hundred K for stars more massive than about 0.6 $`M_{\odot }`$. Therefore, when determining masses from observed temperatures or colours, it is important to know how fast these stars are rotating. The relationship between rotation rate and difference in effective temperature, for a given stellar mass, is well-fit by a polynomial. The coefficients for this polynomial at different masses and under different assumptions of internal angular momentum transport are given in table 2. It should be noted that while solid body rotators of the same initial period rotate faster at the zero age main sequence than differentially rotating stars, the structural effects of rotation are slightly more pronounced in the differential rotators at constant surface rotation speed. Therefore, for a constant rotational velocity, stars which rotate differentially have a higher angular momentum than solid body rotators. For stars of the same mass, rotation reduces the luminosity as well as the temperature. The difference in luminosity is not as important as the difference in temperature caused by rapid rotation, as shown in figure 5.
Even for the most extreme case, the difference in luminosity for a 1.0 M star rotating at 250 km/s is less than 0.12 dex in $`\mathrm{log}(L_{})`$. While differences of this size will result in a thicker main sequence of a cluster, it should not affect any scientific results significantly. Most stars in clusters do not rotate very fast, so the upper main sequence will be well-defined for any isochrone fitting or distance determination. Luminosity is used as an indicator of mass for low mass stars, but since the difference in luminosity between rapid rotators and non-rotators is very small for low mass stars, this calibration should not be affected by rotation. The total effect of rotation is such that the locus of the zero age main sequence becomes brighter as stars rotate more quickly. The combination of a significant decrease in temperature with a small decrease in luminosity for stars of the same mass moves the locus above the non-rotating main sequence. At a surface rotation rate of 100 km/s, the rotating main sequence is brighter by about 0.01 magnitudes. At 200 km/s, the sequence is brighter by 0.03 magnitudes. Therefore, we expect to see rapid rotators in clusters lying above the cluster main sequences by a few hundredths of a magnitude. Since they are fainter, rapid rotators have slightly longer lifetimes compared to non-rotating stars of the same mass. The amount of increase depends on the mass of the star and the rotation rate, but in the most extreme case (1.0 M rotating at 250 km/s), the difference in pre-main sequence lifetime is 7%. For rotation rates less than 100 km/s, the increase in lifetime is less than 1% for all masses. ### 3.2 Rotational Evolution We have compared our models with rotational data from young open clusters. Each cluster provides a sample of stars with different masses, allowing us to study the effects of mass on angular momentum loss and disk lifetimes. By comparing the progression of rotation rates from very young clusters to older ones, we can study the rotational history of stars of a range of masses. Both probes are very useful in the study of stellar rotation. There are four well-studied data sets used when investigating rotation in young open clusters: IC 2602 and IC 2391, at 30 Myr; $`\alpha `$ Persei, at 60 Myr, the Pleiades at 110 Myr; and the Hyades, at 600 Myr. In figures 6-16, we compare our models to observational data at the appropriate age for the cluster, under a number of different assumptions about the initial conditions, the internal transport of angular momentum, the saturation threshold, and the disk locking lifetime. The data were taken from Stauffer et al. 1997b (IC 2602 and IC 2391), Prosser et al. 1995 and references therein ($`\alpha `$ Per), Soderblom et al. 1993; Prosser et al. 1995 and Stauffer et al. 1999 (Pleiades and Hyades), and Radick et al. 1987 (Hyades), supplemented with data from the Open Cluster Database (Prosser & Stauffer 1999). We wish to reproduce a number of important features of these data sets. First, our models must predict the correct rotation rates for the fast rotators in each of these clusters. The presence of fast rotators is caused by the saturation of the angular momentum loss law (equation 8), and constrains the value of $`\omega _{crit}`$ as a function of mass. Also, our models should reproduce the spin-down of the fast rotators with time, as seen by comparing stars of the same mass in different clusters. 
The comparison between our models and observed fast rotators confirms the conclusion of Krishnamurthi et al. (1997) that a mass-dependent $`\omega _{crit}`$ is necessary, in the sense that $`\omega _{crit}`$ increases for increasing mass. Figures 6 and 7 present rotational models with different normalizations of the Rossby scaling and initial periods of 10 days. Figure 6 presents models which rotate as solid bodies throughout their lifetime, while figure 7 shows differentially rotating models in which internal angular momentum transport is determined by hydrodynamical instabilities. The thick solid lines in each frame represent the upper envelope of rotation rates observed in each cluster, which have not been corrected for inclination angle. Observations of X-ray activity in rotating stars as a function of $`v\mathrm{sin}i`$ show that reasonable values for the angular momentum loss saturation threshold velocity range from 5 to 20 $`\omega _{}`$ (Patten & Simon 1996). We have plotted three different normalizations, with values of $`\omega _{crit}`$ = 5, 10 and 20 $`\omega _{}`$ (from top to bottom each frame). For the solid body models (figure 6), it is clearest at the older ages that stars with a high saturation threshold lose too much angular momentum, and the low saturation threshold models lose too little. Therefore, the best choice for normalization is about 10 $`\omega _{}`$. For the differentially rotating models, the best choice for normalization is about 5 $`\omega _{}`$, since the other values of $`\omega _{crit}`$ produce stars which are rotating too slowly. These normalizations are the same as the ones adopted by Krishnamurthi et al. (1997) for solar analogues. The Rossby scaling with the normalizations suggested by Krishnamurthi et al. (1997) are shown in figures 8 (solid body models) and 9 (differentially rotating models). The upper line in each frame corresponds to models with initial rotation periods of 4 days, while the lower line corresponds to models with initial periods of 10 days. The fastest rotators in all the clusters lie below the 4 day line, with the exception of the lowest mass stars in the Hyades, which rotate faster than the predictions of the differentially rotating models. We conclude that a different normalization for the Rossby scaling is necessary for low mass stars. The second main feature of these data sets that we wish to reproduce is the large spread in rotation rate at constant mass. This range in rotational velocity is caused by a range in protostellar disk lifetime. In figures 10 and 11, we present models with initial rotation periods of 10 days, and the above-mentioned normalizations for $`\omega _{crit}`$. The upper line in each frame shows models which have detached from their disk at the birthline. Moving down each frame, the lines represent models with disk lifetimes of 0.3, 1, 3 and 10 Myr from the birthline. The differentially rotating models (figure 11) reproduce the rotation rates of the slowest rotators in these clusters with disk lifetimes of 10 Myr or less, while the solid body models (figure 10) require longer disk lifetimes, perhaps even as long as the current age of the cluster. We have verified that neither the Krishnamurthi et al. (1997) Rossby scaling nor any other unique Rossby scaling can reproduce the mass dependence of the angular momentum evolution below 4000 K, corresponding to masses of 0.6 M and below. 
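The role of the saturation threshold can be illustrated by integrating the loss law of equation (8) for a solid-body star at fixed moment of inertia. The sketch below is schematic only: contraction is ignored, and the calibration constant and moment of inertia are placeholder values, so only the relative behaviour of the three normalizations is meaningful.

```python
import numpy as np

OMEGA_SUN = 2.86e-6            # rad/s, approximate solar angular velocity

def spin_down(omega0, omega_crit, k_wind, inertia, r_rsun, m_msun, t_end_myr, dt_myr=0.1):
    """Integrate the saturated wind law (eq. 8) for a solid-body star of fixed
    moment of inertia; contraction is ignored, so this is illustrative only."""
    omega, dt = omega0, dt_myr * 3.15e13
    for _ in range(int(t_end_myr / dt_myr)):
        rate = omega**3 if omega <= omega_crit else omega_crit**2 * omega
        omega -= k_wind * rate * r_rsun**0.5 * m_msun**-0.5 * dt / inertia
    return omega

for crit in (5.0, 10.0, 20.0):                     # thresholds in units of omega_sun
    w = spin_down(omega0=30.0 * OMEGA_SUN, omega_crit=crit * OMEGA_SUN,
                  k_wind=2.0e47, inertia=5.0e53,   # placeholder calibration and inertia
                  r_rsun=1.0, m_msun=1.0, t_end_myr=600.0)
    print(f"omega_crit = {crit:4.1f} omega_sun -> omega(600 Myr) = {w / OMEGA_SUN:4.1f} omega_sun")
```

A higher threshold both strengthens the saturated loss rate (which scales as $`\omega _{crit}^2`$) and hands the star over to the unsaturated $`\omega ^3`$ regime sooner, so the star is braked harder by a given age; this is why the upper envelopes of the older clusters discriminate between the normalizations.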
The persistence of rapid rotation in the Hyades cluster requires inefficient angular momentum loss in the lowest mass stars, while a uniform lowering of the saturation threshold would predict more rapid rotation for higher mass stars than is observed in the Hyades. We therefore constructed models for the 0.1 to 0.5 M range where the value of $`\omega _{crit}`$was tuned to reproduce the Hyades upper envelope; we adopted the Krishnamurthi et al. (1997) Rossby scaling for the higher mass models. Figures 12 and 13 show the same models as in figures 10 and 11, but with this different normalizations of the mass dependence for $`\omega _{crit}`$. In figure 12, we present solid body rotational models with $`\omega _{crit}`$ = 7, 6.2, 5.1, 3.6, 1.7 $`\omega _{}`$ at 0.5, 0.4, 0.3, 0.2 and 0.1 M respectively. Figure 13 shows differentially rotating models with $`\omega _{crit}`$ = 3, 3.3, 3.5, 1.9, 0.9 $`\omega _{}`$ for the same masses. Both these sets of models reproduce the fastest rotators in the Hyades at low masses. With the exception of a different zero-point for the saturation threshold, the solid body models and the differentially rotating models predict very similar angular momentum evolution histories for these low mass stars. The different normalization and similar evolution can be understood as follows. The DR models require a higher constant in the angular momentum loss law to extract angular momentum from a rapidly rotating core and a slowly rotating envelope. For models with $`\omega >`$ $`\omega _{crit}`$, the angular momentum loss rates can be made identical by a suitable zero-point shift in $`\omega _{crit}`$. Although the DR models would experience more severe angular momentum loss for $`\omega <`$$`\omega _{crit}`$, the low values of $`\omega _{crit}`$ required for reproducing the stellar rotation data are not reached until ages older than the clusters that we are studying. In figure 14 we use the models presented in the previous two figures to produce distributions of disk lifetimes for three of the clusters studied in this paper. A statistical correction of 4/$`\pi `$ was applied to the observed rotation velocities. This overestimates the rotation rates for the rapid rotators, where $`\mathrm{sin}i`$ is likely to be close to one, but does provide a more accurate estimate on average for the bulk of the slow rotator population. We find an essentially constant distribution of disk lifetimes with age for the differentially rotating models across the young clusters. Disk lifetimes longer than 10 Myr are inconsistent with observations of infrared excesses around T Tauri stars (Strom et al. 1989), which predict maximum disk lifetimes between 3 and 10 Myr. Therefore, the large fraction of stars which are required to have long disk lifetimes is an argument against the solid body models. The predicted rotation rates at the age of the Hyades are systematically slightly higher than the data for the differentially rotating models at the higher end of the mass range, while the solid body models are in good agreement with the observed range. This leads to spuriously short disk lifetimes for the higher mass stars when combined with the insensitivity of the rotation to the initial conditions at this late age. We interpret this as an indication that the angular momentum coupling time scale is intermediate between the Pleiades and Hyades ages, which is consistent with the flat solar rotation curve. The recent Orion data set of Stassun et al. 
(1999) also has interesting consequences for angular momentum evolution. They were sensitive to rotation periods shorter than eight days; this complicates the question of directly testing the accretion disk-locking model, since this is near the peak of the period distribution found by earlier studies of young stars. However, they found 85 (out of 264) stars with rotation periods less than 3 days. By comparison, models with an initial rotation period of eight days would have a rotation period of 2.64 days at an age of 1 Myr. Stassun et al. (1999) observed a cutoff in the distribution below 0.5 days, which would correspond to a rotation period of 2 days at the birthline (a factor of two greater than the maximum angular momentum we assumed for setting the upper envelope of the distribution). This indicates that both the initial period and the distribution of accretion disk lifetimes needs to be taken into account when modeling the rapid rotator distribution. Because we use the upper envelope of rotation to set the value of $`\omega _{crit}`$, assigning a shorter initial period to the upper envelope of rotation would imply systematically larger values of $`\omega _{crit}`$. This would lead to shorter predicted disk lifetimes for the slower rotators in young open clusters, although not enough to alter the qualitative conclusions about differentially rotating versus solid body models. An additional feature of these models is the possible lack of long disk lifetimes observed for very low mass stars. This is most obvious in a plot of disk lifetime as a function of T<sub>eff</sub>, shown in figures 15 and 16. Both solid body and differentially rotating models show the same trend. In the Pleiades, we would expect to see stars with $`v\mathrm{sin}i`$ at or near the lower detection limit of 7 km/s below 3500 K. The low mass portion of the data set was chosen based on colour of the stars and not on any rotational information, and therefore should be unbiased (Stauffer et al. 1999). However, we see a lack of slow rotators at the low mass end. There are two possible explanations. One scenario is that the lowest mass stars have systematically shorter accretion disk lifetimes than higher mass stars. A second possibility is suggested by a comparison of figures 10 and 11 with figures 12 and 13; the mass dependence of the saturation threshold for angular momentum loss has a strong influence on the predicted rotation rates for a given initial condition. With the small Hyades cool star sample, the upper envelope appears to be flat; we therefore infer relatively short disk coupling lifetimes for the lowest mass stars. However, if the upper envelope were to rise with decreasing mass in the Hyades this would indicate that it is the angular momentum loss law, not a mass-dependent set of initial conditions, that was responsible. We cannot rule this explanation out due to the relatively small sample of Hyades stars cooler than 3500 K. However, the two solutions make different predictions for the observed rotation rates in older clusters. If the trend towards higher mean rotation rates at lower mass in the Hyades persists below 3500 K, this indicates that the mass dependence of the angular momentum loss law is the primary cause; if the data can be fit using the existing angular momentum loss law with consistently shorter disk lifetimes for lower mass stars it is an indication of a genuine change in the distribution of initial conditions as a function of mass. 
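As a sanity check on the statistical deprojection used above, the 4/$`\pi `$ factor applied to the measured $`v\mathrm{sin}i`$ values follows from $`\langle \mathrm{sin}i\rangle =\pi /4`$ for randomly oriented spin axes; a quick Monte Carlo confirmation (the velocities in the last two lines are hypothetical, not cluster data):

```python
import numpy as np

rng = np.random.default_rng(42)
cos_i = rng.uniform(0.0, 1.0, 1_000_000)   # isotropic spin axes: cos(i) uniform on [0, 1]
sin_i = np.sqrt(1.0 - cos_i**2)
print(np.mean(sin_i), np.pi / 4.0)         # both ~0.785

v_sini = np.array([8.0, 15.0, 40.0])       # km/s, hypothetical measurements
print(v_sini * 4.0 / np.pi)                # statistically corrected equatorial velocities
```

As noted above, applying the mean correction to individual stars overestimates the equatorial velocities of the fastest rotators, for which $`\mathrm{sin}i`$ is likely close to one.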
## 4 Discussion In this paper, we present models of very low mass stars ($`<`$ 0.5 $`M_{\odot }`$) which include the effects of rotation. These models have been made possible by the work of a number of groups on the physics of low temperature stellar atmospheres, opacities and equations of state. By including the physics of many molecules in stellar atmosphere calculations, Allard & Hauschildt (1995) have created models which are valid for low mass stars with effective temperatures less than 4000 K. The equation of state of Saumon, Chabrier & Van Horn (1995) includes partial dissociation and ionization of hydrogen and helium caused by both pressure and temperature effects, and is applicable to both low mass stars and giant planets. Finally, Alexander & Ferguson (1994) added atomic line absorption, molecular line absorption and some grain absorption and scattering to the usual sources of continuous opacity to produce opacity tables which reach to temperatures as low as 700 K. Since most previous atmosphere, opacity, and equation of state calculations did not include the effects of molecules and grains, these three improvements represent a great leap forward in our ability to model very low mass stars. ### 4.1 The Structural Effects of Rotation We have investigated the effect of rotation on the structure of low mass stars. We discussed a number of implications, based on models which demonstrate the maximum extent of the differences between rotating and non-rotating models. The most important effect is the reduction of effective temperature for stars of a given mass. Rapid rotators are cooler than slow rotators, and so, for stars more massive than 0.5 $`M_{\odot }`$, any relationship between temperature and mass should take into account the rotation rate of the star. We have shown that the structural effects of rotation in very low mass stars (less than $`\sim `$ 0.5 $`M_{\odot }`$) are minimal and can be neglected when interpreting temperatures and luminosities of these stars from observations. Table 2 gives the polynomial correction between the effective temperature of a rotating star and that of a non-rotating star, as a function of rotation rate and stellar mass. We have shown that the kinetic energy of rotation is not a significant contribution to the total luminosity of stars between 0.1 and 1.0 $`M_{\odot }`$, and does not change the timescale for evolution on the pre-main sequence. This rotational contribution to the total energy of the star does not affect the position of the star in the HR diagram, and we have neglected this effect in the evolutionary calculations presented in this paper. Stellar activity can also influence the colour-temperature relationship; because of the well-known correlation between increased rotation, increased stellar spot coverage, and increased chromospheric activity, this will tend to change the observed position of rapidly rotating stars in the HR diagram relative to slow rotators. Different colour indices are affected to different degrees; for example, Fekel, Moffett, & Henry (1986) found a systematic departure between the B-V and V-I colours of active stars. Stars with more modest activity levels have a more normal colour-colour relationship (e.g. Rucinski 1987). Active stars tend to be bluer in B-V than in V-I relative to less active stars. Fekel, Moffett, & Henry (1986) treated this as an infrared excess, but in open clusters such as the Pleiades and $`\alpha `$ Persei rapid rotators are on or above the main sequence in V-I, but can be below the zero-age main sequence in B-V (Pinsonneault et al. 1998).
Given the theoretical trends presented here, this suggests that V-I is a good tracer of temperature, and that B-V is the colour which is most affected by activity. The difference between effective temperatures based on B-V and those based on V-I can reach 200 K in Pleiades stars (Krishnamurthi, Pinsonneault, King, & Sills 1999). ### 4.2 Angular Momentum Evolution The study of low mass stars can provide valuable constraints on the three coupled ingredients of angular momentum evolution models: internal angular momentum transport, angular momentum loss, and the distribution of initial conditions. We have compared the properties of solid body models with those of models with internal angular momentum transport from hydrodynamic mechanisms. We confirm previous results that the angular momentum evolution of systems at and younger than the Pleiades age of 110 Myr are best reproduced by models which permit differential rotation with depth. We also find that the solid-body models do a better job of reproducing the data at the Hyades age (600 Myr) and older; in addition, helioseismic data are inconsistent with the strong differential rotation with depth predicted by models with hydrodynamic angular momentum transport. The simplest solution that is consistent with the data is an additional angular momentum transport mechanism with a time scale intermediate between 110 Myr and 600 Myr. Angular momentum loss has a strong impact on the rotational history of low mass stars; the greatest challenge in understanding the angular momentum evolution of low mass stars has been distinguishing between the effects of angular momentum loss, internal angular momentum transport, and the initial conditions. The combination of deep convective envelopes and mild angular momentum loss in stars below 0.6 M makes their behaviour insensitive to the treatment of internal angular momentum transport. We believe that these stars provide a simpler laboratory for the study of the two major other ingredients, namely the loss law and the initial conditions. Previous work established that an angular momentum loss law which saturates at a mass-dependent critical value of the stellar angular velocity is required to reproduce the fastest rotators in young clusters. We find that the prescription of Krishnamurthi et al. (1997), in which $`\omega _{crit}`$ is inversely proportional to the convective turnover time of the star at 200 Myr (the Rossby scaling), yields a consistent solution from 0.6 to 1.1 M . A Rossby scaling underestimates the mass dependence when extended to the lowest mass stars, in the sense that the efficiency of angular momentum loss drops more rapidly than predicted by a Rossby scaling. Models for stars below 3500 K ($``$ 0.4 M) do not yield consistent initial conditions compared to more massive stars even when the normalization appropriate for the observed upper envelope of the Hyades data is used. It is possible that this could reflect a change in the distribution of initial conditions, but more data are required to rule out the angular momentum loss rate as a cause. It is also possible that our estimate of the global convective overturn time is inaccurate for the lowest mass stars. However, it is clear that angular momentum loss must be occurring, since models which experience no angular momentum loss are rotating at velocities which are at least a factor of 2 too large to agree with any of the observations. 
In general, we find no evidence for a significant change in the form of angular momentum evolution and loss when stars become fully convective; all of the trends observed are smooth as a function of mass without a sharp break at the fully convective boundary. This last point has implications for cataclysmic variable (CV) research. The orbital periods of these mass-accreting white dwarfs are observed to have a gap between 2 and 3 hours, in which very few systems are seen. The accepted interpretation for this gap has been a sharp reduction in angular momentum loss rate as the stars become fully convective (see Patterson 1984 for a complete introduction to CVs, and McDermott & Taam 1989 for one particular CV model). For masses higher than $``$ 0.3 M, angular momentum loss through a magnetic wind (magnetic braking) is assumed to occur in the mass-losing secondary of the CV system. The angular momentum is lost from the system entirely, and the secondary continues to fill its Roche lobe, transferring mass onto the white dwarf. This mass transfer keeps the secondary out of thermal equilibrium, so the star has a slightly larger radius for its mass than an isolated star. When the star becomes fully convective, however, it is proposed that the secondary’s magnetic field is no longer anchored to the radiative core of the star, and abruptly ceases to exist. Angular momentum (and hence mass) is no longer transferred from the secondary, and the secondary is allowed to contract to its normal main sequence radius. Only when the system shrinks further, due to gravitational radiation, does the mass transfer restart, and the system again becomes a CV. We have not seen any sharp break in the angular momentum loss rates as we move to lower masses, suggesting that any theory for the period gap of cataclysmic variables which relies on the cessation of angular momentum loss requires a mechanism other than the standard magnetic braking. There are several promising directions for future theoretical studies. First, there is an increasing database of rotational periods and velocities in systems with a range of ages and metal abundances. A combined analysis of the information in protostars, young clusters, and intermediate-aged systems will provide interesting constraints on theoretical models. The next generation of theoretical models should incorporate multiple physical mechanisms for internal angular momentum transport while also relying on the complementary information on surface mixing. More sophisticated models of stellar winds and angular momentum loss will also be needed to investigate the physical implications of the empirical trends deduced in theoretical studies of the type we have performed. We would like to conclude with a plea to observers of young open clusters. The observational database for rotation rates of very low mass stars is quite sparsely populated. We have observations of stellar rotation rates down to about 0.2 M in the Pleiades and Hyades, but the data in $`\alpha `$ Persei, IC 2602 and IC 2391 have only spotty coverage below $``$ 0.6 M. These young stars provide constraints on the early spindown of low mass stars. The Hyades is the oldest cluster in this sample, and many of the different scenarios are best distinguished from each other at later ages. It would be very beneficial to this field to determine rotation rates for the lower mass stars in young open clusters. This work was supported by NASA grant NAG5-7150. A. S. 
wishes to recognize support from the Natural Sciences and Engineering Research Council of Canada. We would like to acknowledge use of the Open Cluster Database, as provided by C.F. Prosser (deceased) and J.R. Stauffer, and which currently may be accessed at http://cfa-www.harvard.edu/$``$stauffer/, or by anonymous ftp to cfa0.harvard.edu (131.142.10.30), cd /pub/stauffer/clusters/.
# The Mid-Infrared Spectra of Normal Galaxies ## 1 Introduction The Infrared Space Observatory (ISO; Kessler et al. 1996) has provided a unique opportunity for infrared spectroscopy at wavelengths and sensitivities inaccessible to sub-orbital platforms. Mid-infrared spectroscopy has been an important tool in characterizing star formation and the interstellar medium in galaxies since the mid-seventies (Willner et al. 1977; Roche et al. 1991), and has taken a major leap forward thanks to the sensitivity and unimpeded spectral coverage of ISO. This is a first report on mid-infrared spectroscopy of galaxies using ISO-PHOT (Lemke et al. 1996) obtained as part of the ISO Key Project under NASA Guaranteed Time on the interstellar medium of normal galaxies (Helou et al. 1996). This Key Project used ISO to derive the physical properties of the interstellar gas, dust and radiation field in a broad sample of “normal” galaxies, defined as systems whose luminosity is derived from stars. This sample includes sixty objects comprising all morphological types, with visible-light luminosities ranging from $`10^8`$ to $`10^{11}`$ $`L_{\mathrm{}}`$, infrared-to-blue ratios from 0.1 to 100, and IRAS colors $`R(60/100)=f_\nu (60)/f_\nu (100)`$ between 0.3 and 1.2. The sample is not statistically complete, but is designed to capture the great diversity among galaxies, especially in terms of the ratio of current to long-term average star formation rate. ## 2 The Spectra The PHT-S module of the ISO-PHOT instrument (Lemke et al. 1996) has a $`24^{\prime \prime }\times 24^{\prime \prime }`$ aperture on the sky, pointed with an accuracy $`2^{\prime \prime }`$ (Kessler et al. 1996). The instrument has two 64-element linear Si:Ga detector arrays covering the range 2.5 – 4.9$`\mu \mathrm{m}`$ with $`\mathrm{\Delta }\lambda =0.04\mu \mathrm{m}`$ per element, and the range 5.9 – 11.7$`\mu \mathrm{m}`$ with 0.1$`\mu \mathrm{m}`$ per element. The elements are sized to match the image of the entrance aperture, thereby determining the spectral resolution. The FWHM of an unresolved line varies between 1.5 and 2 elements depending on the centering of the line with respect to pixel boundary. Each galaxy was observed for a total of 512 seconds, split evenly between galaxy and sky, using a double-sided chopping scheme for sky subtraction. The PHT-S spectra were derived from the Edited Raw Data using the ISO-PHOT Interactive Analysis (PIA) V.7 in a standard way. The flux calibration was done using a mean, signal-dependent “detector response function” derived directly from chopped observations of standard stars with known spectra. Our final spectra are the integral under the PHT-S beam profile of the surface brightness distribution of the source, expressed as a flux density. The combined uncertainties of the relative calibration across the spectrum and the absolute flux scale should be $``$30% according to the PHT-S calibration report as well as our own cross-calibration with the CAM photometry at 6.7$`\mu \mathrm{m}`$ in Silbermann et al. (2000). Of the 45 galaxies eventually observed in total with PHT-S, figure 1 shows spectra for seven galaxies selected so that most of their flux is contained within the PHT-S aperture (Table 1). This selection is based on broadband images at 6.75$`\mu \mathrm{m}`$ ($`\mathrm{\Delta }\lambda 3\mu \mathrm{m}`$) obtained with ISO-CAM (Césarsky et al 1996) and described elsewhere (Silbermann et al 2000). Table 1 lists the galaxies and illustrates the large spread in their basic properties. 
Column (1) gives the name, (2) the IRAS color ratio $`R(60/100)`$, and (3) the optical morphology. Column (4) gives the fraction of 6.75$`\mu \mathrm{m}`$ flux within the PHT-S aperture, (5) the infrared-to-blue ratio, (6) the luminosity in solar units within the FIR synthetic band (42.5 to 122.5$`\mu \mathrm{m}`$) defined in Helou et al. (1988), and (7) the mid-infrared morphology from Silbermann et al. (2000). The mid-infrared spectra of all these galaxies are dominated by emission features which appear in two main groups, one stretching from 5.5 to 9$`\mu \mathrm{m}`$, and the other starting at 11$`\mu \mathrm{m}`$ and extending beyond the spectral range of these data (see Boulanger et al. 1996 for a similar spectrum at longer wavelengths of a molecular cloud region). The shape and relative strengths of the features are quite similar to those of “Type A sources”, which are the most common non-stellar objects in the Milky Way: reflection nebulae, planetary nebulae, molecular clouds, diffuse atomic clouds, and HII regions (Geballe 1997, Tokunaga 1997, and references therein). While there is good evidence to link these features to Polycyclic Aromatic Hydrocarbons (PAH) or similar compounds, there is no rigorous spectral identification (Puget & Léger 1989; Allamandola et al. 1989). It is generally agreed, however, that the emitters are small structures, typically $`\sim `$100 atoms, transiently excited to high internal energy levels by single photons. The identification issue will not be discussed further here, and the spectral features will be referred to as “aromatic features in emission” (AFE). Quantitatively similar spectra have been reported from spectroscopic observations with PHT-S, ISO-CAM CVF or ISO-SWS on a variety of Galactic sources and a number of galaxies (Tielens et al. 1999 review, Vigroux et al. 1996, Metcalfe et al. 1996). However, interstellar dust can manifest mid-infrared spectra of fundamentally different appearance in environments such as the Galactic Center (Lutz et al. 1996), supernova remnants (Tuffs 1998), compact HII regions (Césarsky et al. 1996a) and AGNs (Lutz et al. 1998; Roche et al. 1985). Such sources are thus obviously not major contributors to the integrated spectra of normal galaxies. ISO-SWS spectra with greater spectral resolution show AFEs with the same shape, a clear indication that they are spectrally resolved in our PHT-S data. The non-detection of the 3.3$`\mu \mathrm{m}`$ feature in individual spectra is not surprising, since it is known to amount to 1% or less of the luminosity carried by the mid-infrared AFEs in “Type A sources” (Tokunaga 1997; Willner et al. 1977 for M82) and would therefore be below the 1$`\sigma `$ level in our individual spectra. Finally, an important consequence of the invariant shape of the spectrum up to 11$`\mu \mathrm{m}`$ is that the 10$`\mu \mathrm{m}`$ trough is best interpreted as a gap between AFE rather than a silicate absorption feature. An absorption feature would become more pronounced in galaxies with larger infrared-to-blue ratios, and that is not observed (see also Sturm et al. 2000). ## 3 The Average Spectrum The seven objects in Table 1 are among 45 galaxies observed by ISO for the Key Project, most of which display similar spectra, regardless of the relative sizes of aperture and galaxy. The only significant exceptions are NGC 4418, which is known to harbor an AGN, and NGC 5866, an early-type galaxy discussed in detail by Lu et al. (2000).
Thus the mid-infrared spectral shape varies little or only weakly with galaxy attributes. Relative to each other, various feature luminosities are constant to within the signal-to-noise ratio, or $`\sim 20\%`$. One exception is the relative strength of the 11.3$`\mu \mathrm{m}`$ feature, which varies among galaxies by as much as 40%, to be discussed in more detail elsewhere (Lu et al. 1996, Lu et al. 2000). Figure 2 shows a composite spectrum obtained by averaging the data from 28 galaxies, including the seven in Figure 1, after normalizing each spectrum to the integrated flux between 6 and 6.6$`\mu \mathrm{m}`$. The 28 galaxies are a random subset of the Key Project sample with diverse properties, ranging for instance from 0.28 to 0.88 in $`R(60/100)`$. This composite spectrum should be a reliable representation of the emission from the ISM of normal galaxies. The spectrum in Figure 2 is consistent with earlier data in this spectral range, including early M82 spectra by Willner et al. (1977), ground-based surveys (Roche et al. 1991), and IRAS-LRS data (Cohen & Volk 1989). However, it reveals new details and establishes the universality of the AFE. A striking aspect of the composite spectrum is the smooth continuum stretching from 3 to 5$`\mu \mathrm{m}`$, and apparently underlying the AFE at longer wavelengths (see §5 below). Madden (1996) and Boselli & Lequeux (1996) show spectra of elliptical galaxies dominated by stellar photosphere emission, which drop off between 2 and 5$`\mu \mathrm{m}`$ like $`f_\nu \propto \lambda ^{2.5}\propto \nu ^{2.5}`$. This component appears negligible in the composite spectrum at $`\lambda \gtrsim 3\mu \mathrm{m}`$, since the continuum has $`f_\nu \propto \nu ^{0.65}`$ at $`3\mu \mathrm{m}\lambda 5\mu \mathrm{m}`$. The well-known 3.3$`\mu \mathrm{m}`$ aromatic feature is detected at the expected wavelength, carrying about 0.5% of the power in the AFE longwards of 5$`\mu \mathrm{m}`$, a significantly smaller fraction than observed in M82 (Willner et al. 1977). The small bump at 7$`\mu \mathrm{m}`$ is a significant signal, carrying about 0.65% of the total AFE power, and might include the \[Ar II\] $`\lambda 6.985\mu \mathrm{m}`$ and the S(5) pure rotational line ($`v=00`$) of H<sub>2</sub> $`\lambda 6.910\mu \mathrm{m}`$. In well resolved ISO-SWS spectra, e.g. M82 or the line of sight to the Galactic Center (Lutz et al. 1996), \[Ar II\] clearly dominates. The 0.65% fraction of AFE power, or about 0.1% of the FIR luminosity, approaches the most luminous lines in the far infrared, \[OI\] and \[CII\] (Malhotra et al. 1998). Although no dust related feature has been identified reliably at this wavelength, the high luminosity suggests that such a feature may contribute in addition to the \[Ar II\]+H<sub>2</sub> blend. The smaller bump at 9.6$`\mu \mathrm{m}`$ coincides with the S(3) pure rotational line ($`v=00`$) of H<sub>2</sub> $`\lambda 9.665\mu \mathrm{m}`$, but it is too luminous to be due to that line alone, scaling from well studied galaxies (Valentijn et al. 1996). ## 4 The Energetics The fraction of starlight processed through AFE has been under debate since the IRAS mission (Helou, Ryter & Soifer 1991), and can now be directly estimated using the new ISO data for the sample described above. The various AFE are measured by integrating the spectrum in the intervals 6 to 6.5$`\mu \mathrm{m}`$, 7 to 9$`\mu \mathrm{m}`$, and 11 to 11.5$`\mu \mathrm{m}`$. The contribution from an extrapolation of the 4$`\mu \mathrm{m}`$ continuum is completely negligible, below the 1% level.
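The composite construction and the feature measurements described above reduce to simple operations on the calibrated spectra: each spectrum is divided by its integrated 6 to 6.6$`\mu \mathrm{m}`$ flux, the normalized spectra are averaged, and features are then measured as band integrals. A minimal sketch, with stand-in arrays rather than the actual PHT-S data:

```python
import numpy as np

def band_flux(wl, f_nu, lo, hi):
    """Integrate a spectrum over the wavelength band [lo, hi] (micron)."""
    sel = (wl >= lo) & (wl <= hi)
    return np.trapz(f_nu[sel], wl[sel])

def composite_spectrum(wl, spectra, lo=6.0, hi=6.6):
    """Average spectra after normalizing each to its integrated 6-6.6 micron flux;
    `spectra` has shape (n_galaxies, n_pixels)."""
    norms = np.array([band_flux(wl, s, lo, hi) for s in spectra])
    return np.mean(spectra / norms[:, None], axis=0)

# Stand-in data: three featureless spectra of different brightness collapse onto
# the same normalized shape, which is the point of the normalization step.
wl = np.linspace(5.9, 11.7, 64)
fake = np.vstack([np.ones_like(wl), 3.0 * np.ones_like(wl), 0.5 * np.ones_like(wl)])
comp = composite_spectrum(wl, fake)
print(band_flux(wl, comp, 7.0, 9.0) / band_flux(wl, comp, 6.0, 6.5))
```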
The 11.3$`\mu \mathrm{m}`$ feature merges into a complex that extends to about 13$`\mu \mathrm{m}`$. We estimate the total power of this complex by extending our composite spectrum using the mean, continuum-subtracted spectrum from Boulanger et al. (1996) and Césarsky et al. (1996b). This extension amounts to a 12% adjustment to the AFE emission within the PHT-S range. The result is that AFE account for about 65% of the total power between 3 and 13$`\mu \mathrm{m}`$, and about 90% of the power between 6 and 13$`\mu \mathrm{m}`$. The AFE carry 25 to 30% of L(FIR) in quiescent galaxies in our sample. This fraction gradually drops to less than 10% in the most actively star forming galaxies, i.e. those with the greatest L(IR)/L(B) ratio or $`R(60/100)`$, following the trend already noted in Helou, Ryter & Soifer (1991). In a typical quiescent galaxy, AFE might carry 12% of the total infrared dust luminosity between 3$`\mu \mathrm{m}`$ and 1 mm, whereas all dust emission at $`\lambda <13\mu \mathrm{m}`$ comes up to $`\sim 18`$% of the total dust luminosity. In the individual galaxy spectra, the power integrated between 7 and 9$`\mu \mathrm{m}`$ runs at 2.5 to 3 times that between 6 and 7$`\mu \mathrm{m}`$; these integrals include the plateau between the features. From the composite spectrum, we find the integrals from 5.8 to 6.6$`\mu \mathrm{m}`$, 7.2 to 8.2$`\mu \mathrm{m}`$, and 8.2 to 9.3$`\mu \mathrm{m}`$ to be in the ratio 1:2:1. ## 5 The 4-Micron Continuum Even though it lies an order of magnitude below the AFE peaks, the continuum level shortward of 5$`\mu \mathrm{m}`$ is unexpectedly strong (see for instance the model of Désert et al. 1990). The reliability of this continuum is not in question, since it was detected in several individual galaxies with the same relative strength, and was confirmed by ISO PHT-S staring observations of a few galaxies. However, the calibration of such weak signals may be uncertain by more than the nominal 30%; comparison with ISO-CAM data shows a 30% difference, with PHT-S on the high side. The 4$`\mu \mathrm{m}`$ continuum flux density is positively correlated with the AFE flux, strong evidence linking the continuum to dust rather than stellar photospheres. It appears to follow a power law $`f_\nu \propto \nu ^{+0.65}`$ between 3 and 5$`\mu \mathrm{m}`$, with an uncertainty of 0.15 on the power-law index. Its extrapolation runs three times weaker than the observed $`f_\nu (10\mu \mathrm{m})`$, leaving open the nature of the connection between the 4$`\mu \mathrm{m}`$ continuum and the carriers of the AFE. Bernard et al. (1994) have reported evidence for continuum emission from the Milky Way ISM in COBE-DIRBE broad-band data at these wavelengths, with comparable amplitude. Extrapolating the 4-$`\mu \mathrm{m}`$ continuum to longer wavelengths, and assuming the AFE are superposed on it, one finds that the continuum contributes about a third of the luminosity between 3 and 13$`\mu \mathrm{m}`$, the balance being due to AFE. In the range between 6 and 13$`\mu \mathrm{m}`$, that fraction drops to about 10%. Against this extrapolated continuum, the AFE, defined again as the emission from 5.8 to 6.6$`\mu \mathrm{m}`$, 7.2 to 8.2$`\mu \mathrm{m}`$, and 8.2 to 9.3$`\mu \mathrm{m}`$, would have equivalent widths of about 4$`\mu \mathrm{m}`$ or $`3.4\times 10^{13}`$ Hz, 18$`\mu \mathrm{m}`$ or $`9.2\times 10^{13}`$ Hz, and 13$`\mu \mathrm{m}`$ or $`4.9\times 10^{13}`$ Hz, respectively.
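The equivalent widths quoted against the extrapolated continuum are integrals of the excess above the continuum divided by the continuum. A schematic version, with a made-up Gaussian feature standing in for the 7.7$`\mu \mathrm{m}`$ complex; only the $`\nu ^{0.65}`$ continuum shape is taken from the text, and everything else is illustrative:

```python
import numpy as np

C_MICRON_HZ = 2.998e14                      # speed of light in micron * Hz

def equivalent_width(wl, f_nu, f_cont, lo, hi):
    """Equivalent width (micron) of the emission above the continuum in [lo, hi]."""
    sel = (wl >= lo) & (wl <= hi)
    return np.trapz((f_nu[sel] - f_cont[sel]) / f_cont[sel], wl[sel])

wl = np.linspace(3.0, 11.7, 500)
cont = (C_MICRON_HZ / wl) ** 0.65           # nu**0.65 continuum, arbitrary normalization
cont /= cont[0]
spec = cont + 8.0 * np.exp(-0.5 * ((wl - 7.7) / 0.4) ** 2)   # toy 7.7 micron complex

print(f"EW(7.2-8.2 micron) ~ {equivalent_width(wl, spec, cont, 7.2, 8.2):.1f} micron")
```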
The natural explanation for the 4-$`\mu \mathrm{m}`$ continuum is a population of small grains transiently heated by single photons to apparent temperatures $`\sim 1000`$ K. Such a population was invoked by Sellgren et al. (1984) to explain the 3$`\mu \mathrm{m}`$ emission in reflection nebulae, and by other authors to explain the IRAS 12$`\mu \mathrm{m}`$ emission in the diffuse medium (e.g. Boulanger et al. 1988). Small particles with ten to a hundred atoms have sufficiently small heat capacities that a single UV photon can easily propel them to 1000 K equivalent temperature (Draine & Anderson 1985). Such a population is a natural extension of the AFE carriers, though it is not clear from these data whether it is truly distinct, or whether the smooth continuum is simply the non-resonant emission from the AFE carriers. While the current data cannot rule out other contributions, the shape does rule out a simple extension of the photospheric emission from main sequence stars. Red supergiants and Asymptotic Giant Branch stars may contribute, though the level would have to be fortuitously comparable to the dust emission at 10$`\mu \mathrm{m}`$, and the superposition of emission spectra would have to mimic a $`f_\nu \propto \nu ^{+0.65}`$ spectrum. ## 6 Summary and Discussion The mid-infrared spectra of normal star forming galaxies are dominated by interstellar dust emission. They are well described between 3 and 13$`\mu \mathrm{m}`$ as a combination of Aromatic Features in Emission and an underlying $`f_\nu \propto \nu ^{+0.65}`$ continuum, with the features carrying about 65% of the luminosity between 3 and 13$`\mu \mathrm{m}`$. One can reliably assume this is a universal spectral signature of dust. The constant spectral shape against changing heating conditions from galaxy to galaxy is strong evidence for particles transiently excited by individual photons rather than in thermal equilibrium. This explanation is especially compelling because it accounts for both the aromatic features and the continuum. Transient heating obtains only within a finite range of ISM phases, namely from the translucent molecular regions, through the atomic, and up to weakly ionized regions. In denser regions, AFE carriers may be insufficiently illuminated (Beichman et al. 1988, Boulanger et al. 1990), or condensed onto larger grains (Draine 1985), whereas in HII regions they would be destroyed by ionizing radiation (Césarsky et al. 1996b). Therefore the AFE flux may be approximated as the integral over the appropriate ISM phases in each galaxy of the product of the AFE carrier cross-section and the heating intensity. Helou, Ryter & Soifer (1991) showed that the mid-infrared carries a diminishing fraction of the dust luminosity as the star formation activity increases in a galaxy. While this has been interpreted as the result of generally depressed AFE carrier abundance throughout the whole galaxy, it is more likely to result primarily from relatively smaller AFE carrier niches, presumably overtaken by HII regions where harder and more intense radiation destroys AFE carriers. The mid-infrared spectral shape is sufficiently uniform among galaxies that it can be used for redshift determinations, using for instance the SIRTF InfraRed Spectrometer (Roellig et al.
1998): $`L(FIR)=10^9L_{\mathrm{}}`$ galaxies can be readily detected out to z=0.1 in about an hour of integration time, whereas ultraluminous galaxies may be detectable out to z=3 in a comparable amount of time depending on the AFE-to-far-infrared ratio assumed (Weedman, private communication). For galaxies with known redshifts, AFE detection would be an unmistakable dust signature, and thus instantly distinguish between thermal and non-thermal mid-infrared emission, or quantify their relative importance (Lutz et al. 1998). We would like to thank L. Vigroux, X. Désert, F. Boulanger and B.T. Soifer for interesting discussions. The anonymous referee’s comments were helpful in improving the paper. GH acknowledges the hospitality of IAS, Université Paris Sud, and of Delta Airlines during part of this work. The Infrared Space Observatory (ISO) is an ESA project with instruments funded by ESA Member States (especially the PI countries: France, Germany, the Netherlands and the United Kingdom), and with participation by NASA and ISAS. The ISOPHOT data presented in this paper were reduced using PIA, which is a joint development by the ESA Astrophysics Division and the ISOPHOT consortium. This work was supported in part by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.