no-problem/9909/astro-ph9909248.html
# Pixellated Lenses and Estimates of $H_0$ from Time-delay Quasars

## 1 Introduction

It has been 35 years since Refsdal (1964) proposed an elegant method to derive the Hubble constant, $H_0$, from a multiply-imaged QSO system. Until a few years ago the main obstacle to implementing the method was the lack of sufficiently accurate data on the image positions and time delays between images. As the precision of the observational measurements improves, the errors in $H_0$ become dominated by the uncertainties in the galaxy mass distribution. These errors are very hard to quantify using parametric shape(s) for the galaxy lens model; the derived errors will tend to be underestimated, as was noted by Bernstein and Fischer (1999), who constructed many types of parametric models for Q0957+561: 'The bounds on $H_0$ are strongly dependent on our assumptions about a "reasonable" galaxy profile.'

In this contribution we develop and apply a non-parametric method for modeling lensing galaxies and thus estimating $H_0$, with the error bars derived entirely from the uncertainties in the galaxy mass map. The method was initially applied to reconstructing mass maps of lensing galaxies (Saha & Williams 1997); in this volume, Saha et al. show how it can be extended to recover the mass distribution in galaxy clusters.

## 2 The Method

We start by tiling the lens plane with $25^2$ independent mass pixels, each $0.1''$ on a side. Then we pixellate the lens equation and the arrival time surface. The image observables, which we take to be exact, enter as primary modeling constraints. The secondary constraints pertain to the main lensing galaxy:

1. mass pixel values $\kappa_n$ must be non-negative;
2. the location of the galaxy center is assumed to be coincident with that of the optical/IR image;
3. the gradient of the projected mass density of the galaxy must point away from the galaxy center to within a tolerance angle of $\pm 45^\circ$;
4. inversion symmetry, i.e. a galaxy must look the same if rotated by $180^\circ$ (applied only if the galaxy optical/IR image looks symmetric);
5. the logarithmic projected density gradient in the image region, $\frac{d\log\kappa}{d\log r}=\mathrm{ind}(r)$, should not be any shallower than $-0.5$;
6. external shear, i.e. the influence of mass other than the main lensing galaxy, is restricted to be constant across the image region, i.e. it is represented by adding a term $\frac{1}{2}\gamma_1(\theta_1^2-\theta_2^2)+\gamma_2\theta_1\theta_2$ to the lensing potential.

After both primary and secondary constraints have been applied, we are left with a large number of viable galaxy models. We then generate a fair sample of the remaining model space by random walking through it until we accumulate 100 mass maps. Each of these galaxies reproduces the image properties exactly, and each looks reasonably like a real galaxy. The distribution of the 100 corresponding $H_0$'s is therefore the derived probability distribution $p(h)$. The width of the distribution indicates the uncertainty arising from our lack of sufficient knowledge about the lensing galaxy. (See Williams & Saha 1999 for details.)
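The pixellated machinery can be illustrated with a small numerical sketch. The grid size, source position, and the circular test profile below are illustrative stand-ins (not a reconstructed galaxy); the potential is the standard thin-lens convolution $\psi(\vec\theta)=\frac{1}{\pi}\int\kappa(\vec\theta')\ln|\vec\theta-\vec\theta'|\,d^2\theta'$, images appear as stationary points of the arrival-time surface, and the time delays between them scale as $h^{-1}$.

```python
import numpy as np

# Toy pixellated lens: kappa on a 25x25 grid of 0.1" pixels, as in the text.
n, pix = 25, 0.1
xs = (np.arange(n) - n // 2) * pix
X, Y = np.meshgrid(xs, xs)

# Illustrative circular convergence profile (an assumption, not a model fit).
kappa = 1.5 * np.exp(-np.hypot(X, Y) / 0.4)

# Pixellated lensing potential psi = (1/pi) * sum_m kappa_m ln|theta - theta_m| dA,
# with the singular self-pixel term crudely softened.
dA = pix ** 2
psi = np.zeros_like(kappa)
for i in range(n):
    for j in range(n):
        r = np.hypot(X - X[i, j], Y - Y[i, j])
        r[i, j] = 0.3 * pix  # softening of the self-pixel
        psi[i, j] = (kappa * np.log(r)).sum() * dA / np.pi

# Arrival-time surface for a source at beta; images are its stationary points.
beta = (0.05, 0.02)  # arbitrary illustrative source position
T = 0.5 * ((X - beta[0]) ** 2 + (Y - beta[1]) ** 2) - psi

# Locate grid-level local minima (saddle-point images need a finer search).
core = T[1:-1, 1:-1]
neighbours = [T[1 + di:n - 1 + di, 1 + dj:n - 1 + dj]
              for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
minima = np.all([core < nb for nb in neighbours], axis=0)
print("candidate image pixels:", np.argwhere(minima) + 1)
```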
## 3 Blind tests

First we apply the method to a synthetic sample of lenses. One of us constructed 4 sets of QSO-galaxy four-image lenses, some including external shear, and conveyed the image position and time delay ratio information only to the other one of us, the modeler. In each case the modeler decided, based on the appearance of the reconstructed galaxy mass maps, whether inversion symmetry was appropriate or not. The modeler was told the correct answer for $h$, which is 0.025 in our synthetic universe, after the final $p(h)$ distributions were obtained.

Distributions from the four synthetic lenses were multiplied together to yield the combined estimated $p(h)$, shown in Figure 1 as the solid line. The dashed histogram is for the case where inversion symmetry was imposed on all four galaxies, and the dotted histogram represents the case where inversion symmetry was not applied in any of the systems. All three resultant distributions recover $h$ fairly well, with 90% of the models contained within 10% of the true $h=0.025$. However, the distributions are not the same; the most probable values differ by about 10%. This illustrates how a relatively minor feature in the modeling constraints, namely the inclusion or exclusion of inversion symmetry, can make a considerable difference in the estimated $h$ value when the goal is to achieve a precision of better than 10%. Based on this observation we conclude that the assumed galaxy shape in parametric reconstructions plays a major role in determining the outcome of $H_0$.

## 4 PG1115+080 and B1608+656

We now apply the method to PG1115 and B1608, both of which have accurate image position and time delay measurements (Schechter et al. 1997, Barkana 1997, Myers et al. 1995, Fassnacht et al. 1999). Figure 2 shows an ensemble average of 100 reconstructed mass maps for PG1115. Note that the density contours are smooth and roughly elliptical. Figure 3 is a plot of the double logarithmic slope of the projected density profile in the vicinity of the images vs. the derived estimate for $H_0$ for each of the 100 reconstructed galaxies. Since the anticorrelation is well defined and is understood in terms of the arrival time surface, it can potentially be used as an additional modeling constraint. The combined $p(h)$ distribution based on the lensing data of PG1115 and B1608 is presented in Figure 4. The median is at about 60 km s<sup>-1</sup> Mpc<sup>-1</sup>, and the 90% confidence range extends from 45 to 80 km s<sup>-1</sup> Mpc<sup>-1</sup>.
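To make the combination step concrete, here is a minimal sketch of how per-lens $p(h)$ ensembles multiply into a joint distribution; the Gaussian fake ensembles stand in for the 100-model outputs, and all numbers below are placeholders rather than the results quoted above.

```python
import numpy as np

# Stand-ins for the per-lens ensembles: each lens yields 100 model values of h.
rng = np.random.default_rng(0)
ensembles = [rng.normal(0.6, 0.08, size=100) for _ in range(4)]

# Histogram each ensemble on a common grid to get p_i(h); independent lenses
# combine as p(h) proportional to the product of the p_i(h).
bins = np.linspace(0.3, 0.9, 31)
centers = 0.5 * (bins[:-1] + bins[1:])
p = np.ones(len(centers))
for sample in ensembles:
    hist, _ = np.histogram(sample, bins=bins, density=True)
    p *= hist + 1e-12          # guard against empty bins
p /= np.trapz(p, centers)

# Median and 90% interval from the cumulative distribution.
cdf = np.cumsum(p) / p.sum()
median = centers[np.searchsorted(cdf, 0.5)]
lo, hi = centers[np.searchsorted(cdf, 0.05)], centers[np.searchsorted(cdf, 0.95)]
print(f"median h = {median:.2f}, 90% range = [{lo:.2f}, {hi:.2f}]")
```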
no-problem/9909/cond-mat9909224.html
## 1 Introduction

The single band Hubbard model is widely accepted as the simplest starting point for a microscopic description of correlated electron systems. More recently it has been realized that the inclusion of a $t'$ coupling can fit the phenomenology of some cuprates. Recent photoemission experiments seem to provide Fermi surfaces compatible with the dispersion relation of the model at moderate values of $t'$ and densities, while it has been argued that it can fit some features of ruthenium compounds at higher values of $t'$ and the doping.

The study of inhomogeneous charge and spin phases in the Hubbard model has been a subject of interest since the discovery of the high-$T_c$ compounds, as it was seen that they have a very inhomogeneous electronic structure, at least in the underdoped regime. However, most of the work was done before the importance of $t'$ was realized. A sufficiently large value of the ratio $t'/t$, compatible with the values suggested for the cuprates ($t'/t\sim -0.3$), leads to a significant change in the magnetic properties of the model, as a ferromagnetic phase appears at low doping. This phase has been found by numerical and analytical methods, and it is a very robust feature of the model.

The purpose of this work is to study the influence of $t'$ on the magnetism of the Hubbard model at moderate values of $U$ and density. We will make use of an unrestricted Hartree-Fock approach in real space, which allows us to visualize the charge and spin configurations. We believe that the method is well suited for the present purposes as: i) it gives a reasonable description of the Néel state, with a consistent description of the charge gap and spin waves, when supplemented with the RPA. ii) It becomes exact if the ground state of the model is a fully polarized ferromagnet, as, in this case, the interaction plays no role. A fully polarized ground state is, indeed, compatible with the available Monte Carlo and T-matrix calculations. iii) It is a variational technique, and it should give a reasonable approximation to the ground state energy. This is the only ingredient required in analyzing the issue of phase separation. iv) It describes the doped antiferromagnet, for $t'=0$, as a dilute gas of spin polarons. The properties of such a system are consistent with other numerical calculations of the same model. On the other hand, the method used here does not allow us to treat possible superconducting instabilities of the model, which have been shown to be present, at least in weak coupling approaches. The study of these phases requires extensions of the present approach, and will be reported elsewhere.

The main new feature introduced by a finite $t'$, using simple concepts in condensed matter physics, is the destruction of the perfect nesting of the Fermi surface at half filling, and the existence of a second interesting filling factor, at which the Fermi surface includes the saddle points of the dispersion relation. At this filling, the density of states at the Fermi level becomes infinite, and the metallic phase becomes unstable, even for infinitesimal values of the interaction. For sufficiently large values of $t'/t$, the leading instability at this filling is towards a ferromagnetic state. In the following section, we present the model and the method. Then, we discuss the results.
As the system shows a rich variety of behaviors, we have classified the different regimes into an antiferromagnetic region, dominated by short range antiferromagnetic correlations, a ferromagnetic one, and an intermediate situation, where the method suggests the existence of phase separation. The last section presents the main conclusions of our work.

## 2 The model and the method

The t-t' Hubbard model is defined on the two dimensional square lattice by the hamiltonian

$$H=-t\sum_{\langle i,j\rangle,s}c_{i,s}^{+}c_{j,s}-t'\sum_{\langle\langle i,j\rangle\rangle,s}c_{i,s}^{+}c_{j,s}+U\sum_{i}n_{i\uparrow}n_{i\downarrow},$$ (1)

with the dispersion relation

$$\epsilon(\mathbf{k})=-2t[\cos(k_xa)+\cos(k_ya)]-4t'\cos(k_xa)\cos(k_ya).$$

We have adopted the convention widely used to describe the phenomenology of some hole-doped cuprates, $t>0$, $t'<0$, $2|t'|/t<1$. With this choice of parameters the bandwidth is $W=8t$ and the Van Hove singularity is approached by doping the half-filled system with holes. Throughout this study we will fix the value of $t=1$, so that energies will be expressed in units of $t$. Unless otherwise stated, we will work on a $12\times 12$ lattice with periodic boundary conditions. We have chosen the $12\times 12$ lattice because it is the minimal size for which finite size effects are almost irrelevant.

The unrestricted Hartree-Fock approximation minimizes the expectation value of the hamiltonian (1) in the space of Slater determinants. These are ground states of a single particle many-body system in a potential defined by the electron occupancy of each site. This potential is determined selfconsistently from

$$H=\sum_{i,j,s}t_{ij}c_{i,s}^{\dagger}c_{j,s}-\sum_{i,s,s'}\frac{U}{2}\vec{m}_i\cdot c_{i,s}^{\dagger}\vec{\sigma}_{s,s'}c_{i,s'}+\sum_{i}\frac{U}{2}q_i(n_{i\uparrow}+n_{i\downarrow})+\mathrm{c.c.}$$

(where $t_{i,j}$ denotes nearest, $t$, and next-nearest, $t'$, neighbors), and the self-consistency conditions are

$$\vec{m}_i=\sum_{s,s'}\langle c_{i,s}^{\dagger}\vec{\sigma}_{s,s'}c_{i,s'}\rangle,\qquad q_i=\langle n_{i\uparrow}+n_{i\downarrow}-1\rangle,$$

where $\vec{\sigma}$ are the Pauli matrices. We have established a very restrictive criterion for the convergence of a solution. The iteration ends when the effective potential of the hamiltonian and the one deduced from the solution are equal up to $\Delta E<10^{-7}$. When different configurations converge for a given value of the parameters, their relative stability is found by comparing their total energies.

## 3 The results

The results of this work are summarized in fig. 1, which represents the energy of the ground state configurations versus doping from $x=0$ to $x=0.34$ (where $x$ is the ratio of the total number of holes to the total number of sites) for the representative values $t'=-0.3$ and $U=8$. As, in most cases, a variety of selfconsistent solutions can be found, we have tried to avoid an initial bias by starting with random spin and charge configurations. Once the system has evolved to a stable final configuration, this has been used as the initial condition for the nearby dopings. Hence, most of the configurations discussed in the text are robust, in the sense that they have not been forced by a choice of initial conditions, and hence are stable under small changes of the initial values. Exceptions are the diagonal commensurate domain walls and the stripes. These configurations were set as initial conditions and found to be self-consistent (a schematic version of the self-consistency loop is sketched below).
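As a concrete illustration of the procedure, the following is a minimal collinear ($S_z$ only) version of the self-consistency loop, on a smaller lattice and with an arbitrary hole count; the noncollinear terms, the energy comparison between competing solutions, and the $12\times 12$ size of the actual calculation are omitted for brevity. Hopping signs follow the hamiltonian (1).

```python
import numpy as np

L, t, tp, U = 8, 1.0, -0.3, 8.0   # smaller lattice than the 12x12 of the text
holes = 6
n_occ = (L * L - holes) // 2      # electrons per spin (S_z = 0 sector assumed)

def site(x, y):
    return (x % L) * L + (y % L)

# One-body part of hamiltonian (1): -t on nearest, -t' on next-nearest bonds.
H0 = np.zeros((L * L, L * L))
for x in range(L):
    for y in range(L):
        i = site(x, y)
        for dx, dy, amp in [(1, 0, -t), (0, 1, -t), (1, 1, -tp), (1, -1, -tp)]:
            j = site(x + dx, y + dy)
            H0[i, j] += amp
            H0[j, i] += amp

# Collinear unrestricted Hartree-Fock: U n_up n_dn -> U<n_dn> n_up + U<n_up> n_dn.
rng = np.random.default_rng(1)
n = {"up": rng.random(L * L), "dn": rng.random(L * L)}  # random start, as in the text
for sweep in range(5000):
    new = {}
    for s, sbar in (("up", "dn"), ("dn", "up")):
        _, vecs = np.linalg.eigh(H0 + np.diag(U * n[sbar]))
        new[s] = (vecs[:, :n_occ] ** 2).sum(axis=1)     # fill the lowest orbitals
    change = max(abs(U * (new[s] - n[s])).max() for s in n)
    n = {s: 0.5 * n[s] + 0.5 * new[s] for s in n}       # linear mixing for stability
    if change < 1e-7:                                   # convergence criterion of the text
        break

print(sweep, (n["up"] - n["dn"]).reshape(L, L).round(2))  # local magnetization map
```

Staggered magnetization at small hole counts, and its disruption by localized charge, can be read off the printed map; the full calculation additionally tracks $\vec{m}_i$ away from the $z$ axis.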
Even if there are many possible solutions, the system cannot be "forced" to converge to a given solution by appropriately choosing the initial conditions. In particular, homogeneous solutions, such as a pure AF solution, do not converge near half filling, as will be discussed later. Fig. 2 and fig. 3 show a comparison of the energies of different configurations converging in the same range of dopings. Fig. 1 shows only minimal energy configurations. Once a configuration converges, we have checked its stability under changes in $U$ and $t'$.

The most remarkable feature of fig. 1 is the smooth transition from insulating antiferromagnetism to metallic ferromagnetism. The antiferromagnetic region extends over a range of hole doping from $x=0$ to $x=0.125$ and the ferromagnetic region from $x=0.125$ to $x=0.34$. In the antiferromagnetic region the predominant configurations are fully polarized antiferromagnetism (AF), polarons (POL), diagonal commensurate domain walls (dcDW), and noncollinear solutions ($S_x$). In the ferromagnetic region the phases are ferromagnetic domains (fm DOM), ferromagnetic noncollinear solutions (fm SDW) and the fully polarized state or Nagaoka configuration (Ng). Most of the AF configurations are known as solutions of the Hubbard model with $t'=0$. We will here comment on the changes induced by $t'$. The FM configurations are totally new and due to the presence of $t'$, as is the zone of coexistence of both magnetic orderings. In addition, we have also analyzed in detail some striped configurations, due to their possible experimental relevance. In what follows we analyze the antiferromagnetic and ferromagnetic regions and discuss the possibility of phase separation.

### 3.1 The antiferromagnetic region

The study of the motion of a few holes in an antiferromagnetic background has been one of the main subjects in the literature related to the cuprates, as these are doped AF insulators. The region of the Hubbard model at and close to half filling is also the area where the metal-insulator transition occurs, and where the well-established spin polarons or spin bags coexist with domain walls and, possibly, striped configurations. The diagonal hopping $t'$ has a strong influence over this region, as it destroys the perfect nesting of the Hubbard model at half filling and the particle-hole symmetry which leads to AFM order in weak coupling approaches.

The AF region is formed by fully polarized antiferromagnetism, polarons, diagonal commensurate domain walls and noncollinear solutions. We have also found stripes as excited states. The configurations and the density of states are shown in fig. 4, fig. 5, fig. 6 and fig. 7. We will give a brief discussion of these configurations.

Antiferromagnetism. For the reference values of $U=8$ and $t'=-0.3$, fully polarized antiferromagnetism (AF) is the lowest energy configuration only at half filling. For the range of dopings $0.007\le x\le 0.027$ ($1\le h\le 4$, where $h$ denotes the number of holes), AF converges but POL are energetically more favourable. Above four holes a purely AF initial configuration evolves to polarons. AF is the minimal energy configuration for lower values of $U$ in a wider range of dopings. For example, for $U=4$ and $t'=-0.3$, AF is the lowest energy configuration in the range $0\le x\le 0.03$. This result is almost insensitive to changes in $t'$.
We can conclude then that in the presence of $t'$ the homogeneous fully polarized antiferromagnetic configuration (Néel state) is not the dominant solution near half filling. This result is to be contrasted with what happens with electron doping, where AF dominates a larger region of the doping space. The reason for this asymmetry will become clear in the discussion of the polaronic configuration that follows. For $t'\ne 0$ inhomogeneous solutions are clearly energetically more favourable.

Polarons. Magnetic polarons have been discussed at length in the literature. For $t'=0$ the magnetization points along the same direction everywhere in the cluster and the extra charge is localized in regions that can be of either cigar or diamond shape. These regions define a core where the magnetization is reduced. In the present case, $t'\ne 0$, this picture changes substantially. The two Hubbard bands in the Néel state are no longer equivalent, with bandwidths given, approximately, by $8|t'|\pm 4t^2/U$. Polarons are found at the edges of the narrower band at all values of $t'$. This situation corresponds to hole doping for our choice of sign of $t'$ ($t'/t<0$). The doping of the wider electron band usually leads to stable homogeneous metallic AF solutions, where the extra carriers are delocalized throughout the lattice. We have found polarons in the electron region only when $U$ is big ($U>6$) and $|t'|$ small ($|t'|<0.15$). On the other hand, the localization of the polarons induced by hole doping increases with increasing $|t'/t|$. This reflects the fact that these polarons are derived from a narrower Hubbard band. Qualitatively, this fact can be understood in terms of the asymmetric tendency of the system towards phase separation when $t'\ne 0$. The polarons can also be understood as an incipient form of phase separation, as the core shows strong ferromagnetic correlations.

Polarons converge in a wide region of the phase diagram, coexisting with AF and dcDW, as shown in fig. 2. They have the lowest energy in the doping range $0.007\le x\le 0.035$ ($1\le h\le 5$). They do not converge in the range $0.076\le x\le 0.097$ ($11\le h\le 13$), where the noncollinear solution has lower energy. This is also different from the situation with $t'=0$, where polarons converge and have lower energy in the full range of dopings $2\le h\le 30$. The DOS of polarons is shown in fig. 5b for five holes. Fig. 5a shows the reference AF state at half filling. In the DOS for polarons the localized states appear in the antiferromagnetic gap. As doping increases, they form a mid gap subband, but the shape of the antiferromagnetic spectrum is still clearly seen (see fig. 5c).

Diagonal commensurate domain walls. The dcDW configuration is formed by polarons arranged in the diagonal direction, creating an almost one dimensional charged wall that forms a ferromagnetic domain (see fig. 4b). We stress "commensurate" because they do not separate different AF domains as stripes do. The density of states of these solutions differs from that of usual domain walls in that the one dimensional band where the holes are located is wider, and the Fermi level lies inside it. We do not find a size independent one particle gap in these solutions, unlike for conventional domain walls. Within the numerical precision of our calculations, these structures are metallic, while antiphase domain walls are insulating. The dcDW are the predominant configuration in the AF region.
They converge in the range of dopings $0.014\le x\le 0.118$ ($2\le h\le 17$) and are the minimal energy configuration in most of the doping range, as can be seen in fig. 2. These configurations resemble an array of the polarons discussed earlier, along the (1,1) direction. Thus, we can say that individual polarons have a tendency to align themselves along the diagonals. This may be due to the fact that $t'$ favors hopping along these directions. This tendency can also be seen in the density of states. We have also checked that at a lower value of $|t'|$ (also for $U=8$) these configurations are less favored. In particular, for $U=8$, dcDW are not formed at low $|t'|$ while they do form at $t'=-0.3$. Moreover, we have tried vertical domain walls: they are not formed for $U=8$ and $t'=-0.3$, but they do form at lower $|t'|$ ($t'=-0.1$). Summarizing, we have checked that $t'$ favours dcDW against POL and disfavours vertical DW. This behaviour is also found with the striped configurations (see below).

Noncollinear solution $S_x$. The structure denoted by $S_x$ in the phase diagram consists of a special configuration with noncollinear spin. It appears in the range of dopings $0.076\le x\le 0.097$ ($11\le h\le 13$), in competition with dcDW, and has lower energy. We have checked that this structure is never seen in the absence of $t'$; it is also found at $U=4$ and $t'=-0.3$. The configuration is shown in fig. 4c. We see that there are polarons, but there is a contribution of the spin $x$ component at some random sites. The convergence to this configuration is very slow. It is interesting to point out that we obtain this configuration when using conventional polarons as the initial condition. We do not have a complete understanding of why this configuration is preferred in this region, although it is interesting to note that it happens for the commensurate value of twelve holes (in our $12\times 12$ lattice) and the two neighboring values $h=11$ and $h=13$. Its density of states, shown in fig. 5d, is very similar to the polaronic DOS.

Stripes. The striped configurations are similar to the domain walls, but the one dimensional arrangement of charge separates antiferromagnetic domains with a phase shift of $\pi$. Two typical striped configurations are shown in fig. 6a and fig. 6b. Recently the stripes have attracted a lot of interest, as half filled vertical stripes (one hole every second site) are found in cuprates while diagonal stripes are found in nickelates. We have obtained stripes as higher energy configurations, and we have not found half filled vertical stripes, in agreement with other works using mean field approximations for $U=8$. It is known that the addition of a long range Coulomb interaction could stabilize the vertical stripes as ground states for large $U$, and applying a slave-boson version of the Gutzwiller approach the half filled vertical stripes can be ground states depending on parameters. We have studied the filled vertical stripe and the diagonal stripe obtained at values of the doping commensurate with the lattice. Our main interest is the role played by $t'$ in these configurations. We have seen that $t'$ has a strong influence on them: $t'$ reduces significantly the basin of attraction of the vertical stripes, which agrees with recent calculations in the $t$-$t'$-$J$ model, while it favors diagonal stripes.
The evolution of the vertical stripe with $t'$ can be seen in figs. 6b and 6c for the values $U=4$, $t'=0$ and $t'=-0.2$. These stripes do not converge for higher values of $|t'|$. We have instead found that diagonal stripes are favored by $t'$, much as the dcDW were. The density of states of the stripes is very similar to that of polarons. We can conclude that they are insulating states (see fig. 7), unlike the similar commensurate domain walls, where a more metallic character can be appreciated.

### 3.2 The ferromagnetic region

The existence of metallic ferromagnetism in the Hubbard model remains one of the most controversial issues in the subject. Large areas of ferromagnetism in the doping parameter space were found in the earliest works on the $t$-$t'$ Hubbard model within the mean field approximation, and were often assumed to be an artifact of the approximation which would be destroyed by quantum corrections. There are two main regions where ferromagnetism is likely to be the dominant configuration. One is the region close to half filling, in particular at one hole doping, where the Nagaoka theorem ensures a fully polarized ferromagnetic state in a bipartite lattice at $U=\infty$. The other is the region around the Van Hove fillings, where there is a very flat lower band and where ferromagnetism was found for large values of $|t'|$ close to $|t'|=0.5$ with quantum Monte Carlo techniques and in the T-matrix approximation. FM is also found to be the dominant instability for small $U$ and large $t'$ in analytical calculations based on the renormalization group, and at intermediate values of $U$ with a mixture of analytical and mean field calculations. Finally, there is a controversy on whether Nagaoka ferromagnetism is stabilized at the bottom of the band, $\rho\to 0$.

Most of the previous calculations rely on the study of the divergences of the magnetic susceptibility, pointing to either a symmetry breaking ground state or to the formation of spin density waves as low energy excitations of the system. In many cases it is not possible in this type of analysis to discern the precise nature of the magnetic phases and, in particular, whether they correspond to fully polarized states (long range order) or to inhomogeneous configurations with an average magnetization. A complete study of the magnetic transitions as a function of the electronic density is also a difficult issue.

We have studied the stability of ferromagnetic configurations in the full range of dopings discussed previously. Two main issues can be addressed within the method of the present paper. One is the existence of the fully polarized ferromagnetic state (Nagaoka state), and its stability not only towards the state with one spin flip, but against any weakly polarized or paramagnetic configuration. The other is the specific symmetry of the partially polarized ferromagnetic configurations. In the region close to half filling, our results indicate that the Nagaoka theorem probably does hold in the presence of $t'$ (which spoils the bipartite character of the lattice), since the Nagaoka state appears when doping with one hole at such large values of $U$ as to make the kinetic term quite irrelevant. We found Nagaoka FM at values of $U$ as large as $U=128$. No FM configurations are found doping with two holes, even at $U=128$. The region of low to intermediate electron density has been analyzed for various values of $U$ and $t'$.
This region includes dopings close to the Van Hove singularity, where FM should be enhanced due to the large degeneracy of states in the lower band. The position of the Van Hove singularity for a given value of $U$ and $t'$ can be read off from the undoped DOS; it has been determined in the literature. Our results are the following: Nagaoka FM is not found for $t'=-0.1$ at any filling for $U\le 8$. For $t'=-0.3$, two types of FM configurations are the most stable in the range of dopings shown in fig. 1. Ferromagnetic spin density waves (fm SDW), depicted in fig. 8b, dominate the phase diagram at densities close to the AFM transition, $0.146\le x\le 0.194$ ($21\le h\le 28$), and at $x>0.264$ ($h>37$). In the region in between, ferromagnetic domains (fm DOM), such as the one shown in fig. 8c, are the most stable. Both types of configurations show a strong charge segregation and are clearly metallic. The excitation spectrum of these configurations can be seen in fig. 9. Fully polarized FM metallic states (Nagaoka), shown in fig. 8a, are found at all values of $h$ corresponding to closed shell configurations, from a critical value $h_c(U)$ depending on $t'$ down to the bottom of the band. They are shown as vertical solid lines in fig. 1. They are also metallic, with a higher DOS at the Fermi level than the partially polarized configurations. Larger values of $|t'|$ or $U$ push down the critical $h$, in agreement with previous works. Some values of $h_c(U)$ are, for $t'=-0.3$: $h_c(6)=37$, $h_c(8)=29$, $h_c(10)=21$. For $t'=-0.4$, $h_c(8)=25$. As mentioned before, no FM is found for $t'=-0.1$. The former results show a large region of ferromagnetic configurations whose upper boundary coincides with previous estimations but which extends to the bottom of the band. We have found paramagnetic configurations to converge at the bottom of the band, but their energies are higher than the ferromagnetic ones. Comparing our solutions with the corresponding results obtained with the same method in the case $t'=0$, we find that the inclusion of $t'$ favors ferromagnetism for intermediate to large dopings.

### 3.3 Phase separation

Although the issue of phase separation (PS) in the Hubbard model is quite old, it has become the object of very active research following the experimental observation of charge segregation in some cuprates. Despite the effort, the theoretical situation is quite controversial, although recent calculations rule out PS in the 2D Hubbard model. It seems to occur above some value of $J$, although other work suggests that it is likely for all values of $J$ in the $t$-$J$ model. PS has also been invoked in connection with the striped phase of the cuprates.

The theoretical study of PS is a difficult subject. While it is a clear concept in statistical mechanics dealing with homogeneous systems in thermodynamical equilibrium, the characterization of PS in discrete systems is much more involved. It is assumed to occur in those density regions where the energy as a function of density is not a convex function. This behavior is difficult to achieve in finite systems, where the indication of PS is a line $E(x)$ of zero curvature, i.e. of infinite compressibility. Even this characterization, which should be correct if it refers to uniform phases of the system, is problematic when many inhomogeneous phases compete in the same region of parameter space. On the other hand, simple thermodynamic arguments suggest that it should be a general phenomenon near magnetic phase transitions.
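The convexity criterion just described is easy to state operationally. The sketch below applies it to a synthetic $E(x)$ curve (an arbitrary polynomial, not data from figs. 1-2), flagging the region where $E''(x)<0$ and extracting the coexisting densities from the lower convex envelope, which is the discrete form of the Maxwell construction used below.

```python
import numpy as np

# Illustrative energy-versus-doping curve with a non-convex stretch
# (a synthetic stand-in, NOT data from the calculations of this paper).
x = np.linspace(0.0, 0.34, 341)
E = 1.2 * x**2 - 12.0 * x**3 + 24.0 * x**4

# Phase separation is signalled where E(x) fails to be convex (E'' < 0).
d2E = np.gradient(np.gradient(E, x), x)

# Maxwell construction = lower convex envelope of (x, E): the straight
# segment replacing the non-convex stretch joins the coexisting densities.
hull = []
for p in zip(x, E):
    hull.append(p)
    while len(hull) >= 3:
        (x0, E0), (x1, E1), (x2, E2) = hull[-3:]
        if (E1 - E0) * (x2 - x1) >= (E2 - E1) * (x1 - x0):
            hull.pop(-2)          # middle point lies above the chord: drop it
        else:
            break

hx = np.array([p[0] for p in hull])
on_hull = np.isin(x, hx)
print("non-convex window (E''<0):", x[d2E < -1e-9].min(), x[d2E < -1e-9].max())
print("coexistence window (off hull):", x[~on_hull].min(), x[~on_hull].max())
```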
PS is also very hard to observe numerically, as demonstrated by the results cited previously. Exact results such as the one obtained in the literature are very restrictive and hence of limited utility. Our work supports the evidence for phase separation of the model in several ways. The first is through the plot of the total energy of the minimal energy configuration as a function of the doping $x$ shown in fig. 1. There we can see that the dominant feature follows a straight line. As mentioned before, this characterization has the problem of comparing the energies of different types of configurations. The evidence is clearer if we observe the same plot for a given fixed configuration in the AF region where phase separation occurs (fig. 2). The polaronic configurations in fig. 2 follow a straight line, while negative curvature is clearly seen in the plot of the commensurate domain walls, the more abundant solution in this region. A Maxwell construction applied to this region of the curve interpolates rather well to half filling. The best evidence is provided by the comparison between the plots corresponding to the two uniform configurations existing in the system. In the case of the Néel state (AF of fig. 2) we can see a straight line in the region of densities where it is a self-consistent solution. This plot should be compared with the one in fig. 10 corresponding to the uniform Nagaoka states. In the large region where this homogeneous state is found, the plot follows very closely a standard quadratic curve. Finally, we have looked at the charge and spin configurations of minimal energy. Apart from the AFM configuration at half filling and the Nagaoka FM, all inhomogeneous configurations show the same pattern: regions with an accumulation of holes accompanied by ferromagnetic order coexisting with regions of lower density with AFM order. The charge segregation is obvious in configurations like the ones shown in fig. 4 and in fig. 8c. We have found fully polarized solutions in closed shell configurations down to the lowest electron occupancies allowed in our $12\times 12$ cluster (5 electrons). However, we cannot rule out the existence of paramagnetic solutions at even lower fillings.

With all the previous hints we reach the conclusion that the $t$-$t'$ Hubbard model tends to phase separate into an antiferromagnetic and a ferromagnetic fully polarized state with different densities for any doping away from half filling, up to the Van Hove filling where FM sets in. It is interesting to note that this result was predicted in a totally different context by Markiewicz. Phase separation has also been predicted in the same range of dopings elsewhere, but between a paramagnetic and a ferromagnetic state.

## 4 Conclusions

In this paper we have analyzed the charge and spin textures of the ground state of the $t$-$t'$ Hubbard model in two dimensions as a function of the parameters $U$, $t'$ and the doping $x$, in a range from half filling to intermediate hole doping, with the aim of elucidating the role of $t'$ in some controversial issues. These include the existence and stability of ordered configurations such as domain walls or stripes, and the magnetic behavior in the region of intermediate to large doping where the lower band becomes very flat. We have used an unrestricted Hartree-Fock approximation in real space as the method best suited to study the inhomogeneous configurations of the system.
Our results are summarized in the representative phase diagram of fig. 1, obtained for the standard values of the parameters $U=8$, $t'=-0.3$. There we can see that the system undergoes a transition from generalized antiferromagnetic insulating configurations, including spin polarons and domain walls, to metallic ferromagnetic configurations. For the values of the parameters cited, the transition occurs at a doping $x=0.125$ ($h=18$). Both types of magnetic configurations converge in the intermediate region, indicating that the transition is smooth, more like a crossover. The generalized antiferromagnetic configurations are characterized by a large peak in the density of states of the lower band and by the presence of an antiferromagnetic gap, with isolated polarons for very small doping that evolve into a mid gap subband for larger dopings. Ferromagnetic configurations have a metallic character, with a DOS at the Fermi level that increases for configurations with increasing total magnetization. Fully polarized Nagaoka states are found at all closed shell configurations in the ferromagnetic zone of the phase diagram. They have the highest DOS at the Fermi level. Apart from the homogeneous Néel and Nagaoka states, all inhomogeneous configurations show the existence of the two magnetic orders associated with charge segregation. AF is found in the regions of low charge density, and FM clusters are formed in the localized regions where the extra charge tends to accumulate.

Our main conclusion is that the only stable homogeneous phases of the system consist of the purely antiferromagnetic Néel configuration at half filling, and Nagaoka ferromagnetism, which appears around the Van Hove filling. We find the system is unstable towards phase separation for all intermediate densities. We have reached this conclusion through a careful study of the curves representing the total energy versus doping of the various configurations. Besides, the approach used allows us to visualize the inhomogeneous configurations. In all of them we find regions with an accumulation of holes accompanied by ferromagnetic order coexisting with regions of lower density with antiferromagnetic order. As the ferromagnetic phase is metallic while the Néel state is insulating, we expect the transport properties of the model in the intermediate region to resemble those of a percolating network, a system which has attracted much attention lately. Finally, our study does not exclude the existence of other non magnetic instabilities, most notably d-wave superconductivity. This can be, however, a low energy phenomenon, so that the main magnetic properties at intermediate energies or temperatures are well described by the study presented here.

We thank R. Markiewicz for a critical reading of the manuscript with very useful comments. Conversations held with R. Hlubina, E. Louis, and M. P. López Sancho are also gratefully acknowledged. This work has been supported by the CICYT, Spain, through grant PB96-0875 and by CAM, Madrid, Spain.

## 5 Figure captions

Fig. 1: Complete phase diagram for $U=8$, $t'=-0.3$. Vertical dashed lines separate the different configurations described in the text. Vertical solid lines correspond to closed shell fillings where Nagaoka ferromagnetism occurs. The curve is a plot of the total energy (in units of $t$) of the lowest energy configuration versus the doping $x$.

Fig. 2: Comparison between the energies of the different configurations converging in the AF region. The configurations are displayed in fig. 4.
Fig. 3: Comparison between the energies of the configurations in the FM region. The configurations are displayed in fig. 8.

Fig. 4: Examples of the minimal energy configurations discussed in the text for the reference values $U=8$, $t'=-0.3$ in the antiferromagnetic region. Fig. 4a shows the polaronic (POL) configuration obtained when doping with five holes. Fig. 4b shows the diagonal commensurate domain wall (dcDW) configuration obtained when doping with six holes. Fig. 4c shows the noncollinear ($S_x$) configuration obtained when doping with twelve holes.

Fig. 5: Density of states of the various configurations discussed in the text in the antiferromagnetic region for the parameter values $U=8$, $t'=-0.3$. The Fermi level is indicated as a vertical line. Fig. 5a shows the reference Néel configuration at half filling. The asymmetry of the band due to $t'$ is noticeable. The rest of the figures show POL with 5 holes (5b), POL with 14 holes (5c), dcDW with 6 holes (5d), and the $S_x$ configuration with 12 holes (5e).

Fig. 6: Striped configurations appearing at values of the doping commensurate with the lattice. Fig. 6a shows the diagonal stripe obtained for the parameter values $U=8$ and $t'=-0.3$. Figs. 6b and 6c correspond to a vertical stripe obtained for $U=4$ for the values $t'=0$ (6b) and $t'=-0.2$ (6c). We can see that for larger $|t'|$ the vertical stripe is spoiled.

Fig. 7: Density of states of the diagonal stripe configurations. The Fermi level is indicated as a vertical line.

Fig. 8: Examples of the minimal energy configurations discussed in the text for the reference values $U=8$, $t'=-0.3$ in the ferromagnetic region. Fig. 8a shows the fully polarized Nagaoka state (Ng) with $h=49$, fig. 8b shows the fm SDW obtained for $h=24$, and fig. 8c corresponds to the fm DOM obtained for $h=36$.

Fig. 9: Density of states of the ferromagnetic configurations in fig. 8. Fig. 9a corresponds to the Nagaoka configuration, fig. 9b to the fm SDW, and fig. 9c shows the DOS of the fm DOM. The Fermi level is indicated as a vertical line.

Fig. 10: Plot of the energy versus doping for the fully polarized Nagaoka configurations in the full range of dopings where they converge. The solid line is a fit to a quadratic curve.
no-problem/9909/hep-lat9909045.html
# UTCCP-P-73, Sept. 1999

Eta meson mass and topology in QCD with two light flavors (talk presented by R. Burkhalter at Lattice '99)

## 1 INTRODUCTION

Recently considerable progress has been made in the simulation of full QCD. In particular, sea quark effects have been found to lead to a closer agreement of the light meson spectrum with experiment. Missing from the calculated spectrum, however, has been the flavor singlet meson $\eta'$. Due to the difficulty of the determination of the disconnected contribution, only preliminary lattice results have been available. In the first half of this article, we present new results on this problem. Since these are obtained with two flavors of dynamical quark, we call the flavor singlet meson $\eta$ and reserve the name $\eta'$ for the case of $N_f=3$. The $\eta'$ meson is expected to obtain a large mass through the connection to instantons. This leads us to an investigation of topology in full QCD, presented in the latter half of this article.

Calculations have been performed on configurations of the CP-PACS full QCD project. These have been generated using an RG-improved gauge action and a tadpole-improved SW clover quark action at four different lattice spacings and four values of the sea quark mass corresponding to $m_{\mathrm{PS}}/m_{\mathrm{V}}\approx 0.8$–0.6. An overview of the simulation parameters is given in Table 1; more details can be found in the references.

## 2 FLAVOR SINGLET MESON

The mass difference $\Delta m$ between the flavor singlet ($\eta$) and non-singlet ($\pi$) meson can be extracted from the ratio

$$R(t)=\frac{\langle\eta(t)\eta(0)\rangle_{\mathrm{disc}}}{\langle\eta(t)\eta(0)\rangle_{\mathrm{conn}}}\to 1-B\exp(-\Delta m\,t),$$ (1)

where the right hand side indicates the expected behavior at large time separation $t$. In this work, the connected propagator was calculated with the standard method. For the disconnected propagator we used two methods. In the first instance, it was calculated using a volume source without gauge fixing (the Kuramashi method). This measurement was made after every trajectory in the course of configuration generation for all runs listed in Table 1. As for the second method, we employed a U(1) volume noise source with 10 random noise ensembles for each color and spin combination. This was performed only at $\beta=1.95$, on stored configurations separated by 10 HMC trajectories.

Figure 1 compares the ratio $R(t)$ for the two methods. We observe that they are consistent with each other, but that the error is smaller for the first method. This might be due to the fact that there are 10 times more measurements with it, although binning is made over 50 HMC trajectories in both cases to take auto-correlations into account. In the following we only use data obtained with the first method. In Fig. 1 we also see that the error of $R(t)$ increases exponentially, which makes the determination of $\Delta m$ via Eq. (1) impossible at large time separations. The data, however, show the expected behavior beginning already from small $t$, and a fit with Eq. (1) is possible from $t_{min}=2$. Increasing $t_{min}$ leads to stable results, as can be seen in Fig. 2.
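Operationally, the fit just described amounts to a three-step recipe: form $R(t)$, drop points below $t_{min}$, and fit $1-B\exp(-\Delta m\,t)$ with the exponentially growing errors as weights. The sketch below does this on synthetic numbers, which are stand-ins rather than the CP-PACS data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in for the measured ratio R(t), with errors that grow
# (roughly) exponentially in t, as discussed above.
t = np.arange(1, 13)
rng = np.random.default_rng(2)
err = 0.01 * np.exp(0.15 * t)
R = 1 - 0.8 * np.exp(-0.25 * t) + rng.normal(0, err)

def model(t, B, dm):
    return 1 - B * np.exp(-dm * t)

# Fit from several t_min values to check the stability of Delta m.
for t_min in (2, 3, 4):
    sel = t >= t_min
    popt, pcov = curve_fit(model, t[sel], R[sel], p0=(1.0, 0.3), sigma=err[sel])
    print(t_min, "dm =", popt[1].round(3), "+/-", np.sqrt(pcov[1, 1]).round(3))
```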
The chiral extrapolation of $m_\eta^2$, linear in the quark mass, is shown in Fig. 2. Contrary to the pion mass, the flavor singlet meson remains massive in the chiral limit. Figure 3 shows the $\eta$ meson mass at all measured lattice spacings after the chiral extrapolation. The scale is set using the $\rho$ meson mass. A linear extrapolation to the continuum limit gives $m_\eta=863(86)$ MeV. This value lies between the experimental $\eta$(547) and $\eta'$(958) masses. We emphasize that a proper comparison with experiment requires the introduction of a third (strange) quark and a mixing analysis.

## 3 TOPOLOGY

Studies of topology on the lattice have encountered several difficulties. In addition to the ambiguity of defining a lattice topological charge, it was found that topological modes have a very long auto-correlation time in the case of full QCD with the Kogut-Susskind quark action. We employ the field theoretic definition of the topological charge together with cooling. For the charge we use a tree-level improved definition which includes a $1\times 2$ plaquette, so that the $O(a^2)$ terms are removed for instanton configurations. For the cooling we compare two choices of improved actions, both including a $1\times 2$ plaquette term: 1) a tree-level Symanzik improved (LW) action and 2) the RG-improved Iwasaki action.

Using different actions for cooling can lead to different values of the topological charge. This ambiguity is expected to vanish only when the lattice is fine enough. We have tested this explicitly by simulating the pure SU(3) gauge theory at three lattice spacings in the range $a\approx 0.2$–0.1 fm at a constant size of 1.5 fm. As Fig. 4 shows, the topological susceptibilities $\chi_t=\langle Q^2\rangle/V$ for the two cooling actions converge to a common value towards the continuum limit. Using $\sqrt{\sigma}=440$ MeV we obtain $\chi_t=(178(9)\,\mathrm{MeV})^4$, in agreement with previous studies.

In full QCD we have so far measured the topological charge at $\beta=1.95$. Figure 5 shows the time history for two quark masses. Auto-correlation times are visibly small, even for the smallest quark mass. For the Wilson quark action rather short auto-correlation times have been reported previously. The fact that we find even shorter auto-correlations might be explained by the coarseness of our lattice.

Based on the anomalous flavor-singlet axial vector current Ward identity, one expects the topological susceptibility to vanish in the chiral limit. Indeed, Fig. 5 shows the width to be shrinking with the quark mass. The decrease, however, is not sufficient; as we find in Fig. 6, the dimensionless ratio $\chi_t/\sigma^2$, with $\sigma$ calculated for each sea quark mass, does not vary much with the quark mass, and takes a value similar to that for pure gauge theory. To understand the origin of this behavior, more investigations at different lattice spacings will be needed.

This work is supported in part by Grants-in-Aid of the Ministry of Education (Nos. 09304029, 10640246, 10640248, 10740107, 11640250, 11640294, 11740162). SE and KN are JSPS Research Fellows. AAK, TM and HPS are supported by the JSPS Research for the Future Program. HPS is supported by the Leverhulme foundation.
no-problem/9909/math9909152.html
# Tangent Spheres and Triangle Centers

## 1 Tangent Spheres

Any four mutually tangent spheres determine six points of tangency. We say that a pair of tangencies $\{t_i,t_j\}$ is opposite if the two spheres determining $t_i$ are distinct from the two spheres determining $t_j$. Thus the six tangencies are naturally grouped into three opposite pairs, corresponding to the three ways of partitioning the four spheres into two pairs.

###### Lemma 1 (Altshiller-Court [1, §630, p. 231])

The three lines through opposite points of tangency of any four mutually tangent spheres in $R^3$ are coincident.

Proof: If three spheres have a common tangency, the three lines all meet at that point; otherwise, each sphere either contains all of or none of the other three spheres. Let the four given spheres $S_i$ ($i\in\{1,2,3,4\}$) have centers $\bar{x}_i$ and radii $r_i$. If $S_i$ contains none of the other spheres, let $R_i=r_i^{-1}$; else let $R_i=-r_i^{-1}$. Then the point of tangency $t_{ij}$ between spheres $S_i$ and $S_j$ can be expressed in terms of these values as

$$t_{ij}=\frac{R_i}{R_i+R_j}\bar{x}_i+\frac{R_j}{R_i+R_j}\bar{x}_j.$$

In other words, it is a certain weighted average of the two sphere centers, with weights inversely proportional to the (signed) radii. Now consider the point

$$M=\frac{\sum_{i=1}^{4}R_i\bar{x}_i}{\sum_{i=1}^{4}R_i}$$

formed by taking a similar weighted average of all four sphere centers. Then

$$M=\frac{R_1+R_2}{(R_1+R_2)+(R_3+R_4)}t_{12}+\frac{R_3+R_4}{(R_1+R_2)+(R_3+R_4)}t_{34},$$

i.e., $M$ is a certain weighted average of the two tangencies $t_{12}$ and $t_{34}$, and therefore lies on the line $t_{12}t_{34}$. By a symmetric argument, $M$ also lies on line $t_{13}t_{24}$ and line $t_{14}t_{23}$, so these three lines are coincident.

Note that a similar weighted average for three mutually externally tangent circles in the plane gives the Gergonne point of the triangle formed by the circle centers. Altshiller-Court's proof is based on the fact that the lines $\bar{x}_it_{ij}$ meet in triples at the Gergonne points of the faces of the tetrahedron formed by the four sphere centers. We will use the following special case of the lemma, in which the four sphere centers are coplanar:

###### Corollary 1

The three lines through opposite points of tangency of any four mutually tangent circles in $R^2$ are coincident.

## 2 New Triangle Centers

Any triangle $ABC$ uniquely determines three mutually externally tangent circles centered on the triangle vertices; if the triangle's sides have lengths $a$, $b$, $c$ then these circles have radii $(-a+b+c)/2$, $(a-b+c)/2$, and $(a+b-c)/2$. The sides of triangle $ABC$ meet its incircle at the three points of tangency of these circles. For any three such circles $O_A$, $O_B$, $O_C$, there exists a unique pair of circles $O_S$ and $O_S'$ tangent to all three.
The quadratic relationship between the radii of the resulting two quadruples of mutually tangent circles was famously memorialized in Frederick Soddy's poem, "The Kiss Precise". The set $R^2\setminus(O_A\cup O_B\cup O_C)$ has five connected components, three of which are disks and the other two of which are three-sided regions bounded by arcs of the three circles; we distinguish $O_S$ and $O_S'$ by requiring $O_S$ to lie in the bounded three-sided region and $O_S'$ to lie in the unbounded region. Note that $O_S$ is always externally tangent to all three circles, but $O_S'$ may be internally or externally tangent depending on the positions of points $A$, $B$, $C$. If $O_A$, $O_B$, and $O_C$ have a common tangent line, then we consider $O_S'$ to be that line, which we think of as an infinite-radius circle intermediate between the internally and externally tangent cases.

We can then use Corollary 1 to define two triangle centers: let $M$ denote the point of coincidence of the three lines $t_{AS}t_{BC}$, $t_{BS}t_{AC}$, and $t_{CS}t_{AB}$ determined by the pairs of opposite tangencies of the four mutually tangent circles $O_A$, $O_B$, $O_C$, and $O_S$ (Figure 1), and similarly let $M'$ denote the point of coincidence of the three lines $t_{AS'}t_{BC}$, $t_{BS'}t_{AC}$, and $t_{CS'}t_{AB}$ determined by the pairs of opposite tangencies of the four mutually tangent circles $O_A$, $O_B$, $O_C$, and $O_S'$. Clearly, the definitions of $M$ and $M'$ do not depend on the ordering of the vertices nor on the scale or position of the triangle. Despite their simplicity of definition, and despite the large amount of work that has gone into triangle geometry, centers $M$ and $M'$ do not appear in the lists of over 400 known triangle centers collected by Clark Kimberling and Peter Yff (personal communications).

## 3 Relations to Known Centers

$M$ and $M'$ are not the only triangle centers defined in relation to the "Soddy circles" $O_S$ and $O_S'$. Already known were the centers $S$ and $S'$ of these circles; note that $S$ is also the point of coincidence of the three lines $At_{AS}$, $Bt_{BS}$, $Ct_{CS}$, and similarly for $S'$. The Gergonne point $Ge$ can be defined in a similar way as the point of coincidence of the three lines $At_{BC}$, $Bt_{AC}$, and $Ct_{AB}$. It is known that $S$ and $S'$ are collinear with and harmonic to $Ge$ and $I$, where $I$ denotes the incenter of triangle $ABC$. Similarly, $Ge$ and $I$ are collinear with and harmonic to the isoperimetric point and the point of equal detour.

###### Theorem 1

$M$ and $M'$ are collinear with and harmonic to $Ge$ and $I$.

Proof: By using ideas from our proof of Lemma 1, we can express $M$ as a weighted average of $S$ and $Ge$:

$$M=\frac{R_AA+R_BB+R_CC}{R_A+R_B+R_C+R_S}+\frac{R_SS}{R_A+R_B+R_C+R_S}=\frac{R_A+R_B+R_C}{R_A+R_B+R_C+R_S}\,Ge+\frac{R_S}{R_A+R_B+R_C+R_S}\,S.$$

Hence, $M$ is collinear with $S$ and $Ge$. Collinearity with $Ge$ and $I$ follows from the known collinearity of $S$ with $Ge$ and $I$. A symmetric argument applies to $M'$. We omit the proof of harmonicity, which we obtained by manipulating trilinear coordinates of the new centers in Mathematica. See http://www.ics.uci.edu/~eppstein/junkyard/tangencies/trilinear.pdf for the detailed calculations.
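The weighted-average expression for $M$ is easy to check numerically. The sketch below builds the inner Soddy circle with the Descartes circle theorem (the "Kiss Precise" relation mentioned above) and its complex-centers counterpart, then verifies that $M$ lies on the line through the opposite tangencies $t_{AS}$ and $t_{BC}$; the triangle is an arbitrary test case.

```python
import numpy as np

# Vertices as complex numbers (an arbitrary test triangle).
zc = {'A': 0 + 0j, 'B': 4 + 0j, 'C': 1 + 3j}
a = abs(zc['B'] - zc['C']); b = abs(zc['C'] - zc['A']); c = abs(zc['A'] - zc['B'])
r = {'A': (-a + b + c) / 2, 'B': (a - b + c) / 2, 'C': (a + b - c) / 2}
k = {v: 1 / r[v] for v in 'ABC'}                 # curvatures ("bends")

# Inner Soddy circle O_S: Descartes circle theorem for the curvature, and its
# complex-centers version for the center; the root sign is fixed by tangency.
kS = sum(k.values()) + 2 * np.sqrt(k['A']*k['B'] + k['B']*k['C'] + k['C']*k['A'])
root = np.sqrt(k['A']*k['B']*zc['A']*zc['B'] + k['B']*k['C']*zc['B']*zc['C']
               + k['C']*k['A']*zc['C']*zc['A'] + 0j)
for s in (+1, -1):
    zS = (k['A']*zc['A'] + k['B']*zc['B'] + k['C']*zc['C'] + 2*s*root) / kS
    if all(abs(abs(zS - zc[v]) - (1/kS + r[v])) < 1e-9 for v in 'ABC'):
        break  # external tangency to all three circles identifies O_S

# Tangency points: weights inversely proportional to radii (Lemma 1's formula).
def tangency(k1, z1, k2, z2):
    return (k1 * z1 + k2 * z2) / (k1 + k2)

tAS = tangency(k['A'], zc['A'], kS, zS)
tBC = tangency(k['B'], zc['B'], k['C'], zc['C'])

# M as the curvature-weighted average of all four centers ...
M = (k['A']*zc['A'] + k['B']*zc['B'] + k['C']*zc['C'] + kS*zS) / (sum(k.values()) + kS)

# ... lies on the line through the opposite tangencies tAS and tBC.
cross = np.imag((tAS - M) * np.conj(tBC - M))
print("collinear:", abs(cross) < 1e-9)
```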
A simple compass-and-straightedge construction for the Soddy circles and our new centers $M$ and $M'$ can be derived from the following further relation:

###### Theorem 2

Let $\ell_A$ denote the line through point $A$ perpendicular to the opposite side $BC$ of the triangle $ABC$. Then the two lines $\ell_A$ and $t_{AS}t_{BC}$ and the circle $O_A$ are coincident.

Proof: Let $O_D$ be a circle centered at $t_{BC}$, such that $O_A$ and $O_D$ cross at right angles. Then inverting through $O_D$ produces a figure in which $O_B$ and $O_C$ have been transformed into lines parallel to $\ell_A$, while $O_A$ is unchanged. Since the image of $O_S$ is tangent to $O_A$ and to the two parallel lines, it is a circle congruent to $O_A$ and centered on $\ell_A$. Therefore, the inverted image of $t_{AS}$ is a point $p$ where $\ell_A$ and $O_A$ cross. Points $t_{BC}$, $t_{AS}$, and $p$ are collinear, since one is the center of an inversion swapping the other two.

Since $\ell_A$, $O_A$, and $t_{BC}$ are all easy to find, one can use this result to construct line $t_{AS}t_{BC}$, and symmetrically the lines $t_{BS}t_{AC}$ and $t_{CS}t_{AB}$, after which it is straightforward to find $O_S$, $S$, and $M$. A symmetric construction exists for $O_S'$, $S'$, and $M'$.
no-problem/9909/gr-qc9909011.html
## I Introduction

String theory is believed to be the most promising candidate for quantum gravity. So it is natural to expect that it will resolve some problems inherent in general relativity, like the initial singularity problem. In fact, in the regime of Planck length curvature, quantum fluctuations are very large, so that the string coupling becomes large and consequently the fundamental string degrees of freedom are not the weakly coupled good ones. Instead, solitonic degrees of freedom like solitonic p-branes or D-branes are more important. Therefore it is an interesting question to ask whether including these degrees of freedom resolves the initial singularity.

The new gravity theory that can deal with such new degrees of freedom should be a deformation of standard general relativity, so that in a certain limit it reduces to the standard Einstein theory. The Brans-Dicke theory is a generic deformation of general relativity allowing a variable gravity coupling. Therefore, whatever the motivation to modify the Einstein theory, the Brans-Dicke theory is the first one to be considered. As an example, the low energy limit of string theory contains the Brans-Dicke theory with a fine tuned deformation parameter ($\omega=-1$), and it is extensively studied under the name of string cosmology. Without knowing the exact theory of p-brane cosmology, the best guess is that it should be a Brans-Dicke theory with matter. In fact there is some evidence for this: it has been found that the natural metric that couples to the p-brane is the Einstein metric multiplied by a certain power of the dilaton field. In terms of this new metric, the action that gives the p-brane solution becomes the Brans-Dicke action with a definite deformation parameter $\omega$ depending on p.

In our previous works, we studied a gas of solitonic p-branes treated as perfect fluid type matter in a Brans-Dicke theory, allowing the equation of state parameter $\gamma$ to be arbitrary. There we studied the case where the perfect fluid does not couple to the dilaton, like the matter in the Ramond-Ramond sector of string theory. Here we study the opposite case, where matter couples to the dilaton like that coming from the NS-NS sector (see the reference for a similar study). Exact solutions are found in both the string and Einstein frames, and the cosmology is classified according to the values of $\gamma$ and $\omega$. In the string frame we will find non-singular solutions for some ranges of $\gamma$ and $\omega$. In the Einstein frame, however, we will find that all solutions are singular, unlike in the string frame.

The rest of this paper is organized as follows. In section II, we construct the action for the case when the matter is coupled to the dilaton and find analytic solutions of the equations of motion. In section III, the relation between the cosmic time $t$ and the parameter $\tau$ is studied. In section IV, we consider the behavior of the scale factor $a$ as a function of $\tau$. Using these results, in section V, we study the scale factor as a function of the cosmic time $t$. In section VI, we study the asymptotic behavior of $a(t)$ and classify the solutions according to acceleration and deceleration phases. Up to section VI, all analyses are done in the string frame. In section VII, we investigate the cosmology in the Einstein frame. In section VIII, we summarize and conclude with some discussion.
## II Action with solitonic NS-NS matter and its solutions We begin with the four dimensional Brans-Dicke like string action of which matter is coupled to the dilaton field. $$S=d^4x\sqrt{g}e^\varphi \left[R\omega (\varphi )^2+L_m\right]$$ (1) Notice that the action differs from that in reference by the coupling of the matter Lagrangian $`L_m`$ with the dilaton factor. By varying the action, we get equations of motion: $`R_{\mu \nu }{\displaystyle \frac{1}{2}}g_{\mu \nu }R`$ $`=`$ $`T_{\mu \nu }+\omega \{_\mu \varphi _\nu \varphi {\displaystyle \frac{1}{2}}g_{\mu \nu }(\varphi )^2\}`$ (2) $`+`$ $`\{_\mu _\nu \varphi +_\mu \varphi _\nu \varphi +g_{\mu \nu }^2\varphi g_{\mu \nu }(\varphi )^2\}`$ (3) $`R2\omega ^2\varphi +\omega (\varphi )^2+L_m`$ $`=`$ $`0.`$ (4) Let’s choose the metric as $$ds^2=Ndt^2+e^{2\alpha (t)}dx_idx^i(i=1,2,3),$$ where $`N`$ is a lapse function. We consider perfect fluid type matter whose energy-momentum tensor is $$T_{\mu \nu }=pg_{\mu \nu }+(p+\rho )U_\mu U_\nu ,$$ satisfying the conservation law $$\dot{\rho }+3(p+\rho )\dot{\alpha }=0.$$ Using the equation of state, $`p=\gamma \rho ,`$ we get $$\rho =\rho _0e^{3(1+\gamma )\alpha }.$$ So, we can rewrite the action as $$S=𝑑te^{3\alpha \varphi }\left[\frac{1}{\sqrt{N}}\{6\dot{\alpha }^2+6\dot{\alpha }\dot{\varphi }+\omega \dot{\varphi }^2\}\sqrt{N}\rho _0e^{3(1+\gamma )\alpha }\right].$$ (5) Now we define new variable $`\tau `$ by $$d\tau e^{3\alpha \varphi }=dt.$$ (6) Then, the action becomes $`S`$ $`=`$ $`{\displaystyle 𝑑\tau [\frac{1}{\sqrt{N}}\{6\alpha _{}^{}{}_{}{}^{2}+6\alpha ^{}\varphi ^{}+\omega \varphi _{}^{}{}_{}{}^{2}\}\sqrt{N}\rho _0e^{3(1\gamma )\alpha 2\varphi }]},`$ (7) $`=`$ $`{\displaystyle 𝑑\tau [\mathrm{\Gamma }_1Y_{}^{}{}_{}{}^{2}+\mathrm{\Gamma }_3X_{}^{}{}_{}{}^{2}\rho _0e^{2X}]},`$ (8) where $`\mathrm{\Gamma }_1`$ $`=`$ $`6+9(1\gamma )+{\displaystyle \frac{9}{4}}(1\gamma )^2\omega ={\displaystyle \frac{9}{4}}(1\gamma )^2(\omega \omega _{\mathrm{\Gamma }_1}),`$ (9) $`\mathrm{\Gamma }_2`$ $`=`$ $`6+3(1\gamma )\omega =3(1\gamma )(\omega \omega _{\mathrm{\Gamma }_2}),`$ (10) $`\mathrm{\Gamma }_3`$ $`=`$ $`\omega {\displaystyle \frac{\mathrm{\Gamma }_2^2}{4\mathrm{\Gamma }_1}}={\displaystyle \frac{3(2\omega +3)}{\mathrm{\Gamma }_1}},`$ (11) $`2X`$ $`=`$ $`3(1\gamma )\alpha 2\varphi ,`$ (12) $`Y`$ $`=`$ $`\alpha +{\displaystyle \frac{\mathrm{\Gamma }_2}{2\mathrm{\Gamma }_1}}X,`$ (13) $`\omega _{\mathrm{\Gamma }_1}`$ $`=`$ $`{\displaystyle \frac{4(3\gamma 1)}{3(1\gamma )^2}},`$ (14) $`\omega _{\mathrm{\Gamma }_2}`$ $`=`$ $`{\displaystyle \frac{2}{1\gamma }}=\omega _\eta .`$ (15) From this action we get equations of motion: $`Y^{\prime \prime }`$ $`=`$ $`0,`$ (16) $`X^{\prime \prime }{\displaystyle \frac{\rho _0}{\mathrm{\Gamma }_3}}e^{2X}`$ $`=`$ $`0.`$ (17) The constraint equation is obtained by varying lapse function $`N`$: $$\mathrm{\Gamma }_1Y_{}^{}{}_{}{}^{2}+\mathrm{\Gamma }_3X_{}^{}{}_{}{}^{2}+\rho _0e^{2X}=0.$$ (18) The behavior of the solution depend crucially on the sign of $`\mathrm{\Gamma }_1`$. * $`\mathrm{\Gamma }_1<0`$ case: $`X`$ $`=`$ $`\mathrm{ln}\left[{\displaystyle \frac{q}{c}}\mathrm{cosh}c\tau \right],`$ (19) $`Y`$ $`=`$ $`A\tau +B.`$ (20) where, $`c,A,B`$ and $`q=\sqrt{\frac{\rho _0}{|\mathrm{\Gamma }_3|}}`$ are arbitrary real constants. 
Using the cosntraint equation, we can determine $`A`$ in terms of other variables $$A=c\sqrt{\frac{\mathrm{\Gamma }_3}{\mathrm{\Gamma }_1}}=c\frac{\sqrt{3(2\omega +3)}}{|\mathrm{\Gamma }_1|}.$$ (21) * $`\mathrm{\Gamma }_1>0`$ case: $`X`$ $`=`$ $`\mathrm{ln}\left[{\displaystyle \frac{q}{c}}|\mathrm{sinh}c\tau |\right],`$ (22) $`Y`$ $`=`$ $`A\tau +B.`$ (23) Having been found X and Y, $`\alpha `$ and $`\varphi `$ can be found from the relation Eq.(13). In next section, we will find $`t(\tau )`$ and find that its behavior depends on $`\omega `$ and $`\gamma `$. ## III Phase space classification in terms of $`t`$ and $`\tau `$ ### A $`\mathrm{\Gamma }_1<0`$ case From Eqs.(6) and (20), $`t(\tau )`$ is found to be $$tt_0=𝑑\tau e^{\frac{3}{2}(1+\gamma )(A\tau +B)}\left[\frac{q}{c}\mathrm{cosh}(c\tau )\right]^{\frac{3\mathrm{\Gamma }_2}{4\mathrm{\Gamma }_1}(1+\gamma )1}.$$ (24) It is easy to see $`t(\tau )`$ is a monotonic function. When $`\tau `$ goes to $`\pm \mathrm{}`$, $`t`$ can be approximately integrated to be $$t\frac{1}{T_\pm }e^{T_\pm \tau },$$ (25) where $$T_\pm =\frac{3\sqrt{3(2\omega +3)}}{2\mathrm{\Gamma }_1}(1+\gamma )\left[\frac{3\mathrm{\Gamma }_2}{4\mathrm{\Gamma }_1}(1+\gamma )+1\right].$$ (26) We have fixed $`c=1`$. We will say that $`t`$ is supermonotonic function of $`\tau `$ when it is monotonic and $`t`$ runs the entire real line when $`\tau `$ does. When $`t`$ is supermonotonic function of $`\tau `$, the universe evolves from infinite past to infinite future. Otherwise, the universe has a starting(ending) point at a finite cosmic time $`t_i(t_f)`$. The running range of $`t`$ depend on the sign of $`T`$. $`\mathrm{}<t<\mathrm{}`$ $`\mathrm{if}T_{}<0<T_+,`$ (27) $`\mathrm{}<t<t_f`$ $`\mathrm{if}T_{}<0andT_+<0,`$ (28) $`t_i<t<\mathrm{}`$ $`\mathrm{if}T_{}>0andT_+>0,`$ (29) $`t_i<t<t_f`$ $`\mathrm{if}T_+<0<T_{}.`$ (30) The solution for $`T_{}<0`$ is found to be $$\frac{3}{2}<\omega <\frac{4}{3},and\omega >\omega _\kappa ,$$ (31) where we have defined $$\frac{(3\gamma 5)}{3(1\gamma )}:=\omega _\kappa .$$ To get Eq.(31) we used following identity. $$3(2\omega +3)=(\frac{\mathrm{\Gamma }_2}{2})^2\omega \mathrm{\Gamma }_1.$$ The region II in Fig.1 is correspond to this solution. For $`T_{}>0`$, we have solution: $$\omega >\frac{3}{2},\omega <\omega _\kappa ,or\omega >\frac{4}{3}.$$ (32) The region I and VII is satisfy this solution. If $`T_+<0`$, the solution to Eq.(26) is $$\frac{3}{2}<\omega <\frac{4}{3},and\omega <\omega _\kappa .$$ (33) The region I correspond to this solution. For $`T_+>0`$, we have $$\omega >\frac{3}{2},\omega >\omega _\kappa ,or\omega >\frac{4}{3}.$$ (34) The region II and VII satisfy this solution. ### B $`\mathrm{\Gamma }_1>0`$ case In this case the solution $`X(\tau )`$ in Eq.(23) has a singularity at $`\tau =0`$. So we have to look at the behavior of $`t`$ near $`\tau =0`$ carefully. From Eqs.(6) and (23), the $`t(\tau )`$ can be written as $$tt_0=𝑑\tau e^{\frac{3}{2}(1+\gamma )[\frac{\sqrt{3(2\omega +3)}}{\mathrm{\Gamma }_1}c\tau +B]}\left[\frac{q}{c}|\mathrm{sinh}(c\tau )|\right]^{\frac{3\mathrm{\Gamma }_2}{4\mathrm{\Gamma }_1}(1+\gamma )1}.$$ (35) The asymptotic behavior of $`t`$ in the limit $`\tau \pm \mathrm{}`$, is given by $$t\frac{1}{T_\pm }e^{T_\pm \tau },$$ (36) where $$T_\pm =\frac{3\sqrt{3(2\omega +3)}}{2\mathrm{\Gamma }_1}(1+\gamma )\left[\frac{3\mathrm{\Gamma }_2}{4\mathrm{\Gamma }_1}(1+\gamma )+1\right].$$ (37) Notice the difference from Eq.(25). 
The condition $`T_{}<0`$ gives solution: $$\omega >\frac{3}{2},\omega <\omega _\kappa ,and\omega >\frac{4}{3}.$$ (38) There is no region satisfying this solution. For $`T_{}>0`$ the solution is $$\omega >\frac{3}{2},\omega >\omega _\kappa ,or\omega <\frac{4}{3}.$$ (39) Therefore III, IV, V and VI satisfies this solution. The solution for $`T_+<0`$ is $$\omega >\frac{3}{2},\omega >\omega _\kappa ,and\omega >\frac{4}{3}.$$ (40) The region V and VI satisfy this solution. For $`T_+>0`$ the solution is $$\omega >\frac{3}{2},\omega <\omega _\kappa ,or\omega <\frac{4}{3}.$$ (41) The region III and IV satisfy this solution. So far our analysis is parallel to the previous section. However, we have to pay attention to the behavior of $`t(\tau )`$ near $`\tau =0`$. In the limit $`\tau 0`$, $$t\frac{\mathrm{sign}(\tau )}{1\eta }|\tau |^{1\eta }$$ (42) where $`\eta =\frac{3\mathrm{\Gamma }_2}{4\mathrm{\Gamma }_1}(1+\gamma )+1`$. Notice $`t(\tau )`$ is regular at $`\tau =0`$, if $`\eta <1`$. When $`\eta >1`$, $`t(\tau )`$ is singular at $`\tau =0`$. So we consider $`t(\tau )`$ in the region $`\mathrm{}<\tau <0`$ and $`0<\tau <\mathrm{}`$ separately. The condition $`\eta >1`$ is equivalent to $$\omega >\omega _{\mathrm{\Gamma }_2}.$$ (43) When $`\tau `$ goes to zero from the below, $$t\frac{(\tau )^{1\eta }}{\eta 1},$$ (44) which means $`t\mathrm{}`$ as $`\tau 0`$. On the other hand, when $`\tau `$ goes to zero from the above, $$t\frac{(\tau )^{1\eta }}{\eta 1},$$ (45) so that $`t\mathrm{}`$ as $`\tau +0`$. From these analysis, we see that the parameter space of $`\gamma `$ and $`\omega `$ is divided into seven regions as depicted in Fig.1. We summarize the results that are found in this section. * Region I: $`T_+<0`$, $`T_{}>0`$, $`\mathrm{\Gamma }_1<0`$; $`t`$ evolves from initial time $`t_i`$ to final time $`t_f`$ for $`\tau (\mathrm{},\mathrm{})`$. * Region II: $`T_{}<0`$, $`T_+>0`$, $`\mathrm{\Gamma }_1<0`$; $`t`$ evolves from negative infinity to positive infinity for $`\tau (\mathrm{},\mathrm{})`$. * Region III: $`T_+>0`$, $`T_{}>0`$, $`\mathrm{\Gamma }_1>0`$; $`t`$ evolves from initial time $`t_i`$ to positive infinity for $`\tau (\mathrm{},\mathrm{})`$. * Region IV: $`T_{}>0`$, $`T_+>0`$, $`\mathrm{\Gamma }_1>0`$; Since $`\tau =0`$ is singular, the region of $`\tau `$ is divided into two regions. $`t`$ evolves from initial time $`t_i`$ to positive infinity for $`\tau (\mathrm{},0)`$; $`t`$ evolves from negative infinity to positive infinity for $`\tau (0,\mathrm{})`$. * Region V: $`T_+<0`$, $`T_{}>0`$, $`\mathrm{\Gamma }_1>0`$; $`t`$ evolves from initial time $`t_i`$ to final time $`t_f`$ for $`\tau (\mathrm{},\mathrm{})`$. * Region VI: $`T_+<0`$, $`T_{}>0`$, $`\mathrm{\Gamma }_1>0`$; Since $`t`$ is singular at $`\tau =0`$, we should divide into two regions. $`t`$ evolves from initial time $`t_i`$ to positive infinity for $`\tau (\mathrm{},0)`$ and negative infinity to final time $`t_f`$ for $`\tau (0,\mathrm{})`$. * Region VII: $`T_{}>0`$, $`T_+>0`$, $`\mathrm{\Gamma }_1<0`$; $`t`$ evolves from initial time $`t_i`$ to positive infinity for $`\tau (\mathrm{},\mathrm{})`$. ## IV The behavior of the scale factor We now study the phases of the cosmology by looking at the scale factor $`a(\tau )=\mathrm{exp}(\alpha (\tau ))`$. 
### A $`\mathrm{\Gamma }_1<0`$ case In this case $`\alpha (\tau )`$ in scale factor $`e^{\alpha (\tau )}`$ is given by $$\alpha (\tau )=\frac{c\sqrt{3(2\omega +3)}}{\mathrm{\Gamma }_1}\tau +B\frac{\mathrm{\Gamma }_2}{2\mathrm{\Gamma }_1}\left[\mathrm{ln}\frac{q}{c}\mathrm{cosh}(c\tau )\right].$$ (46) In the limit $`\tau \pm \mathrm{}`$, the scale factor can be written as $$a(\tau )e^{H_\pm \tau },$$ (47) where the $`H_\pm `$ is defined by $$H_\pm =\frac{c\sqrt{3(2\omega +3)}}{\mathrm{\Gamma }_1}\frac{\mathrm{\Gamma }_2}{2\mathrm{\Gamma }_1}c.$$ (48) Eq.(48) for $`H_{}<0`$ gives the solution $$\omega >\omega _{\mathrm{\Gamma }_2},and\omega <0.$$ (49) The region II in Fig.2 satisfies this solution. For $`H_{}>0`$ the solution is $$\omega <\omega _{\mathrm{\Gamma }_2},or\omega >0.$$ (50) The region I and VI satisfy this solution. If $`H_+<0`$, the solution is given by $$\omega <\omega _{\mathrm{\Gamma }_2},and\omega <0.$$ (51) The region I satisfies this solution. For $`H_+>0`$ the solution is $$\omega >\omega _{\mathrm{\Gamma }_2},or\omega >0.$$ (52) The region II and VI satisfy this solution. ### B $`\mathrm{\Gamma }_1>0`$ case In this case, the $`\alpha (\tau )`$ in scale factor $`e^{\alpha (\tau )}`$ is given by $$\alpha (\tau )=\frac{c\sqrt{3(2\omega +3)}}{\mathrm{\Gamma }_1}\tau +B\frac{\mathrm{\Gamma }_2}{2\mathrm{\Gamma }_1}\left[\mathrm{ln}\frac{q}{c}|\mathrm{sinh}(c\tau )|\right].$$ (53) In the limit $`\tau \pm \mathrm{}`$, $`a(\tau )`$ is given by $$a(\tau )e^{H_\pm \tau },$$ (54) where $`H_\pm `$ is defined by $$H_\pm =\frac{c\sqrt{3(2\omega +3)}}{\mathrm{\Gamma }_1}\frac{\mathrm{\Gamma }_2}{2\mathrm{\Gamma }_1}c.$$ (55) The solution to the condition $`H_{}<0`$ is $$\omega <\omega _{\mathrm{\Gamma }_2},and\omega >0.$$ (56) There is no region satisfying this solution. For $`H_{}>0`$ the solution is $$\omega >\omega _{\mathrm{\Gamma }_2},or\omega <0.$$ (57) The satisfying region is III, IV and V. The solution for $`H_+<0`$ case is $$\omega >\omega _{\mathrm{\Gamma }_2},and\omega >0.$$ (58) The region V satisfies this solution. For $`H_+>0`$ the solution is given by $$\omega <\omega _{\mathrm{\Gamma }_2},or\omega <0.$$ (59) The region III and IV satisfy this solution. Now we consider $`\tau 0`$ limit. In this limit $`a(\tau )`$ is approximately given by $$a(\tau )|\tau |^{\frac{3(1\gamma )(\omega \omega _{\mathrm{\Gamma }_2})}{2\mathrm{\Gamma }_1}}.$$ (60) Therefore, if $`\omega >\omega _{\mathrm{\Gamma }_2}`$, $`a(\tau )`$ goes to infinite as $`\tau 0`$. From these analyses we get a phase diagram Fig.2. Summarizingly, we have following cases. * Region I: $`H_{}>0`$, $`H_+<0`$, $`\mathrm{\Gamma }_1<0`$; The scale factor $`a(\tau )`$ goes to zero size as $`\tau \pm \mathrm{}`$. * Region II: $`H_{}<0`$, $`H_+>0`$, $`\mathrm{\Gamma }_1<0`$; $`a(\tau )`$ goes to infinity as $`\tau \pm \mathrm{}`$. * Region III: $`H_{}>0`$, $`H_+>0`$, $`\mathrm{\Gamma }_1>0`$; In this region, the behavior of $`t`$ is not singular, so we need not consider the behavior of $`a(\tau )`$ at $`\tau =0`$ where the scale factor vanishes. $`a(\tau )`$ goes to zero as $`\tau \mathrm{}`$ and $`a(\tau )`$ goes to infinity as $`\tau \mathrm{}`$. * Region IV: $`H_{}>0`$, $`H_+>0`$, $`\mathrm{\Gamma }_1>0`$; The $`a(\tau )`$ goes to infinite size as $`\tau `$ goes to zero for $`\tau (\mathrm{},0)`$ and zero size as $`\tau \mathrm{}`$. $`a(\tau )`$ goes to infinity as $`\tau `$ goes to zero for $`\tau (0,\mathrm{})`$ and goes to positive infinity as $`\tau \mathrm{}`$. 
* Region V: $`H_{}>0`$, $`H_+<0`$ ,$`\mathrm{\Gamma }_1>0`$; $`a(\tau )`$ goes to infinity as $`\tau `$ goes to zero and $`a(\tau )`$ goes to zero as $`\tau `$ goes to negative infinity for $`\tau (\mathrm{},0)`$. $`a(\tau )`$ goes to infinity as $`\tau `$goes to zero and $`a`$ goes tozero $`\tau `$ infinity for $`\tau (0,\mathrm{})`$. * Region VI: $`H_{}>0`$, $`H_+>0`$, $`\mathrm{\Gamma }_1<0`$; In the limit of $`\tau \mathrm{}`$, $`a(\tau )`$ goes to zero size. In the limit of $`\tau \mathrm{}`$, $`a(\tau )`$ goes to infinite size. ## V Phases of the cosmology In previous sections we have studied $`t(\tau )`$ and $`a(\tau )`$. From all considerations of these results, we can classify the parameter space of $`\gamma `$ and $`\omega `$ by the behavior of $`a(t)`$ into sixteen phases. In Fig.3, we show the phase diagram. In asymptotic region where $`\tau \pm \mathrm{}`$, we can write the scale factor $`a(t)`$ as: $`a(t)[T_{}(tt_i)]^{\frac{H_{}}{T_{}}}`$ (61) $`a(t)[T_+(tt_f)]^{\frac{H_+}{T_+}}.`$ (62) From these relations we see that the behavior of the scale factor $`a(t)`$ depends on the sign of $`T_\pm `$ and the value of $`H_\pm /T_\pm `$ determins acceleration or deceleration of the scale factor which is discussed below. In Fig.4 - Fig.6, we show the behavior of scale factor by numerical study. The $``$ sign in $`V`$ indicates the branch for $`\tau (\mathrm{},0)`$. Similarly, $`V+`$ means $`\tau (0,\mathrm{})`$. * Region I, $`T_{}>0,T_+<0,H_{}>0`$ and $`H_+<0`$. The universe evolves from zero size at a finite initial time $`t_i`$ to a zero size at a finite final time $`t_f`$. This region contains the matter for inflation $`\gamma =1`$. However, we see in the limit $`\tau \mathrm{}`$ the cosmic time $`t`$ approaches to $`t_i`$. * Region II, $`T_{}<0,T_+>0,H_{}<0`$ and $`H_+>0`$. The universe evolves from infinite to infinite size as $`t`$ runs from negative infinity to positive infinity. This is the region where we can find the non-singular behavior of $`a(t)`$. * Region III, $`T_+>0,T_{}>0,H_{}>0`$ and $`H_+>0`$. The universe evolves from zero size to infinite as $`t`$ runs from finite initial time $`t_i`$ to infinity for $`\tau (\mathrm{},\mathrm{})`$. During evolution the universe goes to zero as $`\tau `$ goes to zero. So we divided into two branches at $`\tau =0`$. * Region IV, $`T_{}>0,T_+>0,H_{}>0`$ and $`H_+>0`$. The universe evolves from zero to infinite size as $`t`$ runs from finite $`t_i`$ to infinity. Like region $`III`$ we divided into two branches since the universe becomes zero as $`\tau `$ goes to zero. By numerical study we depicted the behavior of scale factor $`a(t)`$. * Region V, $`T_{}>0,T_+>0,H_{}>0`$ and $`H_+>0`$. The universe evolves from zero to infinite size as $`t`$ runs finite initial time $`t_i`$ to infinity and the universe evolves from infinite to infinite as $`t`$ runs negative infinity to infinity. * Region VI, $`T_{}>0,T_+<0,H_{}>0`$ and $`H_+>0`$. The universe evolves from zero to zero size as $`t`$ runs from initial time $`t_i`$ to infinity and the universe evolves from zero to infinite size as $`t`$ runs negative infinity to finite final time $`t_f`$. * Region VII, $`T_{}>0,T_+>0,H_{}<0`$ and $`H_+>0`$. The universe evolves from infinite to infinite size as $`t`$ runs from finite initial time $`t_i`$ to infinity. * Region VIII, $`T_{}>0,T_+<0,H_{}>0`$ and $`H_+>0`$. 
The universe evolves from zero to infinite size as $`t`$ runs from finite initial time $`t_i`$ to infinity and the universe evolves from infinite to infinite size as $`t`$ runs negative infinity to finite final time $`t_f`$. * Region IX, $`T_{}>0,T_+<0,H_{}>0`$ and $`H_+<0`$. The universe evolves from zero to infinite size as $`t`$ runs from finite initial time $`t_i`$ to infity and the universe evolves from infinite size to zero as $`t`$ runs from negative infinity to finite final time $`t_f`$. * Region X, $`T>0,T_+>0,H_{}>0`$ and $`H_+>0`$. The universe evolves from zero to infinite size as $`t`$ runs from finite initial time $`t_i`$ to infinity. ## VI Acceleration / Deceleration phase Notice that as we have seen in Eq.(63) not only the sign of $`H_\pm /T_\pm `$ but also that of $`H_\pm /T_\pm 1`$ is important because the universe will accelerate or deccelerate according to the sign of the latter. ### A $`\mathrm{\Gamma }_1<0`$ case #### 1 $`H_{}/T_{}>1`$ For $`T_{}>0`$, the condition $`H_{}/T_{}>1`$ is reduced to $$(3\gamma +1)\sqrt{3(2\omega +3)}<3(1\gamma )(2\omega +3).$$ (63) Consider $`\gamma >\frac{1}{3}`$ case first. To our surprise, the inequality (63) gives us $`\omega >\frac{4(3\gamma 1)}{3(1\gamma )^2}=\omega _{\mathrm{\Gamma }_1}`$, namely $`\mathrm{\Gamma }_1>0`$. This is contradiction. Now for $`\gamma <\frac{1}{3}`$, Eq.(63) gives $`\omega <\omega _{\mathrm{\Gamma }_1}`$. The region I in figure 3 corresponds to this case. For $`T_{}<0`$, the $`H_{}/T_{}>1`$ is reduced to $$(3\gamma +1)\sqrt{3(2\omega +3)}>3(1\gamma )(2\omega +3).$$ (64) This inequality gives $`\omega <\omega _{\mathrm{\Gamma }_1}`$ for $`\gamma >\frac{1}{3}`$. The region II corresponds to this. For $`\gamma <1/3`$, we get the condition $`\omega >\omega _{\mathrm{\Gamma }_1}`$ which contradicts to $`\mathrm{\Gamma }_1<0`$. In a summary, region I and II satisfies $`H_{}/T_{}>1`$, $`\mathrm{\Gamma }_1<0`$. #### 2 $`H_+/T_+>1`$ For $`T_+>0`$, the condition $`H_+/T_+>1`$ is reduced to $$(3\gamma +1)\sqrt{3(2\omega +3)}<3(1\gamma )(2\omega +3).$$ (65) The analysis is completely similar to the case A. For $`\gamma <1/3`$, from above inequality we get $`\omega <\omega _{\mathrm{\Gamma }_1}`$ . Since $`T_+>0`$ is satisfied only by the region II, III, IV, VII, it is easy to see that there is no region satisfying all three conditions, $`\mathrm{\Gamma }_1<0,T_+>0,\gamma <1/3`$. For $`\gamma >1/3`$ Eq.(65) gives $`\omega >\omega _{\mathrm{\Gamma }_1}`$ which contradicts to $`\mathrm{\Gamma }_1<0`$. For $`T_+<0`$, $`H_+/T_+>1`$ is reduced to $$(3\gamma +1)\sqrt{3(2\omega +3)}>3(1\gamma )(2\omega +3).$$ (66) When $`\gamma >1/3`$, the above inequality gives $`\omega <\omega _{\mathrm{\Gamma }_1}`$. Since the condition $`T_+<0`$ is only satisfied by the region I, V, VI, there is no region satisfying three conditions $`\mathrm{\Gamma }_1<0,T_+<0,\gamma >1/3`$. When $`\gamma <1/3`$, Eq.(65) gives $`\gamma >\omega _{\mathrm{\Gamma }_1}`$ which contracdicts to $`\mathrm{\Gamma }_1<0`$. In a summary, it is always $`H_+/T_+<1`$, $`\mathrm{\Gamma }_1<0`$. ### B $`\mathrm{\Gamma }_1>0`$ case #### 1 $`H_{}/T_{}>1`$ For $`T_{}>0`$, the condition $`H_{}/T_{}>1`$ is reduced to $$(3\gamma +1)\sqrt{3(2\omega +3)}<3(1\gamma )(2\omega +3).$$ (67) Consider $`\gamma >1/3`$ case first. The left hand side is always positive while the right hand side is always negative. So there is no solution for this. For $`\gamma <1/3`$ case. Eq.(67) gives $`\omega <\omega _{\mathrm{\Gamma }_1}`$ which contradicts to $`\mathrm{\Gamma }_1>0`$. 
For $`T_{}<0`$ case, from section III, we know that there is no region satisfying $`T_{}<0`$ condition. Therefore there are no solutions for conditions $`\mathrm{\Gamma }_1>0,H_{}/T_{}>1`$. We summarize if $`\mathrm{\Gamma }_1>0`$ then we have $`H_{}/T_{}<1`$ for all region. #### 2 $`H_+/T_+>1`$ For $`T_+>0`$, $`H_+/T_+>1`$ is reduced to $$(3\gamma +1)\sqrt{3(2\omega +3)}<3(1\gamma )(2\omega +3).$$ (68) Consider $`\gamma >1/3`$ case. The above inequality gives $`\omega >\omega _{\mathrm{\Gamma }_1}`$. Part of the region V satisfies these conditions. For the solution to $`\gamma <1/3`$ case, Eq.(68) gives $`\omega <\omega _{\mathrm{\Gamma }_1}`$ which contradicts to $`\mathrm{\Gamma }_1>0`$. For $`T_+<0`$, $`H_+/T_+>1`$ is reduced to $$(3\gamma +1)\sqrt{3(2\omega +3)}>3(1\gamma )(2\omega +3).$$ (69) When $`\gamma <1/3`$, we find solution $`\omega >\omega _{\mathrm{\Gamma }_1}`$ which is satisfied by the region VI in Fig 3. For $`\gamma >1/3`$ case, Eq.(69) gives $`\omega <\omega _{\mathrm{\Gamma }_1}`$ which contradicts $`\mathrm{\Gamma }_1>0`$. We summarize what we have obtained so far by a table. | phase | sign of | sign of | sign of | range of | $`H_{}/T_{}`$ | $`H_+/T_+`$ | | --- | --- | --- | --- | --- | --- | --- | | | $`\mathrm{\Gamma }_1`$ | $`T_{}`$ | $`T_+`$ | t | | | | I | $``$ | + | $``$ | $`[t_i,t_f]`$ | $`H_{}/T_{}>1`$ | $`0<H_+/T_+<1`$ | | II | $``$ | $``$ | + | $`(\mathrm{},\mathrm{})`$ | $`H_{}/T_{}>1`$ | $`0<H_+/T_+<1`$ | | $`III^{}`$ | + | + | | $`[t_i,t_f]`$ | $`0<H_{}/T_{}<1`$ | $`H_+/T_+>1`$ | | $`III^+`$ | + | | + | $`[t_i,\mathrm{}]`$ | | | | $`IV^{}`$ | + | + | | $`[t_i,t_f]`$ | $`0<H_{}/T_{}<1`$ | $`H_+/T_+>1`$ | | $`IV^+`$ | + | | + | $`[t_i,\mathrm{})`$ | | | | $`V^{}`$ | + | + | | $`[t_i,\mathrm{})`$ | $`0<H_{}/T_{}<1`$ | $`H_+/T_+>1`$ | | $`V^+`$ | + | | + | $`(\mathrm{},\mathrm{})`$ | | | | $`VI^{}`$ | + | + | | $`[t_i,t_f]`$ | $`0<H_{}/T_{}<1`$ | $`H_+/T_+<0`$ | | $`VI^+`$ | + | | $``$ | $`[t_i,\mathrm{})`$ | | | | VII | $``$ | + | + | $`[t_i,\mathrm{})`$ | $`H_{}/T_{}<0`$ | $`0<H_+/T_+<1`$ | | $`VIII^{}`$ | + | + | | $`[t_i,\mathrm{})`$ | $`0<H_{}/T_{}<1`$ | $`H_+/T_+<0`$ | | $`VIII^+`$ | + | | $``$ | $`(\mathrm{},t_f]`$ | | | | $`IX^{}`$ | + | + | | $`[t_i,\mathrm{})`$ | $`0<H_{}/T_{}<1`$ | $`0<H_+/T_+<1`$ | | $`IX^+`$ | + | | $``$ | $`(\mathrm{},t_f]`$ | | | | X | $``$ | + | + | $`[t_i,\mathrm{})`$ | $`0<H_{}/T_{}<1`$ | $`0<H_+/T_+<1`$ | ## VII Einstein frame In this section, we study the cosmology in Einstein frame. Especially, we investigate the difference between the behavior of the scale factor in Einstein frame and that in the string frame as well as the possibity to avoid the initial singularity. The metric in Einstein frame is obtained from string frame metric by transformation using the relation $`g_{E\mu \nu }=e^\varphi g_{\mu \nu }`$: $`ds_{E}^{}{}_{}{}^{2}`$ $`=`$ $`e^\varphi ds^2`$ (70) $`=`$ $`e^\varphi dt^2+e^{2\alpha \varphi }dx_idx^i`$ (71) $`=`$ $`dt_{E}^{}{}_{}{}^{2}+e^{2\alpha _E}dx_idx^i(i=1,2,3).`$ (72) From above relations, we see $`\alpha _E=\alpha \frac{\varphi }{2}`$ and $`dt_E=e^{\frac{\varphi }{2}}dt`$. Then we can obtain solutions in Einstein frame combining the above relation with the solutions in string frame. 
Therefore the solutions in Einstein frame are $$\alpha _E(\tau )=c\frac{(3\gamma +1)\sqrt{3(2\omega +3)}}{4\mathrm{\Gamma }_1}\tau \mathrm{ln}\left[\frac{q}{c}\mathrm{cosh}(c\tau )\right]\left(\frac{(3\gamma +1)\mathrm{\Gamma }_2+4\mathrm{\Gamma }_1}{8\mathrm{\Gamma }_1}\right)+\frac{3\gamma +1}{4}B,$$ (73) for $`\mathrm{\Gamma }_1<0`$, and $$\alpha _E(\tau )=c\frac{(3\gamma +1)\sqrt{3(2\omega +3)}}{4\mathrm{\Gamma }_1}\tau \mathrm{ln}\left[\frac{q}{c}|\mathrm{sinh}(c\tau )|\right]\left(\frac{(3\gamma +1)\mathrm{\Gamma }_2+4\mathrm{\Gamma }_1}{8\mathrm{\Gamma }_1}\right)+\frac{3\gamma +1}{4}B,$$ (74) for $`\mathrm{\Gamma }_1>0`$. In next section we find out $`t_E(\tau )`$ and see that the interval of $`t_E(\tau )`$ can be classified by $`\omega `$ and $`\gamma `$. ### A Classification of the phases by $`t_E`$ and $`\tau `$ #### 1 $`\mathrm{\Gamma }_1<0`$ case From Eq.(6) in section II, and using Eqs.(72) and (73), $`t_E(\tau )`$ can be read $`t_Et_{E0}`$ $`=`$ $`{\displaystyle 𝑑\tau e^{3\alpha _E(\tau )}}`$ (75) $``$ $`{\displaystyle 𝑑\tau e^{\frac{3(3\gamma +1)\sqrt{3(2\omega +3)}}{4\mathrm{\Gamma }_1}c\tau }\left[\mathrm{cosh}(c\tau )\right]^{\frac{3[4\mathrm{\Gamma }_1+(3\gamma +1)\mathrm{\Gamma }_2]}{8\mathrm{\Gamma }_1}}}.`$ (76) In the limit $`\tau \pm \mathrm{}`$, we can write $$t_E\frac{1}{T_\pm }e^{T_{E\pm }\tau }.$$ where $$T_{E\pm }=\frac{3(3\gamma +1)\sqrt{3(2\omega +3)}}{4\mathrm{\Gamma }_1}\frac{3[4\mathrm{\Gamma }_1+(3\gamma +1)\mathrm{\Gamma }_2]}{8\mathrm{\Gamma }_1}.$$ (77) The condition for $`T_E<0`$ is reduced to $$(3\gamma +1)\sqrt{3(2\omega +3)}<3(1\gamma )(2\omega +3).$$ (78) Consider first $`\gamma >\frac{1}{3}`$ case. The solution for Eq.(78) is $`\omega >\omega _{go}`$ which violate $`\mathrm{\Gamma }_1<0`$. For $`\gamma <\frac{1}{3}`$, we get $`\omega <\omega _{\mathrm{\Gamma }_1}`$. Now $`T_E>0`$ case that is $$(3\gamma +1)\sqrt{3(2\omega +3)}>3(1\gamma )(2\omega +3).$$ (79) For $`\gamma >\frac{1}{3}`$, Eq.(79) gives $`\omega <\omega _{\mathrm{\Gamma }_1}`$. However, for $`\gamma <\frac{1}{3}`$, the left hand side is negative while right hand side is positive which is inconsistent. Now consider $`T_{E+}<0`$ case which is reduced to $$(3\gamma +1)\sqrt{3(2\omega +3)}<3(1\gamma )(2\omega +3).$$ (80) For $`\gamma >\frac{1}{3}`$, in Eq.(80) the left hand side is positive while right hand side negative which is inconsitent. For $`\gamma <\frac{1}{3}`$, we have $`\omega <\omega _{\mathrm{\Gamma }_1}`$. Consider $`T_{E+}`$ which is reduced to $$(3\gamma +1)\sqrt{3(2\omega +3)}>3(1\gamma )(2\omega +3).$$ (81) For $`\gamma >\frac{1}{3}`$, we have $`\omega <\omega _{\mathrm{\Gamma }_1}`$. For $`\gamma <\frac{1}{3}`$, Eq.(81) gives $`\omega >\omega _{\mathrm{\Gamma }_1}`$ which violates $`\mathrm{\Gamma }_1<0`$. One can summarize that under the condition $`\omega <\omega _{\mathrm{\Gamma }_1}`$, $$\mathrm{For}\gamma <\frac{1}{3}:T_E<0,T_{E+}<0.$$ $$\mathrm{For}\gamma >\frac{1}{3}:T_E>0,T_{E+}>0.$$ These relations tell us that $$\mathrm{For}\gamma <\frac{1}{3}:\mathrm{}<t_E<t_{Ef}.$$ $$\mathrm{For}\gamma >\frac{1}{3}:t_{Ei}<t_E<\mathrm{}.$$ We emphasize that there is no region where $`t_E`$ can run from $`\mathrm{}`$ to $`+\mathrm{}`$. This is because $`T_{E+}`$ and $`T_E`$ have the same sign in any given region unlike the string frame. 
#### 2 $`\mathrm{\Gamma }_1>0`$ case In this case $`t_E(\tau )`$ can be read from Eqs.(6), (72) and (74), $`t_Et_{E0}`$ $`=`$ $`{\displaystyle 𝑑\tau e^{3\alpha _E(\tau )}}`$ (82) $``$ $`{\displaystyle 𝑑\tau e^{\frac{3(3\gamma +1)\sqrt{3(2\omega +3)}}{4\mathrm{\Gamma }_1}c\tau }|\mathrm{sinh}(c\tau )|^{\frac{3(4\mathrm{\Gamma }_1+(3\gamma +1)\mathrm{\Gamma }_2)}{8\mathrm{\Gamma }_1}}}.`$ (83) As we saw above, this case is singular as $`\tau 0`$. So it is necessary to consider the behavior in that case. As $`\tau 0`$, $$t_E\mathrm{sign}(\tau )\frac{|\tau |^{1\eta }}{1\eta },$$ (84) where $`\eta =\frac{3[(3\gamma +1)\mathrm{\Gamma }_2+4\mathrm{\Gamma }_1]}{8\mathrm{\Gamma }_1}`$. If $`\eta >1`$, $`t_E`$ is singular at $`\tau =0`$, while if $`\eta <1`$, it is regular. $`\eta >1`$ case gives $$\omega >\frac{(5+3\gamma )}{3(1\gamma ^2)}=\omega _{}.$$ The other case, $`\eta <1`$, gives $`\omega <\omega _{}`$ which does not overlap with $`\mathrm{\Gamma }_1>0`$. Therefore there is no regular region for $`\mathrm{\Gamma }_1>0`$. As $`\tau \mathrm{}`$, $`t_E+\mathrm{}`$ while as $`\tau +0`$, $`t_E\mathrm{}`$. Let us find out the behavior in the region $`\tau \pm \mathrm{}`$. In this limit $`t_E`$ and $`\tau `$ is given by $$t_Et_{E0}𝑑\tau e^{T_{E\pm }\tau },$$ (85) where $$T_{E\pm }=\frac{3(3\gamma +1)\sqrt{3(2\omega +3)}}{4\mathrm{\Gamma }_1}\frac{3[(3\gamma +1)\mathrm{\Gamma }_2+4\mathrm{\Gamma }_1]}{8\mathrm{\Gamma }_1}.$$ (86) The condition $`T_E>0`$ is reduced to $$(3\gamma +1)\sqrt{3(2\omega +3)}>3(1\gamma )(2\omega +3).$$ (87) For $`\gamma >\frac{1}{3}`$, the left hand side is always positive while the right hand side always negative. So Eq.(87) satisfies trivially. For $`\gamma <\frac{1}{3}`$, Eq.(87) gives $`\omega >\omega _{\mathrm{\Gamma }_1}`$ which is consistent with $`\mathrm{\Gamma }_1>0`$. For $`T_E<0`$, we have the inequality $$(3\gamma +1)\sqrt{3(2\omega +3)}<3(1\gamma )(2\omega +3).$$ (88) For $`\gamma >\frac{1}{3}`$, since after dividing by $`3\gamma +1`$ the left hand side is always positive while the right hand side is always negative. So there is no solution. For $`\gamma <\frac{1}{3}`$, Eq.(88) gives $`\omega <\omega _{\mathrm{\Gamma }_1}`$ which contradicts to $`\mathrm{\Gamma }_1>0`$. Now consider the case $`T_{E+}>0`$. This condition is reduced to $$(3\gamma +1)\sqrt{3(2\omega +3)}>3(1\gamma )(2\omega +3).$$ (89) Consider first $`\gamma >\frac{1}{3}`$. Eq.(89) gives $`\omega <\omega _{\mathrm{\Gamma }_1}`$ which contradicts to $`\mathrm{\Gamma }_1>0`$. For $`\gamma <\frac{1}{3}`$, the left hand side is always positive while the right hand side always is negative. So we have no solution for $`T_{E+}>0`$. Consider $`T_{E+}<0`$ which is reduced to $$(3\gamma +1)\sqrt{3(2\omega +3)}<3(1\gamma )(2\omega +3).$$ (90) For $`\gamma >\frac{1}{3}`$, Eq.(90) gives $`\omega >\omega _{\mathrm{\Gamma }_1}`$ which satisfies $`\mathrm{\Gamma }_1>0`$. For $`\gamma <\frac{1}{3}`$, in Eq.(90) after dividing by $`3\gamma +1`$ we see that the left hand side is positive while the right hand side is negative. It is easy to check that when $`\omega >\omega _{\mathrm{\Gamma }_1}`$, $$T_E<0,andT_{E+}<0.$$ In a summary, for $`\mathrm{\Gamma }_1>0`$, $`t_E`$ is singular as $`\tau 0`$. $`t_E`$ runs from initial time $`t_{Ei}`$ to $`+\mathrm{}`$ for $`\tau (\mathrm{},0)`$ and $`\mathrm{}`$ to final time $`t_{Ef}`$ for $`\tau (0,\mathrm{})`$. 
### B The scale factor In previous section we see that $`t_E(\tau )`$ is not monotonic function which is crucial to decide whether the scale factor is singular or not. In this section following the same procedure we study the behavior of the scale factor, $`a_E(\tau )`$, and classify by $`\omega `$ and $`\gamma `$. #### 1 $`\mathrm{\Gamma }_1<0`$ case For this case we use the Eq.(73), then $`a_E(\tau )`$ $`=`$ $`e^{\alpha _E(\tau )}`$ (91) $`=`$ $`c_1e^{c\frac{(3\gamma +1)\sqrt{3(2\omega +3)}}{4\mathrm{\Gamma }_1}\tau }\left[{\displaystyle \frac{q}{c}}\mathrm{cosh}(c\tau )\right]^{\frac{(3\gamma +1)\mathrm{\Gamma }_2+4\mathrm{\Gamma }_1}{8\mathrm{\Gamma }_1}},`$ (92) where $`c_1=e^{\frac{(3\gamma +1)B}{4}}`$. In the limit $`\tau \pm \mathrm{}`$, the scale factor can be rewritten as $`e^{H_{E\pm }\tau }`$ where $$H_{E\pm }=\frac{(3\gamma +1)\sqrt{3(2\omega +3)}}{4\mathrm{\Gamma }_1}\frac{(3\gamma +1)\mathrm{\Gamma }_2+4\mathrm{\Gamma }_1}{8\mathrm{\Gamma }_1}$$ (93) Since, in the limit $`\tau \pm \mathrm{}`$, $`H_E`$ and $`T_E`$ are proportional$`(T_E=3H_E)`$, we can analyse by using $`T_E`$ in the previous subsection. Summary: $$a_E(\tau )runsfrom+\mathrm{}to\mathrm{\hspace{0.33em}\hspace{0.33em}0}for\gamma <\frac{1}{3}.$$ $$a_E(\tau )runsfrom\mathrm{\hspace{0.33em}\hspace{0.33em}0}to+\mathrm{}for\gamma >\frac{1}{3}.$$ #### 2 $`\mathrm{\Gamma }_1>0`$ case From Eq.(74), we can write the scale factor for this case as follows. $`a_E(\tau )`$ $`=`$ $`e^{\alpha _E(\tau )}`$ (94) $`=`$ $`c_2e^{\frac{(3\gamma +1)\sqrt{3(2\omega +3)}}{4\mathrm{\Gamma }_1}c\tau }|\mathrm{sinh}(c\tau )|^{\frac{[(3\gamma +1)\mathrm{\Gamma }_2+4\mathrm{\Gamma }_1]}{8\mathrm{\Gamma }_1}}.`$ (95) where $`c_2=e^{\frac{(3\gamma +1)B}{4}}(\frac{q}{c})^{\frac{[(3\gamma +1)\mathrm{\Gamma }_2+4\mathrm{\Gamma }_1]}{8\mathrm{\Gamma }_1}}`$. Since as we saw in the section VII.1.B, $`\eta <1`$ and $`\mathrm{\Gamma }_1>0`$ are not consitent with each other, we consider the case $`\eta >1`$. In this region there is singularity at $`\tau =0`$. So we need to consider the limit $`\tau 0`$: $`a_E(\tau )`$ $``$ $`|\tau |^{\eta /3}`$ (96) $`=`$ $`|\tau |^{\frac{(3\gamma +1)\mathrm{\Gamma }_2+4\mathrm{\Gamma }_1}{8\mathrm{\Gamma }_1}}.`$ (97) As $`\tau 0`$, the behavior of the scale factor always goes to positive infnity because $`\eta >1`$. As $`\tau \pm \mathrm{}`$, $`a_E(\tau )`$ can be written as $$e^{H_{E\pm }\tau }$$ where $$H_{E\pm }=\frac{(3\gamma +1)\sqrt{3(2\omega +3)}}{4\mathrm{\Gamma }_1}\frac{[(3\gamma +1)\mathrm{\Gamma }_2+4\mathrm{\Gamma }_1]}{8\mathrm{\Gamma }_1}.$$ (98) By the same analysis of subsection VII.1.B we can write the solutions. Under the condition $`\omega >\omega _{\mathrm{\Gamma }_1}`$, $$H_E>0andH_{E+}<0.$$ The scale factor evolves from 0 to $`+\mathrm{}`$ for $`\tau (\mathrm{},0)`$ and from $`\mathrm{}`$ to 0 for $`\tau (0,\mathrm{})`$. Now we study $`a_E(t_E)`$. First consider in the limit $`\tau 0`$. In this limit $`\mathrm{\Gamma }_1<0`$ case is regular. Therefore we investigate $`\mathrm{\Gamma }_1>0`$ case. From Eqs.(84) and (97) we write $`a_E(t_E)`$ as $$a_E(t_E)\left[(1\eta )sign(\tau )t_E\right]^{\frac{\eta }{3(\eta 1)}}.$$ (99) When $`\frac{\eta }{3(\eta 1)}>1`$ i.e. $`1<\eta <\frac{3}{2}`$, the scale factor will accelerate while when $`\frac{\eta }{3(1\eta )}<1`$ i.e. $`\eta >\frac{3}{2}`$ the scale factor will decelerate. 
To accelerate, $`\omega `$ should satisfy $$\omega >\omega _{\mathrm{\Gamma }_2}for\gamma <\frac{1}{3}.$$ $$\omega <\omega _{\mathrm{\Gamma }_2}for\gamma >\frac{1}{3}.$$ In Fig.7, region IV correspond to these relations. For deceleration $`\omega `$ should satisfy $$\omega >\omega _{\mathrm{\Gamma }_2}for\gamma >\frac{1}{3}.$$ $$\omega <\omega _{\mathrm{\Gamma }_2}for\gamma <\frac{1}{3}.$$ Region III and V satisfy these relations. In both cases($`\mathrm{\Gamma }_1>0or\mathrm{\Gamma }_1<0`$), in the limit $`\tau \pm \mathrm{}`$ the scale factor $`a_E(t_E)`$ behaves as $$a_E(t_E)t_{E}^{}{}_{}{}^{H_{E\pm }/T_{E\pm }}.$$ (100) Since we already know that $`T_E`$ and $`H_E`$ satisfy $`T_{E\pm }=3H_{E\pm }`$, from Eq.(100), $`a_E(t_E)`$ can be written $$a_E(t_E)t_{E}^{}{}_{}{}^{1/3}.$$ It is interesting to compare with Einstein general relativity where $`a_E(t_E)`$ behaves as $$a_E(t_E)t_{E}^{}{}_{}{}^{2/3}fordust(\gamma =0).$$ $$a_E(t_E)t_{E}^{}{}_{}{}^{1/2}forradiation(\gamma =1/3).$$ We summarize the behavior of the scale factor in terms of $`t_E`$. * Region I: $`T_E(=3H_E)<0,T_{E+}(=3H_{E+})<0`$. The universe described by $`a_E(t_E)`$ evolves from infinite size to zero as time runs from negative infinity to finite final time $`t_{Ef}`$. * Region II: $`T_E>0,T_{E+}>0`$. The universe evolves from zero size to infinite one as time runs from finite initial time $`t_{Ei}`$ to positive infinity. * Region III, IV and V: $`T_E>0`$ and $`T_{E+}<0`$. The universe evolves from zero to infite size as time runs from finite initial time $`t_{Ei}`$ to infinity for $`\tau (\mathrm{},0)`$ and infinte size to zero as time runs from negative infinity to finite final time $`t_{Ef}`$ for $`\tau (0,+\mathrm{})`$. Furthermore, in the limit $`\tau 0`$ the universe has acceleration or deceleration regime which depends on the range of $`\eta `$. Region III and V is decelerationary while region IV is accelerationary phase. ## VIII Discussion and conclusion We have considered string motivated Brans-Dicke(BD) cosmology with perfect fluid type matter which arise when a certain kind of the dilaton coupled p-brane gas is dominating the universe. This is the complementary study to our earlier work, where p-brane gas that does not couple to the dilaton was studied. Cosmology is classified into 16 phases according to the asymptotic behavior of the time interval and the scale factor. This is qualitatively similar to the result obtained before for the dilaton coupled case. In string frame, there is a phase where the cosmology has no singularity, namely, region II in Fig 3. In Einstein frame, contrary to the string frame, there is no singularity free phase. This is partly due to the difference of cosmic time($`t`$ in string frame and $`t_E`$ in Einstein frame) and partly due to the dilaton factor relating two frames. In asymptotic regime, $`\tau \pm \mathrm{}`$, the behavior of the scale factor is $`t_{E}^{}{}_{}{}^{1/3}`$. To our surprise the inflationary regime of the dilaton-graviton string cosmology is gone in the presence of the matter. The matter contribution seems to give mass term or potential to the dilaton regulating the dilaton from growing dilaton kinetic energy. In a recent study, with the assumption of holographic principle, it was argued that this principle requires the existence of graceful exit by smoothly connecting the pre and post big-bang branches. According to the , all cosmological solutions of p-brane dominating the universe can be mapped to the present case or the case studied in . 
In any case, there is no solution which exhibit both inflation and graceful exit. Therefore our result draws a negative conclusion to what has been said in .
no-problem/9909/solv-int9909002.html
ar5iv
text
# Discrete 𝑍^𝛾 and Painlevé equations ## 1 Introduction Circle patterns as discrete analogs of conformal mappings is a fast developing field of research on the border of analysis and geometry. Recent progress in their investigation was initiated by Thurston’s idea about approximating the Riemann mapping by circle packings. The corresponding convergence was proven by Rodin and Sullivan . For hexagonal packings, it was established by He and Schramm that the convergence is $`C^{\mathrm{}}.`$ Classical circle packings comprised by disjoint open disks were later generalized to circle patterns, where the disks may overlap (see for example ). In , Schramm introduced and investigated circle patterns with the combinatorics of the square grid and orthogonal neighboring circles. In particular, a maximum principle for these patterns was established which allowed global results to be proven. On the other hand, not very much is known about analogs of standard holomorphic functions. Doyle constructed a discrete analogue of the exponential map with the hexagonal combinatorics , and the discrete versions of exponential and erf-function, with underlying combinatorics of the square grid, were found in . The discrete logarithm and $`z^2`$ have been conjectured by Schramm and Kenyon (see ). In a conformal setting, Schramm’s circle patterns are governed by a difference equation which turns out to be the stationary Hirota equation (see ). This equation is an example of an integrable difference equation. It appeared first in a different branch of mathematics – the theory of integrable systems (see for a survey). Moreover, it is easy to show that the lattice comprised by the centers of the circles of a Schramm’s pattern and by their intersection points is a special discrete conformal mapping (see Definition 1 below). The latter were introduced by in the setting of discrete integrable geometry, originally without any relation to circle patterns. The present paper is devoted to the discrete analogue of the function $`f(z)=z^\gamma ,`$ suggested first in . We show that the corresponding Schramm’s circle patterns can be naturally described by methods developed in the theory of integrable systems. Let us recall the definition of a discrete conformal map from . ###### Definition 1 $`f:𝐙^\mathrm{𝟐}𝐑^\mathrm{𝟐}=𝐂`$ is called a discrete conformal map if all its elementary quadrilaterals are conformal squares, i.e. their cross-ratios are equal to -1: $$q(f_{n,m},f_{n+1,m},f_{n+1,m+1},f_{n,m+1}):=$$ $$\frac{(f_{n,m}f_{n+1,m})(f_{n+1,m+1}f_{n,m+1})}{(f_{n+1,m}f_{n+1,m+1})(f_{n,m+1}f_{n,m})}=1$$ (1) This definition is motivated by the following properties: 1) it is Möbius invariant, and 2) a smooth map $`f:D𝐂𝐂`$ is conformal (holomorphic or antiholomorphic) if and only if $`(x,y)D`$ $$\underset{ϵ0}{lim}q(f(x,y),f(x+ϵ,y)f(x+ϵ,y+ϵ)f(x,y+ϵ))=1.$$ For some examples of discrete conformal maps and for their applications in differential geometry of surfaces see . A naive method to construct a discrete analogue of the function $`f(z)=z^\gamma `$ is to start with $`f_{n,0}=n^\gamma ,n0`$, $`f_{0,m}=(im)^\gamma ,m0`$, and then to compute $`f_{n,m}`$ for any $`n,m>0`$ using equation (1). But a so determined map has a behavior which is far from that of the usual holomorphic maps. Different elementary quadrilaterals overlap (see the left lattice in Fig. 1). 
###### Definition 2 A discrete conformal map $`f_{n,m}`$ is called an immersion if the interiors of adjacent elementary quadrilaterals $`(f_{n,m},f_{n+1,m},f_{n+1,m+1},f_{n,m+1})`$ are disjoint. To construct an immersed discrete analogue of $`z^\gamma ,`$ which is the right lattice presented in Fig. 1, a more complicated approach is needed. Equation (1) can be supplemented with the following nonautonomous constraint: $$\gamma f_{n,m}=2n\frac{(f_{n+1,m}f_{n,m})(f_{n,m}f_{n1,m})}{(f_{n+1,m}f_{n1,m})}+2m\frac{(f_{n,m+1}f_{n,m})(f_{n,m}f_{n,m1})}{(f_{n,m+1}f_{n,m1})},$$ (2) which plays a crucial role in this paper. This constraint, as well as its compatibility with (1), is derived from some monodromy problem (see Section 2). Let us assume $`0<\gamma <2`$ and denote $`𝐙_+^\mathrm{𝟐}=\{(n,m)𝐙^\mathrm{𝟐}:n,m0\}.`$ Motivated by the asymptotics of the constraint (2) as $`n,m\mathrm{}`$, and by the properties $$z^\gamma (𝐑_+)𝐑_+,z^\gamma (i𝐑_+)e^{\gamma \pi i/2}𝐑_+$$ of the holomorphic mapping $`z^\gamma `$, we use the following definition of the ”discrete” $`z^\gamma `$. ###### Definition 3 The discrete conformal map $`Z^\gamma :𝐙_+^\mathrm{𝟐}𝐂,0<\gamma <2`$ is the solution of (1) and (2) with the initial conditions $$Z^\gamma (0,0)=0,Z^\gamma (1,0)=1,Z^\gamma (0,1)=e^{\gamma \pi i/2}.$$ (3) Obviously $`Z^\gamma (n,0)𝐑_+`$ and $`Z^\gamma (0,m)e^{\gamma \pi i/2}(𝐑_+)`$ for any $`n,m𝐍`$. Fig. 2 suggests that $`Z^\gamma `$ is an immersion. The corresponding theorem is the main result of this paper. ###### Theorem 1 The discrete map $`Z^\gamma `$ for $`0<\gamma <2`$ is an immersion. The proof is based on analysis of geometric and algebraic properties of the corresponding lattices. In Section 3 we show that $`Z^\gamma `$ corresponds to a circle pattern of Schramm’s type. (The circle pattern corresponding to $`Z^{2/3}`$ is presented in Fig. 2.) Next, analizing the equations for the radii of the circles, we show that in order to prove that $`Z^\gamma `$ is an immersion it is enough to establish a special property of a separatrix solution of the following ordinary difference equation of Painlevé type $$(n+1)(x_n^21)\left(\frac{x_{n+1}ix_n}{i+x_nx_{n+1}}\right)n(x_n^2+1)\left(\frac{x_{n1}+ix_n}{i+x_{n1}x_n}\right)=\gamma x_n.$$ Namely, in Section 4 it is shown that $`Z^\gamma `$ is an immersion if and only if the unitary solution $`x_n=e^{i\alpha _n}`$ of this equation with $`x_0=e^{i\gamma \pi /4}`$ lies in the sector $`0<\alpha _n<\pi /2.`$ Similar problems have been studied in the setting of the isomonodromic deformation method . In particular, connection formulas were derived. These formulas describe the asymptotics of solutions $`x_n`$ for $`n\mathrm{}`$ as a function of $`x_0`$ (see in particular ). These methods seem to be insufficient for our purposes since we need to control $`x_n`$ for finite $`n`$’s as well. The geometric origin of this equation permits us to prove the property of the solution $`x_n`$ mentioned above by purely geometric methods. Based on results established for $`Z^\gamma `$, we show in Section 5 how to obtain discrete immersed analogs of $`z^2`$ and $`\mathrm{log}z`$ as limiting cases of $`Z^\gamma `$ with $`\gamma 2`$ and $`\gamma 0`$, respectively. Finally, discrete analogs of $`Z^\gamma `$ for $`\gamma >2`$ are discussed in Section 6. 
## 2 Discrete $`Z^\gamma `$ via a monodromy problem Equation (1) is the compatibility condition of the Lax pair $$\mathrm{\Psi }_{n+1,m}=U_{n,m}\mathrm{\Psi }_{n,m}\mathrm{\Psi }_{n,m+1}=V_{n,m}\mathrm{\Psi }_{n,m}$$ (4) found by Nijhoff and Capel : $$U_{n,m}=\left(\begin{array}{cc}1& u_{n,m}\\ \frac{\lambda }{u_{n,m}}& 1\end{array}\right)V_{n,m}=\left(\begin{array}{cc}1& v_{n,m}\\ \frac{\lambda }{v_{n,m}}& 1\end{array}\right),$$ (5) where $$u_{n,m}=f_{n+1,m}f_{n,m},v_{n,m}=f_{n,m+1}f_{n,m}.$$ Whereas equation (1) is invariant with respect to fractional linear transformations $`f_{n,m}(pf_{n,m}+q)/(rf_{n,m}+s)`$, the constraint (2) is not. By applying a fractional linear transformation and shifts of $`n`$ and $`m`$, (2) is generalized to the following form: $$\beta f_{n,m}^2+\gamma f_{n,m}+\delta =2(n\varphi )\frac{(f_{n+1,m}f_{n,m})(f_{n,m}f_{n1,m})}{(f_{n+1,m}f_{n1,m})}+$$ $$2(m\psi )\frac{(f_{n,m+1}f_{n,m})(f_{n,m}f_{n,m1})}{(f_{n,m+1}f_{n,m1})},$$ (6) where $`\beta ,\gamma ,\delta ,\varphi ,\psi `$ are arbitrary constants. ###### Theorem 2 $`f:𝐙^\mathrm{𝟐}𝐂`$ is a solution to the system (1, 6) if and only if there exists a solution $`\mathrm{\Psi }_{n,m}`$ to (4, 5) satisfying the following differential equation in $`\lambda `$: $$\frac{d}{d\lambda }\mathrm{\Psi }_{n,m}=A_{n,m}\mathrm{\Psi }_{n,m},A_{n,m}=\frac{B_{n,m}}{1+\lambda }+\frac{C_{n,m}}{1\lambda }+\frac{D_{n,m}}{\lambda },$$ (7) with $`\lambda `$independent matrices $`B_{n,m},C_{n,m},D_{n,m}.`$ The matrices $`B_{n,m},C_{n,m},D_{n,m}`$ in (7) are of the following structure: $`B_{n,m}`$ $`=`$ $`{\displaystyle \frac{n\varphi }{u_{n,m}+u_{n1,m}}}\left(\begin{array}{cc}u_{n,m}& u_{n,m}u_{n1,m}\\ 1& u_{n1,m}\end{array}\right){\displaystyle \frac{\varphi }{2}}I`$ $`C_{n,m}`$ $`=`$ $`{\displaystyle \frac{m\psi }{v_{n,m}+v_{n,m1}}}\left(\begin{array}{cc}v_{n,m}& v_{n,m}v_{n,m1}\\ 1& v_{n,m1}\end{array}\right){\displaystyle \frac{\psi }{2}}I`$ $`D_{n,m}`$ $`=`$ $`\left(\begin{array}{cc}\frac{\gamma }{4}\frac{\beta }{2}f_{n,m}& \frac{\beta }{2}f_{n,m}^2\frac{\gamma }{2}f_{n,m}\frac{\delta }{2}\\ \frac{\beta }{2}& \frac{\gamma }{4}+\frac{\beta }{2}f_{n,m}\end{array}\right).`$ The constraint (6) is compatible with (1). the proof of this theorem is straightforward but involves some computations. It is presented in Appendix A. Note that the identity $$det\mathrm{\Psi }_{n,m}(\lambda )=(1+\lambda )^n(1\lambda )^mdet\mathrm{\Psi }_{0,0}(\lambda )$$ for determinants implies $$\mathrm{tr}A_{n,m}(\lambda )=\frac{n}{1+\lambda }\frac{m}{1\lambda }+a(\lambda ),$$ (11) where $`a(\lambda )`$ is independent of $`n`$ and $`m.`$ Thus, up to the term $`D_{n,m}/\lambda `$, equation (7) is the simplest one possible. Further, we will deal with the special case in (6) where $`\beta =\delta =\varphi =\psi =0,`$ leading to the discrete $`Z^\gamma .`$ The constraint (2) and the corresponding monodromy problem were obtained in for the case $`\gamma =1`$, and generalized to the case of arbitrary $`\gamma `$ in . ## 3 Circle patterns and $`Z^\gamma `$ In this section we show that $`Z^\gamma `$ of Definition 3 is a special case of circle patters with the combinatorics of the square grid as defined by Schramm in . ###### Lemma 1 Discrete $`f_{n,m}`$ satisfying (1) and (2) with initial data $`f_{0,0}=0`$, $`f_{1,0}=1`$, $`f_{0,1}=e^{i\alpha }`$ has the equidistant property $$f_{2n,0}f_{2n1,0}=f_{2n+1,0}f_{2n,0},f_{0,2m}f_{0,2m1}=f_{0,2m+1}f_{0,2m}$$ for any $`n1`$, $`m1`$. 
Proof: For $`m=0`$ or $`n=0`$ the constraint (2) is an ordinary difference equation of the second order. The Lemma is proved by induction. Given initial $`f_{0,0}`$, $`f_{0,1}`$ and $`f_{1,0}`$ the constraint (2) allows us to compute $`f_{n,0}`$ and $`f_{0,m}`$ for all $`n,m1.`$ Now using equation (1) one can successively compute $`f_{n,m}`$ for any $`n,m𝐍`$. Observe that if $`|f_{n+1,m}f_{n,m}|=|f_{n,m+1}f_{n,m}|`$ then the quadrilateral $`(f_{n,m},f_{n+1,m},f_{n+1,m+1},f_{n,m+1})`$ is of the kite form – it is inscribed in a circle and is symmetric with respect to the diameter of the circle $`[f_{n,m},f_{n+1,m+1}].`$ If the angle at the vertex $`f_{n,m}`$ is $`\pi /2`$ then the quadrilateral $`(f_{n,m},f_{n+1,m},f_{n+1,m+1},f_{n,m+1})`$ is of the kite form too. In this case the quadrilateral is symmetric with respect to its diagonal $`[f_{n,m+1},f_{n+1,m}]`$. ###### Proposition 1 Let $`f_{n,m}`$ satisfy (1) and (2) in $`𝐙_+^\mathrm{𝟐}`$ with initial data $`f_{0,0}=0`$, $`f_{0,1}=1`$, $`f_{0,1}=e^{i\alpha }.`$ Then all the elementary quadrilaterals $`(f_{n,m},f_{n+1,m},f_{n+1,m+1},f_{n,m+1})`$ are of the kite form. All edges at the vertex $`f_{n,m}`$ with $`n+m=0(\mathrm{mod}2)`$ are of the same length $$|f_{n+1,m}f_{n,m}|=|f_{n,m+1}f_{n,m}|=|f_{n1,m}f_{n,m}|=|f_{n,m1}f_{n,m}|.$$ All angles between the neighboring edges at the vertex $`f_{n,m}`$ with $`n+m=1(\mathrm{mod}2)`$ are equal to $`\pi /2.`$ Proof follows from Lemma 1 and from the above observation by induction. Proposition 1 implies that for any $`n,m`$ such that $`n+m=0(\mathrm{mod}2)`$, the points $`f_{n+1,m},`$ $`f_{n,m+1},`$ $`f_{n1,m},`$ $`f_{n,m1}`$ lie on a circle with the center $`f_{n,m}`$. ###### Corollary 1 The circumscribed circles of the quadrilaterals $`(f_{n1,m},f_{n,m1},f_{n+1,m},f_{n,m+1})`$ with $`n+m=0(\mathrm{mod}2)`$ form a circle pattern of Schramm type (see ), i.e. the circles of neighboring quadrilaterals intersect orthogonally and the circles of half-neighboring quadrilaterals with common vertex are tangent (see Fig. 3). Proof: Consider the sublattice $`\{n,m:n+m=0(\mathrm{mod}2)\}`$ and denote by $`𝐕`$ its quadrant $$𝐕=\{z=N+iM:N,M𝐙^\mathrm{𝟐},M|N|\},$$ where $$N=(nm)/2,M=(n+m)/2.$$ We will use complex labels $`z=N+iM`$ for this sublattice. Denote by $`C(z)`$ the circle of the radius $$R(z)=|f_{n,m}f_{n+1,m}|=|f_{n,m}f_{n,m+1}|=|f_{n,m}f_{n1,m}|=|f_{n,m}f_{n,m1}|$$ (12) with the center at $`f_{N+M,MN}=f_{n,m}.`$ From Proposition 1 it follows that any two circles $`C(z)`$, $`C(z^{})`$ with $`|zz^{}|=1`$ intersect orthogonally, and any two circles $`C(z)`$, $`C(z^{})`$ with $`|zz^{}|=\sqrt{2}`$ are tangent. Thus, the corollary is proved. Let $`\{C(z)\},z𝐕`$ be a circle pattern of Schramm type on the complex plane. Define $`f_{n,m}:𝐙_+^\mathrm{𝟐}𝐂`$ as follows: a) if $`n+m=0(\mathrm{mod}2)`$ then $`f_{n,m}`$ is the center of $`C(\frac{nm}{2}+i\frac{n+m}{2}),`$ b) if $`n+m=1(\mathrm{mod}2)`$ then $`f_{n,m}:=C(\frac{nm1}{2}+i\frac{n+m1}{2})C(\frac{nm+1}{2}+i\frac{n+m+1}{2})=C(\frac{nm+1}{2}+i\frac{n+m1}{2})C(\frac{nm1}{2}+i\frac{n+m+1}{2}).`$ Since all elementary quadrilaterals $`(f_{n,m},f_{n+1,m},f_{n+1,m+1},f_{n,m+1})`$ are of the kite form, equation (1) is satisfied automatically. In what follows, the function $`f_{n,m},`$ defined as above by a) and b), is called a discrete conformal map corresponding to the circle pattern $`\{C(z)\}`$ . ###### Theorem 3 Let $`f_{n,m}`$ satisfying (1) and (2) with initial data $`f_{0,0}=0`$, $`f_{0,1}=1`$, $`f_{0,1}=e^{i\alpha }`$, be an immersion. 
Then $`R(z)`$ defined by (12) satisfies the following equations: $$\begin{array}{c}R(z)R(z+1)(2M\gamma )+R(z+1)R(z+1+i)(2(N+1)\gamma )+\hfill \\ R(z+1+i)R(z+i)(2(M+1)\gamma )+R(z+i)R(z)(2N\gamma )=0,\hfill \end{array}$$ (13) for $`z𝐕_l:=𝐕\{N+i(N1)|N𝐍\}`$ and $$\begin{array}{c}(N+M)(R(z)^2R(z+1)R(zi))(R(z+i)+R(z+1))+\hfill \\ (MN)(R(z)^2R(z+i)R(z+1))(R(z+1)+R(zi))=0,\hfill \end{array}$$ (14) for $`z𝐕_{int}:=𝐕\backslash \{\pm N+iN|N𝐍\}.`$ Conversely let $`R(z):𝐕𝐑_+`$ satisfy (13) for $`z𝐕_l`$ and (14) for $`z𝐕_{int}.`$ Then $`R(z)`$ defines an immersed circle packing with the combinatorics of the square grid. The corresponding discrete conformal map $`f_{n,m}`$ is an immersion and satisfies (2). Proof: Suppose that the discrete net determined by $`f_{n,m}`$ is immersed, i.e. the open discs of tangent circles do not intersect. Consider $`n+m=1(\mathrm{mod}2)`$ and denote $`f_{n+1,m}=f_{n,m}+r_1e^{i\beta }`$, $`f_{n,m+1}=f_{n,m}+ir_2e^{i\beta }`$, $`f_{n1,m}=f_{n,m}r_3e^{i\beta }`$, $`f_{n,m1}=f_{n,m}ir_4e^{i\beta },`$ where $`r_i>0`$ are the radii of the corresponding circles. The constraint (2) reads as follows $$\gamma f_{n,m}=e^{i\beta }\left(2n\frac{r_1r_3}{r_1+r_3}+2im\frac{r_2r_4}{r_2+r_4}\right).$$ (15) The kite form of elementary quadrilaterals implies $$f_{n+1,m+1}=f_{n+1,m}e^{i\beta }r_1\frac{(r_1ir_2)^2}{r_1^2+r_2^2},f_{n+1,m1}=f_{n+1,m}e^{i\beta }r_1\frac{(r_1+ir_4)^2}{r_1^2+r_4^2}.$$ Computing $`f_{n+2,m}`$ from the constraint (15) at the point $`(n+1,m)`$ and inserting it into the identity $`|f_{n+2,m}f_{n+1,m}|=r_1`$, after some transformations one arrives at $$r_1r_2(n+m+1\gamma )+r_2r_3(n+m+1\gamma )+r_3r_4(nm+1\gamma )+r_4r_1(nm+1\gamma )=0.$$ (16) This equation coincides with (13). Now let $`f_{n+2,m+1}=f_{n+1,m+1}+R_1e^{i\beta ^{}}`$, $`f_{n+1,m+2}=f_{n+1,m+1}+iR_2e^{i\beta ^{}}`$, $`f_{n,m+1}=f_{n+1,m+1}R_3e^{i\beta ^{}}`$, $`f_{n+1,m}=f_{n+1,m+1}iR_4e^{i\beta ^{}}.`$ Since all elementary quadrilaterals are of the kite form we have $$R_4=r_1,R_3=r_2,e^{i\beta ^{}}=ie^{i\beta }\frac{(r_2+ir_1)^2}{r_1^2+r_2^2}.$$ Substituting these expressions and (15) into the constraint (2) for $`(n+1,m+1)`$ and using (16), we arrive at: $$R_1=\frac{(n+1)r_1^2(r_2+r_4)+mr_2(r_1^2r_2r_4)}{(n+1)r_2(r_2+r_4)m(r_1^2r_2r_4)},$$ $$R_2=\frac{(m+1)r_2^2(r_1+r_3)+nr_1(r_2^2r_1r_3)}{(m+1)r_1(r_1+r_3)n(r_2^2r_1r_3)}.$$ These equations together with $`R_4=r_1,R_3=r_2`$ describe the evolution $`(n,m)(n+1,m+1)`$ of the crosslike figure formed by $`f_{n,m},f_{n\pm 1,m},f_{n,m\pm 1}`$ with $`n+m=1(\mathrm{mod}2)`$. The equation for $`R_2`$ coincides with (14). We have considered internal points $`z𝐕_{int}`$, now we consider those that are not. Equation (13) at $`z=N+iN`$ and $`z=N+i(N1),N𝐍`$ reads as $$R(\pm (N+1)+i(N+1))=\frac{2N+\gamma }{2(N+1)\gamma }R(N+iN).$$ (17) The converse claim of the Theorem is based on the following Lemma. ###### Lemma 2 Let $`R(z):𝐕𝐑_+`$ satisfy (13) for $`z𝐕_l`$ and (14) for $`z=iM,M𝐍.`$ Then $`R(z)`$ satisfies: a) equation (14) for $`z𝐕\backslash \{N+iN|N𝐍\}`$, b) equation $$\begin{array}{c}(N+M)(R(z)^2R(z+i)R(z1))(R(z1)+R(zi))+\hfill \\ (MN)(R(z)^2R(z1)R(zi))(R(z+i)+R(z1))=0\hfill \end{array}$$ (18) for $`z𝐕\backslash \{N+iN|N𝐍\}`$ c) equation $$R(z)^2=\frac{\left(\frac{1}{R(z+1)}+\frac{1}{R(z+i)}+\frac{1}{R(z1)}+\frac{1}{R(zi)}\right)R(z+1)R(z+i)R(z1)R(zi)}{R(z+1)+R(z+i)+R(z1)+R(zi)}$$ (19) for $`z𝐕_{int}`$. Proof of this Lemma is technical and is presented in Appendix B. 
Let $`R(z)`$ satisfy (13,14) then the item c) of Lemma 2 implies that at $`z𝐕_{int}`$ equation (19) is fulfilled. In it was proven that, given $`R(z)`$ satisfying (19), the circle pattern $`\{C(z)\}`$ with radii of the circles $`R(z)`$ is immersed. Thus, the discrete conformal map $`f_{n,m}`$ corresponding to $`\{C(z)\}`$ is an immersion. The item b) of Lemma 2 implies that $`R(z)`$ satisfies (18) at $`z=N+iN,N𝐍`$, which reads $$R(N1+iN)R(N+i(N+1))=R^2(N+iN).$$ (20) This equation implies that the center $`O`$ of $`C(N+iN)`$ and two intersection points $`A,B`$ of $`C(N+iN)`$ with $`C(N1+iN)`$ and $`C(N+i(N+1))`$ lie on a straight line (see Fig. 4). Thus all the points $`f_{n,0}`$ lie on a straight line. Using equation (13) at $`z=N+iN`$, one gets by induction that $`f_{n,m}`$ satisfies (2) at $`(n,0)`$ for any $`n0.`$ Similarly, item a) of Lemma 2, equation (14) at $`z=N+iN,N𝐍`$, and equation (13) at $`z=N+i(N1),N𝐍`$, imply that $`f_{n,m}`$ satisfies (2) at $`(0,m)`$. Now Theorem 2 implies that $`f_{n,m}`$ satisfies (2) in $`𝐙_+^\mathrm{𝟐}`$, and Theorem 3 is proved. Remark. Equation (19) is a discrete analogue of the equation $`\mathrm{\Delta }\mathrm{log}(R)=0`$ in the smooth case. Similarly equations (14) and (18) can be considered discrete analogs of the equation $`xR_yyR_x=0`$, and equation (13) is a discrete analogue of the equation $`xR_x+yR_y=(\gamma 1)R`$. From the initial condition (3) we have $$R(0)=1,R(i)=\mathrm{tan}\frac{\gamma \pi }{4}.$$ (21) Theorem 3 allows us to reformulate the immersion property of the circle lattice completely in terms of the system (13, 14). Namely, to prove Theorem 1 one should show that the solution of the system (13, 14) with initial data (21) is positive for all $`z𝐕.`$ Equation (17) implies $$R(\pm N+iN)=\frac{\gamma (2+\gamma )\mathrm{}(2(N1)+\gamma )}{(2\gamma )(4\gamma )\mathrm{}(2N\gamma )}.$$ (22) ###### Proposition 2 Let the solution $`R(z)`$ of (14) and (13) in $`𝐕`$ with initial data $$R(0)=1,R(i)=\mathrm{tan}\frac{\gamma \pi }{4}$$ be positive on the imaginary axis, i.e. $`R(iM)>0`$ for any $`M𝐙_+`$. Then $`R(z)`$ is positive everywhere in $`𝐕`$. Proof: Since the system of equations for $`R(z)`$ defined in Theorem 3 has the symmetry $`NN`$, it is sufficient to prove the proposition for $`N0`$. Equation (13) can be rewritten as $$R(z+1+i)=\frac{R(z)R(z+1)(2M+\gamma )+R(z)R(z+i)(2N+\gamma )}{R(z+1)(2N+2\gamma )+R(z+i)(2M+2\gamma )}.$$ For $`\gamma 2`$, $`N0,M>0`$, and positive $`R(z),R(z+1),R(z+i)`$, we get $`R(z+1+i)>0.`$ Using $`R(N+iN)>0`$ for all $`N𝐍`$, one obtains the conclusion by induction. ## 4 $`Z^\gamma `$ and discrete Painlevé equation Due to Proposition 2 the discrete $`Z^\gamma `$ is an immersion if and only if $`R(iM)>0`$ for all $`M𝐍`$. To prove the positivity of the radii on the imaginary axis it is more convenient to use equation (2) for $`n=m`$. ###### Proposition 3 The map $`f:𝐙_+^\mathrm{𝟐}𝐂`$ satisfying (1) and (2) with initial data $`f_{0,0}=0`$, $`f_{0,1}=1`$, $`f_{0,1}=e^{i\alpha }`$ is an immersion if and only if the solution $`x_n`$ of the equation $$(n+1)(x_n^21)\left(\frac{x_{n+1}ix_n}{i+x_nx_{n+1}}\right)n(x_n^2+1)\left(\frac{x_{n1}+ix_n}{i+x_{n1}x_n}\right)=\gamma x_n,$$ (23) with $`x_0=e^{i\alpha /2}`$, is of the form $`x_n=e^{i\alpha _n}`$, where $`\alpha _n(0,\pi /2)`$. Proof: Let $`f_{n,m}`$ be an immersion. 
## 4 $`Z^\gamma `$ and discrete Painlevé equation

Due to Proposition 2 the discrete $`Z^\gamma `$ is an immersion if and only if $`R(iM)>0`$ for all $`M\in 𝐍`$. To prove the positivity of the radii on the imaginary axis it is more convenient to use equation (2) for $`n=m`$.

###### Proposition 3

The map $`f:𝐙_+^\mathrm{𝟐}\to 𝐂`$ satisfying (1) and (2) with initial data $`f_{0,0}=0`$, $`f_{1,0}=1`$, $`f_{0,1}=e^{i\alpha }`$ is an immersion if and only if the solution $`x_n`$ of the equation $$(n+1)(x_n^2-1)\left(\frac{x_{n+1}-ix_n}{i+x_nx_{n+1}}\right)-n(x_n^2+1)\left(\frac{x_{n-1}+ix_n}{i+x_{n-1}x_n}\right)=\gamma x_n,$$ (23) with $`x_0=e^{i\alpha /2}`$, is of the form $`x_n=e^{i\alpha _n}`$, where $`\alpha _n\in (0,\pi /2)`$.

Proof: Let $`f_{n,m}`$ be an immersion. Define $`R_n:=R(in)>0`$, and define $`\alpha _n\in (0,\pi /2)`$ through $`f_{n,n+1}-f_{n,n}=e^{2i\alpha _n}(f_{n+1,n}-f_{n,n}).`$ By symmetry, all the points $`f_{n,n}`$ lie on the diagonal $`\mathrm{arg}f_{n,n}=\alpha /2.`$ Taking into account that all elementary quadrilaterals are of the kite form, one obtains $$f_{n+2,n+1}=e^{i\alpha /2}(g_{n+1}+R_{n+1}e^{-i\alpha _{n+1}}),\qquad f_{n+1,n+2}=e^{i\alpha /2}(g_{n+1}+R_{n+1}e^{i\alpha _{n+1}}),$$ $$f_{n+1,n}=e^{i\alpha /2}(g_{n+1}-iR_{n+1}e^{-i\alpha _n}),\qquad f_{n,n+1}=e^{i\alpha /2}(g_{n+1}+iR_{n+1}e^{i\alpha _n}),$$ and $$R_{n+1}=R_n\mathrm{tan}\alpha _n,$$ (24) where $`g_{n+1}=|f_{n+1,n+1}|`$ (see Fig. 5). Now the constraint (2) for $`(n+1,n+1)`$ is equivalent to $$\gamma g_{n+1}=2(n+1)R_{n+1}\left(\frac{e^{i\alpha _n}+ie^{i\alpha _{n+1}}}{i+e^{i(\alpha _n+\alpha _{n+1})}}\right).$$ Similarly, $$\gamma g_n=2nR_n\left(\frac{e^{i\alpha _{n-1}}+ie^{i\alpha _n}}{i+e^{i(\alpha _{n-1}+\alpha _n)}}\right).$$ Putting these expressions into the equality $$g_{n+1}=g_n+e^{-i\alpha _n}(R_n+iR_{n+1})$$ and using (24) one obtains (23) with $`x_n=e^{i\alpha _n}`$. This proves the necessity part.

Now let us suppose that there is a solution $`x_n=e^{i\alpha _n}`$ of (23) with $`\alpha _n\in (0,\pi /2)`$. This solution determines a sequence of orthogonal circles along the diagonal $`e^{i\alpha /2}𝐑_+`$, and thus the points $`f_{n,n},f_{n\pm 1,n},f_{n,n\pm 1}`$, for $`n\ge 1.`$ Now equation (1) determines $`f_{n,m}`$ in $`𝐙_+^\mathrm{𝟐}.`$ Since $`\alpha _n\in (0,\pi /2)`$, the inner parts of the quadrilaterals $`(f_{n,n},f_{n+1,n},f_{n+1,n+1},f_{n,n+1})`$ on the diagonal, and of the quadrilaterals $`(f_{n,n-1},f_{n+1,n-1},f_{n+1,n},f_{n,n})`$ are disjoint. That means that we have a positive solution $`R(z)`$ of (13), (14) for $`z=iM,z=\pm 1+iM,M\in 𝐍.`$ (See the proof of Theorem 3.) Given $`R(iM)`$, equation (13) determines $`R(z)`$ for all $`z\in 𝐕.`$ Due to Lemma 2, $`R(z)`$ satisfies (13), (14). From Proposition 2 it follows that $`R(z)`$ is positive. Theorem 3 implies that the discrete conformal map $`g_{n,m}`$ corresponding to the circle pattern $`\{C(z)\}`$ determined by $`R(z)`$ is an immersion and satisfies (2). Since $`g_{n,n}=f_{n,n}`$ and $`g_{n\pm 1,n}=f_{n\pm 1,n}`$, equation (1) implies $`f_{n,m}=g_{n,m}.`$ This proves the theorem.

Remark. Note that although (23) is a difference equation of the second order, a solution $`x_n`$ of (23) for $`n\ge 0`$ is determined by its value $`x_0=e^{i\alpha /2}.`$ From the equation for $`n=0`$ one gets $$x_1=\frac{x_0(x_0^2+\gamma -1)}{i((\gamma -1)x_0^2+1)}.$$ (25)

Remark. Equation (23) is a special case of an equation that has already appeared in the literature, although in a completely different context. Namely, it is related to the following discrete Painlevé equation $$\frac{2\zeta _{n+1}}{1-X_{n+1}X_n}+\frac{2\zeta _n}{1-X_nX_{n-1}}=\mu +\nu +\zeta _{n+1}+\zeta _n+$$ $$\frac{(\mu -\nu )(r^2-1)X_n+r(1-X_n^2)[\frac{1}{2}(\zeta _n+\zeta _{n+1})+(-1)^n(\zeta _n-\zeta _{n+1}-2m)]}{(r+X_n)(1+rX_n)},$$ which was considered in , and is called the generalized d-PII equation.
The corresponding transformation (we are thankful to A. Ramani and B. Grammaticos for this identification of the equations) is $$X=\frac{(1+i)(x-i)}{\sqrt{2}(x+1)}$$ with $`\zeta _n=n`$, $`r=\sqrt{2},\mu =0,(\zeta _n-\zeta _{n+1}-2m)=0,`$ $`\gamma =(2\nu -\zeta _n+\zeta _{n+1}).`$ Equation (23) can be written in the following recurrent form: $$x_{n+1}=\phi (n,x_{n-1},x_n):=$$ $$-x_{n-1}\frac{nx_n^{-2}+i(\gamma -1)x_{n-1}^{-1}x_n^{-1}+(\gamma -1)+i(2n+1)x_{n-1}^{-1}x_n+(n+1)x_n^2}{nx_n^2-i(\gamma -1)x_{n-1}x_n+(\gamma -1)-i(2n+1)x_{n-1}x_n^{-1}+(n+1)x_n^{-2}}.$$ (26) Obviously, this equation possesses unitary solutions.

###### Theorem 4

There exists a unitary solution $`x_n`$ of the equation (23) with $`x_n\in A_I\backslash \{1,i\}\subset S^1`$, $`n\ge 0,`$ where $$A_I:=\{e^{i\beta }\,|\,\beta \in [0,\pi /2]\}.$$

Proof: Let us study the properties of the function $`\phi (n,x,y)`$ restricted to the torus $`T^2=S^1\times S^1=\{(x,y):x,y\in 𝐂,|x|=|y|=1\}.`$

1. The function $`\phi (n,x,y)`$ is continuous on $`A_I\times A_I`$ for all $`n\ge 0.`$ (Continuity on the boundary of $`A_I\times A_I`$ is understood to be one-sided.) The points of discontinuity must satisfy: $$n+1+(\gamma -1)y^2-i(2n+1)xy-i(\gamma -1)xy^3+ny^4=0.$$ The last identity never holds for unitary $`x,y`$ with $`n\in 𝐍`$ and $`0<\gamma <2.`$ For $`n=0`$ the right hand side of (25) is also continuous on $`A_I.`$

2. For $`(x,y)\in A_I\times A_I`$ we have $`\phi (n,x,y)\in A_I\cup A_{II}\cup A_{IV}`$ where $`A_{II}:=\{e^{i\beta }\,|\,\beta \in (\pi /2,\pi ]\}`$ and $`A_{IV}:=\{e^{i\beta }\,|\,\beta \in [-\pi /2,0)\}`$. To show this it is convenient to use the following substitution: $$u_n=\mathrm{tan}\frac{\alpha _n}{2}=\frac{x_n-1}{i(x_n+1)}.$$ In the $`u`$-coordinates, (26) takes the form $$u_{n+1}=F(n,u_{n-1},u_n):=\frac{(u_n+1)(u_{n-1}P_1(n,u_n)+P_2(n,u_n))}{(u_n-1)(u_{n-1}P_3(n,u_n)+P_4(n,u_n))},$$ where $$P_1(n,v)=(2n+\gamma )v^3-(2n+4+\gamma )v^2+(2n+4+\gamma )v-(2n+\gamma ),$$ $$P_2(n,v)=-(2n+\gamma )v^3+(6n+4-\gamma )v^2+(6n+4-\gamma )v-(2n+\gamma ),$$ $$P_3(n,v)=(2n+\gamma )v^3+(6n+4-\gamma )v^2-(6n+4-\gamma )v-(2n+\gamma ),$$ $$P_4(n,v)=-(2n+\gamma )v^3-(2n+4+\gamma )v^2-(2n+4+\gamma )v-(2n+\gamma ).$$ Identity (25) reads as $$u_1=\frac{(u_0+1)(\gamma u_0^2-4u_0+\gamma )}{(u_0-1)(\gamma u_0^2+4u_0+\gamma )}.$$ (27) We have to prove that for $`(u,v)\in [0,1]\times [0,1]`$, the values $`F(n,u,v)`$ lie in the interval $`[-1,+\infty ].`$ The function $`F(n,u,v)`$ is smooth on $`(0,1)\times (0,1)`$ and has no critical points in $`(0,1)\times (0,1)`$. Indeed, for critical points we have $`\frac{\partial F(n,u,v)}{\partial u}=0`$ which yields $`P_1(n,v)P_4(n,v)-P_2(n,v)P_3(n,v)=0`$ and, after some calculations, $`v=0,1,-1.`$ On the other hand, one can easily check that the values of $`F(n,u,v)`$ on the boundary of $`[0,1]\times [0,1]`$ lie in the interval $`[-1,+\infty ].`$ For $`n=0`$, using (27) and exactly the same considerations as for $`F(n,0,v)`$, one shows that $`-1\le u_1\le +\infty `$ for $`u_0\in [0,1].`$

Now let us introduce $$S_{II}(k):=\{x_0\in A_I\,|\,x_k\in A_{II},\ x_l\in A_I\ \mathrm{for}\ 0<l<k\},$$ $$S_{IV}(k):=\{x_0\in A_I\,|\,x_k\in A_{IV},\ x_l\in A_I\ \mathrm{for}\ 0<l<k\},$$ where $`x_n`$ is the solution of (23). From property 1 it follows that $`S_{II}(k)`$ and $`S_{IV}(k)`$ are open sets in the induced topology of $`A_I`$. Denote $$S_{II}=\bigcup _kS_{II}(k),\qquad S_{IV}=\bigcup _kS_{IV}(k),$$ which are open too. These sets are nonempty since $`S_{II}(1)`$ and $`S_{IV}(1)`$ are nonempty. Finally introduce $$S_I:=\{x_0\in A_I:x_n\in A_I\ \mathrm{for\ all}\ n\in 𝐍\}.$$ It is obvious that $`S_I,`$ $`S_{II}`$, and $`S_{IV}`$ are mutually disjoint. Property 2 implies $$S_I\cup S_{II}\cup S_{IV}=A_I.$$ This is impossible for $`S_I=\emptyset .`$ Indeed, the connected set $`A_I`$ cannot be covered by two open disjoint subsets $`S_{II}`$ and $`S_{IV}`$. So there exists $`x_0`$ such that the solution satisfies $`x_n\in A_I`$ for all $`n`$. From $$\phi (n,x,1)\ne i,\qquad \phi (n,x,i)\ne 1,$$ (28) it follows that (for this solution) $`x_n\ne 1,x_n\ne i.`$ This proves the theorem.
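Theorem 4 only asserts existence; the distinguished unitary orbit can also be observed numerically. The sketch below (plain Python, standard library only) iterates (26) from $`x_0=e^{i\gamma \pi /4}`$ with $`x_1`$ given by (25) and prints moduli and arguments. Since this special solution behaves like a separatrix of the recurrence, floating-point rounding will eventually push the computed orbit out of $`A_I`$; only the first steps are meaningful.

```python
import cmath, math

g = 0.8                                   # gamma, any value in (0, 2)
x0 = cmath.exp(1j * g * math.pi / 4)
x1 = x0 * (x0**2 + g - 1) / (1j * ((g - 1) * x0**2 + 1))   # eq. (25)

xs = [x0, x1]
for n in range(1, 10):                    # recurrence (26)
    xm, x = xs[n - 1], xs[n]
    num = (n / x**2 + 1j * (g - 1) / (xm * x) + (g - 1)
           + 1j * (2 * n + 1) * x / xm + (n + 1) * x**2)
    den = (n * x**2 - 1j * (g - 1) * xm * x + (g - 1)
           - 1j * (2 * n + 1) * xm / x + (n + 1) / x**2)
    xs.append(-xm * num / den)

for n, x in enumerate(xs):
    print(n, abs(x), cmath.phase(x))      # |x_n| = 1, phase in (0, pi/2)
```

For $`\gamma =1`$ the orbit is constant, $`x_n=e^{i\pi /4}`$, corresponding to the standard square grid.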
To complete the proof of Theorem 1 it is necessary to show that $`e^{i\gamma \pi /4}\in S_I.`$ This problem can be treated in terms of the method of isomonodromic deformations (see, for example, for a treatment of a similar problem). One could probably compute the asymptotics of solutions $`x_n`$ for $`n\to \infty `$ as functions of $`x_0`$ and show that the solution with $`x_0\ne e^{i\gamma \pi /4}`$ cannot lie in $`S_I.`$ The geometric origin of equation (23) allows us to prove the result using just elementary geometric arguments.

###### Proposition 4

$`S_I=\{e^{i\gamma \pi /4}\}.`$

Proof: We have shown that $`S_I`$ is not empty. Take a solution $`x_n`$ with $`x_0\in S_I`$ and consider the corresponding circle pattern (see Theorem 4 and Theorem 3). Equations (13) and (18) for $`N=M`$ make it possible to find $`R(N+iN)`$ and $`R(N+i(N+1))`$ in closed form. We now show that substituting the asymptotics of $`R(z)`$ at these points into equation (14) for $`M=N+1`$, for immersed $`f_{n,m}`$, one necessarily gets $`R(i)=\mathrm{tan}\frac{\gamma \pi }{4}`$. Indeed, formula (22) yields the following representation in terms of the $`\mathrm{\Gamma }`$-function: $$R(N+iN)=c(\gamma )\frac{\mathrm{\Gamma }(N+\gamma /2)}{\mathrm{\Gamma }(N+1-\gamma /2)},$$ where $$c(\gamma )=\frac{\gamma \mathrm{\Gamma }(1-\gamma /2)}{2\mathrm{\Gamma }(1+\gamma /2)}.$$ (29) From the Stirling formula $$\mathrm{\Gamma }(s)=\sqrt{\frac{2\pi }{s}}\left(\frac{s}{e}\right)^s\left(1+\frac{1}{12s}+O\left(\frac{1}{s^2}\right)\right)$$ (30) one obtains $$R(N+iN)=c(\gamma )N^{\gamma -1}\left(1+O\left(\frac{1}{N}\right)\right).$$ (31) Now let $`R(i)=a\mathrm{tan}\frac{\gamma \pi }{4}`$ where $`a`$ is a positive constant. Equation (18) for $`M=N,N\ne 0`$ reads $$R(N-1+iN)R(N+i(N+1))=R^2(N+iN).$$ This is equivalent to the fact that the centers of all the circles $`C(N+iN)`$ lie on a straight line. This equation yields $$R(N+i(N+1))=\left(a\mathrm{tan}\frac{\gamma \pi }{4}\right)^{(-1)^N}\left(\frac{(2(N-1)+\gamma )(2(N-3)+\gamma )(2(N-5)+\gamma )\cdots }{(2N-\gamma )(2(N-2)-\gamma )(2(N-4)-\gamma )\cdots }\right)^2.$$ Using the product representation for $`\mathrm{tan}x`$, $$\mathrm{tan}x=\frac{\mathrm{sin}x}{\mathrm{cos}x}=\frac{x\left(1-\frac{x^2}{\pi ^2}\right)\cdots \left(1-\frac{x^2}{(k\pi )^2}\right)\cdots }{\left(1-\frac{4x^2}{\pi ^2}\right)\left(1-\frac{4x^2}{(3\pi )^2}\right)\cdots \left(1-\frac{4x^2}{((2k-1)\pi )^2}\right)\cdots }$$ one arrives at $$R(N+i(N+1))=a^{(-1)^N}c(\gamma )N^{\gamma -1}\left(1+O\left(\frac{1}{N}\right)\right).$$ (32) Solving equation (14) with respect to $`R^2(z)`$ we get $$R^2(z)=G(N,M,R(z+i),R(z+1),R(z-i)):=$$ $$\frac{R(z+i)R(z+1)R(z-i)+R^2(z+1)\left(\frac{M+N}{2M}R(z-i)+\frac{M-N}{2M}R(z+i)\right)}{R(z+1)+\frac{M+N}{2M}R(z+i)+\frac{M-N}{2M}R(z-i)}.$$ (33) For $`z\in 𝐕,R(z+i)\ge 0,R(z+1)\ge 0,R(z-i)\ge 0`$, the function $`G`$ is monotonic: $$\frac{\partial G}{\partial R(z+i)}\ge 0,\qquad \frac{\partial G}{\partial R(z+1)}\ge 0,\qquad \frac{\partial G}{\partial R(z-i)}\ge 0.$$ Thus, any positive solution $`R(z),z\in 𝐕`$ of (14) must satisfy $$R^2(z)\ge G(N,M,0,R(z+1),R(z-i)).$$ Substituting the asymptotics (31) and (32) of $`R`$ into this inequality and taking the limit $`K\to \infty `$, for $`N=2K`$, we get $`a^2\ge 1`$. Similarly, for $`N=2K+1`$ one obtains $`\frac{1}{a^2}\ge 1`$, and finally $`a=1.`$ This completes the proof of the Proposition and the proof of Theorem 1.
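The closed form and the asymptotics (31) are easy to check directly (a numerical sanity check; plain Python, standard library only):

```python
from math import lgamma, exp, gamma as G

g = 1.3                                    # exponent gamma in (0, 2)
c = g * G(1 - g / 2) / (2 * G(1 + g / 2))  # constant c(gamma) of (29)

def R_diag(N):
    # R(N+iN) from the product formula (22), i.e. iterating (17) from R(0)=1
    r = 1.0
    for k in range(N):
        r *= (2 * k + g) / (2 * (k + 1) - g)
    return r

for N in (5, 50, 500):
    prod_form = R_diag(N)
    gamma_form = c * exp(lgamma(N + g / 2) - lgamma(N + 1 - g / 2))
    asymptotic = c * N ** (g - 1)          # leading term of (31)
    print(N, prod_form, gamma_form, asymptotic)
```

The first two columns agree to rounding, and the third approaches them as $`N`$ grows, in accordance with (31).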
Remark. Taking further terms from the Stirling formula (30), one gets the asymptotics for $`Z^\gamma `$ $$Z_{n,k}^\gamma =\frac{2c(\gamma )}{\gamma }\left(\frac{n+ik}{2}\right)^\gamma \left(1+O\left(\frac{1}{n^2}\right)\right),\qquad n\to \infty ,\ k=0,1,$$ (34) having a proper smooth limit. Here the constant $`c(\gamma )`$ is given by (29). Due to representation (7) the discrete conformal map $`Z^\gamma `$ can be studied by the isomonodromic deformation method. In particular, applying a technique of one can probably prove the following

Conjecture. The discrete conformal map $`Z^\gamma `$ has the following asymptotic behavior $$Z_{n,m}^\gamma =\frac{2c(\gamma )}{\gamma }\left(\frac{n+im}{2}\right)^\gamma \left(1+o\left(\frac{1}{\sqrt{n^2+m^2}}\right)\right).$$ For $`0<\gamma <2`$ this would imply the asymptotic embeddedness of $`Z^\gamma `$ at $`n,m\to \infty `$ and, combined with Theorem 1, the embeddedness (a discrete conformal map $`f_{n,m}`$ is called an embedding if the inner parts of different elementary quadrilaterals $`(f_{n,m},f_{n+1,m},f_{n+1,m+1},f_{n,m+1})`$ do not intersect) of $`Z^\gamma :𝐙_+^\mathrm{𝟐}\to 𝐂`$ conjectured in .

## 5 The discrete maps $`Z^2`$ and $`\mathrm{Log}`$. Duality

Definition 3 was given for $`0<\gamma <2.`$ For $`\gamma <0`$ or $`\gamma >2`$, the radius $`R(1+i)=\gamma /(2-\gamma )`$ of the corresponding circle patterns becomes negative and some elementary quadrilaterals around $`f_{0,0}`$ intersect. But for $`\gamma =2`$, one can renormalize the initial values of $`f`$ so that the corresponding map remains an immersion. Let us consider $`Z^\gamma `$, with $`0<\gamma <2`$, and make the following renormalization for the corresponding radii: $`R\to \frac{2-\gamma }{\gamma }R.`$ Then as $`\gamma \to 2`$ from below we have $$R(0)=\frac{2-\gamma }{\gamma }\to +0,\qquad R(1+i)=1,\qquad R(i)=\frac{2-\gamma }{\gamma }\mathrm{tan}\frac{\gamma \pi }{4}\to \frac{2}{\pi }.$$

###### Definition 4

$`Z^2:𝐙_+^\mathrm{𝟐}\to 𝐑^\mathrm{𝟐}=𝐂`$ is the solution of (1), (2) with $`\gamma =2`$ and the initial conditions $$Z^2(0,0)=Z^2(1,0)=Z^2(0,1)=0,\qquad Z^2(2,0)=1,\qquad Z^2(0,2)=-1,\qquad Z^2(1,1)=i\frac{2}{\pi }.$$ In this definition, equations (1) and (2) are understood to be regularized through multiplication by their denominators. Note that for the radii on the border one has $`R(N+iN)=N.`$

Equation (19) has the symmetry $`R\to \frac{1}{R}.`$

###### Proposition 5

Let $`R(z)`$ be a solution of the system (13), (14) for some $`\gamma `$. Then $`\stackrel{~}{R}(z)=\frac{1}{R(z)}`$ is a solution of (13), (14) with $`\stackrel{~}{\gamma }=2-\gamma .`$

This proposition reflects the fact that for any discrete conformal map $`f`$ there is a dual discrete conformal map $`f^{}`$ defined by (see ) $$f_{n+1,m}^{}-f_{n,m}^{}=-\frac{1}{f_{n+1,m}-f_{n,m}},\qquad f_{n,m+1}^{}-f_{n,m}^{}=\frac{1}{f_{n,m+1}-f_{n,m}}.$$ (35) Obviously this transformation preserves the kite form of elementary quadrilaterals and therefore is well-defined for Schramm’s circle patterns. The smooth limit of the duality (35) is $$(f^{})^{}=\frac{1}{f^{}}.$$ The dual of $`f(z)=z^2`$ is, up to a constant, $`f^{}(z)=\mathrm{log}z.`$ Motivated by this observation, we define the discrete logarithm as the discrete map dual to $`Z^2`$, i.e. the map corresponding to the circle pattern with radii $$R_{\mathrm{Log}}(z)=1/R_{Z^2}(z),$$ where $`R_{Z^2}`$ are the radii of the circles for $`Z^2.`$ Here one has $`R_{\mathrm{Log}}(0)=\infty `$, i.e. the corresponding circle is a straight line.
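Both limits in the renormalization above are immediate to check numerically (a sanity check only; plain Python assumed):

```python
from math import tan, pi

for g in (1.9, 1.99, 1.999):              # gamma -> 2 from below
    s = (2 - g) / g                       # renormalization factor (2-gamma)/gamma
    R0 = s                                # renormalized R(0) -> +0
    R1i = s * (g / (2 - g))               # renormalized R(1+i), identically 1
    Ri = s * tan(g * pi / 4)              # renormalized R(i) -> 2/pi
    print(g, R0, R1i, Ri)
print("2/pi =", 2 / pi)
```

Together with $`R(N+iN)=N`$ on the border, the duality $`R_{\mathrm{Log}}=1/R_{Z^2}`$ then gives $`R_{\mathrm{Log}}(N+iN)=1/N`$, with the degenerate circle of infinite radius at $`z=0`$ appearing as the straight line mentioned above.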
The corresponding constraint (2) can also be derived as a limit. Indeed, consider the map $`g=\frac{2-\gamma }{\gamma }Z^\gamma -\frac{2-\gamma }{\gamma }.`$ This map satisfies (1) and the constraint $$\gamma \left(g_{n,m}+\frac{2-\gamma }{\gamma }\right)=2n\frac{(g_{n+1,m}-g_{n,m})(g_{n,m}-g_{n-1,m})}{(g_{n+1,m}-g_{n-1,m})}+2m\frac{(g_{n,m+1}-g_{n,m})(g_{n,m}-g_{n,m-1})}{(g_{n,m+1}-g_{n,m-1})}.$$ Keeping in mind the limit procedure used to determine $`Z^2`$, it is natural to define the discrete analogue of $`\mathrm{log}z`$ as the limit of $`g`$ as $`\gamma \to +0`$. The corresponding constraint becomes $$1=n\frac{(g_{n+1,m}-g_{n,m})(g_{n,m}-g_{n-1,m})}{(g_{n+1,m}-g_{n-1,m})}+m\frac{(g_{n,m+1}-g_{n,m})(g_{n,m}-g_{n,m-1})}{(g_{n,m+1}-g_{n,m-1})}.$$ (36)

###### Definition 5

$`\mathrm{Log}`$ is the map $`\mathrm{Log}:𝐙_+^\mathrm{𝟐}\to 𝐑^\mathrm{𝟐}=\overline{𝐂}`$ satisfying (1) and (36) with the initial conditions $$\mathrm{Log}(0,0)=\infty ,\qquad \mathrm{Log}(1,0)=0,\qquad \mathrm{Log}(0,1)=\mathrm{i}\pi ,$$ $$\mathrm{Log}(2,0)=1,\qquad \mathrm{Log}(0,2)=1+\mathrm{i}\pi ,\qquad \mathrm{Log}(1,1)=\mathrm{i}\frac{\pi }{2}.$$ The circle patterns corresponding to the discrete conformal mappings $`Z^2`$ and $`\mathrm{Log}`$ were conjectured by O. Schramm and R. Kenyon (see ), but it was not proved that they are immersed.

###### Proposition 6

The discrete conformal maps $`Z^2`$ and $`\mathrm{Log}`$ are immersions.

Proof: Consider the discrete conformal map $`\frac{2-\gamma }{\gamma }Z^\gamma `$ with $`0<\gamma <2.`$ The corresponding solution $`x_n`$ of (23) is a continuous function of $`\gamma `$. So there is a limit as $`\gamma \to 2-0`$ of this solution with $`x_n\in A_I`$, $`x_0=i`$, and $`x_1=\frac{-1+i\pi /2}{1+i\pi /2}\in A_I`$. The solution $`x_n`$ of (23) with the property $`x_n\in A_I`$ satisfies $`x_n\ne 1`$, $`x_n\ne i`$ for $`n>0`$ (see (28)). Now, reasoning as in the proof of Proposition 3, we get that $`Z^2`$ is an immersion. The only difference is that $`R(0)=0`$. The circle $`C(0)`$ lies on the border of $`𝐕`$, so Schramm’s result (see ) claiming that the corresponding circle pattern is immersed still applies. $`\mathrm{Log}`$ corresponds to the dual circle pattern, with $`R_{\mathrm{Log}}(z)=1/R_{Z^2}(z)`$, which implies that $`\mathrm{Log}`$ is also an immersion.

## 6 Discrete maps $`Z^\gamma `$ with $`\gamma \notin [0,2]`$

Starting with $`Z^\gamma ,\gamma \in [0,2]`$, defined in the previous sections, one can easily define $`Z^\gamma `$ for arbitrary $`\gamma `$ by applying some simple transformations of discrete conformal maps and Schramm’s circle patterns. Denote by $`S_\gamma `$ the Schramm circle pattern associated to $`Z^\gamma ,\gamma \in (0,2]`$. Applying the inversion of the complex plane $`z\to \tau (z)=1/z`$ to $`S_\gamma `$ one obtains a circle pattern $`\tau S_\gamma `$, which is also of Schramm’s type. It is natural to define the discrete conformal map $`Z^{-\gamma },\gamma \in (0,2]`$, through the centers and intersection points of circles of $`\tau S_\gamma `$. On the other hand, constructing the dual Schramm circle pattern (see Proposition 5) for $`Z^{-\gamma }`$ we arrive at a natural definition of $`Z^{2+\gamma }`$. Intertwining the inversion and the dualization described above, one constructs circle patterns corresponding to $`Z^\gamma `$ for any $`\gamma `$. To define immersed $`Z^\gamma `$ one should discard some points near $`(n,m)=(0,0)`$ from the definition domain. To give a precise description of the corresponding discrete conformal maps in terms of the constraint (2) and initial data for arbitrarily large $`\gamma `$, a more detailed consideration is required.
To any Schramm circle pattern $`S`$ there corresponds a one-complex-parameter family of discrete conformal maps described in . Take an arbitrary point $`P_{\mathrm{\infty }}\in 𝐂\cup \{\mathrm{\infty }\}`$. Reflect it through all the circles of $`S`$. The resulting extended lattice is a discrete conformal map and is called a central extension of $`S`$. As a special case, choosing $`P_{\mathrm{\infty }}=\mathrm{\infty }`$, one obtains the centers of the circles, and thus the discrete conformal map considered in Section 3. Composing the discrete map $`Z^\gamma :𝐙_+^\mathrm{𝟐}\to 𝐂`$ with the inversion $`\tau (z)=1/z`$ of the complex plane one obtains the discrete conformal map $`G(n,m)=\tau (Z^\gamma (n,m))`$ satisfying the constraint (2) with the parameter $`\gamma _G=-\gamma `$. This map is the central extension of $`\tau S_\gamma `$ corresponding to $`P_{\mathrm{\infty }}=0`$. Let us define $`Z^{-\gamma }`$ as the central extension of $`\tau S_\gamma `$ corresponding to $`P_{\mathrm{\infty }}=\mathrm{\infty }`$, i.e. the extension described in Section 3. The map $`Z^{-\gamma }`$ defined in this way also satisfies the constraint (2) due to the following

###### Lemma 3

Let $`S`$ be a Schramm circle pattern and $`f^{\mathrm{\infty }}:𝐙_+^2\to 𝐂`$ and $`f^0:𝐙_+^2\to 𝐂`$ be its two central extensions corresponding to $`P_{\mathrm{\infty }}=\mathrm{\infty }`$ and $`P_{\mathrm{\infty }}=0`$, respectively. Then $`f^{\mathrm{\infty }}`$ satisfies (2) if and only if $`f^0`$ satisfies (2).

Proof: If $`f^{\mathrm{\infty }}`$ (or $`f^0`$) satisfies (2), then the points $`f_{n,0}^{\mathrm{\infty }}`$ (respectively $`f_{n,0}^0`$) lie on a straight line, and so do $`f_{0,m}^{\mathrm{\infty }}`$ (respectively $`f_{0,m}^0`$). A straightforward computation shows that $`f_{n,0}^{\mathrm{\infty }}`$ and $`f_{n,0}^0`$ satisfy (2) simultaneously, and the same statement holds for $`f_{0,m}^{\mathrm{\infty }}`$ and $`f_{0,m}^0`$. Since (1) is compatible with (2), $`f^0`$ (respectively $`f^{\mathrm{\infty }}`$) satisfies (2) for any $`n,m\ge 0`$.

Let us now describe $`Z^K`$ for $`K\in 𝐍`$ as special solutions of (1), (2).

###### Definition 6

$`Z^K:𝐙_+^\mathrm{𝟐}\to 𝐑^\mathrm{𝟐}=𝐂`$, where $`K\in 𝐍`$, is the solution of (1), (2) with $`\gamma =K`$ and the initial conditions $$Z^K(n,m)=0\ \mathrm{for}\ n+m\le K-1,\ (n,m)\in 𝐙_+^\mathrm{𝟐},$$ (37) $$Z^K(K,0)=1,$$ (38) $$Z^K(K-1,1)=i\frac{2^{K-1}\mathrm{\Gamma }^2(K/2)}{\pi \mathrm{\Gamma }(K)}.$$ (39) The initial condition (37) corresponds to the identity $$\frac{d^kz^K}{dz^k}(z=0)=0,\qquad k<K,$$ in the smooth case. For odd $`K=2N+1`$, condition (39) reads $$Z^{2N+1}(2N,1)=i\frac{(2N-1)!!}{(2N)!!},$$ and follows from constraint (2). For even $`K=2N`$, any value of $`Z^K(K-1,1)`$ is compatible with (2). In this case formula (39) can be derived from the asymptotics $$\underset{N\to \infty }{lim}\frac{R(N+iN)}{R(N+i(N+1))}=1$$ and reads $$Z^{2N}(2N-1,1)=i\frac{2}{\pi }\frac{(2N-2)!!}{(2N-1)!!}.$$ We conjecture that $`Z^K`$ so defined are immersed. Note that for odd integer $`K=2N+1`$, the discrete $`Z^{2N+1}`$ of Definition 6 is slightly different from the one previously discussed in this section. Indeed, by intertwining the dualization and the inversion (as described above) one can define two different versions of $`Z^{2N+1}`$. One is obtained from the circle pattern corresponding to discrete $`Z(n,m)=n+im`$ with centers in $`n+im,n+m\equiv 0\ (\mathrm{mod}\ 2)`$. The second one comes from Definition 6 and is obtained by the same procedure from $`Z(n,m)=n+im`$, but in this case the centers of the circles of the pattern are chosen in $`n+im,n+m\equiv 1\ (\mathrm{mod}\ 2)`$.
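The constant in (39) reduces to the two double-factorial expressions quoted above; this is quick to confirm (plain Python, standard library):

```python
from math import gamma, pi

def c39(K):
    # modulus of Z^K(K-1,1) according to (39)
    return 2 ** (K - 1) * gamma(K / 2) ** 2 / (pi * gamma(K))

def dfact(n):
    # double factorial n!!, with 0!! = 1
    r = 1
    while n > 1:
        r, n = r * n, n - 2
    return r

for N in range(1, 8):
    assert abs(c39(2 * N + 1) - dfact(2 * N - 1) / dfact(2 * N)) < 1e-12
    assert abs(c39(2 * N) - (2 / pi) * dfact(2 * N - 2) / dfact(2 * N - 1)) < 1e-12
print("(39) matches both double-factorial forms")
```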
These two versions of $`Z^3`$ are presented in Figure 8. The left figure shows $`Z^3`$ obtained through Definition 6. Note that this map is immersed, in contrast to the right lattice of Figure 8, which has overlapping quadrilaterals at the origin (see Figure 9).

## 7 Acknowledgements

The authors would like to thank T. Hoffman for his help in finding discrete $`Z^\gamma `$, and for producing the figures for this paper. We also thank U. Pinkall and Y. Suris for useful discussions.

## Appendix A Proof of Theorem 2.

Compatibility. Suppose that $`f_{n,m}`$ is a solution of (1) which satisfies (6) for $`(n,0)`$ and $`(0,m)`$ for all $`n,m.`$ A direct, but very long, computation (the authors used the Mathematica computer algebra system to perform it) shows that if the constraint (6) holds for 3 vertices of an elementary quadrilateral, it holds for the fourth vertex. Inductive reasoning yields that $`f_{n,m}`$ satisfies (6) for any $`n,m.`$ So the constraint (6) is compatible with (1).

Necessity. Now let $`f_{n,m}`$ be a solution to the system (1), (6). Define $`\mathrm{\Psi }_{0,0}(\lambda )`$ as a nontrivial solution of the linear equation (7) with $`A(\lambda )`$ given by Theorem 2. Equations (4) determine $`\mathrm{\Psi }_{n,m}(\lambda )`$ for any $`n,m.`$ By direct computation, one can check that the compatibility conditions of (7) and (4), $$U_{n,m+1}V_{n,m}=V_{n+1,m}U_{n,m},$$ $$\frac{d}{d\lambda }U_{n,m}=A_{n+1,m}U_{n,m}-U_{n,m}A_{n,m},$$ (40) $$\frac{d}{d\lambda }V_{n,m}=A_{n,m+1}V_{n,m}-V_{n,m}A_{n,m},$$ are equivalent to (1), (6).

Sufficiency. Conversely, let $`\mathrm{\Psi }_{n,m}(\lambda )`$ satisfy (7) and (4) with some $`\lambda `$-independent matrices $`B_{n,m}`$, $`C_{n,m}`$, $`D_{n,m}`$. From (11) it follows that $`\mathrm{tr}B_{n,m}=n,`$ $`\mathrm{tr}C_{n,m}=m`$. Equations (40) are equivalent to equations for their principal parts at $`\lambda =0`$, $`\lambda =1`$, $`\lambda =-1`$, $`\lambda =\infty `$: $$D_{n+1,m}\left(\begin{array}{cc}1& u_{n,m}\\ 0& 1\end{array}\right)=\left(\begin{array}{cc}1& u_{n,m}\\ 0& 1\end{array}\right)D_{n,m},$$ (45) $$D_{n,m+1}\left(\begin{array}{cc}1& v_{n,m}\\ 0& 1\end{array}\right)=\left(\begin{array}{cc}1& v_{n,m}\\ 0& 1\end{array}\right)D_{n,m},$$ (50) $$B_{n+1,m}\left(\begin{array}{cc}1& u_{n,m}\\ \frac{1}{u_{n,m}}& 1\end{array}\right)=\left(\begin{array}{cc}1& u_{n,m}\\ \frac{1}{u_{n,m}}& 1\end{array}\right)B_{n,m},$$ (55) $$B_{n,m+1}\left(\begin{array}{cc}1& v_{n,m}\\ \frac{1}{v_{n,m}}& 1\end{array}\right)=\left(\begin{array}{cc}1& v_{n,m}\\ \frac{1}{v_{n,m}}& 1\end{array}\right)B_{n,m},$$ (60) $$C_{n+1,m}\left(\begin{array}{cc}1& u_{n,m}\\ -\frac{1}{u_{n,m}}& 1\end{array}\right)=\left(\begin{array}{cc}1& u_{n,m}\\ -\frac{1}{u_{n,m}}& 1\end{array}\right)C_{n,m},$$ (65) $$C_{n,m+1}\left(\begin{array}{cc}1& v_{n,m}\\ -\frac{1}{v_{n,m}}& 1\end{array}\right)=\left(\begin{array}{cc}1& v_{n,m}\\ -\frac{1}{v_{n,m}}& 1\end{array}\right)C_{n,m},$$ (70) $$(D_{n+1,m}-B_{n+1,m}-C_{n+1,m})\left(\begin{array}{cc}0& 0\\ 1& 0\end{array}\right)-\left(\begin{array}{cc}0& 0\\ 1& 0\end{array}\right)(D_{n,m}-B_{n,m}-C_{n,m})=\left(\begin{array}{cc}0& 0\\ 1& 0\end{array}\right),$$ (77) $$(D_{n,m+1}-B_{n,m+1}-C_{n,m+1})\left(\begin{array}{cc}0& 0\\ 1& 0\end{array}\right)-\left(\begin{array}{cc}0& 0\\ 1& 0\end{array}\right)(D_{n,m}-B_{n,m}-C_{n,m})=\left(\begin{array}{cc}0& 0\\ 1& 0\end{array}\right).$$ (84) From (55), (60) and $`\mathrm{tr}B_{n,m}=n`$, it follows that $$B_{n,m}=\frac{n\varphi }{u_{n,m}+u_{n-1,m}}\left(\begin{array}{cc}u_{n,m}& u_{n,m}u_{n-1,m}\\ 1& u_{n-1,m}\end{array}\right)-\frac{\varphi }{2}I.$$ Similarly, (65), (70) and $`\mathrm{tr}C_{n,m}=m`$ imply $$C_{n,m}=\frac{m\psi }{v_{n,m}+v_{n,m-1}}\left(\begin{array}{cc}v_{n,m}& v_{n,m}v_{n,m-1}\\ 1& v_{n,m-1}\end{array}\right)-\frac{\psi }{2}I.$$ Here, $`\varphi `$ and $`\psi `$ are constants independent of $`n,m.`$ The function $`a(\lambda )`$ in (11), independent of $`n`$ and $`m`$, can be normalized to vanish identically, i.e. $`\mathrm{tr}D_{n,m}=0.`$ Substitution of $$D=\left(\begin{array}{cc}a& b\\ c& -a\end{array}\right)$$ into equations (45), (50) yields $$c_{n+1,m}=c_{n,m},\qquad c_{n,m+1}=c_{n,m},$$ (85) $$a_{n+1,m}=a_{n,m}-u_{n,m}c_{n,m},\qquad a_{n,m+1}=a_{n,m}-v_{n,m}c_{n,m},$$ (86) $$b_{n+1,m}=b_{n,m}+u_{n,m}(a_{n,m}+a_{n+1,m}),\qquad b_{n,m+1}=b_{n,m}+v_{n,m}(a_{n,m}+a_{n,m+1}).$$ (87) Thus $`c`$ is a constant independent of $`n,m.`$ Equations (86) can be easily integrated: $$a_{n,m}=-cf_{n,m}+\theta $$ where $`\theta `$ is independent of $`n,m`$ (recall that $`u_{n,m}=f_{n+1,m}-f_{n,m},`$ $`v_{n,m}=f_{n,m+1}-f_{n,m}`$). Substituting this expression into (87) and integrating we get $$b_{n,m}=-cf_{n,m}^2+2\theta f_{n,m}+\mu ,$$ for some constant $`\mu `$. Now (77) and (84) imply $$b_{n,m}=-\frac{n\varphi }{u_{n,m}+u_{n-1,m}}u_{n,m}u_{n-1,m}-\frac{m\psi }{v_{n,m}+v_{n,m-1}}v_{n,m}v_{n,m-1},$$ which is equivalent to the constraint (6) after identifying $`c=\beta /2`$, $`\theta =-\gamma /4`$, $`\mu =-\delta /2`$.

## Appendix B

The proof of Lemma 2 uses the following technical

###### Lemma 4

For positive $`R`$, the following hold: 1) equations (14) and (13) at $`z`$ and equation (13) at $`z-i`$ imply (18) at $`z+1`$; 2) equation (18) at $`N+iN`$ and equations (13) at $`N+iN`$ and at $`N-1+iN`$ imply $$(N+M)(R(z)^2-R(z+1)R(z-i))(R(z-i)+R(z-1))+$$ $$(N-M)(R(z)^2-R(z-i)R(z-1))(R(z+1)+R(z-i))=0,$$ (88) at $`z=N+iM`$, for $`M=N+1`$; 3) equations (18) and (88) at $`z=N+iM,N\ne \pm M`$, imply (19) at $`z`$; 4) equations (18) and (19) at $`z=N+iM,N\ne \pm M`$, imply $$(N+M)(R(z)^2-R(z+i)R(z-1))(R(z+1)+R(z+i))+$$ $$(N-M)(R(z)^2-R(z+1)R(z+i))(R(z+i)+R(z-1))=0,$$ (89) and (14) at $`z`$; 5) equations (89) and (13) at $`z`$ and equation (13) at $`z-1`$ imply (88) at $`z+i`$.

Proof is a direct computation. Let us check, for example, 3). Equations (18) and (88) read $$\xi (R(z)^2-R(z+i)R(z-1))+\eta (R(z+i)+R(z-1))=0,$$ $$\xi (R(z)^2-R(z+1)R(z-i))-\eta (R(z+1)+R(z-i))=0,$$ where $`\xi =(N+M)(R(z-1)+R(z-i)),`$ $`\eta =(M-N)(R(z)^2-R(z-1)R(z-i)).`$ Since $`\xi \ne 0`$ we get $$(R(z)^2-R(z+i)R(z-1))(R(z+1)+R(z-i))+$$ $$(R(z)^2-R(z+1)R(z-i))(R(z+i)+R(z-1))=0,$$ which is equivalent to (19).

Proof of Lemma 2. By symmetry reasons it is enough to prove the Lemma for $`N\ge 0.`$ Let us prove it by induction on $`N.`$ For $`N=0`$, identity (13) yields $`R(1+iM)=R(-1+iM).`$ Equation (14) at $`z=iM`$ implies (18). Now equations (14) and (18) at $`z=iM`$ imply (19). Induction step $`N\to N+1`$: The claim 1) of Lemma 4 implies (18) at $`z=N+1+iM.`$ The claims 2), 3), 4) yield equations (19) and (14) at $`z=N+1+i(N+2).`$ Now using 5), 3), 4) of Lemma 4 one gets, by induction on $`L`$, equations (19) and (14) at $`z=N+1+i(N+L+1)`$ for any $`L\in 𝐍.`$
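Item 3) above ultimately rests on a polynomial identity relating the eliminated combination to (19); it can be confirmed symbolically in a few lines (Python with SymPy assumed; the single letters abbreviate the four neighbouring radii):

```python
import sympy as sp

R, A, B, C, D = sp.symbols('R A B C D', positive=True)
# A = R(z+i), B = R(z-1), C = R(z+1), D = R(z-i), R = R(z)
combo = (R**2 - A*B)*(C + D) + (R**2 - C*D)*(A + B)   # combination from item 3)
rhs19 = (1/A + 1/B + 1/C + 1/D)*A*B*C*D / (A + B + C + D)
assert sp.simplify(combo - (A + B + C + D)*(R**2 - rhs19)) == 0
```

So the vanishing of the combination obtained after eliminating $`\eta `$ is precisely equation (19).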
# Enhancement of Stochastic Resonance in distributed systems due to a selective coupling ## Abstract Recent massive numerical simulations have shown that the response of a “stochastic resonator” is enhanced as a consequence of spatial coupling. Similar results have been analytically obtained in a reaction-diffusion model, using nonequilibrium potential techniques. We now consider a field-dependent diffusivity and show that the selectivity of the coupling is more efficient for achieving stochastic-resonance enhancement than its overall value in the constant-diffusivity case. The phenomenon of stochastic resonance (SR)—namely, the enhancement of the output signal-to-noise ratio (SNR) caused by injection of an optimal amount of noise into a periodically driven nonlinear system—stands as one of the most puzzling and promising cooperative effects arising from the interplay between deterministic and random dynamics in a nonlinear system. The broad range of phenomena—indeed drawn from almost every field in scientific endeavor—for which this mechanism can offer an explanation has been put in evidence by many reviews and conference proceedings, Ref. being the most recent and comprehensive one, from which one can scan the state of the art. Most phenomena that could possibly be explained by SR occur in extended systems: for example, diverse experiments are being carried out to explore the role of SR in sensory and other biological functions or in chemical systems . Notwithstanding this fact, the overwhelming majority of the studies made up to now are based on zero-dimensional systems, while most of the features of this phenomenon that are peculiar to the case of extended systems—or stochastically resonating media (SRM)—still remain largely to be explored. Particularly interesting numerical simulations on arrays of coupled nonlinear oscillators have been recently reported , indicating that the coupling between these stochastic resonators enhances the response of the array, which exhibits moreover a higher degree of synchronization. This effect has its counterpart in the continuum, as a study on the overdamped continuous limit of a $`\varphi ^4`$ field theory shows . Recently—by exploiting the previous knowledge of the nonequilibrium potential (NEP) for a bistable reaction-diffusion (RD) model —one of us has shown analytically that the SNR increases with diffusivity in the range explored . While considering a constant diffusion coefficient $`D`$ is a standard approach, it is not the most general one: it is reasonable to expect that the reported enhancement in the SNR by the effect of diffusion could depend in a more detailed way on $`D`$. In this regard, see for instance . In this letter we consider the more realistic case of a field-dependent diffusion coefficient $`D(\varphi (x,t))`$, and show that it causes an enhancement of the SNR still larger than the one associated with a homogeneous increase of its amplitude. The model under study—a one-dimensional, one-component RD model describing a system that undergoes an electrothermal instability —can be regarded as the continuous limit of the coupled system studied by Lindner et al . The field $`\varphi (x,t)`$ might describe the (time-dependent) temperature profile in the “hot-spot model” of superconducting microbridges . This model can be also regarded as a piecewise-linear version of the space-dependent Schlögl model for an autocatalytic chemical reaction, and that for the “ballast resistor”, describing the so-called “barretter effect” . 
As a matter of fact, since in the ballast resistor the thermal conductivity is a function of the energy density, the resulting equation for the temperature field includes a temperature-dependent diffusion coefficient in a natural way . Pointers to other contexts in which a description containing a field-dependent diffusivity becomes inescapable have been included in Refs.. By adequate rescaling of the field, space-time variables and parameters, we get a dimensionless time-evolution equation for the field $`\varphi (x,t)`$ $$_t\varphi (x,t)=_x\left(D(\varphi )_x\varphi \right)+f(\varphi )$$ (1) where $`f(\varphi )=\varphi +\theta (\varphi \varphi _c)`$, $`\theta (x)`$ is Heaviside’s step function. All the effects of the parameters that keep the system away of equilibrium (such as the electric current in the electrothermal devices or some external reactant concentration in chemical models) are included in $`\varphi _c`$. Moreover, since the value of the field $`\varphi (x,t)`$ corresponds in these models to the deviations with respect to e.g. a reference temperature $`T_B>0`$ (the temperature of the bath) in the ballast resistor or to a reference concentration $`\rho _0`$ in the Schlögl model, it is clear that—up to a given strict limit (i.e. $`\varphi =T_B`$ for the ballast resistor)—some negative values of $`\varphi (x,t)`$ are allowed. As was done for the reaction term , a simple choice that retains however the qualitative features of the system is to consider the following dependence of the diffusion term on the field variable $$D(\varphi )=D_0(1+h\theta (\varphi \varphi _c)),$$ (2) For simplicity, here we choose the same threshold $`\varphi _c`$ for the reaction term and the diffusion coefficient. The more general situation is left for a forthcoming work . We assume the system to be limited to a bounded domain $`x[L,L]`$ with Dirichlet boundary conditions at both ends, i.e. $`\varphi (\pm L,t)=0`$. The piecewise-linear approximation of the reaction term in Eq.(1)—which mimicks a cubic polynomial—was chosen in order to find analytical expressions for its stationary spatially-symmetric solutions. In addition to the trivial solution $`\varphi _0(x)=0`$ (which is linearly stable and exists for the whole range of parameters) we find another linearly stable nonhomogeneous structure $`\varphi _s(x)`$—presenting an excited central zone (where $`\varphi _s(x)>\varphi _c`$) for $`x_cxx_c`$—and a similar unstable structure $`\varphi _u(x)`$, which exhibits a smaller excited central zone. The form of these patterns is analogous to what has been obtained in previous related works , as is shown in Fig. 1. The difference is that in the present case $`d\varphi /dx|_{x_c}`$ is discontinuous and the area of the central zone depends on $`h`$. The indicated patterns are extrema of the NEP, which—among other properties that we shall be using presently—is a Lyapunov functional for the deterministic system introduced thus far. In fact, the unstable pattern $`\varphi _u(x)`$ is a saddle-point of this functional, separating the attractors $`\varphi _0(x)`$ and $`\varphi _s(x)`$ . The notion of a NEP has been thoroughly studied, mainly by Graham and his collaborators . Loosely speaking, it is an extension to non-equilibrium situations of the familiar notion of (equilibrium) thermodynamic potential. For the case of a field-dependent diffusion coefficient $`D(\varphi (x,t))`$ as described by Eq. 
(1), it reads $$[\varphi ]=_L^{+L}\left\{_0^\varphi D(\varphi ^{})f(\varphi ^{})𝑑\varphi ^{}+\frac{1}{2}\left(D(\varphi )\frac{\varphi }{x}\right)^2\right\}𝑑x.$$ (3) Given that $`_t\varphi =(1/D(\varphi ))\delta /\delta \varphi `$ one finds $`\dot{}=\left(\delta /\delta \varphi \right)^2𝑑x0`$, thus warranting the Lyapunov-functional property. This NEP functional offers the possibility of studying not just the linear but also the nonlinear—in the case at hand the global—stability of the patterns, following its changes as the parameters of the model are varied . For a given threshold value $`\varphi _c^{}`$, both wells corresponding in a representation of the NEP to the linearly stable states have the same depth (i.e. both states are equally stable). Figure 2 shows the dependence of $`[\varphi ]`$ on the parameter $`\varphi _c`$. As in previous cases, we analyze only the neighborhood of $`\varphi _c=\varphi _c^{}`$ . Here we also consider the neighborhood of $`h=0`$, where the main trends of the effect can be captured. Now, with the aim of studying SR, we introduce a weak signal that modulates the potential $``$ around the situation in which the two wells have the same depth. This is accomplished by allowing the parameter $`\varphi _c`$ to oscillate around $`\varphi _c^{}`$: $`\varphi _c(t)=\varphi _c^{}+\delta \varphi _c\mathrm{cos}(\mathrm{\Omega }t+\phi )`$, with $`\delta \varphi _c\varphi _c(t)`$. We also introduce in Eq.(1) a fluctuating term $`\xi (x,t)`$, which we model (as is customary) as an additive Gaussian white-noise source with zero mean value and a correlation function $`\xi (x,t)\xi (x^{},t^{})=2\gamma \delta (tt^{})\delta (xx^{}),`$ thus yielding a stochastic partial differential equation for the random field $`\varphi (x,t)`$. The parameter $`\gamma `$ denotes the noise strength . As in previous works , we exploit a generalization to extended systems of the Kramers-like result for the evaluation of the decay time or “mean-first-passage time” $`\tau `$ . Here, those results are extended to the case of field-dependent diffusivity, yielding $$\tau =\tau _0\mathrm{exp}\left\{\frac{𝒲[\varphi ,\varphi _i]}{2\gamma }\right\}.$$ (4) The functional $`𝒲[\varphi ,\varphi _i]`$ ($`\varphi _i`$ indicates the initial metastable state, which at each instant may be either $`\varphi _0`$ or $`\varphi _s`$), that is the solution of a Hamilton-Jacobi-like equation, has the following expression $`𝒲[\varphi ,\varphi _i]={\displaystyle _L^{+L}}𝑑x\left\{\left({\displaystyle \frac{D(\varphi )}{2}}\left({\displaystyle \frac{\varphi }{x}}\right)^2U(\varphi )\right)\left({\displaystyle \frac{D(\varphi _i)}{2}}\left({\displaystyle \frac{\varphi _i}{x}}\right)^2U(\varphi _i)\right)\right\},`$ (5) with $`U(\varphi )=_0^\varphi 𝑑\varphi ^{}f(\varphi ^{})`$. The prefactor $`\tau _0`$ in Eq.(4) is essentially determined by the curvature of the NEP $`[\varphi ,\varphi _c]`$ at its extrema. The calculation of the SNR proceeds, for the spatially extended problem, through the evaluation of the space-time correlation function $`\varphi (y,t)\varphi (y^{},t^{})`$. To do that we use a simplified point of view, based on the two-state approach , which allows us to apply some known results almost directly. To proceed with the calculation of the correlation function, we need to evaluate the transition probabilities $`W_\pm \tau ^1`$, which appear in the associated master equation. 
For small $`\delta \varphi _c,`$ $`𝒲[\varphi ,\varphi _i]𝒲[\varphi ,\varphi _i]_{\varphi _c^{}}+\delta \varphi _c\left({\displaystyle \frac{𝒲[\varphi ,\varphi _i]}{\varphi _c}}\right)_{\varphi _c^{}}\mathrm{cos}(\mathrm{\Omega }t+\phi ).`$ Solving such a master equation up to first order in $`\delta \varphi _c`$ it is possible to evaluate the correlation function. Its double Fourier transform, the generalized susceptibility $`S(\kappa ,\omega )`$, factorizes in this approach, and the relevant term becomes a function of $`\omega `$ only (the corresponding expressions are omitted, see for details). It is worth noting that many of the results exposed here (e.g. the profiles of the stationary patterns and the corresponding values of the NEP) are exact. The only approximations involved in the calculation of the SNR are the standard ones, namely the Kramers-like expression in Eq.(4) and the two-level approximation used for the evaluation of the correlation function . Using the definition from Ref. for the SNR at the excitation frequency (here indicated by $`R`$), the final result is $$R(\frac{\mathrm{\Lambda }}{\tau _0\gamma })^2\mathrm{exp}(𝒲[\varphi ,\varphi _i]_{\varphi _c^{}}/\gamma ),$$ (6) where $`\mathrm{\Lambda }=(d𝒲[\varphi ,\varphi _i]/d\varphi _c)_{\varphi _c^{}}\delta \varphi _c`$, and $`\tau _0`$ is given by the asymptotically dominant linear stability eigenvalues: $`\tau _0=2\pi (\left|\lambda ^{un}\right|\overline{\lambda ^{st}})^{1/2}`$ ($`\lambda ^{un}`$ is the unstable eigenvalue around $`\varphi _u`$ and $`\overline{\lambda ^{st}}`$ is the average of the smallest eigenvalues around $`\varphi _0`$ and $`\varphi _s`$). Equation (6) is analogous to the results in zero-dimensional systems, but here $`\mathrm{\Lambda }`$, $`\tau _0`$ and $`𝒲[\varphi ,\varphi _i]_{\varphi _c^{}}`$ contain all the information regarding the spatially extended character of the system. In Fig.3 we depict the dependence of $`R`$ on the noise intensity $`\gamma `$, for several (positive) values of $`h`$. These curves show the typical maximum that has become the fingerprint of the stochastic resonance phenomenon. Figure 4 is a plot of the value $`R_{max}`$ of these maxima as a function of $`h`$. The dramatic increase of $`R_{max}`$, of several $`dB`$ for a small positive variation of $`h`$, is apparent and shows the strong effect that the selective coupling (or field-dependent diffusivity) has on the response of the system. The present prediction prompts to devise experiments (for instance, through electronic setups) as well as numerical simulations taking into account the indicated selective coupling. This result could be of relevance for technological applications such as signal detection and image recognition, as well as for the solution of some puzzles in biology (mammalian sensory systems, ionic channels in cells). The present form of analysis is being extended to (the bistable regime of) multicomponent models of the activator-inhibitor type since—in addition to their applications to systems of chemical (e.g. Bonhoffer-Van der Pol model) and biological (e.g. FitzHugh-Nagumo model) origins—these models are related to spatio-temporal synchronization problems. An effective treatment of models of this type gives rise to a non-local coupling, which would compete with the nearest neighbour coupling $`D(\varphi )`$ presented here . Acknowledgments: The authors thank J.I.Deza for his help with the numerical calculations and preparation of the figures and V. Grunfeld for a revision of the manuscript. 
HSW and RD thank the ICTP for the kind hospitality extended to them during their stay. Partial support from the Argentine agencies CONICET (grant PIP 4953/97) and ANPCyT (grant 03-00000-00988) is also acknowledged.
no-problem/9909/astro-ph9909223.html
ar5iv
text
# $`UV`$ (2000 Å) luminosity function of Coma cluster galaxies ## 1 Introduction In spite of the darkness of the sky at ultraviolet wavelengths (O’Connell 1987) and of the crucial role played by the $`UV`$ emission in the determination of the metal production rate, the $`UV`$ band is still one of the least explored spectral regions. This is even more true for objects in the local Universe, because non-redshifted $`UV`$ emission can be observed only from space. In both the single stellar population and continuous star formation scenarios, the $`UV`$ luminosity of late–type galaxies appears to be largely dominated by young massive stars, thus implying a direct link between $`UV`$ luminosities and star formation rates (e.g. Buzzoni 1989). In recent years, the understanding that samples of galaxies at very high redshift can be selected from multicolor deep images (such as the Hubble Deep Field) has renewed the interest in $`UV`$ observations, allowing tentative determinations of the $`UV`$ luminosity function (hereafter LF) for galaxies at $`z>2`$ (Steidel et al. 1999, Pozzetti et al. 1998). In the local Universe, available samples of $`UV`$ data for normal galaxies are generally not suitable for these types of studies, due either to the lack of well defined selection criteria (see for instance the IUE sample reviewed in Longo & Capaccioli 1992) or to the optical selection of the objects. Exceptions to this rule are the samples produced by the FOCA experiment (Milliard et al. 1991), which made it possible to derive, among various other quantities, the local field $`UV`$ luminosity function (Treyer et al. 1998), and to constrain the $`UV`$ luminosity function of galaxies in the Coma cluster (Donas, Milliard, & Laget 1991, hereafter DML91). In this paper we rediscuss the $`UV`$ LF of galaxies in the Coma cluster, first explored by DML91. Since DML91 two important sets of data have been acquired: the sample of galaxies with known redshift in the Coma direction has increased by about 60%, and background counts in $`UV`$, essential for computing the LF, have been measured. The paper is structured as follows: we first describe the data used (§2); then, we present the color–magnitude and color–color relations for galaxies in the Coma cluster direction (§3) and we show that the availability of colors does not help in identifying interlopers. In §4 we use field counts and the extensive redshift surveys in the Coma cluster direction to constrain background counts in the Coma direction and to derive the Coma cluster LF, presented in §5. In §6 we discuss the bivariate LF and, finally, in §7 we compare the Coma $`UV`$ LF to the recently determined field LF. A summary is given in §8. In this paper we adopt $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup>. ## 2 The data Among nearby clusters of galaxies, Coma ($`v\simeq 7000`$ km s<sup>-1</sup>) is one of the richest ($`R=2`$). At first glance, it looks relaxed and virialized in both the optical and X-ray passbands. For this reason it was designated by Sarazin (1986) and Jones & Forman (1984) as the prototype of this class of clusters. The optical structure and photometry at many wavelengths, velocity field, and X-ray appearance of the cluster (see the references listed in Andreon 1996) suggest the existence of substructures. Since these phenomena are also observed in many other clusters (Salvador-Solé, Sanromà, & Gonzáles-Casado 1993), the Coma cluster appears typical also in this respect.
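For orientation, the adopted cosmology fixes the conversion between apparent and absolute $`UV`$ magnitudes used below; a quick check, assuming a pure Hubble-flow distance for Coma (no peculiar-velocity correction), is:

```python
import math

H0 = 50.0          # km/s/Mpc, as adopted in this paper
v_coma = 7000.0    # km/s

d_mpc = v_coma / H0                           # 140 Mpc
mu = 5.0 * math.log10(d_mpc * 1.0e6 / 10.0)   # distance modulus
print(f"d = {d_mpc:.0f} Mpc, mu = {mu:.2f}")  # mu ~ 35.7
# The survey limit UV ~ 18 mag then corresponds to M_UV ~ -17.7 mag.
```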
Coma was observed in the $`UV`$ with a panoramic detector (FOCA). Complementary data are taken from Godwin, Metcalfe & Peach (1983; blue and red isophotal magnitudes, designated here $`b`$ and $`r`$, respectively) and Andreon (1996; radial velocities taken from the literature and updated for this paper by means of new NED entries, and accurate morphological types). The FOCA experiment consisted of a 40-cm Cassegrain telescope equipped with an ITT proximity-focused image intensifier coupled to an IIaO photographic emulsion. The filter, centered at 2000 Å with a bandwidth of 150 Å, has negligible red leakage for objects as red as G0 stars and little dependence of the effective wavelength upon the object effective temperature. Observations of the Coma cluster were obtained with a field of view of 2.3 deg and a position accuracy of about 5 arcsec. The angular resolution of 20 arcsec FWHM was too coarse to allow an effective discrimination between stars and galaxies (for more details on the experiment see Milliard et al. 1991). The observations consisted of many short exposures, totalling 3000 s, and were obtained in April 1988. The galaxy catalog and details on the data reduction were published in DML91 and Donas, Milliard, & Laget (1995, hereafter DML95). The Coma $`UV`$ selected sample is found by DML95 to be complete down to $`UV\simeq 17`$–$`17.5`$ mag and 70% complete in the range $`17.5<UV<18`$ mag, and includes only $`UV`$ sources with at least one optical counterpart. Detected objects were classified by DML91 and DML95 as stars or galaxies according to their optical appearance. Following DML91, the $`UV`$ magnitude is defined by the expression: $`UV=-2.5\mathrm{log}(F_\lambda )-21.175`$ where $`F_\lambda `$ is the flux in ergs cm<sup>-2</sup> s<sup>-1</sup> Å<sup>-1</sup>. Typical photometric errors are 0.3 mag down to $`UV\simeq 17`$ mag and reach 0.5 mag at the detection limit $`UV\simeq 18`$ mag. ## 3 Color–magnitude and color–color diagrams Figure 1 (upper panel) shows the $`UV-b`$ vs $`UV`$ color–magnitude diagram for the 254 galaxies detected in the UV in the Coma field. This sample includes a larger number of galaxies with known redshift than in DML91, due to the numerous redshift surveys undertaken since 1991. We consider as Coma members only galaxies with $`4000<v<10000`$ km s<sup>-1</sup> (which is similar or identical to the criteria adopted by Kent & Gunn (1982), Mazure et al. (1988), Caldwell et al. (1993), Carlberg et al. (1994), Biviano et al. (1995), Andreon (1996), De Propris et al. (1997)). Figure 1 shows that only a few galaxies detected in $`UV`$ are near the optical catalog limit ($`b=21`$ mag), except at $`UV\simeq 18`$ mag, suggesting that only a minority of UV galaxies are missed because they are not visible in the optical<sup>1</sup><sup>1</sup>1We stress that the $`UV`$ galaxy catalog contains only sources with at least one optical counterpart, see DML95.. This confirms the DML91 statement that the $`UV`$ sample is truly $`UV`$ selected, except maybe in the last half–magnitude bin. Many optically–faint and $`UV`$–bright galaxies have no measured redshift. The lower panel in Figure 1 shows the optical $`b-r`$ vs $`b`$ color-magnitude diagram for the brightest (in $`b`$) 254 galaxies in the same field. We have accurate morphological types for all galaxies brighter than $`b\simeq 16.5`$–$`17`$ mag (Andreon et al. 1996, 1997). Coma early–type galaxies (i.e. ellipticals and lenticulars) have $`UV-b\simeq 3`$ mag and $`b-r\simeq 1.8`$ mag (DML95, Andreon 1996). The comparison of the two panels in Figure 1 shows several interesting features.
First of all, bright $`UV`$ galaxies are blue and not red, as is instead the case in the optical. In other words, early–type galaxies, due to their $`UV`$ faintness, do not dominate the $`UV`$ color–magnitude diagram. Red and blue galaxies are small fractions of the $`UV`$ and optically selected samples, respectively. In the second place, galaxies show a much larger spread in $`UV-b`$ (7 mag for the whole sample, 6 mag for the redshift-confirmed Coma members) than in $`b-r`$ or in any other optical or optical–near-infrared color (see, for example, the compilation in Andreon (1996)). From the theoretical point of view, such a large scatter in color implies that the $`UV`$ and $`b`$ passbands trace the emission of quite different stellar populations. For all but the very old stellar populations, the $`UV`$ traces mainly the emission from young stars (see for instance Donas et al. 1984; Buat et al. 1989), which have a maximum main-sequence lifetime of a few $`10^8`$ years. Therefore, for star-forming galaxies the $`UV`$ is a direct measure of the present-epoch star formation rate. Optical data provide instead a weighted average of the past to present star formation rate. The large scatter in color therefore implies that galaxies bright in $`UV`$ are not necessarily massive, but more likely the most active in forming stars. From the observational point of view, this large scatter in color is a problem, since deep optical observations are needed to derive optical magnitudes and hence colors (blue galaxies with $`UV=18`$ mag have $`b\simeq 20`$–$`21`$ mag) or even to discriminate stars from galaxies. This limitation makes it difficult to characterize the properties of $`UV`$ selected samples, such as, for instance, the optical morphology (a galaxy with $`UV=18`$ mag is bright and large enough to be morphologically classified only if it is quite red); the redshift (since redshifts are usually measured from the optical emission or for an optically selected sample); the luminosity function of galaxies in clusters (the background subtraction is uncertain because the stellar contribution is difficult to estimate in the absence of deep optical imaging), etc. Furthermore, it is dangerous to limit the sample to galaxies with known redshift or morphological type, since this would introduce a selection criterion (mainly an optical selection) which has nothing to do with the $`UV`$ properties of the galaxies. Figure 2 shows the color–magnitude diagram for a field in a direction that in part overlaps the Coma optical catalog provided by Godwin et al. (1983) and includes even a few members located in the Coma outskirts. Also these data were obtained with FOCA (Treyer et al. 1998). Most of the background galaxies have blue apparent colors, but with a large spread. Almost no background galaxies lie in the upper-right corner of the graph, i.e. no background galaxy is simultaneously very red ($`UV-b\gtrsim 3`$) and faint ($`UV\gtrsim 17`$). The selection criteria used by Treyer et al. (1998) for studying this sample are quite complex and galaxies with missing redshift (failed or not observed) are not listed, so that it is not trivial to perform a background subtraction in the color–magnitude plane (as is sometimes done in the optical; see, for instance, Dressler et al. 1994). The color-color diagram of galaxies in the direction of Coma (Figure 3) has already been discussed in DML95. But, in our sample, the number of galaxies having known membership is larger by 60% (from 61 to 99 galaxies).
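In practice, the membership bookkeeping used throughout reduces to the velocity criterion of §3; a minimal sketch, with hypothetical velocities standing in for the catalog (NaN marking galaxies without a measured redshift), is:

```python
import numpy as np

v = np.array([6500.0, 12000.0, np.nan, 8200.0, 3500.0])  # km/s (placeholder)

has_z = ~np.isnan(v)                            # measured redshift available?
members = has_z & (v > 4000.0) & (v < 10000.0)  # confirmed Coma members
interlopers = has_z & ~members                  # confirmed non-members
unknown = ~has_z                                # membership undetermined
print(members.sum(), interlopers.sum(), unknown.sum())
```

Galaxies in the last class are the ones that drive the background uncertainty discussed in §4.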
The color–color diagram (Figure 3) shows that background galaxies have colors overlapping those of known Coma galaxies and, therefore, it is not of much use in discriminating members from non–members. This conclusion is strengthened by the fragmentary knowledge of the colors of $`UV`$–selected samples, which makes it premature to adopt a color selection criterion for the purpose of measuring the LF. ## 4 Evaluation of background counts in the Coma direction Since clusters are by definition volume–limited samples, the measurement of the cluster LF consists of counting galaxies in each magnitude bin after having removed the interlopers, i.e. galaxies along the same line of sight but not belonging to the cluster. In general, interlopers can be removed in three different ways: by determining the membership of each galaxy through an extensive redshift survey, by a statistical subtraction of the expected background contamination (see, for instance, Oemler 1976), or by using color–color or color–magnitude diagrams (see, for instance, Dressler et al. 1994 and Garilli, Maccagni & Andreon 1999). In our case, the available color information is not sufficient to discriminate members from interlopers, and the surveys in the Coma direction available in the literature are not complete down to the magnitude limit of our sample. Therefore we were forced to use a hybrid method to estimate and remove the background contribution. Because the available membership information is qualitatively different for bright and faint galaxies, we consider them separately. For almost all galaxies brighter than $`M_{UV}=-19.7`$ mag, redshifts are available in the literature, and interlopers can be removed one by one. For fainter galaxies we compute the LF from a statistical subtraction of the field counts, and, therefore, the largest source of error may come from possible large background fluctuations from field to field. Milliard et al. (1992) present galaxy counts in three random fields, measured with the same experiment used to acquire the Coma data. One of the pointings is very near in the sky to the Coma cluster. The slope is nearly Euclidean for the total (i.e. galaxy+stars) counts ($`\alpha \simeq 0.54`$), with a small scatter among the counts in the three directions (roughly 10%). After removing the stellar contribution, galaxy counts have again a nearly Euclidean slope, but an amplitude which is half the previous one. Dots in Figure 4 show galaxy counts (i.e. $`n(m)`$) in the Coma direction (we simply count all galaxies in each bin; open dots and dashed line in the figure) and the average of the three “field directions” (solid dots and solid line). At magnitudes fainter than $`UV=16`$ mag, galaxy counts in the Coma direction are lower than those in directions not including clusters of galaxies, although the error bars are quite large. At first sight, this plot is surprising: clusters are overdensities and therefore counts in their directions should be higher than field counts. However, this expectation is not necessarily correct in the $`UV`$ band. Star formation is inhibited in high-density environments (Hashimoto et al. 1998, Merluzzi et al. 1999) and therefore counts in the direction of the cluster can be similar to, or even lower than, counts in directions without clusters on the line of sight. The $`UV`$ luminosity is, in fact, a poor indicator of the galaxy mass. Another possible explanation could be related to large errors and large background fluctuations from field to field.
We now discuss this point in depth, taking advantage of the redshift surveys available in the Coma cluster direction. Figure 5 shows integral counts (i.e. $`n(<m)`$). The solid line is the integral of the solid line in Figure 4, i.e. it gives the expected integral field galaxy counts. All other lines refer instead to true measurements in the Coma cluster direction. The lower solid histogram in Figure 5 is the lower limit to the background in the Coma cluster direction, computed as the sum of the galaxies having (known) velocities falling outside the assumed range for Coma. The upper solid histogram is the upper limit to the background in the Coma cluster direction, given, instead, as the sum of the galaxies outside the assumed velocity range and the galaxies with unknown membership. The dotted lines are the $`1\sigma `$ confidence contours for the lower and upper limits, computed according to Gehrels (1986). They simply account for Poissonian fluctuations and show how large (or small) the real background could be (at the 68% confidence level) in order to observe such large (or small) counts. A background lower than the lower dotted histogram would produce too few (at the 68% confidence level) interlopers in the Coma cluster direction with respect to the observed ones, whereas a background higher than the upper dotted histogram would imply (always at the 68% confidence level) a number of galaxies larger than the size of the sample (once the Coma members are removed). To summarize, in order to be consistent (at the 68% confidence level) with the redshift surveys in the Coma direction, background counts in the Coma direction must be bracketed between the two dotted histograms. Assuming smooth counts of nearly Euclidean slope, we consider the most extreme amplitudes for the background that are still compatible at the 68% confidence level with the two dotted histograms in at least one magnitude bin, and in what follows we call them “maximum$`+1\sigma `$” and “minimum$`-1\sigma `$”. Under the hypothesis of a nearly Euclidean slope, the background in the Coma direction turns out to be between 2.8 and 17.8 times smaller than the expectation shown by the line in Figure 5. The expected field counts (i.e. the line in Figure 5) are $`3\sigma `$ away from the maximum background allowed by the Coma redshift survey (i.e. the upper solid histogram). This is an unlikely but not impossible situation, in particular when we take into account that the stellar contribution has been assumed, and not measured, in two of the background fields and that the counts are slightly over–estimated, due to the presence of the Coma cluster and supercluster (Treyer et al. 1998) in the last field. ## 5 $`UV`$ Luminosity Function In the previous section we derived an estimate for the background in the Coma cluster direction or, to be more precise, a range for the amplitude of the background counts under the further assumption of a nearly Euclidean slope. We can, therefore, statistically remove the background contribution and compute the faint end of the LF (at bright magnitudes the membership is known for each individual galaxy) and look at the dependence of the LF on the assumed values of the background amplitude.
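The Poissonian confidence contours described above follow from the approximate formulae of Gehrels (1986); a sketch of the $`1\sigma `$ ($`S=1`$) bounds applied to the counts of confirmed interlopers (the counts per bin below are hypothetical) is:

```python
import numpy as np

def gehrels_upper(n, S=1.0):
    # approximate Poisson upper limit of Gehrels (1986)
    return n + S * np.sqrt(n + 0.75) + (S**2 + 3.0) / 4.0

def gehrels_lower(n, S=1.0):
    # approximate Poisson lower limit of Gehrels (1986); valid for n >= 1
    return n * (1.0 - 1.0 / (9.0 * n) - S / (3.0 * np.sqrt(n)))**3

n_interlopers = np.array([1.0, 3.0, 7.0, 12.0])  # hypothetical counts per bin
print(gehrels_lower(n_interlopers))
print(gehrels_upper(n_interlopers))
```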
The determination of the faint end of the LF therefore still depends in part on the poorly known background counts, but much less than in DML91, since at that time the slope and the amplitude of the background contribution were almost unknown and the background was left free to span a range extending from almost all the data to zero. In order to quantify the error implied by our limited knowledge of the background counts, we compute the faint end of the LF twice, assuming a minimum$`-1\sigma `$ background and a maximum$`+1\sigma `$ one. The actual Coma $`UV`$ LF is bracketed in between. We made use of a maximum–likelihood method (Press et al. 1992) to fit the differential LF of Coma with a Schechter (1976) or a power-law function: $`f(m)=\varphi ^{*}10^{0.4(\alpha _S+1)(m^{*}-m)}\mathrm{exp}(-10^{0.4(m^{*}-m)})`$ $`f(m)=k10^{\alpha m}`$ The most important advantage of the maximum–likelihood method is that it does not require binning the data in an arbitrarily chosen bin size and works well also with small samples, where the $`\chi ^2`$ fitting is not useful. It also naturally accounts for lower limits (bins with zero counts if the data are binned). The maximum-likelihood method leaves the normalization factor undetermined (since it cancels out in the computation). We therefore derived it by requiring that the integral of the best fit be equal to the observed number of galaxies. In our case we have 125 and 233 galaxies in the Coma sample, depending on the adopted background subtraction. The Coma cluster $`UV`$ LF – the first ever derived for a cluster – is shown in Figure 6. Error bars are large, and only the rough shape of the LF can be sketched. The Coma $`UV`$ LF is well described by a power law (or, alternatively, by a Schechter function with a characteristic magnitude $`M_{UV}^{*}`$ much brighter than the brightest galaxy): a Kolmogorov-Smirnov test could not reject at more than the 20% confidence level the null hypothesis that the data are extracted from the best fit (whereas we need a 68% confidence level to exclude the model at $`1\sigma `$). The best slope is $`\alpha =0.42\pm 0.03`$ and $`\alpha =0.50\pm 0.03`$ assuming a maximum$`+1\sigma `$ and minimum$`-1\sigma `$ background contamination, respectively. In terms of the slope of the Schechter (1976) function $`\alpha _S`$, these values are $`-2.06`$ and $`-2.26`$ respectively. The exact value of the background amplitude, once bound by the redshift surveys, has a small impact on the slope of the LF, which is quite steep. The Coma $`UV`$ LF is steeper than the optical LF ($`\alpha _S\simeq -1.0`$ from 5000 Å to 8000 Å, Garilli, Maccagni & Andreon 1999), when computed within a similar range of magnitudes (i.e. at $`M_3+3`$, where $`M_3`$ is the magnitude of the 3rd brightest galaxy of the cluster). It needs to be stressed, however, that the computed slope of the LF depends on the assumption of a nearly Euclidean slope for the galaxy counts (the amplitude is constrained by the redshift survey). We now measure the effect of relaxing this hypothesis. A strict lower limit to the slope of the Coma LF can be computed under the extreme assumption that all galaxies not confirmed as Coma members (i.e. all galaxies without known redshift and those with redshifts outside the velocity range of Coma members) are actual interlopers. The resulting LF is shown in Figure 7.
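The unbinned maximum-likelihood fit described above is straightforward to reproduce; a minimal sketch, with a hypothetical magnitude sample standing in for the background-subtracted Coma galaxies, is:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def schechter(m, alpha_s, m_star):
    # unnormalized Schechter form in magnitudes; phi* drops out of the
    # likelihood and is set afterwards by matching the observed counts
    x = 10.0 ** (0.4 * (m_star - m))
    return x ** (alpha_s + 1.0) * np.exp(-x)

def neg_log_like(params, m, m_lo, m_hi):
    alpha_s, m_star = params
    norm, _ = quad(schechter, m_lo, m_hi, args=(alpha_s, m_star))
    return -np.sum(np.log(schechter(m, alpha_s, m_star) / norm))

m_sample = np.random.default_rng(0).uniform(-21.5, -17.5, 125)  # placeholder

res = minimize(neg_log_like, x0=(-1.5, -22.0),
               args=(m_sample, -21.5, -17.5), method="Nelder-Mead")
alpha_s_fit, m_star_fit = res.x
print(alpha_s_fit, m_star_fit)
```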
No matter how large and how complex the shape of the background counts in the Coma direction, this all-interlopers estimate provides a strict lower limit to the slope of the Coma LF, because the galaxies with unknown membership are all faint, and they could only raise the faint part of the LF. In such an extreme case, we find $`M_{UV}^{*}=-22.6`$ mag, brighter than the brightest cluster galaxy. Fitting a power law, we find instead $`\alpha =0.21\pm 0.04`$. Even in this case, however, the slope is larger than what is found in the optical (at $`M_3+3`$). This LF is computed with no assumption about the shape of the background counts. This LF is unlikely to be near the “true” Coma $`UV`$ LF, because the assumption that all galaxies with unknown redshift are interlopers is unrealistic and implies an over–Euclidean slope ($`\alpha \simeq 0.75`$) for the background, which is much steeper than those observed in the three field pointings by Milliard, Donas & Laget (1992). Nevertheless, this lower-limit LF gives the very minimum slope for the Coma $`UV`$ LF, $`\alpha _S=-1.45`$. The steep Coma $`UV`$ LF implies that faint and bright galaxies give similar contributions to the total $`UV`$ flux, and that the total $`UV`$ flux has not yet converged 4 magnitudes fainter than the brightest galaxy (or, equivalently, at $`M_3+3`$). Therefore, in order to derive the total luminosity and hence the metal production rate, it is very important to measure the LF down to faint magnitude limits. ## 6 Bivariate LF Since the redshift information is quite different for blue ($`UV-b<1.7`$) and red ($`UV-b>1.7`$) galaxies, the two $`UV`$ LFs are computed in different ways. Redshifts are available for all the red galaxies (which all belong to the cluster) and the respective $`UV`$ LF is easy to compute. Almost all blue galaxies brighter than $`M_{UV}\simeq -20`$ mag have known redshift, and therefore the determination of this part of the blue LF is quite robust. For the faint part of the blue LF, we adopt an “average” background, given as the average normalization between the maximum$`+1\sigma `$ and minimum$`-1\sigma `$ backgrounds previously computed. The resulting bivariate color–luminosity function is given in Figure 8. The bulk of the $`UV`$ emission comes from blue ($`UV-b<1.7`$) galaxies, while all red galaxies have $`M_{UV}>-20`$ mag. Therefore, since blue galaxies dominate the $`UV`$ LF both in number and luminosity, the Coma $`UV`$ LF is dominated by star-forming galaxies and not by massive galaxies. From previous morphological studies (Andreon 1996) it turns out that the Coma red galaxies in our sample are ellipticals or lenticulars. The fact that early–type galaxies contribute little to the $`UV`$ LF may be explained as a consequence of their low recent star formation. Please note that in the optical the LF is dominated at the extreme bright end by the early–type (i.e. red) galaxies (Binggeli, Sandage & Tammann 1988, Andreon 1998), and not by blue ones as it is in $`UV`$. ## 7 Comparison with the $`UV`$ field LF The $`UV`$ LF of field galaxies has been recently measured by Treyer et al. (1998) in a region close to Coma, where they found $`\alpha _S=-1.62_{-0.21}^{+0.16}`$, $`M_{UV}^{*}=-21.98\pm 0.3`$ mag for a sample of 74 galaxies. As pointed out by Buzzoni (1998), this slope is quite different from that assumed for the distant field galaxies by Madau (1997). Is there any significant difference between the Coma cluster and the field $`UV`$ LFs?
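The question is addressed below through Kolmogorov-Smirnov comparisons; a sketch of such a test, using the field-LF parameters quoted above (the magnitude sample and fitting range are hypothetical placeholders), is:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import kstest

def schechter(m, alpha_s=-1.62, m_star=-21.98):   # field-LF parameters
    x = 10.0 ** (0.4 * (m_star - m))
    return x ** (alpha_s + 1.0) * np.exp(-x)

m_lo, m_hi = -21.5, -17.5                          # assumed magnitude range
norm, _ = quad(schechter, m_lo, m_hi)

def cdf(m):
    m = np.atleast_1d(m)
    return np.array([quad(schechter, m_lo, mi)[0] / norm for mi in m])

m_sample = np.random.default_rng(1).uniform(m_lo, m_hi, 125)  # placeholder
stat, p_value = kstest(m_sample, cdf)
print(stat, p_value)  # a small p-value argues the sample differs from the LF
```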
The best Schechter fit to the field data satisfactorily matches both the strict lower limit to the Coma LF and the Coma data after the subtraction of the maximum$`+1\sigma `$ background contribution (the probability of a worse fit is 0.1, according to the Kolmogorov-Smirnov test, whereas we need a probability of 0.05 to call the fit worse at $`2\sigma `$), but does not in the case of the minimum$`-1\sigma `$ background contribution (the probability of a worse fit is 0.00078, according to the same test, i.e. the two LFs differ at $`4\sigma `$). However, using $`\alpha _S-1\sigma `$ instead of $`\alpha _S`$ for the field LF, the fit to the Coma data cannot be rejected with a probability larger than 0.02, i.e., the $`1\sigma `$ confidence contour of the field LF crosses the $`2\sigma `$ confidence contour of the Coma LF. Therefore, given the available data, the Coma and field $`UV`$ LFs differ at $`2`$–$`3\sigma `$ at most. Given the large errors involved, the field and cluster LFs are therefore compatible with each other. ## 8 Conclusions The analysis of the $`UV`$ and optical properties of Coma galaxies is indicative of the difficulties encountered in studying $`UV`$ selected samples: background galaxy counts are uncertain (as is their variance), and the background contamination in the $`UV`$ color–magnitude plane is poorly known. In spite of these difficulties we found: 1) Galaxies in Coma show a large range of $`UV`$–optical color (6–7 mag), much larger than what is observed in redder passbands. 2) Blue galaxies are the brightest ones and the color–magnitude relation is not as prominent as it is at longer wavebands. Early–type or red galaxies are a minority in the Coma $`UV`$ selected sample. In $`UV`$, the brightest galaxies are the most actively star-forming ones and not the most massive ones. 3) In spite of the rather large errors, the $`UV`$ LF discussed here is the first ever derived for a cluster. The major source of error in estimating the $`UV`$ LF comes from the field-to-field variance of the background, which is subtracted statistically. Present redshift surveys in the studied field constrain the background contribution in the Coma direction from above and from below, as shown in Figure 5. The Coma $`UV`$ LF is steep and bracketed between the two estimates shown in Figure 6, with a likely Schechter slope in the range $`-2.0`$ to $`-2.3`$. Even under the extreme hypothesis that all galaxies with unknown membership are interlopers, the very minimum slope of the $`UV`$ LF is $`\alpha _S=-1.45`$. 4) The steep Coma $`UV`$ LF implies that faint and bright galaxies give similar contributions to the total $`UV`$ flux, and that the total $`UV`$ flux has not yet converged 4 magnitudes fainter than the brightest galaxy (or, equivalently, at $`M_3+3`$). Therefore, in order to derive the total luminosity and hence the metal production rate, it is very important to measure the LF down to fainter magnitude limits. 5) The Coma $`UV`$ LF is dominated in number and luminosity by blue galaxies, which are often faint in the optical. Therefore the Coma $`UV`$ LF is dominated by star-forming galaxies, not by massive and large galaxies. 6) The Coma $`UV`$ LF is compatible with the field LF within $`2`$–$`3\sigma `$. ###### Acknowledgements. This work has been partially done at the Istituto di Fisica Cosmica “G.P.S. Occhialini”. Its director, Gabriele Villa, is warmly thanked for the hospitality.
This work would not have been possible without the good $`FOCA`$ data, and I wish to acknowledge all the people involved in that project for their good work. Jean–Charles Cuillandre, Jose Donas, Catarina Lobo and, in particular, Giuseppe Longo are also warmly thanked for their attentive reading of the paper. The anonymous referee made useful comments that helped to focus the paper on its major objectives.
no-problem/9909/cond-mat9909148.html
ar5iv
text
# Capillary-gravity waves: The effect of viscosity on the wave resistance ## Abstract The effect of viscosity on the wave resistance experienced by a two-dimensional perturbation moving at uniform velocity over the free surface of a fluid is investigated. The analysis is based on Rayleigh’s linearized theory of capillary-gravity waves. It is shown in particular that the wave resistance remains bounded as the velocity of the perturbation approaches the minimum phase speed $`c^{min}=(4g\gamma /\rho )^{1/4}`$ ($`\rho `$ is the liquid density, $`\gamma `$ is the liquid-air surface tension, and $`g`$ the acceleration due to gravity), unlike what is predicted by the inviscid theory. Consider a body of liquid in equilibrium in a gravitational field and having a planar free surface. If, under the action of some external perturbation, the surface is moved from its equilibrium position at some point, motion will occur in the liquid. This motion will be propagated over the whole surface in the form of waves, which are called capillary-gravity waves. These waves are driven by a balance between the liquid inertia and its tendency, under the action of gravity and of surface tension forces, to return to a state of stable equilibrium. For an inviscid liquid of infinite depth, the relation between the circular frequency $`\omega `$ and the wave number $`k`$ (i.e., the dispersion relation) is given by $`\omega ^2=gk+\gamma k^3/\rho `$, where $`\rho `$ is the liquid density, $`\gamma `$ the liquid-air surface tension, and $`g`$ the acceleration due to gravity. The above equation may also be written as a dependence of the wave speed $`c=\omega /k`$ on the wave number $$c=\left(g/k+\gamma k/\rho \right)^{1/2}$$ (1) An important feature of Eq. (1) is that it implies a minimum phase speed $`c^{min}=(4g\gamma /\rho )^{1/4}`$, reached at $`k_{min}=\kappa `$, where $`\kappa ^{-1}=\left[\gamma /(\rho g)\right]^{1/2}`$ is the capillary length. For water with $`\gamma =73\mathrm{mN}\mathrm{m}^{-1}`$ and $`\rho =10^3\mathrm{kg}\mathrm{m}^{-3}`$, the minimum phase speed is $`c^{min}=0.23\mathrm{m}\mathrm{s}^{-1}`$ and the corresponding wavelength is $`\lambda ^{min}=2\pi /\kappa =1.7\times 10^{-2}\mathrm{m}`$. The dispersive property of capillary-gravity waves is responsible for the complicated wave pattern generated at the free surface of a still liquid by a disturbance moving with a velocity $`V`$ greater than $`c^{min}`$. The disturbance may be produced by a small object (such as a fishing line) immersed in the liquid or by the application of an external surface pressure distribution $`P_{ext}`$. The waves generated by the moving disturbance continuously remove energy to infinity. Consequently, for $`V>c^{min}`$, the disturbance will experience a drag, $`R`$, called the wave resistance. For $`V<c^{min}`$, the wave resistance is equal to zero since, in this case, no waves are generated by the disturbance. A few years ago, it was predicted that the wave resistance corresponding to a surface pressure distribution symmetrical about a point should be discontinuous at $`V=c^{min}`$. This prediction has been checked very recently by Browaeys and co-workers using a magnetic fluid. The experimental results of Browaeys et al. indicate, however, that the disturbance experienced a small but nonzero drag for $`V<c^{min}`$. Since this nonzero drag might be due, in part, to the finite viscosity of the fluid, it is of some importance to incorporate this physical parameter into the inviscid model of Ref. .
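The numbers quoted above for water are easy to verify; a quick check (with $`g=9.81`$ m s<sup>-2</sup> assumed):

```python
import math

g, gamma, rho = 9.81, 0.073, 1.0e3       # SI units, water at room temperature

c_min = (4.0 * g * gamma / rho) ** 0.25  # minimum phase speed
kappa = math.sqrt(rho * g / gamma)       # inverse capillary length
lam_min = 2.0 * math.pi / kappa          # wavelength at the minimum
print(f"c_min = {c_min:.2f} m/s, lambda_min = {100.0 * lam_min:.1f} cm")
# -> c_min ~ 0.23 m/s and lambda_min ~ 1.7 cm, as quoted above
```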
In order to simplify the discussion, we will here consider the case of a pressure distribution $`P_{ext}`$ localized along a line. The more complicated case of an axisymmetric surface pressure distribution will be considered elsewhere. We take the $`xy`$-plane as the equilibrium surface of the fluid and assume that a pressure distribution of the form $$P_{ext}=P_0\frac{b}{\pi (b^2+x^2)}$$ (2) travels over the surface in the $`x`$-direction with a velocity $`V`$ (in all that follows we assume that $`b\ll \kappa ^{-1}`$). It can be shown that in the case of an inviscid liquid the wave resistance per unit length corresponding to the external pressure distribution (2) is given by: $$R=\frac{P_0^2}{\gamma (k_2-k_1)}\left[k_1e^{-2bk_1}+k_2e^{-2bk_2}\right]\text{ }(V>c^{min})$$ (3) (remember that for an inviscid liquid, $`R=0`$ for $`V<c^{min}`$). In Eq. (3), the wave numbers $`k_1`$ and $`k_2`$ ($`k_1<k_2`$) denote the two (real) solutions of $`(g/k+\gamma k/\rho )^{1/2}=V`$ (see Eq. (1)). A brief inspection of Eq. (3) shows that the wave resistance is a decreasing function of the perturbation velocity $`V`$. In the limit $`V\gg c^{min}`$, Eq. (3) reduces to $`R\approx (P_0^2/\gamma )e^{-4(b/\kappa ^{-1})(V/c^{min})^2}`$. As the velocity $`V`$ decreases towards $`c^{min}`$, the wave resistance Eq. (3) becomes unbounded. This result is directly related to the fact that as $`V`$ approaches $`c^{min}`$, the energy transferred by the moving pressure distribution cannot be radiated away. We now turn our attention to the case of a viscous liquid and investigate how the wave resistance (3) is modified by the liquid viscosity. In order to calculate $`R`$, we may imagine a rigid cover fitting the surface everywhere, as suggested by Havelock. The assigned pressure system $`P_{ext}`$ is applied to the liquid surface by means of this cover; hence, the wave resistance is simply the total resolved pressure per unit length in the $`x`$-direction: $$R=\int P_{ext}(x)\left(\frac{d}{dx}\zeta (x)\right)dx$$ (4) where $`\zeta (x)`$ denotes (in the frame of the perturbation) the displacement of the free surface from its equilibrium position. Let $`\widehat{\zeta }`$ and $`\widehat{P}_{ext}`$ denote the Fourier transforms of $`\zeta `$ and $`P_{ext}`$, respectively. Using the Navier-Stokes equation for a viscous fluid along with the appropriate stress condition at the free surface, one can relate $`\widehat{\zeta }`$ to $`\widehat{P}_{ext}`$ through the relation $$\left[(2\nu k^2-iVk)^2+gk+\frac{\gamma }{\rho }k^3-4\nu ^2k^3\sqrt{k^2-i\frac{V}{\nu }k}\right]\widehat{\zeta }=-\frac{k}{\rho }\widehat{P}_{ext}$$ (5) where the parameter $`\nu `$ is the kinematic viscosity of the liquid. Inserting Eq. (5) into Eq. (4) we obtain $$R=\frac{P_0^2}{\pi \gamma }\mathrm{Re}\left(\int _0^{\mathrm{\infty }}\frac{ik^2\mathrm{exp}\left(-4\frac{b}{\kappa ^{-1}}v^2k\right)}{(2ϵ_0vk^2-ik)^2+\frac{1}{4}v^{-4}k+k^3-4ϵ_0^2v^2k^3\sqrt{k^2-i\frac{k}{ϵ_0v}}}dk\right)$$ (6) where $`v=V/c^{min}`$, $`ϵ_0=\eta c^{min}/\gamma `$ (with $`\eta =\rho \nu `$ the dynamic viscosity), and the integration variable $`k`$ has been made dimensionless with $`\rho V^2/\gamma `$. The symbol $`\mathrm{Re}`$ represents the real part of a complex expression. The wave resistance Eq. (6) is shown graphically in figure (1) as a function of the reduced velocity $`v`$ for $`ϵ_0=3\times 10^{-3}`$ and $`b=2.5\times 10^{-3}\kappa ^{-1}`$ (the inset highlights the behaviour of $`R`$ at low velocities).
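The curve in figure (1) can be reproduced by direct numerical quadrature of Eq. (6); a minimal sketch follows, in which the principal branch of the complex square root is assumed and no special care is taken near $`v\gtrsim 1`$, where the sharply peaked integrand may require a more refined quadrature:

```python
import numpy as np
from scipy.integrate import quad

eps0 = 3.0e-3       # epsilon_0 = eta c_min / gamma
b_kappa = 2.5e-3    # b in units of kappa^{-1}

def integrand(k, v):
    # k is the dimensionless wave number of Eq. (6)
    denom = ((2.0 * eps0 * v * k**2 - 1j * k) ** 2
             + 0.25 * v**-4 * k + k**3
             - 4.0 * eps0**2 * v**2 * k**3
             * np.sqrt(k**2 - 1j * k / (eps0 * v)))
    return (1j * k**2 * np.exp(-4.0 * b_kappa * v**2 * k) / denom).real

def R_reduced(v):
    # returns R* = pi gamma R / P0^2, i.e. the real part of the integral
    val, _ = quad(integrand, 1.0e-9, np.inf, args=(v,), limit=200)
    return val

for v in (0.5, 0.9, 1.0, 1.1, 1.5):
    print(v, R_reduced(v))
```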
Two important features should be noted in comparison with the inviscid case: (a) First, while $`R`$ increases steeply near $`V=c^{min}`$, it remains bounded; (b) Secondly, as soon as the perturbation velocity $`V`$ is greater than zero, the wave resistance takes finite values. This latter result is a direct consequence of the internal viscous dissipation inside the liquid. For $`ϵ_0`$ much smaller than unity, Eq. (6) can be simplified and the above two features can be recovered analytically (for water with $`\gamma =73\mathrm{mN}\mathrm{m}^{-1}`$ and $`\rho =10^3\mathrm{kg}\mathrm{m}^{-3}`$, $`ϵ_0\simeq 3\times 10^{-3}\ll 1`$). Using standard mathematical techniques, one can show that the wave resistance displays a maximum of $$R=R_{max}\approx \frac{P_0^2}{\gamma \sqrt{ϵ_0}}$$ (7) for $$V=V_{max}\approx c^{min}(1+ϵ_0)$$ (8) On the other hand, in the limit $`V\ll c^{min}`$, the wave resistance $`R`$ varies linearly with the perturbation velocity: $$R\approx \frac{4P_0^2}{\pi \gamma }ϵ_0\left(\frac{V}{c^{min}}\right)\mathrm{Log}\left(\frac{\kappa ^{-1}}{b}\right)$$ (9) This linear behavior can be observed in the inset of figure (1). Let us conclude with a few remarks. In the calculations made above for the wave resistance $`R`$, we have used Rayleigh’s linearized theory of capillary-gravity waves. We have shown that one of the effects of viscosity is to cut off the unbounded response of the liquid predicted by the inviscid model as $`V\to c^{min}`$. For a small viscosity, the response of the liquid remains, however, rather large near $`c^{min}`$ (see figure (1)), and it would be of some interest to take nonlinear effects into account in the calculation of $`R`$. This is beyond the scope of the present letter. For a recent review of nonlinear capillary-gravity waves, the reader is referred to the work of Dias and Kharif. In the present study we have emphasized the asymptotic behavior of the wave resistance for a liquid of low viscosity. We hope to explore the high-viscosity limit in a subsequent report. Note also that the calculations presented in this letter assumed an external pressure distribution localized along a line. Further work will assess the effect of viscosity for a pressure field symmetrical around a point. Acknowledgements. This work was motivated by discussions with P.-G. de Gennes. We would like to thank him as well as J.-C. Bacri, J. Browaeys, F. Dias and D. Quéré for helpful comments. Figure caption: $`R^{*}=\pi \gamma P_0^{-2}R`$ as a function of $`v=V/c^{min}`$, with $`R`$ being the wave resistance of Eq. (6), $`ϵ_0=3\times 10^{-3}`$ and $`b=2.5\times 10^{-3}\kappa ^{-1}`$. The inset shows the behavior at low velocities.
no-problem/9909/cond-mat9909062.html
ar5iv
text
# Multicomponent nonisothermal nucleation. 3. Numerical results ## 1 Numerical results and conclusions To show the numerical effects of the error approach we shall consider the same situation as it was done in . As far as in has not been declared in what normalizing factor of the equilibrium distribution was used to calculate the stationary rate of nucleation we have to use the isothermal rate of nucleation published in (see Fig.1 there) as some given data<sup>1</sup><sup>1</sup>1The same qualitative picture will be under the arbitrary normalizing factor.. The detailed description of the experimental conditions and data can be found in , . The condensation of the ethanol (first component) - haxaganol (second component) is considered. The nucleation rate logarithm over the mean activity $`z=\sqrt{\zeta _1^2+\zeta _2^2}`$ is drawn for the several values of the activity fraction $`q=\zeta _1/(\zeta _1+\zeta _2)`$. In Fig.1. the points correspond to the results of Strey and Visanen . The solid lines show the isothermal rates of nucleation. Two dashed lines presents the nonisothermal nucleation rates for different values of the passive gas (argon) accommodation coefficient $`\alpha _{accg}`$. The lower curve corresponds to $`\alpha _{accg}=0.01`$, the upper corresponds to $`\alpha _{accg}=0.1`$ (for all activity fractions). The values of $`q`$ are written below the series of experimental points and above the theoretical curves. For small values of $`q`$ the isothermal and nonisothermal curves practically coincides, but this occurs only due to the big slope of the drawn dependencies. Moreover one can analytically show that the difference in $`J`$ between isothermal and nonisothermal approaches is growing with the growth of the nucleation rate and, thus, for small $`q`$ this difference is the greatest. We omit the comparison with the results of Lazaridiz and Drossinos because their nucleation rates are higher than the classical isothermal results. It lies in contradiction with the principle of stability. It is quite possible that Lazaridis and Drossinos used another input data as the parameters of their theory. Fig.2 and Fig.3 show the difference between the nucleation rates calculated by Djikaiev et al. and by the formulas presented here. Our results are dotted lines, the results of Djikaiev et al. are dashed lines, the nonisothermal rates logarithms are solid lines. All curves are drawn for $`\alpha _{accg}=0.1`$. The greater the nucleation rate is the greater is the manifestation of the thermal effects and the greater is the difference between the nucleation rate calculated by Djikaiev et al. and our results. That’s why we take two situations with the lowest theoretical nucleation rates which corresponds to $`q=0.980`$ (Fig.2) and $`q=0.929`$ (Fig.3). Certainly, the difference for $`\mathrm{ln}J`$ isn’t too big, but the correct account of the passive gas cooling changes $`J`$ in several times in comparison with results of Djikaiev et al. Our results are closer to the experimental data. To show the qualitative difference we can assume that $`\tau _j`$ , $`W_j^+`$ $`n_\mathrm{}j`$ $`F/\nu _j`$ have equal values for all components. Then all components can not be separated and we have the nonisothermal nucleation for one component but the passive gas is taken $`i_0`$ times into account in ($`i_0`$ is the number of the condensating components). Also we can approximately assume that the main cooling of the embryo occurs due to the passive gas. 
Then, taking into account that the renormalization of the stationary rate is now proportional to the quantity of the passive gas, we can see that the error in $`J`$ reaches a factor of $`i_0`$ (a factor of two in the binary condensation). This error is likely more significant than the difference between the Stauffer approach and the steepest-descent method . All necessary limit transitions of the presented theory (to the one-component theory, to the nonisothermal theory) are observed and give the correct asymptotes in the situations already described. To finish our description we can briefly recall the new features presented here in comparison with other publications. Certainly, the most advanced version of the theory was presented by Djikaiev et al. , but even in comparison with this publication the new features are the following ones: * The theory is now presented for the multicomponent case. * The shift terms in the kinetic equation are obtained. The meaning of these terms is clarified, and their negligible role is justified. It is shown that their negligible role can be established only within the initial steps of the Chapman-Enskog procedure. The connection between the vanishing of the shift terms and the possibility of neglecting the lattice structure of the distribution domain is shown. * The common cooling by the passive gas is considered instead of the separate cooling. This leads to an essential numerical difference in the nucleation rate. * The relaxation in the absence of the specific parameter required in is justified. This allows one to treat, by the known Chapman-Enskog approach, the situation of strong thermal effects. * The wrong parameter of decomposition presented in is now corrected. This clarifies the transition to the isothermal multicomponent theory. The evident weak point of the presented theory is the absence of the dependence of the surface tension on the temperature. This phenomenon will be taken into account in the next publication. Figure 1. Figure 2. Figure 3.
no-problem/9909/nucl-ex9909014.html
ar5iv
text
# Towards the Quark-Gluon Plasma ## 1 Introduction Research with ultra-relativistic nuclear collisions aims at producing, in the laboratory, quark-gluon plasma. This new state of matter is predicted to exist at high temperatures and/or high baryon densities. Specifically, numerical solutions of QCD using lattice techniques imply that the critical temperature (at zero baryon density) is about 170 MeV. Comprehensive surveys of the various experimental approaches of how to produce such matter in nucleus-nucleus collisions have been given recently. Here we focus on a few selected aspects, namely radial flow, strangeness production, dilepton production, and J/$`\mathrm{\Psi }`$ measurements, and explore possible correlations among these observations. The aim is to elucidate possible connections of these observations to the QCD phase transition. ## 2 Radial Flow A large body of data on transverse momentum (or transverse mass) distributions of hadrons demonstrates that the inverse slope constant of these (in general exponential) spectra scales linearly with the particle mass m. The experimental facts for Pb+Pb collisions at SPS energy have been compiled recently and are shown in Fig. 1. The observed linear relationship is naturally interpreted in terms of a collectively expanding fireball, where $`p_t=p_t^{thermal}+m\gamma _t\beta _t`$ with $`\beta _t`$ the transverse flow velocity and $`\gamma _t=\sqrt{1/(1-\beta _t^2)}`$. Using the known relationship between $`<p_t>`$ and T, m one may deduce a mean value of the transverse velocity of $`<\beta _t>\simeq 0.4`$, in good agreement with results from hydrodynamic descriptions. However, the T-m relation shown in Fig. 1 also implies T=180 MeV for m=0, not consistent with the current interpretation that thermal freeze-out takes place at around T=120 MeV. The now detailed measurements of transverse momentum distributions also imply little centrality dependence of the observed slope constants, in conflict with an interpretation of these observations in terms of initial-state scattering. An interesting and somewhat puzzling deviation from the general trend is observed for multi-strange baryons: the corresponding slope parameters are significantly smaller than expected. This has been interpreted as evidence for early freeze-out. It could be due to very small cross sections for elastic pion–strange-baryon scattering. The overall conclusion, however, is that the picture of a collectively expanding fireball has survived all tests of the past years. ## 3 Strangeness Enhancement and Equilibration The production yields of strange hadrons are significantly increased in ultra-relativistic nuclear collisions compared to what is expected from a superposition of nucleon-nucleon collisions. This has been observed by several experiments both at the AGS and at the SPS. To demonstrate the degree of enhancement observed we show, in Fig. 2, the results of the WA97 collaboration for multi-strange baryons. The observed enhancement of more than one order of magnitude can currently not be understood within any of the hadronic event generators<sup>1</sup><sup>1</sup>1For a discussion see the proceedings of the Quark Matter 99 conference, Nucl. Phys. A, to be published.. Surprisingly, it is quantitatively explained if one assumes complete chemical equilibrium in the hadronic phase of the collision. Similar observations have been made for analyses of S-induced SPS data and for AGS data.
How chemical equilibration can be reached in a purely hadronic collision is not clear in view of the small production cross sections for strange and especially multi-strange hadrons. In fact, system lifetimes of the order of 50 fm/c or more are needed for a hot hadronic system to reach full chemical equilibration. Such lifetimes are at variance with lifetime values established from interferometry analyses, where upper limits of about 10 fm/c are deduced. Another very interesting observation is that the chemical potentials $`\mu `$ and temperatures T resulting from the thermal analyses place the systems at chemical freeze-out very close to where we currently believe the phase boundary between plasma and hadrons is located. This is demonstrated in Fig. 3 <sup>2</sup><sup>2</sup>2This is an updated version of the figure shown in . where also results from lower-energy analyses are plotted. The freeze-out trajectory (solid curve through the data points) is just to guide the eye but closely follows the empirical curve of . The closeness of the freeze-out parameters (T, $`\mu `$) to the phase boundary might be the clue to the apparent chemical equilibration in the hadronic phase: if the system prior to reaching freeze-out was in the partonic (plasma) phase, then strangeness production is determined by the larger partonic cross sections as well as by hadronization. Slow cooking in the hadronic phase is then not needed to produce the observed large abundances of strange hadrons. Early simulations of strangeness production in the plasma and during hadronization support this interpretation at least qualitatively. Faced with the present results a new theoretical look seems mandatory. ## 4 Enhancement of Low Mass Dilepton Pairs The CERES and HELIOS/3 collaborations found that, in central nucleus-nucleus collisions at SPS energy, low-mass (m $`<`$ 800 MeV) dilepton pairs are produced at yields which are significantly larger than expected from nucleon-nucleon collisions. The enhancement is concentrated at low pair transverse momentum, as can be seen from the recent CERES data presented in Fig. 4. The observed enhancement has been attributed to changes of the mass and/or width of the $`\rho `$ meson in the hot and dense fireball. The still somewhat controversial situation has been reviewed recently. Here we want to add two points. First, the enhancement sets in at centralities corresponding to less than 35% of the total inelastic cross section. This implies impact parameters b $`<`$ 8 fm (see Fig. 2 in ). From there on it scales quadratically with particle multiplicity. Secondly, the $`\rho `$ mesons are formed, according to Fig. 3, at around T=165 MeV. These two facts suggest that the pion number (i.e. the temperature) and $`\pi \pi `$ collisions in the medium are at the origin of the observed enhancement. ## 5 J/$`\mathrm{\Psi }`$ Suppression The suppression of J/$`\mathrm{\Psi }`$ mesons (compared to what is expected from hard-scattering models) was early on predicted to be a signature for color deconfinement. Data for S-induced collisions exhibited a significant suppression, but systematic studies soon revealed that such suppression exists already in p-nucleus collisions and is due to the absorption in (normal) nuclear matter of a color-singlet $`c\overline{c}g`$ state that is formed on the way towards J/$`\mathrm{\Psi }`$ production. The situation has been summarized in .
The data for Pb+Pb collisions now exhibit clear evidence for anomalous absorption beyond the standard absorption expected for such systems. The most recent results are summarized in Fig. 5, taken from . There seems to be a break away from the standard absorption curve at around a transverse energy value of 40 GeV, corresponding to an impact parameter of about 8 fm. We note that this impact parameter value agrees with the value below which anomalous dilepton enhancement is observed by the CERES collaboration (see above)<sup>3</sup><sup>3</sup>3The data from CERES do not reach lower centralities corresponding to larger impact parameters. Hence this point is not yet “water-tight”.. The new minimum-bias data from NA50 (open points in Fig. 5), with their much smaller error bars, very much accentuate the difference from the standard absorption curve. Whether these data provide unambiguous evidence for the existence of a deconfined phase in central Pb+Pb collisions is hotly debated. However, there is at present no convincing explanation of the observed data in standard scenarios without plasma. The theoretical curves in Fig. 5 show this. All calculations from the Giessen group, from the Kahana team, from Capella's group, and from the Frankfurt group using UrQMD are based on models for the destruction of charmonium by comoving pions, strings, etc. The corresponding dissociation cross sections are poorly known. Despite very different assumptions about these cross sections and despite a number of other nontrivial assumptions (see, e.g., the discussion in ), none of the calculations reproduces the data. For example, comparing the Giessen calculation with the 1996 data (solid points in Fig. 5) yields a reduced $`\chi ^2`$ larger than 4, while comparison of any of the calculations with the high-statistics minimum-bias data (open points in Fig. 5) yields reduced $`\chi ^2`$ values larger (sometimes much larger) than 10. We conclude that the charmonium suppression observed by the NA50 collaboration in central Pb+Pb collisions is highly non-trivial. ## 6 Summary and Outlook Taken together, the observations of flow, strangeness enhancement, enhancement of low-mass dilepton pairs, and charmonium suppression lend strong support to the interpretation that, during the course of a central Pb+Pb collision at SPS energy, an at least partly deconfined state of matter, i.e. quark-gluon plasma, has been created. Two further rounds of experiments at the SPS will provide the possibility to consolidate this picture. Meanwhile, experiments at the RHIC collider are about to commence and the planning for the ALICE experiment at the LHC is well underway. Physics prospects for experiments at these accelerators are bright. Extrapolating from the results of the AGS and SPS programs we expect much higher energy densities and temperatures at collider energies. Production and study of a deconfined phase over a large space-time volume should then be possible.
no-problem/9909/astro-ph9909404.html
ar5iv
text
# Gravonuclear Instabilities in Post-Horizontal-Branch Stars ## 1 Introduction A number of intriguing astrophysical problems are associated with the termination of central helium burning in horizontal-branch (HB) stars. One example concerns the so-called “breathing pulses” of the convective core. While numerical algorithms for treating semiconvection (e.g., Robertson & Faulkner 1972) work quite effectively during most of the HB phase, they invariably fail once the central helium abundance $`Y_c`$ falls below $`0.1`$. At that time the convective core suddenly grows so large that it engulfs most of the previous semiconvective zone, thereby bringing so much fresh helium into the center that $`Y_c`$ increases. These breathing pulses are generally suppressed using one of the following methods: * Omission of the gravitational energy term $`ϵ_g`$. Dorman & Rood (1993) have shown that the breathing pulses can be suppressed by setting $`ϵ_g`$ equal to 0 in the stellar structure equations during the core-helium-exhaustion phase. An example of a canonical HB and post-HB track computed with this approach is given in Figure 1a. Note that the evolution is quite smooth without any indication of an instability. The helium-burning luminosity $`L_{He}`$ along this sequence, given in Figure 2a, shows only a characteristic dip at the end of the HB phase, as the helium burning shifts outward from the center to a shell. Most importantly, the composition profile within the core at the end of the HB phase contains a broad region of varying helium abundance $`Y`$ corresponding to the former semiconvective zone (see Figure 3a). * Limit on the growth of the convective core. An alternative method for suppressing the breathing pulses, used by Bono et al. (1997a,b), is to limit the rate at which the convective core can grow in order to prevent $`Y_c`$ from increasing. Inspection of Tables 1–5 of Castellani et al. (1991) shows that this ad hoc method leads to a greatly enlarged convective core that remains large until helium exhaustion. We infer therefore that the final helium profile should then contain a large discontinuity at the edge of the helium-exhausted core and thus should be markedly different from the profile produced by the first suppression method. Bono et al. (1997a,b) recently argued that the onset of helium-shell burning in post-HB stars is dramatically different from the smooth evolution shown in Figure 1a, especially for metal-rich stars with low envelope masses. Their results indicate that the helium-burning shell undergoes a series of “gravonuclear instabilities” caused by relaxation oscillations similar to the helium-shell flashes that occur later on the asymptotic giant branch. These instabilities lead to pronounced “gravonuclear loops” (GNLs) along the evolutionary tracks, which could have interesting observational consequences. Figures 1d and 2d give examples of these GNLs and the associated helium-shell instabilities (see §2). We have undertaken extensive calculations to understand the cause of these gravonuclear instabilities. Our main conclusions are: The occurrence of gravonuclear instabilities depends critically on the helium profile within the core at the end of the HB phase and hence on the method used to suppress the breathing pulses. Gravonuclear instabilities are only found when there is a large discontinuity in the helium abundance, which forces the helium burning to be confined to a narrow region at the edge of the core. Contrary to the Bono et al.
results, we find that gravonuclear instabilities are not caused by a high envelope opacity, nor do they depend on the envelope mass. Rather, they are a consequence of the method used by Bono et al. to suppress the breathing pulses. ## 2 Dependence of Instabilities on Core-Helium Profile We first explored the dependence of the gravonuclear instabilities on the composition profile within the core at the end of the HB phase. To do this, we computed a number of HB and post-HB evolutionary sequences for a star with a mass $`M=0.48M_{\odot }`$ and a heavy-element abundance $`Z=0.03`$ for various assumptions about the final composition profile. These model parameters were chosen to optimize the likelihood of gravonuclear instabilities according to the Bono et al. results. A very small time step of only 400 yr was used in the post-HB models in order to resolve any instabilities, if present. Moreover, a thermal stability analysis was performed on each of the $`30,000`$ post-HB models in each sequence to search for any unstable modes. Our first sequence was a canonical semiconvection sequence computed with the $`ϵ_g=0`$ method for suppressing the breathing pulses. The results, given in Figures 1a and 2a, show no signs of any instability. In particular, the onset of helium-shell burning is marked by only minor (and damped) ringing in the helium-burning luminosity. The composition profile for this sequence, given in Figure 3a, contains a discontinuity in the helium abundance at $`M_r=0.237M_{\odot }`$ corresponding to the outer edge of the semiconvective zone at its maximum extent during the HB phase. Interior to this discontinuity there is a broad region of varying helium abundance which is also a remnant of the previous semiconvection and which we will refer to as the “helium tail”. Note that the helium burning in Figure 3a covers a wide range in mass. We repeated these calculations using a “composition algorithm” to specify the size of the helium-depleted region instead of the canonical semiconvection algorithm. Essentially we required that the composition be completely mixed from the center out to a specified mass point in each model, regardless of whether this region was fully convective (a minimal code sketch is given below). No mixing was permitted outside this point. By varying the size of this mixed region during the HB evolution we were able to generate different composition profiles having more or less steep helium tails at core-helium exhaustion. All these profiles had a helium discontinuity at $`M_r=0.237M_{\odot }`$, as found with canonical semiconvection. This composition algorithm was turned off at the end of the HB, and the post-HB evolution was then followed in the same manner as in the canonical case. Figures 1b, 2b and 3b present the results for a sequence computed with this algorithm. The final composition profile given in Figure 3b was chosen to mimic the composition profile for the semiconvection sequence given in Figure 3a. The resulting track morphology and helium-burning luminosity are virtually identical to those for the semiconvection sequence. Moreover, no thermally unstable modes were found. This indicates that the post-HB evolution is not sensitive to how the composition profile at the end of the HB phase is actually produced. We then used our composition algorithm to compute sequences with shallower helium tails. No gravonuclear instabilities were found until the size of the helium tail was reduced to that shown in Figure 3c.
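A minimal sketch of this composition algorithm (the grid and the initial profile are illustrative placeholders, not our actual models) is:

```python
import numpy as np

def mix_core(m, y, m_mix):
    """Homogenize the abundance profile y(m) from the center out to m_mix,
    conserving the total helium mass in that region; no mixing outside."""
    y = y.copy()
    inside = m <= m_mix
    # mass-weighted mean Y interior to m_mix (uniform after mixing)
    y_mean = np.trapz(y[inside], m[inside]) / (m[inside][-1] - m[inside][0])
    y[inside] = y_mean
    return y

m = np.linspace(0.0, 0.48, 481)            # placeholder 0.48 Msun model grid
y0 = np.clip((m - 0.05) / 0.2, 0.0, 1.0)   # placeholder initial Y profile
y1 = mix_core(m, y0, 0.237)                # mix out to the discontinuity
```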
The helium burning in Figure 3c initially covered a broad region within the helium tail, and the models were then stable. After $`10^6`$ yr following core-helium exhaustion, however, this helium tail burned away, and the helium burning then shifted outward to a narrow region just outside the helium discontinuity. The helium burning immediately became unstable, giving rise to the flashes in Figure 2c and the GNLs in Figure 1c. The stability analysis of these models revealed the existence of thermally unstable modes. We have also considered the limiting case of a helium discontinuity and no tail (Figure 3d). As shown in Figures 1d and 2d, such a profile leads to strong gravonuclear instabilities and to large GNLs. The composition profile in Figure 3d should be similar to the profile produced by the Bono et al. method for suppressing the breathing pulses. Note that this method forces the helium burning to be confined to a narrow region just outside the helium discontinuity. The above results are not surprising. Schwarzschild & Härm (1965) showed that a nuclear burning shell must be thin to be unstable. Figures 1, 2 and 3 confirm that gravonuclear instabilities do not occur when the helium-burning region is broad. Only when the composition profile confines the burning to a narrow region, as in Figure 3d, do we find gravonuclear instabilities. As further confirmation of this result, Figure 4 shows the helium profile at four times during the post-HB evolution of the sequence plotted in Figures 1d, 2d and 3d. Panel (a) is the same as Figure 3d and corresponds to the onset of gravonuclear instability. Panel (b) shows the helium profile during the period of strong gravonuclear instability, while panel (c) shows the helium profile for the last model in which we found thermally unstable modes. The helium-burning region in Figure 4 progressively broadens with time until by panel (d) the models are completely stable. We conclude therefore that the gravonuclear instabilities disappear as soon as the helium-burning region broadens into its characteristic S-shape profile. ## 3 Dependence of Instabilities on Envelope Mass and Z We have also investigated whether the gravonuclear instabilities depend on the envelope mass and metallicity, as suggested by Bono et al. Figure 5 shows the time dependence of $`L_{He}`$ during the post-HB evolution of four sequences with $`M=0.70M_{\odot }`$ (and hence larger envelope mass). The composition profiles at the end of the HB phase for these sequences are virtually identical to those in the corresponding panels of Figure 3. Figure 5 shows the same overall behavior of $`L_{He}`$ as Figure 2, except for the shorter timescale of the instabilities caused by the higher $`L_{He}`$ of these higher mass sequences. It is clear therefore that the occurrence of gravonuclear instabilities does not depend on the envelope mass. The same conclusion applies to the metallicity. Calculations for a $`M=0.52M_{\odot }`$, $`Z=0.002`$ sequence, computed for the same helium profile as in Figure 3d, show extensive GNLs just as for our higher metallicity sequences. ## 4 Conclusions Extensive calculations obtained with three independent stellar evolution codes have enabled us to consistently induce or remove GNLs according to the helium-composition profile at the end of core-helium burning. We have shown that GNLs are not favored by higher metallicities or lower masses, as postulated by Bono et al. (1997a,b).
Rather, GNLs are produced by the narrowness of the helium-burning region when the helium-burning shell ignites immediately following core-helium exhaustion. This is most influenced by the way that the convective and semiconvective regions are calculated at the completion of core-helium burning and further illustrates our incomplete knowledge of this complicated phase. If we could observationally determine the existence, or otherwise, of GNLs, it would be a direct probe into the helium profile at helium exhaustion and hence provide information about the occurrence of core-breathing pulses. ## References > Bono, G., Caputo, F., Cassisi, S., Castellani, V. & Marconi, M. 1997a, ApJ, 479, 279 > > Bono, G., Caputo, F., Cassisi, S., Castellani, V. & Marconi, M. 1997b, ApJ, 489, 822 > > Castellani, V., Chieffi, A., & Pulone, L. 1991, ApJS, 76, 911 > > Dorman, B., & Rood, R. T. 1993, ApJ, 409, 387 > > Robertson, J. W., & Faulkner, D. J. 1972, ApJ, 171, 309 > > Schwarzschild, M., & Härm, R. 1965, ApJ, 142, 855
## 1 Introduction It has been suggested that the universe can be viewed as a fractal where the density of the matter obeys the law $$\rho \propto r^{-D}.$$ (1) If the fractal power index is $`D=2`$, all the objects in the universe are self-similar, since the gravitational potential does not change with radius $`r`$: $$\phi =\frac{Gm}{r}=G\rho r^2=const.$$ (2) Fractal galaxy distribution was discussed in , , , . This can be described in terms of the radial density run $$N(<R)=\int _0^Rdr\sum _i\delta (r-r_i)\propto R^D$$ (3) where $`N(<R)`$ is the average number of galaxies within radius $`R`$ from any given galaxy. The conventional point of view , , , is that, on scales $`<20h^{-1}\mathrm{Mpc}`$, galaxies obey $`D\simeq 1.2`$–$`2.2`$. On scales $`>20h^{-1}\mathrm{Mpc}`$, the fractal power index increases with scale towards the value $`D=3`$ on scales of about $`100h^{-1}\mathrm{Mpc}`$. On the contrary, Pietronero and collaborators , , claimed that galaxies have a fractal distribution with constant $`D\simeq 2`$ on all scales. In the model of the universe with the linear law of evolution , the density of the matter obeys the law (1) with the power index $`D=2`$. Such a law arises due to the linear evolution of the scale of mass with time. Below, within the framework of this model, the fractal structure of the universe will be considered. ## 2 The universe with the linear law of evolution Let us consider the model of the homogeneous and isotropic universe based on the premise that the coordinate system of reference is not defined by the matter but is a priori specified. Take the coordinate system of reference in the form $$dl^2=a(t)^2d\stackrel{~}{l}^2$$ (4) where $`d\stackrel{~}{l}^2`$ is the Euclidean metric, and $`t`$ is the absolute time. The scale factor of the universe is a function of time. Specify the evolution law of the scale factor as linear, such that the scale factor grows with the velocity of light: $$a=ct.$$ (5) Consider the universe as a particle relative to the coordinate system of reference. The total mass of the universe relative to the coordinate system of reference includes the mass of the matter and the energy of its gravity. Adopt the total mass of the universe equal to zero; then the mass of the matter is equal to the energy of its gravity: $$c^2=\frac{Gm}{a}.$$ (6) Allowing for eq. (5), from eq. (6) it follows that the mass of the matter changes with time as $$m=\frac{c^2a}{G}=\frac{c^3t}{G},$$ (7) and the density of the matter as $$\rho =\frac{3c^2}{4\pi Ga^2}=\frac{3}{4\pi Gt^2}.$$ (8) So the universe with the linear law of evolution has the fractal structure with the power index $`D=2`$. The fractal structure arises due to the linear evolution of the scale of mass with time and hence due to the linear dependence of the scale of mass on the distance, $`M\propto t\propto r`$. ## 3 The permanent hierarchy of scales In view of eq. (8), every distance defines its own density $$\rho _i\propto r_i^{-2}.$$ (9) The objects of the radii $`r_i`$ are arranged in the permanent hierarchy of scales. This hierarchy arises due to the evolution of the scale of mass. Since galaxies and clusters of galaxies approximately obey the law (9), the formation of these is not caused by the growth of the density fluctuations by gravitational instability. Since stars do not obey the law (9), it is natural to think that stars are formed due to the growth of the density fluctuations by gravitational instability. In this case the radius $`r_i`$ defines the size of the region from which the star forms by gravitational instability. In view of eq.
(9), the Jeans length for the region of the radius $`r_i`$ is given by $$\lambda _{Ji}\propto \rho _i^{-1/2}\propto r_i.$$ (10) So all the regions of the radii $`r_i`$ are scale invariant from the viewpoint of the growth of density fluctuations by gravitational instability. Potential fluctuations connect two scales, the scale of homogeneity and the scale of fluctuations: $$\frac{\delta M_i}{M_i}=\frac{\delta \phi }{\phi }.$$ (11) Here $`M_i`$ is the scale of homogeneity, and $`\delta M_i`$ is the scale of fluctuations. The size of fluctuations $`\delta r_i`$ is given by $$\frac{\delta r_i}{r_i}=\left(\frac{\delta M_i}{M_i}\right)^{1/2}.$$ (12) In the epoch of recombination, $`z=1400`$, the potential fluctuations are of order of the cosmic microwave background (CMB) anisotropy, $`\delta \phi /\phi \sim \delta T/T\sim 10^{-5}`$ . Hence the size of fluctuations is of order $`\delta r_i\sim r_i\times 10^{-2.5}`$. Before recombination, the Jeans length is of order of the size of the region, $`\lambda _{Ji}\sim r_i`$. Hence the size of fluctuations is less than the Jeans length. After recombination, the Jeans length decreases and becomes of order $`\lambda _{Ji}\sim r_i\times 10^{-4}`$. Hence the size of fluctuations becomes more than the Jeans length, and the density fluctuations grow by gravitational instability. ## 4 The hierarchy of the preferred scales Consider the hierarchy of the preferred scales arranged in the following way. Let potential fluctuations connect two scales: $$\frac{M_j}{M_i}=\frac{\delta \phi }{\phi }.$$ (13) Here $`M_i`$ is the scale of homogeneity, and $`M_j`$ is the scale of fluctuations. $`M_j`$, being the scale of fluctuations relative to the scale $`M_i`$, in turn defines another scale of homogeneity. Develop the hierarchy of the preferred scales starting from the scale defined by the mass and the radius of the universe. Determine the modern age of the universe within the framework of the universe with the linear law of evolution . Since the density of the relativistic matter is defined by its temperature as $$\rho \propto T^4,$$ (14) from eq. (8) it follows that the temperature of the relativistic matter changes with time as $$T\propto a^{-1/2}\propto t^{-1/2}.$$ (15) In view of eq. (15), the modern age of the universe is given by $$t_0=\alpha t_{Pl}\left(\frac{T_{Pl}}{T_0}\right)^2$$ (16) where $`\alpha `$ is the electromagnetic coupling, the subscript $`Pl`$ corresponds to the Planck period, and the subscript $`0`$ corresponds to the modern period. Calculations yield the value $`t_0=1.06\times 10^{18}\mathrm{s}`$. In view of eq. (7), the mass of the universe is $`M_U=4.29\times 10^{56}\mathrm{g}`$. This value corresponds to the relativistic matter. To convert to the usual matter it is necessary to multiply the value by a factor of 2. In view of eq. (5), the radius of the universe is $`r_U=3.18\times 10^{28}\mathrm{cm}`$. The potential fluctuation $`\delta \phi /\phi `$ can be determined from the CMB spectrum. The size of the potential fluctuation $`\mathrm{\Delta }r`$ represents the feature in the CMB spectrum. In the fractal universe, the multipole of the feature in the CMB spectrum is given by $$\ell _{eff}=\left(\frac{\mathrm{\Delta }r}{r}\right)^{-1}=\left(\frac{\mathrm{\Delta }M}{M}\right)^{-1/2}=\left(\frac{\delta \phi }{\phi }\right)^{-1/2}.$$ (17) Anisotropy measurements on degree scales pin down the feature in the CMB spectrum. The position of the feature is $`\ell _{eff}=263`$ , $`\ell _{eff}=260`$ . Adopt the value $`\ell _{eff}=260`$. This corresponds to the potential fluctuation $`\delta \phi /\phi =1.48\times 10^{-5}`$.
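The quoted numbers follow from elementary arithmetic; the short check below (a sketch in cgs units, with standard values of the Planck time and temperature, which are not given explicitly in the text) reproduces $`t_0`$, $`M_U`$, $`r_U`$ and the potential fluctuation implied by $`\ell _{eff}=260`$:

```python
# cgs constants
c, G = 2.998e10, 6.674e-8
alpha = 1.0 / 137.036            # electromagnetic coupling
t_pl, T_pl = 5.39e-44, 1.417e32  # Planck time [s] and temperature [K]
T0 = 2.73                        # present CMB temperature [K]

t0 = alpha * t_pl * (T_pl / T0)**2   # eq. (16): ~1.06e18 s
m_u = c**3 * t0 / G                  # eq. (7):  ~4.29e56 g
r_u = c * t0                         # eq. (5):  ~3.18e28 cm
dphi = 260.0**-2                     # eq. (17): ell_eff = (dphi/phi)^(-1/2)

print(f"t0 = {t0:.3e} s, M_U = {m_u:.3e} g, r_U = {r_u:.3e} cm")
print(f"delta phi / phi = {dphi:.3e}")   # ~1.48e-5
```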
With the use of the above determined mass and radius of the universe and potential fluctuation, develop the hierarchy of the preferred scales. $$\begin{array}{cc}M_1=8.6\times 10^{56}\mathrm{g}\hfill & r_1=3.2\times 10^{28}\mathrm{cm}\hfill \\ M_2=1.3\times 10^{52}\mathrm{g}\hfill & r_2=1.2\times 10^{26}\mathrm{cm}\hfill \\ M_3=1.9\times 10^{47}\mathrm{g}\hfill & r_3=4.7\times 10^{23}\mathrm{cm}\hfill \\ M_4=2.8\times 10^{42}\mathrm{g}\hfill & r_4=1.8\times 10^{21}\mathrm{cm}\hfill \\ M_5=4.1\times 10^{37}\mathrm{g}\hfill & r_5=7.0\times 10^{18}\mathrm{cm}\hfill \\ M_6=6.1\times 10^{32}\mathrm{g}\hfill & r_6=2.7\times 10^{16}\mathrm{cm}\hfill \end{array}$$ Here the second scale can be identified with superclusters, the third scale can be identified with clusters of galaxies, the fourth scale can be identified with galaxies, the fifth scale can be identified with star clusters, the sixth scale can be identified with stars. The radius $`r_6=2.7\times 10^{16}\mathrm{cm}`$ corresponds to the size of the region from which the star forms by gravitational instability.
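The table itself is generated by the recursion implied by eqs. (12)–(13): each step multiplies the mass by $`\delta \phi /\phi `$ and the radius by $`(\delta \phi /\phi )^{1/2}`$. A few lines of code (a sketch using the numbers derived above) reproduce all six scales:

```python
dphi = 1.48e-5            # potential fluctuation, from ell_eff = 260
M, r = 8.6e56, 3.2e28     # scale 1: mass [g] and radius [cm] of the universe
for i in range(1, 7):
    print(f"M_{i} = {M:.1e} g   r_{i} = {r:.1e} cm")
    M *= dphi             # eq. (13): next scale of homogeneity
    r *= dphi**0.5        # eq. (12): sizes scale as the square root
```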
# Gamma-Ray Burst - Supernova Relation ## 1 Introduction The most dramatic recent breakthrough in our understanding of gamma-ray bursts (GRBs) was made by the BeppoSAX team, which discovered the first X-ray afterglow (Costa et al. 1997). That was quickly followed with the discovery of optical (van Paradijs et al. 1997) and radio (Frail et al. 1997) afterglows, and the determination of the first optical redshift (Metzger et al. 1997). By now about two dozen afterglows have been detected, almost all within a fraction of an arc second of very faint galaxies, with typical R-band magnitudes 24 - 26. Approximately ten redshifts have been measured. Gradually evidence emerged that GRBs appear to be associated with star forming regions (Paczyński 1998, Kulkarni et al. 1998, Galama et al. 1998). In several cases a direct association with a supernova (SN) appeared: GRB 980425 - SN 1998bw (Galama et al. 1998), GRB 980326 (Bloom et al. 1999, Castro-Tirado & Gorosabel 1999), and GRB 970228 (Reichart 1999, Galama et al. 1999). We should keep in mind that all these exciting developments concern the long duration GRBs, as these were the only type for which accurate coordinates became available within hours of the burst. The rest of this paper is about the long gamma-ray bursts only. Until recently the most popular models of gamma-ray bursts (GRBs) were related to merging neutron stars, and neutron stars merging with stellar mass black holes. However, these would be located far away from star forming regions, and far away from parent dwarf galaxies. This does not seem to be the case for the location of GRB afterglows, and this is the reason why an association of bursts with explosions of massive stars became popular. Throughout this paper I shall adopt popular assumptions and terminology. The bursts with strong high energy spectra require very large bulk Lorentz factors, $`\mathrm{\Gamma }>300`$, to reconcile their rapid variability with their huge luminosities and the absence of a spectral cut-off due to pair creation (Baring & Harding 1996). During its activity a GRB's intensity varies rapidly. Several seconds or minutes after the beginning of the burst an afterglow becomes dominant, as recently shown by Burenin et al. (1999). The afterglows fade smoothly, usually as a broken power law of time, and they are almost certainly due to the interaction between the relativistic ejecta and ambient medium. Their emission is non-thermal, and thus it is fundamentally different from the thermal emission of a non-relativistic supernova. When the ejecta decelerate to non-relativistic expansion a GRB remnant is created, and at this stage it may resemble a supernova remnant. ## 2 Rates I adopt the Hubble constant $`H_0=70\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ throughout this paper. According to Wijers et al. (1998) the energy generation rate due to GRBs is at the present epoch (i.e. z = 0) equal to $$ϵ_{GRB,0}\approx 10^{52}\mathrm{erg}\mathrm{Gpc}^{-3}\mathrm{yr}^{-1},$$ (1) assuming that the GRB rate follows the star formation rate as a function of redshift. Note that this number is independent of beaming of GRB emission. If there is beaming the energy per GRB is reduced, but the number of GRB explosions increases, so that the product, i.e. $`ϵ_{GRB,0}`$, remains unchanged. Using a very different analysis Schmidt (1999) obtained a GRB energy generation rate about the same as Wijers et al. (1998). The rate of all types of supernovae is approximately 1.5 per $`10^{10}L_{B,\odot }`$ per century (van den Bergh & Tammann 1991).
The mass density of the universe is probably $`\mathrm{\Omega }_m\approx 0.25`$, and the average mass to blue light ratio is $`M/L_B\approx 200M_{\odot }/L_{\odot }`$ (Bahcall et al. 1995). Therefore, the blue luminosity within one cubic gigaparsec is $`1.6\times 10^{17}L_{\odot }`$, and the local supernova rate is $$n_{SN}\approx 2.4\times 10^5\mathrm{Gpc}^{-3}\mathrm{yr}^{-1}.$$ (2) Adopting $`10^{51}`$ erg of kinetic energy per supernova we obtain the overall energy generation rate (at z = 0) $$ϵ_{SN,0}\approx 2.4\times 10^{56}\mathrm{erg}\mathrm{Gpc}^{-3}\mathrm{yr}^{-1}.$$ (3) It appears that the global energy release rate is more than 4 orders of magnitude higher for supernovae than it is for gamma-ray bursts (Wijers et al. 1998, Schmidt 1999). Obviously, both rates are uncertain. It is possible that the kinetic energy of GRB ejecta is considerably higher than their gamma-ray output (Wijers et al. 1998, Kumar 1999). It is also possible that the actual supernova rate is much higher, as intrinsically faint explosions, like SN 1987A, are difficult to discover, yet they release about as much energy as ordinary SN Ia or SN II. While both $`ϵ_{GRB}`$ and $`ϵ_{SN}`$ may well be higher than the estimates given by eqs. (1) and (3), it is likely that the ratio $`ϵ_{SN}/ϵ_{GRB}\gg 1`$. If this seemingly obvious conclusion is correct it has consequences for finding GRB remnants. There is no generally accepted quantitative model of GRB emission at this time, and we may only guess at the ratio of gamma-ray energy to kinetic energy of the ejecta. While it is common to think that this ratio is small (Wijers et al. 1998, Kumar 1999), it may just as well be much larger than unity, i.e. the kinetic energy may turn out to be much smaller than the gamma-ray energy. This possibility follows from the recent analysis of the non-relativistic radio remnant of GRB 970508 by Frail, Waxman and Kulkarni (1999), who find that the total energy is only $`5\times 10^{50}\mathrm{erg}`$. At the same time Rhoads (1999b) finds that GRB 970508 was not strongly beamed, as its afterglow had an unbroken power-law decline for over 100 days. The total gamma-ray emission was at least $`3\times 10^{50}\mathrm{erg}`$ for this burst (Rhoads 1999b). If these claims are correct then for this burst the gamma-ray and kinetic energies were comparable, and this rules out the popular ‘internal shock’ models, which are very inefficient in generating gamma-rays (e.g. Kumar 1999). Of course, GRB 970508 was not a typical gamma-ray burst. Its afterglow was the only one which first increased in luminosity for about 2 days, and later declined as an unbroken power law for over 100 days. This is also the only event for which quantitative estimates were made for both the gamma-ray and kinetic energies. We have no direct information about the ratio of these two energy forms for any other burst. ## 3 GRB and SN Remnants The global energetics of supernovae and gamma-ray bursts has direct implications for the extra energetic supernova remnants. Recently, several suggestions were made that these may be remnants of gamma-ray burst explosions (Efremov et al. 1998, Loeb & Perna 1998, Wang 1999). However, if a typical GRB generates a factor $`f`$ more energy than a typical supernova then the GRB rate must be lower than the supernova rate by a factor $`10^4f`$, and correspondingly the number of GRB remnants must be vastly smaller than suggested by the number of very energetic remnants. Therefore, it is unlikely that the very energetic supernova remnants are related to gamma-ray bursts, unless GRBs generate vastly more kinetic energy than gamma-ray energy.
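The arithmetic behind eqs. (2)–(3) is elementary; a short check (assuming the stated $`H_0`$, $`\mathrm{\Omega }_m`$ and $`M/L_B`$, and the standard critical density $`2.775\times 10^{11}h^2M_{\odot }\mathrm{Mpc}^{-3}`$, which is not quoted in the text) reproduces the rates:

```python
# critical density in solar masses per Mpc^3 for H0 = 70 km/s/Mpc
h = 0.70
rho_crit = 2.775e11 * h**2          # M_sun / Mpc^3
omega_m, m_over_l = 0.25, 200.0

# blue luminosity density, converted to L_sun per Gpc^3
l_b = omega_m * rho_crit / m_over_l * 1e9    # ~1.7e17 L_sun / Gpc^3

# 1.5 SNe per 1e10 L_B,sun per century -> SNe per Gpc^3 per yr  (eq. 2)
n_sn = 1.5 / (1e10 * 100.0) * l_b            # ~2.4e5
eps_sn = n_sn * 1e51                          # erg / Gpc^3 / yr  (eq. 3)
eps_grb = 1e52                                # eq. (1)
print(f"n_SN ~ {n_sn:.1e} /Gpc^3/yr, eps_SN/eps_GRB ~ {eps_sn/eps_grb:.0e}")
```

The final ratio comes out slightly above $`10^4`$, i.e. "more than 4 orders of magnitude", as stated above.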
Let us suppose that the energetic remnants were caused by single explosions. We know that some rare supernovae are much more powerful than average. For example, SN 1998bw has released $`20\times 10^{51}\mathrm{erg}`$ (cf. Woosley et al. 1999, Iwamoto 1999, and references therein, but a much less energetic explosion has been proposed by Hoflich et al. 1999). It may well be that some stellar explosions are even more powerful than SN 1998bw. However, there is no obvious reason why the most powerful stellar explosions should be related to gamma-ray bursts. A classical GRB with a hard spectrum requires ejecta with a bulk Lorentz factor of $`\sim 300`$, or more. Nobody knows how to generate such a highly ultra-relativistic outflow, and it is not clear that the total energetics of the explosion has to be extraordinarily large, as a strongly beamed explosion may appear to be much more energetic than it really is. In other words: the ability to generate hard gamma-ray emission and the overall energetics of an explosion may be correlated with each other, or just as well the two may be uncorrelated. As long as we do not have a sound quantitative model there is no justification for either assumption; a semi-empirical approach may be more promising than theoretical speculations. ## 4 GRB and SN Beaming The possibility that highly relativistic GRB explosions may be jet-like was considered for a very long time, and I do not know who was the first to make the suggestion. Some similarity between the GRBs and the blazars is so striking that the term ‘micro-quasar’ was suggested some years ago (Paczyński 1993). Similarities of these two classes of objects were recently analyzed by Dermer & Chiang (1998). If these are taken seriously a very strong beaming of GRBs follows, with a drastic reduction of the energetics compared to a spherical explosion. Recently, the breaks in the rate of decline of several afterglows were interpreted as evidence for beaming (Kulkarni et al. 1999, Stanek et al. 1999, Harrison et al. 1999). If GRB emission is confined to a very narrow beam then GRBs may not need much more energy than the ‘standard’ $`10^{51}\mathrm{erg}`$ of an ordinary supernova. At this time there is no robust estimate of the degree and the possible range of GRB beaming (e.g. Rhoads 1997, 1999a,b). More than a decade ago observations of a ‘mystery spot’ near SN 1987A were reported by Karovska et al. (1987) and Matcher et al. (1987). Piran & Nakamura (1987) suggested that this might have been a jet generated by the supernova. Not knowing about SN 1987A, Cen (1998) suggested that supernovae may create relativistic jets, which may give rise to gamma-ray bursts. This idea gained some support when the new analysis of SN 1987A data provided stronger evidence for the original ‘mystery spot’, and in addition provided evidence for a second spot on the opposite side of the supernova, suggesting relativistic jets (Nisenson & Papaliolios 1999). Evidence for a strong non-sphericity of SN 1998S was reported by Leonard et al. (1999). Hoflich et al. (1999) claim that the explosion of SN 1998bw was highly non-spherical. Jets in supernovae became popular (e.g. Khokhlov et al. 1999, Cen 1999, Nagataki 1999), and were often suggested to be associated with beamed gamma-ray emission. A schematic picture may involve a quasi-spherical and non-relativistic supernova explosion with a narrow ultra-relativistic jet streaming along the rotation axis.
The possibility that some supernovae may generate jets is very interesting, and it should be possible to test it with VLBA observations of very young radio supernovae. However, there is no reason to expect that all jets must generate gamma-ray bursts, as this would require all outflows to reach the huge Lorentz factor $`\mathrm{\Gamma }\sim 300`$. It seems much more likely that there is a broad range of jet velocities, and only some are capable of GRB-like emission. ## 5 Hypernova While the term ‘hypernova’ became popular recently, it has been sporadically used in the past (e.g. Wilkinson & Bruyn 1990). It does not have a clear, universally accepted meaning. The following are several examples. 1. Hypernova is just a name. The optical light curve of GRB 970508 was several hundred times brighter than any SN ever discovered. The absolute luminosity of several other afterglows, e.g. 990123, 971214, 990510, was higher by another factor $`\sim 100`$ (cf. Norris et al. 1999). So, rather than call it a super-super-nova, or a super-duper-nova, the term hypernova seems reasonable as a description of the phenomenon, with no implications for its nature. 2. Hypernova is a special type of supernova explosion. At least some optical afterglows appear to be associated with star forming regions. Note that GRBs are many orders of magnitude less common than supernovae, and there may be an almost continuous transition from a typical massive SN to a typical GRB; SN 1987A with its relativistic jet and GRB 980425 - SN 1998bw may be examples of intermediate explosions. The link between the GRBs and the deaths of massive stars does not specify the mechanism for a GRB, and it is testable without a need for theoretical models. The question ‘are GRBs in star forming regions?’ can be answered observationally. In this context a ‘hypernova’ is an explosion of a massive star, soon after its formation. ‘Soon’ means several million years, not the delayed explosion of the merging neutron star type. 3. Hypernova is a rotationally driven supernova. The idea that at least some supernova explosions are driven by a rapid rotation of a compact core has been around for several decades (e.g. Ostriker & Gunn 1971). A qualitative reasoning proceeds as follows. A spherical collapse of a massive stellar core transforms $`3\times 10^{53}\mathrm{erg}`$ of gravitational energy into thermal energy of a hot neutron star, and 99.7% of that energy is lost in a powerful neutrino - anti-neutrino burst, with the remaining $`10^{51}\mathrm{erg}`$ used to power a supernova explosion. If a pre-collapse core is rapidly rotating, then an additional $`3\times 10^{53}\mathrm{erg}`$ may be stored in the rotation of the collapsed object. Some rotational energy is lost in gravitational radiation, but a large fraction cannot be readily disposed of. If an ultra-strong magnetic field is generated by the differential rotation then it may act as the energy transmitter from the spinning relativistic object to the envelope, powering an explosion, perhaps in the form of a relativistic jet. The more rotation there is, the more jet-like the resulting explosion, and the more relativistic the jet. This is just a speculation at this time, recycled in dozens, perhaps hundreds of theoretical papers, with terms like a ‘micro-quasar’ (Paczyński 1993) or a ‘failed supernova’ (Woosley 1993) used at least as often as a ‘hypernova’ (Paczyński 1998).
## 6 Pessimistic Conclusions It is useful to put theoretical work on gamma-ray bursts in a broader perspective of other exotic objects and phenomena in order to assess the prospects for short term progress. There is almost universal agreement that GRB emission is non-thermal. Several important correlations were found for various GRB properties (e.g. Fenimore et al. 1995, Liang & Kargatis 1996, Beloborodov et al. 1998, Stern et al. 1999, Norris et al. 1999), but it is not clear how to incorporate them in a theoretical model. This is not surprising. It is very difficult to prove which specific physical processes are responsible for the operation of a non-thermal source - consider current theories of quasars and radio pulsars. Well into the fourth decade of their development, and with no serious ambiguity about the relevant distance scales, there are still no generally accepted theories that account for either quasar or pulsar non-thermal emission. There is no reason why GRBs should be easier to understand. For several decades there has been a consensus that Type Ia supernovae result from explosive carbon burning in white dwarfs close to the Chandrasekhar limit, while all other supernovae are related to the core collapse of various massive stars. However, the detailed physics is so complicated that there is still no satisfactory and quantitative model that could describe the propagation of the nuclear burning front in SN Ia, without introducing free, adjustable parameters. There is also no agreement on how the $`\sim 0.3\%`$ of the energy released in core collapse is channeled to drive the explosion of a SN II. As far as I can tell, if there were no observations of SN II it would be impossible to predict them from first principles, even though hundreds of sophisticated papers have been written on the subject. The guidance provided by the observations of GRBs and their afterglows is less clear than it has been for supernovae. In my view there is no way to prove with theoretical models that either merging neutron stars or hypernova explosions should generate gamma-ray bursts. It is hard to believe that the puzzle of the central engine can be solved for GRBs more readily than for supernovae. There is plenty of observational evidence that a huge diversity of rotating objects generates either bipolar outflows or jets - the phenomenon is obviously natural, as it appears so commonly in nature. Yet, there is no quantitative theory of the phenomenon that could explain (without ad hoc assumptions and ad hoc free parameters or free functions) what outflow velocities, or what rates of mass loss, should be associated with any particular object. The same applies to gamma-ray bursts and the current attempts to explain why their ejecta are likely to be beamed. There is no theory that could predict the outflow velocity of any jet, but it seems natural to expect that only very specific conditions make it possible to reach an outflow with the Lorentz factor $`\mathrm{\Gamma }\sim 300`$, as needed for HE bursts. There may be many more jets with more modest values of $`\mathrm{\Gamma }\sim 30`$ or $`\sim 3`$, or not relativistic at all. There is no direct evidence for a large Lorentz factor for the NHE bursts, which appear to have no photons above $`\sim 300\mathrm{keV}`$ (Pendleton et al. 1997), and the pair creation argument does not apply to them. Perhaps the NHE GRBs are driven by non-relativistic explosions. ## 7 Optimistic Conclusions In spite of all theoretical problems there has been spectacular progress in our understanding of gamma-ray bursts.
The statistics of the GRB distribution obtained with BATSE on the Compton Gamma Ray Observatory (Meegan et al. 1992, Paczyński 1995, and references therein) provided a very strong argument for a cosmological distance scale to the majority of GRBs. The obviously explosive nature of gamma-ray bursts provided the basis for the theoretical prediction of the afterglows as the products of interaction between GRB ejecta and ambient medium (Paczyński & Rhoads 1993, Katz 1994, Mészáros & Rees 1997). This prediction was confirmed with the discovery of afterglows with BeppoSAX (Costa et al. 1997), and soon provided the proof for the cosmological distance (Metzger et al. 1997). The observed distribution of the afterglows with respect to host galaxies indicated that GRBs are associated with star forming regions, and therefore with the explosions of massive stars, rather than with merging neutron stars (Paczyński 1998, Kulkarni et al. 1998, Galama et al. 1998). There is evidence that at least some bursts are directly associated with explosions of some supernovae (Galama et al. 1998, Bloom et al. 1999, Castro-Tirado & Gorosabel 1999, Reichart 1999, Galama et al. 1999). There is every reason to expect more progress along similar lines: observations and their analysis providing more and more hints about the nature of the bursts. The following are some of the likely lines of progress in our understanding. The new GRB instruments will provide hundreds of accurate positions within seconds of the burst's beginning, for long as well as for short bursts. We may expect that the distribution in distance will soon be known not only for the long HE bursts, but also for the NHE bursts and for the short bursts. It may well be that in several years some GRB will be the redshift record holder. If GRBs trace the massive star formation rate, then they may become a new probe of the process in very dusty regions, or at very high redshifts. While old GRB remnants may be difficult to distinguish from SN remnants, there is a possibility that a clear signature of the effect of non-thermal emission from a GRB and its afterglow may be detected in the interstellar medium (e.g. Perna et al. 1999, Draine 1999, Weth et al. 1999), and it may turn out to be a powerful new diagnostic for these events. Interstellar scintillation has already proven to be an important research tool for estimating the expansion of radio afterglows (Goodman 1997, Frail et al. 1997). If GRBs are related to explosions of massive stars then we expect the circum-stellar gas to be a leftover from a strong stellar wind, as all massive stars appear to have winds. Currently there is mixed evidence from afterglow studies, with some events consistent with ambient gas density falling off as $`1/r^2`$, as expected for a wind environment, while in others the ambient gas density appeared to be constant (Chevalier & Li 1999a,b). With many more afterglows followed with multi-band studies it will be possible to determine which environment is more common, and to make inferences about the nature of the exploding object. At a cost much lower than any GRB space mission a super-super-ROTSE or a super-super-LOTIS may be developed to follow up on the experience of ROTSE (Akerlof et al. 1999) and LOTIS (Williams et al. 1999). At a cost less than $`\$10^6`$ it should be possible to implement an all-sky optical monitoring system sensitive to optical flashes of $`\sim 1`$ minute duration, like the one discovered by ROTSE (Akerlof et al. 1999), detectable without any GRB trigger.
There may be many more optical flashes than gamma-ray bursts if less extreme Lorentz factors are sufficient for generating optical flashes. Rather obviously, a major difficulty is not hardware but software. We already know that some supernovae (SN 1998bw) eject some matter at a relativistic or sub-relativistic velocity (Waxman & Loeb 1999). There is a fairly strong case for a relativistic jet from SN 1987A (Nisenson & Papaliolios 1999). We may expect (or at least hope) that other cases of relativistic motion will be discovered in other SN. For supernovae within $`\sim 100`$ Mpc it may be possible to detect anisotropy in their ejecta, perhaps even superluminal jets, using the VLBA. If jets are detected in many cases it will be possible to study the distribution of jet velocities. When the number of recorded supernova explosions exceeds $`10^4`$ we shall know more about the high energy tail of their power distribution, and we may learn whether there is a sharp maximum or an extended tail to the explosions in the $`10^{53}`$–$`10^{54}\mathrm{erg}`$ range. The ever more vigorous searches for distant (i.e. faint) supernovae will discover optical afterglows without a need for a GRB alert (Rhoads 1997). There may be a rich diversity of SN-like or afterglow-like events, perhaps even optical transients from merging neutron stars (Li & Paczyński 1998). If the past can be used as a guide for the future then the most spectacular breakthroughs in the observations and understanding of gamma-ray bursts will be unexpected, just as the most recent BeppoSAX breakthrough was. An example may be the recent empirical finding of a very tight correlation between photon energy-dependent lags and peak luminosities of gamma-ray bursts (Norris et al. 1999). This work was not supported by any grant.
# Liquid-Gas phase transition in Bose-Einstein Condensates with time evolution ## Abstract We study the effects of a repulsive three-body interaction on a system of trapped ultra-cold atoms in a Bose-Einstein condensed state. The stationary solutions of the corresponding $`s`$-wave non-linear Schrödinger equation suggest a scenario of first-order liquid-gas phase transition in the condensed state up to a critical strength of the effective three-body force. The time evolution of the condensate with feeding process and three-body recombination losses has a new characteristic pattern. Also, the decay time of the dense (liquid) phase is longer than expected due to strong oscillations of the mean-square radius. PACS 03.75.Fi, 36.40.Ei, 05.30.Jp, 34.10.+x The experimental evidence of Bose-Einstein condensation (BEC) in magnetically trapped weakly interacting atoms brought considerable support to the theoretical research on bosonic condensation. The nature of the effective atom-atom interaction determines the stability of the condensed state: the two-body pseudopotential is repulsive for a positive $`s`$-wave atom-atom scattering length and it is attractive for a negative scattering length . Ultra-cold trapped atoms with a repulsive two-body interaction undergo a Bose-Einstein phase transition to a stable condensed state, found experimentally in a number of cases, as for <sup>87</sup>Rb , <sup>23</sup>Na and <sup>7</sup>Li . However, a condensed state of atoms with negative $`s`$-wave atom-atom scattering length would be unstable for a large number of atoms . It was indeed observed in the <sup>7</sup>Li gas , for which the $`s`$-wave scattering length is $`a=-(14.5\pm 0.4)`$ Å, that the number of allowed atoms in the condensed state was limited to a maximum value between 650 and 1300, which is consistent with the mean-field prediction . From a theoretical approach, the addition of a repulsive three-body interaction can extend considerably the region of stability for a condensate even for a very weak three-body force . As one can observe from Refs. , both signs for the three-body interaction are, in principle, allowed. However, in the present study we only consider the case of a repulsive three-body elastic interaction together with an attractive two-body interaction. We will show that, due to the repulsive three-body force, new physical aspects appear in the time evolution of the condensate. With respect to the static situation, it was suggested that, for a large number of bosons, the three-body repulsion can overcome the two-body attraction, and a stable condensate will appear in the trap . Singh and Rokhsar have also observed that above a critical value the only local minimum is a dense gas state, where the neglect of three-body collisions fails. In this work, using the mean-field approximation, we develop the scenario of collapse, which includes two aspects of the three-body interaction, that is, recombination and repulsive mean-field interaction. We begin by investigating the competition between the leading term of an attractive two-body interaction, which originates from a negative two-atom $`s`$-wave scattering length, and a repulsive three-body interaction, which can occur in the Efimov limit , when $`|a|\to \infty `$. (The physics of three atoms in the Efimov limit is discussed in Refs. ).
We first consider the stationary solutions of the corresponding extension of the Ginzburg-Pitaevskii-Gross (GPG) nonlinear Schrödinger equation (NLSE), for a fixed number of particles, without dissipative terms, extending an analysis previously reported in Refs. . The liquid-gas phase transition in the condensate, suggested in , was confirmed by a more detailed analysis in the present stationary calculations. Then, the time evolution of the feeding process of the condensate by an external source is obtained by solving the time-dependent NLSE with a repulsive three-body interaction (given by $`g_3>0`$) and dissipation due to three-body recombination processes. The dramatic collapse, and the consequent atom loss, that happens at the critical number of atoms (when $`g_3=0`$) is softened by the addition of the three-body repulsive force. The decay time of the liquid phase is also unexpectedly long, when compared with the decay time that occurs for $`g_3=0`$, which gives a clue about the possible observation of three-body interaction effects. Our results point out that the mean-square radius is an important observable to be analyzed experimentally to study the dynamics of the growth and collapse of the condensate . In the present study, in order to emphasize the real part of the three-body interaction, we choose $`g_3`$ significantly larger than the magnitude of the dissipative term; although, in general, they are expected to be of the same order. The NLSE, which describes the condensed wave-function in the mean-field approximation, after considering the two-body attractive and three-body repulsive effective interactions, is variationally obtained from the corresponding effective Lagrangian (see Gammal et al. in ). We consider a stationary solution, $`\mathrm{\Psi }(\vec{r},t)=e^{-i\mu t/\hbar }\psi (\vec{r})`$, where $`\mu `$ is the chemical potential and $`\psi (\vec{r})`$ is normalized to the number of atoms $`N`$. By rescaling the NLSE for the $`s`$-wave solution, we obtain $$\left[-\frac{d^2}{dx^2}+\frac{x^2}{4}-\frac{|\varphi (x)|^2}{x^2}+g_3\frac{|\varphi (x)|^4}{x^4}\right]\varphi (x)=\beta \varphi (x)$$ (1) for $`a<0`$, where $`x\equiv \sqrt{2m\omega /\hbar }\,r`$ and $`\varphi (x)\equiv \sqrt{8\pi |a|}\,r\,\psi (\vec{r})`$. The dimensionless parameters, related to the chemical potential and the three-body strength, are, respectively, given by $`\beta \equiv \mu /\hbar \omega `$ and $`g_3\equiv \lambda _3\hbar \omega m^2/(4\pi \hbar ^2a)^2`$. The normalization for $`\varphi (x)`$ reads $`\int _0^{\infty }dx|\varphi (x)|^2=n`$, where the reduced number $`n`$ is related to $`N`$ by $`n\equiv 2N|a|\sqrt{2m\omega /\hbar }`$. The boundary conditions in Eq. (1) are given by $`\varphi (0)=0`$ and $`\varphi (x)\sim C\mathrm{exp}(-x^2/4+[\beta -\frac{1}{2}]\mathrm{ln}(x))`$ when $`x\to \infty `$. In Fig. 1, considering several values of $`g_3`$ (0, 0.012, 0.015, 0.0183 and 0.02), using exact numerical calculations, we present the evolution of some relevant physical quantities, $`E`$, $`\mu `$, $`\rho _c`$ and $`\langle r^2\rangle `$, as functions of the reduced number of atoms $`n`$. For $`g_3=0`$, our calculation reproduces the result presented in Ref. , with the maximum number of atoms limited by $`n_{max}\simeq 1.62`$ ($`n`$ is equal to $`|C_{nl}^{3D}|`$ of Ref. ). As shown in the figure, for $`0<g_3<0.0183`$, the density $`\rho _c`$, the chemical potential $`\mu `$ and the root-mean-squared radius $`\langle r^2\rangle ^{1/2}`$ present back bendings typical of a first order phase transition.
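For readers who want to reproduce the stationary branches, Eq. (1) can be integrated outward and solved by shooting on the initial slope. The following is a minimal sketch (our own illustrative solver, not the authors' code); it assumes a trial $`\beta `$ below the linear ground-state value $`3/2`$, so that too-small initial slopes blow up to plus infinity while too-large ones overshoot and dive to minus infinity, and that assumed bracket is what the bisection exploits:

```python
from scipy.integrate import solve_ivp, trapezoid

g3, beta = 0.016, 0.3        # three-body strength and a trial chemical potential
x0, x_max = 1e-6, 8.0        # the x^2/4 trap makes bound solutions die out by x ~ 8

def rhs(x, y):
    # Eq. (1) rewritten as phi'' = [x^2/4 - phi^2/x^2 + g3 phi^4/x^4 - beta] phi
    phi, dphi = y
    v = x**2/4 - phi**2/x**2 + g3*phi**4/x**4 - beta
    return [dphi, v*phi]

def shoot(slope):
    # phi ~ slope*x near the origin, so phi^2/x^2 and phi^4/x^4 stay finite
    return solve_ivp(rhs, (x0, x_max), [slope*x0, slope],
                     max_step=1e-2, rtol=1e-9, atol=1e-12)

lo, hi = 0.05, 5.0           # assumed bracket for the bound solution
for _ in range(60):          # bisection on the initial slope
    mid = 0.5*(lo + hi)
    if shoot(mid).y[0, -1] > 0:
        lo = mid
    else:
        hi = mid
sol = shoot(0.5*(lo + hi))
n = trapezoid(sol.y[0]**2, sol.x)    # reduced number n = integral of |phi|^2 dx
print(f"beta = {beta}:  n = {n:.3f}")
```

Scanning $`\beta `$ then traces out $`n(\beta )`$, including the back-bending branches of Fig. 1; note that where several branches coexist, different slope brackets pick out different solutions.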
For each $`g_3`$, the transition point, given by the crossing point in the $`E`$ versus $`n`$ diagram, corresponds to a Maxwell construction in the diagram of $`\mu `$ versus $`n`$. At this point an equilibrated condensate should undergo a phase transition from the branch extending to small $`n`$ to the branch extending to large $`n`$. The system should never explore the back bending part of the diagram because, as we have seen in Fig. 1, it is an unstable extremum of the energy. From this figure it is clear that the first branch is associated with large radii, small densities and positive chemical potentials, while the second branch presents a more compact configuration with a smaller radius, a larger density and a negative chemical potential. This justifies the term gas for the first one and liquid for the second one. However we want to stress that both solutions are quantum fluids. With $`g_3=0.012`$ the gas phase occurs for $`n<1.64`$ and the liquid phase for $`n>1.64`$. For $`g_3>0.0183`$ all the presented curves are well behaved and a single fluid phase is observed. At $`g_3\simeq 0.0183`$ and $`n\simeq 1.8`$, the stable, metastable and unstable solutions coincide. This corresponds to a critical point associated with a second order phase transition. At this point the derivatives of $`\mu `$, $`\rho _c`$ and $`\langle r^2\rangle `$ as functions of $`n`$ all diverge. We also checked that calculations with the variational expressions for $`\langle r^2\rangle `$, $`\rho _c`$ and $`\mu `$ are in good agreement with the ones depicted in Fig. 1. In the lower frame of Fig. 2, we show the phase boundary separating the two phases in the plane defined by $`n`$ and $`g_3`$, and the critical point at $`n\simeq 1.8`$ and $`g_3\simeq 0.0183`$. In the upper frame, we show the boundary of the forbidden region in the central density versus $`g_3`$ diagram. The main physical characteristic of the repulsive three-body force is to prevent the collapse of the condensate for a particle number above the critical number found with only the two-body attractive interaction. The three-body repulsive potential tends to overcome the attraction of the two-body potential at short distances, as described by Eq. (1), as the repulsive term grows as $`x^{-4}`$ while the two-body term is proportional to $`x^{-2}`$. Thus, the implosive force that shrinks the condensate at the critical number is compensated by the repulsive three-body force. The time evolution of the growth and collapse of the condensate with attractive interactions should be qualitatively modified by the presence of the repulsive three-body force. The three-body recombination effect , which partially “burns” the condensed state, should be taken into account to describe quantitatively the dynamics of the condensate. In the case of only a two-body attractive interaction, as observed by Kagan et al. , by considering the feeding of the condensate from the nonequilibrium thermal cloud, the time evolution is dominated by a sequence of growth and collapse of the trapped condensate. The collapse occurs when the number of atoms in the condensate exceeds the critical number $`N_c`$; and it is followed by an expansion after the atoms in the high density region of the wave-function are lost due to three-body recombination processes and consequently the average attractive potential from the two-body force is weakened. It is also noticed in Ref.
that the condensate time evolution is dominated by an oscillatory mode of frequency $`\omega `$; and, as time grows and $`N`$ reaches a value $`>N_c`$, a huge compression takes place to implode the condensate. The repulsion given by the three-body force will dynamically affect the compression of the condensate, weakening the implosive force and allowing more atoms to survive at high densities. In order to quantitatively study the above features with a repulsive three-body interaction, we consider the time-dependent non-linear Schrödinger equation corresponding to Eq. (1), including three-body recombination effects (with an intensity parameter $`2\xi `$) and an imaginary linear term corresponding to the feeding of the condensate (with intensity parameter $`\gamma `$): $$i\frac{\partial \mathrm{\Phi }}{\partial \tau }=\left[-\frac{d^2}{dx^2}+\frac{x^2}{4}-\frac{|\mathrm{\Phi }|^2}{x^2}+(g_3-2i\xi )\frac{|\mathrm{\Phi }|^4}{x^4}+\frac{i\gamma }{2}\right]\mathrm{\Phi },$$ (2) where $`\mathrm{\Phi }\equiv \mathrm{\Phi }(x,\tau )`$ and $`\tau \equiv \omega t`$. For the parameters $`\xi `$ and $`\gamma `$ we are using the same notation as given in Ref. . In Fig. 3, we show the time evolution of the number of condensed atoms, starting with $`N/N_c=0.75`$, found by the numerical solution of Eq. (2) with $`\xi =0.001`$ and $`\gamma =0.1`$, with and without the repulsive three-body potential. We compare the results of a three-body potential with $`g_3=0.016`$ to the case considered in Ref. , with $`g_3=0`$. In both, $`N_c`$ is the critical number for $`g_3=0`$. The first striking feature with the repulsive three-body force is the smoothness of the compression mode in comparison with the results for $`g_3=0`$. This is a result of the explosive force from the repulsion, which opposes the sudden density increase and damps the loss of atoms due to three-body recombination effects. Even for $`g_3`$ lower than 0.016, and much closer to $`g_3=0`$, the collapses can no longer “burn” the same number of atoms as in the case of $`g_3=0`$. By extending our calculation presented in Fig. 3 for all cases with $`g_3>0.01`$ and for times beyond $`\omega t=50`$, we have checked that the number of atoms will increase without limit while the condensate oscillates with a frequency of about $`2\omega `$. In particular, the present approach indicates that the recent experimental observation of the maximum number of <sup>7</sup>Li atoms is compatible with $`g_3`$ much smaller than 0.01. The mean square radius for $`g_3=0`$, after each strong collapse (when $`N>N_c`$), begins to oscillate at an increased average radius. The collapse “burns” the atoms in the states with higher densities, which explains the sudden increase of the square radius after each compression, leaving the remaining atoms in dilute states. The inclusion of the repulsive three-body force still maintains the oscillatory mode, but the compression is not as dramatic as in the former case, and consequently atoms in higher density states are not so efficiently burned. The increase of the mean square radius (averaged over time) is smaller than the one found with only the attractive two-body force. This is a remarkable feature of the stabilizing effect of the repulsive three-body force, allowing the presence of states with higher densities, as we found in the stationary study.
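The qualitative growth-and-collapse pattern is straightforward to reproduce. The sketch below is an illustrative explicit method-of-lines integration of Eq. (2), not the authors' algorithm; the grid, time step and initial trial state are our own choices:

```python
import numpy as np

g3, xi, gamma = 0.016, 0.001, 0.1
dx = 0.05
x = np.arange(dx, 15.0, dx)        # radial grid; phi(0) = phi(x_max) = 0
dt = 2.0e-4                         # explicit step, small enough for stability

def rhs(phi):
    """dPhi/dtau = -i H[Phi] Phi + (gamma/2) Phi, with H read off Eq. (2)."""
    lap = np.empty_like(phi)
    lap[0] = (phi[1] - 2*phi[0]) / dx**2             # uses phi(0) = 0
    lap[1:-1] = (phi[2:] - 2*phi[1:-1] + phi[:-2]) / dx**2
    lap[-1] = (phi[-2] - 2*phi[-1]) / dx**2          # uses phi(x_max) = 0
    p2 = np.abs(phi)**2
    v = x**2/4 - p2/x**2 + (g3 - 2j*xi)*p2**2/x**4
    return -1j*(-lap + v*phi) + 0.5*gamma*phi

phi = x*np.exp(-x**2/4) + 0j        # Gaussian-like trial state
phi *= np.sqrt(0.75*1.62 / (np.sum(np.abs(phi)**2)*dx))   # n = 0.75 n_c(g3=0)

for step in range(int(5.0/dt)):     # evolve to tau = omega t = 5
    # classical RK4 on the spatially discretized ODE system
    k1 = rhs(phi); k2 = rhs(phi + 0.5*dt*k1)
    k3 = rhs(phi + 0.5*dt*k2); k4 = rhs(phi + dt*k3)
    phi += dt/6*(k1 + 2*k2 + 2*k3 + k4)
    if step % 5000 == 0:
        n = np.sum(np.abs(phi)**2)*dx
        msr = np.sum(x**2*np.abs(phi)**2)*dx / n     # mean square radius (trap units)
        print(f"tau = {step*dt:5.2f}:  n = {n:.3f},  <x^2> = {msr:.3f}")
```

The printed $`n(\tau )`$ and mean-square-radius histories are the two diagnostics discussed around Fig. 3.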
Finally, we have to consider that, in the situation when the 3-body repulsion dominates over the 2-body attraction, the condensate can be in a denser phase where it is expected to be strongly unstable due to recombination losses. The decay time of the condensate in a denser phase is expected to be much smaller than the decay time of the condensate in the less dense phase. However, we should observe that the dynamics of the condensate is modulated by an oscillatory mode with a frequency of the order of $`2\omega `$, which was already identified in Ref. to be $`\omega `$ even when $`g_3=0`$. In the case of $`g_3>0`$, such an oscillatory mode dominates the time evolution of the condensate. As the oscillations allow changes in the density, the condensate does not “burn” as fast as expected. In order to study the condensate decay, we consider the original NLSE with the dissipative term and allow different possibilities for the three-body interaction $`g_3`$. We use $`\xi =0.001`$ (the same value used in Ref. ). In Fig. 4 we show the result of this study for $`g_3=0`$ and $`g_3=0.016`$. The initial number of atoms $`N`$ can be obtained from $`n`$, given in the figure. For $`g_3=0`$, we took $`n=1.625`$, which is close to the critical limit. For $`g_3=0.016`$, we consider three cases: two of them starting with the same number of atoms, $`n=1.756`$, but in different phases (the corresponding chemical potentials are $`\beta =-1.2`$, in a denser phase, and $`\beta =0.3`$); and another in an even denser phase, with $`n=1.965`$ and $`\beta =-2.3`$ (see also Fig. 1). Based on the results obtained in these four different cases, we can estimate that the mean lifetime of the condensate, when it is initially in a denser phase, is not as small as expected when comparing with $`g_3=0`$. We observe in this case the relevant role of the oscillatory mode, related to the frequency of the trap potential, which dominates the dynamics of the condensate when $`g_3>0`$. To summarize, our present results can be relevant to determine a possible clear signature of the presence of a repulsive three-body interaction in Bose condensed states. They point to a new type of phase transition between two Bose fluids. Because of the condensation of the atoms in a single wave-function, this transition may present very peculiar fluctuations and correlation properties. As a consequence, it may fall into a different universality class than the standard liquid-gas phase transition, which is strongly affected by many-body correlations. The characterization of the two phases through their energies, chemical potentials, central densities and radii was also given for several values of the three-body parameter $`g_3`$. We developed a scenario of collapse which includes both three-body recombination and the three-body repulsive interaction. From the time-dependent analysis, we showed that the decay time of a condensate which begins in a denser phase is long enough to allow observation. However, the observed strongly oscillating states are quite different from the analysed stationary states. In accordance with the observed strong oscillations of the mean-square radius, the condensate density also strongly oscillates, and the observed states cannot be characterized as “dense” or “dilute”, justifying the long decay time. Nevertheless, through the amplitude of the oscillations one can distinguish whether the system starts in a denser phase. AG, TF and LT thank Profs. G.V. Shlyapnikov and A.E. Muryshev for providing details of their numerical calculations; Prof. R.G.
Hulet for details on experimental results and Profs. N. Akhmediev and M.P. Das for relevant correspondence related to Ref. . This work was partially supported by Fundação de Amparo à Pesquisa do Estado de São Paulo and Conselho Nacional de Desenvolvimento Científico e Tecnológico.
# Comments Regarding “On Neutrino-Mixing-Generated Lepton Asymmetry and the Primordial Helium-4 Abundance” ## Abstract This is a reply to the preprint “On Neutrino-Mixing-Generated Lepton Asymmetry and the Primordial Helium-4 Abundance” by M. V. Chizhov and D. P. Kirilova (hep-ph/9908525), which criticised our recent publication (X. Shi, G. M. Fuller and K. Abazajian, Phys. Rev. D 60 063002 (1999)). Here we point out factual errors in their description of what our paper says. We also show that their criticisms of our work have no merit. (1) The main point of the paper by M. V. Chizhov and D. P. Kirilova (hereafter CK) with regard to Shi, Fuller and Abazajian (hereafter SFA) is that the primordial <sup>4</sup>He abundance yield in Big Bang Nucleosynthesis (BBN) can be appreciably affected by neutrino mixing (sterile neutrino production) even when the lepton number asymmetry, $`L`$, is small ($`L\lesssim 0.01`$, for example). Of course, this has been known for some time. The change in the <sup>4</sup>He abundance yield in an extreme case, $`L\simeq 0`$, was discussed and calculated as long ago as the early 1980’s . One of the calculations was in fact done by an author of SFA . On page 2 of the CK paper, it says, “Certainly such consideration (meaning that we only consider the <sup>4</sup>He abundance change for $`L>0.01`$) is valid for the simple case of nucleosynthesis without oscillations!” This is in fact a very good (although not 100% accurate) statement regarding the SFA paper. It is obvious that SFA was indeed only concerned with active-sterile neutrino mixings when the relevant mixing angles were sufficiently small that the sterile neutrino production from active-to-sterile neutrino oscillation (other than the MSW resonant active-to-sterile neutrino conversion, whose amplitude is much less sensitive to mixing angles) is negligible. This was done for a reason: cases where oscillation effects are large have been considered before, e.g., in the papers cited above. The particular parameter space we chose to examine in SFA was based on the calculation that shows that lepton asymmetry can be generated by mixings as small as $`\mathrm{sin}^22\theta \sim 10^{-10}`$ . However, oscillation effects (other than MSW resonant conversion) won’t be important until $`\mathrm{sin}^22\theta \sim 10^{-4}`$ for $`\delta m^2\lesssim 1\mathrm{eV}^2`$ (see figures of Shi ’96). In this parameter space chosen by SFA, neutrinos or anti-neutrinos can be converted (via matter-enhanced MSW) to sterile neutrinos, thus creating a neutrino asymmetry, but the overall neutrino energy density may not be changed significantly. In such a situation, production of asymmetries $`L\lesssim 0.01`$ indeed does not have an appreciable impact on the primordial <sup>4</sup>He abundance yield. This mixing parameter space chosen by us has no overlap with the parameter spaces considered by CK ($`\mathrm{sin}^22\theta >0.01`$ and $`\delta m^2<10^{-7}\mathrm{eV}^2`$, from their figure 2, where oscillation effects are important).
It is therefore rather ironic that based on an irrelevant comparison of two non-overlapping parameter spaces the authors of CK can claim “The obtained constraints on $`\delta m^2`$ are by several orders of magnitude more severe than the constraints obtained in SFA.” (2) In the footnote of page 2, CK stated that “we are really sorry that…” Here we are happy to report that the authors of CK don’t have to be sorry because nowhere in the SFA paper did we claim to be the first to discover this account (of the effects of neutrino spectral distortion and evolution). The effects of neutrino spectral distortion and evolution on <sup>4</sup>He synthesis have been known since the early studies of BBN, even in the original complete paper on the subject, Wagoner, Fowler and Hoyle 1967. In SFA we merely apply this account to particular cases of neutrino mixing. We do not know from which page and which paragraph in SFA we can be implicated in a claim of discovery. (3) In regards to the first paragraph of page 8, the authors of CK are welcome to read more carefully Shi (1996) , where there is a lengthy discussion on whether the $`L`$-generation process meets the classic criterion of chaos (see also a recent work of Enqvist et al. ). They are also welcome to produce any evidence showing what will be the sign of a net lepton number asymmetry $`L`$ resulting from resonant neutrino transformation. And yes, even though the chaotic feature of the $`L`$-generation process is not well understood, we will “continue exploiting it fabricating models and constraints.” Doesn’t any scientific model involve some assumptions that are not well understood? We do not believe that our scientific integrity is in any way compromised when we discuss these models and constraints, because we have always discussed, and will always continue to do so, the underlying assumptions of these models and constraints. Finally, we should point out that the entire problem of neutrino flavor-transformation in the early universe is a difficult one. Not the least of the difficulties is solving the Boltzmann equation plus the MSW equations for multiple particle species with a spread of energies and occupation numbers. Furthermore, the equations have non-linear feedback terms that may generate chaos in solutions. Many groups have attacked these issues. They have obtained many interesting and important results. But in our opinion, a satisfactory, general solution has yet to be found. In this sense our understanding of the problem so far is indeed “shallow” and “simplistic.” We have no doubt that any future breakthroughs in this problem will offer deeper and more sophisticated understandings of neutrino physics and cosmology.
# Parafermionic and Generalized Parafermionic Algebras Abstract: The general properties of the ordinary and generalized parafermionic algebras are discussed. The generalized parafermionic algebras are proved to be polynomial algebras. The ordinary parafermionic algebras are shown to be connected to the Arik–Coon oscillator algebras. The study of systems of many spins is of interest in many branches of physics. This study is in many cases facilitated through boson mapping procedures (see for a comprehensive review). Some well-known examples are the Holstein–Primakoff mapping of the spinor algebra onto the harmonic oscillator algebra and the Schwinger mapping of Lie algebras (or of $`q`$-deformed algebras) onto the usual (or onto the $`q`$-deformed) oscillator algebras . In parallel, in addition to bosons and fermions, parafermions of order $`p`$ have been introduced (with $`p`$ being a positive integer), having the characteristic property that at most $`p`$ identical particles of this kind can be found in the same state. Ordinary fermions clearly correspond to parafermions with $`p=1`$, since only one fermion can occupy each state according to the Pauli principle. While fermions obey Fermi–Dirac statistics and bosons obey Bose–Einstein statistics , parafermions are assumed to obey an intermediate kind of statistics, called parastatistics . The notion of parafermionic algebras has recently been enlarged by Quesne , while the relation between parafermionic algebras and other algebras has been given in . The properties of parafermions and parabosons, as well as the parastatistics and field theories associated with them, have been the subject of many recent investigations . Parafermions and parabosons have also been involved in mapping studies. A mapping of the spinor algebra onto a parafermionic algebra has been discussed in . Mappings of so(2n), sp(2n,R), and other Lie algebras onto parafermionic and parabosonic algebras have been studied in , while parabosonic mappings of osp(m,n) superalgebras have been given in . Recently the algebras of the operators of a single spinor with fixed spin value $`j`$ have been mapped onto polynomial algebras, which constitute a quite recent subject of investigation in physics . In polynomial algebras the commutator of two generators does not result in a linear combination of the generators, as in the case of the usual Lie algebras, but rather in a combination of polynomials of the generators. The mappings of ref. connect the class of spinor algebras to the class of polynomial algebras. In the present work we show that the polynomial algebras of ref. , which are connected to the single spinor algebras, are indeed examples of either parafermionic algebras or generalized parafermionic algebras . Let us start by defining the algebra $`\mathcal{A}_n^{[p]}`$, corresponding to $`n`$ parafermions of order $`p`$. This algebra is generated by $`n`$ parafermionic generators $`b_i,b_i^\dagger `$, where $`i=1,2,\dots ,n`$, satisfying the trilinear commutation relations:
$$[M_{k\ell },b_m^\dagger ]=\delta _{\ell m}\,b_k^\dagger ,\qquad [M_{\ell k},b_m]=-\delta _{\ell m}\,b_k,$$ (1)
where $`M_{k\ell }`$ is an operator defined by:
$$M_{k\ell }=\frac{1}{2}\left([b_k^\dagger ,b_{\ell }]+p\,\delta _{k\ell }\right).$$ (2)
From this definition it is clear that eq. (1) is a trilinear relation, i.e. a relation relating three of the operators $`b_i^\dagger ,b_i`$.
Finally the definition of the parafermionic algebra is completed by the relation:
$$[b_i,[b_j,b_k]]=[b_i^\dagger ,[b_j^\dagger ,b_k^\dagger ]]=0.$$ (3)
Each parafermion separately is characterized by the ladder operators $`b_i^\dagger `$ and $`b_i`$ and the number operator $`M_{ii}`$. The basic assumption is that the parafermionic creation and annihilation operators are nilpotent:
$$b^{p+1}=\left(b^\dagger \right)^{p+1}=0.$$ (4)
In ref. it is proved that the single parafermionic algebra is a generalized oscillator algebra , satisfying the following relations (for simplicity we omit the parafermion indices):
$$[M,b^\dagger ]=b^\dagger ,\qquad [M,b]=-b,$$ (5)
$$b^\dagger b=[M]=M\left(p+1-M\right),\qquad bb^\dagger =[M+1]=(M+1)\left(p-M\right),$$ (6)
$$M\left(M-1\right)\left(M-2\right)\cdots \left(M-p\right)=0.$$ (7)
The definition (2) (or equivalently eq. (6)) implies the commutation relation:
$$[b^\dagger ,b]=2\left(M-p/2\right).$$ (8)
The above relation combined with (5) suggests the use of the parafermions as spinors of spin $`p/2`$:
$$S_+\leftrightarrow b^\dagger ,\qquad S_{-}\leftrightarrow b,\qquad S_o\leftrightarrow \left(M-p/2\right).$$ (9)
The Cayley identity is also valid:
$$\prod _{k=-p/2}^{p/2}\left(S_o-k\right)=0.$$
It is worth noticing that in the case of parafermions the commutation relation (8) is somehow trivial, because it is inherent in the definition of the number operator (2). This relation switches the trilinear commutation relations to ordinary commutation relations, where two operators are involved. In contrast, in the case of parabosons this construction is not trivial, because anticommutation relations are involved in the definition of the number operator. We now start examining in detail the connection between spinors with $`j=p/2`$ and parafermions of order $`p`$. The $`p=1`$ parafermions coincide with the ordinary fermions, i.e. the usual spin-1/2 spinors . For spinors with $`j=1`$ Chaichian and Demichev use the following mapping:
$$S_+\leftrightarrow \sqrt{2}\,a^\dagger ,\qquad S_{-}\leftrightarrow \sqrt{2}\,a,$$ (10)
where
$$a^3=\left(a^\dagger \right)^3=0,$$ (11)
$$aa^\dagger +\left(a^\dagger \right)^2a^2=1.$$ (12)
Using the above two relations we can define the number operator $`N`$:
$$N=1-[a,a^\dagger ]=a^\dagger a+\left(a^\dagger \right)^2a^2.$$ (13)
This number operator satisfies the linear commutation relations:
$$[N,a^\dagger ]=a^\dagger ,\qquad [N,a]=-a.$$
The self-contained commutation relations for the $`p=2`$ parafermions are given in ref. (eqs (5.13) to (5.20)):
$$b^3=\left(b^\dagger \right)^3=0,$$ (14)
$$bb^\dagger b=2b,$$ (15)
$$b^\dagger b^2+b^2b^\dagger =2b.$$ (16)
The set of relations (14)–(16) implies the following definition of the number operator $`M`$:
$$M=\frac{1}{2}\left([b^\dagger ,b]+2\right).$$ (17)
The set of relations (11)–(12) implies relations (14)–(16) after taking into consideration the correspondence:
$$a=\frac{1}{\sqrt{2}}\,b,\qquad a^\dagger =\frac{1}{\sqrt{2}}\,b^\dagger .$$ (18)
For example one can easily see the following: * Eq. (14) occurs trivially from eq. (11). * Eq. (15) is obtained by multiplying eq. (12) by $`a`$ on the right and using eq. (11). * Eq. (16) is obtained by multiplying eq. (12) by $`a`$ on the left and using eqs. (11) and (12). In ref. the parafermionic algebra (14)–(17) was shown to be equivalent to the deformed oscillator algebra defined by relations (4)–(7) for $`p=2`$. This deformed oscillator algebra satisfies in addition relations (11) to (13). Therefore the Chaichian–Demichev polynomial algebra (11)–(13), the $`p=2`$ parafermionic algebra (14)–(17) and the deformed oscillator algebra (4)–(7) are equivalent. Relations (12) and (13) indicate that $`aa^\dagger `$ and $`N`$ can be expressed as linear combinations of monomials $`\left(a^\dagger \right)^ka^k`$.
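All of the relations above are finite dimensional and can be checked directly in the number basis $`|n\rangle `$, $`n=0,\dots ,p`$, where $`b|n\rangle =\sqrt{[n]}\,|n-1\rangle `$ with $`[n]=n(p+1-n)`$. A minimal numerical sketch for $`p=2`$ (our own check, using this standard matrix representation):

```python
import numpy as np

p = 2                                  # parafermionic order; basis |0>, ..., |p>
n = np.arange(p + 1)
I = np.eye(p + 1)
b = np.diag(np.sqrt(n[1:] * (p + 1 - n[1:])), k=1)   # b|n> = sqrt([n]) |n-1>
bd = b.T                                             # creation operator b^dagger
M = np.diag(n.astype(float))                         # number operator

assert np.allclose(np.linalg.matrix_power(b, p + 1), 0)       # eq. (14)
assert np.allclose(b @ bd @ b, 2 * b)                         # eq. (15)
assert np.allclose(bd @ b @ b + b @ b @ bd, 2 * b)            # eq. (16)
assert np.allclose(bd @ b, M @ ((p + 1) * I - M))             # eq. (6)
assert np.allclose(bd @ b - b @ bd, 2 * (M - (p / 2) * I))    # eq. (8)

a, ad = b / np.sqrt(2), bd / np.sqrt(2)                       # correspondence (18)
assert np.allclose(a @ ad + ad @ ad @ a @ a, I)               # eq. (12)
assert np.allclose(ad @ a + ad @ ad @ a @ a, M)               # eq. (13)
print("p = 2 parafermionic / polynomial algebra relations verified")
```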
This expansion is the reason the algebra described by eqs. (12)–(13) is called a “polynomial” algebra in . What we have just seen is that the polynomial algebra (11)–(13) is in fact the $`p=2`$ parafermionic algebra (14)–(17). The new result which arises from this discussion is that the parafermionic algebra can be written as a polynomial algebra through the r.h.s. of eq. (13). It seems that this fact has been ignored, while the “dual” relation, giving $`b^\dagger b`$ or $`bb^\dagger `$ as polynomial functions of the number operator,
$$b^\dagger b=M(3-M),\qquad bb^\dagger =(M+1)(2-M),$$
is known . For spinors with $`j=3/2`$ Chaichian and Demichev use the following mapping:
$$S_+\leftrightarrow \sqrt{3}\,a^\dagger ,\qquad S_{-}\leftrightarrow \sqrt{3}\,a,$$ (19)
where
$$a^4=\left(a^\dagger \right)^4=0,$$ (20)
$$aa^\dagger =1+\frac{1}{3}a^\dagger a-\frac{1}{3}\left(a^\dagger \right)^2a^2-\frac{2}{3}\left(a^\dagger \right)^3a^3,$$ (21)
$$[a,a^\dagger ]=1-\frac{2}{3}N.$$ (22)
The last two equations imply the following expansion of the number operator:
$$N=a^\dagger a+\frac{1}{2}\left(a^\dagger \right)^2a^2+\left(a^\dagger \right)^3a^3.$$ (24)
These relations are the analogues of eqs. (11)–(13) for the $`j=3/2`$ case. The complicated self-consistent commutation relations for the $`p=3`$ parafermionic algebra are given in Appendix B of ref. . After long but straightforward calculations the $`p=3`$ parafermionic relations are deduced from the above eqs (21)–(24) by taking into account the correspondence:
$$a=\frac{1}{\sqrt{3}}\,b,\qquad a^\dagger =\frac{1}{\sqrt{3}}\,b^\dagger .$$ (25)
Therefore the polynomial algebra (21)–(24) is in fact the $`p=3`$ parafermionic algebra. The new result which again arises from this discussion is that the parafermionic algebra can be written as a polynomial algebra through eq. (24), while the “dual” relation
$$b^\dagger b=M(4-M),\qquad bb^\dagger =(M+1)(3-M),$$
is again already known . Stimulated by the above results we can show the following proposition: ###### Proposition 1 The $`j=p/2`$ spinor algebra $`\{S_\pm ,S_o\}`$ is mapped onto the $`p`$-parafermionic algebra $`\{b^\dagger ,b,M\}`$, which is a polynomial algebra given by the relations:
$$\begin{array}{c}[M,b^\dagger ]=b^\dagger ,\\ [M,b]=-b,\\ b^{p+1}=\left(b^\dagger \right)^{p+1}=0,\\ b^\dagger b=M\left(p+1-M\right)=\left[M\right],\\ bb^\dagger =\left(M+1\right)\left(p-M\right)=\left[M+1\right],\\ M=\frac{1}{2}\left([b^\dagger ,b]+p\right),\end{array}$$ (26)
where the number operator $`M`$ is given by the following polynomial relation
$$M=\sum _{k=1}^{p}\frac{c_k}{p^k}\left(b^\dagger \right)^kb^k.$$ (27)
With the “factorial” $`\left[k\right]!`$ defined as
$$\left[0\right]!=1,\qquad \left[n\right]!=\left[n\right]\left[n-1\right]!=\prod _{\ell =1}^{n}\left[\ell \right]=\frac{n!\,p!}{(p-n)!},$$
the coefficients $`c_1,c_2,\dots ,c_p`$ can be determined from the solution of the system of equations:
$$\begin{array}{c}\rho (1)=1\hfill \\ \rho (2)=2\hfill \\ \vdots \hfill \\ \rho (p)=p\hfill \end{array}\}$$ (28)
where
$$\rho (n)=\sum _{k=1}^{n}\frac{c_k}{p^k}\,\frac{\mathrm{\Gamma }(n+1)}{\mathrm{\Gamma }(n-k+1)}\,\frac{\mathrm{\Gamma }(p+k-n+1)}{\mathrm{\Gamma }(p-n+1)}.$$
This is true because we can see that
$$\left(b^\dagger \right)^kb^k=\prod _{\ell =0}^{k-1}\left[M-\ell \right]=\frac{\mathrm{\Gamma }(M+1)}{\mathrm{\Gamma }(M-k+1)}\,\frac{\mathrm{\Gamma }(p+k-M+1)}{\mathrm{\Gamma }(p-M+1)}.$$
The fact that the number operator of a parafermionic algebra can be written as a combination of monomials, i.e. eq. (27), was not previously known in the context of parafermionic algebras.
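Since $`\rho (n)`$ involves only $`c_1,\dots ,c_n`$, the system (28) is triangular and can be solved by forward substitution in exact arithmetic. A small sketch (our own illustration; the printed values for $`p\le 3`$ reproduce the expansions (13) and (24) above):

```python
from fractions import Fraction
from math import factorial

def rho_coeff(p, n, k):
    """Coefficient of c_k in rho(n): p^(-k) * n!/(n-k)! * (p+k-n)!/(p-n)!."""
    return (Fraction(1, p ** k)
            * Fraction(factorial(n), factorial(n - k))
            * Fraction(factorial(p + k - n), factorial(p - n)))

def parafermion_c(p):
    """Solve the triangular system rho(n) = n, n = 1..p, for c_1..c_p."""
    c = {}
    for n in range(1, p + 1):
        partial = sum(rho_coeff(p, n, k) * c[k] for k in range(1, n))
        c[n] = (Fraction(n) - partial) / rho_coeff(p, n, n)
    return c

print(parafermion_c(2))  # -> {1: Fraction(1, 1), 2: Fraction(1, 1)}
print(parafermion_c(3))  # -> {1: Fraction(1, 1), 2: Fraction(1, 2), 3: Fraction(1, 1)}
```

With $`a=b/\sqrt{p}`$ these coefficients indeed give $`N=a^\dagger a+(a^\dagger )^2a^2`$ for $`p=2`$ and $`N=a^\dagger a+\frac{1}{2}(a^\dagger )^2a^2+(a^\dagger )^3a^3`$ for $`p=3`$.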
The polynomial expressions in eq. (27) are similar to the ones used for the construction of the projection operators in the case of the su(2) algebra . The projection operator method has also been used in the case of the su<sub>q</sub>(2) and su<sub>q</sub>(1,1) algebras as a dynamical tool for the calculation of the Clebsch-Gordan coefficients. On the other hand the parafermionic algebra is a finite dimensional realization of the su(2) algebra, coinciding with the spinor algebra. The analytic calculation of the coefficients $`c_k`$ can be achieved by expanding the number operator $`M`$ in a sum over the projection operators
$$P_m|n\rangle =\delta _{nm}|n\rangle $$
in the following way:
$$M=\sum _{m=1}^{p}m\,P_m.$$
The projection operator $`P_0`$ onto the lowest weight eigenvalue is given by the expression:
$$P_0=\sum _{k=0}^{p}d_k\left(b^\dagger \right)^kb^k,$$
while
$$P_m=\frac{1}{\left[m\right]!}\left(b^\dagger \right)^mP_0\,b^m=\frac{1}{\left[m\right]!}\sum _{k=0}^{p-m}d_k\left(b^\dagger \right)^{m+k}b^{m+k},$$
where the coefficients $`d_n`$ are given by the recurrence formula
$$d_0=1,\qquad d_n=-\sum _{k=0}^{n-1}\frac{d_k}{\left[n-k\right]!}.$$
Then the general solution is given by:
$$d_n=\sum _{i=1}^{n}\left(-1\right)^i\left(\sum _{\begin{array}{c}0<k_1,k_2,\dots ,k_i\le n\\ k_1+k_2+\dots +k_i=n\end{array}}\frac{1}{\left[k_1\right]!\left[k_2\right]!\cdots \left[k_i\right]!}\right)$$ (29)
We must point out that these formulae are not specific to the chosen parafermionic structure function $`\left[x\right]=x(p+1-x)`$ and can be applied to any parafermionic oscillator structure function. The number operator $`M`$ can be expressed using the projection operators:
$$M=\sum _{m=1}^{p}m\,P_m=\sum _{k=1}^{p}\frac{c_k}{p^k}\left(b^\dagger \right)^kb^k,$$
while the coefficients $`c_n`$ can be found to be
$$c_n=p^n\sum _{k=1}^{n}\frac{k}{\left[k\right]!}\,d_{n-k}.$$
In Table 1 the coefficients up to $`p=5`$ are explicitly given. One must notice that the parafermionic algebra (5)–(7) has affinities with the Arik–Coon $`Q`$-deformed algebra , which is defined by the relations:
$$[N,a]=-a,\qquad [N,a^\dagger ]=a^\dagger ,\qquad a^\dagger a=[N]_Q,\qquad aa^\dagger =[N+1]_Q,$$ (30)
where $`[x]_Q=(1-Q^x)/(1-Q)`$. The generators of this oscillator satisfy the commutation relation
$$[a,a^\dagger ]=Q^N.$$ (31)
By defining $`Q=\mathrm{exp}[-\tau ]`$ the commutation relation (31) can be written, for $`\tau \to 0`$, as
$$[a,a^\dagger ]=\mathrm{exp}[-\tau N]=1-\tau N+\mathcal{O}(\tau ^2).$$ (32)
Comparing the above equation with equation (8), we see that there is an approximate mapping of the parafermionic oscillator onto the Arik–Coon oscillator, obtained by putting
$$b\leftrightarrow \sqrt{p}\,a,\qquad b^\dagger \leftrightarrow \sqrt{p}\,a^\dagger ,\qquad M\leftrightarrow N,\qquad \tau =2/p.$$
The meaning of the order $`p`$ of the parafermionic oscillator is quite clear: $`p`$ is the “capacity” of the oscillator, i.e. the maximum number of identical particles that can be found simultaneously in the same state. Therefore, the parameter $`Q`$ of the Arik–Coon oscillator is a “measure” of the number of states that can simultaneously exist at the same position.
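The mapping above matches the two structure functions to $`\mathcal{O}(1/p)`$ for occupation numbers $`n\ll p`$, which is the regime relevant for the pairing example below. A quick numerical comparison (our own illustration):

```python
import math

def parafermion(n, p):     # structure function [n] = n(p + 1 - n)
    return n * (p + 1 - n)

def arik_coon(n, p):       # p * [n]_Q with Q = exp(-2/p), i.e. tau = 2/p
    Q = math.exp(-2.0 / p)
    return p * (1.0 - Q ** n) / (1.0 - Q)

p = 20
for n in (1, 2, 3, 4, 5):
    pf, ac = parafermion(n, p), arik_coon(n, p)
    print(f"n={n}: [n]={pf}, p*[n]_Q={ac:.2f}, rel. diff={100*(ac - pf)/pf:+.2f}%")
# Agreement is at the percent level for n << p; it degrades as n approaches p,
# where the parafermionic structure function turns over while [n]_Q keeps growing.
```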
A nice example is the case of the $`J=0`$ pairing of nucleons in a closed nuclear shell. The algebra of the fermion pairs coupled to angular momentum zero is described by :
$$[A_0,A_0^\dagger ]=1-\frac{N_F}{\mathrm{\Omega }},\qquad \left[\frac{N_F}{2},A_0^\dagger \right]=A_0^\dagger ,\qquad \left[\frac{N_F}{2},A_0\right]=-A_0,$$
where $`N_F`$ is the number of fermions and $`2\mathrm{\Omega }=2j+1`$ is the size of the shell, i.e. the “capacity” of our space. The simplest pairing Hamiltonian is given by:
$$H=-G\mathrm{\Omega }\,A_0^\dagger A_0.$$
For the above algebra there is a natural mapping to parafermions of order $`p`$: each parafermion corresponds to a fermion pair, and $`p=\mathrm{\Omega }`$. The ordinary $`q`$-deformed oscillator fails to give an approximation of the pairing model, while the Arik–Coon oscillator is quite satisfactory . Conclusions: The parafermionic algebras can be considered as polynomial algebras, their diagonal number operator $`M_{ii}`$ being expressible as a combination of monomials of the ladder operators. The general problem of finding an expression for the operators $`M_{ij}`$ as a combination of monomials of the ladder operators is still open. A similar problem exists in quonic algebras . Work in this direction is in progress. The Arik–Coon deformed oscillator is a fair approximation of the parafermionic algebra. Support from the Greek Secretariat of Research and Technology under contract PENED 95/1981 is gratefully acknowledged.
# The close limit of colliding black holes: an update ## 1 Introduction ### 1.1 The three regimes of a black hole collision The collision of binary black holes is one of the primary expected sources of gravitational waves to be detected by the broadband interferometric gravitational wave telescopes currently under construction, like the LIGO project in the US, the British/German GEO project, the TAMA project in Japan and the French/Italian VIRGO project. A collision of two black holes can be divided into three distinct regimes. Initially, the black holes spiral around each other in quasi-Newtonian orbits. The radius of the orbits decreases due to the emission of gravitational radiation. Let us call this period the “inspiral” phase. The gravitational waves produced during this phase are well described by the post-Newtonian approximation. Notice that such an approximation does not provide a good description of the whole spacetime, since it breaks down close to each hole (to first approximation, the holes are singular point particles), but as long as the holes are far apart, this is not expected to be relevant from the point of view of the waveforms observed at infinity (technically, one can ignore the vicinity of the holes up to third post-Newtonian order ). The gravitational waves from this phase of the collision correspond to a quasi-regular sinusoid whose frequency and amplitude increase with time as the holes get closer to each other, known as the “chirp”. Good descriptions of this approximation applied to the binary black hole case, and appropriate references, can be found in Blanchet et al. . When the separation of the holes is around 10 to 12 times the mass of each individual hole, it is expected that the post-Newtonian approximation breaks down. It is not completely clear what the extent of the breakdown is, since the post-Newtonian approximation leads to an asymptotic perturbation series. In fact, attempts are currently being made to extend the domain of validity of the approximation using Padé approximants . In any event, there will be a limiting separation of the holes such that if they are any closer, one cannot use the post-Newtonian approximation. The domain that starts at that point and continues up to the point at which the black holes form a single black hole is expected to be treatable only by implementing the evolution of the Einstein equations numerically. This has proven to be a notoriously difficult problem. The state of the art of three dimensional simulations of black hole collisions is such that at present the codes can rarely evolve for more than 30 or 40 units of the final black hole mass. One would need at least two orders of magnitude more to be able to follow the black holes in the supposedly rapidly decaying orbit below $`10m`$ in separation. Given that additional resolution in 3D is very expensive, it is unlikely that the required increase will be obtained simply by using more powerful computers; new ideas appear to be needed. Finally, when the black holes are close to each other, one can treat the problem as a single distorted black hole that “rings down” into equilibrium, evolving the distortions using perturbation theory. This is called the “close limit approximation” and will be the main subject of this talk. ### 1.2 Why study the close limit? The study of the final ringdown can be approached from three different perspectives. All of them have their own appeal, so I will describe them in some detail: a) As a code check.
Whenever we finally have available a three dimensional numerical code to integrate the Einstein equations for colliding holes, one could start the evolutions with the black holes close to each other (as the experience with head-on collisions shows, numerical codes can develop additional problems when the black holes are close; let us ignore this detail here, since these problems can usually be dealt with). The results should therefore coincide with those of the close limit approximation. This point of view has actually been pursued successfully for head-on collisions. It turns out that even for this case the full numerical simulations have certain difficulties, and the close limit approximation can be used as a guiding principle to build numerical codes . b) To reach astrophysical conclusions. It is usually assumed that the ringdown waveforms play no role in gravitational wave detection. This assumption is based on the expectation that most black holes will occur in a mass range of a few solar masses. For such a mass range, the ringdown occurs at too high a frequency to be detectable by interferometric detectors, whereas the inspiral phase sweeps the frequency range at which the detectors have their peak sensitivity. However, if the mass of the colliding holes is higher, the inspiral’s frequency becomes too low to be detected, whereas the ringdown is more easily detectable. In fact, given that larger masses also imply more radiated energy, these collisions become easier to detect (for a detailed discussion see the papers by Flanagan and Hughes ). Indeed, for an optimal mass range of about 300 solar masses these collisions could even be visible by the initial LIGO interferometers up to a distance of 200 Mpc, and they are likely to be the only sources visible by the initial interferometer. That such collisions might occur is not completely out of the question, given our current ignorance about the population of black holes. Recent suggestions that black holes in this mass range might exist only reinforce this possibility, although the existence of such holes is currently being debated. Even if one assumes that collisions like these take place, the detectability of ringdowns is technically more involved than that of the inspiral (essentially because many noise transients in the detector look like ringdowns, and because template matching is hard given that the ringdowns are short lived in terms of the number of cycles of oscillation). Jolien Creighton discusses these issues in detail . The main drawback of using the “close limit approach” in this context is that one does not have the appropriate initial data to start the problem. The initial data one would need in order to have “astrophysically meaningful” estimates of waveforms and radiated energies would correspond to the endpoint of a black hole merger. But this is precisely what we are unable to compute! The families of initial data for colliding black holes usually considered are not supposed to be physically realistic when one makes the separation parameters too small. This is essentially due to the fact that they are constructed via ad-hoc superpositions based on mathematical convenience. If one still insists on using them in the close regime (as we will do) one has to admit that the results will not have a definite physical justification. Our experience after trying several families of initial data is that the results (in terms of radiated energy) in the end rarely differ by more than a factor of order unity. Therefore (at the level of an art form more than that of a scientific prediction) one may be able to trust the results we present here physically as order of magnitude estimates. This is the point of view we will adopt from here on.
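As an aside on the mass scaling quoted above, the frequency and quality factor of the least-damped ℓ=2 quasinormal mode are often summarized by Echeverria’s fits; a rough sketch (the fit itself is an outside ingredient assumed here, not a result of this talk):

```python
import math

M_SUN_SECONDS = 4.925e-6   # GM_sun/c^3: one solar mass in seconds

def ringdown(M_solar, a):
    """Frequency (Hz) and quality factor of the fundamental l=2 quasinormal
    mode of a Kerr hole of mass M and dimensionless spin a, using
    Echeverria's fits (accurate to a few percent)."""
    f = (1.0 - 0.63 * (1.0 - a) ** 0.3) / (2.0 * math.pi * M_solar * M_SUN_SECONDS)
    Q = 2.0 * (1.0 - a) ** (-0.45)
    return f, Q

for M in (10.0, 300.0):
    f, Q = ringdown(M, a=0.7)
    print(f"M = {M:5.0f} M_sun: f ~ {f:6.0f} Hz, Q ~ {Q:.1f}")
# A ~10 M_sun remnant rings at a few kHz, above the interferometers' best
# band, while a ~300 M_sun remnant rings at roughly 60-100 Hz for moderate
# spins, near their peak sensitivity: the scaling behind the argument above.
```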
c) To supplement numerical evolutions. If the state of the art of numerical relativity remains limited to a few dozen $`m`$ in terms of the time length of the evolution, it would be useful to spend the precious three dimensional evolution time of the codes “coalescing” the holes rather than following the ringdown of a single formed hole. This approach has already been implemented in the case of the collapse of disks by Abrahams, Shapiro and Teukolsky . That study used perturbations of Schwarzschild. Currently under study is the more general approach based on perturbations of Kerr for the case of colliding holes, by Baker, Campanelli and Lousto (the “Lazarus/Zorro” project at the Albert Einstein Institute in Potsdam). ### 1.3 Initial data To evolve a collision of black holes in the close limit one has to start with a given family of initial data. As we mentioned in the previous section, the physically correct initial data for close black holes arising from an inspiral and merger are not available. The usual families of initial data for binary black holes are obtained by ad-hoc mathematical prescriptions. We will discuss in this section some of the issues involved in such constructions. To have initial data for general relativity means to have a three dimensional spatial metric and an extrinsic curvature that solve the constraint equations of the initial value problem of general relativity, i.e., the “$`G_{00}`$” and “$`G_{0i}`$” components of the Einstein equations. A popular method of constructing solutions of these equations is the Lichnerowicz-York conformal approach. In this approach one assumes that the three dimensional metric is conformally related to a given fixed metric. To simplify things, let us assume (as was done for instance by Bowen and York ) that it is conformally flat. If in addition one assumes that the trace of the extrinsic curvature vanishes, the constraint equations simplify significantly. The momentum constraint simply becomes the flat space divergence of a tensor that is, up to a factor, the extrinsic curvature, and the Hamiltonian constraint becomes an equation stating that the Laplacian of the conformal factor is related to the square of the extrinsic curvature divided by the conformal factor to a given power. The momentum constraint equations are easy to solve, and solutions were introduced by Bowen and York . In this approach the tensor related to the extrinsic curvature completely determines the ADM momentum and angular momentum of the slice. The solutions constructed by Bowen and York depend on two vectors that coincide with the angular momentum and linear momentum of the slice. One is then supposed to solve the remaining nonlinear elliptic equation for the conformal factor, prescribing certain boundary conditions. This is usually achieved numerically, as discussed by Cook . Since the Bowen-York extrinsic curvatures are linear in the momentum (linear or angular), for slow moving (or rotating) holes one can also seek approximate solutions for the conformal factor by expanding in powers of the momentum. To zeroth order the solution simply corresponds to the vanishing of the Laplacian of the conformal factor. This is the same equation one would have for a time-symmetric situation ($`K_{ab}=0`$). Since solutions to this case with the topology of two holes are known (the Misner and Brill-Lindquist solutions), one immediately has approximate solutions for moving holes, to zeroth order of approximation.
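For reference, in the conformally flat, maximal-slicing setting just described the constraints take the standard Lichnerowicz–York form (a sketch in common conventions, not a quotation from this talk; the momentum solution displayed is the Bowen–York one for a single hole of linear momentum $`P^i`$):
$$g_{ij}=\varphi ^4\,\delta _{ij},\qquad K=0\;\Rightarrow \;\partial _j\hat{A}^{ij}=0,\qquad \nabla ^2\varphi =-\frac{1}{8}\,\varphi ^{-7}\,\hat{A}_{ij}\hat{A}^{ij},$$
$$\hat{A}^{ij}_{P}=\frac{3}{2r^2}\left[P^in^j+P^jn^i-\left(\delta ^{ij}-n^in^j\right)P_kn^k\right],$$
where $`\hat{A}^{ij}=\varphi ^{10}A^{ij}`$ is the conformally rescaled trace-free extrinsic curvature and $`n^i`$ the unit radial vector. The linearity of $`\hat{A}^{ij}`$ in $`P^i`$ is what makes the slow-momentum expansion of the conformal factor mentioned above straightforward.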
It turns out this is all we will really need for the close limit (in first order perturbation theory). For higher orders in perturbation theory, one can iterate the construction and explicitly obtain a solution for the conformal factor as a power series in the momentum. These approximations work remarkably well, as shown in figure 1 for the case of a single spinning hole . An important drawback of the Bowen–York family of solutions is the conformally flat nature of the spatial metric. This is especially troublesome since neither a boosted Schwarzschild black hole nor a spinning Kerr hole appears to admit slicings with conformally flat spatial sections. This means that the Bowen–York solutions will not represent purely boosted or spinning black holes, but there will be “additional radiation”, which in general will be larger the larger the momenta of the holes. Since for realistic collisions one expects the holes to be rapidly spinning, this is a serious impediment. In fact, one can consider a single Bowen–York hole and study its behavior treating it as a perturbation of a Schwarzschild hole. This has been done both for boosted and for spinning holes, and as shown in figure 2 for the spinning case, the total radiated energy is low for small values of the spin. As we will see, collisions of black holes rarely radiate more than $`1\%`$ of the holes’ mass, so one sees that the extra radiation in the Bowen–York family is tolerable even for moderate values of the momenta. Attempts have been made to generalize the Bowen–York ansatz to better accommodate especially the spinning cases. Krivan and Price and, independently, Baker and Puzio have proposed methods of solution of the constraint equations. The Krivan–Price approach is based on the fact that for solutions that are “conformally Kerr” one can also find ways of superposing holes. Their solutions develop some undesired singularities, which in the case of close black holes can be hidden by the common horizon. These families of initial data have indeed been evolved successfully in the close limit . The Baker–Puzio method is based on choosing an ansatz for the spatial geometry capable of accommodating what one would intuitively consider the superposition of the spatial metrics of two Kerr black holes, and then solving an eikonal equation for the extrinsic curvature. This is a quite novel approach in that one prescribes the metric and solves for the extrinsic curvature. The eikonal equations might however develop caustics and other singularities, and the method has not been completely implemented in practice. Both methods are at present restricted to axisymmetry, and therefore are not yet applicable to the most interesting cases of inspiralling holes. It appears that the only solutions one can construct that can reasonably accommodate spinning holes in inspiralling situations will have to be built numerically. Other initial data proposals involve the use of Kerr-Schild ansätze for the metric. They have been pursued in both the Cauchy and null formulations. The close limit of these families has not been explored yet. ## 2 Evolution Once one has the initial data, one can proceed to evolve.
To achieve this, the usual procedure has been to expand the initial data in terms of an expansion parameter that goes to zero as the separation of the holes goes to zero, and to identify the radial coordinate in conformal space with the radial coordinate in isotropic Schwarzschild coordinates. For cases involving boost or spin, one also keeps the leading terms in $`P`$ or $`S`$, and assumes that $`P`$ and $`S`$ are of the same order as the separation in conformal space $`d`$, in order to keep mixed terms. The first order departures from Schwarzschild are used to evaluate the initial data for the Zerilli function, which is then evolved using the Zerilli equation. It should be noticed that the first order departures are of order $`d^2`$ in terms of the conformal separation for non-moving holes, and also have terms of order $`Pd`$ for boosted holes. Here we face an inevitable difficulty, which in the end becomes problematic, at least for inspiralling holes. When one considers collisions of black holes with arbitrary boosts and spins, the problem is really multiparametric, and one is really pushing things by insisting on fitting the problem into the usual framework of black hole perturbation theory, where one starts by assuming a one-parameter family of space-times. To evolve first order perturbations of black holes one has several formalisms available. They are all equivalent, but the details are significantly different. Let me concentrate on two of the most popular approaches. One of them is based on the Newman-Penrose formulation and leads (in the case in which the background space-time corresponds to a rotating black hole) to the so-called Teukolsky equation. This equation is a (complex) equation for the linearized part of one of the components of the Weyl spinor. In the case of a non-rotating background the Teukolsky equation reduces to the so-called Bardeen–Press equation. For a variety of historical reasons we have not used these formalisms in our approach, but rather a different formalism which we broadly call the Regge–Wheeler–Zerilli (RWZ) formalism. This formalism was constructed by treating separately the even and odd parity portions of the linearized perturbations. In both cases a real function is constructed out of the linearized components of the metric and satisfies a linear equation. For the case of even-parity perturbations the equation is called the Zerilli equation, and for the odd-parity perturbations it is called the Regge–Wheeler equation. They are both equations for a single real function encoding the relevant gravitational degree of freedom. In the even parity case, the so-called “Zerilli function” is constructed with the components of the perturbative metric and its first time derivatives. Formulas for its construction are available that are invariant under first order coordinate transformations (gauge transformations) . Similar formulations are available for the odd-parity perturbations. For the case of head-on collisions of momentarily stationary and boosted black holes, as well as for non-head-on collisions of non-spinning holes, the first order perturbations are only even-parity, and therefore the whole problem can be treated solely with the Zerilli equation. This was in part the historical reason for looking at this formalism, since it is somewhat simpler than the Teukolsky one for these initially important cases, and yet applicable.
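In both parities the master equation is a 1+1 wave equation with an ℓ-dependent potential; for orientation, a sketch in standard conventions (the explicit potentials are the standard ones from the perturbation-theory literature, not quoted from this talk):
$$\frac{\partial ^2\psi _{\ell }}{\partial t^2}-\frac{\partial ^2\psi _{\ell }}{\partial r_*^2}+V_{\ell }(r)\,\psi _{\ell }=0,\qquad r_*=r+2M\,\mathrm{ln}\left(\frac{r}{2M}-1\right),$$
with the odd-parity (Regge–Wheeler) and even-parity (Zerilli) potentials
$$V^{RW}_{\ell }=\left(1-\frac{2M}{r}\right)\left[\frac{\ell (\ell +1)}{r^2}-\frac{6M}{r^3}\right],\qquad V^{Z}_{\ell }=\left(1-\frac{2M}{r}\right)\frac{2\lambda ^2(\lambda +1)r^3+6\lambda ^2Mr^2+18\lambda M^2r+18M^3}{r^3(\lambda r+3M)^2},$$
where $`\lambda =(\ell -1)(\ell +2)/2`$.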
At first order in perturbation theory, the Zerilli and Regge-Wheeler formalisms use functions that involve only first order time derivatives of the initial metric. The Bardeen–Press approach requires one further derivative, which means that one has to use the Einstein equations in addition to the initial value equations to construct the initial data. This is a bit more cumbersome and, in fact, can introduce differences depending on how one keeps orders in solving the Einstein equations, so it should be kept in mind when comparing the formalisms. Figure 3 shows the radiated energy in a collision of two momentarily stationary black holes (the Misner problem) as a function of the initial separation. We see here that the close approximation predicts the radiated energy very well up to separations of about six times the mass of each individual hole, when compared with full numerical simulations. The figure also includes results for second order perturbation theory. The formalism is described in detail in . What is clear from the figure is that the perturbative formalism is self-consistent; that is, it is able to predict, via recourse to higher order perturbations, when it will fail to agree with the numerical results. The results are even more impressive for waveforms, as shown in the next figure. This figure corresponds to a region of parameters in which the second order correction is maximal, that is, just before perturbation theory breaks down. It should be emphasized that the numerical results have certain uncertainties as well, as discussed in , so the agreement is probably even better than depicted. These results are illustrative of what one can achieve in the head-on collision case with the close limit. One can also discuss collisions of boosted black holes to first and second order in perturbation theory, and the agreement with the numerical results is even more striking. In figure 5 we show the agreement with numerical results for the energy. We see two remarkable things in the energy plot. First of all, the approximation works very well for large values of the momentum. Remember that we argued initially that we would be doing a “slow” approximation, in that we ignored the right-hand side of the Hamiltonian constraint. Why is it then that the energies keep agreeing well for large values of $`P`$? The answer lies in the structure of the equations of the initial value problem. The equation satisfied by the extrinsic curvature is linear. Therefore as we increase the momentum, the extrinsic curvature grows without bound; in fact it grows linearly. The Hamiltonian constraint, however, due to its non-linear structure, implies that the conformal factor has a weak dependence on the momentum. As a consequence, for large values of the momentum the initial data are completely “dominated” by the extrinsic curvature piece. We are doing a very poor job of accounting for the conformal factor with the “slow” approximation, but since the conformal factor is very small in norm with respect to the extrinsic curvature, the evolution is completely dominated by the extrinsic curvature, for which we have an exact solution! One should be cautious with this statement in one respect: the calculation of the ADM mass. The ADM mass is completely dominated by the conformal factor, and therefore it is very poorly approximated by our technique.
But the ADM mass can be computed numerically on the initial slice, and since the collisions radiate a comparatively small amount, the approximation is good throughout the evolution. The second interesting aspect of the energy plot for the boosted collision is the “dip” in the energy that occurs as one increases the momentum. This is related to the previous point. If one starts the plot from the left, initially there is zero momentum, so one is simply recovering the results of the Misner case. In that case, all the radiation is produced by the conformal factor, since the extrinsic curvature vanishes identically. As one increases the momentum, the portion of the initial data coming from the conformal factor and that coming from the extrinsic curvature “compete” with each other, and actually cancel each other, giving rise to the dip. As the momentum is increased further, the extrinsic curvature dominates. The cancellation at the dip implies that first order perturbation theory actually does not work too well there, in spite of the fact that nominally we are in the optimal regime of applicability. Since a cancellation is occurring, one needs higher orders to account properly for things, as the energy plot shows. ## 3 Collisions with net angular momentum The most interesting collisions of black holes are of course not the head-on ones, but the ones with angular momentum. In such a case the immediate reaction is to think that the space-time should be approximated as a perturbation of a Kerr black hole. We shall see that this is not necessarily the case. There are several reasons why it might not be better to consider perturbations of Kerr. To begin with, the whole perturbative paradigm consists in assuming one has a background metric and then “small departures” characterized by a dimensionless parameter $`ϵ`$. Consider the collision of two non-spinning holes. In the “close limit” approximation the way we have set up things is to assume that both the separation of the holes $`d`$ and their linear momenta $`P`$ are small. As a consequence the total angular momentum of the holes $`L=Pd`$ will be small. In the “close limit” the angular momentum goes to zero. That is, when we make the perturbative parameter small in such a family of initial data, we recover the Schwarzschild spacetime and not the Kerr spacetime. We shall in fact see that for this case (non-spinning holes) one is indeed better off not using Kerr perturbations in practice. Moreover, if one insists on using Kerr perturbations, the perturbative formalism one sets up is at best peculiar. This is due to the fact that the perturbative parameter (essentially the angular momentum) now appears in the background spacetime, and to all orders in perturbation theory. This is not the usual way perturbation theory is set up. We have carried out calculations of this sort, but we will shortly see that these conceptual difficulties eventually lead to confusions. What if the holes are spinning? In such a situation one would presumably be better off considering the problem as a perturbation of a Kerr spacetime, but there are caveats. Is one going to consider the spins as fixed and determining the background, and then use linear momenta and separation as “small” and “comparable” perturbative parameters? One might, but it would be odd in the sense that the orbital angular momentum of the system should also be taken into account when computing the total angular momentum.
Since the orbital angular momentum is a significant component of the total angular momentum, we are back at the same problem as before: the background will depend (maybe more mildly) on the perturbative parameter. To add to the difficulties, the families of Bowen and York do not represent Kerr black holes well individually, so if one is interested in studying situations with high spins in the individual holes, one will be adding a lot of spurious radiation. One might be facing an unsolvable problem, in the sense that the Schwarzschild solution has a more “robust” nature than the Kerr solution. That is, all concentrations of energy that are roughly spherical are close to the Schwarzschild solution outside. Rotating configurations only have exterior Kerr fields if there is a precisely tuned set of multipoles in the field. Two distinct concentrations of energy, like black holes, that inspiral towards each other might simply not look from the outside, multipole-wise, like a single rotating black hole. In the end perhaps the best way of sorting out these issues is to attempt to apply the perturbative formalism to these problems, and see what the outcome is. One can be conservative: in parameter regions where all formalisms agree, one can be quite confident in the results, and discard other results until confirmed in other ways. We are in the process of doing so. Currently we have only completed the non-head-on collision of two non-spinning Bowen–York holes . We have evolved it with both the Zerilli and the Teukolsky formalisms. To achieve the evolutions with the Teukolsky formalism we needed several intermediate results. To begin with, there was virtually no experience with the Teukolsky equation in the time domain, largely because it is a $`2+1`$-dimensional problem. Krivan, Laguna, Papadopoulos and Andersson have now written a code that integrates the Teukolsky equation in the time domain. That is the code we are using for evolution. Given the lack of experience with the Teukolsky equation in the time domain, we had to set up formulas relating the metric and extrinsic curvature to the initial data for the Teukolsky function. This is somewhat complicated technically, but it can be achieved . Figure 6 depicts the waveforms and energy radiated for the non-head-on collision of two black holes. The result shown is for two black holes initially separated in conformally flat space by $`d=1.8`$ in units of the mass of each hole. (If one were considering a Misner type geometry, the proper separation measured along the geodesic threading the throats would be $`5.5`$ in the same units .) The curve labeled $`Z`$ shows results for linearized perturbation calculations using the Zerilli equation; the curve $`T`$ shows the result of “hybrid perturbation” calculations using the Teukolsky equation. The two results diverge around parameter values of $`J/M^2=0.4`$ to $`0.5`$, and this is a reasonable limit to take for the applicability of perturbation estimates. We note that the Teukolsky results lie above the Zerilli results, and this weakly suggests that the Zerilli-based estimates are more accurate. (In close limit estimates for head-on collisions, linearized results always overestimated the nonlinear, i.e. numerical relativity, results.) Summarizing, we see that in the close limit the collisions do not seem to radiate more than $`1\%`$ of the mass of the holes. This limitation is robust, in the sense that we already saw it in the boosted head-on collisions: if one attempts to increase the radiation by boosting the black holes harder, one also increases the initial ADM mass, and therefore the radiated fraction of the energy does not increase in the end. The one percent figure is smaller than numbers that have traditionally been assumed for data analysis purposes .
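For later reference, the radiated fluxes extracted from the perturbation master functions have the schematic structure (with $`c_{\ell }`$ an ℓ-dependent normalization constant whose value depends on conventions; this is a sketch, not the specific normalization used in the figures):
$$\frac{dE}{dt}\propto \sum _{\ell m}c_{\ell }\,\left|\dot{\psi }_{\ell m}\right|^2,\qquad \frac{dJ_z}{dt}\propto \sum _{\ell m}c_{\ell }\,m\,\mathrm{Im}\left(\dot{\psi }_{\ell m}\,\bar{\psi }_{\ell m}\right).$$
The energy flux is a sum of squared mode amplitudes, while the angular momentum flux correlates each mode with its own phase; this distinction becomes important below.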
Having at hand collisions without axisymmetry, one can ask questions about the radiation of angular momentum. A priori these are very interesting questions, since it is expected that black holes will inspiral towards each other with too much angular momentum, in the sense of possessing more angular momentum than is needed to make the final resulting black hole extremal. Presumably this excess angular momentum has to be radiated somehow. It is not expected that this will happen in the final instants of the collision, but nevertheless it would be instructive to see what happens in these final moments. We have computed the radiated angular momentum in both the Zerilli and Teukolsky formalisms. At the moment, however, it is not clear whether these calculations are appropriate: we find that the radiated angular momentum computed in the two formalisms disagrees. We have eliminated all possible sources of error by simply evolving the same initial data with the Teukolsky and Zerilli codes and checking that, if one eliminates from the Teukolsky evolution equation (but not from the initial data) the angular momentum dependent terms, the results agree with those of the Zerilli evolutions. It appears that the addition of those small terms (inconsistent perturbatively, as we argued above) changes the predictions dramatically. This is not entirely surprising. The radiated angular momentum is a more subtle quantity to compute than the energy (where both formalisms agree quite well). The latter is basically a sum of squares, whereas the angular momentum is given by a correlation of modes. A small phase shift in one of the modes will therefore have no impact whatsoever on the calculation of the energy, but will change dramatically the angular momentum radiated. Apparently this is the effect of the extra terms in the Teukolsky equation, and as a consequence the calculation in this formalism predicts much more radiated angular momentum. There is clearly more to be understood in the comparison of Teukolsky and Zerilli calculations for the close limit of colliding black holes. This will require working out both formalisms to higher order. Progress in setting up a second order formalism for the Teukolsky equation is being made by Campanelli and Lousto . ## 4 Summary The close limit of black hole collisions has taught us several things about black hole collisions. The formalism is not capable of addressing the most interesting questions in the subject, but it allows us to tackle certain issues with a degree of concreteness that the full numerical simulations are currently lacking. Further work is needed to complete the understanding of the close limit of inspiralling black holes with spin. The whole subject has spawned interest in the initial data problem, and progress is being made on this front too. The application of perturbative techniques to extend the life of full numerical codes also opens a new avenue for synergy between numerical and analytical work. In my opinion this synergy will be vital to the final tackling of the problem of two colliding black holes.
## 5 Acknowledgments I wish to thank the organizers for the invitation to this wonderful conference. This paper summarizes work done with many collaborators: among the ones who have contributed the most are John Baker, Reinaldo Gleiser, Gaurav Khanna, Pablo Laguna, Hans-Peter Nollert, Richard Price. I am also grateful to Pete Anninos, Andrew Abrahams, Greg Cook, Steve Brandt and Ed Seidel for help with comparisons with full numerical results. I am indebted to many other people for insights and discussions. This work was supported in part by the National Science Foundation under grants NSF-INT-9512894, NSF-PHY-9423950, NSF-PHY-9407194, research funds of the Pennsylvania State University, the Eberly Family research fund at PSU. JP acknowledges support of the Alfred P. Sloan and John Simon Guggenheim foundations. I wish to thank the Institute for Theoretical Physics of the University of California at Santa Barbara for hospitality during the completion of this work.
# Effective Action and Thermodynamics of Radiating Shells in General Relativity ## 1 Introduction The Einstein field equations for a general distribution of matter are a formidable challenge, and the study of the collapse of self-gravitating compact objects is a very hard task to which much effort has been dedicated since the birth of General Relativity (see, e.g., Ref. ). One of the first papers in this field showed that, when gravity overcomes all other forces, a sphere of matter collapses under its own weight into a point-like singularity. This opened up a whole line of investigation about the nature of such a singularity and the way it forms. A state corresponding to a point-like singularity would violate Heisenberg’s uncertainty principle; therefore such a problem makes physical sense in a region for which gravity can be treated at the (semi)classical level (see e.g. and Refs. therein). Provided the conditions for such an approximation hold, one is left with two types of difficulties which conspire against the achievement of a definitive answer: first, one would like to consider a realistic model for the collapsing matter and, second, one has to face the intrinsic non-linearity of the Einstein field equations. The former aspect includes the quantum nature of matter as described by the standard model of elementary particles, with all of its own intricacies. The latter difficulty becomes greater along with the complications of the matter model but, in turn, provides the most interesting features. Thus one has to find a sensible compromise between realism and practical solvability of the equations. A great deal of simplification usually follows from global space-time symmetry. The natural framework for the study of gravitational collapse is isotropy with respect to a point. Of course this rules out rotating objects, such as stars and other astrophysical systems, and freezes many of the degrees of freedom of gravity (Birkhoff’s theorem forbids the emission of gravitational waves ). However this does not render the field equations trivial, and some of the strong field effects certainly survive in this approximation (see, e.g., Ref. ). An example of manageable isotropic distributions is given by dust fluid models, such as the Tolman-Bondi space-time , which can be used to represent spheres of pressureless matter and cosmological models. Such space-times develop caustics, where the energy density of the fluid diverges, because the geodesics followed by dust particles cross for generic initial conditions. The dynamics can be investigated by approximating the continuous distribution of matter by a discrete medium of nested homogeneous time-like shells of small but finite thickness. When dust particles are confined by their own weight to within roughly a few Compton wavelengths, and the latter is negligible with respect to any other “macroscopic” length in the model (to wit, the square root of the area of the shell and its Schwarzschild radius ), one can take the limit of infinitely small thickness (the thin shell limit) and treat each shell as a singular hypersurface generated by a $`\delta `$-function term in the matter stress-energy tensor . Then the basic problems are the study of the dynamics of one shell and of the collisions between two nearby shells (an issue we do not touch upon here). The method for treating singular hypersurfaces in General Relativity was formulated in Ref. as following from the Einstein field equations.
It amounts to the Lanczos junction equations between the embedding metrics and entails the conservation of the total stress-energy. The equations of motion of thin time-like shells have then been derived for all spherically symmetric embedding four-geometries , including the vacuum and bubbles of non-vanishing cosmological constant . It is remarkable and physically sound that, in non-radiating cases, the trajectory of the area of the shell is determined by the junction equations once one has fixed the two external metrics and the equation of state for the density and the surface tension (the radial pressure must vanish). The special case of radiating shells was studied in Ref. for unpolarized radiation of relatively high frequency, which behaves as null dust and gives rise to an external Vaidya geometry . This is indeed a good approximation when the radiation wavelength is much smaller than any other “macroscopic” length in the model (see, e.g., and Refs. therein) and is reasonably consistent within our approach, since the same condition on the Compton wavelength of the matter in the shell is required by the thin shell limit . For the purpose of developing a quantum theory, knowledge of the equations of motion is not sufficient; instead one needs an (effective) action for the shell degrees of freedom. The conceptual starting point is thus the Einstein-Hilbert action rather than the Einstein field equations. In Ref. both metric and matter degrees of freedom were kept dynamical, and the general form of the action was given for a barotropic fluid with step- or $`\delta `$-function discontinuity. One then implements the symmetry of the system in order to reduce the action to a manageable effective form. The literature on this topic treats essentially two approaches for the shell: In Ref. , the embedding empty space-time is foliated into spatial sections of constant time according to the ADM prescription , with lapse and shift functions in the four-metric. The Einstein-Hilbert Lagrangian from Ref. is then integrated over the spatial sections by making use of the properties of a spherical vacuum . This leads to a canonical effective action for the canonical variables of the system, which include the inner and outer Schwarzschild times on the shell as reminders of the geometrodynamics of the embedding space-time. A set of constraints ensures the invariance of the action under reparametrization. The shell matter Lagrangian used in Ref. is not in the form of a field theory and does not give rise to any canonical variable. In Refs. the embedding metrics are chosen a priori to be specific solutions of the Einstein field equations and are not dynamical. The corresponding contribution to the Einstein-Hilbert Lagrangian can then be integrated over a convenient space-time volume and expressed in terms of metric variables of the shell world-volume . This yields an effective action which is invariant under reparametrization of the time on the shell and is equivalent to the canonical one in domains of the phase space where some of the constraints are solved classically . Thus one expects that the mini-superspace approach gives limited information on the quantum theory of geometry fully addressed in the canonical approach, but does not limit the possibility of taking into account the quantum nature of matter in a semi-classical context, as was later done in Ref. . In the present paper we illustrate the derivation of an effective action for a time-like shell which can emit unpolarized high frequency radiation.
Since the basic aim is to establish a starting point for the study of semi-classical effects induced by the quantum evolution of the matter in the shell (along the lines of Ref. ), the mini-superspace approach will suffice. The dynamical meaning of such an action will just be to generate the time evolution of the shell degrees of freedom, which will be identified with the area and the mass aspect of the shell. However, we shall also see that this approach, unlike the standard junction prescriptions, allows a simple interpretation of the evolution equations and of time-reparametrization invariance in terms of the thermodynamics of the shell. The plan of the paper is as follows: in Section 2 we start from the general Einstein-Hilbert action as given in Ref. and simplify it in order to derive a mini-superspace effective action. In Section 3 we write the Euler-Lagrange equations of motion, two of which are equivalent to the junction equations, and compare with previous approaches. In Section 4 we illustrate a thermodynamic interpretation, and in Section 5 we finally comment on our results. We shall follow the sign convention of Ref. (see also Appendix B), with Greek indices $`\mu ,\nu ,\dots `$ labeling space-time four-coordinates and Latin indices $`i,j,\dots `$ for the coordinates of a three-dimensional sub-manifold; $`\kappa =16\pi G`$, with $`G`$ the Newton constant. ## 2 Derivation of the action The space-time $`\mathrm{\Omega }`$ we are considering is parted into two regions, $`\mathrm{\Omega }^\pm `$, separated by the shell world-volume $`\mathrm{\Sigma }`$. $`\mathrm{\Omega }^{}`$ inside the shell is devoid of matter and $`\mathrm{\Omega }^+`$ possibly contains out-flowing radiation. The corresponding Einstein-Hilbert action, from Ref. , is thus
$$S_{EH}=\frac{1}{\kappa }\int _{\mathrm{\Omega }^{-}}d^4x\,\sqrt{-g}\,\mathcal{R}+\frac{2}{\kappa }\int _{\mathrm{\Sigma }}d^3x\,\sqrt{-h}\,\left[\mathcal{K}\right]_{-}^{+}-\int _{\mathrm{\Sigma }}d^3x\,\sqrt{-h}\,\rho +\frac{1}{\kappa }\int _{\mathrm{\Omega }^{+}}d^4x\,\sqrt{-g}\,\mathcal{R}+\int _{\mathrm{\Omega }^{+}}d^4x\,\sqrt{-g}\,\mathcal{L}_{rad},$$ (2.1)
where $`g`$ is the determinant of the four-dimensional metric, $`h`$ the determinant of the three-dimensional metric on $`\mathrm{\Sigma }`$, $`\mathcal{R}`$ is the scalar curvature, $`\mathcal{K}`$ the trace of the extrinsic curvature of $`\mathrm{\Sigma }`$, $`\rho `$ the energy density of the shell and $`\mathcal{L}_{rad}`$ the radiation Lagrangian density. We have also introduced the notation $`\left[F\right]_{-}^{+}`$ for the difference between the limiting (on the shell) values of a function $`F`$ computed in $`\mathrm{\Omega }^+`$ and $`\mathrm{\Omega }^{}`$. The above action is the starting point for our derivation. First of all we observe that the spherical symmetry of the system makes it possible to introduce three global coordinates $`(r,\theta ,\varphi )`$, with $`r>0`$ the radial coordinate of a sphere of area $`4\pi r^2`$, and $`\theta \in (0,\pi )`$ and $`\varphi \in (0,2\pi )`$ the usual angular coordinates. The next step is to decide which one of the two approaches outlined in the introduction is better suited for the present problem. Since a formulation of the kind given in Ref. is not available at present for a radiation-filled space-time , the canonical approach is not viable. Therefore, we follow Ref. and assume the Einstein field equations are satisfied inside the space-time volumes $`\mathrm{\Omega }^\pm `$ .
In particular, we observe that $`\mathrm{\Omega }`$ can be naturally parted into three regions (see Fig. 1): i) the inner empty space $`\mathrm{\Omega }_{in}=\mathrm{\Omega }^{}`$, whose geometry can be either Minkowski or Schwarzschild. Let $`t_{in}`$ be the (Schwarzschild) time coordinate in $`\mathrm{\Omega }_{in}`$, then the corresponding metric can be written $`ds_{in}^2=a_{in}dt_{in}^2+a_{in}^1dr^2+r^2d\mathrm{\Omega }^2,`$ (2.2) where $`d\mathrm{\Omega }^2=d\theta ^2+\mathrm{sin}^2\theta d\varphi ^2`$ is the usual line element on the two-sphere and $`a_{in}=12M_{in}/r`$. The constant $`M_{in}`$ is positive for Schwarzschild and zero for Minkowski; ii) an outer space $`\mathrm{\Omega }_{out}`$ which can be filled with radiation. If $`t_{out}`$ denotes the (Schwarzschild) time in $`\mathrm{\Omega }_{out}`$, the metric, which is Vaidya , takes the diagonal form $`ds_{out}^2=b_{out}^2a_{out}^1dt_{out}^2+a_{out}^1dr^2+r^2d\mathrm{\Omega }^2,`$ (2.3) where $`b_{out}=_tm/_rm`$, $`a_{out}=12m/r`$. The mass aspect, $`m=m(t_{out},r)`$, equals the total energy included inside the sphere of area $`4\pi r^2`$ at time $`t_{out}`$ ; iii) an empty space $`\mathrm{\Omega }_{\mathrm{}}`$ prior to any emission of radiation with Schwarzschild metric and mass parameter equal to the total ADM mass $`M_{\mathrm{}}`$ of the system, $`ds_{\mathrm{}}^2=a_{\mathrm{}}dt_{\mathrm{}}^2+a_{\mathrm{}}^1dr^2+r^2d\mathrm{\Omega }^2.`$ (2.4) where $`a_{\mathrm{}}=12M_{\mathrm{}}/r`$. In this frame, the shell trajectory is given by $`r=R(\tau )`$ and $`t_{in/out}=T_{in/out}(\tau )`$, where $`\tau `$ is the arbitrary time variable on the shell world-volume $`\mathrm{\Sigma }`$ with three-metric $`h_{ij}`$ given by $`ds_\mathrm{\Sigma }^2=N^2d\tau ^2+R^2d\mathrm{\Omega }^2,`$ (2.5) $`N=N(\tau )`$ being the shell lapse function. The full stress-energy tensor of the matter in the system contains two parts, $`𝒯_{\mu \nu }=𝒮_{ij}\delta (rR)+𝒯_{\mu \nu }^{rad}\theta (rR),`$ (2.6) where $`\delta `$ is the Dirac $`\delta `$-function and $`\theta `$ the step-function. The source term, $`𝒮_{ij}=\mathrm{diag}[N^2\rho ,R^2P,R^2\mathrm{sin}^2\theta P],`$ (2.7) is the three-dimensional stress-energy tensor of a fluid with density $`\rho `$ and (surface) tension $`P`$ and $`𝒯_{\mu \nu }^{rad}`$ is the stress-energy tensor of the out-flowing null radiation. The mass aspect can be extended to all values of $`r>0`$, does not decrease for increasing $`r`$ and must be continuous in $`\mathrm{\Omega }`$ except at the shell radius, thus $`\underset{rR^{}}{lim}m(T_{in},r)=M_{in}`$ $`\underset{rR^+}{lim}m(T_{out},r)=M_{out}M_{in}`$ $`\underset{r\mathrm{}}{lim}m(t_{\mathrm{}},r)=M_{\mathrm{}}M_{out},`$ (2.8) where $`M_{out}=M_{out}(\tau )`$ is the mass aspect at the (outer) shell radius and equals the total energy of the shell plus $`M_{in}`$. Other matching conditions will be implemented in the following and, as in Eq. (2.8), we shall use capital letters for the restriction (or limit) to the shell position of functions of the space-time coordinates which are denoted by the corresponding small letters, e.g., $`A_{in}=a_{in}(R,T)`$. We shall usually drop the subscripts $`in/out`$ whenever this does not cause any confusion (e.g., $`t=t_{in}`$ for $`r<R`$) and total derivatives with respect to $`\tau `$ are denoted by a dot. Since the metrics above solve the Einstein equations with the source (2.6), the volume contributions in the action (2.1) must vanish identically. 
Further, the matching between $`\mathrm{\Omega }_{out}`$ and $`\mathrm{\Omega }_{\mathrm{}}`$ is smooth, provided the radial coordinate $`R_s`$ of the border between the two regions satisfies $`{\displaystyle \frac{dR_s}{dt_{\mathrm{}}}}=1{\displaystyle \frac{2M_{\mathrm{}}}{R_s}},`$ (2.9) where $`M_{\mathrm{}}`$ equals $`M_{out}`$ at the time $`\tau `$ at which the emission starts, and no boundary terms arise at the surface $`r=R_s`$. The reduced Einstein-Hilbert action can then be written $`S_{EH}^{red}`$ $`=`$ $`{\displaystyle \frac{8\pi }{\kappa }}{\displaystyle _{\tau _i}^{\tau _f}}𝑑\tau NR^2\left[𝒦\right]_{in}^{out}4\pi {\displaystyle _{\tau _i}^{\tau _f}}𝑑\tau NR^2\rho +S_B`$ (2.10) $``$ $`S_G^{shell}+S_M^{shell}+S_B,`$ where $`S_G^{shell}`$ is the surface gravitational action of the shell, $`S_B`$ contains all surface contributions at the border of the space-time $`\mathrm{\Omega }`$ (including those which are usually introduced to cancel second derivatives of the dynamical variables) and $`S_M^{shell}`$ is the shell matter action, $`S_M^{shell}=4\pi {\displaystyle _{\tau _i}^{\tau _f}}𝑑\tau NR^2\rho ,`$ (2.11) where we assume $`\rho `$ does not depend on $`N`$ and its dependence on the other two shell degrees of freedom $`R`$ and $`M_{out}`$ will be clarified in Section 3. At this point one can foresee a technical difficulty in evaluating surface terms at $`t`$ constant because the region $`\mathrm{\Omega }_{out}`$ is not homogeneous (see Fig. 1). In fact, the mass aspect $`m`$ is constant along the lines of the outgoing flow of radiation and one can introduce a null coordinate $`u=u_{out}(t_{out},r)`$ such that out-going null geodesics are given by $`u`$ constant with four-velocity $`k^\mu (0,1,0,0)`$ and one has $`𝒯_{\mu \nu }^{rad}={\displaystyle \frac{4}{\kappa }}{\displaystyle \frac{1}{r^2}}{\displaystyle \frac{dm}{du}}\delta _\mu ^u\delta _\nu ^u.`$ (2.12) The mass aspect $`m=m(u)`$ does not depend on $`r`$ now, and the metric is written (for $`r>R`$) $`ds_{out}^2=a_{out}du^22dudr+r^2d\mathrm{\Omega }^2.`$ (2.13) Such a $`u_{out}`$ is defined also for $`M_{out}`$ constant, in which case it coincides with a null Eddington-Finkelstein coordinate for the Schwarzschild space-time (see also Appendix A). Integration over $`r`$ at time $`t_{out}`$ constant would then involve integrating functions of $`m`$ and its derivatives, that is the knowledge of $`M(T)`$ and $`R(T)`$, for all $`t_iTt_{out}`$. However, although the differential relation between $`(t_{out},r)`$ and $`(u_{out},r)`$ is known, $`du=a_{out}^1\left(b_{out}dt_{out}+dr\right),`$ (2.14) $`u=u_{out}(t_{out},r)`$ cannot be written explicitly unless the functions $`M_{out}(\tau )`$ and $`R(\tau )`$ are fixed once and for all. At the same time the trajectory of the shell is to be derived from the sought effective action and cannot be given a priori. This renders explicit integration over slices of constant $`t`$ inconvenient. The above argument suggests one define a “mixed” foliation, where volume sections are defined by $`t`$ constant for $`r<R`$ and $`u`$ constant for $`r>R`$ (see Appendix A for an equivalent overall null foliation). Hence we shall integrate the Einstein-Hilbert action in the four-volume $`\mathrm{\Omega }=\mathrm{\Omega }_{in}\mathrm{\Omega }_{out}`$ displayed in Fig. 2. Such volume has a time-like boundary at $`r=R_{\mathrm{}}`$ and two “mixed” boundaries defined by $`t_{in}=t_i`$, $`u_{out}=u_i`$ and $`t_{in}=t_f>t_i`$, $`u_{out}=u_f`$. 
The shell trajectory is described equivalently by $`r=R(\tau )`$, $`t=T(\tau )`$, with (fixed) end-points $`R_i=R(\tau _i)`$ at $`T(\tau _i)=t_i`$ and $`R_f=R(\tau _f)`$ at $`T(\tau _f)=t_f`$, in $`\mathrm{\Omega }_{in}`$ and $`r=R(\tau )`$, $`u=U(\tau )`$ in $`\mathrm{\Omega }_{out}`$, with end-points at $`U(\tau _i)=u_i`$ and $`U(\tau _f)=u_f`$. The fact that all volume terms (the integrals over $`\mathrm{\Omega }^\pm `$ in Eq. (2.1)) do not contribute to the reduced action (2.10) can now be checked explicitly. The basic observation is to note that the scalar curvature for a metric of the form (2.13) or (A.1) is given by (see Appendix B) $`={\displaystyle \frac{2}{r^2}}\left[1{\displaystyle \frac{}{r}}\left(ar+{\displaystyle \frac{r^2}{2}}{\displaystyle \frac{a}{r}}\right)\right],`$ (2.15) and vanishes identically once one substitutes in $`a=a_{out}`$ for Vaidya or $`a=a_{in}`$ for Schwarzschild. The corresponding volume action is thus $`{\displaystyle \frac{4\pi }{\kappa }}{\displaystyle _{t_i}^{t_f}}𝑑t{\displaystyle _0^R}r^2𝑑r_{in}+{\displaystyle \frac{4\pi }{\kappa }}{\displaystyle _{u_i}^{u_f}}𝑑u{\displaystyle _R^R_{\mathrm{}}}r^2𝑑r_{out}=0,`$ (2.16) as expected. One also recalls that the Lagrangian density for a field of null dust is proportional to $`k_\mu k^\mu `$, where $`k^\mu `$ is the fluid four-velocity, and thus vanishes along null geodesics. In order to compute $`S_G^{shell}`$ we introduce Gaussian normal coordinates $`(\tau ,\eta ,\theta ,\varphi )`$ near the shell such that the trajectory $`R(\tau )`$ is given by $`\eta =0`$ with $`\eta >0`$ for $`r>R`$. Then, the jump in the components of the extrinsic curvature of the shell world-volume are given by the relation $`\left[𝒦_{ij}\right]_{out}^{in}=\underset{ϵ0^+}{lim}\left({\displaystyle \frac{1}{2}}{\displaystyle \frac{g_{ij}}{\eta }}|_{\eta =ϵ}{\displaystyle \frac{1}{2}}{\displaystyle \frac{g_{ij}}{\eta }}|_{\eta =+ϵ}\right).`$ (2.17) In detail (see Appendix C) $`𝒦_{\theta \theta }={\displaystyle \frac{𝒦_{\varphi \varphi }}{\mathrm{sin}^2\theta }}=\{\begin{array}{cc}R{\displaystyle \frac{\beta _{in}}{N}}\hfill & \eta =ϵ\hfill \\ & \\ R{\displaystyle \frac{\beta _{out}}{N}}\hfill & \eta =+ϵ,\hfill \end{array}`$ (2.21) $`𝒦_{\tau \tau }=\{\begin{array}{cc}{\displaystyle \frac{1}{\beta _{in}}}\left[\dot{R}\dot{N}{\displaystyle \frac{N^3}{2}}{\displaystyle \frac{A_{in}}{R}}\ddot{R}N\right]\hfill & \eta =ϵ\hfill \\ & \\ {\displaystyle \frac{1}{\beta _{out}}}\left[\dot{R}\dot{N}{\displaystyle \frac{N^3}{2}}{\displaystyle \frac{A_{out}}{R}}\ddot{R}N{\displaystyle \frac{N^3}{2}}\left({\displaystyle \frac{\beta _{out}\dot{R}}{A_{out}N}}\right)^2{\displaystyle \frac{A_{out}}{U}}\right]\hfill & \eta =+ϵ,\hfill \end{array}`$ (2.25) where $`\beta \sqrt{AN^2+\dot{R}^2}.`$ (2.26) The above expressions give $`S_G^{shell}={\displaystyle \frac{8\pi }{\kappa }}{\displaystyle _{\tau _i}^{\tau _f}}𝑑\tau \left[2\beta R+{\displaystyle \frac{R^2}{\beta }}\left\{\ddot{R}\dot{R}{\displaystyle \frac{\dot{N}}{N}}+{\displaystyle \frac{N^2}{2}}\left[{\displaystyle \frac{A}{R}}+\left({\displaystyle \frac{\beta \dot{R}}{AN}}\right)^2{\displaystyle \frac{A}{U}}\right]\right\}\right]_{out}^{in},`$ (2.27) where we used $`a_{in}/u_{in}=0`$ to write the limits from the two regions in the same form. The integral in Eq. (2.27) contains a second derivative of $`R`$ which should be eliminated by a suitable term in $`S_B`$. 
Such term must be a total derivative so that it does not affect the equations of motion and for the Schwarzschild metric it can be found in Ref. . The same kind of term works for Vaidya as well and is given by $`S_B^{(1)}={\displaystyle \frac{8\pi }{\kappa }}{\displaystyle _{\tau _i}^{\tau _f}}𝑑\tau {\displaystyle \frac{d}{d\tau }}\left[R^2\mathrm{tanh}^1\left({\displaystyle \frac{\dot{R}}{\beta }}\right)\right]_{out}^{in}.`$ (2.28) Adding (2.28) to (2.27) gives $`S_G^{shell}+S_B^{(1)}={\displaystyle \frac{8\pi }{\kappa }}{\displaystyle _{\tau _i}^{\tau _f}}𝑑\tau \left[2\beta R2\dot{R}R\mathrm{tanh}^1\left({\displaystyle \frac{\dot{R}}{\beta }}\right)+{\displaystyle \frac{M\beta }{A}}{\displaystyle \frac{\dot{M}R}{A}}\right]_{out}^{in}.`$ (2.29) We observe that $`\beta _{in}/A_{in}=\dot{T}_{in}`$ and the mass aspect $`M_{in}`$ is constant, therefore the last two terms are irrelevant in $`\mathrm{\Omega }_{in}`$ (being zero or total derivatives). In $`\mathrm{\Omega }_{out}`$, however, they cannot be neglected. We shall now identify another surface term of dynamical relevance which arises from the borders of $`\mathrm{\Omega }`$. We observe that the expression (2.15) of the scalar curvature is defined up to the addition of zero which can be written in the form $`0={\displaystyle \frac{2\alpha }{r^2}}{\displaystyle \frac{}{r}}(rar)+{\displaystyle \frac{2\gamma }{r^2}}{\displaystyle \frac{}{r}}\left({\displaystyle \frac{r^2}{2}}{\displaystyle \frac{a}{r}}\right),`$ (2.30) where $`\alpha `$ and $`\gamma `$ are arbitrary constants. Hence $`{\displaystyle \frac{4\pi }{\kappa }}{\displaystyle _{u_i}^{u_f}}𝑑u{\displaystyle _R^R_{\mathrm{}}}r^2𝑑r=`$ $`{\displaystyle \frac{8\pi }{\kappa }}{\displaystyle _{u_i}^{u_f}}𝑑u{\displaystyle _R^R_{\mathrm{}}}𝑑r\left[\left(1+\alpha \right)\left(1+\alpha \right){\displaystyle \frac{}{r}}\left(ar\right)\left(1\gamma \right){\displaystyle \frac{}{r}}\left({\displaystyle \frac{r^2}{2}}{\displaystyle \frac{a}{r}}\right)\right],`$ (2.31) from which one readily sees that there are no (non-trivial) surface terms at the null borders $`u=u_i`$ and $`u=u_f`$ of $`\mathrm{\Omega }_{out}`$. It is shown in Ref. that no such terms arise along the spatial borders $`t=t_i`$ and $`t=t_f`$ of $`\mathrm{\Omega }_{in}`$. Therefore, from (2.31) one concludes that a surface term could come only from the border at $`r=R_{\mathrm{}}`$ and should be compensated by $`S_B^{(2)}`$ $`=`$ $`{\displaystyle \frac{8\pi }{\kappa }}{\displaystyle _{u_i}^{u_f}}𝑑u\left[(1+\alpha )ar+(1\gamma ){\displaystyle \frac{r^2}{2}}{\displaystyle \frac{a}{r}}\right]_R_{\mathrm{}}`$ (2.32) $`=`$ $`{\displaystyle \frac{8\pi }{\kappa }}{\displaystyle _{u_i}^{u_f}}𝑑u\left[\left(1+2\alpha +\gamma \right)M_{out}\left(1+\alpha \right)R_{\mathrm{}}\right].`$ For $`\alpha =1`$ and $`\gamma =0`$ one obtains the usual contribution (see Appendix D). However we find it more useful to set $`\alpha =1`$ and $`\gamma =0`$, for which the integrand in Eq. (2.32) with the relation (C.6) becomes $`S_B^{(2)}`$ $`=`$ $`{\displaystyle \frac{8\pi }{\kappa }}{\displaystyle _{\tau _i}^{\tau _f}}𝑑\tau {\displaystyle \frac{\beta _{out}\dot{R}}{A_{out}}}M_{out},`$ (2.33) since this form cancels one of the terms in the integral (2.29). Putting together all the relevant pieces given in Eqs. 
(2.11), (2.29) and (2.33) finally yields the effective action $`S_{eff}(N,R,M_{out})`$ $`=`$ $`{\displaystyle \frac{8\pi }{\kappa }}{\displaystyle _{\tau _i}^{\tau _f}}𝑑\tau \left\{\left[2R\beta 2R\dot{R}\mathrm{tanh}^1\left({\displaystyle \frac{\dot{R}}{\beta }}\right)\right]_{out}^{in}+{\displaystyle \frac{\dot{M}_{out}R}{A_{out}}}{\displaystyle \frac{M_{out}\dot{R}}{A_{out}}}\right\}`$ (2.34) $`4\pi {\displaystyle _{\tau _i}^{\tau _f}}𝑑\tau NR^2\rho `$ $``$ $`{\displaystyle _{\tau _i}^{\tau _f}}𝑑\tau L_{eff}(N,R,M_{out}),`$ for the variables $`N`$, $`R`$ and $`M_{out}`$. It is worth noting that one recovers the effective action for a shell which collapses without emission of radiation simply by setting $`\dot{M}_{out}=0`$. In fact the integral of $`M_{out}\dot{R}/A_{out}`$, which does not appear in the action given in Ref. , depends only on the end-points when $`M_{out}`$ is a constant and is then dynamically irrelevant. In the next Section we shall vary $`S_{eff}`$ in order to check that it generates the same equations of motion as following from the junction conditions reviewed in appendix E. ## 3 Equations of motion In the following we shall obtain the Euler-Lagrange equations of motion by varying the effective action with respect to the $`\tau `$-dependent quantities appearing in the effective Lagrangian with the end-points held fixed at $`\tau _i`$ and $`\tau _f`$. Such variations are formally computed by acting on the Lagrangian with the operator ($`i=1,2,3`$) $`{\displaystyle \frac{\delta }{\delta X^i}}{\displaystyle \frac{}{X^i}}{\displaystyle \frac{d}{d\tau }}\left({\displaystyle \frac{}{\dot{X}^i}}\right),`$ (3.1) with $`X^i=(N,R,M_{out})`$ and $`\delta X^i(\tau _i)=\delta X^i(\tau _f)=0`$. To begin with, we note that it is sensible in the spirit of General Relativity to assume that the shell proper energy $`E`$ does not depend on the shell velocity $`\dot{R}`$. Further, in order to keep a sufficient degree of generality, we consider $`E`$ depending on both shell variables $`R`$ and $`M_{out}`$ (but not on $`N`$), so that the physical shell energy is related to the quantity $`4\pi R^2\rho `$ by $`E=4\pi R^2\rho (R,M_{out}),`$ (3.2) and agrees with the source in the $`\theta \theta `$-junction equation (E.2). We next introduce the canonical momenta $`P_i(L_{eff}/\dot{X}^i)`$, which are formally given by $`P_N=0`$ (3.3) $`P_R={\displaystyle \frac{8\pi }{\kappa }}\left\{2R\left[\mathrm{tanh}^1\left({\displaystyle \frac{\dot{R}}{\beta }}\right)\right]_{out}^{in}+{\displaystyle \frac{M_{out}}{A_{out}}}\right\}`$ (3.4) $`P_M={\displaystyle \frac{8\pi }{\kappa }}{\displaystyle \frac{R}{A_{out}}}.`$ (3.5) The particular expressions (3.3) and (3.5) deeply affect the nature of the dynamical system at hand, as can be seen from the (symmetric) matrix $`W_{ij}`$ $``$ $`{\displaystyle \frac{L_{eff}}{\dot{X}^i\dot{X}^j}}={\displaystyle \frac{P_i}{\dot{X}^j}},`$ (3.6) whose rank equals the number of canonical degrees of freedom with $`\mathrm{Dim}[W]\mathrm{Rank}[W]`$ being the number of primary constraints. Since $`\mathrm{Rank}[W]=\mathrm{Rank}\left[\mathrm{diag}\left(\begin{array}{ccc}0,& {\displaystyle \frac{P_R}{\dot{R}}},& 0\end{array}\right)\right]=1,`$ (3.8) the Lagrangian $`L_{eff}`$ contains two primary constraints and is said to be non-standard . 
Varying $`S_{eff}`$ with respect to the lapse function yields $`{\displaystyle \frac{\delta L_{eff}}{\delta N}}={\displaystyle \frac{L_{eff}}{N}}={\displaystyle \frac{16\pi }{\kappa }}{\displaystyle \frac{R}{N}}\left[\beta \right]_{out}^{in}E,`$ (3.9) where we made use of our previous assumption that $`\rho `$ does not depend on $`N`$. Upon setting this variation to zero we obtain the first primary (Hamiltonian) constraint $`{\displaystyle \frac{H}{N}}E{\displaystyle \frac{16\pi }{\kappa }}{\displaystyle \frac{R}{N}}\left[\beta \right]_{out}^{in}=0,`$ (3.10) which is related to the time-reparametrization invariance of the shell three-metric and is formally equal to the analogous constraint obtained in Ref. for non-radiating shells. In order to make contact with the notation in Appendix E we observe that, for $`N=1`$, one formally recovers the $`\theta \theta `$-junction equation (E.2). Varying with respect to $`R`$ yields the equation of motion $`{\displaystyle \frac{\delta L_{eff}}{\delta R}}={\displaystyle \frac{L_{eff}}{R}}{\displaystyle \frac{d}{d\tau }}\left({\displaystyle \frac{L_{eff}}{\dot{R}}}\right)=0,`$ (3.11) that is $`{\displaystyle \frac{\delta }{\delta R}}\left(NE\right)={\displaystyle \frac{16\pi }{\kappa }}\left\{\left[\beta +{\displaystyle \frac{1}{\beta }}\left(R\ddot{R}+{\displaystyle \frac{N^2M}{R}}{\displaystyle \frac{R\dot{N}\dot{R}}{N}}\right)\right]_{out}^{in}+{\displaystyle \frac{\dot{M}_{out}}{\beta _{out}}}{\displaystyle \frac{\beta _{out}\dot{R}}{12M_{out}/R}}\right\}.`$ (3.12) Upon defining $`P{\displaystyle \frac{1}{8\pi NR}}{\displaystyle \frac{\delta }{\delta R}}\left(NE\right)={\displaystyle \frac{\delta E}{\delta 𝒜}},`$ (3.13) ($`𝒜=4\pi R^2`$ is the area of the surface of the shell) and setting $`N=1`$ we formally recover the $`\tau \tau `$-junction equation (E.3). Now, unlike the non-radiating case, one must also consider varying $`M_{out}`$, thus obtaining the second primary constraint which has no interpretation as a junction condition, $`{\displaystyle \frac{\delta L_{eff}}{\delta M_{out}}}`$ $`=`$ $`{\displaystyle \frac{L_{eff}}{M}}{\displaystyle \frac{d}{d\tau }}\left({\displaystyle \frac{L_{eff}}{\dot{M}}}\right)`$ (3.14) $`=`$ $`{\displaystyle \frac{16\pi }{\kappa }}{\displaystyle \frac{\beta _{out}\dot{R}}{12M_{out}/R}}{\displaystyle \frac{\delta }{\delta M_{out}}}\left(NE\right)=0.`$ It was already obvious from Eq. (3.3) and $`W_{11}=0`$ that the lapse function $`N`$ is a Lagrange multiplier and can be assigned any function of $`\tau `$. Hence, from now on we work in the proper time gauge $`N=1`$, which makes most of the equations look simpler. The corresponding constraint (3.10) was in anticipation named the Hamiltonian constraint since the quantity $`NH`$ is the canonical Hamiltonian, as can be seen from the Hamiltonian form of the effective Lagrangian, $`L_{eff}=P_N\dot{N}+P_R\dot{R}+P_M\dot{M}_{out}NH.`$ (3.15) The Hamiltonian constraint must be preserved in time, which yields the secondary constraint $`{\displaystyle \frac{dH}{d\tau }}`$ $`=`$ $`{\displaystyle \frac{dE}{d\tau }}{\displaystyle \frac{E}{R}}\dot{R}{\displaystyle \frac{E}{M_{out}}}\dot{M}_{out}`$ (3.16) $`=`$ $`4\pi R^2\left[\dot{\rho }2{\displaystyle \frac{\dot{R}}{R}}\left(P\rho \right){\displaystyle \frac{4}{\kappa }}{\displaystyle \frac{\dot{M}_{out}}{R}}{\displaystyle \frac{\beta _{out}\dot{R}}{R2M_{out}}}\right]=0,`$ where we have used both Eqs. (3.12) and (3.14) to show that, in general, Eq. (3.16) is trivially satisfied by our ansatz $`E=E(R,M_{out})`$ in Eq. (3.2). 
Since $`W_{33}=0`$, also the quantity $`M_{out}`$ is not a true dynamical variable. Further, the total time derivative of (3.14) vanishes in virtue of $`E=E(R,M_{out})`$ on using Eqs. (3.14) itself and (3.16), therefore no new secondary constraint is generated. This means that one can set $`M_{out}=M_{out}(\tau )`$ (any function of $`\tau `$), which determines $`E=E(R,M_{out})`$ through the constraints (3.10) and (3.14), and compute $`R=R(\tau )`$ with correspondingly $`R(\tau _i)=R_i`$ by solving Eq. (3.12). This picture is well suited to describe a shell whose luminosity curve in time is given by $`{\displaystyle \frac{dQ}{d\tau }}{\displaystyle \frac{16\pi }{\kappa }}{\displaystyle \frac{\beta _{out}\dot{R}}{12M_{out}/R}}{\displaystyle \frac{dM_{out}}{d\tau }}.`$ (3.17) The above quantity is negative for radiating shells (we required $`\dot{U}>0`$, see Eq. (C.6)) and diverges for the shell approaching its Schwarzschild radius , unless $`\dot{M}_{out}=\dot{U}{\displaystyle \frac{dM_{out}}{du}}(R2M_{out}),`$ (3.18) for $`R2M_{out}`$. This singular behaviour (and the corresponding bound on $`\dot{M}_{out}`$) is due to the fact that the coordinates $`(u,r)`$ are related to the point of view of a static observer (see Appendix E) and is removed by passing to Israel’s coordinates (see also Appendix C). However, one can alternatively choose the value of $`E=E(R(\tau ),M_{out}(\tau ))`$ as an explicit function of $`\tau `$ and determine $`P`$ from (3.12) and the luminosity from (3.14). Then the trajectory $`R=R(\tau )`$ is obtained by imposing the Hamiltonian constraint (3.10) for all times $`\tau _i\tau \tau _f`$. This option is particularly useful if one wishes to impose only initial conditions and then consider both the trajectory and the luminosity of the shell as completely determined by the specific interaction between the shell matter and the emitted radiation encoded in the dependence of $`E`$ on $`R`$ and $`M_{out}`$. ### 3.1 Comparison with other approaches In a previous approach the outer mass is considered fixed, in which case it is not possible nor necessary to consider the variation with respect to it. Nontheless one may proceed in analogy with such an approach and obtain a self-consistent, complete set of equations starting from our action Eq. (2.34). The first case one may consider is that for which the arbitrary function $`M_{out}(\tau )`$ is taken to be a function of the shell radius $`M_{out}=M_{out}(R(\tau ))`$. One then has that the two terms to the right of eq. (2.34) are a total derivative with respect to the proper time, $`{\displaystyle _{\tau _i}^{\tau _f}}𝑑\tau \left({\displaystyle \frac{\dot{M}_{out}R}{A_{out}}}{\displaystyle \frac{M_{out}\dot{R}}{A_{out}}}\right)={\displaystyle _{\tau _i}^{\tau _f}}𝑑\tau \dot{R}{\displaystyle \frac{R^2}{2}}\left({\displaystyle \frac{}{R}}\mathrm{log}A_{out}\right),`$ (3.19) which is a boundary term and does not influence the equation of motion. Then the tension is found to be $`P={\displaystyle \frac{2}{\kappa }}\left[{\displaystyle \frac{\beta }{R}}+{\displaystyle \frac{\ddot{R}}{\beta }}+{\displaystyle \frac{1}{2\beta }}{\displaystyle \frac{dA}{dR}}\right]_{out}^{in}`$ (3.20) which is the analogue of eq. (3.12) and agrees with the $`\tau \tau `$-junction equation. 
Using this expression and imposing the conservation of the Hamiltonian constraint we obtain as anticipated a continuity equation, which takes the form $`{\displaystyle \frac{d\rho }{dR}}={\displaystyle \frac{2}{R}}(P\rho ).`$ (3.21) Finally for the general case $`M_{out}(\tau )`$ we note that Eq. (3.14), which was obtained by varying the action with respect to $`M_{out}`$, can also be arrived at by using the expression for the pressure (that is the equation of motion for $`R`$) and imposing the conservation of the Hamiltonian constraint $`{\displaystyle \frac{dH}{d\tau }}=0.`$ (3.22) It is clear, as we have mentioned after Eq. (2.34), that for $`M_{out}`$ constant we reproduce the usual results . ## 4 Thermodynamics Just as for the black hole case , one may attempt for the case of the radiating shell a thermodynamic approach. The first observation is that in thermodynamics one has “quasi-static” processes so that the equations of state are always satisfied. This of course implies in our case that all time derivatives be considered small so that matter on the shell always be in equilibrium . Let us illustrate such an approach (in this Section we denote the quasi-static limit of previously defined quantities by the same letter in italics style). In the shell one expects that the intensive coordinate be the surface tension $`𝒫`$ and the extensive one be the area $`𝒜`$. Thus, considering quasi-static processes for which $`\dot{R}^21{\displaystyle \frac{2M_{out}}{R}},`$ (4.23) from Eq. (3.12) one finds (again setting $`N=1`$) $`𝒫={\displaystyle \frac{2}{\kappa R}}\left[\sqrt{A}+{\displaystyle \frac{M}{R\sqrt{A}}}\right]_{out}^{in}.`$ (4.24) Similarly, from Eq. (3.10) one may identify $`={\displaystyle \frac{16\pi }{\kappa }}R\left[\sqrt{A}\right]_{out}^{in},`$ (4.25) which will correspond to the shell internal energy. We note that all quantities are a function of the (slowly) time-dependent variables $`M_{out}`$ and $`R`$, thus it is clear that $`d`$ (a change of the internal energy) is an exact differential of $`M_{out}`$ and $`R`$, which is a statement of the first law of thermodynamics . One may now introduce a variation of the heat $`𝒬`$, $`\text{d}\text{-}𝒬`$ $`=`$ $`d𝒫d𝒜`$ (4.26) $`=`$ $`d8\pi R𝒫dR`$ $`=`$ $`{\displaystyle \frac{}{M_{out}}}dM_{out}+\left({\displaystyle \frac{}{R}}8\pi R𝒫\right)dR,`$ which on using Eqs. (4.24) and (4.25) leads to $`\text{d}\text{-}𝒬={\displaystyle \frac{16\pi }{\kappa }}{\displaystyle \frac{dM_{out}}{\sqrt{12M_{out}/R}}}.`$ (4.27) This is in agreement with the quasi-static limit of Eq. (3.14) and is related to the luminosity of the shell, Eq. (3.17). At this point, in order to obtain the change in entropy, one must introduce a temperature $`𝒯`$ which must be such that $`d𝒮=\text{d}\text{-}𝒬/𝒯`$ is an exact differential as a statement of the second law of thermodynamics . This implies that $`{\displaystyle \frac{}{R}}\left(𝒯\sqrt{12M_{out}/R}\right)^1=0,`$ (4.28) which has a solution $`𝒯={\displaystyle \frac{\mathrm{}}{8\pi k_bM_{out}}}{\displaystyle \frac{1}{\sqrt{12M_{out}/R}}},`$ (4.29) where $`k_b`$ is the Boltzmann constant, and corresponds to the desired equation of state connecting $`M_{out}`$, $`R`$ and $`𝒯`$. 
The entropy change in this case is then given by $`d𝒮=\left({\displaystyle \frac{16\pi k_b}{\mathrm{}\kappa }}\right)\mathrm{\hspace{0.17em}8}\pi M_{out}dM_{out},`$ (4.30) and for $`\kappa =16\pi `$, $`\mathrm{}=k_b=1`$ one has (omitting an arbitrary constant) $`𝒮=4\pi M_{out}^2={\displaystyle \frac{1}{4}}(\text{area of horizon}),`$ (4.31) which is the usual result for a black hole and it is clear that a collapse for which $`\dot{M}_{out}=0`$ is adiabatic (“isentropic”). Obviously, given our expressions for the entropy, surface tension and internal energy, one may actually evaluate the various thermodynamic potentials. Let us just end by determining the specific heat at constant area. It will be given by $`C_𝒜=𝒯\left({\displaystyle \frac{𝒮}{𝒯}}\right)_𝒜={\displaystyle \frac{1}{8\pi 𝒯^2}}\left(1{\displaystyle \frac{3M_{out}}{R}}\right)^1,`$ (4.32) which for $`R>3M_{out}`$ is negative as expected. ## 5 Conclusions In this paper we have analyzed the dynamics of radiating shells in General Relativity by deriving a mini-superspace effective action for the area and mass aspect of the shell. Besides proving the dynamical equivalence of this approach with the usual treatment via junction equations, we have introduced a temperature (equation of state) and obtained a thermodynamic, quasi-static description for the evolution of the shell. The effective action in itself is useful for developing the quantum theory and we hope to use it along the lines of Ref. . We also think that the remarks in the Introduction and the analysis of the equations of motion carried on in the previous Section make clear that description given for radiating shells is a general framework which can be used to study a wide variety of cases of physical interest. ## Appendix A Overall null foliation Instead of the “mixed” foliation used in the text, one might introduce a null Eddington-Finkelstein coordinate $`du_{in}=dt_{in}a_{in}^1dr`$ such that $`ds_{in}^2=a_{in}du_{in}^22du_{in}dr+r^2d\mathrm{\Omega }^2,`$ (A.1) for $`r<R`$, and consider slices of constant $`u`$ both inside and outside the shell. Then the volume over which one integrates the Einstein-Hilbert action would have null boundaries at $`u=u_i`$ and $`u=u_f`$ for all $`0<r<R_{\mathrm{}}`$ (this would change the shape of $`\mathrm{\Omega }_{in}`$ with respect to Fig. 2). It is however easy to show that this does not change the effective action. In fact the scalar curvature for Schwarzschild and Minkowski can be formally written as in Eq. (2.15), with $`a_{in}`$ a function of (at most) $`r`$, since the mass aspect is constant and equal to $`M_{in}`$ for $`r<R`$. Therefore both the volume contribution and surface terms vanish in the region $`r<R`$ and one is left with the terms at $`\eta =ϵ`$ in $`S_G^{shell}`$ as given in Eq. (2.29). 
## Appendix B Curvature scalar for the Vaidya metric The scalar curvature for a metric of the form (2.13) or (A.1) with a generic $`a=a(u,r)`$ can be computed straightforwardly from the definition of the Riemann tensor $`=g^{\nu \sigma }_{\nu \mu \sigma }^\mu ,`$ (B.1) where $`\mu ,\nu \mathrm{}=u,r,\theta ,\varphi `$, and $`_{\nu \mu \sigma }^\mu =\mathrm{\Gamma }_{\nu \sigma ,\mu }^\mu \mathrm{\Gamma }_{\nu \mu ,\sigma }^\mu +\mathrm{\Gamma }_{\nu \sigma }^\alpha \mathrm{\Gamma }_{\alpha \mu }^\mu \mathrm{\Gamma }_{\nu \alpha }^\mu \mathrm{\Gamma }_{\sigma \mu }^\alpha .`$ (B.2) The non-vanishing connection coefficients, $`\mathrm{\Gamma }_{\nu \lambda }^\mu ={\displaystyle \frac{1}{2}}g^{\mu \sigma }\left(g_{\sigma \nu ,\lambda }+g_{\sigma \lambda ,\nu }g_{\nu \lambda ,\sigma }\right),`$ (B.3) are given by $`\begin{array}{ccc}\mathrm{\Gamma }_{uu}^u=\frac{1}{2}\frac{a}{r},\hfill & \mathrm{\Gamma }_{\varphi \varphi }^u=\mathrm{sin}^2\theta \mathrm{\Gamma }_{\theta \theta }^u=r^2\mathrm{sin}^2\theta ,\hfill & \\ & & \\ \mathrm{\Gamma }_{uu}^r=\frac{1}{2}\frac{a}{u}+\frac{a}{2}\frac{a}{r},\hfill & \mathrm{\Gamma }_{\varphi \varphi }^r=\mathrm{sin}^2\theta \mathrm{\Gamma }_{\theta \theta }^r=ar\mathrm{sin}^2\theta ,\hfill & \mathrm{\Gamma }_{ur}^r=\frac{1}{2}\frac{a}{r},\hfill \\ & & \\ \mathrm{\Gamma }_{r\theta }^\theta =\mathrm{\Gamma }_{r\varphi }^\varphi =\frac{1}{r},\hfill & \mathrm{\Gamma }_{\varphi \varphi }^\theta =\mathrm{sin}^2\theta \mathrm{\Gamma }_{\theta \varphi }^\varphi =\mathrm{sin}\theta \mathrm{cos}\theta ,\hfill & \end{array}`$ (B.9) and lead to the result stated in Eq. (2.15). ## Appendix C Gaussian normal coordinates in Vaidya space-time The metric in a neighborhood of the shell can be written in Gaussian normal coordinates $`(\tau ,\eta ,\theta ,\varphi )`$ as $`ds^2=g_{\tau \tau }d\tau ^2+g_{\eta \eta }d\eta ^2+2g_{\tau \eta }d\tau d\eta +r^2d\mathrm{\Omega }^2,`$ (C.1) where $`g_{\tau \tau }(\tau ,\eta )=a\left({\displaystyle \frac{u}{\tau }}\right)^22{\displaystyle \frac{u}{\tau }}{\displaystyle \frac{r}{\tau }}`$ $`g_{\tau \eta }(\tau ,\eta )=0=a{\displaystyle \frac{u}{\eta }}{\displaystyle \frac{u}{\tau }}{\displaystyle \frac{u}{\tau }}{\displaystyle \frac{r}{\eta }}{\displaystyle \frac{u}{\eta }}{\displaystyle \frac{r}{\tau }}`$ $`g_{\eta \eta }(\tau ,\eta )=1=a\left({\displaystyle \frac{u}{\eta }}\right)^22{\displaystyle \frac{u}{\eta }}{\displaystyle \frac{r}{\eta }},`$ (C.2) and we choose $`\eta >0`$ ($`\eta <0`$) for points in $`\mathrm{\Omega }_{out}`$ ($`\mathrm{\Omega }_{in}`$). Matching (C.1) to the three-metric (2.5) on the shell world-volume at $`\eta =0`$ gives the set of equations $`A\dot{U}^2+2\dot{U}\dot{R}=N^2`$ (C.3) $`A\dot{U}{\displaystyle \frac{u}{\eta }}|_{\eta =0}+\dot{U}{\displaystyle \frac{r}{\eta }}|_{\eta =0}+\dot{R}{\displaystyle \frac{u}{\eta }}|_{\eta =0}=0`$ (C.4) $`A\left({\displaystyle \frac{u}{\eta }}|_{\eta =0}\right)^2+2{\displaystyle \frac{u}{\eta }}|_{\eta =0}{\displaystyle \frac{r}{\eta }}|_{\eta =0}=1.`$ (C.5) From (C.3) it follows that $`\dot{U}=(\pm \beta \dot{R})/A`$, with $`\beta `$ given in Eq. (2.26). We require $`\dot{U}>0`$ and, since we wish to describe collapsing trajectories with $`\dot{R}<0`$ for $`R>2M`$, we must choose the plus sign $`\dot{U}={\displaystyle \frac{\beta \dot{R}}{A}}.`$ (C.6) For $`R2M`$ the above expression diverges (unless $`\beta =\dot{R}=0`$), which signals the fact that the coordinates in use become singular on that surface . 
In fact, one has that $`u+\mathrm{}`$ for $`r2m`$ (both for Vaidya and Schwarzschild). This can be cured from the onset by passing to Israel’s coordinates $`(v,w)`$ which are regular all the way down to $`r=0`$ . They are defined by $`du={\displaystyle \frac{dv}{W(v)}},{\displaystyle \frac{dW}{dv}}={\displaystyle \frac{1}{4m(v)}}`$ (C.7) and $`r=2m(v)+W(v)w`$. In this frame the Vaidya line element becomes $`ds^2=\left({\displaystyle \frac{4}{W}}{\displaystyle \frac{dm}{dv}}+{\displaystyle \frac{w^2}{2mr}}\right)dv^2+2dvdw+r^2d\mathrm{\Omega }^2.`$ (C.8) However the same result for the components of the extrinsic curvature is obtained if the change is made at the end of the computation (see Appendix E). Eqs. (C.4) and (C.5) yield $`{\displaystyle \frac{u}{\eta }}|_{\eta =0}={\displaystyle \frac{\beta \dot{R}}{AN}},{\displaystyle \frac{r}{\eta }}|_{\eta =0}={\displaystyle \frac{\beta }{N}}.`$ (C.9) Hence the radial coordinate $`r`$ as a function of $`\tau `$ and $`\eta `$ is continuous, but its derivative with respect to $`\eta `$ has a jump on $`\mathrm{\Sigma }`$. At the same time $`u`$ need not even be continuous across $`\mathrm{\Sigma }`$. We can now write $`(u,r)`$ as functions of the Gaussian normal coordinates $`(\tau ,\eta )`$ explicitly up to order $`\eta `$, $`\{\begin{array}{c}u=U{\displaystyle \frac{\beta \dot{R}}{AN}}\eta +𝒪(\eta ^2)\hfill \\ \\ r=R+{\displaystyle \frac{\beta }{N}}\eta +𝒪(\eta ^2),\hfill \end{array}`$ (C.13) where we recall that the above expressions hold both in $`\mathrm{\Omega }_{in}`$ ($`\eta <0`$) and $`\mathrm{\Omega }_{out}`$ ($`\eta >0`$), thus it is to be understood that $`A=\{\begin{array}{cc}\underset{\eta 0^{}}{lim}a_{in}=12M_{in}/R\hfill & \mathrm{in}\mathrm{\Omega }_{in}\hfill \\ \underset{\eta 0^+}{lim}a_{out}=12M_{out}/R\hfill & \mathrm{in}\mathrm{\Omega }_{out},\hfill \end{array}`$ (C.16) and so forth. From the knowledge of the mapping (C.13) one can determine the components (C.1) of the metric and their derivatives with respect to $`\eta `$ and compute the extrinsic curvature according to Eq. (2.17). ## Appendix D Standard boundary term at $`R_{\mathrm{}}`$ Surface contributions are usually computed by making use of the trace $`𝒦`$ of the extrinsic curvature of the border of the space-time volume $`\mathrm{\Omega }`$, according to Eq. (2.10). Further, $`𝒦`$ is related to the covariant derivative of the unit normal to the border. Thus, at $`r=R_{\mathrm{}}`$, one has $`S_B^{(3)}={\displaystyle \frac{2}{\kappa }}{\displaystyle _{u_i}^{u_f}}𝑑u𝑑\theta 𝑑\varphi \sqrt{h}_\mu n^\mu |_R_{\mathrm{}},`$ (D.1) where $`h=ar^4\mathrm{sin}^2\theta `$ is the determinant of the pull-back of the metric (2.13) on an hypersurface of constant $`r`$, $``$ denotes the covariant derivative in the metric (2.13), $`_\mu n^\mu ={\displaystyle \frac{1}{\sqrt{g}}}_\mu \left(\sqrt{g}n^\mu \right),`$ (D.2) and $`n^\mu =(a^{1/2},a^{1/2},0,0)`$ is the unit normal to an hypersurface of constant $`r`$ in $`(u,r)`$ components. Computing the derivative at $`r=R_{\mathrm{}}`$ one finds $`S_B^{(3)}={\displaystyle \frac{8\pi }{\kappa }}{\displaystyle _{u_i}^{u_f}}𝑑u\left[2R_{\mathrm{}}{\displaystyle \frac{R_{\mathrm{}}^2}{R_{\mathrm{}}2M_{out}}}{\displaystyle \frac{dM_{out}}{du}}3M_{out}\right].`$ (D.3) We may now show that Eq. (D.3) is dynamically equivalent to the result (2.32) used in the text. 
We first integrate the second term in the integral above and obtain $`S_B^{(3)}={\displaystyle \frac{8\pi }{\kappa }}{\displaystyle _{u_i}^{u_f}}𝑑u\left[2R_{\mathrm{}}3M_{out}\right]+{\displaystyle \frac{4\pi }{\kappa }}R_{\mathrm{}}^2\mathrm{ln}{\displaystyle \frac{R_{\mathrm{}}M_f}{R_{\mathrm{}}M_i}}.`$ (D.4) The last term does not affect the equations of motion, since the effective action is varied with fixed endpoints, and can therefore be dropped. It is shown in Section 2 that $`S_B^{(2)}`$ depends on two arbitrary coefficients and for $`\alpha =1`$ and $`\gamma =0`$ one just recovers the integral in Eq. (D.4). ## Appendix E Junction equations The junction equations at the shell surface $`\mathrm{\Sigma }`$ relate the jump in the extrinsic curvature $`K_{ij}`$ to the matter stress-energy tensor on $`\mathrm{\Sigma }`$ , $`\left[𝒦_{ij}\right]_{in}^{out}={\displaystyle \frac{\kappa }{2}}\left(𝒮_{ij}{\displaystyle \frac{1}{2}}g_{ij}𝒮_k^k\right),`$ (E.1) where $`g_{ij}`$ is the three-metric (2.5) in the proper time gauge $`N=1`$ and the source term $`𝒮_{ij}`$ is given in Eq. (2.7). Upon substituting the components (2.21) of the extrinsic curvature into Eq. (E.1) one obtains the $`\theta \theta `$-junction equation $`E={\displaystyle \frac{16\pi }{\kappa }}R\left[\beta \right]_{out}^{in},`$ (E.2) where we have introduced the shell proper mass $`E`$ according to Eq. (3.2). Analogously (2.25) yield the $`\tau \tau `$-junction equation $`P={\displaystyle \frac{2}{\kappa }}\left\{\left[{\displaystyle \frac{\beta }{R}}+{\displaystyle \frac{\ddot{R}}{\beta }}+{\displaystyle \frac{M}{R^2\beta }}\right]_{out}^{in}+{\displaystyle \frac{\dot{M}_{out}}{\beta _{out}}}{\displaystyle \frac{\beta _{out}\dot{R}}{R2M_{out}}}\right\}.`$ (E.3) In Israel’s coordinates (see Appendix C) the last term in the right hand side of Eq. (E.3) becomes $`{\displaystyle \frac{\dot{U}\dot{M}_{out}}{R\beta _{out}}}={\displaystyle \frac{1}{R\beta _{out}}}{\displaystyle \frac{\dot{V}^2}{W}}{\displaystyle \frac{dM_{out}}{dv}},`$ (E.4) which renders our equation (E.3) the same as Eq. (3.3) of Ref. where the problem of integrating Eqs. (E.2) and (E.3) was addressed numerically for a particular choice of the luminosity. Let us focus on analytic results that can be obtained independently of the detail of the interaction between the shell and the out-flowing null dust. We take, for $`\mathrm{\Omega }_{in}`$, flat Minkowski space ($`M_{in}=0`$) and $`2M_{out}(\tau _i)<R_i`$. 1) Taking $`R2M_{out}`$ and $`\dot{R}1`$ in Eq. (E.2) gives, to leading order, $`E(16\pi /\kappa )M_{out}`$, which is the relation one would expect in (asymptotically) flat space. 2) Expanding Eqs. (E.2) and (3.16) for $`R2M_{out}2M_{out}`$ gives, again to leading order, $`E2M_{out}\left(\sqrt{1+\dot{R}^2}+\dot{R}\right)`$ (E.5) $`\dot{E}{\displaystyle \frac{64\pi }{\kappa }}{\displaystyle \frac{M_{out}\dot{M}_{out}\dot{R}}{R2M_{out}}}8\pi M_{out}\dot{R}P.`$ (E.6) For $`\dot{E}=0`$ it follows that $`\dot{M}_{out}=0`$ only if $`P`$ always remains zero (dust). One can interpret this result as if, in general, the radiation can be fed by the gravitational energy of the shell whose extraction induces a surface tension (and changes the motion of $`R`$ correspondingly). If we demand that $`P`$ be regular everywhere, because of the denominator on the right hand side of Eq. (E.6), for $`\dot{M}_{out}<0`$ the proper energy emitted per unit proper time diverges with $`R2M_{out}`$. This, together with Eq. 
(E.5), namely $`EM_{out}`$, implies that $`E`$ and $`M_{out}`$ would vanish at a value of $`R2M_{out}`$. On the other hand, if the time derivative of the mass aspect vanishes according to the bound in Eq. (3.18) one (temporarily) recovers the non-radiating case, for which the shell crosses $`2M_{out}`$ at a finite proper time and with finite proper energy. The condition (3.18) is necessary for the stress-energy tensor of the radiation to be locally finite when the shell crosses the surface $`r=2M_{out}`$ , that is $`\underset{r2M_{out}}{lim}{\displaystyle \frac{|T_{uu}^{rad}|}{(r2M_{out})^2}}\underset{R2M_{out}}{lim}{\displaystyle \frac{|\dot{M}_{out}|}{(R2M_{out})}}<\mathrm{}.`$ (E.7) We remark that the stronger condition $`\dot{M}_{out}=0`$ is necessary to ensure that $`r=2M_{out}`$ is a null surface after the shell has crossed it, which is a property of event horizons as opposite to apparent horizons . In fact, the tangent to the surface $`r=2M_{out}`$ in $`\mathrm{\Omega }_{in}`$ is $`t^\mu =(\dot{T}_{in},2\dot{M}_{out},0,0)`$ and has norm $`t^\mu t_\mu =1`$. In $`\mathrm{\Omega }_{out}`$ the tangent would be $`t^\mu (\dot{U},2\dot{M}_{out},0,0)`$ with norm $`t^\mu t_\mu 4\dot{M}_{out}\dot{U}`$. Therefore $`t^\mu `$ is time-like in $`\mathrm{\Omega }_{in}`$ and would become space-like (or null for $`2\dot{M}_{out}=0`$) in $`\mathrm{\Omega }_{out}`$ . The mathematical origin of the bound (3.18) lies in the inadequacy of the coordinates $`(u,r)`$ to cover both the interior and the exterior of $`r=2m`$ in the Vaidya space-time (see Appendix C). It has also a physical interpretation in terms of the point of view of a static observer at $`r2M_{out}`$. On assuming that the metric for $`r2M_{out}`$ can be approximated by the Schwarzschild line element, the time $`t_{\mathrm{}}`$ of the static observer is related to $`\tau `$ according to $`{\displaystyle \frac{dt_{\mathrm{}}}{d\tau }}={\displaystyle \frac{\beta }{12M_{out}/R}}{\displaystyle \frac{2M_{out}\dot{R}}{R2M_{out}}}.`$ (E.8) Then Eq. (E.6) can be written as $`{\displaystyle \frac{dE}{dt_{\mathrm{}}}}{\displaystyle \frac{32\pi }{\kappa }}{\displaystyle \frac{dM_{out}}{d\tau }}.`$ (E.9) This is in accordance with the fact that, due to the infinite redshift experienced by a distant observer, the luminosity of a star which collapses and forms a black hole would decay exponentially and eventually (for $`t_{\mathrm{}}+\mathrm{}`$) fade. On the other hand, if the flux of radiation measured by the distant observer does not vanish before all the proper energy of the shell has been radiated away, then the shell remains always outside the surface $`r=2M_{out}`$ until $`E`$ vanishes (thus leading to flat Minkowski space instead of a black hole).
no-problem/9909/physics9909037.html
ar5iv
text
# Carbon clusters near the crossover to fullerene stability \[ ## Abstract > The thermodynamic stability of structural isomers of $`\mathrm{C}_{24}`$, $`\mathrm{C}_{26}`$, $`\mathrm{C}_{28}`$ and $`\mathrm{C}_{32}`$, including fullerenes, is studied using density functional and quantum Monte Carlo methods. The energetic ordering of the different isomers depends sensitively on the treatment of electron correlation. Fixed-node diffusion quantum Monte Carlo calculations predict that a $`\mathrm{C}_{24}`$ isomer is the smallest stable graphitic fragment and that the smallest stable fullerenes are the $`\mathrm{C}_{26}`$ and $`\mathrm{C}_{28}`$ clusters with $`\mathrm{C}_{2v}`$ and $`\mathrm{T}_d`$ symmetry, respectively. These results support proposals that a $`\mathrm{C}_{28}`$ solid could be synthesized by cluster deposition. \] Since the discovery of the fullerene $`\mathrm{C}_{60}`$, the study of carbon clusters has revealed a rich variety of physical and chemical properties. Fullerene clusters may now be synthesised in macroscopic quantities, but despite many experimental and theoretical advances the detailed energetics of these systems are not yet fully understood. The question “which is the smallest stable fullerene?” remains both interesting and contentious due to the sensitivity of cluster formation to experimental conditions and the challenges posed to theoretical methods by system size and the high accuracy required. In this Letter we report very accurate calculations of the relative energies of $`\mathrm{C}_{24}`$, $`\mathrm{C}_{26}`$, $`\mathrm{C}_{28}`$ and $`\mathrm{C}_{32}`$ clusters, and identify the smallest stable fullerenes. The number of low-energy candidate structures can be large, even for quite small clusters, precluding exhaustive theoretical searches with highly accurate but computationally expensive methods. In practice, a hierarchy of methods of increasing accuracy and computational cost must be used. The first step is to select candidate structural isomers via empirical methods based on bond counting and geometric “rules” such as “minimize the number of adjacent pentagons”. Quantum mechanical calculations based on tight-binding and density functional theory (DFT) methods can then used to refine the selection. To finally establish the energetic ordering of different isomers, highly accurate calculations must be performed. Quantum chemical methods, such as coupled cluster (CC) calculations, are potentially highly accurate, but are severely limited by the size of basis set that is computationally affordable in these systems. Quantum Monte Carlo (QMC) methods give an accurate treatment of electron correlation which, combined with an absence of basis set error, favorable scaling with system size and suitability for parallel computation, renders them ideal for these studies. QMC calculations have reproduced experimental binding energies of small hydrocarbons to within 1%. Using the techniques described below we have calculated the cohesive energy of bulk diamond, obtaining values of 7.36(1) and 7.46(1) eV per atom in variational Monte Carlo (VMC) and diffusion Monte Carlo (DMC), respectively, which are in very good agreement with the experimental value of 7.37 eV. Carbon clusters are particularly challenging to model accurately due to the wide range of geometries and the occurrence of single, double, and triple bonds. 
These differences result in a non-cancelation of errors in relative energies, exaggerating any errors due to approximations involved in electronic structure methods. Despite these potential difficulties, carbon clusters have been extensively studied using methods such as tight-binding, density functional, quantum chemical and QMC. The need for high accuracy calculations with a sophisticated treatment of electron correlation has been clearly illustrated by several previous studies. Grossman et al. have performed diffusion Monte Carlo calculations for $`\mathrm{C}_{20}`$ clusters, finding that the fullerene is not energetically stable. A DFT study of $`\mathrm{C}_{20}`$ isomers showed that the local density approximation (LDA) and the BLYP gradient-corrected functional gave different energy orderings for the sheet, bowl and fullerene structures, with neither agreeing with that of DMC. Jensen et al. made calculations for the monocyclic ring and fullerene isomers of $`\mathrm{C}_{24}`$ which demonstrated significant differences between the predictions of the LDA, gradient corrected and hybrid density functionals. These authors also performed second-order M$`ø`$ller-Plesset and CC calculations, but with a limited basis set, concluding that the fullerene is lower in energy than the ring. Raghavachari et al. and Martin et al. studied seven isomers of $`\mathrm{C}_{24}`$ using DFT, but obtained conflicting energetic orderings. For clusters containing between 20 and 32 atoms, three classes of isomer are energetically competitive: fullerenes, planar or near-planar sheets and bowls, and monocyclic rings. The smallest possible fullerene, defined as a closed cage containing only pentagonal and hexagonal faces, consists of 20 atoms. However, the smallest fullerenes most commonly identified by time of flight and mass spectroscopy measurements are the $`\mathrm{C}_{30}`$ and $`\mathrm{C}_{32}`$ clusters. Rings are found to dominate up to approximately 28 carbon atoms under typical experimental conditions, and fullerenes are mostly observed for larger clusters, although other structures are also present (see for example, Ref. ). In this work we present a QMC study of five isomers of $`\mathrm{C}_{24}`$, three of $`\mathrm{C}_{26}`$ and $`\mathrm{C}_{28}`$, and two of $`\mathrm{C}_{32}`$, thereby covering the range of masses from where the fullerene is clearly predicted to be unstable to where a fullerene is clearly observed. This enables us to predict the smallest energetically stable fullerene. We apply the diffusion quantum Monte Carlo method in which the imaginary time Schrödinger equation is used to evolve an ensemble of electronic configurations towards the ground state. The “fixed node approximation” is central to this method; the nodal surface of the exact fermionic wave function is approximated by that of a guiding wave function. Core electrons were modeled by an accurate norm-conserving pseudopotential, and the non-local energy was evaluated stochastically within the locality approximation. We used Slater-Jastrow guiding wave functions consisting of the product of a sum of Slater determinants of single-particle orbitals obtained from CRYSTAL95 or Gaussian94 with a Jastrow correlation factor. Optimized uncontracted valence Gaussian basis sets of four *s*, four *p* and one *d* function were used to represent the single-particle orbitals. Jastrow factors of up to 80 parameters were optimized using efficient variance minimization techniques, yielding 75-90% of the DMC correlation energy. 
We relaxed the structures by performing highly converged density functional calculations. The geometries were obtained from all-electron calculations using the B3LYP hybrid functional and Dunning’s cc-pVDZ basis set , which has been found to be an accurate and affordable combination. To assess the sensitivity of the total energies to the geometries, we compared the energies of the fully relaxed ring and $`\mathrm{D}_6`$ fullerene isomers of $`\mathrm{C}_{24}`$ (see Fig. 1) using the BLYP and B3LYP functionals. The functionals give significantly different energetic orderings, but the differences between the geometries are small - less than 0.03 angstroms in bond lengths and 0.4 degrees in bond angles. The relative energies of these structures changed by a maximum of 0.27 eV for each of the functionals investigated. The relative energies are therefore rather insensitive to the functional used to obtain the geometries, but are more sensitive to the functional used to calculate the energies. These changes are small compared with the overall range of energies, but some changes in the orderings of the isomers closest in energy could occur. We considered the following isomers of $`\mathrm{C}_{24}`$, as depicted in Fig. 1: a polyacetylenic monocyclic ring, a flat graphitic sheet, a bowl-shaped structure with one pentagon, a caged structure with a mixture of square, pentagonal and hexagonal faces, and a fullerene. Other candidate structures, such as bicyclic rings and a 3-pentagon bowl were excluded on the grounds that DFT calculations using several different functionals have shown them to be significantly higher in energy. As well as DMC calculations we have also performed DFT calculations using the LDA, two gradient corrected functionals (PBE and BLYP) and the B3LYP functional. The results shown in Fig. 1 confirm that the treatment of electron correlation has a profound effect on the relative energies. All of the functionals give different energetic orderings, and none gives the same ordering as DMC. The graphitic sheet is placed lowest in energy by DMC, in agreement with each of the functionals except BLYP, which places the ring lowest in energy. The low energy of the $`\mathrm{C}_{24}`$ graphitic sheet is expected because the structure accommodates a large number (7) of hexagonal rings without significant strain. This structure is predicted to be the smallest stable graphitic fragment. Both DMC and the DFT approaches find the $`\mathrm{C}_{24}`$ fullerene to be unstable. Three isomers of $`\mathrm{C}_{26}`$ were considered: a cumulenic monocyclic ring, a graphitic sheet with one pentagon and a fullerene of $`\mathrm{C}_{2v}`$ symmetry (Fig. 2). Few studies of the $`\mathrm{C}_{26}`$ fullerene have been made, in part due to the high strains evident in its structure. Recently Torelli and Mitáš have demonstrated the importance of using multi-determinant trial wave functions to describe aromaticity in 4N+2 carbon rings . We have tested this for the $`\mathrm{C}_{26}`$ ring, using a 43 determinant trial wave function obtained from a CI singles-doubles calculation. The multi-determinant wave function gave a slightly lower DMC energy than the single determinant wave function, by approximately $`0.5`$ eV, confirming that CI wave functions can have better nodal surfaces than HF wave functions. The ring and sheet-like isomers are close in energy, but the fullerene is approximately $`2.5`$ eV below these isomers and is therefore predicted to be the smallest stable fullerene. 
Small changes in the geometries are highly unlikely to change this conclusion. Three $`\mathrm{C}_{28}`$ isomers were investigated: a monocyclic ring, a graphitic sheet and a fullerene of $`\mathrm{T}_d`$ symmetry (Fig. 3). Other bowl and sheet-like structures were excluded on energetic grounds. Spin-polarized DFT calculations show the ground state of the $`\mathrm{T}_d`$ symmetry fullerene to be a spin-polarized $`{}_{}{}^{5}A_{2}^{}`$ state. DMC predicts that this spin-polarized fullerene is the lowest energy isomer of $`\mathrm{C}_{28}`$, and this is supported by each of the functionals except BLYP. The spin-polarized fullerene has four unpaired electrons and is therefore highly reactive. This property has been exploited in atom trapping experiments in which fullerenes containing single four-valent atoms, $`\mathrm{C}_{28}\mathrm{M}`$, have been prepared by laser vaporization of a graphite-MO<sub>2</sub> (M = Ti, Zr, Hf or U) composite rod. Our prediction that the fullerene is the most stable isomer of $`\mathrm{C}_{28}`$ indicates that isolated fullerenes might be readily produced. This would facilitate investigations of $`\mathrm{C}_{28}`$ fullerene solids, which have been discussed but not yet produced, although this route may be hampered by the chemical reactivity of the fullerene. (A $`\mathrm{C}_{36}`$ fullerene solid has been reported.) Our DFT and DMC results for $`\mathrm{C}_{28}`$ (Fig. 3) again highlight a wide variation between different DFT functionals. The LDA and B3LYP functionals predict the same ordering as DMC, but the PBE and BLYP functionals give different orderings. The DMC data strongly indicates that the $`\mathrm{T}_d`$ fullerene is the most stable $`\mathrm{C}_{28}`$ isomer at zero temperature. The fullerene has the lowest DMC energy in both spin-polarized and non spin-polarized calculations, and is substantially more stable than the sheet and ring. Small changes in the geometries are therefore unlikely to change this ordering. Our DMC calculations for the $`\mathrm{C}_{32}`$ monocyclic ring and fullerene show that the fullerene is 8.4(4) eV per molecule lower in energy, which is consistent with the observation of a large abundance of $`\mathrm{C}_{32}`$ fullerenes in a recent cluster experiment. In Fig. 4 we plot the DMC binding energies per atom of all the ring and fullerene structures considered. The binding energies of the fullerenes rise much more rapidly with cluster size than those of the rings because of the large amount of strain in the smaller fullerenes. The DMC binding energy of the $`\mathrm{C}_{32}`$ fullerene is approximately $`1`$ eV per atom less than the experimental binding energy of $`\mathrm{C}_{60}`$. Our DFT and DMC results highlight several important trends in the relative performance of the different functionals. The overall quality of a functional for the clusters is best judged by the agreement with the DMC data for the overall shapes of the relative energy data of Figs. 1-3. The best agreement is given by the PBE and B3LYP functionals, with the LDA being slightly inferior and the BLYP functional being worst. The tendency of the LDA to favor structures of high average coordination and for the BLYP functional to favor structures of low average coordination is consistent with the results for $`\mathrm{C}_{20}`$ reported by Grossman et al.. The final test of our predictions must lie with experiment. It is clear that the actual abundances of different clusters depend sensitively on experimental conditions. 
Analysis of the stability of clusters against fragmentation, growth and other chemical reactions is complicated. One key issue is that the clusters are formed at temperatures of order 1000 K, and therefore the vibrational contributions to the free energy can be significant. Fortunately, a simple picture emerges from computations of vibrational properties. Fullerenes are relatively rigid and have higher vibrational free energies than rings, which have many low-lying vibrational modes. Vibrational effects therefore tend to favor the ring isomers at high temperatures. However, according to our DMC calculations the $`\mathrm{C}_{26}`$ and $`\mathrm{C}_{28}`$ fullerenes are several eV per cluster lower in energy than the other isomers, so that significant amounts of fullerene could exist at the temperatures of formation. If thermodynamic stability alone were to determine which cluster sizes were observed, then only the largest fullerenes would ever be seen; but in a recent experiment the abundance of the $`\mathrm{C}_{32}`$ fullerene was found to be greater than that of $`\mathrm{C}_{60}`$. There is more evidence that thermodynamic stability against rearrangements of clusters of a particular size is important in determining which isomers are observed. For example, in the experimental study of Ref. , fullerenes were mostly observed for clusters containing more than about 30 carbon atoms, while for smaller clusters mostly rings were formed. This crossover is close to the critical size for fullerene stability of 26-28 atoms predicted by our DMC calculations. In conclusion, performing accurate calculations of the relative energies of carbon clusters is a severe test of electronic structure methods because of the widely differing geometries and the occurrence of single, double and triple bonds. In our DMC calculations for $`\mathrm{C}_{24}`$, the lowest energy isomer is a graphitic sheet, which is expected to be the smallest stable graphitic fragment. We predict that the smallest energetically stable fullerenes are the $`\mathrm{C}_{2v}`$ symmetry $`\mathrm{C}_{26}`$ cluster and the spin-polarized $`{}_{}{}^{5}A_{2}^{}`$ state of the $`\mathrm{T}_d`$ symmetry $`\mathrm{C}_{28}`$ cluster. This prediction lends weight to recent proposals that a $`\mathrm{C}_{28}`$ solid could be synthesized by surface deposition of $`\mathrm{C}_{28}`$ fullerenes. Financial support was provided by EPSRC (UK). Calculations were performed on the CRAY-T3E at the University of Manchester and the Hitachi SR2201 located at the University of Cambridge HPCF.
no-problem/9909/astro-ph9909303.html
ar5iv
text
# Lensing by Groups of Galaxies ## Abstract A large fraction of known galaxy-lens systems requires a component of external shear to explain the observed image geometries. In most cases, this shear can be attributed to a nearby group of galaxies. We discuss how the dark-matter mass distribution of groups of galaxies can influence the external shear for strongly lensed sources and calculate the expected weak lensing signal from groups for various mass profiles. Most galaxies in the universe are neither in clusters nor isolated, but found in groups. To date, it is not known whether there exists a significant group dark-matter halo, similar to that for clusters of galaxies, or whether most of the matter is associated with the individual galaxies themselves. A recent weak-lensing study of a number of groups has shown that it is possible in principle to determine the mass profile of groups using their weak lensing signal (Hoekstra 1999). Because many models of strong lens systems require a substantial external shear to explain the observed image geometries and magnifications (Keeton, Kochanek & Seljak 1997, Kneib et al. 1998), we investigate how the dark matter distribution in a nearby group affects the external shear. We outline how both strong and weak gravitational lensing can be used to determine the mass distribution within groups of galaxies. To investigate the gravitational lensing effect of groups we use a fast and accurate numerical ray-tracing code (Möller & Blain 1998, Blain, Möller & Maller 1999) and a group model based on PG 1115+080 (Keeton & Kochanek 1997). The group in the PG 1115+080 system consists of four galaxies separated by $`10^{\prime \prime }`$ with a total velocity dispersion of about $`270\mathrm{km}\mathrm{s}^{-1}`$. The group halo and galaxies are modelled as singular isothermal spheres (SISs). The group halo is assumed to be centred on the geometrical mean position of the group galaxies. We carried out simulations for different values of the ratio of group-halo mass to individual-galaxy mass. Fig. 1(a) shows the expected radial profile of the reduced shear. There is a significant difference between a model in which all the mass is associated with the galaxies and models in which a significant fraction of the mass is in an intergalactic group halo. This will affect both the weak lensing distortions of background galaxies and the external shear contribution to any nearby strong galaxy lens. To test whether weak lensing observations can be used to distinguish these two different cases we simulated 100 $`60^{\prime \prime }\times 60^{\prime \prime }`$ fields at a resolution of $`0.1^{\prime \prime }`$, each containing 500 galaxies. The future Advanced Camera for Surveys (ACS) on the Hubble Space Telescope will have a field of view of similar size, a resolution slightly below $`0.1^{\prime \prime }`$ and a sensitivity that will yield several hundred galaxies per square arcminute in a few hours of integration. Fig. 1(b) shows the reconstructed mass profiles. For groups that have a massive group halo, the mass profile is significantly steeper than for groups in which most mass is associated with the galaxies. The offset between the reconstructed and true mass profiles is due to the mass sheet degeneracy. The results show that it would be possible to distinguish between the different mass models with observations of this quality, which will be practical using ACS. The external shear produced by a galaxy group depends significantly on the mass profile of the group.
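Since both the halo and the galaxies are modelled as SISs, each component has an analytic lensing profile. A minimal Python sketch of the reduced shear of a single SIS group halo; the velocity dispersion is the value quoted above, while the lensing distance ratio is an illustrative assumption, not a fitted quantity:

```python
import numpy as np

# Reduced shear profile of a singular isothermal sphere (SIS).
# sigma_v is from the text; dls_over_ds = 0.5 is a hypothetical distance ratio.
c_kms = 2.998e5
sigma_v = 270.0            # group velocity dispersion, km/s
dls_over_ds = 0.5          # assumed D_ls / D_s

# Einstein radius of an SIS, converted from radians to arcseconds
theta_E = 4.0 * np.pi * (sigma_v / c_kms) ** 2 * dls_over_ds
theta_E_arcsec = np.degrees(theta_E) * 3600.0

theta = np.linspace(2.0, 60.0, 30)             # radius from halo centre, arcsec
kappa = theta_E_arcsec / (2.0 * theta)         # SIS convergence
gamma = theta_E_arcsec / (2.0 * theta)         # SIS tangential shear (= kappa)
g_reduced = gamma / (1.0 - kappa)              # reduced shear, the observable

for th, g in zip(theta[::6], g_reduced[::6]):
    print(f"theta = {th:5.1f}\"  g = {g:.4f}")
```

For the galaxies-only model the shears of the offset galaxy SISs add as tensors at each position, which is what drives the different radial profiles in Fig. 1(a).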
As most galaxies, and hence lenses, are likely to be part of a group, most lens models will need to take this effect into account. In the weak lensing regime, our results show that observations with new instruments, like the ACS, could differentiate between various group mass profiles. ###### Acknowledgements. We thank Andrew Blain for useful discussions and comments on the manuscript.
no-problem/9909/cond-mat9909094.html
ar5iv
text
# Asymptotic Distribution of Eigenvalues for a Self-Affine String ## I Introduction An old problem in mathematical physics is how various irregularities influence the asymptotics of the cumulative eigenvalue distribution for a physical resonator. This problem has many significant physical applications, such as wave scattering from fractal surfaces, liquid flow in porous media, and vibrations of cracked bodies or macromolecules (polymers). In 1910 Lorentz put forward the conjecture that the number of eigenmodes up to some (large) frequency $`\omega `$ depends on the “volume” and not on the shape of the resonator. This was later proved by Weyl under the condition of sufficiently smooth, but otherwise arbitrary, boundaries. Later Hunt et al. improved this formula by including a correction term which depends on lower powers of the frequency and also on the “surface area” of the resonator’s perimeter. Berry, when working on wave scattering from fractal surfaces, generalized the result of Hunt et al. to fractal boundaries. He conjectured that for fractal boundaries with Hausdorff dimension $`D_H`$, the first correction term should scale as $`\omega ^{D_H}`$. However, Lapidus has proved that the correct fractal dimension is not the Hausdorff dimension, but another nontrivial dimension known as the Minkowski dimension $`D_M`$. Over the last decade or so, there has been renewed interest in this and related problems, both in the mathematics and physics communities. In this paper we consider a related problem in which we study a string that has an irregular (self-affine) mass density and local elastic coefficient. For a physical realization, we could for instance think of a long vibrating (“fuzzy”) polymer. This paper is organized as follows. In Section II we introduce some of the general theoretical background for the problem. In Section III we then make our conjecture for the integrated density of states (IDOS) for our self-affine string, based on the results of Section II. In Section IV we discuss the decimation technique, the numerical method used to calculate the IDOS. The numerical results are presented in Section V, and the conclusion of this paper is drawn in Section VI. ## II General Theory In this section we review some results which will prove useful in the later discussion. Let $`\mathrm{\Gamma }\subset \mathbb{R}^n`$ be a bounded open set. Consider the elliptic differential equation $`\nabla ^2u(x)+a(x)\omega ^2u(x)`$ $`=`$ $`0,x\in \mathrm{\Gamma }`$ (1) with homogeneous Dirichlet boundary conditions. The “weight function” $`a(x)`$ will be assumed to be a positive real-valued function. This equation has a countable sequence of positive eigenvalues (eigenfrequencies) tending to infinity. Let $`N(\omega ,\mathrm{\Gamma })`$ denote the integrated density of states (IDOS), i.e. the number of eigenmodes with eigenfrequency below $`\omega `$. Lapidus and Fleckinger have shown that the asymptotic behavior of $`N(\omega ,\mathrm{\Gamma })`$ is $`N(\omega ,\mathrm{\Gamma })`$ $`\sim `$ $`W(\omega ,\mathrm{\Gamma })={\displaystyle \frac{\mathcal{B}_n\omega ^n}{(2\pi )^n}}{\displaystyle \int _\mathrm{\Gamma }}(a(x))^{n/2}d^nx,`$ (2) as $`\omega \rightarrow \infty `$. Here $`\mathcal{B}_n`$ denotes the volume of the unit ball in $`\mathbb{R}^n`$ and $`\mathrm{\Gamma }`$ is the resonator domain. This term is usually called the “Weyl term” after Weyl, who first proved this asymptotic behavior for a “classical” (i.e. non-fractal) resonator (the Weyl conjecture).
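As a quick sanity check of Eq. (2) in one dimension, the Weyl estimate can be compared against an exact mode count for a uniform string, where the spectrum is known in closed form. A minimal Python sketch, with illustrative parameter values:

```python
import numpy as np

# Check of the n = 1 Weyl term for a constant weight a(x) = a0, where the
# exact eigenfrequencies are omega_k = k*pi/(L*sqrt(a0)).  Values are illustrative.
L, a0, omega = 1.0, 4.0, 200.0

# Exact IDOS: number of omega_k below omega
N_exact = int(np.floor(omega * L * np.sqrt(a0) / np.pi))

# Weyl term: (omega/pi) * integral of sqrt(a(x)) dx, done numerically
x = np.linspace(0.0, L, 10001)
W = omega / np.pi * np.trapz(np.sqrt(a0 * np.ones_like(x)), x)

print(N_exact, W)   # 127 vs 127.32...: agreement up to an O(1) correction
```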
## III Conjectures for the self-affine string In this paper we will study the situation where the boundary is regular while the weight-function and/or the elastic coefficient is irregular. We will use the following physical model: Consider a freely vibrating string with fixed endpoints at $`x=0`$ and $`x=L`$ (Dirichlet boundary conditions). It has an elastic coefficient $`E(x)=E_0+E_1(x)`$ and density profile $`\rho (x)=\rho _0+\rho _1(x)`$. Here $`E_0>0`$ and $`\rho _0>0`$ are constants and $`E_1(x)`$ and $`\rho _1(x)`$ are strictly positive, self-affine functions. The equation describing the vibrations of this “rough string” is $$\frac{1}{\rho (x)}\frac{d}{dx}E(x)\frac{d}{dx}u(x)+\omega ^2u(x)=0.$$ (3) If $`E(x)=E_0`$, then $`\rho (x)/E_0`$ plays the role of $`a(x)`$ in Eq. (1). We will assume that the elastic coefficient $`E_1(x)`$ and the density $`\rho _1(x)`$ vary in a self-affine way. The concept of self-affinity is a scaling property. A function $`h=h(x)`$ is said to be self-affine if it is statistically invariant under the transformation $`x`$ $`\rightarrow `$ $`\lambda x,`$ (5) $`h(x)`$ $`\rightarrow `$ $`\lambda ^Hh(x),`$ (6) for all positive $`\lambda `$, or equivalently $`h(x)`$ $`\sim `$ $`\lambda ^{-H}h(\lambda x),`$ (7) where $`\sim `$ is used in order to indicate statistical equality. The parameter $`H`$ is the Hurst exponent (or roughness exponent). When $`H>1`$, $`h(x)`$ is not asymptotically flat and the profile is not statistically invariant under translation. When $`H<0`$, the variance of $`h(x)`$ diverges when the interval over which it is measured goes to zero. $`h(x)`$ is then referred to as a fractional noise. We will in our analysis assume $`0<H<1`$. Self-affinity is in practice only found within a restricted range of scales. In this work we explicitly introduce a lower cut-off $`l`$. The upper cut-off is the system size $`L`$, which is the length of the string. Asymptotically, on large scales, $`E_1(x)`$ and $`\rho _1(x)`$ will dominate the behavior of $`E(x)`$ and $`\rho (x)`$ no matter what $`E_0`$ and $`\rho _0`$ are. However, we will investigate the system at intermediate scales where the constant terms may or may not dominate over the self-affine terms in $`E(x)`$ and $`\rho (x)`$. We will distinguish four cases: 1. $`E_0\gg \langle E_1(x)\rangle `$ and $`\rho _0\ll \langle \rho _1(x)\rangle `$, 2. $`E_0\gg \langle E_1(x)\rangle `$ and $`\rho _0\gg \langle \rho _1(x)\rangle `$, 3. $`E_0\ll \langle E_1(x)\rangle `$ and $`\rho _0\gg \langle \rho _1(x)\rangle `$, 4. $`E_0\ll \langle E_1(x)\rangle `$ and $`\rho _0\ll \langle \rho _1(x)\rangle `$, where $`\langle \cdot \rangle `$ denotes the averaging operator. We note that the first term on the left-hand side of Eq. (3), $`(1/\rho )[(dE_1/dx)(du/dx)+E(d^2u/dx^2)]`$, involves the derivative of a self-affine function, $`dE_1/dx`$. This quantity scales as $`dE_1/dx\rightarrow \lambda ^{H-1}dE_1/dx`$ when $`x\rightarrow \lambda x`$, while $`E_1`$ scales as $`E_1\rightarrow \lambda ^HE_1`$. Thus, we make the assumption that the term containing $`dE_1/dx`$ may be neglected in comparison to the term containing $`E_1`$ in Eq. (3). With this assumption, we expect the IDOS for the self-affine string to behave as ($`n=1`$, $`\mathcal{B}_1=2`$) $`N(\omega ,L)`$ $`\sim `$ $`W(\omega ,L)={\displaystyle \frac{\omega }{\pi }}{\displaystyle \int _0^L}\sqrt{{\displaystyle \frac{\rho (x)}{E(x)}}}𝑑x,`$ (8) in the asymptotic limit. Note that this expression can be written $`W(\omega ,L)=(\omega L/\pi )\langle \sqrt{\rho (x)/E(x)}\rangle `$. Thus, the asymptotic behavior of the IDOS for a rough string is expected to be that of a classical (non-rough) string with a constant inverse velocity $`\langle \sqrt{\rho (x)/E(x)}\rangle `$. We now discuss the four cases in turn. Assume now that we change the system size according to $`L\rightarrow \lambda L`$.
Then, for case (1) ($`E_0\gg \langle E_1(x)\rangle `$ and $`\rho _0\ll \langle \rho _1(x)\rangle `$) after a change of variable, it follows from Eqs. (III) and (8) that $`W(\omega ,\lambda L)`$ $`=`$ $`{\displaystyle \frac{\omega }{\pi }}{\displaystyle \int _0^{\lambda L}}\sqrt{{\displaystyle \frac{\rho (x)}{E_0}}}𝑑x`$ (10) $`=`$ $`{\displaystyle \frac{\omega }{\pi }}{\displaystyle \int _0^L}\sqrt{{\displaystyle \frac{\rho (\lambda x^{\prime })}{E_0}}}d(\lambda x^{\prime })`$ (11) $`\sim `$ $`\lambda ^{1+\frac{H}{2}}W(\omega ,L).`$ (12) Using Eq. (8), this relation may be rewritten $`W(\lambda ^{-1-\frac{H}{2}}\omega ,\lambda L)`$ $`\sim `$ $`W(\omega ,L).`$ (13) Note that Eqs. (10) and (13) are in principle equivalent, but open up two different possibilities for physical interpretation. In the former case, it is the number of eigenmodes which is scaled, while in the latter it is the frequency. Through similar arguments, we find for case (2) ($`E_0\gg \langle E_1(x)\rangle `$ and $`\rho _0\gg \langle \rho _1(x)\rangle `$), $$W(\omega ,\lambda L)\sim \lambda W(\omega ,L),$$ (15) or equivalently $$W(\lambda ^{-1}\omega ,\lambda L)\sim W(\omega ,L).$$ (16) For case (3) ($`E_0\ll \langle E_1(x)\rangle `$ and $`\rho _0\gg \langle \rho _1(x)\rangle `$), we expect $`W(\omega ,\lambda L)\sim \lambda ^{1-H/2}W(\omega ,L),`$ (18) $`W(\lambda ^{-1+H/2}\omega ,\lambda L)\sim W(\omega ,L).`$ (19) Case (4) ($`E_0\ll \langle E_1(x)\rangle `$ and $`\rho _0\ll \langle \rho _1(x)\rangle `$) leads to the same behavior as Case (2), Eqs. (III). ## IV Outline of the numerical method In order to do numerical simulations of our self-affine string we discretize Eq. (3) on a lattice of $`N`$ sites: $`A_{ij}^{(0)}U_j(\omega )`$ $`=`$ $`\omega ^2U_i(\omega ),i,j=1,2,\mathrm{\dots },N.`$ (20) Here $`A^{(0)}`$ is the $`N\times N`$ matrix representation of the operator $`-(1/\rho (x))(d/dx)E(x)(d/dx)`$. It is tridiagonal. The method used in this paper to calculate the IDOS for the above equation is the decimation technique of Lambert and Weaire. This method is based on a renormalization philosophy, where successive degrees of freedom are eliminated from the system; see Refs. for more details. After removing the sites corresponding to $`k=N-M+1,\mathrm{\dots },N`$ the system becomes $`A_{ij}^{(M)}U_j(\omega )`$ $`=`$ $`\omega ^2U_i(\omega ),i=1,\mathrm{\dots },N-M`$ (21) with $`A_{ij}^{(M)}`$ $`=`$ $`A_{ij}^{(M-1)}-{\displaystyle \frac{A_{i,N+1-M}^{(M-1)}A_{N+1-M,j}^{(M-1)}}{D^{(M)}}}`$ (22) where the denominator is given by $`D^{(M)}`$ $`=`$ $`A_{N+1-M,N+1-M}^{(M-1)}-\omega ^2.`$ (23) At the very heart of the method lies the fact that the renormalized system is equivalent to the original one in the sense that the two systems have the same eigenvalues. By repeating this procedure all degrees of freedom can be eliminated. The number of eigenmodes less than $`\omega `$, i.e. the IDOS, can now be shown to be equal to the number of negative denominators in the sequence $`\{D^{(i)}\}_{i=1}^N`$. We would like to point out that this algorithm is very efficient. ## V Simulation and results In order to test the conjecture (III), and thus also Eqs. (III) to (16), we have calculated the IDOS for various system sizes, $`L`$, fixing the Hurst exponent $`H`$ to the value $`0.7`$. We work in length units where $`l=1/2^{11}`$. Thus, the system size $`N=2^{11}=2048`$ corresponds to a string of unit length ($`L=1`$). We set $`E_0=1`$ and $`\langle E_1(x)\rangle =0.01`$ for the cases where $`E_0\gg \langle E_1(x)\rangle `$, and $`E_0=0.01`$ and $`\langle E_1(x)\rangle =1`$ for the cases where $`E_0\ll \langle E_1(x)\rangle `$. Likewise, we set $`\rho _0=1`$ and $`\langle \rho _1(x)\rangle =0.01`$ for the cases where $`\rho _0\gg \langle \rho _1(x)\rangle `$, and $`\rho _0=0.01`$ and $`\langle \rho _1(x)\rangle =1`$ for the cases where $`\rho _0\ll \langle \rho _1(x)\rangle `$.
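The full numerical procedure, generating a self-affine profile with a given Hurst exponent and counting the modes below $`\omega `$, can be sketched compactly in Python. In the sketch below the decimation count is written in its Sturm-sequence form (counting negative pivots of $`K-\omega ^2M`$, which is equivalent to counting the negative denominators $`D^{(i)}`$); the spectral-synthesis recipe for the profile and all parameter values are illustrative choices, not a description of the production code:

```python
import numpy as np

rng = np.random.default_rng(0)

def self_affine_profile(n, hurst):
    """Strictly positive self-affine profile by spectral synthesis:
    Fourier amplitude ~ k^-(hurst + 1/2) (power ~ k^-(2H+1)), random phases."""
    k = np.fft.rfftfreq(n)[1:]
    spec = k ** (-(hurst + 0.5)) * np.exp(2j * np.pi * rng.random(k.size))
    h = np.fft.irfft(np.concatenate(([0.0], spec)), n)
    h -= h.min()
    return h / h.mean() + 0.1         # shift/normalize to keep it positive

def idos(omega, E, rho, L):
    """Number of eigenmodes below omega for the discretized string with
    fixed endpoints: negative pivots of K - omega^2 * M (Sturm sequence)."""
    n = rho.size                       # interior lattice sites
    h = L / (n + 1)                    # lattice spacing
    Emid = 0.5 * (E[:-1] + E[1:])      # elastic coefficient on interior bonds
    left = np.concatenate(([E[0]], Emid))
    right = np.concatenate((Emid, [E[-1]]))
    diag = (left + right) / h**2 - omega**2 * rho
    off = -Emid / h**2
    count, d = 0, diag[0]
    for i in range(1, n):              # decimation sweep
        count += d < 0.0
        d = diag[i] - off[i - 1] ** 2 / d
    return count + (d < 0.0)

N = 2048
rho = self_affine_profile(N, hurst=0.7)   # case (1): E constant, rho self-affine
E = np.ones(N)
print([idos(w, E, rho, L=1.0) for w in (50.0, 100.0, 200.0)])
```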
We averaged over 500 samples in case (1), while we used 50 samples in the three other cases. As can be seen from Fig. 1a, the IDOS is a linear function of the frequency $`\omega `$, and $`W(\omega ,L)`$ increases with system size $`L`$. This is consistent with the predictions of Eq. (10). A similar linear behavior has been found for cases (2) to (4), but these results are not shown explicitly. We test in Fig. 1b the case (1) scaling relation (10), in Fig. 2 the case (2) scaling relation (15), in Fig. 3a the case (3) scaling relation (18), and in Fig. 4 the case (4) scaling relation (15). In all cases except case (3), our numerical data are consistent with the theoretical predictions. In case (3), where $`E_0\ll \langle E_1(x)\rangle `$ and $`\rho _0\gg \langle \rho _1(x)\rangle `$, data collapse was obtained by scaling $`W(\omega ,L)/\mathrm{exp}(0.46L)`$ (see Fig. 3b). This is very different from the predicted behavior, Eq. (18). We do not know the reason for this discrepancy between theory and the numerical result. ## VI Summary and conclusion We have investigated the IDOS for a self-affine string, i.e. a string with self-affine variations in mass density and elastic coefficient. There are four relevant cases to study, depending on whether the self-affine variations dominate or not in the mass density and elastic coefficient. We have compared numerical studies with a conjecture based on assuming that a result of Lapidus and Fleckinger is valid for the self-affine string. We find that our conjecture works for three out of the four cases. However, it fails for the case where $`E_0\ll \langle E_1(x)\rangle `$ and $`\rho _0\gg \langle \rho _1(x)\rangle `$. ###### Acknowledgements. One of the authors (I.S.) would like to thank the Research Council of Norway and Norsk Hydro ASA for financial support. A.H. would like to thank H.N. Nazareno and F.A. Oliveira for warm hospitality and the I.C.C.M.P. for support. This work has received support from the Research Council of Norway (Program for Supercomputing) through a grant of computing time.
no-problem/9909/hep-ex9909035.html
ar5iv
text
## 1 Introduction Single diffraction, or the inclusive inelastic production of beam-like particles with momenta within a few percent of the associated incident beam momentum, as in: $$p(\overline{p})+p_i\rightarrow X+p_f$$ (1) has been studied for more than 30 years. The chief characteristic of data from these processes is the existence of a pronounced enhancement at Feynman-$`x_p`$ of $`p_f`$ near unity, with the absence of other particles nearby in rapidity (“rapidity gap”). This is interpreted using Regge phenomenology [1–6] as evidence for the dominance of color-singlet $`𝒫`$omeron–exchange (see Fig. 1). The observed $`x_p`$ spectrum reflects the distribution of the exchanged $`𝒫`$omeron’s momentum fraction in the proton<sup>4</sup>(We use the symbol $`\xi `$ for this variable in view of its simplicity and its increasing use in the literature.), $`\xi \equiv x_{IP}=1-x_p`$. A relatively recent idea underlying the phenomenology is that, although the $`𝒫`$omeron’s existence in the proton is due to non-perturbative QCD, once the $`𝒫`$omeron exists, perturbative QCD processes can occur in proton-$`𝒫`$omeron and $`\gamma ^{*}`$-$`𝒫`$omeron interactions. Ref. proposed the study of such hard processes in order to determine the $`𝒫`$omeron structure. First “hard diffraction” results were obtained by the UA8 collaboration using React. 1, and by the H1 and ZEUS collaborations using $`ep`$ interactions. Hard diffraction results on React. 1 also exist from the CDF and D0 collaborations at the Tevatron. Factorization of $`𝒫`$omeron emission and interaction in the inclusive React. 1 is expressed by writing the single-diffractive differential cross section as a product of a “Flux Factor” of the $`𝒫`$omeron in the proton, $`F_{𝒫/p}(t,\xi )`$, and a proton-$`𝒫`$omeron total cross section (see Sect. 2): $$\frac{d^2\sigma _{sd}}{d\xi dt}=F_{𝒫/p}(t,\xi )\sigma _{p𝒫}^{total}(s^{\prime })$$ (2) $`s^{\prime }`$ is the squared invariant mass of the $`X`$ system and, to good approximation, is given by $`s^{\prime }=\xi s`$. $`t`$ is the $`𝒫`$omeron’s four–momentum transfer. There are many examples in the literature of the validity of factorization (see for example Refs. ), and our working assumption in the present paper is that Eq. 2 is a good approximation. There is, however, a long–standing unitarity problem with the $`𝒫`$omeron–exchange prediction for React. 1 which deserves re–examination. The rising total cross sections observed at Serpukhov ($`K^+p`$) and the ISR ($`pp`$) in the early 1970s led to the conclusion that the effective $`𝒫`$omeron Regge trajectory intercept at $`t`$ = 0, $`\alpha (0)=1+ϵ`$, was larger than unity<sup>5</sup>(The fit result for the trajectory in Ref. was 1.06 + 0.25$`t`$; the latest refined fits to the $`s`$–dependences of all total cross sections yield $`ϵ\simeq 0.10`$.). Although this violates the Froissart-Martin unitarity bound for total cross sections, it presents no difficulty at present and foreseeable collider energies. However, this is not the case for partial cross sections such as diffraction. This is easily seen by examining the dominant $`\xi `$–dependent Regge factor in Eq. 2 at small–$`\xi `$ and small–$`t`$: $$F_{𝒫/p}(t,\xi )\propto \xi ^{1-2\alpha (0)}=\frac{1}{\xi ^{1+2ϵ}},$$ (3) Kinematically, $`\xi `$ has a minimum value in React. 1, $`\xi _{min}=s_{min}^{\prime }/s`$, which decreases with increasing energy, such that the rise in $`F_{𝒫/p}(t,\xi )`$ at small $`\xi `$ becomes more and more pronounced.
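The numerical size of this effect is easy to make concrete by integrating the small-$`\xi `$ behavior of Eq. 3 down to $`\xi _{min}`$. A short Python sketch; the threshold $`s_{min}^{\prime }=1.5`$ GeV<sup>2</sup> is an assumed value, used only to illustrate the trend:

```python
import numpy as np

# Closed-form xi-integral of xi^-(1+2*eps) from xi_min = s'_min/s up to 0.05,
# i.e. the dominant flux behaviour of Eq. 3.  s'_min = 1.5 GeV^2 is assumed.
eps, s_prime_min, xi_max = 0.10, 1.5, 0.05

for sqrt_s in (23.0, 62.0, 630.0, 1800.0):     # GeV: ISR, ISR, SPS, Tevatron
    xi_min = s_prime_min / sqrt_s**2
    integral = (xi_min**(-2 * eps) - xi_max**(-2 * eps)) / (2 * eps)
    print(f"sqrt(s) = {sqrt_s:6.0f} GeV   flux integral = {integral:6.1f}")
```

For fixed intercept, the integral grows by roughly an order of magnitude between ISR and Tevatron energies.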
With $`ϵ`$ = 0.10, this leads to a rapidly increasing predicted total single diffractive cross section, $`\sigma _{sd}^{total}`$, with $`s`$, shown as the solid curve in Fig. 2. Of course, the observed $`\sigma _{sd}^{total}`$ does not display this behavior, but rises much more modestly with $`s`$<sup>6</sup>(For reasons explained below in Sect. 3.1, we tend to discount the smaller of the two $`\sigma _{sd}^{total}`$ values at $`\sqrt{s}`$ = 546 GeV.). This discrepancy between the predictions and the observed $`\sigma _{sd}^{total}`$ should be understandable in the framework of Gribov’s Reggeon calculus through multi–$`𝒫`$omeron–exchange effects (Regge cuts), described variously in the literature as screening, shadowing, absorption or damping. Eq. 2, traditionally used with the $`ϵ`$ obtained from fitting to the $`s`$–dependence of total cross section data, does not take these effects into account. It is expected that multi–$`𝒫`$omeron–exchange effects increase with $`s`$. This corresponds to a decreasing effective $`ϵ`$, which suppresses $`\sigma _{sd}^{total}`$ in line with the observed behavior. To the best of our knowledge, effective $`ϵ`$ values have never been directly extracted from $`\sigma _{sd}^{total}`$ data on React. 1, although Schuler and Sjöstrand have developed a model of hadronic diffractive cross sections in which they use $`ϵ=0`$ as a reasonable approximation at the highest energies. It was also suggested that the observation, in $`\gamma ^{*}p`$ interactions at HERA, of a $`Q^2`$-dependent effective $`𝒫`$omeron intercept could be the result of the decrease of screening with increasing $`Q^2`$. In the present paper we use the measured $`\sigma _{sd}^{total}`$ values to determine the effective $`ϵ`$ values as a function of energy. We then fit to the $`t`$–dependence of $`d\sigma _{sd}^{total}/dt`$ ($`\xi <0.05`$) at the ISR and SPS–Collider to obtain more reliable values of $`ϵ`$, as well as the slope $`\alpha ^{\prime }`$ at $`t`$ = 0. These latter fits also provide confirming evidence of the relatively flat, $`s`$–independent trajectory in the higher–$`|t|`$ region, 1.0–2.0 GeV<sup>2</sup>, which was previously reported by the UA8 Collaboration. The value of this trajectory at $`|t|=1.5`$ GeV<sup>2</sup> is consistent with the trajectory obtained from photoproduction of vector mesons at HERA. We note that a decreasing effective intercept has the effect of suppressing the event yield at small–$`\xi `$ and small–$`|t|`$, as suggested by the data. In Ref. , we introduced an ad hoc Damping Factor to account for the observed suppression of the total diffractive cross section. However, since it was observed that the damping effects extend to larger $`\xi `$ at the largest Tevatron energy, our fixed Damping Factor had limited applicability. In the present paper, we show that the $`s`$–dependent effective intercept, as a manifestation of multi–$`𝒫`$omeron–exchange, offers a physics explanation for the effect and seems to be applicable up to the highest available energies. Sect. 2 summarizes the analysis by the UA8 collaboration, in which they fit Eq. 2 to ISR and SPS data; they obtain parametrizations of $`F_{𝒫/p}(t,\xi )`$ and $`\sigma _{p𝒫}^{total}`$ which embody features not previously known and specify the $`𝒫`$omeron trajectory at high–$`|t|`$. Sect. 3 shows how the effective $`𝒫`$omeron intercept depends on interaction energy in React. 1 and how predictions at $`\sqrt{s}`$ = 1800 GeV agree with the CDF collaboration’s results.
The analysis in Sect. 4 yields a new $`𝒫`$omeron trajectory which depends on $`s`$ only at low–$`|t|`$. Finally, Sect. 5 contains our conclusions and a discussion of some consequences. ## 2 UA8 Triple–Regge fits and the $`𝒫`$omeron trajectory at high–$`|t|`$ The UA8 collaboration analyzed data from their experiment at the CERN SPS–Collider ($`\sqrt{s}`$ = 630 GeV) in the $`|t|`$–range, 0.90–2.00 GeV<sup>2</sup>, and from the CHLM experiment at the CERN ISR ($`\sqrt{s}`$ = 23–62 GeV) in the $`|t|`$–range, 0.15–2.35 GeV<sup>2</sup>. Eq. 2 was fit to the data, using the dominant two terms in the Mueller–Regge expansion, $`𝒫𝒫𝒫`$ and $`𝒫𝒫ℛ`$ (see Fig. 1), for the differential cross section of React. 1. These correspond, respectively, to $`𝒫`$omeron exchange and the exchange of other non–leading, C=+ ℛeggeon trajectories (e.g., $`f_2`$) in the proton–$`𝒫`$omeron interaction<sup>7</sup>(In reality, the $`(s^{\prime })^ϵ`$ terms have the form $`(s^{\prime }/s_0)^ϵ`$ with $`s_0=1`$ GeV<sup>2</sup>.), $$\frac{d^2\sigma _{sd}}{d\xi dt}=[K|F_1(t)|^2e^{bt}\xi ^{1-2\alpha (t)}]\sigma _0[(s^{\prime })^{ϵ_1}+R(s^{\prime })^{ϵ_2}].$$ (4) Comparing with Eq. 2, the left–hand bracket is the $`𝒫`$omeron flux factor, $`F_{𝒫/p}(t,\xi )`$, and the right–hand bracket (together with $`\sigma _0`$) is the proton–$`𝒫`$omeron total cross section, $`\sigma _{p𝒫}^{total}`$. Because this expression for $`\sigma _{p𝒫}^{total}`$ is identical to that used in the fits to real particle cross sections (where $`ϵ_1=0.10`$ and $`ϵ_2=-0.32`$ are found, the latter for $`f/A_2`$ exchange) and the value of $`R`$ found in the UA8 fits ($`4.0\pm 0.6`$) is similar to the values found in the fits to real particle cross sections, we take as a working assumption that $`\sigma _{p𝒫}^{total}`$ is like a real particle cross section (as is done in predicting hard diffractive cross sections). Thus, $`ϵ_1`$ and $`ϵ_2`$ are fixed at the above values. In Eq. 4, $`|F_1(t)|^2`$ is the standard Donnachie–Landshoff form–factor<sup>8</sup>($`F_1(t)=\frac{4m_p^2-2.8t}{4m_p^2-t}\frac{1}{(1-t/0.71)^2}`$) which is multiplied by a possible correction at high–$`|t|`$, $`e^{bt}`$. Thus, the product, $`|F_1(t)|^2e^{bt}`$, carries the $`t`$–dependence of $`G_{𝒫𝒫𝒫}(t)`$ and $`G_{𝒫𝒫ℛ}(t)`$ in the Mueller-Regge expansion and is assumed to be the same in both. Physically, this means that the $`𝒫`$omeron has the same flux factor in the proton, irrespective of whether the proton–$`𝒫`$omeron interaction proceeds via $`𝒫`$omeron–exchange or ℛeggeon–exchange. The products are $`K\sigma _0=G_{𝒫𝒫𝒫}(0)`$ and $`K\sigma _0R=G_{𝒫𝒫ℛ}(0)`$. The $`𝒫`$omeron trajectory, $`\alpha (t)`$ in $`F_{𝒫/p}(t,\xi )`$, was assumed to have the usual linear form with a quadratic term added to allow for a flattening of the trajectory at high–$`|t|`$, as required by the data: $$\alpha (t)=1+ϵ+0.25t+\alpha ^{\prime \prime }t^2$$ (5) $`ϵ`$ was fixed at 0.10 in the fits. Although we show in the next section that the effective intercept decreases with $`s`$, the only low–$`|t|`$ data used in the UA8 fits was that from the lower ISR energies, where 0.10 is a good approximation. We return to this point in Sec. 4.
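Eq. 4 is compact enough to transcribe directly. A Python sketch using the UA8 fit values quoted in the next paragraph as fixed inputs; this is an illustration of the formula, not the UA8 fitting code, and the units follow the text ($`t`$ in GeV<sup>2</sup> with $`t<0`$, cross section in mb GeV<sup>-2</sup>):

```python
import numpy as np

# UA8 fit values from the text: K*sigma_0 = 0.72 mb GeV^-2, b = 1.08 GeV^-2,
# R = 4.0, alpha'' = 0.079 GeV^-4; eps1 = 0.10 and eps2 = -0.32 are fixed.
K_SIGMA0, B_SLOPE, R, ALPHA2 = 0.72, 1.08, 4.0, 0.079
EPS1, EPS2 = 0.10, -0.32

def F1_sq(t):
    """Donnachie-Landshoff form factor squared (footnote 8)."""
    mp2 = 0.938 ** 2
    f1 = (4 * mp2 - 2.8 * t) / (4 * mp2 - t) / (1 - t / 0.71) ** 2
    return f1 ** 2

def alpha(t, eps=0.10):
    """Pomeron trajectory, Eq. 5, with the quadratic high-|t| term."""
    return 1.0 + eps + 0.25 * t + ALPHA2 * t ** 2

def d2sigma(xi, t, s, eps=0.10):
    """Eq. 4: flux factor times the proton-Pomeron total cross section."""
    flux = K_SIGMA0 * F1_sq(t) * np.exp(B_SLOPE * t) * xi ** (1.0 - 2.0 * alpha(t, eps))
    s_prime = xi * s
    return flux * (s_prime ** EPS1 + R * s_prime ** EPS2)

# Example evaluation: SPS-Collider energy, moderate |t|
print(d2sigma(xi=0.03, t=-1.0, s=630.0 ** 2))
```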
To avoid difficulties with differing experimental resolutions in the combined ISR–UA8 data sample, simultaneous fits of Eq. 4 were first made to data in the range $`0.03<\xi <0.04`$ and $`|t|<2.25`$ GeV<sup>2</sup>, assuming zero non–$`𝒫`$omeron–exchange background; then fits were made to the entire region, $`0.03<\xi <0.10`$, including a background term of the form $`Ae^{ct}\xi ^1`$. All fits gave self–consistent results. The fit values of the four free parameters in Eq. 4 were: | $`K\sigma _0`$ | = | $`0.72\pm 0.10`$ | mb GeV<sup>-2</sup> | | --- | --- | --- | --- | | $`\alpha ^{\prime \prime }`$ | = | $`0.079\pm 0.012`$ | GeV<sup>-4</sup> | | $`b`$ | = | $`1.08\pm 0.20`$ | GeV<sup>-2</sup> | | $`R`$ | = | $`4.0\pm 0.6`$ | The fitted $`𝒫`$omeron trajectory, Eq. 5 with $`\alpha ^{\prime \prime }=0.08`$, is shown as the shaded band in Fig. 3. The band edges correspond to $`\pm 1\sigma `$ error limits on $`\alpha ^{\prime \prime }`$. Independent confirmation of the $`\alpha (t)`$ values at high–$`|t|`$ seen in Fig. 3 was obtained by fitting (resolution–smeared) Eq. 4 to the $`\xi `$–dependence of the UA8 data at fixed–$`t`$ in the different $`\xi `$–region, $`\xi <0.03`$, where non–$`𝒫`$omeron–exchange background could be ignored. Although, in Eq. 4, the dominant $`\xi `$–dependence is in $`F_{𝒫/p}(t,\xi )`$ and has the form $`\xi ^{1-2\alpha (t)}`$, there are the additional (weaker) $`(s^{\prime })^ϵ\propto \xi ^ϵ`$ dependences in the $`𝒫𝒫𝒫`$ and $`𝒫𝒫ℛ`$ terms of $`\sigma _{p𝒫}^{total}`$, both of which must be included in the fit. Because the ($`𝒫𝒫ℛ`$) term is more sharply peaked at small values of $`\xi `$ than is the $`𝒫𝒫𝒫`$ term, leaving it out of the fit<sup>9</sup>(Note that this would mean assuming $`R=0`$, which is in blatant disagreement with the value, $`R=4.0`$, quoted above.) causes a systematic upward shift in the resultant $`\alpha (t)`$. The solid points in Fig. 3 show the fit values of $`\alpha (t)`$ at four $`t`$-values, when both $`𝒫𝒫𝒫`$ and $`𝒫𝒫ℛ`$ terms in Eq. 4 are used in the fit (with $`R=4.0`$). The solid points and the band in the figure are in good agreement. The two different, but self–consistent, fits to the data in the high–$`|t|`$ region give confidence in the value of the overall normalization constant, $`K\sigma _0`$, and in the $`t`$–dependence, $`|F_1(t)|^2e^{bt}`$. Table 1 summarizes the two types of fits performed by the UA8 collaboration in determining $`\alpha (t)`$ at high $`t`$, and shows which data sets were used in each. In Sect. 4, a third independent type of fit is described which also yields essentially the same results for $`\alpha (t)`$ at high–$`|t|`$ at both the ISR and SPS-Collider. ## 3 An $`s`$-dependent effective intercept As explained above, a $`\sigma _{sd}^{total}`$ prediction depends sensitively on the value of $`ϵ`$ used in the Flux Factor. For each of the ISR, SPS and Tevatron<sup>6</sup> points in Fig. 2, we have therefore found the value of the effective $`ϵ`$ which yields the measured $`\sigma _{sd}^{total}`$. Eq. 4 is integrated over $`\xi <0.05`$ and all $`t`$, with the following assumptions: * We assume that screening only affects the Flux Factor, $`F_{𝒫/p}(t,\xi )`$, and therefore only allow the $`ϵ`$ which appears therein to change. As stated above, our working assumption is that the proton–$`𝒫`$omeron total cross section, $`\sigma _{p𝒫}^{total}(s^{\prime })=\sigma _0[(s^{\prime })^{0.10}+R(s^{\prime })^{-0.32}]`$ with $`R=4.0`$, is like a real–particle total cross section and hence has a fixed parametrization. * $`K\sigma _0`$, $`R`$, $`b`$ and $`\alpha ^{\prime \prime }`$ are fixed at the UA8 fit values given in Sect. 2.
Although using these fixed values is not self-consistent with allowing $`ϵ`$ to vary, any corrections are of order $`10\%`$ and do not obscure the essential results. Fig. 4 shows the resulting $`ϵ`$ values vs. $`\sqrt{s}`$; their errors only reflect the measurement errors in the $`\sigma _{sd}^{total}`$ points. Starting with $`ϵ=0.10`$ at the lowest end of the ISR region, the points display a pronounced downward trend with $`s`$, reaching $`\sim 0.03`$ at the SPS–Collider and $`\sim 0.01`$ at the Tevatron. From the fits in Sect. 4, $`\alpha ^{\prime }`$ also decreases with decreasing $`ϵ`$ (this is not surprising, since the trajectory has to match up with its $`s`$–independent part at higher $`t`$). Since $`\alpha ^{\prime }`$ = 0.15 is preferred at SPS and Tevatron energies, the dashed line in Fig. 4 is a fit to the solid points ($`\alpha ^{\prime }`$ = 0.25) at the ISR and the open points at SPS-Tevatron ($`\alpha ^{\prime }`$ = 0.15). The solid line shows $`ϵ`$ vs. $`s`$ which results from the fits to the ISR data in Sect. 4. The difference between the solid points and the solid line is that $`\alpha ^{\prime }`$ is not fixed at the arbitrary value of 0.25 GeV<sup>-2</sup> in the latter. ### 3.1 Predictions for the Tevatron Fig. 5 shows Eq. 4 plotted vs. $`\xi `$ at $`\sqrt{s}`$ = 1800 GeV and small momentum transfer, $`|t|=0.05`$ GeV<sup>2</sup>, for several different values<sup>10</sup>($`\alpha ^{\prime }=0.15`$ is used for these curves; however, the results are almost identical for $`\alpha ^{\prime }=0.25`$.) of $`ϵ`$. The figure also shows the results of the CDF experiment, which they give in the form of a function which, after convolving with their experimental resolution and geometric acceptance, was fit to their observed differential cross section<sup>11</sup>(For reasons explained in Sects. 5.2 and 5.3 of Ref. , we add the CDF “signal” and “background” as a good representation of their corrected differential cross section at small $`\xi `$ (after unfolding experimental resolution).). The CDF function (solid curve) in Fig. 5 is seen to agree in both absolute magnitude and shape with a prediction with $`ϵ\simeq 0.03\pm 0.02`$. Similar results are found for the CDF function at $`\sqrt{s}`$ = 546 GeV (not shown here). We have integrated these same CDF functions for $`\xi <0.05`$, to obtain $`d\sigma _{sd}/dt`$ at both 546 and 1800 GeV. At $`|t|=0.05`$ GeV<sup>2</sup>, both CDF values agree with UA4. However, the CDF $`|t|`$-slope at 546 GeV is much steeper ($`b=7.7`$ GeV<sup>-2</sup>) than all other ISR, SPS and Tevatron (1800 GeV) measurements. Since its magnitude is sufficient to account for the difference between the UA4 and CDF $`\sigma _{sd}^{total}`$ at 546 GeV, we have ignored that CDF point in preparing Fig. 4. ## 4 The $`𝒫`$omeron trajectory From the results in Sect. 3 at low–$`|t|`$, we have an $`s`$-dependent effective $`𝒫`$omeron intercept which reflects multi–$`𝒫`$omeron–exchange effects. However, as already mentioned, at high–$`|t|`$ ($`>1`$ GeV<sup>2</sup>) the trajectory shows no signs of an $`s`$–dependence, since the triple–Regge formalism describes the data between ISR and SPS with no apparent need of damping. In order to parametrise the full $`𝒫`$omeron trajectory, we resort to a somewhat unorthodox procedure and fit the $`\xi <0.05`$ integral of Eq. 4 (with free parameters to describe the effective $`𝒫`$omeron trajectory) to the $`t`$–dependence of the total differential cross section, $`d\sigma _{sd}/dt`$ ($`\xi <0.05`$).
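Both operations used here, the $`\xi <0.05`$ integral of Eq. 4 and the effective-intercept extraction of Sect. 3, reduce to one-dimensional integrals. A sketch, reusing the hypothetical d2sigma function from the earlier Eq. 4 sketch; $`s_{min}^{\prime }=1.5`$ GeV<sup>2</sup> is again an assumed threshold:

```python
from scipy.integrate import quad

# Integrate d2sigma (Eq. 4 sketch above) over xi to get dsigma_sd/dt, then
# over t for the total.  Scanning eps shows how an "effective" intercept is
# read off: it is the value at which the predicted total crosses the
# measured sigma_sd_total at that energy.
S = 630.0 ** 2                       # SPS-Collider
XI_MIN, XI_MAX = 1.5 / S, 0.05       # assumed s'_min = 1.5 GeV^2

def dsigma_dt(t, eps):
    return quad(lambda xi: d2sigma(xi, t, S, eps), XI_MIN, XI_MAX)[0]

def sigma_sd(eps):
    return quad(lambda t: dsigma_dt(t, eps), -2.5, 0.0, limit=200)[0]

for eps in (0.10, 0.05, 0.03, 0.00):
    print(f"eps = {eps:4.2f}   sigma_sd(xi<0.05) = {sigma_sd(eps):6.2f} mb")
```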
This method relies on our knowledge of the remaining $`t`$–dependence, $`|F_1(t)|^2e^{bt}`$ in $`F_{𝒫/p}(t,\xi )`$, which we consider reliable since these fits yield a $`𝒫`$omeron trajectory at high–$`t`$ which agrees with the $`\alpha `$ values obtained in the UA8 fits to the shape of $`d\sigma /d\xi `$ vs. $`\xi `$ at fixed $`t`$ values. There is only one set of $`d\sigma _{sd}/dt`$ data above ISR energies which covers the complete $`|t|`$ range from 0–2 GeV<sup>2</sup>. Fig. 6 shows the measurements at the SPS-Collider by the UA4 collaboration (open points) and by the UA8 collaboration (solid points). The UA4 data cover most of the $`t`$–range because they come from independent high–$`\beta `$ and low–$`\beta `$ runs at the SPS. The UA8 data only cover the high–$`|t|`$ part of the range, but they are in good agreement with the UA4 points where they overlap<sup>12</sup>(We ignore the small ($`\sim 3\%`$) difference in $`\sigma _{sd}^{total}`$ between the two $`\sqrt{s}`$ values, 546 and 630 GeV.). Although the poor $`\xi `$ resolution of the low–$`\beta `$ run precludes use of the data for fits to the $`\xi `$–dependence, the $`d\sigma _{sd}/dt`$ distribution is hardly influenced. The solid and dashed curves in Fig. 6 are fits (row 3 in Table 1) of the $`\xi <0.05`$ integrated version<sup>13</sup>(In the fits, $`K\sigma _0`$, $`b`$ and $`R`$ are fixed at the UA8 values given above.) of Eq. 4 to the $`d\sigma _{sd}/dt`$ data points in the figure. They correspond to the two different, but similar, parametrizations of $`\alpha (t)`$ shown in Fig. 3(a) (those with lower intercepts). One is the quadratic trajectory (solid), $`\alpha (t)=1+ϵ+\alpha ^{\prime }t+\alpha ^{\prime \prime }t^2`$, with three free parameters ($`0.035\pm 0.001`$, $`0.165\pm 0.002`$ GeV<sup>-2</sup> and $`0.059\pm 0.001`$ GeV<sup>-4</sup>, respectively). The dashed trajectory consists of two straight lines, and also has three free parameters: the intercept, the slope and the $`|t|`$ value at which the trajectory continues horizontally to larger–$`|t|`$ values ($`0.033\pm 0.001`$, $`0.134\pm 0.003`$ GeV<sup>-2</sup> and $`0.80\pm 0.02`$ GeV<sup>2</sup>, respectively). The two fitted trajectories are nearly identical. Remarkably, they are seen to agree with the two previous independent determinations of $`\alpha (t)`$ in the high–$`|t|`$ region with $`\xi >0.03`$. Encouraged by this result, we turn to the corresponding ISR measurements of $`d\sigma _{sd}/dt`$ (see Fig. 7). The solid curves in Fig. 7 are the result of a single 6–parameter fit ($`\chi ^2`$/DF = 1.4) of the $`\xi <0.05`$ integrated version of Eq. 4 to all data points shown, using $`\alpha (t)=1+ϵ+\alpha ^{\prime }t+\alpha ^{\prime \prime }t^2`$. The 6 parameters arise from assuming that $`ϵ`$, $`\alpha ^{\prime }`$ and $`\alpha ^{\prime \prime }`$ each has an $`s`$–dependence of the type $`ϵ(s)=ϵ(549)+A\mathrm{log}(s/549)`$. At $`s=549`$ GeV<sup>2</sup>, the fit parameters $`ϵ`$, $`\alpha ^{\prime }`$ and $`\alpha ^{\prime \prime }`$ are, respectively, $`0.096\pm 0.004`$, $`0.215\pm 0.011`$ GeV<sup>-2</sup> and $`0.064\pm 0.006`$ GeV<sup>-4</sup>. $`ϵ(549)=0.10`$ agrees perfectly with the value obtained from fitting to the $`s`$–dependence of total cross sections<sup>14</sup>(This justifies its use in the fits of Ref. .), while $`\alpha ^{\prime }(549)=0.215`$ is smaller than the conventional 0.25.
The fit also yields the energy dependence parameter, “$`A`$”, for each of $`ϵ`$, $`\alpha ^{\prime }`$ and $`\alpha ^{\prime \prime }`$: $`-0.019\pm 0.005`$, $`-0.031\pm 0.012`$ and $`-0.010\pm 0.006`$, respectively. This allows us to plot the fitted $`ϵ`$ vs $`s`$ in Fig. 4 (solid line) over the ISR energy range. The curve is more reliable than the points obtained in Sect. 3 from $`\sigma _{sd}^{total}`$ values, because $`\alpha ^{\prime }`$ is not fixed at the arbitrary value, 0.25. The areas under the fitted curves in Figs. 6 and 7 are in good agreement with the published $`\sigma _{sd}^{total}`$ values. The effective trajectories corresponding to the fits in Fig. 7 are plotted in Fig. 3(b) at the lowest ($`s=549`$ GeV<sup>2</sup>) and highest ($`s=3892`$ GeV<sup>2</sup>) ISR energies. Also shown is the same SPS-Collider trajectory as in Fig. 3(a). At $`|t|=1.5`$ GeV<sup>2</sup>, all trajectory values agree to within about $`\pm 0.01`$. We therefore refit the ISR data of Fig. 7, constraining all ISR trajectories to have the same value at $`|t|=1.5`$ GeV<sup>2</sup>. With $`\chi ^2`$/DF = 1.4, we find $`\alpha (1.5)=0.923\pm 0.002`$ ($`\alpha (0)`$ and $`\alpha ^{\prime }(0)`$ do not change significantly). When the uncertainty in $`b=1.08\pm 0.20`$ is taken into account, the error enlarges to $`\alpha (1.5)=0.92\pm 0.03`$, shown as the square point in Fig. 3(b). ## 5 Discussion Using Refs. and the work in the present paper, we have seen that the triple–Regge formula with both $`𝒫𝒫𝒫`$ and $`𝒫𝒫ℛ`$ terms describes all available inclusive single–diffractive data from the ISR to the Tevatron, provided that the effective $`𝒫`$omeron Regge trajectory intercept, $`\alpha (0)`$, is $`s`$–dependent and decreases from about 1.10 at low energies to about 1.03 at the SPS-Collider, and perhaps smaller at the Tevatron. The data also require a “flattening” of the $`𝒫`$omeron trajectory at $`\alpha \simeq 0.92`$ for momentum transfer $`|t|>1`$ GeV<sup>2</sup>. Together, these two characteristics specify a new effective $`𝒫`$omeron trajectory in inelastic diffraction, which is in disagreement with the “traditional” soft $`𝒫`$omeron trajectory obtained from fits to the energy dependence of hadronic total cross sections. An $`s`$-dependent effective intercept which decreases with increasing energy is expected from multi–$`𝒫`$omeron–exchange (screening/damping) calculations. We find it remarkable that, despite the presence of multi–$`𝒫`$omeron–exchange contributions, Eq. 2 and the factorization of $`𝒫`$omeron emission and interaction seem to retain a high degree of validity. This suggests that multi–$`𝒫`$omeron–exchange effects behave in an approximately factorizable way. It is also remarkable that single–$`𝒫`$omeron–exchange with a fixed trajectory describes inelastic diffraction so well in the higher–$`|t|`$ domain, when it is known that high–$`|t|`$ elastic scattering has multiple exchange contributions there. It will be useful to elaborate the evidence for a dominant fixed $`𝒫`$omeron trajectory at high–$`|t|`$ in inelastic diffraction: (a) First, we note that elastic and inelastic diffraction are very different. There is no evidence in inelastic diffraction (see Figs. 6 and 7) for the characteristic presence of the $`s`$–dependent dip (and break) seen in $`pp`$ elastic scattering and very differently in $`\overline{p}p`$ elastic scattering. Indeed, we have a self–consistent set of single–$`𝒫`$omeron–exchange fits to the $`pp`$ data at the ISR (Fig. 7) and to the $`\overline{p}p`$ data at the SPS–Collider (Fig. 6).
(b) The $`x_p`$ (or $`\xi `$) distribution in React. 1 shows similar $`x_p\rightarrow 1`$ peaking at high–$`|t|`$ ($`>1`$ GeV<sup>2</sup>) as at low–$`|t|`$ (see Ref. ), which is a signature for the flattening of the $`𝒫`$omeron trajectory at high–$`|t|`$. It is extremely interesting to note that the trajectory value we obtain, $`\alpha =0.92\pm 0.03`$ at $`|t|`$ = 1.5 GeV<sup>2</sup>, is consistent with the $`𝒫`$omeron trajectory obtained at high–$`|t|`$ in $`\rho ^0`$ and $`\varphi `$ photoproduction by the ZEUS collaboration and in $`J/\mathrm{\Psi }`$ photoproduction by the H1 Collaboration. This suggests a universal fixed $`𝒫`$omeron trajectory in the high–$`|t|`$ domain. (c) The fits of Eq. 4, with its embedded Regge factor $`\xi ^{1-2\alpha (t)}`$, to the entire set of differential cross section data and their $`s`$–dependences are highly overconstrained and yield good results. Thus, additional complications are not required by the data. It is particularly impressive that the three different and independent ways of determining $`\alpha (t)`$ (see Table 1) for $`|t|>1`$ GeV<sup>2</sup> in Fig. 3 all give the same result. In one case, the fits are to all data points at the ISR and SPS-Collider with $`\xi >0.03`$ (including non–$`𝒫`$omeron–exchange background). Secondly, there are fits to the shapes of $`d^2\sigma /d\xi dt`$ vs. $`\xi `$ with $`\xi <0.03`$ at fixed–$`t`$. And, in the present paper, we fit to all available $`d\sigma _{sd}/dt`$ with $`\xi <0.05`$ over the entire range of $`t`$ at both the ISR and SPS-Collider. (d) An additional argument in favor of one–$`𝒫`$omeron–exchange at high–$`t`$ is that, in Ref. , the extracted $`𝒫`$omeron–$`𝒫`$omeron total cross section, $`\sigma _{𝒫𝒫}^{total}`$, agrees with factorization expectations above the few–GeV mass region. (e) Finally, the UA8 results on hard scattering at high–$`|t|`$ yield essentially the same picture of the $`𝒫`$omeron’s partonic structure as do the low–$`|t|`$ experiments at HERA and the Tevatron. On another point, we understand that, naively, screening effects are expected to increase with $`|t|`$. This appears to contradict our observations. Indeed, the flattening of the $`𝒫`$omeron trajectory at high–$`|t|`$, as well as the apparent absence of damping there (damping effects are seen to “fade away” as $`|t|`$ increases from 0.5 to 1.0 GeV<sup>2</sup>), suggests that the trajectory is entering the perturbative domain. For example, this change in dynamics at high–$`|t|`$ away from the simple eikonal approximation could arise from the dominance of “small–size configurations” in the recoil nucleon. It is, in any case, an intriguing situation which should be given further attention. One hopes that future calculations of multi–$`𝒫`$omeron–exchange effects will account for the effective intercept and slope at $`t`$ = 0 which we have presented, as well as preserve the high degree of factorization exhibited by the data. An additional factor which should be taken into account in multi–$`𝒫`$omeron–exchange calculations can be inferred from a recent result of the UA8 Collaboration on the analysis of double–$`𝒫`$omeron–exchange data, in which both observed final-state $`p`$ and $`\overline{p}`$ are in the momentum transfer range $`|t|>1.0`$ GeV<sup>2</sup>. The extracted $`𝒫`$omeron–$`𝒫`$omeron total cross section agrees with factorization expectations in the invariant mass range $`9<\sqrt{s^{\prime }}<25`$ GeV.
However, at smaller masses there is a pronounced enhancement of the $`𝒫`$omeron–$`𝒫`$omeron cross section, peaking in the few–GeV mass region, with a cross section about a factor of ten larger than expected from factorization. Although, with a mass resolution of about $`\sigma =2`$ GeV, it is impossible to observe structure in the $`𝒫`$omeron–$`𝒫`$omeron spectrum, this result implies that there is at least a strong interaction in the low–mass $`𝒫`$omeron–$`𝒫`$omeron system, which can have a significant, and perhaps simplifying, impact on the nature of multiple $`𝒫`$omeron exchange. ## Acknowledgements We have benefited greatly from discussions with Alexei Kaidalov, Uri Maor and Mark Strikman on issues of damping and screening. Helpful discussions with John Dainton are also appreciated. We also wish to thank the CERN laboratory, where much of this work was done, for its long hospitality.
no-problem/9909/hep-ex9909020.html
ar5iv
text
# Differential Production Cross Section of 𝑍 Bosons as a Function of Transverse Momentum at √𝑠=1.8 TeV ## Abstract We present a measurement of the transverse momentum distribution of $`Z`$ bosons produced in $`p\overline{p}`$ collisions at $`\sqrt{s}=1.8`$ TeV using data collected by the DØ experiment at the Fermilab Tevatron Collider during 1994–1996. We find good agreement between our data and a current resummation calculation. We also use our data to extract values of the non-perturbative parameters for a particular version of the resummation formalism, obtaining significantly more precise values than previous determinations. We report a new measurement of the differential cross section with respect to transverse momentum ($`d\sigma /dp_T`$) of the $`Z`$ boson in the dielectron channel, with statistics and precision greatly improved beyond previous measurements. The measurement of $`d\sigma /dp_T`$ of the $`Z`$ boson provides a sensitive test of QCD at high $`Q^2`$. At small transverse momentum ($`p_T`$), where the cross section is highest, uncertainties in the phenomenology of vector boson production have contributed significantly to the uncertainty in the mass of the $`W`$ boson. Due to its similar production characteristics and the fact that the decay electrons can be very well-measured, the $`Z`$ provides a good laboratory for evaluating the phenomenology of vector boson production. In the parton model, $`Z`$ bosons are produced in collisions of $`q\overline{q}`$ constituents of the proton and antiproton. The fact that observed $`Z`$ bosons have finite $`p_T`$ can be attributed to gluon radiation from the colliding partons prior to their annihilation. In standard perturbative QCD (pQCD), the cross section for $`Z`$ boson production is calculated by expanding in powers of the strong coupling constant, $`\alpha _s`$. This procedure works well when $`p_T^2\sim Q^2`$, with $`Q=M_Z`$. However, when $`p_T\ll Q`$, correction terms that are proportional to $`\alpha _s\mathrm{ln}(Q^2/p_T^2)`$ become significant, and the cross section diverges at small $`p_T`$. This difficulty is surmounted by reordering the perturbative series through a technique called resummation. Although this technique extends the applicability of pQCD to lower values of $`p_T`$, a more fundamental barrier is encountered when $`p_T`$ approaches $`\mathrm{\Lambda }_{\text{QCD}}`$. In this region, $`\alpha _s`$ becomes large and the perturbative calculation is no longer valid. In order to account for the non-perturbative contribution, a phenomenological form factor must be invoked, which contains several parameters that must be tuned to data. The resummation may be carried out in impact-parameter ($`b`$) space via a Fourier transform, or in transverse momentum space. Both formalisms require a non-perturbative function to describe the low-$`p_T`$ region (beyond some cut-off value $`b_{max}`$ in $`b`$ space, or below $`p_{Tlim}`$ in $`p_T`$ space), and they merge with the fixed-order perturbation theory at $`p_T\sim Q`$. The current state-of-the-art for the $`b`$-space formalism resums terms to next-to-next-to-next-to-leading-log and includes fixed-order terms to $`𝒪(\alpha _s^2)`$. Similarly, the $`p_T`$-space formalism resums terms to next-to-next-to-leading-log and includes fixed-order terms to $`𝒪(\alpha _s)`$. In the $`b`$-space formalism, the resummed cross section is modified at large $`b`$ (above $`b_{max}`$) by $`\mathrm{exp}(-S_{\mathrm{NP}}(b,Q^2))`$.
The form factor $`S_{\mathrm{NP}}(b,Q^2)`$ has a general renormalization-group-invariant form, but requires a specific choice of parameterization when making predictions. A possible choice, suggested by Ladinsky and Yuan, is $`S_{\mathrm{NP}}(b,Q^2)=`$ (1) $`g_1b^2+g_2b^2\mathrm{ln}({\displaystyle \frac{Q^2}{Q_o^2}})+g_1g_3b\mathrm{ln}(100x_ix_j),`$ (2) where $`x_i`$ and $`x_j`$ are the fractions of incident hadron momenta carried by the colliding partons and the $`g_i`$ are the non-perturbative parameters. An earlier parameterization by Davies, Webber, and Stirling corresponds to the above with $`g_3\equiv 0`$. For measurements at the Fermilab Tevatron at $`Q^2=M_Z^2`$, the calculation is most sensitive to the value of $`g_2`$ and quite insensitive to the value of $`g_3`$. In the $`p_T`$-space formalism, the resummed cross section is modified at low $`p_T`$ (below $`p_{Tlim}`$) by multiplying the cross section by $`F_{\mathrm{NP}}(p_T)`$. In this case, the form of the non-perturbative function is not constrained by renormalization group invariance. The choice suggested by Ellis and Veseli is $$\stackrel{~}{F}_{\mathrm{NP}}(p_T)=1-e^{-\stackrel{~}{a}p_T^2}$$ (3) where $`\stackrel{~}{a}`$ is a non-perturbative parameter.
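To make the two prescriptions concrete, here is a minimal Python sketch of both factors. The $`g_i`$ defaults are the fit values reported later in this measurement; the values of $`Q_o`$ and $`\stackrel{~}{a}`$ are illustrative assumptions, since neither is quoted here:

```python
import numpy as np

def S_NP(b, Q, x_i, x_j, g1=0.09, g2=0.59, g3=-1.1, Q0=1.6):
    """b-space form factor, Eqs. (1)-(2); enters the cross section as exp(-S_NP).
    g1, g2, g3 are the fit values quoted below; Q0 = 1.6 GeV is an assumption."""
    return (g1 * b**2
            + g2 * b**2 * np.log(Q**2 / Q0**2)
            + g1 * g3 * b * np.log(100.0 * x_i * x_j))

def F_NP(pT, a_tilde=0.1):
    """pT-space factor, Eq. (3): -> 0 as pT -> 0 and -> 1 at large pT.
    a_tilde = 0.1 GeV^-2 is an illustrative value."""
    return 1.0 - np.exp(-a_tilde * pT**2)

b = np.array([0.1, 0.5, 1.0])                       # impact parameter, GeV^-1
print(np.exp(-S_NP(b, Q=91.2, x_i=0.05, x_j=0.05)))  # suppression at large b
print(F_NP(np.array([1.0, 5.0, 20.0])))              # turn-on at low pT
```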
Previously published measurements of the differential cross section for $`Z`$ boson production have been limited primarily by statistics (candidate samples of a few hundred events). This measurement is based on a sample of 6407 $`Z\rightarrow e^+e^-`$ events, corresponding to an integrated luminosity of $`111`$ pb<sup>-1</sup>, collected with the DØ detector in 1994–1996. A recent measurement by the CDF Collaboration has a similar number of events. Electrons are detected in the uranium/liquid-argon calorimeter with a fractional energy resolution of $`15\%/\sqrt{E(\mathrm{GeV})}`$. The calorimeter has a transverse granularity at the electron shower maximum of $`\mathrm{\Delta }\eta \times \mathrm{\Delta }\varphi =0.05\times 0.05`$, where $`\eta `$ is the pseudorapidity and $`\varphi `$ is the azimuthal angle. The two electron candidates in the event with the highest transverse energy ($`E_T`$), both having $`E_T`$$`>`$ 25 GeV, are used to reconstruct the $`Z`$ boson candidate. One electron is required to be in the central region, $`|\eta _{\mathrm{det}}|<1.1`$, and the second electron may be either in the central or in the forward region, $`1.5<|\eta _{\mathrm{det}}|<2.5`$, where $`\eta _{\mathrm{det}}`$ refers to the value of $`\eta `$ obtained by assuming that the shower originates from the center of the detector. Offline, both electrons are required to be isolated and to satisfy cluster-shape requirements. Additionally, at least one of the electrons is required to have a matching track in the drift chamber system that points to the reconstructed calorimeter cluster. Both the acceptance and the theory predictions modified by the DØ detector resolution are calculated using a simulation technique originally developed for measuring the mass of the $`W`$ boson, with minor modifications required by changes in selection criteria. The four-momentum of the $`Z`$ boson is obtained by generating the mass of the $`Z`$ according to an energy-dependent Breit-Wigner lineshape. The $`p_T`$ and rapidity of the $`Z`$ boson are chosen randomly from two-dimensional grids created using the computer program legacy, which calculates the $`Z`$ boson cross section for a given $`p_T`$, rapidity, and mass of the $`Z`$ boson. The positions and energies of the electrons are smeared according to the measured resolutions, and corrected for offsets in energy scale caused by the underlying event and recoil particles that overlap the calorimeter towers. Underlying events are modeled using data from random inelastic $`p\overline{p}`$ collisions with the same luminosity profile as the $`Z`$ boson sample. The electron energy and angular resolutions are tuned to reproduce the observed width of the mass distribution at the $`Z`$-boson resonance and the difference between the reconstructed vertex positions of the electrons. We determine the shape of the efficiency of the event selection criteria as a function of $`p_T`$ using $`Z\rightarrow e^+e^-`$ events generated with herwig, smeared with the DØ detector resolutions, and overlaid on randomly selected zero-bias $`p\overline{p}`$ collisions. This simulation models the effects of the underlying event and jet activity on the selection of the electrons. The absolute efficiency is obtained from $`Z\rightarrow e^+e^-`$ data. The values of the efficiency times acceptance range from 26% to 37% for $`p_T`$ below 200 GeV, and reach 53% for $`p_T`$ above 200 GeV. The primary background arises from multiple-jet production in QCD processes in which two jets pass the electron selection criteria. We use several DØ data sets for estimating this background (direct-$`\gamma `$ events, dijet events, and dielectron events in which both electrons fail quality criteria), all of which have very similar kinematic characteristics. The level of the multijet background is determined by fitting the $`ee`$ invariant mass in the range $`60<M_{ee}<120`$ GeV to a linear combination of Monte Carlo $`Z\rightarrow e^+e^-`$ signal events (using pythia) and background (from direct-$`\gamma `$ events). We assign a systematic uncertainty to this measurement by varying the choice of mass window used in the fit, and by changing the background sample among those mentioned above. We estimate the total multijet background level to be (4.4$`\pm `$0.9)%. The direct-$`\gamma `$ sample is used to parameterize the shape of the background distribution as a function of $`p_T`$. Backgrounds from other sources, such as $`Z\rightarrow \tau ^+\tau ^-`$, $`t\overline{t}`$, and diboson production, are negligible. We use the data, corrected for background, acceptance, and efficiency, to determine the best value of the non-perturbative parameter $`g_2`$. In the fit, we fix $`g_1`$ and $`g_3`$ to the values obtained in Ref. and vary the value of $`g_2`$. We use the CTEQ4M pdf. The prediction is smeared with the known detector resolutions, and the result is fitted to our data. The resulting $`\chi ^2`$ distribution as a function of $`g_2`$ is well-behaved and parabolic, yielding a value of $`g_2=0.59\pm 0.06`$ GeV<sup>2</sup>, considerably more precise than previous determinations. For completeness, we also fit the individual values of $`g_1`$ and $`g_3`$, with the other two parameters fixed to their published values. We obtain $`g_1=0.09\pm 0.03`$ GeV<sup>2</sup> and $`g_3=-1.1\pm 0.6`$ GeV<sup>-1</sup>. Both results are consistent with the values of Ref. . To determine the true $`d\sigma /dp_T`$, we correct the measured cross section for effects of detector smearing, using the ratio of generated to resolution-smeared ansatz $`p_T`$ distributions. We use the calculation from legacy as our ansatz function, with the $`g_2`$ determined from our fit.
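The bin-by-bin correction just described amounts to a ratio of histograms. A toy Python sketch; the ansatz shape and the resolution value are illustrative stand-ins for the legacy calculation and the measured DØ resolutions:

```python
import numpy as np

# Toy bin-by-bin smearing correction: generated ansatz pT spectrum divided by
# the same spectrum after Gaussian resolution smearing.  All values are toys.
rng = np.random.default_rng(1)
n = 1_000_000

pT_true = rng.gamma(shape=1.5, scale=4.0, size=n)        # peaked, falling ansatz
pT_smeared = np.abs(pT_true + rng.normal(0.0, 2.0, n))   # toy detector smearing

bins = np.arange(0.0, 30.0, 2.0)
h_true, _ = np.histogram(pT_true, bins)
h_smear, _ = np.histogram(pT_smeared, bins)
correction = h_true / h_smear            # multiply the measured spectrum by this

for lo, c in zip(bins[:-1], correction):
    print(f"[{lo:4.1f},{lo+2:4.1f}) GeV  correction = {c:.3f}")
```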
The largest smearing correction occurs at low-$`p_T`$, where smearing causes the largest fractional change in $`p_T`$ and where the kinematic boundary at $`p_T`$$`=0`$ produces non-Gaussian smearing. The correction is 18.5% in the first bin, decreasing to about 2% at 5 GeV. For all $`p_T`$ values above 5 GeV, the correction is $`<5`$%. Systematic uncertainties arising from the choice of ansatz function are evaluated by varying $`g_2`$ within $`\pm 1`$ standard deviation of the best-fit values. Additional uncertainties are evaluated by varying the detector resolutions by $`\pm 1`$ standard deviation from the nominal values. The effect of these variations is negligible relative to the other uncertainties in the measurement. Table I shows the values of $`d\sigma (Z\to e^+e^{-})/dp_T`$. The uncertainties on the data points include statistical and systematic contributions. An additional normalization uncertainty of $`\pm `$4.4% arises from the uncertainty on the integrated luminosity; it is not included in any of the plots or in the table, but must be taken into account in any fits involving an absolute normalization. Figure 1 shows the final differential cross section, corrected for the DØ detector resolutions, compared to the fixed-order calculation and the resummation calculation with three different parameterizations of the non-perturbative region using published values of the non-perturbative parameters. Also shown are the fractional differences of the data from the considered resummation predictions. The data are normalized to the measured $`Z\to e^+e^{-}`$ cross section (221 pb) and the predictions are absolutely normalized. We observe the best agreement with the Ladinsky-Yuan parameters for the $`b`$-space formalism; however, we expect that fits to the data using the Davies-Webber-Stirling ($`b`$-space) or Ellis-Veseli ($`p_T`$-space) parameterizations of the non-perturbative functions could describe the data similarly well. Figure 2 shows the measured differential cross section compared to the fixed-order calculation and the resummation calculation using the Ladinsky-Yuan parameterization. We observe strong disagreement between the data and the fixed-order prediction in the shape for all but the highest values of $`p_T`$. We attribute this to the divergence of the next-to-leading-order calculation at $`p_T`$$`=0`$, and to a significant enhancement of the cross section relative to the prediction at moderate values of $`p_T`$. This disagreement confirms the presence of contributions from soft gluon emission, which are accounted for in the resummation formalisms. In summary, we have measured the inclusive differential cross section of the $`Z`$ boson as a function of its transverse momentum. With the enhanced precision of this measurement over previous ones, we can probe non-perturbative, resummation, and fixed-order QCD effects. We observe good agreement between the $`b`$-space resummation calculation using the published values of the non-perturbative parameters from Ladinsky-Yuan and the measurement for all values of $`p_T`$. Using their parameterization for the non-perturbative region, we obtain $`g_2=0.59\pm 0.06`$ GeV<sup>2</sup>.
We thank the Fermilab and collaborating institution staffs for contributions to this work and acknowledge support from the Department of Energy and National Science Foundation (USA), Commissariat à L’Energie Atomique (France), Ministry for Science and Technology and Ministry for Atomic Energy (Russia), CAPES and CNPq (Brazil), Departments of Atomic Energy and Science and Education (India), Colciencias (Colombia), CONACyT (Mexico), Ministry of Education and KOSEF (Korea), and CONICET and UBACyT (Argentina).
# Flowing sand - a possible physical realization of Directed Percolation ## I Introduction Directed Percolation (DP) is perhaps the simplest model that exhibits a non-equilibrium phase transition between an “active” or “wet” phase and an inactive “dry” one . In the latter phase the system is in a single “absorbing” state; once it reaches the completely dry state, it will always stay there. Interest in DP mainly stems from universality of the associated critical behavior. It is believed that transitions in all models with an absorbing state belong to the DP universality class (unless there are some special underlying symmetries). DP exponents were measured for an extremely wide variety of models. Even though the exponents have not yet been calculated analytically, their values (especially in 1+1 dimensions) are known with very high precision . Despite the preponderance of models in the DP universality class, so far no physical system has been found to exhibit DP behavior. Indeed, as noted by Grassberger, > …there is still no experiment where the critical behavior of DP was seen. This is a very strange situation in view of the vast and successive theoretical efforts made to understand it. Designing and performing such an experiment has thus top priority in my list of open problems” . The purpose of this paper is to point out that a simple system of sand flow on an inclined plane, that has recently been introduced and studied by Daerr and Douady (DD), may well be the first physical realization of a transition in the DP universality class . In Sec. II we describe these experiments in fair detail. The data presented by DD is of qualitative value and raises serious questions regarding the applicability of DP. In particular, the observed shapes of wet clusters differ from those seen in standard DP simulations; they are much more compact. Since the corresponding model, called Compact Directed Percolation (CDP), is unstable against perturbations towards the standard DP behavior , the latter is the generic case expected to occur (if no parameters were fine-tuned to place the system in the CDP class). This motivated us to look for a simple model which is defined in terms of dynamic rules that can plausibly be related to the experiments and, at the same time, exhibit features that look like the experimentally observed ones. Whether the transition exhibited by such a model does belong to the DP universality class remains to be investigated. Such a model is introduced in Sec. III. It is a directed sandpile model, which is simpler than the one introduced and analyzed by Tadic and Dhar ; here the system is reset to a uniform initial state after each avalanche. In Sec. IV we show the outcome of some simulations. The avalanches (observed in the active phase) reproduce the experimental observations quite well. We establish the existence of a transition from an active to an inactive phase. However, the critical behavior extracted from these figures does not seem to be in the DP universality class, rather, it seems close to CDP. As it turns out, this CDP type critical behavior is only a transient: the true critical behavior is of the DP type, but can only be seen after a very long crossover regime, in which the exponents are those of CDP. This observation is based on a careful numerical study, which is presented in Secs. V and VI. Our conclusion is that the DD experiment does serve as a possible realization of a DP-type transition. 
Observation of DP exponents may be tricky as a substantial crossover regime may mask the true critical behavior, and one should try to find methods to shorten this regime. Finally we should note that the DD system is a simple case of Self Organized Criticality (SOC). Without any fine tuning, the system “prepares itself” at the critical point of a DP type transition. The way in which this happens differs from standard SOC models in which a slow driving force (acting on a time scale much smaller than that of the system’s dynamic response) causes evolution to a critical state. In the present case avalanches are started by hand one by one. ## II The Douady-Daerr Experiment The experimental apparatus consists of an inclined plane (size of about $`1m`$) covered by a rough velvet cloth; the angle of inclination $`\phi _0`$ can be varied. Glass beads (e.g. “sand”) of diameter $`250`$–$`425\mu `$m are poured uniformly at the top of the plane and flow down while a thin layer of thickness $`h=h_d(\phi _0)`$, consisting of several monolayers, settles and remains immobile. At this thickness the sand is dynamically stable; the value of $`h_d`$ decreases with an increasing angle of inclination. For each $`\phi _0`$ there exists another thickness $`h_s`$ with $`h_s(\phi _0)>h_d(\phi _0)`$, beyond which a static layer becomes unstable. Hence there exists a region (see Fig. 1) in the $`(\phi ,h)`$ plane, in which a static layer is stable but a flowing one is unstable. We can now take the system, that settled at $`h_d(\phi _0)`$, and increase its angle of inclination to $`\phi `$, staying within this region of mixed stability. The layer will not flow spontaneously, but if we disturb it at the top, generating a flow near the perturbation, the flow will persist and an avalanche will be generated, leaving behind a layer of thickness $`h_d(\phi )`$. These avalanches had the shape of a fairly regular triangle, with opening angle $`\theta `$. As the increment of the inclination $`\mathrm{\Delta }\phi =\phi -\phi _0`$ decreases, the value of $`\theta (\mathrm{\Delta }\phi )`$ decreases as well and the area affected by the avalanche decreases, vanishing as $`\mathrm{\Delta }\phi \to 0`$. This calls for testing a power law behavior of the form $$\theta \sim (\mathrm{\Delta }\phi )^x.$$ (1) If instead of increasing $`\phi `$ we lower the plane, i.e., go to $`\mathrm{\Delta }\phi <0`$, our system, whose thickness is $`h_d(\phi _0)`$, is below the present thickness of dynamic stability, $`h_d(\phi )`$. We believe that in this case an initial perturbation will not propagate, it will rather die out after a certain time (or beyond a certain size $`\xi _{\parallel }`$ of the transient avalanche). As the deviation $`|\mathrm{\Delta }\phi |`$ decreases, we expect the size of the transient active region to increase, i.e., the decay length should grow according to a power law $$\xi _{\parallel }\sim |\mathrm{\Delta }\phi |^{-\nu _{\parallel }}.$$ (2) Hence, by pouring sand at inclination $`\phi _0`$, DD produced a self-organized critical system. The system is precisely at the borderline (with respect to changing the angle) between a stable regime $`\phi <\phi _0`$ in which perturbations die out and an unstable one, $`\phi >\phi _0`$, where perturbations persist and spread. Once this connection has been made, it is natural to associate this system with the problem of DP. Denote by $`p`$ either the site or bond percolation probability and by $`p_c`$ its critical value (i.e., for $`p>p_c`$ the system is in the active phase).
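Before turning to the DP mapping, note that an exponent such as $`x`$ in Eq. (1) would be extracted from a straight-line fit on a log-log plot; a minimal sketch, with invented opening-angle data standing in for measurements:

```python
import numpy as np

dphi  = np.array([0.5, 1.0, 2.0, 4.0])    # tilt increments (degrees), hypothetical
theta = np.array([2.1, 4.0, 8.3, 15.9])   # opening angles (degrees), hypothetical

x, lnC = np.polyfit(np.log(dphi), np.log(theta), 1)
print(f"x = {x:.2f}")   # close to 1 for compact, CDP-like avalanches
```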
We associate the change in tilt with $`p-p_c`$, assuming that near the angle of preparation the behavior of the sand system is related to a DP problem with $$\mathrm{\Delta }\phi =\phi -\phi _0\propto p-p_c.$$ (3) Hence, the exponent $`\nu _{\parallel }`$ should be compared with the known values for DP and CDP. The exponent $`x`$ in Eq. (1) can also be measured and compared with $$\mathrm{tan}\theta \sim \xi _{\perp }/\xi _{\parallel }\sim (\mathrm{\Delta }\phi )^{\nu _{\parallel }-\nu _{\perp }}.$$ (4) ## III The Model Our aim is to write down a simple model based on the physics of flowing sand. We adopt the observation made by DD, that in the regime of interest (i.e., for tilt angles close to $`\phi _0`$) grains of the top layer of sand rest on grains of the layer below (rather than on other grains of the top layer)<sup>1</sup><sup>1</sup>1 This holds for $`\phi <\phi _0`$ and also for $`\phi >\phi _0`$, as long as we stay within the region of mixed stability.. Hence the lower layers provide for the top one a kind of washboard potential, as depicted in Fig. 2. We further assume that only the top layer participates in an avalanche and therefore place the grains of this layer on the sites of a regular square lattice<sup>2</sup><sup>2</sup>2We chose to work with a square lattice, but could have used a triangular one as well, with each site communicating with two neighbors above and two below. (see Fig. 3). At any given time a particular horizontal row of grains may become active, while at the next time step the activity may be transferred to the row beneath. The physical picture that underlies the model is as follows. A grain $`G`$ may become active if at least one of the neighboring grains in the row above it has been active at the previous time step. These grains may then transfer energy to $`G`$; if $`\mathrm{\Delta }E(G)`$, the total energy transferred to $`G`$, exceeds the barrier $`E_b`$ of the washboard, $`G`$ becomes active. An active grain “rolls down” at the next time step and collides with the grains of the next row. The energy it brings to these collisions is $`1+\mathrm{\Delta }E(G)`$, where 1 is the potential energy due to the height difference between two consecutive rows. A fraction $`f`$ of its total energy is dissipated, while the rest is divided stochastically among its three neighbors from the lower row. The model is hence defined in terms of two variables: an activation variable, $`S_i^t=\{\begin{array}{cc}1\hfill & \text{if grain }(t,i)\text{ active,}\hfill \\ 0\hfill & \text{otherwise,}\hfill \end{array}`$ and an energy variable $`E_i^t`$. The index $`t`$ denotes rows of our square lattice and time; at time $`t`$ we update the states of the grains belonging to row $`t`$. Energy is measured in units of the difference between two successive minima of the potential (see Fig. 2). The model is controlled by two parameters, namely | $`E_b,`$ | the barrier height, and | | --- | --- | | $`f,`$ | the fraction of dissipated energy. | The dynamic rules of our model are defined in terms of these variables and parameters as follows. For given values of activities $`S_i^t`$ and energies $`E_i^t`$ we first calculate the energy transferred to the grains of the next row $`t+1`$.
To this end we generate for each active site $`S_i^t=1`$ three random numbers, $`z_i^t(\delta )`$ (with $`\delta =\pm 1,0`$) in a way that $$\underset{\delta =\pm 1,0}{\sum }z_i^t(\delta )=1.$$ (5) The energy transferred to grain $`(t+1,i)`$ is then given by $$\mathrm{\Delta }E_i^{t+1}=(1-f)\underset{\delta =\pm 1,0}{\sum }S_{i-\delta }^tE_{i-\delta }^tz_{i-\delta }^t(\delta ).$$ (6) The values of these energies determine the activation of the grains of row $`t+1`$: $$S_i^{t+1}=\{\begin{array}{cc}1& \text{if }\mathrm{\Delta }E_i^{t+1}>E_b,\\ 0& \text{if }\mathrm{\Delta }E_i^{t+1}\le E_b.\end{array}$$ (7) The energies of the active grains are set according to $$E_i^{t+1}=S_i^{t+1}(1+\mathrm{\Delta }E_i^{t+1}).$$ (8) The meaning of these rules, in words, is obvious: the energy of site $`i`$ at time $`t+1`$ is obtained by identifying, among its three neighbors of the preceding row, those sites (or grains) that were active at time $`t`$. At each such active site $`(t,i)`$ we generated three random numbers $`z_i^t(\delta )`$ which represent the fraction of energy transferred from the grain at site $`(t,i)`$ to the one at $`(t+1,i+\delta )`$. We add up the energy contributions from these active sites; the fraction $`1-f`$ is not dissipated and compared to the barrier height $`E_b`$. If the acquired energy $`\mathrm{\Delta }E_i^{t+1}`$ exceeds $`E_b`$, site $`(t+1,i)`$ becomes active, rolls over the barrier bringing to the collisions (at time $`t+2`$) the acquired energy calculated above and its excess potential energy (of value 1). ## IV Short-time simulations and qualitative discussion of the transition Let us consider the behavior of our model as we vary $`E_b`$ at a fixed value of the dissipation. We expect that for small values of $`E_b`$ an active grain will activate the grains below with high probability; avalanches will propagate downhill and also spread sideways. For a strongly localized initial activation we should, therefore, observe activated regions of triangular shape. As $`E_b`$ increases, the rate of activation decreases and the opening angle $`\theta `$ of these triangles should decrease, until $`E_b`$ reaches a critical value $`E_b^c`$, beyond which initial activations die out in a finite number of time steps (or rows). These expectations are indeed borne out by simulations of the model: the critical value $`E_b^c`$ depends on the dissipation $`f`$ and the resulting phase transition line is shown in Fig. 4 as a solid line. In order to understand this transition qualitatively, let us consider a simple mean-field type approximation, in which all stochastic variables are replaced by their average values. We consider an edge separating an active region from an inactive one at time $`t`$: sites to the left of $`i`$ and $`i`$ itself are wet, whereas $`i+1,i+2,\mathrm{\dots }`$ are dry. Will the rightmost wet site be wet or dry at the next time step? Assuming that all wet sites at time $`t`$ have the same energy $`E^t`$, in our mean-field type estimate the energy delivered to site $`i`$ at time $`t+1`$ is $$\mathrm{\Delta }E_i^{t+1}=\frac{2}{3}(1-f)(1+\mathrm{\Delta }E^t),$$ (9) where we set in Eq. (6) all $`z(\delta )=1/3`$. At the critical point we expect all energies just to be sufficient to go over the barrier; hence set $`\mathrm{\Delta }E_i^{t+1}=\mathrm{\Delta }E^t=E_b^c`$ in Eq. (9). Solving the resulting equation yields $$E_b^c=\frac{2(1-f)}{1+2f}.$$ (10) In Fig. 4 this rough estimate of the transition line is shown as a dotted line. This simple calculation captures the physics of the problem.
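The rules of Eqs. (5)-(8) and the mean-field estimate of Eq. (10) are straightforward to put on a computer. In the sketch below the three fractions $`z_i^t(\delta )`$ are drawn from a flat Dirichlet distribution, an assumption, since the text only fixes the constraint (5); the barrier is set below the critical value E_b^c ≈ 0.386 quoted later for $`f=0.5`$:

```python
import numpy as np

rng = np.random.default_rng(0)

def update_row(S, E, Eb, f):
    """One time step of Eqs. (5)-(8): every active grain topples; a fraction
    f of its energy is dissipated and the rest is split at random among the
    three neighbors of the next row, which activate if they receive > Eb."""
    L, dE = len(S), np.zeros(len(S))
    for i in np.nonzero(S)[0]:
        z = rng.dirichlet(np.ones(3))          # z(-1), z(0), z(+1); Eq. (5)
        for d, w in zip((-1, 0, 1), z):
            if 0 <= i + d < L:
                dE[i + d] += (1.0 - f) * E[i] * w   # Eq. (6)
    S_new = (dE > Eb).astype(int)                   # Eq. (7)
    return S_new, S_new * (1.0 + dE)                # Eq. (8)

def Ebc_mean_field(f):
    return 2.0 * (1.0 - f) / (1.0 + 2.0 * f)        # Eq. (10)

L, f, Eb = 201, 0.5, 0.30         # Eb below the measured E_b^c ~ 0.386
S, E = np.zeros(L, int), np.zeros(L)
S[L // 2], E[L // 2] = 1, 500.0   # single seed with large initial energy
for t in range(100):
    S, E = update_row(S, E, Eb, f)
print("active sites after 100 rows:", S.sum())
print("mean-field threshold at f=0.5:", Ebc_mean_field(f))
```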
However, it is easy to improve it in the following way. As before, we assume the energy of toppling grains to be distributed equally among the three neighbors of the subsequent row. However, we no longer assume all active sites to carry the same energy, instead we compute the energy profile at the edge of a cluster. To this end let us consider a semi-infinite cluster with $`S_i^t=1`$ for $`i\le 0`$ and $`S_i^t=0`$ for $`i>0`$. According to Eq. (6), we are looking for a stationary solution of the equation of motion $$\mathrm{\Delta }E_i^{t+1}=\frac{1-f}{3}\{\begin{array}{cc}3+\mathrm{\Delta }E_{i-1}^t+\mathrm{\Delta }E_i^t+\mathrm{\Delta }E_{i+1}^t\hfill & \text{if }i<0\hfill \\ 2+\mathrm{\Delta }E_{-1}^t+\mathrm{\Delta }E_0^t\hfill & \text{if }i=0\hfill \\ 0\hfill & \text{if }i>0\hfill \end{array}$$ where $`\mathrm{\Delta }E_0^t=E_b^c`$. The corresponding stationary solution reads $$\mathrm{\Delta }E_i^{stat}=E_{bulk}-E_{gap}\mathrm{exp}(ai),(i\le 0)$$ (11) where $`E_{bulk}`$ $`=`$ $`(1-f)/f,`$ (12) $`E_{gap}`$ $`=`$ $`{\displaystyle \frac{2+f-\sqrt{12f-3f^2}}{2f(1-f)}},`$ (13) $`a`$ $`=`$ $`\text{arccosh}{\displaystyle \frac{2+f}{2-2f}}.`$ (14) Thus, the critical threshold is given by the expression $$E_b^c=\frac{2f^2-5f+\sqrt{12f-3f^2}}{2f(1-f)}$$ (15) which slightly improves the mean field result (10), especially for small values of $`f`$ (see dashed line in Fig. 4). The energy profile decreases at the edges of the cluster and saturates in the bulk at $`E_{bulk}`$, as shown in Fig. 5. The connection of our model to the experimental conditions is based on the assumption that the tilt angle of the experiment tunes the ratio between the barrier height and the difference of potential energies between two rows. If the system has been prepared at some $`\phi _0`$, we raise the tilt angle to $`\phi `$; perturbing the system in this region of mixed stability will generate an avalanche. That is, for $`\phi >\phi _0`$ we have $`E_b<E_b^c`$. As the tilt angle is reduced, the size of $`E_b`$ (measured in units of the potential difference) increases, until it reaches its critical value precisely at $`\phi _0`$. Thus increasing $`E_b`$ in the model corresponds to lowering the tilt angle towards the value at which the system has been prepared and, as such, is precisely the boundary of dynamic stability. Hence to reproduce the experiment we were looking for 1. fairly compact triangular regions of activation for $`E_b<E_b^c`$, 2. a varying opening angle of these triangles which should go to zero as $`E_b`$ approaches $`E_b^c`$ from below. The number of “time steps” that correspond to the DD experiment can be estimated as the number of rows of beads from top to bottom of the plate, i.e. about 3000. We simulated the model defined in Eqs. (6)-(8) to check whether it is possible to reproduce the qualitative features of the experiment. Indeed we found this to be the case, as can be seen in Fig. 6. The two avalanches were produced for dissipation $`f=0.5`$, activating a single site at $`t=0`$, to which an initial energy of $`E_0=500`$ was assigned<sup>3</sup><sup>3</sup>3Note that after less than 20 time steps all the initial energy has been dissipated.. The avalanches were compact, triangular, and with fairly straight edges. The edges became rough only when $`E_b`$ was very close to its critical value, as can be seen on the right hand side of Fig. 6. The opening angle of the active regions $`\theta `$ decreased as $`E_b`$ increased towards $`E_b^c`$, which is shown in Fig. 7. From these simulations we obtain the estimate (see Eq.
(4)) $$x=\nu _{\parallel }-\nu _{\perp }=0.98(5)\simeq 1.$$ (16) We predict that measuring the dependence of the avalanche opening angle on $`\mathrm{\Delta }\phi `$ in the experiment should also give a linear law. Furthermore, the density of active sites in the interior of the triangular regions is found to be almost constant, indicating a first-order transition. These results suggest that the transition belongs to the CDP universality class, which is characterized by the critical exponents $$\nu _{\parallel }=2,\nu _{\perp }=1,\beta =0.$$ (17) These observations pose, however, a puzzle: since we believe that DP is the generic situation, we would expect to find non-compact active regions and DP exponents. In the following Section we present a careful numerical analysis of the critical behavior of our model which resolves this problem: the exponents seen in our simulations (and in the experiment) should cross over to the DP values, but only if one gets very deep into the critical region. ## V Crossover to directed percolation The linear law observed in Fig. 7 can be explained by assuming compact clusters whose temporal evolution is determined by the fluctuations of their boundaries. The boundaries perform an effective random walk with a spatial bias proportional to $`E_b-E_b^c`$. Therefore, the critical model should behave in the same way as a Glauber-Ising model at zero temperature, i.e., the transition should belong to the CDP universality class. However, according to the DP conjecture any continuous spreading transition from a fluctuating active phase into a single frozen state should belong to the universality class of directed percolation (DP), provided that the model is defined by short range interactions without exceptional properties such as higher symmetries or quenched randomness (see Sec. VI). Clearly, the present model fulfills these requirements. It has indeed a fluctuating active state and exhibits a phase transition into a single absorbing state which is characterized by a positive one-component order parameter. According to these arguments, the phase transition should belong to the DP universality class. In order to understand this apparent paradox we perform high-precision Monte-Carlo simulations for dissipation $`f=0.5`$. We employ time-dependent simulations, i.e., we topple a single grain in the center and analyze the properties of the resulting cluster. As usual for this type of simulations, we measure the survival probability $`P(t)`$, the number of active sites $`N(t)`$, and the mean square spreading from the origin $`R^2(t)`$ averaged over the surviving runs. At criticality, these quantities are expected to show an asymptotic power law behavior $$P(t)\sim t^{-\delta },N(t)\sim t^\eta ,R^2(t)\sim t^{2/z},$$ (18) where $`\delta `$, $`\eta `$, and $`z`$ are critical exponents which label the universality class. In the case of CDP these exponents are given by $$\delta =1/2,\eta =0,z=2,$$ (19) whereas DP is characterized by the exponents $$\delta =0.1595,\eta =0.3137,z=1.5807.$$ (20) In order to eliminate finite-size effects, we use a dynamically generated lattice adjusted to the actual size of the cluster. Moreover, we observe that the initial non-universal transient is minimal if an excitation energy $`E_0\simeq 15`$ is used. Detecting deviations from power-law behavior in the long-time limit we estimate the critical energy by $`E_b^c=0.385997(5)`$. Our numerical results (obtained from simulations at the critical point) are shown in Figs. 8-10. In all measurements we observe different temporal regimes: 1.
During the first few time steps, the activation energy is distributed to the nearest neighbors whereby the cluster grows at maximal speed. Therefore, the survival probability $`P(t)`$ is $`1`$ and the particle number $`N(t)`$ grows linearly. 2. In the intermediate regime, which extends up to a few hundred time steps, the inactive islands within the cluster are not yet able to break up the cluster into separate parts. Thus, the cluster can be considered as being compact and the temporal evolution is governed by a random walk of its boundaries. In this regime we observe a power-law behavior with CDP exponents (indicated by dotted lines in Figs. 8-10). 3. The intermediate regime is followed by a long crossover from CDP to DP extending over almost two decades up to more than $`10^4`$ time steps<sup>4</sup><sup>4</sup>4Note that the crossover in the present model is different from the one studied in , where inhomogeneous interactions at the cluster’s boundaries were assumed.. 4. Finally the system enters an asymptotic DP regime (indicated by dashed lines in Figs. 8-10). The crossover from CDP to DP is illustrated in Fig. 11. Two avalanches are plotted on different scales. The left one represents a typical avalanche within the first few thousand time steps. As can be seen, the cluster appears to be compact on a lateral scale up to 100 lattice sites. However, as shown in the right panel of Fig. 11, after a very long time the cluster breaks up into several branches. The right hand figure shows a typical cluster on a scale of $`\mathrm{150\hspace{0.17em}000}`$ time steps, where the branches still have a certain characteristic thickness. Going to even larger scales the width of the branches becomes irrelevant and we obtain the typical patterns of critical DP clusters. In comparison with ordinary DP lattice models, the observed crossover in the present model is unusually slow. This is due to short-range correlations between active sites leading to active branches with a certain typical thickness $`\xi _{act}`$. In ordinary DP lattice models the average size of active branches is of the order of a few lattice spacings. In the present case, however, we find a much larger value $`\xi _{act}\simeq 20`$. Based on this observation, the typical crossover time $`t_c`$ can be approximated as follows. In order to cross over to DP, the average size of inactive regions between neighboring branches $`\xi _{inact}`$ has to become larger than the thickness of the branches $`\xi _{act}`$. In Fig. 12 we plot both quantities as a function of time at criticality, using a lattice with $`N=2^{14}`$ sites and homogeneous initial conditions $`E_i^{t=0}=2`$. Initially $`\xi _{act}=N`$ and $`\xi _{inact}=0`$. As time evolves, the average size of active branches decreases and saturates at a constant value $`\xi _{act}\simeq 20`$. However, the average size of inactive regions $`\xi _{inact}`$ continues to grow and exceeds $`\xi _{act}`$ at time $`t_c\simeq 10^5`$. As can be seen, this provides a good estimate of the typical time where the critical behavior of the system crosses over to DP. In order to observe the crossover experimentally, it would be interesting to know how the crossover time $`t_c`$ can be reduced. To this end we measure $`\xi _{act}`$ for several values of the dissipation $`f`$ (see inset of Fig. 11). It turns out that by increasing $`f`$ the typical size of active branches can be decreased down to $`\sim 10`$ lattice spacings. Consequently, the crossover time can be reduced by more than one decade.
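The averages $`\xi _{act}`$ and $`\xi _{inact}`$ discussed above can be read off directly from the run lengths of active and inactive sites in a row; a minimal sketch:

```python
import numpy as np

def mean_run_lengths(S):
    """Average sizes of active branches (runs of 1) and inactive regions
    (runs of 0) in a row of activities S."""
    S = np.asarray(S, int)
    edges = np.flatnonzero(np.diff(S)) + 1    # positions where S changes
    runs = np.split(S, edges)
    act   = [len(r) for r in runs if r[0] == 1]
    inact = [len(r) for r in runs if r[0] == 0]
    return (np.mean(act) if act else 0.0,
            np.mean(inact) if inact else 0.0)

print(mean_run_lengths([1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1]))   # (2.0, 3.0)
```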
Hence, for an experimental verification of DP, systems with high dissipation are more appropriate. The influence of the dissipation can easily be explained within the improved mean field approximation of Sect. IV. Clearly, the stability of a cluster against breakup into several branches by fluctuations depends on the energy gap $`E_{gap}=E_{bulk}-E_b^c`$. As can easily be verified, this energy difference (and therewith the stability of compact clusters) decreases with increasing dissipation $`f`$, explaining the observed $`f`$-dependence. ## VI The effect of randomness The above model describes the physics of flowing sand in a highly idealized manner. In particular, it ignores the fact that spreading avalanches may be subjected to frozen disorder. For example, irregularities of the plate and the velvet cloth could lead to quenched randomness in the equations of motion. Moreover, the system prepares itself in an initial state which is not fully homogeneous. Thus, we have to address the question to what extent quenched randomness will affect the expected crossover to DP. Certain types of quenched disorder are known to change the critical behavior of DP. For example, Moreira and Dickman studied the diluted contact process with spatially quenched disorder. Even for small amplitudes quenched randomness was found to destroy the DP transition, turning algebraic into logarithmic laws. Janssen confirmed and substantiated these findings by a field-theoretic analysis. Recently Cafiero et al. mapped DP with spatially quenched disorder onto a non-markovian process with memory exhibiting the same nonuniversal properties. The memory is due to the formation of bound states of particles in those regions where the percolation probability is very high. As shown by Webman et al., these bound states give rise to a glassy phase separating active and inactive parts of the phase diagram. Similar nonuniversal properties were also observed in DP processes with temporally quenched disorder. In all cases investigated so far, quenched disorder destroys the DP transition. However, the disorder in the DD experiment is different in nature. Clearly, it is neither spatially nor temporally quenched, rather it depends on both space and time. On the level of our model we may think of randomly varying energy barriers $$E_b\to E_b+A\eta (x,t),$$ (21) where the amplitude $`A`$ controls the intensity of disorder. Here $`\eta (x,t)`$ is a white Gaussian noise specified by the correlations $$\overline{\eta (x,t)\eta (x^{\prime },t^{\prime })}=\delta ^d(x-x^{\prime })\delta (t-t^{\prime }),$$ (22) where $`d=1`$ denotes the spatial dimension. In the standard situation of quenched noise of this type $`\eta (x,t)`$ is kept fixed while the experiment is repeated and the quantities under investigation are averaged over many independent avalanches. Yet in the DD experiment, the situation is different. Here once the sand has been poured, a particular realization of the random variables has been selected. However, there is no process to repeat the experiment over and over again with a fixed $`\eta (x,t)`$. Rather, after each avalanche the system is prepared again (by pouring sand or by starting an avalanche elsewhere). Hence the averaging process is done simultaneously over the $`\eta (x,t)`$ and the stochastic dynamic process that generates the avalanches. This type of averaging is of the annealed type and therefore less likely to alter the critical behavior than its quenched version.
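One direct numerical test of this expectation, used in the next paragraph, is directed bond percolation in which every bond probability is drawn afresh from a fixed interval. A minimal sketch; lattice size, duration, number of runs, and the observable are illustrative choices, not those of the original study:

```python
import numpy as np

rng = np.random.default_rng(1)

def survival(p_min, L=501, T=200, runs=200):
    """1+1d directed bond percolation from a single seed; each bond is open
    with a probability drawn uniformly from [p_min, 1], independently for
    every site and time step (spatio-temporally quenched disorder)."""
    alive = 0
    for _ in range(runs):
        wet = np.zeros(L, bool)
        wet[L // 2] = True
        for _ in range(T):
            pL, pR = rng.uniform(p_min, 1.0, (2, L))
            wet = (np.roll(wet & (rng.random(L) < pR), 1) |
                   np.roll(wet & (rng.random(L) < pL), -1))
            if not wet.any():
                break
        alive += wet.any()
    return alive / runs

for p in (0.25, 0.29, 0.33):   # the quoted critical value is p' = 0.289(1)
    print(p, survival(p))
```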
In order to find out whether fully quenched disorder affects the asymptotic critical behavior of DP, we simulated a directed bond percolation process with randomly distributed bond probabilities between $`p^{\prime }`$ and $`1`$. For $`p^{\prime }=0.289(1)`$, we find asymptotic power laws with DP exponents, indicating that the transition is not affected by spatio-temporally quenched noise. Therefore, we expect the same to be true in the case of annealed disorder in our model for flowing sand. To support this point of view, we study the case of quenched randomness in the DP Langevin equation $$\partial _t\rho (x,t)=a\rho (x,t)-g\rho ^2(x,t)+D\nabla ^2\rho (x,t)+\mathrm{\Gamma }\sqrt{\rho (x,t)}\xi (x,t),$$ (24) where $`\rho (x,t)`$ is the particle density and $`a`$ represents the percolation probability. $`\xi (x,t)`$ is a Gaussian white noise which represents the intrinsic randomness of the DP process. At the critical dimension $`d=4`$, where fluctuations start to contribute, the Langevin equation (24) is invariant under scaling transformations $`x\to bx`$, $`t\to b^2t`$, and $`\rho \to b^{-2}\rho `$. In order to include spatio-temporally quenched randomness, we allow for small variations of $`a`$, i.e., we add the term $$A\rho (x,t)\eta (x,t)$$ on the right hand side of Eq. (24). However, as can be shown by simple dimensional analysis, this term is irrelevant in $`d=4`$ dimensions, i.e., it decreases and eventually vanishes under scaling transformations. This observation strongly supports the result that the DP transition in our model is indeed not affected by quenched randomness. We emphasize that the irrelevance of quenched randomness in our model is due to the special role of ’time’ which coincides with the vertical coordinate of the plane. That is, for each time step the stochastic processes take place in a different random environment. To that extent the DD experiment differs from other DP-related experiments such as catalytic reactions where spatially quenched disorder affects the critical behavior. ## VII Conclusions We introduced a simple model for flowing sand on an inclined plane. The model is inspired by recent experiments and reproduces some of the observed features. In contrast to the experiment, which prepares itself in a self-organized critical state, our model needs to be tuned to a critical point by varying the energy barrier $`E_b`$. At criticality the system undergoes a nonequilibrium phase transition from an inactive (dry) phase with finite avalanches to an active (wet) phase where the mean size of avalanches diverges. Analyzing the critical behavior near the transition, we obtained the following results: 1. On short scales, i.e., on scales considered in the DD experiment, the model reproduces the experimentally observed triangular compact avalanches. In the active phase their opening angle $`\theta `$ is predicted to vary linearly<sup>5</sup><sup>5</sup>5Note added after submission: This prediction has to be compared with the model proposed by Bouchaud et al. which predicts the exponent $`x=1/2`$. with $`\mathrm{\Delta }\phi `$. 2. On very large scales the critical behavior of the model crosses over to ordinary DP. Thus, the DD experiment could serve as a first physical realization of directed percolation. Crossover to DP is seen in the model after about $`10^4`$ time steps, whereas the DD experiment stops at about 3000 steps (i.e. rows of beads). Hence in order to observe the crossover in the experiment, larger system sizes and/or smaller beads would be required. 3.
We have shown that quenched randomness with short-range correlations due to irregularities in the experiment should not affect the asymptotic critical behavior. 4. The typical time needed to cross over to DP is found to decrease with increasing dissipation. Thus, in order to create experimental conditions favoring a crossover to DP, we suggest using small glass beads, large system sizes, and an initial angle $`\phi _0`$ where the dissipation of energy per toppling grain is maximal. For physical reasons we would expect the dissipation to be maximal for small angles $`\phi _0`$, but this has to be verified in the actual experiment. As a necessary precondition for a crossover to DP, compact clusters must be able to split up into several branches, as illustrated in Fig. 11. Thus, before measuring critical exponents, this feature has to be tested experimentally. To this end the DD experiment should be performed repeatedly at the critical tilt $`\phi =\phi _0`$. In most cases the avalanches will be small and compact. However, large avalanches, reaching the bottom of the plate, will sometimes be generated. If these avalanches are non-compact (consisting of several branches) we expect the asymptotic critical behavior to be described by DP. Only then is it worthwhile to optimize the experimental setup and to measure the critical exponents quantitatively. Acknowledgements AJD wishes to thank the kind hospitality of the Weizmann Institute and acknowledges financial support from the CICPB and the UNLP, Argentina and from the Weizmann Institute. HH thanks the Weizmann Institute and the Einstein Center for hospitality and financial support. ED thanks the Germany-Israel Science Foundation (GIF) for partial support, B. Derrida for some most helpful initial insights and A. Daerr for communicating his results at an early stage.
# Competing frustration and dilution effects on the antiferromagnetism in La2-xSrxCu1-zZnzO4 ## Abstract The combined effects of hole doping and magnetic dilution on a lamellar Heisenberg antiferromagnet are studied in the framework of the frustration model. Magnetic vacancies are argued to remove some of the frustrating bonds generated by the holes, thus explaining the increase in the temperature and concentration ranges exhibiting three dimensional long range order. The dependence of the Néel temperature on both hole and vacancy concentrations is derived quantitatively from earlier renormalization group calculations for the non–dilute case, and the results reproduce experimental data with no new adjustable parameters. Since the discovery of high-$`T_c`$ superconductors much effort was invested in the investigation of the effect of dopants on the magnetic properties of the parent compounds La<sub>2</sub>CuO<sub>4</sub> (LCO) and YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6</sub> (YBCO). It is now well established that even a very small dopant concentration, which introduces a concentration $`x`$ of holes into the CuO<sub>2</sub> planes, strongly reduces the Néel temperature, $`T_N`$. In LCO, doped with strontium or with excess oxygen, the antiferromagnetic long range order (AFLRO) disappears at a hole concentration $`x_c\simeq 2\%`$, while in YBCO $`x_c\simeq 3.5\%`$. In contrast, the effect of Cu dilution by nonmagnetic Zn is much weaker. Like in percolation, the AFLRO persists at Zn concentrations, $`z`$, as large as 25%. Recently, Hücker and coworkers studied the phase diagram of La<sub>2-x</sub>Sr<sub>x</sub>Cu<sub>1-z</sub>Zn<sub>z</sub>O<sub>4</sub>, and found surprising results: It appears that the vacancies introduced by Zn doping weaken the destructive effect of holes (introduced by the Sr) on the AFLRO. E.g., in a sample with $`z=15\%`$, the critical concentration $`x_c`$ of the holes is approximately 3%, i.e. larger than in vacancy-free LCO. Also, at $`x=0.017`$ the Néel temperature has a maximum as a function of $`z`$, implying a reentrant transition! To explain these phenomena, Hücker et al. measured the variable range hopping conductivity in their samples (at temperatures lower than 150 K all samples were insulators), and showed that Zn doping lowers the localization radius of the holes. Their qualitative conclusion was that as the holes become more “mobile”, their influence on $`T_N`$ increases. However, so far there has been no quantitative understanding of the combined dependence of $`T_N`$ on both $`x`$ and $`z`$. In this paper we present a quantitative calculation, which reproduces all the surprising features of the function $`T_N(x,z)`$. Our theory extends an earlier calculation, which treated the effects of quenched hole doping on the AFLRO in Sr doped LCO, i.e. calculated $`T_N(x,0)`$. The same parameters were then used to reproduce the observed $`T_N(x)`$ for the bi-layer Ca doped YBCO. Here we reproduce the full function $`T_N(x,z)`$, with practically no additional adjustable parameters. Our theory is based on the frustration model, which argues that when a hole is localized on a Cu–O–Cu bond, it effectively turns the interaction between the Cu spins strongly ferromagnetic, causing a canting of the surrounding Cu moments with an angle which decays with the distance $`r`$ as $`1/r`$. The frustrating bond thus acts like a magnetic dipole. As argued in Ref. , similar dipolar effects also arise when the hole is localized over more than one bond.
The frustration model also predicted a magnetic spin glass phase for $`x>x_c`$, as recently confirmed in detail in doped LCO and YBCO. Furthermore, the model successfully reproduced the local field distributions observed in NQR experiments. In earlier work, Glazman and Ioselevich analyzed the planar non–linear $`\sigma `$ model with random dipolar impurities, assuming that the dipole moments are annealed and expanding in $`x/T`$. In Ref. we generalized that analysis, treating the dipoles as quenched. The two calculations coincide to lowest order in $`x`$, but our renormalization group analysis allows a calculation of $`T_N`$ all the way down to zero at $`x_c`$, supplying a good interpolation between these two limits. In what follows we summarize that theory, with emphasis on the changes necessary for including the Zn vacancies. We argue that the main effects of the vacancies enter in two related ways. First, the concentration $`z`$ of the Zn vacancies renormalizes the concentration of frustrated bonds; when a Cu ion is missing from (at least) one end of a “frustrated” bond, then this bond is no longer acting like a “dipole”. The probability to find a bond without vacancies on both ends is $`(1-z)^2`$, and therefore the effective concentration of “dipolar” bonds is equal to $$y=x(1-z)^2.$$ (1) Second, when one Cu ion at an end of a hole–doped bond is replaced by Zn, then the strong antiferromagnetic coupling between the spins of the second Cu and of the hole on the oxygen will form a singlet, which is equivalent to a magnetic vacancy also on the second Cu. Hence the holes increase the number of vacancies, turning their effective concentration into $$v=z[1+2x(1-z)].$$ (2) In what follows, we shall concentrate on the regime $`x<0.03`$, where the $`x`$-dependence of $`v`$ on $`x`$ is very weak. Following Ref. , we describe the system by the Hamiltonian $$\mathcal{H}=\mathcal{H}_v+\mathcal{H}_d,$$ (3) where $`\mathcal{H}_v`$ is the non–linear sigma model (NL$`\sigma `$M) Hamiltonian in the renormalized classical region, representing the long wave length fluctuations of the unit vector $`𝐧(𝐫)`$ of antiferromagnetism. In the presence of short-range inhomogeneity, this Hamiltonian can be written as $$\mathcal{H}_v=\frac{1}{2}\int d𝐫\rho _s(𝐫)\underset{i,\mu }{\sum }(\partial _in_\mu )^2.$$ (4) Here $`i=1,\mathrm{\dots },d`$ and $`\mu =1,\mathrm{\dots },𝒩`$ run over the spatial Cartesian components and over the spin components, respectively, $`\partial _i\equiv \partial /\partial x_i`$, and the effective local stiffness $`\rho _s(𝐫)`$ is a random function. The spatial fluctuations $`\delta \rho _s(𝐫)`$ of this function are $`\delta `$-correlated: $`[\delta \rho _s(𝐫)\delta \rho _s(𝐫^{\prime })]=K\delta (𝐫-𝐫^{\prime })`$, where \[…\] means quenched averages. Simple power counting arguments show that $`K`$ is irrelevant in the renormalization group sense. Therefore, we can replace $`\rho _s(𝐫)`$ in Eq. (4) by its quenched average $`\rho _s(v)\equiv [\rho _s(𝐫)]`$. $`\mathcal{H}_d`$ is constructed to reproduce the dipolar canting of the spins at long distances. Denoting by $`𝐚(𝐫_{\mathrm{\ell }})`$ the unit vector directed along the frustrating bond at $`𝐫_{\mathrm{\ell }}`$, and by $`M_{\mathrm{\ell }}𝐦(𝐫_{\mathrm{\ell }})`$ the corresponding dipole moment (where $`𝐦(𝐫_{\mathrm{\ell }})`$ is a unit vector giving the direction of the dipole, and $`M_{\mathrm{\ell }}`$ is its magnitude), we have $$\mathcal{H}_d=\rho _s(v)\int d𝐫\underset{i}{\sum }𝐟_i(𝐫)\cdot \partial _i𝐧,$$ (5) with $$𝐟_i(𝐫)=\underset{\mathrm{\ell }}{\sum }\delta (𝐫-𝐫_{\mathrm{\ell }})M_{\mathrm{\ell }}a_i(𝐫_{\mathrm{\ell }})𝐦(𝐫_{\mathrm{\ell }}),$$ (6) where the sum runs only over doped bonds which frustrate the surrounding (namely have both Cu ions present). As argued in Ref.
, the renormalization group procedure generates an effective dipole–dipole interaction between the dipole moments, $`\{𝐦(𝐫)\}`$, which is mediated via the canted Cu spins. At low temperature $`T`$ these moments develop very long ranged spin–glassy correlations, and may thus be considered frozen. Hence, we treat all the variables $`𝐫_{\mathrm{\ell }}`$, $`𝐚(𝐫_{\mathrm{\ell }})`$, and $`𝐦(𝐫_{\mathrm{\ell }})`$ as quenched, and we have $$\left[f_{i\mu }(𝐫)f_{j\nu }(𝐫^{\prime })\right]=\lambda \delta _{\mu \nu }\delta _{ij}\delta (𝐫-𝐫^{\prime }).$$ (7) Here $`\lambda =Ay`$, $`A=M^2Q/d`$, where $`Q=[m_\mu ^2(𝐫)]`$, and the effective dipole concentration $`y`$ replaces the parameter $`x`$ used in Ref. With these assumptions, we have now mapped our problem to that treated in Ref. We can thus take over the results from there, and $`T_N(x,z)`$ should be equal to the Néel temperature derived there for hole concentration $`y`$ and stiffness constant $`\rho _s(v)`$. The renormalization group analysis of the Hamiltonian (3) found the two–dimensional antiferromagnetic correlation length $`\xi _{2D}`$, as function of the two parameters $`t=T/\rho _s`$ and $`\lambda `$. The results contain exact exponential factors, which give the leading behavior, and approximate prefactors. For doped LCO, the results were given for two separate regimes: $$\xi _{2D}/a=C_1\lambda ^{0.8}\mathrm{exp}\left(\frac{2\pi }{3\lambda }\right)$$ (8) for $`t<\lambda `$, and $$\xi _{2D}/a=C_2\mathrm{exp}\left(\frac{2\pi }{3\lambda }\left[1-\left(1-\frac{\lambda }{t}\right)^3\right]\right)$$ (9) for $`t>\lambda `$. Here, $`a=3.8\AA `$ is the lattice constant, and the coefficients $`C_1`$ and $`C_2`$ may have a weak dependence on $`t`$ and on $`\lambda `$. In Ref. the data on La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> were fully described by the constant values $`C_1=0.74`$ and $`C_2=0.5`$. The three dimensional (3D) Néel temperature was then derived from the relation $$\alpha \xi _{2D}^2\sim 1,$$ (10) representing the appearance of 3D AFLRO due to the weak relative spin anisotropy or the weak relative interplanar exchange coupling, both contained in the parameter $`\alpha `$. Combining Eqs. (8) and (10) thus yields an $`\alpha `$-dependent value for the critical value $`\lambda _c`$, above which AFLRO is lost. This value should give the critical line for all $`t<\lambda `$. Using the undoped value $`\alpha \simeq 10^{-4}`$, Ref. estimated $`\lambda _c\simeq 0.37`$. Assuming that $`\alpha `$ is independent of both $`y`$ and $`v`$, this yields $`y_c=\lambda _c/A\simeq 0.019`$, where we have used the value $`A=20`$ found for slightly doped LCO. Combining this with Eq. (1), we thus find $$x_c\simeq \frac{0.019}{(1-z)^2},$$ (11) showing an increase of the antiferromagnetic regime with increasing $`z`$. At $`z=0.15`$, this would predict $`x_c\simeq 0.026`$, slightly lower than the observed value. This discrepancy could result from various sources. For example, dilution may affect the nearest neighbor exchange energy in the plane, $`J`$, more strongly than the interplanar interaction or the anisotropy. This would imply that $`\alpha `$ increases with $`z`$. Combining Eqs. (9) and (10), one finds that for $`t>\lambda `$, the critical line is given by $$\frac{t_N(\lambda )}{t_N(0)}\simeq \frac{B\lambda }{1-(1-3B\lambda )^{1/3}},$$ (12) where $$B=-\frac{1}{4\pi }\mathrm{ln}(\alpha C_2^2)\frac{1}{t_N(0)}.$$ (13) We next look at the dependence of $`T_N`$ on $`x`$ for fixed $`z`$. Ignoring the weak dependence of $`v`$ on $`x`$ in Eq.
(2), $`\rho _s`$ is assumed to depend only on $`z`$ (the relative error in $`\rho _s`$ from neglecting $`x`$ in Eq. (2) is less than 3%, see below). In that case, $`\rho _s`$ drops out of the ratio on the LHS in Eq. (12), which becomes equal to $`T_N(x,z)/T_N(0,z)`$. The RHS of that equation now depends only on $`\lambda =Ay=Ax(1-z)^2`$, reflecting a universality of the plot of $`T_N(x,z)/T_N(0,z)`$ versus the rescaled variable $`y=x(1-z)^2`$. Note that this universal plot, which should describe the Néel temperature for many values of $`z`$, requires no new parameters; all the parameters are known from the limit $`z=0`$. In fact, it is worth noting that the RHS of Eq. (12) depends only on the combination $`B\lambda `$, so that it should also apply to other lamellar systems with different values of $`B`$, resulting from different values of $`\alpha `$. Figure 1 presents the universal plot of $`T_N(x,z)/T_N(0,z)`$, from both Eqs. (12) (for $`t>\lambda `$) and (11) (for $`t<\lambda `$). This theoretical curve is then compared with various experiments, for both $`z=0`$ and $`z=0.15`$. It is satisfactory to note that except for one point, the data from the latter are indistinguishable from those for the non–dilute case, confirming our universal prediction. For comparison of the $`z`$-dependence of $`T_N(x,z)`$ with experiments, it is more convenient to scale $`T_N(x,z)`$ by $`T_N(0,0)`$. For that purpose, we need the ratio $`T_N(0,z)/T_N(0,0)`$. Theoretically, Eq. (13) yields $$\frac{T_N(0,z)}{T_N(0,0)}=\frac{t_N(0,z)\rho _s(z)}{t_N(0,0)\rho _s(0)}=\frac{B(0)}{B(z)}\frac{\rho _s(z)}{\rho _s(0)},$$ (14) where the weak $`z`$-dependence of $`B(z)`$ may result from such a dependence of either $`\alpha `$ or $`C_2`$ in Eq. (13). According to Refs. and , the experimental data fit the linear dependence $$\frac{T_N(0,z)}{T_N(0,0)}\simeq 1-3.20z$$ (15) up to $`z=0.25`$. At low concentrations, $`z<0.10`$, this is in good agreement with the $`1/S`$ expansion result, $`\rho _s(z)/\rho _s(0)=1-3.14z`$, if one uses the approximation $`B(z)\simeq B(0)`$ in Eq. (14). At higher concentrations the ratio $`\rho _s(z)/\rho _s(0)`$ in the classical limit decreases with dilution approximately as $`\rho _s(z)/\rho _s(0)=1-3.14z+1.57z^2`$, i.e. slower than the experimental $`T_N(z)/T_N(0)`$. This discrepancy can be due to quantum corrections to $`\rho _s`$, or to the $`z`$ dependence of $`B`$. Substituting Eqs. (14) and (15) (also replacing $`z`$ by $`v`$) into Eq. (12), we have $$\frac{T_N(x,z)}{T_N(0,0)}=(1-3.20v)\frac{ABy}{1-(1-3ABy)^{1/3}}.$$ (16) Figure 2 shows the dependence of $`T_N(x,z)`$ on the dilution $`z`$, given by the above equation, for three concentrations of holes. The theoretical curve for $`x=0.018`$ reproduces very well the observed maximum in the dependence of $`T_N(x,z)/T_N(0,0)`$ on $`z`$. The experimental points were measured at a nominal Sr concentration $`x=0.017\pm 0.001`$. An important prediction of the theory is the high sensitivity of the maximum to the hole concentration. The maximum exists only at $`x`$ sufficiently close to $`x_c`$, and disappears at lower $`x`$. It would be interesting to check this prediction experimentally. In conclusion, we found the combined effect of hole doping and magnetic dilution on the long-range order in lamellar Heisenberg antiferromagnets. We showed that dilution weakens the destructive effect of the holes on the AFLRO.
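To make the shape of Eq. (16) concrete, the following sketch evaluates it along $`z`$ at fixed $`x`$; the product $`AB`$ is an illustrative number chosen so that $`x=0.018`$ lies close to $`x_c`$, not a fitted value:

```python
import numpy as np

AB = 18.0   # illustrative value of the product A*B (not fitted here)

def tn_over_tn00(x, z):
    """Eq. (16): T_N(x,z)/T_N(0,0), valid in the t > lambda regime."""
    y = x * (1 - z)**2                 # Eq. (1)
    v = z * (1 + 2*x*(1 - z))          # Eq. (2)
    u = AB * y
    return (1 - 3.20*v) * u / (1 - (1 - 3*u)**(1.0/3.0))

z = np.linspace(0.0, 0.25, 251)
r = tn_over_tn00(0.018, z)
print("maximum of T_N at z ~", z[np.argmax(r)])   # a maximum at small z > 0
```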
The critical concentration $`x_c`$ increases with dilution, and the dependence of $`T_N`$ on vacancy concentration reveals a maximum, if the hole concentration is sufficiently close to $`x_c`$. These findings are in quantitative agreement with the experiment. Furthermore, the experimental data for the two concentrations agree with each other even better than with the theoretical curve, demonstrating the validity of the scaling $`x\to x(1-z)^2`$ beyond any theory. We note that the consistency of our theory with the experiments, with no new adjustable parameters, supports the validity of the frustration model. This project has been supported by the US-Israel Binational Science Foundation. AA also acknowledges the hospitality of the ITP at UCSB, and the partial support there from the NSF under grant No. PHY94-07194.
# Hadron Masses and Quark Condensate from Overlap Fermions ## Abstract We present results on hadron masses and quark condensate from Neuberger’s overlap fermion. The scaling and chiral properties and finite volume effects from this new Dirac operator are studied. We find that the generalized Gell-Mann-Oakes-Renner relation is well satisfied down to the physical u and d quark mass range. We find that in the range of the lattice spacing we consider, the $`\pi `$ and $`\rho `$ masses at a fixed $`m_\pi /m_\rho `$ ratio have weak $`O(a^2)`$ dependence. The recent advance in chiral fermion formulation which satisfies the Ginsparg-Wilson relation holds great promise for implementing chiral fermions in lattice QCD at finite lattice spacing. It is shown to have exact chiral symmetry on the lattice and no order $`a`$ artifacts. Neuberger’s Dirac operator derived from the overlap formalism has a compact form in four dimensions which involves a matrix sign function $$D=\frac{1}{2}\left[1+\mu +(1-\mu )\gamma _5ϵ(H)\right].$$ (1) In this talk, I present some preliminary results from our numerical implementation of the Neuberger fermion. We adopt the optimal rational approximation of the matrix sign function with 12 terms in the polynomials. The smallest 10 to 20 eigenvalues of $`H^2`$ are projected out for exact evaluation of the sign function for these eigenstates. We use multi-mass conjugate gradient as the matrix solver for both the inner and outer loops. With residuals at $`10^{-7}`$, the inner loop takes $`200`$ iterations and the outer loop takes $`100`$ iterations. We check the unitarity of the matrix $`V=\gamma _5ϵ(H)`$. For $`Vx=b`$, we find $`|x^{\dagger }x-b^{\dagger }b|\sim 10^{-9}`$. Even for topological sectors with $`Q\ne 0`$, we find the critical slowing down is much milder than that of the Wilson fermion and there are no exceptional configurations. The critical slowing down sets in quite abruptly after $`\mu a=0.003`$, which is already at the physical u and d masses. It is shown that the generalized Gell-Mann-Oakes-Renner (GOR) relation $$\mu \int d^4x\langle \pi (x)\pi (0)\rangle =2\langle \overline{\mathrm{\Psi }}\mathrm{\Psi }\rangle ,$$ (2) with $`\pi (x)`$ being the pion interpolation field, is satisfied for each quark mass and volume, configuration by configuration. We utilize this relation as a check of our numerical implementation of the Neuberger operator. We find that for the lattices we considered ($`6^3\times 12,\beta =5.7,5.85`$; $`8^3\times 16,\beta =5.85`$; and $`10^3\times 20,\beta =5.85,6.0`$) the GOR relation is satisfied very well (to within 1%) all the way down to the smallest mass $`\mu a=0.0001`$ for the $`Q=0`$ sector. For the $`Q\ne 0`$ sector, the presence of zero modes demands higher precision for the approximation of $`ϵ(H)`$. For example, we show in Fig. 1 the ratio of the right to left side of Eq. (2) for a configuration with topology on the $`6^3\times 12`$ lattice at $`\beta =5.7`$ as a function of the quark mass. When only the 10 smallest eigenmodes are projected, we see that the ratio deviates from one for small quark masses. The situation is considerably improved when the 20 smallest eigenmodes are included. The situation is better than the domain-wall fermion case when the size of the fifth dimension is limited to $`L_s=10`$ to 48. We also calculate the quark condensate $`\overline{\mathrm{\Psi }}\mathrm{\Psi }`$ with 3 to 6 $`Z_2`$ noises for each configuration.
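For a small Hermitian matrix the sign function entering Eq. (1) can be evaluated exactly from the eigendecomposition, which is what the projection of the lowest eigenmodes accomplishes for the large, sparse lattice operator. A toy check in Python:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 8
H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (H + H.conj().T) / 2                       # Hermitian test matrix

w, U = np.linalg.eigh(H)
eps = U @ np.diag(np.sign(w)) @ U.conj().T     # eps(H) = U sign(Lambda) U^dag

# eps(H) is Hermitian and squares to the identity, so V = gamma_5 eps(H)
# is unitary -- the property checked above via |x^dag x - b^dag b|.
print(np.allclose(eps @ eps, np.eye(n)))
```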
For small quark mass, the quark condensate has the form $$\overline{\mathrm{\Psi }}\mathrm{\Psi }=\frac{|Q|}{\mu V}+c_0+c_1\mu .$$ (3) The singular term, which is due to the zero modes in the configurations with topology ($`Q\ne 0`$), is specific to the quenched approximation. It will be suppressed when the determinant is included in the dynamical fermion case. We see this clearly in the following figure; this effect was first seen with the domain-wall fermion. A fit to the formula in Eq. (3) is given in Fig. 2. We see that $`c_0`$ is non-zero. The standard definition of the quark chiral condensate entails the extrapolation of $`c_0`$ to the infinite volume before taking the massless limit. Another way is to consider the finite-size scaling. When the size of the lattice is much smaller than the pion Compton wavelength, i.e. $`L\ll 1/m_\pi `$, the $`\overline{\mathrm{\Psi }}\mathrm{\Psi }`$ is proportional to $`\mu \mathrm{\Sigma }^2V`$ for small masses besides the $`\frac{|Q|}{\mu V}`$ term due to quenching. From this, the infinite volume condensate $`\mathrm{\Sigma }`$ can be extracted. We plot in Fig. 3 $`\overline{\mathrm{\Psi }}\mathrm{\Psi }a^3/\mu a`$ vs $`\mu a`$ in the $`Q=0`$ sector for 3 lattice volumes ($`6^3\times 12`$, $`8^3\times 16`$, and $`10^3\times 20`$) at $`\beta =5.85`$. We see that they are quite flat, which indicates that the condensate is indeed proportional to $`\mu `$, and we also see that they increase with volume. We have calculated the $`\pi ,\rho `$ and nucleon masses. A typical result on the $`8^3\times 16`$ lattice at $`\beta =5.85`$ is given in Fig. 4. We see the finite volume effect on the nucleon mass when $`\mu a`$ is smaller than $`0.15`$. To see the behavior of pion masses near the chiral limit, we plot $`m_\pi ^2a^2`$ as a function of $`\mu a`$ in Fig. 5 for three lattices with about the same physical volume. It appears that there might be a $`\sqrt{\mu a}`$ behavior in the very small $`\mu a`$ region which we will explore further. When we project only the 10 smallest eigenmodes in the approximation for the sign function in the $`6^3\times 12`$ case, we see that $`m_\pi ^2a^2`$ tends to a finite value as $`\mu a\to 0`$. This implies a residual mass due to the poor approximation of $`ϵ(H)`$, a behavior similar to that observed in the domain-wall fermion with finite $`L_s`$. Finally, we check scaling. We plot in Fig. 6 $`m_\pi /\sqrt{\sigma }`$ and $`m_\rho /\sqrt{\sigma }`$ vs $`\sigma a^2`$, where $`\sigma `$ is the string tension from which the lattice spacings are determined. It is known that the overlap operator does not have $`O(a)`$ artifacts. Now it appears that the $`O(a^2)`$ errors are small. This work is partially supported by DOE Grants DE-FG05-84ER40154 and DE-FG02-95ER40907. We thank R. Edwards for sharing his experience in implementing the sign function solver. We also thank H. Neuberger for stimulating discussions.
# Collision and symmetry–breaking in the transition to strange nonchaotic attractors ## Abstract Strange nonchaotic attractors (SNAs) can be created due to the collision of an invariant curve with itself. This novel “homoclinic” transition to SNAs occurs in quasiperiodically driven maps which derive from the discrete Schrödinger equation for a particle in a quasiperiodic potential. In the classical dynamics, there is a transition from torus attractors to SNAs, which, in the quantum system, is manifest as the localization transition. This equivalence provides new insights into a variety of properties of SNAs, including their fractal measure. Further, there is a symmetry breaking associated with the creation of SNAs which rigorously shows that the Lyapunov exponent is nonpositive. By considering other related driven iterative mappings, we show that these characteristics associated with the appearance of SNAs are robust and occur in a large class of systems. The unexpected—and fascinating—connection between strange nonchaotic dynamics and localization phenomena brings together two current strands of research in nonlinear dynamics and condensed matter physics. The former describes temporal dynamics converging on a fractal attractor on which the largest Lyapunov exponent is nonpositive, while the latter involves exponentially decaying wave functions. Recent work has shown that the fluctuations in the exponentially decaying localized wave function are fractal, and this appears in the classical problem as an attractor with fractal measure. Here we exploit this relationship further to understand the mechanism for the transition to SNAs, which is a subject of continuing interest. In this Letter, we show that the transition to SNAs has two unusual and general features. Firstly, SNAs can be created by the homoclinic collision of invariant curves with themselves. Secondly, the bifurcation to SNAs, when occurring such that the largest nontrivial Lyapunov exponent passes through zero, is accompanied by a symmetry–breaking. These features provide us with a novel way to characterize and quantify the transition to SNAs. Furthermore, by considering a variety of quasiperiodic maps, we demonstrate that these aspects of the SNA transition are generic. The quasiperiodically forced dynamical system under investigation here is the Harper map, $$x_{n+1}=f(x_n,\varphi _n)\equiv -\left[x_n-E+2ϵ\mathrm{cos}2\pi \varphi _n\right]^{-1},$$ (1) with the rigid–rotation dynamics $`\varphi _n=n\omega +\varphi _0`$ giving quasiperiodic driving for irrational $`\omega `$. This map is obtained from the Harper equation, $$\psi _{n+1}+\psi _{n-1}+2ϵ\mathrm{cos}[2\pi (n\omega +\varphi _0)]\psi _n=E\psi _n,$$ (2) under the transformation $`x_n=\psi _{n-1}/\psi _n`$. Note that the lattice site index of the quantum problem is the time (or iteration) index in the classical problem. The Harper equation is a discrete Schrödinger equation for a particle in a periodic potential on a lattice. The wavefunction at site $`n`$ of the lattice is $`\psi _n`$, and $`E`$ is the energy eigenvalue. The parameters $`ϵ`$, $`\omega `$, and $`\varphi _0`$ determine the strength, periodicity and phase (relative to the underlying lattice) of the potential. For irrational $`\omega `$ (usually taken to be the golden mean, $`(\sqrt{5}-1)/2`$), the period of the potential is incommensurate with the periodicity of the lattice.
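A minimal numerical illustration of the classical dynamics may be useful here. The sketch below (ours; initial conditions, orbit lengths and $`ϵ`$ values are illustrative) iterates the Harper map of Eq. (1) under golden-mean driving and collects $`(\varphi ,x)`$ points, e.g. to visualise the two-branch invariant curves at $`ϵ<1`$ against the attractor formed at $`ϵ=1`$, cf. Fig. 1.

```python
# Illustrative sketch: orbits of the Harper map, Eq. (1), at E = 0.
import numpy as np

OMEGA = (np.sqrt(5) - 1) / 2          # golden mean

def harper_orbit(eps, E=0.0, n=20_000, x0=0.5, phi0=0.0, transient=1000):
    pts = []
    x, phi = x0, phi0
    for i in range(n + transient):
        x = -1.0 / (x - E + 2 * eps * np.cos(2 * np.pi * phi))
        phi = (phi + OMEGA) % 1.0
        if i >= transient:
            pts.append((phi, x))
    return np.array(pts)

curve = harper_orbit(0.8)   # a two-branch invariant curve, cf. Fig. 1(a)
sna = harper_orbit(1.0)     # the attractor at the collision point, Fig. 1(b)
```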
For the classical map, both $`ϵ`$ and $`E`$ are important parameters, but the quantum problem is meaningful only when $`E`$ is an eigenvalue of the system, so we limit our discussion of the classical system to these special values of $`E`$. However, as we discuss below, this restriction can be lifted when we consider perturbations of the map which are not related to the eigenvalue problem. For most of our work we set $`E=0`$, which is an eigenvalue. The Harper equation is paradigmatic in the study of localization phenomena in quasiperiodic systems, exhibiting a localization transition at $`ϵ=1`$. For $`ϵ<1`$, all eigenstates are extended and hence are characterized by an infinite localization length, while for $`ϵ>1`$, eigenstates are localized with localization length $`\gamma ^{-1}=(\mathrm{ln}ϵ)^{-1}`$. As we discuss below, the fact that the Lyapunov exponent of the Harper equation is known exactly is crucial in establishing the existence of SNAs in the Harper map. Of the two Lyapunov exponents for the Harper map, that corresponding to the $`\varphi `$ dynamics is 0, while the other can be easily calculated as $$\lambda =\lim_{N\to \infty }\frac{1}{N}\sum _{i=1}^{N}y_i,$$ (3) where $`y_i`$ is the “stretch exponent” defined through $$y_i=\mathrm{ln}|f^{\prime }(x_i)|=\mathrm{ln}x_{i+1}^2$$ (4) $$=-2\mathrm{ln}|x_i-E+2ϵ\mathrm{cos}2\pi \varphi _i|.$$ (5) It is easy to see that in the localized state, $$\lambda =-2\gamma ,$$ (6) and therefore the localized wave function of the Harper equation corresponds to an attractor with negative Lyapunov exponent for the Harper map. The second important point in establishing the existence of SNAs in the Harper equation stems from the fact that the fluctuations about the localized wave function in the Harper equation are fractal. This result, based on renormalization studies of the Harper equation, suggests that the corresponding attractor in the Harper map has a fractal measure and hence is an SNA. Furthermore, a perturbative argument starting from the strong coupling limit provides a rigorous proof of the existence of an SNA for $`E=0`$, making the Harper mapping one of the few systems where the existence of an SNA is well established. We now discuss the scenario for the formation of SNAs in this system when $`E=0`$. For $`ϵ<1`$, the phase space is foliated by invariant curves, each parametrized by the initial conditions. It is important to note that for $`ϵ<1`$ there are no attractors in the system since all the curves are neutrally stable. However, at $`ϵ=1`$, trajectories converge on an attractor. The convergence is power-law and hence the Lyapunov exponent is zero: we can characterize this via a power-law exponent $$\beta =\lim_{N\to \infty }\frac{1}{\mathrm{ln}N}\mathrm{ln}\prod _{i=0}^{N-1}\mathrm{exp}(y_i),$$ (7) the transition from a family of invariant tori to an attractor being signaled by a non-zero value of $`\beta `$. The transition from an invariant curve to the attractor can be described as a collision phenomenon, as we discuss below. For $`ϵ<1`$, the invariant curves have two branches \[see Fig. 1(a)\], deriving from the fact that for $`ϵ=0`$ the map does not have a period–1 fixed point for real $`x`$ but has instead a period–2 orbit. As $`ϵ\to 1`$, the two branches approach each other and collide at $`ϵ=1`$, the point of collision being a singularity. Since the dynamics in $`\varphi `$ is ergodic, the collision occurs at a dense set of points.
Furthermore, this happens for each invariant curve, and in effect all invariant curves approach each other and collide at $`ϵ=1`$, forming an attractor \[see Fig. 1(b)\]. We quantify this collision by demonstrating that as $`ϵ\to 1`$, the distance $`d`$ between the two branches goes to zero as a power–law. When the quasiperiodic forcing frequency $`\omega `$ is the golden mean ratio, the distance between the two branches of an invariant curve can be calculated by first noting that a point $`(x_i,\varphi _i)`$ and its successive Fibonacci iterates, $`(x_{i+F_k},\varphi _{i+F_k})`$, where $`F_k`$ is the $`k`$th Fibonacci number, are closely spaced in $`\varphi `$. If the two branches of the invariant curve are labeled C (for central) and N (for noncentral) \[see Fig. 1(a)\], the sequence of Fibonacci iterates follows the symbolic coding CCNCCNCCNCCN… or NNCNNCNNCNNC…. This follows from the fact that the Fibonacci numbers are successively even, odd, odd, even, odd, odd, …. Thus, if $`k`$ is chosen appropriately, such that $`F_k`$ is even and $`F_{k+1}`$ is odd (or vice-versa), $$d_k(i)=|x_{i+F_k}-x_{i+F_{k+1}}|$$ (8) measures the approximate vertical distance between the curves at $`(x_i,\varphi _i)`$. Minimizing this distance along the invariant curve, we find that the closest approach of the two branches decreases as a power, $$d=\mathrm{min}\left[\lim_{k\to \infty }d_k(i)\right]\propto (1-ϵ)^\delta .$$ (9) Our results, given in Fig. 2, provide a quantitative characterization of the transition to SNAs in this system. For eigenvalues other than $`E=0`$, the scenario for SNA formation may be different. When the eigenvalue $`E`$ is at the band–edge, the SNAs appear to be formed via the fractalization route, namely by gradually wrinkling and forming a fractal. The reason for this difference can be traced to the simple fact that, unlike the $`E=0`$ case, below $`ϵ=1`$ the invariant curve for the minimum eigenvalue has a single branch which originates from a fixed point at $`ϵ=0`$. The self–collision of invariant curves to form SNAs is a general mechanism. Consider a family of maps, $$x_{i+1}=-\left[x_i+\alpha x_i^\nu +2ϵ\mathrm{cos}2\pi \varphi _i\right]^{-1}$$ (10) $$\varphi _{i+1}=\varphi _i+\omega \text{mod}1,$$ (11) which bear no relation to an eigenvalue problem. For $`\nu `$ an odd integer, the above map is invertible and hence does not have any chaotic attractors. Numerical results for $`\nu =3`$ show that in these perturbed maps, an SNA is also born after the attractor collides with itself. Similar results are obtained for other polynomial or sinusoidal perturbations. A more fundamental characteristic of this route to SNAs is a dynamical symmetry–breaking. Although the dynamics is nontrivial for the variable $`x`$, the Lyapunov exponent is exactly zero for $`ϵ<1`$. To understand this from a dynamical point of view, we first note that for finite times along a trajectory, the local expansion and contraction rates vary. It turns out that a meaningful way to understand the role of the parameter $`ϵ`$ is to study the return–map for the stretch exponents, $$y_{i+1}=-2\mathrm{ln}|\text{sgn}(x_i)\mathrm{exp}(y_i/2)-E+2ϵ\mathrm{cos}2\pi \varphi _i|.$$ (12) Shown in Fig. 3(a) is the above map for $`E=0`$ and $`ϵ=0.5`$. There is a reflection symmetry evident, namely $`(x,y)\to (-y,-x)`$, although this symmetry is not easy to see directly in the mapping, Eq. (12), itself, owing to the quasiperiodic nature of the dynamical equations.
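The symmetry of the stretch-exponent return map, Eq. (12), can be probed directly. The sketch below (ours; parameters are illustrative) accumulates the stretch exponents of Eq. (5) along a Harper-map orbit at $`E=0`$; their mean is the Lyapunov exponent of Eq. (3), which comes out numerically zero for $`ϵ\le 1`$ and negative beyond, in line with the symmetry-breaking argument that follows.

```python
# Illustrative sketch: stretch exponents y_i and their mean (the Lyapunov
# exponent) for the Harper map at E = 0.
import numpy as np

OMEGA = (np.sqrt(5) - 1) / 2

def stretch_exponents(eps, E=0.0, n=100_000, x0=0.3, phi0=0.0):
    x, phi, ys = x0, phi0, np.empty(n)
    for i in range(n):
        denom = x - E + 2 * eps * np.cos(2 * np.pi * phi)
        ys[i] = -2 * np.log(abs(denom))      # Eq. (5)
        x, phi = -1.0 / denom, (phi + OMEGA) % 1.0
    return ys

for eps in (0.5, 1.5):
    y = stretch_exponents(eps)
    # the return map of Eq. (12) is the point set (y[:-1], y[1:]);
    # the mean of y is the Lyapunov exponent of Eq. (3)
    print(eps, y.mean())   # ~0 for eps <= 1, ~ -2 ln(eps) for eps > 1
```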
However, as a consequence of the symmetry, the positive and the negative terms cancel exactly in Eq. (3), giving a zero Lyapunov exponent. All finite sums of the stretch exponents, namely the finite–time Lyapunov exponents, also share the same symmetry features. This symmetry is maintained for $`0<ϵ\le 1`$, above which it is broken \[Fig. 3(b)\]. When the negative stretch exponents exceed the positive ones, the Lyapunov exponent $`\lambda `$ therefore becomes negative; coupled with the fact that the attractor has a dense set of singularities, this rigorously confirms the existence of strange nonchaotic dynamics. Symmetry–breaking appears to be operative in a large class of systems, including the mapping where SNAs were first shown to exist, viz. $`x_{n+1}=2ϵ\mathrm{cos}(2\pi \varphi _n)\mathrm{tanh}x_n`$, and similar systems where the transition to SNAs is via the blowout bifurcation. In all these instances, the largest Lyapunov exponent goes through zero when the SNA is born. When the eigenvalue $`E`$ differs from 0, say at the band–edge, the attractor in the localized state is also an SNA which is born at $`ϵ=1`$ with zero Lyapunov exponent. Again \[see Fig. 3(c)\] there is the symmetry in the return map for the stretch exponents, which is broken for $`ϵ>1`$. In summary, our work shows that the fractal measure of the trajectory has its origin in the homoclinic collisions of an invariant curve with itself. This characterization of the transition to SNAs can be quantified, and may serve as a useful scenario for the appearance of SNAs in a variety of nonlinear dissipative systems. Furthermore, we demonstrate that the transition from an invariant curve to an SNA proceeds via a symmetry–breaking. A zero value for the Lyapunov exponent of a system can arise in a number of ways, and the present instance, namely the exact cancellation of expanding and contracting terms, is very special. (There is similar symmetry breaking at all period–doubling bifurcations in such systems as well, but these points are of measure zero.) It is conceivable that there are more complex symmetries in other systems which similarly lead to a zero value for the Lyapunov exponent. The significance of this symmetry and its breaking in the corresponding quantum problem may be an important question in characterizing the localization transition itself. There are numerous lattice models exhibiting localization in aperiodic potentials, including the quantum kicked rotor. The corresponding derived aperiodic mappings are worthy of further study and might well extend the subject of SNAs to systems beyond quasiperiodically driven maps. In addition, there are interesting open questions regarding localization and its absence in quasiperiodic potentials with discrete steps. It is conceivable that this type of mapping of the quantum problem onto the classical problem may provide better understanding of localization phenomena. ACKNOWLEDGMENT: We would like to thank U. Feudel, J. Ketoja, and J. Stark for various illuminating discussions during the seminar “Beyond Quasiperiodicity” where this work was started. We would also like to acknowledge the hospitality of the Max Planck Institute for Complex Systems, Dresden. RR is supported by the Department of Science and Technology, India, and AP by the CSIR. The research of IIS is supported by grant DMR 097535 from the National Science Foundation.
# Improved surrogate data for nonlinearity tests ## Abstract Current tests for nonlinearity compare a time series to the null hypothesis of a Gaussian linear stochastic process. For this restricted null assumption, random surrogates can be constructed which are constrained by the linear properties of the data. We propose a more general null hypothesis allowing for nonlinear rescalings of a Gaussian linear process. We show that such rescalings cannot be accounted for by a simple amplitude adjustment of the surrogates, which leads to spurious detection of nonlinearity. An iterative algorithm is proposed to make appropriate surrogates which have the same autocorrelations as the data and the same probability distribution. PACS: 05.45.+b The paradigm of deterministic chaos has become a very attractive concept for the study of the irregular time evolution of experimental or natural phenomena. Nonlinear methods have indeed been successfully applied to laboratory data from many different systems. However, soon after the first signatures of low dimensional chaos had been reported for field data, it turned out that nonlinear algorithms can mistake linear correlations, in particular those of the power law type, for determinism. This has led, on the one hand, to more critical applications of algorithms like the correlation dimension. On the other hand, significance tests have been proposed which allow for the detection of nonlinearity even when, for example, a clear scaling region is lacking in the correlation integral. The idea is to test results against the null hypothesis of a specific class of linear random processes. One of the most popular of such tests is the method of “surrogate data”, which can be used with any nonlinear statistic that characterizes a time series by a single number. The value of the nonlinear discriminating statistic is computed on the measured data and compared to its empirical distribution on a collection of Monte Carlo realizations of the null hypothesis. Usually, the null assumption we want to make is not a very specific one, like a certain particular autoregressive (AR) process. We would rather like to be able to test general assumptions, for example that the data is described by some Gaussian linear random process. Thus we will not try to find a specific faithful model of the data; we will rather design the Monte Carlo realizations to have the same linear properties as the data. This has been called a “constrained realization” approach. In particular, the null hypothesis of autocorrelated Gaussian linear noise can be tested with surrogates which are by construction Gaussian random numbers but have the same autocorrelations as the signal. Due to the Wiener–Khinchin theorem, this is the case if their power spectra coincide. One can multiply the discrete Fourier transform of the data by random phases and then perform the inverse transform (phase randomized surrogates). Equivalently, one can create Gaussian independent random numbers, take their Fourier transform, replace those amplitudes with the amplitudes of the Fourier transform of the original data, and then invert the Fourier transform. This is similar to a filter in the frequency domain. Here the “filter” is the quotient of the desired and the actual Fourier amplitudes. In practice, the above null hypothesis is not as interesting as one might like: Very few of the time series considered for a nonlinear treatment pass even a simple test for Gaussianity.
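For concreteness, a minimal sketch of phase-randomized surrogates for this restricted null hypothesis is given below (our illustration; the AR(1) example data are made up). The surrogate keeps the sample Fourier amplitudes, and hence the autocorrelations, exactly.

```python
# Illustrative sketch: surrogates for the Gaussian linear null hypothesis.
import numpy as np

def phase_randomized_surrogate(s, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    n = len(s)
    S = np.fft.rfft(s)
    phases = rng.uniform(0, 2 * np.pi, len(S))
    phases[0] = 0.0                  # keep the mean (DC bin) real
    if n % 2 == 0:
        phases[-1] = 0.0             # Nyquist bin must stay real
    return np.fft.irfft(np.abs(S) * np.exp(1j * phases), n)

rng = np.random.default_rng(0)
x = np.zeros(4096)                   # AR(1) example: x_n = 0.7 x_{n-1} + eta_n
for i in range(1, len(x)):
    x[i] = 0.7 * x[i - 1] + rng.standard_normal()
surr = phase_randomized_surrogate(x, rng)
print(np.allclose(np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(surr))))  # True
```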
Therefore we want to consider a more general null hypothesis, including the possibility that the data were measured by an instantaneous, invertible measurement function $`h`$ which does not depend on time $`n`$. A time series $`\{s_n\},n=1,\mathrm{},N`$ is consistent with this null hypothesis if there exists an underlying Gaussian linear stochastic signal $`\{x_n\}`$ such that $`s_n=h(x_n)`$ for all $`n`$. If the null hypothesis is true, typical realizations of a process which obeys the null are expected to share the same power spectrum and amplitude distribution. But even within the class defined by the null hypothesis, different processes will result in different power spectra and distributions. It is now an essential requirement that the discriminating statistic must not mistake these variations for deviations from the null hypothesis. The tedious way to achieve this is by constructing a “pivotal” statistic which is insensitive to these differences. The alternative we will pursue here is the “constrained realizations” approach: the variations in spectrum and distribution within the class defined by the null hypothesis are suppressed by constraining the surrogates to have the same power spectrum as well as the same distribution of values as the data. For testing this null hypothesis, the amplitude adjusted Fourier transform (AAFT) algorithm has been proposed. First, the data $`\{s_n\}`$ is rendered Gaussian by rank–ordering according to a set of Gaussian random numbers. The resulting series $`s_n^{\prime }=g(s_n)`$ is Gaussian but follows the measured time evolution $`\{s_n\}`$. Now make phase randomized surrogates for $`\{s_n^{\prime }\}`$, call them $`\{\stackrel{~}{s}_n^{\prime }\}`$. Finally, invert the rescaling $`g`$ by rank–ordering $`\{\stackrel{~}{s}_n^{\prime }\}`$ according to the distribution of the original data, $`\stackrel{~}{s}_n=\overline{g}(\stackrel{~}{s}_n^{\prime })`$. The AAFT algorithm should be correct asymptotically in the limit $`N\to \infty `$. For finite $`N`$, however, $`\{\stackrel{~}{s}_n\}`$ and $`\{s_n\}`$ have the same distributions of amplitudes by construction, but they do not usually have the same sample power spectra. One of the reasons is that the phase randomization procedure performed on $`\{s_n^{\prime }\}`$ preserves the Gaussian distribution only on average. The fluctuations of $`\{\stackrel{~}{s}_n^{\prime }\}`$ and $`\{s_n^{\prime }\}`$ will differ in detail. The nonlinearity contained in the amplitude adjustment procedure ($`\overline{g}`$ is not equal to $`g^{-1}`$) will turn these into a bias in the empirical power spectrum. Such systematic errors can lead to false rejections of the null hypothesis if a statistic is used which is sensitive to autocorrelations. The second reason is that $`g`$ isn’t really the inverse of the nonlinear measurement function $`h`$, and instead of recovering $`\{x_n\}`$ we will find some other Gaussian series. Even if $`\{s_n\}`$ were Gaussian, $`g`$ would not be the identity. Again, the two rescalings will lead to an altered spectrum. In Fig. 1 we see power spectral estimates of a clinical data set and of 19 AAFT surrogates. The data is taken from data set B of the Santa Fe Institute time series contest. It consists of 4096 samples of the breath rate of a patient with sleep apnea. The sampling interval is 0.5 seconds. The discrepancy of the spectra is significant. A bias towards a white spectrum is noted: power is taken away from the main peak to enhance the low and high frequencies.
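A compact sketch of the AAFT procedure just described (our illustration; the helper names are ours):

```python
# Illustrative sketch of AAFT.  rank_remap plays the role of the rescalings
# g and g-bar: it rearranges one set of values to follow the ranks of another.
import numpy as np

def rank_remap(template, values):
    out = np.empty(len(values))
    out[np.argsort(template)] = np.sort(values)
    return out

def aaft_surrogate(s, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    n = len(s)
    s_prime = rank_remap(s, rng.standard_normal(n))   # s' = g(s), Gaussianized
    S = np.fft.rfft(s_prime)                          # phase randomization
    phases = rng.uniform(0, 2 * np.pi, len(S))
    phases[0] = 0.0
    if n % 2 == 0:
        phases[-1] = 0.0
    s_tilde_prime = np.fft.irfft(np.abs(S) * np.exp(1j * phases), n)
    return rank_remap(s_tilde_prime, s)               # g-bar: back to data values
```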
The purpose of this Letter is to propose an alternative method of producing surrogate data sets which have the same power spectrum and distribution as a given data set. We do not expect that these two requirements can be exactly fulfilled at the same time for finite $`N`$, except for the trivial solution, a cyclic shift of the data set itself. We will rather construct sequences which assume the same values (without replacement) as the data and which have spectra which are practically indistinguishable from that of the data. We can require a specific maximal discrepancy in the power spectrum and report a failure if this accuracy could not be reached. The algorithm consists of a simple iteration scheme. Store a sorted list of the values $`\{s_n\}`$ and the squared amplitudes of the Fourier transform of $`\{s_n\}`$, $`S_k^2=|\sum _{n=0}^{N-1}s_ne^{i2\pi kn/N}|^2`$. Begin with a random shuffle (without replacement) $`\{s_n^{(0)}\}`$ of the data. Now each iteration consists of two consecutive steps. First, $`\{s_n^{(i)}\}`$ is brought to the desired sample power spectrum. This is achieved by taking the Fourier transform of $`\{s_n^{(i)}\}`$, replacing the squared amplitudes $`\{S_k^{2,(i)}\}`$ by $`\{S_k^2\}`$ and then transforming back. The phases of the complex Fourier components are kept. Thus the first step enforces the correct spectrum but usually the distribution will be modified. Therefore, as the second step, rank–order the resulting series in order to assume exactly the values taken by $`\{s_n\}`$. Unfortunately, the spectrum of the resulting $`\{s_n^{(i+1)}\}`$ will be modified again. Therefore the two steps have to be repeated several times. At each iteration stage we can check the remaining discrepancy of the spectrum and iterate until a given accuracy is reached. For finite $`N`$ we don’t expect convergence in the strict sense. Eventually, the transformation towards the correct spectrum will result in a change which is too small to cause a reordering of the values. Thus after rescaling, the sequence is not changed. In Fig. 2 we show the convergence of the iteration scheme as a function of the iteration count $`i`$ and the length of the time series $`N`$. The data here was a first order AR process $`x_n=0.7x_{n-1}+\eta _n`$, measured through $`s_n=x_n^3`$. The increments $`\eta _n`$ are independent Gaussian random numbers. For each $`N=1024,2048,\mathrm{},32768`$ we create a time series and ten surrogates. In order to quantify the convergence, the spectrum was estimated by $`S_k^2=|\sum _{n=0}^{N-1}s_ne^{i2\pi kn/N}|^2`$ and smoothed over 21 frequency bins, $`\widehat{S}_k^2=\sum _{j=k-10}^{k+10}S_j^2/21`$. Note that for the generation of surrogates no smoothing is performed. As the (relative) discrepancy of the spectrum at the $`i`$–th iteration we use $`\sum _{k=0}^{N-1}(\widehat{S}_k^{(i)}-\widehat{S}_k)^2/\sum _{k=0}^{N-1}\widehat{S}_k^2`$. Not surprisingly, progress is fastest in the first iteration, where the random scramble is initially brought from its white spectrum to the desired one (the initial discrepancy of the scramble was $`0.2\pm 0.01`$ for all cases and is not shown in Fig. 2). For $`i\ge 1`$, the discrepancy of the spectrum decreases approximately like $`1/i`$ until an $`N`$ dependent saturation is reached. The saturation value seems to scale like an inverse power of $`N`$ which depends on the process. For the data underlying Fig. 2 we find a $`1/\sqrt{N}`$ dependence, see Fig. 3. For comparison, the discrepancy for AAFT surrogates did not fall below 0.015 for all $`N`$.
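The iteration scheme translates almost line by line into code; the sketch below is our rendering of it, together with the unsmoothed version of the spectral discrepancy:

```python
# Illustrative sketch of the iterative surrogate scheme described above.
import numpy as np

def iterative_surrogate(s, max_iter=200, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    n = len(s)
    sorted_vals = np.sort(s)                  # stored distribution of values
    target_amps = np.abs(np.fft.rfft(s))      # stored Fourier amplitudes
    surr = rng.permutation(s)                 # random shuffle, s^(0)
    for _ in range(max_iter):
        # step 1: enforce the desired power spectrum, keeping the phases
        phases = np.angle(np.fft.rfft(surr))
        filtered = np.fft.irfft(target_amps * np.exp(1j * phases), n)
        # step 2: rank-order back onto the exact data values
        new = np.empty(n)
        new[np.argsort(filtered)] = sorted_vals
        if np.array_equal(new, surr):         # values no longer reorder: done
            break
        surr = new
    return surr

def spectral_discrepancy(s, surr):
    S = np.abs(np.fft.rfft(s)) ** 2
    Ss = np.abs(np.fft.rfft(surr)) ** 2
    return np.sum((Ss - S) ** 2) / np.sum(S ** 2)
```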
We have observed similar scaling behavior for a variety of other linear correlated processes. For data from a discretized Mackey–Glass equation we found exponential convergence $`\mathrm{exp}(-0.4i)`$ before a saturation value was reached which decreases approximately like $`1/N^{3/2}`$. Although we found rapid convergence in all examples we have studied so far, the rate seems to depend both on the distribution of the data and the nature of the correlations. The details of the behavior are not yet understood. In order to verify that false rejections are indeed avoided by this scheme, we compared the number of false positives in a test for nonlinearity for the AAFT algorithm and the iterative scheme, the latter as a function of the number of iterations. We performed tests on data sets of 2048 points generated by the instantaneously, monotonously distorted AR process $`s_n=x_n\sqrt{|x_n|}`$, $`x_n=0.95x_{n-1}+\eta _n`$. The discriminating statistic was a nonlinear prediction error obtained with locally constant fits in two dimensional delay space. For each test, 19 surrogates were created and the null hypothesis was rejected at the 95% level of significance if the prediction error for the data was smaller than those of the 19 surrogates. The number of false rejections was estimated by performing 300 independent tests. Instead of the expected 5% false positives we found $`66\pm 5`$% false rejections with the AAFT algorithm. Fig. 4 shows the percentage of false rejections as a function of the number of iterations of the scheme described in this Letter. The correct rejection rate for the 95% level of significance is reached after about 7 iterations. This example is particularly dramatic because of the strong correlations, although the nonlinear rescaling is not very severe. Let us make some further remarks on the proposed algorithm. We decided to use an unwindowed power spectral estimate which puts quite a strong constraint on the surrogates (the spectrum fixes $`N/2`$ parameters). Thus it cannot be excluded that the iterative scheme is able to converge only by also adjusting the phases of the Fourier transform in a nontrivial way. This might introduce spurious nonlinearity in the surrogates, in which case we can find the confusing result that there is less nonlinearity in the data than in the surrogates. If the null hypothesis is wrong, we expect more nonlinearity in the data (better nonlinear predictability, smaller estimated dimension etc.). Therefore we can always use one–sided tests and thus avoid additional false rejections. However, spurious structure in the surrogates can diminish the power of the statistical test. Since an unwindowed power spectral estimate shows strong fluctuations within each frequency bin, it seems unnecessary to require the surrogates to have exactly the same spectrum as the data, including the fluctuations. The variance of the spectral estimate can be reduced for example by windowing, but the frequency content of the windowing function introduces an additional bias. Let us finally remark that although the null hypothesis of a Gaussian linear process measured by a monotonous function is the most general we have a proper statistical test for, its rejection does not imply nonlinear dynamics. For instance, noninstantaneous measurement functions (e.g., $`s_n=x_n^2x_{n-1}`$) are not included and (correctly) lead to a rejection of the null hypothesis, although the underlying dynamics may be linear.
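For completeness, here is a sketch (ours; the neighborhood size is an arbitrary choice) of the one-sided test used for Fig. 4: a locally constant prediction error in two-dimensional delay space, compared against 19 surrogates, rejects at the 95% level exactly when the data has the smallest error.

```python
# Illustrative sketch of the rank-based surrogate test.
import numpy as np

def prediction_error(s, n_neighbors=10):
    """Locally constant (zeroth-order) prediction error in 2-d delay space."""
    v = np.column_stack([s[:-2], s[1:-1]])    # delay vectors
    future = s[2:]
    err = 0.0
    for i in range(len(v)):
        d = np.sum((v - v[i]) ** 2, axis=1)
        d[i] = np.inf                         # exclude the point itself
        idx = np.argpartition(d, n_neighbors)[:n_neighbors]
        err += (future[i] - future[idx].mean()) ** 2
    return np.sqrt(err / len(v))

def surrogate_test(s, make_surrogate, n_surr=19):
    e_data = prediction_error(s)
    e_surr = [prediction_error(make_surrogate(s)) for _ in range(n_surr)]
    return e_data < min(e_surr)   # True: reject the null at the 95% level
```

With `make_surrogate` set to the iterative scheme above, repeating this test on independent realizations of the null should reject in about 5% of cases.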
Another example is first differences of the distorted output from a Gaussian linear process. In conclusion, we established an algorithm to provide surrogate data sets containing random numbers with a given sample power spectrum and a given distribution of values. The achievable accuracy depends on the nature of the data and in particular the length of the time series. We thank James Theiler, Daniel Kaplan, Tim Sauer, Peter Grassberger, and Holger Kantz for stimulating discussions. This work was supported by the SFB 237 of the Deutsche Forschungsgemeinschaft.
# The Statistics of Chaotic Tunnelling ## Abstract We discuss the statistics of tunnelling rates in the presence of chaotic classical dynamics. This applies to resonance widths in chaotic metastable wells and to tunnelling splittings in chaotic symmetric double wells. The theory is based on using the properties of a semiclassical tunnelling operator together with random matrix theory arguments about wave function overlaps. The resulting distribution depends on the stability of a specific tunnelling orbit and is therefore not universal. However, it does reduce to the universal Porter-Thomas form as the orbit becomes very unstable. For some choices of system parameters there are systematic deviations which we explain in terms of scarring of certain real periodic orbits. The theory is tested in a model symmetric double well problem and possible experimental realisations are discussed. Tunnelling is crucial in describing many physical phenomena, from chemical and nuclear reactions to conductances in mesoscopic devices and ionisation rates in atomic systems. When such systems are complex, it is natural to model tunnelling effects using random matrix theory. We show here that when the underlying system is one of clean chaotic dynamics, successful statistical modelling demands explicit incorporation of nonuniversal, but simple, dynamical information. We derive a distribution for the tunnelling rate which depends on a single parameter, calculated from the stability properties of the dominant tunnelling orbit. The signatures of chaos in tunnelling rates have been receiving a growing amount of attention, two important regimes having been considered. The first is that the quantum state is initially localised in a region where the dynamics is largely nonchaotic and one wants to understand the tunnelling rate through chaotic regions of phase space. The second, and the one we shall focus on, is that virtually all of the energetically accessible phase space is chaotic, so that the quantum state is initially localised in a chaotic region of phase space. These two situations are different in many important ways. In particular, the statistical distribution of the tunnelling rates in the first regime has power law decays, whereas we show that the distribution in the second regime has exponential decay. The result is a generalisation of the Porter-Thomas distribution used to model neutron and proton resonances and conductance peak heights in quantum dots. It has been shown that the average tunnelling rate from an energetically-connected region of phase space is determined by a complex orbit we will call the instanton. Fluctuations about this average appear to be pseudo-random in the chaotic case and are given by properties of the wavefunction in an area localised around a real extension of the instanton. To characterise these fluctuations we define a rescaled tunnelling rate as follows. In the case of metastable wells the absolute tunnelling rate of a given state labelled by $`n`$ is measured by the resonance width, or inverse lifetime, $`\mathrm{\Gamma }_n`$. The corresponding normalised tunnelling rate is defined to be $$y_n=\mathrm{\Gamma }_n/\overline{\mathrm{\Gamma }},$$ (1) where $`\overline{\mathrm{\Gamma }}(E,\hbar )=\langle \mathrm{\Gamma }_n\rangle `$ is a local average computed for a given set of physical parameters.
$`\overline{\mathrm{\Gamma }}(E,\hbar )`$ is a smooth, monotonic function of its arguments and is given by an explicit formula in terms of the (purely imaginary) action and stability of the instanton. A similar definition holds for splittings in double wells, and in either case $`\langle y_n\rangle =1`$ by construction. Fluctuations in $`y_n`$ are calculated using a tunnelling operator, $`𝒯`$, which is constructed from the semiclassical Green’s function and can be interpreted as transporting the wavefunction across the barrier. Specifically, $$y_n\propto \langle n|𝒯|n\rangle ,$$ (2) where $`|n\rangle `$ is the wavefunction (which may be calculated while ignoring tunnelling effects) represented in a Hilbert space which quantises a surface of section. A closed-form expression can be found by expanding classical actions to quadratic order around the instanton. Its detailed construction need not concern us here; for present purposes it is enough to know its spectrum. For a two-dimensional system this is $`\{\lambda ^k|\lambda |^{1/2},k=0,1,\mathrm{}\}`$, where $`\lambda `$ is the inverse of the stability of the instanton orbit, is always less than unity in magnitude, and can easily be found using real dynamics in the inverted potential. (The discussion is readily generalised to higher dimension but we refrain from doing so for clarity.) To derive statistical distributions for $`y_n`$, we will make statistical assumptions about the state $`|n\rangle `$, but not about the tunnelling operator. The resulting distribution depends parametrically on $`\lambda `$ and is therefore system-specific and not universal. We can always express $`𝒯`$ as a diagonal operator in its own eigenbasis: $`𝒯=\sum _k\lambda ^k|\lambda |^{1/2}|k\rangle \langle k|`$. We then have $$y_n=a\sum _{k=0}^{\infty }\lambda ^k|\langle k|n\rangle |^2=a\sum _{k=0}^{\infty }\lambda ^k|x_k|^2,$$ (3) where we denote $`x_k=\langle k|n\rangle `$ and the prefactor $`a=1-\lambda `$ ensures that $`\langle y\rangle =1`$. The states $`|n\rangle `$ are normalised so that on average $`|x_k|^2`$ is unity. We now make the statistical ansatz that the overlaps can be treated as Gaussian random variables. This is the basis of almost all statistical treatments of wave functions, going back to the seminal work of Porter and Thomas. In the event that there is a time reversal symmetry, $`x_k`$ can be expressed as a single real number, leading to GOE statistics. If there is no such symmetry then $`x_k`$ is complex and will be described by two statistically independent quantities, leading to GUE statistics. We simplify the derivation by assuming GOE statistics; the generalisation to GUE is simple and we give the final result for both. We start by assuming that the $`x_k`$ are statistically independent and given by the joint distribution $`P(𝐱)\mathrm{d}𝐱=\prod _k\left[\mathrm{exp}(-x_k^2/2)/\sqrt{2\pi }\right]\mathrm{d}x_k`$, where $`𝐱=\{x_k\}`$. We then note that $$P(y;\lambda )=\int \mathrm{d}𝐱\,P(𝐱)\,\delta \left(y-a\sum _{k=0}^{\infty }\lambda ^kx_k^2\right).$$ (4) We use the identity $`\delta (z)=\int \mathrm{d}t\,\mathrm{exp}(itz)/2\pi `$ in the above expression and observe that each $`x_k`$ involves a simple Gaussian integral. The final result (and generalising to the GUE case) is $$P(y;\lambda )=\frac{1}{2\pi }\int \mathrm{d}t\,e^{ity}\prod _{k=0}^{\infty }\left(1+\frac{2i}{\beta }a\lambda ^kt\right)^{-\beta /2},$$ (5) where $`\beta =1`$ and $`2`$ for GOE and GUE respectively. The product above converges rapidly provided $`\lambda `$ is not too close to unity, so the formula can easily be used to calculate $`P(y;\lambda )`$ in practice.
This result has a simple interpretation if we think of each eigenstate of $`𝒯`$ as providing a distinct and statistically independent channel for tunnelling; it then corresponds to earlier channel analyses, but with an infinite number of distinctly weighted open channels. We could even imagine adding a weak magnetic field so as to interpolate between the GOE and GUE limits, although we refrain from that here. As mentioned, we have $`\langle y\rangle =1`$ by construction; the second moment is $$\langle y^2\rangle =1+\frac{2}{\beta }\frac{1-\lambda }{1+\lambda }.$$ (6) The channel interpretation helps in a qualitative understanding of this distribution as we vary $`\lambda `$. For small $`\lambda `$, only the first ($`k=0`$) channel plays any significant role and the distribution is of the Porter-Thomas form: $`\mathrm{exp}(-y/2)/\sqrt{2\pi y}`$ and $`\mathrm{exp}(-y)`$ for $`\beta =1`$ and $`2`$ respectively. This can be understood analytically from (5) by doing a branch-point/residue analysis around the nearest singularity at $`t=i\beta /2a`$. This distribution is commonly used to model point tunnelling contacts. It is often a very accurate approximation but its validity is not universally guaranteed, as we shall discuss. For $`\lambda `$ close to unity, many channels contribute significantly, the fluctuations around the mean are accordingly reduced and the distribution approaches a Gaussian with variance $`\sigma ^2\simeq a/\beta `$ (which becomes a delta function for small $`a`$). For $`\lambda >0`$ and $`y<0`$ we close the contour of (5) in the lower half plane; since the integrand has no singularities there, the result is simply zero. This is consistent with the fact that $`𝒯`$ is a positive definite operator, so that Eq. (3) does not admit the possibility of negative $`y`$. By the same argument, any derivative of $`p(y)`$ is also zero for $`y\le 0`$, implying a nonanalyticity at $`y=0`$ with $`p(y)`$ going to zero faster than any power of $`y`$ for $`y`$ small and positive. In the opposite limit $`y\gg \lambda `$ we can expand around the first singularity to obtain $$P(y;\lambda )_{\mathrm{GOE}}\simeq \frac{\mathrm{exp}(-y/2a)}{\sqrt{2\pi ay}},$$ (7) $$P(y;\lambda )_{\mathrm{GUE}}\simeq \frac{\mathrm{exp}(-y/a)}{a}.$$ (8) This falls off exponentially with $`y`$, and not with a power law as observed in the chaos-assisted regime. Equations (3), (5) and (6) remain valid when $`\lambda `$ is negative. This situation arises when we compute splittings in symmetric double wells for which the symmetry is inversion through a point rather than reflection through an axis. In this situation $`𝒯`$ is no longer positive definite and we admit the possibility of negative splittings (for which the odd member of a doublet has a lower energy than the even member). The distribution then allows all values of $`y`$, positive and negative. It decays exponentially for $`|y|\gg \lambda `$ as in Eq. (7), but with different exponents for $`y>0`$ and $`y<0`$, because on doing the integral (5) we must switch from closing the integration contour in the lower half plane to the upper half plane as $`y`$ changes sign. For a given state, we can typically induce a zero splitting by tuning one system parameter. A zero splitting means that we can construct states which remain localised in either well for all time, as in one dimensional time-dependent systems considered previously. At this point we contrast our results with standard random matrix theory modelling. In section VII.H of their extensive review, Brody et al.
show under rather general assumptions that one expects a Gaussian distribution for the expectation values of an arbitrary operator, by showing that all of the moments of the distribution approach those of a Gaussian. This can be understood as a sort of central limit theorem. One of their assumptions is that the operator is non-singular, i.e. does not have many zero eigenvalues. Because of the exponential decay of the eigenvalues of the tunnelling operator $`𝒯`$, it is effectively singular. Therefore their conclusions do not apply to our situation and we have non-Gaussian distributions. It is interesting to note that in the limit $`|\lambda |\to 1`$, the operator $`𝒯`$ has an ever increasing number of significant eigenvalues and the distributions do in fact approach Gaussians, in conformity with their general considerations. Since it is a simpler numerical task to calculate many splittings in a double well than to calculate many resonance widths in a metastable well, we use the former to test our predictions and note that any conclusions apply identically to the latter. Consider the potential $$V(x,y)=(x^2-1)^4+x^2y^2+\mu y^2+\nu y+\sigma x^2y.$$ (9) There is a reflection symmetry in $`x`$, and if the energy is less than $`1-\nu ^2/4\mu `$ the motion is classically confined either to $`x<0`$ or to $`x>0`$. It is convenient to work at fixed energy in order to keep $`\lambda `$ constant, and we do this by quantising $`q=1/\hbar `$, that is, by finding those values of $`\hbar `$ which are consistent with a specified choice of parameters and energy. This is effectively what happens, for example, in scaling problems such as a hydrogen atom in a magnetic and electric field. In Fig. 1 we show histograms constructed from the $`q`$-spectra for two choices of parameters such that the classical dynamics is almost fully chaotic. We also show the distribution (5) with $`\beta =1`$ and using the corresponding values of $`\lambda `$. Clearly, the numerically computed histograms are well captured by the theoretical distribution. We show, for comparison, the Porter-Thomas distribution, which clearly fails to correctly model the numerical data. We remark that this sort of agreement was observed for most parameter values as long as the dynamics was fully chaotic. In Fig. 2 we show an exception to the general agreement. In this case the numerical histogram is intermediate between the theoretical distribution (5) and the Porter-Thomas form. We attribute this to the effects of scarring, as follows. The instanton has real turning points where the momentum vanishes and the position is real. At these points we can elect either to integrate in imaginary time, in which case the instanton retraces itself, or to integrate in real time, in which case we get a real trajectory. We refer to this real trajectory as the real extension of the instanton. There is no reason why this real extension should itself be periodic. Typically it is not. However, the parameters of Fig. 2 have been tuned so that the real extension is in fact periodic. We find in this case that the overlaps $`x_k=\langle k|n\rangle `$ are no longer distributed according to the Gaussian $`P(x_k)=\mathrm{exp}(-x_k^2/2)/\sqrt{2\pi }`$ as assumed in our derivation — there are relatively more large overlaps and more small overlaps. This effect can be explained using a recent theory of scarring which describes how the overlaps between a wavepacket placed on a periodic orbit and the chaotic eigenstates deviate from random matrix theory.
In our problem the eigenvectors $`|k\rangle `$ behave like wavepackets of this type when the real extension is periodic. The effect of this deviation from random matrix theory is to give more large splittings and more small splittings than (5) predicts and, therefore, to push the distribution in the direction of the Porter-Thomas form. We remark that for $`\nu =\sigma =0`$ the real extension is always a periodic orbit, and we see anomalous statistics in this situation as well. The final case we discuss is that in which the term $`\nu y+\sigma x^2y`$ in (9) is replaced by $`\tau xy`$. Now the potential is symmetric under $`(x,y)\to (-x,-y)`$ rather than under $`(x,y)\to (-x,y)`$ (this symmetry would persist if we were to add a uniform magnetic field). In this case $`\lambda <0`$ and negative splittings can occur. We present the results for a typical case in Fig. 3. Again, the theoretical distribution agrees with the histogram. Our results are relevant to situations in which particles tunnel out of or between chaotic regions separated by an energetic barrier. Applications include hydrogen atoms in parallel electric and magnetic fields, where the competition between the imposed fields and the Coulomb force causes chaotic motion while the presence of the electric field causes tunnelling. Dissociative decays of excited nuclei and molecules may also fall into this regime. Another application is to conductances of quantum dots. In the Coulomb blockade regime electrons must tunnel into and out of dots which are thought to be chaotic. Such experiments have been done, leading to results which are consistent with the Porter-Thomas distribution for the tunnelling widths, just as for the neutron and proton resonance widths. We contend that the reason is that the instanton path in all cases is very unstable, leading to a small value of $`\lambda `$. For energies near the saddle, $`\lambda \sim \mathrm{exp}(-2\pi \omega _y/\omega _x)`$, where $`\omega _x`$ and $`\omega _y`$ are the curvatures of the potential saddle along and transverse to the instanton, respectively. This is often small, but by making the saddle flat in the transverse direction or sharp in the instanton direction, it is possible for $`\lambda `$ to be of order unity. It is an interesting question whether this can be arranged for the quantum dots. One feature which helps in this regard is that we predict a distribution which vanishes for small $`y`$, whereas the Porter-Thomas distribution diverges as $`1/\sqrt{y}`$. This difference could be discernible even for rather small values of $`\lambda `$. Another possibility for nonuniversal statistics would be a situation analogous to Fig. 2, where the tunnelling route is directly connected to a real periodic orbit. This geometry could be engineered into quantum dots and is present in the hydrogen atom problem. In this case we predict deviation from the predictions of random matrix theory. This would be similar in spirit to the recent work of Narimanov et al., who look for dynamical effects in the correlations of conductance peaks of quantum dots.
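The channel form of Eq. (3) makes the distribution easy to check by direct sampling. The sketch below (ours; sample sizes and the truncation of the sum are illustrative) draws $`y=a\sum _k\lambda ^kx_k^2`$ for Gaussian overlaps and compares the first two moments with Eq. (6); the sampled histogram can likewise be compared with a numerical quadrature of Eq. (5).

```python
# Illustrative Monte Carlo check of Eqs. (3) and (6).
import numpy as np

def sample_y(lam, beta=1, n_samples=50_000, kmax=120, seed=0):
    rng = np.random.default_rng(seed)
    a = 1.0 - lam
    w = lam ** np.arange(kmax)                    # channel weights lambda^k
    if beta == 1:                                 # GOE: real Gaussian overlaps
        x2 = rng.standard_normal((n_samples, kmax)) ** 2
    else:                                         # GUE: complex overlaps, <|x|^2> = 1
        x2 = 0.5 * (rng.standard_normal((n_samples, kmax)) ** 2
                    + rng.standard_normal((n_samples, kmax)) ** 2)
    return a * x2 @ w

for lam, beta in [(0.2, 1), (0.7, 1), (-0.5, 1), (0.5, 2)]:
    y = sample_y(lam, beta)                       # lam < 0: splittings of either sign
    predicted = 1 + (2 / beta) * (1 - lam) / (1 + lam)
    print(lam, beta, y.mean(), (y ** 2).mean(), predicted)
```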
# Faint Field Galaxies Around Bright Stars - A New Strategy for Imaging at the Diffraction Limit ## 1 Introduction It has only been very recently, with the help of the Keck Telescopes and the Hubble Space Telescope (HST), that galaxies have been identified which are thought to be producing their first generation of stars. With the high resolution of the HST, many of these young galaxies appear to be more numerous and smaller than nearby galaxies (Phillips et al. 1997), and often have a distorted morphology (Driver et al. 1998). Taken together, these attributes suggest that galaxies have gone through a period of significant evolution since their formation. Optical imaging, however, is biased by the fact that at high redshifts the observed light was emitted in the UV, where star forming regions dominate the emission. This can result in a more distorted appearance and give a biased estimate of the morphology. It is probable, then, that at least some of the close groupings of optical knots seen in deep HST images may actually be multiple star forming regions within a single galaxy. Infrared cameras can directly image the optical emission from high redshift galaxies and provide a more accurate determination of the galaxy’s morphology. With the NICMOS camera on HST, images of the Hubble Deep Field (Thompson et al. 1999) have shown that for at least some objects, their infrared morphology is in fact smoother and less complex than their optical morphology. High redshift objects are also very small, usually less than one arcsecond in extent, so direct ground based images often yield little morphological information. Even with NICMOS, resolutions are limited to about 0$`\stackrel{}{\mathrm{.}}`$2, barely resolving many galaxies. What is needed is diffraction limited observations from larger (8-10 m class) ground based telescopes in the infrared. Adaptive Optics (AO) Systems coupled to new and anticipated infrared instruments will be able to probe the infrared morphologies of distant galaxies in much more detail than previous studies. An intrinsic problem with the earliest form of most high order AO systems, however, is their reliance on bright natural guide stars, often brighter than about 12th magnitude at R. This limitation makes most extragalactic targets unobservable because of their intrinsic faintness and the relative rarity of sufficiently bright nearby guide stars. Recent observations with relatively low Strehl ratios of a few quasars and radio galaxies have been possible with curvature type AO systems due to the ability of these systems to operate at slower speeds and larger effective aperture (Stockton et al. 1999, Hutchings et al. 1999, and Aretxaga et al. 1998). Laser guide stars will partially remedy this problem in a couple of years, but laser systems usually produce lower Strehl ratios than natural guide star systems, so sensitivities and resolution will suffer. We have developed an interesting new strategy for using natural guide star systems to observe extremely faint galaxies. These observations rely on the high density of galaxies on the sky. Deep infrared surveys (e.g. Djorgovski et al. 1995) have shown that there are about $`2\times 10^5`$ galaxies per square degree brighter than K=24 mag. To a limiting magnitude of K=20 this number is down by about a factor of ten to $`2\times 10^4`$ per square degree, but this still implies that within 20 arcseconds of ANY guide star there are on average 2 galaxies brighter than K=20 mag and 20 galaxies brighter than 24th magnitude.
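The expected counts quoted above follow from simple solid-angle arithmetic, as this short sketch (our illustration) makes explicit:

```python
# Expected number of galaxies within 20 arcseconds of an arbitrary star,
# from the quoted cumulative surface densities.
import math

counts_per_deg2 = {"K < 24 mag": 2e5, "K < 20 mag": 2e4}
radius_deg = 20.0 / 3600.0                    # 20 arcsec in degrees
area_deg2 = math.pi * radius_deg ** 2         # ~9.7e-5 square degrees

for label, density in counts_per_deg2.items():
    print(label, "->", round(density * area_deg2, 1), "galaxies expected")
# K < 24 mag -> ~19.4 and K < 20 mag -> ~1.9, i.e. the "20" and "2" above.
```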
Recent redshift surveys (e.g. Cohen et al. 1996) have shown that the average redshift of field galaxies brighter than K=20 mag is greater than 0.5, and this should rise at fainter magnitudes. So our strategy is to perform deep infrared imaging around bright ($`<`$12 mag) A-type stars to identify faint galaxies, then use the much smaller field of view of the AO infrared cameras to image selected galaxies with high Strehl ratio at or close to the diffraction limit. We present here our first infrared images near five bright stars that are at relatively high galactic latitude, have a relatively blue color (A spectral type), and which pass close to the zenith of the Keck Observatory and other Northern Hemisphere Observatories. We calculate infrared colors for two of the fields, and crude morphologies when possible, to allow for better early selection of potentially interesting objects. ## 2 Observations We’ve used the Keck Near Infrared Camera (NIRC) to image around a sample of five early-type stars with visual magnitudes between 8.5 and 10.3. A-type stars are preferred because they are relatively blue and thus reduce the amount of scattered and diffracted light in the infrared images as compared to other stars of comparable optical magnitude. O and B type stars are of course bluer, but are relatively rare at high galactic latitudes. The stars were also selected to have low proper motions ($`<`$0$`\stackrel{}{\mathrm{.}}`$01 per year), relatively high galactic latitude (b > 20° or b < −20°), and a declination within 5 degrees of Keck’s latitude (but not those passing through the zone of avoidance near the zenith); at certain RA, the galactic latitude constraint forced us further north. Table 1 gives the list of stars observed along with their coordinates, R band magnitudes, spectral types and galactic latitudes. Also given are the infrared exposure times for each band used (J, H or K), the spatial resolution of the final summed infrared images and the date of the observations. All observations were performed with the Near Infrared Camera (NIRC, Matthews & Soifer 1994) on the Keck I Telescope on the nights of 06 September, 1998 and 08-10 October, 1998. Conditions on each night were clear and photometric. Typical seeing was between 0.4 and 0.7 arcseconds, but was as good as 0$`\stackrel{}{\mathrm{.}}`$2 and as bad as 1$`\stackrel{}{\mathrm{.}}`$0 during certain short periods. In each band, many individual frames were taken. For the K band, each frame consisted of 20 coadded exposures of 3 seconds each, except for the observation of PPM91714 on 08 October, 1998, which had 60 coadded exposures of 1 second each. For H, each frame consisted of 60 coadded exposures of 1 second each, and for J, they were 12 exposures of 10 seconds each. Image sequences consisted of a 3 by 3 pattern with a step size of 5” along each axis. This yielded 9 minutes of exposures for each sequence, except for the J band sequences, which were 18 minutes for each 3 by 3 grid. For deeper observations, additional 3 by 3 sequences were taken with a small (typically 3”) offset between sequences. Because of the magnitudes of the stars themselves, they always saturated and were positioned at the corner of the array to reduce the effect of electronic bleeding and diffraction spikes within the images. This has the drawback of reducing the amount of the isoplanatic patch that was covered.
The data were reduced with custom IDL routines that medianed images in groups of 9 without aligning the frames in order to make sky and flat fields. Bright objects were masked from the images before producing the skies and flats. Each sky and flat was only used on the central 3 images within the group of 9. This produced very good skies and flats that accurately match the varying sky levels throughout the observation without reducing the observing efficiency by taking separate sky frames. The sky subtracted and flat fielded frames are then aligned to the nearest integer pixel and combined using a clipped mean at each pixel. The final images have very uniform backgrounds and noise consistent with the square root of the number of frames used in each mosaic. ## 3 Results Figures 1 through 5 show the images of the stellar fields. They have been stretched very hard to show the faint galaxies in the central, cleanest, parts of the images; this makes the noise at the edges appear artificially extreme. In each case, the noise is consistent with the number of frames contributing at that pixel, except where diffraction spikes or bleeding leave residual images. In this paper, we have only identified many-sigma objects which are clearly real in each field. Table 2 lists the galaxies and gives the galaxy name (specified by the star it is near followed by its relative RA and Dec in arcseconds), average FWHM, infrared magnitudes and angular separation from its guide star. In some of the fields, there were objects that were difficult to identify as either stars or galaxies; in these cases we included all the objects in the list and marked the ones that are ambiguous with a superscript ’a’. The brightest confirmed galaxy is 16.9 mag in K with a FWHM of 1$`\stackrel{}{\mathrm{.}}`$53. The faintest objects identified in each field were typically $`\sim `$21st magnitude in K. Crude morphological types of some of the galaxies can be determined from these observations, including a few galaxies with clear spiral structures. These spiral galaxies are identified with a superscript ’b’ in the table. ### 3.1 PPM 91088 The field around PPM 91088 is the richest field in terms of resolved galaxies. It has at least 20 galaxies between 17th and 21.5 magnitude (K band) within the slightly less than one square arcminute that is covered by our images. These include four bright disk galaxies: PPM 91088+08+29 (K=19.2 mag and FWHM=0$`\stackrel{}{\mathrm{.}}`$94), PPM 91088+01+20 (K=18.5 mag and FWHM=0$`\stackrel{}{\mathrm{.}}`$72), PPM 91088-21+18 (K=17.6 mag and FWHM=0$`\stackrel{}{\mathrm{.}}`$83), and PPM 91088-26+09 (K=19.3 mag and FWHM=1$`\stackrel{}{\mathrm{.}}`$15). The number density is about 50% more than would be expected on average and may indicate weak clustering in this field. Many of the most interesting galaxies in this field are approximately 20” to 30” away from the guide star. This is not optimal, but it is alleviated in part by the presence of a relatively bright star (PPM 91088-23+12, K=16.2 mag) located at a separation of 26$`\stackrel{}{\mathrm{.}}`$0. This star is actually very close to two of the disk galaxies, and an AO camera with a field of view on the order of 10” should be able to simultaneously image the psf star and both galaxies. This would allow for very accurate psf determinations and deconvolutions. ### 3.2 PPM 91714 This field is relatively empty, except for two bright, potentially interacting galaxies and one bright star.
The galaxies (PPM 91714+22-19 and PPM 91714+18-16) are roughly 6” apart, and the larger of the pair has an asymmetric disk with a full extent of about 5”. The second galaxy, PPM 91714+18-16, is fainter at K=18.1 mag and also shows an asymmetrical extension that points away from the brighter galaxy. If this does represent an interacting system, one might expect to see enhanced star formation, potentially in the form of giant star forming regions that could be very compact. The presence of a bright psf star (PPM 91714+09-19, K=12.6 mag) with a separation comparable to that of the bright galaxies makes this an efficient field to study. The guide star is also quite bright at R=8.5 mag. Two other fainter objects are located south of the guide star. ### 3.3 PPM 50296 This is a fairly typical field with seven galaxies brighter than 21.5 mag at K within the field of view. But the field is very notable for the presence of a 16.9 mag disk galaxy, the brightest in any of our five fields. The galaxy (PPM 50296-07-23) is highly inclined and has an extent of about 3”. More significantly, it is asymmetrical and has a faint companion (not necessarily physically associated) about 3” to the south-east. ### 3.4 PPM 98537 PPM 98537 is at a galactic latitude of 28 degrees, and Galactic stars significantly “contaminate” the field. Nevertheless, there is at least one identifiable galaxy in the field (PPM 98537+05+08, K=19.2 mag) which is quite close to the guide star (offset=8.8”), and several fainter objects at separations around 20”. ### 3.5 PPM 106365 This is the lowest Galactic latitude field in our sample (23 degrees) and it is clearly dominated by Galactic stars. This has one very positive effect in that there are many stars that can provide accurate psf’s simultaneously with each galaxy image. There is of course one negative effect as well: without very good seeing, it is difficult to identify which objects are very compact galaxies. ## 4 Galaxy Simulations An important concern in imaging faint galaxies is sensitivity. In particular, many of these galaxies are difficult to image when most of their light is concentrated in a few pixels; how much more difficult will this be when they are sampled at 0$`\stackrel{}{\mathrm{.}}`$02 per pixel? Also, for morphological studies, one wants not only to detect the galaxy but also to measure its brightness over some extended area, or at least determine its size and light profile. Thankfully, the background per pixel also goes down as the square of the pixel scale, so these studies are possible even in the near infrared. To quantify this, high quality R band images of the nearby spiral galaxy NGC 5371 and the S0 galaxy NGC 4036 were used to create artificial AO images. The R band was selected because it is roughly redshifted to the H band at a redshift of about 1.5, where we might expect to find a significant number of faint galaxies. All simulations assume the Keck Telescope (10 m) with a one hour integration and a camera throughput of 30%, significantly worse than the current non-AO near infrared camera (NIRC, $`\sim `$46% from Matthews & Soifer 1994). It is also assumed that the object can be dithered on chip to generate sky measurements. The simulated backgrounds were 13.7 at H and 12.9 at K’, corresponding to the nominal H band sky background at Mauna Kea, but an increased K’ background (nominal is 13.9) in order to simulate additional thermal emission from the AO system.
NGC 5371 was selected for comparison with the resolved galaxies that make up the brightest members of our sample (e.g. PPM 91088-21+18, PPM 91088-25-04, PPM 91714+22-19, PPM 50296-07-23, PPM 106365-23+18). It has a bright central halo and a near face-on spiral disk. The galaxy was resampled onto a 100x100 grid simulating a 2″ field of view with 0.02″/pixel. The visible disk was given an extent of roughly 2″x1″, again comparable to our brightest candidates. The total magnitude was set to 17.0 at K' and 17.5 in H band. Figure 6 shows 4 panels of NGC 5371 under different conditions. Panel (a) is the original image with no noise and essentially 1 pixel resolution. Panel (b) is a simulated image under good seeing conditions (0.6″) but with no AO system and no noise. It has a plate scale of 0.16″ per pixel. This panel shows an essentially unresolved object, since the central region dominates the light distribution. Panel (c) is the H band simulated image with the AO parameters described above. One bright HII region is visible to the bottom left of the disk, and the disk is easily observed. Little of the spiral structure is apparent in this raw image. Panel (d) is the K band simulation. Both the bright HII region and the spiral arms are easily distinguished. The core is elongated, but not quite enough to distinguish the small central bar present in this galaxy. Figure 7 shows an azimuthally averaged radial profile of the galaxy. The smoother curve is a theoretical fit with a de Vaucouleurs profile for the bulge and an exponential disk. The effective radii are well determined, with r<sub>bulge</sub>=0.06″ and r<sub>disk</sub>=0.65″. Notice that the noise in the radial plot is extremely low even though the signal-to-noise in each pixel is quite modest. The second galaxy (NGC 4036) was selected to test how well the size and basic morphology could be determined for some of our faintest and smallest candidates. The image was resampled onto a 100x100 pixel grid (0.02″ per pixel) such that its horizontal FWHM was 0.1″ and its vertical FWHM was 0.08″. Its flux was scaled to a 20th magnitude object at K and 20.5 at H. Figure 8 shows the simulated galaxy under four different conditions. Panel (a) is the original with 1 pixel resolution and no noise. Panel (b) shows the unresolved image that results in any good-seeing (0.6″) non-AO image. Panels (c) and (d) show the AO simulations at H and K' band respectively. Figure 9 shows an azimuthally averaged radial profile of the galaxy. The smoother curve is a theoretical fit with a de Vaucouleurs profile for the bulge and an exponential disk.
The effective radii are well determined, with r<sub>bulge</sub>=0.05″ and r<sub>disk</sub>=0.15″. ## 5 Conclusions In this paper, we have presented a new strategy for observing faint compact galaxies with a high-order AO system. Over 40 galaxies were identified near 5 bright stars, all appropriate candidates for early Adaptive Optics observations with large ground-based telescopes. Our simulations demonstrate that typical objects found in the fields are observable and that fundamental galaxy properties such as disk and bulge size can be measured. We believe these observations will greatly facilitate future diffraction-limited observations of faint field galaxies, even with the very limited fields of view of early AO cameras. The authors are very grateful for the support and encouragement of Ian McLean and Andrea Ghez. We would also like to thank Alycia Weinberger and Bruce Macintosh for many useful conversations and assistance with observing. This work would not be possible without the help of, and interaction with, the adaptive optics team at Keck: Peter Wizinowich, Scott Acton, Olivier Lai, Chris Shelton, and Paul Stomski. Finally we would like to thank our telescope operator, Gary Puniwae, and all of the Keck staff.
# Panspermia revisited John Gribbin Astronomy Centre, University of Sussex, Falmer, Brighton BN1 9QJ J.R.Gribbin@Sussex.ac.uk The discovery of evidence for life on Earth more than 3850 million years ago (1) naturally encourages a revival of speculation about the possibility that life did not originate on Earth, but was carried to the planet in the form of microorganisms such as bacteria, either by natural processes or by deliberate seeding of the Galaxy by intelligent beings. This idea, known as panspermia, has a long history (2, 3), but it is curious that in recent decades astronomers have tended to dismiss the possibility of panspermia on the grounds that microorganisms could not survive the damage caused by ultraviolet radiation and cosmic rays on their journey out of a planetary system like the Solar System (4), while some biologists (5) have argued that it is impossible for life to have emerged from simple molecules in the limited time available (now seen to be substantially less than 1000 million years) since the Earth formed. This has led Crick, in particular, to argue that the seeds of life were indeed carried to Earth (and presumably other planets) protected inside automated spaceprobes, a process he calls directed panspermia (6). Recently, however, Wesson and his colleagues (7,8,9,10) have pointed out a way in which biological material could escape from a planet like the Earth orbiting a star like the Sun by natural processes, and survive with its DNA more or less intact. The problem is that although microorganisms could escape from the Earth today, their biological molecules would quickly be destroyed by radiation in the near-Earth environment. Bacteria shielded inside fine grains of material such as carbon could survive in the interplanetary environment near Earth, but would then be too heavy for the radiation pressure of the Sun today to eject them from the Solar System. The solution is to argue that suitably shielded microorganisms can be ejected from a planetary system like ours when the star is in its red giant phase. This makes it possible for natural mechanisms to seed the Galaxy with viable life forms – and even if the biological material is damaged on its journey, as these authors point out, even the arrival of fragments of DNA and RNA on Earth some 4000 million years ago would have given a kick start to the processes by which life originated here.
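How demanding is the escape condition? A rough back-of-envelope estimate (mine, not taken from refs. 7-10) uses the ratio of radiation pressure to gravity for a spherical grain of radius s and density rho around a star of luminosity L and mass M,

$$\beta =\frac{3LQ_{\mathrm{pr}}}{16\pi GMc\rho s}\approx 0.3\left(\frac{L/M}{L_{\odot }/M_{\odot }}\right)\left(\frac{2\ \mathrm{g\,cm}^{-3}}{\rho }\right)\left(\frac{1\ \mu \mathrm{m}}{s}\right)Q_{\mathrm{pr}},$$

where Q<sub>pr</sub> (of order unity) is the radiation pressure efficiency. A shielded, micron-sized grain around the present Sun thus has beta below the value of order unity needed for escape and stays bound, while a red giant of essentially the same mass but thousands of times the luminosity pushes beta far above unity and expels the grain; this is the window exploited by Wesson and his colleagues.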
The remaining puzzle about this process is how the grains of life-bearing dust get down to Earth. In their eagerness to suggest how microorganisms could have escaped from a planetary system, few of the proponents of natural panspermia seem to have worried much about how the life-bearing grains get back down to a planetary surface. But the work of Wesson and his colleagues naturally leads one to surmise that the immediate fate of the microorganisms ejected from a planetary system during the red giant phase will be to mingle with the other material ejected from the star, forming part of the material of interstellar space and becoming part of an interstellar molecular cloud. When a new planetary system forms from such a cloud, it is likely that the accretion processes in the circumstellar disc produce very large numbers of cometary bodies, which preserve intact the material of the cloud. Although the processes of accretion of a planet like the Earth generate heat which would destroy any microorganisms present (and which may well have driven off all the primordial volatiles), it is likely that as the planet cools it will be bombarded by comets, which bring large amounts of primordial material (and water) down to the surface (for a review, see 11). If this material includes dormant bacteria, or even fragments of DNA, life will be able to get a grip on the planet as soon as its surface cools, as seems to have happened on Earth. The possibility that comets may have brought the seeds of life to Earth in this way has been discussed by, for example, McKay (12); but those earlier suggestions required that the organic material was ejected from Earth-like planets inside rocky debris as a result of meteoritic impacts. It is difficult to see how material in this form could have become a general feature of the interstellar medium, or, indeed, how it would get into comets. What I propose here, in the light of the work of Wesson and his colleagues, is that organic material is not only a natural and widely dispersed component of the interstellar medium, but will inevitably be incorporated into the material from which new planets form. The immediate difficulty faced by this hypothesis is explaining why life did not get a grip on Venus or Mars as well – but that is a difficulty shared by all variations on the panspermia theme. Unlike those other variations on the theme, though, this one is testable. It would be feasible to obtain material from a long-period comet, which has never previously entered the inner Solar System, and analyse this material for traces of DNA. If the hypothesis is correct, there should be biological material very similar to that of life on Earth in these comets. Bibliography (1) Holland, H. D., 1997, Science, 275, 38. (2) Arrhenius, S., 1908, Worlds in the Making, Harper & Row, New York. (3) Shklovskii, I. S. and Sagan, C., 1966, Intelligent Life in the Universe, Holden-Day, San Francisco. (4) Chyba, C. and Sagan, C., 1988, Nature, 355, 125. (5) Crick, F. H. C. and Orgel, L. E., 1973, Icarus, 19, 341. (6) Crick, F. H. C., 1982, Life Itself, Macdonald, London. (7) Wesson, P. S., Secker, J., and Lepock, J. R., 1997, Proceedings of the 5th International Conference on Bioastronomy, IAU Colloquium No. 161, p539, Editrice Compositori, Bologna. (8) Secker, J., Wesson, P. S., and Lepock, J., 1996, Journal of the Royal Astronomical Society of Canada, 90, 17. (9) Secker, J., Lepock, J., and Wesson, P., 1994, Astrophysics and Space Science, 219, 1. (10) Wesson, P. S., 1990, Quarterly Journal of the Royal Astronomical Society, 31, 161. (11) Gribbin, J., in press, Stardust, Viking, London. (12) McKay, C., 1996, Mercury, 25(6), 15.
# Eternal inflation, black holes, and the future of civilizations ## I Eternal inflation Inflation is a period of accelerated expansion in the early universe. It is the only cosmological scenario we have that can explain the large-scale homogeneity and flatness of the universe. During inflation the universe is expanded by a huge factor, so that we can see only a part of it, which is nearly homogeneous and flat. The inflationary expansion is driven by the potential $`V(\varphi )`$ of a scalar field $`\varphi `$, which is called the inflaton. In Fig. 1 the inflaton is represented by a little ball that rolls down the potential hill. Near the top of the potential the slope is very small, so the roll is slow and $`V(\phi )\simeq \mathrm{const}`$. In this regime the universe expands exponentially, $$a(t)\propto e^{Ht},$$ (1) with the expansion rate determined by the height of the potential, $$H^2=\frac{8\pi }{3}V(\phi ).$$ (2) (Throughout the paper we use natural units in which $`\hbar =c=G=1`$.) When $`\phi `$ gets to the steep part of the potential, it starts oscillating about the minimum, and its energy gets dumped into relativistic particles. The particles quickly thermalize at a high temperature, and the subsequent evolution is along the lines of the standard hot cosmological model. Thus, thermalization plays the role of the big bang in the inflationary scenario, and inflation prepares the initial conditions for the big bang (for a review of inflation, see ). We hope that the shape of the potential $`V(\phi )`$ will be determined from the theory of elementary particles, but our present understanding of particle physics is not sufficient for this task. Fortunately, some general features of inflation can be studied without a detailed knowledge of $`V(\phi )`$. One of the remarkable features of inflation is its eternal character. This is due to quantum fluctuations of the inflaton. On the flat portion of the potential, the force that drives $`\phi `$ down the hill is small and quantum fluctuations are important. The physics of the fluctuations is determined by the expansion rate $`H`$: the field $`\phi `$ experiences quantum jumps of magnitude $`\delta \phi \sim \pm H/2\pi `$ on a time scale $`\delta t\sim H^{-1}`$. These jumps are not homogeneous in space: quantum fluctuations of $`\phi `$ are not correlated over distances larger than $`H^{-1}`$. In other words, fluctuations in non-overlapping regions of size $`H^{-1}`$ are independent of one another. Since quantum fluctuations of $`\phi `$ are different at different locations, the inflaton does not get to the thermalization point at the bottom of the hill simultaneously everywhere in space. In some rare regions, the fluctuations will keep $`\phi `$ at high values of the potential for much longer than it would otherwise stay there. Such regions are “rewarded” by a large amount of expansion at a high rate $`H`$. The dynamics of the total volume of inflating regions in the universe is thus determined by two competing processes: the loss of inflating volume due to thermalization and the generation of new volume at the high rate sustained by quantum fluctuations. Analysis shows that the second process “wins” for a generic inflaton potential $`V(\phi )`$, and the total inflating volume grows with time.
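This competition is easy to see in a toy Monte Carlo. The sketch below is in the spirit of, but much cruder than, the simulation behind Fig. 2: one field value per horizon-sized region, no spatial correlations, and all parameter values are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 0.1                     # inflationary expansion rate (illustrative, Planck units)
phi_end = 1.0               # field value where thermalization occurs (assumed)
roll = 0.02                 # classical roll per Hubble time (assumed slow)
kick = H / (2 * np.pi)      # quantum jump delta-phi ~ H/2pi per delta-t ~ 1/H
n_child = 20                # each horizon volume expands into e^3 ~ 20 per Hubble time
cap = 100_000               # subsample to bound memory; `weight` tracks true volume

phi = np.zeros(1000)        # 1000 horizon regions starting near the top of the hill
weight = 1.0
for step in range(80):
    phi = phi + roll + rng.normal(0.0, kick, size=phi.size)
    phi = phi[phi < phi_end]           # regions reaching phi_end thermalize, drop out
    phi = np.repeat(phi, n_child)      # the rest inflate and subdivide
    if phi.size > cap:
        weight *= phi.size / cap
        phi = rng.choice(phi, size=cap, replace=False)
    if step % 10 == 0:
        print(f"step {step:2d}: inflating volume ~ {weight * phi.size:.3e}")
```

For a steeper classical roll the losses win and inflation ends everywhere; for the flat potentials of interest the printed inflating volume grows without bound, which is the eternal character described above.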
The spatial distribution of inflating and thermalized regions in an eternally inflating universe is illustrated in Fig. 2. It was obtained in a numerical simulation for a double-well potential $`V(\phi )=\lambda (\phi ^2-\eta ^2)^2`$. Inflating regions are white, and the two types of thermalized regions corresponding to the two minima at $`\phi =\pm \eta `$ are shown with different shades of grey. Different types of thermalized regions will generally have different physical properties. The spacetime structure of the universe in this model is illustrated in Fig. 3 using the same shading code. Now the vertical axis is time and the horizontal axis is one of the spatial directions. The boundaries between inflating and thermalized regions play the role of the big bang for the corresponding thermalized regions. In the figure, these boundaries become nearly vertical at late times, so that they appear to correspond to a certain position in space rather than to a certain moment of time. The reason is that the horizontal axis in Fig. 3 is the co-moving distance, with the expansion of the universe factored out. The physical distance is obtained by multiplying by the expansion factor $`a(t)`$, which grows exponentially as we go up along the time axis. If we used the physical distance in the figure, the thermalization boundaries would “open up” and become nearly horizontal (but then it would be difficult to fit more than one thermalized region in the figure). The spacetime structure of a thermalized region near the thermalization boundary is illustrated in Fig. 4. The thermalization is followed by a hot radiation era and then by a matter-dominated era during which luminous galaxies are formed and civilizations flourish. All stars eventually die, and thermalized regions become dark, cold and probably not suitable for life (see the next section). Hence, civilizations are to be found within a layer of finite (temporal) width along the thermalization boundaries in Fig. 3. For an observer inside one of the thermalized regions, the thermalization boundary (the “big bang”) is in his past, so he cannot reach this boundary, no matter how fast he moves.<sup>*</sup><sup>*</sup>*Thermalization boundaries are infinite spacelike hypersurfaces. With an appropriate choice of the time coordinate, each thermalized region is an infinite sub-universe containing an infinite number of galaxies. Even at the speed of light, it is impossible to send a signal that would cross the boundary and get into the inflating region. It follows that different thermalized regions are causally disconnected from one another: it is impossible to send signals between different regions, and the course of events in one region can in no way be affected by what is happening in another. This conclusion may be avoided in the presence of a peculiar form of matter described by a field equation with a special non-linear gradient term, in which sound propagates faster than the speed of light. The existence of such matter is consistent with the principles of the theory of relativity: the Lagrangian is perfectly Lorentz invariant, but the cosmological solutions select a preferred frame in which the speed of sound can be superluminal. In what follows, however, we shall disregard this somewhat exotic possibility. Note also that in order to get from one thermalized region to another, the sound waves have to cross the inflating region that separates them. As a result, unless the speed of sound is so large that the trip can be made almost instantaneously, the wavelength may get stretched by an enormous expansion factor.
In this case the period of the waves would become too long for even one oscillation during the lifetime of a star in the target region, and it is hard to see how such waves could transmit any information. ## II Messages to the future Let us now consider the future prospects for a civilization on a cosmological timescale. It appears very unlikely that a civilization can survive forever. Even if it avoids natural catastrophes and self-destruction, it will in the end run out of energy. The stars will eventually die, and other sources of energy (such as tidal forces) will also come to an end. Dyson has argued that civilizations may still survive indefinitely into this cold and dark future of the universe by constantly reducing the rate of their energy consumption. This must be accompanied by a corresponding reduction in the rate of information processing. In the asymptotic future both rates become infinitesimal, but Dyson argued that the total amount of information processed by a civilization may still be infinite. The “subjective” lifetime of such a civilization would then also be infinite, as well as its physical lifetime. Nevertheless, a more detailed analysis of the problem has recently been given by Krauss and Starkman, with the conclusion that the steady decrease in the rate of information processing proposed by Dyson seems physically impossible to achieve. Thus, it appears that an eternal civilization is impossible, even in principle. If we are doomed to perish, then perhaps we could send messages, or even representatives, to future civilizations? Those civilizations could also send messages to the future, and so on. We would then become a branch in an infinite “tree” of civilizations, and our accumulated wisdom would not be completely lost. Here we shall consider the feasibility of some scenarios of this sort. ### A Recycling universe In an eternally inflating universe, new thermalized regions will continue to be formed in the arbitrarily distant future. These regions will go through radiation- and matter-dominated periods, will form galaxies and will evolve civilizations. However, as we discussed at the end of the preceding section, different thermalized regions are causally disconnected from one another and communication between them is impossible. We can of course send messages to other civilizations within our own thermalized region. But the resulting tree of civilizations would necessarily be finite, and its time span would be bounded by the same energetic factors that limit the lifetime of a single civilization. The spacetime structure of the universe illustrated in Fig. 3 assumes that the vacuum energy density, otherwise known as the cosmological constant $`\mathrm{\Lambda }`$, is equal to zero. However, observations of distant supernovae performed by two independent groups during the last year suggest that the universe is now expanding with an acceleration, indicating that $`\mathrm{\Lambda }`$ has a small positive value. A nonzero $`\mathrm{\Lambda }`$, no matter how small, changes the large-scale structure of the universe. As the universe expands, the density of matter goes down, $`\rho _m\propto a^{-3}`$, while the vacuum energy density remains constant, so eventually the universe becomes dominated by the cosmological constant. After that it expands exponentially, as in Eq. (1), but at a very small rate $`H_\mathrm{\Lambda }=(\mathrm{\Lambda }/3)^{1/2}`$.
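For orientation, the relevant numbers are easily evaluated. The short aside below anticipates the rough observational value $`\mathrm{\Lambda }\sim 10^{122}`$ and the nucleation rates quoted in the next paragraph:

```python
import math

t_planck = 5.39e-44                  # Planck time in seconds
Lam = 1e-122                         # cosmological constant in Planck units (rough)

H_L = math.sqrt(Lam / 3)             # de Sitter expansion rate H_Lambda
print(f"H_Lambda ~ {H_L:.1e} (inverse Planck times)")
print(f"1/H_Lambda ~ {t_planck / H_L / 3.156e7:.1e} yr")   # ~3e10 yr

# log of the black-hole to bubble nucleation-rate ratio (S_b > 0 only increases it):
print(f"ln(P_bh/P_bubble) > {2 * math.pi / Lam:.1e}")
```

The de Sitter timescale $`H_\mathrm{\Lambda }^1`$ of a few times $`10^{10}`$ years shows that the $`\mathrm{\Lambda }`$-dominated era is essentially upon us.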
In a universe with $`\mathrm{\Lambda }>0`$, there is a finite, constant probability for the inflaton field to tunnel quantum-mechanically from its vacuum value, where $`V(\phi )=\mathrm{\Lambda }/8\pi `$, to the values in the inflationary range near the top of the potential. The tunneling occurs within a spherical volume of radius $`H_\mathrm{\Lambda }^{-1}`$ (a “bubble”), with a probability per unit volume per unit time $`𝒫\sim \mathrm{exp}(-S_b-3\pi \mathrm{\Lambda }^{-1})`$. Here $`S_b`$ is the action of the instanton solution responsible for the tunneling. (Note that $`𝒫`$ vanishes for $`\mathrm{\Lambda }=0`$.) Each inflating bubble develops into a fully-fledged eternally inflating region of the universe. In the course of its evolution, it forms an infinite number of thermalized regions, each containing an infinite number of galaxies. These thermalized regions later become dominated by the cosmological constant, with subsequent nucleation of new inflationary bubbles, and so on. The large-scale structure of such a “recycling” universe is illustrated in Fig. 5. The recycling nature of the universe opens the possibility of sending a message to a future civilization. All one needs is a very strong and durable container. One simply puts the message in the container and sends it into space. In due course, the universe becomes dominated by the cosmological constant, and inflating bubbles begin to nucleate. The hope is that the container will be engulfed by one of the bubbles. The problem with this scenario is that inflating bubbles are not the only things that can nucleate in a $`\mathrm{\Lambda }`$-dominated universe. There is also a constant rate of nucleation of black holes, $`𝒫_{bh}\sim \mathrm{exp}(-\pi /\mathrm{\Lambda })`$. For the value of $`\mathrm{\Lambda }\sim 10^{-122}`$ suggested by observations, this is greater than the rate of bubble nucleation by a factor $`\sim \mathrm{exp}(10^{122})`$. Thus, the message-carrying container will almost certainly be swallowed by a black hole. In order to beat the odds, one would have to send more than $`\mathrm{exp}(10^{122})`$ containers. ### B Black holes and an upper bound on information Instead of relying on bubble nucleation, one can take a more active approach (this possibility was suggested to us by E. Guendelman). Quantum nucleation of an inflating region can be triggered by a gravitational collapse in our part of the universe (see Fig. 6). One has to generate an implosion of a small high-energy vacuum region, leading to gravitational collapse, with a message-carrying container at the implosion center. All one sees is the formation of a black hole; a new inflating region may or may not be inside it. The tunneling probability for the formation of such a region is $$𝒫\sim \mathrm{exp}(-C/H^2),$$ (3) where $`H`$ is the expansion rate in the new inflating region and $`C\sim 1`$ is a constant coefficient. For grand-unification-scale inflation, $`H\sim 10^{-7}`$ and $`𝒫\sim \mathrm{exp}(-10^{14})`$. The exponential suppression of the tunneling disappears if inflation is at the Planck scale, $`H\sim 1`$. But then we encounter a different problem: there seems to be an upper bound on the amount of information that can be sent. Indeed, if the container is to survive a period of inflation at a high expansion rate $`H`$, then its diameter must be smaller than the horizon, $`D<H^{-1}`$, or else the container would be torn apart by the expansion.
(One could imagine spreading the information among multiple containers, but it would be very difficult for it to be reassembled after the containers had been separated by vast distances during inflation.) Now, it has been argued that the amount of information contained in a sphere of diameter $`D`$ is bounded by $$I\lesssim A/4,$$ (4) where $`A=\pi D^2`$ is the surface area of the sphere (in natural units). The general validity of this “holographic” bound is still a matter of debate. However, for our particular case, the inequality (4) can be derived directly from the generalized second law of thermodynamics. The maximum information that a package can contain at a given value of its energy is equal to the logarithm of the number of microstates of the package compatible with that energy. That is, the maximum amount of information coincides with the thermodynamical entropy of the package in thermal equilibrium. Now, we can think of a process in which the package collapses into a black hole (or is swallowed by a very small black hole). The entropy of the package has to be less than the entropy of the resulting black hole, which is equal to one fourth of its surface area. Since the largest black hole that can exist in an inflating universe has radius $`1/(\sqrt{3}H)`$, this implies that the largest amount of information that can be sent is $$I_{max}=\frac{1}{12H^2}.$$ (5) The usual values of $`H`$ in inflationary models range from $`\sim 10^{-7}`$ for grand-unification-scale inflation to $`\sim 10^{-34}`$ for electroweak-scale inflation, yielding $`I_{\text{max}}\sim 10^{13}`$-$`10^{68}`$. This can be compared with $`I\sim 10^{10}`$ for the human genome, $`I\sim 10^7`$ for a typical book, and $`I\sim 10^{15}`$ for all the books in the Library of Congress. Even if one makes no assumptions about the model, with $`I\sim 10^7`$, Eqs. (5) and (3) require $`H^2\lesssim 10^{-8}`$ and $`𝒫\sim \mathrm{exp}(-10^8)`$. This is an improvement over the case of nucleating bubbles, but still the number of attempts required to beat the odds far exceeds the number of elementary particles in the visible part of the universe.
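The information bound of Eq. (5) drives all of these estimates; a few lines of arithmetic make the comparisons concrete (the benchmark figures are the ones quoted above):

```python
# Information bound I_max = 1/(12 H^2), Eq. (5), in nats; H in Planck units.
for name, H in [("GUT-scale inflation", 1e-7), ("electroweak-scale inflation", 1e-34)]:
    print(f"{name:28s} I_max ~ {1 / (12 * H**2):.0e}")

# Benchmarks from the text: human genome ~1e10, a book ~1e7, Library of Congress ~1e15.
# A 1e7-nat message needs H^2 < 1/(12e7) ~ 1e-8, hence P ~ exp(-C/H^2) ~ exp(-1e8).
```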
### C Negative energies The root of the problem appears to be the tiny tunneling probability (3), so it is reasonable to inquire whether or not a new inflating region can be created without quantum tunneling. This question was addressed by Farhi and Guth, who concluded that the answer is “no”, provided that a few very general assumptions are satisfied. Among these assumptions the most important is the weak energy condition, asserting that the energy density measured by any observer is never negative. Although it is satisfied in all familiar states of matter, this condition is known to be violated in certain states of quantum fields (e.g., the electromagnetic field or scalar fields of the kind used in inflationary models). The newly-created inflating region should have an extent $`\sim H^{-1}`$, where $`H`$ is the inflationary expansion rate. Hence, one needs to violate the weak energy condition in a spacetime region $$\mathrm{\Delta }L\sim \mathrm{\Delta }t\sim H^{-1}.$$ (6) The required magnitude of the negative energy density is $$|\rho |\sim H^2.$$ (7) For non-interacting fields, the magnitude and the duration of violations of the weak energy condition are constrained by the so-called quantum inequalities, $$|\rho |\lesssim (\mathrm{\Delta }t)^{-4}.$$ (8) Combining this with Eqs. (6) and (7), we see that all the conditions can be satisfied only for super-Planckian inflation with $`H\gtrsim 1`$. Then again, one has to face the information bound (5). The negative energy density in Eq. (8) should be understood in the sense of a quantum expectation value. There are quantum fluctuations about this value, and occasionally a fluctuation may get large enough to provide the required negative energy in a sufficiently large region. But again, such fluctuations are suppressed by an exponentially large factor. It should be noted, however, that the validity of quantum inequalities like (8) is not certain beyond the case of free quantum fields, for which they have been established. For example, the Casimir energy density of the electromagnetic vacuum between two conducting plates appears to be negative and permanent, in violation of (8). ### D Limiting curvature Quantum effects at high curvature could significantly modify the dynamics of the gravitational field. In particular, it has been suggested that there exists a limiting curvature $`R_{max}`$ which can never be exceeded. This would result in a drastic change in the final stages of gravitational collapse. It has been argued that the black hole singularity and the adjacent high-curvature region in the black hole interior get replaced by a de Sitter space of the limiting curvature $`R_{max}`$. In the Schwarzschild solution describing the usual black hole, the spacetime could not be continued beyond the singularity, but now the de Sitter space extends all the way into a new inflating universe which is in the absolute future with respect to the original one. Assuming that the state of limiting curvature is metastable and decays, dumping its energy into particles, we will have formation of thermalized regions and the usual picture of eternal inflation inside the black hole. In the course of the gravitational collapse, the effective energy-momentum tensor in this model should develop negative energy densities that violate the quantum inequalities. In this sense, the limiting curvature conjecture can be regarded as a specific example of a more general class of models with negative energies. An important difference, however, is that with a limiting curvature, inflating universes automatically form inside black holes, with no effort required on our part. To send an information container to another universe, all we need to do is to drop it into a black hole. And in a recycling universe, black holes are no danger at all: they simply provide a passage to a new inflating region. The curvature of de Sitter space is $`R=12H^2`$, and it follows from (5) that the largest amount of information that can be sent is $$I_{max}=1/R_{max}.$$ (9) We thus see that a non-trivial amount of information can be sent only if $`R_{max}`$ is well below the Planck scale, $`R_{max}\ll 1`$. ### E Summary To summarize, it appears that all mechanisms that involve quantum tunneling are doomed to failure because of extremely small tunneling probabilities. Creation of new inflating regions without quantum tunneling requires a violation of the weak energy condition that is in conflict with the quantum inequalities. It is not clear how seriously this constraint is to be taken, since we don’t know to what extent the quantum inequalities apply to interacting fields. Since the future of civilization depends on the outcome, this can be regarded as a good reason to increase funding for negative energy research!
In the following section, we shall take an optimistic attitude and assume that advanced civilizations will figure out how to get around the quantum inequalities. (In the absence of energy conditions, wormholes would also be possible, and perhaps could be used to communicate between different regions. However, maintaining a long-lasting wormhole requires negative energies to exist indefinitely, whereas creating a new inflating region as above requires them only for a short period of time. We will not consider wormhole scenarios any further here.) ## III Discussion Suppose now that we have resolved all the “technical” problems associated with sending messages to future civilizations. This includes learning how to generate negative energy in a sufficiently large volume, so that we can create new inflating regions (in case such negative energies are not generated without our intervention, due to a limiting curvature or some other mechanism), and designing containers for the information that can survive the negative energy density, a period of inflation, and the subsequent hot radiation era. Now, what should our strategy be? How many containers should we send? Should we search for a message in our own part of the universe, in case it was created by an advanced civilization that lived prior to our inflation? Let us first address the question of the number of messages that we should send. A mission can be regarded as a success if the information is successfully transmitted to a civilization in the new region that is capable of sending messages of its own. If the probability of success is $`p`$, then one should send $`N\gtrsim 1/p`$ information packages to make sure that at least one of them succeeds. The ultimate goal is to initiate an infinite tree of civilizations, so that our knowledge can propagate indefinitely into the future. If $`\overline{N}`$ is the average number of missions launched by the civilizations that form the tree, then the average number of civilizations in each successive generation is related to the number of their predecessors by a factor $`p\overline{N}`$. Thus, for $`\overline{N}>1/p`$ there is a non-zero probability that the process will never end and the total number of civilizations in the tree will be infinite.
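This is the survival criterion of a Galton-Watson branching process, and the threshold is sharp: below $`\overline{N}=1/p`$ extinction is certain, while above it an infinite tree occurs with finite probability. A small simulation (all parameter values illustrative) makes the threshold visible:

```python
import numpy as np

rng = np.random.default_rng(1)

def survives(p, N, generations=60, cap=10**7):
    """One realization of the tree: every civilization launches N missions,
    each founding a message-capable successor with probability p."""
    alive = 1
    for _ in range(generations):
        alive = rng.binomial(min(alive, cap) * N, p)  # cap keeps the count finite
        if alive == 0:
            return False                              # the line has gone extinct
    return True

p = 0.01                       # success probability per mission (assumed)
for N in (50, 99, 101, 300):   # threshold is N = 1/p = 100
    frac = np.mean([survives(p, N) for _ in range(500)])
    print(f"N = {N:3d}: fraction of runs with a surviving tree ~ {frac:.2f}")
```

Just above threshold survival is rare; well above it most runs survive (in the Poisson approximation with mean $`p\overline{N}=3`$, the extinction probability is the root of $`q=e^{3(q-1)}`$, about 6%).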
We next turn to the implications of the bound (5) on the amount of information that can be contained in a message. The maximal information $`I_{max}`$ is equal to the logarithm of the total number of quantum states available to the information package. This means that there is only a finite number of possible messages, $`N\sim \mathrm{exp}(I_{max})`$. There should, therefore, be an “optimal” message which gives the highest rate of reproduction, by inducing its recipients to send the largest number of successful missions. This optimal message will inevitably be discovered in each infinite tree of civilizations and will eventually become the dominant message, in the sense that the fraction of civilizations receiving other messages within that tree will exponentially approach zero as we go up along the tree. (Clearly, the “optimal” message should come with the instruction that it be passed along without change.) How likely is it that our civilization will receive a message from the past? Let us assume that some finite fraction of the “orphan” civilizations, who do not receive messages from their ancestors, succeed in initiating infinite trees. Then, for each civilization that does not receive a message, there is an infinite number of civilizations (in its future) who do receive messages, with most of the civilizations receiving the optimal message. One may be inclined to conclude that our civilization is most likely to be a recipient of the optimal message. One has to be careful, however, because the numbers of civilizations that do and do not receive messages are both infinite, and comparing infinite sets is a notoriously ambiguous task. The result crucially depends on how one maps one set onto the other. Instead of comparing our civilization with its descendants in the infinite tree, it appears more natural to compare it with other civilizations in the same thermalized region. We cannot prove that this is the correct procedure, but we note that a similar approach appears to work well for calculating probabilities in an eternally inflating universe. Our thermalized region contains an infinite number of civilizations, but it can contain only a finite number of information packages from our predecessors. Hence, the probability for a package to be anywhere in our neighborhood is zero. If we do receive a message (which of course is extremely unlikely), then the same argument can be applied to the civilization that sent it to us. With 100% probability, that civilization was the first in line and received no messages from its predecessors. So it is very unlikely that the message we receive is the optimal message. Since the probability of receiving a message is zero, our descendants would be foolish to waste their resources searching for messages. We could therefore try to make the information container very conspicuous. It could, for example, transmit its message in the form of electromagnetic waves, using some star as a source of energy. (Of course, the container should then be programmed to search for a suitable star.) Perhaps a more reliable plan might be for the “message” to instead be a device which reproduces our civilization in the new region, rather than waiting for new civilizations to evolve. In such a case one can consider this process to be the continuation of the old civilization, rather than a new civilization at all. One can even imagine some individual members of the old civilization surviving in the container into the new region, perhaps by having their physical form and state of knowledge encoded in some compact and durable way for later reproduction. However, due to the limitations on the amount of information that can be included in the container, it may be necessary to send “simplified” representatives if the energy scale of inflation is high. Since there is a finite number of possible messages, the process of accumulation of knowledge will inevitably halt, and one can ask if there is any point in generating an infinite tree of civilizations. There may well be a point at the beginning of the tree, while the limits on the size of the message are not yet reached. After the optimal message is hit upon, there may be no point, but the process is likely to continue anyway. We are grateful to Eduardo Guendelman for a useful discussion. J.G., V.F.M. and A.V. are grateful to Edgard Gunzig and Enric Verdaguer for their hospitality in Peyresq, where part of this work was completed. This work was supported in part by CIRIT grant 1998BEAI400244 (J.G.), by NATO grant CRG 951301 (J.G.), and by the National Science Foundation (K.D.O. and A.V.).
# New “Electric” Description of Supersymmetric Quantum Chromodynamics (to be published in Phys. Lett. B) ## Abstract Responding to the recent claim that the origin of moduli space may be unstable in “magnetic” supersymmetric quantum chromodynamics (SQCD) with $`N_f\le 3N_c/2`$ ($`N_c>2`$) for $`N_f`$ flavors and $`N_c`$ colors of quarks, we explore the possibility of finding nonperturbative physics for “electric” SQCD. We present a recently discussed effective superpotential for “electric” SQCD with $`N_c+2\le N_f\le 3N_c/2`$ ($`N_c>2`$) that generates chiral symmetry breaking with a residual nonabelian symmetry of $`SU(N_c)_{L+R}`$ $`\times `$ $`SU(N_f-N_c)_L`$ $`\times `$ $`SU(N_f-N_c)_R`$. The holomorphic decoupling property is shown to be respected. For massive $`N_f-N_c`$ quarks, our superpotential with the instanton effect taken into account produces a consistent vacuum structure for SQCD with $`N_f=N_c`$, compatible with the holomorphic decoupling. It has been widely accepted that the physics of $`N`$=1 supersymmetric quantum chromodynamics (SQCD) at strong “electric” coupling is well described by the corresponding dynamics of SQCD at weak “magnetic” coupling. This dynamical feature is referred to as Seiberg’s $`N`$=1 duality. In order to apply the $`N`$=1 duality to the physics of SQCD, one has to adjust the dynamics of “magnetic” quarks so that the anomaly-matching conditions are satisfied. In SQCD with quarks carrying $`N_f`$ flavors and $`N_c`$ colors for $`N_f\ge N_c+2`$, the $`N`$=1 duality is respected as long as the “magnetic” quarks have $`N_f-N_c`$ colors. Appropriate interactions of “magnetic” quarks can be derived in SQCD embedded in a softly broken $`N`$=2 SQCD that possesses the manifest $`N`$=2 duality. In SQCD with $`3N_c/2<N_f<3N_c`$, the phase is characterized by an interacting Coulomb phase, where the $`N`$=2 duality can be transmitted to $`N`$=1 SQCD. On the other hand, in SQCD with $`N_f\le 3N_c/2`$, it is not clear that the $`N`$=1 duality is supported by a similar description in terms of the $`N`$=2 duality, although it is believed that the result for $`3N_c/2<N_f`$ can be safely extended to apply to this case. Lately, several arguments have been made concerning possibilities other than the physics based on the $`N`$=1 duality, especially for SQCD with $`N_f\le 3N_c/2`$. It is claimed in Ref. that the origin of moduli space becomes unstable in “magnetic” SQCD and that spontaneous breakdown of the vectorial $`SU(N_f)_{L+R}`$ symmetry is expected to occur. An idea of an anomalous $`U(1)`$ symmetry, $`U(1)_{anom}`$, taken as a background gauge symmetry has been employed. Their findings are essentially based on analyses made in the slightly broken supersymmetric (SUSY) vacuum. On the other hand, emphasizing nonperturbative implementation of $`U(1)_{anom}`$, the authors of Ref. have derived a new type of effective superpotential applicable to “electric” SQCD. However, the physical consequences based on their superpotential have not been clarified yet. Finally, extensive evaluation of the formation of condensates has provided a signal of spontaneous breaking of chiral symmetries, although there is a question about the reliability of their dynamical gap equations.
These attempts suggest that, in order to make the underlying properties of SQCD more transparent, it is helpful to employ a composite chiral superfield composed of chiral gauge superfields that is responsible for the relevant expression of $`U(1)_{anom}`$. In a recent article, we have discussed what physics is suggested by SQCD with $`N_c+2\le N_f\le 3N_c/2`$ and have found that, once SQCD triggers the formation of one condensate made of a quark-antiquark pair, the successive formation of other condensates is dynamically induced to generate spontaneous breakdown of the chiral $`SU(N_f)`$ symmetry to $`SU(N_f-N_c)`$ as a residual chiral nonabelian symmetry. <sup>§</sup><sup>§</sup>§ Another case with a chiral $`SU(N_f-N_c+1)`$ symmetry has also been discussed. The anomalies associated with the original chiral symmetries are matched with those from the Nambu-Goldstone superfields. As in Ref., our suggested dynamics can also be made more visible by taking softly broken SQCD in its supersymmetric limit. The derived effective superpotential has a structure in common with the one discussed in Ref.. It should be noted that the “magnetic” description should be selected by SQCD if SQCD favors the formation of no condensates. In this paper, we further study the effects of SUSY-preserving masses. It is shown that our superpotential is equipped with the holomorphic decoupling property. In the case that quarks carrying flavors of $`SU(N_f-N_c)`$ are massive, our superpotential supplemented by instanton contributions correctly reproduces a vacuum structure consistent with the decoupling property. In SQCD with $`N_c+2\le N_f\le 3N_c/2`$ ($`N_c>2`$), our superpotential takes the form of $$W_{\mathrm{eff}}=S\left\{\mathrm{ln}\left[\frac{S^{N_c-N_f}\mathrm{det}\left(T\right)f(Z)}{\mathrm{\Lambda }^{3N_c-N_f}}\right]+N_f-N_c\right\}$$ (1) with an arbitrary function, $`f(Z)`$, to be determined, where $`\mathrm{\Lambda }`$ is the scale of SQCD. The composite superfields are specified by $`S`$ and $`T`$: $$S=\frac{1}{32\pi ^2}\sum _{A,B=1}^{N_c}W_A^BW_B^A,\qquad T_j^i=\sum _{A=1}^{N_c}Q_A^i\overline{Q}_j^A,$$ (2) where the chiral superfields of quarks and antiquarks are denoted by $`Q_A^i`$ and $`\overline{Q}_i^A`$, and the gluons by $`W_A^B`$ with Tr($`W`$) = 0, for $`i=1,\mathrm{},N_f`$ and $`A,B=1,\mathrm{},N_c`$. The remaining field, $`Z`$, describes an effective field.
Its explicit form can be given by $$Z=\frac{\sum _{i_1\mathrm{}i_{N_f},j_1\mathrm{}j_{N_f}}B^{[i_1i_2\mathrm{}i_{N_c}]}T_{j_{N_c+1}}^{i_{N_c+1}}\mathrm{}T_{j_{N_f}}^{i_{N_f}}\overline{B}_{[j_1j_2\mathrm{}j_{N_c}]}}{\mathrm{det}\left(T\right)}\left(\equiv \frac{BT^{N_f-N_c}\overline{B}}{\mathrm{det}\left(T\right)}\right),$$ (3) where $`B^{[i_1i_2\mathrm{}i_{N_c}]}=\frac{1}{N_c!}\sum _{A_1\mathrm{}A_{N_c}}\epsilon ^{A_1A_2\mathrm{}A_{N_c}}Q_{A_1}^{i_1}\mathrm{}Q_{A_{N_c}}^{i_{N_c}},`$ (4) $`\overline{B}_{[i_1i_2\mathrm{}i_{N_c}]}=\frac{1}{N_c!}\sum _{A_1\mathrm{}A_{N_c}}\epsilon _{A_1A_2\mathrm{}A_{N_c}}\overline{Q}_{i_1}^{A_1}\mathrm{}\overline{Q}_{i_{N_c}}^{A_{N_c}}.`$ (5) This superpotential is derived by requiring that it not only be invariant under $`SU(N_f)_L`$ $`\times `$ $`SU(N_f)_R`$ as well as under two additional $`U(1)`$ symmetries, but also be equipped with the transformation property under $`U(1)_{anom}`$ broken by the instanton effect, namely, $`\delta \mathcal{L}\propto F^{\mu \nu }\stackrel{~}{F}_{\mu \nu }`$, where $`\mathcal{L}`$ represents the Lagrangian of SQCD and $`F^{\mu \nu }`$ ($`\stackrel{~}{F}_{\mu \nu }\equiv ϵ_{\mu \nu \rho \sigma }F^{\rho \sigma }`$) is a gluon’s field strength. Note that $`Z`$ is neutral under the entire set of chiral symmetries including $`U(1)_{anom}`$, and the $`Z`$-dependence of $`f(Z)`$ cannot be determined by the symmetry principle. Although the origin of the moduli space, where $`T=B=\overline{B}=0`$, is allowed by $`W_{\mathrm{eff}}`$, consistent SQCD must automatically show the anomaly-matching property with respect to unbroken chiral symmetries. Since the anomaly-matching property is not possessed by SQCD realized at $`T=B=\overline{B}=0`$, the composite fields are expected to be dynamically reshuffled so that the anomaly matching becomes a dynamical consequence. Usually, one accepts that SQCD is described by “magnetic” degrees of freedom instead of $`T`$, $`B`$ and $`\overline{B}`$. However, it is equally possible that “electric” SQCD dynamically rearranges some of $`T`$, $`B`$ and $`\overline{B}`$ to develop vacuum expectation values (VEV’s). In this case, chiral symmetries are spontaneously broken and the presence of the anomalies can be ascribed to the Nambu-Goldstone bosons for the broken sector and to chiral fermions for the unbroken sector. If consistent SQCD with broken chiral symmetries is described by our superpotential, the anomaly-matching constraint must be automatically satisfied, and it is indeed shown to be satisfied by the Nambu-Goldstone superfields. In ordinary QCD with two flavors, we know of a similar situation, where QCD with massless proton and neutron theoretically allows the existence of the unbroken chiral $`SU(2)_L`$ $`\times `$ $`SU(2)_R`$ symmetry, but the real physics of QCD chooses its spontaneous breakdown to $`SU(2)_{L+R}`$. However, since the SUSY-invariant theory possesses degenerate SUSY vacua, which cannot be dynamically selected, both the “magnetic” and “electric” vacua will correspond to a true vacuum of SQCD. In the classical limit, where the SQCD gauge coupling $`g`$ vanishes, the behavior of $`W_{\mathrm{eff}}`$ is readily found by applying the rescaling $`S\to g^2S`$ and invoking the definition $`\mathrm{\Lambda }\equiv \mu \mathrm{exp}(-8\pi ^2/(3N_c-N_f)g^2)`$, where $`\mu `$ is a certain reference mass scale. The resulting $`W_{\mathrm{eff}}`$ turns out to be $`WW/4`$, which is the tree superpotential for the gauge kinetic term. If $`S`$ is integrated out, one reaches the ADS-type superpotential: $$W_{\mathrm{eff}}^{\mathrm{ADS}}=\left(N_f-N_c\right)\left[\frac{\mathrm{det}\left(T\right)f(Z)}{\mathrm{\Lambda }^{3N_c-N_f}}\right]^{1/(N_f-N_c)}.$$ (6)
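The step from Eq. (1) to Eq. (6) is a one-line computation, spelled out here for completeness. Stationarity of Eq. (1) in $`S`$ gives $$\frac{\partial W_{\mathrm{eff}}}{\partial S}=\mathrm{ln}\left[\frac{S^{N_c-N_f}\mathrm{det}\left(T\right)f(Z)}{\mathrm{\Lambda }^{3N_c-N_f}}\right]=0,$$ because the explicit $`N_f-N_c`$ in Eq. (1) is cancelled by $`S\partial \mathrm{ln}S^{N_c-N_f}/\partial S=N_c-N_f`$. Hence $`S=[\mathrm{det}\left(T\right)f(Z)/\mathrm{\Lambda }^{3N_c-N_f}]^{1/(N_f-N_c)}`$; substituting back, the logarithm vanishes and $`W_{\mathrm{eff}}=(N_f-N_c)S`$, which is Eq. (6).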
In this case, $`W_{\mathrm{eff}}^{\mathrm{ADS}}`$ vanishes in the classical limit only if $`f(Z)=0`$, where the constraint $`BT^{N_f-N_c}\overline{B}=\mathrm{det}\left(T\right)`$, namely $`Z=1`$, is satisfied. The simplest form of $`f(Z)`$ that satisfies $`f(Z)=0`$ can be given by $$f(Z)=(1-Z)^\rho \qquad (\rho >0),$$ (7) where $`\rho `$ is a free parameter. If one flavor becomes heavy, our superpotential exhibits a holomorphic decoupling property. Add a mass to the $`N_f`$-th flavor; then we have $$W_{\mathrm{eff}}=S\left\{\mathrm{ln}\left[\frac{S^{N_c-N_f}\mathrm{det}\left(T\right)f(Z)}{\mathrm{\Lambda }^{3N_c-N_f}}\right]+N_f-N_c\right\}-mT_{N_f}^{N_f}.$$ (8) Following the usual procedure, we divide $`T`$ into $`\stackrel{~}{T}`$, a light-flavored $`(N_f-1)`$ $`\times `$ $`(N_f-1)`$ submatrix, and $`T_{N_f}^{N_f}`$, and we also divide $`B`$ and $`\overline{B}`$ into light-flavored $`\stackrel{~}{B}`$ and $`\stackrel{~}{\overline{B}}`$ and heavy-flavored parts. At the SUSY minimum, the off-diagonal elements of $`T`$ and the heavy-flavored $`B`$ and $`\overline{B}`$ vanish, and $`T_{N_f}^{N_f}=S/m`$ is derived. This relation is referred to as the Konishi anomaly relation. Inserting this relation into Eq.(8), we obtain $$W_{\mathrm{eff}}=S\left\{\mathrm{ln}\left[\frac{S^{N_c-N_f+1}\mathrm{det}(\stackrel{~}{T})f(\stackrel{~}{Z})}{\stackrel{~}{\mathrm{\Lambda }}^{3N_c-N_f+1}}\right]+N_f-N_c-1\right\},$$ (9) where $`\stackrel{~}{Z}=\stackrel{~}{B}\stackrel{~}{T}^{N_f-N_c-1}\stackrel{~}{\overline{B}}/\mathrm{det}(\stackrel{~}{T})`$ from $`Z=\stackrel{~}{B}T_{N_f}^{N_f}\stackrel{~}{T}^{N_f-N_c-1}\stackrel{~}{\overline{B}}/T_{N_f}^{N_f}\mathrm{det}(\stackrel{~}{T})`$ and $`\stackrel{~}{\mathrm{\Lambda }}^{3N_c-N_f+1}=m\mathrm{\Lambda }^{3N_c-N_f}`$. Thus, after the heavy flavor is decoupled at low energies, we are left with Eq.(1) with $`N_f-1`$ flavors. We can also derive an effective superpotential for $`N_f=N_c-1`$ by letting one flavor become heavy in Eq.(1) for $`N_f=N_c`$. For $`N_f=N_c`$, at the SUSY vacuum, we find that $$\mathrm{det}\left(T\right)f(Z)=\mathrm{\Lambda }^{2N_c},$$ (10) which turns out to be the usual quantum constraint $`\mathrm{det}\left(T\right)-B\overline{B}=\mathrm{\Lambda }^{2N_c}`$ if $`\rho =1`$, giving $`f(Z)=1-Z`$.
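Explicitly, for $`N_f=N_c`$ the product of $`T`$'s in Eq. (3) is empty, so $`Z=B\overline{B}/\mathrm{det}(T)`$, and the choice $`\rho =1`$ gives $$\mathrm{det}\left(T\right)f(Z)=\mathrm{det}\left(T\right)\left(1-\frac{B\overline{B}}{\mathrm{det}\left(T\right)}\right)=\mathrm{det}\left(T\right)-B\overline{B},$$ so that Eq. (10) is precisely the quantum-modified constraint of SQCD with $`N_f=N_c`$.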
The discussion goes through in a manner similar to the previous one. In this case, we find $`B=\overline{B}=0`$, leading to $`Z=0`$, and $`T_{N_c}^{N_c}=S/m`$ at the SUSY minimum. As a result, Eq.(1) with $`N_f=N_c-1`$ is derived if $`\mathrm{\Lambda }^{2N_c+1}`$ is identified with $`m\mathrm{\Lambda }^{2N_c}/f(0)`$, where $`f(0)=1`$ by Eq.(7). The induced $`W_{\mathrm{eff}}`$ is nothing but the famous ADS superpotential after $`S`$ is integrated out. It is thus proved that our superpotential with Eq.(7) exhibits the holomorphic decoupling property and provides the correct superpotential for $`N_c<N_f`$. Next, we proceed to discuss what physics is expected in SQCD, especially with $`N_c+2\le N_f\le 3N_c/2`$. It is known that keeping chiral symmetries unbroken requires the duality description using “magnetic” quarks. Therefore, another dynamics, if it is allowed, necessarily induces spontaneous breakdown of chiral symmetries. In our superpotential, Eq.(1), this dynamical feature is more visible when soft SUSY breaking effects are taken into account. Although the elimination of $`S`$ from $`W_{\mathrm{eff}}`$ has no effect on the SUSY vacuum, the evaluation of soft SUSY breaking contributions is better handled by $`W_{\mathrm{eff}}`$ with $`S`$ kept. Since physics very near the SUSY-invariant vacua is our main concern, all breaking masses are kept much smaller than $`\mathrm{\Lambda }`$. The properties of SQCD are then inferred from those examined in the corresponding SUSY-broken vacuum, which is smoothly connected to the SUSY-preserving vacuum. Let us briefly review what was discussed in Ref. in a slightly different manner. To see solely the SUSY breaking effect, we adopt the simplest term that is invariant under the chiral symmetries, which is given by the following mass term, $`\mathcal{L}_{mass}`$, for the scalar quarks, $`\varphi _A^i`$, and antiquarks, $`\overline{\varphi }_i^A`$: $$\mathcal{L}_{mass}=-\sum _{i,A}\left(\mu _L^2|\varphi _A^i|^2+\mu _R^2|\overline{\varphi }_i^A|^2\right).$$ (11) Together with the potential term arising from $`W_{\mathrm{eff}}`$, we find that $`V_{\mathrm{eff}}=G_T\left(\sum _{i=1}^{N_f}|W_{\mathrm{eff};i}|^2\right)+G_B\left(\sum _{i=B,\overline{B}}|W_{\mathrm{eff};i}|^2\right)+G_S|W_{\mathrm{eff};\lambda }|^2+V_{\mathrm{soft}},`$ (12) $`V_{\mathrm{soft}}=(\mu _L^2+\mu _R^2)\mathrm{\Lambda }^2\sum _{i=1}^{N_f}|\pi _i|^2+\mathrm{\Lambda }^{2(N_c-1)}\left(\mu _L^2|\pi _B|^2+\mu _R^2|\pi _{\overline{B}}|^2\right)`$ (13) with the definition $`W_{\mathrm{eff};i}\equiv \partial W_{\mathrm{eff}}/\partial \pi _i`$, etc., where $`\pi _{\lambda ,i,B,\overline{B}}`$, respectively, represent the scalar components of $`S`$, $`T_i^i`$, $`B^{[12\mathrm{}N_c]}`$ and $`\overline{B}_{[12\mathrm{}N_c]}`$. The coefficient $`G_T`$ comes from the Kähler potential, $`K`$, which is assumed to be diagonal, $`\partial ^2K/\partial T_i^k\partial T_j^{l\dagger }=\delta _{ij}\delta _{kl}G_T^{-1}`$ with $`G_T=G_T(T^{\dagger }T)`$, and similarly for $`G_B=G_B(B^{\dagger }B+\overline{B}^{\dagger }\overline{B})`$ and $`G_S=G_S(S^{\dagger }S)`$. Since the dynamics requires that some of the $`\pi `$ acquire non-vanishing VEV's, suppose that one of the $`\pi _i`$ ($`i=1,\mathrm{},N_f`$) develops a VEV, and let this be labeled by $`i=1`$: $`|\pi _1|=\mathrm{\Lambda }_T^2\sim \mathrm{\Lambda }^2`$. This VEV is determined by solving $`\partial V_{\mathrm{eff}}/\partial \pi _i=0`$, yielding $$G_TW_{\mathrm{eff};a}^{\ast }\frac{\pi _\lambda }{\pi _a}\left(1-\alpha _B\right)=G_SW_{\mathrm{eff};\lambda }^{\ast }\left(1-\alpha _B\right)+\beta _BX_B+M^2\left|\frac{\pi _a}{\mathrm{\Lambda }}\right|^2,$$ (14) for $`a=1,\mathrm{},N_c`$, where $`\alpha _B=zf^{\prime }(z)/f(z)`$ and $`\beta _B=z\alpha _B^{\prime }`$ with $`z=\langle 0|Z|0\rangle `$, and $`M^2=\mu _L^2+\mu _R^2+G_T^{\prime }\mathrm{\Lambda }^2\sum _{i=1}^{N_f}\left|W_{\mathrm{eff};i}\right|^2,`$ (15) $`X_B=G_T\sum _{a=1}^{N_c}W_{\mathrm{eff};a}^{\ast }\frac{\pi _\lambda }{\pi _a}-G_B\sum _{x=B,\overline{B}}W_{\mathrm{eff};x}^{\ast }\frac{\pi _\lambda }{\pi _x}.`$ (16) The SUSY breaking effect is specified by $`(\mu _L^2+\mu _R^2)|\pi _1|^2`$ in Eq.(14) through $`M^2`$ because of $`\pi _1\ne 0`$.
This effect is also contained in $`W_{\mathrm{eff};\lambda }`$ and $`X_B`$. From Eq.(14), we find that $$\left|\frac{\pi _a}{\pi _1}\right|^2=1+\frac{(M^2/\mathrm{\Lambda }^2)(\left|\pi _1\right|^2-\left|\pi _a\right|^2)}{G_SW_{\mathrm{eff};\lambda }^{\ast }\left(1-\alpha _B\right)+(M^2/\mathrm{\Lambda }^2)\left|\pi _a\right|^2+\beta _BX_B},$$ (17) which cannot be satisfied by $`\pi _{a\ne 1}=0`$. In fact, $`\pi _{a\ne 1}=\pi _1`$ is a solution to this problem, leading to $`|\pi _a|=|\pi _1|`$ (= $`\mathrm{\Lambda }_T^2`$). Since the classical constraint $`f(z)=0`$ is expected not to be modified at the SUSY minimum, the SUSY breaking effect may arise as a tiny deviation of $`f(z)`$ from 0, which is denoted by $`\xi \equiv 1-z`$ ($`\ll 1`$). By further using the explicit form of Eq.(7) for $`f(z)`$, we find $$\left|\pi _{i=1,\mathrm{},N_c}\right|=\mathrm{\Lambda }_T^2,\quad \left|\pi _{i=N_c+1,\mathrm{},N_f}\right|=\xi \left|\pi _{i=1,\mathrm{},N_c}\right|,\quad |\pi _B|=|\pi _{\overline{B}}|\sim \mathrm{\Lambda }_T^{N_c},\quad \left|\pi _\lambda \right|\sim \mathrm{\Lambda }^3\xi ^{\frac{\rho +N_f-N_c}{N_f-N_c}},$$ (18) in the leading order of $`\xi `$. Therefore, in softly broken SQCD, our superpotential indicates the breakdown of all chiral symmetries. This feature is in accord with the result of the dynamics of ordinary QCD. Does the resulting SUSY-breaking vacuum structure persist in the SUSY limit? At the SUSY minimum with the suggested vacuum of $`|\pi _{a=1,\mathrm{},N_c}|=\mathrm{\Lambda }_T^2`$, we find the classical constraint $`f(z)=0`$, as expected, which is derived by using $`W_{\mathrm{eff};\lambda }=0`$ and by noticing that $`\pi _\lambda /\pi _{i=N_c+1,\mathrm{},N_f}=0`$ from $`W_{\mathrm{eff};i}=0`$. In the SUSY limit defined by $`\xi \to 0`$, the $`\pi _{i=N_c+1,\mathrm{},N_f}`$ vanish to recover the chiral $`SU(N_f-N_c)`$ symmetry, and $`\pi _\lambda `$ vanishes to recover the chiral $`U(1)`$ symmetry. The symmetry breaking is thus described by $$SU(N_f)_L\times SU(N_f)_R\times U(1)_V\times U(1)_A$$ (19) $$\to SU(N_c)_{L+R}\times SU(N_f-N_c)_L\times SU(N_f-N_c)_R\times U(1)_V^{\prime }\times U(1)_A^{\prime },$$ (20) where $`U(1)_V^{\prime }`$ is associated with the number of ($`N_f-N_c`$)-plet superfields of $`SU(N_f-N_c)`$ and $`U(1)_A^{\prime }`$ is associated with the number of $`SU(N_c)_{L+R}`$-adjoint and -singlet fermions and of scalars in the ($`N_f-N_c`$)-plet. The SUSY vacuum characterized by $`|\pi _{a=1,\mathrm{},N_c}|=\mathrm{\Lambda }_T^2`$ yields spontaneous breakdown of $`SU(N_c)_L`$ $`\times `$ $`SU(N_c)_R`$ to $`SU(N_c)_{L+R}`$. In other words, once the spontaneous breaking is triggered, $`|\pi _{i=1,\mathrm{},N_c}|=\mathrm{\Lambda }_T^2`$ is a natural solution to SQCD, where soft SUSY breaking can be consistently introduced. This breaking behavior is translated into the corresponding behavior in the Higgs phase by complementarity. Generating $`SU(N_c)_{L+R}`$ can be achieved by $`\langle 0|\varphi _A^a|0\rangle =\delta _A^a\mathrm{\Lambda }_T`$ and $`\langle 0|\overline{\varphi }_a^A|0\rangle =\delta _a^A\mathrm{\Lambda }_T`$, for $`a,A=1,\mathrm{},N_c`$. The anomaly-matching is trivially satisfied in the Higgs phase. The complementarity shows that massless particles are just supplied by $`T_{a=1,\mathrm{},N_c}^{b=1,\mathrm{},N_c}`$ with Tr($`T_a^b`$) = 0, $`T_{i=N_c+1,\mathrm{},N_f}^{a=1,\mathrm{},N_c}`$, $`T_{a=1,\mathrm{},N_c}^{i=N_c+1,\mathrm{},N_f}`$, $`B^{[12\mathrm{}N_c]}`$ and $`\overline{B}_{[12\mathrm{}N_c]}`$, which are all contained in the Nambu-Goldstone superfields. Therefore, the anomaly-matching is automatically satisfied as a result of the spontaneous breakdown and is a dynamical consequence.
Let us discuss effects of SUSY-invariant mass terms. If some of the quarks with flavors of $`SU(N_f-N_c)`$ are massive, the resulting vacuum structure can be determined from the holomorphic decoupling property. If all quarks with flavors of $`SU(N_f-N_c)`$ are massive, we can further utilize instanton contributions to prescribe the vacuum structure. If our superpotential provides a correct description of SQCD, both results must be consistent with each other. The instanton calculation for the gluino and $`N_f`$ massless quarks and antiquarks concerns the following $`SU(N_c)`$-invariant amplitude: $$\langle (\lambda \lambda )^{N_c}\mathrm{det}(\psi ^i\overline{\psi }_j)\rangle ,$$ (21) where $`\psi `$ ($`\overline{\psi }`$) is a spinor component of $`Q`$ ($`\overline{Q}`$), which can be converted into $$\prod _{a=1}^{N_c}\pi _a=c\mathrm{\Lambda }^{2N_c}\prod _{i=N_c+1}^{N_f}\left(m_i/\mathrm{\Lambda }\right),$$ (22) where $`c`$ ($`\ne `$ 0) is a coefficient to be fixed. At our SUSY minimum, the condition of $`\partial W_{\mathrm{eff}}/\partial \pi _\lambda `$ = 0, together with $`\partial W_{\mathrm{eff}}/\partial \pi _i`$ = 0 for $`i`$ = $`N_c+1,\mathrm{\dots },N_f`$ giving $`\pi _\lambda /\pi _i`$ = $`m_i`$, reads $$f(z)=\prod _{a=1}^{N_c}\left(\mathrm{\Lambda }^2/\pi _a\right)\prod _{i=N_c+1}^{N_f}\left(m_i/\mathrm{\Lambda }\right).$$ (23) By combining these two relations, we observe that the mass dependence in $`f(z)`$ is completely cancelled, and we derive $`c`$ = 1/$`f(z)`$, giving $`f(z)\ne 0`$ instead of $`f(z)`$ = 0 as in the massless SQCD. Since $`f(z)`$ = $`(1-z)^\rho `$, $`z\ne 1`$ is required and the classical constraint, corresponding to $`z`$ = 1, is modified. These VEV's are the solution to $$\mathrm{det}(\stackrel{~}{T})f(\stackrel{~}{Z})=\stackrel{~}{\mathrm{\Lambda }}^{2N_c},$$ (24) where $`\stackrel{~}{\mathrm{\Lambda }}^{2N_c}`$ = $`\mathrm{\Lambda }^{3N_c-N_f}\prod _{i=N_c+1}^{N_f}m_i`$, which is the quantum constraint (10) for $`N_f`$ = $`N_c`$ and which is also consistent with the successive use of the holomorphic decoupling. Therefore, our superpotential for $`N_c+2\le N_f\le 3N_c/2`$, supplemented by the instanton contributions, is shown to provide a consistent vacuum structure for SQCD with $`N_f`$ = $`N_c`$ compatible with the holomorphic decoupling. A comment is in order for the case with $`\rho `$ = 1. The quantum constraint for $`N_f`$ = $`N_c`$, Eq.(10), is rewritten as $$\mathrm{det}(T)^{1-\rho }\left[\mathrm{det}(T)-B\overline{B}\right]^\rho =\mathrm{\Lambda }^{2N_c}.$$ (25) If $`\rho \ne 1`$, $`\mathrm{det}(T)\ne 0`$ is required, which shows the spontaneous breakdown of chiral $`SU(N_f)`$ symmetry. There is no room for $`\mathrm{det}(T)`$ = 0. In contrast, if $`\rho `$ = 1 as in Seiberg's choice, there are two options: one for the spontaneous breakdown of chiral $`SU(N_f)`$ symmetry and the other for that of the vector $`U(1)`$ symmetry of the baryon number. The latter case corresponds to $`z=\mathrm{\infty }`$, which means that $`z\prod _{a=1}^{N_c}\pi _a`$ cannot be separated into $`z`$ and $`\prod _{a=1}^{N_c}\pi _a`$, and it can be realized by taking $$\prod _{a=1}^{N_c}\pi _a=0,\pi _B\pi _{\overline{B}}=-\mathrm{\Lambda }^{2N_c}\prod _{i=N_c+1}^{N_f}\left(m_i/\mathrm{\Lambda }\right)(=-\stackrel{~}{\mathrm{\Lambda }}^{2N_c}),$$ (26) as instanton contributions. 
In summary, we have demonstrated that dynamical symmetry breaking in the “electric” SQCD with $`N_c+2\le N_f\le 3N_c/2`$ ($`N_c`$ $`>`$ 2) can be described by $$W_{\mathrm{eff}}=S\left\{\mathrm{ln}\left[\frac{S^{N_c-N_f}\mathrm{det}\left(T\right)\left(1-Z\right)^\rho }{\mathrm{\Lambda }^{3N_c-N_f}}\right]+N_f-N_c\right\}(\rho >0)$$ (27) with $$Z=\frac{BT^{N_f-N_c}\overline{B}}{\mathrm{det}(T)},$$ (28) which turns out to be $`W_{\mathrm{eff}}^{\mathrm{ADS}}`$ of the ADS-type: $$W_{\mathrm{eff}}^{\mathrm{ADS}}=(N_f-N_c)\left[\frac{\mathrm{det}\left(T\right)\left(1-Z\right)^\rho }{\mathrm{\Lambda }^{3N_c-N_f}}\right]^{1/(N_f-N_c)}.$$ (29) This superpotential exhibits 1. the holomorphic decoupling property, 2. the spontaneous breakdown of chiral $`SU(N_c)`$ symmetry and restoration of chiral $`SU(N_f-N_c)`$ symmetry described by $`SU(N_f)_L\times SU(N_f)_R\times U(1)_V\times U(1)_A\rightarrow SU(N_c)_{L+R}\times SU(N_f-N_c)_L\times SU(N_f-N_c)_R\times U(1)_V^{\prime }\times U(1)_A^{\prime }`$, 3. a consistent anomaly-matching property due to the emergence of the Nambu-Goldstone superfields, and 4. the correct vacuum structure for $`N_f`$ = $`N_c`$ reproduced by instanton contributions when all quarks with flavors of $`SU(N_f-N_c)`$ become massive. The breaking of chiral $`SU(N_f)`$ symmetry to $`SU(N_c)_{L+R}`$ includes the spontaneous breakdown of the vectorial $`SU(N_f)_{L+R}`$ symmetry, which has also been advocated in Ref.. The dependence of the SUSY-breaking effect on the various VEV's can be summarized as $`\left|\langle 0|T_i^i|0\rangle \right|=\{\begin{array}{c}\mathrm{\Lambda }_T^2(i=1,\mathrm{\dots },N_c)\hfill \\ \xi \mathrm{\Lambda }_T^2(i=N_c+1,\mathrm{\dots },N_f)\hfill \end{array},`$ (32) $`\left|\langle 0|B^{[12\mathrm{\dots }N_c]}|0\rangle \right|=\left|\langle 0|\overline{B}_{[12\mathrm{\dots }N_c]}|0\rangle \right|\sim \mathrm{\Lambda }_T^{N_c},\left|\langle 0|S|0\rangle \right|\sim \mathrm{\Lambda }_T^3\xi ^{\frac{\rho +N_f-N_c}{N_f-N_c}},`$ (33) and the classical constraint of $`1-Z`$ = 0 is modified into $$1-Z=\xi ,$$ (34) where $`\xi \rightarrow 0`$ gives the SUSY limit. The parameter $`\rho `$ will be fixed if we find “real” properties of SQCD beyond those inferred from arguments based on the symmetry principle alone. The choice of $$\rho =1$$ (35) seems natural since, in this case, $`W_{\mathrm{eff}}^{\mathrm{ADS}}`$ with $`N_f`$ = $`N_c+1`$ correctly reproduces Seiberg's superpotential. Furthermore, the superpotential derived in Ref.: $$W_{\mathrm{eff}}=S\left(\mathrm{ln}𝒵+N_f-N_c+\sum _{n=1}^{\mathrm{\infty }}c_n𝒵^n\right)$$ (36) with $$𝒵=\frac{S^{N_c-N_f}\mathrm{det}\left(T\right)\left(1-Z\right)}{\mathrm{\Lambda }^{3N_c-N_f}}$$ (37) has a structure similar to that of Eq.(27). This form implies $`\rho `$ = 1, although the additional terms may yield physics different from ours. It should be stressed that, in addition to the commonly accepted physics of “magnetic” SQCD, where chiral $`SU(N_f)`$ symmetry is restored, our suggested physics of spontaneous chiral symmetry breakdown is expected to be realized in “electric” SQCD at least for $`N_c+2\le N_f\le 3N_c/2`$. Therefore, we expect that there are two phases in SQCD: one with unbroken chiral symmetries realized in “magnetic” SQCD and the other with spontaneously broken chiral symmetries realized in “electric” SQCD.
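As a quick consistency check of the step from Eq.(27) to Eq.(29), one can eliminate $`S`$ numerically. The sketch below is our own illustration (the numerical values for det($`T`$), $`Z`$, $`\rho `$ and $`\mathrm{\Lambda }`$ are arbitrary toy inputs); it verifies that evaluating Eq.(27) at the stationary point $`\partial W_{\mathrm{eff}}/\partial S`$ = 0 reproduces the ADS-type form of Eq.(29).

```python
import numpy as np

# Toy check that eliminating S from Eq.(27) yields Eq.(29).
# All numerical inputs below are arbitrary illustration values.
Nc, Nf, rho, Lam = 3, 5, 1.0, 1.0
detT, Z = 0.7, 0.2

# dW_eff/dS = ln[S^(Nc-Nf) detT (1-Z)^rho / Lam^(3Nc-Nf)] = 0
# => S^(Nf-Nc) = detT (1-Z)^rho / Lam^(3Nc-Nf)
S = (detT * (1.0 - Z)**rho / Lam**(3*Nc - Nf))**(1.0 / (Nf - Nc))

W_full = S * (np.log(S**(Nc - Nf) * detT * (1.0 - Z)**rho
              / Lam**(3*Nc - Nf)) + (Nf - Nc))
W_ads = (Nf - Nc) * (detT * (1.0 - Z)**rho
                     / Lam**(3*Nc - Nf))**(1.0 / (Nf - Nc))

assert np.isclose(W_full, W_ads)   # the two expressions agree
```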
# X-rays from the Highly Polarized Broad Absorption Line QSO CSO 755 ## 1 Introduction The ejection of matter at moderate to high velocities is a common and perhaps universal phenomenon of Quasi-Stellar Objects (QSOs). One of the main manifestations of QSO outflows is the blueshifted UV Broad Absorption Lines (BALs) seen in ∼10% of optically selected QSOs, the BAL QSOs (e.g., Weymann 1997). X-ray spectroscopy of BAL QSOs is potentially important for studying their outflows and nuclear geometries, but the study of BAL QSOs in the X-ray regime has not yet matured, largely due to low X-ray fluxes (e.g., Green & Mathur 1996; Gallagher et al. 1999). Only ∼9 BAL QSOs have been detected in X-rays at present. The current data suggest that the X-ray emission from BAL QSOs suffers from significant intrinsic absorption, with many BAL QSOs having absorption column densities ≳ (1–5)$`\times 10^{23}`$ cm<sup>-2</sup>. Optical brightness is not a good predictor of X-ray brightness for BAL QSOs; some optically faint BAL QSOs have been clearly detected (e.g., PHL 5200; $`V=18.1`$) while some of the optically brightest (e.g., PG $`1700+518`$; $`V=15.1`$) remain undetected in deep 0.1–10 keV observations. In the limited data available at present, however, there is a suggestion that the BAL QSOs with high (≳2%) optical continuum polarization may be the X-ray brighter members of the class (see §4 of Gallagher et al. 1999). A polarization/X-ray flux connection, if indeed present, would provide a clue about the geometry of matter in BAL QSO nuclei (see §3). To improve understanding of the X-ray properties of BAL QSOs and examine the possible polarization/X-ray flux connection, we have started a program to observe highly polarized BAL QSOs in X-rays. An excellent target for this program was the Case Stellar Object 755 (CSO 755; $`z=2.88`$; Sanduleak & Pesch 1989), which has $`V=17.1`$ (e.g., Barlow 1993) and is a representative, ‘bona-fide’ BAL QSO in terms of its luminosity and UV absorption properties (e.g., Glenn et al. 1994). Its continuum polarization is high (∼3.8–4.7%; only 8/53 BAL QSOs studied by Schmidt & Hines 1999 had $`>2`$%) and rises to the blue. We adopt $`H_0=70`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0=\frac{1}{2}`$. The Galactic neutral hydrogen column density towards CSO 755 is $`(1.6\pm 0.4)\times 10^{20}`$ cm<sup>-2</sup> (Stark et al. 1992). ## 2 Observations, Analysis and Results We observed CSO 755 with BeppoSAX (Boella et al. 1997) on 1999 Feb 2. We will focus on the results from the Medium-Energy Concentrator Spectrometers (MECS; 1.8–10 keV; 35.2 ks exposure) and Low-Energy Concentrator Spectrometer (LECS; 0.1–4 keV; 12.7 ks exposure), since the data from the other instruments are not useful for such a faint source. Our energy coverage corresponds to 0.4–39 keV in the rest frame. The observation went smoothly, and the resulting data were processed with Version 1.2 of the BeppoSAX Science Data Center (BSDC) pipeline. We have adopted the standard reduction methods recommended by the BSDC (Fiore, Guainazzi & Grandi 1999), and we do not observe any irregular background variability. The screened events resulting from the above reduction were analyzed using xselect. We made full-band images for each of the detectors as well as combined MECS2+MECS3 images. 
An X-ray source consistent with the precise optical position of CSO 755 is detected with high statistical significance in our MECS2, MECS3 and MECS2+MECS3 images (e.g., Figure 1), but it is not detected by the LECS. Given the observed flux (see below), the probability of a confusing source is ≲ $`5\times 10^{-3}`$, and no particularly suspicious sources are found in the Palomar Optical Sky Survey or the ned/simbad catalogs. To determine MECS count rates, we have used a $`3^{\prime }`$-radius circular source cell centered on the X-ray centroid. For background subtraction, we use five $`3^{\prime }`$-radius circular cells near CSO 755 (we have not used an annulus because a weak nearby source would fall inside the annulus). We have corrected for energy-dependent vignetting of the background following §3.1.5 of Fiore et al. (1999). In the MECS2+MECS3 full-band (1.8–10 keV) image, we detect $`54.3\pm 14.3`$ counts from CSO 755 for a MECS2+MECS3 count rate of $`(1.5\pm 0.4)\times 10^{-3}`$ count s<sup>-1</sup>. The LECS $`3\sigma `$ upper limit on the 0.1–1.8 keV count rate is $`<1.7\times 10^{-3}`$ count s<sup>-1</sup> (computed using a circular source cell with a $`5^{\prime }`$ radius). While we do not have enough photons for spectral fitting, we have analyzed MECS2+MECS3 images in three observed-frame energy bands to place crude constraints on spectral shape: 1.8–3 keV (band 1; channels 40–66), 3–5.5 keV (band 2; channels 67–120), and 5.5–10 keV (band 3; channels 121–218). CSO 755 is detected in all bands, although with varying degrees of statistical significance. We give the corresponding count rates in Table 1, and the Poisson probabilities of false detections in bands 1, 2 and 3 are $`6.8\times 10^{-5}`$, $`4.8\times 10^{-3}`$ and $`2.8\times 10^{-2}`$, respectively. The detection in band 3 (21–39 keV in the rest frame) is notable. To compare the observed spectral shape with spectral models, we have employed a band-fraction diagram similar to those used in studies of the diffuse soft X-ray background (e.g., see §5 of Burstein et al. 1977). We first consider a simple power-law model with photon index, $`\mathrm{\Gamma }=`$ 1.7–1.9 (a typical, representative range for radio-quiet QSOs; e.g., Reeves et al. 1997), and neutral absorption at $`z=2.88`$. For this model, Figure 2 shows that column densities less than $`7\times 10^{23}`$ cm<sup>-2</sup> are most consistent with our data. Alternatively, for small column densities, values of $`\mathrm{\Gamma }`$ down to ∼0.8 are most consistent with our data (i.e. the spectrum could be as flat as that for a ‘reflection-dominated’ source). Incorporating the LECS upper limit into similar analyses does not significantly tighten our constraints. If we consider a $`\mathrm{\Gamma }=1.9`$ power-law model with the Galactic column density, we calculate an observed-frame 2–10 keV flux of $`1.3\times 10^{-13}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, corresponding to a rest-frame 7.8–39 keV luminosity of $`4.0\times 10^{45}`$ erg s<sup>-1</sup>. These two quantities are relatively insensitive to the internal column density for $`N_\mathrm{H}<5\times 10^{23}`$ cm<sup>-2</sup>. If we extrapolate this model into the rest-frame 2–10 keV band, the luminosity is $`3.4\times 10^{45}`$ erg s<sup>-1</sup>. We have also calculated $`\alpha _{\mathrm{ox}}`$ (the slope of a hypothetical power law between 3000 Å and 2 keV in the rest frame), since this parameter can be used as a statistical predictor of the presence of X-ray absorption (e.g., Brandt, Laor & Wills 1999). 
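The conversions quoted above are compact enough to sketch. The following is our own back-of-the-envelope cross-check, not the pipeline used for the analysis: it converts the observed-frame 2–10 keV flux into the rest-frame 7.8–39 keV luminosity for the adopted cosmology, and implements the $`\alpha _{\mathrm{ox}}`$ definition (the flux densities passed to alpha_ox are placeholders; the measured values are discussed below).

```python
import numpy as np

c_km_s, H0, z = 299792.458, 70.0, 2.88

# Mattig relation for q0 = 1/2: d_L = (2c/H0) [(1+z) - sqrt(1+z)]
d_L_cm = (2.0 * c_km_s / H0) * ((1 + z) - np.sqrt(1 + z)) * 3.086e24

# The observed 2-10 keV band maps onto 2(1+z)-10(1+z) = 7.8-39 keV
# in the rest frame, so the band luminosity is simply 4 pi d_L^2 F.
F_2_10 = 1.3e-13                                   # erg cm^-2 s^-1
L_7p8_39 = 4.0 * np.pi * d_L_cm**2 * F_2_10
print(f"L(7.8-39 keV) ~ {L_7p8_39:.1e} erg/s")     # close to the quoted 4.0e45

def alpha_ox(f_nu_2keV, f_nu_3000A):
    """Slope of a power law f_nu ~ nu^(-alpha_ox) between
    rest-frame 3000 A and 2 keV."""
    log_nu_ratio = np.log10(2000.0 / (12398.4 / 3000.0))   # ~2.68 dex
    return -np.log10(f_nu_2keV / f_nu_3000A) / log_nu_ratio

# hypothetical flux densities (erg cm^-2 s^-1 Hz^-1), illustration only:
print(f"alpha_ox = {alpha_ox(1.0e-31, 2.0e-27):.2f}")      # ~1.6
```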
We calculate the rest-frame 3000 Å flux density using the observed-frame 7500 Å flux density of Glenn et al. (1994) and a continuum spectral index of $`\alpha =0.5`$. The rest-frame flux density at 2 keV is more difficult to calculate since we do not have strong constraints on X-ray spectral shape or a BeppoSAX detection at $`\frac{2\mathrm{keV}}{(1+z)}=0.52`$ keV (although see our discussion of the ROSAT data below). If we normalize a $`\mathrm{\Gamma }=1.9`$ power-law model with Galactic absorption to the rest-frame 7–39 keV count rate (corresponding to 1.8–10 keV in the observed frame), we calculate $`\alpha _{\mathrm{ox}}=1.58`$. Of course, this $`\alpha _{\mathrm{ox}}`$ value is really telling us about the rest-frame 7–39 keV emission rather than a directly measured flux density at 2 keV. We have searched for any Einstein, ROSAT or ASCA pointings that serendipitously contain CSO 755, but unfortunately there is none. We have also analyzed the data from the ROSAT All-Sky Survey (RASS). CSO 755 was observed for 939 s during the RASS between 1990 Dec 31 and 1991 Jan 4 (a relatively long RASS exposure; see Figure 2 of Voges et al. 1999). There appears to be an ∼7-photon enhancement over the average background at the position of CSO 755. Comparative studies of RASS and pointed data show that ∼90% of such 7-photon RASS sources are real X-ray sources rather than statistical fluctuations, and CSO 755 is included in the Max-Planck-Institut für Extraterrestrische Physik RASS faint source catalog (Voges et al., in preparation) with a likelihood of 11 (see Cruddace, Hasinger & Schmitt 1988). However, to be appropriately cautious we shall treat the probable RASS detection as tentative. The probable RASS detection corresponds to a vignetting-corrected flux in the observed 0.1–2.4 keV band of $`1.1\times 10^{-13}`$ erg cm<sup>-2</sup> s<sup>-1</sup> (for a power-law model with $`\mathrm{\Gamma }=1.9`$ and the Galactic absorption column). Given the relative effective areas and imaging capabilities of the ROSAT PSPC and BeppoSAX LECS, a RASS detection is consistent with the LECS upper limit given in Table 1 (see Figure 2 of Parmar et al. 1999). Provided there is not substantial intrinsic X-ray absorption below the MECS band, the relative RASS and MECS fluxes are entirely plausible. If we use the ROSAT flux to normalize a $`\mathrm{\Gamma }=1.9`$ power law with Galactic absorption, we calculate $`\alpha _{\mathrm{ox}}=1.62`$. If ROSAT has indeed detected CSO 755, the ROSAT band has the advantage that it directly constrains the rest-frame 2 keV flux density. ## 3 Discussion and Conclusions Our BeppoSAX and probable ROSAT detections of CSO 755 make it the highest redshift as well as the most optically luminous BAL QSO detected in X-rays. It was selected for study not based upon high optical flux but rather based on its high (observed-frame) optical continuum polarization (3.8–4.7%; hereafter OCP), and it is X-ray brighter than several other BAL QSOs that have ∼4–6 times its $`V`$-band flux (compare with Gallagher et al. 1999). While its higher X-ray flux could partially result from the higher redshift providing access to more penetrating X-rays (i.e. a ‘negative $`K`$-correction’), there is also suggestive evidence that the BAL QSOs with high OCP may be the X-ray brighter members of the class. We have investigated the OCP percentages of the 10 BAL QSOs (including CSO 755) with reliable X-ray detections using the data from Berriman et al. 
(1990), Hutsemékers, Lamy & Remy (1998), Ogle (1998) and Schmidt & Hines (1999). The OCP percentages have a mean of $`2.28\pm 0.28`$, a standard deviation of 0.88, and a median of 2.24. These values indeed place the X-ray detected BAL QSOs toward the high end of the BAL QSO OCP distribution function (compare with §2 of Schmidt & Hines 1999). At present, however, our nonparametric testing is unable to prove that the X-ray detected BAL QSOs have higher OCPs than those that are undetected in sensitive X-ray observations. This is due to small sample sizes as well as concerns about possible secondary correlations and observational biases. Many of the BAL QSOs with high-quality X-ray data have been observed because they have exceptional properties (e.g., low-ionization absorption, extreme Fe ii emission), and thus the currently available sample is not necessarily representative of the population as a whole. In addition, the current X-ray and polarization observations of BAL QSOs span a wide range of rest-frame energy/wavelength bands due to redshift and instrumentation differences (redshifts for the 10 X-ray detected BAL QSOs run from $`z=`$ 0.042–2.88). At higher redshifts one samples harder X-rays that are less susceptible to absorption. Also at higher redshifts, observed-frame OCP measurements tend to sample shorter wavelengths, and many QSOs show polarization that rises towards the blue. Systematic X-ray and polarimetric observations of uniform, well-defined BAL QSO samples are needed to examine this issue better. A polarization/X-ray flux connection could be physically understood if the direct lines of sight into the X-ray nuclei of BAL QSOs were usually blocked by extremely thick matter (≳$`10^{24}`$ cm<sup>-2</sup>). In this case, we could only see X-rays when there is a substantial amount of electron scattering in the nuclear environment by a ‘mirror’ of moderate Thomson depth (Ogle 1998 suggests that there is a large range of mirror optical depths among the BAL QSO population). The scattering would provide a periscopic, indirect view into the compact X-ray emitting region while also polarizing some of the more extended optical continuum emission (see Figure 3). Measured X-ray column densities would then provide information only about the gas along the indirect line of sight. For CSO 755, the X-ray scattering medium would need to be located on fairly small scales (≲ a few light weeks) to satisfy the spectropolarimetric constraints of Glenn et al. (1994) and Ogle (1998). These show that the material scattering the optical light is located at smaller radii than both the Broad Line Region (BLR) and BAL region. Our calculations in §2 give an $`\alpha _{\mathrm{ox}}`$ value of ∼1.6, although our only direct constraint on the rest-frame 2 keV flux density is via the probable ROSAT detection. Our $`\alpha _{\mathrm{ox}}`$ value is entirely consistent with those of typical radio-quiet QSOs (compare with Figure 1 of Brandt, Laor & Wills 1999), and it is smaller than those of many BAL QSOs (e.g., Green & Mathur 1996). A ‘normal’ $`\alpha _{\mathrm{ox}}`$ value would appear somewhat surprising in the context of the scattering model of the previous paragraph, since one would expect the X-ray flux level to be reduced if the direct line of sight is blocked. 
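To make that expected reduction concrete, here is a toy estimate of our own (the Thomson depth and covering fraction are assumed values, not measurements): a mirror of depth $`\tau _\mathrm{T}`$ subtending a fraction $`\mathrm{\Omega }/4\pi `$ of the sky redirects roughly $`(\mathrm{\Omega }/4\pi )(1-e^{-\tau _\mathrm{T}})`$ of the nuclear X-rays, and the corresponding steepening of $`\alpha _{\mathrm{ox}}`$ follows from the ∼2.7 dex optical-to-X-ray frequency lever arm.

```python
import numpy as np

# Toy scattering-efficiency estimate; tau_T and omega_frac are assumptions.
tau_T, omega_frac = 0.3, 0.5
f_scatt = omega_frac * (1.0 - np.exp(-tau_T))      # scattered fraction
d_alpha = -np.log10(f_scatt) / 2.685               # shift in alpha_ox
print(f"f_scatt ~ {f_scatt:.2f}, alpha_ox steepened by ~ {d_alpha:.2f}")
```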
However, there is enough dispersion in the $`\alpha _{\mathrm{ox}}`$ distribution that the observed value of $`\alpha _{\mathrm{ox}}`$ does not cause a serious problem, provided the scattering is efficient. The scattering mirror would need to subtend a fairly large solid angle (as seen by the compact X-ray source) and have a moderate Thomson depth (say $`\tau _\mathrm{T}\sim 0.3`$). In addition, there may be ‘attenuation’ at 3000 Å (in the sense of §2 of Goodrich 1997) that helps to flatten $`\alpha _{\mathrm{ox}}`$. Finally, we note that CSO 755 has a high enough X-ray flux to allow moderate-quality X-ray spectroscopy and variability studies with XMM. It is currently scheduled for a 5 ks XMM observation, but this is an inadequate exposure time for such work. A longer XMM exposure would allow a study of any iron K spectral features, and the high redshift of CSO 755 moves the iron K complex right to the peak of the XMM EPIC spectral response. If we are viewing a large amount of scattered X-ray flux in CSO 755 and other high-polarization BAL QSOs, then narrow iron K lines with large equivalent widths may be produced via fluorescence and resonant scattering (as for the much less luminous Seyfert 2 galaxies; e.g., Krolik & Kallman 1987). Such lines could allow direct detection of the X-ray scattering medium, and line energies and blueshifts/redshifts would constrain the ionization state and dynamics of the mirror. We would also not expect rapid (≲1 day) and large-amplitude X-ray variability if most of the X-ray flux is scattered. We thank J. Halpern, J. Nousek, W. Voges and B. Wills for helpful discussions, and we thank H. Ebeling for the use of his idl software. We acknowledge the support of NASA LTSA grant NAG5-8107 (WNB), Italian Space Agency contract ASI-ARS-98-119 and MURST grant Cofin-98-02-32 (AC), NASA grant NAG5-4826 and the Pennsylvania Space Grant Consortium (SCG), and the fund for the promotion of research at the Technion (AL).
# Resonances and higher twist in polarized lepton-nucleon scattering<sup>1</sup>

<sup>1</sup>Work supported in part by DFG and BMBF

## 1 Introduction High energy lepton scattering is a well established tool to investigate the structure of the nucleon. We restrict ourselves to charged leptons ($`e`$ or $`\mu `$); the exchanged virtual photon transfers four-momentum $`q^\mu =(q_0,𝒒)`$, with the resolution determined by the virtuality $`Q^2=-q^2=𝒒^2-q_0^2`$. At $`Q^2`$ ≳ $`1\mathrm{GeV}^2`$ deep-inelastic scattering resolves the partonic constituents (quarks and gluons) of the nucleon. At $`Q^2`$ ≲ $`1\mathrm{GeV}^2`$, on the other hand, the excitation of nucleon resonances and multi-pion continuum states is important. Exploring the transition between partonic and hadronic scales is of great significance to our understanding of the nucleon. The aim of the present paper is to discuss polarized lepton-nucleon scattering in kinematic regions where both hadron and parton degrees of freedom are expected to coexist. The response of the nucleon is expressed in terms of the hadronic tensor $$W_{\mu \nu }(x,Q^2)=\frac{1}{4\pi }\sum _X(2\pi )^4\delta ^4(P+q-P_X)\langle N(P,S)|J_\mu (0)|X(P_X,\lambda _X)\rangle \langle X(P_X,\lambda _X)|J_\nu (0)|N(P,S)\rangle =W_{\mu \nu }^{(S)}+W_{\mu \nu }^{(A)}.$$ (1) The matrix elements of the electromagnetic current $`J_\mu `$ describe the transition of a nucleon with four-momentum $`P`$, invariant mass $`M`$ ($`P^2=M^2`$) and spin $`S`$ to a hadronic final state $`X`$ with four-momentum $`P_X`$ and polarization $`\lambda _X`$. The sum in (1) implies an integration over three-momentum, $`\frac{d^3P_X}{(2\pi )^32P_{X0}}`$, and the normalization of $`|N\rangle `$ and $`|X\rangle `$ is $`\langle N(P^{\prime },S)|N(P,S)\rangle =2P_0(2\pi )^3\delta ^3(𝑷^{\prime }-𝑷)\delta _{S,S^{\prime }}`$. The symmetric part $`W_{\mu \nu }^{(S)}`$ involves the spin-independent structure functions $`F_{1,2}`$ measured in the scattering of unpolarized particles. The antisymmetric term, $$W_{\mu \nu }^{(A)}=iϵ_{\mu \nu \lambda \sigma }q^\lambda \left[\frac{g_1(x,Q^2)}{P\cdot q}S^\sigma +\frac{g_2(x,Q^2)}{(P\cdot q)^2}(P\cdot qS^\sigma -q\cdot SP^\sigma )\right],$$ (2) introduces the spin structure functions $`g_1`$ and $`g_2`$. The nucleon spin vector is $`S^\sigma =\frac{1}{2}\overline{u}(P,S)\gamma ^\sigma \gamma _5u(P,S)`$ with Dirac spinors normalized as $`\overline{u}u=2M`$. The structure functions depend on the Bjorken variable $`x=Q^2/(2P\cdot q)`$ and on $`Q^2`$. Spin structure function data have been taken at SLAC, CERN and DESY, primarily in the partonic high-$`Q^2`$ range. Polarized deep-inelastic scattering in the resonance region was measured by the E143 collaboration at SLAC. In the first part of our study we combine these data with other available information from the photo- and leptoproduction of nucleon resonances and investigate their contribution to the moments of the proton spin structure function $`g_1`$. The influence of resonances and non-resonant low-mass excitations turns out to be quite significant for $`Q^2`$ ≲ $`4\mathrm{GeV}^2`$, as we shall demonstrate. For example, at $`Q^2=2\mathrm{GeV}^2`$ they account for as much as 20% of the first moment of $`g_1`$. Similar observations have been made for unpolarized deep-inelastic scattering. In the second part we use the QCD operator product expansion and extract twist-4 matrix elements from the leading moments of $`g_1`$. 
For the first moment such an analysis has been carried out in great detail in ref.. We find substantial higher-twist contributions to the first, third and fifth moments of $`g_1`$ for $`Q^2`$ ≲ 2, 4 and 10 GeV<sup>2</sup>, respectively. We examine target mass effects and investigate the different components of the higher-twist pieces of $`g_1`$. It turns out that contributions from elastic scattering, low-mass hadronic excitations and the partonic high-mass continuum are all of similar importance. We comment on the applicability of the twist expansion and recall basic ideas of parton-hadron duality. Altogether our results emphasize the need for high-precision experiments in the resonance region, to be performed at the Jefferson laboratory. ## 2 Twist expansion of $`g_1`$ In this section we briefly summarize results from the operator product expansion for the nucleon spin structure function $`g_1`$ (for details see e.g. ). Following the conventions of ref. we introduce the $`n`$-th moment of $`g_1`$ as: $$g_1^{(n)}(Q^2)=\int _0^1dxx^{n-1}g_1(x,Q^2)(\mathrm{with}\mathrm{n}=1,3,5,\mathrm{\dots }).$$ (3) Note that the upper limit of integration includes the contribution from elastic scattering. Its presence results from the fact that the operator product expansion, applied to deep-inelastic scattering, implicitly involves a sum over all final hadronic states including the nucleon itself. The importance of the elastic component in a QCD analysis of structure function moments has been emphasized in ref.. At large momentum transfers, $`Q^2\gg \mathrm{\Lambda }_{\mathrm{QCD}}^2`$, the moments (3) can be written in terms of the twist expansion: $$g_1^{(n)}(Q^2)=\sum _{\tau =2,4,\mathrm{\dots }}\frac{\mu _\tau ^{(n)}(Q^2)}{Q^{\tau -2}}.$$ (4) “Twist” is a useful bookkeeping device to classify the light-cone singularity of the coefficients in the QCD operator product expansion. Let a local operator in this expansion be a Lorentz tensor of rank $`r`$ with (mass) dimension $`d`$, and let $`\sigma `$ ($`\le r`$) be the “spin” associated with this operator. Then twist is defined as $`\tau =d-\sigma `$. The functions $`\mu _\tau ^{(n)}(Q^2)`$ are related to nucleon matrix elements of quark and gluon operators with maximal twist $`\tau `$. Their leading (logarithmic) $`Q^2`$-dependence can be calculated perturbatively as a series expansion in the strong coupling constant $`\alpha _s`$. It should be mentioned that, due to the asymptotic nature of QCD perturbation theory, a systematic separation of the twist expansion and the perturbation series for $`\mu _\tau ^{(n)}`$ is non-trivial and still a matter of ongoing investigations (for detailed discussions see e.g. ref.). Up to corrections of order $`1/Q^4`$ one finds: $$g_1^{(n)}(Q^2)=\frac{1}{2}a_{n-1}(Q^2)+\frac{M^2}{Q^2}\frac{n(n+1)}{2(n+2)^2}\left(na_{n+1}(Q^2)+4d_{n+1}(Q^2)\right)+\frac{4}{9}\frac{M^2}{Q^2}f_{n+1}(Q^2)+𝒪\left(\frac{M^4}{Q^4}\right)$$ The coefficients $`a_n`$ represent the genuine twist-$`2`$ contributions to $`g_1^{(n)}`$. They depend only logarithmically on $`Q^2`$ and dominate for $`Q^2`$ much larger than a typical hadronic scale, say the squared nucleon mass $`M^2`$. The second term in (2) arises from target mass corrections. 
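For orientation, the moments of Eq.(3) are plain one-dimensional quadratures once a parametrization of $`g_1`$ is given. The sketch below uses a simple valence-like toy shape of our own choosing (not one of the NLO fits used later); the elastic $`x=1`$ piece would be added separately, cf. Eq.(16) below.

```python
import numpy as np
from scipy.integrate import quad

def g1_toy(x):
    # illustrative shape only, not a fit to data
    return 1.2 * x**0.6 * (1.0 - x)**3

def moment(n):
    # n-th moment of Eq.(3); the elastic x = 1 contribution
    # is added on top in the full analysis
    val, _ = quad(lambda x: x**(n - 1) * g1_toy(x), 0.0, 1.0)
    return val

for n in (1, 3, 5):
    print(n, moment(n))
```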
The target mass corrections are determined by the twist-$`2`$ pieces $`a_n`$ and the twist-$`3`$ corrections $`d_n`$ related to moments of the spin structure function $`g_2`$: $$d_{n-1}=2g_1^{(n)}+\frac{2n}{n-1}g_2^{(n)}+𝒪\left(\frac{M^4}{Q^4}\right).$$ (6) The true twist-$`4`$ contributions in eq.(2) are denoted by $`f_{n+1}`$. For higher moments, $`n>1`$, several matrix elements of twist-$`4`$ are involved. Their sum gives the coefficient $`f_{n+1}`$ in (2). In our work twist-$`2`$ contributions are defined through moments of presently available NLO parametrizations of $`g_1`$. The extraction of higher-twist contributions from structure function data has been a subject of recent studies. The active interest in these quantities derives from the fact that they are related to matrix elements which are sensitive to quark-gluon interactions in the nucleon. For example, one has: $$2f_2(Q^2)M^2S^\mu =\sum _fe_f^2\langle N(P,S)|g\overline{\psi }_f\stackrel{~}{G}^{\mu \nu }\gamma _\nu \psi _f|N(P,S)\rangle .$$ (7) The sum is taken over all quark fields $`\psi _f`$ with flavor $`f`$ and charge $`e_f`$, and $`\stackrel{~}{G}^{\mu \nu }`$ stands for the dual gluon field strength tensor ($`g`$ denotes the QCD coupling strength). ## 3 Helicity amplitudes In this paper we investigate contributions to the proton spin structure function $`g_1^p`$ resulting from the electro-production of nucleon resonances, as well as from the production of continuum states in the deep-inelastic regime. Resonance contributions are conveniently described in terms of helicity amplitudes: $$G_m=\frac{1}{2M}\langle X(P_X,\lambda ^{\prime }=m-\frac{1}{2})\left|ϵ^m\cdot J(0)\right|N(P,\lambda =\frac{1}{2})\rangle .$$ (8) We choose $`𝒒/|𝒒|`$ as the spin quantization axis. The amplitude $`G_m`$ represents the production of a hadronic state $`X`$ with spin projection $`\lambda ^{\prime }`$ following the absorption of a virtual photon with polarization (helicity) $`m=\pm 1,0`$ on a nucleon with spin projection $`\lambda =1/2`$. The photon polarization vectors are $`ϵ^\pm =(0,1,\pm i,0)/\sqrt{2}`$ and $`ϵ^0=(|𝒒|,0,0,\nu )/Q`$, with $`Q=\sqrt{Q^2}`$. Combining eqs.(1) and (8) gives: $`g_1={\displaystyle \frac{1}{1+\frac{Q^2}{\nu ^2}}}{\displaystyle \sum _X}M^2\delta (W^2-M_X^2)\left[|G_+|^2-|G_-|^2+{\displaystyle \frac{\sqrt{2Q^2}}{\nu }}G_0^{\ast }G_+\right],`$ (9) with $`\nu =P\cdot q/M`$. The final state $`X`$ with invariant mass $`M_X`$ has $`\lambda ^{\prime }=+1/2`$ for $`G_+`$, $`\lambda ^{\prime }=-3/2`$ for $`G_-`$, and $`\lambda ^{\prime }=-1/2`$ for $`G_0`$. It is common to use the amplitudes ($`e`$ is the electric charge with $`e^2/4\pi =1/137`$): $$A_{1/2}=e\sqrt{\frac{M}{W^2-M^2}}G_+,A_{3/2}=e\sqrt{\frac{M}{W^2-M^2}}G_-,S_{1/2}=e\sqrt{\frac{M}{W^2-M^2}}\frac{|𝒒^{\prime }|}{Q}G_0,$$ (10) where $`𝒒^{\prime }`$ denotes the three-momentum transfer as measured in the photon-nucleon center-of-mass frame, i.e. $`𝒒^{\prime 2}=Q^2+(W^2-M^2-Q^2)^2/4W^2`$ with the total c.m. energy $`W`$. ## 4 Model In the following we present a parametrization of the proton structure function $`g_1^p`$ which is applicable at small and moderate values of $`Q^2`$. Here we follow closely an analysis of recent data in ref.. At small center-of-mass energies, $`W<1.7`$ GeV, we account for the contribution of dominant nucleon resonances. In addition, a phenomenological non-resonant background is added. For large $`W>1.7`$ GeV we use an existing parametrization of available data. The contribution of an isolated nucleon resonance to $`g_1`$ is usually expressed through helicity-dependent virtual photon-nucleon cross sections. 
In terms of the helicity amplitudes (10) these are defined as: $`\sigma _{1/2,3/2}^T={\displaystyle \frac{M\mathrm{\Gamma }_R}{M_R[(W-M_R)^2+\mathrm{\Gamma }_R^2/4]}}|A_{1/2,3/2}|^2,`$ $`\sigma _{1/2}^L={\displaystyle \frac{M\mathrm{\Gamma }_R}{M_R[(W-M_R)^2+\mathrm{\Gamma }_R^2/4]}}{\displaystyle \frac{Q^2}{𝒒^{\prime 2}}}|S_{1/2}|^2,`$ $`\sigma _{1/2}^{LT}={\displaystyle \frac{M\mathrm{\Gamma }_R}{\sqrt{2}M_R[(W-M_R)^2+\mathrm{\Gamma }_R^2/4]}}{\displaystyle \frac{Q}{|𝒒^{\prime }|}}S_{1/2}^{\ast }A_{1/2}.`$ (11) Here $`M_R`$ is the mass and $`\mathrm{\Gamma }_R`$ the width of the resonance. Combining eqs.(9), (10) and (11) gives for the contribution of a resonance $`R`$ to $`g_1`$: $`g_1(x,Q^2)|_R`$ $`=`$ $`{\displaystyle \frac{\nu M-Q^2/2}{4\pi ^2\alpha }}{\displaystyle \frac{1}{1+Q^2/\nu ^2}}\left({\displaystyle \frac{\sigma _{1/2}^T-\sigma _{3/2}^T}{2}}+{\displaystyle \frac{Q}{\nu }}\sigma _{1/2}^{LT}\right),`$ (12) where the photon-nucleon cross sections refer to the excitation of $`R`$. At low $`W`$ the helicity amplitudes are reasonably well known only for the photoproduction of the prominent nucleon resonances. In the case of electro-production accurate data are rare (for a review see e.g. ). We restrict ourselves to the dominant low-mass resonances $`\mathrm{\Delta }(1232)`$, $`S_{11}(1535)`$, and $`D_{13}(1520)`$. Our parametrizations of the corresponding helicity amplitudes are summarized in eqs.(13,14), with parameters given in Table 1. At low center-of-mass energies the excitation of the $`\mathrm{\Delta }(1232)`$ resonance is of particular importance. At small $`Q^2`$ it is dominated by a magnetic dipole transition which implies $`A_{3/2}/A_{1/2}\simeq \sqrt{3}`$. Indeed, for real photons one finds $`A_{3/2}/(\sqrt{3}A_{1/2})\simeq 1.064`$. At large momentum transfers, $`Q^2\gg 1`$ GeV<sup>2</sup>, perturbative QCD gives $`A_{3/2}/A_{1/2}\sim 1/Q^2`$. However, it has been observed that even at $`Q^2=3`$ GeV<sup>2</sup> the magnetic dipole transition still dominates by far. We can therefore assume $`A_{3/2}/A_{1/2}\approx const.`$ for $`Q^2`$ ≲ 3 GeV<sup>2</sup>. The $`Q^2`$-dependence of $`A_{1/2}`$ and $`A_{3/2}`$ is then extracted from an analysis of the $`Q^2`$-dependence of the transverse amplitude $`|A_T|`$. The $`S_{11}`$ resonance has spin $`1/2`$, so that the helicity amplitude $`A_{3/2}`$ is absent. We constrain the parametrization of the amplitude $`A_{1/2}`$ by the photo- and electro-production data from ref.. For the $`D_{13}(1520)`$ the amplitude $`A_{1/2}`$ is found to be very small at $`Q^2=0`$. Here $`A_{3/2}`$ dominates. On the other hand, data require $`A_{1/2}>A_{3/2}`$ for $`Q^2>1`$ GeV<sup>2</sup>. The parametrization in eqs.(13,14) agrees with the present, albeit quite rough, empirical information on the $`Q^2`$-dependence of the asymmetry $`𝒜_1`$ and the individual helicity amplitudes: $$|A_T|=\left(|A_{1/2}|^2+|A_{3/2}|^2\right)^{1/2}=C\mathrm{exp}[BQ^2],$$ (13) and $$|A_{1/2,3/2}|=\sqrt{\frac{1\pm 𝒜_1}{2}}|A_T|,\mathrm{with}𝒜_1=\frac{|\mathrm{A}_{1/2}|^2-|\mathrm{A}_{3/2}|^2}{|\mathrm{A}_{1/2}|^2+|\mathrm{A}_{3/2}|^2},$$ (14) with parameters given in Table 1. The interference term $`\sigma ^{LT}`$ involving the longitudinal and transverse photon-nucleon amplitudes is fairly unknown. Nevertheless, unpolarized scattering constrains the asymmetry ratio: $$𝒜_2=\frac{2\sigma _{1/2}^{LT}}{\sigma _{1/2}^T+\sigma _{3/2}^T}<\sqrt{R(x,Q^2)},$$ (15) with $`R=2\sigma _{1/2}^L/(\sigma _{1/2}^T+\sigma _{3/2}^T)`$. 
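The parametrization of Eqs.(13,14) is straightforward to implement. In the sketch below the numbers standing in for $`C`$, $`B`$ and $`𝒜_1`$ are invented placeholders (the actual values are collected in Table 1), and we assume $`B<0`$ so that $`|A_T|`$ falls with $`Q^2`$, a sign convention not fixed by the text above.

```python
import numpy as np

def A_T(Q2, C=0.25, B=-0.7):
    # Eq.(13); C in GeV^-1/2, B in GeV^-2 -- placeholder values,
    # with B < 0 assumed so the amplitude decreases with Q^2
    return C * np.exp(B * Q2)

def helicity_amplitudes(Q2, asym1=-0.5):
    # Eq.(14); asym1 stands for the asymmetry A_1, again a placeholder
    AT = A_T(Q2)
    A12 = np.sqrt((1.0 + asym1) / 2.0) * AT
    A32 = np.sqrt((1.0 - asym1) / 2.0) * AT
    return A12, A32

print(helicity_amplitudes(1.0))
```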
In the resonance region one finds on average $`R=0.06\pm 0.02`$ for $`1\mathrm{GeV}^2<Q^2<8\mathrm{GeV}^2`$ and $`W<1.7`$ GeV. Some fraction of this value is due to incoherent background contributions and not related to the excitation of single nucleon resonances. In the following we use $`𝒜_2=0.08`$. As a matter of fact, at $`Q^2>1`$ GeV<sup>2</sup>, $`𝒜_2`$ contributes only very little to the structure function moments to be discussed later: changing $`𝒜_2`$ by $`100\%`$ modifies our results for $`g_1^{(1)}`$ by less than $`2\%`$. At low energies, $`W<1.7`$ GeV, the structure function $`g_1`$ receives contributions also from non-resonant (multi-)meson production. However, hardly any empirical information is available here. We use a linear interpolation in the squared photon-nucleon center-of-mass energy $`W^2`$ which connects the inelastic threshold $`W^2=(M+m_\pi )^2`$ with experimental data at $`W>1.7`$ GeV. Having modeled the structure function $`g_1^p`$ at small center-of-mass energies, we continue to $`W>1.7`$ GeV, where we use a parametrization from ref. which reproduces data in the deep-inelastic region. This, finally, completes our model for $`g_1^p`$. In Fig.1 we compare our model with recent $`g_1`$ data from the E143 collaboration taken at $`Q^2=1.2`$ GeV<sup>2</sup>. Within the admittedly large experimental errors good agreement is found. A comparison of $`g_1^p`$, calculated with our model, and a parametrization of its deep-inelastic twist-$`2`$ part is shown in Fig.2. At $`Q^2`$ ≲ 2 GeV<sup>2</sup>, significant deviations are apparent. The contributions of the $`S_{11}`$ and $`D_{13}`$ resonances are located around $`x\approx 0.4`$ at $`Q^2=1`$ GeV<sup>2</sup>, while the excitation of the $`\mathrm{\Delta }`$ occurs at $`x\approx 0.6`$. As $`Q^2`$ increases the low-mass nucleon resonance excitations become less important. Furthermore, the contribution of nucleon resonances moves towards larger values of $`x`$, as one can see from the fact that the squared invariant mass of a particular nucleon excitation is fixed at $`W^2=M^2+Q^2(1-x)/x`$. Finally, at $`Q^2=10`$ GeV<sup>2</sup> our model coincides with the leading-twist parametrization of ref.. ## 5 Analysis of moments of structure function In this section we discuss the first moments of the proton structure function $`g_1`$ as obtained from the model previously described. In particular, we investigate the importance of contributions from elastic scattering, resonance production, target mass corrections, and true higher twist. The elastic contribution, corresponding to the kinematic limit $`x=1`$, is determined by the Pauli and Dirac electromagnetic form factors of the nucleon as follows: $$g_1^{(n)}(Q^2)|_{el}=\frac{1}{2}F_1(Q^2)\left[F_1(Q^2)+F_2(Q^2)\right].$$ (16) In our numerical analysis we use parametrizations from ref.. ### 5.1 Resonance contributions In order to investigate the role of low-mass nucleon excitations it is useful to introduce the ratio $`{\displaystyle \frac{g_1^{(n)}(Q^2)|_{W_0}}{g_1^{(n)}(Q^2)}}={\displaystyle \frac{\int _{x_0}^1dxx^{n-1}g_1(x,Q^2)}{\int _0^1dxx^{n-1}g_1(x,Q^2)}},\mathrm{with}x_0=x(W=W_0).`$ (17) In the numerator we sum over all contributions of nucleon resonances and non-resonant multi-meson excitations with invariant mass $`M_X<W_0=2`$ GeV. In addition we always include the elastic part (16). Figure 3 shows that these low-mass contributions to $`g_1^{(n)}`$ are quite sizable, especially for higher moments. 
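The elastic term of Eq.(16) is easy to evaluate. As a stand-in for the form factor parametrization cited above, the following sketch of ours uses the simple dipole ansatz with $`G_M=\mu _pG_E`$, which is adequate for rough orientation only.

```python
def g1_elastic(Q2, M=0.938, mu_p=2.793):
    # Eq.(16) with dipole form factors as a stand-in
    tau = Q2 / (4.0 * M**2)
    GE = (1.0 + Q2 / 0.71)**-2          # dipole, 0.71 GeV^2 scale
    GM = mu_p * GE                      # scaling assumption
    F1 = (GE + tau * GM) / (1.0 + tau)  # Dirac form factor
    F2 = (GM - GE) / (1.0 + tau)        # Pauli form factor
    return 0.5 * F1 * (F1 + F2)

print(g1_elastic(2.0))                  # ~0.01 at Q^2 = 2 GeV^2
```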
For example, at $`Q^2=2`$ GeV<sup>2</sup> these low-mass excitations are responsible for about $`20\%`$ of the first moment $`g_1^{(1)}`$, while they account for $`75\%`$ of $`g_1^{(3)}`$. With increasing $`n`$ the role of low-mass excitations becomes evidently more pronounced. At the same time, the influence of the low-mass part of the spectrum also increases with decreasing $`Q^2`$. Roughly speaking, for $`Q^2<2n\text{GeV}^2`$ low-mass excitations with $`W<2\mathrm{GeV}`$ account for more than $`10\%`$ of $`g_1^{(n)}`$. At large $`Q^2`$ continuum contributions with $`W>W_0`$ take over. A similar observation has been made in an analysis of unpolarized lepton scattering. ### 5.2 Higher twist analysis In order to extract the genuine higher-twist coefficients $`f_n`$ from the structure function moments $`g_1^{(n)}`$ one has to subtract twist-$`2`$ contributions and target mass corrections from each given moment. Returning to eq.(2) we have: $`g_1^{(n)}(Q^2)|_{\mathrm{ht}}`$ $`=`$ $`g_1^{(n)}(Q^2)-{\displaystyle \frac{1}{2}}a_{n-1}(Q^2)-{\displaystyle \frac{M^2}{Q^2}}{\displaystyle \frac{n(n+1)}{2(n+2)^2}}\left(na_{n+1}(Q^2)+4d_{n+1}(Q^2)\right)`$ (18) $`=`$ $`f_{n+1}{\displaystyle \frac{4}{9}}{\displaystyle \frac{M^2}{Q^2}}+𝒪\left({\displaystyle \frac{M^4}{Q^4}}\right).`$ (19) In the following we consider the first three moments, $`n=1,3,5`$. We compare results obtained from our model for $`g_1^p`$ with the twist-$`2`$ contributions $`a_{n-1}/2`$ from the NLO parametrization of ref.. We also study the influence of target mass effects. Finally we discuss different contributions to the higher-twist part $`g_1^{(n)}(Q^2)|_{\mathrm{ht}}`$. In Fig.4 we compare the full moments $`g_1^{(n)}`$ with the leading-twist parts, $`a_{n-1}/2`$, and the higher-twist components $`g_1^{(n)}(Q^2)|_{\mathrm{ht}}`$. At small $`Q^2`$ one observes significant differences between $`g_1^{(n)}`$ and $`a_{n-1}/2`$. In particular one finds $`g_1^{(n)}(Q^2)|_{\mathrm{ht}}>0.1a_{n-1}/2`$ for $`Q^2<2,\mathrm{\hspace{0.17em}4},\mathrm{\hspace{0.17em}10}`$ GeV<sup>2</sup> and $`n=1,\mathrm{\hspace{0.17em}3},\mathrm{\hspace{0.17em}5}`$, respectively. The region where higher twist becomes important obviously depends on the moment $`n`$. For fixed $`Q^2`$ the difference between $`g_1^{(n)}(Q^2)`$ and $`a_{n-1}/2`$ increases with $`n`$. This is easily understood since contributions of low-mass nucleon excitations are enhanced in higher moments as pointed out in the previous section. Also shown in Fig.4 is the size of target mass effects. Since the coefficients $`d_n`$ are not known accurately we use $`d_n=0`$, which is compatible with present data and corresponds to the Wandzura-Wilczek conjecture. For this choice target mass effects are indeed small. As an example, at $`Q^2=2`$ GeV<sup>2</sup> and $`n`$=1,3,5 they amount to less than $`10\%`$ of the higher-twist part. To estimate the uncertainty of this result we also use $`d_n`$ obtained from eq.(6) for $`g_2(x)=0`$. In this case target mass effects increase significantly and lead to a decrease of $`g_1^{(n)}(Q^2)|_{\mathrm{ht}}`$ by about $`30\%`$. High-precision data on the spin structure function $`g_2`$, which are of course interesting in their own right, are therefore an important ingredient in the QCD analysis of $`g_1`$ itself. Twist-4 contributions to $`g_1^{(n)}`$ are proportional to $`1/Q^2`$ (up to logarithmic corrections). 
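The subtraction of Eq.(18) and the subsequent reading-off of the twist-4 piece from the $`1/Q^2`$ behavior can be sketched as follows. All moment values below are invented placeholders, chosen merely to land near the $`f_2^p`$ scale discussed next; they are not our actual model output, and the twist-2 input is held $`Q^2`$-independent for simplicity.

```python
import numpy as np

M2 = 0.880   # squared proton mass, GeV^2

def g1_ht(n, Q2, g1_n, a_nm1, a_np1, d_np1=0.0):
    # Eq.(18): subtract twist-2 and target-mass pieces
    tm = (M2 / Q2) * n * (n + 1) / (2.0 * (n + 2)**2) * (n * a_np1 + 4.0 * d_np1)
    return g1_n - 0.5 * a_nm1 - tm

# placeholder first-moment inputs at several Q^2 (not real data):
Q2 = np.array([2.0, 3.0, 5.0, 10.0])
ht = np.array([g1_ht(1, q, g, 0.26, 0.01)
               for q, g in zip(Q2, [0.1110, 0.1173, 0.1224, 0.1263])])

# Eq.(19): the slope of ht versus 1/Q^2 equals (4/9) M^2 f_2
slope = np.polyfit(1.0 / Q2, ht, 1)[0]
print(f"f_2 ~ {slope * 9.0 / (4.0 * M2):.2f}")     # ~ -0.1 for these inputs
```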
In order to have a closer look at these terms it is instructive to plot the higher-twist moments $`g_1^{(n)}(Q^2)|_{\mathrm{ht}}`$ versus $`1/Q^2`$, as done in Fig. 5. Their approximately linear behavior indicates that twist-4 contributions indeed play a dominant role in $`g_1^{(n)}(Q^2)|_{\mathrm{ht}}`$. Neglecting terms of twist-6 and higher gives $`f_2^p\approx -0.1`$ at $`Q^2=2\mathrm{GeV}^2`$, which agrees quite well with the analysis of ref.. Further estimates can be found in ref.. In the same figure we show the separate contributions to $`g_1^{(n)}(Q^2)|_{\mathrm{ht}}`$ from elastic scattering and from low-mass excitations with $`(M+m_\pi )<W<2\mathrm{GeV}`$. Evidently, none of these contributions is small; in fact they are all of the same order of magnitude as $`g_1^{(n)}(Q^2)|_{\mathrm{ht}}`$ itself. These observations emphasize the need for high-precision measurements especially in the resonance region. Upcoming data from TJNAF are certainly welcome here. Figure 5 also points to the crucial role played by the elastic piece (16). Its proper treatment requires accurate information on the nucleon electromagnetic form factors in the range $`1.5\mathrm{GeV}^2<Q^2<10\mathrm{GeV}^2`$. For the higher moments with $`n=3,5`$ the kinematic window in which twist-4 contributions dominate, that is, where $`g_1^{(n)}(Q^2)|_{\mathrm{ht}}`$ behaves linearly with $`1/Q^2`$, moves successively to higher $`Q^2`$. Again the contributions from elastic, resonant and non-resonant scattering all turn out to be of similar importance. ### 5.3 Parton-hadron duality With decreasing $`Q^2`$ the higher-twist contributions eventually reach the magnitude of the leading-twist parts. As a consequence the twist expansion (4) breaks down. Our model for $`g_1^p`$ can be used to suggest where this transition takes place: for a given moment $`g_1^{(n)}`$, higher-twist contributions amount to less than $`50\%`$ of the leading-twist ones if $`Q^2>n`$ GeV<sup>2</sup>. On the other hand, we have learned in section 5.1 that low-mass excitations account for more than $`10\%`$ of $`g_1^{(n)}`$ if $`Q^2<2n\text{GeV}^2`$. This indicates a region of $`n`$ and $`Q^2`$ in which perturbative higher-twist corrections coexist with resonance contributions. The resonance terms are significant, and the transition amplitudes involving these resonances introduce powers of $`1/Q^2`$ in just such a way that they follow the deep-inelastic, large-$`Q^2`$ behaviour of $`g_1^p`$. Such a behavior is known as parton-hadron duality, a notion introduced by Bloom and Gilman for the unpolarized structure function $`F_2`$. A QCD explanation of this phenomenon has first been offered in ref. and was further elaborated in ref.. According to our results similar arguments apply to polarized lepton-nucleon scattering. ## 6 Summary * Contributions from the region of the nucleon resonances are an essential ingredient in the “higher-twist” analysis of the spin structure function $`g_1`$. Their effects are clearly visible in $`g_1^p`$ even at $`Q^2`$ as large as $`5\mathrm{GeV}^2`$. For example, low-mass excitations with $`W<2\mathrm{GeV}`$ account for more than $`50\%`$ of the 3rd moment and more than $`80\%`$ of the 5th moment of $`g_1^p`$ in the range $`Q^2`$ ≲ $`3\mathrm{GeV}^2`$. * We have pointed to the importance of the elastic scattering ($`x=1`$) part in a consistent moment analysis of $`g_1`$. 
Without inclusion of this elastic part, an extraction of higher-twist terms would not be meaningful. * We observe a coexistence of resonance contributions and perturbative higher-twist corrections in a window roughly framed by $`n\mathrm{GeV}^2<Q^2<2n\mathrm{GeV}^2`$, where $`n=1,3,5,\mathrm{\dots }`$ denotes the moment of $`g_1`$. The understanding of this coexistence region in terms of parton-hadron duality is an interesting issue. Precision data from TJNAF will help clarify these questions in the near future. Acknowledgments: The authors wish to acknowledge helpful discussions with K.A. Griffioen and L. Mankiewicz.
# The Galaxy-Weighted Small-Scale Velocity Dispersion of the Las Campanas Redshift Survey ## 1. Introduction The small-scale thermal energy of the observed galaxy distribution is an important diagnostic for cosmological models. For the past decade the pair velocity dispersion $`\sigma _{12}(r)`$ (Davis & Peebles 1983) has been the usual measure of this quantity (e.g., Bean et al. 1983; de Lapparent, Geller, & Huchra 1988; Hale-Sutton et al. 1989; Mo, Jing, & Borner 1993; Zurek et al. 1994; Fisher et al. 1994; Marzke et al. 1995; Brainerd et al. 1996; Somerville, Primack, & Nolthenius 1997; Landy, Szalay, & Broadhurst 1998; Jing, Mo, & Borner 1998). But in spite of its widespread application and the relative ease of its measurement within large redshift surveys, the $`\sigma _{12}(r)`$ statistic has a number of well-known deficiencies. Chief among them is its pair-wise weighting, which gives extreme influence to rare, rich clusters of galaxies containing many close pairs with high velocity dispersion. Alternative statistics to measure the thermal energy distribution have been suggested by Kepner, Summers, & Strauss (1997) and by Davis, Miller, & White (1997, hereafter DMW). The Kepner et al. algorithm computes the pair-weighted dispersion as a function of the local galaxy density; this statistic demonstrates the heterogeneity of the environments of the local galaxy distribution, but it must be computed in volume-limited samples. The $`\sigma _1`$ statistic described by DMW can be estimated within a flux-limited catalog and is readily interpreted in terms of a filtered version of the cosmic energy equation. The statistic is a measure of the rms one-dimensional velocity of galaxies, with large-scale bulk flow motions filtered out. DMW applied this statistic to the UGC catalog of optical galaxies within the Optical Redshift Survey (Santiago et al. 1995), as well as the 1.2-Jy IRAS catalog (Fisher et al. 1995). They showed that $`\mathrm{\Omega }_m=1`$ simulations were far too hot to match the observed dispersion. Even when compared with simulations in which the small-scale kinetic energy had been artificially lowered by a factor of four, the observed velocity distribution was colder than the simulated distribution. However, the UGC catalog surveys a rather limited volume of the local Universe, and the IRAS catalog is quite dilute and under-samples dense cluster regions. It is therefore of considerable interest to apply the DMW statistic to a larger, more representative redshift survey such as the Las Campanas Redshift Survey (LCRS; Shectman et al. 1996), and to compare the results with $`N`$-body simulations of cosmological models which are favored by current data. This paper reports the application of this new statistic to the LCRS and compares the result to a few simulations of flat and open cosmological models. In a future paper (Baker, Davis, & Ferreira 1999), we discuss a wider variety of models, and we discuss in more detail the comparison of the LCRS with $`N`$-body simulations and the potential applications of $`\sigma _1`$ as a cosmological probe. ## 2. Application of $`\sigma _1`$ to the LCRS The LCRS survey consists of 26,000 galaxies selected in a hybrid R band. The survey was conducted in six thin slices, each of size $`1.5\mathrm{°}\times 80\mathrm{°}`$ on the sky, with median redshift $`cz=30,000\mathrm{km}\mathrm{s}^{-1}`$. The redshift accuracy of the observations is typically $`\sigma _{\mathrm{err}}=67\mathrm{km}\mathrm{s}^{-1}`$ (Shectman et al. 
1996), which is sufficient for measuring the thermal, small-scale velocity dispersion. For measurement of $`\sigma _1`$, we work with the subset of 19,306 LCRS galaxies in the range $`10,000<cz<45,000\mathrm{km}\mathrm{s}^{-1}`$, and absolute magnitude $`-22.5<M<-18.5`$. To estimate the random background of the neighbors about each galaxy, we used a catalog of 268,000 randomly distributed points with the same selection function as the LCRS galaxies, including the restriction against pairs with angular separation less than $`55^{\prime \prime }`$ caused by limitations of optical fiber placement. Since the six slices of the LCRS are spatially separated by more than the projected separation used in the $`\sigma _1`$ statistic, the statistical procedure is applied to each slice individually and the results are averaged. ### 2.1. Method We now briefly describe our procedure, similar to that of DMW, for determining $`\sigma _1`$. For each galaxy $`i`$ in a slice of the survey, we lay down a cylinder centered on the galaxy in redshift space. Let $`r_p`$ be the projected radius of the cylinder and $`v_l`$ its half-length along the redshift direction. For neighboring galaxies $`j`$ within the cylinder, we construct the distribution $`P_i(\mathrm{\Delta }v)`$, which counts the number of neighbors with redshift separation in a redshift bin centered at $`\mathrm{\Delta }v=v_j-v_i`$. The counts accumulated in $`P_i(\mathrm{\Delta }v)`$ are weighted by the inverse selection function $`\varphi _i/\varphi _j`$ (though equal weighting yields virtually identical results). We subtract from this distribution the background distribution $`B_i(\mathrm{\Delta }v)`$, which counts the number of weighted neighbors expected for an unclustered galaxy distribution. We are interested in the width of the overall distribution $`D(\mathrm{\Delta }v)`$ constructed by an appropriately weighted sum over the $`N_g`$ galaxies: $$D(\mathrm{\Delta }v)=\frac{1}{N_g}\sum _{i=1}^{N_g}w_i\left[P_i(\mathrm{\Delta }v)-B_i(\mathrm{\Delta }v)\right],$$ (1) where the weight for galaxy $`i`$ is denoted by $`w_i`$. In order to make the statistic object-weighted rather than pair-weighted, we wish to normalize the distributions by the number of neighbors $`N_{\mathrm{ex}}`$ in excess of the random background, that is: $$w_i^{-1}=N_{\mathrm{ex},i}=\sum _{\mathrm{\Delta }v}\left[P_i(\mathrm{\Delta }v)-B_i(\mathrm{\Delta }v)\right].$$ (2) This however presents a problem for galaxies which do not have enough neighbors to ensure that the sum is positive. DMW dealt with this problem by deleting these objects from consideration, but under half of the LCRS galaxies have at least one excess neighbor for $`r_p=1h^{-1}\mathrm{Mpc}`$, and these galaxies are a biased sample because they populate over-dense regions. It is therefore desirable to modify the statistic to include galaxies with fewer neighbors. We achieve a more inclusive statistic by considering separately the distributions of high- and low-density objects; that is, only galaxies with $`N_{\mathrm{ex}}\ge 1`$ are included in the sum for $`D_{\mathrm{hi}}`$, while only galaxies with $`N_{\mathrm{ex}}<1`$ are included in the sum for $`D_{\mathrm{lo}}`$. 
We then weight the galaxies in the combined distribution according to $$w_i=\{\begin{array}{cc}A_{\mathrm{hi}}N_{\mathrm{ex},i}^{-1}\hfill & N_{\mathrm{ex},i}\ge 1\hfill \\ A_{\mathrm{lo}}\hfill & N_{\mathrm{ex},i}<1.\hfill \end{array}$$ (3) Here $`A_{\mathrm{lo}}`$ and $`A_{\mathrm{hi}}`$ are normalization constants for the two distributions, chosen so that the distributions are weighted in proportion to the number of objects included: $$A_{\mathrm{hi}}=\frac{N_{\mathrm{hi}}/N_g}{\sum _{\mathrm{\Delta }v}\left[D_{\mathrm{hi}}(\mathrm{\Delta }v)-D_{\mathrm{hi}}(\mathrm{\infty })\right]},$$ (4) and similarly for $`A_{\mathrm{lo}}`$. Here $`N_{\mathrm{hi}}`$ and $`N_{\mathrm{lo}}`$ are the number of galaxies with $`N_{\mathrm{ex}}\ge 1`$ and $`N_{\mathrm{ex}}<1`$, respectively; thus $`N_{\mathrm{hi}}+N_{\mathrm{lo}}=N_g`$. The baselines $`D(\mathrm{\infty })`$ are estimated from the flat tails of the distributions within $`500\mathrm{km}\mathrm{s}^{-1}`$ of $`\mathrm{\Delta }v=\pm v_l`$. With this normalization the final distribution obeys $`\sum _{\mathrm{\Delta }v}D(\mathrm{\Delta }v)=1`$. Note that scaling $`D_{\mathrm{hi}}`$ and $`D_{\mathrm{lo}}`$ by the constants $`A`$ does not affect the derived widths for these distributions; rather, it merely alters the weighting of the two in the combined distribution. This procedure, in contrast to that of DMW, allows us to include all of the available data, yielding an unbiased, object-weighted measure of the thermal energy of the galaxy distribution. It is the object-weighting which differentiates our procedure from the more traditional measure of the pair dispersion $`\sigma _{12}(r)`$; all galaxies (not pairs) are assigned equal weight in our statistic $`\sigma _1`$. We measure the width of the distribution $`D(\mathrm{\Delta }v)`$ using the convolution procedure outlined by DMW (equation 18), in which a velocity broadening function $`f(v)`$ is convolved with the two-point correlation function $`\xi (r)`$ to produce a model $`M(\mathrm{\Delta }v)=\overline{\xi }_{r_p}\otimes f`$ for $`D(\mathrm{\Delta }v)`$: $$M(\mathrm{\Delta }v)=\int _0^{r_p}dr\mathrm{\hspace{0.17em}2}\pi r\int _{-\mathrm{\infty }}^{\mathrm{\infty }}dy\xi (\sqrt{r^2+y^2})f(\mathrm{\Delta }v-y).$$ (5) The two-point correlation function of the LCRS is well approximated by $`\xi (r)=(r_0/r)^\gamma `$, with $`r_0=5h^{-1}\mathrm{Mpc}`$ and $`\gamma =1.8`$ (Jing et al. 1998), while for the $`N`$-body simulations we use the cylindrically averaged mass correlation function $`\overline{\xi }_{r_p}(\mathrm{\Delta }v)`$ measured directly from the particle distribution. We find that an exponential broadening function (see Diaferio & Geller 1996; Sheth 1996; Juszkiewicz, Fisher, & Szapudi 1998) $$f(v)=\frac{1}{\sigma _1}\mathrm{exp}\left(-\frac{|v|}{\sigma _1}\right)$$ (6) provides a much better fit to the LCRS data and all $`N`$-body models than does a Gaussian. Here we have defined the width $`\sigma _1`$ so that it is a measure of the rms velocity of individual galaxies in one dimension (with bulk motions on scales ≳$`1h^{-1}\mathrm{Mpc}`$ filtered out). The (object-weighted) rms difference in velocity between any two galaxies is then $`\sigma _1\sqrt{2}`$ (DMW call this quantity, which is equal to the rms dispersion of the distribution $`f`$, the “intrinsic” dispersion $`\sigma _I`$; we will work exclusively with $`\sigma _1`$ to avoid confusion). Fig 1.— Galaxy-weighted velocity distribution $`D(\mathrm{\Delta }v)`$ for the six LCRS slices. The three-dimensional dispersions are larger by an additional factor $`\sqrt{3}`$. 
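Eq.(5) is a two-dimensional quadrature that is simple to code. The sketch below is our own minimal implementation, not the analysis code: line-of-sight separations are mapped to velocities with $`H=100h\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, and the exponential is unit-normalized since the overall amplitude is a free parameter in the fit anyway.

```python
import numpy as np

r0, gamma, rp, H = 5.0, 1.8, 1.0, 100.0   # LCRS values from the text

def model(dv, sigma1, nr=200, nv=4001):
    """M(dv) of Eq.(5): cylinder-averaged xi(r) = (r0/r)^gamma convolved
    with an exponential broadening function of width sigma1 (km/s)."""
    r = np.linspace(1e-3, rp, nr)              # projected radius, h^-1 Mpc
    v = np.linspace(-5000.0, 5000.0, nv)       # line-of-sight velocity, km/s
    s = np.sqrt(r[:, None]**2 + (v[None, :] / H)**2)
    xibar = np.trapz(2.0 * np.pi * r[:, None] * (r0 / s)**gamma, r, axis=0)
    f = np.exp(-np.abs(np.atleast_1d(dv)[:, None] - v[None, :]) / sigma1)
    return np.trapz(f / (2.0 * sigma1) * xibar[None, :], v, axis=1)

dv = np.linspace(-2500.0, 2500.0, 101)
M = model(dv, sigma1=126.0)                    # best-fit LCRS width
```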
We perform a nonlinear $`\chi ^2`$-minimization fit to determine the width $`\sigma _1`$ and amplitude of the model $`M(\mathrm{\Delta }v)`$. Before fitting, we convolve the model with a Gaussian of rms $`\sigma _{\mathrm{err}}\sqrt{2}=95\mathrm{km}\mathrm{s}^{-1}`$ to account for the LCRS redshift measurement uncertainties; the factor of $`\sqrt{2}`$ converts from the measurement uncertainty for individual redshifts to the uncertainty for redshift differences, which are accumulated in $`D(\mathrm{\Delta }v)`$. We also include baseline terms in the model which are constant and linear in $`\mathrm{\Delta }v`$, for a total of four fit parameters. The linear term is necessary for the LCRS because for simplicity we define “cylinders” in redshift space based on projected angular separation on the sky. This leads to a small gradient in the measured distribution function $`D(\mathrm{\Delta }v)`$ because the “cylinders” are in fact conic sections, but the term is quite small because the length of the cylinders, $`2v_l`$, is much smaller than the typical redshift of galaxies in the survey. Although the gradient term has a negligible effect on the derived width, it does improve significantly the quality of the $`\chi ^2`$ fit. ### 2.2. Results for the LCRS We have used the six independent slices of the LCRS to estimate the errors in $`D(\mathrm{\Delta }v)`$ as a function of $`\mathrm{\Delta }v`$ in computing $`\chi ^2`$. However, we expect that the bins may be correlated due to sample variance; the fitting procedure is therefore not strictly legitimate, but the consistency of the results for the widths of the individual slices serves as a check on the degree to which sample variance affects the result. We also expect $`\chi _\nu ^2>1`$ if the exponential broadening function of width $`\sigma _1`$ (assumed independent of $`r`$) provides an inadequate description of the small-scale velocities. The $`D(\mathrm{\Delta }v)`$ distributions for the six independent LCRS slices are plotted in Figure 1, and Table 1 lists the derived widths. The second to last line gives the mean and standard deviation of the mean for separate fits to the six slices, while the last line is the result of a single fit to the combined distribution of all galaxies. Note that the dispersion measured for objects with excess neighbors ($`N_{\mathrm{ex}}\ge 1`$) is clearly higher than that measured for objects with fewer neighbors. This behavior is expected because objects with more neighbors are found in regions of higher density, which tend to be hotter. The fit to the LCRS $`D(\mathrm{\Delta }v)`$, shown in Figure 2, is quite good, with $`\chi _\nu ^2=117/96=1.22`$; the probability of $`\chi ^2`$ exceeding this value is $`1-P(\chi ^2|\nu )=7\%`$. The best-fitting Gaussian $`f(v)`$ is much worse, with $`\chi _\nu ^2=1.84`$ and $`1-P(\chi ^2|\nu )=10^{-6}`$. Based on the mean of the six slices we adopt $`\sigma _1=126\pm 10\mathrm{km}\mathrm{s}^{-1}`$. This value has been computed for $`r_p=1h^{-1}\mathrm{Mpc}`$ and $`v_l=2500\mathrm{km}\mathrm{s}^{-1}`$. The results are quite insensitive to cylinder length, ranging only from $`117\pm 14\mathrm{km}\mathrm{s}^{-1}`$ at $`v_l=1500\mathrm{km}\mathrm{s}^{-1}`$ to $`132\pm 13\mathrm{km}\mathrm{s}^{-1}`$ at $`v_l=3500\mathrm{km}\mathrm{s}^{-1}`$. Our chosen value $`v_l=2500\mathrm{km}\mathrm{s}^{-1}`$ is large enough to allow a clean measure of the tails of the distribution without significant non-linearities in the baseline gradient due to variations in the selection function.
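A minimal sketch of this four-parameter fit, assuming the model_M() of the previous sketch and standard SciPy routines (our choices; the original analysis does not specify its fitting code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import curve_fit

SIGMA_ERR_PAIR = 95.0   # km/s: rms measurement error for redshift *differences*

def fit_model(dv, sigma1, amp, c0, c1):
    """Four-parameter model of the text: amplitude times M(dv; sigma1),
    convolved with the measurement-error Gaussian, plus constant and
    linear baseline terms. Uses model_M() from the previous sketch."""
    step = dv[1] - dv[0]
    m = gaussian_filter1d(model_M(dv, sigma1), SIGMA_ERR_PAIR / step)
    return amp * m + c0 + c1 * dv

# Hypothetical usage, where D and D_err are the measured distribution and its
# slice-to-slice errors on the velocity grid dv:
# popt, pcov = curve_fit(fit_model, dv, D, p0=[130.0, 1.0, 0.0, 0.0], sigma=D_err)
# sigma_1, sigma_1_err = popt[0], np.sqrt(pcov[0, 0])
```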
A modest decrease in $`\sigma _1`$ is evident as $`r_p`$ is increased above $`r_p=1h^{-1}\mathrm{Mpc}`$ (see Table 2). Although the $`D(\mathrm{\Delta }v)`$ distributions are very insensitive to $`r_p`$, the averaged correlation function $`\overline{\xi }_{r_p}(\mathrm{\Delta }v)`$ becomes broader as $`r_p`$ increases. As a result, smaller values of $`r_p`$ provide a cleaner measure of the true (real-space) velocity broadening on small scales, but decreasing $`r_p`$ below $`1h^{-1}\mathrm{Mpc}`$ reduces the signal-to-noise, as most galaxies have too few neighbors. The background subtraction also becomes cleaner as $`r_p`$ is reduced. Note that for the larger value $`r_p=2h^{-1}\mathrm{Mpc}`$ used by DMW, our result is $`\sigma _1=114\pm 10\mathrm{km}\mathrm{s}^{-1}`$. If, as in the DMW analysis, we do not account for broadening due to redshift measurement errors, the result increases to $`\sigma _1=136\pm 9\mathrm{km}\mathrm{s}^{-1}`$. Since the two surveys have comparable redshift uncertainties, our LCRS result is perfectly consistent with the value $`\sigma _1=130\pm 15\mathrm{km}\mathrm{s}^{-1}`$ which DMW derived for the much smaller UGC catalog. Fig 2.— Velocity distribution $`D(\mathrm{\Delta }v)`$ and fit for the combined LCRS data (upper panel), and residuals for the fit with errors estimated from the standard deviation of the six slices (lower panel; note the change in vertical scale). ## 3. Comparison to $`N`$-body models We have completed a suite of $`N`$-body simulations designed to predict the small-scale velocity dispersion in a variety of cosmological models. Here we discuss the results of a few of these models: the “standard” Cold Dark Matter (SCDM) model and a tilted variant (TCDM), a model with a cosmological constant $`\mathrm{\Lambda }`$ (LCDM), and an open model (OCDM). The cosmological parameters for these models are listed in Table 3. All models are approximately normalized to the present-day abundance of clusters; the LCDM and TCDM models additionally satisfy the COBE normalization. The SCDM model is known to fail a number of cosmological tests and is included for historical reasons, and only LCDM is fully consistent with current limits from high-redshift supernovae (Perlmutter et al. 1999). We note that on the scales relevant for our simulations, the TCDM power spectrum is indistinguishable from a $`\tau `$CDM spectrum with shape parameter $`\mathrm{\Gamma }=0.2`$. A broader range of models and a more detailed discussion of the simulations may be found elsewhere (Baker et al. 1999). Initial power spectra were obtained using the CMBFAST code (Seljak & Zaldarriaga 1996, 1998). The simulations were evolved on a $`128^3`$ mesh using a P³M code (Brieu, Summers, & Ostriker 1995) in which short-range forces are computed using a special purpose GRAPE-3AF board (Okumura et al. 1993). We chose a box of size $`L=50h^{-1}\mathrm{Mpc}`$ to match the length of the LCRS cylinders; with $`N_p=64^3`$ particles this gives a mass resolution of $`1.3\times 10^{11}\mathrm{\Omega }_mh^{-1}\mathrm{M}_{\odot }`$, where $`h=H_0/100\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$. A Plummer force softening $`ϵ=50h^{-1}\mathrm{kpc}`$ was used. The simulations were started at redshifts $`z_i=15`$ (for $`\mathrm{\Omega }_m=1`$) or $`z_i=19`$ (for $`\mathrm{\Omega }_m=0.3`$) and evolved to $`z=0`$ in 1500 time-steps using $`p=a^2`$ as the integration variable.
The simulations are converted to “redshift” space by adding the velocities $`v_i`$ along one of the three coordinates $`i`$ to the positions $`x_i`$: $`x_i\to x_i+v_i/H`$, where $`H`$ is the Hubble constant. Periodic boundary conditions are applied at the box edges. We then apply exactly the same statistical procedure for determining $`\sigma _1`$ as for the LCRS, except that the selection function is now unity. ### 3.1. Tests of $`\sigma _1`$ Measurements We have used our $`N`$-body simulations to perform a number of checks on the robustness of our method for determining the small-scale velocity dispersion. One test is to ask how well our model is able to account for the redshift measurement uncertainties in the LCRS. To simulate these uncertainties, we added Gaussian random velocities of rms $`\sigma _{\mathrm{err}}`$ along the “redshift” coordinate in the simulations. We then make two determinations of $`\sigma _1`$, which should ideally be equal. In one determination, the random velocities have been added and we perform an extra Gaussian convolution in the model to account for them. In the other, no random velocities are added and no Gaussian convolution is necessary. We find that the two widths agree quite well, to within $`10\mathrm{km}\mathrm{s}^{-1}`$ over the range of interest for $`\sigma _1`$ (100–300$`\mathrm{km}\mathrm{s}^{-1}`$). The agreement improves as $`\sigma _1`$ increases and the uncertainties contribute relatively less to the width of the observed velocity distribution. A second test of the method is to compare velocity widths measured in real space with those measured in cylinders in redshift space. For this test, we replace the velocities of the simulation particles with velocities drawn from a random exponential distribution of a given rms $`\sigma `$. It is straightforward to show that the velocity distribution appropriate for the difference distribution $`D(\mathrm{\Delta }v)`$ is then $$f(v)=\frac{1}{2\sigma ^2}\left(|v|+\frac{\sigma }{\sqrt{2}}\right)\mathrm{exp}\left(-\sqrt{2}\frac{|v|}{\sigma }\right).$$ (7) Using this form in the redshift-space model (Equation 5), we find that our procedure recovers the true velocity dispersion with an accuracy better than 10% for $`\sigma _1`$ in the range 100–300$`\mathrm{km}\mathrm{s}^{-1}`$. Finally, we can test the extent to which our measurement of $`\sigma _1`$ in the long redshift-space cylinders is contaminated by motions on scales larger than $`1h^{-1}\mathrm{Mpc}`$. First we construct distributions analogous to $`D(\mathrm{\Delta }v)`$, but measured in real space, with neighbors drawn from spheres of radius $`1h^{-1}\mathrm{Mpc}`$ in the simulations. These are compared to distributions with neighbors drawn from the long cylinders, also measured in real space. The widths of these distributions agree to within 1%, and we conclude that the contamination from large scales is negligible.
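Equation (7) is easy to verify with a small Monte Carlo experiment; the following sketch (our illustration, with an arbitrary σ and sample size) draws exponentially distributed velocities and compares the histogram of pair differences with equation (7).

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 200.0                       # target rms of single-particle velocities, km/s

# Laplace (exponential) velocities with rms sigma: scale b = sigma/sqrt(2)
v = rng.laplace(0.0, sigma / np.sqrt(2), size=1_000_000)
dv = v[::2] - v[1::2]               # differences of independent pairs

def f_pair(u, s):
    """Eq. (7): distribution of velocity differences for exponential velocities."""
    return (np.abs(u) + s / np.sqrt(2)) / (2 * s**2) * np.exp(-np.sqrt(2) * np.abs(u) / s)

hist, edges = np.histogram(dv, bins=200, range=(-1500, 1500), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print("max deviation:", np.abs(hist - f_pair(centers, sigma)).max())  # small
```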
### 3.2. Selection of Galaxies from the Mass Distribution We can easily compute $`\sigma _1`$ for particles in the simulations, but the observed small-scale dispersion of galaxies, which correspond in some way to halos in the simulations, will in general differ from that of the mass. The internal velocity dispersions of galaxies are not included in the observed statistic; moreover, the galaxy population may be a biased tracer. In order to test whether our simulations can reproduce the LCRS result for $`\sigma _1`$, it is therefore important to identify “galaxies” within the $`N`$-body simulations. Unfortunately the process of galaxy formation includes baryonic physics on a wide range of scales not probed by our dark-matter only simulations. For the present work, we define galaxies using a simple phenomenological model which we expect to yield similar results to those of larger gas-dynamical simulations. We first apply the standard friends-of-friends (FOF; Davis et al. 1985) algorithm to the simulations, with a linking length of 0.2 mesh cells and a minimum group size $`N\ge 10`$, corresponding to halos with mass $`M\gtrsim 10^{12}\mathrm{\Omega }_mh^{-1}\mathrm{M}_{\odot }`$. We have also considered the HOP method (Eisenstein & Hut 1998) for defining halos, but we obtain similar results for reasonable parameter choices and do not discuss them here. Our limited resolution and the nature of the FOF algorithm lead to a serious and well-known over-merging problem, in which a large cluster containing many galaxies will be identified as a single halo. This drastically lowers the small-scale velocity dispersion because the motions of galaxies within clusters are neglected. To remedy this situation, we split halos with more than $`N_s`$ particles by randomly selecting particles from within the halos and identifying these particles as galaxies. Halos identified in this way will again include the internal motions of galaxies, but as the splitting is only applied to large, hot halos ($`N_s\gg 10`$), we expect these internal motions to have a negligible effect on our result. Small halos with fewer than $`N_s`$ particles are taken to be individual galaxies. For comparison with the LCRS, we wish to choose a set of halos which resemble the LCRS galaxies as closely as possible. Some $`N`$-body models yield a correlation function $`\xi (r)`$ which is too steep, and it is therefore advantageous to select halos which are anti-biased on small scales (Jing et al. 1998). We accomplish this through our halo-splitting procedure by drawing random particles with a probability $`p`$ which has a power-law dependence on the number of particles $`N`$ in the parent halo: $`p(N)=N_s^{\alpha -1}N^{-\alpha }`$, with $`\alpha >0`$. The number of galaxies per unit halo mass then falls as $`N^{-\alpha }`$ for large halos. We choose parameters $`N_s`$ and $`\alpha `$ which simultaneously mimic the power-law shape of the LCRS correlation function and produce approximately the correct number density of galaxies, $`n\approx 0.02h^3\mathrm{Mpc}^{-3}`$, implying 2500 galaxies per simulation volume. Increasing $`\alpha `$ tends to flatten the correlation function on small scales and yields fewer halos; increasing $`N_s`$ at fixed $`\alpha `$ tends to lower the correlation amplitude and also yields fewer halos. This behavior is illustrated in Figure 3 for the LCDM model. Figure 4 shows the correlation functions for our selected halos in each of the models. In the low-density models, we are able to select halos which match the LCRS $`\xi (r)`$ quite well. The normalization of the high-density models is such that $`\xi (r)`$ always falls below the LCRS power law on large scales. The TCDM halos match well at $`r\lesssim 2h^{-1}\mathrm{Mpc}`$. In the SCDM model, we are unable to reproduce exactly the shape of $`\xi (r)`$ without falling too far below the LCRS amplitude and producing too few halos. However, the differences in $`J_2`$ (see §3.4) computed from these correlation functions show that this mismatch should affect our estimate of $`\mathrm{\Omega }_m`$ by at most 30%.
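The halo-splitting rule can be summarized in a few lines; the following sketch is our paraphrase, and the values of $`N_s`$ and $`\alpha `$ are placeholders, since the adopted values are not quoted in this section.

```python
import numpy as np

rng = np.random.default_rng(1)

def split_halo(n_particles, n_s=30, alpha=0.5):
    """Assign 'galaxies' to a FOF halo of n_particles particles.
    Halos below the splitting threshold n_s count as single galaxies;
    larger halos are split by tagging each member particle with
    probability p(N) = n_s**(alpha-1) * N**(-alpha), as in the text."""
    if n_particles < n_s:
        return 1
    p = n_s ** (alpha - 1.0) * n_particles ** (-alpha)
    return rng.binomial(n_particles, p)

# The expected number of galaxies per halo grows as N^(1-alpha), i.e. the
# number per unit halo mass falls as N^(-alpha):
for N in (10, 100, 1000):
    print(N, np.mean([split_halo(N) for _ in range(2000)]))
```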
### 3.3. Results for $`\sigma _1`$ The results for $`\sigma _1`$ for our four cosmological models are listed in Table 4. We see that the mass in the two $`\mathrm{\Omega }_m=1`$ models is far too hot on $`1h^{-1}\mathrm{Mpc}`$ scales, with $`\sigma _1`$ well over twice the LCRS value. The spectral tilt of the TCDM model has very little effect on the small-scale velocities, as the result is nearly identical to the SCDM result. The mass in the low-$`\mathrm{\Omega }_m`$ models, on the other hand, is also hotter than the LCRS, but only by a factor of about 1.5. The halos in the simulations are somewhat cooler than the mass, with small-scale dispersions lower by factors in the range 0.7–0.9. The LCDM halos come closest to the LCRS value; at $`143\mathrm{km}\mathrm{s}^{-1}`$, they are only marginally ($`1.7\sigma `$) hotter than the LCRS. The open model produces velocity dispersions slightly higher than the LCDM model, while the halos in the $`\mathrm{\Omega }_m=1`$ models are again much hotter than the LCRS data. Figure 5 shows that the exponential $`f(v)`$ provides an excellent fit to the velocity distributions measured in the simulations in redshift space. We show distributions for the $`N`$-body mass particles and for the halos. The halo distributions are noisier because there are many fewer halos than mass particles in the simulation volumes. The distributions for the SCDM and OCDM models are nearly indistinguishable from the TCDM and LCDM distributions, respectively, and are not shown. We have also computed $`\sigma _1`$ for galaxies drawn using more sophisticated semi-analytic techniques from a large Virgo simulation (Benson et al. 1999) of the LCDM model. This simulation has a mass resolution better than ours by about a factor of two, and the box length is nearly three times as large. The result is $`126\mathrm{km}\mathrm{s}^{-1}`$, only slightly lower than our value of $`143\mathrm{km}\mathrm{s}^{-1}`$. This suggests that our procedure for defining galaxies is reasonable. The Virgo result exactly matches the LCRS dispersion, which suggests that the small-scale velocity dispersion predicted by the $`\mathrm{\Omega }_m=0.3`$ flat model is in fact perfectly consistent with the observational data. Further details of this comparison will be presented in a future work (Baker et al. 1999). As noted in §2.2, the LCRS velocity width decreases somewhat as the limiting radius $`r_{p,\mathrm{max}}`$ is increased. In Figure 6, we show this scale dependence measured in independent cylindrical shells of width $`1h^{-1}\mathrm{Mpc}`$, where the limits on the radial integration in the model (Equation 5) have been adjusted appropriately. Although the measured LCRS $`D(\mathrm{\Delta }v)`$ shows little scale dependence, the integrated correlation function broadens with scale, leading to a smaller measured velocity width. None of the $`N`$-body models, however, are able to reproduce the scale dependence observed in the LCRS. The halos drawn from the Virgo simulation, which show very little scale dependence, come closest, while the other models tend to show an increase in velocity dispersion with scale. Only the LCDM model is shown in Figure 6, but we find similar discrepancies for the other models as well. Although the $`\mathrm{\Omega }_m=0.3`$ LCDM model produces a reasonable match to the velocity dispersion on very small scales, all of the models seem unable to reproduce the observed coldness of the velocities on intermediate scales $`1`$–$`3h^{-1}\mathrm{Mpc}`$.
At present it is unclear whether this discrepancy is due to problems with the galaxy selection procedure, the resolution of the simulations, or a more fundamental flaw in the cosmological models. ### 3.4. Filtered Cosmic Energy Equation The $`\sigma _1`$ statistic is ideally suited for the application of the cosmic energy (Layzer-Irvine) equation filtered on small scales. As shown by DMW, we expect $`\sigma _1^2\propto \mathrm{\Omega }_mJ_{2,m}`$ in the absence of velocity bias, where $$J_2=\int _{r_{\mathrm{min}}}^{r_{\mathrm{max}}}dr\,r\xi (r).$$ (8) The subscript $`m`$ means that $`J_2`$ is computed from $`\xi _m(r)`$, the correlation function for the underlying mass. We can write this in terms of the measured $`\xi (r)`$ of an observed sample $`j`$ by defining an effective bias $`b_j^2=J_{2,j}/J_{2,m}`$. If we then compare $`\sigma _{1,j}`$ measured for sample $`j`$ with $`\sigma _{1,N}`$ measured for the underlying mass in an $`N`$-body simulation with mass density parameter $`\mathrm{\Omega }_N`$, we can measure the parameter $$\mathrm{\Omega }_m/b_j^2=\left(\frac{\sigma _{1,j}}{\sigma _{1,N}}\right)^2\left(\frac{J_{2,N}}{J_{2,j}}\right)\mathrm{\Omega }_N.$$ (9) If in addition we can choose a sample of $`N`$-body halos which matches the correlation function of the sample $`j`$, then we have a direct measure of $`\mathrm{\Omega }_m`$: $$\mathrm{\Omega }_m=(\sigma _{1,j}/\sigma _{1,N})^2\mathrm{\Omega }_N,$$ (10) where $`\sigma _{1,N}`$ is now measured for the $`N`$-body halos rather than the underlying mass. The results of combining the LCRS dispersion $`\sigma _1=126\pm 10\mathrm{km}\mathrm{s}^{-1}`$ with our four cosmological $`N`$-body models are listed in Table 5. Based on the halos in each of the four simulations, we derive consistent values $`\mathrm{\Omega }_m\approx 0.2`$. Note that the errors listed on $`\mathrm{\Omega }_m`$ are 1-$`\sigma `$ uncertainties derived solely from the LCRS $`\sigma _1`$ result; they do not include any systematic errors in the model results. The fact that we derive similar values of $`\mathrm{\Omega }_m`$ from each of the different models is an important consistency check, and gives us confidence that our method is indeed a sensitive probe of the matter density. Table 5 also lists the values of $`\mathrm{\Omega }_m/b^2`$ derived by comparing the LCRS dispersion with the dispersion of the $`N`$-body mass. The integral $`J_2`$ converges rather slowly, and its value is quite sensitive to the integration limits $`r_{\mathrm{min}}`$ and $`r_{\mathrm{max}}`$. A reasonable lower limit is $`r_{\mathrm{min}}=0.1h^{-1}\mathrm{Mpc}`$, which eliminates from the analysis the internal velocity dispersion of typical galaxies and includes only the dispersion of galaxies moving relative to each other. We might also take $`r_{\mathrm{max}}`$ to be slightly larger than $`1h^{-1}\mathrm{Mpc}`$, since the length of the redshift-space cylinders means that there will be some contribution to $`\sigma _1`$ from larger scales (although we have measured this effect in the simulations and have found that it is very small). The ranges shown for $`\mathrm{\Omega }_m/b^2`$ were obtained by allowing $`r_{\mathrm{min}}`$ and $`r_{\mathrm{max}}`$ to vary over the ranges 0.05–0.2 and 1–5 $`h^{-1}\mathrm{Mpc}`$, respectively. Our results for the high-density models are consistent with the value $`\mathrm{\Omega }_m/b^2=0.14\pm 0.05`$ found by DMW, who only considered an $`\mathrm{\Omega }_m=1`$ model.
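For the power-law $`\xi (r)`$ the integral of equation (8) has a closed form, and equation (10) is simple arithmetic; the sketch below (our illustration, using the σ₁ values quoted in the text for the LCRS and for the LCDM halos) makes this explicit.

```python
import numpy as np

def J2_powerlaw(r_min, r_max, r0=5.0, gamma=1.8):
    """Eq. (8) for the power-law xi(r) = (r0/r)^gamma, in closed form."""
    a = 2.0 - gamma
    return r0**gamma * (r_max**a - r_min**a) / a

# Eq. (10): Omega_m from the ratio of observed and simulated halo dispersions.
# Illustrative numbers from the text: sigma_1(LCRS) = 126 km/s and
# sigma_1(LCDM halos) = 143 km/s, in a simulation with Omega_N = 0.3.
omega_m = (126.0 / 143.0) ** 2 * 0.3
print(J2_powerlaw(0.1, 1.0), omega_m)   # Omega_m comes out near 0.23
```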
The parameter $`\mathrm{\Omega }_m/b^2`$ is approximately equal to $`\beta ^2`$, where $`\beta \equiv \mathrm{\Omega }_m^{0.6}/b`$ is the parameter measured by large-scale flow analyses. We find $`\beta \approx 0.3`$–$`0.4`$ for the two high-density models, and $`\beta \approx 0.45`$–$`0.55`$ for the two low-density models. These ranges are generally consistent with some large-scale flow determinations (e.g., Willick & Strauss 1998; Baker et al. 1998; Davis, Nusser, & Willick 1996) but not with the POTENT analyses, which prefer $`\beta \approx 1`$ (e.g., Sigad et al. 1998). Of course, the bias may in general depend on scale, in which case our small-scale result need not match the $`\beta `$ values measured using flows on much larger scales. Finally, we can combine the values of $`\mathrm{\Omega }_m`$ and $`\mathrm{\Omega }_m/b^2`$ to obtain an estimate of the bias of the galaxy distribution on small scales. Our high-density models require biases $`b=1.0`$–$`1.5`$, while the low-density models are slightly anti-biased, $`b=0.7`$–$`1.1`$. These ranges are consistent with the biases measured directly from the correlation functions of the simulations. ### 3.5. Effects of Streaming Velocities Although our goal is to measure the particle distribution function from redshift-space information alone, we must do this by considering the relative motions of pairs of galaxies, for which we expect mean streaming as well as thermal motions. As defined in Equation 6, our model does not account for a non-zero first moment of the velocity distribution of pairs of galaxies. However, the first moment will, in general, be non-negligible due to the mean tendency of galaxies to approach each other, and it will contaminate a measurement of the second moment. On small scales in virialized clusters, for example, the infall velocity approximately cancels the Hubble expansion, and so its presence can affect our measurements on $`1h^{-1}\mathrm{Mpc}`$ scales by of order $`100\mathrm{km}\mathrm{s}^{-1}`$. Fig 6.— Object-weighted velocity dispersion measured in independent cylindrical shells of width $`1h^{-1}\mathrm{Mpc}`$. The LCRS data are shown as filled circles with error bars. Also shown are LCDM mass (squares) and halos drawn from our simulations (crosses) and from the Virgo simulation (triangles). Jing & Boerner (1998) have shown that the effect of the streaming motions on the estimate of the pairwise velocity dispersion can be dramatic, increasing $`\sigma _{12}`$ from $`400\mathrm{km}\mathrm{s}^{-1}`$ to $`580\mathrm{km}\mathrm{s}^{-1}`$ at $`1h^{-1}\mathrm{Mpc}`$ separation. The effects of the streaming motions can be incorporated into our analysis by writing the distribution function in Equation 5 as $$f(v)=\frac{1}{\sigma _1^{\prime }}\mathrm{exp}\left(-\frac{|v-\overline{v_1}|}{\sigma _1^{\prime }}\right),$$ (11) where $`\overline{v_1}`$ is the mean object-weighted streaming velocity, which is a function of separation, and $`\sigma _1^{\prime }`$ is the second moment of the streaming-corrected velocity distribution. The form of $`\overline{v_1}`$ is unknown but can be measured directly from $`N`$-body simulations. Our estimate of $`\sigma _1`$ with $`\overline{v_1}=0`$ will be smaller than $`\sigma _1^{\prime }`$ because streaming motions tend to cause objects to pile up at small velocity separations in redshift space. However, $`\sigma _1`$ has the advantage that it is a model-independent statistic, relying only on the assumption of an exponential velocity distribution.
The comparison of the data with $`N`$-body models is also consistent; to the extent that the models describe the real universe, the same streaming motions will be present in both the data and the models, and will affect the estimates of $`\sigma _1`$ similarly. Incorporating a non-zero $`\overline{v_1}`$ introduces model dependencies into the measurement, and there is no guarantee that the infall measured in the $`N`$-body simulations matches that of the real universe. For the application of the cosmic energy equation, it is in fact more appropriate to use $`\sigma _1`$ rather than $`\sigma _1^{\prime }`$, because contributions from both random thermal motions and mean streaming motions are already included. On the other hand, $`\sigma _1^{\prime }`$ is a better measure of the truly thermal energy of the galaxy distribution. We can estimate it by using Equation 11 with an appropriate model for $`\overline{v_1}`$. For the mean pairwise velocity, the simple form $$\overline{v_{12}}(r)=-\frac{FH_0r}{1+(r/r_0)^2},$$ (12) (Davis & Peebles 1983) is often used, where $`F`$ is a numerical factor, typically $`F=1`$–$`1.5`$. Another expression has been proposed more recently by Juszkiewicz, Springel, & Durrer (1999): $$\overline{v_{12}}(r)=-\frac{2}{3}fH_0r\overline{\overline{\xi }}(r)\left[1+\alpha \overline{\overline{\xi }}(r)\right],$$ (13) where $`f\approx \mathrm{\Omega }_m^{0.6}`$, $`\alpha \approx 1.2-0.65\gamma _0`$ with $`\gamma _0\equiv -d\mathrm{ln}\xi /d\mathrm{ln}r|_{\xi =1}`$, and $$[1+\xi (r)]\overline{\overline{\xi }}(r)=\frac{3}{r^3}\int _0^rdx\,x^2\xi (x).$$ (14) These two forms for $`\overline{v_{12}}(r)`$ are nearly equal at small scales $`r\lesssim 10h^{-1}\mathrm{Mpc}`$ if we set $`F=1.8\mathrm{\Omega }_m^{0.6}`$; note that $`F=1`$ corresponds to streaming motions which just cancel the Hubble expansion on small scales. Table 6 shows that the streaming correction has a substantial effect on the derived LCRS velocity width, with $`\sigma _1^{\prime }`$ rising to $`201\pm 13\mathrm{km}\mathrm{s}^{-1}`$ for $`F=1`$ and $`261\pm 15\mathrm{km}\mathrm{s}^{-1}`$ for $`F=1.8`$. The $`\chi ^2`$ statistic worsens somewhat for $`F>1`$. The $`N`$-body models show similar behavior. We caution, however, that the streaming-corrected dispersions are model-dependent and are not an appropriate measure of the single-particle dispersion for use with the cosmic energy equation, which is defined in the comoving frame of the universe. This is in contrast to analyses of the pair dispersion, where it is appropriate to use the cosmic virial theorem, defined in the mean streaming frame.
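A short sketch comparing the two streaming forms, assuming the LCRS power-law $`\xi (r)`$ so that equation (14) can be integrated in closed form and $`\gamma _0`$ equals the power-law index γ (our simplifications):

```python
import numpy as np

R0, GAMMA, H0 = 5.0, 1.8, 100.0      # LCRS power-law xi; H0 in h km/s/Mpc

def xibar2(r):
    """Eq. (14): [1 + xi(r)] xibarbar(r) = (3/r^3) int_0^r x^2 xi(x) dx,
    evaluated in closed form for xi = (r0/r)^gamma."""
    integral = 3.0 * R0**GAMMA * r ** (3.0 - GAMMA) / (3.0 - GAMMA) / r**3
    return integral / (1.0 + (R0 / r) ** GAMMA)

def v12_dp(r, F=1.0):
    """Eq. (12), the Davis & Peebles (1983) form."""
    return -F * H0 * r / (1.0 + (r / R0) ** 2)

def v12_jsd(r, f=0.3**0.6):
    """Eq. (13), Juszkiewicz, Springel & Durrer (1999), with gamma_0 = gamma."""
    alpha = 1.2 - 0.65 * GAMMA
    xb = xibar2(r)
    return -(2.0 / 3.0) * f * H0 * r * xb * (1.0 + alpha * xb)

for r in (0.5, 1.0, 2.0, 5.0):       # h^-1 Mpc; the two forms nearly agree
    print(r, v12_dp(r, F=1.8 * 0.3**0.6), v12_jsd(r))
```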
## 4. Conclusions Although the potential of small-scale cosmological velocities as a cosmological probe has long been recognized, the application of pair-weighted statistics is problematic. We apply an extended version of the more stable galaxy-weighted statistic of DMW to the Las Campanas Redshift Survey. We derive a one-dimensional rms velocity for individual galaxies relative to their neighbors of $`\sigma _1=126\pm 10\mathrm{km}\mathrm{s}^{-1}`$ on scales $`\lesssim 1h^{-1}\mathrm{Mpc}`$. Using this new statistic, we find that the observed velocities remain quite cold relative to the predictions of high-$`\mathrm{\Omega }_m`$ $`N`$-body simulations. Tilting the power spectrum to reduce the initial power on small scales does little to resolve this discrepancy. We have also examined flat and open models with $`\mathrm{\Omega }_m=0.3`$; these models produce significantly lower dispersions than the high-density models. Combining the LCRS data with the predictions based on halos in the simulations, we measure consistent values $`\mathrm{\Omega }_m\approx 0.2`$ for all models, and we can rule out $`\mathrm{\Omega }_m=1`$ with a high degree of confidence. Our result suggests that the extremely cold dispersion measured in the vicinity of the Local Group (Schlegel, Davis, & Summers 1994; Governato et al. 1997) might be a local anomaly, as currently popular low-density models can reproduce the observed mean dispersion on $`1h^{-1}\mathrm{Mpc}`$ scales. On the other hand, at slightly larger separations, we find evidence that all of the models may again be too hot relative to the observations. In the future, it will be extremely useful to apply our statistic to upcoming redshift surveys, such as the Sloan Digital Sky and 2dF surveys, which will contain enough galaxies to compute $`\sigma _1`$ precisely for different sub-samples of the galaxy population. The Deep Extragalactic Probe (DEEP; Davis & Faber 1998) and other surveys at high redshift will also provide a measure of the evolution of $`\sigma _1`$, which can be used to place additional constraints on cosmological parameters and the bias of the galaxy distribution. J. E. B. acknowledges support from an NSF graduate fellowship. This work was supported in part by NSF grant AST95-28340. HL acknowledges support provided by NASA through Hubble Fellowship grant #HF-01110.01-98A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555. We thank C. Frenk and A. Benson for generously providing data from the Virgo simulations, and we thank R. Sheth and R. Juszkiewicz for helpful discussions. We are also grateful to U. Seljak and M. Zaldarriaga for making their CMBFAST code publicly available, and we thank D. Eisenstein for the HOP code.
# The Thermal Explosion Revisited ## 1 Introduction We revisit in this Note the classical problem of a thermal explosion in a long circular cylindrical vessel containing an exothermically reacting gas at rest. It was formulated and solved under some assumptions by D. A. Frank–Kamenetsky. In the original formulation it was assumed that the wall of the vessel is ideally conducting, so that the gas temperature at the boundary is equal to the temperature of the ambient medium. Such a boundary condition made the problem cylindrically symmetric, and the symmetry essentially simplified the solution. In the present Note the problem is modified in the following way: the wall is partially isolated so that the symmetry is lost. The critical values of the radius of the vessel are determined numerically. Especially instructive results were obtained for the cases when the isolated parts of the boundary are distributed periodically with large angular frequency, and the isolated part of the boundary is large. The scaling laws for the critical values were found. For large angular frequencies it was found that there exists an axisymmetric core of the temperature distribution which occupies a major part of the vessel. The conditions in this core at the critical case were found to be subcritical. ## 2 Mathematical Problem Formulation Assume that a gas at rest is enclosed in a long cylindrical vessel of radius $`r_0`$. An exothermic reaction proceeds in the gas with the thermal effect $`Q`$ per unit mass of reacted gas. For the reaction rate the Arrhenius law is assumed with the activation energy $`E`$. If the thermal effect and the activation energy are large, it can be shown that an ‘intermediate-asymptotic’ steady state regime is achieved. For this regime the gas consumption in the reaction can be neglected, and, on the other hand, the temperature distribution in the vessel is steady. Applying the Frank–Kamenetsky large activation energy approximation, a non-linear equation for the dimensionless reduced temperature $`u`$ is obtained: $$\mathrm{\Delta }u+\lambda ^2e^u=0,$$ (2.1) where $$u=\frac{(T-T_0)E}{RT_0^2},$$ (2.2) the Laplace operator $`\mathrm{\Delta }`$ is related to dimensionless variables $`\rho =r/r_0`$, $`\theta `$; $`r,\theta `$ are the polar coordinates. The constant $`\lambda `$ is $$\lambda =\frac{r_0}{l},l=\left(e^{\frac{E}{RT_0}}\kappa RT_0^2c/QE\sigma (T_0)\right)^{1/2}.$$ (2.3) Here $`T`$ is the absolute temperature, $`T_0`$–the temperature of the ambient medium, $`R`$–the universal gas constant, $`\kappa `$–the molecular thermal diffusivity, $`c`$–the heat capacity of the gas per unit volume, $`\sigma (T)`$ is the pre-exponential factor in the Arrhenius reaction rate expression: a slow function of temperature. In the classical problem formulation it was assumed that the whole wall of the vessel is ideally heat conducting, so that the gas temperature at the boundary is equal to the temperature of the ambient medium. This gives a Dirichlet condition for the equation (2.1): $$u(1,\theta )=0.$$ (2.4) The boundary value problem formulation under this condition is axisymmetric, and this was important for obtaining the analytic solution in an explicit form. D. A. Frank–Kamenetsky showed that the solution to the problem (2.1), (2.4) does exist for $`\lambda \le \lambda _{cr}=\sqrt{2}`$ only. Physically it means that a quiet, non-explosive proceeding of the reaction is possible only if the radius of the vessel is less than a critical one: $`r_0\le (r_0)_{cr}=\sqrt{2}l`$.
This condition is known as the condition of the thermal explosion. In the present Note the following modification of the problem (2.1), (2.4) is proposed: Only a fraction $`\alpha `$ of the wall is heat conducting, while the fraction $`1-\alpha `$ is thermally isolated. The simplest formulation corresponds to a mixed problem (Figure 1,a) $$\begin{array}{ccc}\hfill u(1,\theta )=0& \text{at}\hfill & 0\le \theta \le 2\pi \alpha \hfill \\ \hfill \partial _\rho u(1,\theta )=0& \text{at}\hfill & 2\pi \alpha <\theta \le 2\pi .\hfill \end{array}$$ (2.5) More interesting is the case when the isolated part of the wall is not concentrated on a single arc, but is distributed periodically (Figure 1,b): the boundary $`\rho =1`$ is divided into $`N`$ segments $$\frac{2\pi }{N}(1+k)\ge \theta \ge \frac{2\pi }{N}k,k=0,1,\mathrm{\dots },N-1.$$ (2.6) The fraction $`\alpha `$ of each segment is left heat conducting, while the fraction $`1-\alpha `$ becomes isolated. In this case the mixed boundary condition at $`\rho =1`$ has the form: $$\begin{array}{ccc}\hfill u(1,\theta )=0& \text{at}\hfill & \frac{2\pi }{N}k\le \theta \le \frac{2\pi \alpha }{N}+\frac{2\pi }{N}k,\hfill \\ \hfill \partial _\rho u(1,\theta )=0& \text{at}\hfill & \frac{2\pi \alpha }{N}+\frac{2\pi }{N}k<\theta \le \frac{2\pi }{N}(k+1).\hfill \end{array}$$ (2.7) The central question addressed in the present Note is: What are the asymptotic laws for the critical radius if $`N\to \mathrm{\infty }`$ and $`\alpha \to 0`$, i.e. the period of the boundary condition tends to zero, and the isolation is close to a complete one. Remember: for the complete isolation the critical radius is equal to zero. ## 3 The Numerical Method In order to answer the questions posed above, we must numerically evaluate the critical values $`\lambda _{cr}`$ for fixed $`N`$ and $`\alpha `$. There are two special issues in determining $`\lambda _{cr}`$: the singularity of the linearized equation (2.1): $$\mathrm{\Delta }\delta u+\lambda ^2e^u\delta u=0$$ (3.1) at the critical value of $`\lambda =\lambda _{cr}`$, and the dependence of the values of $`\lambda _{cr}`$ obtained by discretization upon the number of points, $`n`$, used to discretize each dimension. Here $`\delta u`$ is the perturbation of the solution. For fixed values of $`N`$, $`n`$, and $`\alpha `$, we determined a trajectory of solutions $`u`$ versus $`\lambda `$ by solving equation (2.1) with a Newton–Raphson method. This method requires the solution of the linearized equation (3.1), which becomes singular at $`\lambda =\lambda _{cr}`$. In practice it prevents us from approaching the critical point closely. Therefore, to make an accurate determination of $`\lambda _{cr}`$, an extrapolation procedure was used. It was assumed that for $`\lambda `$ approaching $`\lambda _{cr}`$ a parabolic approximation is valid: $$\lambda _{cr}^2-\lambda ^2=C(u-u_0)^2.$$ (3.2) The parameters $`\lambda _{cr}`$, $`C`$ and $`u_0`$ were determined to fit the last 10 points on the trajectory $`u`$ versus $`\lambda `$ approaching $`\lambda _{cr}`$. The approximation (3.2) happened to be satisfactory. Typically the fit (3.2) is accurate to a few parts in $`10^6`$. The above procedure yielded an estimate for $`\lambda _{cr}`$ as a function of $`N`$, $`\alpha `$ and the number of discretization points $`n`$. It is natural to remove the dependence of $`\lambda _{cr}`$ on the non-physical, computational parameter $`n`$. For this purpose another extrapolation was used. In our numerical approximations second-order accurate discretizations of the operators were employed.
If the boundary conditions were smooth, the solution $`u`$ would approach a limit with an error of the order of $`1/n^2`$. But the boundary conditions are non-smooth, the derivative of the solution is discontinuous at the boundary, and this causes the order of the approximation to decrease. Extensive numerical calculations have shown $`\lambda _{cr}^2`$ to vary linearly with $`1/n`$. Therefore, the following iterative procedure was used: the value $`\lambda _{cr}(N,\alpha ,n)`$ in (3.2) was calculated for three different values of $`n`$. These three values are then fit to a linear function $`a+b/n`$. If the fit was poor, as might happen if the values of $`n`$ were too small, the procedure is repeated with larger values of $`n`$, and so on, until a satisfactory fitting was obtained. The extrapolation $`n\to \mathrm{\infty }`$ is simply the value of the coefficient $`a`$. ## 4 Results of the Numerical Analysis The results obtained by numerical solution are represented in Figures 2–4. Three instructive properties are revealed. (i) On the graph of Figure 2 the values of $`\lambda _{cr}^2`$ are presented for growing values of $`N`$ as the functions of $`1/\alpha `$. It is seen that for large $`N`$ the critical value $`\lambda _{cr}`$, i.e. the critical radius of the vessel, is practically insensitive to $`\alpha `$ down to very small values of $`\alpha `$. For small $`N`$ the dependence of $`\lambda _{cr}`$ on $`\alpha `$ is strong. Clearly, for any $`N`$, $`\lambda _{cr}=0`$ for $`\alpha =0`$, but it is instructive that, for instance, for $`N=256`$, when only $`1/512`$ ($`0.2`$ percent) of the boundary is heat conducting, the critical value of the radius is only 4 percent less than the critical radius for a wholly heat conducting wall. (ii) For large $`N`$, starting, say, from $`N=32`$, there exists an internal core $`0\le \rho \le \rho _{\ast }`$ where the solution is close to the axisymmetric one (see Figure 3). The value $`\rho _{\ast }`$ was selected so that $`|u_{\mathrm{max}}(\rho _{\ast },\theta )-u_{\mathrm{min}}(\rho _{\ast },\theta )|<10^{-4}`$. Introducing the mean value $`u_{\ast }=(u_{\mathrm{max}}+u_{\mathrm{min}})/2`$ we notice that for $`0\le \rho \le \rho _{\ast }`$ the solution is close to axisymmetric, so that the equation (2.1) and the boundary condition at $`\rho =\rho _{\ast }`$ assume the form: $$\frac{1}{\rho }\frac{d}{d\rho }\rho \frac{du}{d\rho }+\lambda ^2e^u=0,u=u_{\ast }\text{ at }\rho =\rho _{\ast }.$$ (4.1) Transforming the variables $$u=u_{\ast }+v,R=\frac{\rho }{\rho _{\ast }}$$ (4.2) we reduce the problem to a classic one: $$\frac{1}{R}\frac{d}{dR}R\frac{dv}{dR}+\mathrm{\Lambda }^2e^v=0,v(1,\theta )=0$$ (4.3) where $`0\le R\le 1`$, $`\mathrm{\Lambda }^2=\lambda ^2\rho _{\ast }^2e^{u_{\ast }}`$. We calculate now the values of $`\mathrm{\Lambda }_{cr}^2=\lambda _{cr}^2\rho _{\ast }^2e^{u_{\ast }}`$ where $`\lambda _{cr}`$ is the critical value obtained in previous calculations. The graphs of $`\mathrm{\Lambda }_{cr}^2`$ as functions of $`1/\alpha `$ for different $`N`$ are presented in Figure 4. It can be seen that the values of $`\mathrm{\Lambda }_{cr}^2`$ are always less than $`2`$ (within the limits of our numerical accuracy). This means that the ‘rugged’ boundary layer near the wall $`\rho =1`$ controls the approach to criticality. The thickness of this layer is of the order of the length of the segment $`2\pi /N`$. The angular derivative $`\partial _\theta u`$ in the boundary layer is large. The value $`u_{\ast }`$ decreases as $`N`$ increases.
(iii) The intermediate power laws are observed for $`\lambda _{cr}^2`$ at large $`N`$ and small $`\alpha `$: $$\lambda _{cr}^2=S(N)\alpha ^{t(N)}.$$ (4.4) The values of $`S(N)`$ and $`t(N)`$ for various $`N`$ are given in the Table. ## 5 Conclusion Non-axisymmetric modification of the problem of thermal explosion in a cylindrical vessel is formulated. The boundary is partly isolated, and only partly ideally conducting. Special attention is paid to the case of periodic distribution of the isolated and conducting parts. The critical values of radius and other relevant properties are obtained numerically. It is shown that for the period small in comparison with the vessel radius the critical value of radius of the vessel is practically insensitive to the relative size of the open area of the wall up to its very small values. The temperature distribution in the central core is axisymmetric and subcritical even at globally critical conditions: the criticality is due to a thin boundary layer near the wall where the temperature distribution is highly non-axisymmetric. Intermediate power laws are obtained for the critical radius as the function of the relative open area of the wall. This work was supported in part by the Applied Mathematical Sciences subprogram of the Office of Energy Research, U.S. Department of Energy under Contract DE-AC-03-76-SF00098, and in part by the National Science Foundation under Grants DMS94-14631 and DMS-2732710. Table | $`N`$ | $`S(N)`$ | $`t(N)`$ | | --- | --- | --- | | 32 | 2.03 | 0.10 | | 64 | 2.03 | 0.055 | | 128 | 2.02 | 0.028 | | 256 | 2.01 | 0.015 | Figure Captions Figure 1. A fraction of the wall is isolated. (a) Isolated (5/6) and conducting (1/6) parts are connected. (b) Isolated and conducting parts are distributed periodically. Figure 2. Dimensionless critical radius as the function of conducting fraction $`\alpha `$ for different angular frequencies. It is seen that at large frequencies the critical radius is practically $`\alpha `$-independent up to very small values of the conducting fraction $`\alpha `$. Figure 3. The solution reveals an axisymmetric internal central core and ‘rugged’ boundary layer ($`N=32`$, $`\alpha =1/32`$). Figure 4. The temperature distribution in the internal central core is a subcritical one: $`\mathrm{\Lambda }^2<2`$.
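To make the numerical procedure of Section 3 concrete, the following sketch (our own construction, not the authors' code) treats the fully conducting, axisymmetric case, where $`\lambda _{cr}^2=2`$ is known: it traces the solution branch with Newton–Raphson continuation and applies the parabolic extrapolation (3.2). The grid size, the $`\lambda ^2`$ step, and the convergence criteria are illustrative choices; the mixed problems (2.5) and (2.7) require the two-dimensional solver described in the text.

```python
import numpy as np

def solve_fk_radial(lam2, n=200, u0=None):
    """Newton-Raphson solution of the axisymmetric Frank-Kamenetsky problem
    (1/rho) d/drho (rho du/drho) + lam2*exp(u) = 0, u(1) = 0, discretized
    with second-order central differences on interior nodes rho_i = i*h."""
    h = 1.0 / n
    rho = np.arange(1, n) * h
    u = np.zeros(n - 1) if u0 is None else u0.copy()
    for _ in range(50):
        up = np.append(u[1:], 0.0)          # u at rho_{i+1}; wall value is 0
        um = np.insert(u[:-1], 0, u[0])     # u at rho_{i-1}; du/drho(0) = 0
        F = (up - 2*u + um)/h**2 + (up - um)/(2*rho*h) + lam2*np.exp(u)
        J = np.zeros((n - 1, n - 1))        # tridiagonal Jacobian, dense here
        i = np.arange(n - 1)
        J[i, i] = -2.0/h**2 + lam2*np.exp(u)
        J[i[:-1], i[:-1] + 1] = 1.0/h**2 + 1.0/(2*rho[:-1]*h)
        J[i[1:], i[1:] - 1] = 1.0/h**2 - 1.0/(2*rho[1:]*h)
        J[0, 0] += 1.0/h**2 - 1.0/(2*rho[0]*h)   # symmetry at the axis
        du = np.linalg.solve(J, -F)
        if not np.all(np.isfinite(du)):
            return None
        u += du
        if np.abs(du).max() < 1e-12:
            return u
    return None                              # Newton failed: near lambda_cr

# March lam2 upward along the lower solution branch, then fit Eq. (3.2).
lam2s, umax, u = [], [], None
for lam2 in np.arange(0.2, 2.0, 0.01):
    sol = solve_fk_radial(lam2, u0=u)
    if sol is None:
        break
    u = sol
    lam2s.append(lam2)
    umax.append(u[0])
# Parabola lam2 = c2*u^2 + c1*u + c0 through the last points; its vertex
# value c0 - c1^2/(4*c2) estimates lambda_cr^2, which should be close to 2.
c2, c1, c0 = np.polyfit(umax[-10:], lam2s[-10:], 2)
print("lambda_cr^2 ~", c0 - c1**2 / (4*c2))
```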
# Statistical Properties of Statistical Ensembles of Stock Returns ## Abstract We select $`n`$ stocks traded in the New York Stock Exchange and we form a statistical ensemble of daily stock returns for each of the $`k`$ trading days of our database from the stock price time series. We analyze each ensemble of stock returns by extracting its first four central moments. We observe that these moments fluctuate in time and are stochastic processes themselves. We characterize the statistical properties of the central moments by investigating their probability density function and temporal correlation properties. In recent years a large number of statistical analyses of the dynamics of the price time series of a single stock have been performed by physicists interested in the modeling of financial markets. In this paper we present the results of an empirical analysis performed by taking a different approach. We investigate the statistical properties of the daily returns of $`n`$ selected stocks simultaneously traded in a financial market. There are two main motivations for this kind of analysis. From a fundamental point of view our analysis may help in understanding collective behaviors in stock markets. These behaviors become of great importance in times of financial turmoil, when stocks in the market become more interlinked, and during market crashes. From an applied point of view our analysis may be useful in the management of large portfolios of stocks. The investigated market is the New York Stock Exchange (NYSE) during the 12-year period January 1987 to April 1999, comprising 3113 trading days. We select four ensembles of $`n`$ stocks. The number of stocks in each ensemble is not always constant during the investigated period because the number of stocks is rapidly increasing in the NYSE, ranging from approximately $`1100`$ in 1987 to approximately $`2800`$ in 1999. Old stocks disappear and new ones start to be traded in the market. Moreover, for each ensemble we consider only the stocks traded in the NYSE and we exclude those traded in the NASDAQ or AMEX markets. Hence $`n`$ is constant or slowly increasing with time in the selected ensembles of stocks: (i) $`n=30`$ stocks which are used to compute the Dow Jones Industrial Average (DJIA30); (ii) $`n>86`$ stocks which are used to compute the Standard & Poors 100 Index (SP100); (iii) $`n>313`$ stocks which are used to compute the Standard & Poors 500 Index (SP500); and (iv) all the $`n>1100`$ stocks traded in the NYSE (NYSE). The variable investigated in our analysis is the daily return, which is defined as $$R_i(t+1)\equiv \frac{Y_i(t+1)-Y_i(t)}{Y_i(t)},$$ (1) where $`Y_i(t)`$ is the closing price of the $`i`$th stock at day $`t`$. For each day we consider $`n`$ returns. $`n`$ is about $`30,90,420,2100`$ depending on the chosen set. A first analysis concerns the distribution of returns at a given day $`t`$. We observe that in many days the central part of this distribution is approximated by a Laplace or double exponential distribution. Significant changes in the shape and scale are frequently observed, especially in times of financial turmoil. The Laplace distribution has been considered recently in economic analyses of the growth dynamics of companies. In order to characterize more quantitatively the return distribution at day $`t`$, we determine the first four central moments for each of the $`3113`$ trading days of the 4 sets of stocks considered.
Specifically, we consider the mean, the standard deviation, the skewness and the kurtosis defined as $`\mu (t)={\displaystyle \frac{1}{n}}{\displaystyle \sum _{i=1}^{n}}R_i(t),`$ (2) $`\sigma (t)=\sqrt{{\displaystyle \frac{1}{n-1}}{\displaystyle \sum _{i=1}^{n}}(R_i(t)-\mu (t))^2},`$ (3) $`\rho (t)={\displaystyle \frac{1}{n}}{\displaystyle \sum _{i=1}^{n}}\left({\displaystyle \frac{R_i(t)-\mu (t)}{\sigma (t)}}\right)^3,`$ (4) $`\kappa (t)={\displaystyle \frac{1}{n}}{\displaystyle \sum _{i=1}^{n}}\left({\displaystyle \frac{R_i(t)-\mu (t)}{\sigma (t)}}\right)^4.`$ (5) The mean $`\mu (t)`$ gives a measure of the general trend of the market at day $`t`$. The standard deviation $`\sigma (t)`$ controls the width of the distribution and gives a measure of the variety of behaviors observed in the financial market. A large value of $`\sigma (t)`$ indicates that different companies show very different behaviors at day $`t`$. Skewness $`\rho (t)`$ and kurtosis $`\kappa (t)`$ are scale-free parameters, whose values depend on the shape of the distribution but not on its scale. $`\rho (t)`$ describes the asymmetry of the distribution with respect to $`\mu (t)`$. A positive value of $`\rho (t)`$ indicates that few companies make great profits and many companies have small losses at day $`t`$ with respect to the mean. A negative value of $`\rho (t)`$ corresponds to the complementary case. Finally, the kurtosis $`\kappa (t)`$ gives a measure of the distance of the distribution from a Gaussian distribution. In our analysis we have discarded returns whose absolute value was $`|R_i(t)|>0.5`$, because some of these returns might be attributed to errors in the database. Such errors would strongly affect the estimation of the higher moments, because statistical analyses of moments of a distribution higher than the second are more and more sensitive to extreme values. We obtain the values of the four moments for each trading day. These quantities are not constant but fluctuate in time. By observing the time evolution of $`\mu (t)`$, we note that several trading days are present in which big jumps of the mean are observed. These findings can be evaluated more quantitatively by investigating the empirical probability density function (PDF) of the $`\mu (t)`$ temporal series. This PDF is also approximated by a Laplace distribution. In Fig. 1 we show the PDFs of the variety $`\sigma (t)`$. We observe that the distribution is roughly log-normal with an approximately power-law tail observed for the highest values for each ensemble considered. We note that the distributions do not coincide for different ensembles. In particular, the mean of $`\sigma (t)`$ increases by increasing the number of stocks considered and by decreasing the (average) capitalization of the stocks. Indeed, the stocks which compose the DJIA30 set have great capitalization and small volatility. On the other hand, the NYSE set contains companies with both small and large capitalization and with different levels of volatility. The NYSE set is more heterogeneous than the DJIA30 set and this is reflected in a greater value of the variety $`\sigma (t)`$.
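The moments of equations (2)-(5), together with the $`|R|>0.5`$ filter, can be computed per trading day as in the following sketch (our illustration; the synthetic price panel stands in for the real NYSE data).

```python
import numpy as np

def daily_ensemble_moments(prices):
    """Given a (days x stocks) array of closing prices, return the four
    central moments of Eqs. (2)-(5) for each day's ensemble of returns.
    Returns exceeding |R| > 0.5 are discarded, as in the text."""
    R = (prices[1:] - prices[:-1]) / prices[:-1]      # Eq. (1)
    R = np.where(np.abs(R) > 0.5, np.nan, R)
    mu = np.nanmean(R, axis=1)
    sigma = np.nanstd(R, axis=1, ddof=1)              # 1/(n-1) normalization
    z = (R - mu[:, None]) / sigma[:, None]
    skew = np.nanmean(z**3, axis=1)
    kurt = np.nanmean(z**4, axis=1)
    return mu, sigma, skew, kurt

# Hypothetical usage on synthetic data (real input would be the NYSE panel):
prices = np.cumprod(1 + 0.02*np.random.default_rng(2).standard_normal((3113, 30)), axis=0)
print([m[0] for m in daily_ensemble_moments(prices)])
```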
The higher moments are extremely sensitive to rare events. The PDF of the skewness is non-Gaussian with fat tails and is slightly asymmetrical around the value $`\rho =0`$ (especially for the NYSE set). Positive values of the skewness are a bit more probable than negative ones. This implies that days in which few companies reach great gains and many companies have small losses with respect to the mean are slightly more frequent than the complementary case. The PDFs of the kurtosis $`P(\kappa )`$ are approximately characterized by a power-law tail $`\kappa ^{-\gamma }`$ for higher values of $`\kappa `$ for the four ensembles of stocks. The exponent $`\gamma `$ of the power-law region is approximately equal to $`2`$ and becomes slightly greater moving from the NYSE to the DJIA30 sets. In summary, the first four central moments of the distribution of daily returns are distributed in a non-trivial way. In order to better characterize the temporal evolutions of $`\mu (t)`$ and $`\sigma (t)`$, we investigate their memory properties. To this end we calculate their autocorrelation functions. We find that the mean is delta correlated, whereas a different behavior is observed for $`\sigma (t)`$. Fig. 2 shows the autocorrelation function of the variety $`\sigma (t)`$ for the four ensembles considered in a log-log plot. We observe that the autocorrelation function of the empirical data is well approximated by a power-law function $`R(\tau )\propto \tau ^{-\delta }`$. By performing a best fit of $`R(\tau )`$ with a maximum time lag of $`50`$ trading days, we determine $`\delta `$ as $`0.27`$ (DJIA30), $`0.25`$ (SP100), $`0.26`$ (SP500) and $`0.20`$ (NYSE). These results indicate that a long-time memory is present in the market for the variety $`\sigma (t)`$. We observe a power-law autocorrelation function also for the quantity $`|\mu (t)|`$. We mention that the behaviors of the autocorrelation functions of $`\mu (t)`$ and $`\sigma (t)`$ are consistent with the results of our analysis of their Fourier transforms. We observe that $`\mu (t)`$ has a white noise power spectrum, whereas the variety $`\sigma (t)`$ has a power-law power spectrum. In conclusion, we have introduced the concept of variety of a statistical ensemble of stocks traded in a financial market. The statistical properties of the variety are non-trivial and are characterized by a non-Gaussian PDF and by a long-term time memory. The authors thank INFM and MURST for financial support.
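As an illustration of the autocorrelation analysis above, the following sketch estimates $`R(\tau )`$ and the power-law exponent δ by a log-log linear fit over the first 50 lags; the positivity cut on $`R(\tau )`$ is our own, needed for the logarithm.

```python
import numpy as np

def autocorr(x, max_lag=50):
    """Sample autocorrelation R(tau) for tau = 1..max_lag."""
    x = np.asarray(x) - np.mean(x)
    var = np.mean(x**2)
    return np.array([np.mean(x[:-t] * x[t:]) / var for t in range(1, max_lag + 1)])

def powerlaw_exponent(R):
    """Fit R(tau) ~ tau^(-delta) by linear regression in log-log space."""
    tau = np.arange(1, len(R) + 1)
    good = R > 0                       # the logarithm needs positive values
    slope, _ = np.polyfit(np.log(tau[good]), np.log(R[good]), 1)
    return -slope

# sigma_t would be the daily variety series from the previous sketch:
# print(powerlaw_exponent(autocorr(sigma_t)))
```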
# Notes on anelastic effects and thermal noise in suspensions of test masses in interferometric gravitational-wave detectors ## 1 Introduction The thermal noise is expected to be one of the main limiting factors on the sensitivity of interferometric gravitational-wave detectors like LIGO and VIRGO. Thermal fluctuations of internal modes of the interferometer’s test masses and of suspension modes will dominate the noise spectrum in the important frequency range between 50 and 200 Hz (seismic noise and photon shot noise dominate for lower and higher frequencies, respectively). It is important to note that the off-resonant thermal noise level in high-quality systems is so low that it is unobservable in table-top experiments. Therefore, predictions of the thermal-noise spectrum in LIGO are based on a combination of theoretical models (with the fluctuation-dissipation theorem of statistical mechanics serving as a basis) and experimental measurements of quality factors of the systems and materials involved. It is assumed that losses in the test masses and suspensions will occur mainly due to internal friction in their materials, which is related to anelasticity effects in solids. These informal notes comprise some basic results on the theory of anelasticity and thermal noise in pendulum suspensions. This collection is by no means complete and focuses on aspects which are of interest for the author. The original results can be found in a number of books, research papers, and theses. Some of these sources are listed in a short bibliography at the end of the present text; a list of research papers (since 1990) devoted to various aspects of the thermal noise in interferometric gravitational-wave detectors was prepared by the author and is available as a separate document. ## 2 Fluctuation-dissipation theorem Consider a linear one-dimensional mechanical system with coordinate $`x(t)`$. If a force $`F(t)`$ acts on the system, then in the frequency domain the force and the coordinate are related by $$x(\omega )=H(\omega )F(\omega ),$$ (2.1) where $`H(\omega )`$ is the system response function. Then the spectral densities (see Appendix) are related by $$S_x(\omega )=|H(\omega )|^2S_F(\omega ).$$ (2.2) The impedance of the system is defined as $`Z(\omega )=F(\omega )/v(\omega )=F(\omega )/[i\omega x(\omega )]`$. Therefore, the impedance and the response function are related by $`Z(\omega )=1/[i\omega H(\omega )]`$. If the system is in equilibrium with a thermal bath of temperature $`𝒯`$, then the *fluctuation-dissipation theorem* (FDT) says that the spectral density of the thermal force is $$S_F^{\mathrm{th}}(\omega )=4k_B𝒯\mathrm{Re}Z(\omega ),$$ (2.3) where $`k_B`$ is the Boltzmann constant. The form (2.3) of the FDT is valid in the classical regime, when the thermal energy $`k_B𝒯`$ is much larger than the energy quantum $`\hbar \omega `$. Using the FDT, one readily obtains the thermal noise spectrum $$S_x^{\mathrm{th}}(\omega )=\frac{4k_B𝒯}{\omega ^2}\mathrm{Re}Y(\omega ),$$ (2.4) where $`Y(\omega )=1/Z(\omega )`$ is the admittance and $`\mathrm{Re}Y(\omega )=\mathrm{Re}Z(\omega )/|Z(\omega )|^2`$ is the conductance. The FDT is the basis for calculations of the thermal noise spectrum in interferometric gravitational-wave detectors. ### 2.1 Example: Damped harmonic oscillator Consider a damped harmonic oscillator of mass $`m`$, spring constant $`k`$, and damping constant $`\gamma `$.
The equation of motion is $$m\ddot{x}+\gamma \dot{x}+kx=F(t).$$ (2.5) In the frequency domain this can be written as $$(-m\omega ^2+i\gamma \omega +k)x(\omega )=F(\omega ).$$ (2.6) The impedance of this system is $`Z(\omega )=\gamma +i(m\omega -k/\omega )`$. Then the FDT gives the spectral densities: $$S_F^{\mathrm{th}}(\omega )=4k_B𝒯\gamma ,S_x^{\mathrm{th}}(\omega )=\frac{4k_B𝒯\gamma }{(m\omega ^2-k)^2+\gamma ^2\omega ^2}.$$ (2.7) ## 3 Anelasticity of solids The FDT means that if a system has no dissipation channel, thermal fluctuations will be zero. For an ideal elastic spring without friction, $`\mathrm{Re}Z(\omega )=0`$, and there are no fluctuations: $`S_x^{\mathrm{th}}(\omega )=0`$. Deviations of solids from the ideal elastic behavior (anelasticity) will result in internal friction (dissipation) and related thermal noise. For gravitational-wave detectors like LIGO, the test masses will be highly isolated, so the internal friction in the materials of which the masses and their suspensions are made is believed to be the main source of dissipation and thermal noise. ### 3.1 The complex Young modulus and the loss function Deformations of solids are usually described in terms of stress $`\sigma `$ and strain $`ϵ`$ (equivalents of the mechanical restoring spring force $`F_s`$ and displacement $`x`$, respectively). Perfect elastic solids satisfy Hooke’s law $`\sigma (t)=Eϵ(t),`$ (3.1) where $`E`$ is the (constant) Young modulus (an equivalent of the spring constant $`k`$). Anelasticity can be described by introducing the complex Young modulus (or the complex spring constant in a mechanical model). This is done in the frequency domain: $$E(\omega )=\frac{\sigma (\omega )}{ϵ(\omega )},k(\omega )=\frac{F_s(\omega )}{x(\omega )}.$$ (3.2) If an external force $`F(t)`$ acts on a point mass $`m`$ attached to such an anelastic spring, then the equation of motion in the frequency domain is $$[-m\omega ^2+k(\omega )]x(\omega )=F(\omega ).$$ (3.3) The impedance of this system is $$Z(\omega )=\frac{-m\omega ^2+k(\omega )}{i\omega },$$ (3.4) and $`\mathrm{Re}Z(\omega )=(1/\omega )\mathrm{Im}k(\omega )`$. Now, the FDT gives the thermal noise spectrum: $$S_x^{\mathrm{th}}(\omega )=\frac{4k_B𝒯}{k_R\omega }\frac{\varphi (\omega )}{(1-m\omega ^2/k_R)^2+\varphi ^2}.$$ (3.5) Here, $`k_R(\omega )\equiv \mathrm{Re}k(\omega )`$, and $$\varphi (\omega )=\frac{\mathrm{Im}k(\omega )}{\mathrm{Re}k(\omega )}$$ (3.6) is the so-called loss function. Note that $`\varphi =\mathrm{tan}\delta `$, where $`\delta `$ is the angle by which strain lags behind stress. The loss function $`\varphi `$ is a measure of the energy dissipation in the system. The rate at which energy is dissipated is $`\overline{F_s\dot{x}}`$. Then the energy dissipated per cycle by an anelastic spring is $$\mathrm{\Delta }\mathcal{E}=(2\pi /\omega )\overline{F_s\dot{x}}.$$ (3.7) Taking $`F_s=F_0\mathrm{cos}\omega t`$ and $`x=x_0\mathrm{cos}(\omega t-\delta )`$, one finds $$\mathrm{\Delta }\mathcal{E}=\pi x_0F_0\mathrm{sin}\delta .$$ (3.8) If $`\delta `$ is small, then the total energy of the spring vibration is $`\mathcal{E}=\frac{1}{2}x_0F_0`$. Then for $`\delta \ll 1`$, one obtains $$\varphi =\frac{\mathrm{\Delta }\mathcal{E}}{2\pi \mathcal{E}}.$$ (3.9) For small $`\varphi `$ (which is usually the case for the internal friction in materials used in detectors like LIGO), it is customary to neglect the frequency dependence of $`k_R`$. Then one can write $`k(\omega )=k[1+i\varphi (\omega )]`$, where $`k=m\omega _0^2`$ is a constant (and $`\omega _0`$ is the resonant frequency).
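Before turning to the frequency dependence of $`k_R`$, it is instructive to compare the spectra (2.7) and (3.5) numerically; see the sketch below. The oscillator parameters are placeholder values of our choosing, with the viscous damping and the constant loss angle matched at resonance ($`\gamma =m\omega _0/Q`$, $`\varphi =1/Q`$).

```python
import numpy as np

KB, T = 1.380649e-23, 300.0          # SI units; room temperature

def S_viscous(w, m, k, gamma):
    """Eq. (2.7): displacement noise of a velocity-damped oscillator."""
    return 4*KB*T*gamma / ((m*w**2 - k)**2 + gamma**2 * w**2)

def S_structural(w, m, k, phi):
    """Eq. (3.5) with a frequency-independent loss angle phi."""
    return 4*KB*T*phi / (k*w) / ((1 - m*w**2/k)**2 + phi**2)

m, f0, Q = 10.0, 1.0, 1e6            # illustrative pendulum-like numbers
k = m*(2*np.pi*f0)**2
w = 2*np.pi*np.logspace(-1, 3, 5)
# With losses matched at resonance, the two models differ off resonance:
# below w0 the structural spectrum falls as 1/w, the viscous one is flat.
print(S_viscous(w, m, k, m*2*np.pi*f0/Q))
print(S_structural(w, m, k, 1.0/Q))
```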
Though this is a good approximation for many practical purposes, in general $`k_R`$ must be frequency-dependent, because the real and imaginary parts of $`k(\omega )`$ are related via the Kramers-Kronig relations.

### 3.2 Simple models

Here we consider some simple models of anelasticity in solids. None of them gives a full description of the behavior of a real material, but they are nevertheless useful from the didactic point of view.

#### 3.2.1 Perfect elastic solid

The mechanical model of a perfect elastic solid is a lossless spring. In this case $`\sigma =Eϵ`$, so $`\varphi =0`$. There is no dissipation and no thermal noise.

#### 3.2.2 Maxwell solid

The mechanical model of the Maxwell solid is a lossless spring in series with a dashpot. The dashpot provides a source of viscous friction with $`\sigma =\eta \dot{ϵ}`$. Then for the Maxwell solid stress and strain are related by the equation $$\dot{ϵ}=E^{-1}\dot{\sigma }+\eta ^{-1}\sigma .$$ (3.10) This equation shows that for a constant strain, stress decays exponentially. On the other hand, for a constant stress, strain increases linearly, which is a very poor description of crystalline solids. Going to the frequency domain, one obtains $$E(\omega )=\frac{\sigma (\omega )}{ϵ(\omega )}=\frac{i\omega \eta E}{E+i\omega \eta }=\frac{\omega ^2\eta ^2E+i\omega \eta E^2}{E^2+\omega ^2\eta ^2}$$ (3.11) and $`\varphi (\omega )=E/(\eta \omega )`$.

#### 3.2.3 Voigt-Kelvin solid

The mechanical model corresponding to the Voigt-Kelvin anelastic solid consists of a lossless spring and a dashpot in parallel, which corresponds to a damped harmonic oscillator. The relation between stress and strain reads $$\eta \dot{ϵ}+Eϵ=\sigma .$$ (3.12) For a constant stress $`\sigma _0`$, strain changes exponentially with the decay time $`\eta /E`$ from its initial value $`ϵ_0`$ to the equilibrium (Hooke) value $`\sigma _0/E`$. For a constant strain, stress is also constant, as in Hooke’s law. This is a good description for materials like cork, but it is not suitable for metals. In the frequency domain, one has $$E(\omega )=E+i\eta \omega ,\qquad \varphi (\omega )=(\eta /E)\omega .$$ (3.13) Substituting this $`\varphi `$ into Eq. (3.5), we find $$S_x^{\mathrm{th}}(\omega )=\frac{4k_B𝒯\eta }{(m\omega ^2-E)^2+\eta ^2\omega ^2}.$$ (3.14) This is the same as Eq. (2.7) for a damped harmonic oscillator with $`E\leftrightarrow k`$ and $`\eta \leftrightarrow \gamma `$.

#### 3.2.4 Standard anelastic solid

Though the model of the standard anelastic solid (SAS) does not give a complete account of the properties of real metals, it describes quite well the basic mechanisms responsible for anelastic effects. In fact, if a dissipation mechanism has characteristic relaxation times for strain upon a constant stress and for stress upon a constant strain, then the SAS model gives an adequate description. The corresponding mechanical model consists of a spring in parallel with a Maxwell unit (which is a spring in series with a dashpot). Let $`E_1`$ and $`E_2`$ be the Young moduli of the separate spring and of the spring in the Maxwell unit, respectively, and $`\eta `$ be the dashpot constant, as usual.
Then stress and strain are related by the following equation: $$\frac{E_2}{\eta }\sigma +\dot{\sigma }=\frac{E_1E_2}{\eta }ϵ+(E_1+E_2)\dot{ϵ}.$$ (3.15) For a constant strain $`ϵ_0`$, stress decays exponentially from its initial value $`\sigma _0`$ to the equilibrium (Hooke) value $`E_1ϵ_0`$: $$\sigma (t)=E_1ϵ_0+(\sigma _0-E_1ϵ_0)e^{-t/\tau _ϵ},\qquad \tau _ϵ=\eta /E_2.$$ (3.16) Analogously, for a constant stress $`\sigma _0`$, strain relaxes exponentially from its initial value $`ϵ_0`$ to the equilibrium (Hooke) value $`\sigma _0/E_1`$: $$ϵ(t)=\frac{\sigma _0}{E_1}+\left(ϵ_0-\frac{\sigma _0}{E_1}\right)e^{-t/\tau _\sigma },\qquad \tau _\sigma =\frac{E_1+E_2}{E_1E_2}\eta .$$ (3.17) Then Eq. (3.15) can be rewritten in the following form $$\sigma +\tau _ϵ\dot{\sigma }=E_R(ϵ+\tau _\sigma \dot{ϵ}),$$ (3.18) where $`E_R\equiv E_1`$ is called the relaxed Young modulus. Transforming to the frequency domain, one obtains $$(1+i\omega \tau _ϵ)\sigma (\omega )=E_R(1+i\omega \tau _\sigma )ϵ(\omega ).$$ (3.19) Then the complex Young modulus is given by $$E(\omega )=E_R\frac{1+i\omega \tau _\sigma }{1+i\omega \tau _ϵ}=E_R\frac{(1+\omega ^2\tau _\sigma \tau _ϵ)+i\omega (\tau _\sigma -\tau _ϵ)}{1+\omega ^2\tau _ϵ^2}.$$ (3.20) It is easy to see that $$E(\omega )\approx \{\begin{array}{cc}E_R,\hfill & \omega \overline{\tau }\ll 1\hfill \\ E_U,\hfill & \omega \overline{\tau }\gg 1,\hfill \end{array}$$ (3.21) where $`E_U\equiv E_1+E_2`$ is called the unrelaxed Young modulus. The loss function has the form $$\varphi (\omega )=\frac{\omega (\tau _\sigma -\tau _ϵ)}{1+\omega ^2\tau _\sigma \tau _ϵ}=\mathrm{\Delta }\frac{\omega \overline{\tau }}{1+\omega ^2\overline{\tau }^2},$$ (3.22) where $$\overline{\tau }=\sqrt{\tau _\sigma \tau _ϵ}=\tau _ϵ\sqrt{\frac{E_U}{E_R}},\qquad \mathrm{\Delta }=\frac{E_U-E_R}{\sqrt{E_UE_R}}=\frac{\tau _\sigma -\tau _ϵ}{\sqrt{\tau _\sigma \tau _ϵ}}.$$ (3.23) One sees that $`\varphi \propto \omega `$ for $`\omega \overline{\tau }\ll 1`$ and $`\varphi \propto \omega ^{-1}`$ for $`\omega \overline{\tau }\gg 1`$. The loss function has its maximum $`\varphi =\mathrm{\Delta }/2`$ at $`\omega =\overline{\tau }^{-1}`$. This is called the Debye peak. This behavior is characteristic of processes with exponential relaxation of stress and strain. $`\overline{\tau }`$ is the characteristic relaxation time and $`\mathrm{\Delta }`$ is the relaxation strength.

##### Thermoelastic damping mechanism

Zener pointed out that the SAS model with $$\varphi (\omega )=\mathrm{\Delta }\frac{\omega \overline{\tau }}{1+\omega ^2\overline{\tau }^2},$$ (3.24) is suitable for describing processes in which the relaxation of stress and strain is related to a diffusion process. One example of such a process is the so-called thermoelastic damping. Consider a specimen which is subject to a deformation such that one part of it expands and the other contracts (e.g., a wire of a pendulum, which bends near the top while the pendulum swings). The temperature increases in the contracted part and decreases in the expanded part. The resulting thermal diffusion leads to the dissipation of energy. This anelastic effect can be described by the SAS model with the thermal relaxation strength and relaxation time given by $$\mathrm{\Delta }=\frac{E_U𝒯\alpha ^2}{C_v},\qquad \overline{\tau }\sim \frac{d^2}{D},$$ (3.25) where $`𝒯`$ is the temperature, $`\alpha `$ is the linear thermal expansion coefficient, $`C_v`$ is the specific heat per unit volume, $`d`$ is the characteristic distance heat must flow, and $`D`$ is the thermal diffusion coefficient, $`D=\varrho /C_v`$, where $`\varrho `$ is the thermal conductivity.
For a cylindrical wire of diameter $`d`$, the frequency of the Debye peak is $$\overline{f}=\frac{1}{2\pi \overline{\tau }}\approx 2.6\frac{D}{d^2}.$$ (3.26)

### 3.3 Boltzmann’s superposition principle

While the SAS has certain general features in common with actual solids, it does not reproduce precisely the behavior of any real metal. The simple models considered above can be generalized by a theory which only assumes that the relation between stress and strain is linear. This assumption was expressed by Boltzmann in the form of a superposition principle: If the deformation $`x_1(t)`$ was produced by the force $`F_1(t)`$ and the deformation $`x_2(t)`$ was produced by the force $`F_2(t)`$, then the force $`F_1(t)+F_2(t)`$ will produce the deformation $`x_1(t)+x_2(t)`$. On the other hand, the deformation can be regarded as the independent variable. In this case the superposition principle states: If the force $`F_1(t)`$ is required to produce the deformation $`x_1(t)`$ and the force $`F_2(t)`$ is required to produce the deformation $`x_2(t)`$, then the force $`F_1(t)+F_2(t)`$ will be required to produce the deformation $`x_1(t)+x_2(t)`$. Let us introduce the quantity $`\lambda (t)`$, which is called the creep function and is the deformation resulting from the sudden application at $`t=0`$ of a constant force of magnitude unity. During an infinitesimal interval from $`t^{\prime }`$ to $`t^{\prime }+dt^{\prime }`$, the increment of the applied force $`F(t)`$ can be approximated by a constant step force of magnitude $`\dot{F}dt^{\prime }`$. Then the superposition principle gives $$x(t)=\int _{-\mathrm{\infty }}^t\lambda (t-t^{\prime })\dot{F}(t^{\prime })dt^{\prime }.$$ (3.27) Conversely, we may regard the deformation as a specified function of time. Let us define the quantity $`\kappa (t)`$, which is called the stress function and is the force which must be applied in order to produce the step-function deformation $`x(t)=\mathrm{\Theta }(t)`$ (here $`\mathrm{\Theta }(t)`$ is 1 for $`t\ge 0`$ and 0 for $`t<0`$). Then the linear relationship is $$F(t)=\int _{-\mathrm{\infty }}^t\kappa (t-t^{\prime })\dot{x}(t^{\prime })dt^{\prime }.$$ (3.28) The relation between the creep function and the stress function is rather complicated; in general they satisfy the following inequality: $$\lambda (t)\kappa (t)\le 1.$$ (3.29) For constant $`\kappa (t)=k`$ we recover Hooke’s law $`F(t)=kx(t)`$, and then $`\lambda (t)=k^{-1}`$. Integrating by parts in Eq. (3.28), we obtain another expression of the superposition principle, $$F(t)=\int _{-\mathrm{\infty }}^tf(t-t^{\prime })x(t^{\prime })dt^{\prime },$$ (3.30) where $$f(t)=\kappa (0)\delta (t)+\dot{\kappa }(t).$$ (3.31) The relationship between the force and the deformation becomes very simple in the frequency domain. Toward this end we introduce the functions $$f_p(t)=f(t)\mathrm{\Theta }(t),\qquad \kappa _p(t)=\kappa (t)\mathrm{\Theta }(t),\qquad \lambda _p(t)=\lambda (t)\mathrm{\Theta }(t),$$ (3.32) which are zero for $`t<0`$. Using these functions, one can extend the upper integration limit in Eqs. (3.27), (3.28), and (3.30) to $`\mathrm{\infty }`$. Then we can simply use the fact that a convolution in the time domain is a product in the frequency domain. This gives $$F(\omega )=i\omega \kappa _p(\omega )x(\omega )=f_p(\omega )x(\omega ).$$ (3.33) Thus the Fourier transform of the stress function is simply related to the complex spring constant of Eq. (3.2): $$k(\omega )=f_p(\omega )=i\omega \kappa _p(\omega ).$$ (3.34)

#### 3.3.1 Example: Standard anelastic solid

For the SAS the stress function is given by $$\kappa (t)=E_R+(E_U-E_R)e^{-t/\tau _ϵ}.$$ (3.35) It is straightforward to see that this function leads to the first-order differential equation of the form (3.18).
Then we find the function $`f(t)`$, $$f(t)=E_U\delta (t)-\frac{E_U-E_R}{\tau _ϵ}e^{-t/\tau _ϵ},$$ (3.36) and the complex spring constant, $$k(\omega )=\int _0^{\mathrm{\infty }}f(t)e^{-i\omega t}dt=E_U-\frac{E_U-E_R}{1+i\omega \tau _ϵ}.$$ (3.37) This can be rewritten in the form $$k(\omega )=E_R\frac{1+i\omega \tau _\sigma }{1+i\omega \tau _ϵ}=E_R\frac{(1+\omega ^2\tau _\sigma \tau _ϵ)+i\omega (\tau _\sigma -\tau _ϵ)}{1+\omega ^2\tau _ϵ^2},$$ (3.38) which coincides with Eq. (3.20).

## 4 Calculation of the thermal noise spectrum for a pendulum suspension

For a point mass $`m`$ attached to an anelastic spring with the complex spring constant $`k(\omega )`$, we found the simple result $$Z(\omega )=\frac{k(\omega )-m\omega ^2}{i\omega },$$ which can be used in the FDT to derive the thermal noise spectrum $`S_x^{\mathrm{th}}(\omega )`$ as given by Eq. (3.5). However, the question is how to find the thermal noise spectrum for more complicated systems, e.g., for pendulum suspensions of test masses in interferometric gravitational-wave detectors like LIGO. In the literature we can find two different approaches: the “direct” application of the FDT to the whole system, and the method which is based on decomposing a complicated system into a set of normal modes. Below, we describe briefly both of these approaches.

### 4.1 The direct approach

In brief, this method can be described as follows. First, one should write the equations of motion for the whole system and find the impedance $`Z(\omega )`$. Then the FDT provides the thermal noise spectrum: $$S_x^{\mathrm{th}}(\omega )=\frac{4k_B𝒯}{\omega ^2}\mathrm{Re}[1/Z(\omega )].$$ (4.1) The impedance $`Z(\omega )`$ contains the information about resonances of the system. The dissipation enters by taking the Young moduli of the materials to be complex: $$E(\omega )=[1+i\varphi (\omega )]\mathrm{Re}E(\omega ),$$ (4.2) or, for simplicity, $`E(\omega )=E_0[1+i\varphi (\omega )]`$, where $`E_0`$ is a constant. The loss function $`\varphi (\omega )`$ is obtained from experiments on the anelasticity of the materials used in the system (e.g., on the suspension wires). Of course, the resulting noise spectrum $`S_x^{\mathrm{th}}(\omega )`$ depends very much on what form of $`\varphi (\omega )`$ is used.

### 4.2 The normal-mode decomposition

The normal-mode decomposition is a more traditional approach. Consider, for example, a one-dimensional system of linear mass density $`\rho (z)`$ and total length $`L`$, which is described in terms of the normal modes $`\psi _n(z)`$. These modes satisfy the orthonormality relation, $$\int _0^L\rho (z)\psi _m(z)\psi _n(z)dz=\delta _{mn},$$ (4.3) and an arbitrary displacement $`x(z,t)`$ can be decomposed as $$x(z,t)=\sum _n\psi _n(z)q_n(t).$$ (4.4) Here, $`q_n(t)`$ are the mode coordinates which satisfy $$\ddot{q}_n+\omega _n^2q_n=F_n(t),$$ (4.5) where $`\omega _n`$ are the resonance frequencies of the modes, and $$F_n(t)=\int _0^Lf(z,t)\psi _n(z)dz$$ (4.6) is the generalized force produced by the force density $`f(z,t)`$ applied to the system. This decomposition effectively replaces the complicated system by a collection of oscillators, each of which satisfies $$[-\omega ^2+\omega _n^2(\omega )]q_n(\omega )=F_n(\omega ).$$ (4.7) The dissipation is included by taking $$\omega _n^2(\omega )=\omega _n^2[1+i\varphi _n(\omega )],$$ (4.8) where $`\varphi _n(\omega )`$ are the loss functions.
Then we can write $$q_n(\omega )=\frac{F_n(\omega )}{-\omega ^2+\omega _n^2+i\omega _n^2\varphi _n(\omega )}.$$ (4.9) Let us assume that the force is applied at the end of the system $`z=L`$, such that $`f(z,t)=F(t)\delta (z-L)`$. Then the generalized forces are $`F_n(t)=F(t)\psi _n(L)`$, and we can substitute Eq. (4.9) into the Fourier transform of Eq. (4.4) to obtain $$x(L,\omega )\equiv x(\omega )=\sum _n\frac{\psi _n^2(L)}{-\omega ^2+\omega _n^2+i\omega _n^2\varphi _n(\omega )}F(\omega ).$$ (4.10) This gives the admittance of the system in the form $$Y(\omega )=1/Z(\omega )=\sum _n\frac{i\omega \psi _n^2(L)}{-\omega ^2+\omega _n^2+i\omega _n^2\varphi _n(\omega )}.$$ (4.11) Then the FDT can be used to obtain the spectral density of thermal fluctuations at $`z=L`$: $$S_x^{\mathrm{th}}(\omega )=\frac{4k_B𝒯}{\omega }\sum _n\frac{\psi _n^2(L)\omega _n^2\varphi _n(\omega )}{(\omega _n^2-\omega ^2)^2+\omega _n^4\varphi _n^2}.$$ (4.12) This can be written as a sum $$S_x^{\mathrm{th}}(\omega )=\sum _nS_n^{\mathrm{th}}(\omega )$$ (4.13) over the contributions $$S_n^{\mathrm{th}}(\omega )=\frac{4k_B𝒯}{\omega }\frac{k_n^{-1}\varphi _n(\omega )}{(1-m_n\omega ^2/k_n)^2+\varphi _n^2}=\frac{4k_B𝒯}{\omega }\frac{m_n^{-1}\omega _n^2\varphi _n(\omega )}{(\omega _n^2-\omega ^2)^2+\omega _n^4\varphi _n^2}$$ (4.14) of independent oscillators labeled by the index $`n`$. Each of these oscillators consists of a point mass $`m_n=[\psi _n(L)]^{-2}`$ attached to an anelastic spring with the complex spring constant $`k_n(\omega )=k_n[1+i\varphi _n(\omega )]`$, such that the resonant angular frequencies are $`\omega _n=\sqrt{k_n/m_n}`$. So, in order to obtain the thermal noise spectrum one needs to find all the normal modes, their effective masses, resonant frequencies, and loss functions.

### 4.3 Modes of a pendulum suspension

The most important modes of a pendulum suspension are the pendulum mode, the rocking mode, and the violin modes. We will not consider here the rocking mode, because for multi-loop suspensions the rocking motion of the test mass is essentially suppressed. The loss function of each mode depends on the type of the mode and on the anelastic properties of the pendulum wire.

#### 4.3.1 The pendulum mode

For the pendulum mode, we will assume that the mass of the wire is much smaller than the mass of the bob (which is the test mass) and that the bob is attached near its center of mass. Also, the angle by which the pendulum swings is considered to be very small. Then the pendulum may be modelled as an oscillator of the resonant angular frequency $$\omega _\mathrm{p}=\sqrt{g/L},$$ (4.15) where $`g`$ is the acceleration due to the Earth’s gravity field, and $`L`$ is the pendulum length. The energy of the pendulum consists of two parts: the gravitational energy $`\mathcal{E}_{\mathrm{gr}}`$ and the elastic energy $`\mathcal{E}_{\mathrm{el}}`$ due to the bending of the wire. The gravitational energy is lossless; provided that all the losses due to interactions with the external world (friction in the residual gas, damping by eddy currents, recoil losses into the seismic isolation system, friction in the suspension clamps, etc.) are made insignificant by careful experimental design, the assumption is made that the losses are dominated by internal friction in the wire material. Consequently, $`\mathrm{\Delta }\mathcal{E}=\mathrm{\Delta }\mathcal{E}_{\mathrm{el}}`$.
Usually, $`\mathcal{E}_{\mathrm{gr}}\gg \mathcal{E}_{\mathrm{el}}`$, so we obtain for the pendulum-mode loss function: $$\varphi _\mathrm{p}=\frac{\mathrm{\Delta }\mathcal{E}}{2\pi \mathcal{E}_{\mathrm{tot}}}=\frac{\mathrm{\Delta }\mathcal{E}_{\mathrm{el}}}{2\pi (\mathcal{E}_{\mathrm{el}}+\mathcal{E}_{\mathrm{gr}})}\approx \frac{\mathrm{\Delta }\mathcal{E}_{\mathrm{el}}}{2\pi \mathcal{E}_{\mathrm{el}}}\frac{\mathcal{E}_{\mathrm{el}}}{\mathcal{E}_{\mathrm{gr}}}.$$ (4.16) Note that $$\varphi _\mathrm{w}=\frac{\mathrm{\Delta }\mathcal{E}_{\mathrm{el}}}{2\pi \mathcal{E}_{\mathrm{el}}}$$ (4.17) is the loss function for the wire itself, which arises due to anelastic effects in the wire material. Then we obtain $$\varphi _\mathrm{p}=\xi _\mathrm{p}\varphi _\mathrm{w},$$ (4.18) where $`\xi _\mathrm{p}`$ is the ratio between the elastic energy and the gravitational energy for the pendulum mode, $$\xi _\mathrm{p}=\left(\frac{\mathcal{E}_{\mathrm{el}}}{\mathcal{E}_{\mathrm{gr}}}\right)_\mathrm{p}.$$ (4.19) The lossless gravitational energy of the pendulum is $$\mathcal{E}_{\mathrm{gr}}=\frac{1}{2}M\omega _\mathrm{p}^2L^2\theta _m^2=\frac{1}{2}MgL\theta _m^2,$$ (4.20) where $`M`$ is the pendulum mass and $`\theta _m`$ is the maximum angle of swing. The elastic energy depends on how many wires are used and how they are attached to the pendulum. For one wire, the fiber in the pendulum mode will bend mostly near the top, with the bending elastic energy $$\mathcal{E}_{\mathrm{el}}=\frac{1}{4}\sqrt{TEI}\theta _m^2.$$ (4.21) Here, $`T`$ is the tension force in the wire ($`T=Mg`$ for one wire), $`E`$ is the Young modulus of the wire material, and $`I`$ is the moment of inertia of the wire cross section ($`I=\frac{1}{2}\pi r^4`$ for a cylindrical wire of radius $`r`$). Using these results, one finds for a single-wire pendulum: $$\xi _\mathrm{p}=\frac{\sqrt{TEI}}{2MgL}=\frac{1}{2L}\sqrt{\frac{EI}{Mg}}=\frac{1}{2L}\sqrt{\frac{EI}{T}}.$$ (4.22) This result can be easily generalized to the case when the test mass is suspended by $`N`$ wires. Then the elastic energy $`\mathcal{E}_{\mathrm{el}}`$ of Eq. (4.21) should be multiplied by $`N`$, and the tension in each wire becomes $`T=Mg/N`$. Then $$\xi _\mathrm{p}=\frac{N\sqrt{TEI}}{2MgL}=\frac{1}{2L}\sqrt{\frac{EIN}{Mg}}=\frac{1}{2L}\sqrt{\frac{EI}{T}}.$$ (4.23) In Eq. (4.23) we assumed that all the wires are in one plane: a plane through the center of mass of the pendulum, whose normal is parallel to the direction of swing. (Note that in such a configuration one should take into account the rocking mode of the test mass.) In this arrangement, the pendulum mode causes bending of the wires mostly at the top. If one uses a number of wire loops along the test mass length, then the rocking mode is essentially suppressed and the wires bend both at the top and the bottom. Therefore, the bending elastic energy of the multi-loop configuration is given by multiplying the result of Eq. (4.21) by $`2N`$, $$\mathcal{E}_{\mathrm{el}}=\frac{N}{2}\sqrt{TEI}\theta _m^2.$$ (4.24) Then the energy ratio is $$\xi _\mathrm{p}=\frac{N\sqrt{TEI}}{MgL}=\frac{1}{L}\sqrt{\frac{EIN}{Mg}}=\frac{1}{L}\sqrt{\frac{EI}{T}}.$$ (4.25) The contribution of the pendulum mode to the thermal noise spectrum is obtained from Eq. (4.14) by taking $`m_n=M`$, $`k_n=Mg/L`$, $`\omega _n=\omega _\mathrm{p}`$ and $`\varphi _n=\varphi _\mathrm{p}=\xi _\mathrm{p}\varphi _\mathrm{w}`$. This gives $$S_\mathrm{p}^{\mathrm{th}}(\omega )=\frac{4k_B𝒯}{\omega M}\frac{\omega _\mathrm{p}^2\varphi _\mathrm{p}(\omega )}{(\omega _\mathrm{p}^2-\omega ^2)^2+\omega _\mathrm{p}^4\varphi _\mathrm{p}^2}.$$ (4.26) For LIGO suspensions, $`f_\mathrm{p}=\omega _\mathrm{p}/2\pi `$ is about 1 Hz.
This is much below the working frequency range (near 100 Hz), so we may assume $`\omega _\mathrm{p}/\omega \ll 1`$. Also, the loss function is very small, $`\varphi _\mathrm{p}<10^{-5}`$. Then the pendulum-mode contribution to the thermal noise spectrum is $$S_\mathrm{p}^{\mathrm{th}}(\omega )\approx \frac{4k_B𝒯\omega _\mathrm{p}^2\varphi _\mathrm{p}(\omega )}{M\omega ^5}=\frac{4k_B𝒯}{L^2}\sqrt{\frac{gEIN}{M^3}}\frac{\varphi _\mathrm{w}(\omega )}{\omega ^5}.$$ (4.27)

#### 4.3.2 The violin modes

The angular frequency of the $`n`$th violin mode ($`n=1,2,3,\mathrm{}`$) is given by $$\omega _n=\frac{n\pi }{L}\sqrt{\frac{T}{\rho }}\left[1+\frac{2}{k_eL}+\frac{1}{2}\left(\frac{n\pi }{k_eL}\right)^2\right],$$ (4.28) where $`L`$ is the length of the wire, $`T`$ is the tension force, $`\rho `$ is the linear mass density of the wire, and $$k_e\equiv \sqrt{\frac{T}{EI}}.$$ (4.29) In a violin mode the wire bends near both ends in a similar way. The bending occurs over the characteristic distance scale $`k_e^{-1}=\sqrt{EI/T}`$, the same as in the pendulum mode. For $`k_e^{-1}\ll L`$, which is a very good approximation for heavily loaded thin wires like those in LIGO, one has approximately $$\omega _n\approx \frac{n\pi }{L}\sqrt{\frac{T}{\rho }}.$$ (4.30) This is just the angular frequency of the $`n`$th vibrational mode of an ideal string. It can be shown that for the $`n`$th violin mode, the loss function is $$\varphi _n=\xi _n\varphi _\mathrm{w},\qquad \xi _n=\left(\frac{\mathcal{E}_{\mathrm{el}}}{\mathcal{E}_{\mathrm{gr}}}\right)_n,$$ (4.31) where the energy ratio is $$\xi _n=\frac{2}{k_eL}\left(1+\frac{n^2\pi ^2}{2k_eL}\right)\approx \frac{2}{L}\sqrt{\frac{EI}{T}}\left(1+\frac{1}{2L}\sqrt{\frac{EI}{T}}n^2\pi ^2\right).$$ (4.32) Since $`k_eL\gg 1`$, for the first several modes the energy ratio is approximately $$\xi _n\approx \xi _\mathrm{v}=\frac{2}{L}\sqrt{\frac{EI}{T}}.$$ (4.33) This expression takes into account only the contribution to the elastic energy due to wire bending near the top and the bottom. For higher violin modes, one should also consider the contribution due to wire bending along its length, which leads to Eq. (4.32). For the one-loop suspension configuration, the elastic energy of the lowest violin modes is about twice that of the pendulum mode (for the pendulum mode the wires bend only at the top, while for the violin modes they bend at both ends). In the multi-loop configuration, the elastic energy of the lowest violin modes and of the pendulum mode is approximately the same. On the other hand, the gravitational energy of the pendulum mode is a factor of 2 larger than that of a violin mode. For the violin modes of each wire, the gravitational energy is $`\frac{1}{4}TL\theta _m^2`$. Then for $`N`$ wires, $$(\mathcal{E}_{\mathrm{gr}})_\mathrm{v}=\frac{1}{4}NTL\theta _m^2=\frac{1}{4}MgL\theta _m^2.$$ (4.34) This is just one half of the gravitational energy for the pendulum mode, $`(\mathcal{E}_{\mathrm{gr}})_\mathrm{p}=\frac{1}{2}MgL\theta _m^2`$ (cf. Eq. (4.20)). This explains the difference between the loss functions for the pendulum mode and for the violin modes: $`\xi _\mathrm{v}\approx 4\xi _\mathrm{p}`$ for the one-loop configuration and $`\xi _\mathrm{v}\approx 2\xi _\mathrm{p}`$ for the multi-loop configuration. The effective mass of the $`n`$th violin mode is $$m_n=[\psi _n(L)]^{-2}=\frac{1}{2}NM\left(\frac{\omega _n}{\omega _\mathrm{p}}\right)^2\approx \frac{\pi ^2M^2}{2\rho L}n^2,$$ (4.35) where we took expression (4.30) for $`\omega _n`$ and $`T=Mg/N`$. This effective mass arises because the violin vibrations of the wire cause only a tiny recoil of the test mass $`M`$.
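To make the formulas of this section concrete, here is a short numerical sketch evaluating Eqs. (4.15), (4.25), (4.30), (4.33), and (4.35). All parameter values are assumptions chosen purely for illustration (roughly LIGO-like numbers for a multi-loop steel-wire suspension); none of them comes from the text.

```python
import math

# Assumed, illustrative parameters (not from the text):
M = 10.0                        # test-mass mass [kg]
L = 0.45                        # wire length [m]
N = 2                           # number of wires
r = 1.55e-4                     # wire radius [m]
E = 2.0e11                      # Young modulus of steel [Pa]
rho = 7.8e3 * math.pi * r**2    # linear mass density of the wire [kg/m]
g = 9.8                         # gravitational acceleration [m/s^2]

T = M * g / N                      # tension per wire
I = 0.5 * math.pi * r**4           # cross-section moment, as defined after Eq. (4.21)
w_p = math.sqrt(g / L)             # pendulum-mode angular frequency, Eq. (4.15)
xi_p = math.sqrt(E * I / T) / L    # multi-loop energy ratio, Eq. (4.25)
xi_v = 2.0 * xi_p                  # lowest violin modes, Eq. (4.33)

print(f"f_p = {w_p / (2 * math.pi):.2f} Hz, xi_p = {xi_p:.2e}, xi_v = {xi_v:.2e}")
for n in (1, 2, 3):
    w_n = (n * math.pi / L) * math.sqrt(T / rho)   # violin modes, Eq. (4.30)
    m_n = 0.5 * N * M * (w_n / w_p) ** 2           # effective mass, Eq. (4.35)
    print(f"n = {n}: f_n = {w_n / (2 * math.pi):6.0f} Hz, m_n = {m_n:.2e} kg")
```

With these assumed numbers the dilution factors come out at $`\xi _\mathrm{p}\sim 10^{-3}`$ and $`\xi _\mathrm{v}\sim 10^{-2}`$, and the large effective masses $`m_n\sim 10^6`$ kg illustrate why the violin resonances, though sharp, couple only weakly to the test-mass motion.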
The contribution of the violin modes to the thermal noise spectrum is given by $$S_\mathrm{v}^{\mathrm{th}}(\omega )=\frac{4k_B𝒯}{\omega }\sum _{n=1}^{\mathrm{\infty }}\frac{m_n^{-1}\omega _n^2\varphi _n(\omega )}{(\omega _n^2-\omega ^2)^2+\omega _n^4\varphi _n^2}.$$ (4.36) Typical values of $`f_1=\omega _1/2\pi `$ are from 350 to 500 Hz. If we are interested in the thermal spectral density near 100 Hz, we can assume $`\omega ^2\ll \omega _n^2`$. Then we have approximately $$S_\mathrm{v}^{\mathrm{th}}(\omega )\approx \frac{8k_B𝒯\omega _\mathrm{p}^2}{NM\omega }\sum _{n=1}^{\mathrm{\infty }}\frac{\varphi _n(\omega )}{\omega _n^4}\approx \frac{8k_B𝒯N\rho ^2L^3}{\pi ^4gM^3\omega }\sum _{n=1}^{\mathrm{\infty }}\frac{\varphi _n(\omega )}{n^4}.$$ (4.37) One can see that the contributions of higher violin modes are very small due to the factor $`n^{-4}`$ in the sum. Taking $`\varphi _n=\xi _n\varphi _\mathrm{w}`$ and using Eq. (4.32), we obtain $$\sum _{n=1}^{\mathrm{\infty }}\frac{\varphi _n(\omega )}{n^4}=\frac{2}{k_eL}\left(\frac{\pi ^4}{90}+\frac{\pi ^4}{12k_eL}\right)\varphi _\mathrm{w}(\omega )\approx \frac{\pi ^4}{45L}\sqrt{\frac{EI}{T}}\varphi _\mathrm{w}(\omega ).$$ (4.38) Here, we assumed $`k_eL\gg 1`$. Finally, we substitute (4.38) into (4.37) and find the following expression for the violin-mode contribution to the thermal noise spectrum, $$S_\mathrm{v}^{\mathrm{th}}(\omega )\approx \frac{8}{45}k_B𝒯\rho ^2L^2\sqrt{\frac{EIN^3}{g^3M^7}}\frac{\varphi _\mathrm{w}(\omega )}{\omega }.$$ (4.39)

## 5 Experiments on anelasticity effects for pendulum suspensions

### 5.1 Basic types of experiments

In order to predict the thermal noise fluctuations in pendulum suspensions, two basic types of experiments are performed:
1. Investigations of the anelastic properties of wires made of various materials, in order to determine the wire loss function $`\varphi _\mathrm{w}(\omega )`$.
2. Measurements of quality factors ($`Q=\varphi ^{-1}`$ at a resonance) for the pendulum and violin modes of actual suspensions, in order to verify the relationships $$\varphi _\mathrm{p}(\omega )=\xi _\mathrm{p}\varphi _\mathrm{w}(\omega ),\qquad \varphi _\mathrm{v}(\omega )=\xi _\mathrm{v}\varphi _\mathrm{w}(\omega ).$$ (5.1)

Early experiments showed a serious discrepancy between the measured quality factors and those predicted using Eq. (5.1). It was suggested that this discrepancy might arise from stress-dependent effects. However, it was shown later that the internal losses of the wires are almost independent of the applied stress. Many recent experiments have shown that the above discrepancy is due to excess losses in the clamps. A careful design of the clamps can reduce these excess losses, and then the predictions of Eq. (5.1) are quite accurate. A very promising possibility is the use of monolithic or semi-monolithic suspensions. The design of the clamps plays a crucial role in the reduction of the thermal noise of the test mass suspensions.

### 5.2 Internal losses in wire materials

A number of experiments were performed to study the internal losses of various wire materials (e.g., steel, tungsten, fused quartz, and some others). The main drawback of many of these experiments is the small number of frequencies at which $`\varphi _\mathrm{w}`$ was measured. Also, there are serious discrepancies between the results of different experiments. Therefore, the exact frequency dependence of $`\varphi _\mathrm{w}`$ is still unclear for many materials. Below, we briefly review the results of some recent experiments.
##### Kovalik and Saulson, 1993

Method: Quality factors were measured for resonances of freely suspended wires. Materials: Tungsten, silicon, sapphire, fused quartz. Results: Insignificant frequency dependence for tungsten; for fused quartz, the measured $`\varphi _\mathrm{w}`$ is above that predicted by thermoelastic damping (TED) for some frequencies and near TED for others; sapphire and silicon showed behavior consistent with TED.

##### Saulson et al., 1994

Method: Quality factors were measured for an inverted pendulum of tunable length. Material: Free-Flex cross-spring flexure made of crossed steel strips. Results: In agreement with a frequency-independent $`\varphi _\mathrm{w}`$.

##### Gillespie and Raab, 1994

Method: Quality factors were measured for resonances of freely suspended wires. Material: Steel music wires. Results: A constant value of $`\varphi _\mathrm{w}`$ at low frequencies (from 30 to 150 Hz). At higher frequencies (from 150 Hz to 2 kHz) $`\varphi _\mathrm{w}`$ increases with $`\omega `$, as TED predicts, but the measured value $`\varphi _{\mathrm{meas}}`$ is well above $`\varphi _{\mathrm{TED}}`$. These results may be explained by the formula $`\varphi _{\mathrm{meas}}=\varphi _{\mathrm{TED}}+\varphi _{\mathrm{ex}}`$, where $`\varphi _{\mathrm{ex}}`$ is a frequency-independent excess loss.

##### Rowan et al., 1997

Method: Quality factors were measured for resonances of ribbons fixed at one end. Material: Fused quartz ribbons. Results: Data were obtained for 5 resonances in the range from 6 to 160 Hz. $`\varphi _{\mathrm{meas}}`$ is well above $`\varphi _{\mathrm{TED}}`$ at lower frequencies (below 30 Hz), and in agreement with TED at higher frequencies (above 80 Hz).

##### Dawid and Kawamura, 1997

Method: Quality factors were measured for the violin modes of wires fixed at both ends in a “guitar”-type apparatus. Materials: Invar, titanium, steel, tungsten and several other metals. Results: $`\varphi _{\mathrm{meas}}^{-1}`$ was proportional to $`\sqrt{T}`$, in accordance with the formula $`\varphi _\mathrm{v}=(2/L)\sqrt{EI/T}\varphi _\mathrm{w}`$ for a frequency-independent $`\varphi _\mathrm{w}`$.

##### Huang and Saulson, 1998

Method: Quality factors were measured for resonances of freely suspended wires. Materials: Steel and tungsten. Results: For steel, $`\varphi _{\mathrm{meas}}`$ coincides with the predictions of TED (the characteristic Debye-peak frequency dependence). Some differences were found between the properties of annealed wires ($`\varphi _{\mathrm{meas}}`$ slightly above $`\varphi _{\mathrm{TED}}`$) and “curly” wires ($`\varphi _{\mathrm{meas}}`$ slightly below $`\varphi _{\mathrm{TED}}`$); the difference can be explained by modifications of thermal properties. For tungsten wires, $`\varphi _{\mathrm{meas}}`$ increases only slightly with frequency; the loss function increases with the wire diameter, as expected for TED at frequencies well below $`\overline{f}`$.

## 6 Conclusions

It is seen that predictions of the spectral density of thermal fluctuations in pendulum suspensions depend strongly on the type of the dissipation mechanism. Sources of external losses (friction in the residual gas, damping by eddy currents, recoil losses into the seismic isolation system, friction in the suspension clamps, etc.) should be eliminated by careful experimental design. In particular, the results of many recent experiments show that excess losses in clamps may seriously deteriorate the quality factors of suspension resonances.
When external losses are made sufficiently small, the main source of dissipation is the internal friction in the wires due to anelastic effects. The thermal noise spectrum depends on the form of the loss function. Unfortunately, the exact frequency dependence of the wire loss function $`\varphi _\mathrm{w}(\omega )`$ is not yet completely understood. In many experiments $`\varphi _\mathrm{w}`$ was measured only at a few frequencies, and the experimental uncertainty of the results was often quite large. Moreover, there are contradictions between the results of different experiments. Therefore, it is very difficult to draw firm conclusions about the behavior of $`\varphi _\mathrm{w}(\omega )`$. In particular, it is unclear whether clamp losses are negligible in experiments with freely suspended wires, as is usually assumed. Certainly, there is room for more experiments on the anelastic properties of wires, in order to clarify the issue of internal friction in the frequency range of interest for gravitational-wave detection.

## Acknowledgment

This work would be impossible without great help and encouragement by Malik Rakhmanov. I thank him for long hours of illuminating discussions and for encouraging me to enter the realm of thermal noise and anelasticity.

## Appendix: Correlation function and spectral density

Consider a system characterized by some quantity $`\alpha `$ (e.g., position or velocity). For stationary processes, the correlation function is $$\rho _\alpha (t)=\langle \alpha (\tau )\alpha (\tau +t)\rangle ,$$ where the average is over a statistical ensemble. Using the ergodic theorem, this can be replaced by the time average, $$\rho _\alpha (t)=\underset{T\rightarrow \mathrm{\infty }}{\mathrm{lim}}\frac{1}{T}\int _{-T}^T\alpha (t^{\prime })\alpha (t^{\prime }+t)dt^{\prime }.$$ Now, define the function $$\alpha _T(t)=\{\begin{array}{cc}\alpha (t),\hfill & t\in [-T,T]\hfill \\ 0,\hfill & \mathrm{otherwise}\hfill \end{array}$$ and its Fourier transform $$\alpha _T(\omega )=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}\alpha _T(t)e^{i\omega t}dt.$$ The definition of the spectral density is $$S_\alpha (\omega )=\underset{T\rightarrow \mathrm{\infty }}{\mathrm{lim}}\frac{|\alpha _T(\omega )|^2}{\pi T}.$$ It is easy to see that the correlation function $`\rho _\alpha (t)`$ and the spectral density $`S_\alpha (\omega )`$ are related via the Fourier transform: $$\rho _\alpha (t)=\frac{1}{2}\int _{-\mathrm{\infty }}^{\mathrm{\infty }}S_\alpha (\omega )e^{i\omega t}d\omega ,\qquad S_\alpha (\omega )=\frac{1}{\pi }\int _{-\mathrm{\infty }}^{\mathrm{\infty }}\rho _\alpha (t)e^{-i\omega t}dt.$$ This result is known as the Wiener-Khinchin theorem.

## Bibliography

The fluctuation-dissipation theorem was introduced in H. B. Callen and T. A. Welton, “Irreversibility and Generalized Noise,” Phys. Rev. 83, 34 (1951); H. B. Callen and R. F. Greene, “On a Theorem of Irreversible Thermodynamics,” Phys. Rev. 86, 702 (1952). The theorem is discussed in a number of textbooks on statistical physics, for example, L. E. Reichl, *A Modern Course in Statistical Physics* (Univ. Texas Press, Austin, 1980); L. D. Landau and E. M. Lifshitz, *Statistical Physics* (Pergamon Press, Oxford, 1980). The theory of anelasticity is discussed in C. Zener, *Elasticity and Anelasticity of Metals* (Univ. Chicago Press, Chicago, 1948); A. S. Nowick and B. S. Berry, *Anelastic Relaxation in Crystalline Solids* (Academic Press, New York, 1972). The theory of thermoelastic damping was presented by Zener: C. Zener, “Theory of Internal Friction in Reeds,” Phys. Rev. 52, 230 (1937); C. Zener, “General Theory of Thermoelastic Internal Friction,” Phys. Rev. 53, 90 (1938).
Thermal fluctuations of pendulum suspensions and related problems were discussed in many works. Some of them are listed below: P. R. Saulson, “Thermal noise in mechanical experiments,” Phys. Rev. D 42, 2437 (1990); G. I. González and P. R. Saulson, “Brownian motion of a mass suspended by an anelastic wire,” J. Acoust. Soc. Am. 96, 207 (1994); J. E. Logan, J. Hough, and N. A. Robertson, “Aspects of the thermal motion of a mass suspended as a pendulum by wires,” Phys. Lett. A 183, 145 (1993); A. Gillespie and F. Raab, “Thermal noise in the test mass suspensions of a laser interferometer gravitational-wave detector prototype,” Phys. Lett. A 178, 357 (1993); J. Gao, L. Ju, and D. G. Blair, “Design of suspension systems for measurement of high-Q pendulums,” Meas. Sci. Technol. 6, 269 (1995); V. B. Braginsky, V. P. Mitrofanov, and K. V. Tokmakov, “On the thermal noise from the violin modes of the test mass suspension in gravitational-wave antennae,” Phys. Lett. A 186, 18 (1994); V. B. Braginsky, V. P. Mitrofanov, and K. V. Tokmakov, “Energy dissipation in the pendulum mode of the test mass suspension of a gravitational wave antenna,” Phys. Lett. A 218, 164 (1996); G. Cagnoli, L. Gammaitoni, J. Kovalik, F. Marchesoni, and M. Punturo, “Suspension losses in low-frequency mechanical pendulums,” Phys. Lett. A 213, 245 (1996). Experiments on internal friction in various types of wires (see Sec. 5.2) were reported in the following papers: J. Kovalik and P. R. Saulson, “Mechanical loss in fibers for low-noise pendulums,” Rev. Sci. Instrum. 64, 2942 (1993); P. R. Saulson, R. T. Stebbins, F. D. Dumont, and S. E. Mock, “The inverted pendulum as a probe of anelasticity,” Rev. Sci. Instrum. 65, 182 (1994); A. Gillespie and F. Raab, “Suspension losses in the pendula of laser interferometer gravitational-wave detectors,” Phys. Lett. A 190, 213 (1994); S. Rowan, R. Hutchins, A. McLaren, N. A. Robertson, S. M. Twyford, and J. Hough, “The quality factor of natural fused quartz ribbons over a frequency range from 6 to 160 Hz,” Phys. Lett. A 227, 153 (1997); D. J. Dawid and S. Kawamura, “Investigation of violin mode Q for wires of various materials,” Rev. Sci. Instrum. 68, 4600 (1997); Y. L. Huang and P. R. Saulson, “Dissipation mechanisms in pendulums and their implications for gravitational wave interferometers,” Rev. Sci. Instrum. 69, 544 (1998).
# La Palma night-sky brightness

## 1 Introduction

The zenith brightness of the moonless night sky at a clear dark observing site, measured at high ecliptic and galactic latitudes, and during solar minimum, is typically B = 22.9 mag arcsec<sup>-2</sup>, V = 21.9 mag arcsec<sup>-2</sup>, about 10 million times dimmer than the daylight sky (but easily visible to the dark-adapted eye). This glow<sup>1</sup> (<sup>1</sup>1 $`S_{10}`$ = one 10th-mag star per square degree; 220 $`S_{10}`$ corresponds to $`V`$ = 21.9 mag arcsec<sup>-2</sup>) comes from airglow (145 $`S_{10}`$), zodiacal light (60 $`S_{10}`$), the integrated light of faint stars ($`<`$ 5 $`S_{10}`$), starlight scattered by interstellar dust (10 $`S_{10}`$), and extragalactic light ($`\sim `$ 1 $`S_{10}`$). Auroral light is significant at geomagnetic latitude $`>`$ 40<sup>o</sup>. The airglow is emitted by atoms and molecules in the upper atmosphere which are excited by solar UV radiation during the day. Its intensity correlates with solar activity, being $`\sim `$ 0.5 mag brighter at solar maximum, and can also vary randomly, by up to a few tens of %, with position on the sky and with time during the night. The strength of at least the NaD line varies with season: $`\sim `$ 30 Rayleighs in local summer vs $`\sim `$ 180 Rayleighs in winter. Zodiacal light is sunlight scattered by interplanetary dust. At high ecliptic latitude, it contributes $`\sim `$ 60 $`S_{10}`$. Its brightness rises slowly with decreasing ecliptic latitude, to about 140 $`S_{10}`$ on the ecliptic plane, for ecliptic longitude $`>`$ 90<sup>o</sup> from the sun. The spectrum of zodiacal light is very similar to that of the sun over the UV - IR range, and its fractional contribution to the brightness of the night sky peaks at a wavelength of 4500 Å ($`\sim `$ 0.5 of the total for $`\beta =30^o`$). Starlight contributes substantially to the integrated brightness of the sky, $`25+250e^{-|b|/20^o}`$ $`S_{10}`$ units (approximate fit to the data of Roach & Gordon 1973, $`b`$ = galactic latitude). If all the starlight were scattered uniformly over the sky, it would produce a background of $`\sim `$ 100 $`S_{10}`$ units. However, most of this light is from stars with 6 $`<V<`$ 16, and published measurements of sky brightness usually now refer to the sky between stars with $`V<`$ 20. Starlight scattered by interstellar dust produces a diffuse glow concentrated along the galactic plane, analogous to the zodiacal light along the ecliptic. Faint galaxies contribute $`<`$ 5 S<sub>10</sub> (observational upper limit) to the brightness of the night sky; a lower limit can be estimated from the faint galaxy counts, $`>`$ 1 $`S_{10}`$. Light pollution at observatory sites arises principally from tropospheric scattering of light emitted by sodium- and mercury-vapour and incandescent street lamps<sup>2</sup> (<sup>2</sup>Lighting wastefully emitted above the horizontal costs the US taxpayer $`\sim `$ \$10<sup>9</sup> year<sup>-1</sup>, more than the cost of funding US astronomy; Hunter & Crawford 1991). Garstang (1989) estimates the increase in brightness at zenith distance 45<sup>o</sup>, in the direction of a conurbation of population P at distance D km, to be $`PD^{-2.5}`$/70 mag. The IAU recommendations for a dark site are continuum $`\mathrm{\Delta }mag<`$ 0.1 for 3000 $`<\lambda <`$ 10000 Å, and intensity of NaD light pollution $`<`$ that from airglow, at $`ZD`$ = 45<sup>o</sup> towards any city (Smith 1979).
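The population/distance scaling just quoted is straightforward to evaluate. The short Python sketch below applies Garstang's $`PD^{-2.5}`$/70 estimate to a few hypothetical towns; the populations and distances are invented for illustration and are not data from this paper.

```python
# Rough sketch of the scaling quoted above: brightening at zenith
# distance 45 deg ~ P * D**-2.5 / 70 mag. The example towns are hypothetical.
def brightening_mag(population, distance_km):
    """Approximate sky brightening (mag) at ZD = 45 deg towards the town."""
    return population * distance_km ** -2.5 / 70.0

for name, pop, d_km in [("small town", 5_000, 10.0),
                        ("medium town", 20_000, 15.0),
                        ("large city", 275_000, 65.0)]:
    print(f"{name:12s}: {brightening_mag(pop, d_km):.3f} mag")
```

Even a small town at 10 km yields a brightening of order 0.2 mag in this simple model, which is why strict control of lighting matters at an observatory site.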
At high airmass, light from outside the atmosphere may be dimmed by scattering and absorption, but the airglow will probably be brighter, since a line of sight intercepts a larger number of atoms in the airglow layer. For airglow arising in a thin layer at height $`h`$, the intensity should vary with zenith distance $`ZD`$ as $`(1-(a/(a+h))^2\mathrm{sin}^2ZD)^{-0.5}`$, where $`a`$ is the radius of the earth (the van Rhijn formula).

## 2 Measurements from La Palma

The observatory on La Palma lies close to the island’s peak, on the rim of a large volcanic caldera, at longitude 18<sup>o</sup> W, latitude 29<sup>o</sup> N, altitude 2300 m, geomagnetic latitude $`\sim `$ 20<sup>o</sup> N. Approximately 70% of the nights are clear (95% in the summer) and 60% are photometric. The median site seeing is $`\sim `$ 0.7 arcsec. Atmospheric extinction is typically 0.15 mag in $`V`$, but can be substantially higher during the summer, when dust from the Sahara desert (400 km away) blows over the Canary Islands. The island has a population of $`\sim `$ 80000 people, but light-pollution is strictly controlled, by a 1992 decree, the ‘Ley del Cielo’. Dark-of-moon sky brightness was measured from 427 CCD images taken with the Isaac Newton and Jacobus Kapteyn Telescopes on 63 photometric nights. The measurements were made in areas free of nebulosity, stars, cosmic-ray events or CCD defects. The median values of sky brightness during solar minimum 1994-6 are $`B`$ = 22.7 $`\pm `$ 0.03, $`V`$ = 21.9 $`\pm `$ 0.03, $`R`$ = 21.0 $`\pm `$ 0.03 mag arcsec<sup>-2</sup>, with a scatter of about 0.15 mag in each band. We have fewer measurements in $`U`$ and $`I`$ bands; the median sky brightnesses are $`U`$ $`\approx `$ 22.0, $`I`$ $`\approx `$ 20.0 mag arcsec<sup>-2</sup>, consistent with the few values measured elsewhere. The $`I`$ brightness is dominated by OH emission bands (Fig. 1) and varies by up to a factor $`\sim `$ 2 during the night. The brightnesses in $`V`$ and $`R`$ are similar to those measured at other sites. The measured brightness in $`B`$ is 0.2 mag higher than the sunspot-minimum value reported for other sites at previous minima, but the La Palma median is dominated by data from 1995, and Krisciunas (1997) found that the $`B`$ sky brightness at Mauna Kea was still declining then. The 0.15-mag scatter in the measured values is a combination of measurement errors and true variations in sky brightness. The measured sky brightness varies with solar cycle, being 0.4 $`\pm `$ 0.1 mag brighter in 1990 than in 1995 (Fig. 2). It also varies with ecliptic latitude, being 0.4 mag brighter on the ecliptic than at the poles, consistent with the known variation in the brightness of the zodiacal light. The sky brightens steadily with rising airmass of observation, being 0.25 $`\pm `$ 0.07 mag brighter at airmass $`\sim `$ 1.5 than it is at the zenith, consistent with the equation at the end of Section 1, if airglow contributes 70% of the total (a numerical check is sketched below). The sky is brighter under exceptionally dusty conditions, with extinction $`A_V>`$ 0.25 mag. 80000 people live on La Palma, mostly in or near nine small towns lying between 10 and 15 km from the observatory. The predicted brightening of the zenith sky from the main populated areas, including the neighbouring islands of La Gomera, El Hierro and Tenerife, is given in the last column of the table overleaf, using the model of Garstang (1989). The predictions are approximate, but serve to indicate the relative sizes of the light-pollution contributions expected from different sources.
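As a rough consistency check on the airmass dependence reported above, the sketch below combines the van Rhijn formula from Section 1 with the stated assumption that airglow contributes 70% of the zenith total. The airglow-layer height of 90 km is an assumed value, and extinction (which dims the airglow somewhat at high airmass) is ignored, so the estimate should be slightly high.

```python
import math

def van_rhijn(zd_deg, h_km=90.0, a_km=6378.0):
    """Airglow enhancement factor (1 - (a/(a+h))**2 * sin(ZD)**2)**-0.5."""
    s = math.sin(math.radians(zd_deg))
    return (1.0 - (a_km / (a_km + h_km)) ** 2 * s * s) ** -0.5

zd = math.degrees(math.acos(1.0 / 1.5))   # zenith distance at airmass ~ 1.5
f_airglow = 0.70                          # assumed airglow fraction at zenith
total = (1.0 - f_airglow) + f_airglow * van_rhijn(zd)
print(f"ZD = {zd:.1f} deg, van Rhijn factor = {van_rhijn(zd):.2f}")
print(f"predicted brightening = {2.5 * math.log10(total):.2f} mag")
```

The result, about 0.3 mag at airmass 1.5, is indeed close to the measured 0.25 $`\pm `$ 0.07 mag.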
Most of the expected contamination is from La Palma’s 14000 streetlamps, which emit $`\sim `$ 120 Mlumens, corresponding to $`\sim `$ 180 kW of visible photons. Approximately half of the light generated emerges from the lamp housings, and $`\sim `$ 10% of the emerging light is reflected from the ground, so $`\sim `$ 9 kW escapes to the sky. The $`V`$-band zenith sky brightness on La Palma is similar to that at other dark sites, which suggests that light pollution must contribute $`<`$ 0.1 mag to the total. A stronger limit on the contribution of light pollution to the brightness of the sky has been obtained from the equivalent widths of the NaD and Hg lines in the spectrum of the zenith night sky. These yield directly the contribution to sky brightness from emission lines, while that from broad and continuum features can be inferred from the strengths of the emission lines, the known shapes of the lamp spectra, and the relative numbers of lamps of different kinds. From these, we estimate that light pollution contributes $`<`$ 0.03 mag to the zenith continuum sky brightness in all bands; and that the total contamination is $`<`$ 0.03 mag in $`U`$ (Hg lines), $`\sim `$ 0.02 mag in $`B`$ (Hg lines), $`\sim `$ 0.10 mag in $`V`$ (mainly low-pressure and high-pressure NaD), $`\sim `$ 0.10 mag in $`R`$ (NaD, as for V band). This degree of light pollution is comparable to that at Kitt Peak, $`\sim `$ 0.02 mag in $`B`$, 0.05 mag in $`V`$ in 1988 (Massey et al. 1990), from Tucson (275000 inhabitants, 65 km distant). It is likely to decrease in the future.

## 3 Conclusions

• The brightness of the moonless night sky above La Palma, at high ecliptic and galactic latitudes and low airmass, at solar minimum, is given in various units in columns 5 - 9 of the table overleaf. Column 4 gives the equivalent intensity in Jy for apparent mag = 0. The units of column 8 ($`\gamma `$) are photons s<sup>-1</sup> m<sup>-2</sup> Å<sup>-1</sup> arcsec<sup>-2</sup>. The brightness of the La Palma sky is similar to that at other dark sites.
• The sky is $`\sim `$ 0.4 mag brighter on the ecliptic plane than at the ecliptic pole, $`\sim `$ 0.4 mag brighter at solar maximum than at solar minimum, and $`\sim `$ 0.25 mag brighter at zenith distance 45<sup>o</sup> than at the zenith.
• Light pollution contributes $`<`$ 0.03 mag to the zenith continuum sky brightness in all bands, well below the 0.1-mag limit recommended by the IAU for a dark site. The total contamination is $`<`$ 0.03 mag in $`U`$, $`\sim `$ 0.02 mag in $`B`$, $`\sim `$ 0.10 mag in $`V`$, $`\sim `$ 0.10 mag in $`R`$.

For sky-limited exposures, an uncertainty of 0.4 mag in the surface brightness of the night sky translates into an uncertainty of a factor of 2 in the exposure time required to reach a given signal-to-noise ratio, with potential wastage of half that time. The measurements reported here of the absolute level of sky brightness, and of the ways in which it varies, allow more efficient use to be made of telescope time.

We are grateful to Ed Zuiderwijk (RGO) for helping us extract data from the archive, the Carlsberg Meridian Group (RGO) for extinction data, and Javier Diaz (IAC) for information about street-lighting on La Palma. SLE carried out part of this work while a summer student at the Isaac Newton Group in 1996.

References
Benn C.R. & Ellison S.L., 1998, La Palma Technical Note 115
Garstang R.H., 1989, PASP, 101, 306
Hunter T.B. & Crawford D.L., 1991, in ‘Light Pollution, Radio Interference and Space Debris’, ed. D.L. Crawford (PASP Conference Vol. 17), p. 89
Krisciunas K., 1997, PASP, 109, 1181
Massey P., Gronwall C., Pilachowski C., 1990, PASP, 102, 1046
Roach F.E. & Gordon J.L., 1973, ‘The Light of the Night Sky’ (Dordrecht: Reidel)
Smith F.G., 1979, Trans. IAU, 17A, 220
# ASCA Observation of the quiescent X-ray counterpart to SGR1627-41

## 1 Introduction

SGR1627-41, the fourth Soft Gamma Repeater, was discovered in a series of observations between 1998 June and 1998 July by the Burst and Transient Source Experiment (BATSE) aboard the Compton Gamma-Ray Observatory (Woods et al. 1999), the Gamma-Ray Burst Experiment aboard Ulysses (Hurley et al. 1999), the KONUS experiment aboard the Wind spacecraft (Mazets et al. 1999), and the All Sky Monitor on the Rossi X-Ray Timing Explorer (Smith et al. 1999). The Interplanetary Network (IPN) error box for SGR1627-41 passes through the Galactic supernova remnant (SNR) G337.0-0.1 (Hurley et al. 1999). Two BeppoSAX observations of this region on 1998 August 7 and 1998 September 16 revealed an X-ray source, most likely a neutron star, whose position was consistent with that of both the IPN localization and the SNR; the source displayed a possible periodicity of 6.41 s (chance probability $`6\times 10^{-3}`$, based on a limited number of trials: Woods et al. 1999). Based on BATSE observations of the SGR in outburst, Woods et al. (1999) estimated that the neutron star magnetic field strength was $`5\times 10^{14}\mathrm{Gauss}`$. These properties would make the SGR counterpart a magnetar, an object in which magnetic energy dominates all other sources of energy, including rotation (Thompson and Duncan, 1995, 1996). Evidence that other SGRs are also magnetars has been presented (Kouveliotou et al. 1998, 1999; but see also Marsden, Rothschild & Lingenfelter 1999). In those cases, decisive evidence for the magnetic field strength came from the spindown rates, and their interpretation as dipole radiation. In an attempt to confirm the periodicity of SGR1627-41 and measure its spindown rate, we observed the source with the Advanced Satellite for Cosmology and Astrophysics (ASCA).

## 2 ASCA Observations

The ASCA observation took place between 1999 February 26 and 1999 February 28. The nominal pointing direction was $`\alpha (2000)=16^\mathrm{h}36^\mathrm{m}14^\mathrm{s},\delta (2000)=-47^{\mathrm{o}}32^{\prime }31^{\prime \prime }`$, and the approximate exposures were 72.7 ks for the SIS and 78.4 ks for the GIS. We used the standard screening criteria for such parameters as Earth elevation angle, South Atlantic Anomaly, and cutoff rigidity to extract photons, as explained in the ASCA Data Reduction Guide, Version 2<sup>1</sup><sup>1</sup>1http://heasarc.gsfc.nasa.gov/docs/asca/abc/abc.html. No bursts from the source were observed by Ulysses, BATSE, or ASCA during the observation (the last burst from SGR1627-41 was observed in 1998 August). Using the Ximage source detection tool, a quiescent source was detected at $`\alpha (2000)=16^\mathrm{h}35^\mathrm{m}46.41^\mathrm{s},\delta (2000)=-47^{\mathrm{o}}35^{\prime }13.1^{\prime \prime }`$ with a 3 $`\sigma `$ error radius of 55$`^{\prime \prime }`$, consistent with the 1$`^{\prime }`$ radius error circle of the BeppoSAX source (figure 1). Approximately 3800 net counts were detected, versus $`\sim `$ 2850 in the first BeppoSAX observation. Two other sources were detected in this observation (one is visible in figure 1), but neither had a position consistent with either the IPN annulus or G337.0-0.1. Assuming that the ASCA, BeppoSAX, and SGR sources are the same object, the most likely position of the SGR is around the intersection of the IPN annulus with this new error circle, at $`\alpha (2000)=16^\mathrm{h}35^\mathrm{m}52^\mathrm{s},\delta (2000)=-47^{\mathrm{o}}35^{\prime }14^{\prime \prime }`$.
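As a quick sanity check on the positions quoted above: the adopted SGR position should lie essentially on the boundary of the ASCA error circle, since it was chosen at the intersection of the IPN annulus with that circle. A minimal flat-sky separation calculation (adequate at these small angles) confirms this:

```python
import math

def radec_to_deg(h, m, s, sign, d, am, asec):
    """Convert (h m s, sign d arcmin arcsec) to degrees of RA and Dec."""
    return 15.0 * (h + m / 60.0 + s / 3600.0), sign * (d + am / 60.0 + asec / 3600.0)

ra1, dec1 = radec_to_deg(16, 35, 46.41, -1, 47, 35, 13.1)  # ASCA source
ra2, dec2 = radec_to_deg(16, 35, 52.0, -1, 47, 35, 14.0)   # adopted SGR position

# Small-angle separation in arcseconds
dra = (ra2 - ra1) * math.cos(math.radians(0.5 * (dec1 + dec2))) * 3600.0
ddec = (dec2 - dec1) * 3600.0
print(f"separation = {math.hypot(dra, ddec):.1f} arcsec (3-sigma radius: 55)")
```

The separation comes out at about 57 arcsec, i.e. essentially on the 3 $`\sigma `$ boundary, as expected.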
The region used for spectral analysis consisted of a 105$`^{\prime \prime }`$ radius circle centered at the source position; background was taken from the same observation, using a similar circle at a region where no source was present, as determined by Ximage. Spectral fitting to the GIS2 and GIS3 data was done using XSPEC and three trial functions: blackbody, thermal bremsstrahlung, and a power law, all with absorption. These results are reported in table 1, along with the earlier BeppoSAX results. There is no clear preference for any of these models, but we adopt the power law fit for further discussion. To search for periodicity, barycentric light curves were constructed with 0.125 s binning from the sum of the GIS2 and GIS3 data, by extracting $`\sim `$ 1 - 10 keV counts from a 4$`^{\prime }`$ radius circular region around the source, and an FFT was performed (figure 2). The most prominent peak in the power spectrum was at 0.10821 Hz (significance 0.12). The 90% confidence upper limit to the power of any signal in the spectrum with frequency between 0.01 and 1 Hz is $`\sim `$ 3% (rms). Woods et al. (1999) found a 6.413183 s period in the first of their two BeppoSAX observations, with an rms pulse fraction of 10% $`\pm `$ 2.6%. The upper limit to the signal power at this period from the folded ASCA light curve is 1.8% (rms). This limit would be appropriate only if the spindown rate were zero; if the quiescent counterpart to SGR1627-41 is characterized by a rate $`\sim 10^{-10}\mathrm{s}\mathrm{s}^{-1}`$, as is the case for SGR1900+14, the period could have changed by as much as 0.0015 s between the BeppoSAX and ASCA observations, and the 3% upper limit would be the appropriate one. We can also state with varying degrees of confidence that no significant periodicities exist between 0.001 and 0.01 Hz, although in this range the period search is dominated by windowing effects from the data gaps and non-uniform sampling.

## 3 Discussion and Conclusion

The earlier BeppoSAX observations of the quiescent counterpart of SGR1627-41 indicated a fading trend, significant at the $`5.9\sigma `$ level in the raw data, over the $`\sim `$ 5 week period between the two pointings (Woods et al. 1999, and table 1). The unabsorbed 2-10 keV flux found in the ASCA observation reported here is consistent with that found in the second BeppoSAX observation, and therefore indicates that this trend did not continue. We have checked this conclusion in two ways. First, we performed a joint fit to the BeppoSAX and ASCA observations, and found that they could be described well by a single power law with no change in normalization. Second, we calculated the unabsorbed 2-10 keV source flux fixing the best-fit power law index and $`\mathrm{N}_\mathrm{H}`$ to those found in the BeppoSAX observations, and confirmed that it agreed with the BeppoSAX flux. Relatively little is known about the mechanisms for variability in the quiescent soft X-ray counterparts to SGRs. Since variability in the quiescent emission of SGR1900+14 has definitely been observed (Kouveliotou et al. 1999, Murakami et al. 1999), it seems plausible that the quiescent steady emission from SGR1627-41, varying at an earlier time, could have ceased to vary by the time of the observations reported here. It could also be argued that the periodic quiescent emission originated on a cooling hot spot on the neutron star surface, and that it became undetectable by the time of the ASCA observations.
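The quoted maximum period change is easy to verify from the dates given above. The sketch below assumes the SGR1900+14-like spindown rate of $`10^{-10}\mathrm{s}\mathrm{s}^{-1}`$ mentioned in the text and measures the gap from each BeppoSAX pointing to the start of the ASCA observation:

```python
from datetime import date

asca = date(1999, 2, 26)                   # start of the ASCA observation
pdot = 1.0e-10                             # assumed spindown rate [s/s]
for label, d in [("1998 Aug 7 pointing", date(1998, 8, 7)),
                 ("1998 Sep 16 pointing", date(1998, 9, 16))]:
    gap_s = (asca - d).days * 86400.0
    print(f"{label}: max period change = {pdot * gap_s:.1e} s")
```

The two baselines give roughly (1.4 - 1.8) $`\times 10^{-3}`$ s, bracketing the 0.0015 s figure quoted above.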
In any case, however, compelling evidence that SGR1627-41 is a magnetar must await an unambiguous detection of the periodicity and a measurement of the spindown rate. Short Chandra observations could resolve this and also determine the precise source position.

KH and PL are grateful to NASA for support under the ASCA AO-7 Guest Investigator Program.
# Disorder Averaging and Finite Size Scaling

Finite size scaling (FSS) is a very powerful tool of theoretical physics: it allows us to extract some properties of the infinite system near a phase transition by studying finite, numerically accessible samples. It is therefore of major interest to have a clear theoretical background behind the bold extrapolation from finite to infinite sizes. Though the basic concepts can be summarized in a few lines, the theory of FSS is far from trivial even for clean systems. Randomness brings in additional complexity, and a deeper understanding of FSS in disordered systems is still lacking. The main difference with the clean case is that somehow we have to average over the different random samples. There is an on-going discussion of whether the way the disorder average is taken influences the FSS results or not, and if it does, what is the “correct” average? The importance of the details of averaging is demonstrated most spectacularly by the so-called Chayes et al. theorem, which claims that a certain finite-size correlation length exponent cannot be smaller than $`2/d`$, $`d`$ being the dimension of the disorder, for any phase transition driven by quenched randomness. It turns out that the proof of this quite general statement relies entirely on the specific manner in which the disorder was generated: a slight change in the ensemble of the random samples gives a different final result. Further studies along these lines showed that in some cases even the numerically measured quantities do depend on the set of the disorder realizations, though there are claims that they shouldn’t. In this Letter we propose to understand the role of the disorder based on the scaling of a single realization instead of renormalizing the averaged free energy. We argue that, even though the difference between the two approaches is expected to vanish for infinite systems and short-range interactions, it might be crucial for finite samples and/or long-range forces. From a practical point of view, our main result is that disorder averaging should be done after finding the critical point of each sample independently. We demonstrate how this works in practice by performing an extensive numerical study of the two-dimensional random-bond Ising model. First, recall some basic ideas of FSS in clean systems. Close to a continuous phase transition the correlation length, $`\xi `$, diverges as $`\xi (T)\propto \tau ^{-\nu }`$, with $`\nu `$ the correlation length critical exponent and $`\tau =|T-T_c|`$ the distance from the critical temperature $`T_c`$ of the infinite system. For a finite system, the size $`L`$ itself is measured in units of $`\xi `$, i.e. a physical quantity $`Q`$ depends on $`L`$ only through the ratio $`L/\xi `$, i.e. $$Q(T,L)=L^y\psi (L^{1/\nu }\tau ),$$ (1) where $`y`$ describes the $`L`$-dependence at criticality. The surprise in Eq.(1) is that it contains the infinite system’s correlation length (or critical temperature $`T_c`$), even though in a finite system the actual characteristic length $`\xi _L(T)`$ is typically different from $`\xi (T)`$. Indeed, while in the high-temperature phase $`\xi _L\approx \xi `$, there is a temperature $`T_c(L)`$ where the correlation length reaches the system size, i.e. $`\xi _L\approx L`$. Below this temperature the whole sample becomes correlated and $`\xi _L`$ is defined by subtracting this overall correlation. We call the temperature $`T_c(L)`$ the critical temperature at size $`L`$.
In terms of RG flows, the trajectories bend towards high temperatures for $`T>T_c(L)`$, towards zero for $`T<T_c(L)`$, and they “stick around” a fixed point for $`T=T_c(L)`$. Admittedly, $`T_c(L)`$ is not a very well defined quantity, but the peak in a susceptibility or specific heat may give it a sensible meaning. Still, both the RG picture and the behaviour of $`\xi _L`$ suggest that the scaling variable of the problem is $`\tau _L=T-T_c(L)`$ instead of $`\tau `$, leading to the FSS formula $$Q(T,L)=L^yf(L^{1/\nu }\tau _L).$$ (2) For clean systems the connection between Equations (1) and (2) is delivered by the scaling of $`T_c(L)`$: $$T_c(L)=T_c+CL^{-1/\nu }.$$ (3) The constant $`C`$ in this equation is not universal; it depends e.g. on the boundary conditions. But once the details are fixed, $`C`$ is constant for large $`L`$’s. Substituting Eq.(3) into Eq.(2) gives us the usual form of FSS (Eq.(1)). Now we argue that in the presence of randomness, fixing the disorder distribution and the boundary conditions is not enough to keep the value of $`C`$ in Eq.(3) constant. Due to the randomness, $`C`$ will fluctuate from sample to sample and under renormalization. Consequently, $`T_c(L)`$ of a given disorder realization will fluctuate as well, preventing the use of the infinite system’s $`T_c`$ for all samples and sizes, as in Eq.(1). At the same time $`\tau _L`$ remains a good scaling variable and, after an appropriate averaging, Eq.(2) holds. The basic observation in support of the above is that the RG trajectories for disordered systems are not smooth, but rather look like a random walk. Each time we integrate out high-energy degrees of freedom, they will contain some randomness. Accordingly, the renormalized temperature will pick up a random part, too. Of course, this fluctuation of the RG trajectory will scale as a negative power of $`L`$, in the Gaussian case as $`L^{-d/2}`$, and disappear as $`L\to \mathrm{\infty }`$. But in the case of FSS, we are comparing temperatures as close as $`L^{-1/\nu }`$, so for a $`\nu `$ close to $`2/d`$ the random walk of the RG trajectories becomes important. Since $`T_c(L)`$ itself changes under renormalization, we find that the critical surface will be random and different for each disorder realization. Now let’s take a random sample of size $`L`$ and consider the RG trajectory starting at a temperature $`T`$ close to the sample’s $`T_c(L)`$. After a renormalization step we get the renormalized values $`L^{\prime }`$, $`T^{\prime }`$, and $`T_c^{\prime }(L^{\prime })`$. According to the above arguments both $`T^{\prime }`$ and $`T_c^{\prime }(L^{\prime })`$ have a random part. But both $`T^{\prime }`$ and $`T_c^{\prime }`$ are temperatures, and they are close to each other, so it is natural to suppose that their fluctuating parts will be almost the same, i.e. they are correlated. The main consequence of this correlation is that $`T^{\prime }(L^{\prime })-T_c^{\prime }(L^{\prime })`$ will be a smooth function of $`L^{\prime }`$, scaling with the exponent $`1/\nu `$, while $`T^{\prime }(L^{\prime })-T_c`$ will show the large fluctuations of the random walk (see Fig.1). The standard (grand canonical) average uses $`T_c`$ only, and completely neglects the correlations. Such an approach is justified as long as the fluctuations of the RG trajectories are much smaller than their distance from $`T_c`$. In the case of FSS, however, they might be of the same order and the correlations become important: one has to use $`\tau _L=T-T_c(L)`$ to extract the critical exponents.
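A quick numerical check of the competition invoked above, comparing the random part of the trajectory, $`L^{-d/2}`$, with the width $`L^{-1/\nu }`$ of the temperature window probed by FSS; the values of $`d`$, $`\nu `$ and $`L`$ are illustrative assumptions:

```python
import numpy as np

# Ratio of the trajectory fluctuation L^(-d/2) to the FSS temperature window L^(-1/nu).
# For nu <= 2/d the ratio does not decay with L, so the sample-to-sample wandering of
# Tc(L) can never be neglected; for nu > 2/d it scales away.
d = 2
for nu in (0.9, 1.0, 1.2):
    for L in (32, 64, 128):
        ratio = L ** (-d / 2.0) / L ** (-1.0 / nu)
        print(f"nu={nu}  L={L:4d}  fluctuation/window = {ratio:.3f}")
```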
If $`T`$ is at some distance (but not too far) from $`T_c(L)`$, so that the sample contains many correlated regions, the system is almost self-averaging. But around $`T_c(L)`$ the remaining randomness in other quantities does not necessarily scale to zero and, in order to use Eq.(2), we have to get rid of this extra noise by averaging. The “correlated average” then requires finding the critical temperature of a given sample, and averaging over realizations with the same $`\tau _L`$. In practice, this means “shifting” and superposing the curves of $`Q(T)`$ measured on different random samples of the same size. We now test the above theoretical concepts on the two-dimensional random-bond Ising model. We simulated $`L\times L`$ systems ($`L=32,\mathrm{},128`$) with periodic boundary conditions using the Wolff single-cluster algorithm to overcome critical slowing down. Disorder was generated from a bimodal distribution: bonds had two values, $`J_1`$ and $`J_2`$ (all positive) with equal probabilities. The strength of randomness was tuned by changing the ratio $`r=J_1/J_2`$ ($`r=0.25,0.5`$). The exact critical temperature $`\beta _c=1/k_BT_c`$ of this model is known as a function of $`r`$ through : $`\mathrm{sinh}(2\beta _cJ_2)\mathrm{sinh}(2\beta _crJ_2)=1`$. For each measurement, we used up to $`10^4`$ Monte Carlo (MC) steps, each comprising $`10`$ cluster updates, and we used $`10^4`$ steps for equilibration. To avoid inaccuracies due to an unfortunate choice of the random number generator, we compared results obtained from different generators. We concentrated on the susceptibility, defined as $$\chi =\frac{1}{L^2}\frac{\langle M^2\rangle -\langle |M|\rangle ^2}{T},$$ (4) where $`M`$ is the total magnetization of the sample. In Figure 2 we show the susceptibilities of different disorder realizations of the same size ($`L=64`$) as a function of temperature. We see large sample-to-sample fluctuations, exceeding the thermal fluctuations by at least an order of magnitude. At the same time, it is obvious that the curves are quite similar to each other; they are just “displaced”. This is exactly what we expect on the basis of random RG trajectories: for each sample, $`T_c(L)`$ is different, which explains the displacement of the curves. To study the shape of the susceptibilities of the different disorder realizations, in Figure 3 we “shifted” the curves to have each sample at the same $`T_c(L)`$ (see below for details). We emphasize that these are the very same data as in Fig. 2. The excellent overlap of the different samples’ susceptibilities demonstrates that $`\tau _L=T-T_c(L)`$ is indeed the correct scaling variable. Fig. 3 also shows that, as expected, disorder fluctuations are pronounced only at, or around, $`T_c(L)`$. Our data indicate that the relative fluctuations of the peak heights are of the same order for all studied system sizes, depending only on the disorder strength $`r`$. In terms of averaging over disorder, Figures 2 and 3 correspond to the grand canonical and correlated averages, respectively: the latter achieves a spectacular noise reduction, but it remains to be seen which one reproduces the expected scaling of the very large system. Without randomness $`\nu _{pure}=1`$, and a perturbative RG approach predicts that small disorder is marginally irrelevant : $$\frac{d\mathrm{\Delta }}{dx}=-8\mathrm{\Delta }^2+𝒪(\mathrm{\Delta }^3),$$ (5) where $`\mathrm{\Delta }`$ is proportional to the square dispersion of the random bonds, and $`x\equiv \mathrm{ln}(L^{-1})`$.
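For concreteness, the estimator of Eq. (4) can be written in a few lines; the fake input below merely stands in for the magnetization time series that the Wolff simulation would produce, and all numbers are illustrative:

```python
import numpy as np

def susceptibility(m_samples, L, T):
    """Estimate Eq. (4) from Monte Carlo samples of the total magnetization M."""
    m = np.asarray(m_samples, dtype=float)
    return (np.mean(m**2) - np.mean(np.abs(m))**2) / (L**2 * T)

# toy usage with fake, uncorrelated "measurements" (real data would come from the
# Wolff simulation described in the text)
rng = np.random.default_rng(0)
fake_M = rng.normal(loc=0.0, scale=500.0, size=10_000)
print(susceptibility(fake_M, L=64, T=1.64))
```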
According to Eq.(5), the disorder scales to zero, but only logarithmically with $`L`$, so we have to take it into account in the RG equations of other quantities, like the reduced temperature $`\tau `$, $$\frac{d\tau }{dx}=(1-4\mathrm{\Delta })\tau +\mathrm{\cdots }.$$ (6) This equation predicts an effective exponent $`\nu _{eff}\simeq 1+4\mathrm{\Delta }`$, which approaches $`\nu _{pure}=1`$ very slowly. Since the randomness does not couple to the magnetization in first order, one expects that the susceptibility exponent $`\gamma /\nu =1.75`$ remains unchanged. Even though these results were obtained by using replicas and grand-canonical disorder average, for a short-range-interaction model and very large system sizes we still expect them to be correct. The detailed form of the above scaling corrections is still under debate even today . Here we wish to concentrate only on their qualitative nature: disorder introduces corrections to $`\nu `$ (the width of the susceptibility peak), but not to $`\gamma /\nu `$ (the height of the peak at criticality). As we will see, this expectation is satisfied only with the correlated average. The major difficulty of the correlated average is to find $`T_c(L)`$ of a given disorder realization, and “shift” the different samples’ curves as in Fig. 3. Trying to identify the peak of the susceptibility for each realization is one possibility , but both thermal and random fluctuations are largest at this point. Instead, we used the entire susceptibility curves and minimized the “distance” between them. We verified that the final results do not depend on the details of this procedure, and the average critical point $`\overline{T_c(L)}`$ scales to the exact $`T_c`$ when $`L\to \mathrm{\infty }`$. The exact values are $`T_c=1.641018`$ for $`r=0.5`$ and $`T_c=1.239078`$ for $`r=0.25`$. Our extrapolated $`L\to \mathrm{\infty }`$ numerical results are $`T_c=1.640(1)`$ and $`T_c=1.239(1)`$ respectively. In the case of small disorder, $`r=0.5`$, we found corrections to scaling for the grand canonical average both for $`\nu `$ and $`\gamma /\nu `$, though both of these corrections are relatively small. This violates what is expected for $`\gamma /\nu `$. On the other hand, for the available sizes, the correlated average gives an almost perfect scaling plot with the pure exponents, as shown in Figure 3 (inset). No corrections were visible here. The differences between the two disorder averages are even more pronounced for stronger disorder, $`r=0.25`$. Clearly, for the grand canonical average (Fig. 4) not only the widths but also the heights of the peaks show sizable corrections to scaling. Note that there are no corrections for the heights (scaled by $`\gamma /\nu `$) when the data have been evaluated with the correlated average (Fig. 5). For this disorder strength the corrections in $`\nu `$ already appear, and an effective thermal exponent $`1/\nu _{eff}\simeq 0.92`$ gives a good description of the data within this range of sizes (see the inset of Fig. 5). We emphasize that only the results of the correlated average reproduce our expectations for the infinite system scaling. In addition to the susceptibility, other singular quantities, like the specific heat, show critical behaviour. A question of consistency arises: Do the same temperature shifts calculated from the susceptibilities of different samples give the best collapse of the specific heat curves? Indeed, the answer is yes, as can be seen in Fig. 6.
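One possible implementation of the “shift and superpose” procedure is sketched below; the least-squares distance and the scan window are our assumptions for illustration, not necessarily the exact procedure used to produce the figures:

```python
import numpy as np

def best_shift(T, chi_ref, chi_i, shifts):
    """Temperature shift that best maps chi_i(T) onto the reference curve."""
    def distance(s):
        # curve with points at temperatures T - s, evaluated back on the grid T,
        # i.e. chi_i(T + s)
        return np.sum((np.interp(T, T - s, chi_i) - chi_ref) ** 2)
    return min(shifts, key=distance)

def correlated_average(T, chi_curves):
    """Align each sample's chi(T) curve before averaging (average at fixed tau_L)."""
    ref = chi_curves[0]
    shifts = np.linspace(-0.05, 0.05, 201)       # scan window: an assumption
    aligned = [np.interp(T, T - best_shift(T, ref, c, shifts), c)
               for c in chi_curves]
    return np.mean(aligned, axis=0)

# usage sketch: chi_curves would be a list of measured chi(T) arrays, one per
# disorder realization; the relative shifts define the samples' Tc(L) up to a
# common constant.
```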
This supports our theory that $`\tau _L=T-T_c(L)`$ of a given sample is a good scaling variable for any critical quantity. We have proposed a new picture of the RG in random systems, which leads to a recently introduced way of disorder averaging for FSS, the so-called “correlated average”. We studied in detail the FSS properties of the $`d=2`$ disordered Ising model, and found that only the correlated average reproduces the expected behaviour of the susceptibility, in addition to a spectacular noise reduction in averaged quantities. A detailed account of our simulations’ results will be published elsewhere. We would like to thank Alex Hansen (Trondheim) and the CNUSC in Montpellier for computer time. FP thanks INLN (Nice), the Hungarian Science Foundation (OTKA29236), and the Bólyai Fellowship for financial support.
# Resonances, Chiral Symmetry, Coupled Channel Unitarity and Effective Lagrangians Talk given at the 8th International Conference on Hadron Spectroscopy, HADRON99, August 24-28, 1999, Beijing, China. Work partially supported by DGICYT under contracts PB96-0753 and AEN97-1693 and by the EU TMR network eurodaphne contract no. ERBFMRX-CT98-0169. ## 1 Introduction Chiral Perturbation Theory (ChPT) has proved very successful in describing the physics of mesons at very low energies. The key point of the whole approach is to identify the lightest pseudoscalar mesons $`\pi ,K`$ and $`\eta `$ as the Goldstone bosons associated with the chiral symmetry breaking. These particles will be the only degrees of freedom at low energies and their interactions can be described in terms of the most general effective Lagrangian which respects the chiral symmetry constraints. Since this is a low energy approach, the amplitude of a given process is basically given as an expansion in the external momenta over the scale of symmetry breaking, $`4\pi v\simeq 1.2`$ GeV. It is also possible to calculate loops, which increase the order of a term in the chiral expansion by two and generate logarithmic contributions as well as divergences. The former are very important at low energies, since they can dominate over some polynomial terms, and the latter have to be absorbed in the renormalization of the free parameters that appear at the next order in the Lagrangian. It is therefore possible to obtain results which are finite to a given order in momenta. They provide a very good description of meson interactions up to about 500 MeV in the best cases. However, if one is interested in resonances in particular, as happens in meson spectroscopy, there is little one can do with just plain ChPT. In this work we will review recently proposed nonperturbative schemes imposing unitarity on the chiral Lagrangian, thus enlarging the convergence of the chiral expansion and reproducing resonances. We will briefly comment on possible implications for meson spectroscopy. ## 2 ChPT and Unitarity Within the coupled channel formalism, the unitarity of the $`T`$ matrix reads $$\text{Im}T=T\,\text{Im}G\,T^{\dagger }\;\Rightarrow \;\text{Im}T^{-1}=-\text{Im}G\;\Rightarrow \;T=[\text{Re}T^{-1}-i\,\text{Im}G]^{-1},$$ (1) where $`\text{Im}G`$ is a known diagonal matrix, whose entries are just the phase space of the intermediate states. Indeed, $`G`$ is the integral of the propagators of the two particles in the intermediate state. Within ChPT the amplitudes are obtained as an expansion in powers of momenta, i.e. $`T=T_2+T_4+\mathrm{\cdots }`$, where the subscript stands for the order in the expansion. Being basically polynomials, these amplitudes can only satisfy unitarity perturbatively and cannot yield poles, and therefore resonances. Within the Inverse Amplitude Method , the ChPT expansion is only used for $`\text{Re}T^{-1}`$, which is then used on the right hand side of eq.(1). This procedure ensures exact unitarity, while keeping the very same ChPT expansion at low energies. Using the complete $`O(p^4)`$ calculation, it was first applied to single channel $`\pi \pi `$ and $`\pi K`$ scattering, where it was able to reproduce several isospin and angular momentum channels and to generate dynamically the $`\sigma `$, $`\rho `$ and $`K^{*}`$ resonances . In principle, the IAM calculations up to order $`n`$ need the complete ChPT calculations up to the same order.
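To make the master formula of Eq. (1) concrete, the following single-channel toy shows that $`T=[\text{Re}T^{-1}-i\,\text{Im}G]^{-1}`$ is exactly unitary by construction, $`\text{Im}T=\text{Im}G|T|^2`$. The polynomial “amplitudes”, masses and couplings below are placeholders, not the actual ChPT results:

```python
import numpy as np

# Minimal single-channel illustration of Eq. (1), with the elastic-IAM choice
# Re T^{-1} = (T2 - Re T4)/T2^2.  T2 and Re T4 are TOY polynomials in s, and the
# "pion" mass m and "decay constant" fpi are placeholder values.
m, fpi = 0.14, 0.093                           # GeV, illustrative only

def sigma(s):
    """Two-body phase space; Im G = sigma(s) above threshold."""
    return np.sqrt(max(s - 4 * m**2, 0.0) / s) / (16 * np.pi)

def T_IAM(s, a=1.0, b=5.0):
    T2 = (s - m**2 / 2) / fpi**2               # toy lowest-order amplitude
    reT4 = (a * s**2 + b * s * m**2) / fpi**4  # toy O(p^4) polynomial part
    re_inv = (T2 - reT4) / T2**2               # ChPT expansion of Re T^{-1}
    return 1.0 / (re_inv - 1j * sigma(s))      # exactly unitary by construction

for s in (0.2, 0.4, 0.6):                      # s in GeV^2
    T = T_IAM(s)
    # unitarity check: Im T must equal sigma(s) |T|^2
    print(f"s={s:.2f}  Im T={T.imag:.4f}  sigma|T|^2={sigma(s)*abs(T)**2:.4f}")
```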
The next step was motivated by the results obtained when unitarizing the lowest order $`O(p^2)`$ ChPT scalar amplitudes using the Bethe-Salpeter equations (BS) . Remarkably, it was possible to fit the meson-meson scalar phase shifts up to 1.2 GeV and reproduce the $`\sigma `$, $`f_0`$ and $`a_0`$ resonances, just by setting the cutoff to a natural value around 1 GeV, since there are no other free parameters. The link with the IAM was established in : the BS solution is recovered from the IAM if one approximates $`\text{Re}T_4\simeq T_2\,\text{Re}G\,T_2`$. But in order to obtain vector resonances the $`O(p^2)`$ amplitude is not enough. Unfortunately, the full $`O(p^4)`$ calculation is not available for all the meson-meson scattering channels. However, it was also shown in that in order to obtain good results up to 1.2 GeV, it is enough to add to the BS approximation the $`O(p^4)`$ tree level, that is, $$\text{Re}T_4\simeq \underset{O(p^4)\text{tree}}{\underset{}{T_4^P}}+\underset{\text{s-channel loops}}{\underset{}{T_2\,\text{Re}G\,T_2}}.$$ (2) The results are remarkable, as can be seen in Fig.1, where we display an updated fit to the meson-meson phase shifts and inelasticities . Compared with our previous work , we have also corrected a small error in one amplitude, which only has a minor effect. It can be noticed that we are able to reproduce together the $`\sigma `$, $`f_0`$, $`a_0`$, $`\rho `$, $`\kappa `$ and $`K^{*}`$ resonances and the isospin zero state of the vector octet, $`\omega _8`$ (see ). It is also possible to find, in the unphysical sheets, the poles associated with these resonances (see ). In Table I we list the values of the fitted IAM parameters. Note that within this approximation tadpoles and crossed loops are neglected and absorbed in the chiral parameters, so that we cannot compare directly with the standard ChPT coefficients. Nevertheless, they should be of the same order of magnitude, once the appropriate renormalization scale is used. If we had the complete $`O(p^4)`$ ChPT calculation for all the meson-meson channels and we used the same renormalization scheme as in standard ChPT, the fitted parameters should be very similar to the standard ones. This has been checked with the single channel $`\pi \pi `$ and $`\pi K`$ scattering, but also by calculating the complete $`2\times 2`$ $`T`$ amplitude for the $`\pi \pi `$, $`K\overline{K}`$ coupled channels . In both cases the fitted parameters are perfectly compatible with those of standard ChPT. Further work is still in progress. ## 3 Summary and Implications for Meson spectroscopy Chiral Perturbation Theory unitarized with the Inverse Amplitude Method describes correctly the dynamics of meson-meson scattering in coupled channels, generating: * The $`\rho `$ and the $`K^{*}`$, as well as a pole that corresponds to the $`\omega _8`$, which, all together, form the lightest vector octet. In order to obtain them it is necessary to include the $`O(p^4)`$ chiral parameters. The nonet is not reproduced since in the SU(3) limit the $`\omega _1`$ does not couple to two mesons. * The $`f_0`$, $`a_0`$, $`\sigma `$ and $`\kappa `$. All their masses are below 1 GeV, and the last two are very wide, not Breit-Wigner resonances. All of them can be simply generated by unitarization of the lowest order ChPT, with just a cutoff as a free parameter. The role of the $`O(p^4)`$ chiral parameters can be understood by writing a Lagrangian with pions, kaons and etas, but also with heavier resonances coupled in a chirally invariant way.
Then one integrates out these heavier states, and the resulting Lagrangian is that of ChPT, but now the values of the chiral constants can be related to the masses and widths of the preexisting heavier resonances (“Resonance Saturation Hypothesis”). Most of the experimental values of the chiral coefficients are saturated by these estimates due to vector resonances alone (that is, vector meson dominance), but some other parameters still need the existence of scalar states. Recently, using the N/D unitarization method with explicit resonances added to the lowest order ChPT Lagrangian, it has been established that these heavier scalar states should appear with a mass around 1.3–1.4 GeV for the octet and 1 GeV for the singlet. In addition, the $`\sigma `$, $`\kappa `$, $`a_0`$ and a strong contribution to the $`f_0`$ were also generated from the unitarization of the ChPT lowest order. These states still survive when the heavier scalars are removed. That agrees with our observation that the $`\sigma `$, $`\kappa `$, $`f_0`$ and $`a_0`$ are generated independently of the chiral parameters, that is, of the preexisting scalar nonet, which is heavier. Since Chiral Perturbation Theory does not deal with quarks and gluons, it is very hard to make any conclusive statement about the nature of these states ($`q\overline{q}`$, four-quark, molecule, etc…), unless we make additional assumptions. However, it seems clear that the nature of the lightest scalar mesons is different from that of the vectors. In addition, the fact that we obtain simultaneously the above nine scalar resonances with the same procedure seems to indicate that they are good candidates to form a complete $`SU(3)`$ nonet.
# Does a Dynamical System Lose Energy by Emitting Gravitational Waves? F. I. Cooperstock Department of Physics and Astronomy, University of Victoria P.O. Box 3055, Victoria, B.C. V8W 3P6 (Canada) ## Abstract We note that Eddington’s radiation damping calculation of a spinning rod fails to account for the complete mass integral as given by Tolman. The missing stress contributions precisely cancel the standard rate given by the “quadrupole formula”. This indicates that while the usual “kinetic” term can properly account for dynamical changes in the source, the actual mass is conserved. Hence gravity waves are not carriers of energy in vacuum. This supports the hypothesis that energy, including the gravitational contribution, is confined to regions of non-vanishing energy-momentum tensor $`T_{ik}`$. PACS numbers: 04.20.Cv, 04.30.-w
# Measuring Ω/b with weak lensing ## 1. Introduction Weak lensing promises to be one of the most effective ways to study the properties of Large Scale Structure (LSS) in the next years (Kaiser & Squires 1993; Kaiser 1998; Van Waerbeke, Bernardeau & Mellier 1999; Bartelmann & Schneider 1999). Most of the weak lensing methods proposed so far rely heavily on the analysis of background galaxy distortions (Kaiser 1992; Bernardeau et al. 1997; Schneider et al. 1997), as they potentially contain finer detail about the LSS than the fluctuations of the background number counts, which are affected by intrinsic clustering. However, detecting and analyzing the typical shear expected from the LSS poses technical problems that are difficult to overstate (Kaiser, Squires & Broadhurst 1995; Bonnet & Mellier 1995; Van Waerbeke et al. 1997; Kaiser 1999; Kuijken 1999; although see Schneider et al. 1998). In addition, a number of surveys will be available in the near future, such as the Sloan Digital Sky Survey (Gunn & Weinberg 1995), the NOAO survey (Jannuzi et al. 1999), etc., with vast amounts of data which may not be optimal for shear analysis due to the imaging pixel size or typical seeing. It is therefore necessary to develop methods which are able to tap the wealth of cosmological information contained in such surveys without relying exclusively on image distortion analysis. Such an approach is provided by the background-foreground correlation function $`w_{fb}`$ (Bartelmann & Schneider 1991; Bartelmann & Schneider 1993). The value of this statistic has been calculated by several authors using linear and nonlinear evolution models for the power spectrum evolution (Bartelmann 1995; Villumsen 1996; Sanz, Martínez-González & Benítez 1997; Dolag & Bartelmann 1997; Moessner & Jain 1998; Moessner, Jain & Villumsen 1998). The two most obvious cases in which it is possible to measure $`w_{fb}`$ are galaxy–galaxy correlations and quasar–galaxy correlations. The detection of the latter has a long and controversial history (for a discussion see Schneider, Ehlers & Falco 1992, Benítez 1997), but it now seems well established. However, due to the scarcity of complete, well defined quasar catalogs not affected by observational biases, the results have low signal–to–noise and are difficult to interpret (Benítez, Sanz & Martínez–González 1999). The expected amplitude of the low-z galaxy–high-z galaxy cross–correlation is rather small, and hard to measure within typical single CCD fields (Villumsen, Freudling & Da Costa 1997). Only with the advent of deep, multicolor galaxy samples and reliable photometric redshift techniques has it become possible to detect this effect (Herranz et al. 1999). To interpret the measurements of $`w_{fb}`$ using the calculations mentioned above, it is necessary to assume a certain shape for the power spectrum. Unfortunately it is still far from clear whether the most popular ansatz, that of Peacock and Dodds (1996)—or any other for that matter—provides an accurate fit to the LSS distribution (see e.g. Jenkins et al. 1998). It is thus desirable to develop methods whose application is not hindered by this uncertainty. An example is the statistic $`R`$ (Van Waerbeke 1998), which combines shear and number counts information, and can be used to measure the scale dependence of the bias. The value of $`R`$ is almost independent of the shape of the power spectrum if the foreground galaxy distribution has a narrow redshift range.
Here we show that something similar can be achieved with $`w_{fb}`$, with the additional advantage of being able to do without the shear information in those cases where the latter is difficult to obtain. It follows from the magnification bias effect (Canizares 1981, Narayan 1989, Broadhurst, Taylor & Peacock 1995) that the surface number density of the background population $`n_b`$ is changed by the magnification $`\mu `$, associated with a foreground galaxy population with number density $`n`$, as $`g_b=(\alpha -1)\delta \mu `$, where $`g_b`$ is the perturbation in the galaxy surface density $`n_b`$ ($`g_b=n_b/\langle n_b\rangle -1`$), $`\alpha `$ is the logarithmic slope of the number counts and in the weak lensing regime $`\mu \simeq 1+\delta \mu `$, $`\delta \mu \ll 1`$. Since $`\mu \simeq 1+2\kappa `$ and $`\kappa `$, the convergence, is proportional to the projected matter surface density $`\mathrm{\Sigma }`$, it follows that $`g_b\propto \delta \kappa \propto \delta \mathrm{\Sigma }\propto \mathrm{\Omega }\delta `$, where $`\delta `$ is the dark matter surface density perturbation. Therefore $`w_{fb}\equiv \langle gg_b\rangle \propto \mathrm{\Omega }\langle g\delta \rangle `$ and, assuming linear and deterministic bias, $`w_{fb}\propto \mathrm{\Omega }b^{-1}w`$, where $`w`$ is the two-point galaxy correlation function for the background populations, and the biasing factor $`b`$ is defined by $`w=b^2w_{\delta \delta }`$. This result provides a straightforward, virtually model independent method to estimate the ratio $`\mathrm{\Omega }/b`$. Williams & Irwin 1998 arrived at a similar expression using a phenomenological approach. The redshift distortion method provides the quantity $`\beta \equiv \mathrm{\Omega }^{0.6}/b`$ (Dekel 1994), so combining both one can estimate $`\mathrm{\Omega }`$ and $`b`$ separately. The outline of the paper is the following. In Sec. 2 we show rigorously that under reasonable assumptions $`w_{fb}\propto \mathrm{\Omega }b^{-1}w`$. Sec. 3 explores the application of this method to future and ongoing surveys and Sec. 4 summarizes our main results and conclusions. ## 2. Foreground-Background correlations Let us consider two populations of sources: a background one (e.g. quasars or galaxies) and a foreground one (e.g. galaxies), placed at different distances $`\lambda `$ with p.d.f.’s $`R_b(\lambda )`$ and $`R(\lambda )`$, respectively. In Sanz et al. (1997), the background-foreground correlation $`w_{fb}`$ was calculated as a functional of the power spectrum $`P(\lambda ,k)`$ of the matter fluctuations at any time ($`\lambda `$ is the comoving distance from the observer to an object at redshift $`z`$, $`\lambda =\frac{(1+\mathrm{\Omega }z)^{1/2}-1}{(1+\mathrm{\Omega }z)^{1/2}-1+\mathrm{\Omega }}`$ and $`\mathrm{\Omega }<1,\mathrm{\Lambda }=0`$): $$w_{fb}(\theta )=(\alpha _b-1)C_{\mu \delta }(\theta ),$$ (1) $$C_{\mu \delta }(\theta )=12\mathrm{\Omega }\int _0^1d\lambda \,b(\lambda ,s\theta )T_b(\lambda )R(\lambda )\frac{\lambda ^2}{(1-\lambda )^2}\tau (\lambda ,s\theta ),$$ (2) $$\tau (\lambda ,s\theta )\equiv \frac{1}{2\pi }\int _0^{\mathrm{\infty }}dk\,k\,P(\lambda ,k)J_0(ks\theta ),\qquad s\equiv \frac{\lambda }{1-(1-\mathrm{\Omega })\lambda ^2},$$ (3) where $`b(\lambda ,s\theta )`$ is the bias factor for the foreground galaxies (assumed to be linear, non-stochastic but possibly redshift and scale dependent), $`\alpha _b`$ is the slope of the background source number counts and $`C_{\mu \delta }(\theta )\equiv 2\langle \mu (\stackrel{}{\varphi })\delta (\stackrel{}{\varphi }+\stackrel{}{\theta })\rangle `$ is the correlation between the magnification and the mass density fluctuation.
$`T_b(\lambda )`$ is the lensing window function given by equation (7) in Sanz et al. (1997): $`T_b(\lambda )\equiv \frac{1}{\lambda }\int _\lambda ^1\frac{du}{u}R_b(u)(u-\lambda )\frac{1-(1-\mathrm{\Omega })u\lambda }{1-(1-\mathrm{\Omega })\lambda ^2}`$. The angular two point correlation function for the foreground population $`w`$ can be obtained as $$w(\theta )=\int _0^1d\lambda \,b^2(\lambda ,s\theta )R^2(\lambda )[1-(1-\mathrm{\Omega })\lambda ^2]\tau (\lambda ,s\theta ),$$ (4) where $`w(\theta )\equiv \langle g(\stackrel{}{\varphi })g(\stackrel{}{\varphi }+\stackrel{}{\theta })\rangle `$ is the foreground angular galaxy–galaxy correlation function. If we assume that the foreground and background galaxies are concentrated at the ‘effective’ distances $`\lambda _f`$ and $`\lambda _b`$ (a good approximation for realistic, nonoverlapping redshift distributions, Sanz et al. 1997), the previous formulas can be rewritten as $$w_{fb}(\lambda _f,\lambda _b,\theta )\simeq (\alpha _b-1)12\mathrm{\Omega }\frac{2b}{a_f^2\mathrm{\Sigma }_c}\tau (\lambda _f,s_f\theta )$$ (5) $$w(\lambda _f,\mathrm{\Delta }\lambda _f,\theta )\simeq b^2\frac{1}{\mathrm{\Delta }\lambda _f}[1-(1-\mathrm{\Omega })\lambda _f^2]\tau (\lambda _f,s_f\theta ),$$ (6) where $`\mathrm{\Sigma }_c(\lambda _b,\lambda _f)\equiv \frac{2D_b}{D_fD_{fb}}`$ is the critical surface mass density defined by the angular distance $`D`$, $`a_f\equiv \frac{(1-\lambda _f)^2}{1-(1-\mathrm{\Omega })\lambda _f^2}`$ is the scale factor, and $`\mathrm{\Delta }\lambda _f\equiv 1/R_f(\lambda _f)`$ is the width of the p.d.f. $`R_f(\lambda )`$ (e.g. the FWHM for a Gaussian distribution). Dividing Eq. (5) by Eq. (6) and simplifying one obtains $$\frac{w_{fb}}{w}\simeq Q\mathrm{\Delta }z_f(\alpha _b-1)\frac{\mathrm{\Omega }}{b}$$ (7) i.e. the two correlations are proportional. The proportionality factor $`Q`$ has the form $$Q\equiv \frac{6\lambda _f(1-\lambda _f/\lambda _b)[1-(1-\mathrm{\Omega })\lambda _f\lambda _b](1-\lambda _f)}{[1-(1-\mathrm{\Omega })\lambda _f][1-(1-\mathrm{\Omega })\lambda _f^2]^2}$$ (8) Fig. 1 (top) shows the dependence of $`Q`$ on $`z_b`$ and $`z_f`$ for the $`\mathrm{\Omega }=0.3,\mathrm{\Lambda }=0`$ case. For the $`\mathrm{\Omega }+\mathrm{\Lambda }=1,\mathrm{\Lambda }>0`$ case, there is no explicit form for the angular distances, and the corresponding expression for $`Q`$ is (Fig. 1, bottom) $$Q=\frac{12(1+z_f)}{\mathrm{\Sigma }_c(z_f,z_b)\sqrt{1+\mathrm{\Omega }z_f+\mathrm{\Lambda }[(1+z_f)^{-2}-1]}}$$ (9) Note in Fig. 1 that for $`z_f<0.2`$ and $`z_b>1`$ the value of $`Q`$ in both open and flat geometries changes very slowly with $`z_b`$ and almost linearly with $`z_f`$. To better understand this behavior from Eq. (8), let us assume that $`z_f\ll 1<z_b`$. In that case, $`\lambda _f\simeq s_f\simeq \frac{1}{2}z_f`$, $`a_f\simeq 1-z_f`$ and $`Q\simeq 3z_f`$. Therefore, $`w_{fb}`$ is almost independent of $`\lambda _b`$ (Fig. 1) and roughly $`w_{fb}\simeq 3z_f\mathrm{\Delta }z_f(\alpha _b-1)\mathrm{\Omega }b^{-1}w`$. ## 3. Practical application The main quantity which determines the signal in the measurement of the angular correlation function is the excess (or defect) in the expected number of galaxy pairs.
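A short numerical sketch of Eq. (8), using the $`\lambda (z)`$ parametrization quoted in Sec. 2, makes both limits explicit: the weak dependence on $`z_b`$ and the $`Q\simeq 3z_f`$ behavior at small $`z_f`$. The cosmological parameters are illustrative choices:

```python
import numpy as np

def lam(z, Omega):
    """Normalized comoving distance lambda(z) for the Lambda = 0 case (Sec. 2)."""
    r = np.sqrt(1 + Omega * z)
    return (r - 1) / (r - 1 + Omega)

def Q_open(zf, zb, Omega):
    """Proportionality factor Q of Eq. (8), open (Lambda = 0) geometry."""
    lf, lb = lam(zf, Omega), lam(zb, Omega)
    num = 6 * lf * (1 - lf / lb) * (1 - (1 - Omega) * lf * lb) * (1 - lf)
    den = (1 - (1 - Omega) * lf) * (1 - (1 - Omega) * lf**2) ** 2
    return num / den

Omega = 0.3                                   # assumed value, for illustration
for zf in (0.05, 0.1, 0.2):
    Qs = [round(Q_open(zf, zb, Omega), 3) for zb in (1.0, 2.0, 3.0)]
    print(f"zf={zf}:  Q(zb=1,2,3) = {Qs}   vs 3*zf = {3*zf}")
```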
For a bin with surface $`A_{bin}`$ at distance $`\theta `$ this number will be $$\mathrm{\Delta }N(\theta )\simeq (\pm )N_fn_bA_{bin}(\theta )w_{fb}(\theta )$$ (10) Therefore, the signal–to–noise of the detection will be roughly $$\frac{S}{N}\simeq \frac{\mathrm{\Delta }N}{\sqrt{N}}=\sqrt{N_f}\sqrt{n_bA_{bin}}w_{fb}(\theta )$$ (11) We have not included the scatter due to the clustering of foreground galaxies and background galaxies, but it is unlikely that this effect will increase the noise over Poisson by more than a factor of a few for any realistic case. The quantity $`n_bA_{bin}`$ is the number of background galaxies per bin. If we assume radial concentric bins of width $`\mathrm{\Delta }\theta `$ and $`w_{fb}=A_{fb}\theta ^{-\gamma }`$, the signal to noise in each bin will be $$\frac{S}{N}\simeq A_{fb}\sqrt{2\pi N_fn_b\mathrm{\Delta }\theta }\theta ^{0.5-\gamma }$$ (12) Since $`\gamma `$ is typically $`0.7`$–$`0.8`$, the efficiency of the method will decrease very slowly with radius, allowing one to map $`\mathrm{\Omega }/b`$ up to very large scales. The SDSS will obtain $`10^6`$ spectra for a population of $`\langle z\rangle \simeq 0.1`$ galaxies (Gunn & Weinberg 1995), forming a splendid foreground sample. It may be assumed that its projected angular correlation function will be similar to that of the APM catalog, which has an amplitude $`A=0.44`$ at $`1^{\circ }`$ and a slope $`\gamma =0.668`$ (Maddox et al. 1990). The SDSS will also obtain $`S/N\simeq 10`$ $`u^{\prime }g^{\prime }r^{\prime }i^{\prime }z^{\prime }`$ photometry for $`10^8`$ galaxies in a $`10^4`$ square degree region, for which photometric redshifts will be estimated. The background sample can be formed by those galaxies with $`z>0.4`$ and $`\langle z\rangle \simeq 0.7`$–$`0.8`$, with $`n_b\simeq 1`$–$`1.5`$ galaxies per square arcminute. The value of $`Q`$ cannot be considered constant for these two samples (see Fig. 1) because of the range of redshifts involved. Luckily, the existence of spectroscopic redshifts for the foreground sample will allow the selection of subsamples with an extremely thin redshift distribution for which $`Q`$ is approximately constant, each yielding independent estimates of $`\mathrm{\Omega }/b`$ which can afterwards be combined. A rough estimate of the expected result using Eq. (12) gives $$\left(\frac{S}{N}\right)_{SDSS}\simeq 20\frac{\mathrm{\Omega }}{b}\sqrt{\frac{\mathrm{\Delta }\theta }{5^{\prime }}}\theta ^{-0.2}$$ (13) (we have assumed $`N_f=10^6,n_b=1.5,\mathrm{\Delta }z_f=0.05,Q=0.25,\alpha _b-1=0.5`$). This shows that it will be possible to map $`\mathrm{\Omega }/b`$ with an extremely good combination of resolution and accuracy using the SDSS. At large radii, the bias factor is expected to become constant (Coles 1993), which means that the bin size can be made as large as desired without being affected by the scale dependence of $`b`$, and therefore substantially decrease the error in the estimation of $`\mathrm{\Omega }/b`$. It is obvious from the above numbers that the main source of errors will not be shot noise, but contamination due to low redshift galaxies which may sneak into the high–redshift sample, creating a spurious correlation. This contamination can be very effectively minimized by applying a Bayesian threshold (Benítez 1999). In addition, it will be possible to accurately quantify and correct this effect using a calibration sample with spectroscopic redshifts. The SDSS also plans to obtain spectra for a sample of $`10^5`$ red luminous galaxies with $`\langle z\rangle \simeq 0.4`$ and $`10^5`$ QSOs.
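The estimate of Eq. (13) can be reproduced with a few lines of code. All input numbers are the illustrative choices quoted above, and the unit conventions (foreground amplitude normalized at $`1^{\circ }`$, $`n_b`$ and $`\mathrm{\Delta }\theta `$ in arcminute units, $`\theta `$ in degrees) are our assumption for recovering the quoted prefactor:

```python
import numpy as np

# Order-of-magnitude S/N per radial bin, Eq. (12), with the inputs quoted in the text:
# N_f = 1e6, n_b = 1.5 arcmin^-2, dtheta = 5', Q = 0.25, dz_f = 0.05, alpha_b - 1 = 0.5,
# APM amplitude 0.44 at 1 degree, gamma ~ 0.7 (so 0.5 - gamma ~ -0.2).
def sn_per_bin(theta_deg, omega_over_b, N_f=1e6, n_b=1.5, dtheta=5.0,
               Q=0.25, dz_f=0.05, slope=0.5, A_w=0.44, gamma=0.7):
    A_fb = Q * dz_f * slope * omega_over_b * A_w      # amplitude of w_fb at 1 degree
    return A_fb * np.sqrt(2 * np.pi * N_f * n_b * dtheta) * theta_deg ** (0.5 - gamma)

for theta in (0.1, 1.0, 5.0):                         # degrees
    print(f"theta = {theta:4.1f} deg   S/N = {sn_per_bin(theta, omega_over_b=1.0):5.1f}")
```

At $`\theta =1^{\circ }`$ and $`\mathrm{\Omega }/b=1`$ this returns roughly 19, consistent with the $`20\,\mathrm{\Omega }/b`$ prefactor of Eq. (13).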
Due to the much lower density of the background sample, it will not be possible to attain such a good spatial resolution in the measurement of $`\mathrm{\Omega }/b`$, but these samples will allow us to trace the evolution of biasing with redshift up to $`z\simeq 1`$. However, experience shows that in this case the utmost attention will have to be paid to eliminating or accounting for the observational biases in the QSO detection and identification procedure, which can totally distort the estimation of $`w_{fb}`$ (Ferreras, Benítez & Martínez-González 1997, Benítez, Sanz & Martínez-González 1999). ## 4. Conclusions A correlation (positive or negative) between the surface density of foreground galaxies tracing the Large Scale Structure and the position of background galaxies and quasars is expected from the magnification bias effect (Canizares 1981). We show that this foreground–background correlation function $`w_{fb}`$ can be used as a straightforward and almost model-free method to measure the cosmological ratio $`\mathrm{\Omega }/b`$. For samples with appropriate redshift distributions, $`w_{fb}\propto \mathrm{\Omega }\langle \delta g\rangle `$, where $`\delta `$ and $`g`$ are respectively the foreground dark matter and galaxy surface density fluctuations. Therefore, $`\mathrm{\Omega }/b\propto w_{fb}/w`$, where $`w\equiv \langle gg\rangle `$ is the foreground galaxy angular two-point correlation function, $`b`$ is the biasing factor, and the proportionality factor is independent of the dark matter power spectrum. Simple estimations show that the application of this method to the galaxy and quasar samples generated by the upcoming Sloan Digital Sky Survey will achieve a highly accurate and well resolved measurement of the ratio $`\mathrm{\Omega }/b`$. The authors thank Enrique Martínez–González for interesting discussions. NB acknowledges a Basque Government postdoctoral fellowship and partial financial support from the NASA grant LTSA NAG-3280. JLS acknowledges a MEC fellowship and partial financial support from CfPA.
# 1 Introduction ## 1 Introduction With the advent of inflationary theories of the early universe, it has been argued that the present stage of hot FRW “big-bang” cosmology was preceded by an epoch of cosmological evolution dominated by the dynamics of scalar fields . The success of inflationary models in providing explanations for flat large-scale geometry (as suggested by the location of the acoustic peaks in the CMBR anisotropies), and for the origin of approximately scale-free adiabatic density perturbations (which can be used to simultaneously fit both the CMBR anisotropies and observations of cosmic structure formation), lends support to the idea of an early scalar-field dominated epoch. A crucial question in this picture is the nature of the transition from the scalar field dominated epoch, to the hot FRW epoch, which is referred to as reheating. The nature of this transition also relates to other aspects of early universe dynamics necessary for a successful cosmology, such as mechanisms of baryogenesis, the resolution of cosmological moduli problems, and possible sources for non-thermal dark matter. The standard approach to reheating, which applies to sufficiently weakly coupled inflaton fields , is to treat quanta of the inflaton field as particles, which undergo independent single particle decay; this treatment, if adequate, has the advantage that the post-inflation reheat temperature is determined by the microphysics of the model. For inflaton fields with mass as suggested by the simplest chaotic or supersymmetric models, and decaying by gravitational strength interactions, this treatment is adequate, leading to moderate reheat temperatures ($`\mathrm{O}(10^{10})`$ GeV) which are consistent with the absence of GUT-scale defects such as monopoles, which are capable of incorporating a variety of (s)leptogenesis or electroweak mechanisms for generation of the BAU, and which avoid, in the supersymmetric case, cosmological problems with thermal overproduction of gravitinos after reheating . Recently it has been realized that the standard treatment of reheat in terms of single particle inflaton decay may be seriously misleading in circumstances where there is coherent enhancement of the transition to bosonic decay products . For large mode occupation numbers of the decay product field we may treat its dynamics as being essentially classical. Mode by mode for the decay product field its coupling to the oscillating inflaton field induces a periodic time dependence in the mode mass (modulated by cosmic expansion which “sweeps” the time dependence of each comoving mode through the bands of the stability chart of the mode equations.) This periodic modulation of the parameters of the oscillator associated with each mode of the decay product field, can induce parametric resonance in bands of the mode parameters, leading to exponential growth in the decay mode amplitude. The resonant decay of the inflaton may have important cosmological implications like non-thermal symmetry restoration and subsequent formation of topological defects , revival of GUT baryogenesis scenarios , supersymmetry breaking , superheavy particle production , and gravitino production . The exponential growth in the mode occupation number may be modified or regulated by a number of physical processes. These include the decay of produced quanta to other particles or the rescattering of final state particles . 
Another possibility, occurring in models with gauge-strength self-interactions between the produced final state particles, is the regulation of the parametric resonance by the self-interaction induced effective masses of the produced quanta, which can move these quanta out of the available resonance bands; in this case, resonance only proceeds as thermalization and Hubble dilution reduce the plasma masses of the final state quanta, resulting in a quasi-steady-state resonance conversion of inflaton oscillation energy to decay products . This general scenario for the physically realistic case of decay products with gauge charge has been verified in explicit calculations in the narrow-band resonance case , and in numerical simulations in the broad-band case . While to date analytical and numerical treatments of parametric resonance have considered oscillations of a single real field decaying to a single real decay product field, in realistic models the field content is often more extensive. For supersymmetric theories this is unavoidable, as the physical scalars of simple (N=1) supersymmetry come as components of chiral multiplets and are complex. So for these types of theories, we should at the very least consider the nature of coherent decays when the fields involved are complex, though non-supersymmetric models with multiple real scalar fields may share some of the features of the simplest complex case. Within supersymmetric models of particle physics, there are several different circumstances under which the decay of a homogeneous complex scalar condensate may occur in the early universe. At the end of inflation one expects to have a spatially homogeneous inflaton scalar condensate, whose decay energy will ultimately be responsible for cosmic reheating and the initiation of hot big-bang cosmology. As well, in the supersymmetric standard model there are directions in the scalar field space of squarks and sleptons which are F-flat and D-flat, and which only gain a potential from supersymmetry breaking. In the early universe these directions may be populated with (very) large vev’s after the end of the inflationary epoch; these vev’s may carry enormous vev to mass ratios (Mathieu resonance parameter $`q`$) and couple to other directions in scalar field space with couplings capable of inducing resonant decay. Finally, supersymmetric models are generically plagued with gauge singlet scalar moduli, whose homogeneous oscillation poses grave cosmological difficulties which might be ameliorated by coherent decay of their oscillation amplitude. For self-interactions of complex scalars of the general form dictated by the F-term and D-term couplings arising in globally supersymmetric theories, the fields generally appear in complex conjugate pairs for the F-terms and diagonal D-terms. For example, let us consider a complex scalar field $`\mathrm{\Phi }`$ in a chiral supermultiplet whose decay will be induced by a trilinear (renormalizable) coupling in a superpotential $`W`$ to a chiral multiplet labelled by its scalar $`\mathrm{\Xi }`$, where $`W\supset \mathrm{\Phi }\mathrm{\Xi }\mathrm{\Xi }`$. The resulting F-term coupling inducing the decay is then of the form $`\mathrm{\Phi }^{*}\mathrm{\Phi }\mathrm{\Xi }^{*}\mathrm{\Xi }`$, and is invariant under global phase redefinitions of either the $`\mathrm{\Phi }`$ or the $`\mathrm{\Xi }`$.
We will see in the next section that in cases such as this the phase invariance of the resulting couplings implies that the equations for modes of the real and imaginary components of $`\mathrm{\Xi }`$ are decoupled and independent, and will allow us to simply analyze the resonant decay of a $`\mathrm{\Phi }`$ condensate with out of phase oscillations for the real and imaginary components of $`\mathrm{\Phi }`$, into real and imaginary components of the decay product field $`\mathrm{\Xi }`$. We can always phase rotate our scalar field $`\mathrm{\Phi }`$ to a basis such that its initial vev lies along the real axis. If there is no component of force along the direction of the imaginary axis (i.e. the scalar potential is phase invariant), the trajectory of the motion of $`\mathrm{\Phi }`$ is limited to the real axis and the field hits the origin as it oscillates back and forth. In this case, provided that the coupling of the oscillating field to the final state field is also phase invariant, the situation is exactly that of a real oscillating field, and the same arguments apply for parametric resonance particle production. However, if the scalar potential is not phase invariant, i.e. depends on the phase of the oscillating field as well, a torque is exerted on the field. This leads to the deflection of the trajectory from a straight line and results in changing the trajectory into something that finally resembles an ellipse, after the torque in field space has effectively ceased its action. In this case the field no longer passes through the origin but rather has a finite distance of closest approach to it. This will have important implications for broad-band parametric resonance as we discuss below. In supersymmetric models not only are complex scalar fields inherently involved, but also a phase dependent part of the scalar potential can arise naturally from supersymmetry breaking. Let us consider the simplest case with the following terms only involving the inflaton in the superpotential $`W=\frac{1}{2}m\mathrm{\Phi }^2+\frac{1}{3}\lambda \mathrm{\Phi }^3`$. In supergravity models with broken supersymmetry, there is a corresponding phase dependent term (the “A-term”) $`Am_\mathrm{\Delta }\frac{\partial W}{\partial \mathrm{\Phi }}+\mathrm{h}.\mathrm{c}.`$ in the scalar potential, where $`A`$ is a dimensionless model-dependent constant and $`m_\mathrm{\Delta }`$ is the scale of supersymmetry breaking in the sector in which $`\mathrm{\Phi }`$ lies. There is also a phase dependent term $`m\lambda ^{*}\mathrm{\Phi }\mathrm{\Phi }^{*2}+\mathrm{h}.\mathrm{c}.`$ in the F-term part of the scalar potential. This generically occurs in minimal supergravity models for inflation where the superpotential contains a series of $`\lambda _n\frac{\mathrm{\Phi }^n}{M^{n-3}}`$ terms , and occurs even in “no scale” supergravity models after the inclusion of radiative corrections to the effective potential . For $`V\propto \mathrm{\Phi }^m(\mathrm{\Phi }^{*})^n`$ the potential along the angular direction is periodic with $`m-n`$ minima. In general, during inflation $`\mathrm{\Phi }`$ rolls down to its minimum both along the radial and angular directions. In order to have a torque to deflect the trajectory, $`\mathrm{\Phi }`$ must not be at the minimum along the angular direction at the onset of radial motion.
This can happen in two ways: either there are several phase dependent parts of the potential with a non-adiabatic transition from the minimum of one to another, or $`\mathrm{\Phi }`$ does not settle at the minimum along the angular direction. The first possibility happens when other supersymmetry breaking sources in the early universe (e.g. non-zero energy density of the universe or finite temperature effects) are dominant over the low energy one. In this case the minimum in the angular direction at early times is different from that at late times (due to independent phases for the coefficients of different A-terms). If the transition from one minimum to another one is non-adiabatic, $`\mathrm{\Phi }`$ will not be at the minimum at late times regardless of its start at the minimum at early times. The second possibility happens when the potential along the angular direction is flat enough during inflation. In this case $`\mathrm{\Phi }`$ will not roll down to its minimum and can start at any position at the onset of radial motion. In both cases, the further $`\mathrm{\Phi }`$ is away from the minimum, the larger the deflection of its trajectory and the wider the ellipsoidal shape will be. ## 2 Complex Mathieu Resonance As described in the previous section, the potential for complex scalar oscillations usually includes (as well as the scalar mass-squared terms) Hubble induced A-terms which are mainly effective during the first few cycles of oscillation. The A-terms provide a “torque” to the complex oscillation during the first few cycles, resulting in a net “elliptic” motion in the mass-term induced potential, after the A-terms have effectively ceased to be active. The resulting elliptic oscillation in the mass term potential will be damped by the Hubble drag, resulting in the ellipse shrinking over time. In order to introduce new considerations characteristic of resonance with complex fields, without getting involved in model dependent details, in most of this paper we shall simply ignore the damping and consider complex, elliptic, constant amplitude oscillations. In particular, this means we do not need to specify which particular field is considered (e.g. inflaton versus susy standard model flat direction), nor do we need to determine the cosmological details involved in determining expansion and damping at the time that the oscillations of the field in question occur. In addition to presenting a tractable and interesting mathematical problem, consideration of the undamped oscillation should also provide the essential features of the full cosmological case including the effects of expansion, at least in the generic case of broad-band resonance. This follows from the key observation of that in the broad-band case the resonant excitation of the decay product field occurs over a tiny fraction of the cycle of the driving field, when the latter passes near the origin, as only here do the decay product field dynamics depart from the adiabatic regime. So mode number excitation proceeds by a series of abrupt jumps, and the dynamics of a given jump may be considered with the instantaneous value of the oscillator parameters, resulting in the picture of “stochastic resonance” analyzed in . 
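A toy integration illustrates how such a phase-dependent term torques an initially radial trajectory into an ellipse that misses the origin. The potential, the complex coefficient, the initial conditions and the sharp switch-off of the “A-term” (mimicking its rapid redshifting) are all illustrative assumptions rather than a specific model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy model: Phi'' = -dV/dPhi* for V = m^2 |Phi|^2 + (lam Phi^3 + c.c.), with the
# cubic (phase-dependent) term switched off after t_off.  A complex lam puts the
# angular minimum away from the real axis, so the field starts off-minimum and
# receives a torque.  All values are arbitrary illustrations.
m, t_off = 1.0, 15.0
lam0 = 0.05 * np.exp(1j * np.pi / 4)

def rhs(t, y):
    phi = y[0] + 1j * y[1]
    lam = lam0 if t < t_off else 0.0
    force = -m**2 * phi - 3 * np.conj(lam) * np.conj(phi) ** 2
    return [y[2], y[3], force.real, force.imag]

sol = solve_ivp(rhs, (0.0, 60.0), [1.0, 0.0, 0.0, 0.0], max_step=0.05)
phi = sol.y[0] + 1j * sol.y[1]
late = np.abs(phi[sol.t > t_off])
# a strictly positive closest approach signals an ellipse, not a line through 0
print("late-time distance of closest approach:", late.min())
```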
The present paper discusses the changes in the dynamics of decay mode excitation which arise from the complex nature of the driving field oscillation—the differences in question arise from suppression of the adiabaticity violations that induce the jump in mode occupation numbers of the decay product field, so we expect that considerations using the instantaneous value of the driving oscillation amplitude should give insight in the complex case, much as such considerations did in the real stochastic resonance case. In any case, for the purposes of our present calculations we shall consider the parametric resonance production of decay product field modes $`\mathrm{\Xi }`$, from phase-invariant coupling to constant-amplitude out of phase (“elliptic”) oscillations of a driving field $`\mathrm{\Phi }`$. Detailed cosmological studies of applications to inflaton or moduli oscillations will be considered elsewhere. With the couplings discussed in the previous section, after the A-terms cease to be effective, the equation of motion for the $`\mathrm{\Xi }`$ field is of the form: $$\ddot{\mathrm{\Xi }}_k+\left(\frac{k^2}{a^2}+m_\mathrm{\Xi }^2+g^2|\mathrm{\Phi }|^2\right)\mathrm{\Xi }_k=0,$$ (1) where $`\mathrm{\Xi }_k`$ is the decay product field mode with comoving wavenumber $`k`$, $`a`$ is the FRW scale factor, $`m_\mathrm{\Xi }`$ the mechanical mass of the $`\mathrm{\Xi }`$, and the superpotential coupling is as above. We note that both the real and imaginary piece of the $`\mathrm{\Xi }`$ field will separately obey this equation, and hereafter we use $`\chi `$ to denote either the real or imaginary piece of $`\mathrm{\Xi }`$. In our analysis, we will treat the physical momentum of the decay field mode and the relative phase and amplitude of the driving field oscillation as fixed parameters, and attempt to map out the regions of instability in their parameter ranges. As noted above, for the case of stochastic broad-band resonance, where the amplification occurs in small intervals while the field passes close to the origin, one should be able to approximate the instantaneous behaviour within each interval by the corresponding behaviour of a system of the type we analyze here. We decompose the driving field $`\mathrm{\Phi }`$ into real and imaginary pieces as follows: $$\mathrm{\Phi }=\varphi _R+i\varphi _I.$$ (2) By a phase rotation we put the largest amplitude component of oscillation into the real piece, and so we write: $`\varphi _R`$ $`=`$ $`\varphi \mathrm{sin}(m_\varphi t)`$ (3) $`\varphi _I`$ $`=`$ $`f\varphi \mathrm{cos}(m_\varphi t),`$ (4) where now $`\varphi `$ is the constant amplitude of the real component of oscillation and $`f\in [0,1]`$ is the “out of phase” fractional component giving elliptic oscillation in the complex $`\mathrm{\Phi }`$ plane; we will be particularly interested in the case where $`f\ll 1`$. We wish to cast this into the canonical form of the (real) Mathieu equation: $$y^{\prime \prime }+(A-2q\mathrm{cos}(2z))y=0,$$ (5) where the prime denotes the derivative with respect to the independent variable $`z`$.
We begin by substituting our definition of the $`\mathrm{\Phi }`$ field into (1) above, giving $$\ddot{\chi }_k+\left(\frac{k^2}{a^2}+m_\chi ^2+g^2\varphi ^2\left(\mathrm{sin}^2(m_\varphi t)+f^2\mathrm{cos}^2(m_\varphi t)\right)\right)\chi _k=0.$$ (6) We replace $`\mathrm{cos}^2(m_\varphi t)`$ with $`1-\mathrm{sin}^2(m_\varphi t)`$ and collect terms, giving $$\ddot{\chi }_k+\left(\frac{k^2}{a^2}+m_\chi ^2+f^2g^2\varphi ^2+\left(1-f^2\right)g^2\varphi ^2\mathrm{sin}^2(m_\varphi t)\right)\chi _k=0.$$ (7) Using the half-angle formula $`\mathrm{sin}^2\theta =\frac{1}{2}(1-\mathrm{cos}2\theta )`$ we obtain the form $$\ddot{\chi }_k+\left(\frac{k^2}{a^2}+m_\chi ^2+f^2g^2\varphi ^2+\frac{1}{2}\left(1-f^2\right)g^2\varphi ^2\left(1-\mathrm{cos}(2m_\varphi t)\right)\right)\chi _k=0.$$ (8) which may be rewritten in the form of the Mathieu equation $$\chi _k^{\prime \prime }+\left(A_k(f)-2q(f)\mathrm{cos}2z\right)\chi _k=0$$ (9) with the following new identifications: $$z=m_\varphi t$$ (10) $$A_k(f)=\frac{\frac{k^2}{a^2}+m_\chi ^2+f^2g^2\varphi ^2}{m_\varphi ^2}+2q(f)$$ (11) $$q(f)=\frac{(1-f^2)g^2\varphi ^2}{4m_\varphi ^2}.$$ (12) Notice that the coefficients of the Mathieu equation are now functions of the imaginary fraction $`f`$. This is an important feature, as it means that the characteristic behaviour of the parametric resonance as described by the Mathieu equation takes the same form in the complex case as in the real case; however, *the relationship between the physical parameters of the process and the Mathieu coefficients is redefined*. So we see that there is a mapping that takes the Mathieu equation for the complex modes of the decay field when driven by the complex parametric field with out of phase real and imaginary pieces in its oscillation amplitude, and maps it to a real Mathieu equation for the oscillations of the real and imaginary pieces of the decay product field with shifted parameters. There are several features of this mapping that simply encapsulate physical features of the original problem. First we note that for the case where $`f=0`$, where the oscillation of the parametric driving field is along the real axis, the coefficients $`A_k`$ and $`q`$ reduce to those previously considered in the literature for the case of purely real parametric oscillation . In the other extreme limit, when $`f=1`$, we have $`q(1)=0`$, meaning that one is restricted to be along the $`A_k`$ axis on the Mathieu equation stability diagram, allowing only non-resonant particle production. This corresponds to the physical observation that because the decay couplings were phase invariant, our original equation for the complex oscillations involved only the magnitude of $`\mathrm{\Phi }`$ in the oscillation equation. In the case that the real and imaginary pieces of the $`\mathrm{\Phi }`$ oscillation have the same amplitude ($`f=1`$), the coefficients in the $`\chi `$ equation of motion become time independent and there can be no parametric amplification. We also note that the imaginary fraction $`f`$ of the oscillations enters into the coefficients only as $`f^2`$, meaning that the effect is the same regardless of the direction traveled around the ellipse (i.e. whether $`f`$ is positive or negative).
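These identifications can be checked directly by computing the Floquet exponent of Eq. (9) from its monodromy matrix; the sketch below uses illustrative values of $`A_k(0)`$, $`q(0)`$ and $`f`$, and the identity $`A_k(f)=A_k(0)+2f^2q(0)`$ follows from Eqs. (11)–(12):

```python
import numpy as np
from scipy.integrate import solve_ivp

def mu_k(A0, q0, f):
    """Floquet exponent of Eq. (9); mu_k = 0 means stable, mu_k > 0 means resonance."""
    q = (1 - f**2) * q0          # q(f), Eq. (12)
    A = A0 + 2 * f**2 * q0       # A_k(f) re-expressed through A_k(0) and q(0)
    def rhs(z, y):
        return [y[1], -(A - 2 * q * np.cos(2 * z)) * y[0]]
    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):   # build the monodromy matrix over period pi
        s = solve_ivp(rhs, (0.0, np.pi), y0, rtol=1e-10, atol=1e-12)
        cols.append(s.y[:, -1])
    M = np.array(cols).T
    return np.log(np.max(np.abs(np.linalg.eigvals(M)))) / np.pi

# illustrative point inside the first instability band: watch mu_k get quenched
for f in (0.0, 0.1, 0.3, 0.7):
    print(f"f = {f:.1f}   mu_k = {mu_k(1.0, 0.5, f):.4f}")
```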
This reflects the fact that the original equation for the complex oscillations is second order and symmetric under time inversion, which corresponds to having the parametric field circulate about the oval in the $`\mathrm{\Phi }`$ plane in the opposite sense, or reversing the sign of $`f`$. Finally, in the intermediate regime $`f\in (0,1)`$, $`q(f)`$ is always lessened, while $`A_k`$ is always increased (compared to $`f=0`$), so increasing $`f`$ never causes the system to leave the physical regime. As noted above, increasing $`f`$ means moving “inland” on the stability diagram for the Mathieu equation. In general, this causes suppression of the resonant growth of the $`\chi `$ modes; however, it also allows one to explain the counterintuitive observation that in certain cases the resonant band exponential growth parameter $`\mu _k`$ may actually increase as one turns on the out of phase component $`f`$. To understand this, imagine that for oscillations with no imaginary fraction $`f`$ one is sitting in parameter space at the lower border of one of the instability bands (where $`\mu _k`$ is zero). Now slowly increase $`f`$. The parameter mapping derived above implies that you start to move in a “northwest” direction on the band chart into the instability band. As such, your $`\mu _k`$ begins to increase. As you continue to increase $`f`$, you will eventually hit the maximum possible $`\mu _k`$ along your trajectory, after which your $`\mu _k`$ begins to drop again. Eventually you will leave the instability band altogether for sufficiently large $`f`$. For very high order instability bands, it should be possible to encounter many bands along one trajectory of increasing $`f`$. From a different point of view, were one to look at the instability diagram as a function of the $`A_k`$ and $`q`$ of standard real Mathieu parametric resonance (i.e. as a function of our $`A_k(0)`$ and $`q(0)`$, for different values of $`f`$), the effect of turning on $`f`$ would be seen to manifest itself as both a narrowing and a downwards shift of the instability bands as $`f`$ increased. In addition, isocontours of $`\mu _k`$ would be seen to “flow out” of the bands as $`f`$ increases. For $`f=1`$ each instability band collapses to a line with $`\mu _k=0`$, as with a phase independent coupling of the parameter field to the mode field there would be no time dependence in the mode field equation of motion, and the system would be stable for all $`A_k`$ and $`q`$. Figure 1 illustrates in detail both the bending of the bands and the decrease of $`\mu _k`$ for the first resonance band as we turn on the imaginary fraction $`f`$ of our oscillations. We expect that the suppression of resonance for fixed non-zero imaginary fraction $`f`$ is stronger in higher resonance bands, as at larger $`q`$ the resonance proceeds by violation of adiabaticity in the decay mode evolution, and for large $`q`$ at fixed $`f`$ the induced decay mode field mass is always large. This is illustrated in Figure 2 where we show the quenching of resonance for the first 7 resonance bands, as $`f`$ is turned on. In the next section we will analytically estimate the extent of the domain of surviving resonant bands, for fixed imaginary fraction $`f`$. ## 3 Parameter Domain For Broad-Band Resonance Let us first briefly discuss the effect of mixing in the narrow-band regime.
The narrow-band resonance in the case of a real oscillating field is efficient for $`(\frac{m}{g})^{\frac{4}{3}}\lesssim \varphi \lesssim \frac{m}{g}`$. In the case of a complex field with mixing parameter $`f`$ this reads as $$\left(\frac{m^2+f^2g^2\varphi ^2}{(1-f^2)g^2}\right)^{\frac{2}{3}}\lesssim \varphi \lesssim \left(\frac{m^2+f^2g^2\varphi ^2}{(1-f^2)g^2}\right)^{\frac{1}{2}},$$ (13) where $`m`$ and $`g`$ are replaced with $`\sqrt{m^2+f^2g^2\varphi ^2}`$ and $`\sqrt{1-f^2}g`$, respectively. It is easily seen that the condition for an efficient narrow-band resonance remains almost unchanged in the complex case, unless $`f`$ is close to 1. Therefore, in physically interesting situations the narrow-band resonance will not be substantially affected by the mixing. Of course, in the extreme case with $`f=1`$, there is no time variation in the Mathieu equation and, hence, no narrow-band resonance. In the case of real broad-band parametric resonance, Kofman, Linde, and Starobinsky argue that the requirement of adiabaticity violation for broad-band parametric amplification implies that it only occurs with significant $`\mu _k`$ for $`A_k-2q\lesssim \sqrt{q}`$. Their argument presupposes that the decay terminates after it has entered an “explosive” phase, where the effective mass of the decaying $`\varphi `$ is dominated by the coupling to the plasma of decay product $`\chi `$ modes which has been built up by parametric resonance decay. The effective physical 3-momentum of the quasi-relativistic decay $`\chi `$ modes is of order their energy, which is no more than of order the induced mass of the decaying $`\varphi `$; this is of order $`g\chi _{\mathrm{end}}`$, which in turn is of order $`g\varphi _{\mathrm{end}}`$, which can also be written $`\sqrt{gm_\varphi ^{\mathrm{eff}}\varphi _{\mathrm{end}}}`$. So $`(k_{\mathrm{phys}}^2/m_\varphi ^2)\lesssim (g\varphi _{\mathrm{end}}/m_\varphi ^{\mathrm{eff}})`$, which can be rewritten as $`A_k-2q\lesssim \sqrt{q}`$. For a detailed discussion we refer to the treatment in the literature. We have seen in the preceding section that the case of complex oscillation with imaginary fraction $`f`$ can be mapped onto a Mathieu equation with shifted resonance parameters. By substituting the “shifted” parameters induced by the non-zero imaginary component of oscillation, we should thus be able to determine what range of $`q`$ supports broad-band resonance for a given fraction of out of phase imaginary component in the oscillation of the driving parameter. Recall the expressions for the equivalent shifted $`A_k(f)`$ and $`q(f)`$ from equations (11) and (12) respectively. Substituting these expressions into the relation $`A_k-2q\lesssim \sqrt{q}`$ allows us to write it as: $$\frac{\frac{k^2}{a^2}+m_\chi ^2+f^2g^2\varphi ^2}{m_\varphi ^2}\lesssim \frac{\sqrt{(1-f^2)}g\varphi }{2m_\varphi }.$$ (14) This leads us to the relation $$A_k(0)-2q(0)+4f^2q(0)\lesssim \sqrt{1-f^2}\sqrt{q(0)},$$ (15) or, defining $`E_k\equiv A_k(0)-2q(0)`$, we have $$E_k+4f^2q(0)\lesssim \sqrt{1-f^2}\sqrt{q(0)}.$$ (16) If we recall that physical values of $`E_k`$ are positive semi-definite, we find that for a fixed non-zero imaginary fraction $`f`$ there is an upper bound on the parameter $`q(0)`$ for which resonance occurs, and the allowed range of resonant $`q(0)`$ is bounded above by $`\frac{1-f^2}{16f^4}`$. (For the small imaginary fractions $`f`$ of physical interest, however, the weaker approximate bound of $`\frac{1}{16f^4}`$ will suffice.)
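A quick numerical reading of this bound (a sketch; the sample values of $`f`$ are illustrative only):

```python
# Maximum q(0) that still allows broad-band resonance at fixed imaginary
# fraction f, obtained from eq. (16) with E_k >= 0:
#   4 f^2 q(0) <~ sqrt(1 - f^2) sqrt(q(0))  =>  q(0) <~ (1 - f^2) / (16 f^4)
def q_max(f):
    return (1 - f**2) / (16 * f**4)

for f in [0.1, 0.2, 0.5, 0.9]:
    print(f, q_max(f))
# f = 0.2 gives about 37.5, while f = 0.5 gives 0.75: for f >~ 1/2 only
# q(0) < 1 survives, i.e. only narrow-band resonance, in agreement with
# the discussion of Section 7 below.
```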
So instead of an ever widening resonance region above the $`A_k=2q`$ line, with thickness of order $`\sqrt{q}`$, as one has in the real case, in the complex case with fixed non-zero imaginary fraction $`f`$ one instead has a region above the $`A_k=2q`$ line of finite extent, with an upper bound on the values of the $`q`$ parameter which can result in resonance. This is qualitatively reasonable, as a fixed imaginary fraction $`f`$ for the oscillation means that as we scale up $`q`$ the ellipse of $`\mathrm{\Phi }`$ broadens as it lengthens, preserving its shape; so throughout the $`\mathrm{\Phi }`$ oscillation $`|\mathrm{\Phi }|`$ has a large value, inducing a large mass for the modes of the decay field $`\mathrm{\Xi }`$, which in turn leads to adiabatic evolution of the $`\mathrm{\Xi }`$, and suppression of broad-band parametric resonance production of the $`\mathrm{\Xi }`$. ## 4 Complex Resonance in “Instant Preheat” Recently, a simpler method of efficient scalar field decay has been proposed, called “instant preheat”. Within models of this type the decaying field rolls once through the origin, at which point the mass of the decay product field to which it is coupled passes through zero, and modes of the decay product field experience non-adiabatic excitation. As the decaying field rolls away from zero (perhaps monotonically) the modes of the decay product field grow in mass; they drain energy from the decaying field through their mass. As the mass of the modes of the decay product field grows, so does their decay width; their subsequent decay, after their mass and decay width have grown sufficiently, then releases the energy they have taken from the original decay field and dumps it into their final decay products, which thermalize the resulting energy. It is interesting to note that while final state effects such as rescattering, backreaction, or plasma masses can prevent preheating from occurring, they are unimportant in the instant preheating scenario. The reason is that for these effects to become important, (at least) several oscillations are needed to build up a large enough occupation number for the final state field. In instant preheating, on the other hand, the energy drain from the oscillating field occurs during each half of an oscillation. In fact, instant preheating can be efficient even if the adiabaticity condition is violated only during the first half of the first oscillation. Therefore, instant preheating is essentially unaffected by the final state effects. The mixing of the real and imaginary parts of the oscillating field, on the other hand, has the same effect in the instant preheating case as in the standard preheating scenario. We recall that the torque from A-terms deflects the trajectory of the oscillating field from a straight line. The Hubble induced A-terms have their largest value at the beginning of the oscillations, and rapidly decrease with Hubble expansion. This means that the deflection is largest during the initial oscillations. Thus, a large enough $`f`$ to restore adiabaticity in the preheating case could do the same for the instant preheating case. ## 5 Non-Convex Potentials Another possibility to achieve rapid decay of a homogeneous condensate occurs in the case that the potential governing the evolution of the condensate scalar has non-convex behaviour over some region of field space.
In this circumstance, it becomes energetically favorable for a scalar condensate in the non-convex region to decompose into inhomogeneous modes; provided the inhomogeneity occurs over long enough wavelengths, the price one pays in kinetic energy for the inhomogeneity is more than compensated by the decreased average potential energy of the regions of field excess and deficit compared to the average field value. This produces a wavenumber band for exponential growth of the mode amplitudes. This has been considered both in the case of inflaton decay and in the case of the growth of inhomogeneities in scalar condensates corresponding to F-flat and D-flat directions of the standard model with non-convex potentials of the type arising from gauge-mediated supersymmetry breaking. It is clear that this is one type of instability which is not vitiated by having the scalar order parameter complex or involving multiple scalar fields. If there is a region in field space with respect to which the potential is non-convex in some direction, then fluctuations corresponding to modes of the field variation in that field direction, of sufficiently long wavelength, will win on the potential versus kinetic energy budget, and grow exponentially. Indeed, the treatment of (complex) flat directions in the supersymmetric standard model explicitly analyzes the conditions for instability of a complex field with a potential which is a non-convex function of the field modulus, and exhibits the resulting instability bands. ## 6 Other Couplings Here, we briefly comment on the situation for another type of coupling between the $`\mathrm{\Phi }`$ and $`\mathrm{\Xi }`$ fields which is also of interest and application. This is the coupling $`g^2(\varphi _R\chi _R+\varphi _I\chi _I)^2`$, whose simplest manifestation is for the potential $`V(\mathrm{\Phi })=\frac{1}{4}\lambda |\mathrm{\Phi }|^4`$, with $`\mathrm{\Phi }`$ and $`\mathrm{\Xi }`$ being the same field. It also arises in supersymmetric models from the D-term part of the scalar potential. This type of coupling leads to the mixing of the $`\chi _R`$ and $`\chi _I`$ mode equations: $`\ddot{\chi }_{R,k}+\left({\displaystyle \frac{k^2}{a^2}}+m_\chi ^2+g^2\varphi _R^2\right)\chi _{R,k}+g^2\varphi _R\varphi _I\chi _{I,k}`$ $`=`$ $`0`$ (17) $`\ddot{\chi }_{I,k}+\left({\displaystyle \frac{k^2}{a^2}}+m_\chi ^2+g^2\varphi _I^2\right)\chi _{I,k}+g^2\varphi _R\varphi _I\chi _{R,k}`$ $`=`$ $`0.`$ (18) In this case the mass eigenstates are $`\frac{\varphi _R\chi _R+\varphi _I\chi _I}{\varphi }`$ and $`\frac{\varphi _I\chi _R-\varphi _R\chi _I}{\varphi }`$ instead of $`\chi _R`$ and $`\chi _I`$ themselves. For oscillatory motion of $`\varphi _R`$ and $`\varphi _I`$ with a phase difference, there are two periodic changes that may lead to resonance: change in the mass eigenvalues (the usual parametric resonance) and change in the mass eigenstates. These two effects cannot simply be superimposed, and it is not easy to give rough arguments for the instability bands and the respective values of the $`\mu _k`$’s. The important point is that for such a coupling, even in the $`f=1`$ case there is still time variation in the mode equations. This variation is present in both the mass eigenstates and the mass eigenvalues. ## 7 Cosmic Expansion and Complex Resonance So far, we have considered modifications to parametric resonance decay which arise in complex field oscillations in the absence of effects of Hubble expansion.
As we noted above, since broad-band resonance is induced by non-adiabaticity of the $`\chi `$ evolution during small intervals of the $`\varphi `$ oscillation, an instantaneous approximation of the $`\chi `$ excitation should be a useful guide during each of the jumps in the mode occupation number. Cosmic expansion then functions to shift the parameters of the oscillator between episodes of mode excitation as $`\varphi `$ passes near zero. We now examine the implications of this in both the narrow- and broad-band cases. Implications for the narrow-band case are simple; as we have seen, the introduction of a phase difference between real and imaginary components of our complex inflaton field $`\mathrm{\Phi }`$ only kills narrow-band resonance for phase differences approaching $`\frac{\pi }{2}`$, or, in the language of this paper, for $`f\to 1`$. Therefore, the resonance should be qualitatively the same in the static approximation and with the Hubble expansion included. For broad-band resonance the situation is completely different. According to equation (16), the broad-band resonance is shut off for $`q>\frac{1}{16f^4}`$. In the static limit $`f`$ and $`q`$ are both constant and resonance is either suppressed, or viable. In an expanding universe, $`f`$ eventually becomes approximately constant as the Hubble induced A-terms turn off (indeed, as pointed out earlier, after several Hubble times the motions along the real and imaginary directions are decoupled and free), while, on the other hand, $`q(t)=(\frac{g\varphi (t)}{2m})^2`$ is redshifted as $`a^{-3}`$. This implies that even if the resonance is suppressed initially, it may be initiated after a sufficient time such that $`q(t)<\frac{1}{16f^4}`$. The right-hand side is less than 1 (or very close to it) for $`f\gtrsim \frac{1}{2}`$. Therefore, in the case of large out of phase components of oscillation, the broad-band resonance is killed and resonance may resume only in the narrow-band regime at a later time. In most physically interesting cases, however, $`f<\frac{1}{2}`$ and the right-hand side is considerably greater than 1. In such cases, broad-band resonance is not eliminated in an expanding universe, but rather delayed. The parameter $`f`$ is determined by the action of the scalar potential (including A-terms) for the oscillating field, and after the initial oscillations it often becomes effectively time-independent. Depending on the dimensionality of the A-term, it may be a function of $`q_i`$, the value of $`q`$ at the start of oscillations. If $`q_i<\frac{1}{16f^4}`$, the onset of broad-band resonance will be unaffected. For $`q_i>\frac{1}{16f^4}`$, resonance is delayed initially, but will resume after sufficient expansion such that $`q<q_{\mathrm{eq}}=\frac{1}{16f^4}`$. A larger $`f`$ leads to a smaller $`q_{\mathrm{eq}}`$; for $`f\gtrsim \frac{1}{2}`$ we have $`q_{\mathrm{eq}}<1`$ and resonance can only occur in the narrow-band regime. For $`f=1`$ resonance is eliminated. Even though the broad-band resonance (for interesting cases) is only delayed in an expanding universe, the mixing still has important consequences. Perhaps the most notable example relates to the production of superheavy particles during resonance. In the standard preheating, $`\mathrm{\Xi }`$’s with a mass up to $`q^{\frac{1}{4}}m_\varphi `$ can be produced. A reduction in $`q`$ at the onset of resonance implies a reduction in the maximum mass of produced particles. This is even more pronounced in the instant preheating case.
Here $`\mathrm{\Xi }`$ decay products $`\mathrm{\Psi }`$ with masses up to $`q^{\frac{1}{2}}m_\varphi `$, provided they have a large enough coupling $`h`$ to $`\mathrm{\Xi }`$, can be produced. A smaller $`q_{\mathrm{eq}}`$ has a two-fold effect in this case. First, the allowable masses are smaller, and second, the decay rate $`\mathrm{\Gamma }_d=\frac{h^2}{8\pi }g\varphi `$ may not be large enough (compared to the frequency of oscillations $`m_\varphi `$) for efficient production of $`\mathrm{\Psi }`$’s. It is easily seen that $`\mathrm{\Gamma }_d\gtrsim m_\varphi `$ for $`h\gtrsim 4\pi ^{\frac{1}{2}}f`$. Therefore, $`\mathrm{\Psi }`$ production is not efficient if $`h\lesssim 4\pi ^{\frac{1}{2}}f`$. Even for $`h\gtrsim 4\pi ^{\frac{1}{2}}f`$, only $`\mathrm{\Psi }`$’s with a mass $`m_\psi \lesssim \frac{1}{4f^2}m_\varphi `$ can be produced. ## 8 Conclusions In this paper, we have considered the changes to the standard picture of parametric resonance decay of a real homogeneous cosmological scalar field which arise if the field is instead complex, with out of phase oscillation of its real and imaginary components and a phase invariant decay coupling. For the case of complex Mathieu type resonance, we give an explicit mapping to a corresponding real Mathieu resonance with shifted parameters that encode the effects of the out of phase components of the oscillating decay field. We showed the resulting effects on the instability bands, demonstrating how they shift and shrink with increasing out of phase (“elliptic”) component of the driving field motion, limiting the swath of instability to a finite area on the $`A_k`$–$`q`$ chart, and eliminating broad-band resonance in the higher modes. We argued that similar effects may be present in the case of complex field models of “instant preheat,” but that instabilities due to regions in field space with non-convex potentials are qualitatively the same in the complex case. Finally, in the context of an expanding FRW universe, we noted that the presence of a fraction of out of phase oscillation would usually delay the onset of parametric resonance, but not eliminate it entirely. Acknowledgements We would like to thank Andrei Linde for very helpful discussions, and explanations of broad-band resonance in the real case. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada. RA would like to thank the CERN Theory Division for kind hospitality during part of this research.
# The Reason for the Efficiency of the Pian–Sumihara Basis ## 1 Introduction Pian and Sumihara first identified the basis $`\left[\begin{array}{ccccc}1& 0& 0& \eta & 0\\ 0& 1& 0& 0& \xi \\ 0& 0& 1& 0& 0\end{array}\right]`$ as the most efficient linear basis for approximating stress in enhanced strain problems. They made this observation more rigorous by way of a Wilson element (a perturbation of sorts). This paper presents a logical mathematical argument for making the same choice of basis, albeit with the wisdom of hindsight. It attributes the greater efficiency of the basis to properties inherent in the mathematics of the problem. The components of the stress tensor are recognised to be related by way of an Airy stress function, and it is in this way that a fundamentally more correct representation of the full linear basis is arrived at. By further requiring the advantages of a two-field problem, the most efficient linear basis is obtained. ## 2 An Airy Stress Function The Airy stress function is a potential of sorts. Interpreting stresses to be the various second derivatives of a single polynomial leads to selective simplification and interdependence between the resulting linear approximations. This simplification and the interdependence are not obvious in a more superficial treatment. $`div𝝈=\mathrm{𝟎}`$ $`\mathrm{}`$ $`{\displaystyle \frac{\partial \sigma _{11}}{\partial x}}+{\displaystyle \frac{\partial \sigma _{12}}{\partial y}}=0\text{and}{\displaystyle \frac{\partial \sigma _{21}}{\partial x}}+{\displaystyle \frac{\partial \sigma _{22}}{\partial y}}=0`$ This is recognisable as $$curl(-\sigma _{12},\sigma _{11},0)=0\text{and}curl(\sigma _{22},-\sigma _{21},0)=0.$$ This, in turn, implies that $`(-\sigma _{12},\sigma _{11},0)`$ and $`(\sigma _{22},-\sigma _{21},0)`$ may be interpreted as $`\nabla \alpha `$ and $`\nabla \beta `$ respectively, without any inconsistency in the $$curl(\mathrm{grad})=0$$ identity. By symmetry of $`𝝈`$, $$\sigma _{12}=\sigma _{21}\mathrm{}\frac{\partial \alpha }{\partial x}-\frac{\partial \beta }{\partial y}=0$$ and for a two dimensional problem of the type under consideration this once again implies $$curl(\beta ,\alpha ,0)=0.$$ $`(\beta ,\alpha ,0)`$ may therefore be interpreted as $`\nabla \mathrm{\Phi }`$ without any inconsistency in the $$curl(\mathrm{grad})=0$$ identity. In summary, with an equation $$div𝝈=\mathrm{𝟎}$$ governing the motion, in the two-dimensional case, the components of the stress may be derived from an Airy stress function as follows $`\sigma _{11}`$ $`=`$ $`{\displaystyle \frac{\partial ^2\mathrm{\Phi }}{\partial y^2}},`$ $`\sigma _{22}`$ $`=`$ $`{\displaystyle \frac{\partial ^2\mathrm{\Phi }}{\partial x^2}},`$ $`\sigma _{12}`$ $`=`$ $`-{\displaystyle \frac{\partial ^2\mathrm{\Phi }}{\partial x\partial y}},`$ where $`\mathrm{\Phi }`$ is the Airy stress function. ### 2.1 Finite Element Approximation Due to approximation, $`div𝝈=\mathrm{𝟎}`$ and not the constitutive $`div𝝈=𝒇`$ are really the equations being solved (Reddy).
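Before proceeding, a small symbolic check (a sketch in sympy, not part of the original argument; the generic function $`\mathrm{\Phi }`$ is an arbitrary illustrative choice) confirms that any stress field derived from an Airy stress function satisfies $`div𝝈=\mathrm{𝟎}`$ identically, by equality of mixed partial derivatives:

```python
import sympy as sp

x, y = sp.symbols('x y')
Phi = sp.Function('Phi')(x, y)   # arbitrary Airy stress function

# Stresses derived from the Airy stress function, as in Section 2
s11 = sp.diff(Phi, y, 2)
s22 = sp.diff(Phi, x, 2)
s12 = -sp.diff(Phi, x, y)

# Both equilibrium equations reduce to 0 for any Phi
print(sp.simplify(sp.diff(s11, x) + sp.diff(s12, y)))   # -> 0
print(sp.simplify(sp.diff(s12, x) + sp.diff(s22, y)))   # -> 0
```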
Defining a function $`\varphi (\xi ,\eta )\equiv \mathrm{\Phi }(x(\xi ,\eta ),y(\xi ,\eta ))`$ on each element $`\mathrm{\Omega }_e`$, $`\sigma _{22}`$ $`=`$ $`{\displaystyle \frac{\partial ^2\mathrm{\Phi }}{\partial x_1^2}}`$ $`=`$ $`{\displaystyle \frac{\partial }{\partial \xi }}\left({\displaystyle \frac{\partial \varphi }{\partial \xi }}{\displaystyle \frac{\partial \xi }{\partial x_1}}+{\displaystyle \frac{\partial \varphi }{\partial \eta }}{\displaystyle \frac{\partial \eta }{\partial x_1}}\right){\displaystyle \frac{\partial \xi }{\partial x_1}}+{\displaystyle \frac{\partial }{\partial \eta }}\left({\displaystyle \frac{\partial \varphi }{\partial \xi }}{\displaystyle \frac{\partial \xi }{\partial x_1}}+{\displaystyle \frac{\partial \varphi }{\partial \eta }}{\displaystyle \frac{\partial \eta }{\partial x_1}}\right){\displaystyle \frac{\partial \eta }{\partial x_1}}`$ $`=`$ $`\left({\displaystyle \frac{\partial ^2\varphi }{\partial \xi ^2}}{\displaystyle \frac{\partial \xi }{\partial x_1}}+{\displaystyle \frac{\partial ^2\varphi }{\partial \eta \partial \xi }}{\displaystyle \frac{\partial \eta }{\partial x_1}}\right){\displaystyle \frac{\partial \xi }{\partial x_1}}+\left({\displaystyle \frac{\partial ^2\varphi }{\partial \eta \partial \xi }}{\displaystyle \frac{\partial \xi }{\partial x_1}}+{\displaystyle \frac{\partial ^2\varphi }{\partial \eta ^2}}{\displaystyle \frac{\partial \eta }{\partial x_1}}\right){\displaystyle \frac{\partial \eta }{\partial x_1}}`$ #### Assumption The individual elements, $`\mathrm{\Omega }_e`$, are usually mapped to the master element, $`\widehat{\mathrm{\Omega }}`$, with $`\frac{\partial \xi }{\partial x_2}\approx \frac{\partial \eta }{\partial x_1}\approx 0`$ on average, $`\frac{\partial \xi }{\partial x_1}\approx a_1`$ and $`\frac{\partial \eta }{\partial x_2}\approx a_2`$, $`a_1`$ and $`a_2`$ some constants, on average. (Alternatively it can be argued that there will be no loss of generality or weakening of the argument if a rectangular mesh is considered. Not allowing this simplification leads to an extremely messy argument, a chapters-long exercise in differentiation.) This implies $`\sigma _{22}`$ $`=`$ $`a_1^2{\displaystyle \frac{\partial ^2\varphi }{\partial \xi ^2}}.`$ Similarly, $`\sigma _{11}`$ $`=`$ $`a_2^2{\displaystyle \frac{\partial ^2\varphi }{\partial \eta ^2}}`$ $`\sigma _{12}`$ $`=`$ $`\sigma _{21}=-a_1a_2{\displaystyle \frac{\partial ^2\varphi }{\partial \xi \partial \eta }}`$ ## 3 The Relationship Implicit in the Linear Approximation Since linear approximations of $`\sigma _{11}`$ are to be considered, $`{\displaystyle \frac{\partial ^2\varphi }{\partial \eta ^2}}`$ $`=`$ $`b_1+b_2\xi +b_3\eta `$ where $`b_1`$, $`b_2`$ and $`b_3`$ are the relevant combining constants. This means $`\varphi (\xi ,\eta )`$ $`=`$ $`{\displaystyle \int }{\displaystyle \int }\left(b_1+b_2\xi +b_3\eta \right)𝑑\eta 𝑑\eta `$ (3) $`=`$ $`c_1+c_3\eta +{\displaystyle \frac{1}{2}}b_1\eta ^2+{\displaystyle \frac{1}{2}}b_2\xi \eta ^2+{\displaystyle \frac{1}{6}}b_3\eta ^3+\eta f_1(\xi )+f_2(\xi )`$ in which the exact form of $`\eta f_1(\xi )+f_2(\xi )`$ remains to be determined. Similarly, approximating $`\sigma _{22}`$ as some multiple of $`b_4+b_5\xi +b_6\eta `$ implies this very same polynomial function $`\varphi (\xi ,\eta )`$ $`=`$ $`{\displaystyle \int }{\displaystyle \int }\left(b_4+b_5\xi +b_6\eta \right)𝑑\xi 𝑑\xi \text{(by Airy stress function)}`$ (4) $`=`$ $`c_1+c_2\xi +{\displaystyle \frac{1}{2}}b_4\xi ^2+{\displaystyle \frac{1}{6}}b_5\xi ^3+{\displaystyle \frac{1}{2}}b_6\xi ^2\eta +\xi g_1(\eta )+g_2(\eta ),`$ in which the exact form of $`g_2(\eta )`$ is determined by equation (3). This equation in turn specifies $`f_2(\xi )`$ in equation (3).
Approximating $`\sigma _{12}=\sigma _{21}`$ in its turn as $`b_7+b_8\xi +b_9\eta `$ implies the polynomial function $`\varphi (\xi ,\eta )`$ $`=`$ $`{\displaystyle \int }{\displaystyle \int }\left(b_7+b_8\xi +b_9\eta \right)𝑑\xi 𝑑\eta `$ (5) $`=`$ $`c_1+b_7\xi \eta +{\displaystyle \frac{1}{2}}b_8\xi ^2\eta +{\displaystyle \frac{1}{2}}b_9\xi \eta ^2+f_2(\xi )+g_2(\eta )`$ where $`f_2(\xi )`$ and $`g_2(\eta )`$ have already been determined by equations (4) and (3) respectively. This last expression for $`\varphi (\xi ,\eta )`$ also specifies the, until now undetermined, $`\eta f_1(\xi )`$ and $`\xi g_1(\eta )`$ in equations (3) and (4). In summary, collecting equations (3), (4) and (5) together leads to the specification of an implied, single parent approximating polynomial $`\varphi (\xi ,\eta )`$ $`=`$ $`c_1+c_2\xi +c_3\eta +c_4\xi ^2+c_5\xi \eta +c_6\eta ^2+c_7\xi ^3+c_8\xi ^2\eta +c_9\xi \eta ^2+c_{10}\eta ^3.`$ Having established both the existence and nature of the relationship between the constants in what were apparently separate linear approximations, $`{\displaystyle \frac{\partial ^2\varphi }{\partial \xi ^2}}`$ $`=`$ $`2c_4+6c_7\xi +2c_8\eta `$ $`{\displaystyle \frac{\partial ^2\varphi }{\partial \eta ^2}}`$ $`=`$ $`2c_6+2c_9\xi +6c_{10}\eta `$ $`{\displaystyle \frac{\partial ^2\varphi }{\partial \xi \partial \eta }}`$ $`=`$ $`c_5+2c_8\xi +2c_9\eta `$ can now be written, where the $`c_i`$’s ($`i=4,\mathrm{},10`$) are constants related to the finite element solution of the problem in question. ### Conclusion The Airy stress function therefore reveals how a linear approximation of the components of $`𝝈`$ on each element really amounts to $`\left[\begin{array}{c}\sigma _{11}\\ \sigma _{22}\\ \sigma _{12}\end{array}\right]=\left[\begin{array}{ccccccc}1& 0& 0& \eta & 0& \xi & 0\\ 0& 1& 0& 0& \xi & 0& \eta \\ 0& 0& 1& 0& 0& \eta & \xi \end{array}\right]\boldsymbol{\beta }`$ (15) instead of the superficially more obvious $`\left[\begin{array}{c}\sigma _{11}\\ \sigma _{22}\\ \sigma _{12}\end{array}\right]=\left[\begin{array}{ccccccccc}1& 0& 0& \xi & 0& 0& \eta & 0& 0\\ 0& 1& 0& 0& \xi & 0& 0& \eta & 0\\ 0& 0& 1& 0& 0& \xi & 0& 0& \eta \end{array}\right]\boldsymbol{\beta }^{\prime },`$ where $`\boldsymbol{\beta }`$ and $`\boldsymbol{\beta }^{\prime }`$ denote the corresponding vectors of stress parameters. ## 4 Eliminating the Last Two Columns The rank of the matrix in equation (15) indicates that there are still two extra columns. The equation in which $`𝝈`$ is used is a three-field problem, in which the strain, $`𝜸`$, only occurs once, in a term $`𝝈𝜸`$. Choosing $`𝝈`$ correctly would reduce the problem to a two-field problem since $`{\displaystyle \int 𝝈𝜸𝑑\mathrm{\Omega }}`$ $`=`$ $`0`$ is required in accordance with Reddy. In other words $`𝝈𝜸`$ $`=`$ $`\left(\left[\begin{array}{ccccccc}1& 0& 0& \eta & 0& \xi & 0\\ 0& 1& 0& 0& \xi & 0& \eta \\ 0& 0& 1& 0& 0& \eta & \xi \end{array}\right]\boldsymbol{\beta }\right)\left(\left[\begin{array}{cccc}\xi & 0& 0& 0\\ 0& \eta & 0& 0\\ 0& 0& \xi & \eta \end{array}\right]\boldsymbol{\alpha }\right)`$ must always integrate to zero, for arbitrary parameter vectors $`\boldsymbol{\beta }`$ and $`\boldsymbol{\alpha }`$. This is only certain if the sixth and seventh columns of the stress basis are omitted. ## 5 Conclusion An Airy stress function, and the consequent simplification resulting from the differentiation of an implied, single, parent approximating polynomial, are able to provide a logical explanation as to why the choice of $`\left[\begin{array}{ccccc}1& 0& 0& \eta & 0\\ 0& 1& 0& 0& \xi \\ 0& 0& 1& 0& 0\end{array}\right]`$ (the Pian–Sumihara basis) as a linear basis to approximate stress leads to greater efficiency in enhanced strain problems.
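A short symbolic check of this column elimination (a sketch in sympy, not part of the original paper; the master element is taken as $`[-1,1]^2`$, consistent with the integration limits of Section 3):

```python
import sympy as sp

xi, eta = sp.symbols('xi eta')

# Columns of the 7-parameter stress basis of equation (15)
sigma_cols = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (eta, 0, 0),
              (0, xi, 0), (xi, 0, eta), (0, eta, xi)]
# Columns of the enhanced strain basis of Section 4
gamma_cols = [(xi, 0, 0), (0, eta, 0), (0, 0, xi), (0, 0, eta)]

for i, s in enumerate(sigma_cols, start=1):
    vals = [sp.integrate(sum(a * b for a, b in zip(s, g)),
                         (xi, -1, 1), (eta, -1, 1)) for g in gamma_cols]
    print(i, vals)
# Columns 1-5 integrate to [0, 0, 0, 0] against every strain column,
# while columns 6 and 7 do not; hence the latter must be omitted.
```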
# Consistent canonical quantization of general relativity in the space of Vassiliev knot invariants ## Abstract We present a quantization of the Hamiltonian and diffeomorphism constraint of canonical quantum gravity in the spin network representation. The novelty consists in considering a space of wavefunctions based on the Vassiliev knot invariants. The constraints are finite, well defined, and reproduce at the level of quantum commutators the Poisson algebra of constraints of the classical theory. A similar construction can be carried out in $`2+1`$ dimensions, leading to the correct quantum theory. The Ashtekar new variables describe general relativity as a theory of a connection, having the same kinematical phase space as a Yang–Mills theory. The canonical conjugate pair is given by a set of (densitized) triads $`\stackrel{~}{E}_i^a`$ and an SU(2) connection $`A_a^i`$. This allowed the theory to be described in terms of holonomies, leading to the development of the loop representation and, later, the spin network representation. These representations encode in a natural way the diffeomorphism invariance of the theory through the notion of knot invariance. The dynamics of the theory, embodied in the Hamiltonian constraint, remained elusive. The quantization of this constraint led to the so-called Wheeler–DeWitt equation in the traditional formulation of general relativity. This is a non-polynomial equation and presents several challenges as a quantum field theory, since the usual techniques for regularizing operators introduce fiducial background metric structures that are incompatible with the general covariance of the theory. In terms of the Ashtekar new variables an important step forward was realized when Thiemann showed how to write the Hamiltonian constraint as a scalar on the manifold. This raised hopes that a natural realization in terms of spin networks could be achieved. Thiemann represented the action of this constraint on diffeomorphism invariant states. He showed that the constraint commuted with itself, as one expects in a diffeomorphism invariant context. Moreover, Thiemann’s formulation took place in the context of the real version of the Ashtekar variables introduced by Barbero, bypassing the controversial issue of the “reality conditions”. The Hamiltonian considered corresponded to the usual real, Lorentzian general relativity. In this paper we will present a realization of the Hamiltonian constraint in terms of a different space of wavefunctions, associated with the Vassiliev knot invariants. A distinctive feature of these wavefunctions is that they are “loop differentiable”. The loop derivative is the derivative that arises in the space of functions of loops when one considers the change in a wavefunction due to the addition of an infinitesimal loop. In the context of holonomies, this derivative encodes the information of the curvature tensor $`F_{ab}`$. There is a well known difficulty with computing this derivative in the context of knot invariants: due to the diffeomorphism symmetry there is no notion of an “infinitesimal” loop, and therefore one cannot compute the limit involved in the derivative in a direct way.
In the case of Vassiliev invariants one can assign a value to this limit by recalling the relationship between them and the expectation value of the Wilson loop in a Chern–Simons theory, $$E(s,\kappa )=\int DA\mathrm{exp}\left(\frac{1}{\kappa }\int \mathrm{Tr}(A\wedge dA+\frac{2}{3}A\wedge A\wedge A)\right)W_s[A]$$ (1) where $`s`$ is a spin network (a multivalent graph with holonomies in representations of SU(2) associated with each edge) and $`W_s[A]`$ is an SU(2) invariant obtained by interconnecting the holonomies along the edges with appropriate intertwiners constructed with invariant tensors in the group. It is a natural generalization to the spin network context of the “Wilson loop” (trace of the holonomy) one constructs with ordinary loops. The quantity $`E(s,\kappa )`$ is an infinite series in powers of $`\frac{1}{\kappa }`$, and is a (framing dependent) knot invariant. This invariant was first considered as connected with a Chern–Simons theory by Witten in the context of loops and, remarkably, also in the context of spin networks. In the context of loops this invariant is associated with the evaluation of the Kauffman polynomial at a particular value of its variable. The coefficients in the infinite series are all knot invariants, and one can isolate within these coefficients the elements of a basis of framing independent invariants called the Vassiliev invariants when restricted to ordinary loops. This construction can be extended to the spin network context, as we showed in two recent papers. We will refer to the resulting invariants as Vassiliev invariants (including the framing dependent ones), although it should be noticed that this is a generalization of the usual notion of Vassiliev invariant, which is customarily introduced for ordinary non-intersecting loops. One can evaluate the loop derivative on these invariants and one is left with a simple formula, $$\mathrm{\Delta }_{ab}(\pi _o^x)E(s,\kappa )=\kappa \underset{e_k}{\sum }(-1)^{2(J_j+J_k)}\mathrm{\Lambda }_{J_jJ_k}ϵ_{abc}\int _{e_k}𝑑y^c\delta ^3(x-y)E(s^{\prime },\kappa ),$$ (2) where $`s^{\prime }`$ is a new spin network obtained by interconnecting in a certain way the original spin network $`s`$ with the path $`\pi `$ on which the loop derivative $`\mathrm{\Delta }_{ab}`$ depends, and $`\mathrm{\Lambda }_{J_jJ_k}`$ is a group factor dependent on the valences $`J_j`$ and $`J_k`$ of the lines $`e_j`$ and $`e_k`$. The action of the derivative is distributional, as one would expect in a diffeomorphism invariant context. A similar action is obtained not just for the infinite series $`E`$ but also for each individual coefficient and its framing dependent and framing independent portions. In terms of the loop derivative we just discussed one can now obtain an action for the Hamiltonian constraint in the scalar version introduced by Thiemann. For simplicity, we will only discuss here the action on trivalent spin networks, and we will concentrate on the “Euclidean” portion of the constraint. Thiemann has shown that, given the action of this portion, one can construct the rest of the full Lorentzian Hamiltonian constraint. Classically, the constraint is written as $$H(N)=\frac{2}{G}\int d^3xN(x)\{A_a^i,V\}F_{bc}^i\stackrel{~}{ϵ}^{abc},$$ (3) where $`V`$ is the volume of the manifold and $`G`$ is Newton’s constant. At a quantum level, one introduces a triangulation adapted to the spin network of the state one is acting upon, replaces the Poisson bracket by a commutator, and represents the connection as an infinitesimal holonomy.
In the context of trivalent intersections only one term in the commutator is non-vanishing, and one gets for the Hamiltonian $$H(N)\psi \left(\mathrm{\cdots }\right)=\frac{8}{3G}\underset{ϵ\to 0}{lim}\int d^3y\underset{v\in s}{\sum }ϵ_{ijk}\int _{e_i}𝑑u^a\int _{e_j}𝑑w^b\chi (u,w,y;v)N(y)\rho (J_1,J_2,J_3)\mathrm{\Delta }_{ab}^{(k)}(\pi _v^y)\psi \left(\mathrm{\cdots }\right),$$ (4) where $`\rho `$ is a group factor dependent on the valences of the three incoming lines at the intersection (the arguments of $`\psi `$, shown in the original as spin network diagrams, are abbreviated here). The action of the Hamiltonian is only non-vanishing at intersections. The function $`\chi `$ is a regulator that restricts the integrals in $`u,w`$ to the tetrahedra surrounding the vertex $`v`$ and fixes the point $`y`$ to the vertex $`v`$; a concrete realization is $`\chi (y,z,w)=\mathrm{\Theta }_\mathrm{\Delta }(y,v)\mathrm{\Theta }_\mathrm{\Delta }(z,v)\mathrm{\Theta }_\mathrm{\Delta }(w,v)/𝒱ϵ^3`$, where the Theta functions are one if the first argument is within any of the eight tetrahedra surrounding the vertex $`v`$ and zero otherwise, and the volume of each tetrahedron is given by $`ϵ^3𝒱`$. This expression is quite similar to the original proposal for a (doubly densitized) Hamiltonian in the loop representation in terms of the loop derivative. If one particularizes this expression to the expectation value of the Wilson net, one gets a very compact expression, $$H(N)E(\mathrm{\cdots },\kappa )=\frac{\kappa }{3G}\underset{v\in s}{\sum }N(v)\nu _{(J_1J_2J_3)}E(\mathrm{\cdots },\kappa ),$$ (5) where $`\nu _{(J_1J_2J_3)}`$ is a group factor. From this expression one can derive the action of the Hamiltonian on a given Vassiliev invariant; it turns out to produce an invariant of one order less. It is quite remarkable that the action of the loop derivative in a space of diffeomorphism invariant functions yields a finite, well defined expression for the constraint. For intersections of valences higher than three the action of the Hamiltonian ceases to be just a prefactor, but it can still be written explicitly. One can also introduce a diffeomorphism constraint, $$C(\vec{N})\mathrm{\Psi }(s)=\underset{k}{\sum }\underset{ϵ\to 0}{lim}\int d^3x\int _{e_k}𝑑y^b\frac{(N^a(x)+N^a(y))}{2}f_ϵ(x,y)\mathrm{\Delta }_{ab}(\pi _y^x)\mathrm{\Psi }(s),$$ (6) where $`f_ϵ(x,y)`$ is a regularization of the Dirac delta. Acting on Vassiliev invariants, one can explicitly check via a detailed calculation that the constraint vanishes identically, as one would expect since the wavefunctions are diffeomorphism invariant. As we see from equation (4), the action of the Hamiltonian constraint on a Vassiliev invariant produces a prefactor that depends on the location of the vertices, times a group prefactor, times a Vassiliev invariant. The location of the vertices is determined by the intersections of the edges of the spin network. The latter are modified by the loop derivative, and as a consequence the loop derivative acts on functions of the position of the vertices. The loop derivative leaves the group factors unchanged. Therefore the action of the Hamiltonian produces as a result a function that is not diffeomorphism invariant but that is still loop differentiable, allowing one to compute the constraint algebra. We call these states generically $`\psi (s,M,\mathrm{\Omega })`$, where $`M`$ is the function of the vertex and $`\mathrm{\Omega }`$ the group factor. We can think of these states as the action of an operator $`\widehat{O}(M,\mathrm{\Omega })`$ on $`\psi (s)`$.
An explicit calculation shows that $$C(\vec{N})O(M,\mathrm{\Omega })\psi (s)=O(N^a\partial _aM,\mathrm{\Omega })\psi (s)+O(M,\mathrm{\Omega })C(\vec{N})\psi (s).$$ (7) That is, the diffeomorphism Lie-drags the prefactor and therefore acts geometrically. This ensures that the constraint algebra of diffeomorphisms is correctly implemented in this space. It also shows that the commutator of a diffeomorphism and a Hamiltonian is correct, that is, the Hamiltonian transforms covariantly. To study the consistency of the commutator of two Hamiltonians with the classical Poisson relation $`\{H(N),H(M)\}=C(q^{ab}V_a)`$, where $`V_a=M\partial _aN-N\partial _aM`$, one needs to promote to a quantum operator the right-hand side of the relation, which is proportional to the product of a diffeomorphism and the doubly-contravariant spatial metric. When one computes the right hand side, one finds that it vanishes identically on spin network states. This, in fact, can be tracked down to the vanishing of the doubly-contravariant metric, which quantum mechanically can be written as $$\widehat{q}^{ab}(z)\psi (s)=\underset{\delta \to 0}{lim}\underset{ϵ\to 0}{lim}\frac{8}{9G^2}\underset{v\in s}{\sum }\int _{e_r}𝑑y^a\int _{e_u}𝑑w^bϵ^{pqr}ϵ^{stu}\frac{\delta ^6}{ϵ^6}\mathrm{\Theta }_\mathrm{\Delta }(y,v)\mathrm{\Theta }_\mathrm{\Delta }(w,v)Q(e_p,e_q,e_s,e_t)\psi (s)$$ (8) where the operator $`Q`$ can be written in terms of the holonomies along edges incoming to the vertex and the volume operator, and is finite for any spin network. If one assumes that the regularizations $`\delta `$ and $`ϵ`$ are of the same order, the above expression is of order $`ϵ^2`$ (given by the two one-dimensional integrals of $`\mathrm{\Theta }`$ functions of size $`ϵ`$) and therefore vanishes. If one computes the doubly-covariant metric one finds that it diverges. In spite of the fact that the loop derivative acts on the prefactor generated by the action of the Hamiltonian, when one computes the successive action of two Hamiltonians a cancellation takes place: the left hand side of the commutator equation vanishes, and the algebra is therefore consistent. There is a regularization ambiguity in these expressions. A clear example of this is in the doubly-contravariant metric, where there are two limits and one could choose to carefully “tune” them in order to end up with a non-vanishing expression. The price to pay is that the non-vanishing expression depends on the background structures used in the regularization. This is not surprising. In the spin network representation we are in a manifold without a pre-determined metric. The only information we have is the locations of intersections and the orientations of the lines entering (not their tangent vectors). This is insufficient information to construct a symmetric tensor. Therefore the expression for the metric was bound to be either zero or background dependent. Similar considerations hold for the covariant metric. A posteriori, the result we find via a careful regularization is what one should have intuitively expected. We therefore have a non-trivial, well defined quantization of canonical general relativity with the space of states given by the Vassiliev invariants. The expressions of the constraints are relatively simple, well defined and finite. Moreover, one can compute the constraint algebra and it is consistent with the classical Poisson algebra.
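For reference, the classical Poisson algebra against which the quantum commutators are being checked is the standard hypersurface-deformation algebra of general relativity (a textbook result, written here in the notation above, up to sign conventions): $$\{C(\vec{N}),C(\vec{M})\}=C(\mathrm{}_{\vec{N}}\vec{M}),\{C(\vec{N}),H(M)\}=H(N^a\partial _aM),\{H(N),H(M)\}=C(q^{ab}V_a),$$ with $`V_a=M\partial _aN-N\partial _aM`$ and $`\mathrm{}_{\vec{N}}`$ the Lie derivative. Equation (7) realizes the first two relations; the last, metric-dependent relation is the one whose quantum version is analyzed next.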
Notice that the realization of the constraints is “off shell” in the sense that we do not need to work with diffeomorphism invariant states from the outset, and in fact this is sensible since the Hamiltonian constraint does not map within such a space of states. These points (the space of states chosen and the fact that we have an infinitesimal generator of diffeomorphisms) distinguish our construction from that of Thiemann, which operated on diffeomorphism invariant states. It has in common the fact that the Hamiltonian commutes with itself. Should one worry about a theory of quantum gravity where the metric appears to vanish? This will largely depend on how the semi-classical limit is set up for the theory. As we argued above, the doubly-contravariant metric could not be anything else but vanishing in the context of the spin network quantum theory. More meaningful physical operators (like the length, the area, and curvature invariants) are non-vanishing, and the volume operator would also be non-vanishing if one included intersections beyond the trivalent ones. A correct semi-classical limit could be built in terms of these and other operators, which are in no sense degenerate. Can one find solutions to the Hamiltonian constraint? We can already construct several. If one considers the framing independent Vassiliev invariants, one can check that they are annihilated by the Hamiltonian constraint (in the context of trivalent intersections). What is lacking, if one compares with the construction of Thiemann, is an inner product that would allow us to characterize these and other states as normalizable. Other, more non-trivial solutions (some of them with a cosmological constant) are likely to be present, as is hinted by the results involving Chern–Simons states in the loop representation (see also related results in terms of spin networks). Thiemann’s approach has also been studied in $`2+1`$ dimensions, and appears to lead to a satisfactory quantization, provided one chooses in an ad-hoc way an inner product that rules out a certain infinite dimensional set of solutions. In a forthcoming paper we will discuss the quantization of $`2+1`$ dimensional gravity using an approach that has elements in common with the one we pursue here, in particular the requirement of loop differentiability of the states. We will see that this requirement limits us (at least for low valence intersections) to the correct solution space in a natural way. Having a family of consistent theories provides a context for calculations that are of a more “kinematical” nature, like the calculations of the entropy of black holes. It also provides a basis for calculations of semi-classical behavior that are more dependent on the dynamics of the theory. It is expected that the theory could be coupled to matter following the ideas of Thiemann. Deciding if one of these consistent theories is a physically realistic quantum theory of gravity will have to wait until testable predictions that involve the dynamics in a more elaborate way are worked out. We wish to thank Abhay Ashtekar, Laurent Freidel, Don Marolf and Thomas Thiemann for comments and discussions. This work was supported in part by the National Science Foundation under grants PHY-9423950, INT-9811610, PHY-9407194, research funds of the Pennsylvania State University and the Eberly Family research fund at PSU. JP acknowledges support of the Alfred P. Sloan and John S. Guggenheim foundations. We acknowledge support of PEDECIBA.
# PHYSICS GOALS OF THE LINEAR COLLIDER ## 1 Introduction For more than twenty years, high-energy physicists have dreamed about using linear $`e^+e^{-}`$ colliders to extend the reach of $`e^+e^{-}`$ annihilation to the TeV energy scale. About ten years ago, with the first results from the precision electroweak experimental program at SLC, LEP, and the Tevatron, it became possible to envision a sharply focused physics program for linear collider experiments that would begin at center-of-mass energies of 400–500 GeV. The experimental results of the past few years—in particular, the dramatic confirmation of the theory of the electroweak interactions to part-per-mil precision—have made the experiments proposed for the linear collider seem even more urgent and central to the goals of high-energy physics. In this article, I will briefly review the most important physics objectives of the program planned for the next-generation $`e^+e^{-}`$ linear collider (LC). Recently, a number of detailed reviews have appeared which discuss the broad array of measurements that can be performed at the LC. My goal here is to highlight those measurements that, in my opinion, form the key justifications for the LC program. Why do we expect to find new physics at the LC? The most important experimental discovery of the past decade has been the success of the Glashow-Weinberg-Salam theory of unified weak and electromagnetic interactions. This model is based on the idea that the weak and electromagnetic interactions are mediated by vector bosons associated with a symmetry group $`SU(2)\times U(1)`$, which is spontaneously broken to $`U(1)`$, the gauge symmetry of Maxwell’s equations. The characteristic prediction of gauge theory is that coupling constants should be universal, and, indeed, experiments at the $`Z^0`$ have shown that the weak and electromagnetic couplings of all species of quarks and leptons are given by two universal couplings $`g`$ and $`g^{\prime }`$ (or $`e`$ and $`\mathrm{sin}^2\theta _w`$). At the 1% level of accuracy, there are deviations from this prediction, but these are accounted for by the radiative corrections of the electroweak theory when one uses the observed mass of the top quark. This success brings into relief the fact that the foundation of the electroweak theory is shrouded in mystery. We have no direct experimental information on what agent causes the spontaneous breaking of $`SU(2)\times U(1)`$ symmetry, and even the indirect indications are fairly meager. In the minimal model, this symmetry breaking is due to a single Higgs boson, but the true story is probably more complex. On the other hand, the information must be close at hand. In the electroweak theory, the formula for the $`W`$ boson mass is $`m_W=gv/2`$, and from the known value of $`g`$ we can find the mass scale of electroweak symmetry breaking: $`v=246`$ GeV. Simple arguments from unitarity tell us that the Higgs boson or some other particle from the symmetry breaking sector must appear at energies below 1.3 TeV. But, further, models in which the Higgs boson is very heavy give electroweak radiative corrections which are inconsistent with the precision experiments. The analysis of radiative corrections requires either that the Higgs boson lie at a mass below 250 GeV, or that other new particles with masses at about 100 GeV be present to cancel the effects of a heavy Higgs boson. Unless Nature is very subtle, the first signs of the electroweak symmetry breaking sector will be found before the LC begins operation.
There is a significant window for the discovery of the Higgs boson at LEP 2 or at the Tevatron. In almost every scenario, the Higgs boson or other signals of new physics will appear at the LHC. Our problem, though, is not just to obtain some clues but to solve the mystery. For this, the unique precision and clarity of information from the LC will play a crucial role. Because the role of the LC will most likely be to clarify the nature of new physics discovered elsewhere, that role depends on what new particles are observed. In particular, it depends on the actual mechanism of electroweak symmetry breaking (EWSB). To justify the LC project at our current state of knowledge, one must be prepared to argue that, in any model of EWSB, the LC brings important new information that cannot be obtained from the LHC. Systematic analysis shows that this is the case. On the other hand, this line of reasoning puts a spotlight on specific precision measurements and requires that the LC experiments be capable of performing them. I will point out a number of these crucial experiments in this review. My survey of the LC program will proceed as follows: First, I will introduce the capabilities of $`e^+e^{-}`$ annihilation experiments by discussing the search for contact interactions in $`e^+e^{-}\to f\overline{f}`$. Next, I will review experiments relevant to strong-coupling models of EWSB. Finally, I will review experiments relevant to weak-coupling models of EWSB. ## 2 Contact Interactions Before I discuss detailed models of EWSB, I would like to call attention to the ability of the LC to make precise tests of the structure of the electroweak interactions at very short distances. This study brings in a number of unique features that the LC can also use to study more complex reactions involving new particles. Here we see these features used in their simplest context, the study of $`e^+e^{-}\to f\overline{f}`$. The study of $`e^+e^{-}`$ annihilation to fermion pairs begins from the observation that the Standard Model cross section formulae are simple and depend only on electroweak quantum numbers. For example, $$\frac{d\sigma }{d\mathrm{cos}\theta }(e_L^{-}e_R^+\to f_L\overline{f}_R)=\frac{\pi \alpha ^2}{2s}N_C\left|Q_f+\frac{(\frac{1}{2}-\mathrm{sin}^2\theta _w)(I_f^3-Q_f\mathrm{sin}^2\theta _w)}{\mathrm{cos}^2\theta _w\mathrm{sin}^2\theta _w}\frac{s}{s-m_Z^2}\right|^2(1+\mathrm{cos}\theta )^2.$$ (1) In this formula, $`N_C=1`$ for leptons and 3 times the QCD enhancement for quarks, $`I_f^3`$ is the weak isospin of $`f_L`$, and $`Q_f`$ is the electric charge. The angular distribution is characteristic of annihilation to spin-$`\frac{1}{2}`$ fermion pairs. For $`f_L`$ production, the $`Z^0`$ contribution typically interferes with the photon constructively for an $`e_L^{-}`$ beam and destructively for an $`e_R^{-}`$ beam. Thus, initial-state polarization is a useful diagnostic. For annihilation to the $`\tau `$ and the top quark, the final state polarization can also be measured. The simplicity of formulae such as (1) allows one to determine unambiguously the spin and Standard Model quantum numbers of any new state that is pair-produced in $`e^+e^{-}`$ annihilation. Applied to the familiar particles, they provide a diagnostic of the electroweak exchanges that might reveal new heavy weak bosons or other types of new interactions. These tests can be applied independently to the couplings to $`e`$, $`\mu `$, polarized $`\tau `$, $`c`$, $`b`$, and light quarks.
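To make the structure of (1) concrete, here is a small numerical sketch (the input values of $`\alpha `$, $`\mathrm{sin}^2\theta _w`$, and $`m_Z`$ are representative rather than a precision fit, and the unit conversion is the standard GeV-to-picobarn factor):

```python
import numpy as np

# Differential cross section of eq. (1) for e-_L e+_R -> f_L fbar_R.
alpha, sin2w, mZ = 1 / 128.0, 0.232, 91.19   # representative inputs, mZ in GeV
cos2w = 1.0 - sin2w
GEV2_TO_PB = 3.894e8                         # conversion from GeV^-2 to pb

def dsigma_dcos(sqrt_s, cos_theta, Qf, I3f, Nc):
    s = sqrt_s**2
    z_prop = s / (s - mZ**2)                 # Z propagator factor
    amp = Qf + (0.5 - sin2w) * (I3f - Qf * sin2w) / (cos2w * sin2w) * z_prop
    return np.pi * alpha**2 / (2 * s) * Nc * abs(amp)**2 * (1 + cos_theta)**2

# mu pairs (Qf = -1, I3 = -1/2, Nc = 1) at sqrt(s) = 500 GeV, cos(theta) = 0.5:
print(GEV2_TO_PB * dsigma_dcos(500.0, 0.5, -1.0, -0.5, 1), "pb per unit cos(theta)")
```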
Figure 1 illustrates how the available set of observables can be used to study the couplings of a new $`Z^0`$ in four different models. A 1 TeV linear collider would be sensitive, through these precision measurements, to a new $`Z^0`$ up to masses of about 4 TeV. A new $`Z^0`$ boson would also appear at the LHC, up to a similar reach in mass, as a resonance in $`e`$ or $`\mu `$ pair production. However, little can be learned about its couplings if its mass is above about 1 TeV. For such a boson, the LC will fill in the picture of its couplings to quarks and leptons. Measurements of simple annihilation processes can also be used to test for new interactions that would signal quark and lepton compositeness; a 50 fb⁻¹ event sample at 500 GeV would be sensitive to a compositeness scale $`\mathrm{\Lambda }`$ of 30 TeV. More exotic effects are also possible. Recently proposed models with large extra dimensions predict contact interactions due to graviton exchange. These precision measurements can not only reveal the presence of these interactions, but also their spin-2 character. ## 3 Strong-Coupling Route to EWSB In the remainder of this article, I will focus on topics relevant to the question of electroweak symmetry breaking (EWSB). As I have explained above, the origin of electroweak symmetry breaking must lie in the TeV energy region. In principle, EWSB could either be generated by a weak-coupling theory with an elementary Higgs boson or by a strong-coupling theory, with the symmetry-breaking possibly due to a composite operator. Many models have been proposed that illustrate the two viewpoints. The models of the two classes have quite different phenomenological implications. I will first consider models of EWSB with strong-coupling dynamics. In such models, the signals of the EWSB mechanism are most clear in the properties of the heaviest Standard Model particles, the $`W`$ and $`Z`$ bosons and the top quark. The LC can illuminate this mechanism through its ability to study the couplings of these particles in detail. Often, the model of EWSB will also contain new particles that decay to weak bosons and third-generation fermions. The LC would allow these particles to be studied by the same techniques. ### 3.1 $`W`$ boson Consider first the $`W`$ boson. The process $`e^+e^{-}\to W^+W^{-}`$ is the most important single process contributing to $`e^+e^{-}`$ annihilation at high energy. This process also has numerous features that make it especially amenable to detailed study. From the viewpoint of EWSB, the $`W`$ is interesting because it receives mass through the Higgs mechanism. The massless $`W`$ has only two degrees of freedom, corresponding to transverse polarizations. The massive $`W`$ has a third degree of freedom, which corresponds to the longitudinal polarization state. This state must be stolen from the symmetry-breaking sector. In fact, it is a theorem in quantum field theory that, in the limit of high energy, the amplitude for producing a longitudinally polarized $`W`$ is given precisely by the amplitude for producing the charged Goldstone boson associated with $`SU(2)\times U(1)`$ symmetry-breaking. Effects of new physics on the cross section for $`e^+e^{-}\to W^+W^{-}`$ are traditionally expressed in terms of effective 3-vector-boson couplings $`g_{1Z}`$, $`\kappa _{\gamma ,Z}`$, $`\lambda _{\gamma ,Z}`$. These in turn are given in terms of coefficients $`L_i`$ that appear in the effective Lagrangian describing the Goldstone bosons.
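For concreteness, these couplings are conventionally defined (up to choices of sign and normalization, which vary in the literature; the form below follows the standard parametrization of Hagiwara, Peskin, Hikasa, and Zeppenfeld) through the effective triple-gauge-boson Lagrangian $$\frac{\mathrm{}_{WWV}}{g_{WWV}}=ig_1^V\left(W_{\mu \nu }^{\dagger }W^\mu V^\nu -W_\mu ^{\dagger }V_\nu W^{\mu \nu }\right)+i\kappa _VW_\mu ^{\dagger }W_\nu V^{\mu \nu }+\frac{i\lambda _V}{m_W^2}W_{\lambda \mu }^{\dagger }W_\nu ^\mu V^{\nu \lambda },$$ with $`V=\gamma ,Z`$. The Standard Model corresponds to $`g_1^V=\kappa _V=1`$ and $`\lambda _V=0`$, so the deviations $`\kappa _V-1`$ and $`\lambda _V`$ directly measure new physics.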
The parameter deviations predicted are rather small; for example, new strong interactions similar to QCD at TeV energies would give a deviation $`(\kappa _\gamma -1)\sim 3\times 10^{-3}`$. This should be compared with upper limits of several percent which have been obtained from LEP 2. To do better, the LC can take advantage of several features. First, the effect of the Goldstone boson couplings is naturally enhanced by a factor $`s/m_W^2`$. Second, going to higher energy separates the $`W^+`$ and $`W^{-}`$ into opposite hemispheres and makes the kinematics more well-defined. In Figure 2, I show the results of a simulation study of events at a 500 GeV LC in which one $`W`$ decays hadronically and the other leptonically. The full detail of the reaction, including both production and decay angles, can be reconstructed. In particular, the $`W`$ bosons at central values of the decay angle $`\mathrm{cos}\theta `$ are those with longitudinal polarization. By fitting the full multi-variable distribution, it is possible to obtain limits on the $`\kappa `$ and $`\lambda `$ parameters at the $`10^{-3}`$ level at 500 GeV, and even more stringent limits at higher energy. ### 3.2 $`WW`$ scattering The principle that gives us access to the production amplitudes for states from the symmetry-breaking sector also allows us to study the interactions of these particles. In the reactions $`e^+e^{-}\to \nu \overline{\nu }VV`$, where $`VV`$ is $`W^+W^{-}`$ or $`Z^0Z^0`$, the most important subprocess is that in which the incoming electron and positron radiate a $`W^{-}`$ and $`W^+`$, which then collide and scatter. One can show that a substantial fraction of the radiated $`W`$’s are longitudinally polarized. The scattering amplitudes for these bosons come directly from the symmetry-breaking interactions. Experiments on these scattering processes are difficult both at the LC and at the LHC. At the LHC, one can radiate $`W`$’s from quark lines, detect the final vector bosons using their leptonic decays, and apply a forward jet tag or other topological cuts to enhance the signal over background. At the LC, one can study vector bosons using their hadronic decay modes, imposing a cut on the total transverse momentum of the $`VV`$ system to remove background from two-photon processes. It is important to be able to separate $`W`$ and $`Z`$ on the basis of the 2-jet mass. Table 1, taken from ref. 13, compares the capabilities of the LHC and the LC for 100 fb⁻¹ event samples and an assumed LC energy of 1.5 TeV. (A larger LC luminosity sample would allow the study to be done at somewhat lower energies.) A notable advantage of the LC is its extraordinary sensitivity to vector resonances, which show up as $`s`$-channel resonances in $`e^+e^{-}\to W^+W^{-}`$. The LC also has a unique advantage in its ability to study the reaction $`W^+W^{-}\to t\overline{t}`$, a reaction that directly probes the coupling of the top quark to the symmetry-breaking sector. ### 3.3 Top quark Finally, the LC can access a strongly-coupled symmetry breaking sector through precision studies of the heaviest Standard Model particle, the top quark. The pair production reaction $`e^+e^{-}\to t\overline{t}`$ may be studied either at threshold or at higher energy. The Standard Model prediction for $`e^+e^{-}\to t\overline{t}`$, like the prediction for $`e^+e^{-}\to W^+W^{-}`$, has a rich structure. The production cross section depends strongly on both the electron and the $`t`$ quark polarization.
For example, the subprocess $`e_L^{-}e_R^+\to t\overline{t}`$ is dominated by forward production of $`t_L`$. The top polarization is visible because the short $`t`$ lifetime guarantees that a produced $`t`$ will not be depolarized by soft hadronic interactions, and because the dominant decay $`t\to bW^+`$ and the subsequent $`W^+`$ decay have distributions sensitive to polarization. To take advantage of the final-state polarization observables, it is necessary to be able to reconstruct $`t\overline{t}`$ events efficiently in the 6-jet mode produced by hadronic $`W`$ decays on both sides. In a theory with strong-coupling electroweak symmetry breaking, the top coupling to the strong sector shows up in its coupling to gauge bosons. Already in the Standard Model, 70% of the $`W^+`$’s from top decay are longitudinally polarized, reflecting the dominance of the top Yukawa coupling over the $`SU(2)`$ gauge coupling in top decays. This fraction may be enhanced in strong-coupling models. In technicolor models, the $`Z^0`$ coupling to third-generation quarks is predicted to be shifted by diagrams involving extended technicolor boson exchange. This effect is not seen in the $`Z^0b\overline{b}`$ coupling. However, it is natural that effects which cancel in that coupling add constructively in the coupling to top, giving rise to shifts of up to 10% in the $`Z^0t\overline{t}`$ coupling that would be revealed by measurements of the polarization asymmetry for top production. On the other hand, if there is a light Higgs boson, it should be possible to observe the process $`e^+e^{-}\to t\overline{t}h^0`$ and thus measure the $`t\overline{t}h`$ coupling directly. It is also interesting to obtain as accurate as possible a value for the top quark mass, both because of the important role of virtual top quarks in phenomenology and because of its intrinsic interest for the problem of flavor. At an LC, the top quark mass can be computed from the position of the $`t\overline{t}`$ threshold. The energy region that, for lighter quarks, holds the bound quarkonium states is smeared out by the large top quark width. The resulting smeared shape can be computed accurately in QCD. The position of the threshold can be located to about 200 MeV with relatively small data samples (10 fb⁻¹), given an accurate value of $`\alpha _s`$. The results of a simulation study are shown in Figure 3. The threshold position can be related to the short-distance parameter $`m_t^{\overline{\mathrm{MS}}}(m_t)`$ with a similarly small error. ## 4 Weak-Coupling Route to EWSB The alternative class of models of electroweak symmetry breaking are those in which $`SU(2)\times U(1)`$ is broken by the vacuum expectation value of a weakly-coupled Higgs scalar field. In these models, there is a light Higgs boson, and possibly also a spectrum of heavier Higgs states. Since the precision electroweak data favor a low Higgs boson mass and also exclude large modifications of the $`Zb\overline{b}`$ coupling, it is this alternative which currently has the most experimental support. A Higgs boson in this mass range should be discovered before the LC experiments, at LEP 2 or the Tevatron and certainly at the LHC. However, it will be the LC that tests whether this particle indeed generates the quark, lepton, and gauge boson masses. The simplest weak-coupling models do not explain why $`SU(2)\times U(1)`$ is broken. Rather, the symmetry-breaking is the result of a negative (mass)² parameter for the Higgs field that is inserted into the Lagrangian by hand.
The only way to avoid this unsatisfactory situation without requiring strong coupling is to introduce a symmetry that links the Higgs field to some field of higher spin. This eventually requires that the theory of electroweak symmetry breaking be supersymmetric. Conversely, a supersymmetric generalization of the Standard Model easily generates a symmetry-breaking potential for the Higgs field as the result of radiative corrections due to the heavy top quark. Thus, the assumption that EWSB has a weak-coupling origin leads naturally to supersymmetry. Both aspects of the weak-coupling models have interesting implications for the LC. The light and heavy states of the Higgs boson spectrum can be studied in detail in $`e^+e^{-}`$ annihilation. The LC also offers many incisive tools for the precision study of the spectrum of supersymmetric particles.

### 4.1 Higgs boson

One of the key aspects of the LC experimental program is the study of a light Higgs boson. Any Higgs boson with a mass below 350 GeV can be studied at a 500 GeV LC through the reaction $`e^+e^{-}\to Z^0h^0`$. Though the Higgs boson is not produced at rest as a resonance, the experimental setting is extremely clean. The $`h^0`$ appears as a peak at a definite recoil energy, and our precise knowledge of the $`Z^0`$ mass and branching ratios can be used to establish the signal in a variety of $`h^0`$ decay modes. The crucial question for a light Higgs boson is: does it couple to all species in proportion to their masses? To test this, one may check the relative Higgs branching ratios predicted by the Minimal Standard Model. The relative rates for $`b`$, $`c`$, and $`\tau `$ pairs (72%:3%:7% for $`m_h=120`$ GeV) correspond to an identical scale for the Higgs couplings to down quarks, up quarks, and leptons. In multi-Higgs models, the lightest Higgs will typically couple preferentially either to up- or to down-type fermions. The coupling to $`WW`$ and the total $`Zh`$ production rate, which is proportional to the $`hZZ`$ coupling, test the extent to which the $`W`$ and $`Z`$ masses are due to the $`h^0`$. The branching ratios to $`gg`$ and $`\gamma \gamma `$ measure sum rules over the colored and uncolored massive spectrum. In Figure 4, I show a recent estimate of the accuracies that can be achieved in a variety of Higgs decay modes. The measurement of the $`\gamma \gamma `$ branching ratio or partial width from $`Zh`$ production requires very large luminosity samples. Alternatively, this measurement is straightforward at a $`\gamma \gamma `$ collider and provides a strong physics motivation for developing that technology. There is no reason why a weakly-coupled Higgs sector should not contain several scalar fields whose vacuum expectation values contribute to the $`Z`$ and $`W`$ masses. Experiments at the LC can discover the complete set of these bosons and prove that they are fully responsible for the vector boson masses. To be specific, let the vacuum expectation value of the $`i`$th Higgs $`h_i^0`$ be $`f_iv`$, where $`v=246`$ GeV. Then the $`h_i^0`$ is produced in recoil against the $`Z^0`$ with a cross section equal to a factor $`f_i^2`$ times the cross section for a Minimal Standard Model Higgs of that mass. We have found the full set of scalars when the observed bosons saturate the sum rule

$$\underset{i}{\sum }f_i^2=1.$$ (2)

The ability of the LC to recognize the Higgs boson as a peak in the $`Z^0`$ recoil energy spectrum, independently of the Higgs decay mode, is crucial for this study.
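As a schematic illustration of how the sum rule would be applied, the following sketch computes each $`f_i^2`$ as the ratio of an observed $`Zh_i^0`$ cross section to the Minimal Standard Model prediction at the same mass. The cross section values here are invented placeholders, not measurements or predictions quoted from this article:

```python
# Hedged sketch: testing sum_i f_i^2 = 1 from Z h recoil cross sections.
# sigma_obs are hypothetical measured e+e- -> Z h_i^0 cross sections;
# sigma_msm are Minimal Standard Model predictions for the same masses.
sigma_obs = [30.0, 12.0, 4.0]    # fb, invented for illustration
sigma_msm = [60.0, 40.0, 20.0]   # fb, invented for illustration

f_squared = [o / p for o, p in zip(sigma_obs, sigma_msm)]
total = sum(f_squared)           # = 1.0 here: the set of scalars is complete
print(f_squared, total)
```

If the observed bosons left the sum well below unity, one would conclude that part of the vector boson masses comes from scalars not yet found.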
Models with additional Higgs fields also contain additional heavy spin-0 states. Supersymmetric models, for example, typically contain heavy Higgs states that are pair-produced via $`e^+e^{-}\to H^0A^0`$, $`e^+e^{-}\to H^+H^{-}`$. The couplings of these states to fermion pairs are not universal among species but rather depend strongly on the underlying parameters of the Higgs sector. Thus, the branching ratios can be used systematically to determine these parameters, such as $`\mathrm{tan}\beta `$, which are needed as input in other aspects of the theory.

### 4.2 Supersymmetry

I have explained above that supersymmetry is naturally connected to the idea of weak-coupling electroweak symmetry breaking. Many theorists (I am one) would claim that any plausible model with a light Higgs boson must contain supersymmetry at the TeV scale. If supersymmetry is responsible for electroweak symmetry breaking, supersymmetric particles should be discovered at LEP 2, the Tevatron, or the LHC before the LC experiments begin. Very clever methods have been devised to make precise mass measurements of supersymmetric particles at the LHC. Nevertheless, there are intrinsic difficulties in studying supersymmetry at hadron colliders. It is not possible to determine the initial parton energies or, because of unobserved final particles, to reconstruct the complete final state. All possible supersymmetric particles are produced at once in the same event sample, so that individual particles must be separated on the basis of branching to characteristic decay modes. The LC brings new tools that can clarify the nature of these new particles. First of all, since cross sections in $`e^+e^{-}`$ annihilation depend in a model-independent way on the spins and $`SU(2)\times U(1)`$ quantum numbers of the produced particles, the LC can verify that new particles have the correct quantum numbers to be supersymmetric partners of Standard Model states. By adjustment of the center-of-mass energy and polarization, one can select specific states preferentially. An example is given in Figure 5, where the masses of the distinct supersymmetric partners of $`e_L^{-}`$ and $`e_R^{-}`$ are determined by the positions of kinematic endpoints observed in $`e^+e^{-}\to \stackrel{~}{e}^+\stackrel{~}{e}^{-}`$ with a polarized $`e^{-}`$ beam. A detailed analysis in which this strategy is used to make a precise spectrum measurement is presented in ref. 33. In systems where superpartners naturally mix—for example, the $`\stackrel{~}{t}_L,\stackrel{~}{t}_R`$ and $`\stackrel{~}{w}^+,\stackrel{~}{h}^+`$ combinations—the dependence on beam polarization can be used to measure the mixing angles. For the $`\stackrel{~}{\tau }`$ and other states that decay to $`\tau `$, the kinematic constraints allow final-state $`\tau `$ polarization to be used as a powerful probe as well. These probes are needed because supersymmetry models are typically complex, with not only a doubling of the particle spectrum but also a number of new phenomena. As one example, I have already noted that, in models of supersymmetry, EWSB may arise as a byproduct of the renormalization of the scalar mass spectrum. We need to be able to measure the underlying parameters responsible for this effect to see whether this in fact is the explanation for EWSB. In the simplest models, the masses derived from supersymmetry breaking are independent of flavor, but this is not necessary and must be tested directly.
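Returning to the kinematic-endpoint measurement of Figure 5: the flat electron energy spectrum from a two-body slepton decay has endpoints determined by the slepton and neutralino masses, so measuring both endpoints determines both masses. The sketch below (ours, using textbook two-body kinematics; the mass values are invented) evaluates those endpoints in the forward direction:

```python
import math

# Illustration of the endpoint method: in e+e- -> se+ se-, each slepton of
# mass m_se decays se -> e chi^0.  The lab-frame electron energy spectrum is
# flat between E- and E+, with
#   E(+/-) = (sqrt(s)/4) * (1 - m_chi^2/m_se^2) * (1 +/- beta),
# where beta is the slepton velocity.  Masses below are invented examples.
def endpoints(sqrt_s, m_se, m_chi):
    beta = math.sqrt(1.0 - 4.0 * m_se**2 / sqrt_s**2)   # slepton velocity
    base = (sqrt_s / 4.0) * (1.0 - (m_chi / m_se) ** 2)
    return base * (1.0 - beta), base * (1.0 + beta)

print(endpoints(500.0, 150.0, 100.0))   # GeV: (13.9, 125.0) for these inputs
```

Inverting the two measured endpoint energies then yields the two unknown masses, which is the strategy used in the spectrum analyses cited above.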
In Table 2, I have made a more complete list of issues that must be probed experimentally before we can claim that we understand the supersymmetric generalization of the Standard Model. Underlying all of these issues is the question of the origin of supersymmetry breaking. This phenomenon, which supplies most of the new parameters of a supersymmetric model, would probably arise from energy scales far above 1 TeV. The understanding of the new parameters of supersymmetry could then potentially give us a window into physics at extremely short distances.

### 4.3 Extra dimensions

Many people express the opinion that supersymmetry, with its large number of postulated new particles, is too daring a generalization to be the true theory of TeV-scale physics. My own opinion is that it is not daring enough. Supersymmetric models require all of their complex components to explain the details in Nature which are missing from the Standard Model. But these components are not unified by a common underlying idea. Contrast this with the theory now understood for the GeV scale. There, experiment revealed a complex array of new states and couplings, but these all turned out to arise from the underlying simplicity of the Yang-Mills gauge interaction. Recently, there has been much discussion of a grander idea for the nature of TeV-scale physics. For many years, string theory has suggested that space-time has more than four dimensions. It is possible that the scale of these dimensions, or even the scale of quantum gravity, is as low as TeV energies. In this picture, high-energy experiments would reveal not only supersymmetry but also the higher-dimensional spectrum with extended supersymmetry that characterizes string theory at short distances. As one might expect, theories with new space dimensions suggest new phenomena that could be discovered at high energy. A low quantum gravity scale would allow gravitational radiation to be seen in high-energy collisions, both as missing-energy processes and as spin-2 contact interactions in fermion-fermion scattering. A TeV scale for new dimensions would imply recurrences of the Standard Model gauge bosons, which would appear as dramatic $`s`$-channel resonances. Some of these phenomena could be observed at the LHC, but many of the new effects would require the probes with beam polarization and precision measurement which are the domain of the LC.

## 5 Conclusions

The success of the Standard Model in accounting for the detailed properties of the strong, weak, and electromagnetic interactions leads us to focus attention on the physics of electroweak symmetry breaking. At this time we do not know what new physics is responsible for this symmetry breaking. But, in any scenario, physicists would look to the LC for tools essential to understanding the new phenomena. These include the ability to predict background cross sections precisely, to interpret signal cross sections unambiguously, to detect $`b`$, $`c`$, and $`\tau `$ with high efficiency, and to analyze the effects of polarization both in the initial state and in decays. The capabilities of the LC will allow us to characterize these new interactions in detail, and to uncover their origin.

## Acknowledgments

This article attempts to summarize a huge body of work and many insights that have come out of the international study of linear collider physics.
I would like to give special thanks to Charlie Baltay, Sachio Komamiya, and David Miller for their harmonious organization of the current phase of this study, and to Enrique Fernandez and the local organizers of the Sitges meeting for providing such a pleasant setting for this meeting. This work was supported by the US Department of Energy under contract DE–AC03–76SF00515. ## References
# Unequal Intra-layer Coupling in a Bilayer Driven Lattice Gas

## I. INTRODUCTION

Equilibrium statistical mechanics has served us well in the understanding of collective behaviour in many-body systems in, or near, thermal equilibrium. However, nature abounds with examples of systems that are far from equilibrium, and their behaviour cannot be predicted by the theory. Linear response theory, a form of perturbation theory, works well only for systems slightly off equilibrium but not for those far from equilibrium. The way to tackle such new systems is to study simple models that have well-understood equilibrium properties. Much work has followed from the early attempt by Katz, Lebowitz and Spohn to drive the Ising lattice gas model into non-equilibrium steady states via the introduction of an ‘external electric field’. This driven lattice gas (DLG) model became the prototype for the study of Driven Diffusive Systems (DDS). The time-independent final state of the DLG model has a probability distribution which is not given by the usual Boltzmann factor but depends on the dynamics controlling the evolution. The KLS or standard model for a DDS is composed of an ordinary lattice gas in contact with a thermal bath, having particles hopping to their nearest-neighbour unoccupied sites. This is controlled by a rate specified by both the energetics of inter-particle interactions and an external, uniform driving field. Achahbar and Marro studied a variant of the standard model: stacking two fully periodic standard models on top of each other, without interactions across the layers. This system is coupled to a heat bath at temperature $`T`$ using spin-exchange (Kawasaki) dynamics with the usual Metropolis rate. In Kawasaki dynamics, pairs of sites (both intra- and inter-layer) are considered for exchange in order to have a global conservation of particles. Thus we have a diffusive system without sources or sinks. Half-filled systems are studied in order to access the critical point. The two decoupled Ising systems give two phase transitions as the temperature is decreased from a large value. First, the disordered (D) phase at high $`T`$ transforms into a state with strips in both layers (S phase). This is much like two aligned, single-layer driven systems. Upon further lowering of $`T`$, a first-order transition occurs which results in an ordered state, resembling the equilibrium Ising system. It consists of a homogeneously filled layer and an empty layer (FE phase). Hill, Zia and Schmittmann unveiled the mystery of the presence of the two phase transitions. They made a natural extension to Achahbar and Marro’s model: the addition of a coupling across the layers. This coupling, $`J_z`$, can be either attractive or repulsive. This led to novel discoveries. In the new phase diagram in $`T`$–$`J_z`$ space at a fixed $`E`$, one can observe the intrusion of the S phase into the region of the FE phase. Please refer to their paper for the figure. It was shown that the ‘usual’ FE to D transition is interrupted by the presence of the S phase. The two phase transitions reported by Achahbar et al. are located along the $`J_z=0`$ line. Note that the strength of the ‘electric field’ $`E`$ used is large but finite, to drive the system far out of equilibrium. In our paper, we investigate such systems further with yet another simple modification. We attempt to observe the effects of having an unequal coupling in the $`x`$- and $`y`$-directions within each of the top and bottom layers.
In particular, we wish to map out the phase diagram in the $`T`$–$`J_z`$–$`J_y`$ space. Taking $`E`$ to be in the $`x`$-direction, we have particle-particle interactions in the transverse direction, $`J_y`$, being larger than or equal to those along the field, $`J_x`$. The latter case should recover Hill et al.’s results. Besides extending the phase diagram in a new ‘dimension’, we also attempt to determine the universality class of the system for $`J_z<0`$, i.e. for FE to D second-order transitions. It was stated in that preliminary results seem to suggest that the D-S transition belongs to the class of the single-layer driven lattice gas. It is our objective here to test the hypothesis that the D-FE transition belongs to the Ising universality class, to which many systems belong.

## II. DEFINITION OF THE MODEL AND TOOLS EMPLOYED

Following Hill et al., our system consists of two fully periodic $`L\times L`$ square lattices, arranged in a bilayer structure. We label the sites by $`(j_1,j_2,j_3)`$ with $`j_1,j_2=0,\mathrm{},L-1`$ and $`j_3=0,1`$. Each site may be either occupied or empty, such that we can specify a configuration of the system by a set of occupation numbers $`\{n(j_1,j_2,j_3)\}`$, where $`n`$ is 0 or 1. In spin language, we have spin $`s=2n-1=\pm 1`$. For half-filled systems, $`\sum n=L^2`$ or $`\sum s=0`$, i.e. zero net magnetization. The Hamiltonian is given by,

$$H=-J_1\underset{x\mathrm{-dir}}{\sum }nn^{\prime }-J_2\underset{y\mathrm{-dir}}{\sum }nn^{\prime }-J_3\underset{z\mathrm{-dir}}{\sum }nn^{\prime \prime },$$ (1)

where $`n`$ and $`n^{\prime }`$ are the occupancies for nearest neighbours within a given layer while $`n`$, $`n^{\prime \prime }`$ are for those across layers. Summations in the x- and y-directions include both top and bottom layers. Hereafter, $`J_{1,2,3}`$ will be used in place of $`J_{x,y,z}`$. Note that with $`J_3=0`$, we have two decoupled Ising systems. This has been confirmed by computing the equivalent Ising model heat capacity from the system and comparing with exact results, where good agreement is observed. We restrict $`J_1`$ and $`J_2`$ to positive values, with $`J_3/J_1=\beta `$ in the range $`[-10,10]`$. For $`J_2/J_1=\alpha `$, we let it take on the values 1, 2, 5 and 10. We set $`J_1`$ to unity, and with $`\alpha =1`$ we should be able to reproduce the results obtained by Hill et al. The temperature $`T`$ is given in units of the single layer Onsager temperature, being $`0.5673J_1/k_B`$ in particle language. Finally, the external driving field $`E`$ is given in units of $`J_1`$ as well; it affects the Metropolis rate via a subtraction of $`E`$ from $`\mathrm{\Delta }H`$ for hops along the field and vice-versa. A value of 25$`J_1`$ is used throughout the study. Lattice sizes investigated are of dimensions $`L`$ = 32, 64 and 128. Typical Monte Carlo steps (MCS) per site taken are 500,000 for the phase diagram determination and $`10^6`$ for the universality class investigation. Runs are performed at fixed $`J`$’s, $`E`$ and $`T`$, starting from a random initial configuration generated by a 64-bit Linear Congruential random number generator. Discarding the first $`5\times 10^4`$ MCS, measurements are taken every 200 MCS. We thus believe that after this number of steps, the system has settled into a steady state. However, if a significant change in character is seen in the configuration, as in any approach to the true steady state from a local minimum (in energy), the time average is taken only after the changeover point. To determine the critical temperatures, many systems are started from identical initial states but with different temperature settings.
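To make the dynamics concrete, the following is a minimal sketch (our own illustration with assumed parameter values, not the production code used for the runs reported here) of a single Kawasaki pair-exchange attempt under the Metropolis rate, including the field bias on intra-layer hops along $`x`$:

```python
import numpy as np

# Minimal sketch of one Kawasaki pair-exchange attempt for the bilayer
# driven lattice gas of Eq. (1), in units J_1 = k_B = 1.  Parameter values
# below are illustrative only.
L, J1, J2, J3, E, T = 32, 1.0, 2.0, -1.0, 25.0, 1.5
rng = np.random.default_rng(0)
# random half-filled bilayer: n[k, i, j] = 0 or 1, k = layer, i = x, j = y
n = rng.permutation(np.repeat([0, 1], L * L)).reshape(2, L, L)

def bonds(k, i, j):
    """Nearest neighbours of site (k, i, j) with their couplings (periodic)."""
    return [((k, (i + 1) % L, j), J1), ((k, (i - 1) % L, j), J1),
            ((k, i, (j + 1) % L), J2), ((k, i, (j - 1) % L), J2),
            ((1 - k, i, j), J3)]                     # inter-layer bond

def attempt_exchange():
    a = (int(rng.integers(2)), int(rng.integers(L)), int(rng.integers(L)))
    nbrs = bonds(*a)
    b, _ = nbrs[rng.integers(len(nbrs))]             # random nearest neighbour
    if n[a] == n[b]:
        return                                       # identical pair: no move
    # Energy change from bonds touching a or b; the shared a-b bond is
    # invariant under the exchange and is excluded.
    dH = sum(-J * (n[b] - n[a]) * n[c] for c, J in bonds(*a) if c != b) \
       + sum(-J * (n[a] - n[b]) * n[c] for c, J in bonds(*b) if c != a)
    if a[0] == b[0] and a[2] == b[2]:                # intra-layer hop along x
        xstep = 1 if (b[1] - a[1]) % L == 1 else -1
        dH -= E * xstep * (n[a] - n[b])              # -E along drive, +E against
    if dH <= 0 or rng.random() < np.exp(-dH / T):
        n[a], n[b] = n[b], n[a]                      # accept: swap occupancies
```

One MCS per site would then correspond to $`2L^2`$ such attempts.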
From such runs, a susceptibility plot is then constructed, from which the $`T`$ value giving the maximum susceptibility ($`T_{peak}`$) is obtained via a quadratic least-squares fit. This is repeated for each $`L`$ and the estimate for $`T_c`$ obtained via the usual finite-size scaling hypothesis,

$$T_{peak}(L)-T_c\propto L^{-1/\nu }.$$ (2)

The critical exponent $`\nu `$ is chosen to be 1.0, as for the Ising model. In fact, for an undriven system with $`J_1=J_2`$ and $`J_3=0`$, the $`T_c`$ obtained via this method is 0.9886, using $`L`$ = 4, 8, 16 and 32. This is in good agreement with the expected value of 1.0. However, for a driven system, it has yet to be shown explicitly that $`\nu `$ is still 1.0, which is the other objective of this paper. For the D-S transitions, it was suggested in and that the critical exponent $`\nu `$ is 0.7. Nonetheless, due to the enormous demand on computer time, $`T_{peak}`$ is taken as a rough estimate for $`T_c`$ in the determination of the phase diagrams. Thus for D-FE, the $`T_{peak}`$ values serve as upper bounds on the true critical temperatures. Hence the value of $`\nu `$ does not affect the shapes of the phase diagrams significantly. The susceptibility is defined as,

$$\chi (l_1,l_2,l_3)=\frac{L^d}{k_BT}\left[\langle |\stackrel{~}{n}(l_1,l_2,l_3)|^2\rangle -\langle |\stackrel{~}{n}(l_1,l_2,l_3)|\rangle ^2\right],k_B=1,$$ (3)

where $`d=2`$ for our 2-D system and $`|\stackrel{~}{n}|`$ is taken to be the relevant order parameter. We define $`\{l_1,l_2,l_3\}`$ as taking the same range as $`\{j_1,j_2,j_3\}`$ introduced earlier. The Fourier Transform of the occupancy $`n(j_1,j_2,j_3)`$ is given by,

$$\stackrel{~}{n}(l_1,l_2,l_3)=\frac{1}{2L^2}\underset{j_1,j_2,j_3}{\sum }n(j_1,j_2,j_3)e^{-2\pi i[(j_1l_1+j_2l_2)/L+j_3l_3/2]}.$$ (4)

Thus, in order for the Fast Fourier Transform to be applicable, only system sizes $`L=2^k`$ are used, with $`k`$ being any positive integer. The time average $`\langle |\stackrel{~}{n}(l_1,l_2,l_3)|^2\rangle `$ is called the structure factor. A change across the layers is reflected in the third index, $`l_3`$, in $`\stackrel{~}{n}(0,0,1)`$. For a perfect FE phase, $`|\stackrel{~}{n}(0,0,1)|^2=0.5^2=0.25`$ is the only non-trivial positive entry in the power spectrum, besides the trivial $`|\stackrel{~}{n}(0,0,0)|^2=0.25`$ due to the half-filled nature of the lattice. Thus the quantity $`S(0,0,1)`$ computed is the structure factor for the FE phase, where the time average operations are redundant for the pure phases. Other entries in the power spectrum such as $`|\stackrel{~}{n}(0,1,0)|^2`$ can be used to characterise other phases. In fact, Hill et al. used this entry’s time average S(0,1,0) to represent the S phase, but we found that any odd $`l_2`$ index suffices. We thus speculate that any given configuration of the bilayer DLG can be viewed as consisting of a superposition of many ‘pure tones’, such as the FE configuration. Thus, through a Fourier Transform, we can pick out the ‘frequencies’ present by monitoring a few entries in the power spectrum which represent various possible steady states from energy arguments. Upon taking time averages, the corresponding structure factors can be computed. For D-FE transitions, S(0,0,1) is monitored together with S(0,1,1), which represents the ‘local minimum’ solution. This is a ‘staggered’ form of the FE phase, with an occupied band on one layer matched by an empty one on the other, which we termed the AFS (Anti-Ferromagnetic Strip) phase. It is like a hybrid between the FE and S phases and occurs at low temperatures for systems with repulsive interlayer coupling.
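A compact sketch (ours; the array layout and function names are assumptions) of how Eqs. (3)–(4) can be evaluated with a fast Fourier transform:

```python
import numpy as np

# Sketch of the order-parameter measurement of Eqs. (3)-(4).  A configuration
# is an array n[k, i, j] of 0s and 1s with k = layer and (i, j) = (x, y).
def n_tilde(n):
    """Normalised Fourier amplitudes n~(l1, l2, l3); l3 runs over the layers."""
    L = n.shape[1]
    # np.fft.fftn applies exp(-2 pi i j l / N) on each axis, matching Eq. (4)
    return np.fft.fftn(n.transpose(1, 2, 0)) / (2 * L**2)

def susceptibility(configs, l, T):
    """chi(l) of Eq. (3) with k_B = 1, from a sequence of configurations
    sampled every 200 MCS of one run."""
    L = configs[0].shape[1]
    amps = np.array([np.abs(n_tilde(c)[l]) for c in configs])
    return (L**2 / T) * (np.mean(amps**2) - np.mean(amps)**2)

# e.g. chi(0,0,1) tracks the FE order parameter and chi(0,1,1) the AFS phase.
```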
A pictorial view of the FE and AFS phases is given in Fig. 1 below. The transition to D from a pure FE phase (dominant at moderate temperatures) is marked by a drop of S(0,0,1) from its maximum of 0.25 to near zero. The location of $`T_c`$ is where the slope of the drop is largest, or where $`\chi (0,0,1)`$ peaks. Due to finite-size effects, the peaks of the susceptibility function do not diverge to infinity but are “rounded”, and the peak location is shifted in temperature. These two features are observed in our simulation data.

## III. NEW PHASE DIAGRAMS

The phase diagram for a driven system with the same parameters as used by Hill et al. can be reproduced to an acceptable degree by our implementation, which is of paramount importance to our work here. We shall present our findings as a set of four new phase diagrams, including the one similar to that obtained by Hill’s group. The diagrams are actually slices of the full 3D phase diagram in $`T`$–$`\beta `$–$`\alpha `$ space. Note that $`J_3`$ will be used interchangeably with $`\beta `$ for clearer physical meaning. See Figure 2 for the phase diagrams, which were all plotted on the same scale for better comparison. A few qualitative features can be discerned from the phase diagrams. The first of these is the growth of the ‘triangular’ region, a term coined in Hill’s paper for the intrusion region of the S phase into that of the FE phase, as $`\alpha `$ is increased. This observation proves beyond doubt that the small ‘triangular’ region seen by Hill is not an artifact. Without an external drive, no bias exists between the FE and the S phases. However, application of a drive in the x-direction (vertical) seems to favour the S phase, with its linear interface aligned with the drive, as compared to the isotropic FE phase. This is speculated to be analogous to magnetic domain growth in a ferromagnetic material under the action of an external magnetic field. The S phase, which is not expected to be stable when repulsive interactions exist between the layers, could become stable due to the drive. The driving field could somehow compensate for the gain in configuration energy as a result of particle stacking under repulsive interactions. The survival of the S phase in the negative $`\beta `$ $`(=J_3/J_1)`$ region is enhanced as the coupling transverse to the drive ($`J_2`$) increases. The phase region occupied by the S phase thus grows at the expense of the FE phase! Another feature worth noting is the shifting of the tri-critical point towards more negative $`\beta `$ values as well as towards higher temperatures. Thus the S phase becomes more stable at moderate $`\beta `$ values as $`\alpha `$ is increased, despite its instability from energy arguments. We judge whether the transition is second or first order by looking at the plots of structure factors against temperature $`T`$. A second order transition has continuous derivatives at every point, an example of which is shown in Fig. 3 for the D-FE transition. A first order transition, like the S-FE, will show a discontinuity, as the right plot in Fig. 3 illustrates. Table 1 below presents some representative $`T_c`$ values from the phase diagrams. One can plot the difference between the $`T_c`$ values for the 2nd (D-S) \[column 4 of table: 0.0(2)\] and 1st (D-FE) \[column 3 of table\] order transitions along the $`\beta `$ = 0 line against $`\alpha `$ and observe that a least-squares straight line can be fitted through them.
However, due to a lack of finite-size scaling knowledge for the D-S transition, we could not get a better estimate for $`T_c`$ at the 2nd order transition point and thus could not conclude whether the error bars could tolerate a linear fit. Nonetheless, a linear fit might be possible, though no theory has yet been developed to investigate this. We also plotted $`T_c`$ at $`\beta =-10`$, 0 and 10 against the $`\alpha `$ values. The plot for $`\beta =-10`$ (D-FE) seems to exhibit a logarithmic relationship. As for the non-negative $`\beta `$ values, which are for D-S transitions, the relationship seems linear except at large $`\alpha `$ for $`\beta =10`$ and small $`\alpha `$ for $`\beta =0`$.

## IV. INTERPRETATIONS AND DISCUSSIONS

The fact that the FE phase survives under a driving field should not be taken for granted. For large $`J_2`$ couplings, we would expect staggered, horizontal anti-ferromagnetic bands to form in the undriven bilayer DLG from energy arguments. The form looks like the AFS configuration but rotated by 90 degrees. Under a driving field directed perpendicular to these bands, it appears that even a large coupling of 10 could not stand up to the effect of an even larger driving field (strength 20). It has yet to be seen if the reverse situation can favour the rotated AFS phase. We would like to suggest some explanations for the observations from the phase diagrams. Firstly, the increased intrusion of the S phase as $`J_2`$ increases can be understood as follows. In a ‘thought model’, the S phase can be thought of as consisting of strings of particles, of one particle width, aligned with the external and large driving field. These are bound together through the coupling $`J_2`$, in the direction transverse to the field $`E\widehat{x}`$. As $`T`$ increases, the arrangement will be disturbed until, at a sufficiently large $`T`$, disorder reigns. However, if we increase $`J_2`$, the increased binding could compensate for the disorientating effect of large $`T`$. This effectively makes the critical temperature between the S and D phases higher. However, this is not to say that the increased $`J_2`$ does not help to increase $`T_c`$ for the D-FE transitions as well. In fact, on careful observation of the phase diagrams, it does! The increase of $`T_c`$ was much smaller in the FE case. The effect of $`J_2`$ helps neighbouring particles to bind together in the $`y`$-direction. This helps the configuration to hold together despite the larger temperatures applied and is true for both the S and FE phases. One possible reason for the much lower thermal tolerance in the FE case might be that each of the $`L^2`$ particles in one layer has equal probability to leave the pure FE phase. On the other hand, for the S phase, only particles at the edges (top and bottom layers) aligned with the field can migrate transverse to the field and leave the pure S phase. This implies that only $`2L`$ particles have a chance of migration. Thus it is easier to destroy an FE phase than an S phase once they are formed. In actual simulations starting from random configurations, this implies that it is easier to form the S phase. This might provide the key to the stark difference in the amount of benefit acquired from an increased $`J_2`$ for the two pure phases. The argument also holds for configurations of a ‘near FE’ or ‘near S’ nature, below the critical temperature.
Further, each movement of a particle out of the filled band of an S phase violates the occupied-occupied single-site configuration across the layers, which is typical of the S phase. However, the ‘exchange’ of a particle with a hole on the opposite layer in an FE phase does not violate the empty-occupied configuration typical of an FE phase! Note that this argument is only for a single site. The configuration within layers is violated in both cases. Hence, in a way, it is easier to ‘destroy’ an FE phase. Conversely, starting from an initial random configuration, it is harder to form the FE phase, as particles not only have to couple together, they must all reside on one of the layers. This can only happen at low enough $`T`$. Thus we may argue that the FE phase is the dominant phase only at large enough repulsive interlayer couplings under the drive.

## V. LONG-LIVED TRANSIENTS

When investigating the transition of the FE to the S phase (first order, due to a discontinuity in the structure factor versus $`T`$ plot), several transient phases are observed. They appear to be the ‘local minimum’ solutions of an ‘optimisation problem’ in which the S phase is the best ‘solution’, i.e. the configuration of lowest ‘free energy’ satisfying the parameters of the system. The new phases observed are composed of two up to four or five vertical bands, compared to the S phase, which has only one band. These are dominant at the comparatively low $`T`$ of the FE-S transition, whereas we can find the S phase again at moderate $`T`$. In fact, these multi-banded structures have been reported in an Anisotropic Lattice Gas Automata proposed by Marro et al. only recently. In their case, they have a single lattice gas system evolving not under the Metropolis rate but under automata rules. The $`n`$-banded S phases are seen to give way to the 2-banded phase as $`T`$ increases. For certain runs at moderate $`T`$, the latter is even seen to “evolve” into the single-banded phase during a long enough simulation run ($`>3\times 10^5`$ MCS). This observation lends further evidence that the $`n`$-banded phases are the “local minima”, from which we could reach the “global minimum” with an increase in $`T`$ or a longer run (implying greater chances given to the system). See Figure 4. Here, we can also speculate that the cause of the emergence of $`n`$-banded S phases is the larger coupling $`J_2`$. From , the $`n`$-banded S phases were obtained with a setting of 0.9 for a parameter $`b`$ in their model, with $`b\in [0,1]`$. If $`b>1/2`$, it implies that there exists a tendency for particles to approach each other in the direction transverse to the driving field. For $`b<1/2`$, it represents a tendency for particles to separate from each other. Thus we can see that $`b=0.9`$ has a similar effect to a large $`J_2`$ in our case! This realization implies that the $`n`$- to single-banded S phase transition is a real phenomenon in DDS, as it can be produced by different models. The transients were not reported in Hill’s work, probably because the ratio $`\alpha `$ is 1. Only when the coupling in the direction transverse to the drive increases far beyond one can these transients be observed. These are made more stable by the larger $`J_2`$. In a way, the increase of $`J_2`$ has the effect of “stretching out” the system dynamics, making otherwise short or nonexistent transient phenomena emerge. Besides making transients longer, a larger $`J_2`$ also lengthens the transition to disorder.
This is related to a larger $`T_c`$ for D-S transitions. If we plot the structure factors for $`J_2`$ = 1 and 10, the same shape is observed for both plots, but the temperature range is about 10 times larger for $`J_2`$ = 10. Please refer to Figure 5 for the ‘shark’s fin’ plots. This ‘glassy’ behaviour is speculated to be also due to the larger $`J_2/J_1`$ ratio. Observe the first order transition at the low temperature end and the second order transition at higher temperatures. The first order transition is due to an FE-S transition for $`J_2`$ = 1, whereas it is an $`n`$-banded S to 1-banded S transition for the $`J_2`$ = 10 case. Finally, some words about obtaining the FE-S first order transition line. For $`J_2>J_1`$, the FE phase is seldom observed inside the ‘triangular’ region. Instead, either the AFS phase or a sort of mixed phase having both AFS and $`n`$-banded S characteristics is observed. This led to the $`n`$-banded S phase at higher $`T`$. Thus we are seeing another transient configuration. Their appearance effectively blurs out the first order transition line, and so a heuristic approach has to be taken. We simply take the smallest $`T`$ which gives an $`n`$-banded S phase as an estimate of $`T_c`$.

## VI. CRITICAL EXPONENT DETERMINATION

Critical exponents, unlike the critical temperatures, which depend very much on the details of the model system, only depend on a few specifications of the system. For models with short-range interactions, as in our case, these are simply the dimensionality of space and the symmetry of the order parameter. All models with the same exponents belong to the same class, of which the Ising universality class, labelled by its simplest member, is the most common. In the paper by Hill, on which the present work is based, it was mentioned that work was in progress to identify the universal properties of the D-FE transition in our model. Though no concrete results were published, we worked under the hypothesis that it is Ising, due to the wide applicability of the class, unless proven otherwise. The strategy we adopted was to either prove or disprove the Ising class hypothesis. The current status of knowledge in the field is that the KLS model belongs to the DLG class. If we remove the drive, it reduces to an Ising model due to the equivalence between spin and lattice gas systems. For a bilayered structure with two KLS systems stacked on top of each other but uncoupled, the model exhibits two phase transitions, of which D-S is DLG and D-FE is Ising. Finally, removing the drive from the above system, we should expect two Ising systems. The effect of adding coupling to a driven system is currently being studied. We tried to determine the universality class for the D-FE transition under a finite but large drive. Working under the hypothesis that the system is still Ising, we computed the quantity $`\gamma /\nu `$ to see if the Ising value of $`\frac{7}{4}`$ can be obtained. This is done by assuming the finite-size scaling relation,

$$\chi _{max}(L)\propto L^{\gamma /\nu }.$$ (5)

Hence, by getting good estimates of the susceptibility peak values for various system sizes, we can obtain an estimate for the ratio $`\gamma /\nu `$. Before we proceed, we would like to say something about the critical exponents. The exponent $`\gamma `$ controls the divergence of the susceptibility function near the critical point, as in the power law,

$$\chi \propto \left|T-T_c\right|^{-\gamma }.$$ (6)

The value for the 2-D Ising model is 7/4.
As for the exponent $`\nu `$, it is called the correlation length exponent and takes on the value 1 for the Ising model. It controls how the correlation length diverges near criticality. Let us outline the tactic we used. For a given $`J_2`$ setting, we attempt to obtain estimates of $`\gamma /\nu `$ as well as the individual exponents $`\gamma `$ and $`\nu `$ for representative $`J_3`$ values, namely $`-1`$, $`-5`$ and $`-10`$. To do this, we require more detailed susceptibility plots, especially for the region near the peak, where systems with $`T`$ values differing only in the 3rd decimal place are investigated. Data points close to the peak are fitted with a least-squares quadratic polynomial, and the maximum value as well as its location determined. These are the $`\chi _{max}(L)`$ and $`T_{peak}(L)`$ we desire. By repeating the procedure for system sizes $`L`$ = 32, 64 and 128, we could plot $`T_{peak}`$ vs $`L^{-1/\nu }`$ with a guess for $`\nu `$ to obtain $`T_c`$. Naturally, $`\nu `$ = 1.0 is chosen to test our hypothesis. By plotting $`\mathrm{log}\chi _{max}`$ vs $`\mathrm{log}L`$, the gradient of the least-squares fit straight line gives the ratio $`\gamma /\nu `$. This value is then used in the $`\mathrm{log}(\chi L^{-\gamma /\nu })`$ versus $`\mathrm{log}(|T-T_c|L^{1/\nu })`$ plot with $`\nu `$ set to 1.0. (This plot shall also be called the “scaling plot” for short.) With this we can check to see if the derived quantities give good “data collapse”, which is expected if the scaling relations are satisfied. From the plot, the slopes of the two best-fit straight lines are expected to give us the exponent $`\gamma `$. In other words, if the simulation data fit the finite-size scaling theory well, we should obtain two branches which are well-fitted by straight lines with the same slope, characterising the same power law behaviour of the $`\chi `$ values as the critical point is approached. Before we present our data and make any conclusions with regard to the universality class of our DDS, we would like to present the computed heat capacity values from our model and compare them with the exact Ising results. It is clear that under no drive and without any interlayer interactions, we have essentially two separate 2-D Ising systems. Hence, by setting $`J_3=0`$ and $`J_2=J_1`$, simulation runs are performed for system sizes $`L`$ = 4, 8 and 16. Starting with the definition of the heat capacity for the 2-D Ising model, we derived the equation, given below, that relates the particle Hamiltonian for our model to that of the 2-D Ising spin system,

$$c_\sigma =\frac{L^2}{k_BT^2}\left[\langle e_\sigma ^2\rangle -\langle e_\sigma \rangle ^2\right],$$ (7)

with $`e_\sigma =4(H/2L^2)+2J_1`$ and $`T`$ given in the spin language, i.e. $`T_c=2.26`$. As a reminder, $`H`$ is the total energy of the lattice gas system. Table 2 gives the numerical results as compared to Ising. The length of the runs taken is $`10^7`$ MCS, which is achievable for such small systems. The first $`5\times 10^4`$ MCS are skipped to avoid transient phenomena. Note that the results are for the equivalent spin system in order to compare with the Ising model, and that double precision floating point arithmetic is used, with 5 separate runtime averages from random initial conditions used for computing the average heat capacity. As the Table shows, we have good agreement with the exact Ising values. This provides evidence that our model is implemented correctly.
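For completeness, Eq. (7) translates directly into code; this small sketch (ours, with assumed variable names) evaluates the spin-language heat capacity from a time series of lattice-gas energies:

```python
import numpy as np

# Sketch of the equilibrium consistency check of Eq. (7): the spin-language
# specific heat from a time series of total lattice-gas energies H.
def heat_capacity(H_samples, L, T_spin, J1=1.0):
    """c_sigma = L^2/(k_B T^2) [<e^2> - <e>^2], with e = 4(H/2L^2) + 2 J1,
    k_B = 1, and T_spin the temperature in spin language (T_c = 2.26)."""
    e = 4.0 * np.asarray(H_samples) / (2 * L**2) + 2.0 * J1
    return (L**2 / T_spin**2) * (np.mean(e**2) - np.mean(e)**2)
```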
As a first application of the method outlined above to estimate the ratio $`\gamma /\nu `$, we investigated the universality properties of a decoupled, undriven and isotropic lattice gas, essentially expecting to see Ising behaviour. With simulation runs of $`5\times 10^5`$ MCS, the $`T_{peak}(L)`$’s for $`L`$ = 4, 8, 16 and 32 are estimated to be 1.389, 1.165, 1.070 and 1.036 respectively. Theoretical arguments give $`T_c`$ as 1.0. Hence we see that finite-size effects are indeed at work, shifting the $`T_{peak}`$ values further from the true $`T_c`$ as $`L`$ decreases. With these, plots of $`T_{peak}`$ vs $`L^{-1/\nu }`$ as well as of $`\mathrm{log}\chi _{max}`$ vs $`\mathrm{log}L`$ are made. It was found that if we do not include the $`L=4`$ data, the value of $`T_c`$ obtained assuming $`\nu `$ = 1.0 is 0.9885, compared to 0.9729 if we do. This is evidence that the $`L`$ = 4 system is too small. The estimates of the peak heights are 0.2347, 0.8213, 2.9207 and 9.3172 respectively. Only the last three values are used in the latter plot, from which the gradient gives an estimate of 1.7520 for the ratio $`\gamma /\nu `$. This value has a relative error of only 0.11$`\%`$ as compared to the Ising value of 1.75! Assuming the exponent $`\gamma `$ to be 1.75, we obtained an experimental $`\nu `$ value of 0.9989 (very close to 1.0), with $`T_c`$ then obtained as 0.9886. Hence, both cases give $`T_c`$ very close to the expected value of 1.0. From the scaling plot, Figure 6, the value of $`\gamma `$ is estimated from the slope of the least-squares line fitting the linear portion of the upper data points. This turns out to be 1.7273 for the cases of both $`[\nu =1.0,T_c=0.9885]`$ and $`[0.9989,0.9886]`$. As the percentage error of this value from the assumed value of 1.75 is only $`1.30\%`$, we conclude that the undriven, decoupled bilayer system is indeed Ising in nature. This is the expected result, as we have, in fact, two independent Ising systems. With this much groundwork done, we can proceed to the new findings. Due to time and resource constraints, only $`J_2=1`$ and a portion of the $`J_2=2`$ FE-D phase space are explored to determine the universality class. As a rough guide, the CPU time spent on this portion of the paper was about 1800 hours (an underestimate) for a Digital Alpha processor running at 600 MHz. Typical running times: 1 hour for $`L=32`$, 5 hours for $`L=64`$ and 24 hours for $`L=128`$, all with a run length of $`1\times 10^6`$ MCS. These resource-hungry tasks were completed thanks to a cluster of 30 Compaq Personal Workstations at the Department of Computational Science, NUS. Running under the Condor batch submission system developed at the University of Wisconsin, USA, which enables the simultaneous running of up to 10 jobs, all runs were started from the same initial (random) half-filled configuration but at different temperatures. Our results seem to indicate a deviation from Ising when the system is placed under the large but finite drive. First of all, we would like to give a figure depicting the problems we faced in the determination of the peaks of the susceptibility plots. See Figure 8 for the plots of the peak as well as a zoomed-in portion where the $`\chi _{max}`$ and $`T_{peak}`$ values are estimated through a quadratic fit. As shown in Figure 8(b), data points about the peak are somewhat jagged. In theory, the susceptibility values do not grow infinitely large, due to the finite size of the model system.
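As an aside, the finite-size scaling fits of Eqs. (2) and (5) can be reproduced from the undriven-case numbers quoted above in a few lines; the following sketch (ours, using numpy's least-squares fits) recovers the values 1.7520 and 0.9885 stated in this section:

```python
import numpy as np

# Finite-size scaling fits for the undriven test case, using the peak data
# quoted above (L = 8, 16, 32; the L = 4 data are excluded as too small).
Ls      = np.array([8.0, 16.0, 32.0])
chi_max = np.array([0.8213, 2.9207, 9.3172])
T_peak  = np.array([1.165, 1.070, 1.036])

# Eq. (5): the slope of log(chi_max) vs log(L) gives gamma/nu (~1.752 here)
gamma_over_nu = np.polyfit(np.log(Ls), np.log(chi_max), 1)[0]

# Eq. (2): T_peak(L) = T_c + a L^(-1/nu); with nu = 1.0 the intercept is
# the estimate T_c ~ 0.9885
nu = 1.0
a, T_c = np.polyfit(Ls ** (-1.0 / nu), T_peak, 1)
print(gamma_over_nu, T_c)
```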
The peaks should be “rounded” at the top due to the finite system size, over the range of temperatures for which the correlation length $`\xi `$ is close to $`L`$. In practice, data points are scattered about some fitting quadratic polynomial. This observation could be due to critical slowing down of the dynamics near criticality, owing to the divergence of $`\xi `$. Hence, we need an estimate of how well the polynomial fits the data values, thus giving us an estimate of the error associated with the maximum $`\chi `$ value obtained via the fit. We attempt to associate an error with the estimate of $`\chi _{max}`$ through the following heuristic approach. From the set of data points about the observed peak of the function, a linear interpolation is made to obtain more points. The difference between these pseudo data points and those from the parabolic fit to the chosen interval is denoted by $`ϵ`$ ($`=y_{data}-y_{fit}`$). Due to plotting limitations, artificial data points are introduced through a linear interpolation, which should preserve the original nature of the data; thus $`ϵ`$ can only be close to zero in the best cases. We next compute the variance of the set of $`ϵ`$ values as $`var(ϵ)=\langle ϵ^2\rangle `$ and take the standard deviation, $`\sigma (ϵ)=\sqrt{\frac{var(ϵ)}{n-1}}`$, as an estimate of the error in $`\chi _{max}`$. This gives us a gauge of the spread of the “errors” when the data points are fitted by a least-squares degree 2 polynomial. However, this estimate does not tell us how far our estimate is from the true $`\chi _{max}`$ for the set of parameters, as effects like critical slowing down may be present to alter the observed peak height. In Figure 9, we plot the log-log plots of the $`\chi _{max}`$ data versus the system sizes $`L`$ investigated. The error bars plotted represent twice the propagated errors in $`\mathrm{log}(\chi _{max})`$, which are the errors in $`\chi _{max}`$ divided by $`\chi _{max}`$. It is observed that in general the errors associated with the largest system size of $`L=128`$ are larger, but not large enough to cause a significant variation in the slopes. Table 3 lists the estimates for the ratio $`\gamma /\nu `$ based on taking the ratio of $`\mathrm{log}(\chi _2/\chi _1)`$ over $`\mathrm{log}(L_2/L_1)`$. Here $`\chi _1`$ is the short form of $`\chi _{max1}`$ for system size $`L_1`$. Listed are the values for different ratios $`L_2/L_1`$ as well as the propagated errors in $`\gamma /\nu `$, which are $`\delta (\gamma /\nu )=\frac{1}{\mathrm{log}(L_2/L_1)}[\sigma _2/\chi _2+\sigma _1/\chi _1]`$, where the natural logarithm is taken. From the Table, it is clear that none of the computed intervals for $`\gamma /\nu `$ includes the value 1.75. An important observation is that for ratios computed using the $`L=128`$ data, a value greater than 2.0 can be obtained! This does not fit into our scheme of things so far, which places the limit that $`\gamma /\nu `$ is less than 2. If we take the upper bound of the ratio $`\gamma /\nu `$ to be 2.0, it would mean that the data points for $`L=128`$ may be inaccurate. As the errors computed could not explain the discrepancy, it was suspected that critical slowing down is quite severe at such a large system size and that the 1 million MCS taken were not sufficient for the system to reach the true steady state. If this is indeed the case, then the data for $`L`$ = 32 and 64 should be more trustworthy. But their intervals also do not include 1.75.
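The peak-fitting and error heuristic described earlier in this section can be summarised in code as follows (our sketch; the interpolation density is an arbitrary choice):

```python
import numpy as np

# Sketch of the heuristic error estimate on chi_max: residuals epsilon
# between linearly interpolated data and the quadratic fit near the peak.
def chi_peak_with_error(T, chi, n_interp=200):
    """Return (T_peak, chi_max, sigma_eps) from a quadratic least-squares fit."""
    a, b, c = np.polyfit(T, chi, 2)                  # least-squares parabola
    T_fine = np.linspace(T.min(), T.max(), n_interp)
    eps = np.interp(T_fine, T, chi) - np.polyval([a, b, c], T_fine)
    var_eps = np.mean(eps ** 2)                      # var(eps) = <eps^2>
    sigma = np.sqrt(var_eps / (n_interp - 1))        # sigma(eps), as in the text
    T_peak = -b / (2.0 * a)                          # vertex of the parabola
    return T_peak, np.polyval([a, b, c], T_peak), sigma
```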
From the interval estimates above, it is thus concluded that we observe a significant deviation from the Ising value of 1.75 for the ratio of the exponents. With the experimental ratios of $`\gamma /\nu `$, we assumed $`\gamma `$ to remain at the Ising value of 1.75 and plotted $`T_{peak}`$ against $`L^{-1/\nu }`$ for each setting of coupling strengths investigated. With $`\nu <1`$, or for that matter with $`\nu =1.0`$ as for Ising systems, the plots obtained could not be reasonably fitted with least-squares straight lines. In fact, all the plots appear logarithmic. Is this another signature of a non-Ising system, or of the existence of two correlation lengths? We could not provide an answer at this current stage of research. In order to proceed, we used a linear fit to obtain a $`T_c`$ via extrapolation, using the “experimentally” obtained value of $`\nu `$. See Fig. 11 for a representative plot. We made “scaling plots” for the different system sizes for each value of the parameter $`J_3`$ investigated. As our susceptibility $`\chi `$ plots show much similarity with Ising plots, we assumed the exponent $`\gamma `$, which determines the power-law scaling of the $`\chi `$ plots on either side of the peak, to remain Ising, i.e. to have a value of 1.75. However, this would imply that the exponent $`\nu `$ is less than 1.0! Hence we plotted the curves with the exponent $`\nu `$ set to 1.0 as well as to the computed value and compared the plots, besides observing whether the slopes of the upper and lower best-fit straight lines give the $`\gamma `$ value assumed. It was found that the “Ising” plots were not consistent, in that we do not recover the assumed $`\gamma `$ value of 1.75 from the slopes. There are altogether eight plots for the four $`J_3`$ settings we looked at (with $`J_2/J_1=1`$). We realised that, for consistency, we cannot use the $`L`$ = 32 and 64 data to estimate $`\gamma /\nu `$ yet deal with all three sets in the determination of $`T_c`$ and in the “scaling plots”. For that, we boldly assume that the ratio $`\gamma /\nu `$ in our model is indeed close to 2.0! This would imply a non-Ising character, for which justification will be presented later. From the scaling plots with the experimental values, we observed that a straight line of slope 1.75 can be fitted through the data points in the linear regions. Thus, the assumption of $`\gamma `$ being 1.75 is consistent with the plots. Further, we observed that the data points for different system sizes show signs of scaling behaviour, in that data points from smaller systems deviate from the perceived linear region faster. This applies to both the top and bottom branches and is largely due to finite-size effects. Another point to note is the very short linear regions obtained from the model. Finally, compare Figure 12 with Figure 13, where the exponents assume Ising values. The data collapse near the “bend” is not as good as in the former plot. Similar situations occurred for the other settings of $`J_3`$, where the values we sampled ranged from close to the bi-critical point to well within the region of large repulsive inter-layer potentials. All the slopes measured are close to the assumed value of 1.75. Again, the collapse is visually better in the “all-experimental” cases. We also moved on to look into the case where $`J_2/J_1`$ is larger than 1. Compare Fig. 12 with Fig. 14 and Fig. 15 with Fig. 16. It is not difficult to observe that the data collapse is not as good in the case of $`J_2=2`$. Does this imply that the deviation from Ising is more severe in this case?
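For reference, the construction of such a scaling plot is sketched below (our code; the exponent values and the synthetic susceptibility data are illustrative placeholders, chosen to collapse by construction rather than taken from our runs):

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch of the "scaling plot": with a good data collapse, the points for
# all L fall on two branches (T < T_c and T > T_c) whose common slope at
# large argument gives the exponent gamma.
gamma_over_nu, gamma, T_c = 2.0, 1.75, 1.31   # illustrative values only
nu = gamma / gamma_over_nu                    # = 0.875 for these values

for L in (32, 64, 128):
    T = np.linspace(T_c - 0.2, T_c + 0.2, 41)
    T = T[np.abs(T - T_c) > 1e-3]             # avoid log(0) at T = T_c
    # synthetic chi obeying chi = L^(gamma/nu) g(|T - T_c| L^(1/nu))
    u = np.abs(T - T_c) * L ** (1.0 / nu)
    chi = L ** gamma_over_nu / (1.0 + u ** gamma)
    x = np.log(u)
    y = np.log(chi * L ** (-gamma_over_nu))
    plt.plot(x, y, "o", label=f"L = {L}")     # all L collapse onto one curve

plt.xlabel("log(|T - T_c| L^(1/nu))")
plt.ylabel("log(chi L^(-gamma/nu))")
plt.legend()
plt.show()
```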
Regarding the question just raised, it is hard to make any statements, as current knowledge indicates that intra-layer couplings are not expected to affect the universality class of the model system. However, although $`J_3=-10`$ gave us a $`\gamma /\nu `$ ratio of 1.9268, that of $`J_3=-1`$ is only 1.7939, which is still a puzzle. As the susceptibility plots from $`J_2=2`$ are similar in nature to those from $`J_2=1`$, we do expect similar results, though the peak heights are lower in the former case. See the susceptibility plots presented later. Our suspicion is that we did not gather enough data points near $`T_c`$, leading to less accurate estimates of the ratio. Though our numerical results indicate non-Ising behaviour, there may still be problems. The phenomenon of critical slowing down of the system dynamics, which becomes more significant as we probe closer to $`T_c`$, may affect our numerical results. Unfortunately, we cannot quantify how this phenomenon will affect our results for $`\chi `$ and $`T_c`$ near criticality. As this conflicts with our need to get a better estimate of the susceptibility peak, we attempt to counteract it via longer running times of up to 1 million MCS, whereas up to 500,000 MCS would be more than sufficient to plot the phase diagrams. This is due to the divergence of the correlation time near $`T_c`$, where very long running times would be needed as we go “closer” to the critical region than could be realised in practice for large systems. We are confident that the $`1\times 10^6`$ MCS used should be sufficient for the $`2\times 32\times 32`$ and $`2\times 64\times 64`$ systems but may not be so for the $`2\times 128\times 128`$ system. As the algorithm already has almost linear running time, it would not be trivial to improve upon. Hence, this huge demand on computer resources also limits the number of data points we can collect. The observed peaks increase at a rate higher than the Ising expectation as $`L`$ increases, and we do not see any reasonable way to “bring down” the peak heights. For temperatures near criticality, the appropriate entry of the power spectrum we are monitoring (whose time average is the order parameter) is quite constant but with sudden drops to zero, like “pot-holes” in the ground. Note that the drops as depicted are not as sudden, since we sample the data only every 200 MCS. In our case the entry is that for the FE phase. The susceptibility is known to diverge near criticality, which implies huge fluctuations of the dominant power spectrum entry. For low (and high) $`T`$’s, the constantly high (and low) values give very small fluctuations and hence susceptibilities close to zero, as expected and indeed observed. The above is the general expectation for Ising systems, where the FE phase’s dominance near $`T_c`$ changes intermittently and all other ordered phases are negligible. For our situation, the story is slightly different. When drops occur for the FE representation, the entry for AFS (striped antiferromagnetic layers) rises. They are in a way antagonistic to each other! This curious observation, of the possibility that the dominant phase may occasionally lose out to its local-minimum “sibling” during its evolution towards the steady state, speaks of a non-Ising behaviour. This is only seen near criticality; elsewhere, the power spectrum entry either stays near its peak value (at low $`T`$) or close to zero (at very high $`T`$).
Closer scrutiny of the fluctuation plot (Fig. 10) actually reveals that the $`L`$ = 128 system near $`T_c`$ has equilibrated, since there is no observable time asymmetry. In fact, the explanation for the observed $`\gamma /\nu `$ being closer to the upper limit of 2.0 could be in the plot itself! This is because we can interpret the switching of the dominant phase between FE and AFS as a signature of a first order transition, where $`\gamma /\nu `$ is exactly 2.0. Hence, the configuration of the negatively coupled bilayer system could be FE at moderately low temperatures and, as $`T_c`$ is approached, the AFS phase becomes significant and competes with FE in the second order structure-disorder transition! This is possible since the AFS phase is only slightly higher in energy compared with FE, and is in fact a local minimum while FE is the global one. As $`T`$ is increased further, the amplitudes of both components are observed to become comparable until they both become close to zero, as for the other phases at very high $`T`$. We would like to comment that the driving field does not influence particle hops across layers. The disappearance of a particle poor-rich segregation (across layers) phase could be explained as chance events, which are frequent due to the strong thermal disordering effects. Particles from the particle-rich layer hop to the particle-poor one. This would bring down the neighbouring particles due to attractive interactions between particles on the same layer, possibly resulting in an avalanche. Without any drive, these clumps of particles in a generally particle-poor region would not have any long-ranged order. However, under the drive, linear interfaces would tend to result due to particle alignment with the external field. This effect is especially important near $`T_c`$, where the correlation length diverges. Locally, the rule of having particle-hole pairs across layers is satisfied by both the FE and AFS phases. What resulted was much like switching between weak forms of the FE and AFS phases, a first-order transition-like behaviour. Such an ability of the lattice gas to switch between two phases of very close energy does not have a counterpart in the equilibrium model. Plotting the susceptibility curves for each set of parameter settings over the different system sizes, we found plots characteristic of second order phase transitions. See Figures 17 and 18. The structure factor data $`S(0,0,1)`$ (not shown) at the transitions are smooth, with no discontinuities. This is expected, since we monitored the change of the FE phase as $`T`$ increases. We would not be able to get any delta functions characteristic of first order transitions, as the transitions between a phase more FE and one more AFS occur during a single run. What we are implying is that D-FE is second order, but near $`T_c`$ any disordering effect on the FE structure by the moderately high temperature is ordered into an AFS-like phase by the large drive in its direction. This does not occur for undriven systems. The AFS phase is not an equilibrium phase by energy arguments, since the FE phase is the more stable one given the same set of conditions; thus we will not have normal transitions between their fully ordered forms. The key ingredient is the large, finite driving field which leads to this nonequilibrium phenomenon.

## VII. CONCLUSIONS

We have attempted to extend the phase diagrams of the bilayer driven lattice gas for unequal intra-layer attractive couplings. This is in continuation of the work done by Hill et al.
The main findings are that the phase region occupied by the configuration which consists of ferromagnetic bands across the layers (S phase) increases at the expense of the other phase, which is the FE (Filled-Empty) phase. We speculate that the preference of the driving field for the S phase over the FE phase increases as the intra-layer coupling transverse to the drive increases. We also tried to determine the universality class of our bilayer lattice model with repulsive inter-layer interactions. Starting with an Ising hypothesis, we found discrepancies of the ratio $`\gamma /\nu `$ with the Ising value of 1.75. The ratio determined from the peaks of the susceptibility plots according to finite-size scaling theory is found to be closer to 2.0. Due to the similarity of the plots with Ising ones, we assumed $`\gamma `$ to take the Ising value of 1.75, and self-consistent plots using the independently determined $`\gamma /\nu `$ ratio could be obtained. The reason for the experimentally determined ratios of $`\gamma /\nu `$ being close to 2.0 is speculated to be a first-order-transition-like competition of the AFS phase with the FE phase near criticality. The general D-FE transition should still be second order. This leads to a non-Ising conclusion. In fact, this could also explain why the plots of $`T_{peak}`$ against $`L^{-1/\nu }`$ are not linear. On the other hand, another explanation could be that the scaling is anisotropic, requiring two correlation length exponents $`\nu _{\perp }`$ and $`\nu _{\parallel }`$, associated with the directions perpendicular and parallel to the driving field, respectively. This could also explain the nonlinearity of the $`T_{peak}`$ plots. However, as is well acknowledged in the field, this proposal would be very difficult to investigate. There is in fact some work on the universality class of bilayered systems by Marro et al. in . There they looked at the differences between single and twin-layered driven lattice gases, and concluded that the S-FE transition is Ising in nature. However, no work was done on the D-FE transitions. In hindsight, we should have done a comprehensive study of the undriven case and compared the current results with it in order to isolate the effects of the drive. But we expected the bilayered, undriven case to be well studied, and only looked at the case of large interlayer repulsion, namely $`J_3`$ = -10 for $`L`$ = 32 and $`J_3`$ = -20 for $`L`$ = 128. Combining the two sets of data, which is allowed as the system behaviour should be similar for such large repulsions, we obtained a $`\gamma /\nu `$ of 1.7518, which is very close to the Ising value of 1.750. With $`\nu `$ = 1, a $`T_c`$ of 2.0053 is obtained, which is the expected result since, as $`J_3\to \pm \mathrm{\infty }`$, the bilayer structure becomes irrelevant and the system reduces to a 2-D Ising system with twice the coupling. This can be understood as cross-layer particle-particle pairs or particle-hole pairs moving in unison in the 2-D lattice. However, when we attempt to make a “data-collapse” plot, the collapse is reasonable but the slope of the top branch is only 1.60! Hence we have a slight consistency problem. See Fig. 19. We can see that the two branches are not quite parallel, with the lower one giving a slope closer to 1.75. Further, as the susceptibility plot for $`L`$ = 128 is not very refined near the peak, this could introduce errors into the peak estimation. Also, the run lengths used were only 500,000 MCS, in view of the fact that larger repulsion should lead to faster equilibration.
Nonetheless, the evidence speaks strongly for Ising behaviour in this case. It is worthwhile to note that though the Ising universality class is broad, there are exceptions: cases are known where $`\nu `$ could be only 0.89, or where $`\nu `$ is 1.35, yet in both cases $`\gamma /\nu `$ is still 1.75. In both of those cases, the system has only a single layer and no driving field is present. Here, we have a new situation where $`\gamma /\nu `$ is non-Ising but $`\gamma `$ could remain Ising! Thus, the universality class of the driven bilayer lattice gas with repulsive inter-layer interactions does not belong to the Ising class, owing to the presence of two dominant phases near criticality in the approach of the system towards disorder. The theoretical and physical ramifications, as well as an analytical understanding, are yet to be worked out.
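As a concrete illustration of the finite-size-scaling estimate used above, the following minimal sketch extracts $`\gamma /\nu `$ from the growth of the susceptibility peaks with system size, $`\chi _{max}\sim L^{\gamma /\nu }`$. The peak values below are placeholders for illustration, not our simulation data:

```python
import numpy as np

# Susceptibility peak heights chi_max(L) for several system sizes L.
# These numbers are illustrative placeholders, not the actual simulation data.
L = np.array([16, 32, 64, 128])
chi_max = np.array([11.8, 40.5, 139.0, 477.0])

# Finite-size scaling predicts chi_max ~ L**(gamma/nu), so the slope of a
# linear fit in log-log coordinates gives the ratio gamma/nu directly.
slope, intercept = np.polyfit(np.log(L), np.log(chi_max), 1)
print(f"gamma/nu = {slope:.4f}")
```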
# Evolution of the correlation function as traced by the HDF galaxies ## 1. Introduction The theoretical possibility of an initially highly biased spatial two-point auto-correlation function of dark matter haloes, $`r_{\text{halo}}`$, which decreases in amplitude as fluctuations in low-density regions successively become non-linear and collapse (decreasing correlation period, hereafter DCP), was noticed at least as early as 1993 (Roukema 1993; Brainerd & Villumsen 1994). Yamashita (unpublished) also noticed the same effect in hydrodynamical N-body simulations, so the effect could follow through to the galaxy correlation function $`\xi `$ either for the simplest possible star formation hypotheses (e.g. every halo becomes luminous immediately following collapse), or for less simplistic models. Because implicit estimates of $`\xi `$ at high redshift via angular correlation function measurements from photometric surveys were lower than expected (e.g. Roukema & Peterson 1994 and references therein) in the early 1990's, it seemed that the DCP would have contradicted the observations. However, Ogawa, Roukema & Yamashita (1997) showed that the HDF-N estimates of the angular correlation function by Villumsen (1997) were not in contradiction with the DCP, i.e. that HDF observations were consistent with the DCP. Since then, high redshift ($`z\gtrsim 1`$) galaxy spectroscopy via Lyman-break selection and photometric redshift techniques applied (in particular) to the HDF-N (and -S) have created a new era in measurement of galaxy statistics. It quickly became clear that the Lyman-break galaxies, at $`z\sim 3`$, are highly clustered (Giavalisco et al. 1998) and that the DCP was no longer a mere theoretical prediction. Several papers further developing the theory of the DCP (Mo & White 1996; Bagla 1998; Moscardini et al. 1998 and references therein) have since appeared, and several observational estimates at $`z\gtrsim 2`$ have been made and compared to various theoretical predictions (Miralles, Pelló & Roukema 1999; Arnouts et al. 1999; Magliocchetti & Maddox 1999). In parallel with the various model-dependent methods of analysing the observational results, it is suggested that Ogawa et al.'s (1997) extension of Groth & Peebles' (1977) power law model of correlation function evolution, via a transition redshift $`z_t`$ and a power law of $`(1+z)`$ for $`z>z_t`$, should provide a simple way to characterise and compare different observational and theoretical analyses. This is presented in §2. A complementary observational analysis to the above is that of estimating $`\xi (z\sim 2),`$ which is done by a method which is itself also complementary to the above: the Lyman-break technique is used to select UV drop-in galaxies as opposed to UV drop-out galaxies, i.e. those for which $`z\lesssim 2.5`$. This analysis is summarised in §3, and was carried out using integral constraint corrections without reintroducing linear uncertainty terms which formulae like that of Landy & Szalay (1993) are designed to avoid or minimise (§4). A high bias was not detected at $`z\sim 2`$. Combination of the Giavalisco et al. (1998) ($`z_{\text{med}}\sim 3`$) and Miralles et al. (1999) ($`z_{\text{med}}\sim 3.7`$) estimates for $`\xi `$ can then be used to estimate $`z_t`$ and $`\nu `$ in eq. (1). This is presented in §5. ## 2. Characterising the DCP After Ogawa et al. (1997), an extension of Groth & Peebles' (1977) classical formula \[Eq.
(1) with $`z_t\gg 1`$\] provides a way to represent the observational results without depending on particular galaxy formation models: $$\xi (r,z)=\{\begin{array}{cc}\left[\frac{(1+z)}{(1+z_t)}\right]^\nu \xi (r,z_t),\hfill & z>z_t\hfill \\ (r_0/r)^\gamma (1+z)^{-(3+ϵ-\gamma )},\hfill & z_t\ge z>0\hfill \end{array}$$ (1) where $`r`$ is the galaxy pair separation; $`r`$ and $`r_0`$ are expressed in comoving units; $`\gamma \approx 1.8`$ is determined from observations; $`ϵ`$ represents low redshift correlation growth and has the value $`ϵ=0`$ for clustering which is stable (constant) in proper units on small scales; $`z_t`$ is a transition redshift from the DCP at high $`z`$ to the low $`z`$ period of correlation growth; and $`\nu `$ represents the rate of correlation decrease at high $`z`$ \[no relation to $`\nu `$ of Bagla (1998)\]. ## 3. The UV drop-in technique: selecting a $`z\sim 2`$ sample The UV drop-out technique selects galaxies above $`z\approx 2.5`$. In contrast, by selecting only those HDF galaxies for which a source is detected in the $`U`$ (F300W) band, Mobasher & Mazzei (1999) defined a UV drop-in sample for which $`z\approx 2.5`$ was a strong upper limit in redshift. Roukema et al. (1999) used Mobasher & Mazzei's photometric redshift estimations to create a subsample in the range $`1.5\lesssim z\lesssim 2.5`$. Roukema et al. (1999) estimated $`\xi (z\sim 2)`$ from this sample, finding that for stable clustering in proper coordinates, $`r_0\approx 2.6_{-1.7}^{+1.1}`$ $`h^{-1}`$ Mpc for curvature parameters $`\mathrm{\Omega }_0=1,\lambda _0=0`$, or $`r_0\approx 5.8_{-3.9}^{+2.4}`$ $`h^{-1}`$ Mpc for $`\mathrm{\Omega }_0=0.1,\lambda _0=0.9,`$ if one does not apply any correction for effects of the non-zero size of galaxy haloes on $`\xi `$. The correction for possible effects of non-zero halo size is discussed in Roukema (1999). ## 4. How not to reintroduce linear terms when correcting for the integral constraint Correlation function estimates in small fields require integral constraint corrections. See §3.1.3 of Roukema et al. (1999) for a discussion and references, in particular Hamilton (1993) for an in-depth analysis. The following formula from Landy & Szalay (1993): $$w(\theta )=\frac{N_{gg}-2N_{gr}+N_{rr}}{N_{rr}}+C$$ (2) where $`N_{gg},`$ $`N_{gr}`$ and $`N_{rr}`$ are numbers of galaxy-galaxy, galaxy-random and random-random pairs and $`C=0`$, avoids linear terms in the uncertainty of the estimate of $`w`$ (angular correlation function). However, it follows from Hamilton (1993) that if $`C`$ is allowed to be a free parameter which is varied in order to match $`w(\theta )`$ values to a prior hypothesis, e.g. that $`w`$ is a power law of a given slope, then linear terms are reintroduced. Without changing the prior hypothesis, the way to avoid reintroducing these terms is to use eq. (24) of Hamilton (1993): $$w(\theta )=\frac{N_{gg}-2\overline{n_{\text{est}}}N_{gr}+\overline{n_{\text{est}}}^2N_{rr}}{\overline{n_{\text{est}}}^2N_{rr}}$$ (3) where $`\overline{n_{\text{est}}}`$ is the mean number density, in principle estimated by some means external to the sample, divided by the number density of the sample itself. By treating $`\overline{n_{\text{est}}}`$ as a free parameter instead of $`C`$, the correction is applied optimally.
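To illustrate this estimator, the following sketch computes eq. (3) from pair counts on a small flat-sky field. It is only a sketch: the positions, bin edges and value of $`\overline{n_{\text{est}}}`$ are placeholders, and a brute-force pair count is used since HDF-sized samples are small:

```python
import numpy as np

def pair_separations(a, b=None):
    """All pairwise separations for flat-sky positions of shape (N, 2).
    With one argument, returns the N(N-1)/2 distinct internal pairs."""
    if b is None:
        d = a[:, None, :] - a[None, :, :]
        iu = np.triu_indices(len(a), k=1)
        return np.sqrt((d**2).sum(-1))[iu]
    d = a[:, None, :] - b[None, :, :]
    return np.sqrt((d**2).sum(-1)).ravel()

def w_hamilton(gal, ran, bins, n_est=1.0):
    """Eq. (3): w = (N_gg - 2*n*N_gr + n^2*N_rr) / (n^2*N_rr), with the
    mean-density ratio n_est kept as the free parameter of the correction.
    Pair counts are normalised by the total number of pairs of each kind."""
    ng, nr = len(gal), len(ran)
    N_gg = np.histogram(pair_separations(gal), bins)[0] / (ng * (ng - 1) / 2)
    N_rr = np.histogram(pair_separations(ran), bins)[0] / (nr * (nr - 1) / 2)
    N_gr = np.histogram(pair_separations(gal, ran), bins)[0] / (ng * nr)
    return (N_gg - 2 * n_est * N_gr + n_est**2 * N_rr) / (n_est**2 * N_rr)

# Example with unclustered points: w should scatter about zero.
rng = np.random.default_rng(0)
gal = rng.uniform(0, 150, size=(200, 2))   # positions in arcsec, say
ran = rng.uniform(0, 150, size=(2000, 2))
bins = np.logspace(0, 2, 8)                # angular bins, arcsec
print(w_hamilton(gal, ran, bins))
```

In a fit to a power law of fixed slope, one would then vary $`\overline{n_{\text{est}}}`$ (rather than an additive constant $`C`$) to match the model, as argued above.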
## 5. DCP parameter estimates From Giavalisco et al.'s (1998) estimate $`r_0=5.3_{-1.3}^{+1.0}`$ $`h^{-1}`$ Mpc ($`\mathrm{\Omega }_0=1,\lambda _0=0`$) at $`z_{\text{med}}\sim 3`$ and Miralles et al.'s (1999) estimate $`r_0=7.1\pm 1.5`$ $`h^{-1}`$ Mpc at $`z_{\text{med}}\sim 3.7`$ ($`\mathrm{\Omega }_0=1,\lambda _0=0`$), the parameters $`z_t`$ and $`\nu `$ can be estimated from Eq. (1). These are $`z_t=1.7\pm 0.9`$ and $`\nu =2.1\pm 3.6`$, and are illustrated in Fig. 1. These values are similar to those expected from simulations \[§3, §6 of Ogawa et al. (1997); also Fig. 3 of Bagla (1998)\], and consistent with the $`\xi (z\sim 2)`$ estimate which indicates that the DCP has ended by about this epoch. ## 6. Conclusion The difficulty in estimating photometric redshifts at $`z\sim 2`$ can be at least partly overcome by applying the UV drop-in technique. This enables studies of galaxy properties at an epoch which appears to be an effective transition epoch between two periods or regimes of galaxy formation, characterised by a minimum in the amplitude of the spatial correlation function. Further work at this epoch such as that of Martinéz et al. (1999) may therefore provide important clues in understanding galaxy formation. ### Acknowledgments. This research has been supported by the Polish Council for Scientific Research Grant KBN 2 P03D 008 13 and has benefited from the Programme jumelage 16 astronomie France/Pologne (CNRS/PAN) of the Ministère de la recherche et de la technologie (France). ## Discussion Judy Cohen: The HDF field subtends a very small angle, so that there are only a small number of galaxies in the field. Doesn't this scare you? B. F. R.: The serious violations of international humanitarian law allegedly carried out in Yugoslavia by the most powerful military coalition on the planet scare me (Rangwala et al. 1999). In contrast, for galaxy two-point auto-correlation function estimates, the conservative use of error bars (e.g. Roukema & Peterson 1994; Roukema et al. 1999) should help avoid undue emotion. The error bars on $`r_0`$, $`z_t`$ and $`\nu `$ are there for a reason, not just for amusement. ## References Arnouts S., Cristiani S., Moscardini L., Matarrese S., Lucchin F., Fontana A., Giallongo E. 1999, MNRAS, accepted (astro-ph/9902290) Bagla J. S. 1998, MNRAS, accepted (astro-ph/9711081) Brainerd T. G., Villumsen J. V. 1994, ApJ, 431, 477 Giavalisco M., Steidel C. C., Adelberger K. L., Dickinson M. E., Pettini M., Kellogg M. 1998, ApJ, 503, 543 (astro-ph/9802318) Groth E. J., Peebles P. J. E. 1977, ApJ, 217, 385 Hamilton A. J. S. 1993, ApJ, 417, 19 Landy S. D., Szalay A. S. 1993, ApJ, 412, 64 Magliocchetti M., Maddox S. 1999, MNRAS, accepted (astro-ph/9811320) Martinéz J., Lambas D. G., Valls-Gabaud V. 1999, in ASP Conf. Ser. (this vol), Clustering at High Redshift, ed. A. Mazure, O. Le Fèvre, V. Le Brun (San Francisco: ASP) Miralles J. M., Pelló R., Roukema B. F. 1998, submitted (astro-ph/9801062) Mo H., White S. D. M. 1996, MNRAS, 282, 347 Mobasher B., Mazzei P. 1999, ApJ, submitted Moscardini L., Coles P., Lucchin F., Matarrese S. 1998, MNRAS, accepted Ogawa T., Roukema B. F., Yamashita K. 1997, ApJ, 484, 53 Rangwala G., et al. 1999, http://ban.joh.cam.ac.uk/~maicl/ Roukema B. F. 1993, Ph.D. thesis, Australian National University Roukema B. F. 1999, in ASP Conf. Ser. (this vol), Clustering at High Redshift, ed. A. Mazure, O. Le Fèvre, V. Le Brun (San Francisco: ASP) Roukema B. F., Peterson B. A.
1994, A&A, 285, 361 Roukema B. F., Valls-Gabaud D., Mobasher B., Bajtlik S. 1999, MNRAS, 305, 151 Villumsen J. V., Freudling W., da Costa L. N. 1997, ApJ, 481, 578
# The impact hazard from small asteroids: current problems and open questions ## 1 Introduction Interest in the impact of interplanetary bodies with planets, particularly with Earth, has increased significantly during the last few years because of several events, such as the fall of comet D/Shoemaker–Levy 9 into Jupiter's atmosphere. Particular attention was given to the detection of kilometre–sized objects, which pose a severe threat to the Earth. In recent years, this has been emphasized by several authors with differing points of view (e.g. Adushkin and Nemtchinov 1994, Chapman and Morrison 1994, Toon et al. 1997). The reason is quite simple, as written by Clark Chapman (1996): the impact of such an object has a non–zero probability of creating a global ecological catastrophe within our lifetime. Larger objects (tens of kilometres) can cause an extinction level event. The consequent "asteroidal winter", deriving from a strong injection of dust into the atmosphere, is quite similar to the nuclear winter, radioactive consequences apart. It would cause the onset of environmental conditions whose main features are a very long period of darkness and reduced global temperature, something similar to the polar winter on a worldwide scale (Cockell and Stokes 1999). Even though I understand and respect these opinions, I think that we cannot neglect small bodies at all. There are two main reasons: first, the fragmentation of asteroids in the Earth's atmosphere is not well known. Observations of small asteroids (up to tens of metres) show that fragmentation occurs when the dynamical pressure is lower than the mechanical strength, and there is no reason to suppose that larger bodies behave differently. Therefore, airbursts can give us data to test theories of fragmentation, which are also valid for larger bodies. The second reason is that, although the damage caused by Tunguska–like events can be defined as "local", it is not negligible. Specifically, there are several scientists, such as J. Lewis, M. Paine, S.P. Worden and B.J. Peiser (see debates in the Cambridge Conference Net), suggesting that small asteroids might be even more dangerous than larger bodies. Moreover, David Jewitt (2000), after the paper by Rabinowitz et al. (2000) in which the authors strongly reduced the estimated number of NEOs larger than 1 km, suggested that it is time to set up a more ambitious NEO survey, including small objects. The present paper does not present any new theory or observation, but it reviews some points that are not present in previous analyses and studies. The purpose of this paper is to strengthen studies on small objects, simply because our knowledge is very poor. The paper is divided into two parts: in Section 2, I add some notes to the debate on the danger from small asteroids. In Sects. 3 and 4, I present the evidence that the fragmentation of small asteroids in the Earth's atmosphere is still an open problem. ## 2 Tunguska–like events Small objects, of the order of tens or hundreds of metres, can cause severe local damage. The best known event of this kind is the Tunguska event of 30 June 1908, which resulted in the devastation of an area of $`2150\pm 25`$ km² and the destruction of more than 60 million trees (for a review, see Vasilyev 1998). Still today there is a wide debate all over the world about the nature of the cosmic body which caused that disaster. Just last July an Italian scientific expedition, *Tunguska99*, went to Siberia to collect data and samples (Longo et al. 1999).
Chapman and Morrison (1994) considered Tunguska–like events a negligible threat. They could be right, considering the substantial uncertainties in these studies, but they underestimate some values. Although they proposed data with large error bars, the question is: where do we have to centre these bars? Let us analyse the assumptions of Chapman and Morrison: first of all, they consider that the area destroyed at Tunguska (i.e. the area where the shock wave was sufficient to fell trees) was about 1000 km². This value is somewhat larger than the area where the peak overpressure reached the value of 4 psi (27560 Pa), sufficient to destroy normal buildings (according to the formula quoted by Chapman and Morrison, the 4 psi area is about 740 km², using a yield of 20 Mton). There are two main objections to this hypothesis: firstly, the *measured* value of the area with fallen trees is more than double (see above; Vasilyev 1998). In addition, it is worth noting that an overpressure of 2 psi produces winds of 30 m/s, which is sufficient to cause severe damage to wood structures, and debris flying at such speed is a threat to life (Toon et al. 1997). Therefore, a reasonable value for the number of human beings *risking death* during a Tunguska–like event is $`10^4`$, rather than $`7\times 10^3`$ as indicated by Chapman and Morrison. The above value has been calculated by using the formula in Adushkin and Nemtchinov (1994) and assuming an explosion energy of 12.5 Mton (Ben–Menahem 1975). Chapman and Morrison (1994) correctly note that there is a much greater probability that such an event might occur in an uninhabited part of the world. On the other hand, in the unlikely event of it occurring in a populated city, it would cause a great disaster. For example, in Rome, which has a population density of about 2000 people per square kilometre, the number of human beings at risk would be more than 2 million. It is also necessary to evaluate the impact frequency of Tunguska–like events. Chapman and Morrison consider a time interval of 250 yr, but several other studies and episodes suggest a lower value. Farinella and Menichella (1998) studied the interplanetary dynamics of Tunguska–sized bodies by means of a numerical model and found that the impact frequency is 1 per 100 yr. However, in that study, the authors did not take into account the Yarkovsky effect (see Farinella and Vokrouhlický, 1999, and references therein), which can slightly increase the delivery of NEOs (Near Earth Objects) toward the Earth. There are also ground–based and space–based observations that support these conclusions, even though the frequency range can vary greatly. For a 1 Mton explosion, the impact frequency can be once in 17 (ReVelle 1997) or 40 yr (Nemtchinov et al. 1997b), which implies a Tunguska event (12.5 Mton) once in 100 or 366 yr. If we consider an energy of 10 Mton, as calculated by Hunt et al. (1960), we obtain a value of the impact frequency of respectively 88 or 302 yr. In addition, Steel (1995) reports two other Tunguska–like events in South America, in 1930 and 1935: this strengthens the impact frequency value of one per 100 yr (or less). Now, if we consider a typical time interval of one Tunguska–like impact per 100 yr and $`10^4`$ deaths per impact, we obtain 100 deaths per year throughout the world; this value is no longer negligible on Chapman and Morrison's scale (1994).
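The order-of-magnitude bookkeeping behind these numbers is simple enough to state explicitly; the following sketch merely restates the estimates above (all inputs are the rounded figures quoted in the text, not new data):

```python
# People at risk if a Tunguska-like airburst occurred over a dense city:
area_4psi_km2 = 1.0e3          # ~4 psi overpressure area for a ~20 Mton yield
rome_density_km2 = 2.0e3       # inhabitants per square kilometre
print(area_4psi_km2 * rome_density_km2)        # ~2e6, "more than 2 million"

# World-averaged casualty rate from Tunguska-like events:
deaths_per_impact = 1.0e4      # people risking death per event (see text)
impact_interval_yr = 100.0     # one Tunguska-like impact per ~100 yr
print(deaths_per_impact / impact_interval_yr)  # ~100 deaths per year
```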
*On the other hand, we would stress the great uncertainty of these values, which is mainly due to the use of empirical relations with scarce data.* We are aware that the threat posed by kilometre and multikilometre objects is more dangerous, and therefore we must study these objects and methods to avoid a global catastrophe. However, the few points raised in this paper suggest that we must *also* study Tunguska–like events. In addition, it is worth noting that studies of the impact hazard are often based on models of cosmic body fragmentation in the Earth's atmosphere. These models assume that the fragmentation begins when the dynamical pressure at the stagnation point is equal to the mechanical strength of the body. However, as we shall see, this is not what occurs. ## 3 The failure of current theories The calculations of the impact hazard are strongly related to the available numerical models for the fragmentation of asteroids/comets in the Earth's atmosphere. Present models consider that fragmentation begins when the dynamical pressure in front of the cosmic body is equal to the material mechanical strength. However, observations of very bright bolides prove that large meteoroids or small asteroids break up at dynamical pressures lower than their mechanical strength. Today there is still no explanation for this conundrum. This is of paramount importance, because it allows us to know whether or not an asteroid might reach the Earth's surface. In addition, the atmospheric breakup also affects the crater field formation (Passey and Melosh 1980) and the area devastated by the airblast. Therefore, it allows us to establish a reliable criterion to assess the impact hazard. All the studies mentioned above are based on models where fragmentation begins when the dynamical pressure is equal to the mechanical strength of the asteroid. But, as we shall see, observations indicate that this is not true. The interaction of a cosmic body with the Earth's atmosphere can be divided into two regimes, according to the body dimensions. For millimetre- to metre-sized bodies (meteoroids), the most useful theoretical model is the gross–fragmentation model developed by Ceplecha et al. (1993) and Ceplecha (1999). In this model, there are two basic fragmentation phenomena: *continuous fragmentation*, which is the main process of meteoroid ablation, and *sudden fragmentation*, i.e. discrete fragmentation at a certain point. For small asteroids another class of models is used, where ablation takes the form of explosive fragmentation, while at high atmospheric heights it is considered negligible. Several models have been developed: Baldwin and Shaeffer (1971), Grigoryan (1979), Chyba et al. (1993), Hills and Goda (1993), Lyne et al. (1996). A comparative study of the models by Grigoryan, Hills and Goda, and Chyba–Thomas–Zahnle was carried out by Bronshten (1995). He notes that the model proposed by Chyba et al. does not take into account fragmentation: therefore, the destruction heights are overestimated (about 10–12 km). Bronshten also concludes that the Grigoryan and Hills–Goda models are equivalent. There is also a class of numerical models, called "hydrocodes" (e.g., CTH, SPH), which were used particularly for the recent impact of Shoemaker–Levy 9 with Jupiter. Specifically, Crawford (1997) uses CTH to simulate the impact, while M. Warren, J. Salmon, M. Davies and P. Goda used SPH. The latter was only published on the internet and is no longer available.
Despite the particular features of each model, fragmentation is always considered to start when the dynamical pressure $`p_0`$ at the front of the meteoroid (the stagnation point) exceeds the mechanical strength $`S`$ of the body. Although direct observations of asteroid impacts are not available, it is possible to compare these models with observations of bodies with dimensions of several metres or tens of metres. Indeed, in this range, the gross–fragmentation model overlaps with the explosive fragmentation models. As underlined several times by Ceplecha (1994, 1995, 1996b), observations clearly show that meteoroids break up at dynamical pressures lower (10 times and more) than their mechanical strength. These data are obtained from photographic observations of meteors and the application of the gross–fragmentation model, which can be very precise. According to Ceplecha et al. (1993) it is possible to distinguish five strength categories with an average dynamical pressure of fragmentation (Tab. 1). For continuous fragmentation the results obtained also indicate that the maximum dynamical pressure is below 1.2 MPa, but five exceptions were found: 4 bolides reached 1.5 MPa and one survived up to 5 MPa (Ceplecha et al. 1993). It is also very important to relate the ablation coefficient $`\sigma `$ to the fragmentation pressure $`p_{\mathrm{fr}}`$, in order to find a relationship between the meteoroid composition and its resistance to the air flow. To our knowledge, a detailed statistical analysis of this subject does not exist, but in the paper by Ceplecha et al. (1993) there is a plot made by considering data on 30 bolides (we refer to Fig. 12 in that paper). We note that stony bodies (type I) have a wide range of $`p_{\mathrm{fr}}`$ values. In the case of weak bodies, we can see that there is only one cometary bolide (type IIIA), but this is due to two factors: firstly, cometary bodies undergo continuous fragmentation, rather than a discrete breakup at certain points; therefore, it is incorrect to speak about a fragmentation pressure, and we should use the maximum tolerable pressure instead. The second reason is that there is a selection effect. Indeed, from statistical studies, Ceplecha et al. (1997) found that a large fraction of bodies in the size range from 2 to 15 m are weak cometary bodies. However, a recent paper has shown that statistics from physical properties can lead to different results when compared with statistics from orbital evolution (Foschini et al. 2000). To be more precise, physical parameters indicate that, as noted above, a large fraction of small near Earth objects are weak cometary bodies, whilst the analysis of orbital evolution indicates a strong asteroidal component. The presence of cosmic bodies with very low fragmentation pressure can be explained by the assumption that additional flaws and cracks may be created by collisions in space, even though they do not completely destroy the cosmic body (Baldwin and Shaeffer 1971). Other explanations could be that the asteroid was not homogeneous (see the referee's comment in Ceplecha et al. 1996) or that it had internal voids (Foschini 1998). Almost all the models described deal with the motion of a cosmic body in the Earth's atmosphere. However, it is worth noting that we cannot observe the cosmic body directly: we can only see the light emitted during atmospheric entry. Therefore, we have to introduce into the equations several coefficients that cannot be derived from direct observations.
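To make the discrepancy concrete, one can compare the classical breakup criterion with the observed fragmentation pressures. The sketch below assumes a simple isothermal exponential atmosphere and the common approximation $`p_0\rho _{atm}v^2`$ for the stagnation pressure; the scale height and entry speed are illustrative values, not fitted to any specific bolide:

```python
import numpy as np

RHO0 = 1.225        # sea-level air density, kg/m^3
H = 7.2e3           # atmospheric scale height, m (illustrative)

def dyn_pressure(h_m, v_ms):
    """Stagnation-point dynamical pressure p ~ rho(h) * v^2, in Pa."""
    return RHO0 * np.exp(-h_m / H) * v_ms**2

def nominal_breakup_height(S_pa, v_ms):
    """Altitude where p equals the strength S: the classical criterion."""
    return -H * np.log(S_pa / (RHO0 * v_ms**2))

v = 20e3  # a typical entry speed, m/s
for S in (50e6, 9e6, 1e6, 0.1e6):   # strengths from ~stony down to ~cometary
    h = nominal_breakup_height(S, v) / 1e3
    print(f"S = {S/1e6:5.1f} MPa -> nominal breakup at ~{h:4.1f} km")
# A 50 MPa stony body should nominally survive to ~16 km altitude, yet the
# observed fragmentation pressures of 0.1-10 MPa correspond to heights of
# roughly 30-60 km at this speed.
```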
If we turn our attention to the hypersonic flow around the body, we can obtain data from direct observations. Among the models discussed above, only Nemtchinov et al. (1997a, b) tried to investigate the hypersonic flow around the asteroid with a numerical model. Foschini (1999) investigated the analytic approach: indeed, although the details of a hypersonic flow are very difficult to calculate and numerical models are needed, the pressure distribution can be evaluated with reasonable precision by means of approximate methods. In the limit of a strong shock ($`M\gg 1`$) several equations tend to asymptotic values and calculations become easier. The application of this technique to a particular episode, the Tunguska event, gave reasonable values (Foschini 1999). However, although the first results are encouraging, further work is necessary before a complete and detailed theory is available. ## 4 Special cases In addition to the data published in the papers by Ceplecha et al. (1993) and Ceplecha (1994), we consider some specific cases of bright bolides. We provide here a short description and refer to the papers quoted for details. The Lost City meteorite (January 3, 1970), a chondrite (H), was analysed by several authors (McCrosky et al. 1971, ReVelle 1979, Ceplecha 1996a). The recent work by Ceplecha (1996a) is of particular interest, because by taking into account the meteoroid rotation, he succeeds in explaining the atmospheric motion without discrepancies, except of course for the dynamical pressure, which in this episode reached the value of $`p_{\mathrm{fr}}=1.5`$ MPa, while the mechanical strength of a stony body is about 50 MPa. In the work by ReVelle (1979), it is also possible to find useful data for two other episodes: Příbram (April 7, 1959) and Innisfree (February 6, 1977). In both episodes a meteorite was recovered: an ordinary chondrite and an L chondrite, respectively. Values of $`p_{\mathrm{fr}}`$ of 9.2 MPa and 1.8 MPa, respectively, were obtained in this work. The Šumava bolide (December 4, 1974) reached $`-21.5`$ absolute visual magnitude and was produced by a cometary body. It exhibited several flares during continuous fragmentation, ending at a height of about 60 km. The maximum dynamical pressure was in the range 0.025–0.14 MPa, much lower than the mechanical strength of a cometary body, i.e. 1 MPa (Borovička and Spurný 1996). The Benešov bolide (May 7, 1991) was very atypical and was analysed in detail by Borovička and Spurný (1996) and Borovička et al. (1998a, b). From these studies, it results that it was very probably a stony object which underwent a first fragmentation at high altitude (50–60 km) at dynamical pressures of about 0.1–0.5 MPa. However, some compact fragments were disrupted at pressures of 9 MPa (at a height of 24 km). The fall of the Peekskill meteorite (October 9, 1992) was the first of such events to be recorded by a video camera (Ceplecha et al. 1996). The fireball was brighter than the full moon and 12.4 kg of ordinary chondrite (H6 monomict breccia) were recovered. The availability of a video recording allows us to compute, with relative precision, the evolution of the meteoroid speed and, therefore, the dynamical pressure. It was discovered that the maximum value of $`p_{\mathrm{fr}}`$ was about 0.7–1.0 MPa, while the meteorite has an estimated strength close to 30 MPa. In recent years, space–based infrared sensors have detected several bolides all around the world. Nemtchinov et al.
(1997) investigated these events by using a radiative–hydrodynamical numerical code. They simulated three bright bolides (April 15, 1988; October 1, 1990; February 1, 1994) and obtained, respectively, the following results: stony meteoroid, $`p_{\mathrm{fr}}=1.6`$–$`2.0`$ MPa; stony meteoroid, $`p_{\mathrm{fr}}=1.5`$ MPa; iron meteoroid, $`p_{\mathrm{fr}}=10`$–$`15`$ MPa. Concerning the latter, Tagliaferri et al. (1995) reached a slightly different conclusion: stony meteoroid, $`p_{\mathrm{fr}}=9`$ MPa. The condition that fragmentation starts when the dynamical pressure reaches the mechanical strength of the meteoroid was imposed by Baldwin and Shaeffer (1971), but it is worth noting that this is a hypothesis. We now have sufficient, though incomplete, data to claim that this hypothesis has no physical grounds, and we have to find new conditions for fragmentation. ## 5 Conclusion Only in recent decades, and particularly in recent years, has the impact hazard attracted the attention of more and more scientists. Evaluations of impact frequencies and damage are made by means of empirical or semiempirical formulas. However, we are faced with scarce, and often contradictory, data. For example, Chapman and Morrison (1994) considered an impact frequency of one Tunguska–like event every 250 yr by using data from lunar craters, while ReVelle obtains a higher frequency for the same kind of objects (1 per 100 yr) by considering data from airbursts. The main problem is the fragmentation mechanism, which is still unclear. Observations show that fragmentation occurs when the dynamical pressure is lower than the mechanical strength. We do not know whether this is due to some special feature of the hypersonic flow around the body or to some peculiarity of the matter in the body. Today all that we can say is that current models of the fragmentation of small asteroids in the Earth's atmosphere *are not consistent* with observations. We require more data and theories to understand the matter better. Airbursts can give us useful data to test theories. ## 6 Acknowledgements Part of this work was already presented at the IMPACT Workshop (1999). I wish to thank the International Astronomical Union for a grant that allowed me to attend the IMPACT Workshop in Torino. Some of the ideas exposed here arose from discussions with Zdenek Ceplecha during a visit to the Ondřejov Astronomical Observatory: I wish to thank Z. Ceplecha, his wife Hana, and the meteor scientists at the observatory for their kind hospitality.
# Binding of molecules to DNA and other semiflexible polymers ## I Introduction Aqueous solutions containing polymers and small associating molecules such as folded proteins and amphiphiles (surfactants) are commonly found in biological systems and industrial applications. As a result, extensive efforts have been devoted in the past few decades to the study of polymer–surfactant interactions. In addition, there has been growing interest in the interactions between DNA macromolecules and surfactants, lipids and short polyamines. These interactions are relevant to various biochemical applications such as DNA extraction and purification and genetic delivery systems. Association of folded proteins (e.g., RecA) with DNA plays a key role in genetic regulatory mechanisms. Structural details of this association have been studied in recent experiments. Recently, we have presented a general theory for the self-assembly in aqueous solutions of polymers and smaller associating molecules. Two different scenarios emerge, depending on the flexibility of the polymer. If the polymer is flexible enough, it actively participates in the self-assembly, resulting in mixed aggregates jointly formed by the two species. The polymer conformation changes considerably upon self-assembly but remains extended on a global scale, as the chain undergoes only partial collapse. On the other hand, if the polymer is stiff, partial collapse is inhibited. The criterion determining the ‘flexible’ vs. ‘stiff’ scenarios concerns the polymer statistics on a mesoscopic length scale characterizing correlations in the solution (usually a few nanometers). It was found that the flexible (stiff) scenario holds if the exponent $`\nu `$, relating the number of monomers $`N`$ to the spatial size $`R`$ they occupy, $`R\sim N^\nu `$, is smaller (larger) than $`2/d`$ on that length scale ($`d`$ being the dimensionality). This distinction is analogous to the one made in the critical behavior of certain disordered systems: if the critical exponent $`\nu `$ of a system satisfies $`\nu <2/d`$, the critical behavior would be smeared by impurities (in analogy to the partial collapse), whereas if $`\nu >2/d`$, the critical point remains intact. Indeed, neutral flexible polymers in three dimensions, having $`\nu \simeq 3/5<2/3`$, are found by scattering experiments to associate with surfactants in the form of a ‘chain of wrapped aggregates’. On the other hand, stiff DNA molecules, having $`\nu =1`$ on the relevant length scale, are found to either remain unperturbed by surfactant binding, or undergo a discontinuous coil-to-globule transition, provided the chain is much longer than the persistence length. In previous publications we concentrated on the flexible case and the corresponding partial collapse, where the polymer degrees of freedom play an important role. In the opposite extreme limit of stiff, rod-like molecules, the conformational degrees of freedom of the polymer can be neglected and the chain may be regarded as a linear ‘binding substrate’. Models for stiff polymers, inspired by the Zimm-Bragg theory, treat the bound molecules as a one-dimensional lattice-gas (or Ising) system with nearest-neighbor interactions. They have been widely used to fit experimental binding isotherms for polyelectrolytes and oppositely charged surfactants. Recently, more detailed electrostatic models have been proposed for the interaction between rod-like polyelectrolytes and oppositely charged surfactants.
In addition, a theoretical work focusing on the specific binding of proteins to DNA has been recently presented, treating a pair of bound proteins as geometrically constraining inclusions on the DNA chain. In the current work we address the intermediate case of semiflexible polymers. The polymer we consider is stiff in the sense defined above, i.e., its persistence length, $`l_\mathrm{p}`$, exceeds several nanometers and, hence, the polymer is characterized by $`\nu =1>2/3`$ on that length scale. The total chain length, however, is considered to be much larger than $`l_\mathrm{p}`$, and the entire polymer cannot be regarded, therefore, as a single rigid rod. This case corresponds, in particular, to experiments on long DNA molecules, whose persistence length is typically very large (of order 50 nm), but much smaller than the total chain length (which is usually larger than a micron). We argue that such an intermediate system may, in certain cases, be governed by different physics. Although the polymer is too stiff to change conformation and actively participate in the self-assembly, its degrees of freedom induce attractive correlations between bound molecules. Those fluctuation-induced correlations are weak but have a long spatial range (of order $`l_\mathrm{p}`$) and, hence, may strongly affect the binding thermodynamics. The model is presented in Sec. II. Bound molecules are assumed to modify the local features of polymer conformation, e.g., change its local stiffness. In the limit of weak coupling, our model reduces to the Kac-Baker model, which is solvable exactly. This limit is discussed in Sec. III. Although turning out to be of limited interest in practice, the weak-coupling limit provides insight into the mechanism of association, and helps us justify further approximations. Section IV presents a mean-field calculation for an arbitrary strength of coupling. This analysis leads to our main conclusions, and in Sec. V it is extended to polymers under external tension. The results are summarized in Sec. VI, where we also discuss several relevant experiments involving DNA and point at future directions. ## II The Model Small molecules bound to stiff polymers are commonly modeled as a one-dimensional lattice gas (or Ising system). Each monomer serves as a binding site, which can either accommodate a small molecule or be empty, and the surrounding dilute solution is considered merely as a bulk reservoir of small molecules. In the current work we stay at the level of a one-dimensional model, assuming that the polymer is still quite (yet not infinitely) stiff, i.e., the persistence length is much larger than the monomer size. In addition, a dilute polymer limit is assumed, where inter-chain effects can be neglected. We focus on the effect of introducing the polymer degrees of freedom and, hence, seek a simple meaningful coupling between the polymer and the bound ‘lattice gas’. A polymer configuration is defined by a set of vectors, $`\{𝐮_n\}_{n=1\dots N}`$, specifying the lengths and orientations of the $`N`$ monomers. In addition, each monomer serves as a binding site which can be either empty ($`\phi _n=0`$) or occupied by a small molecule ($`\phi _n=1`$). A configuration of the entire system is defined, therefore, by specifying $`\{𝐮_n,\phi _n\}_{n=1\dots N}`$. Since the polymer is assumed to be locally stiff, a natural choice would be to couple $`\phi _n`$ with the square of the local chain curvature, $`\phi _n(𝐮_{n+1}-𝐮_n)^2`$, thus modifying the local chain stiffness.
However, in the usual Kratky-Porod worm-like-chain model of semiflexible polymers, chain segments are taken as rigid rods of fixed length ($`|𝐮_n|=\text{const}`$), and each squared-curvature term contains only one degree of freedom (e.g., the angle $`\theta _n`$ between $`𝐮_n`$ and $`𝐮_{n+1}`$). Consequently, this coupling, $`\phi _n\mathrm{cos}\theta _n`$, would leave $`\{\phi _n\}`$ uncorrelated, leading merely to a trivial shift in the chemical potential of bound molecules. One option to proceed is to consider higher-order extensions of the worm-like-chain Hamiltonian, involving three consecutive monomers. This will introduce correlations between bound molecules at different sites. We take, however, a simpler route and modify the worm-like-chain model by allowing the monomer length to fluctuate. This modification was originally presented by Harris and Hearst, using a single global constraint for the average chain length. The modified model was shown to successfully reproduce the results of the Kratky-Porod model as far as thermodynamic averages (e.g., correlation functions, radius of gyration) were concerned. It was less successful, however, in recovering more detailed statistics of the worm-like chain (e.g., distribution function, form factor), particularly in the limit of large stiffness. The Harris-Hearst model was later refined by Lagowski et al. and Ha and Thirumalai, replacing the single global constraint by a set of local constraints for the average segment lengths. This further modification was shown to be equivalent to a stationary-phase approximation for the chain partition function, yielding reliable results for average quantities, as well as more detailed statistics. We note that a similar approach was used in a recent model of semiflexible polymer collapse. It should be borne in mind that, despite its success in the past, the constraint relaxation remains essentially an uncontrolled approximation. In the current work we restrict ourselves to thermodynamic averages, such as monomer-monomer correlations and free energies, for which the modified model with a single global constraint can be trusted. Thus, the rigid constraints of the original Kratky-Porod model, $`u_n^2=1`$, are relaxed into thermodynamic-average ones, $`\langle u_n^2\rangle =1`$, where the mean-square monomer size is taken hereafter as the unit length. Using the modified model for the chain, each $`\phi _n(𝐮_{n+1}-𝐮_n)^2`$ term involves two consecutive monomers (and not merely the angle between them), leading to a meaningful coupling between binding and polymer conformation. The partition function of the combined system of polymer and bound molecules is written, therefore, as $`Z`$ $`=`$ $`\underset{\{\phi _n=0,1\}}{\text{Tr}}{\displaystyle \underset{n=1}{\overset{N}{}}\mathrm{d}𝐮_n\mathrm{exp}(-ℋ)}`$ (1) $`ℋ`$ $`=`$ $`{\displaystyle \frac{3}{4}}l_\mathrm{p}{\displaystyle \underset{n=1}{\overset{N-1}{}}}(1+ϵ\phi _n)(𝐮_{n+1}-𝐮_n)^2+{\displaystyle \underset{n=1}{\overset{N}{}}}\lambda _nu_n^2-\mu {\displaystyle \underset{n=1}{\overset{N}{}}}\phi _n.`$ (2) In Eq. (2) $`l_\mathrm{p}`$ is the persistence length of the bare chain, characterizing its intrinsic stiffness. It is assumed to be much larger than the monomer size, $`l_\mathrm{p}\gg 1`$. The coupling is introduced through the stiffness term, assuming that a bound molecule modifies the local stiffness by a fraction $`ϵ>-1`$, which may be either negative or positive but cannot change the positive sign of the overall stiffness term.
The second term contains a set of multipliers, $`\lambda _n`$, to be chosen so that the constraints $`\langle u_n^2\rangle =1`$ are satisfied. However, replacement of the entire set $`\{\lambda _n\}`$ by a single multiplier $`\lambda `$ can be shown to yield a non-extensive correction, which becomes negligible in the limit $`N\to \infty `$. Hence, we use hereafter a single multiplier, $`\lambda `$. Finally, the system is assumed to be in contact with a reservoir of solute molecules. The last term in Eq. (2) accounts for this contact along with any other factors which couple linearly to the degree of binding. Typically, $`\mu `$ contains the chemical potential of the solute reservoir and the direct energy of solute molecule–monomer binding. (All energies in this work are expressed in units of the thermal energy, $`k_\mathrm{B}T`$.) Note that we have not included in Eq. (2) any direct short-range (e.g., nearest-neighbor) interactions between bound molecules. Thus, all interactions in the model arise from the coupling to the polymer degrees of freedom. Short-range interactions between bound molecules do exist in physical systems. Yet, in the limit of $`l_\mathrm{p}\gg 1`$ and $`|ϵ|\gtrsim 1`$, which is of interest to the current work, such direct interactions have a minor effect on binding, as is demonstrated in the following sections. Hence, we omit them for the sake of brevity. As a reference, let us start with the previously studied partition function of the bare polymer, $$Z_\mathrm{p}=\underset{n}{}\mathrm{d}𝐮_n\mathrm{exp}[-\frac{3}{4}l_\mathrm{p}\underset{n}{}(𝐮_{n+1}-𝐮_n)^2-\lambda \underset{n}{}u_n^2].$$ (3) It is a Gaussian integral which can be calculated either by transforming it to Fourier space and integrating, or by analogy to the path integral of a three-dimensional quantum oscillator. The result in the limit $`N\to \infty `$ and for $`l_\mathrm{p}\gg 1`$ is $$Z_\mathrm{p}^{1/N}=\left(\frac{4}{3\pi l_\mathrm{p}}\right)^{3/2}\mathrm{exp}\left(-\sqrt{3\lambda /l_\mathrm{p}}\right).$$ (4) The multiplier $`\lambda `$ can now be determined according to $$-\frac{1}{N}\frac{\partial \mathrm{log}Z_\mathrm{p}}{\partial \lambda }=\langle u_n^2\rangle _\mathrm{p}=1\Rightarrow \lambda =\frac{3}{4l_\mathrm{p}},$$ (5) where $`\langle \cdots \rangle _\mathrm{p}`$ denotes a thermal average over the bare chain statistics (i.e., using $`Z_\mathrm{p}`$). The corresponding free energy per monomer (in the ensemble of constrained $`𝐮_n`$) is $$f_\mathrm{p}=-\frac{1}{N}\mathrm{log}Z_\mathrm{p}-\lambda =\frac{3}{2}\mathrm{log}l_\mathrm{p}+\frac{3}{4l_\mathrm{p}}+\text{const}.$$ (6) Various correlations in the bare chain can be calculated. The pair correlation between segment vectors along the chain sequence is $$\langle 𝐮_m\cdot 𝐮_n\rangle _\mathrm{p}=\mathrm{e}^{-|m-n|/l_\mathrm{p}},$$ (7) which explains why the parameter $`l_\mathrm{p}`$ has been defined as the persistence length. Two higher-order pair correlations are calculated as well: $`g_1`$ $`\equiv `$ $`\langle (𝐮_{n+1}-𝐮_n)^2\rangle _\mathrm{p}={\displaystyle \frac{2}{l_\mathrm{p}}}+𝒪(l_\mathrm{p}^{-2})`$ (8) $`g_2(m,n)`$ $`\equiv `$ $`\langle (𝐮_{m+1}-𝐮_m)^2(𝐮_{n+1}-𝐮_n)^2\rangle _\mathrm{p}-g_1^2={\displaystyle \frac{8}{3l_\mathrm{p}^3}}\mathrm{e}^{-2|m-n|/l_\mathrm{p}}+𝒪(l_\mathrm{p}^{-4}),`$ (9) and will be of use in the next section, where we re-examine the coupled system.
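Since the modified model is Gaussian, Eq. (7) can be checked numerically in a few lines. Per Cartesian component the Boltzmann weight is $`\mathrm{exp}(-𝐱^TM𝐱)`$ with $`M=\frac{3}{4}l_\mathrm{p}L+\lambda I`$ ($`L`$ being the open-chain difference Laplacian), so the component covariance is $`M^{-1}/2`$ and $`\langle 𝐮_m\cdot 𝐮_n\rangle =3(M^{-1}/2)_{mn}`$. A minimal sketch with illustrative sizes (not a production calculation):

```python
import numpy as np

N, l_p = 2000, 50.0
lam = 3.0 / (4.0 * l_p)                      # multiplier from Eq. (5)

# Open-chain (free-end) one-dimensional Laplacian of the squared differences.
L = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
L[0, 0] = L[-1, -1] = 1.0
M = 0.75 * l_p * L + lam * np.eye(N)

C = 0.5 * np.linalg.inv(M)                   # per-component covariance
m = N // 2                                   # a monomer deep in the bulk
for s in (0, 10, 50, 100):
    print(s, 3.0 * C[m, m + s], np.exp(-s / l_p))
# The two columns agree up to O(1/l_p) discreteness corrections: <u_n^2> = 1
# is satisfied and correlations decay as exp(-|m-n|/l_p), confirming l_p as
# the persistence length, Eq. (7).
```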
## III Weak Coupling Let us return to the full partition function (2), which can be equivalently written as $$Z=Z_\mathrm{p}\underset{\{\phi _n\}}{\text{Tr}}\mathrm{exp}(\mu \underset{n}{}\phi _n)\langle \mathrm{exp}[-\frac{3l_\mathrm{p}ϵ}{4}\underset{n}{}\phi _n(𝐮_{n+1}-𝐮_n)^2]\rangle _\mathrm{p}.$$ (10) First we consider the weak-coupling limit, $`|ϵ|\ll 1`$, where the partition function (10) can be treated by a cumulant expansion. In this limit the model becomes analogous to the exactly solvable Kac-Baker model, and we show that identical results are derived from a simple mean-field calculation. We then use this observation to justify a mean-field calculation for an arbitrary value of $`ϵ`$. A cumulant expansion of Eq. (10) to 2nd order in $`ϵ`$ leads to $$Z\approx Z_\mathrm{p}\underset{\{\phi _n\}}{\text{Tr}}\mathrm{exp}\left[\left(\mu -\frac{3l_\mathrm{p}ϵ}{4}g_1\right)\underset{n}{}\phi _n+\frac{1}{2}\left(\frac{3l_\mathrm{p}ϵ}{4}\right)^2\underset{m,n}{}g_2(m,n)\phi _m\phi _n\right],$$ (11) where the correlations $`g_1`$ and $`g_2`$ were defined in Eq. (9). Substituting expressions (9), the partition function is decoupled into a polymer contribution and an effective contribution from the bound solute molecules, $`Z`$ $`\approx `$ $`Z_\mathrm{p}Z_\mathrm{s}=Z_\mathrm{p}\underset{\{\phi _n\}}{\text{Tr}}\mathrm{exp}(-ℋ_\mathrm{s})`$ (12) $`ℋ_\mathrm{s}`$ $`=`$ $`-{\displaystyle \frac{1}{2}}{\displaystyle \underset{m\ne n}{}}V_{mn}\phi _m\phi _n-\widehat{\mu }{\displaystyle \underset{n}{}}\phi _n,`$ (13) where $`V_{mn}`$ $`\equiv `$ $`{\displaystyle \frac{3ϵ^2}{2l_\mathrm{p}}}\mathrm{e}^{-2|m-n|/l_\mathrm{p}}`$ (14) $`\widehat{\mu }`$ $`\equiv `$ $`\mu -{\displaystyle \frac{3ϵ}{2}}+{\displaystyle \frac{3ϵ^2}{4l_\mathrm{p}}}.`$ (15) The introduction of the polymer degrees of freedom and their coupling to the binding ones have led to two effects, as compared to previous lattice-gas theories. First, there is a shift in the chemical potential, $`\mu \to \widehat{\mu }`$. This is equivalent to an effective change in the affinity between the small molecules and the chain. As expected, if binding strengthens the local stiffness of the chain ($`ϵ>0`$), the affinity is reduced (i.e., the isotherm is shifted to higher chemical potentials), whereas if it weakens the stiffness ($`ϵ<0`$), the shift is to lower $`\mu `$. The second, more interesting effect is that bound molecules experience an attractive potential, $`V_{mn}`$, along the chain. The amplitude of this effective interaction is small ($`\sim ϵ^2/l_\mathrm{p}`$), but its range is large, of order $`l_\mathrm{p}`$. When $`l_\mathrm{p}`$ is increased there are two opposing consequences: the interaction amplitude diminishes, while the interaction range is extended. The overall effect on the thermodynamics of binding, therefore, has to be checked in detail. ### A Analogy with the Kac-Baker Model The effective Hamiltonian of the bound solute, $`ℋ_\mathrm{s}`$, is a lattice-gas version of the Kac-Baker model, which is exactly solvable. Moreover, the procedure relevant to our semiflexible polymer, i.e., increasing $`l_\mathrm{p}`$ while keeping $`1\ll l_\mathrm{p}\ll N`$, is precisely the one studied in detail by Kac and Baker. Their results, as applied to our binding problem, can be summarized as follows. For any finite $`l_\mathrm{p}`$, the bound molecules are always in a disordered state along the polymer chain, as in any one-dimensional system with finite-range interactions. Consequently, the binding isotherm, i.e., the binding degree $`\phi \equiv \langle \phi _n\rangle `$ as function of $`\mu `$ (see, e.g., Fig. 2a), is a continuous curve.
However, in the limit $`l_\mathrm{p}\to \infty `$, taken after the infinite-chain limit $`N\to \infty `$, there is a critical value of coupling above which the binding exhibits a discontinuous (1st-order) transition. According to Baker's rigorous calculation, the critical value of the potential amplitude multiplied by $`l_\mathrm{p}`$ (equal, in our case, to $`3ϵ_\mathrm{c}^2/2`$) is 4, i.e., $$ϵ_\mathrm{c}^\pm =\pm \sqrt{8/3}\approx \pm 1.63.$$ (16) Note that the symmetry with respect to the sign of $`ϵ`$ is merely an artificial consequence of our 2nd-order expansion, Eq. (11). In general, the results should not be the same if the stiffness is weakened ($`ϵ<0`$) or strengthened ($`ϵ>0`$), as is demonstrated in Sec. IV. The negative critical value in Eq. (16), $`ϵ_\mathrm{c}^{-}\approx -1.63`$, lies outside the range of validity of the original polymer binding model, $`ϵ>-1`$ \[cf. Eq. (2)\]. The positive value, $`ϵ_\mathrm{c}^+\approx 1.63`$, does not satisfy the assumption of weak coupling, $`|ϵ|\ll 1`$, which led to the analogy with the Kac-Baker model in the first place. Thus, the sharp binding isotherms obtained from the Kac-Baker model for $`|ϵ|>ϵ_\mathrm{c}`$ do not apply, strictly speaking, to our polymer binding problem. The weak-coupling calculation does demonstrate, however, how fluctuations in polymer conformation induce long-range attraction between bound molecules. This basic feature is expected to remain when one considers stronger coupling, $`|ϵ|>1`$, and the resulting many-body terms omitted in Eq. (11). This is further discussed in the following sections. Finally, the polymers we consider have a large but finite $`l_\mathrm{p}`$. For example, the persistence length of a DNA macromolecule is typically of order 50–100 nm, whereas the length of a single base pair is $`0.34`$ nm. Hence, $`l_\mathrm{p}`$ is of order $`10^2`$ (in units of monomer length). It is worth checking to what extent the sharpness of binding in the Kac-Baker model for $`|ϵ|>ϵ_\mathrm{c}`$ is affected by finite $`l_\mathrm{p}`$. For this purpose, let us define a cooperativity parameter for the binding, measuring the maximum slope of the binding isotherm, $$C\equiv \frac{\partial \phi }{\partial \mu }|_{\mathrm{max}}-\frac{1}{4}.$$ (17) This parameter is equivalent to the zero magnetic field susceptibility in the analogous spin system, and is commonly measured from the slope of binding isotherms obtained in potentiometric experiments. It has been defined in Eq. (17) so as to yield zero for vanishing interaction ($`ϵ=0`$) and diverge at a critical point. (In the current weak-coupling limit, the maximum slope is obtained for $`\phi =1/2`$.) Given $`l_\mathrm{p}`$ and $`ϵ`$, the cooperativity is numerically calculated using Kac's exact solution, as is explained in the Appendix. Figure 1 presents the results for $`l_\mathrm{p}=10`$ and 50. For $`l_\mathrm{p}=50`$ the binding becomes highly cooperative for $`|ϵ|>ϵ_\mathrm{c}`$. For even larger values of $`l_\mathrm{p}\sim 10^2`$ (relevant, e.g., to DNA) the binding will be hardly distinguishable from that of an infinite $`l_\mathrm{p}`$. ### B Mean-Field Calculation In fact, the results of the Kac-Baker model in the limit $`N\to \infty ,l_\mathrm{p}\to \infty `$, while keeping $`l_\mathrm{p}<N`$, can be also obtained from a simple mean-field calculation. The heuristic argument for this agreement is the following: as $`l_\mathrm{p}`$ is increased, the range of interaction is extended and each bound molecule interacts with an increasing number of neighbors.
As a result, the averaging assumption underlying the mean-field approximation is justified, and becomes exact when the range of interaction is taken to infinity. The correspondence between infinite-range models and mean field was rigorously proved by Lebowitz and Penrose for a more general class of potentials. Indeed, employing a mean-field approximation for the potential (15) in the limit of very large $`l_\mathrm{p}`$, $`{\displaystyle \underset{m\ne n}{}}V_{mn}\phi _m\phi _n\approx {\displaystyle \frac{3ϵ^2}{2l_\mathrm{p}}}\left({\displaystyle \underset{m\ne n}{}}\mathrm{e}^{-2|m-n|/l_\mathrm{p}}\right)\phi ^2\approx {\displaystyle \frac{3ϵ^2}{2}}N\phi ^2,`$ where $`\phi `$ is an average, uniform binding degree, we are led to the following mean-field free energy per monomer: $$f=f_\mathrm{p}+f_\mathrm{s}\approx f_\mathrm{p}+\phi \mathrm{log}\phi +(1-\phi )\mathrm{log}(1-\phi )-\frac{3ϵ^2}{4}\phi ^2-\widehat{\mu }\phi ,\text{for }l_\mathrm{p}\to \infty .$$ (18) It is easily verified that the critical point of this free energy is $`ϵ_\mathrm{c}^2=8/3`$, in agreement with the rigorous result, Eq. (16). The cooperativity parameter can be calculated as well from Eq. (18), yielding $$C=\frac{ϵ^2}{4(ϵ_\mathrm{c}^2-ϵ^2)},\text{for }l_\mathrm{p}\to \infty .$$ (19) This expression shows the usual critical behavior obtained from mean-field theories, $`C\sim |ϵ-ϵ_\mathrm{c}|^{-\gamma }`$ with $`\gamma =1`$. The dependence of $`C`$ on $`ϵ`$ according to Eq. (19) is shown by the solid line in Fig. 1. The curves obtained from Kac's solution approach it, as expected, when $`l_\mathrm{p}`$ is increased. Recall that expressions (18) and (19) correspond to the original problem of bound molecules only in the limit of small $`ϵ`$. ## IV Strong Coupling The interesting part of our theory requires $`|ϵ|\gtrsim 1`$ and thus limits the interest in the analogy to the Kac-Baker model. Nevertheless, based on the heuristic argument given above, it is reasonable to assume that, in the limit $`l_\mathrm{p}\gg 1`$, the mean-field approximation is good for larger values of $`|ϵ|`$ as well. The preceding section, discussing the Kac-Baker model in the weak-coupling limit, may be regarded, therefore, as a justification for using the mean-field approximation for one-dimensional models with large $`l_\mathrm{p}`$ and $`|ϵ|\gtrsim 1`$. Applying a mean-field approximation to the binding degrees of freedom $`\phi _n`$ in our starting point, Eq. (2), the tracing over $`u_n`$ can be done exactly. The resulting free energy is composed of the polymer free energy, $`f_\mathrm{p}`$, evaluated with an effective persistence length, $`l_\mathrm{p}\to l_\mathrm{p}(1+ϵ\phi )`$, and the entropy of mixing for $`\phi `$, $$f=f_\mathrm{p}|_{l_\mathrm{p}\to l_\mathrm{p}(1+ϵ\phi )}+\phi \mathrm{log}\phi +(1-\phi )\mathrm{log}(1-\phi )-\mu \phi .$$ (20) Using Eq. (6), we obtain $$f=\phi \mathrm{log}\phi +(1-\phi )\mathrm{log}(1-\phi )+\frac{3}{2}\mathrm{log}[l_\mathrm{p}(1+ϵ\phi )]+\frac{3}{4l_\mathrm{p}(1+ϵ\phi )}-\mu \phi .$$ (21) For $`ϵ\ll 1`$ and $`l_\mathrm{p}\gg 1`$ this expression reduces, as expected, to our previous result for the weak-coupling limit, Eq. (18). In the limit $`l_\mathrm{p}\gg 1`$ the critical points of the free energy (21) are $$ϵ_\mathrm{c}^{-}=\frac{2}{3}\left(2-\sqrt{10}\right)\approx -0.775,ϵ_\mathrm{c}^+=\frac{2}{3}\left(2+\sqrt{10}\right)\approx 3.44,$$ (22) both of which lie within our general range of validity, $`ϵ>-1`$. (Note the loss of symmetry with respect to the sign of $`ϵ`$, which was a consequence of the weak-coupling approximation in Sec. III.)
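As a quick numerical illustration (a sketch, not part of the original calculation), one can scan the isotherm implied by Eq. (21) in the $`l_\mathrm{p}\gg 1`$ limit, $`\mu (\phi )=\mathrm{log}[\phi /(1-\phi )]+3ϵ/[2(1+ϵ\phi )]`$ \[written out as Eq. (24) below\], and watch its slope change sign at the positive critical coupling of Eq. (22):

```python
import numpy as np

eps_c = (2.0 / 3.0) * (2.0 + np.sqrt(10.0))   # Eq. (22), ~3.44

phi = np.linspace(1e-3, 1.0 - 1e-3, 4001)
for eps in (0.5 * eps_c, eps_c, 1.5 * eps_c):
    mu = np.log(phi / (1.0 - phi)) + 3.0 * eps / (2.0 * (1.0 + eps * phi))
    # Above eps_c the minimum slope goes negative: mu(phi) is non-monotonic,
    # i.e. the isotherm phi(mu) develops a van der Waals-like loop and the
    # binding becomes a discontinuous transition.
    print(f"eps/eps_c = {eps/eps_c:.1f}: min d(mu)/d(phi) = "
          f"{np.gradient(mu, phi).min():+.4f}")
```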
The corresponding critical chemical potentials are $$\mu _\mathrm{c}^\pm =\frac{3ϵ_\mathrm{c}^\pm (ϵ_\mathrm{c}^\pm +2)}{4(ϵ_\mathrm{c}^\pm +1)}-\mathrm{log}(ϵ_\mathrm{c}^\pm +1)\approx \pm 1.67.$$ (23) The binding isotherm, $`\phi =\phi (\mu )`$, as derived from Eq. (21), satisfies $$\mu =\mathrm{log}\frac{\phi }{1-\phi }+\frac{3ϵ}{2(1+ϵ\phi )},l_\mathrm{p}\gg 1.$$ (24) Figure 2a shows three binding isotherms for three different values of $`ϵ`$ below and above the critical point. The corresponding binding cooperativity is $$C=\frac{8(1+ϵ)^2}{3(2+ϵ)^2(ϵ-ϵ_\mathrm{c}^{-})(ϵ_\mathrm{c}^+-ϵ)}-\frac{1}{4},l_\mathrm{p}\gg 1.$$ (25) As in Eq. (19), this expression exhibits the usual mean-field critical behavior, $`C\sim |ϵ-ϵ_\mathrm{c}|^{-\gamma }`$ with $`\gamma =1`$. The dependence of $`C`$ on $`ϵ`$ is plotted in Fig. 2b. Finally, the binding phase diagram arising from Eq. (21) in the limit $`l_\mathrm{p}\gg 1`$ is depicted in Fig. 3. At the lower limit of model validity, $`ϵ\to -1`$, the spinodal approaches a finite value, $`\mu _{\mathrm{sp}}=\mathrm{log}(2/3)-5/2\approx -2.91`$, whereas the binodal diverges. Indeed, for $`ϵ\to -1`$ the free energy (21) tends to $`-\infty `$ for $`\phi =1`$, regardless of the value of $`\mu `$, and the binodal is thus obtained at $`\mu \to -\infty `$. In this respect, the limit $`ϵ=-1`$ for the bound molecules is similar to the limit of zero temperature: the induced interaction is so strong that the molecules condense for any value of the chemical potential. Note that in this special limit, $`ϵ\to -1,\phi \to 1`$, the effective stiffness, $`l_\mathrm{p}(1+ϵ\phi )`$, becomes vanishingly small. This limit cannot be accurately treated within the continuum form of the semiflexible polymer Hamiltonian. Equations (22)–(25) and the phase diagrams in Fig. 3 summarize the results obtained so far. They indicate that in cases of semiflexible polymers, where binding of small molecules significantly affects local chain features, the binding should be a very sharp process. For finite $`l_\mathrm{p}`$ the slope of the binding isotherm is finite, i.e., the binding is always continuous, yet for $`l_\mathrm{p}\sim 10^2`$ like in DNA, the behavior will be practically indistinguishable from a discontinuous phase transition. It should be borne in mind that the sharp binding, obtained despite the one-dimensionality of the model, relies on the long range of the induced interaction. A direct short-range interaction between bound molecules could not produce a similar effect. Hence, such a short-range interaction (e.g., a nearest-neighbor interaction), which was omitted in Eq. (2) for the sake of brevity, does not have an important effect on the binding in the domain of interest, i.e., $`l_\mathrm{p}\gg 1`$ and $`|ϵ|\gtrsim 1`$. ## V Chains under Tension In addition, we consider binding to semiflexible chains which are subject to external tension. This scenario is relevant to recent single-molecule manipulation experiments. Since the tension suppresses chain fluctuations, it is expected to have a significant effect on the fluctuation-induced mechanism presented in the preceding sections. In order to incorporate the external tension into our model, a term is to be added to the chain Hamiltonian \[cf. Eq. (2)\], $`Z`$ $`=`$ $`\underset{\{\phi _n=0,1\}}{\text{Tr}}{\displaystyle \underset{n=1}{\overset{N}{}}\mathrm{d}𝐮_n\mathrm{exp}(-ℋ_\mathrm{t})}`$ (26) $`ℋ_\mathrm{t}`$ $`=`$ $`ℋ-𝐭\cdot {\displaystyle \underset{n=1}{\overset{N}{}}}𝐮_n,`$ (27) where $`ℋ`$ has been defined in Eq. (2), and $`𝐭`$ is the exerted tension (in units of $`k_\mathrm{B}T`$ divided by monomer length).
As in Sec. II, we begin with the previously studied problem of a bare semiflexible chain, yet it is now a chain under tension. The additional tension term has not changed the Gaussian form of the polymer part of $`Z`$. It can be calculated, therefore, in a similar way to that of Sec. II, yielding $$Z_{\mathrm{pt}}^{1/N}=Z_\mathrm{p}^{1/N}\mathrm{exp}(t^2/4\lambda ),$$ (28) where $`Z_\mathrm{p}`$ is the tensionless polymer partition function given in Eq. (4). The equation for the multiplier $`\lambda `$ is, in this case, $$\frac{1}{2}\left(\frac{3}{l_\mathrm{p}\lambda }\right)^{1/2}+\frac{t^2}{4\lambda ^2}=1,$$ (29) which reduces to Eq. (5) for $`t=0`$. The resulting polymer free energy is $$f_{\mathrm{pt}}=\frac{3}{2}\mathrm{log}l_\mathrm{p}+\left(\frac{3\lambda }{l_\mathrm{p}}\right)^{1/2}-\frac{t^2}{4\lambda }-\lambda ,$$ (30) where $`\lambda =\lambda (l_\mathrm{p},t)`$ is the solution to Eq. (29). For $`l_\mathrm{p}t\ll 1`$, the solution for $`\lambda `$ is $`\lambda \simeq {\displaystyle \frac{3}{4l_\mathrm{p}}}\left[1+{\displaystyle \frac{8}{9}}(l_\mathrm{p}t)^2+𝒪(l_\mathrm{p}t)^4\right],`$ and the free energy becomes $$f_{\mathrm{pt}}\simeq f_\mathrm{p}-\frac{l_\mathrm{p}}{3}t^2+𝒪(l_\mathrm{p}^3t^4),\phantom{\rule{2em}{0ex}}t\ll 1/l_\mathrm{p},$$ (31) where $`f_\mathrm{p}`$ is the tensionless free energy given in Eq. (6). This is the elastic regime, where the energy is quadratic (i.e., the relative chain extension is linear) in tension. Since we assume a large persistence length, this regime corresponds to very weak tension, $`t\ll 1/l_\mathrm{p}\ll 1`$. In the opposite limit, $`l_\mathrm{p}t\gg 1`$, the solution to Eq. (29) becomes $`\lambda \simeq {\displaystyle \frac{t}{2}}\left[1+{\displaystyle \frac{1}{2}}\left({\displaystyle \frac{3}{2l_\mathrm{p}t}}\right)^{1/2}+𝒪(l_\mathrm{p}t)^{-1}\right],`$ and the corresponding free energy is $$f_{\mathrm{pt}}\simeq \frac{3}{2}\mathrm{log}l_\mathrm{p}-t+\left(\frac{3t}{2l_\mathrm{p}}\right)^{1/2}+𝒪(l_\mathrm{p}^{-1}t^0),\phantom{\rule{2em}{0ex}}t\gg 1/l_\mathrm{p}.$$ (32) In this regime the chain extension changes like the inverse square root of the tension. Let us turn now to the effect of tension on the system of polymer and bound molecules, Eq. (27). As in Sec. IV, we employ the mean-field approximation, valid for $`l_\mathrm{p}\to \infty `$. The resulting free energy is the same as Eq. (21), but with $`f_{\mathrm{pt}}`$ instead of $`f_\mathrm{p}`$, $$f=f_{\mathrm{pt}}|_{l_\mathrm{p}\to l_\mathrm{p}(1+ϵ\phi )}+\phi \mathrm{log}\phi +(1-\phi )\mathrm{log}(1-\phi )-\mu \phi .$$ (33) Due to the additional degree of freedom, namely tension, the binding phase diagrams of Fig. 3 become three-dimensional. In particular, the critical points $`ϵ_\mathrm{c}^\pm `$ become critical lines, $`ϵ_\mathrm{c}^\pm (t)`$. (Note that $`𝐭`$ is an external field coupled to $`\{𝐮_n\}`$ rather than $`\{\phi _n\}`$, and, hence, it does not destroy the critical behavior.) The ‘condensation’ of bound molecules in our model results from attraction induced by polymer fluctuations. By suppressing fluctuations, the tension should weaken the attraction and shift the critical coupling to higher values, i.e., increase the positive critical point, $`ϵ_\mathrm{c}^{+}`$, and decrease the negative one, $`ϵ_\mathrm{c}^{-}`$. Using Eqs. (29), (30) and (33), the critical lines, $`ϵ_\mathrm{c}^\pm (t)`$, can be calculated. The results are shown in Fig. 4. Before getting into the detailed effect of tension, we address the question of whether the critical behavior can survive any strength of tension.
In this respect there is an essential difference between stiffness-strengthening binding ($`ϵ>0`$) and stiffness-weakening binding ($`ϵ<0`$). In the former case, since the value of $`ϵ`$ is unbounded, there exists $`ϵ_\mathrm{c}^{+}(t)`$ for any value of $`t`$, such that the binding is a sharp transition for $`ϵ>ϵ_\mathrm{c}^{+}(t)`$. In other words, the critical line $`ϵ_\mathrm{c}^{+}(t)`$ exists for any $`0\le t<\infty `$. Indeed, substituting $`ϵ\to \infty `$ in Eq. (33) while using Eq. (32), we find that the free energy always describes a sharp transition, regardless of the value of $`t`$. On the other hand, in the latter case of stiffness-weakening binding, there is a lower bound for $`ϵ`$, $`ϵ>-1`$, below which the validity of the entire approach breaks down (see previous section). Substituting $`ϵ=-1`$ in Eqs. (33) and (32), we find that a critical point exists only for $`t<t^{\ast }`$, where $$\frac{t^{\ast }}{l_\mathrm{p}}=\frac{4}{9}\left(33-7\sqrt{21}\right)\approx 0.410.$$ (34) Thus, the critical line $`ϵ_\mathrm{c}^{-}(t)`$ terminates at the point $`(t^{\ast },ϵ_\mathrm{c}^{-}=-1)`$, beyond which a sharp binding transition cannot be attained. This situation is similar to a case where the critical temperature $`T_\mathrm{c}`$ coincides with $`T=0`$ (e.g., in a one-dimensional Ising model), and the system is disordered at all temperatures $`T>0`$. Several regimes are found as a function of tension. For very weak tension, $`t<1/l_\mathrm{p}`$, the leading-order term which couples binding and tension is found from Eqs. (31) and (33) to scale like $`l_\mathrm{p}t^2ϵ\phi `$, i.e., it is only linear in $`\phi `$. Hence, to leading order in $`l_\mathrm{p}t`$ there is no effect on the critical point. Although the tension influences chain fluctuations (e.g., causing the chain to extend linearly with $`t`$), it is too weak to affect the fluctuation-induced interactions between bound molecules. The next-order term scales like $`l_\mathrm{p}^3t^4(1+ϵ\phi )^3`$, leading to a very small shift of order $`l_\mathrm{p}^3t^4`$ in the critical point (see also Fig. 4). For $`t>1/l_\mathrm{p}`$, the leading-order term in the free energy, according to Eqs. (32) and (33), is $`(t/l_\mathrm{p})^{1/2}(1+ϵ\phi )^{-1/2}`$. Here two regimes should be distinguished. For intermediate tension, $`1/l_\mathrm{p}<t<l_\mathrm{p}`$, the shift of the critical line scales like $`(t/l_\mathrm{p})^{1/2}`$, reflecting a more significant, yet still weak effect of tension. Although the chain conformation is significantly stretched by tension in this regime, the induced interaction between bound molecules is not strongly affected. However, for $`t>l_\mathrm{p}`$, the tension term in the free energy \[$`(t/l_\mathrm{p})^{1/2}(1+ϵ\phi )^{-1/2}`$\] becomes dominant, leading to a linear dependence of the critical point on tension, $`ϵ_\mathrm{c}^{+}\sim t/l_\mathrm{p}`$. The above analysis for the dependence of the critical coupling on tension is summarized in the following expression: $$|ϵ_\mathrm{c}^\pm (t)-ϵ_\mathrm{c}^\pm (0)|\sim \{\begin{array}{cc}l_\mathrm{p}^3t^4\hfill & t<1/l_\mathrm{p}\hfill \\ & \\ (t/l_\mathrm{p})^{1/2}\hfill & 1/l_\mathrm{p}<t<l_\mathrm{p}\hfill \\ & \\ t/l_\mathrm{p}\hfill & t>l_\mathrm{p},\text{relevant only to }ϵ_\mathrm{c}^{+}.\hfill \end{array}$$ (35) The various regimes are also clearly seen in Fig. 4. Note that for the large values of $`l_\mathrm{p}`$ considered in this theory the intermediate tension region, $`1/l_\mathrm{p}<t<l_\mathrm{p}`$, is very wide.
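The terminal value (34) can be checked numerically along the same lines as before. Setting $`ϵ=-1`$ in Eq. (33) and using the strong-stretching form (32) of $`f_{\mathrm{pt}}`$ with $`l_\mathrm{p}\to l_\mathrm{p}(1+ϵ\phi )`$ (the $`-t`$ term is independent of $`\phi `$ and drops out), the conditions $`f^{\prime \prime }(\phi )=f^{\prime \prime \prime }(\phi )=0`$ become two equations for $`\phi `$ and $`t/l_\mathrm{p}`$. A sketch, under the same assumptions as the previous one:

```python
import numpy as np
from scipy.optimize import fsolve

# Effective free energy at eps = -1 and t >> 1/l_p, keeping only the
# phi-dependent terms: entropy + (3/2)log(1-phi) + A*(1-phi)**(-1/2),
# where A = sqrt(3*t/(2*l_p)) follows from Eq. (32) with l_p -> l_p(1-phi).
def fpp(phi, A):
    v = 1.0 - phi
    return 1.0/phi + 1.0/v - 1.5/v**2 + 0.75*A*v**(-2.5)

def fppp(phi, A):
    v = 1.0 - phi
    return -1.0/phi**2 + 1.0/v**2 - 3.0/v**3 + 1.875*A*v**(-3.5)

phi, A = fsolve(lambda u: [fpp(*u), fppp(*u)], [0.7, 0.8])
t_over_lp = 2.0*A**2/3.0          # invert A = sqrt(3*t/(2*l_p))
print(f"phi = {phi:.4f},  t*/l_p = {t_over_lp:.4f}")
print("analytic:", (4.0/9.0)*(33 - 7*np.sqrt(21)))
```

The solver returns $`t^{\ast }/l_\mathrm{p}\approx 0.410`$, reproducing Eq. (34).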
## VI Discussion and Conclusions We have considered binding of small molecules to isolated semiflexible polymer chains, where the persistence length $`l_\mathrm{p}`$ is much larger than the monomer size but still smaller than the total chain length $`N`$. We have demonstrated that in such systems polymer fluctuations induce attraction between bound molecules. The long range of this interaction (of the same order as the persistence length) can lead to strong effects on the binding process. In particular, if bound molecules significantly affect local features of the chain, e.g., weaken or strengthen the stiffness by a factor of about 5 ($`ϵ<ϵ_\mathrm{c}^{-}`$ or $`ϵ>ϵ_\mathrm{c}^{+}`$), then the binding is predicted to be extremely cooperative, occurring as a transition at a sharply defined solute concentration. This is an unusual, yet practical example of a one-dimensional system exhibiting a sharp transition due to long-range interactions. The results of the model should apply, in particular, to the association of DNA with smaller molecules such as surfactants and compact proteins. Subjecting the polymer to external tension has been studied as well. By suppressing the fluctuation-induced interaction, the applied tension may strongly affect the binding. The effect is significant for sufficiently strong tension of order $`t\sim l_\mathrm{p}`$. \[For DNA this implies $`t\sim 10^2k_\mathrm{B}T/(10\text{Å})\sim 10^2`$ pN.\] In cases where binding weakens the chain stiffness, such a high tension should make the sharp binding transition disappear altogether (i.e., regardless of the strength of coupling or temperature). In cases where binding strengthens the chain stiffness, a tension of $`t\sim l_\mathrm{p}`$ significantly shifts $`ϵ_\mathrm{c}^{+}`$ to higher values. It is worth mentioning that tension-induced pairwise interaction between specifically bound proteins on a DNA chain was studied in a previous work. The interaction of DNA with oppositely charged cationic surfactants has been thoroughly studied by potentiometric techniques and fluorescence microscopy. Isotherms measured by potentiometry reveal a very cooperative, albeit continuous, binding. Fluorescence microscopy convincingly demonstrated, however, that the binding to a single DNA molecule has a discrete nature resembling a first-order phase transition. It is usually accompanied by a coil-to-globule collapse of the DNA chain (which lies outside the scope of the current theory). The smoothness of potentiometric isotherms was shown to arise from averaging over an ensemble of DNA molecules, coexisting in bound and unbound states. Similar results were obtained for the association of DNA with spermidine. The microscopic origin of the observed cooperativity (or even discontinuous transition) has not been clarified. It is usually fitted to a phenomenological parameter describing strong interaction between nearest-neighbor bound molecules. On the other hand, it is reasonable to expect that oppositely charged surfactants bound to DNA chains significantly modify the chain stiffness (probably weakening it). Thus, our model demonstrates that the strong cooperativity observed in experiments can be well accounted for by weak, yet long-range interactions induced by polymer fluctuations. Recently, the kinetics of non-specific binding of RecA proteins to DNA has been studied by single-molecule manipulation. RecA is a bacterial protein involved in DNA recombination and known to cause significant changes in the local structure of the double strand upon binding.
It was found to increase the DNA stiffness by a large factor, estimated around 10 in one study and above 4 in another. This corresponds to a large, positive $`ϵ`$ in our model. A very cooperative nucleation-and-growth kinetics was observed, as expected from the current model. Moreover, in certain situations it was possible to achieve a smaller increase of stiffness by binding of RecA. This led, correspondingly, to a less cooperative process. Yet probably the most compelling evidence is that the binding cooperativity was shown to be sensitive to external tension of order 10–100 pN. It was consequently deduced that DNA conformational fluctuations play a key role in RecA binding, in accord with the model. The current work is restricted to one-dimensional interactions along the chain sequence, assuming that the polymer is locally stiff and obeys the worm-like-chain description. Apart from changing local properties of the polymer, an important feature not treated by the model is that bound molecules may also modify volume interactions between the monomers, thus affecting the three-dimensional conformation of the polymer. For example, binding of oppositely charged surfactants to a DNA molecule locally neutralizes the DNA charge. This should lead, indeed, to a modified stiffness, but also to a reduced 2nd virial coefficient, which may drive a coil-to-globule collapse. The collapse can also be driven by fluctuations in the concentration of ions adjacent to the chain, as has been demonstrated by recent theoretical studies. In order to check the theory presented in this work, more experiments are required, focusing, in particular, on the effect of persistence length and tension on binding. The fluorescence microscopy techniques, which have been successfully used for DNA–surfactant association, may be applied to chains under tension or flow, thus examining the role of fluctuations. It would be interesting to study a system consisting of a semiflexible polymer and bound molecules in computer simulations, and thereby check the applicability of our mean-field approximation. An important extension of the model, as mentioned above, would be to introduce volume interactions and obtain binding-induced collapse as observed in experiments. ###### Acknowledgements. We greatly benefited from discussions and correspondence with R. Bar-Ziv, M. Feingold, A. Libchaber, R. Netz, A. Parsegian, R. Podgornik, M. Schwartz and V. Sergeyev. Partial support from the Israel Science Foundation founded by the Israel Academy of Sciences and Humanities — Centers of Excellence Program, and the Israel–US Binational Science Foundation (BSF) under grant No. 98-00429, is gratefully acknowledged. HD would like to thank the Clore Foundation for financial support. ## Numerical Details The aim of the numerical scheme is to calculate the results of the Kac-Baker model for finite $`l_\mathrm{p}`$, which are presented in Fig. 1. Using Kac’s solution, the partition function of bound solute molecules, Eqs. (13)-(15), is expressed in the limit $`N\to \infty `$ as $$Z_\mathrm{s}=\text{const}\times e_0^N,$$ (36) where $`e_0`$ is the largest eigenvalue of the following ‘transfer kernel’: $$K(x,y)=[(1+\mathrm{e}^{\mu -3ϵ/2+\sqrt{J}x})(1+\mathrm{e}^{\mu -3ϵ/2+\sqrt{J}y})]^{1/2}\mathrm{exp}\left[\frac{y^2-x^2}{4}-\frac{(y-\mathrm{e}^{-2/l_\mathrm{p}}x)^2}{2(1-\mathrm{e}^{-4/l_\mathrm{p}})}\right],$$ (37) where $`J\equiv 3ϵ^2/2l_\mathrm{p}`$, and $`x,y\in (-\infty ,\infty )`$ are real variables.
We define a vector, $`\{x_i\}=\{(2i-M)d\}_{i=0,\mathrm{\dots },M}`$, where $`M`$ is an even integer and $`d`$ a real number, and use it to discretize the kernel $`K(x,y)`$ into a transfer matrix, $$K_{ij}\equiv K(x_i,x_j).$$ (38) In addition, we define the diagonal matrix $$A_{ij}\equiv x_i\delta _{ij}.$$ (39) Given $`l_\mathrm{p}`$, $`ϵ`$ and $`\mu `$, the transfer matrix $`K_{ij}`$ is diagonalized and its largest eigenvalue, $`e_0`$, is found. The binding degree, $`\phi `$, can be calculated in two ways. The first is by calculating the variation of $`\mathrm{log}e_0`$ with respect to $`\mu `$, $$\phi =\partial \mathrm{log}e_0/\partial \mu .$$ (40) The second way is by using the equation $$\phi =\stackrel{~}{A}_{00}/(B\sqrt{J}),$$ (41) where $`B\equiv \mathrm{coth}(1/l_\mathrm{p})`$, and $`\stackrel{~}{A}`$ is the matrix $`A`$ transformed to the basis where $`K`$ is diagonal. The cooperativity parameter, $`C`$, as defined in Eq. (17), is found by calculating the variation of $`\phi `$ with respect to $`\mu `$ around the point $`\phi =1/2`$. The value $`\mu =\mu _{1/2}`$ which gives $`\phi =1/2`$ is analytically found by transforming the lattice-gas partition function, Eqs. (13)-(15), into an Ising one ($`\phi _n\to s_n=2\phi _n-1`$), and requiring that the ‘magnetic field’ coefficient vanish. The result is $$\mu _{1/2}=3ϵ/2-JB/2.$$ (42) For each calculation (i.e., for each set of $`l_\mathrm{p}`$, $`ϵ`$ and $`\mu `$) the discretization parameters, $`M`$ and $`d`$, were tuned until the result became insensitive to further refinement to six significant figures. In addition, the two methods for calculating $`\phi `$ were used and verified to yield identical results to six figures. All algebraic manipulations were performed using Mathematica.
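For readers who prefer a self-contained script to the Mathematica implementation, a compact Python version of the scheme of Eqs. (36)–(42) might look as follows; the grid parameters `M` and `d` below are illustrative placeholders, not the values used for Fig. 1, and would have to be tuned as described above:

```python
import numpy as np

def largest_eigenvalue(lp, eps, mu, M=200, d=0.05):
    """Discretize the transfer kernel (37) on x_i = (2i - M)d, Eq. (38),
    and return its largest eigenvalue e0 (quadrature weight 2d included)."""
    J = 1.5*eps**2/lp
    x = (2.0*np.arange(M + 1) - M)*d
    g = np.sqrt(1.0 + np.exp(mu - 1.5*eps + np.sqrt(J)*x))
    a = np.exp(-2.0/lp)
    X, Y = np.meshgrid(x, x, indexing='ij')          # X = x_i, Y = x_j
    K = np.outer(g, g)*np.exp((Y**2 - X**2)/4.0
                              - (Y - a*X)**2/(2.0*(1.0 - a**2)))
    return np.max(np.linalg.eigvalsh(0.5*(K + K.T)))*2.0*d

def phi(lp, eps, mu, dmu=1e-4):
    """Binding degree via Eq. (40), phi = d(log e0)/d(mu)."""
    return (np.log(largest_eigenvalue(lp, eps, mu + dmu))
            - np.log(largest_eigenvalue(lp, eps, mu - dmu)))/(2.0*dmu)

def mu_half(lp, eps):
    """Half-filling chemical potential, Eq. (42)."""
    J, B = 1.5*eps**2/lp, 1.0/np.tanh(1.0/lp)
    return 1.5*eps - J*B/2.0

# Example: phi at mu_1/2 should be close to 1/2 for adequate M and d.
lp, eps = 50.0, -0.5
print(phi(lp, eps, mu_half(lp, eps)))
```

The cooperativity $`C`$ of Eq. (17) then follows from a second finite difference of `phi` with respect to `mu` around `mu_half`.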
# Georgi-Goldstone Realization of Chiral Symmetry in Quantum Chromodynamics ## Abstract It is shown that quantum chromodynamics based on asymptotic freedom and confinement exhibits the vector mode of chiral symmetry conjectured by Georgi. ($`\ast `$): e-mail address: raghunath.acharya@asu.edu I shall begin by summarizing the key results constituting conventional wisdom in standard Quantum Chromodynamics (QCD). Firstly, in vector-like gauge theories and in QCD in particular, non-chiral symmetries such as $`SU_{L+R}(2)\subset SU_L(2)\times SU_R(2)`$ or $`SU_{L+R}(3)\subset SU_L(3)\times SU_R(3)`$ cannot be spontaneously broken. This is the Vafa-Witten result. In QCD with large $`N_c`$, Coleman and Witten showed that chiral symmetry is broken to diagonal $`U(N_F)`$ and thus, if chiral symmetry is broken, it must happen in such a manner that flavor symmetry is preserved. Secondly, chiral $`SU_L(3)\times SU_R(3)`$ symmetry in QCD with massless $`u,d,s`$ quarks must be spontaneously broken. However, it is difficult to show that chiral $`SU_L(2)\times SU_R(2)`$ symmetry in QCD with massless $`u,d`$ quarks is also spontaneously broken. Thirdly, QCD may very well exhibit the Higgs mode for the vector current and the Goldstone mode for the axial vector current, i.e., the massless scalars arising from the Goldstone theorem get ‘eaten up’ by the gauge vector field, which consequently acquires a finite mass. This important conjecture was introduced by Georgi in 1989, who called it a new realization of chiral symmetry (“vector mode”), since it involves both the Wigner-Weyl and Nambu-Goldstone modes. This is Georgi’s conjecture. To quote Weinberg, “A recent paper of Georgi can be interpreted as proposing that QCD at zero temperature is near a second order phase transition, at which the broken chiral $`SU_L(3)\times SU_R(3)`$ symmetry has a $`(8,1)+(1,8)`$ representation, consisting of the octet of pseudoscalar Goldstone bosons plus an octet of massless scalars, that on the broken symmetry side of the phase transition, become the helicity-zero states of the massive vector meson octet, …. It is intriguing and mysterious that at the second order phase transition at which chiral $`SU_L(2)\times SU_R(2)`$ of massless QCD becomes unbroken, this symmetry may become local with $`\rho `$ and $`A_1`$ as massless gauge bosons”. As a final point one may observe that the charges corresponding to spontaneously broken local gauge symmetries are screened and the vector mesons are massive. This is a manifestation of spontaneously broken local symmetries. For instance, the well-known example is the Abelian Higgs model: in the spontaneously broken phase, the vector field has a finite mass (thus the field is of finite range) and the conserved current does not have a total charge in the physical Hilbert space. The nature of spontaneously broken chiral symmetry is intimately connected to spontaneously broken scale invariance. It has been emphasized by Adler that there are two examples of relativistic field theories which exhibit spontaneously broken scale invariance where chiral symmetry is also broken. These are the Johnson-Baker-Willey model of quantum electrodynamics and asymptotically free gauge theories. This indeed may be a general feature, as we pointed out recently in our investigation of spontaneously broken chiral symmetry in QCD. Let us review this connection briefly.
Unbroken scale invariance can be expressed as $$Q_D(t)|0>=0,$$ (1) where the dilatation charge is $$Q_D(t)=\int d^3xD_0(𝐱,t),$$ (2) defined in terms of $`D_\mu (𝐱,t)`$, the dilatation current. Equivalently, $$\partial ^\mu D_\mu |0>=0.$$ (3) Invoking Coleman’s theorem, which is valid for continuous symmetries, we can then prove that the divergence of the dilatation current itself must vanish identically: $$\partial ^\mu D_\mu =0.$$ (4) On the other hand, we know that the divergence of the dilatation current is determined by the trace anomaly in QCD: $$\partial ^\mu D_\mu =\frac{1}{2}\frac{\beta (g)}{g}G_{\mu \nu }^\alpha G_\alpha ^{\mu \nu }+\underset{i}{\sum }m_i[1+\gamma _i(\theta )]\overline{\psi }_i\psi _i,$$ (5) where the second term vanishes for massless quarks in the chiral limit. Consequently the beta function must vanish. It is well-known that in an asymptotically free theory of QCD which also exhibits confinement, the behavior of $`\beta (g)`$ is such that it decreases as $`g`$ increases and never turns over. Consequently $`g=0`$ is the only possibility and hence the theory reduces to triviality. We therefore conclude by reductio ad absurdum that scale invariance must be broken spontaneously by the QCD vacuum state: $$Q_D(t)|0>\ne 0.$$ (6) Thus, scale invariance is broken both “spontaneously” by the vacuum state and explicitly by the trace anomaly. Consequently, the states obtained by successive repeated application of $`Q_D(t)`$ on the vacuum state are neither vacuum states nor are they necessarily degenerate. Let us now consider the commutator $$[Q_D(0),Q_a(0)]=id_QQ_a(0),$$ (7) which defines the scale dimension of the charge $`Q_a(0)`$, the generators of vector $`SU(N)`$ (the flavor (non-chiral) group of QCD). Eq.(7) may be “promoted” to arbitrary time by introducing the operator $`e^{iHt}`$ on the left, $`e^{-iHt}`$ on the right (and inserting the unit operator $`1=e^{-iHt}e^{iHt}`$ in the middle of the commutator on the left hand side): $$[Q_D(t),Q_a(t)]=id_QQ_a(t).$$ (8) It is important to point out that operator relations such as Eq.(8) are unaffected by spontaneous symmetry breaking, which is manifested in the properties of physical states, as emphasized by Weinberg. I shall now proceed to establish that the flavor vector charges, $`Q_a(t)=\int d^3xV_a^0(𝐱,t)`$, are screened, i.e., $`Q_a(t)=0`$, where $`V_a^\mu `$ are the conserved vector currents in QCD ($`a=1,\mathrm{\dots },N_F^2-1`$, where $`N_F`$ is the number of flavors). Since $`V_a^\mu `$ are conserved, the corresponding vector charges $`Q_a(t)`$ commute with $`H`$: $$[Q_a(t),H]=0,$$ (9) where $`H`$ is the Hamiltonian (density), $`H=\mathrm{\Theta }^{00}`$. Eq.(9) implies that the following double commutator also vanishes: $$[Q_D(t),[Q_a(t),H]]=0,$$ (10) where $`Q_D(t)`$ is the dilatation charge defined in Eq.(2). Let us now invoke the Jacobi identity to recast Eq.(10): $$[Q_a(t),[H,Q_D(t)]]+[H,[Q_D(t),Q_a(t)]]=0.$$ (11) Since $$[H,Q_D(t)]=i\partial _\mu D^\mu (𝐱,t)\ne 0,$$ (12) by virtue of the trace anomaly, Eq.(5), and the second double commutator on the left hand side of Eq.(11) vanishes in view of Eqs.(8,9), we arrive at the important operator relation: $$[Q_a(t),\partial ^\mu D_\mu (𝐱,t)]=0.$$ (13) Applying Eq.(13) on the vacuum state, we obtain: $$[Q_a(t),\partial ^\mu D_\mu (𝐱,t)]|0>=0.$$ (14) We now invoke the Vafa-Witten result $$Q_a(t)|0>=0.$$ (15) From Eqs.(14,15), we conclude that $$𝒪(𝐱,t)|0>\equiv Q_a(t)\partial ^\mu D_\mu (𝐱,t)|0>=0,$$ (16) where the operator $`𝒪`$ is local in space and time.
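The passage from Eq. (10) to Eq. (11) uses nothing beyond the Jacobi identity for the commutator. For completeness, here is a trivial symbolic check (a sketch using sympy; `QD`, `Qa`, `H` are abstract noncommuting placeholders for the operators, not anything more specific):

```python
import sympy as sp

QD, Qa, H = sp.symbols('Q_D Q_a H', commutative=False)
comm = lambda A, B: A*B - B*A

# Jacobi identity: [Q_D,[Q_a,H]] + [Q_a,[H,Q_D]] + [H,[Q_D,Q_a]] = 0,
# the rearrangement used to pass from Eq. (10) to Eq. (11).
jacobi = comm(QD, comm(Qa, H)) + comm(Qa, comm(H, QD)) + comm(H, comm(QD, Qa))
assert sp.expand(jacobi) == 0
print("Jacobi identity verified: Eq. (10) = 0 is equivalent to Eq. (11).")
```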
Consequently we can utilize the all-powerful Federbush-Johnson theorem, which applies to any local operator, to conclude that $$Q_a(t)\partial ^\mu D_\mu (𝐱,t)\equiv 0,$$ (17) where $`\partial ^\mu D_\mu (𝐱,t)`$ is governed by the trace anomaly. Since $`\partial ^\mu D_\mu (𝐱,t)`$ cannot vanish in QCD exhibiting both asymptotic freedom and confinement (except at $`g=0`$), we are led to conclude that the vector charges are screened, i.e., $`Q_a(t)=0`$, proving Georgi’s conjecture: QCD at zero temperature exhibits chiral symmetry in the Nambu-Goldstone mode ($`N_F=3`$) and the vector symmetry is realized in the Higgs mode (“vector mode”) in the sense conjectured by Georgi, i.e., the vector charges $`Q_a`$ are screened and the corresponding vector mesons become massive by devouring the would-be scalar Goldstone bosons, which disappear from the physical spectrum. The key ingredients in this analysis are the Vafa-Witten theorem, the Federbush-Johnson theorem and the trace anomaly for the divergence of the dilatation current. Some concluding remarks are in order. First of all, Eq.(9), which is the local version expressing current conservation $`\partial ^\mu V_\mu ^a=0`$, holds if the surface terms at infinity can be neglected. This is, a posteriori, justified since the vector charges $`Q_a(t)`$ must annihilate the vacuum (Vafa-Witten theorem) and hence the flavor vector symmetry cannot be spontaneously broken (i.e., there are no scalar Goldstone bosons to produce a long range interaction which would have resulted in a non-vanishing surface term). Secondly, it may be worthwhile amplifying the connection between the vanishing of the vector charges, i.e., $`Q_a(t)=0`$, and Georgi’s conjecture. Let us consider a massive vector boson coupled to a conserved source (by construction). While in the massless case this is mandatory, in the massive case one can choose the current to be conserved if one so wishes. In such a case, clearly the charges can be non-vanishing, even though the corresponding particle is massive. But the essential point here is that if the charges vanish, $`Q_a(t)=0`$, the vector meson cannot be massless: it must be massive! This is the desired connection with Georgi’s conjecture. Finally, there are unresolved issues with the vector limit advocated by Georgi, such as its compatibility with lattice results and the question as to why the pions are not also eaten up by the axial vector $`A_1`$’s. These issues remain to be resolved, perhaps, in a future publication. I am indebted to P. Narayana Swamy for numerous conversations on the perennial topic of chiral symmetry and for our previous efforts in trying to nail down Georgi’s conjecture.
## 1 Introduction The Wolf–Rayet phenomenon is a stellar phenomenon. This means that its length scale is essentially that of the star, i.e. some $`10^9`$ cm. A typical nebular size is $`10^{14}`$ cm or larger, which immediately illustrates that it may not be easy to make the connection between the star and the nebulae, and observations confirm this. In this paper I will try to discuss some ways in which the nebular properties could be changed by the presence of a \[WR\] central star. The star’s action at a distance is exerted through two means 1. Photons: The ionizing photons from the star heat and ionize the gas. This can have non–trivial consequences for the shaping of the PN, as shown by Marten & Schönberner (1991), Mellema (1994), and Mellema (1995). However, although the spectral energy distribution of a \[WR\] star is quite different from that of a ‘normal’ central star of a PN (CSPN), I do not expect this difference to cause any big differences in the hydrodynamic structure of the PN. The detailed ionization structure can be quite different, as illustrated by the photo–ionization modelling of Crowther et al. (1999) for the nebula M1–67, but the ways in which the onset of ionization modifies the density of the circumstellar matter through large or small scale ionization fronts will probably not be very different. 2. Stellar wind: The stellar wind differs in two aspects from a normal CSPN wind. First, the stellar wind from a \[WR\] star is generally a factor 10–100 more massive than that from a normal CSPN, see for instance Leuenhagen et al. (1998). This means that the wind luminosity ($`\frac{1}{2}\dot{M}v^2`$) and its momentum ($`\dot{M}v`$) will be a factor 10–100 higher. This should have an effect on the formation of the PN. Second, the wind from a typical \[WC\] star consists of approximately 0% H, 50% He, 40% C and 10% O, abundances radically different from the usual cosmic or solar abundances. As a wind with these abundances starts interacting with the environment, its radiative cooling behaviour will be very different from the standard case. Also this will have an effect on the shaping of the PN. Apart from what we observe now, the mass loss history of the star is also important for understanding the shaping. Did the star become a \[WR\] star directly after the AGB or is it the result of a late or very late He-shell flash? In the first case, was the mass loss during the AGB different for the \[WR\] stars, or did the event which turned the star into a \[WR\] star influence its mass loss? Which role does binarity play in \[WR\] systems, and are there accretion disks? These are all questions to which the answer is difficult to give, since even for normal PNe they are not always well known. The standard model to explain the formation of PNe is the Interacting Stellar Winds \[ISW\] model, sometimes called the Generalized ISW \[GISW\] model. In this model the nebula is formed from the interaction between the slow AGB wind and the faster post–AGB wind. The usual assumption (confirmed by observations) is that the AGB wind has an aspherical density distribution (disk or torus), which leads to the formation of an aspherical PN. The ISW model goes back to the late 1970s (Kwok et al. 1978). In this paper I will discuss how the formation of a PN can be different for \[WR\] central stars, taking the ISW model as the basis. However, recently the ISW model has received some criticism which may be good to list here, even though we do not know how relevant these problems are for the PNe around \[WR\] stars.
* Although aspherically distributed circumstellar matter is commonly observed in PNe, it is not around AGB stars. The puzzle then is how and when the mass loss turned from more or less spherical to aspherical. The ISW model does not address this point; it just assumes the asphericity. A complete model should include a mechanism for producing aspherical mass loss. Most observational evidence indicates that the transition to aspherical mass loss happens right at the end of the AGB. * Jets and other collimated outflow phenomena are observed in some PNe. Although the ISW model can produce some degree of collimation, it seems unlikely at this point that the observed jets can be explained by the ISW model, especially since they often show point-symmetry. * Point–symmetry is not only shown by jets and collimated outflow phenomena, but even by whole nebulae. This is indicative of changes in direction of the symmetry axis of the system. Again, this seems difficult to establish in the case of the ISW model, since this would require a warped density distribution in the AGB wind. * Young PNe and many pre-PNe show aspherical morphologies, often with very peculiar shapes (see for example Sahai & Trauger 1998; Ueta et al. 1999). Since the post–AGB wind only becomes energetic for high effective temperatures, it is unclear how the ISW model can produce the observed pre-PNe and young PNe. Also, some of the shapes of the young PNe seem to be in conflict with the ISW model; we see objects with several symmetry axes, wave function–like morphologies, etc. To deal with these difficulties, some alternatives and/or additions to the ISW model have been proposed. The role of companions (stellar or substellar) for establishing aspherical mass loss has become generally accepted, albeit more for lack of alternatives than solid observational proof. See Soker (1998) and references therein for ways in which gravitational interaction with a companion can lead to aspherical mass loss. To explain jets and jet-like phenomena, accretion disks have been proposed, possibly around a companion object. A third alternative is the existence of strong magnetic fields in the post–AGB wind which, through hoop stresses, can lead to the formation of aspherical PNe (Chevalier & Luo 1995). These ‘MHD models’ can possibly also explain jets and point-symmetry (Garcia–Segura et al. 1999). Thus, the caveat is that the ISW model may not be the whole story, but since much remains unclear, I will in this paper still concentrate on the ISW model. ## 2 Observations The observations of PNe around \[WC\] stars are reviewed by Górny in this volume. The summary is that there are few to no differences. The only differences to be noted are * The average expansion velocity of the ionized material is observed to be somewhat higher than in the case of ‘normal’ PNe (Górny & Stasińska 1995). * The line shapes in WR-PNe seem to require a higher value for the turbulent velocity component than in normal PNe (Geşicki & Acker 1996). As we will see in the following sections, both these effects can be understood from the differences between normal fast winds and those from \[WR\] stars. The most puzzling aspect of the observations is the lack of correlation between \[WR\] stars and certain types of PN morphologies. All types of morphology (elliptical, bipolar, attached shells, etc.)
are found around \[WR\] stars in approximately the same fractions as in normal PNe. Somehow one would expect the morphology of a PN to be dependent on the mass loss history of the star. For a \[WR\] star this must have been different, since the entire H envelope was lost. Conversely, it has been shown that bipolar PNe are associated with more massive progenitors, and one would expect that the mass of the star is important in determining whether it becomes a \[WR\] star or not. Still there seems to be the normal fraction of bipolar PNe around \[WR\] stars. The conclusion thus seems to be that the process which determines the shapes of PNe, i.e. the start of aspherical mass loss, is unrelated to the process which turns the star into a \[WR\]–type star, or at least is not influenced much by the way in which the star loses its envelope. This is a rather amazing conclusion. If one, for example, considers a common envelope scenario, in which the entire envelope is ejected, one would think that the case in which all of the H–envelope is removed is more extreme than the one in which only part of the H-envelope is removed, which would lead to different morphologies of the subsequent PN. But the observations show that this is not the case. This behaviour can be used as a test for proposed mechanisms to introduce asphericity in the AGB/post-AGB system. For example one might argue that mixing will be stronger in the case of faster rotation, and therefore some relation between nebulae around H-poor central stars and more extreme morphologies should be present. Since there is no such relation, rotation is a less likely mechanism to introduce asphericity. Although useful in principle, the application of this test is complicated by the fact that the real mechanism for producing H-poor central stars is not fully understood. All the proposed mechanisms centre around thermal pulses (either a final pulse on the AGB or a late pulse in the post-AGB phase, see the contributions of Blöcker and Herwig in this volume), which play a marginal role in most of the proposed mechanisms for aspherical mass loss. Also, the set of \[WR\]–PNe has not been studied in much detail. A more thorough investigation of image and kinematic data of individual nebulae may still reveal some differences, gone unnoticed when using catalogue data. A more detailed study of a sample of \[WR\]–PNe should be done before the above test becomes really hard, but such a study would be worth it. ## 3 Massive stellar winds The typical fast wind of a normal CSPN has a mass loss rate in the range $10^{-9}$–$10^{-7}$ M<sub>⊙</sub> yr<sup>-1</sup>, whereas for the \[WR\] stars the reported rates are $10^{-7}$–$10^{-5}$ M<sub>⊙</sub> yr<sup>-1</sup>, roughly a factor 100 higher. The wind velocities do not differ much, which is not surprising since for radiatively driven winds the terminal wind velocity is of order the escape velocity from the surface of the star. The result of a more massive stellar wind is that the momentum ($`\dot{M}v`$) and energy ($`\frac{1}{2}\dot{M}v^2`$) input is a factor of 100 larger. The main effect of this is an increase in the expansion velocity of the nebula. Nebulae formed by a stellar wind come in two types, depending on how well the stellar wind cools when it is shocked (see e.g. Lamers & Cassinelli 1999, Ch. 12.3). If this cooling is efficient enough to radiate away all the energy injected by the stellar wind, the wind–driven bubble is said to be ‘momentum–driven’. The structure of the bubble is as shown in Fig.
1a, the stellar wind fills the entire volume of the bubble and there is a thin cooling zone separating the stellar wind from the material swept up from the environment. In this case it is the ram pressure of the stellar wind which sweeps up a bubble. If the cooling of the stellar wind is inefficient, a volume of hot, shocked fast wind material forms and can fill most of the volume of the bubble, with an inner shock lying relatively close to the star, see Fig. 1b. This type of bubble is said to be ‘energy–driven’, it is the thermal pressure of the volume of hot shocked fast wind material which pushes the nebula into the environment. There are of course intermediate cases, but in general the division between the two holds. Kahn (1983) showed that for an energy–driven bubble the expansion velocity can be approximated by $$v_{\mathrm{exp}}=\lambda v_{\mathrm{slow}}$$ (1) where $`\lambda `$ is given by the solution of the cubic equation $$\lambda (\lambda -1)^2=\frac{2}{3}\frac{\dot{M}_{\mathrm{fast}}v_{\mathrm{fast}}^2}{\dot{M}_{\mathrm{slow}}v_{\mathrm{slow}}^2},$$ (2) which depends on the ratio of the two wind luminosities ($`\frac{1}{2}\dot{M}v^2`$). An alternative solution was presented by Koo & McKee (1992). They derive $$v_{\mathrm{exp}}=\left(1+\left(\frac{2\pi }{3}\mathrm{\Gamma }_{\mathrm{rad}}\xi \frac{\dot{M}_{\mathrm{fast}}v_{\mathrm{fast}}^2}{\dot{M}_{\mathrm{slow}}v_{\mathrm{slow}}^2}\right)^{1/3}\right)v_{\mathrm{slow}},$$ (3) in which $`\mathrm{\Gamma }_{\mathrm{rad}}`$ is the fraction of the energy injected by the stellar wind (in the case of no radiative losses equal to 1), and $`\xi `$ a numerical constant of order unity (whose value is not given by the authors). These two solutions are equivalent. For the momentum–driven case Kahn & Breitschwerdt (1990) found that $$v_{\mathrm{exp}}=\left(1+\left(\frac{\dot{M}_{\mathrm{fast}}v_{\mathrm{fast}}}{\dot{M}_{\mathrm{slow}}v_{\mathrm{slow}}}\right)^{1/2}\right)v_{\mathrm{slow}},$$ (4) which depends on the ratio of the two wind momentum rates ($`\dot{M}v`$). Fig. 2 shows a plot of the expansion velocity for both cases as a function of $`\dot{M}_{\mathrm{fast}}`$ for fixed $`\dot{M}_{\mathrm{slow}}`$, $`v_{\mathrm{slow}}`$, and $`v_{\mathrm{fast}}`$. One sees that the expansion velocity increases as a function of fast wind mass loss rate, but that the difference becomes large only for very high mass loss rates. For the chosen parameters the effect is stronger for the energy–driven case. Figure 2 should not be overinterpreted. The spread in mass loss rates and velocities in a sample of PNe will make the correlation less clear, especially since aspherical nebulae will have different expansion velocities in different directions. Also, Fig. 2 neglects any evolution of the winds; we know that mass loss rates and wind velocities are changing throughout the post–AGB phase, but in calculating the expansion velocities the fast wind properties are assumed to be constant (the actual assumptions are that $`(\dot{M}v)_{\mathrm{fast}}`$ is constant in time for the momentum–driven case and $`(\dot{M}v^2)_{\mathrm{fast}}`$ for the energy–driven case). Still the conclusion is that we should not be surprised to find an on average higher expansion velocity for the nebulae around \[WR\] stars. ## 4 Radiative Cooling As was outlined above, the efficiency of the radiative cooling in the shocked fast wind determines the character of the wind–driven bubble.
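To make Eqs. (1)–(4) concrete, the following sketch evaluates the energy-driven expansion velocity from Kahn's cubic (2) and the momentum-driven estimate (4); the wind parameters are illustrative numbers only, not fits to any object:

```python
import numpy as np

v_slow, mdot_slow = 15.0, 1e-5          # AGB wind: km/s, solar masses/yr
v_fast = 2000.0                          # fast wind speed, km/s

for mdot_fast in [1e-8, 1e-7, 1e-6]:    # normal CSPN up to [WR]-like rates
    # Energy-driven case: lam*(lam-1)^2 = (2/3)*L_fast/L_slow, Eq. (2).
    ratio = (2.0/3.0)*(mdot_fast*v_fast**2)/(mdot_slow*v_slow**2)
    roots = np.roots([1.0, -2.0, 1.0, -ratio])   # lam^3 - 2lam^2 + lam - ratio
    lam = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 1.0)
    v_energy = lam*v_slow                        # Eq. (1)

    # Momentum-driven case, Eq. (4).
    v_momentum = (1.0 + np.sqrt(mdot_fast*v_fast/(mdot_slow*v_slow)))*v_slow

    print(f"Mdot_fast = {mdot_fast:.0e}: v_exp = {v_energy:5.0f} (energy) "
          f"vs {v_momentum:5.0f} (momentum) km/s")
```

With these numbers the energy-driven branch grows faster with increasing fast-wind mass loss rate, consistent with the remark above that for the chosen parameters the effect is stronger in the energy-driven case.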
Since the cooling processes are mostly two-body interactions, the cooling rate is proportional to the square of the particle density, $`n^2`$. The dependence on the temperature is much stronger. For a gas of cosmic abundances, the cooling is very strong between temperatures of $`10^4`$ and $`10^6`$ K (see e.g. Dalgarno & McCray 1972). Since the immediate post–shock temperature for a strong shock is given by $$T_\mathrm{s}=\frac{3}{16}\frac{\mu m_\mathrm{H}}{k}v^2$$ (5) this means that shocks with $`v`$ between 30 and 400 km s<sup>-1</sup> are likely to be radiative. The velocity $`v`$ here is the pre–shock velocity in the frame in which the shock is stationary. As a star evolves through the post–AGB phase its wind velocity will go up and its mass loss rate down. This means that one expects the bubble to be initially momentum–driven and later make a transition to energy–driven. Kahn & Breitschwerdt (1990) analytically calculated the conditions for which this transition happens and found that for a wide range of parameters the bubble makes the transition from momentum to energy–driven when $`v_{\mathrm{fast}}\simeq 150`$ km s<sup>-1</sup>. The reason for this is the strong dependence of the post–shock cooling time on the wind velocity $$t_{\mathrm{cool}}=0.255\frac{(1+\sqrt{\alpha })^2}{\alpha q}\frac{v_{\mathrm{slow}}}{\dot{M}_{\mathrm{slow}}}v_{\mathrm{fast}}^5t^2,$$ (6) in which $$\alpha =\frac{\dot{M}_{\mathrm{fast}}v_{\mathrm{fast}}}{\dot{M}_{\mathrm{slow}}v_{\mathrm{slow}}},$$ (7) and $`q`$ is a constant from the assumed analytical cooling function which has a value of $`4\times 10^{32}`$ cm<sup>6</sup> g<sup>-1</sup> s<sup>-4</sup> for normal cosmic abundances. Mellema (1994) and Dwarkadas & Balick (1998) numerically calculated the evolution of wind–driven bubbles with evolving fast winds and confirmed this behaviour. Also when using detailed radiative cooling or a somewhat less accurate cooling curve, the bubble makes the transition from momentum–driven to energy–driven at a fast wind velocity of approximately 150 km s<sup>-1</sup>. Dwarkadas & Balick (1998) investigated the momentum–driven phase somewhat more closely and found that during this phase the bubble is sensitive to the Nonlinear Thin Shell Instability (NTSI). The effects of this instability survive even beyond the momentum–driven phase, as they found that bubbles which evolved through a momentum–driven phase possess a much more disturbed interior. Another property of bubbles in the momentum–driven phase is that they are much more affected by asphericities in the fast wind. The reason for this is that the ram pressure of the fast wind is the driving force, so if the wind pushes harder in one direction than in another, the shape of the bubble will reflect this. During the momentum–driven phase it is possible to form an aspherical PN using an aspherical fast wind and a spherical slow wind. On the other hand, in the energy–driven phase, it is the thermal pressure of the hot shocked fast wind which drives the bubble, and as thermal pressure is locally uniform, and any large scale pressure variations in the hot bubble are quickly smoothed out, possible asphericities in the fast wind will during the energy–driven phase only influence the shape of the bubble in a watered-down manner, if at all.
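A back-of-the-envelope check of Eq. (5), and of the steep $`v_{\mathrm{fast}}^5`$ dependence of the cooling time in Eq. (6), takes only a few lines (we assume a mean mass per particle of $`0.6m_\mathrm{H}`$, appropriate for ionized gas of cosmic composition):

```python
import numpy as np

m_H, k_B = 1.6726e-24, 1.3807e-16        # g, erg/K
mu = 0.6                                  # mean molecular weight (assumed)

def T_shock(v_kms):
    """Immediate post-shock temperature of Eq. (5) for a strong shock."""
    v = v_kms*1e5                         # km/s -> cm/s
    return 3.0/16.0*mu*m_H/k_B*v**2

for v in [30, 150, 400, 1000]:
    print(f"v = {v:4d} km/s  ->  T_s = {T_shock(v):.2e} K")

# Cooling time scales as v_fast^5 (Eq. 6): going from 150 to 1000 km/s
# lengthens t_cool by (1000/150)**5 ~ 1.3e4 at fixed q and alpha.
print((1000/150)**5)
```

The 30–400 km s<sup>-1</sup> shocks indeed land between roughly $`10^4`$ and a few times $`10^6`$ K, the strongly cooling band, while going from 150 to 1000 km s<sup>-1</sup> lengthens the cooling time by about four orders of magnitude.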
The ‘canonical’ value of 150 km s<sup>-1</sup> for the transition from one phase to the other depends on * the density of the fast wind, and hence its mass loss rate (since the cooling goes with $`n^2`$) * the details of the cooling process, expressed by the value of $`q`$, which is partly determined by the abundances. Since the winds from \[WR\] stars have both high densities and peculiar abundances, one can expect the value of the transition velocity to change. To find out by how much requires a detailed calculation taking into account the cooling rates of all the different ions, which as far as I know has never been attempted. An approach similar to that described in Raga et al. (1997) should work well for this kind of problem. A simple estimate can be made using Eq. (6). Considering the ionic cooling curves from Cox & Tucker (1969), one can estimate that an increase of the abundance of C by a factor 500-1000 (and factors 6 and 50 for He and O respectively) will mean an increase in the cooling efficiency by a similar factor. Raising $`q`$ by $`10^3`$ and $`\alpha `$ by $`10^2`$ in Eq. (6) means that the critical velocity goes up by a factor of 10 or so. In other words, up to wind velocities of 1000 km s<sup>-1</sup> or even higher the bubbles around \[WR\] stars would be momentum–driven. This includes all late–type \[WC\] stars. There are claims from bubbles around Pop. I WR stars that this is in fact the case (Chu 1982), but for the \[WR\] nebulae this question has as yet not been considered. The consequences would be more chaotic nebulae due to the Nonlinear Thin Shell Instability, and more extreme morphologies due to aspherical mass loss during the post-AGB phase. For the first there is little indication in the observed morphologies, although more careful analysis may show otherwise. There is the reported need for a higher turbulent velocity to explain the line shapes from PNe around \[WR\] stars (Geşicki & Acker 1996), which may be due to this effect. As pointed out above, there is no evidence for the second effect. This absence of any correlation between \[WR\] stars and nebular morphology then implies that mass loss in the post–AGB phase does not deviate much from spherical. ## 5 Born–again Planetary Nebulae One model for the formation of \[WR\] stars is the occurrence of a late to very late He–shell flash (Blöcker, this volume). This model seems at the moment to be able to explain the observed abundances best. However, the fact that there appear to be no differences between the nebulae around \[WR\] and normal CSPN causes problems for this scenario. The reason is that the formation of a PN empties the region around the star. The fast wind blows a bubble which is almost as empty as interstellar space. Models show that a typical density inside a PN bubble is 1 to 10 cm<sup>-3</sup>. If one assumes that a very late He-shell flash happens after $`10^4`$ years and that the PN expands with a velocity of 30 km s<sup>-1</sup>, one obtains a PN radius of $`10^{16}`$ cm at the time of the flash, which means a total mass of about $`10^{-8}`$ M<sub>⊙</sub> interior to this first PN, insufficient to sweep up a second PN, which would require between 0.01 and 1 M<sub>⊙</sub>. One can think of three possible solutions to this problem. Firstly, ‘reuse’ the old PN. The implication of this would be that the PNe around born–again post–AGB stars are ‘old’, at least older than the apparent age of the star.
In the case of A30, A78, and Sakurai’s object this certainly seems to be the case, but for the majority of \[WR\] stars this is not true: their PNe are very similar to those seen around stars which supposedly evolved straight off the AGB, implying that they have not suffered a late to very late He-shell flash. Secondly, perhaps it would be possible to accrete part of the old PN back onto the star. The scenario would be that when the fast wind stops, the pressure inside the nebula starts dropping and material from the swept–up nebula starts diffusing back in. There is little observational evidence for a process like this, but it is true that the Helix nebula is actually not as empty as one expects. The ‘hot bubble’ is filled with cometary knots and a more diffuse high ionization gas of a density of about 100 cm<sup>-3</sup> (see e.g. Meaburn et al. 1998). The evolution of ‘old’ PNe has not been studied well and requires some more attention. Still it is doubtful that a new PN made out of the material of a diffused old PN would look the same as a ‘normal’ PN. Thirdly, the star could lose 0.01 to 1 M<sub>⊙</sub> at a low velocity, in this way mimicking AGB mass loss. Also this seems unlikely, since the stars do not have that much mass to lose. In all, the (very) late He–shell model seems to be irreconcilable with the fact that the PNe around \[WR\] stars look so ‘normal’. ## 6 Conclusions The \[WR\] phenomenon shows us again that it is most useful to consider a star and its circumstellar environment together. The observed properties of the nebula around these stars can help us in understanding the nature of the \[WR\] phenomenon, and at the same time properties of \[WR\] stars can help us understand the formation of PNe. To sum up the conclusions reached in this paper: 1. The higher average expansion velocity of PNe around \[WR\] stars can be understood as being due to the higher mass loss rates from the star. 2. The typical \[WC\] abundances are expected to lead to a longer lasting momentum–driven phase in the formation of the nebulae. This phase may last until the wind reaches velocities of 1000 km s<sup>-1</sup>. 3. A longer lasting momentum–driven phase should lead to the nebulae becoming more affected by instabilities. This may be the explanation for the fact that a higher turbulent velocity is needed to explain the line shapes of PNe around \[WR\] stars. 4. A longer lasting momentum–driven phase allows asphericities in the stellar wind to have a larger effect on the shape of the PN. Since the observed nebulae show no sign of this, it implies that the post–AGB wind is mostly spherical. 5. Producing a second PN in the case of a born–again PN scenario is difficult to impossible. Consequently, born–again PNe should show a discrepancy between the age of the PN and that of the star. Since this is not observed for most \[WR\] stars, the born–again scenario seems not to apply. 6. The lack of correlation between PN morphology and the \[WR\] phenomenon shows that the ultimate mechanism to produce aspherical PNe is unrelated to the process which produces the \[WR\] star.
Hellberg and Manousakis reply: In our Letter we concluded that the ground state of the t-J model does not have stripes at $`J/t=0.35`$. In the preceding Comment, White and Scalapino (WS) raise several objections to our findings. We refute all of their points and argue that our analysis is indeed correct. The physical mechanism cited by WS explains why, if a striped state were the ground state of the t-J model, the domain boundaries would prefer a $`\pi `$-phase shift in the antiferromagnetic background. However, this mechanism does not favor the stripe state over other alternative candidates for the ground state of the t-J model in the low hole-doping region. One serious candidate is a state of electronic phase separation where an antiferromagnetic region is separated from the hole-rich region by only one (as opposed to infinitely many, as is the case for the striped state) energetically costly interface. When a finite-size system is in a region of the parameter space for which the infinite system would phase separate, its energy will be best minimized if the two components into which the finite system tends to phase-separate respect the geometry imposed by the boundaries. Thus an instability or near instability to phase separate may cause domain walls or other structures to form in a finite-size system. In the thermodynamic limit, the strong fluctuations in the one-dimensional stripes will destroy the stripes. Such fluctuation effects are suppressed in finite-size systems where only a few stripes are present and the length of the stripes is very limited. The argument cited by WS explains why, if one has stripes, one has to have a $`\pi `$-phase shift in the antiferromagnetic order parameter to accommodate hole motion. It does not explain why stripes are formed as opposed to a phase separated state. WS view their boundary condition as a symmetry breaking field whose strength can be taken to zero. However, this procedure requires taking the thermodynamic limit and studying whether or not the stripes remain. WS’s results obtained for cylindrical boundary conditions, viewed as a function of the number of legs in the cylinder, suggest that the stripes are strongly influenced by finite-size effects. Depending on the cylinder’s width, WS find stripes with different linear hole densities along the stripe. In six-leg ladders, the optimum linear density is $`\rho _6=2/3`$, and in eight-leg ladders, $`\rho _8=1/2`$. WS find that the addition of a $`t^{\prime }`$ term destroys stripes in their calculations. A next-nearest-neighbor hopping $`t^{\prime }`$ inhibits phase separation in the t-J model. If, in a particular finite geometry, the near instability to phase separate is manifested by stripe formation, adding a $`t^{\prime }`$ hopping will destabilize the stripes. WS are incorrect in stating that our conclusions would have been different if we had excluded the clusters which they believe are too one-dimensional. The (2,2) translation vectors are four lattice spacings in distance, just as the (0,4) translation vectors are. However, irrespective of which clusters we keep or exclude, our conclusions are unchanged. Even if we restrict ourselves to cluster No. 2, the lowest energy state of this cluster has energy $`E_0=-0.6397t`$ and is not striped. State (e), the lowest energy vertical stripe state, has energy $`E_e=-0.6353t`$. Thus even if we had only examined the cluster in which we found the vertical stripes, we would still conclude that these stripes are excited states.
WS believe a system with at least four holes is required to study stripe formation. If many-hole correlations are important, one needs to explain why stripes are seen in mean-field studies. Prelovsek and Zotos only found stripe correlations for large $`J/t`$ and did not study stripe correlations in two-hole clusters. The degenerate excited states (e) and (f) in our periodic cluster No. 2 are nearly identical to the stripes seen by WS. The charge density wave amplitude and the spin structure (including the $`\pi `$-phase shift) are the same. As shown in Fig. 3 of our Letter, the use of open boundary conditions in one direction breaks the degeneracy and stabilizes these states as the ground state. WS argue that our results might be a finite-size effect (FSE); however, our main reason for performing a small-cluster exact calculation was to study the role of FSEs in the formation of stripes. Thus, FSEs are welcome in our calculation. Since stripes are periodic structures, calculations on small periodic clusters that are commensurate with the stripe order, including the $`\pi `$-phase shift between stripes, such as our cluster No. 2, favor the formation of stripes. The fact that these stripes are not the ground state even of the cluster most favorable for their formation indicates that the ground state of the infinite system is not striped. We thank S.A. Kivelson for useful discussions. This work was supported by the Office of Naval Research. C. Stephen Hellberg<sup>1</sup> and E. Manousakis<sup>2</sup> <sup>1</sup>Center for Computational Materials Science, Naval Research Laboratory, Washington, DC 20375 <sup>2</sup>Department of Physics and MARTECH, Florida State University, Tallahassee, FL 32306-3016
# ALGEBRAICALLY LINEARIZABLE DYNAMICAL SYSTEMS R. Caseiro and J.P. Françoise<sup>∗∗</sup> Universidade de Coimbra, Departamento de Matemática, 3000 Coimbra, Portugal. and <sup>∗∗</sup>Université de Paris 6, UFR 920, tour 45-46, 4 place Jussieu, B.P. 172, Equipe ”Géométrie Différentielle, Systèmes Dynamiques et Applications” 75252 Paris, France. Summary The main result of this paper is an explicit linearization of dynamical systems of Ruijsenaars-Schneider (RS) type and of the perturbations introduced by F. Calogero of these systems with all orbits periodic of the same period. Several other systems share the existence of this explicit linearization, among them the Calogero-Moser system (with and without external potential) and the Calogero-Sutherland system. This explicit linearization is compared with the notion of maximal superintegrability, which has been discussed in several articles (to quote a few of them: Hietarinta, Henon, Harnad-Winternitz, S. Wojchiechowsky). Short title: Superintegrability Introduction Let $`H:V^{2m}\to R`$ be a Hamiltonian system defined on a symplectic manifold $`V^{2m}`$ of dimension $`2m`$, equipped with a symplectic form $`\omega `$. Recall that $`H`$ is said to be integrable in the Arnol’d-Liouville sense if $`H`$ displays $`m`$ generically independent first integrals (one of these may be the Hamiltonian itself) which are in involution for the Poisson bracket associated with the symplectic form $`\omega `$. A vector field $`X`$ on a manifold $`V`$ of dimension $`n`$ defines a flow and a dynamical system. The vector field (not necessarily Hamiltonian) is classically said to be maximally superintegrable if it has $`n-1`$ generically independent globally defined first integrals $`f_1,\mathrm{\dots },f_{n-1}`$. The orbits of $`X`$ are then contained in the connected components of the common level sets of the functions $`f_i,i=1,\mathrm{\dots },n-1`$. Some Hamiltonian systems are known to be maximally superintegrable and so they display $`2m-1`$ first integrals. This is so, for instance, for the rational Calogero-Moser system, the Kepler problem, and the isotropic oscillator. Recently, this specific class of Hamiltonian systems has attracted interest in several articles. In this article the definition of algebraic linearization is proposed in a slightly broader (and also more precise) sense. Definition A differential system is algebraically (resp. analytically) linearizable if there are $`n`$ globally defined functions (rational, resp. meromorphic) which are generically independent, so that the time evolution of the flow expressed in these functions is linear (in time) and algebraic in the initial coordinates. The first purpose of this article is to prove that the Hyperbolic and the Rational Ruijsenaars-Schneider systems are algebraically linearizable. The perturbations recently considered by Calogero of the Hyperbolic and Rational Ruijsenaars-Schneider systems, which display only periodic orbits of the same period, are algebraically linearizable as well. This is proved rather easily using the Lax matrix first introduced by Bruschi-Calogero and the extra equation which implements the integrability of these systems first introduced in the article. I.
Algebraic linearization of the rational Calogero-Moser system and of the rational Calogero-Moser system with an external quadratic potential This first section is devoted to the proof that the rational Calogero-Moser system (with or without an external quadratic potential) is algebraically linearizable. The usefulness of this (apparently) new notion is displayed on these classical examples. The rational Calogero-Moser system is represented by the Hamiltonian $$H=\frac{1}{2}\sum _{i=1}^{m}y_i^2+g^2\sum _{i\ne j}(x_i-x_j)^{-2}$$ $`(1.1)`$ where the constant $`g`$ is a parameter. J. Moser introduced the matrix function: $$L(x,y):\quad L_{ij}=y_i\delta _{ij}+g\,\mathrm{i}\,(x_i-x_j)^{-1}(1-\delta _{ij}),$$ $`(1.2)`$ and observed that the time evolution of this matrix function $`L(x,y)`$ along the flow is of Lax pair type: $$\dot{L}=[L,M].$$ $`(1.3)`$ This Lax pair equation is supplemented with the equation: $$\dot{X}=[X,M]+L,$$ $`(1.4)`$ displayed by the diagonal matrix $`X`$: $$X(x,y):\quad X_{ij}=x_i\delta _{ij}.$$ $`(1.5)`$ Following the classical approach, introduce the rational functions: $$F_k=\mathrm{tr}(L^k).$$ $`(1.6)`$ The Lax matrix equation yields: $$\dot{F}_k=0.$$ $`(1.7)`$ Introduce then the functions: $$G_k=\mathrm{tr}(XL^k),$$ $`(1.8)`$ which undergo the time evolution: $$\dot{G}_k=F_{k+1}.$$ $`(1.9)`$ Clearly, the whole collection of the rational functions $`F_k,G_k`$ provides the algebraic linearization of the system. Indeed, classical superintegrability can be recovered as follows: $$F_k\ (k=1,\dots ,m),\qquad H_k=F_kG_k-F_{k+1}G_{k-1}\ (k=1,\dots ,m-1)$$ provide $`2m-1`$ integrals of motion. The next classical example to be considered is the rational Calogero-Moser system with an external quadratic potential. The system is described by the Hamiltonian: $$H=\frac{1}{2}\sum _{i=1}^{m}y_i^2+g^2\sum _{i\ne j}(x_i-x_j)^{-2}+\frac{\lambda ^2}{2}\sum _{i=1}^{m}x_i^2.$$ $`(1.10)`$ The equations (1.3) and (1.4), with the same matrices $`L`$ and $`X`$, get modified as follows: $$\dot{L}=[L,M]-\lambda ^2X,$$ $`(1.11a)`$ $$\dot{X}=[X,M]+L.$$ $`(1.11b)`$ The classical approach consists in introducing the matrices: $$Z=L+\mathrm{i}\lambda X,$$ $`(1.12a)`$ $$W=L-\mathrm{i}\lambda X.$$ $`(1.12b)`$ These matrices undergo the time evolution: $$\dot{Z}=\mathrm{i}\lambda Z+[Z,M],$$ $`(1.13a)`$ $$\dot{W}=-\mathrm{i}\lambda W+[W,M].$$ $`(1.13b)`$ It was then observed that the matrix $`P=ZW`$ defines a Lax matrix for the system: $$\dot{P}=[P,M].$$ $`(1.14)`$ Here, we note that the functions: $$F_k=\mathrm{tr}(ZP^k),$$ $`(1.15a)`$ $$G_k=\mathrm{tr}(WP^k),$$ $`(1.15b)`$ yield: $$\dot{F}_k=\mathrm{i}\lambda F_k,$$ $`(1.16a)`$ $$\dot{G}_k=-\mathrm{i}\lambda G_k.$$ $`(1.16b)`$ Thus these functions provide the algebraic linearization of the system. II. Algebraic linearization of the Calogero-Sutherland system The Calogero-Sutherland system is defined by the Hamiltonian $$H(x,y)=\frac{1}{2}\sum _{i=1}^{m}y_i^2+\frac{g^2}{2}\sum _{i,j=1,\ i\ne j}^{m}\mathrm{sinh}^{-2}(x_i-x_j)$$ $`(2.1)`$ and has a Lax pair $`\dot{L}=[L,M]_{-}`$ with Lax matrix $$L_{ij}=y_i\delta _{ij}+\frac{\sqrt{-1}\,g}{\mathrm{sinh}(x_i-x_j)}(1-\delta _{ij}).$$ $`(2.2)`$ Defining the matrix $`X`$ by $$X_{ij}=\mathrm{exp}(2x_i)\delta _{ij},$$ $`(2.3)`$ we get the dynamical equation $$\dot{X}=[X,L]_++[X,M]_{-}.$$ $`(2.4)`$ Above and throughout, of course, $`[A,B]_{-}\equiv AB-BA`$ and $`[A,B]_+\equiv AB+BA`$.
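Before stating the analogous theorem for the Calogero-Sutherland system, it is instructive to check the Section I construction numerically. The following Python sketch is our illustration, not part of the original paper; the coupling $`g=1`$ and the initial data are arbitrary choices. It integrates the rational Calogero-Moser flow and verifies (1.7) and (1.9): the $`F_k`$ stay constant while the $`G_k`$ grow linearly with slopes $`F_{k+1}`$.

```python
import numpy as np

g = 1.0

def lax_L(x, y):
    # Moser's Lax matrix (1.2): L_ij = y_i d_ij + i g (x_i - x_j)^{-1} (1 - d_ij)
    n = len(x)
    dx = x[:, None] - x[None, :] + np.eye(n)      # dummy 1 on the diagonal
    return np.diag(y).astype(complex) + 1j * g * (1 - np.eye(n)) / dx

def rhs(x, y):
    # Equations of motion of the Hamiltonian (1.1)
    n = len(x)
    dx = x[:, None] - x[None, :] + np.eye(n)
    return y, 4 * g**2 * ((1 - np.eye(n)) / dx**3).sum(axis=1)

def step(x, y, h):
    # One classical RK4 step
    k1 = rhs(x, y)
    k2 = rhs(x + h / 2 * k1[0], y + h / 2 * k1[1])
    k3 = rhs(x + h / 2 * k2[0], y + h / 2 * k2[1])
    k4 = rhs(x + h * k3[0], y + h * k3[1])
    return (x + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            y + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

def F(x, y, k):
    return np.trace(np.linalg.matrix_power(lax_L(x, y), k)).real

def G(x, y, k):
    return np.trace(np.diag(x) @ np.linalg.matrix_power(lax_L(x, y), k)).real

x, y = np.array([-1.1, 0.3, 1.7]), np.array([0.2, -0.5, 0.4])
h, nsteps = 1e-4, 2000
F0 = [F(x, y, k) for k in (1, 2, 3)]
G0 = [G(x, y, k) for k in (0, 1, 2)]
for _ in range(nsteps):
    x, y = step(x, y, h)
print("F_k drift:", [F(x, y, k) - F0[k - 1] for k in (1, 2, 3)])     # ~0: first integrals
print("dG_k/dt: ", [(G(x, y, k) - G0[k]) / (nsteps * h) for k in (0, 1, 2)])
print("F_{k+1}: ", [F(x, y, k) for k in (1, 2, 3)])                  # matches the line above
```

Since the $`F_k`$ are exactly conserved, the finite-difference slopes of the $`G_k`$ reproduce $`F_{k+1}`$ to the accuracy of the integrator.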
Consider the functions $$F_k=\mathrm{Tr}(L^k),\qquad k=1,\dots ,m,$$ $`(2.5a)`$ $$G_k=\mathrm{Tr}(XL^k),\qquad k=1,\dots ,m.$$ $`(2.5b)`$ The functions $`F_k`$ are first integrals of the dynamical system defined by (2.1). Newton’s formulae relate these constants of motion with the coefficients $`A_0,\dots ,A_{n-1}`$ of the characteristic polynomial of the matrix $`L`$. The Cayley-Hamilton theorem yields: $$L^n=A_{n-1}L^{n-1}+A_{n-2}L^{n-2}+\mathrm{\cdots }+A_0I.$$ $`(2.6)`$ Theorem II-1 The functions $`G_k`$ undergo a linear evolution under the time evolution of the system defined by (2.1). Proof: Since $`(L,M)`$ is a Lax pair of the system and $`X`$ satisfies (2.4), $$\dot{G}_k=2\,\mathrm{tr}(XL^{k+1})=2G_{k+1}.$$ $`(2.7)`$ Thus, the vector $`G=(G_0,\dots ,G_{n-1})`$ displays the time evolution: $$\dot{G}=AG,$$ $`(2.8)`$ where the matrix $`A`$ has coefficients which are first integrals of the differential system: $$A_{ij}=2\delta _{i+1,j}+2A_{j-1}\delta _{i,n}.$$ $`(2.9)`$ So, the Calogero-Sutherland system is algebraically linearizable. III. Algebraic linearization of the Hyperbolic and Rational Ruijsenaars-Schneider systems The dynamical systems of Ruijsenaars-Schneider (RS) type characterized by the equations of motion $$\ddot{z}_j=\sum _{k=1,k\ne j}^{n}\dot{z}_j\dot{z}_kf(z_j-z_k),\qquad j=1,\dots ,n,$$ $`(3.1)`$ are “integrable” or “solvable”, if $$f(z)=2/z\qquad \text{“case (i)”},$$ $`(3.2a)`$ $$f(z)=2/[z(1+r^2z^2)]\qquad \text{“case (ii)”},$$ $`(3.2b)`$ $$f(z)=2a\,\mathrm{cotgh}(az)\qquad \text{“case (iii)”},$$ $`(3.2c)`$ $$f(z)=2a/\mathrm{sinh}(az)\qquad \text{“case (iv)”},$$ $`(3.2d)`$ $$f(z)=2a\,\mathrm{cotgh}(az)/[1+r^2\mathrm{sinh}^2(az)]\qquad \text{“case (v)”},$$ $`(3.2e)`$ $$f(z)=a𝒫^{\prime }(az)/[𝒫(az)-𝒫(ab)]\qquad \text{“case (vi)”}.$$ $`(3.2f)`$ Of course the solutions $`z_j(t)`$ of (3.1) move in the complex plane; and indeed all the constants appearing in (3.2), namely $`r`$, $`a`$ and $`b`$, as well as the constants $`\omega `$ and $`\omega ^{\prime }`$ implicit in the definition of the Weierstrass function $`𝒫(z)\equiv 𝒫(z|\omega ,\omega ^{\prime })`$, might be complex. Indeed, the main contribution of this paper is to solve explicitly several systems following a scheme which may be of broader interest. The starting point of the analysis is the observation that (3.1) with (3.2e) is equivalent to the following “Lax-type” ($`n\times n`$)-matrix equation: $$\dot{L}=[L,M]_{-},$$ $`(3.3)`$ with $$L_{jk}=\delta _{jk}\dot{z}_j+(1-\delta _{jk})(\dot{z}_j\dot{z}_k)^{1/2}\alpha (z_j-z_k),$$ $`(3.4)`$ $$M_{jk}=\delta _{jk}\sum _{m=1,m\ne j}^{n}\dot{z}_m\beta (z_j-z_m)+(1-\delta _{jk})(\dot{z}_j\dot{z}_k)^{1/2}\gamma (z_j-z_k),$$ $`(3.5)`$ and $$\alpha (z)=\mathrm{sinh}(a\mu )/\mathrm{sinh}[a(z+\mu )],$$ $`(3.6a)`$ $$\beta (z)=a\,\mathrm{cotgh}(a\mu )/[1+r^2\mathrm{sinh}^2(az)],$$ $`(3.6b)`$ $$\gamma (z)=a\,\mathrm{cotgh}(az)\,\alpha (z),$$ $`(3.6c)`$ where $$\mathrm{sinh}(a\mu )=\mathrm{i}/r.$$ $`(3.7)`$ It was furthermore recently noted that the diagonal matrix $$X(t)=\mathrm{diag}\{\mathrm{exp}[2az_j(t)]\}$$ $`(3.8)`$ undergoes the following time evolution: $$\dot{X}=[X,M]_{-}+a[X,L]_+.$$ $`(3.9)`$ Let $`F_k`$ and $`G_k`$ be the functions defined as: $$F_k=\mathrm{tr}(L^k),\qquad G_k=\mathrm{tr}(XL^k).$$ $`(3.10)`$ The functions $`F_k`$ are first integrals of the dynamical system defined by (3.1). Newton’s formulae relate these constants of motion with the coefficients $`A_0,\dots ,A_{n-1}`$ of the characteristic polynomial of the matrix $`L`$.
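As a small aside (our illustration, not from the paper), this step from traces to characteristic-polynomial coefficients is easy to make concrete: Newton's identities recover $`A_0,\dots ,A_{n-1}`$ from the power sums $`\mathrm{tr}(L^k)`$, and the Cayley-Hamilton identity quoted next can then be verified directly on a random matrix.

```python
import numpy as np

def coeffs_from_traces(L):
    """Recover A_0..A_{n-1} in L^n = A_{n-1} L^{n-1} + ... + A_0 I
    from the power sums p_k = tr(L^k) via Newton's identities."""
    n = L.shape[0]
    p = [np.trace(np.linalg.matrix_power(L, k)) for k in range(1, n + 1)]
    e = [1.0]                                    # elementary symmetric polynomials, e_0 = 1
    for k in range(1, n + 1):                    # k e_k = sum_i (-1)^{i-1} e_{k-i} p_i
        e.append(sum((-1) ** (i - 1) * e[k - i] * p[i - 1] for i in range(1, k + 1)) / k)
    # Cayley-Hamilton: L^n = sum_k (-1)^{k-1} e_k L^{n-k}, i.e. A_{n-k} = (-1)^{k-1} e_k
    return [(-1) ** (k - 1) * e[k] for k in range(n, 0, -1)]   # [A_0, ..., A_{n-1}]

rng = np.random.default_rng(0)
L = rng.standard_normal((4, 4))
A = coeffs_from_traces(L)
lhs = np.linalg.matrix_power(L, 4)
rhs = sum(a * np.linalg.matrix_power(L, k) for k, a in enumerate(A))
print(np.max(np.abs(lhs - rhs)))   # ~1e-13: the identity holds
```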
The Cayley-Hamilton theorem yields: $$L^n=A_{n-1}L^{n-1}+A_{n-2}L^{n-2}+\mathrm{\cdots }+A_0I.$$ $`(3.11)`$ Theorem III-1 The functions $`G_k`$ undergo a linear evolution under the time evolution of the system (3.1). Proof: The equations (3.3) and (3.9) yield: $$\dot{G}_k=2a\,\mathrm{tr}(XL^{k+1})=2aG_{k+1}.$$ $`(3.12)`$ Thus, the vector $`G=(G_0,\dots ,G_{n-1})`$ displays the time evolution: $$\dot{G}=AG,$$ $`(3.13)`$ where the matrix $`A`$ has coefficients which are first integrals of the differential system: $$A_{ij}=2a\delta _{i+1,j}+2aA_{j-1}\delta _{i,n}.$$ $`(3.14)`$ F. Calogero introduced the following perturbation of the trigonometric and rational Ruijsenaars-Schneider systems, characterized by the equations of motion: $$\ddot{z}_j+\mathrm{i}\mathrm{\Omega }\dot{z}_j=\sum _{k=1,k\ne j}^{n}\dot{z}_j\dot{z}_kf(z_j-z_k),\qquad j=1,\dots ,n.$$ $`(3.15)`$ F. Calogero made the remarkable conjecture, now proved in the trigonometric and rational cases, that all the orbits of the dynamical system defined by (3.15) are periodic with the same period $`2\pi /\mathrm{\Omega }`$. The equations under consideration here are modified due to the presence of the perturbation. The Lax equation (3.3) gets modified into: $$\dot{L}=[L,M]_{-}+\mathrm{i}\mathrm{\Omega }L,$$ $`(3.16)`$ and the time evolution of the matrix $`X`$ is not modified. This yields the new time evolution for the functions $`F_k`$ and $`G_k`$: $$\dot{F}_k=\mathrm{i}\mathrm{\Omega }kF_k,$$ $`(3.17a)`$ $$\dot{G}_k=2a\,\mathrm{tr}(XL^{k+1})+\mathrm{i}\mathrm{\Omega }k\,\mathrm{tr}(XL^k)=2aG_{k+1}+\mathrm{i}\mathrm{\Omega }kG_k.$$ $`(3.17b)`$ REFERENCES Barucchi, G., Regge, T.: Conformal properties of a class of exactly solvable $`N`$-body problems in space dimension one. J. Math. Phys. 18, no. 6, 1149-1153 (1977). Bruschi, M., Calogero, F.: The Lax representation for an integrable class of relativistic dynamical systems. Commun. Math. Phys. 109, 481-492 (1987). Calogero, F.: A class of integrable hamiltonian systems whose solutions are (perhaps) all completely periodic. J. Math. Phys. 38, 5711-5719 (1997). Calogero, F.: Tricks of the trade: relating and deriving solvable and integrable dynamical systems. To appear in the Proceedings of the International Workshop on Calogero-Moser-Sutherland type models, held at the Centre de Recherches Mathématiques de l’Université de Montréal, Canada, in March 1997 (Springer, in press). Calogero, F.: Motion of poles and zeros of nonlinear and linear partial differential equations and related many-body problems. Nuovo Cimento 43B, 177-241 (1978). Calogero, F.: A solvable N-body problem in the plane. I. J. Math. Phys. 37, 1735-1759 (1996). Calogero, F.: Integrable and solvable many-body problems in the plane via complexification. J. Math. Phys. 39, 5268-5291 (1998). Calogero, F., Françoise, J.-P.: Solution of certain integrable dynamical systems of Ruijsenaars-Schneider type with completely periodic trajectories. To appear. Françoise, J.-P.: Canonical partition functions of Hamiltonian systems and the stationary phase formula. Commun. Math. Phys. 117, 37-47 (1988). Harnad, J., Winternitz, P.: Harmonics on hyperspheres, separation of variables and the Bethe ansatz. Lett. Math. Phys. 33, no. 1, 61-74 (1995). Henon, M.: Numerical exploration of Hamiltonian systems. In: Chaotic behavior of deterministic systems (Les Houches 1981), 53-170. North-Holland, Amsterdam-New York (1983). Hietarinta, J.: Direct methods for the search of the second invariant. Phys. Rep. 147, 87-154 (1987).
Krichever, I.M.: Elliptic solutions of the Kadomtsev-Petviashvili equation and integrable systems of particles. Funct. Anal. Appl. 14, 282-289 (1981). See, for instance: Ruijsenaars, S.N.M., Schneider, H.: A new class of integrable systems and its relation to solitons. Ann. Phys. (NY) 170, 370-405 (1986); Ruijsenaars, S.N.M.: Systems of Calogero-Moser type. Proceedings of the 1994 Banff Summer School on Particles and Fields, CRM Proceedings and Lecture Notes (in press), and the papers referred to there. Wojciechowsky, S.: Superintegrability of the Calogero-Moser system. Phys. Lett. A 95, 279-281 (1983).
no-problem/9910/astro-ph9910466.html
ar5iv
text
# Relativistic Jets from Collapsars ## 1 Motivation and numerical setup Various catastrophic collapse events have been proposed to explain the energies released in a gamma-ray burst (GRB), including compact binary system mergers, collapsars and hypernovae. These models all rely on a common engine, namely a stellar-mass black hole (BH) which accretes several solar masses of matter from a disk (formed during a merger or by a non-spherical collapse). A fraction of the gravitational binding energy released by accretion is converted into a pair fireball. Provided the baryon load of the fireball is not too large, the baryons are accelerated together with the e$`^+`$e<sup>-</sup> pairs to ultra-relativistic speeds (Lorentz factors $`>10^2`$). The existence of such relativistic flows is supported by radio observations of GRB 980425. The dynamics of spherically symmetric relativistic fireballs has been studied by several authors by means of 1D Lagrangian hydrodynamic simulations. It has been argued that the rapid temporal decay of several GRB afterglows is more consistent with the evolution of a relativistic jet after it slows down and spreads laterally than with a spherical blast wave. The lack of a significant radio afterglow in GRB 990123 provides independent evidence for a jet-like geometry. Motivated by these observations and by the collapsar model, we have simulated the propagation of jets from collapsars using relativistic hydrodynamics. The continued evolution of rotating helium stars, whose iron core collapse does not produce a successful outgoing shock but instead forms a BH surrounded by a compact accretion disk, has been explored previously. Assuming that the efficiency of energy deposition by $`\nu \overline{\nu }`$-annihilation or, e.g., magneto-hydrodynamic processes is higher in the polar regions, these simulations obtained relativistic jets along the rotation axis which remained highly focused and capable of penetrating the star. However, as these simulations were performed with a Newtonian hydrodynamic code, appreciably superluminal speeds in the jet flow were obtained. We have performed axisymmetric relativistic simulations of jets from collapsars starting from Model 14A of the collapsar simulations mentioned above. The simulations have been performed with GENESIS, a multidimensional relativistic hydrodynamic code (based on Godunov-type schemes), using 2D spherical coordinates ($`r,\theta `$). GENESIS employs a third-order explicit Runge-Kutta method to advance the relativistic Euler equations, written in conservation form, in time. High spatial order is provided by a PPM reconstruction that sets up the values of the physical variables in order to solve linearized Riemann problems at every cell interface (using Marquina's flux formula). The innermost $`2.03M_{\odot }`$ representing the iron core were removed from the helium star model by introducing an inner boundary at a radius of $`200`$ km. Once the central BH had acquired a mass of $`3.762M_{\odot }`$, we mapped the model to our computational grid. In the $`r`$-direction the computational grid consists of 200 zones spaced logarithmically between the inner boundary and the surface of the helium star at $`R_{}=2.98\times 10^{10}`$ cm.
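As an illustration of the radial gridding just described, a few lines of Python construct such a logarithmically spaced mesh (our sketch, not the GENESIS source; the 200 km inner boundary and the zone count are from the text, while the geometric-mean zone centering is an assumption on our part):

```python
import numpy as np

r_in, r_out, nr = 2.0e7, 2.98e10, 200            # cm; inner boundary, stellar surface, zones
edges = np.geomspace(r_in, r_out, nr + 1)        # nr zones need nr+1 interfaces
centers = np.sqrt(edges[:-1] * edges[1:])        # geometric-mean zone centers (assumption)
print(edges[1] / edges[0])                       # constant ratio (r_out/r_in)^(1/200) ~ 1.037
```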
Assuming equatorial-plane symmetry we use four different zonings in the angular direction: 44, 90 and 180 uniform zones (i.e., $`2^{}`$, $`1^{}`$ and $`0.5^{}`$ angular resolution), and 100 nonuniform zones covering the polar region $`0^{}\le \theta \le 30^{}`$ with 60 equidistant zones ($`0.5^{}`$ resolution) and the remaining 40 zones being logarithmically distributed between $`30^{}\le \theta \le 90^{}`$. The gravitational field of the BH is described by the static Schwarzschild metric, neglecting the effects due to self-gravity of the star. We used an EOS which includes the contribution of non-relativistic nucleons treated as a mixture of Boltzmann gases, and radiation, as well as an approximate correction due to $`e^+e^{}`$ pairs. Full ionization and non-degeneracy of the electrons is assumed. We advect (i.e., we do not solve additional Riemann problems for each component) nine non-reacting nuclear species which are present in the initial model. In a consistent collapsar model the jet will be launched by any physical process which gives rise to a local deposition of energy and/or momentum. We mimic this process by depositing energy at a constant rate, $`\dot{E}`$, within a $`30^{}`$ cone around the rotation axis of the progenitor star. In the radial direction the deposition region extends from the inner boundary to a radius of $`6\times 10^7`$ cm. We consider two cases that bracket the expected $`\dot{E}`$ of the collapsar models: $`10^{50}`$ erg/s and $`10^{51}`$ erg/s. ## 2 Results Low energy deposition rate (Model A). Using a constant $`\dot{E}=10^{50}`$ erg/s, a relativistic jet forms within a fraction of a second and starts to propagate along the rotation axis (Fig. 1). The jet exhibits all the typical morphological elements: a terminal bow shock, a narrow cocoon, a contact discontinuity separating stellar and jet matter, and a hot spot. The propagation of the jet is unsteady because of density inhomogeneities in the star. The Lorentz factor of the jet, $`W`$, increases non-monotonically with time, while the density drops to $`10^{-6}`$ g/cm<sup>3</sup>. The density profile shows large variations (up to a factor of 100) due to internal shocks. The mean density in the jet is $`10^{-2}`$-$`1`$ g/cm<sup>3</sup>. Some of the internal shocks are biconical and recollimate the beam. These shocks develop during the jet's propagation and may provide the "internal shocks" proposed to explain the observed gamma-ray emission. A particularly strong recollimation shock forms during the early stages of the evolution, followed by a strong rarefaction that causes the largest acceleration of the beam material, giving rise to a maximum in $`W`$. When the jet encounters a region along the axis where the density gradient is positive, the jet's head is decelerated, while a central channel in the beam is cleaned by outflow into the cocoon through the head. This leads to an acceleration of the beam. The combination of both effects (deceleration of the head and beam acceleration) increases the strength of the internal shocks. The relativistic treatment of the hydrodynamics leads to an overall qualitatively similar evolution to the Newtonian simulations (formation of a jet), which is, however, quantitatively very different. We find that the results strongly depend on the angular resolution, and the minimum acceptable one is $`0.5^{}`$ (at least near the axis). At this resolution we find $`W_{\mathrm{max}}`$ of 15-20 (at shock break-out) at a radius of $`8\times 10^9`$ cm.
Within the uncertainties of the jet mass determination due to finite zoning and the lack of a precise numerical criterion to identify jet matter, the baryon load, $`\eta `$, seems to decrease with increasing resolution. In the highest resolution run we find $`\eta =1.3\pm 1.2`$ at shock break-out (see also Sect. 4). High energy deposition rate (Model B). Enhancing $`\dot{E}`$ tenfold ($`\dot{E}=10^{51}`$ erg/s), the jet flow reaches larger values of $`W_{\mathrm{max}}`$. We observe transients during which $`W_{\mathrm{max}}`$ becomes as large as 40 ($`W_{\mathrm{max}}=33.3`$ at shock breakout). The jet propagates faster than in model A. The time required to reach the surface of the star is 2.27 s instead of 3.35 s. The opening angle of the jet at shock breakout is $`10^{}`$, i.e., the jet is less collimated than in model A. The strong recollimation shock present in model A is not so evident here. Instead, several biconical shocks are observed, and $`W`$ near the head of the jet is larger (about 22 in the final model) because, due to the larger $`\dot{E}`$, the central funnel is evacuated faster, and because the mean density in the jet is 5 times smaller than in model A ($`\eta `$ being twice as large). Evolution after shock breakout. After reaching the stellar surface the relativistic jet propagates through a medium of decreasing density, continuously releasing energy into a medium whose pressure is negligible compared to that in the jet cavity, and whose density is (initially) of the same order as that of the jet. These are jump conditions that generate a strong blast wave. The external density gradient determines whether the shock will accelerate or decelerate with time. In order to satisfy the conditions for accelerating shocks, we have generated a Gaussian atmosphere matching an external uniform medium. We use models A and B to simulate the evolution after shock breakout. The computational domain is extended for this purpose to a radius of $`R_t=7.6\times 10^{10}`$ cm. The jet reaches $`R_t`$ (from the stellar surface) after 1.8 s in both models, i.e., the mean propagation velocity is $`0.85c`$ (almost three times larger than that inside the star). The evolution after shock breakout can be divided into three epochs (see Figs. 1 and 2), which are related to (i) the external thermodynamic gradients and (ii) the importance of the axial momentum flux relative to the pressure in the jet cavity. Both effects determine the shape of the expanding bubble (prolate; see Figs. 1 and 2) during the post-breakout evolution. However, when the jet reaches the uniform part of the circumstellar environment, the shape changes appreciably, because the sideways expansion is faster. We have not followed the evolution long enough to see what happens when most of the bubble has reached the uniform part of the environment. Nevertheless, we can infer from Fig. 2 that the widening rate reduces with time in a way similar to what has happened to the axial expansion. At later times most of the bubble is inside the uniform medium, and the bubble will eventually be pressure driven. Hence an isotropic expansion is expected. After shock breakout there are transients in which $`W_{\mathrm{max}}`$ becomes almost 50 in some parts of the beam; $`W_{\mathrm{max}}`$ is again obtained behind the strongest recollimation shock. The Lorentz factor near the boundary of the cavity blown by the jet grows from about 1 (at shock breakout) to about 3 in both models, decreasing with latitude.
At the end of the simulation $`W_{\mathrm{max}}`$ is 29.35 (44.17) for model A (B), which is still smaller than the values required by the fireball model. However, our simulations have not been pushed far enough in time yet and, therefore, they can (at the present stage) neither account for the observational properties of GRBs nor for those of their afterglows. Instead, our set of numerical models can be regarded as simulations of a proto-GRB, because the scales treated in the simulations are still more than 100 times smaller than the typical distances at which the fireball eventually becomes optically thin ($`10^{13}`$ cm).
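As a quick arithmetic check of the post-breakout propagation speed quoted above (a sketch of ours, not part of the paper):

```python
c = 2.998e10                          # speed of light in cm/s
R_star, R_t, t = 2.98e10, 7.6e10, 1.8 # stellar surface, outer boundary (cm), time (s)
print((R_t - R_star) / t / c)         # ~0.86, consistent with the quoted ~0.85c
```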
no-problem/9910/cond-mat9910288.html
ar5iv
text
# Nucleation of vortex arrays in rotating anisotropic Bose-Einstein condensates ## Abstract The nucleation of vortices and the resulting structures of vortex arrays in dilute, trapped, zero-temperature Bose-Einstein condensates are investigated numerically. Vortices are generated by rotating a three-dimensional, anisotropic harmonic atom trap. The condensate ground state is obtained by propagating the Gross-Pitaevskii equation in imaginary time. Vortices first appear at a rotation frequency significantly larger than the critical frequency for vortex stabilization. This is consistent with a critical velocity mechanism for vortex nucleation. At higher frequencies, the structures of the vortex arrays are strongly influenced by trap geometry. Since the experimental achievement of Bose-Einstein condensation (BEC) in confined alkali gases, the possibility of generating vortices in confined weakly-interacting dilute Bose gases has been intensively studied. While theoretical investigations of stability have generally been restricted to the case of a single vortex, the proposed experimental techniques may induce several vortices simultaneously. Under appropriate stabilizing conditions, such as a continuously applied torque, these vortices would form an array akin to those obtained in rotating superfluid helium. A standard approach used to 'spin up' superfluid helium is to rotate the container at an angular frequency $`\mathrm{\Omega }`$. Aside from significant hysteresis effects, vortices tend to first appear at a frequency $`\mathrm{\Omega }_\nu `$ whose value is comparable to the critical frequency $`\mathrm{\Omega }_c`$ at which the presence of vortices lowers the free energy of the interacting system. Energy minimization arguments have also yielded vortex arrays that are very similar to those observed experimentally. Despite these successes, the mechanisms for vortex nucleation by rotation remain poorly understood; important factors are thought to include the presence of a normal fluid, impurities, and surface roughness. It has been suggested that vortices may be similarly generated in the dilute Bose gases by rotating the trap about its center. Evidently, a harmonic potential can transfer angular momentum to the gas only if it is anisotropic in the plane of rotation. While vortices in such a system at zero temperature have been shown to become energetically stable for $`\mathrm{\Omega }>\mathrm{\Omega }_c`$, the particle flow could remain irrotational at these angular frequencies since there exists an energy barrier to vortex formation. Suppression of this barrier could be induced by application of a perturbing potential near the edge of the confined gas, as has been simulated in the low-density limit. One of the primary motivations for the present work, however, is to determine if there exists any intrinsic mechanism for vortex nucleation in a dilute quantum fluid that is free of impurities, surface effects, and thermal atoms. We find that vortices can indeed be generated by rotating Bose condensates confined in an anisotropic harmonic trap. The value of $`\mathrm{\Omega }_\nu `$ at which vortices are spontaneously nucleated is somewhat larger than $`\mathrm{\Omega }_c`$. For $`\mathrm{\Omega }>\mathrm{\Omega }_\nu `$ multiple vortices appear simultaneously, in patterns that depend upon the geometry of the trap. The dynamics of a dilute Bose condensate at zero temperature are governed by the time-dependent Gross-Pitaevskii (GP) equation.
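The ground states studied below are obtained by imaginary-time relaxation of this equation, a procedure described in detail further on. As a concrete illustration of that method, the following is a minimal two-dimensional Python sketch of norm-preserving imaginary-time propagation. It is a toy version of ours only: it uses a split-step Fourier scheme rather than the paper's Gauss-Hermite DVR with adaptive Runge-Kutta stepping, it omits the rotation term and the $`z`$ direction, and the values of $`\eta `$, the grid, and the step size are arbitrary choices.

```python
import numpy as np

n, Lbox, ds = 128, 16.0, 1e-3                       # grid points, box size, imaginary-time step
alpha, eta = np.sqrt(2.0), 50.0                     # trap anisotropy; eta is an arbitrary choice
x = (np.arange(n) - n // 2) * (Lbox / n)
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2 * np.pi * np.fft.fftfreq(n, d=Lbox / n)
KX, KY = np.meshgrid(k, k, indexing="ij")
Vtrap = 0.5 * (X**2 + alpha**2 * Y**2)              # harmonic trap (oscillator units)
Kin = np.exp(-0.25 * ds * (KX**2 + KY**2))          # half-step kinetic factor exp(-ds*T/2)

psi = np.exp(-(X**2 + Y**2) / 4).astype(complex)    # arbitrary smooth starting state
dA = (Lbox / n) ** 2
for _ in range(5000):
    psi = np.fft.ifft2(Kin * np.fft.fft2(psi))                       # kinetic half step
    psi *= np.exp(-ds * (Vtrap + 4 * np.pi * eta * np.abs(psi)**2))  # potential + Hartree step
    psi = np.fft.ifft2(Kin * np.fft.fft2(psi))                       # kinetic half step
    psi /= np.sqrt((np.abs(psi)**2).sum() * dA)                      # restore unit norm
print("peak density:", np.abs(psi).max()**2)        # relaxed toward the interacting ground state
```

Repeated application of $`e^{-\mathrm{\Delta }s\,H}`$ with renormalization projects any smooth trial state onto the lowest-lying state, which is the essence of the steepest-descent method used in the paper.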
Previous simulations of the GP equation have demonstrated that vortex-antivortex pairs or vortex half-rings can be generated by superflow around a stationary obstacle or through a small aperture. In those simulations, the vortex pairs form when the magnitude of the superfluid velocity exceeds a critical value which is proportional to the local sound velocity; recent experimental results support this conclusion. To our knowledge, no numerical investigation of vortex nucleation in three-dimensional inhomogeneous rotating superfluids has hitherto been attempted. The numerical calculations presented here model the experimental apparatus of Kozuma et al., where <sup>23</sup>Na atoms are confined in a completely anisotropic three-dimensional harmonic oscillator potential. In the presence of a constant external torque, the condensate obeys the time-dependent GP equation in the rotating reference frame: $$i\partial _\tau \psi (𝐫,\tau )=\left[T+V_{\mathrm{trap}}+V_\mathrm{H}-\mathrm{\Omega }L_z\right]\psi (𝐫,\tau ),$$ (1) where the kinetic energy is $`T=-\frac{1}{2}\nabla ^2`$, the trap potential is $`V_{\mathrm{trap}}=\frac{1}{2}\left(x^2+\alpha ^2y^2+\beta ^2z^2\right)`$, and the Hartree term is $`V_\mathrm{H}=4\pi \eta |\psi |^2`$. The angular momentum operator $`L_z=i\left(y\partial _x-x\partial _y\right)`$ rotates the system about the $`z`$-axis at the trap center at a constant angular frequency $`\mathrm{\Omega }`$. The trapping frequencies are $`(\omega _x,\omega _y,\omega _z)=\omega _x(1,\alpha ,\beta )`$ with $`\omega _x=2\pi \times 26.87`$ rad/s, $`\alpha =\sqrt{2}`$, and $`\beta =1/\sqrt{2}`$. Normalizing the condensate, $`\int 𝑑𝐫|\psi (𝐫,\tau )|^2=1`$, yields the scaling parameter $`\eta =N_0a/d_x`$, where $`a=2.75`$ nm is the s-wave scattering length for Na and $`N_0`$ is the number of condensate atoms. Unless explicitly stated otherwise, energy, length, and time are given throughout in scaled harmonic oscillator units $`\mathrm{\hbar }\omega _x`$, $`d_x=\sqrt{\mathrm{\hbar }/M\omega _x}\approx 4.0\mu `$m, and $`\mathrm{T}=\omega _x^{-1}\approx 6`$ ms, respectively. The stationary ground-state solution of the GP equation, defined as that which minimizes the value of the chemical potential, is found by norm-preserving imaginary-time propagation (the method of steepest descents) using an adaptive-stepsize Runge-Kutta integrator. The complex condensate wavefunction is expressed within a discrete-variable representation (DVR) based on Gauss-Hermite quadrature, and is assumed to be even under inversion of $`z`$. The numerical techniques are described in greater detail elsewhere. The initial state (at zero imaginary time $`\stackrel{~}{\tau }\equiv i\tau =0`$) is taken to be the vortex-free Thomas-Fermi (TF) wavefunction $`\psi _{\mathrm{TF}}=\sqrt{(\mu _{\mathrm{TF}}-V_{\mathrm{trap}})/4\pi \eta }`$, which is the time-independent solution of Eq. (1), neglecting $`T`$ and $`L_z`$, with chemical potential $`\mu _{\mathrm{TF}}=\frac{1}{2}(15\alpha \beta \eta )^{2/5}`$. The GP equation for a given value of $`\mathrm{\Omega }`$ and $`N_0`$ is propagated in imaginary time until the fluctuations in both the chemical potential and the norm become smaller than $`10^{-11}`$. It should be emphasized that the equilibrium configuration is found not to depend on the choice of purely real initial state. Since the final state is unconstrained except for $`z`$-parity, the lowest-lying eigenfunction of the GP equation corresponds to a local minimum of the free energy functional. In Fig.
1 are depicted the condensate density, which is stationary in the rotating frame, as well as the condensate phase and the velocity field in the laboratory and rotating frames, for $`\mathrm{\Omega }=0.45\omega _x`$ and $`N_0=10^6`$. The density profile at this angular frequency contains no vortices, but is slightly extended relative to that of a non-rotating condensate due to centrifugal forces. The velocity field in the laboratory frame is given by $`𝐯_s^l=\nabla \phi `$ in units of $`\omega _xd_x`$, where $`\phi `$ is the condensate phase. In the rotating frame, $`𝐯_s^r=𝐯_s^l-\mathrm{\Omega }\widehat{z}\times 𝐫`$. There are no closed velocity streamlines found in Fig. 1(a). Such an irrotational flow, $`\nabla \times 𝐯_s=0`$, is characteristic of a superfluid, distinct from the related properties of vortex quantization and stability. The only solution of the GP equation satisfying irrotational flow in a cylindrically-symmetric trap is $`𝐯_s=0`$: rotating the trap is equivalent to doing nothing. The irrotational velocity field for an anisotropic trap is nontrivial, however. Since the density profile is independent of orientation, mass flow must accompany the rotation even though the superfluid prefers to remain at rest. The condensate is found to remain vortex-free for angular velocities significantly larger than the expected critical frequency for the stability of a single vortex, $`\mathrm{\Omega }_c^{(1)}`$. In order to determine whether irrotational configurations correspond to the global free energy minima of the system, vortex states are investigated by artificially imposing a total circulation $`n\kappa `$ on the condensate wavefunction. By winding the phase at $`\stackrel{~}{\tau }=0`$ by $`2\pi n`$ about the trap center, imaginary-time propagation of the GP equation yields the minimum energy configuration with $`n`$ vortices, if such a solution is stationary or metastable. The results for $`N_0=10^6`$ and $`\mathrm{\Omega }=0.45\omega _x`$ are summarized in Table I. At this angular frequency, states with $`n=1,2,3`$ are all energetically favored over the vortex-free solution. The vortices in these cases are predominantly oriented along the axis of rotation ($`\widehat{z}`$), and are located symmetrically about the origin on the (loose) $`x`$-axis. The frequency chosen is too low to support the four-vortex case, $`\mathrm{\Omega }<\mathrm{\Omega }_c^{(4)}`$, but is larger than the frequency (which may correspond to metastability) at which the chemical potentials for $`n=3`$ and $`n=4`$ cross. As shown in Fig. 2, vortices with the same circulation $`\kappa `$ (as opposed to vortex-antivortex pairs) begin to penetrate the cloud above a critical angular velocity for vortex nucleation, $`\mathrm{\Omega }_\nu `$. The value of $`\mathrm{\Omega }_\nu `$ is found not to depend strongly on trap geometry and to decrease very slowly with $`N_0`$; for $`N_0=10^q`$ with $`q=\{5,6,7\}`$, we obtain $`\mathrm{\Omega }_\nu =\{0.65,\mathrm{\hspace{0.17em}0.50},\mathrm{\hspace{0.17em}0.36}\}\omega _x\pm 0.01\omega _x`$, respectively. In contrast, the critical frequency for the stabilization of a single vortex in an anisotropic trap is approximately given by the TF expression $`\mathrm{\Omega }_c^{(1)}\approx (5\alpha /2R^2)\mathrm{ln}(R/\xi )\omega _x`$, where $`R=\sqrt{2\mu _{\mathrm{TF}}}`$ and $`\xi =\sqrt{\alpha }/R`$ are the dimensionless condensate radius along $`\widehat{x}`$ and the healing length, respectively.
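Plugging in the numbers is straightforward (a sketch of ours; small deviations from the values quoted below can arise from rounding of the physical constants and of $`d_x`$):

```python
import numpy as np

hbar, M, a = 1.0546e-34, 23 * 1.6605e-27, 2.75e-9   # SI units; M is the Na-23 mass
wx = 2 * np.pi * 26.87                              # trap frequency (rad/s)
dx = np.sqrt(hbar / (M * wx))                       # oscillator length, ~4.0 microns
alpha, beta = np.sqrt(2.0), 1 / np.sqrt(2.0)
for N0 in (1e5, 1e6, 1e7):
    eta = N0 * a / dx                               # interaction parameter
    mu = 0.5 * (15 * alpha * beta * eta) ** 0.4     # TF chemical potential
    R = np.sqrt(2 * mu)                             # TF radius (units of dx)
    xi = np.sqrt(alpha) / R                         # healing length
    Oc = (5 * alpha / (2 * R**2)) * np.log(R / xi)  # single-vortex estimate
    print(N0, round(Oc, 3))                         # ~0.58, 0.31, 0.16 in units of wx
```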
For the parameters considered here, the values are predicted to be $`\mathrm{\Omega }_c^{(1)}=\{0.61,\mathrm{\hspace{0.17em}0.33},\mathrm{\hspace{0.17em}0.16}\}\omega _x`$ and are found numerically to be $`\mathrm{\Omega }_c^{(1)}=\{0.54,\mathrm{\hspace{0.17em}0.29},\mathrm{\hspace{0.17em}0.14}\}\omega _x`$. The number of vortices $`n_\nu `$ present just above $`\mathrm{\Omega }_\nu `$ is found to increase with $`N_0`$; $`n_\nu =4`$ and $`8`$ for $`N_0=10^6`$ and $`10^7`$, respectively. The value of $`\mathrm{\Omega }_\nu `$ may be interpreted as the critical frequency $`\mathrm{\Omega }_c^{(n_\nu )}`$ for the stabilization of $`n_\nu `$ vortices. If $`n_\nu =n`$ for all $`N_0`$, then $`\mathrm{\Omega }_\nu =\mathrm{\Omega }_c^{(n)}N_0^{-2/5}`$. That $`\mathrm{\Omega }_\nu `$ decreases more slowly with $`N_0`$ implies that $`n_\nu `$ must increase with $`N_0`$. The small difference between $`\mathrm{\Omega }_c^{(1)}`$ and $`\mathrm{\Omega }_\nu `$ for $`N_0=10^5`$ reflects the instability of vortex arrays in the low-density limit. As $`N_0`$ decreases, the spacing between successive $`\mathrm{\Omega }_c^{(n)}`$ diminishes, and vanishes for $`N_0=0`$ in cylindrically-symmetric traps; for very large $`N_0`$, the spacing approaches a constant as the vortex-vortex interactions become negligible. The numerical results for $`\mathrm{\Omega }_\nu `$ suggest that the criteria for vortex stabilization and nucleation are different. Superflow through microapertures, or the motion of an object or ion through a superfluid, can give rise to vortex half-ring or vortex-pair production through the accumulation of phase slip. One might expect similar excitations in a rotating condensate: vortex half-rings would be nucleated at the condensate surface when the local tangential velocity exceeds a critical value. Indeed, the distinction between a half-ring and a vortex becomes blurred in a trapped gas with curved surfaces, as discussed further below. A crude estimate of $`\mathrm{\Omega }_\nu `$ may be obtained by invoking the Landau criterion for the critical velocity, $`v_{\mathrm{cr}}=\mathrm{min}(\omega _q/q)`$, where $`\omega _q`$ is the frequency of the mode at wavevector $`q`$. Such a minimum corresponds to values of $`q_c`$ at which the hydrodynamic description of the collective excitations begins to fail. For a spherical trapped Bose gas, the crossover to single-particle behavior occurs in a boundary-layer region at the cloud surface whose thickness is several $`\delta =(2R)^{-1/3}d_x`$. Minimizing $`\omega _q/q`$ using the dispersion relation for the planar surface modes of such a system, $`\omega _q^2\approx \omega _x^2[qR-d_x^4q^4\left(\mathrm{ln}(q\delta )-0.15\right)]`$, one obtains $`q_c=(R/0.3)^{1/3}d_x^{-1}\approx \delta ^{-1}`$ and $`\mathrm{\Omega }_\nu =v_{\mathrm{cr}}/R\propto R^{-2/3}`$. Since $`R\propto N_0^{1/5}`$, the critical frequency $`\mathrm{\Omega }_\nu \propto N_0^{-2/15}`$ decreases far more slowly than does the TF estimate for $`\mathrm{\Omega }_c\propto N_0^{-2/5}`$. The number dependence of $`\mathrm{\Omega }_\nu `$ is in reasonable agreement with the numerical data. Real-time simulations further confirm that high-frequency oscillations of the condensate are required for vortex production at the same $`\mathrm{\Omega }_\nu `$ found using the imaginary-time approach. The above analysis does not clearly identify the instability of the surface modes with the penetration of vortices into the cloud, however.
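In passing, the $`N_0^{-2/15}`$ scaling can be checked directly against the measured nucleation frequencies (our quick check, not from the paper): if $`\mathrm{\Omega }_\nu \propto N_0^{-2/15}`$, the product $`\mathrm{\Omega }_\nu N_0^{2/15}`$ should be roughly constant.

```python
# Combination Omega_nu * N0^(2/15) should be N0-independent for the quoted values.
for N0, Om in ((1e5, 0.65), (1e6, 0.50), (1e7, 0.36)):
    print(N0, Om * N0 ** (2 / 15))   # gives ~3.0, 3.2, 3.1 -- roughly constant
```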
Further insight may be gained by considering the free energy $`F`$ of a single vortex in a cylindrical trap, relative to that of the vortex-free state, as a function of the vortex displacement $`\rho `$ from the trap center. In the TF limit, $`F`$ vanishes for $`\rho ^2=R^2`$ and $`\rho ^2=R^2-(5/2\mathrm{\Omega })\mathrm{ln}(R/\xi )`$, corresponding to the right and left roots of the free energy barrier to vortex generation, respectively. As $`\mathrm{\Omega }`$ increases, the energy barrier at the surface narrows but remains finite. Yet, as discussed above, the hydrodynamic excitations begin to break down at a radius $`\stackrel{~}{\rho }\approx R-\delta `$. Vortices will therefore spontaneously penetrate the cloud when the angular frequency exceeds $$\mathrm{\Omega }_\nu =\frac{5\sqrt[3]{2}}{4R^{2/3}}\mathrm{ln}\left(\frac{R}{\xi }\right)\omega _x,$$ (2) since the barrier effectively disappears when $`\stackrel{~}{\rho }\approx \rho `$. Thus, the frequencies of nucleation and penetration have the same number dependence and are defined by a single critical wavelength. Once the condensate contains vortices at a given $`\mathrm{\Omega }>\mathrm{\Omega }_\nu `$, the functional $`F`$ will again include a barrier to vortex penetration from the surface, reflecting the hydrodynamic stability of the vortex state. One may thus envisage a succession of multiple-vortex nucleation events at well-defined angular frequencies. The stationary configurations of vortex arrays are shown as a function of applied rotation in Fig. 2. The condensate density is shown integrated down the axis of rotation $`\widehat{z}`$, in order to mimic an in situ image of the cloud. While the vortices near the origin appear to have virtually isotropic cores, those in the vicinity of the surface are generally wider and noticeably distorted. The anisotropy is due in part to the divergence of the coherence length as the density decreases, but is mostly the result of vortex curvature. Off-center vortices are not fully aligned with the axis of rotation $`\widehat{z}`$, since they terminate at normals to the ellipsoidal condensate surface. Far from the origin, the vortex structure approaches that of a half-ring pinned to the condensate surface. The symmetries of the confining potential impose constraints on the vortex arrays that may be produced by rotating anisotropic traps. Stationary configurations are found to always have the inversion symmetry $`(x,y,z)\to (-x,-y,z)`$. As shown in Fig. 2, the number of vortices is at least four and is even for each array; real-time simulations demonstrate that vortices with the same circulation are nucleated in pairs at inversion-related points on the surface. No vortex is found at the origin, since the odd number of remaining vortices cannot be distributed symmetrically. At low angular velocities, therefore, the array tends to approximate a regular tetragonal lattice. As the total number of vortices increases with $`\mathrm{\Omega }`$, however, a different pattern begins to emerge. While a triangular array is inconsistent with the twofold trap symmetries, it is more efficient for close packing; this geometry is favored for vortices near the rotation axis of rapidly rotating vessels of superfluid helium. If vortices in trapped condensates could be made sufficiently numerous, they would likely form a near-regular triangular array, but with the central vortex absent.
In summary, the critical frequencies for the zero-temperature nucleation of vortices $`\mathrm{\Omega }_\nu `$ in rotating anisotropic traps are obtained numerically, and are found to be larger than the vortex stability frequencies $`\mathrm{\Omega }_c`$. The number dependence of $`\mathrm{\Omega }_\nu `$ is consistent with a critical-velocity mechanism for vortex production. The structures of vortex arrays are strongly affected by trap geometry, but approach triangular at large densities. ###### Acknowledgements. The authors are grateful to A. L. Fetter, R. L. Pego, S. L. Rolston, J. Simsarian, and S. Stringari for numerous fruitful discussions, and to P. Ketcham for assistance in generating the figures. This work was supported by the U.S. Office of Naval Research.
no-problem/9910/cond-mat9910502.html
ar5iv
text
# Simple model of a limit order-driven market. ## Abstract We introduce and study a simple model of a limit order-driven market. Traders in this model can either trade at the market price or place a limit order, i.e. an instruction to buy (sell) a certain amount of the stock if its price falls below (rises above) a predefined level. The choice between these two options is purely random (there are no strategies involved), and the execution price of a limit order is determined simply by offsetting the most recent market price by a random amount. Numerical simulations of this model revealed that, despite such minimalistic rules, the price pattern generated by the model has such realistic features as "fat" tails of the price fluctuations distribution, characterized by a crossover between two power law exponents, long range correlations of the volatility, and a non-trivial Hurst exponent of the price signal. Recent years have witnessed an explosion of activity in the area of statistical analysis of high-frequency financial time series. This led to the discovery of robust and to a certain degree universal features of price fluctuations, and triggered theoretical studies aimed at explaining or simply mimicking these observations. The list of empirical facts that need to be addressed by any successful theory or model is: (i) The histogram of short time-lag increments of market price has a very peculiar non-Gaussian shape with a sharp maximum and broad wings. The current consensus about the functional form of this distribution is that up to a certain point it follows a Pareto-Levy distribution, with the exponent of its power law tail $`1+\alpha _1`$ of about 2.4-2.7, after which it crosses over to a steeper power law, $`1+\alpha _2`$ of about 4-4.5, or, as reported in another study, to an exponential decay. (ii) When viewed on time scales less than several trading days, the graph of price vs. time appears to have a Hurst exponent $`H`$ of about 0.6-0.7, different from the ordinary uncorrelated random walk value $`H_{RW}=0.5`$. (iii) The volatility (the second moment of price fluctuations) exhibits correlated behavior. It is manifested in the clustering of volatility, i.e. the presence of regions of unusually high amplitude of fluctuations separated by relatively quiet intervals, visible with a "naked eye" in the graph of price increment vs. time. These clustering effects determine the shape of the autocorrelation function of volatility as a function of time, which was shown to decay as a power law with a very small exponent $`\gamma `$ of about 0.3-0.4 and no apparent cutoff. There are several approaches to modeling market mechanics. In one type of models, price fluctuations result from the trading activity of conscious agents, whose decisions to buy or sell are dictated by well-defined strategies. These strategies evolve in time (often according to some Darwinian rules) and give rise to a slowly changing fluctuation pattern. There is little doubt that the evolution and dynamics of investors' strategies and beliefs influence the long-term behavior of real market prices. For example, if some company could not keep up with the competition, sooner or later investors would realize it, and in the long term its stock price would go down. However, it is unclear how this influences the properties of stock price fluctuations at very short timescales, which do not allow time for traders to update their strategies or for a company to change its profile.
Another problem with models explaining short-time price fluctuations in terms of strategy evolution is that they inevitably lead their creators to the shaky grounds of speculations about relevant and irrelevant psychological motivations of a "typical" trader in a highly heterogeneous trader population. The remarkable universality of the general features of price fluctuations in markets of different types of risky assets, such as stocks, options, foreign currency, and commodities (say, cotton or oil), makes one suspect that in fact psychological factors play little role in determining their short-time properties, and leads one to look for simpler mechanisms giving rise to these features. In this work we take a first step in this direction by introducing and numerically studying a simple market model, where a nontrivial price pattern arises not due to the evolution of trading strategies, but rather as a consequence of the trading rules themselves. Before we proceed with formulating the rules of our model we need to define several common market terms. A market trader is usually allowed to place a so-called "limit order to sell (buy)", which is an instruction to automatically sell (buy) a particular amount of stock if its market price rises higher (or drops lower, for a limit buy order) than a predetermined threshold. This threshold is sometimes referred to as the execution price of the limit order. In many modern markets, known to economists as order-driven markets, limit orders placed by ordinary traders constitute the major source of the liquidity of the market. This means that a request to immediately buy or sell a particular amount of stock at the best market price, or "market order", is filled by matching it to the best unsatisfied limit order in the limit order book. To better understand how transactions are made in an order-driven market, consider the following simple example: suppose one trader (trader #1) has submitted a limit order to sell 1000 shares of the stock of a company X, provided that its price exceeds $20/share. Subsequently another trader (trader #2) has submitted a limit order to sell 2000 shares of X if the price exceeds $21/share. Finally, a third trader decides to buy 2000 shares of X at the market price. In the absence of other limit orders his order will be filled as follows: he will buy 1000 shares from trader #1 at $20/share and 1000 shares from trader #2 at $21/share. After this transaction the limit order book would contain only one partially filled limit order to sell, that of trader #2 to sell 1000 shares of X at $21/share. Traders in our model can either trade stock at the market price or place a limit order to sell or buy. To simplify the rules of our toy market, traders are allowed to trade only one unit (lot) of stock in each transaction. That makes all limit and market orders the same size. The empirical study of limit-order trading at the ASX can be used to partially justify this simplification. In this work it was observed that limit orders mostly come in sizes given by round numbers, such as 1000 shares and (to a lesser extent) 10000 shares. Unlike many other market models, we do not fix the number of traders. Instead, at each time step a "new" trader appears out of a "trader pool" and attempts to make a transaction. With equal probabilities this new trader is a seller or a buyer. He then performs one of the following two actions: * with probability $`q_{lo}`$ he places a limit order to sell/buy.
* otherwise (with probability $`1-q_{lo}`$) he trades (sells or buys) at the market price. The rule of execution of a market order in our model is particularly simple. Since all orders are of the same size, a market order is simply filled with the best limit order (i.e. the highest bid among limit orders to buy or the lowest ask among limit orders to sell), which is subsequently removed from the limit order book. This transaction, performed at the execution price of the best limit order, sets a new value of the market price $`p(t)`$. To complete the definition of the rules one needs to specify how a trader who has elected to place a new limit order decides on its execution price. Traders in our model do this in a very "non-strategic" way, by simply offsetting the price of the last transaction performed on the market (the current market price $`p(t)`$) by a random number $`\mathrm{\Delta }`$. This positive random number is drawn each time from the same probability distribution $`P(\mathrm{\Delta })`$. A new limit order to sell is placed above the current price, at $`p(t)+\mathrm{\Delta }`$, while a new limit order to buy is placed below it, at $`p(t)-\mathrm{\Delta }`$. This way the ranges of limit orders to sell and to buy never overlap, i.e. there is always a positive gap between the highest bid and the lowest ask prices. This "random offset" rule constitutes a reasonable first-order approximation to what may happen in real order-driven markets and is open to modifications should it fail a reality check. The most obvious variants of this rule, which we plan to study in the near future, are: i) A model where each trader has his individual distribution $`P(\mathrm{\Delta })`$. This modification would allow for the coexistence of "patient" traders, who do not care very much about when their order will be executed or whether it will be executed at all, and can therefore select a large $`\mathrm{\Delta }`$ and pocket the difference, and "impatient" traders, who need their order to be executed soon, so they tend to select a small $`\mathrm{\Delta }`$ or trade at the market price. ii) A model in which the probability distribution of $`\mathrm{\Delta }`$ is determined by the historic volatility of the market. This rule seems to be a particularly reasonable description of a real order-driven market. Indeed, if a trader's selection of $`\mathrm{\Delta }`$ is influenced primarily by his desire to reduce the waiting time before his order is executed, then it would make sense to select a larger $`\mathrm{\Delta }`$ in a more volatile market, which is likely to cover a larger price interval during the same time interval. However, before any of these more complicated versions of this rule can be explored, one needs to study and understand the behavior of the base model, where $`\mathrm{\Delta }`$ is just a random number, uncorrelated with volatility and/or the individual trader profile. One should notice that the behavior of traders in our model is completely passive and "mechanical": once a limit order is placed it cannot be removed or shifted in response to the current market situation. This makes our rules fundamentally different from those of the Bak-Paczuski-Shubik (BPS) model, where each trader randomly increases or decreases his quote at each time step. Such haphazard trader behavior cannot be realized in an order-driven market, where each change of the limit-order execution price carries a fee. We have simulated our model with $`q_{lo}=1/2`$, i.e. when on average half of the traders select to place limit orders, while the other half trade at the market price.
The random number $`\mathrm{\Delta }`$, used in setting the execution price of a new limit order, was drawn from a uniform distribution in the interval $`0\le \mathrm{\Delta }\le \mathrm{\Delta }_{\mathrm{max}}=4`$. Obviously, price patterns in models with different values of $`\mathrm{\Delta }_{\mathrm{max}}`$ are identical up to an overall rescaling factor. Our choice of $`\mathrm{\Delta }_{\mathrm{max}}=4`$ was dictated by the desire to compare the behavior of the model with a continuous spectrum of $`\mathrm{\Delta }`$ to that with a discrete spectrum $`\mathrm{\Delta }=\{1,2,3,4\}`$. A discrete spectrum of $`\mathrm{\Delta }`$ may compare better to the behavior of real markets, where all prices are multiples of a unit tick size. Our comparison confirmed that most scaling properties of the price pattern are the same in both variants. We were surprised to notice that the non-trivial features of our model survived even in a model with deterministic $`\mathrm{\Delta }=1`$. To improve the speed of the numerical simulations we studied a version of the model where only the $`2^{17}`$ lowest ask and highest bid quotes were retained. The list of quotes was kept ordered at all times, which accelerated the search for the highest bid and lowest ask limit orders whenever a transaction at the market price was requested. We also studied a variant of our model where each limit order had an expiration time: if a limit order was not filled within 1000 time steps it was removed from the list. Not only did this rule prevent an occasional accumulation of a very long list of limit orders, but it also made sense in terms of how limit orders are organized in a real market. Indeed, limit orders at, for example, the New York Stock Exchange are usually valid only during the trading day when they were submitted. There are also so-called "good till canceled" (or open) orders, which are valid until they are executed or withdrawn. The version of our model where the expiration time of a limit order is not specified then corresponds to all orders being "good till canceled", while the version where only the most recent orders are kept mimics a market composed of only "day orders". We have checked that for any reasonably large value of the cutoff parameter, no matter whether it is an expiration time or the number of best sell/buy orders to keep, one ends up with the same scaling properties of price fluctuations. In Fig. 1 we present an example of the price history in one of the runs of our model. Visually it is clear that this graph is quite different from an ordinary random walk. This impression is confirmed by looking at the pattern of price increments $`p(t+1)-p(t)`$, shown in the same figure. One can see that large increments are clustered in regions of high volatility, separated by relatively quiet intervals. The Fourier spectrum of the price signal, averaged over many runs of the model, provides us with the value of the Hurst exponent $`H`$ of the price graph. Indeed, the exponent of the Fourier transform of the price autocorrelation function $`S_p(f)`$ is related to the Hurst exponent as $`S_p(f)\propto f^{-(1+2H)}`$. The log-log plot of $`S_p(f)`$, logarithmically binned and averaged over multiple realizations of the price signal of length $`2^{18}`$, is shown in the inset to Fig. 1. It has an exceptionally clean $`f^{-3/2}`$ functional form for over 5 decades in $`f`$, which corresponds to a Hurst exponent of the price signal $`H=1/4`$. This exponent is definitely different from its random walk value $`H_{RW}=1/2`$.
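For concreteness, the rules and cutoffs described above can be condensed into a short simulation sketch (our reconstruction, not the authors' code; the starting price, the lazy discarding of expired quotes, and leaving the price unchanged when a market order meets an empty book are our own minimal choices):

```python
import heapq, random

q_lo, d_max, expiry, T = 0.5, 4.0, 1000, 100_000
bids, asks = [], []                        # heaps of (signed price, submission time)
p, prices = 100.0, []                      # arbitrary starting price
for t in range(T):
    seller = random.random() < 0.5         # new trader is a seller or a buyer
    if random.random() < q_lo:             # place a limit order at p -/+ Delta
        d = random.uniform(0.0, d_max)
        if seller:
            heapq.heappush(asks, (p + d, t))      # lowest ask pops first
        else:
            heapq.heappush(bids, (-(p - d), t))   # highest bid pops first
    else:                                  # trade at the market price
        book = bids if seller else asks
        while book and t - book[0][1] > expiry:   # discard stale quotes lazily
            heapq.heappop(book)
        if book:                           # if the book is empty the price stays put
            best, _ = heapq.heappop(book)
            p = -best if seller else best  # transaction sets the new market price
    prices.append(p)
```

A seller's market order hits the highest bid and a buyer's hits the lowest ask, exactly as in the execution rule above; the resulting `prices` series can then be analyzed for its spectral and histogram properties.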
A Hurst exponent $`H=1/4`$ was also observed in the Bak-Paczuski-Shubik model. An intuitive argument in favor of a small Hurst exponent can be constructed for our model. According to the rules of the model, the execution price of a new limit order is always determined relative to the current price. It is also clear that a large density of limit orders around the current price position reduces its mobility. Indeed, in order for the price to move to a new position, all limit orders in the interval between the current and new values of the price must be filled by market orders. If for one reason or another the price remained fairly constant for a prolonged period of time, limit orders created during this time tend to further trap the price in this region. This self-reinforcing mechanism qualitatively explains the slow rate of price change in our model. Unfortunately, the nontrivial Hurst exponent $`H=1/4`$ is a step in the wrong direction from its random walk value $`H_{RW}=1/2`$. Indeed, the short-time Hurst exponent of real stock prices was measured to be $`H_{real}`$ of about 0.6-0.7. The amplitude of price fluctuations in our model has significant long range correlations. One natural measure of these correlations is the autocorrelation function of the absolute value of price increments, $`S_{abs}(t)=\langle |p(t^{}+t+1)-p(t^{}+t)|\,|p(t^{}+1)-p(t^{})|\rangle _t^{}`$. In our model this quantity was measured to have a power law tail $`S_{abs}(t)\propto t^{-1/2}`$. This is illustrated in Fig. 2, where the Fourier transform of $`S_{abs}(t)`$ has a clear $`f^{-1/2}`$ form. The exponent $`\gamma =1/2`$ of $`S_{abs}(t)\propto t^{-\gamma }`$ in our model is not far from the $`\gamma =0.3`$ measured in the S&P 500 stock index. In Fig. 2 we also show the Fourier transform of the autocorrelation function of the signs of price increments, $`S_{sign}(t)=\langle \mathrm{sign}[p(t^{}+t+1)-p(t^{}+t)]\,\mathrm{sign}[p(t^{}+1)-p(t^{})]\rangle _t^{}`$, which has a white noise (frequency independent) form. This is, again, similar to the situation in a real market, where the signs of price increments are known to have only short range ($`<30`$ min) correlations. Finally, in Fig. 3 we present three histograms of price increments $`p(t+\delta t)-p(t)`$ in our model, measured with time lags $`\delta t=1,10,100`$. The overall form of these histograms is strongly non-Gaussian and is reminiscent of the shape of such distributions for real stock prices. As the time lag is increased, the sharp maximum of the distribution gradually softens, while its wings remain strongly non-Gaussian. In the inset we show a log-log plot of the histogram of $`p(t+1)-p(t)`$ ($`\delta t=1`$), collected during $`t_{stat}=3.5\times 10^7`$ timesteps (as compared to $`t_{stat}=40000`$ for the data shown in the main panel) and logarithmically binned. One can clearly distinguish two power law regions separated by a sharp crossover around $`|p(t+1)-p(t)|=1`$. The exponents of these two regions were measured to be $`1+\alpha _1=0.6\pm 0.1`$ and $`1+\alpha _2=3\pm 0.2`$. The power law exponent $`1+\alpha _2=3`$ of the far tail lies right at the borderline separating the Pareto-Levy region $`\alpha <2`$, where the distribution has an infinite second moment, from the Gaussian region. In any case, since price fluctuations in our model were shown to have long range correlations, one should not expect convergence of the price fluctuations distribution to a universal Pareto-Levy or Gaussian functional form as $`\delta t`$ is increased.
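The logarithmically binned increment histogram described above can be generated from a simulated series in a few lines (our sketch; `prices` is the list produced by the earlier snippet, and far longer runs, such as the $`t_{stat}=3.5\times 10^7`$ steps used above, are needed to resolve the far tail cleanly):

```python
import numpy as np

inc = np.abs(np.diff(np.asarray(prices, dtype=float)))   # |p(t+1) - p(t)|
inc = inc[inc > 0]
bins = np.geomspace(max(inc.min(), 1e-3), inc.max(), 30) # logarithmic binning
hist, edges = np.histogram(inc, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
for c, h in zip(centers, hist):
    if h > 0:
        print(f"{c:10.4g} {h:10.4g}")  # on a log-log plot, two power-law regions
                                       # should appear below and above |dp| ~ 1
```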
The existence of a similar power-law-to-power-law crossover was reported in the distribution of stock price increments at the NYSE, albeit with different exponents $`1+\alpha _1\simeq 1.4-1.7`$ and $`1+\alpha _2\simeq 4-4.5`$. The mechanism responsible for this crossover in a real market is at present unclear. In conclusion, we have introduced and numerically studied a simple model of a limit order-driven market, where agents randomly submit limit or market orders. The execution price of a new limit order is set by offsetting the current market price by a random amount. In spite of such strategy-less, mechanistic behavior of traders, the price time series in our model exhibits a highly nontrivial behavior characterized by long range correlations, fat tails in the histogram of its increments, and a non-trivial Hurst exponent. These results are in qualitative agreement with the empirically observed behavior of prices on real stock markets. More work is required to modify the rules of our model in order to make this agreement more quantitative. The work at Brookhaven National Laboratory was supported by the U.S. Department of Energy Division of Material Science, under contract DE-AC02-98CH10886. The author thanks Y.-C. Zhang for useful discussions, and the Institut de Physique Théorique, Université de Fribourg for the hospitality and financial support during the visit when this work was started.
# An Exactly Solvable Model of Quantum Spin Interacting with Spin Environment

Dima Mozyrsky, Department of Physics, Clarkson University, Potsdam, NY 13699–5820

Key Words: Thermalization, decoherence, spin bath, effects of environment

ABSTRACT. An exactly solvable model of a quantum spin interacting with a spin environment is considered. The interaction is chosen such that the state of the environment is conserved. The reduced density matrix of the spin is calculated for arbitrary coupling strength and arbitrary time. The stationary state of the spin is obtained explicitly in the $`t\to \mathrm{}`$ limit.

The problem of a quantum system interacting with a heat bath has been extensively studied in various contexts during the last few decades. Originating from quantum optics in connection with the study of spontaneous emission and resonant absorption \[1-2\], it has become an important issue in condensed matter physics. The first, and probably most famous, work in this field was by Caldeira and Leggett, who studied the effects of dissipation on the probability of quantum tunneling. In this problem the heat bath is modeled by a set of noninteracting harmonic oscillators linearly coupled to the quantum system. This model of a heat bath has been mathematically justified, and has become a widely accepted description of dissipative quantum dynamics, which advantageously combines both microscopic and phenomenological aspects of the interaction between a quantum system and phonons or delocalized electrons. A similar problem of a system in a double well potential under the influence of a heat bath was studied in connection with magnetic flux tunneling in Josephson junctions. It has been shown that the system loses its quantum coherence due to the interaction with the heat bath and, in the case of zero temperature and sufficiently strong coupling, localizes completely in one of the wells. Later this model was formalized by the spin-boson Hamiltonian, which has received a lot of attention in the modern condensed matter literature. The study of quantum dissipation effects in the case of coupling to a fermionic heat bath has also received some attention in the literature. For Hubbard-like coupling, behavior quite different from that of the bosonic heat bath has been found. The case of magnetic coupling similar to RKKY interactions was also extensively studied in connection with magnetic grains and the giant spins of macromolecules interacting with a spin environment \[11-13\]. It is believed that this type of heat bath cannot be mapped onto a bosonic bath model and needs separate treatment. This mechanism of quantum relaxation turns out to be effective especially at low temperatures, resulting in a set of interesting phenomena, such as the "degeneracy blocking" caused by nuclear spins. Another interesting and quite general effect that results from the interaction of a quantum system with its environment is the destruction of quantum interference in the quantum system due to such interaction.
This process, usually termed decoherence in the literature, has attracted the attention of both theorists and experimentalists, not only because of its fundamental importance in quantum mechanics, but also because of new, rapidly developing fields, such as quantum computing and quantum information theory, where decoherence is one of the major obstacles on the way to practical realization of various, presently mostly hypothetical, devices such as quantum computers. No matter how well such a device is isolated from its environment, being essentially a macroscopic system it will inevitably interact with the environment, resulting in the loss of interference between states and thus disrupting its proper functioning. Several models have been proposed to study the properties of decoherence in quantum computers and in more general systems \[14-16\]. The essential feature of these models is that the interaction is set up in such a way that there is no energy exchange between the system under consideration and the environment, so the system's energy is conserved. This corresponds to a situation where a quantum system is very well isolated from its environment, so that only a phase exchange is allowed. One considers a Hamiltonian

$$H=H_S+H_B+V,$$ $`(1)`$

where the first term $`H_S`$ in (1) corresponds to the system alone, $`H_B`$ is the heat bath, and $`V`$ is the interaction between the bath and the system. It is assumed that the expectation value of $`H_S`$ is conserved during the system's evolution. This can be formalized by the assumption that the system's Hamiltonian $`H_S`$ commutes with the full Hamiltonian in (1), and in particular with the interaction term $`V`$. $`H_S`$ and $`H_B`$ naturally commute with each other, as they act in different subspaces. This feature often allows one to carry out exact solutions for the system's reduced density matrix (to be defined later) in the basis of eigenvalues of the system's self-Hamiltonian $`H_S`$. In this work we consider the opposite extreme: a model with the property that the state of the bath is preserved. The motivation to study such a model comes from a very common assumption in the literature on the properties of the heat bath. It is often assumed that the heat bath has so many degrees of freedom that the effects of the interaction with the system dissipate away in it and do not influence the system back to any significant extent, so that the bath remains described by a thermal equilibrium distribution at constant temperature, irrespective of the amount of energy and polarization diffusing into it from the system. This assumption is also called molecular chaos or Stosszahlansatz. There have been some attempts to analyze this assumption, but we believe it is still little understood. This work aims to contribute to this topic. Usually the following picture is assumed: the initial state of the full system is given by

$$\rho (0)=\rho _S(0)\otimes \rho _B(0),$$ $`(2)`$

where $`\rho _S(0)`$ and $`\rho _B(0)`$ are the initial density matrices of the system and the bath respectively. The heat bath is assumed to be initially in a thermal equilibrium state, and the two systems are not initially entangled. When the interaction is switched on at time $`t=0`$, the full system's evolution is given approximately by

$$\rho (t)\approx \rho _S(t)\otimes \rho _B(0),$$ $`(3)`$

that is, the state of the heat bath does not change in time to any significant extent.
This assertion, formalized by the Markoffian approximation, leads to the famous Pauli master equations, which provide a key to understanding many profound phenomena in quantum optics and condensed matter physics. Here we propose a model for which equation (3) is exact. Note that the form of the interaction $`V`$ between the system and the bath has not been specified yet. Comparing equations (1) and (3), one can notice that in order for (3) to hold, one can simply require the following commutation relation:

$$[V,H_B]=[H,H_B]=0.$$ $`(4)`$

Let us comment on this assumption. The most common form of the interaction between the system and the bath is a coupling between operators $`Q_i`$, acting in the subspace of the system, and bath operators $`F_i`$, so that

$$V=\sum _iQ_iF_i.$$ $`(5)`$

The choice of the operators $`Q_i`$ and $`F_i`$ is usually determined by the particular features of the physical situation under consideration. However, usually the operators $`Q_i`$ and $`F_i`$ are not diagonal in the energy representation of the system and the bath, so the relation (4) may not be rigorously satisfied in general and is usually postulated through the Stosszahlansatz assumption. In this paper we study a model for which relation (4) is satisfied directly, due to the commutation properties of the interaction and bath Hamiltonians. Such a property, even though not very common in the literature, seems worth exploring in light of the above arguments. Moreover, this model allows an exact solution for the reduced density matrix, which is a rather rare example in the literature on decoherence and thermalization of quantum systems. In particular, we consider a model of the interaction between a two-level quantum system and a spin environment \[11-14\]. A bath of noninteracting spins $`\stackrel{}{\sigma }^k`$ is coupled to the two-level system $`\stackrel{}{\sigma }^0`$ under consideration. The coupling is chosen such that the Hamiltonian for the full system (two-level system + spin bath) is

$$H=\Delta \sigma _z^0+\sum _k\omega _k\sigma _z^k+\sigma _x^0\sum _kg_k\sigma _z^k.$$ $`(6)`$

The first term in (6) corresponds to the two-level system, and we will refer to it in the following as the central spin; the second is the spin bath, and the last one is the interaction between the bath and the two-level system. Here $`2\Delta `$ is the bare magnetic resonance (MR) frequency of the central spin, and $`\omega _k`$ and $`g_k`$ are the frequencies and the coupling constants, respectively, for the spins of the spin bath. The spin bath self-Hamiltonian obviously commutes with the full Hamiltonian in (6), and so the state of the bath is conserved in the course of the full system's evolution. The interaction is assumed to be switched on at time $`t=0`$, and the two systems (the central spin and the bath) are initially not entangled with each other. The density matrix of the full system is given by

$$\rho (0)=|1_0\rangle \langle 1_0|\otimes \frac{1}{Z}e^{-\beta H_B}.$$ $`(7)`$

Here $`\beta =1/k_BT`$ is the inverse temperature, and by $`H_B`$ we denote the self-Hamiltonian of the bath, i.e., the second term in (6). The spin bath is assumed to be initially in thermal equilibrium at temperature $`T`$.
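For a small bath, the model (6)-(7) can be checked by brute-force exact diagonalization. The sketch below builds the full $`2^{N+1}`$-dimensional Hamiltonian with Kronecker products and evolves the initial state (7); the partition function $`Z`$ (given in eq. (8) below) enters only through the numerical normalization of the density matrix. Parameter values are arbitrary illustrations.

```python
import numpy as np

SZ = np.diag([1.0, -1.0])
SX = np.array([[0.0, 1.0], [1.0, 0.0]])

def embed(op, site, n):
    """Single-spin operator acting on `site` of an n-spin register
    (site 0 = central spin, sites 1..n-1 = bath spins)."""
    out = np.array([[1.0]])
    for j in range(n):
        out = np.kron(out, op if j == site else np.eye(2))
    return out

def sigma_z0_of_t(delta, omegas, gs, beta, times):
    """Exact <sigma_z^0>(t) for the Hamiltonian (6) with the product
    initial state (7), feasible for a handful of bath spins."""
    n = 1 + len(omegas)
    H = delta * embed(SZ, 0, n)
    HB = sum(w * embed(SZ, k, n) for k, w in enumerate(omegas, start=1))
    for k, g in enumerate(gs, start=1):
        H += g * embed(SX, 0, n) @ embed(SZ, k, n)   # sigma_x^0 sum_k g_k sigma_z^k
    H += HB
    # rho(0) = |1_0><1_0| (x) exp(-beta H_B)/Z; everything here is diagonal
    proj_up = embed(np.diag([1.0, 0.0]), 0, n)
    rho0 = proj_up @ np.diag(np.exp(-beta * np.diag(HB)))
    rho0 /= np.trace(rho0)                           # normalization supplies 1/Z
    vals, vecs = np.linalg.eigh(H)
    sz0 = embed(SZ, 0, n)
    out = []
    for t in times:
        U = vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T
        out.append(np.real(np.trace(U @ rho0 @ U.conj().T @ sz0)))
    return np.array(out)
```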
$`Z`$ is the normalization constant, or partition function, of the free heat bath, i.e., a system of noninteracting spins:

$$Z=\mathrm{Tr}\,e^{-\beta H_B}=\prod _k\left(2\mathrm{cosh}\beta \omega _k\right).$$ $`(8)`$

To avoid unnecessary mathematical complications we have assumed that the spin is initially in the excited state, even though the calculation can be carried out exactly for an arbitrary superposition of the ground and excited states. The full system evolves in time $`t`$ quantum mechanically according to

$$\rho (t)=U(t)\rho (0)U^{-1}(t).$$ $`(9)`$

Here and in the following we assume that $`\mathrm{}=1`$. The evolution operator $`U(t)=e^{-iHt}`$ can be calculated explicitly by expanding the exponent and combining the terms that correspond to the expansions of cosine and sine, respectively, resulting in

$$U=e^{-iH_Bt}\left[\mathrm{cos}\gamma t-i\frac{\Delta \sigma _z^0+\Omega \sigma _x^0}{\gamma }\mathrm{sin}\gamma t\right],$$ $`(10)`$

where

$$\Omega =\sum _kg_k\sigma _z^k,$$ $`(11)`$

$$\gamma =\left(\Delta ^2+\Omega ^2\right)^{\frac{1}{2}}.$$ $`(12)`$

The reduced density matrix for the central spin is given by

$$\rho ^r(t)=\mathrm{Tr}^{}\left[\rho (t)\right],$$ $`(13)`$

where the prime denotes that the trace is taken over the states of the spin bath only. We calculate the elements of the density matrix using the following technique. Consider the magnetization of the central spin, which is given by the expectation value of $`\sigma _z^0`$, or the difference of the diagonal elements of the reduced density matrix:

$$\sigma _z^0(t)=\langle 1_0|\rho ^r(t)|1_0\rangle -\langle 0_0|\rho ^r(t)|0_0\rangle .$$ $`(14)`$

With the use of (7) and (9)-(10), equation (14) can, after some algebra, be rewritten as

$$\sigma _z^0(t)=\mathrm{Tr}\left[\frac{e^{-\beta H_B}}{Z}\left(\mathrm{cos}^2\gamma t-\frac{\Omega ^2-\Delta ^2}{\gamma ^2}\mathrm{sin}^2\gamma t\right)\right].$$ $`(15)`$

The trace in the basis of eigenstates of $`H_B`$ and $`\Omega `$ becomes a sum over Ising-like variables $`s_k=\pm 1`$:

$$\sigma _z^0(t)=\sum _{\{s_k\}}\frac{e^{-\beta H_B}}{Z}\Lambda _\eta \frac{\mathrm{sin}\gamma \eta }{\gamma },$$ $`(16)`$

where the sum is taken over all possible configurations of $`s_k`$. In equation (16), by $`H_B`$ and $`\Omega `$ (see eqs. (11)-(12)) we mean, of course, the eigenvalues of these operators, i.e., $`H_B=\sum _k\omega _ks_k`$ and $`\Omega =\sum _kg_ks_k`$. In (16) we have also introduced the operator $`\Lambda _\eta `$, given by

$$\Lambda _\eta =\left[\frac{\partial }{\partial \eta }\right]_{\eta =2t}+\Delta ^2\int _0^{2t}d\eta ,$$ $`(17)`$

which can be conveniently interchanged with the summation in (16). In order to compute the sum (16) we employ the following identity:

$$\mathrm{Re}\int _0^\eta e^{ix\Omega }J_0\left(\Delta \sqrt{\eta ^2-x^2}\right)dx=\frac{\mathrm{sin}\eta \left(\Delta ^2+\Omega ^2\right)^{\frac{1}{2}}}{\left(\Delta ^2+\Omega ^2\right)^{\frac{1}{2}}}.$$ $`(18)`$

Here $`J_0(z)`$ is the zeroth order Bessel function. At this point performing the summation in (16) is a straightforward procedure, as $`\Omega `$ enters linearly in the exponent in (18), and the sum is equivalent to the simple calculation of the partition function for a system of uncoupled spins in an external magnetic field.
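The identity (18) can be spot-checked numerically before it is used below; a small sketch ($`J_0`$ is available as scipy.special.j0, and the parameter values are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def check_eq18(delta=0.7, omega=1.3, eta=2.5):
    """Compare both sides of eq. (18).  Only the real part of the
    integrand, cos(x*Omega) J_0(...), contributes."""
    lhs, _ = quad(lambda x: np.cos(omega * x)
                  * j0(delta * np.sqrt(eta**2 - x**2)), 0.0, eta)
    gamma = np.hypot(delta, omega)       # (Delta^2 + Omega^2)^(1/2)
    rhs = np.sin(eta * gamma) / gamma
    return lhs, rhs                      # should agree to quad's tolerance
```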
Using (18) together with (11)-(12), after some algebra we obtain

$$\sigma _z^0(t)=\Lambda _\eta \,\mathrm{Re}\int _0^\eta dx\,J_0\left(\Delta \sqrt{\eta ^2-x^2}\right)\frac{1}{Z}\sum _{\{s_k\}}e^{-\beta H_B+ix\Omega }=\Lambda _\eta \,\mathrm{Re}\int _0^\eta dx\,J_0\left(\Delta \sqrt{\eta ^2-x^2}\right)\Phi (x),$$ $`(19)`$

with

$$\Phi (x)=\prod _k\left[\mathrm{cos}g_kx-i\,\mathrm{tanh}\beta \omega _k\,\mathrm{sin}g_kx\right].$$ $`(20)`$

The trick used in (18)-(20) is similar to that used in the calculation of partition functions for certain mean field models, such as the Curie-Weiss model; see and references therein. The off-diagonal elements of the reduced density matrix can be calculated in a similar way. The final result is

$$\rho _{10}^r(t)=\Lambda _\eta ^{}\,\mathrm{Im}\left[\Phi (\eta )-\Delta \int _0^\eta x\frac{J_1\left(\Delta \sqrt{\eta ^2-x^2}\right)}{\sqrt{\eta ^2-x^2}}\Phi (x)\,dx\right],$$ $`(21)`$

where

$$\Lambda _\eta ^{}=\left[\frac{i}{2}\right]_{\eta =2t}+\Delta \int _0^{2t}d\eta .$$ $`(22)`$

Equations (19)-(21) constitute the main result of this work. All calculations up to this point are exact, for arbitrary coupling constants and energy splittings of the spins. Let us now specify the coupling constants $`g_k`$ and the MR frequencies $`2\omega _k`$ of the external spins $`\stackrel{}{\sigma }^k`$. For simplicity we assume that the energy splittings of all "external" spins are the same, $`\omega _k=\omega `$, and that the coupling constants are randomly distributed with average $`\langle g\rangle `$ and second moment $`\langle g^2\rangle `$. The expression (20) for $`\Phi (x)`$ can be exponentiated, thus transforming the product into a summation in the exponent:

$$\Phi (x)=\mathrm{exp}\left[\sum _kA_k(x)\right],$$ $`(23)`$

where

$$A_k(x)=\frac{1}{2}\mathrm{ln}\left[\mathrm{cos}^2g_kx+\mathrm{tanh}^2\beta \omega \,\mathrm{sin}^2g_kx\right]-i\,\mathrm{tan}^{-1}\left[\mathrm{tanh}\beta \omega \,\mathrm{tan}g_kx\right].$$ $`(24)`$

Assume now that the sum in (23) contains $`N`$ terms, that is, the spin bath consists of $`N`$ spins, with $`N\gg 1`$. The $`A_k(x)`$ are random numbers with some average $`\langle A(x)\rangle `$. With this assumption expression (23) becomes

$$\Phi (x)=\mathrm{exp}\left[N\langle A(x)\rangle \right],$$ $`(25)`$

where $`\langle \mathrm{}\rangle `$ denotes the average taken over the coupling constants. In this work we assume that $`\langle g\rangle =0`$ and $`\langle g^2\rangle =\frac{C}{N}`$, where $`C`$ is of order unity. This choice is made only for simplicity of the calculations; obviously other possibilities for the distributions of $`\omega _k`$ and $`g_k`$ can be explored. The averaging of (24) can be done easily by expanding it up to second order in $`g`$, with the result

$$\Phi (x)=\mathrm{exp}\left[-\frac{C}{2\mathrm{cosh}^2\beta \omega }x^2\right].$$ $`(26)`$

This relation, when inserted into (19)-(21), gives explicit analytical expressions for the density matrix of the spin. The magnetization (the difference between the diagonal elements) is an oscillatory function which decays to the limiting value $`\sigma _z^0(\mathrm{})`$. This can be calculated explicitly, and after straightforward manipulations, such as a change of the order of integration in (19), one obtains

$$\sigma _z^0(\mathrm{})=\Delta \,\mathrm{cosh}\beta \omega \sqrt{\frac{\pi }{8C}}\,\mathrm{exp}\left(z^2\right)\mathrm{erfc}(z),$$ $`(27)`$

where

$$z=\frac{\Delta }{\sqrt{2C}}\mathrm{cosh}\beta \omega .$$ $`(28)`$

Here $`\mathrm{erfc}(z)`$ is the complementary error function.
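Equations (27)-(28) are straightforward to evaluate. Since $`e^{z^2}\mathrm{erfc}(z)`$ overflows for large $`z`$ if computed literally, the sketch below uses the scaled complementary error function erfcx(z) = exp(z²) erfc(z) from scipy:

```python
import numpy as np
from scipy.special import erfcx   # erfcx(z) = exp(z**2) * erfc(z)

def sigma_z0_infinity(delta, beta_omega, C):
    """Stationary magnetization of eqs. (27)-(28), as printed above."""
    z = delta * np.cosh(beta_omega) / np.sqrt(2.0 * C)
    return (delta * np.cosh(beta_omega)
            * np.sqrt(np.pi / (8.0 * C)) * erfcx(z))
```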
A similar straightforward calculation shows that the off-diagonal elements vanish for $`t=\mathrm{}`$. One could expect that the central spin "thermalizes" due to the interaction with the spin bath, i.e., that its density matrix reaches a state distributed according to the Boltzmann distribution. One can see that for this model this is not the case. Moreover, it is a straightforward observation that $`\rho ^r(\mathrm{})`$ depends on the initial state $`\rho ^r(0)`$, and thus the conservation of the state of the bath is not sufficient to represent the molecular chaos assumption and cannot represent a "true" heat bath. However, it is possible that by introducing a certain frequency dependence of the coupling constants $`g_k`$ one can obtain a reasonable approximation of a heat bath on a short time scale. This possibility should be the subject of further study. In summary, we have derived exact results for the reduced density matrix of a spin interacting with a particular type of spin bath. The precise functional dependence is determined by the choice of the spin bath dispersion relation and its coupling to the "central" spin. It turns out that the system (spin) does not reach the canonical distribution in the course of its evolution, and its stationary state for $`t=\mathrm{}`$ depends on the initial conditions. The author would like to thank Professors V. Privman and L.S. Schulman for their interest in this work and numerous fruitful discussions.

References

K. Blum, Density Matrix Theory and Applications, Plenum Press (1996).
W.H. Louisell, Quantum Statistical Properties of Radiation, John Wiley & Sons (1973).
A.O. Caldeira and A.J. Leggett, Phys. Rev. Lett. 46, 211 (1981).
G.W. Ford, M. Kac and P. Mazur, J. Math. Phys. 6, 504 (1965).
A.J. Leggett, in Percolation, Localization and Superconductivity, NATO ASI Series B: Physics, Vol. 109, edited by A.M. Goldman and S.A. Wolf (Plenum, New York, 1984), p. 1.
A.J. Bray and M.A. Moore, Phys. Rev. Lett. 49, 1546 (1982).
S. Chakravarty and A.J. Leggett, Phys. Rev. Lett. 52, 5 (1984).
Review: A.J. Leggett, S. Chakravarty, A.T. Dorsey, M.P.A. Fisher and W. Zwerger, Rev. Mod. Phys. 59, 1 (1987).
L.-D. Chang and S. Chakravarty, Phys. Rev. B 31, 154 (1985).
J. Winter, Magnetic Resonance in Metals, Oxford at the Clarendon Press (1971).
N.V. Prokof'ev and P.C.E. Stamp, J. Low Temp. Phys. 104, 143 (1996).
I.S. Tupitsyn, N.V. Prokof'ev and P.C. Stamp, Effective Hamiltonian in the Problem of a "Central Spin" Coupled to Spin Environment (preprint).
S. Sachdev and R.N. Bhatt, J. Appl. Phys. 61, 4366 (1987).
G.M. Palma, K.A. Suominen and A.K. Ekert, Proc. Royal Soc. London A 452, 567 (1996).
W.G. Unruh, Phys. Rev. A 51, 992 (1995).
D. Mozyrsky and V. Privman, J. Stat. Phys. 91, 787 (1998).
N.G. van Kampen, J. Stat. Phys. 78, 299 (1995).
I.S. Gradshtein and I.M. Ryzhik, Table of Integrals, Series and Products, Academic Press, Inc. (1980).
C.J. Thompson, Mathematical Statistical Mechanics, The Macmillan Company (1972).
# Nuclear Brightness Profiles of Merger Remnants: Constraints on the Formation of Ellipticals by Mergers

## 1. Introduction

The possibility that many elliptical galaxies formed from mergers of disk galaxies is a topic of continuing interest. That mergers form elliptical-like remnants has been demonstrated through numerical simulations, and ground-based imaging has shown that many merger remnants have $`r^{1/4}`$ luminosity profiles. These arguments, along with the detection of shells, ripples and kinematically decoupled cores in elliptical galaxies, support this 'merger hypothesis' (e.g., Kennicutt, Schweizer, & Barnes 1998; hereafter KSB98). Theoretical arguments indicate that it is in the nuclei of remnants where the merger hypothesis may face its most stringent test. If dynamical relaxation is the dominant physical process in mergers, then remnant nuclei will be very diffuse with large cores (Hernquist 1992), unless the progenitor nuclei were dense to begin with. If both merging galaxies contain a central black hole, then the stellar density of the merger remnant will be lower than that of the progenitor galaxies (Quinlan & Hernquist 1997). Alternatively, if mergers are accompanied by strong gaseous dissipation and central starbursts, then the remnant may have a high stellar density and steep luminosity profile (Mihos & Hernquist 1994). A comparison between the observed nuclear properties of merger remnants and elliptical galaxies can shed more light on the viability of the merger hypothesis and on the physical processes that govern the structure of merger remnants. The nuclear brightness profiles of elliptical galaxies have been mapped in great detail with HST. Faber et al. (1997; hereafter F97) studied a large sample of normal ellipticals. Carollo et al. (1997; hereafter C97) studied a sample of elliptical galaxies with kinematically decoupled cores (presumably old merger remnants), and found few differences as compared to the sample of F97. To complement this work, we initiated an HST study of a sample of younger merger remnants (van der Marel, Zurek, Mihos, Heckman & Hernquist 2000, in preparation), and we present here some of the preliminary results.

## 2. Sample Selection

A well-known compilation of nearby interacting galaxies and mergers is Toomre's list of 11 galaxies selected from the NGC Catalog (see KSB98). The two latest-stage mergers in the list are NGC 3921 and NGC 7252, which have tidal tails but show no remaining signs of two galaxies with a separate identity. To create a sample for our study, we sought galaxies with morphological properties similar to NGC 3921 and 7252 from the Catalogs of Arp (1966) and Vorontsov-Velyaminov (1977), and from the imaging survey of (UV-bright) Markarian galaxies by Mazzarella & Boroson (1993). This yielded a sample of 19 galaxies with $`cz<10000\,\mathrm{km}\,\mathrm{s}^{-1}`$, of which we imaged 14 galaxies (including NGC 3921 and 7252). The remaining five galaxies are classified as ultra-luminous IR galaxies, and were imaged with HST by other teams.

## 3. Observations

To minimize any influence of dust on the observed brightness profiles we observed the galaxies in the near-IR with the HST/NICMOS instrument (Cycle 7 project GO-7268). Images were obtained with the NIC2 camera (pixel size $`0.076^{\prime \prime }`$ square) using the filters F110W, F160W and F205W, corresponding roughly to J, H and K, and with the NIC1 camera (pixel size $`0.043^{\prime \prime }`$ square) only in F110W.
Each image was subjected to basic reduction steps followed by Lucy deconvolution with an appropriate PSF. Azimuthally averaged brightness profiles were extracted for all camera/filter combinations. Example results for two galaxies are shown in Figure 1. Each brightness profile was fit by a 'nuker' law (Lauer et al. 1995), which was deprojected to obtain the three-dimensional luminosity density. Figure 2 shows this density in the $`H`$-band at a fiducial radius $`r=50\,\mathrm{pc}`$, as a function of galaxy luminosity (assuming $`H_0=80\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$), both for the galaxies in our sample and for those in the samples of F97 and C97 (transformed to the $`H`$-band under the assumption of a proto-typical $`V-H=3.0`$ for elliptical galaxies; Peletier, Valentijn & Jameson 1990; Silva & Bothun 1998). While most of the galaxies in our sample follow the same approximate correlation as normal ellipticals, there are three galaxies that strongly stand out because of their high luminosity density: NGC 34, 3921 and 7252. One other galaxy that stands out for the same reason is NGC 1316 from the F97 sample, which is also a merger remnant (Schweizer 1980). The $`J-H`$ and $`H-K`$ color profiles of NGC 3921 and 7252 show that they become bluer towards the center, presumably due to recent star formation. This is consistent with the detection of strong Balmer absorption lines in ground-based spectra of these galaxies (KSB98). NGC 34 becomes redder towards the center, probably as a result of dust absorption. NGC 34 is the most IR-luminous galaxy of those that we observed, suggesting the presence of ongoing or recent star formation in this galaxy as well.

## 4. Discussion and Conclusions

The high luminosity densities observed in NGC 34, 3921 and 7252 are probably a direct consequence of recent star formation triggered by a merger. Stellar populations fade with time, and these galaxies will therefore become more similar to normal ellipticals as time passes. Dynamical and spectral evidence suggests that the mergers happened $`0.5`$–$`1.5`$ Gyr ago (KSB98). The models of Bruzual & Charlot (e.g., 1993) indicate that a single-burst population fades by a factor of $`\sim 10`$ between $`0.5`$ and 10 Gyr. Figure 2 therefore suggests that these galaxies may become similar to normal ellipticals within a Hubble time. Most galaxies in our sample are already now similar to normal ellipticals in terms of their nuclear luminosity density, although some fall at the high end of the range occupied by normal ellipticals. If these galaxies are the remnants of disk-disk mergers, then either the merger ages must be large, so that the newly formed stars have mostly faded, or they never formed many new stars, e.g., because the progenitors were gas poor or the star formation efficiency was low. Results such as those for NGC 3921 in Figure 1 indicate that its star formation was limited mostly to the central region, $`r<0.5^{\prime \prime }\approx 200\,\mathrm{pc}`$. This is consistent with predictions of dissipative N-body simulations of disk-disk mergers, in which the gas quickly falls to the central few hundred pc (Mihos & Hernquist 1994). The 'excess' light in the central arcsec of NGC 3921 (as compared to NGC 7727; see Figure 1) represents $`\sim 4`$% of the total galaxy luminosity, and probably a smaller fraction in terms of mass.
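For reference, the 'nuker' law used for the profile fits above has the standard five-parameter form of Lauer et al. (1995); a minimal sketch of the projected profile only (the deprojection step applied by the authors is not reproduced here):

```python
import numpy as np

def nuker(r, Ib, rb, alpha, beta, gamma):
    """Nuker-law surface brightness profile (Lauer et al. 1995):
    inner logarithmic slope -gamma, outer slope -beta, break
    sharpness alpha, normalized so that I(rb) = Ib."""
    x = np.asarray(r, dtype=float) / rb
    return (Ib * 2.0**((beta - gamma) / alpha)
            * x**(-gamma)
            * (1.0 + x**alpha)**((gamma - beta) / alpha))
```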
CO complexes observed in the central kpc of merger remnants support the view that gas flows to the center in galaxy interactions, but even if all the CO observed in NGC 3921 and 7252 were soon turned into stars, the light from recently formed stars would still provide only a small fraction of the total galaxy luminosity (Hibbard & Yun 1999). To summarize, we have detected the luminosity spikes predicted by dissipative simulations of disk-disk mergers, but only in some of our galaxies. In general, it appears that the light from young stars does not provide a major contribution to the total galaxy luminosity. This is consistent with work by Silva & Bothun (1998), who found that the near-IR colors of most morphologically disturbed ellipticals are inconsistent with intermediate age (2–5 Gyr) stars providing much of the luminosity. This raises the question of whether these galaxies were ever similar to ultra-luminous infrared galaxies, in which massive starbursts are known to occur as a result of galaxy interactions.

## References

Arp, H. C. 1966, Atlas of Peculiar Galaxies, ApJS, 14, 1
Bruzual, A. G., & Charlot, S. 1993, ApJ, 405, 538
Carollo, C., Franx, M., Illingworth, G., & Forbes, D. 1997, ApJ, 481, 710 (C97)
Faber, S. M., et al. 1997, AJ, 114, 1771 (F97)
Hernquist, L. 1992, ApJ, 400, 460
Hibbard, J. E., & Yun, M. S. 1999, ApJ, 522, L93
Kennicutt, R. C., Schweizer, F., & Barnes, J. E. 1998, Galaxies: Interactions and Induced Star Formation (Berlin: Springer) (KSB98)
Lauer, T. R., et al. 1995, AJ, 110, 2622
Mazzarella, J. M., & Boroson, T. A. 1993, ApJS, 85, 27
Mihos, J. C., & Hernquist, L. 1994, ApJL, 437, L47
Peletier, R. F., Valentijn, E. A., & Jameson, R. F. 1990, A&A, 233, 62
Quinlan, G. D., & Hernquist, L. 1997, New Astronomy, 2, 533
Schweizer, F. 1980, ApJ, 237, 303
Silva, D. R., & Bothun, G. D. 1998, AJ, 116, 85
Vorontsov-Velyaminov, B. A. 1977, Atlas of Interacting Galaxies, A&AS, 28, 1
# Structural properties of Dark Matter Halos

## 1. Introduction

The study of the properties of halos produced in N-body simulations is of central importance to semianalytical modelling of galaxy formation and evolution within clusters. During the past four years most attention has been paid to one out of the many possible statistics which could characterize the halo population, namely the density profile (Navarro, Frenk & White 1996; Tormen, Bouchet & White 1997; Moore et al. 1999; Jing & Suto 1999). However, it is difficult to determine reliably the density profile of numerical dark matter halos containing fewer than $`10^5`$ particles, and in modern high resolution, parallel N-body cosmological simulations there are typically not many rich halos which are not formed by "overmerged" material in the centers of clusters. In fact, all the work in the above mentioned papers on density profiles was performed on halos extracted from simulations and "re-simulated" at higher mass and spatial resolution. This allows one to study in detail at most a dozen halos, but with this technique it is difficult to perform an analysis spanning a wide range of halo parameters. On the other hand, numerical simulations typically produce a lot of halos with $`10^1-10^4`$ particles, which carry significant dynamical information, having been "processed" by the gravitational field of the region where they form. It could be wise to try to use this statistical information to discriminate among different models of structure formation. Unfortunately we do not have a complete physical understanding of the gravitational instabilities which drive a halo (and particularly a numerical halo, from an intrinsically spatially and temporally discrete experiment as an N-body simulation is) toward an (almost) relaxed state, so we must adopt models to interpret the results of numerical simulations. In this contribution we show that, given enough spatial and mass resolution, and choosing an appropriate statistic, it is possible to discriminate among different models of halo formation.

## 2. Models of collapse

We consider three models for the gravitational collapse of halos: the Singular Isothermal Sphere (SIS), the spherically-averaged peak-patch model by Bond & Myers (1996), and the Truncated Isothermal Sphere (TIS) model by Shapiro, Iliev & Raga (1999). The physical assumptions of the three models are significantly different. The SIS model is the simplest one, but also the most unrealistic: it predicts a singular density profile decaying as $`\rho _{SIS}\propto r^{-2}`$. Total mass is infinite and the density is everywhere nonzero (the system is not truncated). The Bond & Myers model is based on a Montecarlo approach named by its authors "peak-patch". The underlying model is the collapse of a homogeneous spheroid, for which the equations of motion are exactly solvable. We consider here the spherically-averaged quantities computed from this model. The third and last is a class of truncated isothermal models recently introduced by Shapiro et al. (1999). The main idea is to consider solutions of the Jeans equations for isothermal systems truncated at a finite radius. This is possible only if the shear gradient terms in these equations are not identically zero, for instance if an isotropic pressure term is present.
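For orientation, the SIS has textbook density and enclosed-mass profiles in terms of its 1-D velocity dispersion; the following sketch is standard material rather than anything specific to this contribution (the value of $`G`$ in these units is quoted from memory):

```python
import numpy as np

G = 4.301e-6   # gravitational constant in kpc (km/s)^2 / Msun (assumed value)

def sis_density(r_kpc, sigma_v):
    """Singular isothermal sphere: rho(r) = sigma_v^2 / (2 pi G r^2),
    returned in Msun / kpc^3 for sigma_v in km/s."""
    return sigma_v**2 / (2.0 * np.pi * G * r_kpc**2)

def sis_mass(r_kpc, sigma_v):
    """Enclosed mass M(<r) = 2 sigma_v^2 r / G, in Msun."""
    return 2.0 * sigma_v**2 * r_kpc / G
```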
The relationship between the 1-D velocity dispersion and mass in these three models is given by

$$\sigma _v=C_{SIS,TIS}\left(M_{12}\right)^{1/3}\left(1+z_{coll}\right)^{1/2}h^{1/3}$$ (1)

where $`C_{SIS}=71.29`$ and $`C_{TIS}=104.69`$ (in km/sec), respectively, and $`M_{12}`$ is the mass in units of $`10^{12}\,M_{\odot }`$. For the Bond & Myers model the relationship is slightly different:

$$\sigma _v=117.6\left(M_{12}\right)^{0.29}\left(1+z_{coll}\right)^{1/2}h^{1/3}$$ (2)

(Bond & Myers 1996, eq. 4.4). Note that this latter relationship was obtained from a fit to Montecarlo simulations over a range of masses much larger than the one we consider in this contribution (see next section).

## 3. Simulation and results

In order to test the models presented in the previous section, we use the results from an N-body simulation we have recently performed. Initial conditions were picked from the catalogue of simulated clusters by van Kampen & Katgert (1997): we chose a configuration which would produce a double cluster, so that we could study galaxy formation in a high shear environment. We then ran the same set of initial conditions at higher mass and spatial resolution. The run was performed with $`256^3`$ particles using a parallel treecode (Becciani, Antonuccio-Delogu & Pagliaro 1996). The softening length was fixed at 15 $`h^{-1}`$ kpc (comoving). Although the simulation produces about a dozen clusters, we restrict our attention to the two major clusters, which are shown in Figure 2. In order to study the virialization properties, it is important to adopt a physically motivated criterion to select the halos, because these properties are traced by gravitationally bound particles. Simple criteria (like friends-of-friends) do not distinguish particles on the basis of their gravitational properties, but only on the basis of their relative distance. Here we selected groups using SKID, an open-source software package which produces catalogues of groups including only gravitationally bound particles (Stadel, Katz & Quinn 1999). The only serious drawback of SKID is that it is very slow. For this reason, we restricted our analysis to a $`10^3\,h^{-3}\,\mathrm{Mpc}^3`$ region centered around the double cluster and a similar region centered around a single cluster. We denote these two regions as DOUBLE and SINGLE in the following. DOUBLE and SINGLE contain approximately the same number of objects. Here we focus our attention on halos with masses in the range $`10^{11}-10^{12}\,M_{\odot }`$, i.e. low-mass halos. These are the smallest objects we can reliably trace with the mass resolution and softening length we adopted. Within this mass range DOUBLE has 827 halos, while SINGLE has 757. From Figures 1 and 2 we can see that, using our halo samples, it is possible to discriminate among the three models outlined in the previous section. Both plots show that the TIS model offers a better description of the equilibrium properties of these halos. But notice also that a fraction of the halos in DOUBLE has velocity dispersions significantly smaller than the average. Tidal effects, which are much more pronounced in DOUBLE than in SINGLE, are responsible for this difference (Antonuccio-Delogu, Pagliaro & Becciani 1999b).

## 4. Conclusion

The fact that the TIS model gives a better fit to the properties of low-mass virialized halos should not be surprising: it is possible to show that this reflects the action of the environmental tidal field on halo formation (Antonuccio-Delogu et al. 1999a).
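To make the comparison of Section 3 concrete, the $`\sigma _v`$–mass predictions of eqs. (1)-(2) can be tabulated with a small helper:

```python
def sigma_v_model(M12, z_coll, h, model="TIS"):
    """1-D velocity dispersion (km/s) predicted by eqs. (1)-(2).
    M12 = halo mass in units of 1e12 Msun, z_coll = collapse redshift."""
    if model == "SIS":
        return 71.29 * M12**(1.0/3.0) * (1.0 + z_coll)**0.5 * h**(1.0/3.0)
    if model == "TIS":
        return 104.69 * M12**(1.0/3.0) * (1.0 + z_coll)**0.5 * h**(1.0/3.0)
    if model == "BM":   # Bond & Myers (1996) spherically averaged peak-patch fit
        return 117.6 * M12**0.29 * (1.0 + z_coll)**0.5 * h**(1.0/3.0)
    raise ValueError("model must be 'SIS', 'TIS' or 'BM'")
```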
A caveat, however: we have found that the actual profile of these halos differs from the Minimum Energy state suggested by Shapiro et al. (1999), and is rather more consistent with a tidally limited profile. Before concluding, a few words about the consistency of the results of our work with the idea of the existence of a "universal" density profile. The TIS density profile differs significantly from both the Navarro et al. (1996) and the Moore et al. (1999) density profiles, because it flattens in the central region (i.e. it has a core). At the time we wrote this contribution, Jing & Suto (1999) submitted the results of a series of simulations which suggest a flattening of the inner 2% (in units of $`r/r_{200}`$) of the density profile in galaxy-sized halos. A similar flattening is not observed in their cluster-sized halos. The issue is then still an open one. All this seems to suggest that statistics based on virialization properties bear a smaller intrinsic uncertainty than the density profile, and are thus more suitable to characterize the average statistical properties of halo populations.

### Acknowledgments.

V.A.-D. would like to thank E. van Kampen for having supplied the initial parameters from his catalogue of simulated clusters.

## References

Antonuccio-Delogu, V., Colafrancesco, S., Pagliaro, A., van Kampen, E., & Becciani, U. 1999a, in preparation
Antonuccio-Delogu, V., Pagliaro, A., & Becciani, U. 1999b, in preparation
Becciani, U., Antonuccio-Delogu, V., & Pagliaro, A. 1996, Computer Physics Communications, 99, 9
Bond, J.R., & Myers, S.T. 1996, ApJS, 103, 1
Jing, Y.P., & Suto, Y. 1999, submitted to ApJ
Moore, B., Quinn, T., Governato, F., Stadel, J., & Lake, G. 1999, submitted to ApJ, astro-ph/9903164
Navarro, J.F., Frenk, C.S., & White, S.D.M. 1996, ApJ, 462, 563
Shapiro, P.R., Iliev, I.T., & Raga, A.C. 1999, MNRAS, 307, 203
Stadel, J., Katz, N., & Quinn, T. 1999, http://www-hpcc.astro.washington.edu/tools/SKID/
Tormen, G., Bouchet, F.R., & White, S.D.M. 1997, MNRAS, 286, 865
van Kampen, E., & Katgert, P. 1997, MNRAS, 289, 327
# Si AND Mn ABUNDANCES IN DAMPED LYMAN $`\alpha `$ SYSTEMS WITH LOW DUST CONTENT

(The data presented herein were obtained with the NASA/ESA Hubble Space Telescope and with the Keck I Telescope. The W.M. Keck Observatory is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation.)

## 1 INTRODUCTION

This is the fourth paper in a series dealing with metal abundances in damped Lyman $`\alpha `$ systems (DLAs) at intermediate redshifts ($`z_{\mathrm{abs}}<1.5`$). The number of such systems known has been increasing slowly over the past few years with the growing database of Hubble Space Telescope (HST) QSOs observed at wavelengths below 3000 Å, which are inaccessible from the ground. In Pettini et al. (1999, Paper III) we combined measurements of the abundance of Zn in 10 DLAs at $`z_{\mathrm{abs}}<1.5`$ with earlier surveys at higher redshifts to determine the evolution of the metallicity of H I gas in the universe. We found that, somewhat surprisingly, the metal content of DLAs apparently does not increase with cosmic time, and that the column density-weighted mean value of the Zn abundance remains roughly constant at $`[\mathrm{Zn}/\mathrm{H}]\simeq -1.1\pm 0.2`$ between $`z=3`$ and 0.4. \[Footnote 1: In the usual notation, $`[\mathrm{Zn}/\mathrm{H}]=\mathrm{log}(\mathrm{Zn}/\mathrm{H})-\mathrm{log}(\mathrm{Zn}/\mathrm{H})_{\odot }`$.\] This result is apparently at odds with the common interpretation of DLAs as the high redshift progenitors of present day spiral galaxies; disk stars in the Milky Way, for example, had already reached $`[\mathrm{Fe}/\mathrm{H}]\simeq -0.5`$ at $`z\simeq 1`$ (Edvardsson et al. 1993), a point first made by Meyer & York (1992). The persisting low metallicity of DLAs may be explained by abundance gradients (Prantzos & Silk 1998) but, more generally, is consistent with the finding from HST imaging that galaxies of different morphological types and with a range of surface brightnesses contribute to the absorption cross-section for H I (Le Brun et al. 1997). To some extent this may well be a consequence of the fact that DLAs selected with HST are likely to be preferentially dust- and therefore metal-poor, simply because the background QSOs would otherwise be too faint to be accessible with a 2.5 m telescope. Whatever the connection between them and present-day galaxies, damped Lyman $`\alpha `$ systems remain our best route to accurate determinations of element abundances at high redshifts. Element ratios in Galactic stars and nearby H II regions have long been scrutinized with a view to deciphering the clues they hold both to the origin of different stellar populations and to the stellar yields (see, for example, Wheeler, Sneden, & Truran 1989 for a review of the main ideas underlying this field of work). As pointed out by Pettini, Lipman, & Hunstead (1995), abundance measurements in DLAs are potentially an important extension of this technique (and one yet to be fully exploited), allowing access to elements which are not well observed in stars, to lower metallicities than those of present-day H II regions, and to a wider range of environments and physical conditions than local studies.
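Footnote 1's definition translates directly into code. As a minimal sketch: the solar zinc abundance on the Anders & Grevesse (1989) scale, 12 + log(Zn/H) = 4.60, is quoted here from memory and should be treated as an assumption.

```python
import numpy as np

LOG_ZN_H_SUN = 4.60 - 12.0   # assumed Anders & Grevesse (1989) solar value

def zn_over_h(N_Zn, N_HI):
    """[Zn/H] = log(N_Zn/N_HI) - log(Zn/H)_sun, per footnote 1;
    both column densities in cm^-2."""
    return np.log10(N_Zn / N_HI) - LOG_ZN_H_SUN
```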
A possible complication in interpreting interstellar abundances is accounting for the fractions of refractory elements which are missing from the gas-phase, having been incorporated into dust grains; however, we are aided in this respect by the generally low dust depletions which seem to apply to many DLAs (Pettini et al. 1997a). In this paper we analyse Keck I HIRES observations of several elements, ranging from Mg to Zn, in three DLAs at $`z_{\mathrm{abs}}\lesssim 1`$. One of these, at $`z_{\mathrm{abs}}`$ = 0.61251 in Q0058+019 (= PHL 938), has not been studied before, while for the other two, at $`z_{\mathrm{abs}}`$ = 1.00945 in Q0302-223 and $`z_{\mathrm{abs}}`$ = 0.85967 in Q0454+039, we previously published only intermediate resolution observations (Pettini & Bowen 1997, Paper II; Steidel et al. 1995, Paper I, respectively). In addition, we present an HST WFPC2 image of the field of Q0058+019, where we resolve a galaxy which is a highly plausible candidate for the damped absorber, being very close to the QSO sight-line. We use the pattern of element abundances in these and other DLAs where corrections for dust depletion are estimated to be small to explore the metallicity dependence of the abundances of Si (an $`\alpha `$-element) and Mn.

## 2 HST OBSERVATIONS

### 2.1 FOS Spectroscopy

A trawl of the HST Faint Object Spectrograph (FOS) data archive revealed that Q0058+019 exhibits a damped Lyman $`\alpha `$ line at $`\lambda _{\mathrm{obs}}=1959`$ Å, shown in Figure 1. In producing this spectrum we resampled the pipeline calibrated data to a linear dispersion of 0.51 Å per pixel (one quarter diode steps) and applied a correction of +8% of the continuum level to bring the core of the Lyman $`\alpha `$ line to net zero flux. A fit to the absorption profile yielded a neutral hydrogen column density $`N(\mathrm{H}^0)=(1.2\pm 0.5)\times 10^{20}\,\mathrm{cm}^{-2}`$ at an absorption redshift $`z_{\mathrm{abs}}`$ = 0.6118. The column density error, which includes the effect of the correction applied to the zero level, is larger than is usually the case because the signal-to-noise ratio of the short FOS exposure is only $`\sim 7`$. Even so, the value of $`N(\mathrm{H}^0)`$ is lower than the threshold $`N(\mathrm{H}^0)=2\times 10^{20}\,\mathrm{cm}^{-2}`$ for DLAs originally adopted by Wolfe et al. (1986), reflecting the shift of the column density distribution toward lower values at $`z<1.5`$, as first noted by Lanzetta, Wolfe, & Turnshek (1995). The difference between the redshift of the Lyman $`\alpha `$ line and $`z_{\mathrm{abs}}`$ = 0.61251, measured from the metal absorption lines in the HIRES spectrum (§3), corresponds to approximately half a diode on the detector and is typical of the accuracy with which the zero point of the FOS wavelength scale can be determined (Rosa, Kerber, & Keyes 1998). The FOS spectra of Q0302-223 and Q0454+039 have been described in Papers II and I respectively; from a re-examination of the damped Lyman $`\alpha `$ line profiles we deduce $`N(\mathrm{H}^0)=(2.3\pm 0.5)\times 10^{20}\,\mathrm{cm}^{-2}`$ and $`(4.9\pm 0.7)\times 10^{20}\,\mathrm{cm}^{-2}`$ at $`z_{\mathrm{abs}}`$ = 1.00945 and $`z_{\mathrm{abs}}`$ = 0.85967 respectively, in good agreement with the values published in the earlier papers.

### 2.2 WFPC2 Imaging

The field of Q0058+019 was imaged with the Wide Field Planetary Camera (WFPC2) as part of a larger HST program to study the morphology and environments of galaxies producing Mg II absorption systems.
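As an aside on the §2.1 profile fit: the optical depth of a damped Lyman $`\alpha `$ line is a Voigt function of standard atomic constants, and a minimal model of the line looks like the sketch below. The atomic data are quoted from memory and should be treated as assumptions; the published $`N(\mathrm{H}^0)`$ values come from the authors' own fits, not from this sketch.

```python
import numpy as np
from scipy.special import voigt_profile

# Lyman-alpha atomic data (assumed standard values)
F_LYA  = 0.4164       # oscillator strength
LAM0   = 1215.67e-8   # rest wavelength, cm
GAMMA  = 6.265e8      # damping constant, s^-1
C_CGS  = 2.998e10     # speed of light, cm/s
SIGMA0 = 0.02654      # pi e^2 / (m_e c), cm^2 Hz

def tau_damped_lya(lam_rest_angstrom, N_HI, b_kms=20.0):
    """Optical depth of a damped Lya line vs. rest-frame wavelength."""
    nu  = C_CGS / (np.asarray(lam_rest_angstrom, dtype=float) * 1e-8)
    nu0 = C_CGS / LAM0
    sigma_g = (b_kms * 1e5 / C_CGS) * nu0 / np.sqrt(2.0)  # Doppler sigma, Hz
    gamma_l = GAMMA / (4.0 * np.pi)                       # Lorentzian HWHM, Hz
    return N_HI * SIGMA0 * F_LYA * voigt_profile(nu - nu0, sigma_g, gamma_l)

# e.g. model_flux = continuum * np.exp(-tau_damped_lya(lam, 1.2e20))
```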
A set of four exposures was taken through the F702W filter (with an effective wavelength of 6900 Å) in a two-point dither pattern; the total exposure time was 5000 s. The individual CCD frames were reduced using the pipeline calibration procedure and then coadded by "drizzling" onto a master output pixel grid using the DITHER and DITHERII IRAF packages (Fruchter & Hook 1999). The next step involved subtracting the QSO image to reveal any galaxies at small separations. The HST Mg II absorber imaging program was purposefully designed to facilitate this subtraction process by constructing an empirical point spread function (PSF) using images of many QSOs. Since the PSF characteristics (FWHM, shape, bleeding) depend sensitively on the level of saturation, QSOs observed in the program were grouped by flux so that an appropriate PSF could be determined using only QSOs of similar flux to Q0058+019. Subtraction of this median PSF (with the DAOPHOT IRAF package) then yielded the final image reproduced in Figure 2. A faint galaxy is clearly visible approximately 1.2 arcsec to the north-east of the QSO. Given its proximity to the QSO sight-line, this is the most likely candidate for the damped Lyman $`\alpha `$ absorber at $`z_{\mathrm{abs}}`$ = 0.61251. Apparently the model PSF does not reproduce accurately one of the diffraction spikes in the QSO image, leaving a residual flux deficit which cuts through the galaxy. When this is taken into account, the object morphology is suggestive of a late-type galaxy seen at a high inclination angle, $`i\simeq 65^{\circ }`$. Table 1 lists relevant measurements, assuming $`z_{\mathrm{gal}}=0.61251`$ and adopting an $`H_0=50\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$, $`q_0=0.05`$ cosmology. We converted the measured F702W magnitude to an AB magnitude in the $`\mathcal{R}`$ photometric system of Steidel & Hamilton (1993) by reference to ground-based images of the field (the \[6930/1500\] $`\mathcal{R}`$ filter is a very close match to the WFPC2 F702W filter), and obtained $`\mathcal{R}=23.7`$. For the above cosmology this in turn corresponds to an absolute magnitude in the rest-frame $`B`$-band (in the conventional Vega-based magnitude system) of $`M_B=-19.1`$. No K-correction was applied because at $`z=0.61251`$, 6930 Å corresponds closely to the effective wavelength of the redshifted $`B`$-band. Adopting $`M_B^{\ast }=-21.0`$ (e.g. Folkes et al. 1999), we conclude that the candidate damped Lyman $`\alpha `$ absorber is a galaxy of relatively low luminosity, with $`L\simeq (1/6)L^{\ast }`$. Other DLAs have been found to be associated with compact galaxies, dwarfs, low surface brightness galaxies, and even an S0 (Le Brun et al. 1997; Lanzetta et al. 1997; Rao & Turnshek 1998). We now add a low luminosity spiral to the list, thereby reinforcing the conclusion of these earlier studies that damped Lyman $`\alpha `$ systems are drawn from a diverse population of galaxies with a wide range of morphologies and luminosities.

## 3 KECK OBSERVATIONS

The spectra of the three QSOs were recorded at high spectral resolution with the HIRES echelle spectrograph (Vogt et al. 1994) on the Keck I telescope on Mauna Kea, Hawaii, on 23 and 24 September 1998. Relevant details of the observations are collected in Table 2. We used the UV-blazed cross-disperser grating to record interstellar lines of interest in the three DLAs longward of the atmospheric cut-off near 3200 Å.
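Stepping back briefly to the photometry of §2.2: the absolute magnitude quoted there follows from the standard distance modulus, with the luminosity distance for the adopted $`H_0=50`$, $`q_0=0.05`$ cosmology given by the Mattig relation (a matter-only Friedmann model). A minimal sketch, which omits the small AB-to-Vega and color conversions applied by the authors and therefore recovers $`M_B`$ only approximately:

```python
import numpy as np

C_LIGHT = 2.998e5   # km/s

def lum_distance_mpc(z, H0=50.0, q0=0.05):
    """Mattig relation:
    D_L = (c / (H0 q0^2)) [q0 z + (q0 - 1)(sqrt(1 + 2 q0 z) - 1)]."""
    return (C_LIGHT / (H0 * q0**2)) * (
        q0 * z + (q0 - 1.0) * (np.sqrt(1.0 + 2.0 * q0 * z) - 1.0))

def absolute_mag(m_apparent, z, H0=50.0, q0=0.05):
    """M = m - 5 log10(D_L / 10 pc), with no K-correction (as in the text)."""
    d_pc = lum_distance_mpc(z, H0, q0) * 1.0e6
    return m_apparent - 5.0 * np.log10(d_pc / 10.0)

# absolute_mag(23.7, 0.61251) gives roughly -19.7 before the band
# conversions, close to the quoted M_B = -19.1
```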
With this setup, in Q0058+019 we cover from Zn II $`\lambda 2026`$ to Mg I $`\lambda 2852`$ at $`z_{\mathrm{abs}}`$ = 0.61251, while the spectra of Q0302-223 and Q0454+039 extend from Si II $`\lambda 1808`$ to Mn II $`\lambda 2606`$ at $`z_{\mathrm{abs}}`$ = 1.00945 and 0.85967 respectively (for Q0302-223 this necessitated a second grating setting). Given the good seeing (0.5 to 0.7 arcsec), we used the 0.86 arcsec wide entrance slit, which projects to 3 pixels on the 2048 × 2048 Tektronix CCD detector, resulting in a resolution of 6.5 $`\mathrm{km}\,\mathrm{s}^{-1}`$ FWHM. The echelle spectra were extracted with Tom Barlow's customised software package, following the steps described in Paper III. The signal-to-noise ratios of the reduced spectra were measured directly from the rms fluctuations about the continuum level. In general the value of S/N varies along each spectrum, due to the presence of broad emission lines at the QSO redshifts and increasing atmospheric absorption below 3600 Å; the values (per pixel) listed in column (10) of Table 2 refer to the region near rest frame wavelength $`\lambda _0=2350`$ Å and should be representative of most of the absorption lines recorded. As can be seen from column (11), our Keck spectra are sensitive to rest frame equivalent widths of only a few mÅ. Figures 3, 4, and 5 show examples of absorption lines of varying strengths in each damped system.

## 4 ION COLUMN DENSITIES AND ELEMENT ABUNDANCES

When recorded at high spectral resolution, the metal lines in QSO absorption systems commonly break up into multiple components; as can be seen from Figures 3, 4, and 5, the three DLAs observed here are no exception, with the strongest absorption lines extending over 100 to 200 $`\mathrm{km}\,\mathrm{s}^{-1}`$. We analysed these complex absorption profiles with the VPFIT software written by Bob Carswell. The procedure has been described in detail before (e.g. Paper III). As our data include seven transitions of Fe II with widely different $`f`$-values, spanning a range of $`\sim 165`$, the model fits produced with VPFIT are well constrained and allow us to determine the redshift, velocity dispersion parameter $`b`$ ($`b=\sqrt{2}\sigma `$, where $`\sigma `$ is the one-dimensional velocity dispersion of the ions along the line of sight, assumed to be Gaussian), and ion column density $`N`$ of each component. Details of the profile fits are collected in Table 3. The important point is that the total ion column densities do not depend on the fine detail of the profile decomposition, because for each species considered we observe sufficiently weak transitions that the corresponding absorption lines fall on the linear part of the curve of growth. The exception is the Mg II $`\lambda \lambda 2796,2803`$ doublet, which is strongly saturated (see Figure 3) and is therefore not included in the present analysis. The total column densities of the first ions of Zn, Si, Mn, Cr, Fe, and Ni in each DLA are listed in Table 4, together with $`N(\mathrm{H}^0)`$. In deriving these values we used the compilation of $`f`$-values by Morton (1991), with the revisions proposed by Savage & Sembach (1996). For Ni II we took advantage of the recent radiative lifetime measurements by Fedchack & Lawler (1999), which have led to a reduction by a factor of 1.9 of the $`f`$-values of the $`\lambda 1709.600`$ and $`\lambda 1741.549`$ transitions relative to the values proposed by Zsargó & Federman (1998).
The ensuing upward revision of $`N(\mathrm{Ni}^+)`$ by a factor of $`\sim 2`$ is significant for the interpretation of the pattern of relative element abundances, as discussed below (§5). Since in DLAs the elements considered here are predominantly singly ionized, their abundances can be deduced directly by dividing the values of $`N`$ in columns (4) to (9) by the values of $`N(\mathrm{H}^0)`$ in column (3) of Table 4. Comparison with the solar system abundance scale of Anders & Grevesse (1989) finally gives the relative abundances listed in Table 5. (We have included in Table 5 the abundance measurements from Paper III, with the appropriate revisions for $`N(\mathrm{Ni}^+)`$, as they will be considered in the discussion below.) We now briefly describe the results for the three DLAs which are the subject of the present paper.

1. Q0058+019; $`z_{\mathrm{abs}}=0.61251`$: Among the species observed in our HIRES spectra, Zn gives the most direct measure of metallicity, free from the complication of dust depletion. In this low redshift DLA the Zn II $`\lambda \lambda 2026,2062`$ doublet lines fall at 3267 and 3326 Å respectively, where observations are difficult due to atmospheric absorption. Although noisy, we clearly detect both lines; given the relatively low column density of hydrogen in this system, the presence of Zn II absorption in itself implies high abundances, and indeed we deduce $`[\mathrm{Zn}/\mathrm{H}]\simeq 0`$. The conclusion that this absorber has near-solar metallicity does not rest on the poorly observed Zn II lines alone; as can be seen from Table 5, the abundances of Cr and Fe, two other iron-peak elements, are also within a factor of $`\sim 2`$ of solar, and could be higher if some fraction of these elements has been incorporated into dust grains, as discussed below (§5). Evidently, DLAs with near-solar abundances are not rare at $`z\lesssim 1`$; out of the six measurements available to date, three have metallicities $`Z_{\mathrm{DLA}}\gtrsim (1/3)Z_{\odot }`$ (see Figure 7 of Paper III). However, a wide range of values of [Zn/H], spanning $`\sim 1.5`$ dex, persists at all redshifts. It is intriguing that systems with high metallicity are invariably at the low end of the distribution of neutral hydrogen column density, so that the census of metals seen in absorption is dominated by gas with high $`N(\mathrm{H}^0)`$ and low metal content. \[Footnote 2: Thus, the new measurement for this DLA has a minimal effect on the column-density weighted average $`[\mathrm{Zn}/\mathrm{H}]=-1.03\pm 0.23`$ in the redshift interval $`z=0.40-1.5`$ derived in Paper III.\] It is highly likely that selection effects play a role here; DLAs with large columns of molecules (and therefore probably high metallicity) are known to exist (e.g. the $`z_{\mathrm{abs}}`$ = 0.68466 21 cm absorber towards B0218+357; Carilli, Rupen, & Yanny 1993; Wiklind & Combes 1995), but are too faint to be studied spectroscopically at optical and ultraviolet wavelengths. It is a lingering concern, however, that such selection effects are still largely unquantified. Galaxies in the local universe exhibit a rough correlation between metallicity and $`B`$-band luminosity, which apparently persists at least to $`z=0.4`$ (Kobulnicky & Zaritsky 1999). Referring to these authors' Figure 4, it can be seen that, with $`M_B=-19.1`$ and $`[\mathrm{Zn}/\mathrm{H}]=+0.1\pm 0.2`$, the $`z_{\mathrm{abs}}`$ = 0.61251 absorbing galaxy in Q0058+019 is somewhat metal-rich for its luminosity, but is not inconsistent with the local relationship given the observed scatter.
What is perhaps more remarkable is to find a near-solar abundance at relatively large distances from the centre of the galaxy. If the DLA arises in the disk, the high inclination of the galaxy, $`i\simeq 65^{\circ }`$, places it at a galactocentric distance of $`10\,h_{50}^{-1}\,\mathrm{kpc}/\mathrm{cos}\,i\simeq 24\,h_{50}^{-1}`$ kpc. If the absorption takes place in the halo, it would imply the existence of a cloud with $`N(\mathrm{H\,I})=10^{20}\,\mathrm{cm}^{-2}`$ and solar metallicity $`10\,h_{50}^{-1}`$ kpc above the mid-plane of the galaxy. In either case it would seem that this galaxy does not have a marked abundance gradient, either along or perpendicular to the disk.

2. Q0302-223; $`z_{\mathrm{abs}}=1.00945`$: As can be seen from Figure 4, two main groups of components, separated by 36 $`\mathrm{km}\,\mathrm{s}^{-1}`$, produce most of the absorption seen in this DLA; additional weaker components, at $`v=+35`$ and $`+121`$ $`\mathrm{km}\,\mathrm{s}^{-1}`$ relative to $`z_{\mathrm{abs}}`$ = 1.00945, are visible in the stronger Fe II lines. Although the Zn II and Cr II lines are weak, the corresponding column densities and abundances in Tables 3 and 4 are in excellent agreement with the values reported in Paper II, which were measured from data of much lower resolution (0.88 Å compared to 0.08 Å FWHM). HST WFPC2 imaging (Le Brun et al. 1997) has revealed two compact galaxies close to the line of sight. At $`z=1.009`$ they would have absolute luminosities $`L_B\simeq 0.2L_B^{\ast }`$ and $`\simeq L_B^{\ast }`$, and impact parameters of 12 $`h_{50}^{-1}`$ and 27 $`h_{50}^{-1}`$ kpc respectively. It remains to be established with spectroscopic observations which of the two galaxies is associated with the damped absorber.

3. Q0454+039; $`z_{\mathrm{abs}}=0.85967`$: Two groups of components, separated by $`\sim 70`$ $`\mathrm{km}\,\mathrm{s}^{-1}`$, are responsible for most of the absorption in this DLA (Figure 5). Again, the column densities we deduce for $`\mathrm{Zn}^+`$, $`\mathrm{Cr}^+`$, and $`\mathrm{Fe}^+`$ are in excellent agreement with the values measured by Steidel et al. (1995) from 2.3 Å resolution Lick spectra, once allowance is made for the different $`f`$-values used. These authors also reported the presence of a compact galaxy close to the line of sight to the QSO, subsequently confirmed with WFPC2 images by Le Brun et al. (1997). If this is the absorber, it is at a projected separation of 8 $`h_{50}^{-1}`$ kpc and has an absolute luminosity $`L_B\simeq 0.25L_B^{\ast }`$. The low element abundances we find, approximately 1/10 of solar, indicate that this galaxy apparently does not conform to the metallicity-luminosity relation discussed by Kobulnicky & Zaritsky (1999); in this respect it is more in line with present-day H II galaxies.

## 5 DUST DEPLETIONS

The pattern of relative abundances measured in the interstellar gas of distant galaxies responds to two effects: the selective depletion of refractory elements onto dust grains, and inherent differences from the solar system scale, reflecting a past history of star formation which may well have been different from that of the Milky Way disk. Our goal here is to separate these two effects and, by accounting for the first, gain an insight into the second. In this endeavour we are guided by the extensive body of data on element abundances in the stellar populations and the interstellar medium (ISM) of our Galaxy. Of the elements considered here, Zn, Cr, Fe, and Ni all track each other closely in Galactic stars with metallicities $`-2.0\lesssim [\mathrm{Fe}/\mathrm{H}]\lesssim 0.0`$ (e.g. Ryan, Norris, & Beers 1996).
Dust depletion, on the other hand, is more significant for Cr, Fe, and Ni than for the generally undepleted Zn (Savage & Sembach 1996). It follows from these considerations that we can take the ratios \[Zn/Cr\], \[Zn/Fe\], and \[Zn/Ni\] as indicative of the fractions of these refractory elements which are missing from the gas-phase, e.g. $$[\mathrm{Zn}/\mathrm{Cr}]=\mathrm{log}\left(\frac{f_{\mathrm{gas}}+f_{\mathrm{dust}}}{f_{\mathrm{gas}}}\right)$$ (1) where $`f_{\mathrm{gas}}`$ and $`f_{\mathrm{dust}}`$ are the fractions of Cr in gaseous and solid form respectively. (Recently, Howk & Sembach (1999) have drawn attention to the fact that ionization effects may boost the $`N`$(Zn<sup>+</sup>)/$`N`$(Cr<sup>+</sup>) ratio, thereby mimicking dust depletion. However, this is unlikely to be the case for the DLAs considered here. Such effects would also increase the $`N`$(Ni<sup>+</sup>)/$`N`$(Cr<sup>+</sup>) ratio by similar factors, contrary to our measurements. Based on the calculations by Howk & Sembach (1999), the observed \[Ni/Cr\] $`\stackrel{<}{}0.0`$ implies very low ionization parameters, as expected for clouds with $`N`$(H I)$`\stackrel{>}{}10^{20}`$ cm<sup>-2</sup>.) The boxes with the heavy outline in Figure 6 show the abundances measured in the six intermediate redshift DLAs listed in Table 5. Two conclusions can be drawn from inspection of this Figure. First, the depletions of Cr, Fe and, when available, Ni are roughly comparable, as is the case in the local ISM (Savage & Sembach 1996). (The revision in the oscillator strengths of the Ni II lines mentioned in §4 has brought this element into better agreement with Fe and Cr, resolving an apparent puzzle which had been noted by Lu et al. (1996) and in Paper III.) Second, the depletion levels are relatively modest, ranging from near-zero in Q0454+039 to a factor of $`\sim 4`$ in Q1351+318 ($`\sim 25`$% of Cr, Fe, and Ni in the gas). Such values are typical of DLAs in general (Pettini et al. 1997a), whereas in the disk of the Milky Way the same elements are depleted by more than one order of magnitude (see Figure 6 of Savage & Sembach 1996). It is unclear what lies at the root of this difference. It is interesting that the ISM of the Small Magellanic Cloud also exhibits only mild depletions (Welty et al. 1997), but metallicity alone is unlikely to be the explanation, because there is no trend in our data for a dependence of \[Zn/Cr\] on \[Zn/H\] (for example Cr and Fe are depleted by only a factor of $`\sim 2`$ in Q0058+019, where \[Zn/H\] is approximately solar). In any case, it appears that in most DLAs the balance between the incorporation of refractory elements into, and their release from, dust grains is shifted relative to the physical conditions prevailing in cool disk clouds of the Milky Way, so that on average there are roughly equal proportions of these elements in gas and dust. Note that this is unlikely to be the result of dust-related selection effects analogous to those mentioned earlier (§4). The total column densities of metals in the DLAs studied here are too low to produce significant dust reddening, even if 100% of the elements which make up the grains were in solid form. Finally on this topic, we point out that in two cases, Q0302$`-`$223 and Q0454+039, we can determine depletions separately for the two well resolved groups of components which make up the absorption lines (see Figures 4 and 5). The results are summarised in Table 6.
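Equation (1) inverts trivially once one notes that f<sub>gas</sub> + f<sub>dust</sub> = 1 for each element; a minimal sketch of the arithmetic behind the depletion levels quoted above (the [Zn/Cr] values are chosen to bracket the cases discussed, not re-measured):

```python
def gas_fraction(zn_over_cr):
    """Invert eq. (1), with f_gas + f_dust = 1: f_gas = 10**(-[Zn/Cr])."""
    return 10.0 ** (-zn_over_cr)

for zn_cr in (0.0, 0.3, 0.6):
    f_gas = gas_fraction(zn_cr)
    print(f"[Zn/Cr] = {zn_cr:.1f}  ->  f_gas = {f_gas:.2f}, f_dust = {1.0 - f_gas:.2f}")
# [Zn/Cr] = 0.6 returns f_gas = 0.25, i.e. the factor-of-4 depletion
# (25% of Cr, Fe, Ni left in the gas) quoted above for Q1351+318.
```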
Predictably, in Q0454+039 both components appear to be dust-free (since their sum is!). In Q0302$`-`$223, however, we see that the gas with the higher optical depth, at the adopted systemic redshift $`z_{\mathrm{abs}}`$ = 1.00945, has \[Zn/Cr, Fe, Ni\] $`\simeq +0.6`$, while in the component at $`v=-36`$ km s<sup>-1</sup> the same ratio is only $`+0.1`$. Reduced depletions in interstellar clouds with high velocities are commonplace in the local ISM, where they have been known for nearly 50 years (Routly & Spitzer 1952) and are understood to arise from grain destruction in interstellar shocks.

## 6 ELEMENT RATIOS

Two of the elements covered by our observations, Si and Mn, exhibit metallicity dependent ratios (relative to Fe) in Galactic stars, presumably because their nucleosynthesis follows different channels from that of the Fe-peak group. Furthermore, in the local ISM both elements show a degree of dust depletion. When overall depletion levels are high, Mn and Si are normally less underabundant than Fe, Cr, and Ni. However, such differences become less pronounced as depletions are reduced (Figure 6 of Savage & Sembach 1996); all elements tend to the same depletion as the overall depletion level approaches zero. If we restrict ourselves to cases where \[Zn/Cr\] $`\stackrel{<}{}0.3`$ (and therefore $`f_{\mathrm{dust}}\stackrel{<}{}f_{\mathrm{gas}}`$ in eq. (1), so that dust correction factors are $`\stackrel{<}{}2`$), we may be justified in assuming that to a first approximation all refractory elements are depleted by the same factor. (In future it should be possible to test this assumption by measuring the abundance of S, an undepleted $`\alpha `$-element. If the assumption is correct, we expect \[S/Si\] = \[Zn/Cr, Fe, Ni\].) The boxes with the light outline in Figure 6 show element abundances corrected for the dust fractions implied by the observed \[Zn/Cr\] ratios; in each case adopting the mean \[Zn/Cr, Fe, Ni\] ratio would produce very similar results. We take these values to represent the total abundances (gas + dust) of the element concerned and, having made this correction, we can now proceed to compare the abundances of Si and Mn in DLAs of different metallicities with analogous measurements in Galactic stars. The approach taken here is similar to, but more conservative than, the analysis by Vladilo (1998) who also used the ratio of Zn to Fe-peak elements to correct for dust depletion. The main difference is in the fact that Vladilo applied the correction to all DLAs for which relevant measurements were available, irrespective of the degree of depletion, with the assumption that dust in DLAs has the same composition as in the Milky Way ISM. In our opinion we are on safer ground by limiting ourselves to cases where $`f_{\mathrm{dust}}\stackrel{<}{}f_{\mathrm{gas}}`$ because our conclusions do not then depend sensitively on the unknown detailed make-up of interstellar dust at high redshift.

### 6.1 Silicon

The data for Si are displayed in Figure 7a, where the dots are the stellar measurements by Edvardsson et al. (1993) and Nissen & Schuster (1997). The general trend is for a mild increase in the relative abundance of Si at low metallicity; \[Si/Fe\] $`\simeq +0.2`$ to $`+0.3`$ at \[Fe/H\] $`\stackrel{<}{}-1`$. This is an example of the well known overabundance of the $`\alpha `$-elements which is generally attributed to the delayed production of additional Fe by Type Ia supernovae.
In this picture, the overall metallicity (as measured by \[Fe/H\]) at which the ratio \[$`\alpha `$/Fe\] begins to decline towards the solar value depends on the previous history of star formation. A galaxy which turns most of its gas into stars within $`\sim 1`$ Gyr (the generally assumed timescale for the explosion of Type Ia supernovae) would maintain an enhanced \[$`\alpha `$/Fe\] ratio while \[Fe/H\] grows to high values. Such a scenario may apply to the thick disk and the bulge of the Milky Way where recent observations seem to indicate a uniform enhancement of the $`\alpha `$-elements at all metallicities (Fuhrmann 1998; Rich 1999, private communication). At the other extreme, in a galaxy where star formation proceeds slowly, or in bursts separated by quiescent periods lasting more than 1 Gyr, there would be time for \[$`\alpha `$/Fe\] to decline to the solar value (or even lower) while \[Fe/H\] remains low (Gilmore & Wyse 1991; Pagel & Tautvaisviene 1998). Returning to Figure 7a, we now consider the evidence provided by DLAs. Triangles show our measurements from Table 5, corrected for dust as explained above and taking Zn as a proxy for Fe. (This approach is preferable to using our Fe abundances directly, because the latter are based on very weak transitions with oscillator strengths which may be less secure than those of the Zn II doublet. The systematic underabundance by $`\sim 0.2`$ dex of Fe relative to Cr in Figure 6 may well reflect the relative uncertainty in the $`f`$-values of the Cr II and Fe II lines.) We also searched the literature for other DLAs where the abundances of Zn, Cr, and Si have been measured and the ratio \[Zn/Cr\] is within a factor of two of solar (therefore implying correspondingly small dust corrections). We found four such cases, all from the recent compilation by Prochaska & Wolfe (1999); they are shown in Figure 7a as large filled dots, again with the assumption that Cr and Si are depleted by similar amounts. The most straightforward conclusion from Figure 7a is that the \[Si/Fe\] ratio in DLAs is not dissimilar from the values observed in Galactic stars. At least half of the DLA measurements fall well in line with the bulk of stellar data. There are also hints of differences, with two or three cases where \[Si/Fe\] appears to be approximately solar at \[Fe/H\] $`\stackrel{<}{}-1`$. While it is premature to make too much of these differences, it is probably fair to say that, unlike the situation for Galactic stars, one would not discern any trend in the \[Si/Fe\] ratio with metallicity from the DLA results alone. Thus, the data available at present are certainly consistent with the view that damped Lyman $`\alpha `$ absorbers are drawn from a varied population of galaxies which may have processed their interstellar gas at different rates prior to the time when we observe them. On the other hand, blanket statements to the effect that the chemical histories of DLAs are different from that of the Milky Way (e.g. Vladilo 1998) do not seem to be fully justified on the basis of the data for Si in Figure 7a.

### 6.2 Manganese

A major new study of the abundance of Mn has recently been completed by Nissen et al. (2000) who measured \[Mn/Fe\] in 119 Galactic F and G stars from the thin disk, the thick disk, and the halo, following the same method as Edvardsson et al. (1993) and making use of Hipparcos parallaxes where available.
The analysis takes into account hyperfine structure splitting of the Mn I lines, which is one of the complications involved in bringing together different data sets from previous studies. The measurements by Nissen et al. are reproduced as small dots in Figure 7b. There is an obvious drop in \[Mn/Fe\] with decreasing \[Fe/H\]; in the most metal-poor disk stars, at \[Fe/H\] $`\simeq -1`$, \[Mn/Fe\] $`\simeq -0.4`$. The physical processes responsible for the metallicity dependence of \[Mn/Fe\] have not yet been confidently identified. Observers have remarked on the fact that the \[Mn/Fe\] trend seems to mirror the overabundance of the $`\alpha `$-elements but in the opposite sense (e.g. McWilliam 1997), leading to the conjecture that Type Ia supernovae may be an important source of Mn (Samland 1998; Nakamura et al. 1999). On the other hand, nucleosynthesis calculations can reproduce the shape of the trend in Figure 7b with a metallicity-dependent yield of Mn in massive stars which, in the calculations by Timmes, Woosley, & Weaver (1995), overwhelms the Type Ia contribution. The filled triangles in Figure 7b show the values of \[Mn/Fe\] determined for five of the DLAs considered in this paper, again taking Zn as the proxy for Fe for the reason explained above. From a literature search we found only one other data point which could be included in our analysis, in the $`z_{\mathrm{abs}}`$ = 1.3726 DLA towards Q0935+417 where \[Zn/H\] $`=-0.80`$, \[Cr/H\] $`=-0.90`$, and \[Mn/H\] $`=-1.48`$ (Meyer, Lanzetta, & Wolfe 1995; Pettini et al. 1997b). The comparison between stars and DLAs is complicated by the fact that there is still some uncertainty regarding the correct solar system value of the abundance of Mn. Anders & Grevesse (1989) quote log (Mn/H) $`=-6.61`$ in the solar photosphere, but log (Mn/H) $`=-6.47`$ from meteorites (the latter value is the one used in the present analysis). This discrepancy persists in the more recent reappraisal of ‘Standard Abundances’ by Grevesse, Noels, & Sauval (1996). The uncertainty in (Mn/H) does not affect the stellar data of Nissen et al. (2000) which are all derived from differential measurements, but it does mean that there is a 0.14 dex ambiguity in referring the DLA values to the stellar scale. Thus the triangles and filled large dot in Figure 7b may need to be raised by 0.14 dex should the meteoritic abundance determination turn out to be in error. Even with this caveat, it does appear that Mn is underabundant in the galaxies producing damped Lyman $`\alpha `$ systems by factors similar to those measured in Galactic metal-poor stars. As in the case of Si, there are no obvious trends from the QSO absorption line data alone; \[Mn/Fe\] $`\simeq -0.4\pm 0.1`$ is an adequate description of the whole DLA sample available at present. It is intriguing that the underabundance of Mn seems to persist to metallicities as high as solar, although admittedly such a statement is at present based on only one measurement. If further cases are found in future, the hypothesis that the underabundance of Mn is due to a metallicity-dependent yield in massive stars would clearly run into difficulties. On the other hand, finding that Mn is low (\[Mn/Fe\] $`=-0.51`$) in one DLA (in Q1354+258) which shows no enhancement of Si at low metallicity (\[Fe/H\] $`=-1.61`$; see Figure 7), argues against the SN Type Ia interpretation, as also pointed out by Nissen et al. (2000). Possibly a third process, yet to be identified, is responsible for the metallicity dependence of the abundance of Mn.
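As a worked example of this bookkeeping (a sketch only, not the exact procedure of the original analyses), the Q0935+417 numbers quoted above give, with Zn as the Fe proxy and Mn corrected for dust like Cr:

```python
# Q0935+417 values quoted in the text (meteoritic solar Mn scale):
zn_h, cr_h, mn_h = -0.80, -0.90, -1.48

zn_cr = zn_h - cr_h              # mild depletion indicator, +0.10
mn_fe = (mn_h + zn_cr) - zn_h    # deplete Mn like Cr; Zn stands in for Fe
print(f"[Zn/Cr] = {zn_cr:+.2f}, dust-corrected [Mn/Fe] = {mn_fe:+.2f}")

# The 0.14 dex solar-scale ambiguity (meteoritic log(Mn/H) = -6.47 vs
# photospheric -6.61) would raise the DLA points accordingly:
print(f"photospheric scale: [Mn/Fe] = {mn_fe + 0.14:+.2f}")
```

Either value is broadly consistent with the \[Mn/Fe\] $`\simeq -0.4\pm 0.1`$ characterization of the DLA sample given above.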
## 7 CONCLUSIONS

We have measured element abundances in three galaxies which give rise to damped Lyman $`\alpha `$ systems at intermediate redshifts ($`z`$ = 0.61–1.01). The new data confirm the well established result that significantly smaller fractions of refractory elements are incorporated into dust grains in DLAs compared with interstellar clouds of similar column density in the disk of the Milky Way. Although the physical reasons underlying this effect are not fully understood, empirically it appears that the equilibrium between gas and dust in damped absorbers is shifted so that on average comparable proportions of the grain constituents are in gaseous and solid forms. We propose that in cases where dust depletions are less than a factor of about two, it is possible to account for the unobserved fractions of Si, Mn, Cr, Fe, and Ni by assuming that they are all depleted by approximately the same factor. This assumption then allows us to examine the dependence on metallicity of the intrinsic abundances of Si and Mn. We find that the abundances of both elements are broadly in line with values measured in metal-poor stars of the Milky Way. In about half of the cases considered Si is mildly enhanced relative to Fe-peak elements at the typically lower-than-solar metallicities of the DLAs, but there are also counterexamples where \[Si/Fe\] is more nearly solar even though \[Fe/H\] is less than 1/10 solar. The underabundance of Mn at low metallicities is possibly even more pronounced than in Galactic stars, and no DLA has yet been found with a solar \[Mn/Fe\]. However, for neither element is there a clear abundance trend with metallicity; in our view this is an indication that galaxies picked by damped Lyman $`\alpha `$ absorption have experienced a variety of star formation histories prior to the time when we observe them. In this respect chemical abundances give a picture consistent with the results from imaging studies (including new observations reported here) which have shown that galaxies associated with DLAs exhibit a wide range of morphologies, luminosities, and surface brightnesses. It is important to emphasize the preliminary nature of these conclusions which are based on the comparison of very few measurements in DLAs with a much larger body of stellar data. One of the lessons from stellar work is that there is considerable scatter, observational and intrinsic, in the relative abundances of different elements so that most trends only become apparent when a large set of observations has been assembled. As a field of study, abundance determinations in high redshift galaxies are some twenty years behind their counterparts in Galactic stars but they may well hold the key to clarifying some of the still unresolved issues on the origin of elements. It is a pleasure to acknowledge the competent assistance with the observations by the staff of the Keck Observatory; special thanks are due to Tom Barlow for generously providing his echelle extraction software. We are very grateful to Poul Nissen and YuQin Chen for allowing us to use their stellar Mn abundances in advance of publication, and to Jim Lawler and Steve Federman for the early communication of their measurements of the $`f`$-values of Ni II transitions. This work has benefited from several conversations with colleagues, particularly Ken’ichi Nomoto, Bernard Pagel, Jason X. Prochaska, and Sean Ryan. C. C. S.
acknowledges support from the National Science Foundation through grant AST 94-57446 and from the David and Lucile Packard Foundation.
# An ISOCAM Mid-IR Survey through Gravitationally Lensing Galaxy Clusters

## 1. Introduction

The high sensitivity of the CAMera on board the ISO satellite (Cesarsky et al. 1996) has allowed detection of distant $`z\sim 1`$ galaxies at mid-infrared (MIR hereafter) wavelengths. (ISO is an ESA project with instruments funded by ESA Member States, especially the PI countries: France, Germany, the Netherlands and the United Kingdom, and with the participation of ISAS and NASA.) These detections are crucial for understanding galaxy evolution, since theoretical models predict that dust obscuration can be quite an important effect in high redshift galaxies (e.g. Guiderdoni et al. 1997) and the dust-processed stellar radiation is re-emitted in the IR. The evidence emerging from various IR number count surveys (e.g. Aussel et al. 1999, Elbaz et al. 1999, Flores et al. 1999) indicates that strong IR emitters at $`z\sim 1`$ are an order of magnitude more numerous than the extrapolation from local IRAS counts indicates, when assuming no evolution. In this paper we report key results of a very deep ISOCAM survey we have conducted in three cluster fields. We took advantage of the gravitational lensing amplification by the cluster potential wells to detect the intrinsically faintest sources ever detected at 15 $`\mu `$m. A description of our survey and results can be found in Altieri et al. (1999) and Metcalfe et al. (1999), and an even more refined analysis is ongoing.

## 2. Observations

We observed the fields of three well-studied gravitationally lensing clusters (Abell 370, Abell 2218, and Abell 2390) during $`\sim 40`$ hours of ISO guaranteed time. The three fields were imaged in two (wide) ISOCAM filters, LW2 and LW3, centered at about 7 and 15 $`\mu `$m, respectively. We used a pixel-field-of-view of 3<sup>′′</sup> and micro-rastering with 7<sup>′′</sup> steps. This ensured good astrometric/positional results (essential for cross-correlating these mid-IR images with optical images, and for future observational follow-ups). The total area covered by our survey is $`\sim 56`$ arcmin<sup>2</sup>. For details of data reduction, source detection and photometry, and error and completeness estimates, we refer the reader to Altieri et al. (1998, 1999). For reasons of space we here present only results obtained with the LW3 filter, the longest wavelength ISOCAM filter.

## 3. Results

Gravitational lensing has two (equally important) effects: it suppresses confusion because of surface area dilation, and it amplifies the flux from background sources (see, e.g., McBreen & Metcalfe 1987, Paczynski 1987). Clearly, detailed modelling of the lens is needed in order to recover the intrinsic fluxes of the lensed galaxies, and their space density. For this reason, we accurately chose our targets among the best studied clusters where gravitational lensing has been detected. We achieve apparent 5 $`\sigma `$ sensitivities (i.e. ignoring the effect of lensing) of 67 $`\mu `$Jy at 15 $`\mu `$m in the deepest field (A2390), and 80% completeness levels (before accounting for lensing) of 100, 250 and 500 $`\mu `$Jy in the fields of A2390, A2218 and A370, respectively. In order to correct for lensing, we use detailed models of the three cluster lenses (see Kneib et al. 1996, Bézecourt et al. 1999). The highest lensing gains are found to be $`\sim 10`$. The intrinsically faintest detected source (i.e.
after correcting for lensing amplification) is an 18 $`\mu `$Jy source, lensed to an apparent 80 $`\mu `$Jy source (a 6 $`\sigma `$ detection). In total, 71 MIR sources are detected at 5 $`\sigma `$ over the three cluster fields. Most of them can be identified with (often rather faint) visual counterparts. While they rarely correspond to the optical arc(let)s, there are cases of impressive correlations between optical and MIR morphologies (see Figures 1, 2, and 3). On the basis of spectroscopic and photometric redshifts, and of redshift estimates from lensing inversion techniques, we find that almost all 15 $`\mu `$m sources are behind the cluster lens. We estimate the number counts at 15 $`\mu `$m, after correcting for incompleteness (a non-uniform correction over the field, due to the variable lensing amplification and distortion; see Altieri et al. 1999 and Metcalfe et al. 1999). Our number counts are in good agreement with those derived in empty fields (the Lockman hole, Elbaz et al. 1999, and the Hubble Deep Field, Aussel et al. 1999). Moreover, the number counts in our deepest field – reaching deeper than any other survey – show no sign of flattening, the slope being close to $`-1.5\pm 0.3`$ down to 35 $`\mu `$Jy. Fitting such a steep slope requires strong evolution models. Integrating the 15 $`\mu `$m number counts over the whole flux range (0.03–50 mJy) covered by ISOCAM surveys (including ours), we estimate the resolved background MIR light at 15 $`\mu `$m: $`3.3\pm 1.3\times 10^{-9}`$ W m<sup>-2</sup> sr<sup>-1</sup>. This value is very close to the upper limit set by the gamma-CMBR photon-photon pair production (Stanev & Franceschini 1998), and consistent with the predictions from the model of Tan et al. (1999).

## 4. Conclusions

We have detected a population of strong MIR emitters at high redshifts which cannot be fitted to the local IRAS counts with no-evolution models. The nature of these faint MIR sources is still unclear. They could be dust-enshrouded AGN’s or dust-enshrouded starbursts, or both. Recently, Roche & Eales (1999) and Tan et al. (1999) have tried fitting the IR counts by invoking a population of starburst galaxies at high $`z`$ created by the numerous galaxy mergings predicted in a hierarchical clustering scenario. In order to elucidate this issue, we have recently performed high spatial resolution observations with ISAAC at the VLT for two of these sources, and submitted an XMM proposal for distinguishing the AGN’s by their strong X-ray emission. Observations have also been performed at the CFHT to help determine K-band morphology for several sources. Counts of MIR sources allow us to estimate a MIR background which is slightly less than 50% of the I-band background. Extrapolating the MIR background to the far-IR, assuming typical galaxy spectral energy distributions, leads to the conclusion that the total cosmic IR background is actually larger than the optical background. Dust processing of stellar radiation is therefore much more important in distant galaxies than locally. MIR surveys are therefore essential for tracing the evolution of galaxies at high redshifts. Great caution must be taken when trying to infer the global star formation history of the Universe from UV luminosities, as these must be seriously affected by extinction.

## References

Aussel, H., Cesarsky, C., Elbaz, D., Starck, J.-L. 1999, A&A, 342, 313
Altieri, B., et al. 1998, “ISOCAM Faint Source Report”
Altieri, B., Metcalfe, L., Kneib, J.-P., et al. 1999, A&A, 343, L65
Bézecourt, J., Kneib, J.-P., Soucail, G., Ebbels, T. 1999, A&A, 347, 21
Cesarsky, C., et al. 1996, A&A, 315, L309
Elbaz, D., et al. 1999, in “The Universe as seen by ISO”, eds. P. Cox, M.F. Kessler, astro-ph/9902229
Flores, H., et al. 1999, A&A, 343, 389
Guiderdoni, B., et al. 1997, Nature, 390, 257
Kneib, J.-P., et al. 1996, ApJ, 471, 643
McBreen, B., Metcalfe, L. 1987, Nature, 330, 348
Metcalfe, L., Altieri, B., McBreen, B., et al. 1999, in “The Universe as seen by ISO”, eds. P. Cox, M.F. Kessler, astro-ph/9901147
Paczynski, B. 1987, Nature, 325, 572
Roche, N., Eales, S.A. 1999, MNRAS, 307, 111
Stanev, T., Franceschini, A. 1998, ApJ, 494, L159
Tan, J.C., Silk, J., Balland, C. 1999, ApJ, 522, 579
# Numerical Study of Aging Phenomena in Short-Ranged Spin Glasses

## 1 Introduction

Spin glasses (SG) exhibit characteristic slow dynamics below the SG transition temperature. Recently such slow dynamics, in particular aging phenomena, has attracted much attention in both experimental and theoretical studies. Several attempts have been made so far to explain the slow dynamics. There are mainly two distinct phenomenological approaches; one is a phase-space approach, in which the dynamics is described by a diffusion in the hierarchically constructed phase space, inspired by the mean-field theory which suggests a multi-valley structure of the phase space. The other is a real-space picture based on the scaling theory in which low-lying excitations are attributed to connected clusters reversed from one of two ground states. There has been, however, no satisfactory description from a microscopic model, except for the dynamical mean-field theory. In this paper we present the results on the non-equilibrium dynamics obtained by large-scale Monte Carlo (MC) simulations on short-range Edwards-Anderson (EA) Ising SG models. Our analyses of the obtained data are based on the droplet theory. We believe that, even when the phase-space approach may give us a correct description for the slow dynamics, what really occurs in the real space is also indispensable for a thorough understanding of the aging phenomena. Let us explain briefly the droplet theory. According to the theory, the aging process is described by coarsening of domain walls, which is driven by successive flipping of thermally activated droplets. During isothermal aging up to waiting time $`t_w`$ after quench, domains with the mean size $`R(t_w)`$ separating different pure states have grown up. Within each domain, small droplets of size $`L\lesssim L(\tau )\ll R(t_w)`$ are thermally fluctuating within a time scale of $`\tau `$ as in equilibrium. The typical value of their excitation gap $`F_L^{\mathrm{typ}}`$ scales as $$F_L^{\mathrm{typ}}\sim \mathrm{{\rm Y}}(L/L_0)^\theta ,$$ (1.1) where $`\mathrm{{\rm Y}}`$ is the stiffness constant and $`L_0`$ is a microscopic length scale, and that of the free-energy barrier $`B_L^{\mathrm{typ}}`$ also scales as $$B_L^{\mathrm{typ}}\sim \mathrm{\Delta }(L/L_0)^\psi ,$$ (1.2) where $`\mathrm{\Delta }`$ is a characteristic free-energy scale. As compared with the equilibrium, some droplets which touch the domain wall with the length scale $`R(t_w)`$ could reduce their excitation gap from (1.1). This effect is estimated by the droplet theory as the reduction of the averaged excitation gap which is given by $$F_{L,R}^{\mathrm{typ}}=\mathrm{{\rm Y}}_{\mathrm{eff}}(L/L_0)^\theta ,$$ (1.3) with the effective stiffness constant $$\mathrm{{\rm Y}}_{\mathrm{eff}}=\mathrm{{\rm Y}}\left(1-c_v(L/R)^{d-\theta }\right),$$ (1.4) where $`c_v`$ is a numerical constant. Physical quantities such as the spin autocorrelation function are estimated in terms of such length scales by taking into account statistical weights of the droplet excitations appropriately. The growth law of the length scales of the domain $`R(t_w)`$ and the droplet $`L(t)`$ is given by $$R(t),L(t)\sim \left(\frac{T}{\mathrm{\Delta }}\mathrm{ln}(t/\tau _0)\right)^{1/\psi },$$ (1.5) where $`\tau _0`$ is a microscopic time scale. One notices that the droplet theory for aging phenomena consists of two almost independent steps; the scaling argument based on the typical length scales of $`R(t_w)`$ and $`L(\tau )`$ and the growth law of these length scales.
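These relations are compact enough to put in executable form; the sketch below evaluates eqs. (1.1), (1.4) and (1.5) with placeholder parameters (the values of $`\mathrm{{\rm Y}}`$, $`\mathrm{\Delta }`$, $`\theta `$, $`\psi `$ and $`c_v`$ are illustrative, not fitted):

```python
import numpy as np

# Illustrative droplet-theory parameters (placeholders, not fitted values)
Y, Delta, theta, psi = 1.0, 1.0, 0.82, 1.0
L0, tau0, T, d = 1.0, 1.0, 1.2, 4

def F_typ(L):
    """Eq. (1.1): typical droplet excitation gap."""
    return Y * (L / L0) ** theta

def Y_eff(L, R, c_v=1.0):
    """Eq. (1.4): stiffness reduced for droplets touching a domain wall."""
    return Y * (1.0 - c_v * (L / R) ** (d - theta))

def R_of_t(t):
    """Eq. (1.5): logarithmic growth of the domain (or droplet) size."""
    return L0 * ((T / Delta) * np.log(t / tau0)) ** (1.0 / psi)

for t in (1e2, 1e4, 1e6):
    R = R_of_t(t)
    print(f"t = {t:.0e}:  R(t) = {R:5.2f},  Y_eff(L=1) = {Y_eff(1.0, R):.3f}")
```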
The main purpose of the present work is to test these two steps separately. The present paper is organized as follows: in the next section, after introducing the model system studied, the time evolution of the length scale of the domain wall is discussed. The results of the spin-autocorrelation function in the quasi-equilibrium regime of isothermal aging are presented in Sect. 4.

## 2 Model and Method

We focus on the four-dimensional (4D) Ising SG model, because its static critical properties have been established in the sense that a SG phase transition occurs at finite temperature with a rigid order parameter. The 4D EA Ising SG model is defined by the Hamiltonian, $$\mathcal{H}=-\sum _{\langle ij\rangle }J_{ij}S_iS_j,$$ (2.1) where the sum runs over nearest-neighbor sites and the Ising variables $`S_i`$ are defined on a hypercubic lattice with periodic boundary conditions. The interactions are bimodal variables ($`J_{ij}=\pm 1`$) distributed randomly with equal probability. The simulation method is the standard single-spin-flip Monte Carlo (MC) method using two-sublattice dynamics with the heat-bath transition probability. Using the MC method, we have simulated aging processes after a rapid quench from $`T=\infty `$ to the SG phase. The system size studied is mainly $`L=24`$, while the size $`L=32`$ is partly studied. We have found no significant difference in the data of $`L=24`$ and $`32`$.

## 3 Growth Law of Domain Size

In order to extract a length scale characterizing the growth of ordering, we calculate the spatial replica correlation function in off-equilibrium, defined as $$G(r,t)=\sum _i\langle S_i^{(\alpha )}(t)S_i^{(\beta )}(t)S_{i+r}^{(\alpha )}(t)S_{i+r}^{(\beta )}(t)\rangle ,$$ (3.1) where $`\alpha `$ and $`\beta `$ denote the replica indices which are updated independently and with different initial spin configurations. The bracket $`\langle \cdots \rangle `$ denotes the average over independent bond realizations. In our simulations, only one MC sequence is performed for each random bond configuration. We extract the mean domain size $`R(t)`$ by directly fitting $`G(r,t)`$ to an exponential form. In Fig. 2 we show the time dependence of $`R(t)`$ at the SG transition temperature $`T_\mathrm{c}(\simeq 2.0J)`$ and below. Just at $`T_\mathrm{c}`$, the length scale is expected to grow as a power law with the dynamical critical exponent $`z`$, $`R(t)\propto t^{1/z}`$, irrespective of the physical picture underlying the ordered phase. As expected, it is found that the length scale follows a power law. The exponent $`z`$ is estimated to be $`4.98(5)`$, which is consistent with that of the previous work. Below $`T_\mathrm{c}`$, $`R(t)`$ grows with time slower and slower as temperature decreases. We try to see the crossover between the critical fluctuation and the slow dynamics inherent in the low-temperature phase. In off-equilibrium, both at the critical and the off-critical temperatures, the length scale would exhibit a power law in the short length and time regime where the critical fluctuation dominates the dynamics. We assume that such microscopic length $`R_0`$ and time $`\tau _0`$ are the correlation length and time associated with the critical fluctuation in equilibrium, respectively. Thus, we propose a scaling form $$R(t)/R_0=g(t/\tau _0),$$ (3.2) with $`R_0=|T-T_\mathrm{c}|^{-\nu }`$ and $`\tau _0=|T-T_\mathrm{c}|^{-z\nu }`$. The scaling plot of $`R(t)`$ is shown in Fig. 2. As expected from a standard scaling theory of critical phenomena, the scaling function $`g(x)`$ for smaller $`x`$ exhibits a power law $`x^{1/z}`$ associated with the critical temperature.
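Before turning to the long-time behavior, we note that the two numerical ingredients used here, the replica correlator (3.1) and the exponential fit that defines $`R(t)`$, can be sketched as follows (a toy version for a single configuration pair; the actual analysis averages over many bond realizations):

```python
import numpy as np

def replica_correlation(Sa, Sb, rmax):
    """Eq. (3.1) along one lattice axis: correlator of q_i = S_i^(a) S_i^(b)."""
    q = Sa * Sb
    return np.array([np.mean(q * np.roll(q, -r, axis=0)) for r in range(1, rmax + 1)])

def domain_size(G):
    """Define R(t) by a least-squares fit of G(r,t) to exp(-r/R)."""
    r = np.arange(1, len(G) + 1)
    slope = np.polyfit(r, np.log(G), 1)[0]
    return -1.0 / slope

# toy checks: uncorrelated replicas give G ~ 0, and an exponential
# input correlator returns the length scale it was built with
rng = np.random.default_rng(0)
Sa = rng.choice([-1, 1], size=1000)
Sb = rng.choice([-1, 1], size=1000)
print(replica_correlation(Sa, Sb, rmax=3))

G_toy = np.exp(-np.arange(1, 9) / 3.0)
print(f"fitted R = {domain_size(G_toy):.2f}  (input: 3.0)")
```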
We find a significant deviation from the power law at longer times, suggesting that the characteristic slow dynamics of the SG phase takes place there. In fact, the functional form is not incompatible with a power law of $`\mathrm{ln}(t)`$ predicted by the droplet theory (1.5). It is noted that the strong temperature dependence of $`R(t)`$ shown in Fig. 2 can be almost explained by introducing the microscopic units $`R_0`$ and $`\tau _0`$ associated with the critical fluctuation, while it is not sure whether the asymptotic form at longer times is also described by a universal scaling function or not.

## 4 Scaling analysis in quasi-equilibrium

Recent studies have revealed that the off-equilibrium dynamics in the SG is separated into two characteristic time regimes. One is a short-time regime, called “quasi-equilibrium regime”, and the other is a long-time regime, called “aging regime”. A typical observable to see such two time regimes is the spin auto-correlation function $$C(\tau ;t_w)=\frac{1}{N}\sum _iS_i(t_w)S_i(\tau +t_w),$$ (4.1) where $`t_w`$ denotes a waiting time after the rapid quench. Our interest is in its behavior in the quasi-equilibrium regime, namely $`\tau \ll t_w`$. Based on the droplet argument using the effective stiffness constant (1.4), the behavior of $`C(\tau ;t_w)`$ is explicitly given by $$C(\tau ;t_\mathrm{w})=C_{\mathrm{eq}}(\tau )+\frac{c}{\mathrm{{\rm Y}}}\frac{T}{(L(\tau )/L_0)^\theta }\left(\frac{L(\tau )}{R(t_\mathrm{w})}\right)^{d-\theta }+\cdots ,$$ (4.2) with the equilibrium part $`C_{\mathrm{eq}}(\tau )`$ $$C_{\mathrm{eq}}(\tau )=q_{\mathrm{EA}}+\frac{A}{(L(\tau )/L_0)^\theta },$$ (4.3) where $`q_{\mathrm{EA}}`$ is the EA order parameter and $`c`$ and $`A`$ are numerical constants. This gives us an extrapolation form of the large $`\tau `$ limit, namely a way of determining the EA order parameter. An empirical form $`C(t)=q_{\mathrm{EA}}+a/t^\alpha `$ has been used frequently for estimating $`q_{\mathrm{EA}}`$ from the spin autocorrelation function. This is true only if the time dependence of the length scale $`L(t)`$ in (4.3) is a power law. As seen in the last section, however, the observed length scale $`R(t)`$ in this model exhibits the crossover from the critical power law to slower growth at large $`t`$. We observe the autocorrelation function $`C(\tau ;t_w)`$ at $`T/J=1.2`$, well below $`T_\mathrm{c}`$, and check the scaling form (4.2) by making use of the $`R(t)`$ estimated through $`G(r,t)`$ in the last section. According to the droplet theory, both $`R(t_w)`$ and $`L(\tau )`$ exhibit the same time dependence. Also, the microscopic length scale $`L_0`$ is assumed to be the same as $`R_0`$. For fixed $`\tau `$, the autocorrelation function is expressed as a function of $`R(t_w)`$. Using the estimated value of $`\theta (=0.82)`$, a simple linear fitting gives the equilibrium autocorrelation function $`C_{\mathrm{eq}}(\tau )`$ in the large $`t_w`$ limit. Collecting $`C_{\mathrm{eq}}(\tau )`$ for each $`\tau `$ thus extracted, we confirm directly the droplet prediction (4.3), as shown in Fig. 4, and determine the equilibrium EA order parameter. The value of $`q_{\mathrm{EA}}`$, estimated to be $`0.58(1)`$, is compatible with the recent estimation from static MC simulation. Next we discuss the correction to the equilibrium limit. The expression (4.2) suggests that the correction term $`\mathrm{\Delta }C(\tau ;t_w)=C(\tau ;t_w)-C_{\mathrm{eq}}(\tau )`$ multiplied by $`L^\theta (\tau )`$ becomes only a function of $`L(\tau )/R(t_w)`$. As shown in Fig.
4, we confirm this scaling prediction. For the limit of $`L(\tau )/R(t_w)\ll 1`$, the scaling function shows $`\left(L(\tau )/R(t_w)\right)^{d-\theta }`$, consistent with (4.2).

## 5 Discussion and Summary

Let us compare the present results with those obtained in 3D Ising SG models. In three dimensions, results on the growth law by numerical simulations as well as experiments are well fitted to a power law as $$R(t)\sim t^{1/z(T)},$$ (5.1) where the exponent $`1/z(T)`$ is proportional to temperature $`T`$ and continuously connects with the dynamical critical exponent $`z`$ at $`T_\mathrm{c}`$. It is not clear yet how to interpret physically such a power law with a temperature-dependent exponent $`1/z(T)`$. One of the possibilities is the crossover observed in the present work on the 4D Ising SG model. Certainly it is worth examining the crossover effect of the critical fluctuation in the 3D model. On the other hand, the scaling argument in terms of the length scales $`R(t_w)`$ and $`L(\tau )`$ successfully explains the aging behavior of the correlation function in the quasi-equilibrium regime also in three dimensions. Recently it has been confirmed not only in the isothermal but also in temperature-shift aging processes, which are basic experimental procedures frequently used. To conclude, we have investigated non-equilibrium dynamics after the temperature quench from infinity to the SG phase in the 4D Ising SG model using Monte Carlo simulations. We have studied the growth law of the mean domain size by analyzing the time evolution of the spatial replica correlation functions. We have found that the growth law shows a crossover from the critical regime to the low-temperature one and that its main temperature dependence within the time range of our simulation can be explained by this crossover. We have also analyzed the spin autocorrelation function in the quasi-equilibrium regime. The off-equilibrium correction of the correlation function to its equilibrium limit, namely the violation of the time translational invariance in the quasi-equilibrium limit, can be explained by the scaling argument in terms of the characteristic length scales $`R(t_w)`$ and $`L(\tau )`$, as observed already in the 3D Ising SG model. These results strongly suggest that the analysis consisting of the two almost independent steps is promising for understanding the aging phenomena in low-dimensional SG systems.

## Acknowledgements

The present simulations have been performed on Fujitsu VPP-500/40 at the Supercomputer Center, Institute for Solid State Physics, the University of Tokyo.
# ON THE OSCILLATIONS OF THE TENSOR SPIN STRUCTURE FUNCTION

A.V. EFREMOV, O.V. TERYAEV

Joint Institute for Nuclear Research, 141980, Dubna, Russia

The tensor polarization is known to be a specific property of particles with spin larger than $`\frac{1}{2}`$. The deuteron is one of the most fundamental spin-$`1`$ particles, and the effects of its tensor polarization are intensively studied at low and intermediate energies. Such effects should also be manifested for deep inelastic scattering (DIS) off a deuteron target, resulting in new structure functions. The DIS off longitudinally polarized deuterons has recently been studied by the Spin Muon Collaboration in order to extract the neutron spin structure function $`g_1^n`$. The longitudinally polarized deuteron target automatically receives the tensor polarization as well, except in the very special case when the probability of zero spin projection is just $`1/3`$. It allows, in principle, the tensor polarization effects to be studied by using the same data. In this report, a simple description of the tensor polarization in DIS is presented. We also recall and generalize a rather old result; namely, the quark contribution to the tensor spin structure function should manifest an oscillating behavior. Its experimental study would allow one to discriminate between the deuteron components with different spins. The inclusive differential cross section for DIS off a spin-1 target has the form: $$\sigma =p_+\sigma _++p_-\sigma _-+p_0\sigma _0,$$ (1) where $`p`$’s are the probabilities of the corresponding projections and $`\sigma `$’s are the cross sections for pure states. Usually, one uses instead of $`p`$’s the (vector) polarization $$P=p_+-p_-$$ (2) and the tensor polarization (alignment) $$T=p_++p_--2p_0=1-3p_0.$$ (3) Using (2), (3) one can rewrite (1) in the form $$\sigma =\overline{\sigma }(1+PA+\frac{1}{2}TA_T)$$ (4) with the (vector) asymmetry $$A=\frac{\sigma _+-\sigma _-}{2\overline{\sigma }},$$ (5) and the tensor asymmetry $$A_T=\frac{\sigma _++\sigma _--2\sigma _0}{3\overline{\sigma }}.$$ (6) The expression for the spin-averaged cross section is obvious: $$\overline{\sigma }=\frac{1}{3}(\sigma _++\sigma _-+\sigma _0).$$ (7) The usually measured asymmetry for the target with polarization “up” (parallel to the beam direction) and “down” is $$A^{exp}=\frac{\sigma ^{up}-\sigma ^{down}}{P(\sigma ^{up}+\sigma ^{down})}=\frac{A}{1+\frac{1}{2}TA_T},$$ (8) i.e. it has a correction due to the tensor asymmetry. The latter, however, could be measured independently by using a nonpolarized target: $$A_T=\frac{\sigma ^{up}+\sigma ^{down}-2\overline{\sigma }}{T\overline{\sigma }}.$$ (9) In the approximation of noninteracting proton and neutron, one easily gets $`\sigma _+^d=\sigma _+^p+\sigma _+^n;`$ (10) $`\sigma _-^d=\sigma _-^p+\sigma _-^n;`$ (11) $`\sigma _0^d={\displaystyle \frac{1}{2}}(\sigma _+^p+\sigma _-^n)+{\displaystyle \frac{1}{2}}(\sigma _+^n+\sigma _-^p)=\overline{\sigma }^p+\overline{\sigma }^n.`$ (12) As a result, one has $`\overline{\sigma }^d=\overline{\sigma }^p+\overline{\sigma }^n;`$ (13) $`A^d=A^p{\displaystyle \frac{\overline{\sigma }^p}{\overline{\sigma }^d}}+A^n{\displaystyle \frac{\overline{\sigma }^n}{\overline{\sigma }^d}};`$ (14) $`A_T^d=0,`$ (15) where $`\overline{\sigma }^{p,n}=\frac{1}{2}(\sigma _+^{p,n}+\sigma _-^{p,n})`$ and $`A^{p,n}=(\sigma _+^{p,n}-\sigma _-^{p,n})/2\overline{\sigma }^{p,n}`$.
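Equations (1)-(15) are easy to check numerically; the toy sketch below (with arbitrary pure-state cross sections, not data) verifies the decomposition (4)-(7) and the free-nucleon result (15):

```python
def asymmetries(s_plus, s_minus, s_zero):
    """Eqs. (5)-(7): mean cross section, vector and tensor asymmetries."""
    s_bar = (s_plus + s_minus + s_zero) / 3.0
    A = (s_plus - s_minus) / (2.0 * s_bar)
    A_T = (s_plus + s_minus - 2.0 * s_zero) / (3.0 * s_bar)
    return s_bar, A, A_T

# arbitrary illustrative proton/neutron pure-state cross sections
sp_p, sm_p = 3.0, 2.0      # proton sigma_+, sigma_-
sp_n, sm_n = 2.5, 1.5      # neutron sigma_+, sigma_-

# free-nucleon deuteron, eqs. (10)-(12)
d_plus = sp_p + sp_n
d_minus = sm_p + sm_n
d_zero = 0.5 * (sp_p + sm_n) + 0.5 * (sp_n + sm_p)

s_bar, A, A_T = asymmetries(d_plus, d_minus, d_zero)
print(f"A^d = {A:.3f},  A_T^d = {A_T:.2e}")   # A_T^d vanishes identically, eq. (15)
```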
This means that the tensor asymmetry plays a very important role: it measures the effect of deuteron boundness. Note that the lepton beam polarization is inessential here. In fact, the correlation between tensor and vector polarizations is related to the antisymmetric part of the density matrix whose hermiticity results in a pure imaginary factor. It should be compensated by the imaginary phase of the scattering amplitude, absent in DIS (the only relevant momentum is spacelike). The description of the tensor spin structure function in the parton model should naively be very different in the case of the partons with different spins – quarks and gluons. Here the sum rules for the first two moments are proposed, which are just the consequence of this difference. They also discriminate between hadronic components of the deuteron with a different spin. Their validity and violation should provide important information about the nucleon and deuteron spin structure. It should be mentioned that the sum rules of interest were proposed already in 1982. Although the high-$`p_T`$ vector meson in the final state was considered, the quark contribution to the tensor spin structure function was defined just for the initial state case. Let us briefly recall this definition. The quark contribution to the part of the cross section proportional to $`T`$ can be expressed as $$\sigma _q=\int d^4z\,\mathrm{tr}[E_{\mu ^2}\gamma ^\nu ]\langle P,S|\overline{\psi }(0)\gamma _\nu \psi (z)|P,S\rangle _{\mu ^2}$$ (16) Here $`E`$ is the short-distance part, $`\mu ^2`$ being its IR regularization parameter, the same as the UV one for the matrix element. The Taylor expansion of the latter results in the obvious parton formula $$\sigma _q=\int _0^1dx\,\mathrm{tr}[\widehat{P}E_{\mu ^2}(xP)]C_{\mu ^2}^T(x)s^{zz}$$ (17) with the moments of the quark tensor spin structure distribution related to the matrix elements of the local composite operators $$\langle P,S|\overline{\psi }(0)\gamma ^\nu D^{\nu _1}\cdots D^{\nu _n}\psi (0)|P,S\rangle _{\mu ^2}=i^nM^2S^{\nu \nu _1}P^{\nu _2}\cdots P^{\nu _n}\int _0^1C_q^T(x)x^ndx.$$ (18) Here $`S^{\mu \nu }`$ is the traceless symmetric tensor providing the covariant description of the vector meson alignment. In hard processes, the single component $`S^{\mu \nu }=s^{zz}P^\mu P^\nu /M^2`$ dominates, where $`s^{ij}`$ is the Cartesian spin-tensor in the target rest frame, the latter being directly related to $`T`$. This is quite analogous to the dominance of the longitudinal vector polarization and the kinematical suppression of the transverse one. Only this dominant contribution was considered in the paper. The full analysis, however, also leads to the identification of the dominant structure function. The zero sum rule for each quark flavor $`i`$ follows immediately, just because the matrix element for $`n=0`$ vanishes: $$\int _0^1C_i^T(x)dx=0.$$ (19) This “naive” derivation is quite analogous to that of the Burkhardt-Cottingham sum rule in QCD. The problem of its possible violation is still discussed. However, there are solid arguments against such a violation in the scaling region. Note that the $`n=1`$ operator is just the quark contribution to the energy-momentum tensor.
Taking into account the contributions of all flavors and gluons, one should get the $`S^{\mu \nu }`$- and $`\mu ^2`$-independent matrix element fixed by the energy-momentum conservation: $$\sum _{q,g}\langle P,S|T_i^{\mu \nu }|P,S\rangle _{\mu ^2}=2P^\mu P^\nu .$$ (20) However, quark and gluon contributions may, in principle, depend on $`S^{\mu \nu }`$: $`\sum _q\langle P,S|T_i^{\mu \nu }|P,S\rangle _{\mu ^2}=2P^\mu P^\nu (1-\delta (\mu ^2))+2M^2S^{\mu \nu }\delta _1(\mu ^2)`$ (21) $`\langle P,S|T_g^{\mu \nu }|P,S\rangle _{\mu ^2}=2P^\mu P^\nu \delta (\mu ^2)-2M^2S^{\mu \nu }\delta _1(\mu ^2)`$ (22) This natural parametrization results in the gluonic correction to the $`n=1`$ zero sum rule: $$\sum _q\int _0^1C_i^T(x)x\,dx=\delta _1(\mu ^2).$$ (23) The gluons contribute to the $`n=0`$ sum rule as well. There is an additional contribution to the cross section, equal to the convolution of the gluon coefficient function with the gluon tensor distribution. The latter may have a non-zero first moment, contrary to the quark one: $$\langle P,S|O_g^{\nu \nu _1}|P,S\rangle _{\mu ^2}=M^2S^{\nu \nu _1}\int _0^1C_g^T(x)dx.$$ (24) Here $`O_g^{\mu \nu }`$ is the (renormalized) local gluonic operator. It may be constructed either from the gauge-invariant field strength or the gluon field itself in the “physical” axial gauge. This contribution, however, is suppressed by $`\alpha _s`$ entering in the coefficient function. The gluon contribution to the deuteron tensor structure function, associated with the box diagram, was calculated a few years ago. It is similar to the gluon contribution to the linearly polarized photon structure function. The authors therefore claimed that the deuteron should be aligned perpendicular to the beam. This statement naively contradicts the kinematical dominance of the longitudinal alignment mentioned above. However, the tensor polarizations in the mutually orthogonal directions are not independent because the tensor $`S^{\mu \nu }`$ is traceless. The sum of $`\rho _{00}^i`$, the zero spin projection probabilities, over three orthogonal directions $`i`$ is equal to unity. If the target is aligned along the beam direction, the rotational symmetry leads to: $$\rho _{00}^L+2\rho _{00}^T=1.$$ (25) The transverse alignment is absent ($`\rho _{00}^T=1/3`$) if and only if the longitudinal one is also absent. Note that the zero sum rule appeared to be valid, provided the unobserved long-range singularity is taken into account. Such a possibility was also considered in the case of the Burkhardt-Cottingham sum rule. The sum rule (19) means, of course, that $`C_T`$ should change sign somewhere. If $`\delta _1`$ is numerically small, the oscillations of the singlet tensor distributions are even more dramatic. It crosses zero at least at two points. It is interesting that the model calculations of the tensor distribution really manifest an oscillating behavior. The parton model analysis shows that its violation is caused by the deuteron quadrupole structure. It is possible to describe such a behavior by considering a more conservative approach to the deuteron. Formula (16) is still valid if the quarks and gluons are replaced by the hadrons (nucleons and mesons). This is a straightforward generalization of the operator product expansions using the basis consisting of hadronic local operators. One may conclude that the zero sum rule is valid, as far as nucleonic operators (analogous in this sense to the quark ones) are considered. It is also valid for the (pseudo)scalar operators constructed from pion fields.
However, it is obviously violated by the operators constructed from vector meson fields “substituting” the gluon ones in this approach. It is very interesting to study the zero sum rule experimentally. The tensor structure function can be measured, as has been mentioned above, by the Spin Muon Collaboration. However, as it is probably numerically small in comparison with $`g_1`$, one may expect to obtain some restrictions from above only. Nevertheless, even such a result would be important as a check of the validity of the free nucleon approximation (it is used in order to extract the neutron spin structure function). Moreover, one should take into account the tensor asymmetry in order to extract the vector one in a self-consistent way. One cannot exclude the enhancement of the tensor structure function in some kinematical region, making possible its measurement by SMC. If it should happen in the low-$`x`$ region, the first moment of $`g_1^n`$, entering in the Bjorken sum rule, may be affected significantly. To study the tensor spin structure, more statistics is required. This probably should be done by the HERMES collaboration at HERA and, possibly, by the European Electron Facility. It seems possible to do this also at CEBAF, simultaneously with the already proposed study of the generalized Gerasimov–Drell–Hearn sum rule. We conclude that the experimental study of the first two moments of the deuteron tensor spin structure function can provide some information about its constituents with different spins. It is a pleasure to thank M. Anselmino, L. Kaptari, K. Kazakov and E. Leader for useful discussions. This work was supported in part by the Russian Foundation for Fundamental Researches, Grant No. 93-02-3811.
# Injection statistics simulator for dynamic analysis of noise in mesoscopic devices

## Abstract

We present a model for electron injection from thermal reservoirs which is applied to particle simulations of one-dimensional mesoscopic conductors. The statistics of injected carriers is correctly described from nondegenerate to completely degenerate conditions. The model is validated by comparing Monte Carlo simulations with existing analytical results for the case of ballistic conductors. An excellent agreement is found for average and noise characteristics; in particular, the fundamental units of electrical and thermal conductances are exactly reproduced.

The systematic trend of reduction in the size of electronic devices has led to the appearance of new phenomena that require special attention to be properly investigated. In particular, mesoscopic conductors are attracting increasing interest in recent years. Here, the microscopic interpretation of carrier transport and fluctuations demands approaches which differ from those typically used in macroscopic devices. Several techniques have been used to this end. Accordingly, to account for phase coherence the scattering matrix theory originally proposed by Landauer has been further elaborated. A Wigner function formalism has also been used for the analysis of ballistic and diffusive conductors. When phase coherence does not play an essential role, semiclassical methods based on the Boltzmann-Langevin equation have been shown to provide viable solutions. In all these theoretical approaches, the modeling of the contacts has proven to be crucial. The active region of the devices is considered to be surrounded by leads which are usually treated as ideal thermal reservoirs. In other words, contacts are assumed to be always at thermal equilibrium; absorbed carriers are thermalized immediately, thus any correlation is destroyed, and emitted carriers obey a Fermi-Dirac distribution. Recently, particle simulations, mainly based on the Monte Carlo method, have been used to study fluctuations in mesoscopic structures. This technique, which is widespread for the analysis of macroscopic electronic devices, has the advantage of being applicable under physical conditions which can be very far from thermal equilibrium, and often can not be studied analytically. Moreover, it can provide detailed microscopic information about the physical processes and the time scales associated with transport and fluctuations in electronic devices. This last feature makes particle simulations quite attractive for the study of mesoscopic conductors. In the case of macroscopic devices, the presence of energy dissipation and diffusive regions inside the structures washes out the influence of the contact injecting statistics on the output currents and voltages, and the simulation of contacts does not require very detailed models. On the contrary, when dealing with mesoscopic structures, the modeling of carrier injection from thermal reservoirs is a delicate problem. In particular, the statistics of electron injection associated with a Fermi-Dirac distribution at the electrodes is essential for the correct analysis of fluctuations and effects related to the Pauli exclusion principle. For a classical injector, the statistics of transmitted charge is Poissonian, and can be easily accounted for. On the contrary, under degenerate conditions one should use a binomial distribution, and to our knowledge this issue has not been addressed so far.
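The difference is visible already at the level of counting statistics: with N_s incoming states per time window, each occupied with probability f, the injected number is binomially distributed with Fano factor (variance over mean) 1 − f, which approaches the Poissonian value 1 only for f ≪ 1. A quick illustration with arbitrary parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
N_states = 1000      # incoming states per time window (illustrative)

for f in (0.99, 0.5, 0.01):   # occupation probability of each state
    counts = rng.binomial(N_states, f, size=100_000)
    fano = counts.var() / counts.mean()
    print(f"f = {f:4.2f}:  var/mean = {fano:.3f}  (binomial: {1 - f:.3f}; Poisson: 1)")
```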
The aim of this paper is to present a model for particle injection from ideal thermal reservoirs into one-dimensional mesoscopic conductors which takes into account the fluctuating occupancy of the incoming electron states associated with a Fermi-Dirac distribution. The model can be continuously applied from nondegenerate to degenerate statistics at the contacts. The results obtained with a Monte Carlo simulation of a one-dimensional two-terminal ballistic conductor implementing the present contact model are compared with existing analytical results to validate the injection scheme. Let us consider a one-dimensional conductor connected to leads which act as perfect thermal reservoirs. The density (in $`k`$-space) of incoming electron states with wave vector $`k`$ impinging per unit time upon the boundary between the leads and the conductor, $`\zeta _k`$, is given by the product of the density of states $`n_k`$ and the velocity $`v_k`$ normal to the boundary, $`\zeta _k=n_kv_k=\frac{1}{\pi }\frac{\hbar k}{m}`$, where we have taken a parabolic isotropic $`\epsilon `$-$`k`$ relation. These $`n_k`$ states obey Fermi-Dirac statistics, thus only a fraction $`f(\epsilon _k)=\{1+\mathrm{exp}[(\epsilon _k-\epsilon _F)/k_BT]\}^{-1}`$ of them will be occupied and eventually will inject a carrier into the conductor, with $`\epsilon _F`$ the Fermi level. Therefore, the injection rate density of carriers with momentum $`k`$, $`\mathrm{\Gamma }_k`$, is given by $`\mathrm{\Gamma }_k=\zeta _kf(\epsilon _k)`$. While $`\zeta _k`$ does not depend on time, the instantaneous occupancy of an incoming $`k`$-state $`\stackrel{~}{f}(\epsilon _k,t)`$, of which $`f(\epsilon _k)`$ is the average, fluctuates in time obeying a binomial distribution with a probability of success $`f(\epsilon _k)`$. The injecting statistics imposed by this binomial distribution is determined by the Fermi-Dirac statistics electrons obey, i.e., ultimately, by the Pauli principle. When $`\epsilon _k-\epsilon _F\ll -k_BT`$, $`f(\epsilon _k)\simeq 1`$ and the injecting statistics of the corresponding $`k`$-state becomes uniform in time. On the contrary, when $`\epsilon _k-\epsilon _F\gg k_BT`$, $`f(\epsilon _k)\ll 1`$ and the injecting statistics of the corresponding $`k`$-state becomes Poissonian in time. In a completely degenerate (quantum) reservoir, the former condition is fulfilled for any incoming $`k`$-state and the injection is uniform in time for all the $`k`$ values up to the Fermi wave vector $`k_F`$. On the contrary, in a nondegenerate (classical) reservoir, the latter condition applies for all $`k`$ values. To reproduce the injecting statistics imposed by the Pauli principle in a particle simulation, it is necessary to discretize momentum space into a certain number of meshes of width $`\mathrm{\Delta }k`$ around discrete values of $`k`$, $`k_i`$. For each of these meshes, the number of incoming electron states per unit time with wave vector $`k_i`$ is given by $`\zeta _{k_i,\mathrm{\Delta }k}=\zeta _{k_i}\mathrm{\Delta }k`$, with a probability of occupancy given by $`f(\epsilon _{k_i})`$. In the simulation, at each time interval of duration $`1/\zeta _{k_i,\mathrm{\Delta }k}`$ an attempt to introduce an incoming electron with wave vector $`k_i`$ takes place. At this point a random number $`r`$ uniformly distributed between 0 and 1 is generated, and the attempt is considered successful only if $`r<f(\epsilon _{k_i})`$. This rejection-technique scheme properly accounts for the injection statistics at each mesh in $`k`$-space.
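A minimal sketch of this rejection scheme (in reduced units ħ = m = k_B = 1 with placeholder values for T, ε_F, the mesh and the observation time; the energy window and mesh count anticipate the choices discussed in the next paragraph):

```python
import numpy as np

hbar = m = kB = 1.0                 # reduced units (placeholders)
T, eF = 1.0, 5.0                    # temperature and Fermi level
t_total = 1e3                       # simulated time window

def fermi(eps):
    return 1.0 / (1.0 + np.exp((eps - eF) / (kB * T)))

# k-mesh covering roughly eF - 3kT .. eF + 3kT, divided into 50 meshes
k_lo = np.sqrt(2.0 * m * max(eF - 3.0 * kB * T, 0.0)) / hbar
k_hi = np.sqrt(2.0 * m * (eF + 3.0 * kB * T)) / hbar
k_mesh, dk = np.linspace(k_lo, k_hi, 50, retstep=True)

rng = np.random.default_rng(0)
injected = []
for k in k_mesh:
    zeta = (hbar * k / (np.pi * m)) * dk       # attempt rate zeta_{k_i, dk}
    eps = (hbar * k) ** 2 / (2.0 * m)
    attempts = int(zeta * t_total)             # one attempt every 1/zeta
    # rejection step: each attempt succeeds with probability f(eps)
    injected.append(int(np.sum(rng.random(attempts) < fermi(eps))))

print("electrons injected per mesh (low k to high k):", injected[::10])
# meshes below eF inject at nearly every attempt (f ~ 1, uniform in time),
# meshes above eF inject rarely (f << 1, Poissonian in time)
```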
For a completely degenerate reservoir, in every mesh up to $`k_F`$ an electron is injected at every time interval $`1/\zeta _{k_i,\mathrm{\Delta }k}`$, and there is no need for the rejection technique. This is the case of the simple contact modeling used in Ref. . For a nondegenerate reservoir, since $`f(\epsilon _{k_i})\ll 1`$ for all $`k_i`$-states, it is possible to use a global Poissonian statistics characterized by an injection rate $`\mathrm{\Gamma }_{clas}=\int _0^{+\infty }\mathrm{\Gamma }_k𝑑k`$. Accordingly, the time between two consecutive electron injections is generated with a probability per unit time $`P(t)=\mathrm{\Gamma }_{clas}\mathrm{e}^{-\mathrm{\Gamma }_{clas}t}`$. Then the electron wave vector is randomly picked from a Maxwell-Boltzmann distribution, and there is no need to use a mesh in $`k`$-space. This is the scheme used in Refs. . For any intermediate level of degeneracy, to account for the proper statistics at each value of $`k_i`$ it is necessary to use the scheme explained above, which of course is also valid in the classical and degenerate limits, although it is less efficient in terms of computation time.

The accuracy of the proposed scheme depends on the number of meshes in $`k`$-space used to inject carriers. In any case, it is not necessary to use a very large number of meshes. Indeed, it is well known that the noise of an electrical system depends only on the kinetics of the electron states in a small energy range around the Fermi level. We have checked that very satisfactory noise results can be obtained by simulating just an energy range of $`3k_BT`$ above and below $`\epsilon _F`$ and dividing it into $`50`$ meshes. Below and above this range, all $`k`$-states can be considered to be completely occupied and completely empty, respectively. Therefore, they do not contribute to current fluctuations and can be ignored.

In the following we report the results of a Monte Carlo simulation of a one-dimensional two-terminal ballistic conductor of length $`L`$ connected to two thermal reservoirs modeled according to the above scheme. The temperature is taken to be 300 K and the effective mass $`m=0.25m_0`$, with $`m_0`$ the free-electron mass. Carriers are considered to move ballistically inside the conductor following the classical equations of motion, and when a voltage $`U`$ is applied to the leads, electrons are accelerated by an electric field $`E=U/L`$. When a carrier inside the conductor reaches a contact, it is considered to be immediately thermalized and is removed from the simulation. For simplicity, the cross-sectional area of the conductor is assumed to be sufficiently small that only the lowest sub-band is occupied. We remark that Coulomb interactions are ignored and that the Pauli exclusion principle is taken into account only at the contact injection. Under these conditions, the literature provides several analytical results which will be used to validate the model.

Figure 1 shows the low-frequency value of the current spectral density $`S_I(0)`$ at equilibrium, normalized to $`2qI_S`$, as a function of the degeneracy factor $`\epsilon _F/k_BT`$, with $`\epsilon _F`$ measured with respect to the bottom of the conduction band. $`I_S=q\int _0^{\infty }v_kn_kf(\epsilon _k)𝑑k`$ is the saturation current, i.e., the maximum current a contact can provide. In the classical limit, corresponding to large negative values of the degeneracy factor, $`S_I(0)=4qI_S`$.
Here all carriers contribute to the current noise, and $`S_I(0)`$ is just the sum of the full shot noise associated with the two opposing currents $`I_S`$ injected by the contacts. Under degenerate conditions, corresponding to positive values of the degeneracy factor, $`S_I(0)`$ decreases below $`4qI_S`$, suppressed by the factor $`\epsilon _F/k_BT`$ related to Fermi correlations at the reservoirs. Here, as is known, only carriers around the Fermi level contribute to the noise. As shown by the figure, the agreement between the results of the Monte Carlo simulation and the analytical expectations in the nondegenerate and degenerate limits is excellent, thus indicating that the carrier injection statistics achieved with the proposed model is valid in both regimes.

Figure 2 reports the current autocorrelation function $`C_I(t)`$ in the same structure as in Fig. 1, when $`\epsilon _F/k_BT=100`$, for several applied voltages, as a function of time normalized to the transit time at the Fermi level, $`\tau _T=L(m/2\epsilon _F)^{1/2}`$. The corresponding $`I`$-$`U`$ curve is shown in the upper inset of Fig. 2. Due to the imbalance in the number of carriers reaching the opposite contact, the current increases linearly with the applied voltage until $`qU=\epsilon _F`$, when the current reaches the saturation value $`I_S=2q\epsilon _F/h`$. For higher $`U`$, all the carriers injected at the left lead reach the opposite contact, while no electron injected at the anode reaches the cathode, and therefore the current saturates. The conductance in the linear region corresponds to the value of the fundamental unit $`2q^2/h`$.

The current autocorrelation function $`C_I(t)`$ exhibits the following features. At equilibrium it shows the typical triangular shape, vanishing at $`t=\tau _T`$, since the contributions of carriers moving in the two directions are symmetrical. This shape parallels that of a vacuum tube with a constant-velocity emitter. Here, the same shape comes from the Pauli principle, which allows only carriers in a small range around the Fermi energy, and therefore moving with practically the same velocity, to contribute to the noise. When a voltage below $`\epsilon _F/q`$ is applied to the structure, $`C_I(t)`$ exhibits a two-slope behavior, because now the transit times of carriers moving in opposite directions are different. At voltages higher than $`\epsilon _F/q`$, a negative part appears in $`C_I(t)`$, since the carriers injected against the electric field no longer reach the cathode and return to the anode. At further increasing voltages the negative part appears sooner, due to the shorter time it takes the carriers to return to the right contact.

The low-frequency spectral density $`S_I(0)`$ as a function of $`U`$ is shown in the lower inset of the same figure. At equilibrium, the value obtained corresponds to $`S_I(0)=8q^2k_BT/h`$, which, when compared with the Nyquist formula $`S_I(0)=4k_BTG`$, again yields the static conductance $`G=2q^2/h`$. The fact that in our model the conductance obtained from the $`I`$-$`U`$ curve reproduces the fundamental unit value is a valid check of the correct use of the one-dimensional density of states at the contacts. Furthermore, the same value $`2q^2/h`$ is also obtained from the noise results at equilibrium, which proves that the injection statistics model proposed here is also correct under degenerate conditions.
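The linear rise and the saturation of the $`I`$-$`U`$ curve can be cross-checked against a simple Landauer-type integral (a sketch under our own assumptions: the same degeneracy factor $`\epsilon _F/k_BT=100`$ as in Fig. 2; this is a consistency check of the quoted limits, not the Monte Carlo model itself).

```python
import numpy as np
from scipy.integrate import quad

# I = (2q/h) * int de [f_cathode(e) - f_anode(e)]: linear with slope 2q^2/h
# up to qU = eps_F, saturating at I_S = 2 q eps_F / h beyond it.
# Energies below are measured in units of kBT; dF = eps_F / kBT.
q, h, kB, T = 1.602e-19, 6.626e-34, 1.381e-23, 300.0
dF = 100.0

def f(x, mu):                               # Fermi function, overflow-safe
    return 0.5 * (1.0 - np.tanh(0.5 * (x - mu)))

def current(u):                             # u = qU / kBT
    val, _ = quad(lambda x: f(x, dF) - f(x, dF - u), 0.0, dF + 40.0,
                  limit=200)
    return (2 * q / h) * kB * T * val       # amperes

G = current(10.0) / (10.0 * kB * T / q)     # linear-region slope
print(G / (2 * q**2 / h))                   # -> 1.0: G = 2q^2/h (~77.5 uS)
print(current(2 * dF) / (2 * q * kB * T * dF / h))  # -> 1.0: saturation at I_S
```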
The voltage dependence of $`S_I(0)`$ exhibits a step-like behavior, taking the equilibrium value up to $`U=\epsilon _F/q`$ and half this value for higher $`U`$. This behavior is understood as follows. For $`U<\epsilon _F/q`$, carriers around the Fermi level injected at both contacts reach the opposite side, and therefore both contribute to the low-frequency noise. On the contrary, for $`U>\epsilon _F/q`$ only carriers injected at the cathode reach the anode, and thus the value of $`S_I(0)`$ is halved. We remark that all the results shown in Fig. 2 are in excellent agreement with previous analytical results for degenerate systems.

One of the advantages of using particle simulations for noise analysis is the possibility of interpreting the time and frequency behavior of the fluctuations in terms of different contributions. Thus, in Fig. 3, for degenerate conditions and $`qU/\epsilon _F=1.01`$, $`C_I(t)`$ is decomposed into velocity $`C_V(t)`$, number $`C_N(t)`$, and velocity-number $`C_{VN}(t)`$ contributions. Here it can be observed that the negative part of $`C_I(t)`$ originates from the velocity-number correlation, which at zero time exactly compensates the velocity contribution. Furthermore, from the time dependence of the fluctuations three different characteristic times can be identified. The shortest one corresponds to the transit of carriers injected from the left contact, as better evidenced in $`C_N`$ and $`C_{VN}`$. It is close to the transit time at equilibrium, $`\tau _T`$, but slightly shorter due to the acceleration by the field. The second one is the time taken by a carrier injected at the anode to reverse its velocity, and is reflected mainly in $`C_{VN}`$. Finally, the longest one is the time at which all the correlation functions vanish. It corresponds to the time spent by the electrons injected at the anode to return to the same contact.

As a final result, in Fig. 4 we report the spectrum of the thermal conductivity $`\kappa (f)`$ at equilibrium, calculated according to Ref. . Here, we remark that not only the correlations of the electrical-current fluctuations are involved, but also those of the heat flux and the cross-correlations between the two. The oscillatory structure of Re$`[\kappa (f)]`$, with geometrical resonances at the inverse of $`\tau _T`$, is associated with the fact that all the correlation functions involved exhibit the triangular shape already found in the case of the current (see Fig. 2). A corresponding structure is also detectable in the imaginary part Im$`[\kappa (f)]`$. Again, the results of the Monte Carlo simulation are in excellent agreement with analytical results. In particular, they reproduce with great accuracy the fundamental unit of thermal conductance, $`K=2\pi ^2k_B^2T/3h`$, inferred in Ref. , which again confirms the validity of the proposed model for the injection statistics at the reservoirs.

In summary, we have presented an injection scheme for electrons at thermal reservoirs in particle simulations of mesoscopic conductors which takes into account the binomial distribution of the injected electrons imposed by Fermi statistics. The model has been validated for the case of ballistic transport in quasi-one-dimensional degenerate conductors. In particular, at thermal equilibrium we have reproduced the fundamental units of the electrical and thermal conductances, analytically calculated from the correlation-function formalism.
The scheme can be applied continuously from classical to completely degenerate conditions, and it can be extended to two and three dimensions and to multi-subband systems. The proposed scheme is also open to applications involving degenerate diffusive conductors.

The authors acknowledge helpful discussions with Prof. A. Reklaitis and Dr. O. M. Bulashenko. This work has been partially supported by the Dirección General de Enseñanza Superior e Investigación through the project PB97-1331, and by the Physics of Nanostructures project of the Italian Ministero dell' Universitá e della Ricerca Scientifica e Tecnologica (MURST).
# Pseudogap due to Antiferromagnetic Fluctuations and the Phase Diagram of High-Temperature Oxide Superconductors

## Abstract

A reduction of the density of states near the Fermi energy in the normal state (pseudogap) of high-temperature oxide superconductors is examined on the basis of the two-dimensional tight-binding model with effective interactions due to antiferromagnetic fluctuations. By using antiferromagnetic correlation lengths which are assumed phenomenologically, the doping dependence of the pseudogap is obtained. The superconducting transition temperature decreases, and eventually vanishes, due to the pseudogap as the hole concentration is reduced.

In high-temperature oxide superconductors (HTSCs), behaviors which can be attributed to a reduction of the density of states (DOS) near the Fermi energy (pseudogap) have been observed, for example in photoemission spectroscopies , tunneling spectroscopies , a specific heat measurement , NMR experiments , and neutron scattering . However, the origin of the pseudogap remains controversial: preformed Cooper pairs in the spinon condensation model , spin fluctuations in the nearly antiferromagnetic (AF) spin-fermion model , $`d`$-wave pairing fluctuations , AF fluctuation-mediated pairing interactions in the $`d`$-$`p`$ model , $`d`$-wave pairing fluctuations combined with AF fluctuations , and so forth. Among these candidates, the decrease of $`T_\mathrm{g}`$ (the temperature below which the pseudogap phenomena are observed) with hole doping suggests the possibility that the pseudogap mainly originates from AF fluctuations, at least at temperatures much higher than the superconducting transition temperature ($`T_\mathrm{c}`$). The AF long-range-ordered phase occurs in the vicinity of half-filling, where a real gap is open. A pseudogap structure in the DOS is therefore likely in its proximity, due to strong AF fluctuations. In this mechanism the decrease of $`T_\mathrm{g}`$ is explained naturally, since the AF fluctuations decrease with doping. On the other hand, if we assume that the pseudogap is due to pairing fluctuations, the temperature $`T_\mathrm{g}`$ can be regarded as the temperature at which pairing fluctuations begin to occur. Therefore, when $`T_\mathrm{g}`$ increases, it is natural to expect that $`T_\mathrm{c}`$ should increase as well. However, this is inconsistent with the observed opposite doping dependences of $`T_\mathrm{g}`$ and $`T_\mathrm{c}`$ in the underdoped region.

In this paper we examine the pseudogap due to AF fluctuations. The reduction of the DOS near the Fermi surface should suppress $`T_\mathrm{c}`$ in the underdoped region, so that $`T_\mathrm{c}`$ has a peak as a function of the hole concentration. We intend to describe a minimal theory which reproduces the phase diagram of a HTSC. Hence, we omit some details which do not change the qualitative behavior of $`T_\mathrm{c}`$. For example, we adopt the static approximation for the spin fluctuations. The static approximation does not produce the imaginary part of the self-energy and the broadening of the one-particle weight. However, the real part of the self-energy can reproduce the reduction of the DOS near the Fermi surface, which is the most intuitive definition of the pseudogap. Therefore, the static approximation is sufficient for our purpose. In addition, Schmalian et al.
argued that the characteristic frequency of the spin fluctuations, $`\omega _{\mathrm{sf}}`$, is much smaller than the temperatures of interest for HTSCs.

In our formulation, AF fluctuations are taken into account through a renormalization effect. The importance of the renormalization effect for the doping dependence of the $`T_\mathrm{c}`$ of HTSCs was discussed on the basis of the two-dimensional Hubbard model . It was shown that $`T_\mathrm{c}`$ is reduced considerably near the boundary of the AF long-range-ordered phase by the strong renormalization effect due to AF fluctuations. However, the pseudogap was not taken into account sufficiently. This work was extended to include the pseudogap due to AF fluctuations in a quasi-one-dimensional (Q1D) Hubbard model as a model of Q1D organic superconductors . It was shown that the pseudogap suppresses $`T_\mathrm{c}`$ markedly near the spin-density-wave boundary. As a result, the phase diagrams of Q1D organic superconductors in the pressure-temperature plane were semiquantitatively reproduced.

In HTSCs, however, the same approach based on one of the microscopic models is difficult to apply. The interlayer coupling is much smaller and the temperature range of interest is much higher in HTSCs than in the organic superconductors. Hence, the thermal fluctuations are very strong in HTSCs. This situation makes a quantitative argument difficult. In addition, there is no consensus as to which microscopic model is appropriate for HTSCs: the single-band Hubbard, $`d`$-$`p`$, and $`t`$-$`J`$ models, and so forth. Therefore, we treat AF fluctuations determined phenomenologically from experiments, instead of calculating them microscopically, and concentrate on their qualitative features.

In the calculation of $`T_\mathrm{c}`$ we assume, for simplicity, that the coupling constant of the pairing interactions does not depend on the doping. Such interactions may be attributed to those mediated by phonons. It may appear that $`d`$-wave pairing cannot occur with phonon-mediated interactions. However, it has been shown that pairing interactions mediated by screened phonons can give rise to $`d`$-wave superconductivity in the presence of AF fluctuations . Here, we do not specify the origin of the pairing interactions; to some extent there may be a contribution from the exchange of AF fluctuations.

We calculate the electron self-energy in the one-loop approximation as $$\mathrm{\Sigma }_\sigma (k)=T\sum _{n^{}}N^{-1}\sum _{𝐤^{}}\sum _{\sigma ^{}}V_{\sigma \sigma ^{}}(k,k^{})G_{\sigma ^{}}(k^{}),$$ (1) with $`k=(𝐤,\mathrm{i}\omega _n)`$, where $`V_{\sigma \sigma ^{}}`$ is the effective interaction due to the exchange of the fluctuations and $`G_\sigma (k)`$ is the renormalized electron Green's function, $$G_\sigma (k)=\frac{1}{i\omega _n-ϵ_𝐤-\mathrm{\Sigma }_\sigma (k)+\mu }.$$ (2) We consider a two-dimensional tight-binding model with the electron dispersion $$ϵ_𝐤=-2t(\mathrm{cos}k_x+\mathrm{cos}k_y)-4t^{}\mathrm{cos}k_x\mathrm{cos}k_y,$$ (3) where $`t`$ and $`t^{}`$ are the nearest- and next-nearest-neighbor hoppings, respectively. We express the effective interactions due to the exchange of magnetic fluctuations as $`V(𝐤,𝐤^{})={\displaystyle \frac{V_0}{(𝐤-𝐤^{}-𝐪_m)^2+q_0^2}},`$ (4) within the static approximation. Here, $`V_0`$ and $`q_0`$ are the phenomenological parameters.
Since the effective interactions due to the exchange of magnetic fluctuations are proportional to the spin susceptibility $`\chi (𝐤-𝐤^{})`$, they must have a sharp peak at $`𝐤=𝐤^{}+𝐪_m`$, where the $`𝐪_m`$'s are the nesting vectors near $`(\pi ,\pi )`$ for the AF fluctuations which give the largest value of $`\chi (𝐪)`$. There are four such nesting vectors, $`𝐪_m=(\pi \pm \delta ,\pi )`$ or $`𝐪_m=(\pi ,\pi \pm \delta )`$, when incommensurate AF fluctuations occur . In eq. (4), we take a nesting vector such that $`\pi \ge q_{mx}\ge q_{my}>0`$ for $`k_x-k_x^{}\ge k_y-k_y^{}\ge 0`$. For the other regions of $`k_x-k_x^{}`$ and $`k_y-k_y^{}`$, $`𝐪_m`$ is chosen so that $`V(𝐤,𝐤^{})`$ satisfies the symmetry condition. The AF correlation length $`\xi \sim 1/q_0`$ diverges at the critical hole concentration $`n_\mathrm{h}^\mathrm{c}`$. We take $$\frac{1}{q_0}=\frac{C}{\pi (\mu _\mathrm{c}-\mu )^{1/2}},$$ (5) which diverges at a critical chemical potential $`\mu _\mathrm{c}`$. For small hole concentrations, the chemical potential $`\mu `$ is roughly proportional to the hole concentration $`n_\mathrm{h}`$. Thus, the above form roughly implies $`\xi \sim 1/\sqrt{n_\mathrm{h}}`$, which is plausible in the sense that the AF correlation length is directly related to the average distance between holes . Thus, we finally obtain an equation to solve in the compact form $$\mathrm{\Sigma }_\sigma (𝐤)=-\frac{1}{2}N^{-1}\sum _{𝐤^{}}V(𝐤,𝐤^{})\mathrm{tanh}(\frac{ϵ_𝐤^{}+\mathrm{\Sigma }_\sigma (𝐤^{})-\mu }{2T}).$$ (6) Within the static approximation of eq. (4), we only need to solve for the real part of the self-energy. It is thus found that a pseudogap appears, as shown below, for example in the DOS, which is given by $$\rho (ϵ)=\int \frac{\mathrm{d}k_x\mathrm{d}k_y}{(2\pi )^2}\delta (ϵ-\stackrel{~}{ϵ}_𝐤),$$ (7) where we have put $`\stackrel{~}{ϵ}_𝐤=ϵ_𝐤+\mathrm{\Sigma }_\sigma (𝐤)`$.

We solve the self-consistent equation on a large grid of $`512\times 512`$ discrete momentum points in the first Brillouin zone. We confirmed that the results for $`512\times 512`$ points practically coincide with those for $`256\times 256`$, within the width of the lines in our figures of the DOS and $`T_\mathrm{c}`$. Thus, for practical purposes, the system size $`512\times 512`$ can be regarded as being within the thermodynamic limit at the present temperature. We interpolate the obtained self-energy linearly in momentum space, so that the first Brillouin zone has $`4096\times 4096`$ points for the calculation of the DOS.

We consider the case of nearest-neighbor hopping ($`t\ne 0`$ and $`t^{}=0`$). We choose $`V_0=0.4`$ and $`T=0.01`$, and take $`C=30`$ and $`\mu _\mathrm{c}=-0.05`$ in eq. (5). We use units in which $`t=1`$. The value of $`\mu _\mathrm{c}`$ gives $`n_\mathrm{h}^\mathrm{c}\simeq 0.02=2\%`$, where $`n_\mathrm{h}^\mathrm{c}`$ denotes the critical hole concentration of the AF long-range order.

Figure 1 shows the $`n_\mathrm{h}`$ dependence of the AF correlation length $`\xi `$ and of the resulting DOS at the Fermi energy. In this figure, $`\rho _d`$ is the effective DOS for $`d`$-wave pairing, for which the order parameter is $`\mathrm{\Delta }(𝐤)\propto (\mathrm{cos}k_x-\mathrm{cos}k_y)`$, $$\rho _d(\mu )=\int \frac{\mathrm{d}k_x\mathrm{d}k_y}{(2\pi )^2}\delta (\stackrel{~}{ϵ}_𝐤-\mu )(\mathrm{cos}k_x-\mathrm{cos}k_y)^2.$$ (8) $`\rho _s`$ is that for $`s`$-wave pairing ($`\mathrm{\Delta }(𝐤)=\mathrm{const}.`$), which is equivalent to the usual DOS.
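As an aside, the step from eq. (1) to eq. (6) can be filled in as follows (our own sketch): with $`V`$ frequency-independent in the static approximation, the Matsubara sum acts on the Green's function alone.

```latex
% Matsubara sum over G (0+ convergence factor understood):
T\sum_{n'} G_\sigma(\mathbf{k}',i\omega_{n'})
  = f\!\left(\tilde{\epsilon}_{\mathbf{k}'}-\mu\right)
  = \frac{1}{2}
  - \frac{1}{2}\tanh\!\left(\frac{\tilde{\epsilon}_{\mathbf{k}'}-\mu}{2T}\right),
\qquad \tilde{\epsilon}_{\mathbf{k}} = \epsilon_{\mathbf{k}} + \Sigma_\sigma(\mathbf{k}).
% The constant 1/2 only shifts the chemical potential and can be absorbed
% into mu; the remaining tanh term, with the spin sum absorbed into
% V(k,k'), gives Eq. (6).
```

A numerical solution of eq. (6) is a straightforward fixed-point iteration. The sketch below is our own illustration: it uses a coarse 64x64 grid instead of the 512x512 of the text (the sharp interaction peak is therefore under-resolved), the commensurate vector $`𝐪_m=(\pi ,\pi )`$ (i.e. $`\delta =0`$), a higher temperature than the text's $`T=0.01`$ so that the plain damped iteration stays stable, and an assumed chemical potential. The convolution over $`𝐤^{}`$ is evaluated with FFTs, which is exact on the periodic zone.

```python
import numpy as np

# Fixed-point solution of Eq. (6) on a coarse k-grid (illustration only).
t, V0, T, C, mu_c = 1.0, 0.4, 0.1, 30.0, -0.05
N = 64
k = 2 * np.pi * np.arange(N) / N                 # zone sampled on [0, 2pi)
KX, KY = np.meshgrid(k, k, indexing="ij")
eps = -2 * t * (np.cos(KX) + np.cos(KY))         # Eq. (3) with t' = 0

mu = -0.3                                        # assumed chemical potential
q0 = np.pi * np.sqrt(mu_c - mu) / C              # from Eq. (5)

Vq = V0 / ((KX - np.pi) ** 2 + (KY - np.pi) ** 2 + q0 ** 2)   # Eq. (4)
Vq_ft = np.fft.fft2(Vq)

sigma = np.zeros_like(eps)
for _ in range(300):
    th = np.tanh((eps + sigma - mu) / (2 * T))
    # Sigma(k) = -(1/2N^2) sum_{k'} V(k - k') tanh(...): a zone-periodic
    # convolution, evaluated exactly with FFTs.
    new = -0.5 / N ** 2 * np.real(np.fft.ifft2(Vq_ft * np.fft.fft2(th)))
    if np.max(np.abs(new - sigma)) < 1e-9:
        sigma = new
        break
    sigma = 0.5 * sigma + 0.5 * new              # damped update for stability

print("Sigma range:", sigma.min(), sigma.max())
```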
From the obtained DOS we calculate $`T_\mathrm{c}`$ by the conventional weak-coupling formula $$T_\mathrm{c}=1.13\omega _\mathrm{D}\mathrm{e}^{-1/\lambda _\alpha },$$ (9) with $`\lambda _\alpha =g\rho _\alpha (\mu )`$. We choose the value of the cutoff energy of the bosons which mediate the pairing interactions as $`\omega _\mathrm{D}=1000\mathrm{K}`$. The value of $`g`$ is chosen so that $`T_\mathrm{c}`$ takes reasonable values, such as $`40\mathrm{K}`$ or $`90\mathrm{K}`$, at the peak.

Figures 2 and 3 show the doping dependence of $`T_\mathrm{c}`$ for the $`s`$-wave and $`d`$-wave pairings, respectively. Owing to the sensitivity of the singular exponential form of eq. (9), we obtain a marked suppression of $`T_\mathrm{c}`$ near the AF boundary ($`n_\mathrm{h}\gtrsim n_\mathrm{h}^\mathrm{c}`$) and thus a peak structure around $`n_\mathrm{h}\simeq 0.1=10\%`$. In particular, for the $`d`$-wave pairing the peak is narrow and agrees well with the experimental phase diagram of HTSCs.

To be more precise, we should take into account the temperature dependence of the DOS in the estimation of $`T_\mathrm{c}`$. However, for $`n_\mathrm{h}\gtrsim n_\mathrm{h}^\mathrm{c}`$ the AF correlation length $`\xi `$ does not depend strongly on temperature at low temperatures . When its temperature dependence is ignored, the DOS is almost independent of temperature; for example, it can be confirmed that the results for $`T=0.005`$ are almost the same as those for $`T=0.01`$. On the other hand, even when the temperature dependence of $`\xi `$ is taken into account, it does not change the qualitative result. It only emphasizes the peak of $`T_\mathrm{c}`$, because the pseudogap becomes deeper for longer $`\xi `$, which reduces $`T_\mathrm{c}`$ further.

Figure 4 shows the DOS in the underdoped region ($`n_\mathrm{h}\simeq 0.0237`$) and in the overdoped region ($`n_\mathrm{h}\simeq 0.227`$). It is seen that the pseudogap is deep and clear in the underdoped region, while it becomes very shallow in the overdoped region. We also find that the pseudogap becomes large in the direction of $`(\pm \pi ,0)`$ and $`(0,\pm \pi )`$, while it becomes small in the direction of $`(\pm \pi ,\pm \pi )`$ .

It is straightforward to extend the present calculation to the $`t^{}\ne 0`$ case. When we assume $`t^{}=-0.2`$ and a similar doping dependence of the AF correlation length, decreasing with doping, we also obtain a peak structure of $`T_\mathrm{c}`$, near $`n_\mathrm{h}\simeq 0.13`$ . This peak structure is also consistent with the experimental phase diagrams, except that the peak becomes very steep due to the van Hove singularity. However, it is easily verified by a phenomenological consideration that such steepness of the peak is not at all essential. If we assume that a large $`t^{}`$ is appropriate for the HTSCs, the Fermi surface crosses the van Hove singularities at a large hole concentration. Thus, the DOS at the Fermi energy and $`T_\mathrm{c}`$ would have a sharp peak around that hole concentration. However, such singular behavior, or a marked enhancement of the DOS with hole doping, has not been observed in any experiments, for example the specific heat and susceptibility measurements. This suggests that the singularity is finally eliminated by some many-body effect.
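The sensitivity of eq. (9) can be made concrete with a few numbers (an illustration with made-up DOS values; only $`\omega _\mathrm{D}=1000`$ K and the target peak $`T_\mathrm{c}\approx 40`$ K follow the text):

```python
import numpy as np

# Eq. (9): T_c = 1.13 * omega_D * exp(-1/lambda), lambda = g * rho(mu).
# The rho values below are invented for illustration; g is tuned so the
# peak T_c is ~40 K.  Note how a 6x smaller DOS kills T_c entirely.
omega_D, g = 1000.0, 1.0
for n_h, rho in [(0.03, 0.05), (0.06, 0.15), (0.10, 0.30),
                 (0.15, 0.28), (0.23, 0.22)]:
    Tc = 1.13 * omega_D * np.exp(-1.0 / (g * rho))
    print(f"n_h = {n_h:.2f}, rho = {rho:.2f}:  T_c = {Tc:8.3g} K")
```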
If we incorporate this singularity-removing many-body effect into our phenomenological model from the outset, models with $`t^{}\simeq 0`$ might be reasonable for describing the realistic situation, because this is consistent with the experimental facts that no singularity occurs in the overdoped region and that the AF correlation length increases near half-filling.

We have omitted vertex corrections and dynamical effects. However, since these effects should reduce $`T_\mathrm{c}`$ in the underdoped region, it is likely that they only emphasize the peak structure of $`T_\mathrm{c}`$. Likewise, strong-coupling effects should not be essential for the qualitative behavior of $`T_\mathrm{c}`$. The essential point is that $`T_\mathrm{c}`$ is very sensitive to the DOS, as is explicit in the singular exponential form of eq. (9) within the weak-coupling theory. The quantitative improvement of the theory by including these effects remains for future study.

In conclusion, we propose a mechanism which explains the peak structure in the doping dependence of the $`T_\mathrm{c}`$ of HTSCs. It is shown that the pseudogap due to AF fluctuations suppresses $`T_\mathrm{c}`$ in the underdoped region and eventually destroys the superconductivity at a finite doping. On the other hand, the pseudogap phenomena are less pronounced near the optimum doping, at which $`T_\mathrm{c}`$ is maximum. There the DOS is large, since the Fermi energy is at the shoulder of the van Hove singularity; this leads to a high $`T_\mathrm{c}`$. The decrease of $`T_\mathrm{c}`$ in the overdoped region is due to the decrease of the DOS , since the Fermi energy moves away from the van Hove singularity.

The authors would like to thank Professor D. Rainer and Professor J. Friedel for useful discussions and encouragement. This work was partially supported by a grant for CREST from JST.
## 1 Introduction

Most of the evolution of the Universe is likely to have proceeded classically, in the sense that the dominant phenomenon is the classical expansion while quantum fluctuations are small and can be treated as perturbations. Prior to this stage, however, genuinely quantum phenomena almost certainly took place. Although it is not clear whether they have left any observable footprints, it is of interest to try to understand them, as this may shed light on such issues as the initial conditions for classical cosmology (are inflationary initial data natural? how long was the inflationary epoch? is an open Universe consistent with inflation?), the properties of space-time near the cosmological singularity, the origin of coupling constants, etc.

To describe the Universe in its quantum phase, one ultimately has to deal with a full quantum gravity theory, well beyond Einstein gravity. It is, however, legitimate to take a more modest attitude and consider quantum phenomena below the Planck (or string) energy scale. Then quantized Einstein gravity (plus quantized matter fields) provides an effective "low energy" description which must be tractable, at least in principle, within the quantum field theory framework. Processes that it should be possible to consider in this way are not necessarily perturbative, as the example of tunneling in quantum mechanics and field theory shows. Surprisingly, not so many phenomena are well understood even within this modest approach.

Perhaps the clearest process is the decay of a metastable vacuum . It has been clarified recently that, at least in the range of parameters where the treatment of space-time in terms of a background de Sitter metric is reliable, the Coleman–De Luccia instanton indeed describes the false vacuum decay, provided the quantum fluctuations above the classical false vacuum are in the de Sitter-invariant (conformal) vacuum state. This is in full accord with the results of Refs. . It is likely that this conclusion holds also when the quantum properties of the metric are taken into account. Furthermore, the Hawking–Moss instanton can be interpreted as a limiting case of constrained instantons that describe the false vacuum decay in an appropriate region of parameter space, again in agreement with previous analyses . Hence, a coherent picture of false vacuum decay with gravity effects included emerges.

One may try to apply the laws of quantum mechanics to the Universe as a whole, and consider the wave function of the Universe. Although research in this direction began more than 30 years ago , the situation here is still intriguing and controversial. The main purpose of this contribution is to make a few comments on this subject. Namely, we will discuss which analogies to ordinary quantum mechanics are likely to work in quantum cosmology, and which are rather misleading. We begin with quantum mechanics, and only then turn to the wave function of the Universe.

## 2 Wave function in quantum mechanics

To set the stage, let us consider a quantum mechanical system with two dynamical coordinates, $`x`$ and $`y`$.
Let the Hamiltonian be $$\widehat{H}=\widehat{H}_0+\widehat{H}_y$$ where $$\widehat{H}_0=\frac{1}{2}\widehat{p}_x^2+V_0(x)$$ $$\widehat{H}_y=\frac{1}{2}\widehat{p}_y^2+\frac{1}{2}\omega ^2(x)y^2+\frac{1}{4}\lambda (x)y^4+\mathrm{\cdots }$$ Let us assume that the potential $`V_0(x)`$ is such that the motion along the coordinate $`x`$ is semiclassical, while the dynamics along the coordinate $`y`$ can be treated in perturbation theory about the semiclassical motion along $`x`$. This approach is close in spirit to the Born–Oppenheimer approximation. We will consider solutions to the stationary Schrödinger equation with fixed energy $`E`$.

Let us first discuss the dynamics in the classically allowed region of $`x`$, where $`E>V_0(x)`$. In this region, there are two sets of solutions with the semiclassical parts of the wave functions equal to $$\mathrm{\Psi }\propto \text{e}^{+iS(x)}$$ (1) and $$\mathrm{\Psi }\propto \text{e}^{-iS(x)}$$ where $$S(x)=\int ^x𝑑x^{}\sqrt{2(E-V_0(x^{}))}$$ These two sets of solutions correspond to motion to the right and to the left, respectively. Note that this interpretation is based on the fact that there exists an extrinsic time $`t`$ inherent in the problem: the complete, time-dependent wave functions are $`\mathrm{exp}(-iEt+iS(x))`$ and $`\mathrm{exp}(-iEt-iS(x))`$; the wave packets constructed out of the wave functions of these two types indeed move right and left, respectively, as $`t`$ increases.

Let us now consider the dynamics along the coordinate $`y`$, still using the time-independent Schrödinger equation in the allowed region of $`x`$. This is done for, say, the right-moving system by writing, instead of eq. (1), $$\mathrm{\Psi }(x,y)=\frac{1}{\sqrt{p_x(x)}}\stackrel{~}{\mathrm{\Psi }}(x,y)\text{e}^{iS(x)}$$ where $`p_x=\partial S/\partial x`$. To first order in $`\hbar `$ one obtains that the time-independent Schrödinger equation reduces to $$i\frac{\partial \stackrel{~}{\mathrm{\Psi }}}{\partial x}\frac{\partial S}{\partial x}=\widehat{H}_y\stackrel{~}{\mathrm{\Psi }}$$ (2) This can be cast into the form of a time-dependent Schrödinger equation by changing variables from $`x`$ to $`\tau `$, related by $`x=x_c(\tau )`$, where $`x_c(\tau )`$ is the solution of the classical equation of motion for $`x`$ in "time" $`\tau `$ which has energy $`E`$ and obeys $$\frac{\partial S}{\partial x}(x=x_c)=\frac{dx_c}{d\tau }$$ (3) After this change of variables, $`\stackrel{~}{\mathrm{\Psi }}`$ becomes a function of $`y`$ and $`\tau `$ and obeys the following equation, $$i\frac{\partial \stackrel{~}{\mathrm{\Psi }}(y;\tau )}{\partial \tau }=\widehat{H}_y(\widehat{y},\widehat{p}_y;\tau )\stackrel{~}{\mathrm{\Psi }}(y;\tau )$$ (4) where the explicit dependence of $`\widehat{H}_y`$ on $`\tau `$ comes from $`x_c(\tau )`$. We see that there has emerged an intrinsic time $`\tau `$ which parameterizes the classical trajectory $`x_c(\tau )`$ and also the $`y`$-dependent part of the wave function. We note again that in quantum mechanics the arrow of intrinsic time, which is set by the sign convention in eq. (3), is determined by the arrow of extrinsic time $`t`$. Note also that one is free to choose any representation for the operators $`\widehat{y}`$ and $`\widehat{p}_y`$ and write, instead of eq. (4), $$i\frac{\partial |\stackrel{~}{\mathrm{\Psi }}\rangle }{\partial \tau }=\widehat{H}_y(\tau )|\stackrel{~}{\mathrm{\Psi }}\rangle $$ (5) To solve this equation, one may find it convenient to switch to the Heisenberg representation, as usual.
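For completeness, the intermediate algebra behind eq. (2) can be sketched as follows (our own filling-in, with $`\hbar `$ restored explicitly):

```latex
\Psi = \frac{1}{\sqrt{p_x}}\,\tilde{\Psi}\,\mathrm{e}^{iS/\hbar},
\qquad
\Bigl[-\tfrac{\hbar^{2}}{2}\,\partial_x^{2} + V_0(x) + \widehat{H}_y\Bigr]\Psi = E\,\Psi .
% O(hbar^0):  (1/2)(dS/dx)^2 + V_0(x) = E        -- Hamilton-Jacobi, fixes S;
% O(hbar^1):  i (dS/dx) d(tilde Psi)/dx = H_y tilde Psi   -- this is Eq. (2);
%             the 1/sqrt(p_x) prefactor cancels the (d^2 S/dx^2) term.
```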
We now turn to the discussion of the region of $`x`$ where the classical motion is forbidden and the system has to tunnel. To simplify formulas, we set $`E=0`$ in what follows.

If the system tunnels from left to right, the dominant semiclassical wave function is $$\mathrm{\Psi }\propto \text{e}^{-S(x)}$$ where $`S(x)=\int ^x𝑑x^{}\sqrt{2V_0(x^{})}`$ obeys the following equation, $$\frac{1}{2}\left(\frac{\partial S}{\partial x}\right)^2-V_0(x)=0$$ This equation may formally be considered as the classical Hamilton–Jacobi equation in Euclidean ("imaginary") time. The zero-energy classical trajectory $`x_c(\tau )`$ in Euclidean time $`\tau `$ obeys $$\frac{d^2x_c}{d\tau ^2}=+\frac{\partial V_0}{\partial x}(x=x_c)$$ and hence $$\frac{dx_c}{d\tau }=\frac{\partial S}{\partial x}(x=x_c)$$ Then $`S(x)`$ can be calculated as the value of the Euclidean action along this trajectory.

To find the equation governing the dynamics along the $`y`$-direction in the classically forbidden region of $`x`$, we again write $$\mathrm{\Psi }(x,y)=\frac{1}{\sqrt{p_x}}\stackrel{~}{\mathrm{\Psi }}(x,y)\text{e}^{-S(x)}$$ and obtain, changing variables from $`x`$ to $`\tau `$, $`x=x_c(\tau )`$, that $`\stackrel{~}{\mathrm{\Psi }}`$ obeys the time-dependent Schrödinger equation, now in Euclidean time, $$\frac{\partial \stackrel{~}{\mathrm{\Psi }}(y;\tau )}{\partial \tau }=-\widehat{H}_y(\widehat{y},\widehat{p}_y;\tau )\stackrel{~}{\mathrm{\Psi }}(y;\tau )$$ (6) The minus sign on the right-hand side of this equation is crucial for the stability of the approximation we use. Indeed, the system described by eq. (6) tends to de-excite, rather than excite, as "time" $`\tau `$ increases, so that the part $`\stackrel{~}{\mathrm{\Psi }}`$ of the wave function always remains subdominant as compared to the leading semiclassical exponential. The physics behind this property is quite clear: we consider tunneling at fixed energy, so the de-excitation of the fluctuations along $`y`$ means a transfer of energy to the tunneling subsystem, which makes tunneling (exponentially) more probable. Conversely, if the fluctuations along $`y`$ get excited, the kinetic energy along $`x`$ decreases, and tunneling is suppressed more strongly.
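A toy numerical example of the resulting suppression factor (our own choice of barrier; nothing here is taken from the text):

```python
import numpy as np
from scipy.integrate import quad

# Zero-energy WKB exponent S = int dx sqrt(2 V0(x)) for the toy barrier
# V0(x) = x^2/2 - x^3/3, whose turning points V0 = 0 are x = 0 and x = 3/2.
# The under-barrier amplitude carries the factor exp(-S).
V0 = lambda x: 0.5 * x**2 - x**3 / 3.0
S, _ = quad(lambda x: np.sqrt(2.0 * max(V0(x), 0.0)), 0.0, 1.5)
print(S, np.exp(-S))        # S = 3/5 analytically, so exp(-S) ~ 0.55
```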
## 3 Wave function of the Universe

To discuss specific aspects of quantum cosmology, let us consider the closed Friedmann–Robertson–Walker Universe with scale factor $`a`$. Let us introduce a cosmological constant $`\mathrm{\Lambda }`$, a minimal scalar field $`\varphi (x)`$ with scalar potential $`V(\varphi )`$, and also a massless conformal scalar field. We are going to treat the dynamics of the scale factor in a semiclassical manner; in this respect $`a`$ is analogous to the variable $`x`$ of the previous section. The minimal scalar field (as well as gravitons) will be considered within perturbation theory, so each of the modes $`\varphi _𝐤`$ is analogous to the variable $`y`$ of the previous section.

The basic equation in quantum cosmology is the Wheeler–De Witt equation, which in our case reads $$\left[-\frac{1}{2}\widehat{p}_a^2-\frac{1}{2}a^2+\mathrm{\Lambda }a^4+\widehat{H}_\varphi \right]\mathrm{\Psi }=-ϵ\mathrm{\Psi }$$ (7) where we have set $`3M_{Pl}^2/16\pi =1`$ and ignored operator-ordering problems, which are irrelevant for our discussion. Here $$\widehat{H}_\varphi =\int \frac{d^3x}{2\pi ^2}\left[\frac{1}{2a^2}\widehat{p}_\varphi ^2+\frac{a^2}{2}(\partial _i\widehat{\varphi })^2+a^4V(\widehat{\varphi })\right]$$ is the term due to the minimal scalar field; at the classical level $`\widehat{H}_\varphi `$ is the energy of matter defined with respect to conformal time. The non-negative constant $`ϵ`$ on the right-hand side of eq. (7) is the contribution of the conformal scalar field; the only purpose of introducing the latter field is to allow for non-zero $`ϵ`$.

We do not consider gravitons in what follows, as they are similar to the quanta of the minimal scalar field $`\varphi `$. In the spirit of the Born–Oppenheimer approximation, let us first neglect the conformal energy of the field $`\varphi `$, i.e., omit the term $`\widehat{H}_\varphi `$ in eq. (7). Then the Wheeler–De Witt equation takes the form of the time-independent Schrödinger equation in the quantum mechanics of one generalized coordinate $`a`$, with energy $`ϵ`$ and potential $$U(a)=\frac{1}{2}a^2-\mathrm{\Lambda }a^4$$ At $`16\mathrm{\Lambda }^2ϵ<1`$, there are two classically allowed regions: at small $`a`$ ($`0<a^2<[1-\sqrt{1-16\mathrm{\Lambda }^2ϵ}]/4\mathrm{\Lambda }`$) and at large $`a`$ ($`\infty >a^2>[1+\sqrt{1-16\mathrm{\Lambda }^2ϵ}]/4\mathrm{\Lambda }`$). At the classical level, the former region corresponds to an expanding and recollapsing Friedmann-like closed Universe, while the latter corresponds to de Sitter-like behavior. As $`ϵ\to 0`$, the first classically allowed region disappears, while the second becomes exactly de Sitter. In between these two regions classical evolution is impossible (if one neglects $`\widehat{H}_\varphi `$), and one has to consider classically forbidden "motion". Let us discuss the classically allowed and classically forbidden regions separately.

### 3.1 Classically allowed region: issue of the arrow of time

To be specific, let us consider the classically allowed de Sitter-like region, where the scale factor $`a`$ is large. In the leading order, there are again two types of semiclassical wave functions, $$\mathrm{\Psi }\propto \text{e}^{-iS(a)}$$ (8) and $$\mathrm{\Psi }\propto \text{e}^{+iS(a)}$$ (9) where $$S(a)=\int ^a𝑑a^{}\sqrt{2(ϵ-U(a^{}))}$$ Classically, the momentum is related to the derivative of the conformal factor with respect to conformal time, $$\frac{da}{d\eta }=-p_a$$ For the two semiclassical wave functions one has $$\widehat{p}_a\mathrm{\Psi }=\mp \left(\frac{\partial S}{\partial a}\right)\mathrm{\Psi }$$ where the upper and lower signs refer to eq. (8) and eq. (9), respectively. Hence, one is tempted to interpret the wave functions (8) and (9) as describing expanding and contracting Universes, respectively. Indeed, the Hartle–Hawking wave function, which in the allowed region is the superposition $$\mathrm{\Psi }_{HH}\propto \text{e}^{-iS(a)}+\text{e}^{+iS(a)}$$ (10) is often interpreted as describing a collapsing and re-expanding de Sitter-like Universe. A similar interpretation is often given to the Linde wave function . On the other hand, the tunneling wave functions, which contain one wave in the allowed region, $$\mathrm{\Psi }_{tun}\propto \text{e}^{-iS(a)}$$ are often assumed to be the only ones that correspond to an expanding, but not contracting, Universe; this is, at least partially, the basis for the tunneling interpretation.

An important difference with conventional quantum mechanics is, however, the absence of extrinsic time in quantum cosmology. Hence, the arrow of intrinsic time has yet to be determined. In other words, there is no a priori reason to interpret the wave functions (8) and (9) as describing expanding and contracting Universes, respectively. The sign of the semiclassical exponent does not by itself determine the arrow of time. Were the scale factor the only dynamical variable, it would be impossible to decide whether, say, the wave function (8) corresponds to an expanding or a contracting Universe. If the matter fields (and/or gravitons) are included, this should be possible.
Before discussing this point, let us derive the equation for the wave function describing matter , again in the spirit of the Born–Oppenheimer approximation. Let us extend the wave functions (8) and (9) to contain the dependence on the matter variables, $$|\mathrm{\Psi }(a)\rangle =\frac{1}{\sqrt{p_a}}\text{e}^{\mp iS(a)}|\stackrel{~}{\mathrm{\Psi }}(a)\rangle $$ (11) where at given $`a`$ both $`|\mathrm{\Psi }(a)\rangle `$ and $`|\stackrel{~}{\mathrm{\Psi }}(a)\rangle `$ belong to the Hilbert space in which $`\widehat{\varphi }(𝐱)`$ and $`\widehat{p}_\varphi (𝐱)`$ act. As an example, one may (but does not have to) choose the generalized coordinate representation; then $`|\mathrm{\Psi }(a)\rangle `$ becomes a function $`\mathrm{\Psi }(\{\varphi _𝐤\};a)`$ of the Fourier components of $`\varphi `$. To first order in $`\hbar `$ one obtains from eq. (7) $$\pm i\sqrt{ϵ-U(a)}\frac{\partial |\stackrel{~}{\mathrm{\Psi }}(a)\rangle }{\partial a}=\widehat{H}_\varphi |\stackrel{~}{\mathrm{\Psi }}(a)\rangle $$ (12) in complete analogy to eq. (2). The arrow of time is now determined by where (at what $`a`$) and which initial conditions are imposed on $`|\stackrel{~}{\mathrm{\Psi }}(a)\rangle `$.

As an example, let us assume that the initial conditions for the evolution in real intrinsic time are imposed at small $`a`$ (at the turning point $`a^2=[1+\sqrt{1-16\mathrm{\Lambda }^2ϵ}]/4\mathrm{\Lambda }`$), and that at that point $`|\stackrel{~}{\mathrm{\Psi }}\rangle `$ describes a smooth distribution of the scalar field. This type of initial data is characteristic, in particular, of the Hartle–Hawking no-boundary wave function. As $`a`$ increases, the system will become more and more disordered, independently of the sign in eq. (12). With this thermodynamical arrow of time, both wave functions (11) describe an expanding Universe.

If, with these initial conditions, one changes variables from $`a`$ to $`\eta `$ using $$\frac{da}{d\eta }=\sqrt{ϵ-U(a)}$$ then $`\eta `$ increases with $`a`$, so that $`\eta `$ is the conformal intrinsic time, independently of the choice of sign in eq. (11). In the case of the positive sign, eq. (12) becomes the conventional Schrödinger equation for quantized matter in the expanding Universe, $$i\frac{\partial |\stackrel{~}{\mathrm{\Psi }}\rangle }{\partial \eta }=\widehat{H}_\varphi (\eta )|\stackrel{~}{\mathrm{\Psi }}\rangle $$ (13) where the matter Hamiltonian depends on $`\eta `$ through $`a(\eta )`$. On the other hand, in the case of the negative sign one obtains the "wrong sign" Schrödinger equation, $$i\frac{\partial |\stackrel{~}{\mathrm{\Psi }}\rangle }{\partial \eta }=-\widehat{H}_\varphi (\eta )|\stackrel{~}{\mathrm{\Psi }}\rangle $$ This little problem is easily cured by considering, instead of $`|\stackrel{~}{\mathrm{\Psi }}\rangle `$, its $`T`$-conjugate, $`|\stackrel{~}{\mathrm{\Psi }}^{(T)}\rangle `$; if the generalized coordinate representation is chosen for $`|\stackrel{~}{\mathrm{\Psi }}\rangle `$, then $`T`$-conjugation is merely complex conjugation, $`\stackrel{~}{\mathrm{\Psi }}^{(T)}(\varphi _𝐤;\eta )=\stackrel{~}{\mathrm{\Psi }}^{\ast }(\varphi _𝐤;\eta )`$. The $`T`$-conjugate wave function obeys the conventional Schrödinger equation, but with a $`CP`$-transformed Hamiltonian. Hence, the interpretation of both wave functions (11) as describing an expanding Universe is self-consistent; the only peculiarity is that the wave function $`\text{e}^{+iS}|\stackrel{~}{\mathrm{\Psi }}\rangle `$ corresponds to a Universe in which matter is $`CP`$-conjugate. In particular, we argue that both components of the Hartle–Hawking wave function (10) correspond to expanding Universes.
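The one-line check behind this cure (our own filling-in):

```latex
% Complex-conjugating the "wrong sign" equation
%   i d/d(eta) |Psi~>  =  - H_phi(eta) |Psi~>
% gives a conventional Schrodinger equation for |Psi~^(T)> = |Psi~*>,
% with the complex-conjugated (CP-transformed) matter Hamiltonian:
i\,\partial_\eta \tilde{\Psi}^{*} = +\,\widehat{H}_\varphi^{*}(\eta)\,\tilde{\Psi}^{*}.
```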
In more generic cases (in particular, when the matter degrees of freedom cannot be treated perturbatively; see, e.g., Refs. and references therein), the situation may be much more complicated. Still, the arrow of time is generally expected to be one of the key issues in the interpretation of the wave function of the Universe.

### 3.2 Classically forbidden region: issue of the stability of the Born–Oppenheimer approximation

We now consider the region of the scale factor that is classically forbidden in the absence of $`\widehat{H}_\varphi `$, i.e., $`a_1<a<a_2`$, where $$a_{1,2}^2=\frac{1\mp \sqrt{1-16\mathrm{\Lambda }^2ϵ}}{4\mathrm{\Lambda }}$$ If $`\widehat{H}_\varphi `$ is switched off, there are two semiclassical solutions to the Wheeler–De Witt equation, $$\mathrm{\Psi }\propto \text{e}^{-S(a)}$$ (14) and $$\mathrm{\Psi }\propto \text{e}^{+S(a)}$$ (15) where $$S(a)=\int _{a_1}^a𝑑a^{}\sqrt{2(U(a^{})-ϵ)}$$ is defined in such a way that it always increases towards large $`a`$. The wave function (14) decays as $`a`$ increases, so it may be interpreted as describing tunneling from the classically allowed Friedmann region to the de Sitter-like one. It is convenient to introduce the Euclidean conformal time parameter $`\tau `$ and consider the Euclidean trajectory $`a_c(\tau )`$ obeying $$\frac{da_c}{d\tau }=\frac{\partial S}{\partial a}(a=a_c)$$ At $`ϵ=0`$ the Euclidean four-geometry corresponding to this trajectory is a four-sphere, the standard de Sitter instanton.

Let us now turn on the scalar field Hamiltonian $`\widehat{H}_\varphi `$ and try to apply a procedure of the Born–Oppenheimer type. We write, instead of eq. (14), for the wave function decaying at large $`a`$, $$|\mathrm{\Psi }(a)\rangle =\frac{1}{\sqrt{p_a}}\text{e}^{-S(a)}|\stackrel{~}{\mathrm{\Psi }}(a)\rangle $$ and obtain, to first order in $`\hbar `$, that $`|\stackrel{~}{\mathrm{\Psi }}(a)\rangle `$ obeys the "wrong sign" Euclidean Schrödinger equation $$\frac{\partial |\stackrel{~}{\mathrm{\Psi }}(\tau )\rangle }{\partial \tau }=+\widehat{H}_\varphi (\tau )|\stackrel{~}{\mathrm{\Psi }}(\tau )\rangle $$ (16) where the change of variables from $`a`$ to $`\tau `$, $`a=a_c(\tau )`$, has been performed. The sign on the right-hand side of this equation is opposite to the one appearing in usual quantum mechanics, eq. (6), and is directly related to the sign of the $`\widehat{p}_a^2`$ term in the Wheeler–De Witt equation (7).

The "wrong" sign in eq. (16) implies that the approximation we use is in fact unstable if generic "initial" conditions are imposed at small $`a`$, say at $`a=a_1`$. Note that imposing initial conditions in this way is natural if one interprets the wave function decaying at large $`a`$ as describing tunneling from small to large $`a`$. The formal reason for the instability of the approximation is that the degrees of freedom of the scalar field get excited as $`a`$ increases in the forbidden region. The rate at which this excitation occurs is generically high , and the approximation breaks down well before $`a`$ gets close to the second turning point $`a_2`$.

In the path-integral framework, the breaking of the Born–Oppenheimer-type approximation for the wave function decaying at large $`a`$ is also manifest . This wave function corresponds to the Euclidean path integral with the "wrong" sign of the action, $$\int DgD\varphi \text{e}^{+S[g,\varphi ]}$$ The instanton action then gives the factor $`\text{e}^{S_{inst}}`$, but the integral over $`\varphi `$ (and gravitons) diverges. The physics behind this instability is that the tunneling of a Universe filled with matter is exponentially more probable than that of an empty Universe.
Hence, the matter degrees of freedom tend to get excited in the forbidden region, thus making tunneling easier. Note that this property is peculiar to quantum cosmology: in quantum mechanics the situation is the opposite, as we discussed in the previous section.

There are exceptional cases in which the matter degrees of freedom do not get excited in the forbidden region, e.g., because of symmetry. In our model this would be the case if $`ϵ=0`$ and the scalar field $`\varphi `$ were in the de Sitter-invariant state, cf. Ref. . Such cases do not seem generic, however.

The breaking of the Born–Oppenheimer approximation does not necessarily mean that tunneling-like transitions from small $`a`$ to large $`a`$ with a generic state of matter at small $`a`$ do not make sense. Rather, it is the semiclassical expansion that does not work in this case, so the state of the Universe after the transition may be quite unusual. Presently, neither the properties of this state nor the properties of the wave function in the forbidden region are understood (except for the special cases mentioned above).

The situation is different for the wave function increasing towards large $`a`$, eq. (15). In that case the matter wave function obeys the usual Euclidean Schrödinger equation, $`\partial |\stackrel{~}{\mathrm{\Psi }}(\tau )\rangle /\partial \tau =-\widehat{H}_\varphi (\tau )|\stackrel{~}{\mathrm{\Psi }}(\tau )\rangle `$, where $`\tau `$ is still assumed to increase with $`a`$. Hence, it is possible to impose fairly general initial conditions at small $`a`$, and the approximation will not break down. In particular, the Hartle–Hawking wave function is a legitimate approximate solution to the Wheeler–De Witt equation in the forbidden region. This is in accord with the path-integral treatment: the increasing wave function (15) corresponds to the standard sign of the Euclidean action in the path integral.

The non-semiclassical behavior of the tunneling wave functions, signalled by the instability of the Born–Oppenheimer-type approximation, is a special, and potentially interesting, feature of quantum cosmology. It is a challenging technical problem to develop techniques adequate to this situation. It is also not excluded that the properties of the tunneling wave functions are rich and complex, and that understanding them may shed light on the beginning of our Universe.

The author is indebted to A. Albrecht, J. Goldstone, N. Turok, W. Unruh and A. Vilenkin for helpful discussions.
# CDM N-body cosmological simulations in a $`L_{BOX}=30h^{-1}`$ Mpc box

P. Colín is very grateful to R. Carlberg for kindly supplying an account on the system of DEC Alpha Stations at CITA, where part of the AP<sup>3</sup>M simulations were run. These simulations were also carried out on the Origin-2000 at the Dirección General de Servicios de Cómputo, UNAM, Mexico.
# X-ray Beaming in the High Magnetic Field Pulsar GX 1+4

## 1. Introduction

To date, essentially all the approaches which have been used to model the emission region in X-ray pulsars have limitations. Past efforts have typically adopted a geometry suitable for a particular accretion rate ($`\dot{M}`$) regime and then predicted the emission properties by a range of techniques. Radiative transfer calculations (e.g. Burnard, Arons & Klein 1991) are necessarily restricted to symmetric, homogeneous emission regions, where in reality the accretion column may be hollow and even incomplete (an 'accretion curtain'). The geometric fitting approach, where a beam pattern is assumed and the geometry is then varied (e.g. Leahy 1991), cannot reproduce the sharper features observed in several sources. Neither method can generate asymmetric pulse profiles without resorting to an off-center magnetic axis, for which there is no other observational evidence.

Recent observations of the X-ray pulsar GX 1+4 suggest a rather different scenario. The X-ray continuum spectrum of GX 1+4 is rather flat (with photon index $`\simeq 1.0`$) up to a cutoff around 10-20 keV, above which the decay is steeper; it is one of the hardest known amongst the X-ray pulsars. Analysis of recent Rossi X-ray Timing Explorer (RXTE) data shows that the spectrum is generally consistent with those predicted by unsaturated Comptonisation models (e.g. Galloway et al. 1999). Pulse profiles are extremely variable and typically asymmetric, often with a sharp dip forming the primary minimum (Greenhill, Galloway & Storey 1998). During a low-flux episode in July 1996, the pulse profiles were found to shift in asymmetry from 'leading-edge bright' (with the maximum closely following the sharp primary minimum) to 'trailing-edge bright'. The entire observation which captured the change spanned only 34 hours and occurred just 10 days before a short-lived transition from rather constant spin-down to spin-up and back again (Giles et al. 1999). We propose a model which seeks to explain the sharp primary minima seen in this and other sources (A 0535+262, Cemeljic & Bulik 1998; and RX J0812.4-3114, Reig & Roche 1999) and, ultimately, the change in the pulse profiles.

## 2. Model Description and Preliminary Results

A Monte-Carlo code is used to generate the spectra and pulse profiles emitted by two semi-infinite homogeneous cylindrical accretion columns of radius $`R_C`$, diametrically located on the surface of a 'canonical' neutron star of radius $`R_{\ast }=10`$ km (Figure 1a). The algorithms of Pozdnyakov, Sobol' & Syunyaev (1983) are used to draw the photon energies and directions and the electron energies, and to calculate the fully relativistic (non-magnetic) cross-section for Compton scattering. Outside the accretion column, the redshift and bending of photon trajectories by the neutron star's gravity are calculated by assuming a Schwarzschild metric. We simulate a single column for both poles of the star, and generate pulse profiles for a range of geometries simultaneously.

The beam pattern and pulse profiles over a range of geometries are shown in Figure 1 b) and c). We note that when $`|i-\beta |\lesssim 45^{\circ }`$ the emission exhibits a strong modulation at the star's rotation period, with the primary minimum corresponding to the closest passage of the line of sight to one of the magnetic polar axes. As $`i`$ and $`\beta `$ increase, the primary minimum becomes progressively narrower.
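As an aside, the Schwarzschild light-bending step mentioned above can be sketched numerically (our own illustration: the 10 km radius follows the text, while the 1.4 solar-mass value and all other numbers are assumptions; $`\psi `$ is the total angle swept between the emission radius vector and the escape direction of a photon leaving the surface at angle $`\delta `$ to the radial direction):

```python
import numpy as np
from scipy.integrate import quad

# A photon leaving radius R at angle delta to the radial direction has
# impact parameter b = R sin(delta)/sqrt(1 - r_s/R); the angle swept
# between emission and escape to infinity is the standard bending integral.
G, c, Msun = 6.674e-11, 2.998e8, 1.989e30
M, R = 1.4 * Msun, 10e3                     # assumed mass; 10 km radius
r_s = 2 * G * M / c**2                      # Schwarzschild radius, ~4.1 km

def psi_infinity(delta):
    b = R * np.sin(delta) / np.sqrt(1 - r_s / R)
    f = lambda r: 1.0 / (r**2 * np.sqrt(1.0 / b**2 - (1 - r_s / r) / r**2))
    val, _ = quad(f, R, np.inf, limit=200)
    return val

for deg in (30.0, 60.0):                    # oblique rays are bent forward
    print(deg, "->", np.degrees(psi_infinity(np.radians(deg))))
```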
When $`i,\beta \gtrsim 50^{\circ }`$, a secondary minimum (from the passage of the second axis through the line of sight) is observed. The emission is beamed at an angle $`>90^{\circ }`$ with respect to the column axis; this corresponds to a 'fan'-type beam. Emission at smaller angles is suppressed as a consequence of the decreased escape probability for photons propagating along the column axis. That the emission is beamed at $`>90^{\circ }`$ is a consequence of the gravitational light bending; this is also affected by the size of the column $`R_C`$. Our simulations indicate that the mean spectra also depend strongly on the density of the column and on the viewing geometry.

## 3. Discussion and application to X-ray pulsars

Since we assume a constant infall velocity $`v_C`$ and neglect effects due to radiation pressure on the infalling electrons, the results described are only applicable to systems with low $`\dot{M}`$. Previous low-$`\dot{M}`$ models predict a 'pencil' rather than a 'fan' beam, with emission reaching a maximum at small angles relative to the accretion column. This is probably because of the assumed 'slab' or 'mound' shaped emission region. Interactions between photons and the inflowing material in the accretion column, which are neglected by these models, are crucial for the formation of the sharp primary minima in the pulse profiles observed in GX 1+4 and several other X-ray pulsars. The persistence of the sharp feature in GX 1+4 as the X-ray flux drops almost to zero points to the continued importance of this effect, even at extremely low $`\dot{M}`$ (Giles et al. 1999).

A significant approximation is the use of the non-magnetic Compton scattering cross-section. For GX 1+4, with an estimated magnetic field strength of 2-3$`\times 10^{13}`$ G (Cui et al. 1997), deviations from the non-magnetic cross-section will be significant within typical observational bands for X-ray astronomy. However, we suspect that magnetic effects may only play a minor role in shaping the pulse profile, principally narrowing the primary minimum and possibly giving rise to the local maxima ('shoulders') immediately prior to and following the minimum (Giles et al. 1999).

Finally, we note that the model-predicted pulse profiles are in general quite symmetric. A possible cause of asymmetry in the observed profiles is a variation in density across the accretion column, which could potentially develop in the region where the disc plasma becomes entrained onto the magnetic field lines, and persist to the neutron star surface. This effect further suggests a mechanism for the rapid changes in profile asymmetry observed in GX 1+4 (Giles et al. 1999): the sense of the asymmetry in the column changes, and consequently so does the pulse profile. The detailed structure of the entrainment region is rather poorly understood, and we feel it is not possible to rule out such a phenomenon.

We have described a model with homogeneous, axisymmetric, cylindrical emission regions. To explain other qualitative features of observed pulse profiles, our model needs to be modified to take into account effects due to inhomogeneities and a more complicated geometry of the emission regions.

## REFERENCES

Burnard, D.J., Arons, J., Klein, R.I. 1991, ApJ, 367, 575

Cemeljic, M., Bulik, T. 1998, AcA, 48, 65

Daugherty, J.K., Harding, A.K. 1986, ApJ, 309, 362
Galloway, D.K., Giles, A.B., Greenhill, J.G., Storey, M.C. 1999, accepted for publication by MNRAS

Giles, A.B., Galloway, D.K., Greenhill, J.G., Storey, M.C., Wilson, C.A. 1999, accepted for publication by ApJ

Greenhill, J.G., Galloway, D.K., Storey, M.C. 1998, PASA, 15, 2, 254

Leahy, D.A. 1991, MNRAS, 251, 203

Pozdnyakov, L.A., Sobol', I.M., Syunyaev, R.A. 1983, Astrophys. Space Phys. Rev., 2, 189

Reig, P., Roche, P. 1999, MNRAS, 306, 95
no-problem/9910/astro-ph9910448.html
ar5iv
text
# GIANT AND ‘DOUBLE-DOUBLE’ RADIO GALAXIES: IMPLICATIONS FOR THE EVOLUTION OF POWERFUL RADIO SOURCES AND THE IGM ## 1 Giant Radio Galaxies The central activity in radio-loud Active Galactic Nuclei (AGN) produces relativistic outflows of matter, the so-called ‘jets’, for a prolonged period of time, possibly up to a few $`10^8`$ yr. These jets, when powerful enough, inflate a cocoon (e.g. , ) which expands first into the Interstellar Medium (ISM) and later into the Intergalactic Medium (IGM). Within this cocoon, which consists of accelerated jet material, synchrotron radio emission is produced. The evolution of the cocoon can therefore be traced by observations of the radio lobes. Giant radio galaxies (GRGs) are radio sources whose lobes span a (projected) distance of more than 1 Mpc (we use $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0=0.5`$ throughout this contribution). Since radio sources grow in size as they age (e.g. , ), GRGs must represent a late phase in the evolution of radio sources. Probably not all sources will live long enough, or grow rapidly enough, to reach the size of the Giant radio sources; what fraction do so, and under what circumstances, is still unclear. According to radio source evolution models, GRGs must be extremely old (typically more than $`10^8`$ yr) and/or located in very underdense environments, as compared to smaller radio sources (e.g. ). The age of a radio source can be estimated from sensitive multi-frequency radio observations of the radio lobes (e.g. ). The first systematic results for a small sample of GRGs show that the spectral ages found in this way are indeed comparable to those expected from source evolution models . However, such studies have always been severely hampered by the fact that large, uniformly selected samples of GRGs do not exist. Since GRGs have sizes considerably larger than galactic or even cluster halo cores, their radio lobes interact with the intergalactic medium (IGM). Therefore, by studying the properties of the radio lobes we can constrain the properties of the IGM. For instance, in an adiabatically expanding Universe filled with a hot, diffuse and uniform IGM, the IGM pressure, $`p_{igm}`$, should increase as a function of redshift, $`z`$, as $`p_{igm}\propto (1+z)^5`$ (e.g. ). Using a small sample of GRGs, Subrahmanyan & Saripalli limit the local value of the IGM pressure, $`p_{igm,0}`$, to $`p_{igm,0}\lesssim 2\times 10^{-14}`$ dyn cm<sup>-2</sup>. Cotter performs a similar analysis with a larger sample of sources which also extends to higher redshifts (up to $`z\sim 1`$), and he confirms that the observed evolution in radio lobe pressures does not contradict a $`(1+z)^5`$ relation. However, these results might be biased, since the known distant GRGs are likely to be the most powerful ones at their epoch and thus to have the highest equipartition lobe pressures. In order to address the above issues more carefully, it is vital to use a sample of GRGs with well understood selection effects. We have compiled such a sample of GRGs from the 325-MHz Westerbork Northern Sky Survey (WENSS; e.g. ). From the WENSS we have selected all radio sources with a (projected) size exceeding 1 Mpc, a flux density above 1 Jy, an angular size above 5 arcminutes and a distance from the Galactic plane larger than 12.5 degrees. Our sample consists of 26 sources, of which 10 are newly discovered GRGs.
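For orientation, the $`(1+z)^5`$ scaling quoted above follows in one step for a hot, uniform IGM that expands adiabatically with the Universe. This is only a sketch of the standard argument, assuming a monatomic ideal gas ($`\gamma =5/3`$) whose density dilutes with the cosmological volume, $`n\propto (1+z)^3`$: $$p_{igm}\propto n^\gamma \propto \left[(1+z)^3\right]^{5/3}=(1+z)^5.$$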
We have used the WENSS radio maps, in combination with maps from the 1.4-GHz NRAO VLA Sky Survey (NVSS, ) and our 10.5-GHz Effelsberg observations, to study the properties of the radio lobes of the 22 FRII-type sources in our sample. Since GRGs are large sources on the sky, this is already possible with the modest angular resolution of these datasets (i.e. $`\sim 1`$ arcmin). In cases where we could determine the spectral age from a steepening of the lobe spectrum away from the hotspot, we find ages in the range 50–100 Myr, which agrees with earlier results on GRG spectral ages (e.g. ). We also find that the GRGs tend to have higher lobe advance velocities than smaller sources of similar observed radio power . ## 2 Radio lobe pressure evolution Estimates of the internal pressures of radio lobes are directly obtained from estimates of the equipartition energy densities, $`u_{eq}`$, since in a relativistic plasma the pressure is given by $`p=\frac{1}{3}u`$, where $`u`$ is the energy density. The equipartition energy density $`u_{eq}`$ can be obtained from radio observations (e.g. ). If a radio source is well resolved (i.e. larger than 8 arcminutes), we have divided the radio lobe into several regions and estimated the energy density in each of these. In this way we have measured the radio lobe pressure along the radio axes of the FRII-type sources. In Fig. 1 we plot four examples of energy density (i.e. pressure) profiles of GRGs. All sources presented here show a decrease in energy density when going back from the hotspots (situated at the outer edges) to the radio cores in the center. In some cases, the energy density rises again in the vicinity of the center. This can be due either to the presence of a strong radio core and/or jet, or to a real increase in the lobe pressure as a result of a higher pressure in the environment caused by the presence of, e.g., a gaseous galactic halo. The decrease in energy density as a function of hotspot distance indicates the presence of a pressure gradient in the lobes. This suggests that the radio lobes are still overpressured with respect to the ambient medium, the IGM. We have also calculated the intensity-weighted average lobe energy density for each source in our sample. Fig. 2 shows these values plotted against the redshifts of the sources. We have separated sources smaller and larger than 2 Mpc (closed and open symbols, respectively). There are two things to note in this plot. First, the larger sources tend to have the lowest average lobe pressures. This confirms the trend already noted by Cotter for sources smaller and larger than 1 Mpc. Second, although the redshift range of our sources is limited ($`z\lesssim 0.4`$), there appears to be a correlation between energy density and redshift. This agrees with the results of Saripalli & Subrahmanyan and Cotter and does not contradict a $`(1+z)^5`$ increase of the IGM pressure, provided that the current-day value is $`\sim 10^{-14}`$ dyn cm<sup>-2</sup> (indicated by the dotted line in Fig. 2). We note, however, that the observed increase in lobe energy density with increasing redshift in Fig. 2 also exactly matches the expected behaviour for a source of fixed dimensions and flux density. This is shown by the dashed line in Fig. 2, which indicates the expected equipartition energy density in the lobes of a source with a size equal to the median size of the GRG sample and a flux density equal to the median flux density of the sample.
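The dashed-line argument can be illustrated with a simple scaling sketch. This is not the full minimum-energy calculation behind Fig. 2; it assumes only the standard scaling $`u_{eq}\propto (L_\nu /V)^{4/7}`$ for a source of fixed observed flux density and fixed volume, with an arbitrary normalization (the function names and parameter values below are illustrative):

```python
import numpy as np

C_KMS = 2.998e5      # speed of light [km/s]
H0 = 50.0            # [km/s/Mpc], as adopted in this contribution

def d_lum_mpc(z):
    """Luminosity distance for q0 = 0.5 (Mattig relation) [Mpc]."""
    return (2.0 * C_KMS / H0) * ((1.0 + z) - np.sqrt(1.0 + z))

def u_eq_relative(z, s_jy=1.0, alpha=0.8):
    """Relative equipartition energy density, u_eq ~ (L_nu/V)^(4/7), for a
    source of fixed flux density and volume; only the z-slope matters."""
    l_nu = s_jy * d_lum_mpc(z)**2 * (1.0 + z)**(1.0 + alpha)  # K-corrected
    return l_nu**(4.0 / 7.0)

z = np.array([0.05, 0.1, 0.2, 0.4])
print(u_eq_relative(z) / u_eq_relative(z[0]))   # rise of the dashed line
print(((1.0 + z) / (1.0 + z[0]))**5)            # (1+z)^5 pressure, for comparison
```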
The slope of this line matches the observed redshift relation of the energy density in our sources. Therefore, the observed redshift relation is more likely due to the use of a flux-density and source-volume limited sample than to any cosmological effect. The same relation must apply to the sources of Cotter , since his sample of high-redshift GRGs also forms a flux-density limited sample (flux density between 0.4 and 1.0 Jy at 151 MHz) of sources larger than $`\sim 1`$ Mpc. We therefore conclude that there is no evidence for a strong increase in the IGM pressure with increasing redshift, although we cannot reject it either. To investigate whether the IGM pressure truly evolves as strongly as $`(1+z)^5`$, it would be necessary to find Mpc-sized radio sources at high redshifts. From Fig. 2 it can be deduced that the existence of a population of sources with lobe energy densities of $`\sim 3\times 10^{-14}`$ erg cm<sup>-3</sup> at redshifts of at least 0.6 would be difficult to reconcile with a strong pressure evolution, unless the current-day IGM pressure is much lower than $`10^{-14}`$ dyn cm<sup>-2</sup>. Such sources are expected to have flux densities of $`\sim 200`$ mJy at 325 MHz (assuming a size of 1.5 Mpc, a spectral index of $`-0.8`$ and an aspect ratio of 3), and are thus detectable by WENSS. Indeed, we have started to compile a sample of such sources. Detailed observations of their radio structures at low frequencies (i.e. $`\sim 100`$ MHz) will be necessary to estimate the lobe energy densities correctly. ## 3 ‘Double-double’ radio galaxies One of the outstanding issues concerning extragalactic radio sources and other Active Galactic Nuclei (AGN) is the total duration of their active phase. For radio sources, this physical age of the nuclear activity is not to be confused with the radiative-loss age determined from radio spectral ageing arguments; many extragalactic radio sources probably have a physical age well surpassing their radiative-loss age (e.g. , , but also see ). The length of the active phase is intimately related to the possible existence of duty cycles of nuclear activity. If nuclear activity is not continuous, how often do interruptions occur and how long do they last? Such duty cycles can only be recognized if there is some mechanism that preserves the record of past nuclear activity long enough for it still to be visible when a new cycle starts up. In extended radio sources, such a mechanism is potentially provided by the radio lobes, since they remain detectable for a long time after their energy supply has ceased (possibly up to a few $`10^7`$ yr; e.g. ). If a new phase of activity starts before the ‘old’ radio lobes have faded, and if this activity manifests itself in the production of jets, we can in principle recognize this through the observation of a new, young radio source embedded in an old, relic structure. One well-known candidate for such a ‘restarted’ radio source is the radio galaxy 3C 219 (, , ). In this source, radio jets have been observed which abruptly become undetectable at some point between the core and the leading edge of the outer radio lobes. However, sources such as this are extremely rare and difficult to recognize. During our search for GRGs in the WENSS survey, we have found several sources which are excellent candidates for restarted radio sources. Radio contour plots of two of these, B 1834+620 and B 1450+333, are shown in Figs. 3 and 4, respectively. Both cases are clearly different from ‘standard’ FRII-type radio galaxies.
Since they consist of an inner double-lobed radio structure as well as a larger outer double-lobed structure, we have called these sources ‘double-double’ radio galaxies (DDRGs; , , ). In Schoenmakers et al. (, , ) we present a small sample of seven of these peculiar sources. Among their general properties are that in all cases the inner structures are less luminous than the outer structures, and that the difference in radio power between the inner and outer structures appears to decrease with increasing size of the inner structure. Further, almost all sources in our small sample have large linear sizes, above 700 kpc and ranging up to 3 Mpc. The observed two-sidedness and symmetry in the morphology of the inner structures strongly suggest a central cause for this phenomenon, and we believe that an interruption of the central jet-forming activity is the most likely one. For the source B 1834+620 (Fig. 3) we are able to constrain the time-scale of the interruption to $`\sim 6`$ Myr (, ). What actually causes the AGN to interrupt the radio activity is unclear. One possible cause is a large inflow of gas into the central region of the galaxy (e.g. from an infalling large molecular cloud), producing an instability in the accretion flow onto the central massive black hole. One of the most fascinating aspects of these sources is the very existence of the inner radio lobe structure. Numerical simulations of restarting systems (e.g. ) and physical considerations of the properties of cocoons produced by jets agree that the density inside cocoons is not high enough to allow the formation of the strong shocks related to the formation of hotspots and radio lobes. The fact that we nevertheless observe these must indicate that the density inside the cocoons is much higher than predicted by these models. Kaiser et al. (; also see ) present a model in which the density inside the cocoon is increased as the result of the entrainment and subsequent shredding of warm clouds from the IGM by the expanding cocoon. They show that after a long enough time (i.e. a few $`10^7`$ yr) the density inside the cocoon may have increased sufficiently to allow a new system of lobes and hotspots to form after an interruption of the jet flow. The long time scale can explain the large size of the DDRGs. The low densities inside the cocoon, as compared to the ambient medium, can explain the low radio power and the high advance velocities of the inner structures, estimated to be 0.2$`c`$–0.3$`c`$ (, , ). We therefore argue that the DDRGs represent a distinct phase in the evolution of radio sources. Among the questions that remain are the following: How many radio sources actually go through such a phase? What is the cause of the interruption? To answer these questions, much more detailed studies of these fascinating sources are required. To investigate the rate of occurrence, it might be interesting to search for old relic structures around known radio sources. Such an undertaking must be performed at low frequency, with high sensitivity and dynamic range. With the proposed sensitivity of SKA, this should be a feasible project. The cause of the interruption can perhaps be investigated by detailed optical and kinematical studies. However, the chance of finding anything may actually be small if the cause is only a small-scale event such as an infalling cloud. ## 4 The importance of SKA The next important step in GRG research will be the compilation of a large sample of higher-redshift GRGs.
This is interesting in many respects: First of all, such a sample will provide us with important constraints on the cosmological changes in radio source evolution. Since this is closely related to the evolution of the environments of radio sources on scales up to a few Mpc, it can teach us more about the coupling between the evolution of galaxies in clusters and that of the intra-cluster gas. Second, sensitive high-resolution observations are needed to investigate the radio lobe properties of high-redshift GRGs in some detail. This is vital for obtaining information on the spectral ages of GRGs, and on the ageing processes themselves. Since the energy density of the microwave background increases as $`(1+z)^4`$, the effect of inverse Compton scattering on the ageing of the particles in the radio lobes becomes increasingly important toward higher redshift. This will make the radio spectra of the bridges in the lobes steepen considerably, so that only sensitive low-frequency observations can detect these faint regions. Detailed studies of the rotation measures towards the radio lobes can show us density structures in the ambient medium of the lobes at distances of a Mpc from the host galaxy. Also, if part of the Faraday rotation were to occur within the radio lobes, such a study may yield unique information on the internal properties of the radio lobes, such as the thermal particle density and the magnetic field strengths. In the case of the DDRGs, similar studies can reveal the properties of the medium around the inner lobe structures, and can thus play an important role in testing the model proposed by Kaiser et al. for the formation of these structures. Another important topic is how common the DDRG phenomenon is, a question closely related to that of the occurrence of duty cycles in AGN. The DDRGs we know now are the ones with the most prominent outer structures, and as such they may form only the tip of the iceberg of radio sources with multiple periods of activity. With a sensitive low-frequency telescope we can search the areas surrounding known radio sources for possible relic structures, indicative of an earlier phase of activity. Therefore, the properties that will make SKA an extremely important instrument for future research on GRGs and DDRGs are a high (sub-arcsecond) angular resolution at low frequency (100–1000 MHz), combined with excellent sensitivity and polarization characteristics. In order to investigate structures as large as GRGs, SKA should be capable of mapping large structures on the sky, up to a few tens of arcminutes, without losing sensitivity. We therefore argue for a next-generation radio telescope (i.e. SKA) which consists of a combination of a central compact array (a few km in diameter) and several long baselines (preferably up to a thousand km), and which should be able to observe routinely at frequencies as low as 50–100 MHz. ## Acknowledgements KHM is supported by the Deutsche Forschungsgemeinschaft, grant KL533/4–2, and by the European Commission, TMR Programme, Research Network Contract ERBFMRXCT96-0034 “CERES”. This work is supported in part by the Formation & Evolution of Galaxies network set up by the European Commission under contract ERB-FMRX-CT96-086 of its TMR programme.
no-problem/9910/astro-ph9910284.html
ar5iv
text
# A BeppoSAX observation of the cooling flow cluster Abell 2029 ## 1 Introduction Abell 2029 (hereafter A2029) is a rich, nearby ($`z=0.0766`$), X-ray luminous cluster of galaxies. In the optical band, Oegerle et al. (1995), by analyzing the velocity dispersion of a large number of galaxies, do not find any strong evidence of substructure in A2029. X-ray observations (e.g. Slezak, Durret & Gerbal 1994; Buote & Canizares 1996) provide clear evidence that A2029 is a regular cluster. Various authors, either by deprojection analysis of ROSAT data (e.g. Sarazin, O’Connell & McNamara 1992; Peres et al. 1998) or through spectral analysis of ASCA and ROSAT data (e.g. Sarazin, Wise & Markevitch 1998, hereafter S98), have measured a substantial cooling flow in the core of A2029. David et al. (1993), using Einstein MPC data, report a global temperature of $`7.8_{-0.7}^{+0.8}`$ keV. S98, from the analysis of ASCA data, find evidence of a temperature gradient. The projected temperature is found to decrease from $`\sim 9`$ keV to $`\sim 6`$ keV when going from the cluster core out to $`\sim 1.6`$ Mpc. The temperature map of A2029, presented by S98, is consistent with an azimuthally symmetric temperature pattern. Irwin, Bregman & Evrard (1999), who have used ROSAT PSPC data to search for temperature gradients in a sample of galaxy clusters including A2029, find, in agreement with S98, evidence of a radial temperature gradient in A2029. Using ASCA data, Allen & Fabian (1998) measure an average metal abundance of 0.46$`\pm `$0.03. S98, again using ASCA data, do not find compelling evidence of an abundance gradient in A2029. In this Letter we report a recent BeppoSAX observation of A2029. We use our data to perform an independent measurement of the temperature profile and two-dimensional map of A2029. We also present the abundance profile and the first abundance map of A2029. The outline of the Letter is as follows. In section 2 we give some information on the BeppoSAX observation of A2029 and on the data preparation. In section 3 we present the analysis of the broad-band spectrum (2-35 keV). In section 4 we present spatially resolved measurements of the temperature and metal abundance. In section 5 we discuss our results and compare them to previous findings. Throughout this Letter we assume H<sub>0</sub>=50 km s<sup>-1</sup> Mpc<sup>-1</sup> and q<sub>0</sub>=0.5. ## 2 Observation and Data Preparation The cluster A2029 was observed by the BeppoSAX satellite (Boella et al. 1997a) between the 4<sup>th</sup> and the 5<sup>th</sup> of February 1998. We will discuss here data from two of the instruments onboard BeppoSAX: the MECS and the PDS. The MECS (Boella et al. 1997b) is presently composed of two units (after the failure of a third one), working in the 1–10 keV energy range. At 6 keV, the energy resolution is $`\sim 8`$% and the angular resolution is $`\sim 0.7`$ arcmin (FWHM). The PDS instrument (Frontera et al. 1997) is a passively collimated detector (about 1.5$`\times `$1.5 degrees f.o.v.), working in the 13–200 keV energy range. Standard reduction procedures and screening criteria have been adopted to produce linearized and equalized event files. Both MECS and PDS data preparation and linearization were performed using the Saxdas package. The effective exposure time of the observation was 4.2$`\times `$10<sup>4</sup> s (MECS) and 1.8$`\times `$10<sup>4</sup> s (PDS). The observed count rate for A2029 was 0.812$`\pm `$0.004 cts/s for the two MECS units and 0.19$`\pm `$0.04 cts/s for the PDS instrument.
All MECS spectra discussed in this Letter have been background-subtracted using spectra extracted from blank-sky event files in the same region of the detector as the source. The energy range considered for spectral fitting is always 2-10 keV. All spectral fits have been performed using XSPEC Ver. 10.00. Quoted confidence intervals are 68% for one interesting parameter (i.e. $`\mathrm{\Delta }\chi ^2=1`$), unless otherwise stated. ## 3 Broad Band Spectroscopy We have extracted a MECS spectrum from a circular region of 8 arcmin radius (0.95 Mpc), centered on the emission peak. From the ROSAT PSPC radial profile, we estimate that about 90% of the total cluster emission falls within this radius. The PDS background-subtracted spectrum has been produced by subtracting the “off-source” spectrum from the “on-source” spectrum. As in Molendi et al. (1999) (hereafter M99), a numerical relative normalization factor between the MECS and PDS spectra has been included to account for: the fact that the MECS includes emission out to about 1 Mpc from the X-ray peak, while the PDS field of view covers the whole cluster; the mismatch in the absolute flux calibration of the MECS and PDS response matrices; and the vignetting in the PDS instrument. The estimated normalization factor is 0.76. In the fitting procedure we allow this factor to vary within 15% of the above value to account for the uncertainty in this parameter. The spectra from the two instruments have been fitted with a one-temperature thermal emission component plus a cooling flow component (the MEKAL and MKCFLOW codes in the XSPEC package), absorbed by a Galactic line-of-sight equivalent hydrogen column density, $`N_H`$, of 3.05$`\times 10^{20}`$ cm<sup>-2</sup> (Dickey & Lockman 1990). All parameters of the cooling flow component were fixed, as the energy range we use for spectral fitting (2-35 keV) is not particularly sensitive to this component. More specifically, the minimum temperature was fixed at 0.1 keV, the maximum temperature and the metal abundance were set equal to the temperature and the metal abundance of the MEKAL component, and the mass deposition rate, $`\dot{M}`$, was fixed at the value of 363 $`M_{\odot }`$ $`\mathrm{yr}^{-1}`$ derived by S98 when fitting ROSAT PSPC and ASCA GIS data. The model yields an acceptable fit to the data, $`\chi ^2=`$ 160.0 for 167 d.o.f. The best-fitting values for the temperature and the metal abundance are, respectively, 8.3$`\pm `$0.2 keV and 0.46$`\pm `$0.03, solar units. By assuming a value of $`\dot{M}=556M_{\odot }`$ $`\mathrm{yr}^{-1}`$, equal to the one derived by Peres et al. (1998) by deprojecting the ROSAT PSPC surface brightness profile, we obtain a fit of similar quality, $`\chi ^2=162.3`$ for 167 d.o.f., and derive a slightly higher value for the temperature, 8.6$`\pm `$0.2 keV, and a similar value for the abundance, 0.47$`\pm `$0.02.
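A model of this kind is straightforward to set up in XSPEC. The sketch below uses PyXspec and covers only the MECS portion of the fit (2-10 keV), with a hypothetical file name, the frozen cooling-flow parameters described above, and parameter indices assuming the wabs(mekal+mkcflow) layout; treat the whole snippet as an illustration rather than the exact script used here.

```python
from xspec import Fit, Model, Spectrum

s = Spectrum("a2029_mecs.pha")        # hypothetical file name
s.ignore("**-2.0 10.0-**")            # restrict to the 2-10 keV band

m = Model("wabs(mekal + mkcflow)")
m.wabs.nH = 0.0305                    # 3.05e20 cm^-2, in units of 1e22
m.mekal.kT = 8.0                      # starting temperature [keV]
m.mekal.Redshift = 0.0766
m.mkcflow.Redshift = 0.0766
m.mkcflow.lowT = 0.1                  # frozen minimum temperature [keV]
m.mkcflow.lowT.frozen = True
m.mkcflow.norm = 363.0                # Mdot [Msun/yr] at the S98 value,
m.mkcflow.norm.frozen = True          # with the XSPEC cosmology set suitably
m.mkcflow.highT.link = "2"            # tie to mekal kT (parameter 2)
m.mkcflow.Abundanc.link = "4"         # tie to mekal abundance (parameter 4)

Fit.perform()
print(Fit.statistic, Fit.dof)
```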
## 4 Spatially Resolved Spectral Analysis The spectral distortions introduced by the energy-dependent MECS PSF in spatially resolved spectral analysis have been taken into account using the method described in M99 and references therein. We have accumulated spectra from 5 annular regions centered on the X-ray emission peak, with inner and outer radii of 0-2, 2-4, 4-6, 6-8 and 8-12 arcmin. A correction for the absorption caused by the strongback supporting the detector window has been applied for the 8-12 arcmin annulus, where the annular part of the strongback is contained. For the 6-8 arcmin region, where the strongback covers only a small fraction of the available area, we have chosen to exclude the regions shadowed by the strongback. We have fitted each spectrum, except the one extracted from the innermost region, with a MEKAL model absorbed by the Galactic N<sub>H</sub> of 3.05$`\times 10^{20}`$ cm<sup>-2</sup>. In the spectrum from the 0-2 arcmin region we have included a cooling flow component; the parameters of this component have all been fixed, as in the fitting of the broad-band spectrum (see section 3). The temperature and abundance we derive for the innermost region are, respectively, 8.2 $`\pm `$ 0.4 keV and 0.53 $`\pm `$ 0.04, solar units, if we assume the mass deposition rate reported by S98, $`\dot{M}=363M_{\odot }`$ $`\mathrm{yr}^{-1}`$, and 9.0 $`\pm `$ 0.5 keV and 0.55 $`\pm `$ 0.05, solar units, if we assume the mass deposition rate reported by Peres et al. (1998), $`\dot{M}=556M_{\odot }`$ $`\mathrm{yr}^{-1}`$. In figure 1 we show the temperature and abundance profiles obtained from the spectral fits; the values reported for the innermost annulus are those obtained by fixing the mass deposition rate to $`\dot{M}=363M_{\odot }`$ $`\mathrm{yr}^{-1}`$. Our measurements are practically unaltered if we excise from the accumulated spectra the emission of two point-like sources clearly recognizable in the ROSAT PSPC image of A2029 (see figure 1 of S98). By fitting the temperature and abundance profiles with a constant we derive the following average values: $`7.7\pm `$0.2 keV and 0.41$`\pm `$0.03, solar units. A constant does not provide an acceptable fit to the temperature profile. Using the $`\chi ^2`$ statistic we find $`\chi ^2=`$15.2 for 5 d.o.f., corresponding to a probability of 0.009 for the observed distribution to be drawn from a constant parent distribution. A linear profile of the type kT = a $`+`$ br, where kT is in keV and r in arcminutes, provides a much better fit, $`\chi ^2=`$ 3.5 for 4 d.o.f. The best-fitting values for the parameters are a$`=8.7\pm 0.4`$ keV and b$`=-0.28\pm 0.08`$ keV arcmin<sup>-1</sup>. The improvement is statistically significant at more than the 97.5% level according to the F-test. As for the temperature, a constant does not provide an acceptable fit to the abundance profile, $`\chi ^2=`$16.4 for 4 d.o.f. (Prob.$`=`$0.002). Interestingly, a linear profile of the type Ab = a $`+`$ br, where r is in arcminutes and Ab is in solar units, provides a significantly better fit, $`\chi ^2=`$0.6 for 3 d.o.f. According to the F-test, the probability that the improvement in the fit is merely associated with the reduction in the d.o.f. is $`<`$0.001. The best-fitting values for the parameters are a$`=0.55\pm 0.04`$ solar units and b$`=-0.043\pm 0.011`$ solar units arcmin<sup>-1</sup>. We have divided A2029 into 4 sectors: NW, SW, SE and NE. Each sector has been divided into 3 annuli with bounding radii of 2-4, 4-8 and 8-12 arcmin. In figure 2 we show the MECS image with the sectors overlaid. A correction for the absorption caused by the strongback supporting the detector window has been applied for the sectors of the 8-12 arcmin annulus. We have fitted each spectrum with a MEKAL model absorbed by the Galactic N<sub>H</sub>. Our temperature and abundance measurements are practically unaltered if we excise from the spectra the emission of the two point-like sources visible in the ROSAT PSPC image of A2029 (see figure 1 of S98). In figure 3 we show the temperature profiles obtained from the spectral fits for each of the 4 sectors.
In all the profiles we have included the temperature measurement obtained for the central region with radius 2 arcmin. Fitting each radial profile with a constant temperature we derive the following average sector temperatures: 7.8$`\pm `$0.3 keV for the NE sector, 8.0$`\pm `$0.3 keV for the NW sector, 7.7$`\pm `$0.3 keV for the SW sector and 7.5$`\pm `$0.3 keV for the SE sector. The fits yield the following $`\chi ^2`$ values: $`\chi ^2=26.45`$ for 3 d.o.f. (Prob.$`=7.7\times 10^{-6}`$) for the NE sector, $`\chi ^2=4.4`$ for 3 d.o.f. (Prob.$`=0.22`$) for the NW sector, $`\chi ^2=5.0`$ for 3 d.o.f. (Prob.$`=0.17`$) for the SW sector and $`\chi ^2=25.4`$ for 3 d.o.f. (Prob.$`=1.3\times 10^{-5}`$) for the SE sector. In the NE sector the temperature first increases to values $`\gtrsim `$10 keV in the second and third annuli, and then decreases to $`\sim `$ 5 keV in the outermost annulus. By comparing the temperature of the NE sector in the second and third annuli with the temperature averaged over the other 3 sectors in the same annuli, we find that they differ at the $`2.5\sigma `$ level. In the SE and NW sectors the temperature decreases continuously as the distance from the cluster center increases, although the statistical significance of the decrease is rather small in the NW sector. Finally, in the SW sector, due to the relatively large errors, no trend can be seen in the temperature profile. In figure 4 we show the abundance profiles for each of the 4 sectors. In all profiles we have included the abundance measurement obtained for the central region with bounding radius 2 arcmin. Fitting each profile with a constant abundance we derive the following sector-averaged abundances: 0.50$`\pm `$0.04 for the NE sector, 0.49$`\pm `$0.04 for the NW sector, 0.43$`\pm `$0.04 for the SW sector and 0.48$`\pm `$0.04 for the SE sector. The fits yield the following $`\chi ^2`$ values: $`\chi ^2=7.7`$ for 3 d.o.f. (Prob.$`=5.3\times 10^{-2}`$) for the NE sector, $`\chi ^2=8.0`$ for 3 d.o.f. (Prob.$`=4.6\times 10^{-2}`$) for the NW sector, $`\chi ^2=15.7`$ for 3 d.o.f. (Prob.$`=1.3\times 10^{-3}`$) for the SW sector and $`\chi ^2=10.0`$ for 3 d.o.f. (Prob.$`=1.8\times 10^{-2}`$) for the SE sector. A decreasing trend is observed in all sectors, except perhaps the NW sector. A highly statistically significant gradient is observed only in the SW and SE sectors. ## 5 Discussion Previous measurements of the temperature structure of A2029 have been performed by S98 and White (1999), using ASCA data, and by Irwin, Bregman & Evrard (1999), using ROSAT PSPC data. S98 find a decreasing radial temperature profile. In figure 1 we have overlaid the temperature profile obtained by S98 using ASCA data on our own BeppoSAX profile. Although the individual temperature measurements show some discordance, an overall temperature decline is observed in both profiles. Indeed, a fit with a linear profile of the type kT = a $`+`$ br, where kT is in keV and r in arcminutes, to the S98 data provides best-fitting parameters, a $`=9.6\pm 1.3`$ keV and b$`=-0.35\pm 0.28`$ keV arcmin<sup>-1</sup>, compatible with those derived from the BeppoSAX data. Recently, White (1999) has reanalyzed the ASCA observation of A2029, finding a temperature profile which, although suggestive of a temperature gradient, is, owing to the rather large uncertainties, consistent both with a constant temperature and with the declining BeppoSAX profile.
Irwin, Bregman & Evrard (1999) have used ROSAT PSPC hardness ratios to measure temperature gradients for a sample of nearby galaxy clusters which includes A2029. In their analysis they find evidence of a radial temperature decrease, and they comment that a temperature gradient is probably present in this cluster. The profiles we report in figure 3 suggest that the radial temperature gradient is most likely present in all sectors. We also find an indication of an azimuthal temperature gradient in the annuli with bounding radii 2-4 arcmin (0.24 Mpc - 0.47 Mpc) and 4-8 arcmin (0.47 Mpc - 0.95 Mpc). The data suggest that the NE sector of the cluster may be somewhat hotter than the rest. Given the modest statistical significance of this temperature enhancement, and the lack of detection of substructure in this sector either from X-ray images (e.g. Buote & Canizares 1996) or from optical velocity dispersion studies (Oegerle et al. 1995), we cannot make a strong case in favor of an azimuthal temperature gradient. The ASCA radial abundance profile reported by S98 and by White (1999) is characterized by rather large uncertainties, and is compatible with a constant abundance as well as with an abundance decrement such as the one we measure with BeppoSAX data. The profiles we report in figure 4 suggest that the radial abundance gradient is most likely present in all sectors. Acknowledgements. We acknowledge support from the BeppoSAX Science Data Center.
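The F-tests for the constant versus linear profile fits quoted in section 4 are easy to reproduce. A minimal sketch (scipy assumed; the $`\chi ^2`$ values are those given above for the radial temperature profile):

```python
from scipy.stats import f

def ftest_prob(chi2_simple, dof_simple, chi2_complex, dof_complex):
    """Chance probability that the chi^2 improvement of the more
    complex (nested) model is spurious, via the standard F-test."""
    d_dof = dof_simple - dof_complex
    fstat = ((chi2_simple - chi2_complex) / d_dof) / (chi2_complex / dof_complex)
    return 1.0 - f.cdf(fstat, d_dof, dof_complex)

# constant (chi2 = 15.2, 5 d.o.f.) vs. linear (chi2 = 3.5, 4 d.o.f.)
p = ftest_prob(15.2, 5, 3.5, 4)
print(f"{p:.3f}")   # ~0.02, i.e. significant at more than the 97.5% level
```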
no-problem/9910/astro-ph9910063.html
ar5iv
text
# Dual Axisymmetry in Proto-Planetary Nebula Reflection Nebulosities: Results from an HST Snapshot Survey of PPN Candidates ## 1. Introduction The axisymmetry often seen in planetary nebulae (PNe) must arise before photoionization starts illuminating the fascinating structure of the nebulae. High-resolution optical imaging of PPN reflection nebulosities can serve as an indirect probe of the innermost structure of the PPN dust shell, which is formed during the superwind mass-loss phase at the end of asymptotic giant branch (AGB) evolution. This conference contribution summarizes the results from our HST survey, which covered the largest number of PPN candidates to date, including those associated with bright central stars (Ueta, Meixner, & Bobrowsky 2000). Our goal was to investigate whether there exists any coherent trend that bridges the gaps in circumstellar morphology between the AGB and PN phases. ## 2. HST Snapshot Survey of PPN Candidates Our HST snapshot survey of PPN candidates found that 78% (21 of 27) of the reflection nebulosities were resolved and that all of the resolved nebulae were elongated (ellipticity $`=0.44`$). This ubiquitous axisymmetry suggests that PPNe are intrinsically axisymmetric. Hence, given the spherical nature of the AGB wind shell, the PN axisymmetry is likely to originate during the superwind phase. Moreover, there are clearly two types of reflection nebulosities, which we classified as Star-Obvious Low-level-Elongated (SOLE) nebulae, showing smooth, low surface brightness elongations with extremely bright central stars, and DUst-Prominent Longitudinally-EXtended (DUPLEX) nebulae, having bipolar lobes straddling dust lanes with totally or partially obscured central stars. ## 3. SOLE vs. DUPLEX: Physically Distinct Nebulosities We attribute the dual morphology mainly to the optical depth in the superwind shell, which varies depending on the degree of equatorial density enhancement. The following evidence suggests that the inclination angle effect alone may not be sufficient to interpret what we see in the data. ### 3.1. Mid-Infrared Morphology A recent mid-infrared (mid-IR) survey of PPN candidates (Meixner et al. 1999) has revealed a corresponding dual morphology in the PPN dust emission regions. SOLE type optical nebulae correspond to toroidal type mid-IR dust emission nebulae, in which we see two dust emission peaks that are interpreted as a limb-brightened dust torus. On the other hand, DUPLEX type nebulae are related to core/elliptical type dust emission nebulae, in which we see a very compact, unresolved core with a broad plateau (Fig.2). Characteristics of both optical and mid-IR images suggest that SOLE-Toroidal type nebulae have optically thin dust shells and that DUPLEX-Core/Elliptical type nebulae have optically thick dust shells. We also describe schematically how the optical depth in the superwind influences the shape of the two types of reflection nebulosities in Fig.2. More importantly, the mid-IR images show that there are SOLE nebulae that are oriented rather edge-on (e.g. IRAS 07134+1005 in Fig.2, IRAS 17436+5003 in Fig.4); therefore, elliptical reflection nebulae (SOLE) are not necessarily bipolar nebulae (DUPLEX) seen pole-on. ### 3.2. Spectral Energy Distributions and Two-Color Diagrams SOLE and DUPLEX nebulae are also distinct in the shapes of their spectral energy distributions (SEDs; Fig.3, left).
Optically thin SOLE nebulae let the stellar emission pass while converting some of it to thermal dust emission, yielding comparable stellar and dust peaks. Optically thick DUPLEX nebulae absorb all stellar photons except for some scattered ones, yielding a large dust emission peak with an optical plateau. The duality is also seen in an IRAS/near-IR and a near-IR color diagram, confirming the optical-thickness interpretation. The IRAS/near-IR diagram (Fig.3, right) shows three clusters of PPNe according to the visibility of the central stars (total, partial, or no obscuration). The near-IR diagram (not shown) shows a linear distribution of PPNe according to the degree of reddening (the redder, the more dust). ### 3.3. Two-Dimensional Radiation Transfer Calculations Preliminary results from full 2-D radiation transfer simulations of PPN dust shells generally agree with the optical/mid-IR data. The optically thin superwind shell model yielded a resolved dust peak in the mid-IR and a diffuse reflection nebula without a dust lane in the optical, while the optically thick superwind shell model yielded a compact dust emission core with a broad emission plateau in the mid-IR and a bipolar nebula divided by a dust lane in the optical (Meixner, Ueta, & Bobrowsky 2000; also see the contribution by Meixner in this issue). ## 4. Discussions The distinctness of SOLE and DUPLEX nebulae seems to originate from the varying optical depths of the superwind shells, as evidenced by the correlation between optical and mid-IR morphologies, the characteristic SED shapes, the color diagrams, and the 2-D radiation transfer calculations. The inclination angle effect alone may not suffice to interpret all SOLE nebulae as pole-on bipolar nebulae, because we have found as many SOLE nebulae as DUPLEX nebulae, because there are some SOLE PPNe that are oriented rather edge-on, and because SOLE nebulae do not show the imbalance of surface brightness which is a signature of inclined orientation in DUPLEX nebulae; nevertheless, the inclination angle effect remains a source of confusion. Neither age nor chemical composition seems to be related to the morphological bifurcation, and the origin of the equatorial density enhancement in the superwind remains unknown. However, the Galactic distribution of PPNe suggests that DUPLEX PPNe may have evolved from higher-mass AGB progenitors: DUPLEX nebulae, having a mean height of $`\sim 220`$ pc with a range of $`|z|<520`$ pc, are more confined to the Galactic plane than SOLE nebulae, which have a mean height of $`\sim 470`$ pc with a range of $`|z|<2100`$ pc. ## References

Meixner, M. 2000, this issue
Meixner, M., Ueta, T., & Bobrowsky, M. 2000, ApJL, submitted
Meixner, M., Ueta, T., Dayal, A., Hora, J. L., Fazio, G., Hrivnak, B. J., Skinner, C. J., Hoffmann, W. F., & Deutsch, L. K. 1999, ApJS, 122, 221
Ueta, T., Meixner, M., & Bobrowsky, M. 2000, ApJ, 528, 861
no-problem/9910/cond-mat9910183.html
ar5iv
text
# KINETIC THEORY AND MESOSCOPIC NOISE <sup>*</sup>Based on work presented at the 23rd International Workshop on Condensed Matter Theories, Ithaca, Greece, June 1999. M. P. Das, Department of Theoretical Physics, Research School of Physical Sciences and Engineering, The Australian National University, Canberra ACT 0200, Australia. F. Green, GaAs IC Prototyping Facility, CSIRO Telecommunications and Industrial Physics, PO Box 76, Epping NSW 1710, Australia. 1. INTRODUCTION There are two truisms in the theory of transport. One states that two-particle correlations carry far more information about microscopic charge dynamics than do single-particle processes such as direct-current response. The other asserts that, for the first insight to bear fruit, one must look beyond the near-equilibrium limit. Nowhere are these notions more apt than in testing the relation between hot-electron noise and shot noise, the leading effects of nonequilibrium mesoscopic fluctuations. Hot-electron noise is generated by spontaneous energy exchanges between a driven conductor and its thermal bath. Shot noise is generated by the random entry and exit of the discrete carriers. Neither species is detectable unless a current flows. Our thesis is that the relationship between shot noise and hot-electron noise is absolutely fundamental to understanding mesoscopic fluctuations. At least from the vantage point of orthodox microscopics and kinetics, their relation is a long way from being settled. Its resolution calls for the tools of many-body theory. In Section 2 we motivate the many-body approach to noise. Sec. 3 surveys key mesoscopic experiments; we review the analysis of conductance and noise within linear diffusive theories, and the physical transition, or smooth crossover, linking thermal noise and shot noise . In Sec. 4 we outline a kinetic theory of nonequilibrium fluctuations and discuss how this conventional formulation directly negates diffusive explanations of the smooth crossover. We sum up in Sec. 5. 2. BACKGROUND At sub-micrometer scales, device sizes approach the mean free path for scattering and, often, the phase-breaking length for coherent propagation; they are “mesoscopic” [4-6], no longer fitting the usual picture of bulk transport. Certain structures, such as quantum dots, are tinier still. Multi-particle correlations are clearly important for devices supporting only a few carriers at most (strongly quantized dots, say), but they remain relevant even in a semiclassical setting. That prompts two questions: What are the experimental signatures of two-particle correlations at mesoscopic scales? In which ranges of the driving potential should they be probed? In dealing with themes similar to the above, many-body physicists know the value of the van Hove formula. For solid-state plasma excitations, it connects the dynamic polarizability of the electrons directly with inelastic momentum-energy loss, whenever the system is probed from outside. It forms the basis of much experimental analysis. For carrier motion in a conductor, there is a recipe comparable to van Hove’s: the Johnson-Nyquist formula . This connects thermal fluctuations of the current directly with energy dissipation, both ultimately induced by the same processes for microscopic scattering. The connecting principle of the Johnson-Nyquist formula provides a major consistency criterion for transport models. Like the van Hove relation, it is an example of the fluctuation-dissipation theorem .
In the electron gas, both share a common basis, since fluctuations, and hence current noise, are microscopically related to the dynamic polarizability. Each of the two effects, in its way, reflects the form and action of the underlying electron-hole excitations. This drives home a vital, if obvious, message: a true theory of current noise cannot avoid being a many-body theory. Despite these interconnections, many-body methods are under-represented in mainstream noise research . With few exceptions , the field is served by special developments of weak-field, single-particle formalisms. In both quantum-coherent and semiclassical stochastic versions , the formalisms rest on novel mesoscopic re-interpretations of drift-diffusion phenomenology . Since noise is intrinsically a multi-particle effect, the internal logic of single-particle diffusive approaches (coherent and stochastic) bears closer inspection . In extending many-body ideas to driven fluctuations, there are two linked issues:

• Real mesoscopic devices, in real operation, cannot be characterized by low-field response alone. This is easy to see in a typical structure 100 nm long, subject to a potential of 0.1 V. The mean applied field is $`10^4\mathrm{V}\mathrm{cm}^{-1}`$; hardly weak.

• Given the need for a high-field kinetic description, one must still preserve all the definitive low-field properties of the electron gas. While the leading rule of linear transport is the fluctuation-dissipation theorem (FDT), it is by no means the sole guiding principle in degenerate Coulomb systems.

The FDT applies within the context of a nonequilibrium ensemble’s adiabatic connection to the global equilibrium state, whose nature thus exerts a governing effect on noise. An electron gas in equilibrium is anything but a collection of independent carriers. It is a correlated plasma, best known for the dominance of both degeneracy and quasi-neutrality, which persists down to distances not far above the Fermi wavelength and certainly well below mesoscopic scales . Heuristically, much has been made of the Johnson-Nyquist formula and the Einstein relation , which ties diffusion quantitatively to conduction in a restricted sense. However, a model built on these precepts alone is inadequate to characterize electronic fluctuations . The sum rules must be respected, notably compressibility and perfect screening . These ensure quasi-neutrality throughout the degenerate plasma. Sum rules cannot emerge from an independent-particle analysis, because they refer explicitly to electron-hole correlations. Theories of fluctuations have less claim to be reliable if they make many-body predictions by inductive extrapolation from noninteracting single-particle physics . Consider a case in point: the first aim of all diffusive models is to compute the (one-body) conductance. To that end, diffusive phenomenology must simply assume that Einstein’s relation and its parent, the microscopic FDT, are valid . The need to take these theorems on faith removes the logical possibility of proving them. Without such proof, no diffusive model can demonstrate its control over the microscopic multi-particle structure. Without such control, the inner consistency of any subsequent noise prediction is uncertain. It is the inability to describe multi-particle correlations ab initio that bars access to proof of the fluctuation-dissipation theorem and the sum rules . All of these constraints will follow naturally in a canonical description of fluctuations.
Conversely, within a given model of noise in a degenerate conductor, logical derivation of the constraints is first-hand evidence of tight control, reliability, and predictive strength. 3. LOW-FIELD MESOSCOPIC NOISE There is now a large collection of mesoscopic conductance and noise measurements. Sample sizes range from a few nanometers to hundreds. The experiments generally cover three aspects: (a) behavior of current noise at low fields, the only domain in which diffusive theory is valid; (b) relation of noise to conductance, forming the tangible link between transport and fluctuations; and (c) crossover from thermally induced noise to shot noise, providing a unique signature of the underlying microscopics. We refer the reader to the literature for reviews of noise and of mesoscopic transport. Two simple classes of metallic conductive structures have been tested: point-contact junctions and diffusive wires . We address each case separately, and then examine issues of interpretation common to both. Point contacts A point contact is a small conducting region between external leads. Its aperture approaches the scale of the Fermi wavelength, narrow enough for transport through it to be ballistic and strongly quantized . The contact forms an “electron waveguide” with discrete modes, that is, subbands of quasi-one-dimensional states propagating through the constriction. If the junction is fully transmissive, its conductance is quantized in steps of the universal value $`𝒢_0\equiv 2e^2/h`$. This is explained as follows. Each step signals the opening of a new channel as, with increasing carrier density, the Fermi surface successively and discretely intersects higher subbands (in analogy with the integer quantum-Hall effect ). If the junction has nonideal transmission, there will be a forward-scattering probability $`𝒯_n<1`$ for the $`n`$th mode at the Fermi energy $`\mu `$. Each crossing then augments the total intrinsic conductance in Landauer’s formula $$𝒢=𝒢_0\sum _n\theta (\mu -\epsilon _n)𝒯_n,$$ $`(1)`$ where $`\epsilon _n`$ is the $`n`$th subband threshold. There have been many verifications of this result. Two of the earliest are by van Wees et al. and Wharam et al. . One may ask why, if transport through a point contact is ballistic (collisionless), its conductance should be finite. The answer is that the contact is not a closed circuit on its own. It is open to a larger electrical environment where scattering effects are strong. The influence of the leads (Landauer’s massive banks), supplying and receiving the current, is paramount . Through dissipative collisions or by geometrical mismatch at the interfaces, the leads couple the modes in the contact to an arbitrarily large set of asymptotic degrees of freedom. This introduces irreversibility and stabilizes the transport. The details of asymptotic relaxation should not affect the response; relaxation serves only to ensure boundedness of the current and the electromotive potential. In every other way, the relation between them is an irreducible property of the mesoscopic channel, albeit in contact with the macroscopic environment . Mesoscopic current-noise measurements are more challenging, owing to very low signal levels. Good representative data for point contacts are in Reznikov et al. and Kumar et al. . In the zero-temperature limit there are no thermal fluctuations; shot noise is the only active form of carrier correlations.
The noise-power spectrum at low frequencies is $$𝒮=2eV𝒢_0\sum _n\theta (\mu -\epsilon _n)𝒯_n(1-𝒯_n)$$ $`(2)`$ for voltage $`V`$ across the contact boundaries. This theoretical expression is well confirmed by experiment. To see how $`𝒮`$ relates to the current, take the single-channel case, for which $`𝒢=𝒢_0𝒯_1`$. Since we are limited to weak voltages, the current response is linear: $`I=𝒢V`$. We then have $$\frac{𝒮}{2eI}=\frac{2eV𝒢_0𝒯_1(1-𝒯_1)}{2e𝒢V}=1-𝒯_1.$$ $`(3)`$ We have normalized to Schottky’s expression for classical shot noise, $`2eI`$, associated with current $`I`$. Equation (3) shows that fluctuations in the point contact do in fact behave as shot noise, suppressed below the classical value depending on the transmission. In such a small, quasi-ballistic device, suppression can only be a quantum effect. If the contact is ideal, then $`𝒯_1=1`$; quantum shot noise vanishes completely, because the incoming and outgoing scattering wave functions overlap fully with an eigenstate of the system. The asymptotic occupancies are totally anti-correlated by Pauli exclusion . If $`𝒯_1<1`$, the state of the system is no longer asymptotically pure, but mixed. The occupancies are partly decorrelated, allowing scope for the appearance of fluctuations. Evidently, the fluctuations and their associated shot noise have a nonlocal character. For finite temperatures, with $`k_BT\sim eV`$, the current noise displays an appreciable thermal component. In place of Eq. (2), experimental data follow the expression (again we keep one channel for simplicity) $$𝒮(V)=4k_BT𝒢\left[𝒯_1+(1-𝒯_1)\frac{eV}{2k_BT}\mathrm{coth}\left(\frac{eV}{2k_BT}\right)\right].$$ $`(4)`$ This is the prototypical smooth crossover, melding thermal noise and shot noise into a continuum [19-21]. At equilibrium, Eq. (4) for point-contact noise gives the classic Johnson-Nyquist form: $`𝒮(0)=4k_BT𝒢`$. For $`eV\gg k_BT`$ the second term dominates and yields quantum shot noise with suppression, just as in Eq. (3). At intermediate potentials $`eV\sim k_BT`$, Eq. (4) takes on a hybrid character, more than thermal but less than shot. From Eq. (4) one sees that the suprathermal contribution $`𝒮(V)-𝒮(0)`$ has a quite complex nonlinear dependence on $`T`$ and $`V`$. Eq. (4) is certainly well supported empirically. In our view, however, the cause of its nonlinearity is a puzzle in the light of models which depend (by design) on a strictly linear drift-diffusion approach to transport. We revisit this issue shortly. The smooth crossover also dominates noise in larger conductors, as we now discuss. Diffusive wires Transport in a diffusive wire is not ballistic, but may still be quantum-coherent. This is especially so at low temperatures and weak fields, where scattering is almost perfectly elastic. If collisions preserve the quantum phase of the carrier wave function, its total phase shift in transmission depends only on the total length of the randomized path; this is quantum diffusion. Samples are too cold, and still too short, for local dissipative heating. Instead, carriers thermalize in the access leads . Quantum-mechanically one can think of a diffusive wire as the extreme limit of a point contact. The subband mode distribution becomes complicated and quasicontinuous, but Eqs. (1) and (4) still apply. With a statistical estimate of $`𝒯_n(E)`$ at the Fermi energy $`E=\mu `$, one can perform an ensemble average to get $`\sum _n𝒯_n^2\simeq \frac{2}{3}\sum _n𝒯_n`$.
In the context of multiple modes, Eq. (4) generalizes to $$𝒮(V)=4k_BT𝒢_0\sum _n\left[𝒯_n^2+𝒯_n(1-𝒯_n)\frac{eV}{2k_BT}\mathrm{coth}\left(\frac{eV}{2k_BT}\right)\right]\simeq 4k_BT𝒢\left[\frac{2}{3}+\frac{1}{3}\frac{eV}{2k_BT}\mathrm{coth}\left(\frac{eV}{2k_BT}\right)\right],$$ $`(5)`$ presenting the smooth-crossover formula in its best-known guise [1,2,19-21,23,24]. Once more, the wire at zero voltage exhibits plain Johnson-Nyquist thermal noise. For $`eV\gg k_BT`$ there is shot-noise behavior with the famous threefold suppression; even in conductors physically much bigger than a point contact, quantum suppression of shot noise is a robust effect. As with Eq. (4), there is solid corroboration of Eq. (5) by experiments [22,25-27]. The fit to measurements is not invariably good. Besides the survey of boundary-heating effects by Henny et al. , we note the very early, interesting test of Eq. (5) by Liefrink et al. in a two-dimensional electron gas. That experiment shows clear and systematic departures from the expected $`\frac{1}{3}`$ suppression. Although tentative explanations have been offered for those deviations, we consider that the work of Liefrink et al. in two-dimensional wires has ongoing importance, and we suggest that it be repeated with better control over carrier uniformity in the structure . So far, we have reviewed the quantum-coherent interpretation of diffusive noise theory. Diffusive wires are at the large end of the mesoscopic range, and elastic scattering need not necessarily be phase-preserving. A semiclassical Boltzmann analysis might be justified if one accepts that many sequential, locally incoherent collisions should give much the same diffusive transport as the superposition of many coherent, but randomly determined, quantum paths. That is the basis of diffusive adaptations of Boltzmann-Langevin theory . Such a basis lacks the clarity of the quantum-coherent descriptions of pure (zero-temperature) shot noise. This presents an interesting juxtaposition of alternatives: pure quantum mechanics alongside semiclassical stochastics, each offering a quite different computational strategy. We do not retrace the semiclassical derivation here; theoretical details can be found in the literature . Most important is the fact that these disparate approaches both converge on Eq. (5). Their agreement, which may seem surprising, suggests that it is the common assumptions about linear diffusive transport, above all, which matter for the crossover. If the theoretical crossover were to be disconfirmed, by whatever means, both derivations would be equally suspect.
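Equations (4) and (5) and their limits are easy to evaluate numerically. The following is a minimal sketch (our own illustration, in Python, with units chosen so that $`e=k_B=1`$); for a diffusive wire one simply replaces the channel weights $`(𝒯_1,1-𝒯_1)`$ by $`(\frac{2}{3},\frac{1}{3})`$ as in Eq. (5):

```python
import numpy as np

def xcoth(x):
    """x*coth(x), with the x -> 0 limit handled explicitly."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.ones_like(x)
    nz = np.abs(x) > 1e-12
    out[nz] = x[nz] / np.tanh(x[nz])
    return out

def s_crossover(v, kT, t1):
    """Single-channel crossover noise, Eq. (4), with e = kB = 1."""
    g = t1                                  # conductance in units of G0
    return 4.0 * kT * g * (t1 + (1.0 - t1) * xcoth(v / (2.0 * kT)))

kT, t1 = 1.0, 0.4
print(s_crossover(0.0, kT, t1))                 # 4*kT*G: Johnson-Nyquist limit
v = 200.0 * kT
print(s_crossover(v, kT, t1) / (2.0 * t1 * v))  # -> 1 - T1: suppressed shot noise
```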
A Theoretical Issue: Nonlinearity Having already noted the nonlinearity of the crossover formula, we now examine it more closely. Eq. (5), derived either quantum-coherently or semiclassically, describes all of the fluctuations about a mean current which is understood to be rigidly linear . Linearity of the $`I`$–$`V`$ relation means that the resistive power dissipation in the conductor is strictly quadratic in $`V`$. Mesoscopic systems are quite amenable to linear-response analysis at the microscopic level . If one followed a normal plan for linear response (such as Kubo’s), one would compute a coefficient (the conductivity) for the local, quadratic, power density. The calculation, actually a microscopic proof of the FDT, would furnish the coefficient as a current autocorrelation proportional to the current-noise spectral density within the conductor. As an ensemble average at equilibrium, the coefficient could not depend on the external field, that is, on $`V`$. After integrating it over the sample, the local quantity would finally lead to $`𝒮(0)`$: Johnson-Nyquist noise, and nothing more. In arriving at $`𝒮(V)`$ rather than just $`𝒮(0)`$, diffusive theories cannot have followed a normal plan for linear response. Let us run this in reverse. The crossover formula shows marked dependence of the noise on voltage. On the other hand, it is derived in a model whose $`I`$–$`V`$ response is perfectly linear. Its power dissipation $`IV=𝒢V^2`$ is perfectly quadratic; the coefficient $`𝒢`$ must, and does, scale with Johnson-Nyquist noise as required by the FDT (naturally so, since the models at hand invoke some form of Einstein relation, or drift-diffusion FDT, to secure linearity between $`I`$ and $`V`$). Assuming that the FDT is applicable to any diffusive model (without benefit of its proof within the same model), it follows that the excess noise $`𝒮(V)-𝒮(0)`$ has no coupling to the equilibrium coefficient fixing the (strictly quadratic) resistive dissipation. Thus the excess noise is nondissipative. It is evident that the smooth-crossover formula does not fit the accepted linear-response canon, even though its associated transport model is in the linear-response regime. This shows how diffusively based accounts of the crossover fall short of consistency. However, it does not touch upon the established experimental validity of Eq. (5). Indeed, the experiments bring out one of our themes: the importance of nonequilibrium, nondissipative noise as a sensitive marker of physical effects on a fine scale . In terms of theory, two situations arise. Diffusive models of mesoscopic noise either (i) violate the microscopic fluctuation-dissipation theorem despite their need to invoke its offshoot, the Einstein relation, or (ii) they are somehow covertly nonlinear, despite their manifestly linear construction. One way or the other, there appear to be problems with diffusive accounts of the crossover. Eq. (5) requires a new explanation. 4. KINETIC APPROACH We begin by asserting our formalism’s most striking conclusion: the nonequilibrium thermal noise of a degenerate conductor always scales with bath temperature $`T`$. Since shot noise does not scale with $`T`$, there is an immediate corollary. Within kinetic theory, thermal noise and shot noise cannot be subsumed under a unified formula. The focus of this section is on the conceptual structure of the formalism, with only a brief mathematical overview; more detail is given in the reference cited there. The kinetic approach to nonequilibrium transport in a metallic conductor works with a set of assumptions and boundary conditions identical to those of every other model of current and noise in metals, including every version of diffusive theory [1,2,4-6]. They are:

• an ideal thermal bath regulating the size of energy exchanges with the conductor, while itself always remaining in the equilibrium state;

• ideal macroscopic carrier reservoirs (leads) in open contact with the conductor, without themselves being driven out of their local equilibrium;

• local charge neutrality of the leads, and overall neutrality of the intervening conductor.

This standard scheme, consistently applied within a standard semiclassical Boltzmann framework, puts tight and explicit constraints on the behavior of nonequilibrium current noise , constraints that are less transparent in a purely diffusive framework .
The assumption of ideal leads implies that, regardless of the voltage across the active region, the electron distributions “far away” from the conductor remain quiescent and never depart from their proper equilibrium, characterized by $`T`$ and by a uniform density $`n`$. In practice, these extended populations need not be further away than a few Thomas-Fermi screening lengths. The associated interfacial screening zones will buffer any charge redistribution; these boundary zones should be included in the kinetic description of the system. The electron gas in each asymptotic lead is unconditionally neutral, and satisfies the canonical sum rules. Gauss’ theorem then implies that the central region must be overall neutral. Global neutrality and asymptotic equilibrium together condition the form of the nonequilibrium fluctuations in the mesoscopic conductor. Our goal is to show that nonequilibrium correlations are linear functionals of the equilibrium ones. In the degenerate electron gas, the immediate consequence of this is that all thermally induced noise must scale with ambient temperature $`T`$. Therefore it is impossible for shot noise to couple to the thermal bath. Otherwise, shot noise too would be seen to scale with $`T`$, which is not the case. The kinetic approach to fluctuations, sketched out below, takes as its input the electron-hole pair excitations in the equilibrium state. Fermi-liquid theory shows that these pair correlations form an essential unit, always with an internal kinematic coupling. Generally, they cannot be factorized into two stochastic components autonomously located, so to speak, on the single-electron energy shell. In that respect we do not follow Boltzmann-Langevin analysis for degenerate electrons. It is straightforward to specify the distribution of free electron-hole fluctuations, $`\mathrm{\Delta }f_𝐤^{\mathrm{eq}}(𝐫)`$, for wavevector $`𝐤`$ at position $`𝐫`$: $$\mathrm{\Delta }f_𝐤^{\mathrm{eq}}(𝐫)\equiv k_BT\frac{\partial f_𝐤^{\mathrm{eq}}}{\partial \mu }=f_𝐤^{\mathrm{eq}}(𝐫)[1-f_𝐤^{\mathrm{eq}}(𝐫)].$$ $`(6)`$ The one-electron equilibrium distribution is $$f_𝐤^{\mathrm{eq}}(𝐫)=\left[1+\mathrm{exp}\left(\frac{\epsilon _𝐤+U_0(𝐫)-\mu }{k_BT}\right)\right]^{-1},$$ $`(7)`$ where the conduction-band energy $`\epsilon _𝐤`$ can vary (implicitly) with $`𝐫`$ if the local band structure varies, as in a heterojunction. The electronic potential $`U_0(𝐫)`$ vanishes asymptotically in the leads, and satisfies the self-consistent Poisson equation ($`ϵ`$ is the background-lattice dielectric constant) $$\nabla ^2U_0\equiv e\frac{\partial }{\partial 𝐫}\cdot 𝐄_0=-\frac{4\pi e^2}{ϵ}\left(\langle f^{\mathrm{eq}}\rangle (𝐫)-n^+(𝐫)\right)$$ $`(8)`$ in which, for later use, $`𝐄_0(𝐫)`$ is the internal field at equilibrium and $`\langle \cdot \rangle `$ denotes the trace over spin and wave vector $`𝐤`$. The (nonuniform) neutralizing background density $`n^+(𝐫)`$ goes to $`n`$ in the (uniform) leads. The semiclassical Boltzmann equation, subject to the total internal field $`𝐄(𝐫,t)`$, can be written as $$\left(\frac{\partial }{\partial t}+𝒟_{𝐤;𝐫}[𝐄(𝐫,t)]\right)f_𝐤(𝐫,t)=-𝒞_{𝐤;𝐫}[f].$$ $`(9)`$ Here $`𝒟_{𝐤;𝐫}[𝐄]\equiv 𝐯_𝐤\cdot \partial /\partial 𝐫-(e𝐄/\hbar )\cdot \partial /\partial 𝐤`$ is the convective operator and $`𝒞_{𝐤;𝐫}[f]`$ is the collision operator, whose kernel (local in real space) is assumed to satisfy detailed balance, as usual \[1-3\]. Even for single-particle impurity scattering, of immediate concern, Pauli blocking of the outgoing scattering states still means that $`𝒞`$ is nonlinear in the nonequilibrium solution $`f_𝐤(𝐫,t)`$. Since we follow the standard Boltzmann formalism, all of our results will comply with the conservation laws. The nonlinear properties of these results will extend as far as the inbuilt limits of the Boltzmann framework; much further than if they were restricted to the weak-field domain, as demanded by the drift-diffusion Ansatz. Moreover, since we rely directly on the whole fluctuation structure provided by Fermi-liquid theory, the sum rules are incorporated.
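The identity in Eq. (6), and the $`T`$-scaling it encodes in the degenerate limit, can be verified in a few lines. This is our toy illustration (natural units, arbitrary values of $`\mu `$ and $`T`$), not part of the formalism itself:

```python
import numpy as np

kB = 1.0                                  # natural units

def fermi(eps, mu, T):
    return 1.0 / (1.0 + np.exp((eps - mu) / (kB * T)))

mu, T = 5.0, 0.05                         # strongly degenerate: T << mu
eps = np.linspace(0.0, 10.0, 200001)
f = fermi(eps, mu, T)

# Eq. (6):  kB T df/dmu = f (1 - f)   (finite-difference check)
dmu = 1e-6
lhs = kB * T * (fermi(eps, mu + dmu, T) - fermi(eps, mu - dmu, T)) / (2 * dmu)
print(abs(lhs - f * (1 - f)).max())       # ~1e-10

# Integrated strength of the equilibrium electron-hole fluctuation is kB*T:
deps = eps[1] - eps[0]
for temp in (0.05, 0.10):
    df = fermi(eps, mu, temp) * (1 - fermi(eps, mu, temp))
    print(temp, df.sum() * deps)          # -> kB * temp, in this limit
```

Doubling $`T`$ doubles the integrated weight of $`\mathrm{\Delta }f^{\mathrm{eq}}`$; this is the elementary origin of the mandatory $`T`$-scaling argued for below.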
Our prescription starts by developing the steady-state nonequilibrium distribution $`f_𝐤(𝐫)`$ as a mapping of the equilibrium distribution, which satisfies $$𝒟_{𝐤;𝐫}[𝐄_0(𝐫)]f_𝐤^{\mathrm{eq}}(𝐫)=0=𝒞_{𝐤;𝐫}[f^{\mathrm{eq}}],$$ $`(10)`$ the last equality following by detailed balance. Subtracting the corresponding sides of Eq. (10) from both sides of the time-independent version of Eq. (9), and introducing the difference $`g_𝐤(𝐫)\equiv f_𝐤(𝐫)-f_𝐤^{\mathrm{eq}}(𝐫)`$, we obtain $$\begin{array}{ccc}\hfill & \int d𝐫^{}\int \frac{2d𝐤^{}}{(2\pi )^d}\left(𝟙_{\mathrm{𝐤𝐤}^{};\mathrm{𝐫𝐫}^{}}𝒟_{𝐤^{};𝐫^{}}[𝐄(𝐫^{})]+𝒞_{\mathrm{𝐤𝐤}^{};\mathrm{𝐫𝐫}^{}}^{}[f]\right)g_{𝐤^{}}(𝐫^{})\hfill & \\ & =\frac{e[𝐄(𝐫)-𝐄_0(𝐫)]}{\hbar }\cdot \frac{\partial f_𝐤^{\mathrm{eq}}}{\partial 𝐤}(𝐫)-𝒞_{𝐤;𝐫}^{\prime \prime }[g].\hfill & (11)\hfill \end{array}$$ The unit operator in $`d`$ dimensions is $`𝟙_{\mathrm{𝐤𝐤}^{};\mathrm{𝐫𝐫}^{}}\equiv (2\pi )^d\delta (𝐤-𝐤^{})\delta (𝐫-𝐫^{})`$, and the linearized operator $`𝒞^{}[f]`$ is the variational derivative $`𝒞_{\mathrm{𝐤𝐤}^{};\mathrm{𝐫𝐫}^{}}^{}[f]\equiv \delta 𝒞_{𝐤;𝐫}[f]/\delta f_{𝐤^{}}(𝐫^{})`$. Last, $`𝒞^{\prime \prime }[g]\equiv 𝒞[f]-𝒞^{}[f]g`$ carries the residual nonlinear contributions. Global neutrality enforces the important constraint $`\int d𝐫\langle g\rangle (𝐫)=0`$. The leading right-hand term in Eq. (11) is responsible for the functional dependence of $`g`$ on the equilibrium distribution. This is important because dependence on equilibrium-state properties carries through to the derived steady-state fluctuations. The electric-field factor can be written as $`𝐄-𝐄_0\equiv 𝐄_{\mathrm{ext}}+𝐄_{\mathrm{ind}}`$ where $`𝐄_{\mathrm{ext}}(𝐫)`$ is the external driving field, and the induced field $`𝐄_{\mathrm{ind}}(𝐫)`$ obeys $$\frac{\partial }{\partial 𝐫}\cdot 𝐄_{\mathrm{ind}}=-\frac{4\pi e}{ϵ}\langle g\rangle (𝐫).$$ $`(12)`$ Now we consider the nonequilibrium fluctuation $`\mathrm{\Delta }f_𝐤(𝐫,t)`$. It satisfies the linearized Boltzmann equation $$\int d𝐫^{}\int \frac{2d𝐤^{}}{(2\pi )^d}\left[𝟙_{\mathrm{𝐤𝐤}^{};\mathrm{𝐫𝐫}^{}}\left(\frac{\partial }{\partial t}+𝒟_{𝐤^{};𝐫^{}}[𝐄(𝐫^{})]\right)+𝒞_{\mathrm{𝐤𝐤}^{};\mathrm{𝐫𝐫}^{}}^{}[f]\right]\mathrm{\Delta }f_{𝐤^{}}(𝐫^{},t)=0.$$ $`(13)`$ Given the temporal and spatial boundary constraints for this equation (causality and global neutrality), all of the relevant dynamical properties of the fluctuating electron gas, notably its current noise, can be obtained. Its adiabatic $`t\to \infty `$ limit, $`\mathrm{\Delta }f_𝐤(𝐫)`$, represents the average strength of the spontaneous background fluctuations, induced in steady state by the ideal thermal bath. It is one of two essential components that determine the dynamical fluctuations (the other is the Green function for the inhomogeneous form of Eq. (13)). In particular, $`\mathrm{\Delta }f_𝐤(𝐫)`$ dictates the explicit $`T`$-scaling of all thermal effects through its functional dependence on the equilibrium distribution $`\mathrm{\Delta }f_𝐤^{\mathrm{eq}}(𝐫)`$. We now show how this comes about. Define the variational derivative $`G_{\mathrm{𝐤𝐤}^{}}(𝐫,𝐫^{})\equiv \delta g_𝐤(𝐫)/\delta f_{𝐤^{}}^{\mathrm{eq}}(𝐫^{})`$. This is a Green-function-like operator obeying a steady-state equation obtained from Eq. (11) by taking variations on both sides. The explicit form of $`G`$ can be derived from knowledge of the Green function for Eq. (13).
One can verify that $$\mathrm{\Delta }f_𝐤(𝐫)=\mathrm{\Delta }f_𝐤^{\mathrm{eq}}(𝐫)+\int d𝐫^{}\int \frac{2d𝐤^{}}{(2\pi )^d}G_{\mathrm{𝐤𝐤}^{}}(𝐫,𝐫^{})\mathrm{\Delta }f_{𝐤^{}}^{\mathrm{eq}}(𝐫^{})$$ $`(14)`$ satisfies the steady-state form of Eq. (13) identically. This establishes the linear relationship between nonequilibrium and equilibrium thermal fluctuations, and the need for the former to be proportional to $`T`$ in a degenerate conductor, since then $`\mathrm{\Delta }f_𝐤^{\mathrm{eq}}(𝐫)\to k_BT\delta (\epsilon _𝐤+U_0(𝐫)-\mu )`$. Again, charge neutrality enforces upon Eq. (14) the constraint $$\int d𝐫\int \frac{2d𝐤}{(2\pi )^d}G_{\mathrm{𝐤𝐤}^{}}(𝐫,𝐫^{})=0$$ for all $`𝐤^{}`$ and $`𝐫^{}`$. Over volume $`\mathrm{\Omega }`$ of the whole conductor, including its buffer zones, this leads to the normalization $$\int _\mathrm{\Omega }d𝐫\langle \mathrm{\Delta }f\rangle (𝐫)=\int _\mathrm{\Omega }d𝐫\langle \mathrm{\Delta }f^{\mathrm{eq}}\rangle (𝐫).$$ $`(15)`$ One can compare the strict equality in Eq. (15) with the analogous situation in any of the diffusive noise formulations \[1,2,18-21,23,24\]; diffusive fluctuations do not fulfill this most basic of physical constraints. They do not fulfill it because local equilibrium and neutrality are not guaranteed, in one lead or more (depending on where a given model chooses to locate its “absolute” chemical potential $`\mu `$). Although those asymptotic conditions are implicitly respected at the level of one-body transport, they are no longer respected by fluctuations produced in the diffusive theories’ passage to the two-body level. Such inconsistency could never arise if the sum rules for the electron gas were in place and operative. For semiclassical diffusive models, Eq. (15) restores conformity of the local $`\mathrm{\Delta }f_𝐤(𝐫)`$ with the FDT, at the price of suppression. If a source of semiclassical suppression does exist, it is genuinely nonequilibrium and it accords with global neutrality. Furthermore, there is no compelling reason to expect that any semiclassical description – including ours – must recover a priori the quantum-coherent result for elastic diffusive wires. As far as we can see, only a quantum treatment can capture genuinely nonlocal physics in mesoscopic systems. However, we still differ on the separate conceptual issue of a smooth quantum crossover (as such), based on drift-diffusion ideas. We speculate that the S-matrix formalism can work without recourse to drift-diffusion phenomenology. Its coherent nonlocal nature gives it a certain numerical robustness against violations of neutrality, a feature not shared by local theories. Finally, using the tools that we have outlined, it is possible not only to display the linear functional dependence of hot-electron thermal noise on $`\mathrm{\Delta }f^{\mathrm{eq}}`$, but also to prove the mesoscopic FDT explicitly for semiclassical noise in the weak-field limit. Beyond this limit there is a systematic, nonperturbative way of classifying the appreciable hot-electron contribution. This type of excess noise has two features: it is not dissipative, and it still scales with temperature. It is just about impossible for it to “cross over” into shot noise, which is indisputably non-thermal.
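The incompatibility can be restated numerically using nothing but the crossover formula itself. In the sketch below (ours; parameter values are placeholders), the bias is held fixed while the bath temperature is lowered: Eq. (5) saturates at a $`T`$-independent value, which is exactly the behavior that, on the kinetic argument above, no thermally generated fluctuation in a degenerate conductor can exhibit.

```python
import math

kB, e = 1.380649e-23, 1.602176634e-19

def s_crossover(V, T, G):
    x = e * V / (2 * kB * T)
    return 4 * kB * T * G * (2.0 / 3.0 + (x / math.tanh(x)) / 3.0)

G, V = 1e-3, 1e-3                       # placeholder conductance and fixed bias
for T in (10.0, 1.0, 0.1, 0.01):
    S = s_crossover(V, T, G)
    print(f"T = {T:5.2f} K   S/(2eI/3) = {S / (2 * e * G * V / 3):.3f}")
# output tends to 1 as T -> 0: the excess noise of Eq. (5) does not scale with T
```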
5. SUMMARY

Transport and fluctuations at mesoscopic scales reveal new, intriguing physics requiring theoretical models beyond the normal methods for extended, uniform systems. In the mesoscopic regime, even strongly metallic conductors may become nonuniform and sharply quantized. In addition, mesoscopic devices are very likely to operate at high fields. To date, however, most experimental and theoretical work has engaged only the low-field linear limit. There are two theoretical responses to these challenges: make greater efforts within standard microscopics and kinetics, or revisit simpler phenomenologies and try to stretch those. For one-body mesoscopics, notably low-field conductance, the success of diffusively inspired phenomenologies is impressive. For many-body effects such as current fluctuations, diffusive theories have also had considerable success, as witness the prediction of suppressed mesoscopic shot noise, whose quantum origin is not in debate. Regardless of their triumphs and their intuitive appeal, diffusive phenomenologies have not adduced a microscopic rationale for the apparent attempt to extrapolate the drift-diffusion Ansatz upward into the hierarchy of multi-particle correlations. In particular, diffusively based noise models fail to address the sum rules, and hence to secure them. The sum rules set fundamental constraints, whose satisfaction is crucial to the correct representation of many-body phenomena in the electron gas. Current noise is one such phenomenon. Therefore, noise predictions based on diffusive arguments are less certain to be well controlled. One of diffusive noise theory’s key results is the smooth crossover of equilibrium thermal noise (scaling with ambient temperature $`T`$) into nonequilibrium shot noise (independent of $`T`$). We have pointed out that the accepted explanation for the crossover is incompatible with conventional kinetics and the theory of charged Fermi liquids. This is shown by the mandatory $`T`$-scaling of nonequilibrium thermal noise in degenerate mesoscopic conductors. Such scaling clearly precludes any continuous crossover between shot noise and noise that is generated purely thermally. In sum, the account of the smooth crossover given by diffusive analysis is not supported theoretically by orthodox kinetic theory. We believe that its empirical truth still stands in need of a more rigorous, and certainly quite different, description. New experiments would be needed to test any alternative. We end with a question: since diffusive analysis itself is widely advertised as a serious first-principles procedure, could some of its subsidiary assumptions be defective?

REFERENCES

1. M. J. M. de Jong and C. W. J. Beenakker, in Mesoscopic Electron Transport, edited by L. P. Kouwenhoven, G. Schön, and L. L. Sohn, NATO ASI Series E (Kluwer Academic, Dordrecht, 1997).

2. Sh. M. Kogan, Electronic Noise and Fluctuations in Solids (Cambridge University Press, Cambridge, 1996).

3. F. Green and M. P. Das, cond-mat/9809339 (Report RPP3911, CSIRO, unpublished, 1998).

4. S. Datta, Electronic Transport in Mesoscopic Systems (Cambridge University Press, Cambridge, 1995).

5. D. K. Ferry and S. M. Goodnick, Transport in Nanostructures (Cambridge University Press, Cambridge, 1997).

6. Y. Imry and R. Landauer, Rev. Mod. Phys. 71, S306 (1999).

7. Many-body issues in mesoscopic noise tend to be presented as secondary, rather than central, in mainstream thinking. See for example R. Landauer, in Proceedings of New Phenomena in Mesoscopic Structures, Kauai, 1998 (submitted to Microelectronic Engineering).

8. A noteworthy Monte-Carlo study is P. Tadyszak, F. Danneville, A. Cappy, L. Reggiani, L. Varani, and L. Rota, Appl. Phys. Lett. 69, 1450 (1996).

9. F. Green and M. P. Das, in Proceedings of the Second International Conference on Unsolved Problems of Noise, Adelaide, 1999, edited by D. Abbott and L. B. Kiss (AIP, in preparation). See also cond-mat/9905086.
10. D. Pines and P. Nozières, The Theory of Quantum Liquids (Benjamin, New York, 1966).

11. Indeed the Johnson-Nyquist formula in itself shows no sensitivity to carrier degeneracy, and neither does the Einstein relation. See C. Kittel, Elementary Statistical Physics (Wiley, New York, 1958), pp. 143-5. Their forms are the same whether the conductor is classical or degenerate. Therefore, additional microscopic input is essential to any systematic treatment of the fluctuations.

12. We do not discuss noise in a third important, but more complex, class: tunnel-junction devices. See for example H. Birk, M. J. M. de Jong, and C. Schönenberger, Phys. Rev. Lett. 75, 1610 (1995). There are also remarkable results for tunneling shot noise in the fractional-quantum-Hall regime. Refer to R. de Picciotto et al., Nature 389, 162 (1997); L. Saminadayar et al., Phys. Rev. Lett. 79, 2526 (1997); and M. Reznikov et al., Nature 399, 238 (1999).

13. B. J. van Wees, H. van Houten, C. W. J. Beenakker, J. G. Williamson, L. P. Kouwenhoven, D. van der Marel, and C. T. Foxon, Phys. Rev. Lett. 60, 848 (1988).

14. D. A. Wharam, T. J. Thornton, R. Newbury, M. Pepper, H. Ahmed, J. E. F. Frost, D. G. Hasko, D. C. Peacock, D. A. Ritchie, and G. A. C. Jones, J. Phys. C 21, L209 (1988).

15. Landauer’s conception extends further, to a perfect duality between voltage and current by which either quantity can equally well induce the other. In conventional kinetic theory, the electromotive force is always distinguished as the prime cause of the current. Duality is problematic beyond the weak-field linear limit; recall the non-monotonic response of a resonant-tunneling diode.

16. M. Reznikov, M. Heiblum, H. Shtrikman, and D. Mahalu, Phys. Rev. Lett. 75, 3340 (1995).

17. A. Kumar, L. Saminadayar, D. C. Glattli, Y. Jin, and B. Etienne, Phys. Rev. Lett. 76, 2778 (1996).

18. V. A. Khlus, Sov. Phys. JETP 66, 1243 (1987); G. B. Lesovik, JETP Lett. 49, 592 (1989).

19. M. Büttiker, Phys. Rev. Lett. 65, 2901 (1990); Phys. Rev. B 46, 12485 (1992).

20. C. W. J. Beenakker and M. Büttiker, Phys. Rev. B 46, 1889 (1992).

21. Th. Martin and R. Landauer, Phys. Rev. B 45, 1742 (1992).

22. M. Henny, S. Oberholzer, C. Strunk, and C. Schönenberger, Phys. Rev. B 59, 2871 (1999).

23. K. E. Nagaev, Phys. Lett. A 169, 103 (1992); Phys. Rev. B 52, 4740 (1995).

24. M. J. M. de Jong and C. W. J. Beenakker, Phys. Rev. B 51, 16867 (1995).

25. F. Liefrink, J. I. Dijkhuis, M. J. M. de Jong, L. W. Molenkamp, and H. van Houten, Phys. Rev. B 49, 14066 (1994).

26. A. H. Steinbach, J. M. Martinis, and M. H. Devoret, Phys. Rev. Lett. 76, 3806 (1996).

27. R. J. Schoelkopf, P. J. Burke, A. A. Kozhevnikov, D. E. Prober, and M. J. Rooks, Phys. Rev. Lett. 78, 3370 (1997).

28. For a critique of Langevin methods in correlated-particle kinetics, see N. G. van Kampen, Stochastic Processes in Physics and Chemistry (North-Holland, Amsterdam, 1981), pp. 246-52.

29. S. V. Gantsevich, V. L. Gurevich, and R. Katilius, Nuovo Cimento 2, 1 (1979).

30. Here we freeze the response of the self-consistent fields. This is equivalent to probing the nonequilibrium analog of the long-wavelength, “screened” Lindhard function prior to including internal Coulomb screening correlations. Screening effects are especially important for inhomogeneous systems. They can be treated systematically in Eq. (13) in the spirit of a Landau-Silin approach.
# Element mixing in the Cassiopeia A supernova

Based on observations with ISO, an ESA project with instruments funded by ESA Member States (especially the PI countries: France, Germany, the Netherlands and the United Kingdom) and with the participation of ISAS and NASA, and on observations obtained at the Canada-France-Hawaii Telescope.

## 1 Introduction

Supernovae (SNe) are key objects in the Universe (review by Trimble 1983 and references therein). They are the factories which feed the interstellar medium with many of the heavy elements. These heavy elements are built up, layer by layer, inside massive stars, stratified according to the atomic number (review by Arnett 1995 and references therein). Freshly ejected supernova material can be directly observed in the Cassiopeia A supernova remnant. The Cassiopeia A (Cas A) SuperNova Remnant (SNR) is the youngest SNR known in our galaxy. The SuperNova (SN) exploded about 320 years ago (Fesen, Becker and Goodrich 1988) at a distance of about 3.4 kpc (Reed et al. 1995). It must have been subluminous and/or heavily obscured, since it passed unnoticed, except perhaps by Flamsteed in 1680 (Ashworth 1980). The progenitor of the SN was a massive star (Vink, Kaastra and Bleeker 1996, Jansen et al. 1988, Fabian et al. 1980), probably of Wolf-Rayet type (Fesen, Becker and Goodrich 1988). The freshly ejected SN material has been widely observed in the optical range (Baade and Minkowski 1954, Chevalier and Kirshner 1979, van den Bergh and Kamper 1985 and earlier papers). The SN material is spatially distributed in Fast Moving Knots (FMKs) (see Figure 1) with a typical speed in the 5000 km/s range, as deduced from their proper motion on the sky and from the Doppler shift of the lines emitted by them. The optical observations have revealed the presence in these knots of heavy elements such as oxygen, sulfur and argon, but no hydrogen or helium lines. Mid-InfraRed (Mid-IR) observations of the FMKs have only started recently thanks to observations with ISO, the Infrared Space Observatory (Kessler et al. 1996); these observations have revealed the presence of two additional key components: neon and silicate dust (Lagage et al. 1996, Arendt, Dwek and Moseley 1999). In this paper, we present a new set of spectro-imaging observations made with ISOCAM (Cesarsky et al. 1996), the camera on board ISO; these observations reveal the spatial distribution of the silicate knots and of the neon knots. The comparison of these distributions brings unique information on the degree of mixing of the various elements which has occurred during the supernova explosion. Section 2 presents the data, the data reduction and the results we obtain. In Section 3, we discuss the implications for mixing in SNe.

## 2 Observations and results

The observations were performed on December 5th 1996 with ISOCAM. The pixel field of view of the instrument was set to 6”, comparable to the diffraction limit of the telescope. The total field of view is 3’x3’ and the field was centered on the northern part of the remnant. For each pixel, we have a spectrum from 5 to 16.5 microns that was obtained by rotating the Circular Variable Filter of ISOCAM; the spectral resolution obtained this way is around 40. The data reduction was performed with CIA (a joint development by the ESA astrophysics division and the ISOCAM consortium, led by the ISOCAM PI, C. J. Cesarsky, Direction des sciences de la matière, C.E.A., France),
using a full spectroscopic data set of an off-position field to subtract the zodiacal contribution. The result consists of 1024 spectra. Some of them are shown in Figure 1. They feature both continuum emission and line emission. The continuum emission rises slowly from 8 to 16 $`\mu `$m; in some spectra, a bump is present around 9.3 $`\mu `$m. The lines are identified as \[Ar II\] (7.0 $`\mu `$m), \[Ar III\] (9.0 $`\mu `$m), \[S IV\] (10.5 $`\mu `$m), \[Ne II\] (12.8 $`\mu `$m) and \[Ne III\] (15.5 $`\mu `$m). The argon and sulfur lines were also observed with ISOPHOT (Tuffs et al. 1997), but not the neon lines, which are out of the ISOPHOT-S wavelength range. The neon emission map is obtained from the difference between the peak flux in the \[Ne II\] line and the underlying continuum. The evidence for the presence of dust is provided by the continuum radiation underlying the line emission; the silicate dust is well characterized by its feature around 9.3 $`\mu `$m (see spectrum 2 of Figure 1). The silicate emission map is obtained at 9.5 $`\mu `$m by subtracting from the detected emission the emission from a blackbody fitting the data at 7.5 and 11.5 $`\mu `$m. These two maps are both overplotted on an optical image obtained with the SIS instrument mounted on the Canada-France-Hawaii Telescope in August 1998; a filter centered at 6750 Å and with a band-pass of 780 Å was used; the pixel field of view was 0.15 arcsec and the integration time was 300 s. Note that several optical knots are often present in a single ISOCAM pixel. Neon, which is barely detected in the visible (Fesen 1990), gives prominent lines in the Mid-IR: the \[Ne II\] line at 12.8 $`\mu `$m and the \[Ne III\] line at 15.5 $`\mu `$m (see spectrum 1 of Figure 1). Mid-IR searches for neon are more advantageous than optical studies, because of their insensitivity to the relatively high interstellar extinction toward Cas A (typically A<sub>V</sub>=5, see Hurford and Fesen 1996) and of the lower temperature needed to excite IR lines compared to optical lines. The neon map is compared with the silicate dust map in Figure 1. Spectrum 1 is typical of neon knots. Spectrum 2 is typical of silicate knots (the silicate feature at 9.3 $`\mu `$m is underlined by the dotted curve). Spectrum 2 is slightly contaminated by neon, most likely due to the strong neon emission just nearby. The bump around this small neon feature could be attributed to Al<sub>2</sub>O<sub>3</sub> (Koike et al. 1995, Kozasa and Hisoto 1997). An anticorrelation between the presence of neon and the presence of silicate in many knots is evident. The regions where both neon and silicate are observed in the IR spectra are confused regions where several bright optical knots lie along the line of sight probed by an ISOCAM pixel (6”x6”).
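The two map constructions just described are simple enough to sketch in code. The following is our schematic reconstruction, not the actual reduction pipeline; the array layout, window widths and temperature bracket are assumptions:

```python
import numpy as np
from scipy.optimize import brentq

def line_map(cube, wav, line_um=12.8, cont_um=11.8, width=0.2):
    """Peak-minus-continuum map for an emission line, e.g. [Ne II] 12.8 um.
    cube: (ny, nx, nwav) CVF spectra; wav: wavelengths in microns."""
    peak = cube[:, :, np.abs(wav - line_um) < width].max(axis=2)
    cont = cube[:, :, np.abs(wav - cont_um) < width].mean(axis=2)
    return peak - cont

def planck(lam_um, T):
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    lam = lam_um * 1e-6
    return 1.0 / (lam**5 * np.expm1(h * c / (lam * k * T)))

def silicate_map(cube, wav, um=(7.5, 9.5, 11.5), width=0.2):
    """Excess at 9.5 um over a blackbody anchored at 7.5 and 11.5 um."""
    f = [cube[:, :, np.abs(wav - u) < width].mean(axis=2) for u in um]
    out = np.zeros_like(f[0])
    for iy, ix in np.ndindex(out.shape):
        r = f[0][iy, ix] / f[2][iy, ix]              # observed 7.5/11.5 ratio
        T = brentq(lambda t: planck(um[0], t) / planck(um[2], t) - r, 30.0, 1000.0)
        out[iy, ix] = f[1][iy, ix] - f[0][iy, ix] / planck(um[0], T) * planck(um[1], T)
    return out
```

The continuum reference wavelength for the \[Ne II\] map (11.8 $`\mu `$m here) is a placeholder; the paper specifies only that the underlying continuum is subtracted from the line peak.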
Indeed, even with the poor spectral resolution of ISOCAM observations, it has been possible to measure the Doppler shift of the lines emitted from the knots with the highest radial velocities (Lagage et al. 1999). For these knots, all the lines have the same Doppler shift, indicating a common knot origin of neon, argon and sulfur. Another way to find out if neon, sulfur and argon originate from the same knots is to search for oxygen, sulfur and argon lines in optical spectra. Indeed, in supernovae the neon layer is associated with the oxygen layer (see Figure 2), and the optical \[O III\] line has excitation conditions intermediate between those of the IR \[Ne II\] and \[Ne III\] lines (same critical density as the \[Ne II\] line and ionization potential intermediate between those of the \[Ne II\] and \[Ne III\] lines). For this purpose we performed follow-up spectro-imaging observations of the Mid-IR knots in the optical at the Canada-France-Hawaii Telescope. Most of the FMK optical spectra feature oxygen, sulfur and argon lines, in agreement with previous studies (Hurford and Fesen 1996, Chevalier and Kirshner 1979). Thus we can conclude that argon and sulfur are indeed present in most of the neon knots. Note also that, given that the \[Ar III\] line and the \[Ne II\] line have very similar excitation conditions, we can exclude the presence of neon in the silicate knots, which emit in the \[Ar III\] line but not in the \[Ne II\] line. Finally, spectrum 3 of Figure 1 originates in a region which is not associated with fast moving knots, but which nevertheless is bright in the IR (and also in X-rays and radio). No line emission is present in the spectrum, and the continuum emission is well fitted by Draine and Lee silicates (Draine and Lee 1984) at a temperature of 105 K (see the dashed line in spectrum 3 of Figure 1); Draine and Lee graphites do not fit this spectrum. The emission is probably due to circumstellar or interstellar dust heated by the supernova blast wave. Such a continuum emission is present all over the supernova remnant and is probably at the origin of the continuum dust emission underlying the line emission in neon knots. In spectrum 3, no room is left for synchrotron radiation down to the sensitivity limit of our observations, 2$`\times `$10<sup>-3</sup> Jansky (1 $`\sigma `$) at 6 microns in an ISOCAM pixel. This limit is compatible with the expected IR synchrotron emission (about half a mJy), as extrapolated from the radio synchrotron emission between 1.4 GHz and 4.8 GHz detected in the region of the ISOCAM pixel of spectrum 3 (Anderson et al. 1991).
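As a consistency note, a 105 K blackbody does rise monotonically across the full 5–16.5 $`\mu `$m CVF band (its Wien peak lies near 28 $`\mu `$m), matching the slowly rising continua described in this section. A two-line check of ours:

```python
import numpy as np

def planck(lam_um, T):
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    lam = lam_um * 1e-6
    return 1.0 / (lam**5 * np.expm1(h * c / (lam * k * T)))

b = planck(np.array([5.0, 8.0, 12.0, 16.5]), 105.0)
print(b / b.max())   # strictly increasing toward 16.5 um
```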
## 3 Discussion

In order to explain these observations, we have to recall how the elements are structured inside a supernova. The elements are located in a stratified way according to their burning stage. Neon, silicate-making elements and oxygen-burning products (S, Ar…) are in different layers (see Figure 2). The hatched region in Figure 2 is what we call the ”silicate” region; this is where sufficient amounts of oxygen, magnesium and silicon are present at the same time to form pyroxene (MgSiO<sub>3</sub>). We consider pyroxene because it is this type of silicate which is predicted by dust formation models (Kozasa, Hasegawa and Nomoto 1991). Furthermore, the observation of a 22 $`\mu `$m feature in Cassiopeia A was attributed to Mg protosilicates (Arendt, Dwek and Mosely 1999), and the 9.3 $`\mu `$m feature of our spectra (see spectrum 2 of Figure 1) is well fitted with very small grains (less than 0.1 microns) of pyroxene. A straightforward way to interpret the fact that the neon and silicate layers remain spatially anticorrelated in the ejecta is then to consider that there has been only weak mixing between those layers during the supernova explosion. In contrast, sulfur and argon have been extensively mixed. The mixing of the sulfur and argon layer with the oxygen layer was already revealed by optical observations; the mixing is known to be extensive, but not complete, as some oxygen knots are free from sulfur and some sulfur knots are free from oxygen (Chevalier and Kirshner 1979, van den Bergh and Kamper 1985). The key new result from the Mid-IR observations is that the neon and silicate layers are essentially unmixed, i.e. the mixing is heterogeneous. The data also suggest that the mixing is mostly macroscopic. Indeed, silicon is located in the same layer as sulfur and argon; thus silicon is also expected to be spread into the neon layer. If the silicon mixing were microscopic, silicates could be produced all over the magnesium layer, which encompasses the neon layer (see Figure 2); then the anticorrelation of Figure 1 would not hold. Another possibility would be that the mixing is indeed microscopic, but that for some reason the silicate production is quenched in the neon layer. In any case the conclusion is that the silicate production from the SN is limited to the thin layer shown in Figure 2. As a consequence, the silicate dust production is five times lower than in the case where all the silicate-condensable elements condense. Silicates are known to be present in large quantities in the interstellar medium, but there is still a debate as to the dominant injection source of these silicates. Supernova ejecta or stellar wind material from giant and supergiant stars have been invoked. Core collapse SNe could be the main silicate provider in the case of complete condensation of all the silicate-condensable elements (Dwek 1998). But if, for all core collapse SNe, the condensation is incomplete, as discussed here in the case of Cas A, then core collapse SNe can no longer be the dominant source of silicates at present; they could, however, have played a dominant role in the past (Dwek 1998). No evidence for dust formation in type Ia SNe has been found so far, and no information on the mixing of silicate elements in those SNe exists. In the absence of this evidence, we consider that giant and supergiant stars are likely to be the main silicate providers. The origin of the mixing is still an open question. Several studies have been made in the framework of the observations of SN 1987A (Arnett et al. 1989 and references therein), the supernova whose explosion in the Large Magellanic Cloud was detected in 1987, and of SN 1993J (Spyromilio 1994, Wang and Hu 1994); the issue is how to mix the inner regions, where the nickel and cobalt have been synthesized by explosive oxygen burning (see Figure 2), with the upper layers of hydrogen and helium. Hydrodynamic instabilities (of the Rayleigh-Taylor and Richtmyer-Meshkov type) are usually considered as playing a key role. Presupernova models show that the density profile of the presupernova features steep density gradients at the interfaces of composition changes, especially at the hydrogen/helium and helium/oxygen interfaces; these gradients are regions where shock-induced Richtmyer-Meshkov instabilities, followed by Rayleigh-Taylor instabilities, can develop during the SN explosion.
Models based on instabilities at these interfaces have difficulty reproducing the data quantitatively (Arnett 1995). It is also hard to imagine how instabilities at the H/He or the He/O interface could be responsible for the heterogeneous mixing presented here, especially if, as generally claimed, the Cas A progenitor had already shed all of its hydrogen and all or most of its helium before the SN exploded. Presupernova models feature another, weaker, density gradient at the bottom of the oxygen layer (for example Nomoto et al. 1997). In addition, convection at work in this region during the presupernova phase can generate density perturbations which could seed the instabilities (Bazan and Arnett 1998). Thus it seems likely that the mixing originated at the bottom of the oxygen layer, but this remains to be proven by self-consistent numerical models following all phases from pre-supernova to now, taking into account radiative cooling, which can lead to clumps. As a complement to advances in numerical simulations, laboratory experiments are needed. Such experiments are starting to become possible thanks to the use of intense lasers, which can generate plasmas mimicking various astrophysical conditions (Remington et al. 1999). Laboratory experiments simulating hydrodynamic instabilities at the H/He interface have already been conducted (Kane et al. 1997, Drake et al. 1998). Experiments reproducing the conditions at the interface between the oxygen layer and the oxygen-burning products should be performed. The possibility of heterogeneous mixing could then be tested. Another issue which should be investigated is the degree of microscopic versus macroscopic mixing. Heterogeneous microscopic mixing in supernovae is a key requirement in order to explain the isotopic anomalies observed in some presolar grains found in meteorites (Travaglio et al. 1998). For example, the presence of silicon carbide with <sup>28</sup>Si implies that <sup>28</sup>Si, produced in an inner shell of a SN, has to be injected up to the outer layer of carbon and then microscopically mixed with the carbon. The mixing has to be heterogeneous in the sense that the oxygen layer should not be mixed with the carbon layer; otherwise, the carbon would be locked into CO molecules and no carbon dust particles could be made. Rayleigh-Taylor instabilities mostly lead to macroscopic mixing, but some microscopic mixing could occur at the interface of macroscopically mixed regions. In that context, the Mid-IR observations presented here are complementary to meteorite studies.

## Acknowledgments

We would like to thank J.P. Chieze and R. Teyssier for enlightening discussions about Rayleigh-Taylor instabilities and laser experiments. We thank the referee R. Arendt for his careful reading of the manuscript and his useful comments.
# Triton’s Surface Age and Impactor Population Revisited in Light of Kuiper Belt Fluxes: Evidence for Small Kuiper Belt Objects and Recent Geological Activity

S. Alan Stern and William B. McKinnon<sup>1</sup>

Department of Space Studies
Southwest Research Institute
1050 Walnut Street, Suite 426
Boulder, CO 80302
astern@swri.edu, mckinnon@levee.wustl.edu

13 Pages, 02 Figures, 00 Tables

Submitted to The Astronomical Journal: 15 July 1999; Revised: 05 Oct 1999

<sup>1</sup>On sabbatical from the Department of Earth and Planetary Sciences and McDonnell Center for Space Sciences, Washington University, Saint Louis, MO 63130; mckinnon@levee.wustl.edu

ABSTRACT

Neptune’s largest satellite, Triton, is one of the most fascinating and enigmatic bodies in the solar system. Among its numerous interesting traits, Triton appears to have far fewer craters than would be expected if its surface were primordial. Here we combine the best available crater count data for Triton with improved estimates of impact rates by including the Kuiper Belt as a source of impactors. We find that the population of impactors creating the smallest observed craters on Triton must be sub-km in scale, and that this small-impactor population can be best fit by a differential power-law size index near –3. Such results provide interesting, indirect probes of the unseen small body population of the Kuiper Belt. Based on modern Kuiper Belt and Oort Cloud impactor flux estimates, we also recalculate estimated ages for several regions of Triton’s surface imaged by Voyager 2, and find that Triton was probably active on a time scale no greater than 0.1–0.3 Gyr ago (indicating Triton was still active after some 90% to 98% of the age of the solar system), and perhaps even more recently. The time-averaged volumetric resurfacing rate on Triton implied by these results, 0.01 km<sup>3</sup> yr<sup>-1</sup> or more, is likely second only to Io and Europa in the outer solar system, and is within an order of magnitude of estimates for Venus and for the Earth’s intraplate zones. This finding indicates that Triton likely remains a highly geologically active world at present, some 4.5 Gyr after its formation. We briefly speculate on how such a situation might obtain.

Keywords: comets: general—planets and satellites: Neptune, Triton—Kuiper belt, Oort Cloud.

1. INTRODUCTION

The 1989 Voyager 2 encounter with the largest satellite of Neptune, Triton, revolutionized our knowledge of this world, revealing it to be a scientifically inspiring satellite, 2700 km in diameter, with an N<sub>2</sub>/CH<sub>4</sub> cryo-atmosphere, and a morphologically complex surface (e.g., Smith et al. 1989). Voyager also discovered detached hazes, atmospheric emissions excited by the precipitation of charged particles from Neptune’s magnetosphere, and small vents generating plumes that rise almost 10 kilometers through Triton’s atmosphere. Among the most intriguing questions concerning this distant world is the issue of its surface age and therefore, by extension, the degree of recent or ongoing internal activity within this body. Voyager-era investigators obtained a crude global surface age estimate of 1 Gyr (Smith et al. 1989; Strom et al. 1990; cf. Croft et al. 1995), but their calculations did not take into account the cratering flux from the (then undiscovered) Kuiper Belt.
In what follows we will combine existing crater counts with modern impact flux estimates, which include the Kuiper Belt, in order to derive new estimates of the surface age on Triton. This paper extends considerably some preliminary results reported in an LPSC abstract (Stern & McKinnon 1999). A key assumption we make is that the primary process that removes craters from Triton’s surface is conventional geological activity (e.g., volcanism), as opposed to more exotic possibilities such as viscous relaxation, escape erosion, or charged particle degradation. This assumption is strongly supported by Triton’s apparently conventional crater size-frequency distribution, and the uniformly fresh appearance of craters on Triton.

2. ATTRIBUTES OF TRITON’S CRATER AND IMPACTOR POPULATIONS

We begin by estimating the typical size scale for craters produced by impacting bodies. Following standard Schmidt-Holsapple crater scaling (e.g., Chapman & McKinnon 1986; Holsapple 1993), the crater diameter $`D`$ for a specified set of impact parameters and surface properties on a body with gravity $`g`$ can be estimated from: $$D_{tr}=1.56d(A\delta /\rho )^{1/3}(1.61gd/v^2)^{-\alpha /3}(\mathrm{cos}\overline{\theta })^{2\alpha /3},$$ $`(1)`$ where $`D_{tr}`$ is the so-called transient diameter, which we assume to be a paraboloid of revolution with a depth/diameter ratio of 1/2$`\sqrt{2}`$ (McKinnon & Schenk 1995). Here $`d`$ is the equivalent spherical impactor diameter, $`\delta `$ and $`\rho `$ are the impactor and surface densities, respectively, $`v`$ is the impact velocity, $`A`$ and $`\alpha `$ are scaling constants which depend on the thermomechanical properties of the surface, and cos$`\overline{\theta }`$=0.71 is an adjustment factor to account for the average impact angle ($`\theta `$=45 $`\mathrm{deg}`$). We adopt a maximum impactor velocity $`v_{max}`$=11.6 km s<sup>-1</sup>, as set by the root-sum-square of Triton’s escape speed and the sum of Triton’s orbital speed and the maximum impactor velocity at Triton’s orbit. We adopt a minimum impactor velocity $`v_{min}`$=2.3 km s<sup>-1</sup>, as set by the root-sum-square of Triton’s escape speed and the difference between Triton’s orbital speed and the escape speed from Triton’s orbit. Now, $`D_{tr}`$ is proportional to the final crater diameter $`D`$ for $`D`$$`<`$$`D_c`$, where $`D_c`$ is the simple-to-complex crater transition diameter, which is between 6 and 11 km on Triton (Strom, Croft, & Boyce 1990; Schenk 1992). We take $`D_c`$=8 km. When $`D`$$`>`$$`D_c`$, i.e., in the case of complex (flattened) craters, the scaling relationship is also relatively straightforward. Based on both morphological measurements of craters on Triton (Croft et al. 1995), and geometrical models of craters on Ganymede (Schenk 1991; McKinnon & Schenk 1995), the closest analogues to Triton’s craters for which extensive data exist, we therefore write: $$D(D<D_c)=1.3D_{tr}$$ $`(2a)`$ $$D(D>D_c)=(1.3D_{tr})^{1.11}D_c^{-0.11}.$$ $`(2b)`$ The scaling presented in Equations (1) and (2) is probably accurate to 30% in $`D`$. To fully explore the range of impactor diameter $`d`$ that generates the observed craters on Triton, we show Figure 1. This figure evaluates Equations (1) and (2) as a function of impactor diameter over both the range of probable impact velocities, and a suite of ($`\delta `$,$`\rho `$) cases spanning the reasonably expected parameter space.
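To make the scaling concrete, here is a small implementation of Eqs. (1)–(2). The constants $`A`$ and $`\alpha `$ are not quoted in the text; the values below are our assumptions (of the right order for icy targets, and chosen so that the outputs land within the impactor-size ranges read off Figure 1 in the next paragraph), so the numbers should be taken as illustrative only.

```python
import numpy as np

G_TRITON = 0.78      # surface gravity, m s^-2
D_C = 8.0e3          # simple-to-complex transition, m

def crater_diameter(d, v, delta=1000.0, rho=1000.0, A=0.2, alpha=0.65,
                    g=G_TRITON, cos_theta=0.71):
    """Final crater diameter (m) from impactor diameter d (m) and speed v (m/s),
    per Eqs. (1), (2a), (2b).  A and alpha are assumed, not from the text."""
    d_tr = (1.56 * d * (A * delta / rho) ** (1 / 3)
            * (1.61 * g * d / v**2) ** (-alpha / 3)
            * cos_theta ** (2 * alpha / 3))
    D = 1.3 * d_tr                                           # Eq. (2a)
    return np.where(D > D_C, D**1.11 * D_C**(-0.11), D)      # Eq. (2b)

for d in (300.0, 2000.0):                 # impactor diameters, m
    for v in (2.3e3, 11.6e3):             # v_min and v_max
        print(f"d = {d/1e3:.1f} km, v = {v/1e3:4.1f} km/s ->"
              f" D = {float(crater_diameter(d, v))/1e3:5.1f} km")
```

With these choices a 0.3 km impactor yields a 2–5 km crater and a 2 km impactor yields a crater of roughly 10–25 km, depending on velocity, bracketing the crater sizes counted by Voyager.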
The baseline case against which other density pairs may be compared is simply the one of equal densities for impactor and surface (Fig. 1, lower right). Inspecting Figure 1, one concludes that the 2.8 to 27 km diameter craters identified in Voyager images of Triton (Smith et al. 1989; Strom, Croft, & Boyce 1990) imply impactors with diameters between 0.1 km and 0.7 km (to create the 2.8 km minimum crater diameters counted), and 2–11 km in diameter (to create the largest craters detected, like Mazomba with $`D`$=27 km). Such sizes naturally imply comet-sized bodies as the dominant observed Triton impactor population, and as such provide a valuable constraint on the small body population in Neptune’s region of the solar system. This is our first result. We now consider the crater size and number statistics derived from imaging by the Voyager 2 spacecraft during its 1989 flyby of Triton, in order to constrain the size-frequency power-law index of the impactor population. The best available assessment of Triton’s crater statistics (Smith et al. 1989; Strom et al. 1990) discussed four regions on Triton (“Areas 1–4”) on which careful crater counts were attempted. These areas add up to 16% of Triton’s total surface area, out of a total of $`\sim `$40% of the satellite that was imaged by Voyager at resolutions useful for geological analysis. The interested reader can find images of Areas 1–4 and a sketch map showing their location on Triton in Smith et al. (1989). (We do not consider the results of the 6 highest-resolution Voyager frames, reported in summary form by Croft et al. (1995), because the area covered is small and the images used are smeared to varying degrees owing to spacecraft motion.) Because the observed crater population on Triton in general, and Area 1 in particular, is far from saturation equilibrium, and is not manifestly geologically degraded, the Triton crater counts probably represent a production population. We concentrate initially on Area 1, which exhibits the highest density of craters and therefore has the best statistical confidence. Area 1 is located near the apex of Triton’s orbital motion, and contains 9.79$`\times `$10<sup>5</sup> km<sup>2</sup> (some 4.2% of Triton’s surface area); it displays a total of 181 craters with $`D`$$`>`$2.8 km. We concur with Strom et al. (1990) that of the 4 distinct terrains counted on Triton, Area 1 has the crater count statistics to best support a size-frequency analysis. For the small-body impactor population we take a differential power law, as is typically used to represent the Kuiper Belt population (Weissman & Levison 1997), of the form $`n(d)\propto d^b`$. We again select impact velocities to evaluate Equation (1), drawing from a uniform distribution spanning the probable $`v_{min}`$ to $`v_{max}`$ range for Triton, and use Equations (1) and (2) to scale from impactor diameters to crater diameters. Figure 2 shows the results of model simulations designed to fit $`b`$, the power-law exponent on the size-frequency distribution of impactors in Area 1. Figure 2 shows that the Voyager data are fit well by relatively shallow power-law slopes of the impactor population, with the nominal value of $`b`$ being near –2.5. This result is robust to the choice of plausible ($`\delta `$,$`\rho `$) combinations in Equation (1) because this ratio appears as a multiplier in the scaling equation and the change in slope at $`D`$=$`D_c`$ is not severe.
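The fitting exercise behind Figure 2 can be emulated schematically: draw impactor diameters from $`n(d)\propto d^b`$, draw velocities uniformly between $`v_{min}`$ and $`v_{max}`$, push each impactor through the crater scaling, and examine the binned crater counts. The sketch below is ours (same assumed $`A`$ and $`\alpha `$ as before; all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_powerlaw(n, dmin, dmax, b):
    """Draw n diameters from a differential power law d^b (b < -1)."""
    u, p = rng.random(n), b + 1.0
    return (dmin**p + u * (dmax**p - dmin**p)) ** (1.0 / p)

def crater_D(d, v, A=0.2, alpha=0.65, g=0.78, Dc=8.0e3):
    d_tr = 1.56 * d * A**(1/3) * (1.61*g*d/v**2)**(-alpha/3) * 0.71**(2*alpha/3)
    D = 1.3 * d_tr
    return np.where(D > Dc, D**1.11 * Dc**(-0.11), D)

d = sample_powerlaw(200000, 50.0, 5000.0, b=-3.0)   # impactor diameters, m
v = rng.uniform(2.3e3, 11.6e3, d.size)
D = crater_D(d, v)

edges = np.geomspace(2.8e3, 14e3, 8)                # crater-diameter bins, m
ncum = np.array([(D > e).sum() for e in edges])
print(np.round(np.gradient(np.log(ncum), np.log(edges)), 2))
# cumulative crater slope ~ -2/(1 - alpha/3) ~ -2.6 below D_c for b = -3,
# since D scales as d^(1 - alpha/3) in the gravity regime
```

Note that the crater-diameter distribution is steeper than the impactor population’s cumulative index of –2 because $`D`$ grows sublinearly with $`d`$; this is why a fit such as Figure 2’s must be made through the scaling relation rather than read directly off the crater counts.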
We note, however, that the resolution in some of the images used in the Area 1 count is as poor as 2.2 km line-pair<sup>-1</sup>; therefore the bottom-most bin or two likely suffered undercounting. Neglecting the bottom-most bin or two allows steeper fits, up to $`b`$$`\sim `$–3, within the Poisson statistics of the crater counts. In what follows we adopt $`b`$=–3 as our preferred solution. The slope parameter just derived is in accord with Weissman & Levison’s (1997) Kuiper Belt model, and is also consistent with Shoemaker & Wolfe’s (1982) preferred –3.0 power-law index for ray-crater impactors on Ganymede (presumably comets). This second result provides a new (if indirect) source of information on the population of small bodies that cannot as yet be optically detected in the Kuiper Belt, and indicates they are plentiful, as collisional evolution models have predicted (e.g., Stern 1995, Davis & Farinella 1997).

3. IMPACTOR FLUXES

In preparation for estimating surface unit ages, we now estimate the current cratering rate on Triton, $`\dot{N}`$. The heliocentric flux contributing to $`\dot{N}`$ consists of terms due to objects on Neptune-crossing orbits from both the Edgeworth Kuiper Belt (EKB) (Levison & Duncan 1997; hereafter LD97) and the Scattered Kuiper Belt (SKB) (Duncan & Levison 1997), and due to objects in the Oort Cloud (Weissman & Stern 1994). For a recent Kuiper Belt review, see Farinella et al. (2000). We neglect the possibility of a significant Neptuneocentric population of impactors (Croft et al. 1995) on two grounds. The first is the great observed emptiness of the Neptunian system with regard to debris and small satellites outside 5 Neptune radii (Smith et al. 1989). The second is the fact, easily shown, that any small-body impactor population large enough to populate Triton’s surface with the Voyager-observed craters would, if their orbits are Triton crossing, be swept up on time scales of 1 to 10<sup>3</sup> years in most cases. Therefore, unless a discrete Neptuneocentric flux event very recently populated Triton with its observed craters, this short sweep-up time for Neptuneocentric debris implies a large unseen population of such impactors on Triton-crossing orbits. Indeed, to sustain this population against Triton sweep-up over 100 Myr would imply both an accreted veneer of mass up to 10<sup>24</sup> gm (roughly the mass of a typical Uranian satellite), and a surface that is constantly renewed on timescales of 10<sup>3</sup> years or less. We will refer to the combined EKB+SKB flux term as the KB contribution. To obtain the total Kuiper Belt cratering rate on Triton, we adopt the state-of-the-art comet impact rate estimate by LD97, as revised by Levison et al. (1999; henceforth LDZD99), i.e., $`\dot{N}_{Neptune}=`$3.5$`\times `$10<sup>-4</sup> comets yr<sup>-1</sup> with $`d>2_{-1}^{+2}`$ km on Neptune. We then scale that result to Triton, accounting for its smaller diameter and the gravitational focusing at its distance from Neptune. For an average encounter velocity at Neptune’s sphere of influence of $`\sim `$0.3 times Neptune’s orbital speed (LD97), these factors together conspire to reduce Triton’s collision cross-section, and therefore its globally-averaged collision rate, by a factor near 2.7$`\times `$10<sup>-4</sup>, relative to Neptune.
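That reduction factor can be checked with a back-of-the-envelope gravitational-focusing estimate. The sketch below is ours, not LD97’s calculation; the GM values and radii are standard, and the encounter speed follows the $`\sim `$0.3 times $`v_{orb}`$ figure just quoted.

```python
GM_NEPTUNE = 6.8365e6                    # km^3 s^-2
GM_TRITON = 1.43e3                       # km^3 s^-2
R_NEPTUNE, R_TRITON = 24764.0, 1350.0    # radii, km
A_TRITON = 354760.0                      # Triton's orbital radius, km
v_inf = 0.3 * 5.43                       # encounter speed at the sphere of influence, km/s

# gravitationally focused cross-section ~ R^2 (1 + v_esc^2 / v_inf^2)
sigma_neptune = R_NEPTUNE**2 * (1 + 2*GM_NEPTUNE/(R_NEPTUNE * v_inf**2))
v_esc2 = 2*GM_NEPTUNE/A_TRITON + 2*GM_TRITON/R_TRITON   # well depth at Triton
sigma_triton = R_TRITON**2 * (1 + v_esc2 / v_inf**2)
print(sigma_triton / sigma_neptune)      # ~2.3e-4, near the quoted 2.7e-4
```

This simple estimate lands within $`\sim `$20% of the quoted factor; the residual difference plausibly reflects LD97’s fuller treatment of the encounter velocity distribution.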
Combined with the fact that the time-averaged Kuiper Belt source rate into the planetary region has probably only declined by $`\sim `$5% over the past 0.5 Gyr (Holman & Wisdom 1993; Levison & Duncan 1993), we predict a present-epoch, globally averaged KB-impactor source rate of $`\dot{N}_{KB}`$=1.0$`\times `$10<sup>-7</sup> comets yr<sup>-1</sup> with $`d`$$`>`$2 km, or, in a more useful, surface area-normalized form for our purposes, $`\widehat{\dot{N}}`$=4$`\times `$10<sup>-15</sup> craters km<sup>-2</sup> yr<sup>-1</sup> due to comets with d$`>`$2 km. LD97’s estimated uncertainty in deriving the KB term for $`\dot{N}_{Neptune}`$ is of order a factor of 2.8 to 4, depending on whether the diameter uncertainty above is convolved with a $`b`$=–2.5 or $`b`$=–3 differential power-law size index, respectively. Neither LD97 nor LDZD99 included an estimate of the Oort Cloud (OC) impactor rate on Neptune. Weissman & Stern (1994), however, made a calculation of the Oort Cloud impact rates on Pluto, an outer solar system object of similar physical cross-section to Triton in a region with relatively similar OC flux. They estimated that the total number of Oort Cloud impacts by comets with $`d`$$`>`$2.4 km on Pluto is $`\sim `$50 over the past 4 Gyr. Scaling this result to Triton’s larger physical cross-section and enhanced gravitational focusing environment around Neptune, and then adopting a (limiting case) –3 differential power-law size index, we expect $`\sim `$150 impacts with $`d>`$2 km on Triton over the past 4 Gyr. This corresponds to an average impact rate of $`\dot{N}_{OC}`$=3.7$`\times `$10<sup>-8</sup> yr<sup>-1</sup> for $`d>`$2 km, or $`\widehat{\dot{N}}`$=1.6$`\times `$10<sup>-15</sup> craters km<sup>-2</sup> yr<sup>-1</sup> due to comets with $`d>`$2 km, some $`\sim `$40% of the d$`>`$2 km KB impactor rate. This value, 40%, is likely to be an upper limit, however, because it assumes the perihelion distribution of inner Oort Cloud comets extends smoothly across Neptune’s dynamical barrier, which is unlikely (Weissman & Stern 1994). This leads us to conclude that the EKB+SKB flux is clearly the dominant contributor to recent cratering on Triton, our third result. Because the OC cratering rate appears to be only $`\sim `$40% or less of the EKB+SKB cratering rate, we neglect it in what follows. Continuing, for a satellite in synchronous rotation like Triton, it is well known that the area around the apex of motion is where impact fluxes should be highest (Shoemaker & Wolfe 1982). And indeed as noted above, Area 1, which is near the apex of motion, was the most heavily cratered terrain seen on Triton. Therefore, we must account for this position-dependent flux in interpreting unit ages where craters have been counted on Triton. Shoemaker & Wolfe (1982) showed that the enhancement factor, $`\eta _i`$, for any given surface unit $`i`$, is close to a factor of 2 near the apex, and varies approximately as the cosine of the angular distance from the apex of motion, reaching unity 90 deg from the apex (see their Eqn. 17). Area 1 stretches from about 20 to 60 deg from the apex of motion, which yields an area-averaged factor of $`\eta _1`$=1.8 increase for Area 1 in its nominal cratering rate over that predicted for the global-average Triton.
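The area-averaged enhancement is easy to reproduce under the approximate cosine law described above. The form $`\eta =1+\mathrm{cos}\gamma `$ is our reading of “a factor of 2 at the apex, falling as the cosine to unity at 90 deg”; Shoemaker & Wolfe’s actual Eqn. 17 differs in detail:

```python
import numpy as np

gamma = np.radians(np.linspace(20.0, 60.0, 2001))   # angular distance from apex
eta = 1.0 + np.cos(gamma)                           # approximate enhancement law
w = np.sin(gamma)                                   # solid-angle weighting
print((eta * w).sum() / w.sum())                    # ~1.7, near the adopted 1.8
```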
4. CRATER RETENTION AGES

The average crater retention age of Area i’s surface can be estimated from the general relation: $$T_i=\left(\frac{N_{\mathrm{crat},i}}{\widehat{\dot{N}}}\right)\left(\frac{1}{\eta _iA_i}\right).$$ $`(3)`$ Here $`\eta _i`$ is from above, $`A_i`$ is the area of the unit and $`N_{\mathrm{crat},i}`$ is the number of craters formed on that unit by impactors of $`d>`$2 km; recall $`\widehat{\dot{N}}`$ is a global average for Triton. We now evaluate Equation (3) for Area 1, where $`\eta _1`$=1.8. If we presume that comets are relatively dense and Triton’s surface is no denser than pure water ice (Fig. 2, lower left), then the age computed from Eqn. (3), assuming b=–3, is 240 Myr (600 Myr for b=–2.5). More plausibly, for the baseline case of equal densities for impactor and surface, we find $`T_1`$=320 Myr (750 Myr for b=–2.5). (We base the Area 1 age determination for b=–3 on the 99 craters with D$`>`$4 km counted by Strom and coworkers.) The key implication, which is robust for most plausible combinations of $`\delta `$ and $`\rho `$, is that Area 1 is geologically very young, almost certainly $`<`$10% of the age of the solar system, and perhaps a good deal younger. These estimates for $`T_1`$ likely represent an upper limit on the time scale for the most recent significant geologic activity on Triton. Why? Even for the limited fraction of Triton imaged at decent resolution by Voyager, stratigraphic relationships show that Area 1 is older than adjacent units on Triton (Croft et al. 1995), particularly Smith et al.’s (1989) Areas 2 and 4. Area 2 is an assemblage of volcanic plains, and is about half as densely cratered as Area 1 (Strom et al. 1990). Because Area 2 stretches from 60 to 90 deg from the apex of motion, $`\eta _2`$=1.25, which yields a baseline (i.e., $`\rho `$=$`\delta `$) $`T_2`$ of $`\sim `$230 Myr (550 Myr for b=–2.5); as above, these estimates assume all of these craters are due to sources outside the Neptune system. Areas 2 and 4 have similar crater densities and lie at similar distances from the apex of motion, so $`T_4`$ is similar to $`T_2`$. We note, however, that Area 4 comprises part of the northern portion of Triton’s southern frost cap, and is thus subject to a variety of surficial modification processes, so the formation age of this unit may in fact be somewhat older. The craters initially identified on Area 3, Triton’s cantaloupe terrain, were later shown unlikely to be due to impact (Strom et al. 1990). Taken together, it is clear that three of the four crater-mapped units on Triton yield crater retention ages that are not only substantially less than 1 Gyr, but may well be of order 0.2–0.3 Gyr. This is our fourth result. (And why are there no very large impact craters or basins on Triton? Because Triton’s surface is too young to preserve the rare impact scars from 100 km impactors; a simple calculation of KB flux indicates that such objects should impact Triton on timescales of $`>`$10<sup>10</sup> years. Regarding ancient impacts, the thermal pulse associated with tidal braking should have erased any primordial surface.) The primary reason for the younger ages we have just derived is the inclusion of the Kuiper Belt population and its consequent effect on impact rates.
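The arithmetic behind $`T_1`$ can be laid out end to end. In this sketch, the conversion from the $`d>`$2 km comet rate to the production rate of $`D>`$4 km craters uses the $`b`$=–3 cumulative scaling together with the same assumed crater-scaling constants as in our earlier sketches ($`A`$=0.2, $`\alpha `$=0.65) and a single representative velocity; those choices are ours, so the result should only be read as reproducing the order of magnitude of the published 320 Myr.

```python
from scipy.optimize import brentq

def crater_D(d, v=7.0e3, A=0.2, alpha=0.65, g=0.78, Dc=8.0e3):
    d_tr = 1.56 * d * A**(1/3) * (1.61*g*d/v**2)**(-alpha/3) * 0.71**(2*alpha/3)
    D = 1.3 * d_tr
    return D if D <= Dc else D**1.11 * Dc**(-0.11)

d4 = brentq(lambda d: crater_D(d) - 4.0e3, 10.0, 5000.0)  # impactor making D = 4 km

rate_2km = 4.0e-15                       # craters km^-2 yr^-1 from d > 2 km comets
rate_4km = rate_2km * (2000.0 / d4)**2   # b = -3  =>  cumulative index -2
N1, eta1, A1 = 99, 1.8, 9.79e5           # counted craters; enhancement; area (km^2)
T1 = N1 / (rate_4km * eta1 * A1)         # Eq. (3)
print(f"d(4 km crater) ~ {d4:.0f} m,  T1 ~ {T1/1e6:.0f} Myr")   # ~300 Myr
```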
What factors could conspire to substantially increase our estimates of these ages? They could be increased if either $`N_{\mathrm{crat},i}`$ were larger, or $`\dot{N}`$ were smaller. However, because the crater counts are complete at large sizes, it is unlikely that $`N_{\mathrm{crat},i}`$ can be substantially increased, particularly for Area 1, the oldest of the 4 units (with the most statistically robust crater counts). Of course, an undercount could have occurred if viscous relaxation or escape erosion has removed significant numbers of craters over time. But because the Strom et al. (1990) crater counts rely only on fresh craters, and neglect degraded ones (of which there are few if any known, a fact which itself argues for a recent resurfacing), we believe that it is unlikely this is an important factor. As for reducing $`\dot{N}`$, there is the caveat noted above that LD97’s impact flux (and that of LDZD99) carries an estimated factor of $`\sim `$4 uncertainty, which could allow $`T`$ to exceed a Gyr; however, this uncertainty is equally likely to increase $`\dot{N}`$, and thus lower $`T`$. Another alternative would be if Triton has until recently had a massive, impact-shielding atmosphere (Lunine & Nolan 1992); however, there is no evidence for this in Triton crater morphologies or size-frequency statistics. (Reported underabundances of small craters on the Galilean satellites (Chapman et al. 1998) only occur at crater sizes ($`D`$$`<`$1 km) and for processes well below Voyager resolution at Triton.) We thus conclude that our estimated ages are unlikely to be underestimated. In contrast to the difficulty of raising $`T`$, it is easy to imagine lowering Triton’s crater retention age below our nominal estimates. For example, as noted above, there must have been some contribution to $`\dot{N}`$ from the Oort Cloud. $`T`$ could also be lowered if some fraction of the crater counts were due to other sources, such as: (i) if impacting populations other than the Kuiper Belt and Oort Cloud (e.g., Neptuneocentric) dominate, (ii) if many of the craters counted are secondaries from larger craters on the unimaged parts of Triton, or (iii) if endogenic (i.e., geological) processes, rather than impacts, created many of the observed craters. Concerning the first possibility, we have already argued above against a dominant Neptuneocentric impactor population being very likely, but it is possible that, for example, a fortuitous, recent Oort Cloud shower of significant magnitude could have produced a cratering spike. The latter two possibilities are also unlikely, as the identification of impact crater morphologies on Triton’s plains units (which include Area 1) is generally clear, and secondary crater populations characteristically follow steeper size-frequency distributions (e.g., Melosh 1989). A more serious matter concerns the overall KB cratering rate. LDZD99’s revision of LD97 included both longer integration times (for better averages) and a comparison of computed impact rates (direct counts) with those estimated by means of Öpik’s equations from the modeled ensemble of ecliptic comets (for comet terminology, see Levison). This resulted in a factor of $`\sim `$3.5 reduction in impact rates relative to LD97. LDZD99’s new impact rate estimates can be turned into cratering rates by calibrating the modeled comet population against active, visible ecliptic comets, estimating the lifetime of the activity (which yields the ratio of active to extinct comets), and estimating a minimum diameter (and mass) for visible comets. Such a procedure was in fact exploited by Zahnle et al. (1998) in their systematic study of cratering rates on the Galilean satellites. In this work Zahnle et al.
(1998) estimated that bombardment in the jovian system is dominated by Jupiter-family, ecliptic comets (JFCs), both active and extinct, and at a rate lower than but within a factor of two of that estimated by Shoemaker (1996). Shoemaker’s estimate, obtained using Öpik’s equations, was dominated by extinct JFCs, and was based on an observed population of asteroidal bodies in JFC-like orbits. The problem is this: if the Zahnle, Dones, & Levison (1998) estimate is recalibrated to LDZD99, then their cratering rates on the Galilean satellites fall by a factor of 3.5 and become ∼6 times less than Shoemaker’s (1996) estimate for extinct JFCs alone. We are skeptical that Shoemaker’s cratering rates are overestimated to such a degree, especially as the logical chain from observed asteroid orbits and magnitudes to crater production rates on the Galilean satellites is a short one. There are cratering rate estimates that are, conversely, much lower than even LDZD99 (e.g., Neukum et al. 1998), but these are based on the assumption that the Gilgamesh basin on Ganymede is the same age as the Orientale basin on the Moon, and otherwise ignore observations of present-day comets and asteroids. We discount these latter estimates. Our view is that Shoemaker’s estimates indicate that the calibration in LDZD99 is probably low, and that the true cratering rate may be higher than we obtained above by a factor of up to ∼6. If so, then all of the terrain ages derived above may also be overestimates by a factor of several. In particular, the age of Triton’s leading hemisphere plains (Area 1) may be of order 50 Myr, and the age of the young volcanic plains on Triton (Area 2) may be of order 40 Myr.

5. DISCUSSION

What are the implications of our results vis-à-vis Triton’s activity? To begin, let us consider the time-averaged volumetric resurfacing rate on Triton, $`\dot{V}_{TR}`$. A characteristic depth of several hundred meters is required to overtop the rims of the largest craters seen on Triton, and indeed, to bury most of the topographic structures observed (Croft et al. 1995). We assume a global resurfacing depth of 100 m over a (conservative) timescale of 300 Myr, which gives a characteristic volumetric resurfacing rate on Triton of $`\dot{V}_{TR}`$=0.01 km<sup>3</sup> yr<sup>-1</sup> (2.5$`\times `$10<sup>8</sup> yrs/$`T`$). Based on uncertainties in the required resurfacing depth and $`T`$, it is not implausible that the actual value of $`\dot{V}_{TR}`$ has been or is a factor of several times higher. Regardless, this resurfacing rate is far higher than what can be supported by the small-scale plume vents seen by Voyager, and indicates a far more active world in the geologically recent past than has been previously appreciated. This conservative, 0.01 km<sup>3</sup> yr<sup>-1</sup> resurfacing rate also exceeds the escape-loss erosion (Strobel & Summers 1995) and aeolian transport (Yelle et al. 1995) rates on Triton by two orders of magnitude. We therefore conclude that geologic processes are indeed the dominant surface modification process operating on a global scale on Triton (Croft et al. 1995). Now consider Triton’s volumetric resurfacing rate, $`\dot{V}_{TR}`$, in comparison to other bodies. The lunar resurfacing rate during the active, mare-filling epoch was also ∼0.01 km<sup>3</sup> yr<sup>-1</sup> (Head et al. 1992). The current-epoch volcanic resurfacing rates on the Earth (Head et al. 1992), Venus (Bullock et al. 1993; Basilevsky et al.
1997) and Io (Spencer & Schneider 1996) are estimated to be ∼4 km<sup>3</sup> yr<sup>-1</sup>, ∼0.1–0.4 km<sup>3</sup> yr<sup>-1</sup>, and ∼40 km<sup>3</sup> yr<sup>-1</sup>, respectively; the terrestrial rate excluding plate boundaries is ∼0.3–0.5 km<sup>3</sup> yr<sup>-1</sup> (Head et al. 1992). These various comparisons show that Triton clearly appears to be more active than any other solid body in the outer solar system, except the tidally heated satellites Io and Europa.<sup></sup> We must note that Titan’s volumetric resurfacing rate is unknown at present due to its opaque atmosphere. If Triton has been substantially internally active in the geologically recent past, it is natural to imagine that Triton is still active today (or else the geologic engine would have just run out, causing the Voyager observations to have occurred at a “special time”). We therefore now consider the question of how Triton, which lacks any significant present-day tidal forcing and has a radius of just 1350 km, could maintain geologic activity 4.5 Gyr after its formation. We discuss two ways in which this could have occurred. First, Triton’s own internal engine, powered by radiogenic energy release alone, may after 4.5 Gyr still generate mantle temperatures exceeding 200 K (Stevenson & Gandhi 1990; McKinnon, Lunine, & Banfield 1995). Such conditions are possibly significant enough to power widespread, low-temperature cryovolcanism that accounts for the recent resurfacing. This cryovolcanism could in principle also be related to Triton’s observed plume vents (Kirk et al. 1995) and global color changes (Buratti et al. 1999). Alternatively, it is possible that Triton’s recent geologic activity is instead due to residual tidal heat resulting from a late-epoch capture into Neptunian orbit. This scenario would imply that Triton was a resident of the EKB or SKB until relatively recently (i.e., within the last Gyr), and as discussed above, that it likely would then have possessed an atmosphere until even more recently. This scenario would in turn favor Triton’s capture by collision with an original satellite (Goldreich et al. 1989), rather than by means of gas drag in a proto-Neptunian nebula (McKinnon & Leith 1995). Nevertheless, the a priori likelihood of such an event late in solar system history, and from such a depleted reservoir as the present-day EKB or SKB, is low. While a late capture is not impossible, it might seem simpler to accept that Triton is big enough, and composed of mobile enough ices, to be geologically active, as in the first scenario sketched above. Given the accumulating evidence for warmth and activity inside the Galilean satellites as revealed by the Galileo mission (e.g., McKinnon 1997), this should not be seen as so surprising. Perhaps Triton is telling us that somewhat smaller icy bodies, such as Pluto, can also remain geologically active at late times.

6. CONCLUSIONS

Combining Voyager-derived Triton crater counts and improved cratering flux estimates for Triton, we have derived the following findings: 1. The impactor population reaching Triton today is most probably dominated by the Kuiper Belt. 2. Triton’s extant surface craters require an impactor flux that contains a substantial population of sub-km impactors; plausible 0.1 km to 10 km impactor populations appear to exhibit power-law slope size indices in the range –2.5 to –3, with the steeper slope being more likely. 3.
Findings 1 and 2 together imply a strong (if circumstantial) case for a significant, unseen population of km-scale and sub-km scale bodies in the Kuiper Belt, as predicted by both dynamical and collisional models (see Weissman & Levison 1997; Farinella, Davis, & Stern 2000). 4. Unless the areas of Triton imaged by Voyager are unrepresentative of the object as a whole, Triton’s global average surface age may be of order 100 Myr, though older ages cannot be formally ruled out. Regardless, this implies surface ages for the imaged units that are at least a factor of 2, and perhaps over a factor of 10, younger than the 1 Gyr derived at the time of the Voyager flyby (Smith et al. 1989). Even if unimaged terrains are more heavily cratered than the terrains seen by Voyager, the units already mapped indicate very recent resurfacing over large regions of Triton. 5. As such, Triton appears to have been active throughout at least 90%, and perhaps over 98%, of the age of the solar system. These estimates are conservative, in that some dateable units on Triton may well be significantly less than 100 Myr old. It is plausible that Triton’s internal engine still supports ongoing activity capable of generating large-scale (perhaps episodic) resurfacing. 6. Triton’s high rate of resurfacing may indicate that its capture and subsequent tidally driven thermal catastrophe occurred relatively recently; alternatively, the high rate of resurfacing may imply that we understand less than had been thought about the interiors of icy objects like Triton and Pluto. 7. Because the derived time-averaged volumetric resurfacing rate, $`\dot{V}_{TR}`$, exceeds 0.01 km<sup>3</sup> yr<sup>-1</sup>, geologic processes are clearly the dominant large-scale surface modification process operating on Triton. 8. Triton’s inferred volumetric resurfacing rate exceeds that of all other satellites in the solar system except Io, Europa, and possibly Titan (whose rate is unknown). The time-averaged resurfacing rate at late epochs is comparable to or exceeds the lunar resurfacing rate during the Moon’s active, mare-filling era.

ACKNOWLEDGEMENTS

We thank Mark Bullock, Clark Chapman, Dan Durda, Hal Levison, Paul Schenk, and Bob Strom for useful discussions, and Kevin Zahnle for comments on an earlier version of this manuscript. We further acknowledge helpful comments from an anonymous referee. This research was supported by the NASA Origins of Solar Systems (SAS, WBM) and the NASA Planetary Geology and Geophysics (WBM) programs.

REFERENCES

Basilevsky, A.T., Head, J.W., Schaber, G.G., & Strom, R.G. 1997, in Venus II, ed. S.W. Bougher, D.M. Hunten, & R.J. Phillips (Tucson: University of Arizona Press), 1047
Bullock, M.A., Grinspoon, D.H., & Head, J.W. 1993, GRL, 20, 2147
Buratti, B.J., Hicks, M.D., & Newburn, R.L., Jr. 1999, Nature, 397, 219
Chapman, C.R., Merline, W.J., Bierhaus, B., Brooks, S., & The Galileo Imaging Team 1998, LPSC, XXIX, #1927
Croft, S.K., Kargel, J.S., Kirk, R.L., Moore, J.M., Schenk, P.M., & Strom, R.G. 1995, in Neptune and Triton, ed. D.P. Cruikshank (Tucson: University of Arizona Press), 879
Davis, D.R., & Farinella, P. 1997, Icarus, 125, 50
Duncan, M.J., & Levison, H.F. 1997, Science, 276, 1670
Farinella, P., Davis, D.R., & Stern, S.A. 2000, in Protostars and Planets IV, ed. V. Mannings.
(Tucson: University of Arizona Press), in press
Goldreich, P., Murray, N., Longaretti, P.Y., & Banfield, D. 1989, Science, 245, 500
Head, J.W., Crumpler, L.S., Aubele, J.E., Guest, J.E., & Saunders, R.S. 1992, JGR, 97, 13153
Holman, M.J., & Wisdom, J. 1993, AJ, 105, 1987
Kirk, R.L., Soderblom, L.A., Brown, R.H., Kieffer, S.W., & Kargel, J.S. 1995, in Neptune and Triton, ed. D.P. Cruikshank (Tucson: University of Arizona Press), 849
Levison, H.F. 1997, in Completing the Inventory of the Solar System, ed. T.W. Rettig & J.M. Hahn (San Francisco: ASP), 173
Levison, H.F., & Duncan, M.J. 1993, ApJ, 406, L35
——–. 1997, Icarus, 127, 13 (LD97)
Levison, H.F., Duncan, M.J., Zahnle, K., & Dones, L. 1999, Icarus, submitted (LDZD99)
Lunine, J.I., & Nolan, M. 1992, Icarus, 100, 221
McKinnon, W.B. 1997, Nature, 390, 23
McKinnon, W.B., & Chapman, C.R. 1986, in Satellites, ed. J.A. Burns & M.S. Matthews (Tucson: University of Arizona Press), 492
McKinnon, W.B., & Kirk, R.L. 1999, in Encyclopedia of the Solar System, ed. P.R. Weissman, L.-A. McFadden, & T.V. Johnson (San Diego: Academic Press), 405
McKinnon, W.B., & Leith, A.C. 1995, Icarus, 118, 392
McKinnon, W.B., & Schenk, P.M. 1995, GRL, 22, 1829
McKinnon, W.B., Lunine, J.I., & Banfield, D. 1995, in Neptune and Triton, ed. D.P. Cruikshank (Tucson: University of Arizona Press), 807
Melosh, H.J. 1989, Impact Cratering: A Geologic Process (New York: Oxford University Press)
Neukum, G., Wagner, R., Wolf, U., Ivanov, B.A., Head, J.W., Pappalardo, R.T., Klemaszewski, J.E., Greeley, R., & Belton, M.J.S. 1998, LPSC, XXIX, #1742
Schenk, P.M. 1991, JGR, 96, 15635
——–. 1992, in Lunar and Planetary Science XXIII (Houston: Lunar Planet. Inst.), 1215
Shoemaker, E.M. 1996, in Europa Ocean Conference Abstracts (San Juan Capistrano Res. Inst.), 65
Shoemaker, E.M., & Wolfe, R.F. 1982, in Satellites of Jupiter, ed. D. Morrison (Tucson: University of Arizona Press), 277
Smith, B.A., & the Voyager Imaging Team 1989, Science, 246, 1422
Spencer, J.R., & Schneider, N.M. 1996, AREPS, 24, 125
Stern, S.A. 1995, AJ, 110, 856
Stern, S.A., & McKinnon, W.B. 1999, LPSC, XXX, #1766
Stevenson, D.J., & Gandhi, A.S. 1990, in Lunar and Planetary Science XXI (Houston: Lunar Planet. Inst.), 1202
Strobel, D.F., & Summers, M.E. 1995, in Neptune and Triton, ed. D.P. Cruikshank (Tucson: University of Arizona Press), 991
Strom, R.G., Croft, S.K., & Boyce, J.M. 1990, Science, 250, 437
Weissman, P.R., & Levison, H.F. 1997, in Pluto and Charon, ed. S.A. Stern & D.J. Tholen (Tucson: University of Arizona Press), 559
Weissman, P.R., & Stern, S.A. 1994, Icarus, 111, 378
Yelle, R.V., Lunine, J.I., Pollack, J.B., & Brown, R.H. 1995, in Neptune and Triton, ed. D.P. Cruikshank (Tucson: University of Arizona Press), 1107
Zahnle, K., Dones, L., & Levison, H.F. 1998, Icarus, 136, 202

FIGURE CAPTIONS

FIG. 1.—Crater diameter estimates for Triton from equations (1) and (2), as a function of both impactor diameter and velocity. We take $`g`$=78 cm s<sup>-2</sup> for Triton. We take a minimum impactor velocity $`v_{min}`$=2.3 km s<sup>-1</sup>, as set by the root-sum-square of Triton’s escape speed and the difference between Triton’s orbital speed and the escape speed from Triton’s orbit. We take a maximum impactor velocity $`v_{max}`$=11.6 km s<sup>-1</sup>, as set by the root-sum-square of Triton’s escape speed and the sum of Triton’s orbital speed and the maximum impactor velocity at Triton’s orbit (which is the root-sum-square of the escape speed from Triton’s orbit and, from LD97, the maximum encounter speed at Neptune’s sphere of influence). Because Triton’s surface is icy (e.g., Croft et al.
1995; McKinnon & Kirk 1999), we assume values of $`A`$ and $`\alpha `$ appropriate for water ice, i.e., 0.20 and 0.65, respectively (McKinnon & Schenk 1995). Model results were computed from Equations (1) and (2) assuming a uniform distribution of impact velocities between $`v_{min}`$ and $`v_{max}`$. The subtle upward curvature in the impactor size vs. crater size relation is due to the diameter correction for complex craters given in Equation (2b). The four panels show various cases of ($`\delta `$,$`\rho `$) that bound the probable range of uncertainty with respect to Triton and cometary impactors. The two bold, horizontal lines represent the smallest crater size counted and the largest crater seen on Triton, respectively (Strom, Croft, & Boyce 1990).

FIG. 2.—Comparison of the differential size-frequency crater distribution in Area 1 on Triton (solid black line) to model cases with varying impactor differential size-distribution power-law slopes $`b`$ (green=–2.0, red=–2.5, and blue=–3.0). In all these model cases the minimum crater diameters shown are for $`D`$=2.8 km, which matches the smallest crater sizes counted in Area 1 (Strom, Croft, & Boyce 1990). To define absolute impact rates for this simulation, we normalized each model case to the integral number of craters in the Area 1 dataset (181). The four panels display the same suite of four ($`\delta `$,$`\rho `$) cases as in Figure 1.
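As a rough illustration of the normalization described in the Fig. 2 caption, the sketch below scales a pure power-law differential crater size distribution so that its integral above $`D`$=2.8 km equals the 181 craters of the Area 1 dataset; the impactor-to-crater scaling of Equations (1)–(2) is omitted, and the upper diameter cutoff is an arbitrary placeholder rather than a value from the paper.

```python
# Sketch of the Fig. 2-style normalization: dN/dD = k * D**b is scaled so
# that the integral number of craters above D_MIN matches the Area 1
# dataset. D_MAX is an arbitrary cutoff, not a value from the paper, and
# the crater-scaling step of Eqns. (1)-(2) is skipped.

D_MIN, D_MAX, N_TOTAL = 2.8, 30.0, 181.0

def normalize(b):
    """Return k such that the integral of k*D^b over [D_MIN, D_MAX] is N_TOTAL."""
    integral = (D_MAX**(b + 1.0) - D_MIN**(b + 1.0)) / (b + 1.0)   # b != -1
    return N_TOTAL / integral

def n_greater_than(d, b):
    """Cumulative N(>d) for the normalized power law."""
    k = normalize(b)
    return k * (D_MAX**(b + 1.0) - d**(b + 1.0)) / (b + 1.0)

for b in (-2.0, -2.5, -3.0):
    # steeper slopes put proportionally fewer craters at large diameters
    print(f"b = {b:4.1f}: N(>5 km) = {n_greater_than(5.0, b):5.1f}")
```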
# A study of the core of the Shapley Concentration: IV. Distribution of intercluster galaxies and supercluster properties based on observations collected at the European Southern Observatory, La Silla, Chile. ## 1 Introduction Superclusters of galaxies are the largest coherent and massive structures known in the Universe. These objects are crucial in cosmology because their extreme characteristics set topological and physical constraints on the models for galaxy and cluster formation. For this reason, there have been great efforts in order to estimate the extension, shape, mass and dynamical state of these entities. The most direct way to infer these quantities is to perform a redshift survey in order to map the galaxy distribution in the structure and in its surrounding field, but this method requires a large amount of telescope time and limits the number of targets. Up to now only a few such objects have been studied in detail: the Great Wall (Geller & Huchra 1989; Ramella et al. 1992; Dell’Antonio et al. 1996), the Perseus–Pisces (Haynes & Giovanelli 1986), the Hercules (Barmby & Huchra 1998), and the Corona Borealis (Postman, Geller & Huchra 1988; Small et al. 1998a) superclusters. Hudson (1993a) computed the mean overdensity of the superclusters found in his redshift compilation ($`cz<8000`$ km/s) and estimated that the mean galaxy density excess for these structures is 3–5, on scales of the order of 30–50 h<sup>-1</sup> Mpc (see also Chincarini et al. 1992). Another way to study superclusters is to detect structures delineated by the distribution of clusters. This method does not require large redshift surveys and relies on the assumption that clusters and galaxies trace the underlying matter distribution in the same way. The Shapley Concentration (Scaramella et al. 1989) stands out as the richest system of Abell clusters in the list of Zucca et al. (1993), at every density excess. In particular, at a density contrast of $`2`$, it contains 25 members (at a mean velocity of 14000 km/s) within a box of comoving size ($`\alpha \times \delta \times D`$) $`32\times 55\times 100`$ h<sup>-1</sup> Mpc (hereafter $`h=H_o/100`$). As a comparison, at the same density contrast the Great Attractor, which is the largest mass condensation within $`80`$ h<sup>-1</sup> Mpc (Lynden–Bell et al. 1988), has only 6 members, while Corona Borealis and Hercules are formed by 10 and 8 clusters, respectively. Scaramella et al. (1989) suggested that this supercluster could be responsible for a significant fraction of the acceleration acting on the Local Group, by adding its dynamical pull to that of the Great Attractor, which lies approximately in the same direction on the sky at a distance of $`4000`$ km/s. Further studies based on the dipole of the distribution of the Abell clusters (Scaramella et al. 1991, Plionis & Valdarnini 1991) confirmed the suggestion that large scales are important for cosmic flows. Attempts to determine the density excess and the mass of the Shapley Concentration have been made by Raychaudhury et al. (1991), Scaramella et al. (1994), Quintana et al. (1995) and Ettori et al. (1997): these authors obtained mass estimates of ∼10<sup>16</sup> h<sup>-1</sup> M<sub>⊙</sub>. These works are essentially based on estimates of the cluster masses, neglecting the contribution of the intercluster matter. Quintana et al.
(1995) also gave a value for the total mass of the supercluster of $`5\times 10^{16}`$ h<sup>-1</sup> M<sub>⊙</sub> on a scale of $`17`$ h<sup>-1</sup> Mpc, using the virial mass estimator applied to the distribution of clusters. They computed the velocity dispersion and the virial radius using the mean velocity and the two-dimensional positions of clusters: this result could be biased if the physical elongation along the line of sight of the supercluster is not negligible. Studying the cluster distribution, at high density contrast the Shapley Concentration is characterized by three dense complexes of interacting clusters, dominated by A3558, A3528 and A3571, respectively. At lower density contrast these three systems connect to each other through a large cloud of clusters. The clusters in the A3558 and A3528 complexes appear to be aligned perpendicularly to the line of sight and approximately at the same distance, thus suggesting the presence of an elongated underlying structure. However, when the whole supercluster is considered, it appears to be extended along the line of sight (see e.g. figure 4 in Tully et al. 1992 and Zucca et al. 1993). The distribution of clusters inside this supercluster is quite well studied; on the contrary, little is known about the distribution of the intercluster galaxies. Very recently, Drinkwater et al. (1999) performed a redshift survey limited to the magnitude $`R<16`$ in the southern central part of the Shapley Concentration, finding evidence of sheets of galaxies connecting the clusters. The study of the properties of intercluster galaxies is very important in order to assess the physical reality and extension of the structure and to determine if galaxies and clusters trace the matter distribution in the same way. In this context, we are carrying out a long-term multiwavelength study of the central part of the Shapley Concentration, studying both cluster and intercluster galaxies. In this paper we present the results of an intercluster galaxy redshift survey, from which we obtained ∼450 new velocity determinations, and the analysis of the whole supercluster properties. The plan of the paper is the following: in Sect. 2 we describe the results of the intercluster galaxy survey and in Sect. 3 we present the characteristics of the cluster galaxy samples. In Sect. 4 we analyze the galaxy distribution and in Sect. 5 we describe the methods adopted to recover the density profile and the mass of the whole structure. In Sect. 6 we derive the overdensities associated with intercluster and cluster galaxies, we estimate the mass of the supercluster and its dynamical state and we discuss our results. Finally, in Sect. 7 we provide the summary. In the following, the values $`H_o=100`$ km/s Mpc<sup>-1</sup> and $`q_o=0.5`$ will be adopted. ## 2 The intercluster galaxy sample As pointed out above, the Shapley Concentration is very rich in clusters with respect to the other known structures. The aim of the intercluster survey is therefore to study how the galaxies trace this supercluster and what fraction of the mass of the supercluster lies outside the clusters. ### 2.1 The catalogue Figures 1a, b, c show the isodensity contours for the galaxies in the $`b_J`$ magnitude range 17–19.5 from the COSMOS/UKST galaxy catalogue (Yentis et al. 1992) in the three plates 443, 444 and 509, which cover the central part of the Shapley Concentration.
In order to check the existence of possible zero point photometric errors in our data, we compared the COSMOS magnitudes with the CCD sequences of Cunow et al. (1997), available for plates 443 and 444. After having applied the correction for the non-linearity of the COSMOS magnitude scale proposed by Lumsden et al. (1997), we found $`<b_J-B_{CCD}>=0.02\pm 0.06`$ mag using 12 galaxies in the $`b_J`$ range 16–18.4. Since no CCD sequences are available for plate 509, we checked its photometric zero point using the galaxies in the region overlapping the adjacent plate 444, finding that the two magnitude scales are in agreement. We also checked the photometric zero point for ESP survey galaxies (Vettolani et al. 1997), which represent our “field” normalization (see Sect.5.1), using the CCD sequences of Cunow (1993). We found, using 13 galaxies, $`<b_J-B_{CCD}>=0.03\pm 0.09`$ mag. These results show, on the one hand, that there is no systematic zero point shift in the photometric scale of our plates; on the other hand, they confirm the consistency of our data with respect to the ESP data, which we will use as normalization in the overdensity estimates (see Sect.5.1). Finally, we found that a color correction for the conversion from $`b_J`$ to $`B_{CCD}`$ magnitudes is not required: this fact was already noted analysing independent CCD sequences on ESP galaxies (Garilli et al., in preparation). The data in Figure 1 have been binned in $`2\times 2`$ arcmin pixels and then smoothed with a Gaussian with a FWHM of $`6`$ arcmin. For the Abell clusters present in the plates, circles of one Abell radius have been drawn (dashed circles). The magnitude range of galaxies in the figure has been chosen in order to enhance the features at distances equal to or greater than that of the Shapley Concentration. In fact, at $`14500`$ km/s the apparent magnitude of an $`M^{*}`$ galaxy is ∼16.2. The solid circles in Figure 1 correspond to the field of view of the multifiber spectrograph MEFOS (one degree of diameter, $`0.785`$ square degrees), which carries the fibers on rigid arms and allows the simultaneous observation of 29 galaxies. This number would match the galaxy density in the $`b_J`$ magnitude range 17–18.2: unfortunately, due to the constraints which avoid arm collisions, only part of the objects are observable. Therefore, in order not to waste fibers, we have chosen to extend the magnitude range to 17.0–18.8: in this range, the average number of galaxies per field is ∼70. In Table 1 we report the coordinates of the centre and the corresponding UKSTJ plate of the 26 observed fields. Note that in choosing the pointing directions, we avoided clusters, in order to observe mainly true “field” objects, which we hereafter refer to as intercluster galaxies. In the upper right panel of Figure 1 the relative positions of the observed fields are shown, together with the positions (small circles) of the observed fields on the cluster complexes (see Sect. 3). ### 2.2 Observations and data reduction Spectroscopic observations were performed at the 3.6m ESO telescope at La Silla, equipped with the MEFOS multifiber spectrograph (Bellenger et al. 1991; Felenbok et al. 1997) in the nights of 8-9-10 April 1994. The MEFOS multifiber spectrograph was mounted at the prime focus of the telescope and allowed the simultaneous observation of $`29`$ scientific targets. Each fiber, whose projected diameter on the sky is $`2.6`$ arcsec, was coupled with another one devoted to a simultaneous sky observation.
This has in principle two advantages: the sky spectrum is measured very close to the galaxy under consideration and the exposure is taken at the same time. Moreover, after having checked that background spatial variations are not present, it is possible to average the 29 sky spectra in order to obtain a “mean” sky with an enhanced signal–to–noise ratio. As pointed out in Bardelli et al. (1994), the sky subtraction procedure in fiber spectroscopy requires the determination of the relative fiber transmissions, which we estimated on the basis of the \[OI\]$`\lambda `$ 5577 sky line. MEFOS also allowed “beam-switching”, i.e. the possibility of swapping the object and the nearby sky fiber. Dividing the observation into two exposures and applying this option, galaxy and sky fall on the same fiber at different times: this would allow a direct subtraction of the sky from the galaxy spectrum. However, even if this procedure avoids the problems caused by the different fiber-to-fiber transmissions, it can fail to correctly subtract the sky when the latter changes significantly between the two exposures. We found that this happened at large zenithal angles or at the beginning and at the end of the night. However, we used both methods and we always considered the procedure which gives the higher number of measured radial velocities in each field. The spectra, obtained with a CCD TEK512 CB and the ESO grating $`\mathrm{\#}15`$, have a resolution of $`12`$ Å and a pixel size of $`4.6`$ Å. They were cross–correlated with a set of 8 stellar and 8 galaxy templates using the XCSAO program of Kurtz et al. (1992). For spectra showing emission lines, we used the EMSAO program in the same package. Details of the reductions and the cross-correlation are given in Bardelli et al. (1994). ### 2.3 The redshift sample We observed a total of 685 objects, instead of the maximum number of 754 in principle allowed by the number of fibers (29 $`\times `$ 26 fields), due to collision constraints in the robotic arms of the MEFOS spectrograph. Among these 685 spectra, 85 were not useful for a redshift determination ($`12\%`$ of the total), due to a poor signal-to-noise ratio. Among the 600 good spectra, 158 objects turned out to be stars ($`26\%`$ of the total), leaving us with 442 galaxy velocities. In Table 2, we list the galaxies with velocity determination. Columns (1), (2) and (3) give the right ascension (J2000), the declination (J2000) and the $`b_J`$ apparent magnitude of the object, respectively. Columns (4) and (5) give the heliocentric velocity ($`v=cz`$) and the internal cross correlation error: this value has to be multiplied by a factor of 1.6–1.9 in order to find the true statistical error (see Bardelli et al. 1994). The code “emiss” in column (6) denotes the velocities obtained from emission lines. Finally, in the following analysis we also included 69 velocities found in the literature (not reported in Table 2) for objects in our surveyed region and therefore our total sample has 511 velocities. The average redshift completeness of the sample in the magnitude range 17–18.8 is $`25\%`$. ## 3 The cluster galaxy sample ### 3.1 The A3558 cluster complex Figure 1b clearly shows the presence on the plate 444 of an elongated structure formed by the ACO clusters A3558, A3562 and A3556, which is located at the geometrical centre of the Shapley Concentration. This complex has been extensively studied in the optical (Bardelli et al. 1994, 1998a, 1998b), in the X-ray (Bardelli et al. 1996) and in the radio (Venturi et al.
1997, 1998) wavelengths. We found that this structure may be a cluster-cluster collision seen just after the first core-core encounter. In order to determine the excess in number of galaxies, we used the redshift sample presented in Bardelli et al. (1998a), consisting of 714 velocities over an area of $`3^o.12\times 1^o.4`$, to which we added the 60 new redshifts published by the ENACS survey (Katgert et al. 1998). In order to maximize the completeness, we restricted our analysis to the well sampled region corresponding to the OPTOPUS fields (see Bardelli et al. 1998a), covering an area of about $`2.7`$ square degrees. In this sample, there are 723 galaxies with velocity determination, out of a total of 1582 ($`46\%`$) to the limit $`b_J=19.5`$. Restricting the sample to the magnitude range 17–18.8, the completeness is $`64\%`$ (456/711). ### 3.2 The A3528 cluster complex Another remarkable cluster complex is found at the westernmost part of plate 443 (Figure 1c) and is formed by the ACO clusters A3528, A3530 and A3532. We performed a redshift survey in this structure with the OPTOPUS multifibre spectrograph, obtaining 581 new velocities in an area of $`3^o.38\times 1^o.72`$, including also the cluster A3535 (Bardelli et al., in preparation). After a search in the literature, we found 79 additional velocities from Quintana et al. (1995) and ENACS (Katgert et al. 1998), leading to a final sample of 660 redshifts. As done for the A3558 complex, we restricted the sample to the area sampled by the 12 OPTOPUS fields (about $`2.7`$ square degrees). In this area the completeness is $`46\%`$ (645/1399) to $`b_J=19.5`$ and $`72\%`$ (475/656) in the magnitude range 17–18.8. ### 3.3 Other clusters In the region under consideration, in addition to the clusters cited above, there are several other Abell/ACO clusters. Among them, A3537 and A3565 are part of the Great Attractor, while A1736, A3554, A3555, A3559 and A3560 are members of the main structure of the Shapley Concentration. The cluster A3552, westward of A3556, was not previously included in this supercluster by Zucca et al. (1993) because of the value of its estimated redshift: now the available redshifts for this cluster (Quintana et al. 1995) put it in the distance range of the Shapley Concentration. An ambiguous case is that of A3557, which has two velocity peaks: only one of them is compatible with the supercluster. Among the remaining clusters, A1757, A1771, A3540, A3546 and A3551 may belong to another structure at $`30000`$ km/s (S300, see below). Finally, the clusters A3531 and A3549 are at intermediate distance between the Shapley Concentration and S300, while A1727 and A3544 are more distant objects, on the basis of their estimated redshifts. Among the clusters which are part of the Shapley Concentration, the only one which has been sampled enough to be useful in our analysis is A1736, clearly visible in the lower right corner of plate 509 (Figure 1a). The redshifts in A1736 were taken from Dressler & Shectman (1988) and Stein (1996), and the final sample contains 111 velocities (54 in the magnitude range 17–18.8). For the others, the redshift data found in the literature are too sparse to allow density reconstructions, therefore we chose to neglect these clusters. We also ignore the more distant clusters, because of the lack of data. Therefore, all the following results for the Shapley Concentration and S300 have to be regarded as lower limits as far as the cluster contribution is concerned.
## 4 Analysis of the galaxy distribution The total velocity sample we used contains 2057 velocities. In Figure 2a, we plot the wedge diagram of the galaxies with redshifts in the plates 444 and 443. The two cluster complexes dominated by A3558 (on the left) and by A3528 (on the right) are clearly visible: these two structures appear to be connected by a bridge of galaxies, similar to the Coma-A1367 system, the central part of the Great Wall. The scale of this system is $`23`$ h<sup>-1</sup> Mpc and it is comparable to that of Coma-A1367 ($`21`$ h<sup>-1</sup> Mpc). Note the presence of two voids at $`20000`$ km/s and $`30000`$ km/s in the easternmost half of the wedge, labelled as V1 and V2 respectively. Two other voids (V3 and V4) are visible in the westernmost part of the plot: in particular, void V3 appears to be delimited by two elongated features (S300a and S300b) which appear to “converge” into a single feature at right ascension $`13^h`$ (S300c). We refer to this structure as S300: as shown below, the overdensity corresponding to this excess is remarkably high, even if it is not clear if all these features are part of a single structure. Figure 2b shows the distribution of all the galaxies (plates 443, 444 and 509) of the sample in the velocity range \[0–50000\] km/s. Note that the plotted sky coordinate is the declination. The apparent “hole” in the galaxy distribution in the middle of the wedge ($`\delta \sim -27^o`$) is due to a poor sampling of the southern part of plate 509 and to the fact that the literature data for A1736 are less deep than those in our survey. ### 4.1 The structure of the Shapley Concentration The average velocity of the observed intercluster galaxies appears to be a function of the ($`\alpha `$, $`\delta `$) position, ranging from 12289 km/s for the galaxies in plate 509 to 15863 km/s for the galaxies in plate 443. This fact was also noted by Quintana et al. (1995) as a relationship between the cluster radial velocities and the right ascension. Inspection of the trend of the average velocity as a function of position suggests that it can be reasonably described by a plane in the three–dimensional space ($`\alpha `$, $`\delta `$, $`v`$): $$a(x-x_m)+b(y-y_m)+c(z-z_m)=0,$$ $`(1)`$ where $`x`$ $`=`$ $`{\displaystyle \frac{v}{100}}\mathrm{cos}(\delta -\delta _m)\mathrm{sin}(\alpha -\alpha _m)\mathrm{h}^{1}\mathrm{Mpc}`$ $`y`$ $`=`$ $`{\displaystyle \frac{v}{100}}\mathrm{sin}(\delta -\delta _m)\mathrm{h}^{1}\mathrm{Mpc}`$ $`z`$ $`=`$ $`{\displaystyle \frac{v}{100}}\mathrm{cos}(\delta -\delta _m)\mathrm{cos}(\alpha -\alpha _m)\mathrm{h}^{1}\mathrm{Mpc}`$ and the subscript $`m`$ indicates the average value of the variable. Each galaxy has a distance from the plane $$d=\frac{a(x-x_m)+b(y-y_m)+c(z-z_m)}{\sqrt{a^2+b^2+c^2}}\mathrm{h}^{1}\mathrm{Mpc}.$$ $`(2)`$ Minimizing the sum of the squared distances of galaxies from the plane, we find $$a=0.9,b=1.4,c=1.1.$$ In Figure 3 we show the histograms of the distances from the fitted plane. In panel a) the distribution of distances of the whole intercluster sample is shown: the peak corresponding to the Shapley Concentration is clearly visible. The secondary bump at $`d\sim 20`$ h<sup>-1</sup> Mpc corresponds to the structure at $`11000`$ km/s found by Drinkwater et al. (1999). This plane, which describes the ridge of the distribution of the galaxies, is a reasonably good fit over the entire extension of our survey, as shown by the distance histograms of galaxies divided among the three surveyed plates (Figure 3b, c, d).
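The fit of Equations (1)–(2) amounts to a total-least-squares plane through the galaxy distribution. The paper does not spell out the minimization algorithm, so the SVD-based solution sketched below is just one standard way to reproduce it; input arrays are assumed to hold coordinates in radians and $`cz`$ in km/s.

```python
import numpy as np

# Sketch of the plane fit of Eqns. (1)-(2): map (alpha, delta, v) to the
# pseudo-Cartesian coordinates of Sect. 4.1, then take the plane through
# the centroid whose normal is the direction of smallest scatter (total
# least squares via SVD). One standard approach, not necessarily the
# authors' exact implementation.

def fit_plane(alpha, delta, v):
    """alpha, delta in radians (numpy arrays), v = cz in km/s.
    Returns the unit normal (a, b, c) and the signed perpendicular
    distances d of Eqn. (2), in h^-1 Mpc."""
    r = v / 100.0                                   # h^-1 Mpc for H0 = 100
    x = r * np.cos(delta - delta.mean()) * np.sin(alpha - alpha.mean())
    y = r * np.sin(delta - delta.mean())
    z = r * np.cos(delta - delta.mean()) * np.cos(alpha - alpha.mean())
    pts = np.column_stack([x - x.mean(), y - y.mean(), z - z.mean()])
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    normal = vt[-1]                 # right singular vector, smallest value
    return normal, pts @ normal     # normal is already of unit length
```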
A Gaussian fit to the distribution of the distances around the best fit plane gives a dispersion $`\sigma =3.8`$ h<sup>-1</sup> Mpc. This dispersion is significantly narrower than what would be obtained ($`\sigma _{vel}=1011`$ km/s) by fitting a Gaussian to the velocity distribution of the galaxies in the same region. We also checked whether this representation of the galaxy distribution holds outside our surveyed region using the velocity sample of Drinkwater et al. (1999), which covers plate 444 and two southern adjacent plates (382 and 383). Although the plane parametrization seems correct, we find a shift in the mean of $`5`$ h<sup>-1</sup> Mpc in the plates 382 and 383. In Figure 4 we show the histogram of the distances from the plane of galaxies in plates 382 and 383. In Figures 3a and 4 the fitted Gaussian is superimposed on the histograms. Note that in Figure 4 the mean has been shifted by $`5.0`$ h<sup>-1</sup> Mpc. ## 5 Density excesses and masses ### 5.1 The density excess estimate In order to estimate the overdensity associated with each structure and to reconstruct the density-distance profile, it is necessary to determine the number of galaxies expected in the case of a uniform distribution, under the same observational constraints (i.e. with the same redshift incompleteness), and the volume occupied by each structure. The first step is the determination of the selection function of the survey. For each sample, we computed the expected number of galaxies in the case of uniform distribution as $$N(z)dz=\sum _iC(m_i)\int _{L_{min}(m_i,z)}^{L_{max}(m_i,z)}\varphi (L)\frac{dV}{dz}dLdz,$$ $`(3)`$ where $`V`$ is the volume, $`L_{min}(m_i,z)`$ and $`L_{max}(m_i,z)`$ are the minimum and maximum luminosity seen in the sample at the redshift $`z`$, given the apparent magnitude limits of the sample. The quantity $`C(m_i)`$ represents the incompleteness of the survey, i.e. the ratio between the number of galaxies with redshifts and the total number of objects, as a function of the apparent magnitude, and $$\varphi (L)dL=\varphi ^{*}\left(\frac{L}{L^{*}}\right)^\alpha e^{L/L^{*}}d\left(\frac{L}{L^{*}}\right)$$ $`(4)`$ is the luminosity function of field galaxies parametrized with a Schechter (1976) function. We adopted the values $`\alpha =1.22`$, $`M^{*}=19.61`$ and $`\varphi ^{*}=0.02`$ h<sup>3</sup> Mpc<sup>-3</sup> found by Zucca et al. (1997) for the ESP survey, which is based on our same photometric catalogue. We applied the correction for the non-linearity of the COSMOS magnitude scale proposed by Lumsden et al. (1997) and the extinction values of Burstein & Heiles (1984). The intercluster sample has been obtained in the magnitude range 17–18.8 and for consistency we also limited the other samples to the same magnitude range. The upper panels of Figure 5 and the upper left panel of Figure 6 show the histograms of the galaxies in the cluster and intercluster samples, with the distribution expected for the corresponding uniform sample superimposed. The lower panels of the same figures show the overdensity $`{\displaystyle \frac{N}{\overline{N}}}`$ as a function of the comoving distance, where $`N`$ is the observed number of galaxies and $`\overline{N}`$ is the number of objects expected in the volume occupied by the structure in the case of uniform distribution. The plotted errors represent Poissonian uncertainties in the observed galaxy counts and a dotted line corresponding to $`{\displaystyle \frac{N}{\overline{N}}}=1`$ has been drawn as a reference.
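The homogeneous prediction of Equations (3)–(4) can be sketched numerically as below. A Euclidean volume element, no K-correction, and a single average completeness C in place of the full $`C(m_i)`$ weighting are simplifying assumptions of this illustration, not choices stated in the text.

```python
import numpy as np

# Sketch of Eqns. (3)-(4): expected redshift distribution of a uniform,
# magnitude-limited (b_J = 17-18.8) sample with the ESP Schechter
# parameters. A Euclidean volume element, no K-correction and a constant
# completeness C replace the exact C(m_i) weighting of Eqn. (3).

ALPHA, M_STAR, PHI_STAR = -1.22, -19.61, 0.02   # ESP (Zucca et al. 1997)
C = 0.25                                        # mean completeness, Sect. 2.3

def schechter_integral(l_min, l_max, n=4096):
    """phi* * integral of (L/L*)^alpha e^(-L/L*) d(L/L*) over [l_min, l_max]."""
    dl = (l_max - l_min) / n
    l = l_min + dl * (np.arange(n) + 0.5)       # midpoint rule
    return PHI_STAR * np.sum(l**ALPHA * np.exp(-l)) * dl

def n_of_v(v, m_bright=17.0, m_faint=18.8):
    """Expected galaxies per steradian per (km/s) at cz = v (km/s)."""
    d = v / 100.0                               # comoving distance, h^-1 Mpc
    mu = 5.0 * np.log10(d * 1.0e5)              # distance modulus (d in 10 pc)
    l_faint = 10.0 ** (-0.4 * (m_faint - mu - M_STAR))   # L/L* at faint limit
    l_bright = 10.0 ** (-0.4 * (m_bright - mu - M_STAR))
    return C * (d * d / 100.0) * schechter_integral(l_faint, l_bright)

print(f"{n_of_v(14500.0):.2f} galaxies sr^-1 (km/s)^-1")
```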
The second step is the estimate of the volume occupied by the structure under consideration. While the solid angle is assumed to be given by the surveyed area, the determination of the width of the supercluster in the direction along the line of sight is more subjective. The data in the cluster complexes cannot be used for this determination, because of the presence of significant peculiar velocities due to the virialization of clusters (also known as “finger-of-God” effect), which broaden the width in velocity space. Therefore, we estimated the width of the supercluster by using only the distribution of galaxies in the intercluster survey, where the peculiar velocities are expected to be significantly smaller. Since we find that the distribution of the distances from the best fit plane of the intercluster galaxies in the Shapley Concentration can be reasonably well described by a Gaussian function (see previous Section), we considered as part of the supercluster all galaxies with distance from the plane smaller than $`\pm 2\sigma `$. Since the plane is tilted with respect to the ($`\alpha `$, $`\delta `$, $`v`$) reference frame, this choice produces a variable velocity range depending on the position. The minimum and maximum velocity ranges are $`\mathrm{\Delta }v\simeq 2400`$ km/s and $`\mathrm{\Delta }v\simeq 3100`$ km/s for fields $`\mathrm{\#}48`$ (plate 509) and $`\mathrm{\#}5`$ (plate 443), respectively. The physical width is assumed to be simply $`\mathrm{\Delta }v/H_o`$, a choice acceptable in the case of low peculiar velocities. It is impossible to have an estimate of the peculiar velocity pattern of this region, because of the difficulty of modelling the mass distribution: however, since it is reasonable to think that this region (except the clusters) is not yet virialized, we expect that galaxies are still infalling toward the centre of the structure. We could parametrize this uncertainty as done by Small et al. (1998a), who introduced the parameter $`F`$, defined as the ratio between the width of the supercluster in redshift space and in real space. In this case $`\left({\displaystyle \frac{N}{\overline{N}}}\right)_{real}=F\left({\displaystyle \frac{N}{\overline{N}}}\right)_{obs}`$. If the peculiar infall velocities (perpendicular to our fitted plane) were of the order of $`150`$ km/s, similar to those measured in the Great Wall (Dell’Antonio, Geller & Bothun 1996), the value of $`F`$ for our structure would be $`0.83`$, so that our derived overdensities could be overestimated by $`17\%`$. Given the uncertainties in the amount of peculiar velocities, in the following we prefer to present our results neglecting the effect of $`F`$. Also in the cluster complexes and in A1736 we assumed that the width of the supercluster is the same as that derived from our fit of the intercluster galaxies, even if, due to the high peculiar motions induced by the virialized state of these clusters, these structures appear significantly elongated in redshift space. In order to correct, at least partially, for this effect, we applied the following procedure. First we assigned to the supercluster all objects within $`\pm 2\sigma `$ of the fitted plane; then we considered galaxies outside this velocity range: when they were in excess with respect to the expected number, we assigned them to the supercluster. We note that this procedure is somewhat arbitrary, in particular in the presence of substructures infalling toward the clusters with high velocity.
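One way to see where $`F\simeq 0.83`$ comes from is to note that a coherent infall of 150 km/s from both sides compresses the structure in redshift space by $`2\times 150`$ km/s; this reading of the geometry is an assumption of the sketch below, not an explicit statement in the text.

```python
# Numerical check of the F parameter (Small et al. 1998a) quoted above.
# Assumed reading: coherent infall toward the plane compresses the
# structure in redshift space by 2 * v_infall / H0, so the real-space
# width exceeds the observed (redshift-space) width by that amount.

H0 = 100.0                       # km/s/Mpc (h = 1 units)
sigma = 3.8                      # h^-1 Mpc, dispersion about the plane
w_redshift = 4.0 * sigma         # adopted +/- 2 sigma width, h^-1 Mpc
v_infall = 150.0                 # km/s, Great Wall-like infall

w_real = w_redshift + 2.0 * v_infall / H0
F = w_redshift / w_real
print(f"F = {F:.3f}")            # -> 0.835, i.e. the quoted F ~ 0.83 and
                                 #    a ~17% overestimate of N/Nbar
```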
In the density profile plots these substructures may appear as secondary peaks clearly separated from the main overdensity. In these cases, it is impossible to say if the observed difference in velocity between the main and secondary peaks is due to a real spatial distance (and in this case it would not be correct to assign galaxies of the substructure to the supercluster) or to a peculiar velocity difference. In our sample there are two clear cases of this kind. In the A3528 complex, there is the clump corresponding to A3535 at $`v\simeq 20000`$ km/s, while in the A1736 cluster we find the substructure already detected by Dressler & Shectman (1988). In order to derive a conservative estimate of the supercluster overdensity, we chose to neglect these subclumps. The definition of the limits of S300 is not very clear, because on plate 443 it consists of two features (see Figure 2a), well separated by a void (labelled V3), while on plates 444 and 509 it appears as a single feature. In the plate 443 sample, the nearest peak (S300a) appears to extend from 28000 to 31000 km/s, while the velocity range of the farthest (S300b) overdensity is \[34000–38000\] km/s. In the plate 444 and 509 samples, S300 appears as a single density excess (S300c): we estimate a velocity range of \[30000–37000\] km/s for the plate 444 sample and \[32000–35000\] km/s for the plate 509 sample. It is not clear if all these features are part of a single structure, but on the other hand the distribution of galaxies in this region appears highly coherent. For this reason, we computed a global overdensity in this region, regardless of the physical association among the various components: therefore, in the following we will present the results obtained considering the sum of the contributions of S300a, S300b, V3 and S300c. The third step is the computation of the overdensity $`{\displaystyle \frac{N}{\overline{N}}}`$ for each structure. The total overdensity in galaxies can then be estimated by combining the density excesses of the various samples following Postman, Geller & Huchra (1988) as $$\left(\frac{N}{\overline{N}}\right)_{SC}=\sum _if_i\left(\frac{N}{\overline{N}}\right)_i,$$ $`(5)`$ where $`f_i`$ is the volume fraction occupied by the considered $`i^{th}`$ sample with overdensity $`\left({\displaystyle \frac{N}{\overline{N}}}\right)_i`$. In Table 3 the overdensities of the single structures (listed in column 1) and the total density excess in the Shapley Concentration are reported in column 2. The volumes and the redshift completeness of the samples are also reported (columns 3 and 6). In Table 4 we present the estimated overdensities and the volumes involved in the S300 samples. In order to assign a formal scale to these overdensities, we calculated the radius of each structure assuming a spherical shape as $$R=\left(\frac{3V}{4\pi }\right)^{\frac{1}{3}},$$ $`(6)`$ where $`R`$ is the scale and $`V`$ is the volume. In column 4 of Tables 3 and 4, the values of the scale for each structure in the various samples are reported. ### 5.2 The mass estimate A direct measure of the mass in the supercluster regions between clusters is not possible, because their small overdensities indicate that they are far from virialization. For this reason we use the number excess in galaxies, which can be related to the mass excess (see below).
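Equations (5) and (6) combine straightforwardly; the two-component example below uses made-up numbers purely to illustrate the volume weighting (a small, dense cluster complex embedded in a larger intercluster region).

```python
import math

# Sketch of Eqns. (5)-(6): volume-weighted total overdensity and the
# equivalent spherical scale of a structure. The two-component numbers
# in the example are illustrative only.

def total_overdensity(over, vol):
    """Eqn. (5): sum_i f_i (N/Nbar)_i with f_i = V_i / sum_j V_j."""
    v_tot = sum(vol)
    return sum(v / v_tot * o for o, v in zip(over, vol))

def equivalent_radius(volume):
    """Eqn. (6): radius (h^-1 Mpc) of a sphere of the same volume."""
    return (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)

print(total_overdensity([25.0, 4.0], [400.0, 3600.0]))  # -> 6.1
print(f"{equivalent_radius(4000.0):.1f} h^-1 Mpc")      # -> 9.8
```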
As far as the clusters are concerned, in previous studies from the literature their contribution to the total mass or overdensity has been derived by adding together the results of the virial mass estimates of each cluster. In order to be fully consistent with the density excess derived for the intercluster survey, we chose to estimate also the cluster contribution as an excess in galaxy number. This procedure could be incorrect if there is a significant segregation between visible and dark matter in clusters with respect to the field, i.e. if the proportionality between light and mass is different inside and outside clusters. However, the peculiar dynamical situation of the clusters in this region could also influence the reliability of the mass estimates. Indeed, there is no agreement even on the mass of A3558, the richest and best studied cluster of the Shapley Concentration. The mass obtained from optical data ranges from $`3.4\times 10^{14}`$ h<sup>-1</sup> M<sub>⊙</sub> (Dantas et al. 1997) to $`6\times 10^{14}`$ h<sup>-1</sup> M<sub>⊙</sub> (Biviano et al. 1993), while the X-ray data give masses in the range 1.5–6.0$`\times 10^{14}`$ h<sup>-1</sup> M<sub>⊙</sub> (Bardelli et al. 1996, Ettori et al. 1997). We also recall that in the overdensity estimates we neglected the contribution of a number of clusters (see Sect. 3.3), for which the available data are too sparse for a density reconstruction. However, having rescaled the overdensity found for A1736 with the ratio of the ACO richness parameters of these clusters, we found that neglecting their contribution to the total overdensity of the Shapley Concentration leads to an underestimate of $`{\displaystyle \frac{N}{\overline{N}}}`$ of the order of $`10\%`$. Given the volume and the overdensity of a structure, the mass can be estimated as $$M=2.778\times 10^{11}\left(\frac{N}{\overline{N}}\right)V\mathrm{\Omega }_oh^{1}M_{\odot },$$ $`(7)`$ where $`V`$ is the volume (in h<sup>-3</sup> Mpc<sup>3</sup>) occupied by the considered structure, $`{\displaystyle \frac{N}{\overline{N}}}`$ is its number excess in galaxies with respect to a uniform distribution and $`\mathrm{\Omega }_o`$ is the matter density parameter. This relation is correct only if the luminous matter traces exactly the distribution of the total matter. The values found for the masses of the various structures by using eq.(7) are reported in column 5 of Tables 3 and 4. As a more general case, we can assume (following Kaiser 1994) that the density excess in galaxies (or in clusters) is proportional to the total mass overdensity through the bias factor $`b`$ $$\frac{N-\overline{N}}{\overline{N}}=b\left(\frac{\rho -\overline{\rho }}{\overline{\rho }}\right).$$ $`(8)`$ Hudson (1993a, 1993b), comparing the peculiar velocity field in the local Universe with the distribution of optical galaxies, estimated the most likely range for the bias factor to be approximately $`2.5\mathrm{\Omega }_o^{0.6}`$–$`1.4\mathrm{\Omega }_o^{0.6}`$. The first value comes from the comparison of the overdensity of the Virgo supercluster with the Virgo infall pattern, while the second value is obtained by assuming that the large–scale galaxy dipole converges at a depth of $`8000`$ km/s. Therefore, the lower value could be an underestimate of $`b`$ if the dipole of the peculiar velocities converges at a larger depth (Plionis & Valdarnini 1991; Scaramella et al. 1991). Indeed, an analysis of the dipole component perpendicular to the supergalactic plane led to the higher value of $`2.0\mathrm{\Omega }_o^{0.6}`$ (Hudson 1993a).
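Equations (7) and (8) can be combined into a single mass estimator. The way the bias enters below (de-biasing the galaxy excess before applying Eqn. (7)) is one consistent reading of the text; with $`b=1`$ the two coincide, and using the Table 3 scale of the total sample the sketch reproduces the mass quoted in the next section.

```python
import math

# Sketch of Eqns. (7)-(8): mass from a galaxy number excess, with the
# linear bias of Eqn. (8) applied to the excess before Eqn. (7). How the
# bias enters is our reading; with bias = 1 the two equations coincide.

RHO_CRIT = 2.778e11                  # critical density, h^2 M_sun Mpc^-3

def mass(n_over_nbar, volume, omega_0=1.0, bias=1.0):
    """Mass in h^-1 M_sun; volume in h^-3 Mpc^3."""
    delta_mass = (n_over_nbar - 1.0) / bias      # Eqn. (8)
    return RHO_CRIT * (1.0 + delta_mass) * volume * omega_0

V = 4.0 / 3.0 * math.pi * 10.1**3    # sphere with the Table 3 scale of the
                                     # total sample (10.1 h^-1 Mpc)
print(f"{mass(11.3, V):.2e} h^-1 M_sun")   # -> ~1.4e16, as quoted in Sect. 6
```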
## 6 Results and discussion The total overdensity of the Shapley Concentration in our surveyed region, computed following eq.(5), is $`{\displaystyle \frac{N}{\overline{N}}}=11.3`$ over a scale of $`10.1`$ h<sup>-1</sup> Mpc (see Table 3). In order to give an idea of the spatial pattern of the galaxy overdensity inside the Shapley Concentration, in the right panel of Figure 6 we give the value of $`{\displaystyle \frac{N}{\overline{N}}}`$ in each MEFOS field: as expected, these values are higher in the proximity of the clusters. In the North-East part the overdensities decrease and there are three fields with no observed galaxies in the considered velocity range: note, however, that since the expected number of galaxies in these fields is $`0.8`$, fields with no galaxies are still compatible with the mean density. Field #44 has the highest overdensity value (13.7), probably because of the presence of a group. If we assume that the estimated overdensity of the intercluster galaxies extends also outside the MEFOS fields over the region covered by plates 443, 444 and 509 (see the area delimited by dashed lines in Figure 6), the total galaxy excess derived from eq.(5) is $`6.8\pm 0.4`$ on a scale of $`14.1`$ h<sup>-1</sup> Mpc. Hereafter we refer to this case as “extended sample”, while with the name “total sample” we indicate the results for our original surveyed area. We recall that these samples include the contribution of the A3528 and A3558 complexes and of A1736, but neglect the contribution of the other clusters in the region (see Sect.3.3): therefore these overdensity values have to be regarded as lower limits. Drinkwater et al. (1999) give a value of $`{\displaystyle \frac{N}{\overline{N}}}=2.0\pm 0.2`$ for plates 382 and 383, over an area of 44 sq.deg.; in order to add the contribution of this survey to our data, we computed its volume in the following way. As shown in Figure 4, our plane representation holds also in this sample; for the width determination we assume that this plane is perpendicular to the line of sight, therefore we adopt a width of $`\pm 7.6`$ h<sup>-1</sup> Mpc (see Sect.4.1). We added the contribution of these plates to our sample, following eq.(5). With these values we find for the Shapley Concentration a total overdensity of $`{\displaystyle \frac{N}{\overline{N}}}=5.2\pm 0.3`$ on a scale of 15.5 h<sup>-1</sup> Mpc. Hereafter we refer to this case as “extended sample $`+`$ 382 & 383”. Also in this case, given the fact that the contribution of clusters in plates 382 and 383 has been neglected, the overdensity values are lower limits. As for the S300 structure, we have a total overdensity of $`{\displaystyle \frac{N}{\overline{N}}}=2.9`$ over a scale of $`24.8`$ h<sup>-1</sup> Mpc: in Table 4 the various contributions to this structure are reported. However, we remark that it is not clear if all these features are physically part of a single structure: for this reason the values of the reported overdensities have to be regarded as indicative of the matter distribution in this region. Given these overdensities, the total mass of the Shapley Concentration in our surveyed region and of the S300 structure are $`1.4\times 10^{16}`$ $`\mathrm{\Omega }_o`$ h<sup>-1</sup> M<sub>⊙</sub> and $`5.1\times 10^{16}`$ $`\mathrm{\Omega }_o`$ h<sup>-1</sup> M<sub>⊙</sub>, respectively, if light traces mass.
The contribution of the Shapley Concentration to the peculiar velocity of the Local Group with respect to the Cosmic Microwave Background reference frame could be estimated as (Davis & Peebles 1983) $$\mathrm{\Delta }v=2.86\times 10^{7}\frac{\mathrm{\Delta }M}{v^2}\mathrm{\Omega }_o^{0.4}hkm/s,$$ $`(9)`$ where $`\mathrm{\Delta }M`$ is the mass excess (in M<sub>⊙</sub>) of the structure with respect to a uniform distribution and $`v`$ is the mean radial velocity (in km/s). Considering the “extended $`+`$ 382 & 383” sample, we find $`\mathrm{\Delta }v\simeq 26`$ km/s (for $`\mathrm{\Omega }_o=1`$ and no bias): note however that this value corresponds only to the central part of the Shapley Concentration and does not take into account the contribution of the matter in the external regions of the supercluster. As a comparison, the peculiar velocity induced on the Local Group by the Great Attractor is predicted to be of the order of $`300`$ km/s by Hudson (1993a). Finally, it is possible to study how clusters and galaxies trace the same structure. Assuming a mean cluster density of $`25.2\times 10^{6}`$ clusters Mpc<sup>-3</sup> (as found by Zucca et al. 1993 for the ACO clusters), we expect in the surveyed region of the Shapley Concentration (delimited by the dashed lines in Figure 6) 0.29 clusters, to be compared with the 11 observed. This corresponds to an overdensity in clusters of $`\left({\displaystyle \frac{N}{\overline{N}}}\right)_{cl}=37.9\pm 11.4`$. The quantity $`b_{cl,g}=\left[\left({\displaystyle \frac{N}{\overline{N}}}\right)_{cl}1\right]/\left[\left({\displaystyle \frac{N}{\overline{N}}}\right)_{gal}1\right]`$ represents the ratio between the bias factors of clusters and galaxies. We found $`b_{cl,g}=6.4\pm 2.0`$ on a scale of 14.1 h<sup>-1</sup> Mpc. Despite the large associated error, this quantity seems to be inconsistent with the range of 2–3.5 found for $`b_{cl,g}`$ by comparing the dipoles (Branchini & Plionis 1996), the ratio of the correlation functions (see Scaramella et al. 1994), or the reconstructed linear power spectra (Peacock & Dodds 1994) of the distributions of clusters and galaxies. As reference, the same quantity for the Corona Borealis supercluster is $`b_{cl,g}={\displaystyle \frac{7.53}{7}}\simeq 1`$ on $`20`$ h<sup>-1</sup> Mpc, where the value for the galaxies is taken from Small, Sargent & Hamilton (1998b). This fact, while confirming the impression that the Shapley Concentration has an anomalous richness in clusters with respect to other superclusters, on the other hand could raise problems for a simple comparison between results on the large–scale distribution of clusters and galaxies, because it suggests significant variations of $`b_{cl,g}`$ with the local density. However, in order to assess this result, detailed studies of the galaxy distribution in a sample of superclusters are needed and are now possible with the new generation of multi-object facilities. ### 6.1 Comparison with previous results The comparison of our values for the Shapley Concentration and S300 with the results from the literature for other superclusters is not straightforward, because of the different scales over which the overdensities are computed. Moreover, sometimes the density excesses are given in mass, sometimes in number of galaxies or clusters, therefore a direct comparison needs hypotheses about the value of $`\mathrm{\Omega }_o`$ and of the bias factor $`b`$. Moreover, these comparisons are affected by the relatively large uncertainties in the estimated values.
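Plugging in the numbers of the “extended sample $`+`$ 382 & 383” case reproduces the quoted $`\mathrm{\Delta }v`$; the prescription $`\mathrm{\Delta }M=M(N\overline{N})/N`$ used below is our reading of “mass excess with respect to a uniform distribution”, and the $`b_{cl,g}`$ check uses the cluster counts quoted just above.

```python
# Check of Eqn. (9) and of b_cl,g with the numbers quoted in this section.
# Delta M = M * (N - Nbar)/N is our reading of "mass excess"; M, N/Nbar
# and v are the "extended sample + 382 & 383" values (Omega_0 = 1, no bias).

def peculiar_velocity(delta_m, v, omega_0=1.0, h=1.0):
    """Eqn. (9): Delta v in km/s; delta_m in M_sun, v = cz in km/s."""
    return 2.86e-7 * delta_m / v**2 * omega_0**-0.4 * h

M_TOT, N_OVER, V_MEAN = 2.3e16, 5.2, 14500.0
delta_m = M_TOT * (N_OVER - 1.0) / N_OVER
print(f"Delta v = {peculiar_velocity(delta_m, V_MEAN):.0f} km/s")  # ~25-26

# Cluster vs. galaxy biasing: 11 clusters observed against 0.29 expected,
# galaxy overdensity 6.8 on the same 14.1 h^-1 Mpc scale.
b_clg = (11.0 / 0.29 - 1.0) / (6.8 - 1.0)
print(f"b_cl,g = {b_clg:.1f}")                                     # -> 6.4
```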
However, it can be instructive to compare the results obtained using different methods to estimate masses and overdensities, e.g. virial masses, X-ray masses, etc. The overdensity for the Shapley Concentration (extended sample $`+`$ 382&383) can be compared with the density excess in mass given by Raychaudhury (1989) for the Great Attractor on a similar scale ($`{\displaystyle \frac{\rho }{\overline{\rho }}}=6\mathrm{\Omega }_o^{-1}`$ over $`15`$ $`h^{-1}`$ Mpc). Under the hypothesis that $`\mathrm{\Omega }_o=1`$ and that the galaxy distribution traces that of the total matter, the two superclusters would be very similar on the considered scale. A more consistent comparison can be done with the list of superclusters of Hudson (1993a), who studied the distribution of optical galaxies taken from the UGC and ESO-Uppsala catalogues within $`8000`$ km/s. On scales comparable with those considered in our survey for the Shapley Concentration, only the overdensities of the Virgo (Local) supercluster and the Fornax-Eridanus supercluster have been determined (see his table 10), finding $`{\displaystyle \frac{N}{\overline{N}}}=2.70`$ and $`1.35`$, respectively. Comparisons with the other structures of Hudson (1993a), which are considered at very different scales, are not straightforward: indeed, as can be noticed from Tables 3 and 4, there are large variations of $`{\displaystyle \frac{N}{\overline{N}}}`$ inside superclusters when different scales are considered. The value of $`{\displaystyle \frac{N}{\overline{N}}}=3.1`$ for the overdensity in galaxies in the Great Attractor on a scale of $`30`$ $`h^{-1}`$ Mpc (Dressler 1988) can be compared with the value of $`2.9`$ for S300, revealing the similarity of these two structures. Note, however, that our value is a lower limit, since we neglected the presence of clusters in the S300 structure. Marinoni et al. (1998), fitting the galaxy peculiar velocity field from the MarkIII catalogue (Willick et al. 1997) and the spiral galaxy sample of Mathewson, Ford & Buchhorn (1992) with a multi-attractor model, found for the Shapley Concentration $`{\displaystyle \frac{\rho -\overline{\rho }}{\overline{\rho }}}\simeq `$ 2.7 and 3.8, respectively, within a scale of $`15`$ $`h^{-1}`$ Mpc: the comparison with our $`{\displaystyle \frac{N}{\overline{N}}}`$ for the “extended $`+`$ 382 & 383” sample leads to values for the bias factor of $`b\simeq `$ 1.6 and 1.1, consistent with the range adopted by Hudson (1993a, 1993b). Note that Marinoni et al. (1998) used a model with spherical symmetry, while the geometry of our sample is much more complicated: however, if we use their values obtained for scales of $`10`$ $`h^{-1}`$ Mpc and $`20`$ $`h^{-1}`$ Mpc, the estimates for $`b`$ do not change significantly. These results imply that the value of the bias factor inside the Shapley Concentration does not differ from those found in other regions of the local Universe. Adopting Hudson's values for the bias parameter, the mass of the Shapley Concentration in our surveyed region lies in the range $`[6.3-10.3]\times 10^{15}`$ $`h^{-1}`$ $`\mathrm{M}_{\odot }`$ in the case of $`\mathrm{\Omega }_o=1`$ and $`[2.9-5.1]\times 10^{15}`$ $`h^{-1}`$ $`\mathrm{M}_{\odot }`$ in the case of $`\mathrm{\Omega }_o=0.2`$. The corresponding overdensity ranges are $`{\displaystyle \frac{\rho -\overline{\rho }}{\overline{\rho }}}=[4.1-7.4]`$ and $`[10.8-19.4]`$, respectively. The mass values for each scale and for different choices of $`b`$ and $`\mathrm{\Omega }_o`$ are shown in Figure 7.
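The bias values above are simply the ratio of our galaxy overdensity to the mass overdensities of the velocity-field fits, $`b=\delta _{gal}/\delta _{mass}`$; a two-line check (a sketch, using our “extended $`+`$ 382 & 383” overdensity):

```python
# Sketch: bias factor from our galaxy overdensity versus the mass
# overdensities of Marinoni et al. (1998), b = delta_gal / delta_mass.
delta_gal = 5.2 - 1.0                  # (N/Nbar) - 1, "extended + 382&383"
for delta_mass in (2.7, 3.8):          # MarkIII and spiral-sample fits
    print("b = %.1f" % (delta_gal / delta_mass))   # ~1.6 and ~1.1
```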
The mass of the Shapley Concentration can be compared with other estimates found in the literature, also shown in Figure 7. Raychaudhury et al. (1991), using the virial mass estimator, found $`M_{vir}=3.5\times 10^{16}`$ $`h^{-1}`$ $`\mathrm{M}_{\odot }`$ on scales of $`18.5`$ $`h^{-1}`$ Mpc, and simply summing the cluster masses found $`M=5.5\times 10^{15}`$ $`h^{-1}`$ $`\mathrm{M}_{\odot }`$ on the same scale. This range is consistent with the more recent estimates of Quintana et al. (1995) on a scale of 17 $`h^{-1}`$ Mpc: also for these authors, the higher mass value is derived with the virial estimator and the lower one is the sum of cluster masses. These values can be compared with the mass we derived considering the “extended sample $`+`$ 382 & 383” discussed above for the Shapley Concentration, that is $`2.3\times 10^{16}`$ $`\mathrm{\Omega }_o`$ $`h^{-1}`$ $`\mathrm{M}_{\odot }`$ on a scale of 15.5 $`h^{-1}`$ Mpc (with no bias). Assuming that the overdensity derived for this sample can be extended also to larger scales, we extrapolated the expected mass values at the Quintana et al. and Raychaudhury et al. scales in the two extreme cases for $`b`$ and $`\mathrm{\Omega }_o`$: these extrapolations are shown as dashed lines in Figure 7. Our mass range is compatible with both the Quintana et al. and the Raychaudhury et al. values. Note, however, that our estimate at the scale of 15.5 $`h^{-1}`$ Mpc has to be regarded as a lower limit, because it does not include the contribution of all clusters in the region. On a scale of $`14`$ $`h^{-1}`$ Mpc Ettori et al. (1997), using various mass estimators, found values in the range $`[1.7-5.2]\times 10^{15}`$ $`h^{-1}`$ $`\mathrm{M}_{\odot }`$. This range is consistent only with our lower estimate for the “extended sample” on a scale of 14.1 $`h^{-1}`$ Mpc: however, these authors took into account only the clusters, neglecting the contribution of matter outside clusters. Ettori et al. (1997) gave a mass estimate also for the core of the supercluster, ranging from $`1.0\times 10^{15}`$ $`h^{-1}`$ $`\mathrm{M}_{\odot }`$ to $`4.3\times 10^{15}`$ $`h^{-1}`$ $`\mathrm{M}_{\odot }`$ on a scale of $`6.7`$ $`h^{-1}`$ Mpc. On a comparable scale (4.6 $`h^{-1}`$ Mpc on the A3558 complex), we find values in the range $`[2.2-3.9]\times 10^{15}`$ $`h^{-1}`$ $`\mathrm{M}_{\odot }`$ in the case of bias and $`\mathrm{\Omega }_o=1`$ and $`[1.1-2.0]\times 10^{15}`$ $`h^{-1}`$ $`\mathrm{M}_{\odot }`$ in the case of bias and $`\mathrm{\Omega }_o=0.2`$. The corresponding overdensity ranges are $`{\displaystyle \frac{\rho -\overline{\rho }}{\overline{\rho }}}=[18.2-32.4]`$ and $`[47.8-85.7]`$, respectively, to be compared with the range $`[4.05-11.47]`$ given by Ettori et al. (1997). Also in this case, the discrepancy between these ranges is probably due to the fact that these authors neglected the contribution of the matter outside clusters on the considered scale. ### 6.2 Dynamics and comparison with theoretical models Rich superclusters are ideal laboratories in which to study dynamical phenomena such as cluster formation, because the high local densities induce high peculiar velocities, which accelerate merging events. A standard method to study the dynamical state of our structures is to follow the evolution of their equivalent present linear overdensities (Kaiser & Davis 1985; see Appendix A of Ettori et al. 1997 for the details of the formalism). In practice, the linear overdensity and scale are the values that a structure would have if its gravitational evolution were linear.
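For $`\mathrm{\Omega }_o=1`$, the connection between the observed nonlinear overdensity and the collapse state can be sketched with the standard spherical-collapse parametrization: $`\rho /\overline{\rho }=9(\theta -\mathrm{sin}\theta )^2/[2(1-\mathrm{cos}\theta )^3]`$, with $`t\propto (\theta -\mathrm{sin}\theta )`$, turnaround at $`\theta =\pi `$, and collapse at $`\theta =2\pi `$. The sketch below is our own illustration (it feeds in the “total sample” overdensity of 11.3; the inputs actually used for Table 5 may differ slightly) and reproduces the qualitative conclusions of the next paragraph, with a remaining time to collapse of order $`3\times 10^9`$ $`h^{-1}`$ yr.

```python
# Sketch: Omega_o = 1 spherical collapse.  Find the development angle
# theta at which rho/rho_bar matches the observed overdensity, then use
# t(theta) ~ (theta - sin theta), with collapse at theta = 2*pi.
from math import pi, sin, cos

def overdensity(theta):
    return 9.0 * (theta - sin(theta))**2 / (2.0 * (1.0 - cos(theta))**3)

def theta_of(delta, lo=1e-3, hi=2.0 * pi - 1e-3):
    for _ in range(60):              # bisection; overdensity is monotonic
        mid = 0.5 * (lo + hi)
        if overdensity(mid) < delta:
            lo = mid
        else:
            hi = mid
    return mid

t0 = 6.5e9                           # h^-1 yr, Einstein-de Sitter age
th = theta_of(11.3)                  # Shapley Concentration, total sample
print("past turnaround:", th > pi)   # True: the structure is collapsing
t_coll = t0 * 2.0 * pi / (th - sin(th))
print("time to collapse ~ %.1e h^-1 yr" % (t_coll - t0))  # of order 3e9
```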
In Table 5 we report the values of the linear density excess (column (2)), linear radius (column (3)) and turnaround and collapse times (columns (4) and (5)) in the case of $`\mathrm{\Omega }_o=1`$ and no bias for the various structures (indicated in column (1)). The times have to be compared with the age of the Universe, which is $`t_o=6.5\times 10^9`$ $`h^{-1}`$ yrs (in the Einstein-de Sitter case). From the values in Table 5 it is clear that, if light traces mass and $`\mathrm{\Omega }_o=1`$, the Shapley Concentration has already reached its turnaround radius and started to collapse: the final collapse will happen in $`3\times 10^9`$ $`h^{-1}`$ yrs. We computed the turnaround times also in the cases of high and low bias ($`b=2.5\mathrm{\Omega }_o^{0.6}`$ and $`b=1.4\mathrm{\Omega }_o^{0.6}`$, respectively) and of $`\mathrm{\Omega }_o=1`$ and $`\mathrm{\Omega }_o=0.2`$, finding that the Shapley Concentration is still following a decelerated expansion in all cases, except in the case of low bias and $`\mathrm{\Omega }_o=1`$, for which the structure has just started to collapse. As for the S300 structure, it turns out to be far from collapse in all the scenarios considered. The dynamical analysis of the cluster complexes indicates that the A3558 complex is in the late stages of its collapse, which will be completed in $`1\times 10^9`$ $`h^{-1}`$ yrs. This result is consistent with the analysis of the A3558 complex by Bardelli et al. (1994, 1998a, 1998b), who found a complex dynamical situation, related to cluster mergers. Analogous results on the A3528 complex will be presented in Bardelli et al. (in preparation). Having calculated the linear scale and overdensity of the structures, it is possible to compare them with the predictions of various theoretical models. We considered a set of six different cosmological models belonging to the general class of Cold Dark Matter (CDM) scenarios (see e.g. Moscardini et al. 1998): 1) the standard CDM model, with a normalization consistent with the COBE data (SCDM$`_{\mathrm{COBE}}`$); 2) the same model but with a different normalization, in agreement with the cluster abundances (SCDM$`_{\mathrm{CL}}`$); 3) the so-called $`\tau `$CDM model with a shape parameter $`\mathrm{\Gamma }=0.21`$; 4) a tilted CDM model with $`n=0.8`$ (TCDM); 5) an open CDM model with a matter density parameter $`\mathrm{\Omega }_o=0.2`$ (OCDM); 6) a low density ($`\mathrm{\Omega }_o=0.2`$) CDM model with flatness given by the cosmological constant ($`\mathrm{\Lambda }`$CDM). All these models, except the first, are normalized using the observed cluster abundance (see Eke, Cole & Frenk 1996). Note that this normalization is also compatible with the COBE normalization, with the exception of SCDM$`_{\mathrm{CL}}`$, for which this normalization gives predicted values $`43\%`$ higher. For this reason, we present the results for SCDM with both COBE and cluster normalizations. In Table 6 we report the ratio between the linear overdensity and the r.m.s. mass fluctuations predicted by the various models for the “total” sample and with different choices of $`\mathrm{\Omega }_o`$ and $`b`$; the results for the “extended” and “extended $`+`$ 382&383” samples are similar. In order to assess the statistical consistency of the Shapley Concentration “event” with the predictions of the models, it is necessary to calculate how many such structures are expected in the sampled volume of the Universe.
Our survey is not useful for this purpose, because it covers a small solid angle just in the direction of the Shapley Concentration. Therefore, we decided to consider the cluster sample used by Zucca et al. (1993) to search for superclusters: this sample covers the whole sky with $`|b^{II}|>15^o`$ up to a distance of $`300`$ $`h^{-1}`$ Mpc. In this volume there are no other superclusters as rich as the Shapley Concentration. We computed the ratio between this volume and the volumes of our samples listed in Table 3: this corresponds to the number of available “probes”. Given this number, it is possible to estimate the number of $`\sigma `$ beyond which only 0.25 objects are expected in a Gaussian distribution. This number of $`\sigma `$ is 4.33 for the “total” sample. If the ratio between the linear overdensity and the r.m.s. mass fluctuations predicted by a model exceeds this number of $`\sigma `$, we expect no event like the Shapley Concentration in the sampled volume. Therefore the existence of the Shapley Concentration is inconsistent with all models in Table 6 which predict values higher than $`4`$; the consistent values are written in italics in Table 6. The most discrepant model is SCDM$`_{\mathrm{CL}}`$; TCDM and $`\tau `$CDM give marginally consistent values only in the case of high bias. On the contrary, all models with a matter density parameter $`<1`$ seem to be compatible with the existence of a Shapley Concentration “event”. Finally, note that SCDM$`_{\mathrm{COBE}}`$ appears to give the lowest values, but this model is inconsistent with other results obtained on the scales of galaxies and clusters (see e.g. Peacock & Dodds 1996). ## 7 Summary We have presented the results of a redshift survey of intercluster galaxies toward the central part of the Shapley Concentration supercluster, aimed at determining the distribution of galaxies in between obvious overdensities. Our sample comprises 442 new redshifts, mainly in the $`b_J`$ magnitude range $`17-18.8`$. Together with our redshift surveys on the A3558 and A3528 complexes, our total sample has $`2000`$ velocities. Our main results are the following: – The average velocity of the observed intercluster galaxies in the Shapley Concentration appears to be a function of the ($`\alpha `$, $`\delta `$) position, and can be fitted by a plane in the three-dimensional space ($`\alpha `$, $`\delta `$, $`v`$): the distribution of the galaxy distances around the best fit plane is described by a Gaussian with a dispersion of $`3.8`$ $`h^{-1}`$ Mpc. – Using the 1440 galaxies of our sample in the magnitude range $`17-18.8`$, we reconstructed the density profile in the central part of the Shapley Concentration and we detected another significant overdensity at $`30000`$ km/s (dubbed S300). – We estimated the total overdensity in galaxies, the mass and the dynamical state of these structures, discussing the effect of considering a bias between the galaxy distribution and the underlying matter. The estimated total overdensity in galaxies of these two structures is $`{\displaystyle \frac{N}{\overline{N}}}\simeq 11.3`$ on a scale of $`10.1`$ $`h^{-1}`$ Mpc for the Shapley Concentration and $`{\displaystyle \frac{N}{\overline{N}}}\simeq 2.9`$ on a scale of $`24.8`$ $`h^{-1}`$ Mpc for S300. If light traces the mass distribution, the corresponding masses are $`1.4\times 10^{16}`$ $`\mathrm{\Omega }_o`$ $`h^{-1}`$ $`\mathrm{M}_{\odot }`$ and $`5.1\times 10^{16}`$ $`\mathrm{\Omega }_o`$ $`h^{-1}`$ $`\mathrm{M}_{\odot }`$ for the Shapley Concentration and S300, respectively.
– The dynamical analysis revealed that, if light traces mass and $`\mathrm{\Omega }_o=1`$, the Shapley Concentration has already reached its turnaround radius and started to collapse: the final collapse will happen in $`3\times 10^9`$ $`h^{-1}`$ yrs. – We compared our mass estimates on various scales with other results in the literature, finding general agreement. – We found an indication that the value of the bias between clusters and galaxies in the Shapley Concentration is higher than that reported in the literature, confirming the impression that this supercluster is very rich in clusters. – Finally, from the comparison with some theoretical scenarios, we found that the Shapley Concentration is more consistent with the predictions of models with a matter density parameter $`<1`$, such as open CDM and $`\mathrm{\Lambda }`$CDM. ## Acknowledgements We warmly thank Andrea Biviano for providing us with an electronic version of the redshift data on A1736 and Christian Marinoni for his overdensity data of the Shapley Concentration.
# HIGH RESOLUTION PIXEL DETECTORS FOR $`e^+e^{-}`$ LINEAR COLLIDERS ## 1 Introduction Precision measurements of Top quark and Higgs boson physics are within the reach of the next generation of $`e^+e^{-}`$ linear colliders, operating at a centre of mass energy ranging from the $`Z^o`$ pole to $`1\mathrm{TeV}`$. High granularity, impact parameter resolution, secondary vertex reconstruction and jet flavour tagging are the essential tools for these measurements and define the figures of merit for a Vertex Detector. In the 1996 Joint DESY/ECFA study, the minimal requirements on the impact parameter resolutions in the $`\mathrm{R}\mathrm{\Phi }`$ plane and along the beam direction were defined to be $`\delta (\mathrm{IP}_{\mathrm{R}\mathrm{\Phi }})=10\mu \mathrm{m}\oplus \frac{30\mu \mathrm{m}\mathrm{GeV}/\mathrm{c}}{\mathrm{p}\mathrm{sin}^{3/2}(\theta )}`$ and $`\delta (\mathrm{IP}_\mathrm{z})=20\mu \mathrm{m}\oplus \frac{30\mu \mathrm{m}\mathrm{GeV}/\mathrm{c}}{\mathrm{p}\mathrm{sin}^{5/2}(\theta )}`$, with a total material budget below $`3\%\mathrm{X}_0`$ for the complete Vertex Detector, fitting between the beam pipe originally at a $`2\mathrm{cm}`$ radius and the intermediate tracker starting at $`12\mathrm{cm}`$. Conceptual designs of vertex trackers based on hybrid pixel sensors and CCD's were proposed and Research & Development plans defined. Since then, several analyses pointed out the advantages of possibly improved performances, both in terms of asymptotic resolution and multiple scattering. This triggered the quest for lightweight technologies that could provide space point information along the particle track with at least $`10\mu \mathrm{m}`$ resolution. At the same time, an improved final focusing scheme allowed the beam pipe to be shrunk to a $`1\mathrm{cm}`$ radius, with the inner sensitive layer at $`1.2\mathrm{cm}`$. In the following, two possible detector technologies are presented: the former is based on the development of the Hybrid Pixel Detectors used in DELPHI and WA-97 and being finalized for the LHC experiments ; the latter is based on the evolution of monolithic CMOS imagers to achieve sensitivity to minimum ionizing particles. ## 2 Hybrid Pixel Detectors ### 2.1 General concepts A $`10\mu \mathrm{m}`$ resolution in a Silicon detector can be achieved by probing the diffusion of the charge carriers generated locally around the trail of the impinging particle. Given the diffusion characteristics, this requires a microstrip or pixel pitch well below $`50\mu \mathrm{m}`$ and an analog output to interpolate the signals on neighbouring cells. While this is feasible with 1d microstrip detectors , the need to integrate the front-end electronics in a cell matching the detector pattern defines the ultimate pitch in 2d pixel detectors. At the moment, the most advanced read-out chips have a minimum cell dimension of $`50\times 300\mu \mathrm{m}^2`$, produced in $`0.8\mu \mathrm{m}`$ CMOS technology, implemented in a rad-hard process. On one hand, the trend of VLSI development and the recent studies on the intrinsic radiation hardness of deep submicron CMOS technology certainly allow one to assume a significant reduction in the cell dimensions in the mid-term. On the other hand, a detector design overcoming this basic limitation is worth considering. What is being proposed is a layout inherited from microstrip detectors, with a readout pitch n times larger than the pixel pitch (see for instance fig. 1 for $`\mathrm{n}=4`$).
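The working principle, made quantitative in the next paragraph, is that of capacitive charge division: the signal of an interleaved pixel splits between the neighbouring readout nodes in proportion to its position, so interpolating the node amplitudes recovers the impact point. A toy sketch of this reconstruction (one dimension, an idealized linear division with Gaussian node noise; all parameter values are illustrative assumptions, not measured properties of the prototypes):

```python
# Toy sketch of position reconstruction by capacitive charge division
# between two readout nodes (1-d, idealized linear sharing).
import random

PITCH = 200.0    # readout pitch in microns (50 um pixels, n = 4)
NOISE = 0.01     # node noise as a fraction of the collected signal

def reconstruct(x_true):
    """Estimated position of a hit at x_true (0..PITCH microns)."""
    eta = x_true / PITCH                       # linear division assumption
    q_left = (1.0 - eta) + random.gauss(0.0, NOISE)
    q_right = eta + random.gauss(0.0, NOISE)
    return PITCH * q_right / (q_left + q_right)

random.seed(1)
residuals = [reconstruct(x) - x for x in [20.0, 100.0, 180.0] * 1000]
rms = (sum(r * r for r in residuals) / len(residuals)) ** 0.5
print("toy resolution ~ %.1f um" % rms)        # micron-scale for this S/N
```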
In such a configuration, the charge carriers created underneath an interleaved pixel will induce a signal on the output nodes capacitively coupled to the interleaved pixel. In a simplified model where the detector is reduced to a capacitive network, the ratio of the signal amplitudes on the output nodes at the left hand side and right hand side of the interleaved pixel (in both dimensions) should have a linear dependence on the particle position. The ratio between the inter-pixel capacitance and the pixel capacitance to the backplane plays a crucial role in the detector design, as it defines the signal amplitude reduction (an effective charge loss) at the output nodes and ultimately the sustainable number of interleaved pixels. Recent results on $`200\mu \mathrm{m}`$ readout pitch microstrip detectors have been published, and a $`10\mu \mathrm{m}`$ resolution has been achieved in a layout with 3 interleaved strips ($`50\mu \mathrm{m}`$ strip pitch) and for a $`\mathrm{S}/\mathrm{N}\simeq 80`$. Similar results may be expected in a pixel detector, taking into account that a lower noise is achievable because of the intrinsically smaller load capacitance and that the charge is possibly shared on four output nodes, reconstructing the particle position in two dimensions. Improvements are certainly possible by sampling the diffusion with a smaller pitch. ### 2.2 Detector prototypes and electrostatics characterization Prototypes of detectors with interleaved pixels were designed in 1998 and delivered in January 1999. The layout of one of the structures is shown in fig. 1. A series of guard rings defines the detector sensitive area. A bias grid allows the polarization of the interleaved pixels too; each $`\mathrm{p}^+`$ implant is connected to the metal bias line by polysilicon resistors in the $`1-3\mathrm{M}\mathrm{\Omega }`$ range. A metal layer is deposited on top of the pixels to be connected to the VLSI cell. The backplane has a meshed metal layer to allow the use of an IR diode for charge collection studies. On a 4” wafer 36 structures were fitted, for 17 different layouts; a VLSI cell of $`200\times 200\mu \mathrm{m}^2`$ or $`300\times 300\mu \mathrm{m}^2`$ was assumed and detectors with a number of interleaved pixels ranging between 0 and 3 and with different areas were designed. Ten high resistivity wafers ($`5-8\mathrm{k}\mathrm{\Omega }\mathrm{cm}`$) were processed at the Institute of Electron Technology, Warszawa, Poland, together with an equal number of low resistivity wafers for process control, the details of which have been outlined in . Two wafers were retained by the factory for a destructive analysis and two others were stored for later use. All of the structures on five undiced wafers were visually inspected, tested up to $`250\mathrm{V}`$, and characteristic I-V and C-V curves were produced; the results may be summarized as follows: – two wafers suffered from processing problems. As a consequence, one wafer had a high leakage current ($`1\mu \mathrm{A}`$) even at very low voltages on most of the structures; the second wafer had interrupted metal lines, making the bias grid inefficient. The former problem is possibly connected to the Al pattern plasma etching; the latter to a non-optimal planarization of the device. – three wafers had extremely good characteristics, with a mean current of $`50\mathrm{nA}/\mathrm{cm}^2`$ at full depletion.
Structures were classified as good detectors if no breakdown was observed below $`100\mathrm{V}`$, the leakage current at depletion voltage was below $`1\mu \mathrm{A}`$ with a smooth trend vs. the applied voltage, and no faults in the line pattern were detected by visual inspection. According to these criteria, 55/89 structures were accepted. In fig. 2 the values of the currents and detector capacitances at depletion voltage are shown for all of the structures in one of the three wafers. In fig. 3 typical $`\mathrm{I}\mathrm{vs}.\mathrm{V}`$ and $`1/\mathrm{C}^2\mathrm{vs}.\mathrm{V}`$ curves are shown. While the $`\mathrm{C}\mathrm{vs}.\mathrm{V}`$ curves behave as expected, the current has a peculiar trend. After a plateau is reached at full depletion, the current takes off at values in the $`50-70\mathrm{V}`$ range. For most of the structures it is a mild increase, but most of the rejected detectors are characterized by a steep slope, eventually ending with a breakdown below $`100\mathrm{V}`$. Independent measurements of the guard ring and bias grid currents have shown that the latter is responsible for the increase, which might be connected to sharp edges where the electric field reaches high values. A full device simulation is planned to help understand this feature. ### 2.3 Outlook The prototypes have not shown any design fault, even if processing and layout optimization still have to be addressed. In the short term, measurements of the inter-pixel and backplane capacitances are planned, completing the electrostatic characterization of the device. A charge collection study will follow, relying on a low noise strip detector analog chip and an IR light spot shone on the meshed backplane. These measurements will provide a proof of principle of the proposed device and define the fundamentals for a further iteration, aiming at a $`25\mu \mathrm{m}`$ pitch. The device thickness is a particularly relevant issue for the application of Hybrid Pixel Sensors in a linear collider experiment. The minimal thickness is defined by both the detector performances and the backthinning technology for bump bonded assemblies. Industrial standards guarantee backthinning down to $`50\mu \mathrm{m}`$ and a procedure to obtain thin Hybrid Pixel detectors is being tested . The small load capacitance of the pixel cells should guarantee an extremely high $`\mathrm{S}/\mathrm{N}`$. Scaling what was obtained for microstrip detectors, the desired resolutions might be obtained with a $`200\mu \mathrm{m}`$ thick detector (a $`250\mu \mathrm{m}`$ ($`0.27\%\mathrm{X}_\mathrm{o}`$) thick assembled device). Moreover, the mechanical structure of a three layer vertex detector based on Hybrid Pixel Sensors is being designed and a realistic material budget evaluated. ## 3 Monolithic Pixel Sensors ### 3.1 General concepts In the early 90's monolithic pixel sensors were proposed as a viable alternative to CCD's in visible imaging . These sensors are made in a standard VLSI technology, often CMOS, so they are usually called CMOS imagers. Three main architectures have been proposed, namely Passive Pixel Sensors (PPS) and Active Pixel Sensors (APS) with photodiode or photogate. In the former, a photodiode is integrated in each pixel together with a selection switch, while in the latter (fig. 4), three transistors are usually integrated together with a photosite. Today, most of the sensors are based on APS because of their superior noise performances: electron noise can be as low as $`4.5\mathrm{e}^{-}`$ r.m.s. at room temperature.
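The operation of such a three-transistor (3T) pixel can be summarized numerically: the photodiode is reset to a reference voltage, discharged by the photocurrent during the integration time, and read out through a source follower when the row is selected. The toy model below illustrates this cycle; every component value in it is an illustrative assumption, not a measured property of any of the devices discussed here.

```python
# Toy model of one 3T active-pixel readout cycle: reset the photodiode,
# integrate the photocurrent, read out via the source follower.
# All values are illustrative assumptions.
C_PD = 10e-15        # photodiode capacitance, F
V_RESET = 2.5        # reset voltage, V
SF_GAIN = 0.85       # source-follower gain

def read_pixel(i_photo, t_int):
    """Output voltage after integrating photocurrent i_photo for t_int."""
    dv = i_photo * t_int / C_PD          # photodiode discharge, V
    return SF_GAIN * (V_RESET - dv)

dark = read_pixel(0.0, 10e-3)            # 10 ms integration, no light
lit = read_pixel(50e-15, 10e-3)          # 50 fA photocurrent
print("signal = %.1f mV" % (1e3 * (dark - lit)))   # 42.5 mV for these values
```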
The use of standard CMOS technology gives APS several advantages with respect to the more generally used CCD's: they are low cost; they are inherently radiation hard; several functionalities can be integrated on the sensor substrate, including random access; and they consume very little power, as the circuitry in each pixel is active only during the readout and there is no clock signal driving large capacitances. Because of these characteristics, CMOS sensors are the favoured technology for the demanding applications typically found in space science. ### 3.2 CMOS sensors for charged particle detection In visible light applications, special care is taken to maximise the fill factor, i.e. the fraction of the pixel area that is sensitive to the light. Because of the transistors, fill factors in CMOS sensors are relatively low (of the order of 30%). This can be a severe limitation in high-energy physics applications if no special care is taken. One of us proposed to integrate a sensor in a twin-well technology with an n-well/p-substrate diode in order to achieve a 100% fill factor for ionizing particle detection (fig. 5). This technique has already proven its effectiveness in visible light applications, reducing the blind area to the metal lines, which are opaque to visible light but not to charged particles. CMOS sensors can achieve high spatial resolution: the pixel size is usually between 10 and 20 times the minimum feature size of the technology used, which means that a $`10\mu \mathrm{m}`$ pitch is possible, and hence a spatial resolution better than $`3\mu \mathrm{m}`$ even with a binary readout (for a pitch $`p`$, a purely binary readout gives $`\sigma =p/\sqrt{12}`$, i.e. $`2.9\mu \mathrm{m}`$ for $`p=10\mu \mathrm{m}`$). At the same time, very little multiple scattering is introduced, as the substrate can in principle be thinned down to a few microns. A charged particle CMOS detector would also benefit from the generic characteristics of these devices, including low power dissipation, the radiation resistance of deep submicron CMOS technologies, and low cost. ### 3.3 CMOS sensors for a Linear Collider In order to prove the effectiveness of CMOS sensors for the Next Linear Collider, an R&D program has been initiated by the Strasbourg group. Some existing commercial devices are currently under test. Since the performances of the sensors depend on details of the fabrication process, a full-custom design of a first prototype sensor (MIMOSA = MIP MOS APS) has been done in a $`0.6\mu \mathrm{m}`$ CMOS technology. The circuit will be back from the foundry in autumn 1999. A preliminary design of a Microvertex Detector based on CMOS sensors was also completed. The detector is supposed to be made of 5 layers, the innermost having a radius of 1.2 cm. It is made of a cylindrical barrel part $`(|\mathrm{cos}(\theta )|<0.90)`$ associated with forward and backward conical and disk-like extensions $`(0.90<|\mathrm{cos}(\theta )|<0.99)`$, intercepting charged particles produced at polar angles ranging from 6 to 174 degrees with at least 4 layers. The sensors are assumed to be $`50\mu \mathrm{m}`$ thick squares of $`1.4\times 1.4\mathrm{cm}^2`$ area, with an active surface close to 80%. Assuming a few per-cent overlap between the active surfaces of neighbouring sensors, about 5500 units are needed to cover the $`1.7\mathrm{m}^2`$ area of the detector. Since the sensors have low power dissipation, a mechanical support made of $`100\mu \mathrm{m}`$ thick, $`7\mathrm{mm}`$ wide thermal diamond rods was considered.
A detailed simulation showed that such a device, connected to a lightweight system of thin cooling pipes, would provide enough thermal conduction by itself to evacuate the heat from the sensors, thus substantially reducing the material seen by the particles. The simulation of the mechanical constraints showed that the bending of the rods should nowhere exceed a $`30\mu \mathrm{m}`$ sagitta. As a further advantage, diamond aluminised with a few-micron-thick layer could fan in/out all the electrical signals. Globally, the material budget is such that particles crossing the 5 detector planes would on average see a total amount of 0.8% of a radiation length in the barrel and 2.9% in the forward-backward parts, values which make the CMOS sensor based vertex detector very competitive. ## 4 Conclusions Two detector technologies suitable for a Vertex Detector at the next generation of $`\mathrm{e}^+\mathrm{e}^{-}`$ colliders have been presented in this paper. Hybrid Pixel Sensors could achieve the desired performances, overcoming the limitations set by the dimensions of the electronics cell mated to the pixel by interleaving pixels and developing a dedicated analog readout chip. Charge sharing and the S/N ratio are the critical issues for these detectors, as they determine both the resolution and the minimal detector thickness. Detector prototypes have been produced; the electrostatics characterization is ongoing and the first results are positive. Monolithic CMOS sensors could achieve an excellent resolution while introducing very little multiple scattering. Impressive results as visible light detectors have recently been obtained and a custom designed device optimized for ionizing particles has been submitted.
# Mergers of Galaxies from an HI Perspective ## 1. Introduction Spiral galaxies, particularly the later types, tend to be rich in neutral hydrogen. Much of this gas is found in the outermost regions of the disks, which are the first regions to be perturbed during tidal interactions. As such, the structure of the gas-rich material thrown off in such encounters will bear the spatial and kinematic imprint of the encounter dynamics. Mapping the distribution and line-of-sight velocity of the atomic gas in the 21cm line of neutral hydrogen (H i) is therefore a unique and powerful tool for investigating these violent events. As an example, Figure 1 shows H i observations of the classical on-going disk-disk merger NGC 4038/9, “The Antennae”. This figure emphasizes the kinematic and spatial continuity of tidal features. It is this continuity that makes H i observations so powerful for investigating on-going mergers and their evolved remnants. The tails are generally much too faint to map the stellar kinematics, and ionized emission tends to be confined to a few localized regions of star formation. H i mapping is very often the only way to obtain such information. At present, at least 140 on-going interactions, mergers, or merger remnants have been mapped in the 21cm line of neutral hydrogen. This includes such classes of objects as interacting doubles, major mergers, evolved merger remnants, shell galaxies, ring galaxies, polar ring galaxies, compact groups, and ellipticals with extended H i debris. In the remainder of this review I will highlight some of what we have learned from these observations. It is beyond the scope of this review to summarize the wealth of knowledge obtained on each of the more than 140 systems observed, and I will instead highlight a few global themes. For additional details, the reader is directed to the proceedings edited by Arnaboldi et al. (1997), especially the contributions by van Gorkom & Schiminovich, Morganti et al., Schiminovich et al., and Oosterloo & Iovino. See also the recent H i reviews of mergers by Sancisi (1997), of compact groups by Verdes-Montenegro et al. (1999), of ring galaxies by Appleton & Struck-Marcell (1996), and of polar ring galaxies by Sparke (these proceedings). ## 2. True Fraction of Peculiar Galaxies H i mapping very often reveals a markedly different dynamical picture of systems than that suggested by the distribution of the optical light. Particularly striking examples are: the extensive tidal streamers found connecting the members of the M81 group (van der Hulst 1979, Yun et al. 1994); the 200 kpc rotating H i ring in the M96 group (Schneider et al. 1989); the pairs of purely gaseous tidal tails emerging from the E4 galaxy NGC 1052 (van Gorkom et al. 1986), from the E2 galaxy NGC 5903 (Appleton, Pedlar & Wilkinson 1990) and from the Sa galaxy NGC 7213 (Hameed, Blank & Young in preparation); the H i bridge/tail morphology of the “Virgo Cloud” H i 1225+01 (Giovanelli et al. 1991, Chengalur et al. 1995); and the plumes of H i pulled off the Sb galaxy NGC 678 by the E pec galaxy NGC 680 and the associated intergalactic H i cloud (van Moorsel 1988). As a result of these and many similar discoveries, we conclude that the true fraction of peculiar objects must be considerably larger than that derived from purely optical studies. Based on H i studies, Sancisi (1997) suggests that at least one in four galaxies has suffered a recent merger or experienced an accretion event.
Even in systems already identified as optically peculiar, H i mapping frequently uncovers structures that provide critical insights into their dynamical nature, revealing connections not seen at other wavelengths (e.g., Figure 2). Examples include: tidal H i in QSOs (Lim & Ho 1999); the nearly 200 kpc long tidal plumes emerging from the ring galaxy Arp 143 (Appleton et al. 1987; see Figure 4) and the IR luminous starburst Arp 299 (Hibbard & Yun 1999); the 275 kpc diameter H i disk around the mildly interacting system Mrk 348 (see Figure 3); the H i tail and counter-arm in the starburst galaxy NGC 2782 (Smith 1991); the extended tidal streamers in the starburst/blowout system NGC 4631 (Weliachew et al. 1978); the extended disk and streamers in the dIrr NGC 4449 (Hunter et al. 1998); and the two H i tails emerging from the blue compact dwarf II Zw 40 (van Zee et al. 1998). That these features are easily visible in H i but lack optical counterparts is most likely because in disk galaxies H i is generally more extended than the stars. It is not clear if H i mapping is the most efficient means of revealing low-level peculiarities. When a similar amount of observing time (a few to a dozen hours) is invested in deep optical imaging, some remarkable results have emerged: faint optical loops and streamers have been discovered around what were long thought to be normal unperturbed disk galaxies (see Malin & Hadley 1997, Zheng et al. 1999). While the optical observations do not include the kinematic information provided by H i observations, they may be the only signatures of very evolved interactions, when the H i has faded away or been ionized. ## 3. Global Dynamics of Merging Systems As demonstrated by Toomre & Toomre (1972) (and re-affirmed many times since, e.g. Barnes 1998), tidal features develop kinematically. As a result, they have a simple kinematic structure, with energy and angular momentum increasing monotonically with distance along the tail (Hibbard & Mihos 1995). Because of this simple kinematic structure, H i observations provide a uniquely useful constraint on N-body simulations of gas-rich mergers (e.g. Combes 1978, Combes et al. 1988, Hibbard & Mihos 1995, Yun 1997, Barnes 1998). While the primary parameters that are fit in this exercise are the physically uninteresting angles describing the orientations of the disks and the viewing perspective, the model matching gives us the confidence to explore the evolutionary history of mergers beyond the best-fit time. By running the simulations forward in time, we can explore the late-stage merger evolution for clues on the expected morphology of the remnants and the distribution of material at large radii in the halos around the remnants. Because much of the tidal material remains bound to the remnant, it will eventually reach an apocenter, turn around, and move back inwards in the potential. There will therefore be a constant rain of tidal material back onto the remnant. Material which falls back while the potential is still violently relaxing will scatter and be mixed throughout the remnant body. Material which returns after the potential has relaxed will wrap coherently, forming shells, loops and other “fine structures” (Hernquist & Spergel 1992). Because of its high energy and angular momentum, the material which falls back later will fall back to larger and larger radii, forming loops rather than shells.
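The timescales involved in this fall-back can be estimated with a simple Keplerian sketch: material on a nearly radial orbit with apocenter $`r_{apo}`$ in the potential of a remnant of mass $`M`$ has a radial period $`T=2\pi \sqrt{a^3/GM}`$ with $`a=r_{apo}/2`$. The snippet below is our own illustration; the remnant mass and radii are assumed round numbers, not fitted values.

```python
# Sketch: Keplerian fall-back time for tidal material, treating the
# remnant as a point mass (illustrative mass and radii).
from math import pi, sqrt

G = 4.30e-6                  # kpc (km/s)^2 / M_sun
KPC_PER_KMS_IN_GYR = 0.978   # 1 kpc/(km/s) expressed in Gyr

def radial_period_gyr(r_apo_kpc, mass_msun):
    """Radial period of a near-radial orbit with the given apocenter."""
    a = 0.5 * r_apo_kpc                          # semi-major axis, kpc
    t = 2.0 * pi * sqrt(a**3 / (G * mass_msun))  # in kpc/(km/s)
    return t * KPC_PER_KMS_IN_GYR

for r in (50.0, 100.0, 200.0):                   # apocenter radii, kpc
    print("r_apo = %3.0f kpc: T ~ %.1f Gyr" % (r, radial_period_gyr(r, 4e11)))
```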
At late times, the material outside of the loops will have a low density and may be ionized by the intergalactic UV field (Corbelli & Salpeter 1993, Maloney 1993) or by the remnant itself (Hibbard, Vacca & Yun 1999). We would therefore expect evolved disk merger remnants to exhibit partial rings of H i with a rotational signature (since the loops correspond to turning points where the radial velocity goes to zero), lying outside the remnant body. This is exactly what has been found around a number of shell galaxies (Schiminovich et al. 1995; Fig. 2b–d). Meanwhile, the loosely bound tidal material in the outer regions continues to travel outward. This material has radial periods of many Gyr and azimuthal periods even longer than this (Hibbard 1995). As a result, the tidal material will not give rise to a smooth, spherical halo of material; instead there will be specific regions of higher column density material with a low filling factor extending to very large radii. At late times, the atomic gas will be too diffuse to be detected in emission, and may anyway be largely ionized by the intergalactic UV field. Therefore the tidal features mapped in H i are likely the denser neutral peaks of a more extended distribution. This material should be detectable in absorption against background sources (Carilli & van Gorkom 1992). ## 4. Galaxy Transformation: Spirals to Ellipticals The evidence that at least some mergers of gas-rich disk galaxies can make elliptical-like remnants is very strong (e.g. Schweizer 1998). Whether these merger remnants are true ellipticals or anomalous in some manner is still a matter of debate (see van der Marel & Zurek, these proceedings). H i observations have addressed one important aspect of this question: do mergers get rid of the atomic gas of the progenitors? It has often been stated that H i will be ejected into the tidal features, but in fact at least as much (and likely much more) outer gas should be sent into the inner regions as is found in the tidal tails (see Fig. 15 of Toomre & Toomre 1972). It was therefore reassuring to find that progressively more advanced merging systems have less and less atomic gas in the bodies of the remnants (Hibbard & van Gorkom 1996). It was not clear how most of the original atomic gas was removed from the inner regions, or how they remain largely H i free in light of the H i which continues to fall back from the tidal regions. Recent observations have shed some light on this subject, by showing that two processes — galactic superwinds and ionization by continued star formation — can have a strong effect on the observability of tidal H i (Hibbard, Vacca & Yun 1999). Superwinds are likely to be important in helping the most gas-rich systems get rid of much of their cold gas reservoirs, but the wind phase is short lived, and would not explain the continued removal of returning tidal H i. Simple calculations suggest that the UV flux from on-going star formation is sufficient to ionize diffuse H i in the tidal regions (see also Bland-Hawthorn & Maloney 1999). Photoionization is an attractive mechanism for explaining tidal features which are gas rich at the outer radii, but gas poor at smaller radii (e.g., the northern tail of the Antennae, Fig. 1; Arp 105, Duc et al. 1997; NGC 7252, Hibbard et al. 1994). The dynamics of tail formation require that the gas-rich outer radii of the progenitor disks extend all the way back into the remnant (see Fig. 2 of Toomre & Toomre 1972).
The geometry of a preliminary numerical fit to the NGC 4038/9 data (Hibbard, van der Hulst & Barnes, in preparation) suggests that the northern tail has an unobstructed sightline to the numerous starforming regions in the disk of NGC 4038, while the southern tail does not, explaining why the latter remains gas rich along its entire length. This process may explain how merger remnants remain gas poor in the presence of the continued return of tidal H i. Such an on-going process is required if remnants are to evolve into normal ellipticals in terms of their atomic gas content. ## 5. Galaxy Transformation: Other Beasts Is the ultimate evolutionary product of disk-disk mergers an elliptical with fine structure? Here again H i observations have provided evidence for unexpected merger products. In particular, a number of on-going mergers and merger remnants are found to have large gaseous disks with rotational kinematics. Particularly good examples are Arp 230 (Fig. 2c&d), NGC 520 (Hibbard & van Gorkom 1996), and MCG -5-7-1 (Schiminovich, van Gorkom & van der Hulst in preparation). The very faint loops and streamers imaged around normal disk galaxies (Malin & Hadley 1997, Zheng et al. 1999) support the idea that some disk systems may have had a violent origin or experienced a major accretion event. Finally, there are some systems which simply do not seem to conform to the standard interaction picture. One such example is Mrk 348 (Fig. 3). The main difficulty with the tidal interpretation for this system is that the scale of the H i is tremendous (diameter $`\sim 280`$ kpc), and two thirds of the neutral hydrogen ($`1.4\times 10^{10}M_{\odot }`$ out of a total of $`2.1\times 10^{10}M_{\odot }`$) lies outside of the highest contour in Fig. 3, i.e., outside the region containing both the companion and all of the optical light of the disk. It simply does not seem possible that the small companion seen in Fig. 3b could have raised this much material to such large radii. It may be that the progenitor was a very gas-rich low surface brightness galaxy like Malin 1 (Impey & Bothun 1989, Pickering et al. 1997). A more intriguing possibility is that the neutral gas may have condensed out of an extensive halo of ionized gas. In this regard it is interesting to consider the NGC 4532/DDO 137 system, which has a very irregular distribution of H i lying mostly outside of the optical galaxies. Hoffman et al. (1999) suggest that the H i clumps are simply neutral peaks in a sea of mostly ionized hydrogen. The existence of such a sea of baryons may mean that full scale galaxy formation continues to the current epoch. ## 6. Galaxy Formation: Tidal Dwarf Galaxies As the name implies, “tidal dwarf galaxies” are concentrations of stars entrained within tidal tails and believed to be gravitationally bound (Schweizer 1978). These systems have received considerable observational attention recently (e.g. Duc 1995, Hunsberger et al. 1996). However, because the inter-clump tidal material is so faint, H i mapping studies provide the only means of determining whether the local luminosity enhancements are kinematically distinct from the surrounding material, and this has only been done for a few systems (Hibbard et al. 1994; Hibbard & van Gorkom 1996; Duc et al. 1997). Within tidal tails there is a wealth of substructure on many scales. It ranges from small, dense gaseous knots within purely gaseous features (e.g., Figure 4), to small luminosity enhancements within optical tidal tails (e.g. Hutchings 1996, Hunsberger et al.
1996), to dwarf-sized condensations of gas and stars fully embedded within a tidal tail (e.g. NGC 7252 and NGC 3921, Hibbard & van Gorkom 1996; NGC 4038/9, Fig. 1a); and finally to separate (and often separately classified) optical dwarfs entrained within mostly gaseous tidal features (e.g., M81/NGC 3077, van der Hulst 1979; NGC 4027, Phookun et al. 1992; NGC 520/UGC 957, Hibbard & van Gorkom 1996; Arp 105, Duc et al. 1997; NGC 5291, Malphrus et al. 1997). An outstanding question is whether there is an evolutionary link between any or all of these categories of structures. ## 7. Timing of Starbursts The 100 kpc scale tidal features imaged in H i emanating from starbursting systems suggest that the interaction and starburst timescales are quite different. For example, the starburst in the IR luminous merger Arp 299 has an age of $`<`$30 Myr while the 180 kpc tail was launched about 700 Myr ago (Hibbard & Yun 1999). Similarly, the tails of the Antennae (Fig. 1) suggest an interaction timescale of $`\sim 500`$ Myr. This object has a population of star clusters with an age of $`\sim 500`$ Myr as well as a population that is currently forming massive stars (Whitmore & Schweizer 1995). These observations suggest that interaction-induced starbursts are not isolated to either first periapse (when the tails are launched) or the final merger, but rather are episodic (cf. Noguchi 1991). While the closeness of the nuclei of the ultraluminous IR mergers suggests that the most intense starbursts occur when the progenitor nuclei are coalescing, it does not necessarily follow that the bulk of the stars are formed during this short-lived phase. This fact has important repercussions for the expected observational characteristics of merger remnants. If most of the interaction-induced star formation takes place at the moment of final coalescence, the burst population is expected to be confined to the inner few 100 pc of the remnant, leaving an anomalously bright central core (Mihos & Hernquist 1994) with the characteristics of a younger, more metal enriched population. The lack of such signatures in shell galaxies is taken to mean that they could not have formed via major mergers (Silva & Bothun 1998). However, if much of the post-interaction population formed over the whole $`\sim 1`$ Gyr timescale of the merger, then the “burst” population will be spread more widely through the remnant, leaving much more subtle observational signatures. ## 8. Conclusion H i spectral line mapping is a powerful diagnostic tool for investigating interacting and peculiar galaxies. In concert with numerical simulations, such observations provide insight into the transformation and formation of galaxies, the distribution of material in the halos of galaxies, the timing of interaction-induced starbursts, and the possible evolutionary products of mergers. An important outstanding question is whether many normal systems formed via mergers. While a merger origin for most galaxies is a generic result of hierarchical structure formation scenarios, there are continued claims that merger remnants will differ from normal ellipticals (Mihos & Hernquist 1994, van der Marel & Zurek, these proceedings). H i observations can help address this question by identifying evolved remnants of gas-rich mergers via the amounts and structure of any remaining tidal H i. Once identified, the structure of these remnants should be compared to that of ellipticals.
If they are indeed different, then this might mean that the Hubble Sequence evolves with redshift, such that the mergers of present day spirals evolve into ellipticals with characteristics different from those of present day ellipticals, and conversely that present day ellipticals had progenitors which differed in some manner from present day disk galaxies. With future cm wave facilities we should be able to address the cosmological aspect of this question. For instance, an expanded VLA (cooled low frequency receivers, greatly expanded correlator) will be able to detect H i out to redshifts $`\sim 1`$. We should be able to image the gas-rich tidal features out to redshifts of $`z\sim 0.5`$ (Figure 5). We will thus be able to constrain the number density of gas-rich mergers at these redshifts, which will tell us how large the population of gas-rich merger remnants should be at the present epoch. ### Acknowledgments. I thank Jacqueline van Gorkom for useful discussions and a careful reading of this manuscript, Jim Higdon & Min Yun for providing figures for my talk, and the organizers for the opportunity to attend such an interesting meeting. ## References
Appleton, P. N., et al. 1987, Nature, 330, 140
Appleton, P. N., Pedlar, A., & Wilkinson, A. 1990, ApJ, 357, 426
Appleton, P. N. & Struck-Marcell, C. 1996, Fund. of Cosmic Physics, 16, 111
Arnaboldi, M., Da Costa, G. S. & Saha, P. 1997, eds, The Nature of Elliptical Galaxies, ASP Conf. No. 116
Barnes, J. E. 1998, in “Galaxies: Interactions and Induced Star Formation”, Saas-Fee Advanced Course No. 26 (Springer, Berlin), p. 275
Bland-Hawthorn, J. & Maloney, P. R. 1999, ApJ, 510, 33
Carilli, C. L. & van Gorkom, J. H. 1992, ApJ, 399, 373
Chengalur, J. N., Giovanelli, R. & Haynes, M. P. 1995, AJ, 109, 2415
Combes, F. 1978, A&A, 65, 47
Combes, F., Dupraz, C., Casoli, F., & Pagani, L. 1988, A&A, 203, L9
Corbelli, E. & Salpeter, E. E. 1993, ApJ, 419, 104
Duc, P.-A. 1995, Ph.D. Thesis, Université Paris
Duc, P.-A., Brinks, E., Wink, J. E., & Mirabel, I. F. 1997, A&A, 326, 537
Giovanelli, R., Williams, J. P. & Haynes, M. P. 1991, AJ, 101, 1242
Hernquist, L., & Spergel, D. N. 1992, ApJ, 399, L117
Hibbard, J. E. & van Gorkom, J. H. 1996, AJ, 111, 655
Hibbard, J. E., Guhathakurta, P., van Gorkom, J. H., & Schweizer, F. 1994, AJ, 107, 67
Hibbard, J. E. & Mihos, J. C. 1995, AJ, 110, 140
Hibbard, J. E., Vacca, W. D. & Yun, M. S. 1999, AJ, submitted
Hibbard, J. E., & Yun, M. S. 1999, AJ, 118, 162
Higdon, J. 1996, ApJ, 467, 241
Hoffman, G. L., Lu, N. Y., Salpeter, E. E. & Connell, B. M. 1999, AJ, 117, 811
Hunsberger, S., Charlton, J., & Zaritsky, D. 1996, ApJ, 462, 50
Hunter, D. A., et al. 1998, ApJ, 495, L47
Hutchings, J. B. 1996, AJ, 111, 712
Impey, C. & Bothun, G. 1989, ApJ, 341, 89
Lim, J. & Ho, P. T. P. 1999, ApJ, 510, L7
Malin, D. F. & Hadley, B. 1997, PASA, 14, 52
Maloney, P. 1993, ApJ, 414, 41
Malphrus, B., Simpson, C., Gottesman, S. & Hawarden, T. 1997, AJ, 114, 1427
Mihos, J. C., & Hernquist, L. 1994, ApJ, 437, L47
Noguchi, M. 1991, MNRAS, 251, 360
Phookun, B., Mundy, L. G., Teuben, P. & Wainscoat, R. J. 1992, ApJ, 400, 516
Pickering, T., Impey, C., van Gorkom, J. & Bothun, G. 1997, AJ, 114, 1858
Sancisi, R. 1997, in “Galaxy Interactions at Low and High Redshift”, IAU Symp. No. 186, eds. D. Sanders & J. Barnes
Schiminovich, D., van Gorkom, J., van der Hulst, J., & Malin, D. 1995, ApJ, 444, L77
Schweizer, F. 1978, in “Structure and Properties of Nearby Galaxies”, IAU Symp. No. 77, eds E. Berkhuijsen & R. Wielebinski (Reidel, Dordrecht), p. 279
Schweizer, F. 1998, in “Galaxies: Interactions and Induced Star Formation”, Saas-Fee Advanced Course No. 26 (Springer, Berlin), p. 105
Schneider, S. E., et al. 1989, AJ, 97, 666
Silva, D. R. & Bothun, G. D. 1998, AJ, 116, 85
Simkin, S., van Gorkom, J., Hibbard, J., & Hong-Jun, S. 1987, Science, 235, 1367
Smith, B. J. 1991, ApJ, 378, 39
Toomre, A., & Toomre, J. 1972, ApJ, 178, 623
Verdes-Montenegro, L., et al. 1999, in “Small Galaxy Groups”, IAU Colloq. 174, eds. M. Valtonen & C. Flynn (SF: ASP)
van der Hulst, J. M. 1979, A&A, 75, 97
van Gorkom, J. H., et al. 1986, AJ, 91, 791
van Moorsel, G. A. 1988, A&A, 202, 59
Weliachew, L., Sancisi, R., & Guelin, M. 1978, A&A, 65, 37
Whitmore, B. C., & Schweizer, F. 1995, AJ, 109, 960
Yun, M. S., Ho, P. T. P., & Lo, K. Y. 1994, Nature, 372, 530
Yun, M. S. 1997, in “Galaxy Interactions at Low and High Redshift”, IAU Symp. No. 186, eds. D. Sanders & J. Barnes
van Zee, L., Skillman, E. D. & Salzer, J. J. 1998, AJ, 116, 1186
Zheng et al. 1999, AJ, 117, 2757
# Communication Complexity Lower Bounds by Polynomials ## 1 Introduction and Statement of Results Communication complexity deals with the following kind of problem. There are two separated parties, usually called Alice and Bob. Alice receives some input $`x\in X`$, Bob receives some $`y\in Y`$, and together they want to compute some function $`f(x,y)`$ which depends on both $`x`$ and $`y`$. Alice and Bob are allowed infinite computational power, but communication between them is expensive and has to be minimized. How many bits do Alice and Bob have to exchange in the worst case in order to be able to compute $`f(x,y)`$? This model was introduced by Yao and has been studied extensively, both for its applications (like lower bounds on VLSI and circuits) and for its own sake. We refer to for definitions and results. An interesting variant of the above is quantum communication complexity: suppose that Alice and Bob each have a quantum computer at their disposal and are allowed to exchange quantum bits (qubits) and/or can make use of the quantum correlations given by pre-shared EPR-pairs (these are entangled 2-qubit states $`\frac{1}{\sqrt{2}}(|00\rangle +|11\rangle )`$ of which Alice has the first qubit and Bob the second) — can they do with less communication than in the classical case? The answer is yes. Quantum communication complexity was first considered by Yao and the first example where quantum beats classical communication complexity was given in . Bigger (even exponential) gaps have been shown since . The question arises how big the gaps between quantum and classical can be for various (classes of) functions. In order to answer this, we need to exhibit limits on the power of quantum communication complexity, i.e. establish lower bounds — few of which are known currently. The main purpose of this paper is to develop tools for proving lower bounds on quantum communication protocols. We present some new lower bounds for the case where $`f`$ is a total Boolean function. Most of our bounds apply only to exact quantum protocols, which always output the correct answer. However, we also have some extensions of our techniques to the case of bounded-error quantum protocols. ### 1.1 Lower bounds for exact protocols Let $`D(f)`$ denote the classical deterministic communication complexity of $`f`$, $`Q(f)`$ the qubit communication complexity, and $`Q^{*}(f)`$ the qubit communication required if Alice and Bob can also make use of an unlimited supply of pre-shared EPR-pairs. Clearly $`Q^{*}(f)\le Q(f)\le D(f)`$. Ultimately, we would like to show that $`Q^{*}(f)`$ and $`D(f)`$ are polynomially related for all total functions $`f`$ (as are their query complexity counterparts ). This requires stronger lower bound tools than we have at present. Some lower bound methods are available for $`Q(f)`$ , but the only lower bound known for $`Q^{*}(f)`$ is for the inner product function . A strong and well known lower bound for $`D(f)`$ is given by the logarithm of the rank of the communication matrix for $`f`$ . As first noted in , techniques of imply that an $`\mathrm{\Omega }(\mathrm{log}rank(f))`$ bound also holds for $`Q(f)`$. Our first result is to extend this bound to $`Q^{*}(f)`$ and to derive the optimal constant: $$Q^{*}(f)\ge \frac{\mathrm{log}rank(f)}{2}.$$ (1) This implies $`n/2`$ lower bounds for the $`Q^{*}`$-complexity of the equality and disjointness problems, for which no good bounds were known before.
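To see what bound (1) gives for equality: $`M_{\text{EQ}}`$ is the $`2^n\times 2^n`$ identity matrix, so $`rank(\text{EQ})=2^n`$ and $`Q^{*}(\text{EQ})\ge n/2`$. A brute-force check for small $`n`$ (a sketch, feasible only for small inputs):

```python
# Sketch: the equality matrix is the identity, so rank = 2^n and
# bound (1) gives Q*(EQ) >= n/2.  Brute force for small n.
import numpy as np

n = 4
M_eq = np.eye(2**n)                        # M[x, y] = 1 iff x == y
r = np.linalg.matrix_rank(M_eq)
print("rank =", r, "=> Q* >=", np.log2(r) / 2)   # 16 => 2.0
```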
This $`n/2`$ is tight up to 1 bit, since Alice can send her $`n`$-bit input to Bob with $`n/2`$ qubits and $`n/2`$ EPR-pairs using superdense coding . Our corresponding lower bound also provides a new proof of the optimality of superdense coding. In fact, the same $`n/2`$ bound holds for almost all functions. Furthermore, a proof of the well-known “log-rank conjecture” ($`D(f)\le (\mathrm{log}rank(f))^k`$ for some $`k`$) would now imply our desired polynomial equivalence between $`D(f)`$ and $`Q^{*}(f)`$ (as already noted for $`D(f)`$ and $`Q(f)`$ in ). However, this conjecture is a long standing open question which is probably hard to solve in full generality. Secondly, in order to get an algebraic handle on $`rank(f)`$, we relate it to a property of polynomials. It is well known that every total Boolean function $`g:\{0,1\}^n\to \{0,1\}`$ has a unique representation as a multilinear polynomial in its $`n`$ variables. For the case where Alice and Bob's function has the form $`f(x,y)=g(x\wedge y)`$, we show that $`rank(f)`$ equals the number of monomials $`mon(g)`$ of the polynomial that represents $`g`$ ($`rank(f)\le mon(g)`$ was shown in ). This number of monomials is often easy to count and allows us to determine $`rank(f)`$. The functions $`f(x,y)=g(x\wedge y)`$ form an important class which includes inner product, disjointness, and the functions which give the biggest gaps known between $`D(f)`$ and $`\mathrm{log}rank(f)`$ (similar techniques work for $`f(x,y)=g(x\vee y)`$ or $`g(x\oplus y)`$). We use this to show that $`Q^{*}(f)\in \mathrm{\Theta }(D(f))`$ if $`g`$ is symmetric. In this case we also show that $`D(f)`$ is close to the classical randomized complexity. Furthermore, $`Q^{*}(f)\le D(f)\le O(Q^{*}(f)^2)`$ if $`g`$ is monotone. For the latter result we rederive a result of Lovász and Saks using our tools. ### 1.2 Lower bounds for bounded-error protocols For the case of bounded-error quantum communication protocols, very few lower bounds are currently known (exceptions are inner product and the general discrepancy bound ). In particular, no good lower bounds are known for the disjointness problem. The best known upper bound for this is $`O(\sqrt{n}\mathrm{log}n)`$ qubits , contrasting with the linear classical randomized complexity . Since disjointness is a co-NP-complete communication problem , a good lower bound for this problem would imply lower bounds for all NP-hard communication problems. In order to attack this problem, we make an effort to extend the above polynomial-based approach to bounded-error protocols. We consider the approximate rank $`\stackrel{~}{rank}(f)`$, and show the bound $`Q_2(f)\ge (\mathrm{log}\stackrel{~}{rank}(f))/2`$ for 2-sided bounded-error qubit protocols (again using techniques from ). Unfortunately, lower bounds on $`\stackrel{~}{rank}(f)`$ are much harder to obtain than for $`rank(f)`$. If we could prove for the case $`f(x,y)=g(x\wedge y)`$ that $`\stackrel{~}{rank}(f)`$ roughly equals the number of monomials $`\stackrel{~}{mon}(g)`$ of an approximating polynomial for $`g`$, then a $`\sqrt{n}`$ lower bound would follow for disjointness, because we show that this requires at least $`2^{\sqrt{n}}`$ monomials to approximate. Since we prove that the quantities $`rank(f)`$ and $`mon(g)`$ are in fact equal in the exact case, this gives some hope for a similar result $`\stackrel{~}{rank}(f)\approx \stackrel{~}{mon}(g)`$ in the approximating case, and hence for resolving the complexity of disjointness.
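As a concrete instance of the exact-case equality $`rank(f)=mon(g)`$: for disjointness $`g=\text{NOR}`$, and the representing polynomial $`\text{NOR}(z)=\prod _{i=1}^n(1-z_i)`$ expands into one signed monomial for every subset of the variables, i.e. $`mon(\text{NOR})=2^n`$; correspondingly $`M_{\text{DISJ}}`$ has full rank $`2^n`$. A brute-force illustration of this statement (our own sketch, small $`n`$ only):

```python
# Sketch: for f(x,y) = NOR(x AND y) (disjointness), rank(M_f) = 2^n,
# matching mon(NOR) = 2^n since NOR(z) = prod_i (1 - z_i) expands into
# one signed monomial for every subset of the variables.
import numpy as np

n = 4
size = 2**n
M = np.zeros((size, size))
for x in range(size):
    for y in range(size):
        M[x, y] = 1.0 if (x & y) == 0 else 0.0   # DISJ on bit strings
print(np.linalg.matrix_rank(M), "==", 2**n)      # 16 == 16
```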
The specific bounds that we actually were able to prove for disjointness are more limited at this point: $`Q_2^{}(\text{DISJ})\in \mathrm{\Omega }(\mathrm{log}n)`$ for the general case (by an extension of techniques of ; the $`\mathrm{log}n`$ bound without entanglement was already known ), $`Q_2^{}(\text{DISJ})\in \mathrm{\Omega }(n)`$ for 1-round protocols (using a result of ), and $`Q_2(\text{DISJ})\in \mathrm{\Omega }(n)`$ if the error probability has to be $`<2^{-n}`$. Below we sum up the main results, contrasting the exact and bounded-error case. * We show that $`Q^{}(f)\ge \mathrm{log}rank(f)/2`$ for exact protocols with unlimited prior EPR-pairs and $`Q_2(f)\ge \mathrm{log}\stackrel{~}{rank}(f)/2`$ for qubit protocols without prior EPR-pairs. * If $`f(x,y)=g(x\wedge y)`$ for some Boolean function $`g`$, then $`rank(f)=mon(g)`$. An analogous result $`\stackrel{~}{rank}(f)\approx \stackrel{~}{mon}(g)`$ for the approximate case is open. * A polynomial for disjointness, $`\text{DISJ}(x,y)=\text{NOR}(x\wedge y)`$, requires $`2^n`$ monomials in the exact case (implying $`Q^{}(\text{DISJ})\ge n/2`$), and roughly $`2^\sqrt{n}`$ monomials in the approximate case. ## 2 Preliminaries We use $`|x|`$ to denote the Hamming weight (number of 1s) of $`x\in \{0,1\}^n`$, $`x_i`$ for the $`i`$th bit of $`x`$ ($`x_0=0`$), and $`e_i`$ for the string whose only 1 occurs at position $`i`$. If $`x,y\in \{0,1\}^n`$, we use $`x\wedge y\in \{0,1\}^n`$ for the string obtained by bitwise ANDing $`x`$ and $`y`$, and similarly $`x\vee y`$. Let $`g:\{0,1\}^n\to \{0,1\}`$ be a Boolean function. We call $`g`$ symmetric if $`g(x)`$ only depends on $`|x|`$, and monotone if $`g`$ cannot decrease if we set more variables to 1. It is well known that each $`g:\{0,1\}^n\to R`$ has a unique representation as a multilinear polynomial $`g(x)=_{S\subseteq \{1,\mathrm{},n\}}a_SX_S`$, where $`X_S`$ is the product of the variables in $`S`$ and $`a_S`$ is a real number. The term $`a_SX_S`$ is called a monomial of $`g`$ and $`mon(g)`$ denotes the number of non-zero monomials of $`g`$. A polynomial $`p`$ approximates $`g`$ if $`|g(x)-p(x)|\le 1/3`$ for all $`x\in \{0,1\}^n`$. We use $`\stackrel{~}{mon}(g)`$ for the minimal number of monomials among all polynomials which approximate $`g`$. The degree of a monomial is the number of its variables, and the degree of a polynomial is the largest degree of its monomials. Let $`X`$ and $`Y`$ be finite sets (usually $`X=Y=\{0,1\}^n`$) and $`f:X\times Y\to \{0,1\}`$ be a Boolean function. For example, equality has $`\text{EQ}(x,y)=1`$ iff $`x=y`$, disjointness has $`\text{DISJ}(x,y)=1`$ iff $`|x\wedge y|=0`$ (equivalently, $`\text{DISJ}(x,y)=\text{NOR}(x\wedge y)`$), and inner product has $`\text{IP}(x,y)=1`$ iff $`|x\wedge y|`$ is odd. $`M_f`$ denotes the $`|X|\times |Y|`$ Boolean matrix whose $`x,y`$ entry is $`f(x,y)`$, and $`rank(f)`$ denotes the rank of $`M_f`$ over the reals. A rectangle is a subset $`R=S\times T\subseteq X\times Y`$ of the domain of $`f`$. A 1-cover for $`f`$ is a set of rectangles which covers all and only 1s in $`M_f`$. $`C^1(f)`$ denotes the minimal size of a 1-cover for $`f`$. For $`m\ge 1`$, we use $`f^m`$ to denote the Boolean function which is the AND of $`m`$ independent instances of $`f`$. That is, $`f^m:X^m\times Y^m\to \{0,1\}`$ and $`f^m(x_1,\mathrm{},x_m,y_1,\mathrm{},y_m)=f(x_1,y_1)\wedge f(x_2,y_2)\wedge \mathrm{}\wedge f(x_m,y_m)`$. Note that $`M_{f^2}`$ is the Kronecker product $`M_f\otimes M_f`$ and hence $`rank(f^m)=rank(f)^m`$. Alice and Bob want to compute some $`f:X\times Y\to \{0,1\}`$. After the protocol they should both know $`f(x,y)`$. Their system has three parts: Alice’s part, the 1-qubit channel, and Bob’s part.
For definitions of quantum states and operations, we refer to . In the initial state, Alice and Bob share $`k`$ EPR-pairs and all other qubits are zero. For simplicity we assume Alice and Bob send 1 qubit in turn, and at the end the output bit of the protocol is put on the channel. The assumption that 1 qubit is sent per round can be replaced by a fixed number of qubits $`q_i`$ for the $`i`$th round. However, in order to be able to run a quantum protocol on a superposition of inputs, it is important that the number of qubits sent in the $`i`$th round is independent of the input $`(x,y)`$. An $`\mathrm{}`$-qubit protocol is described by unitary transformations $`U_1(x),U_2(y),U_3(x),U_4(y),\mathrm{},U_{\mathrm{}}(x/y)`$. First Alice applies $`U_1(x)`$ to her part and the channel, then Bob applies $`U_2(y)`$ to his part and the channel, etc. $`Q(f)`$ denotes the (worst-case) cost of an optimal qubit protocol that computes $`f`$ exactly without prior entanglement, $`C^{}(f)`$ denotes the cost of a protocol that communicates classical bits but can make use of an unlimited (but finite) number of shared EPR-pairs, and $`Q^{}(f)`$ is the cost of a qubit protocol that can use shared EPR-pairs. $`Q_c(f)`$ denotes the cost of a clean qubit protocol without prior entanglement, i.e. a protocol that starts with $`|0\rangle |0\rangle |0\rangle `$ and ends with $`|0\rangle |f(x,y)\rangle |0\rangle `$. We add the superscript “1 round” for 1-round protocols, where Alice sends a message to Bob and Bob then sends the output bit. Some simple relations that hold between these measures are $`Q^{}(f)\le Q(f)\le D(f)\le D^{1round}(f)`$, $`Q(f)\le Q_c(f)\le 2Q(f)`$ and $`Q^{}(f)\le C^{}(f)\le 2Q^{}(f)`$ . For bounded-error protocols we analogously define $`Q_2(f)`$, $`Q_2^{}(f)`$, $`C_2^{}(f)`$ for quantum protocols that give the correct answer with probability at least $`2/3`$ on every input. We use $`R_2^{pub}(f)`$ for the classical bounded-error complexity in the public-coin model . ## 3 Log-Rank Lower Bound As first noted in , techniques from imply $`Q(f)\in \mathrm{\Omega }(\mathrm{log}rank(f))`$. For completeness we prove the following $`\mathrm{log}rank(f)`$ bound for clean quantum protocols in Appendix A. This implies $`Q(f)\ge \mathrm{log}rank(f)/2`$. We then extend this to the case where Alice and Bob share prior entanglement:<sup>1</sup><sup>1</sup>1During discussions we had with Michael Nielsen in Cambridge in the summer of 1999, it appeared that an equivalent result can be derived from results about Schmidt numbers in \[25, Section 6.4.2\]. ###### Theorem 1 $`Q_c(f)\ge \mathrm{log}rank(f)+1`$. ###### Theorem 2 $`Q^{}(f)\ge {\displaystyle \frac{\mathrm{log}rank(f)}{2}}`$. Proof Suppose we have some exact protocol for $`f`$ that uses $`\mathrm{}`$ qubits of communication and $`k`$ prior EPR-pairs. We will build a clean qubit protocol without prior entanglement for $`f^m`$. First Alice makes $`k`$ EPR-pairs and sends one half of each pair to Bob (at a cost of $`k`$ qubits of communication). Now they run the protocol to compute the first instance of $`f`$ ($`\mathrm{}`$ qubits of communication). Alice copies the answer to a safe place which we will call the ‘answer bit’ and they reverse the protocol (again $`\mathrm{}`$ qubits of communication). This gives them back the $`k`$ EPR-pairs, which they can reuse. Now they compute the second instance of $`f`$, Alice ANDs the answer into the answer bit (which can be done cleanly), and they reverse the protocol, etc.
After all $`m`$ instances of $`f`$ have been computed, Alice and Bob have the answer $`f^m(x,y)`$ left and the $`k`$ EPR-pairs, which they uncompute using another $`k`$ qubits of communication. This gives a clean protocol for $`f^m`$ that uses $`2m\mathrm{}+2k`$ qubits and no prior entanglement. By Theorem 1: $$2m\mathrm{}+2k\ge Q_c(f^m)\ge \mathrm{log}rank(f^m)+1=m\mathrm{log}rank(f)+1,$$ hence $$\mathrm{}\ge \frac{\mathrm{log}rank(f)}{2}-\frac{2k-1}{2m}.$$ Since this must hold for every $`m>0`$, the theorem follows. $`\mathrm{}`$ We can derive a stronger bound for $`C^{}(f)`$: ###### Theorem 3 $`C^{}(f)\ge \mathrm{log}rank(f)`$. Proof Since a qubit and an EPR-pair can be used to send 2 classical bits , we can devise a qubit protocol for $`ff`$ using $`C^{}(f)`$ qubits (compute the two copies of $`f`$ in parallel using the classical bit protocol). Hence by the previous theorem $`C^{}(f)\ge Q^{}(ff)\ge (\mathrm{log}rank(ff))/2=\mathrm{log}rank(f)`$. $`\mathrm{}`$ Below we draw some consequences from these log-rank lower bounds. Firstly, $`M_{\mathrm{EQ}}`$ is the identity matrix, so $`rank(\text{EQ})=2^n`$. This gives the bounds $`Q^{}(\text{EQ})\ge n/2`$, $`C^{}(\text{EQ})\ge n`$ (in contrast, $`Q_2(\text{EQ})\in \mathrm{\Theta }(\mathrm{log}n)`$ and $`C_2^{}(\text{EQ})\in O(1)`$). The disjointness function on $`n`$ bits is the AND of $`n`$ disjointnesses on 1 bit (which have rank 2 each), so $`rank(\text{DISJ})=2^n`$. The complement of the inner product function has $`rank(f)=2^n`$. Thus we have the following strong lower bounds, all tight up to 1 bit:<sup>2</sup><sup>2</sup>2The same bounds for IP are also given in . The bounds for EQ and DISJ are new, and can also be shown to hold for zero-error quantum protocols. ###### Corollary 1 $`Q^{}(\text{EQ}),Q^{}(\text{DISJ}),Q^{}(\text{IP})\ge n/2`$ and $`C^{}(\text{EQ}),C^{}(\text{DISJ}),C^{}(\text{IP})\ge n`$. Komlós has shown that the fraction of $`m\times m`$ Boolean matrices that have determinant 0 goes to 0 as $`m\to \mathrm{}`$. Hence almost all $`2^n\times 2^n`$ Boolean matrices have full rank $`2^n`$, which implies that almost all functions have maximal quantum communication complexity: ###### Corollary 2 Almost all $`f:\{0,1\}^n\times \{0,1\}^n\to \{0,1\}`$ have $`Q^{}(f)\ge n/2`$ and $`C^{}(f)\ge n`$. We say $`f`$ satisfies the quantum direct sum property if computing $`m`$ independent copies of $`f`$ (without prior entanglement) takes $`mQ(f)`$ qubits of communication in the worst case. (We have no example of an $`f`$ without this property.) Using the same technique as before, we can prove an equivalence between the qubit models with and without prior entanglement for such $`f`$: ###### Corollary 3 If $`f`$ satisfies the quantum direct sum property, then $`Q^{}(f)\le Q(f)\le 2Q^{}(f)`$. Proof $`Q^{}(f)\le Q(f)`$ is obvious. Using the techniques of Theorem 2 we have $`mQ(f)\le 2mQ^{}(f)+k`$, for all $`m`$ and some fixed $`k`$, hence $`Q(f)\le 2Q^{}(f)`$. $`\mathrm{}`$ Finally, because of Theorem 2, the well-known “log-rank conjecture” now implies the polynomial equivalence of deterministic classical communication complexity and exact quantum communication complexity (with or without prior entanglement) for all total $`f`$: ###### Corollary 4 If $`D(f)\in O((\mathrm{log}rank(f))^k)`$, then $`Q^{}(f)\le Q(f)\le D(f)\in O(Q^{}(f)^k)`$ for all $`f`$. ## 4 A Lower Bound Technique via Polynomials ### 4.1 Decompositions and polynomials The previous section showed that lower bounds on $`rank(f)`$ imply lower bounds on $`Q^{}(f)`$.
In this section we relate $`rank(f)`$ to the number of monomials of a polynomial for $`f`$ and use this to prove lower bounds for some classes of functions. We define the decomposition number $`m(f)`$ of some function $`f:\{0,1\}^n\times \{0,1\}^nR`$ as the minimum $`m`$ such that there exist functions $`a_1(x),\mathrm{},a_m(x)`$ and $`b_1(y),\mathrm{},b_m(y)`$ (from $`R^n`$ to $`R`$) for which $`f(x,y)=_{i=1}^ma_i(x)b_i(y)`$ for all $`x,y`$. We say that $`f`$ can be decomposed into the $`m`$ functions $`a_ib_i`$. Without loss of generality, the functions $`a_i,b_i`$ may be assumed to be multilinear polynomials. It turns out that the decomposition number equals the rank:<sup>3</sup><sup>3</sup>3The first part of the proof employs a technique of Nisan and Wigderson . They used this to prove $`\mathrm{log}rank(f)O(n^{\mathrm{log}_32})`$ for a specific $`f`$. Our Corollary 6 below implies that this is tight: $`\mathrm{log}rank(f)\mathrm{\Theta }(n^{\mathrm{log}_32})`$ for their $`f`$. ###### Lemma 1 $`rank(f)=m(f)`$. Proof $`\mathrm{𝐫𝐚𝐧𝐤}(𝐟)𝐦(𝐟)`$: Let $`f(x,y)=_{i=1}^ma_i(x)b_i(y)`$, $`M_i`$ be the matrix defined by $`M_i(x,y)=a_i(x)b_i(y)`$, $`r_i`$ be the row vector whose $`y`$th entry is $`b_i(y)`$. Note that the $`x`$th row of $`M_i`$ is $`a_i(x)`$ times $`r_i`$. Thus all rows of $`M_i`$ are scalar multiples of each other, hence $`M_i`$ has rank 1. Since $`rank(A+B)rank(A)+rank(B)`$ and $`M_f=_{i=1}^{m(f)}M_i`$, we have $`rank(f)=rank(M_f)_{i=1}^{m(f)}rank(M_i)=m(f)`$. $`𝐦(𝐟)\mathrm{𝐫𝐚𝐧𝐤}(𝐟)`$: Suppose $`rank(f)=r`$. Then there are $`r`$ columns $`c_1,\mathrm{},c_r`$ in $`M_f`$ which span the column space of $`M_f`$. Let $`A`$ be the $`2^n\times r`$ matrix that has these $`c_i`$ as columns. Let $`B`$ be the $`r\times 2^n`$ matrix whose $`i`$th column is formed by the $`r`$ coefficients of the $`i`$th column of $`M_f`$ when written out as a linear combination of $`c_1,\mathrm{},c_r`$. Then $`M_f=AB`$, hence $`f(x,y)=M_f(x,y)=_{i=1}^rA_{xi}B_{iy}.`$ Defining functions $`a_i,b_i`$ by $`a_i(x)=A_{xi}`$ and $`b_i(y)=B_{iy}`$, we have $`m(f)rank(f)`$. $`\mathrm{}`$ Combined with Theorems 2 and 3 we obtain ###### Corollary 5 $`Q^{}(f){\displaystyle \frac{\mathrm{log}m(f)}{2}}`$ and $`C^{}(f)\mathrm{log}m(f)`$. Accordingly, for lower bounds on quantum communication complexity it is important to be able to determine the decomposition number $`m(f)`$. Often this is hard. It is much easier to determine the number of monomials $`mon(f)`$ of $`f`$ (which upper bounds $`m(f)`$). Below we show that in the special case where $`f(x,y)=g(xy)`$, these two numbers are the same.<sup>4</sup><sup>4</sup>4After learning about this result, Mario Szegedy (personal communication) came up with an alternative proof of this, using Fourier transforms. Below, a monomial is called even if it contains $`x_i`$ iff it contains $`y_i`$, for example $`2x_1x_3y_1y_3`$ is even and $`x_1x_3y_1`$ is not. A polynomial is even if each of its monomials is even. ###### Lemma 2 If $`p:\{0,1\}^n\times \{0,1\}^nR`$ is an even polynomial with $`k`$ monomials, then $`m(p)=k`$. Proof Clearly $`m(p)k`$. To prove the converse, consider $`\text{DISJ}(x,y)=\mathrm{\Pi }_{i=1}^n(1x_iy_i)`$, the unique polynomial for the disjointness function. Note that this polynomial contains all and only even monomials (with coefficients $`\pm 1`$). Since DISJ has rank $`2^n`$, it follows from Lemma 1 that DISJ cannot be decomposed in fewer then $`2^n`$ terms. 
We will show how a decomposition of $`p`$ with $`m(p)<k`$ would give rise to a decomposition of DISJ with fewer than $`2^n`$ terms. Suppose we can write $$p(x,y)=\underset{i=1}{\overset{m(p)}{}}a_i(x)b_i(y).$$ Let $`aX_SY_S`$ be some even monomial in $`p`$ and suppose the monomial $`X_SY_S`$ in DISJ has coefficient $`c=\pm 1`$. Now whenever $`bX_S`$ occurs in some $`a_i`$, replace that $`bX_S`$ by $`(cb/a)X_S`$. Using the fact that $`p`$ contains only even monomials, it is not hard to see that the new polynomial obtained in this way is the same as $`p`$, except that the monomial $`aX_SY_S`$ is replaced by $`cX_SY_S`$. Doing this sequentially for all monomials in $`p`$, we end up with a polynomial $`p^{}`$ (with $`k`$ monomials and $`m(p^{})m(p)`$) which is a subpolynomial of DISJ, in the sense that each monomial in $`p^{}`$ also occurs with the same coefficient in DISJ. Notice that by adding all $`2^nk`$ missing DISJ-monomials to $`p^{}`$, we obtain a decomposition of DISJ with $`m(p^{})+2^nk`$ terms. But any such decomposition needs at least $`2^n`$ terms, hence $`m(p^{})+2^nk2^n`$, which implies $`km(p^{})m(p)`$. $`\mathrm{}`$ If $`f(x,y)=g(xy)`$ for some Boolean function $`g`$, then the polynomial that represents $`f`$ is just the polynomial of $`g`$ with the $`i`$th variable replaced by $`x_iy_i`$. Hence such a polynomial is even, and we obtain: ###### Corollary 6 If $`g:\{0,1\}^n\{0,1\}`$ and $`f(x,y)=g(xy)`$, then $`mon(g)=mon(f)=m(f)=rank(f)`$. This gives a strong tool for lower bounding (quantum and classical) communication complexity whenever $`f`$ is of the form $`f(x,y)=g(xy)`$: $`\mathrm{log}mon(g)C^{}(f)D(f)`$. Below we give some applications. ### 4.2 Symmetric functions As a first application we show that $`D(f)`$ and $`Q^{}(f)`$ are linearly related if $`f(x,y)=g(xy)`$ and $`g`$ is symmetric (this follows from Corollary 8 below). Furthermore, we show that the classical randomized public-coin complexity $`R_2^{pub}(f)`$ can be at most a $`\mathrm{log}n`$-factor less than $`D(f)`$ for such $`f`$ (Theorem 4). We will assume without loss of generality that $`g(\stackrel{}{0})=0`$, so the polynomial representing $`g`$ does not have the constant-1 monomial. ###### Lemma 3 If $`g`$ is a symmetric function whose lowest-weight 1-input has Hamming weight $`t>0`$ and $`f(x,y)=g(xy)`$, then $`D^{1round}(f)=\mathrm{log}\left(_{i=t}^n\left(\genfrac{}{}{0pt}{}{n}{i}\right)+1\right)+1`$. Proof It is known (and easy to see) that $`D^{1round}(f)=\mathrm{log}r+1`$, where $`r`$ is the number of different rows of $`M_f`$ (this equals the number of different columns in our case, because $`f(x,y)=f(y,x)`$). We count $`r`$. Firstly, if $`|x|<t`$ then the $`x`$-row contains only zeroes. Secondly, if $`xx^{}`$ and both $`|x|t`$ and $`|x^{}|t`$ then it is easy to see that there exists a $`y`$ such that $`|xy|=t`$ and $`|x^{}y|<t`$ (or vice versa), hence $`f(x,y)f(x^{},y)`$ so the $`x`$-row and $`x^{}`$-row are different. Accordingly, $`r`$ equals the number of different $`x`$ with $`|x|t`$, $`+1`$ for the 0-row, which gives the lemma. $`\mathrm{}`$ ###### Lemma 4 If $`g`$ is a symmetric function whose lowest-weight 1-input has weight $`t>0`$, then $`(1o(1))\mathrm{log}\left(_{i=t}^n\left(\genfrac{}{}{0pt}{}{n}{i}\right)\right)\mathrm{log}mon(g)\mathrm{log}\left(_{i=t}^n\left(\genfrac{}{}{0pt}{}{n}{i}\right)\right).`$ Proof The upper bound follows from the fact that $`g`$ cannot have monomials of degree $`<t`$. For the lower bound we distinguish two cases. Case 1: $`𝐭𝐧/\mathrm{𝟐}`$. 
It is known that every symmetric $`g`$ has degree $`deg(g)=nO(n^{0.548})`$ . That is, an interval $`I=[a,n]`$ such that $`g`$ has no monomials of any degree $`dI`$ has length at most $`O(n^{0.548})`$. This implies that every interval $`I=[a,b]`$ ($`bt`$) such that $`g`$ has no monomials of any degree $`dI`$ has length at most $`O(n^{0.548})`$ (by setting $`nb`$ variables to 0, we can reduce to a function on $`b`$ variables where $`I`$ occurs “at the end”). Since $`g`$ must have monomials of degree $`tn/2`$, $`g`$ must contain a monomial of degree $`d`$ for some $`d[n/2,n/2+O(n^{0.548})]`$. But because $`g`$ is symmetric, it must then contain all $`\left(\genfrac{}{}{0pt}{}{n}{d}\right)`$ monomials of degree $`d`$. Hence by Stirling’s approximation $`mon(g)\left(\genfrac{}{}{0pt}{}{n}{d}\right)2^{nO(n^{0.548})}`$, which implies the lemma. Case 2: $`𝐭>𝐧/\mathrm{𝟐}`$. It is easy to see that $`g`$ must contain all $`\left(\genfrac{}{}{0pt}{}{n}{t}\right)`$ monomials of degree $`t`$. Now $$(nt+1)mon(g)(nt+1)\left(\genfrac{}{}{0pt}{}{n}{t}\right)\underset{i=t}{\overset{n}{}}\left(\genfrac{}{}{0pt}{}{n}{i}\right).$$ Hence $`\mathrm{log}mon(g)\mathrm{log}\left(_{i=t}^n\left(\genfrac{}{}{0pt}{}{n}{i}\right)\right)\mathrm{log}(nt+1)=(1o(1))\mathrm{log}\left(_{i=t}^n\left(\genfrac{}{}{0pt}{}{n}{i}\right)\right)`$. $`\mathrm{}`$ The number $`mon(g)`$ may be less then $`_{i=t}^n\left(\genfrac{}{}{0pt}{}{n}{i}\right)`$. Consider the function $`g(x_1,x_2,x_3)=x_1+x_2+x_3x_1x_2x_1x_3x_2x_3`$ . Here $`mon(g)=6`$ but $`_{i=1}^3\left(\genfrac{}{}{0pt}{}{3}{i}\right)=7`$. Hence the $`1o(1)`$ of Lemma 4 cannot be improved to $`1`$ in general (it can if $`g`$ is a threshold function). Combining the previous results: ###### Corollary 7 If $`g`$ is a symmetric function whose lowest-weight 1-input has weight $`t>0`$ and $`f(x,y)=g(xy)`$, then $`(1o(1))\mathrm{log}\left(_{i=t}^n\left(\genfrac{}{}{0pt}{}{n}{i}\right)\right)C^{}(f)D(f)D^{1round}(f)=\mathrm{log}\left(_{i=t}^n\left(\genfrac{}{}{0pt}{}{n}{i}\right)+1\right)+1.`$ Accordingly, for symmetric $`g`$ the communication complexity (quantum and classical, with or without prior entanglement, 1-round and multi-round) equals $`\mathrm{log}rank(f)`$ up to small constant factors. In particular: ###### Corollary 8 If $`g`$ is symmetric and $`f(x,y)=g(xy)`$, then $`(1o(1))D(f)C^{}(f)D(f)`$. We have shown that $`Q^{}(f)`$ and $`D(f)`$ are equal up to constant factors whenever $`f(x,y)=g(xy)`$ and $`g`$ is symmetric. For such $`f`$, $`D(f)`$ is also nearly equal to the classical bounded-error communication complexity $`R_2^{pub}(f)`$, where we allow Alice and Bob to share public coin flips. In order to prove this, we introduce the notion of 0-block sensitivity in analogy to the notion of block sensitivity of Nisan . For input $`x\{0,1\}^n`$, let $`\mathrm{bs0}_x(g)`$ be the maximal number of disjoint sets $`S_1,\mathrm{},S_b`$ of indices of variables, such that for every $`i`$ we have (1) all $`S_i`$-variables have value 0 in $`x`$ and (2) $`g(x)g(x^{S_i})`$, where $`x^{S_i}`$ is the string obtained from $`x`$ by setting all $`S_i`$-variables to 1. Let $`\mathrm{bs0}(g)=\mathrm{max}_x\mathrm{bs0}_x(g)`$. We now have: ###### Lemma 5 If $`g`$ is a symmetric function, then $`mon(g)n^{2\mathrm{b}\mathrm{s}0(g)}`$. Proof Let $`t`$ be the smallest number such that $`g_tg_{t+1}`$, then $`\mathrm{bs0}(g)nt`$. If $`tn/2`$ then $`\mathrm{bs0}(g)n/2`$, so $`mon(g)2^nn^{2\mathrm{b}\mathrm{s}0(g)}`$. 
If $`t>n/2`$ then $`g`$ has no monomials of degree $`t`$, hence $`mon(g)_{i=t+1}^n\left(\genfrac{}{}{0pt}{}{n}{i}\right)n^{2\mathrm{b}\mathrm{s}0(g)}.`$ $`\mathrm{}`$ ###### Theorem 4 If $`g`$ is a symmetric function and $`f(x,y)=g(xy)`$, then $`D(f)O(R_2^{pub}(f)\mathrm{log}n)`$. Proof By Corollary 7 we have $`D(f)(1+o(1))\mathrm{log}mon(g)`$. Lemma 5 implies $`D(f)O(\mathrm{bs0}(g)\mathrm{log}n)`$. Using Razborov’s lower bound technique for disjointness (see also \[20, Section 4.6\]) we can easily show $`R_2^{pub}(f)\mathrm{\Omega }(\mathrm{bs0}(f))`$, which implies the theorem. $`\mathrm{}`$ This theorem is tight for the function defined by $`g(x)=1`$ iff $`|x|n1`$. We have $`mon(g)=n+1`$, so $`\mathrm{log}nD(f)(1+o(1))\mathrm{log}n`$. On the other hand, an $`O(1)`$ bounded-error public coin protocol can easily be derived from the well-known $`O(1)`$-protocol for equality: Alice tests if $`|x|<n1`$, sends a 0 if so and a 1 if not. In the first case Alice and Bob know that $`f(x,y)=0`$. In the second case, we have $`f(x,y)=1`$ iff $`x=y`$ or $`y=\stackrel{}{1}`$, which can be tested with 2 applications of the equality-protocol. Hence $`R_2^{pub}(f)O(1)`$. ### 4.3 Monotone functions A second application concerns monotone problems. Lovász and Saks prove the log-rank conjecture for (among others) the following problem, which they call the union problem for $`𝐂`$. Here $`𝐂`$ is a monotone set system (i.e. $`(A𝐂AB)B𝐂`$) over some size-$`n`$ universe. Alice and Bob receive sets $`x`$ and $`y`$ (respectively) from this universe, and their task is to determine whether $`xy𝐂`$. Identifying sets with their representation as $`n`$-bit strings, this problem can equivalently be viewed as a function $`f(x,y)=g(xy)`$, where $`g`$ is a monotone increasing Boolean function. Note that it doesn’t really matter whether we take $`g`$ increasing or decreasing, nor whether we use $`xy`$ or $`xy`$, as these problems can all be converted into each other via De Morgan’s laws. Our translation of rank to number of monomials now allows us to rederive the Lovász-Saks result without making use of their combinatorial lattice theoretical machinery. We just need the following, slightly modified, result from their paper (a proof is given in Appendix B): ###### Theorem 5 (Lovász and Saks) $`D(f)(1+\mathrm{log}(C^1(f)+1))(2+\mathrm{log}rank(f))`$. ###### Theorem 6 (Lovász and Saks) If $`g`$ is monotone and $`f(x,y)=g(xy)`$, then $`D(f)O((\mathrm{log}rank(f))^2)`$. Proof Let $`M_1,\mathrm{},M_k`$ be all the minimal monomials in $`g`$. Each $`M_i`$ induces a rectangle $`R_i=S_i\times T_i`$, where $`S_i=\{xM_ix\}`$ and $`T_i=\{yM_iy\}`$. Because $`g`$ is monotone increasing, $`g(z)=1`$ iff $`z`$ makes at least one $`M_i`$ true. Hence $`f(x,y)=1`$ iff there is an $`i`$ such that $`(x,y)R_i`$. Accordingly, the set of $`R_i`$ is a 1-cover for $`f`$ and $`C^1(f)kmon(g)=rank(f)`$ by Corollary 6. Plugging into Theorem 5 gives the theorem. $`\mathrm{}`$ ###### Corollary 9 If $`g`$ is monotone and $`f(x,y)=g(xy)`$, then $`D(f)O(Q^{}(f)^2)`$. This result can be tightened for the special case of $`d`$-level AND-OR-trees. For example, let $`g`$ be a 2-level AND-of-ORs on $`n`$ variables with fan-out $`\sqrt{n}`$ and $`f(x,y)=g(xy)`$. Then $`g`$ has $`(2^\sqrt{n}1)^\sqrt{n}`$ monomials and hence $`Q^{}(f)n/2`$. In contrast, the zero-error quantum complexity of $`f`$ is $`O(n^{3/4}\mathrm{log}n)`$ . ## 5 Bounded-Error Protocols Here we generalize the above approach to bounded-error quantum protocols. 
Define the approximate rank of $`f`$, $`\stackrel{~}{rank}(f)`$, as the minimum rank among all matrices $`M`$ that approximate $`M_f`$ entry-wise up to $`1/3`$. Let the approximate decomposition number $`\stackrel{~}{m}(f)`$ be the minimum $`m`$ such that there exist functions $`a_1(x),\mathrm{},a_m(x)`$ and $`b_1(y),\mathrm{},b_m(y)`$ for which $`|f(x,y)_{i=1}^ma_i(x)b_i(y)|1/3`$ for all $`x,y`$. By the same proof as for Lemma 1 we obtain: ###### Lemma 6 $`\stackrel{~}{rank}(f)=\stackrel{~}{m}(f)`$. By a proof similar to Theorem 1 (again using methods from , see Appendix C) we show ###### Theorem 7 $`Q_2(f){\displaystyle \frac{\mathrm{log}\stackrel{~}{m}(f)}{2}}`$. Unfortunately, it is much harder to prove bounds on $`\stackrel{~}{m}(f)`$ than on $`m(f)`$.<sup>5</sup><sup>5</sup>5It is interesting to note that $`\overline{\text{IP}}`$ (the negation of IP) has less than maximal approximate decomposition number. For example for $`n=2`$, $`m(f)=4`$ but $`\stackrel{~}{m}(f)=3`$. In the exact case we have $`m(f)=mon(g)`$ whenever $`f(x,y)=g(xy)`$, and $`mon(g)`$ is often easy to determine. If something similar is true in the approximate case, then we obtain strong lower bounds on $`Q_2(f)`$, because our next theorem gives a bound on $`\stackrel{~}{mon}(g)`$ in terms of the 0-block sensitivity defined in the previous section (the proof is deferred to Appendix D). ###### Theorem 8 If $`g`$ is a Boolean function, then $`\stackrel{~}{mon}(g)2^{\sqrt{\mathrm{bs0}(g)/12}}.`$ In particular, for $`\text{DISJ}(x,y)=\text{NOR}(xy)`$ it is easy to see that $`\mathrm{bs0}(\text{NOR})=n`$, hence $`\mathrm{log}\stackrel{~}{mon}(\text{NOR})\sqrt{n/12}`$ (the upper bound $`\mathrm{log}\stackrel{~}{mon}(\text{NOR})O(\sqrt{n}\mathrm{log}n)`$ follows from the construction of a degree-$`\sqrt{n}`$ polynomial for OR in ). Consequently, a proof that the approximate decomposition number $`\stackrel{~}{m}(f)`$ roughly equals $`\stackrel{~}{mon}(g)`$ would give $`Q_2(\text{DISJ})\mathrm{\Omega }(\sqrt{n})`$, nearly matching the $`O(\sqrt{n}\mathrm{log}n)`$ upper bound of . Since $`m(f)=mon(g)`$ in the exact case, a result like $`\stackrel{~}{m}(f)\stackrel{~}{mon}(g)`$ might be doable. We end this section by proving some weaker lower bounds for disjointness. Firstly, disjointness has a bounded-error protocol with $`O(\sqrt{n}\mathrm{log}n)`$ qubits and $`O(\sqrt{n})`$ rounds , but if we restrict to 1-round protocols then a linear lower bound follows from a result of Nayak : ###### Theorem 9 $`Q_2^{1round}(\text{DISJ})\mathrm{\Omega }(n)`$. Proof Suppose there exists a 1-round qubit protocol with $`m`$ qubits: Alice sends a message $`M(x)`$ of $`m`$ qubits to Bob, and Bob then has sufficient information to establish whether Alice’s $`x`$ and Bob’s $`y`$ are disjoint. Note that $`M(x)`$ is independent of $`y`$. If Bob’s input is $`y=e_i`$, then $`\text{DISJ}(x,y)`$ is the negation of Alice’s $`i`$th bit. But then the message is an $`(n,m,2/3)`$ quantum random access code : by choosing input $`y=e_i`$ and continuing the protocol, Bob can extract from $`M(x)`$ the $`i`$th bit of Alice (with probability $`2/3`$), for any $`1in`$ of his choice. For this the lower bound $`m(1H(2/3))n>0.08n`$ is known . $`\mathrm{}`$ For multi-round quantum protocols for disjointness with bounded error probability we can only prove a logarithmic lower bound, using a technique from (we omit the proof for reasons of space; for the model without entanglement, the bound $`Q_2(\text{DISJ})\mathrm{\Omega }(\mathrm{log}n)`$ was already shown in ). 
###### Proposition 1 $`Q_2^{}(\text{DISJ})\mathrm{\Omega }(\mathrm{log}n)`$. Finally, for the case where we want to compute disjointness with very small error probability, we can prove an $`\mathrm{\Omega }(n)`$ bound. Here we use the subscript “$`\epsilon `$” to indicate qubit protocols (without prior entanglement) whose error probability is $`\epsilon `$. We first give a bound for equality: ###### Theorem 10 If $`\epsilon <2^n`$, then $`Q_\epsilon (\text{EQ})n/2`$. Proof By Lemma 6 and Theorem 7, it suffices to show that an $`\epsilon `$-approximation of the $`2^n\times 2^n`$ identity matrix $`I`$ requires full rank. Suppose that $`M`$ approximates $`I`$ entry-wise up to $`\epsilon `$ but has rank $`<2^n`$. Then $`M`$ has some eigenvalue $`\lambda =0`$. Gers̆gorin’s Disc Theorem (see \[15, p.31\]) implies that all eigenvalues of $`M`$ are in the set $`_i\{z|zM_{ii}|R_i\},`$ where $`R_i=_{ji}|M_{ij}|`$. But if $`\lambda =0`$ is in this set, then for some $`i`$ $`1\epsilon |M_{ii}|=|\lambda M_{ii}|R_i(2^n1)\epsilon ,`$ hence $`\epsilon 2^n`$, contradiction. $`\mathrm{}`$ We reduce equality to disjointness. Let $`x,y\{0,1\}^n`$. Define $`x^{}\{0,1\}^{2n}`$ by replacing $`x_i`$ by $`x_i\overline{x_i}`$ in $`x`$, and $`y^{}\{0,1\}^{2n}`$ by replacing $`y_i`$ by $`\overline{y_i}y_i`$ in $`y`$. It is easy to see that $`\text{EQ}(x,y)=\text{DISJ}(x^{},y^{})`$ so we have: ###### Corollary 10 If $`\epsilon <2^n`$, then $`Q_\epsilon (\text{DISJ})n/4`$. ## 6 Open Problems To end this paper, we identify three important open questions in quantum communication complexity. First, are $`Q^{}(f)`$ and $`D(f)`$ polynomially related for all total $`f`$, or at least for all $`f`$ of the form $`f(x,y)=g(xy)`$? We have proven this for some special cases here ($`g`$ symmetric or monotone), but the general question remains open. There is a close analogy between the quantum communication complexity lower bounds presented here, and the quantum query complexity bounds obtained in . Let $`deg(g)`$ and $`mon(g)`$ be, respectively, the degree and the number of monomials of the polynomial that represents $`g:\{0,1\}^n\{0,1\}`$. In it was shown that a quantum computer needs at least $`deg(g)/2`$ queries to the $`n`$ variables to compute $`g`$, and that $`O(deg(g)^4)`$ queries suffice (see also ). This implies that classical and quantum query complexity are polynomially related for all total $`f`$. Similarly, we have shown here that $`(\mathrm{log}mon(g))/2`$ qubits need to be communicated to compute $`f(x,y)=g(xy)`$. An analogous upper bound like $`Q^{}(f)O((\mathrm{log}mon(g))^k)`$ might be true. A similar resemblance holds in the bounded-error case. Let $`\stackrel{~}{deg}(g)`$ be the minimum degree of polynomials that approximate $`g`$. In it was shown that a bounded-error quantum computer needs at least $`\stackrel{~}{deg}(g)/2`$ queries to compute $`g`$ and that $`O(\stackrel{~}{deg}(g)^6)`$ queries suffice. Here we showed that $`(\mathrm{log}\stackrel{~}{m}(f))/2`$ qubits of communication are necessary to compute $`f`$. A similar upper bound like $`Q_2(f)O((\mathrm{log}\stackrel{~}{m}(f))^k)`$ may hold. A second open question: how do we prove good lower bounds on bounded-error quantum protocols? Theorems 7 and 8 of the previous section show that $`Q_2(f)`$ is lower bounded by $`\mathrm{log}\stackrel{~}{m}(f)/2`$ and $`\mathrm{log}\stackrel{~}{mon}(g)`$ is lower bounded by $`\sqrt{\mathrm{bs0}(g)}`$. 
If we could show $`\stackrel{~}{m}(f)\stackrel{~}{mon}(g)`$ whenever $`f(x,y)=g(xy)`$, we would have $`Q_2(f)\mathrm{\Omega }(\sqrt{\mathrm{bs0}(g)})`$. Since $`m(f)=mon(g)`$ in the exact case, this may well be true. As mentioned above, this is particularly interesting because it would give a near-optimal lower bound $`Q_2(\text{DISJ})\mathrm{\Omega }(\sqrt{n})`$. Third and last, does prior entanglement add much power to qubit communication, or are $`Q(f)`$ and $`Q^{}(f)`$ roughly equal up to small additive or multiplicative factors? Similarly, are $`Q_2(f)`$ and $`Q_2^{}(f)`$ roughly equal? The biggest gap that we know is $`Q_2(\text{EQ})\mathrm{\Theta }(\mathrm{log}n)`$ versus $`Q_2^{}(\text{EQ})O(1)`$. Acknowledgments. We acknowledge helpful discussions with Alain Tapp, who first came up with the idea of reusing entanglement used in Section 3. We also thank Michael Nielsen, Mario Szegedy, Barbara Terhal for discussions, and John Tromp for help with the proof of Lemma 9 in Appendix D. ## Appendix A Proof of Theorem 1 Here we prove a $`\mathrm{log}rank(f)`$ lower bound for clean qubit protocols. ###### Lemma 7 (Kremer/Yao) The final state of an $`\mathrm{}`$-qubit protocol (without prior entanglement) on input $`(x,y)`$ can be written as $$\underset{i\{0,1\}^{\mathrm{}}}{}\alpha _i(x)\beta _i(y)|A_i(x)|i_{\mathrm{}}|B_i(y),$$ where the $`\alpha _i(x),\beta _i(y)`$ are complex numbers and the $`A_i(x),B_i(y)`$ are unit vectors. Proof The proof is by induction on $`\mathrm{}`$: Base step. For $`\mathrm{}=0`$ the lemma is obvious. Induction step. Suppose after $`\mathrm{}`$ qubits of communication the state can be written as $$\underset{i\{0,1\}^{\mathrm{}}}{}\alpha _i(x)\beta _i(y)|A_i(x)|i_{\mathrm{}}|B_i(y).$$ (2) We assume without loss of generality that it is Alice’s turn: she applies $`U_{\mathrm{}+1}(x)`$ to her part and the channel. Note that there exist complex numbers $`\alpha _{i0}(x),\alpha _{i1}(x)`$ and unit vectors $`A_{i0}(x),A_{i1}(x)`$ such that $$(U_{\mathrm{}+1}(x)I)|A_i(x)|i_{\mathrm{}}|B_i(y)=\alpha _{i0}(x)|A_{i0}(x)|0|B_i(y)+\alpha _{i1}(x)|A_{i1}(x)|1|B_i(y).$$ Thus every element of the superposition (2) “splits in two” when we apply $`U_{\mathrm{}+1}`$. Accordingly, we can write the state after $`U_{\mathrm{}+1}`$ in the form required by the lemma. $`\mathrm{}`$ Theorem 1 $`Q_c(f)\mathrm{log}rank(f)+1`$. Proof Consider a clean $`\mathrm{}`$-qubit protocol for $`f`$. By Lemma 7, we can write its final state as $$\underset{i\{0,1\}^{\mathrm{}}}{}\alpha _i(x)\beta _i(y)|A_i(x)|i_{\mathrm{}}|B_i(y).$$ The protocol is clean, so the final state is $`|0|f(x,y)|0`$. Hence all parts of $`|A_i(x)`$ and $`|B_i(y)`$ other than $`|0`$ will cancel out, and we can assume without loss of generality that $`|A_i(x)=|B_i(y)=|0`$ for all $`i`$. Now the amplitude of the $`|0|1|0`$-state is simply the sum of the amplitudes $`\alpha _i(x)\beta _i(y)`$ of the $`i`$ for which $`i_{\mathrm{}}=1`$. This sum is either 0 or 1, and is the acceptance probability $`P(x,y)`$ of the protocol. Letting $`\alpha (x)`$ (resp. $`\beta (y)`$) be the dimension-$`2^\mathrm{}1`$ vector whose entries are $`\alpha _i(x)`$ (resp. $`\beta _i(y)`$) for the $`i`$ with $`i_{\mathrm{}}=1`$: $$P(x,y)=\underset{i:i_{\mathrm{}}=1}{}\alpha _i(x)\beta _i(y)=\alpha (x)^T\beta (y).$$ Since the protocol is exact, we must have $`P(x,y)=f(x,y)`$. Hence if we define $`A`$ as the $`|X|\times d`$ matrix having the $`\alpha (x)`$ as rows and $`B`$ as the $`d\times |Y|`$ matrix having the $`\beta (y)`$ as columns, then $`M_f=AB`$. 
But now $`rank(M_f)=rank(AB)rank(A)d2^{l1},`$ and the theorem follows. $`\mathrm{}`$ ## Appendix B Proof of Theorem 5 Theorem 5 (Lovász and Saks) $`D(f)(1+\mathrm{log}(C^1(f)+1))(2+\mathrm{log}rank(f))`$. Proof We will first give a protocol based on a 0-cover. Let $`c=C^0(f)`$ and $`R_1,\mathrm{},R_c`$ be an optimal 0-cover. Let $`R_i=S_i\times T_i`$. We will also use $`S_i`$ to denote the $`|S_i|\times 2^n`$ matrix of $`S_i`$-rows and $`T_i`$ for the $`2^n\times |T_i|`$ matrix of $`T_i`$-columns. Call $`R_i`$ type 1 if $`rank(S_i)rank(M_f)/2`$, and type 2 otherwise. Note that $`rank(S_i)+rank(T_i)rank(M_f)`$, hence at least one of $`rank(S_i)`$ and $`rank(T_i)`$ is $`rank(M_f)/2`$. The protocol is specified recursively as follows. Alice checks if her $`x`$ occurs in some type 1 $`R_i`$. If no, then she sends a 0 to Bob; if yes, then she sends the index $`i`$ and they continue with the reduced function $`g`$ (obtained by shrinking Alice’s domain to $`S_i`$), which has $`rank(g)=rank(S_i)rank(M_f)/2`$. If Bob receives a 0, he checks if his $`y`$ occurs in some type 2 $`R_j`$. If no, then he knows that $`(x,y)`$ does not occur in any $`R_i`$, so $`f(x,y)=1`$ and he sends a 0 to Alice to tell her; if yes, then he sends $`j`$ and they continue with the reduced function $`g`$, which has $`rank(g)=rank(T_i)rank(M_f)/2`$ because $`R_j`$ is type 2. Thus Alice and Bob either learn $`f(x,y)`$ or reduce to a function $`g`$ with $`rank(g)rank(f)/2`$, at a cost of at most $`1+\mathrm{log}(c+1)`$ bits. It now follows by induction on the rank that $`D(f)(1+\mathrm{log}(C^0(f)+1))(1+\mathrm{log}rank(f))`$. Noting that $`C^1(f)=C^0(\overline{f})`$ and $`|rank(f)rank(\overline{f})|1`$, we have $`D(f)=D(\overline{f})(1+\mathrm{log}(C^0(\overline{f})+1))(1+\mathrm{log}rank(\overline{f}))(1+\mathrm{log}(C^1(f)+1))(2+\mathrm{log}rank(f))`$. $`\mathrm{}`$ ## Appendix C Proof of Theorem 7 Theorem 7 $`Q_2(f){\displaystyle \frac{\mathrm{log}\stackrel{~}{m}(f)}{2}}`$. Proof By Lemma 7 we can write the final state of an $`\mathrm{}`$-qubit bounded-error protocol for $`f`$ as $$\underset{i\{0,1\}^{\mathrm{}}}{}\alpha _i(x)\beta _i(y)|A_i(x)|i_{\mathrm{}}|B_i(y).$$ Let $`\varphi (x,y)=_{i\{0,1\}^\mathrm{}1}\alpha _{i1}(x)\beta _{i1}(y)|A_{i1}(x)|1|B_{i1}(y)`$ be the part of the final state that corresponds to a 1-output of the protocol. For $`i,j\{0,1\}^\mathrm{}1`$, define functions $`a_{ij},b_{ij}`$ by $$a_{ij}(x)=\overline{\alpha _{i1}(x)}\alpha _{j1}(x)A_{i1}(x)|A_{j1}(x)$$ $$b_{ij}(y)=\overline{\beta _{i1}(y)}\beta _{j1}(y)B_{i1}(y)|B_{j1}(y)$$ Note that the acceptance probability is $$P(x,y)=\varphi (x,y)|\varphi (x,y)=\underset{i,j\{0,1\}^\mathrm{}1}{}a_{ij}(x)b_{ij}(y).$$ We have now decomposed $`P(x,y)`$ into $`2^{2\mathrm{}2}`$ functions. However, we must have $`|P(x,y)f(x,y)|1/3`$ for all $`x,y`$, hence $`2^{2\mathrm{}2}\stackrel{~}{m}(f)`$. It follows that $`\mathrm{}(\mathrm{log}\stackrel{~}{m}(f))/2+1`$. $`\mathrm{}`$ ## Appendix D Proof of Theorem 8 Here we prove Theorem 8. The proof uses some tools from the degree-lower bound proofs of Nisan and Szegedy \[27, Section 3\], including the following result from : ###### Theorem 11 (Ehlich, Zeller; Rivlin, Cheney) Let $`p`$ be a single-variate polynomial of degree $`deg(p)`$ such that $`b_1p(i)b_2`$ for every integer $`0in`$, and the derivative satisfies $`|p^{}(x)|c`$ for some real $`0xn`$. Then $`deg(p)\sqrt{cn/(c+b_2b_1)}`$. A hypergraph is a set system $`H𝒫ow\{1,\mathrm{},n\}`$. The sets $`EH`$ are called the edges of $`H`$. 
We call $`H`$ an $`s`$-hypergraph if all $`EH`$ satisfy $`|E|s`$. A set $`S\{1,\mathrm{},n\}`$ is a blocking set for $`H`$ if it “hits” every edge: $`SE\mathrm{}`$ for all $`EH`$. ###### Lemma 8 Let $`g:\{0,1\}^n\{0,1\}`$ be a Boolean function for which $`g(\stackrel{}{0})=0`$ and $`g(e_i)=1`$, $`p`$ be a multilinear polynomial which approximates $`g`$ (i.e. $`|g(x)p(x)|1/3`$ for all $`x\{0,1\}^n`$), and $`H`$ be the $`\sqrt{n/12}`$-hypergraph formed by the set of all monomials of $`p`$ that have degree $`\sqrt{n/12}`$. Then $`H`$ has no blocking set of size $`n/2`$. Proof Assume, by way of contradiction, that there exists a blocking set $`S`$ of $`H`$ with $`|S|n/2`$. Obtain restrictions $`h`$ and $`q`$ of $`g`$ and $`p`$, respectively, on $`n|S|n/2`$ variables by fixing all $`S`$-variables to 0. Then $`q`$ approximates $`h`$ and all monomials of $`q`$ have degree $`<\sqrt{n/12}`$ (all $`p`$-monomials of higher degree have been set to 0 because $`S`$ is a blocking set for $`H`$). Since $`q`$ approximates $`h`$ we have $`q(\stackrel{}{0})[1/3,1/3]`$, $`q(e_i)[2/3,4/3]`$, and $`q(x)[1/3,4/3]`$ for all other $`x\{0,1\}^n`$. By standard symmetrization techniques , we can turn $`q`$ into a single-variate polynomial $`r`$ of degree $`<\sqrt{n/12}`$, such that $`r(0)[1/3,1/3]`$, $`r(1)[2/3,4/3]`$, and $`r(i)[1/3,4/3]`$ for $`i\{2,\mathrm{},n/2\}`$. Since $`r(0)1/3`$ and $`r(1)2/3`$, we must have $`p^{}(x)1/3`$ for some real $`x[0,1]`$. But then $`deg(r)\sqrt{(1/3)(n/2)/(1/3+4/3+1/3)}=\sqrt{n/12}`$ by Theorem 11, contradiction. $`\mathrm{}`$ The next lemma shows that $`H`$ must be large if it has no blocking set of size $`n/2`$: ###### Lemma 9 If $`H`$ is an $`s`$-hypergraph of size $`m<2^s`$, then $`H`$ has a blocking set of size $`n/2`$. Proof We use the probabilistic method to show the existence of a blocking set $`S`$. Randomly choose a set $`S`$ of $`n/2`$ elements. The probability that $`S`$ does not hit some specific $`EH`$ is $$\frac{\left(\genfrac{}{}{0pt}{}{n|E|}{n/2}\right)}{\left(\genfrac{}{}{0pt}{}{n}{n/2}\right)}=\frac{\frac{n}{2}(\frac{n}{2}1)\mathrm{}(\frac{n}{2}|E|+1)}{n(n1)\mathrm{}(n|E|+1)}2^{|E|}.$$ Then the probability that there is some edge $`EH`$ which is not hit by $`S`$ is $$\mathrm{Pr}[\underset{EH}{}\text{ S does not hit E}]\underset{EH}{}\mathrm{Pr}[\text{S does not hit E}]\underset{EH}{}2^{|E|}m2^s<1.$$ Thus with positive probability, $`S`$ hits all $`EH`$, which proves the existence of a blocking set. $`\mathrm{}`$ The above lemmas allow us to prove: Theorem 8 If $`g`$ is a Boolean function, then $`\stackrel{~}{mon}(g)2^{\sqrt{\mathrm{bs0}(g)/12}}.`$ Proof Let $`p`$ be a polynomial which approximates $`g`$ with $`\stackrel{~}{mon}(g)`$ monomials. Let $`b=\mathrm{bs0}(g)`$, and $`z`$ and $`S_1,\mathrm{},S_b`$ be the input and sets which achieve the 0-block sensitivity of $`g`$. We assume without loss of generality that $`g(z)=0`$. We derive a $`b`$-variable Boolean function $`h(y_1,\mathrm{},y_b)`$ from $`g(x_1,\mathrm{},x_n)`$ as follows: if $`jS_i`$ then we replace $`x_j`$ in $`g`$ by $`y_i`$, and if $`jS_i`$ for any $`i`$, then we fix $`x_j`$ in $`g`$ to the value $`z_j`$. Note that $`h`$ satisfies 1. $`h(\stackrel{}{0})=g(z)=0`$ 2. $`h(e_i)=g(z^{S_i})=1`$ for all unit $`e_i\{0,1\}^b`$ 3. $`\stackrel{~}{mon}(h)\stackrel{~}{mon}(g)`$, because we can easily derive an approximating polynomial for $`h`$ from $`p`$, without increasing the number of monomials in $`p`$. 
It follows easily from combining the previous lemmas that any approximating polynomial for $`h`$ requires at least $`2^{\sqrt{b/12}}`$ monomials, which concludes the proof. $`\mathrm{}`$
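The counting step at the heart of Lemma 9 — the estimate $`\left(\genfrac{}{}{0pt}{}{n-|E|}{n/2}\right)/\left(\genfrac{}{}{0pt}{}{n}{n/2}\right)\le 2^{-|E|}`$ — can be verified exactly for concrete sizes. A small check (an added illustration, not from the paper; it assumes Python's standard `math.comb`, and the choice $`n=24`$ is arbitrary):

```python
from math import comb

def miss_probability(n, e):
    """Probability that a uniformly random (n/2)-subset of {1,...,n}
    avoids a fixed e-element edge E."""
    return comb(n - e, n // 2) / comb(n, n // 2)

n = 24
for e in range(1, n // 2 + 1):
    p = miss_probability(n, e)
    # the inequality used in Lemma 9's union bound over m < 2^s edges
    assert p <= 2 ** (-e)
    print(f"|E|={e:2d}: miss probability = {p:.6f} <= 2^-{e} = {2**(-e):.6f}")
```

Each factor $`(n/2-i)/(n-i)`$ in the product expansion of this ratio is at most $`1/2`$, which is exactly why the exact probabilities printed above sit below the $`2^{-|E|}`$ bound.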
# Confirmation and Analysis of Circular Polarization from Sagittarius A* ## 1 Introduction The flat-spectrum, compact radio source in the Galactic Center, Sagittarius A\* (Sgr A\*), has long been believed to mark a massive black hole at the dynamical center of the Galaxy (Lynden-Bell & Rees 1971). Eckart & Genzel (1996,1997) and Ghez et al. (1998) provide compelling evidence that there is a dark mass of $`2.6\times 10^6M_{\odot }`$ coincident with Sgr A\*. Furthermore, VLBI observations suggest the intrinsic size of Sgr A\* is no larger than 1 to 3.6 AU (Krichbaum et al. 1998; Lo et al. 1998; Rogers et al. 1994; Bower & Backer 1998). The observed size, however, is substantially larger as a result of scattering by interstellar electrons in the vicinity of Sgr A\* (Davies, Walsh & Booth 1976), and follows a $`\lambda ^2`$ dependence. If Sgr A\* is a synchrotron source, polarized emission may be expected, and would prove a tight constraint on some of the proposed models. However, despite numerous attempts, linear polarization has not been detected from Sgr A\* (e.g. Bower et al., 1999a). This is so even at high frequencies, or with experiments which should be sensitive to linear polarization with high Faraday rotation measures (Bower et al. are sensitive to limits of RM = $`10^7`$ rad m<sup>-2</sup>). Recently Bower et al. (1999b) have reported the detection of circular polarization from Sgr A\* at 4.8 and 8.4 GHz. We report an independent confirmation of this detection at 4.8 GHz using the Australia Telescope Compact Array (ATCA). We note that we have used a different telescope, calibrators and calibration procedures and software to Bower et al. Care is needed in analysing radio astronomy data for circular polarization, particularly given that the observed level of circular polarization in synchrotron sources is always small. A minor error in the polarimetric calibration can allow a fraction of total intensity to masquerade as circular polarization. Such a miscalibration will lead to an erroneous image which has no obvious error artifacts. This is unlike observations of linear polarization with alt-az antennas (small miscalibration will lead to artifacts, rather than an apparently clean image). The VLA’s off-axis design and circularly-polarized feeds also make it a poor instrument for circular polarization measurements. Given these caveats, and the general mixed history of the detection of circular polarization, we believe an independent confirmation adds significant weight to the detection of Bower et al. ## 2 Observations and Results We have made one new observation and used two archival observations from the ATCA to independently test the detection of Bower et al. The ATCA is a radio interferometer situated in eastern Australia at a latitude of $`-30^{\circ }`$. It consists of 6 antennas over a 6 km baseline. With an on-axis feed design, dual linear polarimetric measurements and alt-az antenna mounts, it is an excellent instrument for the measurement of circular polarization. Three observations, made in March 1996, 1997 and 1999, were analysed. The observations were 12 h, 8 h and 6 h in length, respectively. These observations used a variety of bandwidths, array configurations and correlator settings, but were all made at 4.8 GHz. All runs included observations of the blazar PKS B1730-130 approximately once every 30 min and included at least one observation of the ATCA’s primary flux calibrator, PKS B1934-638.
With the ATCA having linearly polarized feeds, the most important calibration step is in determining the antenna polarization leakage terms (the so-called “D terms”). This was done using the PKS B1934-638 data, which we have assumed to have Stokes parameters of $`(I,Q,U,V)=(5.829,0,0,1.5\times 10^{-3})`$ Jy. PKS B1934-638 is a GHz peaked spectrum source. Numerous ATCA and Parkes observations have failed to detect linear polarization from it (even with appreciable rotation measure). However, work by Komesaroff et al. (1984) and Rayner et al. (submitted to MNRAS) suggests some weak circular polarization. The value we adopt is from Rayner et al. From the data for PKS B1730-130, we have performed a simultaneous solution for antenna gains as a function of time and source polarization (note that PKS B1730-130 is known to be time-variable and circularly polarized at the level of several milliJanskys). The reduced PKS B1730-130 data was consistent with a time-variable polarized point source. In the polarimetric calibration process, we have included a subtle geometric correction as follows. Nominally, the geometry of the axes of all the ATCA antennas is identical. In reality, of course, this is not the case, and the deviation from the nominal axis geometry, which is typically of order $`1^{\circ }`$, is determined in an antenna pointing solution. For high precision work using alt-az antennas (such as the ATCA antennas) on sources that transit near the zenith, the true antenna geometry needs to be used in the calculation of parallactic angle (Sault et al., 1991; Kesteven 1997). With the declination of Sgr A\* ($`\delta =-29^{\circ }`$) differing from the ATCA’s latitude by only $`1^{\circ }`$, this is a detectable effect. The minimum spacing used in the analysis was 50 k$`\lambda `$ in total intensity and 5 k$`\lambda `$ in circular polarization. The total intensity limit is to avoid confusion from the extended emission in the Galactic Center. In circular polarization, the only emission is from Sgr A\*, and so confusion is not an issue. However, the use of a minimum baseline for circular polarization excludes possible contamination from leakage of the rapidly rising total intensity emission at short spacings. This would be caused by small residual polarization calibration errors. Table 1 summarizes the results of our observations. We give the total intensity, circular polarization and fractional circular polarization of Sgr A\*. We also give the RMS residual in the Stokes $`V`$ image and $`\sigma _\mathrm{V}`$ (the theoretical noise in the Stokes-$`V`$ image which would result from the measured receiver noise). The results show good self-consistency, and agree well with the VLA detection of -2.0 mJy. ## 3 Discussion The circular polarization properties of Sgr A\* are broadly consistent with those found in the cores of extragalactic radio sources. The 0.3 – 0.4% circular polarization of Sgr A\* is toward the high end of the range – typical values for extragalactic objects are 0.05 to 0.5 % (e.g. Roberts et al. 1975, de Pater & Weiler 1982 and Weiler & de Pater 1983). The absence of linear polarization is, however, unusual. Variations in the circularly polarized flux indicate either a change in the intrinsic degree of circular polarization or that the circularly polarized source is small enough to exhibit the effects of interstellar scintillation. Since the circular polarization in extragalactic sources is sometimes found to be variable (Komesaroff et al. 1984) and the total intensity of Sgr A\* is itself variable (e.g.
Brown & Lo 1982), it is of interest to place even a crude constraint on the degree of variability of the circular polarization. Although obviously hampered by the small number of measurements, we note the possibility that the circularly polarized component is variable. The normalized variance of the total intensity, defined by $`\overline{[I-\overline{I}]^2}/\overline{I}^2`$, is 0.11. The corresponding quantity for the circular polarization is 0.16; however, at least 0.09 (60%) of this may be attributed to measurement uncertainty. The degree of circular polarization ($`V/I`$) may also vary. Variation in this quantity implies that either the intrinsic circular polarization is variable or, if the source scintillates, that the polarized emission experiences different phase fluctuations along its ray path compared to the bulk of the (unpolarized) emission. We can place a constraint on the variability of the degree of circular polarization: the $`3\sigma `$ upper limit on $`\mathrm{\Delta }(V/I)/[\overline{V}/\overline{I}]`$ is $`25`$%. This number is only relevant to the variations on the timescale comparable to our observing intervals (i.e. one year). ## 4 Origin of the Circular Polarization It is of considerable interest to consider the physical properties of Sgr A\* which give rise to the observed circular polarization. It is possible that the circular polarization is intrinsic to the synchrotron emission (Legg & Westfold 1968), or it may result from one of several propagation-related mechanisms: ‘circular repolarization’ converts linear to circular polarization and may occur either in a cold plasma (Pacholczyk 1973), or in an electron-positron pair dominated plasma (Sazonov 1969, Jones & O’Dell 1977a,b). Circular polarization may also be induced by scintillation (Macquart & Melrose 1999). It is possible that the circular polarization is associated with only a small component of the total flux density of Sgr A\*. In the following discussion we therefore denote the degree of circular polarization as $`m_c=0.0035\xi `$, where $`\xi \ge 1`$. The circular polarization due to synchrotron radiation from a power law distribution of relativistic electrons $`N(ϵ)\propto ϵ^{-2\alpha -1}`$ is (Melrose 1971) $`m_c={\displaystyle \frac{\mathrm{cot}\theta }{3}}\left({\displaystyle \frac{\nu }{3\nu _H\mathrm{sin}\theta }}\right)^{-1/2}f(\alpha ),`$ (1) where $`\theta `$ is the angle between the line of sight and the magnetic field, $`\nu `$ is in hertz and $`\nu _H=2.8\times 10^6B`$ Hz is the electron gyrofrequency, where $`B`$ is the magnetic field in gauss. The function $`f(\alpha )`$ is a weak function of the spectral index, $`\alpha `$; for optically thick emission in the limit of strong Faraday rotation $`f(\alpha )`$ only varies monotonically between 0.6 and 2.0 for $`\alpha `$ between 0 and 2 (see Melrose 1971). The observed flux density of Sgr A\* increases with frequency up to at least 850 GHz (Falcke et al. 1998, Serabyn et al. 1997) and is roughly proportional to $`\nu ^{1/3}`$, suggesting that the source is optically thick at $`\nu =4.8`$ GHz. The high magnetic fields and particle densities thought to occur in the source (e.g. Beckert et al. 1996) motivate the use of the strong Faraday rotation limit. The high RM measurements in the vicinity of Sgr A\* (e.g., Yusef-Zadeh, Wardle & Parastaran 1997) also support this choice.
(The strong Faraday rotation limit does not necessarily imply linear depolarization and is applicable whenever negligible absorption occurs over a path length in which the plane of linear polarization rotates through 2 $`\pi `$ radians.) The electron energy spectrum is uncertain due to the combination of factors that influence the flux density in the region in which spectral turnover occurs. Taking $`\alpha =0`$, the circular polarization may be explained in terms of synchrotron emission from a magnetic field $`B=0.19\xi ^2|\mathrm{sec}\theta \mathrm{tan}\theta |`$ G, while for $`\alpha =2`$, the implied magnetic field is $`B=0.015\xi ^2|\mathrm{sec}\theta \mathrm{tan}\theta |`$ G. For $`\alpha =0`$ this is equivalent to generation of circular polarization from electrons with an effective Lorentz factor $`\gamma =|\mathrm{cot}\theta |f(\alpha )/(3m_c)=54.7|\mathrm{cot}\theta |\xi ^{-1}`$. Below the self-absorption turnover frequency one has $`T_b\approx 3.3\times 10^{11}\xi ^{-1}|\mathrm{cot}\theta |`$ K, which is near the inverse Compton limit for $`\xi ^{-1}|\mathrm{cot}\theta |\approx 1`$. Assuming a flux density of $`640\xi ^{-1}`$ mJy for the circularly polarized component (see Table 1), this brightness temperature implies an angular size of $`0.19(\mathrm{cot}\theta )^{-1/2}`$ mas (1.7 AU at 8.5 kpc) at 4.8 GHz. For $`\alpha =2`$ one has $`\gamma =193|\mathrm{cot}\theta |\xi ^{-1}`$, $`T_b\approx 1.1\times 10^{12}\xi ^{-1}|\mathrm{cot}\theta |`$ K, and an angular size of $`0.10(\mathrm{cot}\theta )^{-1/2}`$ mas. Note that both estimates of the angular size are comparable to the intrinsic size of Sgr A\* determined by Lo et al. (1998) at $`\lambda `$7 mm. It therefore appears viable to explain the magnitude of the circular polarization in terms of that intrinsic to synchrotron emission. The presence of a relativistic pair plasma has been suggested as the cause of circular polarization of a compact component of 3C 279 (Wardle et al. 1998). It is therefore relevant to consider the contribution of such a plasma to the observed properties of the circular polarization in Sgr A\*. In a plasma dominated by relativistic pairs the natural modes of the plasma are linearly polarized. Propagation through such a medium causes Stokes $`U`$ to cycle into $`V`$. Assuming $`V_{\mathrm{intrinsic}}=0`$, and denoting the degree of linear polarization as $`m_l`$, propagation through a homogeneous medium gives rise to circular polarization as follows: $`m_c=m_l\mathrm{sin}\psi \mathrm{sin}(\lambda ^3\mathrm{RRM}),`$ (2) where $`\psi `$ is the sky-projected angular change in magnetic field direction between the source region and that containing the relativistic plasma. The relativistic rotation measure (Kennett & Melrose 1998), $`\mathrm{RRM}=3\times 10^4L_{\mathrm{pc}}n_r\gamma _{\mathrm{min}}B^2\mathrm{sin}^2\theta \mathrm{rad}/\mathrm{m}^3,`$ (3) depends upon the pair density $`n_r`$, the path length $`L_{\mathrm{pc}}`$, measured in parsecs, and the minimum Lorentz factor of the pairs $`\gamma _{\mathrm{min}}`$. The linear polarization is $`\sqrt{Q^2+U^2}`$, and $`U`$ is this times $`\mathrm{sin}\psi `$. Note that the linearly polarized component of synchrotron radiation is proportional to Stokes $`Q`$ only, whereas the relativistic plasma converts between Stokes $`U`$ and $`V`$. The observed degree of circular polarization then requires $`\mathrm{RRM}\approx 14\xi /(m_l\mathrm{sin}\psi )`$ rad/m<sup>3</sup>. Bower et al. (1999a) report an observational limit $`m_l<0.001`$.
If this reflects the degree of linearly polarized emission incident upon the pair-dominated region, one requires $`\mathrm{RRM}>1.4\times 10^4\xi `$ rad/m<sup>3</sup> in order to explain the circular polarization. It is possible, however, that depolarization of the (presumed) linear polarization occurs after the partial conversion to circular polarization, in which case $`m_l`$ is higher and the corresponding limit on RRM is lower. If linearly polarized radiation is incident upon a region containing an admixture of relativistic plasma and cold plasma, the ellipticity of the natural modes is then determined by the ratio $`\lambda ^3\mathrm{RRM}_\mathrm{m}/\lambda ^2\mathrm{RM}_\mathrm{m}`$, where RM is the rotation measure and the subscript $`m`$ denotes values in the region containing the mixture. The highest degree of circular polarization that can result in a homogeneous medium is then $`m_c=m_l\mathrm{sin}\psi {\displaystyle \frac{\lambda \mathrm{RRM}_m}{\mathrm{RM}_m}}.`$ (4) This is only achieved provided $`\lambda ^2\mathrm{RM}_m\lesssim 1`$. In this case, the requirement on $`\mathrm{RRM}_m`$ is identical to that for a pair-dominated plasma. However, if $`\lambda ^2\mathrm{RM}_m\gg 1`$ circular depolarization occurs because of rapid changes in sign with frequency. Measurements of the circular polarization at other frequencies are required to determine the viability of circular repolarization models.

Finally, we consider the effect of scintillation-induced circular polarization (Macquart & Melrose 1999). To exhibit this effect the source must be sufficiently small to undergo scintillation, and rotation measure fluctuations must be present in the scattering medium. The former is likely since Sgr A\* is believed to exhibit variability in the total intensity due to interstellar scintillation (ISS) (e.g. Zhao et al. 1993). The RM fluctuations may arise from the region near the accretion disk (Melia 1994 and Bower et al. 1999a), or from further out in the Galactic Center region (e.g. Nicholls & Gray 1992, Yusef-Zadeh et al. 1997). The mean scintillation-induced circular polarization tends to zero only over a time interval large compared to the timescale of variability of the circular polarization. The rotation measure gradient required to produce the circular polarization depends upon the variability timescale, which is related to the intrinsic size of the scintillating source. The timescale can influence the expected spectral dependence of the circular polarization. Further observations on the variability of the circular polarization are required to test the viability of this model and constrain the value of any possible rotation measure gradient.

## 5 Conclusion

We confirm the detection by Bower et al. (1999b) of circular polarization from the Galactic Center source, Sgr A\*. We note that our detection is from a different telescope and uses a completely separate calibration and reduction strategy. Although clearly present, it is difficult to identify the origin of the circular polarization in Sgr A\*. Measurements of the circular polarization over at least a decade in frequency are needed to test the viability of these models, particularly those due to synchrotron emission and circular repolarization. Measurements of any possible variability would best constrain the role of scintillation in producing the circular polarization.

The observations used here were made by N.E.B. Killeen and J.-H. Zhao. We thank R.D. Ekers, D. Melrose and L. Ball for interest and encouragement with this work, and G.C. Bower for his comments on the manuscript.
no-problem/9910/math9910092.html
ar5iv
text
# On Generalized Van der Waerden Triples

Bruce Landman Department of Mathematical Sciences University of North Carolina at Greensboro Greensboro, NC 27402 email: bmlandma@uncg.edu and Aaron Robertson Department of Mathematics Colgate University Hamilton, NY 11346 email: aaron@math.colgate.edu

## Abstract

Van der Waerden’s classical theorem on arithmetic progressions states that for any positive integers $`k`$ and $`r`$, there exists a least positive integer, $`w(k,r)`$, such that any $`r`$-coloring of $`\{1,2,\dots ,w(k,r)\}`$ must contain a monochromatic $`k`$-term arithmetic progression $`\{x,x+d,x+2d,\dots ,x+(k-1)d\}`$. We investigate the following generalization of $`w(3,r)`$. For fixed positive integers $`a`$ and $`b`$ with $`a\le b`$, define $`N(a,b;r)`$ to be the least positive integer, if it exists, such that any $`r`$-coloring of $`\{1,2,\dots ,N(a,b;r)\}`$ must contain a monochromatic set of the form $`\{x,ax+d,bx+2d\}`$. We show that $`N(a,b;2)`$ exists if and only if $`b\ne 2a`$, and provide upper and lower bounds for it. We then show that for a large class of pairs $`(a,b)`$, $`N(a,b;r)`$ does not exist for $`r`$ sufficiently large. We also give a result on sets of the form $`\{x,ax+d,ax+2d,\dots ,ax+(k-1)d\}`$.

1. Introduction

B.L. van der Waerden proved that for any positive integers $`k`$ and $`r`$, there exists a least positive integer, $`w(k,r)`$, such that any $`r`$-coloring of $`[1,w(k,r)]=\{1,2,\dots ,w(k,r)\}`$ must contain a monochromatic $`k`$-term arithmetic progression $`\{x,x+d,x+2d,\dots ,x+(k-1)d\}`$. The only known non-trivial values of $`w(k,r)`$ are $`w(3,2)=9`$, $`w(4,2)=35`$, $`w(5,2)=178`$, $`w(3,3)=27`$ and $`w(3,4)=76`$. The function $`w(k,r)`$ is sometimes called the Ramsey function for the collection of arithmetic progressions. In an earlier paper, the authors considered a generalization of van der Waerden’s theorem, by considering, for a given function $`f:𝐍\to 𝐍`$, the Ramsey function corresponding to the collection of arithmetic progressions $`\{a,a+d,a+2d,\dots ,a+(k-1)d\}`$ with the property that $`d\ge f(a)`$. The Ramsey functions for other “substitutes” for the set of arithmetic progressions were studied in several related papers. In this paper we consider a new generalization of $`w(k,r)`$. To help describe this generalization, we begin with three definitions.

Definition 1.1: Fix $`1\le a\le b`$. A set, $`S`$, of three natural numbers is called an $`(a,b)`$-triple if there exist natural numbers $`x`$ and $`d`$ such that $`S=\{x,ax+d,bx+2d\}`$.

Definition 1.2: Fix $`1\le a\le b`$. Define $`N(a,b;r)`$ to be the least positive integer, if it exists, such that any $`r`$-coloring of $`[1,N(a,b;r)]`$ must contain a monochromatic $`(a,b)`$-triple.

Definition 1.3: Fix $`1\le a\le b`$. Define $`(a,b)`$ to be regular if $`N(a,b;r)`$ exists for all positive integers $`r`$. If $`(a,b)`$ is not regular, the degree of regularity of $`(a,b)`$ is the largest $`r`$ such that $`N(a,b;r)`$ exists. Denote this by $`dor(a,b)`$.

We note here that $`N(1,1;r)`$ is the van der Waerden number $`w(3,r)`$ so that $`N(a,b;r)`$ is a generalization of $`w(3,r)`$, and obviously $`(1,1)`$ is regular. We now discuss the sections which follow. In Section 2 we consider $`r=2`$. We show that, except for the case in which $`b=2a`$, $`N(a,b;2)`$ does exist; we also find upper and lower bounds on $`N(a,b;2)`$ (for $`b\ne 2a`$). For certain pairs $`(a,b)`$, we obtain stronger bounds; in particular, we use a result of Brown and Landman to deal with $`N(a,2a-1;2)`$ (when $`a=1`$ this is just $`w(3,2)`$).
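As a concrete aside, Definitions 1.1 and 1.2 can be verified by machine. The following minimal Python sketch is not part of the original paper; it assumes $`𝐍=\{1,2,\dots \}`$ (so $`x,d\ge 1`$) and $`a\le b`$, and exhaustively searches all 2-colorings, so it is only feasible for very small cases, unlike the Fortran77 program VDW.f mentioned in Section 2. It recovers, for example, $`N(1,1;2)=w(3,2)=9`$:

```python
from itertools import product

def has_mono_triple(coloring, a, b):
    """True if this 0/1 coloring of [1, n] contains a monochromatic
    (a,b)-triple {x, a*x + d, b*x + 2*d} with x, d >= 1 (assumes a <= b)."""
    n = len(coloring)
    for x in range(1, n + 1):
        for d in range(1, n + 1):
            y, z = a * x + d, b * x + 2 * d
            if z > n:          # z only grows with d, so move to the next x
                break
            if coloring[x - 1] == coloring[y - 1] == coloring[z - 1]:
                return True
    return False

def N(a, b, n_max=40):
    """Least n such that every 2-coloring of [1, n] forces a monochromatic
    (a,b)-triple, or None if no such n <= n_max is found."""
    for n in range(1, n_max + 1):
        if all(has_mono_triple(c, a, b) for c in product((0, 1), repeat=n)):
            return n
    return None

print(N(1, 1))  # 9, i.e. the van der Waerden number w(3,2)
```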
In Section 3 we establish that $`(a,b)`$ is not regular for a rather large class of pairs $`(a,b)`$, and give an upper bound (for these pairs) on the degree of regularity. We then give lower bounds on $`N(a,b;r)`$ for all $`1\le a\le b`$ and $`r>2`$. In Section 4 we make some observations about monochromatic sets of the form $`\{x,ax+d,ax+2d,\dots ,ax+(k-1)d\}`$ for $`a\ge 1`$. We establish that for $`a>1`$ and $`k`$ sufficiently large (dependent upon $`a`$), we can $`4`$-color the natural numbers so that no monochromatic such $`k`$-set exists (this is in contrast to van der Waerden’s theorem which says that there are arbitrarily long monochromatic arithmetic progressions in any $`r`$-coloring of the natural numbers).

2. Using Two Colors

Our first theorem categorizes those $`(a,b)`$ pairs for which $`N(a,b;2)`$ exists, i.e., those pairs for which $`dor(a,b)\ge 2`$. It also provides an upper bound on $`N(a,b;2)`$ whenever it exists.

Theorem 2.1: Let $`a,b\in 𝐍`$ with $`a\le b`$. Then $`dor(a,b)=1`$ if and only if $`b=2a`$. Furthermore, if $`b\ne 2a`$, $`N(a,b;2)\le \{\begin{array}{cc}4a(b^3+b^2-3b-3)+2b^3+6b^2+6b\hfill & \mathrm{for}b>2a\hfill \\ 4a(b^3+2b^2+2b)-4b^2\hfill & \mathrm{for}b<2a\hfill \end{array}`$

Proof. We first consider the case in which $`b=2a`$. To show that $`N(a,2a;2)`$ does not exist, we exhibit a 2-coloring of N which avoids monochromatic $`(a,2a)`$-triples. Namely, color the natural numbers so that the odd numbers are colored arbitrarily, and so that for each even number $`2n`$, the color of $`2n`$ is different from the color of $`n`$. Such a coloring avoids monochromatic $`(a,2a)`$-triples since such a triple has the form $`\{x,y,z\}`$ where $`z=2y`$. We next consider the case $`b>2a`$. Let $`M=4a(b^3+b^2-3b-3)+2b^3+4b^2+6b`$ and let $`\chi :[1,M]\to \{0,1\}`$ be a $`2`$-coloring. Assume there is no monochromatic $`(a,b)`$-triple. Then within the set $`\{2,4,\dots ,2b+4\}`$ there exist $`x`$ and $`x+2`$ that are not the same color, since otherwise $`\{2,2a+2,2b+4\}`$ would be a monochromatic $`(a,b)`$-triple. Without loss of generality, assume $`\chi (x)=0`$ and $`\chi (x+2)=1`$. Let $`z`$ be the least integer greater than $`a(x+2)`$ such that $`b-2a`$ divides $`z`$. Let $`S=\{z,az+(b-2a),bz+2(b-2a)\}`$. Since $`z\le 2a(b+1)+b`$, we have $$bz+2(b-2a)\le 2a(b^2+b-2)+b^2+2b\le M.$$ (1) Hence, since $`S`$ is an $`(a,b)`$-triple, some member, say $`s`$, of $`S`$ has color $`1`$. Let $$T=\{s+i(b-2a):0\le i\le \frac{s(b-1)}{b-2a}+2\}.$$ Note that $`bs+2(b-2a)`$ is the largest member of $`T`$, and that $`as+(b-2a)\in T`$ since $`b-2a`$ divides $`s`$. Also, by (1) $$bs+2(b-2a)\le 2a(b^3+b^2-2b-2)+b^3+2b^2+2b\le M,$$ (2) so some member of $`T`$ must have color 0 (otherwise $`\{s,as+(b-2a),bs+2(b-2a)\}`$ is monochromatic). Let $`t`$ be the least member of $`T`$ with color 0. Then $`\chi (t-(b-2a))=1`$. Note that (2) implies that $$b(x+2)+2(t-ax-b)=2t+x(b-2a)\le M.$$ Thus, since $`\chi (x+2)=\chi (t-(b-2a))=1`$, we must have $`\chi (b(x+2)+2(t-ax-b))=0`$ (that $`t-(b-2a)>a(x+2)`$ follows from the definition of $`t`$). This implies that $`\{x,t,bx+2(t-ax)\}`$ is a monochromatic $`(a,b)`$-triple, a contradiction. The case for $`b<2a`$ is very similar. Let $`M=4a(b^3+2b^2+2b)-4b^2`$ and let $`\chi `$ be a $`2`$-coloring of $`[1,M]`$. Then the set $`\{2,4,\dots ,2b+4\}`$ contains $`x-2`$ and $`x`$ that are not the same color. Assume $`\chi (x)=0`$ and $`\chi (x-2)=1`$, and let $`z`$ be the least integer greater than $`ax-(2a-b)`$ such that $`2a-b`$ divides $`z`$. Let $`S=\{z,az+(2a-b),bz+2(2a-b)\}`$.
Let $`sS`$ have color $`1`$ and define $`T=\{s,s+(2ab),s+2(2ab),\mathrm{},bs+2(2ab)\}`$. As in the previous case, $`as+(2ab)T`$ and $`T[1,M]`$. Hence, $`T`$ must have a least member, $`t`$, with color $`0`$. Then $`\chi (t(2ab))=1`$, and since $`\chi (x2)=\chi (t(2ab))=1`$, we must have $`\chi (b(x2)+2(tax+b))=0`$. This gives the monochromatic $`(a,b)`$-triple $`\{x,t,bx+2(tax)\}`$, a contradiction. (That $`bx+2(tax)M`$ follows easily from the definitions of $`z`$, $`s`$, and $`t`$, and the fact that $`x4`$.) $`\mathrm{}`$ For certain pairs $`(a,b)`$, we are able to improve the upper bounds of Theorem 2.1. The next theorem deals with the case in which $`a=b`$. Theorem 2.1 gives an upper bound for this case of $`O(a^4)`$. The following theorem improves this to $`O(a^2)`$. Theorem 2.2: $`N(a,a;2)\{\begin{array}{cc}3a^2+a\hfill & \mathrm{for}\mathrm{\hspace{0.17em}\hspace{0.17em}4}a\mathrm{even}\hfill \\ 8a^2+a\hfill & \mathrm{for}a\mathrm{odd}\hfill \end{array}`$ Proof. We start with the case when $`a`$ is even. We may assume that $`a6`$ since $`N(4,4;2)=40`$ was obtained by computer search (for other exact values see Table 1 at the end of this section). We shall show that every red-blue coloring of $`S=[1,3a^2+a]`$ yields a monochromatic $`(a,a)`$-triple by considering all possible 2-colorings of the set $`\{1,a+1,(3/2)a^2+a,2a^2+a\}`$. Assume, by way of contradiction, that there is a 2-coloring $`\chi `$ of $`S`$ that yields no monochromatic $`(a,a)`$-triple. Let $`R`$ be the set of red elements of $`S`$ under $`\chi `$, and $`B`$ the set of blue elements of $`S`$ under $`\chi `$. Without loss of generality we assume $`1R`$. Case I: $`a+1,2a^2+aR`$. We then have the following implications. $`1,a+1Ra+2B`$ (by considering the triple with $`d=1`$). $`1,2a^2+aRa^2+aB`$ (taking $`d=a^2`$). $`a+1,2a^2+aR(3/2)a^2+aB`$. $`a+2,(3/2)a^2+aB(5/4)a^2+(3/2)aR`$. $`a^2+a,(3/2)a^2+aBa/2+1R`$. $`a+1,(5/4)a^2+(3/2)aR(3/2)a^2+2aB`$. $`a/2+1,(5/4)a^2+(3/2)aR2a^2+2aB`$. This gives a contradiction since $`\{a+2,(3/2)a^2+2a,2a^2+2a\}`$ is a monochromatic $`(a,a)`$-triple. Case II: $`a+1,(3/2)a^2+aR`$ and $`2a^2+aB`$. As in Case I, we must have $`a+2B`$. The following sequence of implications then leads to a contradiction. $`a+1,(3/2)a^2+aR(5/4)a^2+aB`$. $`a+2,2a^2+aB3a^2R`$. $`1,(3/2)a^2+aR3a^2+aB`$. $`a+2,3a^2+aB2a^2+(3/2)aR`$. $`3a^2,2a^2+(3/2)Ra+3B`$. $`a+2,(5/4)a^2+aB(3/2)a^2R`$. $`1,(3/2)a^2R3a^2aB.`$ We now have a contradiction since $`\{a+3,2a^2+a,3a^2a\}`$ is a blue $`(a,a)`$-triple. Case III: $`(3/2)a^2+a,2a^2+aB`$. This implies $`a+1R`$, so that again we have $`a+2B`$. Then $`a+2,2a^2+aB(3/2)a^2+(3/2)aR`$ and $`3a^2R`$. $`a+1,(3/2)a^2+(3/2)aR2a^2+2aB`$ . $`a+2,(3/2)a^2+aB2a^2R`$. $`2a^2,3a^2RaB`$. Hence, the $`(a,a)`$-triple $`\{a,(3/2)a^2+a,2a^2+2a\}`$ is blue, again a contradiction. Case IV: $`(3/2)a^2+a,2a^2+aR`$. Using this assumption and the fact that $`1R`$, we have $`(3/4)a^2+aB`$, $`a^2+aB`$, and $`3a^2+aB`$. Then $`(3/4)a^2+a,a^2+aB(a/2)+1R`$. $`(a/2)+1,(3/2)a^2+aR(5/2)a^2+aB`$. $`3a^2+a,(5/2)a^2+aB2a+1R`$. We now consider two subcases. Subcase (i): $`2R`$. Then $`2,(3/2)a^2+aR3a^2B`$. $`2,2a+1R2a+2B`$. Thus the $`(a,a)`$-triple $`\{2a+2,(5/2)a^2+a,3a^2\}`$ is monochromatic. Subcase (ii): $`2B`$ $`2,(3/4)a^2+aB(3/2)a^2R`$. $`2,a^2+aB2a^2R`$. $`(3/2)a^2,a/2+1Ra^2+a/2B`$. $`(3/2)a^2,2a^2RaB`$. Thus the $`(a,a)`$-triple $`\{a,(3/2)a^2,2a^2\}`$ is monochromatic. Case V: $`a+1`$, $`2a^2+aB`$. 
In this case both $`(3/2)a^2+a`$ and $`3a^2+a`$ must be red, so that the $`(a,a)`$-triple $`\{1,(3/2)a^2+a,3a^2+a\}`$ is monochromatic. Case VI: $`a+1`$, $`(3/2)a^2+aB`$. This assumption implies that $`(5/4)a^2+a`$ and $`2a^2+a`$ are red. Then $`1,2a^2+aRa^2+aB`$. $`a^2+a,(3/2)a^2+aBa/2+1R`$. Thus the $`(a,a)`$-triple $`\{a/2+1,(5/4)a^2+a,2a^2+a\}`$ is monochromatic. We now move onto the situation where $`a`$ is odd. We may assume that $`a5`$ since $`N(1,1;2)`$ is the van der Waerden number $`w(3;2)`$, which equals nine, and $`N(3,3;2)=39`$ (see Table 1). Our method is very similar to that of the even case. Here we 2-color $`T=[1,8a^2+a]`$ and consider the various ways in which the set $`\{4a+1,5a^2+a,8a^2+a\}`$ may be colored. The following six cases cover all possibilities. Case I: $`4a+1,5a^2+aR`$. In this case $`6a^2+aB`$ and, since $`1R`$, $`(5a+1)/2B`$. We consider two subcases. Subcase (i): $`2B`$. Then $`2,(5a+1)/2B3a+1R`$. $`1,3a+1R5a+2B`$. $`5a+2,6a^2+aB7a^2R`$. $`1,7a^2R(7a^2+a)/2B`$. $`(5a+1)/2,(7a^2+a)/2B(9a^2+a)/2R`$. $`(9a^2+a)/2,5a^2+aR4aB`$. $`2,4aB6aR`$. $`6a,7a^2R8a^2B`$. $`6a^2+a,8a^2B4a+2R`$. $`4a,8a^2B6a^2R`$. This gives the monochromatic $`(a,a)`$-triple $`\{4a+2,5a^2+a,6a^2\}`$. Subcase (ii): $`2R`$. $`2,5a^2+aR(5a^2+3a)/2B`$. $`(5a+1)/2,(5a^2+3a)/2B(5a^2+5a)/2R`$. $`(5a^2+5a)/2,5a^2+aR4B`$. $`2,4a+1R6a+2B`$. $`(5a+1)/2,(5a^2+3)/2B(5a^2+5a)/2R`$. $`1,(5a^2+5a)/2R5a^2+4aB`$. $`2,(5a^2+5a)/2R5a^2+3aB`$. $`5a^2+3a,5a^2+4aB5a+2R`$. $`4,6a+2B8a+4R`$. Thus, $`\{2,5a+2,8a+4\}`$ is a monochromatic $`(a,a)`$-triple. Case II: $`4a+1,8a^2+aR`$ and $`5a^2+aB`$. By using an obvious “forcing” argument (as in the previous cases) on the following sequence of $`(a,a)`$-triples, it is a routine exercise to show that the $`(a,a)`$-triple $`\{1,3a,5a\}`$ must be red: $`\{1,(5a+1)/2,4a+1\}`$, $`\{1,4a^2+a,8a^2+a\}`$, $`\{4a+1,6a^2+a,8a^2+a\}`$, $`\{(5a+1)/2,4a^2+a,(11a^2+3a)/2\}`$, $`\{4a+1,(11a^2+3a)/2,7a^2+2a\}`$, $`\{3a,5a^2+a,7a^2+2a\}`$, $`\{5a,6a^2+a,7a^2+2a\}`$. Case III: $`4a+1R`$ and $`5a^2+a,8a^2+aB`$. For this case we may use the $`(a,a)`$-triples $`\{2a+1,5a^2+a,8a^2+a\}`$, $`\{1,2a+1,3a+2\}`$, $`\{3a+2,5a^2+a,7a^2\}`$, $`\{3a+2,(11a^2+3a)/2,8a^2+a\}`$, $`\{1,(7a^2+a)/2,7a^2\}`$, $`\{4a+1,(11a^2+3a)/2,7a^2+2a\}`$, $`\{2a,(7a^2+a)/2,5a^2+a\}`$, $`\{3a,5a^2+a,7a^2+2a\}`$ to prove that the $`(a,a)`$-triple $`\{1,2a,3a\}`$ must be red. Case IV: $`5a^2+a,8a^2+aR`$. By considering the triples $`\{1,4a^2+a,8a^2+a\}`$, $`\{2a+1,5a^2+a,8a^2+a\}`$, $`\{2a+1,3a^2+a,4a^2+a\}`$, and $`\{2a+1,4a^2+a,6a^2+a\}`$, we find that the $`(a,a)`$-triple $`\{1,3a^2+a,6a^2+a\}`$ must be red. Case V: $`5a^2+aR`$ and $`4a+1,8a^2+aB`$. In this case we have $`6a^2+aR`$ and hence $`3a^2+aB`$. We now consider two subcases. Subcase (i): $`2B`$. By examining the triples $`\{2,3a^2+a,6a^2\}`$, $`\{4a+2,5a^2+a,6a^2\}`$, $`\{6a1,6a^2,6a^2+a\}`$, $`\{6a1,7a^2,8a^2+a\}`$, $`\{2,3a+1,4a+2\}`$, $`\{3a+1,4a^2+a,5a^2+a\}`$, $`\{2,4a^2+a,8a^2\}`$, $`\{4a,6a^2,8a^2\}`$, $`\{6a,7a^2,8a^2\}`$, we find that the $`(a,a)`$-triple $`\{2,4a,6a\}`$ must be blue. Subcase (ii): $`2R`$. By examining the triples $`\{2,(5a^2+3a)/2,5a^2+a\}`$, $`\{2a+2,(5a^2+3a)/2,3a^2+a\}`$, $`\{2,2a+2,2a+4\}`$, $`\{2,2a+1,2a+2\}`$, $`\{2a+2,(7a^2+3a)/2,5a^2+a\}`$, $`\{2a+4,(5a^2+5a)/2,3a^2+a\}`$, $`\{(3a+3)/2,(5a^2+3a)/2,(7a^2+3a)/2\}`$, $`\{2a+1,(5a^2+3a)/2,3a^2+2a\}`$, $`\{1,(5a^2+5a)/2,5a^2+4a\}`$, $`\{(3a+3)/2,3a^2+2a,(9a^2+5a)/2\}`$, we find that the $`(a,a)`$-triple $`\{4a+1,(9a^2+5a)/2,5a^2+4a\}`$ is blue. Case VI: $`4a+1,5a^2+aB`$. 
The sequence of triples $`\{4a+1,5a^2+a,6a^2+a\}`$, $`\{1,3a^2+a,6a^2+a\}`$, $`\{a+1,3a^2+a,5a^2+a\}`$, $`\{1,a+1,a+2\}`$, $`\{a+2,3a^2+a,5a^2\}`$, $`\{4a1,5a^2,6a^2+a\}`$, $`\{1,(5a^2+a)/2,5a^2\}`$, $`\{2a,(5a^2+a)/2,3a^2+a\}`$, $`\{a+2,(5a^2+a)/2,4a^2a\}`$, $`\{2a,4a^2a,6a^22a\}`$, $`\{1,4a^2a,8a^23a\}`$ leads us to conclude that the $`(a,a)`$-triple $`\{4a1,6a^22a,8a^23a\}`$ is blue. $`\mathrm{}`$ Another circumstance for which we can improve the upper bounds of Theorem 2.1 is the case in which $`b=2a1`$ (for $`a=1`$ this is the van der Waerden number $`w(3,2)`$). By Theorem 2.1, $`N(a,2a1;2)`$ is bounded above by a function having order of magnitude $`32a^4`$. We can improve this to $`16a^3`$ by making use of the following theorem which is taken from . First, we introduce some notation. Let $`f:𝐍𝐑^+`$ be a non-decreasing function. Denote by $`w(f,k)`$ the least positive integer (if it exists) such that whenever $`[1,w(f,k)]`$ is 2-colored, there must exist a monochromatic $`k`$-term arithmetic progression $`\{a,a+d,a+2d,\mathrm{},a+(k1)d\}`$ with $`df(a)`$. In it is shown that $`w(f,3)`$ always exists, and bounds for this function are given as follows. Theorem 2.3: (Brown and Landman ) Let $`f:𝐍𝐑^+`$ be a non-decreasing function. Let $`b=1+4\frac{f(1)}{2}`$. Then $$w(f,3)4f(b+4\frac{f(b)}{2})+14\frac{f(b)}{2}+7b/213/2.$$ Further, if $`f`$ maps into $`𝐍`$ with $`f(n)n`$ for all $`n𝐍`$, then $`w(f,3,2)8f(h)+2h+2c`$, where $`h=2f(1)+1`$ and $`c`$ is the largest integer such that $`f(c)+c4f(h)+h+1`$. Relating Theorem 2.3 to $`(a,b)`$-triples, we have the following corollary. Corollary 2.1: For all $`a2`$, $$16a^212a+6N(a,2a1;2)\{\begin{array}{cc}16a^32a^2+4a3\hfill & \mathrm{for}a\mathrm{even}\hfill \\ 16a^3+14a^2+2a3\hfill & \mathrm{for}a\mathrm{odd}\hfill \end{array}$$ Proof. Note that $`\{x,y,z\}`$ is an $`(a,2a1)`$-triple if and only if it is an arithmetic progression with $`yx(a1)x+1`$. By applying Theorem 2.3 with $`f(x)=(a1)x+1`$ we obtain the desired bounds. $`\mathrm{}`$ We now present some lower bounds for all $`(a,b)`$-triples. This is done by providing $`2`$-colorings which avoid monochromatic $`(a,b)`$-triples. Theorem 2.4: If $`b2a`$ then $`N(a,b;2)2b^2+5b(2a4)`$. If $`b<2a`$ then $`N(a,b;2)3b^2(2a5)b(2a4)`$. Proof. For the case $`b2a`$, we will exhibit a $`2`$-coloring of $`[1,2b^2+5b2a+3]`$ with no monochromatic $`(a,b)`$-triple. Color $`[b+2,b^2+2b+1]`$ red and its complement blue. It is an easy exercise to show that monochromatic $`(a,b)`$-triples are avoided. For the case where $`b<2a`$ the $`2`$-coloring of $`[1,3b^2(2a5)b(2a4)]`$ with $`[b+2,b^2+2b+1]`$ colored red and its complement colored blue is easily seen to avoid monochromatic $`(a,b)`$-triples. $`\mathrm{}`$ We are able to improve slightly the lower bound given in Theorem 2.5 for the case when $`a=1`$. In fact, from computer calculations (see Table 1 below), it appears that this inequality may in fact be an equality. Theorem 2.5: $`N(1,b;2)2b^2+5b+6`$ for all $`b3`$. Proof. Consider the following red-blue coloring of $`[1,2b^2+5b+5]`$: color $`[1,b+1]`$, $`\{b+3\}`$, and $`[b^2+2b+4,2b^2+5b+5]`$ red and the other integers blue. We now show that this coloring avoids monochromatic $`(1,b)`$-triples. Assume $`\{x,y,z\}=\{x,x+d,bx+2d\}`$ is a blue $`(1,b)`$-triple. Since the largest blue element is $`b^2+2b+3`$, we must have $`x=b+2`$. Thus, since we must have $`d2`$, we see that $`z>b^2+2b+3`$, which is not possible. Now assume $`\{x,y,z\}`$ is red. 
First, if $`x,x+d\{1,2,\mathrm{},b,b+1,b+3\}`$ then we have $`b+2bx+2db^2+b+4`$. Hence, the only possibility here is $`z=b+3`$, but $`bx+2d=b+3`$ has no solution in $`x`$ for $`b3`$. Second, if $`y[b^2+2b+4,2b^2+5b+5]`$ then $`bx+2dbx+2b^2+2b+2`$. Hence, we must have $`x\{1,2,3,4\}`$ ($`4`$ is possible if $`b=3`$). However, this gives $`bx+2dbx+2b^2+4b`$, which implies that $`x\{1,2\}`$ ($`2`$ is possible if $`b=3`$). This in turn implies that $`zbx+2b^2+2b+4`$ which gives $`x=1`$ as the only possibility. However with $`x=1`$ we must have $`z>2b^2+5b+5`$, which is out of bounds. $`\mathrm{}`$ Below we present a table of computer-generated values for $`N(a,b;2)`$ for small $`a`$ and $`b`$. We also include computer-generated lower bounds for those cases where the computer time became excessive (the program is available for download as the Fortran77 program VDW.f at http://math.colgate.edu/~aaron/). $`𝐍(𝐚,𝐛;\mathrm{𝟐})`$ Values | $`ab`$ | $`1`$ | $`2`$ | $`3`$ | $`4`$ | $`5`$ | $`6`$ | $`7`$ | | --- | --- | --- | --- | --- | --- | --- | --- | | $`1`$ | 9 | dne | $`39`$ | $`58`$ | $`81`$ | $`108`$ | $`139`$ | | $`2`$ | | $`16`$ | $`46`$ | dne | $`139`$ | $`106`$ | $`133`$ | | $`3`$ | | | $`39`$ | $`60`$ | $`114`$ | dne | $`135`$ | | $`4`$ | | | | $`40`$ | $`87`$ | $`124`$ | $`214`$ | | $`5`$ | | | | | $`70`$ | $`100`$ | $`150`$ | | $`6`$ | | | | | | $`78`$ | $`105`$ | | $`7`$ | | | | | | | $`95`$ | Table 1 3. The Degree of Regularity of $`(𝐚,𝐛)`$ In this section we consider $`N(a,b;r)`$ for general $`r`$. We begin by showing, in Theorems 3.1 and 3.2, that for many choices of $`a`$ and $`b`$, the pair $`(a,b)`$ is not regular. For such pairs we find an upper bound on $`dor(a,b)`$. Theorem 3.1: Let $`1a<b`$, and assume that $`b(2^{3/2}1)a+22^{3/2}`$. Let $`c=b/a`$. Then $`dor(a,b)log_\sqrt{2}c`$. Proof. Let $`r=\mathrm{log}_\sqrt{2}c+1`$. We will give an $`r`$-coloring of the natural numbers which contains no monochromatic $`(a,b)`$-triple. For readability, let $`p=\sqrt{2}`$. Using the colors $`0,1,\mathrm{},r1`$, define the coloring $`\chi `$ by letting $`\chi (x)i`$ (mod $`r`$), where $`p^ix<p^{i+1}`$. Assume that there exists an $`(a,b)`$-triple, say $`x<y<z`$, that is monochromatic under $`\chi `$. Let $`j`$ be the integer such that $`p^jy<p^{j+1}`$. Since $`\{x,y,z\}`$ is an $`(a,b)`$-triple, $`y=ax+d`$ and $`z=bx+2d`$ for some $`d`$. Thus $`zcy<p^{r1}p^{j+1}=p^{j+r}`$. Hence, by the way $`\chi `$ is defined and the fact that $`\chi (y)=\chi (z)`$, we must have $`p^jy<z<p^{j+1}`$. We now consider two cases. Case I: $`b2a1`$. In this case, $`yx=(a1)x+d(ba)x+d=zy<p^j(p1)p^j(11/p^{r1})`$. Hence, since $`y>p^j`$, we have that $`xp^jp^j(11/p^{r1})=p^{jr+1}`$. Since $`\chi (x)=\chi (y)`$, and by the definition of $`\chi `$, we must have $`p^jx<y<p^{j+1}`$. Thus we have that all three numbers $`x,y,z`$ belong to the interval $`[p^j,p^{j+1})`$. Hence, $`zx=(b1)x+2d<p^j(p1)x(p1)`$, a contradiction (since $`b1>p1`$). Case II: $`c=2`$ and $`b(2^{3/2}1)a+22^{3/2}`$. In this case, $`2(a1)(ba)/(p1)`$, so that $`(a1)x/(ba)x1/(2p2)`$. Therefore, $`((a1)x+d)/((ba)x+d)1/(2p2)`$. Hence, $`yx(zy)/(2p2)<(p1)p^j/(2p2)=p^j/2=p^{j2}`$. So, $`xp^jp^{j2}=p^{j2}`$. Since $`r=3`$ in this case, and $`\chi (x)=\chi (y)`$, we must have $`p^jx<p^{j+1}`$. Thus, as in Case I, $`x`$ and $`z`$ both belong to the interval $`[p^j,p^{j+1})`$, and we again have a contradiction. 
$`\mathrm{}`$ In the following theorem we give an upper bound on $`dor(a,b)`$ for several pairs $`(a,b)`$ that are either not covered by Theorem 3.1 or for which we are able to improve the bound of Theorem 3.1. Theorem 3.2: $`dor(1,3)3`$, $`dor(2,2)5`$, $`dor(2,5)3`$, $`dor(2,6)3`$, $`dor(3,3)5`$, $`dor(3,4)5`$, $`dor(3,8)3`$, and $`dor(3,9)3`$. Proof. We give the proof for the pair $`(2,2)`$, and outline the proofs for the other cases, which are quite similar. To show that $`dor(2,2)5`$, we provide a 6-coloring of the positive integers that avoids monochromatic $`(2,2)`$-triples. Let $$\chi (i)=\{\begin{array}{cc}2k(\mathrm{mod}\mathrm{\hspace{0.17em}\hspace{0.17em}6})\hfill & \mathrm{if}i[2^k,p2^k)\hfill \\ 2k+1(\mathrm{mod}\mathrm{\hspace{0.17em}\hspace{0.17em}6})\hfill & \mathrm{if}i[p2^k,2^{k+1})\hfill \end{array}$$ where $`p=\sqrt{2}`$ . Assume $`\{x,2x+d,2x+2d\}`$ is a (2,2)-triple such that $`\chi (x)=\chi (2x+d)`$. We will show that $`\chi (2x+2d)\chi (x)`$. We consider two cases. Case I: $`2^kx<p2^k`$ for some $`k\{0,1,2,\mathrm{}\}`$. Since $`\chi (x)=\chi (2x+d)`$ and $`2x+d>p2^k`$, there exists an $`m𝐍`$ such that $`2x+d[2^{k+3m},p2^{k+3m})`$. Hence $`d[2^{k+3m}p2^{k+1},p2^{k+3m}2^{k+1}]`$. This yields $$2x+2d2^{k+3m}+2^{k+3m}p2^{k+1}p2^{k+3m},$$ (3) and $$2x+2d<p2^{k+3m+1}2^{k+1}<2^{k+3(m+1)}.$$ (4) By (3) and (4) it follows that $`\chi (2x+2d)\chi (x)`$. Case II: $`p2^kx<2^{k+1}`$ for some $`k\{0,1,2,\mathrm{}\}`$. As in Case I, there must exist an $`m\{1,2,\mathrm{}\}`$ such that $`2x+d[p2^{k+3m},2^{k+3m+1})`$. Thus, $`d[p2^{k+3m}2^{k+2}1,2^{k+3m+1}p2^{k+1}]`$. Therefore $$2x+2d2^{k+3m+1}(p2^{13m}2^{k3m})2^{k+3m+1},$$ (5) and $$2x+2d<2^{k+3m+2}p2^{k+1}<2^{k+3(m+1)}.$$ (6) It follows from (5) and (6) that $`\chi (2x+2d)\chi (x)`$. The proofs that $`dor(3,3)5`$ and $`dor(3,4)5`$ may be done in the same way as that for $`dor(2,2)`$ except that we use $`p=\sqrt{3}`$ instead of $`p=\sqrt{2}`$ and we use powers of $`3`$ instead of $`2`$ in the defined intervals. The cases of $`(2,5)`$, $`(2,6)`$, $`(3,8)`$, and $`(3,9)`$ are done similarly, where we use a 4-coloring rather than a 6-coloring, which is defined the same as $`\chi `$ except that “mod 6” is replaced by “mod 4;” where we take $`p`$ to be $`1.6`$, $`1.5`$, $`1.9`$, and $`1.9`$, respectively; and where the powers in the defined intervals are powers of the given value of $`a`$. The case $`(1,3)`$ is done using a “mod 4” coloring with $`p=\sqrt{3}`$, where the powers in the given intervals of the coloring are powers of $`3`$. $`\mathrm{}`$ By using Theorems 2.4 and 2.5, we are able to obtain the following lower bounds for $`N(a,b;r)`$. Proposition 3.1: If $`b2a`$ then $`N(a,b;r)2b^r+5b^{r1}(2a4)b^{r2}+_{i=0}^{r3}b^i`$ for $`r2`$. If $`b<2a`$ then $`N(a,b;r)3b^r(2a5)b^{r1}(2a4)b^{r2}+2_{i=0}^{r3}b^i`$ for $`r2`$. Proof. We induct on $`r`$. The case $`b2a`$ and $`r=2`$ is proved in Theorem 2.4. Assuming $`r3`$ and that the result holds for $`r1`$ with $`b2a`$, there exists an $`(r1)`$-coloring of $`[1,2b^{r1}+5b^{r2}(2a4)b^{r3}+2_{i=0}^{r4}b^i1]`$ with no monochromatic triple. Color the interval $`[2b^{r1}+5b^{r2}(2a4)b^{r3}+2_{i=0}^{r4}b^i,2b^r+5b^{r1}(2a4)b^{r2}+2_{i=0}^{r3}b^i1]`$ with the remaining color. By construction this $`r`$-coloring avoids monochromatic triples. The case $`b<2a`$ is quite similar and will be omitted. $`\mathrm{}`$ Proposition 3.2: $`N(1,b;r)2b^r+5b^{r1}+6b^{r2}+2_{i=0}^{r3}b^i`$ for all $`b,r2`$. Proof. We induct on $`r`$. The case $`r=2`$ is proved in Theorem 2.5. 
Assuming $`r3`$ and that the inequality is true for $`r1`$, we have the existence of an $`(r1)`$-coloring of $`[1,2b^{r1}+5b^{r2}+6b^{r3}+2_{i=0}^{r4}b^i1]`$ which does not contain a monochromatic $`(1,b)`$-triple. Color the interval $`[2b^{r1}+5b^{r2}+6b^{r3}+2_{i=0}^{r4}b^i,2b^r+5b^{r1}+6b^{r2}+2_{i=0}^{r3}b^i1]`$ with the remaining color. It is an easy exercise to show that there is no monochromatic $`(1,b)`$-triple in this $`r`$-coloring. $`\mathrm{}`$ We conclude this section with a table which describes what is known about $`dor(a,b)`$ for some small values of $`a`$ and $`b`$. By Theorem 2.1, we know that if $`b2a`$, then $`dor(a,b)2`$. In the third column of the table below we give the reason for the given upper bound on $`dor(a,b)`$. Values of $`\mathrm{𝐝𝐨𝐫}(𝐚,𝐛)`$ | $`(a,b)`$ | $`dor(a,b)`$ | reason | | --- | --- | --- | | (1,1) | $`\mathrm{}`$ | van der Waerden’s Theorem | | (1,2) | $`1`$ | Theorem 2.1 | | (1,3) | $`23`$ | Theorem 3.2 | | (1,4) | $`24`$ | Theorem 3.1 | | (1,5) | $`25`$ | Theorem 3.1 | | (1,6) | $`26`$ | Theorem 3.1 | | (1,7) | $`26`$ | Theorem 3.1 | | (1,8) | $`26`$ | Theorem 3.1 | | (1,9) | $`27`$ | Theorem 3.1 | | (2,2) | $`25`$ | Theorem 3.2 | | (2,3) | $`2`$ | Theorem 3.1 | | (2,4) | $`1`$ | Theorem 2.1 | | (2,5) | $`23`$ | Theorem 3.2 | | (2,6) | $`23`$ | Theorem 3.2 | | (2,7) | $`24`$ | Theorem 3.1 | | (2,8) | $`24`$ | Theorem 3.1 | | (2,9) | $`25`$ | Theorem 3.1 | | (3,3) | $`25`$ | Theorem 3.2 | | (3,4) | $`25`$ | Theorem 3.2 | | (3,5) | $`2`$ | Theorem 3.1 | | (3,6) | $`1`$ | Theorem 2.1 | | (3,7) | $`24`$ | Theorem 3.1 | | (3,8) | $`23`$ | Theorem 3.2 | | (3,9) | $`23`$ | Theorem 3.2 | Table 2 4. A More General Question In this section we move from $`(a,b)`$-triples to sets of the form $`\{x,ax+d,ax+2d,\mathrm{},ax+(k1)d\}`$ for $`a1`$ and $`k3`$. Let us call such a set a $`k`$-term $`a`$-progression. For $`a=1`$ these are simply the $`k`$-term arithmetic progressions. Van der Waerden’s theorem states that given $`r1`$, any $`r`$-coloring of the natural numbers must contain arbitrarily long monochromatic arithmetic progressions. Theorem 4.1 shows that a similar result does not hold for $`a>1`$ and $`r>3`$. Denote by $`dor_k(a)`$ the largest number of colors with which we can arbitrarily color N and be guaranteed the existence of a monochromatic $`k`$-term $`a`$-progression. Theorem 4.1 shows that for $`k`$ large enough, $`dor_k(a)3`$. Theorem 4.1: For all $`a2`$ and all integers $`k\frac{a^2}{a+1}+2`$, $`dor_k(a)3`$. Proof. It suffices to exhibit a $`4`$-coloring of $`𝐍`$ which avoids monochromatic $`k`$-term $`a`$-progressions. Clearly, we may assume $`k=\frac{a^2}{a+1}+2`$. Define a $`4`$-coloring of $`𝐍`$ by coloring each interval $`[a^j,a^{j+1})`$ with the color $`j`$ (mod $`4`$). We will show that there is no monochromatic $`k`$-term $`a`$-progression by showing that if $`x`$ and $`ax+d`$ are the same color, then $`ax+(k1)d`$ is a different color. Let $`x[a^i,a^{i+1})`$, and assume $`ax+d`$ has the same color as $`x`$. Then clearly $`ax+d[a^i,a^{i+1})`$. Hence, there exists an $`m𝐍`$ such that $`ax+d[a^{i+4m},a^{i+4m+1})`$. From this we conclude that $$a^i(a^{4m}a^2)da^{i+1}(a^{4m}1).$$ To complete the proof we will show that $$ax+(k1)d<a^{i+4(m+1)}$$ (7) and $$a^{i+4m+1}ax+(k1)d.$$ (8) From (7) and (8) we can conclude that $`ax+(k1)d`$ is colored differently than $`x`$ and $`ax+d`$. To prove (7), note that $`k<a^3+1`$ for all $`a2`$. 
Thus $$1+(k2)(1a^{4m})<a^3,$$ and hence $$a^{i+4m+1}+(k2)a^{i+1}(a^{4m}1)<a^{i+4(m+1)}.$$ This last inequality, together with the fact that $$ax+(k1)d=ax+d+(k2)da^{i+4m+1}+(k2)a^{i+1}(a^{4m}1),$$ implies (7). To prove (8), first note that since $`ka^2/(a+1)+2`$, we have $`(k2)(a^21)a^3a^2`$. Hence $$a^{i+4m}+(k2)a^i(a^{4m}a^2)a^{i+4m+1}.$$ This last inequality, along with the fact that $$ax+(k1)da^{i+4m}+(k2)a^i(a^{4m}a^2),$$ shows that (8) holds. $`\mathrm{}`$ According to Theorem 4.1, it is not true that every 4-coloring of $`𝐍`$ yields arbitrarily long monochromatic $`a`$-progressions. We are not sure if this holds for two or three colors. However, if for $`r=2`$ or $`r=3`$, every $`r`$-coloring of $`𝐍`$ does yield arbitrarily long monochromatic $`a`$-progressions, then a somewhat stronger result holds, as stated in Proposition 4.1 below. We omit the proof, a trivial generalization of the proof of \[3, Theorem 2, p. 70\]. Proposition 4.1 Let $`a𝐍`$ and let $`r\{2,3\}`$. If for every $`r`$-coloring of $`𝐍`$ there are arbitrarily long monochromatic $`a`$-progressions, then for all $`s1`$ there exists $`n=n(a,r,s)`$ such that if $`[1,n]`$ is $`r`$-colored then for all $`k𝐍`$ there exists $`\widehat{x},\widehat{d}`$ so that $`\{\widehat{x},a\widehat{x}+\widehat{d},a\widehat{x}+2\widehat{d},\mathrm{},a\widehat{x}+k\widehat{d}\}\{s\widehat{d}\}`$ is monochromatic. 5. Some Concluding Remarks Although we have not proved that $`dor(a,b)<\mathrm{}`$ for general $`a`$ and $`b`$, the evidence in this paper leads us to believe that this is the case for all $`(a,b)(1,1)`$. In particular, we make the following conjecture: Conjecture: Let $`a>1`$ and $`r>3`$. Define $`K(a,r)`$ to be the least positive integer such that $`dor_K(a)r`$. Then there exists an $`s>r`$ such that $`K(a,s)<K(a,r)`$. By Theorem 4.1, we know that $`K(a,r)`$ exists. Clearly, $`K(a,s)K(a,r)`$ for $`sr`$, but if we are able to show that the inequality is strict for some $`s`$, then we can conclude that $`dor(a,a)<\mathrm{}`$ for all $`a>1`$. In fact it may be true that $`dor(a,b)=2`$ for all $`b2a`$, although we have presented scant evidence for this. References T.C. Brown and B. Landman, Monochromatic arithmetic progressions with large differences, Bull. Australian Math. Soc. 60 (1999), 21-35. T.C. Brown, B. Landman, M. Mishna, Monochromatic homothetic copies of $`\{1,1+s,1+s+t\}`$, Canadian Math. Bull. 40 (1997), 149-157. R.L. Graham, B.L. Rothschild, J.H. Spencer, Ramsey Theory, Second Ed., John Wiley and Sons, New York, 1990. B. Landman, Avoiding arithmetic progressions (mod $`m`$) and arithmetic progressions, Utilitas Math. 52 (1997), 173-182. B. Landman, Ramsey functions for quasi-progressions, Graphs and Combinatorics 14 (1998), 131-142. B. L. van der Waerden, Beweis einer Baudetschen Vermutung, Nieuw Arch. Wisk. 15 (1927), 212-216.
no-problem/9910/astro-ph9910308.html
ar5iv
text
# Gravitational Evolution of the Large-Scale Density Distribution: The Edgeworth & Gamma Expansions

## 1. Introduction

Combining non-linear perturbation theory with the Edgeworth expansion has largely succeeded in describing the gravitational evolution of the large-scale density PDF in the weakly non-linear regime, for Gaussian initial conditions (Juszkiewicz et al 1995, Bernardeau & Kofman 1995). In principle, the accuracy of this approach is only limited by the order of the (reduced) cumulants, $`S_J`$, involved in the Edgeworth expansion. However, the Edgeworth series yields a PDF that is ill-defined. It has negative probability values and assigns non-zero probability to negative densities ($`\delta <-1`$). Alternatively, we shall introduce the Gamma PDF as the basis for an expansion in orthogonal (Laguerre) polynomials around an arbitrary exponential tail (see Gaztañaga, Fosalba & Elizalde 1999). The proposed Gamma expansion is better suited for describing a realistic PDF, as it always yields positive densities and the PDF is effectively positive-definite.

## 2. Comparison of the expansions with N-body simulations

Figure 1 shows a comparison of the Edgeworth and Gamma expansions with N-body simulations. We measure the PDF in 10 realizations of SCDM, $`\mathrm{\Omega }=1`$ and $`\mathrm{\Gamma }=0.5`$, with $`L=180h^{-1}\mathrm{Mpc}`$ and $`N=64^3`$ particles and $`\sigma _8=1`$ (Croft & Efstathiou 1994). We find that, up to second order, both expansions produce very similar results, especially around the peak of the distribution, within the error bars. However, the Gamma expansion provides a better match to the PDF on the tails. In particular, the Gamma expansion is in far better agreement with the numerical results for negative values of $`\nu `$ (see left panel) and performs slightly better for the positive tail of the PDF, $`\nu 15`$ (see right panel). In summary, we propose the Gamma expansion as a useful alternative to the Edgeworth series to model the gravitational evolution of the large-scale density PDF in the weakly non-linear regime. We stress the potential application of the Gamma expansion for modeling other non-Gaussian PDFs, such as those describing the peculiar velocities of galaxies or the temperature anisotropy of the CMB on small scales.

### Acknowledgments.

This work has been supported by CSIC, DGICYT (Spain), project PB96-0925, and CIRIT (Generalitat de Catalunya), grant 1995SGR-0602. PF is also supported by a research fellowship from ESA.

## References

Bernardeau, F., Kofman, L., 1995, ApJ, 443, 479
Croft, R.A.C., & Efstathiou, G., 1994, MNRAS, 267, 390
Gaztañaga, E., Fosalba, P., Elizalde, E., 1999, submitted to ApJ, \[astro-ph/9906296\]
Juszkiewicz, R., Weinberg, D.H., Amsterdamski, P., Chodorowski, M., Bouchet, F.R., 1995, ApJ, 442, 39
no-problem/9910/cond-mat9910291.html
ar5iv
text
# Reconciling the correlation length for high-spin Heisenberg antiferromagnets

## Abstract

We present numerical results for the antiferromagnetic Heisenberg model (AFHM) that definitively confirm that chiral perturbation theory, corrected for cutoff effects in the AFHM, leads to a correct field-theoretical description of the low-temperature behavior of the spin correlation length for spins $`S\ge 1/2`$. With two independent quantum Monte Carlo algorithms and a finite-size-scaling technique, we explore correlation lengths up to $`\xi \sim 10^5`$ lattice spacings $`a`$ for spins $`S=1`$ and $`5/2`$. We show how the recent prediction of cutoff effects by P. Hasenfratz is approached for moderate $`\xi /a=𝒪(100)`$, and smoothly connects with other approaches to modeling the AFHM at smaller correlation lengths.

preprint: CBU-9901

In the past decade there has been a resurgence of interest in the quantum Heisenberg model. This interest is mainly due to the discovery that the undoped, insulating precursors of lamellar high-$`T_c`$ superconducting copper oxides are well-described by the spin $`S=1/2`$ two-dimensional (2-d) antiferromagnetic Heisenberg model (AFHM) on a square lattice. The field theories that describe this model at low temperatures share the property of asymptotic freedom with the theories that describe elementary particles, thus earning a share of attention from the high-energy physics community as well. The low-temperature physics of the AFHM is dominated by magnons. The magnon interactions are described by the 2-d classical continuum $`O(3)`$ non-linear $`\sigma `$-model at large correlation lengths. This is an extensively studied model in field theory, offering various known exact results that can be exploited for the prediction of the correlation length $`\xi (T)`$ in the Heisenberg model at low temperatures. The challenge is to find a proper way to connect the parameters of the quantum Heisenberg model with the coupling of the $`\sigma `$-model. Several approaches to this problem exist. Chakravarty, Halperin, and Nelson used renormalization group arguments to predict the leading behavior of $`\xi (T)`$. Hasenfratz and Niedermayer utilized analytical results for the $`\sigma `$-model to refine the prediction, and they used chiral perturbation theory (CPT) for the AFHM to connect the parameters of the two models. Neutron scattering experiments on $`S=1/2`$ antiferromagnets such as $`\text{Sr}_2\text{CuO}_2\text{Cl}_2`$ generally agree with this prediction. However, higher-spin antiferromagnets have remained problematic. Widely different techniques – including experiment, high-temperature series expansion, quantum Monte Carlo simulation for $`S=1`$, and semi-classical approximation – all showed large deviations from the field-theoretical prediction by as much as 75% for $`\xi <200`$ lattice spacings, which is the regime accessible in experiments on $`\text{La}_2\text{Ni}\text{O}_4`$ ($`S=1`$) and $`\text{Rb}_2\text{Mn}\text{F}_4`$ ($`S=5/2`$). These results suggest that a serious discrepancy would persist to really large, macroscopic correlation lengths for $`S>1/2`$, which is highly unsatisfactory on theoretical grounds. In a recent paper, Hasenfratz argues that this discrepancy is due to cutoff effects in the AFHM, which increase strongly with spin $`S`$. When large, they can no longer be described with the effective approach of CPT. Hasenfratz used spin-wave expansion to calculate the cutoff effects, and he showed the proper way to incorporate them into the CPT result.
In this Letter we show via extensive quantum Monte Carlo (QMC) calculations that this correction indeed accounts for the severe spin dependence of the correlation length. Our data connect the regime of large and moderate correlation lengths, where “CPT+cutoff” applies, with the regime of small correlation lengths where high-temperature and semi-classical results apply. The diverse approaches are thereby reconciled. We also provide evidence that by $`S=5/2`$ the residual deviation at small correlation lengths, possibly due to the missing higher-order terms in the analytical calculations, has essentially reached the classical $`S\to \infty `$ limit. Consider the 2-d quantum Heisenberg model with nearest-neighbor interaction on an $`L\times L`$ lattice with lattice spacing $`a`$ and periodic boundary conditions, $$H=J\sum _{x,\mu }\vec{S}_x\cdot \vec{S}_{x+\widehat{\mu }},\qquad \vec{S}_x^{\,2}=S(S+1),$$ (1) where $`J>0`$ is the antiferromagnetic exchange coupling, $`\widehat{\mu }`$ denotes the two primitive translation vectors of the unit cell, and $`\vec{S}_x`$ is the spin operator at position $`x`$. The $`O(3)`$ symmetry of this Hamilton operator is spontaneously broken at zero temperature, and the model exhibits long-range antiferromagnetic order in the ground state. As a consequence, the model has two massless, relativistic Goldstone bosons (called magnons or spin waves). However, the Mermin-Wagner-Coleman theorem rules out Goldstone bosons in two dimensions at nonzero temperature. Instead, the AFHM magnons acquire a mass $`c/\xi (T)`$ where $`c`$ is the spin-wave velocity. (We set $`\hbar `$ and $`k_B`$ to unity.) In fact, the $`\sigma `$-model is known to have a non-perturbatively generated mass gap, and the leading exponential behavior of the correlation length in the AFHM is a consequence of asymptotic freedom in the $`\sigma `$-model. The partition function of the 2-d quantum spin model in Eq. (1) can be represented by a path integral of a classical model with an additional (“time”) dimension. The continuous coordinate $`x_3`$ of this periodic Euclidean-time dimension has extent $`c/T`$. When $`T\to 0`$, the correlation length grows exponentially, and becomes much larger than the length scale $`c/T`$. The system then appears dimensionally reduced to a thin slab with two infinite space directions and an extent $`c/T\ll \xi (T)`$ in the Euclidean-time direction. This is just a special regularization of the $`\sigma `$-model. Hasenfratz and Niedermayer used CPT for the AFHM, as well as the exact mass gap and the 3-loop $`\beta `$-function of the $`\sigma `$-model to derive the asymptotic prediction for the spin correlation length in the AFHM: $$\xi _{\text{CH}_2\text{N}_2}=\frac{e}{8}\frac{c}{2\pi \rho _s}\mathrm{exp}\left(\frac{2\pi \rho _s}{T}\right)\left[1-\frac{T}{4\pi \rho _s}+𝒪\left(T^2\right)\right].$$ (2) The values of spin stiffness $`\rho _s`$ and spin-wave velocity $`c`$ are not fixed by CPT, but they can be estimated by, for example, spin-wave expansion (SWE) or fits from QMC data. Calculating the $`𝒪(T^2)`$ corrections in CPT introduces new, unknown parameters. We call Eq. (2) the $`\text{CH}_2\text{N}_2`$ formula after its parents Chakravarty, Halperin, Nelson, Hasenfratz, and Niedermayer. The QMC data presented in this Letter confirm that the discrepancy between $`\text{CH}_2\text{N}_2`$ and the $`S>1/2`$ AFHM correlation length indeed is severe, and it persists to macroscopic correlation lengths $`\xi /a\sim 10^5`$.
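(Purely as an illustrative aside, not appearing in the original Letter: Eq. (2) is easy to evaluate numerically. In the minimal Python sketch below, the stiffness and spin-wave velocity are placeholder inputs, for which one may take, e.g., the fitted $`S=1/2`$ values quoted later in the text; units are $`J=a=1`$.)

```python
import math

def xi_ch2n2(T, rho_s, c):
    """Leading CH2N2 prediction of Eq. (2) for the correlation length
    (in units of the lattice spacing a, with J = 1), dropping the
    unknown O(T^2) term."""
    t = T / (2.0 * math.pi * rho_s)
    return ((math.e / 8.0) * (c / (2.0 * math.pi * rho_s))
            * math.exp(1.0 / t) * (1.0 - 0.5 * t))

# Example with the S=1/2 fitted values quoted below (rho_s ~ 0.18, c ~ 1.657):
print(xi_ch2n2(T=0.34, rho_s=0.1800, c=1.657))  # roughly 10 lattice spacings at t ~ 0.3
```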
This situation is unsatisfactory since a mapping of the AFHM onto the $`\sigma `$-model must be valid for smaller correlation lengths, too, due to the dimensional reduction described above. In particular, this mapping is valid beyond the regime of “renormalized classical scaling” near $`T=0`$ (where Eq. (2) unquestionably is valid for any $`S`$). The virulent discrepancy indicates a shortcoming in the technique that connects the coupling of the $`\sigma `$-model with the AFHM parameters $`\rho _s`$ and $`T`$ using CPT for large spin. In his recent calculation, Hasenfratz used bilinear spin-wave expansion to modify this connection, taking into account cutoff effects in the AFHM. In the present study, we also account for a minor refinement of the result by Hasenfratz: a part of the quadratic temperature dependence, coming from known terms in the spin-wave expansion of the coupling of the $`\sigma `$-model, is incorporated. The resulting $`\text{CH}_3\text{N}_2\text{B}`$ formula is (with $`t\equiv T/(2\pi \rho _s)`$) $`\xi _{\text{CH}_3\text{N}_2\text{B}}={\displaystyle \frac{e}{8}}{\displaystyle \frac{c}{2\pi \rho _s}}\mathrm{exp}\left({\displaystyle \frac{1}{t}}\right)\mathrm{exp}\left(C(\gamma )\right)`$ (3) $`\times \left[1-{\displaystyle \frac{1}{2}}t+{\displaystyle \frac{27}{32}}t^2+𝒪\left(T^2\right)\right].`$ (4) The parameter $`\gamma \equiv 2JS/T`$ brings in the explicit spin dependence. In that work, $`\mathrm{exp}(C(\gamma ))`$ is expressed as an integral of familiar spin-wave quantities over the first Brillouin zone. The asymptotic $`T\to 0`$ ($`S`$ fixed) behavior is $`C(\gamma \to \infty )\propto \gamma ^{-2}`$, so the $`\text{CH}_2\text{N}_2`$ formula, Eq. (2), is recovered in this limit. The effect of the aforementioned refinement is simply to add the term $`(27/32)t^2`$ to the polynomial. (This term is only a part of the $`𝒪(T^2)`$ correction.) In earlier work, an efficient continuous Euclidean-time QMC algorithm was used to study the $`S=1/2`$ AFHM correlation length up to 350,000 lattice spacings. For this purpose, a finite-size-scaling technique, developed by Caracciolo et al. (“CEFPS”) for the $`\sigma `$-model, was applied to the AFHM finite-volume $`\xi (L)`$ data. That study confirmed the validity of the “no cutoff effects” $`\text{CH}_2\text{N}_2`$ formula, Eq. (2), but only for large $`\xi /a\gtrsim 10^5`$. By fitting a naïve quadratic term $`\alpha t^2`$ in the polynomial factor of $`\text{CH}_2\text{N}_2`$, good agreement was achieved down to $`\xi /a\approx 100`$ (yielding $`\alpha =0.75(5)`$). For $`S>1/2`$ we used two independent QMC algorithms. One is a higher-spin generalization of the continuous-Euclidean-time loop-cluster algorithm used in that $`S=1/2`$ study. The other one is a “traditional” discrete-Euclidean-time loop-cluster algorithm for arbitrary $`S`$, based on a method proposed by Kawashima and Gubernatis. Results from these two codes were cross-checked and agree for all spins within statistical errors, which gives a high degree of confidence to our calculations. The finite-volume correlation lengths $`\xi (L)`$ were calculated using a second-moment method, similar to Eq. (4.13) of Caracciolo et al. Afterwards, the CEFPS finite-size-scaling method was applied. This method expresses $`\xi (2L)/\xi (L)`$ as a universal function $`F(\xi (L)/L)`$ – applicable to all models within the universality class of the 2-d lattice-regularized nearest-neighbour $`O(3)`$ non-linear $`\sigma `$-model. Iteration of $`F(\xi (L)/L)`$ yields $`\xi \equiv \xi (L\to \infty )`$.
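The iteration just described is simple enough to sketch in code. The fragment below is an illustration, not the authors' implementation; it assumes the universal function $`F`$ is supplied externally (e.g., interpolated from tabulated $`\sigma `$-model data) and extrapolates a single measured pair $`(L,\xi (L))`$ to infinite volume:

```python
def extrapolate_xi(L, xi_L, F, ratio_cut=1e-4, max_doublings=60):
    """CEFPS-style finite-size-scaling extrapolation.

    Starting from a measured pair (L, xi(L)), repeatedly apply the
    universal step xi(2L) = xi(L) * F(xi(L)/L) until xi(L)/L is
    negligible, at which point xi(L) has converged to the
    infinite-volume correlation length (F(x) -> 1 as x -> 0)."""
    for _ in range(max_doublings):
        x = xi_L / L
        if x < ratio_cut:      # finite-size effects now negligible
            break
        xi_L *= F(x)           # xi(2L)/xi(L) = F(xi(L)/L)
        L *= 2
    return xi_L
```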
Unlike the case for $`S=1/2`$, we found that for $`S>1/2`$ and our level of precision ($`0.2\%`$ for $`\xi (L)`$), it is not necessary to incorporate any correction for scaling violations, even for lattice sizes as small as $`L/a\approx 10`$. The maximum values of $`\xi /a`$ generated in our study are approximately 170,000 for $`S=1`$ and 135,000 for $`S=5/2`$. Note that it is the finite-size scaling technique that enables the estimation of correlation lengths much larger than direct measurement allows (cf. earlier direct measurements, which achieved a maximum $`\xi /a=24.94(7)`$ at $`t=0.18`$ on a square lattice with side length $`L/a=200`$). Details of our algorithms, improved estimators, finite-size-scaling technique, and calculation of $`\xi (L)`$ will be provided in a separate paper. Figure 1 shows the QMC data plotted on a Memphis chart, where we divide $`\xi `$ by the leading (2-loop) term $`(e/8)(c/2\pi \rho _s)\mathrm{exp}(1/t)`$ and plot versus $`t`$. For $`S=1/2`$, the agreement between QMC and $`\text{CH}_3\text{N}_2\text{B}`$ down to $`\xi /a\approx 10`$ ($`t\approx 0.3`$) is striking. For $`S>1/2`$ we find that the QMC data smoothly merge into the $`\text{CH}_3\text{N}_2\text{B}`$ predictions at $`\xi /a>100`$ ($`t<0.15`$) for $`S=1`$, and $`\xi /a>500`$ ($`t<0.10`$) for $`S=5/2`$. The agreement in each case degrades above some temperature. This is to be expected because $`\text{CH}_3\text{N}_2\text{B}`$ leaves out higher-order terms from the CPT and spin-wave expansions. In addition, at high temperatures the field-theoretical requirements $`\xi \gg c/T`$ and $`\xi \gg a`$ are no longer satisfied, and predictions such as $`\text{CH}_3\text{N}_2\text{B}`$ become meaningless. In our figures we plot the $`\text{CH}_3\text{N}_2\text{B}`$ predictions only for $`\xi /a\ge 3`$. An earlier study found that for $`S=1/2`$, the true value of spin stiffness $`\rho _s`$ is about 3% higher than the value predicted by third-order spin-wave theory (SW3). We find that comparison of QMC data with $`\text{CH}_3\text{N}_2\text{B}`$ for $`S=1/2`$ is in agreement with the fitted values of $`\rho _s=0.1800(5)`$ and $`c=1.657(2)`$ found there (we set $`J`$ and $`a`$ to unity). That study combined the correlation length data fit to $`\text{CH}_2\text{N}_2`$, Eq. (2), with a fit of finite-volume magnetic susceptibilities to the predictions of CPT for the finite-size and temperature effects in the AFHM. High fit precision was achieved by exploiting this combination. We will present a similar study for the full range of spins $`S\le 5/2`$ in a separate paper. For this Letter, we have chosen a different approach which only involves the correlation length. We demonstrate that for $`S>1`$ one can directly rely on the SW3 results to achieve a consistent connection between the QMC data and $`\text{CH}_3\text{N}_2\text{B}`$. For the $`S=1`$ case we find that the SW3 predictions $`\rho _s^{\text{SW3}}=0.869`$ and $`c^{\text{SW3}}=3.067`$ are nearly correct. Our two-parameter fit gives $`\rho _s/\rho _s^{\text{SW3}}=1.005(3)`$ and $`c/c^{\text{SW3}}=0.98(2)`$. These ratios correspond to $`\rho _s=0.8733(23)`$ and $`c=3.01(6)`$. The fit includes the $`\xi /a>100`$ ($`t<0.15`$) data in Figure 1, and has $`\chi ^2/\text{d.o.f.}=1.085`$ with 58 degrees of freedom (which corresponds to a significance level $`p=30.5\%`$). These values of $`\rho _s`$ and $`c`$ are used in the figures for $`S=1`$. Although the relative deviations from SW3 ($`+0.5(3)`$% for $`\rho _s`$ and $`-2(2)`$% for $`c`$) seem small, there is in fact a serious discrepancy.
Using the SW3 values $`(\rho _s^{\text{SW3}},c^{\text{SW3}})`$ in $`\text{CH}_3\text{N}_2\text{B}`$ and comparing to the QMC $`\xi `$ gives $`\chi ^2/60=1.39`$ (a poor fit, with $`p=2.5\%`$). Compared to the two-parameter fit, SW3 has $`\mathrm{\Delta }\chi ^2=+20.2`$ (i.e., outside the 99.99% confidence region). The reason the near-overlap with SW3 is deceptive is that the fit parameters $`\rho _s`$ and $`c`$ are strongly anticorrelated (with correlation coefficient $`r=-0.977`$). Interestingly, the major axis of the nearly degenerate error ellipse is almost orthogonal to the curves of constant $`\theta \equiv \rho _s/c^2`$. In particular, the $`68.3`$% confidence region for the joint probability distribution of $`\rho _s`$ and $`c`$ (enclosed by the $`\mathrm{\Delta }\chi ^2=+2.30`$ ellipse) intersects the curve $`\rho _s/c^2=\theta ^{\text{SW3}}`$. In other words, this $`68.3`$% confidence region contains $`(\rho _s,c)`$ values which are consistent with $`\theta =\theta ^{\text{SW3}}`$. We are thus motivated to check how close to $`\rho _s^{\text{SW3}}`$ and $`c^{\text{SW3}}`$ the corresponding single-parameter-fit values could in fact be. To do this, we set $`\theta _{S=1}=\theta _{S=1}^{\text{SW3}}=0.09238`$ and performed a fit with the same set of $`S=1`$ QMC data, with $`(\rho _s,c)`$ constrained to the one-dimensional parameter subspace $`\rho _s=\theta ^{\text{SW3}}c^2`$. As an upper bound for the error associated with the assumption $`\theta _{S=1}=\theta _{S=1}^{\text{SW3}}`$, we took from the earlier $`S=1/2`$ study the 4.4% deviation between $`\theta _{S=1/2}^{\text{SW3}}=0.06277`$ and that study’s result $`\theta _{S=1/2}=0.06556`$. This choice is conservative since SWE is an expansion in powers of $`1/S`$, and is expected to become more accurate as spin increases. Upon refitting, we found $`\rho _s/\rho _s^{\text{SW3}}=1.0024(27)`$, $`c/c^{\text{SW3}}=1.001(21)`$, and $`\chi ^2/59=1.083`$ ($`p=30.8\%`$). These ratios correspond to $`\rho _s=0.8711(24)`$ and $`c=3.07(6)`$. (The difference between the one- and two-parameter fits would not be visible in Figure 1.) Note the errors here are dominated by the conservative 4.4% uncertainty in $`\theta _{S=1}`$; the actual errors are bound to be smaller. For $`S=5/2`$, we could not identify any deviation from the SW3 values $`\rho _s^{\text{SW3}}=5.9444`$ and $`c^{\text{SW3}}=7.3005`$. We found $`\chi ^2/14=1.194`$ ($`p=27.2\%`$) for the data with $`\xi /a>500`$. These SW3 values of $`\rho _s`$ and $`c`$ are used in the figures for $`S=5/2`$. Figure 2 shows the situation for $`S=5/2`$ in more detail. There is an intermediate regime between $`\xi /a\approx 500`$ ($`t\approx 0.10`$), where $`\text{CH}_3\text{N}_2\text{B}`$ starts to deviate, and $`\xi /a\approx 12`$ ($`t\approx 0.15`$), where the high-temperature-series expansion (HTE) starts to fail. Most of the experimental data on $`\text{Rb}_2\text{Mn}\text{F}_4`$ fall into this “gap”, which exists similarly for $`S=1`$. At least for large spin $`S=5/2`$, this intermediate regime is correctly described by the semi-classical approximation known as the pure-quantum self-consistent harmonic approximation (PQSCHA). The diverse approaches collectively describe the $`S=5/2`$ correlation length from extremely small to extremely large values. We note that a residual discrepancy between $`\text{CH}_3\text{N}_2\text{B}`$ and numerical data persists in the classical limit $`S\to \infty `$, where the AFHM becomes the 2-d lattice-regularized nearest-neighbour $`O(3)`$ non-linear $`\sigma `$-model.
Predictions for this model are available from analytical calculations ($`\xi /a\gtrsim 10^5`$), Monte Carlo simulation ($`10\lesssim \xi /a\lesssim 10^5`$), and series expansion ($`\xi /a\lesssim 10`$). Hasenfratz also supplies the $`\gamma \to 0`$ form for the correction $`\mathrm{exp}(C(\gamma ))`$, which enables the computation of the classical $`S\to \infty `$ limit of Eq. (3). In Figure 3, we plot the ratio of $`\xi `$ to the $`\text{CH}_3\text{N}_2\text{B}`$ prediction for $`S=1/2,1,5/2,\text{ and }\infty `$ versus $`1/\mathrm{log}_{10}(\xi )`$. By $`S=5/2`$ the discrepancy between the numerical data and $`\text{CH}_3\text{N}_2\text{B}`$ has essentially reached the classical $`S\to \infty `$ limit. This means that the reasons for the residual discrepancy, including finite-order effects of the CPT and spin-wave expansions, are the same for the quantum AFHM and the classical $`\sigma `$-model. In conclusion, the cutoff correction accounts for the overall spin dependence of the correlation length. The spin stiffness and spin-wave velocity approach the spin-wave theory predictions rapidly. The diverse approaches to the AFHM for higher-spin are complementary.

We thank U.-J. Wiese, P. Hasenfratz, F. Niedermayer, and P. Verrucchi for enlightening discussions. We also thank P. Verrucchi, R. Singh, N. Elstner, R.L. Leheny, and R.J. Christianson for use of their data. The work of VC was supported in part by the DOE under cooperative research agreement #DF-FC02-94ER40818. The work of PKM was supported in part by Schweizerischer Nationalfonds.
no-problem/9910/hep-th9910059.html
ar5iv
text
# The antisymmetric tensor propagator in $`AdS`$

## I Introduction

In earlier work, a lot of effort was put into finding the $`AdS`$ propagators for the graviton and the gauge boson. Their methods can be used straightforwardly for the $`B_{\mu \nu }`$ propagators. An ansatz can be made for bitensor propagators. This ansatz contains both gauge artifacts and gauge invariant parts. Upon using the equation of motion for $`B_{\mu \nu }`$ we obtain an equation for the gauge invariant part of the propagator, whose solution is hypergeometric. For d=5 it simplifies to an algebraic function of the chordal distance. As explained in that work, working on the subspace of conserved sources makes gauge fixing unnecessary. We check our result by verifying the 5-dimensional Poincaré duality between $`A_\mu `$ and $`B_{\mu \nu }`$.

## II The $`B_{\mu \nu }`$ propagator

In Euclidean $`AdS_{d+1}`$, with the metric $$ds^2=\frac{1}{z_0^2}(dz_0^2+\mathrm{\Sigma }_{i=1}^ddz_i^2),$$ $`(1)`$ the easiest way to express invariant functions and tensors is in terms of the chordal distance: $$u\equiv \frac{(z_0-w_0)^2+(z_i-w_i)^2}{2z_0w_0}.$$ $`(2)`$ The action for an antisymmetric 2-tensor coupled to a conserved source $`S_{\mu \nu }`$ is: $$S_B=\int d^{d+1}z\sqrt{g}[\frac{1}{2\cdot 3!}H^{\mu \nu \rho }H_{\mu \nu \rho }-\frac{1}{2}B_{\mu \nu }S^{\mu \nu }],$$ $`(3)`$ where $$H_{\mu \nu \rho }=D_\mu B_{\nu \rho }+D_\nu B_{\rho \mu }+D_\rho B_{\mu \nu }.$$ $`(4)`$ The Euler-Lagrange equation has a solution of the form: $$B_{\mu \nu }(z)=\frac{1}{2}\int d^{d+1}w\sqrt{g}\,G_{\mu \nu ;\mu ^{\prime }\nu ^{\prime }}(z,w)S^{\mu ^{\prime }\nu ^{\prime }}(w),$$ $`(5)`$ where $`G_{\mu \nu ;\mu ^{\prime }\nu ^{\prime }}`$ is the bitensor propagator. To simplify notation, the $`D`$’s with unprimed indices mean covariant derivatives with respect to $`z`$, and those with primed indices with respect to $`w`$. The equation $`G_{\mu \nu ;\mu ^{\prime }\nu ^{\prime }}`$ satisfies is: $$D^\rho (D_\mu G_{\nu \rho ;\mu ^{\prime }\nu ^{\prime }}+D_\nu G_{\rho \mu ;\mu ^{\prime }\nu ^{\prime }}+D_\rho G_{\mu \nu ;\mu ^{\prime }\nu ^{\prime }})=\delta (z,w)(g_{\mu \mu ^{\prime }}g_{\nu \nu ^{\prime }}-g_{\mu \nu ^{\prime }}g_{\nu \mu ^{\prime }})+D_{\mu ^{\prime }}\mathrm{\Lambda }_{\mu \nu ;\nu ^{\prime }}-D_{\nu ^{\prime }}\mathrm{\Lambda }_{\mu \nu ;\mu ^{\prime }},$$ $`(6)`$ where $`\mathrm{\Lambda }_{\mu \nu ;\nu ^{\prime }}`$ is a diffeomorphism whose contribution vanishes when integrated against the covariantly conserved source $`S^{\mu \nu }`$. We can see that all of our bitensors are antisymmetric at both points. Similarly to the methods of those works, we observe that a suitable basis for antisymmetric bitensors is given by: $$T_{\mu \nu ;\mu ^{\prime }\nu ^{\prime }}^1=\partial _\mu \partial _{\mu ^{\prime }}u\,\partial _\nu \partial _{\nu ^{\prime }}u-\partial _\mu \partial _{\nu ^{\prime }}u\,\partial _\nu \partial _{\mu ^{\prime }}u$$ $`(7)`$ $$T_{\mu \nu ;\mu ^{\prime }\nu ^{\prime }}^2=\partial _\mu \partial _{\mu ^{\prime }}u\,\partial _\nu u\,\partial _{\nu ^{\prime }}u-\partial _\mu \partial _{\nu ^{\prime }}u\,\partial _\nu u\,\partial _{\mu ^{\prime }}u-\partial _\nu \partial _{\mu ^{\prime }}u\,\partial _\mu u\,\partial _{\nu ^{\prime }}u+\partial _\nu \partial _{\nu ^{\prime }}u\,\partial _\mu u\,\partial _{\mu ^{\prime }}u.$$ Thus, an ansatz for $`G`$ is $`G=T^1F^1(u)+T^2F^2(u)`$. Nonetheless, we use a different decomposition, which illustrates better the gauge artifacts $$G_{\mu \nu ;\mu ^{\prime }\nu ^{\prime }}=T_{\mu \nu ;\mu ^{\prime }\nu ^{\prime }}^1H(u)+D_\mu V_{\nu ;\mu ^{\prime }\nu ^{\prime }}-D_\nu V_{\mu ;\mu ^{\prime }\nu ^{\prime }},$$ $`(8)`$ where $`V_{\mu ;\mu ^{\prime }\nu ^{\prime }}=Y(u)[\partial _\mu \partial _{\mu ^{\prime }}u\,\partial _{\nu ^{\prime }}u-\partial _\mu \partial _{\nu ^{\prime }}u\,\partial _{\mu ^{\prime }}u]`$.
Also, an antisymmetric $`\mathrm{\Lambda }_{\mu \nu ;\nu ^{\prime }}`$ can be expressed as $$\mathrm{\Lambda }_{\mu \nu ;\nu ^{\prime }}=A(u)[\partial _\nu \partial _{\nu ^{\prime }}u\,\partial _\mu u-\partial _\mu \partial _{\nu ^{\prime }}u\,\partial _\nu u].$$ $`(10)`$ We can now substitute (8) and (10) in (6), and after a long computation we obtain $$D^\rho (D_\mu G_{\nu \rho ;\mu ^{\prime }\nu ^{\prime }}+D_\nu G_{\rho \mu ;\mu ^{\prime }\nu ^{\prime }}+D_\rho G_{\mu \nu ;\mu ^{\prime }\nu ^{\prime }})-D_{\mu ^{\prime }}\mathrm{\Lambda }_{\mu \nu ;\nu ^{\prime }}+D_{\nu ^{\prime }}\mathrm{\Lambda }_{\mu \nu ;\mu ^{\prime }}=T^1[H^{\prime \prime }u(u+2)+H^{\prime }(1+u)(d-1)-2A]-T^2[H^{\prime \prime }(1+u)+H^{\prime }(d-1)+A^{\prime }].$$ $`(11)`$ For $`z\ne w`$, we obtain 2 equations by setting the scalar coefficients of the two tensors to 0. We can observe that the $`V_{\mu ;\mu ^{\prime }\nu ^{\prime }}`$ part which was a gauge artifact dropped out as expected. Thus, for $`u\ne 0`$ we have the equations: $$H^{\prime \prime }u(u+2)+H^{\prime }(1+u)(d-1)-2A=0$$ $`(12a)`$ $$H^{\prime \prime }(1+u)+H^{\prime }(d-1)+A^{\prime }=0.$$ $`(12b)`$ The second equation can be integrated once, with the integration constant chosen so that $`A`$ and $`H`$ vanish as $`u\to \infty `$. Combining this with (12a) we find the differential equation obeyed by $`H`$: $$u(2+u)H^{\prime \prime }(u)+(d+1)(u+1)H^{\prime }(u)+2(d-2)H=0.$$ $`(13)`$ This equation is hypergeometric, but the solution which vanishes as $`u\to \infty `$ is rational: $$H(u)=\frac{\mathrm{\Gamma }((d-1)/2)}{4\pi ^{(d+1)/2}}\frac{u+1}{[u(u+2)]^{(d-1)/2}},$$ $`(14)`$ properly normalized to take care of the $`\delta `$ function in (6).

## III Poincaré duality

In 5 dimensions a 2-form is Poincaré dual to a gauge boson, by the relation: $$H_{\mu \nu \rho }ϵ^{\mu \nu \rho \sigma \lambda }=3!F^{\sigma \lambda }$$ $`(15)`$ Therefore, we expect: $$\langle F^{\sigma \lambda }(z)F^{\sigma ^{\prime }\lambda ^{\prime }}(w)\rangle =\frac{1}{(3!)^2}ϵ^{\mu \nu \rho \sigma \lambda }ϵ^{\mu ^{\prime }\nu ^{\prime }\rho ^{\prime }\sigma ^{\prime }\lambda ^{\prime }}\langle H_{\mu \nu \rho }(z)H_{\mu ^{\prime }\nu ^{\prime }\rho ^{\prime }}(w)\rangle .$$ $`(16)`$ Checking (16) is a verification that our result is true. We use the fact that $$\langle B_{\mu \nu }B_{\mu ^{\prime }\nu ^{\prime }}\rangle =G_{\mu \nu ;\mu ^{\prime }\nu ^{\prime }}$$ $`(17)`$ and $$\langle A_\mu A_{\mu ^{\prime }}\rangle =G_{\mu ;\mu ^{\prime }},$$ $`(18)`$ where the second propagator was found in earlier work. We could check the tensor equality (16) term by term, but it is messy. We rather observe that the right hand side of (16) is a bitensor antisymmetric at both ends, and thus it will have the structure $$ϵ^{\mu \nu \rho \sigma \lambda }ϵ^{\mu ^{\prime }\nu ^{\prime }\rho ^{\prime }\sigma ^{\prime }\lambda ^{\prime }}\langle H_{\mu \nu \rho }(z)H_{\mu ^{\prime }\nu ^{\prime }\rho ^{\prime }}(w)\rangle =F_1(u)T_1^{\mu \nu ;\mu ^{\prime }\nu ^{\prime }}+F_2(u)T_2^{\mu \nu ;\mu ^{\prime }\nu ^{\prime }}.$$ $`(19)`$ Concentrating on the components of $`\langle F^{z_0z_i}(z)F^{z_0^{\prime }z_j^{\prime }}(w)\rangle `$ we obtain: $$2F_1+F_2(1+u)=H^{\prime \prime },$$ $`(20a)`$ $$F_2(1+u)^2+F_2+2F_1(1+u)=2H^{\prime \prime }(1+u)+3H^{\prime },$$ $`(20b)`$ which give the same $`F_1`$ and $`F_2`$ as the ones obtained from the gauge propagator derived in earlier work.

## IV Conclusion

We computed the propagator for $`B_{\mu \nu }`$ in $`AdS_{d+1}`$ and checked our result by using Poincaré duality for $`d=4`$. This propagator can be used for computing various quantities having to do with $`B_{\mu \nu }`$ charged objects (like strings or D-branes with electric flux) in $`AdS`$. The propagators for higher form fields can also be found by using Poincaré duality or by explicit calculation.

Acknowledgements: I’d like to acknowledge useful conversations with Joe Polchinski, Gary Horowitz and Veronika Hubeny. This work was supported in part by NSF grant PHY97-22022.
## A Several useful identities involving the chordal distance In the computations the following identities were useful: $$\partial _\mu \partial _{\nu ^{\prime }}u=-\frac{1}{z_0w_0}\left[\delta _{\mu \nu ^{\prime }}+\frac{(z-w)_\mu \delta _{\nu ^{\prime }0}}{w_0}+\frac{(w-z)_{\nu ^{\prime }}\delta _{\mu 0}}{z_0}-u\delta _{\mu 0}\delta _{\nu ^{\prime }0}\right]$$ $`(A1)`$ $$\partial _\mu u=\frac{1}{z_0}[(z-w)_\mu /w_0-u\delta _{\mu 0}]$$ $`(A2)`$ $$\partial _{\nu ^{\prime }}u=\frac{1}{w_0}[(w-z)_{\nu ^{\prime }}/z_0-u\delta _{\nu ^{\prime }0}]$$ $`(A3)`$ $$D^\mu \partial _\mu u=(d+1)(u+1)$$ $`(A4)`$ $$\partial ^\mu u\,\partial _\mu u=u(u+2)$$ $`(A5)`$ $$D_\mu \partial _\nu u=g_{\mu \nu }(u+1)$$ $`(A6)`$ $$(\partial ^\mu u)(D_\mu \partial _\nu \partial _{\nu ^{\prime }}u)=\partial _\nu u\,\partial _{\nu ^{\prime }}u$$ $`(A7)`$ $$(\partial ^\mu u)(\partial _\mu \partial _{\nu ^{\prime }}u)=(u+1)\partial _{\nu ^{\prime }}u$$ $`(A8)`$ $$D_\mu \partial _\nu \partial _{\nu ^{\prime }}u=g_{\mu \nu }\partial _{\nu ^{\prime }}u$$ $`(A9)`$
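The solution (14) of Eq. (13) and the identities above lend themselves to symbolic verification. Below is a minimal sympy sketch of such a cross-check (our own illustration, not part of the original computation); it checks the ODE for several values of $`d`$ and the metric-dependent identity (A5) for $`d=4`$, i.e. $`AdS_5`$:

```python
import sympy as sp

u, d = sp.symbols('u d', positive=True)

# Eq. (14) up to its overall normalisation
H = (u + 1) * (u * (u + 2)) ** (-(d - 1) / 2)

# Left-hand side of Eq. (13)
ode = (u * (2 + u) * sp.diff(H, u, 2)
       + (d + 1) * (u + 1) * sp.diff(H, u)
       + 2 * (d - 2) * H)

for dim in (2, 3, 4, 5):
    assert sp.simplify((ode / H).subs(d, dim)) == 0   # (13) holds

# Identity (A5) for d = 4: the inverse of the metric (1) is z0^2 * delta
z = sp.symbols('z0:5')
w = sp.symbols('w0:5')
U = sum((z[m] - w[m]) ** 2 for m in range(5)) / (2 * z[0] * w[0])
lhs = z[0] ** 2 * sum(sp.diff(U, z[m]) ** 2 for m in range(5))
assert sp.simplify(lhs - U * (U + 2)) == 0            # (A5) holds
print("Eq. (13)/(14) and identity (A5) verified")
```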
# Regional Centres for Space Science and Technology Education (Affiliated to the United Nations) Hans J. Haubold Programme on Space Applications Office for Outer Space Affairs United Nations Vienna International Centre P.O. Box 500 A-1400 Vienna, Austria Email: haubold@kph.tuwien.ac.at Abstract Education is a prerequisite to master the challenges of space science and technology. Efforts to understand and control space science and technology are necessarily intertwined with social expressions in the cultures where science and technology are carried out (Pyenson ). The United Nations is leading an effort to establish regional Centres for Space Science and Technology Education in major regions on Earth. The status of the establishment of such institutions in Asia and the Pacific, Africa, Latin America and the Caribbean, Western Asia, and Eastern Europe is briefly described in this article. 1. United Nations Programme on Space Applications The United Nations Programme on Space Applications was established in 1971 on the recommendation of the first United Nations Conference on the Exploration and Peaceful Uses of Outer Space (UNISPACE I); the Programme was expanded and its mandate broadened at the UNISPACE II (1982) and the recently concluded UNISPACE III Conferences. Fulfilling one element of the Programme's mandate, more than 150 workshops with approximately 8000 participants have been organized since its establishment. Following the needs of developing countries and taking into account the space-related agenda of the Programme, the majority of workshops focussed on core disciplines: remote sensing and geographic information systems, satellite communications and geo-positioning systems, satellite meteorology and global climate, and space and atmospheric sciences . Despite the success of these workshops in the initiation of regional and international cooperation and the development of space science and technology, particularly for the benefit of developing countries, in the 1980s the limitations of short-term activities were recognized, pointing to the need for building long-term regional capacity in space science and technology and its applications . Subsequently, in 1988, under the auspices of the Programme, a project to establish centres for space science and technology education at the regional level was initiated . A unique element of this project was that the Centres were envisaged to be established in developing countries for the benefit of regional cooperation, particularly between the developing countries. 2. United Nations General Assembly Resolutions The General Assembly of the United Nations, in its resolution 45/72 of 11 December 1990, endorsed the recommendation of the Working Group of the Whole of the Scientific and Technical Subcommittee, as approved by the Committee on the Peaceful Uses of Outer Space (COPUOS) , that: “… the United Nations should lead, with the active support of its specialized agencies and other international organizations, an international effort to establish regional centres for space science and technology education in existing national/regional educational institutions in the developing countries” .
Subsequently, the General Assembly, in its resolution 50/27 of 6 December 1995, also endorsed the recommendation of COPUOS that “these centres be established on the basis of affiliation to the United Nations as early as possible and that such affiliation would provide the centres with the necessary recognition and would strengthen the possibilities of attracting donors and of establishing academic relationships with national and international space-related institutions” . 3. Status of Establishing and Operating the Regional Centres On the occasion of the UNISPACE III Conference (19-30 July 1999, Vienna, Austria), the status of the operation and establishment of the regional Centres was reviewed as part of the intergovernmental meetings and the technical forum of this Conference . Since its inauguration in India in 1995, the regional Centre for Space Science and Technology Education in Asia and the Pacific has successfully conducted four post-graduate courses on remote sensing and geographic information systems; two courses on satellite communications; and a course each on the following topics: satellite meteorology and global climate; and space science. Each of the courses was inaugurated through a research-level workshop on the respective topic supported through regular activities of the United Nations Programme on Space Applications. Upon completion of the nine-month course in each activity, the scholars have carried out a one-year applications/research project in their home countries. In agreement with resolution 45/72, this Centre takes advantage of the intellectual resources and facilities of three renowned space-related institutions: (i) the Indian Institute of Remote Sensing, Dehradun, (ii) the Space Applications Centre, Ahmedabad, and (iii) the Physical Research Laboratory, Ahmedabad . The regional Centre for Space Science and Technology - in French Language - in Africa was inaugurated on 24 October 1998 in Casablanca, Morocco, and is located at the Ecole Mohammadia d'Ingenieurs in Rabat. The regional Centre for Space Science and Technology Education - in English Language - in Africa was inaugurated on 24 November 1998 in Abuja, Nigeria, and is located at Obafemi Awolowo University in Ile-Ife . The inauguration of the regional Centre for Space Science and Technology Education in Latin America and the Caribbean is expected to occur in 2000 in Brazil and Mexico. In preparation for the operation of the campus of the Centre in Brazil, the Instituto Nacional de Pesquisas Espaciais (INPE) is already very active in carrying out a number of workshops for the benefit of States in the region. An evaluation mission to Jordan and the Syrian Arab Republic was conducted in 1998. The reports of the mission have been finalized in consultation with the Governments of Jordan and the Syrian Arab Republic, with a view to selecting a host country for a regional Centre in Western Asia, which is expected to occur shortly after the UNISPACE III Conference. In 1995, the Network of Space Science and Technology Education and Research Institutions for States of Central-Eastern and South-Eastern Europe was established . A technical study mission to Bulgaria, Greece, Hungary, Poland, Romania, Slovakia, and Turkey was carried out in 1998. The mission undertook a technical study and provided an informative report that will be used in determining, in each country visited, an agreed framework for the operation of such a Network.
Each country designated space science and technology related core and associated institutions, all of them with a long and successful history in research and applications of space science and technology, which are part of this Network. 4. Governing Boards and Advisory Committees of the Centres Each Centre shall aspire to be a highly reputable regional institution, which, as the needs arise, and as directed by the Centre's Governing Board, may grow into a network of specialized and internationally acclaimed affiliate nodes. Because resolution 45/72 specifically limits the role of the United Nations to “lead, …, an international effort to establish regional centres”, it is apparent that once a Centre is inaugurated, its Governing Board will assume all decision-making and policy-formulating responsibilities for the Centre. The Governing Board is the overall policy-making body of each Centre and consists of member States (within the region where the Centre is located) that have agreed, through their endorsement of the Centre's agreement, to the goals and objectives of the Centre. The agreement of the Centre calls for the establishment of an Advisory Committee that provides advice to the Governing Board on all scientific and technical matters, particularly on the Centre's education curricula, and consists of experts in the field of space science and technology . The United Nations serves the Centre and its Governing Board and Advisory Committee in an advisory capacity. Governing Boards were established for the Centres in Asia and the Pacific and Africa. To date the Advisory Committee has been set up only for the Centre in Asia and the Pacific. 5. Next Steps to Be Taken During the deliberations of the UNISPACE III Conference, meetings were held and presentations were delivered to chart the course for future measures to continue furthering the regional Centres. In a meeting between representatives of the Centres in Asia and the Pacific, Africa, and Latin America and the Caribbean, the opinion was emphasized that, as a follow-up to the Conference, close and lively cooperation between the regional Centres needs to be established without delay. Particularly, the rich experience gained in the successful operation of the Centre in Asia and the Pacific as a centre of excellence shall be made available to the Centres in all other regions. It was further felt that all Centres, through the support of the United Nations Office for Outer Space Affairs and its Programme on Space Applications, should urgently establish cooperation with international organizations and institutions (among them COSPAR, IAU, ICTP, ISPRS, ISU, TWAS), specialized agencies of the United Nations system (among them FAO, IAEA, UNESCO, UNU, WHO, WMO), and the Economic and Social Commissions of the respective region. The International Astronomical Union (IAU) has undertaken first steps in this direction . The strong participation of developing countries in the technical forum activities of UNISPACE III also brought to the attention of the Office for Outer Space Affairs that the Centre's education curricula may have to be supplemented with non-core discipline elements focussing on space biology/medicine, devising small satellite projects, microgravity, and other space-related topics. 6. UN/ESA Workshops on Basic Space Science The establishment of the regional Centres is the sole project of the Programme on Space Applications leading to “institutionalization” in the field of space science and technology.
The operation of the Centres can be supported by the Programme in organizing some of its regular activities in close cooperation with the Centres. In this connection it shall be recalled that it was India in 1991, hosting the first United Nations/European Space Agency Workshop on Basic Space Science for the benefit of Asia and the Pacific at ISRO in Bangalore, that inaugurated a series of worldwide workshops. Since then such workshops have been organized in Latin America and the Caribbean (Costa Rica and Colombia 1992, Honduras 1997), Africa (Nigeria 1993), Western Asia (Egypt 1994, Jordan 1999), Europe (Germany 1996, France 2000), and again in Asia and the Pacific (Sri Lanka 1995) . This series of workshops led to the establishment of several education and research oriented astronomical telescope facilities with a view to linking them to the respective regional Centres in the future. Already such a series of workshops, organized in the field of space science and technology, can lead to an appreciable expansion of cooperation between the countries of a region and its regional Centre. 7. Contact Addresses for More Details on the Regional Centres and Their Education Programmes
Asia and the Pacific Region: Prof. B. Deekshatulu, Centre for Space Science and Technology Education in Asia and the Pacific, Indian Institute of Remote Sensing Campus, 4 Kalidas Road, Dehra Dun - 248 001, India. Tel.: (+91)-135-740-737; Fax: (+91)-135-740-785; Email: deekshatulu@hotmail.com
Africa Region: Prof. E.E. Balogun, Centre for Space Science and Technology Education - in English Language - in Africa, Department of Physics, Obafemi Awolowo University, Ile-Ife, Nigeria. Tel.: (234)-36-230-454; Fax: (234)-36-233-973; Email: ebalogun@oauife.edu.ng
Africa Region: Prof. A. Touzani, Centre Regional Africain des Sciences et Technologie de l'Espace - Langue Francaise, sis a l'Ecole Mohammadia d'Ingenieurs, Avenue Ibn Sina, B.P. 765, Agdal, Rabat, Maroc. Tel.: (212)-7-681824; Fax: (212)-7-681826; Email: craste@emi.ac.ma
Latin America and the Caribbean Region: Dr. T.M. Sausen, Instituto Nacional de Pesquisas Espaciais, Divisao de Sensoriamento Remoto, Av. dos Astronautas, 1758, Cx.P. 515, CEP 12201-970, Sao Jose dos Campos, SP, Brazil. Tel.: (+55)-12-325-6862; Fax: (+55)-12-325-6870; Email: tania@ltid.inpe.br
Western Asia Region: To be made available shortly.
Acknowledgements The cooperation with Dr. W. Steinborn (German Space Agency, DLR) during the evaluation mission through Africa, Drs. G. Arrigo and B. Negri (Italian Space Agency, ASI) during the technical study mission through Central-Eastern and South-Eastern Europe, and Prof. F.R. Querci (French Space Agency, CNES) during the evaluation mission through the Middle East, is greatly acknowledged. References Note: The author is writing in his personal capacity and the views expressed in this paper are those of the author and not necessarily of the United Nations. L. Pyenson and S. Sheets-Pyenson, Servants of Nature: A History of Scientific Institutions, Enterprises, and Sensibilities, W.W. Norton & Company, New York, 1999, pp. XV+496. United Nations Conference on the Exploration and Peaceful Uses of Outer Space, Vienna, 14-27 August 1968, United Nations, New York, 1968, Document E.68.I.11, pp. 59. United Nations Conference on the Exploration and Peaceful Uses of Outer Space, Vienna, Austria, 9-21 August 1982, United Nations, New York, 1982, Document A/CONF.101/10, pp. 167; R. Chipman (Ed.), The World in Space: A Survey of Space Activities and Issues Prepared for UNISPACE 82, Prentice-Hall, 1982, pp. 689.
United Nations Conference on the Exploration and Peaceful Uses of Outer Space, Vienna, Austria, 19-30 July 1999, United Nations, Vienna, 1999, Document A/CONF.184/6; http://www.un.or.at/OOSA/. Space for Development: The United Nations Programme on Space Applications, United Nations, Vienna, 1999, Document V.98-57085, pp. 23; http://www.un.or.at/OOSA/. Report on the UN Workshop on Space Science and Technology and its Applications within the Framework of Educational Systems, 4-8 November 1985, Ahmedabad, India, Document A/AC.105/365, (27 December 1985) pp. 24; Report of the UN Meeting of Experts on Space Science and Technology and its Applications within the Framework of Educational Systems, 13-17 October 1986, Mexico, D.F., Document A/AC.105/378, (23 December 1986) pp. 25; Report on the UN Meeting of Experts on Space Science and Technology and its Applications within the Framework of Educational Systems, 27 April - 1 May 1987, Lagos, Nigeria, Document A/AC.105/390, (18 November 1987) pp. 23; Report on the UN International Meeting of Experts on the Development of Remote-Sensing Skills and Knowledge, 26-30 June 1989, Dundee, United Kingdom, (3 January 1990) pp. 21. Centre for Space Science and Technology Education, United Nations, New York, 1990, Documents SAP/90/001 to 003, pp. 24; Centres for Space Science and Technology Education: A Progress Report, Document A/AC.105/498, (12 March 1990) pp. 28; Centres for Space Science and Technology Education: Updated Project Document, Document A/AC.105/534, (7 January 1993) pp. 56; Regional Centres for Space Science and Technology Education (Affiliated to the United Nations), Document A/AC.105/703, (16 June 1998) pp. 12. M. Benkoe and K.-U. Schrogl, International Space Law in the Making: Current Issues in the UN Committee on the Peaceful Uses of Outer Space, Editions Frontiers, Gif-sur-Yvette, 1993, pp. XXIII+275. Report of the Committee on the Peaceful Uses of Outer Space, General Assembly, Official Records: Forty-Fifth Session, Supplement No. 20 (A/45/20), United Nations, New York, 1990; Report of the Scientific and Technical Sub-Committee on the Work of its Twenty-Seventh Session, Document A/AC.105/456, (12 March 1990) pp. 37. Report of the Committee on the Peaceful Uses of Outer Space, General Assembly, Official Records: Fiftieth Session, Supplement No. 20 (A/50/20), United Nations, New York, 1995. M.-I. Piso, in Proceedings of the UNISPACE III Regional Preparatory Conference for Eastern Europe, Bucharest, Romania, 25-29 January 1999, published by the Romanian Space Agency under the auspices of the United Nations Office for Outer Space Affairs, Bucharest, Romania, 1999, pp. 185-198. Centres for Space Science and Technology Education: Education Curricula, United Nations, Vienna, 1996, Document A/AC.105/649, 23 pp.; Report on the UN/ESA/COSPAR Workshop on Data Analysis Techniques, 10-14 November 1997, Sao Jose dos Campos, Brazil, (19 December 1997) pp. 10. Conclusions and Proposals of the IAU/COSPAR/UN Special Workshop on Education in Astronomy and Basic Space Science, 20-23 July 1999, UNISPACE III Conference, Document A/CONF.184/C.1/L.8, (23 July 1999) pp. 2; see also . H.J. Haubold and W. Wamsteker, Space Technology 18(1998)No. 4-6, pp. 149-156; H.J. Haubold, Journal of Astronomical History and Heritage 1(1998) No. 2, pp. 105-121; http://www.seas.columbia.edu/~ah297/un-esa/. Centre for Space Science and Technology Education (Affiliated to the United Nations) in Asia and the Pacific, Brochure issued by the Centre, Dehra Dun, India, 1995, pp. 6.
Centre for Space Science and Technology Education (Affiliated to the United Nations) in Africa, Brochure issued by the Centre, Ile-Ife, Nigeria, 1998, pp. 14.
# Determination of the Phase of $`V_{ub}`$ from Charmless Hadronic $`B`$ Decay Rates ## Abstract We perform a model-dependent fit to recent data on charmless hadronic $`B`$ decays and determine $`\gamma `$, the phase of $`V_{ub}^{\ast }`$. We find $`\gamma =114_{-21}^{+25}`$ degrees, which disfavors the often quoted $`\gamma \sim 60^{\circ }`$ at the two standard deviation level. We also fit for the form factors $`F_0^{B\to \pi }`$ and $`A_0^{B\to \rho }`$, and the strange-quark mass. They are consistent with theoretical expectations, although $`m_s`$ is somewhat low. Such agreement and the good $`\chi ^2`$ for the fit may be interpreted as a confirmation of the adequacy of our model assumptions. preprint: NTUHEP-99-25 COLO-HEP-438 LNS-99-290 The measurement of a surprisingly large $`\epsilon ^{\prime }/\epsilon `$ value in 1999 is an exasperating reminder of how little we really know about $`CP`$ violation in Nature. Within the Standard Model (SM) with 3 quark generations, however, there is a unique phase in the Kobayashi-Maskawa (KM) matrix $`V`$, often defined as $`\gamma =\mathrm{arg}(V_{ub}^{\ast })`$ in the usual phase convention . At present, there is no evidence that this phase fails to account for $`CP`$ violation phenomena. Two $`B`$ factories, built to study $`CP`$ violation in the $`B`$ system, have just been completed. By comparing the time dependence of tagged $`B^0`$ vs. $`\overline{B}^0\to J/\psi K_S`$ decays, one can cleanly measure the $`CP`$ phase in $`B^0`$-$`\overline{B}^0`$ mixing, which, in the SM, gives $`\mathrm{sin}2\beta `$ where $`\beta =\mathrm{arg}(V_{td}^{\ast })`$. Together with the demonstrated capabilities of collider detectors at the Tevatron, a precise measurement of $`\mathrm{sin}2\beta `$ is assured within a year or two. The unitarity phase $`\alpha `$ can be measured via $`\pi ^+\pi ^{-}`$ or $`\pi ^+\pi ^{-}\pi ^0`$ modes but now appears to be more challenging because the $`\pi ^+\pi ^{-}`$ rate is smaller than expected, which in turn implies larger "penguin pollution". However, it is the phase $`\gamma `$ that is usually viewed as the most difficult to measure. All suggestions so far require very high statistics or face various technical challenges. In this Letter we exploit the emerging rare $`B`$ decay data from CLEO and perform a fit that, though model dependent, allows extraction of $`\gamma `$ with just $`10^7`$ $`B`$ mesons using only $`CP`$-averaged rates. The goodness of fit and reasonableness of other fit parameters serve as checks on the adequacy of our model assumptions. Since neither vertexing nor tagging is required, this method will benefit from the improved statistics soon available from the CLEO upgrade as well. The measurement of $`\mathrm{sin}2\beta `$ is often compared to a double-slit interference experiment, the two slits being $`B^0`$ and $`\overline{B}^0\to J/\psi K_S`$ decays. Charmless rare $`B`$ decay rates, even when $`CP`$ averaged, can also be viewed as double-slit experiments that in principle probe the phase $`\gamma `$. The present observed pattern that $`\overline{B}\to \overline{K}\pi `$ rates are larger than $`\pi \pi `$ but comparable to $`\rho ^0\pi ^{-}`$, $`\rho ^{\mp }\pi ^{\pm }`$ and $`\omega \pi ^{-}`$ implies that both tree (T) and penguin (P) amplitudes contribute to these rates, hence the double-slit analogy. Unfortunately, hadronization effects such as final state interactions (FSI) could dilute such interference effects.
The Fleischer-Mannel bound on $`\gamma `$ is no longer effective since one now has $`R\equiv \overline{\mathrm{\Gamma }}(\overline{B}^0\to K^{-}\pi ^+)/\overline{\mathrm{\Gamma }}(B^{-}\to \overline{K}^0\pi ^{-})=1.0\pm 0.3`$, where $`\overline{\mathrm{\Gamma }}`$ denotes the average of $`B`$ and $`\overline{B}`$ widths. A more promising method is based on $`R_{\ast }=\overline{\mathrm{\Gamma }}(B^{-}\to \overline{K}^0\pi ^{-})/2\overline{\mathrm{\Gamma }}(B^{-}\to K^{-}\pi ^0)`$, with some reference to $`\pi \pi `$ for control of model dependence. But with $`R_{\ast }=0.75\pm 0.30`$ at present, one cannot set a useful bound on $`\gamma `$. More than an order of magnitude increase in data is needed for a restrictive measurement. In this Letter we take a more global view and perform fits which trade model independence for exhaustive use of available data. We also use only $`CP`$-averaged rates, since there is no sign of significant $`CP`$ asymmetries and the errors are large. Asymmetries, in addition, are more sensitive to FSI than averaged rates. We shall assume that naive factorization ($`N_c=3`$) is a good approximation, and use effective-theory matrix elements cross-checked by two groups , ignoring annihilation type diagrams. We make a $`\chi ^2`$ fit of data to $`\gamma `$ and four other parameters. Factorization in two body rare $`B`$ decays may be heuristically justified by the large energy release : final state mesons move away from each other so fast that they do not interact. Recent theoretical work suggests that factorization may be derivable from QCD in certain limits . In our view, factorization provides a simple framework to describe hadronic $`B`$ decays that is rich in predictions with a limited set of free parameters. It is therefore reasonable to use this framework when attempting a first global fit to the large number of results on charmless hadronic $`B`$ decays now available. This work is motivated by Ref. , which pointed out that recent CLEO rare $`B`$ data supports factorization if $`\mathrm{cos}\gamma <0`$ is taken. Indirect fits to the unitarity triangle find a 95% C.L. range for $`\gamma `$ of $`44^{\circ }`$-$`75^{\circ }`$ , $`44^{\circ }`$-$`93^{\circ }`$ , $`41^{\circ }`$-$`97^{\circ }`$ , and $`36^{\circ }`$-$`97^{\circ }`$ , depending in part on how conservatively the theoretical errors are handled. Let us illustrate the parameters that enter with $`\overline{B}^0\to K^{-}\pi ^+`$, which is a $`b\to s\overline{u}u`$ transition under factorization. Ignoring annihilation terms, one has $`𝒜_{K^{-}\pi ^+}\propto f_KF_0^{B\to \pi }(m_B^2-m_\pi ^2)\left\{V_{us}^{\ast }V_{ub}a_1-V_{ts}^{\ast }V_{tb}\left[a_4+a_{10}+(a_6+a_8)R_{su}\right]\right\}.`$ (1) We are free to fix $`|V_{ts}|\simeq |V_{cb}|=0.0381`$ since any uncertainty can be absorbed in form factors. The two relevant fundamental parameters are therefore $`|V_{ub}/V_{cb}|`$ and $`\gamma `$, and the latter clearly controls the interference between tree and penguin terms. The parameters $`a_i`$ are related to short distance Wilson coefficients (WC) and evaluated within a QCD framework. They also depend on the scale parameter $`\mu _f`$ where factorization is operative. The values of $`a_i`$ in the literature are still evolving as issues of scale, scheme and gauge dependence are addressed. We use two sets of $`a_i`$ from Refs. (AKL) and (CCTY). The dominant strong penguin coefficients are $`a_4`$ and $`a_6`$ ($`-0.04`$ to $`-0.06`$), while the dominant electroweak penguin coefficient is $`a_9\simeq -0.009`$ coming from the $`Z`$ penguin. We use the $`a_i`$ for $`b\to s`$ since the difference for $`b\to d`$ is small. In Eq.
(1) one also has the factor $`R_{su}=2m_{K^{-}}^2/[(m_b-m_u)(m_s+m_u)].`$ (2) A similar factor $`R_{sd}`$, which enters $`𝒜_{\overline{K}^0\pi ^{-}}`$, is taken to be equal to $`R_{su}`$. This factor can be better understood as a product of two pieces: the factor $`1/(m_b-m_u)\simeq 1/m_b`$ balances against $`m_B^2-m_\pi ^2`$; and $`m_{K^{-}}^2/(m_s+m_u)`$ is nothing but the nonperturbative part of the pseudo-Goldstone boson mass formula, which is well defined within QCD but not yet very well determined. Although $`R_{su}`$ is technically related to an $`m_s`$-independent hadronic matrix element, in the form of Eq. (2), it becomes a sensitive probe of $`m_s`$ in a way that is analogous to $`K\to \pi \pi `$ decay and $`\epsilon ^{\prime }/\epsilon `$. The factors $`f_K`$ and $`F_0^{B\to \pi }`$ arise from evaluating hadronic matrix elements of four quark operators under factorization: the former comes from forming $`K^{-}`$ out of the vacuum via the $`\overline{s}\gamma _\mu \gamma _5u`$ current, the latter arises from the transition $`\overline{B}^0\to \pi ^+`$ via the $`\overline{u}\gamma _\mu b`$ current. While form factors are well defined, it is the reliance on models that causes us to lose track of the factorization scale $`\mu _f`$. Popular form factor models are the BSW model and light-cone sum rule (LCSR) evaluations. A recent compilation of models can be found in , but we shall treat form factors as fit parameters. The criteria for choosing the decay modes to include in the fit are as follows. First, a central value branching ratio (BR) with statistical and systematic errors must be available. Second, we exclude $`VV`$ modes such as $`\omega K^{\ast }`$ since there is insufficient data to constrain the extra form factors that enter. Third, we require that the experimental sensitivity (a few times $`10^{-6}`$ at present) be comparable to the range of factorization predictions. Only the $`\omega \pi ^0`$ final state, with a predicted BR below $`10^{-7}`$, is removed with this criterion. Since this and other suppressed decays such as $`\rho ^0\pi ^0`$, $`\pi ^0\pi ^0`$, $`\varphi \pi `$, $`K\overline{K}`$ and $`K^{\ast }\overline{K}`$ may well be dominated by FSI from other charmless final states, factorization is less likely to work well. We therefore propose to exclude these modes even when suitable measurements become available. The only exceptions to these rules are final states involving $`\eta `$ and $`\eta ^{\prime }`$. We prefer to apply the fit to predict their BRs rather than use them in the fit because the $`q\overline{q}`$ content and other issues of these mesons have recently been questioned. We note that the newly measured $`\eta K^{\ast }`$ modes, like $`\eta ^{\prime }K`$, are larger than previous theoretical predictions . We give the 14 measured BRs (averaged over $`B`$ and $`\overline{B}`$) that enter our fit in Table I, where we also give the fitted output. To limit the number of fit parameters, we use approximate relations as follows. We assume KM unitarity hence $`V_{ts}^{\ast }V_{tb}\simeq -|V_{cb}|`$ and $`V_{td}^{\ast }V_{tb}=-(V_{cd}^{\ast }V_{cb}+V_{ud}^{\ast }V_{ub})\simeq |V_{cb}|(\lambda -e^{-i\gamma }|V_{ub}/V_{cb}|)`$, where $`\lambda =|V_{us}|`$. Since $`\lambda -|V_{ub}/V_{cb}|\mathrm{cos}\gamma >0`$ always, as noticed in Ref. , T-P interference is opposite in sign for P-dominated and T-dominated modes such as $`K^{-}\pi ^+`$ and $`\pi ^{-}\pi ^+`$, leading to enhanced $`K^{-}\pi ^{+,0}`$ and suppressed $`\pi ^{-}\pi ^+`$ for $`\mathrm{cos}\gamma <0`$, in better agreement with data.
The chiral relation $`m_{K^{-}}^2/m_{\pi ^{-}}^2\simeq (m_s+m_u)/(m_d+m_u)`$ and the fact that $`m_s\gg m_{d,u}`$ give $`R_{su}\simeq R_{sd}\simeq R_{du}`$, while $`Q_{ij}=-R_{ij}`$ for $`VP`$ modes such as $`\rho \pi `$, $`\omega \pi `$ and $`\omega K`$. We use form factors at $`q^2=0`$ and $`F_1^{B\to P}=F_0^{B\to P}`$; $`F_0^{B\to K}/F_0^{B\to \pi }=1.13`$, which is consistent with both BSW and LCSR models; $`A_0^{B\to \omega }=A_0^{B\to \rho }`$; and $`A_0^{B\to K^{\ast }}=1.26A_0^{B\to \rho }`$ (used for predictions only). Surveying the amplitudes for modes in Table I, we find that just five parameters suffice for the fit: $`\gamma `$, $`|V_{ub}/V_{cb}|`$, $`R_{su}`$, $`F_0^{B\to \pi }`$ and $`A_0^{B\to \rho }`$. The function minimized by the fit is $$\chi ^2=\underset{i}{\sum }((\mathrm{BR}_{\mathrm{meas}}^i-\mathrm{BR}_{\mathrm{pred}}^i)/\sigma _{\mathrm{meas}}^i)^2+((0.08-|V_{ub}/V_{cb}|_{\mathrm{pred}})/0.02)^2,$$ (3) where we sum over the modes in Table I. The predicted BRs are calculated from formulas like Eq. (1) taken from Refs. . We have checked that we confirm the $`N_c=3`$ results of AKL and CCTY with the same input parameters. We take into account the full (asymmetric) experimental errors and correlations in $`K^{-}\pi ^+/\pi ^{-}\pi ^+`$, $`K^{-}\pi ^0/\pi ^{-}\pi ^0`$ and $`\omega K^{-}/\omega \pi ^{-}`$ measurements, where the correlation coefficients are $`-0.15`$, $`-0.29`$ and $`-0.17`$, respectively. The fit is able to nearly optimally use the information for each of these modes individually, though $`K/\pi `$ separation improvements in the next round of experiments will help in this regard. For simplicity, we assume that systematic errors have the same correlation coefficient as statistical errors, i.e. we apply the correlation coefficient to the total error, with all errors combined in quadrature. If the best fit value is below the experimental central value, the low-side experimental error is used, and conversely the high-side. To understand the behavior of the $`\chi ^2`$ function in the 5D fit parameter space, its dependence on various fixed parameters or the exclusion of certain experimental measurements, we have explored many variants of our nominal fit. In all cases we find $`\gamma >100^{\circ }`$. Our nominal fit results, with CCTY $`a_i`$ values, are given in Table II. The $`\chi ^2`$ per degree of freedom (DOF) in the last column indicates the good quality of the fit. We choose CCTY rather than AKL as nominal only because of their claim of improved gauge dependence of the $`a_i`$. The fit values for AKL (see Table II) differ only slightly from our nominal, and mostly because of the larger $`|a_{4,6}|`$ found by these authors. We note that $`R_{su}`$ for AKL input should be smaller than the CCTY case since quark masses are defined at $`\mu =2.5`$ GeV rather than $`m_b`$. We have also checked that the strong phases of $`a_{4,6}`$ have little impact on our fitted $`\gamma `$ value. The $`\chi ^2`$ vs. $`\gamma `$ curves for the nominal fit with CCTY input are shown in Fig. 1. We note that our $`\gamma `$ value has a two-fold ambiguity since the fit is sensitive to $`\mathrm{cos}\gamma `$ rather than $`\gamma `$. From the contributions from individual modes given in Fig. 1(b), we see that the main discriminator for favoring large $`\gamma `$ comes from $`K^{-}\pi `$ and $`\pi ^{-}\pi `$ modes, and, somewhat surprisingly, the $`\omega K^{-}`$ and $`\varphi K^{-}`$ modes. The situation for $`\varphi K^{-}`$ is a result of the procedure of minimizing $`\chi ^2`$ for each $`\gamma `$ value.
This induces an apparent sensitivity, due to changes in the other parameters, where there is no direct dependence. The error on $`|V_{ub}/V_{cb}|`$ returned by the fit is only marginally better than the conservative range $`|V_{ub}/V_{cb}|=0.08\pm 0.02`$ used as an additional term in Eq. (3). Sensitivity to $`|V_{ub}/V_{cb}|`$ largely comes from $`\overline{B}\to \rho ^0\pi ^{-},\rho ^{\pm }\pi ^{\mp }`$ and $`\omega \pi ^{-}`$, all of which depend on $`A_0^{B\to \rho }`$. Removing the constraint on $`|V_{ub}/V_{cb}|`$ from the fit gives higher $`|V_{ub}/V_{cb}|`$ with large errors and strongly correlated with $`A_0^{B\to \rho }`$ (and between $`R_{su}`$ and $`F_0^{B\to \pi }`$ as well), but $`\gamma =121_{-24}^{+31}`$ degrees remains close to nominal (see Table II). The fit favors large $`R_{su}`$ ($`R_{du}`$) since it facilitates the enhancement (suppression) of $`K^{-}\pi ^{+,0}`$ ($`\pi ^{-}\pi ^+`$) modes. Furthermore, under factorization, the BRs for $`\omega \overline{K}`$ modes are enhanced only for large $`R_{su}`$ such that $`a_4`$ and $`a_6`$ penguin contributions do not cancel fully. We have checked that when the $`\omega \overline{K}^0`$ mode is removed from the fit, there is no significant change in $`\gamma `$, though $`R_{su}`$ drops to 1.69. Our nominal $`R_{su}`$ fit value implies $`m_s=58_{-11}^{+14}`$, $`67_{-13}^{+16}`$ MeV at the $`m_b`$ and 2 GeV scales, respectively. This is lower than what is commonly used in most previous calculations, but consistent with recent unquenched lattice results which give $`m_s(2\mathrm{GeV})=84\pm 7`$ MeV . In addition, recent experimental results for $`\epsilon ^{\prime }/\epsilon `$ can be reconciled better with theoretical predictions if a smaller value of $`m_s`$ is used . For comparison, the result for fixing $`R_{su}=1.21`$ ($`m_s(m_b)=90`$ MeV) is given in Table II. Note that $`R_{su}`$ is anti-correlated with $`a_6`$ since only the product appears in the amplitudes. As for the form factors, our fitted $`F_0^{B\to \pi }`$ ($`A_0^{B\to \rho }`$) is lower (higher) than, but consistent with, the LCSR result of $`0.305\pm 0.046`$ ($`0.372\pm 0.074`$). Predictions from our fit for some selected modes are given in Table III. The agreement with the newly measured $`\eta \overline{K}^{\ast }`$ modes is rather striking. An enhancement factor of 1.7 comes from $`A_0^{B\to K^{\ast }}\simeq 0.60`$ compared to the LCSR value of $`A_0^{B\to K^{\ast }}\simeq 0.47`$, the rest coming from our low $`m_s`$. The $`\eta ^{\prime }\overline{K}^{\ast }`$ modes are comparable in size to the observed $`\eta \overline{K}^{\ast }`$ modes. Since we can account for less than half the rate of $`\eta ^{\prime }\overline{K}`$ modes and the missing contribution may well be specific to the $`\eta ^{\prime }`$ decay modes, our predicted $`\eta ^{\prime }\overline{K}^{\ast }`$ rates should be viewed with some caution. The $`\rho ^{-}\pi ^0`$ mode is suppressed by $`\mathrm{cos}\gamma <0`$, smaller $`F_1^{B\to \pi }`$ (which also suppresses $`\overline{K}^{\ast }\pi `$ modes), plus destructive interference between two terms because of low $`m_s`$. The $`\rho K`$ modes are enhanced by the low value of $`m_s`$ and the larger $`A_0^{B\to \rho }`$ form factor, except for $`\rho ^0K^{-}`$, which is suppressed compared to $`\rho ^+K^{-}`$ by destructive interference between the strong $`a_6`$ and electroweak $`a_9`$ penguin terms (a similar effect suppresses $`\overline{K}^{(\ast )0}\pi ^0`$ modes). As an aside, we give the "penguin pollution" as determined from our fit.
Defining $`T`$ ($`P`$) as the amplitude arising from $`a_{1-2}`$ ($`a_{3-10}`$), we find the ratio $`|P/T|`$ in $`\pi ^+\pi ^{-}`$ ($`\rho ^0\pi ^{\pm }`$) to be $`0.37\pm 0.04`$ ($`0.20\pm 0.04`$) for our nominal fit. For comparison, the CCTY result for $`N_c=3`$, $`\gamma \simeq 65^{\circ }`$, $`|V_{ub}/V_{cb}|=0.090`$, $`m_s(m_b)=90`$ MeV using LCSR form factors gives 0.20 (0.10). How do we reconcile this with the usual fit to $`B`$ and $`K`$ data other than charmless rare $`B`$ decays, which gives a 95% C.L. range that excludes $`\mathrm{cos}\gamma <0`$? The removal of the second quadrant in these fits results mostly from combining recent bounds on $`B_s`$ mixing from LEP, CDF and SLD, with lattice QCD results that relate $`B_d`$ and $`B_s`$ mixing parameters . Thus, our results suggest that $`B_s`$ mixing could be very close to the present limit. It should be stressed that the goodness of our fit (see Table II) suggests that corrections to factorization may be small compared with the present experimental precision. It is reassuring that the hadronic parameters from our fit are not at variance with theoretical expectations. We note that $`\gamma `$ is the most stable parameter in the fit, with $`\mathrm{cos}\gamma <0`$ for all variations we have considered. This is because $`\gamma `$ directly controls the "double-slit" interference, while other parameters enter indirectly. We note that our larger value of $`\gamma `$ tends to reduce the value of $`\mathrm{sin}2\beta `$. In conclusion, we have made a model-dependent determination of $`\gamma =114_{-21}^{+25}`$ degrees. It will be interesting to see if future, more precise measurements will confirm this result and the predictions in our tables. This work is supported in part by grants from the US DOE and NSC of Taiwan, R.O.C. We thank our CLEO colleagues for the excellent data.
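As an illustration of how Eq. (2) turns the fitted $`R_{su}`$ into a strange-quark mass, the inversion can be written in a few lines of Python. This is a sketch under stated assumptions, not the fit code itself: the charged-kaon mass and the running $`m_b(m_b)=4.48`$ GeV below are illustrative inputs, chosen so that $`R_{su}=1.21`$ reproduces the $`m_s(m_b)=90`$ MeV quoted above, and $`m_u`$ is neglected:

```python
# Invert Eq. (2), R_su = 2 m_K^2 / ((m_b - m_u)(m_s + m_u)), for m_s.
m_K = 0.4937   # charged-kaon mass [GeV]
m_b = 4.48     # assumed running b-quark mass at the m_b scale [GeV]
m_u = 0.0      # m_u neglected against m_b and m_s

def m_s_from_Rsu(R_su):
    """Strange-quark mass [GeV] implied by a given R_su."""
    return 2.0 * m_K**2 / (R_su * (m_b - m_u)) - m_u

for R_su in (1.21, 1.69):
    print(f"R_su = {R_su:.2f} -> m_s(m_b) = {1e3 * m_s_from_Rsu(R_su):.0f} MeV")
# R_su = 1.21 -> ~90 MeV; R_su = 1.69 (the fit without omega K0bar) -> ~64 MeV
```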
# The observational evidence pertinent to possible kick mechanisms in neutron stars ## 1 Introduction It is now widely accepted that the velocities observed for pulsars include a significant component from kicks experienced by the neutron stars in the process of their formation. The basis for this view is almost totally empirical, with a variety of different kinds of observations all pointing to the existence of an impulsive transfer of momentum to the protoneutron star at birth (Shklovskii 1970; Gunn & Ostriker 1970; van den Heuvel & van Paradijs 1997). This implies an asymmetry in the ejection process, but as yet there is no consensus on any plausible mechanism for providing such an asymmetry. Mechanisms suggested range from hydrodynamical instabilities to those in which asymmetric neutrino emission is postulated (Burrows 1987; Keil et al. 1996; Horowitz & Li 1997, hep-ph/9701214; Lai & Qian 1998, astro-ph/9802344; Spruit & Phinney 1998). As far as the latter class is concerned, it appears, and very reasonably so, that if the neutrinos can impart momentum to the matter, the reverse must also happen, and the thermal equilibrium of the matter must necessarily destroy any incipient asymmetry in the neutrinosphere. And any asymmetry developed above the neutrino-matter decoupling layer is by definition incapable of imparting any momentum to the matter (Bludman 1998, private communication). Whatever the operative mechanism for creating the asymmetry, it is an important and pertinent question to ask if the resulting direction is a random one, or connected with some basic property of the protoneutron star. Two such essential vectors associated with the core of the collapsing star are its rotational and magnetic axes, and both have been invoked in mechanisms proposed in the literature (Harrison & Tademaru 1975a,b; Burrows & Hayes 1996; Kusenko & Segre 1996). Any such mechanism that is postulated to provide the asymmetry must leave its signature in the direction and magnitude of the imparted velocity, thus enabling a possible test of the theory by comparison with observations. An important recent investigation in this connection is that of Spruit and Phinney (1998). They argue strongly that the cores of the progenitors of neutron stars cannot have the angular momentum to explain the rotation of pulsars and propose birth kicks as the origin of their spins. These authors do not specify any particular physical process as responsible for the "kick", but emphasize that unless its force is exerted exactly head-on it must also cause the neutron star to rotate. As both the velocity and the spin of the neutron star have a common cause according to this hypothesis, it is not unreasonable to expect testable correlations, as we shall discuss a little later. Independently, Cowsik (1998) has also advanced a similar common origin for the proper motion and spin of pulsars. The first suggestion of this possibility was by Burrows et al. (1995). The quantities on which comparisons with observations can be made are the direction and magnitude of the proper motion, the projected direction of the magnetic axis, the magnitude of the magnetic field, the direction of the rotation axis and the initial period of rotation. Of these it is only the last that is not accessible to observation. We are left with five quantities which may be interrelated, depending on the mechanism which causes the asymmetry.
Several types of correlations between these quantities have been sought, and even claimed in the past, motivated by suggestions of possible kick mechanisms. Our approach in this investigation is to examine without prejudice an enlarged body of pulsar data now available for any correlations which could support or rule out various suggested explanations. As is widely practised, we shall assume that $`\alpha `$, the angle the magnetic axis makes with respect to the rotational axis, and $`\beta `$, to the line of sight, can both be derived from accurate measurements of the core-component widths and the sweep of the linear polarisation through the pulse window. The intrinsic angle of polarisation at the point of inflexion of the sweep then gives us the projection of the rotational axis, and $`\alpha +\beta =\zeta `$ the complement of the angle it makes to the plane of the sky. We shall also assume that the observed proper motion is due only to the kick received at birth, and return later to a discussion of contributions to the velocity from motion of the progenitor in a binary system, or from its runaway velocity from a previous disruption. The simplest models from the point of view of testability are those which predict an acceleration strictly along the rotational axis, such as the rocket mechanism of Harrison and Tademaru (1975a; 1975b). The simplicity is due to projection on the sky plane not affecting the alignment expected of the rotation axis and the velocity direction (Morris et al. 1976). They argued that the direction of the proper motion must be the same as that of the spin axis projected on the plane of the sky. Based on the thirteen samples available at that time, they concluded that the acceleration mechanism suggested by Harrison & Tademaru is not supported by observations. A similar conclusion was reached by Anderson & Lyne (1983). In this paper, we use high-quality polarisation and proper motion observations on a larger sample to test for preferential alignment of the pulsar velocity with either the rotation or the magnetic axis. ## 2 Sample selection We have carefully selected for our study a sample of 29 pulsars for which we estimated the intrinsic position angle (IPA) from observations available in the literature (Morris et al. 1981; McCulloch et al. 1978; Manchester et al. 1980; Xilouris et al. 1991). The transverse velocities were computed from the proper motion measurements of Lyne, Anderson & Salter (1982) and Harrison, Lyne & Anderson (1993), along with distances estimated using the electron density distribution model of Taylor & Cordes (1993). It may be noted that the distance information is required not only for calculating velocities, but also for calculating the proper motion direction, as a correction for differential galactic rotation needs to be incorporated. The selected list of pulsars, together with the calculated directions of the intrinsic position angles and the proper motions, is given in Table 1. It is worth mentioning here that most of the data used by Anderson & Lyne (1983) were from Morris et al. (1979; 1981), where the position angle of polarisation was measured at the centre of the average pulse profiles. This, as we have since found, can be a source of significant error in the estimation of the IPA. We have included in the present comparison only those objects where the point of inflexion in the position angle sweep could be clearly identified.
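For reference, the conversion from measured proper motion and model distance to transverse velocity follows the standard relation $`V_t=4.74\mu d`$, with $`\mu `$ in milliarcseconds per year and $`d`$ in kpc. A minimal sketch (the numbers below are illustrative, not entries from Table 1):

```python
def transverse_velocity(mu_mas_per_yr, d_kpc):
    """Transverse velocity in km/s from the total proper motion (mas/yr)
    and distance (kpc); the factor 4.74 km/s corresponds to 1 AU/yr."""
    return 4.74 * mu_mas_per_yr * d_kpc

# Illustrative (hypothetical) values:
print(transverse_velocity(30.0, 1.5))  # ~213 km/s
```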
## 3 Velocity parallel to rotation axis If the velocity vector of the neutron star is along its rotation axis, then either the underlying mechanism itself produces acceleration along the rotation axis, or the time scale over which the asymmetry is produced during the supernova explosion is much longer than the rotation period of the star, thus averaging out the azimuthal component. In both cases, the direction of the proper motion vector in the plane of the sky should be the same as the projected direction of the rotation axis. Figure 1a shows the distribution of the proper motion directions and the IPAs for 29 pulsars. The values of the IPAs have been obtained after correcting for interstellar Faraday rotation. In the lower panel the distribution of the difference between these two angles is shown. Pulsars (14 out of the 29) with orthogonal flips in the polarisation position angles are included with the IPA assigned two possible values $`90^{\circ }`$ apart. Since the observational errors are not the same for all the points, each point has been weighted in proportion to the reciprocal of the error in the angle difference. The distribution has been smoothed with a 10-degree-wide smoothing window. As is evident from this figure, there is no significant peak at either 0 or 90 degrees. Thus, even with the improved data set, no significant alignment between the rotation axis and the direction of proper motion is seen. ## 4 ‘Kicks’ along the magnetic axis of the star We consider next mechanisms that would predict one or more momentum impulses directed along the magnetic axis and proportional to the strength of the field. In such a scenario, the resultant direction of the motion would depend on the duration of the impulse as compared to the (unknown) period of rotation at the epoch of the impulse. Short impulses would accelerate the neutron star along the instantaneous (and unknown) direction of the magnetic axis, and long impulses, with net duration longer than $`50\%`$ of the rotation period of the star, would result in motion increasingly along the rotational axis due to averaging, and of a magnitude now proportional to $`\mathrm{cos}\alpha `$ times the magnetic field strength, $`B`$. We have already shown that there is no correlation between the directions of the proper motions and the projected rotation axes. This appears to rule out slow impulses and leave only the case of short impulses along the magnetic axis to be considered. But before going on to the short-impulse case, let us look at a possible correlation between the magnetic field strength and the magnitude of the observed velocity for long duration impulses. ### 4.1 Field strength vs velocity The fact that we see a given pulsar means that the angle between the rotation axis and the plane of the sky is $`[90-(\alpha +\beta )]^{\circ }`$, where $`\beta `$ is the minimum angle between the line of sight and the magnetic axis (impact angle). Therefore, if the magnitude of the extended impulse is proportional to the strength of the magnetic field, the net transverse velocity of the pulsar must be proportional to $`B\mathrm{cos}(\alpha )\mathrm{sin}(\alpha +\beta )`$. In most pulsars, $`\beta `$ is much smaller than $`\alpha `$ (and often its sign is not known), and the observed transverse velocity should then be approximately proportional to $`B\mathrm{sin}(2\alpha )/2`$, i.e., the transverse velocity reaches at most half of the potential kick, when $`\alpha =45`$ degrees.
Incorporating all these considerations, we examined $`V_{\mathrm{pm}}/\mathrm{sin}(\alpha +\beta )`$ vs $`B\mathrm{cos}\alpha `$ for a sample of 44 pulsars for which the relevant quantities are known reliably (for example, pulsars with more than 50% uncertainty in proper motion have been excluded). The values of $`\alpha `$, $`\beta `$ used here are taken from Rankin (1993); using instead the $`\alpha `$, $`\beta `$ values from Lyne & Manchester (1988) leads to a slight worsening of the correlation. Any correlation present does not appear to be statistically robust. For normal pulsars (rotation periods longer than 25 milliseconds), the correlation coefficient is $`0.35\pm 0.15`$ (see Fig. 2). This seems to improve marginally, to $`0.65\pm 0.25`$, if pulsars with large errors in their proper motion are weighted down, but the correlation is by no means statistically significant. This is in agreement with the earlier work of Lorimer, Lyne & Anderson (1995), Birkel & Toldra (1997) and Cordes & Chernoff (1998). ### 4.2 The case of single short-lived kicks As pointed out above, the direction of the magnetic axis projected on the plane of the sky at the time of explosion is an unknown quantity. If the rotation axis is located in the plane of the sky, then the projected magnetic axis in the plane of the sky will be within $`\pm \alpha `$ of the rotation axis. However, when the direction of the rotation axis is oriented close to the line of sight, the varying angle between the projected magnetic and rotation axes will always exceed the above range. To assess this issue in detail, let us define the z-axis as pointing towards the observer; then x-y is the plane of the sky. Choose the x-axis to be along the projected rotation axis on the plane of the sky (i.e. aligned with the intrinsic PA). Then it is easy to see that, as the star rotates, the components of the kick imparted along the instantaneous magnetic-field direction (and proportional to $`B`$, the magnetic-field strength) are given by $$V_x=kB\left[\mathrm{cos}\alpha \mathrm{sin}(\alpha +\beta )-\mathrm{sin}\alpha \mathrm{cos}(\alpha +\beta )\mathrm{cos}\varphi \right]$$ $`(1)`$ $$V_y=kB\left[\mathrm{sin}\alpha \mathrm{sin}\varphi \right]$$ $`(2)`$ $$V_z=kB\left[\mathrm{cos}\alpha \mathrm{cos}(\alpha +\beta )+\mathrm{sin}\alpha \mathrm{sin}(\alpha +\beta )\mathrm{cos}\varphi \right]$$ $`(3)`$ where $`k`$ is a constant of proportionality, and $`\varphi `$ is the rotation phase (like the pulse longitude; $`\varphi `$=0 for the closest angle between the magnetic-field direction and the sight-line). Then it follows that the angle ($`\theta `$) which the proper-motion direction would make with respect to the intrinsic PA direction is given by $$\mathrm{tan}\theta =\frac{\mathrm{sin}\alpha \mathrm{sin}\varphi }{\mathrm{cos}\alpha \mathrm{sin}\zeta -\mathrm{sin}\alpha \mathrm{cos}\zeta \mathrm{cos}\varphi }$$ $`(4)`$ where $`\zeta =\alpha +\beta `$. This is exactly the expression describing the sweep of the position angle of pulsar polarisation in the model of Radhakrishnan & Cooke (1969). The magnitude of the transverse velocity is then simply $`V_{xy}=\sqrt{V_x^2+V_y^2}`$. With this understanding, we explore the following two approaches. a) We take the IPA and proper-motion direction at their face value. Their relative angle (i.e.
the difference) should allow us to compute $`\varphi `$ (never mind the sense of rotation), giving us 8 possible solutions, of which we need to consider only the 4 independent ones (say $`\varphi _1`$ to $`\varphi _4`$) given the symmetry in the problem; the allowance for orthogonal flips in the polarisation position angle makes the possible solutions 8 instead of the 4 that come from simple projection considerations and the left-right symmetry. We used these values, $`\varphi _1`$ to $`\varphi _4`$, to estimate the expected magnitudes of the proper motions and compared them with the measured values (the sample size here is smaller than that shown in Fig. 2, due to limited measurements of the IPA and proper-motion direction). Even in the best case any correlation present was not significant, and hence no clear inference was possible. b) In a second, more statistical approach, we compute for each of the sample pulsars the distribution of the relative angle $`(V_{\mathrm{pm}}-\mathrm{IPA})`$ based on the known geometry (along with the above equations) and by assuming that $`\varphi `$ is distributed uniformly over its range. A combined distribution computed this way for a single short impulse is not significantly different from the observed distribution in Fig. 1. Assuming impulses of longer durations leads to a significantly greater expectation of alignment, which is not seen. The observations therefore suggest that if impulses are along the magnetic axis, they must be confined to a small fraction of the rotational phase cycle of the star. ## 5 Birth kicks as the origin of pulsar rotation The mechanisms examined so far assumed ‘radial kicks’ which did not affect the rotation rate of the star. As noted already, Spruit & Phinney (1998, hereafter SP) and Cowsik (1998) have explored the possibility of non-radial random kicks which would impart both net linear and angular momenta to the star. Noting the significance of this mechanism, we have looked at this issue in some detail. SP give the results of their simulations for a case of 4 random non-radial kicks that impart velocities and spins of magnitudes such as observed. We examine this model over a range of parameters, such as the number of momentum impulses, their magnitudes and durations. For each combination of the parameters, we examine the resultant distribution of the angle that the apparent motion would make relative to the projection of the rotation axis of the star in the plane of the sky. We of course compare the resultant distribution with what is observed (Fig. 1). We select for our examination only the subset that most corresponds to the expected dispersion in the velocity and in the rotation rates of the sample. From our numerical simulation (5000 sample stars considered in each case), we find the following. As noted by SP, a single impulse gives linear motion in a plane perpendicular to the rotation axis. In the plane of the sky, the projected direction of the star motion relative to that of the spin axis would be expected to show a significant bias towards $`90^{\circ }`$, as illustrated in Fig. 3a. This is $`not`$ observed, as seen in Fig. 1. Hence, single impulses being responsible for both the birth velocity and spin can be ruled out. This holds irrespective of their duration, as the rotation axis itself is defined by the action of the impulse.
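For concreteness, the essence of this numerical experiment can be captured in a schematic Monte Carlo. The sketch below is our own simplified illustration, not the production simulation: it assumes instantaneous, randomly placed, isotropically directed unit impulses on the stellar surface, and it omits the finite kick durations and the velocity/spin selection described above:

```python
import numpy as np

rng = np.random.default_rng(1)

def sky_angles(n_kicks, n_stars=5000):
    """Angle (0-90 deg) between the sky-projected velocity and spin axis
    for stars receiving n_kicks random impulses on a unit sphere."""
    angles = np.empty(n_stars)
    for s in range(n_stars):
        P = np.zeros(3)  # accumulated linear momentum (arbitrary units)
        L = np.zeros(3)  # accumulated angular momentum about the centre
        for _ in range(n_kicks):
            r = rng.normal(size=3); r /= np.linalg.norm(r)  # impact point
            f = rng.normal(size=3); f /= np.linalg.norm(f)  # impulse direction
            P += f
            L += np.cross(r, f)
        n = rng.normal(size=3); n /= np.linalg.norm(n)      # random line of sight
        P_sky = P - P.dot(n) * n                            # sky-plane projections
        L_sky = L - L.dot(n) * n
        c = abs(P_sky.dot(L_sky)) / (np.linalg.norm(P_sky) * np.linalg.norm(L_sky))
        angles[s] = np.degrees(np.arccos(np.clip(c, 0.0, 1.0)))
    return angles

for n in (1, 2, 4):
    a = sky_angles(n)
    print(f"{n} kick(s): fraction with relative angle > 60 deg = {(a > 60).mean():.2f}")
```

For a single kick the linear and angular momenta are strictly perpendicular, so the projected relative angles pile up towards 90 degrees; for two or more kicks the distribution flattens out. This is the single-impulse signature referred to above.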
The strongest basis for this conclusion comes from the knowledge of the particular orientation angles of the pulsars in our sample. Hence, in such models, there must be two or more randomly directed impulses to be consistent with the observations. As might be expected, the strength of the impulses required to obtain the observed range of velocities and spin rates will be weaker (by a factor $`\sqrt{N}`$) for an increasing number of impulses ($`N`$). The velocity dispersion as a function of the duration of a momentum impulse of a given magnitude remains constant up to a certain critical duration ($`\tau _c`$), above which the azimuthal averaging reduces the resultant velocities. Following SP, if we assume the impact radius (radius of the protoneutron star) to be three times the radius of the neutron star, we find $`\tau _c`$ to be about 9 times the corresponding resultant rotation period of the star, as also pointed out by SP. The velocity dispersion levels off (at a reduced value) when the impulse duration is much larger than $`\tau _c`$. In contrast, the resultant spin rate is independent of the impulse duration but varies linearly with its average magnitude ($`I`$). Thus, the product $`(I\times \tau _c)`$ is a constant that depends on the number of impulses and the star's mass and its moment of inertia. For impulse durations ($`\tau `$) much smaller than $`\tau _c`$, both the linear and angular momenta grow as $`\sqrt{N}`$ but the angle between them becomes random. However, for relatively longer-duration impulses a significant preference of the direction of the linear momentum develops towards the spin axis, which itself is evolving. This bias is illustrated in Fig. 3b. Since the data on the relative inclinations do not show such a preference, we conclude that the impulse durations must be shorter than or equal to the corresponding critical duration $`\tau _c`$. This also means that no significant reduction in the expected net velocities occurs due to azimuthal averaging during rotation (the reduction in the net velocity/spin due to the randomness in the impulse directions will of course continue to exist), and therefore somewhat weaker impulses can account for the observed velocities. However, the corresponding spin rate would then be smaller than with long-duration impulses. If the duration of each kick in the 4-kick situation considered by SP is 0.32 seconds, we find that about 55% of the population should appear to have relative inclinations between $`0^{\circ }`$-$`30^{\circ }`$, compared to about 20% in the $`60^{\circ }`$-$`90^{\circ }`$ range. Even after due allowance for possible selection effects etc., we find that the expected distribution of the relative directions is far from what is observed. Hence, both the duration and the magnitude of the impulses in the example considered by SP are to be reduced, by factors of 5 or more and of 2, respectively. ## 6 Discussion We have shown in this work that given the present sample of radio pulsars for which we have reliable proper motion and polarisation measurements, no significant correlation exists between the magnetic field strength and the magnitude of the spatial velocities of pulsars, or between the projected directions of the rotation axis (and/or the magnetic axis) and the direction of the proper motion vector. This has fundamental implications for the mechanisms producing the asymmetric supernova kick velocities, as the observations do not support any mechanism producing net kick velocities parallel to the rotation axis.
This therefore rules out momentum impulses of any duration along the rotation axis, as well as any long-duration (compared to the rotation period) impulses along any one fixed axis, for example the magnetic axis. Among the most elegant suggestions to explain the origin of the kick speeds and the initial rotation periods of pulsars are those of Spruit & Phinney (1998) and Cowsik (1998). It has been possible to quantitatively assess the expected distribution of proper-motion directions within the framework of the SP model. Our simulations, and the comparison of their results with the observations, show that single impulses are ruled out, but not 2 or more impulses of relatively short duration. The durations have to be short enough not to cause any significant azimuthal averaging of the radial component of the impulse. The same conclusions should also apply to the model of Cowsik (1998). However, his expectation of an inverse correlation between velocity and initial rotation period (also applicable to SP) cannot be tested readily. In the above analysis, we have ignored an important aspect of the evolutionary history of pulsars. As we know, a good fraction of massive stars are in binary or multiple systems. This implies, almost by definition, that a considerable fraction of pulsars are born in binary systems. Most of these systems are disrupted during the first supernova explosion in the binary. Therefore, the spatial velocities of such pulsars must still retain some memory of their binary origin. The possible contribution to the post-explosion speeds from the pre-explosion orbital motion can be appreciable (Radhakrishnan & Shukre 1985; Bailes 1989), particularly since a predominant number of pulsars seem to have speeds of the order of only about 200 km/s (Hansen & Phinney 1997; Blaauw & Ramachandran 1998). Moreover, the analysis of Deshpande et al. (1995) shows that a considerable fraction of pulsars may be born at large heights from the galactic plane, again suggesting birth in binary systems that have run away from the plane. It must be emphasized therefore that our conclusions above are valid only if the velocities are derived solely from natal kicks. To estimate this ‘contamination’ from the progenitor orbital velocities, an identification of an origin in OB associations for as many pulsars as possible might throw much light. In some of these cases, the pulsar progenitor may perhaps be identified with the progenitor of a runaway OB star. We thank the referee for pointing out this possibility, as well as the inadequacy of the present proper-motion determinations for such accurate backtracking, and for emphasizing that for this problem increased accuracy of the proper motions of known pulsars is more important than increasing the sample of known pulsars. More accurate information on the statistics of orbital velocities in the pre-explosion stage of massive double-star systems would also be very important. An interesting case of observational evidence relating to the direction of the birth kick is the recent work by Wex et al. (1999, private communication). Through a detailed modelling of the pre-explosion binary progenitor of PSR B1913+16, they find that the direction of the kick velocity must have been almost opposite to that of the orbital velocity of the exploding component in order not to have disrupted the system.
In such binary systems, one expects the rotational angular momentum of the star to be parallel to that of the binary orbit, owing to the evolutionary history. The kick velocity could therefore not have been along the rotation axis of the star. An orbital velocity contribution can be viewed as arising from a ‘kick’, but one which is radial, i.e. it does not contribute to the spin of the star. It is easy to see that such a contribution to the velocity can influence the direction of the net motion, significantly so when its magnitude is comparable to or greater than the contribution from the natal kicks. In such cases, the resultant proper-motion direction (with respect to the spin axis) should become more and more random as the relative contribution from the orbital motion increases. We have verified this expectation through simulation, by including an initial velocity component in a random direction and of varying magnitude. In all the cases discussed earlier, where the relative directions of the proper motion were biased towards the projection of the spin axis (or orthogonal to it), we see, as would be expected, a significant reduction in the bias. In fact, when the two possible contributions to the motion are about equal, the expected distribution of the relative directions becomes indistinguishable from the observed one (Fig. 1)! This is so independent of the kick durations, thus not necessarily requiring short-duration kicks. The role of the duration of the kicks is limited to deciding only whether azimuthal averaging occurs or not. Long-duration kicks will only reduce the net contribution of the natal kicks to the proper motion, and will no longer be relevant in deciding the direction of the ‘net’ motion. Our conclusions are therefore: 1. Mechanisms predicting a correlation between the rotation axis and the pulsar velocity are ruled out by the observations. This includes single long-duration radial kicks along any fixed axis of the star. 2. There is no significant correlation between the magnetic field strength and the velocity. 3. If asymmetric acceleration at birth is responsible for both the rotation and the velocity of the pulsar, the observations rule out single impulses of any duration and multiple extended-duration impulses. 4. The above conclusions lose their significance if there is a substantial contribution to pulsar velocities from the orbital (or runaway) motion of the progenitor. ###### Acknowledgements. We are grateful to the referee Adrian Blaauw for his valuable comments.
# A BeppoSAX observation of the merging cluster Abell 3266 ## 1. Introduction Abell 3266 (hereafter A3266), also known as Sersic 40/6, is a rich, nearby (z $`=`$ 0.055; Teague et al. 1990) cluster of galaxies. It has been extensively studied at both optical and X-ray wavelengths. In the optical band various authors have studied the dynamics of this cluster by analyzing the velocity dispersion of a large number of galaxies (e.g. Teague et al. 1990, 152 galaxies; Quintana, Ramirez & Way 1996, hereafter QRW, 387 galaxies). QRW found evidence of a decrease of the velocity dispersion with increasing distance from the cluster core. Similar results were also reported by Girardi et al. (1997). The velocity dispersion radial gradient and the presence of a distorted central dumb-bell galaxy have been interpreted by QRW as evidence of a recent merger along the NE–SW direction. According to the above authors, the two subclusters started colliding about 4 Gyr ago, with the central cores coming together in the last 1–2 Gyr. X-ray observations with the Einstein HRI (Mohr, Fabricant & Geller 1993) and the ROSAT PSPC (Mohr, Mathiesen & Evrard 1999) have confirmed that A3266 is far from being a relaxed cluster. The isophotes on the few hundred kpc scale are elongated in the NE–SW direction, while on the few Mpc scale the elongation shifts to the E–W direction. The azimuthally averaged surface brightness profile (see figure 9 of Mohr, Mathiesen & Evrard 1999) is characterized by a relatively large core radius of $`\sim `$ 500 kpc and is not well fitted by a $`\beta `$ model, confirming the non-relaxed status of this cluster. Peres et al. (1998), by applying the deprojection technique (Fabian et al. 1980), found no evidence of a cooling flow in the core of A3266. David et al. (1993), using Einstein MPC data, report a global temperature of $`6.2_{-0.4}^{+0.5}`$ keV for A3266. Markevitch et al. (1998), from the analysis of ASCA data, found evidence of a strong temperature gradient in A3266. The projected temperature was found to decrease from $`\sim `$ 10 keV to $`\sim `$ 5 keV when going from the cluster core out to $`\sim `$ 1.5 Mpc. Temperature maps of A3266 (Markevitch et al. 1998; Henriksen, Donnelly & Davis 1999) indicate an asymmetric temperature pattern, which could be associated with the ongoing merger. Irwin, Bregman & Evrard (1999), who have used ROSAT PSPC data to search for temperature gradients in a sample of galaxy clusters including A3266, in contrast with Markevitch et al. (1998), did not find any evidence of a temperature gradient in A3266. Mushotzky (1984), using HEAO1 A2 data, found a value of 0.4$`\pm `$0.2, in solar units, for the Fe abundance of A3266. In this Letter we report a recent BeppoSAX observation of A3266. We use our data to perform an independent measurement of the temperature profile and two-dimensional temperature map of A3266. We also present the first abundance profile and map of A3266 and the first measurement of the hard (15–50 keV) X-ray spectrum of A3266. The outline of the Letter is as follows. In section 2 we give some information on the BeppoSAX observation of A3266 and on the data preparation. In section 3 we present the analysis of the broad band (2–50 keV) spectrum of A3266. In section 4 we present spatially resolved measurements of the temperature and metal abundance. In section 5 we discuss our results and compare them to previous findings. Throughout this Letter we assume $`H_0=50`$ km s⁻¹ Mpc⁻¹ and $`q_0=0.5`$.
## 2. Observation and Data Preparation The cluster A3266 was observed by the BeppoSAX satellite (Boella et al. 1997a) between the 24th and the 26th of March 1998. We will discuss here data from two of the instruments onboard BeppoSAX: the MECS and the PDS. The MECS (Boella et al. 1997b) is presently composed of two units working in the 1–10 keV energy range. At 6 keV, the energy resolution is $`\sim `$8% and the angular resolution is $`\sim `$0.7′ (FWHM). The PDS instrument (Frontera et al. 1997) is a passively collimated detector (about 1.5$`\times `$1.5 degrees f.o.v.), working in the 13–200 keV energy range. Standard reduction procedures and screening criteria have been adopted to produce linearized and equalized event files. Both MECS and PDS data preparation and linearization were performed using the Saxdas package under the Ftools environment. The effective exposure time of the observation was $`7.6\times 10^4`$ s (MECS) and $`3.2\times 10^4`$ s (PDS). The observed count rate for A3266 was 0.488$`\pm `$0.003 cts/s for the 2 MECS units and 0.23$`\pm `$0.03 cts/s for the PDS instrument. All MECS spectra discussed in this Letter have been background subtracted using spectra extracted from blank sky event files in the same region of the detector as the source. All spectral fits have been performed using XSPEC Ver. 10.00. Quoted confidence intervals are 68$`\%`$ for 1 interesting parameter (i.e. $`\mathrm{\Delta }\chi ^2=1`$), unless otherwise stated. ## 3. Broad Band Spectroscopy We have extracted a MECS spectrum, in the 2–10 keV band, from a circular region of 14′ radius (1.2 Mpc), centered on the emission peak. From the ROSAT PSPC radial profile, we estimate that about 89% of the total cluster emission falls within this radius. The PDS (13–50 keV) background-subtracted spectrum has been produced by subtraction of the “off-” from the “on-source” spectrum. The spectra from the two instruments have been fitted simultaneously with an optically thin thermal emission model (MEKAL code in the XSPEC package), absorbed by a galactic line-of-sight equivalent hydrogen column density, $`N_H`$, of $`1.6\times 10^{20}`$ cm⁻² (Dickey & Lockman 1990). A numerical relative normalization factor between the two instruments has been added to account for: a) the fact that the MECS spectrum includes emission out to 1.2 Mpc from the X-ray peak, while the PDS field of view (1.3 degrees FWHM) covers the entire emission from the cluster; b) the slight mismatch in the absolute flux calibration of the MECS and PDS response matrices employed in this Letter (September 1997 release); c) the vignetting in the PDS instrument (the MECS vignetting is included in the response matrix thanks to the Effarea program described in the following section). The estimated normalization factor is 0.9. In the fitting procedure we allow this factor to vary within 15$`\%`$ of the above value to account for the uncertainty in this parameter. The MEKAL model yields an acceptable fit to the data, $`\chi ^2=`$ 191.3 for 176 d.o.f. The best fitting values for the temperature and the metal abundance are 8.1$`\pm `$0.2 keV and 0.17$`\pm `$0.02 respectively, where the latter value is expressed in solar units. In figure 1 we show the MECS and PDS spectra of A3266 together with the best fitting model. The PDS data show no evidence of a hard X-ray excess.
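The logic of the joint MECS+PDS fit (a single thermal model, with a PDS/MECS cross-normalization constrained to within 15% of 0.9) can be sketched as follows. This is a schematic χ² skeleton in which a crude bremsstrahlung continuum stands in for MEKAL and synthetic counts stand in for the real spectra; the actual fits were performed in XSPEC.

```python
# Schematic joint MECS+PDS fit: one thermal model, plus a PDS/MECS
# cross-normalization bounded to 0.9 +/- 15%.  Illustrative only.
import numpy as np
from scipy.optimize import minimize

def brems(E, kT, K):
    """Crude thermal-continuum shape (no lines), model counts per bin."""
    return K * E**-0.4 * np.exp(-E / kT)

E_mecs = np.linspace(2.0, 10.0, 40)     # keV
E_pds  = np.linspace(13.0, 50.0, 20)

rng = np.random.default_rng(0)
true = dict(kT=8.1, K=120.0, c=0.9)     # "true" values for the mock data
d_mecs = rng.poisson(brems(E_mecs, true["kT"], true["K"]))
d_pds  = rng.poisson(true["c"] * brems(E_pds, true["kT"], true["K"]))

def chi2(p):
    kT, K, c = p
    m1 = brems(E_mecs, kT, K)
    m2 = c * brems(E_pds, kT, K)
    return (((d_mecs - m1)**2 / np.maximum(m1, 1)).sum()
            + ((d_pds - m2)**2 / np.maximum(m2, 1)).sum())

res = minimize(chi2, x0=[6.0, 100.0, 0.9],
               bounds=[(1, 20), (1, 1e4), (0.9 * 0.85, 0.9 * 1.15)])
kT, K, c = res.x
print(f"kT = {kT:.2f} keV, cross-norm = {c:.3f}, chi2 = {res.fun:.1f}")
```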
## 4. Spatially Resolved Spectral Analysis When performing spatially resolved spectral analysis of galaxy clusters one must take into account the distortions introduced by the energy-dependent PSF. In the case of the MECS instrument onboard BeppoSAX, the PSF is found to vary only weakly with energy (D’Acri, De Grandi & Molendi 1998), and therefore the spectral distortions are expected to be small. Nonetheless they have been taken into account using the Effarea program publicly available within the latest Saxdas release. As explained in Molendi et al. (1999), hereafter M99, the Effarea program convolves the ROSAT PSPC surface brightness with an analytical model of the MECS PSF (see D’Acri, De Grandi & Molendi 1998 for a more extensive description). The Effarea program also includes corrections for the energy-dependent telescope vignetting, which are not discussed in D’Acri et al. (1998). The Effarea program produces effective area files, which can be used to fit spectra accumulated from annuli or from sectors of annuli. ### 4.1. Radial Profiles We have accumulated spectra from 7 annular regions centered on the X-ray emission peak, with inner and outer radii of 0′–2′, 2′–4′, 4′–6′, 6′–8′, 8′–12′, 12′–16′ and 16′–20′. A correction for the absorption caused by the strongback supporting the detector window has been applied for the 8′–12′ annulus, where the annular part of the strongback is contained. For the 4′–6′, 12′–16′ and 16′–20′ annuli, where the strongback covers only a small fraction of the available area, we have chosen to exclude the regions shadowed by the strongback. For the 5 innermost annuli the energy range considered for spectral fitting was 2–10 keV, while for the 2 outermost annuli the fit was restricted to the 2–8 keV energy range. We have used a softer energy range for the outer annuli to limit spectral distortions which could be caused by an incorrect background subtraction. The MECS instrumental background has a very hard spectrum that, in the outer regions, accounts for about 60$`\%`$ of the total intensity in the 8–10 keV band, and that can vary by up to 10$`\%`$ from one observation to another. We have fitted each spectrum with a MEKAL model absorbed by the galactic $`N_H`$ of $`1.6\times 10^{20}`$ cm⁻². In figure 2 we show the temperature and abundance profiles obtained from the spectral fits. By fitting the temperature and abundance profiles with a constant we derive the following average values: 8.7$`\pm `$0.3 keV and 0.21$`\pm `$0.03 (solar units). A constant does not provide an acceptable fit to the temperature profile. Using the $`\chi ^2`$ statistics we find $`\chi ^2=`$ 17.7 for 6 d.o.f., corresponding to a probability of 0.007 that the observed distribution is drawn from a constant parent distribution. A linear profile of the type kT = a $`+`$ b r, where kT is in keV and r in arcminutes, provides a much better fit, $`\chi ^2=`$ 0.75 for 5 d.o.f. The best fitting values for the parameters are a $`=10.48\pm 0.52`$ keV and b $`=-0.307\pm 0.075`$ keV arcmin⁻¹. A constant provides an acceptable fit to the abundance profile, $`\chi ^2=`$ 4.4 for 6 d.o.f. (Prob. $`=`$ 0.6). As in M99, we have used the Fe Kα line as an independent estimator of the ICM temperature. Briefly, we recall that the centroid of the observed Fe Kα line depends upon the relative contributions of the He-like Fe line at 6.7 keV and the H-like Fe line at 7.0 keV.
Since the relative strength of these two lines is a function of the gas temperature, the centroid of the observed line is also a function of the gas temperature. Moreover, the position of the centroid of the Fe Kα line is essentially unaffected by the spectral distortion introduced by the energy-dependent PSF and depends only weakly on the adopted continuum model. Thus it can be used to derive an independent and robust estimate of the temperature profile. Considering the limited number of counts available in the line, we have performed the analysis on 2 annuli with bounding radii 0′–8′ and 8′–16′. We have fitted each spectrum with a bremsstrahlung model plus a line, both at a redshift of z=0.055 (ZBREMSS and ZGAUSS models in XSPEC), absorbed by the galactic $`N_H`$. A systematic negative shift of 40 eV has been included in the centroid energy to account for a slight miscalibration of the energy–pulse-height-channel relationship near the Fe line. To convert the energy centroid into a temperature we have derived an energy centroid vs. temperature relationship. This has been done by simulating thermal spectra, using the MEKAL model and the MECS response matrix, and fitting them with the same model which has been used to fit the real data. In figure 2 we have overlaid the temperatures derived from the centroid analysis on those previously obtained through the thermal continuum fitting. The two measurements of the temperature profile are in agreement with each other. Unfortunately, the modest statistics available in the line do not allow us to say much more than that. ### 4.2. Maps We have divided A3266 into 4 sectors: NW, SW, SE and NE. Each sector has been divided into 3 annuli with bounding radii 2′–4′, 4′–8′ and 8′–16′. The orientation of the sectors has been chosen so that the North–South division roughly coincides with the apparent major axis of the X-ray isophotes. In figure 3 we show the MECS image with the sectors overlaid. A correction for the absorption caused by the strongback supporting the detector window has been applied for the sectors belonging to the 8′–16′ annulus. For the sectors in the 2′–4′ and 4′–8′ annuli, we used the 2–10 keV energy range for spectral fitting, while for the 8′–16′ annulus we adopted the 2–8 keV range. We have fitted each spectrum with a MEKAL model absorbed by the galactic $`N_H`$. In figure 4 we show the temperature profiles obtained from the spectral fits for each of the 4 sectors. Note that in all the profiles we have included the temperature measurement obtained for the central circular region with radius 2′. Fitting each radial profile with a constant temperature we derive the following average sector temperatures: 8.8$`\pm `$0.5 keV for the NW sector, 9.6$`\pm `$0.5 keV for the SW sector, 8.1$`\pm `$0.5 keV for the SE sector and 8.2$`\pm `$0.4 keV for the NE sector. For all sectors we find a statistically significant temperature decrease with increasing radius. From the $`\chi ^2`$ statistics we find $`\chi ^2=21.4`$ for 3 d.o.f. (Prob. $`=9\times 10^{-5}`$) for the NW sector, $`\chi ^2=9.2`$ for 3 d.o.f. (Prob. $`=2.6\times 10^{-2}`$) for the SW sector, $`\chi ^2=24.0`$ for 3 d.o.f. (Prob. $`=2.5\times 10^{-5}`$) for the SE sector and $`\chi ^2=10.5`$ for 3 d.o.f. (Prob. $`=1.5\times 10^{-2}`$) for the NE sector. In the SE and NE sectors the temperature decreases continuously as the distance from the cluster center increases.
In the NW and SW sectors the temperature first increases, reaching a maximum in either the second (NW sector) or the third (SW sector) annulus, and then decreases. Interestingly, a fit to the temperatures of the 4 sectors in the third annulus (bounding radii 4′–8′) with a constant yields $`\chi ^2=8.45`$ for 3 d.o.f., with an associated probability of 0.03 for the temperature to be constant, indicating that an azimuthal temperature gradient may be present near the core of the cluster. More specifically, the eastern side of the cluster appears to be somewhat cooler than the western side. From the analysis of the abundance map we find that all sector-averaged abundances are consistent with the average abundance for A3266 derived in the previous subsection. The $`\chi ^2`$ values derived from the fits indicate that all abundance profiles are consistent with being constant. ## 5. Discussion Previous measurements of the temperature structure of A3266 have been performed by Markevitch et al. (1998), using ASCA data, and by Irwin, Bregman & Evrard (1999), using ROSAT PSPC data. Markevitch et al. (1998) find a decreasing radial temperature profile. In figure 2 we have overlaid the temperature profile obtained by Markevitch et al. (1998) using ASCA data on our own BeppoSAX profile. The agreement between the two independent measurements is clearly very good. A linear profile of the type kT = a $`+`$ b r, where kT is in keV and r in arcminutes, which provides an acceptable fit to the ASCA profile ($`\chi ^2=5\times 10^{-5}`$ for 1 d.o.f.), yields best fitting values a $`=10.7\pm 0.8`$ keV and b $`=-0.39\pm 0.11`$ keV arcmin⁻¹. These values are in good agreement with those derived from the BeppoSAX data. Recently Irwin, Bregman & Evrard (1999) have used ROSAT PSPC hardness ratios to measure temperature gradients for a sample of nearby galaxy clusters, which includes A3266. In their analysis they find evidence of a radial decrease in one of the two hardness ratios sensitive to temperature variations. The authors do not attribute this variation to a temperature decrement, because a similar variation is also seen in another hardness ratio, which is not sensitive to temperature gradients. Optical studies by various authors (e.g., QRW; Teague et al. 1990) have shown that A3266 is characterized by a large velocity dispersion, $`\sim `$ 1000 km s⁻¹. Moreover, both QRW and Girardi et al. (1997) find evidence of a decrease of the velocity dispersion with increasing distance from the cluster core. QRW measure a velocity dispersion of $`\sim `$ 1600 km s⁻¹ within 200 kpc of the core of the cluster and a velocity dispersion of $`\sim `$ 1000 km s⁻¹ at a radial distance of 18′ (1.5 Mpc). Thus, it would seem that both the hot X-ray emitting gas and the galaxies visible at optical wavelengths are characterized by a decrease in their specific kinetic energy with increasing radius. From the velocity dispersion profile produced by QRW and our own temperature profile, we have computed the radial profile of the so-called $`\beta _{\mathrm{spec}}`$ parameter (Sarazin 1988), which is defined as $`\beta _{\mathrm{spec}}\equiv \mu \mathrm{m}_\mathrm{p}\sigma _\mathrm{r}^2/\mathrm{kT}`$, where $`\mu `$ is the mean molecular weight in amu, $`\mathrm{m}_\mathrm{p}`$ is the proton mass and $`\sigma _\mathrm{r}`$ is the velocity dispersion.
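For concreteness, the $`\beta _{\mathrm{spec}}`$ arithmetic can be carried out with round numbers read off the profiles quoted above ($`\mu =0.6`$ is a conventional assumption for the mean molecular weight; the values of $`\sigma _\mathrm{r}`$ and kT are indicative only):

```python
# beta_spec = mu * m_p * sigma_r**2 / kT with round numbers from the text.
# mu = 0.6 is an assumed mean molecular weight; sigma, kT are indicative.
M_P_KEV = 938272.0          # proton rest energy, keV
C_KMS = 2.998e5             # speed of light, km/s

def beta_spec(sigma_kms, kT_keV, mu=0.6):
    return mu * M_P_KEV * (sigma_kms / C_KMS)**2 / kT_keV

for label, sigma, kT in [("core (~200 kpc)", 1600.0, 10.0),
                         ("outer (~1.5 Mpc)", 1000.0, 5.0)]:
    print(f"{label}: beta_spec ~ {beta_spec(sigma, kT):.1f}")
# Both values come out near 1.3-1.6, consistent with the flat profile and
# the average value quoted below.
```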
The derived $`\beta _{\mathrm{spec}}`$ profile is consistent with being flat, with an average value of $`\beta _{\mathrm{spec}}=1.5\pm 0.2`$, indicating that, although the specific kinetic energy of the galaxies is greater than that of the hot gas, the rate at which it decreases with increasing radius is the same for the galaxies and the hot gas. QRW have proposed that the velocity dispersion gradient and the presence of a distorted central dumb-bell galaxy may have resulted from a recent merger between two clusters. Evidence of a merger event can also be found in the X-ray data. The ROSAT PSPC image (Mohr, Mathiesen & Evrard 1999) shows isophotes elongated in the NE–SW direction on the few hundred kpc scale, while on the few Mpc scale the elongation shifts to the E–W direction. In the picture proposed by QRW the two clusters started colliding about 4 Gyr ago, with the central cores coming together in the last 1–2 Gyr. Incidentally, a fit with a $`\beta `$-model to the PSPC radial profile (see Mohr, Mathiesen & Evrard 1999) yields a large value for the core radius, $`r_c=0.5`$ Mpc, as might be expected in the case of a recent merger in the core of A3266. The radial temperature gradient found by ASCA and BeppoSAX lends further strength to the merging scenario proposed by QRW. The map we present in figure 3 shows that the radial temperature gradient is present in all sectors. We also find an indication of an azimuthal temperature gradient occurring in the annulus with bounding radii 4′–8′ (0.35–0.7 Mpc); the data suggest that the eastern side of the cluster may be somewhat cooler than the western side. Very recently Henriksen, Donnelly & Davis (1999), from the analysis of an ASCA observation of A3266, find evidence of a temperature gradient in the SW–NE direction, indicative of an ongoing merger. The azimuthal temperature gradient found in our data corroborates the ASCA result. The average metal abundance we find from the MECS data, $`0.21\pm 0.03`$ in solar units, is in agreement with the value 0.4$`\pm `$0.2 derived by Mushotzky (1984) using HEAO1 A2 data, and with the average metallicity, $`0.21\pm 0.05`$, derived by Allen & Fabian (1998) for a sample of non-cooling-flow clusters. The radial abundance profile (see the bottom panel of figure 2) does not show any strong evidence of a decrease in the abundance with increasing radius. Thus, A3266 would seem to conform to the general rule that non-cooling-flow clusters do not present metallicity gradients. We acknowledge support from the BeppoSAX Science Data Center.
# Magnetic fields within color superconducting neutron star cores ## 1 Introduction Conventional superconductors result from a condensate of Cooper pairs of electrons. Because the Cooper pairs have nonzero electric charge, electromagnetic gauge invariance is spontaneously broken: the photon gets a mass and weak magnetic fields are expelled by the Meissner effect. In this paper we show that the same is not in general true for the color superconducting state that is formed by cold dense quarks, even though here also the Cooper pairs have nonzero electric charge. The reason is that a color superconductor is not quite an electric superconductor: it makes the gluons massive (there is a color Meissner effect) but does not simply make the photon massive. Rather, one linear combination of the photon and a gluon becomes massive, but the orthogonal combination remains massless. Thus a region of color superconductor can allow itself to be penetrated by the component of an external magnetic field that corresponds to the unbroken generator. As we will see below, in the limit in which the screening length is the shortest length scale in the problem, the magnetic field within the color superconductor has the same strength as the applied external magnetic field. There is no Meissner effect. Though the interior field is not diminished in strength, it is “rotated” relative to the external field: it is associated with the $`U(1)`$ symmetry which is unbroken within the superconductor, not with the $`U(1)`$ of ordinary electromagnetism. If the penetration length is not smaller than the thickness of the boundary of the color superconducting region (to be defined below), there is a partial Meissner effect. In this case, the strength of the field which penetrates the superconductor depends on details of the geometry, the relative sizes of the screening length and the boundary thickness, and the relative strengths of electromagnetism and the color force. The most likely place to find superconducting quark phases in nature is in the core of neutron stars. If neutron stars achieve sufficient central densities that they feature quark matter cores, these cores must be color superconductors: because there are attractive interactions between pairs of quarks which are antisymmetric in color, the quark Fermi surfaces in cold dense quark matter are unstable to the formation of a condensate of quark Cooper pairs. Present theoretical methods are not sufficiently accurate to determine the density above which a quark matter description becomes appropriate, and thus cannot answer the question of whether quark matter, and hence color superconductivity, occurs in the cores of neutron stars. What theory can do is analyze the physical properties of dense quark matter and, eventually, make predictions for neutron star phenomenology, thus allowing astrophysical observation to settle the question. There are a number of avenues which may allow observations of neutron stars to answer questions about the presence or absence of color superconductivity. (Examples we do not pursue in this paper include the analysis of cooling by neutrino emission and the analysis of r-mode oscillations.) Since neutron stars have high magnetic fields ($`10^8`$ to $`10^{13.5}`$ Gauss in typical pulsars; perhaps as high as $`10^{16}`$ Gauss in magnetars), one prerequisite to making contact with neutron star phenomenology is to ask how the presence of a superconducting core would affect the magnetic field.
We will see in this paper that the strength of the magnetic field within the superconducting core is hardly reduced, and the magnetic flux within the color superconductor is not restricted to quantized flux tubes. The latter constitutes a qualitative difference between conventional neutron stars and those with quark matter cores. We will see in Section 5 that, unlike in conventional neutron stars, the magnetic field within a color superconducting core does not vary even if the spin period of the neutron star is changing. ### 1.1 A fiducial example To give the reader some sense for typical scales in the problem, we now describe a fiducial example, which we will use in particular in Sect. 5. The numbers in this paragraph make the crude assumption that the quarks are noninteracting fermions, and so should certainly not be construed as precise. Consider quark matter with quark chemical potential $`\mu =400\mathrm{MeV}`$, made of massless up and down quarks and strange quarks with mass $`M_s=200\mathrm{MeV}`$. ($`M_s`$ is a density-dependent effective mass; this adds to the uncertainty in its value.) If the strange quark were massless, quark matter consisting of equal parts $`u`$, $`d`$ and $`s`$ would be electrically neutral. In our fiducial example, on the other hand, electric neutrality requires a nonzero density of electrons, with chemical potential $`\mu _e=24\mathrm{MeV}`$. Charge neutrality combined with the requirement that the weak interactions are in equilibrium determines all the chemical potentials and Fermi momenta: $$\begin{array}{cccccccc}\hfill \mu _u& =& \hfill \mu -\frac{2}{3}\mu _e& =& 384\mathrm{MeV},\hfill & \hfill p_F^u& =& \mu _u,\hfill \\ \hfill \mu _d& =& \hfill \mu +\frac{1}{3}\mu _e& =& 408\mathrm{MeV},\hfill & \hfill p_F^d& =& \mu _d,\hfill \\ \hfill \mu _s& =& \hfill \mu +\frac{1}{3}\mu _e& =& 408\mathrm{MeV},\hfill & \hfill p_F^s& =& \sqrt{\mu _s^2-M_s^2}=356\mathrm{MeV},\hfill \\ & & \hfill \mu _e& =& 24\mathrm{MeV},\hfill & \hfill p_F^e& =& \mu _e.\hfill \end{array}$$ (1.1) The baryon number density $`\rho _B=(1/3\pi ^2)[(p_F^u)^3+(p_F^d)^3+(p_F^s)^3]`$ is 5 times nuclear matter density. A variety of estimates suggest that the gaps at the Fermi surfaces resulting from quark–quark pairing are about $`\mathrm{\Delta }\sim 20`$–100 MeV, resulting in critical temperatures, above which the color superconductivity is destroyed, of about $`T_c\sim 10`$–50 MeV. Neutron stars have temperatures $`T\ll T_c`$, and if they have quark matter cores these cores are certainly in the superconducting phase. Similar estimates for $`\mathrm{\Delta }`$ and $`T_c`$ are arrived at either by using phenomenological models, with parameters normalized to give reasonable vacuum physics, or by using weak-coupling methods, valid for $`\mu \to \infty `$ where the QCD coupling $`g(\mu )`$ does become weak. Neither strategy can be trusted quantitatively for $`\mu \sim 400\mathrm{MeV}`$, where $`g(\mu )\approx 3`$, but it is pleasing that both strategies agree qualitatively. ### 1.2 The CFL and 2SC phases of quark matter There are, in fact, two different superconducting phases possible, which have very different symmetry properties. If $`\mathrm{\Delta }`$ is large compared to the differences among the three Fermi momenta in (1.1), $`\langle ud\rangle `$, $`\langle us\rangle `$ and $`\langle ds\rangle `$ condensates all form. Chiral symmetry is broken by color-flavor locking in this phase. This is the favored phase at large $`\mu `$, where the differences between Fermi momenta decrease.
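As a cross-check of the fiducial numbers in (1.1), charge neutrality in weak equilibrium can be solved numerically for free Fermi gases, which is exactly the crude assumption made in Sect. 1.1. The sketch below is our own reconstruction, not part of the original analysis:

```python
# Free-Fermi-gas check of the fiducial example (1.1): given mu and M_s,
# impose beta equilibrium (mu_d = mu_s = mu + mu_e/3, mu_u = mu - 2 mu_e/3)
# and solve overall charge neutrality for mu_e.  Schematic reconstruction.
import numpy as np
from scipy.optimize import brentq

MU, M_S = 400.0, 200.0    # quark chemical potential and strange mass, MeV

def net_charge(mu_e):
    """Electric charge density in units of 1/pi^2 (3 colors per flavor)."""
    p_u = MU - 2.0 * mu_e / 3.0
    p_d = MU + mu_e / 3.0
    p_s = np.sqrt((MU + mu_e / 3.0)**2 - M_S**2)
    # quark density: p_F^3/pi^2 per flavor; electrons: mu_e^3/(3 pi^2)
    return (2*p_u**3 - p_d**3 - p_s**3) / 3.0 - mu_e**3 / 3.0

mu_e = brentq(net_charge, 1.0, 100.0)
p_s = np.sqrt((MU + mu_e/3.0)**2 - M_S**2)
rho_B = ((MU - 2*mu_e/3)**3 + (MU + mu_e/3)**3 + p_s**3) / (3*np.pi**2)
rho_nuc = 0.16 * 197.327**3               # 0.16 fm^-3 converted to MeV^3
print(f"mu_e ~ {mu_e:.0f} MeV, p_F^s ~ {p_s:.0f} MeV, "
      f"rho_B ~ {rho_B/rho_nuc:.1f} x nuclear")
# Reproduces mu_e ~ 24 MeV, p_F^s ~ 356 MeV and roughly 5x nuclear density.
```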
This CFL phase has the same symmetries as baryonic matter which is itself sufficiently dense that the hyperon and nucleon densities are all similar, and there need not be a phase boundary between CFL matter and baryonic matter. The properties of the CFL phase have been investigated further in the literature. Now, imagine starting with CFL matter at very large $`\mu `$ and reducing $`\mu `$. Because of the nonvanishing strange quark mass, as $`\mu `$ decreases the differences among the Fermi momenta increase. It may happen that before $`\mu `$ has decreased so far that a quark matter description ceases to be valid, one may find a phase of quark matter in which only two flavors ($`u`$ and $`d`$) and two colors (chosen spontaneously) of quarks pair. Chiral symmetry is restored in this two-flavor superconductivity (2SC) phase, which is also the phase which arises in QCD with no strange quarks at all. Because nature chooses a strange quark which can neither be treated as very light nor as very heavy, present theoretical analyses are not precise enough to determine whether quark matter at densities typical of neutron star interiors is in the CFL phase, or whether in this range of $`\mu `$ the 2SC phase is favored. For example, in (1.1) the splitting between the $`d`$ and $`s`$ Fermi momenta is $`\sim 50`$ MeV, of the order of typical gaps. Current theoretical methods are not reliable enough to determine whether the quark–quark interactions in QCD are strong enough to generate a $`\langle ds\rangle `$ condensate larger than $`\sim 50`$ MeV, as required in the CFL phase, or are somewhat weaker, admitting only the 2SC pairing. In discussing conventional superconductors in a magnetic field, one normally begins by distinguishing between Type I and Type II superconductors. Color superconductors are Type I at asymptotically high densities, but they may be Type I or Type II at neutron star densities. However, this distinction will not be of importance here. First of all, the thermodynamic critical field $`H_c`$ required to destroy the color superconductivity is of the order of $`10^{18}`$ Gauss. If color superconductors exhibited the conventional Meissner effect, whether they were Type I or Type II they would simply exclude neutron star magnetic fields, which are in fact weak for our purposes. Second of all, we shall see that the presence of an unbroken rotated electromagnetism changes the story completely (early attempts to analyze magnetic fields in color superconductors neglected the existence of the unbroken rotated electromagnetism). In the next three sections, we solve the problem of how a sphere of color superconducting matter, which can admit a rotated magnetic field, responds to an applied ordinary magnetic field. In the final section, we discuss the consequences of our findings for neutron stars. ## 2 Rotated electromagnetism ### 2.1 The new photon The fundamental fields in the theory are the quarks $`\psi _i^\alpha `$ (with color index $`\alpha `$ and flavor index $`i`$) and the gauge fields: the photon $`A_\mu `$ and the gluons $`G_\mu ^n`$, $`n=1,\mathrm{},8`$. At low temperature and high density, the quarks form Cooper pairs, associated with a Higgs field $`\varphi _{ij}^{\alpha \beta }\sim \psi _i^\alpha \psi _j^\beta `$.
This field gets a vacuum expectation value which takes the form $$\varphi _{ij}^{\alpha \beta }\sim \mathrm{\Delta }ϵ^{\alpha \beta }ϵ_{ij}$$ (2.2) in the 2SC phase, with $`i`$ and $`j`$ running over $`u`$ and $`d`$ only and $`\alpha `$ and $`\beta `$ running over two of the colors only. In the CFL phase, quarks of all three flavors and colors pair and the expectation value of the field takes the form $$\varphi _{ij}^{\alpha \beta }\sim \mathrm{\Delta }_1\delta _i^\alpha \delta _j^\beta +\mathrm{\Delta }_2\delta _j^\alpha \delta _i^\beta ,$$ (2.3) with all indices now taking on three values. In both the CFL and 2SC phases, the condensate leaves an unbroken $`U(1)`$ generated by $$\stackrel{~}{Q}=Q+\eta T_8,\stackrel{~}{Q}\varphi =0,$$ (2.4) where $`\eta =1/\sqrt{3}`$ for CFL, and $`\eta =-1/(2\sqrt{3})`$ for 2SC. $`Q`$ is the conventional electromagnetic charge generator, and $`T_8`$ is associated with one of the gluons. In the representation of the quarks, $$\begin{array}{ccccc}\hfill Q& =& & \text{diag}(\frac{2}{3},-\frac{1}{3},-\frac{1}{3})\hfill & \text{in flavor }u,d,s\text{ space}\hfill \\ \hfill T_8& =& \frac{1}{\sqrt{3}}\hfill & \text{diag}(1,1,-2)\hfill & \text{in color }r,g,b\text{ space}.\hfill \end{array}$$ (2.5) As is conventional, we have taken $`\text{tr}(T_8T_8)=2`$. The $`\stackrel{~}{Q}`$-charges of all the Cooper pairs which form the condensates are zero. To see exactly which gauge field remains unbroken, look at the covariant derivative of the Higgs field: $$D_\mu \varphi =\left(\partial _\mu +eA_\mu Q^{(\varphi )}+gG_\mu ^8T_8^{(\varphi )}\right)\varphi $$ (2.6) From (2.4), we see that the kinetic term $`|D\varphi |^2`$ will give a mass to one gauge field $$A_\mu ^X=\frac{\eta eA_\mu +gG_\mu ^8}{\sqrt{\eta ^2e^2+g^2}}=\mathrm{sin}\alpha _0A_\mu +\mathrm{cos}\alpha _0G_\mu ^8$$ (2.7) but the orthogonal linear combination $$A_\mu ^{\stackrel{~}{Q}}=\frac{gA_\mu -\eta eG_\mu ^8}{\sqrt{\eta ^2e^2+g^2}}=\mathrm{cos}\alpha _0A_\mu -\mathrm{sin}\alpha _0G_\mu ^8$$ (2.8) will remain massless. The denominators arise from keeping the gauge field kinetic terms correctly normalized, and we have defined the angle $`\alpha _0`$, $$\mathrm{cos}\alpha _0=\frac{g}{\sqrt{\eta ^2e^2+g^2}},$$ (2.9) which specifies the unbroken $`U(1)`$. At neutron star densities the gluons are strongly coupled ($`g^2/(4\pi )\sim 1`$), and of course the photons are weakly coupled ($`e^2/(4\pi )\approx 1/137`$), so $`\alpha _0\approx \eta e/g`$ is small: $`\alpha _0\sim 1/20`$ in the CFL phase and $`\alpha _0\sim 1/40`$ in the 2SC phase. In both phases, the “rotated photon” consists mostly of the usual photon, with only a small admixture of the $`T_8`$ gluon. We can now see that the electron couples to $`A^{\stackrel{~}{Q}}`$ with charge $`eg/\sqrt{\eta ^2e^2+g^2}`$, which is less than $`e`$, so the new photon is slightly more weakly coupled to electrons than the old one. It can also be shown that in the CFL phase, the quarks have $`\stackrel{~}{Q}`$-charges which are integer multiples of the $`\stackrel{~}{Q}`$-charge of the electron. ### 2.2 The Meissner effect The main purpose of this paper is to study the Meissner effect in a high density region, which we will assume to be a sphere, like the core of a neutron star, with radius $`R`$. In this region, color superconductivity occurs. The unbroken generator $`\stackrel{~}{Q}`$ is associated with a rotated electromagnetic field $`A_\mu ^{\stackrel{~}{Q}}`$, with rotation angle $`\alpha _0`$, as described above. The broken generator $`X`$ gives no long-range gauge field.
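Before setting up this boundary-value problem, the rotated-charge algebra of Sect. 2.1 is easy to verify numerically. The sketch below assumes one concrete pairing pattern (red and green quarks pairing in 2SC, and up locked to blue in CFL; the conventions are not fixed uniquely by the text), checks that every Cooper pair in the condensates is $`\stackrel{~}{Q}`$-neutral, and evaluates the mixing angle $`\alpha _0`$ for $`\alpha _s\sim 1`$:

```python
# Check of the rotated-charge algebra of Sect. 2.1, with the conventions
# Q = diag(2/3,-1/3,-1/3) in (u,d,s) flavor space and
# T8 = diag(1,1,-2)/sqrt(3) in (r,g,b) color space.  The pairing choices
# below are one consistent assumption, not fixed uniquely by the text.
import numpy as np

Q  = {"u": 2/3, "d": -1/3, "s": -1/3}
T8 = {"r": 1/np.sqrt(3), "g": 1/np.sqrt(3), "b": -2/np.sqrt(3)}

def qt(f, c, eta):                  # Qtilde-charge of the quark psi_f^c
    return Q[f] + eta * T8[c]

# 2SC: <ud> pairs carrying colors (r,g); eta = -1/(2 sqrt(3))
eta2 = -1 / (2 * np.sqrt(3))
for (f1, c1, f2, c2) in [("u", "r", "d", "g"), ("u", "g", "d", "r")]:
    assert abs(qt(f1, c1, eta2) + qt(f2, c2, eta2)) < 1e-12

# CFL: lock u<->b, d<->r, s<->g; eta = 1/sqrt(3).  Both delta-terms of
# (2.3) give pairs (f1,lock[f1])(f2,lock[f2]) and (f1,lock[f2])(f2,lock[f1]).
etac, lock = 1 / np.sqrt(3), {"u": "b", "d": "r", "s": "g"}
for f1 in "uds":
    for f2 in "uds":
        assert abs(qt(f1, lock[f1], etac) + qt(f2, lock[f2], etac)) < 1e-12
        assert abs(qt(f1, lock[f2], etac) + qt(f2, lock[f1], etac)) < 1e-12

# mixing angle alpha_0 for alpha_s ~ 1 and alpha_em = 1/137:
g, e = np.sqrt(4 * np.pi), np.sqrt(4 * np.pi / 137)
for name, eta in [("CFL", etac), ("2SC", eta2)]:
    a0 = abs(eta) * e / np.sqrt(eta**2 * e**2 + g**2)   # sin(alpha_0)
    print(f"{name}: alpha_0 ~ 1/{1/a0:.0f}")            # ~1/20 and ~1/40
```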
For simplicity we will treat the region outside the core not as nuclear matter, but as vacuum. As we will explain in Sect. 5, this does not seriously affect our conclusions. In the external region, then, ordinary electromagnetism $`A_\mu ^Q`$ is unbroken, and the color generator $`T_8`$ is confined. We assume that currents at infinity create a uniform applied $`Q`$-magnetic field $`B_a^Q`$ in the $`z`$-direction. We want to know to what degree the flux is expelled from the inner region. To study this situation, we need only work in the two-dimensional space of gauge symmetry generators spanned by $`Q`$ and $`T_8`$. It is natural to take $`\alpha `$ (the angle by which the unbroken $`U(1)`$ is rotated relative to ordinary electromagnetism, see (2.9)) to vary with radius, with $`\alpha (r)=\alpha _0`$ in the high density (inner) region, and $`\alpha (r)=0`$ outside. So in the $`(Q,T_8)`$ basis, $$Q=\left(\begin{array}{c}1\\ 0\end{array}\right),T_8=\left(\begin{array}{c}0\\ 1\end{array}\right),\stackrel{~}{Q}=\left(\begin{array}{c}\mathrm{cos}\alpha (r)\\ \mathrm{sin}\alpha (r)\end{array}\right),X=\left(\begin{array}{c}-\mathrm{sin}\alpha (r)\\ \mathrm{cos}\alpha (r)\end{array}\right).$$ (2.10) To determine what fraction of the magnetic field is expelled from the color superconducting region, we must consider what happens at the boundary between the two phases. There are three relevant distance scales: $$\begin{array}{cc}R\hfill & \text{the size of the high-density region},\hfill \\ \delta R\sim (d\alpha /dr)^{-1}\hfill & \text{the distance over which }\alpha \text{ switches from }\alpha _0\text{ to 0, the boundary thickness}\hfill \\ \lambda \hfill & \text{the screening distance for broken/confined gauge fields}\hfill \end{array}$$ (2.11) We will always assume that $`R\gg \lambda `$. We will treat two limiting scenarios: “sharp” boundary and “smooth” boundary. The sharp boundary, $`\delta R\ll \lambda `$, corresponds to a sudden step-function-like change in $`\alpha `$. This is what we would expect to find if there were a first-order phase transition between the low and high density regions, with phase boundary thickness less than the screening length. The smooth boundary, $`\delta R\gg \lambda `$, corresponds to a gradual change in $`\alpha `$. This applies to the situation where there is no first-order phase transition between nuclear and quark matter (e.g. for low strange quark mass, where we expect no phase transition at all), or where there is a first-order phase transition with a phase boundary thicker than the screening length. We will see that the behavior of the magnetic field is quite different depending on whether the boundary is sharp or smooth. What we are interested in is the behavior of the magnetic field at macroscopic distance scales of order $`R`$, not at the microscopic scale $`\lambda `$. In other words, we will always be interested in the unbroken and unconfined gauge fields, which obey the Maxwell equations. In order to treat all gauge fields in a single formalism it will therefore be convenient to take into account screening not by introducing gauge boson masses into the field equations, but rather by introducing monopoles and supercurrents to describe the screening of the confined and Higgsed gauge fields. By “monopoles” we mean whatever gauge field configurations are responsible for terminating magnetic field lines of confined gauge fields; we do not require them to be solutions to any classical field equations.
We will ignore the details of how the supercurrents (and “monopoles”) arrange themselves on distance scales of order $`\lambda `$, as all we care about is that they screen (terminate) the Higgsed (confined) magnetic fields incident upon them. ## 3 Sharp boundary In the sharp boundary case, $`\alpha `$ changes quickly from $`\alpha _0`$ to 0 at $`r=R`$ over a boundary region with thickness $`\delta R<\lambda `$ (Fig. 1). Just inside the boundary, in the region $`R-\lambda \lesssim r<R`$, the $`X`$ gauge field is Higgsed, so there is a density $`\stackrel{}{J}^X`$ of $`X`$-supercurrents that screen out the parallel component of the $`X`$-flux, leaving only $`\stackrel{~}{Q}`$-flux at $`r\lesssim R-\lambda `$. Just outside the boundary, in the region $`R<r\lesssim R+\lambda `$, the $`T_8`$ gauge field is confined, so there is a density $`\rho _M^{T_8}`$ of $`T_8`$-monopoles that terminate any perpendicular component of the $`T_8`$-flux, leaving only $`Q`$-flux at $`r\gtrsim R+\lambda `$. To obtain the boundary conditions on the magnetic fields associated with the unbroken generators, write the magnetic field in the $`(Q,T_8)`$ basis (2.10), $$\stackrel{}{B}=\left(\begin{array}{c}\stackrel{}{B}^Q\\ \stackrel{}{B}^{T_8}\end{array}\right)=\left(\begin{array}{c}\mathrm{cos}\alpha _0\stackrel{}{B}^{\stackrel{~}{Q}}-\mathrm{sin}\alpha _0\stackrel{}{B}^X\\ \mathrm{sin}\alpha _0\stackrel{}{B}^{\stackrel{~}{Q}}+\mathrm{cos}\alpha _0\stackrel{}{B}^X\end{array}\right),$$ (3.12) then the Maxwell equations are $$\begin{array}{ccccccc}\hfill r>R:& \hfill \mathrm{div}\stackrel{}{B}& =& \left(\begin{array}{c}0\\ \rho _M^{T_8}\end{array}\right),\hfill & \hfill \mathrm{curl}\stackrel{}{B}& =& 0,\hfill \\ \hfill r<R:& \hfill \mathrm{div}\stackrel{}{B}& =& 0,\hfill & \hfill \mathrm{curl}\stackrel{}{B}& =& \left(\begin{array}{c}-\mathrm{sin}\alpha _0\stackrel{}{J}^X\\ \mathrm{cos}\alpha _0\stackrel{}{J}^X\end{array}\right).\hfill \end{array}$$ (3.13) Since we are only interested in the behavior of the fields on distance scales much greater than $`\lambda `$, we integrate the Maxwell equations (3.13) over $`R-\lambda <r<R+\lambda `$, and obtain boundary conditions that relate the fields at $`R-\lambda `$ to those at $`R+\lambda `$. We follow the standard derivation (Ref., sect. I.5).
We find a discontinuity in the normal component $`B_{\perp }^{T_8}`$ due to the surface density of $`T_8`$-monopoles, and in the parallel component $`B_{\parallel }^X`$ due to surface $`X`$-supercurrents, $$\begin{array}{ccc}\hfill B_{\perp }^Q(R+\lambda )\left(\begin{array}{c}1\\ 0\end{array}\right)-B_{\perp }^{\stackrel{~}{Q}}(R-\lambda )\left(\begin{array}{c}\mathrm{cos}\alpha _0\\ \mathrm{sin}\alpha _0\end{array}\right)& =& \rho _M^{T_8}\left(\begin{array}{c}0\\ 1\end{array}\right),\hfill \\ \hfill B_{\parallel }^Q(R+\lambda )\left(\begin{array}{c}1\\ 0\end{array}\right)-B_{\parallel }^{\stackrel{~}{Q}}(R-\lambda )\left(\begin{array}{c}\mathrm{cos}\alpha _0\\ \mathrm{sin}\alpha _0\end{array}\right)& =& J^X\left(\begin{array}{c}-\mathrm{sin}\alpha _0\\ \mathrm{cos}\alpha _0\end{array}\right).\hfill \end{array}$$ (3.14) From this we immediately obtain the boundary conditions on the flux: $$\begin{array}{ccc}\hfill B_{\perp }^{\stackrel{~}{Q}}(R-\lambda )& =& \frac{1}{\mathrm{cos}\alpha _0}B_{\perp }^Q(R+\lambda ),\hfill \\ \hfill B_{\parallel }^{\stackrel{~}{Q}}(R-\lambda )& =& \mathrm{cos}\alpha _0B_{\parallel }^Q(R+\lambda ).\hfill \end{array}$$ (3.15) Thus, just inside the interface we find a $`\stackrel{}{B}^{\stackrel{~}{Q}}`$ whose component parallel (perpendicular) to the interface is weakened (strengthened) relative to that of $`\stackrel{}{B}^Q`$ just outside the interface. ### 3.1 Solution for the sharp boundary Since this is a magnetostatic problem we can write it in terms of a magnetic scalar potential $`\mathrm{\Phi }`$. The potential is associated with the unbroken $`\stackrel{~}{Q}`$-flux inside the sphere, and the unbroken $`Q`$-flux outside: $$\begin{array}{cccc}\hfill B^Q& =& -\mathrm{\nabla }\mathrm{\Phi }\hfill & \hfill \text{outside sphere},\\ \hfill B^{\stackrel{~}{Q}}& =& -\mathrm{\nabla }\mathrm{\Phi }\hfill & \hfill \text{inside sphere}.\end{array}$$ (3.16) Maxwell’s equations become $$\mathrm{\nabla }^2\mathrm{\Phi }=0,$$ (3.17) with boundary conditions (3.15) $$\begin{array}{ccc}\hfill \mathrm{\Phi }(r\to \mathrm{\infty })& =& -B_a^Qr\mathrm{cos}\theta ,\hfill \\ \hfill \frac{\partial \mathrm{\Phi }}{\partial r}(R-\lambda )& =& \frac{1}{\mathrm{cos}\alpha _0}\frac{\partial \mathrm{\Phi }}{\partial r}(R+\lambda ),\hfill \\ \hfill \frac{\partial \mathrm{\Phi }}{\partial \theta }(R-\lambda )& =& \mathrm{cos}\alpha _0\frac{\partial \mathrm{\Phi }}{\partial \theta }(R+\lambda ).\hfill \end{array}$$ (3.18) Expanding in Legendre polynomials (see sect. 5.12) we find the solution $$\begin{array}{cccc}\hfill \mathrm{\Phi }& =& -B_a^Qr\mathrm{cos}\theta -B_a^Q\frac{R^3}{r^2}\mathrm{cos}\theta \frac{1-\mathrm{cos}^2\alpha _0}{2+\mathrm{cos}^2\alpha _0}\hfill & (r>R),\hfill \\ \hfill \mathrm{\Phi }& =& -B_a^Qr\mathrm{cos}\theta \frac{3\mathrm{cos}\alpha _0}{2+\mathrm{cos}^2\alpha _0}\hfill & (r<R).\hfill \end{array}$$ (3.19) In Fig. 2 we show the resultant field configuration for $`\mathrm{cos}\alpha _0=0.5`$. In the real world $`\alpha _0`$ is small, so the field is mostly converted into $`\stackrel{~}{Q}`$-flux by the supercurrents and monopoles, and penetrates the interior. Only a weak field is excluded. We can check that the solution (3.19) makes sense in two limits. (1) $`\alpha _0=0`$, so $`\stackrel{~}{Q}=Q`$ and $`X=T_8`$. The unbroken $`U(1)`$ in the color superconducting region is exactly the same as conventional electromagnetism: the magnetic field does not even notice the color superconductor. (2) $`\alpha _0=\pi /2`$, so $`\stackrel{~}{Q}=T_8`$ and $`X`$ coincides with $`Q`$ up to a sign. The region is a conventional superconductor, breaking electromagnetism. The magnetic field is completely expelled from the superconducting region.
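The solution (3.19) can be verified directly against the matching conditions (3.15); the snippet below computes the field components on both sides of the interface from the potential (a verification sketch only, with cos α₀ = 0.5 as in Fig. 2):

```python
# Check that the potential (3.19) satisfies the matching conditions (3.15).
# B_r = -dPhi/dr and B_theta = -(1/r) dPhi/dtheta, evaluated analytically.
import numpy as np

B_a, R = 1.0, 1.0
a0 = np.pi / 3                     # so cos(alpha_0) = 0.5, as in Fig. 2
k = (1 - np.cos(a0)**2) / (2 + np.cos(a0)**2)
m = 3 * np.cos(a0) / (2 + np.cos(a0)**2)

def B_outside(r, th):
    Br = B_a * np.cos(th) * (1 - 2 * k * (R / r)**3)
    Bt = -B_a * np.sin(th) * (1 + k * (R / r)**3)
    return Br, Bt

def B_inside(th):                  # uniform interior field of strength m*B_a
    return m * B_a * np.cos(th), -m * B_a * np.sin(th)

th = 1.0                           # arbitrary polar angle
Br_o, Bt_o = B_outside(R, th)
Br_i, Bt_i = B_inside(th)
assert np.isclose(Br_i, Br_o / np.cos(a0))   # normal condition in (3.15)
assert np.isclose(Bt_i, Bt_o * np.cos(a0))   # parallel condition in (3.15)
print(f"interior field / applied field = {m:.3f}")  # -> 1 as alpha_0 -> 0
```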
How large a field must we apply before we begin to destroy the color superconductivity in parts of the sphere? The magnitude of the $`Q`$-magnetic field just outside the sphere is largest at the equator of the sphere, where it is given by $$|\stackrel{}{B}^Q(r=R+\lambda ,\theta =\pi /2)|=B_a^Q\left(1+\frac{1-\mathrm{cos}^2\alpha _0}{2+\mathrm{cos}^2\alpha _0}\right).$$ (3.20) This means that the equator of the sphere sees an $`X`$-magnetic field given by $`\mathrm{sin}\alpha _0`$ times (3.20). If the color superconductor is Type I, the color superconductivity is destroyed in regions of the sphere near its equator when the $`X`$-magnetic field exceeds the thermodynamic field $`H_c\sim 10^{18}`$ Gauss. (If the material is Type II, flux tubes begin to penetrate when the $`X`$-magnetic field exceeds the lower critical field $`H_{c1}`$, which is of the same order of magnitude.) This requires $$B_a^Q>\frac{2+\mathrm{cos}^2\alpha _0}{3\mathrm{sin}\alpha _0}H_c.$$ (3.21) We conclude that because most of the applied field can penetrate the superconductor in the form of $`\stackrel{~}{Q}`$-flux, while only a small fraction of the applied field must be excluded, the applied field at which the color superconductivity begins to be destroyed is significantly larger than the thermodynamic critical field $`H_c\sim 10^{18}`$ Gauss. ## 4 Smooth boundary For a smooth boundary, $`\delta R\gg \lambda `$, which means that $`\alpha `$ changes slowly relative to the screening length $`\lambda `$. In this case we assume that there is a continuous distribution of monopoles and supercurrents. At a given $`r`$ these produce flux in the $`U(1)`$ that is (locally) broken. Since the screening length $`\lambda `$ is small, the net flux at a given $`r`$ must lie only in the unbroken $`U(1)`$. The Maxwell equations therefore take the form $$\begin{array}{ccc}\hfill \mathrm{div}\left\{\stackrel{}{B}\left(\begin{array}{c}\mathrm{cos}\alpha (r)\\ \mathrm{sin}\alpha (r)\end{array}\right)\right\}& =& \rho _M\left(\begin{array}{c}-\mathrm{sin}\alpha (r)\\ \mathrm{cos}\alpha (r)\end{array}\right),\hfill \\ \hfill \mathrm{curl}\left\{\stackrel{}{B}\left(\begin{array}{c}\mathrm{cos}\alpha (r)\\ \mathrm{sin}\alpha (r)\end{array}\right)\right\}& =& \stackrel{}{J}\left(\begin{array}{c}-\mathrm{sin}\alpha (r)\\ \mathrm{cos}\alpha (r)\end{array}\right),\hfill \end{array}$$ (4.22) where $`\stackrel{}{B},\rho _M,\stackrel{}{J}`$ are functions of position.
Now $$\begin{array}{ccc}\hfill \mathrm{div}\left\{\stackrel{}{B}\left(\begin{array}{c}\mathrm{cos}\alpha (r)\\ \mathrm{sin}\alpha (r)\end{array}\right)\right\}& =& \mathrm{div}\stackrel{}{B}\left(\begin{array}{c}\mathrm{cos}\alpha (r)\\ \mathrm{sin}\alpha (r)\end{array}\right)+B_r\frac{d\alpha }{dr}\left(\begin{array}{c}-\mathrm{sin}\alpha (r)\\ \mathrm{cos}\alpha (r)\end{array}\right),\hfill \\ \hfill \mathrm{curl}\left\{\stackrel{}{B}\left(\begin{array}{c}\mathrm{cos}\alpha (r)\\ \mathrm{sin}\alpha (r)\end{array}\right)\right\}& =& \mathrm{curl}\stackrel{}{B}\left(\begin{array}{c}\mathrm{cos}\alpha (r)\\ \mathrm{sin}\alpha (r)\end{array}\right)+\stackrel{}{B}\times \mathrm{\nabla }\alpha \left(\begin{array}{c}\mathrm{sin}\alpha (r)\\ -\mathrm{cos}\alpha (r)\end{array}\right).\hfill \end{array}$$ (4.23) Substituting these into (4.22) and taking the $`(\mathrm{cos}\alpha ,\mathrm{sin}\alpha )`$ component, we find that the locally unbroken magnetic field $`\stackrel{}{B}`$ obeys the sourceless Maxwell equations: $$\mathrm{div}\stackrel{}{B}=0,\mathrm{curl}\stackrel{}{B}=0.$$ (4.24) We conclude that in this case the magnetic field always rotates so as to be locally unbroken, but is otherwise unaffected: it obeys the free Maxwell equations everywhere. There is no expulsion at all. (In the smooth boundary case, then, the effective critical magnetic field is infinite: an arbitrarily strong magnetic field is always rotated into the locally unbroken $`U(1)`$, and never destroys the color superconductivity.) One way of understanding this result is to note that the smallness of the screening length $`\lambda `$ means that broken gauge fields are always zero. Consequently, when we move from radius $`r`$ to $`r+dr`$, the gauge field is immediately projected onto the locally unbroken generator $`\stackrel{~}{Q}(r+dr)`$, which differs by an infinitesimal angle from $`\stackrel{~}{Q}(r)`$. As we vary $`r`$, we therefore have a sequence of such projections. In the limit of an infinite number of projections, each infinitesimally different from the last, the gauge field is simply rotated to follow the locally unbroken generator, $$\underset{N\to \mathrm{\infty }}{\mathrm{lim}}\underset{n=1}{\overset{N}{\prod }}\mathrm{cos}\left(\frac{\alpha _0}{N}\right)=1.$$ (4.25) This is analogous to the behavior of a sequence of $`N`$ polarizers, each at an angle $`\alpha _0/N`$ to the previous one. In the limit where $`N\to \mathrm{\infty }`$, an incoming photon aligned with the first polarizer exits from the last polarizer with its polarization rotated by $`\alpha _0`$, and with no loss of intensity. Finally, it is interesting to ask why the $`\delta R\to 0`$ limit of our result for the smooth boundary is not the same as our result for the sharp boundary. The reason is that in the smooth case we assume $`\lambda \ll \delta R`$, so the monopoles and supercurrents are always in the region where $`\alpha `$ is changing. In the sharp case, by contrast, we assume $`\delta R\ll \lambda `$, and there is in effect no such region. Taking the $`\delta R\to 0`$ limit of the smooth case ($`\lambda \ll \delta R`$) would result in a pile-up of currents and monopoles in a shrinking region within which $`\alpha `$ changes rapidly from $`\alpha _0`$ to 0. In this limit, (4.24) is maintained and no flux is excluded from the sphere of color superconductor. (Formally, taking the $`\delta R\to 0`$ limit while maintaining $`\lambda \ll \delta R`$ yields both monopoles and supercurrents concentrated at a point where $`\alpha =\alpha _0/2`$.
If one modifies the right hand side of (3.14) to reflect this, the boundary conditions (3.15) become $`\stackrel{}{B}^Q(R-\lambda )=\stackrel{}{B}^Q(R+\lambda )`$, and no flux is excluded.) The sharp boundary studied in Sect. 3 represents different physics, in which $`\lambda `$ is held constant as $`\delta R\to 0`$. In this limit all the monopoles are in the $`\alpha =0`$ region on the confined side of the boundary while all the supercurrents are in the $`\alpha =\alpha _0`$ region on the Higgsed side, and we obtain the boundary condition (3.15) and partial flux exclusion as in (3.19). ## 5 Consequences for Neutron Stars and Outlook If a neutron star features a core made of color superconducting quark matter, we have learned that this core exhibits (almost) no Meissner effect in response to an applied magnetic field. Even though the Cooper pairs of quarks have nonzero electric charge, there is an unbroken gauge symmetry $`U(1)_{\stackrel{~}{Q}}`$, and the color superconducting region can support a $`\stackrel{~}{Q}`$-magnetic field. If the boundary layer of the color superconducting region is thick compared to the screening length, there is no Meissner effect: the $`\stackrel{~}{Q}`$-magnetic field within has the same strength as the applied $`Q`$-magnetic field. (This smooth boundary case applies if the color superconducting phase and the baryonic phase are not separated by a phase transition; it may also apply in the presence of a first order phase transition, if the phase boundary is thick enough.) If the thickness of the boundary of the superconducting region is less than the penetration length, there is a partial Meissner effect: the $`\stackrel{~}{Q}`$-magnetic field within is somewhat reduced relative to the applied $`Q`$-magnetic field. Because $`\alpha _0`$ is small in nature, the overlap between $`Q`$-photons and $`\stackrel{~}{Q}`$-photons is large, and this partial Meissner effect occurs only at the few percent level. The physics of magnetic fields in neutron stars with color superconducting cores is qualitatively different from that in conventional neutron stars. In conventional neutron stars, proton–proton pairing breaks $`U(1)_Q`$ and there is no unbroken $`U(1)_{\stackrel{~}{Q}}`$. This means that magnetic fields thread the cores of conventional neutron stars in quantized flux tubes, within each of which there is a nonsuperconducting region. In contrast, the $`\stackrel{~}{Q}`$-magnetic field within a color superconducting neutron star core is not confined to flux tubes. This means that, as we discuss below, the enormous $`\stackrel{~}{Q}`$-electrical conductivity of the matter ensures that the $`\stackrel{~}{Q}`$-magnetic field is constant in time. In ordinary neutron stars the $`Q`$-magnetic flux tubes can be dragged about by the outward motion of the rotational vortices as the neutron star spins down, and can also be pushed outward if the gap at the proton Fermi surface increases with depth within the neutron star. One therefore expects the magnetic field of an isolated pulsar to decay over billions of years as it spins down, or perhaps more quickly. There is no observational evidence for the decay of the magnetic field of an isolated pulsar over periods of billions of years; this is consistent with the hypothesis that they contain color superconducting cores which serve as “anchors” for the magnetic field, because they support a $`\stackrel{~}{Q}`$-magnetic field which does not decay.
We now estimate the decay time of the $`\stackrel{~}{Q}`$-magnetic field for neutron stars with color superconducting cores, doing the calculation separately for cores which are in the 2SC and the CFL phase. To this point, the only difference between the two color superconducting phases in their response to applied magnetic fields has been a difference of a factor of two in the value of $`\alpha _0`$; because $`\alpha _0\ll 1`$ in both phases, this difference is of no qualitative consequence. However, the 2SC and the CFL phase differ qualitatively in their symmetries and their low-energy excitations, and can therefore be expected to have quite distinct transport properties. As we will see below, the $`\stackrel{~}{Q}`$-electrical conductivity in the CFL phase is much larger than the conductivity in the 2SC phase. However, even the “smaller” conductivity of the 2SC phase is so large that the timescale for the decay of a $`\stackrel{~}{Q}`$-magnetic field within a color superconducting neutron star core is long compared to the age of the universe. The characteristic magnetic field decay time due to ohmic dissipation is
$$t_{\mathrm{decay}}\approx 4\sigma R^2/\pi ,$$ (5.26)
where $`R`$ is the radius of the color superconducting core and $`\sigma `$ is the $`\stackrel{~}{Q}`$-electric conductivity. (We set $`\mathrm{\hbar }=c=k_B=1`$ throughout.) We begin by estimating $`\sigma `$, and hence the decay time, for a core which is in the 2SC phase. At keV temperatures, the dominant carriers are the relativistic electrons and those quarks of the 2SC phase which are ungapped, or which acquire gaps so small as to be $`\sim T`$ or less. In the 2SC phase the up and down quarks of one color acquire gaps smaller than or of order keV. The strange quarks of all three colors — which do not participate in the dominant pairing characterizing the 2SC phase — can be expected to have gaps which are similar in magnitude or even smaller. (The contribution from the quark quasiparticles with gaps $`\mathrm{\Delta }\sim 20`$–$`100\,\mathrm{MeV}\gg T`$ can be neglected. In the CFL phase, all quark quasiparticles have gaps which are $`\gg T`$; we discuss this situation below.) To obtain a lower bound on $`\sigma `$, we assume that all strange quarks, and up and down quarks of one color, have gaps $`\ll T`$. These five quarks have $`\stackrel{~}{Q}`$-charges $`\frac{1}{2},\frac{1}{2},0,1,0`$ in units of the $`\stackrel{~}{Q}`$-charge of the electron $`eg/\sqrt{g^2+e^2/12}`$, which we henceforth take to be just $`e`$ since $`e/g\ll 1`$. The sum of the squares of the $`\stackrel{~}{Q}`$-charges of the ungapped quarks is therefore $`3e^2/2`$. Following Ref. , the electrical conductivity is given by
$$\sigma \approx \frac{1}{3\pi ^2}\mu _e^2e^2\tau _e+\frac{3}{2}\frac{1}{3\pi ^2}\mu ^2e^2\tau _q,$$ (5.27)
where $`\tau _e`$ and $`\tau _q`$ are the momentum relaxation times for electrons and quarks in the plasma, defined and calculated in Ref. . For both the electrons and quarks, the dominant scattering process contributing to momentum relaxation is scattering off quarks. The five gapless quarks of interest yield
$$\tau _q^{-1}\approx \frac{40}{9\pi }\,1.81\,\alpha _s^2\frac{T^{5/3}}{(m_D^g)^{2/3}},$$ (5.28)
where $`\alpha _s=g^2/4\pi `$ and $`(m_D^g)^2=3g^2\mu ^2/(2\pi ^2)`$ is the Debye screening mass for the gluons, neglecting $`M_s`$ relative to $`\mu `$, and where we have assumed that $`T\ll m_D^g`$. (Note that we have worked directly from equations (28) and (39) of Ref.
, and have not used the numerical factor in equation (40) of Ref. , which contains an error.) Similarly,
$$\tau _e^{-1}\approx \frac{4}{3\pi }\,1.81\,\alpha _e^2\frac{T^{5/3}}{(m_D^e)^{2/3}},$$ (5.29)
where $`\alpha _e=1/137`$, and $`(m_D^e)^2=5e^2\mu ^2/(12\pi ^2)`$ is the Debye screening mass for the $`\stackrel{~}{Q}`$-photon. We have neglected $`M_s`$, used the fact that the average squared $`\stackrel{~}{Q}`$-charge of the nine quarks participating in screening is $`5e^2/18`$, and have assumed $`T\ll m_D^e`$. Taking $`\alpha _s\sim 1`$, we find first that the electrons dominate the conductivity, by a factor of about $`20`$, and second that the time-scale for the decay of the magnetic field in a color superconducting 2SC core of radius $`R`$ is
$$t_{\mathrm{decay}}\approx 3\times 10^{13}\,\mathrm{years}\left(\frac{R}{1\,\mathrm{km}}\right)^2\left(\frac{\mu }{400\,\mathrm{MeV}}\right)^{2/3}\left(\frac{\mu _e}{25\,\mathrm{MeV}}\right)^2\left(\frac{T}{1\,\mathrm{keV}}\right)^{-5/3}.$$ (5.30)
Thus, the magnetic field in the core of a neutron star which is made of matter in the 2SC phase decays only on a time-scale which exceeds the age of the universe. We now turn to a color superconducting core in the CFL phase. In contrast to the 2SC phase, the condensation in the CFL phase produces gaps $`\gg T`$ for quarks of all three colors and all three flavors. (It may be possible for some quark quasiparticles to be gapless even in the CFL phase, at densities just above those where the 2SC phase is favored. In this nongeneric circumstance, the CFL phase conductivity would be similar to that of the 2SC phase.) Thus there are no low-energy quark quasiparticle excitations. Furthermore, the CFL condensate gives a mass to all eight gluons. In this quark matter phase, the degrees of freedom which are most easily excited are neither quarks nor gluons. Because of the spontaneous breakdown of chiral symmetry in the CFL phase, there are charged Nambu-Goldstone bosons, which would be massless pseudoscalar excitations if the quarks were massless. Once the nonzero quark masses are taken into account, one finds pseudoscalar masses which are small (in the sense that they are $`\ll \mathrm{\Delta }`$) but which are still large compared to $`T`$. There is one remaining scalar Nambu-Goldstone excitation, associated with the superfluidity of the CFL phase, but this excitation is $`\stackrel{~}{Q}`$-neutral. We thus discover that all possible hadronic excitations which have nonzero $`\stackrel{~}{Q}`$-charge do acquire a gap: their populations in thermal equilibrium are suppressed exponentially by factors of the form $`\mathrm{exp}(-\mathrm{\Delta }/T)`$, where $`\mathrm{\Delta }`$ is either a Fermi surface gap or a suitable bosonic mass. This first of all means that the only charge carriers which could contribute to $`\sigma `$ are the electrons. Second, the scattering of these electrons off the positively charged hadronic system in which they are immersed vanishes exponentially for $`T\to 0`$, because at low temperatures there are no hadronic excitations off which to scatter. (We are describing the conductivity in the linear response regime. In particular, we are assuming that the current is small enough that the momentum acquired by the electrons due to the current is small compared to the excitation energy for all charged hadronic modes. If the current were increased beyond the linear regime, the conductivity would decrease and eventually the CFL condensate itself would be destroyed.
The nonlinear regime is not relevant for our purposes.) The electrons can only scatter off other electrons. However, such collisions do not alter the total electric current. We therefore conclude that although matter in the CFL phase is not a $`\stackrel{~}{Q}`$-superconductor (it does not exclude $`\stackrel{~}{Q}`$-magnetic field) it is a near-perfect conductor: the resistivity drops exponentially to zero as $`T\to 0`$. At typical neutron star temperatures $`T\ll \mathrm{\Delta }`$, the density of hadronic excitations, and therefore the resistivity, is exponentially suppressed relative to the 2SC phase. As a consequence, the decay time for a $`\stackrel{~}{Q}`$-magnetic field in a CFL core is exponentially larger than that (5.30) for a core in the 2SC phase, which was already longer than the age of the universe. It is clear that the $`\stackrel{~}{Q}`$-magnetic field is rigidly locked in the color superconducting core. It cannot decay with time, even if rotational vortices move through the core as the spin rate of the pulsar changes with time. Rotational vortices do exist in the CFL phase, because the CFL condensate spontaneously breaks a global $`U(1)`$. Instead of dragging magnetic flux tubes with them as they move, as occurs in a conventional neutron star, the rotational vortices can move freely through the CFL phase because there are no $`\stackrel{~}{Q}`$-flux tubes, only a $`\stackrel{~}{Q}`$-magnetic field. Thus, as the spin period of the neutron star changes and the rotational vortices move accordingly, there is no change at all in the strength of the $`\stackrel{~}{Q}`$-magnetic field in the core. The conclusion is the same in the 2SC phase, but the argument is even simpler because in this phase there is no spontaneously broken global $`U(1)`$, and therefore no rotational vortices. As we have noted above, the data on isolated pulsars show no evidence for any decay of the observed magnetic field even as they spin down over time. This is consistent with the possibility of a color superconducting core within which the field does not decay. However, we must also ask whether and how the fact that the observed magnetic fields of accreting pulsars do change as they spin up is consistent with the possibility of quark matter cores. There are several ways in which the surface magnetic field could decrease as a neutron star accretes and spins up, even though the magnetic field in the core remains constant. One possibility is that the accreting matter may bury the magnetic field. Another possibility is that as the magnetic flux tubes in the mantle and crust of the neutron star are pulled around by the rotational vortices, the north and south magnetic poles on the star’s surface may be pushed toward one another, reducing the observed dipole field even though the field deep within, in the color superconducting core, remains undiminished. The analysis of Section 3 was idealized in three ways relative to that appropriate for a neutron star, if there is a first order phase transition between baryonic and quark matter. First, we assumed that ordinary electromagnetism was unbroken outside the core. This is false: proton-proton superconductivity results in the restriction of ordinary $`Q`$-magnetic field to flux tubes. This means that our derivation of the boundary conditions in Section 3 only applies upon averaging over an area of the boundary that is sufficiently large that many $`Q`$-flux tubes are incident on it from the outside.
A more microscopic description of the field configuration near the boundary would in fact require further work, but this is not necessary for our purposes. Second, our assumption in Section 3 of a spherical boundary is oversimplified. If there is a first order phase transition between baryonic and quark matter, because there are two distinct chemical potentials $`\mu `$ and $`\mu _e`$ there will be a mixed phase region, with many boundaries separating regions of quark matter and baryonic matter which have complex shapes. This complication of the geometry of the boundary evidently makes a complete calculation more difficult than the one we have done in Section 3, although if the boundary is thick then the conclusions of Section 4 are unaffected. Regardless, the qualitative conclusion that only a very small fraction of the flux will be excluded because $`\alpha _0`$ is so small will not be affected by these complications. Third, the configuration of Figure 2 cannot, in fact, be attained in a neutron star even though it is favored in the sharp boundary case. The core of a newborn neutron star is threaded with ordinary $`Q`$-magnetic field. Because of the enormous conductivity, the time it would take to accomplish the partial exclusion of flux seen in Figure 2 is exceedingly long. This means that, instead, although the field within the core will be largely $`\stackrel{~}{Q}`$-magnetic field, there will in addition be a small fraction of $`X`$-magnetic flux confined in quantized flux tubes. The sum of the $`\stackrel{~}{Q}`$- and $`X`$-fluxes adds up to the original $`Q`$-flux. Over time, the motion of rotational vortices may move the $`X`$-flux tubes around. The much larger $`\stackrel{~}{Q}`$-flux, which is not constrained in flux tubes, is frozen as described above. We conclude that relaxing the simplifying assumptions which we have made would not change our qualitative conclusions. If neutron stars contain quark matter cores, those cores will exclude at most a very small fraction of any applied magnetic field. Instead, the flux penetrates (almost) undiminished. The only change in the flux within the color superconducting core is that it is a $`\stackrel{~}{Q}`$-magnetic field, associated with that linear combination of the photon and the $`T_8`$ gluon which is unbroken by the quark pair condensate. Most important for neutron star phenomenology, and in qualitative distinction from the results for conventional neutron stars, is the conclusion that this $`\stackrel{~}{Q}`$-flux does not form quantized flux tubes and is frozen over timescales long compared to the age of the universe. We are confident that we have not said the last word on the effects of color superconducting quark matter cores within neutron stars on magnetic field evolution. More work is required to better understand pulsars which are accreting and spinning up. We can already conclude that if observational evidence were to emerge that an isolated pulsar loses its magnetic field as it spins down (in such a way that the field would vanish in the limit in which the spin vanishes), this would allow one to infer that such a pulsar does not have a quark matter core.

Acknowledgements

We are grateful to the Aspen Center for Physics, where much of this work was completed. We thank I. Appenzeller, M. Camenzind, D. Chakrabarty, H. Heiselberg, L. Hernquist, R. Jaffe, V. Kaspi, D. Psaltis, M. Ruderman, T. Schäfer, E. Shuster and F. Wilczek for helpful discussions. This work is supported in part by the U.S. Department of Energy (D.O.E.)
under cooperative research agreement #DF-FC02-94ER40818. The work of KR is supported in part by a DOE OJI Award and by the Alfred P. Sloan Foundation.
# An ET Origin for Stratospheric Particles Collected During the 1998 Leonids Meteor Shower

David A. Noever, James A. Phillips, John M. Horack, Gregory Jerman, and Ed Myszka

Science Directorate, NASA/Marshall Space Flight Center, Huntsville, AL 35812; david.noever@msfc.nasa.gov, john.horack@msfc.nasa.gov, gregory.jerman@msfc.nasa.gov; 256 544 1872; 256 544 5800 (fax)

BishopWebWorks, 162 Alpine Drive, Bishop, CA 93514; phillips@spacesciences.com; 760 873 5585; 760 872 1382 (fax)

CSC Corporation, SD-01, NASA/Marshall, Huntsville, AL 35812; ed.myszka@msfc.nasa.gov; 256 544 1032; 256 544 5800

Submitted to Icarus 10 September 1999

Keywords: Comets, composition; Meteors; Interplanetary Dust

Number of Figures: 3. Number of Pages: 16 (including Abstract, References, Table, and Figures).

Abstract

On 17 November 1998, a helium-filled weather balloon was launched into the stratosphere, equipped with a xerogel microparticle collector. The three-hour flight was designed to sample the dust environment in the stratosphere during the Leonid meteor shower, and possibly to capture Leonid meteoroids. Environmental Scanning Electron Microscope analyses of the returned collectors revealed the capture of a $`\sim `$30-$`\mu `$m particle with a smooth, multigranular shape and partially melted, translucent rims, similar to known Antarctic micrometeorites. Energy-dispersive X-ray Mass Spectroscopy shows enriched concentrations of the non-volatile elements Mg, Al, and Fe. The particle possesses a high magnesium to iron ratio of 2.96, similar to that observed in 1998 Leonids meteors (Borovicka et al. 1999) and sharply higher than the ratio expected for typical material from the earth’s crust. A statistical nearest-neighbor analysis of the abundance ratios Mg/Si, Al/Si, and Fe/Si demonstrates that the particle is most similar in composition to cosmic spherules captured during airplane flights through the stratosphere. The mineralogical class is consistent with a stony (S) type of silicates, olivine \[(Mg,Fe)$`_2`$SiO$`_4`$\] and pyroxene \[(Mg,Fe)SiO$`_3`$\], or oxides, hercynite \[(Fe,Mg)Al$`_2`$O$`_4`$\]. Attribution to the debris stream of the Leonids’ parent body, comet Tempel-Tuttle, would make it the first such material from beyond the orbit of Uranus positively identified on Earth.

Proposed Running Head: ET Origin for Particles Captured During the 1998 Leonids

Leonid meteoroids contribute a significant fraction of the annual budget of cosmic material falling on Earth (Rietmeijer 1999), and numerous groups (e.g., Blanchard et al. 1969, Maag et al. 1993) have attempted to capture and retrieve them for analysis. There have also been unintended captures of Leonid particles by low Earth orbit (LEO) satellites (Humes and Kinard 1997). In 1969, 32 partially melted meteoroids were collected (Brownlee and Hodge 1969, Brownlee et al. 1997) by flying U2 and WB57 aircraft at stratospheric altitudes near 20 km. These meteoroids were members of a general low-level flux of particles that produce an average zenithal hourly rate (ZHR) of visual meteors near 6/hr on a daily basis. This work has led to a better understanding of meteoroid compositions and ablation mechanisms. However, there has since been no systematic effort to sample and study extraterrestrial particles in the stratosphere in general, or the Leonids in particular.
The upper stratosphere remains perhaps the best environment to collect meteor dust; however, success in doing so is a compromise between environmental and technical factors (Rietmeijer 1999). On 17 November 1998 we launched a 10m weather balloon equipped with a xerogel dust collector to sample the stratosphere during the peak flux of the Leonids. A matrix of low-density silica xerogels in separated polystyrene wells was fixed to the outside of the balloon package. The payload was carried to an altitude above 98$`\%`$ of Earth’s atmosphere during a 1.9 hr flight. An on-board digital camera captured eight Leonid fireballs brighter than magnitude $`-10`$. At its maximum altitude, the balloon ruptured as planned and the payload descended to Earth by parachute. Eight candidate impactors were analyzed in the returned payload. One of these exhibits chemical and morphological signatures indicating extraterrestrial origin. Twenty-four one-inch-diameter circular wells of xerogel were sent aloft to the stratosphere and only a few were damaged during the balloon’s descent and landing. A visual microscopic survey of the remaining xerogel containers revealed that all were pitted with craters in the 20 - 100 micron range (e.g., Figure 1). Based on the apparent density of craters in each capture well, we selected one 1-inch diameter circular sample for further study with an Environmental Scanning Electron Microscope (ESEM) (Danilatos et al. 1982). The ESEM, which does not require hard vacuum conditions to reduce scattering, is normally used to study wet, biological samples. Its advantage for the Leonids sample return is that it is unlikely to cause vacuum damage to fragile microparticles. The ESEM was equipped with a 20 keV energy dispersive X-ray mass spectrometer (EDS) sensitive to elements with atomic numbers $`Z>10`$. Using the SEM and EDS together, we can image each impactor while simultaneously characterizing its elemental composition. We scanned the xerogel sample to a surface depth of 1-2 micron using a rastered beam similar in size to the diameter of the impactors (10 - 10,000x, sample-dependent 4 nm resolution). These instrument settings were selected to avoid sources of error involving resolution and diffraction effects. ESEM-EDS analysis of the xerogel capture media revealed a smooth silica surface to the resolution of approximately 1 micron and chemical analysis showed Si:O ratios between 2:1 and 3:1, as expected. Residual levels of Fe, Mg, Ni, Ca, Al and C were below 1$`\%`$ detection thresholds. The eight crater-like pits were examined in the same fashion as the xerogel background. Each crater contained a single impactor ranging in size from 1 - 40 microns. These particles fell into two categories differentiated by morphology and chemical composition. Seven particles were similar: spherical in shape (Figure 2a) with a strong signature of Si. The dominance of Si in the spectra of these particles makes accurate abundance measurements difficult because of possible confusion with the Si-rich xerogel capture media. The shape of the particles suggests that they have experienced sufficient heating – possibly a result of atmospheric friction – to render them molten before cooling and reforming as a sphere. Without more reliable abundance measurements it is impossible to say whether or not these seven impactors have an extraterrestrial origin. An eighth candidate stood out as distinct from the others.
EDS data (Figure 2b) showed that this impactor is rich in Si, but also has significant concentrations of Mg, Al, and Fe. The particle is irregular in shape with translucent rims and an opaque core, much like known cosmic spherules (Brownlee et al. 1997). There is no sign of bulk melting. The general morphology of this 30 $`\mu `$m particle is also similar to that of Antarctic micrometeorites composed of silicates in the 50 - 100 $`\mu `$m range (Genge et al. 1996). No degassing vesicles or gas corrosion from volatiles are apparent as might be expected for intensely heated particles. It is nevertheless rich in the non-volatile elements Mg, Al, Si and Fe (see Fig 2C). Abundance ratios of Mg, Al, Si and to a lesser extent Fe have previously served (Brownlee and Hodge 1969, Love and Brownlee 1991, Love and Keil 1995) for identification of cosmic particles in three broad categories: a dominant type S (stony), and less frequent types I (iron) and FSN (iron-nickel-sulfur). S-types, which are thought to arise from asteroids, dominate the background terrestrial flux. In stratospheric collections of microparticles, FSN types feature prominently but rarely exceed 20 $`\mu `$m in diameter. The strong Si peak in all the EDS analyses (Figures 2a,b) excludes this particle from the I-spherules which contain no silicates. The ratios Al/Si, Mg/Si and Fe/Si obtained for the irregular Leonid meteoroid candidate appear in the first three data columns of Table 1 along with average ratios for known cosmic spherules, meteorites, and typical terrestrial dust. Also tabulated are the ratios for an xerogel control sample which remained on Earth during the balloon flight. The composition of the Leonid candidate is clearly similar to that of cosmic spherules previously found on Earth, and much less like that of Earth dust or the control. A special subclass of vitreous S-type cosmic spherules has previously been detected and identified in stratospheric collections (Brownlee and Hodge 1969, Brownlee et al. 1997). Members of the subclass smaller than $`\sim `$30 $`\mu `$m were enriched in Al and depleted in Fe relative to parent chondritic material. This Leonid candidate may be related to these particles, as it is also rich in aluminum (Al/Si=0.56), and slightly depleted in magnesium (Mg/Si=0.89) and iron (Fe/Si=0.30) compared to median cosmic spherules from deep sea, Antarctic, and stratospheric collections (Brownlee et al. 1997). Also notable is the Leonid candidate’s high magnesium to iron ratio. Airborne UV/Vis spectroscopy of a bright 1998 Leonid meteor shows atomic metal spectral lines indicating Mg/Fe $`\approx `$ 3.3 (Borovicka et al. 1999). This is in good agreement with the high Mg/Fe ratio of the Leonid sample (2.96) and in sharp contrast to the low ratio expected for typical material from the earth’s crust (Mg/Fe $`\sim `$ 0.4). Iron/Nickel ratios can effectively discriminate between terrestrial and extraterrestrial origin of stratospheric dust (Rietmeijer 1999). Terrestrial dust particles tend to have high ratios. For example, volcanic ash from Mt. St. Helens collected at 34-36 km altitude had Fe/Ni=1200. Extraterrestrial dust exhibits much lower ratios, e.g., optical spectra of meteors yield Fe/Ni $`\sim `$ 19. Nickel was not detected in the EDS spectrum of the Leonids candidate, as the Ni peak was below the 3$`\sigma `$ noise level of 1$`\%`$ fractional composition. The Fe peak was measured at 9$`\%`$, placing a lower limit of Fe/Ni $`>`$ 9. This test does not yield discrimination between terrestrial and extraterrestrial dust.
To test the extraterrestrial hypothesis further, we use a statistical procedure to compare EDS data with the chemical makeup of cosmic microparticles in various sampling databases (Rietmeijer 1999, Brownlee et al. 1997). We express compositional ratios as a vector $`X`$ with components (Mg/Si, Al/Si, Fe/Si). The ‘chemical distance’ $`R_{Lj}`$ between the Leonid candidate ($`L`$) and any other material ($`X_j`$) may then be evaluated as $$R_{Lj}=\sqrt{\sum _{i=1}^{3}(L_i-X_{ij})^2}.$$ $`(1)`$ This formulation allows us to compare and classify objectively the Leonid candidate with nearest neighbors based on their minimum Euclidean distance. These data are contained in the fourth column of Table 1, and the “chemical distance” vectors are plotted in Figure 3. (A short numerical check that reproduces the $`R_{Lj}`$ column from eq. (1) is given after Table I.) These data clearly indicate that the 1998 Leonid candidate is closest in composition to coarse unmelted cosmic spherules captured during previous airplane flights through the stratosphere. The low concentration of iron in those spherules, relative to the Leonid candidate, may result from volatilization of iron sulfide during atmospheric entry or perhaps unique chemical features for Leonids-related dust streams. While the composition of the Leonid candidate does not perfectly match that of any catalogued cosmic spherule, the chemical distance is relatively small. Importantly, the Leonid particle is significantly more distant from the control than from the cosmic particles, and lies most distant from terrestrial dust. This analysis shows conclusively that the Leonid candidate is most chemically similar to known extraterrestrial particles. Both its composition and morphology are consistent with that of micrometeoroids previously gathered from the stratosphere and elsewhere. Taken together, we believe the chemical and morphological properties are most consistent with the explanation of an extraterrestrial origin. However, there are factors which may contravene these conclusions. It is difficult to estimate a priori capture probabilities relevant to this experiment, as little is known about the stratospheric particle environment during an intense meteor shower. For satellite impacts during the 1998 shower, a flux of 0.4-59 $`\mathrm{m}^{-2}\,\mathrm{hr}^{-1}`$ was predicted (Humes and Kinard 1997), corresponding to an expected maximum of $`\sim `$2 impacts/hr for the $`\sim `$600 $`\mathrm{cm}^2`$ hr area-time product for this flight. While comparable to that observed, the figure is unlikely to be meaningful in this context. The ‘true’ expected value at 20 km depends on a very uncertain extrapolation of meteoroid momenta and densities from the top of Earth’s atmosphere downward to the stratosphere. The xerogel dust collector was exposed for the entire duration of the balloon flight from launch to landing. We cannot exclude the possibility that metal-rich contaminants such as volcanic dust or industrial pollutants were captured at low altitudes. Small ($`<`$ 10-15 $`\mu `$m) volcanic ash particles have been captured in the upper stratosphere (Rietmeijer 1993, Testa et al. 1990), and some terrestrial particles, albeit atypical ones, are found to have high Al content as does the Leonids candidate. Future balloon flights, including one scheduled for the 1999 Leonids meteor shower, will carry a remotely controllable sample collector that opens only while the balloon is in the stratosphere. Meteoroids sampled in the stratosphere enter the atmosphere at high speed, $`\sim `$70 km/s for the Leonids.
Terminal velocity for cometary debris in the 20 - 70 micron range is widely thought to be reached at altitudes considerably higher than 20 km, thus requiring a significant ‘drift time’ to reach these lower altitudes. Empirical data has little to say on this point, however, simply because there has been no systematic sampling of stratospheric meteoroid fluxes during major meteor showers prior to 1998. If the Tempel-Tuttle debris stream includes a component of larger-, harder-, and faster-than-average meteoroids, then this balloon would be sensitive to Leonids in real-time during the shower’s peak, rather than older, slower-moving particles. Even if the particle is extraterrestrial, it is still possible that it did not arise from the debris stream of comet Tempel-Tuttle. Most meteoroids do not plunge directly to Earth after entering the atmosphere. Instead they lose much of their kinetic energy high above the stratosphere and very slowly drift downward. If this scenario applies to the particles captured during the peak of the Leonids meteor shower, they may have entered Earth’s atmosphere days or weeks earlier. The irregular particle might have originated, for example, from the debris stream of comet Giacobini-Zinner, which caused an intense meteor shower in October 1998. Alternatively, it could be a member of the low-level background population of meteoroids that permeates the inner solar system. Indeed, the SEM survey of the xerogel collector was a surface scan and, thus, preferentially sensitive to older, lower velocity impactors. This year our group will execute a systematic campaign of balloon flights during relatively intense meteor showers and also during periods of low meteor activity to evaluate further the temporal correlation between visual meteor counts and meteoroid flux in the stratosphere. These additional flights may provide the information needed to confirm the present candidate as a Leonid or to assign it to a different meteoroid population.

REFERENCES

1. Blanchard, M. B., Ferry, G. V., & Farlow, N. H., 1969, Meteoritics, 4, 152.
2. Borovicka, J., Stork, R., & Bocek, J., 1999, Meteoritics (in press).
3. Brownlee, D. E., & Hodge, P. W., 1969, Meteoritics, 4, 264.
4. Brownlee, D. E., Bates, B., & Schramm, L., 1997, Meteorit. and Plan. Sci., 32, 157-175.
5. Danilatos, G. D., & Postale, R., 1982, Scanning Electron Microscopy, 1-16.
6. Genge, M. J., Hutchison, R., & Grady, M. M., 1996, Meteorit. and Plan. Sci., 31 (suppl), A49.
7. Humes, D. H., and Kinard, W. H., 1997, Hubble Space Telescope Archive, http://setas-www.larc.nasa.gov/HUBBLE/PRESENTATIONS/hubble_talk_humes_kinard.html
8. Love, S. G., & Brownlee, D. E., 1991, Icarus, 89, 26-43.
9. Love, S. G., & Keil, K., 1995, Meteoritics, 30, 269-278.
10. Maag, C. R., Tanner, W. G., Stevenson, T. J., Borg, J., Bibring, J.-P., Alexander, W. M., & Maag, A. J., 1993, in Proc. 1st European Conference on Space Debris, Darmstadt, Germany, 125-130.
11. Rietmeijer, F. J. M., 1993, J. Volc. Geothermal Res., 55, 69-83.
12. Rietmeijer, F. J. M., 1999, 37th AIAA Aerospace Sciences Meeting and Exhibit, AIAA 99-0502.
13. Testa, J. P., Stephens, J. R., Berg, W. W., Cahill, T. A., Onaka, T., Nakada, Y., Arnold, J. R., Fong, N., & Sperry, P. D., 1990, Earth Planet. Sci. Lett., 98, 287-302.

TABLE I.
| | Mg/Si | Al/Si | Fe/Si | $`R_{Lj}`$ |
| --- | --- | --- | --- | --- |
| 1998 Leonids | | | | |
| Sample | 0.89 | 0.56 | 0.3 | 0.000 |
| Control | 0.0 | 0.0 | 0.0 | 1.094 |
| Sampling Site | | | | |
| Stratosphere | 1.06 | 0.233 | 0.633 | 0.497 |
| Antarctic | 1.06 | 0.091 | 0.528 | 0.549 |
| Deep Sea | 1.06 | 0.083 | 1.024 | 0.884 |
| All | 1.06 | 0.094 | 0.937 | 0.807 |
| Stratospheric unmelted | | | | |
| Smooth | 0.82 | 0.082 | 0.742 | 0.655 |
| Porous | 1.02 | 0.07 | 0.705 | 0.649 |
| Coarse | 1.2 | 0.075 | 0.585 | 0.642 |
| Bulk IDP | 0.98 | 0.075 | 1.08 | 0.923 |
| Bulk Chondrites | | | | |
| CI | 1.07 | 0.085 | 0.9 | 0.786 |
| CM | 1.05 | 0.095 | 0.819 | 0.715 |
| H | 0.96 | 0.07 | 0.818 | 0.717 |
| L | 0.93 | 0.069 | 0.584 | 0.569 |
| Earth dust | 0.83 | $`<`$ 0.003 | 2.27 | 2.048 |

Values for Earth dust are based on these mass fractional abundances for Earth as a whole: 34.6$`\%`$ Fe, 29.5$`\%`$ O, 15.2$`\%`$ Si, 12.7$`\%`$ Mg, 2.4$`\%`$ Ni, 1.9$`\%`$ S, 0.05$`\%`$ Ti.

TABLE I. Comparison of chemical elemental ratios for non-volatiles in the Leonids candidate particle, along with known cosmic dust, control sample, and terrestrial composition. $`R_{Lj}`$ expresses the “chemical distance” between the Leonids candidate and other particles as described in the text. The Leonids candidate is most distant from terrestrial composition, and agrees most closely with known cosmic dust.

FIGURE CAPTIONS

FIGURE 1: Sequence showing the cross-section of a particle and impact crater in the xerogel collector. The crater measures 20-30 $`\mu `$m in diameter.

FIGURE 2a: Representative EDS mass spectrogram and ESEM image of a spherical particle found embedded in the xerogel collector.

FIGURE 2b: EDS mass spectrum of the irregular, non-volatile-rich particle with translucent rims and an opaque core, and its ESEM image. The percentage composition of the sample is Si (31$`\%`$), Mg (28$`\%`$), Al (18$`\%`$), O (12$`\%`$) and Fe (9$`\%`$) with no appreciable Ni or C.

FIGURE 3: Three dimensional scatter plot of the “chemical vectors” (Fe/Si, Al/Si, Mg/Si) for known cosmic particles (square), the Leonids ET candidate (filled circle) and for terrestrial dust (triangle). Projections in the XY plane are also shown to aid in visualization. The Leonids sample particle lies most closely to known particles of extraterrestrial origin, and most distant from terrestrial composition.
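As an arithmetic cross-check, the $`R_{Lj}`$ column of Table I can be regenerated directly from the three abundance-ratio columns via eq. (1); a minimal sketch using only the tabulated values:

```python
from math import sqrt

# Reproduce the "chemical distance" column of Table I from eq. (1):
# R_Lj = sqrt( sum_i (L_i - X_ij)^2 ) over (Mg/Si, Al/Si, Fe/Si).
leonid = (0.89, 0.56, 0.30)               # 1998 Leonids sample
comparison = {
    "Control":      (0.0, 0.0, 0.0),
    "Stratosphere": (1.06, 0.233, 0.633),
    "Earth dust":   (0.83, 0.003, 2.27),  # Al/Si upper limit used as the value
}
for name, x in comparison.items():
    r = sqrt(sum((l - xi) ** 2 for l, xi in zip(leonid, x)))
    print(f"{name:12s} R_Lj = {r:.3f}")
# Output: Control 1.094, Stratosphere 0.497, Earth dust 2.048 -- matching Table I.
```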
# Self-trapped Exciton and Franck-Condon Spectra Predicted in LaMnO$`_3`$

## ACKNOWLEDGMENTS

We thank M. Blume, M. Cardona, J. P. Hill, T. P. Martin, L. Mihaly, A. J. Millis, and D. Romero for help. This work was supported in part by NSF grant no. DMR-9725037.
# Review of Speculative “Disaster Scenarios” at RHIC

## I Introduction

Fears have been expressed that heavy ion collisions at the Relativistic Heavy Ion Collider (RHIC), which Brookhaven National Laboratory (BNL) is now commissioning, might initiate a catastrophic process with profound implications for health and safety. In this paper we explore the physical basis for speculative disaster scenarios at RHIC. Concerns have been raised in three general categories: first, formation of a black hole or gravitational singularity that accretes ordinary matter; second, initiation of a transition to a lower vacuum state; and third, formation of a stable “strangelet” that accretes ordinary matter. We have reviewed the scientific literature, evaluated recent correspondence, and undertaken additional calculations where necessary, to evaluate the scientific basis of these safety concerns. Our conclusion is that the candidate mechanisms for catastrophe scenarios at RHIC are firmly excluded by compelling arguments based on well-established physical laws. In addition, where the data exist, a conservative analysis of existing empirical evidence excludes the possibility of a dangerous event at RHIC at a very high level of confidence. Accordingly, we see no reason to delay the commissioning of RHIC on account of these safety concerns. Considerable attention has been focused on the possibility of placing a bound on the probability of a dangerous event at RHIC by making a “worst case” analysis of certain cosmic ray data. We believe it is reasonable to assume that the laws of physics will not suddenly break down in bizarre ways when entering a regime that actually differs only slightly and in apparently inessential ways from regimes already well explored. We will review the work that has been done on empirical bounds and point out where and how the laws of physics must be bent in order to avoid very firm bounds on the probability of a dangerous event at RHIC. No limit is possible if one allows arbitrarily poor physics assumptions in pursuit of a worst case scenario. Some of the expressed anxiety seems to be based on a misunderstanding of the nature of high energy collisions: it is necessary to distinguish carefully between total energy and energy density. The total center of mass energy ($`E_{\mathrm{CM}}`$) of gold-gold collisions at RHIC will exceed that of any existing accelerator. But $`E_{\mathrm{CM}}`$ is surely not the right measure of the capacity of a collision to trigger exotic new phenomena. If it were, a batter striking a major league fastball would be performing a far more dangerous experiment than any contemplated at a high energy accelerator. To be effective in triggering exotic new phenomena, energy must be concentrated in a very small volume. A better measure of effectiveness is the center of mass energy of the elementary constituents within the colliding objects. In the case of nuclei, the elementary constituents are mainly quarks and gluons, with small admixtures of virtual photons, electrons, and other elementary particles. At the Fermilab Tevatron and the LEP collider at the European Center for Nuclear Research (CERN), collisions of these elementary particles with energies exceeding what will occur at RHIC have already been extensively studied. What is truly novel about heavy ion colliders compared to other accelerator environments is the volume over which high energy densities can be achieved and the number of quarks involved.
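A rough illustration of the constituent-energy point (the momentum fractions below are representative values chosen by us for illustration, not numbers taken from this paper): two colliding constituents carrying fractions $`x_1`$ and $`x_2`$ of their parents’ energies have available energy $`\sqrt{\widehat{s}}=\sqrt{x_1x_2s}`$.

```python
from math import sqrt

def constituent_cm_energy(sqrt_s_parent, x1, x2):
    """CM energy of two colliding constituents carrying momentum
    fractions x1, x2 of their parents: sqrt(s_hat) = sqrt(x1*x2) * sqrt(s)."""
    return sqrt(x1 * x2) * sqrt_s_parent

# Illustrative momentum fractions (assumed, not measured):
print(constituent_cm_energy(200.0, 0.2, 0.2))    # RHIC nucleon pair: ~40 GeV
print(constituent_cm_energy(1800.0, 0.2, 0.2))   # Tevatron p-pbar:  ~360 GeV
```

Even for equal momentum fractions, the constituent energies already probed at the Tevatron exceed those available at RHIC, which is the sense of the comparison above.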
In a central gold-gold collision, hundreds of quarks collide at high energies. Black holes and vacuum instability are generic concerns, and ought to be considered, each time a new facility opens up a new high energy frontier. The fact that RHIC accelerates heavy ions rather than individual hadrons or leptons makes for somewhat different circumstances. Nevertheless there are simple, convincing arguments that neither poses any significant threat. The strangelet scenario is special to the heavy ion environment. It could have been raised before the commissioning of the AGS or CERN heavy ion programs. Indeed, we believe the probability of a dangerous event, though still immeasurably small, is greater at AGS or CERN energies than at RHIC. In light of its special role at RHIC, we pay most attention to the strangelet scenario. In the remainder of this Introduction we give brief, non-technical summaries of our principal conclusions regarding the three potential dangers. In the body of the paper which follows we consider each problem in as much detail as seems appropriate. First, in Section II we present a summary of cosmic ray data necessary to make empirical estimates regarding vacuum decay and strangelets. Sections III, IV, and V are devoted to gravitational singularities, vacuum decay, and strangelets respectively. When we make quantitative estimates of possible dangerous events at RHIC, we will quote our results as a probability, $`𝔭`$, of a single dangerous event over the lifetime of RHIC (assumed to encompass approximately $`2\times 10^{11}`$ gold-gold collisions over a 10 year lifetime at full luminosity). We do not attempt to decide what is an acceptable upper limit on $`𝔭`$, nor do we attempt a “risk analysis”, weighing the probability of an adverse event against the severity of its consequences. Ultimately, we rely on compelling physics arguments which, we believe, exclude a dangerous event beyond any reasonable level of concern.

### A Gravitational Singularities

Exotic gravitational effects may occur at immense densities. Conservative dimensionless measures of the strength of gravity give $`10^{-22}`$ for classical effects and $`10^{-34}`$ for quantum effects in the RHIC environment, in units where 1 represents gravitational effects as strong as the nuclear force. The theoretical basis for these estimates is presented in Section III. In fact RHIC collisions are expected to be less effective at raising the density of nuclear matter than collisions at lower energies where the “stopping power” is greater and existing accelerators have already probed larger effective energies. In no case has any phenomenon suggestive of gravitational clumping, let alone gravitational collapse or the production of a singularity, been observed.

### B Vacuum Instability

Physicists have grown quite accustomed to the idea that empty space — what we ordinarily call ‘vacuum’ — is in reality a highly structured medium, that can exist in various states or phases, roughly analogous to the liquid or solid phases of water. This idea plays an important role in the Standard Model. Although certainly nothing in our existing knowledge of the laws of Nature demands it, several physicists have speculated on the possibility that our contemporary ‘vacuum’ is only metastable, and that a sufficiently violent disturbance might trigger its decay into something quite different.
A transition of this kind would propagate outward from its source throughout the universe at the speed of light, and would be catastrophic. We know that our world is already in the correct (stable) vacuum for QCD. Our knowledge of fundamental interactions at higher energies, and in particular of the interactions responsible for electroweak symmetry breaking, is much less complete. While theory strongly suggests that any possibility for triggering vacuum instability requires substantially larger energy densities than RHIC will provide, it is difficult to give a compelling, unequivocal bound based on theoretical considerations alone. Fortunately in this case we do not have to rely solely on theory; there is ample empirical evidence based on cosmic ray data. Cosmic rays have been colliding throughout the history of the universe, and if such a transition were possible it would have been triggered long ago. Motivated by the RHIC proposal, in 1983 Hut and Rees calculated the total number of collisions of various types that have occurred in our past light-cone — whose effects we would have experienced. Even though cosmic ray collisions of heavy ions at RHIC energies are relatively rare, Hut and Rees found approximately $`10^{47}`$ comparable collisions have occurred in our past light cone. Experimenters expect about $`2\times 10^{11}`$ heavy ion collisions in the lifetime of RHIC. Thus on empirical grounds alone, the probability of a vacuum transition at RHIC is bounded by $`2\times 10^{-36}`$. We can rest assured that RHIC will not drive a transition from our vacuum to another.

### C Strangelets

Theorists have speculated that a form of quark matter, known as “strange matter” because it contains many strange quarks, might be more stable than ordinary nuclei. Hypothetical small lumps of strange matter, having atomic masses comparable to ordinary nuclei, have been dubbed “strangelets”. Strange matter may exist in the cores of neutron stars, where it is stabilized by intense pressure. For strange matter to pose a hazard at a heavy ion collider, four conditions would have to be met:

* Strange matter would have to be absolutely stable in bulk at zero external pressure. If strange matter is not stable, it will not form spontaneously.
* Strangelets would have to be at least metastable for very small atomic mass, for only very small strangelets can conceivably be created in heavy ion collisions.
* It must be possible to produce such a small, metastable strangelet in a heavy ion collision.
* The stable composition of a strangelet must be negatively charged. Positively charged strangelets pose no threat whatsoever.

Each of these conditions is considered unlikely by experts in the field, for the following reasons:

* At present, despite vigorous searches, there is no evidence whatsoever for stable strange matter anywhere in the Universe.
* On rather general grounds, theory suggests that strange matter becomes unstable in small lumps due to surface effects. Strangelets small enough to be produced in heavy ion collisions are not expected to be stable enough to be dangerous.
* It is overwhelmingly likely that the most stable configuration of strange matter has positive electric charge.
* Theory suggests that heavy ion collisions (and hadron-hadron collisions in general) are a poor way to produce strangelets.
Furthermore, it suggests that the production probability is lower at RHIC than at lower energy heavy ion facilities like the AGS and CERN. Models and data from lower energy heavy ion colliders indicate that the probability of producing a strangelet decreases very rapidly with the strangelet’s atomic mass.
* A negatively charged strangelet with a given baryon number is much more difficult to produce than a positively charged strangelet with the same baryon number because it must contain proportionately more strange quarks.

To our knowledge, possible catastrophic consequences of strangelet formation have not been studied in detail before. Although the underlying theory (quantum chromodynamics, or QCD) is fully established, our ability to use it to predict complex phenomena is imperfect. A reasonable, conservative attitude is that theoretical arguments based on QCD can be trusted when they suggest a safety margin of many orders of magnitude. The hypothetical chain of events that might lead to a catastrophe at RHIC requires several independent, robust theoretical arguments to be wrong simultaneously. Thus, theoretical considerations alone would allow us to exclude any safety problem at RHIC confidently. However, one need not use theoretical arguments alone. We have considered the implications of natural “experiments” elsewhere in the Universe, where cosmic ray induced heavy ion collisions have been occurring for a long time. Recent satellite based experiments have given us very good information about the abundance of heavy elements in cosmic rays, making it possible to obtain a reliable estimate of the rate of such collisions. We know of two domains where empirical evidence tells us that cosmic ray collisions have not produced strangelets with disastrous consequences: first, the surface of the Moon, which has been impacted by cosmic rays for billions of years, and second, interstellar space, where the products of cosmic ray collisions are swept up into the clouds from which new stars are formed. In each case the effects of a long-lived, dangerous strangelet would be obvious, so dangerous strangelet production can be bounded below some limit. For example, we know for certain that iron nuclei with energy in excess of $`10`$ GeV/nucleon (equivalent to AGS energies) collide with iron nuclei on the surface of the Moon approximately $`6\times 10^{10}`$ times per second. Over the 5 billion year life of the Moon approximately $`10^{28}`$ such collisions have occurred. None has produced a dangerous strangelet which came to rest on the lunar surface, for if it had, the Moon would have been converted to strange matter. Similarly, we know that the vast number of heavy ion collisions in interstellar space have not created a dangerous strangelet that lived long enough to be swept up into a star. A dangerous strangelet would trigger the conversion of its host star into strange matter, an event that would resemble a supernova. The present rate of supernovae – a few per millennium per galaxy – translates into a strong upper limit on the probability of long-lived dangerous strangelet production at RHIC. To translate each of these results into a bound on $`𝔭`$, it is necessary to model some aspects of strangelet production, propagation, and decay. By making sufficiently unlikely assumptions about the properties of strangelets, it is possible to render both of these empirical bounds irrelevant to RHIC. The authors of Ref.
construct just such a model in order to discard the lunar limits: They assume that strangelets are produced only in gold-gold collisions, only at or above RHIC energies, and only at rest in the center of mass. We are skeptical of all these assumptions. If they are accepted, however, lunar persistence provides no useful limit. Others, in turn, have pointed out that the astrophysical limits of Ref. can be avoided if the dangerous strangelet is metastable and decays by baryon emission with a lifetime longer than $`10^7`$ sec. In this case strangelets produced in the interstellar medium decay away before they can trigger the death of stars, but a negatively charged strangelet produced at RHIC could live long enough to cause catastrophic results. Under these conditions the DDH (Dar, De Rujula and Heinz) bound evaporates. We wish to stress once again that we do not consider these empirical analyses central to the argument for safety at RHIC. The arguments which are invoked to destroy the empirical bounds from cosmic rays, if valid, would not make dangerous strangelet production at RHIC more likely. Even if the bounds from lunar and astrophysical arguments are set aside, we believe that basic physics considerations rule out the possibility of dangerous strangelet production at RHIC.

## II Heavy Nuclei in Cosmic Rays

Cosmic ray processes accurately reproduce the conditions planned for RHIC. Cosmic rays are known to include heavy nuclei and to reach extremely high energies. Hut and Rees pioneered the use of cosmic ray data in their study of decay of a false vacuum. Dar, De Rujula and Heinz have recently used similar arguments to study strangelet production in heavy ion collisions. Here we summarize data on heavy nuclei (iron and beyond) in cosmic rays and carry out some simple estimates of particular processes which will figure in our discussion of strange matter. In some instances we use observations directly; elsewhere reasonable extrapolation allows us to model behavior where no empirical data are available. We are interested in cosmic ray collisions which simulate RHIC and lower energy heavy ion facilities like the AGS. Equivalent stationary target energies range from 10 GeV/nucleon at the AGS to 20 TeV/nucleon corresponding to the center of mass energy of 100 GeV/nucleon at RHIC. The flux of cosmic rays has been measured accurately up to total energies of order $`10^{20}`$ eV. Many measurements of the abundance of ultraheavy nuclei in cosmic rays at GeV/nucleon energies are summarized in Ref. . These measurements are dominated by energies near the lower energy cutoff of 1.5 GeV/nucleon. More extensive measurements have been made of the flux of nuclei in the iron-nickel ($`Z=26`$–$`28`$) group and lighter. Data on iron are available up to energies of order 2 TeV/nucleon. However, we know of no direct measurements of the flux of nuclei heavier than the iron-nickel group at energies above 10 GeV/nucleon. Thus data on iron are available over almost the entire energy range we need. For nuclei heavier than iron, data are available close to AGS energies, but not in the 100 GeV/nucleon–20 TeV/nucleon domain. For ultraheavy nuclei at very high energies, we extrapolate existing data to higher energies using two standard scaling laws, which agree excellently with available data.

* At energies of interest to us, the flux of every species which has been measured shows a simple power law spectrum $`dF/dE\propto E^{-\gamma }`$ with $`\gamma \approx 2.5`$–$`2.7`$. Swordy et al.
found this behavior for oxygen, magnesium, silicon as well as hydrogen, helium and iron. The same power law is observed at high energies where data are dominated by hydrogen. (At energies above $`10^{15}`$ eV the power $`\gamma `$ changes abruptly; this occurs above the energies of interest to us.)
* At all energies where they have been measured, the relative abundance of nuclear species in cosmic rays reflects their abundance in our solar system. \[See, for example, Figure 6 in Ref. .\] Exceptions to this rule seem to be less than an order of magnitude. If anything, heavy nuclei are expected to be relatively more abundant in high energy cosmic rays.

In light of these facts we adopt the standard idealization that the $`A`$ (baryon number or atomic mass) and $`E`$ (energy per nucleon) dependence of the flux of primary cosmic rays factors at GeV/nucleon–TeV/nucleon energies:
$$\frac{dF}{dE}=\mathrm{\Gamma }(A,E_0)(E_0/E)^\gamma ,$$ (1)
where $`E_0`$ is some reference energy. To be conservative we will usually take $`\gamma =2.7`$. The total flux at energies above some energy $`E`$ is given by
$$F(A,E)=\int _E^{\infty }dE^{}\frac{dF}{dE^{}}=\frac{E}{\gamma -1}\frac{dF}{dE}=\frac{E}{\gamma -1}\mathrm{\Gamma }(A,E)$$ (2)
The units of $`dF/dE`$ are $`\{\mathrm{steradians},\mathrm{sec},\mathrm{m}^2,\mathrm{GeV}\}^{-1}`$. The flux of cosmic rays is very large in these units. For example, for iron at 10 GeV/nucleon, according to Swordy et al.
$$\frac{dF}{dE}(\mathrm{Fe},10\,\mathrm{GeV})=\mathrm{\Gamma }(\mathrm{Fe},10\,\mathrm{GeV})\approx 4\times 10^{-3}\{\mathrm{ster}\,\mathrm{sec}\,\mathrm{m}^2\,\mathrm{GeV}\}^{-1}.$$ (3)
Combining all nuclei with $`Z>70`$ into our definition of “gold”, we find an abundance of $`10^{-5}`$ relative to iron. (Estimates range from $`10^{-5}`$ to as high as $`10^{-4}`$; to be conservative, we choose a value on the low side.) We are interested in cosmic ray initiated heavy ion collisions which have occurred where we can observe their consequences. Three particular examples will figure in our subsequent considerations: a) Cosmic ray collisions with nuclei on the surface of planetoids that lack an atmosphere, like the Moon; b) Cosmic ray collisions in interstellar space resulting in strangelet production at rest with respect to the galaxy; c) The integrated number of cosmic ray collisions in our past light cone.

### A Cosmic ray impacts on the moon

First we consider cosmic rays impinging on the surface of a planetoid similar to the Moon. The number of impacts per second with energy greater than $`E`$ on the surface of the planet is given by $`8\pi ^2R^2F(A,E)`$, where we measure $`R`$ in units of $`R_{\mathrm{moon}}`$,
$$\frac{dN(A,E)}{dt}=2\times 10^{14}\frac{\mathrm{\Gamma }(A,E)}{\gamma -1}E\left(\frac{R}{R_{\mathrm{moon}}}\right)^2$$ (4)
For convenience, we use iron with $`E=10`$ GeV/nucleon as our reference. From eqs. (2)–(4) we find
$$\frac{dN(A,E)}{dt}\approx 5\times 10^{12}\frac{\mathrm{\Gamma }(A,10\,\mathrm{GeV})}{\mathrm{\Gamma }(\mathrm{Fe},10\,\mathrm{GeV})}\left(\frac{10\,\mathrm{GeV}}{E}\right)^{1.7}\left(\frac{R}{R_{\mathrm{moon}}}\right)^2$$ (5)
This large instantaneous rate makes it possible to obtain useful limits from cosmic ray collisions with nuclei on the lunar surface.

### B Cosmic ray collisions in space

Following Ref. , we consider collisions of cosmic rays in which the center of mass velocity is less than $`v_{\mathrm{crit}}=0.1`$ in units of $`c`$.
With this $`v_{\mathrm{crit}}`$ strangelets produced at rest in the center of mass will have a high probability of slowing down without undergoing nuclear collisions which would destroy them. The flux given in eq. (1) is associated with a density, $`\frac{dn}{dE}=\frac{4\pi }{c}\frac{dF}{dE}`$. The rate per unit volume for collisions of cosmic rays with energy per nucleon greater than $`E`$ in which all components of the center of mass velocity are less than $`v_{\mathrm{crit}}`$ is given by
$$R(E)=2c\sigma f_\theta \int _E^{\infty }dE_1\int _{(1-v_{\mathrm{crit}})E_1}^{(1+v_{\mathrm{crit}})E_1}dE_2\,\frac{dn}{dE_1}\frac{dn}{dE_2},$$ (6)
where $`\sigma =0.18A^{2/3}`$ barns is the geometric cross section, and $`f_\theta =4v_{\mathrm{crit}}^2`$ is a geometric factor measuring the fraction of collisions in which the transverse velocity is less than $`v_{\mathrm{crit}}`$. Substituting from eq. (1), and normalizing to iron-iron collisions at $`E=10`$ GeV/nucleon, we obtain
$$R(E,A)\approx 10^{-45}\left(\frac{10\,\text{GeV}}{E}\right)^{3.4}\left(\frac{\mathrm{\Gamma }(A)}{\mathrm{\Gamma }(\mathrm{Fe})}\right)^2\left(\frac{A}{56}\right)^{2/3}\mathrm{cm}^{-3}\,\mathrm{sec}^{-1}$$ (7)
Although this rate appears very small, these collisions have been occurring over very large volumes for billions of years.

### C Cosmic ray collisions in our past light cone

Finally we update the calculation of Hut and Rees of the total number of high energy collisions of cosmic rays in our past light cone. The number of such collisions for cosmic rays with energy greater than $`E`$ is given by
$$N\approx 10^{47}\left(\frac{\mathrm{\Gamma }(\mathrm{A})}{\mathrm{\Gamma }(\mathrm{Fe})}\right)^2\left(\frac{56}{A}\right)^{2.7}\left(\frac{100\,\mathrm{GeV}}{E}\right)^{3.4},$$ (8)
where we have normalized to iron at $`E=100`$ GeV/nucleon. The difference between the extremely small coefficient in eq. (7) and the extremely large coefficient in eq. (8) reflects integration over our past light cone, i.e., over the volume and age of the universe.

## III Strength of Gravitational Effects

Two possible sources of novel gravitational effects might in principle be activated in collisions at RHIC. The first type is connected with classical gravity, the second type with quantum gravity. To estimate the quantitative significance of classical gravity, an appropriate parameter is
$$k_{\mathrm{cl}}\equiv \frac{2GM}{Rc^2}$$ (9)
for a spherical concentration of mass $`M`$ inside a region of linear dimension $`R`$, where $`G`$ is Newton’s constant and $`c`$ is the speed of light. It is when $`k_{\mathrm{cl}}=1`$ that the escape velocity from the surface at $`R`$, calculated in Newtonian gravity, becomes equal to the speed of light. The same parameter, $`2GM/c^2`$, appears in the general relativistic line element
$$ds^2=c^2dt^2\left(1-\frac{2GM}{rc^2}\right)-\frac{dr^2}{1-\frac{2GM}{rc^2}}-r^2d\mathrm{\Omega }^2$$ (10)
outside a spherical concentration of mass $`M`$. In this language, it is when $`k_{\mathrm{cl}}=1`$ that a horizon appears at $`R`$, and the body is described as a black hole. Now for RHIC we obtain a very conservative upper bound on $`k_{\mathrm{cl}}`$ by supposing that all the initial energy of the collision becomes concentrated in a region characterized by the Lorentz-contracted nuclei with a Lorentz contraction factor of $`10^2`$.
## III Strength of Gravitational Effects

Two possible sources of novel gravitational effects might in principle be activated in collisions at RHIC. The first type is connected with classical gravity, the second type with quantum gravity. To estimate the quantitative significance of classical gravity, an appropriate parameter is

$$k_{\mathrm{cl}}\equiv \frac{2GM}{Rc^2}$$ (9)

for a spherical concentration of mass $`M`$ inside a region of linear dimension $`R`$, where $`G`$ is Newton’s constant and $`c`$ is the speed of light. It is when $`k_{\mathrm{cl}}=1`$ that the escape velocity from the surface at $`R`$, calculated in Newtonian gravity, becomes equal to the speed of light. The same parameter, $`2GM/c^2`$, appears in the general relativistic line element

$$ds^2=c^2dt^2\left(1-\frac{2GM}{rc^2}\right)-\frac{dr^2}{1-\frac{2GM}{rc^2}}-r^2d\mathrm{\Omega }^2$$ (10)

outside a spherical concentration of mass $`M`$. In this language, it is when $`k_{\mathrm{cl}}=1`$ that a horizon appears at $`R`$, and the body is described as a black hole. Now for RHIC we obtain a very conservative upper bound on $`k_{\mathrm{cl}}`$ by supposing that all the initial energy of the collision becomes concentrated in a region characterized by the Lorentz-contracted nuclei, with a Lorentz contraction factor of $`10^2`$. We are being extremely conservative by choosing the largest possible mass and the smallest possible distance scale defined by the collision, and also by ignoring the effect of the electric charge and the momentum of the constituents, which will resist any tendency to gravitational collapse. Thus our result will provide a bound upon, not an estimate of, the parameters that might be required to have a realistic shot at producing black holes. With $`M=10^4`$ GeV/$`c^2`$ and $`R=10^{-2}\times 10^{-13}`$ cm, we arrive at $`k_{\mathrm{cl}}\approx 3\times 10^{-33}`$. The outlandishly small value of this over-generous estimate makes it pointless to attempt refinements. To estimate the quantitative significance of quantum gravity, we consider the probability to emit the quantum of gravity, a graviton. It is governed by

$$k_{\mathrm{qu}}\equiv \frac{GE^2}{\hbar c^5},$$ (11)

where $`\hbar `$ is Planck’s constant and $`E`$ is the total center-of-mass energy of the collision. For collisions between elementary particles at RHIC, we should put $`E\approx 200`$ GeV. This yields $`k_{\mathrm{qu}}\approx 10^{-34}`$. Once again, the tiny value of $`k_{\mathrm{qu}}`$ makes it pointless to attempt refinements of this rough estimate. Of course higher-energy accelerators than RHIC achieve larger values of $`k_{\mathrm{qu}}`$, but for the foreseeable future values even remotely approaching unity are a pipe dream.
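Both estimates can be checked directly in SI units; the sketch below uses only the inputs stated above:

```python
# Order-of-magnitude check of eqs. (9) and (11) in SI units.
G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # m/s
HBAR = 1.055e-34       # J s
GEV = 1.602e-10        # J

M = 1e4 * GEV / C**2   # 10^4 GeV/c^2, total collision energy treated as mass
R = 1e-17              # m, i.e. 10^-2 x 10^-13 cm, Lorentz-contracted size

k_cl = 2 * G * M / (R * C**2)
E = 200 * GEV          # per elementary collision
k_qu = G * E**2 / (HBAR * C**5)
print(f"k_cl ~ {k_cl:.1e}, k_qu ~ {k_qu:.1e}")   # ~3e-33 and ~3e-34
```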
## IV Decay of the False Vacuum

Hut and Rees first examined the question of vacuum stability in 1983. They reasoned that the transition to the true vacuum, once initiated, would propagate outward at the speed of light. Thus our existence is evidence that no such transition occurred in our past light cone. Hut and Rees then estimated the total number of cosmic ray collisions in the RHIC energy regime which have occurred in our past light cone. They used data on cosmic ray fluxes that have subsequently been confirmed and updated. Not knowing which would be more effective at triggering a transition, Hut and Rees looked both at proton-proton collisions and collisions of heavy nuclei. Cosmic ray data on proton fluxes go up to energies of order $`10^{20}`$ eV . They conclude that proton-proton collisions with a center of mass energy exceeding $`10^8`$ TeV have occurred so frequently in our past light cone that even such astonishingly high energy collisions can be considered safe. For heavy ions, Hut and Rees derived an estimate of the number of cosmic ray collisions in our past light cone. We have updated their result in eq. (8), and normalized it so that the coefficient $`10^{47}`$ equals the number of iron-iron collisions at a center of mass energy exceeding 100 GeV/nucleon. The abundance of iron in cosmic rays has now been measured up to energies of order 2 TeV/nucleon and agrees with the estimate used by Hut and Rees. This result translates into a bound of $`2\times 10^{-36}`$ on $`𝔭`$, the probability that (in this case) an iron-iron collision at RHIC energies would trigger a transition to a different vacuum state. While we do not have direct measurements of the fractional abundance of elements heavier than iron in cosmic rays of energy of order 100 GeV/nucleon, we do have good measurements at lower energies, where they track quite well with the abundances measured on earth and in the solar system. For “gold” (defined as $`Z>70`$) at lower energies $`\mathrm{\Gamma }(\mathrm{Au})/\mathrm{\Gamma }(\mathrm{Fe})\approx 10^{-5}`$, leading to a bound, $`𝔭<2\times 10^{-26}`$, on the probability that a gold-gold collision at RHIC would lead to a vacuum transition. Even if this estimate were off by many orders of magnitude, we would still rest assured that RHIC will not drive a transition from our vacuum to another. Since the situation has not changed significantly since the work of Hut and Rees, we do not treat this scenario in more detail here. The interested reader should consult Hut’s 1984 paper for further details .

## V Strangelets and Strange Matter

The scientific issues surrounding the possible creation of a negatively charged, stable strangelet are complicated. Also, it appears that if such an object did exist and could be produced at RHIC, it might indeed be dangerous. Therefore we wish to give this scenario careful consideration. This section is organized as follows. First we give a pedagogical introduction to the properties of strangelets and strange matter. Second we discuss the mechanisms that have been proposed for producing a strangelet in heavy ion collisions. We examine these mechanisms and conclude that strangelet production at RHIC is extremely unlikely. Nevertheless, we go on to discuss what might occur if a stable, negatively charged strangelet could be produced at RHIC. In light of the possible consequences of production of a stable negatively charged strangelet, we shall refer to such an object as a “dangerous” strangelet. We then turn to the cosmic ray data. We obtain strong bounds on the dangerous strangelet production probability at RHIC from physically reasonable assumptions. We also describe the ways in which these bounds can be evaded by adopting a sequence of specially crafted assumptions about the behavior of strangelets, which we consider physically unmotivated. It is important to remember, however, that evading the bounds does not make dangerous strangelet production more likely.

### A A Primer on Strangelets and Strange Matter

Strange matter is the name given to quark matter at zero temperature in equilibrium with the weak interactions. At and below ordinary nuclear densities, and at low temperatures, quarks are confined to the interiors of the hadrons they compose. It is thought that any collection of nucleons or nuclei brought to high enough temperature or pressure (for theoretical purposes a better variable is chemical potential, instead of pressure, but either can be used) will make a transition to a state where the quarks are no longer confined into individual hadrons. At high temperature the material is thought to become what is called a quark-gluon plasma. The defining property of this state is that it can be accurately described as a gas of nearly freely moving quarks and gluons. One main goal of RHIC is to provide experimental evidence for the existence of this state, and to study its properties. At high pressure and low temperature the material is expected to exhibit quite different physical properties. In this regime, it is called quark matter. Quarks obey the Pauli exclusion principle — no two quarks can occupy the same state. As quark matter is compressed, the exclusion principle forces quarks into higher and higher energy states. Given enough time (see below), the weak interactions will come into play, to reduce this energy. Ordinary matter is made of up ($`u`$) and down ($`d`$) quarks, which are the lightest species (or “flavors”) of quarks.
The strange quark ($`s`$) is somewhat heavier. Under ordinary conditions when an $`s`$ quark is created, it decays into $`u`$ and $`d`$ quarks by means of the weak interactions. In quark matter the opposite can occur. $`u`$ and $`d`$ quarks, forced to occupy very energetic states, will convert into $`s`$ quarks. Examples of weak interaction processes that can accomplish this are strangeness changing weak scattering, $`u+d\rightarrow s+u`$, and weak semi-leptonic decay, $`u\rightarrow s+e^++\nu _e`$. These reactions occur rapidly on a natural time scale of $`10^{-14}`$ sec. When the weak interactions finish optimizing the flavor composition of quark matter, there will be a finite density of strange quarks — hence the name “strange matter”. The most likely location for the formation of strange matter is deep within neutron stars, where the mammoth pressures generated by the overlayers of neutrons may be sufficient to drive the core into a quark matter state. When first formed, the quark matter at the core of a neutron star would be non-strange, since it was formed from neutrons. Once formed, however, the quark matter core would rapidly equilibrate into strange matter, if such matter has lower free energy at high external pressure. Initially, the non-strange quark matter core and the overlying layer of neutrons would be in equilibrium. Since the strange matter core has lower free energy than the overlying neutrons, its formation disrupts the equilibrium. Neutrons at the interface are absorbed into the strange matter core, which grows, eating its way outward toward the surface. There are two possibilities. If strange matter has lower internal energy than nuclear matter even at zero external pressure, the strange matter will eat its way out essentially to the surface of the star. On the other hand, if below some non-zero pressure strange matter no longer has lower energy than nuclear matter, the conversion will stop. Even in the second case a significant fraction of the star could be converted to strange matter. The “burning” of a neutron star as it converts to strange matter has been studied in detail . It is not thought to disrupt the star explosively, because the free energy difference between strange matter and nuclear matter is small compared to the gravitational binding energy. In 1984, E. Witten suggested that perhaps strange matter has lower mass than nuclear matter even at zero external pressure . Remarkably, the stability of ordinary nuclei does not rule this out. A small lump of strange matter, a “strangelet”, could conceivably have lower energy than a nucleus with the same number of quarks. Despite the possible energy gain, the nucleus could not readily decay into the strangelet, because it would require many weak interactions to occur simultaneously, in order to create all the requisite strange quarks at the same time. Indeed, we know that changing one quark (or a few) in a nucleus into $`s`$ quark(s) — making a so-called hypernucleus — will raise rather than lower the energy. Witten’s paper sparked a great deal of interest in the physics and astrophysics of strange quark matter. Astrophysicists have examined neutron stars both theoretically and observationally, looking for signs of quark matter. Much interest centers around the fact that a strange matter star could be considerably smaller than a neutron star, since it is bound principally by the strong interactions, not gravity. A small quark star could have a shorter rotation period than a neutron star and be seen as a sub-millisecond pulsar.
At this time there is no evidence for such objects and no other astrophysical evidence for stable strange matter, although astrophysicists continue to search and speculate . Strange matter is governed by QCD. At extremely high densities the forces between quarks become weaker (a manifestation of asymptotic freedom) and one can perform quantitatively reliable calculations with known techniques. The density of strange matter at zero external pressure is not high enough to justify the use of these techniques. Nevertheless the success of the ordinary quark model of hadrons leads us to anticipate that simple models which include both confinement and perturbative QCD provide us with good qualitative guidance as to the properties of strange matter . Such rough calculations cannot reliably answer the delicate question of whether or not strange matter is bound at zero external pressure. Stability seems unlikely, but not impossible. Some important qualitative aspects of strange matter dynamics that figure in the subsequent analysis are as follows:

##### a Binding Systematics

The overall energy scale of strange matter is determined by the confinement scale in QCD, which can be parameterized by the “bag constant”. Gluon exchange interactions between quarks provide important corrections. Calculations indicate that gluon interactions in quark matter are, on average, repulsive, and tend to destabilize it. To obtain stable strange matter it is necessary to reduce the value of the bag constant below traditionally favored values . This is the reason we describe stability at zero external pressure as “unlikely”.

##### b Charge and flavor composition

If strange matter contained equal numbers of $`u`$, $`d`$ and $`s`$ quarks it would be electrically neutral. Since $`s`$ quarks are heavier than $`u`$ and $`d`$ quarks, Fermi gas kinematics (ignoring interactions) would dictate that they are suppressed, giving strange matter a positive charge per unit baryon number, $`Z/A>0`$. If this kinematic suppression were the only consequence of the strange quark mass, strange matter and strangelets would certainly have positive electric charge. In a bulk sample of quark matter this positive quark charge would be shielded by a Fermi gas of electrons electrostatically bound to the strange matter, as we discuss further below. Energy due to the exchange of gluons complicates matters. As previously mentioned, perturbation theory suggests this energy is repulsive, and tends to unbind quark matter. However, gluon interactions weaken as quark masses are increased, so the gluonic repulsion is smaller between $`s`$-$`s`$, $`s`$-$`u`$ or $`s`$-$`d`$ pairs than between $`u`$ and $`d`$ quarks. As a result, the population of $`s`$ quarks in strange matter is higher than expected on the basis of the exclusion principle alone. If, in a model calculation, the strength of gluon interactions is increased, there comes a point where strange quarks dominate. Then the electric charge on strange matter becomes negative. Increasing the strength of gluon interactions pushes the charge of quark matter negative. However it also unbinds it. Unreasonably low values of the bag constant are necessary to compensate for the large repulsive gluonic interaction energy (some early studies that suggested negatively charged strange matter for broad ranges of parameters were based on incorrect applications of perturbative QCD). For this reason we consider a negative charge on strange matter to be extremely unlikely.
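The charge counting itself is elementary; a toy sketch of the arithmetic, which recurs in Sec. V B below, is:

```python
# Toy quark-counting arithmetic for strangelet charge: 3A quarks with
# charges +2/3 (u) and -1/3 (d, s); with n_u = n_d the charge is (n_u - n_s)/3.
from fractions import Fraction

def strangelet_charge(n_u, n_d, n_s):
    return Fraction(2, 3) * n_u - Fraction(1, 3) * (n_d + n_s)

print(strangelet_charge(20, 20, 20))   # equal u, d, s content: exactly neutral
print(strangelet_charge(24, 24, 12))   # s suppression pushes Z positive: +4 at A=20
```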
##### c Finite size effects

If it were stable, strange matter would provide a rich new kind of “strange” nuclear physics . Unlike nuclei, strangelets would not undergo fission when their baryon number grows large. Nuclear fission is driven by the mismatch between the exclusion principle’s preference for equal numbers of protons and neutrons and electrostatics’ preference for zero charge. In strange matter there is little mismatch: $`uds`$ coincides with approximately zero charge. On the other hand strangelets, like nuclei, become less stable at low baryon number. Iron is the most stable nucleus. Lighter nuclei are made less stable by surface effects. Surface energy is a robust characteristic of degenerate fermion systems. Estimates suggest that strange matter, too, has a significant surface energy, which would destabilize small strangelets . The surface tension which makes light nuclei and water droplets roughly spherical is a well known manifestation of positive surface energy. The exact value of $`A`$ below which strangelets would not be stable is impossible to pin down precisely, but small values of $`A`$ (e.g. less than 10–30) are not favored. Some very small nuclei are very stable. The classic example is $`{}^4`$He. The reasons for helium’s stability are very well understood. A similar phenomenon almost certainly does not occur for strangelets. The pattern of masses for strangelets made of 18 or fewer quarks can be estimated rather reliably . Gluon interactions are, on average, destabilizing. They are most attractive for six quarks, where they still fail to produce a stable strange hadron. The most bound object is probably the $`H`$, composed of $`uuddss`$ . It is unclear whether this system is stable enough to be detected. On empirical grounds, it is certainly not lighter than the non-strange nucleus made of six quarks — the deuteron. For $`2<A\le 6`$, QCD strongly suggests complete instability of any strangelets. Larger strangelets, with baryon numbers up to of order 100, have been modelled by filling modes in a bag . These admittedly crude studies indicate the possible existence of metastable states, but none are sufficiently long-lived to play a role in catastrophic scenarios at a heavy ion collider. Thus, even if it were stable in bulk, strange matter would be unlikely to be stable in small aggregates.

##### d Strangelet radioactivity and metastability

If strange matter is stable in bulk and finite size effects destabilize small strangelets, then there will likely be a range of $`A`$ over which strangelets are metastable and decay by various radioactive processes. The lighter a strangelet, the more unstable and shorter lived it would be. Two qualitatively different kinds of radioactivity concern us: baryon emission and lepton or photon emission.
* Baryon emission. It might be energetically favorable for a small strangelet to emit baryons (neutrons, protons, or $`\alpha `$ particles, in particular), and reduce its baryon number. Such decays are likely to be very rapid. Strong baryon emission would have a typical strong interaction lifetime of order $`10^{-23}`$ sec. $`\alpha `$ decay, which can be very slow for nuclei, would be very rapid for a negatively charged strangelet on account of the absence of a Coulomb barrier. Weak baryon emission would be important for some light strangelets that must adjust their strangeness in order to decay.
The lifetime for weak baryon emission can be approximated by

$$\tau ^{-1}\approx \frac{Q}{4\pi }\mathrm{sin}^2\theta _cG_F^2\mu ^4,$$ (12)

where $`G_F`$ is Fermi’s constant ($`G_F\approx 10^{-5}M_p^{-2}`$), $`\mathrm{sin}\theta _c`$ is Cabibbo’s angle, $`Q`$ is the Q-value of the decay, and $`\mu `$ is the quark chemical potential in strange matter. Reasonable choices for these parameters put $`\tau `$ below $`10^{-8}`$ sec. Baryon emission leaves a small strangelet smaller still, and less stable. Strangelets unstable against baryon emission quickly decay away to conventional hadrons.
* Lepton or photon emission. A strangelet which is stable against baryon emission would adjust its flavor through a variety of weak processes until it reached a state of minimum energy. The underlying quark processes include electron or positron emission, $`(d\mathrm{or}s)\rightarrow u+e^-+\overline{\nu }_e`$ and $`u\rightarrow (d\mathrm{or}s)+e^++\nu _e`$, electron capture, $`u+e^-\rightarrow (d\mathrm{or}s)+\nu _e`$, and weak radiative strangeness changing scattering, $`ud\rightarrow su\gamma `$. These processes are much slower than baryon emission because they typically have three body final states, initial state wavefunction factors, or other suppression factors. Rates would depend on details of strangelet structure which cannot be estimated without a detailed model. We would expect lifetimes to vary as widely as the $`\beta `$ decay and electron capture lifetimes of ordinary nuclei, which range from microseconds to longer than the age of the universe.
* Systematics of stability. The only studies of strangelet radioactivity were done in the context of a rather primitive model. Even then, some features emerge that would have significant implications for the disaster scenarios which concern us. Specifically,
+ Even if the asymptotic value of $`Z/A`$ were negative, there probably would exist absolutely stable strangelets with positive charge. Production of such a species would terminate the growth of a dangerous strangelet (see below). The opposite case (a negatively charged strangelet in a world where $`Z/A`$ is asymptotically positive) would not present a hazard.
+ Calculations indicate that the lightest (meta)stable strangelet can occur at a value of $`A=A_{\mathrm{min}}`$ well below the onset of general stability, with no further stable species until some $`A^{\ast }\gg A_{\mathrm{min}}`$. This phenomenon occurs in conventional nuclear physics at the upper end of the periodic table, where occasional (meta)stable nuclei exist in regimes of general instability. In this case a dangerous strangelet could not grow by absorbing matter.

Even though these features of strangelet stability could stop the growth of a negatively charged strangelet produced at RHIC, we cannot use them to argue for the safety of RHIC because we do not know how to model them accurately. For the sake of definiteness, we will refer to any strangelet with a lifetime long enough to be produced at RHIC, come to rest, and be captured in matter as “metastable”. To summarize: strangelets which decay by baryon emission have lifetimes which are generally too short to be “metastable”. Thus any strangelets which eventually evaporate away do so very quickly. On the other hand, strangelets which decay by lepton or photon emission could be quite long lived.

#### 1 Searches for Strange Matter

In addition to the astrophysical searches reviewed in Refs. , experimental physicists have searched unsuccessfully for stable or quasi-stable strangelets over the past 15 years.
Searches fall into two principal categories: a) searches for stable strangelets in matter; b) attempts to produce strangelets at accelerators. Stable matter searches look for stable strangelets created sometime in the history of our Galaxy, either in cosmic ray collisions or as by-products of neutron star interactions. Due to its low charge to mass ratio, a stable light strangelet would look like an ultraheavy isotope of an otherwise normal element. For example a strangelet with $`A\approx 100`$ might have $`Z=7`$. Chemically, it would behave like an exotic isotope of nitrogen, $`{}^{100}`$N(!) Searches for ultraheavy isotopes place extremely strong limits on such objects . The failure of these searches is relevant to our considerations because it further reduces the likelihood that strange matter is stable in bulk at zero external pressure . Accelerator searches assume only that strangelets can be produced in accelerators and live long enough to reach detectors. Experiments to search for strangelets have been carried out at the Brookhaven National Laboratory Alternating Gradient Synchrotron (AGS) and at the CERN Super Proton Synchrotron (SPS). At the AGS the beam was gold at an energy of 11.5 GeV/nucleon. At the CERN SPS the beam was lead at an energy of 158 GeV/nucleon . Experiments (with less sensitivity) were also done at CERN with sulfur beams at an energy of 200 GeV/nucleon. In all of these experiments the targets were made of heavy elements (lead, platinum and tungsten). All of the experiments were sensitive to strangelets of both positive and negative electric charge. All of the experiments triggered on the low value of $`Z/A`$ characteristic of strangelets. The experiments were sensitive to values of $`|Z/A|\lesssim 0.3`$, masses from 5 GeV/$`c^2`$ to 100 GeV/$`c^2`$, and lifetimes longer than 50 ns ($`5\times 10^{-8}`$ seconds). None of the experiments detected strangelet signals. Limits were therefore set on the possible production rates of strangelets with the stated properties. The limits achieved were approximately less than one strangelet in $`10^9`$ collisions at the AGS and from one strangelet per $`10^7`$ to $`10^9`$ collisions at CERN energies, depending on the precise properties of the strangelet. Of course the limits obtained from previous strangelet searches cannot be used to argue that experiments at RHIC are safe, because the total luminosity of earlier searches would not place a decisive limit on the probability of negative strangelet production at RHIC. However, attempts to understand possible strangelet production mechanisms in these experiments figure importantly in our consideration of dangerous strangelet production at RHIC.

### B Strangelet Production in Heavy Ion Collisions

The lack of a plausible mechanism whereby hypothetical dangerous strangelets might be produced is one of the weakest links in the catastrophe scenario at a heavy ion collider. Before discussing production mechanisms in detail, it is worthwhile to summarize some of the very basic considerations that make dangerous strangelet production appear difficult.
* Strangelets are cold, dense systems. Like nuclei, they are bound by tens of MeV (if they are bound at all). Heavy ion collisions are hot. If thermal equilibrium is attained, temperatures are of order one hundred MeV or more. The second law of thermodynamics fights against the condensation of a system an order of magnitude colder than the surrounding medium. It has been compared to producing an ice cube in a furnace.
* $`q\overline{q}`$ pairs, including $`s\overline{s}`$ pairs, are most prevalent in the central rapidity region in heavy ion collisions. Baryon chemical potential is highest in the nuclear fragmentation regions. To produce a strangelet one needs both high chemical potential and many $`s`$ quarks made as $`s\overline{s}`$ pairs. But the two occur in different regions.
* Strangelets include many strange quarks. The more negative the strangelet charge, the more strange quarks. For example, a strangelet with $`A=20`$ and $`Z=4`$ would include 12 $`s`$ quarks if the numbers of $`u`$ and $`d`$ quarks are equal (as expected). However, a strangelet with $`A=20`$ and $`Z=-1`$ would have to contain 22 $`s`$ quarks. The more strange quarks, the harder it is to produce a strangelet. Thus dangerous strangelets are much harder to make than benign ($`Z>0`$) strangelets.
* As we have previously discussed, the smaller the strangelet, the less likely it is to be stable or even metastable.

The last several items make it clear that the larger the strangelet, the less likely it is to be produced in a heavy ion collision. We find that these arguments, though qualitative, are quite convincing. In particular, they strongly suggest that strangelet production is even more unlikely at RHIC than at lower-energy facilities (e.g. AGS and CERN) where experiments have already been performed. Unfortunately, the very unlikelihood of production makes it difficult to make a reasonable model for how it might occur, or to make a quantitative estimate. Two mechanisms have been proposed for strangelet production in high energy heavy ion collisions: a) coalescence and b) strangeness distillation. The coalescence process is well known in heavy ion collisions and many references relate to it. A recent study which summarizes data at the AGS energies has been reported . The strangeness distillation process was first proposed by Heinz et al. and Greiner et al. The coalescence process has been carefully studied at AGS energies . The coalescence model is most easily summarized in terms of a penalty factor for coalescing an additional unit of baryon number and/or strangeness onto an existing clump. By fitting data, Ref. finds a penalty factor of 0.02 per added baryon. The additional penalty for adding strangeness has been estimated at 0.2; however, the data of Ref. suggest that it might be as small as 0.03. The model was originally intended to estimate the probability of producing nuclei and hypernuclei from the coalescence of the appropriate number and types of baryons. When it is used to estimate strangelet production, it is assumed that the transition from hadrons to quarks occurs with unit probability. This is certainly a gross overestimate, since wholesale reorganization of the quark wavefunctions is necessary to accomplish this transition. By ignoring this factor we obtain a very generous overestimate of the strangelet production probability. Given that the probability of producing a deuteron in the collision is about unity, this suggests that the yield of a strangelet with, for example, $`A=20`$, $`Z=-1`$, and $`S=22`$ is about one strangelet per $`10^{46}`$ collisions (taking the strangeness penalty factor as 0.2). This would lead to a probability $`𝔭\approx 2\times 10^{-35}`$ for producing such a strangelet at RHIC. The difficulty of producing a (meta)stable, negatively charged strangelet (if it exists) is one of the principal reasons we believe there is no safety problem at RHIC.
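A sketch of the penalty-factor arithmetic behind this estimate, with all inputs as quoted above (deuteron yield of order unity, 0.02 per added baryon, 0.2 per unit of strangeness, $`2\times 10^{11}`$ collisions over the RHIC lifetime):

```python
# Coalescence-model penalty arithmetic for the A=20, S=22 example above.
PENALTY_BARYON = 0.02
PENALTY_STRANGE = 0.2
RHIC_COLLISIONS = 2e11

def coalescence_yield(A, S):
    """Yield per collision of an (A, S) strangelet, scaled from the deuteron (A=2)."""
    return PENALTY_BARYON ** (A - 2) * PENALTY_STRANGE ** S

y = coalescence_yield(20, 22)
print(f"yield ~ {y:.1e} per collision")        # ~1e-46
print(f"p_frak ~ {RHIC_COLLISIONS * y:.1e}")   # ~2e-35 over the RHIC lifetime
```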
In addition, the coalescence factors are expected to decrease as the collision energy increases. This is because the produced particles are more energetic, and therefore less likely to be produced within the narrow range of relative momentum required to form a coalesced state. If one compares the coalescence yields at the Bevalac, the AGS, and the CERN experiments, this expectation is dramatically confirmed. From the point of view of coalescence, the most favorable energy for strangelet production is below that of the AGS. Closely related to the coalescence model is the thermal model, in which particle production is assumed to reflect an equilibrium state that persists until the fireball cools and collisions cease. In this model the “free” parameters are the temperature and the baryon chemical potential at freeze-out . Applying this model to the AGS experimental situation gives a reasonably good account of particle ratios, and indicates a freeze-out temperature of 140 MeV and a baryon chemical potential of 540 MeV. With these parameters the model can predict the production probability of strangelets with any given baryon number, charge, and strangeness. Braun-Munzinger and Stachel have carried out detailed calculations for the AGS case and find very small production. For example, the yield of a strangelet with $`A=20`$, $`Z=2`$, and $`S=16`$ is $`2\times 10^{-27}`$ per central collision. Since central collisions are about 0.2 of all collisions, this translates into a yield of about one strangelet (with these parameters) in $`2\times 10^{27}`$ collisions, if such a strangelet were stable and if we scale without change from AGS to RHIC energy. The yield of a negatively charged strangelet would be much smaller still. As the collision energy increases, this model predicts higher temperatures and smaller baryon chemical potentials. The result is that, in this model, strangelet production is predicted to decrease quickly with total center of mass energy. The thermal model clearly favors an energy even lower than the AGS as the optimum for producing strangelets, should they exist. The strangeness distillation mechanism is considerably more speculative. It assumes that a quark gluon plasma (QGP) is produced in the collision and that the QGP is baryon rich. It further assumes that the dominant cooling mechanism for the QGP is evaporation from its surface. Since it is baryon rich, there is a greater chance for an $`\overline{s}`$ quark to find a $`u`$ or $`d`$ quark to form a kaon with positive strangeness than for an $`s`$ quark to find a $`\overline{u}`$ or $`\overline{d}`$ quark to form a kaon with negative strangeness. The QGP thus cools to a system containing excess $`s`$ quarks, which ultimately becomes a strangelet. This mechanism requires a collision energy sufficient to form a QGP. RHIC energies should be high enough. Many heavy ion physicists believe that even the fixed target CERN experiments have reached a sufficient energy and are in fact forming a QGP. If this is the case, the failure of the CERN experiments to find strangelets argues against either the existence of this mechanism or the existence of strangelets. A substantial body of evidence supports the view that a QGP is formed at CERN energies, but a truly definitive conclusion is not possible at present. In any case, fits to data from the AGS and CERN, and theoretical models, suggest that the baryon density at central rapidity, where a QGP can be formed, will decrease at RHIC.
Moreover, there is considerable evidence that the systems formed in CERN heavy ion collisions do not cool by slow evaporation from the surface but rather by rapid, approximately adiabatic expansion, as is also expected theoretically. Altogether, the strangeness distillation mechanism seems very unlikely to be effective for producing strangelets at RHIC. In summary, extrapolation from particle production mechanisms that describe existing heavy ion collision data suggests that strangelets with baryon number large enough to be stable cannot be produced. With one exception, all production models we know of predict that strangelet production peaks at low energies, much lower than RHIC and perhaps even lower than the AGS. The one exception is the hypothetical strangeness distillation mechanism. However, available data and good physics arguments suggest that this mechanism does not apply to actual heavy ion collisions.

### C Catastrophe at RHIC?

What is the scenario in which strangelet production at RHIC leads to catastrophe? The culprit would be a stable (or long-lived, metastable) negatively charged strangelet produced at RHIC. It would have to be a light representative of a generic form of strange matter with *negative* electric charge in bulk. It would have to live long enough to slow down and come to rest in matter. Note that the term “metastable” is used rather loosely in the strangelet literature. Sometimes it is used to refer to strangelets that live a few orders of magnitude longer than strong interaction time scales. As mentioned above, we use “metastable” to refer to a lifetime long enough to traverse the detector, slow down and stop in the shielding. Since strangelets produced at high rapidity are likely to be destroyed by subsequent collisions, we assume a production velocity below $`v_{\mathrm{crit}}=0.1c`$. Hence a strangelet requires a lifetime greater than $`10^{-7}`$ sec in order to satisfy our definition of “metastable”. Once brought to rest, a negative metastable strangelet would be captured quickly by an ordinary nucleus in the environment. Cascading quickly down into the lowest Bohr orbit, it would react with the nucleus, and could absorb several nucleons to form a larger strangelet. The reaction would be exothermic. After this reaction its electric charge would be positive. However, if the energetically preferred charge were negative, the strangelet would likely capture electrons until it once again had negative charge. At this point the nuclear capture and reaction would repeat. Since there is no upper limit to the baryon number of a strangelet, the process of nuclear capture and weak electron capture would continue. There are several ways that this growth might terminate without catastrophic consequences: First, as mentioned earlier, a stable positively charged species might be formed at some point in the growth process. This object would be shielded by electrons and would not absorb any more matter. Second (also mentioned before), the lightest metastable strangelet might be isolated from other stable strangelets by many units in baryon number. (A similar barrier, the absence of a stable nucleus with $`A=8`$, prevents two $`\alpha `$ particles from fusing in stellar interiors.) Third, the energy released in the capture process might fragment the strangelet into smaller, unstable objects. Unfortunately, we do not know enough about QCD either to confirm or exclude these possibilities. A strangelet growing by absorbing ordinary matter would have an electric charge very close to zero.
If its electric charge were negative, it would quickly absorb (positively charged) ordinary matter until the electric charge became positive. At that point absorption would cease until electron capture again made the quark charge negative. As soon as the quark charge became negative the strangelet would absorb a nucleus. Thus the growing strangelet’s electric charge would fluctuate about zero as it alternately absorbed nuclei and captured electrons. Even though the typical time for a single quark to capture an electron might be quite long, the number of participating quarks grows linearly with $`A`$, so the baryon number of the strangelet would grow exponentially with time, at least until the energy released in the process began to vaporize surrounding material and drive it away from the growing strangelet. This process would continue until all available material had been converted to strange matter. We know of no absolute barrier to the rapid growth of a dangerous strangelet, were such an object hypothetically to exist and be produced. This is why we have considered these hypotheses in detail, to assure ourselves beyond any reasonable doubt that they are not genuine possibilities. We should emphasize that production of a strangelet with positive charge would pose no hazard whatsoever. It would immediately capture electrons, forming an exotic “strangelet-atom” whose chemical properties would be determined by the number of electrons. The strange “nucleus” at its core would be shielded from further nuclear interactions in exactly the same way that ordinary nuclei are shielded from exothermic nuclear fusion. We see no reason to expect enhanced fusion processes involving atoms with strangelets at their core. It has been suggested that an atom with a strangelet at its core would undergo fusion reactions with light elements in the environment and, like a negatively charged strangelet, grow without limit . This will not occur. First, the strength and range of the strong interactions between a strangelet-atom and an ordinary atom are determined by well-known, long-range properties of the nuclear force which are exactly the same for strangelets as for nuclei. Second, fusion is suppressed by a barrier penetration factor proportional to the product of the charge on the strangelet times the charge on the nucleus, $`f\sim e^{-Z_1Z_2K}`$. The most favorable case would be a strangelet of charge one fusing with hydrogen. Hydrogen-hydrogen fusion at room temperature is so rare that it is a subject of intense debate whether it has ever been observed. Even if strangelet-atom-hydrogen fusion were enhanced by some unknown and unexpected mechanism, the suppression factor that appears in the exponent would be doubled as soon as the strangelet had acquired a second unit of charge. As the strangelet’s charge grows, each successive fusion would be breathtakingly more suppressed. To provide a concrete example, we have calculated the rate of fusion with hydrogen of a thermalized (room temperature) strangelet with baryon number 394 (the baryon number present in the entire Au-Au collision) and $`Z=6`$. Using standard and well-tested nuclear reaction theory, we find a fusion rate of $`10^{-2\times 10^5}`$ sec$`{}^{-1}`$.
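The trend with charge can be illustrated with a crude textbook Gamow factor, $`\mathrm{exp}(-2\pi Z_1Z_2\alpha /\beta )`$, for a thermal proton. This toy sketch makes no attempt to reproduce the detailed rate just quoted (which depends on the reduced mass, the nuclear radius, and the full barrier integral); it shows only how violently the suppression deepens as the strangelet charge $`Z_1`$ grows:

```python
# Crude Gamow-factor illustration of f ~ exp(-Z1*Z2*K) for proton capture.
import math

ALPHA = 1 / 137.036
KT_EV = 0.025                            # room temperature, eV
M_P_EV = 938.3e6                         # proton mass, eV/c^2
BETA = math.sqrt(2 * KT_EV / M_P_EV)     # thermal proton velocity / c

for z1 in range(1, 7):                   # Z2 = 1 for hydrogen
    log10_f = -2 * math.pi * z1 * ALPHA / BETA / math.log(10)
    print(f"Z1 = {z1}: f ~ 10^{log10_f:.0f}")   # deepens by ~2700 decades per unit Z1
```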
On theoretical grounds alone, as discussed above, we believe creation of a dangerous strangelet at RHIC can be firmly excluded. We now turn to the important empirical evidence from cosmic rays.

### D Cosmic Ray Data Relevant to the Strangelet Scenario

It is clear that cosmic rays have been carrying out RHIC-like “experiments” throughout the Universe since time out of mind. Here we choose some specific conditions and summarize briefly the arguments that place restrictions on dangerous strangelet production at RHIC. We have made estimates based on cosmic ray collisions with the Moon. We also review the astrophysical estimates in a recent paper by Dar, De Rujula and Heinz . In order to extract bounds from cosmic ray data, it is necessary to model the rapidity distribution of strangelets. It will turn out that the most important distinguishing features of a production mechanism are how it behaves at central and extreme values of the rapidity. Inclusive hadronic processes generally fall like a power of the rapidity near the limits of phase space. In light of this, we see no reason for strangelet production to be exponentially suppressed at $`Y_{\mathrm{min}}`$ and $`Y_{\mathrm{max}}`$. On the other hand, long-standing theoretical ideas and phenomenology suggest the emergence of a “central plateau” away from the kinematic limits of rapidity, along which physics is independent of the rapidity. Insofar as these ideas are correct, a singularity at central rapidity would violate the principle of relativity. So for our first model we assume a power law dependence at the kinematic limits of rapidity, and an exponential fall-off away from the target fragmentation region, where the baryon chemical potential decreases. By convention we take $`y=0`$ to be the kinematic limit and we model the strangelet production near $`y=0`$ by

$$\frac{d\mathrm{\Pi }}{dy}|_{\mathrm{BG}}=Npy^ae^{-by},$$ (13)

where $`a`$ and $`b`$ are parameters, and $`N`$ is a normalization constant chosen so that $`p`$ is half the total strangelet production probability per collision (the other half comes near the other rapidity limit). The subscript “BG” stands for “best guess”. The authors of Ref. have made an extreme model of strangelet production, where production is completely confined to central rapidity. We know of no physical motivation for this assumption. On the contrary, what we know about particle production in heavy ion collisions argues against such a model. Their model can be approximated by a $`\delta `$ function at central rapidity,

$$\frac{d\mathrm{\Pi }}{dy}|_{\mathrm{DDH}}=p\delta (y-Y/2),$$ (14)

where $`Y`$ is the total rapidity interval. Although we find such a model impossible to justify on any theoretical grounds, we will use this rapidity distribution when we review the work of Ref. . The limits from cosmic ray considerations depend on the assumed rapidity distribution of strangelet production, in the following respect. If strangelets are produced in the nuclear fragmentation regions, then cosmic ray collisions with stationary nuclei on the surface of the moon provide more than adequate limits on dangerous strangelet production at RHIC. On the other hand, if strangelets were produced only at zero rapidity in the center of mass, then strangelets produced on the Moon would not survive the stopping process. Under this hypothetical — and we believe, quite unrealistic — assumption the persistence of the Moon provides no useful limit on strangelet production. Dar, De Rujula, and Heinz introduce a parameter, $`p`$, as a simple way to compare limits obtained in different processes. $`p`$ measures the probability to make a strangelet in a single collision with speed low enough to survive the stopping process at RHIC.
$`p`$ is related to the parameter $`𝔭`$ which we introduced earlier by $`𝔭=2\times 10^{11}p`$. We will analyse cosmic ray data in terms of $`p`$ and relate the results to $`𝔭`$ when necessary. We assume that $`p`$ is independent of the atomic mass of the colliding ions, at least for iron and gold. We also assume $`p`$ is the same for RHIC and AGS energies. A single choice of $`p`$ simplifies our presentation. We will discuss the qualitative differences between AGS and RHIC energies and between collisions of different nuclear species where they arise. Of course our aim is to bound $`𝔭`$ far below unity. We begin with our neighbor, the Moon, because we know the environment well and know the Moon is not made of strange matter. Collisions of cosmic rays with the outer envelopes of stars, gaseous planets, or even terrestrial planets with atmospheres like the Earth and Venus, lead overwhelmingly to collisions with light nuclei like hydrogen, helium, etc. This is not a likely way to make strange matter. The Moon has a rocky surface rich in iron. Using the data from Section II it is easy to calculate the rate of collisions between specific heavy ions on the lunar surface. Consider a cosmic ray nucleus $`A`$ colliding with a nucleus $`A^{\prime }`$ with fractional abundance $`f_{A^{\prime }}`$ in the lunar soil. The total number of collisions at energies greater than $`E`$ over the 5 billion year lifetime of the moon (from eq. (5)) is

$$N(A,E)|_{\mathrm{moon}}\approx 8\times 10^{29}f_{A^{\prime }}\frac{\mathrm{\Gamma }(A,10\mathrm{GeV})}{\mathrm{\Gamma }(\mathrm{Fe},10\mathrm{GeV})}\left(\frac{10\mathrm{GeV}}{E}\right)^{1.7}.$$ (15)

(Eq. (15) was obtained by multiplying eq. (5) by $`15\times 10^{16}`$, the number of seconds in five billion years, and by the fractional abundance $`f_{A^{\prime }}`$. In addition, the collision cross section varies with $`A`$ and $`A^{\prime }`$ like $`(A^{1/3}+A^{\prime 1/3})^2`$. Since the dominant constituents of the moon are lighter than iron, the probability of a cosmic ray interacting with iron (or gold) is higher than measured by its fractional abundance alone. We ignore the $`A`$ dependence of the cross section because it is small, because it would only increase the strength of our bounds, and because it complicates our equations.)

Using iron, $`f_{\mathrm{Fe}}=0.012`$ , and the cosmic ray abundance of iron and “gold”, we can calculate the number of dangerous strangelets which would have been created on the surface of the moon in several cases of interest as a function of $`p`$.
* Dangerous strangelet production in lunar iron-iron collisions at AGS energies. Taking $`E=10`$ GeV/nucleon and $`f_{\mathrm{Fe}}=0.012`$ we obtain $`N_{\mathrm{moon}}(\text{Fe-Fe, AGS})\approx 10^{28}p`$ for the number of dangerous strangelets produced on the surface of the moon in terms of the probability to produce one in a single collision at RHIC ($`p`$).
* Dangerous strangelet production in lunar iron-iron collisions at RHIC energies. Scaling $`E`$ to 20 TeV/nucleon, we find $`N_{\mathrm{moon}}(\text{Fe-Fe, RHIC})\approx 2\times 10^{22}p`$.
* Dangerous strangelet production in lunar “gold”-iron collisions at AGS energies. The penalty of demanding “gold” is a factor of $`10^5`$ in cosmic ray flux, so $`N_{\mathrm{moon}}(\text{Au-Fe, AGS})\approx 10^{23}p`$.
* Dangerous strangelet production in lunar “gold”-iron collisions at RHIC energies. Scaling $`E`$ to 20 TeV/nucleon, we find $`N_{\mathrm{moon}}(\text{Au-Fe, RHIC})\approx 2\times 10^{17}p`$.

The Moon does not provide useful limits for targets less abundant than iron.
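The four cases follow mechanically from eq. (15); a sketch, with the inputs as quoted above ($`f_{\mathrm{Fe}}=0.012`$, $`\mathrm{\Gamma }(\mathrm{Au})/\mathrm{\Gamma }(\mathrm{Fe})\approx 10^{-5}`$, and fixed-target equivalent energies of 10 GeV/nucleon for the AGS and 20 TeV/nucleon for RHIC):

```python
# Numerical sketch of eq. (15) and Cases I-IV; multiply each result by p.
def lunar_collisions(E_gev, flux_ratio=1.0, f_target=0.012):
    """Eq. (15): collisions above E (GeV/nucleon) over the Moon's lifetime."""
    return 8e29 * f_target * flux_ratio * (10.0 / E_gev) ** 1.7

print(f"I   Fe-Fe, AGS : {lunar_collisions(10):.0e}")         # ~1e28
print(f"II  Fe-Fe, RHIC: {lunar_collisions(2e4):.0e}")        # ~2e22
print(f"III Au-Fe, AGS : {lunar_collisions(10, 1e-5):.0e}")   # ~1e23
print(f"IV  Au-Fe, RHIC: {lunar_collisions(2e4, 1e-5):.0e}")  # ~2e17
```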
The total number of collisions on the surface of the Moon is huge compared to the number anticipated at RHIC. However, strangelets produced with even relatively low rapidity in the lunar rest frame do not survive subsequent collisions with nuclei in the lunar soil. DDH model the survival probability by assuming that strangelets with $`v<v_{\mathrm{crit}}=0.1c`$ survive and all others are torn apart. Here, we assume a geometric strangelet dissociation cross section which is independent of energy, and use standard methods to calculate a survival probability. Our results agree with those of DDH to within a factor of about 2 for all cases of interest. Consider a strangelet with atomic mass $`A`$, charge $`Z`$ and rapidity $`y`$ in the lunar rest frame. Its survival probability is

$$P(y,A,Z)=\mathrm{exp}[-n\sigma (A)\lambda (y,Z,A)]$$ (16)

$$=\mathrm{exp}[-4.85(1+\frac{1}{3}A^{1/3})^2(\mathrm{cosh}y-1)A/Z^2].$$ (17)

Here $`n`$ is the density of lunar soil (assuming silicon, $`n=0.5\times 10^{23}\mathrm{cm}^{-3}`$), $`\sigma (A)`$ is the geometric cross section for the strangelet to collide with a silicon nucleus, $`\sigma (A)=0.4(1+\frac{1}{3}A^{1/3})^2`$ barns, and $`\lambda (y,Z,A)`$ is the stopping distance calculated assuming that the strangelet loses energy only by ionization, $`\lambda (y,Z,A)=242(\mathrm{cosh}y-1)A/Z^2\mathrm{cm}`$. For a representative dangerous strangelet, e.g. $`A=20`$, $`Z=-1`$, the suppression factor in eq. (17) is very large, $`P(y,20,-1)=\mathrm{exp}[-350(\mathrm{cosh}y-1)]`$, so only strangelets with $`y\approx 0`$ survive. For the rapidity distribution, eq. (14), chosen by DDH, all dangerous strangelets produced at RHIC would survive stopping, but no strangelet would survive stopping on the moon. The more realistic production mechanism of eq. (13) yields lunar suppression factors of $`3\times 10^{-3}`$, $`10^{-4}`$, $`2\times 10^{-6}`$, and $`5\times 10^{-8}`$ when the parameter $`a`$ (which controls the small $`y`$ behavior of $`dN/dy`$) is chosen as $`1,2,3`$ and $`4`$. (These estimates apply to $`A=20`$, $`Z=-1`$. Larger $`A`$ are more suppressed, but we do not consider production of a negatively charged strangelet with $`A`$ much larger than 20 to be credible. Larger $`Z`$ reduces the suppression.) However this mechanism also reduces the probability that a strangelet produced at RHIC will survive the stopping process. The survival probabilities are $`8\times 10^{-3}`$, $`8\times 10^{-3}`$, $`10^{-2}`$, and $`2\times 10^{-2}`$, for $`a=1,2,3,4`$ respectively. Thus the effective lunar suppression factors are: an enhancement of $`3`$ for $`a=1`$, no suppression for $`a=2`$, suppression by $`2\times 10^{-4}`$ for $`a=3`$, and by $`3\times 10^{-6}`$ for $`a=4`$. Choosing a suppression factor of $`10^{-6}`$ we obtain, for the number of surviving dangerous strangelets, $`10^{22}p`$ for Case I (iron-iron at AGS energies), $`2\times 10^{16}p`$ for Case II (iron-iron at RHIC energies), $`10^{17}p`$ for Case III (“gold”-iron at AGS energies), and $`2\times 10^{11}p`$ for Case IV (“gold”-iron at RHIC energies). To compare with other estimates we convert these results to bounds on $`𝔭`$, the probability of producing a dangerous strangelet at RHIC which survives the stopping process. The fact that the Moon has not been converted to strange matter over its lifetime bounds $`𝔭`$ by $`𝔭<2\times 10^{-11},10^{-5},2\times 10^{-6}`$, and $`1`$ for cases I-IV respectively.
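A sketch of eq. (17) and of how the Case I–IV bounds follow, using the $`10^{-6}`$ suppression factor and the collision counts above:

```python
# Stopping-survival estimate, eq. (17), and the Case I-IV bounds on p_frak.
import math

def survival(y, A, Z):
    """Eq. (17): survival probability for a strangelet of rapidity y; |Z| matters."""
    return math.exp(-4.85 * (1 + A ** (1 / 3) / 3) ** 2
                    * (math.cosh(y) - 1) * A / Z**2)

print(survival(0.1, 20, 1))   # ~0.17: only y ~ 0 survives
print(survival(0.5, 20, 1))   # ~3e-20

cases = {"I": 1e28, "II": 2e22, "III": 1e23, "IV": 2e17}
for name, n in cases.items():
    p_bound = 1.0 / (n * 1e-6)                      # demand < 1 survivor
    print(name, f"p_frak < {min(1.0, 2e11 * p_bound):.0e}")
```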
Since we believe strangelet production to be more likely at AGS energies than at RHIC, and believe iron to be a reasonable “heavy nucleus”, we take the limit from Case I very seriously. If, however, one insists on recreating exactly the circumstances at RHIC and insists on the worst case rapidity distribution, then lunar limits are not applicable. DDH explore the consequences of dangerous strangelet production in nucleus-nucleus collisions in interstellar space. They adopt “worst case” assumptions at several points. In particular, they demand RHIC energies and ultra heavy nuclei (gold rather than iron), and they assume that a dangerous strangelet is produced only at zero rapidity in the center of mass. Given these restrictive conditions they compute the rate at which strangelets are produced at rest relative to the galaxy. Taking an energy of 100 GeV/nucleon and an abundance relative to iron of $`10^{-5}`$ in eq. (7) (DDH assume an $`E^{-2.6}`$ decay of the cosmic ray spectrum and take $`\mathrm{\Gamma }(\mathrm{Au})/\mathrm{\Gamma }(\mathrm{Fe})\approx 3\times 10^{-5}`$, slightly different from our choices), we reproduce their result, $`R(100\text{GeV},\text{Au})\approx 10^{-58}`$. Multiplying by the age of the galaxy ($`T_0=10`$ billion years) and by the probability, $`p`$, of dangerous strangelet production, we find the number of dangerous strangelets produced per $`\mathrm{cm}^3`$ in the galaxy,

$$N(100\text{GeV},\text{Au})=T_0pR(100\text{GeV},\text{Au})\approx 10^{-41}p\text{cm}^{-3}.$$ (18)

DDH estimate that the material contained in a volume of $`10^{57}`$ $`\mathrm{cm}^3`$ is swept up in the formation of a “typical star”, so that the probability of a dangerous strangelet ending up in a star is approximately $`P_{\ast }\approx 10^{16}p`$. They then go on to argue that the subsequent destruction of the star would be detectable as a supernova-like event. Based on $`P_{\ast }`$ and the observed rate of supernovas, DDH limit $`p`$ to be less than $`10^{-19}`$. This corresponds to a limit of $`2\times 10^{-8}`$ on $`𝔭`$, the probability of producing a dangerous strangelet during the life of RHIC. Actually, we believe that DDH have been too conservative. Good physics arguments indicate that lower energy collisions are more likely to create strangelets, and iron is nearly as good a “heavy” ion as gold. If we scale down $`E`$ from RHIC energies (100 GeV/nucleon) to AGS energies (4.5 GeV/nucleon) we gain a factor of $`4\times 10^4`$ from the $`E^{-3.4}`$ dependence in eq. (7). If we replace gold by iron we gain a factor of $`10^{10}`$. So the bound on dangerous strangelet production during the RHIC lifetime is more nearly $`𝔭<10^{-21}`$.
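The chain of numbers in the DDH estimate is short enough to lay out explicitly; the sketch below stops at the DDH supernova bound, leaving the further AGS/iron rescaling in the prose above:

```python
# Arithmetic behind the DDH-style galactic bound; inputs are the values
# quoted above (1e-58 per cm^3 per sec, T0 = 10 Gyr, 1e57 cm^3 per protostar,
# p < 1e-19 from the supernova rate).
T0_SEC = 1e10 * 3.15e7                 # age of the galaxy, seconds
density = 1e-58 * T0_SEC               # eq. (18) with p factored out, per cm^3
p_star = density * 1e57                # expected dangerous strangelets per star
print(f"P_star ~ {p_star:.0e} * p")    # ~ the 1e16 * p quoted above
print(f"p_frak < {2e11 * 1e-19:.0e}")  # DDH limit over the RHIC lifetime: ~2e-8
```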
Finally, we point out the implications of strangelet metastability for these arguments. DDH have implicitly assumed that the dangerous strangelet produced in interstellar space lives long enough to be swept up into a protostellar nebula. Suppose, instead, that the dangerous strangelet was only metastable, and that it decays away by baryon emission with a lifetime greater than $`10^{-7}`$ sec but much less than the millions of years necessary to form a star. In this case a dangerous strangelet produced at RHIC would have time to stop in matter, stabilize and begin to grow. However a strangelet formed in interstellar space would decay harmlessly into baryons. We have estimated baryon emission lifetimes for strangelets. A lifetime of $`10^{-7}`$ seconds is near the upper limit of our estimates. Since the strangelet production cross section is likely to fall so quickly with $`A`$ and $`S`$, the strangelet most likely to be created at RHIC would be the least stable and would likely decay on time scales much shorter than $`10^{-7}`$ seconds by strong baryon emission. A strangelet heavy enough to have a baryon emission lifetime of order $`10^{-7}`$ seconds would be much harder to produce at RHIC. Still, the astrophysical argument of DDH is compromised by the possibility of producing a metastable strangelet with a long enough baryon emission lifetime. Note, however, that instability to decays which do not change baryon number (and therefore do not lead the strangelet to evaporate) is irrelevant. Also, note that metastability does not compromise the lunar arguments: a metastable strangelet produced in the lunar rest frame would have just as much time to react as one produced at RHIC. This discussion shows the pitfalls of pursuing the “worst case” approach to the analysis of empirical limits. The rapidity distribution necessary to wipe out lunar limits is bizarre. The metastability scenario necessary to wipe out the astrophysical limits seems less unphysical, but still highly contrived. Compelling arguments assure us that RHIC is safe. Nevertheless, a worst case analysis, based on arguments which bend, if not break, the laws of physics, leads to a situation where there is no totally satisfactory, totally empirical limit on the probability of producing a dangerous strangelet at RHIC. In summary, we have relied on basic physics principles to tell us that it is extremely unlikely that negatively charged strange matter is stable, that if it is stable in bulk, it is unlikely to be stable in small droplets, and that even small strangelets are impossibly difficult to produce at RHIC. In addition, empirical arguments using the best physics guidance available, as opposed to “worst case” assumptions, together with data on cosmic ray fluxes, bound the probability of dangerous strangelet production at RHIC to be negligibly small.

## VI Acknowledgments

We would like to thank many individuals for sharing their data and their insights with us. We thank B. Price, L. Rosenberg, S. Swordy and A. Westphal for references and conversations on cosmic rays, S. Mathur for conversations on gravitational singularities, and K. Hodges and B. Skinner for references and conversations on the composition of the moon. We thank A. Kent for correspondence on risk analysis. We are also grateful to A. Dar, A. DeRujula and U. Heinz for sending us Ref. and for subsequent correspondence and conversations. Research supported in part by the Department of Energy under cooperative agreement DE-FC02-94ER40818 (W.B. & R.L.J.) and grants DE-FG02-92ER40704, DE-FG02-90ER40562 (J.S.), and DE-FG02-90ER40542 (F.W.).
# Hourly Variability in Q0957+561

## 1 Introduction

Previous studies of the Q0957+561 A,B brightness fluctuations have indicated the existence of daily brightness fluctuations. If these are intrinsic to the quasar, then the time delay can be determined to a precision of a fraction of a day; however, if such fluctuations are at least in part due to microlensing, they would probably signal the nature of a baryonic dark matter component. A recent report by Colley and Schild (ApJ 518, 153, 1999) demonstrated precision aperture photometry close to the photon limit. The procedure subtracts out the lens galaxy’s light according to the galaxy’s profile determined recently from an HST image, and removes cross talk light from the A and B image apertures. We now apply this reduction to 1629 frames collected in 4 runs, with continuous monitoring for 10 hours/night over, typically, 5 nights. Because previous analysis of these data showed that small brightness fluctuations were observed during these runs, but because the effects of seeing were known to limit conclusions about such low-amplitude fluctuations, we hoped that re-reduction would clarify conclusions about time delay and microlensing. In the first several sections to follow, new results unravel the systematic effects that seeing has on the aperture photometry. Next, the measured brightness fluctuations give a structure function that describes the amplitude of variations observed in this lensed/microlensed system on various timescales. Final sections present time delay calculations and possible microlensing results from two observing runs separated by 417 days.

## 2 The Photometry

For two decades, RS has monitored Q0957 and amassed a “master dataset” of the variations of images A and B (Schild & Thomson 1995 and earlier references therein). The data presented here come from all-night monitoring in one of two similar observational configurations. About half of the data, as in Paper I, derives from unbinned 1k$`\times `$1k CCD images with pixel scales of one-third of an arcsecond, exposure times of 450 seconds, read noise of $`8e^{-}`$, and FWHM seeing of 1.5–2 arcseconds. Some of the data come from otherwise similar circumstances, but derive from a 2k$`\times `$2k CCD binned down to 1k$`\times `$1k; here shorter exposure times are used, and the pixel scale is twice as great. Both of these setups typically allow for 5 unsaturated comparison stars in the $`R`$-band on each data-frame. This paper presents the photometry from more than 1600 such frames.

To reduce 1600 frames, one requires a highly automated photometry reduction code (one free of mouse clicks and manual entry of parameters). In Paper I, we detailed such a method, which is not only automated, but includes two basic improvements upon the basic aperture photometry scheme historically employed to reduce Q0957+561 data images: 1) use of HST imaging to subtract the galaxy, 2) correction for “cross talk” between the narrowly separated ($`6.1^{\prime \prime }`$) A and B images. Typically the galaxy, whose core is inside the B aperture, contributes 17% to the B aperture and 2% to the A aperture. The cross talk contributes typically 2% to each image. One could calibrate those out if the seeing were steady, but poor seeing obviously introduces more cross talk contamination, and has the curious effect of spreading much more galaxy light into the A image. These variable effects confuse the search for both intrinsic QSO variation and microlensing. Worse still, while the seeing introduces a correlation in the A and B photometry (spreading more cross talk light into both images from each other), it also introduces an anti-correlation due to the galaxy, whose light is spread out from the B aperture and into the A aperture. To disentangle these effects, a complete galaxy subtraction and measurement of cross talk is necessary.
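A toy version of the cross-talk part of this disentangling treats the measured aperture fluxes as a linear mix of the true A and B fluxes and inverts the 2$`\times `$2 system; the mixing fractions used here are illustrative stand-ins for the values measured from field stars as described below:

```python
# Minimal sketch of the cross-talk correction idea. eps_ab (light from B
# landing in the A aperture) and eps_ba are hypothetical inputs; in practice
# they are measured from field stars at the A-B separation.
import numpy as np

def unmix(f_a_meas, f_b_meas, eps_ab, eps_ba):
    mix = np.array([[1.0, eps_ab],
                    [eps_ba, 1.0]])
    return np.linalg.solve(mix, np.array([f_a_meas, f_b_meas]))

# e.g. ~2% cross talk in each direction, typical of the seeing reported here:
print(unmix(1000.0, 800.0, 0.02, 0.02))
```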
Slight modifications to the photometry scheme in Paper I have been implemented, principally for the sake of speed and increased automation. As before, the astrometry is accomplished via PSF fitting, which requires a model PSF. The model is empirical, built by stacking several standards. The first pass utilizes methods described by Alard & Lupton (1998), but only even terms in the polynomial expansion are used. An explicitly even model PSF for each standard can be fit to the standard itself to determine the astrometry of that standard. Shifting and stacking (by median) all of the standards forms the model PSF. This method has the advantages of being quite fast and of eliminating cosmic rays automatically from the model PSF. Other modifications include increased sensitivity to erratic photometry in the standards (by simply censoring 5-$`\sigma `$ outliers), and more careful measurement of crosstalk (again by filtering out gross outliers). Each of these methods increases the stability of the photometry appreciably.
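A minimal sketch of the shift-and-stack step is given below; it assumes star cutouts and their fitted centroids are already available, and it is illustrative only (the actual pipeline first fits the explicitly even polynomial PSF model described above).

```python
import numpy as np
from scipy.ndimage import shift

def build_model_psf(cutouts, centroids):
    """Median-stack normalized field-star cutouts into an empirical PSF.

    cutouts   : list of 2-D arrays, one per unsaturated standard star
    centroids : list of (row, col) positions from the per-star PSF fits
    Median combination automatically rejects cosmic rays, as noted above.
    """
    ny, nx = cutouts[0].shape
    cy0, cx0 = (ny - 1) / 2.0, (nx - 1) / 2.0
    stack = [shift(img / img.sum(), (cy0 - cy, cx0 - cx), order=3)
             for img, (cy, cx) in zip(cutouts, centroids)]
    return np.median(stack, axis=0)
```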
## 3 The Effects of the Lens Galaxy

Fig. 1 shows the contribution of the lens galaxy's light to the A and B measurement apertures as a function of seeing, as measured from all the image frames of this data set. Plotted is the percentage of the signal in the apertures as a function of FWHM seeing, measured from the stellar images on the data frames. Because seeing effects originate in different levels of the terrestrial atmosphere and do not have a single unique profile, adoption of a simple FWHM parameter is only a first-order statistic and cannot fully describe the variety of seeing profiles supplied by nature. In the top panel of Fig. 1 the typical percentage contribution to the light of image B is 17.7 percent for average seeing. The average brightness of the B image during this time was 16.51 magnitudes. The observed contribution is equivalent to an 18.39 magnitude source in the measurement aperture. This is in excellent agreement with the R = 18.34 magnitude correction to the lens galaxy magnitude that has been adopted historically throughout the 20-year Q0957 brightness monitoring project (Schild and Weekes 1984, ApJ 277, 481). The historical correction was made from a compilation of aperture photometry at optical and infrared wavelengths, and the known colors of an elliptical galaxy at moderate redshift. Published data tables of Schild and colleagues usually had a correction for an R = 18.34 magnitude galaxy applied, but no correction for the galaxy contribution to the A aperture and no aperture cross talk corrections. The bottom panel in Fig. 1 shows the brightness contribution from the lens galaxy to the aperture of the A image. For average seeing, this correction is 3%. Because this contribution originates in the previously unknown outer profile of the lens galaxy, no correction for it has been made in the historical brightness records (where bad seeing images were removed). This has little effect on the historical record, since it just causes a slight reduction in the measured brightness fluctuations of image A, and a slight 3% correction to the inferred A/B continuum brightness ratio.

Best fit parabolic curves have been overplotted on the Fig. 1 data; we give those fits below. With seeing expressed as a FWHM (represented below as $`x`$) in units of arcseconds, the fitted curves express the percent contribution of the lens galaxy to the R magnitude in a $`6^{\prime \prime }`$ diameter A aperture, $`\delta R(A)`$, and B aperture, $`\delta R(B)`$, as
$$\begin{array}{c}\delta R(A)=[3.154-0.6775x+0.3542x^2]\%,\\ \delta R(B)=[18.725-0.5962x]\%.\end{array}$$ (1)
We found that the best fit quadratic term for B was very nearly zero, so we omit it here. These empirical low order fits apply only in the range shown on the plots, and should not be used for seeing with FWHM less than one arcsecond. In such cases, the minimum value shown for the A contamination is probably a quite good approximation, since the curve is obviously flattening by one arcsecond anyway. It seems, however, that the B contamination is still growing as the seeing improves toward one arcsecond, and perhaps one could extrapolate somewhat, but the B contamination would obviously level off below some FWHM; hence, the correction for the galaxy in the B aperture during very good seeing remains somewhat unresolved.

## 4 The Aperture Corrections

As the seeing gets worse, relatively more light will spill over from quasar image A into the adjacent B aperture (and vice versa) and cause the systematic cross talk effects noted by Schild and Cholfin (1986). This effect can be corrected for by measuring its amplitude in nearby field stars. Insofar as the telescope/camera optics produce no asymmetrical image structure, the effect should be the same for the A and B apertures, as we find here. Thus Fig. 2 shows the aperture correction curve applicable to both apertures, with the amplitude of the crosstalk expressed in percent as a function of seeing FWHM. The shape and nature of this curve will necessarily depend somewhat on the optics of the telescope/camera and on the properties of the terrestrial atmosphere above Mt. Hopkins. We find that a cubic curve represents the crosstalk corrections quite well, and has the following form for $`\delta R`$ as a function of the seeing FWHM ($`x`$):
$$\delta R=[0.7087+0.6320x-0.7387x^2+0.2788x^3]\%$$ (2)
Again, these curves should not be applied for seeing FWHM below one arcsecond, for which the correction evidently flattens out at a value very close to one percent. Results in this and the previous section show two very different effects of seeing in the two measurement apertures. The crosstalk correction is virtually the same for the two apertures, but the galaxy correction opposes the crosstalk in image B, while adding to the crosstalk for image A. Thus the B image data are substantially more seeing independent than the image A data. Since most of the historical Q0957 data (from the “master dataset” of Schild and Thomson 1995) were taken with seeing between 1 and 2 arcseconds, the corrections are always less than about a percent. For seeing worse than 2.5 arcseconds, the observations have been heavily censored in the past, which Fig. 2 demonstrates to be important since seeing effects can produce photometric errors of several percent.
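The fitted corrections are easy to apply per frame; below is a minimal transcription of eqs. (1) and (2), valid only over the fitted seeing range (roughly 1–3 arcsec FWHM).

```python
def galaxy_percent_A(fwhm):
    """Eq. (1): lens-galaxy light in the A aperture, in percent."""
    return 3.154 - 0.6775 * fwhm + 0.3542 * fwhm**2

def galaxy_percent_B(fwhm):
    """Eq. (1): lens-galaxy light in the B aperture, in percent."""
    return 18.725 - 0.5962 * fwhm

def crosstalk_percent(fwhm):
    """Eq. (2): A<->B aperture cross talk, in percent (same for both apertures)."""
    return 0.7087 + 0.6320 * fwhm - 0.7387 * fwhm**2 + 0.2788 * fwhm**3

# Typical seeing of ~1.7-2 arcsec reproduces the quoted numbers:
# ~17.7% galaxy light in B, ~3% in A, and ~1-1.2% cross talk.
for fwhm in (1.0, 1.5, 2.0, 2.5, 3.0):
    print(fwhm, galaxy_percent_A(fwhm), galaxy_percent_B(fwhm), crosstalk_percent(fwhm))
```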
## 5 Results: The Brightness Record

The final brightness data, after corrections for the lens galaxy and aperture cross talk, appear in Fig. 3. Plotted points are the result from each data frame, so the amplitude of the “swarm” for each night reflects the accuracy of the photometry. The accuracy is comparable to our results in Paper I, where it was found that the photometry is within 15% of the limit imposed by Poisson statistics. Note that the swarm for the first observing run in Dec. 1994, covering Julian Dates 2449702–08, has a larger amplitude than for later runs; this is because the pixel size was larger by a factor of two on the first observing run, so exposure times were four times shorter and the Poisson noise greater. But although the noise per image frame was larger, there are more frames, and the overall precision of the photometry is about the same as for subsequent observing runs. Correlations in the A and B photometry could arise due to coherent observational errors, such as crosstalk (Schild and Cholfin 1986), though no significant correlation appears (verified by Pearson's correlation coefficient applied to the raw, galaxy subtracted, and crosstalk corrected photometry). Although it might at first appear that correlated fluctuations were seen in the Feb 1995 run, with both images brightening with time, note that during this time the A image was unusually faint and the B image was unusually bright. So we conclude that, by chance, both images were seen brightening at the same time. These brightness records give unambiguous evidence of daily brightness fluctuations. The best example is the record for image B on night 2449704, which we have highlighted with an arrow in Fig. 3 (the same feature is just as obvious in the uncorrected photometry). Here image B increased in brightness by more than 2% from the night before, and returned to its normal level the night after. We conclude that the quasar lens system is capable of producing 2% brightness fluctuations on a time scale of a day. In section 7 below we will compute the structure function for Q0957 brightness fluctuations on long and short time scales.

## 6 Comparison of Old and New Reductions

The new reductions allow a comparison with the original CCD data record published by Schild and colleagues over the years. Their data are available in tabular form at http://cfa-www.harvard.edu/~rschild. The data have also been published in Schild & Thomson (1995, and references therein). Here we are concerned with the question of how well the new reductions agree with the original published and tabulated data. Recall that the basic procedure in both the old and new reductions is aperture photometry, and that the new reduction differs primarily in the treatment of the subtraction of the lens galaxy G1 and in the correction for aperture crosstalk. Plotted as filled circles in Figs. 4a) and b) is the photometry from this work. Plotted as open circles are the previous reductions of these data from Schild's master dataset. The open triangles show the $`r`$-band photometry from Apache Point (Colley et al. 1999). Fig. 4a) shows the comparisons with the raw aperture photometry of this work, while Fig. 4b) gives the comparisons with the corrected photometry. In Fig. 4a), no correction or shift has been applied to either the old or new photometry for image A. The overall agreement is obviously quite good (almost always within 2%). Meanwhile, for image B, we have simply added back in the galaxy light subtracted in Schild's master dataset (his 18.34 magnitude galaxy correction), and there is quite good agreement (the discrepancy rarely exceeds 1%).
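Quantitatively, the old and new records are compared by interpolating one onto the epochs of the other and measuring the mean offset and scatter, as in the next paragraph; a minimal sketch of that bookkeeping (function name ours):

```python
import numpy as np

def compare_records(t_new, m_new, t_old, m_old):
    """Mean offset and rms scatter (mag) between two time-sorted photometric
    records, linearly interpolating the old record onto the new epochs
    (overlapping epochs only)."""
    ok = (t_new >= t_old.min()) & (t_new <= t_old.max())
    diff = m_new[ok] - np.interp(t_new[ok], t_old, m_old)
    return diff.mean(), diff.std()
```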
The reason the observation times appear slightly different for these two datasets is that in the Schild dataset, filtering for seeing removed large fractions of data from each run and hence shifted the mean time of observation for each night. If one linearly interpolates the Schild data and compares directly to the photometry of this work, one finds that the image A photometry averages just 3 mmag brighter than Schild's, with a standard deviation of 8 mmag (statistically consistent with no offset at all). If one does the same for image B, one finds an average discrepancy of 184 mmag with a standard deviation of 6 mmag. This 184 mmag means there is an additional 18.31 magnitude source in our raw B aperture, very close to the 18.34 mag correction Schild made for the lens galaxy. The \[A,B\] rms differences between the photometries of only \[8, 6\] mmag show that the nightly averages published by Schild and colleagues are confirmed, since the published errors average about the same amount, which adds credence to many microlensing conclusions reported based upon these data. Note particularly from Fig. 4a) that the discrepancies between the old and new reductions are on a time scale of individual nights, which should not affect the microlensing trends on larger timescales proposed from wavelet analysis by Schild (1999). Fig. 4b) gives the same data from Schild, but plots our corrected photometry. In previous sections, we discussed the contributions of the galaxy and cross talk during typical seeing. For aperture A the lens galaxy contributes approximately 3% during typical seeing, while the cross talk between apertures contributes another 1.2%. Our corrected photometry is accordingly 44 mmag fainter than the Schild photometry, with an rms of 7 mmag (a slight improvement over the comparison with raw photometry). The image B photometry shows a 40 mmag deficit relative to the Schild photometry, with an rms of 5 mmag (also a slight improvement over the raw photometry). The improvement in rms is perhaps expected, because of the filtering for good seeing done by Schild. Schild and Thomson (1995) report an internal photometric error of 10 and 12 mmag for A and B respectively during this epoch. Therefore, the agreement of the Schild photometry and the current photometry at the 5 mmag level is about as good as one could expect. It is perhaps noteworthy that the agreement exists despite completely different software for the aperture photometry (IRAF dophot vs. IDL programming from scratch) and completely different methods for the relative photometry (pencil and calculator vs. Honeycutt matrix reduction). Because we expect to continue re-reducing archival data frames with our improved data reductions, it is worthwhile summarizing the differences between the new and old reductions. Relative to the old reductions, the new results for image \[A,B\] are \[44, 40\] mmag fainter than the tabulated Schild photometry, primarily because of the new corrections for aperture crosstalk but also partly due to improved knowledge of the effects of the lens galaxy. Thus to correct our old photometry to the new system, one would add \[.044, .040\] mag to the table entries in the Schild master data set. The true errors of the photometry for the dates we have compared are \[7, 5\] mmag. For reference, the open triangles in Figs. 4a) and b) give the APO $`r`$-band photometry where applicable (Colley et al. 1999), with arbitrary offsets.
While the gross features of the photometry are reproduced, the agreement is not as good, but this might be expected because 1) $`r`$ is a different filter, 2) no correction or censoring for seeing effects was applied, and 3) the APO photometry derives from only a handful of frames on a given night (hence the larger errorbars), compared with several tens of frames per night at Mt. Hopkins.

## 7 The Structure Function for Q0957 Brightness Fluctuations

For many purposes related to the discussion of the Q0957 time delay and interpolation of the data set for missing observations, it is useful to know the structure function for the brightness fluctuations. The structure function, which is the expected variance as a function of temporal separation, can be approximated as a power law $`V^2=10^a\tau ^b`$. For our corrected photometry, we find $`a=-4.5;b=0.54`$, in rough agreement with the values found by Colley et al. (1999) for $`r`$-band photometry of $`a=-4.1;b=0.47`$, and by Press, Rybicki & Hewitt (1992, hereafter PRH) for $`R`$-band photometry of $`a_A\simeq -0.34;b_A\simeq 0.54`$ and $`a_B\simeq -0.22;b_B\simeq 0.68`$ for the A and B images respectively. The difference between the A and B power laws in PRH shows that measurement of the structure function is perhaps not the most stable process (and perhaps not even the best way to describe QSO variation), and suggests that we should not be troubled by the smallish discrepancies between each of these estimates. Fortunately, within reason, the details of the structure function have little bearing on the PRH statistic itself. Fig. 5 contains a plot of the structure function for the fluctuations averaged for images A and B. Note that Schild (1996) has found from wavelet analysis that the A and B images have equal brightness fluctuations on time scales from days to 2 months, and Pelt et al. (1998) have shown that on time scales of decades, the fluctuations are larger in B, presumably because of its larger optical depth to microlensing. Fig. 5 shows the structure function for time scales of less than a day for the first time. Note that there is no obvious departure from the power law from the smallest to the longest scales. For time scales of a day, the results are consistent with the frequent remark by Schild (1996) that this quasar's daily brightness fluctuations are typically of order a percent (0.6 percent according to this plot).
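As a quick check of the quoted numbers, the fitted power law can be evaluated directly; a minimal sketch:

```python
import numpy as np

def structure_rms(tau_days, a=-4.5, b=0.54):
    """Expected rms fluctuation (mag) over a lag tau, from V^2 = 10**a * tau**b."""
    return np.sqrt(10.0**a * tau_days**b)

print(structure_rms(1.0))         # ~0.0056 mag: the ~0.6% daily figure quoted above
print(structure_rms(1.0 / 24.0))  # ~0.0024 mag expected from hour to hour
```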
## 8 Time Delay Estimates

Because two of the observing runs, December 1994 and February 1996, were separated by 417 days, the presently favored value of the time delay (Colley et al. 1999, Pelt et al. 1998), it should be possible to determine the gravitational lens time delay to a fraction of a night. We have completed two time delay calculations for the new data. For both of the calculations, to be discussed below, the data have been binned by one hour. Our first calculation, shown as a dotted line in Fig. 6, is based upon simple linear interpolation of the data and is equivalent to the calculation by Kundić et al. (1997, ApJ 482, 75), in which errors and values are linearly interpolated between the nearest points, and a simple $`\chi ^2`$ statistic is computed. This autocorrelation calculation produces a network of minima separated by 1.0 days, with the deepest minimum occurring for 417.3 days. A time delay calculation using the PRH method is shown as the solid line in Fig. 6. This calculation uses the Fig. 5 structure function to estimate the permitted range of brightness values in place of linear interpolation. It also shows a network of minima separated by 1.0 days, with the deepest minimum at 417.5 days. These minima at $`n+1/2`$ days can be understood easily in terms of the statistics used to compute the best-fit time delay, both of which prefer not to overlap data. The linear method prefers no overlap for a simple reason. Because the data are binned by one hour, the actual number of data points in the first and last bins varies according to chance. When a small number of points lands in those bins, their errorbars are, of course, larger (by a root-$`n`$ factor) than those of typical bins. Hence the linearly interpolated errors are larger by the same factor, and the statistic is thus far more tolerant in the no-overlap regions. The unusually large errorbars at the end points also explain why the absolute $`\chi ^2`$ minimum for the linear method is less than one. The PRH method prefers no overlap for a different reason, but with similar results. The PRH method, of course, checks that the variations of A and B independently match the structure function (Fig. 5) measured from the A and B variations. It tries to keep A and B independent rather than fighting to find the correct overlap. This propensity to find the gaps was recognized as a possible weakness of the PRH method by its authors in the original paper. While both of these methods have proven excellent for determining the time delay for well-sampled data on different timescales (Colley et al. 1999), both show weakness for intermittently sampled data, which is inevitable at a single observatory at mid latitudes. Because of this complication we conclude that our program of intensive monitoring over many nights separated by the time delay does not allow us to sharpen the time delay significantly, and we do not claim that our overall best-fit value of 417.4 days is an improvement over other recent determinations. This problem motivates the need for round-the-clock monitoring of the QSO.
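For concreteness, here is a minimal sketch of the linear-interpolation $`\chi ^2`$ estimator described above (the PRH approach replaces the interpolation with a structure-function prior); the implementation details are ours:

```python
import numpy as np

def chi2_delay(tA, A, eA, tB, B, eB, delay):
    """Reduced chi^2 for a trial delay: shift the B record back by `delay`,
    linearly interpolate the A values and errors onto the shifted B epochs,
    and compare after removing a free magnitude offset between the images.
    Only epochs inside the overlap contribute, which is what makes the
    statistic so tolerant in the no-overlap (half-day gap) regions."""
    tBs = tB - delay
    ok = (tBs >= tA.min()) & (tBs <= tA.max())
    if ok.sum() < 2:
        return np.inf
    A_i = np.interp(tBs[ok], tA, A)
    eA_i = np.interp(tBs[ok], tA, eA)   # interpolated errors, as in the text
    resid = B[ok] - A_i
    resid -= resid.mean()               # free A-B magnitude offset
    return np.sum(resid**2 / (eA_i**2 + eB[ok]**2)) / (ok.sum() - 1)
```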
## 9 Agreement of the Time Shifted Brightness Curves

Fig. 7 contains the December 1994 A brightness record (filled circles) and the shifted B brightness record (open circles). At bottom the image B photometry has been shifted by the best-fit time delay from the linear interpolation $`\chi ^2`$ method; at top the PRH fit is shown. We show both because of the qualitatively different appearance of the fits, and because we have no predisposition to favor either method particularly. For the reasons discussed in the previous section, we might be inclined to place more faith in the PRH method for producing the best overall fit, because it is less subject to the edge effects of the binning. In both panels of Fig. 7, there does seem to be a qualitative agreement between the behavior of A and B. In particular, there seems to be a coherent wavy behavior in both A and B. Some encouragement that the signal is real lies in the fact that, night to night, neither A nor B shows systematic behavior which is obviously spurious. Namely, neither A nor B is always increasing, decreasing or inflecting in the same way on each night. Such behavior would lead to suspicion that the waviness was an observational artifact. Without “filling in the gaps,” however, the actual validity of these fits is moot, and we are left with only one certainty: that the QSO often varies by about a percent within a given night. This variation means that with round-the-clock observations, highly precise measurement of the time delay would be possible, and any daily microlensing residing in the signal should become apparent. Looking at Fig. 7, it is hard to find any definitive microlensing signal, but most suggestive is the apparent gap at JD 2449705.8. While the rest of the curve (in the top panel) seems to obey a coherent pattern, this last interface looks slightly off. This is hardly irrefutable evidence for microlensing, but again, with round-the-clock monitoring, we would be able to be much more definitive if such an event arose. Furthermore, a null result would be of some interest: quasar microlensing searches for the missing mass have been producing exclusion diagrams for possible MACHO masses (Schmidt and Wambsganss 1998, Refsdal et al. 1999). The intermittency of microlensing and the clumping of caustics (Wambsganss, Witt, and Schneider 1992) are complications, however, and a detection such as the Dec 1994–March 1996 event (Schild and Gibson 1999) is challenging, not so much to observe as to confirm, because of the low amplitudes of the events.

## 10 Summary and Conclusions

Using improved photometric methods as described in Paper I, we have begun our program of re-reduction of archival Q0957 CCD images acquired over the last 20 years. We report herein results for 4 data runs when the quasar was continuously observed with a 1.2m telescope for typically 6 consecutive nights, including observing runs in Dec 1994 and Feb 1996 which are separated by 417 days, the currently favored time delay between images A and B (Kundić et al. 1997). For the new data we show how the contribution of the lens galaxy varies with the seeing. During good seeing the lens galaxy contribution to the A aperture is nearly constant at 2.5% and appears to maximize at about 18.5% in the B aperture. As seeing deteriorates beyond $`\text{FWHM}=2.5^{\prime \prime }`$, the A aperture contribution increases by 1% while the B aperture contribution decreases by 1%. For the same data we evaluate the contribution of A–B aperture cross talk and find that significant deterioration occurs in both apertures for seeing with FWHM greater than $`1.8^{\prime \prime }`$. The deviation from average due to this error source reaches 2% during $`\text{FWHM}=3^{\prime \prime }`$ seeing. Thus we find that for deteriorating seeing, galaxy contamination adds to aperture crosstalk in aperture A and compensates for half the cross talk in aperture B. These effects can very significantly affect aperture photometry for the system, but historically they have been only qualitatively understood. For the fully corrected data we present hourly binned brightness curves that show significantly detected variations on time scales of hours. The quasar can produce 2% brightness fluctuations on time scales of 24 hours, and 1% brightness fluctuations within individual nights of continuous observation. Comparison of our new photometry with the previous reductions in the master data set (Schild and Thomson 1995; http://cfa-www.harvard.edu/~rschild) yields excellent agreement. The new raw photometry agrees with the historical data within an rms of about 7 mmag. The new corrected photometry agrees within slightly better rms limits, after an offset of about +42 mmag is applied to the historical photometry. With two data runs separated by 417 days, the currently favored time delay, one can search for a more precise delay, and for any daily microlensing residual. Two different time delay estimation procedures favor values near $`n+1/2`$ days. These estimators, as would most, tend to find the gaps where there is no data overlap between the two runs.
The best microlensing candidate in these runs is a significant discrepancy on day 2449705.8 (Fig. 7), which unfortunately straddles a half-day gap. The tendency of time delay estimators to find the half-day gaps where either A or B could not be observed due to daylight, and the lack of conclusiveness available for microlensing due to such gaps, necessitate round-the-clock monitoring of Q0957+561.
# X-ray Absorption in Radio-Quiet QSOs

## 1. Introduction

X-ray absorption studies of active galaxies are proving to be one of the most powerful ways of probing material in the immediate vicinity of supermassive black holes. Rapid X-ray variability suggests that the nuclear X-ray source is the most compact emitter of continuum radiation, and it thus provides a point-like and luminous ‘flashlight’ right at the heart of the active galaxy. Because X-rays are highly penetrating, X-ray spectra can be used to probe column densities over the wide range $`10^{19}`$–$`10^{25}`$ cm$`^{-2}`$. X-ray absorption is produced by the innermost electrons of metals, and it provides a probe of matter in nearly all forms (i.e., neutral gas, ionized gas, molecular gas and dust). Here we shall briefly review several types of X-ray absorption seen in luminous, radio-quiet QSOs. We will discuss X-ray warm absorber QSOs, Broad Absorption Line QSOs, and soft X-ray weak QSOs. We will also discuss some future prospects for radio-quiet QSO X-ray absorption studies. Due to lack of space, we will not be able to cover the important red QSO and type 2 QSO debates or the exciting recent results on X-ray absorption in radio-loud QSOs.

## 2. X-ray Warm Absorbers in Radio-Quiet QSOs

Warm absorption by ionized nuclear gas is familiar from the lower-luminosity Seyfert 1 galaxies, where it has been intensively studied (e.g., Reynolds 1997; George et al. 1998). Warm absorbers imprint moderately strong edges (e.g., O vii and O viii) on the continuum, but this absorption is not so strong that it completely extinguishes the soft X-ray flux. Assuming photoionization equilibrium, the column density and ionization parameter of the ionized gas can be obtained via X-ray spectral fitting. To our knowledge, only five luminous radio-quiet QSOs have been shown to have X-ray warm absorbers: MR 2251–178 (e.g., Halpern 1984; Pan, Stewart & Pounds 1990; Reeves et al. 1997), IRAS 13349+2438 (Brandt, Fabian & Pounds 1996; Brandt et al. 1997; Siebert, Komossa & Brinkmann 1999; see Figure 1), PG 1114+445 (Laor et al. 1994; George et al. 1997), IRAS 17020+4544 (e.g., Leighly et al. 1997), and IRAS 12397+3333 (e.g., Grupe et al. 1998). It has been difficult to perform detailed studies of the warm absorbers in these QSOs due to limited photon statistics, but the basic physical properties of their warm absorbers appear similar to those seen in Seyfert 1s. Edges from O vii and O viii seem to be the strongest spectral features, and column densities of $`10^{21}`$–$`10^{23}`$ cm$`^{-2}`$ and ionization parameters of $`\xi \approx `$ 20–160 erg cm s$`^{-1}`$ are inferred. In three cases (IRAS 12397+3333, IRAS 13349+2438 and IRAS 17020+4544), the warm absorber probably contains dust which causes significant reddening of the optical continuum. Dust will not be rapidly sputtered at warm absorber temperatures (the gas temperature is $`\sim 10^5`$ K for a photoionized warm absorber), and it will not be sublimated if the warm absorber is located outside the Broad Line Region (BLR). Two of the QSOs with X-ray warm absorbers (as well as most of the Seyfert 1 galaxies; Crenshaw et al. 1999) show UV absorption lines (PG 1114+445: Mathur, Wilkes & Elvis 1998; MR 2251–178: Mathur et al., in preparation), and it has been argued that the X-ray and UV absorption arise in the same gas. While there is still debate over the extent to which the X-ray and UV absorbers can be unified, they are likely to have qualitatively similar dynamics: the UV absorbing gas is measured to be outflowing from the nucleus at speeds of several hundred km s$`^{-1}`$.
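As a rough guide to how such edges are quantified, the bound-free optical depth of a single edge scales roughly as $`(E/E_{\mathrm{th}})^3`$ above threshold. A minimal sketch follows; the threshold cross sections are approximate literature values, quoted here only for illustration.

```python
import numpy as np

# Approximate K-edge thresholds (keV) and threshold cross sections (cm^2);
# rough values for illustration only, not fit results from the text.
EDGES = {"O VII": (0.739, 2.4e-19), "O VIII": (0.871, 1.0e-19)}

def edge_tau(E_keV, N_ion, ion):
    """Bound-free optical depth, tau = N_ion * sigma_th * (E/E_th)**-3."""
    E_th, sigma_th = EDGES[ion]
    E = np.asarray(E_keV, dtype=float)
    return np.where(E >= E_th, N_ion * sigma_th * (E / E_th) ** -3, 0.0)

# An O VII ionic column of 1e18 cm^-2 gives tau ~ 0.24 at threshold,
# i.e. a "moderately strong" edge of the kind described above.
print(edge_tau(0.739, 1e18, "O VII"))
```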
The incidence of warm absorbers in luminous, radio-quiet QSOs is difficult to address at present (e.g., George et al. 1999). Warm absorbers are detected in $`\gtrsim 50`$% of Seyfert 1s, while a much smaller percentage of radio-quiet QSOs have detected warm absorbers. However, the X-ray spectra of most radio-quiet QSOs have significantly lower signal-to-noise than those of Seyfert 1s, and cosmological redshifting also moves the main warm absorber edges down to regions of low effective area and often poor calibration. Seyfert-like warm absorbers could be lurking undetected in the noisy X-ray spectra of many radio-quiet QSOs, and the only clear conclusion that can be drawn at present is that better data are needed (although Laor et al. 1997 suggest that warm absorbers are relatively rare in radio-quiet QSOs based upon ROSAT PSPC data).

## 3. Broad Absorption Line (BAL) QSOs

Luminous radio-quiet QSOs show another type of absorption that is not familiar from Seyfert 1s: UV Broad Absorption Lines (BALs) that are created in an outflowing ‘wind’ with velocities up to $`0.1c`$. BALs have been intensively studied in the UV for many years, and it is likely that most QSOs create BAL outflows (e.g., Weymann 1997). The BAL region is thought to be a major part of the nuclear environment, with a covering factor of 10–50% (e.g., Goodrich 1997; Krolik & Voit 1998), and the BAL phenomenon may be fundamentally connected to the QSO ‘radio volume control’ (e.g., Weymann 1997). In addition, BAL outflows may clear gas from QSO host galaxies and thereby affect star formation and QSO fueling over long timescales (e.g., Fabian 1999). Ideally, one would like to use X-rays to study the absorption properties, nuclear geometries, and continuum shapes of BAL QSOs. X-ray absorption studies would constrain the column density, ionization state, abundances, and covering factor of the BAL gas, and the nuclear geometry could be constrained using the iron K$`\alpha `$ line and X-ray variability. Regarding the continuum shape, it is important to establish that, underneath their absorption, BAL QSOs emit like normal radio-quiet QSOs. ROSAT observations found BAL QSOs to be very weak in the soft X-ray band, with few X-ray detections (e.g., Kopko, Turnshek & Espey 1994; Green & Mathur 1996, hereafter GM96). This was an important and surprising result since, if BAL QSOs indeed have normal underlying X-ray continua, large neutral column densities of $`\gtrsim 4\times 10^{22}`$ cm$`^{-2}`$ are required to extinguish the X-ray emission. Ionization of the absorbing gas led to even larger required column densities. The ROSAT column densities were at least an order of magnitude larger than those inferred from UV data. Subsequently, it was realized that the UV absorption lines are severely saturated (e.g., Hamann 1998), leading to much larger inferred UV column densities. These column densities are then consistent with the X-ray data. Hard X-rays are much more penetrating than soft X-rays, and if BAL QSOs are absorbed by column densities of a few times $`10^{22}`$ cm$`^{-2}`$ they should be much brighter above 2 keV than below this energy.
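The penetration argument is easy to quantify with a crude single-power-law photoelectric cross section, $`\sigma (E)\sim 2.4\times 10^{22}(E/1\mathrm{keV})^3`$ cm$`^2`$ per hydrogen atom (an order-of-magnitude scaling only; real modeling would use, e.g., the Morrison & McCammon cross sections). A sketch:

```python
import numpy as np

def transmission(E_keV, NH):
    """Photoelectric transmission exp(-NH * sigma(E)) with the crude scaling
    sigma ~ 2.4e-22 cm^2 * (E / 1 keV)**-3 (illustration only)."""
    sigma = 2.4e-22 * np.asarray(E_keV, dtype=float) ** -3
    return np.exp(-NH * sigma)

NH = 5e23  # cm^-2, the kind of neutral column implied by the non-detections below
for E in (0.5, 2.0, 8.0):
    print(E, transmission(E, NH))
# ~0 at 0.5 keV and ~3e-7 at 2 keV, but ~0.8 at 8 keV: hence the value of
# ASCA/BeppoSAX access to the 2-10 keV band.
```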
Using ASCA, Mathur, Elvis & Singh (1995) detected the famous BAL QSO PHL 5200 ($`z=1.98`$) and claimed to measure a large column density for this object via spectral fitting. Our independent analysis of these data confirms the detection, but the claim for a large column density in this object is not reliable at present due to extremely limited photon statistics (Gallagher et al. 1999). The nearby BAL QSO Mrk 231 ($`z=0.042`$) has also been studied by ASCA (Iwasawa 1999; Turner 1999) and appears to show absorption with a column density of $`\gtrsim 2\times 10^{22}`$ cm$`^{-2}`$, although precise constraints are difficult due to the complex X-ray spectrum of this object (e.g., there appears to be a significant starburst contribution in X-rays). Recently, we have been performing an exploratory BAL QSO survey using ASCA and BeppoSAX (Gallagher et al. 1999; Gallagher et al., in preparation). We chose these satellites because they provide access to penetrating 2–10 keV X-rays. We performed moderate-length ($`\sim `$20–30 ks) exploratory observations to learn about the basic X-ray properties of as many BAL QSOs as possible without being too heavily invested in the uncertain results from any one object. Our goals were to define the 2–10 keV properties (e.g., fluxes) of the class, to discover good objects for follow-up studies with Chandra and XMM, and to set absorption, geometry and continuum constraints (to the greatest extent possible with exploratory observations). We proposed many of the optically brightest BAL QSOs known, since the optical and X-ray fluxes are generally correlated for QSOs. Most of our objects should have been easily detected if they have normal QSO X-ray continua absorbed by column densities of several times $`10^{22}`$ cm$`^{-2}`$. We focused on bona-fide BAL QSOs (no mini-BALs; see §3.1 of Weymann et al. 1991), and we also tried to sample a few objects with extreme properties (e.g., optical continuum polarization) to look for correlations. We have performed new ASCA and BeppoSAX observations for 8 BAL QSOs in total, and we have also analyzed the archival data for 4 BAL QSOs. Our objects have $`z=`$ 0.042–3.505 and $`B=`$ 14.5–18.5; PHL 5200 ($`B=18.5`$) is the optically faintest member of our sample. We detect 5 of our 12 BAL QSOs, with our most distant and most luminous detected BAL QSO being CSO 755 ($`z=2.88`$, $`M_\mathrm{V}=-27.4`$; Brandt et al. 1999). Our detection fraction is higher than in soft X-rays, consistent with the idea that heavy absorption is present in these objects. However, we find that BAL QSOs are still generally faint 2–10 keV sources, and several of them are strikingly faint. For example, we did not detect the optically bright BAL QSOs PG 0043+039 ($`B=15.9`$, $`z=0.384`$, 24 ks ASCA exposure) and PG 1700+518 ($`B=15.4`$, $`z=0.292`$, 21 ks ASCA exposure). If these objects have normal underlying X-ray continua, then large neutral column densities of $`\gtrsim 5\times 10^{23}`$ cm$`^{-2}`$ are needed to explain their X-ray non-detections (see Figure 2). Because of our access to more penetrating X-rays, our column density lower limits for some objects are about an order of magnitude larger than those set by ROSAT. Ionization of the absorbing gas raises our required column densities to the point where they are almost ‘Compton-thick’ ($`N_\mathrm{H}\gtrsim 1.5\times 10^{24}`$ cm$`^{-2}`$; compare with Murray et al. 1995). These large column densities increase the inferred mass outflow rate and kinetic luminosity.
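One common way to see why is to treat the absorber as a shell of column density $`N_\mathrm{H}`$ at radius $`r`$, covering a fraction $`f`$ of the sky as seen from the nucleus and outflowing at speed $`v`$; this parametrization is ours, given for orientation rather than as the exact estimate used in the papers cited:

$$\dot{M}_{\mathrm{outflow}}\sim 4\pi f\,r\,N_\mathrm{H}\,m_p\,v,\qquad L_{\mathrm{kinetic}}=\frac{1}{2}\dot{M}_{\mathrm{outflow}}v^2.$$

Both quantities scale linearly with $`N_\mathrm{H}`$, so an order-of-magnitude increase in the inferred column propagates directly into the energetics.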
If the X-ray absorption arises in gas at $`\gtrsim 3\times 10^{16}`$ cm that is outflowing with a significant fraction of the terminal velocity measured from the UV BALs, one derives extremely large mass outflow rates ($`\dot{M}_{\mathrm{outflow}}\gtrsim 5M_{\odot }`$ yr$`^{-1}`$) and kinetic luminosities ($`L_{\mathrm{kinetic}}\gtrsim L_{\mathrm{ionizing}}`$). While such powerful winds are perhaps not impossible, the mass outflow rate and kinetic luminosity can be reduced if much of the X-ray absorption occurs at velocities significantly smaller than the BAL terminal velocity. Note that the X-ray and UV absorbers in BAL QSOs have not yet been shown to be identical, and the X-ray and UV light paths may differ. Our exploratory observations demonstrate that it is risky to attempt long X-ray spectroscopic observations of BAL QSOs that do not have established X-ray fluxes, and we find that optical flux is not a good predictor of X-ray flux for BAL QSOs. We fail to detect some of our optically brightest objects, while some of our optically faintest are clearly detected. We have empirically searched for other predictors of X-ray brightness, and while the data are limited there is a tentative connection between high optical continuum polarization and X-ray brightness (see Brandt et al. 1999 for details). Such a connection could be physically understood if the direct lines of sight into the X-ray nuclei of BAL QSOs are usually blocked by Compton-thick matter, and we can only see X-rays when there is substantial electron scattering in the nuclear environment by a ‘mirror’ of moderate Thomson depth. Further studies of uniform, well-defined BAL QSO samples are needed to avoid biases and to check this potential connection better. It can also be checked with detailed X-ray studies of highly polarized BAL QSOs. Iron K$`\alpha `$ lines with large equivalent widths could be formed if most of the X-ray flux is scattered, and one would also not expect rapid ($`\lesssim 1`$ day) X-ray variability.

## 4. Soft X-ray Weak (SXW) QSOs

BAL QSOs are generally weak in the soft X-ray band, presumably due to heavy X-ray absorption. One can also address the converse questions: Do all Soft X-ray Weak QSOs (SXW QSOs) suffer from absorption? Do all SXW QSOs have BALs? Alternative possible causes of soft X-ray weakness include unusual intrinsic spectral energy distributions (SEDs) and extreme X-ray or optical variability (e.g., changes in $`\alpha _{\mathrm{ox}}`$ over time). The presence of QSOs with relatively weak soft X-ray emission was recognized at least as early as the mid-1980s, with some observed to be $`\gtrsim 20`$ times weaker than expected given their optical fluxes (e.g., Elvis & Fabbiano 1984; Avni & Tananbaum 1986; Elvis 1992). For example, Avni & Tananbaum (1986) discussed a ‘skew tail’ towards soft X-ray weak objects in the $`\alpha _{\mathrm{ox}}`$ distribution of the PG QSOs. Many new SXW QSOs were found in ROSAT samples (e.g., Laor et al. 1997; Yuan et al. 1998), and ROSAT was also able to place significantly tighter constraints upon $`\alpha _{\mathrm{ox}}`$. This sparked further detailed studies of these objects (e.g., Wang et al. 1999; Wills, Brandt & Laor 1999), and we have recently completed the first systematic study of a well-defined SXW QSO sample (Brandt, Laor & Wills 1999).
Our goals for this study were (1) to determine the origin of soft X-ray weakness in general, (2) to discover relations between SXW QSOs, BAL QSOs, and X-ray warm absorber QSOs, and (3) to search for correlations between soft X-ray weakness and other interesting observables. We selected all SXW QSOs from the Boroson & Green (1992, hereafter BG92) sample of 87 $`z<0.5`$ PG QSOs. The BG92 sample is well defined and representative of the optically selected QSO population, and there is already a large amount of high-quality and uniform data available for it. We computed our own $`\alpha _{\mathrm{ox}}`$ values for the BG92 objects using data mainly from ROSAT but also from ASCA and Einstein as needed, and our resulting $`\alpha _{\mathrm{ox}}`$ values were substantially more complete and constraining than those previously available (especially for the SXW QSOs). We used $`\alpha _{\mathrm{ox}}\le -2`$ as our criterion for soft X-ray weakness (note that in this section we take $`\alpha _{\mathrm{ox}}`$ to be a negative quantity). Thus, given their optical fluxes, our SXW QSOs were $`\gtrsim 25`$ times weaker than ‘usual’ in soft X-rays. We found 10 SXW QSOs with $`\alpha _{\mathrm{ox}}\le -2`$, and thus SXW QSOs appear to comprise $`\sim 11`$% of the optically selected QSO population. Nine of our SXW QSOs are radio-quiet, and one is radio-loud. We compared the continuum and line properties of our 10 SXW QSOs to those of the other 77 BG92 non-SXW QSOs using nonparametric tests. The properties compared included those listed in Tables 1 and 2 of BG92 as well as the optical continuum polarization, the optical continuum slope, and the radio structure. We also compared the C iv $`\lambda 1549`$ absorption-line properties for the 55 QSOs from BG92 that have high-quality UV coverage in this spectral region. All C iv measurements were made by B. J. Wills, with particular effort toward ensuring consistency and uniformity. We found that the SXW QSOs and non-SXW QSOs have consistent distributions of $`M_\mathrm{V}`$, $`z`$, radio loudness ($`R`$), optical continuum slope, optical continuum polarization, and H$`\beta `$ FWHM. In addition, they have consistent EW distributions of H$`\beta `$, He ii and Fe ii. SXW QSOs were found to have significantly lower \[O iii\] luminosities than those of non-SXW QSOs; low \[O iii\] luminosities have similarly been noted for low-ionization BAL QSOs (e.g., Turnshek et al. 1997). Since \[O iii\] emission is likely to be a reasonably isotropic property for radio-quiet PG QSOs (e.g., Kuraszkiewicz et al. 1999), this result is significant, as it suggests there may be an intrinsic difference between SXW QSOs and non-SXW QSOs (see Brandt, Laor & Wills 1999). In addition, SXW QSOs appear to have ‘peaky’ H$`\beta `$ line profiles and large H$`\beta `$ line shifts (either to the blue or the red relative to the rest frame defined by \[O iii\]). The most striking difference between the SXW QSOs and non-SXW QSOs is their UV absorption. SXW QSOs show greatly enhanced C iv absorption (see Figure 3). We find blueshifted C iv absorption with EW $`>4.5`$ Å in 8 of our 10 SXW QSOs, while only 1 of 45 non-SXW QSOs has EW $`>4.5`$ Å. The two SXW QSOs without clear UV absorption, 1011–040 and 2214+139, have UV spectra of only limited quality. Given that UV and X-ray absorption have a high probability of joint occurrence in Seyfert galaxies and QSOs, we consider Figure 3 to be evidence that absorption is the primary cause of soft X-ray weakness in QSOs.
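For reference, the factor of $`\gtrsim 25`$ quoted above follows directly from the definition of $`\alpha _{\mathrm{ox}}`$ as the slope of a hypothetical power law between 2500 Å and 2 keV; a minimal sketch, where the adopted mean $`\alpha _{\mathrm{ox}}\approx -1.46`$ is a representative value for optically selected QSOs (our assumption here, not a number from the text):

```python
import numpy as np

NU_2KEV = 2.0 * 2.418e17          # Hz (1 keV corresponds to 2.418e17 Hz)
NU_2500 = 2.998e18 / 2500.0       # Hz for 2500 Angstrom
LOG_NU_RATIO = np.log10(NU_2KEV / NU_2500)   # ~2.605 decades

def xray_deficit(alpha_ox, alpha_ox_mean=-1.46):
    """Factor by which the 2 keV flux falls below the value expected from
    the optical flux, for a given alpha_ox (negative-slope convention)."""
    return 10.0 ** ((alpha_ox_mean - alpha_ox) * LOG_NU_RATIO)

print(xray_deficit(-2.0))   # ~25: the SXW threshold used in the text
```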
Only one of our SXW QSOs, 1411+442, has a broad-band X-ray spectrum at present, and it indeed shows evidence for strong X-ray absorption, with $`N_\mathrm{H}\gtrsim 10^{23}`$ cm$`^{-2}`$ (Brinkmann et al. 1999). We can argue against unusual SEDs as the primary cause of soft X-ray weakness by noting that our SXW QSOs have normal H$`\beta `$, He ii and Fe ii EWs (see Korista, Ferland & Baldwin 1997). We also do not find general evidence for strong $`\alpha _{\mathrm{ox}}`$ variability when we compare our $`\alpha _{\mathrm{ox}}`$ values with the limited historical data available. The fact that we find no evidence for QSOs with intrinsically weak soft X-ray emission underscores the universality of QSO X-ray production. The general correlation between $`\alpha _{\mathrm{ox}}`$ and C iv absorption EW shown in Figure 3 provides a useful overall view of QSO absorption. Unabsorbed QSOs and BAL QSOs lie at opposite extremes of the correlation, while X-ray warm absorber QSOs and moderate SXW QSOs lie at intermediate positions. We find 4–5 bona-fide BAL QSOs in the BG92 sample; 3 were already known (0043+039, 1700+518 and 2112+059) and 2 are new (1001+054 and probably 1004+130; see Wills, Brandt & Laor 1999 for detailed discussion of 1004+130). If all BAL QSOs are SXW QSOs, then we should have found all the BAL QSOs in the BG92 sample. The incidence of BAL QSOs in the BG92 sample appears to be statistically consistent with the $`\sim 11`$% observed for the LBQS (e.g., Weymann 1997), although this issue could be examined more reliably with complete UV coverage of the BG92 QSOs. The UV results for our SXW QSOs imply that selection by soft X-ray weakness is an effective ($`\sim 80`$% successful) way to find low-redshift QSOs with strong UV absorption. This is important from a practical point of view because, for bright QSOs, the optical and X-ray flux densities needed to establish soft X-ray weakness can often be obtained from publicly available data. This method has already been exploited in several cases for individual objects (e.g., Fiore et al. 1993; Mathur et al. 1994), and it could be profitably applied to larger QSO samples.

## 5. Some Future Prospects

With the next generation of X-ray observatories, it should be possible to find many more X-ray warm absorbers in radio-quiet QSOs or to demonstrate that few are present. This will allow study of their basic physical properties as well as a reliable determination of their incidence. Detailed X-ray spectroscopy and modeling should be possible for a few of the X-ray brightest sources, although this will require a significant investment of observation time. For BAL QSOs, further exploratory observations are needed to look for correlations with optical continuum polarization and other properties. These would be most effective if performed on uniform and well-defined samples. Moderate-quality X-ray spectroscopy should be possible for a few of the X-ray brightest BAL QSOs to study their absorption properties, nuclear geometries, and continuum shapes. For BAL QSOs with enough X-ray flux, the widths and amplitudes of X-ray bound-free edges can constrain the dynamics and metallicity of the absorber. It is also important to study the radio-loud BAL QSOs (e.g., Becker et al. 1999) in X-rays to determine if they follow the same patterns as radio-quiet objects, and deep X-ray surveys over moderate areas may be able to constrain the BAL covering factor (see Krolik & Voit 1998).
Studies of SXW QSOs more generally would benefit from a focused X-ray and UV study of a complete sample. The PG SXW QSOs are probably a good starting point, but a larger complete sample would be even better. Intense studies of particularly interesting objects (e.g., 1004+130) are important as well. Finally, it is crucial to test models that propose to unify the different types of X-ray absorption into a coherent physical picture. Such testing should provide an exciting challenge for even the next generation of X-ray observatories.

## ACKNOWLEDGEMENTS

We acknowledge the support of NASA LTSA grant NAG5-8107 and the Alfred P. Sloan Foundation (WNB), NASA grant NAG5-4826 and the Pennsylvania Space Grant Consortium (SCG), the fund for the promotion of research at the Technion (AL), and NASA LTSA grant NAG5-3431 (BJW). We thank C.S. Reynolds for a careful reading and J. Chiang for a helpful discussion.

## REFERENCES

Avni, Y., Tananbaum, H. 1986, ApJ, 305, 83
Becker, R.H., et al. 1999, ApJ, in preparation
Boroson, T.A., Green, R.F. 1992, ApJS, 80, 109 (BG92)
Brandt, W.N., Fabian, A.C., Pounds, K.A. 1996, MNRAS, 278, 326
Brandt, W.N., Mathur, S., Reynolds, C.S., Elvis, M. 1997, MNRAS, 292, 407
Brandt, W.N., Laor, A., Wills, B.J. 1999, ApJ, in press (astro-ph/9908016)
Brandt, W.N., Comastri, A., Gallagher, S.C., Sambruna, R.M., Boller, Th., Laor, A. 1999, ApJ, in press (astro-ph/9909284)
Brinkmann, W., Wang, T., Matsuoka, M., Yuan, W. 1999, A&A, 345, 43
Crenshaw, D.M., Kraemer, S.B., Boggess, A., Maran, S.P., Mushotzky, R.F., Wu, C. 1999, ApJ, 516, 750
Elvis, M., Fabbiano, G. 1984, ApJ, 280, 91
Elvis, M. 1992, in Frontiers of X-ray Astronomy, ed. Tanaka, Y., Koyama, K. (Universal Acad. Press, Tokyo), p. 567
Fabian, A.C. 1999, MNRAS, in press (astro-ph/9908064)
Fiore, F., Elvis, M., Mathur, S., Wilkes, B.J., McDowell, J.C. 1993, ApJ, 415, 129
Gallagher, S.C., Brandt, W.N., Sambruna, R.M., Mathur, S., Yamasaki, N. 1999, ApJ, 519, 549
George, I.M., et al. 1997, ApJ, 491, 508
George, I.M., Turner, T.J., Netzer, H., Nandra, K., Mushotzky, R.F., Yaqoob, T. 1998, ApJS, 114, 73
George, I.M., et al. 1999, ApJ, in press (astro-ph/9910218)
Goodrich, R.W. 1997, ApJ, 474, 606
Green, P.J., Mathur, S. 1996, ApJ, 462, 637 (GM96)
Grupe, D., Wills, B.J., Wills, D., Beuermann, K. 1998, A&A, 333, 827
Halpern, J.P. 1984, ApJ, 281, 90
Hamann, F. 1998, ApJ, 500, 798
Iwasawa, K. 1999, MNRAS, 302, 96
Kopko, M., Turnshek, D.A., Espey, B.R. 1994, in Multi-Wavelength Continuum Emission of AGN, ed. Courvoisier, T., Blecha, A. (Kluwer, Dordrecht), p. 450
Korista, K., Ferland, G., Baldwin, J. 1997, ApJ, 487, 555
Krolik, J.H., Voit, G.M. 1998, ApJ, 497, L5
Kuraszkiewicz, J., Wilkes, B.J., Brandt, W.N., Vestergaard, M. 1999, ApJ, submitted
Laor, A., Fiore, F., Elvis, M., Wilkes, B.J., McDowell, J.C. 1994, ApJ, 435, 611
Laor, A., Fiore, F., Elvis, M., Wilkes, B.J., McDowell, J.C. 1997, ApJ, 477, 93
Leighly, K.M., Kay, L.E., Wills, B.J., Wills, D., Grupe, D. 1997, ApJ, 489, L25
Mathur, S., Wilkes, B.J., Elvis, M., Fiore, F. 1994, ApJ, 434, 493
Mathur, S., Elvis, M., Singh, K.P. 1995, ApJ, 455, L9
Mathur, S., Wilkes, B.J., Elvis, M. 1998, ApJ, 503, L23
Murray, N., Chiang, J., Grossman, S.A., Voit, G.M. 1995, ApJ, 451, 498
Pan, H.C., Stewart, G.C., Pounds, K.A. 1990, MNRAS, 242, 177
Reeves, J.N., Turner, M.J.L., Ohashi, T., Kii, T. 1997, MNRAS, 292, 468
Reynolds, C.S. 1997, MNRAS, 286, 513
Siebert, J., Komossa, S., Brinkmann, W. 1999, A&A, in press (astro-ph/9909323)
Turner, T.J. 1999, ApJ, 511, 142
Turnshek, D.A., Monier, E.M., Sirola, C.J., Espey, B.R. 1997, ApJ, 476, 40
Wang, T.G., Brinkmann, W., Wamsteker, W., Yuan, W., Wang, J.X. 1999, MNRAS, 307, 821
Weymann, R.J., Morris, S.L., Foltz, C.B., Hewett, P.C. 1991, ApJ, 373, 23
Weymann, R.J. 1997, in Mass Ejection from AGN, ed. Arav, N., Shlosman, I., Weymann, R.J. (ASP Press: San Francisco), p. 3
Wills, B.J., Brandt, W.N., Laor, A. 1999, ApJ, 520, L91
Yuan, W., Brinkmann, W., Siebert, J., Voges, W. 1998, A&A, 330, 108
# $`\pi ^0\pi ^0`$ Scattering Amplitudes and Phase Shifts Obtained by the $`\pi ^{-}p`$ Charge Exchange Process

## Abstract

The results of the analysis of the $`\pi ^0\pi ^0`$ scattering amplitudes obtained with $`\pi ^{-}p`$ charge exchange reaction, $`\pi ^{-}p\to \pi ^0\pi ^0n`$, data at 9 GeV/c are presented. The $`\pi ^0\pi ^0`$ scattering amplitudes show clear $`f_0(1370)`$ and $`f_2(1270)`$ signals in the S and D waves, respectively. The $`\pi ^0\pi ^0`$ scattering phase shifts have been obtained below the $`K\overline{K}`$ threshold and analyzed by the Interfering Amplitude method with the introduction of negative background phases. The results show an S wave resonance, $`\sigma `$. Its Breit-Wigner parameters are in good agreement with those of our previous analysis of the $`\pi ^+\pi ^{-}`$ phase shift data.

The absence of odd partial waves in the $`\pi ^0\pi ^0`$ scattering amplitude, in contrast to $`\pi ^+\pi ^{-}`$, is an advantage. Up to now, however, experimental difficulties have limited the quality of $`\pi ^0\pi ^0`$ data for the analysis of $`\pi \pi `$ phase shifts. Only the high statistics $`\pi ^0\pi ^0`$ data of Cason et al. have been used for analysis so far, but they behave differently from the $`\pi ^+\pi ^{-}`$ data below the $`K\overline{K}`$ threshold and cannot help to resolve the ambiguity of the solutions in the analysis of $`\pi ^+\pi ^{-}`$ data. We have analyzed the $`\pi ^0\pi ^0`$ system produced in the $`\pi ^{-}p`$ charge exchange process $`\pi ^{-}p\to \pi ^0\pi ^0n`$ at 9 GeV/c studied by the E135 experiment with the Benkei spectrometer at the KEK 12 GeV PS. We have obtained the $`\pi ^0\pi ^0`$ scattering amplitudes and also the $`\pi ^0\pi ^0`$ scattering phase shifts below the $`K\overline{K}`$ threshold. Fig. 1 shows the acceptance corrected $`\pi ^0\pi ^0`$ mass distribution reconstructed from the $`4\gamma `$ final state. The off-mass-shell scattering amplitudes $`T_{\pi \pi }(m_{\pi \pi }^2,\mathrm{cos}\theta ,t)`$ are extrapolated to the on-mass-shell scattering amplitudes at the pion pole, $`T_{\pi \pi }(m_{\pi \pi }^2,\mathrm{cos}\theta ,m_\pi ^2)`$, as the process proceeds through one pion exchange. A linear extrapolation is adopted. The on-mass-shell scattering amplitudes can be described, considering the S and D waves, as follows: $`T_{\pi \pi }(m_{\pi \pi }^2,\mathrm{cos}\theta ,m_\pi ^2)=A_S+A_D\sqrt{5}(3\mathrm{cos}^2\theta -1)/2`$, where $`A_S`$ and $`A_D`$ are the S and D wave scattering amplitudes, respectively. The $`\mathrm{cos}\theta `$ and $`t`$ distributions are shown in Fig. 2 a) and b), respectively, for each 40 MeV wide mass bin between 0.36 and 1.64 GeV of $`\pi ^0\pi ^0`$ mass. The partial waves obtained are shown in Fig. 3 a) and b) for the S and D waves, respectively. Breit-Wigner parameters are obtained above 1 GeV for the S and D waves, as follows: $`M=1278\pm 5`$ MeV and $`\mathrm{\Gamma }=197\pm 8`$ MeV for $`f_0(1370)`$, and $`M=1286\pm 57`$ MeV and $`\mathrm{\Gamma }=161\pm 14`$ MeV for $`f_2(1270)`$. Solid lines in Fig. 3 c) and d) show the results of the fits. A fourth power polynomial background is used for the S wave. The S wave $`\pi ^0\pi ^0`$ scattering amplitude can be written $`|A_S|^2\propto \mathrm{sin}^2(\delta _S^0-\delta _S^2)`$ below the $`K\overline{K}`$ threshold, where $`\delta _S^0`$ and $`\delta _S^2`$ are the S wave scattering phase shifts of I = 0 and I = 2, respectively. We use a hard core type for $`\delta _S^2`$, as $`\delta _S^2=-r_c|\mathbf{q}_1|=-r_c\sqrt{m_{\pi \pi }^2/4-m_\pi ^2}`$, where $`r_c`$ is the hard core radius.
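A minimal sketch of this inversion, i.e. recovering $`\delta _S^0`$ from the normalized S wave intensity given the hard-core $`\delta _S^2`$, follows; the branch handling in the actual analysis is more careful than shown here, and the default radius is only a representative value of the order quoted below.

```python
import numpy as np

M_PI = 0.13957  # GeV, charged-pion mass used in the momentum formula

def q_pi(m):
    """Pion momentum in the pi-pi c.m. frame, |q| = sqrt(m^2/4 - m_pi^2) (GeV)."""
    return np.sqrt(m**2 / 4.0 - M_PI**2)

def delta_S2(m, r_c=2.76):
    """Hard-core I=2 background phase (radians), delta = -r_c * |q|."""
    return -r_c * q_pi(m)

def delta_S0(m, A2_norm, second_branch=False):
    """Invert |A_S|^2 ~ sin^2(delta_S^0 - delta_S^2) for delta_S^0, with the
    amplitude squared normalized to its unitarity maximum.  The inversion is
    two-valued below the K-Kbar threshold; `second_branch` selects
    pi - arcsin(...) once the phase difference has passed 90 degrees."""
    d = np.arcsin(np.clip(np.sqrt(A2_norm), 0.0, 1.0))
    if second_branch:
        d = np.pi - d
    return d + delta_S2(m)
```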
The parameter $`r_c`$ has so far been obtained from $`\pi ^+\pi ^{-}`$ data. Fig. 4 a) shows the normalized S-wave amplitude squared below 1 GeV. The S-wave $`\pi ^0\pi ^0`$ phase shifts obtained below the $`K\overline{K}`$ threshold are shown in Fig. 4 b) by solid squares. The results are consistent with the $`\pi ^+\pi ^{-}`$ phase shift data below 650 MeV, though they appear somewhat higher than those above 650 MeV. The results are consistent with those of the down-flat solution obtained in the reanalysis performed recently by Kaminski et al. on the CERN–Cracow–Munich polarization data. The $`\pi ^0\pi ^0`$ phase shift data are analyzed by the Interfering Amplitude (IA) method. A hard core is used for the negative background. $`f_0(980)`$ and $`\sigma `$ are the contributing resonant states. The fit is shown by the solid line in Fig. 4 c). The fitted Breit-Wigner parameters for the lower mass resonance, $`\sigma `$, are as follows: $`M_\sigma =588\pm 12`$ MeV, $`\mathrm{\Gamma }_\sigma =281\pm 25`$ MeV and $`r_c=2.76\pm 0.15`$ GeV$`^{-1}`$. The $`\chi ^2/n_{d.o.f}`$ value is 20.4/12. These values are in good agreement with those which we obtained in our reanalysis of the $`\pi ^+\pi ^{-}`$ phase shift data. We also checked the case with no negative background (without the hard core, $`r_c=0`$). The $`\chi ^2/n_{d.o.f}`$ becomes worse: 85.0/13. The Breit-Wigner parameters deviate from those with the hard core as follows: $`M_{\mathrm{"}\sigma \mathrm{"}}=890\pm 16`$ MeV and $`\mathrm{\Gamma }_{\mathrm{"}\sigma \mathrm{"}}=618\pm 51`$ MeV. The result is shown by the dotted line in Fig. 4 c). The difference of the I = 0 and I = 2 S wave phase shifts, $`\delta _S^0-\delta _S^2`$, at the neutral kaon mass is related to the parameters of CP violation in neutral K decay. We obtain a value, $`\delta _S^0-\delta _S^2=42.5\pm 3^{\circ }`$, in agreement with the previous result, $`40.6\pm 3^{\circ }`$.
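For orientation, in the IA method the phases of the resonant Breit-Wigner amplitudes and the background simply add. Below is a minimal sketch using the fitted parameters quoted above; the $`f_0(980)`$ width is an assumed placeholder rather than a fitted value, and the published fit may use a different running-width convention.

```python
import numpy as np

M_PI = 0.13957  # GeV

def q_pi(m):
    return np.sqrt(m**2 / 4.0 - M_PI**2)

def bw_phase(m, M, Gamma):
    """S-wave Breit-Wigner phase with a q-proportional running width;
    rises through 90 degrees at m = M (all masses in GeV)."""
    G_m = Gamma * q_pi(m) / q_pi(M)
    return np.arctan2(M * G_m, M**2 - m**2)

def delta_S0_total(m, r_c=2.76):
    """IA-method I=0 S-wave phase: sigma + f0(980) + hard-core background."""
    return (bw_phase(m, 0.588, 0.281)     # sigma, from the fit above
            + bw_phase(m, 0.980, 0.070)   # f0(980); width assumed here
            - r_c * q_pi(m))              # negative hard-core background
```

Evaluated near the neutral kaon mass and combined with the separately determined I = 2 phase, a parametrization of this form yields a $`\delta _S^0-\delta _S^2`$ difference of roughly the size quoted above.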