no-problem/9903/cond-mat9903049.html
# Triangular anisotropies in Driven Diffusive Systems: reconciliation of Up and Down

## Lattice Gas Dynamics

The basic discrete DDS model is an extension of an Ising model with conserved Metropolis dynamics: particles hop with probability

$$W=\mathrm{min}\left[1,\mathrm{exp}\left(-\beta \mathrm{\Delta }H\right)\right],$$ (1)

where the energy difference $`\mathrm{\Delta }H=\mathrm{\Delta }H_{\mathrm{Ising}}+\mathrm{\Delta }H_{\mathrm{Field}}`$ includes an applied field. Restricting ourselves to a two-dimensional square lattice, $`\mathrm{\Delta }H_{\mathrm{Ising}}`$ is the standard nearest-neighbor Ising Hamiltonian in which a particle interacts with its four nearest sites. \[We absorb $`J/k_B`$ into the temperature, which we then measure with respect to the zero-field Ising $`T_c`$.\] $`\mathrm{\Delta }H_{\mathrm{Field}}`$ is $`+E`$ if the particle moves one lattice unit opposite the field direction and $`-E`$ if the particle moves in the field direction, where $`E`$ is the field strength. The dynamics are conserved, so particles hop rather than being created or destroyed. Restricting the hops to nearest-neighbor sites, Alexander et al. found that the domains formed upward-pointing triangles, as shown in Fig. 1. Natural generalizations of this well-studied DDS model include looking at different lattices \[triangular, hexagonal, and Kagomé in $`2d`$\], rotating the field away from a lattice direction, allowing hops and/or interactions with further-neighbor particles, and allowing anisotropic hop rates and interactions. Universal results on the basic model should be robust to these sorts of microscopic differences, as differences will certainly be entailed by experimental realizations. \[These changes will affect variously the anisotropic particle mobility and interfacial surface tension of any coarse-grained representation.\] In this paper we allow next-nearest-neighbor hops, where we treat all eight immediate neighbors with equal weight. We label these “nnn” dynamics, in contrast to “nn” dynamics where hops are restricted to the four nearest neighbors.

## Coarse-grained Dynamics

The simplest coarse-grained dynamics is the time-dependent Ginzburg-Landau (TDGL) model with a field. The free energy is given as

$`F[\varphi ]`$ $`=`$ $`{\displaystyle \int dr\left[f(\varphi (r))+\frac{1}{2}|\nabla \varphi |^2+Ez\varphi \right]},`$ (2)

where $`\varphi (𝐫,t)`$ is the order parameter and the field $`𝐄`$ points down toward lower $`z`$. Within a uniform phase, we use the following Flory-Huggins type free-energy density:

$$f(\varphi )=(1+\varphi )\mathrm{ln}(1+\varphi )+(1-\varphi )\mathrm{ln}(1-\varphi )-\frac{a}{2}\varphi ^2.$$

The system will phase separate for $`a>2`$ with coexistence values depending on $`a`$. \[For $`a`$ near 2 this recovers a more familiar $`\varphi ^4`$ free energy.\] This choice of $`f(\varphi )`$ forces $`|\varphi |<1`$, which simplifies the treatment of the particle mobility (below). We choose standard TDGL dynamics driven by gradients in the chemical potential, so that the particle current is

$`\vec{J}`$ $`=`$ $`-M(\varphi )\nabla {\displaystyle \frac{\delta F}{\delta \varphi }},`$ (3)

where $`M(\varphi )`$ is an order parameter dependent mobility.
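Written out, the chemical potential that drives this current follows from the free energy (2) by a functional derivative:

$$\mu \equiv \frac{\delta F}{\delta \varphi }=\frac{df}{d\varphi }-\nabla ^2\varphi +Ez,$$

so the applied field enters $`\vec{J}=-M(\varphi )\nabla \mu `$ only through the constant gradient $`\nabla (Ez)=E\widehat{z}`$.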
A continuity equation is then used to determine the evolution of the order parameter:

$`{\displaystyle \frac{\partial \varphi }{\partial t}}`$ $`=`$ $`-\nabla \cdot \vec{J}`$ (4)
$`=`$ $`\nabla \cdot \left[M(\varphi )\nabla \left({\displaystyle \frac{df}{d\varphi }}-\nabla ^2\varphi \right)\right]+E{\displaystyle \frac{\partial M(\varphi )}{\partial z}}.`$ (5)

The choice of a constant mobility $`M(\varphi )=M_0`$ leads to the field dependence dropping out of the dynamics. The next simplest choice, $`M(\varphi )=M_0(1-\varphi ^2)`$, the exact mobility for non-interacting lattice gases, leads to a non-trivial field-dependent DDS coarsening. Indeed, because the dynamics are deterministic, a semi-quantitative understanding can then be reached for the linear stability of interfaces and other interfacial properties. More generally, we want the mobility to reflect the effective coarse-grained mobility. Starting from a stochastic model with an applied field, the coarse-grained mobility will generally be anisotropic, as can be seen explicitly near the critical point. As a minimal step, we allow for different mobilities for currents in the $`x`$ and $`z`$ directions with

$`M_x(\varphi )`$ $`=`$ $`(1+m)M_0(1-\varphi ^2),`$ (6)
$`M_z(\varphi )`$ $`=`$ $`M_0(1-\varphi ^2),`$ (7)

where $`m`$ describes the mobility enhancement transverse to the field direction. In the simulations presented here we take $`a=2.75`$, so that the bulk phases are at $`\varphi =\pm 0.8`$. We always set $`M_0=1`$, which fixes the overall timescale. The initial conditions $`\varphi (𝐫,0)`$ follow a Gaussian distribution around the average order parameter $`\overline{\varphi }`$.

## Asymmetry Measure

In order to quantitatively compare the models, we need a measure of triangular anisotropy. We use the microscopic measure shown in Fig. 2. That is, we examine all squares of four nearest-neighbor sites on our lattice and define $`n_{up}`$ as the number of squares in which the bottom two sites are positive but the top two sites have opposite signs. These configurations point “upward”. A similar definition is used for $`n_{down}`$. The normalized asymmetry measure is then

$$asym=\frac{n_{up}-n_{down}}{n_{up}+n_{down}},$$ (8)

so that $`asym=1`$ if all triangles are upward pointing and $`asym=-1`$ if all triangles are downward pointing. Squares with more or fewer than three filled sites are not counted. The same measure is used for the continuum model except we look at four neighboring mesh points and count ‘full’ and ‘empty’ as $`\varphi >0`$ and $`\varphi <0`$, respectively. \[In practice an equivalent asymmetry measure for the ‘empty’ phase can be constructed. Similar results are obtained.\] Our measure is quantitatively different from that of Alexander et al., whose normalization drives their asymmetry towards zero as domains grow larger; however, we obtain the same qualitative sign of the triangular asymmetry. We prefer our measure since it only depends on the shapes of triangles and not their size, at least in the coarse-grained formulation. It qualitatively agrees with anisotropies seen “by eye”. (A short code sketch of this measure is given below.)

## Lattice Gas Results

By including next-nearest-neighbor jumps we can flip the direction of the triangular domains with respect to the field. Fig. 3 illustrates our results for the Ising model with nn and nnn hops. This indicates that the sign of the triangular anisotropy is non-universal. We also observe that the evolution is more rapid when nnn hops are allowed, as probed by domain size. The quantitative evolution of the asymmetry as a function of time for both nn and nnn hops is shown in Fig. 4.
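Fig. 4 plots precisely the quantity $`asym`$ of Eq. (8). For reference, here is a minimal sketch of its computation; the container type and the convention that the row index increases against the field (upward) are assumptions:

```
// Minimal sketch of the plaquette-counting asymmetry measure of Eq. (8).
// Assumptions: spins[z][x] = +1 (particle) or -1 (hole), and the row
// index z increases in the "up" direction (against the field); periodic
// boundaries are omitted for clarity.
#include <vector>

double asymmetry(const std::vector<std::vector<int> >& spins) {
    int n_up = 0, n_down = 0;
    const int Z = (int)spins.size();
    const int X = (int)spins[0].size();
    for (int z = 0; z + 1 < Z; ++z) {
        for (int x = 0; x + 1 < X; ++x) {
            bool b1 = spins[z][x] > 0,     b2 = spins[z][x + 1] > 0;     // bottom pair
            bool t1 = spins[z + 1][x] > 0, t2 = spins[z + 1][x + 1] > 0; // top pair
            int filled = b1 + b2 + t1 + t2;
            if (filled != 3) continue;     // only three-particle squares count
            if (b1 && b2) ++n_up;          // full bottom pair: triangle points up
            else if (t1 && t2) ++n_down;   // full top pair: triangle points down
        }
    }
    return (n_up + n_down > 0) ? double(n_up - n_down) / (n_up + n_down) : 0.0;
}
```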
The data are averaged over 5 to 10 configurations to reduce the noise. The asymmetry starts small and eventually saturates to an asymptotic value. We cannot rule out further change \[indeed slight decay is evident for $`T=0.75T_c`$\] since there is no known dynamical scaling in the correlations, i.e. no time-independent scaling function. The asymptotic asymmetry vs. $`T/T_c`$, where $`T_c\equiv T_c(0)`$ is the critical temperature for zero field, is shown in Fig. 5. With only nn hops the asymmetry is always positive, indicating upward-pointing triangles, and increases with decreasing temperature. We note that the anisotropy measure is continuous across $`T_c(E)`$ (which ranges from $`T_c(0)`$ to approximately $`1.4T_c(0)`$ at $`E=\mathrm{\infty }`$). At $`T_c(E)`$ we would expect surface tensions and particle mobilities to be isotropic in a small field, and so we attribute the residual anisotropy to the field. \[We note that long-range correlations can persist into the high-temperature disordered phase due to violations of detailed balance in driven systems.\] With nnn hops, the triangular asymmetry is small and positive near the critical point. This is consistent with the previous discussion of nn hops. At lower temperatures the asymmetry turns negative. Since the nnn hops should lead to a more isotropic particle mobility, we might infer that the increasing anisotropy of the surface tension with decreasing temperature feeds a negative triangular asymmetry when not sufficiently ‘compensated’ by an anisotropic mobility. Clearly the situation is complicated, since fully characterizing the ‘anisotropy’ requires the entire function of, e.g., surface tension vs. interfacial orientation. Without actually having a quantitative measure of the coarse-grained properties of the system (surface tension, particle mobility) it is difficult to discuss the exact origins of the triangular asymmetry. Indeed, this difficulty is a primary motivation to explore the coarse-grained picture. Regardless, the particle mobility will depend on the microscopic structure, which in turn depends on the applied field, even at $`T=0`$. In the limit of small applied field $`E`$ additional induced anisotropies should become negligible. We see some indications of this through the reduction in the low-temperature positive asymmetry regime with nnn hops as the field is reduced, indicating that the regime is induced by the finite field. This suggests one possible simplifying tactic: to look at the small-field limit. Unfortunately this makes the timescales for numerical investigation of the driven system inaccessibly large. Analytically, this limit has been profitably used by one of us in a coarse-grained analysis of surface instabilities in nearly isotropic systems.

## Coarse-grained Results

The results for the kinetic Ising model indicate that the positive asymmetry measure may be due to anisotropy in the mobility. Therefore, to flip the triangular asymmetry to positive values in the TDGL model, we allow for different mobilities in the $`x`$ and $`z`$ (field) directions with $`M_x/M_z=1+m`$. We also varied the bulk coexistence value, the field strength $`E`$, and the initial filling fraction. We found that the primary effect came from $`m`$ and from $`E`$. The top snapshot in Fig. 6 shows that the triangles are in the same direction as the nn kinetic Ising model at early times. This is confirmed by a positive asymmetry measurement at these times. However, the bottom snapshot in Fig. 6
shows that the asymmetry measure becomes negative at late times, when the domains are very elongated in the field direction. This transient behaviour was robustly present at all nonzero values of $`m`$ and $`E`$ that we tried. However, the early-time “transient” regime with positive asymmetry is quite large and can be extended indefinitely in the limit of $`E/m\to 0`$. This is shown in Fig. 7, which plots the asymmetry vs. time for critical quenches at fixed $`m=1`$ and varying $`E`$. This raises the intriguing question of whether the asymmetries seen in the Ising DDS models, where the underlying dynamics are slower, might switch at later times.

## Conclusion

We have shown that the sign and magnitude of triangular anisotropies of growing domains are non-universal in both stochastic Ising and deterministic coarse-grained DDS models. Hence, we see no qualitative differences between these approaches with finite fields, and rather see great promise in using the strengths of each approach to explore DDS phenomenology. Questions remain concerning the origins of the triangular anisotropy within a full anisotropic coarse-grained model. This must be understood to intelligently explore the parameter space of coarse-grained models. With this understanding, we might even profitably turn the tables and use the triangular anisotropy as a probe of interfacial properties. We are not overly concerned with the apparent transient nature of the “flipped” anisotropy in the coarse-grained model, since even the stochastic Ising model has not yet been studied extensively enough to tell whether the anisotropies persist at asymptotically late times. However, we will now focus our efforts on prolonging the reversal in coarse-grained models. This provides a motivation to more fully understand the origins of the triangular anisotropy. In the process we would like to develop a more intrinsically coarse-grained measure of anisotropy that can be used equally well in Ising and coarse-grained approaches. We expect that anisotropies in the Porod tail of the structure factor will be the most robust measure, since they directly probe the distribution of interfacial orientations.

###### Acknowledgements.

A. D. Rutenberg thanks the NSERC, and le Fonds pour la Formation de Chercheurs et l’Aide à la Recherche du Québec. C. Yeung gratefully acknowledges support for this work from the Research Corporation under Cottrell College Science Grant CC3993. We would like to thank Royce Zia for encouragement and discussions.
no-problem/9903/cs9903002.html
# An Algebraic Programming Style for Numerical Software and its Optimization

## 1 Introduction

The purpose of the Sophus approach to writing partial differential equation (PDE) solvers, originally proposed in , is to close the gap between the underlying coordinate-free mathematical theory and the way actual solvers are written. The main ingredients of Sophus are:

1. A library of abstract datatypes corresponding to manifolds, scalar fields, tensors, and the like, figuring in the abstract mathematical theory.
2. Expressions involving these datatypes written in a side-effect free algebraic style similar to the expressions in the underlying mathematical theory.

Because of the emphasis on abstract datatypes, Sophus is most naturally combined with object-oriented languages or other languages supporting abstract datatypes. Hence, we will be discussing high-performance computing (HPC) optimization issues within an object-oriented or abstract datatype context, using abstractions that are suitable for PDEs. Sophus is not simply object-oriented scientific programming, but a much more structured approach dictated by the underlying mathematics. The object-oriented numerics paradigm proposed in is related to Sophus in that it uses abstractions corresponding to familiar mathematical constructs such as tensors and vectors, but these do not include continuous structures such as manifolds and scalar fields. The Sophus approach is more properly called coordinate-free numerics. A fully worked example of conventional vs. coordinate-free programming of a computational fluid dynamics problem (wire coating for Newtonian and non-Newtonian flows) is given in .

Programs in a domain-specific programming style like Sophus may need additional optimization in view of their increased use of expensive constructs. On the other hand, the restrictions imposed by the style may lead to new high-level optimization opportunities that can be exploited by dedicated tools. Automatic selection of high-level HPC transformations (especially loop transformations) has been incorporated in the IBM XL Fortran compiler, yielding a performance improvement for entire programs of typically less than 2$`\times `$ \[14, p. 239\]. We hope Sophus style programming will allow high-level transformations to become more effective than this.

In the context of Sophus and object-oriented programming this article focuses on the following example. Object-oriented languages encourage the use of self-mutating (self-updating, mutative) objects rather than a side-effect free algebraic expression style as advocated by Sophus. The benefits of the algebraic style are considerable. We obtained a reduction in source code size using algebraic notation vs. an object-oriented style of up to 30% in selected procedures of a seismic simulation code, with a correspondingly large increase in programmer productivity and maintainability of the code as measured by the Cocomo technique, for instance. On the negative side, the algebraic style requires lots of temporary data space for (often very large) intermediate results to be allocated and subsequently recovered. Using self-mutating objects, on the other hand, places some of the burden of variable management on the programmer and makes the source code much more difficult to write, read, and maintain. It may yield much better efficiency, however. Now, by including certain restrictions as part of the style, a precise relationship between self-mutating notation and algebraic notation may be achieved.
Going one step further, we see that the natural way of building a program from high-level abstractions may be in direct conflict with the way current compilers optimize program code. We propose a source-to-source optimization tool, called CodeBoost, as a solution to many of these problems. Some further promising optimization opportunities we have experimented with but not yet included in CodeBoost are also mentioned. The general approach may be useful for other styles and other application domains as well.

This paper is organized as follows. After a brief overview of tensor based abstractions for numerical programming and their realization as a software library (Section 2), we discuss the relationship between algebraic and self-mutating expression notation, and how the former may be transformed into the latter (Section 3). We then discuss the implementation of the CodeBoost source-to-source optimization tool (Section 4), and give some further examples of how software construction using class abstractions may conflict with efficiency issues as well as lead to new opportunities for optimization (Section 5). Finally, we present conclusions and future work (Section 6).

## 2 A Tensor Based Library for Solving PDEs

Historically, the mathematics of PDEs has been approached in two different ways. The solution-oriented approach uses concrete representations of vectors and matrices, discretisation techniques, and numerical algorithms, while the abstract approach develops the theory in terms of manifolds, scalar fields, tensors, and the like, focusing more on the structure of the underlying concepts than on how to calculate with them (see for a good introduction). The former approach is the basis for most of the PDE software in existence today. The latter has very promising potential for the structuring of complicated PDE software when combined with template class based programming languages or other languages supporting abstract datatypes. As far as notation is concerned, the abstract mathematics makes heavy use of overloaded infix operators. Hence, user-definable operators and operator overloading are further desirable language features in this application domain. C++ comes closest to meeting these desiderata, but, with modules and user-definable operators, Fortran 90/95 can also be used. In its current form Java is less suitable. It has neither templates nor user-definable operators. Also, Java’s automatic memory management is not necessarily an advantage in an HPC setting \[16, Section 4\]. Some of these problems may be alleviated in Generic Java. The examples in this article use C++.

### 2.1 The Sophus Library

The Sophus library provides the abstract mathematical concepts from PDE theory as programming entities. Its components are based on the notions of manifold, scalar field and tensor field, while the implementations are based on the conventional numerical algorithms and discretisations. Sophus is currently structured around the following concepts:

* Basic $`n`$-dimensional mesh structures. These are like rank $`n`$ arrays (i.e., with $`n`$ independent indices), but with operations like $`+`$, $`-`$ and $`*`$ mapped over all elements (much like Fortran 90/95 array operators), as well as the ability to add, subtract or multiply all elements of the mesh by a scalar in a single operation. There are also operations for shifting meshes in one or more dimensions.
Parallel and sequential implementations of mesh structures can be used interchangeably, allowing easy porting between architectures of any program built on top of the mesh abstraction.
* Manifolds. These represent the physical space where the problem to be solved takes place. Currently Sophus only implements subsets of $`R^n`$.
* Scalar fields. These may be treated formally as functions from manifolds to reals, or as arrays indexed by the points of the manifold with reals as data elements. Scalar fields describe the measurable quantities of the physical problem to be solved. As the basic layer of “continuous mathematics” in the library, they provide the partial derivation operations. Also, two scalar fields on the same manifold may be pointwise added, subtracted or multiplied. The different discretisation methods provide different designs for the implementation of scalar fields. A typical implementation would use an appropriate mesh as underlying discrete data structure, use interpolation techniques to give a continuous interface, and map the $`+`$, $`-`$, and $`*`$ operations directly to the corresponding mesh operations. In a finite difference implementation partial derivatives are implemented using shifts and arithmetic operations on the mesh.
* Tensors. These are generalizations of vectors and matrices and have scalar fields as components. Tensors define the general differentiation operations based on the partial derivatives of the scalar fields, and also provide operations such as componentwise addition, subtraction and multiplication, as well as tensor composition and application (matrix multiplication and matrix-vector multiplication). A special class are the metric tensors. These satisfy certain mathematical properties, but their greatest importance in this context is that they can be used to define properties of coordinate systems, whether Cartesian, axisymmetric or curvilinear, allowing partial differential equations to be formulated in a coordinate-free way. The implementation of tensors relies heavily on the arithmetic operations of the scalar field classes.

A partial differential equation in general provides a relationship between spatial derivatives of tensor fields representing physical quantities in a system and their time derivatives. Given constraints in the form of the values of the tensor fields at a specific instance in time together with boundary conditions, the aim of a PDE solver is to show how the physical system will evolve over time, or what state it will converge to if left by itself. Using Sophus, the solvers are formulated on top of the coordinate-free layer, forming an abstract, high level program for the solution of the problem.

### 2.2 Sophus Style Examples

The algebraic style for function declarations can be seen in Figure 1, which shows specifications of some operations for multidimensional meshes, the lowest level in the Sophus library. The mesh class is parameterized by a class T, so all operations on meshes likewise are parameterized by T. Typical parameters would be a float or scalar field class. The operations declared are defined to behave like pure functions, i.e., they do not update any internal state or modify any of their arguments. Such operations are generally nice to work with and reason about, as their application will not cause any hidden interactions with the environment. Selected parts of the implementation of a continuous scalar field class are shown in Figure 2.
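The flavor of such a class might look roughly as follows; this is a hypothetical sketch (class layout, names, and the shift-based mesh interface are assumptions), not the library's actual code:

```
// Hypothetical sketch of a scalar field built on a mesh, in the spirit of
// Figure 2; the Mesh interface (shift, elementwise +,-,*, scaling) is assumed.
template <class Mesh>
class TorusScalarField {
    Mesh data;   // underlying discrete representation of the field
    float h;     // grid spacing of the discretisation
public:
    TorusScalarField(const Mesh& m, float spacing) : data(m), h(spacing) {}

    // Pointwise arithmetic maps directly onto the element-wise mesh operations.
    TorusScalarField operator+(const TorusScalarField& g) const {
        return TorusScalarField(data + g.data, h);
    }
    TorusScalarField operator*(const TorusScalarField& g) const {
        return TorusScalarField(data * g.data, h);
    }

    // Four-point finite-difference partial derivative in direction d:
    //   f'(x) ~ [8(f(x+h) - f(x-h)) - (f(x+2h) - f(x-2h))] / (12h),
    // realized by shifting the mesh by one and two lattice units.
    TorusScalarField derivative(int d) const {
        Mesh m = (data.shift(d, 1) - data.shift(d, -1)) * (8.0f / (12.0f * h))
               - (data.shift(d, 2) - data.shift(d, -2)) * (1.0f / (12.0f * h));
        return TorusScalarField(m, h);
    }
};
```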
This scalar field represents a multi-dimensional torus, and is implemented using a mesh class as the main data structure. The operations of the class have been implemented as self-mutating operations (Section 3), but are used in an algebraic way for clarity. It is easy to see that the partial derivation operation is implemented by shifting the mesh longer and longer distances, and gradually scaling down the impact these shifts have on the derivative, yielding what is known as a four-point, finite difference, partial derivation algorithm. The addition and multiplication operations are implemented using the element-wise mapped operations of the mesh. The meshes used in a scalar field tend to be very large. A TorusScalarField may typically contain between 0.2 and 2MB of data, perhaps even more, and a program may contain many such variables. The standard translation technique for a C++ compiler is to generate temporary variables containing intermediate results from subexpressions, adding a considerable run-time overhead to the algebraic style of programming. An implementation in terms of self-mutating operators might yield noticeable efficiency gains. For the addition, subtraction and multiplication algorithms of Figure 2 a self-mutating style is easily obtained. The derivation algorithm will require extensive modification, such as shown in Figure 5, with a marked deterioration in readability and maintainability as a result.

## 3 Algebraic Notation and Self-Mutating Implementation

### 3.1 Self-Mutating Operations

Let a, b and c be meshes with operators as defined in Figure 1. The assignment

```
c = a * 4.0 + b + a
```

is basically evaluated as

```
temp1 = a * 4.0; temp2 = temp1 + b; c = temp2 + a
```

This involves the creation of the meshes temp1, temp2, c, the first two of which are temporary. Obviously, since all three meshes have the same size and the operations in question are sufficiently simple, repeated use of a single mesh would have been possible in this case. In fact, for predefined types like integers and floats an optimizing C or C++ compiler would translate the expression to a sequence of compound assignments (not to be confused with the C notion of compound statement, which is a sequence of statements enclosed by a pair of braces)

```
c = a; c *= 4.0; c += b; c += a
```

which repeatedly uses variable c to store intermediate results. We would like to be able to do a similar optimization of the mesh expression above as well as other expressions involving $`n`$-ary operators or functions of a suitable nature for user-defined types as proposed in . In an object-oriented language, it would be natural to define self-mutating methods (i.e., methods mutating this) for the mesh operations in the above expression. These would be closely similar to the compound assignments for predefined types in C and C++, which return a reference to the modified data structure. Sophus demands a side-effect free expression style close to the underlying mathematics, however, and forbids direct use of self-mutating operations in expressions. Note that with a self-mutating += operator returning the modified value of its first argument, the expression (a += b) += a would yield $`2(a+b)`$ rather than $`(2a)+b`$. By allowing the user to define self-mutating operations and providing a way to use them in a purely functional manner, their direct use can be avoided. There are basically two ways to do this, namely, by means of wrapper functions or by program transformation; a concrete sketch of the two notations follows.
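Here is one mesh operation in both forms; the signatures are hypothetical sketches in the spirit of the paper's Figures 3 and 4, not the actual library code:

```
// Self-mutating form: updates *this in place. Sophus-style self-mutating
// operators return void rather than a reference, so they cannot be nested
// inside expressions (avoiding the (a += b) += a pitfall noted above).
template <class T>
class Mesh {
public:
    void operator+=(const Mesh& rhs);   // elementwise this[i] += rhs[i]
    // ...
};

// Algebraic wrapper of the kind generated automatically by SCC:
// copy the first argument, then mutate the copy.
template <class T>
Mesh<T> operator+(const Mesh<T>& a, const Mesh<T>& b) {
    Mesh<T> result(a);   // result = copy(a)
    result += b;         // self-mutating update on the copy
    return result;
}
```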
Both will be discussed in the following sections.

### 3.2 Wrapper Functions

Self-mutating implementations can be made available to the programmer in non-self-mutating form by generating appropriate wrapper functions. We developed a C++ preprocessor SCC doing this. It scans the source text for declarations of a standard form and automatically creates wrapper functions for the self-mutating ones. This allows the use of an algebraic style in the program, and relieves the programmer of the burden of having to code the wrappers manually. A self-mutating operator op= is related to its algebraic analog op by the basic rule

x = y op z; $`\equiv `$ x = copy(y); x op= z; (1)

or, if the second argument is the one being updated (this does not apply to built-in compound assignments in C or C++, but user-defined compound assignments in C++ may behave in this way), by the rule

x = y op z; $`\equiv `$ x = copy(z); y op= x; (2)

where $`\equiv `$ denotes equivalence of the left- and right-hand sides, x, y, z are C++ variables, and copy makes a copy of the entire data structure. Now, the Sophus style does not allow aliasing or sharing of objects, and the (overloaded) assignment operator x = y is always given the semantics of x = copy(y) as used in (1) and (2). Hence, in the context of Sophus (1) can be simplified to

x = y op z; $`\equiv `$ x = y; x op= z; (3)

and similarly for (2). We note the special case

x = x op z; $`\equiv `$ x op= z; (4)

and the obvious generalizations

x = x op e; $`\equiv `$ x op= e; (5)

x = e1 op e2; $`\equiv `$ x = e1; x op= e2; (6)

where e, e1, and e2 are expressions. SCC uses rules such as (6) to obtain purely functional behavior from the self-mutating definitions in a straightforward way. Figure 4 shows the wrappers created by SCC for the self-mutating mesh operations of Figure 3. The case of $`n`$-ary operators and functions is similar ($`n\geq 1`$). We note that, unlike C and C++ compound assignments, Sophus style self-mutating operators do not return a reference to the variable being updated and cannot be used in expressions. This simpler behavior facilitates their definition in Fortran 90/95 and other languages of interest to Sophus. The wrapper approach is superficial in that it does not minimize the number of temporaries introduced for expression evaluation as illustrated in Section 3.1. We therefore turn to a more elaborate transformation scheme.

### 3.3 Program Transformation

Transformation of algebraic expressions to self-mutating form with simultaneous minimization of temporaries requires a parse of the program, the collection of declarations of self-mutating operators and functions, and matching them with the types of the operators and functions actually used after any overloading has been resolved. Also, declarations of temporaries have to be added with the proper type. Such a preprocessor would be in a good position to perform other source-to-source optimizations as well. In fact, this second approach is the one implemented in CodeBoost with promising results. Figure 5 shows an optimized version of the partial derivation operator of class TorusScalarField (Figure 2) that might be obtained in this way. In addition to the transformation to self-mutating form, an obvious rule for ushift was used to incrementalize shifting of the mesh. Assuming the first argument is the one being updated, some further rules for binary operators used in this stage are

x op1= e1 op2 e2; $`\equiv `$ {T t = e1; t op2= e2; x op1= t;} (7)

{T t1 = e1; s1;} {T t2 = e2; s2;} $`\equiv `$ {T t = e1; s1; t = e2; s2;} (8)
Here x, t, t1, t2 are variables of type T; e1, e2 are expressions; and self-mutating operators op=, op1=, op2= correspond to operators op, op1, op2, respectively. Recall that Sophus does not allow aliasing. Rule (7) introduces a temporary variable t in a local environment and rule (8) reduces the number of temporary variables by merging two local environments declaring a temporary into a single one.

### 3.4 Benchmarks

#### 3.4.1 Two Kernels

Consider C++ procedures F and P shown in Figure 6 (a hypothetical sketch of their flavor is given below). F computes $`x^2+2x`$ using algebraic notation while P computes the same expression in self-mutating form using a single temporary variable temp1. Both were run with meshes of different sizes. The corresponding timing results are shown in Figures 7, 8, and 9. The mesh size is given in the leftmost column. Mesh elements are single precision reals of 4B each. The second column indicates the benchmark procedure (F or P) or the ratio of the corresponding timings (F/P). The columns NC, NS, OC, and OS give the time in seconds of several iterations over each mesh so that a total of 16 777 216 elements were updated in each case. This corresponds to 32 768 iterations for mesh size $`8^3`$, 64 iterations for mesh size $`64^3`$, 1 iteration for mesh size $`256^3`$, and so forth. In columns \_C (conventional style) the procedure calls are performed for each element of the mesh, while in columns \_S (Sophus style) they are performed as operations on the entire mesh variables. Columns N\_ give the time for unoptimized code (no compiler options), while columns O\_ give the time for code optimized for speed (compiler option -fast for the SUN CC compiler and -Ofast for the Silicon Graphics/Cray CC compiler). The timings represent the median of 5 test runs. These turned out to be relatively stable measurements, except in columns NS and OS, rows $`256^3`$ F and P of Figure 7, where the time for an experiment could vary by more than 100%. This is probably due to paging activity on disk dominating the actual CPU time. Also note that the transformations done by the optimizer are counterproductive in the P case, yielding an NS/OS ratio of 0.8. When run on the SUN the tests were the only really active processes, while the Cray was run in its normal multi-user mode but at a relatively quiet time of the day (Figure 10). As can be seen the load was moderate (around 58) and although fully utilized, resources were not overloaded. In the current context, only columns NS and OS are relevant; the other ones are explained in Section 5.1. As expected, the self-mutating form P is a better performer than the algebraic form F when the Sophus style is used. Disregarding the cases with disk paging mentioned above, we see that the self-mutating mesh operations are 1.8–2.4 times faster than their algebraic counterparts, i.e., the CodeBoost transformation roughly doubles the speed of these benchmarks.

#### 3.4.2 Full Application: SeisMod

We also obtained preliminary results on the Silicon Graphics/Cray Origin 2000 for a full application, the seismic simulation code SeisMod, which is written in C++ using the Sophus style. It is a collection of applications using the finite difference method for seismic simulation. Specific versions of SeisMod have been tailored to handle simulations with simple or very complex geophysical properties. (SeisMod is used and licensed by the geophysical modelling company UniGEO A.S., Bergen, Norway.)
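Returning to the kernels of Section 3.4.1, they might look roughly as follows; this is a hypothetical sketch (Figure 6 defines the actual code), with T standing for a mesh type:

```
// F: algebraic notation; the compiler introduces temporaries for the
// subexpressions x*x and 2*x.
template <class T>
T F(const T& x) {
    return x * x + x * 2.0f;   // x^2 + 2x
}

// P: self-mutating form with the single temporary temp1, using the
// identity (x + 2) * x == x^2 + 2x (this factorization is an assumption).
template <class T>
T P(const T& x) {
    T temp1(x);      // temp1 = x
    temp1 += 2.0f;   // temp1 = x + 2   (scalar added to every element)
    temp1 *= x;      // temp1 = (x + 2) * x
    return temp1;
}
```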
We compared a version of SeisMod implemented using SCC generated wrapper functions and a self-mutating version produced by the CodeBoost source-to-source optimizer:

* The algebraic expression style version turned out to give a 10–30% reduction in source code size and greatly enhanced readability for complicated parts of the code. This implies a significant programmer productivity gain as well as a significant reduction in maintenance cost as measured by the Cocomo technique, for instance.
* A 30% speed increase was obtained after 10 selected procedures out of 150 procedures with speedup potential had been brought into self-mutating form. This speedup turned out to be independent of C++ compiler optimization flag settings.

This shows that although a more user-friendly style may give a performance penalty compared to a conventional style, it is possible to regain much of the efficiency loss by using appropriate optimization tools. Also, a more abstract style may yield more cost-effective software, even without these optimizations, if the resulting development and maintenance productivity improvement is taken into account.

## 4 Implementation of CodeBoost

CodeBoost is a dedicated C++ source-to-source transformation tool for Sophus style programs. It has been implemented using the ASF+SDF language prototyping system. ASF+SDF allows the required transformations to be entered directly as conditional rewrite rules whose right- and left-hand sides consist of language (in our case C++) patterns with variables and auxiliary transformation functions. The required language specific parsing, rewriting, and prettyprinting machinery is generated automatically by the system from the high-level specification. Program transformation tools for Prolog and the functional language Clean implemented in ASF+SDF are described in . An alternative implementation tool would have been the TAMPR program transformation system, which has been used successfully in various HPC applications. We preferred ASF+SDF mainly because of its strong syntactic capabilities enabling us to generate a C++ environment fairly quickly given the complexity of the language. Another alternative would have been the use of template metaprogramming and/or expression templates. This approach is highly C++ specific, however, and cannot be adapted to Fortran 90/95. Basically, the ASF+SDF implementation of CodeBoost involves the following two steps:

1. Specify the C++ syntax in SDF, the syntax definition formalism of the system.
2. Specify the required transformation rules as conditional rewrite rules using the C++ syntax, variables, and auxiliary transformation functions.

As far as the first step is concerned, specification of the large C++ syntax in SDF would involve a considerable effort, but fortunately a BNF-like version is available from the ANSI C++ standards committee. We obtained a machine-readable preliminary version and translated it largely automatically into SDF format. The ASF+SDF language prototyping system then generated a C++ parser from it. The fact that the system accepts general context-free syntax rather than only LALR or other restricted forms of syntax also saved a lot of work in this phase, even though the size of the C++ syntax taxed its capabilities. With the C++ parser in place, the required program transformation rules were entered as conditional rewrite rules. In general, a program transformer has to traverse the syntax tree of the program to collect the context-specific information used by the actual transformations.
In our case, the transformer needs to collect the declaration information indicating which of the operations have a self-mutating implementation. Also, in Sophus the self-mutating implementation of an operator (if any) need not update this but can indicate which of the arguments is updated using the upd flag. The transformer therefore needs to collect not only which of the operations have a self-mutating implementation but also which argument is being mutated in each case. As a consequence, CodeBoost has to traverse the program twice: once to collect the declaration information and a second time to perform the actual transformations. Two other issues have to be taken into account:

* C++ programs cannot be parsed before their macros are expanded. Some Sophus-specific language elements are implemented as macros, but are more easily recognized before expansion than after. An example is the upd flag indicating which argument of an operator or function is the one to be updated.
* Compared to the total number of constructs in C++, there are relatively few constructs of interest. Since all constructs have to be traversed, this leads to a plethora of trivial tree traversal rules. As a result, the specification gets cluttered up by traversal rules, making it a lot of work to write as well as hard to understand. One would like to minimize or automatically generate the part of the specification concerned with straightforward program traversal.

Our approach to the above problems is to give the specification a two-phase structure as shown in Figure 11. Under the reasonable assumption that the declarations are not spoiled by macros, the first phase processes the declarations of interest prior to macro expansion using a stripped version of the C++ grammar that captures the declaration syntax only. We actually used a Perl script for this, but it could have been done in ASF+SDF as well. It yields an ASF+SDF specification that is added to the specification of the second phase. The effect of this is that the second phase is specialized for the program at hand in the sense that the transformation rules in the second phase can assume the availability of the declaration information and thus can be specified in a more algebraic, i.e., context independent manner. As a consequence, they are easy to read, consisting simply of the rules for the constructs that may need transformation and using the ASF+SDF system’s built-in innermost tree traversal mechanism. In this way, we circumvented the last-mentioned problem. As CodeBoost is developed further, it will have to duplicate more and more functions already performed by any C++ preprocessor/compiler. Not only will it have to do parsing (which it is already doing now), but also template expansion, overloading resolution, and dependence analysis. It would be helpful if CodeBoost could tap into an existing compiler at appropriate points rather than redo everything itself. One of the candidates we are considering is the IBM Montana C++ compiler/programming environment, which provides an open architecture with APIs giving access to various compiler intermediate representations with pointers back to the source text.

## 5 Software Structure vs. Efficiency

As noted in Section 1, programs in a domain-specific programming style like Sophus may need additional optimization in view of their increased use of expensive constructs.
On the other hand, the restrictions imposed by the style may lead to new high-level optimization opportunities that can be exploited by a CodeBoost-like optimization tool. We give some further examples of both phenomena.

### 5.1 Inefficiencies Caused by the Use of an Abstract Style

We consider an example. As explained in Section 2.1, scalar field operations like $`+`$ and $`*`$ are implemented on top of mesh operations $`+`$ and $`*`$. The latter will typically be implemented as iterations over all array elements, performing the appropriate operations pairwise on the elements. For scalar fields, expressions like $`X_1=A_{1,1}V_1+A_{1,2}V_2`$, $`X_2=A_{2,1}V_1+A_{2,2}V_2`$ will force 8 traversals over the mesh data structure. If the underlying meshes are large, this may cause many cache misses for each traversal. Now each of the scalar fields $`A_{i,j}`$, $`V_j`$, and $`X_j`$ are actually implemented using a mesh, i.e., an array of $`n`$ elements, and are represented in the machine by A\[i,j,k\], V\[j,k\] and X\[j,k\] for $`k=1,\mathrm{\dots },K`$, where K is the number of mesh points of the discretisation. In a conventional implementation this would be explicit in the code more or less as follows:

```
for k := 1,K
  for j := 1,2
    X[j,k] := 0
    for i := 1,2
      X[j,k] += A[j,i,k]*V[i,k]
    endfor
  endfor
endfor
```

It would be easy for an optimizer to partition the loops in such a way that the number of cache misses is reduced by a factor of 8. In the abstract case aggressive in-lining is necessary to expose the actual loop nesting to the optimizer. Even though most existing C++ compilers do in-lining of abstractions, this would be non-trivial since many abstraction layers are involved from the programmer’s notation on top of the library of abstractions down to the actual traversals being performed. Consider once again the timing results shown in Figure 7, Figure 8, and Figure 9. As was explained in Section 3.4, the procedure calls in columns \_C (conventional style) are performed for each element of the mesh, while they are performed as operations on the entire mesh variables in columns \_S (Sophus style). Columns OS/OC for row P give the relevant figures for the performance loss of optimized Sophus style code relative to optimized conventional style code as a result of Sophus operating at the mesh level rather than at the element level. The benchmarks show a penalty of 1.1–5.3, except for data structures of less than 128kB on the SUN, where a speedup of up to 1.4 (penalty of 0.7) can be seen in Figure 9. As is to be expected, for large data structures the procedure calls in column OC are more efficient than those in column OS, as the optimizer is geared towards improving the conventional kind of code consisting of large loops with procedure calls on small components of data structures. Also, cache and memory misses become very costly when large data structures have to be traversed many times. The figures for P in column OS of Figure 9 are somewhat unexpected. In these cases OS is the fastest alternative up to a mesh size somewhere between $`32^3`$ and $`64^3`$. This may be due to the smaller number of procedure calls in the OS case than in the OC case. In the latter case F and P are called once per element, i.e., 16 777 216 times, while in the OS case they are called only once and the self-mutating operations are called only 4 times. Another interesting phenomenon can be seen in column NC of Figure 7 and Figure 8.
Here the self-mutating version takes longer than the algebraic version, probably because the compiler automatically puts small temporaries in registers for algebraic expressions, but cannot do so for self-mutating forms. The OC column shows that the optimizer eliminates the difference.

### 5.2 New Opportunities for Optimization

The same abstractions that were a source of worry in the previous section at the same time provide the specificity and typing making the use of high-level optimizations possible. Before they are removed by inlining, the information the abstractions provide can be used to select and apply appropriate datatype specific optimization rules. Sophus allows application of such rules at very high levels of abstraction. Apart from the expression transformation rules (1)–(8) (Section 3), which are applicable to a wide range of operators and functions, further examples at various levels of abstraction are:

* The laws of tensor algebra. In Sophus the tensors contain the continuous scalar fields as elements (Section 2.1), thus making the abstract tensor operations explicit in appropriate modules.
* Specialization of general tensor code for specific coordinate systems. A Cartesian coordinate system gives excellent simplification, and axisymmetric ones also give good simplification compared to general curvilinear code.
* Optimization of operations on matrices with many symmetries. Such symmetries offer opportunities for optimization in many cases, including the seismic modelling application mentioned in Section 3.4.2.

## 6 Conclusions and Future Work

* The Sophus class library in conjunction with the CodeBoost expression transformation tool shows the feasibility of a style of programming PDE solvers that attempts to stay close to the abstract mathematical theory in terms of both the datatypes and the algebraic style of expressions used.
* Our preliminary findings for a full application, the Sophus style seismic simulation code SeisMod, indicate significant programmer productivity gains as a result of adopting the Sophus style.
* There are numerous further opportunities for optimization by CodeBoost in addition to replacement of appropriate operators and functions by their self-mutating versions. Sophus allows datatype specific rules to be applied at very high levels of abstraction.

## Acknowledgments

Hans Munthe-Kaas, André Friis, Kristin Frøysa, Steinar Søreide, and Helge Gunnarsli have contributed to Sophus in various ways.
no-problem/9903/physics9903008.html
# A Stochastic Tunneling Approach for Global Minimization of Complex Potential Energy Landscapes

## Abstract

We investigate a novel stochastic technique for the global optimization of complex potential energy surfaces (PES) that avoids the freezing problem of simulated annealing by allowing the dynamical process to tunnel energetically inaccessible regions of the PES by way of a dynamically adjusted nonlinear transformation of the original PES. We demonstrate the success of this approach, which is characterized by a single adjustable parameter, for three generic hard minimization problems. PACS: 02.70.Pn, 02.70.Lq, 02.50.Ey, 02.70.Ln

The development of methods that efficiently determine the global minima of complex and rugged energy landscapes remains a challenging problem with applications in many scientific and technological areas. In particular for NP-hard problems stochastic methods offer an acceptable compromise between the reliability of the method and its computational cost. Branch-and-bound techniques offer stringent error estimates but scale exponentially in their computational effort. In many stochastic approaches the computational cost to determine the global minimum with a given probability grows only as a power-law with the number of variables. In such techniques the global minimization is performed through the simulation of a dynamical process for a “particle” on the multi-dimensional potential energy surface. Widely used is the simulated annealing (SA) technique, where the PES is explored in a series of Monte-Carlo simulations at successively decreasing temperatures. Its success often depends strongly on the choice of the cooling schedule, yet even the simplest geometric cooling schedule is characterized by three parameters (starting temperature, cooling rate and number of cooling steps) that must be optimized to obtain adequate results. For many difficult problems with rugged energy landscapes SA suffers from the notorious “freezing” problem, because the escape rate from local minima diverges with decreasing temperature. To ameliorate this problem many variants of the original algorithm have been proposed. Unfortunately these proposals often increase the number of parameters even further, which complicates their application for practical problems. In this letter we investigate the stochastic tunneling method, a generic physically motivated generalization of SA. This approach circumvents the “freezing” problem, while reducing the number of problem dependent parameters to one. In this investigation we demonstrate the success of this approach for three hard minimization problems, in comparison with other techniques: the Coulomb spin-glass (CSG), the traveling salesman problem (TSP) and the determination of low autocorrelation binary sequences (LABS).

Method: The freezing problem in stochastic minimization methods arises when the energy difference between “adjacent” local minima on the PES is much smaller than the energy of intervening transition states separating them. As an example consider the dynamics on the model potential in Figure 1(a). At high temperatures a particle can still cross the barriers, but not differentiate between the wells. As the temperature drops, the particle will eventually get trapped with almost equal probability in any of the wells, failing to resolve the energy difference between them.
The physical idea behind the stochastic tunneling method is to allow the particle to “tunnel” forbidden regions of the PES, once it has been determined that they are irrelevant for the low-energy properties of the problem. This can be accomplished by applying a non-linear transformation to the PES:

$$f_{\mathrm{STUN}}(x)=1-\mathrm{exp}\left[-\gamma \left(f(x)-f_0\right)\right]$$ (1)

where $`f_0`$ is the lowest minimum encountered by the dynamical process so far (see Figure 1(b) and (c)). The effective potential preserves the locations of all minima, but maps the entire energy space from $`f_0`$ to the maximum of the potential onto the interval $`[0,1]`$. At a given finite temperature of O(1), the dynamical process can therefore pass through energy barriers of arbitrary height, while the low-energy region is still resolved well. The degree of steepness of the cutoff of the high-energy regions is controlled by the tunneling parameter $`\gamma >0`$. Continuously adjusting the reference energy $`f_0`$ to the best energy found so far successively eliminates irrelevant features of the PES that would trap the dynamical process. To illustrate the physical content of the transformation we consider a Monte-Carlo (MC) process at some fixed inverse temperature $`\beta `$ on the STUN-PES. A MC-step from $`x_1`$ to $`x_2`$ with $`\mathrm{\Delta }=f(x_2)-f(x_1)`$ is accepted with probability $`\tilde{w}_{12}=\mathrm{exp}\left[-\beta \left(f_{\mathrm{STUN}}(x_2)-f_{\mathrm{STUN}}(x_1)\right)\right]`$. In the limit $`\gamma \mathrm{\Delta }\ll 1`$ this reduces to $`\tilde{w}_{12}\approx \mathrm{exp}(-\tilde{\beta }\mathrm{\Delta })`$ with an effective, energy-dependent temperature $`\tilde{\beta }=\beta \gamma e^{\gamma (f_0-f(x_1))}\leq \beta \gamma `$. The dynamical process on the STUN potential energy surface with fixed temperature can thus be interpreted as an MC process with an energy-dependent temperature on the original PES. In the latter process the temperature rises rapidly when the local energy is larger than $`f_0`$ and the particle diffuses (or tunnels) freely through potential barriers of arbitrary height. As better and better minima are found, ever larger portions of the high-energy part of the PES are flattened out. In analogy to the SA approach this behavior can be interpreted as a self-adjusting cooling schedule that is optimized as the simulation proceeds. Since the transformation in equation (1) is bounded, it is possible to further simplify the method: On the fixed energy-scale of the effective potential one can distinguish between phases corresponding to a local search and “tunneling” phases by comparing $`f_{\mathrm{STUN}}`$ with some fixed, problem-independent predefined threshold $`f_\mathrm{t}`$ (see Fig. 1(c)). For the success of the method it is essential that the minimization process spends some time tunneling and some time searching at any stage of the minimization process. We therefore adjust the parameter $`\beta `$ accordingly during the simulation: If a short-time moving average of $`f_{\mathrm{STUN}}`$ exceeds the threshold $`f_{\mathrm{thresh}}\approx 0.03`$, $`\beta `$ is reduced by some fixed factor, otherwise it is increased. Following this prescription the method is characterized by the single problem-dependent parameter ($`\gamma `$).

Applications: In order to test the performance of this algorithm we have investigated three families of complicated NP-hard minimization problems. For each problem we have determined either the exact ground-state energy or a good estimate thereof.
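Before turning to the individual problems, here is a minimal sketch of a single STUN update following the prescription above; the interface, the moving-average window, and the two $`\beta `$-adjustment factors are assumptions, not values from this work:

```
// Minimal sketch of one STUN Monte-Carlo step. Assumed: a caller proposes
// a move with energies f1 -> f2; the 0.99/0.01 averaging window and the
// 0.95/1.05 beta-adjustment factors are illustrative choices.
#include <cmath>
#include <random>

struct Stun {
    double gamma;     // tunneling parameter: the single problem-dependent knob
    double beta;      // inverse temperature, adapted during the run
    double f0;        // lowest energy encountered so far
    double avg = 0.0; // short-time moving average of f_STUN
    std::mt19937 rng{42};

    double fstun(double f) const { return 1.0 - std::exp(-gamma * (f - f0)); }

    // Accept or reject a move between configurations with energies f1 -> f2.
    bool step(double f1, double f2) {
        std::uniform_real_distribution<double> u(0.0, 1.0);
        bool accept = u(rng) < std::exp(-beta * (fstun(f2) - fstun(f1)));
        double f = accept ? f2 : f1;
        if (f < f0) f0 = f;                  // adjust the reference energy
        avg = 0.99 * avg + 0.01 * fstun(f);  // short-time moving average
        const double fthresh = 0.03;
        if (avg > fthresh) beta *= 0.95;     // above threshold: reduce beta
        else               beta *= 1.05;     // below threshold: increase beta
        return accept;
    }
};
```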
We computed the average error of the various optimization methods as a function of the computational effort, in order to determine the effort required to reach a prescribed accuracy. For the applications presented here we have fixed the functional form of the transformation and the “cooling schedule” for $`\beta `$ in order to demonstrate that these choices are sufficient to obtain adequate results. Obviously this does not guarantee that these choices are optimal.

(CSG) The determination of low-energy configurations of glassy PES is a notoriously difficult problem. We have verified by direct comparison that the method converges quickly to the exact ground states for two-dimensional short-range Ising spin-glasses of linear dimension $`10`$ to $`30`$ with either discrete or Gaussian distributions of the coupling parameters. Next we turned to the more demanding problem of the Coulomb spin-glass, where classical charges $`\{s_i\}`$ with $`s_i=\pm 1`$ are placed on fixed randomly chosen locations within the unit cube. The energy of the system

$$E(\{s_i\})=\sum _{i\ne j}^{N}\frac{s_is_j}{|\vec{r}_i-\vec{r}_j|},$$ (2)

is minimized as a function of the distribution of the $`\{s_i\}`$. The results of grand-canonical simulations for ten replicas of $`N=100`$ and $`N=500`$ charges are shown in Figure 2. We first conducted twenty very long STUN runs for each replica to determine upper bounds for the true ground-state energy. For the same charge distributions we then averaged the error of STUN, SA, parallel tempering (PT) and simulated tempering (ST) for twenty runs per replica as function of the numerical effort. We found that the average STUN energy converged in $`10^6`$ MC-steps to within 1% of the estimated true ground-state energy. Over two decades of the numerical effort we found a consistent significant advantage of the proposed method over the SA approach. Fitting the curves in the figure with a power-law dependence we estimate that STUN is two orders of magnitude more efficient than SA. We found no consistent ranking of ST and PT relative to SA for the two system sizes considered. Both methods offer alternative routes to overcome the freezing problem in SA. In PT the configurations of concurrent simulations at a variety of temperatures are occasionally exchanged. In ST only a single simulation is undertaken, but its temperature is considered to be a dynamical variable. Temperature and configuration are distributed according to $`p(s,T)=e^{-E(s)/T-g(T)}`$, and the weights $`g(T)`$ are optimized for a discretized temperature distribution, such that all temperatures are visited with equal probability. In both methods, a configuration can escape a local minimum when the instantaneous temperature is increased. The choice of the temperature set (along with values for $`g(T)`$) is system dependent and must be optimized much like the annealing schedule in SA. In accordance with other studies our results indicate that ST performs significantly better than SA for long simulation times. PT was successful only for the larger system (N=500), where it reached the same accuracy as STUN for $`10^6`$ steps. STUN converged faster than any of the competing methods, but showed a tendency to level off at high accuracy. In the limit of large computational effort its accuracy was matched by ST for N=100 and PT for N=500.

(TSP) The traveling salesman problem is another ubiquitous NP-hard minimization problem. We have investigated the problem in its simplest incarnation, i.e.,
(TSP) The traveling salesman problem is another ubiquitous NP-hard minimization problem. We have investigated the problem in its simplest incarnation, i.e. as a minimization of the Euclidean distance along a closed path of $`N`$ cities. Using long-range updates, i.e. the reversal and exchange of paths of arbitrary length, we found that both SA and STUN perform about equally well and reach the global optimum for $`N=20,50`$ and $`100`$ very quickly (see right side of Table (I)). Nevertheless it is instructive to analyze this model somewhat further, as it provides insight into the interplay of move construction and the complexity of the minimization problem. The unconstrained TSP is a rare instance among NP-hard minimization problems where it is possible to construct efficient “long-range” hops on the PES. In most practical applications of minimization problems related to the TSP, the construction of global moves is severely complicated by the existence of “hard constraints” on the routes taken. For such problems, as well as for the other examples reported here, the alteration of just a few variables of the configuration leads to unacceptably high energies in almost all cases. As a result, the construction of global moves is not an efficient way to facilitate the escape from local minima. When only local moves, i.e. transpositions of two adjacent cities, are considered, high barriers that were circumvented in the presence of global moves hamper the progress of SA. The results on the left side of Table (I) demonstrate that in this scenario SA performs significantly worse than STUN. (LABS) Finally we turn to the construction of low-autocorrelation binary sequences. The model can be cast as a ground-state problem for a one-dimensional classical spin-1/2 chain with long-range four-point interactions $$E=\frac{1}{N}\sum _{k=1}^{N-1}\left[\sum _{j=1}^{N-k}s_js_{j+k}\right]^2$$ (3) and is one of the hardest discrete minimization problems known. Even highly sophisticated and specialized optimization algorithms have failed to find configurations anywhere near (within 20% of) the ground-state energy that can be extrapolated from exact enumeration studies for small systems ($`N<50`$). The reason for this difficulty has been attributed to the “golf-course” character of the energy landscape, and there is convincing evidence that SA will fail to converge to the ground-state energy even in the limit of adiabatic cooling. The situation is significantly improved if the original potential energy surface is replaced by a piecewise constant energy surface that is obtained by a local minimization of the original PES at each point. Obviously the latter surface preserves all ground-state configurations and energies of the original PES, but eliminates many “plateaus” of the “golf-course” landscape. Using the modified energy surface we are able to compare SA to STUN, since SA can now determine the ground-state energy of medium-size systems (N=49) with a large, but finite, computational effort. Table II summarizes the results for the average error of 20 SA and STUN runs for system sizes N=49 and N=101 as a function of the computational effort. In direct comparison we find that STUN is two orders of magnitude more efficient than SA. Both methods are at least a dozen orders of magnitude more efficient than SA on the original PES.
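For reference, the LABS objective of Eq. (3) is compact to implement; a minimal sketch (0-indexed arrays, spins stored as $`\pm 1`$) is:

```python
import numpy as np

def labs_energy(s):
    """Energy of Eq. (3): E = (1/N) * sum_{k=1}^{N-1} C_k^2, with the
    autocorrelations C_k = sum_j s_j s_{j+k} of a +/-1 sequence."""
    s = np.asarray(s, dtype=float)
    N = len(s)
    corr = [np.dot(s[:N - k], s[k:]) for k in range(1, N)]
    return float(np.sum(np.square(corr)) / N)

print(labs_energy(np.random.default_rng(1).choice([-1, 1], size=49)))
```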
Discussion: Using three NP-hard minimization problems with high barriers separating the local minima we have demonstrated that the stochastic tunneling approach offers a reliable, generic and efficient route for the determination of low-energy configurations. One chief advantage of the method lies in the fact that only a single parameter must be adjusted to adapt it to a specific problem. One drawback of STUN is that, in contrast to e.g. PT, no thermodynamic expectation values for the system can be obtained from the simulation. Secondly, because the non-linear transformation maps any PES that is unbounded from above onto a bounded interval, the dynamical process in STUN will experience “tunneling” phases at any finite temperature. For PES that do not contain high barriers, or in the presence of efficient global moves that circumvent such barriers, STUN may therefore be less efficient than competing methods. In many realistic optimization problems, however, where the construction of global moves is exceedingly difficult or very expensive, the tunneling approach can ameliorate the difficulties associated with the existence of high energy barriers that separate the local minima of the PES. We gratefully acknowledge stimulating discussions with C. Gros and U. Hansmann.
# Precise Interplanetary Network Localization of a New Soft Gamma Repeater, SGR1627-41 ## 1 Introduction There is good evidence that the three known soft gamma repeaters (SGRs) are associated with supernova remnants (SNRs). SGR0525-66 appears to be in N49 in the Large Magellanic Cloud (Cline et al. 1982), and SGR1806-20 (Atteia et al. 1987) in G10.0-0.3 (Kulkarni & Frail 1993; Kouveliotou et al. 1994; Kulkarni et al. 1994; Murakami et al. 1994). SGR1900+14 lies close to, although not within, G42.8+0.6 (Kouveliotou et al. 1994; Hurley et al. 1999a). The SGRs are believed to be magnetars, i.e. neutron stars with magnetic field strengths in excess of $`10^{14}`$ G. In these objects, magnetic energy dominates rotational energy. In this paper we present gamma-ray observations of a new source, SGR1627-41, first detected in 1998 June, whose repetition, time histories, energy spectra, and location are all consistent with the properties of the known SGRs. We present observations by the 3rd interplanetary network (IPN), consisting in this case of the gamma-ray burst experiment aboard the Ulysses spacecraft, the KONUS experiment aboard the WIND spacecraft, and the Burst and Transient Source Experiment (BATSE) aboard the Compton Gamma-Ray Observatory (CGRO), and use them to derive a precise source location for SGR1627-41. The typical energy ranges for the experiments and the data considered here are 25-150, 50-200, and 25-100 keV, respectively. At the time of these observations, Ulysses was located $`\sim `$ 2900 light-seconds from earth while WIND was $`\sim `$ 3 light-seconds away; CGRO was in near-Earth orbit. ## 2 Observations The first confirmed Ulysses observation of the new SGR was on 1998 June 17. The last was on 1998 July 12. In all, 36 events were observed by Ulysses and an instrument on at least one near-earth spacecraft, either KONUS-WIND or BATSE, and triangulation gave mutually consistent annuli. (Numerous other events were observed by BATSE alone, and numerous candidate events were also observed by a single instrument, either KONUS or Ulysses GRB, which could not be localized; we do not consider them here.) Each of the three instruments has various data-collecting modes which may be summarized as either “triggered” or “untriggered”. The time resolutions in triggered modes are as fine as 2 ms, while in untriggered modes, they may be as coarse as $`\sim `$ 1 s. Table 1 lists the bursts and the modes, and figure 1 shows a typical time history. When observed in triggered mode by Ulysses, almost all of the events considered here had durations $`\lesssim `$ 200 ms; as observed by BATSE, several events had durations of up to $`\sim `$ 2 s. In principle, short events such as these present an ideal case for localization by triangulation, since the width of a triangulation annulus is proportional to the uncertainty in cross-correlating the time histories observed by a pair of spacecraft. However, two other factors must be considered. First, to obtain a small cross-correlation uncertainty, the event must be observed in triggered modes by Ulysses and another spacecraft; only 11 of the events in table 1 satisfy this criterion. (Intense activity from a repeating source tends to fill trigger memories, so that subsequent events can only be recorded in an untriggered mode.) Second, the proximity of the WIND and CGRO spacecraft results in triangulation annuli which intersect at grazing incidence for any given burst, resulting in a long, narrow error box.
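For orientation, the geometry behind these annuli is simple: the arrival-time difference between two spacecraft fixes the angle between the source direction and the baseline vector, and the annulus width scales with the timing uncertainty. A minimal sketch of this standard relation, with purely illustrative numbers, is:

```python
import numpy as np

def ipn_annulus(dt, sigma_t, baseline_ls):
    """Triangulation annulus for an arrival-time difference dt (s) between
    two spacecraft separated by baseline_ls light-seconds: the source lies
    at half-angle theta to the baseline, cos(theta) = dt / baseline_ls,
    with a half-width set by the cross-correlation uncertainty sigma_t."""
    theta = np.arccos(dt / baseline_ls)
    half_width = (sigma_t / baseline_ls) / np.sin(theta)
    return theta, half_width

# e.g. a Ulysses-Earth baseline of ~2900 light-seconds and 2 ms timing error
theta, w = ipn_annulus(dt=1500.0, sigma_t=0.002, baseline_ls=2900.0)
print(np.degrees(theta), np.degrees(w) * 3600.0)   # degrees, arcseconds
```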
To reduce the length of the error box for this SGR, we have combined bursts from the first and last triggered-mode observations, on June 17 and July 12. The slowly moving Ulysses–Earth vector, which is approximately the center of the triangulation annulus, produces a shorter error box. ## 3 Results Figure 2 shows a portion of the IPN error box, defined by the intersection of two $`\sim `$ 16″ wide annuli. The corners of this $`\sim `$ 7.6 arcmin$`^2`$ error box are given in table 2. Strictly speaking, the curvature of the annuli does not allow the resulting error box to be defined by straight-line segments; for this reason, we also give the centers, radii, and widths of the annuli in table 3. Figure 2 also includes the 843 MHz radio contours of the Galactic supernova remnant G337.0-0.1, taken from the catalog of Whiteoak and Green (1996). Finally, figure 2 also shows the position of a BeppoSAX quiescent X-ray source believed to be the SGR counterpart (Woods et al. 1999). The intersection of the 3$`\sigma `$ IPN annuli with the 95% confidence error circle defines a 2′ $`\times `$ 16″ error box whose area is $`\sim `$ 0.6 arcmin$`^2`$; the coordinates of this error box are given in table 4. These two error boxes are consistent with, but considerably smaller than, the following locations previously determined for this SGR: 1) the BATSE error circle derived from four triggers (Kouveliotou et al. 1998a). (Based on this initial location, the source was named SGR 1627-41). 2) the initial IPN annulus (Hurley et al. 1998a) 3) the restriction of the initial IPN annulus to locations consistent with BATSE earth-limb occultation considerations (Woods et al. 1998) 4) a refined, but still preliminary, IPN annulus (Hurley et al. 1998b), 5) the initial Rossi X-Ray Timing Explorer All Sky Monitor (RXTE-ASM) error box (Smith and Levine 1998), and 6) the final RXTE-ASM error box (Smith et al. 1999) ## 4 Discussion If we adopt as a working hypothesis that SGR1627-41 is a magnetar, and that magnetars have lifetimes $`\sim `$ 10,000 years (Thompson and Duncan 1995), then we would expect the IPN position to be coincident with that of a radio supernova remnant, whose observable lifetimes are $`\sim `$ 20,000 years (Braun, Goss, and Lyne 1989). Also, three SGRs are known to be quiescent soft X-ray point sources: SGR0525-66 (Rothschild, Kulkarni, & Lingenfelter 1994), SGR1806-20 (Murakami et al. 1994), and SGR1900+14 (Hurley et al. 1994, 1999b). With this in mind, we can then inquire how compelling the IPN/G337.0-0.1/BeppoSAX association is. We first calculate the probability that the 1.8° by 16″ section of the IPN annulus intersects a SNR in the 843 MHz survey of the MOST catalog (Whiteoak and Green 1996). A rigorously correct method to estimate this probability was presented by Kulkarni and Frail (1993), but a very simple argument can be used to derive an absolute lower limit. The MOST survey covered galactic coordinates 245° $`<\mathrm{l}<`$ 355°, $`|\mathrm{b}|<`$ 1.5°. 73 SNRs with measured sizes are cataloged, and they occupy $`\sim `$ 0.022 of the total area surveyed. Thus in the limit where the error box is a point, the probability of a chance association would be $`\sim `$ 0.022. However, two factors will increase this substantially. First, given the fact that SGR1900+14 appears to be outside its supernova remnant (Hurley et al.
1999a), we would probably accept an SGR/SNR association where the error box lay outside the remnant, increasing the effective occupied area of the survey. Second, the method of Kulkarni and Frail (1993), which is more appropriate for the long, narrow IPN error box, would result in a higher probability. We next ask what the probability is that the IPN error box will coincide with a quiescent soft X-ray source. One unidentified source with a 1′ error radius was detected in the BeppoSAX observations (Woods et al. 1999), and the field of view was 28′ in radius. Applying the method of Kulkarni and Frail (1993) gives a chance probability of 0.17. Thus the joint probability of the IPN/G337.0-0.1/BeppoSAX association is $`>`$ 0.004. Fortunately, more data which can substantiate the IPN/X-ray source/SNR association are forthcoming. An Advanced Satellite for Cosmology and Astrophysics observation of the X-ray source is planned for 1999. This will allow us to confirm the suggested 6.47 s period, observed with a low statistical significance in the BeppoSAX data (Woods et al. 1999), and possibly to derive the period derivative. A high spindown rate, as found for SGR1806-20 and SGR1900+14 (Kouveliotou et al. 1998b, 1999), would be a compelling argument that the source is indeed a magnetar associated with the SNR. If this indeed proves to be the case, the transverse velocity of the magnetar can be estimated. The distance to G337.0-0.1 has been estimated to be as small as 5.8 kpc by Case and Bhattacharya (1998), based on a new $`\mathrm{\Sigma }`$-D relation for supernova remnants, and as large as 11 kpc by Sarma et al. (1997), based on radio recombination lines. The displacement between the core of the remnant and the IPN/SAX error box is $`\sim `$ 1.3′. From this, we obtain velocities between $`\sim `$ 200 and 2000 km/s for the smaller distance and for assumed ages of 10,000 and 1000 y, consistent with the transverse velocities of the other three SGRs. KH is grateful to JPL for Ulysses support under Contract 958056, and to NASA for Compton Gamma-Ray Observatory support under contract NAG 5-3811.
# Interaction of Ultra-Cold Neutrons with Condensed Matter ## 1 Introduction Thermal and cold neutrons with wavelength 0.03 nm $`\lesssim \lambda \lesssim `$ 1 nm are an important tool for the investigation of condensed matter. The theory of their interaction with substance is well established. It is based on the use of the Fermi pseudopotential. For thermal and cold neutrons re-scattering of secondary waves is unimportant and one may use the Born approximation, which gives for the double differential cross-section the following expression $$\frac{d^2\sigma }{d\mathrm{\Omega }d\omega }=\frac{k^{\prime }}{2\pi k}\sum _{\nu \nu ^{\prime }}b_\nu ^{*}b_{\nu ^{\prime }}\chi _{\nu \nu ^{\prime }}(𝜿,\omega ).$$ (1) Here $`𝜿=𝐤^{\prime }-𝐤`$, $`\omega =\epsilon -\epsilon ^{\prime }`$, where $`𝐤`$ and $`\epsilon `$ are the momentum and energy of the incident neutron, $`𝐤^{\prime }`$ and $`\epsilon ^{\prime }`$ are the same quantities for the scattered neutron, and $`b_\nu `$ is the scattering amplitude on the bound $`\nu `$-th nucleus. The Fourier transform $$\chi _{\nu \nu ^{\prime }}(𝜿,\omega )=\int _{-\infty }^{+\infty }\chi _{\nu \nu ^{\prime }}(𝜿,t)e^{i\omega t}dt$$ (2) of the diagonal matrix element of the operator of nuclear position correlation $$\chi _{\nu \nu ^{\prime }}(𝜿,t)=\langle i|e^{-i𝜿\cdot \widehat{𝐑}_\nu (t)}e^{i𝜿\cdot \widehat{𝐑}_{\nu ^{\prime }}(0)}|i\rangle $$ (3) between the initial eigenfunctions $`|i\rangle `$ of the target Hamiltonian determines the response of the target to the scattered neutron wave. For ultra-cold neutrons (UCN), when $`\lambda \gtrsim 10`$ nm, re-scattering of the neutron wave in the medium is very essential, and when $`k^2<4\pi bn`$ re-scattering becomes the dominant process and results in total reflection from the surface of the target (of course, for positive $`b`$). Thus, the Born approximation in general, and the cross-section (1) in particular, cannot be used for UCN. To describe the elastic scattering of UCN by matter one uses a multiple scattering wave approach for fixed (unmovable) nuclei. It gives an effective repulsive (optical) potential for the neutron inside condensed matter, so a neutron wave with energy below the threshold decreases exponentially into the target. However, the escape of UCN from vessels, which has attracted the attention of experimenters for many years, as well as the small heating and cooling observed recently, belong to inelastic processes. In this paper we present the basic features of a general theory of elastic and inelastic scattering equally applicable to thermal and cold as well as ultra-cold neutrons, and which, as the neutron wavelength decreases, smoothly transforms into the usual scattering theory giving the cross-section (1). ## 2 General expressions A proper theory of UCN scattering should be based on the following postulates: (i) no Born approximation; (ii) no use of the Fermi potential; (iii) the target matter is a dynamical system. It is, of course, impossible to solve the many-body problem of the neutron – target interaction without any approximations. In our problem there are two main small parameters: the short range of the neutron – nucleus interaction (as compared with the interatomic distance and the wavelength), and the small neutron energy (as compared with the depth of the interaction potential). The first condition allows us to consider only the s-wave part of the wave function of the neutron – nucleus center-of-mass motion when their interaction is evaluated. And the second condition allows us, in this evaluation, to neglect the energy of the relative neutron – nucleus motion inside the interaction potential area. No specific model of the neutron – nucleus interaction potential is needed.
Its specific features described above (short range and large depth) allow one to use the scattering-length approximation. From these considerations we obtain a general expression for the double differential cross section $$\frac{d^2\sigma }{d\mathrm{\Omega }d\omega }=\frac{k^{\prime }}{2\pi k}\sum _{jj^{\prime }\nu \nu ^{\prime }}\varphi _\nu ^{j*}\varphi _{\nu ^{\prime }}^{j^{\prime }}\chi _{\nu \nu ^{\prime }}^{jj^{\prime }}(𝜿,\omega +E_i-E_j).$$ (4) It contains the neutron amplitudes $`\varphi _\nu ^j`$ and the Fourier transform of the nondiagonal matrix element of the correlation operator $$\chi _{\nu \nu ^{\prime }}^{jj^{\prime }}(𝜿,t)=\langle j|e^{-i𝜿\cdot \widehat{𝐑}_\nu (t)}e^{i𝜿\cdot \widehat{𝐑}_{\nu ^{\prime }}(0)}|j^{\prime }\rangle $$ (5) between the eigenfunctions $`|j\rangle `$ and $`|j^{\prime }\rangle `$ of the target Hamiltonian. Note that $`E_i`$ is the energy of the initial target state $`|i\rangle `$, and $`E_j`$ corresponds to a state $`|j\rangle `$. A set of linear algebraic equations for the neutron amplitudes $`\varphi _\nu ^j`$ is also found. Neglecting in these equations the terms that describe re-scattering, we get for the amplitudes $$\varphi _\nu ^j=\delta _{ij}\beta _\nu \left(1-i\alpha _\nu \langle i|\sqrt{\widehat{𝐤}_\nu ^2}|i\rangle \right),$$ (6) where $`\alpha _\nu `$ and $`\beta _\nu `$ are the scattering lengths on an isolated and a bound nucleus, respectively, and $`\widehat{𝐤}_\nu `$ is the operator of the impact momentum in the center-of-mass system of the neutron and nucleus. Thus, for thermal and cold neutrons Eq. (4) indeed transforms into Eq. (1), and the usual relation between $`\beta _\nu `$ and $`b_\nu `$ arises. In condensed matter we have $`𝐑_\nu =𝝆_\nu +𝐮_\nu `$, where $`𝝆_\nu `$ is the equilibrium position of the $`\nu `$-th nucleus, and $`𝐮_\nu `$ is its shift from equilibrium. Thus the factors $`e^{i𝐤\cdot 𝝆_\nu }`$ and $`e^{i𝐤\cdot 𝝆_{\nu ^{\prime }}}`$ may be extracted from the matrix element (5) and combined with the amplitudes $`\varphi _\nu ^j`$ and $`\varphi _{\nu ^{\prime }}^{j^{\prime }}`$ in (4). The equations for the new amplitudes $`\psi ^j(\nu )=(\varphi _\nu ^j/\beta _\nu )e^{i𝐤\cdot 𝝆_\nu }`$ are of the form $$\psi ^j(\nu )=\delta _{ij}e^{i𝐤\cdot 𝝆_\nu }-\sum _{j^{\prime }\nu ^{\prime }}\beta _{\nu ^{\prime }}G_{\nu \nu ^{\prime }}^{jj^{\prime }}\psi ^{j^{\prime }}(\nu ^{\prime }).$$ (7) The coefficients $`G_{\nu \nu ^{\prime }}^{jj^{\prime }}`$ are expressed in terms of the matrix elements (5). Then we use an expansion over $`𝐤\cdot 𝐮`$ for the functions $`\chi _{\nu \nu ^{\prime }}^{jj^{\prime }}(𝜿,\omega )`$, the coefficients $`G_{\nu \nu ^{\prime }}^{jj^{\prime }}`$, and the amplitudes $`\psi ^j(\nu )`$ (or $`\varphi _\nu ^j`$). The zero-order approximation ($`𝐮=0`$) corresponds to fixed nuclei and, therefore, results in only elastic scattering. The equations for the zero-order amplitudes $`\psi ^{(0)j}(\nu )=\delta _{ij}\psi _𝐤(\nu )`$, $$\psi _𝐤(\nu )=e^{i𝐤\cdot 𝝆_\nu }-\sum _{\nu ^{\prime }}\beta _{\nu ^{\prime }}G^i(\nu \nu ^{\prime })\psi _𝐤(\nu ^{\prime }),G^i(\nu \nu ^{\prime })=\frac{e^{ik|𝝆_\nu -𝝆_{\nu ^{\prime }}|}}{|𝝆_\nu -𝝆_{\nu ^{\prime }}|},$$ (8) coincide with the multiple scattering wave equations usually used to describe UCN elastic scattering.
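The fixed-nuclei system (8) is linear in the amplitudes and can be solved directly for a finite cluster of nuclei. A minimal sketch — in purely illustrative units, assuming the $`\nu ^{\prime }=\nu `$ self-term is excluded from the sum — is:

```python
import numpy as np

def elastic_amplitudes(kvec, beta, rho):
    """Solve Eq. (8): psi(nu) = e^{i k.rho_nu}
       - sum_{nu'} beta_{nu'} G(nu,nu') psi(nu'),
    with G(nu,nu') = e^{ik|rho_nu - rho_nu'|} / |rho_nu - rho_nu'|."""
    k = np.linalg.norm(kvec)
    free = np.exp(1j * rho @ kvec)            # incident wave at each nucleus
    r = np.linalg.norm(rho[:, None, :] - rho[None, :, :], axis=-1)
    np.fill_diagonal(r, 1.0)                  # placeholder to avoid 0/0
    G = np.exp(1j * k * r) / r
    np.fill_diagonal(G, 0.0)                  # drop the nu' = nu self-term
    A = np.eye(len(rho)) + G * beta[None, :]  # (I + G diag(beta)) psi = free
    return np.linalg.solve(A, free)

# e.g. 200 nuclei of equal bound scattering length in a small box
rng = np.random.default_rng(2)
rho = rng.random((200, 3)) * 20.0
psi = elastic_amplitudes(np.array([0.0, 0.0, 0.3]), np.full(200, 0.1), rho)
```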
## 3 Inelastic scattering Inelastic scattering arises in the second-order approximation in $`𝐤\cdot 𝐮`$. Analysis shows that there are four second-order terms in the inelastic cross section (4): $$\begin{array}{c}\sum _{jj^{\prime }}\varphi _\nu ^{j*}\varphi _{\nu ^{\prime }}^{j^{\prime }}\chi _{\nu \nu ^{\prime }}^{jj^{\prime }}\approx \varphi _\nu ^{(0)i*}\varphi _{\nu ^{\prime }}^{(0)i}\chi _{\nu \nu ^{\prime }}^{(2)ii}+\varphi _\nu ^{(0)i*}\varphi _{\nu ^{\prime }}^{(1)f}\chi _{\nu \nu ^{\prime }}^{(1)if}+\hfill \\ +\varphi _\nu ^{(1)f*}\varphi _{\nu ^{\prime }}^{(0)i}\chi _{\nu \nu ^{\prime }}^{(1)fi}+\varphi _\nu ^{(1)f*}\varphi _{\nu ^{\prime }}^{(1)f}\chi _{\nu \nu ^{\prime }}^{(0)ff}.\hfill \end{array}$$ (9) The four terms on the right-hand side of (9) are illustrated in Fig. 1. To disclose the physical meaning of these terms, it is instructive to compare our result with one based on improving (1) by replacing the Born amplitudes $`b_\nu `$ with the neutron amplitudes in the optical potential $`\varphi _\nu ^{(0)}`$. In such an approach an expansion similar to (9) would evidently result in only the first term (Fig. 1a), where re-scattering is taken into account only for the incident neutron wave (already included in $`\varphi _\nu ^{(0)}`$). The three other terms in (9) describe re-scattering of the out-going waves (in the inelastic channels). They are directly and indirectly generated by the nondiagonal matrix element $`\chi _{\nu \nu ^{\prime }}^{jj^{\prime }}`$. The first-order term for the diagonal matrix element ($`j=j^{\prime }`$) is absent. The final expression for the second-order inelastic cross section is of the form $$\frac{d\sigma _{ie}^{(2)}}{d\omega }=\frac{1}{2\pi mk}\int d^3k^{\prime }\delta (\epsilon ^{\prime }+\omega -\epsilon )\int \frac{d^3q}{(2\pi )^3}B_\alpha ^{*}(𝐪)B_\beta (𝐪)\mathrm{\Omega }_{\alpha \beta }(𝐪,\omega ),$$ (10) $$𝐁(𝐪)=\sum _\nu \beta _\nu e^{i𝐪\cdot 𝝆_\nu }\nabla _\nu \left(\psi _{𝐤^{\prime }}(\nu )\psi _𝐤^{*}(\nu )\right),$$ (11) where $`\mathrm{\Omega }_{\alpha \beta }(𝐪,\omega )`$ is related to the Fourier transform of the correlation function by the equation $$\langle i|\widehat{u}_{\nu \alpha }(t)\widehat{u}_{\nu ^{\prime }\beta }(0)|i\rangle =\int \frac{d^3qd\omega }{(2\pi )^4}e^{i𝐪\cdot (𝝆_\nu -𝝆_{\nu ^{\prime }})-i\omega t}\mathrm{\Omega }_{\alpha \beta }(𝐪,\omega ).$$ (12) Note that (11) contains symmetrically the functions of the elastic and inelastic neutron channels. In the Born approximation, i.e., neglecting re-scattering both in the elastic and in the inelastic channels, we have from (8): $`\psi _𝐤(\nu )\approx e^{i𝐤\cdot 𝝆_\nu }`$ and $`\psi _{𝐤^{\prime }}(\nu )\approx e^{i𝐤^{\prime }\cdot 𝝆_\nu }`$. Thus, $$𝐁(𝐪)\approx i𝜿\sum _\nu \beta _\nu e^{i(𝐪+𝜿)\cdot 𝝆_\nu },$$ (13) and the usually used formula for the inelastic cross section arises. In an earlier work an attempt was made to improve this approach by replacing the plane wave $`e^{i𝐤\cdot 𝝆_\nu }`$ in (13) by the damping function $`\psi _𝐤(\nu )`$. This attempt is clearly inconsistent, as such a replacement should be made in (11) before the differentiation with respect to $`𝝆_\nu `$. ## 4 Results To illustrate the possibilities of our approach we studied the small heating and cooling of UCN in a simple model. Let us consider UCN that fall normally onto a thick layer of uniform matter with an energy $`\epsilon `$ below the threshold $`U`$. Taking the correlation function in the phonon model we obtain for the probabilities of inelastic scattering per bounce the following expressions $$\frac{dw_{ie}^{(2)}}{d\epsilon ^{\prime }}|_{\epsilon ^{\prime }\le U}=\frac{2k\beta }{\pi U}\frac{T}{Ms^2}\frac{v^{\prime }}{s},\frac{dw_{ie}^{(2)}}{d\epsilon ^{\prime }}|_{\epsilon ^{\prime }>U}=\frac{k\beta }{\pi \epsilon ^{\prime }}\frac{T}{Ms^2}\frac{\frac{v^{\prime }}{s}}{1-\left(\frac{v^{\prime }}{2s}\right)^2},$$ (14) where $`T`$ is the target temperature, $`M`$ is the mass of the target nuclei, $`s`$ is the speed of sound, and $`v^{\prime }=\sqrt{2\epsilon ^{\prime }/m}`$ is the velocity of the scattered neutron. Note that the second formula is not valid in a small region just above the barrier, where oscillations governed by narrow resonances in transmission and reflection are of importance. However, these oscillations damp rapidly when $`\epsilon ^{\prime }`$ increases. Half of the neutrons with energy $`\epsilon ^{\prime }>U`$ are reflected from the target, while the other half are transmitted through it. The spectrum of inelastically scattered neutrons in this model has a maximum at $`\epsilon ^{\prime }\sim U`$.
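Equations (14) are simple to tabulate; a minimal sketch (all quantities in consistent, purely illustrative units) is:

```python
import numpy as np

def ucn_inelastic_spectrum(eps_p, U, k_beta, T_over_Ms2, s, m=1.0):
    """Per-bounce inelastic probability density of Eqs. (14), phonon model.
    eps_p and U share one energy unit; v' = sqrt(2*eps_p/m)."""
    v = np.sqrt(2.0 * eps_p / m)
    below = (2.0 * k_beta / (np.pi * U)) * T_over_Ms2 * v / s
    above = (k_beta / (np.pi * eps_p)) * T_over_Ms2 \
            * (v / s) / (1.0 - (v / (2.0 * s)) ** 2)
    return np.where(eps_p <= U, below, above)

eps = np.linspace(0.01, 5.0, 500)           # in units of the threshold U = 1
w = ucn_inelastic_spectrum(eps, U=1.0, k_beta=1e-6, T_over_Ms2=1.0, s=1e3)
print(eps[np.argmax(w)])                    # the maximum sits near eps' ~ U
```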
Indeed, it increases as $`v^{\prime }`$ below the barrier and falls off as $`1/v^{\prime }`$ above the barrier. Thus, qualitatively it is just of the form needed to explain the small heating and cooling of UCN in vessels. However, the magnitude of the effect in the phonon model is low, as $`k\beta \sim 10^{-6}`$ and $`v^{\prime }/s\sim 10^{-3}`$. Nevertheless, it should be noted that an evaluation of the inelastic scattering probability in the Born approximation, i.e., with $`𝐁(𝐪)`$ from (13), gives $$\frac{dw_{ie}^B}{d\epsilon ^{\prime }}\sim \frac{k\beta }{ms^2}\frac{T}{Ms^2}\frac{v^{\prime }}{s}.$$ (15) This spectrum, first, has no maximum in the low-energy region and, second, is additionally suppressed by the factor $`U/ms^2`$ as compared with Eqs. (14). This results from the direct proportionality of $`𝐁(𝐪)`$ (13) to $`𝜿`$, which vanishes for low energy transfer from the neutron to the target or vice versa. One should expect that the low-energy-transfer processes are governed not by phonons but rather by other collective excitations in condensed matter. In particular, when the propagation speed of the excitation is of the same order as the velocity of the UCN, the influence of matter fluctuations on re-scattering processes may be maximal. Our study of the UCN interaction with diffusion and thermal wave modes is now in progress. ## 5 Summary A general theory of neutron scattering (elastic and inelastic) is presented. It is applicable to the whole domain of slow neutrons and includes as limiting cases the existing theories for thermal and cold neutrons and for the elastic scattering of UCN. The only small parameters used are those of the interaction potential, which was assumed short-ranged and relatively deep, equivalent to the scattering-length approximation for the interaction. An explicit expression for the inelastic cross section is given. It differs from the usually used one by a proper account of re-scattering in the inelastic channel. It is shown that in the phonon model our approach qualitatively explains the low-energy-transfer processes. However, to provide the large observed probabilities of small heating and cooling of UCN in vessels, other collective excitations of condensed matter in the limit of small $`𝐪`$ and $`\omega `$ should apparently be taken into account. This work was supported by RFBR Grant 96-15-96548. FIGURE CAPTION Fig. 1. Contributions to the second-order inelastic cross section: (a) scattering – scattering interference, (b) scattering – re-scattering interference, (c) re-scattering – re-scattering interference. Solid and dashed lines represent neutrons in the elastic and inelastic channels, respectively. Open and crossed circles correspond to elastic and inelastic scattering, respectively.
# Constraints on the magnitude of $`\alpha `$ in dynamo theory Eric G. Blackman Theoretical Astrophysics, Caltech 130-33, Pasadena CA, 91125, USA and George B. Field Center for Astrophysics, 60 Garden St., Cambridge MA, 02139, USA (accepted to ApJ) ABSTRACT We consider the backreaction of the magnetic field on the magnetic dynamo coefficients and the role of boundary conditions in interpreting whether numerical evidence for suppression is dynamical. If a uniform field in a periodic box serves as the initial condition for modeling the backreaction on the turbulent EMF, then the magnitude of the turbulent EMF, and thus the dynamo coefficient $`\alpha `$, has a stringent upper limit that depends on the magnetic Reynolds number $`R_M`$ to a power of order $`-1`$. This is not a dynamic suppression but results simply from the imposed boundary conditions. In contrast, when mean field gradients are allowed within the simulation region, or non-periodic boundaries are used, the upper limit is independent of $`R_M`$ and takes its kinematic value. Thus only for simulations of the latter types could a measured suppression be the result of a dynamic backreaction. This is fundamental for understanding a long-standing controversy surrounding $`\alpha `$ suppression. Numerical simulations which do not allow any field gradients and invoke periodic boundary conditions appear to show a strong $`\alpha `$ suppression (e.g. Cattaneo & Hughes 1996). Simulations of accretion discs which allow field gradients and free boundary conditions (Brandenburg & Donner 1997) suggest a dynamo $`\alpha `$ which is not suppressed by a power of $`R_M`$. Our results are consistent with both types of simulations. Subject Headings: magnetic fields; galaxies: magnetic fields; Sun: magnetic fields; stars: magnetic fields; turbulence; accretion discs. 1. Introduction A leading candidate to explain the origin of large scale magnetic fields in stars and galaxies is mean-field turbulent magnetic dynamo theory (Moffatt 1978; Parker 1979; Krause & Rädler 1980; Zeldovich et al. 1983; Ruzmaikin et al. 1988; Beck et al. 1996). The theory appeals to a combination of helical turbulence (leading to the $`\alpha `$ effect), differential rotation (the $`\mathrm{\Omega }`$ effect), and turbulent diffusion to exponentiate an initial seed mean magnetic field. The total magnetic field is split into a mean component and a fluctuating component, and the rate of growth of the mean field is sought. The mean field grows on a length scale much larger than the outer scale of the turbulent velocity, with a growth time much larger than the eddy turnover time at the outer scale. A combination of kinetic and current helicity provides a statistical correlation of small scale loops favorable to exponential field growth. Turbulent diffusion is needed to redistribute the amplified mean field rapidly, to ensure a net mean flux gain inside the system of interest. Rapid growth of the fluctuating field necessarily accompanies the mean-field dynamo. Its impact upon the growth of the mean field, and the impact of the mean field itself on its own growth, are controversial. The controversy arises because Lorentz forces from the growing magnetic field react back on and complicate the turbulent motions driving the field growth (e.g. Cowling 1959; Piddington 1981; Kulsrud & Anderson 1992). It is tricky to disentangle the back reaction of the mean field from that of the fluctuating field.
Analytic studies and numerical simulations seem to disagree as to the extent to which the dynamo coefficients are suppressed by the back reaction of the mean field. Some numerical studies (e.g. Cattaneo & Vainshtein (1991), Vainshtein & Cattaneo (1992), Cattaneo (1994), Cattaneo & Hughes (1996)) and analytic studies (Gruzinov & Diamond (1994), Bhattacharjee & Yuan (1995), Kleeorin et al. (1995)) argue that the suppression of $`\alpha `$ takes the form $`\alpha \approx \alpha ^{(0)}/(1+R_M^p\overline{B}^2/v_0^2)`$, where $`\alpha ^{(0)}`$ is the value of $`\alpha `$ in the absence of a mean field, $`R_M`$ is the magnetic Reynolds number, $`\overline{𝐁}`$ is the mean field in velocity units, $`v_0`$ is the rms turbulent velocity, and $`p`$ is a number of order 1. Such a strong dependence on $`R_M`$ would prevent astrophysical dynamos from working, as $`R_M`$ is usually $`\gg 1`$. Other numerical studies (Brandenburg & Donner 1997) and analytic studies (e.g. Kraichnan 1979; Field et al. 1999; Chou & Fish 1999) suggest that $`p=0`$, so $`\alpha \approx \alpha ^{(0)}/(1+\overline{B}^2/v_0^2)`$ in the fully dynamic regime. In particular, Field et al. (1999) considered an expansion in the mean magnetic field (see also Vainshtein & Kitchatinov 1983; Montgomery & Chen 1984; Blackman & Chou 1997), and were able to derive the effect of the nonlinear back reaction on $`\alpha `$ in the case for which $`\nabla \overline{𝐁}=0`$. Their result is expressed in terms only of the difference between the zeroth-order kinetic and current helicities. They find that $`R_M`$ does not enter strongly, except possibly by suppressing the difference between the zeroth-order helicities, an effect which cannot depend upon $`\overline{𝐁}`$ and is not therein constrained. Blackman & Field (1999) have shown that some of the analytic approaches (e.g. Bhattacharjee & Yuan 1995; Gruzinov & Diamond 1994), which employ Ohm’s law dotted with the fluctuating component of the magnetic field, do not distinguish between turbulent quantities of the base (zeroth-order) state and quantities which are of higher order in the mean field. This distinction is important. When it is made, many arguments for suppression via such approaches fall through. Note that Blackman & Field (1999) do not prove that the dynamo survives back reaction as a result of their considerations, only that some analytic approaches to the problem can be challenged. Despite this challenge, the apparent result of extreme $`\alpha `$ suppression is seen in the simulation of Cattaneo & Hughes (1996). These authors externally force the turbulence, imposing periodic boundary conditions and a uniform mean field, and find that the suppression of $`\alpha `$ involves $`R_M`$ in the form given above. By contrast, the simulation of Brandenburg & Donner (1997) suggests that an $`\alpha \mathrm{\Omega }`$ dynamo may in fact be operative in an accretion disc whose turbulence is self-generated by a shearing instability, without $`R_M`$ entering the suppression. The latter simulation does not employ periodic boundary conditions and allows gradients in mean fields.
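The practical difference between the two contested suppression laws is stark; a minimal numerical illustration (the field strength and $`R_M`$ below are arbitrary but astrophysically flavored choices) is:

```python
def alpha_suppressed(B_over_v0, alpha0=1.0, R_M=1e10, p=1):
    """The two contested quenching forms: p = 1 is the 'catastrophic'
    R_M-dependent law, p = 0 the R_M-independent one."""
    return alpha0 / (1.0 + R_M**p * B_over_v0**2)

B = 1e-4   # a still-weak mean field, in units of v0
print(alpha_suppressed(B, p=1))   # ~0.01 * alpha0: dynamo effectively quenched
print(alpha_suppressed(B, p=0))   # ~alpha0: essentially the kinematic value
```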
In this paper we show that the suppression of $`\alpha `$ depends crucially on the boundary conditions. We find that when the mean quantities are defined by averaging over a periodic box, $`\alpha `$ has an upper limit that depends on a factor of $`R_M^{-p}`$, with $`p\sim 1`$. In the presence of mean field gradients and non-periodic boundary conditions, however, we find that the upper limit on the dynamo coefficients is significantly larger, and $`R_M`$ is not involved. The small upper limit in the periodic box case does not represent a dynamical suppression, but rather an apparent suppression which simply results from the boundary conditions. The results herein may be a step toward resolving controversies surrounding numerical suppression experiments. Central to the discussion is the equation for the time evolution of magnetic helicity. This equation was also employed by Seehafer (1994), who derived a suppression of $`\alpha `$ apparently consistent with that of Keinigs (1983) (and qualitatively consistent with the Cattaneo & Hughes (1996) simulation). The techniques of these additional two papers are different, and one should note that they do not separate zeroth- from higher-order quantities. Section 2 reviews the basic formalism of the dynamo coefficient expansion in orders of $`\overline{𝐁}`$. Section 3 shows that constraints on the magnitude of the EMF (and thus $`\alpha `$) result from dotting Ohm’s law for the fluctuating electric field with the fluctuating magnetic field, taking the average, and expanding to second order in the mean magnetic field. The results depend on the boundary conditions. For a periodic box, the upper limit on $`\alpha `$ is too small for a dynamo to work in practice, but this does not represent a dynamical suppression. In section 4 we interpret the results in terms of helicity flow and we discuss implications with respect to previous studies. Section 5 is the conclusion. 2. Basic Formalism The basic formalism employed herein is discussed in Field et al. (1999) and Blackman & Field (1999). The formalism combines some aspects of the standard textbook treatment (e.g. Moffatt 1978) with the modification that fluctuating quantities are divided into (1) a zeroth-order turbulent base state whose correlations are homogeneous, stationary and isotropic (though not necessarily mirror-symmetric!), and (2) a contribution which depends on the presence of a non-zero mean field (see also Vainshtein & Kitchatinov 1983; Kitchatinov et al. 1994). This higher-order contribution is definitely not isotropic and not necessarily homogeneous or stationary. More specifically, the induction equation describing the magnetic field evolution is $$\partial _t𝐁=\nabla \times (𝐕\times 𝐁)+\lambda \nabla ^2𝐁,$$ (1) where $`\lambda =\eta c^2/4\pi `$ is the magnetic viscosity, corresponding to resistivity $`\eta `$. Here $`𝐁=𝐛+\overline{𝐁}`$ is the magnetic field in velocity units, obtained by dividing by $`\sqrt{4\pi \rho }`$, and $`𝐛`$ and $`\overline{𝐁}`$ are the fluctuating and mean components of $`𝐁`$ respectively. We assume incompressibility. The equation for the mean field, derived by averaging (1), is $$\partial _t\overline{𝐁}=\nabla \times \langle 𝐯\times 𝐛\rangle -\overline{𝐕}\cdot \nabla \overline{𝐁}+\overline{𝐁}\cdot \nabla \overline{𝐕}+\lambda \nabla ^2\overline{𝐁}.$$ (2) The equation for $`𝐛`$ is given by subtracting (2) from (1), which gives $$\partial _t𝐛=\nabla \times (𝐯\times 𝐛)-\nabla \times \langle 𝐯\times 𝐛\rangle +\nabla \times (𝐯\times \overline{𝐁})+\nabla \times (\overline{𝐕}\times 𝐛).$$ (3) The term $`\overline{𝐁}\cdot \nabla \overline{𝐕}`$ in (2) describes the $`\mathrm{\Omega }`$-effect of differential rotation, and will not be discussed further here, while the term $`\overline{𝐕}\cdot \nabla \overline{𝐁}`$ can be eliminated by changing the frame of reference to one moving with $`\overline{𝐕}`$; both terms will be ignored in what follows.
The dynamo theorist must find the dependence of the turbulent EMF $`\langle 𝐯\times 𝐛\rangle `$ on $`\overline{𝐁}`$ so that (2) can be solved. In the absence of mean velocity fields, all mean vectors can be written in terms of the mean magnetic field. In particular, we have $$\langle 𝐯\times 𝐛\rangle _i=\alpha _{ij}\overline{B}_j-\beta _{ijk}\partial _j\overline{B}_k+\gamma _{ijkl}\mathrm{O}(\overline{B}/R^2)+\mathrm{\cdots },$$ (4) where $`\alpha _{ij}`$, $`\beta _{ijk}`$ and $`\gamma _{ijkl}`$ are explicit functions of correlations of turbulent quantities, but can depend implicitly on $`\overline{𝐁}`$ (Moffatt 1978) through their dependence on the induction equation for the fluctuating field. The order at which there is no implicit dependence on $`\overline{𝐁}`$ is the zeroth-order base state (see Field et al. 1999). The expansion order parameter is $`|\overline{𝐁}|/|𝐛^{(0)}|`$, which is indeed $`\ll 1`$ for the early dynamo evolution, and $`<1`$ in the Galaxy at present. In particular, we have $`𝐛=𝐛^{(0)}+\sum _n𝐛^{(n)}`$, and similarly for $`𝐯`$, where $`\sum _n𝐛^{(n)}<𝐛^{(0)}`$ and $`n`$ indicates the number of powers of $`|\overline{𝐁}|/|𝐛|`$. The zeroth-order base state correlations are composed of products of $`𝐛^{(0)}`$ and $`𝐯^{(0)}`$ and have no dependence on the mean field. The zeroth-order base state is taken to be homogeneous and isotropic–the violation of isotropy comes from the contributions due to higher-order fluctuating quantities, whose isotropy is broken by the mean field. Note that the zeroth-order state need not be reflection invariant, and it is important for dynamo theory that it is not. Correlations between higher-order quantities can be reduced to correlations of zeroth-order quantities times the respective products of $`n`$ linear functions of $`\overline{𝐁}`$. Thus for example, $`𝐛^{(2)}`$ is the anisotropic component of the fluctuating magnetic field which depends on two powers of $`\overline{𝐁}`$, and is found by twice iterating terms like $`𝐛\cdot \nabla \overline{𝐁}`$ in the induction equation to obtain an approximate solution in terms of $`𝐛^{(0)}`$ and $`𝐯^{(0)}`$. To zeroth order, the $`\alpha `$ tensor can be written $$\alpha _{ij}^{(0)}=\alpha ^{(0)}\delta _{ij},$$ (5) which highlights the isotropy of this zeroth-order quantity. In our previous work (Field et al. 1999) we have used the induction equation for the fluctuating components of the magnetic field and the Navier-Stokes equation for the fluctuating velocity to find the form of $`\alpha `$ in terms of correlations of the zeroth-order products (see also Blackman & Chou 1997). Calculating the turbulent EMF in the absence of gradients of $`\overline{𝐁}`$, to first order in $`\overline{B}`$, gives $`\langle 𝐯\times 𝐛\rangle ^{(1)}=\alpha ^{(0)}\overline{𝐁}`$, where $`\alpha ^{(0)}`$ is the sum of the kinetic and current helicities associated with the zeroth-order state, namely $$\begin{array}{c}\hfill \alpha ^{(0)}=-\frac{1}{3}\left[\int \langle 𝐯^{(0)}(t)\cdot \nabla \times 𝐯^{(0)}(t^{\prime })\rangle dt^{\prime }-\int \langle 𝐛^{(0)}(t)\cdot \nabla \times 𝐛^{(0)}(t^{\prime })\rangle dt^{\prime }\right]\\ \hfill \approx -\frac{1}{3}t_c\left[\langle 𝐯^{(0)}\cdot \nabla \times 𝐯^{(0)}\rangle -\langle 𝐛^{(0)}\cdot \nabla \times 𝐛^{(0)}\rangle \right],\end{array}$$ (6) where $`𝐯`$ is the turbulent velocity and $`t_c`$ is defined as the correlation time of the scale of the turbulence which dominates the averaged quantity. If we adopt a Kolmogorov energy spectrum (i.e. $`kb_k^2,kv_k^2\sim k^2E_k\sim k^{1/3}`$), then it might appear that the dominant contributions to the terms of (6) come from large $`k`$.
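Both correlations entering Eq. (6) are inexpensive to measure for fields sampled on a grid. A minimal sketch — assuming a cubic periodic box so the curls can be taken spectrally, with the correlation time $`t_c`$ supplied by hand — is:

```python
import numpy as np

def alpha0(v, b, dx, t_c):
    """Zeroth-order alpha of Eq. (6): -(t_c/3) * (<v.curl v> - <b.curl b>),
    for fields of shape (3, n, n, n) on a periodic grid with spacing dx."""
    def curl(f):
        k = 2j * np.pi * np.fft.fftfreq(f.shape[1], d=dx)
        kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
        fh = np.fft.fftn(f, axes=(1, 2, 3))
        ch = np.stack([ky * fh[2] - kz * fh[1],
                       kz * fh[0] - kx * fh[2],
                       kx * fh[1] - ky * fh[0]])
        return np.real(np.fft.ifftn(ch, axes=(1, 2, 3)))
    kinetic = np.mean(np.sum(v * curl(v), axis=0))   # <v . curl v>
    current = np.mean(np.sum(b * curl(b), axis=0))   # <b . curl b>
    return -(t_c / 3.0) * (kinetic - current)
```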
However, this is not the case: Pouquet et al. (1976) showed that if the forcing is at the outer wavenumber $`k_0=L^{-1}`$, most of the energy and helicity is concentrated there, and the turbulence for $`k>3k_0`$ is locked up into Alfvén waves which do not contribute to the correlations. It is therefore likely reasonable to assume that any helicity in the zeroth-order state is concentrated near $`k_0`$, in which case $`t_c\approx (v_0k_0)^{-1}\approx L/v_0`$. The first term in (6) was first derived by Steenbeck, Krause, & Rädler (1966). The second, current helicity, term in (6) was first derived by Pouquet et al. (1976); neither paper made the necessary distinction between zeroth- and higher-order quantities. In the next sections we will not re-derive the form of $`\alpha ^{(0)}`$ in terms of $`𝐛^{(0)}`$ and $`𝐯^{(0)}`$; instead we will derive an independent upper limit on $`\alpha ^{(0)}`$ from the use of Ohm’s law, the definition of the electric field in terms of the vector potential, and the equation for magnetic helicity evolution. We will invoke the assumption that the zeroth-order base state is isotropic and homogeneous, and we will assume that all anisotropies and inhomogeneities of higher-order correlations are due to mean fields. We will need the Reynolds relations (Rädler (1980)), i.e. that derivatives with respect to $`𝐱`$ or $`t`$ obey $`\partial _{t,𝐱}\langle X_iX_j\rangle =\langle \partial _{t,𝐱}(X_iX_j)\rangle `$ and $`\langle \overline{X}_ix_j\rangle =0`$, where $`X_i=\overline{X}_i+x_i`$ are components of vector functions of $`𝐱`$ and $`t`$. For statistical ensemble means, these hold when correlation times are small compared to the times over which mean quantities vary. For spatial means, defined by $`\langle X_i(𝐱,t)\rangle =V^{-1}\int X_i(𝐱+𝐬,t)d𝐬`$, the relations hold when the average is over a large enough $`V`$ that $`L\ll V^{\frac{1}{3}}\ll R\le D`$, where $`D`$ is the size of the system, $`R`$ is the scale of mean field variation and $`L`$ is the outer scale of the turbulence. Note that the scale of averaging is less than the overall system size. 3. Constraints on the turbulent EMF for periodic and non-periodic boundary conditions a. Constraint equations Let the electric field $`𝐄`$, like $`𝐁`$, be divided into a mean component $`\overline{𝐄}`$ and a fluctuating component $`𝐞`$. Ohm’s law for the mean field is thus $`\overline{𝐄}=-c^{-1}\langle 𝐕\times 𝐁\rangle +\eta \langle 𝐉\rangle =-c^{-1}\langle 𝐯\times 𝐛\rangle +\eta \overline{𝐉}`$ (7) for the case $`\overline{𝐕}=0`$, where $`\overline{𝐉}`$ is the current density and $`\eta `$ is the resistivity. We also have $`\langle 𝐄\cdot 𝐁\rangle =\overline{𝐄}\cdot \overline{𝐁}+\langle 𝐞\cdot 𝐛\rangle =-c^{-1}\langle 𝐯\times 𝐛\rangle \cdot \overline{𝐁}+\eta \overline{𝐉}\cdot \overline{𝐁}+\langle 𝐞\cdot 𝐛\rangle `$ (8) where we have used (7). A second expression for $`\langle 𝐄\cdot 𝐁\rangle `$ also follows from Ohm’s law without first splitting into mean and fluctuating components, that is $`\langle 𝐄\cdot 𝐁\rangle =-c^{-1}\langle (𝐕\times 𝐁)\cdot 𝐁\rangle +\eta \langle 𝐉\cdot 𝐁\rangle =\eta \langle 𝐉\cdot 𝐁\rangle =\eta \overline{𝐉}\cdot \overline{𝐁}+\eta \langle 𝐣\cdot 𝐛\rangle =\eta \overline{𝐉}\cdot \overline{𝐁}+c^{-1}\lambda \langle 𝐛\cdot \nabla \times 𝐛\rangle .`$ (9) By substituting (9) into (8), we obtain $$c^{-1}\langle 𝐯\times 𝐛\rangle \cdot \overline{𝐁}=-c^{-1}\lambda \langle 𝐛\cdot \nabla \times 𝐛\rangle +\langle 𝐞\cdot 𝐛\rangle ,$$ (10) an equation which will now constrain $`\langle 𝐯\times 𝐛\rangle `$. However, we must expand (10) to second order in $`\overline{𝐁}`$ (as defined in section 2) to constrain the turbulent EMF $`\langle 𝐯\times 𝐛\rangle `$. This is because to zeroth order, the left-hand side of (10) vanishes directly. To first order, the left side would be $`\langle 𝐯\times 𝐛^{(0)}\rangle \cdot \overline{𝐁}`$, but $`\langle 𝐯\times 𝐛^{(0)}\rangle =0`$, since vector averages of zeroth-order quantities vanish.
To second order in $`\overline{𝐁}`$ then, (10) implies that $$c^{-1}\langle 𝐯\times 𝐛^{(1)}\rangle \cdot \overline{𝐁}=-c^{-1}\lambda \langle 𝐛\cdot \nabla \times 𝐛\rangle ^{(2)}+\langle 𝐞\cdot 𝐛\rangle ^{(2)}.$$ (11) Because $`R_M\gg 1`$, significant limits on $`\langle 𝐯\times 𝐛^{(1)}\rangle `$, and thus on $`\alpha ^{(0)}`$, come from the $`\langle 𝐞\cdot 𝐛\rangle ^{(2)}`$ term above. The result of Seehafer (1994) and Keinigs (1983) amounts to (11) with the last term set to zero, but without distinguishing the order in mean fields (i.e. without the superscripts). We now focus on this last term, keeping in mind that it is second order in mean fields. Since $`\langle 𝐞\cdot 𝐛\rangle ^{(2)}`$ is second order in $`\overline{𝐁}`$, its most general form will be expressible as a sum of terms which each involve products of two types of quantities: 1. correlations of scalar or pseudoscalar products of zeroth-order quantities, and 2. quadratic scalar or pseudoscalar functions of $`\overline{𝐁}`$. Now $`\langle 𝐞\cdot 𝐛\rangle `$ can be written as a sum of a time derivative and a spatial divergence. Consider $`𝐞`$ in terms of the vector and scalar potentials $`𝐚`$ and $`\varphi `$: $$𝐞=-\nabla \varphi -(1/c)\partial _t𝐚.$$ (12) Dotting with $`𝐛=\nabla \times 𝐚`$ and averaging we have $$\langle 𝐞\cdot 𝐛\rangle =-\langle \nabla \varphi \cdot 𝐛\rangle -(1/c)\langle 𝐛\cdot \partial _t𝐚\rangle .$$ (13) After straightforward algebraic manipulation, application of the Reynolds rules and $`\nabla \cdot 𝐛=0`$, this equation implies $$\langle 𝐞\cdot 𝐛\rangle =-(1/2)\nabla \cdot \langle \varphi 𝐛\rangle +(1/2)\nabla \cdot \langle 𝐚\times 𝐞\rangle -(1/2c)\partial _t\langle 𝐚\cdot 𝐛\rangle \equiv -\partial _0\overline{h}^0-\partial _i\overline{h}^i=-\partial _\mu \overline{h}^\mu ,$$ (14) where we have defined a helicity density 4-vector for the fluctuating quantities $$[h_0,h_i]=[(1/2c)\langle 𝐚\cdot 𝐛\rangle ,(1/2)\langle \varphi 𝐛\rangle -(1/2)\langle 𝐚\times 𝐞\rangle ],$$ (15) and the overbar is used, as always, to mean the same thing as the brackets. b. Constraints for periodic boundary conditions We now investigate the implications of (14) for simulations of the type performed by Cattaneo & Hughes (1996), where the brackets are interpreted as a spatial average over a periodic box. Under these conditions, there are two important consequences. First, note that the second two terms of (14) vanish upon conversion to surface integrals, and we have $$\langle 𝐞\cdot 𝐛\rangle =-(1/2c)\partial _t\langle 𝐚\cdot 𝐛\rangle ,$$ (16) which is gauge invariant. The second consequence of the periodic box is that $`\partial _t\overline{𝐁}=0`$ for incompressible flows. This follows simply from (2): the last three terms of (2) would vanish, as they are all surface integrals. Using the Reynolds rules and vector identities, the first term can be written $`[\nabla \times \langle 𝐯\times 𝐛\rangle ]_j=\partial _i\langle b_iv_j\rangle -\partial _i\langle v_ib_j\rangle `$, which also vanishes by surface integration. These two consequences can be used to show that (16) vanishes for a periodic box, and thus the only contribution to the right side of (11) will come from the first term on the right. To second order in mean quantities, assuming $`𝐛(t=0)=0`$ and that all times are far enough from $`t=0`$ that $`𝐛(t)`$ does not correlate with any finite $`𝐚(0)`$, we have $$\langle 𝐚\cdot 𝐛\rangle ^{(2)}=\int \langle \partial _{t^{\prime }}𝐚^{(2)}(t^{\prime })\cdot 𝐛^{(0)}(t)\rangle dt^{\prime }+\int \int \langle \partial _{t^{\prime }}𝐚^{(1)}(t^{\prime })\cdot \partial _{t^{\prime \prime }}𝐛^{(1)}(t^{\prime \prime })\rangle dt^{\prime }dt^{\prime \prime }+\int \langle 𝐚^{(0)}(t)\cdot \partial _{t^{\prime }}𝐛^{(2)}(t^{\prime })\rangle dt^{\prime }.$$ (17) To express (17) explicitly in terms of mean fields, we use the equations of motion for $`𝐛`$ and $`𝐚`$. The use of $`\partial _t𝐛`$ from (3) for the last two terms of (17) leads directly to contributions depending on products of the mean fields $`\overline{𝐁}`$ or $`\overline{𝐕}`$ and turbulent quantities $`𝐛`$ and $`𝐯`$. Consider now the equation for $`𝐚`$, which comes from uncurling the equation for $`𝐛`$, namely $$\partial _t𝐚=(𝐯\times 𝐛)-\langle 𝐯\times 𝐛\rangle +(𝐯\times \overline{𝐁})+(\overline{𝐕}\times 𝐛)+\nabla \theta ,$$ (18) where $`\theta `$ is an arbitrary scalar field.
When (18) is used in the first and second terms on the right of (17), the periodic box nullifies the contribution from $`\theta `$. All other contributions depend only on products of $`𝐯`$, $`𝐛`$, $`\overline{𝐁}`$ and $`\overline{𝐕}`$. Thus when $`\overline{𝐕}=0`$, the only remaining mean field is $`\overline{𝐁}`$. Thus for a periodic box, $`\langle 𝐚\cdot 𝐛\rangle ^{(2)}`$ must be second order in $`\overline{𝐁}`$. Then, when plugged into (16), the time derivative will act on some quadratic function of $`\overline{𝐁}`$ multiplied by correlations of zeroth order. Since the zeroth-order quantities are time independent, isotropic, and homogeneous, the function of $`\overline{𝐁}`$ must be a scalar, denoted $`F`$, and we have $$\partial _t\langle 𝐚\cdot 𝐛\rangle ^{(2)}=\partial _t(F(\overline{𝐁})^{(2)}\overline{Q}_1^{(0)})=\overline{Q}_1^{(0)}\partial _t(F(\overline{𝐁})^{(2)})=0,$$ (19) where $`\overline{Q}_1^{(0)}`$ is a scalar or pseudoscalar correlation of zeroth-order quantities. The last equality of (19) follows from the stationarity of zeroth-order quantities, and from our proof that $`\overline{𝐁}`$ is time independent over the time scales of interest for the periodic box. We therefore conclude that $`\partial _t\langle 𝐚\cdot 𝐛\rangle =0`$ in (16). This result relates to the fact that for a periodic box, there is no periodic mean vector field $`\overline{𝐀}`$ whose curl is everywhere equal to $`\overline{𝐁}`$. The divergence of $`\overline{𝐁}`$ is still equal to zero, so Maxwell’s equations are satisfied, but $`\overline{𝐁}`$ is the only non-trivial mean field. Since in the Cattaneo & Hughes (1996) simulation $`\overline{𝐁}=`$ constant in both space and time, the second-order part of $`\langle 𝐯\times 𝐛\rangle \cdot \overline{𝐁}`$ is $`\alpha ^{(0)}\overline{B}^2`$. Using this, and (19), (16) and (11), we obtain $$|\alpha ^{(0)}|=\frac{c^{-1}\lambda |\langle 𝐛\cdot \nabla \times 𝐛\rangle ^{(2)}|}{\overline{B}^2}.$$ (20) Field et al. (1999) showed that conclusions about $`\alpha ^{(0)}`$ are also conclusions about $`\alpha `$ to all orders, by relating the fully non-linear $`\alpha `$ to $`\alpha ^{(0)}`$ and showing that in the limit of large $`\overline{B}`$, $`\alpha `$ is not catastrophically affected. Thus $`\alpha ^{(0)}`$ is an upper limit to $`\alpha `$, and so the result (20) shows that $`\alpha `$ will be small when the brackets indicate an average over a periodic box. The important point is that this is not a dynamical suppression from the backreaction, but a constraint on the magnitude of $`\alpha ^{(0)}`$ which is imposed by the boundary conditions. Notice that it is a constraint on the zeroth-order quantity, and so it cannot represent the effect of backreaction. c. Constraints for non-periodic boundary conditions If the averaging brackets are not taken over a periodic box, or if the scale of the averaging is much smaller than the overall scale of the system, then the divergence terms in (14) do not vanish. In addition, the $`\theta `$ term in (18) will contribute to (16). In this case, each term on the right of (14) is not gauge invariant. Thus, the only constraint we can place on the magnitude of the right side of (14) is on the sum of all the terms together. Writing down all possible second-order terms up to one spatial derivative in $`\overline{𝐁}`$, we have $$c\langle 𝐞\cdot 𝐛\rangle ^{(2)}=\overline{Q}_2^{(0)}\overline{𝐁}^2+\overline{Q}_3^{(0)}\overline{𝐁}\cdot \nabla \times \overline{𝐁}+\mathrm{O}(\overline{B}^2L^2/R^2)+\mathrm{\cdots },$$ (21) where $`\overline{Q}_2^{(0)}`$ and $`\overline{Q}_3^{(0)}`$ are correlations of zeroth-order averages, $`L`$ is the outer turbulent scale and $`R`$ is the mean field variation scale.
The quantity $`\overline{Q}_2^{(0)}`$ must have units of velocity, and thus be at most of order $`v_0`$, since it depends only on turbulent quantities. The quantity $`\overline{Q}_3^{(0)}`$ must have dimensions of viscosity, and must be at most of order $`v_0L`$, since it too depends only on turbulent quantities. The combination of terms in (21) has the same form as the combination of terms entering on the left side of (10) which would result from using (4). That is, since $`\langle 𝐯\times 𝐛\rangle \cdot \overline{𝐁}=\alpha ^{(0)}\overline{𝐁}^2-\beta ^{(0)}\overline{𝐁}\cdot \nabla \times \overline{𝐁}`$, we can identify $`\overline{Q}_2^{(0)}`$ with $`\alpha ^{(0)}`$ and $`\overline{Q}_3^{(0)}`$ with $`-\beta ^{(0)}`$. Thus for simulations in which the mean values are not taken over a periodic box, there is certainly no a priori restriction on $`\alpha ^{(0)}`$. Since now $`\alpha `$ can be as large as $`\alpha ^{(0)}`$, any simulation result indicating suppression of $`\alpha `$ under these relaxed boundary conditions would indeed be a dynamical suppression. So far there are no simulations which invoke such boundary conditions that show catastrophic suppression (c.f. Brandenburg & Donner 1997). 4. Discussion Section 3 shows that periodic boundary conditions impose an upper limit on $`\alpha `$ that does not represent a dynamical suppression. Non-periodic boundary conditions, or a finite scale separation between the system size and the mean field gradient scale, allow for a much higher upper limit on $`\alpha `$, namely the kinematic limit. The dynamical backreaction is testable only in simulations of the latter type. 4.1 Relation to magnetic helicity Here we point out a connection to magnetic helicity. Repeating Eqns. (12), (13) and (14) for the total $`𝐄`$ and $`𝐁`$ gives $$\langle 𝐄\cdot 𝐁\rangle =\overline{𝐄}\cdot \overline{𝐁}+\langle 𝐞\cdot 𝐛\rangle =-\frac{1}{2}\partial _\mu H^\mu =-\frac{1}{2}\partial _\mu \stackrel{~}{h}^\mu -\frac{1}{2}\partial _\mu \overline{h}^\mu \approx 0,$$ (22) where $`H^\mu `$ is the total magnetic helicity 4-vector (Field 1986), defined exactly as in (15) but with all fluctuating quantities replaced by their total values. Similarly, $`\stackrel{~}{h}^\mu `$ is the helicity 4-vector associated with the mean fields. The last similarity in (22) follows because $`R_M\gg 1`$ ($`\lambda \to 0`$) in the astrophysical plasmas of interest. Using $`\langle 𝐞\cdot 𝐛\rangle =-\partial _\mu \overline{h}^\mu `$, Eqn. (22) then shows that any non-negligible $`\langle 𝐞\cdot 𝐛\rangle `$ requires a finite but opposite $`\partial _\mu \stackrel{~}{h}^\mu `$. In general, for a non-vanishing turbulent EMF, $`\langle 𝐞\cdot 𝐛\rangle `$ must be non-zero, and thus the 4-divergences in (22) cannot vanish separately. Interestingly, when (22) is integrated over the total volume inside and outside of the rotator, and interpreted in terms of a flow of relative magnetic helicity (Berger & Field 1984), it can be shown that a working dynamo implies an associated magnetic energy flow through the magnetic rotator of interest, which likely leads to an active corona (Field & Blackman 2000; Blackman & Field 2000). 4.2 Implications for Previous and Future Studies The use of periodic boundary conditions in simulations appears to be unsuitable for testing the suppression of $`\alpha `$ in a real dynamo, unless the scale of mean field variations is much smaller than the scale of the periodicity. If periodic boundary conditions are used, one must also be careful about causality issues. The scale separation should at minimum be large enough that the Alfvén crossing time across the box is longer than the correlation time of the fluctuating quantities, and possibly even longer than the time scale for mean field variation.
Thus, the box could be periodic, but the dynamics of interest would occur in a non-periodic sub-region. The brackets which we have used to indicate averages would then represent averages over this sub-region, not the entire volume. Alternatively, the box could be non-periodic. The numerical simulations of Cattaneo & Hughes (1996) do not allow for any mean field gradients and employ periodic boundary conditions. The strong $`\alpha `$ reduction seen there is consistent with our suggestion that the suppression may not be dynamical, but may instead be a result of the boundary conditions. In contrast, the shearing-box accretion disk simulations of Brandenburg & Donner (1997) do employ non-periodic boundary conditions and allow mean field gradients. Interestingly, they do find that something like a mean-field dynamo is operating therein. The limited suppression that they find does not involve $`R_M`$. 5. Conclusions We have suggested that the cause of the apparent $`\alpha `$ suppression in numerical simulations which use periodic boundary conditions may not be dynamical, but may rather follow from the choice of boundary conditions. If the boundary conditions force all mean field gradients and spatial divergences to vanish, then the upper limit on $`\alpha `$ is given by (20). For non-periodic boundary conditions, or a box with significant scale separation between the mean field and the box size, the upper limit on the turbulent EMF is given by the kinematic value. This would be a consistent interpretation of the large suppression reported by Cattaneo & Hughes (1996). In contrast, Brandenburg and Donner (1997) report disk simulations which use non-periodic boundary conditions and do not find such a strong suppression. In summary, our results are consistent with seemingly contradictory simulations. Working dynamos in real astrophysical bodies (even in the kinematic approximation) require mean field gradients and scale separations between the overall system scale, the mean field averaging scale, and the fluctuating scale. In order to disentangle boundary effects from dynamical ones, future simulations of $`\alpha `$ suppression should include non-periodic boundary conditions or allow the mean field to change over scales smaller than the size of the overall box. This is a challenging task. Acknowledgements: G.B.F. acknowledges partial support from NASA grant NAGW-931. E.G.B. acknowledges support from NASA grant NAG5-7034.
# NONLOCAL EFFECTS IN QUANTUM GRAVITY ## I INTRODUCTION In a recent work, it was shown that the quantal behaviour of matter can be understood as a purely geometrical effect. In fact, the conformal degree of freedom of the space–time metric would be determined by quantal effects. In this view, the geometry has two physical significances. First, its conformal degree of freedom represents what is usually called the quantal effects. Secondly, its other degrees of freedom determine the causal structure of the space–time. This second part, in the absence of quantal effects (where the conformal factor is a constant), is called classical gravity. These two parts are highly coupled, so the theory is expected to be a quantum gravity theory. The theory primarily rests on the de Broglie–Bohm quantum theory, which is the causal counterpart of quantum mechanics. In Bohmian mechanics, any particle is always accompanied by an objectively real field exerting some force on the particle. This is called the quantum force. In the case of relativistic particles, the quantum potential is nothing but the mass of the particle. So the equation of motion of a relativistic particle is: $$\frac{d(\mathcal{M}u_\mu )}{d\tau }=c^2\partial _\mu \mathcal{M}$$ (1) where $$\mathcal{M}^2=m^2+\frac{\hbar ^2}{c^2}\frac{\Box |\Psi |}{|\Psi |}$$ (2) and $$\Box \Psi +\frac{m^2c^2}{\hbar ^2}\Psi =0$$ (3) The theory rests also on the de Broglie ansatz that the presence of the quantum force is identical to having a curved space–time. This fact can be seen simply by writing (1) in the Hamilton–Jacobi form: $$g^{\mu \nu }\partial _\mu S\partial _\nu S=\mathcal{M}^2c^2;\qquad \partial _\mu S=\mathcal{M}u_\mu $$ (4) Equation (4) can be rewritten as: $$\stackrel{~}{g}^{\mu \nu }\stackrel{~}{\partial }_\mu S\stackrel{~}{\partial }_\nu S=m^2c^2;\qquad \stackrel{~}{g}_{\mu \nu }=\frac{\mathcal{M}^2}{m^2}g_{\mu \nu }$$ (5) Accordingly, an appropriate action for quantum gravity is written in . The corresponding equations of motion are: $$\mathcal{R}\mathrm{\Omega }+6\Box \mathrm{\Omega }+\frac{2\kappa }{m}\rho \mathrm{\Omega }\left(\nabla _\mu S\nabla ^\mu S-2m^2\mathrm{\Omega }^2\right)+2\kappa \lambda \mathrm{\Omega }=0$$ (6) $$\nabla _\mu \left(\rho \mathrm{\Omega }^2\nabla ^\mu S\right)=0$$ (7) $$\left(\nabla _\mu S\nabla ^\mu S-m^2\mathrm{\Omega }^2\right)\mathrm{\Omega }^2\sqrt{\rho }+\frac{\hbar ^2}{2m}\left[\Box \left(\frac{\lambda }{\sqrt{\rho }}\right)-\lambda \frac{\Box \sqrt{\rho }}{\rho }\right]=0$$ (8) $`𝒢_{\mu \nu }-\frac{\left[g_{\mu \nu }\Box -\nabla _\mu \nabla _\nu \right]\mathrm{\Omega }^2}{\mathrm{\Omega }^2}-6\frac{\nabla _\mu \mathrm{\Omega }\nabla _\nu \mathrm{\Omega }}{\mathrm{\Omega }^2}+3g_{\mu \nu }\frac{\nabla _\alpha \mathrm{\Omega }\nabla ^\alpha \mathrm{\Omega }}{\mathrm{\Omega }^2}+\frac{2\kappa }{m}\rho \nabla _\mu S\nabla _\nu S-\frac{\kappa }{m}\rho g_{\mu \nu }\nabla _\alpha S\nabla ^\alpha S`$ $$+\kappa m\rho \mathrm{\Omega }^2g_{\mu \nu }+\frac{\kappa \hbar ^2}{m^2}\left[\nabla _\mu \sqrt{\rho }\nabla _\nu \left(\frac{\lambda }{\sqrt{\rho }}\right)+\nabla _\nu \sqrt{\rho }\nabla _\mu \left(\frac{\lambda }{\sqrt{\rho }}\right)\right]-\frac{\kappa \hbar ^2}{m^2}g_{\mu \nu }\nabla _\alpha \left(\lambda \frac{\nabla ^\alpha \sqrt{\rho }}{\sqrt{\rho }}\right)=0$$ (9) $$\mathrm{\Omega }^2=1+\frac{\hbar ^2}{m^2}\frac{\Box \sqrt{\rho }}{\sqrt{\rho }}$$ (10) where $`\mathrm{\Omega }`$ is the conformal degree of freedom of the metric, $`\lambda `$ is a Lagrange multiplier and $`\rho =\Psi ^{*}\Psi `$ is the matter density. A special case is when $`\lambda `$ can be expanded in powers of $`\alpha =\hbar ^2/m^2`$.
Then, it can be simply shown that in this case $`\lambda =0`$, and the equations of motion are: $$\nabla _\mu \left(\rho \mathrm{\Omega }^2\nabla ^\mu S\right)=0$$ (11) $$\nabla _\mu S\nabla ^\mu S=m^2\mathrm{\Omega }^2$$ (12) $$𝒢_{\mu \nu }=-\kappa 𝒯_{\mu \nu }^{(m)}-\kappa 𝒯_{\mu \nu }^{(\mathrm{\Omega })}$$ (13) $$𝒯_{\mu \nu }^{(m)}=\frac{\rho }{m}\nabla _\mu S\nabla _\nu S$$ (14) $$\kappa 𝒯_{\mu \nu }^{(\mathrm{\Omega })}=\frac{\left[g_{\mu \nu }\Box -\nabla _\mu \nabla _\nu \right]\mathrm{\Omega }^2}{\mathrm{\Omega }^2}+6\frac{\nabla _\mu \mathrm{\Omega }\nabla _\nu \mathrm{\Omega }}{\mathrm{\Omega }^2}-3g_{\mu \nu }\frac{\nabla _\alpha \mathrm{\Omega }\nabla ^\alpha \mathrm{\Omega }}{\mathrm{\Omega }^2}$$ (15) $$\mathrm{\Omega }^2=1+\alpha \frac{\Box \sqrt{\rho }}{\sqrt{\rho }}$$ (16) As one can see, there are two contributions to the background metric ($`g_{\mu \nu }`$). First, we have $`𝒯_{\mu \nu }^{(m)}`$, which represents the gravitational effects of matter. Second, there is $`𝒯_{\mu \nu }^{(\mathrm{\Omega })}`$, which is a result of the quantal effects of matter. Since the background metric is used in the evaluation of $`𝒯_{\mu \nu }^{(\mathrm{\Omega })}`$, the gravitational and quantal contributions to the background metric are so highly coupled that neither has any physical significance without the other. In this way the theory is a quantum gravity theory. It must be pointed out here that, since the conformal factor is meaningless as $`\rho \to 0`$, the geometry loses its meaning in this limit. This is a desired property, because it is in accord with Mach’s principle, which states that for an empty universe the space–time should be meaningless. A special aspect of the quantum force is that it is highly nonlocal. This property, which can be seen from equation (2), is an experimental matter of fact . Since the mass field given by (2) represents the conformal degree of freedom of the physical metric, quantum gravity is expected to be highly nonlocal. In the next section, this is shown explicitly for a specific problem. ## II Illustration of nonlocal effects in quantum gravity In order to illustrate how nonlocal effects can appear in quantum gravity through the quantum potential, suppose that the matter distribution is localized and has spherical symmetry. Then one has: $$\rho =\rho (t;r)$$ (17) $$\mathrm{\Omega }=\mathrm{\Omega }(t;r)$$ (18) Suppose, furthermore, that matter is at rest: $$\partial _0S=E(t;r)\quad \text{as }r\to \mathrm{\infty }$$ (19) $$\partial _iS=0;\qquad i=1,2,3$$ (20) One expects that at large $`r`$, where there is no matter, the background metric would be of the Schwarzschild form: $$g_{\mu \nu }=\left(\begin{array}{cccc}1-r_s/r& 0& 0& 0\\ 0& -1/(1-r_s/r)& 0& 0\\ 0& 0& -r^2& 0\\ 0& 0& 0& -r^2\mathrm{sin}^2\theta \end{array}\right)\quad \text{as }r\to \mathrm{\infty }$$ (21) where $`r_s`$ is a constant (the Schwarzschild radius). The validity of this approximation will be examined at the end. The equation of motion (12) relates $`E`$ and $`\mathrm{\Omega }`$: $$E=\frac{m\mathrm{\Omega }}{\sqrt{1-r_s/r}}$$ (22) In order to calculate the conformal factor $`\mathrm{\Omega }`$, one needs the specific form of $`\rho `$. It must be a function localized at $`r=0`$. So we choose it as: $$\rho (t;r)=A^2\mathrm{exp}[-2\beta (t)r^2]$$ (23) Using the relation (16), the conformal factor can be simply calculated. This leads to $`\mathrm{\Omega }^2=1+\alpha [\dot{\beta }^2r^4-\ddot{\beta }r^2-4\beta ^2r^2+6\beta ]`$, from which we get: $$\mathrm{\Omega }^2\simeq \alpha \dot{\beta }^2r^4\quad \text{as }r\to \mathrm{\infty }$$ (24) Now it is a simple task to check that the continuity equation (11) is satisfied automatically as $`r\to \mathrm{\infty }`$.
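As a cross-check of this computation, the following SymPy sketch (an editorial illustration, not part of the original derivation) evaluates $`\Box \sqrt{\rho }/\sqrt{\rho }`$ for the profile (23), assuming units with $`c=1`$ and the signature $`(+,-,-,-)`$ implied by (21), so that $`\Box =\partial _t^2-\nabla ^2`$; the leading large-$`r`$ term reproduces (24) for any $`\beta (t)`$:

```python
import sympy as sp

t, r, alpha, A = sp.symbols('t r alpha A', positive=True)
beta = sp.Function('beta')(t)
b, bd, bdd = sp.symbols('b bdot bddot')

sqrt_rho = A * sp.exp(-beta * r**2)      # sqrt(rho) for the profile (23)

# box operator with c = 1 and signature (+,-,-,-): d^2/dt^2 minus the
# spherically symmetric Laplacian (1/r^2) d/dr (r^2 d/dr)
box = sp.diff(sqrt_rho, t, 2) - sp.diff(r**2 * sp.diff(sqrt_rho, r), r) / r**2

Omega2 = 1 + alpha * sp.simplify(box / sqrt_rho)
Omega2 = (Omega2.subs(sp.Derivative(beta, (t, 2)), bdd)
                .subs(sp.Derivative(beta, t), bd)
                .subs(beta, b))
print(sp.expand(Omega2))
# -> 1 + alpha*(bdot**2*r**4 - bddot*r**2 - 4*b**2*r**2 + 6*b)
print(sp.limit(Omega2 / r**4, r, sp.oo))   # alpha*bdot**2, i.e. Eq. (24)
```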
This solution is an acceptable one only if the generalized Einstein equations (13) are satisfied. This is so if $`𝒯_{\mu \nu }^{(\mathrm{\Omega })}\to 0`$ as $`r\to \mathrm{\infty }`$. It can be shown that in the limit $`r\to \mathrm{\infty }`$ we have: $$\frac{\Box \mathrm{\Omega }^2}{\mathrm{\Omega }^2}=2(\ddot{\beta }/\dot{\beta })^2+2\dot{\ddot{\beta }}/\dot{\beta }-20/r^2$$ (25) $$\frac{\nabla _0\nabla _0\mathrm{\Omega }^2}{\mathrm{\Omega }^2}=2(\ddot{\beta }/\dot{\beta })^2+2\dot{\ddot{\beta }}/\dot{\beta }$$ (26) $$\frac{\nabla _1\nabla _1\mathrm{\Omega }^2}{\mathrm{\Omega }^2}=12/r^2$$ (27) $$\frac{\nabla _1\nabla _0\mathrm{\Omega }^2}{\mathrm{\Omega }^2}=8\ddot{\beta }/r\dot{\beta }$$ (28) $$\left(\frac{\nabla _0\mathrm{\Omega }}{\mathrm{\Omega }}\right)^2=(\ddot{\beta }/\dot{\beta })^2$$ (29) $$\frac{\nabla _1\mathrm{\Omega }\nabla _0\mathrm{\Omega }}{\mathrm{\Omega }^2}=2\ddot{\beta }/r\dot{\beta }$$ (30) So provided that the higher time derivatives of the scale factor of the matter density ($`\beta `$) are small with respect to its first time derivative, that is: $$\frac{\ddot{\beta }}{\dot{\beta }}\simeq 0;\qquad \frac{\dot{\ddot{\beta }}}{\dot{\beta }}\simeq 0\qquad \text{and so on,}$$ (31) one has: $$\underset{r\to \mathrm{\infty }}{lim}𝒯_\mu ^{(\mathrm{\Omega })\nu }=0$$ (32) Also we have from (14): $$\underset{r\to \mathrm{\infty }}{lim}𝒯_\mu ^{(m)\nu }=0$$ (33) So at large distances $`g_{\mu \nu }`$ satisfies Einstein’s equations in vacuum, $`𝒢_{\mu \nu }=0`$. Therefore, the solution (21) is acceptable. In this way we find a solution to the quantum gravity equations at large distances. Consequently, if the time variation of $`\beta `$ is small, the physical metric $`\stackrel{~}{g}_{\mu \nu }=\mathrm{\Omega }^2g_{\mu \nu }`$ is given by: $$\underset{r\to \mathrm{\infty }}{lim}\stackrel{~}{g}_{\mu \nu }=\alpha \dot{\beta }^2r^4g_{\mu \nu }^{(Schwarzschild)}$$ (34) An important point must be noted here. As was shown, a change in the matter distribution (due to $`\dot{\beta }`$) instantaneously alters the physical metric. This is because of the appearance of $`\dot{\beta }(t)`$ in equation (34), and it comes from the quantum potential term. We conclude that the specific form of the quantum potential leads to the appearance of nonlocal effects in quantum gravity.
no-problem/9903/cond-mat9903393.html
ar5iv
text
# Piezomagnetism and Stress Induced Paramagnetic Meissner Effect in Mechanically Loaded High-𝑇_𝑐 Granular Superconductors Despite the fact that granular superconductors have been actively studied (both experimentally and theoretically) for decades, they continue to contribute to a variety of intriguing and peculiar phenomena (both fundamental and important for potential applications), providing at the same time a useful tool for testing new theoretical concepts. To give just a few recent examples, it is sufficient to mention the paramagnetic Meissner effect (PME), which originates from a cooperative behavior of weak-links mediated orbital moments and was found to be responsible for unusual aging effects in high-$`T_c`$ granular superconductors (HTGS). Among others are also the recently introduced thermophase and piezophase effects, suggesting, respectively, a direct influence of a thermal gradient and of an applied stress on the phase difference between adjacent grains. Besides, using a model of random overdamped Josephson junction arrays, two dual time-parity violating effects in HTGS have been predicted. Namely, the appearance of a magnetic field induced electric polarization along with the concomitant change of the junction capacitance (magnetoelectric effect), and the existence of an electric field induced magnetization (converse magnetoelectric effect) via a Dzyaloshinskii–Moriya type interaction mechanism. In this Letter we discuss the possibility of two other interesting effects expected to occur in a granular material under sufficient mechanical loading. Specifically, we predict the existence of a stress induced paramagnetic moment in zero applied magnetic field (piezomagnetism) and its influence on the low-field magnetization (leading to a mechanically induced PME). The possibility of observing tangible piezoeffects in mechanically loaded grain boundary Josephson junctions (GBJJs) is based on the following arguments. It is well known that grain boundaries (GBs) are the natural sources of weak links (or GBJJs) in granular superconductors. Under plastic deformation, GBs were found to move rather rapidly via the movement of the grain boundary dislocations (GBDs) comprising these GBs. As a matter of fact, using the so-called method of acoustic emission, a plastic flow of GBDs with the maximum rate of $`v_0=1mm/s`$ has been registered in $`YBCO`$ ceramics at $`T=77K`$ under an external load of $`\sigma =10^7N/m^2`$. Using the above evidence, in Ref. 9 the piezophase response of a single GBJJ (created by the GBDs strain field $`ϵ_d`$ acting as an insulating barrier of thickness $`l`$ and height $`U`$ in a $`SIS`$-type junction with the Josephson energy $`J\propto e^{-l\sqrt{U}}`$) to an externally applied mechanical stress was considered. The resulting stress-strain and stress-current diagrams were found to exhibit a quasi-periodic (Fraunhofer-like) behavior typical for Josephson junctions (JJs). To understand how piezoeffects can manifest themselves through GBJJs, let us invoke an analogy with the so-called thermophase effect suggested originally by Guttman et al. (as a quantum mechanical alternative for the conventional thermoelectric effect) to occur in a single JJ and later applied to HTGS. In essence, the thermophase effect assumes a direct coupling between an applied temperature drop $`\mathrm{\Delta }T`$ and the resulting phase difference $`\mathrm{\Delta }\varphi `$ through a JJ.
When a rather small temperature gradient is applied to a JJ, an entropy-carrying normal current $`I_n=L_n\mathrm{\Delta }T`$ (where $`L_n`$ is the thermoelectric coefficient) is generated through such a junction. To satisfy the constraint dictated by the Meissner effect, the resulting supercurrent $`I_s=I_c\mathrm{sin}[\mathrm{\Delta }\varphi ]`$ (with $`I_c=2eJ/\hbar `$ being the Josephson critical current) develops a phase difference through the weak link. In other words, the temperature gradient stimulates a superconducting phase gradient which in turn drives the reverse supercurrent. The normal current is locally canceled by a counterflow of supercurrent, so that the total current through the junction $`I=I_n+I_s=0`$. As a result, the supercurrent $`I_c\mathrm{sin}[\mathrm{\Delta }\varphi ]=-I_n=-L_n\mathrm{\Delta }T`$ generates a nonzero phase difference via a transient Seebeck thermoelectric field, leading to the linear thermophase effect $`\mathrm{\Delta }\varphi =-\mathrm{arcsin}(L_{tp}\mathrm{\Delta }T)\simeq -L_{tp}\mathrm{\Delta }T`$ with $`L_{tp}=L_n/I_c(T)`$. By analogy, we can introduce a piezophase effect (as a quantum alternative for the conventional piezoelectric effect) through a JJ. Indeed, the linear conventional piezoelectric effect relates the induced polarization $`P_n`$ to an applied strain $`ϵ`$ as $`P_n=d_nϵ`$, where $`d_n`$ is the piezoelectric coefficient. The corresponding normal piezocurrent density is $`j_n=dP_n/dt=d_n\dot{ϵ}`$, where $`\dot{ϵ}(\sigma )`$ is the rate of plastic deformation (under an applied stress $`\sigma `$) which depends on the number of GBDs of density $`\rho `$ and a mean dislocation rate $`v_d`$ as follows: $`\dot{ϵ}(\sigma )=b\rho v_d(\sigma )`$ (where $`b`$ is the absolute value of the appropriate Burgers vector). In turn, $`v_d\simeq v_0(\sigma /\sigma _m)`$ with $`\sigma _m`$ being the so-called ultimate stress. To meet the requirements imposed by the Meissner effect, in response to the induced normal piezocurrent, a corresponding Josephson supercurrent of density $`j_s=dP_s/dt=j_c\mathrm{sin}[\mathrm{\Delta }\varphi ]`$ should emerge within the contact. Here $`P_s=2enb`$ is the Cooper pairs’ induced polarization with $`n=N/V`$ the pair number density, and $`j_c=2ebJ/\hbar V`$ is the critical current density. The neutrality conditions ($`j_n+j_s=0`$ and $`P_n+P_s=const`$) will then lead to the linear piezophase effect $`\mathrm{\Delta }\varphi =-\mathrm{arcsin}[d_{pp}\dot{ϵ}(\sigma )]\simeq -d_{pp}\dot{ϵ}(\sigma )`$ (with $`d_{pp}=d_n/j_c`$ being the piezophase coefficient) and to the concomitant change of the pair number density under an applied strain, viz., $`\mathrm{\Delta }n(ϵ)=-d_{pn}ϵ`$ with $`d_{pn}=d_n/2eb`$. Given the markedly different scales of stress induced changes in defect-free thin films and weak-links-ridden ceramics, it should be possible to experimentally register the piezophase effects suggested here.
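As an editorial illustration of the chain of estimates above, the following sketch evaluates the piezophase shift by combining $`v_d\simeq v_0(\sigma /\sigma _m)`$, $`\dot{ϵ}=b\rho v_d`$ and $`\mathrm{\Delta }\varphi =-\mathrm{arcsin}[d_{pp}\dot{ϵ}(\sigma )]`$. The dislocation density value is an assumed placeholder (chosen so that $`\dot{ϵ}_0=b\rho v_0`$ matches the $`10^{-2}s^{-1}`$ order of magnitude quoted further below); the remaining magnitudes loosely follow the estimates given later in the text:

```python
import numpy as np

def piezophase(sigma, sigma_m=1e7, d_n=1e2, j_c=1e4, b=1e-8, rho=1e9, v0=1e-3):
    """Stress-induced phase difference across a GBJJ (SI units, illustrative).

    Chains v_d ~ v0*(sigma/sigma_m), eps_dot = b*rho*v_d and
    delta_phi = -arcsin(d_pp*eps_dot) with d_pp = d_n/j_c.
    The value rho = 1e9 1/m^2 is an assumed placeholder dislocation density."""
    eps_dot = b * rho * v0 * (sigma / sigma_m)   # plastic deformation rate
    x = (d_n / j_c) * eps_dot                    # d_pp * eps_dot
    if abs(x) > 1.0:
        raise ValueError("no static Josephson solution: |d_pp*eps_dot| > 1")
    return -np.arcsin(x)

# sigma/sigma_m = 0.1 gives a tiny phase shift, deep in the linear regime
print(piezophase(1e6))
```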
To adequately describe the magnetic properties of a granular superconductor, we employ a model of a random three-dimensional (3D) overdamped Josephson junction array which is based on the well known tunneling Hamiltonian $$\mathcal{H}=\underset{ij}{\overset{N}{\sum }}J(r_{ij})[1-\mathrm{cos}\varphi _{ij}],$$ (1) where $`\{i\}=\vec{r}_i`$ is a 3D lattice vector, $`N`$ is the number of grains (or weak links), $`J(r_{ij})`$ is the Josephson coupling energy with $`\vec{r}_{ij}=\vec{r}_i-\vec{r}_j`$ the separation between the grains; the gauge invariant phase difference is defined as $`\varphi _{ij}=\varphi _{ij}^0-A_{ij}`$, where $`\varphi _{ij}^0=\varphi _i-\varphi _j`$ with $`\varphi _i`$ being the phase of the superconducting order parameter, and $`A_{ij}=\frac{2\pi }{\mathrm{\Phi }_0}\int_i^j\vec{A}(\vec{r})\cdot d\vec{l}`$ is the frustration parameter with $`\vec{A}(\vec{r})`$ the electromagnetic vector potential, which involves both external fields and possible self-field effects (see below); $`\mathrm{\Phi }_0=h/2e`$ is the quantum of flux. In the present paper, we consider a long-range interaction between grains (with $`J(r_{ij})=J`$) and model the true short-range behavior of an HTGS sample through the randomness in the position of the superconducting grains in the array (see below). For simplicity, we shall ignore the role of Coulomb interaction effects, assuming that the grain’s charging energy $`E_c\ll J`$ (where $`E_c=e^2/2C`$, with $`C`$ the capacitance of the junction). As we shall see, this condition is reasonably satisfied for the effects under discussion. According to the above-discussed scenario, under an applied stress the superconducting phase difference will acquire an additional contribution $`\delta \varphi _{ij}(\sigma )=B\vec{\sigma }\cdot \vec{r}_{ij}`$, where $`B=d_n\dot{ϵ}_0/\sigma _mj_cb`$ with $`\dot{ϵ}_0=b\rho v_0`$ being the maximum deformation rate and the other parameters defined earlier. If, in addition to the external loading, the network of superconducting grains is under the influence of an applied frustrating magnetic field $`\vec{H}`$, the total phase difference through the contact reads (where $`\vec{R}_{ij}=(\vec{r}_i+\vec{r}_j)/2`$) $$\varphi _{ij}(\vec{H},\vec{\sigma })=\varphi _{ij}^0+\frac{\pi }{\mathrm{\Phi }_0}(\vec{r}_{ij}\times \vec{R}_{ij})\cdot \vec{H}-B\vec{\sigma }\cdot \vec{r}_{ij}.$$ (2) It is well known that the self-induced Josephson fields can in principle be quite pronounced for large-size junctions, even in zero applied magnetic field. So, to safely neglect the influence of these effects in a real material, the corresponding Josephson penetration length $`\lambda _J`$ must be much larger than the junction (or grain) size. Specifically, this condition will be satisfied for short junctions with size $`d\ll \lambda _J`$, where $`\lambda _J=\sqrt{\mathrm{\Phi }_0/4\pi \mu _0j_c\lambda _L}`$ with $`\lambda _L`$ being the grain London penetration depth and $`j_c`$ its Josephson critical current density. In particular, since in HTGS $`\lambda _L\simeq 150nm`$, the above criterion will be rather well met for $`d\simeq 1\mu m`$ and $`j_c\simeq 10^4A/m^2`$, which are typical parameters for HTGS ceramics. Likewise, to ensure the uniformity of the applied stress $`\sigma `$, we also assume that $`d\ll \lambda _\sigma `$, where $`\lambda _\sigma `$ is a characteristic length over which $`\sigma `$ is kept homogeneous.
When the Josephson supercurrent $`I_{ij}^s=I_c\mathrm{sin}\varphi _{ij}`$ circulates around a set of grains (that form a random area plaquette), it induces a random magnetic moment $`\vec{\mu }_s`$ of the Josephson network $$\vec{\mu }_s\equiv -\frac{\partial \mathcal{H}}{\partial \vec{H}}=\underset{ij}{\sum }I_{ij}^s(\vec{r}_{ij}\times \vec{R}_{ij}),$$ (3) which results in the stress induced net magnetization $$\vec{M}_s(\vec{H},\vec{\sigma })\equiv \frac{1}{V}\langle \vec{\mu }_s\rangle =\underset{0}{\overset{\mathrm{\infty }}{\int }}d\vec{r}_{ij}\int d\vec{R}_{ij}f(\vec{r}_{ij},\vec{R}_{ij})\vec{\mu }_s,$$ (4) where $`V`$ is the sample’s volume and $`f`$ the joint probability distribution function (see below). To capture the very essence of the superconducting piezomagnetic effect, in what follows we assume for simplicity that an unloaded sample does not possess any spontaneous magnetization at zero magnetic field (that is, $`M_s(0,0)=0`$) and that its Meissner response to a small applied field $`H`$ is purely diamagnetic (that is, $`M_s(H,0)\propto -H`$). According to Eq. (2), this condition implies $`\varphi _{ij}^0=2\pi m`$ for the initial phase difference with $`m=0,\pm 1,\pm 2,\mathrm{}`$. Incidentally, this is also a requirement for current conservation at zero temperature. In order to obtain an explicit expression for the piezomagnetization, we consider a site positional disorder that allows for small random radial displacements. Namely, the sites in a 3D cubic lattice are assumed to move from their equilibrium positions according to the normalized (separable) distribution function $`f(\vec{r}_{ij},\vec{R}_{ij})\simeq f_r(\vec{r}_{ij})f_R(\vec{R}_{ij})`$. As usual, it can be shown that the main qualitative results of this paper do not depend on the particular choice of the probability distribution function. For simplicity, here we assume an exponential distribution law for the distance between grains, $`f_r(\vec{r})=f(x)f(y)f(z)`$ with $`f(x_j)=(1/d)e^{-x_j/d}`$, and some short range distribution for the center-of-mass probability $`f_R(\vec{R})`$ (around some constant value $`D`$). While the specific form of the latter distribution is not important for the effects under discussion, it is worthwhile to mention that the former distribution function $`f_r(\vec{r})`$ reflects the short-range character of the Josephson coupling in granular superconductors, where $`J(\vec{r}_{ij})=Je^{-\vec{\kappa }\cdot \vec{r}_{ij}}`$. For an isotropic arrangement of identical grains, with spacing $`d`$ between the centers of adjacent grains, we have $`\vec{\kappa }=(\frac{1}{d},\frac{1}{d},\frac{1}{d})`$, and thus $`d`$ is of the order of the average grain size. Taking the applied stress along the $`x`$-axis, $`\vec{\sigma }=(\sigma ,0,0)`$, normal to the applied magnetic field $`\vec{H}=(0,0,H)`$, we finally get $$M_s(H,\sigma )=-M_0\frac{H_{tot}(H,\sigma )/H_0}{[1+H_{tot}^2(H,\sigma )/H_0^2]^2}$$ (5) for the induced transverse magnetization (along the $`z`$-axis), where $`H_{tot}(H,\sigma )=H-H^{*}(\sigma )`$ is the total magnetic field, with $`H^{*}(\sigma )=(\sigma /\sigma _0)H_0`$ the stress-induced contribution. Here, $`M_0=I_cSN/V`$ with $`S=\pi dD`$ being the projected area around the Josephson contact, $`H_0=\mathrm{\Phi }_0/S`$, and $`\sigma _0=\sigma _m(j_c/j_d)(b/d)`$ with $`j_d=d_n\dot{ϵ}_0`$ and $`\dot{ϵ}_0=b\rho v_0`$ being the maximum values of the dislocation current density and the plastic deformation rate, respectively.
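In reduced units $`M_0=H_0=\sigma _0=1`$, the full shape of Eq. (5) is easy to tabulate; the following sketch (an editorial illustration) reproduces the qualitative trends discussed next for Figs. 1 and 2:

```python
import numpy as np

# Eq. (5) in reduced units M0 = H0 = sigma0 = 1: H_tot = H - H*(sigma)
# with H*(sigma) = sigma. At H ~ 0 the response is purely paramagnetic
# (Ms > 0 for sigma > 0); at sigma = 0 it is diamagnetic, and the sign
# reverses once the piezomagnetic field exceeds the applied field.
def Ms(H, sigma):
    Htot = H - sigma
    return -Htot / (1.0 + Htot**2) ** 2

sigma = np.linspace(0.0, 3.0, 7)          # applied stress in units of sigma0
for H in (0.0, 0.05, 0.5):                # applied field in units of H0
    print(f"H = {H}:", np.round(Ms(H, sigma), 3))
```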
Fig. 1 presents the stress induced magnetization at different applied magnetic fields, calculated according to Eq. (5). As is seen, in practically zero magnetic field the piezomagnetization is purely paramagnetic (solid line), exhibiting a strongly nonlinear behavior. With increasing stress, it first increases, reaching a maximum, and then rather rapidly dies away. Under the influence of small applied magnetic fields (dotted and dashed lines), the piezomagnetism turns diamagnetic (for low external stress), with its peak shifting toward higher loading. At the same time, Fig. 2 shows the changes of the initial stress-free diamagnetic magnetization (solid line) under an applied stress. As we see, already relatively small values of the applied stress render the low-field Meissner phase strongly paramagnetic (dotted and dashed lines), simultaneously shifting the peak toward higher magnetic fields. According to Eq. (5), the initially diamagnetic Meissner effect turns paramagnetic as soon as the piezomagnetic contribution $`H^{*}(\sigma )`$ exceeds the applied magnetic field $`H`$. To see whether this can actually happen in a real material, let us estimate the typical values of the piezomagnetic field $`H^{*}`$. By definition, $`H^{*}(\sigma )=(\sigma /\sigma _m)(j_d/j_c)(d/b)H_0`$, where $`H_0=\mathrm{\Phi }_0/S`$ is a characteristic magnetic field and $`\sigma _m`$ is an ultimate stress field. Typically, for HTGS ceramics $`S\simeq 10\mu m^2`$, leading to $`H_0\simeq 1G`$. To estimate the needed value of the dislocation current density $`j_d`$, we turn to the available experimental data. According to Ref. 14, a rather strong polarization under compressive pressure $`\sigma /\sigma _m\simeq 0.1`$ was observed in $`YBCO`$ ceramic samples at $`T=77K`$, yielding $`d_n=10^2C/m^2`$ for the piezoelectric coefficient. Usually, for GBJJs $`\dot{ϵ}_0\simeq 10^{-2}s^{-1}`$ and $`b\simeq 10nm`$, leading to $`j_d=d_n\dot{ϵ}_0\simeq 1A/m^2`$ for the maximum dislocation current density. Using the typical values of the critical current density $`j_c=10^4A/m^2`$ and grain size $`d\simeq 1\mu m`$, we arrive at the following estimate of the piezomagnetic field: $`H^{*}\simeq 10^{-2}H_0`$. Thus, the predicted stress induced paramagnetic Meissner effect (PME) should be observable for applied magnetic fields $`H\le 10^{-2}H_0\simeq 0.01G`$, which corresponds to the region where the original PME was first registered. In turn, the piezoelectric coefficient $`d_n`$ is related to an effective charge $`Q`$ in the GBJJ as $`d_n=(Q/S)(d/b)^2`$. Given the above-obtained estimates, we get a reasonable value of $`Q\simeq 10^{-13}C`$ for the charge accumulated at a GBJJ. It is interesting to notice that the above values of the applied stress $`\sigma `$ and the resulting effective charge $`Q`$ correspond (via the so-called electroplastic effect) to an equivalent applied electric field $`E=b^2\sigma /Q\simeq 10^7V/m`$, at which rather pronounced electric-field induced effects in HTGS were either observed (like an increase of the critical current in $`YBCO`$ ceramics) or predicted to occur (like a converse magnetoelectric effect). In conclusion, let us briefly discuss the contribution of the so-called striction effects (which usually accompany any stress related changes).
According to Ref. 28, the Josephson projected area $`S`$ was found to slightly decrease under pressure, thus leading to some increase of the characteristic field $`H_0=\mathrm{\Phi }_0/S`$. In view of Eq. (5), this means that a smaller compression stress will be needed to actually reverse the sign of the induced magnetization $`M_s`$. Furthermore, if an unloaded granular superconductor already exhibits the PME, due to the orbital-current induced spontaneous magnetization resulting from an initial phase difference $`\varphi _{ij}^0=2\pi r`$ in Eq. (2) with fractional $`r`$ (in particular, $`r=1/2`$ corresponds to the so-called $`\pi `$-type state), then according to our predictions this effect will either be further enhanced by applying a compression (with $`\sigma >0`$) or will disappear under a strong enough extension (with $`\sigma <0`$) able to compensate the pre-existing effect. Given the very distinctive nonlinear character of $`M_s(H,\sigma )`$ (see Figs. 1 and 2), the above-estimated range of accessible parameters suggests quite an optimistic possibility of observing the predicted effects experimentally, either in HTGS ceramics or in a specially prepared system of arrays of superconducting grains. Finally, it is worth noting that a rather strong nonlinear response of the transport properties in $`HgBaCaCuO`$ ceramics was observed under compressive pressure with $`\sigma /\sigma _m\simeq 0.8`$. Specifically, the critical current at $`\sigma =9kbar`$ was found to be three times higher than its value at $`\sigma =1.5kbar`$, clearly indicating a weak-links-mediated origin of the phenomenon (in the best defect-free thin films this ratio never exceeds a few percent). This work was done during my stay at ETH–Zürich and was funded by the Swiss National Science Foundation. I thank Professor T.M. Rice for hospitality and stimulating discussions on the subject.
no-problem/9903/cond-mat9903336.html
ar5iv
text
# Lamellae alignment by shear flow in a model of a diblock copolymer ## I Introduction We study a mesoscopic model of a block copolymer to describe the re-orientation of a lamellar structure by an imposed uniform shear flow that is either constant or periodic in time. This is a first step towards understanding known phenomenology pertaining to the response of the block copolymer microstructure to shear flows near the isotropic to lamellar transition . The model that we use is based on the free energy of the diblock copolymer obtained by Leibler , and later by Ohta and Kawasaki , to which an advection term is added to incorporate the effect of the externally applied shear flow. We identify spatially periodic solutions that correspond to a lamellar structure, and determine their stability against a number of long wavelength perturbations. Modulated phases are ubiquitous in physical and chemical systems . They generally result from the competition between short and long range forces. Additional symmetries of the system (e.g., translational or rotational invariance) often lead in practice to rich textures, especially in systems of extent that is large compared with the characteristic wavelength of the modulation. Modulated phases often have interesting macroscopic behavior, and exhibit a complex response to externally applied forces. While it is possible to devise approximate constitutive laws to describe the macroscopic response of such phases, it is often necessary to explicitly address their evolution at the mesoscopic scale, and to determine how microstructure evolution influences the macroscopic response. We focus here on the lamellar phase observed in diblock copolymers below the order-disorder transition . Diblock copolymers are formed by two distinct sequences of monomers, A and B, that are mutually incompatible but chemically linked. At sufficiently low temperatures, species A and B would segregate to form macroscopic domains, but the chemical bonding between the two leads to a modulated phase instead. The detailed equilibrium microstructure depends on the relative molecular weight of the chains and has been studied in detail within a mean field approximation . We follow in this paper the approach of Leibler who introduced an order parameter field $`\psi (𝐫)`$ that describes the local number density difference of monomers A and B. The order parameter is defined to be zero above the order-disorder transition, and is finite and nonuniform below. Leibler’s analysis was restricted to the weak-segregation limit (close to the order-disorder transition) within which the thickness of the interface separating the A-rich from the A-poor regions is of the order of the wavelength of the microstructure. Later, Ohta and Kawasaki extended Leibler’s free energy to the strong segregation range, and showed the importance of long ranged effective interactions that arise from the connectivity of the polymer chains. We use this latter free energy as the driving force for the re-orientation dynamics, allowing also for passive advection of the order parameter by an imposed shear flow. The model studied is similar to that considered by Fredrickson , except that we neglect thermal fluctuations and assume that both phases have the same viscosity. The stability of a lamellar structure to secondary instabilities has already been addressed in the literature, although in the absence of shear flow . 
In fact, the similarity between the equations governing the motion of the lamellae and the Swift-Hohenberg model of Rayleigh-Bénard convection gives rise to a common phenomenology. The lamellar structure is found to be stable only within a range of wavenumbers. At higher wavenumbers it undergoes an Eckhaus instability which generally results in a decrease of wavenumber, whereas for wavenumbers below that range the structure undergoes a zig-zag instability. In this paper, we extend these stability results to explicitly include fluid advection by the imposed shear. We find that the stability boundaries are modified with respect to the zero velocity case in a way that depends not only on the amplitude of the shear, but also on the orientation of the lamellae relative to the flow. Of course, the latter dependence is absent in earlier treatments that neglected advection. Our results are a first step towards understanding the complex re-orientation phenomenology that has been observed experimentally. In this initial analysis, we introduce a number of restrictive assumptions that we plan to relax in future work. First, our calculations are primarily two dimensional and thus can only address the so-called parallel and transverse orientations. Second, and more importantly, we neglect thermal fluctuations and any viscosity contrast between the two phases, elements that have been argued to be important in determining the main qualitative features of the re-orientation process. We also neglect flow induced by the lamellae themselves in response to the applied shear. These secondary flows could become important for the late stage coarsening of the lamellar structure. Finally, we have confined our study to locating the boundaries of several secondary instabilities of the lamellar structure, but have not addressed the evolution following the instabilities, nor the coarsening of the resulting textured pattern. However, our results concerning the periodic base solution under flow, and its stability against long wavelength perturbations, are the prerequisite building blocks of a more general theory. ## II Mesoscopic model equations Following Leibler, we introduce an order parameter field, $`\psi (𝐫)`$, function of the local density difference of monomers A and B. For a block copolymer with equal length sub-chains, the order parameter is $`\psi (𝐫)=\frac{\rho _A(𝐫)-\rho _B(𝐫)}{2\rho _0}`$, where $`\rho _X,X=A,B`$, is the density of monomer $`X`$, and $`\rho _0`$ is the total density, assumed constant (incompressibility condition). A mean field free energy $`\mathcal{F}\left[\psi (𝐫)\right]`$ was derived by Leibler for a monodisperse diblock copolymer melt, and later by Ohta and Kawasaki. In units of $`k_BT`$, where $`k_B`$ is Boltzmann’s constant and $`T`$ the temperature, the free energy comprises two terms, $`\mathcal{F}/(\rho _0k_BT)=\mathcal{F}_s+\mathcal{F}_l`$. The term $`\mathcal{F}_s`$ incorporates local monomer interactions, $$\mathcal{F}_s=\int d𝐫\left[\frac{\kappa }{2}|\nabla \psi |^2-\frac{\tau }{2}\psi ^2+\frac{u}{4}\psi ^4\right],$$ and is formally identical to the Ginzburg-Landau free energy commonly used to describe phase separation in a binary fluid mixture. The parameters $`\kappa ,\tau `$ and $`B`$ can be approximately related to the polymerization index $`N`$, Kuhn’s statistical length $`b`$ and the Flory-Huggins parameter $`\chi `$ through the relations $`\kappa =\frac{b^2}{3}`$, $`\tau =\frac{2\chi N-7.2}{N}`$ and $`B=\frac{144}{N^2b^2}`$.
Long range interactions arising from the covalent bond connecting the two sub-chains are contained in $`\mathcal{F}_l`$, $$\mathcal{F}_l=\frac{B}{2}\int d𝐫\int d𝐫^{}G(𝐫-𝐫^{})\psi (𝐫)\psi (𝐫^{})$$ where the kernel $`G(𝐫-𝐫^{})`$ is the infinite space Green’s function of the Laplacian operator, $`\nabla ^2G(𝐫-𝐫^{})=-\delta (𝐫-𝐫^{})`$. The nonlocal interactions arising from the connectivity of the chains lead to a thermodynamic equilibrium state with a nonuniform density. In our case of equal length sub-chains, the equilibrium configuration is a periodic lamellar structure, with a characteristic wavelength of the order of 100 Å for a typical system. Given this mean field free energy, a phenomenological set of equations that govern the temporal relaxation of equilibrium thermal fluctuations of $`\psi (𝐫)`$ and of the fluid velocity $`𝐯`$ has been derived close to the order-disorder transition. A similar phenomenological description can be used below the order-disorder transition under the assumption that the local relaxation of the order parameter field at the mesoscopic scale is still driven by minimization of the same free energy. Under this assumption, $`\psi `$ obeys the time-dependent Ginzburg-Landau equation, $$\frac{\partial \psi }{\partial t}+𝐯\cdot \nabla \psi =M\nabla ^2\frac{\delta \mathcal{F}}{\delta \psi },$$ (1) where $`M`$ is a phenomenological mobility coefficient, and $`\delta \mathcal{F}/\delta \psi `$ stands for functional differentiation with respect to $`\psi `$. Equation (1) includes the effect of advection by a local velocity field $`𝐯`$, which satisfies an extended Navier-Stokes equation $$\frac{\partial 𝐯}{\partial t}+(𝐯\cdot \nabla )𝐯=\nu \nabla ^2𝐯-\frac{\nabla p}{\rho }+\frac{\delta \mathcal{F}}{\delta \psi }\frac{\nabla \psi }{\rho },$$ (2) where $`\nu `$ is the kinematic viscosity of the fluid, assumed constant and independent of $`\psi `$, $`p`$ is the fluid pressure, and appropriate boundary conditions for both $`\psi `$ and $`𝐯`$ must be introduced. The last term on the right-hand side of Eq. (2) is required to ensure that there cannot be free energy reduction by pure advection of $`\psi `$. This term is sometimes referred to as the osmotic stress, and it leads to the creation of rotational flow by curved lamellae that is directed towards their local center of curvature. We focus on a layer of block copolymer, unbounded in the $`x`$ and $`y`$ directions, that is uniformly sheared along the $`z`$ direction (Fig. 1). The layer is confined between the stationary $`z=0`$ plane and the plane $`z=d`$, which is uniformly displaced parallel to itself with a velocity $`v_{\mathrm{plane}}=sd`$ in the case of a steady shear, and $`v_{\mathrm{plane}}=\gamma d\omega \mathrm{cos}(\omega t)`$ in the case of an oscillatory shear of angular frequency $`\omega `$. $`s`$ is the dimensional shear rate in the steady case, and $`\gamma `$ is the dimensionless strain amplitude in the case of an oscillatory shear. The general problem defined by Eqs. (1) and (2) can be considerably simplified by noting that under typical experimental conditions inertia is negligible ($`\omega d^2/\nu \ll 1`$). Furthermore, we will neglect in this paper the term $`(\delta \mathcal{F}/\delta \psi )\nabla \psi /\rho `$ in Eq. (2). Under these conditions, Eq. (2) admits a simple solution that satisfies the specified conditions at the moving plane: $`𝐯=sz\widehat{i}`$ for a steady shear, and $`𝐯=\gamma \omega z\mathrm{cos}(\omega t)\widehat{i}`$ for an oscillatory shear, where $`\widehat{i}`$ is the unit vector in the $`x`$ direction. Therefore, the problem reduces to a single governing equation for the order parameter field $`\psi `$ (Eq. (1)) under a prescribed advection velocity $`𝐯`$.
As discussed in the introduction, previous theoretical work on the formation and stability of lamellar structures further neglected advection of $`\psi `$ in Eq. (1). The results presented in this paper are free of this restriction. Since the base state to be considered is comprised of spatially uniform lamellae advected by the shear flow, it is convenient to introduce a new frame of reference in which the velocity vanishes. Define a new system of non mutually orthogonal coordinates $`(x_1,x_2,x_3)`$ by $`x_1=x-a(t)z`$, $`x_2=y`$ and $`x_3=z`$. The dimensionless quantity $`a(t)=st`$ for a steady shear, and $`a(t)=\gamma \mathrm{sin}(\omega t)`$ for an oscillatory shear. All the calculations reported in this paper, both analytical and numerical, have been performed in this new frame of reference. Analytical calculations consider an unbounded geometry in the $`x_1`$ and $`x_2`$ directions and periodic boundary conditions along the $`x_3`$ direction, whereas the numerical computations have been conducted in a two dimensional, square domain on the $`(x_1,x_3)`$ plane and consider periodic boundary conditions along both $`x_1`$ and $`x_3`$. Note that both frames of reference coincide at $`t=0`$, and at equal successive intervals of one half the period of the shear in the case of oscillatory shear. Dimensionless variables are introduced by defining a scale of length by $`\sqrt{\kappa /\tau }`$, a scale of time by $`\kappa /M\tau ^2`$, and an order parameter scale by $`\sqrt{\tau /u}`$. In the transformed frame of reference and in dimensionless variables, Eq. (1) reads, $$\frac{\partial \psi }{\partial t}={\nabla ^{}}^2(-\psi +\psi ^3-{\nabla ^{}}^2\psi )-\frac{B\kappa }{\tau ^2}\psi ,$$ (3) with $${\nabla ^{}}^2=\left[1+a^2(t)\right]\frac{\partial ^2}{\partial x_1^2}-2a(t)\frac{\partial ^2}{\partial x_1\partial x_3}+\frac{\partial ^2}{\partial x_3^2}+\frac{\partial ^2}{\partial x_2^2}.$$ There is only one dimensionless group remaining, $`B\kappa /\tau ^2`$, which will simply be denoted by $`B`$ in what follows. We will first show in Sec. III that below (but close to) the order-disorder transition point (in the weak-segregation limit), Eq. (3) admits periodic solutions. Their stability against infinitesimal long wavelength perturbations is the subject of Sec. IV. ## III Lamellar solution in the weak-segregation limit In the absence of shear ($`a(t)=0`$) the uniform solution of Eq. (3), $`\psi =0`$, loses stability at the order-disorder transition. In a mean field approximation the transition occurs at $`B_c=1/4`$. This is a supercritical bifurcation with a critical wavenumber $`q_c=\sqrt{1/2}`$. Near threshold ($`0\le ϵ=(B_c-B)/2B_c\ll 1`$) there exist periodic stationary solutions of the form $$\psi (𝐫)=2A\mathrm{cos}(𝐪\cdot 𝐫)+A_1\mathrm{cos}(3𝐪\cdot 𝐫)+\mathrm{},$$ (4) with $`A^2=\frac{q^2-q^4-B}{3q^2}\sim O(ϵ)`$, and $`A_1`$ of higher order in $`ϵ`$. This solution only exists for a range of wavenumbers $`q`$ such that $`\sigma (q^2)=q^2-q^4-B\ge 0`$. For nonzero shear, we seek solutions of Eq. (3) of the form of Eq. (4), with $`𝐫=(x_1,x_2,x_3)`$ expressed in the sheared frame basis set $`\{𝐞_1=\widehat{i},𝐞_2=\widehat{j},𝐞_3=a(t)\widehat{i}+\widehat{k}\}`$. Wavevectors are expressed in the reciprocal basis set $`\{𝐠_1=\widehat{i}-a(t)\widehat{k},𝐠_2=\widehat{j},𝐠_3=\widehat{k}\}`$. Therefore we keep the same functional form as for zero shear, but allow a time-dependent amplitude $`A(t)`$. Note that the components of the wavevector $`𝐪`$ are assumed to be independent of time and given by $`q_1=q_x(t=0)`$, $`q_2=q_y(t=0)`$ and $`q_3=q_z(t=0)`$, respectively. The wavevector itself depends on time through the time dependence of the reciprocal basis set.
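As a consistency check on the transformed operator, one can let the lab-frame Laplacian act on a single lamellar mode whose components are constant in the sheared frame; the following SymPy sketch (an editorial illustration) confirms that it reduces to multiplication by $`-[(1+a^2)q_1^2-2aq_1q_3+q_3^2]`$:

```python
import sympy as sp

x, z, t, q1, q3 = sp.symbols('x z t q1 q3', real=True)
a = sp.Function('a', real=True)(t)

# one lamellar mode with constant components (q1, q3) in the sheared frame
# x1 = x - a(t) z, x3 = z, written in lab coordinates:
psi = sp.exp(sp.I * (q1 * (x - a * z) + q3 * z))

# lab-frame Laplacian acting on the mode
coeff = sp.expand(sp.simplify((sp.diff(psi, x, 2) + sp.diff(psi, z, 2)) / psi))
print(coeff)
# -> -q1**2*a(t)**2 + 2*q1*q3*a(t) - q1**2 - q3**2, i.e. the plane wave is an
#    eigenfunction of nabla'^2 with eigenvalue -q^2(t), as used below
```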
Such a solution corresponds to a spatially uniform lamellar structure with a time-dependent wavevector that adiabatically follows the imposed shear in the laboratory frame (see Fig. 1). Inserting Eq. (4) into Eq. (3), we find to order $`ϵ^{3/2}`$ ($`\sigma `$ is itself of order $`ϵ`$), $$\frac{dA}{dt}=\sigma \left[q^2(t)\right]A-3q^2(t)A^3,$$ (5) with $`q^2(t)=q_1^2+\left(a(t)q_1-q_3\right)^2+q_2^2`$, and $`\sigma (q^2)=q^2-q^4-B`$. This nonlinear equation with time-dependent coefficients can be solved exactly in the two cases of steady and oscillatory shear flow. In the case of a steady shear $`a(t)=st`$. We find, $$A(t)=\left\{\frac{e^{2H(t)}}{A(0)^2}+6e^{2H(t)}\int_0^t𝑑t^{}e^{-2H(t^{})}q^2(t^{})\right\}^{-1/2},$$ (6) with $$H(t)=\int_0^t𝑑t^{}\left[q^4(t^{})+B-q^2(t^{})\right]=(q_0^4+B-q_0^2)t+(1-2q_0^2)sq_1q_3t^2+\frac{s^2q_1^2(2q_0^2+4q_3^2-1)}{3}t^3-q_1^3q_3s^3t^4+\frac{q_1^4s^4}{5}t^5.$$ (7) The constant quantity $`q_0=\sqrt{q_1^2+q_2^2+q_3^2}`$ is the initial wavenumber, and $`A(0)`$ is the initial amplitude. For the special case $`q_1=0`$, $`A(t)`$ simply relaxes to its equilibrium value in the absence of shear, $`A^2=\frac{q_0^2-q_0^4-B}{3q_0^2}`$. This corresponds to an initial orientation of the structure which has no component transverse to the flow. For any other initial orientation, the shear induces changes in the lamellar spacing in the laboratory frame of reference (Fig. 1). As a result, the amplitude $`A(t)`$ decreases and approaches zero at long times. Hence, the structure melts and reforms with a different orientation, which we cannot predict on the basis of our single mode analysis. The emerging structure presumably results from the amplification of thermal fluctuations near the point at which the amplitude $`A(t)`$ vanishes, and these have been neglected in our treatment. Thermal fluctuation effects have been accounted for by others. For an oscillatory shear $`a(t)=\gamma \mathrm{sin}(\omega t)`$. We first examine the stability of the uniform solution $`\psi =0`$ against small perturbations. Linearization of Eq. (5) leads to, $$\frac{dA(t)}{dt}=\sigma \left[q^2(t)\right]A(t),$$ (8) with $`\sigma (t+T)=\sigma (t)`$ and $`T=2\pi /\omega `$. Equation (8) constitutes a one-dimensional Floquet problem. The solution $`A=0`$ is unstable when $$\overline{\sigma }=\int_0^T\sigma (t)𝑑t>0.$$ (9) The resulting neutral stability curve is given by, $$B=q_0^2-q_0^4-\frac{3q_1^4\gamma ^4}{8}-\frac{(2q_0^2+4q_3^2-1)\gamma ^2q_1^2}{2}.$$ (10) Instability modes can be conveniently classified by considering the relative orientation of the lamellae at $`t=0`$ and the shear direction. We define a parallel orientation, $`q_3\ne 0,q_1=q_2=0`$; a perpendicular orientation, $`q_2\ne 0,q_1=q_3=0`$; and a transverse orientation, $`q_1\ne 0,q_2=q_3=0`$. The following instability points are identified, depending on the orientation of the critical wavevector: a transverse mode with $$B_c=\frac{1}{2}\frac{(2+\gamma ^2)^2}{8+8\gamma ^2+3\gamma ^4},\qquad q_{1c}=\sqrt{\frac{4+2\gamma ^2}{8+8\gamma ^2+3\gamma ^4}},$$ (11) a mixed parallel-perpendicular mode with $$B_c=\frac{1}{4},\qquad q_{1c}=0,\qquad 2q_{2c}^2+2q_{3c}^2=1,$$ (12) and a mixed parallel-transverse mode defined by $$B_c=\frac{1}{4}\frac{7\gamma ^2+16}{15\gamma ^2+16},\qquad q_{2c}=0,\qquad q_{1c}=2\sqrt{\frac{1}{15\gamma ^2+16}},\qquad q_{3c}=\sqrt{\frac{3\gamma ^2+8}{30\gamma ^2+32}}.$$ (13) Note that the thresholds corresponding to perturbations whose wavevectors have no projection along the transverse direction are not affected by the shear.
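The neutral curve (10) can be checked directly by averaging $`\sigma [q^2(t)]`$ over one period; the sketch below (an editorial illustration with illustrative parameter values) returns a vanishing mean growth rate for any choice of $`\omega `$:

```python
import numpy as np
from scipy.integrate import quad

# On the neutral curve (10) the period average of sigma(t), Eq. (9),
# should vanish for any omega. Wavevector components are illustrative.
gamma, omega = 0.4, 0.02
q1, q2, q3 = 0.3, 0.0, 0.55
q0sq = q1**2 + q2**2 + q3**2

B = (q0sq - q0sq**2 - 3 * q1**4 * gamma**4 / 8
     - (2 * q0sq + 4 * q3**2 - 1) * gamma**2 * q1**2 / 2)

def sigma(t):
    a = gamma * np.sin(omega * t)
    q2t = q1**2 + (a * q1 - q3)**2 + q2**2   # q^2(t)
    return q2t - q2t**2 - B

T = 2 * np.pi / omega
sbar, _ = quad(sigma, 0.0, T, limit=200)
print("period-averaged growth rate:", sbar / T)   # ~0 on the neutral curve
```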
Furthermore, neither the stability boundaries nor the values of the critical wavenumbers depend on the angular frequency $`\omega `$. In what follows, we consider mainly two dimensional solutions in the plane $`q_2=0`$ (transverse and parallel orientations) to make contact with two dimensional numerical calculations. As an example, Fig. 2 shows the neutral stability curve in the $`(q_1,q_3)`$ plane for mixed parallel-transverse modes at $`ϵ=0.04`$, and for several values of the dimensionless strain amplitude $`\gamma `$. Recall that $`q_1=q_x(t=0)`$ and $`q_3=q_z(t=0)`$ define the initial orientation of the lamellae. The figure shows that the shear does not modify the neutral stability curve in the vicinity of $`q_1=0`$ (parallel orientation), whereas the curve is shifted near $`q_3=0`$ (transverse orientation). Large changes are observed for oblique wavevectors, including the complete suppression of the instability at sufficiently large values of the strain amplitude. Above threshold, Eq. (5) can be solved to yield the time-dependent amplitude of the lamellar structure under oscillatory shear. We find, $$A(t)=\left\{\frac{e^{2(I(t)-c_4-c_5)}}{A(0)^2}+6e^{2I(t)}\int_0^t𝑑t^{}e^{-2I(t^{})}q^2(t^{})\right\}^{-1/2}.$$ (14) The function $`I(t)`$ is given by $$I(t)=c_1t+c_2\mathrm{sin}(2\omega t)+c_3\mathrm{sin}(4\omega t)+c_4\mathrm{cos}(\omega t)+c_5\mathrm{cos}^3(\omega t),$$ (15) with * $`c_1=\frac{3q_1^4\gamma ^4}{8}+\frac{(2q_0^2+4q_3^2-1)\gamma ^2q_1^2}{2}+q_0^4+B-q_0^2`$ * $`c_2=-\frac{q_1^4\gamma ^4+(2q_0^2+4q_3^2-1)\gamma ^2q_1^2}{4\omega }`$, $`c_3=\frac{q_1^4\gamma ^4}{32\omega }`$ * $`c_4=\frac{(4q_0^2-2)\gamma q_1q_3+4\gamma ^3q_1^3q_3}{\omega }`$, and $`c_5=-\frac{4\gamma ^3q_1^3q_3}{3\omega }`$. We note that the stability condition Eq. (10) is equivalent to $`c_1=0`$. Hence, the asymptotic behavior of $`A(t)`$ at long times changes qualitatively depending on the sign of $`c_1`$. For $`c_1>0`$, $`\underset{t\to \mathrm{\infty }}{lim}e^{-2I}=0`$, so that the integral in Eq. (14) tends to a finite constant. Since the prefactor $`e^{2I}`$ diverges exponentially, $`A(t)`$ decays to zero. If, on the other hand, $`c_1<0`$, $`A(t)`$ becomes periodic at long times. To prove this statement, we first rewrite the second term inside the curly brackets in Eq. (14) as $$\mathcal{I}(t)=6e^{2f(t)}\int_0^t𝑑t^{}e^{-2c_1(t^{}-t)-2f(t^{})}q^2(t^{}),$$ (16) with $`f(t^{})=c_2\mathrm{sin}(2\omega t^{})+c_3\mathrm{sin}(4\omega t^{})+c_4\mathrm{cos}(\omega t^{})+c_5\mathrm{cos}^3(\omega t^{})`$. Since both $`f(t^{})`$ and $`q^2(t^{})`$ are periodic with period $`T=2\pi /\omega `$, we can decompose Eq. (16) into $$\mathcal{I}(t)=6e^{2f(t)}\left[\underset{j=1}{\overset{n}{\sum }}e^{2jc_1T}\int_0^T𝑑t^{}e^{-2c_1t^{}-2f(t^{}+t)}q^2(t^{}+t)+e^{2c_1t}\int_0^{t-nT}𝑑t^{}e^{-2c_1t^{}-2f(t^{})}q^2(t^{})\right],$$ (17) where $`n`$ is an integer such that $`0<t-nT<T`$. In the limit of large $`t`$ and with $`c_1`$ negative, the last term on the right-hand side of Eq. (17) vanishes, while the sum $`\underset{j=1}{\overset{n}{\sum }}e^{2jc_1T}`$ converges to $`1/(e^{-2c_1T}-1)`$. Combining Eqs.
(14) and (17) yields an asymptotically periodic solution for $`A(t)`$, $$A(t)=\left[\frac{6e^{2f(t)}}{e^{-2c_1T}-1}\int_0^T𝑑t^{}e^{-2c_1t^{}-2f(t^{}+t)}q^2(t^{}+t)\right]^{-1/2}.$$ (18) The condition $`c_1=0`$ can also be understood in terms of a critical strain amplitude $`\gamma _c`$ above which an existing lamellar structure of a given orientation at $`t=0`$ will melt (i.e., $`A(t)`$ will decay to zero at long times). The value of $`\gamma _c`$ that corresponds to $`c_1=0`$ is given by, $$\gamma _c=\left[\left(-b+\sqrt{b^2-4dc}\right)/2d\right]^{1/2},$$ (19) with $`b=(2q_0^2+4q_3^2-1)q_1^2/2`$, $`c=q_0^4+B-q_0^2`$, and $`d=3q_1^4/8`$. Note again that the critical strain amplitude is independent of the angular frequency $`\omega `$. In order to test the approximations involved in Eq. (4), namely that the wavevector $`𝐪`$ adiabatically follows the flow, and the single mode truncation for small $`ϵ`$, we have undertaken a numerical solution of the model equation in a two dimensional, square geometry (see the Appendix for the details of the numerical method). As a first example, we consider an oscillatory shear of angular frequency $`\omega =0.02`$ imposed on a lamellar structure of initial wavevector $`(q_1,q_3)=(0.687,0.098)`$. The critical strain amplitude for this initial orientation is $`\gamma _c=0.695`$. Figure 3 shows the temporal evolution of $`A(t)`$ for two values of $`\gamma `$, one larger and one smaller than $`\gamma _c`$. The solid lines are the predictions of Eq. (14), and the symbols are the results of the numerical calculation. The agreement in both cases is excellent. ## IV Secondary instabilities of the lamellar pattern In order to address the stability of the lamellar pattern, we next consider long wavelength perturbations of the base state with wavevector $`𝐐=(Q_1,Q_2,Q_3)`$ such that its components are also constant in the sheared frame of reference. Close to threshold, perturbations evolve on a slow time scale compared to the inverse frequency of the shear. We therefore assume that the wavevector of any long wave perturbation will adiabatically follow the imposed flow. Specifically, we consider a solution of the form, $$\psi (𝐫,t)=[A(t)+\delta A_+e^{i𝐐\cdot 𝐫}+\delta A_{-}e^{-i𝐐\cdot 𝐫}]e^{i𝐪\cdot 𝐫}+\mathrm{c}.\mathrm{c}.$$ (20) where $`A(t)`$ is the nonlinear solution obtained in Section III. Substituting Eq. (20) into Eq. (3), and linearizing with respect to the amplitudes $`\delta A_+`$ and $`\delta A_{-}`$, we find, $$\frac{\partial }{\partial t}\left[\begin{array}{c}\delta A_+\\ \delta A_{-}\end{array}\right]=L(t)\left[\begin{array}{c}\delta A_+\\ \delta A_{-}\end{array}\right],$$ (21) with $$L(t)=\left[\begin{array}{cc}l_+-l_+^2-B-6A(t)^2l_+& -3A(t)^2l_+\\ -3A(t)^2l_{-}& l_{-}-l_{-}^2-B-6A(t)^2l_{-}\end{array}\right],$$ and $`l_\pm =(𝐪\pm 𝐐)^2=(1+a(t)^2)(q_1\pm Q_1)^2-2a(t)(q_1\pm Q_1)(q_3\pm Q_3)+(q_2\pm Q_2)^2+(q_3\pm Q_3)^2`$. In general, the matrix elements $`L_{ij}`$ are complicated functions of time, and we have not attempted to solve Eq. (21) analytically. For $`\gamma <\gamma _c`$ the operator $`L`$ contains terms that are both periodic in time and decaying transients. At long enough times, $`A(t)`$ is given by Eq. (18), and the linear system Eq. (21) has periodic coefficients. Hence, it reduces to a two dimensional Floquet problem for the amplitudes $`\delta A_+`$ and $`\delta A_{-}`$. In order to gain some insight into the stability problem, we first briefly review the known results for zero shear. In this case $`A(t)`$ is a constant, and the matrix elements of $`L(t)`$ are independent of time.
An eigenvalue problem results by considering solutions of Eq. (21) of the form $`\delta A_\pm \propto e^{\sigma t}`$, and instability follows when either eigenvalue $`\sigma _\pm `$ is positive. Two modes of instability are obtained: a zig-zag (ZZ) mode that leads to a transverse modulation of the lamellae ($`𝐐\cdot 𝐪=0`$), and an Eckhaus (E) mode that is purely longitudinal in nature, $`𝐐\cdot 𝐪=Qq`$. In the zig-zag case, $`\sigma _+(𝐐)`$ has a maximum at $$Q_{max,ZZ}^2=\frac{1-2q^2-3A^2}{2}.$$ (22) The eigenvalue $`\sigma _+(𝐐)`$ changes sign on the line $`q=q_c`$, which therefore defines the zig-zag stability boundary. In the Eckhaus case, we find after some straightforward algebra that the perturbation with the largest growth rate is $$Q_{max,E}^2=\frac{64\delta q^4-(ϵ-4\delta q^2)^2}{64\delta q^2},$$ (23) with $`\delta q=q-q_c`$. Therefore the Eckhaus stability boundary is given by $`ϵ=12\delta q^2`$. These results are schematically summarized in Fig. 4. The hatched area is the region of stability of a lamellar solution in the absence of shear flow. It is worth pointing out that this stability diagram is identical to that of the Swift-Hohenberg model of Rayleigh-Bénard convection. Shiwa has recently shown that in the weak-segregation limit ($`ϵ\ll 1`$), and in the absence of shear flow, the amplitude equation describing slow modulations of a lamellar solution is the same as the amplitude equation of the Swift-Hohenberg model near onset of convection. The same stability diagram has been derived by Kodama and Doi by examining free energy changes upon distortion of a lamellar pattern. We now return to the Floquet problem of Eq. (21) when $`A(t)`$ is a periodic function of time (Eq. (18)). Since $`A(t+T)=A(t)`$ ($`T=2\pi /\omega `$), the solution of (21) is given by, $$\left[\begin{array}{c}\delta A_+\\ \delta A_{-}\end{array}\right]=e^{\sigma t}\left[\begin{array}{c}\varphi _+(t)\\ \varphi _{-}(t)\end{array}\right],$$ (24) with $`\varphi _\pm (t+T)=\varphi _\pm (t)`$. Equation (21) is then transformed to an eigenvalue problem within $`(0,T)`$, $$\frac{\partial }{\partial t}\left[\begin{array}{c}\varphi _+(t)\\ \varphi _{-}(t)\end{array}\right]=-\sigma \left[\begin{array}{c}\varphi _+(t)\\ \varphi _{-}(t)\end{array}\right]+L(t)\left[\begin{array}{c}\varphi _+(t)\\ \varphi _{-}(t)\end{array}\right].$$ (25) Given that the function $`A(t)`$ is quite complicated, we have solved this eigenvalue problem numerically. The eigenvalue $`\sigma `$ can depend in principle on the wavevector of the base state $`𝐪`$, on the wavevector of the perturbation $`𝐐`$, and on the amplitude $`\gamma `$ and frequency $`\omega `$ of the shear. For ease of presentation, we have focused on the case $`ϵ=0.04`$, although the extension to other values of $`ϵ`$ is straightforward. Figures 5 and 6 summarize our results for the cases $`\gamma =0.2`$ and $`\gamma =0.4`$, respectively, and show the stability boundaries in the $`(q_1,q_3)`$ plane, as well as the neutral stability curve already shown in Fig. 2. As before, $`(q_1,q_3)`$ is the wavevector of the lamellar structure at $`t=0`$. At fixed $`ϵ,\gamma `$ and $`\omega `$, these curves have been obtained by determining the loci of $`𝐪`$ at which the function $`\sigma (𝐐)`$ changes from a maximum to a saddle point at $`𝐐=0`$. First, we note that any orientation of the lamellar pattern that is not initially close to either parallel or transverse is unstable to moderate shears. Second, up to our numerical accuracy, these curves appear to be independent of angular frequency.
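A minimal numerical realization of this procedure is sketched below (an editorial illustration): the base amplitude of Eq. (5) is first relaxed to its periodic orbit for a strain below the $`\gamma _c`$ of Eq. (19), and the Floquet exponents of Eq. (21) are then read off from the monodromy matrix accumulated over one period. The wavevectors and parameter values chosen are illustrative and do not correspond to those of Figs. 5 and 6:

```python
import numpy as np
from scipy.integrate import solve_ivp

B, omega = 0.23, 0.02
q1, q3 = 0.05, 0.70              # base lamellae (q1, q3) at t = 0
Q1, Q3 = 0.02, 0.0               # long wavelength perturbation wavevector
q0sq = q1**2 + q3**2

# melting threshold, Eq. (19)
b = (2 * q0sq + 4 * q3**2 - 1) * q1**2 / 2
c = q0sq**2 + B - q0sq
d = 3 * q1**4 / 8
gamma_c = np.sqrt((-b + np.sqrt(b**2 - 4 * d * c)) / (2 * d))
gamma = 0.5 * gamma_c            # work safely below melting
T = 2 * np.pi / omega

def l(k1, k3, t):                # l(k) = (1+a^2)k1^2 - 2a k1 k3 + k3^2
    a = gamma * np.sin(omega * t)
    return k1**2 + (a * k1 - k3)**2

def rhs(t, y):
    A, dp, dm = y
    s = lambda x: x - x**2 - B
    q2 = l(q1, q3, t)
    lp, lm = l(q1 + Q1, q3 + Q3, t), l(q1 - Q1, q3 - Q3, t)
    dA = s(q2) * A - 3 * q2 * A**3                        # Eq. (5)
    # linearized operator L(t) of Eq. (21)
    ddp = (s(lp) - 6 * A**2 * lp) * dp - 3 * A**2 * lp * dm
    ddm = -3 * A**2 * lm * dp + (s(lm) - 6 * A**2 * lm) * dm
    return [dA, ddp, ddm]

# relax the base amplitude to its asymptotic periodic orbit, Eq. (18)
A0 = solve_ivp(rhs, (0, 20 * T), [0.1, 0.0, 0.0], rtol=1e-8).y[0, -1]

# monodromy matrix over one period from two independent perturbations
M = np.empty((2, 2))
for j, d0 in enumerate(([1.0, 0.0], [0.0, 1.0])):
    M[:, j] = solve_ivp(rhs, (0, T), [A0, *d0], rtol=1e-9).y[1:, -1]

mu = np.linalg.eigvals(M)
print("Floquet exponents:", np.log(np.abs(mu)) / T)       # > 0: unstable
```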
Finally, and contrary to the case of zero shear, the reciprocal basis vectors are not time independent. Since the components of both $`𝐪`$ and $`𝐐`$ are independent of time in the sheared frame, their mutual angle is not (except for the case in which they are parallel). However, the following statements can be made about the type of secondary instability. We have found that the secondary instability is of the longitudinal type only when either $`q_1=0`$ or $`q_3=0`$ (intersections between the lines marked with circles and the axes in Figs. 5 and 6). Otherwise, the angle between $`𝐪`$ and $`𝐐`$ is time-dependent. The lines on both figures marked with squares have the property that even though both $`𝐪`$ and $`𝐐`$ are functions of time, their mutual angle oscillates periodically around $`90^o`$. The cases discussed up to now concern long wavelength instabilities of the base periodic pattern that are associated with the translational symmetry of the original system broken by the appearance of a periodic pattern. We now show that it is possible to obtain analytical expressions for the stability boundaries against finite wavelength perturbations, which may have some experimental relevance as well. In some experimental protocols, the lamellar pattern is first obtained in the absence of shear. The resulting configuration comprises regions or domains of locally parallel lamellae, but with a continuous distribution of orientations. A shear flow is then initiated and the reorientation of the pattern studied as a function of time. The pattern obtained in the absence of shear may now be unstable to several finite wavenumber perturbations that would not have been observable in the case in which flow is present throughout the ordering process. In the latter case the unstable orientations would have decayed away during the process of formation of the lamellae. In addition, the approximation that we derive below is generally valid when $`Q_3`$ cannot approach zero, as is the case in a system of finite extent in the direction of the velocity gradient. We first define the following linear transformation, $$\left[\begin{array}{c}\delta _+\\ \delta _{-}\end{array}\right]=\left[\begin{array}{cc}-\frac{3A(t)^2l_{-}}{\sigma _+-\sigma _{-}}& \frac{\sigma _+-\sigma _{-}+L_{22}-L_{11}}{2(\sigma _+-\sigma _{-})}\\ \frac{3A(t)^2l_{-}}{\sigma _+-\sigma _{-}}& \frac{\sigma _+-\sigma _{-}+L_{11}-L_{22}}{2(\sigma _+-\sigma _{-})}\end{array}\right]\left[\begin{array}{c}\delta A_+\\ \delta A_{-}\end{array}\right]=T(t)\left[\begin{array}{c}\delta A_+\\ \delta A_{-}\end{array}\right]$$ (26) which diagonalizes the matrix $`L(t)`$, and where $$\sigma _\pm (t)=\frac{L_{11}+L_{22}\pm \sqrt{(L_{11}-L_{22})^2+36l_+l_{-}A(t)^4}}{2}.$$ (27) Combining Eqs. (21) and (26), we find $$\frac{\partial }{\partial t}\left[\begin{array}{c}\delta _+(t)\\ \delta _{-}(t)\end{array}\right]=\left[\begin{array}{cc}\sigma _+& 0\\ 0& \sigma _{-}\end{array}\right]\left[\begin{array}{c}\delta _+\\ \delta _{-}\end{array}\right]-\frac{A(t)^2l_{-}}{\sigma _+-\sigma _{-}}\left[\begin{array}{cc}\dot{M_1}& \dot{M_2}\\ -\dot{M_1}& -\dot{M_2}\end{array}\right]\left[\begin{array}{c}\delta _+\\ \delta _{-}\end{array}\right],$$ (28) where $`\dot{M_1}=\frac{\partial }{\partial t}\left[\frac{\sigma _+-\sigma _{-}+L_{11}-L_{22}}{2A^2l_{-}}\right]`$ and $`\dot{M_2}=\frac{\partial }{\partial t}\left[\frac{\sigma _{-}-\sigma _++L_{11}-L_{22}}{2A^2l_{-}}\right]`$. For finite $`Q`$, $`\frac{36A(t)^4l_+l_{-}}{(L_{11}-L_{22})^2}\ll 1`$. Assuming $`L_{11}-L_{22}>0`$ (the other case leads to no extra complications), $`\sigma _+\simeq L_{11}`$ and $`\sigma _{-}\simeq L_{22}`$.
Also $`M_1\simeq \frac{L_{11}-L_{22}}{A^2l_-}`$ and $`M_2\simeq 0`$, so that the equation for $`\delta _+`$ decouples from the equation for $`\delta _-`$. The solution for $`\delta _+`$ is $$\delta _+(t)=\delta _+(0)e^{\int _0^t\left(\sigma _+-\frac{\partial }{\partial t^{}}\mathrm{ln}\left[\frac{L_{22}-L_{11}}{A^2l_-}\right]\right)𝑑t^{}}.$$ (29) The stability boundary is defined by $`\overline{\sigma }=\int _0^T\left(\sigma _+-\frac{\partial }{\partial t^{}}\mathrm{ln}\left[\frac{L_{22}-L_{11}}{A^2l_-}\right]\right)𝑑t^{}=\int _0^T\sigma _+𝑑t^{}=0`$; the logarithmic term drops out because it is the time derivative of a periodic function and therefore integrates to zero over one period. We have checked that this stability condition agrees with the numerical stability analysis based on Eq. (25) for finite $`Q`$. We finish by illustrating the re-orientation dynamics of the lamellar structure following a long wavelength instability by direct numerical solution of the governing equation. We focus on the region in which the uniform lamellar structure is linearly unstable. The first example discussed concerns a lamellar structure of initial wavenumber $`(q_1,q_3)=(0.4908,0.4908)`$ being sheared periodically with an amplitude $`\gamma =1`$. Figures 7A,B,C show the sequence of configurations obtained when $`\omega =5\times 10^{-6}`$. A long wavelength transverse modulation of the lamellae is observed (Figure 7A). Subsequent growth leads to the formation of a forward kink band similar to that recently observed experimentally (Figure 7B). As the strain grows larger, the kink band disappears, leaving behind a lamellar structure without any defects and oriented differently relative to the shear (Figure 7C). In the second example (Figs. 7D,E,F), a structure initially transverse to the flow is sheared periodically with an amplitude $`\gamma =1`$ and a frequency $`\omega =5\times 10^{-7}`$. A longitudinal perturbation is clearly visible that manifests itself by local compression and dilation of the structure, leading to the disappearance of a pair of lamellae. The overall result is an increase in the lamellar spacing without any change in the orientation. In summary, we have obtained a nonlinear solution of the model equations that govern the formation of a lamellar structure in the weak-segregation limit. The solution is a periodic lamellar structure with a time-dependent wavevector that adiabatically follows the imposed shear flow, and a time-dependent amplitude which we have computed for the cases of steady and oscillatory shears. In the case of an oscillatory shear, the periodic solution only exists for a range of orientations of the lamellae relative to the shear direction. The width of the region depends on the shear amplitude but not on its frequency. Long wavelength secondary instabilities further reduce the range of existence of stable lamellar solutions. The corresponding stability boundaries depend again on the shear amplitude, but are independent (up to our numerical accuracy) of frequency. We next plan to examine the stability of the nonlinear solution presented in this paper when neither osmotic stresses nor viscosity contrast are neglected.
## Acknowledgments
This research has been supported by the U.S. Department of Energy, contract No. DE-FG05-95ER14566, and also in part by the Supercomputer Computations Research Institute, which is partially funded by the U.S. Department of Energy, contract No. DE-FC05-85ER25000. F.D. is supported by the Microgravity Science and Applications Division of NASA under contract No. NAG3-1885.
## A Numerical algorithm
We use a pseudo-spectral technique to solve Eq. (3) in two spatial dimensions and in the sheared frame with periodic boundary conditions along both directions.
Equation (3) can be written as, $$\frac{\partial \stackrel{~}{\psi }}{\partial t}=\sigma (t)\stackrel{~}{\psi }-q^2(t)\stackrel{~}{\psi ^3},$$ (A1) where $`\sigma (t)=q^2(t)-q^4(t)-B`$ and $`q^2(t)=q_1^2+[a(t)q_1-q_3]^2`$. The algorithm we use, due to Cross et al. , is obtained by first multiplying both sides of Eq. (A1) by $`\mathrm{exp}(-\sigma (t^{})t^{})`$ and integrating over $`t^{}`$. This gives $$\left[\mathrm{exp}(-\sigma (t)t^{})\stackrel{~}{\psi }\right]_t^{t+\mathrm{\Delta }t}=-q^2(t)\int _t^{t+\mathrm{\Delta }t}𝑑t^{}\stackrel{~}{\psi ^3}(t^{})\mathrm{exp}(-\sigma (t)t^{}),$$ (A2) where we have assumed $`\sigma (t^{})\simeq \sigma (t)`$ and $`q^2(t^{})\simeq q^2(t)`$. Next, we write the non-linear term $`\stackrel{~}{\psi ^3}(t^{})`$ as a linear function of $`t^{}`$ in the interval $`t\le t^{}\le t+\mathrm{\Delta }t`$, i.e., $$\stackrel{~}{\psi ^3}(t^{})\simeq \stackrel{~}{\psi ^3}(t)+\frac{\stackrel{~}{\psi ^3}(t+\mathrm{\Delta }t)-\stackrel{~}{\psi ^3}(t)}{\mathrm{\Delta }t}(t^{}-t).$$ (A3) Combining the last two equations finally yields $$\stackrel{~}{\psi }(t+\mathrm{\Delta }t)=\mathrm{exp}(\sigma (t)\mathrm{\Delta }t)\stackrel{~}{\psi }(t)-q^2(t)\stackrel{~}{\psi ^3}(t)\left[\frac{\mathrm{exp}(\sigma (t)\mathrm{\Delta }t)-1}{\sigma (t)}\right]-q^2(t)\left[\frac{\stackrel{~}{\psi ^3}(t+\mathrm{\Delta }t)-\stackrel{~}{\psi ^3}(t)}{\mathrm{\Delta }t}\right]\left[\frac{\mathrm{exp}(\sigma (t)\mathrm{\Delta }t)-(1+\sigma (t)\mathrm{\Delta }t)}{\sigma ^2(t)}\right].$$ (A5) Eq. (A5) is first evaluated with the last term on its right-hand side set to zero. The resulting value for $`\stackrel{~}{\psi }(t+\mathrm{\Delta }t)`$ is then used to estimate $`\stackrel{~}{\psi ^3}(t+\mathrm{\Delta }t)`$. Eq. (A5) is finally applied a second time with all three terms on its right-hand side now included in the calculation. The fact that the nonlinear terms are integrated using an explicit procedure in time limits the size of the time step $`\mathrm{\Delta }t`$ that can be used in simulations of the model. All the numerical results presented in this paper were obtained in the sheared frame of reference with $`128\times 128`$ spectral modes. We have chosen $`B=0.23`$ (which corresponds to $`ϵ=0.04`$) and a time step of maximum size $`\mathrm{\Delta }t=0.2`$, for which no numerical instability was observed. The initial condition $`\psi (𝐫,t=0)`$ is, unless otherwise noted, a lamellar structure obtained by numerical integration of Eq. (A5) with $`a(t)=0`$ (no shear), starting from a random initial condition (a Gaussian distribution for $`\psi `$ of zero mean and small variance), for approximately 300,000 iterations until a stationary lamellar structure is reached.
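The scheme just described is easy to express in code. The sketch below is a minimal Python rendering of one predictor-corrector step of Eq. (A5) on a $`128\times 128`$ Fourier grid, with $`B=0.23`$ and $`\mathrm{\Delta }t=0.2`$ as above; the sinusoidal strain protocol `a(t)`, the shear amplitude and frequency, and the unit grid spacing are assumptions of mine, not values taken from the text.

```python
import numpy as np

# Grid and model parameters from the appendix; the strain protocol a(t)
# (sinusoidal, amplitude gamma, frequency omega) is an assumed form.
N, B, dt = 128, 0.23, 0.2
gamma, omega = 0.2, 5.0e-4

k = 2.0 * np.pi * np.fft.fftfreq(N, d=1.0)
K1, K3 = np.meshgrid(k, k, indexing="ij")

def a(t):
    return gamma * np.sin(omega * t)          # accumulated shear strain

def step(psi, t):
    """One predictor-corrector update of Eq. (A5) in the sheared frame."""
    q2 = K1**2 + (a(t) * K1 - K3)**2          # q^2(t) on the shifted grid
    sig = q2 - q2**2 - B                      # sigma(t) = q^2 - q^4 - B
    E = np.exp(sig * dt)
    s = np.where(np.abs(sig) > 1e-12, sig, 1.0)   # guard sigma -> 0
    f1 = np.where(np.abs(sig) > 1e-12, (E - 1.0) / s, dt)
    f2 = np.where(np.abs(sig) > 1e-12,
                  (E - (1.0 + sig * dt)) / s**2, 0.5 * dt**2)
    psik = np.fft.fft2(psi)
    N0 = np.fft.fft2(psi**3)
    # predictor: Eq. (A5) with its last term set to zero
    psik1 = E * psik - q2 * N0 * f1
    N1 = np.fft.fft2(np.real(np.fft.ifft2(psik1))**3)
    # corrector: include the (psi^3(t+dt) - psi^3(t))/dt term
    psik1 = E * psik - q2 * N0 * f1 - q2 * ((N1 - N0) / dt) * f2
    return np.real(np.fft.ifft2(psik1))

rng = np.random.default_rng(0)
psi = 0.01 * rng.standard_normal((N, N))      # Gaussian random start
t = 0.0
for _ in range(1000):                         # relax toward lamellae
    psi = step(psi, t)
    t += dt
```

With `a(t) = 0` this reduces to the no-shear relaxation used to prepare the initial lamellar condition described above.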
# Galaxy Cluster Shapes and Systematic Errors in H0 Measured by the Sunyaev-Zel’dovich Effect
## 1. Introduction
There has been a substantial effort to detect the Sunyaev-Zel’dovich (SZ) effect from galaxy clusters (Sunyaev & Zel’dovich 1972) and to analyze its distortion of the cosmic microwave background radiation (CMB) in conjunction with cluster x-ray properties to derive the cluster cosmological angular-diameter distance and thus estimates of the cosmological parameters $`H_0`$ and $`q_0`$ (Gunn 1978, Silk & White 1978, Cavaliere, Danese, & DeZotti 1979, Birkinshaw 1979; see also Birkinshaw 1998, and references therein). This method provides a distance determination for the cluster that is independent of the “cosmic distance ladder” of Cepheid variables or supernovæ, and is potentially effective for clusters at high redshift ($`z\gtrsim 1`$). Centimeter-wavelength interferometry optimized for imaging the SZ effect from galaxy clusters has recently been developed (Carlstrom, Joy, and Grego 1996; Grainge et al. 1996). This allows high-resolution x-ray and radio images of clusters to be analyzed simultaneously. The results of fits of both the x-ray and radio images to simple cluster-plasma models will yield improved estimates of $`H_0`$, and systematic errors in the measured value of $`H_0`$ are likely to be a significant limit to its accuracy. Sources of systematic errors in the SZ-determined $`H_0`$ and $`q_0`$ can originate from the assumptions made in modeling the cluster plasma: ignorance of the cluster plasma’s true three-dimensional distribution and inadequate treatment of the physical state of the cluster plasma. Radio and x-ray images only provide the projected x-ray surface brightness and CMB decrement. For the analysis to proceed, some assumption must be made about the cluster size along the line of sight; e.g., one assumes that the cluster has spherical or ellipsoidal symmetry. The modeling of the physical state of the cluster plasma for SZ analysis has generally assumed that the plasma is of a single phase and temperature, using the somewhat ad hoc “beta” model for the electron density, $`n_e(𝐫)\propto (1+r^2/r_c^2)^{-3\beta /2}`$, where $`r_c`$ is the cluster’s “core radius”, within which the density is relatively flat. The beta model can be argued to be a possible distribution for the plasma in a dynamically relaxed isothermal cluster in hydrostatic equilibrium (e.g., Cavaliere & Fusco-Femiano 1978; Sarazin 1986), but its usefulness is more empirical; many x-ray images of clusters fit a beta model reasonably well (Mohr et al. 1999; see §4). Also, studies of the distribution of SZ systematic errors caused by cluster shape and orientation (and other effects) based on the results of a large ensemble of numerically simulated clusters have yet to be completed; current results are for a small set of simulated clusters (see §4). Thus, three-dimensional “toy model” estimates for the effects of cluster shape are a useful first step in estimating these errors, and can help identify the physical sources of bias and scatter in $`H_0`$ estimates from simulated clusters. In this paper I study the systematic errors in the value of $`H_0`$, measured by SZ and x-ray observations, caused by effects of cluster shape. This study consists of two parts. First, I create theoretical galaxy cluster samples, where each cluster’s plasma distribution follows a triaxial isothermal beta-model (§2), possessing three independent core radii.
I use the triaxial beta model because it is a simple three-dimensional generalization of the spherical or ellipsoidal beta models (commonly used in SZ analysis) that demonstrates the effects of shape and orientation on the uncertainties in $`H_0`$ determined by SZ observations. The triaxial beta model also produces simple analytical functions for the CMB decrement and x-ray surface brightness, so results for large samples of clusters can be easily calculated. I create numerical distributions of clusters by uniformly and independently sampling the plasma core radii, constraining them by a minimum ratio between any two core radii of a sample cluster. These samples are uniform in the plane of allowed cluster ellipticity and prolateness. The clusters are placed in the sky with a random orientation to our line of sight. I identify a cluster sample with an optimum asphericity that has a distribution of apparent cluster ellipticities consistent with that of observed x-ray clusters (Mohr et al. 1995; see §3). Second, I analyze the clusters of the theoretical sample to determine their distance as if they were either spherical or an ellipsoid of rotation, as is done in observational analysis (e.g., Hughes & Birkinshaw 1998). An important unknown quantity is an ellipsoidal cluster’s inclination angle $`i`$; the estimated value for $`H_0`$ will vary greatly with $`i`$. However, since our theoretical clusters are actually three-dimensional, specifying a single inclination angle is artificial. Therefore, I analyze each cluster very simply as if its inclination angle $`i=90^{\circ }`$, i.e., as if the core radii of the clusters are not altered by projection effects, and then study the distribution of the estimates for $`H_0`$ for a large number of sample clusters. The apparent shape of a sample cluster’s x-ray surface brightness will be elliptical, with a large and a small angular core radius, $`\theta ^+\ge \theta ^-`$. I calculate two different estimates of $`H_0`$ which are proportional to either $`\theta ^+`$ or $`\theta ^-`$, designated $`\widehat{H}_0^+`$ and $`\widehat{H}_0^-`$. I also calculate an estimate $`\widehat{H}_0^{\mathrm{avg}}`$, by using the arithmetic average of $`\theta ^+`$ and $`\theta ^-`$. I find that the sample means of the estimates $`\widehat{H}_0^+`$ and $`\widehat{H}_0^{\mathrm{avg}}`$ fall within $`5\%`$ of $`H_0`$ for the optimal sample. The sample distribution of $`\widehat{H}_0^-`$ shows the greatest bias, with a mean for $`\widehat{H}_0^-`$ that underestimates $`H_0`$ by $`14\%`$ for my optimal cluster sample. As a predictor for SZ observations, I also calculate estimates for $`H_0`$ averaged over 1000 realizations of a sample of 25 clusters. I find that the systematic errors caused by cluster shape are limited: the $`99.7\%`$ confidence intervals for $`\widehat{H}_0^+`$ and $`\widehat{H}_0^{\mathrm{avg}}`$ include the assumed value of $`H_0`$ for my optimal cluster sample, and do not extend beyond $`14\%`$ from $`H_0`$. The $`99.7\%`$ confidence interval for $`\widehat{H}_0^-`$ does not include $`H_0`$, indicating that it may not be a useful parameter for distance estimation. The structure of this paper is as follows. In §2 I describe the triaxial beta model for the cluster plasma, and describe the analytic expressions for their CMB decrement and x-ray surface brightness.
I also describe the construction of samples of theoretical clusters, distinguished by their degree of triaxiality, and describe the manner in which I analyze the apparent clusters to determine values for $`H_0`$. I present our results in §3, followed by a summary and discussion – noting some of the limitations of this beta model based analysis – in §4.
## 2. Method
### 2.1. Triaxial beta model clusters
The distribution of cluster plasma is described by an isothermal “beta” model. The electron density at a position within the cluster $`𝐫=x_1\widehat{𝐱}_1+x_2\widehat{𝐱}_2+x_3\widehat{𝐱}_3`$, measured in the observer’s coordinates, is given by $$n_e(𝐫)=n_{e0}(1+𝐫\cdot 𝐌\cdot 𝐫)^{-\frac{3}{2}\beta },$$ (1) where the matrix $`𝐌`$ describes a cluster’s shape and the orientation of its principal axes to the observer, and $`\beta `$ is an exponent with the nominal range $`\frac{1}{2}<\beta \le 1`$. The maps of x-ray surface brightness $`S_X`$ and cosmic-microwave-background decrement $`\delta T_r`$ in sky angular coordinates $`(\vartheta ,\phi )`$ (measured from the cluster center $`𝐫=0`$) are given by integrals of the x-ray emissivity and electron pressure over the line-of-sight (defined here as $`x_1`$) through the cluster; $$S_X(\vartheta ,\phi )=\frac{1}{4\pi (1+z)^3}\int \mathrm{\Lambda }_X(T_e(𝐫))n_e^2(𝐫)𝑑l$$ (2) and $$\delta T_r(\vartheta ,\phi )\equiv \frac{\mathrm{\Delta }T_r}{T_r}(\vartheta ,\phi )=-2\frac{k_B\sigma _T}{m_ec^2}\int T_e(𝐫)n_e(𝐫)𝑑l.$$ (3) Here $`T_e`$ is the electron temperature, hereafter assumed to be constant, $`\mathrm{\Lambda }_X(T_e)`$ is the plasma emission function over a prescribed x-ray bandwidth at temperature $`T_e`$, and $`z`$ is the cluster redshift. For a triaxial isothermal beta model plasma described by equation (1), integrating equation (2) by choosing $`dl=dx_1`$ gives $`S_X`$ as $$S_X(\vartheta ,\phi )=\frac{B(\frac{1}{2},3\beta -\frac{1}{2})}{4\pi }\frac{n_{e0}^2\mathrm{\Lambda }_X(T_e)}{(1+z)^3}\frac{L_{eff}}{\chi (\vartheta ,\phi )^{3\beta -\frac{1}{2}}},$$ (4) where $`\chi (\vartheta ,\phi )`$ is a quadratic function of the sky angular coordinates $`\vartheta `$ and $`\phi `$, describing elliptical isophotes. Along the line of sight through the center, $`\chi (0,0)=1`$. The quantity $`B(q,r)`$ is the beta function. The quantity $`L_{eff}`$ is an effective column length for the plasma along the line of sight through the cluster: $$L_{eff}^{\mathrm{triaxial}}=\frac{1}{\sqrt{M_{11}}}=\left\{\frac{\mathrm{cos}^2\alpha _1}{r_{c1}^2}+\mathrm{sin}^2\alpha _2\left(\frac{\mathrm{cos}^2\alpha _3}{r_{c2}^2}+\frac{\mathrm{sin}^2\alpha _3}{r_{c3}^2}\right)\right\}^{-1/2}.$$ (5) The quantities $`(r_{c1},r_{c2},r_{c3})`$ are the cluster core radii, and $`(\alpha _1,\alpha _2,\alpha _3)`$ are the rotation angles of the cluster principal axes relative to the observer. The coefficients in the quadratic function $`\chi (\vartheta ,\phi )`$ are also functions of the cluster core radii and its orientation. By integrating equation (3) in exactly the same manner, $`\delta T_r`$ can be shown to be $$\delta T_r(\vartheta ,\phi )=-B(\frac{1}{2},\frac{3}{2}\beta -\frac{1}{2})n_{e0}\sigma _T\frac{k_BT_e}{m_ec^2}\frac{L_{eff}}{\chi (\vartheta ,\phi )^{\frac{3}{2}\beta -\frac{1}{2}}}.$$ (6) The value of $`L_{eff}`$ is determined from observations by relating the $`S_X`$ and $`\delta T_r`$ observed for the cluster.
For example, $`L_{eff}`$ can be determined from the values of $`\delta T_r`$ and $`S_X`$ measured at the cluster’s center: $$L_{eff}=\frac{B(\frac{1}{2},3\beta -\frac{1}{2})}{B^2(\frac{1}{2},\frac{3}{2}\beta -\frac{1}{2})}\frac{1}{(1+z)^3}\frac{\mathrm{\Lambda }_X(T_e)}{4\pi \sigma _T^2}\left(\frac{m_ec^2}{k_BT_e}\right)^2\frac{\delta T_r^2(0,0)}{S_X(0,0)}.$$ (7) The cluster’s cosmological angular diameter distance $`D_\theta (z;H_0,q_0)`$ is then inferred by equating the measured $`L_{eff}`$ to that derived from a model for the cluster. Recent efforts to fit the cluster $`S_X`$ (Hughes and Birkinshaw 1998) assume that the cluster is either an oblate or a prolate ellipsoid in shape, as well as isothermal; this assumption about shape is reasonable when nothing can be known about the structure of the cluster along the line of sight. However, the dependence of $`L_{eff}`$ on the cluster’s apparent major and minor axes for an ellipsoidal beta model is different from that for a triaxial cluster. $$L_{eff}^{\mathrm{ellipsoidal}}=D_\theta (z;H_0,q_0)\left\{\frac{\mathrm{cos}^2i}{\theta _{c2}^2}+\frac{\mathrm{sin}^2i}{\theta _{c1}^2}\right\}^{-1/2},$$ (8) where $`\theta _{c1}`$ and $`\theta _{c2}`$ are an ellipsoidal cluster’s angular axes; $`\theta _{c1}>\theta _{c2}`$ describes an oblate ellipsoid and $`\theta _{c1}<\theta _{c2}`$ describes a prolate ellipsoid. The quantity $`i`$ is the inclination angle of the symmetry axis. If the cluster is triaxial, the observed $`L_{eff}`$ will be equal to that of equation (5). However, analyzing the cluster as an apparent ellipsoid with equation (8) will produce a systematic error in the inferred value of $`D_\theta `$, and thus in the cosmological parameters $`H_0`$ and $`q_0`$.
### 2.2. The theoretical cluster samples
I generate triaxial beta-model clusters by choosing a set of core radii $`(r_{c1},r_{c2},r_{c3})`$ from a random uniform distribution for the ratios of two of the core radii, $`r_{c2}`$ and $`r_{c3}`$, with respect to $`r_{c1}`$. Both $`r_{c2}`$ and $`r_{c3}`$ are assumed to be a random fraction of the length of $`r_{c1}`$, but bounded below by a minimum value. This minimum value is not known a priori but can be chosen to match the observed ellipticities of x-ray clusters (see below). Clusters are only distinguished by their core radii; I do not create a distribution for the clusters’ $`\beta `$-values nor for any other quantity except the core radii. Spherical beta model fits to real clusters appear to exhibit a correlation between core radius and the value of $`\beta `$ (Neumann and Arnaud 1999). However, the observed correlation for beta model clusters is not convolved with cluster shape projection. This is shown by the expressions for the x-ray surface brightness $`S_X`$, equation (4), and the SZ CMB decrement, $`\delta T_r`$, equation (6). The profile exponents for both of these quantities are not functions of the individual core radii nor of the rotation angles; nor are the major and minor axes of the observed elliptical cluster ($`\theta ^+`$ and $`\theta ^-`$, see §2.3) functions of $`\beta `$; they are only functions of the sample cluster’s core radii and the rotation angles. Also, I am not considering a distribution of the magnitude of the cluster core radii, but the distribution of the cluster triaxiality (see §2.3). I rule out bias in the cluster samples toward a net oblateness or prolateness by checking for uniform sampling in the ellipticity-prolateness $`(E,P)`$ plane, given by Thomas et al.
(1998) as $$E\equiv \frac{1}{2}\frac{r_{c2}^2(r_{c1}^2-r_{c3}^2)}{r_{c2}^2r_{c3}^2+r_{c1}^2r_{c3}^2+r_{c1}^2r_{c2}^2},$$ (9) and $$P\equiv \frac{1}{2}\frac{r_{c2}^2r_{c3}^2-2r_{c1}^2r_{c3}^2+r_{c1}^2r_{c2}^2}{r_{c2}^2r_{c3}^2+r_{c1}^2r_{c3}^2+r_{c1}^2r_{c2}^2}.$$ (10) Strictly prolate and oblate clusters fall onto the lines $`P=-E`$ and $`P=E`$ respectively, with length determined by the lower bound of the ratio between core radii. My optimal sample, described below, uniformly covers the allowed region in the $`(E,P)`$ plane, which is a triangle bounded by the prolate and oblate lines and the line connecting their endpoints. How well does the triaxial beta model reproduce the observed shapes of x-ray clusters that could be used for SZ analysis? A study of 65 Einstein x-ray clusters by Mohr et al. (1995) found an emission-weighted mean ellipticity of $`0.20\pm 0.12`$, while McMillan, Kowalski, and Ulmer (1989) found a mean ellipticity of $`0.24\pm 0.14`$ for 49 Einstein Abell clusters. Both of these studies included clusters with substantial flattening caused by recent merging, or with cooling inflows, which can make a cluster appear more spherical. A more appropriate sample for comparison would be one which excludes these effects. If I eliminate clusters that are apparent mergers from the Mohr et al. sample (8 of the 12 clusters with ellipticities of 0.3 or greater show apparent subclustering) and clusters in which cooling inflows may exist (as measured by central cooling times of 10 Gyr or less; an additional 17 clusters), then the mean ellipticity of the remaining subset is $`0.18\pm 0.09`$. I find that a triaxial beta model cluster sample where the minimum ratio between any two core radii is $`0.65`$ produces a distribution of apparent ellipticities that is consistent with this subset of the Mohr et al. sample (figure 1). A Kolmogorov-Smirnov test between these samples indicates the ellipticity distributions are statistically indistinguishable, with a maximum difference between the two cumulative distributions of $`d=0.12`$, and a probability that the two samples are drawn from the same distribution of $`69\%`$.
### 2.3. The analysis
The systematic error in analyzing the clusters arises from assuming that an apparent cluster is either a prolate or an oblate ellipsoid, when it is in fact triaxial. The apparent elliptical image of the cluster will have a major and a minor axis, $`\theta ^+`$ and $`\theta ^-`$; these are the “angular core radii” for the x-ray and SZ images. Using equation (5) and the function $`\chi (\vartheta ,\phi )`$, $`\theta ^+`$, $`\theta ^-`$, and $`L_{eff}`$ are determined for a given triaxial cluster. The observational analysis proceeds as if the observed $`\theta ^+`$ and $`\theta ^-`$ are those of an ellipsoidal cluster, inclined to the line of sight by an unknown angle $`i`$.
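To make the sample construction of §2.2 and the projection just described concrete, the following sketch (an illustration of mine, not the paper’s actual code) draws core radii with a minimum axis ratio of 0.65, orients each cluster randomly, and extracts $`(E,P)`$, the apparent axes $`\theta ^\pm `$ from the sky-projected quadratic form, and $`L_{eff}=1/\sqrt{M_{11}}`$, in units where the true angular-diameter distance is unity:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_cluster(fmin=0.65):
    """Core radii r_c1 = 1 and r_c2, r_c3 uniform in [fmin, 1], with a
    random orientation; the x1 axis is the line of sight."""
    r = np.array([1.0, rng.uniform(fmin, 1.0), rng.uniform(fmin, 1.0)])
    Q, R = np.linalg.qr(rng.standard_normal((3, 3)))   # random rotation
    Q *= np.sign(np.diag(R))
    M = Q @ np.diag(1.0 / r**2) @ Q.T                  # form of Eq. (1)
    return r, M

def ellipticity_prolateness(r):
    a, b, c = np.sort(r)[::-1] ** 2                    # squared, r1 >= r2 >= r3
    D = b * c + a * c + a * b
    return 0.5 * b * (a - c) / D, 0.5 * (b*c - 2*a*c + a*b) / D  # Eqs. (9),(10)

def apparent_axes(M):
    """Sky-projected isophote shape (Schur complement on the line-of-sight
    element of M) -> large/small angular core radii, plus L_eff of Eq. (5)."""
    A = M[1:, 1:] - np.outer(M[0, 1:], M[0, 1:]) / M[0, 0]
    lam = np.linalg.eigvalsh(A)                        # ascending eigenvalues
    return 1.0 / np.sqrt(lam[0]), 1.0 / np.sqrt(lam[1]), 1.0 / np.sqrt(M[0, 0])

hp, hm = [], []
for _ in range(10000):
    r, M = random_cluster()
    tp, tm, L = apparent_axes(M)
    hp.append(tp / L)      # theta+ / L_eff
    hm.append(tm / L)      # theta- / L_eff
print(np.mean(hp), np.mean(hm))
```

With the true distance set to one, the ratios `tp / L` and `tm / L` are precisely the $`i=90^{\circ }`$ estimators $`\widehat{H}_0^\pm /H_0`$ defined below, and the apparent ellipticity $`1-\theta ^-/\theta ^+`$ can be histogrammed from the same loop for the comparison with the Mohr et al. subset.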
The apparent cluster distance $`\widehat{D}_\theta `$ for an ellipsoidal cluster is related to $`\theta ^+`$, $`\theta ^-`$, $`L_{eff}`$, and $`i`$ by deprojecting the cluster axes and using equation (8): $$L_{eff}=\widehat{D}_\theta (z;\widehat{H}_0,\widehat{q}_0)\{\begin{array}{cc}\sqrt{\frac{{\theta ^-}^2({\theta ^+}^2-{\theta ^-}^2\mathrm{cos}^2i)}{{\theta ^+}^2\mathrm{sin}^2i}},\hfill & \text{(prolate);}\hfill \\ \sqrt{\frac{{\theta ^+}^2({\theta ^-}^2-{\theta ^+}^2\mathrm{cos}^2i)}{{\theta ^-}^2\mathrm{sin}^2i}},\hfill & \text{(oblate).}\hfill \end{array}$$ (11) Equating the $`L_{eff}`$ of equation (11) with that for the triaxial cluster, one finds that in general $`\widehat{D}_\theta `$ will not be equal to the actual distance $`D_\theta `$. This leads to erroneous values for the apparent cosmological parameters $`H_0`$ and $`q_0`$. Since the sample clusters are intrinsically triaxial, using the estimators (11) for $`H_0`$ based on an ellipsoidal cluster model with a single inclination angle $`i`$ is artificial; there is no single angle that characterizes the orientation of a cluster to the line of sight, unless a sample cluster’s core radii happen to have been chosen to be roughly prolate or oblate. Therefore, I collapse the dependency on $`i`$ and assume $`i=90^{\circ }`$, and use only a cluster’s observed $`\theta ^+`$ and $`\theta ^-`$. I compose three estimates for $`H_0`$ for each cluster: $`\widehat{H}_0^+\propto \theta ^+`$, $`\widehat{H}_0^-\propto \theta ^-`$, and $`\widehat{H}_0^{\mathrm{avg}}\propto \frac{1}{2}(\theta ^++\theta ^-)`$. These estimates are ordered $`\widehat{H}_0^+\ge \widehat{H}_0^{\mathrm{avg}}\ge \widehat{H}_0^-`$. The estimate $`\widehat{H}_0^+`$ is equivalent to assuming the observed cluster is an oblate ellipsoid, while the estimate $`\widehat{H}_0^-`$ treats the cluster as a prolate ellipsoid, both viewed as if the axis of rotation were in the plane of the sky. I study the distribution of these estimators for $`H_0`$ for a sample of triaxial clusters drawn from the distribution described in §2.2. The cosmological parameter $`q_0`$ is fixed to be zero, so that $`\widehat{D}_\theta =c\widehat{H}_0^{-1}z(1+\frac{1}{2}z)/(1+z)^2`$. As mentioned in §2.2, I am sampling clusters only by a distribution in their triaxiality, and I do not use the magnitude of the core radii. Thus the estimators $`\widehat{H}_0`$ are determined with respect to an assumed value of $`H_0`$.
## 3. Results
Figure 1 shows the distributions of the values of $`\widehat{H}_0^+`$ and $`\widehat{H}_0^-`$ inferred from the clusters’ apparent angular axes, a scatterplot of these estimates (one point per cluster), and the distribution of apparent cluster ellipticities, for our optimal theoretical beta model sample (with a minimum ratio of core radii of $`0.65`$). Figure 2 shows the distribution of $`\widehat{H}_0^{\mathrm{avg}}`$ for this cluster sample. The distribution of $`\widehat{H}_0^+`$ has a sample mean that is within $`8\%`$ of the assumed value of $`H_0`$. The distribution of $`\widehat{H}_0^-`$ is more biased (i.e., lower) in sample mean value, which is expected since for every sample cluster the value of $`\widehat{H}_0^-`$ is constrained to be lower than $`\widehat{H}_0^+`$. I also studied the distribution of the $`H_0`$ estimates for cluster samples with greater asphericity. For a cluster sample with a minimum ratio of core radii of $`0.5`$, the estimate distributions broaden significantly, and the sample deviations nearly double.
The distribution of $`\widehat{H}_0^{\mathrm{avg}}`$ also exhibits broadening with greater asphericity, but still has a mean value within $`5\%`$ of $`H_0`$. All of the means are relatively insensitive to the samples’ degree of asphericity. What, then, is the expected systematic error in the measured $`H_0`$ caused by cluster shape for a practical-sized sample of clusters? The $`99.7\%`$ confidence intervals for the mean values of $`\widehat{H}_0^+`$, $`\widehat{H}_0^-`$, and $`\widehat{H}_0^{\mathrm{avg}}`$, based on 1000 realizations of a 25-cluster sample, are summarized in Table 1. The confidence intervals for $`\widehat{H}_0^+`$ and $`\widehat{H}_0^{\mathrm{avg}}`$ include the assumed value of $`H_0`$ for the optimal cluster sample, and even for samples with greater asphericity. For the optimal cluster sample the confidence intervals for $`\widehat{H}_0^+`$ and $`\widehat{H}_0^{\mathrm{avg}}`$ do not extend beyond $`14\%`$ from $`H_0`$. The $`99.7\%`$ confidence interval for $`\widehat{H}_0^-`$ does not include the assumed value of $`H_0`$.
## 4. Summary and Discussion
The high-resolution imaging of the SZ effect in galaxy clusters in combination with cluster plasma x-ray diagnostics is a powerful technique for measuring the cosmic distance scale. This method is sensitive to the projection of the cluster’s inverse-Compton scattering and x-ray emission, which depend on the plasma density and temperature along the cluster line-of-sight. In this paper I have estimated the systematic errors in the SZ-determined Hubble constant caused only by the projection effects of cluster shape. I use a triaxial beta model to represent the clusters’ gas because it is the simplest generalization of the ubiquitous spherical and ellipsoidal beta models that can demonstrate the effects of cluster shape and orientation on measurements of $`H_0`$. The triaxial beta model has analytic expressions for the clusters’ CMB decrement and x-ray surface brightness, so that the statistics of the measured $`H_0`$ for a very large number of clusters are easily computed. Ideal clusters for SZ analysis would possess no obvious substructure nor evidence of merging, and would contain plasma at a single temperature without cooling inflows. As most observed clusters are not such simple systems, I discuss the relevance of my beta model sample predictions for errors in $`H_0`$ caused by cluster shape for a real SZ cluster survey. First, will the presence of cooling gas alter a cluster’s SZ properties substantially from a beta model description? A recent analysis of ROSAT x-ray clusters indicates that a majority of them contain cooling gas (Peres et al. 1998). The clusters’ x-ray emission is sensitive to the environment of their centers, where gas densities are highest, while their SZ effect is relatively more sensitive to the lower-density outer regions of the clusters. Cooling inflows are confined to the cores of the clusters and are believed to be subsonic and isobaric, so that the relevant SZ cluster property, the column integral of the cluster pressure, will be largely unaffected by the presence of cooling gas in the core. “Reprojection” estimates for Einstein x-ray clusters suggest that the SZ effect is enhanced in the largest of cooling clusters, with cooling rates of hundreds of solar masses per year (White et al. 1997).
However, most of these clusters have high pressures in their outer regions, so the SZ enhancement is likely not caused by the alteration of the SZ effect within the cooling core, but by the presence of higher gas pressure over the extended outer (beta model) region. Strong cooling inflow clusters have sharply peaked x-ray profiles, so that their x-ray determined core radii and central plasma densities can be skewed. As mentioned in §2.2, in comparing my beta model sample to the observed cluster ellipticity distribution, I eliminate the stronger cooling inflow clusters in an attempt to avoid this bias in observed shape. For many of these systems these issues are academic, as they are radio loud and obscure a CMB decrement (Burns 1990). However, some of the clusters that have a detected SZ effect contain large cooling inflows (Hughes 1997), and observers have used the beta model properties of the outer portion of the clusters to extract the relevant x-ray properties for analysis in conjunction with the SZ effect (e.g., Myers et al. 1997). It is also interesting to note that for many clusters – approximately half of the ROSAT clusters of an x-ray flux-limited sample previously selected by Edge et al. (1990), including some that contain cooling inflows – the simple beta model can produce a good fit to their x-ray profiles (Mohr et al. 1999). An important caveat to the error estimates in $`H_0`$ provided by these beta models is that they cannot account for the cluster gas distribution changing shape from the core to the outer region. Observations of a few high signal-to-noise ROSAT clusters show ellipticity gradients, exhibiting a roughly linear decline in x-ray ellipticity from $`e0.3`$ to $`0.1`$ with distance from the clusters’ centers, over several core radii (Buote and Canizares 1996). This behavior may have a substantial effect on the SZ properties of a cluster, altering both the shapes and magnitudes of the apparent x-ray and (to an even larger degree) the SZ images. While results on cluster ellipticities from a larger sample are required to adequately constrain this effect for a statistical study, here I illustrate a possible bias in $`\widehat{H}_0`$ that could arise from changing cluster shape using a simple model of a cluster with varying ellipticity. I consider a beta model with the core radii in the elements of $`𝐌`$ in equation (1) as functions of the coordinates. An ad hoc example is $$r_{c1}^2\rightarrow r_{c1}^2+\left[\frac{r}{r_{c1}+r}\right]^\alpha (R^2-r_{c1}^2),$$ (12) used in the beta model of equation (1), where $`r`$ is the distance of the coordinate point from the cluster center. This describes a triaxial cluster (using similar expressions for $`r_{c2}`$ and $`r_{c3}`$) with core radii of $`(r_{c1},r_{c2},r_{c3})`$ within the cluster core, becoming spherical with core radius $`R>r_{c1}`$ outside the cluster’s core. I refer to this as a cluster with a “modified” core radius. Choosing $`\alpha \simeq 4`$ and $`r_{c1}/R\simeq 0.6`$ for a cluster with one core radius modified by equation (12) and $`\beta =\frac{2}{3}`$ in equation (1) produces a decreasing x-ray ellipticity from $`e(R)\simeq 0.3`$ to $`e(5R)\simeq 0.1`$ when the modified core radius lies in the plane of the sky; there is similar behavior in ellipticity for two modified cluster core radii with one along the line of sight.
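The size of this effect on the inferred column length can be estimated with a short calculation. The sketch below is a simplified proxy of my own, not the paper’s full image analysis: it evaluates the modified profile of equation (12) along the central line of sight with the modified axis pointed at the observer, applies equation (7) in the form $`L_{eff}\propto (\int n_e𝑑l)^2/\int n_e^2𝑑l`$ to obtain the column length an observer would infer, and compares it with the apparent (sky) core radius, assumed here to be close to $`R`$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as beta_fn

b, alpha, rc, R = 2.0 / 3.0, 4.0, 0.6, 1.0   # beta = 2/3, r_c1/R = 0.6

def rc1_sq(x):
    # modified core radius of Eq. (12) along the central line of sight
    r = abs(x)
    return rc**2 + (r / (rc + r))**alpha * (R**2 - rc**2)

def n(x):
    # n_e / n_e0 with the modified axis along the line of sight
    return (1.0 + x**2 / rc1_sq(x))**(-1.5 * b)

I1 = quad(lambda x: n(x), -np.inf, np.inf)[0]       # SZ decrement column
I2 = quad(lambda x: n(x)**2, -np.inf, np.inf)[0]    # x-ray emission column

B1 = beta_fn(0.5, 3.0 * b - 0.5)
B2 = beta_fn(0.5, 1.5 * b - 0.5)
L_eff = (B1 / B2**2) * I1**2 / I2   # column length Eq. (7) would return
print("H0_hat / H0 =", R / L_eff)   # > 1: biased high
```

For a pure beta model this construction returns $`L_{eff}`$ equal to the line-of-sight core radius exactly, which is a useful check of the proxy.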
The presence of gas with a more spherical distribution moderates the effect of cluster orientation on the $`L_{eff}`$ determined from equation (7), with $`L_{eff}`$ assuming intermediate values within the range given by the outer core radius $`R`$ and the inner core radii $`(r_{c1},r_{c2},r_{c3})`$. For example, a cluster with one modified core radius, taken along the line of sight and using the values $`r_{c1}/R\simeq 0.6`$ and $`\alpha \simeq 4`$, produces an estimate $`\widehat{H}_0`$ that is biased (high) by $`20\%`$. This by itself is a substantial effect for what otherwise appears to be a spherical cluster; taken together with orientation effects, however, it is a significantly lower bias than the $`67\%`$ that would have been produced by the unmodified oblate cluster observed along its minor axis. I have calculated the x-ray and CMB decrement images for a small set of prolate or oblate clusters, with one or two modified core radii (using the parameters from above), observed along the axes and along the line $`x=y=z`$. For these clusters the estimates $`\widehat{H}_0^+`$ and $`\widehat{H}_0^-`$ are lower (by $`10\%`$), and their difference $`\widehat{H}_0^+-\widehat{H}_0^-`$ is substantially smaller than its counterpart for clusters with unmodified core radii. It is uncertain whether any significant bias in the estimates for $`H_0`$ would be introduced in statistically analyzing a large set of such clusters. These estimates are based on using the central values for the x-ray brightness and CMB decrement, and on determining the cluster angular size using the apparent x-ray core radii defined simply by the ellipse of brightness that is lower than the peak by a factor $`2^{3\beta -1/2}`$ with $`\beta =\frac{2}{3}`$. More quantitative results will depend on the model details of the distribution of gas in transition from the core to the outer region of the cluster, and on the manner by which models are fit to the data to determine parameters (e.g., simultaneous x-ray and CMB image fitting), well beyond the scope of this paper. Qualitatively, however, the presence of a changing cluster shape can alter the estimates for $`H_0`$ by softening the effects of orientation. A notable effect for this type of cluster is that the beta model congruence of the x-ray and CMB images (equations (4) and (6)) is broken, so that comparison of the maps may determine the importance of shape changes outside of the cluster core. Clusters with recent merging activity cannot be adequately represented by simple beta models. However, the use of such clusters in an SZ survey is likely to be fraught with complications. In principle, numerical simulations of cluster formation would yield a more “realistic” sample of clusters for SZ analysis than my beta model sample, accounting for the effects of cooling and merging as well as for shape projection. However, simulations of cluster formation cannot yet physically reproduce the observed large gas cores in clusters (Metzler and Evrard 1997; Anninos and Norman 1996). Inagaki et al. (1995) used two simulated clusters to study SZ measurement systematics caused by plasma temperature gradients, plasma clumpiness, cluster peculiar velocity, the finite extent of cluster plasma, and cluster shape. They determined that the effects of asphericity would be limited to an uncertainty of $`10\%`$ in $`H_0`$ by observing several clusters, as I have also found. They did not conduct a survey of possible cluster shapes; the statistics for the estimates of $`H_0`$ were generated by viewing the two clusters at many orientations.
I have constrained the limits of my beta model shapes by checking their consistency with the observed apparent ellipticities of x-ray clusters. Roettiger et al. (1997) focused on the systematic errors in an SZ-determined $`H_0`$ observed in seven simulated cluster mergers with strong temperature gradients and asphericity. They found that these effects could lead to $`H_0`$ being underestimated by as much as $`35\%`$, and concluded that two approaches should be used in SZ analysis: perform detailed simulations of individual clusters where it is indicated that merging is strongly affecting the SZ properties; otherwise, employ a statistical sample of clusters that show no evidence of recent merging or dynamical evolution. In this paper I have addressed the systematic errors caused by cluster shape and orientation that would be present in using this latter approach with a modeled optimal SZ cluster sample. What are needed now are the statistical results for SZ $`H_0`$ estimates from a large sample of numerically simulated clusters. I have created numerical samples of triaxial beta model clusters by specifying the minimum ratio between any two core radii. I have identified an optimal such sample, with a ratio of $`0.65`$, that has a distribution of apparent cluster x-ray ellipticities consistent with that measured from observations of x-ray clusters. I have analyzed the cluster samples for their SZ decrement and x-ray surface brightness, assuming no effects of inclination angle. The apparent cluster’s large and small angular core radii, $`\theta ^+`$ and $`\theta ^-`$, yield three estimates of $`H_0`$ that are proportional to $`\theta ^+`$, $`\theta ^-`$, and $`\frac{1}{2}(\theta ^++\theta ^-)`$. These estimates are equivalent to assuming that the cluster is either oblate ($`\theta ^+`$) or prolate ($`\theta ^-`$), with its symmetry axis in the plane of the sky, or spherical ($`\frac{1}{2}(\theta ^++\theta ^-)`$). I have found that the estimates $`\widehat{H}_0^+`$ and $`\widehat{H}_0^{\mathrm{avg}}`$ have means that fall within $`5\%`$ of the assumed value of $`H_0`$ for the optimal theoretical cluster sample, while the mean of the estimate $`\widehat{H}_0^-`$ underestimates $`H_0`$ by $`14\%`$. The size of these errors caused by cluster shape is similar to that found in a more approximate fashion by Hughes and Birkinshaw (1998), and discussed recently by Cooray (1998). Other estimates of $`H_0`$ can be devised, for example a (weighted) geometric mean $`\stackrel{~}{H}_0\equiv (\widehat{H}_0^+)^\alpha (\widehat{H}_0^-)^{1-\alpha }`$, $`0<\alpha <1`$ (Van Speybroeck and Vikhlinin 1997). These may produce better estimates for $`H_0`$ than the three simple estimates that I have studied, but the best choice of $`H_0`$ estimator may depend on the intrinsic shape distribution of clusters. I have also determined the confidence intervals for the estimates of $`H_0`$ that would be derived from the SZ and x-ray analysis of a 25-cluster sample. Our optimal theoretical cluster sample has $`99.7\%`$ confidence intervals for $`\widehat{H}_0^+`$ and $`\widehat{H}_0^{\mathrm{avg}}`$ that are within $`14\%`$ of $`H_0`$ and enclose $`H_0`$. The confidence intervals for the estimate $`\widehat{H}_0^-`$ show more deviation and do not enclose $`H_0`$, indicating that it may not be a useful estimator. I thank J. Mohr, A. Evrard, and B. Mathiesen for very enlightening conversations and suggestions. I thank M. Joy and S. Patel for critical readings of earlier versions of this manuscript.
I also thank NASA’s Interagency Placement Program, the University of Michigan Department of Astronomy, and the University of Michigan Rackham Visiting Scholars Program.
# Anomalies of the infrared-active phonons in underdoped YBCO as evidence for the intra-bilayer Josephson effect
## Abstract
The spectra of the far-infrared $`c`$-axis conductivity of underdoped YBCO crystals exhibit dramatic changes of some of the phonon peaks when going from the normal to the superconducting state. We show that the most striking of these anomalies can be naturally explained by changes of the local fields acting on the ions arising from the onset of inter- and intra-bilayer Josephson effects.
PACS Numbers: 74.25.Gz, 74.72.Bk, 74.25.Kc, 74.50.+r
The essential structural elements of the high-$`T_c`$ superconductors are the copper-oxygen planes which host the superconducting condensate. Many experiments, and also some theoretical considerations, suggest that these planes are only weakly (Josephson) coupled along the $`c`$-direction. Studies of the $`c`$-axis transport, of the microwave absorption, and of the far-infrared $`c`$-axis conductivity revealing Josephson plasma resonances have established that Josephson coupling indeed takes place for planes (or pairs of planes) separated by insulating layers wider than the in-plane lattice constant. It is not fully understood why the coupling is so weak, and it is debated whether this is related to the unconventional ground state of the electronic system of the planes causing a charge confinement and/or to the properties of the insulating layers. In this context, it is of interest to ascertain whether the closely-spaced copper-oxygen planes of the so-called bilayer compounds, like YBa<sub>2</sub>Cu<sub>3</sub>O<sub>y</sub>, are also weakly (Josephson) coupled. In this paper we show that the far-infrared spectra of the $`c`$-axis conductivity of underdoped YBa<sub>2</sub>Cu<sub>3</sub>O<sub>y</sub> with $`6.4\le y\le 6.8`$ may provide a key for resolving this interesting issue. The spectra exhibit, beside a spectral gap that shows up already at temperatures much higher than $`T_c`$, two pronounced anomalous features. Firstly, at low temperatures a new broad absorption peak appears in the frequency region between $`350\mathrm{cm}^{-1}`$ and $`550\mathrm{cm}^{-1}`$. The frequency of its maximum increases with increasing doping; for optimally doped samples this feature disappears. Secondly, at the same time as the peak forms, the infrared-active phonons in the frequency region between $`300\mathrm{cm}^{-1}`$ and $`700\mathrm{cm}^{-1}`$ (in particular their strength and frequency) are strongly renormalized. This effect is most spectacular for the oxygen bond-bending mode at $`320\mathrm{cm}^{-1}`$, which involves the in-phase vibration of the plane oxygens against the Y-ion and the chain ions. For strongly underdoped YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6.5</sub> with $`T_c\simeq 50\mathrm{K}`$, this mode loses most of its spectral weight and softens by almost $`20\mathrm{cm}^{-1}`$. Although the additional peak, and the related changes of the phonon peaks (phonon anomalies), start to develop above $`T_c`$, there is always a sharp increase of the peak magnitude below $`T_c`$. Similar effects have also been reported for several other underdoped bilayer compounds (see, e.g., Refs. ) and for hole-doped ladders in Sr<sub>14-x</sub>Ca<sub>x</sub>Cu<sub>24</sub>O<sub>41</sub>. Van der Marel et al. have suggested that the additional peak around $`450\mathrm{cm}^{-1}`$ could be explained using a phenomenological model of the dielectric response of superlattices with two superconducting layers (a bilayer) per unit cell.
The model involves two kinds of Josephson junctions: inter-bilayer and intra-bilayer. As a consequence, the model dielectric function exhibits two zero crossings corresponding to two longitudinal plasmons: the inter-bilayer and the intra-bilayer one. In addition, it exhibits also a pole corresponding to a transverse optical plasmon. Van der Marel et al. pointed out that the additional peak in the spectra of underdoped YBCO may just correspond to the latter plasmon. Very recently, they have confirmed their suggestion by more quantitative considerations regarding the doping dependence of the peak position. The details of the spectacular anomaly of the $`320\mathrm{cm}^{-1}`$ phonon mode, however, cannot be explained within the original form of their model. In the following we report a theoretical analysis of the additional peak and the phonon anomalies. We have extended the model of van der Marel et al. by including the four phonons at 280, 320, 560, and $`630\mathrm{cm}^{-1}`$ in such a way that the extended model can account not only for the peak but also for the most striking phonon anomalies. The important new feature is that we take into account local electric fields acting on the ions participating in the above mentioned phonon modes. As we show below, the phonon anomalies are then simply due to dramatic changes of these local fields as the system becomes superconducting. Let us briefly introduce the model. The dielectric function is written as $$\epsilon (\omega )=\epsilon _1(\omega )+i\epsilon _2(\omega )=\epsilon _{\mathrm{\infty }}+\frac{i}{\omega \epsilon _0}\underset{n}{\sum }\frac{j_n(\omega )}{E(\omega )},$$ (1) where $`\epsilon _{\mathrm{\infty }}`$ is the interband dielectric function at frequencies somewhat above the phonon range, $`j_n`$ are the induced currents, $``$ means the volume average, and $`E`$ is the average electric field along the $`c`$-axis. The following currents have to be taken into account: the Josephson current between the planes of a bilayer, $`j_{bl}=-i\omega \epsilon _0\chi _{bl}E_{bl}`$, the Josephson current between the bilayers, $`j_{int}=-i\omega \epsilon _0\chi _{int}E_{int}`$, the current due to the oxygen bending mode at $`320\mathrm{cm}^{-1}`$, $`j_P=-i\omega \epsilon _0\chi _PE_{locP}`$, and the current due to the other three infrared-active modes involving vibrations of ions located between the bilayers (apical oxygens and chain atoms), $`j_A=-i\omega \epsilon _0\chi _AE_{locA}`$. Here $$\chi _{bl}=-\frac{\omega _{bl}^2}{\omega ^2}+\frac{S_{bl}\omega _b^2}{\omega _b^2-\omega ^2-i\omega \gamma _b},\chi _{int}=-\frac{\omega _{int}^2}{\omega ^2}+\frac{S_{int}\omega _b^2}{\omega _b^2-\omega ^2-i\omega \gamma _b},$$ (2) $$\chi _P=\frac{S_P\omega _P^2}{\omega _P^2-\omega ^2-i\omega \gamma _P},\chi _A=\underset{n=1}{\overset{3}{\sum }}\frac{S_n\omega _n^2}{\omega _n^2-\omega ^2-i\omega \gamma _n}$$ (3) are the susceptibilities that enter the model. The plasma frequencies of the intra-bilayer and the inter-bilayer Josephson plasmons are denoted as $`\omega _{bl}`$ and $`\omega _{int}`$, respectively. We do not attribute any physical interpretation to the Lorentzian terms in Eq. (2), which are designed solely to represent the featureless residual electronic background in the frequency range of interest (i.e., from $`200\mathrm{cm}^{-1}`$ to $`700\mathrm{cm}^{-1}`$) in a Kramers-Kronig consistent way. The response of the phonons is described by Lorentzian oscillators as usual.
Further, $`E_{bl}`$ is the average electric field inside a bilayer, $`E_{int}`$ is the average electric field between neighbouring bilayers, $`E_{locP}`$ is the local field acting on the plane oxygens, and $`E_{locA}`$ is the local field acting on the ions located between the bilayers. Note that by identifying $`E_{locP}`$ with the field acting on the plane oxygens we have neglected the contributions of the other ions involved in the phonon (the Y-ion and the chain ions). This seems to be a reasonable approximation, since the contribution of the plane oxygens to the $`320\mathrm{cm}^{-1}`$ mode is known to be dominant. The electric fields $`E_{bl}`$, $`E_{int}`$, $`E_{locP}`$, and $`E_{locA}`$ can be obtained using the following set of equations: $$E_{bl}=E^{\prime }+\frac{\kappa }{\epsilon _0\epsilon _{\mathrm{\infty }}}-\frac{\alpha \chi _PE_{locP}}{\epsilon _{\mathrm{\infty }}},$$ (4) $$E_{int}=E^{\prime }-\frac{\beta \chi _PE_{locP}+\gamma \chi _AE_{locA}}{\epsilon _{\mathrm{\infty }}},$$ (5) $$E_{locP}=E^{\prime }+\frac{\kappa }{2\epsilon _0\epsilon _{\mathrm{\infty }}},$$ (6) $$E_{locA}=E^{\prime },$$ (7) $$-i\omega \kappa =j_{int}-j_{bl},$$ (8) $$E(d_{bl}+d_{int})=E_{bl}d_{bl}+E_{int}d_{int}$$ (9) containing two additional variables, $`\kappa `$ and $`E^{\prime }`$. The former represents the surface charge density of the copper-oxygen planes, which alternates from one plane to the other, whereas $`E^{\prime }`$ is the part of the average internal field $`E`$ that is not due to the effects of $`\kappa `$, $`\chi _P`$, and $`\chi _A`$. The terms in Eqs. (4) and (6) containing $`\kappa `$ represent the fields generated by charge fluctuations between the planes. The terms in Eqs. (4) and (5) containing the phonon susceptibilities represent the fields generated by the displacements of the ions. The values of the numerical factors $`\alpha `$, $`\beta `$, and $`\gamma `$ (1.8, 0.8, 1.4) have been obtained using an electrostatic model. While the feedback effects of the phonons on the electric fields have to be included in order to obtain the observed softening of the oxygen bending mode, they are not essential for explaining the spectral-weight anomalies. Equation (8) guarantees charge conservation. The distances between the planes of a bilayer and between the neighbouring bilayers are denoted by $`d_{bl}`$ ($`d_{bl}=3.3\mathrm{\AA }`$) and $`d_{int}`$ ($`d_{int}=8.4\mathrm{\AA }`$), respectively. Figure 1 (a) shows the experimental spectra of the $`c`$-axis conductivity of YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6.5</sub> with $`T_c=53\mathrm{K}`$ from Ref. . Figures 1(b), 1(c), and 1(d) show the data for (b) $`T=300\mathrm{K}`$, (c) $`T=75\mathrm{K}`$, and (d) $`T=4\mathrm{K}`$ together with the fits obtained by using the model explained above. The values of the parameters used are summarized in Table 1. Those used in computing the room-temperature spectrum have been obtained by fitting the measured complex dielectric function from Ref. (with $`\omega _{bl}=0.0`$ and $`\omega _{int}=0.0`$). Those used in calculating the $`4\mathrm{K}`$ spectrum have also been obtained by fitting the data, except for $`\epsilon _{\mathrm{\infty }}`$, $`\omega _P`$, and the oscillator strengths of the phonons ($`S_P`$, $`S_1`$, $`S_2`$, $`S_3`$), which have been fixed at the room-temperature values. The appearance of the additional peak and the anomalies already at temperatures higher than $`T_c`$ may be caused by pairing fluctuations within the bilayers.
Motivated by this idea, we have fitted the 75 K spectra in the same way as the 4 K ones, allowing only the upper plasma frequency ($`\omega _{bl}`$) to acquire a nonzero value. We shall comment on this point below. Note that the low-temperature value of $`\omega _{int}`$ ($`220\mathrm{cm}^{-1}`$) is rather close to the one obtained from reflectance measurements ($`204\mathrm{cm}^{-1}`$ in Ref. ) and that the screened value of $`\omega _{bl}`$ ($`500\mathrm{cm}^{-1}`$) falls into the frequency region of a broad peak in the loss function ($`\mathrm{Im}(-1/\epsilon )`$), which is a signature of a longitudinal excitation. The values of the phonon frequencies are somewhat different from those which would result from a usual fit of the data (such as in Refs. ). This is because the susceptibilities of Eq. (3) represent response functions with respect to the local fields instead of the average field. In the absence of interlayer currents, the input frequencies would correspond to the LO frequencies, while the frequencies renormalized according to Eqs. (4)-(9) would correspond to the TO ones. Our input frequency of the oxygen bending mode ($`390\mathrm{cm}^{-1}`$) is close to the measured LO frequency. It can be seen in Fig. 1 that the model is capable of providing a good fit of both the normal- and the superconducting-state data without changing the oscillator strengths of the phonons and without any change of the input frequency of the oxygen bending mode. It reproduces successfully: (i) the appearance of the additional peak, its position, broadening, and magnitude; (ii) the loss of the spectral weight of the peak corresponding to the oxygen-bending mode and the pronounced softening of this mode; (iii) the loss of the spectral weight of the peaks corresponding to the apical oxygen modes at $`550\mathrm{cm}^{-1}`$ and $`630\mathrm{cm}^{-1}`$ and the increase of their asymmetry. The intrinsic frequencies of the latter modes have to be slightly increased in order to reproduce the noticeable hardening of these modes. The dotted lines in Figures 1(b), 1(c), and 1(d) represent the results obtained after omitting the phonons in the fitted expressions ($`S_P=S_1=S_2=S_3=0.0`$). It appears that the plasmon peak collects the lost part of the normal-state spectral weight of the phonons. This, however, only accounts for a part of its spectral weight. In the absence of the phonons and for small values of the residual conductivities the spectral weight of the $`\delta `$-peak at $`\omega =0.0`$ is $`S_\delta =(\pi /2)\epsilon _0(d_{bl}+d_{int})\omega _{bl}^2\omega _{int}^2/(d_{bl}\omega _{int}^2+d_{int}\omega _{bl}^2)`$ and the spectral weight of the additional peak is $`S_p=(\pi /2)\epsilon _0(d_{bl}d_{int}/(d_{bl}+d_{int}))(\omega _{bl}^2-\omega _{int}^2)^2/(d_{bl}\omega _{int}^2+d_{int}\omega _{bl}^2)`$. For the values of the two plasma frequencies given in Table 1 we obtain $`S_\delta =1700\mathrm{\Omega }^{-1}\mathrm{cm}^{-2}`$ and $`S_p=10000\mathrm{\Omega }^{-1}\mathrm{cm}^{-2}`$. Note that both $`S_\delta `$ and $`S_p`$ belong to the spectral weight of the superconducting condensate. This should be taken into account in discussing the sum rules as, e.g., in Ref. .
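For readers who wish to experiment with the model, the sketch below assembles $`\epsilon (\omega )`$ by solving Eqs. (4)-(9) for the internal fields and inserting the currents into Eq. (1). Since Table 1 is not reproduced here, all parameter values are illustrative placeholders of mine; the volume weighting of the interlayer currents and the omission of the background Lorentzians of Eq. (2) are likewise my simplifying assumptions, so this is a sketch of the model structure, not a reproduction of the published fits.

```python
import numpy as np

# Illustrative placeholder parameters (frequencies in cm^-1); these are
# NOT the fitted values of Table 1. eps_0 is set to 1 (absorbed units).
eps_inf = 4.7
d_bl, d_int = 3.3, 8.4
a_lf, b_lf, g_lf = 1.8, 0.8, 1.4              # local-field factors alpha, beta, gamma
w_bl, w_int = 1100.0, 320.0                   # intra-/inter-bilayer plasma frequencies
w_P, S_P, g_P = 390.0, 2.5, 15.0              # oxygen bending mode
modes_A = [(290.0, 1.0, 10.0), (570.0, 0.4, 12.0), (640.0, 0.3, 14.0)]

def chi_bl(w):  return -(w_bl / w) ** 2
def chi_int(w): return -(w_int / w) ** 2
def chi_P(w):   return S_P * w_P**2 / (w_P**2 - w**2 - 1j * w * g_P)
def chi_A(w):   return sum(S * w0**2 / (w0**2 - w**2 - 1j * w * g)
                           for w0, S, g in modes_A)

def epsilon(w):
    """eps(w) from Eq. (1): solve Eqs. (4)-(9) for the fields with E' = 1,
    then volume-average the currents (an assumption of this sketch)."""
    cP, cA, cb, ci = chi_P(w), chi_A(w), chi_bl(w), chi_int(w)
    # unknowns x = (E_bl, E_int, E_locP, kappa)
    A = np.array([[0, 0, 1, -1 / (2 * eps_inf)],              # Eq. (6)
                  [1, 0, a_lf * cP / eps_inf, -1 / eps_inf],  # Eq. (4)
                  [0, 1, b_lf * cP / eps_inf, 0],             # Eq. (5)
                  [cb, -ci, 0, 1]], dtype=complex)            # Eq. (8)
    rhs = np.array([1, 1, 1 - g_lf * cA / eps_inf, 0], dtype=complex)
    E_bl, E_int, E_locP, kappa = np.linalg.solve(A, rhs)
    E = (E_bl * d_bl + E_int * d_int) / (d_bl + d_int)        # Eq. (9)
    return (eps_inf
            + (cb * E_bl * d_bl + ci * E_int * d_int) / ((d_bl + d_int) * E)
            + (cP * E_locP + cA * 1.0) / E)                   # E_locA = E' = 1

w = np.linspace(150.0, 750.0, 601)
eps = np.array([epsilon(x) for x in w])
sigma1 = w * eps.imag / 60.0   # sigma_1 in Ohm^-1 cm^-1 for w in cm^-1
```

Switching the plasma frequencies on and off mimics the superconducting and normal states and reproduces qualitatively the transfer of phonon spectral weight into the transverse plasmon peak discussed in the text.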
(2)), the electric fields $`E_{bl}`$ and $`E_{int}`$ are given by: $$E_{bl}=\frac{(d_{bl}+d_{int})\epsilon _{int}}{d_{bl}\epsilon _{int}+d_{int}\epsilon _{bl}}E,E_{int}=\frac{(d_{bl}+d_{int})\epsilon _{bl}}{d_{bl}\epsilon _{int}+d_{int}\epsilon _{bl}}E,$$ (10) where $`\epsilon _{bl}=\epsilon _{\mathrm{\infty }}-\omega _{bl}^2/\omega ^2`$ and $`\epsilon _{int}=\epsilon _{\mathrm{\infty }}-\omega _{int}^2/\omega ^2`$. The low-temperature spectra of $`\epsilon _{bl}`$ and $`\epsilon _{int}`$ are shown as the solid lines in Fig. 2. In the frequency range of the oxygen bending mode, $`\epsilon _{bl}`$ and $`\epsilon _{int}`$ have opposite signs and similar magnitudes, and the same holds for $`E_{int}`$ and $`E_{bl}`$. As a consequence, the local field acting on the plane oxygens, which equals the average of the two fields $`E_{int}`$ and $`E_{bl}`$ (cf. Eqs. (4), (5), and (6)), can become rather small. The frequency range of the modes of the apical oxygen is close to the zero crossing of $`\epsilon _{bl}`$. Consequently, the local field acting on the apical oxygens, $`E_{int}`$, is rather small in this frequency region. It is the decrease of the local fields when going from the normal to the superconducting state which is responsible for the spectral weight anomalies. The room- and low-temperature spectra of $`E_{locP}`$ shown in Fig. 2 illustrate the above considerations. Our model is also capable of explaining the experimentally observed doping dependence of the additional peak and the anomaly of the oxygen bending mode. As the doping increases, the peak shifts towards higher frequencies and becomes broader and less pronounced (see Fig. 10 of Ref. and Fig. 3 of Ref. ). Both of these trends can be easily understood. The first one is due to the progressive increase of the plasma frequencies with hole doping, which is related both to the increase of the condensate density and to the reduction of the charge confinement. The second one is due to the fact that the broadening is proportional to the residual background conductivities, which increase with increasing doping. In addition, the size of the spectral gap ($`2\mathrm{\Delta }_{max}`$) decreases with hole doping and eventually falls below the energy of the transverse optical plasmon around optimum doping. The phonon anomaly appears in the same range of doping as the additional peak. For $`y`$ around 6.8, the spectral weight from the high-frequency side of the phonon peak moves into the additional peak (see Figs. 10 (a) and (b) of Ref. ) as the temperature is lowered. For $`y`$ around 6.6 we find the most pronounced anomaly (see the experimental data of Fig. 1). For even lower doping levels the additional peak and the phonon merge together, forming a single highly asymmetric structure (see Fig. 3 (d) of Ref. ). These trends can be understood using arguments similar to those presented above (see the discussion related to Fig. 2) and can be well reproduced using the model. This is demonstrated in Fig. 3, which displays the experimental spectra of the $`c`$-axis conductivity of YBa<sub>2</sub>Cu<sub>3</sub>O<sub>y</sub> with (a) $`y=6.4`$ ($`T_c=25\mathrm{K}`$) and (b) $`y=6.8`$ ($`T_c=80\mathrm{K}`$) from Ref. together with the fits (Fig. 3(c) and 3(d), respectively). We discuss next the peculiar temperature dependence of the additional peak and the phonon anomalies.
We discuss next the peculiar temperature dependence of the additional peak and the phonon anomalies. The proximity of the onset temperature of the anomalies and the onset temperature of the spin gap ($`T^{}`$) observed in nuclear magnetic resonance experiments has provoked several speculations that the anomalies are due to the coupling of the phonons to spin excitations. From the fact that we are able to fit the data for temperatures between $`T_c`$ and $`T^{}`$ (see Fig. 1(c)) we infer that in this temperature range the intra-bilayer plasmon is already developed. This suggests that many of the electronic and possibly also structural anomalies starting below $`T^{}`$ (see, e.g., Ref. ) are caused by pairing fluctuations within the bilayers. In summary, we have extended the phenomenological model of Van der Marel et al. involving inter-bilayer and intra-bilayer Josephson junctions by including phonons and local field effects. The model allows us to explain not only the additional broad peak around $`450\mathrm{cm}^{-1}`$ but also the spectacular anomaly of the oxygen bending mode at $`320\mathrm{cm}^{-1}`$ and the spectral weight anomalies of the apical oxygen modes at $`550`$ and $`630\mathrm{cm}^{-1}`$. Our results indicate that the closely spaced copper-oxygen planes of underdoped bilayer cuprates are also only weakly (Josephson) coupled. These findings provide support for the conjecture that the $`c`$-axis dynamics of the cuprates (at least the underdoped ones) is dictated by the unconventional properties of the ground state of the electronic system of the planes. We suggest that the onset of the anomalies at temperatures significantly higher than $`T_c`$ may be caused by pairing fluctuations within the bilayers. We thank G. P. Williams and L. Carr for technical support at the U4IR beamline at NSLS and E. Brücher and R. Kremer for SQUID measurements. We acknowledge discussions with M. Grüninger, D. van der Marel, R. Zeyher, T. Strohm, and A. Wittlin. D. M. gratefully acknowledges support by the Alexander von Humboldt Foundation. D. M. and J. H. were supported by the grant VS96102 of the Ministry of Education of the Czech Republic.
# Statistics and spin in two dimensions ## 1 Statistics and spin in two dimensions I would like to begin by reminding you of the fact that in two space dimensions there is a richer set of possibilities than in higher dimensions as far as statistics and spin of particles is concerned. Quantum statistics is determined by the symmetry of the wave function under interchange of particle coordinates, and in three and higher dimensions the corresponding symmetry group is the permutation group. However, when particle interchange is viewed as a continuous process under which the coordinates are changed, then the symmetry group in two dimensions is larger, it is the two-dimensional braid group rather than the permutation group . An element of this group does not only specify the permutation of the particles, but also the windings of the particle trajectories under the interchange of the positions. In dimensions higher than two these windings can be disentangled, since only interchanges corresponding to different permutations of the particles are topologically distinct. This is not possible in two dimensions. For particles on the plane the coordinates can be written as complex variables, $`z=x+iy`$, and for two particles the symmetry under interchange of the particle positions can be expressed as $`\psi \left(e^{in\pi }(z_1-z_2)\right)=e^{in\theta }\psi (z_1-z_2),`$ (1) where only the relative coordinate has been written out explicitly. In this expression $`n`$ is the winding number of the particle trajectory in 2-particle space, and $`\theta `$ is the parameter that specifies the statistics. The symmetry follows from the assumption that all configurations which differ only by an interchange of the particle positions are physically indistinguishable. The wave function for these configurations should therefore differ at most by a phase factor. Also for more than two (identical) particles the symmetry factors have the form $`exp(in\theta )`$ and they define a one-dimensional representation of the braid group for the particles. In two dimensions $`\theta `$ is a free parameter, while in higher dimensions it is restricted to the values $`\theta =0`$ (mod $`2\pi `$) for bosons and $`\theta =\pi `$ (mod $`2\pi `$) for fermions. For values of $`\theta `$ different from these two the particles are said to satisfy intermediate or fractional statistics, and they are referred to as anyons. Also spin is different in two dimensions. In three dimensions the intrinsic spin of a particle is associated with the rotation group $`SO(3)`$. It is regarded as the generator of rotations in the rest frame of the particle. As is well known, the unitary representations of the rotation group $`SO(3)`$ restrict the allowed values of the spin to integer or half-integer multiples of $`\hbar `$. For particles in two dimensions the rotation group is reduced to $`SO(2)`$. This is a one-parameter group with unitary representations $`U(\varphi )=e^{i\varphi S/\hbar },`$ (2) where $`\varphi `$ is the rotation angle. In this case there is no restriction on $`S`$, it can take any real value (I am here actually referring to representations of the covering group of $`SO(2)`$, which are the relevant ones for quantum mechanics). Thus, statistics as well as spin can be regarded as continuous variables in two dimensions. An obvious question to ask is whether these two variables are linked by some kind of spin-statistics relation.
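The exchange rule of Eq. (1) is easy to verify for the simplest anyon relative wave function. The fragment below is a toy check, not part of the original discussion: it continues $`\psi z^{\theta /\pi }`$ along the half winding $`ze^{i\pi }z`$ and confirms that the accumulated phase is exactly $`\theta `$.

```python
import numpy as np

theta = 0.4*np.pi                       # arbitrary statistics parameter
phi = np.linspace(0.0, np.pi, 2001)     # half winding = one interchange (n = 1)
psi = np.exp(1j*(theta/np.pi)*phi)      # psi ~ z^(theta/pi) evaluated on |z| = 1
accumulated = np.unwrap(np.angle(psi))[-1]
print(accumulated/np.pi, theta/np.pi)   # equal: psi picks up the factor e^{i theta}
```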
This question has previously been discussed in different ways, and we know from theoretical constructions that many simple explicit models of two-dimensional particles have such a relation. Here I will consider this question in connection with a concrete realization: quasi-particles in the fractional quantum Hall effect. These quasi-particles are believed, on one hand to be real physical realizations of anyons in a quasi two-dimensional electron system, on the other hand to be well described (in some cases) by simple many-electron wave functions. The question of spin and statistics of these quasi-particles can therefore be examined rather directly, and this has been done in the past. One specific study is due to Einarsson et al. , and my talk is inspired by this paper and can be seen as a comment to their result. ## 2 Spin-statistics relations Since we are considering a non-relativistic system, I would like to stress the point that we cannot expect to find a spin-statistics theorem that on general grounds gives a strict relation between these two particle properties. After all we have a simple counter-example to the standard relation between spin and statistics: spinless fermions described by one-component anti-symmetric wave functions. In the context of non-relativistic many-particle theory there seems to be no problem with such a construction, and this is so for particles in two as well as in three space dimensions. Nevertheless, as soon as one leaves the simple point particle description and makes explicit models where the spin as well as the statistics can be derived from more fundamental fields, the standard spin-statistics relation seems naturally to appear in three-dimensional systems while a linear extension of this relation appears in two dimensions. Let me just mention some examples from two dimensions. A simple electromagnetic model of an anyon is an electric point charge $`e`$ with an attached magnetic flux $`\varphi `$, that is confined to a small region around the charge. (The mechanism that binds the flux to the charge is not so important and neither is the detailed profile of the magnetic field surrounding the charge.) In addition to the Coulomb interaction between such charge-flux composites, there will be an Aharonov-Bohm interaction between the charge of one composite and the flux of the other. When two composites are interchanged the latter gives rise to a phase factor that can be identified with the statistics factor. A simple calculation gives for the statistics parameter $`\theta ={\displaystyle \frac{e\varphi }{\hbar c}}.`$ (3) There is an electromagnetic spin associated with a charge-flux composite, due to the overlap of the electric and magnetic fields. Using the expression for electromagnetic angular momentum reduced to its two-dimensional form, we calculate the spin to be $`S={\displaystyle \frac{1}{c}}{\displaystyle \int d^2r\,B\,\stackrel{}{r}\cdot \stackrel{}{E}}={\displaystyle \frac{e\varphi }{2\pi c}}.`$ (4) We note that the statistics parameter and the spin both are determined by the same quantity $`e\varphi `$. A second example is provided by soliton solutions in the $`O(3)`$ non-linear $`\sigma `$-model with a topological (Hopf) term . In this case the strength of the topological term determines the spin as well as the statistics of the solitons. A third example is given by the particles described by a scalar field theory with Chern-Simons coupling . The Chern-Simons field gives an explicit realization of fractional statistics in the form of an Aharonov-Bohm effect.
It also affects the conserved angular momentum and thereby links the spin to the statistics of the particles. In the examples referred to above (as well as in some other examples) the relation between spin and statistics has the simple form $`S=\left[{\displaystyle \frac{\theta }{2\pi }}(\mathrm{mod}\mathrm{\hspace{0.17em}}1)\right]\hbar .`$ (5) It coincides with the standard relation for bosons ($`\theta =0`$) and fermions ($`\theta =\pi `$) and extends that linearly to all other values of the statistics parameter $`\theta `$. Even if the simple relation (5) is favoured by many anyon models, we do not have a clear specification of the general conditions under which the relation should be satisfied. There do exist, however, some general arguments for a less restrictive form of the spin-statistics relation that are based on the assumption that there exist both anyons and anti-anyons in the system under consideration. Let me briefly give the arguments for this generalized spin-statistics relation, since it is relevant for the quantum Hall quasi-particles. We then assume that there exist fractional statistics particles of a type we denote by $`p`$ (with some unspecified statistics parameter $`\theta `$). There also exists another type of particles $`\overline{p}`$, that we consider as anti-particles to $`p`$. Since we are not considering a relativistic theory, we do not assume charge conjugation symmetry (symmetry between $`p`$ and $`\overline{p}`$). The important point is the assumption that a $`p\overline{p}`$ pair can be created and annihilated inside the system. This means that all long range effects of a single particle are canceled by the corresponding effects of an anti-particle. This has consequences for statistics as well as for spin. For a $`p\overline{p}`$ pair there are no long-range Aharonov-Bohm effects. That means that the phase factor introduced by transport of another particle of type $`p`$ around the pair is the trivial factor $`1`$ for a path far away from the two particles. If these two particles also are sufficiently far apart, the phase factor can be written as a product of one factor from each of the particles in the pair. We write this as $`exp\left(2i(\theta _{pp}+\theta _{p\overline{p}})\right)=1.`$ (6) We easily see that $`\theta _{pp}`$ is identical to the statistics phase $`\theta `$ of particles $`p`$. The other phase $`\theta _{p\overline{p}}`$ is sometimes referred to as a mutual statistics phase. It describes an Aharonov-Bohm interaction between two non-identical particles $`p`$ and $`\overline{p}`$. Clearly we have a similar condition when a particle of type $`\overline{p}`$ is transported around the pair, $`exp\left(2i(\theta _{\overline{p}\overline{p}}+\theta _{\overline{p}p})\right)=1.`$ (7) The two conditions (6) and (7), and the symmetry relation $`\theta _{\overline{p}p}=\theta _{p\overline{p}}`$, mean that all phases can be expressed in terms of a single phase $`\theta `$, $`\theta _{\overline{p}\overline{p}}=\theta _{pp}`$ $`=`$ $`\theta (\mathrm{mod}\mathrm{\hspace{0.17em}}\pi )`$ $`\theta _{\overline{p}p}=\theta _{p\overline{p}}`$ $`=`$ $`-\theta (\mathrm{mod}\mathrm{\hspace{0.17em}}\pi ).`$ (8) A rotation of the $`p\overline{p}`$ pair by an angle $`2\pi `$ also has to give rise to a trivial phase factor.
We write this as $`exp\left(2\pi {\displaystyle \frac{i}{\hbar }}(L_{cm}+L_{rel}+S_p+S_{\overline{p}})\right)=1.`$ (9) The orbital angular momentum has here been divided into a center-of-mass part $`L_{cm}`$ and a part determined by the relative motion, $`L_{rel}`$; $`S_p`$ and $`S_{\overline{p}}`$ are the intrinsic spins of the two particles. $`L_{cm}`$ has integer eigenvalues in multiples of $`\hbar `$, while the spectrum of $`L_{rel}`$ is shifted due to the nontrivial phase $`\theta _{p\overline{p}}`$. The eigenvalues are $`(n-\theta /\pi )\hbar ,n=0,\pm 1,\pm 2,\mathrm{\ldots }`$ With this inserted in (9) we get $`{\displaystyle \frac{1}{2}}(S_p+S_{\overline{p}})=\left[{\displaystyle \frac{\theta }{2\pi }}(\mathrm{mod}\mathrm{\hspace{0.17em}}{\displaystyle \frac{1}{2}})\right]\hbar .`$ (10) This is the generalized spin-statistics relation. It only involves the sum of the spins of the anyon and the anti-anyon. Even if these two spins are equal we note the relation is less restrictive than the relation (5). It does not exclude spinless fermions or bosons with half-integer spin. ## 3 Anyons in the quantum Hall system The quasi-particles of the quantum Hall system are charged excitations in a 2-dimensional electron gas subject to a strong perpendicular magnetic field. In general the quasi-particles are fractionally charged and obey fractional statistics; they are charged anyons in a strong magnetic field. For special filling fractions of the lowest Landau level, $`\nu =1/m`$, $`m`$ odd, there exist simple (trial) wave functions, originally introduced by Laughlin , for the ground state of the many-electron system as well as for the quasi-particle excitations. Expressed in complex electron coordinates, the (non-normalized) $`N`$-electron ground state has the form $`\psi _m(z_1,z_2,\mathrm{\ldots },z_N)={\displaystyle \prod _{i<j}}(z_i-z_j)^me^{-\frac{1}{4\mathrm{\ell }^2}\sum _{k=1}^{N}\left|z_k\right|^2},`$ (11) with $`\mathrm{\ell }=\sqrt{\frac{\hbar c}{eB}}`$ as the magnetic length, and $`eB`$ taken to be positive. The one quasi-hole state is $`\psi _Z^{qh}(z_1,z_2,\mathrm{\ldots },z_N)={\displaystyle \prod _{i=1}^{N}}(z_i-Z)\psi _m(z_1,z_2,\mathrm{\ldots },z_N),`$ (12) with $`Z`$ as the position of the quasi-hole. Multi-hole wave functions are constructed in a similar way, with several prefactors of the form given in Eq.(12). For the oppositely charged quasi-electron Laughlin has suggested a wavefunction of the form $`\psi _Z^{qe}(z_1,z_2,\mathrm{\ldots },z_N)={\displaystyle \prod _{i=1}^{N}}({\displaystyle \frac{\partial }{\partial z_i}}-Z^{\ast })\psi _m(z_1,z_2,\mathrm{\ldots },z_N).`$ (13) Supported by general arguments, as well as numerical studies, the ground state and the quasi-hole state are believed to be very well represented by the wave functions (11) and (12) (in a homogeneous system). However there is an asymmetry between the quasi-hole and the quasi-electron, and one should note that there is not a similarly strong evidence in favour of the quasi-electron wave function (13) (for a recent discussion see Ref. ). The form of the quasi-particle wave functions determines the fractional charge as well as the fractional statistics. This was demonstrated by Arovas, Schrieffer and Wilczek who calculated the Berry phases associated with shifts of the quasi-particle coordinates along closed curves .
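The states (11) and (12) are straightforward to evaluate numerically, which is useful for the Berry-phase and Monte Carlo considerations below. The sketch works with logarithms to avoid overflow; the positions are arbitrary illustrative values and the magnetic length is set to one.

```python
import numpy as np

def log_psi(z, m=3, Z=None):
    """log of the non-normalized Laughlin state, Eq. (11); if Z is given,
    the quasi-hole prefactor of Eq. (12) is included (l = 1)."""
    d = z[:, None] - z[None, :]
    iu = np.triu_indices(len(z), 1)
    val = m*np.sum(np.log(d[iu])) - np.sum(np.abs(z)**2)/4.0
    if Z is not None:
        val += np.sum(np.log(z - Z))
    return val

z = np.array([1.0 + 0.5j, -0.7 + 0.2j, 0.3 - 1.1j])   # three electrons
print(log_psi(z), log_psi(z, Z=0.2 + 0.1j))
```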
Let me give a brief comment on this in general terms. The wave functions for configurations with $`M`$ quasi-holes define an $`M`$ (complex) dimensional submanifold in the $`N`$-electron Hilbert space parameterized by the quasi-hole coordinates. A fractional statistics representation (or anyon representation) of the system can be introduced in terms of wave functions defined on this manifold, $`\psi (Z_1,Z_2,\mathrm{\ldots },Z_M)`$. The $`M`$-dimensional manifold, on which the wave functions are defined, can be interpreted as the configuration space (alternatively as the phase space) of the (classical) $`M`$ quasi-hole system. In a low-energy approximation we may consider the system restricted to this space. The kinematics as well as the dynamics of the quasi-hole system are determined from the $`N`$-electron system by projection on the complex submanifold. In particular, the kinematics is determined from the geometry of the manifold, and the charge and the statistics appear as geometrically determined parameters. The scalar product of the $`N`$-electron Hilbert space defines, by projection, a complex geometry in the $`M`$-dimensional quasi-hole space. It is expressed in terms of the Hermitian matrix $`\eta _{kl}=\langle D_k\psi |D_l\psi \rangle ,`$ (14) with $`D_k=\partial _k+iA_k,A_k=-i\langle \psi |\partial _k\psi \rangle .`$ (15) $`|\psi \rangle `$ denotes the $`M`$-quasi-hole state and $`\partial _k`$ is the partial derivative with respect to a set of real coordinates in the quasi-hole space. $`A_k`$ is the Berry connection defined by the set of quasi-particle states. The real (and symmetric) part of $`\eta _{kl}`$ determines a metric on the $`M`$ quasi-particle space $`g_{kl}=Re\langle D_k\psi |D_l\psi \rangle ,`$ (16) while the imaginary (and anti-symmetric) part determines a symplectic form, that we identify as the “Berry magnetic field”, $`b_{kl}=2Im\langle \partial _k\psi |\partial _l\psi \rangle =\partial _kA_l-\partial _lA_k.`$ (17) For a single quasi-hole the form of $`\eta _{kl}`$ is strongly restricted by translational and rotational invariance (in the limit $`N\to \mathrm{\infty }`$) and by analyticity in the variable $`Z`$ , $`\eta _{kl}={\displaystyle \frac{b_1}{2}}(\delta _{kl}+iϵ_{kl}).`$ (18) Here $`b_1`$ is a constant that can be expressed in terms of the real magnetic field, $`b_1=\frac{e^{}B}{\hbar c}`$, with the coefficient $`e^{}`$ as the effective charge of the quasi-hole. A Berry phase calculation for a loop in the plane determines the flux of $`b_1`$ through this loop, and comparison with the real magnetic flux then gives the effective charge $`e^{}`$ . For a two quasi-hole state an expression similar to (18) is valid for $`\eta _{kl}`$, if this now refers to the relative coordinate of the two quasi-holes. In this case $`b_1`$ is replaced by a function $`b_2(R)`$ that depends on the relative distance $`R`$. For small $`R`$ the form of this function is determined by local properties of the quasi-holes. For large $`R`$, $`b_2(R)`$ is expected to approach rapidly the constant $`\frac{1}{2}b_1`$ when the quasi-holes are well localized objects. The flux of $`b_2`$ then has the form $`{\displaystyle \int _{r<R}}d^2rb_2(R)={\displaystyle \frac{1}{2}}\pi R^2b_1-2\theta ,`$ (19) where $`\theta `$ is identified as the statistics parameter of the quasi-holes. Again this parameter can be determined by a Berry phase calculation, that measures the flux of $`b_2(R)`$ within a given radius. Berry phase calculations based on the quasi-hole wave function (12) give $`e^{}=e/m`$ for the charge and $`\theta =-\pi /m`$ for the statistics parameter, with $`e`$ as the electron charge .
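These geometric quantities are conveniently extracted from a gauge-invariant discretization: the Berry phase of a closed loop is the argument of the product of overlaps of the states along the loop. As a minimal illustration (a single lowest-Landau-level coherent state rather than a many-electron quasi-hole state, with the standard overlap formula and $`\mathrm{\ell }=1`$ assumed), the phase per loop counts the enclosed flux, which is exactly how $`e^{}`$ is read off.

```python
import numpy as np

def overlap(Z1, Z2):
    # overlap of lowest-Landau-level coherent states centered at Z1, Z2 (l = 1)
    return np.exp(np.conj(Z1)*Z2/2 - abs(Z1)**2/4 - abs(Z2)**2/4)

R, K = 3.0, 4000
Zs = R*np.exp(2j*np.pi*np.arange(K + 1)/K)        # discretized circular loop
gamma = sum(np.angle(overlap(Zs[j], Zs[j + 1])) for j in range(K))
print(abs(gamma)/(2*np.pi), R**2/2)               # both ~4.5: flux quanta enclosed
```

For the quasi-hole states themselves the same product-of-overlaps construction applies, with the overlaps evaluated by integration over the electron coordinates (in practice by Monte Carlo).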
For the quasi-electron wave function (13) one cannot derive the results so easily , but the expected results for the physical quasi-electron are $`e^{}=-e/m`$ and $`\theta =-2\pi /m`$, as determined from general reasoning and numerical studies . Whereas charge and statistics can be determined geometrically, in terms of Berry phases associated with closed curves of one and two quasi-particles, the spin cannot be determined quite as easily. However, as pointed out by Einarsson and Li there is a way to derive spin from Berry phases, provided the particles move in a curved space. If the spin can be viewed as a three-dimensional spin constrained to point in the direction orthogonal to the two-dimensional surface, there will be a contribution to the Berry phase when transporting the quasi-particle around a loop that is proportional to the product of the spin value and the solid angle traced out by the spin . This suggests the following form of the Berry magnetic field $`b_1={\displaystyle \frac{e^{}B}{\hbar c}}-{\displaystyle \frac{S}{\hbar }}\kappa ,`$ (20) with $`\kappa `$ as the Gauss curvature and the coefficient $`S`$ as the spin. It is not obvious that calculations of Berry phases for quasi-holes will give a separation in two terms of this form, but if they do, the spin can be determined from the Berry phases. This is the assumption made in . In this case a quantum Hall system with the geometry of a sphere is considered. One should note that in this case the magnetic field $`B`$ as well as the curvature $`\kappa `$ are constants. That means that there is no clear distinction between the two contributions to the Berry phase in Eq.(20). However, if the charge $`e^{}`$ of the quasi-particle on the sphere is the same as the quasi-particle charge on the plane (which seems reasonable), then the second term can be separated from the first one and the spin can be determined. ## 4 Quantum Hall states on the sphere In practice, a quantum Hall system with the geometry of a sphere can hardly be created. A radially directed magnetic field would be needed, which means that a magnetic monopole would have to be found and placed at the center of the sphere. However, as a theoretical construction a spherical Hall system can easily be created, and as first shown by Haldane such a geometry may conveniently be used in the study of certain aspects of the quantum Hall effect . Also for numerical calculations it is convenient due to the lack of boundaries . To have a consistent quantum description of the electrons in the monopole field, Dirac’s quantization condition has to be satisfied, $`{\displaystyle \frac{e\varphi }{4\pi \hbar c}}={\displaystyle \frac{1}{2}}N_\varphi ,`$ (21) where $`\varphi `$ is the total flux of the monopole field and $`N_\varphi `$ is an integer. This means that the total magnetic flux through the sphere is quantized in units of the flux quantum $`\varphi _0=\frac{hc}{e}`$, $`\varphi =N_\varphi \varphi _0,`$ (22) with $`N_\varphi `$ as the number of flux quanta. Laughlin states like (11), (12) and (13) can be constructed on the sphere and can conveniently be expressed in terms of the coordinates $`u=\mathrm{cos}(\theta /2)`$ and $`v=\mathrm{sin}(\theta /2)exp(i\varphi )`$, with $`\theta `$ and $`\varphi `$ as the polar coordinates on the sphere.
The form of the ground state is (in the Dirac gauge $`e\stackrel{}{A}=eB\mathrm{tan}\frac{\theta }{2}\stackrel{}{e}_\varphi `$) $`\psi _m={\displaystyle \prod _{i<j}}(u_iv_j-u_jv_i)^m,N_\varphi =m(N-1),`$ (23) and this is non-degenerate, with all particles in the lowest Landau level, provided the number of electrons $`N`$ is linked to the number of flux quanta $`N_\varphi `$ as indicated above. If one flux quantum is added, a hole state is created, $`\psi _{UV}^{qh}={\displaystyle \prod _i}(Vu_i-Uv_i)\psi _m,N_\varphi =m(N-1)+1,`$ (24) with $`(U,V)`$ as the quasi-hole coordinates, and if one flux quantum is removed, a quasi-electron state is created, $`\psi _{UV}^{qe}={\displaystyle \prod _i}(V^{\ast }{\displaystyle \frac{\partial }{\partial u_i}}-U^{\ast }{\displaystyle \frac{\partial }{\partial v_i}})\psi _m,N_\varphi =m(N-1)-1,`$ (25) now with $`(U,V)`$ as the quasi-electron coordinates. For the quasi-hole state a detailed calculation of the Berry phase has been performed in Ref. , with a discussion of the different contributions. I will not repeat that here; let me rather show how the result concerning the spin can be derived directly from rotational invariance, without reference to Berry phases. This derivation is based on the assumption that the quasi-particle can be represented as a particle with charge $`e^{}`$ in the monopole field. For a single electron moving in a magnetic monopole field, the conserved angular momentum has the form $`\stackrel{}{J}=\stackrel{}{r}\times \stackrel{}{\pi }+\mu \stackrel{}{\widehat{r}},`$ (26) with $`\stackrel{}{\pi }`$ as the mechanical momentum, $`\stackrel{}{\pi }=\stackrel{}{p}-{\displaystyle \frac{e}{c}}\stackrel{}{A},`$ (27) and $`\mu ={\displaystyle \frac{e\varphi }{4\pi c}}`$ (28) as the component of the total angular momentum in the radial direction $`\stackrel{}{\widehat{r}}`$. This spin can be identified as the electromagnetic angular momentum due to the overlap of the electric field of the charge with the magnetic monopole field. This radially directed spin is quantized due to the Dirac condition, $`\mu ={\displaystyle \frac{1}{2}}N_\varphi \hbar ,`$ (29) and this quantization condition can alternatively be derived directly from the requirement of rotational invariance, i.e. from the condition that the operator $`\stackrel{}{J}`$ should generate unitary representations of the rotation group. Thus, there are two invariants associated with the angular momentum, $`\stackrel{}{J}^2=j(j+1)\hbar ^2,\widehat{\stackrel{}{r}}\cdot \stackrel{}{J}=\mu ,`$ (30) with the restriction $`j=|\mu |,|\mu |+1,\mathrm{\ldots }`$ (31) The smallest value of $`j`$ can be identified as corresponding to the lowest Landau level, and as on the plane, the mechanical part of the angular momentum then has its smallest value. For $`N`$ electrons the total angular momentum is the sum of the contributions from each electron, $`\stackrel{}{J}={\displaystyle \sum _{i=1}^{N}}\stackrel{}{J}_i.`$ (32) The ground state (23) is rotationally symmetric, with $`j=0`$, while the spin of the quasi-hole state (24) is $`j=N/2`$. In the anyon representation the quasi-hole is represented as a (single) charged particle in the monopole field. If we assume that it can be treated as a point particle, the angular momentum has the same form as for a single electron, $`\stackrel{}{J}=\stackrel{}{r}\times \stackrel{}{\pi }+(\mu ^{}+S)\stackrel{}{\widehat{r}}.`$ (33) In this expression $`\stackrel{}{r}`$ is the quasi-hole coordinate and $`\mu ^{}=\frac{e^{}\varphi }{4\pi c}`$ is the radially directed electromagnetic spin.
$`S`$ is a possible additional radially directed spin, an intrinsic spin of the quasi-hole. We note that such an additional spin in fact has to be added in order to preserve rotational invariance. If $`e^{}`$ is taken to be identical to the charge $`e/m`$ of a quasi-hole in a planar system, then $`\mu ^{}=N_\varphi /2m`$. This is in general not a half-integer, and the condition for rotational invariance is therefore not satisfied with $`S=0`$. The value of $`S`$ can be determined if we identify the anyon coordinates with the coordinates $`(U,V)`$ of the quasi-hole state (24). The spin component of this state in the $`(U,V)`$ direction is $`N/2`$, and this gives the relation $`{\displaystyle \frac{1}{2m}}N_\varphi +S={\displaystyle \frac{1}{2}}N.`$ (34) With the number of flux quanta related to the electron number as indicated in Eq.(24) this gives the spin value $`S_{qh}={\displaystyle \frac{1}{2}}-{\displaystyle \frac{1}{2m}}={\displaystyle \frac{1}{2}}+{\displaystyle \frac{\theta }{2\pi }},`$ (35) where $`qh`$ now labels the spin of the quasi-hole. This result for the spin is the same as the one determined by Berry phase calculations . We note that the spin-statistics relation given by (35) is not identical to the relation (5) indicated by the anyon models referred to at an earlier stage. There is an additional term $`1/2`$ that looks like a shift between the boson and fermion value of $`\theta `$. However, one should also note that the contribution from the intrinsic spin of the electrons has not been included here. For fully polarized electrons in the plane this contribution is $`-1/2m`$. For large electron numbers, this contribution is presumably the same on the sphere. Thus, with all contributions included we get $`S_{qh}=\frac{1}{2}-\frac{1}{m}=\frac{1}{2}+\frac{\theta }{\pi }`$, and we still do not recover the relation (5). The only exception is for $`m=1`$, the case of a fully occupied lowest Landau level. The spin is then $`-1/2`$, in accordance with the standard spin-statistics relation. The quasi-electron state (25) can be examined in a similar way. The spin component in the radial direction in this case has the opposite sign and there is also a change in the relation between the number of flux quanta and the electron number. The spin value now is $`S_{qe}=-{\displaystyle \frac{1}{2}}-{\displaystyle \frac{1}{2m}}=-{\displaystyle \frac{1}{2}}+{\displaystyle \frac{\theta }{2\pi }}.`$ (36) The contribution from the intrinsic spin of the electrons in this case is $`+1/2m`$, which gives the total spin $`S_{qe}=-\frac{1}{2}`$. Also here the original spin-statistics relation is not satisfied. However, the two expressions (35) and (36) show that the generalized spin-statistics relation is satisfied in the form $`{\displaystyle \frac{1}{2}}(S_{qh}+S_{qe})={\displaystyle \frac{\theta }{2\pi }}.`$ (37) That is the case also when the contributions from the intrinsic spin of the electrons are included, since the contribution to the quasi-electron spin is the same as the contribution to the quasi-hole spin, but with opposite sign.
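The arithmetic behind Eqs. (35)-(37) can be checked exactly with rational numbers; the convention $`\theta =-\pi /m`$ used above is hard-wired in the little sketch below.

```python
from fractions import Fraction as F

for m in (1, 3, 5):
    theta_over_2pi = F(-1, 2*m)                    # theta = -pi/m
    S_qh = F(1, 2) - F(1, 2*m)                     # Eq. (35)
    S_qe = -F(1, 2) - F(1, 2*m)                    # Eq. (36)
    assert (S_qh + S_qe)/2 == theta_over_2pi       # Eq. (37)
    # electron-spin contributions: -1/(2m) for the hole, +1/(2m) for the electron
    assert ((S_qh - F(1, 2*m)) + (S_qe + F(1, 2*m)))/2 == theta_over_2pi
    print(m, S_qh, S_qe)                           # m=1: 0 and -1, etc.
```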
## 5 Spin on the sphere – spin on the plane The spin values (35) and (36) are determined for quasi-particles on a sphere. What conclusion can we now draw concerning quasi-particles in a planar system? Is there a local spin associated with the quasi-particles with value identical to the one found on the sphere? The discussion we find in Ref. , and also the results found in a paper by Sondhi and Kivelson , do not support this conclusion (somewhat surprisingly, this is not seen as a problem in , with the explanation that the spin in the planar system does not have a dynamical significance). Thus, if their conclusions are correct, there is no simple relation between the spin of the quasi-particle on the sphere and a spin derived from the angular momentum of the electrons in a planar system. This is somewhat disappointing since the main motivation for putting the quasi-particles on the sphere, I assume, was to be able to visualize the quasi-particle spin, not to create the spin. The usual picture of the quasi-particle excitations is that they are strongly localized in space and that they have particle-like properties with sharply defined quantum numbers such as charge, mass and possibly spin. If the quasi-particle spin determined on the sphere is not the same as the quasi-particle spin on the plane, that presumably means that it cannot be thought of as a local spin associated with the quasi-particle. The spin could in principle be due to a small renormalization of the charge of the quasi-particle when put on a sphere, $`e_{sphere}^{}=e^{}\left(1+{\displaystyle \frac{m-1}{mN}}\right).`$ (38) However, the $`N`$ dependence of the correction term does not seem to fit the picture of the quasi-particle as a strongly localized object. Let me briefly discuss the question of the quasi-particle spin for a planar system. The normal component of the conserved angular momentum of an electron in a homogeneous magnetic field is $`J=(\stackrel{}{r}\times \stackrel{}{\pi })_z+{\displaystyle \frac{eB}{2c}}r^2,`$ (39) with $`\stackrel{}{r}`$ as a vector in the $`(x,y)`$-plane. The first term is the mechanical angular momentum of the circulating electron, whereas the second term can be interpreted as the electromagnetic spin (with an infinite $`\stackrel{}{r}`$-independent term subtracted). For electrons in the lowest Landau level, the conserved angular momentum can be written in the form $`J=\hbar \left(-{\displaystyle \int d^2r\rho (r)}+{\displaystyle \frac{1}{2\mathrm{\ell }^2}}{\displaystyle \int d^2rr^2\rho (r)}\right),`$ (40) with $`\rho `$ as the particle density. The first term, the mechanical angular momentum, is proportional to the particle number, since all electrons in the lowest Landau level carry one unit of (mechanical) angular momentum. The second term is the contribution from the electromagnetic angular momentum. It has the opposite sign of the first term and dominates it, so that all angular momentum eigenvalues are non-negative. The total angular momentum (40) diverges with the size of the system, the first term as the electron number $`N`$ and the second term as $`N^2`$. This is so for the ground state (11) as well as for the quasi-particle states (12) and (13). Clearly, if a local, finite spin should be associated with the quasi-particle, one has in some way to subtract the angular momentum of the ground state. A simple definition of the quasi-particle spin would be $`S_{qp}=\underset{R\to \mathrm{\infty }}{lim}(J_{qp}(R)-J_0(R)),`$ (41) where $`J_{qp}(R)`$ is the total angular momentum of the quasi-particle state within a radius $`R`$ and $`J_0(R)`$ the angular momentum of the ground state within the same radius. The size of the electron system is here regarded as infinite. Even if these two terms diverge separately for large $`R`$, the difference should stay finite and give a well-defined value for the spin.
The first term of the angular momentum (40) gives a contribution to the quasi-particle spin (after the subtraction of the ground state spin) which is determined by the charge of the quasi-particle. The contribution is $`\pm 1/m`$, with $`+`$ for the quasi-hole and $`-`$ for the quasi-electron. The second term is not as easy to determine as the first one, but in the paper by Sondhi and Kivelson (where a similar definition of the quasi-particle spin is used), there is a discussion of the quasi-hole case. In this case the plasma analogy, introduced by Laughlin, can be applied. In the plasma analogy the square modulus of the quasi-hole wave function (12) is interpreted as the partition function of a Coulomb system consisting of $`N`$ free (unit) charges in a homogeneous neutralizing background, with the presence of an additional fixed charge of value $`1/m`$ (the quasi-hole). The integrated particle number is then determined by the screening charge of this fixed charge, with the value $`-1/m`$. Also the second moment of the particle number density, which is relevant for the second term of the angular momentum, can be related to the value of the charge. In fact, assuming the conditions for “perfect screening” to be satisfied , there is a cancellation between the two terms of the angular momentum so that the quasi-hole spin, as defined above, vanishes. This is the conclusion of Sondhi and Kivelson (they also consider corrections to the spin due to the electromagnetic self-interaction of the quasi-hole; such corrections are important in order to give the correct value of the spin for the physical quasi-hole, but have not been taken into consideration here). With this conclusion it is difficult to see any connection between the physical spin of the quasi-hole state in the plane and the spin determined on the sphere. If the physical spin vanishes for any value of $`m`$, this in fact rules out any connection between the (physical) spin and the statistics parameter of the quasi-particles. However, as a final point I would like to pose the question whether the conclusion concerning the spin, which is based on the use of the plasma analogy, is necessarily true, or whether another conclusion may be possible. Clearly, for a full Landau level, with $`m=1`$, the quasi-hole spin vanishes since the hole is created simply by removing an electron in a spin $`0`$ state. For $`m=3`$ the situation is not quite as obvious and one has to refer to the situation in a one-component plasma with a $`1/3`$ charge screened by a plasma of integer charges. I am not able to judge the claim that the perfect screening condition is satisfied in this case, but I have noted with interest that in Ref. one refers to a “basic belief” in the underlying assumption when the perfect screening sum rule is derived. There is of course a way to avoid the reference to the plasma analogy. That is to make a straightforward calculation of the spin (41) of the planar system, and I will cite some preliminary results from Monte Carlo calculations performed by Heidi Kjønsberg for an electron system consisting of $`N=100`$ electrons. The numerical calculations produce values for the integrated quasi-hole spin $`S_{qh}`$ within a variable radius $`R`$ around the quasi-hole, which is placed at the center of the circular electron system defined by the Laughlin wave function.
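Such a direct evaluation of Eq. (41) amounts to sampling $`|\psi |^2`$ by Metropolis Monte Carlo for the ground state and the quasi-hole state and subtracting the two integrated angular momenta. The sketch below is a scaled-down illustration of this procedure (a small assumed $`N`$, simple single-particle moves, $`\mathrm{\ell }=1`$), not the $`N=100`$ computation cited above; convergence and edge effects require far more care in a real calculation.

```python
import numpy as np
rng = np.random.default_rng(0)

m, N = 3, 8
nburn, nsamp, step = 5000, 50000, 0.8

def log_w(z, Z=None):
    # log|psi|^2 for Eq. (11), with the quasi-hole factor of Eq. (12) if Z given
    d = z[:, None] - z[None, :]
    iu = np.triu_indices(N, 1)
    lw = 2*m*np.sum(np.log(np.abs(d[iu]))) - 0.5*np.sum(np.abs(z)**2)
    if Z is not None:
        lw += 2*np.sum(np.log(np.abs(z - Z)))
    return lw

def J_of_R(Z, R):
    # Metropolis estimate of J(R)/hbar = -N(R) + <r^2>_R/2, cf. Eq. (40)
    z = 0.5*np.sqrt(m*N)*(rng.standard_normal(N) + 1j*rng.standard_normal(N))
    lw, s_n, s_r2 = log_w(z, Z), 0.0, 0.0
    for it in range(nburn + nsamp):
        i = rng.integers(N)
        znew = z.copy()
        znew[i] += step*(rng.standard_normal() + 1j*rng.standard_normal())
        lwnew = log_w(znew, Z)
        if np.log(rng.random()) < lwnew - lw:
            z, lw = znew, lwnew
        if it >= nburn:
            inside = np.abs(z) < R
            s_n += inside.sum()
            s_r2 += np.sum(np.abs(z[inside])**2)
    return (-s_n + 0.5*s_r2)/nsamp

R = 0.7*np.sqrt(2.0*m*N)                # well inside the droplet edge
print("S(R)/hbar ~", J_of_R(0j, R) - J_of_R(None, R))
```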
Let me first give some values for the spin evaluated on the sphere, as given by Eq. (35). For $`m=1`$ the spin is $`0`$, for $`m=3`$ the spin is $`1/3`$ and for $`m=5`$ the spin is $`2/5`$, all spins expressed in units of $`\hbar `$. The numerical results for the planar system agree well with the value $`0`$ for the $`m=1`$ state. However, for $`m=3`$ this is not the case. For values of the radius $`R`$ that lie between the size of the quasi-hole and the size of the full electron system, the results indicate instead a fairly stable value close to $`1/3`$, which agrees with the value found on the sphere. For $`m=5`$ the results are not so clear, due to larger finite size effects and also due to larger statistical fluctuations in the Monte Carlo calculations. Nevertheless, also here the results indicate a spin value different from $`0`$ and possibly consistent with the value $`2/5`$. So I would like to finish by referring to the question of the spin of the quasi-hole as an interesting one which deserves further study. I feel that the situation in a sense would be more satisfying if the spin evaluated on the sphere could be identified as the physical spin of the quasi-particle also for a planar quantum Hall system. But such a conclusion would raise some new and interesting questions concerning the use of the plasma sum rules for the Laughlin states. ## Acknowledgments I appreciate the help of Heidi Kjønsberg, who has performed the numerical calculations referred to in this paper. I am grateful to Hans Hansson and Anders Karlhede for several helpful comments and would like to thank their group at the Department of Physics, Stockholm University, for hospitality during a stay in February 1999.
# Remarks on Cosmic String Formation during Preheating on Lattice Simulations ## Abstract We reconsider the formation of (global) cosmic strings during and after preheating by calculating the dynamics of a scalar field on both two- and three-dimensional lattices. We have found that there is little difference between the results in two and three dimensions concerning the dynamics of fluctuations, at least during preheating. Practically, it is difficult to determine whether long cosmic strings which may affect the later evolution of the universe could ever be produced from the results of simulations on three-dimensional lattices with box sizes smaller than the horizon. Therefore, using two-dimensional lattices with large box size, we have found that cosmic strings with the breaking scale $`\eta \lesssim 10^{16}\mathrm{GeV}`$ are produced for a broad range of $`\eta `$, while for higher breaking scales ($`\eta \gtrsim 3\times 10^{16}\mathrm{GeV}`$) their production depends crucially on the value of the breaking scale $`\eta `$ in our simulations. One of the hot topics at the reheating stage after inflation is the possibility of the formation of topological defects . This phenomenon is due to large nonthermal fluctuations during preheating and efficient rescattering, both of which are caused by the Bose enhancement effects. At the preheating stage, very large nonthermal fluctuations are produced, $`\langle \delta \varphi ^2\rangle \sim c^2M_p^2`$, where $`M_p`$ is the Planck mass and $`c=10^{-2}`$–$`10^{-3}`$ . These fluctuations change the shape of the effective potential of the field $`\varphi `$ into a form with a minimum at the origin, if the potential $`V(\varphi )`$ is of the spontaneous symmetry-breaking type. It may be regarded that the symmetry is restored . Later, when the amplitude of these fluctuations is redshifted away by the cosmic expansion, the symmetry is spontaneously broken and topological defects may be created. Thus, the mechanism for producing the topological defects seems to be somewhat similar to the Kibble mechanism in high temperature theory. For an order estimation of the critical breaking scale above which cosmic strings are not formed, it is sufficient to look at the amplitude of the nonthermal fluctuations produced during preheating: $`\langle \delta \varphi ^2\rangle ^{1/2}\sim 10^{16}\mathrm{GeV}`$ . This order estimation is in good agreement with numerical estimations done in Refs. . But numerical simulations on lattices which follow the dynamics of the scalar fields reveal several new results, such as the effects of the rescattering or the estimation of the number of defects . In our previous paper , we investigated the dynamics of a complex scalar field using two-dimensional lattice simulations, taking the box size large enough to cover the horizon and the lattice spacing small enough to identify cosmic strings safely, and concluded that (long) cosmic strings would not be produced if the breaking scale $`\eta `$ was larger than $`\eta \approx 3\times 10^{16}\mathrm{GeV}`$. On the other hand, the authors of Ref. showed the possibility that cosmic strings could be formed even if $`\eta =6\times 10^{16}\mathrm{GeV}`$, using a three-dimensional lattice where the box size is smaller than the horizon. In this paper, we will discuss both two- and three-dimensional lattice simulations, and make certain that these results are consistent, commenting on the limitations of both simulations.
First we show that results from lattice simulations in two and three dimensions are not different from each other when we study the parametric resonance during preheating. To be concrete, let us consider a complex scalar field with the effective potential: $$V(\mathrm{\Phi })=\frac{\lambda }{2}(|\mathrm{\Phi }|^2-\eta ^2)^2,$$ (1) where $`\lambda `$ is a small coupling constant. This model has a global U(1) symmetry, and cosmic strings are formed when the symmetry is spontaneously broken. What we have to do is to integrate the equation of motion: $$\ddot{\mathrm{\Phi }}+3H\dot{\mathrm{\Phi }}-\frac{1}{a^2}\nabla ^2\mathrm{\Phi }+\lambda (|\mathrm{\Phi }|^2-\eta ^2)\mathrm{\Phi }=0.$$ (2) For numerical simulations, it is convenient to use rescaled variables: $`a(\tau )d\tau `$ $`=`$ $`\sqrt{\lambda }\mathrm{\Phi }_0a(0)dt,`$ (3) $`\phi `$ $`=`$ $`{\displaystyle \frac{\mathrm{\Phi }a(\tau )}{\mathrm{\Phi }_0a(0)}},`$ (4) $`\xi `$ $`=`$ $`\sqrt{\lambda }\mathrm{\Phi }_0a(0)x,`$ (5) where $`\mathrm{\Phi }_0\equiv |\mathrm{\Phi }(0)|`$. Setting $`a(0)=1`$, we obtain $$\phi ^{\prime \prime }-\frac{a^{\prime \prime }}{a}\phi -\partial _\xi ^2\phi +(|\phi |^2-\stackrel{~}{\eta }^2a^2)\phi =0,$$ (6) where $`\stackrel{~}{\eta }\equiv \eta /\mathrm{\Phi }_0`$ and the prime denotes differentiation with respect to $`\tau `$. The second term on the left-hand side can be omitted, since the energy density of the universe behaves like radiation at early times and the scale factor becomes very large later. In this case, the rescaled Hubble parameter, $`h(\tau )\equiv H(\tau )/\sqrt{\lambda }\mathrm{\Phi }_0`$, and the scale factor $`a(\tau )`$ become $$h(\tau )=\frac{\sqrt{2}}{3}a^{-2}(\tau ),$$ (7) and $$a(\tau )=\frac{\sqrt{2}}{3}\tau +1,$$ (8) respectively, when $`\mathrm{\Phi }`$ is assumed to be an inflaton (even if $`\mathrm{\Phi }`$ is not an inflaton, the results are the same once the breaking scale is rescaled in an appropriate way, see Ref. ). For the initial conditions we take $`x\equiv \mathrm{Re}\phi (0)`$ $`=`$ $`1+\delta x(𝐱),`$ (9) $`y\equiv \mathrm{Im}\phi (0)`$ $`=`$ $`\delta y(𝐱),`$ (10) where the homogeneous part comes from its definition (we refer to the real direction as $`x`$ and the imaginary one as $`y`$), and $`\delta x,y(𝐱)`$ is a small random variable of $`𝒪(10^{-7})`$ representing the fluctuations. We also set small random values for the velocities. Since the physical length and the horizon grow proportionally to $`a`$ and $`a^2`$, respectively, the rescaled horizon grows proportionally to $`a`$. The initial length of the horizon is $`\mathrm{\ell }_h(0)=3/\sqrt{2}\approx 2.12`$. Therefore, the rescaled horizon size grows as $`\mathrm{\ell }_h(\tau )=\mathrm{\ell }_h(0)a(\tau )`$. It is thus better to take the box size larger than $`\mathrm{\ell }_h(\tau _{end})`$, where $`\tau _{end}`$ is the time at the end of the simulation. This is because those strings whose lengths are shorter than the horizon scale will be in the form of loops, which will shrink and disappear very soon. Only strings longer than the horizon will survive to affect the later evolution of the universe. On the other hand, the width of a topological defect is $`(\sqrt{\lambda }\eta )^{-1}`$, which corresponds to $`(\stackrel{~}{\eta }a(\tau ))^{-1}`$ in the rescaled units. Since it decreases with time, one lattice length should be at least comparable with the defect width at the end of the calculation.
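A minimal version of this integration scheme can be sketched in a few lines. The fragment below is a scaled-down demonstration, not our production code: the grid is small, the value of $`\stackrel{~}{\eta }`$ is an assumption, and a simple kick-drift update replaces the actual integrator.

```python
import numpy as np
rng = np.random.default_rng(1)

Ngrid, dxi, dtau = 256, 0.3, 0.05      # demo values; the runs below use 4096^2
eta_t = 0.3                            # assumed rescaled breaking scale eta/Phi_0

a = lambda tau: np.sqrt(2.0)/3.0*tau + 1.0     # Eq. (8)

def lap(f):                            # periodic 2-d Laplacian, O(dxi^2)
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0*f)/dxi**2

# Eqs. (9), (10): homogeneous start plus O(10^-7) random fluctuations
phi = (1.0 + 1e-7*rng.standard_normal((Ngrid, Ngrid))
       + 1e-7j*rng.standard_normal((Ngrid, Ngrid)))
dphi = 1e-7*(rng.standard_normal((Ngrid, Ngrid))
             + 1j*rng.standard_normal((Ngrid, Ngrid)))

tau = 0.0
for _ in range(2000):
    # Eq. (6) with a'' = 0: phi'' = lap(phi) - (|phi|^2 - eta_t^2 a^2) phi
    dphi += dtau*(lap(phi) - (np.abs(phi)**2 - (eta_t*a(tau))**2)*phi)
    phi += dtau*dphi
    tau += dtau
print(np.sqrt(np.mean(np.abs(phi)**2)))
```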
Leaving these facts aside, let us first compare the evolution of fluctuations on two-dimensional lattices with that on three-dimensional ones. Here we take $`128^3`$ three-dimensional lattices and $`4096^2`$ two-dimensional lattices, with the lattice size $`\mathrm{\Delta }\xi =0.3`$ in both cases. We find no difference between the two growth exponents (see Fig. 1). Notice that the data used on the two-dimensional lattice (top panel) are the same as those used in Fig. 4 of Ref. , where they were plotted linearly instead of logarithmically as in Fig. 1. The authors of Ref. might have been misled by this point, since they claimed that the growth exponent of the fluctuation in the $`x`$ direction is much larger on three-dimensional lattices than in the two-dimensional cases of our previous results in Ref. . Moreover, we obtain very similar results on two- and three-dimensional lattices, at least concerning the effects of parametric resonance during preheating. Since small loop strings will shrink and disappear very soon, and have no influence on the universe, we are interested only in infinitely long strings. On three-dimensional lattices, they can be considered as those strings that penetrate through the box of the lattice (come into the box from one side and go out on the other side) and survive until late times. Actually, we have found temporary formation of cosmic strings at any breaking scale, which confirms the result of Ref. . For most values of the breaking scale, however, these strings are in the form of small loops, and disappear very quickly. We have also found strings longer than the box size. In Fig. 2, we take the breaking scale $`\eta =3.08\times 10^{16}\mathrm{GeV}`$ and the lattice size $`\mathrm{\Delta }\xi =0.3`$, and integrate the equation of motion until $`\tau =280`$. If we integrated only until some time before $`\tau =265`$, we would conclude that cosmic strings were formed for this breaking scale. However, as we can see in Fig. 2, these long strings feel the attractive force from each other, and they form into loops beyond the box size, which is much smaller than the horizon scale: $`N\mathrm{\Delta }\xi =38.4\ll \mathrm{\ell }_h(\tau =280)\approx 282`$, where $`N`$ is the number of lattice points per dimension. We agree with the authors of Ref. that longer strings tend to be created when the velocity of the real part of the oscillating homogeneous mode $`x`$ becomes almost zero at the moment when it passes through the origin of the effective potential (in their words, when the moment of the symmetry breaking nearly coincides with the moment when $`\varphi _1(=x)`$ passes through zero ). They argued that this leads to the long string formation being a non-monotonic function of the breaking scale , which we confirm to some extent. However, in more than one hundred runs of our simulations, for almost all breaking scales higher than $`3\times 10^{16}\mathrm{GeV}`$ we have not found long strings stretched out beyond the box size, except for two breaking scales, $`\eta \approx 3.08\times 10^{16}\mathrm{GeV}`$ and $`\eta \approx 3.16\times 10^{16}\mathrm{GeV}`$, on the $`128^3`$ lattices with $`\mathrm{\Delta }\xi =0.3`$. (We have recently found long strings formed at $`3.01\times 10^{16}\mathrm{GeV}`$ as well, but they are very unstable in the sense that strings in the form of loops develop into the form of long strings, and later they again deform into loops and disappear. Notice that the range of the breaking scale where long strings are formed is also very narrow at this scale, as at the above two scales. Actually, these scales correspond to the circumstance that the velocity of the real part of the oscillating homogeneous mode $`x`$ becomes almost, but not exactly, zero at the moment when it passes through the origin of the effective potential, so there may be more breaking scales where long cosmic strings are formed.)
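Incidentally, the box-versus-horizon numbers quoted above ($`N\mathrm{\Delta }\xi =38.4`$ against $`\mathrm{\ell }_h(280)\approx 282`$) follow directly from Eq. (8):

```python
import numpy as np

N, dxi = 128, 0.3
a = lambda tau: np.sqrt(2.0)/3.0*tau + 1.0    # Eq. (8)
l_h = lambda tau: 3.0/np.sqrt(2.0)*a(tau)     # rescaled horizon size
print(N*dxi, l_h(280.0))                      # 38.4 vs ~282
```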
Long strings cannot actually be found even at these scales if other initial configurations of the fluctuations are used. Moreover, even at these scales, long strings deform into loops and shrink and disappear very soon. Therefore, we cannot make any definite conclusion on the formation of cosmic strings, in the sense that we cannot tell whether or not they may affect the evolution of the universe. This is because the box size cannot be taken larger than the horizon scale on three-dimensional lattices. We have thus studied the formation of cosmic strings on two-dimensional lattices in the previous paper , abandoning one dimension in space because of the limited memory capacity of computers. Nevertheless, we can extract useful information from the results on three-dimensional lattices. If we take as the criterion for the formation of (infinitely) long strings the cases when strings stretched across the box have been formed at least once, for the sake of a conservative discussion, we find that the corresponding cases lie in only a very narrow region of the parameter space of the breaking scale for fixed initial conditions (for other initial conditions, the narrow ranges of the breaking scale where long cosmic strings are produced appear at different scales, and they are also very narrow): $`\mathrm{\Delta }\eta /\eta \sim 3\times 10^{-3}`$. See Figs. 3 and 4. Figures 3 and 4 show the lifetime ($`\tau _d-\tau _f`$) with respect to the breaking scale $`\eta `$ for $`3.08\times 10^{16}\mathrm{GeV}`$ and $`3.16\times 10^{16}\mathrm{GeV}`$, respectively. We can see that the size of the narrow regions is $`\mathrm{\Delta }\eta \approx 0.009\times 10^{16}`$ GeV in both cases. Here $`\tau _d`$ is the time when the long strings break up into loops, and $`\tau _f`$ is the formation time of those strings. However, this criterion may not be sufficient, since strings stretched across the lattice box will deform into loops and disappear very soon. We can take another criterion, which may better reflect the idea that what matters is a long cosmic string. When we observe the lifetime ($`\tau _d-\tau _f`$), we find that long strings survive a few times longer in a certain range of the breaking scale, as shown in Figs. 3 and 4. We can expect that long strings that stretch beyond the horizon size will be formed only within such very narrow regions. In this case, therefore, the breaking scale has to lie in very narrow ranges ($`\mathrm{\Delta }\eta /\eta \sim 10^{-4}`$) in order for (infinitely) long strings to be formed for fixed initial conditions in our simulations (these features can also be seen on lattices with a larger box size: $`N=200`$). In other words, the long string formation is very sensitive to the breaking scale. These results can be understood as follows. If there were no gradient force, the dynamics of the field would be determined only by the homogeneous mode. But since the initial values of the field at each point on the lattice differ from each other by $`𝒪(10^{-7})`$, we naively expect that cosmic strings are formed only in very narrow ranges of the breaking scale, of $`𝒪(10^{-7})`$. Owing to the preheating stage, fluctuations become large, so that the narrowness of these ranges is somewhat relaxed.
However, it seems that the full development of rescattering does not yet occur for $`\eta \gtrsim 3\times 10^{16}\mathrm{GeV}`$, as we mentioned in Ref. . We will see later that a fundamentally different feature appears for lower breaking scales. As mentioned above, it is difficult to tell whether long cosmic strings are formed on three-dimensional lattices with a box size smaller than the horizon volume, and this is why we have calculated in two dimensions, in order to draw definite conclusions using both two- and three-dimensional simulations complementarily. Similar results are found on two-dimensional lattices. In the two-dimensional case, we cannot distinguish long strings from loops, because all the strings are assumed to be infinitely stretched along the $`z`$ direction. Instead, we observe the number of cosmic strings within the horizon size at each time. The physically meaningful criterion for the formation of long cosmic strings is whether or not the number of strings per horizon remains (almost) constant as time goes on. Even if we cannot distinguish very long strings from loops, we can regard a very closely located string-antistring pair in two dimensions as a small loop string in three dimensions. Actually, such pairs annihilate very soon, similar to small loop strings which shrink and disappear very quickly. Thus, when we observe the number of strings as time goes on, it will remain at a certain value if long strings are formed, since loop strings (string-antistring pairs in terms of two dimensions) will disappear and very long strings (isolated strings in terms of two dimensions) will remain. Notice that we include both strings and antistrings in the numbers. As a result, we find that the outcome depends very crucially on the breaking scale at the scale $`\eta \approx 3\times 10^{16}\mathrm{GeV}`$, as is seen in Fig. 5. At the scales $`\eta \approx 3.02\times 10^{16}\mathrm{GeV}`$ and $`\eta \approx 3.09\times 10^{16}\mathrm{GeV}`$, a dozen cosmic strings are formed, and their numbers do not decrease much. On the other hand, at the other scales the number of strings decreases, and finally we find no strings at all. Thus, the value of the breaking scale has to lie in very narrow ranges, of the order of $`\mathrm{\Delta }\eta /\eta \sim 10^{-2}`$, where $`\mathrm{\Delta }\eta `$ is defined as the range of breaking scales at which the number of strings per horizon remains at a certain value as time goes on. This implies, together with the results in three dimensions, that long string formation is very sensitive to the breaking scale at these scales. Moreover, we have simulated $`30`$ realizations of initial conditions for the fluctuations for each breaking scale, and find that the dependence of the average number of strings per horizon on the breaking scale, shown in Fig. 6, coincides with the particular one of Fig. 5. This not only confirms the crucial sensitivity to the breaking scale, but also implies that the main factor which determines whether long strings are produced is the value of the breaking scale. Initial fluctuations do not affect the dynamics of the scalar field much at $`\eta \approx 3\times 10^{16}`$ GeV . In contrast to the cases with somewhat higher breaking scales such as $`\eta \approx 3\times 10^{16}\mathrm{GeV}`$, many cosmic strings are formed at $`\eta \approx 10^{16}\mathrm{GeV}`$, as is seen in Fig. 7. We have found more than a dozen strings per horizon size at any value of the breaking scale near $`\eta \approx 10^{16}\mathrm{GeV}`$, and their numbers do not decrease much.
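The string counting used here is a standard plaquette method: the winding of the phase of $`\phi `$ around each elementary square is $`\pm 2\pi `$ at a (anti)string and zero elsewhere. A minimal sketch of such a detector follows (the details of our actual bookkeeping are omitted):

```python
import numpy as np

def count_strings(phi):
    """Count strings and antistrings piercing the plaquettes of a 2-d
    complex field, from the winding of arg(phi) around each square."""
    th = np.angle(phi)
    fold = lambda d: np.mod(d + np.pi, 2*np.pi) - np.pi   # fold to (-pi, pi]
    t10 = np.roll(th, -1, 1)
    t11 = np.roll(th, -1, (0, 1))
    t01 = np.roll(th, -1, 0)
    w = fold(t10 - th) + fold(t11 - t10) + fold(t01 - t11) + fold(th - t01)
    n = np.rint(w/(2*np.pi)).astype(int)
    return int((n > 0).sum()), int((n < 0).sum())
```

Dividing the two counts by the number of horizon areas $`\mathrm{\ell }_h(\tau )^2`$ contained in the box gives the number of strings per horizon plotted in Figs. 5-7.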
We thus conclude that the formation of cosmic strings occurs in a very broad region of the breaking scale at $`\eta \approx 10^{16}`$ GeV. Finally, we should comment on the relation between symmetry restoration and topological defect (cosmic string) formation. To know whether the symmetry is restored or not is a difficult task, and the methods for deciding it are somewhat uncertain. The authors of Ref. argued it in terms of the shape of the effective potential for the scalar field. If the potential has a minimum at the origin, the symmetry is restored. We may make sure that the potential has a minimum at the origin in the following way, as done in Ref. . The field $`\varphi `$ (the radial direction of $`\mathrm{\Phi }`$) will oscillate around the origin in a potential of the form $`(\varphi ^2-\eta ^2)^2`$ when its amplitude is larger than $`\sqrt{2}\eta `$. But, as we can see in Fig. 8, $`\varphi `$ is oscillating around the origin even when its amplitude becomes smaller than the breaking scale $`\eta `$. It is thus very clear that the effective potential should have a minimum at the origin. We can see this phenomenon in all cases in Fig. 8. However, in the above, we actually see that topological defects are produced only when the breaking scale is $`\eta \approx 10^{16}\mathrm{GeV}`$, not in the cases of $`\eta =3\times 10^{16}\mathrm{GeV}`$ and $`\eta =6\times 10^{16}\mathrm{GeV}`$. Therefore, the symmetry must be fully restored only in the case of $`\eta \approx 10^{16}\mathrm{GeV}`$, where rescatterings play a crucial role. Then what is the relation between symmetry restoration and defect production? We conclude that defect formation is the signal of the full restoration of the symmetry. We thus discriminate between symmetry restoration and a shape of the effective potential which merely has a minimum at the origin. In other words, one cannot tell whether the symmetry is restored only by observing the shape of the potential. In conclusion, we have reconsidered the formation of (global) cosmic strings during and after preheating by calculating the dynamics of the scalar field on both two- and three-dimensional lattices, and confirmed the results of both Ref. and our previous ones . We have found that there is little difference between the results in two and three dimensions, at least at the preheating stage. It is obvious that the phase-space volume is larger in three dimensions than in two, and this effect might somehow matter in the rescattering stage, but from our numerical simulations we expect it to be subdominant. Practically, it is difficult to determine from simulations on three-dimensional lattices whether long cosmic strings which may affect the later evolution of the universe could ever be produced, since they will deform into large loops and disappear very soon because of the small box size of the lattices. Moreover, we have found that cosmic strings with a breaking scale higher than $`3\times 10^{16}\mathrm{GeV}`$ could only be produced in very narrow ranges of the breaking scale in our simulations. In Ref. , this was referred to as the formation of cosmic strings being a nonmonotonic function of the breaking scale. We confirm this result and, in addition, find that the formation of long cosmic strings occurs in a very small region of the parameter space of the breaking scale $`\eta `$. In other words, it is very sensitive to the value of the breaking scale.
In the two-dimensional simulations, long strings and loops can be distinguished to some extent, though with difficulty, by following the evolution of the number of strings per horizon, and we have found the similar phenomenon that long strings are produced only within a very narrow range of the breaking scale around $`\eta \sim 3\times 10^{16}\mathrm{GeV}`$ for fixed initial conditions. On the contrary, they are produced over a wide range of the breaking scale when $`\eta \sim 10^{16}\mathrm{GeV}`$. Cosmologically, we are interested only in (infinitely) long cosmic strings stretched beyond the horizon. Such strings with breaking scale $`\eta \sim 10^{16}`$ GeV are naturally formed after preheating, since they are produced independently of the exact value of the breaking scale: they are produced, and their numbers remain constant in every horizon volume. On the other hand, when the breaking scale is larger ($`\eta \sim 3\times 10^{16}`$ GeV), it is very difficult to connect our results directly to the actual probability of string formation. What we have found is that the formation of cosmic strings with $`\eta \sim 3\times 10^{16}`$ GeV depends crucially on the breaking scale. As mentioned above, since the initial fluctuations have little effect on whether long strings are formed, we can tell how many long strings are produced once the value of the breaking scale and the initial condition for the homogeneous mode are fixed; the latter is, in general, determined once the inflation model is specified. S.K. and M.K. would like to thank A. Linde for helpful comments. S.K. is also grateful to L. Kofman and I. Tkachev for useful discussions. M.K. is supported in part by the Grant-in-Aid, Priority Area “Supersymmetry and Unified Theory of Elementary Particles” ($`\mathrm{\#}707`$).
# Interferometric observations of nearby galaxies ## 1. Overview In the last few years the sensitivity and versatility of mm-wave interferometers have improved dramatically. Using mosaicking techniques, it has become a relatively easy task to map the molecular gas emission of nearby galaxies with high angular and spectral resolution at a very good signal-to-noise ratio. Here I present some recent results on nearby galaxies that cover a relatively wide range of galaxy types. All observations were performed at the IRAM Plateau de Bure interferometer (PdBI), the most sensitive telescope in the range of 80…250 GHz. The instrument is located in the French Alps at an altitude of 2550 m. At present it consists of five antennas of 15 m diameter each, which can be placed on different stations along a T-shaped track. Usually the antennas are arranged in one of four standard configurations, yielding baselines from 24 m to 230 m (N-S) and 400 m (E-W). Typical angular resolutions start from about $`4^{\prime \prime }`$ for a compact array (called CD) at 100 GHz and reach sub-arcsecond values for the most extended configuration. For a more exhaustive description see Guilloteau et al. (1992) and the IRAM website http://iram.fr. Because of the relatively large dishes the field of view is rather small, about $`50^{\prime \prime }`$ at 100 GHz. This limitation can nowadays be overcome by applying mosaicking techniques, while the advantage of the large collecting area remains. Properly set up, a mosaic covers the field of interest with an almost uniform sensitivity, which is particularly useful for elongated sources such as outflows and edge-on galaxies. The choice of objects reflects the present work on external galaxies done in particular at the Radioastronomical Institute of the University of Bonn, with the focus shifted somewhat towards objects that exploit the capabilities of the PdBI. Most of the projects have only just started or are in progress, so this is to a large extent a “preview” report. ## 2. Dwarf and starburst galaxies The research on dwarf galaxies at the RAI Bonn is at the heart of a graduate school programme joining the Astronomical Institutes of Bonn and Bochum, supported by the Deutsche Forschungsgemeinschaft since 1993. Initially aimed exclusively at the Magellanic Clouds, the programme shifted towards low-mass galaxies in general in its second period; the third period, starting in 1999, also includes the comparison between dwarf and starburst galaxies. In this section four examples are shown, three of which are the subjects of PhD theses (T. Fritz, A. Weiß, A. Tarchi). ### 2.1. A star forming dwarf Investigators: T. Fritz, A. Heithausen, U. Klein, N. Neininger, C.L. Taylor, W. Walsh Dwarf galaxies offer an excellent opportunity to probe the properties of the interstellar medium (ISM) in the absence of strong streaming motions, shear forces or large density gradients. Despite their low mass, some of them are actively star forming at a rate that is high compared to their gas mass. These “blue compact dwarf galaxies” (BCDGs) are known to have a low metal abundance and almost no detectable molecular gas. Several surveys have been conducted with the aim of detecting CO in dwarf galaxies, but generally in vain. Based on the Hi survey of Taylor et al. (1994), a more detailed search was carried out with the 30-m telescope (Barone et al., in preparation), and a clear detection of CO emission in the BCDG Haro 2 was obtained. 
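As a quick consistency check on the instrument parameters quoted above, the standard diffraction estimates (primary beam FWHM of roughly $`1.22\lambda /D`$ for a dish diameter $`D`$, synthesized beam of roughly $`\lambda /B`$ for a baseline $`B`$) reproduce the quoted field of view and angular resolution. The short sketch below is an illustration, not IRAM software; the exact beams depend on the dish illumination and the visibility weighting.

```python
# Rough diffraction estimates of the PdBI beam sizes quoted in the Overview.
# primary beam FWHM ~ 1.22 lambda/D;  synthesized beam ~ lambda/B.
C = 299792458.0                     # speed of light [m/s]
RAD_TO_ARCSEC = 180.0 / 3.141592653589793 * 3600.0

def fwhm_arcsec(freq_ghz, aperture_m, factor=1.22):
    lam = C / (freq_ghz * 1e9)      # wavelength [m]
    return factor * lam / aperture_m * RAD_TO_ARCSEC

print("primary beam (15 m dish, 100 GHz): %.0f arcsec" % fwhm_arcsec(100, 15.0))
print("synthesized beam (230 m baseline): %.1f arcsec" % fwhm_arcsec(100, 230.0, factor=1.0))
print("synthesized beam (400 m baseline): %.1f arcsec" % fwhm_arcsec(100, 400.0, factor=1.0))
```

The first estimate gives about 50 arcsec, matching the quoted field of view at 100 GHz, and the longest baselines push the synthesized beam towards the arcsecond scale, consistent with the resolutions cited above.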
This detection formed the basis of subsequent PdBI observations. Haro 2 was observed in the compact ‘CD’ configuration as a mosaic of three fields, chosen in order to cover the entire star-forming body as it had been determined from optical photometry (Loose and Thuan 1986). Extended emission of <sup>12</sup>CO (1-0) and (2-1) was easily detected in the field, showing a relatively strong central peak and two lobes extended along the major axis of the star forming region. This extent is consistent with the preceding 30-m observations, but now the structure can be investigated in much more detail. Particularly surprising is the structure of the velocity field. Haro 2 is a dwarf galaxy of more or less elliptical appearance, but the emission of the molecular gas shows a very disturbed velocity field with steep gradients. It is not even clear whether there are several separate components or perhaps even a merger. The nature of this unexpected behaviour is currently under investigation. ### 2.2. A merging starburst “dwarf” Investigators: S. Hüttemeister, U. Klein, N. Neininger, A. Greve The upper size limit for dwarf galaxies is not well defined, but the Clumpy Irregular galaxy Mrk 297, with a mass of about $`2\times 10^{10}`$ solar masses, is certainly at the larger end. It also shows intense star formation. It consists of two distinct kinematic components and is interpreted as an ongoing merger of two late-type spirals, one seen edge-on, the other face-on. Mrk 297 was covered with a four-field mosaic in the compact configuration and shows rather concentrated emission of <sup>12</sup>CO. The velocity structure is however rather complex over the range of detectable emission (about $`180`$ km/s wide). We hope to obtain a closer view of the molecular gas distribution and its properties, in particular in the interaction zone. ### Starburst galaxies If intense star formation occurs on the scale of a large part of a galaxy, the object is called a starburst galaxy. At present we are conducting studies of two prominent examples, one being the “prototype” M 82 and the other the special case of a starburst without an obvious trigger, NGC 2146. Together with the isolated starbursts in dwarf galaxies like those mentioned above or the Magellanic Clouds, we hope to obtain a better understanding of the conditions of such an event. ### 2.3. M 82 Investigators: N. Neininger, A. Weiß, U. Klein, M. Guélin, R. Wielebinski The nearby irregular galaxy M 82 is commonly called the prototype starburst galaxy. Because of its brightness in the IR/mm regime and its proximity (3.25 Mpc), a wealth of observations has been carried out to understand the origin and nature of the intense star formation activity. The trigger of the activity seems to be clear: M 82 is obviously interacting with its neighbours M 81 and NGC 3077, and the tidal forces during the encounter are supposed to have started the exceptional star formation. On the other hand, the evolution of the burst and many other parameters are not yet understood, and even the structure of the disk is still a matter of debate. The rotation curve, obtained from near-IR spectra and CO observations, indicates that M 82 is indeed a disk galaxy. Early single-dish CO observations (e.g. Loiseau et al. 1988) were interpreted to show a molecular ring similar to that of the Milky Way close to the centre, probably confining the active region and collimating the strong outflow (Shopbell 1998). 
Those data, with a spatial resolution of at best 150 pc, were however inadequate to show any detail of the distribution of the molecular gas. Interferometric observations of CO at somewhat better resolution (e.g. Lo et al. 1987) and others covering the lines of tracers of dense gas (Brouillet and Schilke 1993) indicate a patchy structure of the material. The optical depth of CO being most probably high (Wild et al. 1992), we observed M 82 in the presumably optically thin (1-0) line of <sup>13</sup>CO with the PdBI (Neininger et al. 1998b). In general, the spatial distribution of the <sup>13</sup>CO is rather similar to that of the <sup>12</sup>CO (Shen and Lo 1995): concentrated in two lobes embracing a weaker central region (Fig. 5). In contrast to the low-resolution data, which have commonly been interpreted as reflecting a torus of gas, the interferometer maps are better described by the presence of a molecular bar. For the stellar population, such a bar had already been proposed by Achtermann and Lacy (1995). A barred structure provides a straightforward means of transporting the gas needed to fuel the star formation towards the centre. The weakness of the emission close to the nucleus (the active region) is certainly due to dissociation of the molecules in the strong radiation field. In total, this provides a coherent global picture: during the passage close to M 81, the tidal forces disrupted the outer disk of M 82, leaving intergalactic streamers of atomic gas (Yun et al. 1994). The remaining unstable inner disk partially collapsed, thus starting the starburst activity that in turn provoked the strong wind. The physical conditions in the actual disk remain however unclear. For example, the similarity of the distributions in the <sup>12</sup>CO and <sup>13</sup>CO lines casts doubt on the determination of the optical depth. Moreover, the relationship between the individual energy sources and the conditions in the gas is not yet clear: a number of radio point sources has been identified (Kronberg et al. 1985) – most of them clearly identified as supernova remnants (SNRs) (see e.g. Wills et al. 1997 and references therein). Around the strongest SNR we have identified a 130 pc-wide bubble which is characterized by warmer gas and an enhanced cosmic ray production rate. It is however so big that this supernova could not possibly have created it – the SNR just marks the position where previous SNe and stellar winds have created a particular environment. A number of questions remain, e.g.: what are the physical properties of the molecular gas in the central region of M 82 – such as the opacity, the temperatures, densities and abundances? We therefore continued our studies by using the capability of the PdBI to perform dual-frequency observations: we observed the same field as before simultaneously in the (2-1) line of <sup>12</sup>CO and in C<sup>18</sup>O. The reduction and interpretation of these data are currently under way (Weiß et al., in preparation). ### 2.4. NGC 2146 Investigators: N. Neininger, A. Greve, A. Tarchi The “dusty hand” galaxy features a system of three dust lanes (spiral arms?) and clear indications of a starburst, such as a strong galactic wind. But in contrast to other galaxies with strong star formation activity like M 82 or NGC 3628, no companion is visible that could have triggered the activity; and in contrast to Mrk 297 there is no obvious hint of a merger either. Nevertheless, the ‘hidden merger’ scenario seems to be the most plausible explanation. 
The claim is that the starburst was indeed caused by a merger, but the encounter happened long ago and no marked traces are left. Looking for such traces is thus an additional task when studying the properties of the gas in the starburst region. We mapped NGC 2146 with the PdBI in the <sup>12</sup>CO (1-0), (2-1) and the <sup>13</sup>CO (1-0) emission lines, with the parameters set so as to ensure uniform coverage in all cases. The detectable emission is well concentrated towards the centre, as in many other spiral galaxies. But already within these $`60^{\prime \prime }`$ (about 4 kpc) a warp is clearly visible (Fig. 5, right part). Consistent with the earlier findings, no obvious hints of a second component are present, but the molecular gas shows clear signs of an outflow (see Fig. 1). Parallel to these observations of the molecular gas content, we have obtained high-resolution data at radio wavelengths ($`\lambda `$ 6 and 20 cm) with a combination of MERLIN and the VLA. We could identify a number of point sources which are currently under investigation (Tarchi et al., in preparation). The hope is to eventually compare their properties with those of their counterparts in M 82. ## 3. Normal galaxies Especially since the availability of the mosaicking technique, a growing number of nearby galaxies has been investigated. Results on several objects have been published recently and a (maybe incomplete) list of them is included in the references. Here I want to present the most “extreme” case in a little more detail. ### 3.1. The highest spatial resolution: M 31 Investigators: N. Neininger, M. Guélin, R. Lucas et al. All the galaxies described above are several Mpc away from us, so that we can only derive global properties of their molecular gas content. Even in our own Galaxy the detailed investigation of the molecular gas is limited to a few prominent examples, and it is very difficult to obtain a homogeneous view at small and large scales. A particularly tricky problem is the need to rely on velocity information to determine the distances of the objects under study, with the related ambiguities. Therefore we started a global survey of the nearest grand-design spiral, M 31, with the 30-m telescope (Neininger et al. 1998a). From this survey we chose a number of cloud complexes for further investigation with the PdBI. That way, we are able to combine a global view of the whole galaxy with detailed investigations at scales down to less than 10 pc. Among the chosen regions are obviously cold, quiescent clouds that show a bright, single-component emission line in the survey (e.g. situated in the dark cloud D84 – the names are defined in Hodge 1981) as well as regions with multiple-peaked spectra. One example of such a disturbed region lies in the dark cloud D47, about $`5^{\prime }`$ or 1 kpc away from D84 (see Fig. 2 and Fig. 2 in Neininger et al. 1998a). The velocity separation in this complex is up to 50 km/s – in the Galaxy, such a spectrum would be attributed to two components at very different places along the line of sight. Here, it is clear from the location and the separation of the spiral arms that everything belongs to one cloud complex. But where do such big differences between relatively close neighbours come from? A comparison with the distribution of the Hii regions gives a hint: the molecular cloud complex in D84 is isolated, whereas that in D47 is located at the border of a particularly bright and extended Hii region. 
Similarly broad spectra are found in the big southern dark cloud D39, which hosts several star clusters. These are only a few examples, but they all point in the same direction: broad or multiple-component spectra are most likely caused by local effects. These cloud complexes are certainly not virialized on the scale of 100 pc, the resolution of the 30-m telescope at the distance of M 31. Accordingly, the determination of the gas mass on the basis of data from the two instruments yields grossly differing values. To further investigate the properties of the molecular cloud complexes in M 31, we are enlarging our sample of combined studies with the two IRAM instruments while pushing the angular resolution well below the 10 pc limit with the PdBI. ## 4. A Black Hole candidate Investigators: M. Krause, N. Neininger, C. Fendt The spiral galaxy NGC 4258 (or M 106) started to become famous in the 1960’s when it was recognized that its velocity field and the morphology of the H$`\alpha `$ emission were peculiar; further interest was raised by a high-resolution map of its radio emission (van der Kruit et al. 1972 and references therein). It shows two extended lobes that reach out along the minor axis of the optical image, with a steep (shock-?) front and extended trailing plateaux – if we assume for them a sense of rotation as defined by the main body of the galaxy. Since then, different models for the origin of the anomalous features have been proposed. The first one, by van der Kruit et al., suggested an explosion in the centre of the galaxy about 18 million years ago; the galactic rotation then winds up the trail of the ejected material, thus forming the lobes. A new aspect was discovered in 1995 when Miyoshi et al. found evidence for a rapidly rotating accretion disk in the centre of NGC 4258. In the subsequent investigations it became one of the best candidates for a galaxy hosting a supermassive black hole. This moreover implies the existence of a jet, if we follow the current picture of accretion disk systems. Indeed, jets are known to create radio lobes, and the unusual H$`\alpha `$ arms might well be linked to jet activity. This scenario has a weak point, however: usually it is assumed that jets are aligned with the rotation axis of the accretion disk, which in turn is fixed in space by the black hole. The axis of rotation of the accretion disk is very close to the major axis of the galactic disk. Thus the jet has to travel a long way through the ISM of the galaxy. Moreover, the galactic material is moving transversely through the path of the jet. So it is necessary to investigate the properties of the molecular gas in NGC 4258 in addition to the studies of the atomic hydrogen and the radio emission. Earlier 30-m observations had shown that the molecular gas is elongated along the anomalous H$`\alpha `$ arms (Krause et al. 1990), and hence we used a five-field mosaic to cover the whole emission region. The single emission “bar” (cf. Cox & Downes 1996) seen by the 30-m telescope splits into two separate parts that form emission ridges on both sides of the anomalous diffuse H$`\alpha `$ arm (see Fig. 4). This suggests the existence of a tunnel with walls made of molecular gas, filled with hot ionized (atomic) gas that is entrained by the jet travelling along the axis of the tunnel. A similar scenario at a smaller scale is proposed for the outflows of Herbig-Haro objects (see e.g. Gueth et al. 1998 and references therein), where it can be more easily studied. 
For NGC 4258 the story is however not yet settled – the evidence for the presence of a jet is accumulating, but its precise nature and the way it may interact with the ISM are rather unclear. The PdBI data show a very distorted kinematical structure of the molecular gas (Krause et al. 1997) which is consistent with earlier Hi data (van Albada 1980) – thus the local and the global kinematics seem to be linked. ## 5. Summary Sensitive mm-wave interferometers have become a versatile tool to investigate even the relatively extended molecular gas emission of nearby galaxies. The gain in angular resolution may attain a factor of ten compared to single-dish instruments, and this often opens genuinely new views. In particular the combination of high angular resolution and high sensitivity – ideally complemented by large-scale information from single-dish telescopes – marks a major step forward. Most of the structures described or presented here were squeezed into a few spectra in earlier observations and hence difficult to interpret. The analysis of kinematic details or of the structure of the molecular gas distribution, as well as the determination of its mass, depends on such high-quality data. ### Acknowledgements It is a pleasure to thank my colleagues from the RAI for providing background information and excellent viewgraphs of their present work which I was allowed to present at the conference.
# Cubic Laurent Series in Characteristic 2 with Bounded Partial Quotients ## 1 Introduction Let $`F`$ be a field and $`F(x)`$ be the field of rational functions over $`F`$ in an indeterminate $`x`$. Let $`E=F((x^{-1}))`$ be the field of formal Laurent series $$u=a_mx^m+a_{m-1}x^{m-1}+a_{m-2}x^{m-2}+\cdots $$ in $`x^{-1}`$ with coefficients in $`F`$. If $`a_m\ne 0`$ we say that the degree of $`u`$ is $`m`$. $`E`$ is a topological field. Its topology is characterized by the property that a sequence $`u_n`$ of formal Laurent series converges to zero when their degrees converge to $`-\infty `$. The relationship between $`F(x)`$ and the extension field $`E`$ bears a close analogy to that between the rational numbers and the real numbers, with polynomials in $`x`$ playing the role of integers. In particular most of the basic facts about continued fractions for real numbers have analogues for $`E`$. For a Laurent series $`u`$ we define its integral part to be the sum of the terms of nonnegative degree in $`x`$. Then we define the continued fraction expansion of $`u`$ by the following inductive calculation. We set $`u_0=u`$. Given $`u_i`$ we define $`p_i`$ to be the integral part of $`u_i`$. If $`p_i\ne u_i`$, we define $`u_{i+1}=1/(u_i-p_i)`$, so that $`u_i`$ satisfies $$u_i=p_i+\frac{1}{u_{i+1}}.$$ If $`p_i=u_i`$ we terminate the procedure. The $`p_i`$’s are called the partial quotients and the $`u_i`$’s the complete quotients. The sequence of $`p`$’s terminates exactly when $`u`$ is rational. If it terminates with $`p_n`$, then we have $$u=p_0+\frac{1}{p_1+\frac{1}{p_2+\frac{1}{\ddots +\frac{1}{p_n}}}}.$$ Otherwise it makes sense to write $$u=p_0+\frac{1}{p_1+\frac{1}{p_2+\frac{1}{\ddots }}}.$$ Indeed if we define $`c_n`$ by $$c_n=p_0+\frac{1}{p_1+\frac{1}{p_2+\frac{1}{\ddots +\frac{1}{p_n}}}},$$ then $`c_n`$ is called the $`n`$th convergent to $`u`$, and the sequence $`c_0,c_1,\dots `$ of convergents converges in $`E`$ to $`u`$. More generally, in the irrational case each complete quotient has the continued fraction expansion $$u_n=p_n+\frac{1}{p_{n+1}+\frac{1}{p_{n+2}+\frac{1}{\ddots }}}.$$ We say that an irrational Laurent series $`u`$ has bounded partial quotients if the polynomials $`p_k`$ are bounded in degree, and we say that a Laurent series is algebraic if it is algebraic over $`F(x)`$. It can be proved that algebraic Laurent series whose minimum polynomials have degree 2 always have bounded partial quotients; in fact the sequence of partial quotients is eventually periodic (by analogy with the theory of continued fractions for quadratic algebraic numbers). Baum and Sweet showed in that, when $`F=\mathrm{GF}(2)`$, the cubic equation (in $`y`$ with coefficients in $`F(x)`$) $$x+y+xy^3=0$$ has a unique Laurent series solution with coefficients in $`\mathrm{GF}(2)`$ and that this solution has bounded partial quotients. Their proof does not yield a description of what the sequence of partial quotients is. Later Mills and Robbins succeeded in giving a complete description of that sequence of partial quotients in . They also provided some examples in higher characteristic. Nevertheless it appears that very little is still known about the nature of continued fractions of algebraic Laurent series. 
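In the rational case the expansion just described terminates and is exactly the Euclidean algorithm for polynomials. As a concrete illustration (ours, not from the paper), the sketch below computes it for $`F=\mathrm{GF}(2)`$, encoding a polynomial as a Python integer whose bit $`k`$ is the coefficient of $`x^k`$, so that addition is XOR.

```python
# Continued fraction expansion in the rational case over GF(2):
# the partial quotients are the quotients of the Euclidean algorithm.
def deg(p):                       # degree of a GF(2)[x] polynomial (-1 for 0)
    return p.bit_length() - 1

def divmod2(a, b):                # polynomial long division over GF(2)
    q = 0
    while deg(a) >= deg(b):
        shift = deg(a) - deg(b)
        q ^= 1 << shift
        a ^= b << shift           # subtraction = addition = XOR
    return q, a

def cf_rational(a, b):
    """Partial quotients of a/b in GF(2)(x), as integer bitmasks."""
    pq = []
    while b:
        q, r = divmod2(a, b)
        pq.append(q)
        a, b = b, r
    return pq

def poly_str(p):
    terms = [("x^%d" % k if k > 1 else ("x" if k == 1 else "1"))
             for k in range(deg(p), -1, -1) if (p >> k) & 1]
    return " + ".join(terms) if terms else "0"

# Example: expand (x^5 + x + 1)/(x^2 + 1); the choice of a and b is arbitrary.
for q in cf_rational(0b100011, 0b101):
    print(poly_str(q))            # x^3 + x, then x^2 + 1
```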
In particular, even though there seem to be many examples with bounded partial quotients, for any particular example it may be difficult or impossible to provide a proof. Baum and Sweet also gave some simple examples with unbounded partial quotients. Algebraic Laurent series with unbounded partial quotients can also be quite complicated even when the partial quotient sequence is recognizable. Such Laurent series were studied by Mills and Robbins in and by Buck and Robbins in and Lasjaunias in . In this paper we survey cubic Laurent series in characteristic 2. More precisely, we report on algebraic Laurent series with coefficients in a finite field of characteristic 2 that are solutions of an irreducible equation of the form $$a_0(x)+a_1(x)y+a_2(x)y^2+a_3(x)y^3=0$$ where the polynomials $`a_i(x)`$ have coefficients in $`\mathrm{GF}(2)`$. We concentrate primarily on the cases where the coefficients $`a_i(x)`$ have degrees $`\le 1`$. However, we also make some general observations concerning the relationships that appear to hold between different roots of the same cubic. ## 2 Algorithms In this section we explain how we perform the calculations in our survey. Let $`F`$ be a finite field of characteristic 2. In most of what follows $`F`$ will be the field $`\mathrm{GF}(2)`$. Suppose that we are given polynomials $`a_0(x)`$, $`a_1(x)`$, $`a_2(x)`$, $`a_3(x)`$ with $`a_3(x)\ne 0`$ in $`F[x]`$. There are at most three Laurent series $`u`$ in $`E`$, with coefficients in an algebraic extension of $`F`$, that satisfy the equation $$a_0(x)+a_1(x)u+a_2(x)u^2+a_3(x)u^3=0.$$ (1) Using classical Newton polygon methods, we can find the beginnings of the Laurent series solutions of any algebraic equation. From the initial parts of these Laurent series we can calculate the first few partial quotients of any solution. When a solution has bounded partial quotients, this method requires $`O(n^2)`$ field operations to find $`n`$ partial quotients. However, in the case of cubic equations in characteristic 2, the method, implicit in Mills and Robbins , allows for calculation of $`n`$ partial quotients in $`O(n)`$ time when the degrees of the partial quotients are bounded. We review that method here. The key to the computation is to rewrite (1) in the form $$u=\frac{Q(x)u^2+R(x)}{S(x)u^2+T(x)}$$ (2) expressing $`u`$ as a fractional linear transformation of $`u^2`$ with coefficients that are polynomials in $`x`$. We can assume without loss of generality that no non-constant polynomial divides all four of $`Q`$, $`R`$, $`S`$ and $`T`$. Let $`D=QT-RS`$. If $`D=0`$, then $`u`$ is a rational function of $`x`$. But we are only interested in $`u`$’s whose minimum polynomial is cubic, so we may assume that $`D\ne 0`$. We will call the degree of $`D`$ the height of the cubic Laurent series $`u`$ and denote this quantity by $`\mathrm{ht}(u)`$. Suppose that $`u`$ is such a Laurent series with partial quotients $`p_0,p_1,p_2,\dots `$ and complete quotients $`u_0,u_1,u_2,\dots `$. Then we have $$u_i=p_i+1/u_{i+1}=\frac{p_iu_{i+1}+1}{u_{i+1}}$$ for all $`i\ge 0`$. This shows that $`u_i`$ is a fractional linear transformation of $`u_{i+1}`$ where the matrix that relates them has determinant 1. It follows that, for any $`i`$ and $`j`$, $`u_i`$ and $`u_j`$ are related by a fractional linear transformation of determinant 1 with polynomial coefficients. In characteristic 2, since squaring is linear, it is immediate that $`u^2`$ has partial quotients $`p_0^2,p_1^2,p_2^2,\dots `$ and complete quotients $`u_0^2,u_1^2,u_2^2,\dots `$. 
Again, for any $`i`$ and $`j`$, $`u_i^2`$ and $`u_j^2`$ are related by a fractional linear transformation of determinant 1. It follows that, for any $`i`$ and $`j`$, $`u_i`$ is a fractional linear transformation of $`u_j^2`$ with the degree of the determinant equal to $`\mathrm{ht}(u)`$. Suppose that, for some $`i`$ and $`j`$, we know polynomials $`Q`$, $`R`$, $`S`$ and $`T`$ such that $$u_i=\frac{Qu_j^2+R}{Su_j^2+T}$$ (3) There are three useful computational principles. First, if we know the value of $`p_j`$, we can deduce that $$u_i=\frac{Q(p_j^2+1/u_{j+1}^2)+R}{S(p_j^2+1/u_{j+1}^2)+T}=\frac{(Qp_j^2+R)u_{j+1}^2+Q}{(Sp_j^2+T)u_{j+1}^2+S}$$ (4) so that we have found the matrix relating $`u_i`$ and $`u_{j+1}`$ by performing a suitable column operation on our matrix and exchanging columns. Similarly, if we know the value of $`p_i`$, we can deduce that $$u_{i+1}=1/(p_i+u_i)=\frac{Su_j^2+T}{(Q+p_iS)u_j^2+(R+p_iT)}$$ (5) so that we have found the matrix relating $`u_{i+1}`$ and $`u_j`$ by performing a suitable row operation on our matrix and exchanging rows. Finally, the main computational principle is that, if we have an equation of the form (3) with known $`Q`$, $`R`$, $`S`$ and $`T`$ and the degree of $`p_j`$ is also known, then we can sometimes deduce that $`p_i`$ is (the integral part of) the quotient when $`Q`$ is divided by $`S`$. Note that from (3) we always have $$u_i-\frac{Q}{S}=\frac{Qu_j^2+R}{Su_j^2+T}-\frac{Q}{S}=\frac{D}{S(Su_j^2+T)}.$$ Since $`\mathrm{deg}(p_j)=\mathrm{deg}(u_j)`$ is known, we can compute $`\mathrm{deg}(Su_j^2)`$. If $`\mathrm{deg}(Su_j^2)>\mathrm{deg}(T)`$, then we know $`\mathrm{deg}(Su_j^2+T)`$ and therefore $`\mathrm{deg}(S(Su_j^2+T))`$. Finally, if this last degree exceeds $`\mathrm{ht}(u)`$, then we can conclude that $`u_i`$ and $`Q/S`$ have the same integral part and that therefore $`p_i`$ is the quotient when $`Q`$ is divided by $`S`$. (We remark that, with sufficiently detailed knowledge of $`u_j`$, it is possible that we can deduce that $`\mathrm{deg}(S(Su_j^2+T))>\mathrm{ht}(u)`$ without requiring that $`\mathrm{deg}(Su_j^2)>\mathrm{deg}(T)`$. But in our computations we deduce new partial quotients this way only when we have the sufficient conditions that $`\mathrm{deg}(S)+2\mathrm{deg}(p_j)>\mathrm{deg}(T)`$ and $`2\mathrm{deg}(S)+2\mathrm{deg}(T)>\mathrm{ht}(u)`$.) Once $`p_i`$ is known we can perform an operation of type (5) and obtain a new relation of the form (3) from which we may be able to find another partial quotient, and so forth. When we cannot deduce the value of $`p_i`$ this way, we may still be able to make progress if we know sufficiently many terms of the sequence $`p_j,p_{j+1},\dots `$. Let us assume that $`j>0`$ so that $`\mathrm{deg}(p_j)>0`$. If $`\mathrm{deg}(S)\ge \mathrm{deg}(T)`$, then $`\mathrm{deg}(S(Su_j^2+T))=2(\mathrm{deg}(S)+\mathrm{deg}(u_j))`$. Moreover, subsequent operations of type (4) will always yield relations of the form (3) in which $`\mathrm{deg}(S)\ge \mathrm{deg}(T)`$, where the sequence of $`S`$’s has degrees increasing by at least 2. Thus after a few steps of this type we will be in a position to compute one or more new partial quotients $`p_i`$. If we are stuck with a case in which $`\mathrm{deg}(S)<\mathrm{deg}(T)`$, then a transformation of type (4) will yield a new relation of the form (3) in which $`\mathrm{deg}(T)`$ is smaller than it was before. But there can be only finitely many steps of this type. 
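The operations (4) and (5) and the degree test of the main principle translate directly into code. The sketch below reflects our reading of the text rather than the authors' program; it reuses the integer encoding of GF(2)[x] and the helpers `deg` and `divmod2` from the previous sketch.

```python
# Basic moves of the characteristic-2 algorithm: the state (Q, R, S, T)
# represents u_i = (Q u_j^2 + R)/(S u_j^2 + T).  GF(2)[x] polynomials are
# integer bitmasks; deg() and divmod2() are assumed from the sketch above.
def pmul(a, b):
    """Carry-free product in GF(2)[x]."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def psquare(p):
    """p(x)^2 = p(x^2) in characteristic 2: spread the bits apart."""
    q, k = 0, 0
    while p:
        if p & 1:
            q |= 1 << (2 * k)
        p >>= 1
        k += 1
    return q

def consume(state, pj):
    """Operation (4): read the known partial quotient p_j (j -> j+1)."""
    Q, R, S, T = state
    s2 = psquare(pj)
    return (pmul(Q, s2) ^ R, Q, pmul(S, s2) ^ T, S)

def can_produce(state, deg_pj, height):
    """Main principle: the candidate p_i = Q div S is provably correct when
    deg(S u_j^2) > deg(T) and deg(S (S u_j^2 + T)) > ht(u)."""
    Q, R, S, T = state
    d = deg(S) + 2 * deg_pj          # = deg(S u_j^2), since deg(u_j) = deg(p_j)
    return d > deg(T) and deg(S) + d > height

def produce(state):
    """Operation (5): emit p_i and advance i to i+1."""
    Q, R, S, T = state
    pi, _ = divmod2(Q, S)            # integral part of Q/S
    return pi, (S, T, Q ^ pmul(pi, S), R ^ pmul(pi, T))
```

In a full implementation these moves alternate: `produce` whenever `can_produce` allows, otherwise `consume` the next known partial quotient, exactly as in the general procedure outlined next.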
Eventually, therefore, we will arrive at the favorable case where $`\mathrm{deg}(S)\ge \mathrm{deg}(T)`$, and in a few more steps we will be able to compute a new partial quotient. We can now see the outline of a general computational procedure. We start with a relation of the form (3) and $`i=j=0`$ and use classical methods to compute the first few partial quotients $`p_0,p_1,\dots `$. Then, when possible, we use the main principle above to compute a new partial quotient $`p_i`$ and adjoin it to our list of known partial quotients, if it is not already known. (If it is already known, we have a check on our results.) We then use the value of $`p_i`$ to obtain a new relation of the form (3) where $`i`$ has been replaced by $`i+1`$. If we cannot use the main principle, provided $`p_j`$ is known, we make a transformation of type (4), yielding a new relation where $`j`$ has been replaced by $`j+1`$. Thus the first type of step produces new partial quotients and advances $`i`$, while the second type of step uses old partial quotients and advances $`j`$. If the partial quotient $`p_j`$ is always known when needed, we can continue indefinitely; in particular, if $`i>j`$ we can continue. We have observed empirically that on average in this algorithm $`i`$ increases twice as fast as $`j`$, so the production rate for partial quotients is approximately twice the consumption rate, with small local variations. However, our initial relation of the form (3) has $`i=j=0`$, so there can be some difficulty getting started. Thus we use classical methods to find a few partial quotients for an initial supply. After the first few steps, $`i`$ seems to stay reliably ahead of $`j`$, so production stays reliably ahead of consumption. The computational procedure can also be thought of as the operation of an automaton. From this point of view a state of the automaton is one of the matrices relating $`u_i`$ and $`u_j^2`$ from which no deduction of a partial quotient is immediately possible. We think of the use of $`p_j`$ as the reading of an input by the automaton, we regard any partial quotients $`p_i,p_{i+1},\dots `$ that can be computed as new outputs, and we view the state arrived at, after computing the new partial quotients, as the new state. Note that this automaton is unusual in that its inputs come from its previous outputs. In examples with bounded partial quotients it appears that only finitely many states occur, so we have something like a finite state automaton. However, we note that this is not what is usually meant by a finite state automaton. The reason is that we do not see every partial quotient being read in every state. Instead we typically find that, for each state, there are certain partial quotients that are never read when we are in that state. Moreover, it is usually the case that if one of these were read while in that state, partial quotients never previously seen would be output, or states never previously seen would occur. Indeed it seems to be the avoidance of certain combinations of states and input polynomials that makes the partial quotient sequence bounded. Thus the automaton description of the partial quotient sequence removes the algebra of the problem and makes it combinatorial. But it does not solve the problem, since there appears to be no simple way to prove that the unseen combinations will never occur. 
The finite automaton description does, however, lead to the possibility of even more efficient computation of the partial quotient sequence, since we can remember every combination of input polynomial and state that occurs, what the resulting outputs are, and what the new state is. This way the whole process can be implemented by look-up tables. We have not actually used this refinement in our computations, however. Even without the last refinement, in a typical case with bounded partial quotients, we can find a million or so partial quotients in just a few seconds. There are three additional properties of algebraic elements with bounded partial quotients that simplify our investigation. ###### Theorem 1 Suppose that $$u=\frac{Qu^2+R}{Su^2+T}$$ and that $`\mathrm{deg}(Su^2)>\mathrm{deg}(T)`$ and $`\mathrm{deg}(u)\ge 0`$. If $`u`$ has a partial quotient, other than the first, with degree $`>\mathrm{ht}(u)`$, then $`u`$ has unbounded partial quotients. Proof: Our argument is essentially from . We introduce the usual non-archimedean absolute value on the field of Laurent series in which $`|x|`$ is set to an arbitrary real number $`>1`$ and $`|u|=|x|^{\mathrm{deg}(u)}`$ for any Laurent series $`u`$. If the convergent $`c_n`$ of $`u`$ is $`a_n/b_n`$ with $`a_n`$ and $`b_n`$ relatively prime, then it is known that $$|u-a_n/b_n|=\frac{1}{|p_{n+1}||b_n|^2},$$ and that, conversely, if $`a`$ and $`b`$ are relatively prime polynomials with $$|u-a/b|=\frac{1}{|x|^k|b|^2}$$ for some positive integer $`k`$, then there is a non-negative integer $`n`$ with $`a/b=c_n`$ and $`k=\mathrm{deg}(p_{n+1})`$. We call $`k`$ the accuracy of the convergent $`a/b`$. In particular, since we assume that $`u`$ has degree $`\ge 0`$, every convergent $`a/b`$ of $`u`$ has $`|a/b|=|u|`$. Now suppose that $`c=a/b`$ is a convergent of $`u`$ of accuracy $`k>\mathrm{ht}(u)`$. Since $`|a/b|=|u|`$, we have $`|Sa^2/b^2|=|Su^2|>|T|`$, so $`|Sa^2|>|Tb^2|`$ and $`|Sa^2+Tb^2|=|Sa^2|=|Sb^2u^2|=|Sb^2u^2+Tb^2|\ne 0.`$ It follows that $$\left|u-\frac{Qa^2+Rb^2}{Sa^2+Tb^2}\right|=\left|\frac{Qu^2+R}{Su^2+T}-\frac{Qc^2+R}{Sc^2+T}\right|=\left|\frac{(QT-RS)(u-c)^2}{(Su^2+T)(Sc^2+T)}\right|=\frac{1}{|x|^{2k}}\left|\frac{QT-RS}{(Sb^2u^2+Tb^2)(Sa^2+Tb^2)}\right|=\frac{1}{|x|^{2k}}\left|\frac{QT-RS}{(Sa^2+Tb^2)^2}\right|.$$ This shows that $`\left(Qa^2+Rb^2\right)/\left(Sa^2+Tb^2\right)`$ is a convergent of accuracy $`2k-\mathrm{ht}(u)>k`$. It follows that there are convergents of arbitrarily large accuracy and therefore unbounded partial quotients. ###### Lemma 1 If, in the course of our algorithm for computing the continued fraction expansion of the cubic Laurent series $`u`$, we find the partial quotient $`p_i`$ when $`i\ge j>0`$, then the complete quotient $`u_i`$ satisfies an equation of the form $$u_i=\frac{Qu_i^2+R}{Su_i^2+T}$$ with $`\mathrm{deg}(Su_i^2)>\mathrm{deg}(T)`$. Indeed, since by hypothesis we can compute $`p_i`$, $`u_i`$ is related to $`u_j^2`$ by a fractional linear transformation with matrix $$\left[\begin{array}{cc}Q& R\\ S& T\end{array}\right]$$ in which $`\mathrm{deg}(Su_j^2)>\mathrm{deg}(T)`$ and $`\mathrm{deg}(S^2u_j^2)>\mathrm{ht}(u)`$. We will only be concerned with the first condition. If $`j=i`$ we are done. However, if $`j<i`$, we can depart from the usual computation and perform a sequence of steps in which we successively consume the partial quotients $`p_j,\dots ,p_i`$. 
In the first such step we replace $`S`$ with $`S^{\prime }=Sp_j^2+T`$, replace $`T`$ with $`T^{\prime }=S`$, and replace $`u_j`$ by $`u_{j+1}`$. Then we have $$\mathrm{deg}(S^{\prime }u_{j+1}^2)=\mathrm{deg}(S)+2\mathrm{deg}(p_j)+2\mathrm{deg}(p_{j+1})>\mathrm{deg}(S)=\mathrm{deg}(T^{\prime }).$$ The same argument shows that subsequent steps preserve this relationship between $`S`$ and $`T`$. So when $`p_i`$ is finally consumed, we will have $`u_i`$ related to $`u_i^2`$ by a matrix with the desired property. ###### Corollary 1 Suppose that, in the course of our calculation of the continued fraction expansion of the cubic Laurent series $`u`$, we find a partial quotient $`p_i`$ when $`i\ge j>0`$, and that we find a partial quotient $`p_k`$, with $`k>i`$, of degree exceeding $`\mathrm{ht}(u)`$. Then $`u`$ has unbounded partial quotients. In practice Corollary 1 can be applied quite efficiently. For most algebraic power series, the conditions of the corollary are met for rather small $`i`$, $`j`$ and $`k`$, proving that the partial quotient sequence is unbounded. We believe that series that cannot be ruled out this way have bounded partial quotients, although it is still difficult in any individual case to prove that this is so. Here is another useful principle, for which a proof is given in the original Baum–Sweet paper . ###### Theorem 2 If $`u`$ and $`v`$ are irrational Laurent series and $`u`$ and $`v`$ are related by a fractional linear transformation (of non-zero determinant) with coefficients that are polynomials, then $`u`$ has bounded partial quotients if and only if $`v`$ does. ###### Corollary 2 If the Laurent series $`u`$ satisfies an irreducible cubic equation, with coefficients polynomial in $`x`$, and $`u`$ has bounded partial quotients, then every irrational element of the field generated by $`u`$ over the field of rational functions of $`x`$ has bounded partial quotients. Indeed, if $`v`$ is in the (cubic) field generated by $`u`$, then 1, $`u`$, $`v`$ and $`uv`$ must satisfy a linear relation with polynomial coefficients. We can then solve to find $`v`$ as a fractional linear transformation of $`u`$. We will apply the preceding theorem only to $`1/u`$ and $`1+u`$. Finally we have the simple principle, observed in . ###### Theorem 3 If $`u`$ is an algebraic Laurent series with bounded partial quotients, and if, in the continued fraction for $`u`$, we replace $`x`$ by any polynomial $`p(x)`$ of positive degree in $`x`$, then we obtain another algebraic continued fraction with bounded partial quotients. We will only be interested in substituting $`x+1`$ for $`x`$. ## 3 Results Here we investigate solutions of (1) when each of the polynomials $`a_i(x)`$ has coefficients in $`\mathrm{GF}(2)`$. We concentrate mainly on the case that the degrees of the $`a_i`$’s are all $`\le 1`$. (We have also used our computational methods to investigate what happens when the $`a_i`$ have larger degree, and make a few observations below concerning this more general situation.) There are 256 such equations. However, we are interested only in polynomials which are irreducible over the algebraic closure of $`\mathrm{GF}(2)`$. This leaves 96 equations. We test each of the 96 equations for Laurent series solutions with bounded partial quotients. In most cases we can use Corollary 1 above to eliminate the solution from contention rather quickly. Any series for which we find $`10^6`$ partial quotients without triggering the condition of Corollary 1 we declare to have “probable bounded partial quotients”. 
There are 36 polynomials in our collection that have at least one Laurent series root with probable bounded partial quotients. However, from the remarks above, there is a group of twelve substitutions generated by the substitutions $$\begin{array}{ccc}x& \to & x+1;\\ y& \to & y+1;\\ y& \to & 1/y,\end{array}$$ which preserve degrees and the property of having bounded partial quotients. None of the 36 polynomials is fixed under any of these substitutions, so there are just three orbits. Here we give a representative of each of the three orbits. $$\begin{array}{cccc}\text{case A}:& x+y+xy^3& =& 0;\\ \text{case B}:& x+xy+(1+x)y^3& =& 0;\\ \text{case C}:& x+(1+x)y+xy^3& =& 0.\end{array}$$ We give some empirical information about the partial quotient sequences of the solutions in each of the three cases. Each of the equations has three Laurent series solutions. We shall see that in each case the three solutions have closely related continued fraction expansions. However, there are large qualitative differences between the three cases. ### 3.1 Case A Case A is the previously studied Baum–Sweet cubic. However, previous studies considered only the solution that has coefficients in $`\mathrm{GF}(2)`$. The Case A equation has two other Laurent series solutions with coefficients in $`\mathrm{GF}(4)`$. These do not seem to have been studied. These two roots are equivalent in that one is mapped to the other by the Frobenius automorphism of $`\mathrm{GF}(4)`$. They too appear to have bounded partial quotients. What follows is a description of one of the solutions in $`\mathrm{GF}(4)`$. The reader should bear in mind that we have not proved that this description is correct, although it does seem likely that the method of could be used to construct a proof. A reasonable measure of the complexity of such a proof is the complexity of the automaton. We measure this by the number of distinct pairs that occur, each consisting of an input polynomial and a state. For this polynomial the number seems to be 36. By contrast the $`\mathrm{GF}(2)`$ solution is simpler and leads to only 12 pairs. We represent $`\mathrm{GF}(4)`$ as the extension of $`\mathrm{GF}(2)`$ generated by an element $`t`$ satisfying $`t^2=t+1`$, and we describe the solution $`u`$ to the Case A equation whose leading term is the constant $`t`$. There are nine different polynomials that occur as partial quotients. We label these in order of appearance with the letters $`a,\dots ,i`$ as follows: $$\begin{array}{rcl}a& =& t\\ b& =& tx\\ c& =& t+x\\ d& =& (1+t)+x^2\\ e& =& x\\ f& =& x^2\\ g& =& 1+t+tx\\ h& =& (1+t)x^2\\ i& =& t+(1+t)x^2\end{array}$$ Next we define some strings of polynomials. For each non-negative integer $`n`$ we define: $`x_n`$ to be the list of length $`(8\cdot 4^n-5)/3`$ of alternating $`h`$’s and $`b`$’s of the form $`hbh\cdots hbh`$; $`y_n`$ to be the list of length $`(16\cdot 4^n-7)/3`$ of the form $`efe\cdots efe`$; $`u_n`$ to be the list of length $`(8\cdot 4^n-5)/3`$ of the form $`fef\cdots fef`$; $`v_n`$ to be the list of length $`(16\cdot 4^n-7)/3`$ of the form $`bhb\cdots bhb`$. Now here is what the partial quotient sequence looks like. The first two partial quotients of $`u`$ are $`a,b`$. This is followed by an infinite sequence of finite sequences $`A_0,A_1,A_2,\dots `$, with $`A_i`$ palindromic for all $`i>0`$. The $`A`$’s will be defined by a somewhat complicated recursion. 
We have the initial conditions $$A_0=cdefcb,\quad A_2=ghg,\quad A_4=gibhbig$$ and, for $`n`$ odd, explicit formulas: if $`n=1`$ mod $`4`$, then $$A_n=egx_{(n-1)/4}ge$$ and if $`n=3`$ mod 4, then $$A_n=cdy_{(n-3)/4}dc.$$ Here is what happens if $`n`$ is even and $`\ge 6`$. If $`n=0`$ mod 4, $$A_n=h_ngiv_{(n-8)/4}igr(h_n);$$ for $`n=2`$ mod 4, $$A_n=h_nbcu_{(n-6)/4}cbr(h_n),$$ where $`r`$ is the operator that reverses the terms in a sequence and $`h_n`$ is defined below. For $`n>0`$ define the palindrome $`p_n`$ by $$p_n=A_0\cdots A_{2n-2}A_{2n-1}A_{2n-2}\cdots r(A_0).$$ Also set $`p_0=A_3=cdefedc`$ and $`p_1=cfc`$. We have $$h_6=gibhge.$$ If $`n\ge 8`$, then, for $`n=0`$ mod 4, $$h_n=h_{n-2}bcu_{(n-8)/4}cbp_{(n-10)/2}$$ and, for $`n=2`$ mod 4, $$h_n=h_{n-2}giv_{(n-10)/4}igp_{(n-10)/2}.$$ This completes the recursive description of the pattern of partial quotients. This pattern has been verified to continue for one million partial quotients. The continued fraction expansion of the solution to Case A with coefficients in $`\mathrm{GF}(2)`$ is described in . There it is proved that the partial quotients follow a pattern somewhat similar to the one given here. But the connection between the patterns is actually much more striking. We have observed empirically that we obtain the partial quotient sequence for the $`\mathrm{GF}(2)`$ solution by replacing every non-zero coefficient of every partial quotient of the $`\mathrm{GF}(4)`$ solution with a 1. We have not found an explanation for this phenomenon. This phenomenon does not appear to be restricted to the Baum–Sweet cubic. We have observed several other cases of equations of the form of (1) that have two roots with bounded partial quotients in $`\mathrm{GF}(4)`$ and a root in $`\mathrm{GF}(2)`$. In these examples we allowed the degrees of the $`a_i(x)`$ to exceed 1. In each case the root in $`\mathrm{GF}(2)`$ also had bounded partial quotients and was related to the $`\mathrm{GF}(4)`$ roots, but we have not been able to give a precise description of what that relationship is. ### 3.2 Case B It appears that neither Case B nor Case C has been previously studied. They both have three solutions with coefficients in $`\mathrm{GF}(8)`$. In each case all three are equivalent under the Frobenius automorphism of $`\mathrm{GF}(8)`$. Case B is unusual in that all its partial quotients (except the first, which is constant) have degree 1. It also has the unusual property that, in the finite automaton, after the first few inputs, every input produces precisely two outputs. In a sense this example is much more complicated than Case A since there are 737 distinct input-state pairs that occur. On the other hand, inspection of the list of pairs shows that there are a great many symmetries and much structure to the list, so the complication may not be quite so great. We can conjecture a recursion for the sequence of partial quotients. We represent $`\mathrm{GF}(8)`$ as the field generated over $`\mathrm{GF}(2)`$ by a solution $`t`$ to $`1+t+t^3=0`$. For brevity we will identify elements of $`\mathrm{GF}(8)`$ with the integers from 0 to 7, according to their binary expansions. Thus we will denote $`0`$, $`1`$, $`t`$, $`1+t`$, $`t^2`$, $`1+t^2`$, $`t+t^2`$, $`1+t+t^2`$ respectively by 0, 1, 2, 3, 4, 5, 6, 7. Also, for brevity, we will denote a polynomial $`a+bx+cx^2+\cdots `$ by the sequence of digits $`abc\cdots `$. So for example 13 stands for the polynomial $`1+(1+t)x`$. 
Using this notation we find that the first 4 partial quotients, $`p_0,p_1,p_2,p_3`$, are $$2,\ 13,\ 13,\ 01.$$ Thereafter, if we group the remaining partial quotients in quadruples, $$(p_4,p_5,p_6,p_7),(p_8,p_9,p_{10},p_{11}),\dots ,$$ there are precisely 63 quadruples that occur. Also the 12 partial quotients $`(p_4,\dots ,p_{15})`$ are $$33,\ 11,\ 73,\ 04,\ 53,\ 23,\ 41,\ 07,\ 11,\ 77,\ 21,\ 05.$$ Thereafter, if we group the remaining partial quotients in 16-tuples, $$(p_{16},p_{17},\dots ,p_{31}),(p_{32},\dots ,p_{47}),\dots ,$$ there are precisely 63 16-tuples that occur. Finally, there is a bijection between the set of quadruples and the set of 16-tuples such that, after applying the bijection, the sequence of quadruples beginning with $`(p_4,p_5,p_6,p_7)`$ becomes the sequence of 16-tuples beginning with $`(p_{16},\dots ,p_{31})`$. The list of 63 quadruples, together with their bijectively associated 16-tuples, is given in Table 1 below. One can use this table, together with the initial conditions above, to generate the sequence of partial quotients. For example, since we have $`(p_4,p_5,p_6,p_7)=(33,11,73,04)`$, we can deduce that $`(p_{16},\dots ,p_{31})`$ is the associated 16-tuple $`(61,03,\dots ,54,02)`$. It is not hard to find algebraic relationships in Table 1, particularly if we group the rows according to the positions of zeroes in each row. However these algebraic relations have not yielded additional insight and we omit them. ### 3.3 Case C Case C appears to have bounded partial quotients, but we have not been able to identify the pattern of partial quotients. Other than the first partial quotient, which is a constant, the polynomials that occur as partial quotients comprise exactly the set of all polynomials of degree 1 together with the squares of all polynomials of degree 1. Thus, ignoring the first partial quotient, there are 112 possible partial quotients. This case seems to be of much greater complexity than the others. Over 17000 input-state pairs occur during the generation of the first four million partial quotients, and it seems as if one would have to compute many more before all possible pairs would occur. This casts some doubt on the boundedness of the partial quotients. Even though we cannot give a simple description of the partial quotient sequence, the sequence itself is far from random looking. For example, it contains very long subsequences which alternate between a multiple of $`x`$ and a multiple of $`x^2`$. These subsequences are the centers of even larger palindromic subsequences. The lengths of these palindromic subsequences appear to be unbounded. In Table 2 we have listed the first 1000 partial quotients for the solution whose constant term is $`t`$, in the same notation as we used for Case B. (The first row of the table contains the first 20 partial quotients, the second row the second twenty, etc.) ### 3.4 Equations with Three $`\mathrm{GF}(2)`$ Solutions with Bounded Partial Quotients It is quite possible for an equation of the form (1) to have three Laurent series roots with coefficients in $`\mathrm{GF}(2)`$, each with probable bounded partial quotients. 
One of the simplest examples is the equation $$1+x^2u+(1+x^2)u^2+xu^3=0.$$ In all cases like this that we have examined, the continued fractions for the three roots seem to be roughly related to each other. For example, it appears that the set of partial quotients that occur infinitely often is the same for all three roots. It is also possible for an equation to have three Laurent series solutions in $`\mathrm{GF}(2)`$, just one of which has probable bounded partial quotients, although such polynomials seem to be rarer than those with all three roots having bounded partial quotients. We have seen no examples with three Laurent series roots in $`\mathrm{GF}(2)`$ of which precisely two have probable bounded partial quotients.
# The 2dF Galaxy Redshift Survey: Spectral Types and Luminosity Functions ## 1 INTRODUCTION The 2dF Galaxy Redshift Survey (2dFGRS; Colless 1998, Maddox 1998) is a major new redshift survey utilising the full capabilities of the 2dF multi-fibre spectrograph on the Anglo-Australian Telescope (AAT). The observational goal of the survey is to obtain high quality spectra and redshifts for 250,000 galaxies to an extinction-corrected limit of $`b_J`$=19.45. The survey will eventually cover approximately 2000 square degrees, made up of two continuous declination strips plus 100 random 2°-diameter fields. One strip is in the southern Galactic hemisphere and covers approximately 75°$`\times `$15° centred close to the South Galactic Pole at ($`\alpha `$,$`\delta `$)=($`01^h`$,$`-30`$°); the other strip is in the northern Galactic hemisphere and covers 75°$`\times `$7.5° centred at ($`\alpha `$,$`\delta `$)=($`12.5^h`$,$`+00`$°). The 100 random fields are spread uniformly over a 7000 square degree region in the southern Galactic cap. The survey has been designed to provide a detailed picture of the large-scale structure of the galaxy distribution in order to understand structure formation and evolution and to address cosmological issues such as the nature of the dark matter, the mean mass density of the universe, the role of bias in galaxy formation and the Gaussianity of the initial mass distribution. However, the survey will also yield a comprehensive database for investigating the properties of the low-redshift galaxy population, providing: $`b_J`$ and $`r_F`$ magnitudes and various image parameters from the blue and red Southern Sky Survey plates scanned with the Automatic Plate Measuring machine (APM); moderate quality spectra (S/N$`>`$10 per $`4.3\AA `$ pixel); and derived spectroscopic quantities such as redshifts and spectral types. The first test observations for the 2dFGRS were taken at the start of the 2dF instrument commissioning period in November 1996. The first survey observations with all 400 fibres were obtained in October 1997, and as of March 1999 we have measured over 40,000 redshifts. We plan to complete the survey observations by the end of 2000. This paper presents some of the first results from the 2dFGRS. It deals with the Principal Component Analysis (PCA) methods that we are developing for classifying galaxy spectra, and the application of these methods to deriving the luminosity functions for different galaxy spectral types. The luminosity function (LF) is a fundamental characterization of the galaxy population. It has been measured from many galaxy surveys with differing sample selections, covering a wide range of redshifts. Generally a Schechter function (Schechter 1976), $$\varphi (L)dL=\varphi ^{*}\left(\frac{L}{L^{*}}\right)^\alpha \mathrm{exp}\left(-\frac{L}{L^{*}}\right)\frac{dL}{L^{*}},$$ (1) with $$\frac{L}{L^{*}}=10^{0.4(M^{*}-M)}$$ (2) provides a good fit to field galaxy data, but the parameters $`M^{*}`$, $`\alpha `$ and $`\varphi ^{*}`$ are still relatively uncertain. Surveys of bright field galaxies, such as the CfA2 (Marzke et al 1994) and SSRS2 (Marzke & Da Costa 1997), have mean redshift $`\overline{z}\sim 0.02`$ and so cover relatively small volumes. This leads to a large sample variance on these measurements. Also, they are based on photometric catalogues which have been visually selected from photographic data, which makes selection biases hard to quantify. 
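The Schechter form (1)–(2) is straightforward to evaluate in the absolute-magnitude variable used below. The following minimal sketch is an illustration only; the parameter values are placeholders of the order discussed in the next paragraph, not fitted 2dFGRS values.

```python
# Schechter luminosity function, eqs (1)-(2), rewritten per unit magnitude:
# phi(M) = 0.4 ln(10) phi* x^(alpha+1) exp(-x), with x = L/L* = 10^{0.4(M*-M)}.
import numpy as np

def schechter_mag(M, M_star, alpha, phi_star):
    """Comoving number density per unit absolute magnitude."""
    x = 10.0 ** (0.4 * (M_star - M))              # x = L/L*
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1) * np.exp(-x)

# Illustrative placeholder parameters, not fitted survey values:
M = np.arange(-22.0, -14.0, 1.0)
print(schechter_mag(M, M_star=-19.7, alpha=-1.1, phi_star=0.017))
```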
Deeper surveys such as the CFRS (Lilly et al 1995) and AUTOFIB (Ellis et al 1996) are based on better controlled galaxy samples, but the sample volumes are very small and they reach much higher redshifts ($`z\sim 1`$), so that galaxy evolution is an important factor. The Stromlo-APM Redshift Survey (SAPM; Loveday et al 1992), the Las Campanas Redshift Survey (LCRS; Lin et al 1996) and the ESO Slice Project (ESP; Zucca et al 1997) cover intermediate redshifts, $`z\sim 0.05`$–$`0.2`$, and give the most reliable current estimates of the local luminosity function. The estimated values of $`M^{*}`$ from these surveys agree to about 0.1 magnitude, and $`\varphi ^{*}`$ varies by a factor of $`\sim `$ 1.5. The faint-end slope, $`\alpha `$, is more poorly determined, ranging from $`-0.7\pm 0.05`$ for the LCRS to $`-1.22_{-0.07}^{+0.06}`$ for the ESP. Some of these differences are likely to be due to sample variance, but some may be due to different selection effects in the surveys, e.g. the high surface brightness cut imposed on the LCRS survey. The 2dF survey reaches a similar depth, will cover a much larger volume than the LCRS, and so will have smaller sample variance, allowing us to estimate luminosity functions for several sub-samples of galaxies. The mean surface brightness isophotal detection limit of the underlying APM catalogue is $`b_J=25`$ mag arcsec<sup>-2</sup>. It is well established that the galaxy luminosity function depends on the type of galaxy that is sampled. Morphologically late-type galaxies tend to have a fainter $`M^{*}`$ (in the B band) and a steeper faint-end slope $`\alpha `$ (Loveday et al. 1992; Marzke et al 1998). Spectroscopically, selecting galaxies with higher \[OII\] equivalent width leads to a similar trend (Ellis et al. 1996; Lin et al 1996), as does the selection of bluer galaxies (Marzke & Da Costa 1997; Lin et al 1996). These results reflect the correspondence between spectral and morphological properties, with galaxies of late-type morphology having stronger emission lines and bluer continua than galaxies of early-type morphology. However, this correspondence is quite approximate and a considerable scatter exists. In this paper we use a classification scheme based on PCA of the galaxy spectra (e.g. Connolly et al. 1995; Folkes, Lahav & Maddox 1996; Sodré & Cuevas 1997; Galaz & de Lapparent 1998; Bromley et al. 1998; Glazebrook, Offer & Deeley 1998; Ronen, Aragón-Salamanca & Lahav 1999). This provides a representation of the spectra in two (or more) dimensions that highlights the differences between individual galaxies. The technique is based on finding the directions in spectral space in which the galaxies vary most, and so offers an efficient, quantitative means of classification. Bromley et al (1998) have studied the variation of the LCRS LF with spectral type using classes from a similar PCA technique. They find that late-type galaxies have a steeper faint-end slope than early-type galaxies. They also find a clear density-morphology relation: over half of their extreme early-type objects are found in regions of high density, whereas these regions contain less than a quarter of their extreme late-type objects. The plan of the paper is as follows. In §2 we briefly summarise the construction of the survey source catalogue, the capabilities of the 2dF multi-fibre spectrograph, the observing and reduction procedures employed, and the main properties of the subset of the data which will be used in the rest of the paper. 
In §3 we outline the fundamentals of Principal Component Analysis, describe the steps required to prepare the spectra, and then present the principal components for our sample of galaxy spectra and their distribution. Two methods to connect the PCA decomposition of a galaxy’s spectrum with its physical (spectral or morphological) type are investigated in §4. We then define spectral types based on the PCA decomposition and use these types to derive K-corrections for each galaxy in our sample. The luminosity functions (LFs) for the whole sample, and for each spectral type separately, are derived in §5, using both a direct estimator and a parametric method. Our results are discussed in §6, focusing on the strengths and weaknesses of the PCA spectral classifications and a comparison of the LFs for our spectral types with similar analyses in the literature.

## 2 THE DATA

### 2.1 Source catalogue

The source catalogue for the survey is a revised and extended version of the APM galaxy catalogue (Maddox et al. 1990a,b,c). This catalogue is based on APM scans of 390 IIIa-J plates from the UK Schmidt Telescope (UKST) Southern Sky Survey. The magnitude system for the Southern Sky Survey is defined by the response of Kodak IIIa-J emulsion in combination with a GG395 filter, zero-pointed using Johnson B-band CCD photometry. The extended version of the APM catalogue includes over 5 million galaxies down to $`b_J`$=20.5 in both north and south Galactic hemispheres over a region of almost 10<sup>4</sup> $`\mathrm{deg}^2`$ (bounded approximately by declination $`\delta \le +3`$° and Galactic latitude $`|b|`$$`>`$20°). The astrometry for the galaxies in the catalogue has been significantly improved, so that the rms error is now 0.25″ for galaxies with $`b_J`$=17–19.45. Such precision is required in order to minimise light losses with the 2″-diameter fibres of 2dF. The photometry of the catalogue is calibrated with numerous CCD sequences and has a precision (random scatter) of approximately 0.2 mag for galaxies with $`b_J`$=17–19.45. The mean surface brightness isophotal detection limit of the APM catalogue is $`b_J=25`$. The star-galaxy separation is as described in Maddox et al. (1990b), supplemented by visual validation of each galaxy image. A full description of the source catalogue is given in Maddox et al. (1998, in preparation).

### 2.2 The 2dF multi-fibre spectrograph

The 2dF facility consists of a prime-focus corrector, a focal-plane tumbler unit and a robotic fibre positioner, all mounted on an AAT top-end ring which also supports the two spectrographs. The 2dF corrector introduces chromatic distortion (i.e., shifts between the blue and red images). This is a function of radius and is, by design, a maximum at the halfway point (radius=30 arcmin) and a minimum at the centre and edge. The absolute shifts, at a particular wavelength, are allowed for in the 2dF configuration program, and for our survey we configure for 5800Å. Relative to this we have maximum residual shifts of +0.9 arcsec (at 4000Å) and -0.3 arcsec (at 8000Å). The typical shift, averaged over the field, will be of order half these values (Bailey & Glazebrook 1998). The 2dF instrument has two field plates and two full sets of 400 fibres. This allows one plate to be configured while observations are taking place with the other. The robotic positioner uses a gripper head which places magnetic buttons containing the fibre ends at the required positions on the plate.
When the plate is prepared and one set of observations is complete, the tumbler can rotate to begin observations on the new field. Each field plate has 400 object fibres and an additional 4 guide-fibre bundles, which are used for field acquisition and tracking. The object fibres are 140 $`\mu `$m in diameter, corresponding to about 2.16″ at the centre and 2″ at the edge of the field. Two identical spectrographs receive 200 fibres each, and the spectra are recorded on thinned Tektronix 1024 CCDs. Further detail on the design and use of the instrument is given in Bailey & Glazebrook (1999), Lewis et al. (1998) and Smith & Lankshear (1998).

In terms of spectral analysis, the design of the system has a number of implications. Firstly, all 400 spectra taken in a particular observation have the same integration time, regardless of the brightness of individual targets. After the fibre feed, each of the two sets of 200 spectra passes through an identical optical system, which ensures some consistency. The combination of the fibre size, small positioning errors and chromatic variation means that, particularly for nearby extended objects, the observed spectrum is not necessarily representative of the whole object. The impact of some of these effects is examined in more detail below.

### 2.3 Observations and reductions

This paper uses only the data taken for the 2dFGRS in two early observing runs: 1997 October 29 to 1997 November 3 and 1998 January 23–29. In the former we observed 14 2dF fields and 3882 galaxies; in the latter we observed 16 2dF fields and 4482 galaxies. Counting repeats only once, this gives a total of 7972 galaxies. The 400 fibres in each field were shared between the targets of the 2dFGRS and the 2dF QSO Redshift Survey (Boyle 1998). The observations were performed with the 300B gratings, giving an observed wavelength range of approximately 3650Å to 8000Å, although this varies slightly from fibre to fibre. The spectral scale was 4.3Å/pixel and the FWHM resolution measured from arc lines was 1.8–2.5 pixels (varying over the wavelength range). The exposure time for the observations described here was around 70 min. This was set by the time needed to configure the fibres (with improvements in the configuration time, the observation time has since been reduced to around 60 min). The spectra were reduced using an early version of the 2dfdr pipeline software package (Bailey & Glazebrook 1999). Typically, three to four sub-exposures were taken and cosmic rays removed by a sigma-clipping algorithm. Galaxies at the survey limit of $`b_J`$=19.45 have a median S/N of $`\sim `$14, which is more than adequate for measuring redshifts and permits reliable spectral types to be determined, as described below.

Redshifts were found by two independent methods: the first cross-correlates the spectra with absorption-line templates, and the second fits the emission lines. These automatic redshift estimates were then confirmed by visual inspection of each spectrum, and the more reliable of the two results chosen as the final redshift. A quality flag (Q) was manually assigned to each redshift: Q=3 and Q=4 (7180 objects) correspond to reliable redshift determinations; Q=2 (574 objects) means a probable redshift; and Q=1 (218 objects) means no redshift could be determined. We note that some stars enter the sample because the star-galaxy classification criteria for the source catalogue are chosen to exclude as few galaxies as possible, at the cost of 4% contamination of the galaxy sample by stars.
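The absorption-line method can be illustrated schematically. The toy sketch below is illustrative only (the actual pipeline and its template set are described by Bailey & Glazebrook 1999, and all names here are ours): it slides a rest-frame template across a grid of trial redshifts and returns the redshift maximising the normalised correlation with the observed spectrum.

```python
import numpy as np

def xcorr_redshift(obs_wave, obs_flux, tmpl_wave, tmpl_flux,
                   z_grid=np.linspace(0.0, 0.3, 3001)):
    """Toy template cross-correlation redshift estimate."""
    obs = obs_flux - obs_flux.mean()        # correlate shapes, not levels
    best_z, best_c = 0.0, -np.inf
    for z in z_grid:
        # Redshift the template onto the observed wavelength grid.
        shifted = np.interp(obs_wave, tmpl_wave * (1.0 + z), tmpl_flux,
                            left=np.nan, right=np.nan)
        ok = np.isfinite(shifted)
        if ok.sum() < 100:                  # require reasonable overlap
            continue
        t = shifted[ok] - shifted[ok].mean()
        c = np.dot(obs[ok], t) / (np.linalg.norm(obs[ok]) * np.linalg.norm(t))
        if c > best_c:
            best_z, best_c = z, c
    return best_z, best_c
```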
As a crude criterion, and for the purpose of this work, we consider all objects with $`z<0.01`$ as stars. Only the 6899 non-stellar objects with quality flag Q=3 or Q=4 are included in the subsequent analysis. Using this criterion, the sample of galaxies with reliable redshifts is 90% of all observed objects not identified as stars. We note that more recent observations are giving a higher redshift completeness. It is worth noting that PCA and redshift determination can be performed simultaneously in a single procedure, which can improve the redshift determination by reducing its dependence on a predetermined set of template spectra (Glazebrook et al. 1998). We are investigating future use of this method.

The fields observed in these two runs lie in the northern and southern declination strips covered by the survey. Note that at the median redshift of the sample, $`\widehat{z}=0.1`$, 2° corresponds to a comoving distance of 9.7 h<sup>-1</sup> Mpc. Figure 1 summarises some of the properties of this sample. The distribution in apparent magnitude and redshift is shown in Figure 1a; the median redshift of the galaxies increases from $`\widehat{z}=0.07`$ at $`b_J`$=17 to $`\widehat{z}=0.14`$ at $`b_J`$=19.45. The redshift distribution in Figure 1c shows considerable clustering in redshift space, reflecting in part the fact that our survey is still dominated by a few lines of sight which intersect common structures. We expect this to average out when we complete the full volume. Figure 2 shows the redshift completeness, as a function of apparent magnitude, computed as the fraction of objects with good redshifts ($`Q>2`$) out of all observed objects. A further completeness factor must also be included to allow for the fact that not all galaxies in the photometric catalogue can be assigned a fibre in the 2dF configurations. One of the constraints on the configuration is that two fibres cannot be placed closer than $`25^{\prime \prime }`$, for the most favourable geometry. The tiling of the fields is designed to maximise the completeness of the survey, as galaxies in close pairs can be observed in different tiles. The final completeness is extremely high (93%).

## 3 PRINCIPAL COMPONENT ANALYSIS

### 3.1 The method

A spectrum, like any other vector, can be thought of as a point in an $`M`$-dimensional parameter space. One may wish for a more compact description of the data. This can be accomplished by Principal Component Analysis (PCA), a well-known statistical tool that has been used in a number of astronomical applications (Murtagh & Heck 1987). By identifying the linear combinations of input parameters with maximum variance, PCA finds the principal components that can be most effectively used to characterise the inputs. The formulation of standard PCA is as follows. Consider a set of $`N`$ objects ($`i=1,\ldots ,N`$), each with $`M`$ parameters ($`j=1,\ldots ,M`$). If $`r_{ij}`$ are the original measurements of these parameters for these objects, then mean-subtracted quantities can be constructed,

$$X_{ij}=r_{ij}-\overline{r}_j,$$ (3)

where $`\overline{r}_j=\frac{1}{N}\sum _{i=1}^{N}r_{ij}`$ is the mean. The covariance matrix for these quantities is given by

$$C_{jk}=\frac{1}{N}\sum _{i=1}^{N}X_{ij}X_{ik},\qquad 1\le j\le M,\quad 1\le k\le M.$$ (4)

It can be shown that the axis (i.e., the direction in vector space) along which the variance is maximal is the eigenvector $`𝐞_\mathrm{𝟏}`$ of the matrix equation

$$C𝐞_\mathrm{𝟏}=\lambda _1𝐞_\mathrm{𝟏},$$ (5)

where $`\lambda _1`$ is the largest eigenvalue (in fact the variance along the new axis).
The other principal axes and eigenvectors obey similar equations. It is convenient to sort them in decreasing order (ordering by variance), and to quantify the fractional variance by $`\lambda _\alpha /\sum _\alpha \lambda _\alpha `$. The matrix of all the eigenvectors forms a new set of orthogonal axes which are ideally suited to an efficient description of the data set, using a truncated eigenvector matrix employing only the first $`P`$ eigenvectors,

$$U_P=\{e_{jk}\},\qquad 1\le k\le P,\quad 1\le j\le M,$$ (6)

where $`e_{jk}`$ is the $`j`$th component of the $`k`$th eigenvector. The truncation turns out to be efficient because the cloud of points which represents the spectra lies close to a low-dimensional sub-space. This can be seen from the fact that the first few eigenvalues account for most of the variation in the data, and also from the fact that the higher eigenvectors contain mostly noise (Folkes, Lahav & Maddox 1996). Now if a specific spectrum is taken from the matrix defined in Equation 3, or possibly a spectrum from a different source which has been similarly mean-subtracted and normalised, it can be represented by the vector of fluxes $`𝐱`$. The projection vector $`𝐳`$ onto the $`M`$ principal components can be found from (here $`𝐱`$ and $`𝐳`$ are row vectors):

$$𝐳=𝐱U_M.$$ (7)

Multiplying by the inverse, the spectrum is given by

$$𝐱=𝐳U_M^{-1}=𝐳U_M^t,$$ (8)

since $`U_M`$ is an orthogonal matrix by definition. However, using only $`P`$ principal components the reconstructed spectrum would be

$$𝐱_{rec}=𝐳U_P^t,$$ (9)

which is an approximation to the true spectrum. The eigenvectors onto which we project the spectra can be viewed as ‘optimal filters’ of the spectra, in analogy with other spectral diagnostics such as colour filters or spectral indices. Finally, we note that there is some freedom of choice as to whether to represent a spectrum as a vector of fluxes or of photon counts. The decision will affect the resulting principal components, as a representation by fluxes will give more weight to the blue end of a spectrum than a representation by photon counts. In this paper all spectra are represented as photon counts, but we leave open the question of which representation is ‘the best’ in some sense.

### 3.2 Data preparation

Before we can carry out Principal Component Analysis, a number of procedures are required to prepare the spectra. Firstly, residuals from strong sky lines and bad columns were removed by interpolating the continuum across them. Secondly, we corrected for sky absorption at the A and B bands (7550Å to 7700Å and 6850Å to 6930Å respectively) as follows: the spectrum was smoothed with a 150Å Gaussian filter to give the low-resolution spectral shape, and with a 3Å Gaussian filter to give a noise-reduced spectrum closely following the shape of the absorption-band profiles. The original spectrum was then multiplied by the ratio of the former to the latter over the spectral ranges covered by the atmospheric absorption bands. The system response also needs to be removed from the spectra. This was done by calculating a second-order polynomial fit to the 2dF system response with the 300B grating from observations of Landolt standard stars in BVR. The fit can be seen in Figure 3. Some fibre-to-fibre variations in the response function can be expected, as well as possible time variations. We have observed wavelength-dependent variations at the level of 20%.
We expect improvement in this with the application of an improved extraction algorithm (not applied to the data presented here). It is possible to remove such unknown variations in the flux calibration by, e.g., removing the low-frequency Fourier components, as was done by Bromley et al. (1998). However, such a procedure will also remove potentially important information inherent in the continuum. Therefore, in this paper, we chose to use our measured flux calibration and retain the whole spectrum. We leave it for a future study to compare the two approaches.

The next step is to de-redshift the spectra to their rest frame and re-sample them to a uniform spectral scale with 4Å bins. Since the galaxies cover a range in redshift, the rest-frame spectra cover different wavelength ranges. To overcome this problem, only the 6015 objects with redshifts in the range 0.01$`\le `$$`z`$$`\le `$0.2 are included in the analysis. All the objects meeting this criterion then have rest-frame spectra covering the range 3700Å to 6650Å (the lower limit was chosen to exclude the bluest end of the spectrum, where the response function is poor). Limiting the analysis to this common wavelength range means that all the major optical spectral features between \[OII\] (3727Å) and H$`\alpha `$ (6563Å) are included in the analysis. In order to make the PCA spectral classifications as robust as possible, objects with redshifts but relatively low S/N were eliminated by imposing a minimum mean flux of 50 counts per bin. The spectra are then normalised so that the mean flux over the whole spectral range is unity. Figure 4 shows examples of the prepared spectra for a range of galaxy types, with some of the major spectral features indicated. The spectral classifications and luminosity functions are derived from this final sample of 5869 galaxies, each described by 738 spectral bins. Finally, we reiterate that we applied the PCA to the spectra given as photon counts per bin, as opposed to energy flux per bin.

### 3.3 Application

Principal Component Analysis of the sample spectra was carried out by finding the eigenvectors (principal components) of the covariance matrix of the de-redshifted and mean-subtracted spectra. The mean spectrum and the first three principal components (PCs) can be seen in Figure 5. The first PC accounts for 49.6% of the variance in the sample, the second accounts for 11.6%, and the third accounts for 4.6%. This still leaves 34.2% of the variance for the later PCs. Much of this remaining variance will be due to noise in the data (compare the distribution of variance over the principal components with that seen in the PCA of synthetic spectra by Ronen et al. 1999, or in the high-S/N observations of Folkes 1998). The 1st PC shows the correlation between a blue continuum slope and strong emission-line features. The 2nd PC allows for stronger emission lines without a strong continuum shape. The 3rd PC allows for an anti-correlation between the oxygen and H$`\alpha `$ lines, relating to the ionization level of the emission-line regions. Figure 6 shows the distribution of the sample spectra in the PC1–PC2 plane. The spectra form a single cluster, with the blue objects with emission lines found to the right of the plot and the red objects with absorption lines to the left. Objects with particularly strong emission lines are found lower in the plot. We expect some small scatter due to Poisson noise, but this is suppressed by the noise-reduction property of the PCA technique.
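To make the projections of Equations (3)–(9) concrete, the following minimal sketch (illustrative only; it operates on an arbitrary matrix of prepared spectra and is not the survey code) mean-subtracts the spectra, diagonalises the covariance matrix of Equation (4), and projects each spectrum onto the leading eigenvectors. Applied to the 5869 × 738 matrix of prepared spectra, the first two fractional variances would correspond to the 49.6% and 11.6% quoted above.

```python
import numpy as np

def pca_project(spectra, n_pc=2):
    """PCA of spectra (N objects x M bins), following Eqs. (3)-(9).

    Returns the mean spectrum, the truncated eigenvector matrix U_P
    (columns = eigenvectors), their fractional variances, the
    projections z, and the rank-n_pc reconstructions."""
    mean = spectra.mean(axis=0)
    X = spectra - mean                      # Eq. (3): mean-subtract
    C = (X.T @ X) / X.shape[0]              # Eq. (4): covariance, M x M
    evals, evecs = np.linalg.eigh(C)        # symmetric eigenproblem, Eq. (5)
    order = np.argsort(evals)[::-1]         # sort by decreasing variance
    evals, evecs = evals[order], evecs[:, order]
    U_P = evecs[:, :n_pc]                   # truncated matrix, Eq. (6)
    z = X @ U_P                             # projections, Eq. (7)
    x_rec = mean + z @ U_P.T                # reconstruction, Eq. (9)
    frac_var = evals[:n_pc] / evals.sum()
    return mean, U_P, frac_var, z, x_rec
```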
Errors in the flux calibration will cause a systematic error, while fibre-to-fibre variations and changes from run to run could introduce additional random scatter (see Ronen et al. 1999 and Folkes 1998 for a discussion of errors due to Poisson noise and flux calibration). For this reason, Bromley et al. (1998) chose to high-pass filter the spectra. However, this involves a loss of information, which we regard as undesirable. The accuracy of the 2dF flux calibration is currently being examined, and will be included in the final analysis of the 2dF data. We will use the location of spectra in the PC1–PC2 plane as the basis of our spectral classification scheme. We have investigated the variation in the distribution of objects in the PC1–PC2 plane as a function of various parameters (see Folkes 1998). We find that there is very little difference in the distribution with galaxy size or ellipticity. There is a small variation with redshift, in that there is a population of low-luminosity galaxies with very strong emission lines at low redshift which is not seen at higher redshifts.

## 4 SPECTRAL CLASSIFICATION

Principal Component Analysis has revealed the main features of the galaxy spectra, but without some further information it is not clear how to segment the PC1–PC2 plane into spectral classes, whether such a classification is meaningful, and what the physical significance of such classes would be. Two approaches were used in combination to gain insight into the distribution of the galaxy spectra in principal component space. One was to classify a subsample of the spectra by eye using a simple phenomenological scheme, and hence look at the distribution of galaxies with specific spectral features in the PC space. The second was to take the Kennicutt (1992) sample of spectra belonging to galaxies of known structural morphology and project them onto the PC1–PC2 plane. A third approach, relating the PCs to physical parameters (e.g. the age, metallicity and star-formation history of galaxies) by using model spectra, is discussed in Ronen et al. (1999).

### 4.1 Spectral features

A simple 3-parameter classification scheme was used to identify spectral features. The scheme allocates each galaxy spectrum a code number from 0 to 2 according to the strength of spectral features in each of the following three categories: early-type absorption lines (features such as H, K, CN and Mg), Balmer-series absorption lines (H$`\gamma `$, H$`\delta `$, etc.), and nebular emission lines (OII, OIII, H$`\beta `$ etc.). This is physically motivated by the typical features produced by stellar populations at progressive stages during and after star formation. We selected an unbiased subsample of 56 galaxies which were classified using this scheme, and then collected them into six broad classes with physical motivation, as follows: Class A, strong absorption lines; Class B, weak absorption lines; Class C, weak features; Class D, strong Balmer lines; Class E, strong emission lines; Class F, strong Balmer and emission lines. We note that these classes are not directly related to any structural morphology. The 56 example objects can be plotted on the PC1–PC2 plane, as shown in Figure 6, which reveals considerable segregation. The Class A objects, representing the strong absorption-line systems with old stellar content and little star formation, inhabit a clear region of the plots. The Class B, weaker absorption-line systems also show a clear cluster.
Classes D, E and F, with emission and/or Balmer lines, inhabit the lower sections of the plot in fairly distinct areas. The Class C objects, which do not have particularly prominent features in absorption or emission, are more widely spread. Although some segregation is shown in the PC1–PC3 plane, in general PC3 is not such a good discriminator, as shown in Figure 7. However, it does allow good separation of the Balmer and oxygen emission-line objects, since PC3 allows for an anti-correlation between those lines.

### 4.2 Galaxy morphology

In the second approach, the 55 Kennicutt (1992) galaxies were split into five standard morphological groups (E/S0, Sa, Sb, Scd, Irr) plus 29 objects with unusual spectra. To make use of this set of well-fluxed, reliable spectra of known morphology, they need to be projected onto the PC space defined by the 2dF spectra. To do this, each Kennicutt galaxy is de-redshifted to its rest frame, then smoothed with a 3Å Gaussian filter, which (by experimentation) gives a line profile similar to that of the 2dF spectra. The Kennicutt spectra are then sampled with 4Å bins across the same wavelength range and normalised in the same way as the 2dF data. The remaining uncertainties are due to any systematic errors in the 2dF response function. The 55 Kennicutt galaxies prepared in this way were then projected onto the PCs from the 2dF sample. Figure 8 shows the PC1–PC2 plane with the Kennicutt points labelled by morphological group. This figure clearly shows the progression in morphological type across the plot, with the many unusual objects, such as star-bursts and irregulars, populating the extreme emission-line regions. The Seyfert galaxies from the Kennicutt sample fall below the morphological sequence of normal galaxies, since they have emission lines that are not necessarily associated with a blue continuum. However, many of the other unusual galaxies with star-burst activity also fall in this area, so the Seyferts are not clearly segregated. Figure 6 shows that this area is populated by the Balmer-strong objects and some of the Class C survey spectra, which show a variety of weak absorption and emission features.

### 4.3 Definition of spectral types

It is now possible to define sensible classifications in PC space based on meaningful spectral and morphological classifications. Here we wish to emphasise the links between galaxy spectra and morphology, so we choose to employ parallel cuts in the PC1–PC2 plane along the Hubble sequence as delineated by the Kennicutt galaxies. The cuts can be seen in Figure 8 with the Kennicutt galaxies superimposed. This defines five spectral types, which are roughly analogous to the five morphological groups. The Kennicutt galaxies do not seem to fall in the region where most of the 2dF galaxies lie; this may be due to the selection bias of the Kennicutt sample, but also to flux calibration. With this reservation, the exact placement of the lines in Figure 8 is somewhat arbitrary, but we have used both the projection of the Kennicutt galaxies and our by-eye spectral classification to place the lines as appropriately as possible. Note, however, that there is no one-to-one correspondence between these five classes and the six classes described in Section 4.1. To check the robustness of our object classification, in view of the fact that the response function is fibre-dependent, we have examined 212 objects with repeated observations in overlapping fields.
We find that 64% of the repeated objects have the same class, and 95% have the same class to within one type. The mean spectra for each of the five types defined by these cuts are shown in Figure 9. This shows the clear progression from the red absorption-line spectrum of Type 1 to the strong emission-line spectrum of Type 5. As can be seen from Figure 8, there is not a one-to-one relation between morphology and spectral type, but the general relation is clear. The mean spectra are in good agreement with the spectra of the equivalent morphological groups given by Pence (1976), Coleman et al. (1980) or Kennicutt (1992): Type 1 corresponds approximately to E/S0 galaxies, Type 2 to Sa galaxies, Type 3 to Sb galaxies, Type 4 to Scd galaxies and Type 5 to the irregulars. The agreement is excellent at the blue end ($`\lambda <5000\AA `$), though towards the red end the spectra of the 2dF types have less flux than the corresponding Pence spectra. This is probably due to inaccuracies in the current preliminary flux calibration, and we will be making further observations to improve the mean calibration of the 2dF spectra. We note that, in principle, the PCs can be used as continuous variables, without binning them.

### 4.4 K-corrections

Spectral types are of intrinsic interest, but are also important in that they yield the K-corrections necessary for estimating absolute magnitudes. The K-correction appropriate to a particular galaxy could in principle be obtained directly from the observed spectrum, from its PCA reconstruction, or from its principal components as $`K(z,\mathrm{PC1},\mathrm{PC2},\ldots )`$. These approaches require careful examination of a number of issues, including the extent to which the fibre spectrum is representative of the integrated galaxy spectrum, the systematic uncertainties in the flux calibration, and the available wavelength range (cf. Heyl et al. 1997). A further complication with our current analysis is that we have limited the PCA to a fixed range in rest-frame wavelength, 3700Å to 6650Å. Since the $`b_J`$ pass-band extends from $`\sim `$3950Å to $`\sim `$5600Å, the wavelength range of the PCA reconstruction, or of the class mean spectra, allows us to calculate K-corrections in $`b_J`$ only for galaxies with $`z<0.07`$. In light of these complications we adopt here a practical approach, associating the spectral types defined in the previous section with specific template spectral energy distributions (SEDs). As discussed above, the SEDs of the five morphological types given by Pence (1976) agree well (i.e., at the blue end, which is that relevant for the $`b_J`$ filter) with the mean spectra of our five spectral types. We therefore identify our spectral types with Pence’s SEDs. We compute the appropriate K-corrections from Pence’s tabulations, transforming the K-corrections in the B and V filters according to the colour relation given by Blair & Gilmore (1982) for the $`b_J`$ filter,

$$K(b_J)=K(B)-0.28\left[K(B)-K(V)\right].$$ (10)

The K-corrections derived in this way for the redshift range $`0<z<0.2`$ are shown as the curves in Figure 10.
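This bookkeeping is simple to implement. The sketch below is illustrative only: the tabulated K(B) and K(V) arrays are placeholders standing in for Pence’s tabulations, which are not reproduced here, and the function names are ours. It interpolates the type-specific tables to a galaxy’s redshift and applies Equation (10).

```python
import numpy as np

def k_bj(z, z_tab, kB_tab, kV_tab):
    """K-correction in b_J via Eq. (10):
    K(b_J) = K(B) - 0.28 [K(B) - K(V)],
    with K(B), K(V) interpolated from a type-specific tabulation."""
    kB = np.interp(z, z_tab, kB_tab)
    kV = np.interp(z, z_tab, kV_tab)
    return kB - 0.28 * (kB - kV)

# Placeholder tabulation for one spectral type (NOT Pence's numbers):
z_tab  = np.array([0.00, 0.05, 0.10, 0.15, 0.20])
kB_tab = np.array([0.00, 0.20, 0.45, 0.70, 0.95])   # hypothetical K(B)
kV_tab = np.array([0.00, 0.10, 0.20, 0.32, 0.45])   # hypothetical K(V)

K = k_bj(0.12, z_tab, kB_tab, kV_tab)               # K-correction at z = 0.12
```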
## 5 THE GALAXY LUMINOSITY FUNCTION

### 5.1 Methods

In computing the luminosity functions (LFs) we use both the 1/$`V_{\mathrm{max}}`$ method (Schmidt 1968), for a non-parametric estimate of the LF, and the STY method (Sandage, Tammann & Yahil 1979), for a maximum-likelihood fit of the LF parameters. The statistical properties of these different estimators are discussed by Felten (1976), Efstathiou, Ellis & Peterson (1988) and Willmer (1997). The 1/$`V_{\mathrm{max}}`$ method assumes a uniform spatial distribution, while the STY method assumes a parametric form, here taken to be a Schechter function (Schechter 1976). The use of other LF estimators is left for future work. We assume the cosmological parameters to be $`\mathrm{\Omega }=1`$ and $`\mathrm{\Lambda }=0`$ (hence $`q_0=\frac{1}{2}`$) and $`H_0`$=100 km s<sup>-1</sup> Mpc<sup>-1</sup>. The analysis is limited to $`z<0.2`$ by the definition of the sample used in the PCA.

The effective area of the survey can be estimated by dividing the number of objects currently observed by the final expected density. The density of galaxies brighter than $`b_J=19.45`$, as derived from the parent 2dFGRS source catalogue, is 180 $`\mathrm{deg}^2`$ per square degree of sky, i.e. 180 $`\mathrm{deg}^{-2}`$. Taking into account a configuration completeness of 93%, the effective area for the 7972 galaxies observed is 47.3 $`\mathrm{deg}^2`$. This provides the overall normalisation for our LF estimates. We also define a completeness factor for each apparent-magnitude range in the currently observed sample compared to the parent photometric sample, and weight each galaxy accordingly in our LF estimates. We note that there may be selection biases that depend on spectral type (e.g., because of surface-brightness effects, and also because at low S/N it is easier to measure redshifts for emission-line galaxies). However, it is not possible to account for such a spectral-type dependence in the completeness since, by definition, the partition into classes is known only for our selected sample of classified spectra and not for the parent catalogue.

Error estimates for the 1/$`V_{\mathrm{max}}`$ LFs are computed by assuming Poisson statistics to deduce the fractional error without weights, and then applying that fractional error to the actual estimate of the LF. These errors are under-estimates, since they neglect the effects of clustering, which will be especially apparent at the faint end of the LFs, where the galaxies are sampled over a relatively small volume. Error estimates for the parameters obtained by the STY method are found using the $`\chi ^2`$ contours of the likelihood-ratio distribution. Note that the fact that the sample is limited to $`z<0.2`$ is irrelevant when finding the maximum-likelihood Schechter parameterisation, since the maximum-likelihood method is based on the conditional probability of luminosity given a redshift. The normalisation of the LF is not given directly by the STY method, but it can be found by integrating the derived LF over the observed volume of the survey and comparing this to the actual number of galaxies observed.

The errors in the measured magnitudes lead to a Malmquist-like bias which can have a noticeable effect on the LF. One method of correction (e.g. Loveday 1992) is to maximise the likelihood in the STY method for a luminosity function which, when convolved with a Gaussian of dispersion equal to the magnitude error, will give the observed data. Another source of bias is the fact that isophotal magnitudes are a function of redshift, due to point-spread-function effects and cosmological dimming. We have applied a correction to the APM isophotal magnitude of each galaxy to approximate its total magnitude, but subtle biases could remain due to the initial isophotal selection, and these may influence the shape and normalisation of the LFs (Dalcanton 1998).
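The 1/$`V_{\mathrm{max}}`$ estimator itself is compact. The sketch below is schematic only: it adopts the cosmology stated above, takes per-galaxy K-corrections as callables K(z), uses our own function names, and omits the magnitude-dependent completeness weights applied in the actual analysis.

```python
import numpy as np

C_KM_S, H0 = 299792.458, 100.0           # km/s and km/s/Mpc (as in the text)

def d_L(z):
    """Luminosity distance in Mpc for Omega=1, Lambda=0 (q0 = 1/2)."""
    return 2.0 * C_KM_S / H0 * (1.0 + z - np.sqrt(1.0 + z))

def volume(z, area_deg2):
    """Comoving volume out to z over the survey area, in Mpc^3."""
    omega = area_deg2 * (np.pi / 180.0) ** 2
    r = d_L(z) / (1.0 + z)               # comoving distance
    return omega * r ** 3 / 3.0

def vmax_lf(M, z, kfuncs, m_faint=19.45, area_deg2=47.3,
            edges=np.arange(-23.0, -13.0, 0.5), z_cut=0.2):
    """Binned 1/V_max estimate of phi(M) [Mpc^-3 mag^-1] (Schmidt 1968)."""
    zgrid = np.linspace(1e-4, z_cut, 2000)
    phi = np.zeros(len(edges) - 1)
    for Mi, zi, ki in zip(M, z, kfuncs):
        # Apparent magnitude this galaxy would have at each redshift:
        m_of_z = Mi + 5.0 * np.log10(d_L(zgrid) * 1.0e5) + ki(zgrid)
        visible = zgrid[m_of_z < m_faint]
        z_max = min(visible.max(), z_cut) if visible.size else zi
        j = np.searchsorted(edges, Mi) - 1
        if 0 <= j < len(phi):
            phi[j] += 1.0 / volume(z_max, area_deg2)
    return phi / np.diff(edges)
```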
### 5.2 Results

Figure 11 shows the 1/$`V_{\mathrm{max}}`$ LFs and the best-fitting Schechter functions from the STY method for the whole sample and for the individual spectral types. The parameters of the Schechter functions are given in Table 1, while Figure 12 shows the contours of likelihood for the parameter estimates. Note that the number of galaxies in each subsample has a large effect on the uncertainties. The errors on $`M^{*}`$ and $`\alpha `$ in Table 1 define a box which bounds the one-sigma contour. The LFs show a clear trend of fainter characteristic magnitudes $`M^{*}`$ and steeper faint-end slopes $`\alpha `$ going from early to late spectral types. There seems to be a discrepancy between the 1/$`V_{\mathrm{max}}`$ points and the STY curves at the faint end. This may indicate that the LFs are not well fitted by a Schechter function, but it may also arise from the small number of faint galaxies and from the uniformity assumption of the 1/$`V_{\mathrm{max}}`$ method, which is not made by the STY method.

Note the peculiar result that $`M^{*}`$ for the whole sample is brighter than $`M^{*}`$ for any of the individual spectral types. How this comes about is illustrated in Figure 13, which shows the co-addition of the luminosity functions for the individual spectral types to give the total luminosity function, and indicates the relative contribution of the spectral types at each absolute magnitude. The most remarkable point about this figure is the way that the very different Schechter functions of the five spectral types combine to give an overall LF that is also a Schechter function, at least down to $`M_{b_J}=-16`$. Fainter than this, the steep LF of the latest types comes to dominate the overall LF, resulting in an upturn in the faint-end slope. The additional information provided by the spectral classification is clear from this figure, and confirms the comment made by Binggeli, Sandage & Tammann (1988) that discussion of a luminosity function without knowledge of the galaxy types is ‘covering a wealth of details with a thick blanket’.

As well as looking at variations in the LFs with spectral type, we can also provide a preliminary picture of the differences in clustering as a function of spectral type. Figure 14 shows cone plots of the redshift-space distribution of early types (Types 1 and 2) and late types (Types 3, 4 and 5); these combinations were chosen simply to give similar numbers of galaxies. The red, ‘early-type’ galaxies do appear more clustered, with evidence for ‘finger-of-God’ effects caused by the velocity dispersion of galaxy clusters, in agreement with the long-known morphology-density relation (Dressler 1980). In comparison, the blue, ‘late-type’ galaxies show a more uniform distribution, although clustering is still evident. Quantifying these differences and comparing them with the predictions of models (e.g. Cole et al. 1998) will be a major focus of future analysis of the 2dFGRS.

## 6 DISCUSSION

### 6.1 PCA and spectral types

The 2dF system and the adopted survey strategy yield a good data set for spectral analysis. The broad wavelength coverage and the homogeneity of the spectra are particularly important. The key remaining unknown in the analysis is the variation in the system response as a function of time, fibre, or location on the field plate. These variations may influence the results of the PCA and produce some of the scatter in the distribution of galaxies in PC space. Large adverse effects are not apparent in the results presented here, but some of the PCs beyond the third do show unphysical broad features which may be due to variation in the response function.
Since the PCs are an orthogonal basis set, restricting the analysis to the first few PCs means that these irregularities are not allowed to influence the spectral classifications. The ideal, however, would be to obtain fully fluxed spectra with the use of standard-star observations for each observing run, together with rigorous testing of the fibre throughput; this may become possible as the 2dF system continues to develop. Even without this, it may still be possible with the full data set to determine (and correct) the average strength of the PCs as a function of fibre number or plate position. Further refinement of the procedures used to remove the adverse effects of sky lines, atmospheric absorption and bad pixels will also improve the quality and robustness of the analysis.

The chosen rest-frame wavelength range is a major issue for the PCA method. It would be possible to further restrict the wavelength range so that the broader redshift range $`0<z<0.3`$ could be included. However, this would involve limiting the rest-frame spectra to the wavelength range from 3650Å to 6150Å, excluding the H$`\alpha `$ line from the analysis. An alternative method would be to analyse the $`0<z<0.2`$ spectra and the $`0.2<z<0.3`$ spectra separately, with the possibility of finding a relation between the classes found in each case.

PCA is clearly extracting physical information from the spectra. Figure 5 shows that the first PC emphasises the correlation between the blue continuum and the emission-line strength. The second PC emphasises the emission lines alone, while the third PC allows for an anti-correlation between H$`\alpha `$ and the \[OIII\] lines, reflecting different excitation levels. Ronen et al. (1999) show how PCA can be used in conjunction with population-synthesis or evolutionary-synthesis models of galaxy spectra to extract information regarding the age, metallicity and star-formation history of galaxies. Here we have simply used PCA for spectral classification.

Our spectral types were defined in the PC1–PC2 plane (Figure 8) by reference to the locations of galaxies belonging to morphologically defined groups, with the intention of defining a spectral sequence analogous to the Hubble sequence. There are a number of refinements that can be considered here. In order to better define the location and the spread of the morphology sequence in the PC1–PC2 plane, a larger set of tracer objects is required than the 26 normal Kennicutt galaxies. For this reason it would be very useful to observe high-S/N integrated spectra for a much larger set of galaxies covering the full range of structural morphological types at a range of inclinations. Alternatively, some of the brighter 2dFGRS galaxies could be morphologically classified by eye or by automated means (e.g. Naim 1995; Abraham et al. 1994), or in some cases using existing classifications from the partially overlapping Stromlo-APM Redshift Survey (Loveday et al. 1992). This second approach has the additional benefit of allowing studies of the links between morphology and spectral type. Another possible approach is to define a purely spectral classification, along the lines presented in Section 4.1. This has the advantage of being self-consistent. A third possibility is to use a training set from models (Ronen et al. 1999).

### 6.2 Comparison with other luminosity functions

It is useful to compare our results on the LF with other recent measurements.
Ratcliffe et al. (1998) determine the LF for 2055 galaxies in the Durham/UKST Galaxy Redshift Survey and find $`M^{*}(b_J)=-19.7`$, $`\alpha =-1.1`$, $`\varphi ^{*}=0.012`$ Mpc<sup>-3</sup>. After correcting for Malmquist bias, they find $`M^{*}(b_J)=-19.7`$, $`\alpha =-1.0`$, $`\varphi ^{*}=0.017`$ Mpc<sup>-3</sup>; thus the correction for Malmquist bias has the effect of dimming $`M^{*}`$ and flattening the faint-end slope a little, which in turn raises the normalisation. Zucca et al. (1997) determine the LF for 3342 galaxies in the ESO Slice Project and find $`M^{*}(b_J)=-19.6`$, $`\alpha =-1.2`$, $`\varphi ^{*}=0.020`$ Mpc<sup>-3</sup> after correcting for Malmquist bias. Lin et al. (1996) use 18,678 galaxies from the LCRS, and find $`M^{*}(R)=-20.`$, $`\alpha =-0.7`$, $`\varphi ^{*}=0.019`$ Mpc<sup>-3</sup>. Our present measurement of the overall LF (see Table 2) is consistent with the ESP result (the largest pre-existing sample in $`b_J`$), but note that we have not yet corrected for Malmquist bias, and this correction is expected to improve the agreement. We also note that all the normalisations agree to within 10%. The SAPM (Loveday et al. 1992) and SSRS2 (Marzke & da Costa 1997) estimates of $`\varphi ^{*}`$ are $`\sim `$30% lower than these estimates and probably reflect a large local underdensity. We will carry out a more detailed analysis of the overall LF, and a comparison with other surveys, in a future paper.

Loveday et al. (1992) determine LFs for the Stromlo-APM Redshift Survey based on a sparse sample of galaxies to $`b_J=17.15`$. They morphologically classify the images into E/S0, Sp/Irr and unclassifiable samples. They find Schechter parameters, corrected for Malmquist bias, of $`M^{*}(b_J)=-19.71\pm 0.25`$, $`\alpha =0.2\pm 0.35`$ for the E/S0 sample and $`M^{*}(b_J)=-19.40\pm 0.16`$, $`\alpha =-0.8\pm 0.20`$ for the Sp/Irr sample. The steeply declining faint-end slope for early-type galaxies is due to the difficulty of classifying faint, relatively featureless galaxies from the photographic images: probably most of the unclassifiable galaxies are E/S0 galaxies. The SAPM luminosity functions based on emission-line strengths confirm this interpretation (Loveday et al. 1998). Thus the LFs for different morphological classes show the same trends as the LFs for the PCA classes considered here.

Bromley et al. (1998) use a similar PCA analysis on spectra from the Las Campanas Redshift Survey (Shectman et al. 1996). However, their data scaling and filtering mean that the galaxy ‘clans’ they derive are not directly comparable to our spectral types. They do, however, confirm a very similar progression in the faint-end slope of the Schechter functions for the spectrally defined subsets, with values of $`\alpha `$ going from $`\alpha =0.51\pm 0.14`$ for their earliest-type clan to $`\alpha =-1.93\pm 0.13`$ for their latest-type clan. Lin et al. (1996) and Zucca et al. (1997) split their samples according to \[OII\] emission-line equivalent widths. In both surveys the strong emission-line galaxies have a faint-end slope that is steeper by about 0.5, and an $`M^{*}`$ about 0.2 magnitudes fainter, than those without emission lines. Loveday et al. (1998) have also measured the LF for samples split on H$`\alpha `$ and \[OII\], and find a similarly steep faint-end slope and fainter $`M^{*}`$ for emission-line galaxies. These variations are comparable to the changes in $`\alpha `$ and $`M^{*}`$ that we would find if we split our sample into just two PC classes. This is an exploratory analysis and not the final word on the 2dFGRS galaxy luminosity function.
The complete survey sample will comprise about 40 times as many galaxies, allowing investigation of the multivariate distribution of galaxies over luminosity, spectral type, surface brightness and local galaxy density. More sophisticated analyses will then be appropriate, including: (i) corrections for variations in completeness with redshift and surface brightness as well as magnitude; (ii) allowance for the effects of clustering on the LF; (iii) the use of clustering-independent LF estimators; (iv) correction for the Malmquist-like bias due to magnitude errors; (v) tests of the physical significance and robustness of the PCA spectral types; and (vi) aperture corrections.

## 7 CONCLUSIONS

Spectral analysis and classification of 5869 2dF Galaxy Redshift Survey spectra has been performed with a Principal Component Analysis method. The spectra form a sample limited to $`b_J`$=19.45 and $`0.01<z<0.2`$. Methods have been applied to remove the effects of sky lines, bad pixels, atmospheric absorption and the system response function. The first PC was found to relate to the blue continuum and the strength of the emission lines, while the second PC was found to relate purely to emission-line strength. The PC1–PC2 plane has been investigated by classifying a subset of the spectra by eye on a physically motivated spectral scheme, and also by projecting the Kennicutt (1992) galaxy spectra of known morphology onto the plane. The spectra have been classified into five spectral types, with the mean spectra of Types 1 to 5 approximately corresponding to the spectra of E/S0, Sa, Sb, Scd and Irr galaxies respectively.

Luminosity functions for the spectral types have been computed, with type-specific K-corrections and weighting of the galaxies to compensate for magnitude-dependent incompleteness. Schechter fits to the luminosity functions reveal a steadily steepening $`\alpha `$ and a trend towards fainter $`M^{*}`$ for later types. For spectral Type 1 the Schechter parameters are $`M^{*}=-19.61\pm 0.09`$ and $`\alpha =-0.74\pm 0.11`$, whereas for spectral Type 5 values of $`M^{*}=-19.02\pm 0.22`$ and $`\alpha =-1.73\pm 0.16`$ are found (the errors define a box bounding the one-sigma contour). The redshift-space distribution of spectral Types 1 and 2 has been visually compared to that of spectral Types 3, 4 and 5, revealing qualitative evidence for stronger clustering of the early-type galaxies. The methods used in this paper will form the basis of the analysis of the luminosity function of the full 2dF Galaxy Redshift Survey.
# Calculability of Quark Mixing Parameters from General Nearest Neighbor Interaction Texture Quark Mass Matrices

## I Introduction

A fundamental explanation of the flavor-mixing matrix, the fermion masses and their hierarchical structure remains one of the most challenging and outstanding problems of particle physics today. Within the standard model (SM) the fermion masses, the three flavor-mixing angles and the CP-violating phase are free parameters, and no relation exists among them. However, the expectation that “low-energy” quantities which can be computed in the SM should remain finite as the masses of intermediate particles go to infinity leads us to suspect that such a relation does hold. For example, the $`K_S^0`$–$`K_L^0`$ mass difference $`\mathrm{\Delta }M`$ diverges as $`m_t\to \infty `$. Since the contribution of the intermediate quark is always multiplied by a flavor-mixing matrix element, a relation between quark masses and flavor-mixing parameters could conspire to guarantee that the contribution to the low-energy quantity always remains finite.

As an attempt to derive a relationship between the quark mass and flavor-mixing hierarchies, mass-matrix ansätze based on flavor democracy, with a suitable breaking so as to allow mixing between the quarks of nearest kinship via nearest neighbor interactions (NNI), were suggested about two decades ago. These early attempts are the first examples of “strict calculability”, i.e., mass matrices such that all flavor-mixing parameters depend solely on, and are determined by, the quark masses. But the simple symmetric NNI texture leads to the experimentally violated inequality $`M_{top}<110`$ GeV, prompting consideration of a less restricted form for the mass matrices so as to still achieve calculability, yet be consistent with experiment. It was later shown that the texture structure of these early ansätze for quark mass matrices, the texture structure of the NNI mass matrices, holds in general. Branco et al. have demonstrated that if one does not impose the assumption of hermiticity, then the NNI texture structure contains no physical assumptions. For three fermion generations, one may consider, without loss of generality, quark mass matrices of the NNI form. This texture structure serves as the general starting point from which additional constraints on the mass-matrix elements can be imposed in order to achieve calculability.

In this paper we analyze general quark mass matrices in a modified NNI form, to be described in the next section. This form is free from physical content, as it corresponds to a particular choice of weak basis in the right-handed chiral quark sector. We perform a numerical fit to the most recent measurements of the Cabibbo-Kobayashi-Maskawa (CKM) matrix, $`V_{CKM}`$. The results are in excellent agreement with the observables. Finally, we propose a new mass-matrix ansatz based on our numerical results, which ensures a relation among the six quark masses and the four observable flavor-mixing parameters, thus reducing the number of free parameters in the SM.

Our paper is organized as follows. In Section II we review the NNI form of the mass matrices, pointing out a subtlety that has been overlooked in previous works. In Section III we present the quark mass matrices to be analyzed, as well as the resulting flavor-mixing matrix. We present the observables used in our fitting procedure, as well as the resultant unitary triangle (UT) parameters.
Guided by the numerics of the general case, we postulate quark mass matrices that ensure calculability.

## II General NNI mass matrices

In a generic gauge theory, flavor eigenstates do not necessarily coincide with the mass eigenstates, which can be written as combinations of flavor eigenstates. The mass matrix in a given electric charge sector is not necessarily diagonal in flavor space and involves the coupling between the left-handed and right-handed chiral states of different flavor. It can be shown that in the SM with three generations of fermions, one may perform a biunitary transformation of the quark mass matrices that leaves both the quark mass spectrum and the flavor-mixing parameters unchanged, where the new mass matrices in both the up-type and down-type quark sectors are of the NNI form and are in general neither hermitian nor symmetric. We stress that this form is merely the consequence of choosing a particularly convenient weak basis that allows us to eliminate some of the redundant free parameters of the SM, and has no physical consequences. Starting from initially completely general complex matrices $`M_{(u,d)}^{\prime }`$, a necessary condition for this biunitary transformation to yield the NNI form is the existence of a solution to the eigenvalue equation

$$\left(M_u^{\prime }M_u^{\prime \dagger }+kM_d^{\prime }M_d^{\prime \dagger }\right)_{ji}U_{i2}=\lambda U_{j2},$$ (1)

where $`k`$ is initially assumed to be an arbitrary complex constant. The NNI mass matrices are then given by

$$M_u=\left(\begin{array}{ccc}0& A_{12}e^{i\theta _1}& 0\\ A_{21}e^{i\theta _2}& 0& A_{23}e^{i\theta _3}\\ 0& A_{32}e^{i\theta _4}& A_{33}e^{i\theta _5}\end{array}\right)$$ (2)

$$M_d=\left(\begin{array}{ccc}0& B_{12}e^{i\theta _1^{\prime }}& 0\\ B_{21}e^{i\theta _2^{\prime }}& 0& B_{23}e^{i\theta _3^{\prime }}\\ 0& B_{32}e^{i\theta _4^{\prime }}& B_{33}e^{i\theta _5^{\prime }}\end{array}\right)$$ (3)

with

$$M_{(u,d)}=U^{\dagger }M_{(u,d)}^{\prime }V_{(u,d)},$$ (4)

where explicit forms for $`U`$ and $`V_{(u,d)}`$, corresponding to the left-handed and right-handed chiral quark rotations, respectively, are given in the literature. Note that $`U`$, the left-handed chiral quark rotation matrix, is common to both the up and down quark sectors. It is the freedom of performing a unitary transformation on the triplet of right-handed quark fields in both the up and down quark sectors, i.e. the freedom associated with the $`V_{(u,d)}`$, without affecting the quark masses or the flavor-mixing parameters, that enables this texture structure to be possible while still remaining completely general.

We can rewrite Eq. (1) for arbitrary complex $`k`$ as

$$A_{21}^2+A_{23}^2+k(B_{21}^2+B_{23}^2)=\lambda $$

$$k_{real}\left[\mathrm{cos}(\beta )\mathrm{sin}(\alpha )-\mathrm{sin}(\beta )\mathrm{cos}(\alpha )\right]-k_{imag}\left[\mathrm{sin}(\beta )\mathrm{sin}(\alpha )+\mathrm{cos}(\beta )\mathrm{cos}(\alpha )\right]=0,$$

where $`\beta \equiv \theta _5^{\prime }-\theta _3^{\prime }`$ and $`\alpha \equiv \theta _5-\theta _3`$. The first equation exhibits the functional dependence of $`\lambda `$ on $`k`$ and provides no restriction on $`k`$. It may appear that when choosing a weak basis that gives NNI forms for the quark mass matrices, one has two arbitrary degrees of freedom, corresponding to $`k_{real}`$ and $`k_{imag}`$. But in fact this two-fold degree of freedom does not exist; $`k`$ must be either purely real or purely imaginary. For both $`k_{real}`$ and $`k_{imag}`$ nonzero, we arrive at the inconsistency $`\mathrm{tan}(\beta )=\mathrm{cot}(\beta )`$.
Inspection of Eq. (1) shows that if $`k`$ is purely real, we are guaranteed a solution to (1), because the operator on the left-hand side is hermitian and has three linearly independent eigenvectors. However, this guarantee does not hold if $`k`$ is purely imaginary. Therefore, we arrive at the condition that $`k`$ is purely real, which results in $`\alpha =\beta `$. Thus the number of parameters in our flavor-mixing matrix is reduced from twelve to eleven. The above mass matrices, with their texture structure and the relationship $`\alpha =\beta `$, are the most general ones that exhibit the NNI texture and have the fewest number of independent parameters. At this stage no physical inputs have been introduced, and the resulting flavor-mixing matrix has five more parameters than are necessary to achieve strict calculability.

## III Flavor-mixing matrix and numerical fit to observables

We use the rephasing freedom of the quark fields to minimize the number of parameters entering the quark flavor-mixing matrix. We can rewrite Eq. (2) as

$$M_u=\left(\begin{array}{ccc}e^{i(\theta _1-\theta _4)}& 0& 0\\ 0& e^{i(\theta _3-\theta _5)}& 0\\ 0& 0& 1\end{array}\right)\left(\begin{array}{ccc}0& A_{12}& 0\\ A_{21}& 0& A_{23}\\ 0& A_{32}& A_{33}\end{array}\right)\left(\begin{array}{ccc}e^{i(\theta _5-\theta _3+\theta _2)}& 0& 0\\ 0& e^{i\theta _4}& 0\\ 0& 0& e^{i\theta _5}\end{array}\right)\equiv P_{(u)L}\stackrel{~}{M_u}P_{(u)R}$$ (5)

with an analogous expression for $`M_d`$, and where $`\theta _3^{\prime }-\theta _5^{\prime }=\theta _3-\theta _5`$. The flavor-mixing matrix is written in terms of the unitary matrices $`U_{(u)L}`$ and $`U_{(d)L}`$ that diagonalize $`M_uM_u^{\dagger }`$ and $`M_dM_d^{\dagger }`$:

$$U_{(u)L}M_uM_u^{\dagger }U_{(u)L}^{\dagger }=\text{diag}(m_u^2,m_c^2,m_t^2)$$

$$U_{(d)L}M_dM_d^{\dagger }U_{(d)L}^{\dagger }=\text{diag}(m_d^2,m_s^2,m_b^2)$$

From the above expressions for $`M_{u,d}`$, we see that

$$M_uM_u^{\dagger }=P_L\stackrel{~}{M_u}\stackrel{~}{M_u}^TP_L^{\dagger }.$$ (6)

Because $`\stackrel{~}{M_u}\stackrel{~}{M_u}^T`$ is a real symmetric matrix, it can be diagonalized by a real orthogonal matrix $`R_u`$:

$$R_u\stackrel{~}{M_u}\stackrel{~}{M_u}^TR_u^T=\text{diag}(m_u^2,m_c^2,m_t^2).$$ (7)

The flavor-mixing matrix can then be expressed as

$$V_{fm}=U_{(u)L}U_{(d)L}^{\dagger }=R_uP_{(u)L}^{\dagger }P_{(d)L}R_d^T$$ (8)

$$V_{fm}=\left(\begin{array}{ccc}& \vec{v}_u& \\ & \vec{v}_c& \\ & \vec{v}_t& \end{array}\right)\left(\begin{array}{ccc}e^{i\theta }& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right)\left(\begin{array}{ccc}& & \\ \vec{v}_d& \vec{v}_s& \vec{v}_b\\ & & \end{array}\right)$$ (9)

where the $`\vec{v}_{u,c,t}`$ are the normalized eigenvectors of $`\stackrel{~}{M_u}\stackrel{~}{M_u}^T`$ with eigenvalues $`m_u^2`$, $`m_c^2`$ and $`m_t^2`$, and similarly for the $`\vec{v}_{d,s,b}`$. Note that only a single phase, $`\theta \equiv \theta _1-\theta _4-\theta _1^{\prime }+\theta _4^{\prime }`$, enters the flavor-mixing matrix and is responsible for CP violation. Thus it is sufficient for only one of the four non-vanishing elements coming from the second columns of $`M_u`$ and $`M_d`$ to be complex in order to provide a general parametrization of flavor mixing and CP violation. For example, one may choose $`\theta _1`$ to be non-zero while all other phases vanish. At this stage, the mass matrices contain eleven independent parameters. We must specify the energy scale at which we are evaluating the mass matrices.
We use the light quark masses $`m_u=4.9\pm 0.53`$ MeV, $`m_d=9.76\pm 0.63`$ MeV, and $`m_s=187\pm 16`$ MeV, and the heavy quark masses $`m_c=1.467\pm 0.028`$ GeV, $`m_b=6.356\pm 0.08`$ GeV and $`m_t=339\pm 24`$ GeV, all of which correspond to the masses at a modified minimal subtraction ($`\overline{MS}`$) renormalization point of 1 GeV. Choosing the energy scale to be 1 GeV, we use the invariance of the trace, the minor determinants and the determinant of $`\stackrel{~}{M}_{u(d)}\stackrel{~}{M}_{u(d)}^T`$ under a similarity transformation to relate the $`\stackrel{~}{M}_{u(d)ij}`$ to the $`m_i^2`$ in each quark sector. These six relations comprise the first six terms of our $`\chi _{tot}^2`$. In addition, we fit to the following $`V_{CKM}`$ observables: $`|V_{ud}|=0.9740\pm 0.001`$, $`|V_{us}|=0.2196\pm 0.0023`$, $`|V_{cd}|=0.224\pm 0.016`$, $`|V_{cs}|=1.04\pm 0.16`$, $`|V_{cb}|=0.0395\pm 0.0017`$, and $`\frac{|V_{ub}|}{|V_{cb}|}=0.08\pm 0.02`$. Our method is to fit to the experimental data and compute the corresponding $`\chi _{tot}^2\equiv \sum _{i=1}^{12}\chi _i^2`$, with one degree of freedom arising from twelve experimental values minus eleven independent input parameters. The following parameters yield a $`\chi _{tot}^2`$ of only 0.180:

Table 1

| Parameter | Value \[MeV\] | Parameter | Value \[MeV\] |
| --- | --- | --- | --- |
| $`A_{12}`$ | 976.99 $`\pm `$ 0.9818 | $`B_{12}`$ | $`44.149\pm 0.1959`$ |
| $`A_{21}`$ | 101.07 $`\pm `$ 0.975 | $`B_{21}`$ | $`50.532\pm 1.406`$ |
| $`A_{23}`$ | -1465.5 $`\pm `$ 0.5671 | $`B_{23}`$ | $`304.19\pm 3.2851`$ |
| $`A_{32}`$ | -3.3811 $`\times 10^5`$ $`\pm `$ 0.9675 | $`B_{32}`$ | $`3644.1\pm 0.9721`$ |
| $`A_{33}`$ | -24678 $`\pm `$ 0.9679 | $`B_{33}`$ | $`5203.4\pm 0.9693`$ |

The single phase $`\theta `$ that minimizes $`\chi _{tot}^2`$ is $`-0.716\pm 0.0989`$ radians. The above parameters also yield the following predictions for the mixing of the top quark: $`\frac{|V_{tb}|^2}{|V_{td}|^2+|V_{ts}|^2+|V_{tb}|^2}=0.9984`$, $`|V_{tb}^{*}V_{td}|=0.009422`$ and $`\frac{|V_{td}|}{|V_{ts}|}=0.2479`$. The first two predictions are in excellent agreement with the latest Particle Data Group values, while our value of $`\frac{|V_{td}|}{|V_{ts}|}`$ is slightly higher than previous predictions.

The numerically determined mass matrices $`\stackrel{~}{M_u}`$ and $`\stackrel{~}{M_d}`$ display some interesting properties worth noting at this point. One can immediately see that they are far from symmetric, as $`|A_{12}|`$ differs markedly in magnitude from $`|A_{21}|`$, etc. Of particular interest is the difference between the up and down quark sectors vis-à-vis the 3-3 element. Historically, mass-matrix ansätze proposed in the spirit of calculability have written the 3-3 element as a small perturbation about $`|m_{t,b}|`$, i.e. as $`|m_3-ϵ|`$, where $`ϵ`$ is some small parameter. Inspection of the above parameters shows that this description works fairly well for the down sector, but is not at all applicable to the up quark sector. In the up quark sector, it is the magnitude of the 3-2 element that differs only slightly from $`m_3=m_t`$. This numerical result and observed structure is a consequence of the top quark being so heavy.
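The construction of Eqs. (5)–(9) is easy to check numerically. The sketch below is illustrative only: it uses the central values of Table 1 and the fitted phase $`\theta =-0.716`$, with our own variable names, and should approximately reproduce the $`|V_{CKM}|`$ matrix and quark masses quoted in the text.

```python
import numpy as np

# Central values from Table 1 (MeV) and the fitted CP phase (radians).
A = {"12": 976.99, "21": 101.07, "23": -1465.5, "32": -3.3811e5, "33": -24678.0}
B = {"12": 44.149, "21": 50.532, "23": 304.19,  "32": 3644.1,    "33": 5203.4}
theta = -0.716

def nni(p):
    """Real NNI texture matrix M-tilde appearing in Eq. (5)."""
    return np.array([[0.0,     p["12"], 0.0],
                     [p["21"], 0.0,     p["23"]],
                     [0.0,     p["32"], p["33"]]])

def left_rotation(M):
    """Masses and real orthogonal R with R (M M^T) R^T diagonal, Eq. (7).
    Rows of R are eigenvectors, ordered by increasing eigenvalue."""
    w, v = np.linalg.eigh(M @ M.T)              # ascending eigenvalues
    return np.sqrt(w), v.T

m_up, Ru = left_rotation(nni(A))
m_dn, Rd = left_rotation(nni(B))

P = np.diag([np.exp(1j * theta), 1.0, 1.0])     # phase matrix of Eq. (9)
V = Ru @ P @ Rd.T                               # Eq. (8): V = R_u P R_d^T
print(np.abs(V))         # compare with the |V_CKM| quoted below
print(m_up, m_dn)        # recovered quark masses in MeV
```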
Using just the central values of the quark masses at the 1 GeV scale, the absolute values of the flavor-mixing matrix elements are: $$|V_{CKM}|=\left(\begin{array}{ccc}0.974007& 0.219610& 0.00316079\\ 0.223856& 0.975401& 0.0395023\\ 0.00942966& 0.0380247& 0.999214\end{array}\right)$$ To impose unitarity we need to go through one additional step after varying our input parameters and fitting to the above experimental values. In our flavor-mixing matrix, we have in the matrices $$\left(\begin{array}{ccc}& \vec{v}_u& \\ & \vec{v}_c& \\ & \vec{v}_t& \end{array}\right)\text{and}\left(\begin{array}{ccc}& & \\ \vec{v}_d& \vec{v}_s& \vec{v}_b\\ & & \end{array}\right)$$ explicit expressions involving the $`m_i^2`$, the six central values of the quark masses squared at the 1 GeV scale. The real symmetric matrices $`\stackrel{~}{M_u}\stackrel{~}{M_u}^T`$ and $`\stackrel{~}{M_d}\stackrel{~}{M_d}^T`$ will not have these central values as their exact eigenvalues unless the fit is perfect, i.e. unless $`\sum _{i=1}^6\chi _i^2=0`$. To ensure unitarity, after having performed the fit to the experimental observations, we evaluate the eigenvalues of $`\stackrel{~}{M_u}\stackrel{~}{M_u}^T`$ and $`\stackrel{~}{M_d}\stackrel{~}{M_d}^T`$ to find the correct $`m_i^2`$ we should use in the $`\vec{v}_i`$, and then re-evaluate our expression for $`V_{fm}`$, thus ensuring a unitary flavor-mixing matrix. Although unitary, our derived flavor-mixing matrix contains unphysical phases that must be removed in order to perform a unitarity-triangle analysis. After finding the mass matrix parameters that yield the minimum $`\chi ^2`$, we put the flavor-mixing matrix into the standard CKM representation advocated by the Particle Data Group and then into an improved Wolfenstein parametrization from which we find the unitarity triangle. The improved Wolfenstein parametrization can be obtained from the standard CKM representation with the following identifications: $$\lambda \equiv s_{12}=\mathrm{sin}\theta _{12},\qquad A\lambda ^2\equiv s_{23}=\mathrm{sin}\theta _{23},\qquad A\lambda ^3(\rho -i\eta )\equiv s_{13}e^{-i\delta }=\mathrm{sin}\theta _{13}e^{-i\delta }$$ with modified Wolfenstein parameters $$\overline{\rho }\equiv \rho (1-\frac{\lambda ^2}{2}),\qquad \overline{\eta }\equiv \eta (1-\frac{\lambda ^2}{2})$$ With the above parameter values and using the true numerical eigenvalues of the resulting $`\stackrel{~}{M}_{(u,d)}\stackrel{~}{M}_{(u,d)}^T`$, we arrive at the following values for the flavor-mixing parameters: Table 2

| Parameter | Value \[rad\] | Parameter | Value |
| --- | --- | --- | --- |
| $`\theta _{13}`$ | $`3.160065\times 10^{-3}`$ | $`\overline{\eta }`$ | $`0.3444959`$ |
| $`\theta _{12}`$ | $`0.22825323`$ | $`\overline{\rho }`$ | $`0.00422775`$ |
| $`\theta _{23}`$ | $`3.95070522\times 10^{-2}`$ | $`\alpha `$ | $`71.61961^{}`$ |
| $`\delta `$ | $`1.5585246`$ | $`\beta `$ | $`19.083502^{}`$ |
| $`\lambda `$ | $`0.22627623`$ | $`\gamma `$ | $`89.296884^{}`$ |
| A | $`0.771433972`$ | $`J`$ | $`2.748573468\times 10^{-5}`$ |

$`J`$ is the Jarlskog invariant, a rephasing invariant of the mixing matrix, and is given in the standard representation of the quark flavor-mixing matrix as $`J=s_{12}s_{13}s_{23}c_{12}c_{13}^2c_{23}s_\delta =2.748573468\times 10^{-5}`$.
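As a numerical cross-check (our own, not from the original analysis), feeding the fitted standard-representation angles of Table 2 through the identifications quoted above reproduces the remaining Table 2 entries:

```python
# Cross-check of Table 2 from the standard-representation angles.
import numpy as np

t12, t23, t13, delta = 0.22825323, 3.95070522e-2, 3.160065e-3, 1.5585246
s12, s23, s13 = np.sin(t12), np.sin(t23), np.sin(t13)
lam = s12                                      # lambda = s12
A = s23 / lam**2                               # A lambda^2 = s23
z = s13 * np.exp(-1j * delta) / (A * lam**3)   # rho - i eta
rho, eta = z.real, -z.imag
rho_bar, eta_bar = rho * (1 - lam**2 / 2), eta * (1 - lam**2 / 2)
J = s12 * s13 * s23 * np.cos(t12) * np.cos(t13)**2 * np.cos(t23) * np.sin(delta)
print(lam, A, rho_bar, eta_bar, J)
# -> 0.22628, 0.77143, 0.00423, 0.34448, 2.7486e-05
#    (agrees with Table 2 to ~1e-4, the precision of the rounded inputs)
```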
After imposing unitarity as described above, the absolute values of $`V_{CKM}`$ become: $$|V_{CKM}|=\left(\begin{array}{ccc}0.974058& 0.226275& 0.00316006\\ 0.226101& 0.973303& 0.0394979\\ 0.00941607& 0.0384891& 0.999215\end{array}\right)$$ As alluded to in the introduction, one of the most appealing features of the NNI mass matrices is the large number of texture zeroes. One may take advantage of this texture structure to eliminate three of the five real parameters in each sector, expressing them solely in terms of the $`m_i^2`$, up to the sign ambiguities of the square-root branches. That is, we may use the invariance of the characteristic equation under a similarity transformation to express $`A_{23},A_{32},`$ and $`A_{33}`$ in terms of $`m_u^2,m_c^2,`$ and $`m_t^2`$, and similarly for the down quark sector. Choosing $`A_{12}`$ and $`A_{21}`$ to be the as yet undetermined parameters in the up-quark mass matrix, one easily verifies the following relations (with the square-root branches chosen to match the signs realized in Table 1): $`A_{33}=-\frac{m_um_cm_t}{A_{12}A_{21}}`$, $`A_{32}=-\sqrt{\frac{b+\sqrt{b^2+4c}}{2}}`$, $`A_{23}=-\sqrt{m_u^2+m_c^2+m_t^2-A_{12}^2-A_{21}^2-A_{32}^2-\frac{m_u^2m_c^2m_t^2}{A_{12}^2A_{21}^2}}`$, where $`b\equiv m_u^2+m_c^2+m_t^2-2A_{12}^2-\frac{m_u^2m_c^2m_t^2}{A_{12}^2A_{21}^2}`$ and $`c\equiv A_{12}^2\left(m_u^2+m_c^2+m_t^2-A_{12}^2\right)+\frac{m_u^2m_c^2m_t^2}{A_{12}^2}-m_u^2m_c^2-m_u^2m_t^2-m_c^2m_t^2`$. Similarly, for the down-quark sector, choosing $`B_{12}`$ and $`B_{21}`$ to be the parameters as yet undetermined by the invariance relations, one finds the following equalities: $`B_{33}=\frac{m_dm_sm_b}{B_{12}B_{21}}`$, $`B_{32}=\sqrt{\frac{d+\sqrt{d^2+4e}}{2}}`$, $`B_{23}=\sqrt{m_d^2+m_s^2+m_b^2-B_{12}^2-B_{21}^2-B_{32}^2-\frac{m_d^2m_s^2m_b^2}{B_{12}^2B_{21}^2}}`$, where $`d\equiv m_d^2+m_s^2+m_b^2-2B_{12}^2-\frac{m_d^2m_s^2m_b^2}{B_{12}^2B_{21}^2}`$ and $`e\equiv B_{12}^2\left(m_d^2+m_s^2+m_b^2-B_{12}^2\right)+\frac{m_d^2m_s^2m_b^2}{B_{12}^2}-m_d^2m_s^2-m_d^2m_b^2-m_s^2m_b^2`$. Using the above mass matrices with their associated five total degrees of freedom corresponding to $`A_{12},A_{21},B_{12},B_{21}`$ and $`\theta `$, and the six experimental values of $`|V_{CKM}|`$ observables as before, we obtain Table 3, with a $`\chi _{tot}^2`$ of 2.182. This $`\chi _{tot}^2`$ is not nearly as good as in the previous case, but the apparent decline in numerical confidence simply reflects the slight discrepancy between unitarity and the present experimental values. In the previous case, the flavor-mixing matrix could deviate from unitarity to satisfy the experimental constraints; i.e., $`|V_{cs}|`$ having a central value that in itself violates unitarity did not spoil the minimization of $`\chi _{tot}^2`$, because the requirements of unitarity were not automatically satisfied. In the current situation, unitarity is automatically imposed, and so the increase in $`\chi _{tot}^2`$ is to be expected. The predicted $`|V_{CKM}|`$ all lie within the ranges preferred by the Particle Data Group analysis, and as such, any set of parameters that fulfills this requirement of agreement with the ranges listed therein is deemed “acceptable”, regardless of $`\chi _{tot}^2`$ values.
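The origin of these relations can be made explicit. As a sketch, two of the three similarity invariants of $`\stackrel{~}{M_u}\stackrel{~}{M_u}^T`$ read $$\det \left(\stackrel{~}{M_u}\stackrel{~}{M_u}^T\right)=(\det \stackrel{~}{M_u})^2=(A_{12}A_{21}A_{33})^2=m_u^2m_c^2m_t^2,\qquad \mathrm{tr}\left(\stackrel{~}{M_u}\stackrel{~}{M_u}^T\right)=A_{12}^2+A_{21}^2+A_{23}^2+A_{32}^2+A_{33}^2=m_u^2+m_c^2+m_t^2,$$ which immediately fix $`|A_{33}|`$ and, once $`A_{32}`$ is known, $`|A_{23}|`$ as quoted above. The third invariant, the sum of the principal $`2\times 2`$ minors (equal to $`m_u^2m_c^2+m_u^2m_t^2+m_c^2m_t^2`$), yields the quadratic equation in $`A_{32}^2`$ whose root is the quoted expression involving $`b`$ and $`c`$.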
$$|V_{CKM}|=\left(\begin{array}{ccc}0.975244& 0.22111& 0.00314624\\ 0.220939& 0.974488& 0.0394887\\ 0.00924305& 0.0385204& 0.999215\end{array}\right)$$ Implicit in this prejudicial “acceptance” is the belief that only three generations of quarks exist and that future experiments will bring the experimental values into closer agreement with the constraints imposed by unitarity. Table 3

| Parameter | Value |
| --- | --- |
| $`A_{12}`$ | $`1229.3\pm 32.38`$ |
| $`A_{21}`$ | $`139.82\pm 0.091`$ |
| $`B_{12}`$ | $`45.872\pm 3.4573`$ |
| $`B_{21}`$ | $`48.797\pm 0.0433`$ |
| $`\theta `$ | $`0.67806\pm 0.009657`$ |

In addition, the standard representation parameters as well as the Wolfenstein parameters and Jarlskog invariant $`J`$ are predicted to be: Table 4

| Parameter | Value | Parameter | Value |
| --- | --- | --- | --- |
| $`\theta _{13}`$ | $`3.146249\times 10^{-3}`$ | $`\overline{\eta }`$ | $`0.3515246`$ |
| $`\theta _{12}`$ | $`0.2229535`$ | $`\overline{\rho }`$ | $`0.001394`$ |
| $`\theta _{23}`$ | $`3.9499098\times 10^{-2}`$ | $`\alpha `$ | $`70.83438^{}`$ |
| $`\delta `$ | $`1.5668302`$ | $`\beta `$ | $`19.39285^{}`$ |
| $`\lambda `$ | $`0.2211108`$ | $`\gamma `$ | $`89.77276^{}`$ |
| $`A`$ | $`0.80770906`$ | $`J`$ | $`2.67698466\times 10^{-5}`$ |

Because the mass matrices squared have the correct eigenvalues, the free parameters $`A_{12},A_{21},B_{12},B_{21}`$ and $`\theta `$ represent the general parametrization of the fundamental mass matrices. To realize calculability, it is not enough to simply postulate some relation among this set of parameters independent of the quark masses, as such a relation would only serve to relate the flavor-mixing parameters among themselves, and not provide the sought-after relation between quark masses and flavor-mixing parameters. ## IV Making New Mass Matrix Ansätze The above mass matrices are completely general; to achieve calculability we must find expressions for $`A_{12},A_{21},B_{12},B_{21}`$ and $`\theta `$ in terms of the $`m_i`$. Guided by the parameter values in Tables 1 and 3, we postulate the following mass matrices in terms of the $`m_i`$: $$\stackrel{~}{M_u}=\left(\begin{array}{ccc}0& A_{12}& 0\\ A_{21}& 0& \sqrt{m_u^2+m_c^2+m_t^2-A_{12}^2-A_{21}^2-A_{32}^2-\frac{m_u^2m_c^2m_t^2}{A_{12}^2A_{21}^2}}\\ 0& \sqrt{\frac{b+\sqrt{b^2+4c}}{2}}& \frac{m_um_cm_t}{A_{12}A_{21}}\end{array}\right)$$ where $`A_{12}=\left(\sqrt{\frac{m_u}{2}}+\sqrt{\frac{m_c}{2}}\right)^2`$, $`A_{21}=\sqrt{m_um_c}`$ and $$\stackrel{~}{M_d}=\left(\begin{array}{ccc}0& B_{12}& 0\\ B_{21}& 0& \sqrt{m_d^2+m_s^2+m_b^2-B_{12}^2-B_{21}^2-B_{32}^2-\frac{m_d^2m_s^2m_b^2}{B_{12}^2B_{21}^2}}\\ 0& \sqrt{\frac{d+\sqrt{d^2+4e}}{2}}& \frac{m_b\sqrt{m_dm_s}}{B_{21}}\end{array}\right)$$ where $`B_{12}=\sqrt{m_dm_s}`$, $`B_{21}=m_d+\sqrt{m_dm_s}`$ and $`b,c,d`$ and $`e`$ are defined as before. Lastly, the single phase $`\theta `$ entering the flavor-mixing matrix is postulated to be $`\frac{B_{32}}{B_{33}}`$. We know that for only two quark generations there is no CP-violating phase in the mixing matrix, so it is natural to expect, within the framework of calculability, that $`\theta `$ will involve ratios of elements in the mass matrices that only involve the mixings of the third-generation quarks. With these postulates for $`\stackrel{~}{M_u}`$, $`\stackrel{~}{M_d}`$ and $`\theta `$, $`V_{fm}`$ is found from Eqs. (6)–(9). This constitutes a new mass matrix ansatz that provides a calculable model of flavor-mixing and is in excellent agreement with the latest experimental results.
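(As a quick arithmetic check of the postulated phase, entirely our own: with the Table 1 down-sector values the ratio $`B_{32}/B_{33}`$ is indeed close in magnitude to the fitted phase of Tables 1 and 3.)

```python
# Consistency check of the postulate theta ~ B32/B33 (Table 1 values, MeV).
B32, B33 = 3644.1, 5203.4
print(B32 / B33)  # 0.7003, comparable to the fitted |theta| of 0.68-0.72
```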
This new ansatz predicts the following absolute values for $`|V_{CKM}|`$: $$|V_{CKM}|=\left(\begin{array}{ccc}0.974427& 0.224682& 0.00307663\\ 0.224513& 0.973672& 0.0394506\\ 0.00923256& 0.0384783& 0.999217\end{array}\right),$$ $`\frac{|V_{ub}|}{|V_{cb}|}=0.0779868`$ and $`\frac{|V_{td}|}{|V_{ts}|}=0.239942`$. The $`\chi _{tot}^2`$ computed from the six experimental $`V_{CKM}`$ measurements is 5.2489. Because we have no free parameters, there are six degrees of freedom. Such a value for $`\chi _{tot}^2`$ corresponds to a $`{\sim}70\%`$ confidence level. The standard representation parameters, Jarlskog invariant $`J`$, and the modified Wolfenstein parameters are predicted to be: Table 5

| Parameter | Value | Parameter | Value |
| --- | --- | --- | --- |
| $`\theta _{13}`$ | $`3.0766324\times 10^{-3}`$ | $`A`$ | $`0.781473947`$ |
| $`\theta _{12}`$ | 0.226617779 | $`\overline{\eta }`$ | $`0.3380146`$ |
| $`\theta _{23}`$ | $`3.94622992\times 10^{-2}`$ | $`\overline{\rho }`$ | $`0.01469470`$ |
| $`\delta `$ | 1.52735011 | $`\alpha `$ | $`73.554460004^{}`$ |
| $`J`$ | $`2.652858\times 10^{-5}`$ | $`\beta `$ | $`18.93482465^{}`$ |
| $`\lambda `$ | 0.2246832 | $`\gamma `$ | $`87.51071528^{}`$ |

The flavor-mixing observables predicted from this calculability ansatz in Table 5 are in excellent agreement with the general result of Table 2. The above mass matrices with their calculability property as well as the low $`\chi _{tot}^2`$ are very compelling arguments in favor of calculability in the quark sector, but naturally one would like to uncover at least a glimpse of the more fundamental theory beyond the SM that is the source of this calculability. When one considers Hermitian mass matrices, considerations of family permutation symmetry and its breaking in successive stages are suggestive explanations of the source of calculability, whereas with general NNI texture mass matrices, even though calculability still holds, it is not readily apparent what family symmetry, if any, is responsible. In conclusion, we have elucidated an important point in the construction of the NNI weak-eigenstate quark basis that has eluded some previous authors. We have then performed an analysis of the experimental data to determine the mass matrix parameters, and have obtained values for the flavor-mixing and UT parameters that are in excellent agreement with previous analyses . Finally, we have presented a new class of calculable mass matrices that also can explain all the experimental data and predict nearly identical values for the flavor-mixing and UT parameters as in the previous case. ## V ACKNOWLEDGEMENTS We thank the School of Physics, Korea Institute for Advanced Study, where much of this work was completed, for its hospitality. DD would like to thank the NSF/KOSEF 1998 Summer Institute in Korea Program for financial support. Support for this work was provided in part by U.S. Dept. of Energy Contract DE-FG-02-91ER40688-Task A.
# On the theory of Josephson effect in a diffusive tunnel junction ## I Introduction In recent years, considerable advances have been made in the technology of preparing low-resistance tunnel junctions with a comparatively high barrier transmissivity (tunneling probability) $`\mathrm{\Gamma }`$. This primarily applies to controlled break-junctions as well as systems based on 2D electron gas, whose conductivity undergoes a crossover from tunnel to metal type upon a change in the barrier parameters. The problem of calculation of the Josephson current through a junction with an arbitrary transmissivity in the ballistic regime (with the electron mean free path $`l`$ much greater than the coherence length $`\xi _0`$) was solved by many authors on the basis of the model of a single-mode junction with current-carrying banks ensuring a rapid “spreading” of supercurrent and the equality of the order parameter modulus $`\mathrm{\Delta }`$ near the barrier to its bulk value (the “rigidity” condition for $`\mathrm{\Delta }`$). In the 1D geometry (e.g., a planar junction or a superconducting channel with a tunnel barrier), the problem is complicated considerably due to the change in the order parameter and the quasiparticle energy spectrum in the vicinity of the junction, which makes a contribution to the phase dependence of the current $`j(\mathrm{\Phi })`$. Antsygina and Svidzinskii determined the corresponding corrections to $`j(\mathrm{\Phi })`$ of the order of $`\mathrm{\Gamma }^2`$ for a pure ($`l\gg \xi _0`$) superconductor in the limit of low transmissivity $`\mathrm{\Gamma }\ll 1`$: $$\delta j(\mathrm{\Phi })=\alpha (T)I(\mathrm{\Delta })\mathrm{\Gamma }\left(\mathrm{sin}\mathrm{\Phi }-\frac{1}{2}\mathrm{sin}2\mathrm{\Phi }\right),\qquad \alpha (T)\sim 1,$$ (1) $$I(\mathrm{\Delta })=(\pi /4)e\nu _Fv_F\mathrm{\Gamma }\mathrm{\Delta }=I_c(\mathrm{\Delta })/\mathrm{tanh}(\mathrm{\Delta }/2T),$$ (2) where $`\nu _F`$ is the density of states, $`v_F`$ the Fermi velocity, and $`I_c(\mathrm{\Delta })`$ the critical current through the junction. In a diffusive superconductor (the “dirty” limit $`l\ll \xi _0=\sqrt{D/2\mathrm{\Delta }}`$, $`D=v_Fl/3`$ is the diffusion constant), the calculation of the Josephson current for an arbitrary $`\mathrm{\Gamma }`$ is hardly possible even in a simple model disregarding the variation of the order parameter in the vicinity of the junction. As a matter of fact, the boundary conditions for isotropic Green’s functions $`\widehat{g}(𝒓,t_1-t_2)`$ at the junction, obtained by Kupriyanov and Lukichev, $$l(\widehat{g}\nabla \widehat{g})_L=l(\widehat{g}\nabla \widehat{g})_R=\frac{3}{4}\left\langle \frac{\mu d(\mu )}{r(\mu )}\right\rangle [\widehat{g}_L,\widehat{g}_R],$$ (3) where $`d(\mathrm{cos}\theta )`$ is the tunneling probability for an electron impinging on the barrier at an angle $`\theta `$, and the subscripts $`R`$ and $`L`$ mark the value to the right and left of the barrier, are valid only within the first order in the small angle-averaged transmissivity $`\mathrm{\Gamma }=\langle \mu d(\mu )\rangle `$. Lambert et al. proved that the derivation of the boundary conditions in the general case ($`d\sim 1`$) is reduced to an analysis of a system of nonlinear integral equations for the terms in the expansion of the averaged Green’s function $`\widehat{g}(𝒓,𝒑)=\widehat{g}(𝒓)+𝒑\widehat{𝒈}_1(𝒓)+\mathrm{\dots }`$ over Legendre polynomials. This problem can be solved only for $`\mathrm{\Gamma }\ll 1`$ by expanding the right-hand side of Eq. (3) into a power series in $`\mathrm{\Gamma }`$, which was used in Ref.
for calculating the corrections to the Josephson current of the order of $`\mathrm{\Gamma }^2`$. In this paper, we pay attention, first of all, to the fact that the problem of calculation of the current–phase relation for a diffusive junction in the 1D geometry is meaningful only in the case of low transmissivity of the barrier. Indeed, simple estimates obtained on the basis of the well-known formula for $`j(\mathrm{\Phi })`$ in the first order in $`\mathrm{\Gamma }`$, $$j_0(\mathrm{\Phi })=I(\mathrm{\Delta })\mathrm{tanh}(\mathrm{\Delta }/2T)\mathrm{sin}\mathrm{\Phi }$$ (4) (which coincides, according to the Anderson theorem, with the Ambegaokar–Baratoff result for a pure superconductor), show that even for small $`\mathrm{\Gamma }\sim l/\xi _0\ll 1`$ the critical current through the junction becomes of the order of the bulk thermodynamic critical current $`\sim n_sev_{sc}`$, where $`v_{sc}\sim 1/m\xi _0`$ is the critical velocity of the condensate, $`n_s\sim m\nu _FD\mathrm{\Delta }`$ its density, and $`m`$ the electron mass ($`\mathrm{}=1`$). Thus, for $`\mathrm{\Gamma }\gtrsim l/\xi _0`$ the tunnel junction no longer plays the role of a “weak link” with a jump of the order parameter phase $`\mathrm{\Phi }`$ and other features of a Josephson element. This follows even from the boundary conditions Eq. (3) if we use the estimate $`\nabla \widehat{g}\sim \widehat{g}/\xi _0`$ in the vicinity of the junction, which leads to $`[\widehat{g}_L,\widehat{g}_R]\sim \mathrm{sin}\mathrm{\Phi }\,(\xi _0/l)\mathrm{\Gamma }\sim 1`$ for $`\mathrm{\Gamma }\sim l/\xi _0`$. This criterion of a weak link can also be formulated in terms of the conductance of the system in the normal state: the resistance of the junction must exceed the resistance of a metal layer of thickness $`\xi _0`$. From this it follows that the parameter $$W=(3\xi _0/4l)\mathrm{\Gamma }\gg \mathrm{\Gamma }$$ (5) plays a fundamental role in the theory of the Josephson effect for a diffusive junction (the factor 3/4 is chosen for convenience of notation). We can attach to this parameter the meaning of an effective tunneling probability for Cooper pairs, which is higher than the conventional probability $`\mathrm{\Gamma }`$ of quasiparticle tunneling. Small values $`W\ll 1`$ correspond to “weak link” conditions (Josephson effect); for $`W>1`$, the presence of a tunnel barrier virtually does not affect the supercurrent flow and the distribution of the order parameter in a diffusive superconductor. Moreover, we can expect that $`W`$, and not $`\mathrm{\Gamma }`$, is the true parameter of the expansion of $`j(\mathrm{\Phi })`$ in the barrier transmissivity. Indeed, the dependence of the Josephson current on the mean free path is absent only within the main approximation in $`\mathrm{\Gamma }`$, Eq. (4); therefore, it must be manifested in higher-order terms of the expansion of $`j(\mathrm{\Phi })`$ through the emergence of the additional dimensionless parameter $`\xi _0/l`$ in them, which vanishes at $`l\to \mathrm{\infty }`$. An analysis of the corrections to the current–phase dependence Eq. (4), carried out in Sec. 4 of this article in the next order in $`W`$, confirms these considerations and proves that the corrections $`\sim \mathrm{\Gamma }^2`$ to the Josephson current obtained in Ref. and associated with the corrections to the boundary conditions Eq. (3) are much smaller and in fact insignificant. Another important result of the analysis of the current-carrying state of a diffusive Josephson junction is the conclusion concerning the emergence of localized states of electron excitations in the vicinity of the barrier.
This phenomenon is well known for a ballistic tunnel junction, in which discrete energy levels $$ϵ_n(\mathrm{\Phi })=\pm \mathrm{\Delta }(1-d\,\mathrm{sin}^2\mathrm{\Phi }/2)^{1/2},$$ (6) associated with Andreev localization of electron excitations near the jump in the order parameter phase, split from the continuous spectrum in the current-carrying state. A similar phenomenon also takes place in a diffusive junction, in which, however, isolated coherent energy levels cannot exist due to electron scattering at impurities and defects. In this case, the most adequate description of the variation of the energy spectrum of excitations is the deformation of their local density of states $`N(ϵ,𝒓)=\text{Re}\,u^R(ϵ,𝒓)`$ ($`u^R`$ is the diagonal component of the retarded Green’s function for the superconductor), which is assumed for brevity to be normalized to its value $`\nu _F`$ in the normal metal. In the absence of current, the density of states in a homogeneous superconductor has the standard form $`N_0(ϵ)=|ϵ|\mathrm{\Theta }(ϵ^2-\mathrm{\Delta }^2)/\sqrt{ϵ^2-\mathrm{\Delta }^2}`$ ($`\mathrm{\Theta }(x)`$ is the Heaviside function) with root singularities at the gap boundaries. In the current state, the momentum $`p_s`$ of the superfluid condensate plays the role of a depairing factor smoothing the singularities of $`N(ϵ)`$ and reducing the energy gap $`2ϵ_{*}`$ by $`\mathrm{\Delta }-ϵ_{*}(p_s)\propto (Dp_s^2)^{2/3}`$. In the vicinity of a weak link, a similar (and the main) factor of energy gap suppression is the phase jump $`\mathrm{\Phi }`$, which leads to the formation of a “potential well” around the junction having a width of the order of $`\xi _0`$ and containing localized excitations with an energy $`|ϵ|<\mathrm{\Delta }`$ (see Sec. 3). In contrast to the ballistic case, the Josephson transport in a diffusive junction is performed not only by the states in the potential well, but also by excitations within the whole energy region near the gap edge where the density of states differs significantly from the unperturbed value. ## II Equations for Green’s function of a low-transparent Josephson junction In order to calculate the density of states and the equilibrium supercurrent $$j=\frac{e}{4}\nu _Fv_FD\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}dϵ\,f_0(ϵ)\,\text{Tr}\,\sigma _z(\widehat{g}^R\nabla \widehat{g}^R-\widehat{g}^A\nabla \widehat{g}^A)(ϵ)$$ (7) we must solve equations for the matrix retarded (advanced) Green’s functions $`\widehat{g}^{R,A}(𝒓,ϵ)`$ averaged over the ensemble of scatterers: $$[\sigma _zϵ+\mathrm{\Delta }\mathrm{exp}(i\sigma _z\chi )i\sigma _y,\widehat{g}]=iD\nabla (\widehat{g}\nabla \widehat{g}),$$ (8) Here $`\mathrm{\Delta }`$ and $`\chi `$ are the modulus and phase of the order parameter, and $`f_0(ϵ)=(1/2)(1+\mathrm{tanh}(ϵ/2T))`$ is the equilibrium distribution function. According to the normalization condition $`\widehat{g}^2=1`$ for the Green’s function, the matrix $`\widehat{g}`$ can be presented as $`\widehat{g}=𝝈\cdot 𝒖`$, where $`𝝈`$ is the vector formed by the Pauli matrices. Using the well-known relations $`(𝝈\cdot 𝒂)(𝝈\cdot 𝒃)=𝒂\cdot 𝒃+i𝝈\cdot [𝒂\times 𝒃]`$, $`[\sigma _z,𝝈]=2i[𝝈\times 𝒔]`$, where $`𝒔`$ is the unit vector of “isotopic spin” directed along the $`z`$-axis in the space of Pauli matrices, we can obtain from Eqs. (3) and (8) the following equations and boundary conditions for the vector Green’s function $`𝒖`$: $$ϵ[𝒔\times 𝒖]+i\mathrm{\Delta }[𝝌\times 𝒖]=(D/2)\nabla [𝒖\times \nabla 𝒖],\qquad 𝒖^2=1,$$ (9) $$\xi _0[𝒖\times \nabla 𝒖]_{L,R}=2W[𝒖_L\times 𝒖_R],$$ (10) where $`𝝌=(\mathrm{sin}\chi ,\mathrm{cos}\chi ,0)`$ is the symbolic vector of the order parameter phase.
Singling out the component of the vector $`𝒖`$ along the direction $`𝒔`$: $`𝒖=𝒔u+i𝒗`$ ($`𝒗\cdot 𝒔=0`$), we project Eq. (9) onto the $`(x,y)`$-plane in the space of Pauli matrices: $$ϵ𝒗-\mathrm{\Delta }u𝝌=(iD/2)\nabla (u\nabla 𝒗-𝒗\nabla u),\qquad u^2-𝒗^2=1$$ (11) and introduce the unit vector $`𝝍=(\mathrm{sin}\psi ,\mathrm{cos}\psi ,0)`$ directed along $`𝒗`$: $`𝒗=𝝍v`$, where $`\psi (𝒓,ϵ)`$ is the phase of the “anomalous” Green’s function $`v`$ ($`\nabla 𝝍=[𝝍\times 𝒔]\nabla \psi `$). The obtained system of scalar equations is a possible representation of the Usadel equations: $$ϵv-\mathrm{\Delta }u\mathrm{cos}(\psi -\chi )=\frac{i}{2}D[\nabla (u\nabla v-v\nabla u)-uv(\nabla \psi )^2],$$ (12) $$\mathrm{\Delta }v\mathrm{sin}(\psi -\chi )=(iD/2)\nabla (v^2\nabla \psi ),$$ (13) $$u^2-v^2=1,$$ (14) and its solutions determine the supercurrent $$j(\mathrm{\Phi })=e\nu _FD\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}dϵ\,f_0\,\text{Im}\,(v^R)^2\nabla \psi ^R.$$ (15) Choosing the coordinate axis $`x`$ orthogonal to the contact plane $`x=0`$ ($`\chi (+0)=-\chi (-0)=\mathrm{\Phi }/2`$) and taking into account the continuity of the Green’s functions and the antisymmetry of their derivatives, we can easily obtain from Eq. (10) the boundary conditions to Eqs. (12), (13) for $`x\to +0`$: $$\xi _0(u\nabla v-v\nabla u)(0)=4Wu(0)v(0)\mathrm{sin}^2\psi (0),$$ (16) $$\xi _0\nabla \psi (0)=2W\mathrm{sin}2\psi (0).$$ (17) Far away from the junction, the behavior of the order parameter and Green’s function phases is described by the linear asymptotics corresponding to a given value of the current, $$\chi (+\mathrm{\infty })=\psi (+\mathrm{\infty })=\chi _{\mathrm{\infty }}+2p_sx,\qquad p_s=(W/\xi _0)\mathrm{sin}\mathrm{\Phi },$$ (18) i.e., of the superfluid momentum $`p_s`$, whose magnitude is determined in the main approximation by the condition of equality of the current Eq. (4) through the junction to its value $`j=\pi e\nu _FDp_s\mathrm{\Delta }\mathrm{tanh}(\mathrm{\Delta }/2T)`$ in the bulk of the metal. The Green’s functions tend to their asymptotic values satisfying Eqs. (12)–(14) for $`\psi =\chi `$ and $`\nabla u=\nabla v=0`$. Using the parametrization $`u=\mathrm{cosh}\theta ,v=\mathrm{sinh}\theta `$, which takes into account the normalization condition Eq. (14), we can put in correspondence to the vector Green’s function $`𝒖`$ the following geometrical image. The unit vector $`𝒖`$ in a normal metal is directed along the isospin axis $`z`$ (which corresponds to a purely electron or hole state of excitation of a Fermi gas), while in a superconductor this vector is deflected from the axis through an imaginary angle $`i\theta `$ and turned around it through the azimuthal angle $`\psi `$. In the spatially homogeneous case, this angle obviously coincides with the phase of the order parameter ($`\psi =\chi `$), and the scalar Green’s functions $`u`$ and $`v`$ are described by the formulas $$u^{R,A}=\mathrm{cosh}\theta _s=\frac{ϵ}{\sqrt{(ϵ\pm i0)^2-\mathrm{\Delta }^2}},\qquad v^{R,A}=\mathrm{sinh}\theta _s,$$ (19) where $`\pm i0`$ defines the position of singularities of the retarded (advanced) Green’s function in the complex plane $`ϵ`$, and the square root in Eq. (20) is defined so that $`u^{R,A}\to \pm 1`$ for $`ϵ\to +\mathrm{\infty }`$. Eqs. (12)–(14) for the Green’s functions should be supplemented by the self-consistency conditions for the modulus and phase of the order parameter: $$\mathrm{\Delta }=\lambda \int _{-\mathrm{\infty }}^{+\mathrm{\infty }}dϵ\,f_0\,\text{Re}\,v^R,$$ (20) $$\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}dϵ\,f_0\,\text{Re}\,v^R\mathrm{sin}(\psi ^R-\chi )=0,$$ (21) where $`\lambda `$ is the constant of superconducting interaction. Taking into account the current conservation law, Eqs.
(13) and (21), it is convenient to calculate the value of the current at the barrier ($`x\to +0`$) by expressing $`\nabla \psi (0)`$ in Eq. (15) with the help of Eq. (17) through the phase jump $`2\psi (0)`$: $$j(\mathrm{\Phi })=\frac{e}{2}\nu _Fv_F\mathrm{\Gamma }\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}dϵ\,f_0\,\text{Im}\,(v^R(0))^2\mathrm{sin}2\psi ^R(0),$$ (22) which allows us to single out explicitly the small parameter of the theory, i.e., the barrier transmissivity $`\mathrm{\Gamma }`$. It can easily be verified that in the main approximation, using the unperturbed values of the Green’s functions Eq. (19) and of the phase $`\psi (0)=\chi (0)=\mathrm{\Phi }/2`$, Eq. (22) leads to the result of Eq. (4). A simplifying factor in the case of a low transmissivity of the barrier is that the quantities $`\psi -\chi `$ and $`\nabla \psi `$, proportional to the current through the junction, are small (see Eqs. (17) and (13)), and hence we can omit in Eq. (12) the terms quadratic in $`W`$ and containing the phase gradients. Replacing $`\psi (0)\to \chi (0)=\mathrm{\Phi }/2`$ in the boundary conditions Eqs. (16), (17), to the same degree of accuracy we obtain the equation and the boundary conditions for the parameter $`\theta `$: $$ϵ\mathrm{sinh}\theta -\mathrm{\Delta }(x)\mathrm{cosh}\theta =(iD/2)\nabla ^2\theta ,$$ (23) $$\xi _0\nabla \theta (0)=2W\mathrm{sinh}2\theta (0)\mathrm{sin}^2\mathrm{\Phi }/2,\qquad \theta (+\mathrm{\infty })=\theta _s.$$ (24) Direct application of perturbation theory to the solution of Eq. (23) ($`\theta (x)=\theta _0+\theta _1(x)`$, $`\mathrm{\Delta }(x)=\mathrm{\Delta }_0+\mathrm{\Delta }_1(x)`$) leads to an expression for the correction $`\theta _1(x)`$ containing nonintegrable singularities at the gap boundaries and, as a consequence, to the divergence of the corresponding correction to the Josephson current Eq. (4). This is associated with the emergence of localized states of quasiparticles at a tunnel junction in the current-carrying state, mentioned in the Introduction and considered in the next section. ## III Localized states at a tunnel barrier It will be proved below that the depth of the “potential well” in the vicinity of the barrier is much larger than the scale of variation of the order parameter. Consequently, it is sufficient to confine the analysis of the behavior of the density of states to the model with a constant $`\mathrm{\Delta }`$, in which Eq. (23) has a simple solution describing the attenuation of perturbations of the Green’s functions at a distance $`\sim \xi _0`$ from the barrier: $$\mathrm{tanh}\frac{\theta (x)-\theta _s}{4}=\mathrm{tanh}\frac{\theta (0)-\theta _s}{4}\mathrm{exp}(-k_ϵ|x|),$$ (26) $$k_ϵ^2=-i\xi _0^{-2}\mathrm{sinh}^{-1}\theta _s,\qquad \text{Re}\,k_ϵ>0.$$ (27) The quantity $`\theta (0)`$ satisfies the boundary condition following from Eqs. (24) and (25): $$k_ϵ\xi _0\mathrm{sinh}\frac{\theta _s-\theta (0)}{2}=\gamma \mathrm{sinh}2\theta (0),\qquad \gamma =W\mathrm{sin}^2\frac{\mathrm{\Phi }}{2}\ll 1,$$ (28) which can be reduced to an eighth-degree algebraic equation in $`z=\mathrm{exp}\theta (0)`$: $$2z^3(z-z_s)^2=i\gamma ^2(z_s^2-1)(z^4-1)^2,\qquad z_s=\mathrm{exp}\theta _s.$$ (29) In the general case (for an arbitrary $`ϵ`$), the solution of Eq. (27) can be obtained only numerically, but the presence of the small parameter $`\gamma `$ in (26) and (27) makes it possible to apply perturbation theory.
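(The decay rate quoted in (27), with the signs as reconstructed here, follows directly from linearizing Eq. (23) at constant $`\mathrm{\Delta }`$ around $`\theta _s`$; as a sketch, for $`\delta \theta =\theta -\theta _s\ll 1`$, $$\frac{iD}{2}\nabla ^2\delta \theta =\left(ϵ\,\mathrm{cosh}\theta _s-\mathrm{\Delta }\,\mathrm{sinh}\theta _s\right)\delta \theta =\sqrt{ϵ^2-\mathrm{\Delta }^2}\,\delta \theta \;\Longrightarrow \;k_ϵ^2=-\frac{2i\sqrt{ϵ^2-\mathrm{\Delta }^2}}{D}=-i\xi _0^{-2}\mathrm{sinh}^{-1}\theta _s,$$ using $`\mathrm{cosh}\theta _s=ϵ/\sqrt{ϵ^2-\mathrm{\Delta }^2}`$, $`\mathrm{sinh}\theta _s=\mathrm{\Delta }/\sqrt{ϵ^2-\mathrm{\Delta }^2}`$ and $`\xi _0^2=D/2\mathrm{\Delta }`$.)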
Far away from the spectrum boundary, we can put $`\theta (0)=\theta _s`$ on the right-hand side of (26), which leads to the following expression for the correction to the density of states at the barrier: $$N(ϵ,0)-N_0(ϵ)=-2\gamma \,\text{Re}\left(\sqrt{i\,\mathrm{sinh}^3\theta _s}\,\mathrm{sinh}2\theta _s\right),$$ (30) which obviously becomes inapplicable for $`|ϵ|\to \mathrm{\Delta }`$, where $`|\theta _s|\to \mathrm{\infty }`$. In this region, we must apply an improved perturbation theory (IPT), putting $`|z|,|z_s|\gg 1`$ for an arbitrary (not necessarily small) value of $`z-z_s`$. This not only reduces the degree of the general Eq. (27), but also allows us to write it in a universal form which does not contain the depairing parameter $`\gamma `$: $$(y\sqrt{E}-1)^2=iy^5,$$ (32) $$y=z/(\beta \sqrt{2}),\qquad E=\beta ^2(ϵ-\mathrm{\Delta })/\mathrm{\Delta },\qquad \beta =(2\sqrt{2}\,\gamma ^2)^{-1/5}\gg 1,$$ (33) Relations Eq. (29) show that the increase in the density of states is bounded by a quantity of the order of $`\beta \sim W^{-2/5}`$ as we approach the spectrum boundary. Thus, the range of applicability of the conventional perturbation theory, Eq. (28), is determined by the condition $`(ϵ-\mathrm{\Delta })/\mathrm{\Delta }\gg \beta ^{-2}`$ and overlaps with the region of applicability $`(ϵ-\mathrm{\Delta })/\mathrm{\Delta }\ll 1`$ of the IPT. The boundary $`ϵ_{*}`$ of the spectrum (the position of the bottom of the potential well), below which the density of states vanishes, corresponds to the emergence of a purely imaginary root of Eq. (29a) at the point $`E_{*}=-(25/6)(2/3)^{1/5}\approx -3.842`$: $$ϵ_{*}(\mathrm{\Phi })=\mathrm{\Delta }[1-C(W\mathrm{sin}^2\frac{\mathrm{\Phi }}{2})^{4/5}],\qquad C=\frac{25}{3\cdot 6^{1/5}}\approx 5.824.$$ (34) The dependence of the position of the spectrum boundary on the phase jump at the junction is illustrated by Fig. 1, in which a similar dependence of the position of the Andreev level Eq. (6) in a junction between pure superconductors is shown for comparison. It should be noted that the scale of variation of $`ϵ_{*}(\mathrm{\Phi })`$ is much larger than the splitting of the Andreev level from the boundary of the continuous spectrum for the same barrier transmissivity. This is associated with the large value of the depairing parameter $`\gamma `$ in a diffusive junction as compared to the splitting parameter $`\sim \mathrm{\Gamma }`$ of the Andreev level, as well as with the large numerical value of the constant $`C`$ defining the shift of the spectrum boundary Eq. (30). Fig. 2 shows the results of a numerical calculation of the density of states at the junction on the basis of the general formula Eq. (27) for different values of the depairing parameter, which show that, in addition to the root singularity ($`\propto \sqrt{ϵ-ϵ_{*}}`$) at the spectrum boundary, the quantity $`N(ϵ)`$ has a “beak-type” root singularity at $`ϵ=\mathrm{\Delta }`$. Its physical nature is associated with the infinite increase of the attenuation length $`k_ϵ^{-1}`$ of the perturbation of the Green’s function in the bulk of the metal, Eq. (25), within the vicinity of the gap boundary. For $`ϵ_{*}<ϵ<\mathrm{\Delta }`$, the density of states decreases exponentially with increasing distance from the junction (Fig. 3), which corresponds qualitatively to the image of a potential well of depth $`\mathrm{\Delta }-ϵ_{*}`$ and width $`\sim \xi _0`$ with excitations localized in it.
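(The quoted value of $`E_{*}`$ can be recovered explicitly; a sketch: substituting a purely imaginary root $`y=-it`$, $`t>0`$, with $`E<0`$ into Eq. (29a) and demanding a double root, i.e. the point where the root detaches from the continuum, gives $$\left(t\sqrt{|E|}-1\right)^2=t^5,\qquad 2\sqrt{|E|}\left(t\sqrt{|E|}-1\right)=5t^4\;\Longrightarrow \;t^{5/2}=\frac{2}{3},\qquad |E_{*}|=\frac{25}{4}t^3=\frac{25}{6}\left(\frac{2}{3}\right)^{1/5}\approx 3.842,$$ and rewriting $`E_{*}`$ in terms of $`\gamma `$ via Eq. (29b) reproduces the constant $`C=25/(3\cdot 6^{1/5})`$ in the expression for $`ϵ_{*}(\mathrm{\Phi })`$.)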
It is well known that the Josephson current is carried through a ballistic junction by localized excitations only and can be presented in the following form: $$j(\mathrm{\Phi })=-2e\sum _n\frac{\partial ϵ_n(\mathrm{\Phi })}{\partial \mathrm{\Phi }}\mathrm{tanh}\frac{ϵ_n(\mathrm{\Phi })}{2T},$$ (35) where the index $`n`$ labels the Andreev levels. At the same time, Eq. (22) for the current, expressed in the IPT approximation in terms of the reduced variables of Eq. (29), $$j(\mathrm{\Phi })\approx I(\mathrm{\Delta })\mathrm{tanh}\frac{\mathrm{\Delta }}{2T}\mathrm{sin}\mathrm{\Phi }\int _{E_{*}(\mathrm{\Phi })}^{\mathrm{\infty }}\frac{dE}{\pi }\text{Im}\left(y^R\right)^2=j_0(\mathrm{\Phi }),$$ (36) shows that the charge transfer in a diffusive junction is performed not only by the states within the potential well ($`E<0,ϵ<\mathrm{\Delta }`$), but also by excitations with energy $`ϵ>\mathrm{\Delta }`$ in the region $`ϵ-\mathrm{\Delta }\sim \mathrm{\Delta }\beta ^{-2}`$, where the density of states differs significantly from the unperturbed value $`N_0(ϵ)`$. It should be noted in this connection that Argaman proposed an analog of Eq. (31) for a diffusive system, which can be obtained by the replacement of the energy $`ϵ_n(\mathrm{\Phi })`$ of the Andreev levels by the local value $`ϵ(\xi ,\mathrm{\Phi },x)`$ of the excitation energy for $`x=0`$, which is adiabatically deformed by the supercurrent, using instead of the discrete number $`n`$ the continuous variable $$\xi =\int _{ϵ_{*}\left(\mathrm{\Phi }\right)}^{ϵ(\xi ,\mathrm{\Phi },x)}dϵ^{\prime }N(ϵ^{\prime },\mathrm{\Phi },x)$$ (37) viz., the number of states with an energy smaller than $`ϵ`$ ($`\xi =\mathrm{\Theta }(ϵ^2-\mathrm{\Delta }^2)\sqrt{ϵ^2-\mathrm{\Delta }^2}`$ for a homogeneous superconductor). One can assume that the contributions from the bound and delocalized states to the Josephson current are taken into account simultaneously by the formula $$j(\mathrm{\Phi })=-2e\nu _F\int _0^{\mathrm{\infty }}d\xi \frac{\partial ϵ(\xi ,\mathrm{\Phi },0)}{\partial \mathrm{\Phi }}\mathrm{tanh}\frac{ϵ(\xi ,\mathrm{\Phi },0)}{2T},$$ (38) which, however, leads to correct results only in the case of a homogeneous current-carrying state (where $`\chi `$ plays the role of $`\mathrm{\Phi }`$) or a wide $`SNS`$-junction (with a width $`L\gg \xi _0`$ of the normal layer) and is inapplicable for a narrow bridge and a tunnel junction. Nevertheless, the consideration of the function $`ϵ(\xi ,\mathrm{\Phi },x)`$ is useful in these cases also, since it allows us to visualize the variation of the energy distribution of quasiparticle states in the vicinity of the junction (Fig. 4). ## IV Current–phase dependence for a junction in the second order in $`W`$ Although the modified perturbation theory for the Green’s functions in the energy representation described in the preceding section is the most physically obvious method operating with actual excitation energies, it leads to considerable formal difficulties in the calculation of corrections to the Josephson current Eq. (4). Indeed, it was shown in the previous section that the expression for $`j(\mathrm{\Phi })`$ calculated on the basis of the IPT for the Green’s functions, Eq. (32), coincides with Eq. (4), since the small IPT parameter $`\beta ^{-2}`$ cancels out as we go over to the reduced variables of Eq. (29). Thus, in order to calculate the corrections to Eq. (4) we are interested in, we must leave the approximation of Eq. (29), which describes the behavior of the Green’s functions correctly only in a narrow vicinity of the singularity in the density of states.
For this purpose, it is convenient to use the formalism of temperature Green’s functions by going over from integration over energy in Eqs. (20)–(22) to summation over the Matsubara frequencies $`\omega _n=\pi T(2n+1),n=0,\pm 1,\pm 2,\mathrm{\dots }`$: $$j(\mathrm{\Phi })=\pi e\nu _Fv_F\mathrm{\Gamma }T\sum _{\omega _n>0}\text{Re}\,v^2(0)\mathrm{sin}2\psi (0),$$ (39) $$\mathrm{\Delta }(x)=2\pi \lambda T\sum _{\omega _n>0}\text{Im}\,v(x)$$ (40) and making the substitution $`ϵ\to i\omega _n`$ in Eq. (23). This allows us to avoid divergences of the type of Eq. (28) in the perturbation theory, which, unlike the IPT, makes it possible to take into account the coordinate dependence $`\mathrm{\Delta }(x)`$. It is expedient to use as the main approximation in the asymptotic expansion $`\theta =\theta _0+\theta _1+\mathrm{\dots }`$ the “adiabatic” value of the Green’s function corresponding to the local value of $`\mathrm{\Delta }(x)=\mathrm{\Delta }+\mathrm{\Delta }_1(x)`$ ($`\mathrm{\Delta }_1(\mathrm{\infty })=0`$): $$u_0(x)=\mathrm{cosh}\theta _0(x)=\frac{\omega _n}{\stackrel{~}{\omega }_n(x)},\qquad v_0(x)=\mathrm{sinh}\theta _0(x)=\frac{\mathrm{\Delta }(x)}{\stackrel{~}{\omega }_n(x)},$$ (41) where $`\stackrel{~}{\omega }_n(x)=\sqrt{\omega _n^2+\mathrm{\Delta }^2(x)}`$. In this case, the correction $`\theta _1(x)`$ satisfies the inhomogeneous equation $$\nabla ^2\theta _1-k_\omega ^2\theta _1=-\nabla ^2\theta _0,\qquad k_\omega ^2=2\stackrel{~}{\omega }_n/D$$ (42) with the boundary conditions $`\xi _0\nabla \theta _1(+0)=2W\mathrm{sinh}2\theta _s\,\mathrm{sin}^2\mathrm{\Phi }/2`$, $`\theta _1(\mathrm{\infty })=0`$, where $`\mathrm{cosh}\theta _s=\omega _n/\stackrel{~}{\omega }_n`$ is the value of the Green’s function far away from the junction with the unperturbed value of $`\mathrm{\Delta }`$, and $`\stackrel{~}{\omega }_n=\sqrt{\omega _n^2+\mathrm{\Delta }^2}`$. The self-consistency condition for $`\mathrm{\Delta }_1(x)`$ following from Eq. (20), $$\mathrm{\Delta }_1(q)\,T\sum _{\omega _n>0}\frac{\mathrm{\Delta }^2}{\stackrel{~}{\omega }_n^3}=T\sum _{\omega _n>0}\frac{\omega _n}{\stackrel{~}{\omega }_n}\text{Im}\,\theta _1(i\omega _n,q)$$ (43) completes the system of equations for determining the corrections $`\theta _1`$ and $`\mathrm{\Delta }_1`$, whose solution in the Fourier representation has the form $$\mathrm{\Delta }_1(q)=-8W\mathrm{\Delta }\frac{B(q)}{\xi _0A(q)}\mathrm{sin}^2\mathrm{\Phi }/2,$$ (45) $$\theta _1(i\omega _n,q)=-8W\mathrm{\Delta }\frac{i\omega _n}{\stackrel{~}{\omega }_n}\frac{1}{q^2+k_\omega ^2}\frac{A(0)}{\xi _0A(q)}\mathrm{sin}^2\mathrm{\Phi }/2,$$ (46) $$A(q)=A(0)+q^2B(q),\qquad A(0)=2\pi T\sum _{\omega _n>0}\frac{\mathrm{\Delta }^2}{\stackrel{~}{\omega }_n^3},$$ (48) $$B(q)=2\pi T\sum _{\omega _n>0}\frac{\omega _n^2}{\stackrel{~}{\omega }_n^3}\frac{1}{q^2+k_\omega ^2}.$$ (49) $`(\theta _1(i\omega _n,x),\mathrm{\Delta }_1(x))=\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}\frac{dq}{2\pi }e^{iqx}(\theta _1(i\omega _n,q),\mathrm{\Delta }_1(q)).`$ As regards the correction to the asymptotic value Eq. (18) of the phase $`\psi (x)`$ of the Green’s function, it is equal to zero in this approximation. In order to prove this, we introduce the quantity $`\phi =\psi -\chi \ll 1`$, which, according to Eq. (13), obeys the equation $$\nabla ^2\phi -k_\omega ^2\phi =\nabla ^2\chi _1,$$ (50) where $`\chi _1=\chi (x)-\chi (\mathrm{\infty })`$ is the correction to Eq. (18) localized near the junction. Taking into account the boundary condition $`\phi (0)=\chi _1(0)`$ following from Eqs.
(17) and (18), we find that this equation has the simple solution $`\phi (i\omega _n,q)=q^2\chi _1(q)/(q^2+k_\omega ^2)`$, which leads, after substitution into the self-consistency condition Eq. (21), to the homogeneous integral equation for $`\chi _1(q)`$: $$T\sum _{\omega _n>0}\frac{\mathrm{\Delta }}{\stackrel{~}{\omega }_n}\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}dq\frac{q^2\mathrm{cos}qx}{q^2+k_\omega ^2}\chi _1(q)=0.$$ (51) The only nonsingular solution of Eq. (43) is $`\chi _1(q)\equiv 0`$, which proves the absence of a correction to the Josephson current due to the deviation of the behavior of the phases of the order parameter and Green’s functions from the linear law Eq. (18). This result can be explained as follows. The correction $`\chi _1(x)`$ is obviously of the order of the small correction $`p_{s1}(x)`$ to the constant value $`p_s`$ of Eq. (18) in the vicinity of the junction, which ensures the conservation of the current upon a change in $`N(ϵ)`$ and $`\mathrm{\Delta }`$. Since $`p_s\sim W`$, the correction to this quantity, and hence $`\chi _1(x)`$ and $`\phi `$, have a higher order of smallness ($`\sim W^2`$) than the corrections of the order of $`W`$ we are interested in. Substituting Eqs. (40), (41) into Eq. (22), we obtain the required correction to the Josephson current: $`\delta j=j(\mathrm{\Phi })-j_0(\mathrm{\Phi })={\displaystyle \frac{4T}{\mathrm{\Delta }}}I(\mathrm{\Delta })\mathrm{sin}\mathrm{\Phi }{\displaystyle \sum _{\omega _n>0}}\text{Re}\left(-v^2+{\displaystyle \frac{\mathrm{\Delta }^2}{\stackrel{~}{\omega }_n^2}}\right)=`$ $$=I(\mathrm{\Delta })W_0Z(T)\left(\mathrm{sin}\mathrm{\Phi }-\frac{1}{2}\mathrm{sin}2\mathrm{\Phi }\right),$$ (52) $$Z(T)=\frac{16}{\pi }\sqrt{\mathrm{\Delta }\mathrm{\Delta }_0}\,T\sum _{\omega _n>0}\frac{\omega _n^2}{\stackrel{~}{\omega }_n^4}\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}\frac{dk}{k^2+\stackrel{~}{k}_\omega ^2}[1+\frac{\stackrel{~}{k}_\omega ^2B(k)}{A(k)}],$$ (53) where $`\stackrel{~}{k}_\omega =\stackrel{~}{\omega }_n/\mathrm{\Delta }`$, $`A(k)`$ and $`B(k)`$ are defined by Eqs. (41) upon the substitution $`k_\omega \to \stackrel{~}{k}_\omega `$, and $`W_0`$ and $`\mathrm{\Delta }_0`$ are the values of $`W`$ and $`\mathrm{\Delta }`$ at $`T=0`$. At low temperatures ($`T\ll \mathrm{\Delta }`$), the summation over $`\omega _n`$ in Eqs. (41) and (45) can be replaced by integration with respect to the continuous variable $`\omega `$: $`A(0)=1,\qquad B(k)={\displaystyle \int _0^{\mathrm{\infty }}}{\displaystyle \frac{\mathrm{tanh}^2v\,dv}{k^2+\mathrm{cosh}v}}=`$ $`={\displaystyle \frac{1}{k^4}}\left({\displaystyle \frac{\pi }{2}}-2\sqrt{1-k^4}\,\mathrm{arctan}\sqrt{{\displaystyle \frac{1-k^2}{1+k^2}}}-k^2\right),`$ which leads to the following asymptotic value of the function $`Z(T)`$ for $`T\to 0`$: $$Z(T)=\frac{8}{\pi ^2}\int _0^{\mathrm{\infty }}dk\left[\frac{\pi k^2}{(1+k^2)^{9/4}}+\frac{2B^2(k)}{1+k^2B(k)}\right]\approx 2.178.$$ (54) In the vicinity of the critical temperature ($`\mathrm{\Delta }\ll T`$), the quantity $`A(0)\approx 7\zeta (3)\mathrm{\Delta }^2/4\pi ^2T^2`$ is small, and the main contribution to the integral of Eq. (45) comes from the region of small wave vectors $`k\sim \mathrm{\Delta }/T`$ corresponding to damping of perturbations at large distances of the order of $`\xi (T)\propto (T_c-T)^{-1/2}`$.
This allows us to replace the function $`B(k)`$ by its value $`\pi \mathrm{\Delta }/4T`$ for $`k=0`$: $`Z(T)={\displaystyle \frac{32\sqrt{\mathrm{\Delta }\mathrm{\Delta }_0}}{\pi ^3T}}{\displaystyle \sum _{n\ge 0}}{\displaystyle \frac{1}{(2n+1)^2}}{\displaystyle \int _0^{\mathrm{\infty }}}{\displaystyle \frac{B(0)\,dk}{A(0)+k^2B(0)}}=`$ $$=2\pi \sqrt{\frac{\pi \mathrm{\Delta }_0}{7\zeta (3)T_c}}\approx 5.099.$$ (55) The results of numerical calculations of the $`Z(T)`$ dependence within the entire temperature range $`0<T<T_c`$ are presented in Fig. 5. Similarly, we can calculate, by using Eqs. (40) and (41), the asymptotic values of the correction $`\mathrm{\Delta }_1(0)`$ to the unperturbed value of the order parameter at the junction: $$\frac{\mathrm{\Delta }_1}{\mathrm{\Delta }_0}=-\alpha (T)W_0\mathrm{sin}^2\frac{\mathrm{\Phi }}{2},\qquad \alpha (0)=3.037,\qquad \alpha (T_c)=5.782.$$ (56) The dependence of the order parameter $`\mathrm{\Delta }(0)`$ on the phase jump at the junction at $`T=0`$, presented in Fig. 1, shows that the main contribution to the energy gap suppression comes from the depairing mechanism considered in Sec. 3, and the change in the order parameter is smaller than the variation of $`ϵ_{*}(\mathrm{\Phi })`$. The structure of the phase and temperature dependences of the correction to the Josephson current Eq. (44) in a diffusive superconductor virtually coincides with expression Eq. (1) for a junction between pure metals, except for the following circumstance noted in the Introduction: the parameter of the expansion of $`j(\mathrm{\Phi })`$ in the transmissivity of the junction for $`l\ll \xi _0`$ is not the tunneling probability $`\mathrm{\Gamma }`$, but the considerably larger parameter $`W`$, Eq. (5). This allows one to observe higher harmonics of the current–phase dependence in a diffusive tunnel junction with a comparatively high resistance. Koops et al. apparently reported the first experimental results in this field. The theory discussed above describes the current–phase dependence for a diffusive Josephson junction in the whole temperature range $`0\le T<T_c`$, except for a narrow neighborhood of $`T_c`$ in which $`\mathrm{\Delta }/T_c\sim W_0`$ ($`\mathrm{\Delta }/T_c\sim \mathrm{\Gamma }`$ in a pure superconductor), where the magnitude of the corrections Eqs. (44) and (1) becomes comparable with $`j_0(\mathrm{\Phi })`$, while the correction Eq. (48) to $`\mathrm{\Delta }`$ becomes of the order of its unperturbed value. This means that in the definition Eq. (5) of the parameter $`W`$ near $`T_c`$, the coherence length $`\xi _0`$ describing the characteristic scale of spatial variations of the Green’s functions and the density of states should be replaced by the characteristic length $`\xi (T)`$ of variation of the order parameter (the healing length) in the Ginzburg–Landau theory, whose order of magnitude is the same as $`\xi _0`$ far away from $`T_c`$. Taking into account the results of calculations of $`j(\mathrm{\Phi })`$ for a pure superconductor in the vicinity of $`T_c`$, we can obtain the following interpolation estimate of the effective transmissivity $`W`$ suitable for any temperatures and mean free paths: $$W\sim \mathrm{\Gamma }\xi (T)\left(\frac{1}{l}+\frac{1}{\xi (0)}\right).$$ (57) As we approach $`T_c`$, the value of $`W`$ increases infinitely, which is accompanied by a decrease in the phase jump for a given external current bounded by its critical value.
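(As an aside, the two ingredients of the near-$`T_c`$ estimate Eq. (55) are elementary; a sketch: $$\int _0^{\mathrm{\infty }}\frac{B(0)\,dk}{A(0)+k^2B(0)}=\frac{\pi }{2}\sqrt{\frac{B(0)}{A(0)}},\qquad \sum _{n\ge 0}\frac{1}{(2n+1)^2}=\frac{\pi ^2}{8},$$ so that with $`A(0)\approx 7\zeta (3)\mathrm{\Delta }^2/4\pi ^2T^2`$ and $`B(0)=\pi \mathrm{\Delta }/4T`$ the $`\mathrm{\Delta }`$-dependence cancels and the prefactor collapses to $`2\pi \sqrt{\pi \mathrm{\Delta }_0/7\zeta (3)T_c}`$; numerically, with the BCS ratio $`\mathrm{\Delta }_0\approx 1.76\,T_c`$, this gives 5.099 as quoted.)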
Thus, in the 1D geometry, for an arbitrarily large normal resistance of the junction there exists a narrow region near $`T_c`$ in which the phase difference of the order parameter across the junction remains small for currents up to the order of the bulk critical current. The authors are grateful to T.N. Antsygina and V.S. Shumeiko for fruitful discussions. This research was supported by the Foundation for Fundamental Studies at the National Academy of Sciences of the Ukraine (grant No. 2.4/136).
# Directed and Elliptic Flow ## Abstract We compare microscopic transport model calculations to recent data on the directed and elliptic flow of various hadrons in 2–10$`A`$ GeV Au+Au and Pb($`158A`$ GeV)Pb collisions. For the Au+Au excitation function a transition from squeeze-out to an in-plane enhanced emission is consistently described with mean-field potentials corresponding to one incompressibility. For the Pb($`158A`$ GeV)Pb system the elliptic flow prefers in-plane emission both for protons and pions; the directed flow of protons is opposite to that of the pions, which exhibit anti-flow. Strong directed transverse flow is present for protons and $`\mathrm{\Lambda }`$’s in Au($`6A`$ GeV)Au collisions as well. Both for the SPS and the AGS energies the agreement between data and calculations is remarkable. Recently, an enormous amount of new detailed data on collective flow in relativistic heavy-ion collisions has been reported . The excitation function of transverse collective flow is the earliest predicted signature for probing compressed nuclear matter . Its sensitivity to the equation of state (EoS) can be used to search for abnormal matter states and phase transitions . In the fluid dynamical approach, the transverse collective flow is directly linked to the pressure $`P(\rho ,S)`$ (depending on the density $`\rho `$ and the entropy $`S`$) of the matter in the reaction zone: one can get a physical feeling for the generated collective transverse momentum $`\vec{p}_x`$ by writing it as an integral of the pressure acting on a surface and over time : $$\vec{p}_x=\int _t\int _AP(\rho ,S)\,dA\,dt.$$ (1) Here d$`A`$ represents the surface element between the participant and spectator matter, and the total pressure is the sum of the potential pressure and the kinetic pressure. The transverse collective flow thus depends directly on the equation of state, $`P(\rho ,S)`$. Collective flow had originally been predicted by nuclear shock wave models and ideal fluid dynamics (NFD) . Microscopic models such as VUU (Vlasov–Uehling–Uhlenbeck) and QMD (Quantum Molecular Dynamics) have predicted smaller flow than ideal NFD. These microscopic models agree roughly with viscous NFD and with data . Flow was first discovered at the BEVALAC for charged particles by the Plastic-Ball and Streamer Chamber collaborations , and at SATURNE by the DIOGENE collaboration . It has been studied extensively at GSI by the FOPI , LAND , TAPS , and KaoS collaborations, and by the EOS-TPC collaboration at LBNL and at MSU . Two different signatures of collective flow have been predicted: * the bounce-off of compressed matter in the reaction plane and * the squeeze-out of the participant matter out of the reaction plane. The most strongly stopped, compressed matter around mid-rapidity is seen directly in the squeeze-out . A strong dependence of these collective effects on the nuclear equation of state is predicted . For higher beam energies, however, projectile and target spectators decouple quickly from the reaction zone, giving way to a preferential emission of matter in the reaction plane, even at mid-rapidity . An excitation function of the squeeze-out at midrapidity shows the transition from out-of-plane enhancement to preferential in-plane emission. At 10.6 $`A`$ GeV collective flow has recently been discovered by the E877 collaboration by measuring $`\mathrm{d}v_1/\mathrm{d}\eta =\mathrm{d}(E_x/E_T)/\mathrm{d}\eta `$ for different centrality bins.
The EOS group has measured the flow excitation function for Au+Au at the AGS in the energy range between 2.0 and 8 GeV/nucleon . Their data show a smooth decrease in $`\langle p_x\rangle `$ from 2 to 8 GeV/nucleon and are corroborated by measurements of the E917 collaboration at 8 and 10.6 GeV/nucleon . The EOS collaboration has also measured a squeeze-out excitation function (sometimes also termed “elliptic flow” ), indicating a transition from out-of-plane to in-plane enhancement around 5 GeV/nucleon . At the CERN/SPS, the first observations of the predicted directed transverse flow component have been reported by the WA98 collaboration, using the Plastic Ball detector located at target rapidity for event-plane reconstruction. They show a strong directed flow signal for protons and “antiflow” for pions, both enhanced for particles with high transverse momenta. Similar findings have also been reported by the NA49 collaboration, whose larger acceptance allows for a more detailed investigation . Due to its direct dependence on the EoS, $`P(\rho ,T)`$, flow excitation functions can provide unique information about phase transitions: the formation of abnormal nuclear matter, e.g., yields a reduction of the collective flow . A directed flow excitation function as a signature of the phase transition into the QGP has been proposed by several authors . A microscopic analysis showed that the existence of a first-order phase transition can show up as a reduction in the directed transverse flow . For first-order phase transitions, the pressure remains constant (for $`T=\mathrm{const}`$) in the region of phase coexistence. This results in vanishing shock velocities $`v_f=0,v_s=0`$ and velocity of sound $`c_s=\sqrt{\partial p/\partial \epsilon }`$ . The expansion of the system is driven by the pressure gradients; therefore the expansion depends crucially on $`c_s^2`$. Matter in the mixed phase expands less rapidly than a hadron gas or a QGP at the same energy density and entropy. In the case of rapid changes in the EoS without a phase transition, the pressure gradients are finite, but still smaller than for an ideal-gas EoS, and therefore the system expands more slowly . This reduction of $`c_s^2`$ in the transition region is commonly referred to as a softening of the EoS. Here the flow will temporarily slow down (or possibly even stall). This hinders the deflection of spectator matter (the bounce-off) and, therefore, causes a reduction of the directed transverse flow in semi-peripheral collisions. The softening of the EoS should be observable in the excitation function of the transverse directed flow of baryons. An observation of the predicted local minimum in the excitation function of the directed transverse flow would be an important discovery and an unambiguous signal for a phase transition in dense matter. Its experimental measurement would serve as strong evidence for a QGP, if that phase transition is of first order. An illustration of the in-plane elliptic flow is given by the following picture: two colliding nuclei create a stopped overlap region. At higher bombarding energies ($`E_{\mathrm{lab}}\gtrsim 10A`$ GeV) the spectators rapidly leave this interaction zone. The remaining interaction zone expands almost freely, with a surface such that in-plane emission is preferred. It is therefore also the interplay between the timescales of the passing time of the spectators and the expansion time of the dense, stopped interaction zone which determines the time-integrated elliptic flow signal.
Indeed, when following the elliptic flow as a function of reaction time, the early out-of-plane squeeze is superposed by later preferential in-plane expansion . So the sign of the elliptic flow changes twice as a function of incident energy: at intermediate energies ($`E_{\mathrm{lab}}\sim 100A`$ MeV) a change from in-plane emission (rotation-like behaviour) to squeeze-out is predicted, whereas at relativistic energies ($`E_{\mathrm{lab}}\sim 5A`$ GeV) the opposite change from squeeze-out to in-plane enhancement is observed. Fig. 1 shows the excitation function of the in-plane/squeeze-out flow parameter $`v_2`$. This is observed as $`90^{\circ }`$ peaks in the azimuthal angular distribution $`dN/d\mathrm{\Phi }`$ of nucleons at midrapidity for Au+Au collisions, with the Fourier expansion $$\frac{dN}{d\mathrm{\Phi }}=v_0\left(1+2v_1\mathrm{cos}(\mathrm{\Phi })+2v_2\mathrm{cos}(2\mathrm{\Phi })\right).$$ (2) $`v_0`$ is for normalization only, while $`v_1`$ characterizes the directed in-plane flow. While $`v_2>0`$ indicates in-plane enhancement, $`v_2<0`$ characterizes the squeeze-out perpendicular to the event plane. Data by the E895 and E877 collaborations (stars) and UrQMD calculations are displayed. The UrQMD calculations are performed within the cascade mode (circles) as well as with mean-field potentials (squares). A detailed survey of the UrQMD model and its underlying concepts is available . Clearly, the experimental observation of a transition from squeeze-out to a preferential in-plane emission can only be described with the potentials included. The cascade simulations do not show the squeeze-out, due to the lack of the strongly repulsive nucleonic potential at this energy. The data are consistently described with potentials corresponding to an equation of state with one incompressibility ($`K=380\,\mathrm{MeV}`$), independent of the incident energy. This is in contrast to findings in , where a softening of the equation of state with incident energy is deduced from the comparison to transport model calculations . Transverse flow has been discovered even at the highest energies at the SPS, for the Pb+Pb system at 158$`A`$ GeV, both by the NA49 and the WA98 collaborations. Here, UrQMD calculations are compared to the flow parameters $`v_1`$ and $`v_2`$, which can also be expressed as $$v_1=\left\langle \frac{p_x}{p_t}\right\rangle ,\qquad v_2=\left\langle \left(\frac{p_x}{p_t}\right)^2-\left(\frac{p_y}{p_t}\right)^2\right\rangle .$$ (3) Fig. 2 shows the rapidity dependence of the proton flow (upper half) and of the flow of charged pions (lower half). Full symbols are UrQMD calculations, while open symbols are experimental data . The data are reflected at midrapidity ($`y_{\mathrm{lab}}\approx 2.9`$). In the reflection, the signs of the $`v_1`$ values have been reversed in the backward hemisphere, but not the $`v_2`$ values . For the directed transverse flow ($`v_1`$), both the data and the UrQMD results exhibit a characteristic S-shaped curve. The elliptic flow values ($`v_2`$) seem to be slightly peaked at medium rapidity ($`y_{\mathrm{lab}}\approx 4`$–4.5 and $`y_{\mathrm{lab}}\approx 1.5`$–2), both for pions and protons, contrary to what was inferred in . Both protons and pions show an in-plane enhanced emission ($`v_2>0`$). The protons show positive directed flow, whereas the pion flow exhibits the opposite, negative sign, caused by absorption and rescattering effects. The overall agreement between data and calculations looks rather good.
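(A minimal illustration, entirely our own and not the analysis code of the experiments, of how the coefficients of Eqs. (2) and (3) are estimated from particle momenta, assuming the reaction plane is the $`x`$–$`z`$ plane:)

```python
# Estimate v1 and v2 from transverse momenta in the reaction-plane frame.
import numpy as np

def flow_coefficients(px, py):
    """v1 = <px/pt> (= <cos phi>) and v2 = <(px/pt)^2 - (py/pt)^2> (= <cos 2phi>)."""
    pt = np.hypot(px, py)
    return np.mean(px / pt), np.mean((px**2 - py**2) / pt**2)

# toy usage: sample an azimuthal distribution of the form of Eq. (2)
rng = np.random.default_rng(0)
phi = rng.uniform(-np.pi, np.pi, 200000)
w = 1 + 2 * 0.05 * np.cos(phi) + 2 * 0.02 * np.cos(2 * phi)  # v1=0.05, v2=0.02
phi = phi[rng.uniform(0, w.max(), phi.size) < w]             # accept-reject
print(flow_coefficients(np.cos(phi), np.sin(phi)))           # ~ (0.05, 0.02)
```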
Fig. 2 shows the rapidity dependence of the proton flow (upper half) and of the flow of charged pions (lower half). Full symbols are UrQMD calculations, while open symbols are experimental data. The data are reflected at midrapidity ($`y_{\mathrm{lab}}\approx 2.9`$); in this reflection the signs of the $`v_1`$ values have been reversed in the backward hemisphere, but not the $`v_2`$ values. For the directed transverse flow ($`v_1`$), both the data and the UrQMD results exhibit a characteristic S-shaped curve. The elliptic flow values ($`v_2`$) seem to be slightly peaked at medium rapidity ($`y_{\mathrm{lab}}\approx `$4–4.5 and $`y_{\mathrm{lab}}\approx `$1.5–2), both for pions and protons, contrary to what was inferred previously. Both protons and pions show in-plane enhanced emission ($`v_2>0`$). The proton directed flow is positive, whereas the pion flow exhibits the opposite, negative sign, caused by absorption and rescattering effects. The overall agreement between data and calculations is rather good. Discrepancies are seen dominantly for the high-rapidity pion directed flow ($`v_1`$), which is too strong in the calculations compared to the data, which show a saturation of $`v_1`$ for $`y_{\mathrm{lab}}>4`$ and $`y_{\mathrm{lab}}<2`$. The proton directed flow also seems to be slightly too strong at high rapidity. The elliptic flow shows good agreement in both the sign and the magnitude of $`v_2`$ ($`v_2\approx 5\%`$ for protons and $`v_2\approx 2\%`$ for pions).

Strong directed flow has also been discovered in the energy region where the elliptic flow disappears. Fig. 3 shows the directed transverse flow $`p_x/m`$ as a function of the normalized rapidity for protons (squares) and $`\mathrm{\Lambda }`$’s (circles) in Au$`(\mathrm{\hspace{0.17em}6}A\mathrm{GeV})`$Au collisions. Open symbols are preliminary data by the E895 collaboration and full symbols display the results of UrQMD calculations. The proton data are reflected at midrapidity. Both protons and $`\mathrm{\Lambda }`$’s show strong positive directed flow. The proton flow is larger than the $`\mathrm{\Lambda }`$ flow close to midrapidity ($`|y/y_p|\lesssim 0.6`$), both in the data and in the UrQMD calculations. At target/projectile rapidity the $`\mathrm{\Lambda }`$ flow is predicted to exhibit a magnitude similar to that of the protons. The species-dependent flow pattern clearly demonstrates a complex non-hydrodynamic behaviour which seems to rule out simple fireball+flow models.

In summary, recent data on the collective flow in heavy ion collisions at the SPS and AGS have been compared to UrQMD calculations. The excitation function of the elliptic flow at midrapidity for the Au+Au system shows a transition from the squeeze-out to an in-plane enhancement. The data agree with the calculations performed with an equation of state with a single incompressibility; therefore, a softening of the equation of state cannot be deduced from this comparison. The elliptic flow at the SPS for Pb+Pb collisions shows in-plane enhancement, both for protons and pions, over the full rapidity range. The UrQMD results show complete agreement with the data. The positive directed flow of protons is opposite to the directed flow of pions, which show an anti-flow. While good agreement exists around midrapidity, the pion flow is too strong in the calculations at high rapidities. This seems to be due to the high-momentum tails of the pion transverse momentum distribution and will be investigated in a forthcoming publication. The directed proton flow also seems to be slightly overestimated at high rapidities by the UrQMD results. The comparison of Au$`(\mathrm{\hspace{0.17em}6}A\mathrm{GeV})`$Au collisions demonstrates that strong directed flow is present for protons and $`\mathrm{\Lambda }`$’s, where the $`\mathrm{\Lambda }`$’s show less flow than the protons around midrapidity. At higher rapidities the $`\mathrm{\Lambda }`$ flow is predicted to show a magnitude similar to the proton flow. The species-dependent flow patterns illustrate the complex collision dynamics and demonstrate the necessity of highly non-trivial microscopic transport models for an adequate description of relativistic heavy ion collisions. ###### Acknowledgements. This work has been supported in part by BMBF, DFG, GSI and the Graduiertenkolleg ’Experimentelle und Theoretische Schwerionenphysik’. S.A.B. is supported in part by the Alexander von Humboldt Foundation through a Feodor Lynen Fellowship, and by DOE grant DE-FG02-96ER40945. S. S. and M. B. thank the Josef Buchmann Foundation for support.
no-problem/9903/astro-ph9903122.html
ar5iv
text
# The nature of the extreme kinematics in the extended gas of high redshift radio galaxies. ## 1 Introduction The existence of high velocities (FWHM$`>`$1000 km s<sup>-1</sup>) (McCarthy et al. 1996) in the extended gas (EELR) of high redshift ($`z>`$2) radio galaxies (HZRG) is in contrast with the more relaxed kinematics observed in the majority of low redshift radio galaxies (FWHM$`<`$400 km s<sup>-1</sup>) (Tadhunter et al. 1989). The nature of such extreme kinematic motions is not well understood. We investigate this issue here by studying the kinematics of the extended gas in a small sample of 4 distant active galaxies ($`z>`$2): MRC1558-003, MRC2025-218 and MRC2104-242 (radio galaxies) and SMM02399-0136 (a hyperluminous type 2 active galaxy with very weak radio emission). ## 2 Observations and data reduction The spectroscopic observations were carried out on the nights 1997 July 3-5 and 1998 July 25-27 using the EMMI multi-purpose instrument at the NTT (New Technology Telescope) in La Silla Observatory (ESO-Chile). The detector was a Tektronix CCD with 2048$`\times `$2048 pixels of size 24 $`\mu `$m, resulting in a spatial scale of 0.27 arcsec per pixel. We used EMMI in RILD spectroscopic mode (Red Imaging and Low Dispersion Spectroscopy). We used the same grism (#3) for all objects. This has a blaze wavelength of 4600 Å, dispersion 5.9 Å/pixel and wavelength range 4000-8300 Å. The slit was aligned with the radio axis for the three radio galaxies. We positioned the slit along the two main optical components of SMM02399-0136 (L1 and L2, adopting the nomenclature of Ivison et al. 1998 \[IV98 hereafter\]). A log of the spectroscopic observations is shown in Table 1. Standard data reduction techniques were applied using IRAF software (see Villar-Martín et al. 1998 for a more detailed description). ## 3 The fitting procedure In order to study the kinematics of the gas, we fitted the emission lines with a Gaussian profile at every spatial position (pixel). Several spatial pixels were added together where the emission was too faint. We used the Starlink package DIPSO for this purpose. The FWHM, flux and central wavelength were measured from the Gaussian fitted to the line profile. The FWHM was corrected in quadrature for instrumental broadening (the instrumental profiles in the observed frame are given in Table 1). Single Gaussians did not always provide a perfect fit, and underlying broad wings were sometimes present. This is probably due to the presence of several kinematic components and/or absorption of Ly$`\alpha `$ by neutral hydrogen. To eliminate uncertainties due to the second mechanism, we also present the result of the fit for the second strongest emission line, CIV$`\lambda `$1550, which is not susceptible to hydrogen absorption. At the signal-to-noise of the data, we are confined to using single Gaussian fits to the lines, a procedure which is the same as that followed by Tadhunter et al. (1989). A single Gaussian fit will (a) lose any information about multiple components and (b) neglect any possible weak broad underlying wings (as is observed in MRC2104-242, see Fig. 1, left cloud), since the fit will be optimized for the dominant part of the line. However, the main goal of this paper does not require such a precise analysis. The broad wing in MRC2104-242 will have little influence on the fit, which will be dominated by the strong, narrower component.
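As an illustration of this step (a sketch with synthetic, invented numbers, not the DIPSO procedure actually used; the real instrumental FWHMs are those listed in Table 1), a single-Gaussian fit followed by the quadrature correction $`\mathrm{FWHM}_{\mathrm{true}}=\sqrt{\mathrm{FWHM}_{\mathrm{obs}}^2-\mathrm{FWHM}_{\mathrm{instr}}^2}`$ might look as follows:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, x0, sigma):
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2)

# Synthetic emission-line spectrum (all numbers are placeholders)
wav = np.arange(4000.0, 8300.0, 5.9)                 # grism #3 sampling [A]
rng = np.random.default_rng(1)
spec = gauss(wav, 1.0, 6000.0, 12.0) + rng.normal(0, 0.05, wav.size)

popt, _ = curve_fit(gauss, wav, spec, p0=[1.0, 6000.0, 10.0])
fwhm_obs = 2.3548 * abs(popt[2])                     # 2 sqrt(2 ln 2) * sigma

fwhm_instr = 14.0        # placeholder instrumental profile [A] (see Table 1)
fwhm_corr = np.sqrt(fwhm_obs**2 - fwhm_instr**2)     # quadrature correction
print(f"FWHM: observed {fwhm_obs:.1f} A -> corrected {fwhm_corr:.1f} A")
```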
## 4 Results We present in Figs. 1 to 4 the results of our analysis for the 4 targets in the sample. The upper panel in each figure is the 2-D spectrum of the Ly$`\alpha `$ spectral region with the dispersion in $`\lambda `$ running vertically. The middle panel presents the spatial variation of the FWHM and the bottom panel shows the spatial variation of the velocity shift of the Ly$`\alpha `$ emission (open circles) across the nebula. The 3 panels in each figure have the same spatial scale and are aligned so that vertical lines join the same spatial positions. The FWHM of CIV$`\lambda `$1550 is also plotted for comparison (solid triangles). * MRC2104-242 (Fig. 1): Ly$`\alpha `$ shows a bimodal distribution and is extended over $`\sim `$12 arcsec along the slit aligned with the radio axis. The two blobs lie in between the radio lobes (McCarthy et al. 1990). The bimodal distribution is also apparent in CIV. The two clumps present high FWHM values (1100 km s<sup>-1</sup> and 900 km s<sup>-1</sup>, respectively, for the spatially integrated spectra) and are shifted by $`\sim `$500 km s<sup>-1</sup> (consistent with McCarthy et al. 1990, Koekemoer et al. 1996). Kinematic substructure is observed in the two blobs. The velocity curve (Fig. 1, bottom panel) is rather flat across each blob. CIV is also extended and presents high FWHM values ($`\sim `$700 km s<sup>-1</sup>, see Fig. 1). * MRC2025-218 (Fig. 2): Ly$`\alpha `$ shows a bimodal distribution and is extended over $`\sim `$4.5 arcsec. This structure lies between the radio lobes (Pentericci et al. 1998). Continuum is detected in our spectra. Large and rather constant FWHM values are measured across the two components ($`\sim `$1200 km s<sup>-1</sup> and 700 km s<sup>-1</sup>, respectively). The velocity curve is rather steep across the nebula, varying smoothly over a range of $`\sim `$600 km s<sup>-1</sup>. CIV is also extended and presents a FWHM similar to that of Ly$`\alpha `$. * MRC1558-003 (Fig. 3): Ly$`\alpha `$ is extended over $`\sim `$15 arcsec. A bright component is detected, as well as diffuse, very extended emission which shows large velocity widths at $`\sim `$10 arcsec from the main component. The optical (Röttgering et al. 1994) and radio astrometry (Röttgering et al. 1996, Rhee et al. 1996) locate the high velocity region several arcsec beyond the radio structures. CIV is also extended and presents very large FWHM within the high velocity region, consistent with the Ly$`\alpha `$ measurement. There is no apparent pattern in the velocity curve of the ionized gas. * SMM02399-0136 (Fig. 4): This object is a hyperluminous active galaxy, gravitationally lensed by a foreground cluster (IV98). It consists of two main optical sources, L1 and L2. The radio emission is very weak, below the detection thresholds of most radio surveys. The radio, submm and optical properties are consistent with a scenario where L1 contains an active nucleus and L2 is an interacting companion. The system is undergoing strong starburst activity. We detect Ly$`\alpha `$ emission across $`\sim `$20 arcsec. Two main components are revealed by our spectra, coincident with L1 and L2. Ly$`\alpha `$ is relatively broad in L1 (FWHM$`\sim `$1800 km s<sup>-1</sup>) and narrower in L2 (FWHM$`\sim `$300-700 km s<sup>-1</sup>). A high velocity region (FWHM$`\sim `$1500 km s<sup>-1</sup>) is detected at the border of L2. A spectrally unresolved region lies at $`\sim `$12 arcsec from L1. The four objects present complex kinematics and show that there is a large variety of kinematic behaviour in high redshift active galaxies.
All objects present certain common characteristics: * High velocities (FWHM$`>`$1000 km s<sup>-1</sup>) in the extended gas * A velocity shift of the Ly$`\alpha `$ emission across the nebula varying over a range $`<`$700 km s<sup>-1</sup>, although the velocity curves are rather different from object to object. Similar values are observed in low redshift radio galaxies (Tadhunter et al. 1989) * The presence of at least two different kinematic components which seem to be spatially distinct. Such components look like individual clumps in the case of MRC2025-218, MRC2104-242 and SMM02399-0136. Diffuse and fainter line emission is present in the spectra of SMM02399-0136 and MRC1558-003. Narrow-band Ly$`\alpha `$ images also show diffuse Ly$`\alpha `$ emission in MRC2025-218 and MRC2104-242, in addition to the brightest components (McCarthy et al. 1990, Pentericci et al. 1998). ## 5 Discussion High velocities have been observed in the EELR of many HZRG (McCarthy et al. 1996). The alignment between the radio and optical structures (McCarthy et al. 1987, Chambers et al. 1987) and the anticorrelation between the size of the radio source and the velocity dispersion found for HZRG (van Ojik 1995) suggest that the jet is interacting with the ambient gas. Studies of radio galaxies at intermediate redshift with clear signs of such interactions, as well as hydrodynamical simulations, show that this process can produce large FWHM ($`>`$1000 km s<sup>-1</sup>) (e.g. Villar-Martín et al. 1999, Clark et al. 1997). Some HZRG show clear evidence for jet-cloud interactions (e.g. van Ojik et al. 1996), and this process is surely having an effect in some high redshift radio galaxies. This could also be the case for MRC2025-218 and MRC2104-242, which show the alignment effect (McCarthy et al. 1990) and large line widths inside the radio structures. However, we have also measured high velocities in the EELR of MRC1558-003 beyond the radio structures (Fig. 3) and in the extended gas of the galaxy-galaxy interacting system SMM02399-0136. Bremer et al. (1992) reported the detection of extended Ly$`\alpha `$ emission in the radio quiet quasar 0055-264 ($`z=`$3.66) with FWHM$`\sim `$1000 km s<sup>-1</sup>. Jet-cloud interactions cannot explain the extreme kinematics in these objects. Another accelerating mechanism is at work, which could also play a role in many other HZRG. IV98 have proposed that SMM02399-0136 is a system in which two companions are interacting: L1 (which contains an active nucleus) and L2. The interaction has induced starburst activity responsible for the submm and weak radio emission. Another possibility is that the radio emission is due to a frustrated radio jet, since the steep radio spectrum is consistent with an AGN origin (IV98). The radio emission is extended along PA71, close to PA88.6 (the L1-L2 direction), and the projected size ($`\sim `$7.9 arcsec) is larger than the L1-L2 extension ($`\sim `$4 arcsec). Therefore, the high velocity gas detected beyond L2 is probably inside the radio structures. In this case, the interaction between the radio jet and the ambient gas could be responsible for the large FWHM values measured beyond L2. However, if SMM02399-0136 is a system of two interacting companions with strong starburst activity \[as is the case for many $`\mu `$Jy radio sources (Lowenthal 1997)\], jet-cloud interactions cannot explain the gas kinematics in SMM02399-0136. <sup>1</sup><sup>1</sup>1SMM02399-0136 is gravitationally lensed by a foreground cluster.
The appropriate geometry with respect to the lensing source could produce a greater distortion of component L1, due to its compact morphology. L1 emission might extend beyond L2, with the result that the high velocities measured in L1 could contaminate measurements of the extended gas. We would then expect the same effect for NV and CIV, which are strongly nucleated and have fluxes in L1 similar to Ly$`\alpha `$ (see Fig. 3 in IV98). However, these lines are detected only in L1. We discuss here several possible mechanisms that could explain the extreme kinematics in some HZRG. * Broad scattered lines. Many distant radio galaxies ($`z>`$2) show polarized continuum with the electric vector perpendicular to the axis of the optical (UV rest frame) structures (Cimatti et al. 1998, Fosbury et al. 1998b, 1999). This is consistent with a scenario in which powerful radio galaxies contain QSO nuclei whose FUV emission we see scattered by extended dust structures. The emission from the broad line region should also be scattered, and therefore broad lines could be detected within the extended gas. Scattering preserves the equivalent width of BLR lines against the nuclear continuum, and if this mechanism dominated, we should measure a similar EW in the extended gas. We have measured a lower limit for the EW of the CIV$`\lambda `$1550 line (not affected by neutral hydrogen absorption). We obtain EW(CIV)$`\gtrsim `$100, which is quite large compared with typical values measured in quasars at high redshift (Corbin & Francis 1994). This suggests that the line is dominated by direct light, rather than scattered light. This mechanism is not likely to play a role in radio galaxies in general. Indeed, NIR spectroscopy of distant narrow line radio galaxies ($`z=`$2.2-2.6) reveals FWHM$`>`$1000 km s<sup>-1</sup> for both permitted and forbidden lines (Evans 1998). Even though the spectra of HZRG are customarily integrated along the spatial dimension, the extended emission frequently dominates the integrated spectra (e.g. McCarthy et al. 1996, Stockton et al. 1996), and therefore the observed large velocities could well originate in the extended regions themselves. At $`z\sim `$1 the \[OII\] emission has similarly revealed high velocities (FWHM$`>`$1000 km s<sup>-1</sup>) within the extended gas regions of some radio galaxies (McCarthy et al. 1996). * Infall. Heckman et al. (1991) made a spectroscopic study of the extended gas in a sample of 5 high redshift radio-loud quasars ($`z\sim `$2-3) (see also Lehnert & Becker 1998). They found that the kinematical properties are very similar to those of the EELR of HZRG: namely, velocity dispersions across the nebulae consistent with gravitational motions ($`<`$500 km s<sup>-1</sup>) but with large FWHM$`\sim `$1000–1500 km s<sup>-1</sup>. They propose a scenario in which gravitation is at the origin of these extreme motions: i.e., gas freely falling from a large distance into the galaxy. An infall process with such characteristics could happen during the process of galaxy formation, if the radio galaxy lies at the bottom of a deep potential well (like a dense cluster). * A group of Ly break galaxies around the radio galaxy. Recent deep HST images of radio galaxies at $`z>`$2 show clumpy and irregular morphologies, consisting of a bright component and a number of small components, which is suggestive of a merging system (Pentericci et al. 1998). Pentericci et al.
find that those clumps have similar characteristics to Ly break galaxies and suggest that the host galaxy of the radio source had itself formed through the merging of such smaller units (see also van Breugel et al. 1998). MRC2025-218, MRC2104-242 and SMM02399-0136 have morphologies similar to the one described above (see the optical images in Pentericci et al. 1998 and IV98). The presence of several components is also revealed by our spectra. We also detect diffuse emission between (and sometimes beyond) the main clumps, which could have the same nature as the diffuse and asymmetric halos found around compact clumps in Ly break galaxies (Steidel et al. 1996). An important difference between Ly break galaxies and the clumps in HZRG is the large velocities measured in each clump (FWHM$`>`$1000 km s<sup>-1</sup>), much larger than the values observed in Ly break galaxies (FWHM$`\sim `$200 km s<sup>-1</sup>) (e.g. Pettini et al. 1998, Prochaska & Wolfe 1997). However, spectroscopy shows that velocity shifts of $`>`$1000 km s<sup>-1</sup> between absorbing and emitting gas are common in Ly break galaxies. This suggests the presence of large scale outflows of hundreds of km s<sup>-1</sup> (e.g. Pettini et al. 1998, Franx et al. 1997), which could be responsible for large FWHM values if all the gas were ionized. On the other hand, no clear link has been established between the absorbing and the ionized gas and, therefore, these velocity differences might simply occur between unrelated intervening objects. * Bipolar outflows. Chambers (1998) has recently proposed that bipolar outflows can be responsible for the high velocities, morphologies and polarimetric properties of high redshift radio galaxies. In this model, the EELR of HZRG consists of an expanding bipolar dust shell which scatters light from a quasar core and has an evacuated interior. Bipolar outflows can be generated by the superwind associated with a starburst in a circumnuclear molecular disk. Evidence for such superwinds has been found in Far Infrared Galaxies (FIRGs) (Heckman et al. 1990, H90 hereafter). These galaxies show emission lines with FWHM of several hundred km s<sup>-1</sup>, shifted by $`\sim `$1000 km s<sup>-1</sup> in some objects. Appropriate superwind models predict outflow velocities of several hundred km s<sup>-1</sup>. We have calculated some basic parameters characterizing a superwind which could explain the kinematic properties of the EELR in HZRG (we emphasize the approximate nature of these calculations). We have assumed a typical radius $`r_{neb}`$ for the ionized nebula of 20 kpc and a density $`n_0`$ in the ambient (undisturbed) medium of 10 cm<sup>-3</sup> (McCarthy 1993). Two emission-line components of several hundred km s<sup>-1</sup> (unresolved at our spectral resolution) and shifted by 1000 km s<sup>-1</sup> will produce a broad profile with FWHM$`\sim `$1000 km s<sup>-1</sup>, the values observed in our objects. Therefore, we can assume, as concluded for nearby FIRGs, an expansion velocity of the nebula $`v_{neb}`$ of several hundred km s<sup>-1</sup> ($`\sim `$500 km s<sup>-1</sup>). The dynamical time scale $`t_{dyn}`$ for the nebula is given by the corresponding equation in H90. We obtain $`t_{dyn}\sim `$40 Myr, which is in the range calculated for some FIRGs. According to the predictions of the superwind models, this is comparable to the age of the starburst. For comparison, Dey et al.
(1997) derived an upper limit for the age of the young stellar population in the radio galaxy 4C41.17 ($`z`$=3.80) of 600 Myr for a continuous star formation scenario and $`\sim `$16 Myr for an instantaneous starburst model. On the other hand, if the total mass of the ionized gas (several$`\times `$10<sup>8</sup>-10<sup>9</sup> M<sub>⊙</sub>, van Ojik 1995, McCarthy 1993) has been ejected in the outflow and $`t_{dyn}\sim `$40 Myr, this implies that the mass injection rate is $`7\lesssim dM/dt\lesssim 25`$ M<sub>⊙</sub> yr<sup>-1</sup>, which is also consistent with superwind models for nearby FIRGs. We have also calculated the rate of injection of kinetic energy in the wind, $`dE/dt`$, given by the corresponding equation in H90. We obtain $`\sim `$3$`\times `$10<sup>47</sup> erg s<sup>-1</sup>, which is one order of magnitude higher than predicted for high power FIRGs. This is what we expect taking into account the cosmological evolution of the wind rate (H90) (the mass injection rate should increase by the same factor, though). $`dE/dt`$ is large compared to the integrated luminosity of the lines (Ly$`\alpha `$ can be as luminous as $`\sim `$10<sup>44</sup> erg s<sup>-1</sup>). With a small efficiency for converting the kinetic energy of the outflow into emission-line luminosity, the outflow could power the nebula. Therefore, a superwind with properties similar to those predicted for the high power FIRGs at low redshift could explain the kinematic properties of the extended gas in the HZRG.
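The numbers above follow from simple ratios, as the following minimal cross-check shows (adopted values only; this is not code from the paper):

```python
kpc, km_s = 3.086e21, 1.0e5          # cgs conversion factors
Myr, yr = 3.156e13, 3.156e7          # seconds

r_neb = 20.0 * kpc                   # adopted nebular radius
v_neb = 500.0 * km_s                 # adopted expansion velocity

t_dyn = r_neb / v_neb                # dynamical (expansion) time scale
print(f"t_dyn ~ {t_dyn / Myr:.0f} Myr")                    # ~ 40 Myr

# Mass injection rate if 3e8 - 1e9 Msun of ionized gas is ejected in t_dyn:
for M_ion in (3.0e8, 1.0e9):                               # [Msun]
    print(f"dM/dt ~ {M_ion / (t_dyn / yr):.0f} Msun/yr")   # ~ 7 - 25
```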
The possibility of an outflowing wind generated in the nuclear region of a powerful radio galaxy is suggested by the nearby radio galaxy Cyg A: a polarization map (Tadhunter et al. 1990, Ogle et al. 1997) shows a biconical structure suggestive of a dusty reflection nebula which scatters the light of the hidden active nucleus. The outflow is suggested by the shift between the narrow emission lines detected in direct light and the polarized narrow lines (Ogle et al. 1997). Evidence for a circumnuclear starburst ring is presented in Fosbury et al. 1998a. The velocity shift, however, is much lower ($`\sim `$100-200 km s<sup>-1</sup>) than the FWHM values observed at high $`z`$. ## 6 Summary and conclusion We have studied the UV spectra of 3 distant powerful radio galaxies and the hyperluminous SMM02399-0136 system with the goal of understanding the mechanism responsible for the high velocities observed in the extended gas of HZRG. Large velocities are found in the extended gas of all the objects (FWHM$`>`$1000 km s<sup>-1</sup>). Interactions between the radio jet and the ambient gas certainly play a role in some radio galaxies. However, we measure high velocities in regions where such interactions are not taking place, and therefore other mechanisms must be at work. Possible explanations for such extreme motions are: 1) infall of material from large distances (gravitational origin); this mechanism could be important in the process of galaxy formation. 2) A group of Ly break galaxies in the neighbourhood of the radio galaxy; large scale outflows in the individual components are required. 3) Bipolar outflows produced by superwinds, as observed in nearby FIRGs. ###### Acknowledgements. This work is based on spectroscopic data obtained at La Silla Observatory. M. Villar-Martín acknowledges the support of a PPARC fellowship to develop most of this work at the Dept. of Physics and Astronomy in Sheffield (UK). Thanks to the referee, Pat McCarthy, for useful comments that contributed to improving the paper. Thanks also to Raffaella Morganti for helpful comments on the radio and optical astrometry of MRC1558-003.
no-problem/9903/astro-ph9903056.html
ar5iv
text
# INVERSION OF THE ABEL EQUATION FOR TOROIDAL DENSITY DISTRIBUTIONS L. Ciotti Osservatorio Astronomico di Bologna via Ranzani 1, 40127 Bologna (Italy) e-mail: ciotti@astbo3.bo.astro.it (with 1 figure) ABSTRACT In this paper I present three new results of astronomical interest concerning the theory of Abel inversion. 1) I show that in the case of a spatial emissivity that is constant on toroidal surfaces and projected along the symmetry axis perpendicular to the torus’ equatorial plane, it is possible to invert the projection integral. From the surface (i.e., projected) brightness profile one then formally recovers the original spatial distribution as a function of the toroidal radius. 2) By applying the above-described inversion formula, I show that if the projected profile is described by a truncated off-center gaussian, the functional form of the related spatial emissivity is very simple and, most important, nowhere negative for any value of the gaussian parameters, a property which is not guaranteed, in general, by Abel inversion. 3) Finally, I show how a generic multimodal centrally symmetric brightness distribution can be deprojected using a sum of truncated off-center gaussians, recovering the spatial emissivity as a sum of nowhere negative toroidal distributions. keywords: Methods: analytical – Methods: numerical – Methods: data analysis 1. INTRODUCTION A common problem in astronomy is the deprojection of a given surface brightness distribution. Unfortunately this problem is degenerate, i.e., different spatial emissivities can originate the same surface (projected) brightness distribution. As a consequence, any inversion procedure is invariably based on a (more or less) arbitrary choice of the underlying geometry of the spatial emissivity. Some help in this unpleasant situation comes from two guide rules: in choosing the geometry of the spatial emissivity one uses symmetry properties (if any) of the brightness distribution, and after deprojection one discards the assumed geometry if it produces a somewhere negative spatial emissivity. One of the simplest cases is given by a surface brightness characterized by central symmetry, i.e., described by a function $`I(R)`$ of the projected radius $`R`$. The natural assumption is a spherically symmetric spatial emissivity, $`\mathcal{E}=\mathcal{E}(r)\geq 0`$, where $`r`$ is the spherical radius. It is a well known result that in this case the projection operator and its inverse are given by an Abel integral equation: $$I(R)=2\int_R^{r_\mathrm{t}}\frac{\mathcal{E}(r)rdr}{\sqrt{r^2-R^2}},$$ (1) and $$\mathcal{E}(r)=-\frac{1}{\pi }\left[\int_r^{r_\mathrm{t}}\frac{dI(R)}{dR}\frac{dR}{\sqrt{R^2-r^2}}-\frac{I(r_\mathrm{t})}{\sqrt{r_\mathrm{t}^2-r^2}}\right],$$ (2) where $`r_\mathrm{t}`$ is the truncation radius, i.e., $`I(R)=0`$ for $`R>r_\mathrm{t}`$, and for untruncated distributions $`r_\mathrm{t}\to \infty `$. It is important to note that if $`I(r_\mathrm{t})>0`$ the recovered emissivity is (weakly) divergent at the edge of the sphere, due to the second term on the r.h.s. of eq. (2). A fundamental problem of Abel inversion is that the positivity of the recovered distribution is not guaranteed. As an example of this fact, one can consider $`I(R)`$ to be a gaussian or an off-center gaussian: while in the first case the deprojected emissivity, being still a gaussian, is everywhere positive (see, for applications, Bendinelli, Ciotti, & Parmeggiani 1993, and references therein), in the second case it can easily be shown that $`\mathcal{E}(r)`$ diverges negatively for $`r\to 0`$.
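The gaussian-to-gaussian statement is easy to check numerically. A minimal sketch (illustrative, with unit gaussian width) evaluates eq. (1) with $`r_\mathrm{t}\to \infty `$, using the substitution $`u=\sqrt{r^2-R^2}`$ to remove the integrable singularity:

```python
import numpy as np
from scipy.integrate import quad

sig = 1.0
E = lambda r: np.exp(-r**2 / (2 * sig**2))      # gaussian emissivity

def I_proj(R):
    # eq. (1) with r_t -> inf and u = sqrt(r^2 - R^2):
    # I(R) = 2 * int_0^inf E(sqrt(u^2 + R^2)) du
    return quad(lambda u: 2 * E(np.sqrt(u**2 + R**2)), 0, np.inf)[0]

for R in (0.0, 0.5, 1.0, 2.0):
    exact = np.sqrt(2 * np.pi) * sig * np.exp(-R**2 / (2 * sig**2))
    print(f"R={R:3.1f}: numeric {I_proj(R):.6f}  analytic {exact:.6f}")
```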
What other symmetries for $`\mathcal{E}`$ are compatible with a surface brightness $`I=I(R)`$ and are still of astrophysical interest? The natural generalization of the spherical symmetry is the cylindrical one. Many astronomical objects are characterized by this geometry: lenticular and spiral galaxies, accretion disks, plasma tori, planetary rings, some planetary nebulae, etc. In this paper we are interested in particular in the problem of deprojecting an $`I(R)`$ which presents some off-center maxima, and so the natural assumption about the emissivity distribution $`\mathcal{E}`$ is the toroidal one. In Section 2 I show that in the case of a toroidal emissivity distribution projected along its symmetry axis it is possible to generalize the inversion formula (2). Then I show that in the particular case of a projected brightness described by a truncated off-center gaussian, $`\mathcal{E}`$ has an extremely simple functional form and, at variance with the spherically symmetric case, it is nowhere negative for any choice of the gaussian parameters. In Section 3 I propose the use of the physically admissible $`I`$–$`\mathcal{E}`$ pair found here to obtain a (finite) series expansion of the emissivity for a generic centrally symmetric surface brightness profile. In a following paper (Bendinelli et al. 1999), an application of this method to the Planetary Nebula (PN) A 13 is described. This PN is just one case among many others of astrophysical interest for which the method produces useful results on the object’s structure. 2. THE INVERSION FORMULA FOR TOROIDAL SYMMETRY Let us start by obtaining the analogue of eq. (1) for the projection, along the symmetry axis, of an emissivity distribution stratified on toroidal surfaces. The natural coordinates are the cylindrical ones, $`(R,\phi ,z)`$, where $`z=0`$ is the torus equatorial plane. The relation between cylindrical and toroidal coordinates $`(r,\phi ,\vartheta )`$ is given by $$R=R_0+r\mathrm{sin}\vartheta ,\phi =\phi ,z=r\mathrm{cos}\vartheta ,$$ (3) (see Figure 1). Figure 1: The parameters describing the toroidal geometry, and the relations used for the projection and deprojection. We assume independence of $`\mathcal{E}`$ from $`\vartheta `$ and $`\phi `$ (the requirement of independence from $`\phi `$ is unnecessary: the following discussion can easily be generalized to distributions $`I=I(R,\phi )`$, obtaining $`\mathcal{E}=\mathcal{E}(r,\phi )`$), and so each isoemissivity surface is labeled by its toroidal radius $`r=\sqrt{z^2+(R-R_0)^2}`$ around the circle of radius $`R_0`$ placed in the equatorial plane. Moreover, for the sake of generality, let us assume that the emissivity is non-zero only for $`0\leq r\leq r_\mathrm{t}\leq R_0`$. For $`r_\mathrm{t}=R_0`$ we have a so-called full torus. With the previous assumptions, $`I(R)\geq 0`$ for $`|R-R_0|\leq r_\mathrm{t}`$ and $`I(R)=0`$ outside. From very simple geometric arguments one obtains the analogue of eq. (1): $$I(R)=2\int_{|R-R_0|}^{r_\mathrm{t}}\frac{\mathcal{E}(r)rdr}{\sqrt{r^2-(R-R_0)^2}}.$$ (4) With respect to the variable $`R`$, the integral in eq. (4) is a generalized Abel integral, but it is not directly invertible, since $`(R-R_0)^2`$ is not strictly increasing with $`R`$ (see Gorenflo & Vessella 1991, p. 24). Since the profile is symmetric with respect to $`R_0`$, we can use a branch of $`I`$ (i.e., we use $`R\geq R_0`$) and define a new variable $`s:=R-R_0\geq 0`$ with $`I_+(s):=I(R_0+s)`$. In this way, the integral is formally identical to the projection operator in eq.
(1) and can be inverted: $$\mathcal{E}(r)=-\frac{1}{\pi }\left[\int_r^{r_\mathrm{t}}\frac{dI_+}{ds}\frac{ds}{\sqrt{s^2-r^2}}-\frac{I_+(r_\mathrm{t})}{\sqrt{r_\mathrm{t}^2-r^2}}\right].$$ (5) Thus, we have proved that, assuming the surface brightness distribution to be the projection onto the equatorial plane of a toroidally stratified emissivity, it is formally possible to invert the projection integral, recovering the emissivity as a function of the toroidal radius. 3. TRUNCATED OFF-CENTER GAUSSIANS As an application of eq. (5), we invert a surface brightness profile described by an off-center gaussian truncated at $`r_\mathrm{t}`$: $$I(R)=S\left\{\mathrm{exp}\left[-\frac{(R-R_0)^2}{2\sigma ^2}\right]-\mathrm{exp}\left(-\frac{r_\mathrm{t}^2}{2\sigma ^2}\right)\right\},|R-R_0|\leq r_\mathrm{t}.$$ (6) Note that $`I_+(r_\mathrm{t})=0`$, and so the unpleasant divergence at the edge of the torus is avoided. The total luminosity $`L`$ associated with $`I`$ is given by its surface integral over the annulus $`R_0-r_\mathrm{t}\leq R\leq R_0+r_\mathrm{t}`$: $$L=SR_0\sigma (2\pi )^{3/2}\left[\mathrm{Erf}\left(\frac{r_\mathrm{t}}{\sqrt{2}\sigma }\right)-\frac{r_\mathrm{t}\sqrt{2}}{\sigma \sqrt{\pi }}\mathrm{exp}\left(-\frac{r_\mathrm{t}^2}{2\sigma ^2}\right)\right],$$ (7) where $`\mathrm{Erf}(x)=(2/\sqrt{\pi })\int_0^x\mathrm{exp}(-t^2)dt`$ is the error function. The luminosity density obtained using eqs. (5)-(6) turns out to be: $$\mathcal{E}(r)=\frac{S}{\sqrt{2\pi }\sigma }\mathrm{exp}\left(-\frac{r^2}{2\sigma ^2}\right)\mathrm{Erf}\left(\sqrt{\frac{r_\mathrm{t}^2-r^2}{2\sigma ^2}}\right).$$ (8) The deprojection formula can be verified by inserting eq. (8) in eq. (4) and then evaluating the integral. Note that the spatial emissivity given by eq. (8) is everywhere positive, for any choice of the gaussian parameters $`(S,R_0,r_\mathrm{t},\sigma )`$. One can then conclude with the following analogy: off-centered gaussians in toroidal symmetry correspond to centered gaussians in spherical symmetry. Having found a nowhere negative $`I`$–$`\mathcal{E}`$ pair that satisfies the Abel inversion for a particular toroidal density distribution, the next step is the natural extension of a previous work (Bendinelli et al. 1993), where a multigaussian expansion of observed profiles is described under the assumption of spherical symmetry: here it is sufficient to recall that by using gaussian functions one avoids the computational difficulty of direct numerical Abel inversion, which belongs to the class of unstable inverse problems (Gorenflo & Vessella 1991; Craig & Brown 1986). One can thus assume that a given profile $`I(R)`$ is expanded as a sum of truncated off-center gaussians with different parameters $`(S,R_0,r_\mathrm{t},\sigma )_i`$ for $`i=1,\dots ,N`$. The parameters are easily computed using the Newton–Gauss regularized method, a powerful iterative non-linear fitting technique (Bendinelli et al. 1987; Bendinelli 1991). Clearly, the parameters are constrained by the requirement that the integral over the projection plane of the fitted brightness distribution equal the original one, i.e., the total luminosity of the system $`L=\sum _iL_i`$, where the $`L_i`$ are given by eq. (7). The associated spatial emissivity is then approximated by the sum of $`N`$ distributions of the form of eq. (8). The developed technique will be applied to the deprojection of the galactic PN A 13, which is characterized by a well defined ring-shaped morphology (Bendinelli et al. 1999).
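The verification mentioned after eq. (8) can also be carried out numerically. A short sketch (with illustrative parameter values) projects eq. (8) through eq. (4) and compares the result with eq. (6):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

S, sigma, r_t = 1.0, 0.3, 1.0                 # illustrative parameters

def E(r):                                      # eq. (8)
    return (S / (np.sqrt(2 * np.pi) * sigma) * np.exp(-r**2 / (2 * sigma**2))
            * erf(np.sqrt((r_t**2 - r**2) / (2 * sigma**2))))

def I_plus(s_off):                             # eq. (4) with u = sqrt(r^2 - s^2)
    umax = np.sqrt(r_t**2 - s_off**2)
    return quad(lambda u: 2 * E(np.sqrt(u**2 + s_off**2)), 0, umax)[0]

for s_off in (0.0, 0.25, 0.5, 0.75):
    exact = S * (np.exp(-s_off**2 / (2 * sigma**2))
                 - np.exp(-r_t**2 / (2 * sigma**2)))    # eq. (6)
    print(f"s={s_off:4.2f}: numeric {I_plus(s_off):.6f}  analytic {exact:.6f}")
```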
4. CONCLUSIONS The results of this work are the following:
1) It is shown that it is possible to extend the classical Abel inversion for spherically symmetric systems to toroidal density distributions projected along the torus symmetry axis.
2) It is also shown that an off-center truncated gaussian function gives rise to a well behaved spatial emissivity, i.e., the emissivity is a very simple function of the toroidal radius. More important, the emissivity is non-negative over all of space for any choice of its parameters, a property not guaranteed by the Abel inversion.
3) Finally, the use of the $`I`$–$`\mathcal{E}`$ pair found here is proposed for recovering the spatial emissivity of any centrally symmetric projected distribution as a sum of toroidal density distributions, after a non-linear parameter fit.
5. ACKNOWLEDGEMENTS I would like to thank O. Bendinelli, G. Parmeggiani, and L. Stanghellini for useful discussions. This work was partially supported by the contracts ASI-95-RS-152 and ASI-ARS-96-70. 6. REFERENCES Bendinelli, O., Ciotti, L., Parmeggiani, G., and Stanghellini, L., 1999, in preparation. Bendinelli, O., 1991, ApJ, 366, 599. Bendinelli, O., Parmeggiani, G., Piccioni, A., and Zavatti, F., 1987, AJ, 94, 1095. Bendinelli, O., Ciotti, L., and Parmeggiani, G., 1993, A&A, 279, 668. Craig, I.J.D., Brown, J.C., 1986, “Inverse Problems in Astronomy”, Adam Hilger, Bristol. Gorenflo, R., Vessella, S., 1991, “Abel Integral Equations”, Springer–Verlag, Berlin.
no-problem/9903/astro-ph9903454.html
ar5iv
text
# The Spectral Correlation Function – A New Tool for Analyzing Spectral-Line Maps ## 1 Introduction The “spectral correlation function” analysis we introduce in this short paper is a new tool for analyzing spectral-line data cubes. Owing to the recent advances in receiver and computer technology, both observed and simulated cubes have been growing in size. Our ability to intuit their import, however, has not kept pace. Therefore, the need for statistical methods of analyzing these cubes has now become acute. Several methods for analyzing spectral-line data cubes have been proposed and applied over the past fifteen years. Many of the methods are “successful” in that they can describe a cube with far fewer bits than the original data set contained. The question behind the study this paper introduces can be phrased as “just which bits describe the cube most uniquely?” In particular, we seek a method which produces easily understood results, but preserves as much information as possible about all of the dimensions of a position-position-velocity cube of intensity measurements. Some previous statistical analyses do not explicitly make use of the velocity dimension in analyzing spectral-line cubes. For example, Gill and Henriksen (1990) and Langer, Wilson, & Anderson (1993) apply wavelet analysis to position-position-intensity data, in order to represent the physical distribution of material in a mathematically efficient way. Houlahan and Scalo (1992) use structure-tree statistics on IRAS images to analyze the hierarchical vs. random nature of molecular clouds, ultimately finding evidence for some of each. Wiseman and Adams (1994) use pseudometric methods on IRAS data to describe and rank cloud “complexity.” Elmegreen and Falgarone (1996) analyze the clump mass spectrum of several molecular clouds in order to determine a characteristic fractal dimension for the star-forming interstellar medium. Blitz and Williams (1997) find evidence for a break in the column density distribution of material in clouds by analyzing histograms of column density. Other analyses preserve velocity information along with the spatial information in analyzing the cubes. At present, these kinds of analyses can essentially be broken down into two groups. In the first group, no transforms are taken, and spatial information is preserved directly. For example, Williams, de Geus, & Blitz (1994) use the CLUMPFIND program, and Stutzki & Güsten (1990) use their GAUSSCLUMPS algorithm to identify “clumps” in position-position-velocity space. Statistical analyses are made on the distributions of clump properties (e.g. the clump mass spectrum is calculated) to probe the three dimensional structure of molecular clouds. In the second group, transforms of one kind or another are performed, and spatial information is preserved as “scale” rather than as “position” information. The classic example of this kind of analysis involves calculation of autocorrelation and structure functions. Application of these functions to molecular cloud data was first suggested by Scalo (1984) and then applied to real data by Kleiner and Dickman (1985) and by Miesch and Bally (1994). Heyer and Schloerb (1994) have recently applied Principal Components Analysis (PCA) to several data cubes. This method describes clouds as a sum of special functions in a manner mathematically similar to wavelet analysis. Most of these analyses have offered new insights into cloud structure and kinematics.
Using this breakdown, the SCF falls into the first group,<sup>1</sup><sup>1</sup>1Strictly speaking, the SCF is in the first group when applied for a fixed spatial resolution. However, the SCF can be used as a tool more like the autocorrelation function analyses mentioned in the second group, by comparing runs with different spatial resolution. An upcoming paper (Padoan & Goodman 1999) discusses the effects of varying the ratio of resolution to map size on the SCF (see §3.4). in that no transforms are performed and spatial information is preserved directly. The SCF simply describes the similarities in shape, size, and velocity offset among neighboring spectra in a data cube. In originally developing the SCF, our goal was to create a “hard-to-fool” statistic for use in comparing data cubes calculated from simulations of the ISM with those of observed cubes. The exact reproduction of an observed object in the ISM through simulation is practically impossible, so simulations need to be evaluated on their ability to reproduce more general properties of the ISM, like appropriate scaling relationships. In the only published work known to us<sup>2</sup><sup>2</sup>2Padoan et al. 1999 have recently submitted a comparison of the Padoan & Nordlund (1999) simulations with <sup>13</sup>CO maps of the Perseus Molecular Cloud to the Astrophysical Journal. The cubes are compared using moments of the distribution of line parameters (see §3.5). that specifically evaluates hydrodynamic simulations by comparing them with real spectral maps, Falgarone et al. (1994) have compared a simulation by Porter, Pouquet & Woodward (1994) with an observed data cube (see also the analysis of simulated cubes in Dubinski, Narayan & Phillips 1995). The observed cube is a CO map of a small piece of the expanding H I loop in Ursa Major, first mapped by Heiles (1976). The Falgarone et al. (1994) analysis is based on comparing combinations of the moments of the derived distributions of spectral line parameters for each cube. They find that the moment analysis on the observed maps agrees well with one performed on the simulations. We show, below, however, that this comparison may not have been strict enough, in that the distribution of the SCF for the Porter et al. (1994) simulation differs significantly from the distribution calculated for the observed Ursa Major data cube. ## 2 The SCF Algorithm The SCF project was developed in order to probe the nature of correlation in spectral-line maps of molecular clouds. Unlike other probes, such as Scalo’s (1984) Autocovariance Function (ACF) and Structure Function (SF), the SCF is specifically designed to preserve detailed spatial information in spectral-line data cubes. The motivations and mathematical background of the project are discussed in Goodman (1997). ### 2.1 The Development of the SCF The SCF algorithm centers around quantifying the differences between spectra. To begin, a deviation function, $`D`$, is defined which represents the differences between two spectra, $`T_1(v)`$ and $`T_0(v)`$: $$D(T_1,T_0)\equiv \underset{s,\ell }{\mathrm{min}}\left\{\int \left[sT_1(v-\ell )-T_0(v)\right]^2dv\right\}$$ (1) The two parameters $`s`$ and $`\ell `$ are included in the function so that differences in height and velocity offset between the two spectra can be eliminated, recognizing similarities solely in the shape of the two line profiles. These parameters can be adjusted in order to find the scaling and/or velocity-space shifting which minimizes the differences between the spectra.
In addition, the deviation function can be evaluated with either or both of the parameters fixed. We normalize the deviation function to the unit interval: a value of 1 indicating identical spectra and a value of 0 indicating minimal correlation<sup>3</sup><sup>3</sup>3A value of 1 can only be achieved in the case of infinite signal to noise. See §2.2. The appropriate normalization is to divide by the maximum value of the deviation function in the absence of absorption and subtract this value from 1. The resulting function is referred to as the SCF evaluated for the two spectra: $$S(T_1,T_0)\equiv 1-\sqrt{\frac{D(T_1,T_0)}{s^2\int T_1^2(v)dv+\int T_0^2(v)dv}}$$ (2) As mentioned previously, the deviation function can be evaluated with the parameters $`s`$ and/or $`\ell `$ fixed, to 1 and 0, respectively. Such restrictions provide different kinds of information about the two spectra under examination. The resulting forms of the spectral correlation function are summarized in Table 1. In order to examine spectral-line maps, the comparison of two spectra must be extended to the analysis of many spectra simultaneously. The simplest such extension is to evaluate the functions $`S`$, $`S^{\ell }`$, $`S^s`$ and $`S^0`$ between a base spectrum and each spectrum in the map within a specified angular range from the base spectrum. We refer to the angular range under consideration as the resolution of the SCF. All of the SCF calculations are performed using only the central portion of the spectra, specifically, over a range equal to a given number of FWHMs of the base spectrum from each spectrum’s velocity centroid. The FWHM is defined by a Gaussian fit to the base spectrum’s line profile.<sup>4</sup><sup>4</sup>4The Gaussian fit is only used to set a reasonable window over which the SCF is calculated. For very noisy data, including extra baseline decreases the SCF. The line profiles discussed in this paper are all roughly gaussian, with widths that do not vary much within a map, so the change in the SCF caused by varying window size is tiny. However, in other cases, such as analysis of H I line profiles, a window must be manually set, and fixed, as Gaussian fitting gives spurious widths. A new version of the SCF, developed to deal with H I data, uses a fixed spectral window and is presented in Ballesteros-Paredes, Vazquez-Semadeni & Goodman 1999. The resulting values of the SCF are then averaged together, with the option of weighting the results based on distance from the original spectrum. The averaged value is the correlation of the base spectrum with its neighbors, and a similar analysis is then performed for every spectrum in the map. When the SCF is evaluated for neighboring points in the spectral-line map, the averages use many of the same spectra, implying that SCF values for points within an SCF resolution element are not independent.
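A minimal sketch of eqs. (1)-(2) for a single pair of spectra may help fix ideas (an illustrative implementation, not the authors' code: it scans integer-channel lags and uses the analytic least-squares value of $`s`$ at each lag):

```python
import numpy as np

def scf(T1, T0, max_lag=40, fit_scale=True, fit_lag=True):
    """Evaluate S, S^l, S^s or S^0 (Table 1) for two spectra on a common
    velocity grid, depending on which compensatory parameters may vary."""
    best_D, best_s, best_T1 = np.inf, 1.0, T1
    for lag in (range(-max_lag, max_lag + 1) if fit_lag else [0]):
        T1l = np.roll(T1, lag)                 # integer-channel velocity shift
        s = np.dot(T1l, T0) / np.dot(T1l, T1l) if fit_scale else 1.0
        D = np.sum((s * T1l - T0) ** 2)        # eq. (1), discretized
        if D < best_D:
            best_D, best_s, best_T1 = D, s, T1l
    norm = best_s**2 * np.sum(best_T1**2) + np.sum(T0**2)
    return 1.0 - np.sqrt(best_D / norm)        # eq. (2)

# Example: a line compared with a shifted, rescaled copy of itself
v = np.linspace(-10, 10, 256)
T0 = np.exp(-0.5 * (v / 1.5) ** 2)
T1 = 0.7 * np.exp(-0.5 * ((v - 2.0) / 1.5) ** 2)
print("S   =", round(scf(T1, T0), 3))          # ~ 1: identical line shape
print("S^0 =", round(scf(T1, T0, fit_scale=False, fit_lag=False), 3))
```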
### 2.2 The Effects of Instrumental Noise Our measure of the similarity between observed spectra must deal with the effects of noise. Noise obscures similarities and differences between the two spectra under examination, usually skewing the results to indicate less correlation than is actually present. Hence, the principal difficulty generated by noise is that it creates a bias in correlation measurements, favoring spectra with higher values of signal-to-noise. We have explored a few methods of subtracting out noise bias using techniques shown to work on infinitely well-sampled data. However, these techniques break down for data with limited resolution (Rosolowsky 1998). While highly unconventional, we have found that the best method of dealing with non-uniform signal-to-noise is to discard all spectra with signal-to-noise below a certain cutoff value, $`(T_A/\sigma )_c`$, and then to add normally distributed random noise to spectra with signal-to-noise greater than the cutoff until all spectra have $`T_A/\sigma =(T_A/\sigma )_c`$. This method appears effective because it eliminates the bias (See Figure 1), and the resulting correlation outputs do not appear to depend strongly on the specific set of noise added. The maximum value of the SCF cannot reach 1 for any finite value of $`T_A/\sigma `$; instead, the maximum is dependent on the line shape and the cutoff value for the signal-to-noise. Figure 2 depicts the rise in SCF values with increasing signal-to-noise for each of the correlation functions. These data are generated by using the SCF to analyze a data cube consisting of identical Gaussian spectra which have had noise added to achieve a specific value of $`(T_A/\sigma )_c`$. We considered renormalizing the SCF by a factor equal to the inverse of the maximum SCF possible for the $`(T_A/\sigma )_c`$ used. If we did so, then absolute SCF values would always have the same meaning. However, since the exact maximum possible depends on line shape, we chose not to renormalize. Instead, we note that whenever $`(T_A/\sigma )_c`$ is set to the same value, the maximum possible SCF value should be roughly equal for any cube. In the examples below, we set $`(T_A/\sigma )_c=5`$, which implies a maximum possible SCF of order 0.65 (See Figure 2). This “noise equalizing” procedure may seem distasteful to some, especially to observers who spend long hours at the telescope! The best way we can explain its necessity is by reminding the reader that you cannot get something for nothing. In other words, if your data are noisy, you simply cannot know how well correlated two spectra are as well as if the spectra were clean. Uncorrelated noisy spectra can look just the same as correlated noisy spectra. Any correction introduces different amounts of uncertainty for different positions in the map. For this reason, until someone makes a better proposal, we continue to advocate equalizing signal to noise at a threshold value.
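A sketch of this noise-equalization procedure follows (illustrative only; here the peak of the noisy spectrum stands in for $`T_A`$, a simplifying assumption):

```python
import numpy as np

def equalize_noise(spectra, sigma, snr_cut=5.0, seed=None):
    """Sketch of Sec. 2.2: discard spectra with peak T_A/sigma < snr_cut,
    then degrade the rest to exactly snr_cut by adding gaussian noise."""
    rng = np.random.default_rng(seed)
    out = []
    for spec in spectra:
        if spec.max() / sigma < snr_cut:
            continue                            # below the cutoff: discard
        sigma_target = spec.max() / snr_cut     # noise level that gives snr_cut
        sigma_add = np.sqrt(sigma_target**2 - sigma**2)   # add in quadrature
        out.append(spec + rng.normal(0.0, sigma_add, spec.size))
    return out

v = np.linspace(-5, 5, 128)
rng = np.random.default_rng(4)
cube = [a * np.exp(-0.5 * v**2) + rng.normal(0, 0.1, v.size)
        for a in (0.3, 0.8, 2.0)]
print(len(equalize_noise(cube, sigma=0.1)), "of 3 spectra survive the cut")
```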
## 3 First Results from the SCF In this section, we analyze five sample data cubes chosen to demonstrate the SCF’s ability to discriminate among different physical conditions. First, we describe the data cubes (see Table 2), and then we discuss comparisons amongst them in the context of the SCF. Of the two observational data cubes, one is for a self-gravitating cloud (Heiles Cloud 2), and one is for a non-self-gravitating high-latitude cloud (in Ursa Major). Of the three cubes generated from simulations, one is purely hydrodynamic and non-self-gravitating; one is magnetohydrodynamic and non-self-gravitating; and the last is magnetohydrodynamic and self-gravitating. ### 3.1 Data Sets Heiles Cloud 2: In 1996, observers at FCRAO mapped Heiles Cloud 2 in the $`\text{C}^{18}\text{O}(2-1)`$ line (deVries et al. 1999). The resulting data cube consists of 4800 spectra arranged in a grid of $`50\times 96`$ pixels on the sky. The grid covers $`58^{\prime }`$ in right ascension and $`40^{\prime }`$ in declination, centered at $`\alpha (2000)=4^h36^m09^s`$ and $`\delta (2000)=25^{\circ }47^{\prime }30^{\prime \prime }`$. Assuming the cloud is 140 pc distant, the map covers a physical area of $`2.3\times 1.6`$ pc at a spatial resolution of 0.034 pc. The spectra have 256 channels of velocity running from -0.35 km/s to 12.45 km/s, with a channel width of 0.05 km/s. The peak emission from the cloud is at about +6 km/s. Ursa Major: The <sup>12</sup>CO (2-1) map analyzed here is described in Falgarone et al. 1994. The area mapped is located on a giant H I loop, and is claimed by Falgarone et al. to be “a good site to study turbulence in molecular clouds given its proximity to an important source of kinetic energy (the expanding loop itself).” The size of this map is 9 by 19 pixels, with a grid step of 30<sup>′′</sup> (0.015 pc if the cloud is at 100 pc). Note that there are approximately 50% more pixels in the Porter et al. simulation than in this relatively small map. Pure Hydrodynamic Turbulence: This cube, presented in Porter et al. (1994), is the one compared with the <sup>12</sup>CO (2-1) Ursa Major data cube (see above) by Falgarone et al. (1994). It is a three-dimensional simulation with periodic boundary conditions, no magnetic fields, no gravity, and fully compressible turbulence. The time step analyzed here is the second cube ($`t=1.2\tau _{ac}`$) presented in Falgarone et al. (1994). The spectra in the simulated data cube are laid out in a grid of $`16\times 16`$ pixels.<sup>5</sup><sup>5</sup>5The full Porter et al. 1994 simulation is 512<sup>3</sup>, but the grid of $`16\times 16`$ spectra is generated by considering subsamples of the cube with dimensions $`32\times 32\times 512`$. There are 512 channels in each spectrum and the channel width is 0.13 km/s. The simulated spectra are generated from density-weighted histograms of velocity, which are intended to mimic observations of the <sup>12</sup>CO(2-1) line. The overall physical size of the simulation is not given, but the nature of the comparison with the CO map of Ursa Major shown in Falgarone et al. (1994) implies that the spatial resolution of the simulated spectral-line map should be approximately 0.015 pc (30<sup>′′</sup> at 100 pc). Magnetohydrodynamic Turbulence: Mac Low et al. (1998) have made their simulated cube “L,” which represents uniform, isotropic, isothermal, supersonic, super-Alfvénic, decaying turbulence, available to us and others through the world wide web. The spectra are produced as density-weighted histograms from a simulation using a finite-difference method (ZEUS; see Stone & Norman 1992) and 256<sup>3</sup> zones. Mac Low et al. use this cube, and others, to study the free decay of turbulence in the ISM. Mac Low et al.’s results concerning decay times apparently agree with those of Padoan & Nordlund (1999), who have carried out equivalent simulations. The physical scale the Mac Low et al. cube represents depends on the choice of other parameters (e.g. field strength), but it is fair to estimate that the resolution should be similar to that of Gammie et al. (1999; see below), or about 0.06 pc. Self-Gravitating Magnetohydrodynamic Turbulence: Another group, whose most recently published work is Stone, Ostriker, & Gammie (1998), has been using the ZEUS code to study self-gravitating MHD turbulence. Charles Gammie has kindly provided us with a preliminary simulated spectral-line cube with dimensions $`32\times 32\times 256`$, generated from a recent 3D, self-gravitating, high-resolution run (Gammie et al. 1999). The spectra are density-weighted histograms meant to simulate <sup>13</sup>CO emission, observed with a velocity resolution of 0.054 km s<sup>-1</sup>. As is the case for the Porter et al.
(1994) simulations presented in Falgarone et al. (1994), the larger original (here, 256<sup>3</sup>) simulation is downsampled in the two spatial dimensions (here to $`32\times 32`$) to produce reasonable spectra. The resulting spatial resolution is approximately 0.06 pc. ### 3.2 Analysis For all of the SCF analyses presented in this paper, the cutoff signal-to-noise ratio is set to 5, the spatial resolution of the SCF includes all spectra within 2 pixels of the base spectrum, uniform weighting is applied across the resolution element, and the portion of each spectrum within 3 base-spectrum FWHMs of the velocity centroid is used. For each data cube, greyscale maps of the SCF values are generated and compared with maps of line parameters such as antenna temperature. Note that before the noise correction discussed in §2.2 was applied, maps of the peak antenna temperature $`T_A`$ and the SCF looked similar. After correction, this is no longer the case. So, the greyscale maps of the SCF, which preserve all the spatial information about which spectra in a map are correlated, can be informative on their own. In addition to pointing out correlations with various line parameters, they highlight the edges of H I shells (see Ballesteros-Paredes, Vazquez-Semadeni & Goodman 1999) and other such structures. In the current paper, however, we show only simple histograms (Figure 3) of all of the values of the SCF in a map, which are easier to quantitatively intercompare than the greyscale images. We present a moment analysis of these distributions in Table 3. As a test of the hypothesis that the positions of spectra within a cube are important to its description, we calculate the SCF for the original cubes and for “comparison” cubes where the positions are randomized. If the meaning of the SCF is linked to the original positions of the spectra, randomization of the positions should create a significant change in the SCF values. This drop is in fact observed in our analysis. The magnitude of the drop depends on the cube being analyzed and on the compensatory parameters used in the SCF, but not on the specific randomized positions. Different randomizations produce changes of order 1% in the mean values of the correlation functions. The SCF results for the randomized cubes appear with the original histograms and moment analysis (Figure 3 and Table 3). The significance of the drop in the mean value of the SCF caused by randomization is high in all cases studied here. The error in estimating the mean in each SCF distribution is of order the standard deviation of the distribution divided by $`\sqrt{N_{\mathrm{pixels}}}\times 3\mathrm{\Delta }v_{\mathrm{channels}}`$, where $`N_{\mathrm{pixels}}`$ is the number of pixels in the map and $`3\mathrm{\Delta }v_{\mathrm{channels}}`$ is the number of spectral channels considered in calculating the SCF. As can be seen from Tables 2 and 3, the standard deviations of the distributions are all less than 0.1, and the minimum $`\sqrt{N_{\mathrm{pixels}}}\times 3\mathrm{\Delta }v_{\mathrm{channels}}`$ is of order $`5\times 10^3`$, meaning that the error in the quoted mean is always less than about $`1\times 10^{-3}`$. This value is much smaller than any of the differences between SCF means for actual and randomized maps listed in the last column of Table 3.
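The randomization test itself is straightforward to reproduce. A sketch follows (illustrative; it assumes the scf() helper from the sketch in §2.1 above, and `cube` is a hypothetical position-position-velocity array):

```python
import numpy as np

def mean_scf(cube, box=2):
    """Map-averaged SCF: each spectrum vs. its neighbours within `box`
    pixels, uniformly weighted (uses scf() from the sketch in Sec. 2.1)."""
    ny, nx, _ = cube.shape
    vals = []
    for y in range(ny):
        for x in range(nx):
            neigh = [scf(cube[j, i], cube[y, x])
                     for j in range(max(0, y - box), min(ny, y + box + 1))
                     for i in range(max(0, x - box), min(nx, x + box + 1))
                     if (j, i) != (y, x)]
            vals.append(np.mean(neigh))
    return np.mean(vals)

def randomize_positions(cube, seed=2):
    """The 'comparison' cube: shuffle spectra over (x, y), keep each intact."""
    flat = cube.reshape(-1, cube.shape[-1]).copy()
    np.random.default_rng(seed).shuffle(flat)     # shuffles whole spectra
    return flat.reshape(cube.shape)

# drop = mean_scf(cube) - mean_scf(randomize_positions(cube))
```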
### 3.3 Inferences

The most basic conclusion we draw from our analysis is that the SCF does recognize some form of spectral correlation in data cubes. The drop in the mean and the increased spread of the distribution when positions are randomized reflect a clear loss of spatial correlation among the spectra. When the compensatory parameters $`s`$ and $`\ell `$ are fixed, the difference between the two histograms becomes clearer because the program has fewer tools to compensate for the differences in the spectra. For example, in Heiles Cloud 2, randomization causes the smallest correlation drop when both the lag and scaling parameters are allowed to vary, indicating that the spectra are similar in shape throughout the data cube. The fact that the mean value of $`S^{\ell }`$ is larger than that of $`S^s`$ implies that the spectra in the cube are more similar in overall antenna temperature than in velocity distribution. In other words, it appears that the compensatory parameter $`\ell `$ is more important for good correlation than is the scaling parameter $`s`$ in this case.

Examining Figure 3 and Table 3 closely, one can detect a clear pattern: from this set of examples, it appears that the more gravity matters, the larger the effect randomization has on the SCF distribution. The SCF distribution for the self-gravitating cloud (Heiles Cloud 2) shows a much larger change in response to randomization of spectral positions than does the unbound high-latitude cloud (Ursa Major). The simulation which most closely reflects the Heiles Cloud 2 response to randomization is the only one that includes self-gravity: Gammie et al. (1999). In the non-self-gravitating case, the Ursa Major SCF distributions show much less change in response to randomization than the Heiles Cloud 2 distributions, but more change than the distributions for the simulation presented as their “match” in Falgarone et al. 1994 (see Table 3). This trend is corroborated by a visual inspection of the grid of simulated spectra (Falgarone et al. 1994, Figure 1b), where the differences between neighboring spectra appear more pronounced than in the Ursa Major observations. Such behavior indicates that the original positions of the spectra in the simulated cube are not as essential to the cloud description as they are in Ursa Major.

As for the mean values of the SCF, it is acceptable to intercompare means for the five data sets used here because the signal-to-noise values have been equalized (see §2.2). The SCF means for the Mac Low et al. (1998) simulations seem the best match to the Heiles Cloud 2 data. The means for the simulation presented in Falgarone et al. 1994 are not very similar to those for the Ursa Major cloud, which Falgarone et al. claim is an excellent match for these simulations. In particular, the means for $`S`$ and $`S^{\ell }`$ in the simulation are nearly equal (∼0.6), as are those for $`S^s`$ and $`S^0`$ (∼0.52), meaning that lag adjustments are important, but scale adjustments are not. In the Ursa Major observations, all forms of the SCF give roughly 0.55, and this difference between lag and scaling is not seen.
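The role of the compensatory parameters can be made concrete with a small sketch. The function below (Python) minimizes the rms difference between two spectra over an optional lag and an optional least-squares scale factor, which is the idea behind the $`S`$, $`S^{\ell }`$, $`S^s`$, and $`S^0`$ variants; it is a schematic of the compensation step only, not the exact SCF normalization of Table 1.

```python
import numpy as np

def compensated_difference(T0, T1, allow_scale=True, allow_lag=True, max_lag=10):
    """Minimum rms difference between spectra T0 and T1 after optionally
    applying a scale factor s and a channel lag l.  Schematic only: np.roll
    wraps at the spectrum edges, and T1 is assumed not identically zero."""
    lags = range(-max_lag, max_lag + 1) if allow_lag else [0]
    best = np.inf
    for l in lags:
        T1_shift = np.roll(T1, l)
        # least-squares scale factor, or s = 1 if scaling is not allowed
        s = (T0 @ T1_shift) / (T1_shift @ T1_shift) if allow_scale else 1.0
        best = min(best, np.sqrt(np.mean((T0 - s * T1_shift) ** 2)))
    return best

# Fixing s and/or l (the S^l, S^s, S^0 analogues) can only raise the
# residual difference between two spectra:
v = np.linspace(-5, 5, 64)
T0 = np.exp(-0.5 * v ** 2)
T1 = 0.7 * np.exp(-0.5 * (v - 0.6) ** 2)
for scale, lag in [(True, True), (False, True), (True, False), (False, False)]:
    print(scale, lag, compensated_difference(T0, T1, scale, lag))
```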
### 3.4 Resolution and Sampling Effects

If two maps of similar regions have very different spatial resolution, the map with lower resolution will look smoother. One can imagine, however, that the spectra in the lower-resolution map will change more from one position to the next, since neighboring pixels then sample more widely separated gas, so that the mean SCF for the lower-resolution map will be lower if the SCF is run with the same size averaging box. Alternatively, one can imagine running the SCF with a very large averaging box on a high-resolution map, thus “smearing out” small differences in spectra that might show up in a smaller box.

To investigate issues of resolution, we have used the Heiles Cloud 2 data set in a numerical experiment. We re-ran the SCF many times, using a box size of 3 (the original), 5, 7, 9, 11, 13, and 15 pixels, to see what effect this “smoothing” would have on the SCF. Figure 4 presents the results of this experiment. The mean value of the SCF in Heiles Cloud 2 does drop for larger box sizes, but by a surprisingly small amount (see Figure 4). (For the Heiles Cloud 2 cube with positions randomized, the mean of the SCF is unaffected by changes in resolution, as expected.) The width of the SCF distributions for Heiles Cloud 2 with positions randomized also drops a bit for larger box sizes, illustrating that small spectral differences do eventually get smeared out, even when spectra are shuffled. Thus, for this one example, the effect of changing resolution is relatively small.

Other factors can also affect the true resolution, and resulting magnitude, of the SCF. The way the spectra in a map are sampled will affect the absolute values of the SCF measured. For example, in a Nyquist-sampled map, neighboring spectra are not independent and will necessarily yield higher values of the SCF than in a beam-sampled map. In this paper, we have tried to restrict ourselves to beam-sampled maps, but a correction for sampling should be added to the SCF algorithm in the future. Ultimately, it is the relationship between map resolution, averaging box size, map size, and physically important scales (e.g. Alfvén wave cutoff, outer scale of turbulence, Jeans length, etc.) that determines the SCF. The subtleties of these relationships’ influence on the SCF, and on other statistical techniques, are discussed in Padoan & Goodman 1999.

### 3.5 How discriminating is the SCF?

This paper is intended as a “proof-of-concept,” to show that the SCF is a discriminating tool for analyzing observed and simulated spectral-line maps. As discussed in the previous section, Falgarone et al. (1994) concluded that the simulated cube of Porter et al. 1994 is an extraordinary match to the <sup>12</sup>CO(2-1) observations of Ursa Major. This conclusion is based on Falgarone et al.’s analysis of combinations of statistical moments of distributions of antenna temperatures. We have shown here that, contrary to Falgarone et al.’s conclusions, the SCF can detect significant differences between these two data sets.

In addition, we have tested the SCF against the comparison method where simple histograms of the moments (centroid velocity, velocity width, skewness, and kurtosis) of the spectra in a map are used to intercompare data cubes. It turns out that this seemingly simple method has one major weakness: the values of the higher-order moments (skewness and kurtosis) are extremely sensitive to how one treats noise in the data cube. Some researchers (e.g. Padoan et al. 1999) choose to set a threshold of $`n\sigma `$ (where $`n`$ is usually 3 and $`\sigma `$ is the rms noise in a spectrum) before calculating moments. We find that the value of $`n`$ has profound effects on the skewness and kurtosis distributions for the whole cube. We also tried equalizing the signal-to-noise in a cube before computing moments; this gives results different from thresholding. Given these complications, we reserve further discussion of moment distribution comparisons for future work.
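The threshold sensitivity described above is easy to demonstrate. The sketch below (Python, with a made-up Gaussian line and noise level) computes intensity-weighted velocity moments of one spectrum for several values of $`n`$; the skewness and kurtosis change markedly with $`n`$ even though the underlying line does not.

```python
import numpy as np

def velocity_moments(v, T, sigma_rms, n):
    """Centroid, width, skewness, kurtosis of T(v) above an n*sigma threshold."""
    w = np.where(T > n * sigma_rms, T, 0.0)       # thresholded weights
    if w.sum() == 0:
        return None
    centroid = np.average(v, weights=w)
    var = np.average((v - centroid) ** 2, weights=w)
    skew = np.average((v - centroid) ** 3, weights=w) / var ** 1.5
    kurt = np.average((v - centroid) ** 4, weights=w) / var ** 2
    return centroid, np.sqrt(var), skew, kurt

# One synthetic spectrum: Gaussian line plus noise (parameters are made up).
rng = np.random.default_rng(1)
v = np.linspace(-10, 10, 256)
T = 2.0 * np.exp(-0.5 * ((v - 1.0) / 1.5) ** 2) + rng.normal(0.0, 0.2, v.size)

for n in (0, 1, 3, 5):
    print(n, velocity_moments(v, T, sigma_rms=0.2, n=n))
```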
For now, based on the re-comparison of the data and simulation in Falgarone et al. 1994, we have shown that the SCF does find differences where other methods do not.

## 4 Conclusions

The drop to lower correlation values when spectral positions are randomized shows the SCF is performing its primary function of quantifying the correlation between proximate spectra in a data cube. In this first paper, we have demonstrated that the SCF algorithm can find subtle differences between simulated and observational data cubes that may not be evident in other kinds of comparisons. Thus, application of the SCF will provide a “sharper tool” to be used in comparing simulated and observational data cubes in the future. There is far more information produced by the SCF algorithm than is presented here, and the full extent and import of that information will be explored in subsequent papers.

In the future, we plan to exploit the information generated by the SCF to examine a large variety of observational and simulated data cubes. Ultimately, our aim is to use the SCF to evaluate which physical conditions imposed on MHD simulations are necessary to produce the correlations observed in the ISM. The initial results presented here, showing stronger drops in spectral correlation in response to randomization in self-gravitating situations, hint that including self-gravity may be essential in the numerical recreation of star-forming regions. Physical effects other than gravity, such as large-scale shocks, should also have an “organizing” effect on the spatial distribution of spectra, and we intend to search for those effects with the SCF as well.

A more detailed account of the development of the SCF is given in Rosolowsky (1998), which is available on the internet (see http://cfa-www.harvard.edu/~agoodman/scf/SCF/scfmain.html). The SCF algorithm is written in IDL and is available for public use (http://cfa-www.harvard.edu/~agoodman/scf/distribution/).

We would like to thank Marc Heyer of FCRAO for the use of the Heiles Cloud 2 data set prior to publication. We also thank Derek Lis, who facilitated access to the simulated and observed data in Falgarone et al. (1994). Uros Seljak provided clear insights into the effects of instrumental noise which proved indispensable to our understanding, and we are grateful. Thanks are also due to Mordecai-Mark Mac Low for providing access to his group’s MHD simulations, and to Charles Gammie, who allowed access to the MHD simulations of Gammie, Ostriker, and Stone. Analysis of these additional simulations is available at our web site. Javier Ballesteros-Paredes provided valuable help with revisions to this paper. Finally, we are most grateful to an anonymous referee whose comments very significantly improved the integrity of this paper. This research is funded by grant AST-9721455 to A.G. from the National Science Foundation.

Figure 3 – Histograms of SCF values for observed and simulated data cubes. To facilitate comparison between the correlation functions, the histogram for each unrandomized cube has been normalized to the unit interval, and the histogram for each randomized cube has been normalized to the same integral as that of the unrandomized cube. The histogram shown in heavy print represents the correlations for the spectra in their original positions and the lighter line indicates the distribution for randomized positions.
Distributions are shown for: (a) the observed C<sup>18</sup>O map of the star-forming cloud Heiles Cloud 2 (deVries et al. 1998); (b) the observed <sup>12</sup>CO(2-1) map of the Ursa Major unbound high-latitude cloud (Falgarone et al. 1994); (c) the magnetic, non-self-gravitating simulation of Mac Low et al. (1998); (d) the non-magnetic, non-self-gravitating Porter et al. (1994) cube used by Falgarone et al. (1994); and (e) the magnetic, self-gravitating simulation of Gammie et al. (1999). Table 2 lists the properties of the data sets illustrated, and Table 3 compares the means and standard deviations of the distributions shown here. The four variants of the SCF are described in Table 1.

Figure 4 – The behavior of the SCF as a function of changing resolution. For the Heiles Cloud 2 data set with uniform signal-to-noise $`(T_A/\sigma )_c=5`$, the top panel shows the mean value of the SCF as a function of the size of the box over which the SCF is calculated. The bottom panel shows the $`1\sigma `$ width of the distribution of the SCF, for the same noise-equalized Heiles Cloud 2 data set, but with positions randomized. For any randomized cube, the mean of the SCF is independent of resolution, and only the width of the distribution changes, as shown. In creating these plots, runs using 3, 5, 7, 9, 11, 13, and 15 pixel square sampling areas for the SCF were used.
# A Search for Very Low-mass Stars and Brown Dwarfs in the Young $`\sigma `$ Orionis cluster

## 1 Introduction

Stellar clusters and associations offer a unique opportunity to study substellar objects in a context of known age, distance and metallicity; they are laboratories of key importance in understanding the evolution of brown dwarfs. Deep imaging surveys have revealed a large population of substellar objects in the Pleiades cluster (Rebolo, Zapatero Osorio & Martín 1995; Cossburn et al. 1997; Zapatero Osorio, Rebolo & Martín 1997a; Zapatero Osorio et al. 1997b; Bouvier et al. 1998), demonstrating that the formation of brown dwarfs extends down to masses of 0.035 $`M_{\odot }`$ (Martín et al. 1998a; see also Luhman, Liebert & Rieke 1997). The extension of these studies to other clusters, especially in the younger regions where we can reach objects with lower masses, is therefore very important for confirming and enlarging these results.

The Orion complex is recognized as one of the best sites for understanding star formation processes. Our knowledge of the young, low-mass stellar population in this star forming region has been enriched in recent years with the application of new search techniques for low-mass stars, for example H$`\alpha `$ surveys (Wiramihardja et al. 1989, 1991, 1993; Kogure et al. 1989) and the optical identification of X-ray sources detected by the ROSAT all-sky survey (RASS). These recent surveys provided a spatially unbiased sample of X-ray sources over the entire Orion complex (Sterzik et al. 1995; Alcalá, Chavarría-K., & Terranegra 1998; Alcalá et al. 1996). ROSAT pointed observations performed on Orion’s Belt (Walter et al. 1994) led to the discovery of a high concentration of X-ray sources near the bright O9.5V star $`\sigma `$ Orionis. This star belongs to the Orion 1b association, for which an age of 1.7–7 Myr and a distance modulus of 7.8–8 are estimated (Blaauw 1964; Warren & Hesser 1978, hereafter WH; Brown, de Geus & de Zeeuw 1994, hereafter BGZ). Follow-up photometry and spectroscopy (Wolk 1996) of these X-ray sources have revealed a population of low-mass stars defining a photometric sequence consistent with the existence of a very young cluster at an age of a few million years.

This cluster is ideally suited for the detection of very low-mass brown dwarfs and subsequently for investigating the initial mass function in the substellar regime. Additionally, the multiple star $`\sigma `$ Orionis is affected by an extinction of $`E(B-V)`$ = 0.05 mag (Lee 1968), and thus the associated cluster may exhibit very little reddening. At these early ages brown dwarfs are intrinsically more luminous (Burrows et al. 1997; D’Antona & Mazzitelli 1994), which makes their detection and study easier. For example, a 0.025 $`M_{\odot }`$ object is about 7 mag brighter in the absolute magnitude M<sub>I</sub> at the age of 5 Myr than at that of the Pleiades cluster (120 Myr) according to the recent tracks of Baraffe et al. (1998). In this paper we present the results of a deep photometric survey of the young cluster around the $`\sigma `$ Orionis star in search of its substellar population.
We also present follow-up low-resolution spectroscopy of some of our photometric candidates, and we discuss the possibility of using deuterium in studying very young substellar populations.

## 2 Observations

### 2.1 Photometry

We have obtained $`RIZ`$ images with the Wide Field Camera (WFC), mounted at the prime focus of the Isaac Newton Telescope (INT) at Observatorio del Roque de los Muchachos (ORM) on the island of La Palma, on 1997 November 29. The camera consists of a mosaic of four 2048$`\times `$2048 pixel<sup>2</sup> Loral CCD detectors, providing a pixel projection of 0.37 arcsec. At the time of our observations, one of the CCDs was not in operation, so the effective area of one single mosaic was 478 arcmin<sup>2</sup>. We observed two different regions in each of the three filters, covering a total area of 870 arcmin<sup>2</sup>. Figure 1 shows the location of the two mosaic fields (six CCDs) surveyed. Exposure times were 1200 s in all three filters for the east region and 2$`\times `$1200 s for the west region.

Raw frames were reduced within the IRAF environment (IRAF is distributed by National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc., under contract with the National Science Foundation), using the CCDRED package. Images were bias-subtracted and flat-fielded. Due to poor weather conditions at dusk and dawn, it was not possible to take good sky flats; we combined our long-exposure scientific images to obtain the flat-fields we finally used. The photometric analysis was performed using routines within DAOPHOT, which include the selection of stars using the DAOFIND routine (extended objects were mostly avoided) and aperture and PSF photometry. Observations at the INT were affected by cirrus; therefore no photometric standard stars were observed at this stage. Average seeing ranged from 1.3 to 2.0 arcsec. In order to transform our INT instrumental magnitudes into the Cousins $`RI`$ system, we made use of objects in common with images of our fields taken under photometric conditions with the IAC80 Telescope (Teide Observatory, on the island of Tenerife) in 1998 January. The IAC80 images were calibrated with standards of Landolt (1992). We estimate the 1$`\sigma `$ error in our calibration to be around 0.1 mag.

Due to variability in weather conditions at the INT and to the different sensitivity among the CCDs of the WFC, our limiting magnitudes differ slightly from one image to the next. The survey completeness magnitudes are $`R`$ = 20.5, $`I`$ = 19.5, and $`Z`$ = 19.2, while limiting magnitudes are $`R`$ = 23.2, $`I`$ = 21.8, and $`Z`$ = 21.0. In Table 1 we list the limiting and completeness magnitudes for each of the CCDs and regions observed.

For each CCD analyzed we constructed the $`I`$ vs. $`R-I`$ and $`I`$ vs. $`I-Z`$ color–magnitude (CM) diagrams. These proved useful in distinguishing between cool cluster-member candidates and field stars. Figure 2 shows the resulting $`I`$ vs. $`R-I`$ diagram of our survey, where we have plotted an arbitrary straight line separating our candidates from field stars. Cluster-member candidates are identified as objects brighter and redder than the Pleiades sequence shifted to the distance of $`\sigma `$ Orionis in both CM diagrams.
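Schematically, this selection reduces to a pair of color cuts. A minimal sketch (Python) of the decision applied to each object follows; the piecewise-linear Pleiades loci below are hypothetical placeholders for the actual shifted sequence, and only the logic of requiring redness in both diagrams is taken from the text.

```python
import numpy as np

def is_candidate(I, RmI, ImZ, pleiades_RmI, pleiades_ImZ):
    """True if the object lies redder than the shifted Pleiades sequence in
    BOTH the I vs. R-I and I vs. I-Z diagrams, within the magnitude range.

    pleiades_RmI, pleiades_ImZ: callables giving the Pleiades color at a
    given I after shifting to the sigma Orionis distance (illustrative)."""
    red_in_RmI = RmI > pleiades_RmI(I)
    red_in_ImZ = ImZ > pleiades_ImZ(I)
    in_range = 15.0 <= I <= 20.0          # selection interval of Table 2
    return red_in_RmI and red_in_ImZ and in_range

# Hypothetical piecewise-linear loci (placeholders for the true sequence):
I_grid = np.array([15.0, 17.0, 19.0, 20.0])
pleiades_RmI = lambda I: np.interp(I, I_grid, [1.6, 2.0, 2.3, 2.4])
pleiades_ImZ = lambda I: np.interp(I, I_grid, [0.5, 0.7, 0.9, 1.0])

print(is_candidate(18.2, 2.4, 1.0, pleiades_RmI, pleiades_ImZ))
```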
We have selected 46 very low-mass star and brown dwarf candidates with magnitudes in the interval $`I`$ = 15–20 mag that were found to be red in both CM diagrams. Table 2 lists our candidates with their magnitudes and coordinates (IAU designations are included). Those objects which do not appear red in both CM diagrams (indicated with different symbols in Fig. 2) have not been considered in the following discussions on cluster membership and, therefore, they are not listed in Table 2. Nevertheless, it has been shown that the $`R-I`$ color saturates at a given value ($`R-I`$ ∼ 2.4; Bessell 1991), becoming bluer for cooler objects. This value may be dependent on gravity, with younger objects saturating at slightly redder colors (see Martín, Rebolo, & Zapatero Osorio 1996; Bouvier et al. 1998). Thus, $`R-I`$ cannot be used by itself as a good indicator of very faint cluster members, and we rely on the $`I-Z`$ color for the selection of candidates. The objects in our survey with $`I`$ magnitudes fainter than 20 mag showing red $`I-Z`$ colors and $`(R-I)`$ ≳ 2.1 are listed in Table 3 (coordinates and IAU designations are given). This table may not be complete since this magnitude range is clearly beyond the completeness of our survey. Astrometry was carried out using the USNO-SA2.0 catalog (Monet et al. 1996) and the ROE/NRL catalog (Yentis et al. 1992), achieving a precision of around 1 arcsec. The spatial distribution of the candidates within the area surveyed is shown in Fig. 3, and finder charts in the $`I`$-band (3′$`\times `$3′ in extent) are provided in Fig. 4.

### 2.2 Spectroscopy

We have obtained low-resolution optical spectroscopy of nine of our photometric candidates (SOri 12, 17, 25, 27, 29, 39, 40, 44, and 45), using the 4.2-m William Herschel Telescope (WHT) at the ORM. The spectra were collected on 1997 December 28–30; the instrumentation used was the ISIS double-arm spectrograph (the red arm only), the R158R grating, and the TEK 1024$`\times `$1024 pixel<sup>2</sup> CCD detector, which provides a total spectral coverage of 635–920 nm and a nominal dispersion of 2.9 Å per pixel. The spectral resolution (FWHM) of the instrumental setup was 20 Å. Exposure times ranged from 900 to 2500 s depending on the magnitudes of the objects and on the weather conditions. Spectra were reduced by a standard procedure using IRAF, which included debiasing, flat-fielding, optimal extraction, and wavelength calibration using the sky lines appearing in each individual spectrum (Osterbrock et al. 1996). Finally, the spectra were corrected for the instrumental response making use of the standard star G 191-B2B, which has absolute flux data available in the IRAF environment.

All the spectra clearly correspond to late M-type objects, showing typical VO and TiO molecular bands. We have classified them by comparison to Pleiades members of known spectral types (Martín et al. 1996; Zapatero Osorio et al. 1997b), finding that our objects range from class M6 to M8.5. We have also obtained the pseudo-continuum PC1–4 indices (Martín et al.
1996) and found them to yield spectral types about half a subclass earlier than those of main-sequence field dwarfs. This is likely a gravity effect; nevertheless, the difference is within the estimated error bars of our measurements. The spectra of eight of the candidates (all except SOri 44) are shown in Fig. 5, where the clearest features are indicated.

## 3 Discussion

### 3.1 Contamination by other sources

Possible contaminating objects in our survey are red galaxies, M giants, and foreground field M dwarfs. Given the spatial resolution and completeness magnitudes of our observations, contamination by red galaxies is not a major problem, since these are mostly resolved and routines in IRAF can distinguish them from stellar point-like objects. Another source of contamination is M giants, but given the galactic latitude of the cluster ($`b`$ = –17.34 deg) their number is negligible (∼5%) in comparison with main-sequence dwarfs (Kirkpatrick et al. 1994). The most relevant source of contamination is field M dwarf stars in the line of sight towards the cluster. To estimate their number we have considered the results from searches covering a large area of the sky. Kirkpatrick et al. (1994) performed a 27.3 deg<sup>2</sup> survey, reaching a completeness magnitude of $`R`$ = 19 mag and finding space densities of 0.0022 pc<sup>-3</sup>, 0.0042 pc<sup>-3</sup>, and 0.0024 pc<sup>-3</sup> for M5–M6, M6–M7, and M7–M9 dwarfs, respectively. With these densities and the typical absolute magnitudes of late M dwarfs (Kirkpatrick & McCarthy 1994), we have calculated the number of these cool stars that might be populating the CM region which we ascribe to the cluster members. The result is that fewer than one M5–M6 dwarf, about one M6–M7 dwarf, and one M7–M9 dwarf should be contaminating our survey. Our selection criterion based on the three filters $`R`$, $`I`$, and $`Z`$ appears to be good enough to differentiate clearly between true members and contaminating field stars, but further studies in the near infrared, which is less affected by reddening, or low-resolution spectroscopy will tell us which objects are definitely bona fide cluster members.
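The contamination numbers above follow from a simple volume argument: a field dwarf of absolute magnitude $`M_I`$ falls inside the selection window only over a range of distances, and the expected count is the space density times the corresponding survey volume. A sketch of the arithmetic (Python) follows; the absolute magnitude and the use of the $`I`$ = 15–20 window below are our illustrative assumptions, not the exact inputs of the calculation.

```python
import numpy as np

def expected_field_dwarfs(density_pc3, M_I, I_bright, I_faint, area_arcmin2):
    """Expected number of field dwarfs of absolute magnitude M_I falling in
    the apparent-magnitude window [I_bright, I_faint] over the survey area."""
    d_near = 10.0 ** ((I_bright - M_I + 5.0) / 5.0)        # pc
    d_far = 10.0 ** ((I_faint - M_I + 5.0) / 5.0)          # pc
    omega = area_arcmin2 * (np.pi / (180.0 * 60.0)) ** 2   # steradians
    volume = (omega / 3.0) * (d_far ** 3 - d_near ** 3)    # cone shell, pc^3
    return density_pc3 * volume

# Illustrative numbers: M7-M9 dwarfs (density from Kirkpatrick et al. 1994),
# an assumed M_I ~ 13 mag, our I = 15-20 window, and the 870 arcmin^2 survey.
print(expected_field_dwarfs(0.0024, 13.0, 15.0, 20.0, 870.0))   # ~1 dwarf
```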
### 3.2 The $`\sigma `$ Orionis cluster: size, age, and distance

The existence of a cluster around the multiple star $`\sigma `$ Orionis was noted for the first time in the Lund Observatory Catalogue of Open Clusters (Lynga 1981), where it was designated by the name of the star. In later studies (Lynga 1983) about fourteen stars were given as members and the diameter of the cluster in the sky was estimated at 25 arcmin. The work by Wolk (1996) and Walter et al. (1997) covered an area of 900 arcmin<sup>2</sup> containing a rather dense population of X-ray sources, and found a homogeneous distribution of cluster candidates. In the future, a larger area will need to be surveyed to determine with precision the total region occupied by the cluster, since our current knowledge may be limited to the core.

Age is one of the most important parameters of a cluster, particularly in locating the substellar mass limit. Until recently, the only way to determine the age of $`\sigma `$ Orionis was via studies of the massive stars of the OB1b subgroup, resulting in age estimates in the range of 1.7–7 Myr (Blaauw 1991; WH; BGZ). The discovery by Wolk (1996) of a large low-mass population allowed him to compare H–R diagrams with theoretical isochrones and obtain an age for the cluster of about 2 Myr (Wolk & Walter 1999), in good agreement with estimates for the subgroup, and especially with the age given by the latest work of BGZ (1.7 Myr). This provides additional support for the assumption that the cluster belongs to the young Orion star forming region, and that its central star, $`\sigma `$ Orionis, is indeed a member of the cluster of the same name. Figure 6 shows our candidates together with theoretical isochrones from several authors ((a) Burrows et al. 1997; (b) D’Antona & Mazzitelli 1997; (c) Baraffe et al. 1998). The latter models provide magnitudes and colors directly; in order to transform the effective temperatures and luminosities of the models by Burrows et al. (1997) and D’Antona & Mazzitelli (1997) into the observational CM diagrams of Fig. 6, we have used the temperature–color scale and bolometric corrections from Bessell, Castelli & Plez (1998). The cluster candidates seem to imply average ages of ∼1 Myr to 3 Myr according to Burrows et al. (1997), and ∼1 Myr to 5 Myr according to D’Antona & Mazzitelli (1997). The evolutionary tracks by Baraffe et al. (1998) used in Fig. 6(c) are those named “dusty” models by these authors, and they include dust condensation and opacity in the atmospheres of cool objects. While these models predict effective temperatures and luminosities very similar to the other two sets of isochrones, the predicted magnitudes and colors do not fit the observations.

Alternative methods for deriving ages could give a more precise age for $`\sigma `$ Orionis. In particular, lithium-luminosity (LL-clock) dating (see e.g. Martín & Montes 1997; Basri 1998), which provides a very good age determination in the case of the Pleiades and $`\alpha `$ Persei clusters (Martín et al. 1998b; Stauffer, Schultz & Kirkpatrick 1998; Basri & Martín 1999), can be useful in this case, as will be discussed in Section 3.5.

The distance is another important parameter that needs to be known for a cluster. Measurements available in the literature give a distance modulus of 7.8–8 (WH; BGZ) for the subgroup. More recently, Hipparcos has provided a distance to the central star of the cluster, $`\sigma `$ Orionis, of 352 pc ($`m-M`$ = 7.73), slightly smaller than previous results but in good agreement with them. We have adopted this value as the cluster distance.
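For reference, the adopted distance and distance modulus are consistent through the standard relation (a check we include here for clarity):

$$m-M=5\mathrm{log}_{10}\left(\frac{d}{10\mathrm{pc}}\right)=5\mathrm{log}_{10}(35.2)7.73.$$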
### 3.3 Spectroscopy of brown dwarf candidates

From the nine objects studied spectroscopically, eight appear to be very probable young cluster members, which implies a high efficiency (89%) for our photometric search strategy. All eight confirmed candidates (listed in Table 4) show spectral types later than M5 (three M6, two M6.5, two M7, and one M8.5). The rejected candidate (SOri 44) is hotter (M6.5) than expected for its $`I`$ magnitude and does not show spectral features indicative of the youth of the $`\sigma `$ Orionis cluster, such as H$`\alpha `$ in emission. In Table 4 we give the strengths of the H$`\alpha `$ and Na i lines.

In the Pleiades and $`\alpha `$ Persei clusters the M6.5 spectral type marks the substellar mass limit (Martín et al. 1998b; Basri & Martín 1999), with an uncertainty of about half a subclass. As very low-mass stars and massive brown dwarfs evolve at nearly constant temperature ($`\pm `$200 K) from several million years to nearly 100 Myr (Baraffe et al. 1998; Burrows et al. 1997; D’Antona & Mazzitelli 1994), we expect the substellar mass limit to be located at a similar spectral type for ages of a few Myr (Luhman et al. 1998a). This should be taken with caution if the relationship between effective temperature and spectral type for low-gravity objects is different from that of dwarfs or of 100 Myr old objects. All our spectroscopically confirmed candidates with spectral types later than M7 are therefore very likely brown dwarfs.

In Fig. 5 we can see that the eight objects show indications of strong activity, with higher H$`\alpha `$ emission than Pleiades objects of the same spectral type. This argues in favor of a younger age, since activity is expected to decrease with age. We also see variations in the emission of objects of similar type, a kind of behavior already seen in late M objects of young clusters like the Pleiades, IC 348 or Taurus (Zapatero Osorio et al. 1997b; Luhman et al. 1998b; Briceño et al. 1998). Spectral features associated with Na i and K i can be seen in some spectra, while in others they are too weak and we can only set upper limits. The smaller equivalent width (EW) of Na i in our $`\sigma `$ Orionis candidates (except for SOri 44, which shows an absorption typical of ages older than 100 Myr) with respect to Pleiads and older field objects of the same spectral type may be a result of the lower surface gravity of these very young objects (Martín et al. 1996; Luhman et al. 1997; Briceño et al. 1998).

### 3.4 The coolest brown dwarf in the $`\sigma `$ Orionis cluster

Our candidate of latest spectral type (M8.5), SOri 45, deserves special attention since it is among the coolest and therefore among the least massive objects in our sample. This very young brown dwarf candidate has an effective temperature in the range between 2100 and 2500 K, as derived from different temperature scales for dwarfs available in the literature (Tinney et al. 1993; Kirkpatrick 1995; Jones et al. 1996; Leggett et al. 1998; Bessell et al. 1998). However, we note that the temperature scale for giants is several hundred degrees warmer, and an upward correction in the estimated temperature of about 100–200 K (Luhman et al. 1997) may be required. The luminosity of the object can be obtained from the $`I`$ magnitude using the bolometric corrections of various authors (Monet et al. 1992; Tinney et al. 1993; Kirkpatrick, Henry & Simons 1995; Bessell & Stringfellow 1993; Bessell et al. 1998). We derived an average correction factor $`BC_\mathrm{I}=-1.1\pm 0.1`$. From a comparison of the colors of our objects with those of objects of the same spectral type in the Pleiades and field stars, we do not find any significant reddening ($`A_V`$ ≲ 0.5 mag).
So, assuming that the extinction is negligible, and taking as the distance modulus of the cluster $`m-M`$ = 7.73, we obtain a luminosity for our object of $`\mathrm{log}L/L_{\odot }=-2.40\pm 0.15`$. If we adopt 10 Myr as an upper limit for its age, then according to theoretical calculations of luminosities for young ages (Baraffe et al. 1998; D’Antona & Mazzitelli 1997; Burrows et al. 1997) we infer a mass of 0.020–0.040 $`M_{\odot }`$ for SOri 45, and for the most likely age of the cluster, 2–5 Myr, a mass of only 0.020–0.025 $`M_{\odot }`$. It is therefore one of the least massive objects found to date outside the Solar System. Follow-up IR and/or spectroscopic observations may reveal even less massive objects among the faintest $`\sigma `$ Orionis cluster photometric candidates of Table 3. The uncertainties in the conversion from magnitudes to luminosities and in the theoretical modeling of such low-mass objects at very early ages are considerable, hence this mass estimate is to be treated with caution. Nevertheless, our results leave little doubt concerning the extension of the star formation process very deep into the brown dwarf domain. In Table 4, luminosities and masses derived using the same procedure are listed for the rest of the eight candidates for which spectra were obtained.

In Fig. 7 we compare our least massive brown dwarf candidate of Table 2, SOri 45, with other objects of spectral type M8.5 from Ophiuchus, the Pleiades, and the field. The stronger H$`\alpha `$ emission and the much lower strength of the K i and Na i lines are noteworthy (Figs. 7(b, c, d)). As mentioned above, this is to be expected for a very young object. Another interesting feature is that the VO and TiO molecular bands are clearly less intense in the field star than in SOri 45 (see the spectral regions 660–760 nm and 840–880 nm, Fig. 7(c)). This is possibly a consequence of the higher gravity of the older systems, which favors the condensation of dust grains in cold atmospheres (Tsuji, Ohnaka, & Aoki 1996). In Fig. 7(a) we show a comparison with the spectrum of the M8.5 object found by Luhman et al. (1997) in Ophiuchus. The similarity of the two spectra is remarkable. These two objects, although discovered in star forming regions far away from each other, appear to be extremely similar, suggesting that low-mass brown dwarfs could be quite common.

Old counterparts of these substellar objects may be populating the galactic disk. At the age of a few Gyr their atmospheric temperatures will be similar to, or likely lower than, that of Gl 229B (∼1000 K), so it is expected that they present spectroscopic characteristics intermediate between those of Gl 229B and Jupiter. Important questions that remain unanswered are how many objects of this kind there are, and whether even less massive ones can form. These questions are obviously related to our knowledge of the mass function at such low masses, but are beyond the scope of this article.
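Returning to the luminosity derivation for SOri 45 given above, it amounts to three steps: apply the bolometric correction to the observed $`I`$ magnitude, subtract the distance modulus, and compare with the solar bolometric magnitude. A sketch (Python) follows; the input magnitude, the sign convention $`m_{\mathrm{bol}}=I+BC_I`$, and $`M_{\mathrm{bol},\odot }=4.74`$ are our assumptions for illustration.

```python
def log_luminosity(I_mag, BC_I=-1.1, dist_mod=7.73, M_bol_sun=4.74):
    """log10(L/Lsun) from an I magnitude, assuming negligible extinction.
    Convention assumed: m_bol = I + BC_I, so a negative BC_I makes the
    bolometric magnitude brighter than I."""
    m_bol = I_mag + BC_I          # apparent bolometric magnitude
    M_bol = m_bol - dist_mod      # absolute bolometric magnitude
    return (M_bol_sun - M_bol) / 2.5

# Illustrative: an object of I ~ 19.6 mag at the sigma Orionis distance
# gives log L/Lsun ~ -2.4, as derived for SOri 45 in the text.
print(round(log_luminosity(19.57), 2))   # -> -2.4
```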
Although we believe that the area covered and the number of objects are significant, at this juncture this issue presents several difficulties, mainly due to the uncertainty in the cluster age; small changes in the adopted age imply a significantly different mass–luminosity relationship and, consequently, large variations in the slope of the mass function. To this difficulty we must add the uncertainty in the membership of our candidates, especially the faintest ones, which might be more contaminated by reddened field stars. Nevertheless, if we adopt the age (about 3 Myr) of the model which provides the best fit to the optical photometric sequence of $`\sigma `$ Orionis, we derive that the number of brown dwarfs per unit mass grows through the substellar domain down to around 0.040 $`M_{\odot }`$, which is in agreement with what is seen in the Pleiades (Martín et al. 1998c; Bouvier et al. 1998).

### 3.5 Lithium and deuterium depletion: prospects of identification of substellar objects and age determination

Young stellar clusters such as $`\sigma `$ Orionis are excellent laboratories for studying the depletion of light elements such as deuterium, lithium, beryllium, and boron, since these elements are burned in the early stages of pre-main sequence stellar evolution and their atmospheric abundances drastically change in this phase. As mentioned before, a detailed knowledge of lithium burning at the bottom of the main sequence has provided an alternative method of age determination for the young clusters $`\alpha `$ Persei and the Pleiades. This method appears to be more reliable than traditional ones based on evolutionary models of the more massive stars in the clusters, which are limited by our poor knowledge of the interiors of these stars. Lithium dating relies on the fact that in fully convective low-mass stars lithium burning takes place over a very short time interval (a few Myr) once the temperature in the core is high enough to produce the destructive reaction <sup>7</sup>Li(p,$`\alpha `$)<sup>4</sup>He. The lower the mass of the star, the greater the age at which it starts lithium burning; in any case this age is always smaller than a few tens of Myr. Brown dwarfs less massive than 0.060–0.065 $`M_{\odot }`$ never burn lithium because their cores do not reach the minimum burning temperature. The presence of lithium in the atmosphere of low-luminosity, fully convective objects is a clear indication that the maximum internal temperature is below the lithium burning value, and that the object is substellar. However, this simple criterion can be applied only if the object is older than 150 Myr. Those objects with masses above 0.065 $`M_{\odot }`$ do destroy lithium while they are young (≲150 Myr), and this fact can be used for dating clusters. A detailed inspection of evolutionary models confirms that the transition between lithium depletion and preservation in the atmospheres of low-mass objects takes place at higher masses and luminosities as we consider younger ages. Given the youth of $`\sigma `$ Orionis, this transition should occur in early/mid M-type stars. If the age were indeed 10 Myr we would expect stars in the mass range 0.9–0.4 $`M_{\odot }`$ ($`-1.0\lesssim \mathrm{log}L/L_{\odot }\lesssim -0.5`$) to have burned most of their original lithium content, whereas if the cluster is as young as 2–5 Myr, neither stars nor massive brown dwarfs would have had sufficient time to reach the interior temperatures needed to start lithium burning (D’Antona & Mazzitelli 1997; Soderblom et al. 1998).
It is consequently very important to search for lithium in the early/mid M-type stars of the cluster.

The Deuterium Test

Deuterium behaves similarly to lithium: it is a light element that is destroyed in stellar interiors when the temperature reaches 0.8$`\times `$10<sup>6</sup> K (Ventura & Zeppieri 1998), i.e., much below the minimum temperature for lithium burning. Therefore, deuterium burning takes place at much earlier ages and extends to less massive substellar objects (0.015–0.018 $`M_{\odot }`$; Burrows et al. 1993; D’Antona & Mazzitelli 1997; Burrows et al. 1997). Brown dwarfs with masses larger than 0.02 $`M_{\odot }`$ efficiently destroy deuterium in the age range 1–10 Myr, and this destruction takes place on a very short time scale. Figure 8 represents the evolution of the deuterium abundance as a function of age for several masses. Only objects below 0.02 $`M_{\odot }`$ can preserve their original abundance on time scales of 10 Myr, since it will take longer for them to burn any deuterium. During the deuterium-burning phase the luminosity and effective temperature stay almost constant (Burrows et al. 1993; D’Antona & Mazzitelli 1994).

All the physical arguments discussed above for lithium can be applied to deuterium. There is a transition between objects which have burned deuterium and those which have preserved it, which in principle can provide an age determination method. In Fig. 8, we can note, as an example, how an object of 0.07–0.08 $`M_{\odot }`$ with an age of 3 Myr has destroyed a significant amount of its initial deuterium abundance, whereas a 0.03 $`M_{\odot }`$ object does not deplete its deuterium (by a factor larger than 10) until an age of 7 Myr. If $`\sigma `$ Orionis were as young as 2–3 Myr, the transition between deuterium burning and preservation would take place at $`\mathrm{log}L/L_{\odot }=-1.6`$, which approximately coincides with the substellar mass limit (0.070–0.075 $`M_{\odot }`$), while at an age of 10 Myr the deuterium preservation would take place at $`\mathrm{log}L/L_{\odot }\sim -3`$ (D’Antona & Mazzitelli 1997). We have considered theoretical models with the interstellar abundance of deuterium (see e.g. Linsky 1998) for this discussion. If the initial abundance of this isotope is changed by 50%, the time scale for deuterium depletion is correspondingly affected by a factor of ∼1.5. For a given age, the lower the abundance of deuterium, the lower the luminosity and the mass at the deuterium depletion/preservation boundary.

It is interesting to note that at the approximate age of 4 Myr, all low-mass stars will have destroyed their deuterium by several orders of magnitude; therefore, the simple detection of this isotope in an older object would manifest its substellar nature. The detection of deuterium, then, complements that of lithium as a test of substellarity for young objects and provides a way of confirming the substellar nature of objects which, due to their young ages, cannot be confirmed as such by the lithium test. We shall also note an alternative way to make use of the potential of deuterium observations: any object older than 1 Myr, with a luminosity below $`\mathrm{log}L/L_{\odot }=-1.5`$, in which deuterium is preserved must be substellar.
The detection of deuterium is difficult and presents an important challenge from the observational point of view. Deuterium has been detected in the planets and comets of the Solar System, among other astrophysical sites. The observations were made using dipole lines of monodeuterated hydrogen in the visible (Macy & Smith 1978) and rotational bands of monodeuterated methane at 1.6 $`\mu `$m (De Bergh et al. 1986, 1988, 1990). Implications for primordial deuterium could be deduced if detection were achieved in young brown dwarfs, or in those less massive than 0.015 $`M_{\odot }`$. The very low-mass $`\sigma `$ Orionis cluster members are in the phase of burning this light element. Figure 9 shows our candidates in an H–R diagram, which includes the frontier between the depletion (by a factor of 10) and preservation of deuterium for different ages. To convert $`I`$ magnitudes and colors to luminosities and temperatures, we have used the same references given in §3.4. As we can see, if the cluster were indeed as young as has been claimed (around 3 Myr), all objects with luminosities below $`\mathrm{log}L/L_{\odot }=-1.7`$ and temperatures cooler than ∼2700 K (about M6–M7 spectral types) could preserve their deuterium, while if the cluster were as old as 10 Myr, this could happen only to those with the latest types. Objects like SOri 45 are then the most promising candidates for detecting deuterium. Observations of deuterium in a substantial number of $`\sigma `$ Orionis members could also be used to constrain any age spread within the cluster.

## 4 Conclusions

We have performed a deep $`R`$, $`I`$, and $`Z`$ survey in the very young cluster $`\sigma `$ Orionis covering an area of 870 arcmin<sup>2</sup> and have found objects with masses as small as 0.020–0.025 $`M_{\odot }`$, well below the substellar mass limit. Our selected 49 candidates define a photometric sequence ranging in age from ∼1 Myr to 5 Myr, which is in agreement with previous results for the OB1b subgroup, with which the multiple star $`\sigma `$ Orionis is associated. We have confirmed spectroscopically the cool nature of eight of these objects (spectral types M6–M8.5), which show spectral features indicative of stronger activity and lower gravity than previously known members of similar types in older clusters. Our latest-type candidate, SOri 45 (M8.5), is one of the least massive objects known to date, with a best estimate of its mass at 0.020–0.025 $`M_{\odot }`$ for the age of 2–5 Myr. An upper limit of 0.040 $`M_{\odot }`$ is estimated if the age of the cluster were as old as 10 Myr.

The detection of the old counterparts of these brown dwarfs in the solar neighborhood will represent a challenge for future searches, as they will cool down to very low atmospheric temperatures ($`T_{\mathrm{eff}}`$ ≲ 1000 K). At the age of the Sun, a brown dwarf of 0.020 $`M_{\odot }`$ may have absolute magnitudes of around $`M_J`$ ∼ 20–21 and $`M_K`$ ∼ 22–23 mag (Burrows et al. 1997). Given the limiting magnitudes of the current large-scale infrared surveys like DENIS (Delfosse et al. 1999) and 2MASS (Beichman et al. 1998), the detection of such low-mass brown dwarfs would be possible if they were located at distances closer than about 3 pc.
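The distance figure of a few pc follows from running the distance-modulus arithmetic in reverse: given a limiting magnitude $`m_{\mathrm{lim}}`$ and an absolute magnitude $`M`$, the maximum detection distance is $`d=10^{(m_{\mathrm{lim}}-M+5)/5}`$ pc. A sketch (Python) follows; the $`J`$-band limit used below is a rough assumed value for illustration, not an official survey sensitivity.

```python
def max_distance_pc(m_lim, M_abs):
    """Maximum distance (pc) at which an object of absolute magnitude M_abs
    appears brighter than the survey limiting magnitude m_lim."""
    return 10.0 ** ((m_lim - M_abs + 5.0) / 5.0)

# Assumed illustrative limit: J ~ 16.5 for a deep near-IR survey, with
# M_J ~ 20-21 for an old 0.02 Msun brown dwarf (Burrows et al. 1997).
for M_J in (20.0, 21.0):
    print(M_J, round(max_distance_pc(16.5, M_J), 2), "pc")
```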
The study of light elements like lithium and deuterium in brown dwarfs of young clusters may provide a precise tool for determining ages. In particular, the least massive brown dwarfs that we have found in $`\sigma `$ Orionis should have preserved their deuterium, and it is worthwhile investigating possible ways of achieving its detection.

Acknowledgments: We thank A. Oscoz for acquiring data at the IAC-80 Telescope necessary for the calibration photometry of the WFC observations. We thank K. Luhman for kindly providing the M8.5 spectrum in $`\rho `$ Ophiuchi, and I. Baraffe, the Lyon group, and F. D’Antona for sending us electronic versions of their recent models. This work is based on observations obtained at the INT and WHT, operated by the Isaac Newton Group of Telescopes, funded by PPARC, at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias, and at the IAC80 Telescope at the Observatorio del Teide (Tenerife, Spain). Partial financial support was provided by the Spanish DGES project no. PB95-1132-C02-01.
# The NMR of High Temperature Superconductors without Anti-Ferromagnetic Spin Fluctuations

Jamil Tahir-Kheli

First Principles Research, Inc., 8391 Beverly Blvd., Suite #171, Los Angeles, CA 90048, www.firstprinciples.com

Abstract

A microscopic theory for the NMR anomalies of the planar Cu and O sites in superconducting La<sub>1.85</sub>Sr<sub>0.15</sub>CuO<sub>4</sub> is presented that quantitatively explains the observations without invoking anti-ferromagnetic spin fluctuations on the planar Cu sites, thereby avoiding their significant discrepancy with the observed incommensurate neutron spin fluctuations. The theory is derived from the recently published ab-initio band structure calculations that correct the tendency of LDA computations to overestimate the self-Coulomb repulsion for the half-filled Cu $`d_{x^2-y^2}`$ orbitals in these ionic systems. The new band structure leads to two bands at the Fermi level, with holes in the Cu $`d_{z^2}`$ and apical O $`p_z`$ orbitals in addition to the standard Cu $`d_{x^2-y^2}`$ and planar O $`p_\sigma `$ orbitals. This band structure is part of a new theory for the cuprates that explains a broad range of experiments and is based upon the formation of Cooper pairs comprised of a $`k`$ electron from one band and a $`-k`$ electron from another band (Interband Pairing Model).

Introduction

All current explanations<sup>1,2</sup> of the dramatically different NMR behaviors of the Cu and O nuclei, separated by only ∼2.0 Å in the CuO planes of high temperature superconductors, are based on the existence of anti-ferromagnetic (AF) spin fluctuations on the Cu sites. The NMR difference is attributed to the delicate cancellation of these AF fluctuations on the O sites. The success of such models depends upon the AF spin correlation having a large peak very close to or at wavevector $`(\pi ,\pi )`$. Neutron scattering experiments have detected a spin correlation peak at incommensurate wavevectors $`(\pi \pm \delta ,\pi )`$ and $`(\pi ,\pi \pm \delta )`$ with $`\delta \approx 0.2\pi `$, spoiling the initial success of the models.<sup>3</sup> The models can be corrected by adding in next-nearest-neighbor hyperfine couplings of the Cu atoms to the O sites to cancel the incommensurate fluctuations,<sup>4</sup> but the required hyperfine couplings are chemically too large. Finally, these models suffer from the lack of a microscopic derivation of the wavevector, temperature, and doping dependence they require the spin fluctuation function (i.e., the spin susceptibility $`\chi (q,kT)`$) to satisfy in order to fit experiments. Thus, we regard expressions for $`\chi (q,kT)`$ as empirically devised to fit the NMR experimental data.

Recently, we proposed an Interband Pairing model (IBP)<sup>5,6</sup> for superconductivity that can explain the different Cu and O NMR without invoking AF fluctuations or an assumed functional form for the spin susceptibility. In IBP, the incommensurate spin fluctuation peaks observed by neutron spin scattering arise naturally from the microscopically computed three-dimensional (3D) band structure (but not from the 2D band structure we computed previously), yet they do not lead to the NMR problems of AF spin fluctuation models. The IBP model is based on the idea that in the vicinity of special symmetry directions, Cooper pairs comprised of a $`k`$ electron from one band and a $`-k`$ electron from a different band are formed (interband pairs) and couple to standard BCS-like Cooper pairs ($`k`$ and $`-k`$ from the same band) elsewhere in the Brillouin zone.
In particular, for LaSrCuO the crossing occurs between a Cu $`d_{x^2-y^2}`$ band and a Cu $`d_{z^2}`$ band along the diagonals $`k_x=\pm k_y`$. Such a theory requires the existence of two bands near the Fermi level that can cross, with optimal doping associated with the bands crossing at the Fermi level. Local Density Approximation (LDA) band structure computations done over a decade ago<sup>7</sup> and accepted implicitly by the physics community<sup>8</sup> as the correct starting point for developing theories of the cuprates find only a single Cu $`d_{x^2-y^2}`$ and O $`p_\sigma `$ antibonding band at the Fermi level, with all other bands well above or below the Fermi energy. As we argued previously,<sup>6</sup> such calculations are plagued by improperly subtracting only $`(1/2)J`$ for a half-filled band from the $`d_{x^2-y^2}`$ orbital energy rather than subtracting a full $`J`$, as required by the strong correlation in such ionic systems, where $`J`$ is the $`d_{x^2-y^2}`$ self-Coulomb repulsion. LDA calculations therefore artificially raise the $`d_{x^2-y^2}`$ and $`p_\sigma `$ antibonding band above the other Cu $`d`$ bands. When we correct the orbital energy evaluation,<sup>6</sup> we find that in addition to the Cu $`d_{x^2-y^2}`$ and O $`p_\sigma `$ orbitals, holes are created in the Cu $`d_{z^2}`$ and apical O $`p_z`$ orbitals, leading to two bands at the Fermi level.

The IBP model has had great success explaining a broad spectrum of diverse high-$`T_c`$ experimental observations. These include the Hall effect, d-wave Josephson tunneling with coupling due to phonons, the doping sensitivity of the cuprates, resistivity, and the NMR (within the context of a 2D band structure with an approximate 3D structure added on).<sup>5</sup> Most recently, we have shown<sup>9</sup> that our band crossing at the Fermi level for optimal doping prevents the electron gas from adequately screening the attractive electron-phonon coupling, as occurs in BCS superconductors, leading to a simple explanation for the observed $`T_c`$ values in excess of standard BCS limits. In addition, the incommensurate neutron spin scattering and the anomalous mid-infrared absorption peak arise in a straightforward manner from our 3D bands.<sup>10</sup> Finally, angle-resolved photoemission spectroscopy (ARPES) and, in particular, the observed so-called pseudo-gap (a gap on the Fermi surface in the normal state) for underdoped cuprates and its disappearance for overdoping have been explained as due to the rapid change in the orbital characters of the two bands near the energy of the band crossing and the fact that $`k`$ states with primarily $`d_{z^2}`$ character do not have resolvable quasiparticle peaks in the ARPES spectra.<sup>11</sup> This leads to the crossover surface between dominant $`d_{x^2-y^2}`$ and $`d_{z^2}`$ characters being incorrectly identified as the Fermi surface in underdoped systems, and hence to the erroneous conclusion that a pseudo-gap has opened on the Fermi surface above $`T_c`$. The reason for the lack of a sharp quasiparticle peak in the ARPES spectra for $`d_{z^2}`$ electrons is that there is no great anisotropy between its dispersion in the CuO planes and its dispersion normal to the planes. This is in contrast to the almost 2D dispersion of $`d_{x^2-y^2}`$, and it leads to a large linewidth for $`d_{z^2}`$ from the intermediate excited photo-electron state that is added to the physically interesting linewidth of the initial electron state.
In this paper, we derive the key NMR observations from the detailed 3D band structure we obtained recently.<sup>12</sup> This explanation supersedes the NMR discussion in our previous work, which was based upon a 2D band with an approximate 3D dispersion added on, and it differs significantly in the details. The essential features remain the same. These are:

1.) the interesting region of the Brillouin zone (BZ) for the upper band is the vicinity of the saddle point density of states (DOS) at $`(\pi /a,0)`$, or $`(\pi /a,0,\pi /c)`$ for the 3D zone, and the relevant region for the lower band is near $`(\pi /a,\pi /a)`$, which is at the top of this band. Both the saddle point and the top of the lower band at $`(\pi /a,\pi /a)`$ are close to the Fermi energy (less than or on the order of ∼0.08 eV).

2.) the character of the lower band $`k`$ states near $`(\pi /a,\pi /a)`$ has reduced O $`p_\sigma `$ and $`2s`$ character. The reduced O $`p_\sigma `$ is due to the O sites forming a bonding combination in order to couple to $`d_{z^2}`$ at $`(\pi /a,\pi /a)`$. O $`2s`$ is reduced because it cannot couple to $`d_{z^2}`$ and $`d_{x^2-y^2}`$ by symmetry.

3.) the Cu spin relaxation anisotropy of ∼3.4 for magnetic fields in the plane versus perpendicular to the CuO planes is due to the small amount of Cu $`d_{xy}`$ character and its spin-orbit coupling to $`d_{x^2-y^2}`$.

A new piece of chemistry appears in this paper in order to produce the small increase (∼0.1%) in the O spin relaxation rate over temperature $`(1/T_1T)`$ from 50 K to 300 K that was not required in the 2D model. That is the Jahn-Teller ∼5° alternating tilt of the CuO<sub>6</sub> octahedra, reducing the crystal point group from $`D_{4h}`$ to $`D_{2h}`$ and changing the Bravais lattice from body-centered tetragonal to one-face-centered orthorhombic.<sup>7</sup> The distortion splits the saddle point peak in the DOS at wavevectors $`(\pi /a,0,\pi /c)`$ and $`(0,\pi /a,\pi /c)`$ into two peaks. Hume-Rothery and Jahn-Teller type arguments suggest the material will self-adjust to place its Fermi level between these two peaks because the unperturbed saddle point singularity is so close to the Fermi energy. We argue, but do not compute, that the distortion leads to the O $`1/T_1T`$ increase with $`T`$, and we suggest this is the reason these systems have a tendency to self-dope to optimal doping for the highest $`T_c`$.

3D Band Structure

The 3D Fermi surface for optimally doped La<sub>1.85</sub>Sr<sub>0.15</sub>CuO<sub>4</sub> is shown in figures 1(a–d) at various fixed $`k_z`$ values.<sup>12</sup> The range of values of $`k_z`$ is $`-2\pi /c<k_z<2\pi /c`$, and $`k_x`$, $`k_y`$ vary between $`-\pi /a`$ and $`\pi /a`$, where $`a=3.8`$ Å and $`c=13.2`$ Å are the doubled rectangular unit cell parameters. As $`k_z`$ is increased from $`0`$ to $`\approx 1.18\pi /c`$, the Fermi surface is very similar to the standard LDA one-band result, with a hole-like surface centered around $`(\pi /a,\pi /a)`$. At $`k_z\approx 1.18\pi /c`$, the top of the lower band is reached, and as $`k_z`$ is increased further, a second hole-like Fermi surface appears centered around $`(0,0)`$. This arises from the 3D coupling of the apical O $`p_z`$ orbitals in one layer to the neighboring apical O $`p_z`$ orbitals in another layer. At $`k_z\approx 1.54\pi /c`$, the two Fermi surfaces touch along the diagonal, which is the only symmetry-allowed crossing.
Increasing $`k_z`$ further to its upper limit of $`2\pi /c`$ splits the two surfaces into three, with a hole-like surface centered around the diagonal and two electron-like surfaces centered at $`(\pi /a,0)`$ and $`(0,\pi /a)`$. Figures 2(a–d) show the total density of states (DOS) and the bare DOS for Cu $`d_{z^2}`$, $`d_{x^2-y^2}`$, and O $`p_\sigma `$. Note the large DOS peak just below the Fermi level due to the almost pure 2D character of the bands at $`(\pi /a,\pi /a)`$. At $`(\pi /a,\pi /a)`$, the lower band is composed of $`d_{z^2}`$ and the bonding combination of $`p_\sigma `$. This is the most unstable $`d_{z^2}`$ state at this $`k`$ vector. There is no $`d_{x^2-y^2}`$ or O 2s character, due to symmetry. Because the $`p_\sigma `$ orbitals are in a stabilizing bonding combination, the most unstable $`d_{z^2}`$ state will not have much $`p_\sigma `$ character at all. Thus, the $`k`$ states that contribute to the peak in the DOS just below the Fermi level in the vicinity of $`(\pi /a,\pi /a)`$ have very little $`d_{x^2-y^2}`$, $`p_\sigma `$, and O 2s character. The bare DOS for a given band is defined as the product of the average orbital character for the band at a given energy times the DOS of the band. The total bare DOS for each orbital is the sum of the bare DOS from each band, and is the relevant quantity for the NMR. Physically, the total bare DOS of an orbital is the number of electrons in the orbital per unit of energy. The most important thing to notice from these figures is the difference in the behaviors of the Cu $`d_{z^2}`$ bare DOS versus $`d_{x^2-y^2}`$ and O $`p_\sigma `$. $`d_{z^2}`$ is very sensitive to the large DOS that arises from the lower band near $`(\pi /a,\pi /a)`$, whereas both the total bare DOS for $`d_{x^2-y^2}`$ and that for $`p_\sigma `$ are not. This is the fundamental reason for the difference in the Cu and O spin relaxation rates. Figures 3(a–c) show the orbital character for each band. One can see that the two bands trade off their orbital characters when the bands cross. ## The Cu and O NMR We use standard expressions for the nuclear spin relaxation rates due to delocalized electrons that are well described by simple Bloch states forming bands.<sup>13,14</sup> All of the relevant expressions for computing the spin relaxation rates on the planar Cu and O sites due to $`d_{x^2-y^2}`$, $`d_{z^2}`$, and $`p_\sigma `$ are explicitly written down in reference 5. We do not reproduce them all here. Instead, we will write down the general form of the expression (equation (61) in the above reference) to clarify our discussion of the results. The general expression for the spin relaxation rate in the cuprates, where we neglect the contribution from the Cu 4s and O 2s contact terms and the core polarization, is, $$\frac{1}{T_1}=2\left(\frac{2\pi }{\hbar }\right)(\gamma _e\gamma _n\hbar ^2)^2\int dϵ\,f(ϵ)(1-f(ϵ))\left\langle \frac{1}{r^3}\right\rangle ^2[W_{\mathrm{dip}}(ϵ)+W_{\mathrm{orb}}(ϵ)],$$ $`(1)`$ where $`f(ϵ)`$ is the Fermi-Dirac function, $`f(ϵ)=1/(e^{\beta (ϵ-\mu )}+1)`$ at energy $`ϵ`$, and $`\mu `$ is the chemical potential. $`\gamma _e`$ and $`\gamma _n`$ are the electronic and nuclear gyromagnetic ratios, $`\langle 1/r^3\rangle `$ is the mean value of $`1/r^3`$ for the relevant orbital, $`W_{\mathrm{dip}}(ϵ)`$ is a function of the bare densities of states of the orbitals for dipolar relaxation, and $`W_{\mathrm{orb}}(ϵ)`$ is the similar expression for orbital relaxation. 
For Cu relaxation with the magnetic field normal to the CuO planes (z-axis), $`W_{\mathrm{dip}}`$ and $`W_{\mathrm{orb}}`$ are given by, $$W_{\mathrm{dip}}^z(ϵ)=\left(\frac{1}{7^2}\right)[6N_{d_{x^2-y^2}}(ϵ)N_{d_{z^2}}(ϵ)+N_{d_{x^2-y^2}}(ϵ)N_{d_{x^2-y^2}}(ϵ)+N_{d_{z^2}}(ϵ)N_{d_{z^2}}(ϵ)],$$ $`(2)`$ $$W_{\mathrm{orb}}^z(ϵ)=0,$$ $`(3)`$ where $`N_{d_{x^2-y^2}}(ϵ)`$ and $`N_{d_{z^2}}(ϵ)`$ are the total bare densities of states for their respective orbitals. We take $`\langle 1/r^3\rangle =6.3`$ a.u. for Cu<sup>15</sup> and make the crude approximation<sup>5</sup> of $`3.0`$ a.u. for O. The inclusion of Cu 4s and the effects of core polarization will lead to a small change in the computed magnitude of the Cu spin relaxation and Knight shift, but should not change the overall qualitative behavior. The O 2s can increase the magnitude of the O relaxation rate by an order of magnitude due to its large density at the O nucleus but, as discussed above, cannot alter the qualitative behavior of the relaxation and Knight shift curves because, by symmetry, no 2s character appears at $`(\pi /a,\pi /a)`$. In most metals, the bare densities of states that appear in equations (2) and (3) can be taken to be constant over the range $`\mu \pm kT`$ around the Fermi level. The integral in equation (1) is thereby over $`f(1-f)`$ and is equal to the temperature $`kT`$. Hence, $`1/T_1T`$ is a constant. Due to the band crossing, the closeness of the Fermi level to the saddle point singularity in the DOS at $`(\pi /a,0,\pi /c)`$, and the top of the lower band, the bare densities of states cannot be taken to be constant over the range of energies relevant for computing the NMR. In addition, the chemical potential $`\mu `$ increases with increasing temperature in order to maintain particle conservation. Thus, $`\mu `$ must be solved for self-consistently at every temperature. Figures 4a and 4b show the calculated Cu and O spin relaxation rates over temperature, $`1/T_1T`$, for a z-axis magnetic field. The Cu $`1/T_1T`$ initially rises due to the sharp increase in the $`d_{z^2}`$ bare DOS just below the Fermi level from the DOS peak at $`(\pi /a,\pi /a)`$. As the temperature is further increased, the chemical potential increases to maintain particle conservation, and the integral in equation (1) “falls over” the top of the lower band, leading to the sharp decrease in the relaxation. The values we obtained are approximately a factor of $`2`$ larger than experiment, but the most important points are that the percentage increase from $`50`$ K to the maximum and the approximately factor-of-$`1.4`$ decrease from the maximum value to the value at $`300`$ K are compatible with experiment, where the increase is $`10`$–$`30\%`$ and the decrease is about a factor of $`2`$.<sup>3</sup> In contrast, the O $`1/T_1T`$ decreases by $`8\%`$ as the temperature is increased. This is due to the lack of the $`(\pi /a,\pi /a)`$ peak in the bare DOS for $`p_\sigma `$ and the decrease in its bare DOS as the energy is increased above the $`T=0`$ Fermi level. The relaxation is more sensitive to bare DOS values above the $`T=0`$ Fermi level due to the increase in $`\mu `$. These numbers are about a factor of $`5`$ smaller than the experimental values. Inclusion of O 2s character can easily produce an increase of a factor of $`5`$–$`10`$ without changing the qualitative behavior. The most important point to note here is that the decrease of the O relaxation is very small compared to the scale of the Cu relaxation decrease. 
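To make the machinery of equations (1)–(3) concrete, here is a minimal numerical sketch of the two key steps: solving for the chemical potential $`\mu (T)`$ from particle conservation, and evaluating the $`f(1-f)`$-weighted integral over the bare densities of states. The DOS profiles below (a smooth background plus a sharp peak just below the $`T=0`$ Fermi level, standing in for the $`(\pi /a,\pi /a)`$ feature) are illustrative assumptions rather than the computed band structure, and all physical prefactors are dropped.

```python
import numpy as np
from scipy.optimize import brentq

kB = 8.617e-5  # Boltzmann constant in eV/K

# Illustrative bare DOS models (assumptions, not the computed bands):
# bare DOS = (orbital character) x (band DOS), summed over bands.
eps = np.linspace(-0.5, 0.5, 4001)  # energy grid (eV), relative to the T=0 Fermi level
N_dz2 = 1.0 + 40.0 * np.exp(-((eps + 0.02) / 0.01) ** 2)  # sharp peak just below E_F
N_dx2y2 = 2.0 + 0.5 * eps                                 # smooth and slowly varying

def fermi(e, mu, T):
    x = np.clip((e - mu) / (kB * T), -60.0, 60.0)
    return 1.0 / (np.exp(x) + 1.0)

N_total = N_dz2 + N_dx2y2
n_fixed = np.trapz(N_total * fermi(eps, 0.0, 50.0), eps)  # particle number fixed at 50 K

def mu_of_T(T):
    # chemical potential from particle conservation at temperature T
    return brentq(lambda mu: np.trapz(N_total * fermi(eps, mu, T), eps) - n_fixed,
                  -0.2, 0.2)

def cu_relaxation(T):
    # 1/(T_1 T) up to constant prefactors, cf. equations (1) and (2)
    mu = mu_of_T(T)
    f = fermi(eps, mu, T)
    W = (6.0 * N_dx2y2 * N_dz2 + N_dx2y2**2 + N_dz2**2) / 49.0
    return np.trapz(f * (1.0 - f) * W, eps) / (kB * T)

for T in (50, 100, 150, 200, 250, 300):
    print(T, cu_relaxation(T))
```

For a constant DOS the integral collapses to $`kT`$ (since $`f(1-f)=-kT\,\partial f/\partial ϵ`$ integrates to $`kT`$), making $`1/T_1T`$ flat; it is the peak below the Fermi level, together with the upward drift of $`\mu (T)`$, that produces the rise-and-fall behavior described above.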
Although the small observed increase is not reproduced with the present calculations, we have already attained considerable success in obtaining such a dramatic difference in the Cu and O NMR. By considering the orthorhombic CuO<sub>6</sub> tilt in the following section, we will argue that, in fact, the observed increase can be obtained by our model. The expressions for the various Knight shifts are explicitly written down in reference 5 and are not reproduced here. As before,<sup>5</sup> we must assume that $`d_{z^2}`$ and Cu 4s interfere such that the net dipolar field due to the $`d_{z^2}`$ and Cu 4s hybrid is of the opposite sign to that of a single $`d_{z^2}`$, in order to lead to an increase in the Cu Knight shift with increasing temperature and the lack of strong temperature dependence of the shift for a z-axis field. This is discussed in detail in reference 5. The one additional point in favor of the sign flip of the dipolar field of $`d_{z^2}`$ due to interference with the 4s for our 3D model, as compared to our 2D model, is that in the 3D model, $`d_{z^2}`$ holes appear in the vicinity of $`(0,0,\pi /c)`$. At $`(0,0)`$, 4s character will mix with $`d_{z^2}`$, and by symmetry $`p_\sigma `$ cannot couple to them. Thus, one expects the 4s to mix into the $`d_{z^2}`$ so as to increase the size of the $`d`$ orbital in the planar directions, or in other words, to interfere with $`d_{z^2}`$ with the correct sign to lead to a net sign flip of the dipolar field. Figures 5a and 5b show the Cu and O z-axis Knight shifts. The O Knight shift does not include O 2s and decreases with increasing temperature. The Cu shift increases with increasing temperature and hence does not track the spin relaxation curve, in agreement with experiment. This is due to the fact that for relaxation, the DOS appears twice (squared) in the relaxation expression, because the initial and final state probabilities of the relaxing electron must be multiplied together. For the Knight shift, only a single power of the DOS appears. The Cu relaxation is therefore more sensitive than the shift to the sharp increase in the DOS due to $`(\pi /a,\pi /a)`$ just below the Fermi level. Note also that the contribution to the temperature dependence of the Cu shift from $`d_{x^2-y^2}`$ is much smaller than the contribution from $`d_{z^2}`$. In figure $`5a`$, we plot the $`d_{x^2-y^2}`$ contribution multiplied by $`10`$, and minus the $`d_{z^2}`$ shift, to incorporate the sign-flip interference from Cu 4s. The scale of the Cu shift is consistent with experiment. ## Orthorhombic Distortion The orthorhombic CuO<sub>6</sub> octahedra tilt in La<sub>1.85</sub>Sr<sub>0.15</sub>CuO<sub>4</sub> splits the DOS peak at $`(\pi /a,0,\pi /c)`$ and $`(0,\pi /a,\pi /c)`$ into two peaks at energy shifts $`ϵ_0\pm \delta `$, where $`ϵ_0`$ is the original energy. This leads to a local gap in the energy between these two values in the vicinity of the saddle point. As the contribution to the DOS is large here, one expects the total DOS to be much smaller between the two peaks. Chemically, one expects the size of $`\delta `$ to be on the order of $`0.01`$–$`0.05`$ eV or greater. The Fermi level will therefore fall between the two peaks. The overall effect on the Cu NMR will be small due to the dominance of the $`(\pi /a,\pi /a)`$ peak for Cu. On the other hand, this distortion will dramatically change the O NMR from slightly decreasing to slightly increasing, as observed by experiment. We believe this is the reason for the O NMR increase with temperature for LaSrCuO. 
One also expects that the system will adjust itself to place its Fermi level between the two peaks in order to lower its total free energy. This is essentially a Hume-Rothery or Jahn-Teller type argument. Such a mechanism provides a simple explanation for the tendency of several cuprates to “self-dope” to the optimal doping for $`T_c`$. ## Conclusions We have presented a theory for the NMR of LaSrCuO that explains the observed different Cu and O relaxations as arising from two bands with Cu $`d_{x^2-y^2}`$, $`d_{z^2}`$, planar O $`p_\sigma `$, and apical O $`p_z`$ characters. These bands were derived ab initio<sup>6,12</sup> by correcting the improper accounting of the self-Coulomb contribution to the orbital energy in LDA band structure calculations. The theory resolves the NMR anomalies with a microscopic picture that does not require the introduction of anti-ferromagnetic spin fluctuations and its corresponding disagreement with the observed incommensurate neutron spin fluctuations. The splitting of the saddle point singularity in the density of states at $`k`$ vectors $`(\pi /a,0,\pi /c)`$ and $`(0,\pi /a,\pi /c)`$ by the CuO<sub>6</sub> orthorhombic distortion changes the O NMR from monotonically decreasing with increasing temperature to monotonically increasing with temperature by splitting the peak into two peaks. ## Acknowledgments We wish to thank Jason K. Perry, with whom all parts of this work were discussed. We also wish to thank William A. Goddard III for his insight and encouragement during the development of the ideas presented here and in previous publications. ## References 1. F. Mila and T.M. Rice, Physica C 157, 561 (1989) 2. D. Pines, Physica C 282, 273 (1997) 3. R.E. Walstedt, B.S. Shastry, and S-W. Cheong, Phys. Rev. Lett. 72, 3610 (1994) 4. Y. Zha, V. Barzkin, and D. Pines, Phys. Rev. B 54, 7561 (1996) 5. J. Tahir-Kheli, Phys. Rev. B 58, 12307 (1998) 6. J.K. Perry and J. Tahir-Kheli, Phys. Rev. B 58, 12323 (1998); J.K. Perry, J. Phys. Chem. (submitted), xxx.lanl.gov/abs/cond-mat/9903088, www.firstprinciples.com 7. W.E. Pickett, Rev. Mod. Phys. 61, 433 (1989), and references therein 8. P.W. Anderson, The Theory of Superconductivity in the High-T<sub>c</sub> Cuprates (Princeton, 1997), p. 33 9. J. Tahir-Kheli, Phys. Rev. Lett. (submitted), www.firstprinciples.com 10. J. Tahir-Kheli, (to be published) 11. J.K. Perry and J. Tahir-Kheli, Phys. Rev. Lett. (submitted), xxx.lanl.gov/abs/cond-mat/9908308, www.firstprinciples.com 12. J.K. Perry and J. Tahir-Kheli, Phys. Rev. Lett. (submitted), xxx.lanl.gov/abs/cond-mat/9907332, www.firstprinciples.com 13. C.P. Slichter, Principles of Magnetic Resonance, Third Edition (Springer-Verlag, 1990) 14. A. Abragam, Principles of Nuclear Magnetism (Oxford, 1961) 15. A. Abragam and B. Bleaney, Electron Paramagnetic Resonance of Transition Ions (Dover, 1986), p. 458 ## Figure Captions 1(a–d). The Fermi surface for a.) $`k_z=0`$, b.) $`k_z=1.30\pi /c`$, c.) $`k_z=1.54\pi /c`$, and d.) $`k_z=2\pi /c`$. 2(a–d). The density of states of the two bands and the bare density of states for Cu $`d_{x^2-y^2}`$, $`d_{z^2}`$, and O $`p_\sigma `$, in units of 1/(eV$`\times `$spin$`\times `$unit cell). 3(a–c). The orbital characters for the Cu $`d_{x^2-y^2}`$, $`d_{z^2}`$, and O $`p_\sigma `$ orbitals. 4(a,b). The Cu and O spin relaxation rate over temperature for a z-axis magnetic field. The O curve is only computed for the $`p_\sigma `$ orbital. Including O 2s will not change the qualitative behavior of the curve, but will increase its magnitude. 5(a,b). The Cu and O Knight shifts. 
The contribution from $`d_{x^2-y^2}`$ is multiplied by a factor of $`10`$ and minus the $`d_{z^2}`$ shift is plotted due to the argued sign flip arising from interference with Cu 4s. The O shift only includes the contribution arising from $`p_\sigma `$.
# Aspects of Inflationary Reconstruction ## 1 Introduction One of the most exciting aspects of the rapidly improving observational situation in cosmology is the hope that we might learn of processes happening in the very early Universe, and thus learn of physics at energies inaccessible to terrestrial experiments. A key idea in early Universe cosmology is inflation , a period of accelerated expansion thought to be driven by the potential energy of one or more scalar fields. Assuming all goes well with upcoming experiments, particularly satellite projects MAP and PLANCK aiming to accurately measure microwave background anisotropies, one can hope to receive a limited, but non-trivial, amount of information concerning the inflationary mechanism. That is to say, one can hope to reconstruct part of the inflationary potential . Although inflation (or other early Universe ideas such as cosmic strings) is often discussed more or less independently of cosmological parameters such as the Hubble parameter $`h`$ and the density parameter $`\mathrm{\Omega }_0`$, it is in fact crucial to the entire enterprise of using such observations to constrain cosmology. The reason is that, contrary to the impression given by a fair fraction of the literature, measurements of the microwave background in isolation tell you nothing about cosmological parameters. This is because the influence of the parameters is on the dynamics, whereas the microwave background anisotropies give us a single snapshot. In order to predict the parameters, we need a theoretical prejudice as to the initial conditions, which are processed by dynamical evolution into the anisotropies we see. Consequently, fitting for the cosmological parameters and for the initial conditions for structure formation are not independent tasks which can be decoupled. Rather, they must be done together. ## 2 The standard paradigm In this article I will be discussing a few of the ways in which inflation, while being essentially correct, might turn out to be more complicated than envisaged. However, I stress that it is probably much more likely, if inflation proves correct at all, that it is one of the simpler models which is true. If so, then as Neil Turok said at this meeting ‘The person who fits the data with the fewest parameters is the winner’ and the game is over. So let’s begin by quickly reviewing the simplest scenario. It arises when the dynamics of inflation (both classical and quantum) are dominated by a single scalar field evolving in a nearly flat potential. If so it is well established that to a good approximation the two types of perturbations, scalar and tensor, will take on a power-law form, with the tensors giving a subdominant (and quite conceivably negligible) contribution. This is certainly expected to be valid for present data, unless we have a ‘designer’ model with a very strong spectral feature present on observable scales (as discussed in this session by Lesgourgues). The two power laws require four parameters for their specification. However, there is one consistency relation linking the two spectra which means that the tensor spectral index is not independent of the other parameters; disappointingly this redundancy is unlikely to be useful as almost certainly the tensor spectral index cannot be measured anyway. 
The remaining three parameters can be taken as the overall perturbation amplitude, the spectral index $`n`$ of the scalar perturbations and the relative impact $`r`$ of tensors as opposed to scalars on large-angle microwave background anisotropies. In a given model they are readily calculated, for example via the slow-roll expansion . ## 3 Simplest extension: scale-dependent spectral index High-accuracy observational data makes stringent demands on theory, so eventually the power-law approximation may prove inadequate. There are some theoretical reasons to believe that slow-roll might not be all that good; in supergravity models the slow-roll parameter $`\eta `$, which must be small for inflation to proceed, takes the form $$\eta =1+\text{‘other terms’}.$$ (1) Even if the other terms manage to partly cancel the 1, it may be unlikely that they do so to high accuracy. A particular example of this point in action is the running-mass models of inflation introduced by Stewart , where slow-roll is due to an accidental, and temporary, cancellation. If the slow-roll approximation is only weakly satisfied, then higher-order corrections to the formulae for the spectral index etc. become significant and have to be accounted for . More pertinently, the power-law approximation is likely to break down , and has to be replaced by a more general analysis such as a truncated Taylor expansion of the spectrum about some scale. (Some kind of expansion must be done to describe the spectra with a finite number of parameters which can be fit from the data.) We investigated corrections to power-law behaviour in Ref. . Adding in extra parameters will always worsen the uncertainty on all parameters, but we found that the likely impact on uncertainties in parameters such as $`h`$ is small, while as a bonus we have given ourselves one or more extra inflationary parameters to constrain early Universe physics with. In terms of our being able to constrain our models, it appears therefore that a breakdown in slow-roll should be regarded as a good thing, and we should hope that if inflation is correct it is a model of that type. ## 4 Isocurvature models A much more disastrous turn of events would be if the best models included isocurvature perturbations. Many of the most popular inflation models have more than one dynamically important field, and as soon as that happens we have the possibility of isocurvature modes. These significantly complicate the calculation of the microwave background anisotropies, and a particular disadvantage is that these models appear to defy reconstruction, in the sense that given a set of observations it would be very hard to decide what sort of inflation model gave rise to them. One would have to test candidate models against the data on a one-by-one basis. Three regimes are possible: * Pure isocurvature models. The basic idea here is that the field which eventually becomes the cold dark matter already exists during the inflationary era, and acquires perturbations by the usual mechanism. The idea has quite a long history , recently revived by Linde & Mukhanov and by Peebles . * Mixed adiabatic and isocurvature. If both modes are present we need more parameters to describe the initial conditions . Calculationally complex; in particular one usually needs to know the whole evolution of the Universe after inflation to compute predictions, whereas for adiabatic alone one need evolve only until the modes are well outside the horizon. * Low-level isocurvature. 
A small isocurvature contribution might not be directly detectable, yet be an extra noise source leading to deterioration of cosmological parameter determination. Isocurvature models pose two difficulties. The first is that the power spectra from the models must be parametrized so they can be fit from the observational data, and in such models it is not clear how many parameters one may need to introduce, e.g. treating them as power-laws may be inadequate. More importantly, unlike the case of single-field inflation models, there is no direct connection between these parameters and the inflation model in the form of a set of equations. Even if we have a successful fit of model parameters, it may be a difficult task to deduce the form of the inflation model giving rise to them, particularly if details of post-inflation processing of perturbations need to be included. ## 5 Open inflation models Open inflation is another complicated scenario, either in the original single-bubble Universe models or the more recently-devised instanton models . These models are readily testable insofar as the geometry of the Universe is measurable, and encouragingly already the indications are very much in favour of a flat or nearly flat Universe, both from the microwave background and measurements of the apparent magnitude–redshift relation for type Ia supernovae. If they do remain viable, they pose a similar set of technical problems to those posed in the isocurvature case. ## 6 Reconstruction without slow-roll I end with a separate topic not closely related to the rest of the article. With the increasingly widespread use of numerical technology in cosmology, and bearing in mind the possibility that slow-roll may not work all that well, a new approach to reconstruction is suggested in the single-field case. The traditional approach relies on computing a parametrized form of the perturbation spectra, which can be input into the CMBFAST program to give the microwave anisotropy spectrum. The drawback is that the analytic computation of the spectra is only approximate, and ultimately this leads to a biased estimation of the inflaton potential. This can be circumvented by instead using exact computation of the spectra, obtained by numerically solving the mode equations wavenumber by wavenumber as demonstrated in Ref. ; a toy illustration of this mode-by-mode integration is sketched at the end of this article. These are input directly into a modified form of CMBFAST to give microwave anisotropy predictions which are exact up to the assumption of linear perturbation theory. By obtaining the anisotropies directly from a parametrized form of the potential, one can directly estimate the uncertainties in the parameters describing the reconstructed potential, and the covariances between the uncertainties on different parameters. This will be described in more detail in a forthcoming publication. ## 7 Discussion This article stresses that models of the initial perturbation spectra are an integral part of cosmological parameter estimation, and we rely on our present understanding being a good one. It is reasonable to hope, and even expect, that everything will work out quite simply, but I’ve outlined a few ways in which things might be more complicated. On the plus side, I noted that at least if things are just a little more complicated, especially in the form of departures from perfect power-law spectra, that is likely to be seen as a good thing as it increases the amount of readily accessible information about early Universe physics. 
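As a concrete, if highly simplified, illustration of the mode-by-mode computation described in Section 6, the sketch below integrates a single Fourier mode of the perturbation equation $`u^{\prime \prime }+(k^2-z^{\prime \prime }/z)u=0`$ from sub-horizon Bunch–Davies initial conditions. Here $`z^{\prime \prime }/z`$ is replaced by its de Sitter limit $`2/\tau ^2`$, chosen purely because the exact solution is then known and the integration can be checked against it; this is a cartoon of the numerical idea, not the actual reconstruction code referred to above.

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0                              # comoving wavenumber (arbitrary units)
tau0, tau1 = -200.0 / k, -1e-3 / k   # start well inside the horizon, end well outside

def rhs(tau, y):
    # y = [Re u, Im u, Re u', Im u']; mode equation u'' = -(k^2 - 2/tau^2) u
    u = y[0] + 1j * y[1]
    upp = -(k**2 - 2.0 / tau**2) * u
    return [y[2], y[3], upp.real, upp.imag]

# Bunch-Davies vacuum: u -> e^{-ik tau}/sqrt(2k) deep inside the horizon
u0 = np.exp(-1j * k * tau0) / np.sqrt(2.0 * k)
du0 = -1j * k * u0
sol = solve_ivp(rhs, (tau0, tau1),
                [u0.real, u0.imag, du0.real, du0.imag],
                rtol=1e-10, atol=1e-12)

u_num = sol.y[0, -1] + 1j * sol.y[1, -1]
# exact de Sitter solution for comparison: u = e^{-ik tau}(1 - i/(k tau))/sqrt(2k)
u_exact = np.exp(-1j * k * tau1) * (1.0 - 1j / (k * tau1)) / np.sqrt(2.0 * k)
print(abs(u_num), abs(u_exact))  # the late-time amplitudes should agree closely
```

In the real problem one integrates the background field equations for the chosen potential to obtain $`z(\tau )`$, repeats the mode integration for each wavenumber of interest, and feeds the resulting spectrum into the Boltzmann code.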
## Acknowledgments Thanks to Ed Copeland, Ian Grivell, Rocky Kolb, Jim Lidsey and David Lyth for numerous discussions and collaborations related to this work.
# HST Observations of the Host Galaxy of GRB 970508 ## Introduction The detection and rapid localization of GRB 970508 by the Gamma-Ray Burst Monitor and the X-ray Wide Field Camera on BeppoSAX (Piro et al. 1998) led to the identification of an optical counterpart within four hours (Bond 1997) and, subsequently, to Keck spectroscopy of the counterpart that revealed a system of absorption lines at $`z=0.835`$ (Metzger et al. 1997). This lower limit on the GRB redshift was the first direct constraint on the distance and energy scale of a classical gamma-ray burst. Because of its early discovery, as well as the great interest attracted by the redshift measurement, the fading counterpart of GRB 970508 has been more thoroughly studied than any other GRB counterpart. The optical light curve, for example, has been intensively observed from a few hours to over a year after the GRB. The optical flux reached a peak at R $`\simeq 19.8`$ two days after the GRB, then began a power-law decay, $`t^{-\beta }`$, with $`\beta =1.141\pm 0.014`$, that continued for over one hundred days (Pian et al. 1998b; Galama et al. 1998b). At that point, the decay curve began to flatten (Pian et al. 1998a; Pedersen et al. 1998; Bloom et al. 1998a; Zharikov, Sokolov, and Baryshev 1998; Sokolov et al. 1999), as expected if the measured flux were becoming dominated by light from a host galaxy. GRB 970508 was also the first burst for which a radio counterpart was detected (Frail et al. 1997; Galama et al. 1998a). The broadband (radio to X-ray) spectrum of the afterglow (Galama et al. 1998) provided strong support for the synchrotron emitting shock model for afterglows (see, e.g., Sari, Piran and Narayan 1998). Despite the wealth of data on the GRB counterpart itself, the host galaxy has proven a more difficult observational target. Spectroscopy has revealed \[O II\] and \[Ne III\] emission features, and these, together with colors of the galaxy obtained by fitting observations of the combined light from the optical transient (OT) and the galaxy, have led to the suggestion that the host is an actively star-forming dwarf galaxy (Bloom et al. 1998a; Sokolov et al. 1999). However, attempts to resolve the host galaxy from the ground have proven fruitless. Even early HST observations, less than a month after outburst, found no evidence for an extended source at the position of the optical transient, down to faint levels, $`\mathrm{R}\gtrsim 24.5`$ (Pian et al. 1998b). In this Letter, we describe HST observations taken more than a year after outburst, which have finally allowed us to resolve the host galaxy of GRB 970508. These show that GRB 970508 occurred remarkably close to the center of its host galaxy, within about 70 pc, and suggest that the brightness of the OT may have fallen faster at late times than would be predicted by a simple power-law fit. Finally, we discuss the implications of these observations for understanding the progenitor objects and energetics of GRBs. ## Observations, Image Analysis, and Results The field of GRB 970508 was imaged during four HST orbits in 1998 August 5.78–6.03 UT, using the STIS CCD in Clear Aperture (50CCD) mode. Two exposures of 1446 s each were taken at each of four dither positions, for a total exposure time of 11,568 s. The images were bias and dark subtracted, and flat-fielded using the STIS pipeline. The final image was created and cleaned of cosmic rays and hot pixels using the variable pixel linear reconstruction algorithm (a.k.a. Drizzle) developed for the Hubble Deep Field (Williams et al. 
1996; Fruchter and Hook 1997). An output pixel size of 0″.025 across (one-half the size of the detector pixels on the sky) and a “pixfrac” of 0.6 were used. The (small) geometric distortion of the STIS CCD (Malamuth and Bowers 1997) was removed during the drizzling process. A section of the final image is shown in Fig. 1. The total emission from the OT and galaxy was measured by summing the counts in a box 1″.5 on a side and subtracting the local sky. We find $`3.13\pm 0.12`$ counts per second in the aperture. The photometric calibration of the images was performed using the synthetic photometry package SYNPHOT in IRAF/STSDAS; however, a 12% aperture correction has been applied (Landsman 1997) to account for light lost to large-angle scattering. The STIS CCD in clear aperture mode has a broad bandpass, with a significant response from 200 to 900 nm that peaks near 600 nm. As a result, STIS instrumental magnitudes are best translated into the standard filter set by quoting the result as a V magnitude; however, knowledge of an object’s intrinsic spectrum is required for an accurate conversion to the standard filter system. Using a spectral energy distribution (SED) flat in $`f(\nu )`$, one finds $`\mathrm{V}=25.1\pm 0.1`$. But, as mentioned in the introduction, ground-based observers have fitted for the host galaxy magnitude under the assumption that the power-law index of decay of the OT has been constant with time. We can therefore estimate the V magnitude using the color information from these observations. Although the estimated galactic magnitudes have changed with time (a point we will return to later), all observers have found a blue host, and Sokolov et al. suggest that the galaxy colors are best fit by an object intermediate between an Scd and an irregular (Im) redshifted to $`z=0.83`$. Using either the measured galaxy colors, obtained by a rough averaging of the values obtained by previous observers (Bloom et al. 1998a; Zharikov, Sokolov, and Baryshev 1998; Sokolov et al. 1999), or an SED created by interpolating between Coleman, Weedman and Wu (1980) Scd and Im SEDs and redshifting to $`z=0.83`$, we estimate $`\mathrm{V}=25.40\pm 0.15`$, where the error is dominated by our uncertainty over the SED. This, however, represents the sum of the emission from the host galaxy and any remnant of the OT. We next place a limit on the magnitude of the OT. In order to register the position of the OT on the late-time image, the positions of nine compact sources were found on both the June 1997 (Pian et al. 1998b) and August 1998 drizzled images. A shift (in $`x`$ and $`y`$) and rotation were then fit between the two images using the IRAF task geomap. The accuracy of this transformation was checked by comparing the observed and predicted positions of four bright, point-like sources. An r.m.s. scatter of $`0.25`$ drizzled pixels (0″.006) was found in each coordinate, for a position uncertainty $`<`$0″.01. When the position of the OT on the June 1997 image was transformed to that of the August 1998 image using the shift and rotation measured, we found it to be exactly at the center of the host. To verify this observation, we fitted the host galaxy with elliptical isophotes using the IRAF task ellipse. 
We find that the isophotal center of the galaxy is stable as a function of radius and agrees with the predicted position of the OT to better than our astrometric error of 0″.01. In Fig. 2 we show a plot of the measured surface brightness profile of the galaxy compared with an $`r^{1/4}`$ model and an exponential disk model. In both cases, we have convolved the model with the STIS PSF. In addition to the measured surface brightness profile, we show that profile after subtracting an estimated remnant OT. To do this, we went back to the June 1997 observation and scaled and subtracted a STIS PSF until the remaining counts in a circle of radius four drizzled pixels equaled those in the same region of the late-time image. This PSF, which served as the estimate of the OT at 24.7 days after outburst, was then itself scaled to the late time, 454 days after outburst, using the $`t^{-1.14}`$ power-law found in Pian et al. (1998). When subtracted from the galaxy, this estimate of the OT produced a clear “hole” in the center of the host. Under the assumption that galaxies (convolved to $`\sim 500`$ pc resolution by the PSF) should have surface brightness profiles rising toward the center, we reject this subtraction. The largest subtraction consistent with a roughly continuously rising surface brightness profile is shown in Fig. 2. Therefore we have subtracted a PSF scaled as $`t^{-1.3}`$ between the two HST observations. This power-law is $`2\sigma `$ below the power-law reported by Pian et al. (1998), but agrees well with that found by Bloom et al. (1998) and is within $`2\sigma `$ of that found by Zharikov et al. (1998). (We note that the Pian et al. fit was slightly contaminated by the then-unmeasured light from the host galaxy.) As can be seen from Figure 2, the surface brightness profile of the host galaxy is fit far better by an exponential disk model than by an $`r^{1/4}`$ law. The best-fit exponential model shown has a scale length of 0″.046 $`\pm `$ 0″.006 and an ellipticity of $`0.70\pm 0.07`$. It has then been convolved with an estimate of the STIS PSF, produced using the HST Tiny Tim software (Krist, Hasan, and Burrows 1992) (results obtained when the image is convolved using a stellar PSF are quite similar). This convolution produces a model which can be approximated by an exponential disk with scale length $`\sim `$0″.060 and ellipticity $`\sim 0.3`$. A true exponential disk plotted as magnitude versus radius would, of course, have a surface brightness profile that is a straight line; however, at its core, the surface brightness of the observed galaxy is averaged over the width of the PSF, and at large radii the true light of the galaxy is overwhelmed by light scattered from the center. It is worth noting that, given the large eccentricity observed, the poor fit of the $`r^{1/4}`$ law is not unexpected. In spite of their names, ellipticals rarely have ellipticities approaching $`0.7`$. The fit between the galaxy models and the data is substantially better when no OT is subtracted than when we remove an OT scaled as $`t^{-1.3}`$. Nonetheless, as can be seen in Figure 3, this power-law largely fits the available ground-based R-band photometry. For this figure, a galaxy magnitude of $`\mathrm{R}=25.2`$ has been removed from previous photometry. This corresponds to a flat (in $`f(\nu )`$), i.e. very blue, galaxy spectrum between R and V. The colors found by Sokolov et al. 
are somewhat redder; however, their V galaxy magnitude of $`25.16\pm 0.16`$ is somewhat fainter than ours. In August 1998, an OT falling as $`t^{-1.3}`$ would have $`\mathrm{V}\simeq 25.8`$, implying a corrected galaxy magnitude of $`\mathrm{V}=25.5\pm 0.15`$. However, there has been a continuing trend among the ground-based estimates of the host magnitude: the later the data used to fit the host galaxy, the fainter the host was found to be (Bloom et al. 1998a; Zharikov, Sokolov, and Baryshev 1998; Sokolov et al. 1999). The differences are visible in all bands (B, V, R and I), and are typically at the $`2\sigma `$ level. Furthermore, the preference of our surface brightness fit for no continuing emission from the OT, and the prevalence of upper limits rather than detections beyond day 150 in Figure 3, all suggest a single conclusion—the OT may have faded much more rapidly than $`t^{-1.3}`$ after day $`\sim 100`$. ## Discussion Our imaging has revealed the faint galaxy host of GRB 970508. We find that the OT is located, within astrometric errors of order 0″.01, at the isophotal center of the host. At the redshift of GRB 970508, $`z=0.83`$, this corresponds to an offset from the nucleus of $`\lesssim 70`$ pc. The surface brightness profile of the host better fits an exponential disk than the $`r^{1/4}`$ profile of an elliptical, and agrees best when no OT is assumed to be adding to the profile; however, we cannot rule out a power-law decay of the OT as $`t^{-\beta }`$, where $`\beta \gtrsim 1.3`$. Both our data and the ground-based observations tend to support a steepening of the early power-law decay curve sometime after day $`\sim 100`$. Such a break is naturally expected when the expanding fireball has swept up material from the ISM whose rest-mass energy is comparable to the energy of the initial explosion, at: $$t\approx 1\text{ yr}\left(\frac{E_{52}}{n}\right)^{1/3},$$ where $`E_{52}`$ is the initial energy of the explosion in units of $`10^{52}`$ erg and $`n`$ is the density of the surrounding medium in protons per cm<sup>3</sup> (Wijers, Rees, and Mészáros 1997). Wijers and Galama (1999) have used the multi-wavelength observations of the afterglow emission of GRB 970508 to estimate the physical parameters of the burst and its surrounding interstellar medium, assuming that the afterglow is dominated by synchrotron emission. They find a total burst energy of $`4\times 10^{52}`$ erg and $`n\simeq 0.04`$. These values would cause us to expect a break somewhat after one year. However, the uncertainties in these estimated parameters are large (perhaps an order of magnitude; Wijers, private communication). Furthermore, the above calculation does not take into account the significant time-dilation in the early part of the expansion, thus overestimating the time till the break. Therefore, we believe that all observations are consistent with the possible break in the light curve between 100 and 200 days after outburst. Although the precise behavior with time of the OT is uncertain, its position on the host is not. The extraordinary coincidence of the OT with the isophotal center of the host galaxy raises the question of whether the GRB is related to the galactic nucleus, either through a nuclear starburst or an AGN. The Keck spectroscopy by Bloom et al. (1998) shows strong \[O II\] and \[Ne III\], both of which are present in galaxies with active nuclei. 
However, in a large spectroscopic sample of galaxies (McQuade, Calzetti, and Kinney 1995; Storchi-Bergmann, Kinney, and Challis 1995), no ellipticals or spirals without AGN show \[Ne III\]. About one-third of the starbursts in the sample show this line, and these are by and large the most active starbursts in the group. Furthermore, only the most extreme starbursts, and the Seyferts, have a \[Ne III\] equivalent width or a \[Ne III\] to \[O II\] ratio (indicative of a temperature in excess of 40,000 K) as large as those seen in this galaxy. Thus, the spectroscopic evidence does not allow us to distinguish between a host which possesses an AGN and one which is simply showing signs of vigorous star formation. Nonetheless, we tend to prefer the latter explanation, for two reasons. First, if cosmological GRBs are produced by a single mechanism, that mechanism is unrelated to AGN. The OT of GRB 970228 is located at the very edge of a galactic disk (Fruchter et al. 1999a). Furthermore, HST images of other GRBs (Odewahn et al. 1998; Bloom et al. 1998b; Fruchter et al. 1999b), while less conclusive, all tend to discourage an AGN interpretation. Secondly, our recent work has shown that other GRB hosts possess unusually blue optical-to-infrared colors, implying that these galaxies are actively star-forming (Fruchter et al. 1999b). NICMOS imaging should soon allow us to determine whether this is also the case for the host galaxy of GRB 970508. Until then, we note that in many ways this host galaxy has a strong resemblance to the classic nearby starburst dwarf NGC 5253, in its integrated colors, morphology and line-strengths (McQuade, Calzetti, and Kinney 1995; Storchi-Bergmann, Kinney, and Challis 1995). Furthermore, NGC 5253 has a hot, young star cluster in its nucleus, suggesting that the resemblance may be very good indeed, by providing a natural explanation for the location of the OT. The hosts of four GRBs (970228, 970508, 971214 and 990123) have now been imaged and clearly resolved by HST (Sahu et al. 1997; Fruchter et al. 1999a; Odewahn et al. 1998; Fruchter et al. 1999b; Bloom et al. 1999). In each case, the OT is superposed on the stellar field. We note that this may be a result of selection effects and not the true distribution of GRBs with respect to host galaxies, since all of these GRBs were localized by detection of an OT, which itself may require the presence of a dense external working surface such as an ISM (Paczyński and Rhoads 1993; Mészáros and Rees 1997). Furthermore, only $`\sim 50\%`$ of the GRB localizations by the BeppoSAX satellite have resulted in the discovery of an optical transient. This fraction is consistent with a model of GRB formation from the merger of neutron-star–neutron-star binaries, since a substantial fraction of such binaries are likely to be ejected from the galaxy by the momentum imparted to the neutron stars at birth (Bloom, Sigurdsson, and Pols 1998; Livio et al. 1998). However, it is not immediately clear that star formation can properly account for the fraction of GRBs detected in the optical. Local estimates of dust obscuration in star-forming galaxies (Calzetti and Heckman 1999), as well as some estimates of the same effect in high-redshift galaxies (Pettini et al. 1998; Blain et al. 1999), suggest that about one-third of the light emitted in the UV escapes from star-forming galaxies before being absorbed by dust and reprocessed to IR or radio wavelengths. 
(We typically view the OTs of GRBs at UV rest wavelengths, since they have observed redshifts between 0.8 and 3.4; see also Hogg and Fruchter 1999.) A reduction by a factor of three of the light emitted by GRBs would be roughly consistent with what we observe — about one-half of GRBs are missing, and of order one-half of those observed have redder spectra than expected based on the afterglow theory (Bloom et al. 1998b; Fruchter et al. 1999a; Halpern et al. 1998), perhaps suggesting the presence of moderate extinction. However, other authors (Meurer et al. 1997) have claimed significantly higher absorption by dust at high redshift. And deep sub-millimeter observations of several high Galactic latitude fields (Hughes et al. 1998; Barger et al. 1998) have suggested that a few obscured objects in each field, undetectable in the optical, could be producing more stars than all of the galaxies visible in the optical. If these more extreme estimates of the importance of dust obscuration are correct, and GRBs are related to star formation, it may be difficult to explain the success optical observers have had in finding OTs—especially ones like 970508, which is quite probably at the nucleus of a highly inclined starburst galaxy, yet whose color in the rest-frame UV (Pian et al. 1998b) shows no sign of significant extinction. We thank Bob Williams for allocating Director’s Discretionary time to observe GRB 970508 using STIS and John Krist for preparing Tiny Tim STIS PSFs for us. 
# Striking Photospheric Abundance Anomalies in Blue Horizontal-Branch Stars in Globular Cluster M13 (Based in large part on observations obtained at the W.M. Keck Observatory, which is operated jointly by the California Institute of Technology and the University of California.) ## 1 Introduction A number of unresolved issues in post-main-sequence stellar evolution revolve around the nature of stars on the horizontal branch (HB), an evolutionary stage characterized by core helium burning and shell hydrogen burning. The HB stars in globular clusters are particularly appealing targets, as they are readily identified by their position in a cluster’s color-magnitude diagram, and are assumed to be chemically homogeneous and coeval with the other stars in the cluster. Although intrinsically luminous, most cluster HB stars are also distant, with V magnitudes of 14 or greater, so that detailed spectroscopic study is challenging. With the recent advent of 8–10 meter-class telescopes and highly efficient spectrographs, however, the HBs of many of the nearer globular clusters are now accessible at high spectral resolution in reasonable exposure times. We have therefore undertaken a program to measure chemical abundances and rotation rates of HB stars in M3, M13, M15, M92, and M68 via high-resolution echelle spectroscopy. M13 (NGC 6205) is one of the closest and best-studied globulars, with $`(m-M)=14.35`$ mag (Peterson 1993) and a metallicity \[Fe/H\] $`=-1.51`$ dex measured from red giant abundances (Kraft et al. 1992). Its BHB extends from the blue edge of the RR Lyrae gap to rather high temperatures (a “long blue tail”), but is interrupted by one or more gaps, including a large one at $`U-V\simeq 0.3`$ mag. Some researchers have suggested that this gap separates two different populations of BHB stars (Ferraro et al. 1997b; Sosin et al. 1997). In order to test this hypothesis, we have observed stars on either side of the gap, to look for differences in composition and rotation. In this paper, we describe the striking trends in helium and metal abundance that we observe along the HB of M13. The rotation results will be reported in a subsequent paper (Behr et al. 1999a). ## 2 Observations and Reduction The spectra were collected using the HIRES spectrograph (Vogt et al. 1994) on the Keck I telescope, during four observing runs on 1998 June 27, 1998 August 20–21, 1998 August 26–27, and 1999 March 09–11. A 0.86-arcsec slit width yielded $`R=45000`$ ($`\mathrm{\Delta }v=6.7\mathrm{km}\mathrm{s}^{-1}`$) per 3-pixel resolution element. Spectral coverage ran from 3940–5440 Å ($`m=90`$–$`66`$) for the June observations, and 3890–6280 Å ($`m=91`$–$`57`$) for the August and March observations, with slight gaps above 5130 Å where the free spectral range of the orders overfilled the detector. We limited frame exposure times to 1200 seconds, to minimize susceptibility to cosmic ray accumulation, and then coadded three frames per star. $`S/N`$ ratios were on the order of $`50`$–$`90`$ per resolution element, permitting us to measure even weak lines in the spectra. Nine of the thirteen stars in our sample were selected from HST WFPC-2 photometry of the center of M13 from Ferraro et al. (1997a), as reduced by Zoccali et al. (1999). The program stars were selected to be as isolated as possible; the HST images showed no apparent neighbors within $`5`$ arcseconds. The seeing during the HIRES observations was sufficiently good ($`0.8`$–$`1.0`$ arcsec) to avoid any risk of spectral contamination. 
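As a quick consistency check on the quoted resolving power (our arithmetic, not a statement from the observing logs), the velocity width of a resolution element follows directly from $`R`$:

$$\mathrm{\Delta }v=\frac{c}{R}=\frac{3\times 10^5\ \mathrm{km}\ \mathrm{s}^{-1}}{45000}\approx 6.7\ \mathrm{km}\ \mathrm{s}^{-1},$$

in agreement with the 3-pixel resolution element quoted above.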
The $`U-V`$ colors from the HST study provided the $`T_{\mathrm{eff}}`$ estimates for the subsequent abundance analysis. The other four M13 HB program stars were taken from the $`v\mathrm{sin}i`$ and \[O/H\] survey of Peterson, Rood, and Crocker (1995). They are all located in the cluster outskirts, where crowding is not so problematic. Positions, finding charts, photometry, and observational details for the target stars will be provided in a later paper (Behr 1999b). We used a suite of routines developed by J.K. McCarthy (1988) for the FIGARO data analysis package (Shortridge 1988) to reduce the HIRES echellograms to 1-dimensional spectra. Frames were bias-subtracted, flat-fielded against exposures of HIRES’ internal quartz incandescent lamps (thereby removing much of the blaze profile from each order), cleaned of cosmic ray hits, and coadded. A thorium-argon arc lamp provided wavelength calibration. Sky background was negligible, and 1-D spectra were extracted via simple pixel summation. A 10th-order polynomial fit to line-free continuum regions completed the normalization of the spectrum to unity. ## 3 Analysis The resulting spectra show many tens to over two hundred metal absorption lines each. Line broadening from stellar rotation is evident in many stars, but even in the most extreme cases, the line profiles were close to Gaussian, so line equivalent widths ($`W_\lambda `$) were measured by least-squares fitting of Gaussian profiles to the data. Equivalent widths as small as 10 mÅ were measured reliably, and errors in $`W_\lambda `$ (estimated from the fit $`\chi ^2`$) were typically 5 mÅ or less. Observed lines were matched to the atomic line lists of Kurucz & Bell (1995). (Several observed lines in the hotter stars could not be identified, and a more comprehensive future analysis will attempt to do so.) Those lines that were identified provided a consistent $`v_r`$ solution for each of the stars, placing all of them well within the canonical heliocentric $`v_r=-246.6\mathrm{km}\mathrm{s}^{-1}`$ of M13. We make the simplifying assumption that all the program stars lie on or near the zero-age horizontal-branch (ZAHB) track computed by Dorman et al. (1993), so that surface gravity would be determined by our choice of temperature. For the hotter and potentially “overluminous” HB stars (discussed below), this assumption may overestimate $`\mathrm{log}g`$ by as much as $`0.7`$ dex (Moehler 1998, Figure 4), but this will have only a modest impact on computed abundances for the species observed. Effective temperature was derived for the nine HST stars by matching dereddened $`U-V`$ color indices to computed ATLAS9 colors, with errors in $`T_{\mathrm{eff}}`$ based on the photometric errors. For the four non-HST stars, we accepted the published $`T_{\mathrm{eff}}`$ values of Peterson et al. (1995), although we note that these are based on photographic $`B-V`$ photometry, and are thus suspect. Strömgren photometry of M13 will refine the $`T_{\mathrm{eff}}`$ for these stars in a later reanalysis. We assign conservative error bars of $`\pm 500\mathrm{K}`$ to the photographic temperatures. For the chemical abundance analyses, we use the LINFOR/LINFIT line formation analysis package (developed at Kiel, based on earlier codes by Baschek, Traving, and Holweger (1966), with subsequent modifications by M. Lemke), along with model atmospheres computed by ATLAS9 (Kurucz 1997). 
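The equivalent-width measurement described above can be sketched in a few lines: fit a Gaussian absorption profile to a continuum-normalized segment of spectrum, then integrate the fitted profile analytically to obtain $`W_\lambda `$. The spectrum below is synthetic and the line parameters are placeholders, so this is an illustration of the procedure rather than the actual reduction pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def absorption_line(wl, depth, center, sigma):
    # continuum-normalized profile: unity minus a Gaussian dip
    return 1.0 - depth * np.exp(-0.5 * ((wl - center) / sigma) ** 2)

# synthetic data: a weak line at 4481 A (placeholder wavelength) plus noise
rng = np.random.default_rng(0)
wl = np.linspace(4480.0, 4482.0, 200)
flux = absorption_line(wl, 0.10, 4481.0, 0.06) + rng.normal(0.0, 0.015, wl.size)

popt, pcov = curve_fit(absorption_line, wl, flux, p0=[0.05, 4481.0, 0.05])
depth, center, sigma = popt

# equivalent width of a Gaussian dip: W = depth * sigma * sqrt(2 pi)
W_mA = 1000.0 * depth * sigma * np.sqrt(2.0 * np.pi)  # in milli-Angstroms
W_err_mA = 1000.0 * np.sqrt(2.0 * np.pi) * np.sqrt(
    sigma**2 * pcov[0, 0] + depth**2 * pcov[2, 2])    # crude error propagation, ignoring covariance
print(f"W_lambda = {W_mA:.1f} +/- {W_err_mA:.1f} mA")
```

A direct numerical integration of the line depression would serve equally well for strong lines; the analytic Gaussian integral is simply less sensitive to noise for the weakest features.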
Our spectra are sufficiently uncrowded that we can simply compute abundances from equivalent widths, instead of performing a full spectral synthesis fit. Only lines attributed to a single chemical species were considered; potentially blended lines are ignored in this analysis. Microturbulent velocity $`\xi `$ was chosen such that the abundance derived for a single species (Fe II) was invariant with $`W_\lambda `$. We assumed a cluster metallicity of \[Fe/H\] $`=-1.5`$ dex in computing the model atmospheres, and although many of the stars turn out to be considerably more metal-rich than this (see below), adjustments to the atmospheric input were found to have only modest effects ($`<0.2`$ dex) on the abundances of individual elements. Table 1 lists the final photospheric parameters used for each of the target stars, as well as the heliocentric radial velocities. ## 4 Results In Figure 1, abundance determinations for key chemical species are plotted as a function of stellar $`T_{\mathrm{eff}}`$. Note that the bottom three panels have a different vertical scale than the top six. The values \[X/H\] represent logarithmic offsets from the solar values of Anders & Grevesse (1993). Whenever possible, we used the abundance computed for the dominant ionization stage of each element, to minimize the possibility of non-LTE effects. The error bars incorporate the scatter among multiple lines of the same species, plus the uncertainties in $`T_{\mathrm{eff}}`$, $`\mathrm{log}g`$, $`\xi `$, $`W_\lambda `$ for each line, and \[Fe/H\] of the input atmosphere. Even with the conservative error bars in $`T_{\mathrm{eff}}`$ ($`\pm 500`$ K in the cooler stars) and $`\mathrm{log}g`$ ($`\pm 0.4`$ dex), individual element abundances are uncertain by 0.3 dex or less, with the sole exception of the Ca I lines of star IV-83. The abundances of helium, iron, and magnesium provide the most striking contrast in behavior. The He abundance first appears at the expected solar He/H ratio at $`T_{\mathrm{eff}}\simeq 11000\mathrm{K}`$, but then drops by a factor of more than 100 as $`T_{\mathrm{eff}}`$ increases to $`19000\mathrm{K}`$. Iron, similarly, is present at (or slightly below) the \[Fe/H\] $`=-1.51`$ dex expected for this metal-poor cluster, but then rises to Population I abundances for the stars hotter than $`12000`$ K. Magnesium, on the other hand, appears consistently at almost exactly the canonical cluster metallicity, with no discernible change with $`T_{\mathrm{eff}}`$. Other metals exhibit similar enhancements in the hotter stars. The Ti abundance rises by approximately a factor of 30 from 8000 K to 15000 K, although the trend is not as clear-cut as with iron. Silicon and calcium are also modestly enhanced, to \[X/H\] $`\simeq -0.5`$ dex, among some of the hotter stars. The most pronounced overabundances are seen in phosphorus, which appears at \[P/H\] $`\simeq +1.5`$ dex in six stars, and chromium, which climbs past solar metallicity to reach a remarkable \[Cr/H\] $`=+3.10`$ dex, an enhancement of more than a factor of $`10^4`$ over the metallicity of M13, albeit in only one star. These values are each based on several separate spectral lines, in close agreement with each other, so we are confident that they are not due to random errors or line misidentification. The CNO elements, particularly nitrogen, also show enhancements, although most of these abundances are based on only a single line per species, and are therefore suspect. N II appears in four of the hot stars, at \[N/H\] ranging from $`+1.6`$ to $`+3.5`$ dex. 
Nitrogen enhancement from dredge-up of fusion-processed material is expected in evolved stars, but not to this extent, so if these values are accurate, some other mechanism must be at work. A single oxygen line appears at a more reasonable \[O/H\] $`=+1.0`$ dex, and carbon is solar or slightly subsolar in three of the stars. ## 5 Discussion An underabundance of helium on the BHB has been observed in several previous instances (Baschek 1975; Heber 1987; Glaspey et al. 1989, among others), and in fact appears to be typical for stars of this type. Michaud, Vauclair, and Vauclair (1983, henceforth MVV), building on the original suggestion by Greenstein, Truran, and Cameron (1967), explain the underabundances as a result of gravitational settling of helium, which can take place if the outer atmosphere of the star is sufficiently stable. Our current results are significant in that they demonstrate a distinct trend in \[He/H\] with $`T_{\mathrm{eff}}`$ and $`\mathrm{log}g`$ along the HB, including cooler, lower-gravity stars with roughly solar helium abundances, such that the magnitude of helium diffusion can be traced over a range of conditions. Although MVV do predict greater helium depletion in their hotter, higher-gravity BHB models, the actual abundance pattern is also likely to depend on stellar rotation rate. As pointed out both in MVV and Glaspey et al., diffusion can easily be stymied by turbulence, mass loss, and meridional circulation produced by stellar rotation. Many of the BHB stars in M13 are comparatively fast rotators (Peterson et al. 1995), reaching $`v\mathrm{sin}i\simeq 40\mathrm{km}\mathrm{s}^{-1}`$, while the ten stars in our study with helium abundances all exhibit extremely narrow lines, suggesting $`v\mathrm{sin}i<6\mathrm{km}\mathrm{s}^{-1}`$ in most cases. A more comprehensive assessment of the stellar rotations, and their potential effects on diffusion processes, will be presented in another paper (Behr et al. 1999a). The MVV calculations indicate that helium depletion should be accompanied by photospheric enhancement of metals, as the same stable atmosphere which permits gravitational settling also permits levitation of species with large radiative cross-sections. Overabundances of factors of $`10^3`$–$`10^4`$ from a star’s initial composition could be supported by radiation pressure, although as Glaspey et al. hasten to point out, this is a “necessary but not sufficient condition” for actual abundance anomalies to appear, given the possibility of turbulent mixing or radiation-driven escape of metals from the star. MVV make some initial assessment of the magnitudes of these variations, but future models will have to explain more fully why some elements (N, P, and Cr) are enhanced so much more strongly than others (Fe, C, Ti, Ca, Si), while a few (Mg) are apparently immune to diffusion mechanisms. None of the recent diffusion work reported in the literature treats the specific case of the BHB, so a detailed comparison of our results with current theory will have to wait for improved models. Our results for iron and magnesium closely parallel those of Glaspey et al., who studied two stars in globular cluster NGC 6752, one at 10000 K, the other at 16000 K. The hotter star displays a Fe enhancement of 50 times above the cluster mean, but the cooler star has the same \[Fe/H\] as the cluster, while the Mg abundances are near the cluster mean in both cases. 
They find significantly lower amounts of silicon and phosphorus than we do, but the rough agreement in the helium and iron anomalies between these two BHB stars in NGC 6752 and our larger sample in M13 suggests that these diffusion mechanisms are not peculiar to M13. In addition to furthering our understanding of diffusion mechanisms, these atmospheric abundance variations have potential ramifications for the photometric morphology of globular clusters. A great deal of recent attention has focused on the presence of gaps in the color distribution of stars on the HBs of M13 and several other clusters (Ferraro et al. 1997b; Sosin et al. 1997; Buonanno et al. 1986). The origin of such gaps is not yet understood, and presents a challenge for theories of HB evolution. A prominent gap in M13's BHB, located at $`T_{\mathrm{eff}}\simeq 11000\mathrm{K}`$ and labelled 'G1' in Ferraro et al. (1997b), seems to coincide with the onset of our diffusion anomalies. We will explore this possible connection in later publications. Additionally, there is the issue of "overluminous" regions of the BHB in several clusters including M13 (Moehler 1998; Grundahl, VandenBerg, & Anderson 1998; Grundahl et al. 1999). While cooler BHB stars are in good agreement with theoretical zero-age horizontal-branch (ZAHB) tracks, those in the range $`11000\mathrm{K}\lesssim T_{\mathrm{eff}}\lesssim 20000\mathrm{K}`$ are found to be significantly brighter (or equivalently, at lower $`\mathrm{log}g`$) than expected. Again, this is the temperature range where diffusion effects start to alter the atmospheric composition significantly, possibly also modifying the atmospheric structure. The abundances that we observe should prove useful in evaluating the potential role of diffusion-driven metal enhancement in explaining this phenomenon. Lastly, the observed helium diffusion may have some impact on estimates of globular cluster ages. Helium diffusion in main-sequence models can alter evolutionary timescales by 10% or more (VandenBerg, Bolte, & Stetson 1996), and although the atmospheric structures of HB stars and MS stars are quite different, the magnitude of helium diffusion seen at different $`T_{\mathrm{eff}}`$ and $`\mathrm{log}g`$ on the HB may offer some insights into the degree of diffusion expected in the main-sequence case. More importantly, the helium fraction can influence the luminosity of the ZAHB (Proffitt 1997), which will affect age determinations based on the $`\mathrm{\Delta }V`$ between the turnoff and the HB, as well as distance estimates using the observed magnitudes of HB stars. Further observational and theoretical work will be necessary to determine what relationship (if any) exists between the onset of these diffusion-driven abundance anomalies and other characteristics of the HB, such as its luminosity, stellar rotation, and gaps in its color distribution, and whether diffusion significantly affects estimates of GC ages.

These observations would not have been feasible without the HIRES spectrograph and the Keck I telescope. We are indebted to Jerry Nelson, Gerry Smith, Steve Vogt, and many others for making such marvelous machines, to the W. M. Keck Foundation for making it happen, and to a bevy of Keck observing assistants for making them work. Patrick Côté graciously provided assistance with many of the HIRES observations. Thanks also go to Manuela Zoccali, Elena Pancino, and Giampaolo Piotto for their reduction of the HST photometry, and to Michael Lemke for introducing us to the LINFOR package and installing it locally.
SGD was supported, in part, by the Bressler Foundation. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
no-problem/9903/hep-ph9903396.html
ar5iv
text
# Partial widths $`a_0(980)\to \gamma \gamma `$, $`f_0(980)\to \gamma \gamma `$ and $`q\overline{q}`$-classification of the lightest scalar mesons
## 1 Introduction
The determination of the lightest scalar $`q\overline{q}`$ nonet is a problem of principal importance both for quark systematics and the search for exotic states. The key query here is an understanding of the origin of the $`a_0(980)`$ and $`f_0(980)`$ mesons: a study of the decays $`a_0(980)\to \gamma \gamma `$ and $`f_0(980)\to \gamma \gamma `$ is an imperative step in the analysis of the structure of $`a_0(980)`$ and $`f_0(980)`$ (see, for example, and references therein). Here we perform the calculation of the scalar meson transition form factors $`a_0(980)\to \gamma ^{*}(q^2)\gamma `$ and $`f_0(980)\to \gamma ^{*}(q^2)\gamma `$ in the region of small $`q^2`$; these form factors, in the limit $`q^2\to 0`$, determine the partial widths $`a_0(980)\to \gamma \gamma `$ and $`f_0(980)\to \gamma \gamma `$. Our calculation is based on the spectral representation technique developed in for a study of the pseudoscalar meson transitions $`\pi ^0\to \gamma ^{*}(q^2)\gamma `$, $`\eta \to \gamma ^{*}(q^2)\gamma `$ and $`\eta ^{\prime }\to \gamma ^{*}(q^2)\gamma `$. In the region of moderately small $`q^2`$, where Strong-QCD works, the transition form factor $`q\overline{q}`$-meson $`\to \gamma ^{*}(q^2)\gamma `$ is determined by the quark loop diagram of Fig. 1a which is a convolution of the $`q\overline{q}`$-meson and photon wave functions, $`\mathrm{\Psi }_{q\overline{q}}\otimes \mathrm{\Psi }_\gamma `$. The calculation of the process of Fig. 1a is performed in terms of the double spectral representation over $`q\overline{q}`$ invariant masses squared, $`s=(m^2+k_{\perp }^2)/\left(x(1-x)\right)`$ and $`s^{\prime }=(m^2+k_{\perp }^{\prime 2})/\left(x(1-x)\right)`$, where $`k_{\perp }^2`$, $`k_{\perp }^{\prime 2}`$ and $`x`$ are the light-cone variables and $`m`$ is the constituent quark mass. Following , we represent the photon wave function as a sum of two components which describe the prompt production of the $`q\overline{q}`$ pair at large $`s^{\prime }`$ (with a point-like vertex for the transition $`\gamma \to q\overline{q}`$, correspondingly) and the production in the low-$`s^{\prime }`$ region where the vertex $`\gamma \to q\overline{q}`$ has a nontrivial structure due to soft $`q\overline{q}`$ interactions. The process of Fig. 1a at moderately small $`|q^2|`$ is mainly determined by the low-$`s^{\prime }`$ region, in other words by the soft component of the photon wave function. The soft component of the photon wave function was restored in on the basis of the experimental data for the transition $`\pi ^0\to \gamma ^{*}(q^2)\gamma `$ at $`|q^2|\lesssim 1`$ GeV<sup>2</sup>. With the photon wave function found, the form factors $`a_0\to \gamma ^{*}(q^2)\gamma `$ and $`f_0\to \gamma ^{*}(q^2)\gamma `$ at $`|q^2|\lesssim 1`$ GeV<sup>2</sup> provide the opportunity to investigate in detail the scalar meson wave functions. However, the current data do not allow a full analysis, so we restrict ourselves to the consideration of a one-parameter representation of the wave function of scalar mesons, this parameter being the mean radius squared $`R^2`$. Within the assumption about the $`q\overline{q}`$ structure of the lightest scalar mesons, the flavour content of $`a_0(980)`$ is fixed, thus allowing unambiguous calculation of the transition form factor $`a_0(980)\to \gamma \gamma `$. We obtain reasonable agreement with data at $`R_{a_0(980)}^2\simeq 11`$–$`27`$ GeV<sup>-2</sup> or, in terms of the pion radius squared, at $`R_{a_0(980)}^2/R_\pi ^2\simeq 1.1`$–$`2.7`$.
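As a quick numerical illustration of these light-cone kinematics, the sketch below evaluates the invariant mass squared $`s`$ for a few values of $`x`$ and $`k_{\perp }`$; the constituent quark mass $`m=0.35`$ GeV used here is our assumption for illustration, not a value quoted in the text.

```python
# Minimal sketch of the light-cone kinematics quoted above:
#   s = (m^2 + k_perp^2) / (x (1 - x)).
# The constituent quark mass m = 0.35 GeV is an assumed value.

m = 0.35  # GeV, assumed constituent quark mass

def s_inv(x, k_perp):
    """qqbar invariant mass squared in light-cone variables (GeV^2)."""
    return (m**2 + k_perp**2) / (x * (1.0 - x))

for x in (0.2, 0.5):
    for k in (0.0, 0.3):
        print(f"x={x:.1f}, k_perp={k:.1f} GeV -> s={s_inv(x, k):.3f} GeV^2")

# s is minimal, s = 4 m^2, at x = 1/2 and k_perp = 0 (the qqbar threshold).
print("threshold 4m^2 =", 4 * m**2, "GeV^2")
```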
The partial width $`\mathrm{\Gamma }(f_0(980)\to \gamma \gamma )`$ depends on the relative weight of the strange and non-strange components of the scalar/isoscalar meson, $`s\overline{s}`$ and $`n\overline{n}`$. For the region of not very large $`R_{f_0(980)}^2`$, $`R_{f_0(980)}^2/R_\pi ^2\simeq 1.0`$–$`1.7`$, the agreement with data is attained at a relatively large $`s\overline{s}`$ component in $`f_0(980)`$, that is, of the order of $`40`$–$`60\%`$. It does not contradict the results of the analysis of two-meson spectra according to which the lightest $`(IJ^{PC}=00^{++})`$-meson has a large $`s\overline{s}`$-component.
## 2 Decay amplitude and partial width
Below we present the formulae for the scalar/isoscalar meson decay $`f_0\to \gamma \gamma `$. The formulae for $`a_0\to \gamma \gamma `$ coincide in their principal points with those for $`f_0\to \gamma \gamma `$. The amplitude for the scalar meson two-photon decay has the following structure: $$A_{\mu \nu }=e^2g_{\mu \nu }^{\perp }F_{f_0\to \gamma \gamma }(0,0).$$ (1) Here $`e`$ is the electron charge ($`e^2/4\pi =\alpha =1/137`$) and $`F_{f_0\to \gamma \gamma }(0,0)`$ is the form factor for the transition $`f_0\to \gamma (q^2)\gamma (q^{\prime 2})`$ at $`q^2=0`$ and $`q^{\prime 2}=0`$, namely, $`F_{f_0\to \gamma \gamma }(0,0)=F_{f_0\to \gamma \gamma }(q^2\to 0,q^{\prime 2}\to 0)`$. The metric tensor $`g_{\alpha \beta }^{\perp }`$ determines the space perpendicular to $`q`$ and $`q^{\prime }`$: $$g_{\alpha \beta }^{\perp }=g_{\alpha \beta }-q_\alpha q_\beta \frac{q^{\prime 2}}{D}-q_\alpha ^{\prime }q_\beta ^{\prime }\frac{q^2}{D}+(q_\alpha q_\beta ^{\prime }+q_\alpha ^{\prime }q_\beta )\frac{(qq^{\prime })}{D},$$ (2) where $`D=q^2q^{\prime 2}-(qq^{\prime })^2`$.
### 2.1 Partial width
The partial width, $`\mathrm{\Gamma }_{f_0\to \gamma \gamma }`$, is determined as $$m_{f_0}\mathrm{\Gamma }_{f_0\to \gamma \gamma }=\frac{1}{2}\int d\mathrm{\Phi }_2(p_{f_0};q,q^{\prime })\mathrm{\Sigma }_{\mu \nu }|A_{\mu \nu }|^2=\pi \alpha ^2|F_{f_0\to \gamma \gamma }(0,0)|^2.$$ (3) Here $`m_{f_0}`$ is the $`f_0`$-mass, the summation is carried over outgoing photon polarizations, the photon identity factor, $`\frac{1}{2}`$, is written explicitly, and the two-particle invariant phase space is equal to $$d\mathrm{\Phi }_2(p_{f_0};q,q^{\prime })=\frac{1}{2}\frac{d^3q}{(2\pi )^32q_0}\frac{d^3q^{\prime }}{(2\pi )^32q_0^{\prime }}(2\pi )^4\delta ^{(4)}\left(p_{f_0}-q-q^{\prime }\right).$$ (4)
### 2.2 Form factor $`F_{f_0\to \gamma \gamma }(q^2,q^{\prime 2})`$
Following the prescription of Ref. , we present the amplitude of the process of Fig. 1a in terms of the spectral representation in the $`f_0`$ and $`\gamma (q^{\prime })`$ channels. The double spectral representation reads $$F_{f_0\to \gamma \gamma }(q^2,q^{\prime 2})=2\int _{4m^2}^{\mathrm{\infty }}\frac{ds\,ds^{\prime }}{\pi ^2}\frac{G_{f_0}(s)}{s-m_{f_0}^2}$$ (5) $$\times d\mathrm{\Phi }_2(P;k_1,k_2)d\mathrm{\Phi }_1(P^{\prime };k_1^{\prime },k_2)Z_{f_0}T(P^2,P^{\prime 2},q^2)\sqrt{N_c}\frac{G_{\gamma q\overline{q}}(s^{\prime })}{s^{\prime }-q^{\prime 2}}.$$ In the spectral integral (5), the momenta of the intermediate states differ from those of the initial/final states. The corresponding momenta for intermediate states are re-denoted as shown in Fig. 1b: $$q\to P-P^{\prime },q^{\prime }\to P^{\prime },p_{f_0}\to P,$$ (6) $$P^2=s,P^{\prime 2}=s^{\prime },(P^{\prime }-P)^2=q^2.$$ It should be stressed that $`P^{\prime }\ne P-q`$. The two-particle phase space $`d\mathrm{\Phi }_2(P;k_1,k_2)`$ is determined by Eq.
(4), while the one-particle space factor is equal to $$d\mathrm{\Phi }_1(P^{\prime };k_1^{\prime },k_2)=\frac{1}{2}\frac{d^3k_1^{\prime }}{(2\pi )^32k_{10}^{\prime }}(2\pi )^4\delta ^{(4)}\left(P^{\prime }-k_1^{\prime }-k_2\right).$$ (7) The factor $`Z_{f_0}`$ is determined by the quark content of the $`f_0`$ meson: it is equal to $`Z_{n\overline{n}}=(e_u^2+e_d^2)/\sqrt{2}`$ for the $`n\overline{n}`$ component, and $`Z_{s\overline{s}}=e_s^2`$ for the $`s\overline{s}`$ component. The factor $`\sqrt{N_c}`$, where $`N_c=3`$ is the number of colours, is related to the normalization of the photon vertex made in Ref. . We have two diagrams: with quark lines drawn clockwise and anticlockwise; the factor $`2`$ in front of the right-hand side of Eq. (5) stands for this doubling. The vertices $`G_{\gamma n\overline{n}}(s^{\prime })`$ and $`G_{\gamma s\overline{s}}(s^{\prime })`$ were found in Ref. ; the wave function $`G_{\gamma n\overline{n}}(s)/s`$ is shown in Fig. 2. We parametrize the $`f_0`$-meson wave function in the exponential form: $$\mathrm{\Psi }_{f_0}(s)=\frac{G_{f_0}(s)}{s-m_{f_0}^2}=Ce^{-bs},$$ (8) where $`C`$ is a normalization constant, and the parameter $`b`$ can be related to the $`f_0`$-meson radius squared.
### 2.3 Spin structure factor $`T(P^2,P^{\prime 2},q^2)`$
For the amplitude of Fig. 1b with transverse polarized photons, the spin structure factor is fixed by the quark loop trace: $$Tr[\gamma _\nu ^{\perp }(\widehat{k}_1^{\prime }+m)\gamma _\mu ^{\perp }(\widehat{k}_1+m)(\widehat{k}_2-m)]=T(P^2,P^{\prime 2},q^2)g_{\mu \nu }^{\perp }.$$ (9) Here $`\gamma _\nu ^{\perp }`$ and $`\gamma _\mu ^{\perp }`$ stand for photon vertices, $`\gamma _\mu ^{\perp }=g_{\mu \beta }^{\perp }\gamma _\beta `$, while $`g_{\mu \beta }^{\perp }`$ is determined by Eq. (2) with the following substitution: $`q\to P-P^{\prime }`$ and $`q^{\prime }\to P^{\prime }`$. Recall that the momenta $`k_1^{\prime }`$, $`k_1`$ and $`k_2`$ in (9) are mass-on-shell. One has $$T(s,s^{\prime },q^2)=2m\left[4m^2-s+s^{\prime }+q^2-\frac{4ss^{\prime }q^2}{2(s+s^{\prime })q^2-(s-s^{\prime })^2-q^4}\right].$$ (10)
### 2.4 Light cone variables
The formula (5) allows an easy transformation to the light cone variables using the boost along the z-axis. Let us use the frame in which the initial $`f_0`$-meson is moving along the z-axis with the momentum $`p\to \mathrm{\infty }`$: $$P=(p+\frac{s}{2p},0,p),P^{\prime }=(p+\frac{s^{\prime }+q_{\perp }^2}{2p},\stackrel{}{q}_{\perp },p).$$ (11) Then the transition form factor $`f_0\to \gamma ^{*}(q^2)\gamma `$ reads: $$F_{f_0\to \gamma \gamma }(q^2,0)=\frac{2Z_{f_0}\sqrt{N_c}}{16\pi ^3}\int _0^1\frac{dx}{x(1-x)^2}\int d^2k_{\perp }\mathrm{\Psi }_{f_0}(s)\mathrm{\Psi }_\gamma (s^{\prime })T(s,s^{\prime },q^2),$$ (12) where $`x=k_{2z}/p`$, $`\stackrel{}{k}_{\perp }=\stackrel{}{k}_{2\perp }`$, and the $`q\overline{q}`$ invariant masses squared are $$s=\frac{m^2+k_{\perp }^2}{x(1-x)},s^{\prime }=\frac{m^2+(\stackrel{}{k}_{\perp }-x\stackrel{}{q}_{\perp })^2}{x(1-x)}.$$ (13)
### 2.5 Meson charge form factor
In order to relate the wave function parameters $`C`$ and $`b`$ of Eq. (8) to the $`f_0`$-meson radius squared, we calculate the meson charge form factor shown diagrammatically in Fig. 1c.
The amplitude has the following structure $$A_\mu =(p_{f_0\mu }+p_{f_0\mu }^{\prime })F_{f_0}(q^2),$$ where the meson charge form factor $`F_{f_0}(q^2)`$ is a convolution of the $`f_0`$-meson wave functions $`\mathrm{\Psi }_{f_0}\otimes \mathrm{\Psi }_{f_0}`$: $$F_{f_0}(q^2)=\frac{1}{16\pi ^3}\int _0^1\frac{dx}{x(1-x)^2}\int d^2k_{\perp }\mathrm{\Psi }_{f_0}(s)\mathrm{\Psi }_{f_0}(s^{\prime })S(s,s^{\prime },q^2).$$ (14) $`S(s,s^{\prime },q^2)`$ is determined by the quark loop trace in the intermediate state: $$Tr[(\widehat{k}_1+m)\gamma _\mu ^{\perp }(\widehat{k}_1^{\prime }+m)(\widehat{k}_2-m)]=[P_\mu ^{\prime }+P_\mu -\frac{s^{\prime }-s}{q^2}(P_\mu ^{\prime }-P_\mu )]S(s,s^{\prime },q^2),$$ (15) where $$\gamma _\mu ^{\perp }=g_{\mu \nu }^{\perp }\gamma _\nu ,g_{\mu \nu }^{\perp }=g_{\mu \nu }-(P_\mu ^{\prime }-P_\mu )(P_\nu ^{\prime }-P_\nu )/q^2.$$ (16) One has $$S(s,s^{\prime },q^2)=\frac{q^2(s^{\prime }+s-q^2)(s^{\prime }+s-q^2-8m^2)}{2(s+s^{\prime })q^2-(s^{\prime }-s)^2-q^4}+q^2.$$ (17) The low-$`q_{\perp }^2`$ charge form factor, $$F_{f_0}(q^2)\simeq 1-\frac{1}{6}R^2q_{\perp }^2,$$ (18) determines the $`f_0`$-meson wave function parameters, $`C`$ and $`b`$.
### 2.6 First radial excitation states, $`2^3P_0q\overline{q}`$
Equation (8) stands for the wave function of the basic state; the wave function of the first radial excitation can be written within an exponential approximation as $$\mathrm{\Psi }_{f_0}^{(1)}(s)=C_1(D_1s-1)e^{-b_1s}.$$ (19) The parameter $`b_1`$ can be related to the radius of the radial excitation state; then the values $`C_1`$ and $`D_1`$ are fixed by the normalization and orthogonality requirements, $`\left[\mathrm{\Psi }_{f_0}^{(1)}\otimes \mathrm{\Psi }_{f_0}^{(1)}\right]_{q^2=0}=1`$ and $`\left[\mathrm{\Psi }_{f_0}\otimes \mathrm{\Psi }_{f_0}^{(1)}\right]_{q^2=0}=0`$.
## 3 Results
Using Eqs. (8), (12) and (19), we calculate the $`\gamma \gamma `$ partial widths of the $`1^3P_0q\overline{q}`$ and $`2^3P_0q\overline{q}`$ mesons.
### 3.1 Partial widths $`a_0(980)\to \gamma \gamma `$ and $`a_0(1450_{-20}^{+90})\to \gamma \gamma `$
The partial width $`\mathrm{\Gamma }(a_0(980)\to \gamma \gamma )`$ is determined by the same equation as for the $`f_0(980)`$-decay, Eq. (12), with the only substitution $`Z_{f_0}\to Z_{a_0}=(e_u^2-e_d^2)/\sqrt{2}=1/(3\sqrt{2})`$. The value $`\mathrm{\Gamma }(a_0(980)\to \gamma \gamma )`$ is shown in Fig. 3 as a function of $`R_{a_0(980)}^2`$. Experimental study of $`\mathrm{\Gamma }(a_0(980)\to \gamma \gamma )`$ was carried out in Refs. ; the averaged value is: $`\mathrm{\Gamma }(\eta \pi )\mathrm{\Gamma }(\gamma \gamma )/\mathrm{\Gamma }_{total}=0.24_{-0.07}^{+0.08}`$ keV . Using $`\mathrm{\Gamma }_{total}\simeq \mathrm{\Gamma }(\eta \pi )+\mathrm{\Gamma }(K\overline{K})`$, we have $`\mathrm{\Gamma }(a_0(980)\to \gamma \gamma )=0.30_{-0.10}^{+0.11}`$ keV. The calculated value of $`\mathrm{\Gamma }(a_0(980)\to \gamma \gamma )`$ agrees with data at $`R_{a_0(980)}^2=19\pm 8`$ GeV<sup>-2</sup>: this value looks quite reasonable for a meson of the $`1^3P_0q\overline{q}`$ multiplet. If $`a_0(980)`$ is a member of the basic $`1^3P_0q\overline{q}`$ multiplet, the scalar/isovector meson $`a_0(1450_{-20}^{+90})`$ is the first radial excitation meson, a member of the $`2^3P_0q\overline{q}`$ multiplet. Figure 3b demonstrates the values of the partial widths $`\mathrm{\Gamma }(a_0(1450_{-20}^{+90})\to \gamma \gamma )`$: in the calculation, we use $`m_{a_0(1450_{-20}^{+90})}=1535`$ MeV following the results of the analysis and put $`R_{a_0(1450)}/R_{a_0(980)}\simeq 1.22`$, assuming that the radii of the mesons of the $`2^3P_0q\overline{q}`$ multiplet are larger than those for $`1^3P_0q\overline{q}`$.
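For orientation on the numbers above, Eq. (3) can be inverted to translate a measured two-photon width into the modulus of the transition form factor at the photon points, $`|F(0,0)|=\sqrt{m\mathrm{\Gamma }/\pi \alpha ^2}`$. The small sketch below, using the $`a_0(980)`$ width quoted above, is our illustration of that relation.

```python
# Sketch: invert Eq. (3), m * Gamma = pi * alpha^2 * |F(0,0)|^2, to get
# the transition form factor modulus from the a0(980) width quoted above.
import math

alpha = 1.0 / 137.0
m_a0 = 0.980       # GeV
gamma = 0.30e-6    # GeV (0.30 keV, the averaged value quoted above)

F00 = math.sqrt(m_a0 * gamma / (math.pi * alpha**2))
print(f"|F(0,0)| for a0(980) -> gamma gamma: {F00:.3f} GeV")  # ~0.04 GeV
```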
The transition form factor $`F_{a_0(1450)\to \gamma \gamma }`$ is a convolution of two wave functions, $`\mathrm{\Psi }_{a_0(1450)}\otimes \mathrm{\Psi }_\gamma `$, one of them, the wave function of the first radial excitation, changing sign \[see Eq. (19)\]. This fact results in a relative suppression of the decay $`a_0(1450_{-20}^{+90})\to \gamma \gamma `$: $`\mathrm{\Gamma }(a_0(1450_{-20}^{+90})\to \gamma \gamma )/\mathrm{\Gamma }(a_0(980)\to \gamma \gamma )\simeq 1/10`$.
### 3.2 Partial $`\gamma \gamma `$-widths for scalar/isoscalar mesons of $`1^3P_0q\overline{q}`$ and $`2^3P_0q\overline{q}`$ multiplets
In the analysis of the $`f_0\to \gamma \gamma `$ decays, one should take into account that the scalar/isoscalar mesons in the mass range $`1`$–$`2`$ GeV are mixtures of not only the $`n\overline{n}`$ and $`s\overline{s}`$ components but the gluonium state as well. Therefore, the transition form factor of the $`f_0`$-meson reads: $$F_{f_0\to \gamma \gamma }=\mathrm{cos}\alpha (\mathrm{cos}\varphi F_{f_0(n\overline{n})\to \gamma \gamma }+\mathrm{sin}\varphi F_{f_0(s\overline{s})\to \gamma \gamma })$$ (20) where $`\mathrm{sin}^2\alpha `$ is the probability of the gluonium component in the $`f_0`$-meson, and $`\varphi `$ is the mixing angle for the $`n\overline{n}`$ and $`s\overline{s}`$ components: $`\psi _{f_0}^{flavour}=\mathrm{cos}\varphi n\overline{n}+\mathrm{sin}\varphi s\overline{s}`$. According to the estimations of Ref. , $`\mathrm{cos}^2\alpha \simeq 0.7`$–$`0.9`$ for the $`f_0`$-mesons of the $`1^3P_0q\overline{q}`$ and $`2^3P_0q\overline{q}`$ multiplets. Figure 3c demonstrates the partial $`\gamma \gamma `$-widths for $`f_0(980)`$ calculated under the assumptions that $`f_0(980)`$ is either a pure $`n\overline{n}`$ state (solid curve) or pure $`s\overline{s}`$ (dashed curve). Experimental analyses give $`\mathrm{\Gamma }(f_0(980)\to \gamma \gamma )=0.42\pm 0.06\pm 0.18`$ keV and $`\mathrm{\Gamma }(f_0(980)\to \gamma \gamma )=0.63\pm 0.14`$ keV ; the averaged value reads: $`\mathrm{\Gamma }(f_0(980)\to \gamma \gamma )=0.56\pm 0.11`$ keV . Our calculation shows that this value can be easily understood if $`f_0(980)`$ has a significant $`s\overline{s}`$ component. For example, for $`R_{f_0(980)}^2=11`$ GeV<sup>-2</sup> and $`\alpha =0`$, the data can be described either with $`\varphi \simeq 35^o`$ or $`\varphi \simeq 63^o`$. The existence of a significant $`s\overline{s}`$-component in $`f_0(980)`$ agrees with the results of the analysis of the two-meson spectra. Figure 3d shows the $`f_0(1500)\to \gamma \gamma `$ partial widths calculated for pure $`n\overline{n}`$ and $`s\overline{s}`$ components within the assumption that these $`q\overline{q}`$ components belong to the $`2^3P_0`$ multiplet. One can see a strong suppression of the $`\gamma \gamma `$ decay mode for both components, $`n\overline{n}`$ and $`s\overline{s}`$. The origin of this suppression is the same as for the decay $`a_0(1450_{-20}^{+90})\to \gamma \gamma `$: this is an approximate orthogonality of the photon and $`(2^3P_0q\overline{q})`$-meson wave functions in the coordinate/momentum space.
## 4 Conclusion
Here we continue the investigation of the meson two-photon decays started in Ref. , where the partial widths $`\pi \to \gamma \gamma `$, $`\eta \to \gamma \gamma `$ and $`\eta ^{\prime }\to \gamma \gamma `$ were calculated. In the present paper, we have calculated the partial widths $`a_0(980)\to \gamma \gamma `$ and $`f_0(980)\to \gamma \gamma `$ assuming that the mesons $`a_0(980)`$ and $`f_0(980)`$ are members of the basic $`1^3P_0q\overline{q}`$ multiplet: the results are in a reasonable agreement with the data.
This supports the idea of the $`q\overline{q}`$ origin of the scalar mesons $`a_0(980)`$ and $`f_0(980)`$ and gives the argument that the lightest scalar nonet is located near $`1000`$ MeV (see discussion in Refs. and in references therein). A successful description of the data is due to two principal points taken into account in the calculation: (i) the spin structure and relativistic corrections are included into consideration in the framework of the relativistic light-cone technique, (ii) the subprocess $`q\overline{q}\to \gamma \gamma `$, which was found previously in , is used in the numerical analysis. We are grateful to L.G. Dakhno, D.I. Melikhov and A.V. Sarantsev for useful discussions. The paper is supported by grants RFFI 96-02-17934 and INTAS-RFBR 95-0267.
no-problem/9903/hep-ph9903241.html
ar5iv
text
# Diffractive Physics at HERA
## 1 Introduction
Presently, one of the most important tasks in particle physics is the understanding of the strong force. For this purpose, the Quantum Chromodynamics theory (QCD), part of the Standard Model, seems to be the best candidate. An important characteristic of this theory is that the coupling constant $`\alpha _S`$ tends to zero when the transverse distance between matter constituents, quarks and gluons, tends to zero. This means that, to be calculable within a perturbative approach, the interaction between these constituents requires the presence in the process of a "hard" scale, i.e. a large transverse momentum transfer or a large mass. The lepton beam of the high energy $`ep`$ collider HERA is a prolific source of photons in a large virtuality range, such that the study of the $`\gamma ^{*}p`$ interaction provides a completely new and deep insight into the QCD dynamics. A major discovery at HERA is the observation of the strong rise of the total cross section at high energy in deep inelastic scattering (DIS), i.e. $`\gamma ^{*}p`$ interactions with large $`Q^2`$ values ($`Q^2`$ being the negative of the squared four-momentum of the exchanged photon). This is inconsistent with the case of photoproduction ($`Q^2\simeq 0`$), which shows a soft dependence on the total hadronic energy, $`W`$, similar to the hadron-hadron interaction case and well described by the Regge phenomenological theory. After a transition around $`Q^2=`$ 1 GeV<sup>2</sup>, the steep energy dependence of the total cross section in DIS is related to the fast increase of the gluon density in the proton at high energy . HERA is thus a unique device to test QCD in the perturbative regime and to study the transition between the perturbative and non-perturbative domains. One of the remarkable successes of this theory, as reviewed at this conference , is the correct prediction of the evolution of the proton structure function with $`Q^2`$, for $`Q^2>1`$ GeV<sup>2</sup>. This evolution allows the extraction of the gluon density in the proton, which is not directly measurable. The other main open window at HERA for the understanding of the strong force is the study of diffractive interactions. Diffraction has been successfully described, already more than 30 years ago, via the introduction of an exchanged object carrying the vacuum quantum numbers, called the pomeron ($`IP`$). Whilst Regge-based models give a unified description of all pre-HERA diffractive data, this approach is not linked to the underlying QCD theory. The second major result at HERA is thus the observation in deep inelastic scattering that $`8`$–$`10\%`$ of the events present a large rapidity gap (LRG) without hadronic activity between the two hadronic sub-systems, $`X`$ and $`Y`$, as illustrated in Fig. 1 . The gaps being significantly larger than implied by particle density fluctuations during the hadronisation process, these events are attributed to diffraction, i.e. to the exchange of a colourless object at the proton vertex.
## 2 Exclusive vector meson production
In exclusive elastic vector meson production, $`\gamma ^{*}p\to Vp`$, the hadronic system $`X`$ consists only of a vector meson ($`\rho ,\omega ,\mathrm{\dots }`$), and the system $`Y`$ of the scattered proton. This process provides a very interesting way to test the mechanism of diffraction and our understanding of the pomeron structure. Fig. 2 summarizes the $`W`$ dependence of various elastic exclusive vector meson production processes. Fig.
2.a) presents the exclusive $`\rho ,\omega ,\varphi ,J/\mathrm{\Psi }`$ and $`\mathrm{\Upsilon }`$ production in photoproduction , together with the total photoproduction cross section . The light mesons ($`\rho ,\omega `$ and $`\varphi `$) show a soft dependence in $`W`$, equivalent to that of the total cross section dependence, while this energy dependence is much steeper for $`J/\mathrm{\Psi }`$ production. This is interpreted as being due to the presence of a hard scale, the charm quark mass, making the $`J/\mathrm{\Psi }`$ meson smaller than the confinement scale ($`\simeq 1`$ fm). In this case, it is natural to attempt a perturbative QCD description of the process, where the photon fluctuates into a quark-antiquark pair and the exchanged pomeron is modeled by a pair of gluons. This leads to a cross section proportional to the gluon density squared, which is in good agreement (full line) with the data (points) shown on Fig. 2.b) . This figure also shows the agreement of the 2 gluons exchange model with the measurement of exclusive $`J/\mathrm{\Psi }`$ production in the DIS regime, where a second hard scale, $`Q^2`$, is present. As illustrated on Fig. 2.c), a modification of the $`W`$ dependence also occurs for elastic $`\rho `$ production when $`Q^2`$ increases.
## 3 Inclusive DIS cross section and partonic structure of the pomeron
The diffractive DIS process can be defined by four kinematic variables conveniently chosen as $`Q^2`$, $`x_{IP}`$, $`\beta `$ and $`t`$, where $`t`$ is the squared four-momentum transfer to the proton, and $`x_{IP}`$ and $`\beta `$ are defined as $$x_{IP}=\frac{Q^2+M_X^2}{Q^2+W^2},\beta =\frac{Q^2}{Q^2+M_X^2};$$ (1) $`x_{IP}`$ can be interpreted as the fraction of the proton momentum carried by the exchanged pomeron and $`\beta `$ is the fraction of the exchanged momentum carried by the quark struck by the photon. These variables are related to the Bjorken $`x`$ scaling variable (with $`W^2\simeq Q^2/x-Q^2`$) by the relation $`x=\beta x_{IP}`$. Experimentally, the $`t`$ variable is usually not measured or is integrated over. In analogy with non-diffractive DIS scattering, the measured cross section is expressed in the form of a three-fold diffractive structure function $`F_2^{D(3)}(Q^2,x_{IP},\beta )`$: $$\frac{\mathrm{d}^3\sigma (ep\to eXY)}{\mathrm{d}Q^2\mathrm{d}x_{IP}\mathrm{d}\beta }=\frac{4\pi \alpha ^2}{\beta Q^4}(1-y+\frac{y^2}{2})F_2^{D(3)}(Q^2,x_{IP},\beta ),$$ (2) where $`y`$ is the usual scaling variable, with $`y\simeq W^2/s`$. $`F_2^{D(3)}`$ is conveniently factorised in the form $`F_2^{D(3)}(Q^2,x_{IP},\beta )=f_{IP/p}(x_{IP})F_2^D(Q^2,\beta )`$, assuming that the $`IP`$ flux $`f_{IP/p}(x_{IP})`$ is independent of the $`IP`$ structure $`F_2^D(Q^2,\beta )`$, by analogy with the hadron structure functions, $`\beta `$ playing the role of Bjorken $`x`$. The $`IP`$ flux is parametrized in a Regge inspired form. The fit of HERA data according to the Regge form has shown that factorization is broken. This feature is explained in Regge theory by the need to include sub-leading trajectories in addition to the $`IP`$. Including one further trajectory, the reggeon ($`IR`$), in addition to the pomeron: $$F_2^{D(3)}(Q^2,x_{IP},\beta )=f_{IP/p}(x_{IP})F_2^{IP}(Q^2,\beta )+f_{IR/p}(x_{IP})F_2^{IR}(Q^2,\beta ),$$ (3) is sufficient to obtain a good description of the data throughout the measured kinematic domain ($`0.4<Q^2<800`$ GeV<sup>2</sup>, $`x_{IP}<0.05`$ and $`0.001<\beta <0.9`$). The contributions of pomeron and reggeon exchange are illustrated in Fig. 3.a).
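A toy evaluation of the two-component decomposition, Eq. (3), is sketched below. The power-law flux form and all parameter values are illustrative assumptions of ours (standard Regge-inspired choices), not the fitted HERA values.

```python
# Sketch of Eq. (3):
#   F2^D(3) = f_IP/p(x_IP) * F2^IP(Q2,beta) + f_IR/p(x_IP) * F2^IR(Q2,beta)
# with an assumed t-integrated Regge flux ~ x_IP**(1 - 2*alpha(0)).
# All numerical values below are illustrative, not fitted HERA values.

def flux(x_pom, alpha0):
    """Regge-inspired flux factor (arbitrary normalization)."""
    return x_pom ** (1.0 - 2.0 * alpha0)

def f2_d3(x_pom, f2_pom=0.02, f2_reg=0.10,
          a_pom=1.20, a_reg=0.55, c_reg=0.05):
    pom = flux(x_pom, a_pom) * f2_pom
    reg = c_reg * flux(x_pom, a_reg) * f2_reg
    return pom, reg

for x_pom in (0.001, 0.01, 0.05):
    pom, reg = f2_d3(x_pom)
    print(f"x_IP={x_pom:<6} reggeon/total = {reg / (pom + reg):.4f}")
# The reggeon share grows with x_IP (i.e. toward smaller energies),
# which is the behavior described in the text.
```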
The reggeon contribution gets larger for increasing values of $`x_{IP}`$, which correspond to smaller energies (for given $`Q^2`$ and $`\beta `$ values). It gets also larger for smaller values of $`\beta `$, which is consistent with the expected decrease with $`\beta `$ of the reggeon structure function, following the meson example, whereas the pomeron structure function is observed to be approximately flat in $`\beta `$. By analogy with the QCD evolution of the proton structure function, one can attempt to extract the partonic structure of the pomeron from the $`Q^2`$ evolution of $`F_2^{D(2)}(Q^2,\beta )`$. The extracted distributions are shown in Fig. 3.b) separately for the gluon and the quark components as a function of $`z`$, the pomeron momentum fraction carried by the parton entering the hard interaction. This distribution shows the dominance of hard gluons (high $`z`$ values) in the pomeron partonic structure. The dominance of hard gluons in the pomeron has been confirmed by various analyses of the diffractive hadronic final state (jet production, energy flow, particle spectra and multiplicities, and event shape), providing a globally consistent picture of diffraction .
## Conclusion
HERA experiments have produced a large amount of results in diffraction, which allow confrontations with QCD predictions when one of the hard scales $`Q^2`$, the quark mass or $`t`$ (not reported in this summary) is present in the process. For the case of exclusive vector meson production, in the presence of a hard scale, models based on the fluctuation of the photon into a quark-antiquark pair which subsequently exchanges a pair of gluons with the proton partons successfully reproduce the enhanced energy dependence. The QCD analysis of the total diffractive cross section, assuming factorization into a pomeron flux in the proton and the corresponding parton distributions, favors the dominance of hard gluons in the pomeron, confirmed by the analysis of inclusive final states and of jet production.
no-problem/9903/cond-mat9903018.html
ar5iv
text
# Phase Ordering and Onset of Collective Behavior in Chaotic Coupled Map Lattices
## Abstract
The phase ordering properties of lattices of band-chaotic maps coupled diffusively with some coupling strength $`g`$ are studied in order to determine the limit value $`g_\mathrm{e}`$ beyond which multistability disappears and non-trivial collective behavior is observed. The persistence of equivalent discrete spin variables and the characteristic length of the patterns observed scale algebraically with time during phase ordering. The associated exponents vary continuously with $`g`$ but remain proportional to each other, with a ratio close to that of the time-dependent Ginzburg-Landau equation. The corresponding individual values seem to be recovered in the space-continuous limit.

One of the most remarkable features distinguishing extensively-chaotic dynamical systems from most models studied in out-of-equilibrium statistical physics is that they generically exhibit non-trivial collective behavior (NTCB), i.e. long-range order emerging out of local chaos, accompanied by the temporal evolution of spatially-averaged quantities . In particular, NTCB is easily observed on simple models of reaction-diffusion systems such as coupled map lattices (CMLs) in which (chaotic) nonlinear maps $`S`$ of real variables $`X`$ are coupled diffusively with some coupling strength $`g`$ . NTCB is often claimed to be a macroscopic attractor, well-defined in the infinite-size limit and reached for almost every initial condition, provided the local coupling between sites is "large enough". On the other hand, for small $`g`$ values, such as those corresponding to the so-called "anti-integrable" limit which tries to extend zero-coupling behavior to small, but finite coupling strengths, CMLs often exhibit multistability . This is in particular the case if the local map shows banded chaos, because the interfaces separating clusters of sites in the different bands can be pinned. This multistability is "extensive": the number of (chaotic) attractors may then be argued to grow exponentially with the system size, in opposition to NTCB for which this number is small and size-independent. In this Letter, we define and measure the limit coupling strength $`g_\mathrm{e}`$ separating the strong-coupling regime in which NTCB is observed from the weak-coupling, extensive multistability region. Using the discrete "spin" variables which can be defined whenever the one-body probability distribution functions (pdfs) of local (continuous) variables have disjoint supports, we study numerically the phase ordering process following uncorrelated initial conditions in cases where the spin variables take only two values. We find that the persistence probability $`p(t)`$ (i.e. the proportion of spins which have not changed sign up to time $`t`$) saturates in finite time to strictly positive values in the weak coupling regime whereas it decays algebraically to zero when $`g>g_\mathrm{e}`$. The associated persistence exponent $`\theta `$ varies continuously with parameters, at odds with traditional models . Moreover, data obtained on various two-dimensional CMLs is best accounted for by a relation of the form $`\theta \propto (g-g_\mathrm{e})^w`$, which we use to estimate $`g_\mathrm{e}`$. We show further that this behavior is mostly due to the non-trivial scaling of the characteristic length $`L(t)\sim t^\varphi `$ during the phase ordering process.
Indeed, $`\varphi \ne \frac{1}{2}`$, the expected value for a scalar, non-conserved order parameter , and is found to be proportional to $`\theta `$, with the exponent ratio $`\varphi /\theta `$ approximately taking the value known for the time-dependent Ginzburg-Landau equation (TDGLE). We also provide evidence that, in the continuous-space limit, "normal" phase ordering behavior is recovered. Finally, we discuss the hierarchy of limit coupling values $`g_\mathrm{e}^n`$ which can be defined when the local map is unimodal and shows $`2^n`$-band chaos, using recent results on renormalisation group (RG) ideas applied to CMLs . Consider a $`d`$-dimensional hypercubic lattice $`\mathcal{L}`$ of coupled identical maps $`S_\mu `$ acting on real variables $`(X_\stackrel{}{r})_{\stackrel{}{r}\in \mathcal{L}}`$: $$X_\stackrel{}{r}^{t+1}=(1-2dg)S_\mu (X_\stackrel{}{r}^t)+g\underset{\stackrel{}{e}\in 𝒱}{\sum }S_\mu (X_{\stackrel{}{r}+\stackrel{}{e}}^t),$$ (1) where $`𝒱`$ is the set of $`2d`$ nearest neighbors $`\stackrel{}{e}`$ of site $`\stackrel{}{0}`$. We first present results obtained for the piecewise linear, odd, local map $`S_\mu `$ defined by: $$S_\mu (X)=\{\begin{array}{ccc}\mu X\hfill & \mathrm{if}\hfill & X\in [-1/3,1/3]\hfill \\ 2\mu /3-\mu X\hfill & \mathrm{if}\hfill & X\in [1/3,1]\hfill \\ -2\mu /3-\mu X\hfill & \mathrm{if}\hfill & X\in [-1,-1/3]\hfill \end{array}$$ (2) which leaves the $`I=[-1,1]`$ interval invariant. (For $`\mu =3`$, this is the chaotic map introduced by Miller and Huse .) For $`\mu \in [-2,-1]`$, $`S_\mu `$ displays banded chaos, while for opposite $`\mu `$ values, these bands become invariant subintervals of $`I`$. At $`\mu =1.9`$ in particular, $`S_\mu `$ possesses two symmetric such intervals $`I^\pm =[\pm \mu (2-\mu )/3,\pm \mu /3]`$, separated by a finite gap. For any value of $`g`$, the support of the pdf of $`X`$ for the CML defined by (1-2) can be separated into two components thanks to the symmetry of the map. This allows the unambiguous definition of spin variables $`\sigma _\stackrel{}{r}=\mathrm{sign}(X_\stackrel{}{r})`$. The deterministic nature of the system and the form of the coupling strictly forbid the nucleation of opposite-phase droplets in clusters: the analog spin system is at zero temperature. For large $`g`$ values, complete phase ordering occurs (Fig. 1a,b), and the system eventually reaches a regime in which all sites are situated in one of the two intervals $`I^\pm `$. For small $`g`$, initial conditions with sites in both intervals $`I^\pm `$ lead to spatially-blocked configurations where interfaces between clusters of each phase are strictly pinned, while chaos is present within clusters (Fig. 1c,d). To study the phase ordering process efficiently, uncorrelated initial conditions were generated as follows: exactly one half of the sites of a $`d=2`$ lattice were chosen at random and assigned positive $`X`$ values drawn according to the invariant distribution of $`S_\mu `$ on $`I^+`$, while the other sites were similarly assigned negative values. Large lattices with periodic boundary conditions were used, and the persistence $`p(t)`$ was measured. Fig. 2a shows the results of single runs for various values of $`g`$. For small $`g`$, $`p(t)`$ saturates at large times to strictly positive values, while it decays algebraically, for large $`g`$, on square lattices of linear size 2048 sites.
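A compact implementation of the dynamics (1)-(2) with persistence tracking might look like the sketch below. The lattice size and run length are scaled down from the 2048<sup>2</sup> runs quoted above, and the initial condition is only a crude stand-in for sampling the invariant distribution on $`I^\pm `$; this is our sketch, not the authors' code.

```python
# Sketch of the CML dynamics, Eqs. (1)-(2), on a d=2 periodic lattice,
# tracking the persistence p(t) of the spins sigma = sign(X).
# Small lattice / short run for illustration; the text uses 2048^2 sites.
import numpy as np

rng = np.random.default_rng(0)
L, g, mu, steps = 128, 0.20, 1.9, 200   # g > g_e ~ 0.169: ordering regime

def local_map(X, mu):
    """Piecewise linear odd map of Eq. (2)."""
    Y = mu * X
    Y = np.where(X > 1.0 / 3.0, 2.0 * mu / 3.0 - mu * X, Y)
    Y = np.where(X < -1.0 / 3.0, -2.0 * mu / 3.0 - mu * X, Y)
    return Y

# Crude initial condition: uniform magnitudes in I+ = [mu(2-mu)/3, mu/3],
# random signs (a stand-in for the invariant-distribution sampling above).
X = (rng.uniform(mu * (2 - mu) / 3, mu / 3, (L, L))
     * rng.choice([-1.0, 1.0], (L, L)))
sigma0 = np.sign(X)
persistent = np.ones((L, L), dtype=bool)

for t in range(steps):
    S = local_map(X, mu)
    neigh = (np.roll(S, 1, 0) + np.roll(S, -1, 0)
             + np.roll(S, 1, 1) + np.roll(S, -1, 1))
    X = (1.0 - 4.0 * g) * S + g * neigh        # Eq. (1) with d = 2
    persistent &= (np.sign(X) == sigma0)
    if (t + 1) % 50 == 0:
        print(f"t={t+1:4d}  p(t)={persistent.mean():.3f}")
```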
The associated persistence exponent $`\theta `$ varies continuously with $`g`$, and its $`g`$-dependence is nicely accounted for by a functional form $`\theta \propto (g-g_\mathrm{e})^w`$ with $`g_\mathrm{e}\simeq 0.169(1)`$ and $`w\simeq 0.20(3)`$ (Fig. 2b). We have, at this point, no theoretical justification of this fitting Ansatz. At any rate, it provides an operational definition of $`g_\mathrm{e}`$ yielding estimates consistent with those obtained using other, less accurate, methods . The origin of this unusual behavior of the persistence exponent is largely explained by the evolution of the spatial structures formed during phase ordering. Usually, one expects the coarsening to be described by the algebraic growth of a single characteristic length $`L(t)\sim t^\varphi `$ with $`\varphi =1/2`$ for a non-conserved, scalar order parameter . In the CML studied above, the two-point correlation function $`C(\stackrel{}{x},t)=\langle \sigma _{\stackrel{}{r}+\stackrel{}{x}}^t\sigma _\stackrel{}{r}^t\rangle `$ was measured during phase ordering . Length $`L(t)`$ was then evaluated to be the width at mid-height ($`C(L(t),t)=1/2`$), determined by interpolation. This procedure was then validated by a collapse of all $`C(\stackrel{}{x}/L(t),t)`$ curves. Surprisingly, while the scaling behavior of $`L(t)`$ is observed, exponent $`\varphi `$ departs from the expected $`1/2`$ value and varies continuously with $`g`$ (Fig. 3). Again, we find a law of the form $`\varphi \propto (g-g_\mathrm{e})^w`$ to be an acceptable Ansatz of our numerical results. The estimated values of $`g_\mathrm{e}`$ and $`w`$ are consistent, within numerical accuracy, with those found when fitting $`\theta (g)`$. This is corroborated by studying directly $`p(t)`$ vs $`L(t)`$ (not shown), or by plotting $`\theta `$ vs $`\varphi `$ which confirms that the two exponents are proportional to each other (Fig. 3d). Remarkably, the ratio $`\theta /\varphi `$ is found to have, within our numerical accuracy, the $`d=2`$ TDGLE value: $`\theta /\varphi \simeq 0.40(2)\simeq 2\theta _{\mathrm{GL}}\simeq 0.40`$ . (We cannot, however, completely exclude the values corresponding to the Ising model, or the diffusion equation, since $`\theta _{\mathrm{Ising}}\simeq 0.22`$ , and $`\theta _{\mathrm{Diff}.}\simeq 0.19`$ .) The same analysis was also performed on CMLs with a non-symmetric, unimodal, local map $`S_\mu `$ of the form: $$S_\mu (X)=1-\mu |X|^{1+\epsilon }\mathrm{with}\mu \in [0,2],$$ (3) in particular for $`\epsilon =0`$ (tent map) and $`\epsilon =1`$ (logistic map). For $`\mu \in [\mu _{\mathrm{\infty }},2]`$, this map shows $`2^n`$-band chaos and exhibits an inverse cascade of band-merging points $`\overline{\mu }_n`$ when $`\mu \to \mu _{\mathrm{\infty }}`$. In the strong-coupling limit, the corresponding CMLs exhibit, depending on $`d`$, periodic or quasiperiodic NTCB with a period equal to, or a multiple of, that of the band-chaos of the local map . For $`d=2`$ and $`3`$, in particular, simple period-$`2^n`$ NTCB occurs, with an infinite cascade of phase transition points $`\mu _n^\mathrm{c}`$ distinct from the band-merging points (Fig. 4a). When period-2 NTCB occurs in the two-band chaotic region of the map ($`\mu \in [\overline{\mu }_2,\overline{\mu }_1]\simeq [1.43,1.54]`$), two-state spin variables $`\sigma _\stackrel{}{r}^t\in \{-1,1\}`$ can be defined, but the asymmetry of the two bands hinders the generation of "effectively" uncorrelated initial conditions. Indeed, an equal proportion of sites in each band quickly leads to complete phase ordering and saturation of $`p(t)`$, even in the strong-coupling regime.
This happens because these initial conditions create, after a few timesteps, configurations with a fairly large imbalance between the two phases. Tuning the initial proportion $`\rho `$ of sites in, say, the band containing $`X=0`$, one can minimize such effects. We determined the optimal proportion $`\rho ^{}`$ defined as the value for which the magnetization $`\langle \sigma _\stackrel{}{r}^t\rangle `$ remains constant (Fig. 4b). Clean scaling behavior of $`L(t)`$ and $`p(t)`$ is then observed with reasonable system sizes, as with the symmetric local map (2). Varying the coupling strength $`g`$, exponents $`\varphi `$ and $`\theta `$ show the same behavior as above, decreasing continuously to zero at $`g_\mathrm{e}`$. Fig. 4c shows the case of coupled logistic maps, for which the Ansatz $`\theta ,\varphi \propto (g-g_\mathrm{e})^w`$ is, again, valid, although not as good as in the case of local map (2). Note that the estimated value $`w\simeq 0.06(2)`$ is different from that measured for the CML with local map (2), but $`\theta /\varphi \simeq 0.48(4)`$ is still rather close to the TDGLE value (Fig. 4d). We now deal with the onset of more complex NTCB such as the period-$`2^n`$ cycles mentioned above for which the study of the phase ordering in terms of two-state spin variables may not be legitimate. Consider, for example, a CML with local map $`S_\mu `$ defined by (3) in a 4-band chaotic regime ($`\mu \in [\overline{\mu }_3,\overline{\mu }_2]`$) which exhibits period-4 NTCB. The "natural" spin variables to study phase ordering take four values, indexed by the 4-band chaotic cycle. However, these four bands can be grouped in two "meta-bands", since they arise from a band splitting bifurcation at $`\overline{\mu }_2`$, so that two-state spin variables can still be defined. Accordingly, two limit coupling strengths can be defined: $`g_\mathrm{e}^1`$, marking the onset of complete phase ordering between the two meta-bands, and $`g_\mathrm{e}^2`$ for ordering from initial conditions within one of the meta-bands. A priori, $`g_\mathrm{e}^2\ne g_\mathrm{e}^1`$, and there might exist coupling strengths such that, e.g., pinned clusters exist within, but not between, the two meta-bands. The "true" onset of period-4 NTCB is then given by $`g\ge \mathrm{max}(g_\mathrm{e}^1,g_\mathrm{e}^2)`$. Similarly, for $`\mu \in [\mu _{\mathrm{\infty }},\overline{\mu }_n]`$, one can define $`n`$ different $`\mu `$-dependent limit coupling strengths $`g_\mathrm{e}^1,g_\mathrm{e}^2,\mathrm{\dots },g_\mathrm{e}^n`$, with $`n\to \mathrm{\infty }`$ as $`\mu \to \mu _{\mathrm{\infty }}`$. Using our recent work on renormalisation group arguments for CMLs , one can show that the threshold values of this infinite hierarchy are related to each other. Here, we only describe briefly these results, while a detailed derivation can be found in . The RG structure of single map (3) induces the conjugacy between $`(𝚫_g^m𝐒_\mu )^2`$ and $`𝚫_g^{2m}𝐒_{q(\mu )}`$, where $`𝐒_\mu `$ transforms each variable $`X_\stackrel{}{r}`$ by $`S_\mu `$, $`𝚫_g^m`$ is the diffusive operator applied $`m`$ times, and $`q(\mu )=\mu ^2`$ for coupled tent maps. This relation can be shown to imply that $`g_\mathrm{e}^2(\mu ,m)=g_\mathrm{e}^1(q(\mu ),2m)`$. Furthermore, using the fact that $`g_\mathrm{e}^n(\mu ,m)`$ decreases with $`m`$, one can prove that the maximum $`g_\mathrm{e}`$ for all $`n`$, $`\mu `$, and $`m`$ is $`g_\mathrm{e}^{*}=g_\mathrm{e}^1(\overline{\mu }_1,1)`$. Thus, whenever $`g\ge g_\mathrm{e}^{*}`$, complete ordering occurs for all bands.
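The domain-size measurement used above — the mid-height point $`C(L(t),t)=1/2`$ of the two-point correlation function — can be implemented along the following lines; the FFT estimator and the linear interpolation are our implementation choices, not a prescription from the text.

```python
# Sketch: estimate L(t) from a spin configuration as the mid-height point
# C(L(t), t) = 1/2 of the two-point correlation, as described above.
import numpy as np

def corr_length(sigma):
    """Mid-height width of the (on-axis averaged) correlation C(x)."""
    n = sigma.shape[0]
    f = np.fft.fft2(sigma)
    C = np.fft.ifft2(f * np.conj(f)).real / sigma.size  # periodic C(x)
    prof = 0.5 * (C[: n // 2, 0] + C[0, : n // 2])      # average of axes
    prof /= prof[0]
    i = int(np.argmax(prof < 0.5))                      # first crossing
    if i == 0:
        return float(n // 2)                            # no crossing found
    # linear interpolation between the two bracketing points
    return (i - 1) + (prof[i - 1] - 0.5) / (prof[i - 1] - prof[i])

sigma = np.sign(np.random.default_rng(1).standard_normal((64, 64)))
print("L of an uncorrelated field ~", corr_length(sigma))  # ~ O(1)
```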
The above results are at odds with the behavior of usual models studied in phase ordering problems . But in both cases presented here, the exponent ratio $`\theta /\varphi `$ seems to take the value expected for the TDGLE model. This "weak universality" is reminiscent of similar results found recently at the Ising-like critical points shown by the same models . We note, moreover, that, when $`g`$ is increased, $`\varphi `$ approaches $`1/2`$ and $`\theta `$ reaches values close to $`\theta _{\mathrm{GL}}`$. We believe that this tendency is mostly due to the lattice effects becoming less and less important (although strict pinning does not occur for $`g>g_\mathrm{e}`$). We have shown recently that, in the continuous-space limit of CMLs, the weak coupling regime disappears ($`g_\mathrm{e}\to 0`$), together with any pinning effects. One can thus wonder whether, in this limit, one recovers more "conventional" phase ordering dynamics. The continuous limit of CMLs such as those defined by (1-2) is reached when applying the coupling step of the dynamics more and more times per iteration, i.e. when taking the $`m\to \mathrm{\infty }`$ limit of $`𝚫_g^m𝐒_\mu `$. In this limit, $`𝚫_g^m`$ converges to a universal Gaussian kernel $`𝚫_\lambda ^{\mathrm{\infty }}=\mathrm{exp}(\frac{\lambda ^2}{2}\nabla ^2)`$ with a coupling range $`\lambda =\sqrt{2gm}|\stackrel{}{e}|`$, where $`|\stackrel{}{e}|`$ is the lattice spacing, which can thus be chosen to scale like $`1/\sqrt{m}`$ so as to keep $`\lambda `$ constant. We investigated the phase ordering properties of these CMLs with the symmetric local map (2) for increasing values of $`m`$. At a qualitative level, the scaling behavior of $`L(t)`$ and $`p(t)`$ is observed at all $`m`$ values. Quantitatively, exponents $`\theta `$ and $`\varphi `$ vary with $`m`$ at fixed $`g`$. Increasing $`m`$, $`\varphi `$ seems to converge to $`1/2`$, while $`\theta \to \theta _{\mathrm{GL}}`$: for $`m=1`$ to 3, we find $`\varphi =0.467`$, 0.479, 0.505, and $`\theta =0.174`$, 0.184, 0.196, from single runs on lattices of linear size 4096 sites. Our work provides a quantitative method for determining the onset of NTCB in chaotic coupled map lattices. It also reveals that the phase-ordering properties of multiphase, chaotic CMLs are different from those of most models studied traditionally. More work is needed, especially at the analytical level, to clarify the origin of the non-universality observed and put our numerical results on firmer ground, since we cannot completely exclude a very slow, unobservable, crossover of the scaling behavior observed to that of a more traditional model. Different approaches can be suggested. A continuous variation of the scaling exponent $`\varphi `$ for the characteristic length of domains is not usually observed, but (at least) two exceptions are known. One is the case of coarsening from initial conditions with built-in long-range correlations , but then the persistence probability $`p(t)`$ does not decrease algebraically with time . Another situation of possible relevance is the case of phase-ordering with an order-parameter-dependent mobility , for which, unfortunately, the behavior of the persistence is not known. At any rate, the recovery of the "normal" scaling properties of the TDGLE in the space-continuous limit suggests that lattice effects are ultimately responsible for the non-trivial scaling properties recorded in discrete systems. This calls for a detailed study of interface dynamics in order to assess the effective role of discretization and anisotropy.
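The convergence to the Gaussian kernel can be checked directly on Fourier symbols: in one dimension (our simplification), the $`m`$-fold coupling has symbol $`(1-4g\mathrm{sin}^2(k/2))^m`$, which approaches $`\mathrm{exp}(-\lambda ^2k^2/2)`$ with $`\lambda ^2=2gm`$. A sketch:

```python
# Sketch (1d, our simplification): the Fourier symbol of the m-fold
# diffusive coupling, (1 - 4 g sin^2(k/2))**m, approaches the Gaussian
# kernel exp(-(lambda^2 / 2) k^2) with lambda^2 = 2 g m, as stated above.
import numpy as np

g = 0.05
k = np.linspace(0.0, np.pi, 64)
for m in (1, 4, 16, 64):
    discrete = (1.0 - 4.0 * g * np.sin(k / 2.0) ** 2) ** m
    gaussian = np.exp(-g * m * k**2)   # lambda^2 / 2 = g m
    print(f"m={m:3d}  max|discrete - Gaussian| = "
          f"{np.max(np.abs(discrete - gaussian)):.3f}")
```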
Finally, we believe our results are general and that similar behavior should be found in experiments on phase-ordering of pattern-forming systems, such as, e.g., electro-hydrodynamical convection in liquid crystals, or Rayleigh-Bénard convection . We thank Ivan Dornic for many fruitful discussions and his keen interest in our work.
no-problem/9903/gr-qc9903010.html
ar5iv
text
# Universal Upper Bound to the Entropy of a Charged System
## Abstract
We derive a universal upper bound to the entropy of a charged system. The entropy bound follows from application of the generalized second law of thermodynamics to a gedanken experiment in which an entropy-bearing charged system falls into a charged black hole. This bound is stronger than the Bekenstein entropy bound for neutral systems.

Black-hole physics mirrors thermodynamics in many respects . According to the thermodynamical analogy in black-hole physics, the entropy of a black hole is given by $`S_{bh}=A/4\hbar `$, where $`A`$ is the black-hole surface area. (We use gravitational units in which $`G=c=1`$). Moreover, it is widely believed that a system consisting of ordinary matter interacting with a black hole will obey the generalized second law of thermodynamics (GSL): "The sum of the black-hole entropy and the common (ordinary) entropy in the black-hole exterior never decreases". This assumption plays a fundamental role in black-hole physics. In a classical context, a basic physical mechanism is known by which a violation of the GSL can be achieved: Consider a box filled with matter of proper energy $`E`$ and entropy $`S`$ which is dropped into a black hole. The energy delivered to the black hole can be arbitrarily red-shifted by letting the assimilation point approach the black-hole horizon. As shown by Bekenstein , if the box is deposited with no radial momentum a proper distance $`R`$ above the horizon, and then allowed to fall in such that $$R<\hbar S/2\pi E,$$ (1) then the black-hole area increase (or equivalently, the increase in black-hole entropy) is not large enough to compensate for the decrease of $`S`$ in common (ordinary) entropy. Arguing from the GSL, Bekenstein has proposed the existence of a universal upper bound on the entropy $`S`$ of any system of total energy $`E`$ and effective proper radius $`R`$: $$S\le 2\pi RE/\hbar ,$$ (2) where $`R`$ is defined in terms of the area $`A`$ of the spherical surface which circumscribes the system, $`R=(A/4\pi )^{1/2}`$. This restriction is necessary for enforcement of the GSL; the box's entropy disappears but an increase in black-hole entropy occurs which ensures that the GSL is respected provided $`S`$ is bounded as in Eq. (2). Evidently, this universal upper bound is a quantum phenomenon (the upper bound goes to infinity as $`\hbar \to 0`$). This provides a striking illustration of the fact that the GSL is intrinsically a quantum law. The universal upper bound Eq. (2) has the status of a supplement to the second law; the latter only states that the entropy of a closed system tends to a maximum without saying how large that should be. Other derivations of the universal upper bound Eq. (2) which are based on black-hole physics have been given in . Few pieces of evidence exist concerning the validity of the bound for self-gravitating systems . However, the universal bound Eq. (2) is known to be true independently of black-hole physics for a variety of systems in which gravity is negligible . In this paper we challenge the validity of the GSL in a gedanken experiment in which an entropy-bearing charged system falls into a charged black hole. We show that while the upper bound Eq. (2) is a necessary condition for the fulfillment of the GSL, it is not a sufficient one.
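Before turning to the charged case, it is instructive to check how tight Eq. (2) is: a Schwarzschild black hole, with $`E=M`$ and $`R=2M`$ in gravitational units, saturates it exactly, since $`2\pi RE/\hbar =4\pi M^2/\hbar =A/4\hbar =S_{bh}`$ (a point returned to below). A numerical sketch, with $`\hbar =1`$ for convenience (our unit choice):

```python
# Sketch: Schwarzschild saturation of the Bekenstein bound, Eq. (2).
# Gravitational units G = c = 1; we also set hbar = 1 for convenience.
import math

M = 3.0                          # arbitrary black-hole mass
R, E = 2.0 * M, M                # horizon radius and energy
A = 4.0 * math.pi * R**2         # horizon area

S_bound = 2.0 * math.pi * R * E  # Eq. (2) with hbar = 1
S_bh = A / 4.0                   # black-hole entropy, A / (4 hbar)
print(S_bound, S_bh, math.isclose(S_bound, S_bh))  # equal: bound saturated
```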
It is not difficult to see why a stronger upper bound on the entropy of an arbitrary charged system must exist: The electromagnetic interaction experienced by a charged body (which, of course, was not relevant in Bekenstein's gedanken experiment) can decrease the change in black-hole entropy (area). Hence, the GSL would be violated unless the entropy of the charged system (what disappears from the black-hole exterior) is restricted by a bound stronger than Eq. (2). Furthermore, there is one disturbing feature of the universal bound Eq. (2). As was pointed out by Bekenstein , black holes conform to the bound; however, the Schwarzschild black hole is the only black hole which actually attains the bound. This uniqueness of the Schwarzschild black hole (in the sense that it is the only black hole which has the maximum entropy allowed by quantum theory and general relativity) is somewhat disturbing. Recently, Hod derived an (improved) upper bound to the entropy of a spinning system and proved that all electrically neutral Kerr black holes have the maximum entropy allowed by quantum theory and general relativity. Clearly, the unity of physics demands a stronger bound for charged systems in general, and for black holes in particular. In fact, the plausible existence of an upper bound stronger than Eq. (2) on the entropy of a charged system has nothing to do with black-hole physics; a part of the energy of the electromagnetic field residing outside the charged system seems to be irrelevant for the system's statistical properties. This reduces the phase space available to the components of a charged system. Evidently, an improved upper bound to the entropy of a charged system must decrease with the (absolute) value of the system's charge. However, our simple argument cannot yield the exact dependence of the entropy bound on the system's parameters: its energy, charge, and proper radius. In fact, black-hole physics (more precisely, the GSL) yields a concrete expression for the universal upper bound. Arguing from the GSL, we derive a universal upper bound to the entropy of a charged system which is stronger than the bound Eq. (2). We consider a charged body (assumed to be spherical for simplicity) of rest mass $`\mu `$, charge $`q`$, and proper radius $`b`$, which is dropped into a (charged) Reissner-Nordström black hole. The external gravitational field of a spherically symmetric object of mass $`M`$ and charge $`Q`$ is given by the Reissner-Nordström metric $$ds^2=-\left(1-\frac{2M}{r}+\frac{Q^2}{r^2}\right)dt^2+\left(1-\frac{2M}{r}+\frac{Q^2}{r^2}\right)^{-1}dr^2+r^2d\mathrm{\Omega }^2.$$ (3) The black-hole (event and inner) horizons are located at $$r_\pm =M\pm (M^2-Q^2)^{1/2}.$$ (4) The equation of motion of a charged body on the Reissner-Nordström background is a quadratic equation for the conserved energy $`E`$ (energy-at-infinity) of the body $$r^4E^2-2qQr^3E+q^2Q^2r^2-\mathrm{\Delta }(\mu ^2r^2+p_\varphi ^2)-(\mathrm{\Delta }p_r)^2=0,$$ (5) where $`\mathrm{\Delta }=r^2-2Mr+Q^2=(r-r_{-})(r-r_+)`$. The quantities $`p_\varphi `$ and $`p_r`$ are the conserved angular momentum of the body and its covariant radial momentum, respectively. The conserved energy $`E`$ of a body having a radial turning point at $`r=r_++\xi `$ (where $`\xi \ll r_+`$) is given by Eq.
(5) $`E`$ $`=`$ $`{\displaystyle \frac{qQ}{r_+}}+{\displaystyle \frac{\sqrt{\mu ^2r_+^2+p_\varphi ^2}(r_+-r_{-})^{1/2}}{r_+^2}}\xi ^{1/2}\left\{1+O\left[\xi /(r_+-r_{-})\right]\right\}`$ (7) $`-{\displaystyle \frac{qQ}{r_+^2}}\xi \left[1+O(\xi /r_+)\right].`$ This expression is actually the effective potential (gravitational plus electromagnetic plus centrifugal) for given values of $`\mu ,q`$ and $`p_\varphi `$. It is clear that it can be minimized by taking $`p_\varphi =0`$ (which also minimizes the increase in the black-hole surface area; this is also the case for neutral bodies ). In order to find the change in black-hole surface area caused by an assimilation of the body, one should evaluate $`E`$ \[given by Eq. (7)\] at the point of capture, a proper distance $`b`$ outside the horizon. Thus, we should evaluate $`E`$ at $`r=r_++\delta (b)`$, where $`\delta (b)`$ is determined by $$\int _{r_+}^{r_++\delta (b)}(g_{rr})^{1/2}dr=b,$$ (8) where $`g_{rr}=r^2/\mathrm{\Delta }`$. Integrating Eq. (8) one finds (for $`b\ll r_+`$) $$\delta (b)=(r_+-r_{-})\frac{b^2}{4r_+^2}.$$ (9) An assimilation of the charged body results in a change $`dM=E`$ in the black-hole mass and a change $`dQ=q`$ in its charge. Taking cognizance of Eq. (7) and using the first law of black-hole thermodynamics $$dM=\frac{\kappa }{8\pi }dA+\mathrm{\Phi }dQ,$$ (10) where $`\kappa =(r_+-r_{-})/2r_+^2`$ and $`\mathrm{\Phi }=Q/r_+`$ are the surface gravity ($`2\pi `$ times the Hawking temperature ) and electric potential of the black hole, respectively, one finds $$(\mathrm{\Delta }\alpha )_{min}=\frac{4\mu r_+}{(r_+-r_{-})^{1/2}}\delta (b)^{1/2}-\frac{4qQ}{r_+-r_{-}}\delta (b),$$ (11) where the "rationalized area" $`\alpha `$ is related to the black-hole surface area $`A`$ by $`\alpha =A/4\pi `$. With Eq. (9) for $`\delta (b)`$ we find $$(\mathrm{\Delta }\alpha )_{min}(\mu ,q,b,s)=2\mu b-\frac{qQb^2}{r_+^2},$$ (12) which is the minimal black-hole area increase for given values of the body's parameters $`\mu ,q`$ and $`b`$ \[and for given black-hole parameters $`r_+`$ and $`Q`$ ($`s`$ stands for these two parameters)\]. Obviously the increase in black-hole surface area Eq. (12) can be minimized (for given values of the body's parameters) by maximizing the black-hole electric field (given by $`Q/r_+^2`$). However, we must consider an external electric field with a limited strength in order to keep it from deforming and breaking the charged body. Evidently, a charged body does not break up under its own electric field; we may therefore consider external fields as strong as the body's own field and take $`Q/r_+^2=q/b^2`$. Clearly, this value of the field is very conservative; most bodies can be subjected to much stronger electric fields without being broken. However, our goal is to derive a universal upper bound which is valid for each and every charged system in nature, regardless of its specific internal structure (and regardless of its internal constituents). Hence, we must consider an electric-field strength of this order of magnitude (this assures us that the charged body does not break up under the external electric field). Therefore, we find $$(\mathrm{\Delta }\alpha )_{min}(\mu ,q,b)=2\mu b-q^2,$$ (13) which is the minimal area increase for given values of the body's parameters $`\mu ,q`$ and $`b`$. It is worth emphasizing the assumptions made in obtaining Eq. (13). By keeping the term $`qQ\xi /r_+^2`$ and neglecting terms of order $`\mu \xi ^{3/2}(r_+-r_{-})^{1/2}/r_+`$ in Eq. (7) we actually assumed that $`\mu b\ll |qQ|`$.
Thus, we have a series of inequalities $`q/b^2=Q/r_+^2\ll 1/Q\ll q/\mu b`$, which implies $`b\gg \mu `$. Hence, the lower bound Eq. (13) is valid for bodies with negligible self-gravity, which is consistent with the test-particle approximation. In addition, the series of inequalities $`b\ll r_+\ll (Q/r_+^2)^{-1}=b^2/|q|`$ imply $`|q|\ll b`$. Assuming the validity of the GSL, one can derive an upper bound to the entropy $`S`$ of an arbitrary system of proper energy $`E`$ and charge $`q`$: $$S\le \pi (2Eb-q^2)/\hbar .$$ (14) It is evident from the minimal black-hole area increase Eq. (13) that in order for the GSL to be satisfied \[$`(\mathrm{\Delta }S)_{tot}\equiv (\mathrm{\Delta }S)_{bh}-S\ge 0`$\], the entropy $`S`$ of the charged system must be bounded as in Eq. (14). This upper bound is universal in the sense that it depends only on the system’s parameters (it is independent of the black-hole parameters $`M`$ and $`Q`$). We emphasize that the universal upper bound Eq. (14) is derived for bodies with negligible self-gravity. Nevertheless, this improved bound is also very appealing from a black-hole physics point of view: consider a charged Reissner-Nordström black hole of charge $`Q`$. Let its energy be $`E`$; then its surface area is given by $`A=4\pi r_+^2=4\pi (2Er_+-Q^2)`$. Now since $`S_{bh}=A/4\hbar `$, $`S_{bh}=\pi (2Er_+-Q^2)/\hbar `$, which is the maximal entropy allowed by the upper bound Eq. (14). Thus, all Reissner-Nordström black holes saturate the bound. This proves that the Schwarzschild black hole is not unique from a black-hole entropy point of view, removing the disturbing feature of the entropy bound Eq. (2). This is precisely the kind of universal upper bound we were hoping for! Evidently, systems with negligible self-gravity (the charged system in our gedanken experiment) and systems with maximal gravitational effects (i.e., charged black holes) both satisfy the upper bound Eq. (14). Therefore, this bound appears to be of universal validity. Still, it should be recognized that the upper bound Eq. (14) is established only for bodies with negligible self-gravity. It is of great interest to derive the bound for strongly gravitating systems. One piece of evidence exists concerning the validity of the bound for the specific example of a system composed of a charged black hole in thermal equilibrium with radiation . In summary, using a gedanken experiment in which an entropy-bearing charged system falls into a charged black hole, and assuming the validity of the GSL, one can derive a universal upper bound to the entropy of a charged system. An important goal is obviously to clarify the ultimate relation of the bound to black holes. In fact this relation is reflected in the numerical factor of $`\pi `$ which multiplies the $`q^2`$ term. We believe that some other proof, presumably a more complicated one, could establish this value of the numerical coefficient. \[It should be stressed that this is also the current situation for the original upper bound Eq. (2), which was first suggested in the context of black-hole physics . The relation of the original bound to black holes is reflected in the numerical factor of $`2\pi `$ appearing in it.\] Nevertheless, our main goal in this paper was to prove the general structure of the universal upper bound for charged systems; the new and interesting observation of this paper is the role of electric charge in providing an important limitation on the entropy which a finite physical system can have.
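The saturation property is an exact algebraic identity, since Eq. (4) gives $`r_+^2=2Mr_+-Q^2`$. A minimal check (ours, with arbitrary illustrative values and units in which $`\hbar =1`$):

```python
import numpy as np

for M, Q in [(1.0, 0.0), (1.0, 0.5), (1.0, 0.999)]:
    rp = M + np.sqrt(M**2 - Q**2)       # Eq. (4)
    S_bh = np.pi*rp**2                  # S = A/4hbar with A = 4 pi r_+^2
    bound = np.pi*(2.0*M*rp - Q**2)     # Eq. (14) with E = M and b = r_+
    print(M, Q, S_bh, bound)            # identical: every RN hole saturates it
```

The `Q = 0` case recovers the Schwarzschild saturation of Eq. (2), while the near-extremal case shows the new $`q^2`$ term at work.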
The intriguing feature of our derivation is that it uses a law whose very meaning stems from gravitation (the GSL, or equivalently the area-entropy relation for black holes) to derive a universal bound which has nothing to do with gravitation \[written out fully, the entropy bound would involve $`\hbar `$ and $`c`$, but not $`G`$\]. This provides a striking illustration of the unity of physics. ACKNOWLEDGMENTS I wish to thank Professor Jacob D. Bekenstein and Avraham E. Mayo for stimulating discussions. This research was supported by a grant from the Israel Science Foundation.
# Charge Segregation, Cluster Spin-Glass and Superconductivity in La1.94Sr0.06CuO4 \[ ## Abstract A <sup>63</sup>Cu and <sup>139</sup>La NMR/NQR study of a superconducting ($`T_c`$=7 K) La<sub>1.94</sub>Sr<sub>0.06</sub>CuO<sub>4</sub> single crystal is reported. Coexistence of spin-glass and superconducting phases is found below $`\sim `$5 K from <sup>139</sup>La NMR relaxation. <sup>63</sup>Cu and <sup>139</sup>La NMR spectra show that, upon cooling, CuO<sub>2</sub> planes progressively separate into two magnetic phases, one of them having enhanced antiferromagnetic correlations. These results establish the AF-cluster nature of the spin-glass. We discuss how this phase can be related to the microsegregation of mobile holes and to the possible pinning of charge-stripes. \] Although La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> is one of the most studied and structurally simplest high-$`T_c`$ superconductors, the complexity of its phase diagram keeps increasing every year. A striking feature is that, while Néel antiferromagnetic (AF) order is fully destroyed by $`x`$=2 % of doped holes, samples with much higher doping still show clear tendencies towards spin ordering: - At intermediate concentrations between the Néel and superconducting phases (0.02$`\le `$$`x`$$`\le `$0.05), a spin-glass phase is found . There are indications, but no direct evidence, that this phase is formed by frozen AF clusters, which could originate from the spatial segregation of doped holes in CuO<sub>2</sub> planes: a “cluster spin-glass” . Strikingly, this spin-glass phase is found to coexist with superconductivity (see also ). - Commensurability effects around $`x`$=0.125 (=1/8) and/or subtle structural modifications help restore long-range AF order. This is also understood as a consequence of segregation of doped holes, but here the charges are observed to order into 1D domain walls, or “stripes” . Again, magnetic order is claimed to coexist with bulk superconductivity . Clearly, the context of static magnetism and charge segregation in which superconductivity takes place is the central question in this region of the phase diagram . So, a lot should be learnt from the microscopic nature of the cluster spin-glass phase, which has not been clarified yet, and from the passage from spin-glass to superconducting behaviour. Here, we address this problem through a comprehensive nuclear magnetic resonance (NMR) and nuclear quadrupole resonance (NQR) investigation of La<sub>1.94</sub>Sr<sub>0.06</sub>CuO<sub>4</sub>, a compound on the verge of the (underdoped) superconducting phase ($`T_c`$=7 K). In addition to confirming the coexistence of spin-glass and superconducting phases, the AF-cluster nature of the spin-glass is microscopically demonstrated from <sup>63</sup>Cu and <sup>139</sup>La NMR spectra. We discuss how the observed microscopic phase separation can be related to the microsegregation of mobile holes in CuO<sub>2</sub> planes, and suggest that the cluster spin-glass is the magnetic counterpart of a pinned, disordered, stripe phase: a “stripe-glass” . The sample is a single crystal ($`\sim `$200 mg), grown from solution as described in Ref. . Magnetization measurements have shown a superconducting transition with an onset at $`T_c`$=7 K. We first discuss the NQR measurements. The <sup>63</sup>Cu nuclear spin-lattice relaxation rate 1/$`{}^{63}T_1`$ was measured at the center of the NQR line shown in Fig. 1(a).
The recovery of the magnetization after a sequence of saturating pulses was a single exponential at all temperatures. The results are shown in Fig. 1(b) . It is remarkable that for the same hole concentration and a similar $`T_c`$, we obtain identical Cu NQR spectra (central frequency, width, and small high-frequency tail from the anomalous “B” line, i.e. sites with a localized doped hole ) and the same $`{}^{63}T_1`$ values as Fujiyama et al. . All these quantities are strongly doping-dependent. This is a very good indication of the precision and the homogeneity of the Sr concentration in our sample, $`x`$=0.06$`\pm `$0.005. Below 250 K, 1/$`{}^{63}T_1`$ flattens, and it decreases below $`\sim `$150 K. This regime could not, however, be explored since the Cu nuclear spin-spin relaxation time ($`T_2`$) shortens drastically upon cooling, making the NMR signal too small for reliable measurements, especially below $`\sim `$50 K. A useful substitute for <sup>63</sup>Cu measurements is the NQR/NMR of <sup>139</sup>La. Although La lies outside the CuO<sub>2</sub> planes, it is coupled to Cu<sup>2+</sup> spins through a hyperfine interaction whose magnitude is small compared to that on <sup>63</sup>Cu, leading to a long value of $`{}^{139}T_2`$. A typical <sup>139</sup>La NQR line (3$`\nu _Q`$ transition) is shown in Fig. 1(c). The asymmetry is perfectly accounted for by a two-gaussian fit, which is very similar to that found in stripe-ordered La<sub>1.48</sub>Nd<sub>0.4</sub>Sr<sub>0.12</sub>CuO<sub>4</sub> . The existence of two electric field gradient contributions is related to static charge inhomogeneities, either directly and/or indirectly through different tilt configurations of the CuO<sub>6</sub> octahedra. By comparing the recovery law of the <sup>139</sup>La magnetization after saturation of the 2$`\nu _Q`$ transition with that measured on the 3$`\nu _Q`$ transition, it was found that the spin-lattice relaxation is due to both magnetic and electric field gradient fluctuations around 100 K. However, below 75 K 1/$`T_1`$ increases progressively upon cooling and becomes entirely of magnetic origin. As seen in Fig. 1(d), 1/$`T_1`$ increases by almost three orders of magnitude, with a peak at $`T_g`$$`\sim `$5 K. This behaviour is typical of a slowing down of spin fluctuations: 1/$`T_1`$ reaches a maximum when the frequency of these fluctuations equals the nuclear resonance frequency, here $`\nu _Q`$$`\approx `$18 MHz (or, equivalently, a correlation time $`\tau `$$`\approx `$10<sup>-8</sup> s). Thus, a spin-freezing occurs in the superconducting state of La<sub>1.94</sub>Sr<sub>0.06</sub>CuO<sub>4</sub>. This adds a new item to the list of unconventional properties of the cuprates which have to be addressed by any theory. The scale, microscopic or mesoscopic, on which both types of order coexist is a crucial question which cannot be addressed here. But we stress again that our results are representative of a homogeneous x=0.06 Sr concentration. This is also confirmed by the value $`T_g`$$`\sim `$5 K, which is in quantitative agreement with the carefully established NQR and $`\mu `$SR phase diagrams of La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> (the characteristic times of NQR and $`\mu `$SR are similar). The freezing process is characterized by a high level of inhomogeneity, since a very wide distribution of $`T_1`$ values develops below 50 K, as inferred from the stretched-exponential time decay of the nuclear magnetization .
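To illustrate why a peak in 1/$`T_1`$ signals $`\omega \tau \approx `$1, a schematic sketch of the standard slowing-down phenomenology can help. The code below is our illustration, not the authors' analysis: it assumes a BPP-type relaxation form and an activated correlation time, with the barrier chosen purely so that the peak lands near $`T_g`$$`\sim `$5 K.

```python
import numpy as np

# Schematic BPP-type form: 1/T1 ~ A * tau / (1 + (omega*tau)^2), with an
# assumed activated slowing down tau(T) = tau0 * exp(Ea/T). All numbers
# are illustrative, not fits to the data.
nu_Q = 18e6                       # 139-La NQR frequency (Hz), as quoted above
omega = 2.0*np.pi*nu_Q
tau0, Ea = 1e-12, 45.0            # assumed attempt time (s) and barrier (K),
                                  # Ea chosen so the peak sits near 5 K

def inv_T1(T, A=1.0):
    tau = tau0*np.exp(Ea/T)
    return A*tau/(1.0 + (omega*tau)**2)

for T in (20.0, 10.0, 7.0, 5.0, 4.0, 3.0):
    print(T, inv_T1(T))
# The maximum occurs where omega*tau = 1, i.e. tau ~ 1/omega ~ 1e-8 s,
# matching the correlation time quoted at T_g.
```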
As already noticed in Refs. , the slowing down starts around 70 K, in the temperature range where the in-plane resistivity $`\rho _{ab}`$ has a minimum. Thus, charge localization seems to be a precursor effect of Cu<sup>2+</sup> spin freezing. It is also important to probe the local static magnetization in the CuO<sub>2</sub> planes. This can be characterized through the shift $`K_{cc}`$ (for $`H_0\parallel c`$) of the <sup>63</sup>Cu NMR line, which is the sum of a $`T`$-independent orbital term $`K^{\mathrm{orb}}\approx `$1.2% plus a contribution from the spin susceptibility: $${}^{63}K_{cc}^{\mathrm{spin}}=\frac{(A_{cc}+4B)}{g_{cc}\mu _B}\frac{<S_z>}{H_0}.$$ (1) $`A_{cc}`$ is the hyperfine coupling with on-site electrons, $`B`$ the transferred hyperfine coupling with electrons on the first Cu neighbour, $`g`$ the Landé factor, and $`<S_z>`$ the on-site Cu moment, here assumed to be spatially homogeneous on the scale of the Cu-Cu distance. Since $`A_{cc}+4B\approx `$0 in La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> and YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub>, one usually has a negligible magnetic shift $`{}^{63}K_{cc}^{\mathrm{spin}}\approx `$0. The inset to Fig. 2 shows the <sup>63</sup>Cu NMR central line at room temperature. There are clearly two contributions: a relatively sharp line with the usual shift $`K_c\approx `$1.2%, and a slightly shifted, much broader, background. The perfect overlap of the NMR intensity vs. shift plots at 17 and 24 Tesla asserts that the broadening is purely magnetic, i.e. it is a distribution of shifts $`K_c`$. This distribution is considerable ($`\pm `$2-3%), exceeding by far anything ever seen in the cuprates. Also striking is the $`T`$-dependence of the spectrum (Fig. 2). The NMR signal clearly diminishes upon cooling. The effect is more dramatic for the main peak, which disappears between 100 and 50 K. At 50 K, the spectrum is composed only of a background, at least two times wider than at 300 K. The shortening of $`T_2`$ (by a factor of two from 300 K to 100 K) accounts for a small fraction of the intensity loss. Some signal is redistributed from the main peak to the background signal, but part of it is actually not observed, due to the huge spread of resonance frequencies. It is evident from Eqn. 1 that $`K_c\ne 0`$ values are possible only if $`<S_z>`$ is strongly spatially modulated on the scale of one lattice spacing, so that the shift for a Cu site at position $`(x,y)`$ cannot be written as in Eqn. 1, but contains the sum of terms: $`A_{cc}`$$`<S_z(x,y)>`$+$`B`$\[$`<S_z(x\pm 1,y)>`$ + $`<S_z(x,y\pm 1)>`$\]. In fact, large values of $`K_c`$ such as found here imply that the local magnetization is staggered: the cancellation of the $`A_{cc}<`$0 and $`B>`$0 terms in Eqn. 1 is removed by the sign alternation of $`<S_z>`$ from one site to its nearest Cu neighbours, thus allowing $`|K_c|\ne 0`$ locally. The presence of substantial staggered magnetization is striking. One way to generate such enhanced AF correlations could be that some localized doped holes act as static defects in the magnetic lattice, somehow similar to the substitution of Zn for Cu . However, only one broadened peak is detected in Cu NMR studies of Zn-doped YBCO, while there are here two well-defined magnetic phases (see also the <sup>139</sup>La results below). Furthermore, there is some staggered magnetization already at 290 K, where $`\rho _{ab}`$ is metallic-like, and the <sup>63</sup>Cu NQR B line, which is known to be related to localized holes , is extremely weak here.
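The cancellation argument can be made concrete with a toy calculation. The couplings below are chosen by us so that $`A_{cc}+4B=0`$ holds exactly; only this near-cancellation, which is approximately realized in the real material, matters for the argument.

```python
# Local 63-Cu shift contribution at site (x, y) for a given pattern of <S_z>:
# K_loc ~ A_cc*<S_z(x,y)> + B * (sum over the four nearest-neighbour <S_z>).
A_cc = -4.0   # on-site coupling (negative, as in the text); illustrative units
B = 1.0       # transferred coupling to each of the four Cu neighbours

def local_shift(S_center, S_neighbours):
    return A_cc*S_center + B*sum(S_neighbours)

S = 0.1  # magnitude of the local moment (arbitrary units)

# Uniform polarization: the A_cc and 4B terms cancel -> negligible shift.
print(local_shift(S, [S, S, S, S]))          # ~0, the usual K_spin ~ 0 case

# Staggered (antiferromagnetic) pattern: neighbours carry the opposite sign,
# the cancellation is removed and |K_loc| is large, alternating in sign
# from site to site -- hence a very broad, two-sided shift distribution.
print(local_shift(S, [-S, -S, -S, -S]))      # (A_cc - 4B)*S != 0
print(local_shift(-S, [S, S, S, S]))         # opposite sign on the next site
```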
So, an impurity-like effect from localized holes does not explain the data. To our knowledge, the only other situation which could generate an inhomogeneous staggered magnetization is the presence of magnetic clusters, such as would be generated by finite-size hole-free regions. The corollary of this is the presence of surrounding hole-rich regions. Their exact topology cannot be inferred here, so we will call them “domain-walls”. In such a scenario, the main peak, which disappears at low $`T`$, corresponds to hole-rich regions, i.e. where domain-walls are still mobile. In fact, the wall-motion averages out $`<S_z>`$ (spin-flips), yielding a narrow central peak. This also reduces the magnetic coupling between hole-poor domains. The spatially inhomogeneous profile of $`<S_z>`$ within each domain and the distribution of cluster sizes yield the broad background. Full localization of domain-walls is likely to restore inter-cluster magnetic coupling, thus enabling spin-freezing. Of course, there must be significant disorder in the domain-wall topology, in order to prevent long-range AF ordering. The disappearance of the main Cu peak is compatible with the localization of walls, which reduces the effective width of hole-rich regions. Accordingly, this peak disappears in the temperature region where $`\rho _{ab}`$ becomes insulating-like. The concomitant growth of $`<S_z>`$ explains the broadening of the background signal. <sup>139</sup>La NMR spectra offer a second possibility to probe the phase separation in CuO<sub>2</sub> planes. As shown in Fig. 3, a second peak emerges upon cooling on the low-frequency side of the spectrum. Qualitatively, we can ascribe the new peak to the <sup>139</sup>La nuclei within AF clusters, as a confirmation of the <sup>63</sup>Cu NMR spectra. Similar experiments at 4.7 Tesla show a single peak (not shown), with a $`T`$-dependent asymmetry which is well fitted by the sum of two gaussians, whose separation is half of that at 9.4 T. This again proves that the peaks are related to two different magnetic environments. Additional magnetic broadening at low $`T`$ makes the two <sup>139</sup>La peaks unresolved, and not surprisingly, the broadening becomes noticeable below $`\sim `$70 K, where the spin fluctuations start to slow down. Again, we stress that macroscopic doping inhomogeneities in the sample would not produce such a $`T`$-dependence of the relative intensities of the two NMR contributions. The observed phase separation clearly develops on decreasing temperature. Furthermore, similar <sup>139</sup>La NMR results have been recently obtained in La<sub>1.9</sub>Sr<sub>0.1</sub>CuO<sub>4</sub> and in La<sub>2</sub>CuO<sub>4+δ</sub> at a concentration where long-range spin and charge ordering are absent . This shows that the results are not unique to our Sr concentration. Rather, phase separation appears to be a general tendency in these materials. In fact, most striking is probably the similarity between our <sup>139</sup>La NMR spectra and those reported in stripe-ordered nickelates , although details differ due to the difference of hyperfine interactions, doping levels and stripe configurations between cuprates and nickelates. A quantitative analysis, like the comparison between <sup>63</sup>Cu and <sup>139</sup>La spectra, is however difficult since a number of Cu nuclei are not observed and hyperfine interactions are not well known for <sup>139</sup>La in the paramagnetic phase.
Furthermore, the relation of the <sup>139</sup>La peak intensity ratio to the relative size of the two phases is expected to be much more complex than the value $`\sim `$1/16 determined by the hole concentration. Many microscopic details like the profile of the spin modulation and the organization (topology, filling) of the hole-rich region are involved. Even in the case of La<sub>5/3</sub>Sr<sub>1/3</sub>NiO<sub>4</sub>, with established stripe order, the two-peak intensity ratio is not well understood . Fig. 4 summarizes our findings in La<sub>1.94</sub>Sr<sub>0.06</sub>CuO<sub>4</sub>: <sup>63</sup>Cu and <sup>139</sup>La NMR spectra reveal that magnetic phase separation develops below room temperature. The data are best explained in terms of hole-poor regions (AF clusters are evidenced through an anomalous NMR line) and hole-rich regions (contributing a more usual line). In the regime where doped holes are localized (“charge glass”), the dynamics of staggered moments, probed by NMR relaxation, slows down. Below 5 K, in the superconducting state, AF clusters are frozen, a phase called “cluster spin-glass”. Although no direct evidence for stripe-like objects is claimed here, the evidence for their existence at somewhat higher doping ($`x\approx `$0.12 ) does suggest that hole-rich regions are related to charge-stripes that are progressively pinned by random (Sr) disorder as $`T`$ decreases. The charge-frozen state would then correspond to a static disordered stripe phase: a “stripe-glass” . The above conclusions are further supported by: 1) the already mentioned similarities with NMR data in stripe-ordered materials, 2) the fact that even materials with well-established stripe order tend to have a glassy behaviour , 3) the presence of incommensurate elastic peaks in neutron scattering for $`x`$=0.06 , 4) the two-component ARPES spectra in the spin-glass region . This observation of two-phase NMR spectra in superconducting LSCO, to our knowledge the first of its kind, opens new perspectives: Given the similarities between LSCO and YBCO , an NMR re-investigation of their underdoped regime is clearly called for. Useful exchanges with H.B. Brom, V.J. Emery, R.J. Gooding, P.C. Hammel, A. Rigamonti and B.J. Suh are acknowledged. We thank S. Aldrovandi, Z.H. Jang, E. Lee, L. Linati and F. Tedoldi for help, as well as J.E. Ostenson and D.K. Finnemore for magnetization measurements. The work in Pavia was supported by the INFM-PRA SPIS funding. Ames Laboratory is operated for the U.S. Department of Energy by Iowa State University under Contract No. W-7405-Eng-82. The work at Ames Laboratory was supported by the Director for Energy Research, Office of Basic Energy Sciences.
# HOW TO FIND THE QCD CRITICAL POINT<sup>1</sup><sup>1</sup>1Work done in collaboration with Misha Stephanov and Edward Shuryak.[1] ## 1 Introduction In my talk at Strong and Electroweak Matter ’98, I presented recent work on the physics which arises in two different areas of the QCD phase diagram. Cold dense quark matter forms a color superconductor, and I compared the superconducting phase expected in QCD with two massless quarks with that expected for three massless quarks in which chiral symmetry is broken by color-flavor locking. Alford, Berges and I have recently completed an analysis of the phase diagram of zero temperature QCD as a function of density and strange quark mass, in Ref. . This brings the ideas I presented in Copenhagen together into a consistent picture, and I refer you to that paper for an up-to-date treatment of the subject, and references to the literature. The other half of my talk in Copenhagen was a sketch of methods by which present heavy ion experiments can find the critical point in the QCD phase diagram at nonzero temperature $`T`$ and baryon chemical potential $`\mu `$. Like the end point in the electroweak phase diagram, discussed by others at this meeting, this critical point is a second order transition in the Ising universality class which occurs at the end of a line of first order phase transitions. Stephanov, Shuryak and I have recently completed a detailed analysis of the signatures of the physics characteristic of the vicinity of this point, begun in Ref. . I described this work in a preliminary fashion in Copenhagen; in these proceedings, I summarize the results and implications of Ref. . Those interested in the derivation of these results should see Ref. . Large acceptance detectors, such as NA49 and WA98 at CERN, have made it possible to measure important average quantities in single heavy ion collision events. For example, instead of analyzing the distribution of charged particle transverse momenta obtained by averaging over particles from many events, we can now study the event-by-event variation of the mean transverse momentum of the charged pions in a single event, $`p_T`$.<sup>2</sup><sup>2</sup>2We denote the mean transverse momentum of all the pions in a single event by $`p_T`$ rather than $`\langle p_T\rangle `$ because we choose to reserve $`\langle \cdots \rangle `$ for averaging over an ensemble of events. Although much of this data still has preliminary status, with more statistics and more detailed analysis yet to come, some general features have already been demonstrated. In particular, the event-by-event distributions of these observables are as perfect Gaussians as the data statistics allow, and the fluctuations — the width of the Gaussians — are small. This is very different from what one observes in $`pp`$ collisions, in which fluctuations are large. These large non-Gaussian fluctuations clearly reflect non-trivial quantum fluctuations, all the way from the nucleon wave function to that of the secondary hadrons, and are not yet sufficiently well understood. As discussed in Refs. , thermal equilibration in $`AA`$ collisions drives the variance of the event-by-event fluctuations down, close to the variance of the inclusive one-particle distribution divided by the multiplicity. Can we learn something from the magnitude of these small fluctuations and their dependence on the parameters of the collision? What do the widths of the Gaussians tell us about the thermodynamics of QCD? Some of these questions have been addressed in Refs.
where it was pointed out that, for example, temperature fluctuations are related to heat capacity via $$\frac{\langle (\mathrm{\Delta }T)^2\rangle }{T^2}=\frac{1}{C_V(T)},$$ (1) and so can tell us about thermodynamic properties of the matter at freeze-out. Furthermore, Mrówczyński has discussed the study of the compressibility of hadronic matter at freeze-out via the event-by-event fluctuations of the particle number, and Gaździcki and Mrówczyński have considered event-by-event fluctuations of the kaon to pion ratio as measured by NA49 . In $`pp`$ physics one can hope to extract quantum mechanical information about the initial state from event-by-event fluctuations of the final state; in heavy ion collisions equilibration renders this an impossible goal. In $`AA`$ collisions, then, the new goal is to use the much smaller, Gaussian event-by-event fluctuations of the final state to learn about thermodynamic properties at freeze-out. It is worth noting that once a large acceptance detector has presented convincing evidence that the event-by-event distribution of, for example, $`p_T`$ is Gaussian, then the measurement of the width of such a distribution can be accomplished by “event-by-event” measurements in which only two pions per event are observed. This has recently been emphasized by Białas and Koch. Of course, this approach measures the width of the event-by-event distribution whether or not it is Gaussian; it is only the results of a large acceptance experiment like NA49 which motivate a thermodynamic analysis of the event-by-event fluctuations. Stephanov, Shuryak and I focus on observables constructed from the multiplicity and the momenta of the charged particles in the final state, as measured by NA49. We leave the extension of the methods of this paper to the study of thermodynamic implications of the NA49 Gaussian distribution of event-by-event $`K/\pi `$ ratios and of the WA98 Gaussian distribution of event-by-event $`\pi ^0/\pi ^\pm `$ ratios for future work. One of the lessons of our paper is that it is difficult to apply thermodynamic relations like (1) directly. To see a sign of this, note that the event-by-event fluctuations of the energy $`E`$ of a part of a finite system in thermal equilibrium are given by $`\langle (\mathrm{\Delta }E)^2\rangle =T^2C_V(T)`$. For a system in equilibrium, the mean values of $`T`$ and $`E`$ are directly related by an equation of state $`E(T)`$; their fluctuations, however, have quite different behavior as a function of $`C_V`$, and therefore behave differently when $`C_V`$ diverges at a critical point. The fluctuations of “mechanical” observables increase at the critical point. Because $`T(E)`$ is singular at the critical point, the fluctuations of $`T`$ decrease there. It is a fact that what we measure are the mechanical observables, and since we in general only know $`T(E)`$ for simple systems we call thermometers, we cannot apply (1) to the complicated system of interest. It is not in fact necessary to translate the observed “mechanical” variable (the mean transverse momentum $`p_T`$ for example) into a temperature in order to detect the critical point. It is easier to look directly at the fluctuations of observable quantities. We demonstrate that the fluctuations of $`p_T`$ grow at the critical point. Although our methods are general, we focus in Ref. on how to use them to find and study the critical end-point E on the phase diagram of QCD in the $`(T,\mu )`$ plane.
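The opposite trends follow directly from the two relations just quoted; a minimal numerical illustration (ours, in arbitrary units):

```python
# <(Delta E)^2> = T^2 * C_V grows with the heat capacity, while
# <(Delta T)^2>/T^2 = 1/C_V shrinks: as C_V diverges at the critical point,
# the mechanical observable fluctuates more and the temperature less.
T = 1.0
for C_V in (1e1, 1e3, 1e5):
    dE2 = T**2 * C_V          # fluctuations of the mechanical observable
    dT2_rel = 1.0 / C_V       # relative temperature fluctuations, Eq. (1)
    print(C_V, dE2, dT2_rel)
```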
The possible existence of such a point, as an endpoint of the first order transition separating quark-gluon plasma from hadron matter, and its universal critical properties have been pointed out recently in Refs. . The point E can be thought of as a descendant of a tricritical point in the phase diagram for 2-flavor QCD with massless quarks. In a previous letter, we have laid out the basic ideas for observing the critical endpoint . The signatures proposed in Ref. are based on the fact that such a point is a genuine thermodynamic singularity at which susceptibilities diverge and the order parameter fluctuates on long wavelengths. The resulting signatures all share one common property: they are nonmonotonic as a function of an experimentally varied parameter such as the collision energy, centrality, rapidity or ion size. Once experimentalists vary a control parameter which causes the freeze-out point in the $`(T,\mu )`$ plane to move toward, through, and then past the vicinity of the endpoint E, they should see all the signatures we describe first strengthen, reach a maximum, and then decrease, as a nonmonotonic function of the control parameter. It is important to have a control parameter whose variation changes the $`\mu `$ at which the system crosses the transition region and freezes out. The collision energy is an obvious choice, since it is known experimentally that varying the collision energy has a large effect on $`\mu `$ at freeze-out. Other possibilities should also be explored.<sup>3</sup><sup>3</sup>3If the system crosses the transition region near E, but only freezes out at a much lower temperature, the event-by-event fluctuations will not reflect the thermodynamics near E. In this case, one can push freeze-out to earlier times and thus closer to E by using smaller ions. We assume throughout that freeze-out occurs from an equilibrated hadronic system. If freeze-out occurs “to the left” (lower $`\mu `$; higher collision energy) of the critical end point E, it occurs after the matter has traversed the crossover region in the phase diagram. If it occurs “to the right” of E, it occurs after the matter has traversed the first order phase transition. This is the situation in which our assumption of freeze-out from an equilibrated system is most open to question. First, one may imagine hadronization directly from the mixed phase, without time for the hadrons to rescatter. Hadronic elastic scattering cross-sections are large enough that this is unlikely. Second, one may worry that the matter is inhomogeneous after the first order transition, and has not had time to re-equilibrate. Fortunately, our assumption is testable. If the matter were inhomogeneous at freeze-out, one can expect non-Gaussian fluctuations in various observables which would be seen in the same experiments that seek the signatures we describe. We focus on the Gaussian thermal fluctuations of an equilibrated system, and study the nonmonotonic changes in these fluctuations associated with moving the freeze-out point toward and then past the critical point, for example from left to right as the collision energy is reduced. Ref. is devoted to a detailed analysis of the physics behind event-by-event fluctuations in relativistic heavy ion collisions and the resulting effects unique to the vicinity of the critical point in the phase diagram of QCD. 
Most of our analysis is applied to the fluctuations of the observables characterizing the multiplicity and momenta of the charged pions in the final state of a heavy ion collision. There are several reasons why the pion observables are most sensitive to the critical fluctuations. First, the pions are the most numerous hadrons produced and observed in relativistic heavy ion collisions. A second, very important reason is that pions couple strongly to the fluctuations of the sigma field (the magnitude of the chiral condensate), which is the order parameter of the phase transition. Indeed, the pions are the quantized oscillations of the phase of the chiral condensate and so it is not surprising that at the critical end point, where the magnitude of the condensate is fluctuating wildly, signatures are imprinted on the pions. ## 2 Noncritical Thermal Fluctuations in Heavy Ion Collisions Before we discuss the effects of the critical fluctuations, we must analyze the thermal fluctuations which are present if freeze-out does not occur in the vicinity of the critical point. In this section, but not throughout this paper, we assume that the system freezes out far from the critical point in the phase diagram, and can be approximated as an ideal resonance gas when it freezes out. We compare some of our results to preliminary data from the NA49 experiment on PbPb collisions at 160 AGeV, and find broad agreement. The results obtained seem to support the hypotheses that most of the fluctuation observed in the data is indeed thermodynamic in origin, and that this system is not freezing out near the critical point. We begin by analyzing the fluctuations in an ideal Bose gas of pions, and then add the effects which this simple treatment neglects; throughout, we assume that no effects due to critical fluctuations are significant. We model the matter in a relativistic heavy ion collision at freeze-out as a resonance gas in thermal equilibrium, and begin by calculating the variance of the event-by-event fluctuations of total multiplicity $`N`$. The fluctuations in $`N`$ are not affected by the boost which the pion momenta receive from the collective flow, but they are contaminated experimentally by fluctuations in the impact parameter. This experimental contamination can be reduced by making a tight enough centrality cut using a zero degree calorimeter. We find $`\langle (\mathrm{\Delta }N)^2\rangle /\langle N\rangle \approx 1.5`$, which we compare with NA49 results from central Pb-Pb collisions at 160 AGeV. It is clear that with no cut on centrality, one would see a very wide non-Gaussian distribution of multiplicity determined by the geometric probability of different impact parameters $`b`$. Gaussian thermodynamic fluctuations can only be seen if a tight enough cut in centrality is applied. The event-by-event $`N`$-distribution found by NA49 when they use only the $`5\%`$ most central of all events, with centrality measured using a zero degree calorimeter, is Gaussian to within about $`5\%`$. This cut corresponds to keeping collisions with impact parameters $`b<3.5`$ fm. The non-Gaussianity could be further reduced by tightening the centrality cut further. From the data, we have $`\langle (\mathrm{\Delta }N)^2\rangle /\langle N\rangle =2.008\pm 0.009`$, which suggests that about $`75\%`$ of the observed fluctuation is thermodynamic in origin.
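The ideal-Bose-gas part of this estimate is compact enough to sketch. The code below is ours and uses the thermal freeze-out parameters quoted in the next paragraph; mode-by-mode Bose statistics give $`\langle \mathrm{\Delta }n_p^2\rangle =n_p(1+n_p)`$, and resonance decays (not included here) push the ratio up toward 1.5.

```python
import numpy as np
from scipy.integrate import quad

m_pi, T, mu = 139.6, 120.0, 60.0          # MeV; freeze-out values used below

def n_BE(p):
    E = np.hypot(p, m_pi)
    return 1.0/(np.exp((E - mu)/T) - 1.0)

# Ideal Bose gas: <(Delta N)^2>/<N> = int p^2 n(1+n) dp / int p^2 n dp
num = quad(lambda p: p**2*n_BE(p)*(1.0 + n_BE(p)), 0.0, 3000.0)[0]
den = quad(lambda p: p**2*n_BE(p), 0.0, 3000.0)[0]
print(num/den)   # ~1.2 from Bose statistics alone; adding resonances
                 # raises the full model prediction toward the quoted 1.5
```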
The contamination introduced into the data by fluctuations in centrality could be reduced by analyzing data samples with more or less restrictive cuts but the same $`N`$, and extrapolating to a limit in which the cut is extremely restrictive. This could be done using cuts centered at any centrality. Our resonance gas model predicts that as the centrality cut is tightened, the ratio $`v_{\mathrm{ebe}}^2(N)/N`$ should decrease toward a limit near 1.5. Although further work is certainly required, it is already apparent that the bulk of the multiplicity fluctuations observed in the data are thermodynamic in origin, with the remainder presumably due to fluctuations in the impact parameter. Note that our prediction is strongly dependent on the presence of the resonances; had we not included them, our prediction would have been significantly lower, farther below the data. Because the multiplicity fluctuations are sensitive to impact parameter fluctuations, it may prove difficult to explain their magnitude with greater precision even in future. However, the fact that they are largely thermodynamic in origin suggests that the effects present near the critical point, which we describe below, could result in a significant nonmonotonic enhancement of the multiplicity fluctuations. This would be of interest whether or not the noncritical fluctuations on top of which the nonmonotonic variation occurs are understood with precision. We then turn to a calculation of the variance of the event-by-event fluctuations of the mean transverse momentum, $`p_T`$. We first calculate the width of the inclusive $`p_T`$-distribution, $`v_{\mathrm{inc}}(p_T)`$. In the absence of any correlations, the event-by-event fluctuations of the mean transverse momentum of the charged pions in an event, $`v_{\mathrm{ebe}}(p_T)\equiv \langle (\mathrm{\Delta }p_T)^2\rangle ^{1/2}`$, would be given by $`v_{\mathrm{inc}}(p_T)/N^{1/2}`$, and this turns out to be a very good approximation in the present data as we discuss below. We calculate numerically the contribution to $`v_{\mathrm{inc}}(p_T)`$ from “direct pions”, already present at freeze-out, and from the pions generated later by resonance decay. We have simulated a gas of pions, nucleons and resonances in thermal equilibrium at freeze-out, including the $`\pi `$, $`K`$, $`\eta `$, $`\rho `$, $`\omega `$, $`\eta ^{\prime }`$, $`N`$, $`\mathrm{\Delta }`$, $`\mathrm{\Lambda }`$, $`\mathrm{\Sigma }`$ and $`\mathrm{\Xi }`$, and then simulated the subsequent decay of the resonances. That is, we have generated an ensemble of pions in three steps: (i) Thermal ratios of hadron multiplicities were calculated assuming equilibrium ratios at chemical freeze-out. Following , the values $`T_{\mathrm{ch}}=170`$ MeV and $`\mu _{\mathrm{baryon}}=200`$ MeV were used. (ii) Then, a program generates hadrons with multiplicities determined at chemical freeze-out, but with thermal momenta as appropriate at the thermal freeze-out temperature, which we take to be $`T_\mathrm{f}=120`$ MeV, with $`\mu _\pi =60`$ MeV. The last step (iii) is to decay all the resonances. From the resulting ensemble of pions (the sum of the direct pions and those from the resonances) we obtain $`v_{\mathrm{inc}}(p_T)/p_T=0.66`$. The resonances turn out to be less important here than in the calculation of the multiplicity fluctuations, in that the resonance gas prediction for $`v_{\mathrm{inc}}(p_T)/p_T`$ is almost indistinguishable from that of an ideal Bose gas of pions.
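The ideal-gas piece of this number is also easy to reproduce. The sketch below (ours) computes the inclusive width directly from a static thermal Bose spectrum, without the resonance-decay step of the full simulation; it should land in the neighbourhood of the quoted 0.66.

```python
import numpy as np
from scipy.integrate import dblquad

m_pi, T, mu = 139.6, 120.0, 60.0   # MeV; T_f and mu_pi from the text

def w(pz, pt):
    """Thermal Bose weight times the p_T phase-space factor."""
    E = np.sqrt(pt**2 + pz**2 + m_pi**2)
    return pt/(np.exp((E - mu)/T) - 1.0)

def moment(k):
    # Integrate p_T^k over the spectrum; p_z restricted to a half-line by
    # symmetry (the factor of two cancels in every ratio below).
    f = lambda pz, pt: pt**k * w(pz, pt)
    return dblquad(f, 0.0, 3000.0, 0.0, 3000.0)[0]

n0, p1, p2 = moment(0), moment(1), moment(2)
mean_pt = p1/n0
v_inc = np.sqrt(p2/n0 - mean_pt**2)
print(mean_pt, v_inc/mean_pt)   # ratio comes out close to the quoted 0.66
```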
This is not the case: by that stage the hadronic matter is undergoing a collective hydrodynamic expansion in the transverse direction, and this must be taken into account in order to compare our results with the data. A very important point here is that the fluctuations in pion multiplicity are not affected by flow, and our prediction for them is therefore unmodified. However, the event-by-event fluctuations of mean $`p_T`$ are certainly affected by flow. The fluctuations we have calculated pertain to the rest frame of the matter at freeze-out, and we must now boost them. A detailed account of the resulting effects would require a complicated analysis. We use the simple approximation that the effects of flow on the pion momenta can be treated as a Doppler blue shift of the spectrum: $`n(p_T)\rightarrow n(p_T\sqrt{1-\beta }/\sqrt{1+\beta })`$. This blue shift increases $`p_T`$, and increases $`v_{\mathrm{inc}}(p_T)`$, but leaves the ratio $`v_{\mathrm{inc}}(p_T)/p_T`$ (and therefore the ratio $`v_{\mathrm{ebe}}(p_T)/p_T`$) unaffected. However, event-by-event fluctuations in the flow velocity $`\beta `$ must still be taken into account. This issue was discussed qualitatively already in , where it was argued that this effect must be relatively weak. In Ref. we provide the first rough estimate of its magnitude. We estimate that fluctuations in the flow velocity increase $`v_{\mathrm{inc}}(p_T)/p_T`$ from $`0.66`$ to $`0.67`$. The largest uncertainty in our estimate for $`v_{\mathrm{inc}}(p_T)/p_T`$ is not due to the fluctuations in the flow velocity, which can clearly be neglected, but is due to the velocity itself. The blue shift approximation which we have used applies quantitatively only to pions with momenta greater than their mass . Because of the nonzero pion mass, boosting the pions does not actually scale the momentum spectrum by a momentum-independent factor. Furthermore, in a real heavy ion collision there will be a position-dependent profile of velocities, rather than a single velocity $`\beta `$. A more complete calculation of $`v_{\mathrm{inc}}(p_T)/p_T`$ would require a better treatment of these effects in a hydrodynamic model; we leave this for the future. We compare our results to the NA49 data, in which $`v_{\mathrm{inc}}(p_T)/p_T=0.749\pm 0.001`$. We see that the major part of the observed fluctuation in $`p_T`$ is accounted for by the thermodynamic fluctuations we have considered. Our prediction is about $`10\%`$ lower than that in the data. First, this suggests that there may be a small nonthermodynamic contribution to the $`p_T`$-fluctuations, for example from fluctuations in the impact parameter. (However, we expect that the fluctuations of an intensive quantity like $`p_T`$ are less sensitive to impact parameter fluctuations than are those of the multiplicity, and this seems to be borne out by the data.) The other source of the discrepancy is the blue shift approximation. We leave a more sophisticated treatment of the effects of flow on the spectrum to future work. Such a treatment is necessary before we can estimate how much of the $`10\%`$ discrepancy is introduced by the blue shift approximation. Future work on the experimental side (varying the centrality cut) could lead to an estimate of how much of the discrepancy is due to impact parameter fluctuations. We have gone as far as we will go in this paper in our quest to understand the thermodynamic origins of the width of the inclusive single particle distribution.
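Within the blue shift approximation the flow factor cancels identically in the ratio, since every momentum is rescaled by the same constant. A two-line demonstration (ours, on a stand-in spectrum):

```python
import numpy as np

rng = np.random.default_rng(1)
pt = rng.exponential(250.0, size=100_000)     # stand-in p_T sample (MeV)

beta = 0.4                                    # illustrative transverse velocity
doppler = np.sqrt((1.0 + beta)/(1.0 - beta))  # blue-shift factor
pt_boosted = doppler*pt

for sample in (pt, pt_boosted):
    print(sample.std()/sample.mean())         # identical ratios: v_inc/<p_T>
                                              # is blind to a uniform boost
```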
We now turn to the ratio of the scaled event-by-event variation to the variance of the inclusive distribution: $$\sqrt{F}\equiv \frac{N^{1/2}v_{\mathrm{ebe}}(p_T)}{v_{\mathrm{inc}}(p_T)}=1.002\pm 0.002.$$ (2) The difference between the scaled event-by-event variance and the variance of the inclusive distribution is less than a percent in the NA49 data.<sup>4</sup><sup>4</sup>4We explain in an Appendix in Ref. that in order to be sure that $`F=1`$ when there are no correlations between pions, care must be taken in constructing an estimator for $`v_{\mathrm{ebe}}(p_T)`$ using a finite sample of events, each of which has finite multiplicity. The appropriate prescription is to weight events in the event-by-event average by their multiplicity, and we have made the appropriate correction in writing (2). Other authors have introduced the correlation measure $`\mathrm{\Phi }_{p_T}=N^{1/2}v_{\mathrm{ebe}}(p_T)-v_{\mathrm{inc}}(p_T)`$. Because $`v_{\mathrm{inc}}(p_T)`$ is scaled by the blue shift introduced by the expansion velocity, so is $`\mathrm{\Phi }_{p_T}`$. This makes $`\mathrm{\Phi }_{p_T}`$ harder to predict than $`F`$. However, for convenience, we note that if one uses the experimental value of $`v_{\mathrm{inc}}(p_T)`$, a value $`\sqrt{F}=1.01`$ corresponds to $`\mathrm{\Phi }_{p_T}=2.82`$ MeV, and the $`\sqrt{F}`$ in the data (2) corresponds to $`\mathrm{\Phi }_{p_T}=0.6\pm 0.6`$ MeV. We analyze a number of noncritical contributions to the ratio $`\sqrt{F}`$, which we write $$\sqrt{F}=\sqrt{F_BF_{\mathrm{res}}F_{\mathrm{EC}}}.$$ (3) $`F_B`$ is the contribution of the Bose enhancement of the fluctuations of identical pions. We calculate this effect and find $`\sqrt{F_B}\approx 1.02`$. $`F_{\mathrm{res}}`$ describes the effect of the correlations induced by the fact that pions produced by the decay of a resonance after freeze-out do not have a chance to rescatter. We estimate it by dividing the pions from our resonance gas simulation into “events” of varying sizes, and evaluating $`F`$. Since Bose enhancement is not included in the simulation, the $`F`$ so obtained is just $`F_{\mathrm{res}}`$. We find no statistically significant contribution, and conclude that $`|F_{\mathrm{res}}-1|<0.01`$. The third contribution, $`F_{\mathrm{EC}}`$, is due to energy conservation in a finite system. This is most easily described by considering the event-by-event fluctuations $`\mathrm{\Delta }n_p`$ in the number of pions in a bin in momentum space centered at momentum $`p`$. Consider the correlator $`\langle \mathrm{\Delta }n_p\mathrm{\Delta }n_k\rangle `$. When one $`n_p`$ fluctuates up, others must fluctuate down, and it is therefore more likely that $`n_k`$ fluctuates downward. Energy conservation in a finite system therefore leads to an anti-correlation which is off-diagonal in $`pk`$ space. $`v_{\mathrm{ebe}}(p_T)`$ is determined by $`\langle \mathrm{\Delta }n_p\mathrm{\Delta }n_k\rangle `$, and the result of this anti-correlation is a reduction: $$\sqrt{F_{\mathrm{EC}}}\approx 0.99.$$ (4) If the observed charged pions are in thermal contact with an unobserved heat bath, the anti-correlation introduced by energy conservation decreases as the heat capacity of the heat bath increases. The estimate (4) assumes that the heat capacity of the direct charged pions is about $`1/4`$ of the total heat capacity of the hadronic system at freeze-out. In addition to the contributions we calculate, $`\sqrt{F}`$ is affected by the finite two-track resolution in the detector, and by final state Coulomb interactions between charged pions.
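The two correlation measures are related algebraically, $`\mathrm{\Phi }_{p_T}=v_{\mathrm{inc}}(p_T)(\sqrt{F}-1)`$, so either can be computed from the other. A small helper (ours; the $`v_{\mathrm{inc}}`$ value is the one implied by the conversions quoted above):

```python
def phi_from_F(sqrtF, v_inc):
    # Phi_pT = N^{1/2} v_ebe - v_inc = v_inc*(sqrt(F) - 1)
    return v_inc*(sqrtF - 1.0)

v_inc = 282.0  # MeV, implied by sqrt(F) = 1.01 <-> Phi_pT = 2.82 MeV above
print(phi_from_F(1.01, v_inc))    # 2.82 MeV
print(phi_from_F(1.002, v_inc))   # ~0.56 MeV, matching the 0.6 +/- 0.6 MeV
```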
NA49 estimates that these contributions reduce $`F`$ by about the same amount that Bose enhancement increases it. We conclude that the ratio $`\sqrt{F}`$ measured by NA49 is broadly consistent with thermodynamic expectations. It receives a positive contribution from Bose enhancement, negative contributions from energy conservation and two-track resolution, and a positive contribution from the effect of resonance decays. These contributions to $`\sqrt{F}`$ are all roughly at the $`1\%`$ level (or smaller in the case of that from resonance decays) and it seems that they cancel in the data (2). Our results support the general idea that the small fluctuations observed in $`AA`$ collisions, relative to those in $`pp`$, are consistent with the hypothesis that the matter in the $`AA`$ collisions achieves approximate local thermal equilibrium in the form of a resonance gas. With more detailed experimental study, either now at the SPS, or soon at RHIC (STAR will study event-by-event fluctuations in $`p_T`$, $`N`$, particle ratios, etc; PHENIX and PHOBOS in $`N`$ only), it should be possible to disentangle the different effects we describe. Making a cut to look at only low $`p_T`$ pions should increase the effects of Bose enhancement. The anti-correlation introduced by energy conservation is due to terms in $`\langle \mathrm{\Delta }n_p\mathrm{\Delta }n_k\rangle `$ which are off-diagonal in $`pk`$. Thus, a direct measurement of $`\langle \mathrm{\Delta }n_p\mathrm{\Delta }n_k\rangle `$ would make it easy to separate this anti-correlation from other effects. The cross correlation $`\langle \mathrm{\Delta }N\mathrm{\Delta }p_T\rangle `$ is also a very interesting observable to study. It vanishes for a classical ideal gas. This means that whereas $`v_{\mathrm{ebe}}(p_T)`$ receives a dominant contribution from the width of the inclusive single particle distribution, this effect cancels in $`\langle \mathrm{\Delta }N\mathrm{\Delta }p_T\rangle `$ and the remaining effects due to Bose enhancement and energy conservation dominate. Although this cross-correlation is small, it is worth measuring because it only receives contributions from interesting effects. We hope that the combination of the theoretical tools we have provided and the present NA49 data provides a solid foundation for the future study of the thermodynamics of the hadronic matter present at freeze-out in heavy ion collisions. Once data is available for other collision energies, centralities or ion sizes, the present NA49 data and the calculations of this section will provide an experimental and a theoretical baseline for the study of variation as a function of control parameters. Our analysis demonstrates that the observed fluctuations are broadly consistent with thermodynamic expectations, and therefore raises the possibility of large effects when control parameters are changed in such a way that thermodynamic properties are changed significantly, as at a critical point. The smallness of the statistical errors in the data also highlights the possibility that many of the interesting systematic effects we analyze in this paper will be accessible to detailed study as control parameters are varied.
## 3 Pions Near the Critical Point: Interaction with the Sigma Field With the foundations established, we now describe how the fluctuations we analyze will change if control parameters are varied in such a way that the baryon chemical potential at freeze-out, $`\mu _\mathrm{f}`$, moves toward and then past the critical point in the QCD phase diagram at which a line of first order transitions ends at a second order endpoint. The good agreement between the noncritical thermodynamic fluctuations we analyze in Section 2 and NA49 data makes it unlikely that central PbPb collisions at 160 AGeV freeze out near the critical point. Estimates we have made in Ref. suggest that the critical point is located at a baryon chemical potential $`\mu `$ such that it will be found at an energy between 160 AGeV and AGS energies. This makes it a prime target for detailed study at the CERN SPS by comparing data taken at 40 AGeV, 160 AGeV, and in between. If the critical point is located at such a low $`\mu `$ that the maximum SPS energy is insufficient to reach it, it would then be in a regime accessible to study by the RHIC experiments. We want to stress that we are more confident in our ability to describe the properties of the critical point and thus to predict how to find it than we are in our ability to predict where it is. We now describe how the fluctuations of the pions will be affected if the system freezes out near the critical endpoint. First, because the pions at freeze-out are now in contact with a heat bath whose heat capacity diverges at the critical point, the effects of energy conservation parametrized by $`F_{\mathrm{EC}}-1`$ are greatly reduced. However, since $`F_{\mathrm{EC}}`$ is close to one even away from the critical point, this is a small effect. The dominant effects of the critical fluctuations on the pions are the direct effects occurring via the $`\sigma \pi \pi `$ coupling. In the previous section, we made the assumption that the “direct pions” at freeze-out could be described as an ideal Bose gas. We do not expect this to be a good approximation if the freeze-out point is near the critical point. The sigma field is the order parameter for the transition and near the critical point it therefore develops large critical long wavelength fluctuations. These fluctuations are responsible for singularities in thermodynamic quantities. We find that because of the $`G\sigma \pi \pi `$ coupling, the fluctuations of both the multiplicity and the mean transverse momentum of the charged pions do in fact diverge at the critical point. We then estimate the size of the effects in a heavy ion collision. This requires first estimating the strength of the coupling constant $`G`$, and then taking into account the finite size of the system and the finite time during which the long wavelength fluctuations can develop. We find a large increase in the fluctuations of both the multiplicity and the mean transverse momentum of the pions. This increase would be divergent in the infinite volume limit precisely at the critical point. We apply finite size and finite time scaling to estimate how close the system created in a heavy ion collision can come to the critical singularity, and consequently how large an effect can be seen in the event-by-event fluctuations of the pions.
We conclude that the nonmonotonic changes in the variance of the event-by-event fluctuation of the pion multiplicity and momenta which are induced by the universal physics characterizing the critical point can easily be between one and two orders of magnitude greater than the statistical errors in the present data. The value of the coupling $`G`$ in vacuum can be estimated either from the relationship between the sigma and pion masses and $`f_\pi `$ or from the width of the sigma. Both yield an estimate $`G\approx 1900`$ MeV, where we have used $`m_\sigma =600`$ MeV. The width of the sigma is so large that this “particle” is only seen as a broad bump in the $`s`$-wave $`\pi \pi `$ scattering cross-section. The vacuum $`\sigma \pi \pi `$ coupling must be at least as large as $`G\approx 1900`$ MeV, since the sigma would otherwise be too narrow. The vacuum value of $`G`$ would not change much if one were to take the chiral limit $`m0`$. The situation is different at the critical point. Taking the quark mass to zero while following the critical endpoint leads one to the tricritical point P in the phase diagram for QCD with two massless quarks. At this point, $`G`$ vanishes as we discuss below. This suggests that at E, the coupling $`G`$ is less than in vacuum. In Ref. , we use what we know about physics near the tricritical point P to make an estimate of how much the coupling $`G`$ is reduced at the critical endpoint E (with the quark mass $`m`$ having its physical value), relative to the vacuum value $`G\approx 1900`$ MeV estimated above. We begin by recalling some known results. (For details, see Refs. .) In QCD with two massless quarks, a spontaneously broken chiral symmetry is restored at finite temperature. This transition is likely second order and belongs in the universality class of $`O(4)`$ magnets in three dimensions. At zero $`T`$, various models suggest that the chiral symmetry restoration transition at finite $`\mu `$ is first order. Assuming that this is the case, one can easily argue that there must be a tricritical point P in the $`(T,\mu )`$ phase diagram, where the transition changes from first order (at higher $`\mu `$ than P) to second order (at lower $`\mu `$), and such a tricritical point has been found in a variety of models. The nature of this point can be understood by considering the Landau-Ginzburg effective potential for $`\varphi _\alpha `$, the order parameter of chiral symmetry breaking: $$\mathrm{\Omega }(\varphi _\alpha )=\frac{a}{2}\varphi _\alpha \varphi _\alpha +\frac{b}{4}(\varphi _\alpha \varphi _\alpha )^2+\frac{c}{6}(\varphi _\alpha \varphi _\alpha )^3.$$ (5) The coefficients $`a`$, $`b`$ and $`c>0`$ are functions of $`\mu `$ and $`T`$. The second order phase transition line described by $`a=0`$ at $`b>0`$ becomes first order when $`b`$ changes sign, and the tricritical point P is therefore the point at which $`a=b=0`$. The critical properties of this point can be inferred from universality , and the exponents are as in the mean field theory (5). We will use this below. Most important in the present context is the fact that because $`\varphi =0`$ at P, there is no $`\sigma \pi \pi `$ coupling, and $`G=0`$ there. In real QCD with nonzero quark masses, the second order phase transition becomes a smooth crossover and the tricritical point P becomes E, the second order critical endpoint of a first order phase transition line.
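The change from second to first order as $`b`$ changes sign can be seen by minimizing Eq. (5) directly. A short sketch (ours, arbitrary units, written in terms of $`\varphi ^2\equiv \varphi _\alpha \varphi _\alpha `$):

```python
import numpy as np

def Omega(phi2, a, b, c=1.0):
    # Effective potential Eq. (5) as a function of phi^2 >= 0
    return 0.5*a*phi2 + 0.25*b*phi2**2 + (c/6.0)*phi2**3

phi2 = np.linspace(0.0, 4.0, 4001)
for b in (+1.0, -1.0):
    for a in (0.2, 0.0, -0.2):
        best = phi2[np.argmin(Omega(phi2, a, b))]
        print(b, a, best)
# For b > 0 the minimum moves off zero continuously as a passes through 0
# (second order); for b < 0 it jumps discontinuously (first order). The two
# behaviours meet at the tricritical point a = b = 0.
```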
Whereas at P there are four massless scalar fields undergoing critical long wavelength fluctuations, the $`\sigma `$ is the only field which becomes massless at the point E, and the point E is therefore in the Ising universality class . The pions remain massive at E because of the explicit chiral symmetry breaking introduced by the quark mass $`m`$. Thus, when we discuss physics near E as a function of $`\mu `$ and $`T`$, but at fixed $`m`$, we will use universal scaling relations with exponents from the three dimensional Ising model. Our present purpose, however, is to imagine varying $`m`$ while changing $`T`$ and $`\mu `$ in such a way as to stay at the critical point E, and ask how large $`G`$ (and $`m_\pi `$) become once $`m`$ is increased from zero (the tricritical point P at which $`G=m_\pi =0`$) to its physical value. For this task, we use exponents describing universal physics near P. Applying tricritical scaling relations all the way up to a quark mass which is large enough that $`m_\pi `$ is not small compared to $`T_c`$ may introduce some uncertainty into our estimate. We first determine the trajectory of the critical line of Ising critical points E as a function of quark mass $`m`$,<sup>5</sup><sup>5</sup>5See Ref. for a derivation of the analogous line of Ising points emerging from the tricritical point in the QCD phase diagram at zero $`\mu `$ as a function of $`m`$ and the strange quark mass $`m_s`$. This tricritical point can be related to the one we are discussing by varying $`m_s`$. and then find that $`G\sim m^{3/5}`$ along this line, where $`m`$ is the light quark mass. Thus the coupling $`G`$ is suppressed compared to its “natural” vacuum value $`G_{\mathrm{vac}}`$ by a factor of order $`(m/\mathrm{\Lambda }_{\mathrm{QCD}})^{3/5}`$. Taking $`\mathrm{\Lambda }_{\mathrm{QCD}}\approx 200`$ MeV, $`m\approx 10`$ MeV we obtain our estimate $$G_E\approx \frac{G_{\mathrm{vac}}}{6}\approx 300\mathrm{MeV}.$$ (6) The main source of uncertainty in this estimate is our inability to compute the various nonuniversal masses which enter the estimate as prefactors in front of the $`m`$ dependence which we have followed. In other words, we do not know the correct value to use for $`\mathrm{\Lambda }_{\mathrm{QCD}}`$ in the suppression factor which we write as $`(m/\mathrm{\Lambda }_{\mathrm{QCD}})^{3/5}`$. The final ingredient we need is an estimate of the correlation length $`\xi `$ of the sigma field, which is infinite at the critical point. In practice, there are important restrictions on how large $`\xi `$ can become. Two particle interferometry suggests that the size of regions over which freeze-out is homogeneous is roughly 12 fm in both the longitudinal and transverse directions. This means that the finite size of the system limits $`\xi `$ to be less than about this value. The finite time restriction is stricter, but harder to estimate. Although the size of the system allows the correlation length to grow to 12 fm, there may not be enough time for such long correlations to grow. We use $`\xi _{\mathrm{max}}\approx 6`$ fm as a rough estimate of the largest correlation length possible if control parameters are chosen in such a way that the system freezes out close to the critical point. We now return to our discussion of the effects of the long wavelength sigma fluctuations on the fluctuations of the pions. We use mean field theory throughout Ref. .
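The arithmetic behind Eq. (6) is a one-liner; the check below (ours, with the inputs stated above) makes the suppression factor explicit:

```python
G_vac, m, Lambda_QCD = 1900.0, 10.0, 200.0   # MeV, values quoted in the text

suppression = (m/Lambda_QCD)**(3/5)          # tricritical scaling G ~ m^{3/5}
G_E = G_vac*suppression
print(suppression, G_E)   # ~0.166 and ~315 MeV, i.e. G_E ~ G_vac/6 ~ 300 MeV
```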
The fluctuations of the sigma field around the minimum of $`\mathrm{\Omega }(\sigma )`$ are not small; however, this does not make much difference to the quantities of interest, all of which diverge like $`m_\sigma ^{-2}\sim \xi ^2`$ at the critical point. The divergence is that of the sigma field susceptibility, and for the 3d-Ising universality class we know the corresponding exponent to be $`\gamma /\nu =2-\eta `$, which is $`2`$ to within a few percent because $`\eta `$ is small. We can therefore safely use mean-field results with their $`m_\sigma ^{-2}`$ divergence, and will take $`m_\sigma \sim 1/\xi _{\mathrm{max}}\sim 1/(6\mathrm{fm})`$ in our estimates.

We now have all the ingredients in place to present our estimate of the size of the effect of the critical fluctuations of the sigma field on the fluctuations of the direct pions, via the coupling $`G`$. We express the size of the effect of interest by rewriting the ratio $`\sqrt{F}`$ of (2) and (3) as

$$\sqrt{F}=\sqrt{F_BF_{\mathrm{res}}F_{\mathrm{EC}}F_\sigma }$$

and presenting $`F_\sigma `$. We find:

$$F_\sigma =1+0.35\left(\frac{G_{\mathrm{freeze}\mathrm{out}}}{300\mathrm{MeV}}\right)^2\left(\frac{\xi _{\mathrm{freeze}\mathrm{out}}}{6\mathrm{fm}}\right)^2,$$ (7)

where we have taken $`T=120`$ MeV and $`\mu _\pi =60`$ MeV. $`F_\sigma `$ will be reduced by about a factor of two, because not all of the pions which are observed are direct: the coupling $`G`$ transmits the effects of the critical $`\sigma `$ fluctuations to the pions at freeze-out, not to the (heavier) resonances.

The size of the effect depends quadratically on the coupling $`G`$. We argued above that $`G`$ is reduced to $`G_E\sim 300`$ MeV at the critical point. However, freeze-out may occur away from the critical point, in which case $`G`$ would be larger, although still much smaller than its vacuum value. The size of the effect also depends quadratically on the sigma correlation length at freeze-out, and we have seen that there are many caveats in an estimate like $`\xi _{\mathrm{freeze}\mathrm{out}}\sim \xi _{\mathrm{max}}\sim 6`$ fm.

We have studied two different effects of the critical fluctuations on $`\sqrt{F}`$. First, $`F_{\mathrm{EC}}\to 1`$, leading to about a $`1\%`$ increase in $`\sqrt{F}`$. The direct effect of the critical fluctuations is a much larger increase in $`\sqrt{F}`$ by a factor of $`\sqrt{F_\sigma }`$. We have displayed the various uncertainties in the factors contributing to our estimate (7) so that when an experimental detection of an increase and then subsequent decrease in $`\sqrt{F}`$ occurs, as control parameters are varied and the critical point is approached and then passed, we will be able to use the measured magnitude of this nonmonotonic effect to constrain these uncertainties. It should already be clear that an effect as large as $`10\%`$ in $`\sqrt{F_\sigma }`$ is easily possible; this would be 50 times larger than the statistical error in the present data.

We now give a brief account of the effect of critical fluctuations on $`(\mathrm{\Delta }N)^2`$ and $`\mathrm{\Delta }N\mathrm{\Delta }p_T`$. The contribution of the direct pions to $`(\mathrm{\Delta }N)^2`$ can easily double, but the multiplicity fluctuations are dominated by the pions from resonance decay, so we estimate that the critical multiplicity fluctuations lead to about a 10-20% increase in $`(\mathrm{\Delta }N)^2`$. (This neglects the pions from sigma decay. See below.)
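The size of the direct effect in Eq. (7) is easy to tabulate. A minimal Python sketch, using the quoted normalization of 0.35 and ignoring the factor-of-two dilution by resonance-decay pions discussed above:

```python
def F_sigma(G_MeV=300.0, xi_fm=6.0):
    """Critical-sigma contribution to the pT-fluctuation ratio, Eq. (7)."""
    return 1.0 + 0.35 * (G_MeV / 300.0) ** 2 * (xi_fm / 6.0) ** 2

# Freeze-out at the critical point, away from it with larger G, and with a
# correlation length that had less time to grow.
for G, xi in [(300.0, 6.0), (600.0, 6.0), (300.0, 3.0)]:
    F = F_sigma(G, xi)
    print(f"G = {G:.0f} MeV, xi = {xi:.0f} fm: F_sigma = {F:.2f}, "
          f"increase in sqrt(F) ~ {100 * (F ** 0.5 - 1):.0f}% (direct pions)")
```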
The cross-correlation $`\mathrm{\Delta }N\mathrm{\Delta }p_T`$ only receives contributions from nontrivial effects, and we find that near the critical point the contribution from the interaction with the sigma field is dominant. We estimate that (for $`G_{\mathrm{freeze}\mathrm{out}}\sim 300`$ MeV and $`\xi _{\mathrm{freeze}\mathrm{out}}\sim 6`$ fm) the cross-correlation will be a factor of 10-15 times larger than in the absence of critical fluctuations! The lesson is clear: although this correlation is small, it may increase in magnitude by a very large factor near the critical point.

The effects of the critical fluctuations can be detected in a number of ways. First, one can find a nonmonotonic increase in $`F_\sigma `$, the suitably normalized increase in the variance of event-by-event fluctuations of the mean transverse momentum. Second, one can find a nonmonotonic increase in $`(\mathrm{\Delta }N)^2`$. Both these effects can easily be between one and two orders of magnitude greater than the statistical errors in present data. Third, one can find a nonmonotonic increase in the magnitude of $`\mathrm{\Delta }p_T\mathrm{\Delta }N`$. This quantity is small, and it has not yet been demonstrated that it can be measured. However, it may change at the critical point by a large factor, and is therefore worth measuring. In addition to effects on these and many other observables, it is perhaps most distinctive to measure the microscopic correlator $`\mathrm{\Delta }n_p\mathrm{\Delta }n_k`$ itself. The effect proportional to $`1/m_\sigma ^2`$ has a specific dependence on $`p`$ and $`k`$: it introduces off-diagonal correlations in $`pk`$ space. Like the off-diagonal anti-correlation introduced by energy conservation, this makes it easy to distinguish from the Bose enhancement effect, which is diagonal in $`pk`$. Near the critical point, the off-diagonal anti-correlation vanishes and the off-diagonal correlation due to sigma exchange grows. Furthermore, the effect of $`\sigma `$ exchange is not restricted to identical pions, and should be visible as correlations between the fluctuations of $`\pi ^+`$ and $`\pi ^{-}`$. The dominant diagonal term proportional to $`\delta _{pk}`$ will be absent in the correlator $`\mathrm{\Delta }n_p^+\mathrm{\Delta }n_k^{-}`$, and the effects of $`\sigma `$ exchange will be the dominant contribution to this quantity near the critical point.

## 4 Pions From Sigma Decay

Having analyzed the effects of the sigma field on the fluctuations of the direct pions, we next ask what becomes of the sigmas themselves. For choices of control parameters such that freeze-out occurs at or near the critical endpoint, the excitations of the sigma field, sigma (quasi)particles, are nearly massless at freeze-out and are therefore numerous. Because the pions are massive at the critical point, these $`\sigma `$’s cannot immediately decay into two pions. Instead, they persist as the temperature and density of the system further decrease. During the expansion, the in-medium $`\sigma `$ mass rises towards its vacuum value and eventually exceeds the two pion threshold. Thereafter, the $`\sigma `$’s decay, yielding a population of pions which do not get a chance to thermalize because they are produced after freeze-out. We estimate the momentum spectrum of these pions produced by delayed $`\sigma `$ decay. An event-by-event analysis is not required in order to see these pions.
The excess multiplicity at low $`p_T`$ will appear and then disappear in the single particle inclusive distribution as control parameters are varied such that the critical point is approached and then passed. In calculating the inclusive single-particle $`p_T`$-spectrum of the pions from sigma decay, we must treat $`m_\sigma `$ as time-dependent, and should also take $`G`$ to evolve with time. However, the dominant time-dependent effect is the opening up of the phase space for the decay as $`m_\sigma `$ increases with time and crosses the two-pion threshold. We therefore treat $`G`$ as a constant. We have estimated that in vacuum with $`m_\sigma =600`$ MeV, the coupling is $`G\sim 1900`$ MeV, whereas at the critical end point with $`m_\sigma =0`$, the coupling is reduced, perhaps by as much as a factor of six or so. In this section, we need to estimate $`G`$ at the time when $`m_\sigma `$ is at or just above twice the pion mass. We will use $`G\sim 1000`$ MeV, recognizing that we may be off by as much as a factor of two. We parametrize the time dependence of the sigma mass by $`m_\sigma (t)=2m_\pi (1+t/\tau )`$, where we have defined $`t=0`$ to be the time at which $`m_\sigma `$ has risen to $`2m_\pi `$ and have introduced the timescale $`\tau `$ over which $`m_\sigma `$ increases from $`2m_\pi `$ to $`4m_\pi `$. It seems likely that $`5\mathrm{fm}<\tau <20\mathrm{fm}`$. We find that the mean transverse momentum of the pions produced by sigma decay is

$$\langle p_T\rangle \approx 0.58m_\pi \left(\frac{1000\mathrm{MeV}}{G}\right)^{2/3}\left(\frac{10\mathrm{fm}}{\tau }\right)^{1/3}.$$ (8)

We therefore estimate that if freeze-out occurs near the critical point, there will be a nonthermal population of pions with transverse momenta of order half the pion mass, with a momentum distribution given in Ref. .

How many such pions can we expect? This is determined by the sigma mass at freeze-out. If $`m_\sigma `$ is comparable to $`m_\pi `$ at freeze-out, then there are half as many $`\sigma `$’s at freeze-out as there are charged pions. Since each sigma decays into two pions, and two thirds of those pions are charged, the result is that the number of charged pions produced by sigma decays after freeze-out is $`2/3`$ of the number of charged pions produced directly by the freeze-out of the thermal pion gas. Of course, if freeze-out occurs closer to the critical point, at which $`m_\sigma `$ can be as small as $`(6\mathrm{fm})^{-1}`$, there would be even more sigmas. We therefore suggest that as experimenters vary the collision energy, one way they can discover the critical point is to see the appearance and then disappearance of a population of pions with $`p_T\sim m_\pi /2`$ which are almost as numerous as the direct pions. Yet again, it is the nonmonotonicity of this signature as a function of control parameters which makes it distinctive.

The event-by-event fluctuations of the multiplicity of these pions reflect the fluctuations of the sigma field whence they came . We estimate that the event-by-event fluctuations of the multiplicity of the pions produced in sigma decay will be $`(\mathrm{\Delta }N)^2\approx 2.74N`$. We have already seen that the critical fluctuations of the sigma field increase the fluctuations in the multiplicity of the direct pions sufficiently that the fluctuation of the multiplicity of all the pions will be increased by about 10-20%.
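A short numerical sketch of Eq. (8), evaluating the mean transverse momentum of the delayed-decay pions over the plausible range of $`\tau `$; the value of $`G`$ is the rough estimate adopted above, and the charged pion mass is used for $`m_\pi `$:

```python
M_PI = 139.6  # MeV, charged pion mass

def mean_pT(G_MeV=1000.0, tau_fm=10.0):
    """Mean transverse momentum of pions from delayed sigma decay, Eq. (8)."""
    return 0.58 * M_PI * (1000.0 / G_MeV) ** (2.0 / 3.0) * (10.0 / tau_fm) ** (1.0 / 3.0)

# The weak 1/3-power dependence on tau keeps <pT> near m_pi/2 across the whole
# plausible range 5 fm < tau < 20 fm.
for tau in (5.0, 10.0, 20.0):
    print(f"tau = {tau:4.0f} fm: <pT> ~ {mean_pT(tau_fm=tau):.0f} MeV")
```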
We now see that in the vicinity of the critical point, there will be a further nonmonotonic rise in the fluctuations of the multiplicity of the population of pions with $`p_T\sim m_\pi /2`$ which are produced in sigma decay.

## 5 Outlook

Our understanding of the thermodynamics of QCD will be greatly enhanced by the detailed study of event-by-event fluctuations in heavy ion collisions. We have estimated the influence of a number of different physical effects, some special to the vicinity of the critical point but many not. The predictions of a simple resonance gas model, which does not include critical fluctuations, are to this point in very good agreement with the data. More detailed study, for example with varying cuts in addition to new observables, will help to further constrain the nonthermodynamic fluctuations, which are clearly small, and to better understand the different thermodynamic effects. The signatures we analyze allow experiments to map out distinctive features of the QCD phase diagram. The striking example which we have considered in detail is the effect of a second order critical end point. The nonmonotonic appearance and then disappearance of any one of the signatures of the critical fluctuations which we have described would be strong evidence for the critical point. Furthermore, if a nonmonotonic variation is seen in several of these observables, then the maxima in all the signatures must occur simultaneously, at the same value of the control parameters. Simultaneous detection of the effects of the critical fluctuations on different observables would turn strong evidence into an unambiguous discovery.

## Acknowledgments

We are grateful to G. Roland for providing us with preliminary NA49 data. We acknowledge helpful conversations with M. Creutz, U. Heinz, M. Gaździcki, V. Koch, St. Mrówczyński, G. Roland and T. Trainor. I thank the organizers of SEWM’98 for a conference which, by bringing together those studying QCD matter in extreme conditions and those studying electroweak matter in extreme conditions, was stimulating and enjoyable. This work was supported in part by a DOE Outstanding Junior Investigator Award, by the A. P. Sloan Foundation, and by the DOE under cooperative research agreement DE-FC02-94ER40818.

## References
no-problem/9903/astro-ph9903364.html
ar5iv
text
# MACHO Mass Determination Based on Space Telescope Observation

## 1 Introduction

Recent extensive searches for gravitational microlensing events have already detected about a hundred events toward the Galactic bulge and the Magellanic Clouds (Alcock et al. 1993; 1996; 1997a; 1997b; Aubourg et al. 1993; Alard & Guibert 1997; Udalski et al. 1994). The number of events toward the Magellanic Clouds exceeds the number expected for the known population of stars in the Galactic disk and the Magellanic Clouds themselves, indicating that a considerable number of the events are caused by massive compact halo objects (hereafter MACHOs). However, since the MACHO mass cannot be obtained directly from microlensing observations, the nature of MACHOs remains unclear. The only observable for a single microlensing event is the Einstein ring crossing time $`t_\mathrm{E}`$, which may be written as

$$t_\mathrm{E}=\frac{1}{v_{\perp }}\sqrt{\frac{4Gm}{c^2}D_\mathrm{s}l(1-l)}$$ (1)

where $`v_{\perp }`$ is the tangential velocity of the lens, $`m`$ is the lens mass, $`D_\mathrm{s}`$ is the source distance, and $`l`$ is the fractional lens distance, i.e., the ratio of the lens distance to the source distance. Since the right-hand side of equation (1) contains three unknowns ($`v_{\perp }`$, $`l`$ and $`m`$), the MACHO mass cannot be determined for each microlensing event. Instead, the MACHO mass is evaluated statistically by assuming a halo model which describes the distribution of the lens distance and velocity. Alcock et al. (1997b) found a MACHO mass of $`\sim 0.5M_{\odot }`$ by assuming the standard halo model in which the rotation curve is flat out to the Magellanic Clouds, and suggested that MACHOs are likely to be old white dwarfs. However, the MACHO mass may change significantly when a different halo model is considered. In fact, if the rotation curve is slightly declining in the outer region, the MACHO mass is as small as that of brown dwarfs (Honma & Kan-ya 1998).

While microlensing events by single lenses cannot break the three-fold degeneracy in equation (1), one can additionally measure the proper motion for a caustic crossing event, which is a microlensing event due to a binary lens system (e.g., Schneider & Weiss 1986). To measure the proper motion, a real-time detection of the event is necessary because the caustic crossing must be monitored intensively with high time resolution. Recent developments of the alert systems have made real-time detection possible, and in fact the proper motion has been measured for the first time for the event 98-SMC-01 (Afonso et al. 1998; Albrow et al. 1998; Alcock et al. 1998; Rhie et al. 1998). Once the proper motion is measured, the only quantity necessary for the lens mass determination is the lens distance. Hardy & Walker (1995) showed that if a caustic crossing event is monitored at two or more well separated observatories, the parallax effect causes a time delay of $`\sim 30`$ sec, from which the lens distance can be derived. Later, Gould & Andronov (1998) discussed extensively the possibility of lens mass determination by combining the proper motion and parallax measurements for a caustic crossing event. For doing this, Gould & Andronov (1998) proposed a monitoring observation of a caustic crossing event with three non-collinear telescopes located in different continents.
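To make the degeneracy in equation (1) concrete, a minimal Python sketch (illustrative parameter values only, not from the original work) evaluates $`t_\mathrm{E}`$ for a halo lens toward the LMC; many different combinations of ($`m`$, $`v_{\perp }`$, $`l`$) reproduce the same crossing time.

```python
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_SUN = 1.989e30       # kg
KPC = 3.086e19         # m

def t_E_days(m_msun, v_perp_kms, D_s_kpc=50.0, l=0.2):
    """Einstein-ring crossing time of Eq. (1)."""
    r_E = np.sqrt(4 * G * m_msun * M_SUN / c**2 * D_s_kpc * KPC * l * (1 - l))
    return r_E / (v_perp_kms * 1e3) / 86400.0

# A 0.5 M_sun lens toward the LMC at l = 0.2, for two tangential velocities:
for v in (100.0, 200.0):
    print(f"m = 0.5 M_sun, v_perp = {v:.0f} km/s: t_E ~ {t_E_days(0.5, v):.0f} days")
```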
This proposal requires that the event be observable from three continents, i.e., three continents must be in the dark side of the Earth at the same time, and the weather must be clear. This requirement may strongly limit the application of this method. As an alternative to three ground-based telescopes, in this Letter we propose to use a space telescope for lens distance determination. Because its orbital motion automatically causes a parallax effect, a space telescope acts like a number of ground-based telescopes located in different continents. Also, there is no concern about the weather in space. Therefore, using a space telescope is more practical and realistic than using three ground-based telescopes. In the following sections, we describe how the parallax effect changes the light curve of a caustic crossing event, and discuss how accurately one can determine the lens distance as well as the lens mass based on a space telescope observation.

## 2 Parallax from Space Telescope

Here we describe the parallax effect in a caustic crossing event observed with a space telescope. In the following analysis, the origin of the coordinate system is set to be the center of the Earth, and the $`z`$ axis is set to be in the direction of the source star. The $`x`$ axis is set to be perpendicular to both the $`z`$ axis and the orbital axis of the space telescope (thus the orbital axis is in the $`y`$-$`z`$ plane). We define the inclination of the telescope orbit as the angle between the $`z`$ axis and the orbital axis. We also assume that the space telescope is in a circular orbit with a radius of $`r_{\mathrm{st}}`$ and an angular velocity of $`\omega `$. The position of the telescope ($`\stackrel{}{T}`$) is written as

$$\stackrel{}{T}=(r_{\mathrm{st}}\mathrm{cos}(\omega t+\delta ),r_{\mathrm{st}}\mathrm{sin}(\omega t+\delta )\mathrm{cos}i,r_{\mathrm{st}}\mathrm{sin}(\omega t+\delta )\mathrm{sin}i).$$ (2)

We set $`t=0`$ when the source observed from the center of the Earth crosses the caustic. The angle $`\delta `$ describes the position of the telescope at $`t=0`$. In this coordinate system the Earth and the source are at rest and the caustic is moving. The position of the caustic crossing point (the point on the caustic where the source crosses when observed from the center of the Earth) is given by

$$\stackrel{}{C_\mathrm{l}}=(v_{\perp }t\mathrm{cos}\alpha ,v_{\perp }t\mathrm{sin}\alpha ,D_\mathrm{d}+v_\mathrm{r}t),$$ (3)

where $`v_{\perp }`$ and $`v_\mathrm{r}`$ are the tangential and radial velocity of the lens, and $`\alpha `$ is the angle between the $`x`$ axis and the direction of the tangential velocity $`v_{\perp }`$. When observed from the center of the Earth, the position of the caustic crossing point projected onto the source plane is given by

$$\stackrel{}{C_\mathrm{s}}=\frac{D_\mathrm{s}}{D_\mathrm{d}}\stackrel{}{C_\mathrm{l}}\equiv \frac{1}{l}\stackrel{}{C_\mathrm{l}}.$$ (4)

When observed from a space telescope orbiting the Earth, the projected position of the caustic crossing point in the source plane is given by

$$\stackrel{}{C_\mathrm{s}^{\prime }}=\stackrel{}{T}+\frac{1}{l}(\stackrel{}{C_\mathrm{l}}-\stackrel{}{T}).$$ (5)

For convenience we define the source position relative to the caustic crossing point in the source plane as $`\stackrel{}{X}=\stackrel{}{S}-\stackrel{}{C_\mathrm{s}}`$ and $`\stackrel{}{X^{\prime }}=\stackrel{}{S}-\stackrel{}{C_\mathrm{s}^{\prime }}`$, where $`\stackrel{}{S}`$ is the position of the source, and $`\stackrel{}{S}=(0,0,D_\mathrm{s})`$.
With these equations, we may write $`\stackrel{}{X}`$ and $`\stackrel{}{X^{\prime }}`$ explicitly as

$$\stackrel{}{X}=(-\frac{v_{\perp }t}{l}\mathrm{cos}\alpha ,-\frac{v_{\perp }t}{l}\mathrm{sin}\alpha ,0),$$ (6)

$$\stackrel{}{X^{\prime }}=(-\frac{v_{\perp }t}{l}\mathrm{cos}\alpha +\frac{1-l}{l}r_{\mathrm{st}}\mathrm{cos}(\omega t+\delta ),-\frac{v_{\perp }t}{l}\mathrm{sin}\alpha +\frac{1-l}{l}r_{\mathrm{st}}\mathrm{sin}(\omega t+\delta )\mathrm{cos}i,0)$$ (7)

Equation (7) shows that the parallax effect due to the telescope motion causes a wavy trajectory of the source relative to the caustic (note that in the coordinate system of $`X_x`$ and $`X_y`$ the source is moving while the caustic is at rest).

## 3 Lens Distance from Light Curve Observation

Here we calculate the light curve near the caustic observed with a space telescope. For simplicity we assume a constant surface brightness of the source star. Gould & Andronov (1998) obtained the magnification for such a source near the caustic as

$$A=a_0G(\eta )+a_1,$$ (8)

$$G(\eta )=\frac{2}{\pi }\int _{\mathrm{max}(\eta ,-1)}^1\left(\frac{1-x^2}{x-\eta }\right)^{1/2}dx,$$ (9)

where the function $`G`$ describes the shape of the light curve, and $`\eta `$ is the separation between the source and the caustic normalized by the source size $`R_{\ast }`$, namely $`\eta \equiv d/R_{\ast }`$. The two constants $`a_0`$ and $`a_1`$ describe the maximum magnification and the magnification outside the caustic, respectively. Note that these two constants depend on the shape of the caustic, the source trajectory, the source radius, and so on. To evaluate $`\eta `$ as a function of time, here we assume that the caustic is approximately linear near the caustic crossing point. This approximation is valid for stellar or substellar mass lenses in the halo because the amplitude of the wavy motion of the source due to the parallax is usually much smaller than the size of the caustic itself. In the coordinate system of $`X_x`$ and $`X_y`$ (see the definition of $`\stackrel{}{X}`$), the caustic is expressed as $`X_y=\mathrm{tan}(\alpha +\varphi )X_x`$, where $`\varphi `$ is the angle between the caustic and the direction of the source motion. In the absence of the parallax effect, the distance of the source to the caustic is given by

$$\eta =\left(\mathrm{sin}(\alpha +\varphi )X_x-\mathrm{cos}(\alpha +\varphi )X_y\right)/R_{\ast }.$$ (10)

Similarly, when observed from a space telescope orbiting around the Earth, the distance of the source from the caustic is obtained as

$$\eta ^{\prime }=\left(\mathrm{sin}(\alpha +\varphi )X_x^{\prime }-\mathrm{cos}(\alpha +\varphi )X_y^{\prime }\right)/R_{\ast }.$$ (11)

Note that $`\eta ^{\prime }`$ depends on the following parameters: $`\alpha `$, $`\varphi `$, $`v_{\perp }`$, $`l`$, $`R_{\ast }`$, $`r_{\mathrm{st}}`$, $`\omega `$, $`\delta `$, $`i`$, and $`t`$. Among them, the unknown parameters to be determined by the space telescope are $`l`$ and $`\alpha `$, i.e., the lens distance and the direction of the source motion. The orbital parameters of the telescope ($`r_{\mathrm{st}}`$, $`\omega `$, $`\delta `$, $`i`$) are accurately known. One can also determine the angle $`\varphi `$ and the proper motion $`\mu `$ from the full light curve fitting of the caustic crossing event. To obtain the full light curve, ground-based observation is necessary because of the long event duration ($`t_\mathrm{E}\sim 40`$ days). If the lens is located in the halo and the source proper motion can be neglected, the proper motion $`\mu `$ is simply related to $`v_{\perp }`$ as $`v_{\perp }=lD_\mathrm{s}\mu `$, and hence $`v_{\perp }`$ can be determined once $`l`$ is given.
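The geometry of equations (6)-(11) and the shape function of equation (9) are straightforward to evaluate numerically. The following Python sketch (not the authors' code; parameter values anticipate those of figure 1 below) computes the parallax signal $`\delta G=G(\eta )-G(\eta ^{\prime })`$; the square-root singularity of the integrand at $`x=\eta `$ is removed analytically before quadrature, and $`G=0`$ outside the caustic ($`\eta \geq 1`$).

```python
import numpy as np
from scipy.integrate import quad

def G(eta):
    """Shape function of Eq. (9); the substitution x = eta + u**2 removes the
    square-root singularity at x = eta."""
    if eta >= 1.0:
        return 0.0
    u_lo = 0.0 if eta >= -1.0 else np.sqrt(-1.0 - eta)
    val, _ = quad(lambda u: 2.0 * np.sqrt(max(1.0 - (eta + u * u) ** 2, 0.0)),
                  u_lo, np.sqrt(1.0 - eta))
    return 2.0 / np.pi * val

# Lens and orbit parameters of Fig. 1 (lengths in km, times in s, angles in rad).
R_SUN = 6.96e5
l, v_perp, R_star = 0.2, 100.0, 3.0 * R_SUN
alpha, phi = np.radians(20.0), np.radians(45.0)
r_st, omega_st = 7000.0, 2.0 * np.pi / (97.0 * 60.0)
delta, inc = np.radians(160.0), np.radians(30.0)

def delta_G(t):
    """G(eta) - G(eta') for the geocentric and space-telescope sight lines,
    Eqs. (6), (7), (10) and (11)."""
    s, c = np.sin(alpha + phi), np.cos(alpha + phi)
    Xx, Xy = -v_perp * t / l * np.cos(alpha), -v_perp * t / l * np.sin(alpha)
    px = (1.0 - l) / l * r_st * np.cos(omega_st * t + delta)
    py = (1.0 - l) / l * r_st * np.sin(omega_st * t + delta) * np.cos(inc)
    eta = (s * Xx - c * Xy) / R_star
    eta_p = (s * (Xx + px) - c * (Xy + py)) / R_star
    return G(eta) - G(eta_p)

# The parallax wiggle: a few-percent modulation at the orbital period while
# the source is inside the caustic.
for t_hr in (0.5, 1.0, 1.5, 2.0):
    print(f"t = {t_hr:.1f} h: delta_G = {delta_G(t_hr * 3600.0):+.4f}")
```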
The radius of the source star $`R_{\ast }`$ can be obtained from the color and the spectrum of the source star. The accuracy of $`R_{\ast }`$ is typically within 10%, but depends on how precisely one can determine the effective temperature. An accurate measurement of $`R_{\ast }`$ is crucial for the lens mass determination because $`R_{\ast }`$ affects not only the lens distance $`l`$ through equation (11) but also the proper motion $`\mu `$ derived from the full light curve.

Figure 1 shows $`G(\eta (t))`$ and $`G(\eta ^{\prime }(t))`$ as well as the difference of the two, $`\delta G\equiv G(\eta (t))-G(\eta ^{\prime }(t))`$. In figure 1 we assume $`D_\mathrm{s}=50`$ kpc, $`l=0.2`$, $`v_{\perp }=100`$ km/s, $`\alpha =20^{\circ }`$, $`\varphi =45^{\circ }`$ and $`R_{\ast }=3R_{\odot }`$, corresponding to a typical microlensing event due to MACHOs in the Galactic halo (e.g., Paczynski 1986). With these parameters, it takes about 3 hours for the source to cross the caustic. For the orbital parameters of the space telescope, we assume $`r_{\mathrm{st}}=7000`$ km, $`P_{\mathrm{orb}}\equiv 2\pi /\omega =97`$ minutes, $`\delta =160^{\circ }`$ and $`i=30^{\circ }`$, which are similar to those of the Hubble Space Telescope. Figure 1 shows the light curve near the caustic for about 5 hours, corresponding to $`\eta =2`$ to $`-1`$. The figure demonstrates that the parallax effect due to the telescope motion causes a periodic fluctuation in $`\delta G`$. Note that changing the angle $`\alpha `$ mainly shifts the phase of the $`\delta G`$ curve, whereas the lens distance $`l`$ changes the amplitude of the curve. Thus, both $`l`$ and $`\alpha `$ can be determined by fitting the $`\delta G`$ curve. Since the amplitude of the $`\delta G`$ curve is of a few %, the source magnification should be measured to within an uncertainty of 1%, or with a photometric S/N larger than $`\sim 100`$. For a space telescope like the HST, this S/N is easily achievable when the source is being magnified significantly.

## 4 Uncertainty in Lens Distance and Mass

In this section, we investigate how accurately we can measure the parameters $`l`$ and $`\alpha `$ as well as the lens mass $`m`$. First, we evaluate how the lens distance uncertainty depends on the photometric S/N. For doing this, we have simulated observations of the light curve presented in figure 1. We assumed that the light curve was observed every 10 minutes with a constant uncertainty. The lens and orbital parameters are set to be the same as those in section 3 (simulated observations with a photometric S/N of 300 are plotted in figure 1). We calculated the likelihood in the parameter space of $`l`$ and $`\alpha `$ as $`L=\prod _ip(\delta G_i|l,\alpha )`$, where $`\delta G_i`$ corresponds to the $`i`$-th simulated measurement of the light curve, and the probability $`p`$ is calculated assuming Gaussian errors. We then calculated the best values of $`l`$ and $`\alpha `$ as well as their uncertainties $`\mathrm{\Delta }l`$ and $`\mathrm{\Delta }\alpha `$ (68% confidence level). Figure 2 plots the resultant uncertainties in $`l`$ and $`\alpha `$ for S/N ratios of 50 to 300. Figure 2 shows that the lens distance is determined to within 30% if the photometric S/N ratio is greater than 100. In this case, one can also determine the direction of the lens motion to within 20 degrees. Figure 2 also shows that the uncertainties decrease gradually with increasing photometric S/N ratio, and that the lens distance can be determined to within $`\sim 10`$% in case of S/N $`\sim 300`$.
Next, in order to investigate how far out we can measure the lens distance, we have also calculated the uncertainties in $`l`$ and $`\alpha `$ with varying $`l`$ while keeping the S/N constant. We assumed that the lens path relative to the caustic is the same as that in figure 1, and also assumed that the proper motion $`\mu `$ is $`10.0`$ km/s/kpc, so that all the events considered here have the same light curve as the one presented in figure 1 regardless of the lens distance $`l`$. Figure 3 plots the uncertainties in $`l`$ and $`\alpha `$ with varying $`l`$ for observations every 10 minutes with a photometric S/N of 300. Figure 3 demonstrates that the uncertainty in $`l`$ increases rapidly with the lens distance. Nevertheless, the distance uncertainty remains less than 30% for lenses within $`l=0.5`$. Since typical microlensing events by MACHOs have $`l\sim 0.2`$ and since most microlensing events by MACHOs have $`l`$ less than 0.5 (Paczynski 1986), the lens distance can be determined for most events. Note that the uncertainty in $`R_{\ast }`$, which was not considered above, may not be negligible in some cases. Since the uncertainty in $`R_{\ast }`$, which is typically within 10%, propagates to the uncertainty in $`l`$ almost linearly, its effect may be important for an event with $`\mathrm{\Delta }l/l`$ less than $`\sim 10`$%. Also, a possible discontinuity of the space telescope observation due to occultation by the Earth could affect the lens distance uncertainty. If the peaks of the $`\delta G`$ curve are missed due to the occultation, the uncertainty in $`l`$ may increase considerably. However, for instance, the Magellanic Clouds are in the Continuous Viewing Zone of the Hubble Space Telescope. Thus, a space telescope which has an orbit close to that of the HST can perform a continuous or nearly continuous observation depending on the orbital precession, and so the probability of missing the peaks of the $`\delta G`$ curve will be small.

Once $`l`$ is determined, the MACHO mass can be obtained through equation (1). For a binary lens, the mass $`m`$ in equation (1) denotes the total lens mass, but the mass ratio of the two lenses can also be derived from the full light curve. Since the proper motion $`\mu `$ and the Einstein ring crossing time $`t_\mathrm{E}`$ can be measured relatively accurately from the full light curve, the accuracy of the MACHO mass depends mainly on the accuracy of $`l`$. Thus, if an S/N of a few hundred is achieved, the MACHO mass can be determined with an uncertainty of 10-30%. If $`\mathrm{\Delta }l`$ is larger, the constraint on the MACHO mass may be weak. However, even in that case, a measurement of the lens distance allows us to discriminate whether the lens is in the halo or not, which will be a strong test for the existence of MACHOs.

## 5 Discussion

We have seen that a space telescope observation of a caustic crossing event, supplemented by ground-based observations, will enable us to measure the MACHO mass. The most practical strategy at present is: 1) real-time detection and long-term monitoring of a caustic crossing event by ground-based observations, and 2) space telescope observation of the caustic crossing. The real-time detection of the event is essential, and this can be done by the existing alert system. For the lens trajectory determination from the full light curve, precise photometry with high time resolution as well as global collaboration of monitoring groups will be efficient.
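Closing the loop, a minimal sketch of the mass determination: once $`l`$ is measured, equation (1) with $`v_{\perp }=lD_\mathrm{s}\mu `$ inverts directly for $`m`$. The parameter values and the crude one-sided propagation of $`\mathrm{\Delta }l`$ below are illustrative only.

```python
G_N, c = 6.674e-11, 2.998e8                 # SI units
M_SUN, KPC, DAY = 1.989e30, 3.086e19, 86400.0

def macho_mass(l, t_E_days, mu_kms_per_kpc, D_s_kpc=50.0):
    """Invert Eq. (1) for the lens mass, using v_perp = l * D_s * mu
    (halo lens, negligible source proper motion)."""
    D_s = D_s_kpc * KPC
    mu = mu_kms_per_kpc * 1e3 / KPC         # angular proper motion in 1/s
    t_E = t_E_days * DAY
    m = c**2 * (t_E * mu) ** 2 * D_s * l / (4.0 * G_N * (1.0 - l))
    return m / M_SUN

l, dl = 0.2, 0.06                            # a 30% distance uncertainty
m = macho_mass(l, 40.0, 10.0)
dm = abs(macho_mass(l + dl, 40.0, 10.0) - m)  # crude propagation of Delta l alone
print(f"m = {m:.2f} +/- {dm:.2f} M_sun")
```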
High time resolution is also crucial for predicting the precise date of the second caustic crossing, which is necessary for scheduling the observation with a space telescope. Since a typical interval between two caustic crossings is less than 10 days for binary MACHOs (Honma 1999), monitoring the source twice or more per night is more favorable than nightly observation. If the fraction of binary MACHOs is as high as that of binary stars, $`5`$-$`10`$% of events are expected to be caustic crossing events (Mao & Paczynski 1991). Thus, one can expect a caustic crossing event every few years even with the current microlensing searches, and if the next generation microlensing search is launched, the expected number of such events will be increased by two orders of magnitude (Stubbs 1998). Therefore, in a decade or so, we may be able to measure the MACHO mass for a number of events by combining ground-based monitoring with the HST or other space telescopes of the next generation.

The author acknowledges financial support from the Japan Society for the Promotion of Science.
no-problem/9903/gr-qc9903034.html
ar5iv
text
# General-Relativistic Thomas-Fermi model

## Acknowledgment

We acknowledge useful discussions with D. Tsiklauri. This work was supported by the Foundation for Fundamental Research (FFR) and the Ministry of Science and Technology of the Republic of Croatia under Contract No. 00980102.
no-problem/9903/astro-ph9903048.html
ar5iv
text
# Profile instabilities of the millisecond pulsar PSR J1022+1001

## 1 Introduction

The discovery of millisecond pulsars (MSPs, Backer et al. 1982) opened new ways to study the emission mechanism of pulsars. Although pulse periods and surface magnetic fields are several orders of magnitude smaller than those of slowly rotating (’normal’) pulsars, the emission patterns show some remarkable similarities (Kramer et al. 1998, hereafter KXL98; Xilouris et al. 1998, hereafter XKJ98; Jenet et al. 1998). This suggests that the same emission process might work in both types of objects despite orders of magnitude difference in the size of their magnetospheres. A systematic study of the emission properties of MSPs and a comparison of the results with characteristics of normal pulsars can thus lead to important new insight into the emission physics of pulsars. In this paper we investigate unexpected profile instabilities seen for some MSPs and compare them to profile changes known for normal pulsars.

The profile changes described here have consequences for high precision timing of MSPs, in which integrated profiles are cross-correlated with a standard template to measure pulse times of arrival. This procedure implicitly assumes that the shape of the integrated profile does not vary. This premise has never been tested thoroughly for MSPs. The only systematic search we are aware of was an analysis of the planet pulsar PSR B1257+12, which was searched, unsuccessfully, for shape changes (Kaspi & Wolszczan 1993). Recently, Backer & Sallmen (1997) noticed a significant change in the profile of the isolated millisecond pulsar PSR B1821-24 in about 25% of all observations on time scales of a few hours and possibly days. Even earlier, Camilo (1995) described observations of PSR J1022+1001 which show profile changes on apparently shorter time scales, i.e. hours or less. These are the first cases in which such instabilities of profiles averaged over many pulse periods have been reported.

The plan of the paper is as follows. After briefly reviewing what is known about the profile stability of normal pulsars in the next section, we present observations of PSR J1022+1001 and investigate a large data set with respect to pulse shape changes in time, frequency and polarization. In Sect. 3.5 we study the consequences for high precision timing and present a method to compensate for the profile changes when measuring pulse times-of-arrival. Possible explanations for the observed profile changes are discussed in Sect. 4, in view of additional sources showing a similar phenomenon. A summary of this work is given in Sect. 5.

## 2 Profile stability of normal pulsars

Since the discovery of pulsars it has been known that individual pulses are highly variable in shape and intensity. Nevertheless, summing a sufficiently large number of pulses generally leads to a very stable pulse profile. Systematic studies of the stability of integrated profiles were carried out by Helfand, Manchester & Taylor (1975) and recently by Rankin & Rathnasree (1995). These studies show that a few thousand pulses added together are very often enough to produce a final stable waveform which does not differ from a high signal-to-noise ratio (S/N) template by more than 0.1% or even less. Despite this stability of pulse profiles, a small sample of normal pulsars shows distinct pulse shape changes on time scales of minutes. This behaviour was first noticed by Backer (1970) and is nowadays well known as mode changing.
In a mode change the pulsar switches from one stable profile to another on a time scale of less than a pulse period, remains in that mode for typically hundreds of periods, and then returns to the original pulse shape or switches to another mode. This immediate switch from one mode to the next is a common phenomenon and is often associated with a sudden change in pulse intensity (Rankin 1986). Interestingly, Suleymanova, Izvekova & Rankin (1996) report that a mode switch in PSR B0943+10 is preceded by a decline in intensity for one mode, although again a so-called “burst” and “quiet” mode can be distinguished. Rankin (1986) noted that mode-changing is often observed for such sources which exhibit rather complex profiles showing both the so-called “cone” and “core” components (cf. Rankin 1983; Lyne & Manchester 1988). The mode changing manifests itself as a reorganization of core and cone emission and thus usually affects the whole profile (often including polarization properties) rather than only certain pulse longitudes. A rare counter-example might be PSR J0538+2817 (Anderson et al. 1996).

A phenomenon related to mode changing might be nulling, i.e. the absence of any pulsed emission for a certain number of periods. There are no clear and unequivocal explanations for the origin of nulling or mode changing, which is normally interpreted as a re-arrangement in the structure of the emitting region. Some studies report a possible relationship between mode changing and a change in emission height (e.g. Bartel et al. 1982). Other interpretations invoke a large variation in the absorption properties of the magnetosphere above the polar cap (e.g. Zhang et al. 1997). In any case, mode changing will increase the number of pulses that must be added before reaching a final waveform. However, even for pulsars which show mode changes, a maximum number of $`10^4`$ pulses is typically sufficient for a stable average pulse shape to emerge from the process of adding seemingly random pulses (Helfand, Manchester & Taylor 1975; Rankin & Rathnasree 1995). In contrast, the profile changes of PSR J1022+1001 which we study in this paper are on much longer time scales, i.e. hundreds of thousands of periods or more, as discussed below.

## 3 The changing profile of PSR J1022+1001

Soon after the discovery of PSR J1022+1001 (Camilo et al. 1996), we included it as part of our regular timing programme at Effelsberg. The first high S/N profile was obtained in 1994 August (Fig. 1a). Comparison with another high S/N profile in 1994 October (Fig. 1b) clearly demonstrated that the resolved pulse peaks differ significantly in their relative amplitudes. Observations at other telescopes confirmed this result, which will be discussed in detail in the following.

### 3.1 Observations and data reduction

The majority of the data presented in this paper were obtained at 1410 MHz with the Effelsberg 100-m radiotelescope of the Max-Planck-Institut für Radioastronomie, Bonn, Germany. Besides the Effelsberg Pulsar Observing System (EPOS) described by KXL98, we also made measurements with the Effelsberg-Berkeley-Pulsar-Processor (EBPP), a coherent de-disperser that has been operating in parallel with EPOS since 1996 October. The EBPP provides 32 channels for each polarization with a total bandwidth of up to 112 MHz, depending on observing frequency, dispersion measure and number of Stokes parameters recorded.
For PSR J1022+1001 a bandwidth of 56 MHz can be obtained when recording only the two orthogonal (left- and right-hand) circularly polarized signals (LHC and RHC). In polarization mode, i.e. also recording the polarization cross-products, a bandwidth of 28 MHz can be used. Each channel is coherently de-dispersed on-line (assuming a dispersion measure of DM$`=10.25`$ pc cm$`^{-3}`$) and folded with the topocentric pulse period. Individual sub-integrations typically last for 2 min, before they are transferred to disk. A more detailed description of the EBPP can be found in Backer et al. (1997) and Kramer et al. (1999).

In order to monitor the gain stability and polarization characteristics of the observing system, we also performed regular calibration measurements using a switchable noise diode. The signal from this noise diode is injected into the waveguide following the antenna horn and was itself compared to the flux density of known continuum calibrators during regularly performed pointing observations. Switching on the noise diode regularly after observations of pulsars allowed monitoring of gain differences in the LHC and RHC signal paths. Use of this procedure, along with parallel observations by two independent data acquisition systems, allows us to exclude an instrumental origin of the observed profile changes.

As a demonstration we show a three hour observation of PSR J1022+1001 in Fig. 2, where each profile corresponds to an integration time of about 40 min. The total power profile (right column) was obtained after appropriately weighting and adding the LHC and RHC profiles shown in the first two columns. In order to guide the eye, we have drawn a dashed horizontal line at the amplitude of the trailing pulse peak, which was normalized to unity. Error bars are based on a worst-case analysis, combining $`3\sigma `$ values calculated from off-pulse data with the (unlikely) assumption that the gain difference has (still) an uncertainty of about 20%. Inspecting the time evolution of the profiles shown, we see that the RHC profile remains unchanged during the whole measurement. At the same time the LHC profile undergoes clear changes. At the beginning of the observations, the trailing pulse peak is the dominant feature in the LHC profile; it then weakens gradually with time, until it becomes of equal amplitude to the first pulse peak. The resulting (total power) profiles reflect exactly this trend.

The measurement presented in Fig. 2 clearly demonstrates that the observed profile changes are not of instrumental origin, owing to the lack of any instrumental effect which could explain the observed evolution on these short time scales. Moreover, a correlation between profile shape and source elevation or hour angle is not present. We also searched for a possible relation between profile changes and pulse intensity. In Fig. 2 we thus indicate the flux density measured for the corresponding profiles (with an estimated uncertainty of less than 10%). Clearly, the profile changes are uncorrelated with changes in intensity. Instead, the observed intensity change is presumably caused by interstellar scintillation, a common phenomenon seen in low dispersion measure pulsars (Rickett 1970).

### 3.2 Profile changes with time

A simple comparison of measured pulse profiles normalized to each of the two pulse peaks provides first clues as to whether the profile is changing as a whole or stable parts are present. In Fig. 3 we present pulse profiles obtained at 1410 MHz at different epochs.
Normalizing to the leading pulse peak, the profile apparently changes over all pulse longitudes, i.e. including the depth of the saddle region and the width of the profile itself. Normalizing the same profiles to the trailing pulse peak seems to cause mainly changes in the first profile part while the trailing one remains stable. This picture seems also to apply to the 430-MHz data obtained by Camilo (1995) and can be confirmed, as discussed later, by the timing behaviour of this pulsar.

Although our data sometimes suggest that variations on time scales of a few minutes are present, we need higher S/N data to confirm this impression. Instead, we reliably study here profile variations visible on longer time scales by adding about $`6\times 10^4`$ to $`8\times 10^4`$ pulses each (i.e. 10 to 16 min). Although this corresponds to a much larger number of pulses than needed to reach a stable profile for normal pulsars even in the presence of moding (cf. Sect. 2), we still observe a smoothly varying set of pulse shapes. In order to demonstrate that the involved time scales are highly variable, we calculated the amplitude ratio of the leading and trailing pulse peaks at 1410 MHz as the most easily accessible parameter to describe the profile changes. In order to use a large homogeneous data set, we analyzed EPOS data, which were obtained with a bandwidth of 40 MHz, a time resolution of $`25.8\mu `$s (cf. KXL98) and an integration time as quoted above. We estimated uncertainties in the amplitude ratio using the same worst-case analysis as described before.

The mean value of the component ratio (amplitude of the leading pulse peak divided by that of the second one) for the whole data set covering about four years of observations is $`0.975\pm 0.009`$. Two examples of observations of comparable duration are shown in Fig. 4, where we plot the amplitude ratio as a function of time. During the first measurement the profile appears to be stable. In the second observation profile changes are evident. This is consistent with the results of an unsuccessful search for periodicities or typical time scales in the amplitude ratio data by computing Lomb periodograms of the unequally sampled data set. Using a method described by Press et al. (1992) we investigated time scales ranging from several hours, over days, to months without obtaining significant results.

In order to model the profile changes in detail, we fit the integrated profiles to a sum of Gaussian components, defined as

$$I(\varphi )=\sum _{i=1}^{n}a_{3i-2}\mathrm{exp}\left\{-\left(\frac{\varphi -a_{3i-1}-\varphi _0}{a_{3i}}\right)^2\right\},$$ (1)

where $`\varphi _0`$ is a fiducial point. As shown by KXL98, PSR J1022+1001 is well described by a sum of $`n=5`$ components (cf. Fig. 5). We applied this method to the whole data set, varying the amplitudes, positions, and widths of the components. We then developed a model using the median values of component position and width, and found that, surprisingly, this model fits all observed profiles well with only adjustments to the relative amplitudes of the components. Of the profiles studied, only 5% of the fits would have been rejected by the criteria of Kramer et al. (1994). According to these, the significance level of the null-hypothesis that the post-fit residuals in the on-pulse region and the data in an off-pulse region of similar size are drawn from the same parent distribution must not be less than 95%.
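A minimal Python sketch of the fitting procedure just described (not the authors' implementation; array names are hypothetical). The first function implements Eq. (1); the second holds the median centroids and widths of the template fixed and fits only the component amplitudes.

```python
import numpy as np
from scipy.optimize import curve_fit

def profile_model(phi, *a):
    """Sum of n Gaussian components, Eq. (1), with parameters ordered as
    (amplitude, centre, width) per component and the fiducial point phi0 = 0."""
    out = np.zeros_like(phi)
    for i in range(len(a) // 3):
        amp, cen, wid = a[3 * i], a[3 * i + 1], a[3 * i + 2]
        out += amp * np.exp(-((phi - cen) / wid) ** 2)
    return out

def fit_amplitudes(phi, prof, centres, widths, amp0=None):
    """Fit only the component amplitudes, with the median centres and widths
    of the five-component template held fixed."""
    amp0 = np.ones(len(centres)) if amp0 is None else amp0
    model = lambda x, *amps: sum(
        a * np.exp(-((x - c) / w) ** 2) for a, c, w in zip(amps, centres, widths))
    popt, pcov = curve_fit(model, phi, prof, p0=amp0)
    return popt, np.sqrt(np.diag(pcov))
```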
Most of the rare cases where these criteria were not fulfilled were profiles with very high S/N, indicating that a refined model would be needed to perfectly describe the best data. The summarized results of our Gaussian fitting procedure are presented as a set of histograms in Figures 6 and 7 and in Table 1. Figure 6 shows the occurrence of amplitudes for each of the five components (for the numbering see Fig. 5). All profiles were normalized to the trailing peak of the profile, so that the amplitude of the fifth component is always close to unity. Whilst the range of amplitudes is well confined for the first component, the amplitudes of the second and fourth components show a broad distribution. In particular, the amplitudes of the third component exhibit a large scatter, which is also demonstrated by the summary of the results in Table 1. Inspecting Fig. 5, it is clear that the amplitude, $`p`$, of the first pulse peak is made up of a combination of intensities from components 2 and 3, scaling as $`p=0.75a_4+0.95a_7`$. The quantity $`p`$ is also displayed in Fig. 6, showing a very broad distribution reflecting the observed changes in amplitude ratio.

We can now test numerically whether only parts of the leading profile are changing by repeating the above analysis, but this time additionally allowing the relative spacing of the components to vary in the fit. The intriguing result is presented in Fig. 7, which shows the distributions of the resulting centroids (relative to the fiducial point) for each component, and in Table 1. Interestingly, the scatter in the central position gradually decreases from the leading to the trailing part. At the same time, the corresponding amplitude histogram is almost identical to Fig. 6 (not shown). This result indeed suggests that the trailing part of the profile is more stable than the leading one, which undergoes significant profile changes.

### 3.3 Profile changes with frequency

Although profiles of normal pulsars are well known to change significantly with observing frequency, MSPs often show a much smaller profile development (XKJ98). In contrast, the profiles of PSR J1022+1001 show strong changes with frequency, which are inconsistent with the canonical behaviour of normal pulsars (cf. Rankin 1983; Lyne & Manchester 1988).

#### 3.3.1 Large frequency scale

Comparing the average pulse profiles of PSR J1022+1001 over a wide range of frequencies (cf. Sayer, Nice & Taylor 1997; Camilo et al. 1996; Kijak et al. 1997; Sallmen 1998; KXL98; Kramer et al. 1999), it becomes clear that profile changes at frequencies other than 400 or 1400 MHz are more difficult to recognize (but nevertheless possible). Only around these two frequencies are both prominent pulse peaks of comparable (although nevertheless changing) amplitude. Whether the profile changes at different frequencies occur simultaneously has to be addressed by simultaneous multi-frequency observations. This might, however, be a difficult task given the phenomenon discussed below.

#### 3.3.2 Small frequency scale

In almost all cases, EPOS and the EBPP, both operating in parallel, yielded identical pulse profiles. However, on a few occasions the EBPP profiles differed slightly from those obtained with EPOS. The cause turned out to be profile variations across the observing bandpass: while EPOS always uses a fixed bandwidth of 40 MHz, the bandwidth of the EBPP for PSR J1022+1001 at 1410 MHz is 56 MHz in total power mode and 28 MHz in polarization mode.
When profile changes happen on frequency intervals smaller than this, the obtained profile depends on the exact location and size of the bandwidth used. This is what we observe, as demonstrated by a contour plot (Fig. 8), where we show the intensity as a function of pulse longitude and observing frequency. In order to produce this plot we have added 12 min of EBPP total power data, folding with the topocentric pulse period. Of the 30 usable frequency channels (or 52.5 MHz; two channels were excised due to technical reasons), each two adjacent ones were collapsed to produce a reliable S/N ratio. All resulting 15 profiles were normalized to the second pulse peak, indicated by the dashed vertical line at $`60^{\circ }`$ longitude. Contour levels were chosen such that solid lines reflect an increase of $`3\sigma `$ (computed from off-pulse data) from the unit amplitude of the trailing pulse peak. Conversely, the dotted lines denote $`3\sigma `$ decreases with respect to the trailing pulse peak. Additionally, we overlay a sample of corresponding profiles as insets whose vertical position reflects their actual observing frequency. Their horizontal position is arbitrarily chosen for reasons of clarity. The longitude ranges covered in the contour plot and the pulse profiles are identical.

Evidently, a significant profile change is occurring on a small frequency scale of the order of 8 MHz, which however also varies between observations. Obviously, the profile observed over a large bandwidth is an average of the individual profiles within the band. Depending on the relative occurrence and strength of the various pulse shapes, which is additionally modulated by interstellar scintillation, a whole variety of pulse shapes and time scales can be created.

### 3.4 Polarization structure

The polarization of PSR J1022+1001 has already been discussed by XKJ98 (see also Sallmen 1998 and Stairs 1998). Here we concentrate on the impact of the profile changes on the polarization characteristics, since it is already clear from Fig. 2 that some changes are to be expected. In Fig. 9 we present polarization data obtained with the EBPP at 1410 MHz for two typical pulse shapes. In the left panel the leading pulse peak is weaker, whereas in the right panel the amplitude ratio is reversed. The linearly polarized intensity (and thus its position angle) is very similar in both measurements, but the circular polarization shows distinct differences. In the right profile we observe significant circular power with positive sense, coinciding with the leading resolved peak of the pulse profile. This feature of circular polarization is not present in the left profile. Similarly, the saddle region of the right profile shows a dip in circular power, while at the same longitude the left profile shows significant circular power with negative sense. Since the position angle swing appears to be identical in both measurements, it rules out some obvious effects due to changes in the viewing geometry. In fact, the strange notch appearing at the maximum of circular power is prominent in both profiles and seems to describe a resolvable jump of about $`70`$ deg above the otherwise fairly regular S-like swing. We stress that the two profiles shown in Fig. 9 represent only two typical pulse profiles; various states between the two extremes can be observed.

The Gaussian components used to model the profile show a distinct correspondence to the polarization structure: the first component coincides with the unpolarized leading part of the profile.
The second component corresponds to the first linearly polarized feature, whilst the third component resembles the first large peak in circular polarization. The fourth component coincides with the second peak in linearly polarized intensity, and the fifth Gaussian clearly agrees with the trailing prominent pulse peak. This correspondence between Gaussian components and polarization features, along with the success of the Gaussian model for the various profiles of this and other pulsars (e.g. Kramer et al. 1994, KXL98), strongly suggests that the Gaussian components have some physical meaning, and are not just a mathematical convenience used to describe profiles.

### 3.5 Timing solution

We have undertaken timing observations of PSR J1022+1001 at several observatories and several observing frequencies over a span of four years. In many cases the timing measurements were derived from the same data as used in the profile shape study described above. Data were collected at the 300 m telescope at Arecibo (May to November 1994; 430 MHz), the 100 m telescope at Effelsberg (December 1994 to July 1998; 1400 MHz), the 76 m Lovell telescope at Jodrell Bank (April 1995 to July 1997; 600 and 1400 MHz), and the 42 m telescope at Green Bank (July 1994 to May 1998; 370, 600, and 800 MHz). (The Arecibo Observatory, a facility of the National Astronomy and Ionosphere Center, is operated by Cornell University under a cooperative agreement with the National Science Foundation.) At each observatory, data were folded with the topocentric pulse period, de-dispersed (on- or off-line), and recorded, along with the observation start time.

Times of arrival were calculated by cross-correlating the data profiles with a standard template. For the Green Bank and Jodrell Bank data, a template with fixed shape was used. For the Arecibo and Effelsberg data, the model of five Gaussian components with fixed widths and separations but freely varying amplitudes was used. The Arecibo data were not calibrated (left- and right-hand circular polarizations were summed with arbitrary weights), and systematic trends were evident in the residual arrival times, even after allowing the Gaussian component amplitudes to vary. The trends were reduced somewhat by fitting the residuals to a linear function of the amplitudes of the five Gaussian components and removing the resulting function from the data. These procedures had the net effect of reducing the rms residual arrival times from $`25\mu `$s to $`17\mu `$s for two-minute integrations. Still, some systematics remained, typically drifts of order $`20\mu `$s over time spans of 2 hours (Figure 10). An alternative scheme for timing the Arecibo data, in which a conventional fixed-template scheme was used but only that part of the profile from the central saddle point through the trailing edge was given weight in the fit, gave results very similar to those of the five-Gaussian fit. We view this as further evidence that the trailing edge of the profile is relatively stable, while the leading profile is variable.

A total of 4277 times of arrival (TOAs) were measured. These were fit to a model of pulsar spin-down, astrometry, and orbital elements using the tempo program. Root-mean-square (RMS) residual arrival times after the fit were of order 15-20 $`\mu `$s for the Arecibo, Jodrell Bank, and Effelsberg data sets, and 40-100 $`\mu `$s for the Green Bank data (Figure 11).
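For reference, a minimal sketch of the template-matching step used to extract a TOA; this illustrates the standard practice of cross-correlating profile and template rather than the exact implementation used here. The correlation peak is refined by parabolic interpolation, and the returned phase offset (in turns) times the pulse period gives the TOA correction.

```python
import numpy as np

def toa_offset(profile, template):
    """Pulse phase offset (in turns) of a profile relative to a standard
    template, from the peak of their circular cross-correlation."""
    n = len(template)
    xc = np.fft.irfft(np.fft.rfft(profile) * np.conj(np.fft.rfft(template)), n)
    k = int(np.argmax(xc))
    ym, y0, yp = xc[(k - 1) % n], xc[k], xc[(k + 1) % n]
    # Parabolic interpolation refines the peak to a fraction of a bin.
    shift = k + 0.5 * (ym - yp) / (ym - 2.0 * y0 + yp)
    return ((shift + n / 2) % n - n / 2) / n   # wrapped into (-0.5, 0.5]
```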
To partially compensate for systematic uncertainties, the Arecibo TOAs were given uniform weights in the fit (equivalent to a timing uncertainty of 17 $`\mu `$s), and systematic terms (of order 10 $`\mu `$s) were added in quadrature to the uncertainties of TOAs from other observatories. The resulting fits had reduced $`\chi ^2`$ values close to 1 for each observatory, and the overall fit had a reduced $`\chi ^2`$ of 1.09 for the full data set. Our best estimates of the timing parameters are listed in Table 2. To guard against remaining systematic errors, we separately analyzed several subsets of the data and incorporated the spread in parameters thus derived into the uncertainties in Table 2. Particular data sets considered included the individual sets from Green Bank, Jodrell Bank, and Effelsberg; a smoothed data set (in which all TOAs from a given day were averaged); and a data set which excluded all Earth-pulsar lines-of-sight passing within $`30^{\circ }`$ of the Sun. We recommend that the uncertainties thus derived be treated as $`1\sigma `$ values.

Because this pulsar is close to the ecliptic, the uncertainty in ecliptic latitude, as determined by timing, is much greater than the uncertainty in ecliptic longitude. To minimize covariance between fit parameters, the pulsar’s position and proper motion are thus best presented in ecliptic coordinates. The ecliptic coordinates given in Table 2 are based on the reference frame of the DE 200 ephemeris of the Jet Propulsion Laboratory, rotated by $`23^{\circ }26^{\prime }21.4119^{\prime \prime }`$ about the direction of the equinox. The proper motion of this pulsar has not been previously reported. The measured proper motion in ecliptic longitude, $`\mu _\lambda `$, translates to a one-dimensional space motion of 50 km s$`^{-1}`$, assuming a distance of 0.6 kpc, as inferred from the dispersion measure. This is typical of the velocities of millisecond pulsars (e.g. Lorimer 1995; Cordes & Chernoff 1998).

## 4 Discussion

For PSR J1022+1001 we have clearly demonstrated the existence of highly unusual changes in pulse shape and polarization which cannot be explained by instrumental effects. Studies of other MSPs reveal that PSR J1022+1001 is not the only source for which such behaviour can be observed. Backer & Sallmen (1997) have already discussed a similar phenomenon for PSR B1821-24. Another MSP for which we find profile changes is PSR J1730-2304 (see Fig. 12). Its usual weakness at 1410 MHz (cf. KXL98) prevents an analysis as detailed as that possible for PSR J1022+1001, but similar profile changes have also been observed at the Parkes telescope (Camilo et al., in prep.). Very recently, Vivekanand, Ables & McConnell (1998) also described small profile changes of PSR J0437-4715 at 327 MHz. Although they observed this highly polarized pulsar with only a single polarization, and although Sandhu et al. (1997) demonstrate that measurements of this pulsar are difficult to calibrate, Vivekanand et al. argue that these pulse variations are real. In any case, the low time resolution of their observed profiles prevents a detailed analysis.

It was already noted by XKJ98 that for some MSPs profile changes can be prominent in the polarization characteristics whereas the total intensity remains mostly unchanged. As an intriguing example we refer to PSR J2145-0750, for which XKJ98 measured at 1410 MHz a high degree of polarization and a well defined, flat position angle (see their Fig. 1).
Recent results indicate that for most of the time, the profile seems in fact to be weakly polarized with a highly disturbed position angle swing (Sallmen 1998, Stairs 1998). However, a profile very similar to XKJ98’s 1410 MHz observation has been observed by Sallmen (1998) also at 800 MHz. As is apparently the case for PSRs J1022+1001 and B1821$``$24, only certain parts of the profile seem to actually change. Thus, we can exclude any propagation effect due to the interstellar or interplanetary medium, since it should affect all parts of the profile simultaneously. When we compare the properties of this ’strange’ sample of MSPs, we notice that PSRs J0437$``$4715, J1022+1001 and J2145$``$0750 have an orbiting companion while both PSRs J1730$``$2304 and B1821$``$24 are isolated pulsars. The existence of a binary companion is therefore certainly unrelated to the observed phenomenon. The pulse periods of the pulsars range from 3.05 ms (PSR B1821$``$24) to 16.45 ms (PSR J1022+1001), and their profiles are not only vastly different in shape and frequency development (KXL98 and XKJ98), but also dissimilar in their polarization structure (XKJ98). While, for instance, in the cases of PSRs J1730$``$2304 and B1821$``$24 a highly linearly polarised component seems to change in intensity, it is a weakly polarised component in the case of PSR J1022+1001. Another prominent example of profile changes in a recycled pulsar is the binary pulsar PSR B1913+16, whose changes were described by Weisberg, Romani & Taylor (1989), Cordes, Wasserman & Blaskiewicz (1990) and Kramer (1998). The observed small secular change in the amplitude ratio, and now also in the separation, of the components is evidently caused by geodetic precession of the neutron star. However, the time scales of the profile changes discussed here are far shorter, and the amplitudes involved are dramatically larger. In combination with the stable polarization angle swing (at least for PSR J1022+1001), we can certainly exclude a precession effect for the profile changes of our studied sample. The simplest explanation for the observations would be if we had discovered a mode change of the kind long known for normal pulsars. Although mode changes are not understood even for slowly rotating pulsars, we would not have to invoke previously unknown effects. Comparing the number of normal pulsars known when Backer (1970) discovered mode changing, we note that it is similar to the number of MSPs known now. However, if the profile changes were just another aspect of the mode changing seen in normal pulsars, we would similarly expect that a large number of pulses, typically $`10^4`$, should be sufficient to average out any random fluctuations in the individual pulses, i.e. to produce a stable waveform. For PSR J1022+1001 this would mean obtaining a non-changing pulse profile after only about 3 min of integration time, in contrast to what is observed, namely a pulse shape changing smoothly on much longer time-scales. Besides, except for the case of PSR J0437$``$4715 reported by Vivekanand et al. (1998), the pulse shape changes discussed here seem to appear only in certain parts of the profile while others are obviously unaffected. This, together with the obvious lack of a relation between pulse shape and intensity, is unusual for the moding behaviour as seen in slowly rotating pulsars. 
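The 3 min figure quoted above is simple arithmetic on the pulse period:

$$10^4\times P=10^4\times 16.45\mathrm{ms}\approx 165\mathrm{s}\approx 3\mathrm{min}.$$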
Most important, however, is that the “classical” mode changing does not provide an explanation for the extraordinary narrow-band variation of the profile of PSR J1022+1001, which is most reminiscent of a scintillation pattern. Since we have excluded propagation effects caused by the interstellar medium, the data could be interpreted either as a magnetospheric propagation effect or in the context of a previously unnoticed narrow-band property of the emission process. The latter would be a surprising result, since most previous pulsar studies favour a broad-band emission process (e.g. Lyne & Smith 1998). We note here that Smirnova & Shabanova (1992) describe simultaneous observations of PSR B0950+08 at very low frequencies of 60 MHz and 102 MHz. Observing with only one linear polarization, they report a previously unnoticed profile change of this source which does not seem to occur at both frequencies at the same time. Similar to our observations, they noticed a narrow-band variation of the pulse profile at both frequencies with a characteristic bandwidth of 30–40 kHz. Arguing that recording only one linear polarization is not responsible for this effect, they also consider a narrow-band property of the emission process or a scintillation effect of spatially separate sources of emission. Smirnova & Shabanova favour the latter explanation and give estimates for the separation of the emission regions. Applying similar calculations to our case, however, we easily derive differences in emission height which are larger than the light-cylinder radius of PSR J1022+1001. It is interesting to note that the profile changes of PSR J1022+1001 bear certain similarities to the behaviour of the well known mode-changing pulsar PSR B0329+54 (Bartel et al. 1982). McKinnon & Hankins (1993) pointed out that “gated” pulse profiles of PSR B0329+54, produced by single pulses sorted according to their intensity, revealed a shift in the pulse longitude of the core component depending on its intensity. In order to explain this effect, they considered a different emission height for strong and weak pulses as well as a circular motion of the core component around an axis offset from the magnetic axis. The profile changes in PSR J1022+1001 could be explained in a similar manner, assuming that a core component moves, for instance, on an annulus whose center is displaced from the magnetic axis but closer to the emission region of the leading pulse peak. Those profiles with an amplitude ratio larger than unity (cf. Sect. 3.2) are then produced when the core component is positioned in such a way that it adds to the observed intensity of the first pulse peak. Most of the time, however, it will be away from the first pulse peak, leading to an average amplitude ratio lower than unity, as observed. Since core components are mostly associated with circular rather than linear polarization, this simple picture also provides an explanation of why only the circular polarization is changing whereas the linear remains unchanged. A rough estimate for the displacement can be calculated by using Eqn. (5) of McKinnon & Hankins (1993), a lower limit for the magnetic inclination angle $`\alpha `$ of $`60^{\circ }`$ (XKJ98) and the spacing of the centroids of components 3 and 5 of $`\mathrm{\Delta }t\approx 0.55`$ ms (Fig. 7). This results in a displacement of $`\approx 1.2`$ km, which, interestingly, corresponds to the radius of a dipolar polar cap for PSR J1022+1001. 
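The correspondence with the polar cap can be checked with the standard dipolar polar-cap radius, assuming a canonical neutron-star radius of $`R=10`$ km:

$$r_{\mathrm{pc}}\approx R\sqrt{\frac{2\pi R}{cP}}=10\mathrm{km}\times \sqrt{\frac{2\pi \times 10\mathrm{km}}{(3\times 10^5\mathrm{km}\mathrm{s}^{-1})\times (16.45\mathrm{ms})}}\approx 1.1\mathrm{km},$$

indeed close to the $`\approx 1.2`$ km displacement derived above.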
A movement of the core on a circular path would, however, imply a typical time scale for the profile changes, which is not observed. If the motion of the core component happens instead in an irregular manner, obvious time scales might not be present. Nevertheless, fluctuation spectra of observed single pulses may be able to resolve a possible movement of the core. These results should be frequency independent, since all profile changes should obviously occur simultaneously over a wide range of frequencies. Single pulse studies also offer a chance to detect possible correlations between the intensity of single pulses and the resulting average pulse profile, as in the cases of PSR B0329+54 (McKinnon & Hankins 1993) or PSR J0437$``$4715 (Jenet et al. 1998). We note that a preliminary analysis of recent Arecibo data at 430 MHz suggests that “giant pulses” for PSR J1022+1001 occur – if present at all – much less often than once per $`10^4`$ stellar rotations, which is already a far lower rate than observed for the Crab pulsar (e.g. Lundgren et al. 1995) or PSR B1937+21 (e.g. Sallmen & Backer 1995; Cognard et al. 1996). Although the simple picture above can apparently explain some of the observed features at least qualitatively, it bears the fundamental problem that we still would not know what causes this motion of individual components. The $`E\times B`$-drift considered by McKinnon & Hankins (1993) would presumably cause a regular motion. Similarly, the model unfortunately provides no direct explanation of the observed narrow-band variation of the pulse profile. Actually, if we are dealing with the same emission mechanism as for normal pulsars (see KXL98, XKJ98 and Jenet et al. 1998) and if we cannot explain the data by the known moding behaviour, then we are left with a propagation effect in the pulsar magnetosphere. This might be combined with different emission altitudes for different parts of the profile and/or differential absorption properties of the magnetosphere above the polar cap. Indeed, one could interpret the position angle swing of PSR J1022+1001 as the composition of two separate S-swings which are delayed relative to each other and thus represent (independent) emission from different altitudes. In that case, the “notch” in the swing would mark the longitude where the trailing part of the pulse starts to dominate over the leading one. However, applying the model derived by Blaskiewicz, Cordes & Wasserman (1991) to estimate the emission height based on polarization properties, we would derive a negative emission altitude for the trailing profile part. More conventionally, we could use the spreads in the centroids, $`\mathrm{\Delta }t`$, of the fitted Gaussian components as an estimator for a change in emission height, $`\mathrm{\Delta }r`$. A rough estimate is given by $`\mathrm{\Delta }r=c\mathrm{\Delta }t/(1+\mathrm{sin}\alpha )`$, where $`\alpha `$ is the magnetic inclination angle and $`c`$ the speed of light (see e.g. McKinnon & Hankins 1993). Using the largest spread as found for component 1 (i.e. 0.082 ms, cf. Tab. 1), and again $`\alpha =60^{\circ }`$ (XKJ98), we derive a change of $`\mathrm{\Delta }r\approx 130`$ km. This value is still smaller than the light-cylinder radius of 785 km. Although we can apparently construct a simple phenomenological model which explains some observations qualitatively, a propagation effect in the pulsar magnetosphere might still be the most probable explanation for the observed phenomena. 
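For reference, the light-cylinder radius quoted above follows from the pulse period alone:

$$R_{\mathrm{LC}}=\frac{cP}{2\pi }=\frac{(3\times 10^5\mathrm{km}\mathrm{s}^{-1})\times (16.45\mathrm{ms})}{2\pi }\approx 785\mathrm{km}.$$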
In conclusion, we believe that this interpretation and the reason for the observed narrow-band variation of the pulse shape should be addressed with future simultaneous multi-frequency observations of these interesting sources. Only such observations have the potential to distinguish between a propagation effect in the pulsar magnetosphere, which can be expected to be frequency dependent, and those involving a reformation of the emitting regions, which should produce frequency independent properties. ## 5 Summary Focussing in particular on PSR J1022+1001, we have demonstrated that a sample of MSPs shows distinct and unusual profile changes. We argued that these profile changes are not caused by instrumental effects, nor do they represent a propagation effect in the interstellar or interplanetary medium. In fact, we conclude that the observed variations in pulse shapes (in time and frequency) are intrinsic to the pulsars and that they are not consistent with the mode changing effect known for normal pulsars. We have shown that the profile changes can have a significant impact on the apparent timing stability of MSPs. We suggest that the usual template-matching procedure be extended by allowing for variations in the amplitudes of the different profile components. As demonstrated for PSR J1022+1001, this procedure improves the timing accuracy significantly and has led to the first proper motion measurement for this pulsar. ###### Acknowledgements. We are indebted to all people involved in the project to monitor millisecond pulsars in Effelsberg, in particular to Axel Jessner and Alex Wolszczan. MK acknowledges the receipt of the Otto-Hahn Prize, during whose tenure this paper was written, and the warm hospitality of the Astronomy Department at UC Berkeley. FC is a Marie Curie Fellow.
# Triple sign reversal of Hall effect in HgBa2CaCu2O6 thin films after heavy-ion irradiations ## Abstract The triple sign reversal in the mixed-state Hall effect has been observed for the first time in ion-irradiated HgBa<sub>2</sub>CaCu<sub>2</sub>O<sub>6</sub> thin films. The negative dip at the third sign reversal is more pronounced for higher fields, which is opposite to the case of the first sign reversal near T<sub>c</sub> in most high-T<sub>c</sub> superconductors. These observations can be explained by a recent prediction in which the third sign reversal is attributed to the energy derivative of the density of states and to a temperature-dependent function related to the superconducting energy gap. These contributions appear prominently in cases where the mean free path is significantly decreased, such as our case of ion-irradiated thin films. The Hall anomaly in the mixed state of type II superconductors is one of the most attractive subjects, both experimentally and theoretically, in the field of vortex dynamics. According to classical theories, the vortex motion due to the Lorentz force should generate a Hall voltage with the same sign as observed in the normal state, because normal electrons in the vortex cores effectively produce this voltage. Contrary to this, a puzzling sign reversal of the Hall effect has been observed in various conventional superconductors, such as impure Nb and V crystals and Nb thin films, and in some high-T<sub>c</sub> superconductors (HTS), such as YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7</sub> crystals and La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>. Furthermore, a double sign reversal has been observed in highly anisotropic HTS, such as Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub> crystals, Tl<sub>2</sub>Ba<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub> films, and HgBa<sub>2</sub>CaCu<sub>2</sub>O<sub>6</sub> (Hg-1212) films. Various models related to two-band effects, induced pinning, superconducting fluctuations, and flux backflow have been proposed to interpret these Hall anomalies, but they have not been able to explain the experimental results. Therefore, the origin of the mixed-state Hall effect still remains an unsolved problem. An interesting microscopic approach based on a time-dependent Ginzburg-Landau theory has been proposed in a number of papers. In this approach, the mixed-state Hall voltage in type II superconductors is determined by the quasiparticle and hydrodynamic contributions of the vortex cores. Since the sign of the hydrodynamic term is determined by the energy derivative of the density of states, a sign anomaly can appear if that term is negative. This theory is qualitatively consistent with experimental results, especially for high magnetic fields and for temperatures near T<sub>c</sub>. Recently, Kopnin has developed a modified theory which includes an additional force arising from charge neutrality effects. In this theory, interestingly, he anticipated the possibility of a third sign anomaly when the system remains moderately clean; this anomaly would occur even at low temperatures. This implies that the third sign reversal could be observed if the mean free path of a system were reduced from the clean limit of $`l>\xi `$ to the moderately clean limit of $`l\sim \xi `$, where $`l`$ is the mean free path and $`\xi `$ is the superconducting coherence length. This may be the case for columnar defects produced by high-dose ion irradiation. 
Hg-1212 thin films are suitable candidates for observing the third sign reversal because the general trend of the negative dip at low field near T<sub>c</sub> still remains at higher fields, a situation which is clearly different from the cases of the Bi and Tl compounds. This suggests that at higher fields, the negative contribution due to the additional transverse force in the Hg-1212 compound is more substantial than that in either the Bi or Tl compounds. In this Letter, we present the first report on an observation of triple sign reversal in superconducting Hg-1212 thin films containing columnar defects produced by 5-GeV Xe ions. The dose, 1.5 $`\times `$ 10<sup>11</sup> ions/cm<sup>2</sup>, corresponds to a mean distance between the columnar defects of less than 258 $`\AA `$, thus effectively reducing the mean free path of the samples, even at low temperatures. Consequently, we were able to observe, for the first time to the best of our knowledge, the triple sign reversal predicted by the microscopic theory of nonequilibrium superconductivity. This observation will provide new insight, we believe, into the flux dynamics in type II superconductors. The fabrication process and the transport properties of the Hg-based thin films used in this study were previously reported in detail. The mid-transition temperatures T<sub>c</sub> of the as-grown thin films on (001) SrTiO<sub>3</sub> substrates were 122 - 124 K. The critical current density at zero field was $`10^6`$ $`A/cm^2`$ at 100 K. The X-ray diffraction patterns indicate highly oriented thin films with the c axis normal to the substrate plane. The minor phase of HgBa<sub>2</sub>Ca<sub>2</sub>Cu<sub>3</sub>O<sub>8</sub> was less than 5 %. The ion irradiation was performed at the Superconducting Cyclotron Center at Michigan State University by using 5-GeV Xe ions. The irradiation was done at room temperature along a direction normal to the film surface. The irradiation dose was 1.5 $`\times `$ 10<sup>11</sup> ions/cm<sup>2</sup>, which corresponded to a matching field, B<sub>ϕ</sub>, of $`\sim `$ 3 T. These ions produced continuous amorphous tracks with diameters of 50 - 100 $`\AA `$ in the Hg-1212 thin films. The Hall resistivity $`\rho _{xy}`$ and the longitudinal resistivity $`\rho _{xx}`$ were measured simultaneously using a two-channel nanovoltmeter (HP34420A) and the standard five-probe dc method. A magnetic field was applied parallel to the c axis of the Hg-1212 films. $`\rho _{xy}`$ was extracted from the antisymmetric parts of the Hall voltages measured under opposite fields. The applied current densities were 250 - 500 A/cm<sup>2</sup>. Both $`\rho _{xy}`$ and $`\rho _{xx}`$ were Ohmic at the currents used in this study. Typical temperature dependences of $`\rho _{xx}`$ before (B<sub>ϕ</sub> = 0 T) and after heavy-ion irradiation (B<sub>ϕ</sub> = 3 T) for fields up to 8 T are shown in Fig. 1. A large enhancement of the zero-resistance temperature, T<sub>c,zero</sub>, which is due to strong pinning by the columnar defects, is clearly visible. This is consistent with the results of previous works on HTS with columnar defects. We observe that the enhancement of T<sub>c,zero</sub> above 3 T is rather small compared to that below 3 T, indicating that depinned vortices with a density of $`n_\varphi =(HB_\varphi )/\mathrm{\Phi }_o`$ indeed contribute to the resistivity, where $`\mathrm{\Phi }_o`$ is the flux quantum. 
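As a consistency check on the quoted irradiation parameters (an illustrative calculation of ours, not part of the experimental analysis), both the matching field and the mean track spacing follow directly from the dose, taking one flux quantum per columnar track:

```python
# Flux quantum in T*cm^2 (Phi_0 = 2.07e-15 T*m^2)
PHI0 = 2.07e-11
dose = 1.5e11                  # columnar-defect areal density, ions/cm^2

B_phi = dose * PHI0            # matching field: ~3.1 T
spacing_cm = dose ** -0.5      # mean inter-track distance: ~2.6e-6 cm
spacing_A = spacing_cm * 1e8   # ~258 Angstrom

print(f"B_phi ~ {B_phi:.1f} T, mean spacing ~ {spacing_A:.0f} Angstrom")
```

Both numbers reproduce the values quoted in the text.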
The enhancement of pinning by the columnar defects becomes effective below the temperature T\*, which is marked by an arrow in Fig. 1. Figure 2 (a) shows $`\rho _{xy}`$ before and after irradiation for H = 2, 4, 6, and 8 T. For the data at H = 2 T, the first sign reversal, which appears in the vicinity of the transition temperature, does not shift after the irradiation, while the second sign reversal shifts to higher temperature. Such double sign reversals are not very rare for HTS. For the irradiated sample, however, if we look at $`\rho _{xy}`$ on a magnified scale, we can observe a third sign reversal at a relatively low temperature, as shown in Fig. 2 (b). The negative dip becomes clearer with increasing field, a finding which is contrary to the one for the first sign reversal in HTS. For the unirradiated thin films, however, no third sign change is observed for fields up to 8 T. The inset in Fig. 2 shows the third-sign-reversal regions where $`\rho _{xy}`$ is positive (P), negative (N), and zero (Z), i.e., below the resolution of our experiment. Thus, we claim that we have clearly observed a third sign reversal for the irradiated thin films. Now the question arises as to why such a multiple Hall sign reversal is possible for some superconductors. Is there any relevant explanation for this phenomenon? Fortunately, a recent study by Kopnin claims that such a phenomenon is possible if the force arising from the effects of vortex motion on the pairing interaction, which was neglected in previous works, is added to the Lorentz force. The additional force is induced by the kinetic effect of charge imbalance relaxation and thus depends on the difference between the charge densities of the system in the superconducting and the normal states. According to this theory, the Hall conductivity $`\sigma _H`$ due to the motion of a single vortex is determined by three terms: localized excitations, delocalized excitations, and an additional force term. Therefore, $`\sigma _H`$ can be expressed by the sum of three terms: $$\sigma _H=\sigma _H^{(L)}+\sigma _H^{(D)}+\sigma _H^{(A)}.$$ (1) Using $`\hbar =c=k_B=1`$, the Hall conductivity due to localized excitations, $`\sigma _H^{(L)}`$, is given by $$\sigma _H^{(L)}\simeq \frac{Ne}{B}\frac{(\omega _o\tau )^2}{1+(\omega _o\tau )^2},$$ (2) where N is the density of carriers, $`\tau `$ is the relaxation time, $`\omega _o\sim \mathrm{\Delta }^2/E_F`$ is the distance between the energy levels in the vortex core, and E<sub>F</sub> is the Fermi energy. The portion of the Hall conductivity contributed by the additional force, $`\sigma _H^{(A)}`$, is $$\sigma _H^{(A)}\simeq \frac{e}{B\lambda }\left(\frac{\nu }{\zeta }\right)\mathrm{\Delta }^2\beta (T),$$ (3) where $`\lambda `$ is the BCS coupling strength, $`\nu /\zeta `$ is the energy derivative of the density of states, $`\mathrm{\Delta }`$ is the superconducting energy gap, and $`\beta (T)`$ is a positive, temperature-dependent function: $`\beta 1`$ near T<sub>c</sub>, while $`\beta (T)\sim \mathrm{\Delta }/[T\mathrm{ln}(\mathrm{\Delta }/T)]`$ at low temperatures. Since the delocalized excitation term, $`\sigma _H^{(D)}`$, is due to the density of quasiparticles outside the vortex core, the sign of $`\sigma _H^{(D)}`$ is the same as the sign of the normal-state Hall conductivity, and $`\sigma _H^{(D)}`$ is very small at low temperatures compared to $`\sigma _H^{(L)}`$. For this reason, we simply neglect $`\sigma _H^{(D)}`$ at low temperatures. 
It is found that the tangent of the Hall angle, $`tan\mathrm{\Theta }\sim \omega _o\tau `$, is very small ($`\sim 0.01`$) in the dirty region near T<sub>c</sub> and approaches $`\sim 1`$ in the superclean region at $`T\ll T_c`$. This is consistent with the theoretical calculation. $`\sigma _H^{(A)}`$, deduced from charge imbalance relaxation, is determined by the energy derivative of the density of states $`\nu /\zeta `$ at the Fermi surface and by $`\beta (T)`$. Since $`\beta (T)`$ is positive, the sign of $`\sigma _H^{(A)}`$ follows that of $`\nu /\zeta `$, and the sign of $`\sigma _H`$ can depend critically on this term. If $`\sigma _H^{(A)}`$ is negative and if it is the dominant contribution, then $`\sigma _H`$ can be negative. In order to estimate the sign of $`\nu /\zeta `$, we should comment on the symmetry of the superconducting order parameter. Very recently, Himeda et al. calculated the microscopic structure of the vortex cores in HTS by using the two-dimensional t-J model for a wide range of doping rates. They argued that the density of states splits into two levels due to mixing of the s- and the d-wave components in the underdoped regions. The typical density of states for d-wave superconductors was observed in the overdoped regions. This indicates that the sign of $`\nu /\zeta `$ in $`\sigma _H^{(A)}`$ should not be based on the BCS s-wave theory. Within the context of Kopnin’s theory, however, we can estimate the sign of $`\sigma _H^{(A)}`$ from the experimental results. From the data for H = 2 T in Fig. 2, for example, the additional force term must be negative: since $`\sigma _H^{(L)}`$ is positive, the observed negative $`\sigma _H`$ requires $`\sigma _H^{(A)}<0`$, and thus $`\nu /\zeta `$ is negative. According to Eq. (1), $`\sigma _H`$ allows multiple sign reversals as a function of temperature and mean free path. The sign reversals arise from competition between a positive $`\sigma _H^{(L)}`$ and a negative $`\sigma _H^{(A)}`$. For the dirty case with $`l<\xi `$, $`\sigma _H^{(A)}`$ is dominant because $`(\omega _o\tau )^2\sim 10^{-4}`$ is very small as $`TT_c`$. Thus, the sign of $`\sigma _H`$ can be negative. For the clean case with $`l>\xi `$, $`\sigma _H^{(L)}`$ is dominant because the magnitude of $`(\omega _o\tau )^2`$ is very large compared to its magnitude in the dirty case; thus $`\sigma _H`$ is positive in the low-temperature region. This is a plausible interpretation for the double sign reversal observed in Fig. 2. The double sign reversal was also observed in highly anisotropic HTS, such as Bi- and Tl-based compounds. Now, the problem is to explain the triple sign reversal. The first thing we should notice is that the triple sign reversal is observed only in the ion-irradiated samples. In that case, $`\sigma _H^{(L)}`$ decreases substantially because $`\omega _o\tau `$ is reduced drastically by the change from the clean to the moderately clean or the dirty case. Then, a third sign reversal in the mixed state is quite natural, and this interpretation is in good agreement with the experimental observations. To explain this in detail, we should point out that at low temperatures, $`\sigma _H^{(L)}`$ and $`\sigma _H^{(A)}`$ have different temperature dependences through $`\omega _o\tau `$ and $`\beta (T)`$. Since $`\beta (T)`$ increases as $`\sim 1/T`$ with decreasing temperature and $`(\omega _o\tau )^2`$ is still small in the moderately clean region, $`\sigma _H^{(A)}`$ can exceed $`\sigma _H^{(L)}`$ again at low temperatures, especially for the moderately clean case. 
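To make the competition between Eqs. (2) and (3) concrete, the following toy calculation (our illustration, not Kopnin’s; the prefactors, the BCS-like gap, and the regularized form of $`\beta (T)`$ are ad hoc choices representing the moderately clean limit) shows how a positive $`\sigma _H^{(L)}`$ and a negative $`\sigma _H^{(A)}`$ with $`\beta (T)`$ growing as $`\sim 1/T`$ can change sign twice below T<sub>c</sub>; together with the positive normal-state value above T<sub>c</sub>, this corresponds to a triple sign reversal.

```python
import numpy as np

Tc = 1.0
T = np.linspace(0.01, 0.999, 2000)
delta = np.sqrt(1.0 - T / Tc)          # BCS-like gap, Delta(T)/Delta(0)

# Localized-excitation term, Eq. (2): positive, with omega_0*tau ~ Delta^2
w0tau = 0.3 * delta**2
sigma_L = w0tau**2 / (1.0 + w0tau**2)

# Additional-force term, Eq. (3): negative for nu/zeta < 0; beta(T) is a
# regularized version of Delta/[T ln(Delta/T)], kept positive near Tc
beta = delta / (T * np.log(2.0 + delta / T))
sigma_A = -0.008 * delta**2 * beta

sigma_H = sigma_L + sigma_A
crossings = T[np.where(np.diff(np.sign(sigma_H)) != 0)[0]]
print(crossings)   # two sign changes below Tc, near Tc and at low T
```

With these (tunable) parameters $`\sigma _H`$ is negative just below T<sub>c</sub>, positive at intermediate temperatures, and negative again at low temperatures, mimicking the behaviour in Fig. 2.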
As a result, we should expect a third sign reversal of the mixed-state Hall effect if high-density impurities exist, which is in agreement with our observation in Fig. 2. At this point, it is meaningful to compare the temperature dependence of the Hall angles before and after ion irradiation, as shown in Fig. 3. As the temperature decreases from the normal state to the superconducting state, $`tan\mathrm{\Theta }`$ of the pristine sample increases steeply and then shows a peak at a relatively low temperature. The maximum magnitude of $`tan\mathrm{\Theta }`$ at H = 8 T is much larger than that observed in YBCO crystals, which are believed to be very clean superconductors. Note that $`tan\mathrm{\Theta }`$ is reduced significantly by the ion irradiation, even above T\*, where the pinning is not important. This result should be explained by an impurity effect rather than by the pinning effect. This strongly supports the above interpretation that $`\sigma _H^{(L)}`$ can decrease if a clean system becomes moderately clean after the irradiation. Note that we observe the third sign reversal for Hg-1212 thin films irradiated with an ion dose of 1.5 $`\times `$ 10<sup>11</sup> ions/cm<sup>2</sup>, which corresponds to an average distance of 258 $`\AA `$ between the columnar defects. If we consider columnar defects with diameters of 50 - 100 $`\AA `$, together with ubiquitous defects such as oxygen vacancies in the pristine thin films, the mean free path is very short and is much smaller than the value reported for YBCO crystals in the low-temperature region. Therefore, we can observe a triple sign reversal after heavy-ion irradiation because the irradiated thin films are probably moderately clean, even at low temperatures. As a final note, since the above interpretation is based on the assumption that there are localized states in the vortex cores, it is worth mentioning the existence of localized core states in d-wave superconductors. Localized core states, which are consistent with the predictions of theoretical works, have been observed in HTS by using various experimental probes, such as far-infrared spectroscopy and scanning tunneling spectroscopy. Furthermore, in the moderately clean case, Kopnin and Volovik have shown that $`\sigma _H^{(L)}`$ for d-wave superconductors is similar to the previous result based on s-wave superconductors. On the other hand, in a recent calculation of the quasiparticle states in a d-wave superconductor, Franz and Tesanovic have claimed that no bound states appear in the vortex core. In summary, the Hall effect in Hg-1212 films has been studied before and after irradiation by high-energy Xe ions. After irradiation with a dose of 1.5 $`\times `$ 10<sup>11</sup> ions/cm<sup>2</sup>, we find that columnar defects play an important role not only as strong pinning sites but also as high-density impurities which can effectively reduce the mean free path even at low temperatures. As a result, we observe a triple sign reversal, which can be qualitatively interpreted within the framework of a recent model based on the nonequilibrium microscopic theory. This work is partially supported by the Ministry of Science and Technology of Korea through the Creative Research Initiative Program. The work at the University of Kansas is supported by AFOSR, NSF, NSF EPSCoR, and DEPSCoR.
# SEVEN PARADIGMS IN STRUCTURE FORMATION ## 1 Introduction The current model for structure formation in the expanding universe has been remarkably successful. Indeed it has recently been argued that we have resolved the principal issues in cosmology. However the lessons of history prescribe caution. There have been more oscillations in the values of the Hubble constant, the deceleration parameter and the cosmological constant over the working life of a cosmologist than one cares to recall. As the quality of the data has improved, one can be reasonably confident that the uncertainties in parameter extraction have decreased. But have we really converged on the definitive model? I have selected seven of the key paradigms in order to provide a critical assessment. To set the context I will first review the reliability of the fundamental model of cosmology, the Big Bang model, in terms of the time elapsed since the initial singularity, or at least, the Planck epoch, $`10^{-43}\mathrm{s}`$. Galaxies are well studied between the present epoch, $`14\times 10^9\mathrm{yr},`$ and $`3\times 10^9\mathrm{yr}`$ ($`z\sim 3`$). One can examine the distribution of Lyman alpha clouds, modelling chemical evolution from the gas phase metal abundances, and find large numbers of young, star-forming galaxies back to about $`2\times 10^9\mathrm{yr}`$ ($`z\sim 4`$). Beyond this are the dark ages where neither gas nor evidence of galaxy formation has yet been detected. Strong circumstantial evidence from the Gunn-Peterson effect, indicating that the universe is highly ionized by $`z=5`$, suggests that sources of ionizing photons must have been present at an earlier epoch. Microwave background fluctuations provide substantial evidence on degree angular scales for an acoustic peak, generated at $`3\times 10^5\mathrm{yr}`$ ($`z=1000`$), when the radiation underwent its last scatterings with matter. The blackbody spectrum of the cosmic microwave background, with no deviation measured to a fraction of a percent and a limit on the Compton $`y`$ parameter of $`\mathrm{\Delta }y<3\times 10^{-6}`$ (95% CL) on 7 degree angular scales, could only have been generated in a sufficiently dense phase which occurred during the first year of the expansion. Light element nucleosynthesis is an impressive prediction of the model, and testifies to the Friedmann-like character at an epoch of one second. At this epoch, neutrons first froze out of thermal equilibrium to subsequently become incorporated in <sup>2</sup>H, <sup>4</sup>He, and <sup>7</sup>Li, the primordial distribution of which matches the predicted abundances for a unique value of the baryon density. Thus back to one second, there is strong observational evidence for the canonical cosmology. At earlier epochs, any observational predictions are increasingly vague or non-existent. One significant epoch is that of the quark-hadron phase transition ($`t\sim 10^{-4}\mathrm{s}`$, $`T\sim 100\mathrm{MeV}`$), which, while first order, cannot have been sufficiently inhomogeneous to amplify density fluctuations to form any primordial black holes. The electro-weak phase transition ($`t\sim 10^{-10}\mathrm{s}`$, $`T\sim 100\mathrm{GeV}`$) was even more short-lived but may have triggered baryon genesis. Before then, one has the GUT phase transition ($`t\sim 10^{-35}\mathrm{s}`$, $`T\sim 10^{15}\mathrm{GeV}`$), and the Planck epoch ($`t\sim 10^{-43}\mathrm{s}`$, $`T\sim 10^{19}\mathrm{GeV}`$), of unification of gravitation with the electroweak and strong interactions. 
Inflation is generally believed to be associated with a strongly first order GUT phase transition, but is a theory that is exceedingly difficult, if not impossible, to verify. A gravitational radiation background at low frequency is one possible direct relic of quantum gravity physics at the Planck epoch, but we are far from being able to detect such a background. In summary, we could say that our cherished beliefs, not to be abandoned at any price, endorse the Big Bang model back to an epoch of about one second or $`T\sim 1\mathrm{MeV}`$. One cannot attribute any comparable degree of confidence to descriptions of earlier epochs because any fossils are highly elusive. Bearing this restriction in mind, we can now assess the paradigms of structure formation. The basic framework is provided by the hypothesis that the universe is dominated by cold dark matter, seeded by inflationary curvature fluctuations. This does remarkably well at accounting for many characteristics of large-scale structure in the universe. These include galaxy correlations on scales from 0.1 to 50 Mpc, the mass function and morphologies of galaxy clusters, galaxy rotation curves and dark halos, the properties of the intergalactic medium, and, most recently, the strong clustering found for luminous star-forming galaxies at $`z\sim 3`$. I will focus on specific paradigms that underlie these successes and assess the need for refinement both in data and in theory that may be required before we can be confident that we have found the ultimate model of cosmology. ## 2 Paradigm 1: Primordial Nucleosynthesis Prescribes the Baryon Density Primordial nucleosynthesis predicts the abundances of several light elements, notably <sup>2</sup>H, <sup>4</sup>He, and <sup>7</sup>Li. The principal variable is the baryon density, $`\mathrm{\Omega }_\mathrm{b}h^2`$. One finds approximate concordance for $`\mathrm{\Omega }_\mathrm{b}h^2\approx 0.015`$, and with the consensus value of H<sub>0</sub> ($`h=0.7\pm 0.1`$) one concludes that $`\mathrm{\Omega }_\mathrm{b}\approx 0.03`$. Not all pundits agree on concordance, since the primordial <sup>4</sup>He abundance requires a somewhat uncertain extrapolation from the most metal-poor galaxies with He emission lines ($`Z\sim 0.02Z_{\odot }`$) to zero metal abundances. Moreover the <sup>2</sup>H abundance is based on intergalactic (and protogalactic) <sup>2</sup>H observed in absorption at high redshifts toward two quasars, probing only a very limited region of space. However incorporation of <sup>7</sup>Li and allowance for the various uncertainties still leaves relatively impressive agreement with simple model predictions. Direct measurement of the baryon density at $`z\sim 3`$ can be accomplished by using the Lyman alpha forest absorption systems toward high redshift quasars. The neutral gas observed is only a small component of the total gas, but the ionizing radiation from quasars is measured. A reasonably robust conclusion finds that $`\mathrm{\Omega }_{\mathrm{gas}}\approx 0.04`$, implying that the bulk of the baryons are observed and are in diffuse gas at high redshift. At low redshift, the luminous baryon component is well measured, and amounts to $`\mathrm{\Omega }_{\ast }\approx 0.003`$ in stars. Gas in rich clusters amounts to a significant fraction of cluster mass and far more than the stellar mass, but these clusters only account for about five percent of the stellar component of the universe. Combining both detected gas and stars implies that at $`z\approx 0`$, we observe no more than $`\mathrm{\Omega }_{\mathrm{gas}}\approx 0.005`$. 
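The shortfall is substantial: the nucleosynthesis value quoted above corresponds to

$$\mathrm{\Omega }_\mathrm{b}=\frac{\mathrm{\Omega }_\mathrm{b}h^2}{h^2}\approx \frac{0.015}{(0.7)^2}\approx 0.03,$$

several times the $`\mathrm{\Omega }\approx 0.008`$ detected locally in gas and stars combined.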
Here we have a problem: where are the baryons today? Most baryons must therefore be relatively dark at present. There are two possibilities, neither one of which is completely satisfactory. The dark baryons could be hot gas at $`T\sim 10^6`$ K in the intergalactic medium. This gas cannot populate galaxy halos, where it is not observed, nor objects such as the Local Group, and is not present in rich clusters in a globally significant amount. It remains to be detected: if the temperature differed significantly from $`10^6`$ K, the presence of so much gas would already have had observable consequences. The alternative sink for dark baryons is in the form of compact objects. MACHOs are the obvious candidate, detected via gravitational microlensing of LMC stars by objects in our halo, and possibly constituting fifty percent of the dark mass of our halo. However, star-star lensing provides a possible alternative explanation of the microlensing events, associated with a previously undetected tidal stream in front of the LMC and with the known extension of the SMC along the line of sight. In the LMC case, at least one out of approximately 20 events has a known LMC distance, and for the SMC, there are only two events, both of which are associated with the SMC. The statistics are unconvincing, and since until now binary lenses have been required to obtain a measure of the distance, any distance determinations are likely to be biased towards star-star lensing events. ## 3 Paradigm 2: $`\mathrm{\Omega }=1`$ It is tempting to believe that $`\mathrm{\Omega }_\mathrm{m}`$ is unity. If it is not unity, one has to fine-tune the initial curvature to one part in $`10^{30}`$. Moreover inflationary models generally predict that $`\mathrm{\Omega }`$ is unity. However the evidence in favor of low $`\mathrm{\Omega }_\mathrm{m}`$, and specifically $`\mathrm{\Omega }_\mathrm{m}\approx 0.3`$, is mounting. The most direct probe arises from counting rich galaxy clusters, both locally and as a function of redshift. The direct prediction of $`\mathrm{\Omega }_\mathrm{m}=1`$ is that there should be a higher-than-observed local density of clusters, and strong evolution in number with redshift that is not seen. However this conclusion has recently been disputed. An indirect argument comes from studies of Type Ia supernovae, which provide strong evidence for acceleration. This is most simply interpreted in terms of a positive cosmological constant. The SN Ia data actually measure $`\mathrm{\Omega }_\mathrm{\Lambda }-\mathrm{\Omega }_\mathrm{m}`$. Combined with direct measures of $`\mathrm{\Omega }_\mathrm{m}`$, both from galaxy peculiar velocities and from clusters, one infers that $`\mathrm{\Omega }_\mathrm{\Lambda }\approx 0.7`$. Hence flatness is likely, and certainly well within observational uncertainties. Further evidence for the universe being spatially flat comes from the measurement of the location of the first acoustic peak in the cosmic microwave background anisotropy spectrum. The location reflects the angular size subtended by the horizon at last scattering, and has Fourier harmonic $`\ell \approx 220\mathrm{\Omega }^{-1/2}`$. Current data require $`\mathrm{\Omega }\stackrel{>}{}0.4`$, where $`\mathrm{\Omega }=\mathrm{\Omega }_\mathrm{m}+\mathrm{\Omega }_\mathrm{\Lambda }`$. Some possible pitfalls in this conclusion are that unbiased cluster surveys have yet to be completed. Use of wide field weak lensing maps will go a long way towards obtaining a definitive rich cluster sample. 
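As a worked example of the acoustic-peak diagnostic quoted above, a flat universe places the first peak at $`\ell \approx 220`$, whereas

$$\mathrm{\Omega }=0.4\mathrm{gives}\ell \approx 220/\sqrt{0.4}\approx 350,$$

so the peak location discriminates cleanly between flat and strongly open models.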
There is no accepted theory for Type Ia supernovae, and it is possible that evolutionary effects could conspire to produce a dimming that would mimic the effects of acceleration, at least to $`z\sim 1`$. Utilization of supernovae at $`z>1`$ will eventually help distinguish evolutionary dimming or gray dust, the effects of which should be stronger at earlier epochs and hence with increasing $`z`$, from the effect of acceleration, which decreases at earlier epochs, that is, with increasing $`z`$. ## 4 Paradigm 3: Density Fluctuations Originated in Inflation There is an elegant explanation for the origin of the density fluctuations that seeded structure formation by gravitational instability. Quantum fluctuations are imprinted on a macroscopic scale with a nearly scale-invariant spectral distribution of amplitudes, defined by constant amplitude density fluctuations at horizon crossing. This leads to a bottom-up formation sequence as the smallest subhorizon scales acquire larger amplitudes and are the first to go nonlinear. One can compare the predicted linear fluctuations over scales $`\stackrel{>}{}10`$ Mpc with observations via microwave background fluctuations and galaxy number count fluctuations. $`\delta T/T`$ measures $`\delta \rho /\rho `$ at last scattering over scales from $`\sim 100`$ Mpc up to the present horizon. Temperature fluctuations on smaller scales are progressively damped by radiative diffusion, but a signal is detectable to an angular scale of $`\sim 10^{\prime }`$, equivalent to $`\sim 20`$ Mpc. The conversion from $`\delta T/T`$ to $`\delta \rho /\rho `$ is model-dependent, but can be performed once the transfer function is specified. At these high redshifts, one is well within the linear regime, and if the fluctuations are Gaussian, one can reconstruct the density fluctuation power spectrum. Deep galaxy surveys yield galaxy number count fluctuations, which are subject to an unknown bias between luminous and dark matter. Moreover, all three-dimensional surveys necessarily utilize redshift space. Conversion from redshift space to real space is straightforward if the peculiar velocity field is specified. One normally assumes spherical symmetry and radial motions on large scales, and isotropic motions on scales where virialization has occurred, with an appropriate transition between the linear and nonlinear regimes. On the virialization scale, collapse by of order a factor of 2 has occurred in the absence of dissipation, and correction for density compression must also be incorporated via interpolation or preferably via simulations. Comparison of models with data is satisfactory only if the detailed shape of the power spectrum is ignored. A two-parameter fit, via a normalisation at $`8h^{-1}`$ Mpc and a single shape parameter $`\mathrm{\Gamma }\equiv \mathrm{\Omega }h`$, is often used. For example, as defined below, $`\sigma _8\equiv (\delta \rho /\rho )_{\mathrm{rms}}/(\delta n_\mathrm{g}/n_\mathrm{g})_{\mathrm{rms}},`$ as evaluated at $`8h^{-1}`$ Mpc, equals unity for unbiased dark matter. COBE normalisation of standard cold dark matter requires $`\sigma _8\approx 1`$ but the cluster abundance requires $`\sigma _8\approx 0.6`$. The shape parameter is $`\mathrm{\Omega }h=0.5`$ for standard cold dark matter, but $`\mathrm{\Omega }h\approx 0.3`$ is favoured for an open universe. One can fit a model to the data with $`\sigma _8\approx 0.6`$ and $`\mathrm{\Omega }h\approx 0.3`$. However detailed comparison of models and observations reveals that there is no satisfactory fit to the power spectrum shape for an acceptable class of models. 
There is an excess of large-scale power near 100 Mpc. This is mostly manifested in the APM galaxy and cluster surveys, but is also apparent in the Las Campanas redshift survey. ## 5 Paradigm 4: Galaxy Rotation Curves are Explained by Halos of Cold Dark Matter Galaxy halos of cold dark matter acquire a universal density profile. This yields a flat rotation curve over a substantial range of radius, and gives an excellent fit to observational data on massive galaxy rotation curves. There is a central density cusp ($`\propto 1/r`$) which in normal galaxies is embedded in a baryonic disk, the inner galaxy being baryon-dominated. Low surface brightness dwarf spiral galaxies provide a laboratory where one can study dark matter at all radii: even the central regions are dark matter-dominated. One finds that there is a soft, uniform density dark matter core in these dwarf galaxies. It is still controversial whether the CDM theory can reproduce soft cores in dwarf galaxies: at least one group finds in high resolution simulations that the core profiles are even steeper than $`r^{-1}`$, and have not converged. Disk sizes provide an even more stringent constraint on theoretical models. Indeed disk scale lengths cannot be explained. The difficulty lies in the fact that if angular momentum is conserved as the baryons contract within the dark halos, approximately the appropriate amount of angular momentum is acquired by tidal torques between neighbouring density fluctuations to yield correct disk sizes. However, simulations fail to confirm this picture. In practice, cold dark matter and the associated baryons are so clumpy that massive clumps fall into the center via dynamical friction and angular momentum is transferred outwards. Disk torquing by dark matter clumps also plays a role. The result is that the final baryonic disks are far too small. The resolution presumably lies in gas heating associated with injection of energy into the gas via supernovae once the first massive stars have formed. ## 6 Paradigm 5: Hierarchical Merging Accounts for the Luminosity Function and the Tully-Fisher Relation Galaxies form by a succession of mergers of cold dark matter halos, the baryons dissipating and forming a dense core. Isolated infall plausibly results in disk formation. Disk merging concentrates the gas into a dense spheroid. The transition from linear theory to formation of self-gravitating clouds occurs at an overdensity of about $`\delta _{\mathrm{crit}}\approx 200`$. A simple ansatz due to Press and Schechter yields the mass function of newly nonlinear objects $$\frac{dN}{dM}\propto M^{-2}\mathrm{exp}\left[-\delta _{\mathrm{cr}}^2/(\delta \rho /\rho )^2(M,t)\right],$$ where $`\delta ^2\equiv (\delta \rho /\rho )^2(M,t)`$ is the variance in the density fluctuations. The variance, normalised to its value $`\delta _8`$ at $`8h^{-1}\mathrm{Mpc}`$, is given by $$\delta =\delta _8(R/8h^{-1}\mathrm{Mpc})^{-\frac{n+3}{2}}(1+z)^{-1},$$ where $`n\approx -1`$ on cluster scales but $`n\approx -2`$ on galaxy scales, and $`M=10^{15}\mathrm{\Omega }h^{-1}(R/8h^{-1}\mathrm{Mpc})^3\mathrm{M}_{\odot }`$. Of course the luminosity function rather than the mass function is actually observed. We define $`\sigma \equiv \delta /\delta _\mathrm{g}`$, where $`\delta _\mathrm{g}`$ is the variance in the galaxy counts. On cluster scales, one finds that $`\sigma _8\approx 0.6(\pm 0.1)`$ yields the observed density of clusters if $`\mathrm{\Omega }=1`$. More generally, $`\sigma _8`$ scales as $`\mathrm{\Omega }^{-0.6}`$. 
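The following short sketch (our illustration of the above scalings, not a calculation from the paper) evaluates the Press-Schechter exponential cutoff. Note that the $`\delta _{\mathrm{cr}}`$ in the exponential is the linear-theory collapse threshold, for which we adopt the conventional value 1.69 as an assumption, while the overdensity of $`\approx 200`$ quoted above refers to the nonlinear, collapsed objects.

```python
import numpy as np

delta_cr = 1.69        # linear collapse threshold (assumed standard value)
delta_8 = 0.6          # amplitude at 8 Mpc/h, as quoted for Omega = 1
n_eff = -1.0           # effective spectral index on cluster scales
Omega, h, z = 1.0, 0.5, 0.0

def delta_rms(M):
    # Invert M = 1e15 * Omega / h * (R / 8 Mpc/h)^3 Msun for R, then apply
    # delta = delta_8 * (R / 8 Mpc/h)^(-(n+3)/2) / (1 + z)
    R_over_8 = (M / (1e15 * Omega / h)) ** (1.0 / 3.0)
    return delta_8 * R_over_8 ** (-(n_eff + 3.0) / 2.0) / (1.0 + z)

def mass_function_shape(M):
    # dN/dM up to an overall normalization: M^-2 times the exponential cutoff
    return M ** -2.0 * np.exp(-delta_cr**2 / delta_rms(M) ** 2)

for M in [1e13, 1e14, 1e15, 1e16]:
    print(f"M = {M:.0e} Msun: delta = {delta_rms(M):.2f}, "
          f"dN/dM (arb.) = {mass_function_shape(M):.2e}")
```

The steep drop of the exponential factor above the mass where $`\delta (M)\sim \delta _{\mathrm{cr}}`$ is what makes cluster counts such a sensitive probe of the normalisation.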
A larger $`\sigma `$ is required for a given number density of objects in order to account for the reduced growth in $`\delta `$ as $`\mathrm{\Omega }`$ is decreased below unity. To match the observed luminosity function and predicted mass function requires specification both of $`\sigma _8`$ and of the mass-to-light ratio. Much of the dark mass is in objects that were the first to go nonlinear, as well as in the objects presently going nonlinear. Hence one crudely expects that $`M/L\approx 400h`$, as measured in rich clusters. The global value of $`M/L`$ is $`M/L\approx 1500\mathrm{\Omega }h`$, and happens to coincide with the mass-to-luminosity ratio measured for rich clusters if $`\mathrm{\Omega }\approx 0.4`$. This suggests that these clusters may provide a fair sample of the universe. Even if most dwarfs do not survive, because of subsequent merging, the relic dwarfs are expected to have high $`M/L`$. Later generations of galaxies should have undergone segregation of baryons, because of dissipation, and the resulting $`M/L`$ is reduced. Many of the first dwarfs are disrupted to form the halos of massive galaxies. The predicted high $`M/L`$ (of order 100) is consistent with observations, both of galaxy halos and of the lowest mass dwarfs (to within a factor of $`\sim 2`$). However it is the detailed measurement of $`M/L`$ that leads to a possible problem. One has to normalise $`M/L`$ by specifying the mass-to-light ratio of luminous galaxies. The observed luminosity function can be written as $$\frac{dN}{dL}\propto L^{-\alpha }\mathrm{exp}(-L/L_{\ast })$$ where $`\alpha \approx `$ 1 – 1.5, depending on the selection criterion, and $`L_{\ast }\approx 10^{10}h^{-2}\mathrm{L}_{\odot }`$. Matching to the predicted mass function specifies $`M/L`$ for $`L_{\ast }`$ galaxies, as well as the slope of the luminosity function. One forces a fit to $`\alpha `$ by invoking star formation-induced feedback and baryonic loss. This preferentially reduces the number of low mass galaxies. A typical prescription is that the retained baryonic fraction is given by $$f_\mathrm{B}=(v_c/v_{\ast })^2,$$ where $`v_c`$ is the disk circular velocity. Dwarfs are preferentially disrupted by winds. In this way one can fit $`\alpha `$. There is no longer any freedom in the luminous galaxy parameters. Potential difficulties arise as follows. Simulations of mass loss from dwarf galaxies suggest that supernova ejecta may contribute to the wind but leave much of the interstellar gas bound to the galaxies. This would be a serious problem, as one relies on redistribution of the baryonic reservoir to form massive galaxies. Another problem arises with the Tully-Fisher relation. This is the measured relation, approximately $`L\propto V_{\mathrm{rot}}^\beta `$, between galaxy luminosity and maximum rotational velocity. In effect, the Tully-Fisher relation offers the prescription for $`M/L`$ within the luminous part of the galaxy, since the virial theorem requires $$L\propto V_{\mathrm{rot}}^4G^{-2}\mu _L^{-1}(L/M)^2$$ where $`\mu _L`$ is the surface brightness of the galaxy. Since $`\mu _L`$ has a narrow dispersion for most disk galaxies, the Tully-Fisher relation, where $`\beta \approx 3`$ is measured in the $`I`$ band and $`\beta \approx 4`$ is appropriate to the near infrared, effectively constrains $`M/L`$. The normalization of the Tully-Fisher relation requires $`M/L\approx 5h`$ for early-type spirals, as is observed directly from their rotation curves within their half-light radii. 
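The virial scaling quoted above follows in two steps: with $`V_{\mathrm{rot}}^2\sim GM/R`$ and $`L=\mu _LR^2`$, eliminating $`R=(L/\mu _L)^{1/2}`$ gives

$$V_{\mathrm{rot}}^4\sim \frac{G^2M^2}{R^2}=G^2\mu _LL\left(\frac{M}{L}\right)^2,$$

which rearranges to $`L\propto V_{\mathrm{rot}}^4G^{-2}\mu _L^{-1}(L/M)^2`$; at fixed surface brightness, the zero point of the relation is therefore set entirely by $`M/L`$.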
However, simulations of hierarchical clustering, which incorporate baryonic cooling and star formation with a prescription designed to reproduce the luminosity function, give too high a normalization for $`M/L`$ in the predicted Tully-Fisher relation: at a given luminosity the rotational velocity is too high. Moreover, the efficient early star formation required in order to fit the luminosity function requires the Tully-Fisher normalisation to change with redshift: galaxies are predicted to be brighter by about a magnitude at a given rotation velocity at $`z\sim 1`$, and this exceeds the observed offset. Resolution of the Tully-Fisher normalization remains controversial. ## 7 Paradigm 6: The Bulk of the Stars Formed After $`z=2`$ Identification of the Lyman break galaxies, by using the 912 Å discontinuity in predicted spectra as a broad band redshift indicator, has revolutionized our knowledge of early star formation. Current samples of high redshift star-forming galaxies, chosen in a relatively unbiased manner, contain $`\sim 1000`$ galaxies at $`z\sim 3`$ and $`\sim 100`$ galaxies at $`z\sim 4`$. The volume of the universe involved is known, and one can therefore compute the comoving luminosity density. Since the galaxies are selected in the rest-frame UV, one can convert luminosity density to massive star formation rate. One uncertainty is the correction for dust extinction, but this is mostly resolved by measurement of the galaxy spectra. If, say, a Miller-Scalo initial stellar mass function is adopted, one concludes that the star formation rate per unit volume rose rapidly between the present epoch and redshift unity by a factor of about 10. Beyond redshift one, the star formation rate remains approximately constant, to $`z>4`$. Moreover the median star formation rate per galaxy is high, around 30 $`\mathrm{M}_{\odot }`$ per year, the star-forming galaxies are mostly compact, and strong clustering is found. One interpretation of the data is that most stars formed late, because of the short cosmic time available at high redshift, and that most of the Lyman-break galaxies are massive, and hence clustered, objects that are probably undergoing spheroid formation. An alternative view is that the clustering is due to merger-induced starbursts of low mass galaxies within massive galaxy halos. Reconciliation of either interpretation with hierarchical clustering theory requires a low $`\mathrm{\Omega }`$ universe, especially in the former case, and a detailed prescription for galaxy star formation. The rapid rise in the number of star-forming galaxies at low redshift is especially challenging if $`\mathrm{\Omega }`$ is low, since galaxy clustering reveals little or no evolution at $`z\stackrel{<}{}1`$, as measured by cluster abundances, and both massive disk sizes and the Tully-Fisher relation show little change to $`z\sim 1`$. One interesting suggestion is that a new population of blue compact, star-forming galaxies is responsible for the evolution in the star formation rate density of the universe. ## 8 Paradigm 7: Galaxy Spheroids Formed Via Mergers Galaxy mergers are recognized as the triggers of nearby starbursts, especially the ultraluminous far infrared-selected galaxies. These systems are powered in large part by star formation rather than by an embedded AGN, as confirmed by far infrared spectroscopy, and have star formation rates of 100 or even 1,000 $`\mathrm{M}_{\odot }`$ per year. 
Near infrared mapping reveals de Vaucouleurs profiles, and CO mapping reveals a central cold disk or ring with $`\sim 10^{10}`$ $`\mathrm{M}_{\odot }`$ of molecular gas within a few hundred parsecs. Can one generalize from the rare nearby examples that ellipticals, and more generally spheroids, formed via merger-induced starbursts? Supporting evidence comes from three distinct observations of galaxies, or of their emission, presumed to arise at $`z>1`$, all of which require a component of star-forming galaxies that is sparse locally. Far IR counts by ISO at 175 $`\mu `$m and submillimeter counts by SCUBA at 850 $`\mu `$m require a population of IR-emitting objects that have starburst rather than normal disk infrared spectra. Moreover, identification of SCUBA objects demonstrates that typical redshifts are one or larger, but mostly below 2. A powerful indirect argument has emerged from modelling of the diffuse far infrared background radiation. This amounts to $`\nu i_\nu \approx 20`$ nW/m<sup>2</sup>sr, and exceeds the diffuse optical background light of about 10 nW/m<sup>2</sup>sr that is inferred from deep HST counts. The local population of galaxies, evolved backwards in time, fails to account for the diffuse infrared light if one only considers disk galaxies, where the star formation history is known from considerations of their dynamical evolution. The starburst population invoked to account for the FIR counts can account for the diffuse infrared background radiation. If this is the case, one expects a non-negligible contribution near 1 mm wavelength from ultraluminous FIR galaxies to the diffuse background radiation. For example, the predicted FIR flux peaks at $`\sim 400\mu `$m if the mergers occur at $`z\stackrel{<}{}3`$. The extrapolation to longer wavelengths tracks the emissivity, or decreases roughly as $`\lambda ^{-3}`$. Hence there should be a contribution at 1 mm of order 1 nW/m<sup>2</sup>sr, which may be compared with the CMB flux of $`\sim 2000`$ nW/m<sup>2</sup>sr. One can measure fluctuations of $`\delta T/T\sim 10^{-6}`$, and one could therefore be sensitive to a population of $`\sim 10^6`$ ultraluminous FIR sources at high $`z`$. The inferred surface density ($`\sim 20`$ per square degree) is comparable to the level of current SCUBA detections. Hence CMB fluctuations on an angular scale of $`\sim 10^{\prime }`$ near the CMB peak could be generated by the sources responsible for the diffuse FIR background. Moreover these are rare and massive galaxies, and hence are expected to have a large correlation length that should give an imprint on degree scales. One can evidently reconcile submillimeter counts, the cosmic star formation history and the far infrared background together with formation of disks and spheroids, provided that a substantial part of spheroid formation is dust-shrouded. A difficulty that arises is the following: where are the precursors of current epoch ellipticals? A few are seen at $`z<5`$ but are too sparse in number to account for the younger counterparts of local ellipticals. Dust shrouding until after the A stars have faded ($`\sim 2\times 10^9`$ yr) would help. Other options are that the young ellipticals are indeed present but disguised via ongoing star formation activity, and mostly form at $`z>5`$, or else possess an IMF deficient in massive stars. ## 9 Conclusions Cosmological model-building has made impressive advances in the past year. However much of this rests on supernovae being standard candles. This is a demanding requirement, given that we lack complete models for supernovae. 
Consider a Type Ia supernova, for which one popular model consists of a close pair of white dwarfs. We do not know a priori whether a pair of merging white dwarfs will explode or not, or will self-destruct or leave a neutron star relic. Other models involve mass transfer onto a white dwarf by an evolving close companion: again, we do not know the outcome, whether the endpoint is violently explosive or mildly quiescent. No doubt some subset of accreting or merging white dwarfs are SNIa, but we do not know how to select this subset, nor how evolution of the parent system would affect the outcome in the early universe. One of the largest uncertainties in interpreting the SCUBA submillimeter sources is the possible role of AGN and quasars in powering the high infrared luminosities. The absence of a hot dust component in some high redshift ultraluminous infrared galaxies (ULIRGs) with CO detections argues for a star formation interpretation of infrared luminosities as high as $`10^{13}\mathrm{L}_{\odot }`$. Observations of far infrared line diagnostics suggest that up to $`\sim 20\%`$ of ULIRGs may be AGN-powered, but nearby examples such as Arp 220 suggest that even in these cases there may be comparable amounts of star formation-induced infrared luminosity. Interpretation of the hard ($`\sim 30`$ keV) x-ray background requires the mostly resolved sources responsible for the background to be self-absorbed AGN surrounded by dusty gas that reemits the absorbed AGN power at far infrared wavelengths, and these can at most account for $`10`$–$`20\%`$ of the diffuse far infrared background. An independent argument is as follows: the correlation of central black holes in nearby galaxies with spheroids ($`M_{bh}\sim 0.005M_{\mathrm{sph}}`$) suggests that, with an accretion efficiency $`f`$ that is expected to be a factor $`10`$–$`30`$ larger than the nuclear burning efficiency for producing infrared emission, the resulting contribution from AGN and quasars to the far infrared background should be $`\sim 15(f/0.03)\%`$ of the contribution from star formation. There are too many unresolved issues in the context of structure formation to be confident that we have converged on the correct prescription for primordial fluctuations in density, nonlinear growth, and cosmological model. And then we must add in the complexities of star formation, poorly understood in the solar neighbourhood, let alone in ultraluminous galaxies at high redshift. One cannot expect the advent of more powerful computers to simply resolve the outstanding problems. Rather it is a matter of coming to grips with improved physical modelling of star-forming galaxies. Phenomenological model building is likely to provide more fruitful returns than brute force simulations, but the data requirements are demanding even on the new generations of very large telescopes. Fluctuation spectra will be measured with various CMB experiments, although disentangling the various parameters of cosmology and structure formation will take time. However I am optimistic that the anticipated influx of new data from optical, infrared, x-ray and radio telescopes will go far towards resolving these uncertainties. It is simply that the journey will be long, with many detours, before we have deciphered the ultimate model of cosmology.

## Acknowledgments

I thank Ana Mourao and Pedro Ferreira for the gracious hospitality provided in Faro.
# Diffusional growth of wetting droplets

## Abstract

The diffusional growth of wetting droplets on the boundary wall of a semi-infinite system is considered in different regions of a first-order wetting phase diagram. In a quasistationary approximation of the concentration field, a general growth equation is established on the basis of a generalized Gibbs-Thomson relation which includes the van der Waals interaction between the droplet and the wall. Asymptotic scaling solutions of these equations are found in the partial-, complete- and pre-wetting regimes.

The physics of wetting phenomena has attracted much interest in recent years, both from experimental and from theoretical points of view. Whereas initially static properties dominated the discussion, the interest has shifted more recently to the dynamics of wetting. In many experimental situations the formation of a wetting layer starts with the nucleation of droplets on the boundary wall of the system. The central question therefore is the temporal evolution of the droplet profile. There are essentially two different types of dynamic behavior of a liquid surface droplet. The first is a spreading process, which e.g. dominates if a droplet of a non-volatile liquid is overheated from below to above a wetting transition point. Such processes are driven by hydrodynamic modes of the liquid, and they have extensively been discussed in the literature. The second mechanism is the phase transformation (condensation or evaporation) between the liquid and the vapor phase of the droplet. This is driven by particle diffusion in the vapor, and is, e.g., the dominating process in the growth of supercritical droplets in a metastable situation. Whereas the diffusional growth of a homogeneous wetting layer has been discussed in the literature, this seems not to be the case for surface droplets. The present paper deals with the diffusional growth of a supercritical droplet from a supersaturated vapor. This process is accompanied by the creation of latent heat, and it will be assumed that heat transport as well as other hydrodynamic modes are fast compared to the diffusion. As a consequence the droplet is isothermal and always has a shape which minimizes its free energy at a given volume. The time dependence of this shape is the main object of interest in this paper. The excess free energy of a wetting film of local thickness $`f(x)`$ on a planar boundary wall of a semi-infinite system can be written in the form

$$\mathcal{F}_h[f]=\int d^2x\left[\frac{\gamma }{2}\left(\nabla f\right)^2+V(f)-hf\right],$$ (1)

where $`\gamma `$ is the interface stiffness, $`h`$ is the difference of the chemical potential from that of the saturated vapor and $`V(f)`$ is an effective interface potential. The field $`h`$ can be expressed by the difference between the vapor concentration $`c`$ and its value $`c_0`$ at saturation, so that in linear order

$$c=c_0\left(1+\mathrm{\Gamma }h\right).$$ (2)

The form of the potential $`V(f)`$ corresponding to a first-order wetting transition is sketched in Fig. 1. There, for temperatures $`T`$ less than the wetting temperature $`T_w`$, the global minimum of $`V(f)`$ is at $`f=f_0`$, whereas for $`T>T_w`$ this minimum becomes metastable in favor of the global minimum at diverging film thickness. For $`f\to \infty `$ we assume $`V\sim f^{1-\sigma }`$, where $`\sigma =3`$ for nonretarded and $`\sigma =4`$ for retarded van der Waals interactions. Homogeneous (i.e. $`f(x)=const`$) minima of the excess free energy $`\mathcal{F}_h[f]`$ (i.e.
the global minima of $`V(f)-hf`$) determine the phase diagram, shown in Fig. 2. In the region $`h>0`$, where the liquid bulk phase is stable, a film of infinite thickness forms on the wall in thermal equilibrium. On the line $`h=0`$, which means bulk coexistence of the liquid and vapor phases, the first-order wetting transition occurs at $`T=T_w`$, where $`f=f_0`$ for $`h=0`$, $`T<T_w`$ (partial wetting), and $`f=\infty `$ for $`T>T_w`$ (complete wetting). From the transition point a prewetting line $`h_p(T)`$ extends into the region $`h<0`$ where the vapor phase is stable in the bulk. This line separates a region (below $`h_p(T)`$) where the wall is covered by a thin film from a region ($`h>h_p(T)`$) where the wall is covered by a thick film. The jump in film thickness along the prewetting line vanishes at the prewetting critical point $`T_{pw}`$. The partial wetting line $`h=0`$, $`T<T_w`$ and the prewetting line $`h_p(T)`$ together form a first order line concerning the wetting properties of the system. If the system is quenched from below to above the first order line (for example by increasing the pressure), the phase transition is initialized by the formation of critical droplets on the wall (provided one stays within the surface spinodal lines shown in Fig. 2). The shape of these droplets is qualitatively different in different regions of the phase diagram. Axisymmetric profiles $`f(r)`$ can be calculated via the saddle point equation

$$\delta \mathcal{F}_h/\delta f(r)=0$$ (3)

with the natural boundary conditions of a droplet profile

$$f^{\prime }(0)=0,\qquad \lim _{r\to \infty }f(r)=f_0.$$ (4)

As illustrated in Fig. 2, this leads to spherical (in the squared gradient approximation of Eq. (1), parabolic) caps in the partial wetting regime, to flat cylindrical droplets (pancakes) in the prewetting regime and to ellipsoid-like droplets in the complete wetting regime. The saddle point $`\delta \mathcal{F}_h/\delta f(r)=0`$ has an unstable growth mode, but the volume-preserving shape fluctuations are stable. Assuming that the volume growth of the droplet is slow, the diffusion in the surrounding concentration field $`c`$ becomes quasistationary and can be approximated by the Laplace equation

$$D\mathrm{\Delta }c=0,$$ (5)

where $`D`$ is the diffusion constant. At far distances from the droplet the concentration field is given by the system concentration $`c_{\infty }(t)`$, which is time-dependent in a supersaturated system ($`h>0`$) because of the phase-separation process in the metastable bulk phase. The normal derivative of the concentration field on the boundary wall of the system vanishes because there is no diffusion flux into the wall, i.e. the Neumann boundary condition

$$D\partial _{\perp }c|_{\text{wall}}=0$$ (6)

has to be fulfilled. To obtain a well-defined diffusion problem the boundary condition on the surface as well as the actual shape of the supercritical droplet need to be specified. Motivated by the slow diffusional growth of the droplet, the concentration field close to the droplet surface is assumed to be in local thermal equilibrium. Therefore the local chemical potential $`h(x)`$ at the droplet surface is given by $`h(x)=\delta \mathcal{F}_0/\delta f(x)`$, which due to (2) corresponds to a concentration

$$c_s(x)=c_0\left(1+\mathrm{\Gamma }\frac{\delta \mathcal{F}_0}{\delta f(x)}\right),$$ (7)

which can be denoted as a generalized Gibbs-Thomson relation for wetting droplets.
The expression $`\delta \mathcal{F}_0/\delta f`$ consists of a term $`\gamma `$ times the local curvature $`K`$ of the droplet interface plus an interaction term $`\partial V/\partial f`$. Neglect of the interaction term reduces (7) to the classical Gibbs-Thomson relation $`c_s=c_0(1+\mathrm{\Lambda }K)`$ with the capillary length $`\mathrm{\Lambda }=\mathrm{\Gamma }\gamma `$. It identifies the concentration at a curved interface as the concentration $`c_0`$ for a flat interface, modified by a linear curvature correction. The assumption of fast hydrodynamic modes (compared with the diffusional growth) implies that the shape of a growing droplet can be calculated by minimizing its free energy under the constraint of a fixed droplet volume $`\mathrm{\Omega }(t)`$. Technically, this variational calculation leads again to Eq. (3) with the boundary conditions (4), but now with $`h`$ in (3) replaced by a function $`h_{\mathrm{\Omega }(t)}`$ which includes a Lagrange parameter corresponding to $`\mathrm{\Omega }(t)`$. Consequently the growing supercritical droplet always looks like a critical droplet at a different, time-dependent chemical potential. With increasing volume $`\mathrm{\Omega }(t)`$ the corresponding field $`h_{\mathrm{\Omega }(t)}`$ approaches the first order line, where eventually the volume of the droplet diverges. In this sense wetting droplets grow along isotherms towards the first order line, as illustrated in Fig. 2. The saddle point equation (3) with the fixed volume constraint is equivalent to $`\delta \mathcal{F}_0/\delta f=h_{\mathrm{\Omega }(t)}`$. Via Eq. (7) this implies that the Dirichlet boundary condition of the constrained equilibrium droplet is given by

$$c_s(t)=c_0\left(1+\mathrm{\Gamma }h_{\mathrm{\Omega }(t)}\right)$$ (8)

and is therefore independent of $`x`$. Especially in the complete wetting or prewetting case, where the droplets are not spherical, one would expect a nontrivial boundary condition, having the classical Gibbs-Thomson condition in mind. Additionally, corrections due to the potential $`V(f)`$, which determine the shape of the droplets in these regions, have to be taken into account. Nevertheless both effects add up in such a way that Eq. (7) can be written as Eq. (8) for a droplet in a volume-constrained equilibrium, showing that $`c_s`$ is constant along the droplet's surface! Now, the $`x`$-independent Dirichlet boundary condition (8) allows one to use an electrostatic analogy to solve the quasistationary diffusion problem (5)–(7) for the growing droplet. To fulfill the Neumann condition (6), the system (including the droplet) is mirrored at the boundary wall of the system. Then the field $`4\pi Dc`$ is identified with an electric potential, which also obeys the Laplace equation. The normal derivative of the field, i.e. the diffusion flux density on the droplet surface, corresponds to the charge density of a conductor with the shape of the droplet including its mirror image. Consequently, the total volume growth of the droplet corresponds to the total charge, which is given by the capacity $`C`$ of the conductor times the potential difference between the surface and infinity. This ultimately leads to the droplet growth equation

$$\dot{\mathrm{\Omega }}=4\pi DC(t)\left[c_{\infty }(t)-c_s(t)\right]=4\pi D\mathrm{\Gamma }c_0C(t)\left[h(t)-h_{\mathrm{\Omega }(t)}\right]$$ (9)

where $`C`$ depends on the droplet's profile and therefore is implicitly time-dependent.
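As a quick illustration of how Eq. (9) is used in practice, here is a minimal numerical sketch (in Python, assuming numpy/scipy) for the late-stage pancake regime discussed below, where the capacity is that of a flat disk, $`C\propto R\propto \mathrm{\Omega }^{1/2}`$, and the supersaturation $`[h-h_{\mathrm{\Omega }}]`$ tends to a constant; the prefactor $`k`$ is an illustrative lump of all constants, not a value from this paper:

```python
# Sketch: integrate Eq. (9) with C ~ Omega^{1/2} (flat-disk capacity) and a
# constant supersaturation, i.e. dOmega/dt = k * sqrt(Omega).
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0  # lumps 4*pi*D*Gamma*c0*[h - h_Omega] and the disk-capacity prefactor

sol = solve_ivp(lambda t, y: [k * np.sqrt(y[0])], (1.0, 1.0e4), [1.0],
                t_eval=np.geomspace(1.0, 1.0e4, 5))
R = np.sqrt(sol.y[0])  # pancake radius, R ~ Omega^{1/2} at constant height
slope = np.log(R[-1] / R[-2]) / np.log(sol.t[-1] / sol.t[-2])
print(f"late-time exponent d ln R / d ln t = {slope:.3f}")  # -> 1, i.e. R ~ t
```

The printed exponent approaches unity, which is the $`R\propto t`$ law obtained analytically in Eq. (10) below.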
The difference $`\left[h(t)-h_{\mathrm{\Omega }(t)}\right]`$ may be interpreted as the supersaturation of the system with respect to the droplet. Eq. (9) together with Eqs. (1), (3) and (4) allows one to determine self-consistently the growing droplet profile if $`h(t)`$ is known. For a given volume $`\mathrm{\Omega }`$, the droplet profile can be calculated from Eqs. (1), (3) and (4) with a conveniently chosen Lagrange multiplier $`h_\mathrm{\Omega }`$. Then the capacity $`C`$ of the conductor represented by the droplet plus its mirror image is calculated. Insertion of $`C`$, $`h_\mathrm{\Omega }`$ and the chemical potential $`h`$ into Eq. (9) yields the droplet growth rate $`\dot{\mathrm{\Omega }}(t)`$, and integration of (9) eventually determines $`\mathrm{\Omega }(t)`$. In practice, for large droplets, the calculation can be facilitated by the use of scaling properties of critical droplets close to the first-order transition line. At temperatures $`T>T_w`$ the wetting droplets on the wall nucleate either as ellipsoid-like droplets at $`h\geq 0`$ (complete wetting) or as pancake-like droplets at $`h_p(T)<h<0`$ (prewetting). In both cases the droplets grow along an isotherm towards the prewetting line ($`h_{\mathrm{\Omega }(t\to \infty )}\to h_p(T)`$). This means that they eventually become pancake-like droplets with a constant height but diverging radius $`R(t)`$, so that the capacity of large droplets is given by the capacity of a flat disk, $`C(t)\propto R(t)`$. In the case where the initial quench leads to a supersaturated bulk system ($`h>0`$) the bulk will phase separate until it reaches $`h=0`$, whereas for initial values $`h<0`$ the vapor bulk phase is stable and $`h`$ remains constant in time. In either situation the difference $`[h-h_\mathrm{\Omega }]`$ approaches a non-vanishing constant, so that Eq. (9) yields $`\dot{\mathrm{\Omega }}\propto \mathrm{\Omega }^{1/2}`$ or $`\mathrm{\Omega }\propto t^2`$, which implies

$$R\propto t,$$ (10)

and is determined only by the time-dependent capacity, i.e. the increasing diffusive coupling to the environment. Wetting droplets at $`T=T_w`$ in a supersaturated system ($`h>0`$) also are not spherical. Their radius $`R`$ scales as $`R\propto h_{\mathrm{\Omega }}^{-(\sigma +1)/2\sigma }`$, their central height $`F`$ as $`F\propto h_{\mathrm{\Omega }}^{-1/\sigma }`$, and consequently their volume as $`\mathrm{\Omega }\propto h_{\mathrm{\Omega }}^{-(\sigma +2)/\sigma }`$. Therefore, the profile of a growing wetting droplet becomes flatter and approaches a disk with capacity $`C\propto R\propto h_{\mathrm{\Omega }}^{-(\sigma +1)/2\sigma }`$. In a supersaturated system (i.e. $`h>0`$) there are not only wetting droplets on the wall, but also droplets in the bulk. The set of growing bulk droplets reduces the supersaturation in a Lifshitz-Slyozov-Wagner type way as $`h\propto t^{-1/3}`$. With this input the wetting droplet growth equation (9) can be written as

$$\dot{h}_\mathrm{\Omega }\propto -h_{\mathrm{\Omega }}^{\frac{2(\sigma +1)}{\sigma }}h_{\mathrm{\Omega }}^{-\frac{\sigma +1}{2\sigma }}\left[At^{-1/3}-h_\mathrm{\Omega }\right]$$ (11)

for large droplets.
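For the reader's convenience, here is a sketch of the power counting that leads from Eq. (11) to the asymptotic law quoted next (prefactors suppressed). Once $`h_\mathrm{\Omega }`$ decays faster than the bulk supersaturation, $`h_\mathrm{\Omega }\ll At^{-1/3}`$, Eq. (11) reduces to

$$-\dot{h}_\mathrm{\Omega }\propto h_{\mathrm{\Omega }}^p\,t^{-1/3},\qquad p=\frac{2(\sigma +1)}{\sigma }-\frac{\sigma +1}{2\sigma }=\frac{3(\sigma +1)}{2\sigma },$$

and separation of variables gives $`h_{\mathrm{\Omega }}^{1-p}\propto t^{2/3}`$, i.e.

$$h_\mathrm{\Omega }\propto t^{-\frac{2}{3(p-1)}}=t^{-\frac{4\sigma }{3(\sigma +3)}},\qquad p-1=\frac{\sigma +3}{2\sigma }.$$

The starting assumption is self-consistent, since $`4\sigma /(\sigma +3)>1`$ for $`\sigma >1`$.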
This leads to the asymptotic growth law $`h_\mathrm{\Omega }\propto t^{-\frac{4\sigma }{3(\sigma +3)}}`$, which due to the above scaling properties for $`R`$ and $`F`$ implies

$$R\propto t^{\frac{2(\sigma +1)}{3(\sigma +3)}},\qquad F\propto t^{\frac{4}{3(\sigma +3)}}.$$ (12)

Finally, in the partial wetting regime $`T<T_w`$, $`h>0`$, the wetting droplets are spherical caps, and therefore their growth properties are similar to those of bulk droplets, i.e.

$$R\propto t^{1/3}.$$ (13)

The evaluation of the difference $`[h-h_\mathrm{\Omega }]`$ in Eq. (9) can only be done in a theory where diffusional interactions between surface and bulk droplets are taken into account. At late stages partial wetting droplets in systems with a temperature corresponding to a contact angle $`\mathrm{\Theta }>\pi /2`$ will shrink, because $`[h-h_\mathrm{\Omega }]`$ turns negative for each droplet, whereas droplets at a temperature with $`\mathrm{\Theta }<\pi /2`$ will grow. One of the basic ingredients of the present calculation is the Neumann boundary condition (6). It derives from the fact that there is no diffusion flux through the wall. Even if the wall locally is in a non-wet state it always is covered by a film of microscopic thickness $`f_0`$. If somewhere in such a region Eq. (6) were not valid, the film would thicken there and the interface would run out of the microscopic minimum of the interface potential shown in Fig. 1. Due to the generalized Gibbs-Thomson relation (7), which includes a term $`\partial V/\partial f`$, the local concentration on top of the surface would then increase and the film would evaporate until it again reaches the former height $`f_0`$. Thus, up to fluctuations, Eq. (6) will be valid. The calculation of the supercritical droplet shape is based on the assumption of fast hydrodynamic modes compared to the droplet's diffusional growth. This assumption may become questionable if a prewetting droplet becomes very large. However, at this very late stage the coalescence of different droplets will be dominant anyway. I would like to thank R. Bausch and R. Blossey for stimulating discussions, and the Deutsche Forschungsgemeinschaft via SFB 237 “Unordnung und große Fluktuationen” and “Benetzung und Strukturbildung an Grenzflächen” as well as the EU via FMRX-CT 98-0171 “Foam Stability and Wetting Transitions” for financial support.
# Torsional Oscillator Studies of the Superfluidity of ³He in Aerogel

## 1 Introduction

The discovery of superfluidity of ³He in low density aerogel glass in torsional oscillator experiments at Cornell University and its subsequent investigation in NMR experiments at Northwestern University raised many interesting questions; for example:

* Would the transition seen in the torsional oscillator be coincident with the onset of an NMR frequency shift?
* How many superfluid phases are there and what is the phase diagram?
* What is the nature of the pairing?
* Is it possible to explain theoretically the effect of the aerogel on the properties of the superfluid phase(s)?

In an attempt to address these questions we decided to perform simultaneous torsional oscillator and NMR experiments on the same sample. In this paper we report the results of torsional oscillator measurements on aerogel samples with 1% and 2% of solid density. Analysis of the NMR data is still in progress, so we do not report these results in detail here.

## 2 Experimental method

The aerogel was grown within a spherical glass container with outer and inner diameters of order $`10\mathrm{mm}`$ and $`8\mathrm{mm}`$ respectively, as shown in Fig. 1. Two samples have been studied, with nominal densities of 1% and 2% of that of solid glass. The ³He entered the cell via a glass stem with an internal diameter of about $`1\mathrm{mm}`$. In the case of the 1% sample the spherical envelope was completely filled with aerogel, but for the 2% sample a small region of bulk liquid was present near the stem, as indicated in Fig. 1. The glass stem was glued to a beryllium copper capillary with Stycast 2850 GT epoxy; the capillary acted as the fill line and also provided the torsion rod for torsional oscillations of the cell. The torsional oscillations were excited and detected electrostatically using electrodes mounted on the low pass vibration filter shown in Fig. 1. The vibration frequencies for the 1% and 2% samples were of order $`850\mathrm{Hz}`$ and $`970\mathrm{Hz}`$ respectively. A two-phase lock-in amplifier was used to measure the output from the detection electrode; from the in-phase and quadrature signals the resonant frequency and bandwidth of the oscillator were determined. Computer control of the driving frequency and voltage ensured that the oscillator was driven close to its resonant frequency and at a user-specified amplitude. We were also able to simultaneously make transverse NMR measurements on the ³He at frequencies up to about $`200\mathrm{kHz}`$; a full account of the NMR experiments will be given later. Temperature was measured using an LCMN thermometer mounted in a separate tower. The LCMN was calibrated against the transition temperature of bulk ³He; the superfluid transition of the ³He in the thermometer was identified from the change in warming rate of the LCMN. As we were able to detect the bulk superfluid transition in both the thermometer and the glass sphere, we were able to monitor and correct for the small temperature differences between thermometer and aerogel.

## 3 Superfluid transition temperature

Fig. 2 shows the superfluid transition temperature $`T_{\mathrm{ca}}`$ of the ³He in the aerogel at various pressures, relative to the superfluid transition temperature $`T_\mathrm{c}`$ of bulk ³He.
The values of $`T_{\mathrm{ca}}`$ are plotted as functions of the bulk coherence length, which we define as

$$\xi _0=\frac{\hbar v_\mathrm{F}}{2\pi k_\mathrm{B}T_\mathrm{c}}.$$ (1)

For the 2% aerogel the values of $`T_{\mathrm{ca}}`$ were obtained from the torsional oscillator frequency measurements, as described in Sec. 4. For the 1% aerogel the values of $`T_{\mathrm{ca}}`$ were those below which a shift in NMR frequency from the Larmor value was observed; as explained in Sec. 5 it was not possible to use the torsional oscillator frequency to identify the transition reliably at all pressures. To the extent that the aerogel behaves like a dilute impurity which subjects the ³He superfluid to isotropic pair-breaking scattering, the reduction in transition temperature should be given by the following implicit equation

$$\mathrm{ln}\left(\frac{T_{\mathrm{ca}}}{T_\mathrm{c}}\right)=\psi \left(\frac{1}{2}\right)-\psi \left(\frac{1}{2}+\frac{\xi _0T_\mathrm{c}}{lT_{\mathrm{ca}}}\right),$$ (2)

where $`\psi (x)`$ is the digamma function and $`l`$ is the mean free path associated with the scattering. We have adjusted $`l`$ to obtain the best possible fits to the experimental data; the values required were $`3380\mathrm{\AA }`$ and $`2440\mathrm{\AA }`$ for the 1% and 2% aerogel respectively. As has also been observed in other experiments, there is a small but systematic difference between the observed pressure dependence and that predicted by Eq. (2), although this could be evidence for a small pressure dependence of $`l`$. The values of $`l`$ are close to those obtained in a naive calculation for collisions of ³He atoms with silica strands of diameter $`3\mathrm{nm}`$. Such a calculation predicts that $`l`$ should be inversely proportional to the aerogel density; that our fitted values of $`l`$ differ by less than a factor of two probably indicates that the real densities of our aerogel samples differ somewhat from their nominal values. We note that other measurements of $`T_{\mathrm{ca}}`$ for aerogels with nominal density 2% of solid show considerable variation. Our values of $`T_{\mathrm{ca}}`$ lie between those of Ref. and those of Refs. and . One important deduction that can be made from the success of Eq. (2) in describing the depression of the superfluid transition is that the Cooper pairing within the aerogel is of the same type as in bulk, namely p-wave.
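Equation (2) is transcendental in $`T_{\mathrm{ca}}/T_\mathrm{c}`$, but it is trivial to invert numerically. A minimal sketch (in Python, assuming numpy/scipy; the coherence length used in the example is an illustrative value, not one of our data points):

```python
# Sketch: solve the pair-breaking relation, Eq. (2),
#   ln t = psi(1/2) - psi(1/2 + x/t),  t = T_ca/T_c,  x = xi_0/l,
# for t at a given x.
import numpy as np
from scipy.optimize import brentq
from scipy.special import digamma

def tca_over_tc(x):
    f = lambda t: np.log(t) - digamma(0.5) + digamma(0.5 + x / t)
    return brentq(f, 1e-6, 1.0)   # a root exists for x below ~0.14

print(tca_over_tc(200.0 / 2440.0))  # e.g. xi_0 = 200 A with the 2% fit l = 2440 A
```

The suppression grows rapidly with $`\xi _0/l`$, and the root disappears altogether (complete suppression of superfluidity) once $`\xi _0/l`$ exceeds $`e^{\psi (1/2)}\approx 0.14`$.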
## 4 Measurements for 2% aerogel

Fig. 3 shows measurements of bandwidth and resonant frequency of the torsional oscillator for 2% aerogel filled with ³He at $`10\mathrm{bar}`$ pressure; the temperatures, $`T_\mathrm{c}`$ and $`T_{\mathrm{ca}}`$, of the superfluid transitions of the small region of bulk ³He and of the ³He in the aerogel are indicated. Above $`T_{\mathrm{ca}}`$ the temperature dependence is associated entirely with the small region of bulk liquid; the aerogel and the ³He within it behave as though rigidly locked to the motion of the oscillator. The increase in resonant frequency below $`T_{\mathrm{ca}}`$ signifies the decoupling of the superfluid fraction within the aerogel from the motion of the oscillator. The superfluidity of the ³He within the aerogel does not produce any additional dissipation; apart from the small peak in bandwidth just below $`T_{\mathrm{ca}}`$ and scarcely visible increments in dissipation even closer to $`T_{\mathrm{ca}}`$, which are discussed further below, the dissipation is associated entirely with the bulk liquid region. There is no evidence that the oscillator frequency and bandwidth depend significantly on magnetic fields up to 51 Gauss, the value used in most of our NMR measurements. To determine the reduced superfluid density $`\rho _\mathrm{s}/\rho `$ within the aerogel we use

$$\frac{\rho _\mathrm{s}}{\rho }=\frac{\nu (T)-\nu _0(T)}{\nu _\mathrm{e}-\nu _0(0)},$$ (3)

where $`\nu (T)`$ is the measured resonant frequency at temperature $`T`$, as indicated by the solid line on Fig. 3, $`\nu _0(T)`$ is the frequency indicated by the dashed line that would be expected at temperature $`T`$ in the absence of superfluidity in the aerogel, and $`\nu _\mathrm{e}`$ the measured frequency of the cell prior to filling with ³He; $`\nu _0(0)`$ is the frequency to be expected for the case where the ³He within the aerogel is rigidly coupled to the motion of the oscillator but the bulk ³He is completely decoupled. The value of $`\nu _0(0)`$ could be obtained from both the normal state data at sufficiently high temperatures and the data just above $`T_{\mathrm{ca}}`$; in both these cases the viscous penetration depth $`\delta =\sqrt{2\eta /\rho _\mathrm{n}\omega }`$ is small compared to the size of the bulk liquid region, leading to linear relations between bandwidth and resonant frequency, as shown in Fig. 4. Extrapolation of these relationships to zero bandwidth ($`\delta \to 0`$) allows $`\nu _0(0)`$ to be determined. At all pressures the values obtained from the two extrapolations agreed within an experimental uncertainty of about $`10\mathrm{mHz}`$. To obtain the value of $`\nu _0(T)`$ for $`T<T_{\mathrm{ca}}`$, the extrapolation of the bandwidth vs frequency for the $`T>T_{\mathrm{ca}}`$ data was used, as shown by the dashed line on Fig. 4. Since the dissipation was unaffected by the superfluid transition within the aerogel, the measured bandwidth at the temperature concerned allowed us to obtain a value for the ‘unshifted’ resonant frequency. This method ignores possible departures from hydrodynamic behaviour within the bulk region at low temperatures; since the variation of $`\nu _0(T)`$ with temperature is very weak (see the dashed line on Fig. 3), this is unlikely to lead to significant error. The values obtained for $`\nu _0(0)`$ at different pressures are shown in the inset on Fig. 4 as functions of the density $`\rho `$ of bulk ³He at the pressure concerned; the straight line through the data corresponds to that expected for rigid torsional oscillations of a sphere of diameter $`4\mathrm{mm}`$ and density $`0.98\rho `$. The extrapolation of the line at $`\rho =0`$ to a value close to the measured empty cell frequency confirms our belief that the aerogel in the 2% sample moves rigidly with the glass envelope. Values of $`\rho _\mathrm{s}/\rho `$ at different pressures obtained using Eq. (3) are shown in Fig. 5. The behaviour is qualitatively similar to that observed by Porto and Parpia, also for aerogel of nominal 2% of solid density; the data from Ref. at $`29\mathrm{bar}`$ are also shown on Fig. 5. The temperature dependence of $`\rho _\mathrm{s}/\rho `$ is very different to that of bulk ³He. It rises more slowly as $`T`$ decreases through $`T_{\mathrm{ca}}`$ and approaches a value substantially less than unity at the lowest temperatures; this latter feature is more apparent in the data of Ref. than in our data. Our data share with that of Ref.
the character that close to $`T_{\mathrm{ca}}`$ the values of $`\rho _\mathrm{s}/\rho `$ at different pressures can be superimposed to a good approximation by a translation along the temperature axis. For $`T/T_{\mathrm{ca}}>0.75`$, $`\rho _\mathrm{s}/\rho `$ is well fitted by a temperature dependence

$$\frac{\rho _\mathrm{s}}{\rho }=A\left(1-\frac{T}{T_{\mathrm{ca}}}\right)^b.$$ (4)

An example of such a fit is shown in Fig. 6. The values of the exponent $`b`$ at different pressures are given in Fig. 7. Fits to the bulk superfluid density over the same range of reduced temperatures do not fit Eq. (4) well near $`T_\mathrm{c}`$ (where the correct limiting behaviour is $`\rho _\mathrm{s}/\rho \propto (1-T/T_\mathrm{c})`$) and produce significantly smaller values of $`b\approx 1.25`$–$`1.30`$. We note that Porto and Parpia obtained somewhat smaller values of $`b`$ than ours from logarithmic fits ($`\mathrm{ln}(\rho _\mathrm{s}/\rho )`$ vs $`\mathrm{ln}(1-T/T_{\mathrm{ca}})`$); also their values of $`b`$ increased with increasing pressure. We used our fits to Eq. (4) as an objective way of determining the values of $`T_{\mathrm{ca}}`$ for the 2% aerogel; this proved to be a more accurate method than using the NMR frequency shift data. There is no evidence for a systematic difference between the transition temperatures observed by the two experimental methods, as can be seen from Fig. 8, which compares NMR frequency shift data with the value of $`T_{\mathrm{ca}}`$ obtained from the torsional oscillator. Finally in this section we discuss the peak in bandwidth which can be seen just below $`T_{\mathrm{ca}}`$ on Fig. 3. This peak was seen at all pressures above $`7\mathrm{bar}`$ but was largest at high pressures, where a corresponding feature in the resonant frequency produced the small glitches in $`\rho _\mathrm{s}/\rho `$ on Fig. 5. The occurrence of the peak just below $`T_{\mathrm{ca}}`$ suggests a mode crossing of the torsional oscillator frequency with an internal mode of oscillation of the cell which has a frequency increasing from zero as $`\rho _\mathrm{s}`$ becomes finite within the aerogel. A second smaller peak in dissipation was observed even closer to $`T_{\mathrm{ca}}`$; there was evidence also for small additional dissipation between this second peak and $`T_{\mathrm{ca}}`$. These observations suggest the existence of several internal modes, such as might be associated with sound waves in the aerogel. Following McKenna et al., we assume that the normal fluid is rigidly locked to the aerogel. The sound speeds, $`c`$, are then the roots of the equation

$$c^4-c^2(c_1^2+c_2^2)+c_1^2c_2^2+\frac{\rho _\mathrm{a}}{\rho _\mathrm{n}}(c^2-c_\mathrm{a}^2)(c^2-c_4^2)=0,$$ (5)

where $`c_\mathrm{a}`$ is the speed of sound in the aerogel in the absence of the ³He and $`c_1`$, $`c_2`$ and $`c_4`$ are the speeds of first, second and fourth sound respectively. In ³He, $`c_2`$ is very small and can be set to zero in Eq. (5). One of the roots of Eq. (5) goes to zero as $`\rho _\mathrm{s}\to 0`$, and the sound modes in a sphere of radius $`a`$ associated with this root are calculated in appendix A. The two lowest modes occur at angular frequencies $`2.082c/a`$ and $`3.342c/a`$ and lead to mode crossings with the torsional oscillator frequency at values of $`\rho _\mathrm{s}/\rho `$ given by the theoretical lines on Fig. 9. The fit with the experimental values was obtained by adjusting $`c_\mathrm{a}`$; the value used is $`174\mathrm{m}\mathrm{s}^{-1}`$.
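As an illustration of how the theoretical lines on Fig. 9 are constructed, here is a minimal numerical sketch (in Python, assuming numpy/scipy). All parameter values below other than $`c_\mathrm{a}`$ and the oscillator frequency are illustrative assumptions, not our measured values:

```python
# Sketch: slow root of Eq. (5) with c2 = 0, and the superfluid fraction at
# which the (0,1) sound mode (omega = 2.082 c / a) crosses the oscillator.
import numpy as np
from scipy.optimize import brentq

c1, ca = 350.0, 174.0     # first sound in 3He (assumed), aerogel sound speed
rho, rho_a = 100.0, 44.0  # 3He and 2% aerogel densities, kg/m^3 (assumed)
a, f_osc = 4.0e-3, 970.0  # cavity radius (assumed) and oscillator frequency

def slow_mode_freq(rs, ka=2.082):
    c4sq = c1**2 * rs                    # c4^2 = c1^2 rho_s / rho
    r = rho_a / (rho * (1.0 - rs))       # rho_a / rho_n
    # Eq. (5) with c2 = 0 is quadratic in x = c^2:
    A, B, C = 1.0 + r, -(c1**2 + r * (ca**2 + c4sq)), r * ca**2 * c4sq
    x = (-B - np.sqrt(B**2 - 4*A*C)) / (2*A)   # slow root, -> 0 as rs -> 0
    return ka * np.sqrt(x) / (2 * np.pi * a)

rs = brentq(lambda s: slow_mode_freq(s) - f_osc, 1e-6, 0.9)
print(f"(0,1) mode crossing at rho_s/rho = {rs:.3f}")
```

With these numbers the crossing occurs at $`\rho _\mathrm{s}/\rho `$ of order a percent, i.e. very close to $`T_{\mathrm{ca}}`$, consistent with where the bandwidth peak is observed.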
McKenna et al. quote a value of $`c_\mathrm{a}\approx 100\mathrm{m}\mathrm{s}^{-1}`$ for aerogel of 5% of solid density. Our value seems higher than might be expected for 2% aerogel, since $`c_\mathrm{a}`$ is expected to decrease as the density of aerogel decreases; it is possible that there will be variation between different samples of the same nominal density and also that $`c_\mathrm{a}`$ may vary with temperature. It is clear from inspection of Fig. 9 that there is a systematic difference in pressure dependence between the theoretical curves and experimental points. We have no explanation of this, although we note that the experimental values at low pressures are not well determined because the peaks were very small and somewhat broader. We note that the existence of coupling between the torsional oscillations and the sound modes indicates that our experiment does not have perfect rotational symmetry about a vertical axis. We consider also the possibility that the intersecting mode is a Helmholtz resonance, with pressure oscillations within the glass sphere producing oscillatory flow through the fill line. If our spherical container were filled only with ³He, then the geometry of our fill line is such that the Helmholtz frequency would always be more than a factor of two less than that of the torsional oscillator, even when the superfluid fraction is 100%. The introduction of aerogel will decrease the frequency, and hence the intersecting mode is not likely to be a Helmholtz resonance. There is a possibility that the sound modes discussed above could be modified by flow through the fill line. In appendix A we discuss a simple model which takes both the flow through the fill line and the spatial variation of pressure within the aerogel into account. The conclusion is that the fill line impedance is sufficiently high that flow through the fill line is unlikely to have had a significant effect on the sound mode frequencies.

## 5 Measurements for 1% aerogel

Fig. 10 shows the bandwidth and the resonant frequency as functions of temperature at $`15.0\mathrm{bar}`$ pressure for the 1% aerogel. The bulk superfluid transition is clearly seen in both the amplitude and bandwidth, despite the absence of a macroscopic bulk superfluid region inside the sphere. The superfluid transition in the aerogel as determined from the NMR frequency shift is indicated, and corresponds to an upturn in the resonant frequency with decreasing temperature just as for the 2% aerogel (Fig. 3). However, just below $`T_{\mathrm{ca}}`$, the behaviour of the torsional oscillator is dominated by coupling to another resonant mode. We do not believe that this parasitic mode is a sound mode like those observed for the 2% aerogel. Indeed it may not be associated uniquely with the onset of superfluidity of the ³He in the aerogel, since at low pressures the coupling to this mode appears to be evident above $`T_{\mathrm{ca}}`$; it is this coupling above $`T_{\mathrm{ca}}`$ which prevented identification of $`T_{\mathrm{ca}}`$ from the torsional oscillator data at low pressures. The large magnitude of the coupling between the mode and the torsional oscillations, together with the large bandwidth of the torsional oscillator at all temperatures in comparison with the 2% data, suggest that significant shear motions of the 1% aerogel are being excited. Further evidence for this is the apparent discontinuity in the ‘background’ frequency of the torsional oscillator associated with the mode crossing.
This possible discontinuity prevents the deduction of reliable values of superfluid density for the 1% aerogel at lower temperatures, although we are currently searching for a theoretical description of the intersecting mode which might make this possible. The smaller intersecting resonance which can be seen at lower temperatures on Fig. 10 is, we believe, due to a sound mode like those observed for the 2% aerogel. If we ignore the apparent discontinuity in the background frequency of the torsional oscillator mentioned in the previous paragraph, we can estimate a value of $`\rho _\mathrm{s}`$ at which this second resonance occurs and then, by proceeding as for the 2% aerogel, we can estimate the speed of sound $`c_\mathrm{a}`$ in the 1% aerogel. The value obtained is $`55\mathrm{m}\mathrm{s}^{-1}`$ which, in view of our neglect of a possible background frequency discontinuity, must be regarded as an upper limit; the value is less than that for the 2% aerogel, as would be expected. For the 1% aerogel we investigated the effect of the addition of small quantities of ⁴He on the torsional oscillator. About 3% of ⁴He was sufficient to replace the solid He layer on the aerogel surfaces, as indicated by the absence of a Curie-Weiss contribution to the NMR signal strength. The addition of ⁴He caused a dramatic reduction in the size of the parasitic resonances and of the oscillator bandwidth at higher temperatures; coupling to the intersecting resonances was almost completely absent at high pressures. We do not have any explanation for this observation. Although the parasitic resonances were absent, we were still unable to determine the superfluid density within the aerogel because the addition of the ⁴He also introduced a large thermal boundary resistance between the LCMN and the ³He which prevented us from measuring the temperature.

## 6 Conclusions

In this concluding section we return briefly to the questions posed in our introduction. Our experiments show that the superfluid transition of ³He in aerogel, as indicated by the appearance of a finite superfluid density, coincides with the onset of a shift in the NMR frequency. Our torsional oscillator measurements provide no evidence for more than one superfluid phase and do not identify the nature of the pairing, although our observation that the measured superfluid density is independent of applied magnetic fields up to about $`50\mathrm{Gauss}`$ might be interpreted as implying that the superfluid phase is one with an isotropic superfluid density. We are hoping that the completion of our analysis of our NMR measurements will provide further information on the questions of the number of phases and the nature of the pairing. The fact that we measure two different properties on the same sample of aerogel should provide a stringent test for theories purporting to explain the effect of aerogel on the properties of the superfluid. We have benefitted greatly from discussions with Henry Hall. We are grateful to Norbert Mulders, Jongsoo Yoon and Moses Chan for providing the aerogel specimens used in our experiments. This work was supported by EPSRC through Research Grants GR/K59835 and GR/K58234, and by the award of Research Studentships to JJK and PSW.

## Appendix A Sound Modes and Helmholtz Resonance

We consider first the sound modes in a spherical cavity of radius $`a`$ completely filled with aerogel containing ³He.
The small value of $`c_2`$ for ³He means that the effect of temperature gradients within the cavity can be ignored, and the equations of motion which describe the ³He/aerogel combination are then

$$\frac{\partial \rho }{\partial t}=-\nabla \cdot (\rho _\mathrm{s}\mathbf{v}_\mathrm{s}+\rho _\mathrm{n}\mathbf{v}_\mathrm{n}),$$ (6)

$$\frac{\partial \mathbf{v}_\mathrm{s}}{\partial t}=-\frac{1}{\rho }\nabla p,$$ (7)

$$(\rho _\mathrm{a}+\rho _\mathrm{n})\frac{\partial \mathbf{v}_\mathrm{n}}{\partial t}=-\frac{\rho _\mathrm{n}}{\rho }\nabla p-\nabla p_\mathrm{a},$$ (8)

$$\frac{\partial \rho _\mathrm{a}}{\partial t}=-\nabla \cdot (\rho _\mathrm{a}\mathbf{v}_\mathrm{n}),$$ (9)

where $`\rho `$ and $`\rho _\mathrm{a}`$ are the densities of helium and aerogel, $`p`$ is the pressure acting on the helium and $`p_\mathrm{a}`$ is the pressure acting on the aerogel due to its elastic distortion; the motion of the aerogel and normal fluid are assumed to be locked together. Assuming that small variations in $`p`$ and $`p_\mathrm{a}`$ are related to the corresponding densities by $`\delta p=c_1^2\delta \rho `$ and $`\delta p_\mathrm{a}=c_\mathrm{a}^2\delta \rho _\mathrm{a}`$, we obtain from Eqs. (6) to (9) the following equation for small harmonic departures, $`\delta \rho =\rho ^{\prime }\mathrm{exp}(i\omega t)`$, of $`\rho `$ from equilibrium

$$\nabla ^4\rho ^{\prime }+\omega ^2\left(\frac{1}{c_4^2}+\frac{1}{c_\mathrm{a}^2}+\frac{\rho _\mathrm{n}c_1^2}{\rho _\mathrm{a}c_4^2c_\mathrm{a}^2}\right)\nabla ^2\rho ^{\prime }+\omega ^4\frac{(\rho _\mathrm{a}+\rho _\mathrm{n})}{\rho _\mathrm{a}c_4^2c_\mathrm{a}^2}\rho ^{\prime }=0,$$ (10)

where $`c_4^2=c_1^2\rho _\mathrm{s}/\rho `$. The solutions of Eq. (10) appropriate to our spherical geometry are of the form

$$\rho ^{\prime }=\left(Aj_l(k_1r)+Bj_l(k_2r)\right)Y_{lm}(\theta ,\varphi ),$$ (11)

where $`A`$ and $`B`$ are constants of integration, $`j_l(z)`$ is a spherical Bessel function, $`Y_{lm}(\theta ,\varphi )`$ is a spherical harmonic, and we are using the notation of Abramowitz and Stegun; the spherical Bessel functions $`y_l(k_1r)`$ and $`y_l(k_2r)`$ can be excluded from the solution because they diverge as $`r\to 0`$. The wave numbers $`k_1`$ and $`k_2`$ are the roots of

$$k^4-\omega ^2k^2\left(\frac{1}{c_4^2}+\frac{1}{c_\mathrm{a}^2}+\frac{\rho _\mathrm{n}c_1^2}{\rho _\mathrm{a}c_4^2c_\mathrm{a}^2}\right)+\omega ^4\frac{(\rho _\mathrm{a}+\rho _\mathrm{n})}{\rho _\mathrm{a}c_4^2c_\mathrm{a}^2}=0,$$ (12)

and are thus the wavenumbers associated with the sound speeds which satisfy Eq. (5) in the limit $`c_2\to 0`$. The boundary conditions to be applied to the solution are that

$$\mathbf{v}_\mathrm{n}\cdot \widehat{\mathbf{r}}=\mathbf{v}_\mathrm{s}\cdot \widehat{\mathbf{r}}=0\quad \text{at}\ r=a.$$ (13)

The radial components of $`\mathbf{v}_\mathrm{n}`$ and $`\mathbf{v}_\mathrm{s}`$ can be found for solution (11) by using Eqs. (6) to (9). Fortunately the boundary conditions produce no mixing of the terms of different wavenumber, and the modes occur at wavenumbers $`k`$ ($`=k_1`$ or $`k_2`$) which satisfy

$$\left[\frac{\mathrm{d}}{\mathrm{d}r}j_l(kr)\right]_{r=a}=j_{l-1}(ka)-\frac{l+1}{ka}j_l(ka)=0.$$ (14)

The roots are thus characterised by two numbers $`n`$ and $`l`$, where $`n`$ specifies the number of radial nodes (excluding $`r=0`$) in the variation of $`p`$. The values of $`ka`$ ($`n,l`$) for the 7 lowest modes are 2.08157 (0,1), 3.34210 (0,2), 4.49341 (1,0), 4.51411 (0,3), 5.64670 (0,4), 5.94037 (1,1), 6.75645 (0,5).
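These roots are easy to verify; a minimal sketch (in Python, assuming scipy), scanning for sign changes of $`j_l^{\prime }(x)`$ and polishing with a root finder:

```python
# Sketch: zeros of d/dx j_l(x) = j_{l-1}(x) - ((l+1)/x) j_l(x), cf. Eq. (14).
import numpy as np
from scipy.optimize import brentq
from scipy.special import spherical_jn

def djl(x, l):
    return spherical_jn(l, x, derivative=True)

for l in range(6):
    xs = np.linspace(0.1, 7.0, 1400)
    vals = djl(xs, l)
    roots = [brentq(djl, x0, x1, args=(l,))
             for x0, x1, v0, v1 in zip(xs[:-1], xs[1:], vals[:-1], vals[1:])
             if v0 * v1 < 0]
    print(l, [round(r, 5) for r in roots])
```

This reproduces the quoted values to the precision shown (small differences in the last digit reflect rounding).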
We now consider the possibility that the modes may be affected by flow through the fill line. Since a spherical cavity on the end of a cylindrical fill line is difficult to calculate, we consider instead a simplified model: a spherical cavity with a fill line entering at the centre. We take the end of the fill line to be surrounded by a small spherical region of bulk liquid of radius $`b`$. Only modes of spherical symmetry will have a pressure variation at the centre of the sphere. For these modes the general solution of Eq. (10) is

$$\rho ^{\prime }=A_1j_0(k_1r)+B_1y_0(k_1r)+A_2j_0(k_2r)+B_2y_0(k_2r).$$ (15)

Because of the small region of bulk helium and the fill line at the centre of the sphere, it is no longer possible to ignore the spherical Bessel function $`y_0(z)=-\mathrm{cos}(z)/z`$, which diverges as $`z\to 0`$. The boundary conditions to be applied at $`r=a`$ are Eqs. (13) as before, and at the boundary of the bulk liquid region, $`r=b`$, we require that the variations of $`p_\mathrm{a}`$ should vanish and that the ratio of pressure variation $`\delta p=p^{\prime }\mathrm{exp}(i\omega t)`$ to mass flow out of the aerogel $`\dot{M}=4\pi b^2(\rho _\mathrm{s}v_\mathrm{s}+\rho _\mathrm{n}v_\mathrm{n})`$ should be appropriate to the geometry of our fill line

$$\frac{\delta p(b)}{\dot{M}}=\frac{i\omega \rho L}{\rho _{\mathrm{sb}}\sigma },$$ (16)

where $`L/\sigma `$ is the ratio of fill line length to cross sectional area and $`\rho _{\mathrm{sb}}`$ is the superfluid density in the fill line. Eq. (16) follows from the equation of motion

$$\frac{\partial v_\mathrm{s}}{\partial t}=i\omega v_\mathrm{s}=\frac{\delta p(b)}{L\rho },$$ (17)

for the helium within the fill line. Note that we ignore the variation of pressure inside the sphere of radius $`b`$. Applying the boundary conditions to the solution given by Eq. (15) leads to a rather complicated condition for determining the frequency of the spherically symmetric modes. The terms in Eq. (15) of wavenumbers $`k_1`$ and $`k_2`$ are no longer decoupled. We do not discuss the details here, but report the general conclusion that the impedance of our fill line ($`\propto L/\sigma `$) is sufficiently high that the modes under consideration are essentially sound modes with little flow through the fill line.
# T-duality of Large N QCD

Z. Guralnik
University of Pennsylvania
Philadelphia PA, 19104
guralnik@ovrut.hep.upenn.edu

We argue that non-supersymmetric large $`N`$ QCD compactified on $`T^2`$ exhibits properties characteristic of an $`SL(2,Z)`$ T-duality. The Kähler structure on which this $`SL(2,Z)`$ acts is given by $`\frac{m}{N}+i\mathrm{\Lambda }^2A`$, where $`A`$ is the area of the torus, $`m`$ is the ’t Hooft magnetic flux on the torus, and $`\mathrm{\Lambda }^2`$ is the QCD string tension.

1. Introduction

Following ’t Hooft’s discovery that the large N expansion of QCD is an expansion in the genus of Feynman graphs, it has been suspected that large $`N`$ QCD is a string theory in which the string coupling is given by $`\frac{1}{N}`$. This correspondence is best understood for pure $`QCD_2`$. The QCD string in higher dimensions has yet to be constructed, although much progress has been made recently based on the Maldacena conjecture. Assuming the existence of a string description, large $`N`$ QCD will inherit any self-dualities of this description. In this talk we address the question of whether the QCD string, when compactified on a two-torus, has a self T-duality. Such a T-duality would be generated by

$$\tau \to \tau +1,\qquad \tau \to -\overline{\tau },\qquad \tau \to -\frac{1}{\tau }$$ (1.1)

for $`\tau =B+i\mathrm{\Lambda }^2A`$, where $`A`$ is the area of the torus, $`B`$ is a two-form modulus, and $`\mathrm{\Lambda }^2`$ is the string tension. For simplicity we consider only square tori. We shall argue that the two-form modulus is $`\frac{m}{N}`$, where $`m`$ is the ’t Hooft magnetic flux through the torus. This quantity has the desired properties of periodicity and continuity in the large $`N`$ limit. We suspect that pure QCD is not exactly self T-dual, although another theory in the same universality class may be self dual. In two dimensions the partition function of $`QCD_2`$ is invariant under T-duality after a simple modification. In the case of $`QCD_4`$ on $`T^2\times R^2`$, there are qualitative properties consistent with self T-duality. If $`QCD_4`$ on $`T^2\times R^2`$ is T-dual to another theory in the same universality class, there may be computationally useful consequences. By dualizing pure $`QCD_4`$ on a very large torus, one would obtain a $`QCD_2`$-like theory with two adjoint scalars. Such theories have been used as toy models which mimic some of the dynamics of pure $`QCD_4`$. However these models lack a $`U(1)\times U(1)`$ symmetry which could generate two extra dimensions. We shall propose a model which does have such a symmetry in the large $`N`$ limit. This model is obtained by a dimensional reduction of $`QCD_4`$ preserving the $`Z_N\times Z_N`$ global symmetry generated by large gauge transformations on the torus.

2. $`QCD_2`$ and T-duality

Consider pure Euclidean $`QCD_2`$ on a two-torus. The partition function for vanishing ’t Hooft flux is given by

$$Z=\underset{R}{\sum }e^{-g^2AC_2(R)},$$

where $`C_2(R)`$ is the quadratic Casimir in the representation $`R`$. When the ’t Hooft flux $`m`$ is non-vanishing, the partition function is given by

$$Z=\underset{R}{\sum }e^{-g^2AC_2(R)}\frac{\mathrm{Tr}_R(D_m)}{d_R},$$

where $`d_R`$ is the dimension of the representation. $`\mathrm{Tr}_R(D_m)`$ is the trace of the element in the center of $`SU(N)`$ corresponding to the ’t Hooft flux.
For instance, in a representation for which the Young tableau has $`n_R`$ boxes,

$$D_m=e^{2\pi i\frac{m}{N}n_R}.$$

Evaluating the free energy in the planar $`N\to \infty `$ limit, as done in Ref. for vanishing $`m`$, gives

$$F=\mathrm{ln}\left|\frac{e^{2\pi i\frac{\tau }{24}}}{\eta (\tau )}\right|^2$$

where $`\eta `$ is a Dedekind eta function, and

$$\tau =\frac{m}{N}-\frac{\lambda A}{2\pi i},$$

with $`g^2N=\lambda `$. This is not quite invariant under (1.1). However the modified free energy

$$\widetilde{F}=F+\frac{1}{24}\lambda A-\frac{1}{2}\mathrm{ln}(\lambda A)$$

is modular invariant. The additional term proportional to $`A`$ is a local counterterm. However it is not clear what modification of the action could account for the term $`\mathrm{ln}(\lambda A)`$. Nonetheless this term is very simple, so it seems that pure $`QCD_2`$ is almost self T-dual.
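For orientation, the transformation properties of the Dedekind eta function that underlie these statements are the standard ones (textbook facts, not specific to this talk):

$$\eta (\tau +1)=e^{i\pi /12}\eta (\tau ),\qquad \eta (-1/\tau )=\sqrt{-i\tau }\,\eta (\tau ),$$

so that the combination $`\sqrt{\mathrm{Im}\tau }\,|\eta (\tau )|^2`$ is invariant under all of (1.1). Since $`\mathrm{Im}\tau =\lambda A/2\pi `$, this invariant is presumably what the $`\frac{1}{2}\mathrm{ln}(\lambda A)`$ subtraction is tracking.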
3. ’t Hooft flux and two-form moduli

We have seen that under T-duality, the ’t Hooft flux $`m/N`$ behaves like a two-form modulus of a string description. There are several reasons one could have guessed this correspondence between magnetic flux and the string theory two-form. First, $`m/N`$ is periodic and continuous (for $`N\to \infty `$). Second, in $`QCD_2`$ with $`\lambda A\to 0`$, the partition function is an integral over the moduli space of flat connections. The dimension of this space is invariant under $`SL(2,Z)`$ transformations acting on the doublet $`(m,N)`$. Under these transformations $`m/N`$ transforms precisely like the Kähler structure $`\tau `$ of string theory, which for vanishing area is just the two-form modulus. Finally, using the correspondence between D-branes and Yang-Mills theories, one can argue that under the appropriate conditions the magnetic flux is equal to the NS-NS two-form modulus. Consider a system of parallel D-branes stretched between NS5-branes which is described by a theory with a mass gap, such as pure $`SU(N)`$ $`𝒩=1`$ Yang-Mills. Since parallel D-branes generally give rise to a $`U(N)`$ bundle, let us construct a $`U(N)`$ bundle in which the $`U(1)`$ trace degree of freedom is frozen, so that there are no massless $`U(1)`$ degrees of freedom. On a torus the fields are periodic up to $`U(N)`$ gauge transformations, $`U_1`$ and $`U_2`$, which for a $`U(N)`$ bundle satisfy

$$U_1U_2U_1^{\dagger }U_2^{\dagger }=I.$$

The $`U`$’s may be written as products of $`U(1)`$ and $`SU(N)`$ pieces, so that the above equation becomes

$$e^{i\int _{T^2}F_{12}}e^{2\pi i\frac{m}{N}}=I,$$

where $`F_{\mu \nu }`$ is the $`U(1)`$ field strength. If the $`U(1)`$ degree of freedom is frozen, the locally gauge invariant combination $`F_{\mu \nu }-B_{\mu \nu }`$ must vanish. Thus one obtains

$$\frac{m}{N}=\int _{T^2}B.$$ (3.1)

Note this relation was obtained for finite N. We shall argue elsewhere that a periodic potential is generated for $`B`$ when there is a mass gap. In this case one cannot obtain a continuous class of theories on non-commutative tori by varying $`B`$. Having established (3.1), one must still show that $`B`$ behaves like a two-form modulus of the QCD-string as well as the IIA string. This means that $`B`$ should be the imaginary part of the action of a QCD string wrapping the torus. This can be argued by lifting the brane configuration to $`M`$ theory, in which case the IIA string and the QCD-string are homotopic. If a QCD T-duality exists, it is not the same T-duality as in the IIA theory, for a variety of reasons. It can only exist in the $`N\to \infty `$ limit, when $`\frac{m}{N}`$ becomes a continuous parameter. Also, there is no D2-brane charge in the brane construction of QCD, so that after a IIA T-duality the D4-brane charge vanishes. Furthermore the Kähler structure of the QCD-string is different from that of the IIA string because the string tensions are different.

4. $`QCD_4`$ and T-duality

The four dimensional QCD string is much less understood than the two dimensional QCD string. However there are qualitative properties of large N $`QCD_4`$ on $`T^2\times R^2`$ which are consistent with self T-duality. We take the time direction to lie in $`R^2`$. This theory has a $`U(1)\times U(1)`$ translation symmetry on the torus. If the theory is self T-dual it must have another $`U(1)\times U(1)`$ symmetry corresponding to translations on the dual torus. Large gauge transformations on the torus generate a global $`Z_N\times Z_N`$ symmetry, which becomes continuous as $`N\to \infty `$. If this symmetry is a translation symmetry on a dual torus, then eigenstates of this symmetry should have energies proportional to $`1/R_d`$, where $`R_d`$ is the radius of the small dual torus. This is consistent with electric confinement. The eigenstates of $`Z_N\times Z_N`$ transformations carry electric flux, which has energy proportional to $`R=1/(\mathrm{\Lambda }^2R_d)`$, where $`R`$ is the radius of the original torus. (We are considering the case with vanishing magnetic flux.) If the magnetic flux is non-vanishing, then $`\tau \to -1/\tau `$ does not invert the area of the torus. Thus it would not make sense in this case to exchange the winding number of the QCD string, or the electric flux, with the QCD momentum. In string theory the momentum that gets exchanged with winding number under $`\tau \to -1/\tau `$ is $`P_i=p_i-B_{ij}w^j`$. Here $`p_i`$ is the velocity of the string, and $`w^j`$ is the winding number: $`X^i(t,\sigma )=p^it+w^i\sigma +\mathrm{\cdots }`$. Under $`\tau \to \tau +1`$, $`p_i`$ and $`w^j`$ are invariant but $`P_i`$ is shifted. Therefore the quantity in QCD corresponding to $`P_i`$ is $`p_i-\frac{m}{N}ϵ_{ij}e^j`$, where $`p_i`$ is the usual momentum and $`e^i`$ is the electric flux. The term $`\frac{m}{N}ϵ_{ij}e^j`$ has precisely the form of a cross product of electric and magnetic fluxes and can be thought of as the contribution of ’t Hooft fluxes to the momentum.

5. Why there might not be T-duality

If this $`Z_N\times Z_N`$ symmetry is spontaneously broken for a sufficiently small torus then QCD cannot be self T-dual. Shrinking one cycle of the torus while keeping the other fixed would break the $`Z_N`$ associated with the small torus, just as in a finite temperature deconfinement transition. We do not know what happens if both cycles of the torus are shrunk simultaneously, but we cannot rule out the possibility that both $`Z_N`$’s are spontaneously broken. Even so, there may still be some theory in the QCD universality class for which the $`Z_N\times Z_N`$ symmetry is never spontaneously broken. Recent arguments suggest that large N $`QCD_4`$ can be described by a critical string theory in a five dimensional background, whose boundary is the QCD world volume. In this picture, spontaneous breaking of $`Z_N\times Z_N`$ would require a phase transition to a five dimensional target space geometry in which both cycles of the boundary torus are contractible. Perhaps such a transition does not exist.

6. $`QCD_4`$ from two dimensions

If large N QCD on $`T^2\times R^2`$ were self T-dual, pure QCD on $`R^4`$ would be dual to QCD on $`R^2`$ with two adjoint scalars.
In fact such a model has been used to approximate the dynamics of pure QCD in $`4`$ dimensions. In Ref. the spectrum of this two dimensional model was computed by discrete light cone quantization and compared to the glueball spectrum of pure 4-d QCD computed using Monte-Carlo simulation. The degree of numerical accuracy allows only crude comparison; however, the spectra have some qualitative agreement. Perhaps in the $`N\to \infty `$ limit the agreement is more than just qualitative. However the usual models with adjoint scalars are incomplete, since they lack a $`Z_N\times Z_N`$ symmetry. A more careful dimensional reduction would give a non-linear sigma model of the form

$$S_{SU(N)}=\frac{N}{\lambda _{2d}}\int d^2x\,\mathrm{Tr}\left(F_{\mu \nu }^2+\frac{1}{R_s^2}(h_iD_\mu h_i^{\dagger })^2+\frac{1}{R_s^4}[h_2,h_3][h_2^{\dagger },h_3^{\dagger }]\right),$$

where $`R_s`$ is the radius of the cycles of the torus, and $`h_i`$ is an element of $`SU(N)`$. Writing $`h_i=\mathrm{exp}(iR_sX^i)`$ and taking $`R_s\to 0`$ with $`X^i`$ fixed gives the usual naive reduced action. Note that the naive reduced action requires a mass counterterm for $`X^i`$. In terms of the $`h_i`$ fields, such a term would look like $`\frac{1}{R_s^2}\underset{i}{\sum }\mathrm{Tr}h_i`$, which is prohibited by the $`Z_N\times Z_N`$ symmetry. Of course the $`Z_N\times Z_N`$ symmetry might be spontaneously broken for sufficiently small $`R_s`$, in which case the large N limit cannot generate extra dimensions. At tree level there are flat directions which connect vacua related by the $`Z_N`$ symmetry. If these flat directions are not lifted quantum mechanically, as in a supersymmetric version of this theory, then long range fluctuations in two dimensions would prevent spontaneous symmetry breaking at finite $`N`$. Unfortunately this last statement is not always true in the large $`N`$ limit.

7. Conclusion

We have shown that large $`N`$ QCD has properties consistent with the existence of self T-duality when compactified on a torus. It may be that QCD is not really self T-dual, but that some QCD-like theory is.

Acknowledgments

I am thankful to Antal Jevicki, Igor Klebanov, Joao Nunez, Burt Ovrut, Sanjaye Ramgoolam, Washington Taylor, and Edward Witten for enlightening conversations.

References

G. ’t Hooft, “A planar diagram theory for strong interactions,” Nucl. Phys. B72 (1974) 461.
D. Gross, “Two dimensional QCD as a string theory,” Nucl. Phys. B400 (1993) 161, hep-th/9212149.
D. Gross and W. Taylor, “Two dimensional QCD is a string theory,” Nucl. Phys. B400 (1993) 181, hep-th/9301068.
S. Cordes, G. Moore, S. Ramgoolam, “Large N 2-D Yang-Mills theory and topological string theory,” Commun. Math. Phys. 185 (1997) 543-619, hep-th/9402107.
P. Horava, “Topological rigid string theory and two-dimensional QCD,” Nucl. Phys. B463 (1996) 238-286, hep-th/9507060.
A. Polyakov, “String theory and quark confinement,” hep-th/9711002; “Confining strings,” Nucl. Phys. B486 (1997) 23-33, hep-th/9607049.
S. Gubser, I. Klebanov, and A. Polyakov, “Gauge theory correlators from noncritical string theory,” hep-th/9802109.
J. Maldacena, “The large N limit of superconformal field theories and supergravity,” hep-th/9711200.
E. Witten, “Anti-de Sitter space and holography,” hep-th/9802150.
E. Witten, “Anti-de-Sitter space, thermal phase transition, and confinement in gauge theories,” hep-th/9803131.
’t Hooft, "A property of electric and magnetic flux in non-abelian gauge theories," Nucl. Phys. B153 (1979) 141-160. relax R. Rudd, "The string partition function for QCD on the torus," hep-th/9407176. relax Z. Guralnik, ‘‘Duality of large N Yang Mills theory on $`T^2\times R^2`$,’’ e-print Archive: hep-th/9804057 relax S. Dalley, I. Klebanov, String spectrum of $`(1+1)`$ dimensional large $`N`$ QCD with adjoint matter,’’ Phys. Rev. D47 (1993) 2517-2527. relax K. Demeterfi, I. Klebanov, Gyan Bhanot, ‘‘Glueball spectrum in a $`(1+1)`$ dimensional model for QCD,’’ Nucl. Phys.B418 (1994) 15-29, hep-th/9311015. relax F. Antonuccio and S. Dalley, ‘‘ Glueballs from (1+1) dimensional gauge theories with transverse degrees of freedom,’’ Nucl. Phys. B461 (1996) 275-304, hep-ph/9506456. relax B. Rusakov, ‘‘Loop averages and partition functions in U(N) gauge theory on two-dimensional manifolds,’’ Mod.Phys.Lett.A5 (1990) 693-703. relax A. Migdal, ‘‘Recursion equations in gauge theories,’’ Sov.Phys.JETP42 (1975) 413, Zh.Eksp.Teor.Fiz.69 (1975) 810-822. relax E. Witten, ‘‘Two dimensional gauge theories revisited.’’ J. Geom. Phys. 9 (1992) 303-368, hep-th/9204083. relax Z. Guralnik and S. Ramgoolam, "From 0-Branes to Torons," hep-th/9708089. relax E. Witten, ‘‘Branes and the dynamics of QCD,’’ Nucl. Phys.B507 (1997) 658-690, e-print Archive: hep-th/9706109. relax K. Hori and H. Ooguri, ‘‘Strong coupling dynamics of four-dimensional N=1 gauge theories from M theory five-brane.’’ Adv.Theor.Math.Phys.1 (1998) 1-52, hep-th/9706082. relax Z. Guralnik, ‘‘Strings and discrete fluxes of QCD,’’ in preparation. relax A. Connes, M. Douglas, A. Schwarz, "Noncommutative geometry and Matrix theory: compactification on tori," hep-th/9711162. relax M. Douglas and C. Hull, "D-branes and the noncommutative torus," hep-th/9711165.
# On the main equations of electrodynamics

## The Problem

Before the photon was experimentally discovered, classical electrodynamics was, at the same time, a theory of light. The locality and stability of photons led specialists to a perplexity, since solutions of the electrodynamics equations do not possess these properties. For this reason, the system of Maxwell's equations needed to be fixed by replacing it with a nonlinear system. This problem was not solved in those days. A new science, quantum electrodynamics, was expected to save the situation and explain the phenomenon of the photon. But it did not solve the problem, and it could not solve it, for again the same linear equations as in electrodynamics were used. As a result, a photon still remains nowadays an unexplained, even mysterious object, and the theory is similar to the astronomy of Ptolemy, accompanied by obscure philosophical doxies and conjurations. In this work, the author comes back to the problem of the existence of a system of equations that would “admit” a photon. That is to say, the author believes that the quantum theory is also not flawless.

## Solution of the problem

The system we are looking for emerges immediately once we accurately understand the main principles of electrodynamics and find correct answers to the following questions:

* a) what is an electric charge;
* b) what is an electric current;
* c) does an electromagnetic field act on electric charges and currents?

For the reader to more easily accept the author's answers to these questions, consider a very simple example from mechanics.

###### Example.

Let $`\varphi (x)`$, $`-\infty <x<\infty `$, be a sufficiently smooth function, with $`\varphi (x)=0`$ for $`|x|\ge 1`$. Let us cover the graph of the function $`\varphi (x)`$ with a sufficiently wide plate, and cut it along the graph of the function $`\varphi (x)`$. Then let a homogeneous string stretched along the $`x`$-axis be clamped from below and above by the parts of the cut plate, thus making the form of the string repeat the graph of the function $`\varphi (x)`$. Keeping everything still, let us analyze the situation. It is clear that the deflection of the string from the initial condition, $`u(x)`$ ($`=\varphi (x)`$), completely determines the state of the string. Nevertheless, let us introduce another useful characteristic of the string. Let us call it a string charge and set the density of the string charge to be $`\sigma (x)=\partial ^2u/\partial x^2`$. It is clear that:

1. The density of the force with which the lower or upper part of the plate acts on the string is proportional to the density of the string charge (in the theory of the linear string). In particular, in the places where $`\sigma (x)=0`$, the plate can be undercut so that it does not touch the string. The form of the string will remain the same after such a procedure.

2. One can consider the force with which the string acts on the plate, and also one can consider the force with which the plate acts on the string. One can as well consider forces acting inside the string, and forces acting inside the plate. But saying that there are forces acting on the string charge in the considered example is absurd. In this example the forces are connected with the charge; these forces and the charge are interconnected, they accompany one another, they cause one another, but no forces act on the string charge. This is impossible.

### Answers to questions a)–c).

The vacuum is a physical medium.
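To make the string analogy concrete, here is a small numerical sketch (illustrative, not part of the original paper): it discretizes a profile $`u(x)=\varphi (x)`$ vanishing for $`|x|\ge 1`$ and computes the string-charge density $`\sigma =\partial ^2u/\partial x^2`$, assuming a unit tension for the force density.

```python
import numpy as np

# A small numerical sketch of the string-charge analogy (illustrative, not
# from the paper): discretize a profile u(x) = phi(x) vanishing for |x| >= 1
# and compute the string-charge density sigma = d^2 u / dx^2.

x = np.linspace(-1.5, 1.5, 3001)
dx = x[1] - x[0]
u = np.where(np.abs(x) < 1.0, (1.0 - x**2) ** 4, 0.0)   # smooth bump profile

sigma = np.gradient(np.gradient(u, dx), dx)             # sigma = u''

# For the clamped static string the plate's force density is proportional
# to sigma (assumed unit tension T = 1), so the plate can be undercut
# wherever sigma vanishes without changing the string's form.
T = 1.0
f = T * sigma
print("total string charge:", np.trapz(sigma, x))       # integrates to ~0
```

Note that the total string charge integrates to zero, since $`\sigma `$ is a second derivative of a compactly supported profile; this mirrors the zero total charge of the travelling-wave solution discussed below.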
The classical electrodynamics should be regarded first of all as a continuous theory of this medium. The constants $`\epsilon _0`$ and $`\mu _0`$ known in electrodynamics are characteristics of this medium. The fields $`𝐄`$ and $`𝐁`$ are also main characteristics of the state of this medium. According to the classical formula $`\rho =\epsilon _0\mathrm{div}𝐄`$, the author claims that the electric charge is only a special characteristic of the state of the vacuum-medium (together with the main characteristics, the vectors $`𝐄`$ and $`𝐁`$). According to the classical formula $`𝐣=\mu _0^{-1}(\mathrm{rot}𝐁-c^{-2}\dot{𝐄})`$, the author claims that the electric current is nothing more than another special characteristic of the vacuum-medium (together with the main characteristics $`𝐄`$ and $`𝐁`$, and the characteristic $`\rho `$). According to the formula $`𝐟=\rho 𝐄+[𝐣,𝐁]`$, it is accepted to think that the electromagnetic field exerts a force on currents and charges. The author claims that there is nothing of the sort. There is not a single experiment where one can observe an action of any force on an electric charge. And there is not a single experiment where one could observe an action of any force on an electric current. There is not a single physicist who could say what exactly happens when a force acts on a charge or current. In all experiments we only observe an action of a force on certain physical bodies. And what acts on them is not the electromagnetic field but the medium, the vacuum. This is similar to hydromechanics, where the action is exerted not by some characteristics of the liquid (the distribution of speed, pressure, etc.) but by the liquid itself, being in a certain state. Physical bodies on which the vacuum acts exert a counteraction. They must. In the continuous theory, where we are dealing not with forces but with their densities, it is necessary and sufficient to consider an interaction of interpenetrating mediums. The vacuum can interact simultaneously with two distinct mediums. This is seen from the formulas $`\rho =\rho ^++\rho ^{-}`$, $`𝐣=\rho ^+𝐯^++\rho ^{-}𝐯^{-}`$. Existence of such mediums is due to the fact that in nature there exist electrons and protons that “from the birth” sit on the vacuum. These particles need not be used in the continuous theory, just as the formula $`H_2O`$ is not used in hydromechanics. The mediums $`M^+`$ and $`M^{-}`$ do not own the charges or currents. They only interact with the vacuum and induce a condition where the characteristics $`\rho `$ and $`𝐣`$ appear. The charges and currents are not a cause but a consequence. The theory of flows of electrons, coils, magnets, capacitors, etc. is a specific part of the theory of electrodynamics, where we are dealing with technical means to act on the vacuum, to control its state. By using recipes of this part of electrodynamics, we prepare a special medium, or the mediums $`M^+`$ and $`M^{-}`$, so as to exert a needed action on the vacuum, to get it into a needed state. It is precisely in this part of electrodynamics where the formulas $`𝐣=\rho ^+𝐯^++\rho ^{-}𝐯^{-}`$ appear that are not included in the main equations. There are no such formulas in the theory of the string; one can touch the string with a finger, but it is more difficult to reach the vacuum. Comparing a string and the vacuum, the formulas $`f=\mu \partial _t^2u+T\partial _x^2u`$ and $`𝐟=\rho 𝐄+[𝐣,𝐁]`$ address the same question: the force of an external action on the medium (the string or vacuum).
If the condition of the string, $`u(x,t)`$, is such that $`f\equiv 0`$, then this means that the string does not interact with the ambient medium; the string is free and self-contained. The condition $`𝐟=0`$ is necessary for the fields $`𝐄`$ and $`𝐁`$ to describe the free vacuum. But since $`𝐟`$, in the general case, is the sum of the forces $`𝐟^+`$ and $`𝐟^{-}`$, the condition $`𝐟=0`$ does not imply that the vacuum is free. The mediums $`M^+`$ and $`M^{-}`$ could “stretch” the vacuum in opposite directions and yield $`𝐟=0`$, but still there will be a nontrivial energy exchange between the mediums. This nontriviality can be eliminated by imposing the condition $`(𝐄,𝐣)=0`$. On the basis of the preceding discussion the author claims that, if the fields $`𝐄`$ and $`𝐁`$ and all other possible additional characteristics of the vacuum condition are such that $`\rho 𝐄+[𝐣,𝐁]=0`$ and $`(𝐄,𝐣)=0`$, then the vacuum is not interconnected with anything; it is free and self-contained.

### Nonlinear system of equations

The system of equations

(1)
$$\begin{array}{c}\dot{𝐁}=-\mathrm{rot}𝐄,\quad \mathrm{div}𝐁=0,\quad \rho =\epsilon _0\mathrm{div}𝐄,\\ c^{-2}\dot{𝐄}+\mu _0𝐣=\mathrm{rot}𝐁,\quad \rho 𝐄+[𝐣,𝐁]=0,\quad (𝐄,𝐣)=0\end{array}$$

is the system of equations of the state of the free vacuum. The author believes that system (1) should replace the following system used in physics:

(2)
$$\dot{𝐁}=-\mathrm{rot}𝐄,\quad \mathrm{div}𝐁=0,\quad \mathrm{div}𝐄=0,\quad \dot{𝐄}=c^2\mathrm{rot}𝐁.$$

## First corollaries

If the field $`𝐄`$ (stationary or not) is sufficiently smooth and vanishing at infinity, and the pair $`\{𝐄;𝐁\equiv 0\}`$ is a solution of system (1), then $`𝐄\equiv 0`$. Indeed, system (1) yields, for $`𝐄`$, the representation $`𝐄=\mathbf{\nabla }\phi `$ and $`\mathrm{\Delta }\phi \,\mathbf{\nabla }\phi =0`$. Whence $`\mathrm{\Delta }\phi =0`$ and $`\mathrm{\Delta }E_i=0`$ in $`R^3`$; a harmonic function vanishing at infinity is identically zero, so $`𝐄\equiv 0`$ if $`|E_i|\to 0`$ at infinity. In the same elementary way one can deduce that there do not exist stationary or nonstationary spherically symmetric states of the vacuum, i.e. solutions of the form $`𝐄=\mathbf{\nabla }e(r,t)`$, $`𝐁=\mathbf{\nabla }b(r,t)`$, $`r^2=x^2+y^2+z^2`$. Solutions of system (2) satisfy system (1). There exist solutions of system (1) that are not solutions of system (2). Let us show this. Let $`a_0(x,y,z)`$ be a sufficiently smooth function on $`R^3`$ with compact support. Denote $`a=a_0(x-ct,y,z)`$ and let $`𝐄=(0,c\partial _ya,c\partial _za)`$, $`𝐁=(0,-\partial _za,\partial _ya)`$. Such $`𝐄`$ and $`𝐁`$ satisfy system (1) but not (2). Let us stress the following properties of the solution $`𝐄`$ and $`𝐁`$:

1. The vectors $`𝐄`$ and $`𝐁`$ and the support of the function $`a_0`$ travel with the velocity of light along the $`x`$-axis without change.

2. The vectors $`𝐄`$ and $`𝐁`$ are orthogonal to the direction of the travel.

3. The characteristic $`\rho `$ for this solution equals $`\epsilon _0c(\partial _y^2a+\partial _z^2a)`$, and, consequently, the full charge transported by the wave is zero.

4. The total energy $`\mathcal{E}`$ and the total momentum $`𝐏`$ of the wave satisfy $`\mathcal{E}=cP`$.

5. It is easy to see that, if the wave meets a similar wave with another direction of travel, one observes an interaction, since there is no superposition for system (1).

### Special example

Take $`a_0=(Ay\mathrm{sin}\omega x+Bz\mathrm{cos}\omega x)\chi (x,y,z)`$, where $`\chi `$ is a sufficiently smooth function equal to zero outside of a compact set $`G_0`$, and $`\chi \equiv 1`$ in a smaller domain $`G_1`$, $`G_1\subset G_0`$. For such $`a_0`$, the solution $`𝐄`$, $`𝐁`$ in the domain $`G_1`$ gives a classical elliptically polarized field.
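As a cross-check on the travelling-wave solution above, the following small script (an illustrative verification added here, not part of the original paper) confirms symbolically that the fields $`𝐄=(0,c\partial _ya,c\partial _za)`$, $`𝐁=(0,-\partial _za,\partial _ya)`$ with $`a=a_0(x-ct,y,z)`$ satisfy $`\rho 𝐄+[𝐣,𝐁]=0`$ and $`(𝐄,𝐣)=0`$, with $`\rho `$ and $`𝐣`$ computed from their defining formulas; a Gaussian pulse stands in for $`a_0`$.

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
c, eps0 = sp.symbols('c epsilon_0', positive=True)
mu0 = 1 / (eps0 * c**2)          # vacuum relation epsilon_0 * mu_0 * c^2 = 1

# Any smooth profile works; a Gaussian pulse stands in for a_0(x - ct, y, z).
a = sp.exp(-((x - c*t)**2 + y**2 + z**2))

E = sp.Matrix([0, c*sp.diff(a, y), c*sp.diff(a, z)])
B = sp.Matrix([0, -sp.diff(a, z), sp.diff(a, y)])

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

rho = eps0 * (sp.diff(E[0], x) + sp.diff(E[1], y) + sp.diff(E[2], z))
j = (curl(B) - E.diff(t) / c**2) / mu0

print(sp.simplify(rho * E + j.cross(B)))   # -> zero vector
print(sp.simplify(E.dot(j)))               # -> 0
print(sp.simplify(B.diff(t) + curl(E)))    # Faraday: dB/dt = -rot E -> zero
```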
### Hypothesis

The photon, considered as a real object, is characterized in the classical electrodynamics by the fields $`𝐄`$ and $`𝐁`$ from the preceding construction. It is a special state of the free vacuum. The diversity of photons is limited by a special class of the functions $`a_0`$.

## Some problems

### 1.

To formulate conditions on the functions $`a_0`$ such that these functions would correspond to real photons. It can happen that the special example of $`a_0`$ considered above serves as a useful remark on this question. Or this is not true. The author does not understand very well how an elliptically polarized wave could manage to go through a stationary plate of a polaroid. At this time the author does not understand this phenomenon at all.

### 2.

The class of functions $`a_0`$ corresponding to real photons will be fairly small. It is not clear in principle whether or not it is technologically possible to produce artificial photons.

### 3.

The majority of problems in electrical and radio technology can be solved by using the classical theory of electromagnetic waves that dissipate at infinity. But as far as the real radiation is concerned, a dominating opinion is that everything consists of photons. System (1) contains both types of solutions, and, hence, there is a suspicion that this is what indeed happens in reality.

### 4.

Let us look for solutions of system (1) in the case where there is a symmetry axis.

#### a)

Let us consider fields of the following form:

$$𝐄=\mathbf{\nabla }f(u),\quad 𝐁=\left(\frac{x}{s}\partial _zg(u)+\frac{y}{s}h(u),\;\frac{y}{s}\partial _zg(u)-\frac{x}{s}h(u),\;-2\partial _sg(u)\right),$$

where $`f(\lambda )`$, $`g(\lambda )`$, $`h(\lambda )`$ are certain functions of one variable, $`u=u(s,z)`$, $`s=x^2+y^2`$. For fields of this kind, we immediately have $`\mathrm{rot}𝐄=0`$, $`\mathrm{div}𝐁=0`$, $`(𝐄,𝐣)=0`$, and

$$[𝐣,𝐁]=-\frac{1}{\mu _0}\left(4g'(u)\partial _s^2g(u)+\frac{1}{s}g'(u)\partial _z^2g(u)+\frac{1}{s}h(u)h'(u)\right)\mathbf{\nabla }u.$$

To obtain this formula, we use that $`\partial _x=2x\partial _s`$, $`\partial _y=2y\partial _s`$, $`\partial _x^2+\partial _y^2=4\partial _s(s\partial _s)`$. As we see, the equation $`\rho 𝐄+[𝐣,𝐁]=0`$ in system (1) means that there is the following relation between the functions $`f`$, $`g`$, $`h`$, $`u`$:

(3)
$$\frac{s}{c^2}\mathrm{\Delta }f(u)\,f'(u)=4s\,g'(u)\partial _s^2g(u)+g'(u)\partial _z^2g(u)+h(u)h'(u).$$

Formally, each collection of functions $`f`$, $`g`$, $`h`$, $`u`$ satisfying equation (3) generates fields $`𝐄`$ and $`𝐁`$ that are solutions of system (1). Let us set in (3) $`f(\lambda )=a_0\lambda `$, $`g(\lambda )=a\lambda `$, $`h(\lambda )=b\lambda `$, where $`a_0,a,b`$ are constants. Equation (3) becomes the linear equation for $`u`$:

(4)
$$\frac{a_0^2}{c^2}s\mathrm{\Delta }u=4a^2s\partial _s^2u+a^2\partial _z^2u+b^2u.$$

Formally, each solution $`u`$ of equation (4) generates a solution of system (1), $`𝐄=a_0\mathbf{\nabla }u`$, $`𝐁=(\frac{ax}{s}\partial _zu+\frac{by}{s}u,\frac{ay}{s}\partial _zu-\frac{bx}{s}u,-2a\partial _su)`$. In particular, the function $`u_k=s^k(s+z^2)^{-2k-1/2}`$ satisfies the equation $`s\mathrm{\Delta }u_k=4k^2u_k`$, and, thus, the corresponding fields $`𝐄^{(k)}=\mathbf{\nabla }u_k`$, $`𝐁^{(k)}=\frac{2k}{cs}(yu_k,-xu_k,0)`$ are a solution of system (1), however, not in the whole space. All these solutions $`\{𝐄^{(k)},𝐁^{(k)}\}`$, $`k\in R^1`$, have a singularity at the origin, and hence will not serve as states of the free vacuum. But these solutions could be of interest if considered as supplementing the well known electromagnetic field of a stationary point charge ($`k=0`$).
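The identity $`s\mathrm{\Delta }u_k=4k^2u_k`$ quoted above is easy to verify symbolically; the following check (illustrative, not from the paper) confirms it with the exponent $`-2k-1/2`$ for a sample value of $`k`$, which for $`k=0`$ reduces $`u_k`$ to the Coulomb potential $`1/r`$.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
k = sp.Rational(3, 4)          # any real k works; a rational value keeps it exact

s = x**2 + y**2
u = s**k * (s + z**2)**(-2*k - sp.Rational(1, 2))

# Three-dimensional Laplacian of u_k.
lap = sp.diff(u, x, 2) + sp.diff(u, y, 2) + sp.diff(u, z, 2)

# s * Laplacian(u_k) - 4 k^2 u_k should simplify to zero.
print(sp.simplify(s * lap - 4 * k**2 * u))
```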
It can happen that there are no solutions of equation (4) that are regular and vanish at infinity as $`s+z^2\to \infty `$. Let us explain this. First of all, if such solutions exist, then there would be sufficiently many of them, since, together with a solution $`u(s,z)`$, the functions $`u(s,z+\lambda )`$, $`\int _a^bu(s,z+\lambda )\phi (\lambda )𝑑\sigma (\lambda )`$, and $`L(\partial _z)u(s,z)`$ would also be solutions, where $`L(\partial _z)`$ is a linear polynomial in $`\partial _z`$ with constant coefficients. Let now $`u_1(s,z)`$ and $`u_2(s,z)`$ be two such solutions, and $`\{𝐄^1,𝐁^1\}`$ and $`\{𝐄^2,𝐁^2\}`$ the corresponding electromagnetic fields. Then the superposition of these fields, $`\{𝐄,𝐁\}=\{𝐄^1+𝐄^2,𝐁^1+𝐁^2\}`$, also satisfies system (1), because $`\{𝐄,𝐁\}`$ is generated by the solution $`(u_1+u_2)`$ of equation (4). As far as particles are concerned, it is apparent that particles 1 and 2 are noninteracting. After this, let us consider the electromagnetic field $`\{\stackrel{~}{𝐄}^2,\stackrel{~}{𝐁}^2\}`$ which is obtained by a translation of the field $`\{𝐄^2,𝐁^2\}`$ in $`R^3`$, but not along the $`z`$-axis. This field will also be a solution of system (1), but it will be generated by a solution $`\stackrel{~}{u}_2(x,y,z)`$ of a linear equation distinct from equation (4). As a result, the superposition $`\{𝐄^1+\stackrel{~}{𝐄}^2,𝐁^1+\stackrel{~}{𝐁}^2\}`$, in general, will not satisfy system (1), and we get that particles “threaded on an axis” do not interact, whereas they become interacting in another position. It seems to the author that such physics is too exotic, and thus we can leave the problem of finding solutions of equation (4) and again look for functions $`f`$, $`g`$, $`h`$, $`u`$ which would satisfy the nonlinear equation (3) and generate reasonable electromagnetic fields.

#### b)

Let a solution of system (1) have the form $`𝐄=(\partial _x\mathrm{\Psi },\partial _y\mathrm{\Psi },\partial _z\mathrm{\Psi }+\dot{\mathrm{\Phi }})`$, $`𝐁=(-\partial _y\mathrm{\Phi },\partial _x\mathrm{\Phi },0)`$, $`\mathrm{\Psi }=\mathrm{\Psi }(s,z,t)`$, $`\mathrm{\Phi }=\mathrm{\Phi }(s,z,t)`$, $`s=x^2+y^2`$. One can check that such $`𝐄`$ and $`𝐁`$ are solutions of system (1) if $`\mathrm{\Psi }`$ and $`\mathrm{\Phi }`$ satisfy the following system:

$$\begin{array}{c}\partial _s\mathrm{\Phi }[4c^2\partial _s(s\partial _s\mathrm{\Phi })-\ddot{\mathrm{\Phi }}-\partial _z\dot{\mathrm{\Psi }}]=\partial _s\mathrm{\Psi }[\mathrm{\Delta }\mathrm{\Psi }+\partial _z\dot{\mathrm{\Phi }}],\\ \partial _s\mathrm{\Phi }[4c^2\partial _z(s\partial _s\mathrm{\Phi })+4s\partial _s\dot{\mathrm{\Psi }}]=(\dot{\mathrm{\Phi }}+\partial _z\mathrm{\Psi })[\mathrm{\Delta }\mathrm{\Psi }+\partial _z\dot{\mathrm{\Phi }}].\end{array}$$

Assuming that $`\mathrm{\Phi }`$ and $`\mathrm{\Psi }`$ are independent of $`t`$ and denoting $`s\partial _s\mathrm{\Phi }=g`$, we get that the pair $`\mathrm{\Psi }`$, $`g`$ satisfies the system

$$4c^2g\partial _sg=s\mathrm{\Delta }\mathrm{\Psi }\,\partial _s\mathrm{\Psi },\quad 4c^2g\partial _zg=s\mathrm{\Delta }\mathrm{\Psi }\,\partial _z\mathrm{\Psi }.$$

Formally, each solution $`g`$, $`\mathrm{\Psi }`$ of this system generates a solution of system (1), $`𝐄=\mathbf{\nabla }\mathrm{\Psi }`$, $`𝐁=(-2ys^{-1}g,2xs^{-1}g,0)`$. For example, such is the pair $`\mathrm{\Psi }`$, $`g=c^{-1}s\partial _s\mathrm{\Psi }`$ for an arbitrary function $`\mathrm{\Psi }`$ independent of $`z`$. The author does not have a more interesting example of a pair $`g`$, $`\mathrm{\Psi }`$, nor a regular solution of equation (3).

### 5.

Talking about the interaction of photon solutions as they meet, we should add that there are no known formulas describing interactions between photon solutions and a stationary electromagnetic field.
Only the superposition of the solution $`𝐄=(0,c\partial _ya,c\partial _za)`$, $`𝐁=(0,-\partial _za,\partial _ya)`$ and a stationary field of the form $`\stackrel{~}{𝐄}=(0,ch_2,ch_3)`$, $`\stackrel{~}{𝐁}=(h_1,-h_3,h_2)`$ yields again a solution of system (1).

## General theory. Necessity.

In the continuous theory, where the force density is used, an interaction between the vacuum and physical bodies can only be an interaction between mediums that interpenetrate each other. In the general case, we will be considering a medium $`M`$ and a medium $`\mathrm{\Phi }`$ that simultaneously fill a certain part of space and have there, and in other parts of the space, the velocities $`v_\alpha ^M(x,y,z,t)`$ and $`v_\alpha ^\mathrm{\Phi }(x,y,z,t)`$, respectively. Let there be a force interaction between the mediums. Let $`f_\alpha ^M`$ be the force density with which the medium $`M`$ acts on the medium $`\mathrm{\Phi }`$, and let $`f_\alpha ^\mathrm{\Phi }`$ be the force density with which the medium $`\mathrm{\Phi }`$ acts on $`M`$. There is no relation $`f_\alpha ^M=-f_\alpha ^\mathrm{\Phi }`$ in the relativistic theory. It is replaced by a more complicated formula obtained by switching from the densities $`f_\alpha ^M`$, $`f_\alpha ^\mathrm{\Phi }`$ to the corresponding four dimensional force densities. The author did not succeed in obtaining a unique transition, hence in the sequel we give two versions of all main formulas. The reason for this is that, in every point where $`f_\alpha ^M`$ and $`f_\alpha ^\mathrm{\Phi }`$ are not equal to zero, there are two velocities $`v_\alpha ^M`$ and $`v_\alpha ^\mathrm{\Phi }`$, instead of one, available to form the fourth component of the four dimensional force density. It turns out that each one of these velocities is capable of controlling the fourth component of the 4-density of either force. To make it less confusing, we preserve the notations $`f_\alpha ^M`$ and $`f_\alpha ^\mathrm{\Phi }`$ for the first version, and assume that the corresponding 4-densities $`f_k^M`$, $`f_k^\mathrm{\Phi }`$ are of the form $`\{f_\alpha ^M,\frac{i}{c}f_\beta ^Mv_\beta ^M\}`$ and $`\{f_\alpha ^\mathrm{\Phi },\frac{i}{c}f_\beta ^\mathrm{\Phi }v_\beta ^\mathrm{\Phi }\}`$.<sup>1</sup> For the second version, $`g_\alpha ^M`$ and $`g_\alpha ^\mathrm{\Phi }`$ will denote the force densities exerted by $`M`$ on $`\mathrm{\Phi }`$ and by $`\mathrm{\Phi }`$ on $`M`$, and the corresponding densities $`g_k^M`$ and $`g_k^\mathrm{\Phi }`$ are of the form $`\{g_\alpha ^M,\frac{i}{c}g_\beta ^Mv_\beta ^\mathrm{\Phi }\}`$ and $`\{g_\alpha ^\mathrm{\Phi },\frac{i}{c}g_\beta ^\mathrm{\Phi }v_\beta ^M\}`$.

<sup>1</sup> Here and in the sequel, the main $`4`$-vector is $`(x_1,x_2,x_3,x_4)=(x,y,z,ict)`$, the $`4`$-velocity is $`V_k=(\gamma v_\alpha ,ic\gamma )`$, and $`\partial _k=\partial /\partial x_k`$.

### The formulas connecting $`f_k^M`$ and $`f_k^\mathrm{\Phi }`$ ($`g_k^M`$ and $`g_k^\mathrm{\Phi }`$)

Denote by $`V_k^M`$ and $`V_k^\mathrm{\Phi }`$ the $`4`$-velocities (fields) of the mediums $`M`$ and $`\mathrm{\Phi }`$, i.e. $`V_\alpha ^M=\gamma ^Mv_\alpha ^M`$, $`V_\alpha ^\mathrm{\Phi }=\gamma ^\mathrm{\Phi }v_\alpha ^\mathrm{\Phi }`$, $`\alpha =1,2,3`$, $`V_4^M=ic\gamma ^M`$, $`V_4^\mathrm{\Phi }=ic\gamma ^\mathrm{\Phi }`$. Denote by $`\bar{v}_\alpha (x,y,z,t)`$ the velocity, in the initial inertial reference frame (IRF), of the new IRF′ in which the mediums $`M`$ and $`\mathrm{\Phi }`$ have opposite velocities, $`(v_\alpha ^M)^{\prime }=-(v_\alpha ^\mathrm{\Phi })^{\prime }`$, at the point $`(x,y,z,t)^{\prime }`$.
Such $`\bar{v}_\alpha `$ is defined by the formula $`\bar{v}_\alpha =(\gamma ^M+\gamma ^\mathrm{\Phi })^{-1}(V_\alpha ^M+V_\alpha ^\mathrm{\Phi })`$; the corresponding $`4`$-velocity $`\bar{V}_k`$ will be $`(\bar{\gamma }\bar{v}_\alpha ,ic\bar{\gamma })`$ with $`\bar{\gamma }=(\gamma ^M+\gamma ^\mathrm{\Phi })(2+2\gamma ^M\gamma ^\mathrm{\Phi }-2\gamma ^M\gamma ^\mathrm{\Phi }c^{-2}v_\alpha ^Mv_\alpha ^\mathrm{\Phi })^{-1/2}`$. It is natural to think that IRF′ has the property that the observed forces of action and counteraction between the mediums at this very point $`(x,y,z,t)^{\prime }`$ differ by sign, i.e.

$$\begin{array}{c}(f_\alpha ^M)^{\prime }=-(f_\alpha ^\mathrm{\Phi })^{\prime },\quad (g_\alpha ^M)^{\prime }=-(g_\alpha ^\mathrm{\Phi })^{\prime },\quad \alpha =1,2,3,\\ (f_4^M)^{\prime }=(f_4^\mathrm{\Phi })^{\prime },\quad (g_4^M)^{\prime }=(g_4^\mathrm{\Phi })^{\prime }.\end{array}$$

We should also add that all the densities with and without the prime are connected by the Lorentz transformation defined by the velocity $`\bar{v}_\alpha `$. All this leads to the following formulas in the initial IRF ($`k,m=1,\dots ,4`$):

(5)
$`f_k^\mathrm{\Phi }=-f_k^M-{\displaystyle \frac{2}{c^2}}f_m^M\bar{V}_m\bar{V}_k,\quad f_k^M\bar{V}_k=f_k^\mathrm{\Phi }\bar{V}_k,`$

(6)
$`g_k^\mathrm{\Phi }=-g_k^M-{\displaystyle \frac{2}{c^2}}g_m^M\bar{V}_m\bar{V}_k,\quad g_k^M\bar{V}_k=g_k^\mathrm{\Phi }\bar{V}_k.`$

The tedious derivation of these formulas is left to the reader.

### The medium energy-momentum tensors

The interacting mediums $`M`$ and $`\mathrm{\Phi }`$ have a domain $`G=G_M\cap G_\mathrm{\Phi }\subset R^3`$ of their mutual existence, as well as regions in $`R^3`$ where they exist by themselves. This fact suggests that one must define the energy and write its conservation law separately for each medium, accounting for the energy exchange and the transformation of energy from one form into another. We thus assume that the variables $`f_4^M`$ and $`f_4^\mathrm{\Phi }`$ (or $`g_4^M`$ and $`g_4^\mathrm{\Phi }`$) together determine both the energy of the medium $`M`$ and the energy of the medium $`\mathrm{\Phi }`$, and the conservation laws have the form:

(7)
$`\dot{W}^M+\mathrm{div}𝐒^M=-k_Mf_\alpha ^Mv_\alpha ^M+k_\mathrm{\Phi }f_\alpha ^\mathrm{\Phi }v_\alpha ^\mathrm{\Phi },`$

(8)
$`\dot{W}^\mathrm{\Phi }+\mathrm{div}𝐒^\mathrm{\Phi }=-k_\mathrm{\Phi }f_\alpha ^\mathrm{\Phi }v_\alpha ^\mathrm{\Phi }+k_Mf_\alpha ^Mv_\alpha ^M,`$

or

(9)
$`\dot{\stackrel{~}{W}}^M+\mathrm{div}\stackrel{~}{𝐒}^M=-\kappa _Mg_\alpha ^Mv_\alpha ^\mathrm{\Phi }+\kappa _\mathrm{\Phi }g_\alpha ^\mathrm{\Phi }v_\alpha ^M,`$

(10)
$`\dot{\stackrel{~}{W}}^\mathrm{\Phi }+\mathrm{div}\stackrel{~}{𝐒}^\mathrm{\Phi }=-\kappa _\mathrm{\Phi }g_\alpha ^\mathrm{\Phi }v_\alpha ^M+\kappa _Mg_\alpha ^Mv_\alpha ^\mathrm{\Phi }.`$

Here $`W^M`$, $`W^\mathrm{\Phi }`$ (or $`\stackrel{~}{W}^M`$, $`\stackrel{~}{W}^\mathrm{\Phi }`$) are the energy densities in $`M`$ and $`\mathrm{\Phi }`$, and $`𝐒^M`$, $`𝐒^\mathrm{\Phi }`$ (or $`\stackrel{~}{𝐒}^M`$, $`\stackrel{~}{𝐒}^\mathrm{\Phi }`$) are the energy fluxes, when the forces $`f_\alpha ^M`$, $`f_\alpha ^\mathrm{\Phi }`$ (or $`g_\alpha ^M`$, $`g_\alpha ^\mathrm{\Phi }`$) operate. The parameters $`k_M`$, $`k_\mathrm{\Phi }`$, $`\kappa _M`$, $`\kappa _\mathrm{\Phi }`$ control the energy exchange between the mediums, i.e. these parameters constitute a quantitative characteristic of the pair of mediums.
We assume that they are all positive and $`k_M+k_\mathrm{\Phi }=\kappa _M+\kappa _\mathrm{\Phi }=1`$. The case where $`k_M=k_\mathrm{\Phi }`$ ($`\kappa _M=\kappa _\mathrm{\Phi }`$) corresponds to a symmetric interaction between the mediums $`M`$ and $`\mathrm{\Phi }`$. Note that each of the equations (7)–(10) is considered in its own domain ($`G_M`$ or $`G_\mathrm{\Phi }`$). Starting with formulas (7)–(10) and using the $`4`$-vectors $`f_k`$ and $`g_k`$, we introduce the tensors $`\tau _{ik}^M`$, $`\tau _{ik}^\mathrm{\Phi }`$, $`\stackrel{~}{\tau }_{ik}^M`$, $`\stackrel{~}{\tau }_{ik}^\mathrm{\Phi }`$ such that the following equations hold (in the corresponding domains):

(11), (12)
$$\partial _k\tau _{ik}^M=k_Mf_i^M-k_\mathrm{\Phi }f_i^\mathrm{\Phi },\quad \partial _k\tau _{ik}^\mathrm{\Phi }=k_\mathrm{\Phi }f_i^\mathrm{\Phi }-k_Mf_i^M,$$

(13), (14)
$$\partial _k\stackrel{~}{\tau }_{ik}^M=\kappa _Mg_i^M-\kappa _\mathrm{\Phi }g_i^\mathrm{\Phi },\quad \partial _k\stackrel{~}{\tau }_{ik}^\mathrm{\Phi }=\kappa _\mathrm{\Phi }g_i^\mathrm{\Phi }-\kappa _Mg_i^M.$$

The signs in these equations are chosen in such a way that $`\tau _{44}^M`$, $`\tau _{44}^\mathrm{\Phi }`$, $`\stackrel{~}{\tau }_{44}^M`$, $`\stackrel{~}{\tau }_{44}^\mathrm{\Phi }`$ could serve in equations (7)–(10) as the energy densities, and the vectors $`ic\tau _{4\alpha }^M`$, $`ic\tau _{4\alpha }^\mathrm{\Phi }`$, $`ic\stackrel{~}{\tau }_{4\alpha }^M`$, $`ic\stackrel{~}{\tau }_{4\alpha }^\mathrm{\Phi }`$ as the energy fluxes. The tensors defined by equations (7)–(14) could be called the energy-momentum tensors of the mediums $`M`$ and $`\mathrm{\Phi }`$. Also the vectors $`ic^{-1}\tau _{\alpha 4}^M`$, $`ic^{-1}\tau _{\alpha 4}^\mathrm{\Phi }`$, $`ic^{-1}\stackrel{~}{\tau }_{\alpha 4}^M`$, $`ic^{-1}\stackrel{~}{\tau }_{\alpha 4}^\mathrm{\Phi }`$ will be the momentum densities of the mediums $`M`$ and $`\mathrm{\Phi }`$. If one of the parameters $`k_M`$, $`k_\mathrm{\Phi }`$, $`\kappa _M`$, $`\kappa _\mathrm{\Phi }`$ equals $`0`$ or $`1`$, then this is a case of a limiting nonsymmetrical interaction between the mediums. In particular, if $`k_M=1`$ and $`k_\mathrm{\Phi }=0`$, we get from (11), (12) that

$$\partial _k\tau _{ik}^MV_i^M=0,\quad \partial _k\tau _{ik}^\mathrm{\Phi }V_i^M=0.$$

The first of these conditions appears, for example, in relativistic hydrodynamics. There $`V_i^M`$ is the field of $`4`$-velocities of the liquid, and the tensor $`\tau _{ik}^M`$ is the energy-momentum tensor of the liquid itself, which interacts with the so-called mass-forces (one more phantom). The second condition is fundamental in electrodynamics. There $`\tau _{ik}^\mathrm{\Phi }`$ is the Poynting energy-momentum tensor which characterizes the state of the vacuum, and $`V_i^M`$ is not a $`4`$-velocity of the vacuum but that of the medium $`M`$ interacting with the vacuum $`\mathrm{\Phi }`$. The existence in physics of these two very different formulas on a similar subject led the author to the understanding that, in the general theory of interacting mediums, there must appear special control parameters $`k_M`$, $`k_\mathrm{\Phi }`$, $`\kappa _M`$, $`\kappa _\mathrm{\Phi }`$. It is clear that the energy states of both mediums could influence to a great extent the process of energy exchange between the mediums, and, hence, the scalars $`k_M`$, $`k_\mathrm{\Phi }`$, $`\kappa _M`$, $`\kappa _\mathrm{\Phi }`$, in general, are not constants, but, for example, if a process is considered in a small volume and for a short period of time, they could be regarded as such.
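Returning to the limiting conditions above: with $`k_M=1`$, $`k_\mathrm{\Phi }=0`$, equations (11), (12) give $`\partial _k\tau _{ik}^M=f_i^M`$ and $`\partial _k\tau _{ik}^\mathrm{\Phi }=-f_i^M`$, so both conditions rest on the identity $`f_k^MV_k^M=0`$, which follows in one line from the definitions stated earlier (a step the text leaves implicit):

$$f_k^MV_k^M=\gamma ^Mf_\alpha ^Mv_\alpha ^M+\frac{i}{c}(f_\beta ^Mv_\beta ^M)(ic\gamma ^M)=\gamma ^Mf_\alpha ^Mv_\alpha ^M-\gamma ^Mf_\beta ^Mv_\beta ^M=0.$$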
By using equations (7)–(14) one can obtain a series of other equations, eliminating one of the densities from (7)–(14) by means of formulas (5), (6). In particular, we have

(15)
$$\partial _k\tau _{ik}^\mathrm{\Phi }=-f_i^M-k_\mathrm{\Phi }\frac{2}{c^2}f_m^M\bar{V}_m\bar{V}_i,\quad \partial _k\stackrel{~}{\tau }_{ik}^\mathrm{\Phi }=g_i^\mathrm{\Phi }+\kappa _M\frac{2}{c^2}g_m^\mathrm{\Phi }\bar{V}_m\bar{V}_i.$$

### A more complicated interaction of mediums

A more complicated scheme of interactions will be used to consider electromagnetic phenomena. At this point we leave out the question of whether an electron is a free state of the vacuum. Our goal is to obtain analogues of formulas (15) for three mediums $`M1`$, $`M2`$ and $`\mathrm{\Phi }`$ which simultaneously occupy the same region of space $`G=G_1\cap G_2\cap G_\mathrm{\Phi }`$. This means that we have at our disposal the velocities $`𝐯^1`$, $`𝐯^2`$, $`𝐯^\mathrm{\Phi }`$ and the force densities $`𝐟^{12}`$, $`𝐟^{21}`$, $`𝐟^{1\mathrm{\Phi }}`$, $`𝐟^{\mathrm{\Phi }1}`$, $`𝐟^{2\mathrm{\Phi }}`$, $`𝐟^{\mathrm{\Phi }2}`$, where $`𝐟^{12}`$ is the force density with which the medium $`M1`$ acts on the medium $`M2`$, etc. Passing to $`4`$-densities we again obtain two versions of them: $`f_k^{12}`$, $`f_k^{21}`$, $`f_k^{1\mathrm{\Phi }}`$, $`f_k^{\mathrm{\Phi }1}`$, $`f_k^{2\mathrm{\Phi }}`$, $`f_k^{\mathrm{\Phi }2}`$, and $`g_k^{12}`$, $`g_k^{21}`$, $`g_k^{1\mathrm{\Phi }}`$, $`g_k^{\mathrm{\Phi }1}`$, $`g_k^{2\mathrm{\Phi }}`$, $`g_k^{\mathrm{\Phi }2}`$. Also, the forces of action and counteraction are connected by formulas similar to (5), (6). For example,

(16)
$`f_k^{\mathrm{\Phi }1}=-f_k^{1\mathrm{\Phi }}-{\displaystyle \frac{2}{c^2}}f_m^{1\mathrm{\Phi }}\bar{V}_m^{1\mathrm{\Phi }}\bar{V}_k^{1\mathrm{\Phi }},\quad f_k^{1\mathrm{\Phi }}\bar{V}_k^{1\mathrm{\Phi }}=f_k^{\mathrm{\Phi }1}\bar{V}_k^{1\mathrm{\Phi }},`$

(17)
$`g_k^{\mathrm{\Phi }1}=-g_k^{1\mathrm{\Phi }}-{\displaystyle \frac{2}{c^2}}g_m^{1\mathrm{\Phi }}\bar{V}_m^{1\mathrm{\Phi }}\bar{V}_k^{1\mathrm{\Phi }},\quad g_k^{1\mathrm{\Phi }}\bar{V}_k^{1\mathrm{\Phi }}=g_k^{\mathrm{\Phi }1}\bar{V}_k^{1\mathrm{\Phi }},`$

where the $`4`$-velocity $`\bar{V}_k^{1\mathrm{\Phi }}=\bar{V}_k^{\mathrm{\Phi }1}`$ is determined by the well known procedure applied to the pair of velocities $`𝐯^1`$, $`𝐯^\mathrm{\Phi }`$. There is an exchange of energies between the mediums $`M1`$, $`M2`$, $`\mathrm{\Phi }`$. Let $`W^1`$, $`W^2`$, $`W^\mathrm{\Phi }`$ be the energy densities of the mediums $`M1`$, $`M2`$, $`\mathrm{\Phi }`$, and $`𝐒^1`$, $`𝐒^2`$, $`𝐒^\mathrm{\Phi }`$ the fluxes of these energies. The simplest equations that could control the energy exchange between the mediums are the following natural generalizations of (7), (8):

(18)
$`\dot{W}^1+\mathrm{div}𝐒^1=-k_{1\mathrm{\Phi }}𝐟^{1\mathrm{\Phi }}𝐯^1+k_{\mathrm{\Phi }1}𝐟^{\mathrm{\Phi }1}𝐯^\mathrm{\Phi }-k_{12}𝐟^{12}𝐯^1+k_{21}𝐟^{21}𝐯^2,`$

(19)
$`\dot{W}^2+\mathrm{div}𝐒^2=-k_{2\mathrm{\Phi }}𝐟^{2\mathrm{\Phi }}𝐯^2+k_{\mathrm{\Phi }2}𝐟^{\mathrm{\Phi }2}𝐯^\mathrm{\Phi }-k_{21}𝐟^{21}𝐯^2+k_{12}𝐟^{12}𝐯^1,`$

(20)
$`\dot{W}^\mathrm{\Phi }+\mathrm{div}𝐒^\mathrm{\Phi }=-k_{\mathrm{\Phi }1}𝐟^{\mathrm{\Phi }1}𝐯^\mathrm{\Phi }+k_{1\mathrm{\Phi }}𝐟^{1\mathrm{\Phi }}𝐯^1-k_{\mathrm{\Phi }2}𝐟^{\mathrm{\Phi }2}𝐯^\mathrm{\Phi }+k_{2\mathrm{\Phi }}𝐟^{2\mathrm{\Phi }}𝐯^2,`$

and three analogous equations, corresponding to (9), (10), with the forces $`g`$ and the constants $`\kappa _{\alpha \beta }`$.
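Before moving on, we record the short derivation of the first equation in (15) (a step the text leaves implicit): substituting (5) into (12) and using $`k_M+k_\mathrm{\Phi }=1`$,

$$\partial _k\tau _{ik}^\mathrm{\Phi }=k_\mathrm{\Phi }f_i^\mathrm{\Phi }-k_Mf_i^M=k_\mathrm{\Phi }\left(-f_i^M-\frac{2}{c^2}f_m^M\bar{V}_m\bar{V}_i\right)-k_Mf_i^M=-f_i^M-k_\mathrm{\Phi }\frac{2}{c^2}f_m^M\bar{V}_m\bar{V}_i.$$

The second equation in (15) follows in the same way from (14) and (6), using $`g_m^M\bar{V}_m=g_m^\mathrm{\Phi }\bar{V}_m`$.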
Similarly to the case of two mediums, all the coefficients now come in pairs in the sense that $`k_{1\mathrm{\Phi }}+k_{\mathrm{\Phi }1}=1`$, $`k_{\mathrm{\Phi }2}+k_{2\mathrm{\Phi }}=1`$, $`\kappa _{1\mathrm{\Phi }}+\kappa _{\mathrm{\Phi }1}=1`$, $`\kappa _{2\mathrm{\Phi }}+\kappa _{\mathrm{\Phi }2}=1`$, $`k_{12}+k_{21}=1`$, $`\kappa _{12}+\kappa _{21}=1`$. The three formulas we gave, (18), (19), (20), and the other three formulas can be used to introduce the tensors $`\tau _{ik}^1`$, $`\tau _{ik}^2`$, $`\tau _{ik}^\mathrm{\Phi }`$, $`\stackrel{~}{\tau }_{ik}^1`$, $`\stackrel{~}{\tau }_{ik}^2`$, $`\stackrel{~}{\tau }_{ik}^\mathrm{\Phi }`$, relating them to the $`4`$-densities $`f_k^{\alpha \beta }`$ and $`g_k^{\alpha \beta }`$. As an example we give two such equations, which correspond to equations (11) and (14):

(21)
$`\partial _k\tau _{ik}^\mathrm{\Phi }=k_{\mathrm{\Phi }1}f_i^{\mathrm{\Phi }1}-k_{1\mathrm{\Phi }}f_i^{1\mathrm{\Phi }}+k_{\mathrm{\Phi }2}f_i^{\mathrm{\Phi }2}-k_{2\mathrm{\Phi }}f_i^{2\mathrm{\Phi }},`$

(22)
$`\partial _k\stackrel{~}{\tau }_{ik}^\mathrm{\Phi }=\kappa _{\mathrm{\Phi }1}g_i^{\mathrm{\Phi }1}-\kappa _{1\mathrm{\Phi }}g_i^{1\mathrm{\Phi }}+\kappa _{\mathrm{\Phi }2}g_i^{\mathrm{\Phi }2}-\kappa _{2\mathrm{\Phi }}g_i^{2\mathrm{\Phi }}.`$

Each of the tensors $`\tau _{ik}^\mathrm{\Phi }`$, $`\stackrel{~}{\tau }_{ik}^\mathrm{\Phi }`$ could be called an energy-momentum tensor of the medium $`\mathrm{\Phi }`$. Let us replace in (21), (22) $`f_k^{\mathrm{\Phi }1}`$ and $`f_k^{\mathrm{\Phi }2}`$ by using formulas (16), and $`g_k^{1\mathrm{\Phi }}`$ and $`g_k^{2\mathrm{\Phi }}`$ according to (17). In addition, assume that the interactions between the medium $`\mathrm{\Phi }`$ and the mediums $`M1`$ and $`M2`$ are the same: $`k_{\mathrm{\Phi }1}=k_{\mathrm{\Phi }2}=k_\mathrm{\Phi }`$, $`k_{1\mathrm{\Phi }}=k_{2\mathrm{\Phi }}=k_M`$, $`\kappa _{\mathrm{\Phi }1}=\kappa _{\mathrm{\Phi }2}=\kappa _\mathrm{\Phi }`$, $`\kappa _{1\mathrm{\Phi }}=\kappa _{2\mathrm{\Phi }}=\kappa _M`$. All this leads to the following generalization of formulas (15):

(23)
$`\partial _k\tau _{ik}^\mathrm{\Phi }=-(f_i^{1\mathrm{\Phi }}+f_i^{2\mathrm{\Phi }})-k_\mathrm{\Phi }{\displaystyle \frac{2}{c^2}}(f_m^{1\mathrm{\Phi }}\bar{V}_m^{1\mathrm{\Phi }}\bar{V}_i^{1\mathrm{\Phi }}+f_m^{2\mathrm{\Phi }}\bar{V}_m^{2\mathrm{\Phi }}\bar{V}_i^{2\mathrm{\Phi }}),`$

(24)
$`\partial _k\stackrel{~}{\tau }_{ik}^\mathrm{\Phi }=(g_i^{\mathrm{\Phi }1}+g_i^{\mathrm{\Phi }2})+\kappa _M{\displaystyle \frac{2}{c^2}}(g_m^{\mathrm{\Phi }1}\bar{V}_m^{\mathrm{\Phi }1}\bar{V}_i^{\mathrm{\Phi }1}+g_m^{\mathrm{\Phi }2}\bar{V}_m^{\mathrm{\Phi }2}\bar{V}_i^{\mathrm{\Phi }2}).`$

### A look at electrodynamics

We start with the formulas $`\rho =\rho ^++\rho ^{-}`$, $`\rho ^+\ge 0`$, $`\rho ^{-}\le 0`$, $`𝐣=\rho ^+𝐯^++\rho ^{-}𝐯^{-}`$, which allow us to state that the vacuum $`\mathrm{\Phi }`$ interacts with a medium $`M1`$ that has velocity $`𝐯^1=𝐯^+`$, and with a medium $`M2`$ that has velocity $`𝐯^2=𝐯^{-}`$. We now take the representation of the Lorentz $`4`$-force $`f_k=F_k^++F_k^{-}`$, $`F_k^\pm =\{\rho ^\pm 𝐄+\rho ^\pm [𝐯^\pm ,𝐁],\frac{i}{c}\rho ^\pm (𝐄,𝐯^\pm )\}`$, and address the question: what are the $`4`$-forces $`F_k^+`$ and $`F_k^{-}`$, and what is their place in the theory of interaction of mediums? Comparing $`F_k^+`$ and $`F_k^{-}`$ with the $`4`$-vectors $`f_k^{1\mathrm{\Phi }}`$, $`f_k^{2\mathrm{\Phi }}`$, $`g_k^{1\mathrm{\Phi }}`$, etc.,
we come to the conclusion that there are two answers to the posed question: a) $`F_k^+=g_k^{\mathrm{\Phi }1}`$, $`F_k^{-}=g_k^{\mathrm{\Phi }2}`$; b) $`F_k^+=f_k^{1\mathrm{\Phi }}`$, $`F_k^{-}=f_k^{2\mathrm{\Phi }}`$. Answer a) reiterates the well established notion of $`𝐟`$; answer b) is new, and the author sees no reason why answer a) is preferable. One also must retain both formulas (23) and (24), which now become:

(25)
$$\begin{array}{c}\partial _k\tau _{ik}^\mathrm{\Phi }=-(F_i^++F_i^{-})-k_\mathrm{\Phi }\frac{2}{c^2}\left(F_m^+\bar{V}_m^{1\mathrm{\Phi }}\bar{V}_i^{1\mathrm{\Phi }}+F_m^{-}\bar{V}_m^{2\mathrm{\Phi }}\bar{V}_i^{2\mathrm{\Phi }}\right),\\ \partial _k\stackrel{~}{\tau }_{ik}^\mathrm{\Phi }=(F_i^++F_i^{-})+\kappa _M\frac{2}{c^2}\left(F_m^+\bar{V}_m^{1\mathrm{\Phi }}\bar{V}_i^{1\mathrm{\Phi }}+F_m^{-}\bar{V}_m^{2\mathrm{\Phi }}\bar{V}_i^{2\mathrm{\Phi }}\right).\end{array}$$

Let us compare these two formulas with the equation for the energy-momentum tensor $`T_{ik}`$ in electrodynamics: $`\partial _kT_{ik}=-(F_i^++F_i^{-})`$. Since all three tensors ($`T_{ik}`$, $`\tau _{ik}`$, $`\stackrel{~}{\tau }_{ik}`$) are responsible for the distribution and flow of energy, as well as for the momentum, of the same medium $`\mathrm{\Phi }`$, one can make two different claims:

#### 1.

In an interaction with the mediums $`M1`$ and $`M2`$, the vacuum shows its limiting properties by having the characteristics $`k_\mathrm{\Phi }`$ and $`\kappa _M`$ equal to zero. The equation for the EMT of the vacuum is then the well known equation $`\partial _kT_{ik}=-(F_i^++F_i^{-})`$.

#### 2.

The characteristics $`k_\mathrm{\Phi }`$ and $`\kappa _M`$ of the interaction of the vacuum with the mediums $`M1`$ and $`M2`$ are not zero but sufficiently small, and the right equation for the EMT of the vacuum is equation (25) with $`k_\mathrm{\Phi }\ne 0`$.

It is clear that, for $`k_\mathrm{\Phi }\ne 0`$, the tensors $`\tau _{ik}^\mathrm{\Phi }`$ and $`T_{ik}`$ will give a different picture of the distributions of the flux of energy and momentum in the vacuum. And if measurements have sufficient precision to make formula (25) preferable, then there will appear the quantity $`k_\mathrm{\Phi }\frac{2}{c^2}(F_m^+\bar{V}_m^{1\mathrm{\Phi }}\bar{V}_i^{1\mathrm{\Phi }}+F_m^{-}\bar{V}_m^{2\mathrm{\Phi }}\bar{V}_i^{2\mathrm{\Phi }})`$, and together with it, the mysterious velocity $`𝐯^\mathrm{\Phi }`$ included in the structure of $`\bar{V}_i^{1\mathrm{\Phi }}`$ and $`\bar{V}_i^{2\mathrm{\Phi }}`$ will also be discovered.
# On the Frequency Evolution of X-ray Brightness Oscillations During Thermonuclear X-ray Bursts: Evidence for Coherent Oscillations

## 1 Introduction

Millisecond oscillations in the X-ray brightness during thermonuclear bursts, “burst oscillations”, have been observed from six low mass X-ray binaries (LMXB) with the Rossi X-ray Timing Explorer (RXTE) (see Strohmayer, Swank & Zhang 1998 for a review). The presence of large amplitudes near burst onset combined with spectral evidence for localized thermonuclear burning suggests that these oscillations are caused by rotational modulation of thermonuclear inhomogeneities (see Strohmayer, Zhang & Swank 1997). The asymptotic pulsation frequencies in the cooling tails of bursts from 4U 1728-34 are stable over year timescales, also supporting a coherent mechanism such as rotational modulation (Strohmayer et al. 1998a). An intriguing aspect of these oscillations is the frequency evolution evident during many bursts. The frequency is observed to increase in the cooling tail, reaching a plateau or asymptotic limit (see Strohmayer et al. 1998a). However, Strohmayer (1999) has recently discovered an episode of spin down in the cooling tail of a burst from 4U 1636-53. Evidence of frequency change has been seen in five of the six burst oscillation sources and appears to be commonly associated with the physical process responsible for the pulsations. Strohmayer et al. (1997) have argued that this evolution results from angular momentum conservation of the thermonuclear shell. The thermonuclear flash expands the shell, increasing its rotational moment of inertia and slowing its spin rate. Near burst onset the shell is thickest and thus the observed frequency lowest. The shell then spins back up as it recouples to the bulk of the neutron star as it cools. This scenario is viable as long as the shell decouples from the bulk of the neutron star during the thermonuclear flash and then comes back into co-rotation with it over the $`\sim 10`$ s of the burst fall-off. Calculations indicate that the $`\sim 10`$ m thick pre-burst shell expands to $`\sim 30`$ m during the flash (see Joss 1978; Bildsten 1995), which gives a frequency shift due to angular momentum conservation of $`2\nu _{spin}(20\mathrm{m}/R)`$, where $`\nu _{spin}`$ and $`R`$ are the stellar spin frequency and radius, respectively. For the several hundred Hz spin frequencies inferred from burst oscillations this gives a shift of $`\sim 2`$ Hz, similar to that observed. In bursts where frequency drift is evident, the drift broadens the peak in the power spectrum and produces quality values $`Q\equiv \nu _0/\mathrm{\Delta }\nu _{FWHM}\sim 300`$. In some bursts a relatively short train of pulses is observed during which there is no strong evidence for a varying frequency. A burst of this kind from KS 1731-260 with 524 Hz oscillations yielded the highest coherence, $`Q\sim 900`$, yet reported in a burst oscillation (see Smith, Morgan & Bradt 1997). In this Letter we investigate the time dependence of the frequency observed in bursts from 4U 1728-34 and 4U 1702-429. We show that in the cooling tails of bursts the pulse trains are effectively coherent. We show that with accurate modeling of the drift, quality factors as high as $`Q\sim 4,000`$ are achieved in some bursts. We investigate the functional form of the frequency drift and show that a simple exponential “chirp” model works remarkably well. We use this model to search for significant power at the harmonics and first subharmonic of the strongest oscillation frequency in each source.
Such searches are important in establishing whether the strongest oscillation frequency is the stellar spin frequency or its first harmonic, as now appears to be the case for 4U 1636-53 (see Miller 1999). The detection of harmonic signals, or limits on them, is also important in obtaining constraints on the stellar compactness (see Miller & Lamb 1998, Strohmayer et al. 1998b). We note that Zhang et al. (1998) have previously reported on a model for the frequency evolution during a burst from Aql X-1.

## 2 Modelling the Frequency Drift

To investigate the frequency evolution in burst data we use the $`Z_n^2`$ statistic (see Buccheri et al. 1983)

$$Z_n^2=\frac{2}{N}\sum _{k=1}^{n}\left[\left(\sum _{j=1}^{N}\mathrm{cos}(k\varphi _j)\right)^2+\left(\sum _{j=1}^{N}\mathrm{sin}(k\varphi _j)\right)^2\right].$$

Here $`N`$ is the total number of photons in the time series, $`\varphi _j`$ are the phases of each photon derived from a frequency model, $`\nu (t)`$, viz. $`\varphi _j=2\pi \int _0^{t_j}\nu (t^{\prime })𝑑t^{\prime }`$, and $`n`$ is the total number of harmonics added together. For the burst oscillations, which are highly sinusoidal, we will henceforth restrict ourselves to $`n=1`$. This statistic is particularly suited to event mode data, since no binning is introduced. $`Z_1^2`$ has the same statistical properties as the well known Leahy normalized power spectrum, which for a Poisson process is distributed as $`\chi ^2`$ with 2 degrees of freedom. All of the bursts discussed here were observed with the Proportional Counter Array (PCA) onboard RXTE and sampled with 125 $`\mu `$s (1/8192 s) resolution. For $`\nu (t)`$ we have investigated a number of functional forms, including $`\nu (t)=\nu _0`$ (a constant frequency), $`\nu (t)=\nu _0(1+d_\nu t)`$ (a linearly increasing frequency), and $`\nu (t)=\nu _0(1-\delta _\nu \mathrm{exp}(-t/\tau ))`$ (an exponential “chirp”). For a given data set and frequency model we vary the parameters so as to maximize the $`Z_1^2`$ statistic. We then compare the maximum values from different models to judge which is superior in a statistical sense. Our aim is both to constrain the functional form of the frequency evolution and to determine whether the pulse train during all or a portion of a burst is coherent or not. We judge the coherence of a given model by computing the quality factor $`Q\equiv \nu _0/\mathrm{\Delta }\nu _{fwhm}`$ from the width of the peak in a plot of $`Z_1^2`$ versus the frequency parameter $`\nu _0`$. We also compare the peak width to that expected for a coherent pulsation in data of the same length. A pulsation in a time series of finite extent produces a broadened peak in a power spectrum. The well known window function, $`W(\nu )=|\mathrm{sin}(\pi \nu T)/\pi \nu |^2`$, gives a width of $`\mathrm{\Delta }\nu \sim 1/T`$, where $`T`$ is the length of the data. We also confirm that for a successful frequency model the integrated power under the $`Z_1^2`$ peak is consistent with that calculated assuming no frequency evolution.

### 2.1 Linear and Exponential Frequency Drift

To begin, we demonstrate how a linear increase in frequency yields a significant improvement in the $`Z_1^2`$ statistic compared with a constant frequency model. We use the burst from 4U 1702-429 observed on July 26, 1997 at 14:04:19 UT, which we refer to as burst A (see Figure 4 in Markwardt, Strohmayer & Swank 1999). We used a 5.25 s interval during this burst to investigate the frequency evolution.
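To make the procedure concrete, here is a minimal sketch (not the authors' code; the event list, rates, and drift parameters are illustrative) of evaluating $`Z_1^2`$ for a set of photon arrival times under the exponential chirp model, using the fact that the phase integral is analytic for this $`\nu (t)`$:

```python
import numpy as np

def z1sq(times, nu0, delta, tau):
    """Z_1^2 for photon arrival times under the chirp model
    nu(t) = nu0 * (1 - delta * exp(-t / tau)); the phase integral
    phi(t) = 2*pi*nu0*(t - delta*tau*(1 - exp(-t/tau))) is analytic."""
    phi = 2.0 * np.pi * nu0 * (times - delta * tau * (1.0 - np.exp(-times / tau)))
    n = len(times)
    return (2.0 / n) * (np.cos(phi).sum() ** 2 + np.sin(phi).sum() ** 2)

# Illustrative fake data: a weakly modulated Poisson event train.
rng = np.random.default_rng(0)
T, rate, nu_true = 12.0, 3000.0, 330.0        # s, counts/s, Hz (illustrative)
t = np.sort(rng.uniform(0.0, T, rng.poisson(rate * T)))
accept = rng.uniform(size=t.size) < 0.5 * (
    1 + 0.1 * np.cos(2 * np.pi * nu_true * (t - 0.005 * 2.0 * (1 - np.exp(-t / 2.0)))))
t = t[accept]

# Scan the limiting frequency nu0 with the drift parameters held fixed,
# as in the plots of Z_1^2 versus nu_0 described in the text.
nus = np.linspace(nu_true - 1.0, nu_true + 1.0, 2001)
powers = [z1sq(t, nu, 0.005, 2.0) for nu in nus]
print("peak Z_1^2 = %.1f at nu0 = %.3f Hz" % (max(powers), nus[int(np.argmax(powers))]))
```

In a real analysis one would maximize over all three chirp parameters rather than scanning $`\nu _0`$ alone, but the one-dimensional scan already reproduces the narrow, coherent-looking peak discussed below.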
In figure 1a we show results from our calculations of $`Z_1^2`$ for the constant frequency model (top panel) and the model with a linearly increasing frequency (bottom panel). In both cases the ordinate corresponds to the frequency parameter $`\nu _0`$ defined in the models. For the linear frequency model we found that $`Z_1^2`$ was maximized with $`d_\nu =1.264\times 10^{-3}`$ s<sup>-1</sup>. Including the linear drift increased $`Z_1^2`$ from 88.48 to 271.4, a dramatic improvement of $`\sim 183`$ obtained with only 1 additional degree of freedom. The resulting $`Z_1^2`$ peak is also substantially narrower (see figure 1a), leaving no doubt that the pulsation frequency is increasing during this time interval. The frequency evolution during bursts can also be explored with dynamic power spectra. Several such spectra have been presented elsewhere (see Strohmayer et al. 1998a, Strohmayer, Swank, & Zhang 1998, etc.). A striking behavior is that the pulsation frequency reaches an asymptotic limit in many bursts. Motivated by this behavior we investigated a simple exponential “chirp” model with a limiting frequency, $`\nu (t)=\nu _0(1-\delta _\nu \mathrm{exp}(-t/\tau ))`$. This model has three parameters: the limiting frequency $`\nu _0`$, the fractional frequency change, or “bite”, $`\delta _\nu `$, and the relaxation timescale, $`\tau `$. We fit this model to burst A and find a maximum $`Z_1^2`$ of 342.9, an increase of 71.5 in $`Z_1^2`$ over the linear frequency model. This is also a dramatically significant improvement in $`Z_1^2`$. We fit the peak in $`Z_1^2`$ vs. $`\nu _0`$ obtained with the chirp model to a gaussian in order to determine its width. Figure 1b shows the resulting fit. The peak is well described by a gaussian with a width $`\mathrm{\Delta }\nu _{fwhm}=0.201`$ Hz, which gives $`Q=\nu _0/\mathrm{\Delta }\nu _{fwhm}=1,641`$. We can compare this with the width caused by windowing, which for a 5.25 s interval gives a width (FWHM) of $`\sim 0.17`$ Hz. We used the chirp model to investigate a sample of bursts from 4U 1702-429 and 4U 1728-34. We do not present here a systematic description of all observed bursts; rather, we demonstrate the main results with several illustrative examples. A burst from 4U 1702-429 observed on July 30, 1997 at 12:11:58 UT (burst B) revealed a $`\sim 12`$ s interval during which oscillations were detected. Our results using the chirp model for this burst are summarized in figure 2. Panel (a) shows a contour plot of the time evolution of the $`Z_1^2`$ statistic through the burst. It was computed by calculating $`Z_1^2`$ on a grid of constant frequency values using 2 s intervals, with a new interval starting every 0.25 s, that is, assuming no frequency evolution. The burst countrate profile (solid histogram) and best fitting exponential chirp model (heavy solid line) are overlaid. The extent of the model curve defines the time interval used to fit the chirp model. The best model tracks the dynamic $`Z_1^2`$ contours remarkably well. Panel (b) compares $`Z_1^2`$ vs. $`\nu _0`$ for the constant frequency (dashed curve) and chirp models (solid curve). We again fit a gaussian to the peak calculated with the chirp model and find a width $`\mathrm{\Delta }\nu _{fwhm}=0.086`$ Hz, which yields $`Q=\nu _0/\mathrm{\Delta }\nu _{fwhm}=3,848`$ for this burst. This compares with a width of $`\sim 0.071`$ Hz for a windowed pulsation of duration 12.5 s. We carried out similar analyses to investigate the frequency evolution in bursts from 4U 1728-34.
We again found that the chirp model provides a remarkably useful description of the frequency drift. Table 1 summarizes our results using the chirp model for several bursts from both 4U 1702-429 and 4U 1728-34. We find that the peaks obtained with the chirp model are only modestly broader than those expected for a coherent pulsation of the same length. Some of this additional width is likely due to the fact that pulsations are not present during the entirety of each interval examined. It is also likely that the chirp model is not the exact functional form of the frequency evolution; this is suggested by the broader wings of the $`Z_1^2`$ peaks computed for several bursts. However, the success of such a simple model argues strongly that the pulsations during the cooling tails of these bursts are phase coherent.

## 3 Harmonics and Subharmonics

Pulsations from a rotating hotspot can be used to place constraints on the neutron star compactness (see Strohmayer et al. 1998b; Miller & Lamb 1998; and Miller 1999). The pulsation amplitude is constrained by the strength of gravitational light deflection. An observed amplitude places an upper limit on the compactness, $`GM/c^2R`$, because too compact stars cannot achieve the observed modulation amplitude. Further, an upper limit on the harmonic content places a lower limit on the compactness, since less compact stars produce more harmonic content, and at some limit the harmonics should become detectable. In some models for the kHz QPO observed in the accretion driven X-ray flux from neutron star LMXB, the QPO frequency separation is closely related to the stellar spin frequency inferred from burst oscillations (see Miller, Lamb & Psaltis 1998; Strohmayer et al. 1996). In two sources, burst oscillations are seen with frequencies close to twice the kHz QPO frequency separation (Wijnands et al. 1997; Wijnands & van der Klis 1997; Mendez, van der Klis & van Paradijs). Miller (1999) has reported evidence for a significant 290 Hz subharmonic of the strong 580 Hz pulsation seen in 4U 1636-539 (Zhang et al. 1996), suggesting that the strongest signal observed during bursts may actually be the first harmonic of the spin frequency, not the spin frequency itself. Based on these new results and the evidence for a beat frequency interpretation, it is important to search for the subharmonic of the strongest signal detected during bursts. We have shown that frequency drift during bursts can greatly smear out the signal power. We have also shown that simple models can recover a coherent peak. By modelling the drift we can make a much more sensitive search for harmonics. Moreover, we can coherently add signals from different bursts by first modelling their frequency evolution and then computing a total $`Z_1^2`$ by phase aligning each burst. We note that this procedure will also coherently add together power at any higher harmonics of the known signal. However, there will be a $`\pi `$ phase ambiguity for any signal at the first subharmonic (see Miller 1999). We have coherently added the 330 Hz signals in all five bursts from 4U 1702-429 seen during our 1997 July observations (see Markwardt, Strohmayer & Swank 1999). We fit the chirp model to the oscillations in each burst and then computed a total $`Z_1^2`$ by phase aligning them. Figure 3 shows the results of this analysis. The top panel shows the total $`Z_1^2`$ power at 330 Hz obtained by adding the bursts coherently. The peak value is $`\sim 1,400`$ and demonstrates that we have successfully added the bursts coherently.
The highest power for any burst individually was $`\sim 487`$. The two lower panels show the power at the first and second harmonics of the 330 Hz signal. We find no evidence for a significant signal at these or higher harmonics. To search for a signal at the 165 Hz subharmonic we computed a total $`Z_1^2`$ for each of 16 different combinations of the phases from each of the five bursts. Since there is a 2-fold ambiguity when coherently adding a subharmonic signal from two bursts, with a total of 5 bursts we have $`2^4=16`$ possible combinations. We found no significant power at the subharmonic. We performed a similar analysis using 4 bursts from 4U 1728-34 which showed strong oscillations in their cooling tails; again we found no significant harmonic or subharmonic signals. The 90% confidence upper limits on the signal power, $`Z_1^2`$, at the first harmonic in bursts from 4U 1702-429 and 4U 1728-34 are 5.8 and 1.8, respectively. These correspond to lower limits on the ratio of power at the fundamental to power at the first harmonic, $`h`$, of 242 and 556, respectively.

## 4 Discussion

In this work we have concentrated on the pulsations in the cooling tails of bursts. Bursts also show pulsations during the rising phase (Strohmayer, Zhang, & Swank 1997). We have not yet been able to show that the pulsations which begin near burst onset can be phase connected to those in the cooling tail with a simple model. To fully address this interesting question will require more sophisticated modelling than we have employed here. We will address this question in future work. With the chirp model we find magnitudes of the frequency shift, $`\nu _0\delta _\nu `$, of $`2-3`$ Hz. These values are consistent with simple estimates based on angular momentum conservation using theoretical values for the pre- and post-burst thickness of bursting shells (Bildsten 1995). For the frequency relaxation timescale, $`\tau `$, we find a range of values from 1.7 to 4 s. Interestingly, different bursts from the same source can show markedly different decay timescales. For example, the two bursts from 4U 1728-34 summarized in table 1 show similar values for $`\nu _0`$ and $`\delta _\nu `$, but have decay timescales $`\tau `$ which differ by almost a factor of two. Of these two bursts, burst C had both a substantially greater peak flux and fluence. This seems consistent with the idea that the frequency increase is due to hydrostatic settling of the shell as it radiates away its thermal energy; however, study of more bursts is required to firmly establish such a connection. If the angular momentum conservation argument is correct, it implies the existence of a shear layer in the neutron star atmosphere. In the chirp model the total amount of phase shearing is simply $`\varphi _{shear}=\nu _0\delta _\nu \tau (1-e^{-T/\tau })`$, where $`T`$ is the length of the data interval. For the bursts examined here we find $`\varphi _{shear}\approx 4-8`$, so that the shell “slips” this many revolutions over the underlying neutron star during the duration of the pulsations. The dynamics of this shear layer are no doubt complex. Given the physical conditions in the shell, the shear flows are characterized by a large Reynolds number, so it is likely that dissipation of the shear velocity and recoupling will be dominated by turbulent momentum transport. Magnetic fields may play a role as well.
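As a quick arithmetic check of these numbers (the parameter values below are representative picks from the ranges quoted in the text, not a specific entry of Table 1):

```python
import math

# Representative chirp parameters from the ranges quoted in the text.
nu0_delta = 2.5   # nu_0 * delta_nu in Hz (the 2-3 Hz frequency shift)
tau = 3.0         # relaxation timescale in s (the 1.7-4 s range)
T = 10.0          # length of the pulse train in s

phi_shear = nu0_delta * tau * (1.0 - math.exp(-T / tau))
print("phi_shear = %.1f revolutions" % phi_shear)   # ~7.2, within the 4-8 range
```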
Shear layers can be unstable to the Kelvin-Helmholtz instability; however, Bildsten (1998) has suggested that the shear may be stabilized by either thermal buoyancy or the mean molecular weight contrast. We urge new theoretical investigations to explore the mechanisms of recoupling to determine if such a shear layer can survive long enough to explain the persistence of pulsations for $`10`$ s, as well as the observed relaxation timescale. For rotational modulation of a hotspot, the ratio, $`h`$, of signal power at the fundamental to that at the first harmonic is a function of the stellar compactness (see Miller & Lamb 1998), so that measurement of $`h`$ can be used to constrain the compactness. More compact stars have less harmonic content in their pulses and therefore larger $`h`$. Since the pulsations in the cooling tails of bursts are likely caused by a broad brightness anisotropy on the neutron star surface, and not a point spot, it will require more realistic modelling of such an emission geometry to use the limits on harmonic content derived here to constrain the stellar compactness. We will perform such modelling in future work. ## 5 Figure Captions Figure 1a: $`Z_1^2`$ vs frequency parameter $`\nu _0`$ computed from a 5.25 s interval in burst A. The top panel was calculated with no frequency modulation, $`d_\nu =0`$, while the bottom panel was computed with a linear frequency increase of magnitude $`d_\nu =1.264\times 10^{-3}`$ s<sup>-1</sup>. Figure 1b: $`Z_1^2`$ peak and gaussian fit to burst A using the best fitting parameters of the chirp model. The solid line shows the gaussian fit to the peak. The derived peak centroid and width (FWHM) give $`Q=\nu _0/\delta \nu =1,641`$. Figure 2a: Dynamic $`Z_1^2`$ spectrum for burst B. The contours show loci of constant $`Z_1^2`$ and were computed using 2 s intervals with a new interval starting every 0.25 s. The calculation was done on a grid of constant frequency points with no frequency modulation. The PCA count rate profile is shown (solid histogram) as well as the best fitting chirp model (heavy solid line). The interval used to fit the chirp model is denoted by the extent of the model curve. Figure 2b: $`Z_1^2`$ vs frequency parameter $`\nu _0`$ computed from the time interval during burst B marked in figure 2a. The dashed curve shows $`Z_1^2`$ computed with no frequency modulation, i.e. $`\delta _\nu =0`$, while the solid curve was computed with the best fitting chirp model. Figure 3: Results of the search for harmonic signals by coherently adding 330 Hz signals in five bursts from 4U 1702-429. The top panel shows the total 330 Hz signal computed by adding all five bursts coherently. The two lower panels show $`Z_1^2`$ in the vicinity of the 1st and 2nd harmonics of the 330 Hz signal. There is no significant power at either harmonic.
# Coherent Photoproduction of Dileptons on Light Nuclei - a New Means to Learn about Vector Mesons ## 1 Introduction The question of how the $`\rho `$-meson behaves in hot and dense nuclear matter has attracted much attention over recent years. Based on arguments from chiral symmetry, one expects that the $`\rho `$-meson mass changes in the vicinity of the chiral phase transition, though chiral symmetry does not tell us whether its mass goes up or down . The interest in this question was further stimulated by measurements of dilepton spectra from heavy-ion collisions, which were carried out by the NA45 collaboration . These spectra seem to indicate a downward shift of the $`\rho `$-meson mass of about $`100`$ MeV. In order to understand this effect various theory groups performed calculations of the selfenergy of the $`\rho `$-meson at finite density, which contains all the information about its mass and decay width in nuclear matter. The models differ considerably in what they predict for the in-medium properties of the $`\rho `$-meson, the results ranging from a mass-shift of the $`\rho `$ to a selfenergy which clearly shows resonant structures from the excitation of baryonic resonances, especially the $`D_{13}(1520)`$ . However, if one uses these selfenergies to calculate dilepton spectra in heavy-ion collisions it turns out that they yield very similar results, and thus it seems to be very hard to distinguish experimentally between them on the basis of heavy-ion collisions. Therefore it is clearly necessary to find other reactions that yield additional information about the $`\rho `$-meson in medium. We claim that the photoproduction of $`\rho `$-mesons is a promising candidate for that. In this talk we will concentrate on a discussion of the coherent photoproduction of $`\rho `$-mesons off light nuclei. The term coherent will be explained in section 3. As we will show, not only is this reaction very sensitive to different medium-modifications of the $`\rho `$-meson, it also opens up the possibility to study the momentum dependence of the selfenergy of the $`\rho `$. The talk is organized as follows: in section 2 we will briefly review the influence of the excitation of resonance-hole loops on the $`\rho `$-selfenergy. In section 3 we will explain the model to calculate the coherent photoproduction before we then turn to the results. ## 2 The Selfenergy of the $`\rho `$-meson in Nuclear Matter For the study of the mass spectrum of a particle it is convenient to introduce the spectralfunction, which is proportional to the imaginary part of the propagator of the particle. Thus, for a $`\rho `$-meson in vacuum, the spectralfunction has the following form: $$A_\rho ^{vac}(q)=\frac{1}{\pi }\frac{\mathrm{Im}\mathrm{\Sigma }^{\mathrm{vac}}(q)}{(q^2-m_\rho ^2)^2+(\mathrm{Im}\mathrm{\Sigma }^{\mathrm{vac}}(q))^2},$$ where $$\mathrm{Im}\mathrm{\Sigma }^{\mathrm{vac}}(q)=\sqrt{q^2}\mathrm{\Gamma }_{\rho \pi \pi }$$ describes the decay of a $`\rho `$-meson into two pions. The spectralfunction gives the probability that the $`\rho `$-meson propagates with a mass $`m=\sqrt{q^2}`$. One sees that, through the coupling to the 2$`\pi `$-channel, the $`\rho `$-meson can propagate with any mass larger than $`2m_\pi `$ and not only with its rest mass of $`m_\rho =0.768`$ GeV.
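The vacuum spectralfunction is straightforward to evaluate numerically. The sketch below uses a common $`p`$-wave parameterization of the two-pion width, $`\mathrm{\Gamma }_{\rho \pi \pi }(m)=\mathrm{\Gamma }_0(m_\rho /m)(p/p_0)^3`$; this particular form is an assumption introduced here for illustration and is not specified in the text.

```python
import numpy as np

# Vacuum rho spectral function: A(m) = (1/pi) * m*Gamma(m) /
# ((m^2 - m_rho^2)^2 + (m*Gamma(m))^2), with an assumed p-wave width.
m_rho, m_pi, Gamma0 = 0.768, 0.140, 0.150   # GeV

def p_cm(m):            # pion momentum in the rho rest frame
    return np.sqrt(np.maximum(m**2 / 4.0 - m_pi**2, 0.0))

def Gamma(m):           # p-wave: Gamma0 * (m_rho/m) * (p/p0)^3
    return Gamma0 * (m_rho / m) * (p_cm(m) / p_cm(m_rho))**3

def A(m):
    im_sigma = m * Gamma(m)
    return im_sigma / np.pi / ((m**2 - m_rho**2)**2 + im_sigma**2)

for m in (0.3, 0.5, 0.768, 1.0):
    print("m = %.3f GeV   A = %.3e GeV^-2" % (m, A(m)))
```

The printout shows the broad low-mass tail below $`m_\rho `$ to which sub-threshold resonances can couple, the point made in the next paragraphs.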
In nuclear matter there will be additional contributions to the selfenergy from interactions of the $`\rho `$ with the surrounding nucleons: $$\mathrm{\Sigma }(\omega ,\stackrel{}{q})=\mathrm{\Sigma }^{vac}(q)+\mathrm{\Sigma }^{med}(\omega ,\stackrel{}{q}).$$ Note that the in-medium part of the selfenergy depends on energy and three-momentum of the $`\rho `$-meson independently. This is a direct consequence of the fact that there exists a preferred rest frame, namely the rest frame of nuclear matter. Energy and momentum of the $`\rho `$-meson are defined with respect to that frame. As a further consequence, transversely and longitudinally polarized $`\rho `$-mesons will be modified differently. At low nuclear densities the $`\rho `$-selfenergy can be calculated by means of the low density theorem, which relates the selfenergy to the $`\rho N`$ forward-scattering amplitude: $$\mathrm{\Sigma }^{med}=-\rho 𝒯_{\rho N}(\theta =0)$$ Thus, at low densities a complete knowledge of the $`\rho N`$ forward-scattering amplitude suffices for a description of the $`\rho `$-mass spectrum in nuclear matter. There are various contributions to this amplitude. In this talk we want to concentrate on those scattering processes that lead to the excitation of a baryonic resonance (fig.1): The details of the calculation can be found in Peters et al. , to which the interested reader may refer. In addition, we would like to discuss here some points that have not been mentioned in our previous publication. If one looks up baryonic resonances which couple to the $`\rho N`$-channel in , one finds some resonances whose mass $`m_R`$ is below the $`\rho N`$-threshold: $$m_R<m_\rho +m_N.$$ Among these resonances is for example the $`D_{13}(1520)`$-resonance which in was found to be very important for the in-medium properties of the $`\rho `$-meson. However, keeping in mind that the $`\rho `$ is an unstable particle, this is not puzzling at all: the resonances simply couple to the low-mass tail of the $`\rho `$-spectralfunction. Direct experimental evidence for that can be found in an analysis from Manley et al. . They performed a partial-wave analysis of all existing data for the reaction $`\pi N\rightarrow \pi \pi N`$ within an isobar model, allowing for $`\rho N`$, $`\mathrm{\Delta }\pi `$ and $`ϵN`$ as intermediate $`\pi \pi N`$-states. Here the $`ϵ`$ represents an isoscalar $`s`$-wave $`\pi \pi `$-state. Because of its importance we want to discuss the case of the $`D_{13}(1520)`$-resonance. The result of the analysis for the corresponding partial-wave together with the contribution from the coupling of the resonance to $`\rho N`$ is shown in fig.2 and leaves little doubt that the $`D_{13}(1520)`$ really decays into $`\rho N`$. We mentioned before that the knowledge of the $`\rho N`$ forward-scattering amplitude suffices at low densities for a complete description of the in-medium properties of the $`\rho `$-meson. As was pointed out by Friman , parts of this amplitude can be compared with the experimental data for the reaction $`\pi ^{-}p\rightarrow \rho ^0n`$. The only available analysis of this reaction is from Brody et al. . It is shown in fig.3 in comparison with a calculation of the cross-section based on our $`\rho N`$-scattering amplitude. The calculation seems to be in clear contradiction with the data. We argue, however, that this does not imply that the used model is incorrect, but rather that the extraction of the data at energies $`W<1.7`$ GeV is not reliable.
The data suggest, for example, that there is no coupling of the $`D_{13}(1520)`$ to $`\rho N`$, which, as shown above, does not agree with Manley's analysis. The latter was carried out more carefully and is based on a much larger set of data, so we prefer to rely on its results. At higher energies the agreement becomes better, due to the fact that the experimental identification of the $`\rho `$-mesons is less problematic. Before we turn to the coherent photoproduction let us quickly review one main feature of the spectralfunction resulting from the selfenergy discussed above. The calculations show that transverse and longitudinal selfenergy have a very different momentum dependence. At low momenta both clearly exhibit the influence of the $`D_{13}(1520)`$. However, whereas the spectralfunction for transverse $`\rho `$-mesons is nearly flat at high momenta, for longitudinal $`\rho `$-mesons the importance of the $`D_{13}(1520)`$ as well as of the other resonances is reduced and much of the structure coming from the $`\rho `$-decay into pions can be found. This will be of great importance in the next section. ## 3 Coherent Photoproduction of Vector-Mesons We come now to the discussion of the coherent photoproduction of $`\rho `$ mesons off light nuclei. By coherent we mean that the $`\rho `$ is produced elastically, i.e. the nucleus is required to remain in its ground state. Since the $`\rho `$-meson has a large decay width, it will decay inside the nucleus. In order to avoid a distortion of the signal due to final state interactions of the decay products, we consider dileptons as the final state. One also has to calculate the dilepton production rate coming from intermediate photon and $`\omega `$-meson states, which cannot be distinguished experimentally from dileptons coming from $`\rho `$-decay. In impulse approximation the amplitude for the complete process can be put in the form: $$\underset{V}{\sum }\int 𝑑m\underset{\alpha }{\sum }\frac{\langle e^+e^{-}|𝒪|V(m)\rangle \langle \alpha V(m)|𝒱|\alpha \gamma \rangle }{m^2-m_V^2+im\mathrm{\Gamma }+\mathrm{\Sigma }^{med}}$$ Here $`V`$ represents the produced spin-1 state, $`m_V`$ its mass, $`\mathrm{\Gamma }`$ its vacuum decay width and $`\mathrm{\Sigma }^{med}`$ its selfenergy in nuclear matter. Different scenarios for the medium-modifications of the $`\rho `$ enter through $`\mathrm{\Sigma }^{med}`$. $`m`$ is the invariant mass of the dileptons. $`|\alpha \rangle `$ is a bound nucleon state with the quantum numbers $`\alpha `$ and the sum is over all filled nucleon states in the nucleus under consideration. The potential for the production of a vector-meson is denoted by $`𝒱`$ and $`𝒪`$ describes the coupling of a vector particle to dileptons. ### 3.1 The Potential $`𝒱`$ The potential $`𝒱`$ is taken from Friman et al. and describes the photoproduction of vector-mesons within a meson-exchange model. The parameters of the model are adjusted to data for the photoproduction on free nucleons. It turns out that for a reasonable description of the $`\rho `$-meson production one needs to take into account the contribution from both $`\pi `$- and $`\sigma `$-exchange, whereas in the case of $`\omega `$-mesons $`\pi `$-exchange alone suffices. Since the pion is a pseudoscalar particle, $`\pi `$-exchange induces a change of the parity of the nucleus and therefore does not contribute to the amplitude for the coherent production. Thus within our model this amplitude vanishes for $`\omega `$-mesons.
### 3.2 The Selfenergy $`\mathrm{\Sigma }^{med}`$ The selfenergy $`\mathrm{\Sigma }^{med}`$ describes how the $`\rho `$-meson is modified during its propagation through the nucleus. In our calculation we studied the effects of a selfenergy based on the excitation of resonance-hole loops, which was discussed in the first part of this talk, on the production amplitude. We would like to mention again the main properties of this model, namely that it has a large imaginary part and that it shows a different momentum-dependence of transverse and longitudinal selfenergy. In order to demonstrate the sensitivity of the amplitude to different models for the in-medium modification of the $`\rho `$-meson we also calculated the $`\rho `$-selfenergy that follows from the same Lagrangian as the potential $`𝒱`$ and which is depicted diagrammatically by a tadpole-graph (fig.4). In contrast to the resonance-hole model this selfenergy is purely real and leads to a decrease of the $`\rho `$-mass of about $`100`$ MeV. Also, it induces the same medium-modification for both transverse and longitudinal $`\rho `$-mesons. ### 3.3 Results The calculation shows that with our choice of the potential $`𝒱`$ the production amplitude is proportional to the nuclear formfactor $`F(q)`$ , where $`q`$ denotes the momentum transfer. In fig.5 we show the formfactor of $`{}^{12}C`$. We also indicate the minimal momentum transfer $`q_{min}`$ for the production of a particle of mass $`0.5`$ GeV and $`0.768`$ GeV at an incident photon energy of $`0.85`$ GeV. A simple kinematical consideration shows that with dropping mass or increasing photon energy $`q_{min}`$ becomes smaller. In general $$\sigma _{tot},\frac{d\sigma }{dm}\propto \int _{q_{min}}^{q_{max}}𝑑q\,q|F(q)|^2.$$ Since the formfactor decreases rapidly as $`q`$ increases, it is clear that the magnitude of the cross-section is mainly determined by the kinematical region around $`q_{min}`$. As a direct consequence of the kinematics the nuclear formfactor will therefore strongly favour the production of $`\rho `$-mesons lighter than $`m_\rho =0.768`$ GeV, whereas the spectralfunction favours $`\rho `$-mesons with a mass around $`m_\rho `$. Thus one expects that the shape of $`\frac{d\sigma }{dm}`$ is governed by an interplay between spectralfunction and formfactor and that two peaks will show up in the spectrum. For the same reason the coherent photoproduction is very sensitive to medium modifications of the $`\rho `$-meson. The cross-section for a $`\rho `$-meson whose mass is reduced in the nuclear medium will be substantially larger than in the vacuum-case. If, on the other hand, the major effect of the medium is a broadening of the $`\rho `$, the cross-section should be reduced due to absorptive effects. In fig.6 $`\frac{d\sigma }{dm}`$ for the production of dileptons via vector-mesons is shown for different medium-scenarios at a photon energy of $`0.85`$ GeV. The left plot contains only the contribution from the $`\rho `$-meson to the dilepton-spectrum. The results are in line with the previous discussion. Two peaks can be found in the spectrum at $`m\approx 0.77`$ GeV and at $`m\approx 0.55`$ GeV. Furthermore, a lower $`\rho `$-mass strongly enhances the cross-section whereas the resonance-hole model, which predicts a strong broadening of the $`\rho `$, gives smaller results. The plot on the right shows $`\frac{d\sigma }{dm}`$ with the photon included as well. The spectralfunction of the photon enhances the contribution at low masses.
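The interplay of $`q_{min}`$ and the formfactor can be illustrated with a short numerical sketch. It uses the recoil-free estimate $`q_{min}\approx m^2/(2E_\gamma )`$ and a crude Gaussian formfactor, $`F(q)=\mathrm{exp}(-q^2r_{rms}^2/6)`$ with $`r_{rms}=2.47`$ fm for $`{}^{12}C`$; both are simplifying assumptions made here for illustration, not ingredients of the full calculation.

```python
import numpy as np

# Weight of the formfactor-limited phase space, int_{q_min}^{q_max} dq q |F|^2,
# for producing a state of mass m at photon energy E_gamma (recoil neglected).
hbarc = 0.19733            # GeV fm
r_rms = 2.47               # fm, assumed 12C rms charge radius
E_gamma = 0.85             # GeV

def F(q):                  # crude Gaussian form factor, q in GeV
    return np.exp(-(q / hbarc)**2 * r_rms**2 / 6.0)

def weight(m, q_max=0.6):
    q_min = m**2 / (2.0 * E_gamma)
    q = np.linspace(q_min, q_max, 2000)
    return np.trapz(q * F(q)**2, q)

for m in (0.5, 0.6, 0.768):
    print("m = %.3f GeV   q_min = %.3f GeV   weight = %.2e" %
          (m, m**2 / (2.0 * E_gamma), weight(m)))
```

The weight drops by more than two orders of magnitude between $`m=0.5`$ GeV and $`m=0.768`$ GeV, which is the origin of the low-mass peak discussed above.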
Besides, the decay width of a virtual photon into dileptons has a different mass dependence than that of a $`\rho `$-meson ($`\mathrm{\Gamma }_\gamma \propto \frac{1}{m^3}`$, but $`\mathrm{\Gamma }_\rho \propto m`$). Both effects lead to a strong enhancement of the cross-section at low masses. However, various medium-modifications of the $`\rho `$-meson still lead to quite different results for masses above $`0.6`$ GeV. The results shown so far did not take into account the polarization of the vector-particles. In view of the very different momentum dependence of the selfenergy for transverse and longitudinal $`\rho `$-mesons in the resonance-hole model it is tempting to look at both polarizations separately and thus to turn the momentum dependence directly into an observable. We find that the ratio $`R=\frac{d\sigma _{tr}}{dm}/\frac{d\sigma _{long}}{dm}`$ is of particular interest. In fig.7 we show $`R`$ for the two selfenergies discussed above and for the vacuum case. The resonance-hole model leads to a strong enhancement of $`R`$ in the mass region around $`0.6`$ GeV in comparison to the vacuum case. Since the tadpole-selfenergy is identical for transverse and longitudinal polarizations and since $`R`$ is proportional to the ratio of both selfenergies, a simple mass-shift scenario gives exactly the same results as one would get for a $`\rho `$-meson without any medium modification. ## Summary & Outlook In a calculation of the $`\rho `$-selfenergy in nuclear matter we found that the excitation of the $`D_{13}(1520)`$-resonance in $`\rho N`$ scattering is of great importance for the in-medium properties of the $`\rho `$-meson. As a possibility to obtain more information about the $`\rho `$ in nuclear matter we propose the coherent photoproduction of vector mesons. It was demonstrated that the production rates are quite sensitive to different in-medium scenarios for the $`\rho `$-meson. Furthermore, by looking at the polarization of the vector-meson one can obtain valuable information about the momentum dependence of the selfenergy of the $`\rho `$. Further work on this subject will include the calculation of a background contribution, the Bethe-Heitler process , to the dilepton spectrum, a more refined version of the potential $`𝒱`$ which is consistent with the selfenergy used, and a calculation of the $`\rho `$-selfenergy in the nucleus rather than in nuclear matter. ## Acknowledgements This work was supported by BMBF, GSI Darmstadt and DFG.
# Hierarchies without Symmetries from Extra Dimensions ## 1 Introduction The usual way of organizing our thinking about physics beyond the Standard Model (SM) is the effective field theory paradigm: all operators consistent with the symmetries are present in the theory, with higher-dimension operators suppressed by powers of the ultraviolet cutoff. The SM itself provides an exception to this expectation: the Yukawa couplings for all the fermions other than the top quark are much smaller than $`𝒪(1)`$. This does not lead to any fine-tuning problems since small Yukawa couplings are technically natural. Nevertheless, we are normally led to suspect that the fermion mass hierarchy is controlled by (weakly broken) flavor symmetries operative at shorter distances. Similar issues surround the question of proton decay in extensions of the SM, especially when there is new physics at the TeV scale. Once again, some symmetry is normally invoked to forbid dangerous 1/(TeV) suppressed interactions mediating proton decay. Furthermore, imposing global symmetries on low-energy effective theories, for instance, stabilizing the proton by declaring that the low-energy theory respects baryon number, is widely considered to be unsatisfactory given the lore that black-holes/wormholes violate all non-gauged symmetries. This seems particularly problematic for theories where the fundamental Planck scale is lowered close to the TeV scale , and suggests that some sort of continuous or discrete gauge symmetry is required to adequately suppress proton decay. In this paper, we will show that all of this lore can easily and generically be violated in theories where the SM fields are constrained to live on a wall in $`n`$ extra dimensions, where gravity and perhaps other SM singlet fields are free to propagate. We will construct a simple model where our wall is slightly thick in one of the extra dimensions. The wall will have interesting sub-structure: while the Higgs and SM gauge fields are free to propagate inside it, the SM fermions are “stuck” at different points in the wall, with wave functions given by narrow Gaussians as shown in Figure 1. Without imposing any flavor symmetries on the short-distance theory, we will see that the long-distance 4-dimensional theory can naturally have exponentially small Yukawa couplings, arising from the small overlap between left- and right-handed fermion wave functions. Similarly, without imposing any symmetries to protect against proton decay, the proton decay rate can be exponentially suppressed to safety if the quarks and leptons are localized at different ends of the wall <sup>*</sup><sup>*</sup>*Our approach to the fermion mass hierarchy is similar in spirit to the one in . For other approaches to suppressing Yukawa couplings and proton decay, see .. We emphasize that there is nothing fine-tuned about this from the point of view of the low-energy 4-dimensional theory; all the exponentially small couplings are technically natural. However, our examples violate the usual intuition that small couplings in a low-energy theory must be explained by symmetries in the high-energy theory. Instead, small couplings arise from the location and geometry of fermion fields stuck at different points in the extra dimensions, with no symmetries in the high-energy theory whatsoever. Note that this mechanism of separating fermions in an extra dimension is already being used to preserve chiral symmetry on the lattice in Kaplan’s domain wall fermions .
Lattice simulations show that chiral symmetry is protected very effectively by separating the left- and right-handed components of the fermions in the 5’th dimension. If the wall thickness $`L`$ is close to the TeV scale, which is natural in theories with very low fundamental Planck scale, the mechanisms suggested in this paper can give rise to dramatic signals at future colliders. Since the SM gauge fields can only propagate inside the wall, $`L`$ effectively acts as the size of the extra dimensions for them<sup>†</sup><sup>†</sup>†Note that the dimensions where the gauge fields propagate need not be orthogonal to the large dimensions in which only gravity propagates; the gauge fields can just be restricted to live in a smaller part of the gravitational dimensions. The possibility of TeV sized extra dimensions with KK excitations for the SM gauge fields was first considered by Antoniadis .. Therefore, at energies above $`L^{-1}`$, “Kaluza-Klein” excitations (the higher harmonics of a particle in a box) of the gauge fields can be produced, and can scan the wall substructure. In particular, while the lowest excitations of the gauge fields (which we identify as the usual 4-d SM gauge fields) have flat wave functions throughout the wall and couple with standard strength to all the SM fermions, the KK excitations couple with non-universal strength to the fermions stuck at different points in the wall. For instance, if some of the fermions are stuck at special points (say the center of the wall), KK excitations of e.g. the photon can be baryophobic or leptophobic. More generally, measurements of the non-universal couplings of KK excitations to SM fermions can pin down their geometrical arrangement in the thick wall. We emphasize that our prediction of non-universal couplings of the SM fermions to gauge and Higgs fields is model-independent: it only depends on the fact that the fermions are stuck at different points in the extra dimensions. Of course, the values of the different couplings are model-dependent and can be used to distinguish between models. In Section 2 we describe an explicit field theory mechanism which we use to construct a setup as outlined above; we discuss how to localize a single chiral fermion to defects in higher dimensions and then generalize to several fermions localized at different points in the vicinity of the same defect. In Section 3 we derive the exponentially small couplings which result from our framework and demonstrate how the scenario can explain the SM fermion mass hierarchy and suppress proton decay. We also comment on neutrino masses. Section 4 contains a brief discussion of experimental signatures resulting from the non-universal couplings of KK gauge fields. For example, our KK fields make a contribution to atomic parity violation with the correct sign to explain the discrepancies between the SM prediction and the most recent experimental results . Our conclusions are drawn in section 5. ## 2 Localizing chiral fermions ### 2.1 One chiral fermion in 5 dimensions For simplicity we limit ourselves to constructions with one extra dimension. Generalizations to higher dimensions are equally interesting and can be analyzed similarly. Localizing fields in the extra dimension necessitates the breaking of higher-dimensional translation invariance. This is accomplished in our construction of a thick wall by a spatially varying expectation value for a five-dimensional scalar field $`\mathrm{\Phi }`$ as shown in Figure 2.
We assume the expectation value to have the shape of a domain wall transverse to the extra dimension and centered at $`x_5=0`$. For example, such an expectation value could result from a $`𝐙_2`$ symmetric potential for $`\mathrm{\Phi }`$. Interactions with the fermions below break this symmetry and render the domain wall profile unstable, but the rate for tunneling to a constant expectation value can easily be suppressed to safety. We will now show that the Dirac equation for a five-dimensional fermion in the background of this scalar field has a zero mode solution which corresponds to a four-dimensional chiral fermion stuck at the zero of $`\mathrm{\Phi }`$ . A convenient representation for the $`4\times 4`$ gamma matrices in five dimensions is $$\gamma ^i=\left(\begin{array}{cc}0& \sigma ^i\\ \overline{\sigma }^i& 0\end{array}\right),i=0\mathrm{..}3,\gamma ^5=i\left(\begin{array}{cc}-\mathrm{𝟏}& 0\\ 0& \mathrm{𝟏}\end{array}\right).$$ (1) As it will be useful in the following sections, we record below the two different Lorentz invariant fermion bilinears in 5 dimensions $$\overline{\mathrm{\Psi }}_1\mathrm{\Psi }_2,\mathrm{\Psi }_1^TC_5\mathrm{\Psi }_2$$ (2) where $$C_5=\gamma ^0\gamma ^2\gamma ^5=\left(\begin{array}{cc}ϵ& 0\\ 0& ϵ\end{array}\right)\text{in the Weyl basis}.$$ (3) The first is the usual Dirac bilinear, while the second is the Majorana bilinear which generalizes the familiar 4-dimensional expression, where instead of $`C_5`$ we have $`C_4=\gamma ^0\gamma ^2`$. The action for a five-dimensional fermion $`\mathrm{\Psi }`$ coupled to the background scalar $`\mathrm{\Phi }`$ is then $$𝒮=\int d^4𝐱\,dx_5\,\overline{\mathrm{\Psi }}[i\not\partial _4+i\gamma ^5\partial _5+\mathrm{\Phi }(x_5)]\mathrm{\Psi }.$$ (4) Here the coordinates of our $`3+1`$ dimensions are represented by $`𝐱`$ whereas the fifth coordinate is $`x_5`$; five-dimensional fields are denoted with capital letters whereas four-dimensional fields will be lower case. This Dirac operator is separable, and it is convenient to expand the $`\mathrm{\Psi }`$ fields in a product basis $$\mathrm{\Psi }(𝐱,x_5)=\underset{n}{\sum }\langle x_5|Ln\rangle P_L\psi _n(𝐱)+\underset{n}{\sum }\langle x_5|Rn\rangle P_R\psi _n(𝐱)$$ $$\overline{\mathrm{\Psi }}(𝐱,x_5)=\underset{n}{\sum }\overline{\psi }_n(𝐱)P_R\langle Ln|x_5\rangle +\underset{n}{\sum }\overline{\psi }_n(𝐱)P_L\langle Rn|x_5\rangle ,$$ (5) where the $`\psi _n`$ are arbitrary four-dimensional Dirac spinors and $`P_{L,R}=(1\pm i\gamma ^5)/2`$ are chiral projection operators. We use a bra-ket notation for the eigenfunctions which diagonalize the $`x_5`$-dependent part of the Dirac operator; the kets $`|Ln\rangle `$ and $`|Rn\rangle `$ are solutions of $$a^{\dagger }a|Ln\rangle =(-\partial _5^2+\mathrm{\Phi }^2-\dot{\mathrm{\Phi }})|Ln\rangle =\mu _n^2|Ln\rangle $$ $$aa^{\dagger }|Rn\rangle =(-\partial _5^2+\mathrm{\Phi }^2+\dot{\mathrm{\Phi }})|Rn\rangle =\mu _n^2|Rn\rangle ,$$ (6) respectively. Here $`\dot{\mathrm{\Phi }}\equiv \partial _5\mathrm{\Phi }`$, and $`a^{\dagger }`$ and $`a`$ are “creation” and “annihilation” operators defined as $$a=\partial _5+\mathrm{\Phi }(x_5),a^{\dagger }=-\partial _5+\mathrm{\Phi }(x_5).$$ (7) The $`|Ln\rangle `$ and $`|Rn\rangle `$ each form an orthonormal set and for non-zero $`\mu _n^2`$ are related through $`|Rn\rangle =(1/\mu _n)a|Ln\rangle `$ as can be verified easily from Eq.(6). The eigenfunctions with vanishing eigenvalues need not be paired however. It is no accident that we use simple harmonic oscillator (SHO) notation. For the special choice $`\mathrm{\Phi }(x_5)=2\mu ^2x_5`$ the operators $`a`$ and $`a^{\dagger }`$ become the usual SHO creation and annihilation operators up to a normalization factor $`\sqrt{2}\mu `$, and the operator $`a^{\dagger }a`$ becomes the number operator $`N`$.
The eigenkets are then related to the usual SHO kets by $`|Ln\rangle =|n\rangle `$ and $`|Rn\rangle =|n-1\rangle `$. The pairing of eigenfunctions also persists for general $`\mathrm{\Phi }`$. This follows most elegantly from considering the operators $`Q=a\gamma ^0P_L`$ and $`Q^{\dagger }=a^{\dagger }\gamma ^0P_R`$ which are the supercharges of an auxiliary supersymmetric quantum mechanics system with Hamiltonian $`H=\{Q,Q^{\dagger }\}`$. Then $`P_L|Ln\rangle `$ and $`P_R|Rn\rangle `$ are the “boson” and “fermion” eigenstates of $`H`$ respectively, and the equality of eigenvalues of $`|Ln\rangle `$ and $`|Rn\rangle `$ is the usual boson-fermion degeneracy of supersymmetric theories. Again, zero modes need not be paired which allows us to obtain chiral 4-d theories. While most of what follows applies also to the case of general $`\mathrm{\Phi }`$ we will find it convenient to use the SHO language. Expanding in $`|Ln\rangle `$ and $`|Rn\rangle `$ the action for a 5-d Dirac fermion eq. (4) can be re-written in terms of a 4-d action for an infinite number of fermions $$S=\int d^4𝐱\left[\overline{\psi }_Li\not\partial _4P_L\psi _L+\overline{\psi }_Ri\not\partial _4P_R\psi _R+\underset{n=1}{\overset{\mathrm{}}{\sum }}\overline{\psi }_n(i\not\partial _4+\mu _n)\psi _n\right].$$ (8) The first two terms correspond to 4-d two-component chiral fermions; they arise from the zero modes of Eq.(6). The third term describes an infinite tower of Dirac fermions corresponding to the modes with non-zero $`\mu _n`$ in the expansion. The zero mode wave functions are easily found by integrating $`a|L0\rangle =0`$ and $`a^{\dagger }|R0\rangle =0`$. The solutions $$\langle x_5|L,0\rangle \propto \mathrm{exp}\left[-\int _0^{x_5}\mathrm{\Phi }(s)𝑑s\right]\text{and}\langle x_5|R,0\rangle \propto \mathrm{exp}\left[+\int _0^{x_5}\mathrm{\Phi }(s)𝑑s\right],$$ (9) are exponentials with support near the zeros of $`\mathrm{\Phi }`$. In the infinite system that we are considering these modes cannot both be normalizable<sup>§</sup><sup>§</sup>§Of course, we will be working in finite volume in the end; then the other mode is normalizable as well, but it is localized at the other end of the extra dimension. The existence of this other mode is dependent on boundary conditions.. It is easy to see that $`|b,0\rangle `$ is normalizable if $`\mathrm{\Phi }(-\mathrm{\infty })<0`$ and $`\mathrm{\Phi }(+\mathrm{\infty })>0`$ as in Figure 2, and if $`\mathrm{\Phi }(-\mathrm{\infty })>0`$ and $`\mathrm{\Phi }(+\mathrm{\infty })<0`$ then the mode $`|f,0\rangle `$ is normalizable. In the other cases there is no normalizable zero mode. For definiteness let us now specialize to the SHO. Then $$\langle x_5|L,0\rangle =\frac{\mu ^{1/2}}{(\pi /2)^{1/4}}\mathrm{exp}\left[-\mu ^2x_5^2\right],$$ (10) and $`\langle x_5|R,0\rangle `$ is not normalizable. Thus the spectrum of four dimensional fields contains one left-handed chiral fermion in addition to an infinite tower of massive Dirac fermions. The shape of the wave function of the chiral fermion is Gaussian, centered at $`x_5=0`$. Note that coupling $`\mathrm{\Psi }`$ to $`-\mathrm{\Phi }`$ would have rendered $`\langle x_5|R,0\rangle `$ normalizable and we would have instead localized a massless right-handed chiral fermion. For clarity, let us write the full wave function of the massless chiral fermion in the chiral basis $$\mathrm{\Psi }(𝐱,x_5)=\left(\begin{array}{c}\langle x_5|L,0\rangle \psi (𝐱)\\ 0\end{array}\right).$$ (11) ### 2.2 Many chiral fermions We can easily generalize Eq. (4) to the case of several fermion fields.
We simply couple all 5-d Dirac fields to the same scalar $`\mathrm{\Phi }`$ $$𝒮=\int d^5x\underset{i,j}{\sum }\overline{\mathrm{\Psi }}_i[i\not\partial _5+\lambda \mathrm{\Phi }(x_5)-m]_{ij}\mathrm{\Psi }_j.$$ (12) Here we allowed for general Yukawa couplings $`\lambda _{ij}`$ and also included masses $`m_{ij}`$ for the fermion fields. Mass terms for the five-dimensional fields are allowed by all the symmetries and should therefore be present in the Lagrangian. In the case that we will eventually be interested in – the standard model – the fermions carry gauge charges. This forces the couplings $`\lambda _{ij}`$ and $`m_{ij}`$ to be block-diagonal, with mixing only between fields with identical gauge quantum numbers. For simplicity we will set $`\lambda _{ij}=\delta _{ij}`$ in this paper; then $`m_{ij}`$ can be diagonalized with eigenvalues $`m_i`$. Finding the massless four-dimensional fields is completely analogous to the single fermion case of the last section. Each 5-d fermion $`\mathrm{\Psi }_i`$ gives rise to a single 4-d left-handed chiral fermion. Again, the wave functions in the 5th coordinate are Gaussian, but they are now centered around the zeros of $`\mathrm{\Phi }-m_i`$. In the SHO approximation this is at $`x_5^i=m_i/(2\mu ^2)`$. Thus, at energies well below $`\mu `$ the five-dimensional action above describes a set of non-interacting four-dimensional chiral fermions localized at different 4-d “slices” in the 5th dimension. Note that while the overall position of the massless fermions in the $`x_5`$-direction is a dynamical variable (the location of the zero of $`\mathrm{\Phi }`$), the relative positions of the various fermions are fixed by the $`m_i`$. Thus even when we turn on interactions between the massless fields, the relative distances which control the size of coupling constants in the effective 4-d theory stay fixed. We now exhibit the field content of the 5-d theory which can reproduce the chiral spectrum of the 4-d SM as localized zero modes. First note that by choosing all $`\lambda `$’s positive we have localized only left-handed chiral Weyl spinors. That implies that we will construct the SM using only left-handed spinors; the right-handed fields are represented by their charge conjugates $`\overline{\psi }^c`$. Then the SM arises simply by choosing 5-d Dirac spinors $`(Q,U^c,D^c,L,E^c)`$ transforming like the left-handed SM Weyl fermions $`(q,u^c,d^c,l,e^c)`$. We also briefly mention how we imagine confining gauge fields to a (3+1)-dimensional wall. A field-theoretic mechanism for localizing gauge fields was proposed by Dvali and Shifman and was later extended and applied in (see also ). The idea is to arrange for the gauge group to confine outside the wall; the flux lines of any electric sources turned on inside the wall will then be repelled by the confining regions outside and forced to propagate only inside the wall. This traps a massless gauge field on the wall. Since the gauge field is prevented from entering the confined region, the thickness $`L`$ of the wall acts effectively as the size of the extra dimensions in which the gauge fields can propagate. Notice that in a picture like this, the gauge couplings will exhibit power law running above the scale $`L^{-1}`$, and so the scenario of for gauge coupling unification may be implemented, without the presence of any new dimensions beyond the large gravitational dimensions.
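Before turning to the induced couplings, the localization pattern can be sketched numerically: each 5-d mass shifts a Gaussian zero mode to $`x_5^i=m_i/(2\mu ^2)`$, and the pairwise overlaps fall off as $`e^{-\mu ^2r^2/2}`$ with the separation $`r`$, as will be derived in the next section. The mass assignments below are illustrative assumptions only.

```python
import numpy as np

# Zero modes in the SHO approximation: normalized Gaussians of width 1/mu
# centered at x5_i = m_i/(2 mu^2).  The 5-d masses are illustrative.
mu = 1.0
masses = {"Q": 0.0, "U^c": 1.0, "E^c": 8.0, "L": 10.0}   # in units of mu
pos = {f: m / (2.0 * mu**2) for f, m in masses.items()}

def phi(x, x0):            # Eq. (10) shifted to x0
    return (mu**0.5 / (np.pi / 2.0)**0.25) * np.exp(-mu**2 * (x - x0)**2)

x = np.linspace(-20.0, 20.0, 40001)
for a in masses:
    for b in masses:
        if a < b:
            overlap = np.trapz(phi(x, pos[a]) * phi(x, pos[b]), x)
            r = abs(pos[a] - pos[b])
            print("%3s-%3s  r = %4.1f  overlap = %.2e  exp(-mu^2 r^2/2) = %.2e"
                  % (a, b, r, overlap, np.exp(-mu**2 * r**2 / 2.0)))
```

The numerical overlaps agree with the Gaussian formula, and already modest separations produce coupling suppressions of many orders of magnitude.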
## 3 Exponentially small 4-d couplings In this section we present two examples of applications for our central result: exponentially small couplings from small wave function overlaps of fields which are separated in the fifth dimension. The two examples we consider are SM Yukawa couplings and proton decay. Since our exponential suppression factors dominate any power suppression we will not keep track of the various powers of scales which arise from matching 5-d to 4-d Lagrangians. ### 3.1 Yukawa couplings In this section we apply our mechanism to generating hierarchical Yukawa couplings in four dimensions. Concentrating on only one generation and the lepton sector for the moment, we start with the five-dimensional fermion fields with action $$𝒮=\int d^5x\,\overline{L}[i\not\partial _5+\mathrm{\Phi }(x_5)]L+\overline{E}^c[i\not\partial _5+\mathrm{\Phi }(x_5)-m]E^c+\kappa HL^TC_5E^c.$$ (13) where $`C_5`$ was defined in Eq. (3). As discussed in the previous sections, we find a left-handed massless fermion $`l`$ from $`L`$ localized at $`x_5=0`$ and $`e^c`$ from $`E^c`$ localized at $`x_5=r\equiv m/(2\mu ^2)`$. For simplicity, we will assume that the Higgs is delocalized inside the wall. We now determine what effective four-dimensional interactions between the light fields result from the Yukawa coupling in eq. (13). To this end we expand $`L`$ and $`E^c`$ as in eq. (5) and replace the Higgs field $`H`$ by its lowest Kaluza-Klein mode which has an $`x_5`$-independent wave function. We obtain for the Yukawa coupling $$𝒮_{Yuk}=\int d^4𝐱\,\kappa \,h(𝐱)l(𝐱)e^c(𝐱)\int 𝑑x_5\varphi _l(x_5)\varphi _{e^c}(x_5).$$ (14) Here $`\varphi _l(x_5)`$ and $`\varphi _{e^c}(x_5)`$ are the zero-mode wave functions for the lepton doublet and singlet respectively. $`\varphi _l`$ is a Gaussian centered at $`x_5=0`$ whereas $`\varphi _{e^c}`$ is centered at $`x_5=r`$. The overlap of Gaussians is itself a Gaussian and we find $$\int 𝑑x_5\varphi _l(x_5)\varphi _{e^c}(x_5)=\frac{\sqrt{2}\mu }{\sqrt{\pi }}\int 𝑑x_5e^{-\mu ^2x_5^2}e^{-\mu ^2(x_5-r)^2}=e^{-\mu ^2r^2/2}.$$ (15) This result is in agreement with the intuitive expectation from Figure 2. Any coupling between the two chiral fermions is necessarily exponentially suppressed because the two fields are separated in space. The coupling is then proportional to the exponentially small overlap of the wave functions. Note that we did not impose any chiral symmetries in the fundamental theory to obtain this result: the coupling $`\kappa `$ can violate the electron chiral symmetry by $`O(1)`$. Even with chiral symmetry maximally broken in the fundamental theory, we obtain an approximate chiral symmetry in the low-energy, 4-d effective theory. ### 3.2 Long live the proton Proton decay places a very stringent constraint on most extensions of the standard model. Unless a symmetry can be imposed to forbid either baryon or lepton number violation, proton decay forces the scale of new physics to be extremely high. In particular one might be tempted to conclude that proton decay kills all attempts to lower the fundamental Planck scale $`M_{*}`$ significantly beneath the GUT scale, unless continuous or discrete gauge symmetries are invoked. We now show that these no-go theorems are very elegantly evaded by separating wave functions in the extra dimensions. Consider for simplicity a one-generation model in five dimensions where the standard model fermions are again localized in the $`x_5`$ direction by coupling the five-dimensional fields to the domain wall scalar $`\mathrm{\Phi }`$.
Assume that all quark fields are localized near $`x_5=0`$ whereas the leptons are near $`x_5=r`$ as depicted schematically in Figure 1. We allow the five-dimensional theory to violate both baryon number and lepton number maximally, and we assume that we can parameterize this violation by local operators<sup>¶</sup><sup>¶</sup>¶Non-local operators which result from integrating out massive bulk fields are discussed in the next subsection.. Then we can expect the following dangerous looking five-dimensional baryon and lepton number violating operators $$𝒮\sim \int d^5x\frac{(Q^TC_5L)^{\dagger }(U^{cT}C_5D^c)}{M_{*}^3}$$ (16) To obtain the corresponding four-dimensional proton decay operator we simply replace the five-dimensional fields by the zero mode fields and calculate the wave function overlap in $`x_5`$. The result is $$𝒮\sim \int d^4𝐱\,\delta \times \frac{(ql)^{\dagger }(u^cd^c)}{M_{*}^2}$$ (17) where $$\delta \sim \int 𝑑x_5\left[e^{-\mu ^2x_5^2}\right]^3e^{-\mu ^2(x_5-r)^2}\sim e^{-\frac{3}{4}\mu ^2r^2}.$$ (18) Already for a separation of $`\mu r=10`$ we obtain $`\delta \sim 10^{-33}`$ which renders these operators completely safe even for $`M_{*}\sim 1`$ TeV. Thus we imagine a picture where quarks and leptons are localized near opposite ends of the wall so that $`r\sim L`$. Once again, even if baryon and lepton number are maximally broken in the 5-d theory at short distances, the coupling generated in the 4-d theory is exponentially suppressed and can be harmless. In the following subsection, we present an alternate way of understanding the suppression of proton decay which also shows that corrections to this picture, either coming from quantum loops or exchange of new degrees of freedom, can be harmless. ### 3.3 Long live the proton, again There is an alternative way of understanding the $`e^{-(\mu r)^2}`$ suppression which is physically transparent and shows that all radiative corrections are also suppressed by the same exponential factor. Even though it applies equally well to the case of Yukawa couplings we will only describe the analysis for proton decay here. In order to decay the proton using the local $`QQQL`$ interaction, the quarks and leptons must propagate into the bulk of the wall, away from the points where they are massless (see Fig 4). Because e.g. the quarks are getting more massive as they move into the bulk, the propagator from the plane where they live into the bulk is suppressed. Intuitively, for each slice between $`x_5`$ and $`x_5+\delta x_5`$, a Yukawa propagator $`e^{-m(x_5)\delta x_5}`$ must be paid. Therefore, the propagator to reach a final point $`x_{*}`$ is proportional to $$\underset{\text{slices}}{\prod }e^{-m(x_5)\delta x_5}=e^{-\int ^{x_{*}}m(x_5)𝑑x_5}=e^{-\mu ^2x_{*}^2}$$ (19) for $`m(x_5)=2\mu ^2x_5`$. This is exactly the wave function for the zero mode evaluated at $`x_{*}`$, as is intuitively expected and can also be seen more formally. In order to evaluate the tree level diagram of Figure 4, we have to integrate over the interaction vertex, yielding for the coefficient of the proton decay operator $$\delta \sim \int 𝑑x_5\left(\varphi _q(x_5)\right)^3\varphi _l(x_5),$$ (20) precisely reproducing the result from our earlier “overlap of wave functions” picture. This approach also makes clear why higher order corrections do not significantly change the result. Indeed, the most general diagram for proton decay takes the form of Figure 5: the effects of all interactions are encoded in a modified propagator into the bulk and a modified interaction vertex.
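As a numerical aside, the overlap integral of Eq. (20) with free Gaussian zero modes can be checked directly against the estimate of Eq. (18); a minimal sketch, with normalization prefactors dropped as in the text:

```python
import numpy as np

# delta = int dx5 [exp(-mu^2 x5^2)]^3 * exp(-mu^2 (x5-r)^2), compared with
# the Gaussian-integration result exp(-3/4 mu^2 r^2) of Eq. (18).
mu = 1.0
x = np.linspace(-30.0, 40.0, 200001)
for r in (2.0, 5.0, 10.0):
    integrand = np.exp(-mu**2 * x**2)**3 * np.exp(-mu**2 * (x - r)**2)
    delta = np.trapz(integrand, x)
    print("mu*r = %4.1f   delta = %.3e   exp(-3/4 mu^2 r^2) = %.3e"
          % (r, delta, np.exp(-0.75 * mu**2 * r**2)))
```

The two columns track each other up to the $`O(1)`$ prefactor of the Gaussian integral, confirming the $`10^{-33}`$ estimate for $`\mu r=10`$.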
The modified propagator has the simple interpretation of being the wave function of the zero mode in the interacting theory. The exact form of the modified vertex is unknown. However, the vertex will still be point-like on scales of order $`\mu ^{-1}`$, because all the interactions modifying it are mediated by particles of mass $`\mu `$, which can only smear the vertex on scales of order $`\mu ^{-1}`$. Since the propagators involved are needed at distances $`L\sim 10\mu ^{-1}`$, the vertex in Figure 5 is still effectively point-like, and so the picture of the suppression of proton decay through exponentially small wave function overlaps persists, if we replace the (free) Gaussian zero-mode wave functions by the true interacting ones. So far we have considered proton decay operators induced by short-distance physics above the cutoff $`M_{*}`$; but what about effects coming from integrating out fields possibly lighter than $`M_{*}`$? In particular, we may worry that while the separation of quarks and leptons suppresses higher-dimensional operators linking them, operators involving only quarks on one side and violating baryon number, or leptons on the other side violating lepton number, are not suppressed. If a light field of mass $`m`$ freely propagates inside the wall, this may induce operators violating both $`B`$ and $`L`$ suppressed only by $`e^{-mL}`$ (see Figure 6). However, in order to specifically induce proton decay, this light field would have to be fermionic. In particular, no gauge or Higgs boson exchanges can ever give rise to proton decay. If we make the single assumption that all delocalized fermions have masses of order the cutoff $`M_{*}`$, then their exchange can at most give $`e^{-M_{*}L}`$ contributions which are comparable to the $`e^{-(\mu L)^2}`$ effects we have considered. Note that this argument implies that grand unification at a scale as low as $`\mu `$ or $`M_{*}`$ does not lead to rapid proton decay, as long as there are no delocalized fermionic fields with masses below $`M_{*}`$. We cannot resist the temptation to speculate that the same vacuum expectation values (VEVs) which break the GUT symmetry near the scale $`M_{*}`$ may also be responsible for the separation of the SM fermions in the 5’th dimension. For example the $`m_{ij}`$ of Eq. (12) could stem from the VEV of a GUT symmetry breaking field which points in the $`(B-L)`$ direction. Then the SM fermions would be split according to their baryon and lepton numbers. A VEV in the hypercharge direction would arrange the fermions according to their hypercharge. We have seen that without imposing any symmetries on the underlying theory, proton decay can be adequately suppressed if quarks and leptons are stuck at different points in extra dimensions. One might then wonder what happened to the general lore that black-hole/wormhole effects violate all non-gauged symmetries and are therefore dangerous. In evaluating this argument, we have to recall that it was Planck-scale sized wormholes giving the supposedly $`O(1)`$ symmetry violating effects. Since these have a mass above the cutoff, all their effects can be encoded in terms of local operators suppressed by $`M_p\sim M_{*}`$; and indeed, we presumed that such “dangerous” operators were really present in the theory. However, their effects are harmless because the quarks and leptons are stuck at different points, yet have to be dragged close to each other for the dangerous operators to be operative.
Of course, in the effective theory at distances longer than $`L`$, the quarks and leptons look like they are on top of each other, so the above suppression mechanism does not seem to apply. However, only wormholes larger than $`L`$ are admissible in this effective theory, and any effect they induce will be exponentially suppressed by their action $$e^{-S},S\sim \int d^{4+n}x\,M_p^{(2+n)}\sim (M_pL)^{(2+n)}$$ (21) which is a far larger suppression than the effects we have computed $`e^{-(\mu L)^2}\sim e^{-M_{*}L}`$. The largest possible effect which might arise from wormholes would come from long and skinny wormholes which stretch from the proton to the lepton; but even these are completely safe as they are suppressed by at least $`e^{-(M_pL)}`$. ### 3.4 Neutrino masses Separating quarks and leptons at different points in extra dimensions can easily suppress proton decay without imposing any symmetries on the high-energy theory. On the other hand, as already mentioned, operators violating baryon and lepton number need not be suppressed. In fact, in the absence of any symmetries in the high-energy theory, a Majorana neutrino mass operator $$𝒮\sim \int d^5x\frac{L^TC_5LH^{\dagger }H^{\dagger }}{M_{*}^2}$$ (22) turns into an unsuppressed Majorana mass term for the 4-d zero mode $$𝒮\sim \int d^4𝐱\frac{llh^{\dagger }h^{\dagger }}{M_{*}}$$ (23) since the overlap of $`l`$’s wave function with itself is not small. There are a number of ways of resolving this problem; we will just mention the obvious strategy of adding a right-handed neutrino and gauging $`(B-L)`$. Of course $`(B-L)`$ must be broken in such a way as to not allow large Majorana masses after breaking. In our framework this would be most naturally achieved with a $`(B-L)`$ breaking VEV which is localized within the wall but at some distance from the lepton field $`l`$ so that the Majorana neutrino mass is exponentially suppressed<sup>‖</sup><sup>‖</sup>‖$`(B-L)`$ could also be broken everywhere within the wall if a discrete subgroup remains preserved, or it could be broken on a distant wall if $`(B-L)`$ is gauged in the large bulk where gravity propagates . For another approach to neutrino masses see .. In addition one would also get small Dirac neutrino masses, with the tiny Yukawa couplings originating from the overlap between right- and left-handed neutrino wave functions. ### 3.5 Summary of scales Let us close by giving an account of the various scales we are now imagining. Recall that at the edge of the wall, the fermion mass $`\mathrm{\Phi }`$ is $`\mu ^2L\sim 10\mu `$, and must not be larger than the ultraviolet cutoff $`M_{*}`$. In fact we will take them to be comparable. Therefore, we have three scales in the problem: the UV cutoff $`M_{*}`$, $`\mu `$ and the wall thickness scale $`L^{-1}`$, with magnitudes related roughly as $$M_{*}\sim 10\mu \sim 100L^{-1}$$ (24) Since we cannot push $`L^{-1}`$ significantly below $`\sim `$ TeV, the fundamental scale $`M_{*}`$ is bounded below by $`\sim 100`$ TeV. This is actually desirable from another point of view: in the absence of flavor symmetries, it is difficult to protect against flavor changing neutral currents without pushing the scale of higher-dimension operators to $`\sim 100`$ TeV. Notice that even though the theory becomes effectively 5-dimensional above $`L^{-1}`$, the theory is perturbative up to the UV cutoff $`M_{*}`$. From the 4-dimensional viewpoint, we have $`N_{KK}\sim (M_{*}L)/2\pi \sim 10`$–$`100`$ gauge and Higgs field KK modes, and so the effective expansion parameter is $$\frac{h_4^2}{16\pi ^2}\times N_{KK}\sim O(1)$$ (25) where $`h_4`$ is a generic low-energy gauge coupling or top Yukawa coupling.
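A quick numerical restatement of this counting, with an assumed generic coupling $`h_4\sim 1`$:

```python
import numpy as np

# Scale hierarchy M_* ~ 10 mu ~ 100/L, and the 4-d expansion parameter of
# Eq. (25).  The coupling h4 ~ 1 is an assumed generic value.
L_inv = 1.0                       # TeV
M_star = 100.0 * L_inv            # TeV
mu = M_star / 10.0                # TeV
N_KK = (M_star / L_inv) / (2.0 * np.pi)
h4 = 1.0
print("mu = %.0f TeV   N_KK ~ %.0f   h4^2 N_KK / 16 pi^2 ~ %.2f"
      % (mu, N_KK, h4**2 * N_KK / (16.0 * np.pi**2)))
```

For $`h_4\sim 1`$ the parameter comes out near 0.1, rising toward unity as the coupling or the number of modes grows; the theory is perturbative yet not far from strong coupling at the cutoff.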
From the higher-dimensional point of view, the theory is on the edge of being strongly coupled at the UV cutoff $`M_{*}`$. Finally, we wish to give a rough idea of the sort of suppressions which are generated by our $`e^{-c(\mu r)^2}`$ size effects, with $`c=1/2`$ for Yukawa couplings and $`c=3/4`$ for proton decay: | $`\mu r`$ | 0 | 1 | 2 | 3 | 4 | 5 | 7 | … | 10 | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | exp$`(-c\mu ^2r^2)`$ | 1 | 1 | $`10^{-1}`$ | $`10^{-2}`$ | $`10^{-4}`$ | $`10^{-6}`$ | $`10^{-11}`$ | … | $`10^{-33}`$ | | | | $`\lambda _t`$ | | … | | $`\lambda _e`$ | … | | $`\mathrm{\Gamma }_{proton}`$ | It is attractive that for $`\mu r`$ just ranging between 1 and 10, we can get appropriate sizes for everything from the top Yukawa coupling $`\lambda _t`$ (for $`\mu r\sim 1`$), to the electron Yukawa $`\lambda _e`$ ($`\mu r\sim 5`$), to sufficient suppression for proton decay ($`\mu r\sim 10`$). ## 4 Cartography with gauge fields While the SM fermion fields are stuck at different points in the extra dimension, the gauge fields are totally delocalized, and we expect that we can probe the locations of the fermions using the gauge fields. Cartography of the SM fermions with gauge fields will become an experimental science to be performed at the LHC or NLC if the wall thickness is as large as $`1`$ TeV<sup>-1</sup>. To see how this works explicitly, consider a toy example with two 4-d chiral fermions $`\psi _1,\psi _2`$, transforming identically under a gauge group $`G`$, but stuck to different points $`s_1,s_2`$ in the extra dimensions. At distances larger than the width of their wavefunction in the extra dimensions, the dynamics that localizes the fermions is irrelevant; we can approximate their wavefunctions as delta functions or, what is the same, fix them to live on a 3-dimensional wall, while gauge fields freely propagate in the bulk. The effective coupling to gauge fields is fixed by gauge invariance to be $$𝒮\sim \int d^4𝐱\,\overline{\psi }_1\overline{\sigma }^\mu T^a\psi _1A_\mu ^a(𝐱,s_1)+\overline{\psi }_2\overline{\sigma }^\mu T^a\psi _2A_\mu ^a(𝐱,s_2)$$ (26) Let us Fourier expand $`A_\mu `$ as $$A_\mu (𝐱,x_5)=\frac{A_\mu ^{(0)}(𝐱)}{\sqrt{2}}+\underset{n=1}{\overset{\mathrm{}}{\sum }}A_\mu ^{(n)}(𝐱)\mathrm{cos}(k_nx_5)+\underset{n=1}{\overset{\mathrm{}}{\sum }}B_\mu ^{(n)}(𝐱)\mathrm{sin}(k_nx_5),$$ (27) where $`k_n=\frac{2\pi }{L}n`$. Here $`A_\mu ^{(0)}`$ corresponds to the massless 4-d gauge field, and the $`A_\mu ^{(n)}`$ and $`B_\mu ^{(n)}`$ are KK excitations with masses $`k_n`$. Inserting this expansion into Eq. (26) we find the effective couplings of the tower of 4-d KK gauge fields to the fermions. Note that since $`A_\mu ^{(0)}`$ has a flat wave function in the extra dimensions, it has the same coupling to both fermions as required by 4-dimensional gauge invariance. However the couplings of the fermions to the massive KK states, $`A`$ and $`B`$, are proportional to cosines and sines from the wave functions of the KK states at the locations of the fermions <sup>\**</sup><sup>\**</sup>\**Note that these couplings are valid for the KK modes with wavelength long compared to the fermion localization width; shorter wavelength KK modes can resolve the fermion wave function and so the delta function approximation for the fermion wave function is inadequate.. Suppose that we are sitting on the $`s`$-channel resonance for particle-antiparticle annihilation mediated by the $`n`$’th KK modes of $`A`$ and $`B`$.
Then the relative cross section for $`1^+1^{-}\rightarrow 1^+1^{-}`$ (and $`2^+2^{-}\rightarrow 2^+2^{-}`$) as calculated from the diagrams in Figure 7 will be different from $`1^+1^{-}\rightarrow 2^+2^{-}`$: $$\sigma (1^+1^{-}\rightarrow A^{(n)},B^{(n)}\rightarrow 1^+1^{-})\equiv \sigma _n$$ $$\sigma (2^+2^{-}\rightarrow A^{(n)},B^{(n)}\rightarrow 2^+2^{-})=\sigma _n$$ $$\sigma (1^+1^{-}\rightarrow A^{(n)},B^{(n)}\rightarrow 2^+2^{-})=\sigma _n\mathrm{cos}^2(k_n(s_1-s_2)),$$ (28) and this can be used to gain information on the distance between $`\psi _1,\psi _2`$ in the extra dimensions. Actually, there are subtleties in this analysis associated with the mechanism for localizing gauge fields. For example, in the Dvali-Shifman mechanism, confinement outside the wall forces specific boundary conditions $`F_{\mu 5}=0`$ at the edges of the wall . This then enforces $`\partial _5A_\mu (𝐱,x_5)=0`$ at $`x_5=0,L`$, and the $`B^{(n)}`$ are eliminated. In addition, the $`A^{(n)}`$ may now be periodic or anti-periodic, thus $`k_n=\frac{\pi }{L}n`$. The position dependence of KK gauge couplings raises the interesting possibility that some of the KK excitations of various fields may be leptophobic or baryophobic if the quarks or leptons sit at nodes of their wave functions. More generally, the cross sections above are modified to $$\sigma (1^+1^{-}\rightarrow A^{(n)}\rightarrow 1^+1^{-})=\sigma _n\mathrm{cos}^4(k_ns_1)$$ (29) $$\sigma (2^+2^{-}\rightarrow A^{(n)}\rightarrow 2^+2^{-})=\sigma _n\mathrm{cos}^4(k_ns_2)$$ (30) $$\sigma (1^+1^{-}\rightarrow A^{(n)}\rightarrow 2^+2^{-})=\sigma _n\mathrm{cos}^2(k_ns_1)\mathrm{cos}^2(k_ns_2)$$ (31) There are clearly other interesting possibilities arising from the non-universal couplings of SM fermions to the KK excitations. As one example, in our scenario for suppressing proton decay by separating quark and lepton wave functions, the non-standard coupling of the quarks and leptons to the KK modes has an interesting impact on atomic parity violation (APV). The latest experimental results indicate that the measured weak charge of the nucleus is lower than the SM expectation by $`2.5\sigma `$. If we had a conventional Kaluza-Klein tower at the $`\sim `$TeV scale, with standard couplings to quarks and leptons, this would enhance the SM contribution to APV. In our case, however, the situation can be different. If we impose the $`F_{\mu 5}=0`$ boundary conditions as stated above, then the first Kaluza-Klein excitation has the profile shown in Figure 8. Notice that the product of the quark and lepton couplings to the first KK excitation has the opposite sign to that in the SM, and gives a contribution to atomic parity violation that moves in the right direction. The sign of this effect is an inevitable consequence of our mechanism for suppressing proton decay, and the correct magnitude can be obtained if the wall thickness is $`\sim 1`$ TeV<sup>-1</sup>. ## 5 Conclusions In this paper we have shown that approximate symmetries can arise in a long-distance theory without any symmetry explanation in the underlying short-distance theory. Instead, even if symmetries are maximally broken at short distances, exponentially small couplings between different fields can result if they are “stuck” at slightly different points in extra dimensions. This opens a new arena for model-building, where a specific arrangement of the fermions in extra dimensions, and not familiar flavor symmetries, determines the fermion mass hierarchy. Furthermore, proton decay can be elegantly disposed of, even in theories with the fundamental cutoff close to the TeV scale, if quarks and leptons are separated from each other by a distance roughly a factor of 10 larger than their size in the extra dimensions.
If the effective size of this extra dimension, or equivalently our wall thickness, is close to the TeV scale, these ideas can be probed at the LHC and NLC. The smoking gun for our mechanism would be the detection of non-universality in the coupling of SM fermions to the KK excitations of the SM gauge fields. A detailed analysis of this non-universality could then be used to “map” the locations of the fermions in the extra dimensions. In closing we would like to emphasize that our mechanism of suppressing couplings from non-trivial wave functions is generically operative in higher dimensional theories with chiral fermions, as most such models obtain chiral matter from modes stuck to a defect in the higher dimensions. This defect may be a field theoretic domain wall in one extra dimension, a cosmic string in two extra dimensions, or a D-brane or orbifold fixed point in a string model (the $`e^{-r^2}`$ suppression of Yukawa couplings between twisted sector fields at different orbifold fixed points has been known for some time, see e.g. ). ## 6 Acknowledgements We thank Savas Dimopoulos, Lance Dixon, Gia Dvali, David Kaplan, John March-Russell, Gino Mirabelli, Shmuel Nussinov, Michael Peskin, Tom Rizzo and Eva Silverstein for discussions. Our work is supported by the Department of Energy under contract DE-AC03-76SF00515.
# Pulsar Radiation and Quantum Gravity ## 1 Introduction Quantum gravity may cause modification of the dispersion relation for photons at high energies. It has recently been suggested that certain quantum gravity models may lead to a first order correction to the dispersion relation which can be parameterized as $`\mathrm{\Delta }t=L\mathrm{\Delta }E/(cE_{QG})`$, where $`\mathrm{\Delta }t`$ is the magnitude of the travel time difference between two photons whose energies differ by $`\mathrm{\Delta }E`$ and that have traveled a distance $`L`$, and $`E_{QG}`$ is the energy scale of the dispersion effects (Amelino-Camelia et al. (1998)). To probe dispersion effects at high energy scales, accurate relative timing of nearly simultaneously produced photons of different energies which have traveled long distances is required. Use of sub-millisecond time structure of the keV photon flux of gamma-ray bursts at cosmological distances (Amelino-Camelia et al. (1998); Schaefer (1998)) and use of several minute time structure in TeV flares from AGN (Biller et al. (1998)) have been suggested. Here, we show that sub-millisecond timing of GeV emission from gamma-ray pulsars may also place useful constraints on the dispersion relation for photons at high energies. Below, we use existing gamma-ray data to determine the accuracy with which high energy pulsar emission can be timed and to place bounds on the energy scale for quantum gravity corrections to the speed of light. We then discuss how this limit might be improved by pulsar observations at higher energies in the near future. ## 2 Gamma-ray pulsations from the Crab We chose to analyze the energy dependence of pulse arrival times from the Crab pulsar as it has the largest ratio of distance to pulse period of the bright gamma-ray pulsars, thus maximizing the constraints which can be placed. Also, the pulses from the Crab are well aligned in time from radio waves, through optical and x-ray emission, to gamma-rays. Thus, it is likely that the photons of different energies are produced nearly simultaneously. We used data from the Energetic Gamma-Ray Experiment Telescope (EGRET) (Thompson et al. (1993)) of the Compton Gamma-Ray Observatory (CGRO). We extracted gamma-ray photon event lists from the CGRO public archive for observations pointed within $`40^{\circ }`$ of the Crab and then, using the program pulsar (version 3.2, available from the CGRO Science Support Center), selected events lying within the energy dependent 68% point spread function of EGRET and calculated the phase of each photon relative to the radio ephemeris of the Crab (Arzoumanian et al. (1992)). The pulse period of the Crab changed from 33.39 ms to 33.49 ms over the course of these observations. The radio timing must be corrected for the variable dispersion along the line of sight to the Crab. The accuracy of this correction is estimated to be 0.2 ms (Nice (1998)), consistent with previous estimates of the accuracy of the dispersion correction for the Crab (Gullahorn et al. (1977)). Pulse phase histograms for several energy bands are shown in Fig. 1. The main pulse peak, near phase 0.0, is the most appropriate feature for timing. The main peak is similar across the energy range from 70 MeV to 2 GeV (Fierro (1995)). The peak width is about 0.05 in phase, and appears somewhat narrower at high energies. There is no obvious shift of the peak centroid with energy. To study the energy dependence of the speed of light, we measured the main peak pulse arrival time in each energy band.
We did this in two ways. First, we calculated the average arrival time for photons in the main peak. We found the average time for each energy band using photons with phases between -0.0464 and 0.0336, an interval centered on the mean arrival time for all photons used in this analysis. Second, we parameterized the pulse arrival times by fitting a Lorentzian to the pulse profile, within the same phase range specified above, for each energy band. Before fitting, a constant rate equal to the average rate between phases -0.4 and -0.2 was subtracted. The resultant was then fit with a Lorentzian using a gradient-expansion algorithm to compute a non-linear least squares fit. The fits were all acceptable with $`\chi ^2`$ in the range 2.9 to 7.6 for 5 degrees of freedom. Fig. 2 shows the pulse arrival times calculated via both methods. The errors in Fig. 2 correspond to $`\mathrm{\Delta }\chi ^2=1`$ (68% confidence). The energy of each point is the median photon energy for each energy band. For the highest energy band, the median energy is substantially lower than the average, 2.9 GeV versus 5.0 GeV. The zero pulse phase is set by the radio ephemeris. The pulse arrival time for all photons used in this analysis is shown as a dashed line and differs by 0.21 ms from the radio zero phase. This is within the error in the radio dispersion correction (Nice (1998)). We note that errors in the radio zero phase can broaden the gamma-ray peak, but will not induce an energy dependent shift in the gamma-ray pulse arrival time. The accuracy of the pulse arrival time determination for the Lorentzian fit is 0.07 ms ($`\mathrm{\Delta }\chi ^2=3.84`$ or 95% confidence for a single parameter of interest) in the 100–200 MeV band and 0.21 ms (95% confidence) in the highest energy band. The accuracy in the highest energy band is limited mainly by statistics. It is apparent from the figure that there is no statistically significant variation in pulse arrival time with energy. To place an upper bound on any energy dependence in the speed of light, we compare the arrival time for photons with energies above 2 GeV (median energy 2.93 GeV) to that for the 70–100 MeV band (median energy 82.8 MeV). The 95% confidence upper limit on the difference of the arrival times is 0.35 ms. Adopting a distance to the Crab of 2.2 kpc (Zombeck (1990)), this leads to a lower limit on the energy scale of quantum gravity effects on the speed of light of $`E_{QG}>1.8\times 10^{15}\mathrm{GeV}`$ (95% confidence). This limit lies below the range of interest, but within an order of magnitude of some predictions in the context of string theory (Witten (1996)). ## 3 Discussion Other effects which could also produce an energy dependent delay in photon arrival times include energy dependent dispersion due to the strong gravitational field near the neutron star, purely electromagnetic dispersion, an energy dependence in the emission location, or an intrinsic energy dependence in the emission time. The effect of any energy dependent dispersion due to the strong gravitational field near the neutron star is likely to be small because, even if emitted from the neutron star surface, photons traverse the region of high gravitational fields within about 0.1 ms. Allowing a fractional change in the speed of light equal to the dimensionless field strength at the neutron star surface, $`GM/Rc^2\approx 0.2`$, where $`M\approx 1.4M_{\odot }`$ is the neutron star mass and $`R\approx 10\,\mathrm{km}`$ is the neutron star radius, the difference in arrival times would be only 0.02 ms.
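As a quick arithmetic cross-check of the two delay estimates just quoted (a sketch of ours; the distance, median energies, and timing limit are the values given in the text):

```python
# Hedged numerical check of the E_QG lower limit and of the neutron-star
# dispersion estimate, using the values quoted above (SI units).
KPC = 3.086e19          # metres per kiloparsec
C = 3.0e8               # speed of light, m/s

L = 2.2 * KPC           # distance to the Crab
dE = 2.93 - 0.0828      # GeV, difference of the two median band energies
dt = 0.35e-3            # s, 95% upper limit on the arrival-time difference

# Invert dt = L*dE/(c*E_QG) for the energy scale:
E_QG = L * dE / (C * dt)
print(f"E_QG > {E_QG:.1e} GeV")          # ~1.8e15 GeV, as quoted

# Neutron-star estimate: fractional speed change ~0.2 over a ~0.1 ms crossing
print(f"delay ~ {0.2 * 0.1e-3:.0e} s")   # ~2e-5 s = 0.02 ms
```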
The actual energy dependent change in the speed of light is likely to be much smaller than 0.2. Any significant purely electromagnetic dispersion at MeV energies and above can be excluded based on the dispersions measured at lower energies. An energy dependence in the photon emission location or intrinsic emission time could produce a significant energy dependent time delay. While a precise tuning of the emission locations or times for various energy photons could, in principle, cancel an energy dependent dispersion arising from quantum gravity effects, we consider such a coincidence unlikely, although not excluded, and interpret our lack of detection of any energy dependence in arrival times as constraining both the energy dependent dispersion and the emission location and time. In this case, the average emission location, projected along our line of sight, for photons at energies in the 70-100 MeV band must lie within 110 km of that for photons above 2 GeV, within 50 km of that for 0.5–1.0 GeV photons, and within 150 km of that for radio photons. It is encouraging that the analysis shows that it is possible to time the Crab pulsar at gamma-ray energies to an accuracy of 0.07 ms (95% confidence) given adequate statistics. Detection of pulsations from the Crab at 50–100 GeV could improve the limit on $`E_{QG}`$ by two orders of magnitude. The key question is whether the pulsations of the Crab and other gamma-ray pulsars continue to such high energies. Observations of the Crab near 1 TeV show only unpulsed emission (Vacanti et al. (1991)) and the cutoff energy of the pulsed emission is unknown. If the Crab does pulse at 50–100 GeV, detection of the pulses may be possible in the near term with low energy threshold atmospheric Cherenkov telescopes (ACTs), such as STACEE (Bhattacharya et al. (1997)) and CELESTE (Giebels et al. (1998)), or in the longer term with a space-borne gamma-ray detector such as GLAST (Gehrels et al. (1998)). The Crab pulsed signal may extend only to the lowest energies accessible with the ACTs. Thus, measurement of a timing difference between two energy bands might require contemporaneous measurements at other wavelengths. Both optical (Smith et al. (1978)) and x-ray timing (Rots et al. (1998)) can exceed the accuracy of gamma-ray timing. However, the emission location for x-ray and optical photons may differ from that of gamma-ray photons. If quantum gravity does produce a first order correction to the dispersion relation for electromagnetic waves, then measurement of the pulse arrival time of the Crab at 50 GeV with an accuracy of 0.1 ms could be used to place a lower bound on $`E_{QG}>1.1\times 10^{17}\mathrm{GeV}`$. This is within the range, $`10^{16}`$–$`10^{18}\mathrm{GeV}`$, for the energy scale for quantum gravity effects preferred in string theory (Witten (1996)). If future measurements do reveal an energy dependence in pulsar photon arrival times, then it will be difficult to distinguish an energy dependent dispersion from an intrinsic energy dependence in the emission location or emission time. This problem is common to all of the suggested astronomical tests of quantum gravity effects. Convincing proof for quantum gravity effects will likely require detection of energy dependent time delays in at least two different classes of objects, preferably at vastly different distances, i.e. pulsars versus AGN or gamma-ray bursts, with all of the detections compatible with the same value of $`E_{QG}`$. ###### Acknowledgements.
I thank Paul Mende for useful discussions. I acknowledge partial support from NASA grant NAG5-7389.
# A complete anytime algorithm for balanced number partitioning ## 1 Introduction and overview The number partitioning problem is defined as follows: Given a list $`x_1,x_2,\mathrm{},x_n`$ of non-negative, integer numbers, find a partition $`A\subseteq \{1,\mathrm{},n\}`$ such that the partition difference $$\mathrm{\Delta }(A)=|\sum _{i\in A}x_i-\sum _{i\notin A}x_i|,$$ (1) is minimized. In the constrained partition problem, the cardinality difference between $`A`$ and its complement, $$m=|A|-(n-|A|)=2|A|-n,$$ (2) must obey certain constraints. The most common case is the balanced partitioning problem with the constraint $`|m|\le 1`$. Partitioning is of both theoretical and practical importance. It is one of Garey and Johnson’s six basic NP-complete problems that lie at the heart of the theory of NP-completeness . Among the many practical applications one finds multiprocessor scheduling and the minimization of VLSI circuit size and delay . Due to the NP-hardness of the partitioning problem , it seems unlikely that there is an efficient exact solution. Numerical investigations have shown, however, that large instances of partitioning can be solved exactly within reasonable time . This surprising fact is based on the existence of perfect partitions, partitions with difference $`E\le 1`$. The moment an algorithm finds a perfect partition, it can stop. For identically, independently distributed (i.i.d.) random numbers $`x_i`$, the number of perfect partitions increases with $`n`$, but in a peculiar way. For $`n`$ smaller than a critical value $`n_c`$, there are no perfect partitions (with probability one). For $`n>n_c`$, the number of perfect partitions increases exponentially with $`n`$. The critical value $`n_c`$ depends on the number of bits needed to encode the $`x_i`$. For the unconstrained partitioning problem $$n_c-\frac{1}{2}\mathrm{log}_2n_c=\frac{1}{2}\mathrm{log}_2\frac{\pi }{2}\langle x^2\rangle ,$$ (3) where $`\langle \cdot \rangle `$ denotes the average over the distribution of the $`x_i`$ . The corresponding equation for the balanced partitioning problem reads $$n_c-\mathrm{log}_2n_c=\mathrm{log}_2\left(\pi \sqrt{\langle x^2\rangle -\langle x\rangle ^2}\right).$$ (4) For most practical applications the $`x_i`$ have a finite precision and Eq. 3 resp. Eq. 4 can be applied. Theoretical investigations consider real-valued i.i.d. numbers $`x_i\in [0,1)`$, i.e. numbers with infinite precision. In this case, there are no perfect partitions, and for a large class of real valued input distributions, the optimum partition has a median difference of $`\mathrm{\Theta }(\sqrt{n}/2^n)`$ for the unconstrained resp. $`\mathrm{\Theta }(n/2^n)`$ for the balanced case . Using methods from statistical physics, the average optimum difference has been calculated recently . It reads $$\mathrm{\Delta }_{\mathrm{opt}}=\sqrt{2\pi \langle x^2\rangle }\,\sqrt{n}\,2^{-n}$$ (5) for the unconstrained and $$\mathrm{\Delta }_{\mathrm{opt}}=\pi \sqrt{\langle x^2\rangle -\langle x\rangle ^2}\,n\,2^{-n}$$ (6) for the balanced partitioning problem. These equations also describe the case of finite precision in the regime $`1\ll n\ll n_c`$. For both variants of the partitioning problem, the best heuristic algorithms are based on the Karmarkar-Karp differencing scheme and yield partitions with expected $`E=n^{-\mathrm{\Theta }(\mathrm{log}n)}`$ when run with i.i.d. real valued input values . They run in polynomial time, but offer no way of improving their solutions given more running time.
Korf proposed an algorithm that yields the Karmarkar-Karp solution within polynomial time and finds better solutions the longer it is allowed to run, until it finally finds and proves the optimum solution. Algorithms with this property are referred to as anytime algorithms . Korf’s anytime algorithm is very efficient, especially for problems with moderate values of $`n_c`$. For numbers $`x_i`$ with twelve significant digits or less ($`n_c\approx 33`$), it can optimally solve partitioning problems of arbitrary size in practice, since it quickly finds a perfect partition for $`n>n_c`$. For larger values of $`n_c`$, several orders of magnitude improvement in solution quality compared to the Karmarkar-Karp heuristic can be obtained in short time. For practical applications of this NP-hard problem, this is almost more than one might expect. Korf’s algorithm is not very useful to find the optimum constrained partition, however. In this paper, we describe a modification of Korf’s algorithm, which is as efficient as the original, but solves the constrained partition problem. The next section comprises a description of Korf’s algorithm and the modifications for the balanced problem. In the third section we discuss some experimental results. The paper ends with a summary and some conclusions. ## 2 Algorithms ### 2.1 Differencing heuristics The key ingredient to the most powerful partition heuristics is the differencing operation : select two elements $`x_i`$ and $`x_j`$ and replace them by the element $`|x_i-x_j|`$. Replacing $`x_i`$ and $`x_j`$ by $`|x_i-x_j|`$ is equivalent to making the decision that they will go into opposite subsets. Applying differencing operations $`n-1`$ times produces in effect a partition of the list $`x_1,\mathrm{},x_n`$. The value of its partition difference is equal to the single element left in the list. Various partitions can be obtained by choosing different methods for selecting the pairs of elements to operate on. In the paired differencing method (PDM), the elements are ordered. The first $`n/2`$ operations are performed on the largest two elements, the third and the fourth largest, etc. After these operations, the left-over $`n/2`$ elements are ordered and the procedure is iterated until there is only one element left. Another example is the largest differencing method (LDM). Again the elements are ordered. The largest two elements are picked for differencing. The resulting set is ordered and the algorithm is iterated until there is only one element left. For $`1\ll n\ll n_c`$, i.e. in the regime where there are no perfect partitions, and for random i.i.d. input numbers, the expected partition differences are $`\mathrm{\Theta }(n^{-1})`$ for PDM and $`n^{-\mathrm{\Theta }(\mathrm{log}n)}`$ for LDM . LDM, being superior to PDM, is not applicable to the constrained partitioning problem. PDM on the other hand yields only perfectly balanced partitions. Yakir proposed a combination of both algorithms, which finds perfectly balanced partitions, but with an expected partition difference of $`n^{-\mathrm{\Theta }(\mathrm{log}n)}`$ . In his balanced LDM (BLDM), the first iteration of PDM is applied to reduce the original $`n`$-element list to $`n/2`$ elements. By doing so, it is assured that the final partition is balanced, regardless of which differencing operations are used thereafter. If one continues with LDM, a final difference of $`n^{-\mathrm{\Theta }(\mathrm{log}n)}`$ can be expected. The time complexity of LDM, PDM and BLDM is $`O(n\mathrm{log}n)`$, the space-complexity is $`O(n)`$.
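To make the two heuristics concrete, here is a minimal Python sketch of LDM and BLDM as described above. The function names and the heap-based bookkeeping are ours; the sketch returns only the final partition difference, not the subsets themselves:

```python
import heapq

def ldm(xs):
    """Largest differencing method: repeatedly replace the two largest
    elements by their difference; the last element left is the partition
    difference.  Assumes a non-empty list."""
    h = [-x for x in xs]              # max-heap via negated values
    heapq.heapify(h)
    while len(h) > 1:
        a = -heapq.heappop(h)         # largest element
        b = -heapq.heappop(h)         # second largest, b <= a
        heapq.heappush(h, -(a - b))   # commit a, b to opposite subsets
    return -h[0]

def bldm(xs):
    """Balanced LDM (Yakir): one PDM pass over adjacent pairs of the
    sorted list, then LDM on the pair differences.  Each difference
    carries cardinality difference 0, so the result is balanced."""
    s = sorted(xs, reverse=True)
    diffs = [s[i] - s[i + 1] for i in range(0, len(s) - 1, 2)]
    if len(s) % 2:                    # odd n: smallest element left over
        diffs.append(s[-1])
    return ldm(diffs)
```

For example, `bldm([8, 7, 6, 5, 4])` returns 2, corresponding to the balanced partition {8, 6} versus {7, 5, 4}.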
### 2.2 Korf’s complete anytime algorithm LDM and BLDM are the best known heuristics for the partitioning problem, but they find approximate solutions only. Korf showed how the LDM can be extended to a complete anytime algorithm, i.e. an algorithm that finds better and better solutions the longer it is allowed to run, until it finally finds and proves the optimum solution: At each iteration, the LDM heuristic commits to placing the two largest numbers in different subsets, by replacing them with their difference. The only other option is to place them in the same subset, replacing them by their sum. This results in a binary tree, where each node replaces the two largest remaining numbers, $`x_1\ge x_2`$: the left branch replaces them by their difference, while the right branch replaces them by their sum: $$x_1,x_2,x_3,\mathrm{}\to \{\begin{array}{cc}\hfill |x_1-x_2|,x_3,\mathrm{}& \text{ left branch }\hfill \\ \hfill x_1+x_2,x_3,\mathrm{}& \text{ right branch }\hfill \end{array}$$ (7) Iterating both operations $`n-1`$ times generates a tree with $`2^{n-1}`$ terminal nodes. The terminal nodes are single element lists, whose elements are the valid partition differences $`\mathrm{\Delta }`$. Korf’s complete Karmarkar-Karp (CKK) algorithm searches this tree depth-first and from left to right. CKK first returns the LDM solution, then continues to find better solutions as time allows. See Fig. 1 for the example of a tree generated by CKK. There are two ways to prune the tree: At any node where the difference between the largest element in the list and the sum of all other elements is larger than the current minimum partition difference, the node’s offspring can be ignored. If one reaches a terminal node with a perfect partition, $`\mathrm{\Delta }\le 1`$, the entire search can be terminated. The dashed nodes in Fig. 1 are pruned by these rules. In the regime $`n<n_c`$, the number of nodes generated by CKK to find the optimum partition grows exponentially with $`n`$. The first solution found, the LDM-solution, is significantly improved with far fewer nodes generated, however. In the regime $`n>n_c`$, the running time decreases with increasing $`n`$, due to the increasing number of perfect partitions. For $`n\gg n_c`$, the running time is dominated by the $`O(n\mathrm{log}n)`$ time to construct the LDM-solution, which in this regime is almost always perfect. ### 2.3 A complete anytime algorithm for constrained partitioning The application of differencing and its opposite operation leads to lists in which single elements represent several elements of the original list. In order to apply CKK to the constrained partitioning problem, one needs to keep track of the resulting cardinality difference. This can be achieved by introducing an effective cardinality $`m_i`$ for every list element $`x_i`$. In the original list, all $`m_i=1`$. The differencing operation and its opposite become $$\genfrac{}{}{0pt}{}{x_1}{m_1},\genfrac{}{}{0pt}{}{x_2}{m_2},\genfrac{}{}{0pt}{}{x_3}{m_3},\mathrm{}\to \{\begin{array}{cc}\hfill \genfrac{}{}{0pt}{}{|x_1-x_2|}{m_1-m_2},\genfrac{}{}{0pt}{}{x_3}{m_3},\mathrm{}& \text{ left branch }\hfill \\ \hfill \genfrac{}{}{0pt}{}{x_1+x_2}{m_1+m_2},\genfrac{}{}{0pt}{}{x_3}{m_3},\mathrm{}& \text{ right branch}\hfill \end{array}.$$ (8) Fig. 1 shows how the $`m_i`$ evolve if the branching rule (8) is applied to the list $`8,7,6,5,4`$. The terminal nodes contain the partition difference and the cardinality difference.
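A compact recursive sketch of this search, with the (value, cardinality) bookkeeping of Eq. (8), both pruning rules, and our own simplifications (it re-sorts at each node instead of inserting in order, and records only the best terminal found):

```python
def ckk(items, best):
    """Depth-first CKK over the differencing tree of Eq. (8).
    items: list of (x, m) pairs, sorted in descending order of x.
    best:  dict holding the smallest difference found so far and the
           cardinality difference of that terminal node."""
    if best['delta'] <= 1:                  # rule 2: perfect partition found
        return
    if len(items) == 1:
        delta, m = items[0]
        if delta < best['delta']:
            best['delta'], best['m'] = delta, m
        return
    total = sum(x for x, _ in items)
    # rule 1: largest element exceeds the sum of all others by at least
    # the current minimum difference, so no descendant can improve
    if 2 * items[0][0] - total >= best['delta']:
        return
    (x1, m1), (x2, m2) = items[:2]
    rest = items[2:]
    # left branch: x1, x2 into opposite subsets
    ckk(sorted(rest + [(x1 - x2, m1 - m2)], reverse=True), best)
    # right branch: x1, x2 into the same subset
    ckk(sorted(rest + [(x1 + x2, m1 + m2)], reverse=True), best)

# usage: every original element starts with effective cardinality m_i = 1
# best = {'delta': float('inf'), 'm': None}
# ckk(sorted(((x, 1) for x in xs), reverse=True), best)
```

The constrained variants discussed next restrict which terminal nodes are accepted and add a cardinality-based pruning rule on top of this search.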
A simple approach to the constrained partition problem is to apply CKK with the branching rule (8) and to consider only solutions with matching $`m`$. This can be very inefficient, as can be seen for the constraint $`m=n`$. This extreme case is trivial, but CKK needs to search the complete tree to find the solution! As a first improvement we note that an additional pruning rule can be applied. Let $`m_{\mathrm{max}}:=\mathrm{max}_i\{|m_i|\}`$ and $`M:=\sum _i|m_i|`$ at a given node. The cardinality difference $`m`$ which can be found within the offspring of this node is bounded by $$2m_{\mathrm{max}}-M\le |m|\le M.$$ (9) Comparing these bounds to the cardinality constraint, one can prune parts of the tree. Consider again the case $`m=n`$ as an example: The trivial solution is now found right away. CKK finds the first valid partition (the LDM solution) after generating $`n`$ nodes. For the constrained partition problem, this can not be guaranteed – except in the case of balanced partitions, where we can use the BLDM strategy. Applying the first $`n/2`$ PDM operations to the original list leaves us with an $`n/2`$-element list with all $`m_i=0`$ (resp. with a single $`m_i=1`$ if $`n`$ is odd). CKK applied to this list produces only perfectly balanced partitions, the BLDM solution in first place. To keep the completeness of the algorithm, we have to consider the alternative to each of the PDM operations, i.e. to put a pair of subsequent numbers in the same subset. An outline of the complete BLDM algorithm can be seen in Fig. 2. Note that in an actual implementation several modifications should be applied to improve the performance. Instead of sorting the list at every node in the LDM phase, it is much more efficient to sort only when switching from PDM to LDM and insert the new element $`x_1\pm x_2`$ in the LDM-phase such that the order is preserved. The $`\mathrm{max}`$ and $`\sum `$ of $`x_i`$ and $`|m_i|`$ should be calculated only once and then locally updated when the list is modified. ## 3 Experimental results We implemented the complete BLDM algorithm to test its performance as an exact solver, a polynomial heuristic and an anytime algorithm. For all computer experiments we use i.i.d. random numbers $`x_i`$, uniformly distributed from $`0`$ to $`2^b-1`$, i.e. $`b`$-bit integers. To measure the performance of the algorithm as an exact solver, we count the number of nodes generated until the optimum solution has been found and proven. The result for $`25`$-bit integers is shown in Fig. 3. Each data point is the average of $`100`$ random problem instances. The horizontal axis shows the number of integers partitioned, the vertical axes show the number of nodes generated (left) and the fraction of instances that have a perfect partition (right). Note that we counted all nodes of the tree, not just the terminal nodes. We observe three distinct regimes: for $`n<30`$, the number of nodes grows exponentially with $`n`$, for $`n>30`$ it decreases with increasing $`n`$, reaching a minimum and starting to increase again slowly for very large values of $`n`$. Eq. 4 yields $`n_c=29.7`$ for our experimental setup, in good agreement with the numerical result that the probability of having a perfect partition is one for $`n\ge 30`$ and drops sharply to zero for smaller values of $`n`$. In the regime $`n<n_c`$, the algorithm has to search an exponential number of nodes in order to prove the optimality of a partition. For $`n>n_c`$ it finds a perfect partition and stops the search prematurely.
The number of perfect partitions increases with increasing $`n`$, making it easier to find one of them. This explains the decrease of searching costs. For $`n\gg n_c`$, the very first partition found already is perfect. The construction of this BLDM solution requires $`n`$ nodes. We have seen that for $`n\gg n_c`$ the BLDM heuristic yields perfect partitions. How does it behave in the other extreme, the “infinite precision limit”, $`n\ll n_c`$? Yakir proved that in this limit BLDM yields an expected partition difference of $`n^{-\mathrm{\Theta }(\mathrm{log}n)}`$. For a numerical check we applied BLDM to partition $`2n`$-bit integers to ensure that $`n\ll n_c`$. The partition difference is then divided by $`2^{2n}`$ to simulate infinite precision real numbers from the interval $`[0,1)`$. Fig. 4 shows the resulting partition difference. Each data point is averaged over $`1000`$ random instances. Due to the numerical fit in Fig. 4 it is tempting to conjecture $$\mathrm{\Delta }_{\mathrm{BLDM}}=(\sqrt{2}-1)\,n^{-\frac{2}{3}\mathrm{ln}n}.$$ (10) If we want better solutions than the BLDM solution, we let the complete BLDM run as long as time allows and take the best solution found. We applied this approach to partition $`100`$ random $`150`$-bit integers. Perfect partitions do not exist (with probability one), and the true optimum is definitely out of reach. The results can be seen in Fig. 5. The horizontal axis is the number of nodes generated, and the vertical axis is the ratio of the initial BLDM solution to the best solution found in the given number of node generations, both on a logarithmic scale. The entire horizontal scale represents about 90 minutes of real time, measured on a Sun SPARC 20. The fact that the number of nodes per second is a factor of $`1000`$ smaller than reported by Korf for the CKK on $`48`$-bit integers is probably due to the fact that we had to use a multi-precision package for the $`150`$-bit arithmetic while Korf could stick to the fast arithmetic of built-in data types. Even with this slow node generation speed we observe an improvement of several orders of magnitude relative to the BLDM solution in a few minutes. A least square fit to the data of $`100`$ runs gives $$\frac{\mathrm{\Delta }_{\mathrm{BLDM}}}{\mathrm{\Delta }}\approx 0.075\,(\mathrm{\#}\mathrm{nodes})^{0.84},$$ (11) but the actual data vary considerably. ## 4 Summary and conclusions The main contribution of this paper is to develop a complete anytime algorithm for the constrained number partitioning problem. The complete Karmarkar-Karp algorithm CKK, proposed by Korf for the unconstrained partitioning problem, can be adapted to the constrained case simply by keeping book of the effective cardinalities and by extending the BLDM heuristic to a complete algorithm. The first solution the algorithm finds is the BLDM heuristic solution, and as it continues to run it finds better and better solutions, until it eventually finds and verifies an optimal solution. The basic operation of the complete BLDM is very similar to Korf’s CKK. The additional processing of the effective cardinalities has only a minor impact on the runtime. The pruning based on estimating the cardinality difference leads to a gain in speed, on the other hand. Therefore we adopt Korf’s claim: For numbers with twelve significant digits or less, complete BLDM can optimally solve balanced partitioning problems of arbitrary size in practice.
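For completeness, the sketches from Section 2 can be exercised end-to-end; this illustration (ours) uses 10-bit integers with $`n=30`$, which keeps the search in the easy $`n\gg n_c`$ regime, so both the heuristic and the complete search should find a perfect partition almost immediately:

```python
import random

# uses ldm/bldm and ckk from the sketches above
random.seed(1)
xs = [random.getrandbits(10) for _ in range(30)]   # b-bit instance, n >> n_c

print("BLDM difference:", bldm(xs))                # balanced heuristic

best = {'delta': float('inf'), 'm': None}
ckk(sorted(((x, 1) for x in xs), reverse=True), best)
print("CKK difference:", best['delta'], "cardinality:", best['m'])
```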
## 1 The Life of a Merger To begin a discussion of merger dynamics, it is perhaps best to describe the different dynamical phases of the merging process. Figure 1 shows a “typical” merger (described in detail in Mihos & Hernquist 1996 \[MH96\]) involving two equal mass disk galaxies colliding on a parabolic orbit with perigalactic separation of $`2.5h`$, where $`h`$ is the exponential disk scale length. One disk is exactly prograde, while the other is inclined 71° to the orbital plane. Both disks are embedded in truncated isothermal dark halos with mass 5.8 times the disk mass. The half-mass rotation period is $`t_{rot}`$.
* Pre-collision $`\mathrm{\Delta }t\sim 0.01(\frac{r_{init}}{h})^{3/2}t_{rot}`$. As the galaxies fall in towards each other for the first time, they move on simple parabolic orbits until they are close enough that they have entered each other’s dark halos, and the gravitational force becomes non-Keplerian. During this infall, the galaxies hardly respond to one another at all, save for their orbital motion.
* Impact! $`\mathrm{\Delta }t\sim 0.3(\frac{r_p}{h})^{3/2}t_{rot}`$. As the galaxies reach perigalacticon, they feel the strong tidal force from one another. The galaxies become strongly distorted, and the tidal tails are (appropriately!) launched from their back sides. Strong shocks are driven in the galaxies’ ISM due to tidal caustics in the disks as well as direct hydrodynamic compression of the colliding ISM.
* (Self-) Gravitational Response $`\mathrm{\Delta }t\sim t_{rot}`$. As the galaxies separate from their initial collision, the disk self-gravity can amplify the tidal distortions into a strong $`m=2`$ spiral or bar pattern. This self-gravitating response is strongly coupled to the internal structure of the galaxies as well as their orbital motion, resulting in a variety of dynamical responses (see §3).
* “Hanging Out” $`\mathrm{\Delta }t`$ ?? – long? short? Having plowed through the densest parts of one another’s dark halos, the galaxies experience strong dynamical friction, causing the orbit to decay. The galaxies linger at apogalacticon for a significant time (several to many rotation periods) before falling back together and merging. The timescale here is crucially dependent on the distribution of dark matter at large radius, resulting in significant uncertainties in the duration of this phase (see §5).
* Merging $`\mathrm{\Delta }t\sim `$ a few $`t_{rot}`$. Once the galaxies fall back together, they typically dance around each other once or twice more on a short-period, decaying orbit before coalescing into a single remnant. During this period, gravitational torques and hydrodynamic forces are strong, resulting in strong gaseous inflow and rapid violent relaxation.
* Relaxation $`\mathrm{\Delta }t\sim `$ a few $`t_{rot}(R)`$. Once the galaxies merge, a general rule of thumb is that violent relaxation and/or dynamical mixing occurs on a few rotation periods at the radius in question. In the inner regions, the remnant will be relaxed in only $`10^8`$ yr; in the outer portions mixing may take $`>10^9`$ yr.

## 2 Where are the ULIRGs? Given this range of dynamical states, it is useful to ask which state preferentially hosts ULIRG galaxies. The fact that they are predominantly close pairs or single, disturbed systems argues that late stage systems dominate, but can we be more quantitative? Such insight can come from an analysis of the projected separations of ULIRGs (e.g., Murphy et al. 1996).
If we know the orbital evolution of binary galaxy pairs, we can statistically reconstruct the distribution of dynamical phases from the observed projected separations. In essence, this exercise will reveal how ULIRG activity samples the general merging population. This selection function can be determined (in an admittedly model-dependent fashion) using N-body simulations of merging galaxies. A suite of merger models is calculated, focusing on the close ($`r_{peri}=`$ 2, 4, 6, 8, and 10 disk scale lengths), equal-mass mergers thought to give rise to ULIRGs. Given the orbital evolution of these models, we “observe” the model pairs randomly in projection, and weighted by $`r_{peri}^2`$ (geometric weighting of orbits). Because the merging timescale differs drastically in mergers of different impact parameter, we define a relative timescale as $`t_{rel}=t/t_{merge}`$ in which initial impact occurs at $`t_{rel}=0`$ and final merging occurs at $`t_{rel}=1`$. If we “observe” the merger models completely randomly in time – in essence assuming that the ULIRG selection function is constant over dynamical stage – we construct the histogram of projected separation $`\mathrm{\Delta }R`$ shown in Figure 2a. Because binary galaxies spend most of their orbital lifetimes at apogalacticon, and because distant encounters are assumed to be more common than close ones, $`N(\mathrm{\Delta }R)`$ shows a strong peak at extremely wide separations, $`\mathrm{\Delta }R>`$ 30 kpc. As this is far from the observed situation, a flat selection function (Fig 2c) is clearly unrealistic. However, from this histogram, we can do a Monte Carlo rejection of observations in each bin until we match the true observed $`N(\mathrm{\Delta }R)`$ histogram (Fig 2b). At this point, we can determine from the models the distribution of dynamical ages of the surviving observations (Fig 2d). From this distribution, we see that ULIRGs must come predominantly from mergers in the final 20% of their merging history – in other words, the final merging phase. A small fraction of objects may come from objects near their initial collision. This selection function argues that galaxies are somehow stable against the onset of ULIRG activity over most of the merging history, even though they respond dynamically at a much faster pace. What, then, causes this disconnect between dynamical response and ULIRG activity? ## 3 Is there a Dynamical Trigger? Whatever powers the extreme luminosity of ULIRGs, the requisite is sufficient fuel in the form of interstellar gas. From a dynamical point of view, the link between ULIRG activity and the merging process must lie in the detailed dynamics which drive nuclear gas inflows in mergers. Is there a distinct dynamical trigger which begins this inflow and resultant ULIRG activity, or are there several paths to the formation of ULIRGs? ### 3.1 Theoretical Expectations Much of our understanding of the dynamical triggering of inflows in galaxies comes from N-body simulations. These simulations have generally shown that gaseous inflows in galaxies arise largely in response to the growth of $`m=2`$ instabilities in disks – spiral arms or, more strongly, bars (Noguchi 1988; Barnes & Hernquist 1991; MH96; BH96). As such, the question of inflow triggers becomes one of bar instability in disks. What kind of encounters drive bars? When in the merging sequence do they form? A variety of simulations have revealed a variety of answers.
In disks which are susceptible to global instabilities, strong bars form shortly after the initial collision. In these situations, rapid inflow occurs within a few disk rotation periods, providing the fuel for early starburst or AGN activity well before the galaxies merge (MH96, BH96). If these types of galaxies were the dominant sources for ULIRGs, ULIRG samples should contain many more wide pairs than are actually observed. Disk stability, therefore, may be one criterion for forming ULIRGs. That stability may come from the presence of a massive central bulge or a low ratio of disk-to-dark matter in the inner disk. Simulations by Mihos & Hernquist (1994, MH96) show that a bulge component can stabilize the disk against bar formation, holding off inflow until the galaxies ultimately merge. At this point the strong gravitational torques and gasdynamical shocks overwhelm any stability offered by the bulges, and the gas is rapidly driven inwards on a dynamical timescale, presumably fueling a starburst or active nucleus. In interesting contrast to these models are those of Barnes & Hernquist (1996, BH96) which employed a similar 3:1 disk-to-bulge ratio in their model galaxies, but with a much lower density bulge. In these models, the bulges were unable to stabilize the disks, and early inflow again occurred. Clearly it is thus more than the mere presence of bulges that stabilizes disks – the bulges must be sufficiently concentrated that they dominate the mass distribution (and thus the rotation curve) in the inner disk. Alternatively, a high fraction of dark-to-disk mass in the inner portion of the galaxies may also stabilize the disks; such is probably the case in low surface brightness disk galaxies (Mihos et al. 1997). Aside from internal dynamics, the orbital dynamics also play a role in triggering inflow and activity. BH96 also show how orbital geometry influences the inflow and activity. For galaxies with a modest amount of disk stability, a prograde encounter will be sufficient to drive bar instabilities, while a retrograde encounter will not. In these cases, retrograde disks will survive the initial impact relatively undamaged, and not experience any strong activity until the galaxies ultimately merge. In the extreme situation of very strong or very weak stability, however, internal stability effects tend to win out over orbital effects (MH96). More recently, simulations have shown that triggering of activity may not be solely tied to the physics of inflows. Instead, the fueling of activity may be moderated by starburst energy, which can render the gas incapable of forming stars. Simulations by Gerritsen (1998) indicate that a starburst can heat a significant fraction of the inflowing gas to a few million degrees; the onset of star formation in this gas must await radiative cooling, resulting in milder but longer-lived starbursts compared to those of MH96. With the current uncertainties in modeling the physics of star formation and starburst feedback, this result does not bode well for the detailed predictive power of any current starburst merger model. ### 3.2 Observational Constraints While the dynamical models can guide our expectations, the variances due to effects such as galactic structure, orbital geometry, and starburst physics make it hard to isolate any single effect as a dominant trigger. Can we instead turn to the observational data to complement these models?
Because of the rapid decoupling of the nuclear gas from the global kinematics, studies of nuclear kinematics may give an improper account of the dynamical history of the encounter. Instead, global kinematics have a better “memory” of the initial conditions and evolution of the collision. To study these global kinematics, Mihos & Bothun (1998) recently examined the two dimensional H$`\alpha `$ velocity fields of four southern ULIRGs. These galaxies were chosen to display extended tidal features, and thus biased the sample towards largely prograde systems. Nonetheless, the four systems showed a wide range of kinematic structure, with no distinct commonality. One (IRAS 14348-1447) showed short tidal features, extended H$`\alpha `$ emission, and fairly quiescent disk kinematics, suggesting the system is a young interaction. The second (IRAS 19254-7245, the Superantennae) possessed extremely long tidal features, more concentrated H$`\alpha `$ emission, and evidence for outflowing winds – clearly a more advanced interaction, although the pair is still separated by $`\sim `$ 10 kpc and simple disk kinematics still dominate the overall velocity field. IRAS 23128-4250 is the most distorted of the four, with two nuclei separated by $`\sim `$ 4 kpc, several distinct overlapping kinematic components, and a 90° slew in the angular momentum vector of the system from the nuclear regions to the extended tidal features. Such kinematic structure cannot survive for long, so we must be catching IRAS 23128-4250 in a very transient stage associated with the final merging. Finally, the fourth system (IRAS 20551-4250) consists of a single nucleus in an r<sup>1/4</sup> galaxy with a single long tidal tail. The H$`\alpha `$ is very centrally concentrated, and shows simple rotational motion indicative of the quiescent dynamical stage following the completion of the merger. The fact that we see four ultraluminous systems in four very different dynamical phases argues that (at least in this small sample) there is no common dynamical trigger for ULIRG activity. A similar conclusion was reached by Hibbard & Yun (1996) from a study of the HI morphologies of ULIRGs, which showed no tendency towards prograde interactions. We are left then with a bit of a dissatisfying – although perhaps not unexpected – result. Both theoretical and observational arguments indicate there is no unique trigger for ULIRG activity. While ULIRGs are associated with late stage mergers, beyond that there seems to be no one-to-one mapping of dynamics to ULIRG activity. Internal structure, orbital dynamics, gas content, and starburst physics must all play competing and tangled roles in the ultimate triggering of ULIRG activity. ## 4 The Believability of N-body Models Given the ever-expanding role numerical simulation plays in the study of galactic dynamics, and in particular galaxy mergers, it is perhaps prudent here to make a few critical comments on the robustness of N-body modeling. With respect to the ULIRG question, the first obvious shortcoming of the current generation of N-body models is numerical resolution. The spatial resolution of models such as those of MH96 or BH96 is $`\sim 100`$ pc, many orders of magnitude larger than any central accretion disk.<sup>1</sup> (<sup>1</sup>Gasdynamical models with a variable hydrodynamical smoothing length may purport to have finer resolution in the gas phase, but ultimately the detailed dynamics cannot be resolved on scales smaller than the gravitational softening length.)
While these models have shown the efficacy of mergers at driving radial inflows, they cannot address accretion onto an AGN, other than the first step of fueling gas inwards from the disk. This is no trivial matter – to reach the accretion disk the gas must shed several orders of magnitude more angular momentum (e.g., Phinney 1994). Ideas with which to mediate this further inflow abound, such as nuclear bars (Shlosman et al. 1990), dynamical friction (Heller & Shlosman 1994), or gravitational torques (Bekki 1995). While invoking such processes is quite reasonable, we must realize that in the context of AGN triggering these arguments remain purely speculative, and cannot be resolved in present models. Modeling of the nuclear dynamics of mergers is both a technical and physical challenge. First, because the resolution scales with the mean interparticle separation ($`r\propto N^{-1/3}`$), to get a factor of two improvement in resolution demands an order of magnitude increase in the number of particles (and CPU time) employed. However, sheer brute force will not solve the problem – on these smaller scales the starburst and AGN physics begin to dominate the dynamical equations. To see this, equate starburst power to binding energy for a $`M_g=10^{10}`$ M$`_{\odot }`$, $`10^{12}`$ L$`_{\odot }`$, $`10^7`$ year starburst inside 100 pc: $$\begin{array}{ccc}\hfill \epsilon L\mathrm{\Delta }t& =& GM_g^2/R\hfill \\ \hfill \epsilon \times 10^{60}\,\mathrm{erg}& =& 10^{59}\,\mathrm{erg}\hfill \end{array}$$ If the efficiency of energy deposition into the ISM ($`\epsilon `$) is even a few percent, it can have a significant effect on the nuclear gasdynamics. Star formation and feedback remain poorly understood, and efforts to incorporate them into dynamical simulations are fraught with uncertainties – this problem stands as the biggest obstacle in modeling the dynamical evolution of ULIRGs. Until better models exist for incorporating feedback into N-body simulations, improved spatial resolution is meaningless. What, then, can we believe from N-body simulations? Surely the gravitational dynamics are well understood, right? They are, up to a point. Unless Newtonian mechanics is wrong, N-body models accurately calculate the gravitational forces acting on the merging galaxies. The uncertainties lie not in the physics of gravity, but in the initial conditions of the model, in particular the mass distribution of the different components of the galaxies. The dynamics of inflow are dependent on the mass distribution in the inner disk, but are disk galaxies maximal disks? Minimal disks? Equally problematic is the strong dependency of the merger evolution on the distribution of dark matter at large radius, which affects the orbital evolution and merging timescale. As we shall see next, these uncertainties make even pure gravitational modeling of merging galaxies uncertain. ## 5 The Role of Dark Matter Dark matter halos play the dominant role in determining the dynamics of mergers on large (tens of kpc) size scales. On these scales, it is the dynamical friction of the dark halos which brakes the galaxies on their orbit and causes them to merge. Different dark matter halos lead to different orbital evolution and merging timescales for the colliding galaxies. With the amount of dark matter in galaxies poorly constrained, particularly at large distances from the luminous disks, these effects represent a serious uncertainty in dynamical modeling of galaxy mergers.
While recent cosmological simulations give detailed predictions of the dark matter distribution on large scales (e.g., Navarro et al. 1995), these predictions are often at odds with observed galaxy rotation curves (McGaugh & de Blok 1998). Most models of galaxy mergers to date have typically employed relatively low mass dark halos truncated outside of a few tens of kpc. More recently, efforts have been made to include more massive and extended halos in merger models. While results on the detailed evolution of the tidal debris remain contentious (Dubinski et al. 1996, 1999; Barnes 1998; Springel & White 1998), one thing is clear – the more massive the dark halo is, the longer the merging time. At first this may seem counter-intuitive, since the more “braking material” there is, the faster the braking ought to be! But halo mass also provides acceleration, so that galaxies with more massive halos are moving faster at perigalacticon, diminishing the efficiency of dynamical friction. At fixed circular velocity, the higher encounter velocity wins out over the increased dynamical friction, and merging time increases. This can be seen in Figure 3, which shows the orbital evolution of two equal-mass mergers, both with similar rotation curves in the luminous portion of the galaxy, but where one has a dark matter halo three times the mass of, and twice as extended as, the other. The differences in orbital evolution among models with different dark halos have several ramifications for merger dynamics and the formation of ULIRGs in particular; for example:
* Timing of inflows: In §2, statistical arguments were made that the onset of activity is largely suppressed over 80% of the merger evolution, and that inflow occurs only late. If in fact halos are even more massive, and the merging timescale even longer, this constraint becomes even more severe – over more than 95% of the merger timescale the galaxies must lie dormant before activity is triggered. In this case, the dynamical stability must be strong indeed.
* Modeling of specific systems: The rapid advances in N-body modeling have made it easy to construct “made-to-order” models of specific systems. However, uncertainties in the dark matter distribution translate into significant uncertainties in the dynamical evolution of mergers inferred from these models. For example, dynamical models of NGC 7252 can be constructed using a variety of halo models (Mihos et al. 1998), all of which successfully reproduce the observed kinematics of the system, yet have orbital characteristics and merging timescales which differ significantly. These uncertainties argue that such specific models are caricatures of the real systems, and that inferences of the detailed dynamics of specific systems based on such models are ill-motivated.

## 6 High-z Musings Finally, in light of the new results from SCUBA on possible high redshift analogs of nearby ULIRGs (e.g., Smail et al. 1998; Barger et al. 1998), it is interesting to ask how any of the lessons we have learned apply to mergers at higher redshift. The first immediate question is: are these SCUBA sources disk galaxy mergers at all? The “smoking gun” of a disk merger is the presence of tidal tails, but such features will be difficult to detect. The surface density of these structures evolves rapidly, giving them a dynamical lifetime which is short – $`\sim `$ a few $`t_{rot}(R)`$.
Only outside of $`\sim `$ 10 kpc are tidal features long-lived, yet here their surface brightness is quite faint ($`\mu _R\gtrsim `$ 25–26 mag arcsec<sup>-2</sup>). Add to this the $`(1+z)^4`$ cosmological dimming, and by $`z=2`$ the surface brightness of these features will be down to 30–31 mag arcsec<sup>-2</sup>, very hard indeed to detect. In fact, with the tidal features so faint and the inner regions perhaps highly obscured, simply detecting these galaxies may be quite problematic, much less determining their structural and dynamical properties. Nonetheless it is instructive to ask how disk mergers might evolve differently at high redshift compared to present day mergers. High redshift disks may well have been more gas-rich than current disk galaxies, resulting in more fuel for star formation and in a lower disk stability threshold (the Toomre $`Q`$). As a result, rather than driving gas inwards, collisions of galaxies at high redshift may instead result in pockets of disk gas going into local collapse and increasing disk star formation at the expense of nuclear starbursts. Morphologically, we might expect mergers to show an extended, knotty structure of sub-luminous clumps rather than an extremely luminous, nucleated structure. Qualitatively this is similar to the types of objects found in the Hubble Deep Field(s), where multiple bright knots observed in the restframe UV may be embedded in a single structure when observed in rest frame optical. We should therefore apply extreme caution when applying results obtained from merger models based on nearby galaxies to the high redshift universe.
## 1 Introduction It is customary (for a recent review see, e.g. ) to parametrize the small $`x`$ $`(x<0.1)`$ behaviour of the proton structure function (SF) by a power-like growth $$F_2(x,Q^2)\sim a\left(Q^2\right)x^{-\lambda (Q^2)},$$ (1) with the $`Q^2`$ dependent “effective” power $`\lambda (Q^2)`$ generally interpreted as the Pomeron intercept minus $`1`$, rising from about $`0.1`$ to about $`0.4`$ between the smallest and largest values of $`Q^2`$ measured at HERA. In recent papers a “hard” Pomeron term, with $`ϵ_0=0.418`$, besides the “soft” one, with $`ϵ_1=0.0808`$, and a subleading “Reggeon” with $`ϵ_2=-0.4525`$ (all $`Q^2`$ independent) was introduced in the SF $$F_2(x,Q^2)\sim \sum _{i=0}^2a_i\left(Q^2\right)x^{-ϵ_i}.$$ (2) In our opinion, there is only one Pomeron in nature; nevertheless, for the sake of completeness, we have included in our analysis the above parametrization as well. Notice that with an increasing number of contributions to $`F_2`$ the values of some of the powers (related to the intercepts of relevant trajectories) must be fixed in some way, since the number of the small $`x`$ data is not sufficient to determine unambiguously their values from the fits. Alternative logarithmic parametrizations $$F_2(x,Q^2)\sim \sum _{i=0}^2b_i\left(Q^2\right)\mathrm{ln}^i\left(\frac{1}{x}\right)$$ (3) exist and are claimed to be equally efficient. Notice also that in expressions (2),(3), contrary to (1), the $`Q^2`$ dependence factorizes in each individual term (Reggeon) – a typical feature of the Regge pole theory (for more details see ). It should be noted that each term in the simple parametrizations of the type (2) and (3) may be associated with $`Q^2`$ independent Pomeron trajectories with relevant factorized $`Q^2`$ dependent residuae. Formally, this is not compatible with the GLAP evolution equation, by which the variation (evolution) with $`Q^2`$ modifies also the $`x`$ dependence of the SF (although, in a limited range, approximate “selfconsistent” solutions, stable with respect to a logarithmic behaviour, are known to exist). In any case, since we are fitting the SF to fixed values (bins) of $`Q^2`$, our parametrization does not depend directly on the $`Q^2`$ evolution. Moreover, since the onset and range of the perturbative GLAP evolution is not known a priori, our “data” may be used as a test for it. In this paper we present the results of a comparative analysis of these two types of parametrizations: power-like and logarithmic. To avoid theoretical bias, we do not constrain the $`Q^2`$ dependence by any particular model; instead we take the experimental value in a parametric way. The range of variables and the set of experimental points are, of course, the same in both cases. As a by-product, the parameters obtained in this way may be used as “experimental data” in future calculations of the GLAP or BFKL evolution. The present study is an extension of a preliminary analysis . It is generally believed that, at small $`x`$, the singlet SF increases monotonically, indefinitely, accelerating towards larger $`Q^2`$ (the Pomeron becomes more “perturbative”). This phenomenon is usually quantified by means of the derivative $`\partial \mathrm{ln}F_2/\partial \mathrm{ln}(1/x)`$, which in the simple case of $`F_2\sim x^{-\lambda }`$ ($`\lambda `$ being $`x`$ independent), is identical with the effective power $`\lambda `$ (otherwise it is not).
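To make the last remark concrete, here is a small numerical sketch (ours; the coefficients $`a_0,a_1`$ are illustrative, only the exponents are the fixed values quoted above) showing that for a two-term form like (2) the logarithmic derivative is itself $`x`$ dependent, so a single power as in (1) can only be “effective”:

```python
import numpy as np

# Two-term power form, F2 = a0*x**(-e0) + a1*x**(-e1); a0, a1 are
# illustrative values, not fit results.
e0, e1 = 0.418, 0.0808
a0, a1 = 0.001, 0.1

x = np.logspace(-6, -2, 5)
f2 = a0 * x**(-e0) + a1 * x**(-e1)

# Effective slope dln F2 / dln(1/x): for a sum of powers this is an
# x-dependent weighted average of e0 and e1, not a constant.
bx = (a0 * e0 * x**(-e0) + a1 * e1 * x**(-e1)) / f2
print(np.round(bx, 3))   # drifts from ~e1 toward e0 as x decreases
```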
We found evidence against this monotonic trend; moreover, we show that, at the highest $`Q^2`$, the rise of $`F_2`$ starts slowing down. ## 2 Analysis of the structure function ### 2.1 Small $`x`$ $`(<0.05)`$ The following forms of the small $`x`$ singlet component ($`S,0`$) of the SF are compared for $`x<x_c`$ and for each experimental $`Q_i^2`$ bin: A. Power-like $$F_2^{S,0}(x,Q_i^2)=a(Q_i^2)\left(\frac{1}{x}\right)^{\lambda (Q_i^2)},$$ (4) and $$F_2^{S,0}(x,Q_i^2)=a_0(Q_i^2)\left(\frac{1}{x}\right)^{ϵ_0}+a_1(Q_i^2)\left(\frac{1}{x}\right)^{ϵ_1},$$ (5) where the exponents $`ϵ_0,ϵ_1`$ are fixed in accordance with . B. Logarithmic $$F_2^{S,0}(x,Q_i^2)=b_0(Q_i^2)+b_1(Q_i^2)\mathrm{ln}\left(\frac{1}{x}\right),$$ (6) $$F_2^{S,0}(x,Q_i^2)=b_0(Q_i^2)+b_2(Q_i^2)\mathrm{ln}^2\left(\frac{1}{x}\right)$$ (7) and the combination of the two $$F_2^{S,0}(x,Q_i^2)=b_0(Q_i^2)+b_1(Q_i^2)\mathrm{ln}\left(\frac{1}{x}\right)+b_2(Q_i^2)\mathrm{ln}^2\left(\frac{1}{x}\right).$$ (8) In these equations, $`a(Q_i^2)`$, $`a_{0,1}(Q_i^2)`$, $`b_{0,1,2}(Q_i^2)`$ and $`\lambda (Q_i^2)`$ are parameters fitted to each $`i^{\mathrm{th}}`$ $`Q^2`$ bin. More precisely, the free parameters are $`a`$ and $`\lambda `$ for (4), $`a_0`$ and $`a_1`$ for (5), $`b_0`$ and $`b_1`$ for (6), $`b_0`$ and $`b_2`$ for (7), and $`b_0`$, $`b_1`$ and $`b_2`$ for (8). The choice of the cut $`x_c`$ is obviously crucial, but subjective. Balancing between $`x`$ small enough, to minimize the large $`x`$ effects, and $`x`$ large enough, to include as many data points as possible, we tentatively set, like in , $`x_c=0.05`$ as a compromise solution. Since one can never be sure of the choice of the boundary below which the non-singlet contribution ($`nS,0`$) may be neglected<sup>5</sup> (<sup>5</sup>The ratio of the non-singlet contribution to the singlet one was calculated in and was shown to drop below 10% around $`x=10^{-3}`$, tending to decrease with increasing virtualities $`Q^2`$.), we performed additional fits with the non-singlet contribution included $$F_2^{nS,0}(x,Q_i^2)=a_f\left(Q_i^2\right)x^{1-\alpha _f},$$ (9) with the intercept fixed as in , $`\alpha _f=0.415`$ (i.e. only one free parameter, namely $`a_f`$, is added). ### 2.2 Extension to all $`x`$ $`(<1.0)`$ To ensure that our fits do not depend on the choice of the cut $`x_c`$, we extend the previous analysis to larger values of $`x`$ with relevant modifications of the SF. Namely, we multiply the singlet and subsequently the non-singlet contributions by appropriate large $`x`$ factors . The resulting SF becomes $$F_2(x,Q_i^2)=F_2^{S,0}(x,Q_i^2)(1-x)^{n\left(Q_i^2\right)+4}+F_2^{nS,0}(x,Q_i^2)(1-x)^{n\left(Q_i^2\right)},$$ (10) where $`F_2^{S,0}`$ runs over all the cases considered in the previous section, and the exponent $`n\left(Q_i^2\right)`$ is either that of $$n\left(Q_i^2\right)=\frac{3}{2}\left(1+\frac{Q_i^2}{Q_i^2+c}\right),\mathrm{with}\,c=3.5489\,\mathrm{GeV}^2,$$ (11) or is fitted to the data for each $`Q_i^2`$ value (see below). ## 3 Discussion of the results ### 3.1 Structure function We made two kinds of fits, one restricted to small $`x`$ only ($`x<x_c=0.05`$), the other one including large $`x`$ as well. In the first case ($`x<x_c`$) the experimental data are from . Altogether 43 representative $`Q^2`$ values were selected to cover the interval $`[0.2,1200]`$ GeV<sup>2</sup> and $`x\in [2\cdot 10^{-6},x_c]`$. Including more (or all available) data points had little effect on the resulting trend of the results. The relevant values of $`\chi ^2`$, with and without the non-singlet term, are given in Table 1.
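For concreteness, a minimal sketch of one such per-bin fit (ours; scipy is assumed, and the arrays x, f2, df2 are hypothetical stand-ins for the data points of a single $`Q_i^2`$ bin; the actual analysis repeats this over all bins and all forms (4)-(8)):

```python
import numpy as np
from scipy.optimize import curve_fit

def power_form(x, a, lam):                 # Eq. (4)
    return a * (1.0 / x)**lam

def log2_form(x, b0, b2):                  # Eq. (7)
    return b0 + b2 * np.log(1.0 / x)**2

# Hypothetical data for one Q_i^2 bin: x values, F2 and its errors.
x   = np.array([2e-5, 5e-5, 2e-4, 8e-4, 3e-3, 1e-2])
f2  = np.array([1.20, 1.05, 0.85, 0.70, 0.58, 0.50])
df2 = 0.05 * f2

for form, p0 in [(power_form, (0.3, 0.2)), (log2_form, (0.3, 0.01))]:
    p, cov = curve_fit(form, x, f2, p0=p0, sigma=df2, absolute_sigma=True)
    chi2 = np.sum(((f2 - form(x, *p)) / df2)**2)
    print(form.__name__, np.round(p, 4),
          "chi2/dof =", round(chi2 / (len(x) - 2), 2))
```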
Notice that we use the definition
$$\langle \chi ^2/\mathrm{dof}\rangle =\frac{\sum _{i=1}^{N_{bin}}\left(\frac{\chi _i^2}{n_{data_i}-m_{para}}\right)}{N_{bin}},$$ (12)
where each $`Q_i^2`$ bin, out of a total of $`N_{bin}`$ bins, contains $`n_{data_i}`$ data points and gives a contribution $`\chi _i^2`$ when fitting eqs. (4)-(8), each containing $`m_{para}`$ parameters.

Table 1. Results of the fits without or with the non-singlet term (9) for small $`x`$ ($`x<0.05`$). The total number of experimental points is 508.

| Version | Power | Power | Logarithm | Logarithm | Logarithm |
| --- | --- | --- | --- | --- | --- |
| Eq. | (4) | (5) | (6) | (7) | (8) |
| Nb. of parameters | 2 | 2 | 2 | 2 | 3 |
| $`\chi ^2`$ | 282 | 262 | 463 | 303 | 231 |
| $`\langle \chi ^2/\mathrm{dof}\rangle `$ | 0.68 | 0.62 | 1.04 | 0.70 | 0.61 |
| Eqs. | (4,9) | (5,9) | (6,9) | (7,9) | - |
| Nb. of parameters | 3 | 3 | 3 | 3 | - |
| $`\chi ^2`$ | 247 | 225 | 253 | 234 | - |
| $`\langle \chi ^2/\mathrm{dof}\rangle `$ | 0.67 | 0.60 | 0.67 | 0.62 | - |

Notice that in performing the small-$`x`$ fit we profited from a large set of available data, while in the large-$`x`$ extension a representative set of 30 $`Q^2`$ bins ($`Q^2\in [1.5,2000]`$ GeV<sup>2</sup>) was used. The data are from . The relevant $`\chi ^2`$ values are shown in Table 2. Two options are presented: the first one relies entirely on the extension by ; in the second one the exponent $`n(Q^2)`$ (see (10)) is fitted for each $`Q^2`$ bin.

Table 2. Results of the fits for all $`x`$ ($`x<1.0`$), when the parameter $`n`$ of the large-$`x`$ extension is chosen as in or fitted. The total number of experimental points is 545.

| Version | Power | Power | Logarithm | Logarithm | Logarithm |
| --- | --- | --- | --- | --- | --- |
| Eqs. | (4,9,10,11) | (5,9,10,11) | (6,9,10,11) | (7,9,10,11) | (8,9,10,11) |
| Nb. of parameters | 3 | 3 | 3 | 3 | 4 |
| $`\chi ^2`$ | 368 | 371 | 894 | 399 | 319 |
| $`\langle \chi ^2/\mathrm{dof}\rangle `$ | 0.79 | 0.79 | 1.74 | 0.85 | 0.78 |
| Eqs. | (4,9,10) | (5,9,10) | (6,9,10) | (7,9,10) | - |
| Nb. of parameters | 4 | 4 | 4 | 4 | - |
| $`\chi ^2`$ | 321 | 317 | 541 | 329 | - |
| $`\langle \chi ^2/\mathrm{dof}\rangle `$ | 0.76 | 0.75 | 1.25 | 0.78 | - |

The $`Q^2`$ dependence of the parameters is shown in Figs. 1-3. We display the most representative results from the small-$`x`$ fit, those that may clarify asymptotic trends in the behaviour of the singlet SF (see and the following discussion of the results). As already explained, the large-$`x`$ extension was intended merely to support the small-$`x`$ results.

Fig. 1. Results of our analysis for $`a\left(Q_i^2\right)`$ and $`\lambda \left(Q_i^2\right)`$ entering in the parametrization (4): $`F_2^{S,0}=a(\frac{1}{x})^\lambda `$ of the small-$`x`$ structure function ($`x<x_c=0.05`$); they are fitted to the discrete values of $`Q^2`$, data from ; $`Q^2`$ is in GeV<sup>2</sup>, the error bars are produced by the minimization program “Minuit”.

Fig. 2. Same as Fig. 1 for $`a_0\left(Q_i^2\right)`$, $`a_1\left(Q_i^2\right)`$ and parametrization (5): $`F_2=a_0(\frac{1}{x})^{0.418}+a_1(\frac{1}{x})^{0.0808}`$.

Fig. 3. Same as Fig. 1 for $`b_0\left(Q_i^2\right)`$ and $`b_2\left(Q_i^2\right)`$ and parametrization (7): $`F_2^{S,0}=b_0+b_2\mathrm{ln}^2(\frac{1}{x})`$.

The following comments are in order: all the parametrizations (4)-(8), except (6), result in fits of roughly equal quality. We may rule out parametrization (6), which gives the poorest (as expected) agreement with the data; we also set aside the least economical parametrization (8) (largest number of free parameters; not shown in the figures).
In terms of raw fit quality, the best results are achieved with parametrization (8), which gives the best value of the total $`\chi ^2`$. Although fit (8) contains an extra free parameter with respect to the rest, its $`\langle \chi ^2/\mathrm{dof}\rangle `$ value is nevertheless better than in the other variants (4)-(7). Notice that (8) leads to alternating signs of the coefficients. Such an effect has been observed earlier in a fit to hadronic total cross sections ; it resembles the first few terms in an expansion of the supercritical Pomeron in an alternating series of logarithms.

### 3.2 $`x`$-slope

A clear indicator of the rate of increase of $`F_2`$ is its logarithmic derivative, or $`x`$-slope,
$$B_x(x,Q_i^2)=\frac{\partial \mathrm{ln}F_2(x,Q_i^2)}{\partial \mathrm{ln}\frac{1}{x}}$$ (13)
identical with the effective power $`\lambda (Q_i^2)`$ in the case of a single, $`x`$-independent power term as in (1) (note that, by factorization, the intercept is $`Q^2`$ independent; that is why (1) is an “effective” Regge pole contribution rather than a genuine Pomeron ) (for an example of the utilization of this derivative, see ). The $`x`$-slope $`B_x`$ is a function of two variables, $`x`$ and $`Q^2`$, which in principle are independent, although correlated by the kinematical constraint $`y\le 1`$, which at HERA energies becomes $`Q^2<9\times 10^4\,x`$ (with $`Q^2`$ in GeV<sup>2</sup>). The derivative can be calculated either analytically, if the SF is parametrized explicitly, or numerically, by calculating the finite difference within certain intervals of $`x`$. If a given parametrization fits the data well, then the analytic differentiation has a chance to reflect the slope, although it will not be model-independent. By calculating the slopes over finite bins, we have a better chance of being model-independent, although the result may depend on the width of the chosen bins. For the parametrization (4), $`B_x=\lambda `$ is already shown in Fig. 1. We show in Fig. 4 the results of our analysis corresponding to the other representative cases (5), (7), the coefficients of which are exhibited above.

Fig. 4. $`x`$-slope $`B_x`$ versus $`Q^2`$ for parametrizations (5): $`F_2=a_0(\frac{1}{x})^{0.418}+a_1(\frac{1}{x})^{0.0808}`$ (left side) and (7): $`F_2^{S,0}=b_0+b_2\mathrm{ln}^2(\frac{1}{x})`$ (right side).

Asymptotically, as $`x\to 0`$, the $`x`$-slope $`B_x`$ calculated from eqs. (5)-(8) is $`Q^2`$ independent. However, for finite values of $`x`$, in the range of the present experiments, the $`Q^2`$ dependence, as can be seen from Figs. 1 and 4, is still essential.

## 4 Conclusions

Our comparative analysis shows that several competing parametrizations for the small-$`x`$ structure function exist, providing equally good fits to the data. The $`Q^2`$ dependent intercept in (1) may be considered as an “effective” one, reflecting the contribution from two Pomerons in (2). Notice that the logarithmic parametrization (3), which respects the unitarity bounds, is equally efficient. The fits also show some evidence that the rise of the singlet component of the structure function with $`1/x`$ moderates as $`Q^2`$ increases, the turning point being around $`Q^2=200`$ GeV<sup>2</sup>, after which $`F_2(x,Q^2)`$ decelerates monotonically. Such a slow-down (deceleration) of the rate of increase was anticipated already in . Later, it was confirmed and discussed in the framework of a model interpolating (combining) between Regge behaviour and the high-$`Q^2`$ asymptotics of the GLAP evolution equation.
It was also discussed in within a traditional Regge-type model with a $`Q^2`$ independent Pomeron intercept. Apart from the turn-over in the $`x`$-slope $`B_x`$, the “softening” of the singlet SF towards the highest $`Q^2`$ may also be seen in the behaviour of the fitted $`Q^2`$ dependent coefficients, namely $`a_1`$ in Fig. 2 and $`b_2`$ in Fig. 3. Here, we only mention that the origin of the phenomenon, if confirmed, is either the increasing role of shadowing as $`Q^2`$ increases, restoring the Froissart bound, or the revelation of a contribution different from the “perturbative” Pomeron (whose role was believed to increase with increasing virtuality $`Q^2`$), or a combination of the two. As discussed in , the concavity of the slope $`B_x`$ with respect to $`Q^2`$ is another important quantity, indicative of the path of evolution (GLAP or BFKL). Both evolution equations (BFKL and GLAP) are known to be the theoretical bases of the small-$`x`$ behaviour of the structure functions. While the perturbative solution is well known for the GLAP equation, its convergence for the BFKL equation is still debated. Approximate solutions of both, and the relevant path in the $`x`$-$`Q^2`$ plane, may be revealed both from phenomenological models and from fits to the data. Present data may already reveal the actual path, but it should be remembered that the highest measured values of $`Q^2`$ do not reach the smallest $`x`$, so further measurements at the smallest possible $`x`$ and highest $`Q^2`$ are eagerly awaited.
# Which Physics Laws are Deduced from the Logic Properties of the Information?

## Abstract

The principles of relativity theory and of quantum theory are deduced from the logic properties of the information obtained from a physical device.

This paper presents a logical development of the ideas of Bergson , Whitehead , Capek , Stapp -, and Whipple , according to which “… events must be treated as the fundamental objective constituents … events and not particles constitute the true objective reality”. (The papers - of A. Jadczyk and Ph. Blanchard have been related to this topic for some time.)

Information obtained from a physical device $`\widehat{𝐚}`$ can be expressed by a set $`𝐚`$ of sentences of some language. The set $`𝐚`$ is called “the recorder of the device $`\widehat{𝐚}`$”. The set of recorders gives rise to structures similar to clocks . The following results are deduced from the logic properties of the set of recorders:

First, all such clocks have the same direction, i.e. if the event expressed by the sentence $`A`$ precedes the event expressed by the sentence $`B`$ with respect to one such clock, then the same holds for all other such clocks.

Second, the time defined by such clocks proves irreversible, i.e. no recorder can obtain the information that a certain event has taken place before it has actually taken place. Thus, nobody can return to past times or obtain information from future times.

Third, the set of recorders can be embedded in a metric space in a natural way, i.e. all the metric space axioms are obtained from the logic properties of the set of recorders.

Fourth, if this metric space proves to be Euclidean, then the corresponding recorder “space-time” obeys the transformations of the complete Poincaré group, i.e. in this case the Special Theory of Relativity follows from the logic properties of the information. If this metric space is not Euclidean, then some non-linear geometry exists on the space of recorders, and some variant of the General Theory of Relativity can be realized on this space.

Therefore, the principal properties of time (its one-dimensionality and irreversibility), the metric properties of space, and the spatial-temporal principles of the theory of relativity are deduced from the logic properties of the set of recorders. Hence, if you have any set of objects able to receive, keep, and/or transmit information, then “time” and “space” inevitably arise on this set. And it does not matter whether this set belongs to our world or to some other world in which a spatial-temporal structure does not initially exist. Hence, the spatial-temporal structure arises from the logic properties of the information.

There is an evident close affinity between the classical probability function and the Boolean function of classical propositional logic : these functions differ only by their range of values. That is, if the range of values of the Boolean function is extended from the two-element set $`\{0;1\}`$ to the segment $`[0;1]`$ of the real number axis, then the logical analog of the Bernoulli Law of Large Numbers can be deduced from the logic axioms . And if the range of values of such a function is extended to a segment of some suitable variant of the hyperreal number axis, then this theorem endows the function with a statistical meaning .
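This affinity can be made concrete with a toy valuation on propositional connectives. The sketch below is our own illustration, not the construction of the paper: the product and inclusion-exclusion rules assume independent sentences and are merely one possible way of extending the range of values from $`\{0;1\}`$ to $`[0;1]`$.

```python
# Toy illustration: a Boolean valuation on {0,1} versus a [0,1]-valued
# "probability" valuation, defined by the same rules for the connectives.
# The product/sum rules below assume independent sentences; they are one
# possible choice, not the formalism used in the paper.

def val_not(a): return 1 - a
def val_and(a, b): return a * b
def val_or(a, b): return a + b - a * b  # de Morgan dual of AND

# On {0,1} these rules reproduce the classical truth tables ...
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", val_and(a, b), val_or(a, b), val_not(a))

# ... while on [0,1] the same formulas behave like probabilities of
# independent events: P(A and B) = P(A)P(B), P(not A) = 1 - P(A), etc.
pa, pb = 0.9, 0.4
print("P(A and B) =", val_and(pa, pb))
print("P(A or B)  =", val_or(pa, pb))
print("P(not A)   =", val_not(pa))
```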
The probability must satisfy a certain simple condition in order to be expressed by a relativistic $`(\mu +1)`$-vector of the probability density . Such a probability is called “the tracklike probability”. The Dirac equation is deduced from the properties of such a probability under the Poincaré group transformations . Hence the behavior of an elementary physical particle in vacuum resembles the behavior of a tracklike probability. In the two-slit experiment, if the partition with two slits between the source of the particle and the detecting screen is placed in vacuum, then interference of the probability is observed. But if this system is placed in a Wilson cloud chamber, then the particle acquires a clear track, marked by the condensate drops, and all interference vanishes. It looks as follows: the particle exists only at the moments at which some event happens on it; at all other times the particle does not exist, and only the probability of some event on it exists. Hence, if no events happen on the particle between the birth event and the detection event, then between these events the behavior of the particle is the behavior of a probability, and the interference is visible. But in the Wilson cloud chamber, where the ionization acts form an almost continuous line, the particle has a clear track and there is no interference. And the particle moves because such a line is not absolutely continuous: every ionization point has a neighboring ionization point, and no event happens on the particle between these points. Therefore, the particle moves because the corresponding probability propagates in the space between these points. Therefore a particle is an ensemble of events bound together by probabilities (this is similar to ). In $`3+1`$-dimensional space-time all interactions between fermions can be expressed by some division algebra (the Cayley algebra), but such an algebra does not exist in a space-time with more than $`3+1`$ dimensions . Hence fermions cannot leave this $`3+1`$ space-time. Thus particles and fields are not the basic entities of the Universe; the basic entities are the logical events and the logical probabilities. The Universe, i.e. time, space and all their contents, is a by-product of the deduction from the logical events.
# Effective Field Theory for Layered Quantum Antiferromagnets with Non-Magnetic Impurities

## Abstract

We propose an effective two-dimensional quantum non-linear sigma model combined with classical percolation theory to study the magnetic properties of site diluted layered quantum antiferromagnets like La<sub>2</sub>Cu<sub>1-x</sub>M<sub>x</sub>O<sub>4</sub> (M$`=`$Zn, Mg). We calculate the staggered magnetization at zero temperature, $`M_s(x)`$, the magnetic correlation length, $`\xi (x,T)`$, the NMR relaxation rate, $`1/T_1(x,T)`$, and the Néel temperature, $`T_N(x)`$, in the renormalized classical regime. Due to quantum fluctuations we find a quantum critical point (QCP) at $`x_c\simeq 0.305`$, at lower doping than the two-dimensional percolation threshold $`x_p\simeq 0.41`$. We compare our results with the available experimental data. PACS numbers: 75.10.-b, 75.10.Jm, 75.10.Nr

The discovery of high temperature superconductivity in La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> has motivated an enormous number of experimental and theoretical studies of this and related materials. La<sub>2</sub>CuO<sub>4</sub> has attracted a lot of interest because it is a classical example of a quantum Heisenberg antiferromagnet (QHAF). La<sub>2</sub>CuO<sub>4</sub> is a layered, quasi-two-dimensional (2D) QHAF, with an intraplanar coupling constant $`J`$ ($`J/k_B\simeq 1500`$ K) much larger than the interplanar coupling $`J_{\perp }`$ ($`J_{\perp }\simeq 10^{-5}J`$) . The quantum nonlinear sigma model (QNL$`\sigma `$M) is probably the simplest continuum model with the correct symmetry and spin-wave spectrum that reproduces the low-energy behavior of a QHAF. It has been successfully used to explain many magnetic properties of La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> . In this paper we propose an effective two-dimensional QNL$`\sigma `$M allied to classical percolation theory to study the site dilution effect in La<sub>2</sub>Cu<sub>1-x</sub>M<sub>x</sub>O<sub>4</sub>, where M is a non-magnetic atom. While the theory of disordered classical magnetic systems is fairly well developed, we still lack a deep understanding of the behavior of the site diluted QHAF . As we show below, the interplay between quantum fluctuations and disorder leads to new effects which cannot be found in classical magnets. In particular, we show that long-range order (LRO) is lost before the system reaches the classical percolation threshold. Furthermore, we have only two independent parameters in the theory: the spin-wave velocity $`c_0`$ ($`c_0\simeq 0.74`$ eV Å/$`\hbar `$) and the bare coupling constant $`\overline{g}_0`$ ($`\overline{g}_0\simeq 0.685`$ ) of the clean system ($`x=0`$). The results for the staggered magnetization, correlation length, NMR relaxation rate and Néel temperature are derived without any further adjustable parameters. Our starting point is the 2D site diluted nearest-neighbor isotropic Heisenberg model
$$H=J\sum _{\langle i,j\rangle }p(𝐫_i)p(𝐫_j)\,𝐒_i\cdot 𝐒_j,$$ (1)
where $`p(𝐫)`$ is the distribution function for the Cu sites: $`p(𝐫)=1`$ on Cu sites and $`p(𝐫)=0`$ on M sites. Although translational invariance has been lost in (1), the Hamiltonian retains the SU(2) invariance under rotations in spin space. Since the symmetry is continuous, Goldstone’s theorem predicts the existence of a gapless mode in the broken symmetry phase. The ordered phase is characterized by a finite expectation value of the magnetization, $`𝐧=\langle 𝐒(𝐐)\rangle `$, at the antiferromagnetic ordering vector $`𝐐=(\pi /a_0,\pi /a_0)`$ ($`a_0=3.8`$ Å).
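Since the geometry of the diluted lattice controls much of what follows, a small self-contained sketch may help; it is our own illustration, not part of the original analysis. It draws the site distribution $`p(𝐫)`$ at dilution $`x`$ on a finite grid and estimates the fraction of sites in the largest connected cluster, a finite-size proxy for the infinite-cluster probability used below; the grid size and random seed are arbitrary.

```python
import numpy as np
from scipy.ndimage import label

def p_infinity(x, size=512, seed=0):
    """Estimate the infinite-cluster fraction on a size x size square
    lattice by taking the largest nearest-neighbor cluster of occupied
    (Cu) sites as a proxy for the infinite cluster."""
    rng = np.random.default_rng(seed)
    p = rng.random((size, size)) >= x      # p(r) = 1 on Cu sites, 0 on M sites
    labels, nclusters = label(p)           # nearest-neighbor connectivity
    if nclusters == 0:
        return 0.0
    sizes = np.bincount(labels.ravel())[1:]  # cluster sizes (label 0 = holes)
    return sizes.max() / p.size

for x in (0.0, 0.1, 0.2, 0.3, 0.41):
    print(f"x = {x:.2f}:  P_inf ~ {p_infinity(x):.3f}")
```

Near the 2D percolation threshold the largest-cluster fraction on a finite grid only approximates the true infinite-cluster probability, which is why the full analysis below relies on published numerical simulations instead.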
In the pure system, in accord with the Hohenberg-Mermin-Wagner theorem, LRO for a system with continuous symmetry is only possible at finite temperatures in dimensions larger than 2. In the absence of disorder the system has a Goldstone mode which is a spin wave around $`𝐐`$ with energy $`E(𝐤)`$ and a linear dispersion relation with the wave-vector $`𝐤`$: $`E(𝐤)=\hbar c|𝐤|`$, where $`c`$ is the spin-wave velocity. This dispersion relation is a consequence of the Lorentz invariance of the system. In the paramagnetic phase, where the continuous symmetry is recovered, all excitations are gapped because order is only retained in a region of size $`\xi `$. In this case the excitations have dispersion $`E(𝐤)=\hbar c\sqrt{𝐤^2+1/\xi ^2}.`$ (2) Now consider the case where quenched disorder is present. Spin-wave theory, which can only be applied to (1) at $`T=0`$, predicts that Lorentz invariance is lost even for an infinitesimal amount of impurities . The dispersion changes to $`k\mathrm{ln}(k)`$ and the spin waves become damped at a rate proportional to $`k`$ as $`k\to 0`$. These results (strictly valid in 2D and at $`T=0`$) are not directly applicable to the systems in question, which order at finite temperature . At finite temperatures and weak disorder we can consider the criterion established by Harris for the relevance of disorder in critical phenomena . Firstly, we can classify the phase diagram of the pure system as : renormalized classical (RC), where $`\xi (T)`$ diverges as $`\mathrm{exp}(T_0/T)`$ (where $`T_0`$ is a characteristic temperature scale; see (7)); quantum critical (QC), where $`\xi (T)\propto 1/T`$; quantum disordered (QD), where $`\xi (T)\approx \xi _0`$ is constant. If we imagine the pure system being divided into regions of size $`\xi `$, each part will have fluctuations in the microscopic coupling constant ($`g`$, say) which, by the central limit theorem, scale with the inverse square root of the number of spins $`N(\xi )\propto \xi ^2`$ in that region. That is, there are statistical fluctuations of order $`\delta g(\xi )\propto 1/\sqrt{N(\xi )}\propto 1/\xi `$. On the other hand, the thermal fluctuations in the system are of order $`\delta T(\xi )\propto 1/\mathrm{ln}(\xi /a_0)`$ in the RC region, $`\propto a_0/\xi `$ in the QC region, and vanishingly small in the QD region. For the critical behavior of the system with weak disorder to be essentially the same as for the pure system one must require that $`\delta T(\xi )\gg \delta g(\xi )`$ when $`\xi \gg a_0`$. Observe that this condition is always fulfilled in the RC regime and therefore we expect the critical behavior there to be the same as in the pure system, that is, described by a QNL$`\sigma `$M . In the QC and QD regimes the situation is not clear because $`\delta T(\xi )\lesssim \delta g(\xi )`$ and therefore the effect of disorder is strong. We conjecture that in these regimes the critical behavior is different from the one described by a QNL$`\sigma `$M. In this work we focus entirely on the RC regime. With these results in mind we can apply classical percolation theory to (1) . The main parameters of the problem depend on geometrical factors such as the probability of finding a spin in the infinite cluster, $`P_{\infty }(x)`$ ($`\approx 1-x`$ for $`x\ll 1`$), and the bond dilution factor $`A(x)`$ ($`\approx 1-\pi x+\pi x^2/2`$) (in the expressions below $`P_{\infty }(x)`$ and $`A(x)`$ are valid for all $`x`$, as given by the numerical simulations ).
In the classical case the spin stiffness $`\rho _s(x)`$ is related to the undoped stiffness by $`\rho _s(x)=A(x)\rho _s(0)`$, while the transverse susceptibility is given by $`\chi _{\perp }(x)=(P_{\infty }(x)/A(x))\chi _{\perp }(0)`$, so that $`\rho _s(x)=c^2(x)\chi _{\perp }(x)`$. In this paper we propose an effective field theory which is valid for $`T_N\lesssim T<J/k_B`$ and combines the Lorentz invariance implied in (2), the Harris criterion, and the results of percolation theory. In percolation theory, besides the infinite cluster, we always have finite clusters. A finite cluster of size $`L`$ has discrete energy levels and therefore a gap of order $`\hbar c/L`$. In what follows we assume $`\xi \gg L`$, ignore the contribution of finite clusters to the magnetic properties, and focus entirely on the physics of the infinite cluster. It is obvious from the definition of $`p(𝐫)`$ that on average $`\langle p(𝐫)\rangle =P_{\infty }(x)`$. Furthermore, site dilution implies that $`𝐧^2(𝐫)=p(𝐫)`$. Thus, on average we have $`\langle 𝐧^2(𝐫)\rangle =P_{\infty }(x)`$. In the continuum limit of (1), the Harris criterion discussed above indicates that in the long-wavelength, low-energy limit the magnetic properties of the site diluted problem can be described in terms of an effective QNL$`\sigma `$M: $`Z={\displaystyle \int D𝐧\,\delta \left[𝐧^2-P_{\infty }(x)\right]\mathrm{exp}\left\{-S_{eff}/\hbar \right\}},`$ (3) where $`S_{eff}`$ $`=`$ $`\frac{1}{2}{\displaystyle \int _0^{\beta \hbar }}d\tau {\displaystyle \int d𝐫\left[\chi _{\perp }(x)\left|\partial _\tau 𝐧\right|^2+\rho _s(x)\left|\nabla 𝐧\right|^2\right]}`$ (4) and $`\tau `$ is the imaginary time direction with $`\beta =1/(k_BT)`$. Equation (4) leads to a natural description of the undoped system and provides an effective field theory for the QNL$`\sigma `$M in the presence of impurities. Moreover, it has incorporated the correct properties of the classical percolation problem added to the quantum fluctuations of the QHAF. It is very simple to show by a change of variables that the action in (4) can be rewritten as
$$\frac{S_{eff}}{\hbar }=\frac{1}{2g(x)}\int _0^{\beta \hbar c(x)}d\tau \int d𝐫\left(\partial _\mu 𝐧\right)^2$$ (5)
where $`g(x)=\hbar c(x)/\rho _s(x)`$ is the effective coupling constant of the theory. Moreover, because of the continuum limit, the theory has an intrinsic ultraviolet cut-off $`\mathrm{\Lambda }(x)=2\sqrt{\pi P_{\infty }(x)}/a_0`$ which is fixed by the total number of states. In writing (5) we have not included the topological term. In a random system one suspects that this term vanishes as in the pure 2D case . Nevertheless, there are always statistical fluctuations in a random system which are of order $`\sqrt{N_I}`$, where $`N_I`$ is the number of M ions. Thus, the topological term has importance, as we discuss at the end of the paper. The great advantage of (5) is its simplicity and close relationship to the description of the undoped problem. In this paper we use the large-$`N`$ approach for the QNL$`\sigma `$M, which has been so successful in describing the undoped system . At zero temperature, a critical value of the coupling constant $`g_c(x)`$ separates the RC from the QD region. $`g_c(x)=4\pi P_{\infty }(x)/\mathrm{\Lambda }(x)`$ can be obtained from the saddle-point equation for (5). The ratio of the coupling constant to the critical coupling constant is $`\overline{g}(x)\equiv g(x)/g_c(x)=\overline{g}_0/P_{\infty }(x)`$, which implies that non-magnetic doping drives the system from the RC region to the QD region at $`x_c`$, where $`P_{\infty }(x_c)=\overline{g}_0`$ at $`T=0`$.
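The condition $`P_{\infty }(x_c)=\overline{g}_0`$ is straightforward to evaluate numerically; a minimal sketch, assuming the dilute-limit expression $`P_{\infty }(x)\approx 1-x`$ quoted above (the full analysis uses the numerical-simulation $`P_{\infty }(x)`$ instead, which yields $`x_c\simeq 0.305`$):

```python
from scipy.optimize import brentq

g0_bar = 0.685          # bare coupling of the clean system

def p_inf_dilute(x):
    """Dilute-limit infinite-cluster probability, P_inf(x) ~ 1 - x.
    The full analysis uses P_inf(x) from numerical simulations instead."""
    return 1.0 - x

# QCP condition: P_inf(x_c) = g0_bar at T = 0.
x_c = brentq(lambda x: p_inf_dilute(x) - g0_bar, 0.0, 0.41)
print(f"x_c (dilute limit) = {x_c:.3f}")   # -> 0.315, i.e. x_c ~ 0.3
```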
The critical concentration $`x_c`$ is completely determined by the value of $`\overline{g}_0`$ in the undoped case. Using the dilute result for $`P_{\infty }(x)`$ and $`\overline{g}_0=0.685`$ we find $`x_c\approx 0.3`$, which is indeed smaller than the percolation threshold $`x_p\approx 0.41`$ . This result has to be contrasted with classical calculations, where long range order is lost only at the percolation threshold. We also performed a one-loop renormalization group analysis and calculated the zero temperature staggered magnetization $`M_s(x)=M_0(x)\sqrt{1-\overline{g}(x)}`$. Here $`M_0(x)`$ is the classical staggered magnetization for perfect Néel spin alignment and the remaining factor is due to quantum fluctuations. Thus, the local average magnetic moment is given by
$`{\displaystyle \frac{\mu (x)}{\mu (0)}}={\displaystyle \frac{M_s(x)/M_0(x)}{M_s(0)/M_0(0)}}=\sqrt{{\displaystyle \frac{1-\overline{g}(x)}{1-\overline{g}(0)}}}.`$ (6)
Observe that the average local moment indeed vanishes at $`x_c`$. For the undoped case, (6) predicts that the maximum measured magnetic moment of the Cu ion is $`\approx 0.56\mu _B`$, which agrees with the measured value $`0.6\pm 0.15\mu _B`$ . It is also in good agreement with the existing experimental sublattice magnetization measured by $`\mu `$SR for various doping concentrations, as shown in Fig. 1. Notice that for the Ising magnet $`\mu (x)`$ only deviates from $`\mu (0)`$ at $`x_p`$. The larger reduction of the moment in the QHAF is due to the quantum fluctuations present in the QNL$`\sigma `$M. The magnetic correlation length $`\xi `$ can be directly calculated from the QNL$`\sigma `$M. The interpolation formula from the RC to the QC region reads
$$\xi (x,T)=\left(\frac{e\hbar c(x)}{4}\right)\frac{\mathrm{exp}\left(2\pi \rho _{R,s}(x)/k_BT\right)}{4\pi \rho _{R,s}(x)+k_BT},$$ (7)
where $`\rho _{R,s}(x)=\rho _s(x)[1-\overline{g}(x)]`$ is the renormalized spin stiffness. This result agrees very well with the Monte Carlo simulations over a large temperature range in the undoped case . As far as we know, the only existing neutron scattering results for the magnetic correlation length are for the pure system and for $`x=0.05`$ . In Fig. 2 we plot the available data and the prediction of our model given in (7). As is well known, samples with $`x=0.05`$ have problems with the oxygen stoichiometry . Excess O introduces mobile holes in the plane which produce strong frustration effects that are not accounted for in our theory. Thus, direct comparison between theory and experiment for this sample is problematic, especially at high temperatures, and only new experiments with controlled O content can directly test our theory. Chakravarty and Orbach have calculated the nuclear spin-lattice relaxation rate of Cu for La<sub>2</sub>CuO<sub>4</sub> using the dynamical structure factor from the QNL$`\sigma `$M. A detailed calculation was done in Refs. . These calculations can easily be extended to the doped case.
Here we just quote the result for $`\mathrm{\Lambda }\xi \gg 1`$: $`{\displaystyle \frac{1}{T_1(x,T)}}`$ $`=`$ $`\gamma ^2P_{\infty }(x)\sqrt{2\pi ^3}S(S+1)`$ (8) $`\times `$ $`ϵ\left(A_{\perp }-4P_{\infty }(x)B\right)\sqrt{1-{\displaystyle \frac{2A_{\perp }B}{A_{\perp }^2+4B^2}}}`$ (9) $`\times `$ $`{\displaystyle \frac{\left[\left(A_{\perp }-4P_{\infty }(x)B\right)\xi ^2+4P_{\infty }(x)Ba_0^2\mathrm{ln}\left(\xi \mathrm{\Lambda }\right)\right]}{3\omega _e(x)\xi a_0\left(\mathrm{ln}\left(\xi \mathrm{\Lambda }\right)\right)^2}}`$ (10) where $`\gamma `$ is the nuclear gyromagnetic ratio, $`A_{\perp }=80`$ kG and $`B=83`$ kG are the hyperfine constants, and $`\omega _e(x)=A(x)\sqrt{\left({\displaystyle \frac{2J^2k_B^2zS(S+1)}{3\hbar ^2}}\right)}`$ (11) (where $`z`$ is the number of nearest neighbor spins) is the corrected Heisenberg exchange frequency. Fig. 3 shows the NMR relaxation rate normalized to its high temperature value, as given by the experimental data and by our calculations. The growth of the relaxation rate at low temperatures is due to the fast growth of $`\xi `$. As the system approaches the QCP one starts to see the crossover from the RC to the QC regime, where $`\xi `$ grows like $`1/T`$, leading to a slower growth of the relaxation rate. This behavior is clearly seen in the data for $`x=0.11`$, where the growth is very slow from $`800`$ K down to $`400`$ K. The agreement between data and theory is again quite reasonable. The 3D Néel order can be obtained from the weak interplane coupling $`J_{\perp }`$ and is given by :
$$k_BT_N\approx J_{\perp }P_{\infty }(x)\left(\frac{\xi (x,T_N)}{a_0}\right)^2\left(\frac{M_s(x)}{M_0(x)}\right)^2$$ (12)
which is a transcendental equation for $`T_N(x)`$. The interplanar coupling constant is insensitive to doping because the change in the lattice parameters is negligible . In the undoped case the Néel temperature $`T_N(0)`$ is of the order of $`315`$ K. The initial suppression rate of the Néel temperature with doping, $`I=-d\mathrm{ln}(T_N(x))/dx`$ as $`x\to 0`$, can be obtained directly from (12) and, due to quantum fluctuations, is much faster than in the Ising case (dashed line in Fig. 4). We find $`I\approx 4.7`$, in good agreement with the data. Indeed, in Fig. 4 we show our theoretical results in comparison with various different experimental measurements. The critical concentration $`x_c`$ at which the system loses long-range order by moving from the RC region to the QD region is approximately $`0.305`$, in agreement with the loss of long-range order at zero temperature as given in (6). Finally, it is also easy to show, using the procedure given in ref. , that the topological term will lead to induced moments close to the impurities. These moments interact through a random magnetic exchange of order $`Je^{-(a_0x)/\xi (x,T)}`$. This effect can lead to ordering of the induced moments in the paramagnetic phase, as seen experimentally . In conclusion, we have proposed an effective QNL$`\sigma `$M to describe the site diluted QHAF. Our model combines the results of classical percolation theory and the quantum fluctuations of the Heisenberg model. Although our model is fairly simple, it gives a good quantitative description of the magnetism in La<sub>2</sub>Cu<sub>1-x</sub>M<sub>x</sub>O<sub>4</sub>. The success of our model in describing the physics of the RC regime is due to the fact that the 2D correlations are very long-ranged at finite temperatures and the effect of disorder on the critical behavior is rather weak.
Disorder induces quantum fluctuations in the system which lead to the final destruction of LRO at $`x_c`$. This effect is not found in classical magnets, where LRO is solely determined by the percolation problem. Finally, our arguments indicate that a new approach is required in the QC and QD regions, where the NL$`\sigma `$M is probably not applicable. We thank J. Baez, W. Beyermann, F. Borsa, B. Büchner, P. Carretta, G. Castilla, A. Chernyshev, M. Greven, P. C. Hammel, B. Keimer, D. MacLaughlin, U. Mohideen, and S. Sachdev for useful discussions and comments. We thank P. Carretta for providing us with his experimental results. We also acknowledge support by the A. P. Sloan foundation and support provided by the DOE for research at Los Alamos National Laboratory.
# Bar Diagnostics in Edge-On Spiral Galaxies. I. The Periodic Orbits Approach.

## 1 Introduction

The classification of spiral galaxies along the Hubble sequence (Sandage 1961) is difficult for highly inclined systems. The tightness of the spiral arms and the degree to which they are resolved into stars and H II regions are useless criteria when dealing with edge-on galaxies. Only one main criterion remains: the relative importance of the bulge with respect to the disk. The problem is more acute when it comes to determining if a galaxy is barred, as there is no easy way to identify a bar in an edge-on system. The presence of a plateau in the light distribution of a galaxy (typically the light profile along the major axis) is often taken to indicate the presence of a bar (e.g. de Carvalho & da Costa 1987; Hamabe & Wakamatsu 1989). However, this method has two serious shortcomings: axisymmetric features might be mistaken for a bar (e.g. a lens would probably produce a very similar effect) and end-on bars are likely to be missed (their plateaus would be both short and, in early type barred galaxies, superposed on the steep light profile of the bulge). The studies of the galaxy NGC 4762 by Burstein (1979a, 1979b), Tsikoudi (1980), Wakamatsu & Hamabe (1984), and Wozniak (1994) illustrate the uncertainties resulting from using such a method. It is clear that a photometric or morphological identification of bars in edge-on spiral galaxies is problematic and unsatisfactory. Kuijken & Merrifield (1995) (see also Merrifield 1996) were the first to demonstrate that a kinematical identification of bars in external edge-on spiral galaxies was possible. They calculated the projection of periodic orbits in a barred galaxy model for various lines-of-sight and showed that an edge-on barred disk produces characteristic double-peaked line-of-sight velocity distributions which would not occur in an axisymmetric disk. Equivalent methods have been used for many years in Galactic studies (e.g. Peters 1975; Mulder & Liem 1986; Binney et al. 1991; Wada et al. 1994; and more recently Weiner & Sellwood 1995; Beaulieu 1996; Sevenster et al. 1997; Fux 1997a,b), since the position-velocity diagrams (PVDs) of external galaxies are analogous to the longitude-velocity diagrams of the aforementioned studies. In this paper, we aim to develop bar diagnostics using the PVDs of edge-on spiral galaxies in the same spirit as Kuijken & Merrifield (1995). We will, however, study the signature of each family of periodic orbits separately (before joining them to obtain a global picture) and examine how it depends on the viewing angle. We use a well-studied mass model, a well-defined method to populate the periodic orbits, and we explore a large number of periodic orbit families. Our results should be used as a guide to interpret observations of the stellar and/or gaseous kinematics in edge-on spiral galaxies. While the gas streamlines can be approximated by periodic orbits, the presence of shocks will modify this behaviour significantly. Also, the collisionless stellar component is not confined to periodic or regular (quasi-periodic) orbits, and there could be a non-negligible fraction of stars on irregular orbits. Athanassoula & Bureau (1999a, hereafter Paper II) and Athanassoula & Bureau (1999b, hereafter Paper III) will provide bar diagnostics similar to those developed here but using, respectively, hydrodynamical and $`N`$-body simulations.
The identification of bars in edge-on spiral galaxies is not a goal in itself but rather a tool allowing us to deepen our understanding of bars. The particular line-of-sight to edge-on systems allows us to get a view of the kinematics of the entire symmetry plane of the disk in one single observation (assuming the disk is transparent) and provides a unique way of studying the dynamics of the disk globally. More importantly, such a diagnostic represents a unique opportunity to study the vertical structure of bars, of which very little is known observationally. Three-dimensional $`N`$-body simulations have shown that bars tend to buckle soon after their formation and settle with an increased thickness and vertical velocity dispersion, appearing boxy or peanut-shaped when viewed edge-on (e.g. Combes & Sanders 1981; Combes et al. 1990; Raha et al. 1991). Besides the clues provided by the Galaxy (e.g. Blitz & Spergel 1991; Binney et al. 1991; Weiland et al. 1994), little observational data exist to directly test this hypothesis. In fact, the vertical light distribution of a bar has never been measured. Kuijken & Merrifield (1995) and Bureau & Freeman (1997) are the only ones to have actively searched for the kinematical signature of large scale bars in boxy/peanut-shaped bulges. Although their results seem to support the scenario described above, only eight galaxies have been studied so far. A similar study of a sample of over thirty galaxies, most of which have a boxy or peanut-shaped bulge, will appear in Bureau & Freeman (1999). The development of better bar diagnostics and the search for bars in edge-on systems are the keys to a better understanding of the vertical structure of bars. This series of papers aims to fulfill the first of those needs; Bureau & Freeman (1999) will address the latter. In § 2, we describe the mass model used throughout this paper and detail the methods adopted to calculate and populate periodic orbits. The orbital properties of the mass model are described in § 3. In § 4, we describe the PVDs of edge-on spirals and develop kinematical bar diagnostics based on the properties of prototypical barred models with and without inner Lindblad resonances. We also generalise those diagnostics to a large range of models. The limitations of the models for interpreting spectroscopic observations are discussed in § 5. We summarise our results and conclude briefly in § 6.

## 2 Models

### 2.1 Density Distribution

The mass model used in this paper and in Paper II is the same as that used by Athanassoula (1992a,b, hereafter A92a, A92b); the results from all papers can therefore be directly compared and are complementary. We briefly review the main characteristics of the mass model here and refer the reader to A92a for more discussion of its properties. The mass model has four free parameters which define the density distribution. The bar is represented by a Ferrers spheroid (Ferrers 1877) with density
$$\rho (x,y,z)=\{\begin{array}{cc}\rho _0(1-g^2)^n\hfill & \text{for }g<1\hfill \\ 0\hfill & \text{for }g\ge 1,\hfill \end{array}$$ (1)
where $`g^2=x^2/a^2+(y^2+z^2)/b^2`$, $`a`$ and $`b`$ are the semi-major and semi-minor axes of the bar ($`a>b`$), $`\rho _0`$ is its central density, and $`(x,y,z)`$ are the coordinates in the frame corotating with the bar. We will consider both homogeneous ($`n=0`$) and inhomogeneous models, the latter with $`n=1`$. The semi-major axis is, as in A92a, fixed at 5 kpc but, contrary to A92a, the major axis of the bar is along the $`x`$-axis.
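For concreteness, the Ferrers density (1) translates directly into code. In the minimal sketch below the default semi-minor axis corresponds to $`a/b=2.5`$ and the central density is a placeholder (in the models it follows from the quadrupole moment $`Q_m`$, as described next):

```python
import numpy as np

def ferrers_density(x, y, z, a=5.0, b=2.0, rho0=1.0, n=1):
    """Ferrers spheroid density, eq. (1): rho = rho0 * (1 - g^2)^n for
    g < 1 and zero outside, with g^2 = x^2/a^2 + (y^2 + z^2)/b^2.
    Lengths in kpc; b = 2.0 gives a/b = 2.5, rho0 = 1.0 is a placeholder
    (in the models it is fixed by the quadrupole moment Q_m)."""
    g2 = x**2 / a**2 + (y**2 + z**2) / b**2
    return np.where(g2 < 1.0, rho0 * (1.0 - g2)**n, 0.0)

# Example: density along the bar major axis (y = z = 0); it vanishes
# beyond the bar semi-major axis, |x| >= a = 5 kpc.
xs = np.linspace(-6.0, 6.0, 7)
print(ferrers_density(xs, 0.0, 0.0))
```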
We have thus so far introduced two free parameters: the bar axial ratio $`a/b`$ (which fixes $`b`$) and the quadrupole moment of the bar $`Q_m`$ (which fixes $`\rho _0`$). A92a shows how the central density, axial ratio, quadrupole moment, and mass of the bar are related. The pattern speed of the bar ($`\mathrm{\Omega }_p`$), or equivalently the distance from the center to the Lagrangian points $`L_1`$ and $`L_2`$ ($`r_L`$), constitutes a third free parameter. The bar model described has often been used in the past and is well studied both in the context of orbital studies (e.g. Athanassoula et al. 1983; Papayannopoulos & Petrou 1983; Teuben & Sanders 1985) and of hydrodynamical simulations (e.g. Sanders & Tubbs 1980; Schwarz 1985). The main deficiencies of this density distribution are that the shape and axial ratio of the bar are independent of radius, and that the isodensities are necessarily ellipses. The density distribution we use has two axisymmetric components which, when combined together, produce a rotation curve rising relatively rapidly in the inner parts and flat in the outer parts. The first component is a Kuzmin/Toomre disk of surface density
$$\sigma (r)=\frac{V_0^2}{2\pi Gr_0}(1+r^2/r_0^2)^{-3/2}$$ (2)
(Kuzmin 1956; Toomre 1963), where $`V_0`$ and $`r_0`$ are fixed to yield a maximum disk circular velocity of 164.2 km s<sup>-1</sup> at 20 kpc. The second axisymmetric component is a central bulge-like spherical density distribution given by
$$\rho (R)=\rho _b(1+R^2/R_b^2)^{-3/2},$$ (3)
where $`\rho _b`$ is the bulge central density and $`R_b`$ its scalelength. The fourth free parameter of the mass model is the central concentration, $`\rho _c=\rho _0+\rho _b`$ (which fixes $`\rho _b`$). The bulge scalelength is determined by imposing a fixed total mass within 10 kpc. The models are therefore parametrised by an index $`n`$ ($`n=0`$ or 1) and by four free parameters: the bar axial ratio $`a/b`$, the quadrupole moment of the bar $`Q_m`$, the Lagrangian radius $`r_L`$, and the central concentration $`\rho _c`$. It should be noted that while the quadrupole moment of the bar affects all Fourier components of the potential equally, this is not the case for the axial ratio. The bar pattern speed and central concentration mainly affect the existence and position of the resonances. The models considered are those of A92a (see her Table 1). We will also use her units: $`10^6M_{\odot }`$ for masses, kpc for lengths, and km s<sup>-1</sup> for velocities. Based on a comparison with an observational sample (rotation curves, resonance positions, and Fourier components), A92a showed that these models are a fair representation of early type barred galaxies.

### 2.2 Periodic Orbits Calculations

The periodic orbits allowed by a model are found using the shooting method. Throughout this paper, we will only consider orbits in the plane of the disk ($`z=0`$). For a given position along the $`y`$-axis ($`x=0`$) and an initial velocity parallel to the $`x`$-axis ($`\dot{y}=0`$), we follow a trial orbit for half a turn in the reference frame of the bar. Other trial orbits with the same initial position but slightly different initial velocities allow iterative convergence to an orbit which “closes” after half a revolution. The orbits are integrated using a fourth-order Runge-Kutta method and the Newton-Raphson method is used to converge to the right initial velocity (Press et al. 1986).
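A compact illustration of this shooting procedure is sketched below. The rotating-frame equations of motion have the standard form, but the logarithmic bar-like potential, the pattern speed, and the bracketing root-finder are stand-ins chosen for brevity (the calculations above use the Ferrers-bar potential and a Newton-Raphson iteration), so the numbers are purely illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

OMEGA_P = 1.0                 # bar pattern speed (stand-in value)
V0, RC, Q = 1.0, 0.1, 0.8     # logarithmic potential, bar-like for Q < 1

def eom(t, s):
    """Equations of motion in the frame corotating with the bar:
    ax = -dPhi/dx + Omega_p^2 x + 2 Omega_p vy (and the y analogue),
    for Phi = (V0^2/2) ln(RC^2 + x^2 + y^2/Q^2)."""
    x, y, vx, vy = s
    d = RC**2 + x**2 + (y / Q)**2
    ax = -V0**2 * x / d + OMEGA_P**2 * x + 2.0 * OMEGA_P * vy
    ay = -V0**2 * y / (Q**2 * d) + OMEGA_P**2 * y - 2.0 * OMEGA_P * vx
    return [vx, vy, ax, ay]

def half_turn_vy(y0, v):
    """Launch from (0, y0) with velocity (-v, 0), i.e. parallel to the
    x-axis and prograde, and return ydot at the next y-axis crossing;
    the orbit 'closes' after half a revolution when this vanishes."""
    cross = lambda t, s: s[0]                     # event: x = 0
    cross.terminal, cross.direction = True, 1.0   # skips the launch point
    sol = solve_ivp(eom, (0.0, 50.0), [0.0, y0, -v, 0.0],
                    events=cross, rtol=1e-10, atol=1e-12)
    return sol.y_events[0][0][3] if sol.y_events[0].size else np.nan

# Bracket a sign change of ydot(v) and polish the root (the calculation
# described above uses Newton-Raphson; brentq is simply more robust here).
y0, vgrid = 0.5, np.linspace(0.1, 1.2, 12)
f = [half_turn_vy(y0, v) for v in vgrid]
for f1, f2, v1, v2 in zip(f, f[1:], vgrid, vgrid[1:]):
    if np.isfinite(f1) and np.isfinite(f2) and f1 * f2 < 0:
        v_closed = brentq(lambda v: half_turn_vy(y0, v), v1, v2)
        print(f"periodic orbit at y0 = {y0}: initial xdot = {-v_closed:.5f}")
        break
```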
By then moving the initial position along the $`y`$-axis, it is possible to delineate a family of periodic orbits. Here, we use a constant increment along the $`y`$-axis ($`\mathrm{\Delta }y=0.01`$ kpc for all families). All periodic orbits found in this way are symmetric with respect to the minor axis of the bar. It should be noted that it is possible to have more than one periodic orbit at a given position along the $`y`$-axis (with different initial velocities $`\dot{x}`$). In the limit of negligible pressure, gas streamlines coincide with periodic orbits. However, contrary to periodic orbits, gas streamlines cannot intersect. Thus, because we are mainly interested in studying the gaseous dynamics of barred spiral galaxies, we are not interested in periodic orbits that self-intersect or possess loops. We have therefore searched for and identified only direct, singly periodic, non-self-intersecting orbits, which may best represent the gas flow. This constraint limits the extent of the periodic orbit families we have studied. Periodic orbits can be regarded as galactic building blocks, but it is non-trivial to determine how best to use them to represent the gas distribution in a real galaxy. For stellar systems, Schwarzschild (1979) proposed a method where a linear combination (with non-negative weights) of orbits is used to reproduce the original mass distribution (yielding a self-consistent model). Here, we simply consider all periodic orbits from certain families to be populated equally. Whenever we plot orbits, we plot an equal number of timesteps (an equal number of “points”) for all orbits, independent of the period. Since we use a constant increment along the $`y`$-axis between the orbits of a given family, the resulting surface density along that axis is inversely proportional to the distance from the center (this would be true everywhere if the orbits were self-similar). This procedure will be used whenever we plot orbits. One shortcoming of this method is that, although we have only selected individual periodic orbits which do not self-intersect and do not possess loops, orbits from a given family or from different families of periodic orbits can intersect. Such situations could not occur in the case of gas.

## 3 Periodic Orbit Families

A detailed study of the periodic orbit families located within corotation in our models was carried out by A92a. In this paper, we will extend her study to the outer parts of the models (outside corotation) and draw heavily on her conclusions to explain the behaviour observed in the inner parts. For a more general description of the orbital structure and dynamics of barred spiral galaxies, we refer the reader to the excellent reviews by Contopoulos & Grosbøl (1989) and Sellwood & Wilkinson (1993). In this section, we will describe the main properties of the basic families of periodic orbits present in the models. We will focus on two inhomogeneous bar models which are prototypes of models with and without inner Lindblad resonances (ILRs). They are, respectively: model 001 ($`a/b=2.5`$, $`Q_m=4.5\times 10^4`$, $`r_L=6.0`$, $`\rho _c=2.4\times 10^4`$) and model 086 ($`a/b=5.0`$, $`Q_m=4.5\times 10^4`$, $`r_L=6.0`$, $`\rho _c=2.4\times 10^4`$). Using the results of A92a, it is easy to extend the conclusions drawn from models 001 and 086 to most other models. Figure 1 shows the characteristic diagrams for models 001 and 086.
For all calculated periodic orbits, these diagrams show the Jacobi integral ($`E_J=E-\stackrel{}{\mathrm{\Omega }}_p\cdot \stackrel{}{J}`$) of the orbit as a function of the position where the orbit intersects the $`y`$-axis. The Jacobi integral represents the energy in the rotating frame of the bar, and is the only combination of the energy and angular momentum which is conserved (neither being conserved separately in a rotating non-axisymmetric potential). All the major periodic orbit families are present. More exist, especially higher order resonance families near corotation, but they are probably unimportant for understanding the gas flow. Figure 3 of A92a shows examples of periodic orbits from the main families in model 001 (see Sellwood & Wilkinson 1993 for families outside corotation, although they use a slightly different potential). The most important families inside corotation are the $`x_1`$ and $`x_2`$. The $`x_1`$ orbits are elongated parallel to the bar and are generally thought to support it (see, e.g., Contopoulos 1980). The $`x_2`$ (and $`x_3`$) orbits are elongated perpendicular to the bar and only occur inside the ILRs. Some properties of the $`x_1`$ and $`x_2`$ periodic orbits which will be useful in the next sections are summarised in Figure 2. We do not consider the retrograde $`x_4`$ family here. The inner 4:1 family (four radial oscillations during one revolution) may be important for the structure of rectangular bars. Outside corotation, the dominant periodic orbit families are the $`x_1^{}`$ and outer 2:1, corresponding to the “$`x_i`$” families inside corotation. The $`x_1^{}`$ orbits are elongated parallel to the bar and located outside the outer Lindblad resonance (OLR). The outer 2:1 orbits are perpendicular to the bar and located between corotation and the OLR. The short period orbits (SPO) and long period orbits (LPO) are located around the (stable) Lagrange points $`L_4`$ and $`L_5`$ on the minor axis of the bar. Figure 3 shows the main precession frequencies for models 001 and 086, obtained by azimuthally averaging the mass distribution. The major resonances are easily identified: ILRs ($`\mathrm{\Omega }_p=\mathrm{\Omega }-\kappa /2`$), inner ultra-harmonic resonance (IUHR; $`\mathrm{\Omega }_p=\mathrm{\Omega }-\kappa /4`$), corotation ($`\mathrm{\Omega }_p=\mathrm{\Omega }`$), and OLR ($`\mathrm{\Omega }_p=\mathrm{\Omega }+\kappa /2`$). Defined this way, the presence of ILRs is not sufficient to guarantee the existence of the $`x_2`$ family. Contopoulos & Papayannopoulos (1980) showed that the $`x_2`$ family disappears for strong bars. For our mass model, A92a showed that the $`x_2`$ orbits are absent for small Lagrangian radii $`r_L`$, low central concentrations $`\rho _c`$, large bar axial ratios $`a/b`$, and large quadrupole moments $`Q_m`$ (see Figs. 6 and 7 of A92a). In particular, despite the presence of ILRs in model 086, no $`x_2`$ orbit exists. It is thus necessary to extend the classical definition of an ILR to the strong bar case. van Albada & Sanders (1982) and A92a propose that the existence of ILRs be tied to the existence of the $`x_2`$ periodic orbit family, and the positions of the ILRs identified with the minimum and maximum of the $`x_2`$ characteristic curve in the characteristic diagram (of course, there might be only one ILR). We will use this definition of the existence of ILRs in this paper, which explains why model 086, despite having two ILRs in the classical sense, is considered a “no-ILRs” model.
Similarly, we will assimilate the position of the IUHR with the maximum of the $`x_1`$ characteristic curve before the 4:1 gap in the characteristic diagram (A92a).

## 4 Bar Diagnostics

### 4.1 Detecting Edge-On Bars

In the spirit of Kuijken & Merrifield (1995) and Merrifield (1996), our basic tool to identify bars in edge-on disk galaxies will be PVDs. We obtain those by calculating the projected density of material in our edge-on barred disk models as a function of line-of-sight velocity and projected position along the major axis (for various lines-of-sight). These can then be directly compared with long-slit spectroscopy observations of edge-on spiral galaxies (with the slit positioned along the major axis) or with other equivalent data sets. The goal is to identify features in the PVDs which can be unmistakably associated with the presence of a bar. We discuss such features in the next sections.

### 4.2 Model 001 (ILRs)

Figures 4, 5, and 6 show PVDs for, respectively, the $`x_1`$, $`x_2`$, and outer 2:1 periodic orbit families of model 001, which has ILRs (or, equivalently, has an $`x_2`$ family of periodic orbits). Each figure presents the face-on appearance of the entire family of orbits (with orbits equally spaced along the $`y`$-axis and the extent of the family limited by gaps in the characteristic curve or the appearance of loops in the orbits) and PVDs obtained using an edge-on projection and various viewing angles with respect to the bar. The viewing angle $`\psi `$ is defined to be 0° when the bar is seen end-on and 90° when the bar is seen side-on. The upper left panel of Figure 4 shows that, because of the high curvature of the $`x_1`$ orbits on the major axis of the bar (A92a) and the crowding of orbits at its ends, overdensities of material are created which are analogous to those caused by shocks in hydrodynamical simulations (see Sanders, Teuben, & van Albada 1983; Sanders & Tubbs 1980; A92b). As expected, very high radial velocities are present in the PVDs when the bar is seen end-on, due to streaming up and down the bar. Conversely, the velocities are low when the bar is seen side-on because the movement is mostly perpendicular to the line-of-sight. In the next few paragraphs, we will analyse this effect in more detail, in order to understand the variation of the shape of the signature of the orbits in the PVDs as a function of the viewing angle. In general, the trace in a PVD of a two-dimensional elongated orbit seen edge-on can be thought of as a parallelogram. For the sake of simplicity, we will consider here an orbit which is symmetric about two perpendicular axes and which is centered at their origin, like the $`x_1`$ and $`x_2`$ orbits. If the orbit is seen exactly along one of its symmetry axes, then its trace in a PVD will be a line, both near and far sides of the orbit yielding the same radial velocity at a given projected distance from the center. In addition, the observed radial velocity will switch from positive to negative values at the center (the radial velocity is null at that point). However, for all other viewing angles, the trace of the orbit in a PVD will be strongly parallelogram-shaped, populating the “forbidden” quadrants (top-right and bottom-left quadrants of the PVDs considered here).
This shape is due to the fact that, when the line-of-sight is not parallel to an axis of symmetry of the orbit, the near and far sides of the orbit yield different radial velocities for a given projected distance from the center, and the position at which the observed radial velocity switches from positive to negative values is not the center, but rather is displaced slightly away from it. At that position, by definition, the tangent to the orbit is perpendicular to the line-of-sight. One only needs to look at the radial component of the velocity along an elongated orbit to see these effects. Generally, the highest tangential velocity occurs on the minor axis of the orbit and is parallel to its major axis. The opposite is also generally true (but not always) for the lowest velocity (see, e.g., Fig. 2). Therefore, the parallelogram-shaped trace of an elongated orbit in a PVD is narrow but reaches high radial velocities (with respect to the local circular velocity) for viewing angles close (parallel) to its major axis, while it is rather extended and reaches only relatively low radial velocities for viewing angles close to its minor axis. The exact shape of the parallelogram in a PVD depends primarily on the axial ratio of the orbit. For a given azimuthally averaged radius, as the eccentricity of the orbit is increased, the velocity contrast of the orbit (the difference between the highest and lowest tangential velocities) also increases. The viewing angle dependence of the trace of the orbit in a PVD is thus accentuated. At the other end of the eccentricity range, the trace of a circular orbit in a PVD is an inclined straight line passing through the origin, identical for all viewing angles.
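These geometric statements are straightforward to verify numerically. The sketch below projects a toy closed orbit (a harmonic-oscillator ellipse, standing in for a true $`x_1`$ or $`x_2`$ orbit) into its PVD trace; the axial ratio, frequency, and viewing angles are arbitrary illustrative choices, not values from the models above:

```python
import numpy as np

def pvd_trace(a=2.0, b=1.0, omega=1.0, psi_deg=45.0, npts=400):
    """Trace of a closed elliptical orbit in a position-velocity diagram.
    The orbit x = a cos(wt), y = b sin(wt) (a harmonic-oscillator ellipse,
    a toy stand-in for an x1 or x2 orbit) is viewed edge-on, with the
    line-of-sight at angle psi from the orbit's major (x) axis."""
    t = np.linspace(0.0, 2.0 * np.pi / omega, npts)
    x, y = a * np.cos(omega * t), b * np.sin(omega * t)
    vx, vy = -a * omega * np.sin(omega * t), b * omega * np.cos(omega * t)
    psi = np.radians(psi_deg)
    los = np.array([np.cos(psi), np.sin(psi)])       # line-of-sight unit vector
    proj = -x * los[1] + y * los[0]   # projected position on the sky
    vrad = vx * los[0] + vy * los[1]  # line-of-sight (radial) velocity
    return proj, vrad

for psi in (0.0, 45.0, 90.0):
    p, v = pvd_trace(psi_deg=psi)
    print(f"psi = {psi:5.1f} deg:  max |position| = {np.abs(p).max():.2f}, "
          f"max |velocity| = {np.abs(v).max():.2f}")
```

Viewed along the major axis ($`\psi =0`$°), the trace is narrow but fast; along the minor axis ($`\psi =90`$°), it is extended but slow; at intermediate angles it forms the parallelogram populating the forbidden quadrants.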
The ellipsoidal “holes” in the PVDs at intermediate viewing angles are due to the fact that we stopped the $`x_1`$ periodic orbits at the IUHR, not populating the small segments of the $`x_1`$ characteristic curve existing past the 4:1 gap in the characteristic diagram (see Fig. 2 in A92a). The holes disappear if we include orbits at larger Jacobi constant $`E_J`$ (which are also rounder). In Figure 6, the behaviour of the $`x_2`$ orbits can be contrasted with that of the $`x_1`$. As expected, because the $`x_2`$ orbits are elongated perpendicular to the bar (see Fig. 3 of A92a), the highest radial velocities are now reached when the bar is seen side-on, and the lowest when the bar is seen end-on. The general parallelogram shape is still present, but its nature is quite different than that of the signature of the $`x_1`$ orbits shown in Figure 6. Contrary to the $`x_1`$ orbits, the axial ratio of the $`x_2`$ orbits generally decreases with decreasing radius (up to about 0.4 kpc, see Fig. 6d). The inner orbits have only a short extent and, because they are almost circular, they do not reach high radial velocities. Their projected velocity is close to the circular velocity for all viewing angles. The outer orbits, on the other hand, are highly elongated. They will thus reach only relatively low radial velocities at “large” projected distances for small viewing angles, and high velocities at “large” distances for large viewing angles (they are elongated perpendicular to the bar). The locus of the maxima of the traces of successive orbits of decreasing radius in the PVDs will therefore increase rapidly for small viewing angles (see Fig. 6e) and decrease for large viewing angles (see Fig. 6f). Indeed, this behaviour is observed in the PVDs of Figure 6, at least for “large” projected distances. The behaviour at very small radii is dominated by the shape of the circular velocity curve, which rises rapidly with radius. The observed behaviours of the $`x_1`$ and $`x_2`$ orbits are qualitatively rather similar. This might be surprising on first thought, as the $`x_2`$ orbits behave very differently than the $`x_1`$, but one could say that the properties of the $`x_2`$ orbits are “doubly-inverted” with respect to those of the $`x_1`$. The variations of the axial ratio of the $`x_1`$ and $`x_2`$ orbits with radius are opposite (Fig. 6), so the dependence of their signatures on the viewing angle with respect to their major axes will be opposite. Furthermore, the major axes of the $`x_1`$ and $`x_2`$ orbits are also perpendicular to each other. This “double inversion” leads to the similarity of the signatures in the PVDs. While this is true in a relative manner, it is not true in an absolute way. The envelope of the signature of the $`x_1`$ orbits reaches higher radial velocities than that of the $`x_2`$ orbits at small viewing angles, and the opposite is observed at large viewing angles. The explanation is simple: for small viewing angles, the radial velocities reached by the $`x_1`$ orbits in the inner parts are increased with respect to the circular velocity (the outer parts are unchanged), while for the $`x_2`$ orbits, the radial velocities in the outer parts are decreased with respect to the circular velocity (the inner parts are unchanged). The opposite is true at large viewing angles. 
A further difference is that, in the case of the $`x_1`$ orbits, the center of the parallelogram-shaped signature is relatively faint compared to its edges, while for the $`x_2`$ orbits it is the center of the parallelogram which is bright, forming a strong inverted S-shaped feature, and the outer parts are relatively faint. The S-shaped feature is not due to a single orbit leaving such a trace in the PVDs, but rather to the crowding of the traces of many successive orbits, which explains why it is so bright (an effect comparable to the spiral arms created by slightly rotating similar ellipses of increasing radii; Kalnajs 1973). Furthermore, because the axial ratio of the $`x_2`$ orbits increases with radius (outside 0.4 kpc), the trace of the largest orbit in the PVDs is not only the most extended but is also the one with the widest parallelogram shape. It therefore encompasses the traces of all the other orbits and largely defines by itself the envelope of the signature of the $`x_2`$ orbits, which is then very faint. The small “holes” present in the center of the PVDs at intermediate viewing angles are due to the fact that, although the elongation of the $`x_2`$ orbits generally decreases inward, the $`x_2`$ family does not extend up to the center (see Fig. 6).

Figure 6 illustrates the signature of the outer 2:1 orbits in the PVDs. Because the orbits are almost circular, the upper part of the envelope reaches radial velocities close to the circular velocity at large projected distances, independent of the viewing angle. The features seen in the signature of the outer 2:1 orbits are largely due to the “dimples” in the orbits on the major axis of the bar (see Fig. 11 in Sellwood & Wilkinson 1993). As should be expected, the PVDs for the $`x_1^{\prime}`$ orbits (not shown) are similar to those of the outer 2:1 orbits when the viewing angles are reversed (e.g. $`67.5\mathrm{°}\leftrightarrow 22.5\mathrm{°}`$), the major axes of the orbits being at right angles. Both families yield a slowly-rising, almost solid-body signature in the PVDs for all viewing angles.

Bars in early-type spirals and in $`N`$-body simulations tend to be more rectangular than ellipsoidal in shape (see, e.g., Sparke & Sellwood 1987; Athanassoula et al. 1990). Interestingly, the maximum boxiness generally occurs just before the end of the bar (Athanassoula et al. 1990), where the $`x_1`$ orbits are slightly boxy and where the rectangular-shaped inner 4:1 orbits are found (see Fig. 3 of A92a). It is thus tempting to associate the branch of the inner 4:1 family of periodic orbits lying outside the characteristic curve of the $`x_1`$ orbits in the characteristic diagram (Fig. 6) with the rectangular shape of bars. The 4:1 gap in model 001 is of type 2 (see Contopoulos 1988; A92a), so the lower branch of the 4:1 characteristic is stable and the proposed association makes sense, but this is not necessarily the case in real galaxies. In fact, early-type galaxies seem to have 4:1 gaps of type 1 (Athanassoula 1996).

Figure 6 shows the surface density and PVDs for both the $`x_1`$ family of periodic orbits in model 001 and the lower branch of the inner 4:1 family. It shows that the inner 4:1 orbits can indeed create a very rectangular surface density distribution when combined with the $`x_1`$ orbits. Although the signature of the inner 4:1 orbits in the PVDs is very peculiar and easily identifiable when taken alone, it is superposed on the signature of the $`x_1`$ orbits for most viewing angles and it is hard to disentangle the two families.
However, when the bar and the inner 4:1 orbits are seen either end-on or side-on, the inner 4:1 orbits leave a signature in the PVDs distinct from that of the $`x_1`$ orbits. The lower limit of the combined envelope of the signatures of the $`x_1`$ and inner 4:1 orbits is straight and only slightly inclined until it makes a sharp bend at approximately the position of the IUHR (at a slightly smaller radius when $`\psi =0\mathrm{°}`$ and slightly larger radius when $`\psi =90\mathrm{°}`$, following the definition of the IUHR adopted in § 3); it then rises vertically until it joins with the upper limit of the envelope. This is easily understandable considering the morphology of the inner 4:1 orbits. The projected edges of the density distribution are sharpest at those viewing angles and the lines of sight are parallel to the approximately straight segments of the orbits (see Fig. 6).

The advantage of the periodic orbits approach is that various orbital components of a galaxy can be combined together in a multitude of ways. A superposition of the $`x_1`$, $`x_2`$, and outer 2:1 families of periodic orbits should give a reasonable representation of a prototypical barred galaxy with ILRs. Indeed, in the inner parts, only three direct families exist: the $`x_1`$, $`x_2`$, and $`x_3`$ (see Fig. 6). The $`x_1`$ orbits, parallel to the bar, are certainly present and, because the $`x_3`$ orbits are unstable (e.g. Sellwood & Wilkinson 1993), the $`x_2`$ orbits will dominate over the $`x_3`$, if orbits perpendicular to the bar are present. In the outer parts, we find the $`x_1^{\prime}`$ and outer 2:1 families. The shapes of these orbits are almost identical to those of the two subclasses of outer rings observed in barred spiral galaxies (Buta & Crocker 1991): $`R_1^{\prime}`$ outer rings for the outer 2:1 orbits and $`R_2^{\prime}`$ for the $`x_1^{\prime}`$ orbits. The $`R_1^{\prime}`$ class is dominant (Buta 1986), which is why we have chosen the outer 2:1 family of periodic orbits. However, the signature of the $`x_1^{\prime}`$ orbits in the PVDs is very similar to that of the outer 2:1 orbits (almost identical if the viewing angles are reversed) and using one or the other does not affect our conclusions or the nature of the bar diagnostics in the PVDs. Both families act as a slowly rising, almost solid-body component.

Figure 6 shows the surface density and PVDs obtained by superposing the $`x_1`$, $`x_2`$, LPO, and outer 2:1 families of periodic orbits for model 001. The surface density of the $`x_1`$, $`x_2`$, and outer 2:1 families is qualitatively similar to what is observed for barred galaxies with an outer ring. More interesting is the amount of structure present in the PVDs, especially in the inner parts of the model where the effects of the bar are strongest. We recover the signatures of the $`x_1`$ and $`x_2`$ orbits already discussed above, as well as the features which allow us to determine the viewing angle with respect to the major axis of the bar. The large gap between the signatures of the $`x_1`$ and outer 2:1 families of periodic orbits is due to a corresponding gap in our reconstruction of the density distribution of a prototypical barred galaxy. The only way to make this gap disappear is to populate periodic orbit families close to corotation, but this is not straightforward using our periodic orbits approach.
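To build up full PVDs like those in Figure 6, many individual traces must be superposed. A minimal sketch of the assembly step follows; it is our own illustration, and the equal-time weighting within each orbit and the uniform weighting between orbits are simplifying assumptions, not the paper's adopted scheme.

```python
import numpy as np

def pvd_histogram(orbits, psi_deg, pos_bins, vel_bins):
    """Assemble a PVD by superposing the traces of many closed orbits.
    `orbits` is a list of (x, y, vx, vy) arrays sampled at equal time
    steps, so each sample carries equal weight; crowding of traces then
    shows up naturally as bright features (e.g. the inverted S of the
    x2-like orbits).  The relative weighting of whole orbit families is
    left arbitrary here."""
    psi = np.radians(psi_deg)
    pos, vlos = [], []
    for x, y, vx, vy in orbits:
        pos.append(-x * np.sin(psi) + y * np.cos(psi))   # projected position
        vlos.append(vx * np.cos(psi) + vy * np.sin(psi)) # radial velocity
    pvd, _, _ = np.histogram2d(np.concatenate(pos), np.concatenate(vlos),
                               bins=[pos_bins, vel_bins])
    return pvd
```

The orbit samples could come, for instance, from the toy `orbit_trace` sketch above or from a real orbit integrator.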
Beyond the 4:1 gap in the characteristic diagram, there are higher order n:1 families, and, between consecutive 2n:1 gaps, short segments of what can still be called $`x_1`$ orbits (see Fig. 6 and Fig. 2 in A92a). However, as can be seen from the figures of A92b, the gas streamlines continue to be ellipsoidal and elongated along the bar past the IUHR, the extent of this region being very model dependent. Loosely speaking, one could say that, although the gas does not follow precisely the higher order resonance families, it follows their general form. Yet further out, the gas circulates around each of the two stable Lagrangian points $`L_4`$ and $`L_5`$, the streamlines now being associated with the LPO periodic orbits (see Fig. 11 in Sellwood & Wilkinson 1993). The latter are easy to add to our description and have also been included in Figure 6. Their signature in the PVDs is very similar to that of circular orbits. As expected, because of their location, the signature of the LPO orbits falls right in the gap between the signature of the $`x_1`$ orbits and that of the outer orbits. This gap is significantly reduced, but many smaller gaps are still present because of the non-homogeneous distribution of the orbits of the various families. Such gaps could not occur in an axisymmetric spiral galaxy.

### 4.3 Model 086 (no-ILRs)

Despite the presence of ILRs in the classical sense (see Fig. 6), we consider model 086 a “no-ILR” model because it does not have an $`x_2`$ family of periodic orbits. Its characteristic diagram is very similar to that of model 001 (Fig. 6), differing only in the inner parts, where the $`x_2`$ and $`x_3`$ families are absent and the $`x_1`$ characteristic curve displays an elbow due to the high axial ratio of the bar (see also Fig. 4 of A92a; Pfenniger 1984). In addition, the $`x_1`$ orbits possess loops for a certain range of radii. Here, exceptionally, we include those orbits to prevent the appearance of an empty region in the $`x_1`$ orbits surface density distribution.

Figure 6 shows the surface density and PVDs obtained by superposing the $`x_1`$ and outer 2:1 families of periodic orbits for model 086. The surface density distribution is again similar to that observed for the gaseous component in barred spiral galaxies. In the PVDs, as expected, the signature of the outer parts of the model has not changed, the outer 2:1 family of periodic orbits behaving again like a slowly-rising solid-body component. In the inner parts, the obvious difference from the PVDs of model 001 (Fig. 6) is the absence of the signature of any $`x_2`$ orbit (the LPO orbits have been omitted in Fig. 6 for clarity). The signature of the $`x_1`$ orbits has changed only slightly on a qualitative level, the envelope still being generally parallelogram-shaped. The main difference from the signature of the $`x_1`$ orbits in model 001 is that the envelope of the signature has more curved edges, due to the presence of orbits with loops, and reaches more extreme radial velocities (compared to the circular velocity), due to the higher axial ratio of the bar yielding more eccentric orbits (see Fig. 10 in A92a). The gap between the signatures of the $`x_1`$ and outer 2:1 orbits is again due to the absence of populated orbits near corotation in our model.

### 4.4 Other Models

Now that we understand the general structure of the PVDs produced by models with and without ILRs, we can extend our study to investigate how the bar diagnostics might change when the free parameters of the mass model are varied.
To do this, we borrow heavily from the results of A92a, who studied how the orbital structure of the mass model varies within most of the volume of parameter space likely to be occupied by real galaxies. We do not expect the outer parts of the models to vary significantly since the influence of the bar falls off rapidly with radius. The outer families of periodic orbits will always produce slowly-rising, almost solid-body signatures in the PVDs. We will thus concentrate on understanding the behaviour of the periodic orbits in the inner parts of the models.

The parallelogram-shaped signatures of the $`x_1`$ and $`x_2`$ periodic orbits in the PVDs will be mainly affected by their eccentricity and extent. The results of A92a concerning the eccentricity of the $`x_1`$ orbits can be summarised as follows (see Fig. 10 in A92a): as the axial ratio $`a/b`$ of the bar is increased, the Lagrangian radius $`r_L`$ is increased, the central concentration $`\rho _c`$ is increased, and/or the quadrupole moment $`Q_m`$ of the bar is decreased, the eccentricity of the $`x_1`$ orbits increases. The counterintuitive behaviour of the eccentricity of the $`x_1`$ orbits with $`Q_m`$ stems from the fact that, for $`Q_m`$ to be increased, the bulge mass and therefore the central density of the model have to be decreased (the total mass within a given radius being fixed), leading to a decrease in the eccentricity of the orbits. The eccentricity of the $`x_2`$ orbits behaves in the same way as that of the $`x_1`$ orbits except with respect to the axial ratio of the bar (again, see Fig. 10 in A92a). In that case, the $`x_2`$ orbits become less eccentric as the bar axial ratio is increased. Because higher eccentricity means more extreme radial velocities compared to the circular velocity in the PVDs (very high when the orbit is seen end-on and very low when it is seen side-on), the envelopes of the signatures of the $`x_1`$ and $`x_2`$ orbits in the PVDs should be most extreme (in the above sense) for high bar axial ratios (except for the $`x_2`$ orbits), high Lagrangian radii, high central densities, and/or low bar quadrupole moments. This was certainly the case for model 086, which has a higher bar axial ratio than model 001.

A92a showed that the radial extent of the $`x_1`$ family is mainly affected by the pattern speed of the bar and changes very little as the other parameters of the mass model are varied (see Fig. 6 and 7 of A92a). For the $`x_2`$ orbits, the major factor affecting their signature in the PVDs will be their existence or non-existence, depending on the model considered. A92a showed that the radial range of the $`x_2`$ orbits is reduced when the bar axial ratio $`a/b`$ and/or quadrupole moment $`Q_m`$ are increased, and when the central density $`\rho _c`$ and/or Lagrangian radius $`r_L`$ are decreased (see Fig. 6 and 7 of A92a). Furthermore, as exemplified by model 086, the $`x_2`$ orbits can be completely absent for high bar axial ratios and/or quadrupole moments, and for low central densities and/or Lagrangian radii. The presence and extent of the inverted S-shape signature of the $`x_2`$ orbits in the PVDs therefore depends strongly on the parameters of the model.

Full sequences of PVDs as each parameter of the mass model is varied will be provided in Paper II using hydrodynamical simulations. We do not present them here using the periodic orbits approach to avoid unnecessary repetition.
## 5 Discussion

The bar diagnostics we have developed in the previous sections are all based on the use of families of periodic orbits in the equatorial plane of a barred spiral galaxy mass model. Periodic orbits, however, are only an approximation to the dynamical structure of either the gas or the stars in galaxies, and the PVDs presented in the previous sections should only be used as a guide when interpreting kinematical data. The ability of the gas to dissipate energy changes the behaviour of the gaseous component from that predicted by the periodic orbits, particularly near shocks, occurring at the transition regions between different orbit families and near periodic orbits with loops. The kinematics of stars on regular orbits are relatively well approximated by those of the periodic orbits, since the former are trapped around the latter. On the other hand, stars on chaotic orbits give a totally different signature, and the percentage of stars on such orbits may well be non-negligible, particularly in strongly barred galaxies.

In addition, we have not attempted to make the models self-consistent when populating the orbit families. We have only calculated the shape of the signature in the PVDs of each family of periodic orbits of the models, but not the relative weights of the families or of the orbits within them. Nevertheless, in order to assess how much our results depend on the method adopted to populate the orbits, we have also produced PVDs for the $`x_1`$ and $`x_2`$ periodic orbits of model 001 using equal increments of the Jacobi constant between orbits (rather than equal $`\mathrm{\Delta }y`$). As expected, the envelope of the signature of the $`x_1`$ orbits in the PVDs does not change but, because of the form of the $`x_1`$ characteristic curve (Fig. 6), the central parts are much stronger. Similarly, the signature of the $`x_2`$ periodic orbits changes very little.

Independent of those issues, the relative amplitude of each component of the PVDs will also vary depending on the emission line used to measure the kinematics, simply because each component arises from a different part of the galaxy where the line might be produced in a different way. For example, the presence of shocks and/or increased star formation in the components will lead to different emission line strengths in each of the components, and the ratios will vary depending on the lines used.

When interpreting data based on the PVDs produced here, one therefore has to take into account the following effects: 1) the kinematical signature observed might be somewhat different from that calculated here because periodic orbits are only an approximation to the gaseous or stellar dynamics in a galaxy, 2) the relative amplitude of each component will be different from that calculated because the building blocks approach used may not represent the relative weights correctly, and 3) the observed relative amplitude of each component will be different from that calculated because the intensity of a line depends not only on the density of the emitting material but also on the production mechanism of the line, which is not considered here. The hydrodynamical simulations reported in Paper II and the $`N`$-body simulations reported in Paper III address the first and second problems. However, to remedy the third problem raised above, one would need to consider both stellar evolution and the detailed physical conditions in the gas.

The presence of dust can also hinder our ability to detect bars in edge-on spiral galaxies.
Because the dust in disks is mostly confined to a thin layer, it can make a disk optically thick at optical wavelengths if the galaxy is seen edge-on. If this is the case, there are two ways around the opacity problem. First, it is possible to select objects which are not perfectly edge-on. The line-of-sight then reaches the central parts of the galaxy where the bar resides while still going through a substantial fraction of the disk. However, if the galaxy is too far from edge-on, the bar diagnostics developed here will not work, as they depend on the line-of-sight going through most of the disk. Second, it is possible to use observations in a part of the spectrum where even a dusty disk is likely to be optically thin. Long-slit spectroscopy in the near-infrared (e.g. using the Br$`\gamma `$ line at $`K`$-band) is attractive, but most lines are weak in non-active galaxies and near-infrared spectrographs with sufficient resolution for kinematical work are still uncommon. A better option is to use line-imaging in the 21 cm H I line with a radio synthesis telescope. Even very dusty edge-on spiral galaxies are probably optically thin at 21 cm. In addition, it is possible to use a higher spectral resolution than is available with most optical long-slit spectrographs. However, radio synthesis observations are useful only for large H I-rich galaxies because of limited sensitivity and spatial resolution. Using a large sample of galaxies, Bureau & Freeman (1999) will address in more detail the question of dust extinction when identifying bars in edge-on spiral galaxies.

## 6 Summary and Conclusions

Our main goal in this paper was to develop kinematical bar diagnostics for edge-on spiral galaxies. Considering a well-studied family of mass models including a Ferrers bar, we identified the major periodic orbit families and briefly reviewed the orbital structure in the equatorial plane of the mass model. We considered only orbits which are direct, singly periodic, and non-self-intersecting. Using a simple method to populate these orbits, we then used the families of periodic orbits as building blocks to model the structure of real galaxies. We constructed position-velocity diagrams (PVDs) of the models using an edge-on projection and various viewing angles with respect to the bar. We considered mainly two models which are prototypes of models with and without inner Lindblad resonances.

The PVDs obtained show a complex structure which would not occur in an axisymmetric galaxy (see Fig. 6 and 6). The global appearance of a PVD can therefore be used as a reliable diagnostic for the presence of a bar in an observed edge-on disk. Specifically, the presence of a gap between the signatures of the families of periodic orbits in the PVDs follows directly from the non-homogeneous distribution of the orbits in a barred galaxy. The $`x_1`$ orbits lead to a parallelogram-shaped feature in the PVDs which reaches very high radial velocities with respect to the outer parts of the model when the bar is seen end-on and rather low velocities when the bar is seen side-on. It occupies all four quadrants of the PVDs, i.e., including the two forbidden quadrants. This signature would dominate the structure of the PVD produced by the stellar component of a barred spiral galaxy, and can be used as an indicator of the viewing angle with respect to the bar in the edge-on disk. When present, the $`x_2`$ orbits can also be used efficiently as a bar diagnostic and behave similarly to the $`x_1`$ orbits in the PVDs.
However, the highest velocities are now reached when the bar is seen side-on, and the signature is spatially much more compact. The signature of the $`x_2`$ orbits would dominate the structure of the PVD produced by the gaseous component of a barred spiral.

The mass model we adopted had four free parameters, allowing us to reproduce the range of properties observed in real galaxies. Using the results of A92a, we analysed how the structures present in the PVDs vary when the parameters of the model are changed. We predicted that the signatures of the $`x_1`$ and $`x_2`$ periodic orbits are more extreme for high bar axial ratios (except for the $`x_2`$ orbits), high Lagrangian radii, high central densities, and/or for low bar quadrupole moments. In addition, the extent of the $`x_2`$ orbits is reduced, and can completely disappear, when the bar axial ratio and/or quadrupole moment are increased and when the central density and/or Lagrangian radius are decreased. The shape and presence of the signatures of the $`x_1`$ and $`x_2`$ families of periodic orbits in a PVD can therefore provide strong constraints on the mass distribution of an observed galaxy.

We briefly discussed the application of the models to the interpretation of real data. The major limitations of the models are the approximation of the disk kinematics by that of periodic orbits, the treatment of the orbits as “test particles”, and the neglect of the production mechanism of the line used in the observations. Nevertheless, the understanding of the traces of individual orbits and of the signatures of orbit families in the PVDs will prove indispensable in Paper II and Paper III, where, using hydrodynamical and $`N`$-body numerical simulations, we will develop similar bar diagnostics addressing some of these limitations.

We thank K. C. Freeman and A. Kalnajs for comments on the manuscript, and L. S. Sparke and A. Bosma for useful discussions in the early stages of this work. M. B. acknowledges the support of an Australian DEETYA Overseas Postgraduate Research Scholarship and a Canadian NSERC Postgraduate Scholarship during the conduct of this research. M. B. would also like to thank the Observatoire de Marseille for its hospitality and support during a large part of this project. E. A. acknowledges support from the Newton Institute during the final stages of this work.
# Determination of Galaxy Spin Vectors in the Pisces-Perseus Supercluster with the Arecibo Telescope

## 1 Introduction

The search for galaxy alignments has a long history, beginning with searches for alignments in “Spiral and Elliptical Nebulae” during the late 19th Century. Recent scrutiny of the problem has been motivated by the understanding that establishing the level of galaxy spin vector ($`\stackrel{}{L}`$) alignments could offer an additional constraint on various theories of galaxy formation and evolution. For example, “top-down” scenarios of Large-Scale Structure formation can lead to ordered distributions of angular momentum on cluster and supercluster scales through a variety of mechanisms (Zel’dovich (1970), Doroshkevich & Shandarin (1978), White (1984), Colberg, et al. (1998)). In addition to galaxy $`\stackrel{}{L}`$ alignments resulting from various formation mechanisms, galaxy $`\stackrel{}{L}`$ alignments may also be the evolutionary result of anisotropic merger histories (West (1994)), galaxy-galaxy interactions (Sofue (1992)), or strong gravitational gradients (Ciotti & Dutta (1994), Ciotti & Giampieri (1998)). For a summary of the history of the field, see Djorgovski (1987) and Cabanela & Aldering (1998) \[hereafter Paper I\].

Observational support exists for some forms of galaxy $`\stackrel{}{L}`$ alignments with surrounding large-scale structure. For example, Binggeli (1982) discovered that the major axes of cD galaxies tended to be aligned with the axes of their parent cluster. However, most previous searches for galaxy alignments have had results that one could describe as negative, or statistically significant but not strongly so. One complication in earlier efforts has been that most have not truly determined $`\stackrel{}{L}`$, but rather simply used the position angle (and sometimes ellipticity) of the galaxies in an attempt to determine the possible distribution of $`\stackrel{}{L}`$. However, for each combination of galaxy position angle and ellipticity, there are four solutions for the true orientation of the galactic angular momentum axis ($`\stackrel{}{L}`$). This degeneracy in $`\stackrel{}{L}`$ can only be removed by establishing both which side of the major axis is moving toward the observer and whether we are viewing the north or south side of the galaxy, where “north” is in the direction of the galaxy’s angular momentum vector. Therefore previous studies have either restricted themselves to using only position angles of galaxies, or they have often taken all four possible solutions of $`\stackrel{}{L}`$ with equal weight (Flin (1988), Kashikawa & Okamura (1992)).

Several studies have been published regarding searches for alignments using completely determined galaxy angular momentum axes. Helou & Salpeter (1982) used Hi and optical observations of 20 galaxies in the Virgo cluster to show that no very strong $`\stackrel{}{L}`$ alignments exist. However, a followup to this study by Helou (1984) found evidence for anti-alignments of spin vectors for binary pairs of galaxies in a sample of 31 such pairs. Hoffman et al. (1989) briefly investigated the possibility of galaxy alignments by plotting the $`\stackrel{}{L}`$ orientations for $`\sim 85`$ galaxies with fully determined spin vectors from their Virgo cluster sample and found no obvious alignments. Most recently, Han, Gould, & Sackett (1995) used a sample of 60 galaxies from the Third Reference Catalogue of Bright Galaxies (de Vaucouleurs et al.
(1991), hereafter referred to as the RC3) in the “Ursa Major filament” and found no evidence of galaxy alignments.

There are several criticisms one can level against these earlier studies. All the studies attempted to use relatively small samples to map out orientation preferences over the entire sky. Thus only very strong $`\stackrel{}{L}`$ alignment signatures could have been discovered via this method. The samples were selected using source catalogs with “visual” criteria which may have led to a biased sample. For example, as noted in Paper I, the source catalog for the “Ursa Major filament” study, the RC3, suffers from the “diameter-inclination effect,” which leads to a strong bias for preferentially including face-on galaxies over edge-on galaxies of the same diameter (Huizinga (1994)). Finally, no attempt was made to consider the positions of the galaxies within the local large-scale structure before looking for alignments. Considering that the local mass density is critical for determining which alignment mechanism may be dominant, an attempt should be made to look for $`\stackrel{}{L}`$ alignments relative to local large-scale structures.

This study is an attempt to avoid some of the issues cited above and obtain a sample of galaxies with well determined $`\stackrel{}{L}`$ in various environments in a supercluster using a mechanically-selected sample of galaxies. For this study, we selected a subsample of the Minnesota Automated Plate Scanner Pisces-Perseus galaxy catalog (hereafter MAPS-PP), which is a true major-axis diameter-limited catalog built using automated, mechanical methods and does not exhibit the “diameter-inclination” effect (see Paper I). We determined the $`\stackrel{}{L}`$ orientation for the galaxies in this subsample using Hi observations. The sample selection criteria are outlined in Section 2. The analysis methods are discussed in Section 3. Section 4 discusses the results of the data analysis. Our interpretation of these results is provided in Section 5.

## 2 Data

The galaxy sample for this study was selected from the MAPS-PP. The MAPS-PP catalog was designed to avoid several of the pitfalls of previous attempts to measure galaxy orientations. The MAPS-PP contains $`\sim 1400`$ galaxies in the Pisces-Perseus Supercluster field with (roughly) isophotal diameter $`>`$30<sup>′′</sup> constructed from digitized scans of the blue and red plates of the Palomar Observatory Sky Survey (POSS I). By using a mechanical measure of the diameter, this catalog avoids the “diameter-inclination” effect seen in both the Uppsala General Catalog (Nilson (1974), hereafter UGC) and the RC3. The MAPS-PP also uses a two-dimensional, two-component fit of the galaxy light profile in order to obtain a more accurate position angle and ellipticity measurement for the component of the galaxy with most of the angular momentum (e.g., the disk in spirals). Such a full two-dimensional fit has been shown (Byun & Freeman (1995)) to be very effective at recovering the image parameters in situations where a simple ellipse fit fails (e.g., edge-on spirals with a large bulge). More details as to the construction of the MAPS-PP are available in Paper I.

### 2.1 Selection Criteria

For this study, we selected a subsample of the MAPS-PP that could have their $`\stackrel{}{L}`$ determined through Hi observations and at the same time could probe the galaxy $`\stackrel{}{L}`$ orientations relative to the large-scale structure of the Pisces-Perseus Supercluster (hereafter PPS).
Hi observations can determine which side of the major-axis is approaching us, reducing the four-fold degeneracy in the $`\stackrel{}{L}`$ to two solutions. However, because of the great distance to the PPS ($`cz\sim 5500`$ km s<sup>-1</sup>), the POSS I images don’t generally have enough detail to make out spiral arm structure, so determining if we were viewing the north or south side of a galaxy would be difficult without re-imaging the galaxies. Instead, we chose to constrain the inclinations of the galaxies in our subsample to be edge-on. This means we effectively reduce the two-fold degeneracy in the $`\stackrel{}{L}`$ solution to a single solution, and simultaneously we reduce the galaxy alignment analysis from a full three-dimensional problem to a much simpler one-dimensional one. Because the PPS plane itself is viewed very close to edge-on (Giovanelli & Haynes (1988)), we are simplifying the problem without losing the ability to probe the angular momentum distribution in relationship to the PPS plane. The primary requirement for including a MAPS-PP galaxy in this study was therefore an ellipticity greater than 0.66.

Other criteria for selecting a MAPS-PP galaxy for our Hi program were based on observational considerations. To ensure the galaxy could be observed from Arecibo, the Declination was required to be less than 36°. An O (blue) major-axis diameter between 44<sup>′′</sup> and 100<sup>′′</sup> was needed so that the Hi disk of the galaxy was not too small to be targeted on both sides by the Arecibo beam and not too large to be fully sampled. The galaxy was required to be within 2.25° of the PPS midplane (as determined in Paper I), and if the redshift was known, it needed to be between 3500 and 7000 km s<sup>-1</sup> in order to increase the chances it was a true PPS member. Finally, to reduce the sample size, we selected galaxies with O magnitude brighter than 17. This MAPS-PP subsample consisted of 105 galaxies.

The MAPS-PP subsample was cross-identified with the NASA/IPAC Extragalactic Database (NED) in order to obtain previous radio flux measurements and redshifts. (NED is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.) We also examined the field around each subsample galaxy and eliminated those in crowded fields, which led to a final MAPS-PP subsample of 96 galaxies (which will hereafter be referred to as the Arecibo sample), listed in Table 1.

### 2.2 Hi Observations

We obtained 21cm line spectra with the 305m Arecibo telescope of the National Astronomy and Ionosphere Center over 14 nights between August 6 and August 20, 1998. (The National Astronomy and Ionosphere Center is operated by Cornell University under a cooperative agreement with the National Science Foundation.) The new Gregorian feed was used with the narrow L band receiver using a 25 MHz bandpass centered on 1394 MHz (1024 channels). One observation was performed using a 50 MHz bandpass centered at 1400.5 MHz. The beamsize of the 305m Arecibo dish is approximately 3.3′ FWHM. For each of our Arecibo sample galaxies, we made two sets of ON-OFF observations, one 90<sup>′′</sup> to the east of the central position along the major-axis, and a corresponding observation to the west of the galaxy center.
Typically, 5 minute integrations were used for each observation, although some galaxies were re-observed to allow better measurement of their weak flux and others known to be bright in Hi were observed with shorter integrations. Preliminary data reduction was performed using ANALYZ at the Arecibo facility. For each observation, the two polarizations were averaged together. For each galaxy we then archived both the sum of the east and west ($`E+W`$) spectra and the difference (in the sense east minus west). It is the difference ($`E-W`$) spectra that can be used to determine the spin vector, by allowing us to determine which side of the major-axis is moving toward us relative to the galaxy center. Of the 96 galaxies in the original sample, 6 were not observed, 16 were not detected in Hi, 3 suffered from strong radio frequency interference (RFI), and one suffered from a distorted baseline. We therefore had a total of 70 galaxies for which there were good detections.

Subsequent data reduction was performed on the 70 galaxies for which good $`E+W`$ detections existed. The spectra were Doppler corrected and the fluxes corrected for gain differences with zenith angle and changes in system temperature. A visual estimate of each galaxy’s redshift was made and then radio frequency interference (RFI) within $`\pm 750`$ km s<sup>-1</sup> of the line was ’removed’ from the spectra. RFI ’removal’ was performed interactively, and the RFI was replaced with a linear interpolation between the two endpoints of the spectra. Noise was added to the linear interpolation, using the surrounding spectral channels to determine the noise level. Both the $`E+W`$ and $`E-W`$ spectra were baseline corrected using a linear fit to non-Hi line channels within $`\pm 500`$ km s<sup>-1</sup>.

We determined the Hi line properties of the galaxy using the $`E+W`$ spectra. All velocities follow the optical convention, $`v=c\mathrm{\Delta }\lambda /\lambda _0`$, and are adjusted to be in the heliocentric frame. The flux-weighted mean velocity, $`v_0`$, of the galaxy, as well as the line flux, is computed. The line width used the mean of the line widths at a threshold of 50% of the boxcar equivalent flux and at a threshold of 20% of the maximum flux, determined by using an outward searching algorithm (Lavezzi & Dickey (1997)). The reported line width has been corrected for noise and channel width using the method outlined in Lavezzi and Dickey (1997).

### 2.3 Determination of Galaxy Spin Vector Directions and Uncertainty

The direction of the galaxy’s spin vector was determined by taking the first moment of the $`E-W`$ spectra, $`\mu _{E-W}`$, where

$$\mu _{E-W}=\frac{\int _{v_{min}}^{v_{max}}f_{E+W}(v)f_{E-W}(v)(v-v_0)dv}{\int _{v_{min}}^{v_{max}}[f_{E+W}(v)]^2dv},$$ (1)

where $`v_{min}`$ and $`v_{max}`$ are the minimum and maximum velocity of the line respectively, $`v_0`$ is the flux-weighted mean velocity of the galaxy, and $`f_{E+W}(v)`$ and $`f_{E-W}(v)`$ are the fluxes of the $`E+W`$ and $`E-W`$ spectra respectively. Negative $`\mu _{E-W}`$ implies that the eastern side of the galaxy is approaching us relative to the galaxy center, meaning the galaxy’s $`\stackrel{}{L}`$ points northward. Positive $`\mu _{E-W}`$ implies $`\stackrel{}{L}`$ points to the south. The uncertainty in $`\mu _{E-W}`$ due to bad baseline and spectral noise was measured using two variants of the normal first moment.
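As a concrete illustration, a channel-based version of equation (1), together with the $`P_{cc}`$ cross-correlation of equation (2) below and the paper's quality cuts, might look like the following sketch. The array names and the simple channel sums (in which the constant channel width cancels between numerator and denominator) are our assumptions, not the actual ANALYZ reduction code.

```python
import numpy as np

def spin_moment(v, f_sum, f_diff, v0, vmin, vmax):
    """Channel-based eq. (1): first moment of the E-W difference spectrum,
    weighted by the E+W spectrum.  mu < 0 means the eastern side approaches
    us, i.e. the spin vector L points north; mu > 0 means L points south.
    v [km/s], f_sum (E+W) and f_diff (E-W) are equal-length channel arrays."""
    w = (v >= vmin) & (v <= vmax)               # channels inside the H I line
    den = np.sum(f_sum[w] ** 2)
    mu = np.sum(f_sum[w] * f_diff[w] * (v[w] - v0)) / den
    p_cc = np.sum(f_sum[w] * f_diff[w]) / den   # cross-correlation, eq. (2)
    # Quality cuts quoted in the text: |mu| < 15 km/s or P_cc > 0.4
    # mark the spin as undetermined.
    determined = (abs(mu) >= 15.0) and (p_cc <= 0.4)
    return mu, p_cc, determined
```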
To determine the effect of spectral noise on the first moment, we computed $`\mu _{offset}`$, where we measure the first moment of the flux outside the line by conserving $`\mathrm{\Delta }v=(v_{max}-v_{min})`$ but offsetting the $`v_0`$, $`v_{min}`$, and $`v_{max}`$ in equation 1 to lie outside the line (see the accompanying figure). This gave us a measure of the contribution of spectral noise (presumably similar outside the Hi line as inside) to the value of $`\mu `$. To determine the effect of uncertainty in the baseline fit on the first moment determination, we also computed $`\mu _{wide}`$, where we find the first moment about $`v_0`$ of the flux outside the line. We then scaled this by $`\overline{\mathrm{\Delta }v}/\overline{\mathrm{\Delta }v_{outside}}`$ to determine the amount of $`\mu _{E-W}`$ uncertainty due to uncertainty in the baseline fit. Both $`\mu _{offset}`$ and $`\mu _{wide}`$ are illustrated in the accompanying figure. The $`\mu _{offset}`$ and $`\mu _{wide}`$ measurements suggest that galaxies with $`\left|\mu _{E-W}\right|<15`$ km s<sup>-1</sup> should be considered to have undetermined spin (see the accompanying figure).

To confirm that the $`E-W`$ spectra are the result of gas being observed on both sides of the major axis, we also computed the cross-correlation, $`P_{cc}`$, of the $`E+W`$ and $`E-W`$ spectra,

$$P_{cc}=\frac{\int _{v_{min}}^{v_{max}}f_{E+W}(v)f_{E-W}(v)dv}{\int _{v_{min}}^{v_{max}}[f_{E+W}(v)]^2dv},$$ (2)

since we would expect that the $`E+W`$ and $`E-W`$ spectra would be orthogonal in those cases where the flux is from both the eastern and western positions. We have empirically found that if $`P_{cc}>0.4`$, the $`E-W`$ flux was likely to be entirely from only one position and thus the spin measurement should be considered undetermined. It should be noted that this process will not eliminate observations of galaxies with an asymmetric Hi distribution if there is significant flux in both the eastern and western positions. Such an asymmetric Hi distribution would affect the mean velocity, $`v_0`$, and thus may affect the amplitude of $`\mu _{E-W}`$, but it should not change the sign of $`\mu _{E-W}`$, which is the observable we use later.

The final dataset had 54 galaxies with well determined spin vectors out of the 70 galaxies with good Hi detections (see Table 2), 16 galaxies having been rejected from the sample due to either large $`P_{cc}`$ or small $`\mu _{E-W}`$. For these galaxies, we computed

$$\theta _\stackrel{}{L}=\theta +90\mathrm{°}(\mu _{E-W}/\left|\mu _{E-W}\right|),$$ (3)

which is the projection of $`\stackrel{}{L}`$ on the plane of the sky. Since the Arecibo sample is chosen to be nearly edge-on, $`\theta _\stackrel{}{L}`$ is essentially a complete description of $`\stackrel{}{L}`$, allowing simple one-dimensional statistical analysis to be used for what is normally a three-dimensional problem.

## 3 Data Analysis Methods

### 3.1 The Kuiper Statistic

Identification of anisotropies in the observed $`\theta _\stackrel{}{L}`$ and $`\theta `$ distributions was initially done by using the Kuiper V statistic, which is a two-sided variant of the Kolmogorov-Smirnov (K–S) D statistic (Press et al. (1992)).
We use the Kuiper V statistic because the K–S D statistic can systematically underestimate the significance of differences between the observations and the models, especially if the differences are near the ends of the distribution (Press et al. (1992)). For this test, we compare the cumulative distributions of a variable, $`x`$ (such as $`\theta _\stackrel{}{L}`$, $`\mathrm{\Delta }\theta _\stackrel{}{L}`$, etc.), in the observed sample, $`S(x)`$, with that for a model of 100000 randomly-oriented galaxies, $`S_m(x)`$. The Kuiper statistic, $`V`$, is then defined as

$$V=D_++D_{-}=max[S(x)-S_m(x)]+max[S_m(x)-S(x)],$$ (4)

the sum of the absolute values of the maximum positive ($`D_+`$) and negative ($`D_{-}`$) differences between $`S(x)`$ and $`S_m(x)`$. (Note that the normal K–S D statistic is equal to $`max|S(x)-S_m(x)|`$; it doesn’t distinguish between differences above or below the $`S_m(x)`$ curve.) $`V`$ is essentially a measure of the difference between two distributions (see the accompanying figure). If the number of degrees of freedom is known a priori, a simple functional form exists for the probability, $`P(V)`$, that the two samples whose cumulative distributions differ by $`V`$ were drawn from the same parent distribution (see Press et al. 1992, for example). Therefore, if we are comparing the distribution of $`x`$ for the observed sample to that of a modeled, randomly-oriented sample, we have a way of estimating the probability that the observed sample is drawn from an isotropic distribution. In this study, we considered a distribution’s anisotropy significant if the probability, $`P(V)`$, that the Arecibo sample could have been drawn from the randomly-oriented sample was less than 5%. In those cases where the number of degrees of freedom is not well determined a priori, we used Monte Carlo comparisons of the observations with 1000 model samples of equal size. This was necessary in order to avoid overestimating the significance of an observed anisotropy.

We model a randomly oriented distribution of galaxies by taking the observed sample, randomly reassigning the observed $`P_{cc}`$ and $`\mu `$ values to various galaxies ($`\mu `$ is determined by randomly reversing the sign of $`|\mu |`$), and then randomly generating the major-axis position angle, $`\theta `$. This model kept the spatial distribution of the original sample and the Hi observational selection effects while otherwise being a completely randomly oriented model. Comparison of the real distribution of a variable versus its distribution in the 1000 Monte Carlo samples is used to determine the significance of an anisotropy in some of the more complicated distributions discussed in Section 4.

## 4 Results and Analysis

### 4.1 Probing for Global Spin Vector Alignments

As a followup to the work done in Paper I, we initially examined some distributions similar in nature to the ones investigated in that study. We divided the entire MAPS-PP and Arecibo samples into 3 subsets each: the high density subset, the low density subset, and the complete sample. The high and low density subsets were created using surface density estimates, $`\mathrm{\Sigma }`$, from the MAPS-PP catalog to compute the median surface density. The high and low density subsets include all galaxies with $`\mathrm{\Sigma }`$ greater than and less than this median value, respectively.
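The V statistic and its Monte Carlo calibration, used throughout the tests in this section, are simple to compute. A minimal sketch follows; it is our own illustration with hypothetical function names, and `make_random_sample` stands in for the randomization scheme described above.

```python
import numpy as np

def kuiper_v(sample, model):
    """Kuiper statistic V = D+ + D- between the empirical CDFs of two
    samples (eq. 4); `model` plays the role of the randomly-oriented
    reference distribution."""
    sample, model = np.sort(sample), np.sort(model)
    grid = np.concatenate([sample, model])        # all jump points
    s = np.searchsorted(sample, grid, side="right") / sample.size
    m = np.searchsorted(model, grid, side="right") / model.size
    return np.max(s - m) + np.max(m - s)

def p_greater_v(observed, make_random_sample, n_mc=1000, ref_size=100000):
    """Monte Carlo estimate of P(>V): fraction of randomly-oriented mock
    samples (of the observed size) whose V exceeds the observed V.  Used
    when the number of degrees of freedom is not known a priori."""
    reference = make_random_sample(ref_size)
    v_obs = kuiper_v(observed, reference)
    v_mc = np.array([kuiper_v(make_random_sample(observed.size), reference)
                     for _ in range(n_mc)])
    return np.mean(v_mc >= v_obs)
```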
For the MAPS-PP subsets we tested the $`\theta `$-based distributions, whereas for the Arecibo subsets we tested the $`\theta _\stackrel{}{L}`$-based distributions. Examinations of the $`\theta _\stackrel{}{L}`$ and $`\theta `$ distributions show no significant anisotropy in any of the Arecibo or MAPS-PP subsets. Similar results were seen for distributions of $`\theta _\stackrel{}{L}`$ and $`\theta `$ relative to other critical angles, including the following:

* $`\mathrm{\Delta }\theta _\stackrel{}{L}(1)`$ and $`\mathrm{\Delta }\theta (1)`$: the difference of $`\theta _\stackrel{}{L}`$ and $`\theta `$, respectively, between nearest neighbor galaxies in that sample (see the sketch below). Note that $`\mathrm{\Delta }\theta (1)`$ is used in the Arecibo sample only to separate the significance of any $`\mathrm{\Delta }\theta _\stackrel{}{L}(1)`$ alignments from any $`\mathrm{\Delta }\theta (1)`$ alignments.
* $`\mathrm{\Delta }\theta _\stackrel{}{L}(Geo)`$: the difference of $`\theta _\stackrel{}{L}`$ from the geodesic to the nearest neighbor galaxy.
* $`\mathrm{\Delta }\theta _\stackrel{}{L}(Ridge)`$: the difference of $`\theta _\stackrel{}{L}`$ from the angle of the Pisces-Perseus Supercluster ridgeline at its nearest point (as determined in Paper I).
* $`\mathrm{\Delta }\theta _\stackrel{}{L}(GCX)`$: the difference of $`\theta _\stackrel{}{L}`$ from the galaxy concentration position angle built using a percolation length of $`X`$ arcminutes (galaxy concentrations are groupings of galaxies identified using a two-dimensional friends-of-friends algorithm in which redshift is ignored; see Paper I for details).
* $`\mathrm{\Delta }\theta _\stackrel{}{L}(GCRX)`$: the difference of $`\theta _\stackrel{}{L}`$ from the radial line to the center of the galaxy concentration built using a percolation length of $`X`$ arcminutes.

These results, shown in Table 3, support the observations in Paper I in that no simple $`\theta `$ or $`\theta _\stackrel{}{L}`$ alignments appear to be present. Examination of the $`\mathrm{\Delta }\theta _\stackrel{}{L}(GCX)`$ distribution does not support the tentative anti-alignments seen in Paper I. We looked for ‘twisting’ of $`\mathrm{\Delta }\theta _\stackrel{}{L}(Ridge)`$ versus distance from the PPS ridgeline, and could not corroborate this signal seen in the $`\mathrm{\Delta }\theta (Ridge)`$ distribution of the MAPS-PP in Paper I. We note that the Arecibo sample is considerably smaller than the MAPS-PP, so we cannot rule out the trends seen in Paper I; we simply cannot support them.

### 4.2 Probing for Spin Vector Domains

An initial visual inspection of the plot of the distribution of $`\theta _\stackrel{}{L}`$ on the sky (see the accompanying figure) appears to show some $`\theta _\stackrel{}{L}`$ alignments. Specifically, in many cases, if one picks a galaxy at random and then compares its $`\theta _\stackrel{}{L}`$ with that of nearby galaxies, the difference is often less than 90°. It appeared to the authors that there was a visual impression of the PPS being divided up into “spin vector domains,” regions with preferred $`\stackrel{}{L}`$ orientations. Because visual impressions are subjective, we devised tests to look for possible spin vector domains as well as looking for the alignments of the sort reported in Paper I for the galaxy major-axes.
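The basic ingredient of both the nearest-neighbor statistic above and the domain tests that follow is the folded position-angle difference between neighbors on the sky. A minimal flat-sky sketch (our own; the function name and the small-angle approximation are assumptions) is:

```python
import numpy as np

def delta_theta_nn(ra, dec, theta_l, n_neighbors=1, max_sep_deg=None):
    """Folded position-angle differences between each galaxy and its
    n_neighbors nearest neighbours on the sky (flat-sky approximation).
    theta_l are spin-vector position angles, a 360-degree quantity, so
    differences are folded into [0, 180] degrees."""
    dra = (ra[:, None] - ra[None, :]) * np.cos(np.radians(dec))[None, :]
    ddec = dec[:, None] - dec[None, :]
    sep = np.hypot(dra, ddec)
    np.fill_diagonal(sep, np.inf)               # exclude self-pairs
    out = []
    for i in range(len(ra)):
        order = np.argsort(sep[i])[:n_neighbors]
        if max_sep_deg is not None:             # e.g. the 3-degree cut used below
            order = order[sep[i][order] <= max_sep_deg]
        d = np.abs(theta_l[i] - theta_l[order]) % 360.0
        out.append(np.minimum(d, 360.0 - d))    # fold into [0, 180]
    return np.concatenate(out)
```

With `n_neighbors=1` this gives $`\mathrm{\Delta }\theta _\stackrel{}{L}(1)`$; larger values give the summed N-neighbor distributions used in the domain tests.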
We attempted to confirm the visual impression of $`\stackrel{}{L}`$ domains seen in the accompanying figure by examining the orientations of several nearest neighbors, instead of just the nearest neighbor. To this end, we computed the $`\mathrm{\Delta }\theta _\stackrel{}{L}(N)`$ distribution, which is the summed distribution of $`\mathrm{\Delta }\theta _\stackrel{}{L}`$ for the N closest galaxies within 3° of each galaxy. If $`\stackrel{}{L}`$ domains exist, the $`\mathrm{\Delta }\theta _\stackrel{}{L}(N)`$ distribution should be peaked toward the lower values of $`\mathrm{\Delta }\theta _\stackrel{}{L}(N)`$. Because the $`\mathrm{\Delta }\theta _\stackrel{}{L}(N)`$ distribution about one galaxy is not independent of the distribution about that galaxy’s nearest neighbors, the number of degrees of freedom is uncertain a priori. This means that the standard function to determine the probability, $`P(V)`$, of two distributions being identical doesn’t work. Instead, we gauge $`P(V)`$ by generating 1000 Monte Carlo samples and computing the Kuiper V statistic of their $`\mathrm{\Delta }\theta _\stackrel{}{L}(N)`$ distributions. By comparing the value of V for the observed sample with the distribution of V in the 1000 Monte Carlo samples, we have an estimate of the likelihood that a greater value of V is obtained, $`P(>V)`$. We therefore use $`P(>V)`$ in lieu of the $`P(V)`$ used in cases where we know the number of degrees of freedom.

We examined the $`\mathrm{\Delta }\theta _\stackrel{}{L}(N)`$ distributions for the N closest galaxies of the Arecibo samples, for N ranging from 3 to 10. These samples show no significant anisotropy when compared to Monte Carlo generated datasets, indicating that the visual impression of $`\stackrel{}{L}`$ domains is either incorrect, or the $`\stackrel{}{L}`$ domains are too weakly aligned to confirm with this test. Because in Paper I only a simple nearest neighbor test was performed, we also examined the $`\mathrm{\Delta }\theta (N)`$ distribution for the MAPS-PP samples, in order to see if $`\stackrel{}{L}`$ domains might be visible in the larger MAPS-PP dataset. We found that for N ranging from 3 to 10, the $`\mathrm{\Delta }\theta (N)`$ distributions showed no evidence of significant anisotropies. This appears to indicate that it is unlikely that $`\stackrel{}{L}`$ domains exist in the Pisces-Perseus Supercluster.

### 4.3 Establishing Limits on Galaxy Alignments

In order to quantify the largest anisotropic signature that could remain “hidden” from our statistical techniques, we performed a simple simulation. We generated samples drawn from random ‘sinusoidal’ distributions described by the probability distributions

$$P(\mathrm{\Theta })d\mathrm{\Theta }=\left[1+\alpha \mathrm{cos}\left(\mathrm{\Theta }\frac{2\pi }{\lambda }\right)\right]d\mathrm{\Theta },\text{where }\mathrm{\Theta }\in [0,\lambda ],$$ (5)

and

$$P(\mathrm{\Theta })d\mathrm{\Theta }=\left[1+\alpha \mathrm{cos}\left(\mathrm{\Theta }\frac{2\pi }{\lambda }\right)\right]d\mathrm{\Theta },\text{where }\mathrm{\Theta }\in [0,\frac{\lambda }{2}],$$ (6)

where $`\alpha `$ is the amplitude of the ’sinusoidal’ component of the probability in percent.
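Drawing mock samples from these distributions is straightforward. A minimal rejection-sampling sketch follows; it is our own illustration, and the choice of $`\lambda `$ (e.g. 180° for position-angle-like quantities) is an assumption to be set by the statistic being tested.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sinusoidal(alpha, lam, size, full_period=True):
    """Draw angles from P(theta) ~ 1 + alpha*cos(2*pi*theta/lam) by
    rejection sampling, with theta in [0, lam] (eq. 5) or in
    [0, lam/2] (eq. 6); alpha is the fractional amplitude."""
    hi = lam if full_period else lam / 2.0
    out = np.empty(0)
    while out.size < size:
        theta = rng.uniform(0.0, hi, 2 * size)
        # Accept with probability proportional to the target density,
        # using the flat envelope of height 1 + alpha.
        keep = rng.uniform(0.0, 1.0 + alpha, theta.size) \
               <= 1.0 + alpha * np.cos(2.0 * np.pi * theta / lam)
        out = np.concatenate([out, theta[keep]])
    return out[:size]
```

Feeding such samples through the Kuiper test sketched earlier, for a grid of amplitudes and sample sizes, reproduces the calibration procedure described next.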
In these two distributions, $`\mathrm{\Theta }`$ represents either the expected $`\theta `$ or $`\theta _\stackrel{}{L}`$ distributions in the cases of large-scale alignments (equation 6), or the $`\mathrm{\Delta }\theta `$ and $`\mathrm{\Delta }\theta _\stackrel{}{L}`$ distributions in the cases of alignments (equation 6), anti-alignments (equation 6), or both (equation 5). Using these two distributions, we can generate samples with a predetermined amplitude, $`\alpha `$, of the alignments present and then compute the value of $`P(V)`$, the probability of the sample having been drawn from a random sample. By repeatedly doing this, we can determine the distribution of $`P(V)`$ for a given $`\alpha `$ and sample size.

For samples of 30, 54, 100, 615, and 1230 galaxies (the sizes of our subsets as noted in Table 3), we computed the $`P(V)`$ distribution for 100 generated $`\mathrm{\Theta }`$ samples with amplitudes, $`\alpha `$, ranging from 0% to 100% in steps of two percent (see the accompanying figure). We then examined at which point 95% of the $`P(V)`$ distribution dropped below 0.05, our criterion for calling a distribution significantly anisotropic. This gave us an estimate of the largest amplitude sinusoidal anisotropy that could have been missed, which we call $`\alpha _{95}`$. $`\alpha _{95}`$ is therefore the smallest amplitude of a sinusoidal anisotropy for which there is a 95% chance of detection given the criterion $`P(V)<0.05`$.

For our Arecibo sample, we find that with 54 galaxies $`\alpha _{95}\sim 0.75`$; therefore we can only eliminate global spin vector alignments with sinusoidal amplitudes greater than 75%. This sample does not place very strong limits on the level of any spin vector alignments present. With the 1230 galaxies in the MAPS-PP catalog, we find $`\alpha _{95}\sim 0.15`$. Therefore we can eliminate the possibility of galaxy major-axis alignments at amplitudes greater than 15%. Major-axis alignments place very weak limits on the level of spin vector alignments due to the fact that the orientation of the major-axis of the galaxy, with no additional information, only restricts the spin vector to a plane. However, if there is a spin vector alignment, it must be reflected in the major-axis distribution of the edge-on galaxies. We find that in a subsample of 729 MAPS-PP galaxies restricted to $`ϵ>0.50`$, there is no significant major-axis anisotropy of any sort. For a sample of 729 galaxies, we find $`\alpha _{95}\sim 0.20`$; therefore, we can confidently state that there are no spin vector alignments with sinusoidal amplitude greater than 20% (within the uncertainty due to the two-fold degeneracy in mapping major-axis position angle to spin vector). We would like to have computed $`\alpha _{95}`$ for the spin vector domain tests in order to gauge their sensitivity, but it was computationally too expensive.

## 5 Conclusions

We have constructed the only catalog of well determined spin vectors for galaxies in the Pisces-Perseus Supercluster. Our study is the first radio study that explicitly looks at the spin vector distribution of galaxies in a supercluster and was optimized toward that end. We developed a simple technique for obtaining spin vector determinations and assessing the level of uncertainty in the spin vector determinations due to both spectral noise and uncertainty in fitting the continuum.
We were intentionally rather conservative in our data selection criteria, possibly rejecting several well measured spin vectors.

There are several problems currently hampering the determination of the angular momentum distribution of galaxies relative to each other and to the surrounding large-scale structure. One major problem is that we do not have a very clear understanding of the internal extinction in galaxies and its effect on the appearance of the galaxy with changing inclination. Therefore, it is very difficult to accurately determine the inclination of a galaxy based solely on its ellipticity and position angle. This also makes it more difficult to construct a proper volume-limited sample for a large-scale angular momentum study. One could obtain redshifts for all the galaxies in a diameter-limited or magnitude-limited galaxy catalog and select a volume-limited subsample, but without a clear understanding of internal extinction, we cannot correct magnitudes and diameters for inclination.

We compensated for these uncertainties in the effects of internal galaxy extinction by restricting our sample to highly edge-on galaxies. This had the added benefit of making the Hi spectra of the galaxies as broad as possible, and thus making it easier to determine the $`\stackrel{}{L}`$ orientation. We note that this restriction to edge-ons could make detection of alignments relative to large-scale structure difficult, since we would be restricting analysis to galaxies with $`\stackrel{}{L}`$ in the plane of the sky. However, in this study, the edge-on orientation of the Pisces-Perseus Supercluster means our sample galaxies’ $`\stackrel{}{L}`$ lie in the plane perpendicular to the supercluster plane, which is advantageous for reducing the complexity of the analysis. This does reduce our sensitivity to any $`\stackrel{}{L}`$ alignments that lie outside the plane of the sky. For example, if galaxies’ $`\stackrel{}{L}`$ are preferentially oriented in a given direction within the plane of the Pisces-Perseus Supercluster (e.g. toward a cluster in the supercluster plane) rather than simply being restricted to that plane, we may not detect such an alignment in our sample, since we restrict the $`\stackrel{}{L}`$ of sample galaxies to the plane perpendicular to the supercluster plane. It would be interesting to perform similar observations of a “face-on” version of Pisces-Perseus, where we would then be restricting $`\stackrel{}{L}`$ to the supercluster plane and possibly investigating a new class of $`\stackrel{}{L}`$ alignments.

The technique we outline for obtaining spin vector measurements could be applied to quickly obtain $`\stackrel{}{L}`$ measurements for many galaxies in superclusters other than Pisces-Perseus. It is also notable that this technique could be transferred to multi-fiber spectroscopy. By assigning two fibers to each galaxy, one could simultaneously determine the $`\stackrel{}{L}`$ directions of many galaxies much more quickly than with comparable long-slit spectrograph observations. No rotation curve information would be available, but it would allow quick collection of a large sample of well determined galaxy $`\stackrel{}{L}`$.

Our examination of the $`\stackrel{}{L}`$ distribution of galaxies in Pisces-Perseus provides no support for any form of anisotropic $`\stackrel{}{L}`$ distribution. We do not provide confirmation of the possible $`\stackrel{}{L}`$ alignments noted in Paper I for the major-axis distributions of Pisces-Perseus galaxies.
Given the relatively small size of the Arecibo sample, rather large anisotropies in the spin vector distribution of the Arecibo sample (see Section 4.3) could remain undetected with our technique. We do note that by using a sample of 729 nearly edge-on galaxies from the original MAPS-PP catalog, we feel we can restrict the sinusoidal amplitude of any spin vector anisotropy present to be less than approximately 20% of the background ‘random’ distribution, at least in the plane perpendicular to the Pisces-Perseus supercluster ridge. It is unclear at what level galaxy $`\stackrel{}{L}`$ alignments might be expected, as no recent simulations have been designed with the goal of estimating galaxy alignments. We expect that if galaxy alignments are produced by large-scale structure formation, the alignments would be strongest in areas of low density, where the relative scarcity of subsequent galaxy-galaxy interactions suggests the initial $`\stackrel{}{L}`$ distribution would be better preserved. However, as noted in the introduction, galaxy alignments can arise from a variety of evolutionary processes, in both high and low density environments. It would be interesting if, in modern computer simulations of galaxy evolution, the angular momenta of the resulting galaxies were examined for $`\stackrel{}{L}`$ alignments and predictions were made for the amplitude (and type) of any anisotropies in the $`\stackrel{}{L}`$ distribution. As we showed in Section 4.3, sample sizes need to be large (on the order of at least 500 galaxies) in order to unambiguously detect weak alignments. There are two paths toward increasing the sample size. We could examine a denser cluster with a greater number of targets satisfying our edge-on criteria, such as the Coma cluster. It would be interesting to investigate the possibility of tidally induced galaxy alignments in denser environments as predicted by Ciotti and Dutta (1994) and Ciotti and Giampieri (1998). The only previous study looking for galaxy alignments in Coma was plagued by stretched imaging (Djorgovski (1987)), so alignment results for this cluster are still unclear. Our other option for increasing the sample size is to develop a better understanding of the internal extinction in galaxies so that we could use galaxies of all inclinations. The first author is currently investigating the use of image parameters of a large number of galaxies obtained from the APS database in order to better determine the internal extinction properties of galaxies. We would like to thank telescope operators Miguel Boggiano, Willie Portalatin, Pedro Torres, and Norberto Despiau for their good humor and help with observing (and especially Norberto for his “lucky coffee”). JEC would like to thank Chris Salter, Tapasi Ghosh, Jo Ann Etter, and Phil Perillat for helping make his first radio observing experience excellent, both professionally and personally. Travel was sponsored by the National Astronomy and Ionosphere Center (NAIC) and the University of Minnesota Graduate School. This research has made use of the APS Catalog of the POSS I, which is supported by the National Aeronautics and Space Administration and the University of Minnesota. The APS databases can be accessed at http://aps.umn.edu/ on the World Wide Web. Some data reduction was performed at the Laboratory for Computational Science and Engineering (LCSE) at the University of Minnesota. Information about the LCSE can be found online at http://www.lcse.umn.edu/.
no-problem/9903/cond-mat9903359.html
ar5iv
text
# Comment on “Superconducting phases in the presence of Coulomb interaction: From weak to strong correlations”

## Abstract

We examine the basic equations of the paper by T. Domański and K.I. Wysokiński \[Phys. Rev. B 59, 173 (1999)\], who calculated the critical superconducting temperature, $`T_c`$, as a function of Coulomb correlations for $`s`$\- and $`d`$–wave order parameter symmetries. We argue that in their gap equation the Coulomb repulsion is counted twice. We then write down the corrected gap equation and solve it using a normal state one–particle Green function which gives a Mott metal–insulator transition. Our numerical results for $`T_c`$ vs $`U`$ show that $`U`$ is detrimental to superconductivity, as was found by Domański and Wysokiński. PACS numbers: 74.20.-Fg, 74.10.-z, 74.60.-w, 74.72.-h

In a recent paper, a study of the evolution of the superconducting phases in a model with competing short–range attractive ($`W`$) and on–site ($`U`$) interactions is presented. The authors have evaluated the superconducting critical temperature ($`T_c`$) at the mean field level for the $`s`$\- and $`d`$–wave superconducting order parameter symmetries under the influence of correlations ($`U`$), using several approximations (Hartree–Fock ($`HF`$), second order perturbation theory ($`SOPT`$) and the alloy analogy approximation ($`AAA`$)). We find that: 1- Eq. (2) of the commented paper counts the Coulomb repulsion twice: once in the normal state one–particle Green function, $`G_N(\stackrel{}{k},i\omega _n)`$, and again in the mean field gap equation. The correct expression should be: $$\mathrm{\Delta }_\stackrel{}{k}=\frac{T}{N}\sum_{\stackrel{}{q},n}\frac{W_{\stackrel{}{q}-\stackrel{}{k}}\mathrm{\Delta }_\stackrel{}{q}}{|G_N(\stackrel{}{q},i\omega _n)|^{-2}+|\mathrm{\Delta }_\stackrel{}{q}|^2},$$ (1) where $`G_N(\stackrel{}{q},i\omega _n)`$ is the normal state one–particle Green function, for which $`\mathrm{\Delta }_\stackrel{}{q}=0`$. 2- To avoid double counting the Coulomb interaction term we use a $`G_N(\stackrel{}{k},i\omega _n)`$ which gives a metal–insulator transition ($`MIT`$). Therefore, we propose the following academic, well behaved $`G_N(\stackrel{}{k},i\omega _n)`$: $$G_N(\stackrel{}{k},i\omega _n)=\frac{1-\rho }{i\omega _n+\mu -\epsilon _\stackrel{}{k}}+\frac{\rho }{i\omega _n+\mu -\epsilon _\stackrel{}{k}-U},$$ (2) with a critical value of $`U`$ of $`U_c=2D`$ (the bandwidth). With the use of Eq. (2), the equations for the mean–field superconducting critical temperature, $`T_c`$, and the self–consistent particle density, $`\rho `$, are $$\frac{1}{V}=\frac{1}{N_s}\sum_{\stackrel{}{k}}\left[\frac{(1-\rho )^2\mathrm{tanh}\left(\frac{\epsilon _\stackrel{}{k}-\mu }{2T_c}\right)}{2(\epsilon _\stackrel{}{k}-\mu )}+\frac{\rho ^2\mathrm{tanh}\left(\frac{\epsilon _\stackrel{}{k}-\mu +U}{2T_c}\right)}{2(\epsilon _\stackrel{}{k}-\mu +U)}+\frac{\rho (1-\rho )\left[\mathrm{tanh}\left(\frac{\epsilon _\stackrel{}{k}-\mu +U}{2T_c}\right)+\mathrm{tanh}\left(\frac{\epsilon _\stackrel{}{k}-\mu }{2T_c}\right)\right]}{2(\epsilon _\stackrel{}{k}-\mu )+U}\right]$$ (3) and $$\rho =\frac{1}{2N_s}\sum_{\stackrel{}{k}}\left[1-\left(1-\rho \right)\mathrm{tanh}\left(\frac{\epsilon _\stackrel{}{k}-\mu }{2T_c}\right)-\rho \mathrm{tanh}\left(\frac{\epsilon _\stackrel{}{k}-\mu +U}{2T_c}\right)\right],$$ (4) from which we recover $`T_c^{BCS}`$ when $`U/W=0`$.
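A minimal numerical sketch of how eqs. (3)-(4) can be solved self-consistently for $`T_c`$ and $`\rho `$, using the flat density of states adopted below ($`2D=1.0`$). The parameter values in the example call are illustrative assumptions, not the values used for the figures.

```python
import numpy as np
from scipy.optimize import brentq

D = 0.5
eps = np.linspace(-D, D, 4001)
w = np.full_like(eps, 1.0 / len(eps))        # flat DOS, normalized weights

def safe(x):
    """Guard against 0/0 at eps = mu; the integrand limit there is finite."""
    return np.where(np.abs(x) < 1e-9, 1e-9, x)

def th(x, T):
    return np.tanh(x / (2.0 * T))

def rho_rhs(T, mu, U, rho):                   # right-hand side of eq. (4)
    return 0.5 * np.sum(w * (1.0 - (1.0 - rho) * th(eps - mu, T)
                                 - rho * th(eps - mu + U, T)))

def gap_rhs(T, mu, U, rho):                   # right-hand side of eq. (3)
    a, b = eps - mu, eps - mu + U
    return np.sum(w * ((1 - rho)**2 * th(a, T) / (2 * safe(a))
                       + rho**2 * th(b, T) / (2 * safe(b))
                       + rho * (1 - rho) * (th(a, T) + th(b, T))
                         / safe(2 * a + U)))

def Tc(V, U, mu):
    def f(T):
        rho = 0.25
        for _ in range(100):                  # fixed-point iteration of eq. (4)
            rho = rho_rhs(T, mu, U, rho)
        return gap_rhs(T, mu, U, rho) - 1.0 / V   # zero at T = T_c
    return brentq(f, 1e-3, 2.0)

print(Tc(V=1.0, U=0.2, mu=0.1))               # T_c at one illustrative point
```

Scanning $`U`$ at fixed $`V`$ with such a solver reproduces the qualitative behavior discussed below: $`T_c`$ decreases with $`U`$ and vanishes beyond a critical value.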
We have chosen a flat free density of states, i.e., $`N_L(ϵ)=1/2D`$ for $`-D\le ϵ\le +D`$ and zero otherwise. Let us say that Eq. (2) is the exact solution of the normal state Hamiltonian $`H_U`$ and that the perturbation is the attractive interaction between Cooper pairs, $`Vf(\stackrel{}{k})`$ in our case. The case of $`V=f(\stackrel{}{k})`$ will not be discussed here. Thus, our full Hamiltonian can be written as $$H=H_U+H_V.$$ (5) From Eq. (5) we conclude that the perturbation is $`H_V`$, i.e., we can apply a mean field analysis to the second term of Eq. (5). In other words, one knows the “exact” solution of $`H_U`$ and our problem at hand is to calculate the solution of the full Hamiltonian $`H`$. The normal state one–particle Green function must then be valid for any value of $`U`$. Because of this, Eq. (1) is fully justified. Under these circumstances, it would be an error to perform a mean field approximation on both $`U`$ and $`W_\stackrel{}{q}`$. In Fig. 1 we present $`T_c`$ vs $`U`$ for several values of $`V`$. In (a) $`\mu =0.25(U/2)`$; (b) $`\mu =0.50(U/2)`$; (c) $`\mu =0.75(U/2)`$ and (d) $`\mu =1.0(U/2)`$. For $`\mu =U/2`$ we are at half–filling. We have chosen $`2D=1.0`$. From Fig. 1 we observe that there is a critical value of $`U`$ beyond which $`T_c`$ is zero. This clearly shows that the Coulomb interaction (correlations) conspires against superconductivity. These results agree with the ones found in Ref. . Our calculations have been performed assuming that the Hartree shift due to Cooper pairing, $`\rho V`$, is the same in both the normal and superconducting phases. This is the reason why we have not renormalized the chemical potential with the pairing interaction. Our approximation (Eq. (2)) is an academic one because the weights of the spectral functions of $`G_N(\stackrel{}{k},i\omega _n)`$ are not $`\stackrel{}{k}`$–dependent (they are $`\alpha _1(\stackrel{}{k})=1-\rho `$ and $`\alpha _2(\stackrel{}{k})=\rho `$, respectively). However, other approximations, besides the ones used in Ref. and our Eq. (2), can be employed, for example the one of Nolting, which gives a metal–insulator transition if the band narrowing factor term, $`B(\stackrel{}{k})`$, is properly treated. For high values of $`|V|`$ we should take into account the effect of superconducting pair fluctuations, as has been done by Schmid and others. In conclusion, we have justified our gap equation (Eq. (1); see also Eq. (5)), which modifies Eq. (2) of the commented paper. By proposing Eq. (1) we avoid double counting of the Coulomb interaction ($`U`$). At the same time, we have used a normal state one–particle Green function (Eq. (2)) which yields the metal–insulator transition. Also, we have presented results of $`T_c`$ vs $`U`$ for several values of $`V`$ and of $`\mu =\alpha (U/2)`$, where $`\alpha =0.25`$; $`0.50`$; $`0.75`$ and $`1.00`$. The present results agree with the conclusion of Ref. that correlations conspire against superconductivity.

Acknowledgements We thank Prof. Sergio Garcia Magalhães for interesting discussions. The authors thank partial support from FAPERGS–Brasil, CONICIT–Venezuela (Project F-139), CNPq–Brasil.

Figure Captions Figure 1. $`T_c`$ vs $`U`$, for different values of $`V`$, i.e., $`V=0.50`$; $`1.00`$; $`1.50`$ and $`2.00`$. (a) $`\mu =0.25(U/2)`$; (b) $`\mu =0.50(U/2)`$; (c) $`\mu =0.75(U/2)`$; and (d) $`\mu =1.00(U/2)`$.
no-problem/9903/nucl-ex9903010.html
ar5iv
text
# Elliptic Flow: Transition from out-of-plane to in-plane Emission in Au + Au Collisions

## Abstract

We have measured the proton elliptic flow excitation function for the Au + Au system spanning the beam energy range 2–8 AGeV. The excitation function shows a transition from negative to positive elliptic flow at a beam energy $`E_{tr}\approx 4`$ AGeV. Detailed comparisons with calculations from a relativistic Boltzmann-equation are presented. The comparisons suggest a softening of the nuclear equation of state (EOS) from a stiff form ($`K\approx 380`$ MeV) at low beam energies ($`E_{Beam}\lesssim 2`$ AGeV) to a softer form ($`K\approx 210`$ MeV) at higher energies ($`E_{Beam}\gtrsim 4`$ AGeV) where the calculated baryon density $`\rho \sim 4\rho _0`$.

For many years, the investigation of the nuclear equation of state (EOS) has stood out as one of the primary driving forces for heavy ion reaction studies (e.g. ). Measurements of collective motion and, in particular, the elliptic flow have been predicted to provide information crucial for establishing the parameters of the EOS. Theoretical conjectures have also focused on the notion that a transition to the quark-gluon plasma (QGP) is associated with a “softest point” in the EOS where the pressure increase with temperature is much slower than the energy density. Such a softening of the EOS is predicted to start at quark-antiquark densities comparable to those in the ground-state of nuclear matter, and also at relatively low temperatures if the baryon density is driven significantly beyond its normal value $`\rho _0`$. At energies of $`1\lesssim E_{Beam}\lesssim 11`$ AGeV, collision-zone matter densities up to $`\rho \sim 6`$–$`8\rho _0`$ are expected. Such densities could very well result in conditions favorable to a softening of the EOS. Therefore, it is important to investigate currently available elliptic flow data \[in this energy range\] to search for new insights into the parameters of the EOS and for any indication of its softening. Elliptic flow reflects the anisotropy of transverse particle emission at midrapidity. For beam energies of 1–11 AGeV this anisotropy results from a strong competition between “squeeze-out” and “in-plane flow”. The magnitude and the sign of elliptic flow depend on two factors: (a) the pressure built up in the compression stage compared to the energy density, and (b) the passage time of the projectile and target spectators. The characteristic time for the development of expansion perpendicular to the reaction plane can be estimated as $`R/c_s`$, where the speed of sound $`c_s=\sqrt{p/e}`$, $`R`$ is the nuclear radius, $`p`$ is the pressure and $`e`$ is the energy density. The passage time is $`2R/(\gamma _0v_0)`$, where $`v_0`$ is the c.m. spectator velocity. Thus the “squeeze-out” contribution should reflect the ratio $`c_s/\gamma _0v_0`$, which is responsible for the essentially logarithmic dependence of elliptic flow on the beam energy for $`1\lesssim E_{beam}\lesssim 11`$ AGeV. Recent calculations have made specific predictions for the beam energy dependence of elliptic flow for Au + Au collisions at 1–11 AGeV. They indicate a transition from negative to positive elliptic flow at a beam energy $`E_{tr}`$, which has a marked sensitivity to the stiffness of the EOS. In addition, they suggest that a phase transition to the QGP should give a characteristic signature in the elliptic flow excitation function due to significant softening of the EOS.
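The passage-time argument above is easy to put in numbers. The following sketch compares $`2R/(\gamma _0v_0)`$ with $`R/c_s`$ across this energy range; the value of $`c_s`$ is an assumed illustrative input, not a fitted one.

```python
import math

m = 0.939          # nucleon mass, GeV
R = 7.0            # Au radius, fm
c_s = 0.2          # assumed speed of sound, in units of c (illustrative)

for Ek in (2.0, 4.0, 6.0, 8.0):                  # beam kinetic energy, AGeV
    E_lab = Ek + m                               # total beam energy per nucleon
    s_nn = 2 * m**2 + 2 * m * E_lab              # Mandelstam s per NN pair
    gamma0 = math.sqrt(s_nn) / (2 * m)           # c.m. Lorentz factor of spectators
    v0 = math.sqrt(1 - 1 / gamma0**2)
    print(f"{Ek:3.0f} AGeV: passage 2R/(gv) = {2 * R / (gamma0 * v0):5.1f} fm/c,"
          f" expansion R/c_s = {R / c_s:4.1f} fm/c")
```

The passage time falls steadily with beam energy while the expansion time does not, which is the competition that drives the sign change of the elliptic flow discussed next.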
In this Letter we present an experimental elliptic flow excitation function for the Au + Au system to establish $`E_{tr}`$ and to search for any hints of a softening of the EOS. The measurements were performed at the Alternating Gradient Synchrotron (AGS) at the Brookhaven National Laboratory. Beams of $`{}^{197}`$Au ($`E_{Beam}=2`$, 4, 6, and 8 AGeV) were used to bombard a $`{}^{197}`$Au target of thickness calculated for a 3% interaction probability. Typical beam intensities resulted in $`\sim 10`$ spills/min with $`\sim 10^3`$ particles per spill. Charged reaction products were detected with the E895 experimental setup, which consists of a time projection chamber (TPC) and a multisampling ionization chamber (MUSIC). The TPC, which was located in the MPS magnet (typically at 1.0 Tesla), provided good acceptance and charge resolution for charged particles $`1<Z<6`$ at all four beam energies. However, unique mass resolution for $`Z=1`$ particles was not achieved for all rigidities. The MUSIC device, positioned $`\sim 10`$ m downstream of the TPC, provided unique charge resolution for fragments with $`Z>7`$ for the 2 and 4 AGeV beams. Data were taken with a trigger for minimum bias and also for a bias toward central and mid-central collisions. Results are presented here for protons measured in the TPC for mid-central collisions. We use the second Fourier coefficient $`v_2=\langle \mathrm{cos}2\varphi \rangle `$ to measure the elliptic flow or azimuthal asymmetry of the proton distributions at midrapidity ($`|y_{cm}|<0.1`$); $$\frac{dN}{d\varphi }\propto \left[1+2v_1\mathrm{cos}(\varphi )+2v_2\mathrm{cos}(2\varphi )\right].$$ (1) Here, $`\varphi `$ represents the azimuthal angle of an emitted proton relative to the reaction plane. The Fourier coefficient $`\langle \mathrm{cos}2\varphi \rangle =0`$, $`>0`$, and $`<0`$ for zero, positive, and negative elliptic flow respectively. Measurements of $`v_1`$ will be presented and discussed in a forthcoming paper. Our analysis proceeds in two steps. First, we determine the reaction plane and its associated dispersion for each beam energy. Second, we generate azimuthal distributions with respect to this experimentally determined reaction plane and evaluate $`\langle \mathrm{cos}2\varphi \rangle `$. The vector $`\stackrel{}{Q}_i=\sum_{j\ne i}^nw(y_j)\stackrel{}{p}_j^t/p_j^t`$ is used to determine the azimuthal angle, $`\mathrm{\Phi }_{plane}`$, of the reaction plane. Here, $`\stackrel{}{p}_j^t`$ and $`y_j`$ represent, respectively, the transverse momentum and the rapidity of baryon $`j`$ ($`Z\le 2`$) in an event. The weight $`w(y_j)`$ is assigned the value $`\frac{\langle p^x\rangle }{\langle p^t\rangle }`$, where $`p^x`$ is the transverse momentum in the reaction plane. $`\langle p^x\rangle `$ is obtained from the first pass of an iterative procedure. The dispersion of the reaction plane, as well as biases associated with detector efficiencies, plays a central role in flow analyses. Consequently, in Fig. 1 we show representative distributions for the experimentally determined reaction plane ($`\mathrm{\Phi }_{Plane}`$) and the associated relative reaction-plane distributions ($`\mathrm{\Phi }_{12}`$). The distributions have been generated for a mid-central impact parameter, i.e. multiplicities between 0.5 and 0.75 $`M_{max}`$. Here, $`M_{max}`$ is the multiplicity corresponding to the point in the charged particle multiplicity distribution where the height of the multiplicity distribution has fallen to half its plateau value. It is estimated that this multiplicity range corresponds to an impact parameter range of 5–7 fm. The $`\mathrm{\Phi }_{12}`$ distributions (cf. Fig.
1) which are important for assessing the role of the reaction-plane dispersion, have been obtained via the subevent method. That is, reaction planes were determined for two subevents constructed from each event; $`\mathrm{\Phi }_{12}`$ is the absolute value of the relative azimuthal angle between these two estimated reaction planes. The essentially flat reaction plane distributions shown in Fig. 1a reflect rapidity- and multiplicity-dependent azimuthal efficiency corrections, applied to take account of the detection inefficiencies of the TPC. These corrections were obtained by accumulating the laboratory azimuthal distribution of the particles (as a function of rapidity and multiplicity) for all events and then including the inverse of these distributions in the weights for the determination of the reaction plane. The distributions shown in Fig. 1a confirm the absence of significant distortions which could influence the magnitude of the extracted elliptic flow. The relative reaction-plane distributions ($`\mathrm{\Phi }_{12}`$) shown in Fig. 1b indicate mean values which increase with the beam energy from $`\langle \mathrm{\Phi }_{12}\rangle /2\approx 17.0^{\circ }`$ at 2 AGeV to $`36.1^{\circ }`$ at 8 AGeV. This increase suggests a progressive deterioration in the resolution of the reaction plane with increasing beam energy; however, a reasonable resolution is maintained over the entire energy range. The $`\mathrm{\Phi }_{12}`$ distributions serve as the basis for correcting the extracted elliptic flow values, as discussed below. In Fig. 2, we show observed (or $`\varphi ^{\prime }`$) azimuthal distributions for protons. The distributions, shown for several rapidity bins, have been generated for the same mid-central impact parameter range (5–7 fm) discussed above. Several characteristic features are exhibited in Fig. 2. For example, as one moves away from midrapidity, the $`\varphi ^{\prime }`$ distributions exhibit shapes commonly attributed to collective sidewards flow. That is, for $`y>0`$, the distributions peak at $`0^{\circ }`$, and, for $`y<0`$, they peak at $`\pm 180^{\circ }`$. Fig. 2 also shows that these anisotropies decrease with increasing beam energy. The primary feature of the midrapidity distributions contrasts with those obtained at other rapidities. At 2 AGeV, two distinct peaks can be seen at $`-90^{\circ }`$ and $`+90^{\circ }`$. These peaks indicate a clear signature for the “squeeze-out” of nuclear matter perpendicular to the reaction plane, or negative elliptic flow. By contrast, at 6 and 8 AGeV, the midrapidity distributions peak at $`0^{\circ }`$ and $`\pm 180^{\circ }`$. This latter anisotropy pattern is expected for positive elliptic flow. Thus, Fig. 2c provides clear evidence for negative elliptic flow at 2 AGeV, positive elliptic flow at 6 and 8 AGeV, and near zero flow for $`E_{Beam}=4`$ AGeV. In order to quantify the proton elliptic flow, it is necessary to suppress possible distortions arising from imperfect particle identification (PId). It is relevant to reiterate here that unique separation of $`\pi ^+`$ and protons was not achieved for all rigidities. To suppress such ambiguity we applied the following procedure. First, we plot the observed Fourier coefficient $`\langle \mathrm{cos}2\varphi ^{\prime }\rangle `$ vs. $`p_t`$ with $`p_t`$ thresholds which allow clean particle separation ($`p_t\lesssim 1`$ GeV/c). We then extract the coefficients for the quadratic dependence of $`\langle \mathrm{cos}2\varphi ^{\prime }\rangle `$ on $`p_t`$ (see inset in Fig. 3). These quadratic fits are restricted by the requirement that $`\langle \mathrm{cos}2\varphi ^{\prime }\rangle =0`$ for $`p_t=0`$.
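A toy illustration of the event-plane method described above (this is not the E895 analysis code): sample azimuthal angles from eq. (1) with a rapidity-odd $`v_1`$, build the weighted Q-vector with $`w(y)=\mathrm{sign}(y)`$ as a stand-in for $`\langle p^x\rangle /\langle p^t\rangle `$, and measure $`\langle \mathrm{cos}2(\varphi -\mathrm{\Phi }_{plane})\rangle `$. Autocorrelations are ignored in this toy.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_event(n=80, v1=0.15, v2=0.05, psi=0.0):
    y = rng.uniform(-1.0, 1.0, n)                # toy rapidities
    phi = np.empty(n)
    env = 1.0 + 2.0 * v1 + 2.0 * abs(v2)         # rejection envelope
    for i in range(n):
        s = np.sign(y[i]) * v1                   # directed flow flips with rapidity
        while True:
            x = rng.uniform(-np.pi, np.pi)
            if rng.uniform(0.0, env) < 1.0 + 2 * s * np.cos(x) + 2 * v2 * np.cos(2 * x):
                phi[i] = x + psi
                break
    return phi, y

v2_obs = []
for _ in range(500):
    psi = rng.uniform(-np.pi, np.pi)             # true (unknown) reaction plane
    phi, y = toy_event(psi=psi)
    qx = np.sum(np.sign(y) * np.cos(phi))
    qy = np.sum(np.sign(y) * np.sin(phi))
    plane = np.arctan2(qy, qx)                   # estimated reaction plane
    v2_obs.append(np.mean(np.cos(2.0 * (phi - plane))))

print("observed v2:", np.mean(v2_obs))           # below the input 0.05
```

The observed coefficient comes out below the input value because of the finite reaction-plane resolution, which is exactly what the dispersion correction described next compensates for.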
Second, we correct the proton $`p_t`$ distributions for possible $`\pi ^+`$ contamination by way of a probabilistic PId. The latter probabilities were obtained by extrapolating the exponential tails of the proton and $`\pi ^+`$ rigidity distributions into the regions of overlap. A weighted average (relative number of protons in a $`p_t`$ bin times the $`\langle \mathrm{cos}2\varphi ^{\prime }\rangle `$ for that bin) was then performed to obtain $`\langle \mathrm{cos}2\varphi ^{\prime }\rangle `$ for each beam energy. Subsequent to this evaluation, we then used the relative reaction plane distribution at each beam energy (cf. Fig. 1) to obtain dispersion corrections for the extracted Fourier coefficients. The relationship between $`\langle \mathrm{cos}2\varphi ^{\prime }\rangle `$ (obtained with the estimated reaction plane) and the Fourier coefficient $`\langle \mathrm{cos}2\varphi \rangle `$ relative to the true reaction plane is: $$\langle \mathrm{cos}2\varphi ^{\prime }\rangle =\langle \mathrm{cos}2\varphi \rangle \langle \mathrm{cos}2\mathrm{\Delta }\mathrm{\Phi }\rangle .$$ (2) where $`\langle \mathrm{cos}2\mathrm{\Delta }\mathrm{\Phi }\rangle `$ is the correction factor determined from the $`\mathrm{\Phi }_{12}`$ distribution. Following the prescription outlined in Ref., we find correction factors which range from 0.79 at 2 AGeV to 0.29 at 8 AGeV. The correction factors are summarized along with $`\langle \mathrm{cos}\mathrm{\Phi }_{12}\rangle `$ in Table 1. The corrected elliptic flow values, $`\langle \mathrm{cos}2\varphi \rangle `$, are represented by filled stars in Fig. 3. This excitation function clearly shows an evolution from negative to positive elliptic flow within the region $`2\le E_{Beam}\le 8`$ AGeV and points to an apparent transition energy $`E_{tr}\approx 4`$ AGeV. The solid and dashed curves represent the results of model calculations described below. Since the value of $`E_{tr}`$ is predicted to be sensitive to the parameters of the EOS, it is important to examine additional constraints on its value. The inset in Fig. 3 shows the corrected $`\langle \mathrm{cos}2\varphi \rangle `$ values as a function of $`p_t`$ for protons. The solid curves in the figure represent quadratic fits to the data (2 and 6 AGeV), which are in agreement with the predicted quadratic dependence of $`\langle \mathrm{cos}2\varphi \rangle `$ on $`p_t`$. Of greater significance is the fact that a comparison of the $`p_t`$ dependence of the elliptic flow for 2, 4, and 6 AGeV provides further direct evidence that the sign of elliptic flow changes as the beam energy is increased from 2 to 6 AGeV. The essentially flat $`p_t`$ dependence shown for 4 AGeV is consistent with $`E_{tr}\approx 4`$ AGeV. To interpret these data, extensive calculations have been made to constrain the parameters of the EOS in the context of a newly developed relativistic Boltzmann-equation model (BEM). The phenomenological relativistic Landau theory of quasiparticles serves as a basis for the model, which has nucleon, pion, $`\mathrm{\Delta }`$ and $`N^{*}`$ resonance degrees of freedom as well as momentum dependent forces. Calculations were performed for both a soft ($`K=210`$ MeV) and a stiff ($`K=380`$ MeV) EOS for the same rapidity and impact parameter selections applied to the data. The elliptic flow excitation functions (calculated for free protons) are compared to the experimental data in Fig. 3. The dashed and solid curves represent the results for a stiff and a soft EOS, respectively. In addition to the data from the present experiment (filled stars), Fig. 3 also shows experimental results for Au + Au reactions at 1.15 A GeV (filled triangle) and 10.8 A GeV (filled circle).
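A hedged sketch of the dispersion correction in eq. (2) above, assuming Gaussian scatter of the estimated planes about the true one; the paper follows the prescription of its reference, and the Gaussian model and the factor-of-two statistics heuristic below are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 0.45                                  # assumed subevent plane dispersion, rad

d_sub = rng.normal(0, sigma, 200000) - rng.normal(0, sigma, 200000)
sub = np.mean(np.cos(2 * d_sub))              # measurable: <cos 2(Psi_a - Psi_b)>

d_full = rng.normal(0, sigma / np.sqrt(2), 200000)  # full event: ~2x statistics
corr = np.mean(np.cos(2 * d_full))            # <cos 2(DeltaPhi)> of eq. (2)

print(f"subevent <cos 2(Psi_a - Psi_b)> = {sub:.2f}")
print(f"correction factor <cos 2 DPhi>  = {corr:.2f}")   # v2 = v2_obs / corr
```

Larger plane dispersion (as at the higher beam energies here) drives the correction factor down toward the quoted 0.29, so the observed coefficients must be divided by increasingly small numbers.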
The experimental data are compatible with the excitation function predicted for a stiff EOS at beam energies $`1\lesssim E_{Beam}\lesssim 2`$ AGeV. By contrast, the data show good agreement with the predictions for a soft EOS for $`4\lesssim E_{Beam}\lesssim 11`$ AGeV. This pattern is consistent with a softening of the EOS in semicentral collisions of Au + Au at $`\sim 4`$ AGeV. The calculated densities at maximum compression for these energies are of the order of $`4\rho _0`$ for the stiff EOS. In summary, we have measured an elliptic flow excitation function for mid-central collisions of Au + Au at 2, 4, 6, and 8 AGeV. The excitation function exhibits a transition from negative to positive elliptic flow with $`E_{tr}\approx 4`$ AGeV. Detailed comparisons of these elliptic flow data have been made with calculated results from a relativistic Boltzmann-equation calculation. Within the context of a simple parametrization of the EOS, the calculations suggest an evolution from a stiff EOS ($`K\approx 380`$ MeV) at low beam energies ($`\lesssim 2`$ AGeV) to a softer EOS ($`K\approx 210`$ MeV) at higher beam energies ($`4\lesssim E_{Beam}\lesssim 11`$ AGeV). Such a softening of the EOS could result from a number of effects, the most intriguing of which is the possible onset of a nuclear phase change. On the other hand, it should be noted that transport models have failed to reproduce low energy “squeeze-out” data with a single incompressibility constant. Thus, additional experimental signatures as well as calculations based on other models will be necessary to test the detailed implications of these results. Nevertheless, the results presented here clearly show that elliptic flow measurements can provide an important constraint on the EOS of high density nuclear matter. This work was supported in part by the U.S. Department of Energy under grants DE-FG02-87ER40331.A008, DE-FG02-89ER40531, DE-FG02-88ER40408, DE-FG02-87ER40324, and contract DE-AC03-76SF00098; by the US National Science Foundation under Grants No. PHY-98-04672, PHY-9722653, PHY-96-05207, PHY-9601271, and PHY-9225096; and by the University of Auckland Research Committee, NZ/USA Cooperative Science Programme CSP 95/33. Feodor Lynen Fellow of the Alexander v. Humboldt Foundation.
no-problem/9903/astro-ph9903251.html
ar5iv
text
# A Possible Origin of Gamma Ray Bursts and Axionic Boson Stars

## Abstract

We point out a possible mechanism for generating gamma ray bursts: they are generated by a collision between an axionic boson star and a neutron star. The axionic boson star can dissipate its whole energy, $`\sim 10^{50}`$ erg, in the magnetized conducting medium of the neutron star. This dissipation arises only around the envelope of the neutron star, so that a fire ball with small baryon contamination is expected. We have roughly evaluated the rate of such collisions per year and per galaxy, which is consistent with observations under plausible assumptions. We also point out that cosmic rays with extremely high energy, $`\sim 10^{21}`$ eV, can be produced in similar collisions with neutron stars possessing strong magnetic fields $`\sim 10^{14}`$ G.

preprint: Nisho-99/2

One of the most fascinating problems in astrophysics is the mechanism leading to the gamma ray bursts (GRB). It is found that GRB can be understood well with the use of the fire ball models; a relativistic expanding shell originating from the source interacts with the intergalactic matter, and GRB is produced with the interactions. However, the mechanism generating the fire ball itself remains to be made clear. A common understanding is that a merger of two compact stars, e.g. neutron stars or black holes, is the origin of the fire ball. Especially a merger of two neutron stars is a plausible candidate. But up to now, there are no satisfactory mechanisms generating fire balls with little contamination of baryons. In this letter we wish to propose a mechanism of generating GRB. Axionic boson stars (ABSs), which are plausible candidates for the dark matter, collide with neutron stars and dissipate their energies under the strong magnetic fields of the neutron stars. As a result, the whole energy of the axionic boson star is radiated very rapidly. Since the dissipation occurs in the magnetosphere and around the envelope of the neutron star, the contamination problem of baryons can be cured. Furthermore, the rate of such collisions is estimated roughly as $`10^{-6}`$–$`10^{-7}`$ per year in a galaxy under plausible assumptions about the number of neutron stars in the galaxy, etc. This rate is consistent with observations. A significant problem in this mechanism is that the maximal energy released is restricted by the mass, $`M_a`$, of the ABS itself; the mass is given by $`10^{-5}M_{}/m_5\approx 10^{49}/m_5`$ erg, where $`m_5`$ denotes $`m_a/10^{-5}\mathrm{eV}`$ with mass $`m_a`$ of the axion. Since the mass of the axion is restricted observationally such that $`m_a>10^{-6}`$ eV, the maximal energy $`E_m`$ released in the collision must satisfy an upper bound $`E_m<10^{50}`$ erg. This fact conflicts with a recent observation of a GRB with an energy output up to $`10^{54}`$ erg, which is estimated with the assumption of no beaming. Since beaming can occur in our model because of the presence of the strong magnetic field, we expect that this energy problem may be cured. First we explain briefly the axion and the axionic boson star. The axion is the Goldstone boson associated with the Peccei-Quinn symmetry, which was introduced to solve naturally the strong CP problem. The symmetry is broken at the energy scale $`f_{PQ}>10^{10}`$ GeV. The resultant axion is described with a real scalar field, $`a(x)`$. In the early Universe some of the axions condense and form topological objects, i.e. strings and domain walls, although these decay below the temperature of the QCD phase transition.
It has been shown numerically that although these local objects decay, axion clumps are formed around the period of $`\sim 1`$ GeV owing to both the nonlinearity of the axion potential, leading to an attractive force among the axions, and the inhomogeneity of coherent axion oscillations on scales beyond the horizon. The axion clumps contract gravitationally to axionic boson stars after separating out from the cosmological expansion. They are solitons of coherent axions bound gravitationally. They are described as stable solutions of the axion field coupled with gravity. It has been shown that such solutions exist and that the critical mass of an ABS is given by $`m_{pl}^2/m_a`$; $`m_{pl}`$ is the Planck mass. Quite similar results on the critical mass have been obtained for the boson stars of a complex scalar field; the critical mass means the maximal mass of the boson stars, beyond which stable solutions do not exist. This is the same notion as the critical mass of the neutron stars. Thus it is reasonable to suppose that, as an order of magnitude, a typical mass of the ABSs present in the Universe is given roughly by the critical mass, $`m_{pl}^2/m_a\sim 10^{-5}M_{}/m_5\sim 10^{49}/m_5`$ erg. The solution can be approximated in the explicit form $$a=f_{PQ}a_0\mathrm{sin}(m_at)\mathrm{exp}(-r/R_a),$$ (1) where $`t`$ ($`r`$) is the time (radial) coordinate and $`f_{PQ}`$ is the decay constant of the axion. The value of $`f_{PQ}`$ is constrained from cosmological and astrophysical considerations such that $`10^{10}`$ GeV $`<f_{PQ}<`$ $`10^{13}`$ GeV. $`R_a`$ represents the radius of the ABS, which has been obtained numerically in terms of the mass $`M_a`$ of the ABS, $$R_a=6.4\frac{m_{pl}^2}{m_a^2M_a},$$ (2) in the limit of infinitely large radius. In the similar limit the amplitude $`a_0`$ of the axion field is given by $$a_0=1.73\times 10^6\frac{(10\text{cm})^2}{R_a^2}\frac{10^{-5}\text{eV}}{m_a}.$$ (3) Numerical solutions obtained without taking the limit possess other oscillation modes, but their amplitudes are much smaller than that of eq. (3). Similarly, almost the same relation as in eq. (2) between $`R_a`$ and $`M_a`$ can be obtained without taking the limit. Thus we use these formulae in a rough evaluation of the energy released per unit time in the collision between the ABS and the neutron star. There are several physical parameters poorly known about the neutron stars, so that only order-of-magnitude evaluations are meaningful. We note that ABSs oscillate with the frequency $`m_a/2\pi =2.4\times 10^9m_5`$ Hz and that the radius $`R_a`$ of the ABS is given by $`16m_5^{-2}M_5^{-1}`$ cm, where $`M_5`$ denotes $`M_a/10^{-5}M_{}`$. Up to now these solutions for ABSs have been found in the axion field equation only by approximating the cosine potential of the axion with a mass term. It might seem that this treatment is inconsistent in the case of the large amplitude $`a_0\sim 10^6/m_5`$. But we can see that the effect of the potential term, $`m_a^2f_{PQ}^2\mathrm{cos}(a/f_{PQ})`$, is negligible compared with the other terms, e.g. $`(\partial a)^2\sim R_a^{-2}f_{PQ}^2a_0^2\sim m_a^2f_{PQ}^2a_0^2`$, in the case of the large amplitude $`a_0\gg 1`$ ($`R_a\sim m_a^{-1}`$ in the case of our concern). Thus the solution in eq. (1) may be used as an approximate one even in the equation including the cosine potential of the axion. We now proceed to explain how the ABS generates an electric field in a magnetic field of the neutron star and consequently dissipates its energy in a conducting medium.
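An order-of-magnitude check of the numbers quoted above, evaluating eq. (2), the oscillation frequency $`m_a/2\pi `$, and the mass-energy of the ABS for $`m_5=M_5=1`$. This is pure unit bookkeeping with no input beyond the text.

```python
import math

HBARC_CM = 1.97e-14        # GeV cm
HBAR_S = 6.582e-25         # GeV s
ERG_PER_GEV = 1.602e-3
M_PL = 1.22e19             # Planck mass, GeV
M_SUN = 1.116e57           # solar mass, GeV

m_a = 1.0e-14              # axion mass: 1e-5 eV, in GeV
M_a = 1.0e-5 * M_SUN       # ABS mass, GeV

R_a = 6.4 * M_PL**2 / (m_a**2 * M_a) * HBARC_CM    # eq. (2), in cm
nu = m_a / (2.0 * math.pi * HBAR_S)                # oscillation frequency, Hz
E = M_a * ERG_PER_GEV                              # mass-energy, erg

print(f"R_a ~ {R_a:.0f} cm")       # ~17 cm, matching the quoted 16 cm
print(f"nu  ~ {nu:.1e} Hz")        # ~2.4e9 Hz
print(f"E   ~ {E:.1e} erg")        # ~2e49 erg, of order 1e49-1e50 erg
```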
The point is that the axion couples with the electromagnetic fields in the following way, $$L_{a\gamma \gamma }=c\alpha a\vec{E}\cdot \vec{B}/f_{PQ}\pi $$ (4) with $`\alpha =1/137`$, where $`\vec{E}`$ and $`\vec{B}`$ are the electric and magnetic fields, respectively. The value of $`c`$ depends on the axion models; typically it is of the order of one. It follows from this interaction that the Gauss law is given by $$\vec{\nabla }\cdot \vec{E}=-c\alpha \vec{\nabla }\cdot (a\vec{B})/f_{PQ}\pi +\text{“matter”}$$ (5) where the last term “matter” denotes the contributions from ordinary matter. The first term on the right hand side represents a contribution from the axion. Thus it turns out that the axion field has an electric charge density, $`\rho _a=-c\alpha \vec{\nabla }\cdot (a\vec{B})/f_{PQ}\pi `$, under the magnetic field $`\vec{B}`$. Accordingly, the electric field $`\vec{E}_a`$ associated with this axion charge is produced such that $`\vec{E}_a=-c\alpha a\vec{B}/f_{PQ}\pi `$. Note that this field is quite strong around the surface of a neutron star with a magnetic field $`\sim 10^{12}`$ G; $`E_a\sim 10^{18}(B_{12}/m_5)\mathrm{eV}\mathrm{cm}^{-1}`$ with $`B_{12}=B/10^{12}`$ G. Note that both $`\rho _a`$ and $`E_a`$ oscillate with the frequency given by the mass of the axion in the ABS, since the field $`a`$ itself oscillates. The typical spatial extension of the electric field is about $`10`$ cm, while the frequency of the oscillation is $`\sim 10^9`$ Hz for the ABS of our concern. Thus the electric field cannot be screened by charged particles present around the neutron stars. Obviously, this field induces an oscillating electric current $`J_m=\sigma E_a`$ in magnetized conducting media with electric conductivity $`\sigma `$. In addition to the current $`J_m`$ carried by ordinary matter, e.g. electrons, there appears an electric current, $`J_a`$, associated with the oscillating charge $`\rho _a`$, owing to current conservation ($`\partial _0\rho _a+\vec{\nabla }\cdot \vec{J}_a=0`$). This is given by $`\vec{J}_a=c\alpha \partial _ta\vec{B}/f_{PQ}\pi `$. This electric current is present even in nonconducting media like the vacuum, as long as the ABS is exposed to the magnetic field. On the other hand, the current $`J_m`$ is present only in magnetized conducting media. Since $`\partial _ta\sim m_aa`$ in the ABS, the ratio $`J_m/J_a`$ is given by $`\sigma /m_a`$. Hence, $`J_a`$ is dominant in media with $`\sigma <10^{12}/\mathrm{s}`$, while $`J_m`$ is dominant in media with $`\sigma >10^{12}/\mathrm{s}`$; note that $`10^9/\mathrm{s}<m_a<10^{12}/\mathrm{s}`$, corresponding to the above constraint on $`f_{PQ}`$. The electric conductivities in the neutron stars are large enough for $`J_m`$ to be dominant, while the magnetospheres of the neutron stars may have small conductivities so that $`J_a`$ is dominant. The envelopes of the neutron stars still have much larger conductivities, so that $`J_m`$ is dominant. Therefore, we expect that as the ABS approaches the neutron star, both the magnetic field and the conductivity become large and hence the rate of the dissipation of its energy increases; the rate is proportional to $`\sigma E_a^2`$. We will see below that the ABS dissipates its whole energy quite rapidly inside the neutron star because of the extremely high electric conductivity. Let us evaluate the amount of the energy dissipated in the magnetized conducting medium such as the magnetosphere and the inside of the neutron star.
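Before evaluating the total dissipation, a quick numerical check of the quoted field strength. Taking the model-dependent coefficient $`c=1`$ (an assumption) and $`a_0\approx 1.7\times 10^6`$ from eq. (3):

```python
import math

# E_a = (c*alpha/pi) * a_0 * B in Gaussian units, where E and B carry the
# same units; a_0 is taken from eq. (3) with R_a = 10 cm and m_5 = 1.
alpha = 1.0 / 137.0
a0 = 1.73e6
B = 1.0e12                                    # gauss, i.e. B_12 = 1

E_stat = (alpha / math.pi) * a0 * B           # statvolt/cm
E_volt = E_stat * 300.0                       # 1 statvolt/cm ~ 300 V/cm
print(f"E_a ~ {E_volt:.1e} V/cm")             # ~1e18, i.e. ~1e18 eV/cm per unit charge
```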
Denoting the average electric conductivity of the medium by $`\sigma `$ and assuming the Ohm law, we find that the axion star dissipates an energy $`W`$ per unit time, $$W=\int _{ABS}\sigma E_a^2d^3x=4c^2\times 10^{54}\text{erg/s}\frac{\sigma }{10^{26}/s}\frac{M}{10^{-4}M_{}}\frac{B^2}{(10^8G)^2}$$ (6) where the integration has been performed over the volume of the ABS and we have used the explicit formulae for $`a_0`$ and $`R_a`$ given above. The value of the conductivity $`\sigma `$ is taken as a typical value for the inside of the neutron star, although its precise value depends on the temperature, density, composition, etc. of the neutron star. When we consider a magnetosphere with very low conductivity, $`\sigma <10^9/\mathrm{s}`$, we should replace the conductivity, $`\sigma `$, with the mass $`m_a`$. As we can see from the equation for $`W`$, the dissipation proceeds very rapidly in the neutron star even with a relatively low magnetic field $`B\sim 10^8`$ G. Since real neutron stars, even old ones, must possess magnetic fields stronger than $`10^8`$ G, an ABS with mass $`10^{-4}M_{}\approx 10^{50}`$ erg evaporates within a time less than $`10^{-4}`$ second in the core of the neutron star. It means that the real dissipation arises only near the envelope of the neutron star; the ABS cannot enter the core of the neutron star, since its typical velocity in a halo is only $`\sim 10^{-3}\times `$ the light velocity, so that it evaporates before penetrating deeply. Therefore, the energy released in the collision between the ABS and the neutron star is deposited around the surface of the neutron star. This implies that a fire ball, which would be produced in the collision, may involve only a small fraction of baryons. We expect that this is a mechanism for generating the fire ball with little baryon contamination and hence with a large Lorentz factor. Although we have neglected the dissipation in the region of the magnetosphere, even if that dissipation were large enough for the whole energy of the ABS to be dissipated in this region, the baryon contamination in the fire ball would be even smaller than in the case of dissipation arising mainly near the envelope. The amount of the energy released in this mechanism is at most $`\sim 10^{50}`$ erg, which is the mass, $`m_{pl}^2/m_a`$, of the ABS with the choice of axion mass $`m_a=10^{-6}`$ eV; an axion with a mass smaller than $`m_a=10^{-6}`$ eV is excluded cosmologically. Thus the maximal energy released is less than the energy observed in some gamma ray bursts. However, in our mechanism generating the clean fire ball, we can expect beaming of the gamma ray emission because of the presence of the strong magnetic field of the neutron star. Therefore, the observations do not necessarily contradict our mechanism. Here we wish to point out that since the electric field generated under the magnetic field of the neutron star is quite strong, $`E_a\sim 10^{18}(B_{12}/m_5)`$ eV $`\mathrm{cm}^{-1}`$, it is possible for the field to produce cosmic rays with extremely high energies such as $`10^{21}`$ eV. Note that in a period $`\sim 10^{-9}/m_5`$ sec of the oscillation of the field, a charged particle accelerated by the field can gain an energy $`E_a\times 10^{-9}\text{sec}\times \text{light velocity}`$. Thus, an energy gain of $`10^{21}`$ eV is realized under a strong magnetic field $`\sim 10^{14}`$ G. Hence we speculate that the mechanism generating GRB is the same as the one generating extremely high energy cosmic rays, although the neutron star needs to possess a stronger magnetic field for the generation of such cosmic rays than ordinary neutron stars do.
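A direct numerical reading of eq. (6) (with $`c=1`$): the dissipation power and the implied evaporation time for an ABS of mass-energy $`\sim 10^{50}`$ erg, at the fiducial core conductivity and two magnetic field values.

```python
def W(sigma=1e26, M_ratio=1.0, B=1e8, c2=1.0):
    """Eq. (6) in erg/s: sigma in 1/s, M_ratio = M/(1e-4 M_sun), B in gauss."""
    return 4.0 * c2 * 1e54 * (sigma / 1e26) * M_ratio * (B / 1e8)**2

E_abs = 1e50                                   # erg, ABS mass-energy
for B in (1e8, 1e12):
    print(f"B = {B:.0e} G: W ~ {W(B=B):.1e} erg/s, t ~ {E_abs / W(B=B):.1e} s")
```

Even at $`B=10^8`$ G the evaporation time comes out below $`10^{-4}`$ s, consistent with the statement above; the collision-rate estimate of eq. (7) in the next paragraph then controls how often such events occur.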
Finally we evaluate the rate of the collisions between the neutron stars and ABSs. We assume that the number of the neutron stars in a galaxy is $`10^8`$–$`10^9`$, just as expected in our galaxy. We also assume that the halo of the galaxy is composed mainly of ABSs whose typical velocity, $`v`$, is supposed to be $`3\times 10^7`$ cm/s. Since the local density of the halo in our galaxy is given by $`0.5\times 10^{-24}`$ g $`\mathrm{cm}^{-3}`$, the rate, $`R_c`$, of the collision per year and per galaxy is calculated as follows, $$R_c=n_a\times N_{ns}\times Sv\times 1\text{year}\approx 10^{-7}m_5\text{–}10^{-6}m_5\text{ per year}$$ (7) with $`n_a=0.5\times 10^{-24}\text{g cm}^{-3}/M_a`$ being the number density of ABSs in the galaxy and $`N_{ns}=10^8`$–$`10^9`$ being the number of the neutron stars. The cross section, $`S`$, of the collision has been calculated in the following way. Namely, we suppose that the collision occurs when these two objects approach each other such that the kinetic energy, $`v^2M_a/2`$, of the ABS is equal to the potential energy $`GM_aM_{}/L_c`$ of the ABS around the neutron star with mass $`M_{}`$; $`L_c\approx 10^{11}`$ cm. Thus $`S=\pi L_c^2`$. This means that the ABS is trapped by the neutron star when they approach within a distance less than $`L_c`$. But this estimation of $`L_c`$ is too naive. In a real situation of trapping the ABS, we need to take account of the dissipation of both the kinetic energy and the angular momentum of the ABS in the region of the magnetosphere or in the outer region with the magnetic field. In these regions the ABS interacts with the magnetic field as mentioned above and loses its energy or angular momentum. Therefore, it turns out that the rate $`R_c`$ depends on several poorly known parameters, e.g. the electric conductivity of the surroundings of the neutron star. But we may expect from the naive estimation that the rate of the collisions is not necessarily inconsistent with the observations of GRB. There must be various ways of the ABS being trapped; the ABS may move around the neutron star several times before colliding with it, or it may collide directly with the neutron star. Thus we expect that there are several types of pulses of GRB. Actually, various shapes of the pulses have been observed. These variations might originate from the ways of the ABS being trapped. In summary, we have proposed a possible origin of GRB: the collision between an ABS and a neutron star. In the collision the ABS evaporates rapidly near the surface of the neutron star and maximally the energy $`\sim 10^{50}`$ erg can be released. We can expect quite a small contamination of baryons in a fire ball produced in this mechanism. We have also evaluated the rate of the collisions in a typical galaxy and have found that it is roughly $`10^{-7}m_5`$–$`10^{-6}m_5`$ per year, although there are several ambiguities in the evaluation. Since there are various ways of the collisions depending on the collision parameters, the variations of the shapes of GRB observed are possibly caused by these ways of the ABS colliding with the neutron star. Furthermore, we have pointed out that cosmic rays with extremely high energy, $`\sim 10^{21}`$ eV, can be produced in the collisions between ABSs and the neutron stars with strong magnetic fields $`\sim 10^{14}`$ G. The author wishes to express his thanks for the hospitality at Tanashi KEK.
no-problem/9903/astro-ph9903167.html
ar5iv
text
# Explanations of pulsar velocities

## I Introduction

The proper motions of pulsars present an intriguing astrophysical puzzle. The measured velocities of pulsars exceed those of the ordinary stars in the galaxy by at least an order of magnitude. The data suggest that neutron stars receive a powerful “kick” at birth. Whatever the cause of the kick, the same mechanism may also explain the rotations of pulsars under some conditions. The origin of the birth velocities is unclear. Born in a supernova explosion, a pulsar may receive a substantial kick due to the asymmetries in the collapse, explosion, and the neutrino emission affected by convection. Evolution of close binary systems may also produce rapidly moving pulsars. It was also suggested that the pulsar may be accelerated during the first few months after the supernova explosion by its electromagnetic radiation, the asymmetry resulting from the magnetic dipole moment being inclined to the rotation axis and offset from the center of the star. Most of these mechanisms, however, have difficulties explaining the magnitudes of pulsar spatial velocities in excess of 100 km/s. Although the average pulsar velocity is only a factor of a few higher, there is a substantial population of pulsars which move faster than 700 km/s, some as fast as 1000 km/s. Neutrinos carry away most of the energy, $`\sim 10^{53}`$ erg, of the supernova explosion. A 1% asymmetry in the distribution of the neutrino momenta is sufficient to explain the pulsar “kicks”. A strong magnetic field inside the neutron star could set the preferred direction. However, the neutrino interactions with the magnetic field are hopelessly weak. Ordinary electroweak processes cannot account for the necessary anisotropy of the neutrino emission. The possibility of a cumulative build-up of the asymmetry due to some parity-violating scattering effects has also been considered. However, in statistical equilibrium, the asymmetry does not build up even if the scattering amplitudes are asymmetric. Although some net asymmetry develops because of the departure from equilibrium, it is too small to explain the pulsar velocities for realistic values of the magnetic field inside the neutron star. There is a class of mechanisms, however, that can explain the birth velocities of pulsars as long as the magnetic field inside a neutron star is $`10^{14}`$–$`10^{15}`$ G. These mechanisms have some common features. First, the conversion of some neutrino $`\nu `$ into a different type of neutrino, $`\nu ^{\prime }`$, occurs when one of these neutrinos is free-streaming while the other one is not. The free-streaming component is out of equilibrium with the rest of the star, which prevents the wash-out of the asymmetry. Second, the position of the transition point is affected by the magnetic field. I will review two possible explanations, which do not require any exotic neutrino interactions and rely only on established neutrino properties, namely matter-enhanced neutrino oscillations. The additional assumptions about the existence of sterile neutrinos and the neutrino masses appear plausible from the point of view of particle physics.

## II Pulsar kicks from neutrino oscillations

As neutrinos pass through matter, they experience an effective potential $`V(\nu _s)`$ $`=`$ $`0`$ (1) $`V(\nu _e)`$ $`=`$ $`-V(\overline{\nu }_e)=V_0(3Y_e-1+4Y_{\nu _e})`$ (2) $`V(\nu _{\mu ,\tau })`$ $`=`$ $`-V(\overline{\nu }_{\mu ,\tau })=V_0(Y_e-1+2Y_{\nu _e})+{\displaystyle \frac{eG_F}{\sqrt{2}}}\left({\displaystyle \frac{3N_e}{\pi ^4}}\right)^{1/3}{\displaystyle \frac{\vec{k}\cdot \vec{B}}{|\vec{k}|}}`$ (3) where $`Y_e`$ ($`Y_{\nu _e}`$) is the ratio of the number density of electrons (neutrinos) to that of neutrons, $`\vec{B}`$ is the magnetic field, $`\vec{k}`$ is the neutrino momentum, and $`V_0=10\mathrm{eV}(\rho /10^{14}\mathrm{g}\mathrm{cm}^{-3})`$. The magnetic field dependent term in equation (3) arises from a one-loop finite-density contribution to the self-energy of a neutrino propagating in a magnetized medium. An excellent review of the neutrino “refraction” in magnetized medium is found in Ref. . The condition for a resonant oscillation $`\nu _i\leftrightarrow \nu _j`$ is $$\frac{m_i^2}{2k}\mathrm{cos}2\theta _{ij}+V(\nu _i)=\frac{m_j^2}{2k}\mathrm{cos}2\theta _{ij}+V(\nu _j)$$ (4) where $`\nu _{i,j}`$ can be either a neutrino or an anti-neutrino.
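A back-of-envelope check of the normalization $`V_0\sim 10`$ eV at $`\rho =10^{14}\mathrm{g}\mathrm{cm}^{-3}`$: the potential scale is $`(G_F/\sqrt{2})n_n`$ with $`n_n`$ the neutron number density, up to the O(1) composition factors in eqs. (2)-(3). Pure unit bookkeeping in natural units.

```python
import math

G_F = 1.166e-5            # GeV^-2
HBARC_CM = 1.97e-14       # GeV cm
m_n_g = 1.675e-24         # neutron mass in grams

rho = 1e14                                    # g/cm^3
n_cm3 = rho / m_n_g                           # neutrons per cm^3
n_nat = n_cm3 * HBARC_CM**3                   # number density in GeV^3
V0 = G_F / math.sqrt(2) * n_nat               # GeV
print(f"V_0 scale ~ {V0 * 1e9:.0f} eV")       # a few eV, of order 10 eV
```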
## II Pulsar kicks from neutrino oscillations As neutrinos pass through matter, they experience an effective potential $`V(\nu _s)`$ $`=`$ $`0`$ (1) $`V(\nu _e)`$ $`=`$ $`V(\overline{\nu }_e)=V_0(3Y_e1+4Y_{\nu _e})`$ (2) $`V(\nu _{\mu ,\tau })`$ $`=`$ $`V(\overline{\nu }_{\mu ,\tau })=V_0(Y_e1+2Y_{\nu _e})+{\displaystyle \frac{eG__F}{\sqrt{2}}}\left({\displaystyle \frac{3N_e}{\pi ^4}}\right)^{1/3}{\displaystyle \frac{\stackrel{}{k}\stackrel{}{B}}{|\stackrel{}{k}|}}`$ (3) where $`Y_e`$ ($`Y_{\nu _e}`$) is the ratio of the number density of electrons (neutrinos) to that of neutrons, $`\stackrel{}{B}`$ is the magnetic field, $`\stackrel{}{k}`$ is the neutrino momentum, $`V_0=10\mathrm{eV}(\rho /10^{14}\mathrm{g}\mathrm{cm}^3)`$. The magnetic field dependent term in equation (3) arises from a one-loop finite-density contribution to the self-energy of a neutrino propagating in a magnetized medium. An excellent review of the neutrino “refraction” in magnetized medium is found in Ref. . The condition for resonant oscillation $`\nu _i\nu _j`$ is $$\frac{m_i^2}{2k}cos\mathrm{\hspace{0.17em}2}\theta _{ij}+V(\nu _i)=\frac{m_j^2}{2k}cos\mathrm{\hspace{0.17em}2}\theta _{ij}+V(\nu _j)$$ (4) where $`\nu _{i,j}`$ can be either a neutrino or an anti-neutrino. The neutron star can receive a kick if the following two conditions are satisfied: (1) the adiabatic<sup>*</sup><sup>*</sup>*Non-adiabatic oscillations are discussed in Ref. oscillation $`\nu _i\nu _j`$ occurs at a point inside the $`i`$-neutrinosphere but outside the $`j`$-neutrinosphere; and (2) the difference $`[V(\nu _i)V(\nu _j)]`$ contains a piece that depends on the relative orientation of the magnetic field $`\stackrel{}{B}`$ and the momentum of the outgoing neutrinos, $`\stackrel{}{k}`$. If the first condition is satisfied, the effective neutrinosphere of $`\nu _j`$ coincides with the surface formed by the points of resonance. The second condition ensures that this surface (a “resonance-sphere”) is deformed by the magnetic field in such a way that it will be further from the center of the star when $`(\stackrel{}{k}\stackrel{}{B})>0`$, and nearer when $`(\stackrel{}{k}\stackrel{}{B})<0`$. The average momentum carried away by the neutrinos depends on the temperature of the region from which they exit. The deeper inside the star, the higher is the temperature during the neutrino cooling phase. Therefore, neutrinos coming out in different directions carry momenta which depend on the relative orientation of $`\stackrel{}{k}`$ and $`\stackrel{}{B}`$. This causes the asymmetry in the momentum distribution. An $`1\%`$ asymmetry is sufficient to generate birth velocities of pulsars consistent with observation. Let us use two different models for the neutrino emission to calculate the kick from the active-sterile and the active neutrinos, respectively. As shown in Ref. , these two models are in good agreement. ## III Oscillations into sterile neutrinos Since the sterile neutrinos have a zero-radius neutrinosphere, $`\nu _s\overline{\nu }_{\mu ,\tau }`$ oscillations can be the cause of the pulsar motions if $`m(\nu _s)>m(\nu _{\mu ,\tau })`$. If, on the other hand, $`m(\nu _s)<m(\nu _{\mu ,\tau })`$, $`\nu _s\nu _{\mu ,\tau }`$ oscillations can play the same role. In the presence of the magnetic field, the condition (4) is satisfied at different distances $`r`$ from the center (Fig. 1), depending on the value of the $`(\stackrel{}{k}\stackrel{}{B})`$ term in (4). 
The surface of the resonance is, therefore, $$r(\varphi )=r_0+\delta \mathrm{cos}\varphi ,$$ (5) where $`\mathrm{cos}\varphi =(\vec{k}\cdot \vec{B})/(kB)`$ and $`\delta `$ is determined by the equation $`(dN_n(r)/dr)\delta \simeq e\left(3N_e/\pi ^4\right)^{1/3}B`$. This yields $$\delta =\frac{e\mu _e}{\pi ^2}B/\frac{dN_n(r)}{dr},$$ (6) where $`\mu _e=(3\pi ^2N_e)^{1/3}`$ is the chemical potential of the degenerate (relativistic) electron gas. Assuming a black-body radiation luminosity $`\propto T^4`$ for the effective neutrinosphere, the asymmetry in the momentum distribution is $$\frac{\mathrm{\Delta }k}{k}=\frac{4e}{3\pi ^2}\left(\frac{\mu _e}{T}\frac{dT}{dN_n}\right)B,$$ (7) To calculate the derivative in (7), we use the relation between the density and the temperature of a non-relativistic Fermi gas. Finally, $$\frac{\mathrm{\Delta }k}{k}=\frac{4e\sqrt{2}}{\pi ^2}\frac{\mu _e\mu _n^{1/2}}{m_n^{3/2}T^2}B=0.01\frac{B}{3\times 10^{15}\mathrm{G}}$$ (8) if the neutrino oscillations take place in the core of the neutron star, at a density of order $`10^{14}\mathrm{g}\mathrm{cm}^{-3}`$. The neutrino oscillations take place at such a high density if one of the neutrinos has a mass in the keV range, while the other one is much lighter. The mixing angle can be very small, because the adiabaticity condition is satisfied if $$l_{osc}\equiv \left(\frac{1}{2\pi }\frac{\mathrm{\Delta }m^2}{2k}\mathrm{sin}2\theta \right)^{-1}\approx \frac{10^{-2}\mathrm{cm}}{\mathrm{sin}2\theta }$$ (9) is smaller than the typical scale of the density variations. Thus the oscillations remain adiabatic as long as $`\mathrm{sin}^22\theta >10^{-8}`$.

## IV Oscillations of active neutrinos

The active neutrino oscillations can also explain the pulsar kick. The magnitude of the kick can be calculated using a model for neutrino transfer used in the previous section. That is, one can assume that the neutrinos are emitted from a “hard” neutrinosphere with temperature $`T(r)`$ and that their energies are described by the Stefan-Boltzmann law. Alternatively, we can use the Eddington model for the atmosphere, which was used by Schinder and Shapiro to describe the emission of a single neutrino species. One can generalize it to include several types of neutrinos. (A recent attempt to use the Eddington model for the neutrino transfer failed to produce a correct result because the neutrino absorption $`\nu _en\to e^{-}p^{+}`$ was neglected, and also because the different neutrino opacities were assumed to be equal to each other. The assumption that the effect of neutrino oscillations can be accounted for in a simplistic model with one neutrino species and a deformed core-atmosphere boundary is also incorrect, because the temperature profile is determined by the emission of six neutrino types, five of which are emitted isotropically. The neutrinos of the sixth flavor, which have an anisotropic momentum distribution, cause a negligible (down by at least a factor of 6) asymmetry in the temperature profile.) When the neutrino absorption is included, the Eddington model gives the same result for the kick as the model with “hard neutrinospheres”.
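Before moving on, a numerical reading of eq. (9) from the preceding section, for a keV-mass sterile neutrino. The values of $`\mathrm{\Delta }m^2`$, $`k`$, and the density scale height are illustrative assumptions consistent with the text.

```python
import math

HBARC = 1.97e-5           # eV cm
dm2 = 1.0e6               # eV^2: Delta m^2 for m_s ~ 1 keV (assumed)
k = 2.0e7                 # eV: typical neutrino momentum ~20 MeV (assumed)
h = 1.0e5 / HBARC         # density scale height ~1 km, in eV^-1 (assumed)

l_osc_sin = 2 * math.pi * (2 * k / dm2) * HBARC    # l_osc * sin(2 theta), cm
sin2_min = (2 * k / dm2) / h                       # adiabaticity threshold

print(f"l_osc ~ {l_osc_sin:.1e} cm / sin(2 theta)")        # ~1e-2 cm
print(f"adiabatic for sin^2(2 theta) > ~{sin2_min:.0e}")   # ~1e-8
```

Both outputs reproduce the numbers quoted around eq. (9), showing why even a tiny mixing angle keeps the resonance adiabatic.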
In the diffusion approximation, the distribution functions $`f`$ are taken in the form: $$f_{\nu _i}\simeq f_{\overline{\nu }_i}\simeq f^{eq}+\frac{\xi }{\mathrm{\Lambda }_i}\frac{\partial f^{eq}}{\partial m},$$ (10) where $`f^{eq}`$ is the distribution function in equilibrium, $`\mathrm{\Lambda }_i`$ denote the respective opacities, $`m`$ is the column mass density, $`m=\int \rho dx`$, $`\xi =\mathrm{cos}\alpha `$, and $`\alpha `$ is the angle between the neutrino velocity and the normal to the surface. At the surface, one imposes the same boundary condition for all the distribution functions, namely $$f_{\nu _i}(m,\xi )=\{\begin{array}{cc}0,\hfill & \mathrm{for}\xi <0,\hfill \\ 2f^{eq},\hfill & \mathrm{for}\xi >0.\hfill \end{array}$$ (11) However, the differences in $`\mathrm{\Lambda }_i`$ produce unequal distributions for the different neutrino types. Generalizing the discussion of Refs. to include six flavors, three neutrinos and three antineutrinos, one can write the energy flux as $$F=2\pi \int _0^{\mathrm{}}E^3dE\int _{-1}^1\xi d\xi \underset{i=1}{\overset{3}{\sum }}(f_{\nu _i}+f_{\overline{\nu }_i}),$$ (12) We will assume that $`\mathrm{\Lambda }_i=\mathrm{\Lambda }_i^{(0)}(E^2/E_0^2)`$. We use the expressions for $`f_{\nu _i}`$ from equation (10). Changing the order of differentiation with respect to $`m`$ and integration over $`E`$ and $`\xi `$, and using the fact that $`f^{eq}`$ is isotropic, we arrive at a result similar to that of Ref. : $$F=\frac{2\pi ^3}{9}E_0^2\left[\underset{i=1}{\overset{3}{\sum }}\frac{2}{\mathrm{\Lambda }_i^{(0)}}\right]\frac{\partial T^2}{\partial m}.$$ (13) The basic assumption of the model is that the flux $`F`$ is conserved. In other words, the neutrino absorptions $`\nu _en\to e^{-}p^{+}`$ are neglected. Since the sum in brackets, as well as the flux $`F`$, are treated as constants with respect to $`m`$, one can solve for $`T^2`$: $$T^2(m)=\frac{9}{2\pi ^3}E_0^{-2}\left[\underset{i=1}{\overset{3}{\sum }}\frac{2}{\mathrm{\Lambda }_i^{(0)}}\right]^{-1}Fm+\left(\frac{30}{7\pi ^5}F\right)^{1/2}$$ (14) Swapping the two flavors in equation (14) leaves the temperature unchanged in the Eddington approximation. Hence, neutrino oscillations do not alter the temperature profile in this approximation. We will now include the absorptions of neutrinos. Some of the electron neutrinos are absorbed on their passage through the atmosphere thanks to the charged-current process $$\nu _en\to e^{-}p^{+}.$$ (15) The cross section for this reaction is $`\sigma =1.8G_F^2E_\nu ^2`$, where $`E_\nu `$ is the neutrino energy. The total momentum transferred to the neutron star by the passing neutrinos depends on the energy. Both numerical and analytical calculations show that the muon and tau neutrinos leaving the core have much higher mean energies than the electron neutrinos. Below the point of the MSW resonance the electron neutrinos have mean energies $`\sim 10`$ MeV, while the muon and tau neutrinos have energies $`\sim 25`$ MeV. The origin of the kick in this description is that the neutrinos spend more time as energetic electron neutrinos on one side of the star than on the other side, hence creating the asymmetry. Although the temperature profile remains unchanged in the Eddington approximation, the unequal numbers of neutrino absorptions push the star, so that the total momentum is conserved. Below the resonance $`E_{\nu _e}<E_{\nu _{\tau ,\mu }}`$. Above the resonance, this relation is inverted. The energy deposition into the nuclear matter depends on the distance the electron neutrino has traveled with a higher energy.
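Eq. (14) above makes the flavor-swap statement easy to verify numerically. A minimal sketch, with arbitrary illustrative opacities and flux:

```python
import numpy as np

def T2(m, F, lambdas, E0=1.0):
    """Temperature profile of eq. (14) in the Eddington approximation."""
    s = sum(2.0 / lam for lam in lambdas)              # bracket in eq. (13)
    return (9.0 / (2 * np.pi**3)) / E0**2 / s * F * m \
           + np.sqrt(30.0 / (7 * np.pi**5) * F)

m = np.linspace(0.0, 10.0, 5)
F = 1.0
print(T2(m, F, lambdas=(1.0, 2.0, 3.0)))
print(T2(m, F, lambdas=(3.0, 2.0, 1.0)))   # identical: profile blind to the swap
```

The two printed profiles are identical because only the sum of inverse opacities enters, which is why oscillations leave the temperature profile unchanged and the kick must instead come from the absorption asymmetry discussed next.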
This distance is affected by the direction of the magnetic field relative to the neutrino momentum. We assume that the resonant conversion $`\nu _e\rightarrow \nu _\tau `$ takes place at the point $`r=r_0+\delta (\varphi );\delta (\varphi )=\delta _0\mathrm{cos}\varphi `$. The position of the resonance depends on the magnetic field $`B`$ inside the star: $$\delta _0=\frac{e\mu _eB}{2\pi ^2}/\frac{dN_e}{dr},$$ (16) where $`N_e=Y_eN_n`$ is the electron density and $`\mu _e`$ is the electron chemical potential. Below the resonance the $`\tau `$ neutrinos are more energetic than the electron neutrinos. The oscillations exchange the neutrino flavors, so that above the resonance the electron neutrinos are more energetic than the $`\tau `$ neutrinos. The number of neutrino absorptions in the layer of thickness $`2\delta (\varphi )`$ around $`r_0`$ depends on the angle $`\varphi `$ between the neutrino momentum and the direction of the magnetic field. Each occurrence of the neutrino absorption transfers the momentum $`E_{\nu _e}`$ to the star. The difference in the numbers of collisions per electron neutrino between the directions $`\varphi `$ and $`\pi +\varphi `$ is $`\mathrm{\Delta }k_e/E_{\nu _e}`$ $`=`$ $`2\delta (\varphi )N_n[\sigma (E_1)-\sigma (E_2)]`$ (17) $`=`$ $`1.8G_F^2[E_1^2-E_2^2]{\displaystyle \frac{\mu _e}{Y_e}}{\displaystyle \frac{eB}{\pi ^2}}h_{N_e}\mathrm{cos}\varphi ,`$ (18) where $`h_{N_e}=[d(\mathrm{ln}N_e)/dr]^{-1}`$. We use $`Y_e\simeq 0.1`$, $`E_1\simeq 25`$ MeV, $`E_2\simeq 10`$ MeV, $`\mu _e\simeq 50`$ MeV, and $`h_{N_e}\simeq 6`$ km. After integrating over angles and taking into account that only one neutrino species undergoes the conversion, we obtain the final result for the asymmetry in the momentum deposited by the neutrinos: $$\frac{\mathrm{\Delta }k}{k}=0.01\frac{B}{2\times 10^{14}\mathrm{G}},$$ (19) which agrees with the estimates that use a different model for the neutrino emission. (We note in passing that we estimated the kick in Refs. assuming $`\mu _e\simeq \mathrm{const}`$; a different approximation, $`Y_e\simeq \mathrm{const}`$, gives a somewhat higher prediction for the magnitude of the magnetic field .) Neutrinos also lose energy by scattering off the electrons. Since the electrons are degenerate, the final-state electron must have energy greater than $`\mu _e`$. Therefore, electron neutrinos lose from $`0.2`$ to $`0.5`$ of their energy per collision in the neutrino-electron scattering. However, since $`N_e\ll N_n`$, this process can be neglected. One may worry whether the asymmetric absorption can produce some back-reaction and change the temperature distribution inside the star, altering our result (19). If such an effect exists, it is beyond the scope of the Eddington approximation, as is clear from equation (14). The only effect of the asymmetric absorption is to make the star itself move, in accordance with momentum conservation. This is the origin of the kick (19). Of course, in reality the back-reaction is not exactly zero. The most serious drawback of the Eddington model, pointed out in Ref. , is that the diffusion approximation breaks down in the region of interest, where the neutrinos are weakly interacting. Another problem has to do with the inclusion of neutrino absorptions and neutrino oscillations . However, to the extent that we believe this approximation, the pulsar kick is given by equation (19). ## V Conclusion Neutrino oscillations can explain the motions of pulsars. Although many alternatives have been proposed, all of them fail to explain the large magnitudes of the pulsar velocities.
If the pulsar kick velocities are due to $`\nu _e\rightarrow \nu _{\mu ,\tau }`$ conversions, one of the neutrinos must have a mass $`\sim 100`$ eV (assuming small mixing) and must decay on cosmological time scales in order not to overclose the Universe . This has profound implications for particle physics, hinting at the existence of Majorons or other physics beyond the Standard Model that can facilitate the neutrino decay. If the active-to-sterile neutrino oscillations are responsible for pulsar velocities, the prediction for the sterile neutrino to have a mass of several keV is not in contradiction with any of the present bounds. In fact, the $`\sim `$keV mass sterile neutrino has been proposed as a dark-matter candidate . Some other explanations that utilize new hypothetical neutrino properties, but use a similar mechanism for generating the asymmetry, can also explain large pulsar velocities. ## VI Acknowledgements The author thanks E. S. Phinney and G. Segrè for many interesting and stimulating discussions. This work was supported in part by the US Department of Energy grant DE-FG03-91ER40662.
no-problem/9903/astro-ph9903408.html
ar5iv
text
# The Hubble Deep Field Reveals a Supernova at z≃0.95 ## 1 Introduction The discovery of supernovae (SNe) in the early universe is of great interest because they can provide a wealth of information about cosmological parameters and the cosmic star formation history. It is now believed that the star formation activity in the universe very likely started in small objects which later on merged to form larger units (Couchman & Rees 1986, Ciardi & Ferrara 1997, Haiman et al. 1997, Tegmark et al. 1997, Ferrara 1998). Unless the IMF at high-$`z`$ is drastically different from the local one, some of the formed stars will end their lives as SNe. Detecting high-$`z`$ SNe would be of primary importance to clarify how reionization and reheating of the universe proceeded \[Ciardi & Ferrara 1997\], and, in general, to derive the star formation history of the universe (Sadat et al. 1998, Madau, Della Valle & Panagia 1998) and pose constraints on the IMF and chemical enrichment of the universe. Great effort has been spent in this search \[Garnavich et al. 1998, Perlmutter et al. 1998\] and many SNe have been found up to a redshift of $`z=1.20`$ \[Aldering 1998\], when the universe had only about half of its present age. The HDF images \[Williams et al. 1996\] are among the deepest taken to date and in principle could contain SNe up to $`z\sim 3`$. Two SNe (different objects from the ones reported in this work) were actually detected by Gilliland & Phillips (1998) and Gilliland, Nugent & Phillips (1999) by comparing the primary HDF-N data with second-epoch images taken two years after. The primary observations of this field were taken using four optical filters centered at 3000Å (U<sub>300</sub>), 4500Å (B<sub>450</sub>), 6060Å (V<sub>606</sub>) and 8140Å (I<sub>814</sub>) during a time span of about 10 days in December 1995. Although the distribution of the data along this period is not uniform, the overall time coverage is good enough to detect objects with significant variations on time scales of a few days. High redshift SNe cannot be identified near their maximum in such a short time span because they vary too slowly, but soon after the explosion they evolve fast enough to be detected. How many SNe can be expected in the HDF-N? This number can be estimated using the computations by Marri & Ferrara (1998) and Marri et al. (1998) for future Next Generation Space Telescope (NGST) surveys. Scaling their results for a flat CDM+$`\mathrm{\Lambda }`$ with $`\mathrm{\Omega }_M=0.4`$ to the HDF-N area ($`\sim `$5 sq. arcmin) and assuming a surveying time of $`\sim 8`$ days (see below), we obtain an expected number of $`\sim 0.34`$ SNe from massive stars (SN types Ib/II). As a comparison, scaling the analytical estimates in Miralda-Escudé & Rees (1997) to the HDF-N, one obtains a similar expected rate of 0.17 SNe in the HDF-N. These values are large enough to encourage a new analysis of the HDF-N. ## 2 Object selection, photometry and morphology The original observations of the HDF-N in each filter consisted of about 300 images taken in several (from 9 to 11, depending on the filter) positions on the sky (dither positions). We have divided these images into a few subsequent groups, three for V<sub>606</sub> and I<sub>814</sub> and two for B<sub>450</sub>. The U<sub>300</sub> band, which is intrinsically less efficient, was not considered.
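(As an aside to the rate estimates quoted in the Introduction: assuming the number of SNe in the field is Poisson distributed, which is an assumption of ours, the expected counts translate into detection odds as in the short sketch below.)

```python
# Detection odds implied by the expected SN counts above, assuming the
# number of SNe in the field follows Poisson statistics.
import math

for mean in (0.34, 0.17):
    p_at_least_one = 1.0 - math.exp(-mean)
    print(f"<N> = {mean}: P(>=1 SN) = {p_at_least_one:.2f}")
# <N> = 0.34 gives ~0.29, i.e. a sizeable chance that one SN hides in the data.
```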
The HDF-N was observed again two years later in U<sub>300</sub> and I<sub>814</sub> and the latter image (I4 in Table 1) is deep enough to be used for this project. Where possible, images taken in the same position on the sky were not split into different groups so that we could make effective use of those combined by the HDF working team \[Williams et al. 1996\] for each dither position, and made available on the web. Some dither positions in V<sub>606</sub> and I<sub>814</sub> contain images too widely separated in time and therefore have been split. In this case the frames were reduced by the automatic pipeline provided by the ST-ECF web site. In all cases warm and bad pixels were rejected using the standard routines. Given the small number of images, between 3 and 6, present in some of the final groups, it was not possible to use the drizzle algorithm \[Fruchter et al. 1997\] often used for the WFPC2 data reduction. We have chosen to resample all the images to a pixel of 0.05 arcsec (half of the original WF pixel scale) and use the IRAF task IMCOMBINE to make the final combinations while rejecting deviant pixels. This procedure is more efficient than drizzle in rejecting any residual warm and bad pixel; this is especially true for the final V<sub>606</sub> and I<sub>814</sub> images which contain 5 dither images each. Table 1 lists the properties of the resulting images with their time coverage and limit magnitude. Two tests were performed to check the data reduction results: i) the flux of a few compact sources from the list in Méndez et al. (1996) was measured in each of the resulting images to verify the constancy of the photometry; ii) as discussed, for example, in Ferguson (1998), the effective limiting magnitude depends on the object size as much as on its total magnitude. The detection limits for point sources in the images were measured by adding stars and detecting them using FOCAS \[Valdes 1982\]: the derived values for the 80% completeness level are in good agreement (within 0.1 mag) with the values in Williams et al. (1996) once their 10$`\sigma `$ limits in an aperture of 0.5 arcsec are scaled for the exposure time and to about 3$`\sigma `$ in 0.30 arcsec, an aperture yielding the highest SNR for point source photometry (eg., Thompson 1995). These values are listed in Table 1; all magnitudes are in the AB system. The three combined V<sub>606</sub> images (V1, V2 and V3), showing the best sensitivity (3$`\sigma `$ limits between 29.1 and 29.7) and time coverage (about 8.5 days), were examined for variable objects having a monotonic trend, either brightening or dimming. A few interesting objects were selected; the most remarkable one is in chip 2 of the WF camera and is present in the Williams et al. (1996) catalog with the entry number 584.2. The J2000 coordinates of this object (2-584.2) are 12:36:49.357 +62:14:37.50. In the total images, this object has flat B<sub>450</sub>, V<sub>606</sub> and I<sub>814</sub> colors and is undetected in U<sub>300</sub>; however it cannot be classified as a U<sub>300</sub> drop-out being too faint to show a strong enough break between U<sub>300</sub> and B<sub>450</sub>. Table 1 and Fig. 1 show the time evolution of the photometry of 2-584.2. Its magnitudes in the various images were measured inside a circular aperture of 0.3 arcsec, corresponding to about twice the PSF FWHM, and corrected to an infinite aperture. 
This object shows a strong brightening both in V<sub>606</sub> and I<sub>814</sub>, while the photometry in the B<sub>450</sub> band is consistent with a constant value. The errors shown in Fig. 2 are derived under the assumption of Poisson noise inside the photometric aperture. We have computed the statistical significance of the detected variation. For the V<sub>606</sub> band we consider the two differences V2–V1 and V3–V2; in the I<sub>814</sub> band, as the object shows no significant variation among I1, I2 and I4, we coadd these three images and compare the result with I3. By comparing the differences in the photometry with the quadratic sum of the errors, we find a joint probability for these three variations to arise from noise of 1.2$`\times 10^{-5}`$. In the HDF-N there are about 1200 objects between I<sub>814</sub>=27 and I<sub>814</sub>=29 \[Williams et al. 1996\]; therefore we expect one such spurious event only every 70 Hubble Deep Fields. Fig. 2 shows the images of this object in the various bands (images of 2-584.2 can also be found at http://www.arcetri.astro.it/~filippo/sn/sn.html) and its brightening in V<sub>606</sub> and I<sub>814</sub>. In the total images the object is marginally resolved in B<sub>450</sub> and V<sub>606</sub>, whereas in I<sub>814</sub> it is consistent with a point source. The images also give the impression of a sharpening of 2-584.2 with time: the object seems more extended at the beginning of the observations than at the end, as if a bright core were emerging in V<sub>606</sub> and I<sub>814</sub>. Its faintness prevents us from studying its morphology in detail; nevertheless, its extension can be roughly measured by fitting circular gaussians to each image. The results of this procedure support the visual impression of a sharpening of the object with time: the FWHM of the best-fitting gaussian goes from 0.33$`\pm `$0.08 arcsec to 0.18$`\pm `$0.02 arcsec in V<sub>606</sub> and from 0.40$`\pm `$0.10 arcsec to 0.19$`\pm `$0.02 arcsec in I<sub>814</sub>, while the value expected for point sources is about 0.15 arcsec. ## 3 The light curve Among the objects that could show variability at this high galactic latitude and faint flux level, AGNs and SNe are the most plausible candidates. The shape of the detected time variation and the possible development of a bright core in an underlying galaxy strongly suggest the identification of 2-584.2 with a SN observed soon after its explosion. The expected evolution of the apparent magnitude of a SN can be derived from template light curves, spectra and absolute magnitude (as discussed below) once its distance is known and K-corrections (due to the narrowing of the filter passband in the restframe of the source and redshifting of the emitted photons from the source to the observer) are computed. For high redshift objects the time in the observer frame, $`t_{obs}`$, is related to the time in the supernova rest frame by $`t_{obs}=(1+z)t_{rest}`$; this produces a time dilation of the high redshift SN light curve. Once the SN template is given, four parameters are left to simultaneously fit the data in the various filters: the SN redshift, $`z`$, the time of the maximum light, $`t_{max}`$, the color excess due to dust, E(B-V), and the cosmological matter density parameter, $`\mathrm{\Omega }_0`$ (we assume $`\mathrm{\Lambda }=0`$).
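A minimal sketch of such a fit is given below; the template shape, K-correction and extinction coefficient are placeholders of ours rather than the curves actually used for the fit.

```python
# Schematic of the four-parameter template fit described above: a rest-frame
# template light curve is time-dilated by (1+z) and shifted by the distance
# modulus plus a K-correction. The template, K-correction and extinction
# coefficient here are placeholders, not the ones used by the authors.
def apparent_mag(t_obs, z, t_max, ebv, template_M, dist_mod, K_corr, R_band=3.1):
    """Apparent magnitude of a template SN at observer-frame time t_obs (days)."""
    t_rest = (t_obs - t_max) / (1.0 + z)      # time dilation
    return (template_M(t_rest)                # rest-frame absolute magnitude
            + dist_mod(z)                     # luminosity distance modulus
            + K_corr(z, t_rest)               # band-shift correction
            + R_band * ebv)                   # extinction for color excess E(B-V)

# a crude placeholder template: fast rise to maximum, then a slow linear decline
def toy_template(t_rest, M_peak=-17.0):
    return M_peak + (0.24 * abs(t_rest) if t_rest < 0 else 0.05 * t_rest)
```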
Some uncertainty remains associated with this fit as pre-maximum light curves and spectra (particularly in the UV) are generally not very well determined because they are available only for a handful of objects. The SN flux must then be added to the host galaxy flux in each filter. The observed time variation of 2-584.2 shows that the SN quickly becomes dominant in V<sub>606</sub> and I<sub>814</sub> while the galaxy produces most of the B<sub>450</sub> flux. We now consider in turn the possibility that the detected variable source is a Type Ia (SNIa), a Type II (SNII) or a Type Ib (SNIb) supernova. SNIa have been shown to be reliable standard candles, as their intrinsic luminosities can be accurately determined \[Phillips 1993\]. Pre-maximum optical (CTIO) and UV (IUE) spectra of SN1990N \[Leibundgut 1991\] show that the flux drops off sharply below $`\sim 2600`$Å; therefore, as soon as $`z\gtrsim 0.7`$, their B<sub>450</sub> flux is negligible as the cut-off is redshifted into this band, while the mere detection of variation in the V<sub>606</sub> band puts the upper limit $`z\lesssim 1.6`$. The luminosity of a SNIa evolves in time according to a light curve \[Leibundgut 1988, Doggett & Branch 1985\] which shows a fast rise to the maximum (3.6 mag in 15 days), with slight differences between the (rest frame) B- and V-band. When fitting the data for 2-584.2 with the time-dilated, K-corrected light curves, we find it impossible to simultaneously match the V<sub>606</sub> and I<sub>814</sub> data points (in this case the B<sub>450</sub> data only give information on the galaxy magnitude), as shown in Fig. 1. Since the object is relatively faint, this implies high SNIa redshifts ($`z\gtrsim 1.3`$) for which the colors cannot be reproduced by using appropriate K-corrections. We conclude that the identification of 2-584.2 with a SNIa can be safely ruled out. We repeat the same procedure for SNII. These objects cannot be used as standard candles as their peak absolute luminosities are known to cover a wide range, from $`M_B\simeq -14`$ to $`M_B\simeq -19`$ \[Patat et al. 1994\]. SNII are usually divided into two classes \[Doggett & Branch 1985\], one (SNII-P) showing a slow pre-maximum brightening and a plateau in the after-maximum decline, the other (SNII-L) a faster brightening and a linear decline. In both cases the pre-maximum spectra can be approximated by a black-body with a temperature $`T_{BB}\simeq 25000`$ K \[Kirshner 1990\] without any UV cut-off. By comparing the expected light curves with the data (see Fig. 1) we can safely exclude both classes because (i) their brightening is too slow and (ii) their blue spectrum makes them too luminous in B<sub>450</sub>. SNIb yield a good agreement with the data. These objects closely resemble SNIa in terms of time evolution and spectra but are typically 1-2 mag fainter \[Wheeler & Levreault 1985, Kirshner 1990\] and show a larger spread in the maximum brightness. While SNIa are found in all types of galaxies and derive from old stars, the SNIb are only detected near regions of active star formation and their progenitors should be young massive stars.
As shown in Figure 1, both the time evolution of 2-584.2 increasing from B<sub>450</sub> to V<sub>606</sub> to I<sub>814</sub> and its apparent magnitudes are easily reproduced by SNIb light curves \[Kirshner 1990\]: acceptable fits can be obtained for $`0.90<z<1.02`$, which makes this object one of the most distant SNe observed to date, while the best agreement is found for $`z=0.95`$, $`t_{max}=34.0\pm 1`$ days ($`\sim 12`$ rest-frame days after the end of the observations), low $`\mathrm{\Omega }_0`$ values and moderate reddening $`0\lesssim `$E(B-V)$`\lesssim 0.05`$ \[Seaton 1978\]. These values of the reddening, corresponding to up to about 0.29 mag of extinction in V<sub>606</sub> and 0.22 in I<sub>814</sub>, could also be accounted for by a SNIb fainter than the average and without extinction. Lower values of the fitting redshift tend to select low values of $`\mathrm{\Omega }_0`$ and low extinction, and vice versa for higher redshifts. The galaxy, showing flat B<sub>450</sub>-V<sub>606</sub> and V<sub>606</sub>-I<sub>814</sub> colors and luminosities indicating a SFR of about 0.2 M<sub>⊙</sub>/yr \[Madau et al. 1998\], is consistent with a star forming dwarf. About 20% of the flux in the I<sub>814</sub> image taken two years later (I4) is still due to the SN. We therefore conclude that: (i) SNIa are too bright and red and SNII are too slow and blue to be viable candidates for 2-584.2; (ii) a SNIb naturally reproduces the time evolution, brightness and colors of the variable object. This type of study, when applied to high sensitivity, long time span future observations, could produce a significant sample of high-$`z`$ SNe that can be successfully used to constrain the star formation history and the geometry of our universe. ## Acknowledgments We are indebted to P. Hoeflich and B. Leibundgut for providing supernova spectra and light curves, and to R. McCray, M. Salvati, L. Pozzetti and N. Panagia for useful discussions. We also thank the referee, R. Ellis, for insightful comments. We thank STScI/ST-ECF for the implementation of their data archive and for support during this work. FM acknowledges a partial support from ASI grant ARS-96-66; AF acknowledges partial support as a Visiting Fellow at JILA.
no-problem/9903/astro-ph9903297.html
ar5iv
text
# AN ATLAS FOR STRUCTURAL STUDIES OF SPIRAL GALAXIES Rotation Curves and Surface Brightness Profiles of 304 Bright Spirals ## Abstract This is an announcement of a new database of structural properties for 304 late-type (Sb-Sc) spiral galaxies drawn from the UGC catalogue. These data were compiled from the kinematic and photometric studies of Courteau (1996, 1997) and are made available to the community via the Canadian Astronomy Data Centre. The data base contains redshift information and Tully-Fisher distances, various measures of optical (H$`\alpha `$) line width and rotational velocity, isophotal diameters and magnitudes, disk scale lengths, B-r colour, rotation curve model parameters, and more. The main table includes 66 entries (columns); it can be down-loaded as one single file, or searched for any range of parameters using our search engine. The data files for each rotation curve and luminosity profile (including multiple observations) are also available and can be retrieved as two separate tar files. These data were originally obtained for cosmic flow studies (e.g., Courteau et al. 1993, Courteau 1993) and have been included in the Mark III Catalog of Galaxy Peculiar Velocities (Willick et al. 1997). The high spatial and spectral resolution of these data make them ideal for structural and dynamical investigations of spiral galaxies (e.g., Broeils & Courteau 1997; Mo, Mao, & White 1998; Somerville & Primack 1999; Courteau & Rix 1999). The data base can be accessed at http://cadcwww.hia.nrc.ca/astrocat/courteau.html. galaxies: spiral — galaxies: kinematics and dynamics — galaxies: distance scale Acknowledgements We would like to thank Daniel Durand at CADC for setting up the Web architecture.
no-problem/9903/astro-ph9903114.html
ar5iv
text
# Starburst Galaxies in Clusters ## 1. Introduction In a series of papers extending over the last 12 years (Rakos et al. 1988, 1990, 1995, 1996, 1997), we have used a narrow band color system to perform photometry of galaxies in rich clusters at various redshifts in the rest frame of the cluster. Our study approaches this problem through the use of a modified Strömgren system, modified in the sense that the filter set is ‘redshifted’ to the cluster of galaxies under consideration; therefore, no k-corrections are needed. We call our modified system uz,vz,bz,yz to distinguish it from the original Strömgren system (uvby). The color indices have been a profitable tool for investigating color evolution and we have demonstrated that the amplitude of the 4000Å break (Dressler and Shectman 1987) in the spectra of galaxies is correlated with the $`uz-vz`$ color index. In more recent studies, we have used the high resolution and high S/N ratio spectra of galaxies published by Gunn and Oke (1975), Yee and Oke (1978), Kennicutt (1992), De Bruyn and Sargent (1978) and Ashby et al. (1992) to compute synthetic colors. In this fashion, we have constructed a set of templates to establish the relationship between our color indices and the morphology of galaxies. This scheme can be expanded upon using principal component analysis similar to the technique outlined by Lahav et al. (1996) in order to classify galaxies in high redshift clusters. For our preliminary work, our sample galaxies are divided into four classes: ellipticals, spirals, Seyferts, and starbursts. Each class is well separated in our color-color diagrams, especially starburst galaxies which have anomalous $`mz`$ indices ($`mz`$ is defined in the same manner as the traditional Strömgren $`m`$ index, $`mz=(vz-bz)-(bz-yz)`$). The starburst galaxies all lie below $`mz=0.2`$ due to a combination of intrinsic reddening and starburst colors from a marginally low metal content population (see Figure 1). ## 2. Starburst Colors In our photometric system, there are reddening free expressions, similar to the original relations defined by Strömgren, which can be used to estimate the variable amount of reddening in individual galaxies. The reddening free parameter for $`mz`$, called $`C`$, is defined to be: $$C=mz+0.39(bz-yz)$$ (1) For $`uz-vz`$ we define the reddening free variable, $`A`$, as: $$A=1.49(uz-vz)-(bz-yz)$$ (2) Using these variables, our indices become primarily dependent on the age and total mass of the starburst. Calibration is based on theoretical models of starburst galaxies from Leitherer and Heckman (1995). Figure 2 displays the various models for timesteps of log t = 6.3, 7.0, 7.3, 7.7 and 8.7 years. The initial model at each timestep is for a pure starburst population. Additional models represent the mixture of an underlying old stellar population that is 1 mag fainter than the starburst, of the same total luminosity as the starburst, and 1 mag brighter than the starburst, as noted in the Figure. Note that the luminosity, rather than burst mass, is used for normalization. The brightness of the burst is taken as the peak luminosity at 5500Å. As an illustration of the use of these indices, we have compared new data on Mrk 325, a clumpy irregular galaxy, and NGC 3277, a normal spiral. Mrk 325 is estimated to contain more than 20,000 very hot stars and is one of the strongest extragalactic far-infrared sources (see Condon and Yin 1990). Its luminosity, size and internal motions are all larger than those of typical dwarf irregular galaxies.
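For concreteness, the arithmetic behind these indices can be written out as a short sketch; the input magnitudes below are made-up values, purely illustrative.

```python
# Minimal sketch of the narrow-band indices defined above; the input
# magnitudes are made-up numbers, included only to show the arithmetic.
def indices(uz, vz, bz, yz):
    mz = (vz - bz) - (bz - yz)          # Stromgren-like m index
    C = mz + 0.39 * (bz - yz)           # reddening-free version of mz, Eq. (1)
    A = 1.49 * (uz - vz) - (bz - yz)    # reddening-free version of uz-vz, Eq. (2)
    return mz, C, A

print(indices(19.8, 19.2, 18.9, 18.7))  # hypothetical galaxy magnitudes
```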
The positions of the two galaxies in the reddening free diagram are shown as open symbols in Figure 2 and indicate that the age of the starburst in Mrk 325 is in the range of 40 Myrs, whereas NGC 3277 displays all the global colors of an old starburst of 0.5 Gyrs ago. The IRAS infrared index $`f(60)/f(100)`$ and the narrow band colors are also well correlated despite reddening effects. The IRAS index is a good indicator of the nature of the dust heating sources in galaxies and provides a direct indication of the dominance of the warm component of the interstellar radiation field produced by O and B stars. We have defined a reddening free color index, $`E`$, such that: $$E=(uz-vz)+(vz-yz)-2.84(bz-yz)$$ (3) Figure 3 shows this correlation between the far-IR colors for starburst galaxies from the literature and new narrow band data. It has been commonly assumed that IRAS colors are poorly correlated to UV and blue optical colors due to the heavy presence of dust responsible for far-IR emission. However, new HST imaging has revealed that, in fact, ultraluminous infrared galaxies display a complex structure of dust lanes and compact knots of star formation (see Surace et al. 1998). A significant amount of the light from these blue star formation regions exists outside the dust-rich core regions to produce the $`E`$, far-IR relation in Figure 3. ## 3. Starbursts in Distant Clusters Observations of galaxy clusters with high redshift show an increasing number of blue galaxies ($`bz-yz<0.2`$) and an increasing number of those blue galaxies have $`mz<0.2`$ with redshift (Rakos and Schombert 1995). One such cluster, CL0317+1521 at $`z=0.583`$, has over 60% of its population as blue galaxies and 42% have $`mz<0.2`$, the photometric signature for starburst. The deep rest frame Strömgren color photometry of the cluster A2317 ($`z=0.211`$, Rakos, Odell and Schombert 1997) shows that the ratio of blue to red galaxies has a strong dependence on absolute magnitude such that blue galaxies dominate the very brightest and very faintest galaxies, as shown in the bottom panel of Figure 4. Similar behavior is found in new data on A2283 ($`z=0.183`$), also shown in Figure 4. However, the fraction of galaxies displaying the signatures of a starburst only increases towards the faint, dwarf end of the luminosity function (see top panel of Figure 4). Tidal interactions are frequently invoked as an explanation for the high fraction of starburst galaxies. These starburst systems would have their origin as gas-rich dwarf galaxies which then undergo a short, but intense, tidally induced starburst. It should be noted, however, that the orbits of cluster galaxies are primarily radial, and the typical velocities into the dense cluster core are high. This makes any encounter with another galaxy extremely short-lived, with little impulse being transferred as is required to shock the incumbent molecular clouds into a nuclear starburst. Recently, a new mechanism for cluster-induced star formation has been proposed. This method, called galaxy harassment (Moore et al. 1996), emphasizes the influence of the cluster tidal field and the more powerful impulse encounters with massive central galaxies. These two processes conspire to not only raise the luminosity of cluster dwarfs, but also to increase their visibility (i.e. surface brightness) and hence their detectability. One of the predictions of galaxy harassment is that galaxies in the cores of clusters will be older (post-starburst) than galaxies at the edges.
In terms of star formation history, this is exactly what has been demonstrated in A2317 and A2283 (Rakos, Odell and Schombert 1997). That is, the blue population is primarily located in the outer two-thirds of the cluster (see Figure 6 of Rakos, Odell and Schombert 1997). Regardless of the origin of the blue population, its fate is obvious. These galaxies do not exist in present-day clusters and, therefore, must either be destroyed or reduced to the luminosity (or detectability) of dwarf galaxies. As shown in Figure 4, the blue galaxies dominate the bright and faint ends of the luminosity function. Figure 4 also shows that the blue fraction of faint galaxies contains a larger number of starbursting galaxies (i.e. ones with $`mz<0.2`$) than the bright galaxies. One interpretation is that bright galaxies finished their starburst phase much earlier in the past, and now only display a steady, spiral-like production of new stars. Thus, the scenario proposed here is that the blue galaxies on the bright end of the luminosity function are core galaxies on low orbits involved in high impulse star-forming events. Faint galaxies, on the other hand, are cluster halo objects undergoing harassment-style starbursts. This scenario naturally divides the galaxy population by mass and by distance from the center of the cluster through dynamical processes. Confirmation for this scenario is found from deep HST observations by Koo et al. (1997), Oemler et al. (1997) and Couch et al. (1998), who have shown that the most spectacular starbursts and emission-line galaxies tend to be low mass objects, whose final state is likely to be that of a dwarf galaxy. ## 4. Conclusions The colors of faint blue galaxies in clusters are consistent with a simple starburst phenomenon and indicate that there exists a bursting population of dwarf galaxies in clusters which rises in visibility at earlier epochs, then fades to become the current population of dwarf elliptical and nucleated galaxies. This becomes a parallel issue to the faint blue galaxy problem in the field, except in clusters the bursting dwarf population does not distinguish itself sharply from other cluster galaxies by color alone. Only through a mixture of filter indices does the reddened nature of the bursting dwarf population reveal itself as unique in color and luminosity from the Butcher-Oemler population common in most distant clusters. The fraction of blue galaxies (in A2283 and A2317) increases on both ends of the luminosity function. The bright end is dominated by post-starburst and merger objects identified in HST images of the Butcher-Oemler population. The faint end is dominated by the dwarf population, temporarily enhanced in visibility, probably by galaxy harassment mechanisms. ## References Ashby, M., Houck, J. and Hacking, P. 1992, AJ, 104, 980 Couch, W., Barger, A., Smail, I., Ellis, R. and Sharples, R. 1998, ApJ, 497, 188 Condon, J. and Yin, Q. 1990, ApJ, 357, 97 De Bruyn, A. and Sargent, W. 1978, AJ, 83, 1257 Dressler, A. and Shectman, S. 1987, AJ, 94, 899 Hamilton, D. 1985, ApJ, 297, 371 Gunn, J. and Oke, J. 1975, ApJ, 195, 255 Kennicutt, R. 1992, ApJS, 79, 255 Koo, D., Guzman, R., Gallego, J. and Wirth, G. 1997, ApJ, 487, L49 Leitherer, C. and Heckman, T. 1995, ApJS, 96, 9 Lahav et al. 1996, MNRAS, 283, 207 Oemler, A., Dressler, A. and Butcher, H. 1997, ApJ, 474, 561 Yee, H. and Oke, J. 1978, ApJ, 226, 52 Surace, J., Sanders, D., Vacca, W., Veilleux, S. and Mazzarella, J. 1998, ApJ, 492, 116 Rakos, K., Fiala, N. and Schombert, J.
1988, ApJ, 328, 463 Rakos, K., Kreidl, T. and Schombert, J. 1990, ApJ, 377, 382 Rakos, K. and Schombert, J. 1995, ApJ, 439, 47 Rakos, K., Maindl, T. and Schombert, J. 1996, ApJ, 466, 122 Rakos, K., Odell, A. and Schombert, J. 1997, ApJ, 490, 201
no-problem/9903/hep-ph9903454.html
ar5iv
text
# Four-Neutrino Mass Spectra and the Super-Kamiokande Atmospheric Up–Down Asymmetry \[ ## Abstract In the framework of schemes with mixing of four massive neutrinos, which can accommodate the atmospheric, solar and LSND ranges of $`\mathrm{\Delta }m^2`$, we show that, in the whole region of $`\mathrm{\Delta }m_{\mathrm{LSND}}^2`$ allowed by LSND, the Super-Kamiokande up–down asymmetry excludes all mass spectra with a group of three close neutrino masses separated from the fourth mass by the LSND gap of order 1 eV. Only two schemes with mass spectra in which two pairs of close masses are separated by the LSND gap can describe the Super-Kamiokande up–down asymmetry and all other existing neutrino oscillation data. preprint: UWThPh-1999-20 DFTT 17/99 hep-ph/9903454 \] The observation of a significant up–down asymmetry of atmospheric high-energy $`\nu _\mu `$- and $`\overline{\nu }_\mu `$-induced events in the Super-Kamiokande experiment is considered as the first model-independent evidence in favor of neutrino oscillations. Such indications were also obtained in other atmospheric neutrino experiments: Kamiokande, IMB, Soudan-2 and MACRO. In addition, evidence in favor of neutrino masses and mixing is provided by all solar neutrino experiments: Homestake, Kamiokande, GALLEX, SAGE and Super-Kamiokande. Finally, observation of $`\overline{\nu }_\mu \rightarrow \overline{\nu }_e`$ and $`\nu _\mu \rightarrow \nu _e`$ oscillations has been claimed by the LSND collaboration . For the explanation of all these data three different scales of neutrino mass-squared differences are required: $`\mathrm{\Delta }m_{\mathrm{sun}}^2\sim 10^{-10}\mathrm{eV}^2`$ (vacuum oscillations) or $`\mathrm{\Delta }m_{\mathrm{sun}}^2\sim 10^{-5}\mathrm{eV}^2`$ (MSW), $`\mathrm{\Delta }m_{\mathrm{atm}}^2\sim 10^{-3}\mathrm{eV}^2`$, $`\mathrm{\Delta }m_{\mathrm{LSND}}^2\sim 1\mathrm{eV}^2`$. Thus, at least four neutrinos with definite mass are needed to describe all data. Four-neutrino schemes have been considered in many papers. For early works see Ref. and for a more comprehensive list of four-neutrino papers consult, e.g., Ref. . In Refs. it was shown that from the results of all existing experiments, including short-baseline (SBL) reactor and accelerator experiments in which no indications of neutrino oscillations have been found, information on the four-neutrino mass spectrum can be inferred. In the case of three different scales of $`\mathrm{\Delta }m^2`$, there are two different classes of neutrino mass spectra (see Fig. 1) that satisfy the inequalities $`\mathrm{\Delta }m_{\mathrm{sun}}^2\ll \mathrm{\Delta }m_{\mathrm{atm}}^2\ll \mathrm{\Delta }m_{\mathrm{LSND}}^2`$. In the spectra of class 1 there is a group of three close masses which is separated from the fourth mass by the LSND gap of around 1 eV. It contains the spectra (I) – (IV) in Fig. 1. Note that spectrum (I) corresponds to a mass hierarchy, spectrum (III) to an inverted mass hierarchy, whereas (II) and (IV) are non-hierarchical spectra. In the spectra of class 2 there are two pairs of close masses which are separated by the LSND gap. The two possible spectra in this class are denoted by (A) and (B) in Fig. 1. It was shown in Ref. that, in the case of the spectra of class 1, from the existing data one can obtain constraints on the amplitude of SBL $`\nu _\mu \rightarrow \nu _e`$ oscillations that are not compatible with the results of the LSND experiment in the allowed region $`0.2\mathrm{eV}^2\lesssim \mathrm{\Delta }m_{\mathrm{LSND}}^2\lesssim 2\mathrm{eV}^2`$, with the exception of the small interval from 0.2 to 0.3 eV<sup>2</sup>. In Ref.
the double ratio $`R`$ of $`\mu `$-like over $`e`$-like events has been used as input from atmospheric neutrino measurements, whereas in the present letter we consider what constraints on neutrino mixing can be inferred from the up–down asymmetry of multi-GeV muon-like events measured in the Super-Kamiokande experiment , i.e., from $$A_\mu =\frac{U-D}{U+D}=-0.311\pm 0.043\pm 0.01,$$ (1) where $`U`$ and $`D`$ denote the number of events in the zenith angle ranges $`-1<\mathrm{cos}\theta <-0.2`$ and $`0.2<\mathrm{cos}\theta <1`$, respectively. We will show that with this input the conclusion of Ref. will be strengthened and that now the neutrino mass spectra of class 1 are disfavored for any value of $`\mathrm{\Delta }m_{\mathrm{LSND}}^2`$ in the allowed range. In addition, we will also derive a constraint on the mixing matrix for the neutrino mass spectra (A) and (B). The general case of mixing of four massive neutrinos is described by $`\nu _{\alpha L}=\sum _{j=1}^4U_{\alpha j}\nu _{jL}`$, where $`U`$ is the $`4\times 4`$ unitary mixing matrix, $`\alpha =e,\mu ,\tau ,s`$ denotes the three active neutrino flavors and the sterile neutrino, respectively, and $`j=1,\ldots ,4`$ enumerates the neutrino mass eigenfields. For definiteness, we will consider the spectrum of type I with a neutrino mass hierarchy $`m_1\ll m_2\ll m_3\ll m_4`$, but the results that we will obtain in this case will apply to all spectra of class 1. The probability of SBL $`\nu _\mu \rightarrow \nu _e`$ transitions is given by the two-neutrino-like formula $$P_{\nu _\mu \rightarrow \nu _e}=P_{\overline{\nu }_\mu \rightarrow \overline{\nu }_e}=A_{\mu ;e}\mathrm{sin}^2\frac{\mathrm{\Delta }m_{41}^2L}{4E},$$ (2) where $`\mathrm{\Delta }m_{41}^2=\mathrm{\Delta }m_{\mathrm{LSND}}^2`$, $`L`$ is the distance between source and detector and $`E`$ is the neutrino energy. We use the abbreviation $`\mathrm{\Delta }m_{kj}^2\equiv m_k^2-m_j^2`$. The oscillation amplitude $`A_{\mu ;e}`$ is given by $$A_{\mu ;e}=4\left(1-c_e\right)\left(1-c_\mu \right)$$ (3) with $$c_\alpha =\sum _{j=1}^3|U_{\alpha j}|^2\hspace{1em}(\alpha =e,\mu ).$$ (4) From the results of reactor and accelerator disappearance experiments it follows that $$c_\alpha \le a_\alpha ^0\hspace{1em}\text{or}\hspace{1em}c_\alpha \ge 1-a_\alpha ^0$$ (5) with $`a_\alpha ^0=\frac{1}{2}\left(1-\sqrt{1-B_{\alpha ;\alpha }^0}\right)`$, where $`B_{\alpha ;\alpha }^0`$ is the upper bound for the amplitude of $`\nu _\alpha \rightarrow \nu _\alpha `$ oscillations. The exclusion plots obtained from the Bugey and CDHS and CCFR experiments imply that $`a_e^0\lesssim 4\times 10^{-2}`$ for $`\mathrm{\Delta }m_{\mathrm{LSND}}^2\gtrsim 0.1\mathrm{eV}^2`$ and $`a_\mu ^0\lesssim 0.2`$ for $`\mathrm{\Delta }m_{\mathrm{LSND}}^2\gtrsim 0.4\mathrm{eV}^2`$ . Below $`\mathrm{\Delta }m^2\simeq 0.3`$ eV<sup>2</sup>, the survival amplitude $`B_{\mu ;\mu }`$ is not restricted by experimental data, i.e., $`B_{\mu ;\mu }^0=1`$. The survival probability of solar $`\nu _e`$’s is bounded by $`P_{\nu _e\rightarrow \nu _e}^{\mathrm{sun}}\ge (1-c_e)^2`$ . Therefore, to be in agreement with the results of solar neutrino experiments we conclude that from the two ranges of $`c_e`$ in Eq.(5) only $$c_e\ge 1-a_e^0$$ (6) is allowed. We will now address the question of what information on the parameter $`c_\mu `$ can be obtained from the asymmetry $`A_\mu `$ (1). As a first step we derive an upper bound on the number of downward-going $`\mu `$-like events $`D`$.
The probability of $`\nu _\alpha \rightarrow \nu _\alpha `$ and $`\overline{\nu }_\alpha \rightarrow \overline{\nu }_\alpha `$ transitions of atmospheric neutrinos is given by $`P_{\nu _\alpha \rightarrow \nu _\alpha }=P_{\overline{\nu }_\alpha \rightarrow \overline{\nu }_\alpha }=`$ (7) $`\left|{\displaystyle \sum _{j=1,2}}|U_{\alpha j}|^2+|U_{\alpha 3}|^2\mathrm{exp}\left(-i{\displaystyle \frac{\mathrm{\Delta }m_{31}^2L}{2E}}\right)\right|^2+|U_{\alpha 4}|^4,`$ (8) where we have taken into account that $`\mathrm{\Delta }m_{41}^2\gg \mathrm{\Delta }m_{31}^2`$ and $`\mathrm{\Delta }m_{21}^2L/2E\ll 1`$ ($`\mathrm{\Delta }m_{21}^2`$ is relevant for solar neutrinos). Because of the small value of $`\mathrm{\Delta }m_{\mathrm{atm}}^2=\mathrm{\Delta }m_{31}^2`$, it is well fulfilled that downward-going neutrinos do not oscillate with the atmospheric mass-squared difference. (This is not completely true for neutrino directions close to the horizon with $`\mathrm{\Delta }m_{\mathrm{atm}}^2\simeq 3\times 10^{-3}\mathrm{eV}^2`$. Taking into account the result of the CHOOZ experiment , we have checked, however, that numerically this has a negligible impact on the following discussion.) Therefore, we obtain for the survival probability of downward-going neutrinos $$P_{\nu _\alpha \rightarrow \nu _\alpha }^D=c_\alpha ^2+(1-c_\alpha )^2.$$ (9) Furthermore, conservation of probability and Eq.(6) allow us to deduce the upper bound $$P_{\nu _e\rightarrow \nu _\mu }^D\le 1-P_{\nu _e\rightarrow \nu _e}^D=2c_e(1-c_e)\le 2a_e^0(1-a_e^0).$$ (10) Note that all arguments hold for neutrinos and antineutrinos. Denoting the number of muon (electron) neutrinos and antineutrinos produced in the atmosphere by $`n_\mu `$ ($`n_e`$), from Eqs.(9) and (10) we have the upper bound $$D\le n_\mu [c_\mu ^2+(1-c_\mu )^2]+2n_ea_e^0(1-a_e^0).$$ (11) Taking into account only the part of $`D`$ which is determined by the $`\nu _\mu `$ and $`\overline{\nu }_\mu `$ survival probability, we immediately obtain the lower bound $$D\ge n_\mu [c_\mu ^2+(1-c_\mu )^2].$$ (12) Considering only $`|U_{\mu 4}|^4`$ in Eq.(7), we readily arrive at a lower bound on $`U`$ as well: $$U\ge n_\mu (1-c_\mu )^2.$$ (13) This inequality is analogous to the above inequality for the survival of solar neutrinos and is valid also with matter effects in the earth. Now we can assemble the inequalities (11), (12) and (13), and the main result of this work follows: $$|A_\mu |\le \frac{c_\mu ^2+2a_e^0(1-a_e^0)/r}{c_\mu ^2+2(1-c_\mu )^2},$$ (14) where we have defined $`r\equiv n_\mu /n_e`$. For the numerical evaluation of Eq.(14) we use $`|A_\mu |\ge 0.254`$ at 90% CL, the 90% CL bound $`a_e^0`$ from the result of the Bugey experiment and $`r=2.8`$ read off from Fig. 3 in Ref. of the Super-Kamiokande Collaboration. As a result we get $$c_\mu \ge a_{\mathrm{SK}}\simeq 0.45,$$ (15) as can be seen from the horizontal line in Fig. 2. Note that the dependence of this lower bound on $`\mathrm{\Delta }m_{\mathrm{LSND}}^2=\mathrm{\Delta }m_{41}^2`$ is almost negligible due to the smallness of the second term in the numerator on the right-hand side of Eq.(14). Consequently, also the exact value of $`r`$ is not important numerically. In Fig. 2 we have also depicted the bounds $$c_\mu \le a_\mu ^0\hspace{1em}\text{and}\hspace{1em}c_\mu \ge 1-a_\mu ^0$$ (16) that were obtained from the exclusion plot of the CDHS $`\nu _\mu `$ disappearance experiment. For $`\mathrm{\Delta }m_{\mathrm{LSND}}^2\simeq 0.24`$ eV<sup>2</sup> these two bounds meet at $`c_\mu =0.5`$. Below 0.24 eV<sup>2</sup> there are no restrictions on $`c_\mu `$ from SBL experiments.
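To make the step from Eq. (14) to Eq. (15) concrete, the following is a minimal numeric sketch: it finds the smallest $`c_\mu `$ for which the right-hand side of (14) can reach the measured asymmetry. The fixed inputs $`a_e^0=0.04`$ and $`r=2.8`$ are single representative numbers; the $`\mathrm{\Delta }m^2`$-dependent 90% CL Bugey curve is what is actually used in the text, so the threshold in Eq. (15) differs slightly.

```python
# Numeric illustration of how the bound (15) follows from (14): find the
# smallest c_mu for which the right-hand side of (14) reaches |A_mu| = 0.254.
# a_e0 = 0.04 and r = 2.8 are the representative values quoted in the text.
def rhs(c, a_e0=0.04, r=2.8):
    return (c**2 + 2 * a_e0 * (1 - a_e0) / r) / (c**2 + 2 * (1 - c)**2)

def c_mu_threshold(target=0.254, lo=0.0, hi=1.0):
    for _ in range(60):                 # bisection; rhs increases with c
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rhs(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

print(c_mu_threshold())  # ~0.43, the same ballpark as a_SK ~ 0.45 in Eq. (15)
```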
Finally, we take into account the result of the LSND experiment, from which information on the SBL $`\overline{\nu }_\mu \rightarrow \overline{\nu }_e`$ transition amplitude $`A_{\mu ;e}`$ (3) is obtained. Using Eq.(6) and the lower bound $`A_{\mu ;e}^{\mathrm{min}}`$, which can be inferred from the region allowed by LSND, we derive the further bound on $`c_\mu `$ $$c_\mu \le a_{\mathrm{LSND}}\equiv 1-A_{\mu ;e}^{\mathrm{min}}/4a_e^0.$$ (17) This bound is represented by the curve in Fig. 2 labelled LSND + Bugey. Fig. 2 clearly shows that a four-neutrino mass hierarchy is strongly disfavored because no allowed region for $`c_\mu `$ is left in this plot. A four-neutrino mass hierarchy is also strongly disfavored for $`\mathrm{\Delta }m_{\mathrm{LSND}}^2\gtrsim 0.4`$ eV<sup>2</sup> as was shown in Ref. . We want to stress that all bounds are derived from 90% CL plots and that the bound (17) is quite sensitive to the actual values of $`A_{\mu ;e}^{\mathrm{min}}`$ and $`a_e^0`$. This has to be kept in mind in judging the result derived here. As was noticed before , the procedure discussed here applies to all four-neutrino mass spectra of class 1 where a group of three neutrino masses is close together and separated from the fourth neutrino mass by a gap needed to explain the result of the LSND experiment. The reason is that all arguments presented here remain unchanged if one defines $`c_\alpha `$ (4) by a summation over the indices of the three close masses for each of the mass spectra of class 1 (see Fig. 1), i.e., $`j=1,2,3`$ for the spectra I and II and $`j=2,3,4`$ for the spectra III and IV. To give an intuitive understanding that the data disfavor all spectra of class 1 we note that $`c_\mu `$ cannot be too close to 1 in order to explain the non-zero LSND $`\overline{\nu }_\mu \rightarrow \overline{\nu }_e`$ oscillation amplitude (3). On the other hand, if $`c_\mu `$ is too close to zero, the atmospheric $`\nu _\mu `$ oscillations are suppressed (see Eq.(7), taking into account that $`|U_{\mu 4}|^2=1-c_\mu `$). For $`\mathrm{\Delta }m_{\mathrm{LSND}}^2\lesssim 0.3\mathrm{eV}^2`$ these two requirements contradict each other. For $`\mathrm{\Delta }m_{\mathrm{LSND}}^2\gtrsim 0.3\mathrm{eV}^2`$ they are in contradiction to the results of the CDHS and CCFR $`\nu _\mu `$ disappearance experiments requiring $`c_\mu `$ to be either close to zero or 1 (see Eq.(5)). According to the previous discussion, only the mass spectra of class 2 remain. They can be characterized in the following way: $$\text{(A)}\underset{\mathrm{LSND}}{\underset{}{\stackrel{\mathrm{atm}}{\stackrel{}{m_1<m_2}}\ll \stackrel{\mathrm{solar}}{\stackrel{}{m_3<m_4}}}}$$ and $$\text{(B)}\underset{\mathrm{LSND}}{\underset{}{\stackrel{\mathrm{solar}}{\stackrel{}{m_1<m_2}}\ll \stackrel{\mathrm{atm}}{\stackrel{}{m_3<m_4}}}}.$$ Let us now discuss what impact the up–down asymmetry $`A_\mu `$ has on these mass schemes. We consider first scheme (A) and go through the same steps as in the case of the mass hierarchy. Now we define $$c_\alpha =\sum _{j=1,2}|U_{\alpha j}|^2.$$ (20) Then the results of reactor experiments and the energy-dependent suppression of the solar neutrino flux lead to $$c_e\le a_e^0.$$ (21) Repeating the derivation of Eq.(14) with $`c_\alpha `$ as defined in Eq.(20), it is easily seen that the inequality (14) holds also for scheme (A).
On the other hand, the bound that takes into account the LSND result now has the form $$c_\mu \ge A_{\mu ;e}^{\mathrm{min}}/4a_e^0.$$ (22) The corresponding curve in the $`\mathrm{\Delta }m_{41}^2`$–$`c_\mu `$ plane is given by a reflection of the curve labelled LSND + Bugey in Fig. 2 at the horizontal line $`c_\mu =0.5`$. Therefore, in the case of scheme (A) the allowed region of $`c_\mu `$ is determined by the bound (22) and by $`c_\mu \ge 1-a_\mu ^0`$. This region is allowed and not restricted by $`c_\mu \ge 0.45`$ obtained from the Super-Kamiokande up–down asymmetry. A discussion of scheme (B) with $`c_e\ge 1-a_e^0`$ leads to the bound (14) with $`c_\mu `$ replaced by $`1-c_\mu `$ in this formula and to Eq.(17). Therefore, the bounds for scheme (B) are obtained from those of scheme (A) by a reflection of the curves at the line $`c_\mu =0.5`$. In summary, the white area in Fig. 2 represents the allowed region for $`1-c_\mu `$ in scheme (A) and for $`c_\mu `$ in scheme (B). In this paper we have shown that the existing neutrino oscillation data allow us to draw definite conclusions about the nature of the possible four-neutrino mass spectra. We have demonstrated that the spectra (I) – (IV) in Fig. 1, including the hierarchical one, are all disfavored by the data in the whole range $`0.2\mathrm{eV}^2\lesssim \mathrm{\Delta }m_{\mathrm{LSND}}^2\lesssim 2\mathrm{eV}^2`$ of the mass-squared difference determined by LSND and other SBL neutrino oscillation experiments. With the Super-Kamiokande result on the atmospheric up–down asymmetry it has also been possible to investigate the region $`\mathrm{\Delta }m_{\mathrm{LSND}}^2\lesssim 0.3\mathrm{eV}^2`$ which was not explored in previous publications. The only four-neutrino mass spectra that can accommodate all the existing neutrino oscillation data are the spectra (A) and (B) in Fig. 1, in which two pairs of close masses are separated by the LSND mass gap. The analysis introduced in this paper enables us in addition to obtain information on the mixing matrix $`U`$ via a rather stringent bound on the quantity $`c_\mu `$ (20) for the allowed schemes (A) and (B). ###### Acknowledgements. S.M.B. would like to thank the Institute for Theoretical Physics of the University of Vienna for its hospitality.
no-problem/9903/astro-ph9903094.html
ar5iv
text
# Search for Non-Triggered Gamma Ray Bursts in the BATSE Continuous Records: Preliminary Results ## 1 Introduction Many gamma-ray bursts which were too weak to cause BATSE to trigger or were missed for other reasons (data readouts, etc.) can be confidently identified in the BATSE daily records, which cover the full period of the CGRO operation. The search for non-triggered bursts can be of crucial importance in the following respects: \- Extension of the log N - log P distribution, which is necessary to make conclusive cosmological fits. \- To reveal different GRB subpopulations, should they exist. In particular, the weak end of log N - log P can reveal subpopulations with Euclidean brightness distributions or put strong constraints on them. \- To refine angular distributions and constraints on anisotropy associated with the Galaxy and M31. This in turn can put much stronger constraints on the Galactic halo scenario. \- To increase the probability to observe the gravitational lensing effect in GRBs by more than a factor of 2. The systematic search for non-triggered GRBs was started by Kommers et al. (1997). Recently, Kommers et al. (1998) completed the data scan for 6 years. Although we started our search for non-triggered bursts (in November 1997) much later than Kommers et al. (1997), our work still has several strong motivations: \- Any scientific work subjected to difficult selective biases becomes much more reliable when performed independently by different groups. \- We started our scan with some important advances. Firstly, we used a more selective off-line trigger code. Secondly, and most importantly, we employed a method of measuring the efficiency of the GRB search using artificial test bursts. More than half of the human effort was spent on finding and fitting artificial test bursts in order to test the reliability of the log N - log P distribution of real bursts. ## 2 Data scan We use 1024 ms time resolution BATSE data (DISCLA) from the ftp archive of the Goddard Space Flight Center at ftp://cossc.gsfc.nasa.gov/compton/data/batse/daily/. The procedure of data reduction contains the following steps: Step 1. Conversion of the original BATSE records, adding to them artificial test bursts prepared from real rescaled bursts taken from the BATSE database. The number of test bursts in the sample is 500. All are longer than 1 s. Each test burst was made by randomly sampling one of the 500 bursts and rescaling its amplitude to a randomly sampled peak count rate with proper Poisson noise (the lower limit being 160 counts/s/2500 cm<sup>2</sup>, which approximately corresponds to $`\sim `$0.1 ph cm<sup>-2</sup> s<sup>-1</sup>). Test bursts were added to the data at random times with an average time interval of 25000 s (i.e., the number of test bursts exceeds the number of real bursts). Step 2. Data scan. We performed an automatic check of the trigger conditions (see Sec. 3). Each trigger was followed by a human decision on whether the trigger is a GRB candidate. The decision was made using: \- residual $`\chi ^2`$, hardness ratio and other quantities; \- count rate curves in different detectors and energy channels; \- a $`\chi ^2`$ map of the sky for the event (residual $`\chi ^2`$ after a fit from a given direction) with the projected Sun, Cyg X-1 and Earth horizon. All candidates were recorded as fragments of daily records, saving all original information. The person performing the scan was unaware whether it was a real or a test burst. Step 3.
Event classification. The candidate events were discarded or classified as non-GRBs using the following criteria: \- A low statistical significance (using a $`\chi ^2`$ map of the event over the sky: $`\chi ^2`$ should exceed $`4\sigma `$ over its minimum value in a hemisphere opposite to the best fit direction.) \- A bad directional fit: strong signals in all detectors, a bad $`\chi ^2`$ map (many local minima), a spike in a single detector (luminescence from heavy nuclei). \- Soft, appearing during high ionospheric activity, and close to or below the horizon (ionospheric events). \- Soft and close to the Sun (solar flare), or consistent with Cyg X-1 or another known X-ray source at the corresponding location and of the same range of intensity and hardness. Step 4. Separation of test bursts using the protocols generated in Step 1. ## 3 Off-line trigger Background estimation for the trigger was done by a linear fit of the count rate over the preceding 40 seconds. (The background estimate for the BATSE trigger is the average over the preceding 17 seconds. Kommers et al. (1997) use a linear fit in an interval depending on the triggering time scale.) Trigger criteria were the following (all criteria should be satisfied simultaneously): 1. The first criterion was traditional: a significant count rate excess over background: brightest detector - $`4\sigma `$ excess, second brightest - $`2.5\sigma `$. The excess was checked in time intervals (triggering timescales) of 1 bin (1.024 s), 2 bins, 4 bins, and 8 bins. Count rates in energy channels #2 and #3 (50 - 300 keV) were used for triggering. (Kommers et al. 1997 used similar thresholds in the same time intervals. The BATSE trigger was set to 5.5$`\sigma `$ and 5.5$`\sigma `$, respectively, sometimes higher.) 2. The second criterion was a test of sufficient time variability using a $`\chi ^2`$ threshold over intervals around the triggering time. After fitting the signal summed over all triggered detectors by a straight line in the intervals -16 s $`<T<`$ 16 s and -16 s $`<T<`$ 32 s, the residual $`\chi ^2`$ in one of the intervals should exceed 2.5 per degree of freedom. The threshold value was chosen using a sample of weak non-triggered bursts found before applying this criterion: all of them passed this threshold. The criterion is efficient against false triggers on smooth background variations and occultation steps. This criterion was not applied by Kommers et al. (1997). 3. The third criterion was based on Cyg X-1 subtraction. The detector counts are fitted using a signal incident from the Cyg X-1 direction (no real detector response matrix is used in this step – just a cos$`(\theta )`$ factor). Then the residual count rate pattern is checked for sufficient variability with a $`\chi ^2`$ threshold as in the previous step. If $`\chi ^2`$ exceeds the threshold for one of a few test time intervals, the trigger is accepted and the scanning code stops for human interactive operation. Criteria (2) and (3), which were not used before, turned out to be very efficient against false triggers, reducing their number from hundreds to several per day for quiet Cyg X-1 or to a few tens for loud Cyg X-1. These criteria can reject some real bursts but their number should be small: a very smooth profile is not typical for GRBs and Cyg X-1 subtraction cuts out less than 5% of the sky. On the other hand, such criteria improve the efficiency of the human visual stage of the work.
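As a rough illustration of the trigger logic just described, the sketch below implements the background fit and the multi-timescale excess test for a single detector; the $`\chi ^2`$ variability test and the Cyg X-1 subtraction are omitted, and the noise model is a simplification of ours.

```python
# Sketch of the off-line trigger logic described in this section, operating
# on a 1.024 s binned count-rate array for one detector. The thresholds
# follow the text; the noise estimate is a simplified stand-in.
import numpy as np

def offline_trigger(rates, i):
    """Check the excess condition at bin i against a fitted background."""
    t = np.arange(40)
    bg_bins = rates[i - 40:i]                      # preceding ~40 s of data
    slope, intercept = np.polyfit(t, bg_bins, 1)   # linear background fit
    noise = (bg_bins - (slope * t + intercept)).std()
    for width in (1, 2, 4, 8):                     # triggering timescales, in bins
        bg = intercept + slope * (40 + (width - 1) / 2)
        excess = rates[i:i + width].mean() - bg
        if excess > 4.0 * noise / np.sqrt(width):  # 4-sigma for the brightest detector
            return True
    return False
```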
This improved efficiency is a possible reason why we have found more non-triggered events than Kommers et al. (1998). ## 4 Preliminary results We scanned 2068 days of BATSE daily records and found 1243 non-triggered events which can be classified as classic GRBs. (Kommers et al. (1998) found 837 non-triggered GRBs per 2200 days.) We also found 1374 bursts which were triggered by BATSE (Kommers et al. (1998) detected 1393 BATSE triggered events), and missed nearly 350 BATSE triggers: some of them are in data gaps, some are too short to be detected with 1 s time resolution. Because many short bursts are lost in the 1 s resolution daily BATSE data, our scan yielded mostly long ($`>`$1 s) events. We also found 3780 test bursts out of about 6800 added to the data. The comparison with the catalog of Kommers et al. (1997) for one year of observations showed that we found 90% of their events and approximately the same number of events missing from their catalog. Events that we did find and Kommers et al. (1997) did not are not necessarily the weakest bursts. Later, Kommers et al. (1998) increased their efficiency, so our final statistics is about 50% larger. The peak flux distribution of events found in the scan is presented in Fig.1. Note that the BATSE trigger missed some events much above threshold due to readout dead time. The efficiency measured by test bursts is shown in Fig.2. Data gaps and periods of a high ionospheric background are taken into account, so the efficiency is normalized to the whole elapsed time of CGRO operation. Angular distributions of new events show no excess towards the Sun, an excess of $`\sim 20`$ events in the direction of Cyg X-1 and a reasonable distribution in Geocentric coordinates (a smoothed step at the Earth horizon and isotropy above it, which indicates that a possible contamination of our sample with ionospheric events is small). The equatorial angular distribution is consistent with the sky coverage function given by Meegan et al. (1998). The hardness - peak flux scattering plot shown in Fig. 3 demonstrates that new weak bursts give a direct continuation of the distribution of stronger GRBs. All possible background events are softer on average. The resulting log N - log P distribution in absolute units is presented in Fig.4 in comparison with the BATSE log N - log P from Meegan et al. (1998) (in arbitrary normalisation) and that from Kommers et al. (1998) (in absolute units). All distributions are normalised with efficiencies estimated by their authors. Kommers et al. (1998) have detected events below 0.2 ph s<sup>-1</sup> cm<sup>-2</sup>; however, they presented only data with efficiency higher than 0.8. The efficiency curve used by Kommers et al. (1998) is a sharper function of the peak flux. Our log N - log P curve is higher in the range $`P\sim `$ 0.2 - 0.6 ph cm<sup>-2</sup> s<sup>-1</sup> because of two factors: a lower efficiency in this range according to our estimate and a larger number of detected events. Fig. 5 shows the log N - log P distribution where short events (only one bin is above 0.5 of the peak value) were removed from the sample and its best fit with a standard candle cosmological distribution for a non-evolving parent population. For the first time the simplest cosmological model cannot fit the data. The fit will be even worse if we use the star formation rate curve as the GRB evolution scenario. The rejection of the standard candle hypothesis is not surprising; however, this is still an achievement because the cosmological fit of the log N – log P becomes conclusive.
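For orientation, a schematic standard-candle curve of the kind fitted in Fig. 5 can be generated as follows; the Einstein-de Sitter cosmology, the absence of a K-correction and the uniform comoving source density are simplifying assumptions of ours, not the model actually fitted.

```python
# Schematic standard-candle log N - log P curve: fixed-luminosity sources,
# uniform in comoving volume, Einstein-de Sitter universe, photon flux
# reduced only by geometry and time dilation (no K-correction). These are
# simplifying assumptions for illustration, not the fitted model.
import numpy as np

def logN_logP(L=1.0, H0=1.0, zmax=20.0, npts=400):
    z = np.linspace(1e-3, zmax, npts)
    d_c = (2.0 / H0) * (1.0 - 1.0 / np.sqrt(1.0 + z))  # comoving distance, EdS
    P = L / (4.0 * np.pi * d_c**2 * (1.0 + z))          # photon peak flux
    N = d_c**3                                          # cumulative counts to z
    return np.log10(P), np.log10(N)

logP, logN = logN_logP()
slope = np.gradient(logN, logP)   # -> -3/2 at the bright end, flattens when faint
```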
## 5 Preliminary conclusions

Stern, Poutanen & Svensson (1997) claimed a possible indication of a turnover of the log N - log P near the BATSE threshold. Such a turnover was also indicated by the results of Kommers et al. (1998). There was a hope to reveal a cosmological evolution of the GRB source population, i.e., their likely decline at large z together with the star production rate (see Totani 1997; Bagot, Zwart & Yungel'son 1998). The turnover of the log N - log P is not confirmed. The reason is that the BATSE detection efficiency (see Meegan et al. 1998) below 1 ph cm<sup>-2</sup> s<sup>-1</sup> turned out to be lower than was estimated before - the smoothly declining efficiency mimicked a turnover. Probably all we see is just a very wide intrinsic luminosity function convolved with the cosmological distribution, which we will not be able to extract straightforwardly. Can the range 0.1 - 0.5 ph cm<sup>-2</sup> s<sup>-1</sup> be contaminated by non-GRB events? There is a temptation to suggest that the true GRB log N - log P is bent like the standard candle curve in Fig. 5, and that the rise at the left is caused by contamination with events of another nature. One exciting possibility is a subpopulation of relatively nearby bursts: the possible association of a supernova with a GRB supports this variant. Then the log N - log P should bend up to the Euclidean slope somewhere. Nobody can exclude this; however, the weakest events have the same hardness as classic GRBs (Fig. 3) and are isotropic in any frame (Galactic, Solar, and Geocentric, where ionospheric events are anisotropic). They have the same range of durations as GRBs and the same character of variability. A wide intrinsic luminosity function is a more "economical" explanation: it must be wide, and it will fit this log N - log P easily.

## 6 Acknowledgements

We thank Juri Poutanen, Aino Skassyrskaia, Andrei Skorbun, Eugeni Stern, Vladimir Kurt, Kirill Semenkov, Stas Masolkin, Alex Sergeev, Max Voronkov, Andrei Beloborodov, and Felix Ryde for valuable assistance. This work was supported by the Swedish Natural Science Research Council, the Royal Swedish Academy of Sciences, the Wennergren Foundation for Scientific Research, and a NORDITA Nordic Collaboration Project grant. RFBR grant 97-02-16975 supported one of the authors (D.K.).
# Comment on “Small-world networks: Evidence for a crossover picture”

In a recent letter, Barthélémy and Nunes Amaral examine the crossover behaviour of networks known as “small-world”. They claim that, for an initial network with $`n`$ vertices and $`z`$ links per vertex, each link being rewired according to the usual procedure with a probability $`p`$, the average distance $`\mathrm{}`$ between two vertices scales as

$$\mathrm{}(n,p)\sim n^{*}F\left(\frac{n}{n^{*}}\right)$$ (1)

where $`F(u\ll 1)\sim u`$, $`F(u\gg 1)\sim \mathrm{ln}u`$, and $`n^{*}\sim p^{-\tau }`$ with $`\tau =2/3`$ as $`p`$ goes to zero. Other quantities can be of interest in small-world networks, and will be discussed elsewhere. In this comment, however, we concentrate, as in that letter, on $`\mathrm{}`$, and we show, using analytical arguments and numerical simulations with larger values of $`n`$, that: (i) the proposed scaling form $`\mathrm{}(n,p)\sim n^{*}F(n/n^{*})`$ seems to be valid, BUT (ii) the value of $`\tau `$ cannot be lower than $`1`$, and therefore the value found by Barthélémy and Nunes Amaral is clearly wrong.

The naive argument developed in that letter uses the mean number of rewired links, $`N_r=pnz/2`$. According to it, one could expect that the crossover happens for $`N_r=O(1)`$, which gives $`\tau =1`$. However, they find $`\tau =2/3`$. Let us suppose that $`\tau <1`$. Then, if we take $`\alpha `$ such that $`\tau <\alpha <1`$, according to eq. (1) we obtain

$$\mathrm{}(n,p=n^{-1/\alpha })\sim n^{\tau /\alpha }F(n^{1-\tau /\alpha })\sim n^{\tau /\alpha }\mathrm{ln}(n^{1-\tau /\alpha })$$ (2)

since $`\tau /\alpha <1`$ and $`n^{1-\tau /\alpha }\gg 1`$ for large $`n`$. However, the mean number of rewired links in this case is $`N_r=n^{1-1/\alpha }z/2`$, which goes to zero for large $`n`$. The immediate conclusion is that a change in the behaviour of $`\mathrm{}`$ (from $`\mathrm{}\sim n`$ to $`\mathrm{}\sim n^{\tau /\alpha }\mathrm{ln}(n)`$) would occur upon the rewiring of a vanishing number of links! This is physical nonsense, showing that, if $`n^{*}\sim p^{-\tau }`$, $`\tau `$ cannot be lower than $`1`$.

We now present our numerical simulations. The value of $`n^{*}(p)`$ is obtained by studying, at fixed $`p`$ (we take $`p=2^k/2^{20}`$, $`k=0,\mathrm{},20`$), the crossover between $`\mathrm{}\sim n`$ at small $`n`$ and $`\mathrm{}\sim \mathrm{ln}(n)`$ at large $`n`$. For small values of $`p`$, it is difficult to reach large enough values of $`n`$ to accurately determine $`n^{*}`$, and we think that the underestimation of $`n^{*}`$ in the letter comes from this problem. We here simulate networks with $`z=4,6,10`$ up to sizes $`n=11000`$, and find that $`n^{*}`$ behaves like $`1/p`$ for small $`p`$ (inset of fig. (1)). We moreover show the collapse of the curves $`\mathrm{}/n^{*}`$ versus $`n/n^{*}`$ in figure (1), for $`z=4`$ and $`z=10`$: note that we obtain the collapse over a much wider range. Besides, we present results for another quantity: at fixed $`n`$ we evaluate $`\mathrm{}(n,p)`$ and look for the value $`p_{1/2}(n)`$ of $`p`$ such that $`\mathrm{}(n,p_{1/2}(n))=\mathrm{}(n,0)/2`$. This value of $`p`$ corresponds to the rapid drop in the plot of $`\mathrm{}`$ versus $`p`$ at fixed $`n`$, and can therefore also be considered as a crossover value. If we denote by $`u^{*}`$ the number such that $`F(u^{*})=u^{*}/2`$, then we obtain, since $`\mathrm{}(n,0)\sim n`$, that $`n^{*}(p_{1/2}(n))=n/u^{*}`$. If $`n^{*}(p)\sim p^{-\tau }`$, this implies $`p_{1/2}(n)\sim n^{-1/\tau }`$.
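As an illustration, a minimal numerical sketch of this $`p_{1/2}(n)`$ measurement is given below. It assumes the Watts-Strogatz generator of the networkx library; the function names, realization counts, and tolerances are our own choices, not the actual simulation code.

```python
import numpy as np
import networkx as nx

def ell(n, p, z=4, realizations=10):
    """Average shortest-path distance of a small-world network with n
    vertices and z links per vertex, averaged over the rewiring disorder."""
    return np.mean([nx.average_shortest_path_length(
        nx.connected_watts_strogatz_graph(n, z, p))
        for _ in range(realizations)])

def p_half(n, z=4, rtol=0.02):
    """Bisection (in log p) for the p at which ell drops to half its p=0 value."""
    target = ell(n, 0.0, z, realizations=1) / 2.0   # p = 0 is deterministic
    lo, hi = 1e-5, 1.0
    while hi / lo > 1.0 + rtol:
        mid = np.sqrt(lo * hi)
        if ell(n, mid, z) > target:
            lo = mid        # distance still too large: need more rewiring
        else:
            hi = mid
    return np.sqrt(lo * hi)

# p_half(n) for n = 100, 200, 400, ... should fall off as 1/n, i.e. tau = 1.
```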
We show in fig. (2) that $`p_{1/2}(n)\sim 1/n`$ (and clearly not $`\sim n^{-3/2}`$, as the results of the letter would imply), meaning that $`\tau =1`$: a finite number of rewired links already has a strong influence on $`\mathrm{}`$. Again, the $`\tau =2/3`$ result is clearly ruled out.

It is a pleasure to thank G. Biroli, R. Monasson and M. Weigt for discussions.

A. Barrat

Laboratoire de Physique Théorique, Université de Paris-Sud, 91405 Orsay cedex, France
# Far–infrared Point Sources

## 1. Introduction

The accurate measurement of the fluctuations of the Cosmic Microwave Background down to scales of a few arcmins by the forthcoming satellite missions map and planck will require a thorough analysis of all the astrophysical sources of submm/mm anisotropies, and a careful separation of the various foreground components (Bouchet et al. 1996, Gispert & Bouchet 1997, Tegmark & Efstathiou 1996, Tegmark 1998, Hobson et al. 1998a, Bouchet & Gispert 1999). Among them, the foreground due to resolved/unresolved IR/submm galaxies present at all redshifts on the line of sight was poorly known, until observations and analyses in the last three years began unveiling the “optically dark” (and infrared–bright) side of galaxy evolution at cosmological distances. In parallel to this observational breakthrough, a strong theoretical effort has opened the way to models of these galaxies that implement the basic astrophysical processes ruling IR/submm emission in a consistent way. As a consequence, it is now possible to take a more general view of the problems of foreground separation, and of the capabilities of CMB missions to put new constraints on the number and properties of these sources. In the local universe, we know from iras observations that about 30 % of the bolometric luminosity of galaxies is radiated in the IR (Soifer & Neugebauer 1991). Local galaxies can be classified in a luminosity sequence from spirals (e.g. the Milky Way – the brightest spirals in the IR have a bar), and mild starbursts (e.g. M82), to the “Luminous Infrared Galaxies” (say, with $`10^{11}L_{\odot }<L_{IR}<10^{12}L_{\odot }`$), and “Ultra–Luminous Infrared Galaxies” (say, with $`10^{12}L_{\odot }<L_{IR}`$) that radiate more than 95 % of their bolometric luminosity in the IR/submm. The IR/submm emission of these sources is due to dust that absorbs UV and optical light, and thermally reradiates with a broad spectral energy distribution ranging from a few $`\mu `$m to a few mm. Most of the heating is due to young stellar populations but, in the faintest objects, the average radiation field due to old stellar populations can be the main contributor, and, in the brightest objects (especially the ULIRGs), the question of the fraction of the heating that is due to a possible Active Galactic Nucleus is still difficult to assess. However, recent work based on iso observations shows that starbursting still dominates in 80 % of ULIRGs, whereas AGNs power only the brightest objects (Genzel et al. 1998, Lutz et al. 1998). Now, IR/submm observations are beginning to unveil what actually happened at higher redshift. The detection of the “Cosmic Infrared Background” (hereafter CIRB) at a level twice as high as the “Cosmic Optical Background” (hereafter COB) has shown that about 2/3 of the luminosity budget of galaxies is emitted in the IR/submm range (Puget et al. 1996). At the same time, the first deep surveys at submm wavelengths have discovered the sources that are responsible for the CIRB, with a number density much larger than the usual predictions based on our knowledge of the local universe (Smail et al. 1997). The optical follow–up of these sources is still in progress, but it appears that some (most?) of them should be the moderate– and high–redshift counterparts of the local LIRGs and ULIRGs discovered by the iras satellite and thoroughly studied by the iso satellite.
We shall hereafter focus on these dusty sources with thermal radiation, and we refer the reader to the paper by Toffolatti and coworkers (this volume) for the description of the foreground due to radiosources that emit free–free and synchrotron radiation at larger wavelengths. A consistent approach to the early evolution of galaxies is particularly important for any attempt at predicting their submm properties. Three basic problems have to be kept in mind, which explain why it is so difficult, starting from general ideas about galaxy evolution, to get a correct assessment of the number density of faint submm sources and of the level of submm fluctuations they generate. First, it is difficult to extrapolate the IR/submm properties of galaxies from our knowledge of their optical properties. It is well known that there is no correlation between the optical and IR fluxes – see e.g. Soifer et al. (1987) for an analysis of the statistical properties of the “Bright Galaxy Sample”. Interestingly, the galaxies with the highest luminosities also emit most of their bolometric luminosity in the IR. If young stars are the dominant source of heating, it turns out that the strongest starbursts mainly emit in the IR/submm. Fig. 1 shows a sequence of spectral energy distributions for local galaxies with various IR luminosities, very much in the spirit of fig. 2 of Sanders & Mirabel (1996). The sources have been modelled by Devriendt et al. (1999, see section 5) under the assumption that starbursts are the dominant source of heating. Now, it is known that local LIRGs and ULIRGs are interacting systems and mergers (e.g. Sanders & Mirabel 1996). It is consequently plausible that their number density should increase with redshift, when more fuel was available for star formation and more interactions could trigger it. As a matter of fact, the Hubble Deep Field (HDF, Williams et al. 1996) has unveiled a large number of irregular/peculiar objects undergoing gravitational interactions (Abraham et al. 1996). Such a large number of interacting systems is of course predicted by the paradigm of hierarchical clustering, but the quantitative modelling of the merging rates of galaxies, and of the influence of merging on star formation, is highly uncertain. Second, we might so far have kept the prejudice that high–redshift galaxies have little extinction, simply because their heavy–element abundances are low (typically 1/100 to 1/10 of solar at $`z>2`$). However, low abundances do not necessarily mean low extinction. For instance, if we assume that dust grains have a size distribution similar to the one of our Galaxy ($`n(a)da\propto a^{-3.5}da`$ with $`a_{min}\le a\le a_{max}`$), and are homogeneously distributed in a region with radius $`R`$, the optical depth varies as $`\tau \propto a_{min}^{-0.5}R`$, whereas the total dust mass varies as $`M_{dust}\propto a_{max}^{0.5}R^3`$. For a given dust mass and size distribution, there is more extinction where grains are small, and close to the heating sources. This is probably the reason why Thuan et al. (1999) observed significant dust emission in the extremely metal–poor galaxy SBS0335-052. In this context, modelling chemical evolution and transfer is not an easy task. Third, distant galaxies are readily observable at submm wavelengths. Fig. 2 shows model spectra of an ULIRG as it would be observed if placed at different redshifts.
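The redshift behavior displayed in Fig. 2 can be illustrated with a toy computation: the observed flux density of a single-temperature modified blackbody, $`L_\nu \propto \nu ^\beta B_\nu (T_d)`$, redshifted in an Einstein-de Sitter universe (the $`\mathrm{\Omega }_0=1`$, $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup> cosmology adopted in section 6). The dust temperature and emissivity index below are illustrative choices, not the model spectra of Fig. 2.

```python
import numpy as np

c, h, kB = 2.998e8, 6.626e-34, 1.381e-23
H0 = 50.0 * 1000.0 / 3.086e22          # 50 km/s/Mpc in s^-1

def d_lum(z):
    """Luminosity distance (m) in an Einstein-de Sitter universe."""
    return (2.0 * c / H0) * (1.0 + z) * (1.0 - 1.0 / np.sqrt(1.0 + z))

def flux_observed(lam_obs_um, z, T_dust=40.0, beta=1.5):
    """Observed flux density (arbitrary units) of L_nu ~ nu^beta B_nu(T_dust)."""
    nu_rest = (1.0 + z) * c / (lam_obs_um * 1.0e-6)
    planck_shape = nu_rest**3 / np.expm1(h * nu_rest / (kB * T_dust))
    return (1.0 + z) * nu_rest**beta * planck_shape / d_lum(z)**2

# At 850 um the observed flux stays within ~30% between z = 0.5 and z = 5:
for z in (0.5, 1.0, 2.0, 5.0):
    print(z, flux_observed(850.0, z) / flux_observed(850.0, 0.5))
```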
There is a wavelength range, between $`600`$ $`\mu `$m and $`4`$ mm, in which the distance effect is counterbalanced by the “negative k–correction” due to the rest–frame emission maximum at about $`100`$ $`\mu `$m. In this range, the apparent flux of galaxies depends weakly on redshift, to the point that, evolution aside, a galaxy might be easier to detect at $`z=5`$ than at $`z=0.5`$. The observer–frame submm fluxes, faint galaxy counts, and diffuse background of unresolved galaxies are consequently very sensitive to the early stages of galaxy evolution. Note that this particular wavelength range brackets the maximum of emission of the CMB. As a consequence, any uncertainty in the modelling of galaxy evolution at high $`z`$ will strongly reflect on the results of the faint counts of resolved sources, and on the fluctuations of the foreground of unresolved sources. In sections 2 and 3, we report respectively on the recent observation of the CIRB, and on the faint submm counts with the iso satellite and the SCUBA instrument on the James Clerk Maxwell Telescope (see the review by Mann and coworkers in this volume). In section 4, we briefly mention the efforts to correct the optical surveys for the effect of extinction, which give a lower limit on the number of submm sources from the number of sources detected at optical wavelengths. In section 5, we summarize various attempts developed so far to compute consistent optical/IR spectra and to model IR/submm counts. In section 6, we summarize recent developments of the semi–analytic modelling of galaxy formation and evolution where the computation of dust extinction and emission is explicitly implemented. Finally, in section 7, we sketch an overview of the sensitivities of forthcoming instruments that should greatly improve our knowledge of IR/submm sources, and we emphasize the capability of the planck High Frequency Instrument to get an all–sky survey of bright, dusty sources at submm wavelengths.

## 2. The Cosmic Infrared Background

The epoch of galaxy formation can be observed by its imprint on the background radiation that is produced by the accumulation of the light of extragalactic sources along the line of sight. The direct search for the COB currently gives only upper limits. However, estimates of lower limits can be obtained by summing up the contributions of faint galaxies. The flattening of the faint counts obtained in the HDF (Williams et al. 1996) suggests that these lower limits are close to convergence. In the submm range, Puget et al. (1996) discovered an isotropic component in the FIRAS residuals between 200 $`\mu `$m and 2 mm. This measurement was confirmed by subsequent work in the cleanest regions of the sky (Guiderdoni et al. 1997), and by an independent determination (Fixsen et al. 1998), giving a mean value of the background $`\nu I_\nu =1.3\times 10^{-5}(\lambda _{100})^{-0.64}\nu B_\nu (T_d=18.5\mathrm{K})`$, where $`\lambda _{100}`$ is the wavelength in units of 100 $`\mu `$m. The analysis of the DIRBE dark sky has also led to the detection of the isotropic background at 240 and 140 $`\mu `$m, and to upper limits at shorter wavelengths down to 2 $`\mu `$m (Schlegel et al. 1998, Hauser et al. 1998). Recently, a measurement at 3.5 $`\mu `$m was proposed by Dwek & Arendt (1998). The results of these analyses seem in good agreement, though the exact level of the background around 140 and 240 $`\mu `$m is still a matter of debate.
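As a simple check of the magnitudes involved, the FIRAS fit quoted above can be evaluated numerically; the following sketch writes out the Planck function in SI units and returns the background level in nW m<sup>-2</sup> sr<sup>-1</sup>.

```python
import numpy as np

h, kB, c = 6.626e-34, 1.381e-23, 2.998e8

def planck_bnu(nu, T):
    """Planck function B_nu in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (kB * T))

def cirb(lam_um):
    """nu I_nu = 1.3e-5 (lambda/100um)^-0.64 nu B_nu(18.5 K), in nW m^-2 sr^-1."""
    nu = c / (lam_um * 1.0e-6)
    return 1.3e-5 * (lam_um / 100.0)**(-0.64) * nu * planck_bnu(nu, 18.5) * 1.0e9

print(cirb(240.0))   # ~ 11 nW m^-2 sr^-1 at 240 um
```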
The controversy concerns the correction for the amount of Galactic dust in the ionized gas uncorrelated with the HI gas. A new assessment of the issue by Lagache et al. (1999) leads to values of the CIRB that are in good agreement with the fit of the FIRAS data by Fixsen et al. (1998), and to values at 140 and 240 $`\mu `$m that are lower than in Hauser et al. (1998). Figure 3 displays the various determinations. It appears very likely that this isotropic background is the long–sought CIRB (Puget et al. 1996, Dwek et al. 1998). As shown in fig. 3, its level is about 5–10 times the no–evolution prediction based on the local IR luminosity function determined by iras. There is about twice as much flux in the CIRB as in the COB. If the dust that emits at IR/submm wavelengths is mainly heated by young stellar populations, the sum of the fluxes of the CIRB and COB gives the level of the Cosmic Background associated with stellar nucleosynthesis (Partridge & Peebles 1967). The bolometric intensity (in W m<sup>-2</sup> sr<sup>-1</sup>) is:

$$I_{bol}=\int \frac{\epsilon _{bol}}{4\pi }\frac{dl}{(1+z)^4}=\frac{c\eta }{4\pi }\frac{\rho _Z(z=0)}{(1+z_{eff})}$$ (1)

where $`\epsilon _{bol}(t)=\eta (1+z)^3\dot{\rho }_Z(t)`$ is the physical emissivity due to young stars at cosmic time $`t`$, $`\eta `$ is the energy released per unit mass of heavy elements synthesized, and $`z_{eff}`$ is the effective redshift for stellar He and metal nucleosynthesis. An approximate census of the local density of heavy elements, $`\rho _Z(z=0)\simeq 1\times 10^7`$ $`M_{\odot }`$ Mpc<sup>-3</sup>, taking into account the metals in the hot gas of rich galaxy clusters (Mushotzky & Loewenstein 1997), gives an expected bolometric intensity of the background $`I_{bol}\simeq 50(1+z_{eff})^{-1}`$ nW m<sup>-2</sup> sr<sup>-1</sup>. This value is roughly consistent with the observations for $`z_{eff}\simeq 1`$–2. Of course, it is not yet clear whether star formation is responsible for the bulk of dust heating, or whether there is a significant contribution from AGNs. In order to address this issue, one has first to identify the sources that are responsible for the CIRB. The question of the origin of dust heating in heavily–extinguished objects is a difficult one, because both starburst and AGN rejuvenation can be fueled by gas inflows triggered by interaction, and IR/submm spectra can be very similar if extinction is large. However, according to Genzel et al. (1998), the starburst generally contributes 50–90 % of the heating in local ULIRGs. About 80 % of the ULIRGs in the larger local sample of Lutz et al. (1998) are dominated by the starburst, but the trend decreases with increasing luminosity, and the brightest objects are AGN–dominated. Now, it is very likely that the high–redshift counterparts of the local LIRGs and ULIRGs are responsible for the CIRB. However, the redshift evolution of the fraction and power of the AGNs harbored in these distant objects is still unknown.

## 3. Far–infrared galaxies at high redshift

Various submm surveys have been achieved or are in progress. The FIRBACK program is a deep survey of 4 deg<sup>2</sup> at 175 $`\mu `$m with the ISOPHOT instrument aboard iso. The analysis of about 1/4 of the Southern fields (that is, of 0.25 deg<sup>2</sup>, see fig. 4) unveils 24 sources (with a $`5\sigma `$ flux limit $`S_\nu >100`$ mJy), corresponding to a surface density five times larger than the no–evolution predictions based on the local IR luminosity function (Puget et al. 1999).
It is likely that we are actually seeing the maximum emission bump at 50–100 $`\mu `$m redshifted to cosmological distances, rather than a local population of sources with a very cold dust component, which seems to be absent from the shallow ISOPHOT survey at 175 $`\mu `$m (Stickel et al. 1998). The total catalogue of the 4 deg<sup>2</sup> will include about 275 sources (Dole et al. 1999). The radio and optical follow–up for identification is still in progress. This strong evolution is confirmed by the other 175 $`\mu `$m deep survey by Kawara et al. (1998). The ISOCAM deep surveys at 15 $`\mu `$m also point to a significant evolution of the sources (Oliver et al. 1997, Aussel et al. 1998, Elbaz et al. 1998, 1999). Most of the sources identified so far by the optical follow–up have typical redshifts $`z\simeq 0.7`$ and optical colours similar to those of field galaxies, with morphologies that frequently show signs of interaction. The surveys seem to show a population of bright peculiar galaxies, starbursts, LIRGs, and AGNs. The observer–frame 15 $`\mu `$m waveband corresponds, at the depth of the survey, to rest–frame wavelengths that probe the properties of PAHs and very small grains. The extent to which the 15 $`\mu `$m flux is related to the bulk of IR/submm emission produced by star formation is under study. Various deep surveys at 850 $`\mu `$m have been achieved with the SCUBA instrument at the JCMT (Smail et al. 1997, Hughes et al. 1998, Barger et al. 1998, Eales et al. 1998). They also unveil a surface density of sources (with $`S_\nu >2`$ mJy) much larger than the no–evolution predictions (by two or three orders of magnitude!). The total number of sources discovered in SCUBA deep surveys now reaches about 35 (see e.g. Blain et al. 1998) and should rapidly increase. The tentative optical identifications seem to show that these objects look like distant LIRGs and ULIRGs (Smail et al. 1998, Lilly et al. 1999). In the HDF, 4 of the 5 brightest sources seem to lie between redshifts 2 and 4 (Hughes et al. 1998), but the optical identifications are still a matter of debate (Richards 1998). The source SMM 02399-0136 at $`z=2.803`$, which is gravitationally amplified by the foreground cluster A370, is clearly an AGN/starburst galaxy (Ivison et al. 1998, Frayer et al. 1998).

## 4. The optical view and the issue of extinction

Recent observational breakthroughs have made possible the measurement of the Star Formation Rate (SFR) history of the universe from the rest–frame UV fluxes of moderate– and high–redshift galaxies (Lilly et al. 1996, Madau et al. 1996, 1998, Steidel & Hamilton 1993, Steidel et al. 1996, 1999). Since the early versions of the reconstruction of the cosmic SFR density, much work has been done to address dust issues. However, a complete assessment of the effect of extinction on the UV fluxes emitted by young stellar populations, and of the luminosity budget of star–forming galaxies, is still to come. Dust seems to be present even at large redshifts, since the optical spectrum of a gravitationally–lensed galaxy at $`z=4.92`$ (Franx et al. 1997) already shows a reddening amounting to $`0.1<E(B-V)<0.3`$ (Soifer et al. 1998). The cosmic SFR density determined only from the UV fluxes of the Canada–France Redshift Survey has recently been revisited with optical, IR, and radio observations. The result is an upward correction of the previous values by an average factor of 2.9 (Flores et al. 1999).
At higher redshift, various authors have attempted to estimate the extinction correction and to recover the fraction of UV starlight absorbed by dust (e.g. Meurer et al. 1997, Pettini et al. 1998). It turns out that the observed slope $`\alpha `$ of the UV spectral energy distribution $`F_\lambda (\lambda )\propto \lambda ^\alpha `$ (say, around 2200 Å) is flatter than the standard value $`\alpha _0\simeq -2.5`$ computed from models of spectrophotometric evolution. The derived extinction corrections are large and differ according to the method. For instance, Pettini et al. (1998) and coworkers fit a typical extinction curve (the Small Magellanic Cloud one) to the observed colors, whereas Meurer et al. (1997) and coworkers use an empirical relation between $`\alpha `$ and the FIR to 2200 Å luminosity ratio in local starbursts. The former authors derive $`\langle E(B-V)\rangle \simeq 0.09`$, resulting in a factor 2.7 absorption at 1600 Å, whereas the latter derive $`\langle E(B-V)\rangle \simeq 0.30`$, resulting in a factor 10 absorption. This discrepancy suggests a sort of bimodal distribution of the young stellar populations: the first method would take into account the stars detected in the UV with relatively moderate reddening/extinction, while the second one would phenomenologically add the contributions of these “apparent” stars and of heavily–extinguished stars. Fig. 5 shows the cosmic SFR comoving density in the early version (no extinction), and after the work by Flores et al. (1999) at $`z<1`$ and the extinction correction derived by Pettini et al. (1998) at higher redshift. The broad maximum observed at $`z\simeq 1.5`$ to 3 (see fig. 5) seems to be correlated with the decrease of the cold–gas comoving density in damped Lyman–$`\alpha `$ systems between $`z=2`$ and $`z=0`$ (Lanzetta et al. 1995, Storrie–Lombardi et al. 1996). These results fit nicely into a view where star formation in bursts triggered by interaction/merging consumes and enriches the gas content of galaxies as time goes on. It is common wisdom that such a qualitative scenario is expected within the paradigm of hierarchical growth of structures. The implementation of hierarchical galaxy formation in semi–analytic models confirms this expectation (e.g. Baugh et al. 1998, and references therein). The question is whether the observations of the optically dark side of galaxies could modify this view significantly.

## 5. Modelling dust spectra and IR/submm counts

Various models have been proposed to account for the IR/submm emission of galaxies and to predict forthcoming observations. The level of sophistication (and complexity) increases from pure luminosity and/or density evolution extrapolated from the iras local luminosity function with $`(1+z)^n`$ laws, and modified black–body spectra, to physically–motivated spectral evolution. The evolution of the IR/submm luminosities can be computed from the usual modelling of spectrophotometric evolution, by implementing the relevant physical processes (stellar evolutionary tracks and stellar spectra, chemical evolution, dust formation, dust heating and transfer, dust thermal emission). Dwek (1998) tried to explicitly model the processes of dust formation and destruction (see references therein for a review of this complicated issue). Most models prefer to assume simple relations between the dust content and the heavy–element abundance of the gas. The simplest assumption is a dust–to–gas ratio that is proportional to the heavy–element abundances. Guiderdoni et al.
(1996, 1997, 1998) proposed a consistent modelling of IR/submm spectra that was designed to be subsequently implemented in semi–analytic models of galaxy formation and evolution (see section 6). The values of the free parameters that appear in this modelling (gas mass and metallicity, radius of the gaseous disk) are readily computable in semi–analytic models for the overall population of galaxies. The IR/submm spectra of galaxies are computed according to Guiderdoni & Rocca–Volmerange (1987), as follows:

1. follow the chemical evolution of the gas;
2. implement extinction curves which depend on metallicity according to observations in the Milky Way, the LMC and the SMC;
3. compute $`\tau _\lambda \propto Z_{gas}^sN_{gas}(A_\lambda /A_V)`$, where $`Z_{gas}`$ and $`N_{gas}`$ are the gas metallicity and column density, $`s=1.6`$ for $`\lambda >`$ 2000 Å (and 1.35 below), and $`A_\lambda /A_V`$ is the Milky Way extinction curve;
4. assume the so–called “slab” or oblate spheroid geometries where the star and dust components are homogeneously mixed with equal height scales. The choice of these simple geometries for transfer is motivated by studies of nearby samples (Andreani & Franceschini 1996);
5. compute a spectral energy distribution by assuming a mix of various dust components (PAH, very small grains, big grains) according to Désert et al. (1990). The contributions are fixed in order to reproduce the observational correlation of iras colours with total IR luminosity (Soifer & Neugebauer 1991).

Fig. 6 gives the luminosity sequence from Guiderdoni et al. (1998). Franceschini et al. (1991, 1994, and other papers of this series) follow the same scheme for items 1, 2, 3, and 4, and use slightly different IR/submm templates, whereas Fall et al. (1996) basically use constant dust–to–metal ratios and black–body spectra. Recently, Silva et al. (1998) proposed a more sophisticated treatment in which transfer is computed in molecular clouds. The method of Guiderdoni et al. (1998) has been extended to obtain far–UV to radio spectra, and to study local templates, by Devriendt et al. (1999). Fig. 7 shows the predicted optical/IR/submm spectrum of an ULIRG, and figs. 8 and 9 display examples of fits to observed objects (a spiral galaxy and an ULIRG) from this latter paper. These spectra can subsequently be used to model IR/submm counts. The simplest idea is to implement luminosity and/or number evolution parameterized as power laws of $`(1+z)`$ (e.g. Blain and Longair 1993a, Pearson & Rowan–Robinson 1996; see also references in Lonsdale 1996). These power laws are generally derived from fits of the slope of the iras faint counts (which do not probe deeper than $`z\simeq 0.2`$). They are then extrapolated up to redshifts of a few units. Unfortunately, various analyses of the iras deep counts yield discrepant results at $`S_{60}<300`$ mJy, and the amount of evolution is a matter of debate (see e.g. Bertin et al. 1997 for a new analysis and discussion). This uncertainty increases in the extrapolation to higher $`z`$.

## 6. Semi–analytic modelling

These classes of models assume that all galaxies form at the same redshift $`z_{for}`$. But the paradigm of the hierarchical growth of structures implies that there is no clear–cut redshift $`z_{for}`$.
In this paradigm, the modelling of the dissipative and non–dissipative processes ruling galaxy formation (halo collapse, cooling, star formation, stellar evolution and stellar feedback to the interstellar medium) has been achieved at various levels of complexity, in the so–called semi–analytic approach, which has been successfully applied to the prediction of the statistical properties of galaxies (White & Frenk 1991; Lacey & Silk 1991; Kauffmann et al. 1993, 1994; Cole et al. 1994; Somerville & Primack 1999; see other papers of these series). In spite of differences in the details, the conclusions of these models in the UV, visible and (stellar) NIR are remarkably similar. A first attempt to compute the IR evolution of galaxies with the Press–Schechter formalism was proposed by Blain & Longair (1993a,b), but with a crude treatment of the dissipative processes. In Guiderdoni et al. (1996, 1997, 1998), we extend this approach by implementing spectral energy distributions in the IR/submm range. As a reference, we take the standard CDM case with $`H_0`$=50 km s<sup>-1</sup> Mpc<sup>-1</sup>, $`\mathrm{\Omega }_0=1`$, $`\mathrm{\Lambda }=0`$ and $`\sigma _8=0.67`$. We assume a Star Formation Rate $`SFR(t)=M_{gas}/t_{*}`$, with $`t_{*}=\beta t_{dyn}`$. The efficiency parameter $`1/\beta =0.01`$ gives a nice fit of local spirals. The robust result of this type of modelling is a cosmic SFR history that is too flat with respect to the data. As a phenomenological way of reproducing the steep rise of the cosmic SFR history from $`z=0`$ to $`z=1`$, we introduce a “burst” mode of star formation involving a mass fraction that increases with $`z`$ as $`(1+z)^4`$ (similar to the pair rate), with a ten times higher efficiency, $`1/\beta =0.1`$. The predicted cosmic SFR density is given in fig. 5. The resulting model, called “model A”, does not predict enough flux at IR/submm wavelengths to reproduce the level of the CIRB. This is sort of a minimal model that fits the COB and extrapolates the IR/submm fluxes from the optical (see section 4 on extinction). In order to increase the IR/submm flux, we have to assume that a small fraction of the gas mass (typically less than 10 %) is involved in star formation with a top–heavy IMF in heavily–extinguished objects (ULIRG–type galaxies). A sequence of models is then derived. The amount of ULIRGs can be normalized, e.g., on the local iras luminosity function (to reproduce the iras bright counts) and on the level of the CIRB. The so–called “model E”, normalized to the flux level determined in Guiderdoni et al. (1997) (see fig. 3), is hereafter used to predict faint counts and redshift distributions. An extension to NIR and visible counts is given in Devriendt & Guiderdoni (1999). Figs. 10 and 11 give the predicted number counts and redshift distributions at 15, 60, 175, and 850 $`\mu `$m for this model, which were produced in Guiderdoni et al. (1998) before the publication of the ISOPHOT and SCUBA faint counts (their fig. 17). The agreement of the predicted number counts with the data seems good enough to suggest that these counts do probe the evolving population contributing to the CIRB. The model shows that 15 % and 60 % of the CIRB at 175 $`\mu `$m and 850 $`\mu `$m, respectively, are built up by objects brighter than the current limits of the ISOPHOT and SCUBA deep fields. The predicted median redshift of the iso–HDF is $`z\simeq 0.8`$.
It increases to $`z\simeq 1.2`$ for the deep ISOPHOT surveys, and to $`z\simeq 2`$ for SCUBA, though the latter value seems to be very sensitive to the details of the evolution. The model by Toffolatti et al. (1998) is also shown in fig. 10. It gives more bright sources and fewer faint sources at submm wavelengths.

## 7. Future instruments

Fig. 12 gives the far–UV to submm spectral energy distribution that is typical of an $`L_{IR}=10^{12}`$ $`L_{\odot }`$ ULIRG at various redshifts. This model spectrum is taken from the computation of Devriendt et al. (1999). The instrumental sensitivities of various past and on–going satellite and ground–based instruments are plotted on this diagram: the iras Very Faint Source Survey at 60 $`\mu `$m, iso with ISOCAM at 15 $`\mu `$m and ISOPHOT at 175 $`\mu `$m, the IRAM interferometer at 1.3 mm, SCUBA at 450 and 850 $`\mu `$m, and various surveys with the VLA. Forthcoming missions and facilities include wire, sirtf, SOFIA, the planck High Frequency Instrument, the first Spectral and Photometric Imaging REceiver, and the imaging modes of the SUBARU IRCS and VLT VIRMOS instruments. Finally, the capabilities of the ngst, the LSA/MMA, and the Infrared Space Interferometer (darwin) are also plotted. The final sensitivity of the next–generation instruments observing at IR and submm wavelengths (wire, sirtf, SOFIA, planck, first) is going to be confusion limited. However, the observation of a large sample of ULIRG–like objects in the redshift range 1–5 should be possible. More specifically, the all–sky shallow survey of planck HFI and the medium–deep survey of first SPIRE (to be launched by ESA in 2007) will respectively produce bright ($`S_\nu >`$ a few 100 mJy) and faint ($`S_\nu >`$ a few 10 mJy) counts that will be complementary. Table 1 summarizes the various sources of fluctuation in the six bands of planck HFI (Bersanelli et al. 1996, and planck HFI Consortium 1998). The confusion limit due to unresolved sources in a beam $`\mathrm{\Omega }`$ has been roughly estimated with the theoretical faint counts from model E, according to the formula $`\sigma _{conf}=(\int _0^{S_{lim}}S^2(dN/dS)dS\mathrm{\Omega })^{1/2}`$. The values $`\sigma _{conf}`$ and $`S_{lim}=q\sigma _{tot}`$ have been computed iteratively with $`q=5`$. The 1$`\sigma `$ total fluctuation is $`\sigma _{tot}=(\sigma _{ins}^2+\sigma _{conf}^2+\sigma _{cir}^2+\sigma _{CMB}^2)^{1/2}`$. However, this does not give a sound estimate of what will actually be possible with planck, once proper algorithms of filtering and source extraction are implemented. Source extraction can be studied on simulated maps. Tegmark & de Oliveira–Costa (1998) showed that (i) $`\sigma _{tot}`$ is about 40 to 100 mJy for the HFI frequencies after filtering, (ii) 40 000 and 5000 sources are expected (at the $`5\sigma `$ level in 8 sr) at 857 GHz and 545 GHz, respectively, and (iii) the CMB reconstruction is not jeopardized by the presence of point sources at the level predicted by the models. Similar results are obtained by Hobson et al. (1998b) with a maximum–entropy method on mock data (Hobson et al. 1998a). In both estimates, source counts and source confusion are based on the predicted counts of Toffolatti et al. (1998), which differ from ours: Toffolatti’s counts are higher at the bright end and fainter at the faint end; hence his source confusion is lower. Table 1 gives the number densities expected with flux limits of 100 mJy and 500 mJy, according to Guiderdoni et al. (1998).
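The iterative confusion-limit estimate described above can be sketched as follows. The fixed-point loop and the function names are ours, and the Euclidean counts in the usage example are a hypothetical stand-in for the model E counts.

```python
import numpy as np

def confusion_limit(dnds, omega_beam, sigma_other, q=5.0, s_start=1.0):
    """sigma_conf = (int_0^{S_lim} S^2 (dN/dS) dS * Omega)^(1/2),
    iterated with S_lim = q * sigma_tot until self-consistent.

    dnds        : callable, differential counts per sr and unit flux
    omega_beam  : beam solid angle (sr)
    sigma_other : quadrature sum of instrument, cirrus and CMB terms
    """
    s_lim = s_start
    for _ in range(100):
        s = np.linspace(0.0, s_lim, 4000)[1:]     # avoid S = 0
        ds = s[1] - s[0]
        sigma_conf = np.sqrt(np.sum(s**2 * dnds(s)) * ds * omega_beam)
        s_new = q * np.hypot(sigma_conf, sigma_other)
        if abs(s_new - s_lim) < 1e-6 * s_lim:
            break
        s_lim = s_new
    return sigma_conf, s_lim

# Toy example with hypothetical Euclidean counts dN/dS = A * S^-2.5:
sigma_conf, s_lim = confusion_limit(lambda s: 10.0 * s**-2.5,
                                    omega_beam=3.0e-8, sigma_other=0.05)
```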
The reader is invited to note the strong sensitivity of the counts to the exact flux limit. The expectations for source counts with planck (and first) are thus severely model–dependent. The on–going follow–up of the ISOPHOT and SCUBA sources will eventually give redshift distributions that will strongly constrain the models and help improve the accuracy of the predictions. As far as first is concerned, a 10 deg<sup>2</sup> survey with SPIRE will result in about $`10^4`$ sources (first SPIRE Consortium 1998). The $`250/350`$ and $`350/500`$ colors are well suited to picking out sources which are likely to be at high redshift. These sources can eventually be followed at 100 and 170 $`\mu `$m by the first Photoconductor Array Camera & Spectrometer and by the FTS mode of SPIRE, to get the spectral energy distribution at $`200\le \lambda \le 600`$ $`\mu `$m with a typical resolution $`R\equiv \lambda /\mathrm{\Delta }\lambda =20`$. After a photometric and spectroscopic follow–up, the submm observations will readily probe the bulk of the (rest–frame IR) luminosity associated with star formation. The reconstruction of the cosmic SFR comoving density will thus take into account the correct luminosity budget of high–redshift galaxies. However, the spatial resolution of the submm instruments will be limited, and only the LSA/MMA should be able to resolve the IR/submm sources and study the details of their structure.

## 8. Conclusions

1. High–redshift galaxies emit much more IR than predicted from the local IR luminosity function without evolution. The submm counts are starting to unveil the bright end of the population that is responsible for the CIRB. The issue of the relative contributions of starbursts and AGNs to dust heating is still unsolved. Local ULIRGs (except the brightest ones) seem to be dominated by starburst heating. However, the trend at higher redshift is unknown.

2. It is difficult to correct for the influence of dust on the basis of the optical spectra alone. Multi–wavelength studies are clearly necessary to address the history of the cosmic SFR density. Forthcoming instruments will help us greatly improve our knowledge of the optically dark side of galaxy formation. The next milestones are sirtf, SOFIA, the planck High Frequency Instrument, the first Spectral and Photometric Imaging REceiver, and the LSA/MMA.

3. Under the assumption that starburst heating is dominant, simple models in the paradigm of hierarchical clustering do reproduce the current IR/submm data. These models, normalized by means of the current and forthcoming counts, should help us predict the number of IR/submm sources that will be observed by the planck High Frequency Instrument, the contribution of the unresolved sources to the submm anisotropies, and the final strategy for foreground separation and interpretation. Though the source counts are strongly model–dependent, and only partly constrained by the current set of data, the studies so far seem to show that the quality of the reconstruction of the CMB anisotropies is not severely degraded by the presence of foreground point sources.

### Acknowledgments.

I am grateful to F.R. Bouchet, J.E.G. Devriendt, E. Hivon, G. Lagache, B. Maffei, and J.L. Puget, who collaborated on many aspects of this program, as well as to H. Dole and the FIRBACK consortium for illuminating discussions. My thanks also to G. De Zotti, A. Franceschini, and L. Toffolatti.

## References

Abraham, R.G., Tanvir, N.R., Santiago, B.X., Ellis, R.S., Glazebrook, K., van den Bergh, S., 1996, Mon. Not. Roy. Astron.
Soc., 279, L47 Andreani, P., Franceschini, A. 1996, Mon. Not. Roy. Astron. Soc., 283, 85 Aussel, H., Cesarsky, C.J., Elbaz, D., Starck, J.L., 1998, Astron. Astrophys., in press Barger, A.J., Cowie, L.L., Sanders, D.B., Fulton, E., Taniguchi, Y., Sato, Y., Kawara, K., Okuda, H., 1998, Nature, 394, 248 Baugh, C.M., Cole, S., Frenk, C.S., Lacey, C.G., 1998, Astrophys. J., 498, 504 Bersanelli, M., Bouchet, F.R., Efstathiou, G., Griffin, M., Lamarre, J.M., Mandolesi, R., Norgaard–Nielsen, H.U., Pace, O., Polny, J., Puget, J.L., Tauber, J., Vittorio, N., Volonte, S., 1996, COBRAS/SAMBA Phase A report Bertin, E., Dennefeld, M., Moshir, M., 1997, Astron. Astrophys., 323, 685 Blain, A.W., Longair, M.S., 1993a, Mon. Not. Roy. Astron. Soc., 264, 509 Blain, A.W., Longair, M.S., 1993b, Mon. Not. Roy. Astron. Soc., 265, L21 Blain, A.W., Kneib, J.P., Ivison, R.J., Smail, I., 1998, Astrophys. J., in press Bouchet, F.R., Gispert, R., Puget, J.L., 1996, in Unveiling the Cosmic Infrared Background, E. Dwek (ed), AIP Conference Proceedings 348, p.255 Bouchet, F.R., Gispert, R., 1999, in preparation Cole, S., Aragón–Salamanca, A., Frenk, C.S., Navarro, J.F., Zepf, S.E. 1994., Mon. Not. Roy. Astron. Soc., 271, 781 Connolly, A.J., Szalay, A.S., Dickinson, M., SubbaRao, M.U., Brunner, R.J., 1997, Astrophys. J., 486, L11 Désert, F.X., Boulanger, F., Puget, J.L. 1990, Astron. Astrophys., 237, 215 Devriendt, J.E.G., Guiderdoni, B., Sadat, R., 1999, Astron. Astrophys., submitted Devriendt, J.E.G., Guiderdoni, B., 1999, in preparation Dole, H., Lagache, G., Puget, J.L., Aussel, H., Bouchet, F.R., Clements, D.L., Cesarsky, C., Désert, F.X., Elbaz, D., Franceschini, A., Gispert, R., Guiderdoni, B., Harwit, M., Laureijs, R., Lemke, D., Moorwood, A.F.M., Oliver, S., Reach, W.T., Rowan–Robinson, R., Stickel, M., 1999, in The Universe as seen by ISO, P. Cox & M.F. Kessler (eds), 1998, UNESCO, Paris, ESA Special Publications series (SP-427) Dwek, E., 1998, Astrophys. J., 501, 643 Dwek, E., Arendt, R.G., 1998, Astrophys. J., 508, L9 Dwek, E., Arendt, R.G., Hauser, M.G., Fixsen, D., Kelsall, T., Leisawitz, D., Pei, Y.C., Wright, E.L., Mather, J.C., Moseley, S.H., Odegard, N., Shafer, R., Silverberg, R.F., Weiland, J.L., 1998, Astrophys. J, 508, 106 Eales, S., Lilly, S., Gear, W., Dunne, L., Bond, J.R., Hammer, F., Le Fèvre, O., Crampton, D., 1998, Astrophys. J., in press Elbaz, D., Aussel, H., Cesarsky, C.J., Desert, F.X., Fadda, D., Franceschini, A., Puget, J.L., Starck, J.L., 1998, in Proc. of the 34th Liege International Astrophysics Colloquium on the “Next Generation Space Telescope” Elbaz D., Aussel H., Cesarsky C.J., Desert F.X., Fadda D., Franceschini A., Harwit, M., Puget J.L., Starck J.L., 1999, in The Universe as seen by ISO, P. Cox & M.F. Kessler (eds), 1998, UNESCO, Paris, ESA Special Publications series (SP-427) Fall, S.M., Charlot, S., Pei, Y.C., 1996, Astrophys. J., 464, L43 first SPIRE Consortium, 1998, SPIRE, a bolometer instrument for first, a proposal submitted to the European Space Agency, in response to the Announcement of Opportunity Fixsen, D.J., Dwek, E., Mather, J.C., Bennett, C.L., Shafer, R.A., 1998, Astrophys. J., 508, 123 Flores, H., Hammer, F., Thuan, T.X., Cesarsky, C., Désert, F.X., Omont, A., Lilly, S.J., Eales, S., Crampton, D., Le Fèvre., O., 1999, Astrophys. J., in press Franceschini, A., Toffolatti, L., Mazzei, P., Danese, L., & De Zotti, G., 1991, Astrophys. J. Supp. Ser., 89, 285 Franceschini, A., Mazzei, P., De Zotti, G., Danese, L. 1994, Astrophys. 
J., 427, 140 Franx, M., Illingworth, G.D., Kelson, D.D., van Dokkum, P.G., Tran, K.V., 1997, Astrophys. J., 486, L75 Frayer, D.T., Ivison, R.J., Scoville, N.Z., Yun, M., Evans, A.S., Smail, I., Blain, A., Kneib, J.P., 1998, Astrophys. J., in press Gallego, J., Zamorano, J., Aragon–Salamanca, A., Rego, M., 1995, Astrophys. J., 455, L1 Gautier, T. N., III, Boulanger, F., Perault, M., Puget, J. L., 1992, Astron. J., 103, 1313 Genzel, R., Lutz, D., Sturm, E., Egami, E., Kunze, D., Moorwood, A.F.M., Rigopoulou, D., Spoon, H.W.W., Sternberg, A., Tacconi–Garman, L.E., Tacconi, L., Thatte, N., 1998, Astrophys. J., 498, 579 Gispert, R., Bouchet, F.R., 1997, in Clustering in the Universe, Proceedings of the $`30^{th}`$ Moriond meeting, S. Maurogordato et al. (eds), Editions Frontières, Guiderdoni, B., Rocca–Volmerange, B. 1987, Astron. Astrophys., 186, 1 Guiderdoni, B., Hivon, E., Bouchet, F.R., Maffei, B., & Gispert, R. 1996, in Unveiling the Cosmic Infrared Background, E. Dwek (ed), AIP Conference Proceedings 348 Guiderdoni, B., Bouchet, F.R., Puget, J.L., Lagache, G., Hivon, E., 1997, Nature, 390, 257 Guiderdoni, B., Hivon, E., Bouchet, F.R., Maffei, B., 1998, Mon. Not. Roy. Astron. Soc., 295, 877 Guiderdoni, B., Bouchet, F.R., Devriendt, J.E.G., Hivon, H., Puget, J.L., 1999, in The Birth of Galaxies, B. Guiderdoni, F.R. Bouchet, T.X. Thuan & J. Trân Thanh Vân (eds), Editions Frontières, in press Hauser, M.G., Arendt, R., Kelsall, T., Dwek, E., Odegard, N., Welland, J., Freundenreich, H., Reach, W., Silverberg, R., Modeley, S., Pei, Y., Lubin, P., Mather, J., Shafer, R., Smoot, G., Weiss, R., Wilkinson, D., Wright, E., 1998, Astrophys. J., 508, 25 Hobson, M.P., Jones, A. W., Lasenby, A.N., Bouchet, F.R., 1998a Mon. Not. Roy. Astron. Soc., 300, 1 Hobson, M.P., Barreiro, R.B., Toffolatti, L., Lasenby, A.N., Sanz, J.L., Jones, A.W., Bouchet, F.R., 1998b, Mon. Not. Roy. Astron. Soc., submitted Hughes, D., Serjeant, S., Dunlop, J., Rowan–Robinson, M., Blain, A., Mann, R.G., Ivison, R., Peacock, J., Efstathiou, A., Gear, W., Oliver, S., Lawrence, A., Longair, M., Goldschmidt, P., Jenness, T., 1998, Nature, 394, 241 Ivison, R.J., Smail, I., Le Borgne, J.F., Blain, A.W., Kneib, J.P., Bézecourt, J., Kerr, T.H., Davies, J.K., 1998, Mon. Not. Roy. Astron. Soc., 298, 583 Kauffmann, G.A.M., White, S.D.M., Guiderdoni, B., 1993, Mon. Not. Roy. Astron. Soc., 264, 201 Kauffmann, G.A.M., Guiderdoni, B., White, S.D.M., 1994, Mon. Not. Roy. Astron. Soc., 267, 981 Kawara, K., Sato, Y., Matsuhara, H., Taniguchi, Y., Okuda, H., Sofue, Y., Matsumoto, T., Wakamatsu, K., Karoji, H., Okamura, S., Chambers, K.C., Cowie, L.L., Joseph, R.D., Sanders, D.B., 1998, Astron. Astrophys., 336, L9 Lacey, C., Silk, J., 1991, Astrophys. J., 381, 14 Lagache, G., Abergel, A., Boulanger, F., Désert, F.X., Puget, J.L., 1999, Astron. Astrophys., submitted Lanzetta, K.M., Wolfe, A.M., Turnshek, D.A., 1995, Astrophys. J., 440, 435 Lilly, S.J., Le Fèvre, O., Hammer, F., Crampton, D., 1996, Astrophys. J., 460, L1 Lilly, S.J., Eales, S.A., Gear, W.K.P., Hammer, F., Le Fèvre, O., Crampton, D., Bond, J.R., Dunne, L., 1999, Astrophys. J., in press Lonsdale, C.J., Hacking, P.B., Conrow, T.P., Rowan–Robinson, M., 1990, Astrophys. J., 358, 60 Lonsdale, C.J. 1996, in Unveiling the Cosmic Infrared Background, E. Dwek (ed.), AIP Conference Proceedings 348 Lutz, D., Spoon, H.W.W., Rigopoulou, D., Moorwood, A.F.M., Genzel, R., 1998, Astrophys. J., 505, L103 Madau, P., Ferguson, H.C., Dickinson, M.E., Giavalisco, M., Steidel, C.C., Fruchter, A., 1996, Mon. 
Not. Roy. Astron. Soc., 283, 1388 Madau, P., Pozzetti, L., Dickinson, M.E., 1998, Astrophys. J., 498, 106 Meurer, G.R., Heckman, T.M., Lehnert, M.D., Leitherer, C., Lowenthal, J., 1997, Astron. J., 114, 54 Mushotzky, R.F., Loewenstein, M., 1997, Astrophys. J., 481, L63 Oliver, S.J., Goldschmidt, P., Franceschini, A., Serjeant, S.B.G., Efstathiou, A.N., Verma, A., Gruppioni, C., Eaton, N., Mann, R.G., Mobasher, B., Pearson, C.P., Rowan–Robinson, M., Sumner, T.J., Danese, L., Elbaz, D., Egami, E., Kontizas, M., Lawrence, A., McMahon, R., Norgaard–Nielsen, H.U., Perez–Fournon, I., Gonzalez–Serrano, J.I., 1997, Mon. Not. Roy. Astron. Soc., 289, 471 Partridge, B., Peebles, P.J.E., 1967, Astrophys. J., 148, 377 Pearson, C., Rowan–Robinson, M., 1996, Mon. Not. Roy. Astron. Soc., 283, 174 Pozzetti, L., Madau, P., Zamorani, G., Ferguson, H.C., Bruzual, G.A., 1998, Mon. Not. Roy. Astron. Soc., 298, 1133 Pettini, M., Steidel, C.C., Adelberger, K., Kellogg, M., Dickinson, M., Giavalisco, M., 1998, in ORIGINS, J.M. Shull, C.E. Woodward, and H. Thronson (eds), ASP Conference Series planck HFI Consortium, 1998, the High–Frequency Instrument for the planck Mission, a proposal submitted to the European Space Agency, in response to the Announcement of Opportunity Puget, J.L., Abergel, A., Bernard, J.P., Boulanger, F., Burton, W.B., Désert, F.X., Hartmann, D., 1996, Astron. Astrophys., 308, L5 Puget, J.L., Lagache, G., Clements, D.L., Reach, W.T., Aussel, H., Bouchet, F.R., Cesarsky, C., Désert, F.X., Dole, H., Elbaz, D., Franceschini, A., Guiderdoni, B., Moorwood, A.F.M., 1999, Astron. Astrophys., in press Richards, E.A., 1998, submitted Rush, B., Malkan, M.A., Spinoglio, L., 1993, Astrophys. J. Suppl. Ser., 89, 1 Sanders, D.B., Mirabel, I.F., 1996, Ann. Rev. Astron. Astrophys., 34, 749 Schlegel, D.J., Finkbeiner, D.P., Davis, M., 1998, Astrophys. J., 500, 525 Silva, L., Granato, G.L., Bressan, A., Danese, L., 1998, Astrophys. J., 509, 103 Smail, I., Ivison, R.J., Blain, A.W., 1997, Astrophys. J., 490, L5 Smail, I., Ivison, R.J., Blain, A.W., Kneib, J.P., 1998, Astrophys. J., 507, L21 Soifer, B.T., Sanders, D.B., Madore, B.F., Neugebauer, G., Danielson, G.E., Elias, J.H., Lonsdale, C.J., Rice, W.L., 1987, Astrophys. J., 320, 238 Soifer, B.T., Neugebauer, G., 1991, Astron. J., 101, 354 Soifer, B.T., Neugebauer, G., Franx, M., Matthews, K., Illingworth, G.D., 1998, Astrophys. J., 501, L171 Somerville, R., Primack, J., 1999, Mon. Not. Roy. Astron. Soc., in press Steidel, C.C., Hamilton, D., 1993, Astron. J., 105, 2017 Steidel, C.C., Giavalisco, M., Pettini, M., Dickinson, M., Adelberger, K.L., 1996, Astrophys. J., 462, L17 Steidel, C.C., Adelberger, K.L., Giavalisco, M., Dickinson, M., Pettini, M., 1999, Astrophys. J., in press Stickel, M., Bogun, S., Lemke, D., Klaas, U., Tóth, L.V., Herbstmeier, U., Richter, G., Assendorp, R., Laureijs, R., Kessler, M.F., Burgdorf, M., Beichman, C.A., Rowan–Robinson, M., Efstathiou, A., 1998, Astron. Astrophys., 336, 116 Storrie–Lombardi, L.J., McMahon, R.G., Irwin, M.J., 1996, Mon. Not. Roy. Astron. Soc., 283, L79 Tegmark, M., Efstathiou, G., 1996, Mon. Not. Roy. Astron. Soc., 281, 1297 Tegmark, M., 1998, Astrophys. J., 502, 1 Tegmark, M., de Oliveira–Costa, A., 1998, Astrophys. J., 500, L83 Thuan, T.X., Sauvage, M., Madden, S., 1999, Astrophys. J, in press Toffolatti, L., Argüeso Gòmez, F., De Zotti, G., Mazzei, P., Franceschini, A., Danese, L., Burigana, C., 1999, Month. Not. Roy. Astron. Soc., in press Vogel, S., Weymann, R., Rauch, M., Hamilton, T., 1995, Astrophys. 
J., 441, 162. Williams, R.E., Blacker, B., Dickinson, M., van Dyke Dixon, W., Ferguson, H.C., Fruchter, A.S., Giavalisco, M., Gilliland, R.L., Heyer, I., Katsanis, R., Levay, Z., Lucas, R.A., McElroy, D.B., Petro, L., Postman, M., 1996, Astron. J., 112, 1335 White, S.D.M., Frenk, C.S., 1991, Astrophys. J., 379, 52
# Observation of out-of-phase bilayer plasmons in YBa2Cu3O7-δ

## Abstract

The temperature dependence of the $`c`$-axis optical conductivity $`\sigma (\omega )`$ of optimally and overdoped YBa<sub>2</sub>Cu<sub>3</sub>O<sub>x</sub> ($`x`$=6.93 and 7) is reported in the far- (FIR) and mid-infrared (MIR) range. Below T<sub>c</sub> we observe a transfer of spectral weight from the FIR not only to the condensate at $`\omega `$=0, but also to a new peak in the MIR. This peak is naturally explained as a transverse out-of-phase bilayer plasmon by a model for $`\sigma (\omega )`$ which takes the layered crystal structure into account. With decreasing doping the plasmon shifts to lower frequencies and can be identified with the surprising and so far not understood FIR feature reported in underdoped bilayer cuprates.

After many years, the discussion about the charge dynamics perpendicular to the CuO<sub>2</sub> layers of the high-T<sub>c</sub> cuprates is still very controversial. The role attributed to interlayer hopping ranges from negligible to being the very origin of high-T<sub>c</sub> superconductivity. There is no agreement about the relevant excitations, nor about the dominant scattering mechanism. The $`c`$-axis resistivity $`\rho _c`$ is much larger than predicted by band structure calculations. The anisotropy $`\rho _c/\rho _{ab}`$ can be as large as $`10^5`$ and shows a strong temperature dependence, especially in the underdoped regime, which has been interpreted as an indication of non-Fermi liquid behavior and confinement. This strong temperature dependence is due to two different regimes with d$`\rho _c/\mathrm{dT}<0`$ for $`\mathrm{T}_c<\mathrm{T}<\mathrm{T}^{*}`$ and d$`\rho _c/\mathrm{dT}>0`$ for $`\mathrm{T}>\mathrm{T}^{*}`$, with a crossover temperature $`\mathrm{T}^{*}`$ that decreases with increasing doping. There is some agreement as to the phenomenology that $`\rho _c`$ is described by a series of resistors, i.e., that different contributions have to be added, and that the sign change in d$`\rho _c`$/dT is due to the different temperature dependences of the competing contributions. Overdoped YBa<sub>2</sub>Cu<sub>3</sub>O<sub>x</sub> (YBCO) is often regarded as a remarkable exception, as $`\rho _c/\rho _{ab}`$ is only about 50 and d$`\rho _c/\mathrm{dT}>0`$ for all $`\mathrm{T}>\mathrm{T}_c`$. It is an important issue whether a sign change in d$`\rho _c/\mathrm{dT}`$ at low T is really absent or only hidden by T<sub>c</sub> being larger than a possible $`\mathrm{T}^{*}`$, i.e., whether overdoped YBCO follows anisotropic three-dimensional (3D) or rather 2D behavior. The $`c`$-axis optical conductivity $`\sigma _1(\omega )`$ of YBCO shows several remarkable features: (1) Its very low value compared to band structure calculations, reflecting the large $`\rho _c`$. (2) A suppression of spectral weight at low frequencies already above T<sub>c</sub> in underdoped samples, referred to as the opening of a ’pseudogap’ (which agrees with the upturn in $`\rho _c`$). (3) The appearance of an intriguing broad ’bump’ in the FIR at low T in underdoped samples. (4) In overdoped YBCO, the spectral weight of the superconducting condensate is overestimated from $`\sigma _1(\omega )`$ as compared to microwave techniques. In this letter we suggest that most of the above-mentioned issues can be clarified by modelling the cuprates, in particular YBCO, as a stack of coupled CuO<sub>2</sub> layers with alternating weaker and stronger links. The multilayer model fits the measured data at all doping levels and at all temperatures.
A similar model was proposed for the superconducting state by van der Marel and Tsvetkov. A transverse optical plasmon was predicted. This model has been verified in SmLa<sub>0.8</sub>Sr<sub>0.2</sub>CuO<sub>4-δ</sub>. We report the observation of this mode in the infrared spectrum of optimally and overdoped YBCO and propose a common origin with the above-mentioned ’bump’ in underdoped YBCO. Single crystals of YBa<sub>2</sub>Cu<sub>3</sub>O<sub>x</sub> were grown using the recently developed BaZrO<sub>3</sub> crucibles, which in contrast to other container materials do not pollute the resulting crystals. Crystals grown using this technique therefore exhibit a superior purity ($`>`$ 99.995 at. %). The samples had typical dimensions of $`2\times 0.5`$$`0.7`$ mm<sup>2</sup> in the $`ac`$-plane. The O concentration was fixed by annealing according to the calibration of Lindemer. An O content of $`x`$=7 was obtained by annealing for 400 h at 300°C in 100 bar of high-purity oxygen. Annealing in flowing oxygen at 517°C for 260 h produced $`x`$=6.93. Measurements of the ac-susceptibility indicate T<sub>c</sub>=91 K for $`x`$=6.93 and 87 K for $`x`$=7. The widths of the transitions were 0.2 K and 1 K, respectively. Polarized reflection measurements were carried out on a Fourier transform spectrometer between 50 and 3000 cm<sup>-1</sup> for temperatures between 4 and 300 K. As a reference we used an in-situ evaporated Au film. Above 2000 cm<sup>-1</sup> the spectra are almost T independent. The optical conductivity $`\sigma (\omega )`$ was calculated via a Kramers-Kronig analysis. The measured $`c`$-axis reflectivity and the $`\sigma _1(\omega )`$ derived from it are plotted in Fig. 1 for 4 and 100 K (solid and dashed black lines). Disregarding the phonons, $`\sigma _1(\omega )`$ shows an almost constant value of about 200 $`\mathrm{\Omega }^{-1}`$cm<sup>-1</sup>. A Drude-like upturn is only observed at low frequencies in the overdoped case $`x`$=7. Below T<sub>c</sub> a sharp reflectivity edge develops at about 300 cm<sup>-1</sup> (inset of top panel), which had been identified as a Josephson plasmon, a collective mode in a stack of Josephson-coupled 2D superconducting layers. The gradual suppression of $`\sigma _1(\omega )`$ below about 700 cm<sup>-1</sup> can be attributed to the opening of the superconducting gap. The finiteness of $`\sigma _1(\omega )`$ at all frequencies reflects the $`d`$-wave symmetry of the gap. The increase of $`\sigma _1(\omega )`$ between 700 and 1300 – 1500 cm<sup>-1</sup> from 100 to 4 K comes as a surprise. The superconducting phase transition obeys case II coherence factors for electromagnetic absorption, i.e., only a suppression of $`\sigma _1(\omega )`$ is expected for frequencies not too close to 0. The difference of spectral weight above and below T<sub>c</sub>, defined as (for T$`<`$T<sub>c</sub>):

$$\omega _\mathrm{\Delta }^2(\mathrm{T},\omega )=8\int _{0^+}^\omega \left[\sigma _1(100\mathrm{K},\omega ^{\prime })-\sigma _1(\mathrm{T},\omega ^{\prime })\right]d\omega ^{\prime }$$ (1)

is expected to rise monotonically with increasing frequency to a constant value for frequencies much larger than the gap. It is common practice to determine the spectral weight of the superconducting condensate from this constant value. However, in YBCO<sub>6.99</sub> Homes et al. reported a discrepancy between $`\omega _\mathrm{\Delta }`$ determined from this optical sum rule ($`2050\pm 150`$ cm<sup>-1</sup>) and from the microwave surface reactance ($`1450\pm 50`$ cm<sup>-1</sup>).
To account for this difference, the existence of a very narrow Drude peak of normal carriers, with a width smaller than the lowest measured frequency, was postulated; this again contradicts the microwave measurements, which show a very small $`\sigma _1(\omega )`$. Our data clearly indicate a non-monotonic behavior of $`\omega _\mathrm{\Delta }(\omega )`$ (insets in Fig. 1, see also Ref. ) and a spectral weight transfer from low frequencies to a new peak above the phonons. This can naturally be explained by the following model for $`\sigma (\omega )`$, which takes into account the layered structure of the cuprates. We divide the unit cell of YBCO into the intra- and inter-bilayer subcells $`A`$ and $`B`$. Let us imagine that a time-dependent current is induced along the $`c`$-direction, the time derivative of which is $`(dJ_c/dt)`$. We define $`(dV_j/dt)`$ as the time derivative of the voltage between two neighboring CuO<sub>2</sub> layers, i.e., across subcell $`j`$. Our multilayer model corresponds to the approximation that the ratio $`(dV_j/dt)/(dJ_c/dt)`$ is provided by a local linear response function $`\rho _j`$, the complex impedance, which depends only on the voltage variations on the neighboring CuO<sub>2</sub> layers and not on the voltages on layers further away. Microscopically this corresponds to the condition that in the normal state the mean free path along $`c`$ must be shorter than the distance between the layers, $`l_j`$. In the superconducting state this should be supplemented with the same condition for the coherence length along $`c`$. In this sense, the multilayer model reflects the confinement of carriers in the 2D CuO<sub>2</sub> layers. Let us treat the current as the parameter controlled by applying an external field. Since the current between the layers is now uniform and independent of the subcell index $`j`$, the electric field averaged over the unit cell is a linear superposition of the voltages over all subcells within the unit cell. This effectively corresponds to putting the complex impedances $`\rho _j`$ of the subcells in series, $`\rho (\omega )=x_A\rho _A(\omega )+x_B\rho _B(\omega )`$, where the $`x_j=l_j/l_c`$ are the relative volume fractions of the two subcells, $`l_A+l_B=l_c`$, and $`\rho _j(\omega )`$ are the local impedance functions within subcells $`A`$ and $`B`$. This sum for $`\rho (\omega )=[\sigma (\omega )+\omega /4\pi \mathrm{i}]^{-1}`$ is very different from the case of a homogeneous medium, where the different contributions are additive in $`\sigma (\omega )=\mathrm{\Sigma }\sigma _j(\omega )`$, which corresponds to putting the various conducting channels of the medium in parallel. To illustrate this, let us adopt the Drude model for the complex interlayer impedance. In parallel conduction the sum of, e.g., two Drude peaks yields

$$\frac{4\pi \mathrm{i}/\omega }{\rho (\omega )}=1-\frac{\omega _{p,A}^2}{\omega ^2+\mathrm{i}\gamma _A\omega }-\frac{\omega _{p,B}^2}{\omega ^2+\mathrm{i}\gamma _B\omega }$$ (2)

where $`\omega _{p,j}`$ denotes the plasma frequency and $`\gamma _j`$ labels the damping. This results in a single plasma resonance at a frequency given by $`\omega _p^2=\omega _{p,A}^2+\omega _{p,B}^2`$, i.e., only one longitudinal mode (the zero) survives, which is shifted with respect to the zeros of the individual components. The transverse mode (the pole at $`\omega =0`$) remains unchanged. Putting two Drude oscillators in series in the multilayer model, i.e., using $`\mathrm{\Sigma }x_j\rho _j`$, has a surprising consequence.
$$\frac{\rho (\omega )}{4\pi \mathrm{i}/\omega }=\frac{x_A}{1-\frac{\omega _{p,A}^2}{\omega ^2+\mathrm{i}\gamma _A\omega }}+\frac{x_B}{1-\frac{\omega _{p,B}^2}{\omega ^2+\mathrm{i}\gamma _B\omega }}$$ (3)

Now both longitudinal modes (poles of $`\rho _j`$) are unaffected, and in between a new transverse mode arises. This transverse optical plasmon can be regarded as an out-of-phase oscillation of the two individual components. This mode has been predicted in Ref. for the case of a multilayer of Josephson-coupled 2D superconducting layers. The existence of two longitudinal modes was confirmed experimentally in SmLa<sub>0.8</sub>Sr<sub>0.2</sub>CuO<sub>4-δ</sub>. Note that superconductivity is not a necessary ingredient; the optical plasmon appears regardless of the damping of the individual components. In order to apply the model to the measured reflectivity data we have to include the phonons, for which a separation into subcells is not generally justified, e.g., for the $`c`$-axis bending mode of the planar O ions, located on the border between subcells $`A`$ and $`B`$. Therefore we adopt the following model impedance

$$\rho (\omega )=\sum _j\frac{x_j}{\sigma _j+\sigma _{ph}+\sigma _M+\omega /4\pi \mathrm{i}},\quad j\in \{A,B\}$$ (4)

where $`x_A=0.28`$ and $`x_B=1-x_A`$ for YBCO. Note that this model reduces to the conventional expression for a homogeneous medium, commonly used for high T<sub>c</sub> superconductors, if we either set $`x_A=0`$ or $`\sigma _A=\sigma _B`$. The $`\sigma _{A,B}(\omega )`$ contain the purely electronic contributions with eigenfrequency $`\omega _0=0`$ within each subcell,

$$4\pi \sigma _j(\omega )=\frac{\mathrm{i}\omega _{s,j}^2}{\omega }+\frac{\mathrm{i}\omega _{n,j}^2}{\omega +\mathrm{i}\gamma _j},\quad j\in \{A,B\}$$ (5)

where $`\omega _{s,j}`$ and $`\omega _{n,j}`$ label the plasma frequencies of superconducting and normal carriers, respectively. All other contributions (phonons, MIR oscillators, etc.) are assumed to be identical in the two subcells and are included in a sum of Lorentz oscillators,

$$\frac{4\pi \mathrm{i}}{\omega }[\sigma _{ph}+\sigma _M]=\sum _j\frac{\omega _{p,j}^2}{\omega _{0,j}^2-\omega ^2-\mathrm{i}\gamma _j\omega }$$ (6)

where $`\omega _{0,j}`$ denotes the $`j`$-th peak frequency. The agreement between the measured reflectivity data and fits using this model is very good at all temperatures (thick gray lines in Fig. 1). The strong MIR peak of the optical plasmon, caused by the out-of-phase oscillation of the superconducting carriers in the two subcells, is very well reproduced. Note that in a conventional Lorentz model the optical plasmon would have to be fit with three parameters, $`\omega _0`$, $`\omega _p`$ and $`\gamma `$. Our model likewise introduces three new parameters, namely the two sets of $`\omega _s`$, $`\omega _n`$ and $`\gamma `$ of Eq. 5 for the two subcells, as compared to the single set used within a conventional two-fluid fit. In the case of $`x`$=6.93 at 4 K we have $`\omega _{n,A}=\omega _{n,B}=0`$, leaving only one new parameter, $`\omega _s`$. In Fig. 2 we plot the real part of the dynamical resistivity $`\rho (\omega )`$. The thick gray line was obtained from the full fit parameters and agrees with the Kramers-Kronig result. The solid line depicts the electronic contribution $`\rho _e(\omega )`$, which was obtained by leaving out the phonon part $`\sigma _{ph}(\omega )`$ from the fit parameters in Eq. 4.
In the multilayer model $`\rho _e(\omega )`$ is the sum of the subcell contributions $`x_j\rho _{ej}=x_j/(\sigma _j+\sigma _M+\omega /4\pi \mathrm{i})`$ ($`j\in \{A,B\}`$, dashed lines), which shows that the two peaks in $`\rho _e(\omega )`$ can be attributed to the plasmon peaks in the two subcells. Contrary to the conventional model, the different contributions are not strictly additive in $`\sigma _1(\omega )`$, due to the inverse summation in Eq. 4. Nevertheless we can calculate an estimate of the electronic contribution $`\sigma _e(\omega )`$ from the fit parameters in the same way as done for $`\rho _e`$. An estimate of only the normal electronic contribution $`\sigma _{en}(\omega )`$ is obtained by leaving out the London terms $`\omega _{s,j}^2`$ together with $`\sigma _{ph}`$. The contribution arising from the presence of superconducting carriers is then defined as $`\sigma _{es}(\omega )=\sigma _e(\omega )-\sigma _{en}(\omega )`$ (see Fig. 1). With decreasing doping level the absolute value of $`\sigma _1(\omega )`$ decreases, and therefore the optical plasmon peak becomes sharper. At the same time, all plasma frequencies, and hence also the optical plasma mode, shift to lower frequencies. This scenario explains the strong FIR 'bump' reported in underdoped YBCO. Similar bumps have been observed in other bilayer cuprates, but never in a single-layer material. This bump has hindered an unambiguous separation of electronic and phononic contributions to $`\sigma _1(\omega )`$. In Fig. 3 we show reflectivity spectra of underdoped samples of YBCO taken from Refs. together with fits using the multilayer model. Again good agreement with the model is obtained. The strong phonon asymmetries present in the underdoped samples called for a fine-tuning of the model: the two apical O stretching phonon modes at about 600 cm<sup>-1</sup> were described by local oscillators in the inter-bilayer subcell $`B`$, i.e., they moved in Eq. 4 from $`\sigma _{ph}(\omega )`$ to $`\sigma _B(\omega )`$. The figure demonstrates that this reproduces the asymmetry of the experimental phonon line shape well, although a Lorentz oscillator was used. Similar fine-tuning has only a minor effect on the quality of the fit for the data presented in Fig. 1. Comparing the various doping levels shows that both the bending (350 cm<sup>-1</sup>) and the stretching (600 cm<sup>-1</sup>) phonon modes show strong asymmetries whenever they overlap with the transverse plasma mode, but that both modes are symmetric if the transverse plasmon is far enough away, as is the case, e.g., for $`x`$=7. Previously it was argued that the phonon spectral weight is only conserved for different T if the bump is interpreted as a phonon. However, a sum rule exists only for the total $`\sigma _1(\omega )`$, not for the phonon part separately. Moreover, in this scenario the width of the bump, its temperature and doping dependence, and the phonon asymmetries remained unexplained. Both the low-frequency Josephson plasmon and the bump are suppressed simultaneously by Zn substitution, which supports our assignment that both peaks are plasma modes. An increase of spectral weight of the bump with decreasing T was reported to start far above T<sub>c</sub>, but a distinct peak is only observed below T<sub>c</sub>. We obtained good fits for all T (not shown). As mentioned above, superconductivity is not a necessary ingredient of the multilayer model; an out-of-phase motion of normal carriers will give rise to a peak at finite frequencies, too.
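The qualitative difference between parallel (Eq. 2) and series (Eq. 3) combination of two Drude components is easy to reproduce numerically. The following minimal sketch, with purely illustrative plasma frequencies, dampings and volume fraction (not our fit values), locates the extra finite-frequency peak in $`\sigma _1(\omega )`$ that appears only in the series (multilayer) case:

```python
import numpy as np

# Illustrative (hypothetical) parameters, in cm^-1; x_A as in Eq. (4).
w = np.linspace(1.0, 2000.0, 4000)           # frequency grid
wpA, gA = 1200.0, 100.0                      # subcell A: plasma freq., damping
wpB, gB = 400.0, 100.0                       # subcell B: plasma freq., damping
xA, xB = 0.28, 0.72

epsA = 1.0 - wpA**2 / (w**2 + 1j * gA * w)   # Drude dielectric functions
epsB = 1.0 - wpB**2 / (w**2 + 1j * gB * w)

eps_parallel = epsA + epsB - 1.0             # Eq. (2): conductivities add
eps_series = 1.0 / (xA / epsA + xB / epsB)   # Eq. (3): impedances add

# sigma_1 is proportional to w * Im(eps); units are arbitrary here.
s1_parallel = w * eps_parallel.imag
s1_series = w * eps_series.imag

print("parallel: sigma_1 peaks at", w[np.argmax(s1_parallel)], "cm^-1")
print("series:   sigma_1 peaks at", w[np.argmax(s1_series)], "cm^-1")
print("expected transverse plasmon near",
      np.sqrt(xA * wpB**2 + xB * wpA**2), "cm^-1")
```

In the undamped limit the new peak sits at $`\omega _T^2=x_A\omega _{p,B}^2+x_B\omega _{p,A}^2`$, between the two longitudinal frequencies; damping broadens it but does not remove it, in line with the remark above.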
Upon cooling below T<sub>c</sub>, the reduction of the underlying electronic conductivity due to the opening of a gap and the reduced damping produce a distinct peak. Our results imply that even the $`c`$-axis transport between the two layers of a bilayer is incoherent, which agrees with the absence of a bilayer bonding-antibonding (BA) transition in our spectra. Using photoelectron spectroscopy, a BA splitting of about 3000 cm<sup>-1</sup> was reported. The anomalously broad photoemission line shape may explain its absence in the optical data.

In conclusion, we observed the out-of-phase bilayer plasmon predicted by the multilayer model. The good agreement of the optical data with the multilayer model at all temperatures and doping levels shows that YBCO can be modelled by local electrodynamics along the $`c`$-axis in both the normal and the superconducting state. This applies even to overdoped YBCO, one of the least anisotropic cuprates. Our results strongly point towards a non-Fermi-liquid picture and confinement of carriers to single CuO<sub>2</sub> layers.

We gratefully acknowledge C. Bernhard and S. Tajima for helpful discussions. The project is supported by the Netherlands Foundation for Fundamental Research on Matter (FOM) with financial aid from the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO).
# Computing Challenges of the Cosmic Microwave Background

## 1. The CMB

### 1.1. Historical Overview

The detection of the cosmic microwave background (CMB) in 1965 stands as one of the most important scientific discoveries of the century, the strongest evidence we have of the Hot Big Bang model. We know from the COBE satellite that it is an almost perfect blackbody with temperature $`2.728\pm 0.004`$ K, with expected tiny spectral distortions only very recently discovered. Once the CMB was discovered, the search was on for the inevitable angular fluctuations in the temperature, which theorists knew would encode invaluable information about the state of the universe at the epoch when big bang photons decoupled from the matter. This occurred as the universe cooled sufficiently for the ionized plasma to combine into hydrogen and helium atoms. This epoch was a few hundred thousand years after the Big Bang, at a redshift $`z\sim 1000`$, when the universe was a factor of a thousand smaller than it is today. Theorists led the experimenters on a merry chase, originally predicting the fractional temperature fluctuation level would be $`10^{-2}`$, then in the seventies $`10^{-3}`$, then $`10^{-5}`$, where it has been since the early eighties, when the effects of the dark matter which dominates the mass of the universe were folded into the predictions. Fortunately the experimenters were persistent, and upper limits on the anisotropy dropped throughout the eighties, leaving in their wake many failed ideas about how structure may have formed in the universe. A major puzzle of the hot big bang model was how regions that would not have been in causal contact at redshift $`z\sim 1000`$ could have the same temperature to such high precision. This led to the theory of inflation, accelerated expansion driven by the energy density of a scalar field, dubbed the inflaton, in which all of the universe we can see was in contact a mere $`\lesssim 10^{-33}`$ seconds after the big bang. It explained the remarkable isotropy of the CMB and had a natural byproduct: quantum oscillations in the scalar field could have generated the density fluctuations that grew via gravitational instability to create the large scale structure we see in the universe around us. This theory, plus the hypothesis that the dark matter was made up of elementary particle remnants of the big bang, led to firm predictions of the anisotropy amplitude. In the eighties, competing theories arose, one of which still survives: that topologically stable configurations (defects, such as cosmic strings) of exotic particle fields arising in phase transitions could have formed in the early universe and acted as seeds for the density fluctuations in ordinary matter. Immediately following the headline-generating detection of anisotropies by COBE [bennett] in 1992 at the predicted $`10^{-5}`$ level, many ground and balloon experiments began seeing anisotropies over a broad range of angular scales. The emerging picture from this data has sharpened our theoretical focus to a small group of surviving theories, such as the inflation idea. The figures in this article tell the story of where we go from here. Fig. 1 shows a realization of how the temperature fluctuations would look on the sky in an inflation-based model, at the $`7^{\circ }`$ resolution of the COBE satellite and what would be revealed at essentially full resolution. One sees not only the long wavelength ups and downs that COBE saw, but also the tremendous structure at smaller scales in the map.
One measure of this is the power spectrum of the temperature fluctuations, denoted by $`C_{\ell }`$, a function of angular wavenumber $`\ell `$, or, more precisely, the multipole number in a spherical harmonic expansion. Fig. 2 shows typical predictions of this for the inflation and defect theories, and contrasts it with the best estimate from all of the current data. The ups and downs in $`\ell `$-space are associated with sound waves at the epoch of photon decoupling. The damping evident at high $`\ell `$ is a natural consequence of the viscosity of the gas as the CMB photons are released from it. The flat part at low $`\ell `$ is associated with ripples in the past light cone arising from gravitational potential fluctuations that accompany mass concentrations. All of these effects are sensitive to cosmological parameters, e.g., the densities of baryons and dark matter, the value of the cosmological constant, the average curvature of the universe, and parameters characterizing the inflation-generated fluctuations. If the spectrum can be measured accurately enough experimentally, such cosmological parameters can also be determined with high accuracy. For a review of CMB science see [btw].

Once it became clear that there was something to measure, the race was on to design high-precision experiments that would cover large areas of the sky at the fine resolution needed to reveal all this structure and the wealth of information it encodes. These include ground-based interferometers and long duration balloon (LDB) experiments (flying for 10 days vs. 10 hours for conventional balloon flights), as well as the use of large arrays of detectors. NASA will launch the Microwave Anisotropy Probe (MAP) [MAP] satellite in 2000 and ESA will launch the Planck Surveyor [Planck] around 2006. They will each spend a year or two mapping the full sky. Fig. 3 gives an idea of how well we think that the LDB and satellite experiments can do in determining $`C_{\ell }`$ if everything goes right. Theorists have also estimated how well the cosmological parameters that define the functional dependence of $`C_{\ell }`$ in inflation models can in principle be determined with these experiments. In one exercise that allowed a mix of nine cosmological parameters to characterize the space of inflation-based theories, COBE was shown to determine one combination of them to better than 10% accuracy, LDBs and MAP could determine six, and Planck seven. MAP would also get three combinations to 1% accuracy, and Planck seven! This is the promise of a high-precision cosmology as we move into the next millennium.

### 1.2. Experimental Concerns

CMB anisotropy experiments often involve a number of microwave and sub-millimeter detectors covering at least a few frequencies, located at the focal plane of a telescope. The raw data comes to us as noisy time-ordered recordings of the temperature for each frequency channel, which we shall refer to as timestreams, along with the pointing vector of each detector on the sky. The resolution of the experiment is usually fixed by the size of the telescope and the frequency of the radiation one looks at. We must learn from the data itself almost everything about the noise and the many signals expected, both wanted and unwanted, with only some guidance from other astrophysical observations. We shall see that to a large degree this appears to be a well-posed problem in Bayesian statistical analysis.
The major data products from the COBE anisotropy experiment were six maps, each with 6144 pixels, derived from six timestreams, one for each detector. The timestream noise was Gaussian, which translated into correlated Gaussian noise in the maps. Much effort went into full statistical analyses of the underlying sky signals, most often under the hypothesis that the sky signal was a Gaussian process as well. The amount of COBE data was at the edge of what could be done with 1992 workstations. The other experiments used in the estimate of the power spectrum in Fig. 2 had less data, and full analysis was also feasible. We are now entering a new era: LDB experiments will have up to two orders of magnitude more data, MAP three and Planck four. For the forecasts of impressively small $`C_{\ell }`$ errors to become reality, we must learn to deal with this huge volume of data. In this article, we discuss the computational challenges associated with current methods for going from the timestreams to multi-frequency sky maps, and for separating out the different sky signals from these maps. Finally, from the CMB map and its statistical properties, cosmological parameters can be derived. To illustrate the techniques, we use them to find estimates of $`C_{\ell }`$. This represents an extreme form of data compression, but one from which cosmological parameters and their errors can finally be derived. As we shall discuss at considerable length in this article, the analysis procedure we will describe is necessarily global; that is, making the map requires operating on the entire time-ordered data, and estimating the power spectrum requires analyzing the entire map at once. This is due to the statistically correlated nature of both the instrumental noise and the expected CMB sky signal, which links up measurements made at one point with those made at all others.

## 2. What signals do we expect?

Of the signals we know are present, there are of course the primary CMB fluctuations from the epoch of photon decoupling that we have already discussed, the primary goal of this huge worldwide effort. There are also secondary fluctuations of great interest to cosmologists arising from nonlinear processes at lower redshift: some come from the epoch of galaxy formation and some from scattering of CMB photons by hot gas in clusters of galaxies. Extragalactic radio sources are another nontrivial signal. On top of this, there are various emissions from dust and gas in our Milky Way galaxy. While these are foreground nuisances to cosmologists, they are signals of passionate interest to interstellar medium astronomers. Fortunately these signals have very different dependences on frequency (Fig. 5) and, as we now know, rather statistically distinct sky patterns (Fig. 4). We know how to calculate in exquisite detail the statistics of the primary signal for the various models of cosmic structure formation. The fluctuations are so small at the epoch of photon decoupling that linear perturbation theory is a superb approximation to the exact non-linear evolution equations. The simplest versions of the inflation theory predict that the fluctuations from the quantum noise form a Gaussian random field. Linearity implies that this translates into anisotropy patterns that are drawn from a Gaussian random process and which can be characterized solely by their power spectrum. Thus our emphasis is on confronting the theory with the data in power spectrum space, as in Fig. 2.
Primary anisotropies in defect theories are more complicated to calculate, because non-Gaussian patterns are created in the phase transitions, which evolve in complex ways and for which large scale simulations are required, a computing challenge we shall not discuss in this article. In both theories, algorithmic advances have been very important for speeding up the computations of $`C_{\ell }`$. The secondary fluctuations involve nonlinear processes, and the full panoply of $`N`$-body and gas-dynamical cosmological simulation techniques discussed in this volume are being brought to bear on the calculation. Non-Gaussian aspects of the predicted patterns are fundamental, and much beyond $`C_{\ell }`$ is required to specify them. Further, some secondary signals, such as radiation from dusty star-burst regions in galaxies, are too difficult to calculate from first principles, and statistical models of their distribution must be guided by observations. At least for most CMB experiments, they can be treated as point sources, much smaller than the observational resolution. The foreground signals from the interstellar medium are also non-Gaussian and not calculable. They must be modeled from the observations, and have the added complication of being extended sources. For each signal $`T`$ present, there is therefore a theoretical "prior probability" function specifying its statistical distribution, $`𝒫(T|\mathrm{th})`$. A Gaussian $`𝒫(T|\mathrm{th})`$ has the important property that it is completely specified by the two-point correlation function, which is the expectation value of the product of the temperature in two directions $`\widehat{𝐪}`$ and $`\widehat{𝐪}^{\prime }`$ on the sky, $`\langle T(\widehat{𝐪})T(\widehat{𝐪}^{\prime })\rangle `$. For non-Gaussian processes an infinite number of higher order temperature correlation functions are needed in principle. The inflation-generated or defect-generated temperature anisotropies are also usually statistically isotropic, that is, the $`N`$-point correlation functions are invariant under a uniform rotation of the $`N`$ sky vectors $`\widehat{𝐪}`$. This implies $`\langle T(\widehat{𝐪})T(\widehat{𝐪}^{\prime })\rangle `$ is a function only of the angular separation. If the temperature field is expanded in spherical harmonics $`Y_{\ell m}(\widehat{𝐪})`$, then the two-point function of the coefficients $`a_{\ell m}`$ is related to $`C_{\ell }`$ by

$$\langle a_{\ell m}a_{\ell ^{\prime }m^{\prime }}^{*}\rangle =C_{\ell }\delta _{\ell \ell ^{\prime }}\delta _{mm^{\prime }},\quad \mathrm{where}\quad T(\widehat{𝐪})=\sum _{\ell m}a_{\ell m}Y_{\ell m}(\widehat{𝐪}),$$ (1)

so the correlation function is related to the $`C_{\ell }`$ by

$$\langle T(\widehat{𝐪})T(\widehat{𝐪}^{\prime })\rangle =\sum _{\ell }\frac{2\ell +1}{4\pi }C_{\ell }P_{\ell }(\widehat{𝐪}\cdot \widehat{𝐪}^{\prime }),$$ (2)

where $`P_{\ell }(x)`$ is a Legendre polynomial. Just as a Fourier wavenumber $`k`$ corresponds to a scale $`\lambda \sim 2\pi /k`$, the spherical-harmonic coefficients correspond to an angular scale $`\theta \sim 180^{\circ }/\ell `$. Figure 2 shows $`C_{\ell }`$ for two different cosmologies given the same primordial theory; we plot $`\ell (\ell +1)C_{\ell }/(2\pi )`$ since at high $`\ell `$ it gives the power per logarithmic bin of $`\ell `$. A nice way to think about Gaussian fluctuations is that for a given power spectrum, they distribute this power with the smallest dispersion. Temperature fluctuations are typically within $`\pm 2\sigma `$ and rarely exceed $`3\sigma `$, where $`\sigma `$ is the rms amplitude. Such is the map in Fig. 1.
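Eq. 2 is straightforward to evaluate numerically. The sketch below, which assumes a simple flat band power rather than any physical model, tabulates the correlation function implied by a given $`C_{\ell }`$:

```python
import numpy as np
from numpy.polynomial import legendre

# Illustrative power spectrum: flat l(l+1)C_l/2pi = 1, zero monopole/dipole.
lmax = 500
ell = np.arange(lmax + 1)
Cl = np.zeros(lmax + 1)
Cl[2:] = 2.0 * np.pi / (ell[2:] * (ell[2:] + 1.0))

# Correlation function C(theta) = sum_l (2l+1)/(4pi) C_l P_l(cos theta).
theta = np.radians(np.linspace(0.0, 90.0, 181))
coeffs = (2.0 * ell + 1.0) / (4.0 * np.pi) * Cl
corr = legendre.legval(np.cos(theta), coeffs)

print("variance C(0)  =", corr[0])
print("C(10 degrees)  =", corr[20])
```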
Since the term non-Gaussian covers all other possibilities, it may seem impossible to characterize, but the way the greater dispersion often manifests itself is that the power is more concentrated, e.g., in extended hot and/or cold spots for the galactic foregrounds, and in point-like concentrations for the extragalactic sources, as is evident in Fig. 4. Although we may marvel at how well the basic inflation prediction from the 1980's is doing relative to the current data in Fig. 2, it will be astounding if no anomalies are found in the passage from those large error bars to the much smaller ones of Fig. 3, and human musings about such exotic ultra-early universe processes are thereby confirmed.

## 3. What is coming?

The new CMB anisotropy data sets will come from a variety of platforms: large arrays of detectors on the ground or on balloons, long duration balloons (LDBs), ground-based interferometers and satellites. Most of these experiments measure the sky at anywhere from 3 to 10 photon frequencies, with several detectors at each frequency. With detector sampling rates of about 100 Hz and durations of weeks to years, the raw data sets range in size from Gigabytes to nearly Terabytes. Another measure of the size of a data set is the number of resolution elements, or beam-size pixels, in the maps that are derived from the raw data. Over the next two years, LDBs and interferometers will measure between $`10^4`$ and $`10^5`$ resolution elements, which is an impressive improvement upon COBE/DMR's $`10^3`$ elements. NASA's MAP satellite will measure the whole sky with $`12^{\prime }`$ resolution in its highest frequency channel, resulting in CMB maps with $`10^6`$ resolution elements. The Planck Surveyor has $`5^{\prime }`$ resolution, that of the lower panel of Fig. 1, and will create maps with $`10^7`$ resolution elements. In Fig. 3, forecasts of power spectra and their errors for TopHat and BOOMERanG (two LDB missions) and for MAP and Planck are given. These results ignore foregrounds and assume maps have homogeneous noise, and thus are highly idealized. Extracting the angular power spectrum from such large maps presents a formidable computing challenge. Except for the complication of being on a sphere, the difficulties are those shared with the more usual problem of power spectrum estimation in flat spaces; in general, it is an $`O(m_p^3)`$ process, where $`m_p`$ is the number of pixels in the map. What makes the process $`O(m_p^3)`$ is either matrix inversion or determinant evaluation, depending on the particular implementation. (In special cases, the Fast Fourier Transform is a particularly elegant matrix factorization, reducing the operations count from $`O(m_p^3)`$ to $`O(m_p\mathrm{ln}m_p)`$, but it is not generally applicable.) In addition to the operations count, storage is also a challenge, since the operations are manipulations of $`m_p\times m_p`$ matrices. For example, the noise correlation matrix for a megapixel map requires 2000 Gbytes for single precision (four byte) storage!

## 4. Following the thread of the data

Conceptually, the process of extracting cosmological information from a CMB anisotropy experiment is straightforward. First, maps of microwave emission at the observed wavelengths are extracted from the lengthy time-ordered data; these are the maximum-likelihood estimates of the sky signal given a noise model. Then, the various physical components are separated: solar-system contamination, galactic and extragalactic foregrounds, and the CMB itself.
Finally, given the CMB map, we can find the maximum-likelihood power spectrum, $`C_{\ell }`$, from which the underlying cosmological parameters can be computed. This entire data analysis pipeline can be unified in a Bayesian likelihood formalism. Of course, this pipeline is complicated by the correlated nature of the instrumental noise, by unavoidable systematic effects, and by the non-Gaussian nature of the various sky signals.

### 4.1. From the instrument to the maps…

Experiments measure the microwave emission from the sky convolved with their beam. Measurements of different parts of the sky are often combined using complicated difference schemes, called chopping patterns. For example, while the Planck Surveyor will measure the temperature of a single point on the sky at any given time, MAP and COBE measure the temperature difference between two points. The purpose of these chops is to reduce the noise contamination between samples, which can be large and may have long-term drifts and other complications. Observations are repeated many times over the experiment's lifetime, in different orientations on the sky and in many detectors sensitive to a range of photon wavelengths. Schematically, we can write the observation as

$$d_{\nu t}=\sum _pP_{\nu tp}\mathrm{\Delta }_{\nu p}+\eta _{\nu t}.$$ (3)

Here, $`d_{\nu t}`$ is the vector of observations at frequency $`\nu `$ and time $`t`$, $`\eta _{\nu t}`$ is the noise contribution, and $`\mathrm{\Delta }_{\nu p}`$ is the microwave emission at that frequency and position $`p=1,\ldots ,m_p`$ on the sky, smeared by the experimental beam and averaged over the pixel. The pointing matrix, $`P_{\nu tp}`$, is an operator which describes the location of the beam as a function of time and its chopping pattern. For a scanning experiment, it is a sparse matrix with a 1 whenever position $`p`$ is observed at time $`t`$; for a chopping experiment it will have positive and negative weights describing the differences made at time $`t`$. (Note that we shall often drop the reference to the channel, $`\nu `$, when referring to a single frequency.) A toy numerical construction of $`P`$ and of a simulated timestream is sketched below.

The first challenge is to separate the noise from the signal and create an estimate of the map, $`\overline{\mathrm{\Delta }}_p`$, and its noise properties. This alone is a daunting task: long-term correlations in the noise mean that the best estimate for the map is not simply a weighted sum of the observations at that pixel. Rather, a full least-squares solution is required. This arises naturally as the maximum-likelihood estimate of the map if the noise is taken to be Gaussian (see Eq. 5, below). This in turn requires complex matrix manipulations due to the long-term noise correlations. One of the most difficult forms of noise results from the random long-term drifts in the instrument. These make it hard to measure the absolute value of the temperature on a pixel, though temperature differences along the path of the beam can be measured quite well, because the drifts are small on short time scales. However, by the time the instrument returns to scan a nearby area of the sky, the offset due to this drift can be quite large, resulting in an apparent striping of the sky along the directions of the scan pattern. The problem is even more complicated than a simple offset, because the detector noise has a "$`1/f`$" component at low frequencies accompanying the high-frequency white noise. This striping can be reduced by using a better observing strategy.
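To make the data model of Eq. 3 and the origin of striping concrete, here is a toy sketch; the scan geometry, noise level and "$`1/f`$" knee frequency are arbitrary choices for illustration, not parameters of any real experiment:

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
m_p, n_sweeps = 100, 20                   # map pixels, back-and-forth sweeps
scan = np.tile(np.r_[np.arange(m_p), np.arange(m_p)[::-1]], n_sweeps // 2)
m_t = scan.size                           # timestream samples

# Pointing matrix for a scanning experiment: a single 1 per row (no chopping).
P = sparse.csr_matrix((np.ones(m_t), (np.arange(m_t), scan)),
                      shape=(m_t, m_p))

Delta = rng.standard_normal(m_p)          # a stand-in beam-smeared sky

# Stationary "1/f + white" noise, built by shaping white noise in Fourier
# space with a power spectrum P(f) = 1 + f_knee / f.
f = np.fft.rfftfreq(m_t)
Pf = np.ones_like(f)
Pf[1:] += 0.01 / f[1:]                    # knee frequency: arbitrary
eta = np.fft.irfft(np.fft.rfft(rng.standard_normal(m_t)) * np.sqrt(Pf), m_t)

d = P @ Delta + eta                       # the simulated timestream, Eq. (3)
```

Binning this $`d`$ naively by pixel would imprint the slow noise drifts onto the map as stripes along the scan; the maximum-likelihood solution of Sec. 5.1 suppresses them by weighting with $`N^{-1}`$.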
If the scan pattern is such that it often passes over one of a set of well-sampled reference points, then the offset can be measured and removed from the timestreams. More complicated crossing patterns, in which many pixels are quickly revisited along different scan directions, provide a better sampling of the offset drift and allow it to be removed more effectively. The striping issue highlights the global nature of the problem of map-making. If the map did not need to be analyzed globally, then one could cut the map into $`N`$ pieces and speed up the processing time by $`N^2`$. However, including the reference points is essential, and these can be far removed from the subset of pixels in which one is interested. More complicated crossing patterns which reduce these errors unfortunately increase the "non-locality" of the problem, making it difficult to use divide-and-conquer tactics successfully. Solving for the map in the presence of this noise is, in general, an $`O(m_t^3)`$ process, where $`m_t`$ is the number of elements in the time-ordered data. Since $`m_t`$ may be anywhere from $`10^6`$ to upwards of $`10^9`$, the general problem cannot be solved in a reasonable time. Fortunately, the problem becomes tractable if one can exploit the stationarity, or time-translation invariance, of the noise. In addition to solving for the map, one also needs the statistical properties of the errors in the map. Accurate calculation of the "map noise matrix" is critical, since the signal we are looking for is excess variance in the map, beyond that which is expected from the noise. It turns out that it is easier both to calculate and to store the inverse of the map noise matrix, called the map weight matrix. The weight matrix is typically very sparse, whereas its inverse may be quite dense. It is therefore advantageous to have algorithms for power spectrum and parameter estimation which require the weight matrix, rather than its inverse.

### 4.2. Removing the foregrounds…

Maps are made at a number of different wavelengths. Each of these maps will be the sum of the CMB signal, $`T_p`$, and contributions from astrophysical foregrounds: sources of microwave emission in the universe other than the CMB itself. This includes low-frequency galactic emission from the 20 K dust that permeates the galaxy and from gas emitting synchrotron and bremsstrahlung (or free-free) radiation. There are also extragalactic sources of emission: galaxies that emit in the infrared and the radio. These are treated as point sources, since their angular size is much smaller than the experimental resolution. In addition, clusters of galaxies and the filamentary structures connecting them will appear because their hot gas of electrons can Compton scatter CMB photons to shorter wavelengths, a phenomenon known as the Sunyaev-Zel'dovich (SZ) effect. These clusters are typically a few arcminutes across, small enough to be resolved by Planck but not by MAP. In Figure 4, we schematically show the spatial patterns of some of these foregrounds, and in Figure 5, we show their frequency spectra. The next challenge, then, is to separate these foregrounds from the CMB itself in the noisy maps.
We write

$$\overline{\mathrm{\Delta }}_{\nu p}=T_p+\sum _if_{\nu p}^{(i)}+n_{\nu p}.$$ (4)

Here, $`T`$ is the frequency-independent CMB temperature fluctuation, $`n`$ is the noise contribution whose statistics have been calculated in the map-making procedure, and $`f_{\nu p}^{(i)}`$ is the contribution of the foreground or secondary anisotropy component $`i`$. The expected frequency dependences shown in Figure 5 carry some uncertainty in their shapes. For some secondary anisotropy sources, e.g., the Sunyaev-Zel'dovich effect, there is none, so $`f_{\nu p}^{(i)}`$ can be considered a product of the given function of frequency times a spatial function. In the past, an approximation like this, involving a single spatial template and one function of frequency, has been used for all of the foregrounds, but it is essential to consider fluctuations about this for the accuracy that will be needed in the data sets to come. A crude but reasonably effective method is to separate the signals using the multifrequency data on a pixel-by-pixel basis (a sketch of this approach is given below). However, it is clearly better to use our knowledge of the spatial patterns in the forms adopted for $`𝒫(f_{\nu p}^{(i)}|\mathrm{theory})`$, e.g., the foreground power spectra shown in Fig. 2. Even using a Gaussian approximation for the foreground prior probabilities has been shown to be relatively effective at recovering the signals. In this case, the statistical distribution of the maps is again Gaussian, with a mean given by the maximum likelihood, which turns out to involve Wiener filtering of the data [numrec]. In simulations for Planck performed by Bouchet and Gispert, the layers making up the "cosmic sandwich" in Figure 4 have been convolved with the frequency-dependent beams, and realistic noise has been added. The recovered signals look remarkably like the input ones. There is some indication that the performance degrades if too large a patch of the sky is taken, possibly because the non-Gaussian aspects become more important. Of course, good estimates of the power spectra for each of the foregrounds are essential ingredients for $`𝒫(f_{\nu p}^{(i)}|\mathrm{theory})`$, and these must be obtained from the CMB data in question by iterative techniques, or with other CMB data. Radio astronomers have a long history of image construction using interferometry data. One of the most effective techniques is the "maximum entropy method". Although this is often a catch-all phrase for finding the maximum likelihood solution, the implementation of the method involves a specific assumption for the nature of $`𝒫(f_{\nu p}^{(i)}|\mathrm{theory})`$, derived as a limit of a Poisson distribution. For small fluctuations it looks like a Gaussian, but it has higher probability in the tails than the Gaussian does. The Poisson aspect makes it well suited to finding and reconstructing point sources. To apply it to the CMB, which has both positive and negative excursions, and to include signal correlation function information, some development of the approach was needed. This has recently been carried out and applied to the cosmic sandwich exercise [bouch]. It did at least as well at recovery as the Wiener method did, and was superior for the concentrated Sunyaev-Zel'dovich cluster sources and more generally for point sources, as might be expected. Errors on the maximum entropy maps are estimated from the second derivative matrix of the likelihood function.
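Returning to the pixel-by-pixel approach mentioned above, a minimal sketch follows. The mixing matrix $`A`$ encodes assumed frequency spectra (the channels, power-law indices and noise levels below are purely illustrative), and at each pixel the component amplitudes are recovered by generalized least squares:

```python
import numpy as np

rng = np.random.default_rng(1)
freqs = np.array([100.0, 143.0, 217.0, 353.0])   # GHz; hypothetical channels

# Columns: CMB (flat in these units), a dust-like rise, a free-free-like fall.
A = np.column_stack([np.ones(freqs.size),
                     (freqs / 353.0) ** 1.7,
                     (freqs / 100.0) ** -2.1])
sigma = np.array([0.1, 0.1, 0.2, 0.4])           # per-channel noise rms
Ninv = np.diag(1.0 / sigma**2)

n_comp, m_p = A.shape[1], 1000
s_true = rng.standard_normal((n_comp, m_p))      # toy component maps
maps = A @ s_true + sigma[:, None] * rng.standard_normal((freqs.size, m_p))

# GLS: s_hat = (A^T N^-1 A)^-1 A^T N^-1 Delta, applied to every pixel at once.
W = np.linalg.solve(A.T @ Ninv @ A, A.T @ Ninv)
s_hat = W @ maps
print("rms recovery error per component:", (s_hat - s_true).std(axis=1))
```

Because this ignores all spatial information, it is the baseline that the Wiener and maximum entropy methods described above improve upon.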
We regard these separation exercises as highly encouraging, but since the accuracy with which cosmological parameters can be determined is very dependent upon the accuracy with which the separation can be done, it is clear that much work is in order to improve the separation algorithms.

### 4.3. From the CMB to cosmology…

Armed with a CMB map and its noise properties, we can try to extract its cosmological information. If we assume the cosmological signal is the result of a statistically isotropic Gaussian random process, then all of the information is contained in the power spectrum, $`C_{\ell }`$. With Gaussian noise as well, we can write down the exact form of its likelihood function. Unfortunately, because of incomplete sky coverage and the presence of correlated, anisotropic noise, maximizing this likelihood function (either directly or by some sort of iterative procedure) requires manipulation of $`m_p\times m_p`$ matrices, typically needing $`O(m_p^3)`$ operations and $`O(m_p^2)`$ storage. This becomes computationally prohibitive on typical workstations when $`m_p`$ exceeds about $`10^4`$; for the $`m_p>10^6`$ satellite missions even supercomputers may be inadequate to the task. For example, on a single 1000 MHz processor, even one calculation of the $`O(10^{21})`$ operations necessary for a ten-million-pixel map would take 30,000 years! There is, as yet, no general solution to this problem. However, in some cases, such as for the MAP satellite, a solution has been proposed which relies upon the statistical isotropy of the signal and a simple form for the noise. Unfortunately, most experiments will produce maps with more complicated noise properties. The power spectrum is a highly compressed form of the data in the map, but it is not the end of the story. The real goal remains to determine the underlying cosmological parameters, such as the density of the different components in the universe. For the simple inflationary models usually considered, there are still at least ten different parameters which affect the CMB power spectrum, so we must find the best fit in a ten (or more) dimensional parameter space. Just as the frequency channel maps were derived from the timestreams, the CMB map from the frequency maps, and the power spectrum from the CMB map, the cosmological parameters can be estimated from the power spectrum. In doing so, however, one must be careful about the non-Gaussian distribution of the uncertainty in the $`C_{\ell }`$ [bjkII].
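To give a feeling for where the $`O(m_p^3)`$ scaling bites, here is a minimal sketch of a single brute-force evaluation of the Gaussian map likelihood (written out explicitly as Eq. 12 in Sec. 5.3): one Cholesky factorization yields both the determinant and the inverse applied to the map. The toy covariance used here is purely illustrative.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def log_likelihood(Tbar, C):
    """One brute-force evaluation of the Gaussian likelihood, O(m_p^3)."""
    cf = cho_factor(C, lower=True)
    chisq = Tbar @ cho_solve(cf, Tbar)              # Tbar^T C^-1 Tbar
    logdet = 2.0 * np.sum(np.log(np.diag(cf[0])))   # ln|C| from the factor
    return -0.5 * (chisq + logdet + Tbar.size * np.log(2.0 * np.pi))

# Toy use with m_p = 500; it is the m_p^3 growth of the factorization that
# makes this hopeless for megapixel maps.
rng = np.random.default_rng(2)
m_p = 500
lags = np.abs(np.subtract.outer(np.arange(m_p), np.arange(m_p)))
C = np.eye(m_p) + 0.1 * np.exp(-lags / 10.0)        # illustrative covariance
Tbar = np.linalg.cholesky(C) @ rng.standard_normal(m_p)
print(log_likelihood(Tbar, C))
```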
## 5. The Most Daunting Challenges

We now take a more in-depth look at the problems of map-making and parameter estimation. The most general algorithms for solving these problems operate globally on the data set and are prohibitively expensive: both require matrix operations $`O(m^3)`$, where $`m`$ is either the number of points in the time series ($`m_t>10^9`$ for upcoming satellites) or the number of pixels on the sky ($`m_p>10^6`$). Special properties, such as the approximate stationarity of the instrumental noise, must be exploited in order to make the analysis of large data sets possible. To date most work has concentrated on efficient algorithms for the exact global problem, but for the new data sets it will be essential to develop approximate methods as well. We wish to find the most likely maps and power spectra. We can write down likelihood functions for both these quantities if we assume that both the noise and signal are Gaussian. While the maximum-likelihood map has a closed-form solution, there is no such solution for the most likely power spectrum. Thus, the problem of the cost of evaluating the likelihood function is compounded by having to search a very high-dimensional space for the global maximum. Even these complex problems are an oversimplification, because we know that foregrounds and secondary anisotropies have non-Gaussian distributions. Thus, although we expect to get valuable results using simplified approximations for $`𝒫(f_{\nu p}^{(i)}|\mathrm{th})`$, in particular the Gaussian one we use in the discussion below, Monte Carlo approaches in which many $`f_{\nu p}^{(i)}`$ maps are made will undoubtedly be necessary to accurately determine the uncertainty in the derived cosmological parameters.

### 5.1. Map-making: the ideal case

As described in Eq. 3, for each channel we model the timestream, $`d`$, as due to signal, $`\mathrm{\Delta }`$, and noise, $`\eta `$: $`d=P\mathrm{\Delta }+\eta `$, where $`P`$ is the pointing matrix that describes the observing strategy as a function of time. In the ideal case, the noise is Gaussian-distributed, i.e., its probability distribution is

$$𝒫(\eta )=\left[\left(2\pi \right)^{m_t}|N|\right]^{-1/2}\mathrm{exp}\left(-\eta ^{\dagger }N^{-1}\eta /2\right),$$ (5)

where $`m_t`$ is the number of time-ordered data points and $`N_{tt^{\prime }}\equiv \langle \eta _t\eta _{t^{\prime }}^{*}\rangle `$ is the noise covariance matrix. Here the $`\dagger `$ denotes transpose and the brackets indicate an ensemble average (integration over $`𝒫(\eta )d\eta `$). Substituting $`d-P\mathrm{\Delta }`$ for $`\eta `$ in this expression gives the probability of the time-ordered data given a map, $`𝒫(d|\mathrm{\Delta })`$, which is also referred to as the likelihood of the map, $`ℒ(\mathrm{\Delta })`$. We are actually interested in the probability of a map given the data, $`𝒫(\mathrm{\Delta }|d)`$. If we assign a uniform prior probability to the underlying map, i.e., $`𝒫(\mathrm{\Delta }|\mathrm{theory})`$ is constant, then by Bayes' theorem $`𝒫(\mathrm{\Delta }|d)`$ is simply proportional to the likelihood function, $`ℒ(\mathrm{\Delta })`$. The map that maximizes this likelihood function is

$$\overline{\mathrm{\Delta }}=C_NP^{\dagger }N^{-1}d,$$ (6)

where $`C_N`$ is the noise covariance matrix of the map,

$$C_N\equiv \left\langle \left(\overline{\mathrm{\Delta }}-\mathrm{\Delta }\right)\left(\overline{\mathrm{\Delta }}-\mathrm{\Delta }\right)^{\dagger }\right\rangle =\left(P^{\dagger }N^{-1}P\right)^{-1}.$$ (7)

This map is known as a sufficient statistic, in that $`\overline{\mathrm{\Delta }}`$ and $`C_N`$ contain all of the sky information in the original data set, provided the pixels are small enough. As discussed above, it is preferable to work with $`C_N^{-1}`$, the map weight matrix, which is often sparse or nearly so. For many purposes, the variance-weighted map,

$$C_N^{-1}\overline{\mathrm{\Delta }}=P^{\dagger }N^{-1}d,$$ (8)

may be more useful than the map itself, so that we can avoid the computationally intensive step of inverting the weight matrix. This is true for optimally combining maps, since variance-weighted maps and their weight matrices simply sum, and for finding the minimum-variance map in a different basis, such as Fourier modes or spherical harmonics. An algorithm for finding the most likely power spectrum exploits this, as we will see below. If we do need to find $`\overline{\mathrm{\Delta }}`$, we can solve Eq. 8 iteratively by techniques like the conjugate gradient method (a toy implementation follows below). In general, such methods require $`m_p`$ iterations and are effectively still $`O(m_p^3)`$ methods.
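A minimal sketch of this iterative solution, reusing the toy $`P`$, $`d`$, $`Pf`$, $`m_t`$ and $`m_p`$ constructed in Sec. 4.1 (they are assumed to be already defined): stationarity lets $`N^{-1}`$ act by FFT, so one matrix-vector product costs $`O(m_t\mathrm{ln}m_t)`$ rather than $`O(m_t^2)`$.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# N^-1 applied by FFT, valid for stationary noise with power spectrum Pf.
def apply_Ninv(v):
    return np.fft.irfft(np.fft.rfft(v) / Pf, m_t)

rhs = P.T @ apply_Ninv(d)                     # the right-hand side P^T N^-1 d

# The weight matrix C_N^-1 = P^T N^-1 P, applied implicitly, never formed.
CNinv = LinearOperator((m_p, m_p),
                       matvec=lambda x: P.T @ apply_Ninv(P @ x))

map_ml, info = cg(CNinv, rhs)                 # Eq. (6), solved iteratively
assert info == 0                              # 0 signals convergence
```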
Fortunately, we expect $`C_N`$ to be sufficiently diagonally dominant that many fewer than $`m_p`$ iterations are required. This is aided by the use of pre-conditioners, which will be discussed further in the context of finding the maximum-likelihood power spectrum. Whether we are interested in $`\overline{\mathrm{\Delta }}`$ or $`C_N^{-1}\overline{\mathrm{\Delta }}`$, we still must convolve the inverse of $`N`$ with the data vector. The direct inversion of $`N`$ by brute force is impractical, since it is an $`m_t\times m_t`$ matrix, where $`m_t`$ is often about $`10^9`$. However, this is greatly simplified if the noise is stationary, which means its statistical properties are time-translation invariant, so that $`N_{tt^{\prime }}=N(t-t^{\prime })`$. Stationarity means that $`N`$ is diagonal in Fourier space with eigenvalues $`\stackrel{~}{N}(f)`$, the noise power spectrum. $`N^{-1}`$ is then just the inverse Fourier transform of $`1/\stackrel{~}{N}(f)`$. Knowing $`N^{-1}`$, it is easy to calculate the map weight matrix, $`C_N^{-1}=P^{\dagger }N^{-1}P`$. The convolution of $`N^{-1}`$ with $`d`$ appears to be an $`O(m_t^2)`$ operation. Since there is much more timestream data ($`m_t\gg m_p`$), this is potentially the slowest step in the calculation of the map. Fortunately, the convolution is actually much faster, because $`N^{-1}(t-t^{\prime })`$ generally goes nearly to zero for large $`t-t^{\prime }`$. The absence of weight at long time scales can be due to the "$`1/f`$" nature of the instrument noise at low temporal frequencies. Atmospheric fluctuations also have more power on long time scales than on short time scales, as do many noise sources. Since these characteristic times do not scale with the mission duration, the convolution is actually $`O(m_t)`$. Similarly, the multiplication by the pointing matrix is also $`O(m_t)`$ because of its sparseness. Thus, we can reduce the timestream data to an estimate of the map and its weight matrix in only $`O(m_p^2)`$ operations, a substantial savings compared to the $`O(m_t^3)`$ operations required for a direct calculation. These algorithms, or similar ones, have been implemented in practice, e.g., [qmap; wri].

### 5.2. Map-making: complications

Above, we made two simplifying assumptions: that the statistical properties of the noise in the timestream were known and that the noise sources were all stationary. Here we try to deal with the more general case. We would like to estimate the statistical properties of the noise by using a model of the instrument, but in practice these models are never sufficient. One must always estimate the noise from the data set itself, and doing this from the timestream requires some assumptions. It is usually assumed that the noise is stationary over sufficiently long intervals of time and is Gaussian. Often the data set is dominated by noise and, to a first approximation, is all noise. Thus one has many pairs of points separated by $`t-t^{\prime }`$ with which to estimate $`N(|t-t^{\prime }|)=\langle \eta (t)\eta (t^{\prime })\rangle `$. Techniques are being developed [fjnoise] to simultaneously determine the map and noise power spectrum and the covariance between the two. Non-stationary noise can arise in a number of ways: possible sources include contamination by radiation from the ground, balloon or sun, some components of atmospheric fluctuations, and cosmic-ray hits. Often they are synchronous with a periodic motion of the instrument. They can be taken into account by extending the model of the timestream given in Eq. 3 to include contaminants of amplitude $`\kappa _c`$ with a known "timestream shape", $`\mathrm{\Upsilon }_{tc}`$:
$$d_t=\sum _pP_{tp}\mathrm{\Delta }_p+\sum _c\mathrm{\Upsilon }_{tc}\kappa _c+\eta _t.$$ (9)

The contaminant amplitudes are now on the same mathematical footing as the map pixels, $`\mathrm{\Delta }_p`$, and both can be solved for simultaneously. A more conservative approach assigns infinite noise to modes of the time-ordered data which can be written as a linear combination of the $`\mathrm{\Upsilon }_{tc}`$. Doing so removes all sensitivity of the map to the contaminant, irrespective of the assumption of Gaussianity. Operationally, we replace the timestream noise covariance matrix, $`N_{tt^{\prime }}`$, with

$$N_{tt^{\prime }}\rightarrow N_{tt^{\prime }}+\sum _c\sigma _c^2\mathrm{\Upsilon }_{tc}\mathrm{\Upsilon }_{t^{\prime }c},$$ (10)

where the $`\sigma _c^2`$ are taken to be very large, thereby setting the appropriate eigenvalues of $`N^{-1}`$ to zero. This noise matrix has lost its time-translation invariance and so is no longer directly invertible by Fourier transform methods. Fortunately, there is a theorem called the Woodbury Formula [numrec] which allows one to find the resulting correction to $`N^{-1}`$ for additions to $`N`$ of the form in Eq. 10 while only having to invert matrices of dimension equal to the number of contaminants.
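A sketch of how this works in the $`\sigma _c\rightarrow \infty `$ limit, where the Woodbury correction turns $`N^{-1}`$ into a projection orthogonal to the templates. For the demonstration we use a white-noise stand-in for $`N^{-1}`$ and two hypothetical templates (an offset and a scan-synchronous sine); any fast $`N^{-1}`$, such as the FFT-based one above, can be substituted.

```python
import numpy as np

m_t = 4000
apply_Ninv = lambda v: v                       # white-noise stand-in for N^-1

def deweight(Ninv_apply, Y):
    """Apply N^-1 with the template directions Y given infinite variance."""
    NinvY = np.column_stack([Ninv_apply(Y[:, j]) for j in range(Y.shape[1])])
    core = np.linalg.inv(Y.T @ NinvY)          # only a c x c matrix to invert
    def apply(v):
        # sigma_c -> infinity limit of the Woodbury formula:
        # N^-1 -> N^-1 - N^-1 Y (Y^T N^-1 Y)^-1 Y^T N^-1
        return Ninv_apply(v) - NinvY @ (core @ (NinvY.T @ v))
    return apply

Y = np.column_stack([np.ones(m_t),                               # an offset
                     np.sin(2 * np.pi * np.arange(m_t) / m_t)])  # sync. mode
Ninv_marg = deweight(apply_Ninv, Y)

# The deweighted N^-1 annihilates the template directions:
v = np.random.default_rng(4).standard_normal(m_t)
print(np.allclose(Y.T @ Ninv_marg(v), 0.0))    # True
```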
### 5.3. Parameter Estimation: A First Attempt

We now turn to the determination of some set of cosmological parameters from the map. We will focus on the case where the parameters are the $`C_{\ell }`$'s, because this is a model-independent way of compressing the data. However, the discussion below can easily be generalized to any kind of parameterization, including the ten or more cosmological parameters that we would like to constrain. We wish to evaluate the likelihood of the parameters, $`ℒ(C_{\ell })\equiv 𝒫(\overline{T}|C_{\ell })`$, which folds in the probability of the map given the data with all of the prior probability distributions, for the target signal $`T`$ and the foregrounds and secondary anisotropies $`f^{(i)}`$, in a Bayesian way:

$$ℒ(C_{\ell })=\int 𝒫(\mathrm{\Delta }|d)𝒫(T|\mathrm{theory})d^{m_p}T\prod _i𝒫(f^{(i)}|\mathrm{theory})d^{m_p}f^{(i)}.$$ (11)

Only in the Gaussian or uniform prior cases is the integration over $`T`$ and $`f^{(i)}`$ analytically calculable. The usual procedure for "maximum entropy" priors is to estimate errors from the second derivative of the likelihood, i.e., effectively to use a Gaussian approximation. Exploring how to break away from the Gaussian assumption is an important research topic. Assuming all signals and the noise are Gaussian-distributed, the likelihood function is

$$ℒ(C_{\ell })=\frac{\mathrm{exp}\left[-\frac{1}{2}\overline{T}^{\dagger }\left(C_N+C_S\right)^{-1}\overline{T}\right]}{\left[\left(2\pi \right)^{m_p}|C_N+C_S|\right]^{1/2}},$$ (12)

where $`\overline{T}`$ is the maximum-likelihood CMB map, with the foregrounds removed. $`C_N`$ is the noise matrix calculated above, modified to include variances determined for the foreground maps, and $`C_S`$ is the primary signal autocorrelation function, which depends on the $`C_{\ell }`$ (as in Eq. 2, but corrected for the effect of the beam pattern and finite pixel size). The likelihood function is a Gaussian distribution in the data, but a complicated nonlinear function of the parameters, which enter into $`C_S`$ through the power spectrum. Unlike the map-making problem (Eq. 6), there is no closed-form solution for the most likely $`C_{\ell }`$. Thus we must use a search strategy, and it should be a very efficient one, since brute-force evaluation of the likelihood function requires determinant evaluation and matrix inversion, which are both $`O(m_p^3)`$ problems. Compounding this, evaluating the likelihood is more difficult here because the signal and noise matrices have different symmetries, making it harder to find a basis in which $`C\equiv C_S+C_N`$ has a simple form. A particularly efficient search technique for finding the maximum-likelihood parameters is a generalization of the Newton-Raphson method of root finding. The Newton-Raphson method finds the zero of a function of one parameter iteratively. One guesses a solution and corrects that guess based on the first derivative of the function at that point. If the function is linear, this correction is exact; otherwise, more iterations are required until it converges. In maximizing the likelihood, we are searching for regions where the first derivative of the likelihood with respect to the parameters goes through zero, so it can be solved analogously to the Newton-Raphson method. We actually maximize $`\mathrm{ln}ℒ`$, which simplifies the calculation and also speeds its convergence, since the derivative of the logarithm is generally much more linear in $`C_{\ell }`$ than the derivative of the likelihood itself. Solving for the roots of $`\partial \mathrm{ln}ℒ/\partial C_{\ell }`$ using the Newton-Raphson method requires that we calculate $`\partial ^2\mathrm{ln}ℒ/\partial C_{\ell }\partial C_{\ell ^{\prime }}`$, which is known as the curvature of the likelihood function. Operationally, we often replace the curvature with its expectation value, $`F_{\ell \ell ^{\prime }}`$, the Fisher matrix, because it is easier to calculate and still results in convergence to the same parameters. The change in the parameter values at each iteration for this method is a quadratic form involving the map; hence it is referred to as a quadratic estimator. Using $`C_{\ell }`$ as our parameter, the new guess is modified by [bjkI; teg]

$$\delta C_{\ell }=\frac{1}{2}\sum _{\ell ^{\prime }}F_{\ell \ell ^{\prime }}^{-1}\left[\overline{T}^{\dagger }C^{-1}\frac{\partial C}{\partial C_{\ell ^{\prime }}}C^{-1}\overline{T}-\mathrm{Tr}\left(\frac{\partial C}{\partial C_{\ell ^{\prime }}}C^{-1}\right)\right],$$ (13)

where the Fisher matrix is given by

$$F_{\ell \ell ^{\prime }}\equiv -\left\langle \frac{\partial ^2\mathrm{ln}ℒ}{\partial C_{\ell }\partial C_{\ell ^{\prime }}}\right\rangle =\frac{1}{2}\mathrm{Tr}\left(C^{-1}\frac{\partial C}{\partial C_{\ell }}C^{-1}\frac{\partial C}{\partial C_{\ell ^{\prime }}}\right).$$ (14)

We can recover the full shape of the likelihood for the $`C_{\ell }`$'s from this and one other set of numbers, calculated in approximately the same number of steps as the Fisher matrix itself [bjkII]. The procedure is very similar to that of the Levenberg-Marquardt method [numrec] for minimizing a $`\chi ^2`$ with non-linear parameter dependence. There the curvature matrix (second derivative of the $`\chi ^2`$) is replaced by its expectation value and then scaled according to whether the $`\chi ^2`$ is reduced or increased from the previous iteration. Similar manipulations may possibly speed convergence of the likelihood maximization, although one would want to do this without direct evaluation of the likelihood function. This method has been used for the power spectrum estimates for COBE and other experiments, and for the compressed power spectrum bands estimated from current data shown in Fig. 2 (a schematic rendering of one iteration is given below).
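A schematic, dense-algebra sketch of one iteration of Eqs. 13 and 14, for a small number of band powers $`p_b`$ with fixed templates $`\partial C/\partial p_b`$; everything here (the templates, the noise, the map) is a toy, and the dense operations are exactly the $`O(m_p^3)`$ cost discussed next:

```python
import numpy as np

def quadratic_step(Tbar, C_N, dC, p):
    """One Newton-Raphson step, Eqs. (13)-(14), with dense O(n^3) algebra."""
    C = C_N + sum(pb * dCb for pb, dCb in zip(p, dC))
    Cinv = np.linalg.inv(C)
    CinvT = Cinv @ Tbar
    W = [Cinv @ dCb for dCb in dC]                    # C^-1 dC/dp_b
    F = 0.5 * np.array([[np.trace(Wa @ Wb) for Wb in W] for Wa in W])
    grad = 0.5 * np.array([CinvT @ dCb @ CinvT - np.trace(Wb)
                           for dCb, Wb in zip(dC, W)])
    return p + np.linalg.solve(F, grad), F            # updated p, Fisher

# Toy use: two flat band powers on a 200-pixel "map".
rng = np.random.default_rng(5)
n = 200
dC = [np.diag((np.arange(n) < n // 2).astype(float)),
      np.diag((np.arange(n) >= n // 2).astype(float))]
C_N = 0.5 * np.eye(n)
C_true = C_N + 2.0 * dC[0] + 1.0 * dC[1]
Tbar = np.linalg.cholesky(C_true) @ rng.standard_normal(n)
p, F = quadratic_step(Tbar, C_N, dC, np.array([1.0, 1.0]))
print("band powers:", p, " errors:", np.sqrt(np.diag(np.linalg.inv(F))))
```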
3, in which the noise was assumed (incorrectly) to be homogeneous. We can calculate the time and memory required to do this quadratic estimation for a variety of realistic data sets and kinds of computing hardware. For this algorithm, the $`O(m_p^3)`$ operations must be performed for each parameter (e.g., each band of $`\ell `$ for $`C_\ell `$). Borrill \[borr \] has considered this issue under several different scenarios. For COBE, the power spectrum calculation can easily be done on a modern workstation in less than one day. However, for the LDB data sets expected over the next several years (with $`m_p\sim 200,000`$ or so) the required computing power becomes prohibitive, requiring 640 GB of memory and of order $`3\times 10^{17}`$ floating-point operations, which translates to 40 years of computer time at 400 MHz. This pushes the limits of available technology; even spread over a Cray T3E with $`\sim 1024`$ 900 MHz processors, this would take a week or more. This data set is in hand now, so we cannot even wait for computers to speed up. When the satellite data arrive, with $`m_p>10^6`$, a brute-force calculation will clearly be impossible even with projected advances in computing technology over the next decade. The ten million pixel Planck data set would require 1600 TB of storage and $`3\times 10^{23}`$ floating-point operations, some 25,000 years of CPU time even when spread over a thousand 400 MHz processors. Even a hundredfold increase in computing over the next decade, predicted by Moore’s law, still renders this infeasible. ### 5.4. Discretizing the Sky To solve these computing challenges, shortcuts must be found. One area of great potential benefit is in deciding how the discretized map elements are to be distributed on the sky and stored. Imposing enough symmetries at this early step can help greatly to speed up everything that follows. Obviously it is important to keep the number of pixels as small as possible. For a given resolution, fixed for example by the beam size, the number of pixels is minimized by having them all roughly of the same area. If there are many pixels in a resolution element much smaller than the beam size, they will be highly correlated and little information is gained by treating them individually. The hierarchical nature of the pixelization used for the COBE maps was also a very useful property. In this pixelization, known as the Quadrilateralized Spherical Cube, the sky was broken into six base pixels corresponding to the faces of a cube. Higher resolution pixels were created hierarchically, by dividing each pixel into four smaller pixels of approximately equal area. One advantage of this hierarchical structure is that the data are effectively stored via a branching structure, so that pixels that are physically close to each other are stored close to each other. Among other things, this allows one to coarsen a map very quickly, by adding the ordered pixels in groups of four. Finally, it is very beneficial to have a pixelization which is azimuthal, where many pixels share a common latitude. This is incredibly useful in making spherical harmonic transforms between pixel space, where the data and inverse noise matrix are simply defined, and multipole space, where the theories are simple to describe. Specifically, one wishes to make transforms of the type described by Eq. 1, as well as the inverse transformation. When discretized, these transforms naively take $`m_p^2`$ operations, because $`m_p`$ spherical harmonic functions need to be evaluated at $`m_p`$ separate points on the sky.
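For orientation, the brute-force hardware requirements quoted in Sec. 5.3 are straightforward to reproduce. This is our own back-of-envelope arithmetic; the band counts are illustrative guesses, not figures from the text:

```python
YEAR = 3.15e7                                  # seconds

def brute_force_cost(m_p, n_bands, flops_per_sec=4e8, n_cpu=1):
    """Memory and operation count for dense-matrix power spectrum
    estimation: two m_p x m_p double-precision matrices (C and C^-1),
    and O(m_p^3) operations per band of ell."""
    mem_gb = 2 * 8 * m_p**2 / 1e9
    flops = n_bands * m_p**3
    years = flops / (flops_per_sec * n_cpu * YEAR)
    return mem_gb, flops, years

print(brute_force_cost(2e5, 40))               # LDB: ~640 GB, ~3e17 flops
print(brute_force_cost(1e7, 300, n_cpu=1024))  # Planck: ~1.6e6 GB (1600 TB),
                                               # ~3e23 flops, ~2.3e4 yr on
                                               # 1024 400-MHz processors
```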
However, as has recently been emphasized, if one uses a pixelization with azimuthal symmetry, then the spherical transforms can be greatly sped up \[mnv \]. This exploits the fact that the azimuthal dependence of the spherical harmonic functions can be simply factored out, $`Y_{\ell m}(\theta ,\varphi )=\lambda _{\ell m}(\theta )e^{im\varphi }`$. If one further requires that the pixels have discrete azimuthal symmetry, then the azimuthal sum can be performed quickly with a fast Fourier transform. Effectively, this means that the $`m_p`$ functions need only be evaluated at $`m_p^{1/2}`$ different latitudes, so that the whole process requires only $`m_p^{3/2}`$ operations. Efforts have been made to speed this up even further, by attempting to use FFT’s in the $`\theta `$ direction as well, which in principle could perform the transform in $`m_p(\mathrm{log}m_p)^2`$ operations. Such implementations are still being developed, and do not tend to pay off until $`m_p`$ is very large. Pixelizations have been developed which have all of these symmetries. HEALPix, devised by Kris Gorski and collaborators \[ghw \], has a rhombic dodecahedron as its fundamental base, which can be divided hierarchically while remaining azimuthal. It was used for the rapid construction of the map in Fig. 1. Another class of pixelizations is based on a naturally azimuthal igloo structure which has been specially designed to be hierarchical \[ct \]. In this scheme, pixel edges lie along lines of constant latitude and longitude, so it is easy to integrate over each pixel exactly. This allows any suppression effects due to averaging over the varying pixel shapes to be simply and accurately included when making the transforms. ### 5.5. Exploiting the Symmetries Since many of the signals are most simply described in multipole space, it is natural to try to exploit this basis when implementing the parameter estimation method described above. We should also try recasting the calculation to take advantage of the simple form the weight matrix $`C_N^{-1}`$ has in the pixel basis. Finally, with iterative methods we can exploit approximate symmetries of these matrices which can speed up the algorithms tremendously. Oh, Spergel and Hinshaw \[osh \], hereafter OSH, have recently applied these techniques to simulated parameter estimation for the MAP satellite, to great effect. The Newton-Raphson method does not require the full inverse correlation matrix, but rather $`C^{-1}\overline{T}`$, which can be expressed in terms of $`C_N^{-1}`$ and various $`C_S^{1/2}`$ factors. The equation can be solved using a simple conjugate gradient technique, which iteratively solves the linear system $`Cz=\overline{T}`$ by generating an improved guess and a new search direction (orthogonal to previous search directions) at each step. In general, conjugate gradient is no faster than ordinary methods, requiring of order $`m_p`$ iterations with $`m_p^2`$ operations per iteration for the matrix-vector multiplications. However, this can be sped up in two ways. First, one can make the matrix well conditioned by finding an appropriate preconditioner, which allows the series to converge much faster, in only a few iterations. Second, one can exploit whatever symmetries exist to do the multiplications in fewer operations. A preconditioner $`\stackrel{~}{C}`$ is a matrix which approximately solves the linear system and is used to transform it to $`\stackrel{~}{C}^{-1}Cz=\stackrel{~}{C}^{-1}\overline{T}`$, making the series converge much faster.
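A minimal sketch of this preconditioned conjugate gradient strategy follows (in Python with SciPy; the toy diagonal signal, the random noise matrix, and the simple diagonal preconditioner are our own stand-ins, loosely analogous to the OSH construction described next — they are not the actual pipeline):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(1)
m_p = 512
Cs = np.linspace(10.0, 0.1, m_p)              # toy diagonal C_S (~C_l B_l^2)
Cs_half = np.sqrt(Cs)
G = rng.normal(size=(m_p, m_p))
Ninv = np.diag(rng.uniform(0.5, 1.5, m_p)) + 0.005 * (G + G.T)  # full C_N^-1

# matvec for (I + C_S^1/2 C_N^-1 C_S^1/2), acting on y = C_S^1/2 z;
# the operator is never formed as a dense matrix
A = LinearOperator((m_p, m_p),
                   matvec=lambda y: y + Cs_half * (Ninv @ (Cs_half * y)))
# preconditioner built from the trivially invertible diagonal part of C_N^-1
M_diag = 1.0 + Cs * np.diag(Ninv)
M = LinearOperator((m_p, m_p), matvec=lambda y: y / M_diag)

Tbar = rng.normal(size=m_p)
y, info = cg(A, Cs_half * (Ninv @ Tbar), M=M)  # the system of Eq. 15 below
z = y / Cs_half                                 # z = C^-1 Tbar
# residual check against C z = Tbar, with C = C_N + C_S:
print(info, np.linalg.norm(np.linalg.solve(Ninv, z) + Cs * z - Tbar))
```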
There are two requirements for a good preconditioner: it should be close enough to the original matrix to be useful, and it should be quickly invertible. One can rewrite the linear system we need to solve as $$\left(I+C_S^{1/2}C_N^{-1}C_S^{1/2}\right)C_S^{1/2}z=C_S^{1/2}C_N^{-1}\overline{T}.$$ (15) OSH use a preconditioner $`\left(I+C_S^{1/2}\stackrel{~}{C}_N^{-1}C_S^{1/2}\right)`$, where $`\stackrel{~}{C}_N^{-1}`$ is an approximation to the inverse noise matrix in multipole space: $`\stackrel{~}{C}_N^{-1}`$ is taken to be azimuthally symmetric, so that it is proportional to $`\delta _{mm^{\prime }}`$ in multipole space, which makes it block diagonal and possible to invert quickly. For the case they looked at, which includes only uncorrelated pixel noise and an azimuthally symmetric sky cut, this turned out to be a very good approximation which allows for quick convergence. Because the matrices are simple in the bases chosen, the vector-matrix multiplications are much faster than $`m_p^2`$. In multipole space, the theory correlation matrix is simply diagonal, $`C_S=C_\ell B_\ell ^2\delta _{\ell \ell ^{\prime }}\delta _{mm^{\prime }}`$, where $`B_\ell `$ denotes the beam pattern in $`\ell `$ space. Similarly, in pixel space, operations using the inverse noise matrix are much faster. (OSH simplified to a case where the noise matrix was exactly diagonal in pixel space.) A time-consuming aspect is the transformation between pixel and multipole space, which is $`O(m_p^{3/2})`$. The whole process is actually dominated by the calculation of the trace in Eq. 13, which is performed by Monte Carlo iterations of the above method, exploiting the fact that $`\left\langle \overline{T}^{\dagger }C^{-1}(\partial C/\partial C_\ell )C^{-1}\overline{T}\right\rangle =\mathrm{Tr}\left[(\partial C_S/\partial C_\ell )C^{-1}\right]`$. The OSH method requires effectively $`m_p^2`$ operations, a dramatic improvement over traditional methods. ## 6. Unsolved Problems The methods highlighted here have focused on solving one well-posed problem under a number of important simplifying assumptions. It is not obvious whether any of these assumptions are correct or indeed if the problem itself is as simple as we have described. In addition, there are other problems, as or more complex, which remain to be addressed. Here, we briefly touch on some of these issues. The improvements in speed discussed in the last section relied heavily on assuming the error matrix was close to being both diagonal and azimuthally symmetric. This may well be the case for the MAP satellite, because it measures the temperature difference between each point on the sky and very many other points at a fixed angular separation of $`120^{\circ }`$ at many different time scales. In doing so, the off-diagonal elements of the noise matrix are “beaten down” and may indeed be negligible. However, for almost all other cases (and indeed possibly for MAP when the effects of foreground subtraction are taken into account), the $`C_\ell `$ estimation problem becomes much more complicated. In the presence of significant striping or inhomogeneous sky coverage, the block-diagonality of the noise matrix is no longer a good approximation. In this case, finding a basis where both the signal and noise matrices are simple may not be possible. People have found signal-to-noise eigenmodes of the matrix $`C_N^{-1/2}C_SC_N^{-1/2}`$ (or $`C_S^{1/2}C_N^{-1}C_S^{1/2}`$ as in Sec. 5.5) to be useful for data compression and computation speedup, but finding them is another $`O(m_p^3)`$ problem.
One might try to solve this by splitting the data set up into smaller pieces and analyzing them separately, recombining the results at the end. However, as emphasized above, this can be difficult to do because of the global nature of the map-making process. Ignoring correlations between different regions is often a poor approximation. Due to the complicated noise correlation structure, optimally splitting and recombining may itself require the $`O(m_p^3)`$ operations we are trying to avoid. Another feature of realistic experiments that has not been properly accounted for in the formalism we have outlined is that of asymmetric or time-varying beams. The model of the experimental procedure we have given here (Eq. 3) assumes that all observations of a given pixel see the same temperature. This implicitly assumes an underlying model of the sky that has been both beam-smoothed and pixelized. (Pixelization effects were touched on in Sec. 5.4.) If the beam is not symmetric, or if it is time-varying, then different sweeps through the same pixel will see different sky temperatures. This is very difficult to account for exactly and may be crucial for some upcoming experiments which can have significantly asymmetric beams. In addition, large uncertainties in the nature of the foregrounds may make their removal quite tricky. Not only are they non-Gaussian, but, unlike the CMB, their frequency dependence is not well understood. Above, we have cast the problem of foreground separation as essentially a separate step in the process, between the making of maps at various frequencies and the estimation of the cosmological power spectrum. However, we may need to study foreground contaminants in as much detail as the CMB fluctuations themselves in order to fully understand their impact on parameter determination. Throughout, we have emphasized the assumption of Gaussianity for both the instrumental noise and the cosmological model. If one or both of these assumptions are violated, the theoretical underpinning of the algorithms we have described becomes shaky. Non-Gaussianity issues arise even in intrinsically Gaussian theories, due to foregrounds and non-linear effects. More worrisome are models with intrinsic non-Gaussianity at larger angular scales. How do we even begin to characterize an arbitrary distribution of sky temperatures? As it is sometimes put, describing non-Gaussian distributions is like describing “non-dog animals.” However, techniques do exist for finding specific flavors of non-Gaussianity; for example, estimates have been made recently of the so-called connected $`n`$-point functions for $`n>2`$, which vanish for a Gaussian theory. Other methods have tried to find structures using wavelets, which localize phenomena in both position on the sky and scale (wavenumber $`\ell `$). Still others have attempted to find topological measures of non-Gaussianity, focusing on fixed temperature contours, like the isotherms of a weather map. For all of these cases, however, both the theoretical predictions and the data analysis are considerably more difficult than for the algorithms presented here; in particular, none of them have been considered in the presence of complicated correlated noise. The computational challenges we have highlighted are associated specifically with parameter estimation from CMB data, but the problems are generic to other statistical measures that might be of interest.
For example, goodness-of-fit tests (like a simple $`\chi ^2`$, or more complicated examples like those explored in \[qmap ; knoxcompare \]) require calculation of a quadratic form involving inversion of $`m_p\times m_p`$ matrices, as in the parameter estimation examples above. One might hope that these problems may also be solvable given similar assumptions to those considered above, but this has yet to be addressed. Finally, we have not even touched on the problem of analyzing measurements of the polarization of the CMB, which results from Thomson scattering at the surface of last scattering. Although the essential aspects of the analysis are the same, polarization data will be considerably more difficult to handle for several reasons. First, because polarization is defined with respect to spatially fixed axes, we must combine measurements from different experimental channels in order to make an appropriate sky map. Second, the signal is expected to be about one tenth the amplitude of the already very small temperature anisotropies. Third, the polarization of foreground contaminants is even less well understood than their temperatures. With these greater experimental challenges, the resulting maps, and their construction algorithms, will be more complicated. ## 7. Finale Upcoming CMB data sets will contain within them many of the answers to questions that have interested cosmologists for decades: How much matter is there in the universe? What does it consist of? What did the universe look like at very early times? Our task will be to extract the answers and assess the errors from these large data sets. Especially challenging are the necessities for a global analysis of the data and for separating the various signals. Although some of the issues we face are specific to the CMB problem, many are of common concern to all astronomers facing the huge onslaught of data from the ground, balloons and space that the next millennium is bringing (see, e.g., the article on the Sloan Digital Sky Survey). We cannot rely on raw computing power alone. Computer scientists and statisticians are now collaborating with cosmologists in the quest for algorithmic advances. Figure 1 was provided by Kris Gorski; both computation and visualization were handled using the http://www.tac.dk/~healpix software package. Figure 4 was provided by Francois Bouchet and Richard Gispert. We also thank Julian Borrill and David Spergel for discussion of computer timings and algorithmic issues.
# Infrared spectroscopic variability of Cygnus X-3 in outburst and quiescence ## 1 Introduction Cygnus X-3 is a heavily obscured luminous X-ray binary in the Galactic plane which displays a unique and poorly understood combination of observational properties. These include strong radio emission, with a flat spectrum extending to (at least) mm wavelengths in quiescence (e.g. Waltman et al. 1994; Fender et al. 1995) and giant flares which are associated with a relativistic jet (e.g. Geldzahler et al. 1983; Fender et al. 1997; Mioduszewski et al. 1998). In the infrared the system is bright, with occasional rapid flare events and a thermal continuum consistent with a strong stellar wind (e.g. van Kerkwijk et al. 1996; Fender et al. 1996). There is no optical counterpart at wavelengths shorter than $`\sim 0.8\mu `$m due to heavy interstellar extinction. The system is persistently bright in soft and hard X-rays (e.g. van der Klis 1993; Berger & van der Klis 1994; Matz et al. 1996), with strong and variable metal emission lines (e.g. Liedahl & Paerels 1996; Kawashima & Kitamoto 1996). Several detections at $`\gamma `$-ray energies have been claimed but rarely confirmed (see e.g. Protheroe 1994). A clear and persistent (observed for $`>20`$ yr) asymmetric modulation in the X-ray and infrared continuum emission with a period of 4.8 hr (e.g. Mason, Cordova & White 1986) is interpreted as the orbital period of the system. This period is rapidly lengthening on a characteristic timescale of less than a million years (e.g. Kitamoto et al. 1995). Infrared spectroscopy of the system in 1991 (van Kerkwijk et al. 1992) first revealed the presence of broad emission lines and an absence of hydrogen which was reminiscent of Wolf-Rayet stars. These observations have subsequently been confirmed and expanded upon (van Kerkwijk 1993; van Kerkwijk et al. 1996) and the binary interpreted as comprising a compact object (neutron star or black hole) and the helium core of a massive star, embedded within a dense stellar wind. Such an evolutionary end-point was predicted for Cyg X-3 as far back as 1973 by van den Heuvel & de Loore (1973). Unfortunately most models of Wolf-Rayet stars do not envisage objects which can be contained within a 4.8 hr orbit, causing some dispute over this interpretation (e.g. Schmutz 1993). Doppler-shifting of the broad emission lines with the orbital period of the system, with maximum blue shift at X-ray minimum, is interpreted by van Kerkwijk (1993) and van Kerkwijk et al. (1996) as being due to the lines arising in the region of the stellar wind shadowed from the X-rays of the compact object by the companion star. In this way the semi-amplitude of the Doppler shifts reflects only the wind velocity and gives no information on the mass function of the system. Schmutz, Geballe & Schild (1996) interpret the Doppler-shifting of the emission lines with the orbital period more conventionally, as directly tracking the motion of the companion star, and derive a mass function which implies the presence of a black hole of mass $`>10M_{\odot }`$ in the system. However, their interpretation does not explain the phasing of the emission lines relative to the X-rays, nor is this discrepancy addressed in their work. Mitra (1996, 1998) has argued that Cyg X-3 cannot contain a massive W-R star, as the optical depth to X-rays for a compact object in a tight 4.8-hr orbit would be $`\gg 1`$. The alternative explanation put forward is that Cyg X-3 instead contains a neutron star and an extremely low-mass dwarf, cf. PSR 1957+20.
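The scale of the optical-depth problem raised by Mitra is easy to illustrate with a back-of-envelope estimate (a sketch in Python; the mass-loss rate is a typical Wolf-Rayet value that we assume for illustration, while the wind speed and separation anticipate values quoted later in this paper):

```python
import numpy as np
SIGMA_T, M_H = 6.65e-25, 1.67e-24        # Thomson cross-section, H mass (cgs)

Mdot = 1e-5 * 2.0e33 / 3.15e7            # assumed 1e-5 Msun/yr, in g/s
v_wind = 1.5e8                           # cm/s, ~1500 km/s (see Sec. 4.1)
r = 3.5e11                               # cm, ~5 Rsun separation (Sec. 5.1)
mu_e = 4.0                               # mass per electron in a helium wind

# electron density of a spherical wind at radius r, and the Thomson depth
# looking outward from r; for a spherically symmetric wind tau >> 1
n_e = Mdot / (4 * np.pi * r**2 * v_wind * mu_e * M_H)
print(f"n_e ~ {n_e:.1e} cm^-3, tau_T ~ {n_e * SIGMA_T * r:.0f}")   # tau ~ 100
```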
Van Kerkwijk (1993) discussed the dramatic variability in line strengths and line ratios in the infrared spectra of Cyg X-3 and suggested that when the source is bright in X-rays the emission lines should be weak and orbitally modulated, but when the source is weak in X-rays the lines should be strong and show little orbital modulation. However, as noted in van Kerkwijk et al. (1996), Kitamoto et al. (1994) show that the strength of infrared line and X-ray emission are in fact probably broadly correlated from epoch to epoch, with the strong-lined spectrum of 1991 being obtained during an outburst of the system. The explanation put forward for this was enhanced mass loss from the companion during outbursts, which increases both the X-ray brightness (more accretion) and the emission line strengths. This model was combined with detailed radio, (sub)mm and infrared (photometric) observations obtained during an outburst, and expanded upon in Fender et al. (1997). Waltman et al. (1997) clearly indicate the epochs of the published infrared spectra against the Green Bank 2 GHz radio monitoring of the system. In this paper we present four epochs of high-resolution infrared spectroscopy of Cyg X-3 with the Multiple Mirror Telescope over a two year period. These observations cover periods of quiescence, small flaring and major outburst as revealed in radio and X-ray monitoring, and we discuss the clear changes in the spectrum of the source as a function of state. In a future paper we will analyze and discuss the results of our spectra that fully sample the entire orbit of Cyg X-3 during quiescence and during outburst. ## 2 Observations ### 2.1 Infrared All observations were made using the Steward Observatory’s infrared spectrometer, FSpec (Williams et al. 1993), on the Multiple Mirror Telescope (MMT). The spectra were taken using the medium resolution, 300 g/mm grating, yielding a 2 pixel resolution element of 0.0018 $`\mu `$m, or R $`\sim `$ 1200 at 2.12 $`\mu `$m, and R $`\sim `$ 900 at 1.62 $`\mu `$m. The same observing procedure was used on all nights. A full log of these observations is provided in Appendix A. The spectrometer has a slit size of $`1.2^{\prime \prime }\times 32^{\prime \prime }`$ on the MMT, allowing Cyg X-3 to be observed in four unique positions along the slit. In the reductions, after dark current had been subtracted and the raw two-dimensional images divided by a flat field, sky emission and additional thermal background were removed by subtracting one slit position from the next. The integration times for Cyg X-3 were very long, either 2 or 4 minutes at each slit position (see tables in Appendix A). This is long enough that the strong atmospheric OH emission lines did not always subtract away cleanly, due to temporal variations in atmospheric conditions between slit positions. In many cases, a few percent scaling was required to get the OH features to disappear entirely. Background normalization of a few percent was performed to remove fluctuations in thermal background between integrations. Interspersed between our Cyg X-3 observations we obtained spectra of other stars which were used to correct for telluric absorption features. The same telluric standard star, HR 7826, an A1 V, was used throughout all observations. The intrinsic spectrum of the standard star, HR 7826, was determined using two secondary telluric standard stars, HR 7503 (16 Cyg A), a G1.5 V, and the O3 If\* star Cyg OB2 #7. A first estimate of the intrinsic spectrum of HR 7503 was obtained using a solar spectrum.
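As a concrete illustration of the standard-star method just described, here is a minimal sketch (in Python; the function and array names are our own illustrative constructs, not the actual reduction code):

```python
import numpy as np

def telluric_correct(target_obs, std_obs, std_intrinsic):
    """Remove telluric absorption: dividing the observed standard by its
    (separately determined) intrinsic spectrum isolates the atmospheric
    plus instrumental transmission, which is then divided out of the
    target.  All arrays share a common wavelength grid."""
    transmission = std_obs / std_intrinsic
    return target_obs / transmission

# Determining the intrinsic spectra themselves exploits the fact that in a
# ratio of two standards observed through the same atmosphere the telluric
# transmission cancels:  std1_obs/std2_obs = std1_intrinsic/std2_intrinsic.
# With three spectrally distinct standards (A1 V, G1.5 V, O3 If*), whose few
# intrinsic features (Br gamma, N iii) do not overlap, the system can be
# closed iteratively.
```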
The 2 $`\mu `$m spectrum of Cyg OB2 #7 is nearly featureless, with the exception of N iii at 2.115 $`\mu `$m (Hanson, Conti & Rieke 1996). The A1 V telluric standard contains only the Br $`\gamma `$ feature. By ratioing these three spectroscopically unique telluric standard stars against each other, we were able to obtain a good determination of the intrinsic spectrum of each star. The intrinsic spectrum of HR 7826 was determined during our first observing run in June 1996. This solution for the intrinsic spectrum was used throughout that run and in all subsequent observing runs. If our determination of the intrinsic spectrum of HR 7826 is not exactly correct, which is certainly the case at some level, any spurious features we have introduced will at least appear consistently in all of our Cyg X-3 spectra. This is important since it is our hope to study flux and velocity variations of very weak broad features in Cyg X-3 in an upcoming paper. Mean spectra for the four epochs of observation, and their relation to the changing X-ray and radio state of Cyg X-3, are shown in Fig 1. Note that these spectra are not normalised, whereas those throughout the rest of the paper are. This is in order to show the approximate constancy of the continuum slope in different states. The correlation between radio flaring and bright X-ray states, originally proposed by Watanabe et al. (1994), is also obvious from Fig 1. We began our first Cyg X-3 observing campaign in late May 1996, which is symbolized in Figure 1 as epoch A. For ten of the eleven consecutive nights, Cyg X-3 was observed at approximately the same UT. Because a 24 hour daily cycle is almost exactly five binary orbits, we were observing Cyg X-3 at close to the same orbital phase for these ten nights (see Table 1 in Appendix A). Furthermore, on 2 June 1996, Cyg X-3 was observed over an entire orbital period, from $`\varphi _X`$ = 0.185 to 1.181 (quadratic ephemeris of Kitamoto et al. 1995, where $`\varphi _X=0`$ corresponds to minimum X-ray flux in the 4.8 hr modulation, probably the point of superior conjunction of the compact object). During this first campaign, $`H`$-band spectra centred at 1.62 $`\mu `$m were also obtained on the 7th and 8th of June 1996 (see Fig 2). The second campaign of observations, represented by epoch B in Figure 1, began on 22 September 1996, when we obtained spectra covering the entire orbital period, from $`\varphi _X`$ = 0.382 to 1.429. One fifth of an orbit was observed the following night (Table 2 in the Appendix). The third observing campaign covered five consecutive nights beginning 16 July 1997 (Table 3 in the Appendix) and is represented in Figure 1 as epoch C. The fourth night of the observations taken during epoch C covered one orbit, sampling from $`\varphi _X`$ = 0.205 to 1.109. Our final observing campaign, represented by epoch D in Figure 1, covered just one quarter of an orbital period on 15 October 1997. ### 2.2 Radio The Ryle Telescope observations, at 15 GHz with a bandwidth of 350 MHz, follow the pattern described in Fender et al. (1997). Data points shown in Figs 1 and 3 are 5-min integrations. The typical uncertainty in the flux-density scale from day to day is 3%, and the rms noise on a single integration is less than 2 mJy. ### 2.3 XTE Cyg X-3 is monitored up to several times daily in the 2-12 keV band by the Rossi XTE All-Sky Monitor (ASM). See e.g. Levine et al. (1996) for more details.
The total source intensity in the 2-12 keV band for individual scans is plotted in the top panels of Figs 1 and 3. ## 3 Line Identifications Line identifications in Cyg X-3 are shown in Figure 2 and listed in Table 1. We display two different $`K`$-band spectra in Figure 2, the upper taken during a time of high X-ray and radio activity, the lower taken during quiescence. The strongest features include the 2.0587 $`\mu `$m He i singlet during outburst and the 2.1891 $`\mu `$m He ii (7-4) line during quiescence. The $`H`$-band spectrum centered at 1.62 $`\mu `$m displays only a few identifiable features: He ii (13-7) and (12-7) at 1.5719 and 1.6931 $`\mu `$m, respectively, and N v (10-9) at 1.554 $`\mu `$m. These $`H`$-band features were also evident in earlier UKIRT spectra from 1992 May 30, one day after $`K`$-band spectra revealed Cyg X-3 to be in a weak-lined state equivalent to quiescence as defined in this paper (M.H. van Kerkwijk private communication). There is no evidence for any Brackett series hydrogen features. The $`H`$-band spectrum shown in Figure 2 was taken in June 1996, when Cyg X-3 was in a quiescent phase. There is one absorption feature, centered at approximately 2.129 $`\mu `$m, that we have been unable to positively identify. It is unlikely that it is a feature due to intervening interstellar material, as numerous stars with line of sight extinction greater than ten magnitudes in the visible have been observed without ever showing such a feature (Tamblyn et al. 1995; Hanson, Howarth & Conti 1997; Watson & Hanson 1997). We suspect, then, that it must be related to the Cyg X-3 system. Curiously, it shows no shifting with the orbit, unlike the other lines in the $`K`$-band (with the possible exception of He i at 2.058 $`\mu `$m). This unidentified absorption feature (UAF) has since disappeared from the spectrum, starting in June 1997. We have seriously considered that the feature may be spurious, introduced by poor telluric corrections, or perhaps a bad pixel on the array. However, we see it present throughout the entire 11 day run in 1996 June, despite small changes in grating position, against three different telluric standard stars, and new calibration images taken each day. Furthermore, inspection of earlier 2 $`\mu `$m spectra of Cyg X-3, while of lower resolution, seems to substantiate the presence of a weak absorption feature at 2.129 $`\mu `$m (van Kerkwijk et al. 1996). However, without an identification, we are unable to comment further on its nature or its possible relation to the Cyg X-3 system. ## 4 Spectral variability In this section we discuss the observational properties at each of the four epochs when near-infrared spectra were obtained. It is our aim to establish the spectral characteristics and nature of any variations seen in Cyg X-3 in different radio and X-ray states. This may help us to identify the origin of the spectral features, be they from the secondary star or the compact object. Spectra characteristic of each epoch are plotted in Fig 1. ### 4.1 1996 May / June : quiescence Represented by epoch A in Figure 1, this is the longest continuous set of near-infrared observations ever taken of Cyg X-3. The source is in a state of radio and X-ray quiescence, with radio flux densities at 15 GHz in the range 40 – 140 mJy and XTE ASM fluxes in the range 4 – 9 count/sec. The spectrum is dominated by broad weak He ii and N v emission, and weak, narrower and intermittent He i (2.058 $`\mu `$m) absorption.
Nearly all spectral variability is related to the 4.8 hr orbital modulation, namely Doppler-shifting of the broad emission features. The full amplitude of the Doppler-shifting is of the same order as that reported by van Kerkwijk (1993) and Schmutz et al. (1996), i.e. $`\sim 1000`$–$`1500`$ km s<sup>-1</sup>. Orbitally phase-resolved spectra and dynamical interpretations will be presented elsewhere. The unidentified absorption feature (UAF) at 2.129 $`\mu `$m is also detected, but cannot be clearly identified with any known transition. The UAF shows no Doppler-shifting. ### 4.2 1996 September : small flaring The second set of observations, epoch B in Figure 1, caught Cyg X-3 in a more active phase. Radio observations at 15 GHz showed many small flares with flux densities ranging from 50 to 450 mJy, corresponding to the ‘small flaring’ state classified by Waltman et al. (1995). The XTE ASM recorded 8 – 17 count/sec, significantly higher and more variable than in 1996 May / June. However, the K-band spectrum is very similar to that obtained at epoch A, showing little variability that is not orbitally related, and being dominated by the broad weak He ii and N v emission. The unidentified absorption feature at 2.129 $`\mu `$m appears to have weakened considerably in the three months since 1996 May / June. ### 4.3 1997 June : outburst Epoch C represents observations during a major outburst of Cyg X-3. XTE ASM count rates varied rapidly between 14 and 32 ct/sec, having peaked at $`\sim 40`$ ct/sec around 100 days earlier. The radio emission was undergoing a second sequence of major flaring within 200 days. During the period of these observations flux densities of up to 3 Jy at 15 GHz were recorded. During the first period of radio flaring (MJD 50400 – 50500) Mioduszewski et al. (1998) clearly resolved an asymmetric, probably relativistic, jet from the source. The K-band spectrum at this epoch is wildly different from that at any other epoch, being dominated by what appear to be very strong double-peaked He i emission features, most obviously at 2.058 $`\mu `$m. Significant day-to-day spectral changes which are not related to orbital phase are evident at this epoch, unlike in quiescence (where spectral variability is almost entirely due to orbital modulation – see above). Figure 3 illustrates the dramatic variability in the strength of the double-peaked He i emission over the five nights of observations: strongest emission is present on the fourth night, 1997 June 19. Fig 4 shows in detail the rapid V/R variability, probably cyclic at the 4.8 hr orbital period, observed on this date. Figure 3 also hints at a possible anti-correlation between 2-12 keV X-ray flux and He i emission line strength on day timescales; while there is a large degree of X-ray variability in individual scans, the daily-averaged flux drops during the first four days to a minimum just after June 19. This is in contrast to the longer-term correlation between X-ray state and He i line strength, and probably results from a drop in ionisation state of the wind following a temporary decrease in X-ray flux. The dramatic day-to-day variability illustrated in Figure 3 suggests that the lines seen are transient features which are not tied to a steady-state wind of the secondary. The wind is most likely changing in ionisation state and/or density/velocity on day timescales.
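The wavelength scale of these Doppler amplitudes is easily checked against the instrumental resolution quoted in Sec. 2.1 (our own arithmetic):

```python
c = 3.0e5                                  # speed of light, km/s
for v in (1000.0, 1500.0):                 # full Doppler amplitudes quoted above
    dlam = 2.1891 * v / c                  # shift of the He ii 2.1891 um line
    print(f"{dlam:.4f} um = {dlam / 0.0018:.1f} resolution elements")
# ~0.007-0.011 um, i.e. 4-6 of the 0.0018 um two-pixel resolution elements,
# so the orbital shifts are comfortably resolved
```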
The strongest peak seen in He i on day 4, showing up blue-shifted at $`\varphi _X`$ = 0.0 and red-shifted at $`\varphi _X`$ = 0.5, is clearly evolving on a timescale of days, more than tripling in strength between days 3 – 4, and declining again within 24 hr. The appearance of the 1.0830 $`\mu `$m He i feature on 14 June 1993 (van Kerkwijk et al. 1996) likely represented another one of these events, though 2 $`\mu `$m spectra are not available to confirm this. van Kerkwijk et al. (1996) do, however, show there was a marked increase in K-band flux on 14 June 1993. Such near-infrared flux increases on day timescales have been seen during radio outbursts (Fender et al. 1997), and appear to be distinct from the more rapid (seconds-to-minutes timescales) infrared flaring which is often observed (e.g. Fender et al. 1996). ### 4.4 1997 October : post-outburst / small flaring Epoch D comprises a single short (64 min total) observation on 1997 October 15 ($`\varphi _X`$ = 0.58 – 0.81), during an apparent decline to quiescence following outburst, and again reveals previously unobserved features. Alongside the quiescent weak broad He ii and N v emission is strong He i 2.0587 $`\mu `$m absorption, displaying a P-Cygni profile. This absorption is stronger than observed at any time during epoch A. The absorption is present in all individual spectra, and there is no evidence for significant variability on short (minutes) time scales. The absorption minimum occurs, within uncertainties, at the rest wavelength of the transition, 2.058 $`\mu `$m, and the blue wing extends to $`\sim `$ 2.054 $`\mu `$m, implying a minimum outflow velocity of 500 km s<sup>-1</sup>. The He i absorption feature does not display any Doppler-shifting, though our phase coverage is not ideal. There are no other significant absorption features in the spectrum. The UAF is entirely absent. A comparison of the spectrum around the He i 2.0587 $`\mu `$m line with that in outburst (Fig 5) shows that the deep absorption may well be present in outburst also, but is completely dominated by much enhanced emission at this stage. This is compatible with the model for a disc-like wind which we explore in section 6 below. ## 5 Discussion ### 5.1 The Near-Infrared Spectral Type of the Secondary Van Kerkwijk et al. (1992) published the first near-infrared spectra of Cyg X-3, covering 0.72-1.05 $`\mu `$m (I-band) and 2.0-2.4 $`\mu `$m (K-band). These spectra, taken in late June 1991, displayed strong emission lines of He i and He ii. The I-band spectrum, in particular, showed a conspicuous absence of hydrogen lines. The lack (or much reduced fraction) of hydrogen, the strong He i emission at 2.058 $`\mu `$m, and the broad He ii emission lines were interpreted as coming from the wind of the binary companion to the compact object in Cyg X-3. Based on the 1991 spectrum, and using comparison spectra obtained of several Wolf-Rayet stars which were observed at the same time, a spectral type of WN7 was estimated for the companion. There are some problems, however, with the June 1991 spectra. The spectrum showed strong narrow He i at 2.0587 $`\mu `$m with strong, broad He ii, which is not generally seen in hydrogen-free WN Wolf-Rayet stars (Figer, McLean & Najarro 1997; cf. WR 123 in Crowther & Smith 1996). This subtle mis-match of spectral characteristics suggested that the lines seen in the original June 1991 spectrum did not originate solely from a WR-like wind. Indeed, subsequent spectra taken by van Kerkwijk et al.
(1993) showed that the originally strong He i features had since disappeared. These later spectra were now dominated by the broad He ii features, as well as N v and N iii. Such features are indicative of an earlier WR wind, perhaps WN4/5. However, as noted by van Kerkwijk et al. (1996), the line ratios between the nitrogen and He ii lines are not consistent with such an early spectral class. In fact, the near-infrared He ii lines in Cyg X-3 are extremely weak compared to other early WN stars (Crowther & Smith 1996; Figer et al. 1997). We are now able to show that the original 1991 June spectrum of van Kerkwijk et al. (1992) was anomalous and almost certainly associated with an outburst in the system. Our 1997 June spectra are dominated by double-peaked emission, which does not seem to be traced in the original 1991 spectrum. However, by choosing a phase that was dominated by one peak and smoothing our spectra to the lower resolution of the van Kerkwijk et al. (1992) spectrum, our 1997 June spectra became a very close match to the 1991 spectrum in both the lines detected and their relative strengths (Fig 6). As already suspected by van Kerkwijk et al. (1996), the original K-band spectrum of van Kerkwijk et al. (1992) therefore appears to have been anomalous due to an outburst of Cyg X-3. The quiescent spectrum, dominated by weak, broad He ii features, likely originates in the more steady-state wind of the stellar companion of Cyg X-3 and is our best diagnostic of the nature of this component. However, even this phase is not consistent with a normal WR wind. As first suggested by van Kerkwijk (1993), the presence of the high energy compact object, circling the companion star at very close radii (estimated to be on the order of 5 $`R_{\odot }`$), has likely altered the wind structure of its companion (the “Hatchett-McCray effect”; Hatchett & McCray 1977). Stellar winds in early-type stars are driven through high-opacity resonance lines of such species as C iv and N v. The predominance of very high energy photons from the compact object completely alters the ionisation structure and thus the driving force of the wind, and may entirely eliminate significant line formation (McCray & Hatchett 1975). In the presence of the compact object, an X-ray-excited, thermally driven wind is instead created, which may have little or no line formation (Stevens 1991; Blondin 1994). Where the compact object is entirely blocked by the stellar disc of the companion, the expanding wind from the helium star may be capable of creating a normal line-driven wind, giving rise, at least weakly, to the broad high-ionisation wind lines detected in Cyg X-3. With such an interpretation for the line emission seen at near-infrared wavelengths, it would be difficult to infer many characteristics of the companion star. The most important characteristics of the companion star are that it has a helium-rich atmosphere, and that it may be driving a fairly extensive, fast wind, both being reminiscent of late stages in massive star evolution. An early WN Wolf-Rayet star is likely the best candidate for the spectral type of the companion star. However, the mass of the companion star, and thus information on the mass of the compact object, cannot be uniquely or confidently determined from the spectrum. ### 5.2 Twin-peaked He i emission in outburst Here we discuss possible origins for the strong twin-peaked He i emission observed during outburst. #### 5.2.1 Jets ?
Simple arguments show that the twin-peaked emission lines cannot arise directly from material in the relativistic outflows which are inferred from high-resolution radio mapping of Cyg X-3 (e.g. Geldzahler et al. 1983; Mioduszewski et al. 1998). Firstly, the persistently stronger red wing as shown in Figure 5 is the opposite of what would be predicted from Doppler boosting, where the approaching (blue-shifted) emission would be boosted, and the receding component diminished. Secondly, the relatively low velocity ($`\sim 1500`$ km s<sup>-1</sup>) implied by the peak separation could only arise from a relativistic jet almost in the plane of the sky (i.e. with a very small radial component). In that case, the transverse Doppler shift due to time dilation would dominate, red shifting both components. For a velocity of around 0.3 c this would result in a red shift of both components by $`\sim 0.1`$ $`\mu `$m, which is not observed (note this effect is observed in SS 433 - see e.g. Margon 1984). The lack of a discernible transverse red shift (assuming the lines do correspond to the indicated He i transitions) effectively rules out a relativistic outflow. A lower-velocity non-relativistic jet is possible, although it still suffers from the problem of explaining the persistently stronger red peak, but we consider this unlikely as rapidly variable radio emission is occurring throughout this period (Fig 3). This is almost certainly associated with the production of a relativistic jet; given the existence of this jet and the strong wind, the presence of a third outflowing component (which matches the terminal velocity of the wind as inferred from quiescent observations) seems unlikely. #### 5.2.2 Accretion disc ? As already noted by van Kerkwijk et al. (1992), a significant contribution to the infrared emission of Cyg X-3 from an accretion disc is unlikely. This is because in order to generate the observed infrared luminosity the disc would need to be very hot ($`\sim 10^6`$ K), as its size is tightly constrained by the 4.8 hr orbit. Such a high temperature is hard to reconcile with the observed low-excitation He i features. However, the line profiles and velocity separation are reminiscent of features seen in optical spectra of accretion-disc-dominated systems, and it is worth checking in more detail. We can calculate the temperature that a black body (the most efficient emitter) would require in order to reproduce the observed line flux, given that its size is constrained by the dimensions of the orbit. We assume a distance of 8.5 kpc, a binary separation of $`5R_{\odot }`$, a K-band extinction $`A_K=2.3`$ mag, and a flux in emission lines which is about 10% of that in the continuum. For an observed flux density in the K-band of $`\sim 12`$ mJy (e.g. Fender et al. 1996) we find that we require a black-body temperature in excess of $`10^6`$ K in order to produce the flux in the emission lines within the binary separation. As the emission of the plasma producing the lines is much less efficient than that of a black body, there seems to be no way in which the relatively low-excitation He i lines can be produced within the scale of the binary separation, as these lines need temperatures $`T\lesssim 10^5`$ K (for reasonable densities). Using this temperature we can find a minimum dimension for the emitting region. As $`r\propto T^{-1/2}`$ we require an emitting region which is a factor of three larger, i.e. $`15R_{\odot }=10^{12}`$ cm. Such a large separation for a 4.8 hr orbit would imply a total mass in the system of 1000 $`M_{\odot }`$!
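Both of the numerical claims in this subsection are easy to verify (a quick check in Python; the constants are standard cgs values and the arithmetic is our own):

```python
import numpy as np
G, Rsun, Msun = 6.67e-8, 6.96e10, 2.0e33   # cgs constants

# 1) transverse Doppler shift of a 2 um line for v = 0.3c in the sky plane
gamma = 1.0 / np.sqrt(1.0 - 0.3**2)
print(2.0 * (gamma - 1.0))                 # ~0.1 um, as stated above

# 2) total mass required to fit a 15 Rsun separation inside a 4.8 hr orbit
a, P = 15 * Rsun, 4.8 * 3600.0
print(4 * np.pi**2 * a**3 / (G * P**2) / Msun)   # ~1.1e3 Msun
```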
So, we can rule out an emitting zone which is contained within the orbit of the system. Furthermore, the luminosity of the emission lines, both in outburst and quiescence, is orders of magnitude greater than that observed in K-band emission lines from the X-ray binary Sco X-1 (Bandyopadhyay et al. 1997). Given that Sco X-1, with a longer orbital period, probably possesses a larger (and hence brighter in the infrared) accretion disc than Cyg X-3, an origin for the infrared lines of Cyg X-3 in an accretion disc can be ruled out (unless the distance is overestimated by a factor of 10 or more - which seems highly unlikely given the broad agreement between high optical/infrared extinction, high $`N_H`$ in X-ray spectral fits, and the distance inferred from 21 cm radio observations). So, in agreement with van Kerkwijk et al. we must conclude that the He i emission lines arise from a region significantly larger than the binary separation of the system. This conclusion also rules out an origin for the emission lines in the X-ray-irradiated face of a relatively cool secondary. #### 5.2.3 An enhanced, possibly disc-like wind ? Here we discuss a third possible origin for the twin-peaked variable emission lines: a significant enhancement in the wind in Cyg X-3. This has already been suggested as the origin for outbursts from the system (Kitamoto et al. 1994; van Kerkwijk et al. 1996; Fender et al. 1997). Given that we have established that the twin-peaked emission lines almost certainly originate in an extended region which is not the jets, and the existing evidence for a strong wind in the Cyg X-3 system, a natural explanation is that the increased line strength in outburst represents an increase in the density of the WR-like wind in the system. Such an increase in density will be coupled to a decrease in the mean ionisation level of the wind, hence the much increased He i : He ii ratio. While an enhanced wind density of the companion star is a natural explanation for bright X-ray / radio states which reflect increased rates of accretion and jet formation, such enhancements have never been observed in other WN stars. Cyg X-3, however, is an exceptional system. It experiences both extreme tidal forces and irradiation, which likely induce erratic behavior and non-periodic variations in the extended atmosphere of the companion helium star. The appearance of the twin-peaked lines, and their variability (probably) in phase with the 4.8 hr orbit, suggest an origin in an asymmetric emitting region. We believe that this wind may be flattened and disc-like, probably in the plane of the binary (see e.g. Stee & de Araújo 1994 for predicted line profiles from a disc-wind). A flattened wind may have formed in the Cyg X-3 system as a result of a rapidly (synchronously) rotating mass donor and/or focussing of non-accreted material into the binary plane by the compact object. In this case most of the infrared emission arises from material in the plane of the binary but outside the orbit, and the optical depth along the line of sight from the X-ray source to the observer remains small as long as the system is not viewed edge-on (see Fig 7). In this way, the problem of reconciling the Wolf-Rayet spectral typing of the companion with the detection of X-ray emission from near the centre of the system, highlighted by Mitra (1996, 1998), can be side-stepped whilst also explaining the large infrared luminosity of the system.
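A toy calculation illustrates how flattening achieves this. The density law and Gaussian angular profile below are entirely our own illustrative choices, normalized to the spherical-wind numbers sketched in Sec. 1; only the qualitative inclination dependence matters:

```python
import numpy as np
SIGMA_T = 6.65e-25                        # Thomson cross-section, cm^2

def tau_los(theta, n0=4e11, r0=3.5e11, h=0.2, rmax=1e14):
    """Thomson depth outward from radius r0 at polar angle theta (measured
    from the disc-wind axis), through a toy flattened wind with
    n(r, theta) = n0 (r0/r)^2 exp(-(cos(theta)/h)^2)."""
    r = np.logspace(np.log10(r0), np.log10(rmax), 2000)
    n = n0 * (r0 / r)**2 * np.exp(-(np.cos(theta) / h)**2)
    return np.trapz(n * SIGMA_T, r)

for deg in (0, 45, 80, 90):               # 90 deg lies in the wind plane
    print(deg, round(tau_los(np.radians(deg)), 4))
# tau ~ 100 in the plane (edge-on), but << 1 for viewing directions more
# than a few opening angles away from it
```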
It is worth recalling, however, that Berger & van der Klis (1994) show from timing studies that the X-ray emission from Cyg X-3 must still be undergoing significant scattering. Further support for a disc-wind model may come from the infrared polarimetric observations of Jones et al. (1994), who found a significant degree of intrinsic polarisation from Cyg X-3 in the infrared K-band. They suggested that this may indicate a preferential plane of scattering in the binary. Several WR stars also show intrinsic polarisation, interpreted as arising from scattering in a flattened wind (e.g. Schulte-Ladbeck, Meade & Hillier 1992; Schulte-Ladbeck 1995; Harries, Hillier & Howarth 1998). Such intrinsic polarisation seems to be more common from WN-type Wolf-Rayets (Schulte-Ladbeck et al. 1992; Harries et al. 1998); and the only direct observation (radio interferometry) of a flattened Wolf-Rayet wind was also from a WN subtype (Williams et al. 1997). Additionally, the position angle of the radio jet in Cyg X-3 (e.g. Mioduszewski et al. 1998) is approximately perpendicular to the long axis of the flattened wind as inferred from the position angle (0 – 40 degrees) of the derived intrinsic infrared polarisation. Assuming the jet propagates along the axis of the accretion disc, which itself lies in the binary plane, this supports a model in which binary and wind planes are aligned. We discuss the interpretation of the outburst state in the context of a flattened disc-like wind in more detail below. ## 6 Orbital modulations with a disc-wind ### 6.1 Quiescence Our disc-wind model for Cyg X-3 is sketched in Fig 7. In such a model, Doppler-shifting of He ii and N v lines in quiescence would occur essentially as outlined in the model of van Kerkwijk (1993) and van Kerkwijk et al. (1996) (hereafter the ‘van Kerkwijk’ model). We expect most of this emission to arise from outside the binary, with the compact object orbiting within the wind-accelerating zone of the WR-like companion. As discussed in the Introduction and in van Kerkwijk et al. (1996) and Mitra (1998), the van Kerkwijk model naturally explains the phasing of the X-ray and infrared continuum modulation with the Doppler shifts, in contrast to the model of Schmutz et al. (1996). ### 6.2 Outburst During outburst, we presume that the much-enhanced mass loss and consequent higher wind density prevents the X-ray source from ionising anything but a small fraction of the wind (unlike in the van Kerkwijk model for quiescence, in which the majority of the wind is ionised). A quantitative level of enhancement above the quiescent state is difficult to estimate. A realistic model to describe the increased He i emission would require knowledge of the geometry and structure (the clumpiness) of this wind, as well as the fractional increase in mass loss rate. An increase in the soft X-ray flux by a factor of three during outburst indicates a corresponding increase in the mass accretion rate during such periods, although density enhancements close to the compact object may not exactly reflect those in the wind as a whole. It might be possible, given arguments based on the time scale of the line structures seen, to estimate a fractional density of the wind during outburst. Such an in-depth analysis is beyond the scope of this study. Possibly the most accurate measure of the degree of wind enhancement may come from measurements of ‘glitches’ in the orbital period derivative as angular momentum is lost from the system at a higher rate during the outbursts.
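The expected size of such a glitch is easy to estimate at the order-of-magnitude level (our own arithmetic; the tenfold outburst enhancement of the angular momentum loss rate is purely an assumption for illustration):

```python
# extra period lengthening accumulated over one outburst, if Pdot simply
# scales with the wind mass-loss rate
P = 4.8 * 3600.0                   # orbital period, s
tau = 1e6 * 3.15e7                 # s, quiescent P/Pdot timescale (Sec. 1)
t_outburst = 100 * 86400.0         # s, ~100-day outburst (Sec. 4.3)
boost = 10.0                       # assumed enhancement during outburst
print(boost * P * t_outburst / tau)   # ~0.05 s of excess lengthening
```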
The rapid, probably cyclic V/R variability observed during outburst could occur as a combination of three components: 1. Broad He i emission from the entire disc-wind, from approximately -1500 to +1500 km s<sup>-1</sup>. 2. An ionised region (Strömgren zone) local to the X-ray source which depletes the He i emission in that region of the orbit. 3. Lower-velocity ($`\sim 500`$ km s<sup>-1</sup>) blue-shifted (P-Cygni) absorption from the accelerating region of the wind observed against the companion star. This simple scheme (illustrated in the lower panel of Fig 7) can qualitatively explain the observed phasing of the V/R variability and the greater persistence of the red-shifted peak: * Phase 0.0 : X-ray source is on far side of wind from the observer. Red-shifted peak is depleted at relatively low velocities due to the ionised zone around the X-ray source. Similarly, the blue-shifted peak is depleted at lower velocities due to persistent P-Cyg absorption. * Phase 0.5 : X-ray source is on near side of wind. Blue-shifted emission is depleted both by P-Cyg absorption and ionisation from the X-ray source; the red-shifted peak is unaffected by either and is much stronger. In the context of this model, the spectrum obtained in 1997 Oct (epoch D, Fig 5) still shows deep P-Cyg absorption but much-reduced emission. This may represent an intermediate state in the return to quiescence in which the low-velocity absorption is still occurring in the densest parts of the wind, but beyond the binary orbit most of the material is ionised and He ii / N v dominate over He i as in quiescence. ## 7 Conclusions We have presented the most comprehensive and highest-resolution set of infrared spectra of Cyg X-3 to date. In combination with X-ray and radio monitoring we can characterize the infrared spectral behaviour of the source in outburst and quiescence. The underlying infrared spectrum of Cyg X-3, observed during both radio and X-ray outburst and quiescence, displays weak, broad, He ii and N v (but no He i) emission. Some He i 2.058 $`\mu `$m absorption may be present, preferentially around orbital phase zero. H-band spectra extend our spectral coverage and confirm the significant He-enrichment of the mass donor, with no evidence of any hydrogen features. While not perfect, the closest match to the spectrum is that of a hydrogen-depleted early WN-type Wolf-Rayet star. In outburst, the K-band spectrum becomes dominated by twin-peaked He i emission, which is shown to be unlikely to arise in relativistic jets or an accretion disc. This emission seems to arise in an enhanced wind density, presumably also responsible for the X-ray and radio outburst via enhanced accretion and related jet formation. This explains the observed long-term (outburst timescale) correlation between emission line strength and X-ray and radio state, as noted in Kitamoto et al. (1994). The emitting region almost certainly extends beyond the binary orbit, and displays significant day-to-day intensity variations, as well as V / R variability with orbital phase. The short-term (day-to-day) variability in He i line strength may be anticorrelated with X-ray flux due to a varying degree of ionisation of the wind. It seems that, for Cyg X-3 at least, the major X-ray and radio outbursts are due to mass-transfer, and not disc, instabilities. If this interpretation is correct then the period evolution of Cyg X-3, determined by extreme mass-loss from the system (van Kerkwijk et al. 1992; Kitamoto et al.
1995) will not be smooth, instead displaying periods of accelerated lengthening during outbursts. The detection and measurement of such ‘glitches’ would be important both for understanding the evolution of the Cyg X-3 system and for estimating the amount of additional circumstellar material present during outbursts. The appearance and variability of the emission features in outburst are suggestive of an asymmetric emitting region, and we propose that the wind in Cyg X-3 is significantly flattened, probably in the plane of the binary orbit. This may explain the intrinsic polarisation of the infrared emission from Cyg X-3, which indicates a scattering plane perpendicular to the radio jet axis. The interpretation of a flattened wind is supported by polarimetric and direct radio interferometric observations revealing evidence for flattened winds in other Wolf-Rayet stars. A simple model for the V/R variability in outburst, in the context of a flattened disc-wind, comprising a small ionised zone around the compact object and continuous P-Cygni absorption which erodes the blue-shifted wing, qualitatively explains the observations. Furthermore, a disc-like wind in the Cyg X-3 system also naturally explains why we can have both a large infrared luminosity and yet still observe the X-ray source, a problem highlighted by Mitra (1996, 1998) as being very serious for a spherically symmetric wind. While there is still significant scattering of the X-rays along the line of sight (see Berger & van der Klis 1994), it will be considerably less than in the case of a spherically symmetric wind. Additionally, we note that the apparent one-sidedness of the radio jet from Cyg X-3 in the latest VLBA observations (Mioduszewski et al. 1998) may arise not from a jet aligned near to the line of sight (implying a nearly face-on orbit which is seemingly incompatible with the strong orbital modulations observed) but instead from the obscuration of the receding (northerly) jet by the far side of the disc-wind. This would naturally explain why the jet is so apparently one-sided on small scales and yet symmetrical on larger scales. To conclude, the combination of a WR-like spectrum, high luminosity ($`M_K\sim -5`$) and evidence for a disc-like wind supports the interpretation of Cyg X-3 as a high-mass X-ray binary in a very transient phase of its evolution. ## Acknowledgements We wish to thank George and Marcia Rieke for the use of their near-infrared spectrometer and help during the observations. RPF would like to thank Rudy Wijnands for help with the XTE ASM light curves, and Marten van Kerkwijk, Elizabeth Waltman, Michiel van der Klis, Simon Clark, Jan van Paradijs, Lex Kaper and Rens Waters for many useful discussions. The MMT is jointly operated by the Smithsonian Astrophysics Observatory and the University of Arizona. We thank the staff at MRAO for maintenance and operation of the Ryle Telescope, which is supported by the PPARC. RPF was supported during the period of this research initially by ASTRON grant 781-76-017, and subsequently by EC Marie Curie Fellowship ERBFMBICT 972436. MMH has been supported by NASA through Hubble Fellowship grant #HF-1072.01-94A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555. ## Appendix A Observing logs Tables A1-4 below list the epochs and exposure times of every K- and H-band spectrum taken during all four observing runs.
The reduced spectra are downloadable from ftp://cdsarc.u-strasbg.fr/pub/cats/J/MNRAS/vol/page HJD, as used below, is heliocentric-corrected Julian Date, with 2450000.0 subtracted. The ‘Exposures’ column shows three numbers, indicating the number of spectra averaged, the number of exposures and the length of each exposure (in seconds). Orbital phase at the start of each observation is indicated by $`\varphi `$. We request that any use of these spectra in future publications makes reference to this work.
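For readers reducing similar data, one common construction of the tabulated HJD values can be sketched with astropy. The target coordinates below are Cyg X-3's approximate catalogue position, the date is an arbitrary example, and the site name 'mmt' is assumed to be available in astropy's online observatory registry; none of these are taken from the observing logs themselves.

```python
from astropy.time import Time
from astropy.coordinates import SkyCoord, EarthLocation

cyg_x3 = SkyCoord('20h32m25.78s', '+40d57m27.9s')   # approximate position of Cyg X-3
site = EarthLocation.of_site('mmt')                 # assumes 'mmt' is in the site registry

t = Time('1997-10-01T05:00:00', scale='utc', location=site)
ltt = t.light_travel_time(cyg_x3, kind='heliocentric')   # heliocentric correction
hjd = (t.tdb + ltt).jd - 2450000.0                  # HJD as tabulated: HJD - 2450000.0
print(f"HJD = {hjd:.4f}")
```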
# Probing the field-induced variation of the chemical potential in $`Bi_2Sr_2CaCu_2O_y`$ via magneto-thermopower measurements (to appear in JETP)

## a Mean value of the magneto-TEP: $`\mathrm{\Delta }S_{av}(T,H)`$.

Assuming that the net result of the magnetic field is to modify the chemical potential (Fermi energy) $`\mu `$ of quasiparticles, we can write the generalized GL free energy functional $`𝒢`$ of a superconducting sample in the mixed state as

$$𝒢[\psi ]=a(T)|\psi |^2+\frac{\beta }{2}|\psi |^4-\mu |\psi |^2. (3)$$

Here $`\psi =|\psi |e^{i\varphi }`$ is the superconducting order parameter, $`\mu (H)`$ stands for the field-dependent in-plane chemical potential of quasiparticles; $`a(T,H)=\alpha (H)(T-T_c)`$, and the GL parameters $`\alpha (H)`$ and $`\beta (H)`$ are related to the critical temperature $`T_c`$, the zero-temperature BCS gap $`\mathrm{\Delta }_0=1.76k_BT_c`$, the out-of-plane chemical potential (Fermi energy) $`\mu _c(H)`$, and the total particle number density $`n`$ as $`\alpha (H)=\beta (H)n/T_c=2\mathrm{\Delta }_0k_B/\mu _c(H)`$. In fact, in layered superconductors, $`\mu =\mu _c/\gamma ^2=m_{ab}^{}(J_cd/2\mathrm{})^2`$, where $`d`$ and $`J_c`$ are the interlayer distance and coupling energy within the Lawrence-Doniach model, and $`\gamma =\sqrt{m_c^{}/m_{ab}^{}}`$ is the mass anisotropy ratio. The magnetic field is applied normally to the $`ab`$-plane where the strongest magneto-TEP effects are expected. In what follows, we ignore the field dependence of the critical temperature since for all fields under discussion $`T_c(H)=T_c(0)(1-H/H_{c2})\simeq T_c(0)\equiv T_c`$. As usual, the equilibrium state of such a system is determined from the minimum energy condition $`\partial 𝒢/\partial |\psi |=0`$, which yields for $`T<T_c`$

$$|\psi _0|^2=\frac{\alpha (H)(T_c-T)+\mu (H)}{\beta (H)} (4)$$

Substituting $`|\psi _0|^2`$ into Eq.(3) we obtain for the average free energy density

$$\mathrm{\Omega }(T,H)\equiv 𝒢[\psi _0]=-\frac{[\alpha (H)(T_c-T)+\mu (H)]^2}{2\beta (H)} (5)$$

In turn, the magneto-TEP $`\mathrm{\Delta }S(T,H)`$ can be related to the corresponding difference of transport entropies $`\mathrm{\Delta }\sigma \equiv -\partial \mathrm{\Delta }\mathrm{\Omega }/\partial T`$ as $`\mathrm{\Delta }S(T,H)=\mathrm{\Delta }\sigma (T,H)/en`$, where $`e`$ is the charge of the quasiparticles. Finally the mean value of the mixed-state magneto-TEP reads (below $`T_c`$)

$$\mathrm{\Delta }S_{av}(T,H)=S_{p,av}(H)-B_{av}(H)(T_c-T), (6)$$

with

$$S_{p,av}(H)=\frac{\mathrm{\Delta }\mu (H)}{eT_c}, (7)$$

and

$$B_{av}(H)=\frac{8\mathrm{\Delta }_0k_B\mathrm{\Delta }\mu (H)}{eT_c\gamma ^2\mu ^2(0)}. (8)$$

Before we proceed to compare the above theoretical findings with the available experimental data, we first have to estimate the corresponding fluctuation contributions to the observable magneto-TEP, both above and below $`T_c`$.

## b Mean-field Gaussian fluctuations of the magneto-TEP: $`\mathrm{\Delta }S_{fl}(T,H)`$.

The influence of superconducting fluctuations on transport properties of HTS (including TEP and electrical conductivity) has been extensively studied for the past few years (see, e.g., and further references therein). In particular, it was found that the fluctuation-induced behavior may extend to temperatures more than $`10K`$ higher than the respective $`T_c`$. Let us consider now the region near $`T_c`$ and discuss the Gaussian fluctuations of the mixed-state magneto-TEP $`\mathrm{\Delta }S_{fl}(T,H)`$.
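Before turning to the fluctuations, the mean-field algebra of Eqs. (3)-(5) is easy to verify symbolically. The following minimal SymPy sketch, offered purely as a cross-check, treats $`|\psi |^2`$ as the minimization variable:

```python
# Cross-check of Eqs. (4) and (5): minimize G = a*p + (beta/2)*p**2 - mu*p
# over p = |psi|^2, with a(T,H) = alpha(H)*(T - T_c).
import sympy as sp

p, alpha, beta, mu, T, Tc = sp.symbols('p alpha beta mu T T_c', positive=True)

a = alpha * (T - Tc)                          # a(T,H) = alpha(H)(T - T_c)
G = a * p + sp.Rational(1, 2) * beta * p**2 - mu * p

p0 = sp.solve(sp.diff(G, p), p)[0]            # stationary point dG/dp = 0
print(sp.simplify(p0))                        # (alpha*(T_c - T) + mu)/beta   -> Eq. (4)

Omega = sp.factor(G.subs(p, p0))              # free energy at the minimum
print(Omega)                                  # -(alpha*(T_c - T) + mu)**2/(2*beta) -> Eq. (5)
```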
Recall that according to the theory of Gaussian fluctuations, the fluctuations of any observable which is conjugated to the order parameter $`\psi `$ (such as heat capacity, susceptibility, etc.) can be presented in terms of the statistical average of the square of the fluctuation amplitude $`<(\delta \psi )^2>`$ with $`\delta \psi =\psi -\psi _0`$. Then the TEP above $`(+)`$ and below $`(-)`$ $`T_c`$ has the form

$$S_{fl}^\pm (T,H)=A<(\delta \psi )^2>_\pm =\frac{A}{Z}\int d|\psi |(\delta \psi )^2e^{-\mathrm{\Sigma }[\psi ]}, (9)$$

where $`Z=\int d|\psi |e^{-\mathrm{\Sigma }[\psi ]}`$ is the partition function with $`\mathrm{\Sigma }[\psi ]\equiv (𝒢[\psi ]-𝒢[\psi _0])/k_BT`$, and $`A`$ is a coefficient to be defined below. Expanding the free energy density functional $`𝒢[\psi ]`$

$$𝒢[\psi ]\simeq 𝒢[\psi _0]+\frac{1}{2}\left[\frac{\partial ^2𝒢}{\partial \psi ^2}\right]_{|\psi |=|\psi _0|}(\delta \psi )^2, (10)$$

around the mean value of the order parameter $`\psi _0`$, which is defined as a stable solution of the equation $`\partial 𝒢/\partial |\psi |=0`$, we can explicitly calculate the Gaussian integrals. Due to the fact that $`|\psi _0|^2`$ is given by Eq.(4) below $`T_c`$ and vanishes at $`T\ge T_c`$, we obtain finally

$$S_{fl}^{-}(T,H)=\frac{Ak_BT_c}{4\alpha (H)(T_c-T)+4\mu (H)},\qquad T\le T_c (11)$$

and

$$S_{fl}^{+}(T,H)=\frac{Ak_BT_c}{2\alpha (H)(T-T_c)-2\mu (H)},\qquad T\ge T_c (12)$$

As we shall see below, for the experimental range of parameters under discussion, $`\mu (H)/\alpha (H)\gg |T_c-T|`$. Hence, with good accuracy we can linearize Eqs.(11) and (12) and obtain for the fluctuation contribution to the magneto-TEP

$$\mathrm{\Delta }S_{fl}^\pm (T,H)\simeq S_{p,fl}^\pm (H)\pm B_{fl}^\pm (H)(T_c-T), (13)$$

where

$$S_{p,fl}^{-}(H)=-\frac{Ak_BT_c\mathrm{\Delta }\mu (H)}{4\mu ^2(0)},\qquad S_{p,fl}^{+}(H)=-2S_{p,fl}^{-}(H), (14)$$

and

$$B_{fl}^{-}(H)=-\frac{3Ak_B^2T_c\mathrm{\Delta }_0\mathrm{\Delta }\mu (H)}{\gamma ^2\mu ^4(0)},\qquad B_{fl}^{+}(H)=-2B_{fl}^{-}(H). (15)$$

Furthermore, it is reasonable to assume that $`S_p^{-}=S_p^{+}\equiv S_p`$, where $`S_p^{-}=S_{p,av}+S_{p,fl}^{-}`$ and $`S_p^{+}=S_{p,fl}^{+}`$. Then the above equations bring about the following explicit expression for the constant parameter $`A`$, namely $`A=4\mu ^2(0)/3ek_BT_c^2`$. This in turn leads to the following expressions for the fluctuation contribution to the peaks and slopes through their average counterparts (see Eqs.(7) and (8)): $`S_{p,fl}^{+}(H)=(2/3)S_{p,av}(H)`$, $`S_{p,fl}^{-}(H)=-(1/3)S_{p,av}(H)`$, $`B_{fl}^{-}(H)=-(1/2)B_{av}(H)`$, and $`B_{fl}^{+}(H)=B_{av}(H)`$. Finally, the total contribution to the observable magneto-TEP reads (Cf. Eq.(1))

$$\mathrm{\Delta }S(T,H)=S_p(H)\pm B^\pm (H)(T_c-T), (16)$$

where

$$S_p(H)=\frac{2\mathrm{\Delta }\mu (H)}{3eT_c},\qquad B^{+}(H)\equiv B_{fl}^{+}(H)=2B^{-}(H), (17)$$

and

$$B^{-}(H)\equiv B_{av}(H)+B_{fl}^{-}(H)=\frac{4\mathrm{\Delta }_0k_B\mathrm{\Delta }\mu (H)}{eT_c\gamma ^2\mu ^2(0)}. (18)$$

Let us now compare the obtained theoretical expressions with the typical experimental data on textured $`Bi_2Sr_2CaCu_2O_y`$ for the slopes $`B^\pm (H)`$ and the peak $`S_p(H)`$ values for $`H=0.12T`$ (see Fig.1): $`S_p=0.16\pm 0.01\mu V/K`$, $`B^{-}=0.012\pm 0.001\mu V/K^2`$, and $`B^{+}=0.027\pm 0.003\mu V/K^2`$. First we notice that the calculated slopes $`B^{+}(H)`$ above $`T_c`$ are twice their counterparts below $`T_c`$, i.e., $`B^{+}(H)=2B^{-}(H)`$, in good agreement with the observations.
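Plugging the quoted peak value into Eq. (17) gives the scale of the field-induced chemical-potential shift directly. The short sketch below does this arithmetic, assuming $`T_c\approx 90`$ K for Bi-2212 (the text does not quote $`T_c`$ explicitly); the result is consistent with the $`\mathrm{\Delta }\mu (H)\simeq 0.02`$ meV quoted in the next paragraph:

```python
# Invert Eq. (17), S_p = 2*Delta_mu/(3*e*T_c), for Delta_mu using the measured
# peak S_p = 0.16 microV/K at H = 0.12 T. T_c ~ 90 K is an assumed value.
T_c = 90.0              # assumed critical temperature [K]
S_p = 0.16e-6           # measured peak [V/K]

# Delta_mu/e = (3/2)*T_c*S_p has units of volts, i.e. Delta_mu in eV:
delta_mu_eV = 1.5 * T_c * S_p
print(f"Delta mu(H = 0.12 T) ~ {delta_mu_eV * 1e3:.3f} meV")   # ~0.022 meV
```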
Using $`\gamma \simeq 55`$ and $`d=1.2`$ nm for the anisotropy ratio and interlayer distance in this material, we obtain reasonable estimates of the field-induced changes of the in-plane chemical potential (Fermi energy) $`\mathrm{\Delta }\mu (H)`$ (along with its zero-field value $`\mu (0)`$) and the interlayer coupling energy $`J_c`$. Namely, $`\mu (0)\simeq 1.6`$ meV, $`\mathrm{\Delta }\mu (H)\simeq 0.02`$ meV, and $`J_c\simeq 4`$ meV. Furthermore, relating the field-induced variation of the in-plane chemical potential to the change of the corresponding magnetization $`M(H)`$, viz.

$$\mathrm{\Delta }\mu (H)=\frac{M(H)H}{n_h}, (19)$$

where $`M(H)`$ for $`H_{c1}\ll H\ll H_{c2}`$ has the form (recall that the lower critical field for this material is $`H_{c1}=(\varphi _0/4\pi \lambda _{ab}^2)\mathrm{ln}\kappa \simeq 40`$ G with $`\lambda _{ab}\simeq 250`$ nm, $`\xi _{ab}\simeq 1`$ nm, and $`\kappa \simeq 250`$)

$$\mu _0M(H)=\frac{2\varphi _0}{\sqrt{3}\lambda _{ab}^2}\left\{\mathrm{ln}\left[\frac{3\varphi _0}{4\pi \lambda _{ab}^2(H-H_{c1})}\right]\right\}^{-2}-H, (20)$$

we obtain $`n_h\simeq 2.5\times 10^{27}`$ m<sup>-3</sup> for the hole number density in this material, in reasonable agreement with other estimates of this parameter. Fig.2 shows $`\mathrm{\Delta }\mu (H)`$ calculated according to Eq.(19) together with the experimental data points deduced (via Eq.(17)) from the magneto-TEP measurements on the same sample. As is seen, the data are in good agreement with the model predictions. And finally, using the above parameters (along with the critical temperature), we find that $`\mu (H)/\alpha (H)\simeq 100`$ K, which justifies the use of the linearized Eq.(13) since, as is seen in Fig.1, the observed magneto-TEP practically vanishes for $`|T_c-T|\ge 15`$ K.

In conclusion, to probe the variation of the chemical potential $`\mathrm{\Delta }\mu (H)`$ of quasiparticles in anisotropic materials under an applied magnetic field, we calculated the mixed-state magneto-thermopower $`\mathrm{\Delta }S(T,H)`$ in the presence of field-modulated charge effects near $`T_c`$. Using the available magneto-TEP experimental data on textured $`Bi_2Sr_2CaCu_2O_y`$, the field-induced behavior of the in-plane $`\mathrm{\Delta }\mu (H)`$ was obtained, along with reasonable estimates for its zero-field value (Fermi energy) $`\mu (0)`$, the interlayer coupling energy $`J_c`$, and the hole number density $`n_h`$ in this material.

We thank A. Varlamov for very useful discussions on the subject. Part of this work has been financially supported by the Action de Recherche Concertées (ARC) 94-99/174. S.A.S. acknowledges the financial support from FNRS.
# On the extraction of skewed parton distributions from experiment

## I Introduction

Skewed parton distributions<sup>*</sup><sup>*</sup>*This is the unified terminology since the Regensburg conference of ’98, finally eradicating the many terms like non-diagonal, off-diagonal, non-forward and off-forward which have populated the literature on this subject over the last few years. However, recent publications have, alas, again fallen back upon the old terminology! which appear in exclusive, hard diffractive processes like deeply virtual Compton scattering (DVCS)<sup>†</sup><sup>†</sup>†First discussed in Ref. . or vector meson production with a rapidity gap, to name just a few, have attracted a lot of theoretical and experimental interest over the last few years as a hotbed for interesting new QCD physics . The list of references is probably far from complete and thus we apologize beforehand to everybody not mentioned. The basic concept of skewed parton distributions is illustrated in Fig. 1 with the lowest order graph of DVCS, in which a quark of momentum fraction $`x_1`$ leaves the proton and is returned to the proton with momentum fraction $`x_2`$. The two momentum fractions are not equal because an on-shell photon is produced, which necessitates a change in the $`+`$ momentum in going from the virtual space-like photon, with $`+`$ momentum usually taken to be $`xp_+`$, where $`p_+`$ is the appropriate light cone momentum of the proton and $`x`$ is the usual Bjorken $`x`$, to the basically zero $`+`$ momentum of the real photon. This sets $`x_2=x_1-x`$ and thus the skewedness parameter to $`x`$ (see for more details on the kinematics). Thus one has a nonzero momentum transfer onto the proton, and the parton distributions which enter the process are no longer the regular parton distributions of inclusive reactions, since the matrix element of the appropriate quark and gluon operators is now taken between states of unequal momentum rather than equal momentum as in the inclusive case (see for example ). These parton distributions still obey DGLAP-type evolution equations but of a generalized form (see for example Radyushkin’s references in ). The above mentioned kinematical situation is not the only one possible. One can also have the situation where $`x_2`$ becomes negative. In this case a quark is not returned to the proton; rather, an anti-quark is emitted. In this situation one no longer deals with parton distributions but rather with distribution amplitudes, which obey Efremov-Radyushkin-Brodsky-Lepage (ERBL) type evolution equations (again see, for example, Radyushkin’s references in ). Furthermore, both momentum fractions could be negative, in which case one is dealing with anti-quark distributions which again obey DGLAP-type evolution equations. After having answered the question of how these skewed parton distributions arise, the next question is which of the exclusive, hard diffractive processes is most suitable for extracting these skewed parton distributions and how this can be achieved. This question will be answered in the following sections, where we discuss the most promising process and the appropriate experimental observable in Sec. II; in Sec. III we will explain the algorithm and the problems associated with it, and finally in Sec. IV we will give an outlook on further research in this area.
## II Appropriate Process and experimental observable

The most desirable process for extracting skewed parton distributions is the one with the least theoretical uncertainty, the least singular $`Q^2`$ behavior (so as to be accessible over a broad range of $`Q^2`$), and a proven factorization formula. The last requirement is actually the most important one, since without a factorization theorem one has no reliable theoretical basis for extracting parton distributions. The process which fulfills all the above criteria is DVCS: it is least suppressed in $`Q^2`$ of all known exclusive, hard diffraction processes, in fact it is only down by an additional factor of $`Q^2`$ in the differential cross section as compared to DIS<sup>‡</sup><sup>‡</sup>‡Compare this to the $`1/Q^8`$ behavior of vector meson production, di-muon production or di-jet production.; the theoretical uncertainty is minimal, since we are dealing with an elementary particle in the final state as compared to, for example, vector meson production, where one also has to deal with the vector meson wavefunction in the factorization formula as an additional uncertainty; and there exists a proven factorization formula . Furthermore it was shown in Ref. that there will be sufficient DVCS events at HERA as compared to DIS, albeit only at small $`x`$ between $`10^{-4}`$ and $`10^{-2}`$, to allow an analysis with enough statistics. The experimental observable which allows direct access to the skewed parton distributions is the azimuthal angle asymmetry $`A`$ of the combined DVCS and Bethe-Heitler (BH)<sup>§</sup><sup>§</sup>§In the Bethe-Heitler process the incoming electron exchanges a Coulomb photon with the proton and radiates off a real photon, either before or after the interaction with the proton. differential cross section, where the azimuthal angle is between the final-state proton–$`\gamma `$ plane and the electron scattering plane. $`A`$ is defined as:

$$A=\frac{\int _{-\pi /2}^{\pi /2}d\varphi \,d\sigma _{DVCS+BH}-\int _{\pi /2}^{3\pi /2}d\varphi \,d\sigma _{DVCS+BH}}{\int _0^{2\pi }d\varphi \,d\sigma _{DVCS+BH}}. (1)$$

In other words, one counts the events where electron and photon are in the same hemisphere of the detector, subtracts the number of events where they are in opposite hemispheres, and normalizes this expression with the total number of events. The reason why this asymmetry is not $`0`$ is the interference term between BH and DVCS which, unlike the pure DVCS and BH differential cross sections that are constant in $`\varphi `$, is proportional to $`\mathrm{cos}(\varphi )`$ and to the real part of the DVCS amplitude. The factorized expression for the real part of the amplitude takes the following form :

$$ReT(x,Q^2)=\int _{-1+x}^1\frac{dy}{y}ReC_i(x/y,Q^2)f_i(y,x,Q^2). (2)$$

$`ReC_i`$ is the real part of the hard scattering coefficient and the $`f_i`$ are the skewed parton distributions. The sum over the parton index $`i`$ is implied, and $`y`$ is defined to be the parent momentum fraction in the parton distribution. As mentioned above, one is mainly restricted to the small-$`x`$ region where gluons dominate, and thus $`i`$ in Eq. (2) will be only $`g`$ to a very good accuracy. Note that since the parton distributions are purely real, the real part of the amplitude in its factorized form has to contain the real part of the hard scattering coefficient. Thus Eq. (1) contains only measurable or directly computable quantities except for Eq. (2) in the interference part of the differential cross section for real photon production, which is isolated in Eq. (1).
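The $`\varphi `$ structure behind Eq. (1) can be made concrete with a toy cross section. In the sketch below, $`d\sigma /d\varphi =c_0+c_1\mathrm{cos}\varphi `$, where $`c_0`$ and $`c_1`$ are arbitrary illustrative constants (not fitted values); for this form the asymmetry is analytically $`A=2c_1/\pi c_0`$:

```python
import numpy as np
from scipy.integrate import quad

c0, c1 = 1.0, 0.3                                # illustrative constants
dsigma = lambda phi: c0 + c1 * np.cos(phi)       # flat DVCS+BH terms plus interference

same, _ = quad(dsigma, -np.pi / 2, np.pi / 2)    # electron and photon in same hemisphere
opp, _ = quad(dsigma, np.pi / 2, 3 * np.pi / 2)  # opposite hemispheres
total, _ = quad(dsigma, 0.0, 2 * np.pi)

print((same - opp) / total, 2 * c1 / (np.pi * c0))   # both ~0.191
```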
Therefore, one would now be able to extract the skewed parton distributions from experimental information on $`A`$, the directly computable part of the interference term, and the knowledge of the hard scattering coefficient, if one could deconvolute Eq. (2). As we will see in the next section this direct deconvolution is not possible; however, there is a way around the deconvolution problem.

## III Algorithms for extracting skewed parton distributions

### A The Deconvolution Problem in DIS

The deconvolution problem in inclusive DIS presents itself in a similar way as in Eq. (2). For the structure function $`F_2(x,Q^2)`$, for example, one has the following factorization equation (in a general form):

$$F_2(x,Q^2)=\int _x^1\frac{dy}{y}C_i(x/y,Q^2)f_i(y,Q^2), (3)$$

where one has the same situation as in Eq. (2) except, of course, that the hard scattering coefficient $`C_i`$ and the parton distributions $`f_i`$ are now different from the DVCS case. Also notice that the parton distribution now depends only on $`y`$ rather than on $`y`$ and $`x`$. In this case one can easily deconvolute Eq. (3) by taking moments in $`x`$ via $`\int _0^1dx\,x^N`$. It is an easy exercise to show that the convolution integral turns into a product in moment space:

$$\stackrel{~}{F}_2(N,Q^2)=\stackrel{~}{C}_i(N,Q^2)\stackrel{~}{f}_i(N,Q^2). (4)$$

Thus, after having calculated the hard scattering coefficient to the appropriate order and having measured $`F_2`$ such that the moment integral can be taken numerically, one can directly extract the parton distribution. What remains to be done is to perform the inverse Mellin transform to obtain the parton distribution in terms of $`x`$ and $`Q^2`$. Of course, we have simplified the actual procedure, and the inverse Mellin transform is also not easy to perform, but this example serves more as a pedagogical exercise to illustrate the basic concept of deconvolution and extraction of parton distributions. In the case of interest to us, however, life is not that “simple”, since the skewed parton distributions depend on two rather than one variable. Furthermore, the hard scattering coefficient depends on the same variables as the parton distribution. This makes the deconvolution of Eq. (2), at least to the best knowledge of the author, impossible: both the hard scattering coefficient and the parton distribution have two rather than one variable in common, one of which is even fixed, and thus one does not have enough information to perform a deconvolution. This seems like an intractable problem, but there is a way out. For the purpose of as simple a presentation as possible the following discussion will only be done in LO, but the same principles also apply in NLO. Moreover, the precision of the data in the foreseeable future will be such that a leading order analysis will be sufficient. The following two discussions rest heavily on the methods in Ref. .

### B The First-Principles Extraction Algorithm

The basic idea of this algorithm is to expand the parton distributions in terms of orthogonal polynomials so as to reduce the unknown quantities in the factorization formula for the real part of the DVCS amplitude to a number of unknown coefficients, which can be obtained through an inversion of a known matrix and DVCS data on the real part of the amplitude. As is well known, one can expand parton distributions, or any smooth function for that matter, with respect to a complete set of orthogonal polynomials $`P_j^{(\alpha _P)}(t)`$, where $`t`$ is used here to shorten the notation.
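As an aside, the moment-space factorization of Eqs. (3) and (4) is straightforward to verify numerically; in the sketch below the functions $`C`$ and $`f`$ are arbitrary smooth stand-ins chosen for illustration, not actual QCD inputs:

```python
from scipy.integrate import quad

C = lambda z: (1.0 - z)**2            # toy "hard scattering coefficient"
f = lambda y: y**0.5 * (1.0 - y)      # toy "parton distribution"
N = 2                                 # moment index, using the text's x^N convention

F2 = lambda x: quad(lambda y: C(x / y) * f(y) / y, x, 1.0)[0]   # Eq. (3)

lhs = quad(lambda x: x**N * F2(x), 0.0, 1.0)[0]                 # moment of the convolution
rhs = quad(lambda z: z**N * C(z), 0.0, 1.0)[0] * quad(lambda y: y**N * f(y), 0.0, 1.0)[0]
print(lhs, rhs)                       # agree to quadrature accuracy, as in Eq. (4)
```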
The orthogonality of the polynomials of our choice needs to be on the interval $`-1\le t\le 1`$ with $`t=\frac{2y-x}{2-x}`$, which translates to an interval in $`y`$ of $`-1+x\le y\le 1`$, as found as the upper and lower bounds of the convolution integral in Eq. (2). One can then write the following expansion:

$$f^{q,g}(t,x,Q^2)=\frac{2}{2-x}\sum _{j=0}^{\infty }\frac{w(t|\alpha _P)}{n_j(\alpha _P)}P_j^{q,g}(t)M_j^{q,g}(x,Q^2) (5)$$

with $`w(t|\alpha _P)`$ and $`n_j(\alpha _P)`$ being weight and normalization factors determined by the choice of the orthogonal polynomial used. The labels $`q,g`$ for quarks and gluons are necessary since the $`j`$ label will be different for quarks and gluons. $`\alpha _P`$ is a label which depends on the orthogonal polynomials used ($`\alpha _P=\alpha ,\beta `$, in other words two labels, if Jacobi polynomials are used, or $`\alpha _P=\mu -1/2`$ if Gegenbauer polynomials are used). $`M_j^{q,g}(x,Q^2)`$ is given by:

$$M_j^{q,g}(x,Q^2)=\sum _{k=0}^{\infty }E_{jk}^{q,g}(\nu ;\alpha _P|x)f_k^{q,g}(x,Q^2), (6)$$

where

$$f_k^{q,g}(x,Q^2)=\sum _{l=0}^kx^{l-k}B_{lk}^{q,g}\stackrel{~}{f}_l^{q,g}(x,Q^2). (7)$$

$`B_{lk}^{q,g}`$ is an operator transformation matrix which fixes the NLO corrections to the eigenfunctions of the kernels. The explicit form of the transformation matrix $`B_{lk}^{q,g}`$ can be found in Eq. $`(35)`$ of the second article of Ref. , for example. The explicit form is not important, however, for either this discussion or the conclusion of this paper, since we are dealing only with a LO analysis, in which case the transformation matrix is just the identity matrix. The general form was included just for completeness’ sake. The moments $`\stackrel{~}{f}_l^{q,g}(x,Q^2)`$ of the parton distributions in Eq. (7) generally evolve according to

$$\stackrel{~}{f}_l^{q,g}(x,Q^2)=\stackrel{~}{K}_l^{ik}(\alpha _s(Q^2),\alpha _s(Q_0^2))\stackrel{~}{f}_l^{q,g}(x,Q_0^2) (8)$$

where the evolution operator is a matrix ($`i`$, $`k`$ equal either $`q`$ or $`g`$) of functions in the singlet case (and just a function in the non-singlet case), taking account of quark and gluon mixing and depending on the order in the strong coupling constant. Striving for simplicity, we note that the above expansion is simplest in LO in the basis of Gegenbauer polynomials, since they are the eigenfunctions of the evolution kernels at LO. Thus we will use these polynomials from now on in our formulas. The Gegenbauer moments of the initial parton distributions at $`Q_0^2`$ in Eq. (8) are then defined in the following way:

$$\stackrel{~}{f}_l^q(x,Q_0^2)=\int _{-1}^1dt\left(\frac{x}{2-x}\right)^lC_l^{3/2}\left(\frac{t-x}{2-x}\right)f^q(t,x,Q_0^2) (9)$$

$$\stackrel{~}{f}_l^g(x,Q_0^2)=\int _{-1}^1dt\left(\frac{x}{2-x}\right)^{l-1}C_{l-1}^{5/2}\left(\frac{t-x}{2-x}\right)f^g(t,x,Q_0^2). (10)$$

Turning now to the expansion coefficients in Eq. (6). The upper limit of the sum in Eq.
(6) is given by the constraint $`\theta `$-functions $`\theta _{jk}=1\text{ if }k\le j;\;0\text{ if }j<k`$, present in the expansion coefficients, which are defined in terms of Gegenbauer polynomials by

$$E_{jk}^{q,g}(\nu ;\mu |x)=\frac{1}{2}\theta _{jk}\left[1+(-1)^{j-k}\right]\frac{\mathrm{\Gamma }(\nu )}{\mathrm{\Gamma }(\mu )}\frac{(-1)^{\frac{j-k}{2}}\mathrm{\Gamma }(\mu +\frac{j+k}{2})}{\mathrm{\Gamma }(\nu +k)\mathrm{\Gamma }(1+\frac{j-k}{2})}(2-x)^k{}_{2}F_{1}\left(-\frac{j-k}{2},\mu +\frac{j+k}{2},\nu +k+1\,\Big|\,\frac{x^2}{(2-x)^2}\right), (12)$$

where $`\nu =\mu =3/2`$ for quarks and $`\nu =\mu =5/2`$ for gluons. The general expansion in Eq. (5) then reduces to

$$f^q(t,x,Q^2)=\frac{2}{2-x}\sum _{j=0}^{\infty }\sum _{k=0}^{\infty }\frac{w(t|3/2)}{N_j(3/2)}E_{jk}^q(3/2|x)C_j^{3/2}(t)\stackrel{~}{f}_k^q(x,Q^2) (13)$$

$$f^g(t,x,Q^2)=\frac{2}{2-x}\sum _{j=0}^{\infty }\sum _{k=1}^{\infty }\frac{w(t|5/2)}{N_j(5/2)}E_{jk-1}^g(5/2|x)C_{j-1}^{5/2}(t)\stackrel{~}{f}_{k-1}^g(x,Q^2), (14)$$

with $`w(t|\nu )=(1-t^2)^{\nu -1/2}`$, $`N_j(\nu )=2^{-2\nu +1}\frac{\mathrm{\Gamma }^2(1/2)\mathrm{\Gamma }(2\nu +j)}{\mathrm{\Gamma }^2(\nu )(\nu +j)j!}`$, and the $`C_j^\nu `$ are Gegenbauer polynomials. As we are in LO, multiplicatively renormalizable moments evolve with the following explicit evolution operator:

$$\stackrel{~}{K}_j^{ik}(\alpha _s(Q^2),\alpha _s(Q_0^2))=T\mathrm{exp}\left(-\frac{1}{2}\int _{Q_0^2}^{Q^2}\frac{d\tau }{\tau }\gamma _j^{ik}(\alpha _s(\tau ))\right) (15)$$

where $`T`$ orders the matrices of LO anomalous dimensions along the integration path. Note that there is a slight difference between the anomalous dimensions in the skewed case and the anomalous dimensions in the non-skewed, i.e. inclusive, case, due to the particular definition of the conformal operators used in the definition of the parton distributions<sup>\**</sup><sup>\**</sup>\** This is true in LO; in NLO, however, the anomalous dimensions obtain, besides the NLO $`\gamma _j`$’s of the inclusive case, additional anomalous dimensions due to non-diagonal elements in the renormalization matrix of the conformal operators entering the skewed parton distributions (see Ref. ).: $`\gamma _j^{qg}=\frac{6}{j}\gamma _j^{qg,incl.}`$ and $`\gamma _j^{gq}=\frac{j}{6}\gamma _j^{gq,incl.}`$. Now we have all the ingredients to proceed. Inserting Eq. (14) into Eq. (2), one obtains for small $`x`$, where one is justified in neglecting the quark contribution:

$$ReT(x,Q^2)=2\sum _{j=0}^{\infty }\sum _{k=1}^{\infty }\stackrel{~}{K}_{k-1}^{gg}(\alpha _s(Q^2),\alpha _s(Q_0^2))\stackrel{~}{f}_{k-1}^g(x,Q_0^2)E_{jk-1}^g(5/2|x)\int _{-1}^1\frac{dt}{2t+x}\frac{w(t|5/2)}{N_j(5/2)}ReC_g\left(\frac{1}{2}+\frac{t}{x},Q^2\right)C_{j-1}^{5/2}(t), (17)$$

where we chose the factorization scale to be equal to the renormalization scale, which we chose to be equal to $`Q^2`$. As one can see, the integral in the sum is now only over known functions and will yield, for fixed $`x`$, a function of $`j`$, as will the expansion coefficients for fixed $`x`$.
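Since everything in Eqs. (13)-(15) rests on the Gegenbauer machinery, a quick numerical check of the weight and normalization quoted above may be useful; scipy provides the polynomials, and the $`j`$, $`k`$ values below are arbitrary:

```python
# Verify orthogonality of C_j^nu on [-1,1] with weight (1-t^2)^(nu-1/2) and the
# normalization N_j(nu) = 2^(1-2nu) Gamma(1/2)^2 Gamma(2nu+j) / (Gamma(nu)^2 (nu+j) j!).
import math
from scipy.integrate import quad
from scipy.special import gegenbauer, gamma

def N(j, nu):
    return (2.0**(1 - 2 * nu) * gamma(0.5)**2 * gamma(2 * nu + j)
            / (gamma(nu)**2 * (nu + j) * math.factorial(j)))

nu = 1.5                                   # the quark case: C_j^{3/2}
for j, k in [(2, 2), (3, 3), (2, 4)]:
    Cj, Ck = gegenbauer(j, nu), gegenbauer(k, nu)
    val, _ = quad(lambda t: (1 - t * t)**(nu - 0.5) * Cj(t) * Ck(t), -1, 1)
    print(j, k, round(val, 6), round(N(j, nu), 6) if j == k else 0.0)
```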
The evolution operator can also be evaluated and will yield, for fixed $`Q^2`$, also just a function of $`j`$, which leaves the coefficients $`\stackrel{~}{f}_{k-1}^g(x,Q_0^2)`$ as the only unknowns, albeit an infinite number of them. Since the lefthand side will be known from experiment for fixed $`x`$ and $`Q^2`$, we are still in the unfortunate situation that a number is determined by a sum over an infinite number of coefficients labeled by $`j`$. Thus, if one had measured the real part of the DVCS amplitude through the asymmetry $`A`$ at fixed $`x`$<sup>††</sup><sup>††</sup>††This fixes the undetermined coefficients up to the $`j`$ index. and at an infinite number of $`Q^2`$, one would have an infinite-dimensional column vector on the lefthand side, namely the real part of the amplitude at fixed $`x`$ but at an infinite number of $`Q^2`$, and on the right hand side one would have a square matrix<sup>‡‡</sup><sup>‡‡</sup>‡‡The number of $`Q`$ values determines the column dimension and the number of the index $`j`$ determines the row dimension. The matrix is square since we can choose the number of $`Q`$ values to be equal to the number of $`j`$ values! times another column vector of coefficients, the length of which is determined by the number of $`j`$. Since all the entries in the matrix are real and positive definite<sup>\**</sup><sup>\**</sup>\**The evolution operator will, of course, always yield a positive number; the integrals in the sum are integrals over positive definite functions in the integration interval; and the expansion coefficients are also positive definite, as can be seen from Eq. (12)., it can be inverted, using the well known linear algebra theorems on the inversion of infinite-dimensional square matrices, provided that there are no zero eigenvalues, in other words no physical zero modes in the problem; a zero mode would imply that the real part of the DVCS amplitude has to be zero, which is, of course, never the case<sup>\*†</sup><sup>\*†</sup>\*† The DGLAP part of the amplitude will be zero at $`x=1`$ but the contribution from the ERBL region will not be!. After having found the inverse, we can directly compute the moments of our initial parton distributions<sup>\*‡</sup><sup>\*‡</sup>\*‡Note that the same moments of the initial parton distribution will appear for different values of $`j`$, since the sum over $`k`$ runs up to $`j`$, for fixed $`Q`$, such that each unknown moment is just multiplied by a number determined from known functions in Eq. (17). which are needed to reconstruct the skewed gluon distribution at small $`x`$ from Eq. (5). The drawback of the above procedure is that this process has to be repeated anew for each $`x`$. Nothing, however, prevents us from doing so, in principle. Even for a finite number of $`j`$’s and $`Q`$’s the task seems formidable; however, this is not as problematic as it seems, since experiment will only render information for small $`x`$, at least in the beginning, and not the whole range of $`x`$. Thus one does not need an infinite number of coefficients, and hence an infinite number of $`Q`$ values for each $`x`$, to get a good approximation. Unfortunately, a $`j_{max}`$ of $`50-100`$ will be necessary<sup>\*§</sup><sup>\*§</sup>\*§The author’s thanks go to Andrei Belitsky for pointing this out.; therefore, if the lefthand side is known for each $`x`$ at $`50`$ values of $`Q^2`$, Eq. (17) reduces to a system of $`50`$ equations with $`50`$ unknowns for $`j_{max}=50`$. This system can readily be solved as explained above.
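The inversion step itself is routine once the kernel matrix is assembled. In the sketch below a random matrix with positive entries serves as a stand-in for the computable combination of expansion coefficients, Wilson-coefficient integrals and evolution operators in Eq. (17); it is not the real kernel, only an illustration of the linear solve:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                    # j_max = number of Q^2 points, as in the text
true_moments = rng.uniform(0.1, 1.0, n)   # stand-in for the moments f~_{k-1}^g(x, Q_0^2)
M = rng.uniform(0.1, 1.0, (n, n))         # stand-in kernel with positive entries

ReT = M @ true_moments                    # "measured" Re T(x, Q_i^2), i = 1..n
recovered = np.linalg.solve(M, ReT)       # one linear solve recovers all moments
print(np.allclose(recovered, true_moments))   # True
```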
Experimentally speaking, of course, this procedure is not feasible, though theoretically very attractive, since one will never be able to measure any experimental observable for fixed $`x`$ at $`50`$ different values of $`Q^2`$. Nevertheless, there may be ways, using constraints on SPD’s, to reduce this number of $`50-100`$ polynomials, as can be done in the forward case; however, this has to be further explored. Notwithstanding the above, let us give a toy example of the above extraction algorithm<sup>\*¶</sup><sup>\*¶</sup>\*¶The author would like to thank John Collins for suggesting such an example to clarify the problem at hand.. Take $`x`$ discrete and fix $`Q^2`$; then one can write a factorized expression for a cross section:

$$\sigma _a=\sum _jH_{j;a}f_{j;a}. (18)$$

The index $`j`$ corresponds to the parton fractional momentum, and $`a`$ to the $`x`$ variable. Obviously, it is not possible to obtain $`f_{j;a}`$ from $`\sigma _a`$. If one now puts in an index for $`Q`$, the parton densities will now be $`f(Q)_{j;a}`$, and the solution of the evolution equation has the form

$$f(Q)_{j;a}=\sum _kU(Q)_{j,k;a}f(Q_0)_{k;a}. (19)$$

Here $`f(Q_0)_{k;a}`$ is the initial parton density at the value $`Q=Q_0`$, which is left implicit. The cross section as a function of $`Q`$ now takes the form

$$\sigma _{Q;a}=\sum _{j,k}H_{j;a}U(Q)_{j,k;a}f(Q_0)_{k;a} (20)$$

$$=\sum _kA(Q)_{k;a}f(Q_0)_{k;a}, (21)$$

for a suitable matrix $`A`$. The $`Q`$ dependence of the hard scattering function $`H`$ can be ignored for our present purpose. As a next step, take enough values of $`Q`$ that the matrix is square. The most trivial example is to have two values of $`Q`$: the initial value and one other:

$$f_{1;j}=f(Q_0)_j (22)$$

$$f_{2;j}=\sum _kU(Q)_{j,k}f(Q_0)_k. (23)$$

One can take $`U`$ to be triangular, as is appropriate for DGLAP evolution,

$$U=\left(\begin{array}{cc}1& 1\\ 0& 1\end{array}\right), (24)$$

the hard scattering cross section to be

$$H=(1,1) (25)$$

and the parton distributions to be a two-dimensional column vector:

$$f_0=\left(\begin{array}{c}f(Q_0)_1\\ f(Q_0)_2\end{array}\right). (26)$$

This then yields

$$\left(\begin{array}{c}\sigma (Q)\\ \sigma (Q_0)\end{array}\right)=\left(\begin{array}{cc}1& 2\\ 1& 1\end{array}\right)\left(\begin{array}{c}f(Q_0)_1\\ f(Q_0)_2\end{array}\right). (27)$$

Clearly one has an invertible matrix in Eq. (27) and can thus compute $`f(Q_0)_1`$ and $`f(Q_0)_2`$.

### C The Practical Extraction Algorithm

A practical way out of the polynomial predicament is to make a simple-minded ansatz for the skewed gluon distribution, since we are still at small $`x`$, in the different ERBL and DGLAP regions of the convolution integral, Eq. (2). An example of such an Ansatz could be $`A_0z^{A_1}(1-z)^{A_3}`$ for the DGLAP region, where $`z`$ is now just a dummy variable. One can insert this Ansatz in Eq. (2) and fit the coefficients to the data on the real part of the DVCS amplitude for fixed $`x`$ and $`Q^2`$. One can then repeat this procedure for different values of $`Q^2`$ and interpolate between the different coefficients to obtain a functional form of the coefficients in $`Q^2`$.
Alternatively, after having extracted the values of the coefficients for different values of $`x`$ at the same $`Q^2`$, one can use an evolution program with the ansatz and the fitted coefficients as input and check whether one can reproduce the data for the real part at higher $`Q^2`$, thus checking the viability of the model ansatz. To obtain an ansatz fulfilling the various constraints for SPD’s (see Ji’s and Radyushkin’s references in ), one should start from the double distributions (DD) (see Radyushkin’s references in ), which yield the skewed gluon distribution in the various regions of the convolution integral:

$$g(y,x)=\theta (y-x)\int _0^{\frac{1-y}{1-x}}dz\,G(y-xz,z)+\theta (x-y)\int _0^{\frac{y}{x}}dz\,G(y-xz,z). (29)$$

Due to the fact that there are no anti-gluons, the above formula is enough to cover the whole region of interest $`-1+x\le y\le 1`$. What remains is to choose an appropriate model ansatz for $`G`$, for example

$$G(z_1,z)=\frac{h(z_1,z)}{h(z_1)}f(z_1) (30)$$

with $`f(z_1)`$ taken from a diagonal parametrization whose coefficients are now left free in the skewed case, and the normalization condition $`h(z_1)=\int _0^{1-z_1}dz\,h(z_1,z)`$, such that in the diagonal limit the DD just gives the diagonal distribution (a minimal numerical sketch of this construction is given at the end of the paper). The choice for $`h(z_1,z)`$ is a matter of taste but should be kept as simple as possible. The drawback of this algorithm compared to the previous one is that it is model dependent and thus not a first-principles method, which, theoretically speaking, is not as satisfying; but from the practical side this method is much simpler and thus experimentally much more feasible. Thus one has solved the problem of extracting the parton distributions from the factorization equation, at least at small $`x`$. The remaining problem is an experimental one.

## IV Conclusions and outlook

Having shown that the extraction of skewed parton distributions from DVCS experiments is both principally and practically possible, given data on the asymmetry with high enough statistics, one should now obtain a more accurate model description of the asymmetry. This will be done elsewhere.

## Acknowledgments

This work was supported by the E. U. contract \#FMRX-CT98-0194. The author would like to thank John Collins, Andrei Belitsky and Mark Strikman for helpful discussions and comments on the draft version of this paper.
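Finally, the promised sketch of the double-distribution construction. The profile $`h(z_1,z)`$ and the forward density $`f(z_1)`$ below are toy choices made up for illustration; only the structure of Eqs. (29) and (30) is taken from the text:

```python
from scipy.integrate import quad

def f(z1):                                  # toy forward gluon density
    return (1.0 - z1)**5 if 0.0 <= z1 <= 1.0 else 0.0

def h(z1, z):                               # toy profile with support 0 <= z <= 1 - z1
    return (1.0 - z1 - z) * z if 0.0 <= z <= 1.0 - z1 else 0.0

def G(z1, z):                               # double distribution, Eq. (30)
    norm, _ = quad(lambda zz: h(z1, zz), 0.0, max(1.0 - z1, 0.0))
    return h(z1, z) / norm * f(z1) if norm > 0.0 else 0.0

def g(y, x):                                # skewed gluon distribution, Eq. (29)
    upper = (1.0 - y) / (1.0 - x) if y > x else y / x   # DGLAP vs. ERBL region
    val, _ = quad(lambda z: G(y - x * z, z), 0.0, upper)
    return val

x = 1e-2
for y in (0.5 * x, 2 * x, 0.1, 0.5):
    print(f"g(y={y:.3g}, x={x}) = {g(y, x):.4g}")
```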
# Explaining the light curves of Gamma-ray Bursts with a precessing jet

## 1 Introduction

Gamma-ray bursts are observed with a large variety in duration, ranging from seconds to minutes (Norris et al. 1996), intensity and variability. The shortest temporal structures are unresolved by detectors and reflect the activity of a highly variable inner engine (Fenimore et al. 1996). On the other hand, some bursts last for several minutes, which indicates that the energy generation within the burst region has a rather long time scale. In the proposed model a neutron star transfers mass to a black hole with a mass of 2.2 to 5.5 $`\mathrm{M}_{\odot }`$. A strong magnetic field is anchored in the disc, threads the black hole and taps its rotation energy via the Blandford-Znajek (1977) mechanism. Gamma-rays are emitted in a narrow beam. The luminosity distribution within the beam is given by the details of the Blandford-Znajek process. Precession of the inner part of the accretion disc causes the beam to sweep through space. This results in repeated pulses or flashes for an observer at a distant planet. This model was proposed by Portegies Zwart et al. (1999, hereafter PZLL) to explain the complex temporal structure of gamma-ray bursts.

## 2 The Gamma-ray binary

We start with a close binary where a low-mass black hole is accompanied by the helium star. The configuration results from the spiral-in of a compact object in a giant, the progenitor of the helium star (see Portegies Zwart 1998, hereafter PZ). The collapse of the helium core results in the formation of a neutron star. The sudden mass loss in the supernova and the velocity kick imparted to the neutron star may dissociate the binary. If the system remains bound, a neutron star – black hole binary is formed. The separation between the two compact stars decreases due to gravitational wave radiation (see Peters & Mathews 1963). At an orbital separation of a few tens of kilometers the neutron star fills its Roche-lobe and starts to transfer mass to the black hole. Mass transfer from the neutron star to the black hole is driven by the emission of gravitational waves but can be stabilized by the redistribution of mass in the binary system. If the mass of the black hole is $`\lesssim 2.2`$ $`\mathrm{M}_{\odot }`$, coalescence follows within a few orbital revolutions owing to the Darwin-Riemann instability (Clark et al. 1977). If the mass of the black hole is $`\gtrsim 5.5`$ $`\mathrm{M}_{\odot }`$, the binary is gravitationally unstable (Lattimer & Schramm 1976); the event horizon of the hole is then larger than the orbital separation. Only in the small mass range from $`2.2`$ $`\mathrm{M}_{\odot }`$ to $`5.5`$ $`\mathrm{M}_{\odot }`$ is stable mass transfer possible (see PZ). The entire episode of mass transfer lasts for several seconds up to minutes. Mass transfer becomes unstable if the neutron star starts to expand rapidly as soon as its mass drops below the stability limit of $`0.1`$ $`\mathrm{M}_{\odot }`$. Initially the neutron star’s material falls into the black hole almost radially, but at a later stage an accretion disc can be formed (see PZ for details).

## 3 The precessing jet

The asymmetry in the supernova, in which the neutron star is formed, causes the angular momentum axis of the binary to make an angle $`\nu `$ with the spin axis of the black hole. This misalignment causes the accretion disc around the black hole to precess (see Larwood 1998). The magnetic field of the black hole is anchored in the disc.
Energy can then be extracted from the black hole by slowing down its rotation via the magnetic field, which exerts a torque on the induced current on the black hole horizon (Blandford & Znajek 1977; Thorne et al. 1986). The radiation is liberated in a narrow beam with an opening angle of $`\lesssim 6^{\circ }`$ (Fendt 1997). Such narrow beaming is supported by the results of the ROTSE (Robotic Optical Transient Search Experiment) telescope (see http://www.umich.edu/rotse/ for more details). Since the magnetic field is anchored in the disc, the radiation cone precesses with the same amplitude and period as the accretion disc. The intrinsic time variation of a single gamma-ray burst has a short rise time followed by a linear decay (Fenimore 1997). We construct the burst time profile from three components: an exponential rise, a plateau phase and a stiff decay (see PZLL for details). Rapid variations within this timespan are caused by the precession and nutation of the radiation beam. This results in a model with eight free parameters: three timescales for the profile of the burst, the precession and nutation periods, the precession angle and the direction (two angles) from which the observer looks at the burst. Figure 1 gives the result of fitting the model to the gamma-ray burst with BATSE trigger number 1609 in the energy channel between 115 keV and 320 keV. The fit was performed by minimizing the $`\chi ^2`$ with simulated annealing (see PZLL). The light curves computed with this model show similar complexities and variability as observed gamma-ray bursts (see Fig. 1).

I am grateful to Gerry Brown, Chang-Hwan Lee, Hyun Kyu Lee and Jun Makino for a great time, numerous discussions and financial support.
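A toy implementation shows how a precessing, nutating beam crossing the line of sight yields a complex pulse train. All numerical values, the Gaussian beam profile and the small-angle geometry below are illustrative assumptions, not the fitted PZLL parameters:

```python
import numpy as np

def envelope(t, t_rise=2.0, t_plateau=10.0, t_decay=8.0):
    # exponential rise -> plateau -> stiff (linear) decay; timescales are made up
    if t < t_rise:
        return 1.0 - np.exp(-5.0 * t / t_rise)
    if t < t_rise + t_plateau:
        return 1.0
    return max(0.0, 1.0 - (t - t_rise - t_plateau) / t_decay)

def beam_axis(t, theta_prec=0.10, P_prec=4.0, theta_nut=0.03, P_nut=0.7):
    # polar angle (precession cone plus nutation wobble) and azimuth of the beam axis
    return theta_prec + theta_nut * np.cos(2 * np.pi * t / P_nut), 2 * np.pi * t / P_prec

obs_theta, obs_phi, width = 0.12, 0.0, 0.05   # observer direction and beam half-width [rad]
t = np.linspace(0.0, 25.0, 2000)
flux = np.empty_like(t)
for i, ti in enumerate(t):
    th, ph = beam_axis(ti)
    dx = th * np.cos(ph) - obs_theta * np.cos(obs_phi)   # small-angle plane geometry
    dy = th * np.sin(ph) - obs_theta * np.sin(obs_phi)
    flux[i] = envelope(ti) * np.exp(-(dx * dx + dy * dy) / (2 * width**2))  # Gaussian beam

above = flux > 0.5
print(f"{int(np.sum(above[1:] & ~above[:-1]))} pulses above half maximum")
```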
# A Comparison of Simple Mass Estimators for Galaxy Clusters

## 1. Introduction

Rich clusters of galaxies constitute the largest gravitationally-bound objects in the universe, and the history of their formation is a potentially powerful test of the viability of differing large-scale structure models. In particular, the time evolution of the cluster mass function is expected to be a good discriminator between low- and high-density universes and, additionally, it provides a constraint on the degree of bias between the galaxy and mass distributions (e.g. Bahcall et al. 1997; Fan et al. 1997). Owing to the lack of complete, uniform, mass-selected catalogs of galaxy clusters out to large depths ($`z\simeq 0.5`$ to $`1.0`$), the evolution of the cluster mass function is not constrained especially well at present. It is expected, however, that significant effort will soon be devoted to developing such catalogs, particularly using deep wide-field imaging of weak gravitational lensing of distant field galaxies by intervening clusters. The ability of these future catalogs to constrain models of structure formation rests heavily on the accuracy with which the cluster masses can be obtained and on an understanding of any systematic biases present in the mass estimators themselves. The virial mass estimator is the method which has the longest history of application to galaxy clusters, and it yields consistent results for cluster mass to light ratios in the range of $`200h(M/L)_{\odot }`$ to $`400h(M/L)_{\odot }`$ (e.g. Zwicky 1933, 1937; Smith 1936; Schwarzschild 1954; Gott & Turner 1976; Gunn 1978; Ramella, Geller & Huchra 1989; David, Jones & Forman 1995; Bahcall, Lubin & Dorman 1995; Carlberg et al. 1996; Carlberg, Yee & Ellingson 1997). While there are legitimate concerns that large clusters are not fully virialized, Carlberg et al. (1997a) have presented spectroscopic evidence which strongly suggests that the clusters in the CNOC survey are in equilibrium and, therefore, that the masses obtained within the virial radius are reliable. Nevertheless, because of its potential power to map the dark mass distribution within a cluster independent of the cluster’s dynamical state, a great deal of effort has recently been devoted to measurements of the gravitational potentials of clusters via observations of weak lensing (e.g. Tyson, Wenk & Valdes 1990; Bonnet et al. 1994; Dahle, Maddox & Lilje 1994; Fahlman et al. 1994; Mellier et al. 1994; Smail et al. 1994, 1995, 1997; Tyson & Fischer 1995; Smail & Dickinson 1995; Kneib et al. 1996; Seitz et al. 1996; Squires et al. 1996ab; Bower & Smail 1997; Fischer et al. 1997; Fischer & Tyson 1997; Luppino & Kaiser 1997). Relatively few clusters have been studied in detail, but a consistent picture of the dark mass distribution appears to be emerging from the weak lensing investigations. In particular, the center of mass corresponds well with the center of the optical light distribution, and the smoothed light distribution traces the dark mass well. The lensing-derived mass to light ratios vary from cluster to cluster but bracket a broad range of $`200h(M/L)_{\odot }`$ to $`800h(M/L)_{\odot }`$, with most of the clusters falling in the middle of the range. The consistency of cluster masses obtained from independent methods such as lensing, virial analyses, or X-ray data (assuming pressure supported hydrostatic equilibrium) is very much in debate at this time. Cluster mass estimates obtained from observations of strong lensing can often exceed the X-ray mass by a factor of 2 to 3 (e.g.
Miralda-Escudé & Babul 1995). This particular discrepancy is likely due to the failure of the assumption of hydrostatic equilibrium, and Waxman & Miralda-Escudé (1995) have thus proposed the existence of multiphase cooling flows in the centers of rich clusters. Based on analyses of weak lensing over considerably larger cluster radii, however, some studies conclude that there is quite good agreement between the lensing and X-ray masses (e.g. Squires et al. 1996ab; Smail et al. 1997), while others claim a significant disagreement in which the lensing mass systematically exceeds the X-ray mass (e.g. Fischer et al. 1997; Fischer & Tyson 1997). Additionally, the weak lensing mass estimate is often found to exceed the mass obtained from the virial estimator by a factor of order 2 (e.g. Fahlman et al. 1994; Carlberg et al. 1994; Smail et al. 1997), but in some cases good agreement between these two independently derived masses is found (e.g. Fischer et al. 1997). One of the troubles associated with a fair assessment of independent cluster mass estimates is that different techniques tend, out of necessity, to be applied at different cluster radii (i.e. it is not always possible to investigate the gravitational potential over the entire cluster via one particular estimator, simply due to lack of data on appropriate scales). Systematic biases inherent in any given mass estimator may well be scale-dependent, and amongst different estimators the form of the scale dependence is likely to vary. Therefore, a considerable effort will be required in order to reconcile all of the outstanding discrepancies amongst independent cluster mass estimates. Few observational investigations have been able to place direct constraints on the radial mass profiles of clusters to date. Bonnet et al. (1994) detected a coherent weak lensing shear due to the cluster Cl0024+1654 out to a radius of $`r\simeq 1.5h^{-1}`$ Mpc, and from their observations they showed that the underlying cluster mass profile was consistent both with an isothermal profile and with a steeper de Vaucouleurs profile. Similarly, Fischer & Tyson (1997) found that the weak lensing shear field of RXJ 1347.5-1145 yielded a density profile that was consistent with isothermal. Tyson & Fischer (1995), however, found that the density profile implied by weak lensing observations of A1689 was steeper than isothermal on large scales ($`200h^{-1}\mathrm{kpc}<r<1h^{-1}\mathrm{Mpc}`$). Similarly, Squires et al. (1996b) found that the density profile of A2390 implied by the weak lensing shear field was consistent with isothermal on small scales ($`r<250h^{-1}\mathrm{kpc}`$), but on larger scales was better described by a profile steeper than isothermal. In contrast to the weak lensing results, however, Carlberg et al. (1997b) found that the velocity dispersion profiles of the CNOC clusters gave rise to a mean cluster mass profile that was fit very well by a Navarro, Frenk & White profile, i.e. shallower than isothermal at small radii and isothermal at large radii (e.g. Navarro, Frenk & White 1995, 1996, 1997). In this investigation we use high-resolution N-body simulations of rich clusters to investigate systematic trends in both the total cluster masses and the radial mass profiles as derived from three simple estimators: (1) the weak lensing shear field under the assumption of an isothermal cluster potential, (2) the dynamical mass obtained from the measured velocity dispersion under the assumption of an isothermal cluster potential, and (3) the classical virial estimator.
The simulated clusters are very massive and thus do not constitute an average, unbiased sample of objects. They do, however, correspond to the largest clusters likely to form in a standard CDM universe and are objects which would certainly be detectable as weak gravitational lenses. The N-body simulations of the clusters are discussed in §2, and the weak lensing and dynamical properties of the clusters are discussed in §3, together with the mass profiles obtained from the three estimators. A discussion of the results is presented in §4.

## 2. The Numerical Clusters

The Hierarchical Particle-Mesh (HPM) N-body code written by J. V. Villumsen (Villumsen 1989) was used to simulate the formation of three rich clusters. The HPM code allows small-volume particle-mesh simulations to be nested self-consistently within large-volume particle-mesh simulations, and by successively nesting many simulations within each other it is possible to obtain extremely high resolution in both mass and length within a small, localized region of a large computational volume (a “power zoom” effect). The code is, therefore, especially useful for the simulation of the formation of objects such as individual clusters. In particular, using the HPM code to simulate the formation of clusters obviates the need for computations which utilize “constrained initial conditions” and those which simulate at high resolution the evolution of density peaks that have been excised from the initial conditions of a large computational volume. While the largest-volume, lowest-resolution grid in an HPM simulation uses periodic boundary conditions, the smaller-volume, higher-resolution grids use isolated boundary conditions, allowing mass to flow in and out of the higher-resolution grids over the course of the simulation. Therefore, unlike constrained initial conditions or peak-excision simulations, clusters simulated with HPM are guaranteed to accrete all of the mass that they should accrete if one simply ran a single large-volume simulation at a level of resolution comparable to that of the highest-resolution HPM grid. The HPM code uses a standard cloud-in-cell (CIC) interpolation scheme, which results in an approximately Gaussian smoothing of the power spectrum with a smoothing length of $`r_s=0.8`$ grid cell (see, e.g., §6.6 of Blandford et al. 1991). The gravitational force is, therefore, softer than Newtonian on small scales but becomes Newtonian for length scales greater than or of order 2 grid cells (Villumsen 1989). Due to the force softening we therefore restrict our analyses to length scales greater than 2 grid cells. The clusters used for the present analysis are discussed in detail in Brainerd, Goldberg & Villumsen (1998), and here we present only a summary of the simulations involved. A standard Cold Dark Matter model ($`\mathrm{\Omega }_0=1`$, $`\mathrm{\Lambda }_0=0`$, and $`H_0=50`$ km/s/Mpc) was adopted and the present epoch (i.e. redshift, $`z`$, of 0) was taken to correspond to $`\sigma _8=1`$, where

$$\sigma _8\equiv \left\langle \left[\frac{\delta \rho }{\rho }(8h^{-1}\mathrm{Mpc})\right]^2\right\rangle ^{\frac{1}{2}}. (1)$$

This is a model which is somewhat under-normalized compared to the COBE observations (e.g. Bunn & White 1997) and over-normalized compared to the abundance of rich clusters (e.g. Bahcall & Cen 1993; White, Efstathiou & Frenk 1993; Eke, Cole & Frenk 1996; Viana & Liddle 1996). The simulations began at $`\sigma _8=0.033`$ (corresponding to a redshift of 29) and were evolved forward in time to $`\sigma _8=1.0`$.
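For concreteness, the cloud-in-cell interpolation named above can be sketched in one dimension as follows; the grid and particle positions are toy inputs, and the HPM code itself (which is three-dimensional) is not reproduced here:

```python
import numpy as np

ngrid, box = 16, 1.0
pos = np.array([0.131, 0.52, 0.523, 0.9])     # toy particle positions
mass = np.ones_like(pos)                      # equal particle masses

x = pos / box * ngrid                         # positions in grid units (cell centres at j+0.5)
i = np.floor(x - 0.5).astype(int)             # index of the nearest cell centre to the left
w = x - 0.5 - i                               # linear ("cloud") weight for the right cell

rho = np.zeros(ngrid)
np.add.at(rho, i % ngrid, mass * (1.0 - w))   # periodic wrap, as on the primary grid
np.add.at(rho, (i + 1) % ngrid, mass * w)
print(rho.sum(), mass.sum())                  # CIC conserves the total mass
```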
The formation of the three most massive (i.e. “richest”) clusters contained within a single cubical volume of comoving side length $`L=400`$ Mpc was followed at high resolution. The large, primary computational volume common to all three clusters was a standard particle-mesh simulation consisting of $`256^3`$ grid cells and $`128^3`$ particles. Each cluster in turn was simulated at increased resolution by nesting two smaller, higher-resolution grids successively within the large, primary simulation volume. These two small, higher-resolution grids were centered on the center of mass of the particular cluster being simulated, used $`256^3`$ grid cells each, and had comoving side lengths of $`L=66.6`$ Mpc and $`L=16.7`$ Mpc, respectively. Since isolated boundary conditions are used in the small-volume simulations, the number of particles physically inside the small grids varied over the course of the simulations. The particle mass in the large computational volume common to all three clusters was $`m_p=2.1\times 10^{12}M_{\odot }`$, while for the smaller grids unique to the individual clusters the particle masses were $`m_p=7.8\times 10^{10}M_{\odot }`$ and $`m_p=9.8\times 10^9M_{\odot }`$ in the grids with $`L=66.6`$ Mpc and $`L=16.7`$ Mpc, respectively. Dynamic ranges of $`4.5\times 10^8`$ in mass and $`6000`$ in length were thus achieved by nesting the smaller simulations within the primary computational volume. Throughout the present analysis we shall define the clusters to consist of all particles within a radius $`r_{200}`$ of the centers of mass, where $`r_{200}`$ is the radius inside which the mean interior mass overdensity is 200:

$$\left\langle \frac{\delta \rho }{\rho }\right\rangle \left(r_{200}\right)=200 (2)$$

(see, e.g., Navarro, Frenk & White 1997, 1996, 1995). The cluster mass estimates were computed using only particles from the three highest-resolution grids (i.e. the grids with $`L=16.7`$ Mpc and $`m_p=9.8\times 10^9M_{\odot }`$) at an epoch corresponding to a redshift of 0.5 ($`\sigma _8=0.67`$ for our normalization). All of the particles within $`r_{200}`$ of the cluster centers were excised from the highest-resolution grid and were then used to compute the mass profiles. The total numbers of particles per cluster located within $`r_{200}`$ at $`z=0.5`$ are: 192346 (“cluster 1”; $`r_{200}=2.1`$ Mpc, proper radius), 288641 (“cluster 2”; $`r_{200}=2.4`$ Mpc, proper radius), and 310310 (“cluster 3”; $`r_{200}=2.5`$ Mpc, proper radius). Within $`r_{200}`$ the clusters contain a significant amount of substructure and have median projected ellipticities of 0.3. Cluster 1 is nearly prolate while clusters 2 and 3 are nearly oblate. Although the density profiles of the clusters are fit well by Navarro, Frenk & White (1997, 1996, 1995) profiles, the values of the best-fit concentration parameters obtained for the clusters are a factor of order 2 lower than the values predicted by the Navarro, Frenk & White formalism for objects in the identical mass range. For a full discussion of the above cluster properties see Brainerd, Goldberg & Villumsen (1998).

## 3. Results

Two-dimensional projections of the clusters are shown in the top panels of Figs. 1, 2, and 3. The color scale shows the logarithm of the surface mass density (in units of $`M_{\odot }/\mathrm{kpc}^2`$) and distances are given in proper coordinates for $`z=0.5`$. The actual mass profiles of the clusters are shown in Fig.
4, where the top panel shows the mean 2-dimensional projected mass profile obtained from 10 random projections of each cluster, and the bottom panel shows the 3-dimensional mass profile of each cluster. Throughout we shall adhere to notation in which $`R`$ refers to a proper radius projected on the sky and $`r`$ refers to a 3-dimensional proper radius. In this notation, then, $`M(R)`$ is the projected mass interior to a radius $`R`$ on the sky and $`M(r)`$ is the mass interior to a sphere of radius $`r`$. As expected from the work by Navarro, Frenk & White (1997, 1996, 1995) on the relatively generic shapes of the density profiles of objects formed by dissipationless collapse, the mass profiles of the numerical clusters are not fit well by single power laws (see also Dubinski & Carlberg 1991, Cole & Lacey 1996, and Tormen, Bouchet & White 1997). Rather, a gently changing slope is observed, with the density profiles becoming roughly isothermal on large scales ($`>1`$ Mpc in the case of our clusters).

Fig. 1: Top panel: the logarithm of the surface mass density of cluster 1 as observed from a randomly-chosen line of sight. The units of the surface mass density are $`M_{\odot }/\mathrm{kpc}^2`$. The cluster consists of all particles in the highest-resolution subgrid that are located within a radius $`r_{200}`$ of the center of mass. Bottom panel: the gravitational lensing shear, $`\gamma `$, obtained for the projected mass density shown in the top panel. The cluster was placed at a redshift of 0.5 and the shear that would be induced in a plane of sources at $`z=1.0`$ was computed by tracing a regular grid of $`4\times 10^6`$ light rays through the cluster. The color scale indicates the local value of $`\mathrm{log}_{10}\gamma `$ while the orientation of the sticks indicates the orientation of the shear, $`\phi `$. For clarity, the mean orientation of the local shear is shown on a coarse $`10\times 10`$ grid. The angular scale of the figure is of order $`11^{\prime }\times 11^{\prime }`$.

Figs. 2 and 3 are the same as Fig. 1, but for clusters 2 and 3, respectively.

In the following subsections we will compare the true mass profiles of the clusters to the mass profiles obtained from the three estimators. All of the mass estimators assume the clusters to be spherically symmetric and, additionally, both the weak lensing and “isothermal” dynamical mass estimates assume that the cluster potential is approximately isothermal.

Fig. 4: The mass profiles of the clusters as computed directly from the distribution of particles in the highest-resolution grids. Top panel: the mean projected mass profile, computed from 10 random projections of each cluster. Bottom panel: the full 3-dimensional mass profile. The clusters are roughly isothermal on scales $`>1`$ Mpc.

### 3.1. Weak Lensing Shear

Observations of gravitational lensing provide potentially powerful constraints on both the total mass and the mass distribution within clusters of galaxies. The gravitational potential of the cluster systematically deforms the shapes of distant source galaxies that are seen through the lensing cluster. The result is a net ellipticity induced in the images of lensed galaxies and a net tangential alignment of the lensed images relative to the center of the cluster potential. Provided the distance traveled by the light ray is very much greater than the scale size of the lens itself, it is valid to adopt the “thin lens approximation” in order to describe a gravitational lens.
Consider a lens with an arbitrary 3-dimensional potential, $`\mathrm{\Phi }`$. In the thin lens approximation a conveniently scaled 2-dimensional potential, $`\psi `$, is adopted for the lens (i.e. $`\psi `$ is a scaled representation of the 3-dimensional potential of the lens integrated along the optic axis):

$$\psi (\vec{\theta })=\frac{D_{ds}}{D_dD_s}\frac{2}{c^2}\int \mathrm{\Phi }(D_d\vec{\theta },z)\,dz.$$ (3)

Here $`\vec{\theta }`$ is the location of the lensed image on the sky relative to the optic axis, $`D_{ds}`$ is the angular diameter distance between the lens (the “deflector”) and the source, $`D_d`$ is the angular diameter distance between the observer and the lens, and $`D_s`$ is the angular diameter distance between the observer and the source. Having adopted this 2-dimensional lens potential, then, it is straightforward to relate the potential of the lens (through second derivatives of $`\psi `$) directly to the two fundamental quantities which characterize the lens: the convergence ($`\kappa `$) and the shear ($`\vec{\gamma }`$). The convergence, which describes the isotropic focusing of light rays, is given by:

$$\kappa (\vec{\theta })=\frac{1}{2}\left(\frac{\partial ^2\psi }{\partial \theta _1^2}+\frac{\partial ^2\psi }{\partial \theta _2^2}\right).$$ (4)

The shear describes the tidal gravitational forces acting across a bundle of light rays and, therefore, the shear has both a magnitude, $`\gamma =\sqrt{\gamma _1^2+\gamma _2^2}`$, and an orientation, $`\phi `$. In terms of $`\psi `$, the components of the shear are given by:

$$\gamma _1(\vec{\theta })=\frac{1}{2}\left(\frac{\partial ^2\psi }{\partial \theta _1^2}-\frac{\partial ^2\psi }{\partial \theta _2^2}\right)\equiv \gamma (\vec{\theta })\mathrm{cos}\left[2\phi (\vec{\theta })\right]$$ (5)

$$\gamma _2(\vec{\theta })=\frac{\partial ^2\psi }{\partial \theta _1\partial \theta _2}=\frac{\partial ^2\psi }{\partial \theta _2\partial \theta _1}\equiv \gamma (\vec{\theta })\mathrm{sin}\left[2\phi (\vec{\theta })\right]$$ (6)

(e.g. Schneider, Ehlers & Falco 1992). A great deal of work has been done in recent years to develop methods by which a map of the surface mass density of a cluster can be reconstructed from observations of the distortions induced in the images of background galaxies in the limit of weak gravitational lensing, for which $`\kappa \ll 1`$ and $`|\gamma |\ll 1`$ (e.g. Kaiser & Squires 1993; Bartelmann 1995; Kaiser 1995; Kaiser et al. 1995; Schneider 1995; Schneider & Seitz 1995; Seitz & Schneider 1995; Bartelmann et al. 1996; Seitz & Schneider 1996; Squires & Kaiser 1996; Seitz et al. 1998). It is not the intent of this paper to explore these detailed methods of cluster mass reconstruction. Rather, we will focus on a very simple weak lensing analysis technique that is sometimes used to gauge the total mass of a cluster contained within a given radius without fully reconstructing the underlying density profile. The method invokes the assumption that the cluster potential may be represented adequately by an isothermal sphere. The actual density profiles of the numerical clusters are better represented by Navarro, Frenk & White profiles (see Brainerd, Goldberg & Villumsen 1998) than by singular isothermal spheres and, given the apparent generality of the NFW profile, it is more likely that an NFW profile will better represent an actual galaxy cluster than will an isothermal sphere.
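Equations (4)–(6) map directly onto array operations once $`\psi `$ is tabulated on a grid. The sketch below is ours rather than the paper's: it assumes a uniform square grid of spacing `dtheta` and estimates the second derivatives by centred finite differences.

```python
import numpy as np

def convergence_and_shear(psi, dtheta):
    """Convergence and shear from a scaled 2-D lens potential `psi` sampled
    on a uniform angular grid of spacing `dtheta` (equations 4-6)."""
    psi_1 = np.gradient(psi, dtheta, axis=0)   # d(psi)/d(theta_1)
    psi_2 = np.gradient(psi, dtheta, axis=1)   # d(psi)/d(theta_2)
    psi_11 = np.gradient(psi_1, dtheta, axis=0)
    psi_22 = np.gradient(psi_2, dtheta, axis=1)
    psi_12 = np.gradient(psi_1, dtheta, axis=1)
    kappa = 0.5 * (psi_11 + psi_22)            # equation (4)
    gamma1 = 0.5 * (psi_11 - psi_22)           # equation (5)
    gamma2 = psi_12                            # equation (6)
    gamma = np.hypot(gamma1, gamma2)           # magnitude of the shear
    phi = 0.5 * np.arctan2(gamma2, gamma1)     # orientation of the shear
    return kappa, gamma, phi
```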
Here we choose to adopt the isothermal sphere approximation for the analysis because this is the simplifying assumption that is most commonly invoked in the literature when cluster masses are estimated from observations of weak lensing without a full reconstruction of the density profile (see the references listed below). Our goal here is simply to quantify systematic effects due to the assumption of an underlying isothermal potential when the true potential is better approximated by that of an NFW-type object. An isothermal sphere is uniquely specified by a single quantity, the velocity dispersion ($`\sigma _v`$), and the mass of an isothermal sphere contained within a 3-dimensional radius $`r`$ is given by

$$M(r)=\frac{2\sigma _v^2r}{G}$$ (7)

where $`G`$ is Newton’s constant. The total mass of an isothermal sphere within a radius $`R`$ projected on the sky is given by

$$M(R)=\frac{\pi \sigma _v^2R}{G}$$ (8)

(e.g. Binney & Tremaine 1987). Since it is spherically symmetric, the isothermal sphere gives rise to a gravitational lensing shear field which is necessarily circularly symmetric and, in particular, the shear as a function of angular radius, $`\theta `$, is given by

$$\gamma (\theta )=\frac{2\pi }{\theta }\left(\frac{\sigma _v}{c}\right)^2\left[\frac{D_{ds}}{D_s}\right],$$ (9)

where $`c`$ is the velocity of light and $`\sigma _v`$ is the velocity dispersion of the lens (e.g. Schneider, Ehlers & Falco 1992). If we consider an annulus of inner radius $`\theta _{\mathrm{min}}`$ and outer radius $`\theta _{\mathrm{max}}`$ centered on the center of mass of the isothermal sphere, the mean shear inside the annulus is given by

$$\overline{\gamma }=4\pi \left(\frac{\sigma _v}{c}\right)^2\left[\frac{D_{ds}}{D_s}\right]\left(\theta _{\mathrm{max}}+\theta _{\mathrm{min}}\right)^{-1}.$$ (10)

That is, provided the cluster potential is sufficiently well-represented by an isothermal sphere it is possible to deduce its characteristic velocity dispersion directly from either a measurement of the shear at a given radius, $`\gamma (\theta )`$, or the mean value of the shear, $`\overline{\gamma }`$, computed within some large annulus. A measurement of $`\sigma _v`$ by such a technique then leads to an estimate of the mass of the cluster within a given radius (e.g. Tyson, Wenk & Valdes 1990; Bonnet et al. 1994; Smail et al. 1994, 1997; Smail & Dickinson 1995; Bower & Smail 1997; Fischer & Tyson 1997). In practice, an observed weak lensing shear only constrains the mass of the cluster to within an additive constant (the so-called uniform density mass sheet degeneracy). The simple singular isothermal sphere mass estimator that we use here formally assumes that there is no such mass sheet present and that the observed weak lensing shear can be directly translated into a mass measurement via equations (7), (8), (9), and (10). Below we will, therefore, interpret the cluster shear fields in a manner consistent with the simple form of the mass estimator and we will not explicitly address the mass sheet degeneracy problem or its implications for an observed weak lensing shear. In this section we compute the shear fields of the numerical clusters and in §3.3 we will use these shear fields to investigate the systematic effects that the above weak lensing mass estimate has on the masses inferred for the numerical clusters. The shear fields of the clusters are determined directly by tracing regular Cartesian grids of $`2001\times 2001`$ light rays through the clusters.
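To make the isothermal-sphere estimator concrete, the sketch below (ours; SI units throughout, with the distance ratio $`D_{ds}/D_s`$ supplied by the caller) inverts equation (10) for $`\sigma _v`$ and then applies equations (7) and (8).

```python
import numpy as np

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m/s

def sigma_v_from_mean_shear(gamma_bar, theta_min, theta_max, dds_over_ds):
    """Invert equation (10): the SIS velocity dispersion (m/s) implied by a
    mean shear `gamma_bar` measured in the annulus [theta_min, theta_max]
    (radians), for a given distance ratio D_ds / D_s."""
    return C * np.sqrt(gamma_bar * (theta_max + theta_min) /
                       (4.0 * np.pi * dds_over_ds))

def sis_mass_3d(sigma_v, r):
    """Equation (7): M(r) = 2 sigma_v^2 r / G, with r in metres."""
    return 2.0 * sigma_v**2 * r / G

def sis_mass_projected(sigma_v, R):
    """Equation (8): M(R) = pi sigma_v^2 R / G, with R in metres."""
    return np.pi * sigma_v**2 * R / G
```

Given a measured $`\gamma (\theta )`$ at a single radius, equation (9) can be inverted in the same way by setting $`\theta _{\mathrm{min}}=\theta _{\mathrm{max}}=\theta `$ in the call above.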
In the analysis below we adopt the thin lens approximation and for a particular plane projection of a cluster we simply calculate the net deflection of each light ray due to all of the point masses contained within $`r_{200}`$ of the cluster center of mass. Note, however, that we ran a few test cases in which all particles inside a radius of 4 Mpc of the cluster centers were included in the ray trace analysis. The inclusion of the mass exterior to a radius of $`r_{200}`$ gave rise to a shear field interior to $`r_{200}`$ that was indistinguishable from the shear field obtained using only the particles interior to $`r_{200}`$. That is, owing to the fact that the clusters are roughly axisymmetric and no large mass concentrations exist just outside the clusters, the shear interior to a projected radius $`R`$ is determined by the surface mass density interior to $`R`$. The clusters are located at a redshift of $`z=0.5`$ and we consider a plane of sources at $`z=1.0`$. (Although the redshift of the sources will affect the magnitude of the shear, it will not affect the velocity dispersion inferred in the isothermal sphere approximation and, therefore, the choice of source plane is essentially arbitrary for our analysis.) The side lengths of the grids of light rays were taken to be $`L=2r_{200}`$ so that throughout we compute only the shear interior to the virial radii of the clusters. At the redshift of the clusters, then, the side lengths of the grids correspond to an angular scale of order $`11^{\prime }\times 11^{\prime }`$. If we let the location of a light ray on the grid be given by $`\vec{\beta }`$ prior to lensing (i.e. $`\vec{\beta }`$ is the location of the light ray in the source plane) and we let $`\vec{\theta }`$ be the location of the light ray after having been lensed by all of the point masses (i.e. $`\vec{\theta }`$ is the location of the light ray in the image plane), the components of the shear are then given by:

$$\gamma _1(\vec{\theta })=-\frac{1}{2}\left(\frac{\partial \beta _\mathrm{x}}{\partial \theta _\mathrm{x}}-\frac{\partial \beta _\mathrm{y}}{\partial \theta _\mathrm{y}}\right)$$ (11)

$$\gamma _2(\vec{\theta })=-\frac{1}{2}\left(\frac{\partial \beta _\mathrm{x}}{\partial \theta _\mathrm{y}}+\frac{\partial \beta _\mathrm{y}}{\partial \theta _\mathrm{x}}\right)$$ (12)

The $`2001\times 2001`$ light rays define a grid of $`2000\times 2000`$ cells and the shear at the centers of each of these cells can be determined from equations (11) and (12) above by finite differencing of the deflections of the four light rays which define the corners of the cell. The code used to compute the net deflections of the grid of light rays was tested by tracing the light rays through a number of singular isothermal spheres that were approximated by a set of 250,000 point masses. The point masses were constrained to lie within a maximum projected radius of $`R=2.7`$ Mpc and their masses were scaled appropriately so as to reproduce the correct values of $`M(R=2.7\mathrm{Mpc})`$ for a set of isothermal spheres with values of $`\sigma _v`$ in the range of 500 km/s to 1500 km/s. As with the simulated clusters, the isothermal sphere lenses were placed at $`z=0.5`$ and the source light rays emanated from $`z=1.0`$.
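A minimal sketch (ours) of the finite-differencing step: each interior cell of the ray grid is assigned the shear implied by the source-plane positions of its four corner rays. The sign convention follows the reconstruction of equations (11) and (12) above, which is consistent with the definitions in equations (5) and (6).

```python
import numpy as np

def shear_from_ray_grid(beta_x, beta_y, dtheta):
    """Shear at the cell centres of a regular grid of traced rays
    (equations 11 and 12).  beta_x, beta_y hold the source-plane positions
    of the rays; axis 0 runs along theta_x and axis 1 along theta_y."""
    def cell_grad(f):
        # derivatives at cell centres from the four corner rays of each cell
        df_dx = 0.5 * ((f[1:, :-1] - f[:-1, :-1]) +
                       (f[1:, 1:] - f[:-1, 1:])) / dtheta
        df_dy = 0.5 * ((f[:-1, 1:] - f[:-1, :-1]) +
                       (f[1:, 1:] - f[1:, :-1])) / dtheta
        return df_dx, df_dy

    bx_x, bx_y = cell_grad(beta_x)
    by_x, by_y = cell_grad(beta_y)
    gamma1 = -0.5 * (bx_x - by_y)   # equation (11)
    gamma2 = -0.5 * (bx_y + by_x)   # equation (12)
    return gamma1, gamma2           # (n-1) x (n-1) arrays for n x n rays
```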
The net deflections of the light rays were evaluated and the radial dependence of the convergence, $`\kappa (R)`$, and shear, $`\gamma (R)`$, was computed and compared to the analytic expectations for infinite singular isothermal spheres having values of $`\sigma _v`$ identical to the isothermal spheres that were approximated by the point masses. For the isothermal sphere we know $`\kappa (R)=\gamma (R)`$, and in all cases good agreement was found both between $`\kappa (R)`$ and $`\gamma (R)`$ as computed individually from the ray tracing, and between the ray tracing results and the analytic expectations (deviations $`<1`$% of the analytic values). Shown in the bottom panels of Figs. 1, 2, and 3 are the shear fields corresponding to the 2-dimensional projections of the clusters shown in the top panels of these figures. The color scale shows the logarithm of the magnitude of the shear and the small sticks indicate its orientation. For clarity of the figure, we plot the mean orientation of the shear on a coarse $`10\times 10`$ grid that was computed from an unweighted average of the local shear vectors obtained from the differencing of the displacements of the light rays (i.e. the sticks show a rebinning of the original $`2000\times 2000`$ grid of shear vectors onto a $`10\times 10`$ grid). The visual agreement of the magnitude and orientation of the shear with the actual surface mass density of the clusters is as expected; the shear is greatest in the densest regions of the clusters and is oriented roughly tangentially with respect to the cluster centers. The shear fields are not, however, circularly symmetric and reflect both the overall ellipticity of the clusters and the substructure within them.

Fig. 5: The mean gravitational lensing shear for the clusters as a function of projected radius. Two-dimensional shear fields were determined for 10 random projections of each cluster, from which the average radial value of the shear was computed in independent bins of radius $`R`$ centered on the cluster center of mass. The error bars show the formal standard deviation in the mean between the 10 projections. For comparison the dotted line indicates the shape of the shear profile expected for an isothermal sphere lens, $`\gamma (R)\propto R^{-1}`$.

Each cluster was viewed at 10 random orientations and a mean radial shear profile was computed from the full $`2000\times 2000`$ grid of shear vectors. The results are shown in Fig. 5, where the error bars indicate the formal standard deviation in the mean between the 10 random projections. Also shown for comparison is the radial shear profile expected for an isothermal sphere (i.e. $`\gamma (R)\propto R^{-1}`$, cf. equation 9 above). Below a scale of $`\sim 1`$ Mpc the radial shear profiles of the clusters behave as $`\gamma (R)\propto R^{-0.5}`$, while on larger scales the variation of $`\gamma `$ with $`R`$ is roughly isothermal, $`\gamma (R)\propto R^{-1}`$. Given the mass profiles shown in Fig. 4, this is precisely the behavior we would anticipate for the shear profiles. This behavior will, however, cause systematic errors in the cluster masses inferred from the shear fields under the assumption of isothermal cluster potentials.
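The radial profiles in Fig. 5 are azimuthal averages of the shear maps in independent annuli. A minimal sketch (ours), assuming a square shear-magnitude map centred on the cluster:

```python
import numpy as np

def radial_profile(gamma, dtheta, n_bins=20):
    """Azimuthal average of a square shear-magnitude map `gamma` (pixel
    size `dtheta`, cluster at the map centre) in independent radial bins."""
    n = gamma.shape[0]
    y, x = np.indices(gamma.shape)
    r = np.hypot(x - 0.5 * (n - 1), y - 0.5 * (n - 1)) * dtheta
    edges = np.linspace(0.0, r.max(), n_bins + 1)
    which = np.digitize(r.ravel(), edges) - 1
    g = gamma.ravel()
    mean = np.array([g[which == k].mean() if np.any(which == k) else np.nan
                     for k in range(n_bins)])
    return 0.5 * (edges[:-1] + edges[1:]), mean   # bin centres, mean shear
```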
### 3.2. Velocity Dispersions

Under the assumption of an isothermal cluster potential, the masses of the clusters can be determined from measurements of their velocity dispersions alone (e.g. equations 7 and 8 above). The isothermal sphere is characterized by a single, constant value for the velocity dispersion and in this section we investigate the degree to which the measured cluster velocity dispersions vary with distance from the cluster centers of mass. The force resolution of the simulations is too poor to resolve convincingly the dark matter halos that would be associated with individual galaxies within the cluster (one grid cell in the particle-mesh calculation is of order 45 kpc in length) and, so, it is not possible to calculate the line of sight velocity dispersion of member galaxies directly. However, in the absence of significant velocity bias in both observed and high-resolution numerical clusters (e.g. Lubin & Bahcall 1993; Bromley et al. 1995; Ghigna et al. 1998), a random subset of the particles can be drawn from each cluster to estimate the velocity dispersion that would be expected for the member galaxies. Each cluster was viewed from 1000 random orientations and the line of sight velocity dispersion, $`\sigma _v`$, was computed as a function of projected radius relative to the cluster center of mass. Two types of annuli were used for the computation: independent annuli (i.e. $`\sigma _v`$ was computed in thin annuli with differential radius, $`R`$) and cumulative annuli (i.e. $`\sigma _v`$ was computed in wide annuli which shared a fixed inner radius, $`R_{\mathrm{min}}`$, and differed only by the maximum radius of the annuli, $`R_{\mathrm{max}}`$). That is, the use of the independent annuli yields a measurement of $`\sigma _v`$ at a particular projected distance from the cluster center while the use of the cumulative annuli yields a measurement of $`\sigma _v`$ averaged over the entire cluster (out to some maximum radius). Throughout, the minimum radius from the cluster centers of mass, $`R_{\mathrm{min}}`$, was taken to be a distance equal to the length of two grid cells in the particle-mesh simulation since below that scale the gravitational force is softened by the N-body computational technique. Shown in Figs. 6 and 7 (crosses) are the mean values of $`\sigma _v`$ that were calculated directly from the line of sight velocities of particles within the clusters. Independent annuli were used in Fig. 6 and cumulative annuli were used in Fig. 7. The error bars in the figure show the formal 1-$`\sigma `$ dispersion amongst the different projections of the clusters. The velocity dispersion computed using independent annuli decreases monotonically with radius in all three clusters, but the decrease is slow enough that, averaged over large scales within the clusters (i.e. $`\sigma _v`$ computed in the cumulative annuli), the velocity dispersion is roughly constant.

Fig. 6: Line of sight cluster velocity dispersions, $`\sigma _v`$, as a function of projected radius. Crosses show the mean value of $`\sigma _v`$ computed directly from the velocities of random subsets of the constituent particles and the error bars show the formal 1-$`\sigma `$ deviation amongst 1000 random lines of sight. Squares show the value of $`\sigma _v`$ inferred for the clusters on the basis of the mean weak lensing shear, under the assumption that the cluster potential is well-represented by an isothermal sphere; error bars show the formal 1-$`\sigma `$ deviation amongst the 10 random lines of sight for which direct ray tracing was performed. In this figure $`\sigma _v`$ has been computed using independent annuli with proper radius $`R`$.
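A minimal sketch (ours) of the direct dispersion measurement shown by the crosses, supporting both annulus types; particle positions and velocities are assumed to be in the cluster rest frame, with `los` a unit line-of-sight vector:

```python
import numpy as np

def sigma_los(pos, vel, los, radii, r_min, cumulative=False):
    """Line-of-sight velocity dispersion versus projected radius.
    pos, vel : (N, 3) particle positions and velocities (cluster rest frame)
    los      : unit vector defining the line of sight
    radii    : outer radii of the annuli; the inner edge is `r_min` for
               cumulative annuli, or the previous outer radius otherwise."""
    v_par = vel @ los                                        # LOS velocities
    R = np.linalg.norm(pos - np.outer(pos @ los, los), axis=1)  # projected radius
    out, inner = [], r_min
    for outer in radii:
        lo = r_min if cumulative else inner
        sel = (R >= lo) & (R < outer)
        out.append(np.std(v_par[sel], ddof=1))
        inner = outer
    return np.array(out)
```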
Also shown in Figs. 6 and 7 (squares) are the mean values of $`\sigma _v`$ that are inferred for the clusters on the basis of the weak lensing shear field, assuming that the cluster potentials can be well-represented by isothermal spheres (e.g. Fig. 5). From the 10 different projections for which direct ray tracing was performed, the mean shear was computed using both independent and cumulative annuli identical to the annuli used to compute the velocity dispersions of the particles themselves. The values of $`\gamma (R)`$ and $`\overline{\gamma }(R_{\mathrm{max}})`$ obtained from the ray trace analysis were then used in conjunction with equations (9) and (10) to infer the variation of the cluster velocity dispersion with radius. Error bars in Figs. 6 and 7 show the formal 1-$`\sigma `$ dispersion amongst the different cluster projections. In contrast to the velocity dispersion measured directly for the particles, the velocity dispersion inferred from the weak lensing analysis increases monotonically with radius.

Fig. 7: Same as Fig. 6 except that in this figure $`\sigma _v`$ has been computed using large, cumulative annuli with outer radius $`R_{\mathrm{max}}`$ (see text).

### 3.3. Cluster Mass Estimates

Here we compute mass profiles for the clusters using the following simple estimators: (1) the mean value of the weak lensing shear under the assumption of an isothermal cluster potential, (2) the dynamical mass obtained from the line of sight velocity dispersion of the particles under the assumption of an isothermal cluster potential, and (3) the classical virial estimator. The cluster mass profiles obtained using the estimators are compared directly to the true mass profiles (e.g. Fig. 4) and throughout we will plot ratios of the estimated and true cluster mass profiles as a function of radius. Shown in Figs. 8 and 9 are the mass profiles obtained from the mean weak lensing shear under the assumption of an isothermal cluster potential. Results for the 2-dimensional projected mass profiles are shown in Fig. 8 and the 3-dimensional mass profiles are shown in Fig. 9. The velocity dispersion, $`\sigma _v(R)`$, inferred from the circularly-averaged weak lensing signal (e.g. the squares in Figs. 6 and 7) was used in equations (7) and (8) above to compute $`M(r)_{\mathrm{lens}}`$ and $`M(R)_{\mathrm{lens}}`$. In both Figs. 8 and 9 the circles indicate that the value of $`\sigma _v(R)`$ was determined using the large, cumulative annuli (i.e. an average velocity dispersion over the cluster out to a maximum radius of $`R`$). The solid squares in these figures indicate that the value of $`\sigma _v(R)`$ was determined using thin, independent annuli (i.e. a value of the velocity dispersion computed at a particular distance, $`R`$, from the cluster center of mass). From the weak lensing analysis it is not possible to measure the direct dependence of $`\sigma _v`$ on the 3-dimensional radius, $`r`$, and, so, to compute the 3-dimensional mass profile we have taken the velocity dispersion to be $`\sigma _v(r)\equiv \sigma _v(R=r)`$.

Fig. 8: The 2-dimensional, projected cluster mass profile obtained from the weak lensing analysis compared to the true cluster mass profile. Solid squares indicate that the value of $`\sigma _v`$ used in equation (8) was determined from independent annuli of differential radius $`R`$; open circles indicate that the value of $`\sigma _v`$ was determined from large, cumulative annuli with outer radii of $`R_{\mathrm{max}}=R`$.
The data points shown by the squares have been plotted such that $`R`$ is the value of the projected radius at the midpoints of the independent radial bins and the data points shown by the circles are plotted such that $`R`$ is the value of $`R_{\mathrm{max}}`$ (i.e. for the circles $`R`$ corresponds to the outermost radius of the annulus used in the calculation). Error bars show the 1-$`\sigma `$ dispersion in $`M(R)`$ amongst the 10 different projections for which ray tracing was performed.

Fig. 8 shows that the 2-dimensional projected cluster mass profile determined from the simple weak lensing analysis adopted here deviates from the true profile in a clearly scale-dependent, systematic manner. Overall the trend is for $`M(R)_{\mathrm{lens}}`$ to increase monotonically with radius, underestimating the true projected mass at small radii and overestimating the true projected mass at large radii. The overestimate of the projected mass at large radii is simply a reflection of the fact that the isothermal sphere is, by definition, infinite in extent while the actual clusters are confined to a finite radius of $`r_{200}`$. (Note, however, that we performed a few test cases in which the proper radius of the numerical clusters was increased to a value of $`r=4`$ Mpc and this had a negligible effect upon the measured shear and, hence, the inferred projected mass.) In contrast to the results for $`M(R)_{\mathrm{lens}}`$, there is only a weak scale dependence in the deviation of $`M(r)_{\mathrm{lens}}`$ from the true 3-dimensional mass. Over most scales there is quite good agreement between the true cluster mass profiles and $`M(r)_{\mathrm{lens}}`$ as determined from values of $`\sigma _v`$ that were computed using independent annuli. When values of $`\sigma _v`$ determined from the large cumulative annuli are used, $`M(r)_{\mathrm{lens}}`$ systematically underestimates the true cluster mass on scales significantly less than $`r_{200}`$. At large radii, however, $`M(r)_{\mathrm{lens}}`$ is in very good agreement with the true mass of the cluster for the case in which $`\sigma _v`$ was determined from the large, cumulative annuli (i.e. $`\sigma _v`$ is determined from the mean shear over the entire cluster). This result may seem a bit surprising given the fact that the clusters are better represented by NFW density profiles than they are by isothermal spheres. However, for NFW-type objects with masses comparable to those of our numerical clusters, the mean shear interior to $`R_{200}`$ differs relatively little ($`<10`$%) from that of an isothermal sphere that has an identical mass contained within $`r_{200}`$ (Oaxaca Wright & Brainerd 1999). Hence, the isothermal sphere approximation should yield a reasonable estimate of the cluster mass contained within $`r_{200}`$, provided the mean shear is computed interior to a projected radius of $`R_{200}`$.

Fig. 9: The 3-dimensional cluster mass profile obtained from the weak lensing analysis compared to the true cluster mass profile. Solid squares indicate that the value of $`\sigma _v`$ used in equation (7) was determined from independent annuli of differential radius $`R`$; open circles indicate that the value of $`\sigma _v`$ was determined from large, cumulative annuli with $`R_{\mathrm{max}}=r`$. The data points shown by the squares have been plotted such that $`r`$ is the value of the 3-dimensional radius at the midpoints of the independent radial bins and the data points shown by the circles are plotted at $`r=R_{\mathrm{max}}`$.
Error bars show the 1-$`\sigma `$ dispersion in $`M(r)`$ amongst the 10 different projections for which ray tracing was performed.

Shown in Figs. 10 and 11 are the cluster mass profiles obtained from equations (7) and (8) in which $`\sigma _v`$ is taken to be the mean particle velocity dispersion measured directly from random subsets of particles. The 2-dimensional projected mass profile, $`M_{\sigma _v}(R)`$, is shown in Fig. 10 and the 3-dimensional mass profile, $`M_{\sigma _v}(r)`$, is shown in Fig. 11. As in Figs. 8 and 9, circles refer to values of $`\sigma _v`$ computed using the large cumulative annuli and squares refer to values of $`\sigma _v`$ computed using the thin, independent annuli. Both the projected mass profiles and the 3-dimensional mass profiles estimated directly from the particle velocity dispersions show scale-dependent deviations from the true mass profile. In this case the cluster mass is overestimated at very small radii and underestimated over most of the cluster.

Fig. 10: The 2-dimensional, projected cluster mass profile obtained directly from the measured particle velocity dispersion (assuming the clusters to be isothermal spheres) compared to the true cluster mass profile. Solid squares indicate that the value of $`\sigma _v`$ used in equation (8) was determined from independent annuli of differential radius $`R`$; open circles indicate that the value of $`\sigma _v`$ was determined from large, cumulative annuli with outer radii of $`R_{\mathrm{max}}=R`$. Error bars show the 1-$`\sigma `$ dispersion in $`M(R)`$ amongst the 1000 projections from which the mean line-of-sight velocity dispersion was computed.

Lastly, shown in Fig. 12 is a 3-dimensional mass profile computed for the clusters using a virial mass estimator. The classical cluster virial mass estimator is:

$$M=\frac{3\pi \sigma _v^2R_e}{2G}$$ (13)

where $`R_e`$ is the mean effective radius as projected on the sky:

$$R_e^{-1}\equiv \frac{1}{N^2}\sum _{i<j}^{N}\frac{1}{|\vec{R}_i-\vec{R}_j|}$$ (14)

and $`N`$ is the number of galaxies in the cluster. Again, we cannot resolve the individual dark matter halos of member galaxies and, so, the virial analysis was performed on the clusters using random subsets of the particles. Particles contained within concentric spheres of radius $`r`$ centered on the cluster centers of mass were viewed from 1000 random orientations and $`\sigma _v`$ and $`R_e^{-1}`$ were computed for each orientation. Values of $`r`$ were increased incrementally to $`r_{\mathrm{max}}=r_{200}`$, and $`M(r)`$, the mass contained within the concentric spheres, was computed using equation (13) above. From Fig. 12, the virial mass estimator leads to a scale-dependent deviation from the true 3-dimensional mass profile in the sense that the cluster mass is overestimated on small scales. On large scales (and, in particular, near the “edges” of the clusters), however, the virial mass estimator reproduces the true cluster mass quite well.
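A minimal sketch (ours) of the virial estimator of equations (13) and (14), in convenient astronomical units; it assumes projected positions in Mpc and line-of-sight velocities in km/s for a set of members known a priori:

```python
import numpy as np

G = 4.301e-9   # gravitational constant in Mpc (km/s)^2 / M_sun

def virial_mass(R_proj, v_los):
    """Classical virial estimate (equations 13 and 14) from projected
    positions R_proj (N, 2; Mpc) and line-of-sight velocities v_los (km/s)."""
    n = len(v_los)
    sigma_v2 = np.var(v_los, ddof=1)                 # sigma_v^2
    # harmonic-mean projected separation, equation (14):
    sep = np.linalg.norm(R_proj[:, None, :] - R_proj[None, :, :], axis=-1)
    i, j = np.triu_indices(n, k=1)                   # all pairs with i < j
    inv_Re = np.sum(1.0 / sep[i, j]) / n**2
    return 3.0 * np.pi * sigma_v2 / (2.0 * G * inv_Re)   # M in M_sun
```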
Fig. 11: The 3-dimensional cluster mass profile obtained directly from the measured particle velocity dispersion (assuming the clusters to be isothermal spheres) compared to the true cluster mass profile. Solid squares indicate that the value of $`\sigma _v`$ used in equation (7) was determined from independent annuli of differential radius $`R`$; open circles indicate that the value of $`\sigma _v`$ was determined from large, cumulative annuli with outer radii of $`R_{\mathrm{max}}=R`$. Error bars show the 1-$`\sigma `$ dispersion in $`M(r)`$ amongst the 1000 projections from which the mean line-of-sight velocity dispersion was computed.

It should be noted that gravitational force softening in the numerical simulation will, necessarily, affect dynamical mass estimates of simulated objects (see, e.g., Tormen, Bouchet & White 1997). That is, on scales smaller than or of order the smoothing length, the mass will be severely overestimated simply due to numerical effects. We have, therefore, restricted our analyses to radii at which the effects of force softening on the mass estimate should be small. In particular, any overestimate of the mass caused by numerical effects is expected to be at most of order 3% to 4% in the innermost radial bins and will drop rapidly to zero for the bins with larger radii. Cen (1997) and Reblinsky & Bartelmann (1999) have also investigated the virial masses obtained for numerical clusters, though not for objects as massive as those presented here. Reblinsky & Bartelmann (1999) find that the virial mass severely overestimates the true masses of clusters whose masses are less than a few times $`10^{14}M_{\odot }`$. The degree of overestimation decreases with increasing cluster mass, however, and appears to converge in the mean to the true cluster mass for their most massive objects. Reblinsky & Bartelmann’s results are broadly consistent with those of Cen (1997), though differences in the procedures used to select and analyse the clusters make direct comparisons between the two not entirely straightforward. Direct comparisons between our results and those of Cen (1997) and Reblinsky & Bartelmann (1999) are also not straightforward.

Fig. 12: The 3-dimensional cluster mass profile obtained from the classical virial mass estimator compared to the true cluster mass profile. Particles contained within concentric spheres of radius $`r`$ centered on the cluster center of mass were used to determine the mean values of $`R_e`$ and $`\sigma _v`$ required for the evaluation of equation (13). Error bars show the 1-$`\sigma `$ dispersion in $`M(r)`$ amongst 1000 random projections.

In our analyses above we have expressly calculated $`R_e`$ for each of the subsets of the particles, whereas Cen (1997) and Reblinsky & Bartelmann (1999) do not. Also, we have selected our clusters from a 3-dimensional mass distribution while Cen (1997) and Reblinsky & Bartelmann (1999) select their clusters based on 2-dimensional projections and the assignment of luminous galaxies to a random subset of the particles in their simulations. As such, their analyses attempt to at least partially address the issue of contamination by interloper galaxies and false detections of clusters in the limit of realistic observational data. In contrast, our results above are effectively derived in the limit of ideal data (i.e. the values of $`R_e`$ and $`\sigma _v^2`$ are computed from objects which are known a priori to be contained within the cluster under investigation).

## 4. Discussion

The cluster mass results which are the most relevant for direct comparison to observational investigations are those that were obtained using large, cumulative annuli (i.e. the shear and particle velocity dispersion averaged over large scales in the cluster) as well as the virial estimate in which $`R_e`$ is the effective radius determined for the “entire” cluster.
Although in principle the shear and velocity dispersion can be measured at independent radii in observed clusters, the data are generally too sparse and noisy for this to be practicable. (See, however, Bonnet et al. (1994), Tyson & Fischer (1995), Squires et al. (1996b), Fischer & Tyson (1997) and Carlberg et al. (1997b) for exceptions to this.) The mass profiles plotted in Figs. 8 through 12 extend to a maximum cluster radius equal to $`r_{200}`$ (or $`R_{200}`$ in the case of the projected mass profiles). In all cases $`M(r)`$ and $`M(R)`$ in these figures refer to the mass contained within a 3-dimensional radius, $`r`$, or a projected radius, $`R`$. Since we have defined the clusters to consist of all particles inside of a radius $`r_{200}`$, we will define the total mass of a cluster to be the mass contained within this radius, $`M(r_{200})`$. The total mass obtained for each cluster from each of the estimators is, therefore, indicated by the points in Figs. 8 through 12 that are plotted at the largest occurring values of the radius. In terms of estimating the total cluster mass (i.e. the mass of the cluster contained within a 3-dimensional radius of $`r_{200}`$), the classical virial estimator is found to be very successful. The total mass of the cluster is systematically underestimated, but only by $`\sim 10`$%. This result is somewhat surprising given the fact that within $`r_{200}`$ the cluster mass distributions are not perfectly smooth and substructure exists at a significant level. Additionally, moment of inertia analyses performed using all particles within a radius $`r_{200}`$ of the cluster centers of mass show the cluster mass distributions to be clearly triaxial, rather than spherical (see Brainerd, Goldberg & Villumsen 1998 for the relevant discussions). Our result, therefore, suggests that at least in the limit of ideal data the classical virial mass estimator is quite robust to modest deviations from pure spherical symmetry and the presence of substructure within a cluster. The “isothermal” dynamical mass estimate, in which the measured line of sight velocity dispersion is used to infer the mass under the assumption of an isothermal potential, yields a poor estimate of the total mass of the cluster. The value of $`M(r_{200})`$ is underestimated by $`\sim 40`$% for the case in which $`\sigma _v`$ is determined from an average over the entire cluster and is underestimated by $`\sim 70`$% for the case in which $`\sigma _v`$ is computed at a projected radius of $`R=R_{200}`$ (i.e. Fig. 11). Provided the mean shear used to infer the cluster velocity dispersion is computed using a large, cumulative annulus in which the shear is averaged over the entire cluster, the weak lensing estimate of $`M(r_{200})`$ is found to be in excellent agreement with the total cluster mass (i.e. Fig. 9, open circles). In contrast, however, the shear measured solely at a radius of $`R=R_{200}`$ yields a $`\sim 25`$% overestimate of the total cluster mass (i.e. Fig. 9, solid squares) due to the fact that the clusters are finite in extent, rather than infinite. Because of their promise to yield direct measurements of the masses of galaxy clusters independent of dynamics and hydrodynamics, weak lensing estimates of cluster masses are currently of particular interest. Given the fact that most high-quality observations of the weak lensing shear due to clusters have been obtained only on relatively small scales (i.e. radii significantly less than 1 or 2 Mpc), Figs.
8 and 9 suggest some caution regarding the interpretation of recent cluster mass estimates that are based on a measurement of an average value of the weak shear together with an assumption of an isothermal cluster potential. In particular, a measurement of the mean shear in which the mean is computed within an aperture whose outer radius is significantly less than $`R_{200}`$ yields a mass estimate that differs systematically from the true mass. For example, the projected mass within a radius of 0.5 Mpc, $`M(R=0.5\mathrm{Mpc})`$, is underestimated by $`\sim 40`$% and the 3-dimensional contained mass, $`M(r=0.5\mathrm{Mpc})`$, is underestimated by $`\sim 35`$%. Interestingly, in an analysis of observed cluster lensing data, Wu et al. (1998) found that weak lensing mass estimates that were performed over small cluster radii did seem to underestimate the contained mass in a systematic manner. The results for $`M(r)_{\mathrm{lens}}`$ shown in Fig. 9 are, however, encouraging at large cluster radii. In particular, with the advent of large format CCD cameras capable of wide-field imaging, it will be possible to measure the weak shear due to lensing clusters at radii of order a few Mpc in a reasonably routine fashion. An example of such deep wide-field imaging is the data obtained with the UH 8K CCD mosaic camera which has recently resulted in a detection of large scale coherent weak shear in the images of $`\sim 30,000`$ faint background galaxies due to lensing by the supercluster MS0302+17 (Kaiser et al. 1998). Given the apparent universality of the Navarro, Frenk & White density profile (i.e. dissipationless collapse generically leads to the formation of an object with an NFW profile), our results suggest that it will be possible to estimate a total 3-dimensional cluster mass fairly accurately with wide-field imaging simply by computing the mean of the shear over the entire cluster and adopting an isothermal lens potential. In the short term, such observations will hopefully provide a resolution to the remaining discrepancies between cluster masses estimated from weak lensing and virial techniques. (This is, of course, provided that the redshift distribution of the lensed galaxies is well-constrained and is not, in itself, a large source of uncertainty in the interpretation of the observed shear.) In the long term, large surveys from which the weak lensing shear can be detected out to large cluster radii should have the ability to yield uniform samples of objects, including a reasonably accurate mass-selection criterion, without necessarily requiring a full reconstruction of the density profile of each individual lensing cluster.

## Acknowledgments

A generous allocation of computing resources on Boston University’s Origin2000, support under NSF contract AST-9616968 (TGB and COW) and NSF Graduate Fellowships (DMG and COW) are gratefully acknowledged.
# A 98% spectroscopically complete sample of the most powerful equatorial radio sources at 408 MHz

## 1 Introduction

Radio sources are unique cosmological probes, with importance for understanding the physics of active galactic nuclei, for studying the relationship between the radio source and its surrounding environment, for probing high redshift proto–cluster environments, and for defining complete samples of galaxies for studies of stellar populations at early epochs. The revised 3CR sample (Laing, Riley & Longair 1983; hereafter LRL) contains the brightest extragalactic radio sources in the northern sky selected at 178 MHz; the host galaxies of these radio sources are predominantly giant elliptical galaxies and lie at redshifts out to $`z\sim 2`$. Scientifically, the revised 3CR sample has proven exceedingly powerful since it is 100% spectroscopically complete, avoiding many selection biases inherent in less complete samples, and has become established as the standard sample of bright low frequency selected radio sources. The 3CR galaxies and quasars have been the subject of extensive studies over a wide range of wavelengths leading to many important discoveries, not least of which were the very tight relationship between the infrared K–magnitudes and the redshifts of the radio galaxies (e.g. Lilly and Longair 1984), the discovery that the optical and ultraviolet emission of the high redshift ($`z\gtrsim 0.6`$) radio galaxies is elongated and aligned along the direction of the radio axis, and the orientation–based unification schemes of radio galaxies and radio loud quasars (e.g. Barthel 1989). The new generation of large optical telescopes provides an exciting new opportunity for very detailed studies of these important objects and their environments, as has been proven by the results being produced by the Keck telescope (e.g. Cimatti et al. 1996, Dey et al. 1997, Dickinson 1997). Radio astronomy has, however, historically been concentrated in the northern hemisphere, and there is currently no large, spectroscopically complete sample of low frequency selected radio sources equivalent to the 3CR sample for studies with southern hemisphere telescopes such as the VLT and Gemini South. The current paper aims to rectify this deficiency.

The layout of the paper is as follows. In Section 2 the selection criteria of the new sample are described. Details of the observations that were carried out to provide optical identifications and spectroscopic redshifts for those sources for which such data could not be found in the literature are provided in Section 3. In Section 4, the results of these observations are provided in the form of radio maps, optical images and spectra of these sources. Tabulated details of the resulting complete sample are compiled in Section 5 and global properties of the sample are investigated. Conclusions are summarised in Section 6. Values for the cosmological parameters of $`\mathrm{\Omega }=1`$ and $`H_0=50`$ km s$^{-1}$ Mpc$^{-1}$ are assumed throughout the paper.

## 2 Sample Definition

The basis for our sample was the Molonglo Reference Catalogue (MRC; Large et al. 1981), a catalogue of radio sources selected at 408 MHz in the region of sky $`-85^{\circ }<\delta <+18.5^{\circ }`$, $`|b|\geq 3^{\circ }`$, and essentially complete down to a flux density limit of 1.0 Jy at that frequency.
The low frequency selection criterion of this catalogue, like that of the 3CR sample (178 MHz), selects radio sources primarily on the relatively steep spectrum synchrotron emission of their extended radio lobes, rather than on flat spectrum cores, jets and hotspots, and is therefore less subject to Doppler boosting effects than samples selected at higher radio frequencies. The sample (hereafter the BRL sample) was drawn from the MRC according to four criteria:

* They must have a flux density $`S_{408\mathrm{MHz}}\geq 5`$ Jy.
* They must lie in the region of sky $`-30^{\circ }\leq \delta \leq +10^{\circ }`$.
* They must lie away from the galactic plane, $`|b|\geq 10^{\circ }`$.
* They must be associated with extragalactic hosts.

The first selection criterion is similar to the flux density limit of the revised 3CR sample (LRL), $`S_{178\mathrm{MHz}}>10.9`$ Jy, for a typical radio source with a radio spectral index $`\alpha \approx 0.8`$ ($`S_\nu \propto \nu ^{-\alpha }`$). The second criterion was imposed so that the sample would be visible from both northern radio telescopes such as the VLA and southern hemisphere telescopes such as the new large optical telescopes (VLT, Gemini South) and the proposed Atacama Large Millimetre Array (ALMA). The third criterion rejects most galactic objects and avoids the regions of highest galactic extinction. The first three selection criteria produced a sample of 183 entries in the Molonglo Reference Catalogue. Of these, 0532–054 (M42, Orion) and 0539–019 were excluded on the basis of being galactic HII regions. 0634–204 and 0634–206 appear as two separate entries in the MRC catalogue, whereas they are in fact two parts of the same giant radio source; these two entries were therefore merged as 0634–205. Similarly, the entries 1216+061A and 1216+061B are from the same source, hereafter referred to as 1216+061 (e.g. see Fomalont 1971). Finally, the single catalogue entry 2126+073 (3C435) is actually composed of two individual radio sources, neither of which on its own is luminous enough to make it into the sample, and so these were excluded. 0255+058 (3C75) is also composed of two distinct sources, although overlapping and inseparable in terms of flux density, but this entry is maintained within the sample since at least one of the two sources must be sufficiently luminous that it would have entered the sample on its own. These considerations led to a final BRL sample of 178 sources.
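The first three criteria amount to simple cuts on catalogue columns. A minimal sketch (ours), assuming arrays of 408 MHz flux density in Jy and of declination and galactic latitude in degrees:

```python
import numpy as np

def brl_mask(s_408, dec_deg, b_deg):
    """Boolean mask implementing the first three BRL selection criteria:
    S_408MHz >= 5 Jy, -30 deg <= delta <= +10 deg, |b| >= 10 deg."""
    return ((s_408 >= 5.0)
            & (dec_deg >= -30.0) & (dec_deg <= 10.0)
            & (np.abs(b_deg) >= 10.0))
```

The fourth criterion, and the source-by-source merges and exclusions described above, must still be applied by hand.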
Selected in this way the sample is complementary to many other radio source samples which have been constructed or are under construction. The northern declination limit of $`+10^{\circ }`$ corresponds to the southern declination limit of the LRL sample. Bright sources more southerly than the $`-30^{\circ }`$ declination limit will be included in a new southern sample being prepared by Hunstead and collaborators. Between these three samples, therefore, almost the entire sky (away from the galactic plane) will be covered. The BRL sample further complements the MRC strip 1 Jansky sample defined by McCarthy and collaborators (e.g. McCarthy et al. 1996); the MRC strip, also composed of southern radio sources from the MRC catalogue, is selected to be about a factor of five lower in radio power than our sample and is currently $`\gtrsim 75\%`$ spectroscopically complete. Due to its larger sky coverage, the BRL sample provides almost a factor of 5 more radio sources at the highest flux densities than the MRC strip, which is essential to provide large enough numbers of the rare high power objects for studies of, for example, the alignment effect or their clustering environments at high redshifts. Combining the BRL sample and the MRC strip will allow variations with radio power to be investigated. Finally, the new sample provides a complement to samples selected at high radio frequencies, such as that of Wall and Peacock, which contain far larger fractions of flat spectrum sources and quasars than low frequency selected samples, due to Doppler boosting effects. The Wall and Peacock sample covers the whole sky away from the galactic plane ($`|b|\geq 10^{\circ }`$), contains 233 sources brighter than 2 Jy at 2.7 GHz, and in the region of sky $`\delta <+10^{\circ }`$ is over 90% spectroscopically complete (e.g. di Serego Alighieri et al. 1994).

## 3 Observations and Data Reduction

A literature search showed that spectroscopic redshifts were already available for 128 of the 178 sources in the sample. Our observations concentrated upon the remaining 50 sources, with the goal of producing a spectroscopically complete sample.

### 3.1 Radio Imaging

The angular resolution of the observations comprising the Molonglo Reference Catalogue is only about 160 arcseconds in right ascension and in declination varies from about 170 arcseconds at $`\delta =-30^{\circ }`$ to 240 arcseconds at $`\delta =+10^{\circ }`$. In general these positional uncertainties are too great to allow unambiguous identification of the host radio galaxy or quasar; higher resolution radio data are essential. Radio data with angular resolution of about 10 arcseconds or less were extracted from the VLA archive for 27 of the 50 radio sources without spectroscopic redshifts. Details of the array configurations and frequencies of these archive data are provided in Table 1. For the remaining 23 sources new observations were made using the VLA during filler time in September 1997 and June 1998 (see Table 1). These observations were single snapshot exposures of typically 5–minute duration. Either 3C286 or 3C48 was observed during each run for primary flux calibration, and accurate phase calibration was achieved by observations of secondary calibrators within 10–15 degrees of the radio galaxies. Both the new data and that extracted from the archive were calibrated, cleaned, and further self–calibrated within the aips package provided by the National Radio Astronomy Observatory following standard reduction procedures. The resultant radio maps are shown in Figures 1 to 50.

### 3.2 3.6m EFOSC2 observations

The optical imaging and spectroscopic observations were carried out predominantly during two observing runs at the ESO 3.6m telescope, La Silla, on 21-22 April 1998 and 20-21 November 1998. The EFOSC2 instrument was used together with the 2048 by 2048 Loral CCD #40 which, after binning 2 by 2 pixels on read-out, provided a spatial scale of 0.315 arcsec per binned pixel. The spectroscopic observations were taken through a 2 arcsecond slit using the grism #6 which provided a wavelength coverage from 3860 to 8070Å, a scale of 4.1Å per binned pixel and a spectral resolution of about 22Å. The observing technique used was first to make a short image of the field of the radio source through the $`R`$–Bessel filter and use the VLA radio map to identify the host galaxy or quasar.
The telescope was then moved to centre this object in the slit and a spectrum taken. The spectral slit position angle was left in the default east–west direction unless there was good reason to do otherwise, for example if there were two candidate host objects, in which case they were both placed along the slit. The duration of the spectral exposure was between 5 and 20 minutes, depending roughly upon the magnitude of the host object in the $`R`$–band observation. A second exposure was then begun whilst the first was being reduced. If emission lines were easily visible in the first spectrum then the second exposure was cut short or aborted. Details of the observations are given in Table 1. Both the imaging and spectroscopic data were reduced using standard packages within the iraf noao reduction software. For the imaging observations the bias level and dark current were first subtracted and then the images were flat–fielded using a flat–field constructed from all of the imaging observations. Photometry was achieved using regular observations of photometric standards throughout the nights. The raw spectroscopic data frames were bias subtracted, and then flat–fielded using observations of internal calibration lamps taken with the same instrumental set-up as the object exposures. The sky background was removed, taking care not to include extended line emission in the sky bands, and then the different exposures of each galaxy were combined, where appropriate, and cosmic ray events removed. One dimensional spectra were extracted and were wavelength calibrated using observations of CuNe and CuAr arc lamps. Flux calibration was achieved using observations of the spectrophotometric standard stars GD108, G60-54 and LDS749B for the April run, and Feige-24 and L745-46A in November, and the determined fluxes were corrected for airmass extinction.

### 3.3 WHT observations

Observations of 7 of the sources were made using LDSS2 on the William Herschel Telescope as a backup programme during the first half of the night of 13 August 1998. LDSS2 was used together with a SITe1 2048 by 2048 CCD, providing 0.594 arcseconds per pixel. Imaging observations were made using the broadband Harris-R filter. Spectroscopic observations were taken using the standard long slit with a projected size of 1.4 by 225 arcseconds, and the ‘medium-blue’ grism, providing a scale of 5.3Å per pixel and a spectral resolution of about 13Å. Spectroscopic observations of the sources 1643–223 and 1920–077 were made using the dual–beam ISIS spectrograph on the William Herschel Telescope during morning twilights of 20 and 21 March 1999 respectively. In the blue arm of the spectrograph the R158B grating was used together with a 2096 by 4200 EEV CCD, providing a spectral coverage from the atmospheric cutoff through to longward of the dichroic at 5400Å, and a spectral resolution of 11Å. In the red arm the R316R grating was used together with a 1024 by 1024 Tek CCD, providing a spectral resolution of about 5Å and a spectral range of 1525Å. This range was centred on 7919Å for 1643–223 and 8245Å for 1920–077. Details of the observations are given in Table 1. The procedures for both the observations and the reduction of the LDSS2 data and the ISIS data mirrored those described for the EFOSC2 observations, except that the default spectral slit orientation for the LDSS data was north–south. Kopff 27 was used for spectrophotometric calibration of the LDSS data, and g193–74 for the ISIS data.
## 4 Results of Observations

The host galaxy or quasar of the sources without pre–existing spectroscopic redshifts has been identified from the $`R`$–band images in all 50 cases. Finding charts for these identifications can be found in Figures 1 to 50; these show a region of 2 by 2 arcminutes, centred upon the host galaxy or quasar. The host objects are indicated by the crosses on the finding charts. Where there may be some ambiguity as to the host object, with more than one object lying within the radio source structure, the justification for the labelled host is given in Section 4.3 on a source by source basis. A magnified view of the central regions can also be found overlaid upon the radio maps in Figures 1 to 50, except for the largest radio sources where this would simply repeat the finding chart view. The absolute positioning of the optical frames was determined using the positions of between 8 and 12 unsaturated stars on the optical frames that were also present in the APM data base or the Digitized Sky Survey. The optical frames were registered with the sky surveys taking account of possible rotation of the field of view, and then the precise optical position of the host galaxy or quasar was determined. These positions can be found in the sample collation, Table 3. There will remain small astrometric errors between the radio and optical images, due to uncertainties in the absolute alignment of the radio and optical frames of reference. The magnitude of these errors can be judged from the mean positional difference between the optical centre of a radio galaxy or quasar and the position of a radio core, where the latter is detected. Unambiguous radio cores or unresolved radio sources (from a high resolution map) are observed for 21 of the sources presented here; there is no systematic offset between the radio core and the optical centre, and the root–mean–square separation is 0.55 arcseconds. This therefore is the accuracy to which the radio and optical frames can be overlaid. Note that where there is an offset between the radio core position and the position of the optical identification in our data, it is the position of the optical source which is given in Table 3.

For 46 of the 50 galaxies a spectroscopic redshift was obtained. The spectra are shown in Figures 1–50 with the identified lines labelled, the resulting redshift being given in Table 1. The uncertainties of the peak positions of the measured emission lines and the variation in the measured redshift between different lines in the spectrum were both determined, and the larger of these values (generally the latter) was adopted as the uncertainty on the galaxy redshift. In four cases (1039+029, 1509+015, 1649–062, 2322–052) the redshift was based upon only weak emission lines, or a single strong emission line together with some weak confirming feature; notes on these sources are provided in Section 4.3. For the four sources for which no spectroscopic redshift has been obtained (1059–010, 1413–215, 1859–235, 1953–077), a discussion is also provided in Section 4.3.

### 4.1 Emission line properties

Various properties of the emission lines have been determined and are provided in Table 2. The flux of each emission line was determined by summing the intensities of the pixels above a fitted continuum level over a wavelength range of four times the fitted (pre–deconvolution) full–width at half–maximum (FWHM) of the line.
The uncertainty in the measured flux of each emission line was calculated taking account both of the measurement error from the limited signal–to–noise of the emission line, and of an uncertainty in the flux calibration, assumed to be 10%. Note that caution must be applied to the use of the derived emission line fluxes, since they are measured only from the portion of the galaxy sampled by the slit and therefore, especially at low redshifts, are lower than the total line flux emitted by the galaxy. These line fluxes should not be used to investigate the variation of emission line strengths as a function of redshift. The ratios of the line fluxes for lines not widely separated in wavelength should provide an accurate measure, although for widely separated lines caution should again be adopted since differential atmospheric refraction means that the red and blue ends of the spectrum might be sampling different regions of the galaxy. Also calculated are the deconvolved FWHM of the emission lines, determined assuming that the lines follow a Gaussian profile. The uncertainty in this deconvolved width is a combination of the uncertainty in the measured FWHM due to the limited signal–to–noise ratio of the line, and deconvolution errors introduced by the uncertainty in the spectral resolution of the observations, estimated to be 10%. Where the fitted FWHM was found to be less than the resolution, or the deconvolved FWHM was determined to be less than its error, the uncertainty in the deconvolved FWHM has been adopted as an upper limit to the deconvolved FWHM. Due to the low spectral resolution of these observations, this was frequently the case, and little velocity information was obtained except for the broadest lines. Finally, the equivalent widths of the emission lines were calculated, along with their errors. Where the equivalent width was determined to be smaller than 1.5 times its error, a value of twice the error in the equivalent width has been adopted as an upper limit to the line equivalent width.
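To make the measurement procedure concrete, the sketch below (ours) computes the line flux, the deconvolved FWHM and the equivalent width for a single line; the continuum array, line centre and fitted FWHM are assumed to come from an earlier Gaussian-fitting step, and the quoted uncertainties would be propagated separately.

```python
import numpy as np

def measure_line(wave, flux, continuum, centre, fwhm_fit, resolution):
    """Line flux, deconvolved FWHM and equivalent width for one emission
    line (all wavelengths and widths in Angstroms; `continuum` is the
    fitted continuum array).  The flux is summed over a window of four
    times the fitted FWHM, as described above."""
    window = np.abs(wave - centre) <= 2.0 * fwhm_fit
    dlam = np.median(np.diff(wave))                       # dispersion per pixel
    line_flux = np.sum((flux - continuum)[window]) * dlam
    # Gaussian widths add in quadrature, so deconvolve the resolution:
    diff = fwhm_fit**2 - resolution**2
    fwhm_deconv = np.sqrt(diff) if diff > 0.0 else 0.0
    ew = line_flux / np.mean(continuum[window])           # equivalent width
    return line_flux, fwhm_deconv, ew
```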
The second aperture adopted was a fixed metric aperture of 63.9 kpc diameter; such an aperture is large enough to contain essentially all of the light from the galaxies, the specific value of 63.9 kpc being chosen for comparison with the measurements of Eales et al. . Determination of the $`R`$–magnitude through a 63.9 kpc aperture was not possible in all cases: for the 4 sources without redshifts, the aperture diameter corresponding to 63.9 kpc is unknown; for 8 other sources, nearby objects too bright to be accurately edited out of the image prevented measurement of the flux out to these radii. The $`R`$–band magnitudes of the host galaxies and quasars through these two apertures are provided in Table 1. Notice the large differences, up to 1.5 magnitudes, between the two values for the low redshift galaxies. The uncertainties on the measurements of these magnitudes are about 0.1 magnitudes for $`R\lesssim 21`$, increasing to about 0.3 magnitudes by $`R\simeq 23`$.

The nature of the optical identification was classified based upon the optical appearance of the image and the presence of broad emission lines, that is, emission lines with a deconvolved FWHM greater than 2000 km s<sup>-1</sup>, with an uncertainty sufficiently low that the FWHM minus its uncertainty is above 1500 km s<sup>-1</sup>. If the optical image contained an unresolved component of absolute luminosity $`M_\mathrm{R}<-24`$ (roughly equivalent to the $`M_\mathrm{B}<-23`$ limit in the quasar catalogue of Véron–Cetty & Véron 1996; cf. the discussion in Willott et al. 1998), and broad emission lines, then the object was classified as a quasar. (To calculate the K–correction needed to derive these absolute magnitudes, a power–law slope of spectral index 0.7 was assumed for the point source continuum.) This was true for eight of the fifty cases. Of the remainder, 0850–206, 1140–114, 1436–167 are all well–resolved and show narrow forbidden lines, but their permitted lines are broad. These sources were therefore classified as broad–line radio galaxies. 1732–092 also shows broad permitted lines, and some of its emission comes from an unresolved component, but that component is not sufficiently luminous to be classified as a quasar and so it also is classified as a broad–line radio galaxy. The remainder of the identifications are well resolved with only narrow lines and so are classified as radio galaxies.

In Figure 51 the $`R`$ magnitudes are plotted against the redshift of the source, showing the radio galaxies and quasars separately. The $`R`$–$`z`$ diagram for the radio galaxies shows the well–known tight correlation out to about a redshift of 0.8 (e.g. Eales 1985), beyond which the scatter increases due to the different strengths of the alignment effect in different sources. This diagram is powerful because the $`R`$–$`z`$ correlation can be used to provide supporting evidence for redshifts determined from only weak features, and to estimate the redshifts of the 4 remaining sources from their $`R`$–magnitudes. The quasars have brighter $`R`$–magnitudes than would be expected from the $`R`$–$`z`$ relation of the radio galaxies, due to their AGN component.

### 4.3 Notes on individual sources

0056–172: The optical identification of this source is the south–westerly of the two objects lying along the radio source axis; the north–eastern object shows no strong emission lines.

0125–143: The optical identification, showing powerful line emission, is the brighter of the two objects and is coincident with the radio core.
0128–264: The identification is the faint aligned object lying directly along the radio axis. The high background level in the south–west is caused by a nearby bright star.

0357–163: The host galaxy appears to lie coincident with the eastern hotspot (possibly the core if a faint radio lobe has been missed), but this identification is secure: the galaxy shows strong emission lines and no other object is observed within the radio source structure.

0850–206: The host radio galaxy, showing powerful emission lines, is the more southerly of the two objects within the radio source structure.

1039+029: The redshift of this source is based upon a single strong emission line which, owing to its high flux, is assumed to be \[OII\] 3727. This assumption is supported by the detection of a spectral break, consistent with being at 4000Å rest–frame, and by the consistency of the $`R`$–magnitude of this galaxy with the $`R`$–$`z`$ relationship if it is at that redshift (see Figure 51).

1059–010: No spectroscopic redshift has been obtained for this source; the only spectroscopic observations have been carried out during twilight conditions. The proposed host galaxy is extremely faint ($`R>24`$), detected at only the $`4\sigma `$ level, but is the only possible identification found. The location of this galaxy at the centre of the radio source and its elongation along the radio axis (as is common for high redshift radio galaxies) add to its credibility. Comparing the $`R`$–magnitude of this galaxy with the $`R`$–$`z`$ relationship suggests a minimum redshift of about 1.5.

1131–171: The northern of the two objects within the radio source structure, a high redshift quasar, is the host identification for this radio source.

1344–078: It is the object 2 arcseconds north of the centre of the radio source (RA 13 44 23.60, Dec −07 48 25.2) which shows strong emission lines and is therefore identified as the radio source host galaxy.

1413–215: No spectroscopic redshift has been obtained for this source. At the time of the imaging and spectroscopic observations, only a low ($`\sim 60`$ arcsec) resolution NVSS radio map was available and this indicated a radio position 10 arcseconds south of the true core due to the asymmetry between the northern and southern hotspot strengths. Given this, there was no obvious host galaxy identification, and so no spectroscopic observations were attempted. The new high resolution VLA map provides an unambiguous optical identification, whose $`R`$–band magnitude suggests that it is above a redshift of one (cf. Figure 51).

1422–297: The optical identification is the brighter of the two objects towards the centre of the radio source. The fainter object shows no strong emission lines.

1434+036: This source is classified as a quasar on account of a sufficiently luminous unresolved optical component. The FWHM of its broad MgII 2799 emission ($`2460\pm 408`$ km s<sup>-1</sup>) is relatively low for a quasar, and the $`R`$–magnitude lies almost within the scatter of the radio galaxy $`R`$–$`z`$ relation at this redshift, and so the quasar component is probably not extremely powerful.

1509+015: The redshift for this galaxy is based predominantly upon one very luminous emission line, assumed to be \[OII\] 3727; only a weak emission line consistent with being MgII 2799 and a potential 4000Å break are seen to support this. Some corroborating evidence is given by the $`R`$–magnitude of the galaxy, which is consistent with the $`R`$–$`z`$ relationship of the sample if this redshift is correct.
1602–174: The emission line object identified as the host galaxy of this radio source is the faint galaxy aligned along the radio axis, coincident with the radio core.

1602–288: The two–dimensional spectrum along the radio axis shows emission lines covering an angular extent of over 20 arcseconds (see Figure 31), which at a redshift of $`\sim 0.48`$ corresponds to a spatial extent of nearly 150 kpc. This emission line region extends through both of the objects lying directly along the radio axis, centred close to the fainter of the two objects. The continuum shape of this fainter object (RA 16 02 6.65, Dec −28 51 5.2) resembles an intermediate redshift radio galaxy, whilst the brighter object (ignoring the emission lines) is unresolved and spectroscopically a star. Therefore, although the brighter object is coincident with what has the appearance of a radio core, it is the fainter object which is identified as the host radio galaxy.

1621–115: The object at the centre of the radio contours shows emission lines and so is identified as the host radio galaxy.

1649–062: The optical identification lies very close to the centre of this large radio source. The galaxy shows only very weak emission lines; its redshift is based upon a weak \[OII\] 3727 emission line and three absorption features. The other object close to the radio source centre is a star.

1716+006: This source is classified as a quasar since, apart from a very close companion, its optical emission is unresolved. The H$`\gamma `$ line, although weak, appears broad (FWHM of $`3473\pm 626`$ km s<sup>-1</sup>). The $`R`$–band magnitude of this object is about 1.5 magnitudes brighter than the $`R`$–$`z`$ relation of the radio galaxies at that redshift, supporting the interpretation of a significant quasar component.

1732–092: The optical identification is the object lying coincident with the bright knot of radio emission. This object is mostly unresolved, but the unresolved component does not contain sufficient flux for it to be classified as a quasar. Its $`R`$–magnitude is comparable to those of radio galaxies at this redshift. The galaxy shows clearly broad emission lines and so is identified as a broad–line radio galaxy.

1859–235: The identification lies towards the centre of the radio source and so is reasonably secure, although no spectroscopic redshift was obtained for this source in a 20 minute spectrum (albeit taken at a mean airmass of 1.9). The $`R`$–band magnitude suggests a redshift in the range 0.3 to 0.9 (cf. Figure 51).

1920–077: The object lying in the gap between the two radio lobes is identified as the host galaxy on the grounds of its location, its resolved emission (many of the objects seen in the image are stellar), and the fact that it shows powerful line emission.

1953–077: The object close to the southern radio lobe is a very likely candidate host galaxy, also on the basis of a detection at the same location in a short exposure J-band image made with WHIRCAM, the infrared imager on the WHT. A spectrum of this object taken during twilight conditions failed to yield a redshift for the galaxy, but the faintness of the $`R`$–magnitude indicates that the galaxy is above a redshift of one.

2025–155: It is the fainter (north–eastern) of the two objects seen close to the radio source that shows line emission and is identified as the host object. This object is unresolved, shows a power–law type spectrum, and has an $`R`$–magnitude about 3 magnitudes brighter than the mean $`R`$–$`z`$ relationship, and so is identified as a quasar.
2128–208: The brighter (eastern) of the two sources is identified as a quasar, and is the host of this radio source. The western object shows no strong emission lines.

2322–052: The region of faint diffuse emission close to the eastern radio lobe shows powerful line emission and is identified as the host radio galaxy. Two emission lines are unambiguously detected with wavelengths consistent with being CII\] 2326 and MgII 2799 at a redshift of 1.188. Weak features consistent with \[OII\] 2470 and \[NeV\] 3426 provide supporting evidence, although it is surprising that CIII\] 1909 is not seen. The $`R`$–magnitude of the host galaxy is also consistent with the proposed redshift.

## 5 The complete sample

### 5.1 Sample Collation

In Table 3, details are collated of all 178 radio sources in the complete BRL sample. The right ascension and declination of the host galaxy of each source have positional uncertainties typically well below 1 arcsecond. The 408 MHz flux density of the radio source is taken from the MRC catalogue value, except for radio sources of angular size larger than 100 arcseconds, in which case the Parkes Catalogue value was used instead (see discussion in Section 5.2). The spectral index of the source was calculated between the 408 MHz flux density and the flux density at 1.4 GHz, the latter being determined from the NVSS catalogue . The radio power of each source, corrected to a rest–frame frequency of 408 MHz and calculated assuming $`\mathrm{\Omega }=1`$ and $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, is also tabulated.

The radio galaxies and quasars were classified, where observations of sufficient angular resolution were available for this to be done (otherwise they are classified as ‘U’), into three categories: Fanaroff and Riley (1974; hereafter FR) classes one (‘I’) and two (‘II’), and sources whose emission is dominated by the radio core, either as a core–jet or a core–halo source (‘C’). For a small number of sources, complicated radio structure prohibits an unambiguous classification between FR Is and FR IIs; in these cases the classification is followed by a question mark to indicate the uncertainty, or the source is designated I / II. Determination of the angular size of the radio source depended upon the source classification. In the case of the majority of the sources, the FR II objects, the angular separation between the hotspots in each lobe lying most distant from the active nucleus was measured. For FR Is and core dominated sources, the determined angular size is necessarily more arbitrary and less robust: whatever method is used, the measured angular size of an FR I will always be highly dependent upon the sensitivity and frequency of the radio observations. In this paper, the maximum angular separation within the second highest radio contour on the quoted radio map was used.

The (heliocentric) redshifts of the host galaxies or quasars of the radio sources are provided in the table to an accuracy of three decimal places, unless the original measurement was not made to such precision, in which case it is given to the accuracy quoted in the original reference; uncertainties on the new redshifts presented in this paper can be found in Table 1. The nature of the host object is also classified in the table. Two sources lie nearby and are a starburst galaxy (NGC253) and a Seyfert 2 galaxy (NGC1068); the rest of the sources are classified as either a quasar or a radio galaxy. For the sources presented in this paper, a discussion of the classification scheme adopted has been presented in Section 4.
Where possible this was also applied to the other 128 sources, but in the majority of cases it was only possible to accept the classification determined by the authors of the original imaging and spectroscopy papers. For some of the radio galaxies broad lines have been detected either in our spectroscopic observations (see Section 4) or in the literature, and these radio galaxies have been sub–classified as broad–line radio galaxies (BLRGs). It should be noted, however, that some or many of the galaxies which remain classified under the more general ‘Radio Galaxy’ classification may actually be BLRGs, but the spectra are of insufficient quality to determine this.

Additionally provided in Table 3 are references to the optical identification, spectroscopic redshift, and a radio map of the galaxy. The optical identification reference refers to the publication which first identified the radio source host or to the first published finding chart of the field. The spectroscopic redshift is referenced by the first published value; for some sources later observations have confirmed this redshift and provided a more accurate value. Where this is the case the first publication is still given in the reference list, but the improved value for $`z`$ is used. The third reference given is to a high quality radio map or, where none exist, the best radio data available in the literature. For sources which have been well–studied and have many radio maps in the literature, the reference is not necessarily to the most sensitive or the highest angular resolution data, but is simply to a map of sufficient quality to show clearly the important features of the radio source provided in the table.

Notes need to be added to the classifications and redshifts of three of these sources.

0347+052: The NASA/IPAC Extragalactic Database (NED) gives a redshift for this source of 0.76, but both Allington–Smith et al. and di Serego Alighieri et al. derive a redshift of 0.339, and so the latter value is adopted here.

0834–196: This object is classified as a galaxy in NED, but di Serego Alighieri et al. point out that the host is actually an unresolved object, and they classify it as a quasar. That classification is adopted here.

2030–230: This object is classified as a quasar by Kapahi et al. \[1998a\], but in the rest of the literature is classified as an N–type galaxy. It is not sufficiently luminous to be classified as a quasar here, according to the classification scheme adopted in Section 4, and so falls within the radio galaxy population.

### 5.2 Radio selection effects: missing large sources?

An issue of concern in the definition of any sample of radio sources such as this is the possibility that giant radio sources, which may have sufficient flux density to be above the selection limit for the complete sample but owing to their large angular sizes may have only a low surface brightness, are missed from the sample. A discussion of this issue for the 3CR LRL sample can be found in Bennett and Riley . This issue indeed turns out to be very relevant for the MRC catalogue, due to the method of determining catalogue fluxes. As described by Large et al. , the catalogue is based on a point-source fitting procedure, and so the flux density of sources whose angular size is comparable to or larger than the beam size (3 arcmin) may be systematically underestimated by an amount which depends strongly upon the angular structure of the source.
To investigate how strong this effect is, for all of the radio sources in the BRL sample with angular sizes in excess of 60 arcsec, the 408 MHz flux density from the MRC survey has been compared with that measured in the lower angular resolution Parkes Catalogue ; the results are shown in Figure 52. The flux densities determined for the MRC sources are secure up to 100 arcsec, but above 200 arcsec the MRC flux densities of some sources are lower by as much as a factor of two (some of this difference may be due to the presence of other weaker sources within the large Parkes beam, but the majority is due to an underestimate of the MRC flux densities). Therefore, in Table 3 the flux densities quoted for sources larger than 100 arcsec are those taken from the Parkes Catalogue.

This result raises the possibility that some sources larger than $`\sim 200`$ arcsec have been missed from the sample because the MRC has underestimated their flux density, artificially placing them below the flux density cut–off. At high redshifts this effect is likely to be of little importance (200 arcsec corresponds to 1 Mpc at a redshift $`z\simeq 0.25`$, and the 3CR sample shows that there are essentially no higher redshift sources larger than this size of sufficient flux density), but at $`z=0.1`$ this angular size corresponds to only 500 kpc and below this redshift sources may be missed. Indeed, a comparison of the redshift distributions of the BRL and LRL samples (Figure 53; note that, in accordance with the new observations of Willott et al. (1999), 3C318 in the LRL sample has been reclassified in our diagrams as a quasar of redshift 1.574 instead of a radio galaxy at $`z=0.752`$) shows that the BRL sample has a slightly lower peak at $`z<0.1`$ which may be due to this effect (although the combined counts at $`z<0.2`$ are similar, and a Kolmogorov–Smirnov test shows no statistically significant differences between the two distributions). The BRL sample also contains a lower percentage of FR I class sources than the LRL sample, 8% as compared with 16%; FR Is enter the sample generally only at low redshifts and have considerable flux which cannot be well–modelled as point sources, and so are perhaps more likely to be missed. To summarise, whilst the sample is essentially complete at redshifts $`z\gtrsim 0.2`$, radio selection effects in the MRC catalogue may have led to a small number of low redshift sources with angular sizes $`\gtrsim 200`$ arcsec having been excluded from the sample.

### 5.3 Sample Properties

Some features of the sample can be easily examined and compared to those of the 3CR sample, to show any differences resulting from the selection at 408 MHz instead of 178 MHz. In Figure 54 is shown the radio power versus linear size ($`P`$–$`D`$) diagram for the BRL sample, and that for the LRL sample corrected to the same rest–frame frequency. In Figure 55 the redshift versus spectral index distribution of both samples is shown, displaying the well–known increase in mean spectral index with redshift. The two samples show very similar distributions in both plots, suggesting that there is little difference in their global properties. The mean and standard deviation of the spectral indices of the two samples are $`\overline{\alpha _{\mathrm{BRL}}}=0.81\pm 0.19`$ and $`\overline{\alpha _{\mathrm{LRL}}}=0.80\pm 0.24`$, which are clearly comparable.
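For concreteness, the spectral indices and rest-frame 408 MHz powers tabulated in Section 5.1 can be reproduced with a short sketch (an illustration under the paper's $`\mathrm{\Omega }=1`$, $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup> cosmology; the function names are placeholders):

```python
import numpy as np

C_KM_S = 2.998e5     # speed of light [km/s]
H0 = 50.0            # [km/s/Mpc], Omega = 1 as adopted in the text
MPC_M = 3.086e22     # metres per Mpc

def spectral_index(s408, s1400):
    """alpha, defined through S_nu ~ nu**(-alpha), between 408 MHz and 1.4 GHz."""
    return np.log(s408 / s1400) / np.log(1400.0 / 408.0)

def d_lum(z):
    """Luminosity distance [Mpc] for an Einstein-de Sitter (Omega=1) universe."""
    return (2.0 * C_KM_S / H0) * (1.0 + z) * (1.0 - 1.0 / np.sqrt(1.0 + z))

def p408_rest(s408_jy, z, alpha):
    """Rest-frame 408 MHz power [W/Hz]; (1+z)**(alpha-1) is the K-correction
    for a power-law spectrum."""
    dl_m = d_lum(z) * MPC_M
    return 4.0 * np.pi * dl_m**2 * (s408_jy * 1e-26) * (1.0 + z)**(alpha - 1.0)
```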
Figure 56 shows the distribution of sources from the two samples in the $`D`$–$`\alpha `$ plane; again the distributions are generally similar, although a small excess of BRL sources is to be found with small radio sizes ($`D\lesssim 30`$ kpc) and steep spectra ($`\alpha \gtrsim 0.75`$); it is not entirely unexpected that the higher selection frequency will select a larger fraction of these compact steep–spectrum sources, as synchrotron self–absorption prohibits these from entering low frequency selected samples. This appears to be the only important difference between the LRL and BRL samples.

The redshift distribution of the two samples has already been shown to be similar in Figure 53. An interesting feature of this plot is that the fraction of radio galaxies and quasars at redshifts $`z\gtrsim 1.5`$ in the new BRL sample remains roughly constant, indicating that the 100% quasar fraction beyond $`z=1.8`$ in the LRL sample is just due to small number statistics. This result can be seen in Figure 57, which shows the quasar fraction as a function of redshift from the combined BRL and LRL samples; there is no significant increase in the quasar fraction at the highest redshifts. Both samples show a stark lack of quasars at the lowest redshifts, which has been discussed by many authors. In orientation–based unification schemes (e.g. Barthel 1989) this is partially attributed to broad–line radio galaxies being the equivalent of the quasar population at low redshifts (e.g. see Antonucci 1993 for a review); more sophisticated explanations have been proposed, including an evolution in the opening angle of the torus with radio power (e.g. Lawrence 1991), or the presence of an isotropic population of low excitation radio galaxies at low redshift .

The simple unification scheme of Barthel makes strong predictions for the relative linear sizes of radio galaxies and quasars; indeed, the difference in linear sizes between the two populations in the 3CR sample was one of the factors which led him to propose the model. Figure 58 now shows the radio size versus redshift distribution for the BRL and LRL samples combined, plotting radio galaxies and quasars separately. The median linear sizes of the radio galaxies and quasars have been calculated in five separate redshift bins, the lowest redshift bin containing too few quasars to calculate an accurate median. With the increased number of sources, the result of Barthel still holds that radio galaxies with $`0.5<z<1.0`$ are, on average, larger than quasars in the same redshift range by about a factor of two. This relation is also true for higher redshifts, but the improved number statistics confirm that the result does not hold at lower redshifts (e.g. see Singal et al. 1993 for the LRL sample alone; although cf. the discussion of Section 5.2, which might have a slight effect here). Again, this rules out the simplest unification schemes, but can plausibly be explained by the modifications discussed above (e.g. see Gopal–Krishna et al. 1996).

## 6 Conclusions

Details of a new sample of the most powerful equatorial radio sources have been collated. New radio imaging, optical imaging and spectroscopic observations have been presented of the sources previously without spectroscopic redshifts, leading to the complete sample being fully optically identified and spectroscopic redshifts being available for 174 of the 178 sources (98%). Work to obtain the redshifts for the remaining four sources is continuing.
Due to the method of determining flux densities used for the Molonglo Reference Catalogue, radio selection effects may have led to a small number of radio sources subtending angular sizes larger than about 200 arcseconds being missed from the catalogue; this probably gives rise to the slightly lower percentage of FR I sources in the new sample of radio sources as compared with the revised 3CR sample. Another observed difference is that the new sample contains a higher percentage of compact steep spectrum sources than the 3CR sample; this was to be expected since these sources are often missed in low frequency selected samples due to synchrotron self–absorption. No other significant differences are found between the properties of the new sample and those of the 3CR sample. Due to its equatorial location and its high spectroscopic completeness, this sample will prove very useful for studies using a combination of the northern hemisphere instruments such as the VLA, and the new and forthcoming southern hemisphere telescope facilities, such as the large optical telescopes and the Atacama Large Millimetre Array.

## Acknowledgements

This work was supported in part by the Formation and Evolution of Galaxies network set up by the European Commission under contract ERB FMRX–CT96–086 of its TMR programme. This work is based upon observations made at the European Southern Observatory, La Silla, Chile, and using the William Herschel Telescope and the Very Large Array. The William Herschel Telescope is operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. The National Radio Astronomy Observatory is operated by Associated Universities Inc., under co-operative agreement with the National Science Foundation. The Digitized Sky Surveys were produced at the Space Telescope Science Institute under U.S. Government grant NAG W-2166. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. The authors thank Jaron Kurk for his work with the VLA archive data, and the referee, Steve Rawlings, for a very careful consideration of the manuscript and a number of helpful suggestions.
# Initial Conditions for Parton Cascades

TPI-MINN-99-13, NUC-MINN-99/5-T, UMN-TH-1749, March 1999. Talk presented at RHIC Physics and Beyond: Kay Kay Gee Day, Brookhaven National Laboratory, Upton, Long Island NY, Oct. 1998.

## I Introduction

Klaus Kinder-Geiger was a postdoctoral fellow with us at the University of Minnesota from 1991-1993. I remember well the first seminar he gave to us on the work he had been doing with Berndt Muller concerning the parton cascade model. I was very excited by what he was doing, and I was asking him question after question. This was at a time when the University of Minnesota had recently hired a number of Russians, so 2 hour seminars were not unusual. (We no longer have such long seminars. The Russians are real Americans now.) Klaus was needless to say nervous about the time he was taking, and Sharon, who was in the audience, was I think more than a little bothered by what she perceived as my harassment of Klaus. I don’t think Klaus fully understood how much I respected him after that talk. He was one of the few young people I knew who were not afraid to say they didn’t understand something, who was excited about exploring new ways of thinking, and most important, had a deep understanding of what it was he was doing.

I guess we all knew Klaus as an unconventional thinker. When he was with us, he had the typical fears and lack of confidence of all people his age. The following story illustrates this: We were discussing hiring new postdocs at Minnesota and Joe Kapusta and I invited Klaus to come to the meeting and join in the discussion. The first thing I did was go down the list of people, and anyone who had published less than three papers a year since they got their Ph. D., I refused to further consider. Klaus muttered something about how this wasn’t really very fair, and I answered back that I didn’t want to hire lazy people. Klaus had been with us for about 6 months at this time and had submitted a paper, maybe two, for publication. In the next few months, he submitted about half a dozen. Klaus was the most prolific postdoc which we ever hired in nuclear theory at the University of Minnesota. His papers were not superficial and each involved much work and thinking.

Klaus and I talked much but never worked together. He was captured by my colleague Joe Kapusta. Klaus had a profound impact on my thinking nevertheless. He got me very interested in his picture of the very early stages of heavy ion collisions, and this is the subject of this talk.

To understand the parton cascade model, one needs a space-time picture of nucleus-nucleus collisions. Such a space-time picture was developed by Bjorken and I shall summarize it in this introduction. We concentrate on the central region of collisions at asymptotically high energy. We assume that the rapidity density of produced particles is slowly varying, slow enough so that we can treat the distribution $`{\displaystyle \frac{dN}{dy}}=\mathrm{constant}`$ (1) If this is the case, the space-time dynamics for particles produced in the central region should be longitudinally Lorentz boost invariant. This means that the dynamical evolution of the particles produced in the collision is described by only one parameter, $`\tau =\sqrt{t^2-z^2}`$. We also will assume that the transverse size of the system is large enough so that one can ignore effects such as the transverse expansion of the system.
The other longitudinal variable is the space-time rapidity $`\eta ={\displaystyle \frac{1}{2}}\mathrm{ln}\left({\displaystyle \frac{t+z}{t-z}}\right)`$ (2) which under a longitudinal Lorentz boost changes by a constant. Note that for a free streaming particle, $`\eta =y`$ since $`\eta ={\displaystyle \frac{1}{2}}\mathrm{ln}\left({\displaystyle \frac{t+z}{t-z}}\right)={\displaystyle \frac{1}{2}}\mathrm{ln}\left({\displaystyle \frac{1+v_z}{1-v_z}}\right)={\displaystyle \frac{1}{2}}\mathrm{ln}\left({\displaystyle \frac{E+p_z}{E-p_z}}\right)=y`$ (3) We see therefore that $`{\displaystyle \frac{dN}{dy}}\simeq {\displaystyle \frac{dN}{d\eta }}`$, and hence $`{\displaystyle \frac{dN}{dz}}={\displaystyle \frac{dN}{d\eta }}{\displaystyle \frac{t}{\tau ^2}}`$ (4) The particles produced at $`z=y=0`$ therefore expand and dilute their density as $`{\displaystyle \frac{dN}{dz}}={\displaystyle \frac{\mathrm{constant}}{t}}`$ (5) In an isentropic expansion, as will be the case later in the collision after the particles have thermalized, the entropy density $`\sigma `$ satisfies $`\tau \sigma =\tau _0\sigma _0`$ (6) Here $`\tau _0`$ is the initial thermalization time, for which a variety of arguments suggest that $`\tau _0\sim 1`$ fm/c. We expect that the entropy will be approximately conserved as the system expands. If there is a first order phase transition, then there will be some entropy production, but again for the typical time scales characteristic of heavy ion collisions, we do not expect a dramatic increase in the entropy. Of course as the system expands, the degrees of freedom of the system change dramatically. Early on, we expect that the system will be an almost ideal gas of quarks and gluons. Later on the gas is hadronic, and very late it is a gas of far separated, almost non-interacting pions. If the hadronic gas decouples when the pions are still to a good approximation massless, as will be the case for decoupling temperatures $`T_{decoupling}\simeq 100\mathrm{MeV}`$ (recall the average energy is $`3T`$, so that $`(m_{pion}/E)^2\sim 0.1`$), then one can show that entropy conservation implies that the number of gluons initially is the same as the number of pions. Therefore the number density of gluons early on is $`{\displaystyle \frac{N}{V}}={\displaystyle \frac{1}{\tau _0\pi R^2}}{\displaystyle \frac{dN_{pions}}{dy}}`$ (7) For the typical rapidity density of pions expected at RHIC, this leads to initial temperatures $`T\sim 200\mathrm{MeV}`$. This should be sufficient to produce a quark gluon plasma.

In Fig. 1, a space-time picture of the evolution of matter produced in ultra-relativistic nuclear collisions is shown. After the time $`\tau _0`$ when thermalization occurs, the system expands. At some time and corresponding temperature, the system converts from a quark-gluon plasma into a hadron gas. This may take some time, and go through a mixed phase if there is truly a phase difference between hadronic matter and a quark-gluon plasma. If there is no true phase change, the system nevertheless changes its properties dramatically, and doing so takes time. At much later times, the system freezes out and produces free streaming particles.

## II What Happens Before $`\tau _0`$?

What happens in the time between $`\tau =0`$ and $`\tau _0`$? Surely, the earlier one goes in time, the more energetic are particle interactions. Weak coupling methods should therefore be at their best, and one should be able to compute, at least in the limit of very high energies and very large nuclei, from first principles in QCD. It is reasonable to assume that the matter when it is first formed in heavy ion collisions is in some sort of non-thermal distribution.
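Before following the thermalization argument further, it is worth making the estimate above concrete. A minimal sketch (the numbers are illustrative assumptions, not taken from the talk: a pion rapidity density of 350 and R = 7 fm for a central Au+Au collision) evaluates Eq. (7) and converts the density to a temperature assuming an ideal massless gluon gas, $`n=16\zeta (3)T^3/\pi ^2`$:

```python
import math

HBARC = 0.1973  # GeV fm

def initial_gluon_density(dn_dy, tau0_fm=1.0, R_fm=7.0):
    """Eq. (7): initial gluon number density [fm^-3] at thermalization."""
    return dn_dy / (tau0_fm * math.pi * R_fm**2)

def temperature(n_fm3):
    """Invert n = (16 zeta(3)/pi^2) T^3 for an ideal gluon gas
    (16 = 2 spin x 8 color states); returns T in GeV."""
    zeta3 = 1.20206
    T_invfm = (n_fm3 * math.pi**2 / (16.0 * zeta3)) ** (1.0 / 3.0)
    return T_invfm * HBARC

n = initial_gluon_density(350.0)   # ~ 2.3 fm^-3
print(n, temperature(n))           # T ~ 0.21 GeV, the scale quoted above
```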
The matter must therefore thermalize. Klaus Kinder-Geiger and Berndt Muller made a daring proposal in an attempt to understand the thermalization. They assumed the momentum space distribution of partons just after formation was given by the parton distribution functions. This assumption deserves a little comment since the parton distribution functions specify only the longitudinal momentum space distribution of the partons. Both the transverse momentum structure and the space-time positions of the partons at formation must be assumed. Some guidance about what are reasonable assumptions is given by uncertainty principle arguments, but the coordinate space picture is nevertheless assumed.

In fact the uncertainty principle and quantum mechanics limit the region where the parton cascade can be applied. In order to use a cascade, one must specify the phase space distribution of particles $`f(\stackrel{}{p},\stackrel{}{x},t)`$. This involves specifying both the positions and momenta of the particles, and is inconsistent with a quantum mechanical description. (One can formally define a phase space distribution function for a fully quantum system, but the distribution will in general lack positivity, and usually will violate it in the region of phase space where the quantum effects are important.) At the earliest times in the collision, the system is described by two quantum mechanical wavefunctions which describe the nuclei. Therefore for some sufficiently early time, the parton cascade description must fail.

One can also see that one must go beyond partons to describe the earliest times in the collisions. At the earliest time, the density of fast moving quarks and gluons is very high. If we use cascade theory to describe their effect on long wavelength quanta such as will be produced in the central region, we will have each of the quanta acting incoherently. This is because in a cascade, only matrix elements squared for single particle scattering occur. On the other hand, we know that when we compute the field associated with these quanta, their effect is tempered because of the overall color neutrality of confined particles. Any colored field will therefore be reduced in strength in the infrared, and its effect on long wavelength quanta will be reduced.

Klaus and Berndt tried to phenomenologically include the effect of quantum mechanics and classical charge coherence in two separate ways. The first was to assume that particles were not produced and could not interact until a characteristic formation time had elapsed in the rest frame of the particle. This has the effect of delaying the cascade description until after the formation time has passed. It evades the question of whether the parton distributions are modified during the time from the initial collision $`\tau =0`$ until the formation time. During this time, the evolution is quantum mechanical, but in the complicated collision environment, there may be a non-trivial quantum evolution of the distributions typical of a single nucleus. The other way they tried to build in some coherence was by cutting off the cross sections for parton–parton scattering at small angles. In a plasma, for example, such cross sections are cut off by medium screening effects. This parameter is crucial in their computations as all cross sections depend quadratically on such a cutoff.
In spite of these difficulties, the parton cascade model provides a useful way to describe the evolution of the matter from some time which I will refer to as the formation time $`\tau _f`$ until the thermalization time. The details of the precise form of the initial conditions and how one cuts off cross sections may be subject to dispute, but the description of the time evolution between $`\tau _f`$ and $`\tau _0`$ is conceptually correct. There are several qualitative issues associated with the approach to equilibrium which we can easily understand in this framework. The first is that if the partons are formed in an energetic environment, then the coupling is weak. Thermalization will take place by two body scatterings. The number of quanta is conserved. Following the logic through the isentropic expansion stage, we see that the number of partons at formation is to a first approximation the same as the number of produced pions. A second issue concerns flavor production. Initially most of the quanta are gluons. This is because they dominate the distribution functions. The number of quarks and anti-quarks in the sea is relatively small compared to the number of gluons. Therefore, the quark flavors come into chemical equilibrium during the transport and hydrodynamic evolution times, and so can be estimated by these methods if they turn out to be significantly in excess of their intrinsic contribution to the hadron wavefunction.

## III Before the Parton Cascade

In order to properly formulate the initial value conditions for the parton cascade, one must have a consistent quantum mechanical picture of the early stages of the collision. Such a picture is given by the McLerran-Venugopalan model as extended to nucleus-nucleus collisions. The basic ingredients in this picture are non-abelian Liénard–Wiechert potentials. To understand how this works, consider an electric dipole at rest. The electric field is the familiar electric field shown in Fig. 2. If we now boost this field to the infinite momentum frame, the electric and magnetic fields all exist in a plane perpendicular to that of the direction of motion. Further, the magnetic field lines are perpendicular to the electric field lines. Viewed head on, the electric field lines are those of Fig. 2, with magnetic fields everywhere orthogonal to electric.

Now if we study the field produced at central rapidity by a fast moving nucleus, all the gluons at higher rapidity act as color sources for these fields. This means that the system is composed of very many dipole fields in an infinitesimally thin plane perpendicular to the direction of motion. Since color is confined, on scales larger than that of a fermi, the fields vanish. On smaller scales, they are stochastic. The McLerran-Venugopalan model assumes that these fields are generated by a Gaussian distribution of sources. The weight function for these distributions of sources may be directly related to the gluon distribution function. The fields maintain their Liénard–Wiechert form prior to the collision. Upon collision, the fields begin evolving. The Yang-Mills equations can be solved numerically from these initial conditions. Initially, the fields are strong and the equations of motion are intrinsically non-linear. As the fields evolve, they dilute themselves and at some time the field equations linearize. The solution to the linearized equations corresponds to produced gluons.
One can compute their phase space density, and this forms the initial conditions for a subsequent cascade description. There is only one scale in this classical problem: the total color charge in gluons at rapidities other than the central region. Up to powers of $`\alpha _s`$, this is the same as the gluon rapidity density per unit area $`\mathrm{\Lambda }^2={\displaystyle \frac{1}{\pi R^2}}{\displaystyle \frac{dN}{dy}}`$ (8) If $`\mathrm{\Lambda }\gg \mathrm{\Lambda }_{QCD}`$, then the coupling at this scale is weak, and the classical description is consistent. (Factors of $`\alpha _s`$ can be ignored in the power counting arguments below as they involve only logarithms of density scales).

This single scale has many consequences. It is precisely the scale introduced by hand in the parton cascade to cut off the parton cross sections. Note that it depends upon the initial density of partons per unit area, and therefore upon rapidity and the baryon number of the target. Verifying that there is in fact such a $`p_T`$ scale, and measuring its dependence on various nuclei and rapidity, will be one of the things that RHIC should be able to do. Another consequence is that, because the density of produced gluons has the same parametric dependence on density as does the initial gluon density per unit rapidity, up to slowly varying factors of $`\alpha _s`$ and constant factors, these densities are the same. This provides an a posteriori justification for the initial conditions used in the parton cascade model. The space-time structure of the initial conditions is automatically built in to the classical computation. Several groups are now attempting solutions of the classical field problem for nucleus-nucleus collisions, and this can provide an initialization for a parton cascade computation.

## IV Acknowledgments

I thank my colleagues Alejandro Ayala-Mercado, Miklos Gyulassy, Yuri Kovchegov, Alex Kovner, Jamal Jalilian-Marian, Andrei Leonidov, Raju Venugopalan and Heribert Weigert with whom the ideas presented in this talk were developed. This work was supported under Department of Energy grants in high energy and nuclear physics DOE-FG02-93ER-40764 and DOE-FG02-87-ER-40328.
# Can ϵ'/ϵ be supersymmetric?[1]

## Abstract

The possible supersymmetric contribution to $`ϵ^{\prime }/ϵ`$ has been generally regarded as small in the literature. We point out, however, that this is a result based on specific assumptions, such as a universal scalar mass, and in general need not be true. Under the general assumptions of (1) hierarchical quark Yukawa matrices protected by flavor symmetry, (2) generic dependence of Yukawa matrices on Polonyi/moduli fields as expected in many supergravity/superstring theories, (3) Cabibbo rotation originating from the down-sector, and (4) phases of order unity, we find the typical supersymmetric contribution to $`ϵ^{\prime }/ϵ`$ to be of order $`3\times 10^{-3}`$ for $`m_{\stackrel{~}{q}}=500`$ GeV. It is even possible that the supersymmetric contribution dominates in the reported KTeV value $`ϵ^{\prime }/ϵ=(28\pm 4.1)\times 10^{-4}`$. If so, the neutron electric dipole moment is likely to be within the reach of the currently planned experiments.

preprint: LBNL-42967, UCB-PTH-99/07

CP violation is the least understood aspect in the properties of the fundamental particles besides the mechanism of the electroweak symmetry breaking. The so-called “indirect CP violation” $`ϵ`$ in the neutral kaon system has been known for three decades as the only evidence that there is a fundamental distinction between particles and anti-particles. This year, however, produced two new manifestations of CP violation: $`\mathrm{sin}2\beta `$ from $`B\to \psi K_s`$ at CDF , even though the evidence is still somewhat weak, and a beautiful measurement of “direct CP violation” $`ϵ^{\prime }/ϵ`$ in the neutral kaon system from KTeV . The latter confirmed the previous evidence reported by NA31 at a much higher accuracy and excludes the so-called superweak model of CP violation. The reported number, $`ϵ^{\prime }/ϵ=(28\pm 4.1)\times 10^{-4}`$, was, however, somewhat surprisingly large. The standard model prediction is currently controversial (see Table I) and is dominated by theoretical uncertainties in quantities such as the non-perturbative matrix elements and the strange quark mass $`m_s`$. Given this situation, one cannot interpret the KTeV data reliably; in particular, it is not clear if the data is consistent with the standard model (see also the recent discussion in ).

On the other hand, the standard model is believed to be only an effective low-energy approximation of fundamental physics. This is largely because it lacks a dynamical explanation of the mechanism of electroweak symmetry breaking and suffers from a serious hierarchy problem: the electroweak scale is unstable against radiative corrections. The best available simultaneous solution to both of these problems is supersymmetry. Therefore, it is a natural question to ask if supersymmetry gives a sizable contribution to $`ϵ^{\prime }/ϵ`$ given a precise measurement. The experimental sensitivity to a possible supersymmetric contribution is currently plagued by the theoretical uncertainties mentioned above, but we can expect them to be resolved or at least alleviated eventually by improvements in particular in lattice QCD calculations. It is therefore timely to reconsider the supersymmetric contribution to $`ϵ^{\prime }/ϵ`$.

In this letter, we revisit the estimate of $`ϵ^{\prime }/ϵ`$ in supersymmetric models . The common lore in the literature is that the supersymmetric contribution to $`ϵ^{\prime }/ϵ`$ is in general rather small. We point out, however, that this lore is largely based on the specific choice of supersymmetry breaking effects sometimes called the minimal supergravity framework .
A more general framework of flavor structure tends to give a relatively large contribution to $`ϵ^{\prime }/ϵ`$ in a wide class of models. The assumptions are: (1) hierarchical quark Yukawa matrices protected by flavor symmetry, (2) generic dependence of Yukawa matrices on Polonyi/moduli fields as expected in many supergravity/superstring theories, (3) Cabibbo rotation originating from the down-sector, and (4) phases of order unity. In fact, there is even an intriguing possibility that the observed $`ϵ^{\prime }/ϵ`$ is mostly or entirely due to the supersymmetric contribution.

To discuss the CP violating effects induced by loops of supersymmetric particles, it is convenient to introduce the mass insertion formalism . The Yukawa matrices are couplings in the superpotential $`W=Y_{ij}^uQ_iU_jH_u+Y_{ij}^dQ_iD_jH_d`$, where $`H_d`$, $`H_u`$ are Higgs doublets and $`i,j`$ flavor indices. The expectation values $`\langle H_u\rangle =v\mathrm{sin}\beta /\sqrt{2}`$ and $`\langle H_d\rangle =v\mathrm{cos}\beta /\sqrt{2}`$ generate quark mass matrices $`M^u=Y^uv\mathrm{sin}\beta /\sqrt{2}`$ and $`M^d=Y^dv\mathrm{cos}\beta /\sqrt{2}`$. They are diagonalized by bi-unitary transformations $`M^u=V_L^u\mathrm{diag}(m_u,m_c,m_t)V_R^{u\mathrm{\dagger }}`$ and $`M^d=V_L^d\mathrm{diag}(m_d,m_s,m_b)V_R^{d\mathrm{\dagger }}`$, and the Cabibbo–Kobayashi–Maskawa matrix is given by $`V_L^{u\mathrm{\dagger }}V_L^d`$. The squarks have chirality-preserving mass-squared matrices $`\stackrel{~}{Q}_i^{\mathrm{\dagger }}(M_Q^2)_{ij}\stackrel{~}{Q}_j`$, $`\stackrel{~}{U}_i^{\mathrm{\dagger }}(M_U^2)_{ij}\stackrel{~}{U}_j`$, $`\stackrel{~}{D}_i^{\mathrm{\dagger }}(M_D^2)_{ij}\stackrel{~}{D}_j`$ and chirality-violating trilinear couplings $`\stackrel{~}{Q}_i(A^d)_{ij}\stackrel{~}{D}_jH_d+\stackrel{~}{Q}_i(A^u)_{ij}\stackrel{~}{U}_jH_u`$. The Higgs expectation values generate the left-right (LR) mass-squared matrices $`M_{LR}^{2,d}=A^dv\mathrm{cos}\beta /\sqrt{2}`$ and $`M_{LR}^{2,u}=A^uv\mathrm{sin}\beta /\sqrt{2}`$.

The convenient basis to discuss flavor-changing effects in the gluino loop diagrams is the so-called superCKM basis . In this basis the relevant quark mass matrix is diagonalized (say, $`M^d`$) and the squarks are also rotated in the same way, $`M_Q^2\to V_L^{d\mathrm{\dagger }}M_Q^2V_L^d`$, $`M_D^2\to V_R^{d\mathrm{\dagger }}M_D^2V_R^d`$, and $`M_{LR}^{2,d}\to {}^{t}V_{L}^{d}M_{LR}^{2,d}V_R^d`$. Flavor-changing effects can be estimated by insertion of flavor-off-diagonal components of the mass-squared matrices in this basis. By normalizing the off-diagonal components by the average squark mass-squared $`m_{\stackrel{~}{q}}^2`$, we define $`(\delta _{LL}^d)_{ij}=(V_L^{d\mathrm{\dagger }}M_Q^2V_L^d)_{ij}/m_{\stackrel{~}{q}}^2`$, $`(\delta _{RR}^d)_{ij}=(V_R^{d\mathrm{\dagger }}M_D^2V_R^d)_{ij}/m_{\stackrel{~}{q}}^2`$, and $`(\delta _{LR}^d)_{ij}=({}^{t}V_{L}^{d}M_{LR}^{2,d}V_R^d)_{ij}/m_{\stackrel{~}{q}}^2`$.

The supersymmetric contributions due to gluino loops to the neutral kaon parameters $`(\mathrm{\Delta }m_K)_{SUSY}`$, $`ϵ_{SUSY}`$ and $`(ϵ^{\prime }/ϵ)_{SUSY}`$ have been calculated, and have been used to place bounds on mass insertion parameters . The values of the mass insertion parameters which saturate the observed numbers of $`\mathrm{\Delta }m_K`$, $`ϵ`$ and $`ϵ^{\prime }/ϵ`$ are tabulated in Table II, after updating the numbers in Ref. . These numbers are subject to theoretical uncertainties in QCD corrections and matrix elements at least of order a few tens of percent (this is at least what is obtained for the $`\mathrm{\Delta }S=2`$ transitions ).
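To make the basis change concrete, here is a small numerical sketch (one possible convention; illustrative only, not tied to the results quoted below) which diagonalizes a given down-quark mass matrix and reads off the normalized LL insertions:

```python
import numpy as np

def deltas_LL(M_d, M_Q2, m_sq_avg2):
    """Rotate a squark mass-squared matrix to the super-CKM basis defined by
    the down-quark mass matrix M_d, and normalize by the average squark mass
    squared.  The off-diagonal entries are the (delta^d_LL)_{ij}."""
    # Singular value decomposition gives M_d = V_L diag(m) V_R^dagger.
    V_L, masses, V_R_dag = np.linalg.svd(M_d)
    M_Q2_sckm = V_L.conj().T @ M_Q2 @ V_L   # same rotation as the left-handed quarks
    return M_Q2_sckm / m_sq_avg2
```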
Barring possible cancellations with the standard-model amplitudes as well as with the other SUSY contributions (i.e., chargino and charged Higgs exchanges), the mass insertion parameters have to be smaller than or at most comparable to the entries in the Table. Stringent bounds on $`(\delta _{12}^d)_{LL}`$ from $`\mathrm{\Delta }m_K`$ and $`ϵ`$ have been regarded as a problem in supersymmetric models. A random mass-squared matrix of squarks would lead to a large $`(\delta _{12}^d)_{LL}`$ which would overproduce $`\mathrm{\Delta }m_K`$ or $`ϵ`$. Usually an assumption is invoked that the squark mass-squared matrix is proportional to the identity matrix (universality), at least for the first and second generations (alternatively one can invoke an alignment of the quark and squark mass matrices ). Even when such an assumption is made at the Planck scale, radiative effects can induce $`(\delta _{12}^d)_{LL}`$ and hence overproduce $`\mathrm{\Delta }m_K`$ or $`ϵ`$.

Once the bounds are satisfied, however, the supersymmetric contribution to $`ϵ^{\prime }`$ tends to be rather small: $`\mathrm{\Delta }m_K`$ and $`ϵ`$ require $`|(\delta _{12}^d)_{LL}|=((\mathrm{Re}(\delta _{12}^d)_{LL}^2)^2+(\mathrm{Im}(\delta _{12}^d)_{LL}^2)^2)^{1/4}\lesssim 0.019`$–0.092, which is much smaller than the corresponding bounds from $`ϵ^{\prime }/ϵ`$, $`|\mathrm{Im}(\delta _{12}^d)_{LL}|\lesssim 0.10`$–0.27 . This fact led to a common wisdom that the supersymmetric contribution to $`ϵ^{\prime }`$ is in general small. For the rest of the letter, we simply assume that the $`(\delta _{ij}^d)_{LL}`$ parameters are kept under control by some mechanism such as a flavor symmetry, and we do not discuss them further. However, the contribution from $`(\delta _{12}^d)_{LR}`$ can be important; even $`|\mathrm{Im}(\delta _{12}^d)_{LR}|\sim 10^{-5}`$ gives a significant contribution to $`ϵ^{\prime }/ϵ`$ while the bounds on $`(\delta _{12}^d)_{LR}`$ from $`\mathrm{\Delta }m_K`$ and $`ϵ`$ are only about $`3\times 10^{-3}`$ and $`3\times 10^{-4}`$, respectively. Actually, one can even imagine saturating both $`ϵ`$ and $`ϵ^{\prime }/ϵ`$ at the borderline of the current limits . Therefore, whether the supersymmetric contribution to $`ϵ^{\prime }/ϵ`$ can be important is an issue of how large $`(\delta _{12}^d)_{LR}`$ is expected to be in supersymmetric models, rather than one of phenomenological viability.

What is the general expectation on the size of $`(\delta _{12}^d)_{LR}`$? The common answer in the literature to this question is that it is very small in general, and hence the supersymmetric contribution to $`ϵ^{\prime }`$ has been regarded as small as well. This is indeed the case if one assumes that all soft supersymmetry breaking parameters are universal at the Planck or GUT scale, where $`(\delta _{12}^d)_{LR}`$ is induced only radiatively at higher orders in the small Yukawa coupling constants of the first and second generation particles . However, the universal breaking is a strong assumption and is known not to be true in many supergravity and string-inspired models . On the other hand, the LR mass matrix has the same flavor structure as the fermion Yukawa matrix and both in fact originate from the superpotential couplings. Our theoretical prejudice is that there is an underlying symmetry (flavor symmetry) which restricts the form of the Yukawa matrices to explain their hierarchical forms. Then the LR mass matrix is expected to have a form very similar to the Yukawa matrix.
More precisely, we expect the components of the LR mass matrix to be roughly the supersymmetry breaking scale (e.g., $`m_{3/2}`$) times the corresponding component of the quark mass matrix. However, there is no reason for them to be simultaneously diagonalizable based on this general argument. In general, we expect the size of $`(\delta _{12}^d)_{LR}`$ to be

$$(\delta _{12}^d)_{LR}\sim \frac{m_{3/2}M_{12}^d}{m_{\stackrel{~}{q}}^2}.$$ (1)

To be more concrete, one can imagine a string-inspired theory where the Yukawa couplings in the superpotential $`W=Y_{ij}^d(T)Q^iD^jH_d`$ are in general complicated functions of the moduli fields $`T`$. The moduli fields have expectation values of order the string scale, which describe the geometry of the compactified extra six dimensions. The low-energy Yukawa couplings are then given by their expectation values $`Y_{ij}^d(\langle T\rangle )`$. On the other hand, the moduli fields in general also have couplings to fields in the hidden sector and acquire supersymmetry-breaking $`F`$-component expectation values $`F_T\sim m_{3/2}`$ in units of the reduced Planck mass $`M_{Pl}/\sqrt{8\pi }`$. This generates trilinear couplings given by

$$\frac{\mathrm{}Y_{ij}^d}{\mathrm{}T}F_T\stackrel{~}{Q}^i\stackrel{~}{D}^jH_d,$$ (2)

which depend on a different matrix $`\mathrm{}Y_{ij}^d/\mathrm{}T`$ (here $`\mathrm{}`$ denotes the partial derivative $`\partial `$). Due to holomorphy, flavor symmetry is likely to constrain $`Y_{ij}^d`$ and its derivative to be similar, while they in general do not have to be exactly proportional to each other and hence are not simultaneously diagonalizable.

In order to proceed to numerical estimates of $`(\delta _{12}^d)_{LR}`$, we need to specify whether the quark mixings come from the up or the down sector. In general, attributing the mixing to the up sector gives smaller flavor-changing effects and receives weaker constraints . On the other hand, historically the Cabibbo angle has often been attributed to the down sector because of a numerical coincidence $`V_{us}=\mathrm{sin}\theta _C=0.22\simeq \sqrt{m_d/m_s}`$. For our purpose, we pick the latter choice, which fixes the form of the mass matrix for the first and second generations to be

$$M^d\simeq \left(\begin{array}{cc}m_d& m_sV_{us}\\ & m_s\end{array}\right),$$ (3)

where the (2,1) element is unknown due to our lack of knowledge on the mixings among right-handed quarks. Based on the general considerations on the LR mass matrix above, we expect

$$m_{LR}^{2,d}\sim m_{3/2}\left(\begin{array}{cc}am_d& bm_sV_{us}\\ & cm_s\end{array}\right),$$ (4)

where $`a`$, $`b`$, $`c`$ are constants of order unity. Unless $`a=b=c`$ exactly, $`M^d`$ and $`m_{LR}^{2,d}`$ are not simultaneously diagonalizable and we find

$`(\delta _{12}^d)_{LR}\sim {\displaystyle \frac{m_{3/2}m_sV_{us}}{m_{\stackrel{~}{q}}^2}}=2\times 10^{-5}\left({\displaystyle \frac{m_s(M_{Pl})}{50\mathrm{MeV}}}\right)\left({\displaystyle \frac{m_{3/2}}{m_{\stackrel{~}{q}}}}\right)\left({\displaystyle \frac{500\mathrm{GeV}}{m_{\stackrel{~}{q}}}}\right).`$ (6)

It is interesting that this naive dimensional estimate of $`(\delta _{12}^d)_{LR}`$ saturates the bound from $`ϵ^{\prime }/ϵ`$ (see Table II) if it has a phase of order unity. The key point of the above example is that the large value of $`ϵ^{\prime }/ϵ`$ of the KTeV and NA31 experiments can be accounted for in the supersymmetric context without particularly contrived assumptions on the size of the $`(\delta _{12}^d)_{LR}`$ mass insertion, with the exception of taking it to have a large CP violating phase . One may wonder if typical off-diagonal elements in $`(\delta _{ij}^d)_{LR}`$ are already excluded by other flavor-changing processes.
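Before turning to those constraints, a quick numerical check of the estimates in this section (a sketch; the Planck-scale value of $`m_d`$ is an illustrative assumption, and the other inputs are the reference values used in the text):

```python
# Reference inputs from the text (masses in GeV).
m_32 = 500.0     # gravitino mass m_{3/2}, taken equal to the squark mass
m_sq = 500.0     # average squark mass
m_s  = 0.050     # m_s at the Planck scale
m_d  = 0.0015    # m_d at the Planck scale (illustrative assumption)
V_us = 0.22

delta_12_LR = m_32 * m_s * V_us / m_sq**2   # Eq. (6): ~ 2.2e-5
delta_11_LR = m_32 * m_d / m_sq**2          # flavor-diagonal entry: ~ 3e-6
print(delta_12_LR, delta_11_LR)
```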
For instance, $`(\delta _{23}^d)_{LR}`$ is constrained by $`b\to s\gamma `$ to be less than 1–3$`\times 10^{-2}`$ . This is to be compared to the estimate $`(\delta _{23}^d)_{LR}\simeq m_{3/2}m_bV_{cb}/m_{\stackrel{~}{q}}^2\simeq 2\times 10^{-5}`$. The constraint from $`b\to sl^+l^{-}`$ is similarly insignificant . Constraints from the up sector are much weaker. It is tempting to speculate that the observed $`ϵ^{}/ϵ`$ may be dominated by the $`(\delta _{12}^d)_{LR}`$ contribution. The estimate in Eq. (6) requires $`m_{3/2}\simeq m_{\stackrel{~}{q}}`$ and an $`O(1)`$ phase. Because $`m_{\stackrel{~}{q}}^2`$ acquires a positive contribution from the gluino mass in the renormalization-group evolution while the off-diagonal components of the LR mass matrix do not, such a scenario would prefer models where the gaugino mass is somewhat smaller than the scalar masses (assumed also to be $`O(m_{3/2})`$). An important implication of supersymmetry-dominated $`ϵ^{}/ϵ`$ is that the neutron electric dipole moment (EDM) is likely to be large. The current limit on the neutron EDM, $`d_n<11\times 10^{-26}e\mathrm{cm}`$, constrains $`|\mathrm{Im}(\delta _{11}^d)_{LR}|<(2.4,3.0,5.6)\times 10^{-6}`$ for $`m_{\stackrel{~}{g}}^2/m_{\stackrel{~}{q}}^2=0.3,1.0,4.0`$, respectively, with a theoretical uncertainty of at least a factor of two, while our estimate gives $`(\delta _{11}^d)_{LR}\simeq m_{3/2}m_d/m_{\stackrel{~}{q}}^2\simeq 3\times 10^{-6}`$. It would be interesting to see results from near-future experiments which are expected to improve the limit on $`d_n`$ by two orders of magnitude. One may extend the discussion to the lepton sector. Let us consider $`m_{\stackrel{~}{l}}\simeq m_{3/2}\simeq 500`$ GeV for our discussion. The constraints from $`\mu \to e\gamma `$ and the electron EDM are: $`|(\delta _{12}^l)_{LR}|<0.7`$–$`1.9\times 10^{-5}`$ and $`|\mathrm{Im}(\delta _{11}^l)_{LR}|<1.5`$–$`3.5\times 10^{-6}`$ for $`0.4<m_{\stackrel{~}{\gamma }}^2/m_{\stackrel{~}{l}}^2<5.0`$ . Our estimates of these mass insertion parameters are $`(\delta _{12}^l)_{LR}\simeq m_{3/2}m_\mu V_{\nu _e\mu }/m_{\stackrel{~}{l}}^2\simeq 2.1\times 10^{-4}V_{\nu _e\mu }`$ and $`(\delta _{11}^l)_{LR}\simeq m_{3/2}m_e/m_{\stackrel{~}{l}}^2\simeq 1.0\times 10^{-6}`$. Lacking knowledge of the lepton mixing angles, we cannot draw a definite conclusion on the $`\mu \to e\gamma `$ process. One possible choice is the one suggested by the small-angle MSW solution to the solar neutrino problem, $`V_{\nu _e\mu }\simeq \sqrt{m_e/m_\mu }\simeq 0.05`$. It is interesting that these estimates of $`|(\delta _{12}^l)_{LR}|`$ and $`|\mathrm{Im}(\delta _{11}^l)_{LR}|`$ nearly saturate the bounds. In summary, we have reconsidered the possible supersymmetric contribution to $`ϵ^{}/ϵ`$. Contrary to the lore in the literature, we find that generic supersymmetric models give an interesting contribution to $`ϵ^{}/ϵ`$, and it is even possible that it dominates the observed value. We expect the neutron EDM to be within the reach of near-future experiments in that case. ###### Acknowledgements. A.M. thanks Luca Silvestrini for interesting discussions and suggestions. H.M. thanks Kaustubh Agashe and Lawrence Hall for pointing out numerical errors in an earlier version of the paper. We thank the organizers of the IFT workshop, “Higgs and SuperSymmetry: Search & Discovery,” University of Florida, March 8-11, 1999, for providing the stimulating atmosphere in which this work was started.
# Generation of Neutrino Masses and Mixings in Gauge Theories ## 1 Introduction Recent experimental data on neutrinos have a big impact on our understanding of neutrino masses and their mixings. The most exciting are the results at Super-Kamiokande on the atmospheric neutrinos, which indicate a large neutrino flavor oscillation $`\nu _\mu \to \nu _x`$ . Solar neutrino data also provide evidence for neutrino oscillation; however, this problem is still uncertain . What can we learn from these results? We want to get clues to the origins of neutrino masses and neutrino flavor mixings. In particular, we want to understand why the neutrino mixing is large compared with the quark sector. We should now discuss these problems in connection with the quark sector. ## 2 Phenomenological Aspect of Neutrino Masses and Mixings Our starting point as to the neutrino mixing is the large $`\nu _\mu \to \nu _\tau `$ oscillation of the atmospheric neutrinos, with $`\mathrm{\Delta }m_{\mathrm{atm}}^2=(1\text{–}6)\times 10^{-3}\mathrm{eV}^2`$ and $`\mathrm{sin}^22\theta _{\mathrm{atm}}\gtrsim 0.9`$, which are derived from the recent data on the atmospheric neutrino deficit at Super-Kamiokande . In the solar neutrino problem , there are three solutions: the MSW small angle solution, the MSW large angle solution and the vacuum solution. These mass-difference scales are much smaller than the atmospheric one. Once we put $`\mathrm{\Delta }m_{\mathrm{atm}}^2=\mathrm{\Delta }m_{32}^2`$ and $`\mathrm{\Delta }m_{\odot }^2=\mathrm{\Delta }m_{21}^2`$, there are two typical mass patterns: $`m_3\gg m_2\gg m_1`$ and $`m_3\simeq m_2\simeq m_1`$. The neutrino mixing is defined as $`\nu _\alpha =U_{\alpha i}\nu _i`$, where $`\alpha `$ denotes the flavor $`e,\mu ,\tau `$ and $`i`$ denotes the mass eigenstates $`1,2,3`$. We then have two typical mixing patterns: $$U_{\mathrm{MNS}}=\left(\begin{array}{ccc}1& U_{e2}& U_{e3}\\ U_{\mu 1}& \frac{1}{\sqrt{2}}& -\frac{1}{\sqrt{2}}\\ U_{\tau 1}& \frac{1}{\sqrt{2}}& \frac{1}{\sqrt{2}}\end{array}\right),\left(\begin{array}{ccc}\frac{1}{\sqrt{2}}& \frac{1}{\sqrt{2}}& U_{e3}\\ -\frac{1}{2}& \frac{1}{2}& \frac{1}{\sqrt{2}}\\ \frac{1}{2}& -\frac{1}{2}& \frac{1}{\sqrt{2}}\end{array}\right),$$ (1) where the first is the single maximal mixing pattern, in which the solar neutrino deficit is explained by the small angle MSW solution, and the other is the bi-maximal mixing pattern, in which the solar neutrino deficit is explained by the just-so solution. In both cases $`U_{e3}`$ is constrained by the CHOOZ data. ## 3 Neutrino Masses and Mixings in the GUT The left-handed neutrino masses are supposed to be at most $`𝒪(1)\mathrm{eV}`$. In the case of Majorana neutrinos, we know two classes of models which lead naturally to a small neutrino mass: (i) models in which the seesaw mechanism works and (ii) those in which the neutrino mass is induced by radiative corrections. The central idea of models (i) supposes some higher symmetry which is broken at a high energy scale. If this symmetry breaking allows the right-handed neutrino to acquire a large mass, a small mass is induced for the left-handed neutrino by the seesaw mechanism. In the class of models (ii) one introduces a scalar particle with a mass of the order of the electroweak (EW) energy scale which breaks lepton number in the scalar sector. A left-handed neutrino mass is then induced by a radiative correction from a scalar loop. This model requires some new physics at the EW scale.
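Returning for a moment to the mixing patterns of Eq. (1), the following minimal sketch (assuming the $`U_{e3}\to 0`$ limit and one conventional sign choice, which is not unique) constructs the bi-maximal matrix and verifies its unitarity and the maximal atmospheric angle:

```python
import numpy as np

# Bi-maximal pattern of Eq. (1) in the U_e3 -> 0 limit (one sign convention).
s = 1.0 / np.sqrt(2.0)
U = np.array([[  s,    s,  0.0],
              [-0.5,  0.5,   s],
              [ 0.5, -0.5,   s]])

print(np.allclose(U @ U.T, np.eye(3)))   # unitarity: True
theta23 = np.arctan2(abs(U[1, 2]), abs(U[2, 2]))
print(np.sin(2 * theta23)**2)            # sin^2(2 theta_atm) = 1 (maximal)
```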
SU(5) GUT: From the standpoint of quark-lepton unification, the charged lepton mass matrix is connected with the down quark one. The mixing following from the charged lepton mass matrix may be considered to be small, like that of the quarks in the hierarchical basis. However, this expectation is not true if the mass matrix is non-Hermitian. In SU(5), the fermions belong to the 10 and 5\*: $$\mathrm{𝟏𝟎}:\chi _{ab}=u^c+Q+e^c,\mathrm{𝟓}^{}:\psi ^a=d^c+L,$$ (2) where $`Q`$ and $`L`$ are SU(2) doublets of quarks and leptons, respectively. The Yukawa couplings are given by $`10_i10_j5_H`$ (up-quarks) and $`5_i^{}10_j5_H^{}`$ (down-quarks and charged leptons) (i,j=1,2,3). Therefore we get $`m_E=m_D^T`$ at the GUT scale. It should be noticed that the observed quark mass spectra and the CKM matrix only constrain the down quark mass matrix, typically as follows: $$m_{\mathrm{down}}\simeq K_D\left(\begin{array}{ccc}\lambda ^4& \lambda ^3& \lambda ^4\\ x& \lambda ^2& \lambda ^2\\ y& z& 1\end{array}\right)\mathrm{with}\lambda =0.22.$$ (3) The three unknowns $`x,y,z`$ are related to the left-handed charged lepton mixing due to $`m_E=m_D^T`$. The left(right)-handed down quark mixings are related to the right(left)-handed charged lepton mixings in SU(5). Therefore, there is a source of large flavor mixing in the charged lepton sector if $`z\simeq 1`$ is derived in some models. This mechanism was nicely used by some authors . In the case of the SO(10) GUT, SO(10) breaking may lead to large mixing in the charged lepton sector if an interaction asymmetric in family space exists . In conclusion, the $`\nu _\mu `$–$`\nu _\tau `$ mixing could be maximal in some GUT models, which are consistent with the quark sector. See-saw enhancement: The large mixing may come from the neutrino sector. It could be obtained in the see-saw mechanism as a consequence of a certain structure of the right-handed Majorana mass matrix . That is the so-called see-saw enhancement of the neutrino mixing, due to the cooperation between the Dirac and Majorana mass matrices. The mass matrix of light Majorana neutrinos $`m_\nu `$ has the following form $$m_\nu \simeq m_DM_R^{-1}m_D^T,$$ (4) where $`m_D`$ is the neutrino Dirac mass matrix and $`M_R`$ is the Majorana mass matrix of the right-handed neutrino components. Then, the lepton mixing matrix is $`V_{\mathrm{\ell }}=S_{\mathrm{\ell }}^{\dagger }S_\nu V_s`$, where $`S_{\mathrm{\ell }}`$, $`S_\nu `$ are the transformations which diagonalize the Dirac mass matrices of the charged leptons and neutrinos, respectively. The $`V_s`$ specifies the effect of the see-saw mechanism, i.e. the effects of the right-handed Majorana mass matrix. It is determined by $$V_s^Tm_{ss}V_s=\mathrm{diag}(m_1,m_2,m_3),\mathrm{with}m_{ss}=m_D^{\mathrm{diag}}M_R^{-1}m_D^{\mathrm{diag}}.$$ (5) In the case of two generations, the mixing matrix $`V_s`$ is easily investigated in terms of one angle $`\theta _s`$. This angle can be maximal under certain conditions on the parameters of the Dirac mass matrix and the right-handed Majorana mass matrix. That is the enhancement due to the see-saw mechanism. The rich structure of the right-handed Majorana mass matrix can lead to maximal flavor mixing of the neutrinos. Radiative neutrino mass: In the class of models (ii), neutrino masses are induced by radiative corrections. The typical one is the Zee model, in which a charged gauge-singlet scalar induces the neutrino mass . In this model, the earlier predictions are consistent with the LSND data and the atmospheric neutrino data. The solar neutrino deficit was then explained by introducing a sterile neutrino.
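Returning to the see-saw enhancement discussed above, a toy two-generation example of Eq. (4) (all scales purely illustrative) shows how a structured $`M_R`$ generates maximal light-neutrino mixing even from a strongly hierarchical Dirac matrix:

```python
import numpy as np

# Toy two-generation see-saw, Eq. (4): m_nu ~ m_D M_R^{-1} m_D^T (arbitrary units).
m_D = np.diag([0.01, 1.0])              # hierarchical Dirac masses
M_R = 1.0e4 * np.array([[0.0, 1.0],     # off-diagonal right-handed Majorana matrix
                        [1.0, 0.0]])

m_nu = m_D @ np.linalg.inv(M_R) @ m_D.T
vals, vecs = np.linalg.eigh(m_nu)
print(m_nu)    # purely off-diagonal: the light states mix maximally (45 degrees)
print(vecs)    # eigenvectors (1, +-1)/sqrt(2); eigenvalue signs are unphysical phases
```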
However, a new solution has been found in the framework of the Zee model. In the case of the inverse hierarchy $`m_1\simeq m_2\gg m_3`$, the bi-maximal mixing, which is consistent with the atmospheric and solar neutrinos, is obtained . The MSSM with R-parity violation can also give the neutrino masses and mixings. The MSSM allows renormalizable B and L violation. R-parity conservation forbids the B and L violation in the superpotential in order to avoid proton decay. However, proton decay is avoided at tree level if either the B- or the L-violating terms vanish. The simplest model is the bilinear R-parity violating model with $`ϵ_iH_uL_i`$ for the lepton-Higgs coupling . This model provides the large mixing which is consistent with the atmospheric and solar neutrinos. ## 4 Flavor Symmetry and Large Mixings In the previous discussions, we assumed a family structure in the mass matrices. However, the masses and mixings may suggest some flavor symmetry. The simplest flavor symmetry is U(1), which was discussed intensively by Ramond et al. In their model, they assumed (1) fermions carry U(1) charge, (2) U(1) is spontaneously broken by $`<\theta >`$, in which $`\theta `$ is an EW singlet with U(1) charge -1, and (3) Yukawa couplings appear as effective operators $$h_{ij}^DQ_i\overline{d}_jH_d\left(\frac{\theta }{\mathrm{\Lambda }}\right)^{m_{ij}}+h_{ij}^UQ_i\overline{u}_jH_u\left(\frac{\theta }{\mathrm{\Lambda }}\right)^{n_{ij}}+\mathrm{},$$ (6) where $`<\theta >/\mathrm{\Lambda }=\lambda \simeq 0.22`$. The powers $`m_{ij}`$ and $`n_{ij}`$ are determined from the U(1) charges of the fermions in order that the effective operators be U(1) invariants. The U(1) charges of the fermions are fixed by the experimental data on the fermion masses and mixings. The model then has an anomalous U(1). Another typical flavor symmetry is $`S_3`$. The $`S_{3L}\times S_{3R}`$ symmetric mass matrix is the so-called democratic mass matrix , which needs a large rotation in order to move to the diagonal basis. In the quark sector, this large rotation cancels between the down-quark and up-quark sectors. However, the situation in the lepton sector is very different from the quark sector if the effective neutrino mass matrix $`m_{LL}^\nu `$ is far from the democratic one while the charged lepton one is still democratic. Let us consider neutrino mass matrices which provide large mixings : The typical one is $$M_\nu =c_\nu \left(\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right)+\left(\begin{array}{ccc}0& ϵ_\nu & 0\\ ϵ_\nu & 0& 0\\ 0& 0& \delta _\nu \end{array}\right),\mathrm{or}+\left(\begin{array}{ccc}ϵ_\nu & 0& 0\\ 0& ϵ_\nu & 0\\ 0& 0& \delta _\nu \end{array}\right),$$ (7) where the first term is the $`S_{3L}`$ symmetric effective mass matrix and the second or the third is the $`S_{3L}`$ breaking one. In the case of the first breaking matrix, the large mixing of the $`(12)`$ family sector is completely canceled between the neutrino and the charged lepton sectors; however, the large mixing of the $`(23)`$ family in the charged lepton sector is not canceled. So we have the large mixing in the lepton flavor mixing matrix. If we adopt the latter symmetry breaking matrix , we obtain a lepton mixing matrix that is near bi-maximal, because the large mixings from the charged lepton mass matrix cannot be canceled. This case can accommodate the ”just-so” scenario for the solar neutrino problem due to neutrino oscillation in vacuum. ## 5 Summary Models depend on three phenomenological aspects.
Is the mixing pattern the single maximal mixing or the bi-maximal mixing? Is there a sterile neutrino? Are the neutrino masses degenerate or hierarchical? More precise solar neutrino data will answer the first and second questions. More precise atmospheric neutrino data and the long-baseline experiments can answer the second question. The double beta decay experiments may answer the last question. We need more data in order to establish the model, as well as more theoretical studies.
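As a closing numerical aside, the democratic-matrix mechanism of Sec. 4 is easy to exhibit; the sketch below (unit overall mass scale, purely illustrative) diagonalizes the $`S_{3L}\times S_{3R}`$ symmetric matrix and displays the large rotation involved:

```python
import numpy as np

# Democratic (S_3L x S_3R symmetric) mass matrix, normalized to unit heavy mass.
M_E = np.ones((3, 3)) / 3.0
vals, F = np.linalg.eigh(M_E)
print(np.round(vals, 6))   # spectrum (0, 0, 1): only one heavy state

# F is a large rotation (the basis in the doubly degenerate subspace is not
# unique); if M_nu is close to the unit matrix (first term of Eq. (7)), this
# large rotation survives in the lepton mixing matrix.
print(np.round(F, 3))
```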
# Optical response of small silver clusters ## I Introduction The optical response of clusters of IB elements has been an interesting theoretical challenge: while their chemistry is dominated by the atom’s single valence $`s`$-electron, the electrical properties are strongly influenced by the nearby filled $`d`$-shell. Up to now, the $`d`$-electrons have been treated only implicitly by a dielectric approximation. For example, one of the interesting phenomena that has been attributed to the $`d`$-electrons is the blue shift of the surface plasmon for small clusters. The $`d`$-electrons also strongly screen the oscillator strength of the valence electrons, and this raises the question of whether the theory is consistent with the measured oscillator strengths, which are only somewhat below the full sum for the $`s`$-electrons. In this work, we calculate the optical response explicitly including the $`d`$-electrons, using the time-dependent local density approximation (TDLDA). We begin by recalling the limiting behavior in some simple extreme models. The first is the free-electron model including only the $`s`$-electrons, as in the jellium theory. This produces a collective mode with all of the oscillator strength at a frequency related to the number density $`n`$ by $`\omega _M=\sqrt{4\pi e^2n/3m}`$. At the bulk density of silver, this gives an excitation energy of 5.2 eV (a numerical check of this estimate is given at the end of Sec. III). The second limiting case is the Mie theory, treating the cluster as a classical dielectric sphere. The Mie theory in the long wavelength limit gives the optical absorption cross section as $$\sigma =\frac{4\pi \omega R^3}{c}\mathrm{Im}\frac{ϵ(\omega )-1}{ϵ(\omega )+2}$$ (1) where $`R`$ is the radius of the sphere and $`ϵ(\omega )`$ is the dielectric function. In Fig. 1 we show the result expressed as the cross section per atom, taking $`ϵ(\omega )`$ from ref. . The graph also shows the integrated oscillator strength per atom, $`f_E/N=\sum _{E_i<E}f_i/N`$. We see that there is a sharp peak at 3.5-3.6 eV, but that the oscillator strength is only 1/6 of the sum rule for $`s`$-electrons. Thus the effect of the screening is to push the $`s`$-electron surface plasmon down from 5.2 to 3.5 eV, together with a strong quenching of the oscillator strength. ## II The TDLDA method The details of our implementation of the TDLDA are given in ref. . The calculation is performed in real time, which has the advantage that the entire response is calculated at once, and only a Fourier transformation is needed to extract the strengths of individual excitations. The Hamiltonian we employ is one that has been frequently used in static calculations. The electron-electron interaction is treated in the local density approximation following the prescription of ref. . The ionic potential is treated in the pseudopotential approximation keeping only the $`d`$- and $`s`$-electrons active. The $`l`$-dependent pseudopotentials were constructed according to the method of Troullier and Martins . We showed in ref. that for the atom the resulting pseudopotential is adequate to describe the electromagnetic response well into the continuum, even though the sum rules become ambiguous. We make one further approximation in the Hamiltonian, treating the nonlocality in the pseudopotential by the method of Kleinman and Bylander. The approximation takes one angular momentum state as given by the radial pseudopotential and corrects the others by adding a separable function.
A potential problem of this method is that there may be spurious deeply bound states in some of the partial waves. We take the local wave to be the $`s`$-wave, which avoids the difficulty. The critical numerical parameters in the implementation of the TDLDA on a coordinate-space mesh are the mesh spacing $`\mathrm{\Delta }x`$, the shape and size of the volume in which the electron wave functions are calculated, and the step size $`\mathrm{\Delta }t`$ of the time integration. We use a mesh size $`\mathrm{\Delta }x`$ = 0.25 Å, which is justified in the next section, where we examine atomic properties. For the volume geometry we take a sphere of radius 6 Å. From experience with the jellium model, the collective resonance frequency of Ag₈ should be accurate to 0.1 eV with this box size, and the smaller clusters will be described even more accurately. The last numerical parameter $`\mathrm{\Delta }t`$ must be small compared to the inverse energy scale of the Hamiltonian, which in turn is controlled by $`\mathrm{\Delta }x`$ in our method. We find that the integration is stable and accurate taking $`\mathrm{\Delta }t=0.001\text{ eV}^{-1}`$. The equations are integrated to a total time of $`T=50`$ $`\hbar /\mathrm{eV}`$. The inverse of this time corresponds to the energy resolution of the theoretical spectrum. ## III Atomic properties Before presenting the results on silver clusters, we examine the accuracy of our three-dimensional coordinate-space numerical method for atomic properties. We have considered the TDLDA treatment of IB atoms in an earlier publication. There we used a spherical basis, and the emphasis was on the validity of the pseudopotential approximation for calculating the response and its sum rule. Here we use those results to test the implementation of the Kohn-Sham equations on a three-dimensional mesh, which of course is much less efficient than the spherical representation for atomic systems. Comparison of the two methods is given in Table I. We find, with a mesh of 0.25 Å, that orbital energies are reproduced to an accuracy of about 0.1 eV. The ground state configuration of the Ag atom is $`d^{10}s^1`$ with Kohn-Sham orbital energies of the $`d`$-, $`s`$-, and $`p`$-orbitals having values -7.8, -4.6 and -0.7 eV, respectively. In the 3-d mesh, the lack of spherical symmetry also splits the $`d`$-orbitals by about 0.1 eV. The intrinsic limitations of the TDLDA on physical quantities are certainly beyond the 0.1 eV accuracy level, so we judged the 0.25 Å mesh adequate for our purposes. We also show in the table some physical quantities of interest: the ionization potential, the energy of the lowest excited state, and its oscillator strength. Although it is tempting to interpret the Kohn-Sham eigenvalues as orbital energies, it is well known that the ionization potential is not well reproduced by the highest electron’s eigenvalue. In our case here, the negative of the $`s`$-orbital energy, 4.6 eV, is quite far from the empirical 7.5 eV ionization potential. However, the LDA does much better when the total energies of the Ag atom and the Ag⁺ ion are compared. We quote this number as ‘I.P.’ in the table. The next quantity we examine is the excitation energy of the lowest excited state. The state has a predominant $`d^{10}p^1`$ character; the difference in orbital energies is quoted as ‘$`e_p-e_s`$’ in the table. The physical excitation energy including interaction effects is shown on the line $`E_{p\overline{s}}`$.
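Before turning to clusters, here is a quick numerical check of the free-electron Mie estimate quoted in the introduction; the $`s`$-electron Wigner-Seitz radius $`r_s\approx 3.02`$ bohr for bulk silver is our assumed input:

```python
HARTREE_EV = 27.2114
r_s = 3.02   # assumed s-electron Wigner-Seitz radius of bulk silver [bohr]

# omega_M = sqrt(4*pi*e^2*n / 3m); in Hartree atomic units 4*pi*n/3 = 1/r_s^3,
# so omega_M = r_s**(-3/2) hartree.
omega_M = r_s**-1.5 * HARTREE_EV
print(f"omega_M = {omega_M:.2f} eV")   # ~5.2 eV, as quoted in the introduction
```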
The theoretical values are obtained from the peak position in the Fourier transform of the TDLDA response. We see that the three-dimensional mesh agrees to 0.1 eV with the spherical basis calculation on these energies. However, the experimental excitation energy is lower than theory by about 10%; this number sets the scale of the intrinsic limitations of the TDLDA. In the last line, we display the oscillator strength associated with the transition between the ground and excited state. Here there is some disagreement between the spherical results and the three-dimensional results. This might be due to the different treatment of the pseudopotential in the two cases. The three-dimensional treatment used the Kleinman-Bylander method to treat the nonlocality of the pseudopotential, while in the spherical basis the $`l`$-dependent nonlocality is treated exactly. In any case, the three-dimensional result is within 10% of the empirical value. We also include in the table the energies associated with the excitation of a $`d`$-electron to the $`p`$-orbital. ## IV Silver dimer and trimer We next examine the Ag₂ dimer. We take the nuclear separation distance at 2.612 Å from the calculations of ref. . The response averaged over directions is shown in Fig. 2. The $`sp`$ transition is split into two modes, a longitudinal mode at 3.2 eV and a transverse mode at 4.9 eV. Experimentally, the dimer has only been studied in matrices, which are subject to environmental shifts of the order of tenths of an electron volt. Absorption peaks have been identified at 3.0 eV and 4.7 eV which very likely correspond to the two modes found theoretically. In emission, these states are shifted somewhat lower, to 2.8 and 4.5 eV. These numbers are probably a better measure of the free cluster energies, judging by the behavior of silver atoms in a matrix. The lower state is strongly coupled to vibrations in the data of ref. , supporting the interpretation of the mode as a longitudinal excitation. In summary, the TDLDA reproduces the splitting of the longitudinal and transverse modes quite accurately, but the average frequency of the modes is probably too high by the same amount that we found for the atom. We conclude that the interaction physics between the two atoms is reasonably described by the TDLDA. The picture of two nearly independent states on the two atoms is qualitatively valid also in considering the oscillator strengths of the transitions. The theoretical ratio of strengths for the two states is very close to 2:1, which is expected for the two transverse modes compared to the single longitudinal mode. However, the total strength of the sharp states, 1.05 electrons, is only 80% of the theoretical strength for separated atoms. Thus a significant fraction of the strength goes to a higher spectral region. We shall see that much of the shift is to the region 5 eV to 6 eV, where experimental data are still available. The silver trimer is predicted to have the shape of an isosceles triangle with nearly equal sides. There are two nearly degenerate geometries (corresponding to the E symmetry of the equilateral triangle), with the ²B state in an obtuse triangle predicted to be lowest in most calculations. Our calculation uses the obtuse geometry (geometry I) of ref. . The absorption spectrum of Ag₃ is shown in Fig. 3. We see that the absorption in the 3-5 eV region is spread out among several states. The more complex spectrum may be due to the low ionization potential of Ag₃.
According to the Kohn-Sham eigenvalue, the binding of the highest occupied orbital is 3.5 eV, permitting Rydberg states in this region. There is a quantum chemistry calculation of the spectral properties of Ag₃ excitations in the visible region of the spectrum. This calculation predicted an integrated strength below 3.5 eV of $`f_E\simeq 0.6`$, neglecting the screening of the $`d`$-electrons. In comparison we find for the same integration limit $`f_E=0.1`$, a factor of 6 smaller. ## V Ag₈ and Ag₉⁺ We shall now see that collective features of the response become prominent going to 8-electron clusters. In the alkali metals, clusters with 8 valence electrons have a sharp collective resonance associated with a nearly spherical cluster shape and filled shells of the delocalized orbitals. These systems have been modeled with the spherical jellium approximation, and the gross features of the collective resonance are reproduced. The IB metals are quite different from the IA alkali metals, however, in that the occupied $`d`$-orbitals are close to the Fermi surface and strongly screen the $`s`$-electrons. On the experimental side, the studies of Ag₈ and Ag₉⁺ seem to show that the oscillator strength of the $`s`$-electrons is not seriously quenched by the $`d`$-polarizability. An important motivation of our study then is to see whether the simple arguments made for a strong $`d`$-screening are in fact borne out by the theory treating the $`d`$-electrons on an equal footing. There are two competing geometries in eight-atom clusters of $`s`$-electron elements, having $`T_d`$ and $`D_{2d}`$ symmetry. We have calculated the response of both geometries, taking the bond lengths from ref. . The optical absorption strength function is shown in Fig. 4. Also shown with arrows are the two experimental absorption peaks seen in ref. . The peak locations agree very well with the theoretical spectrum based on the $`T_d`$ geometry. But one should remember that the matrix spectrum is likely to be shifted by a few tenths of an eV with respect to the free cluster spectrum. The experimental absorption strength is considerably higher for the upper of the two peaks in the 3-4 eV region, which also agrees with theory. The $`D_{2d}`$ geometry has a smaller splitting between the two peaks and does not agree as well with the data. The theory thus favors the $`T_d`$ geometry for the ground state. This is not the predicted ground state in ref. , but since the calculated energy difference between geometries is only 0.08 eV, the theoretical ordering is uncertain. For the Ag₉⁺ cluster, we used the geometry (I) of ref. , the predicted ground state of the cluster in their most detailed calculation. The comparison between theory and experiment is shown in Fig. 6. The peak at 4 eV is reproduced in position; its theoretical width is somewhat broadened due to the lower geometric symmetry of the 9-atom cluster. We next turn to the integrated absorption strength. The strength function $`f_E`$ is shown in Fig. 5 for Ag₈ in the $`T_d`$ and $`D_{2d}`$ geometries; the results for Ag₉⁺ are shown in Fig. 7. The sharp modes below 5 eV are predicted to have only 25% of the $`s`$-electron sum rule. This is slightly higher than the Mie theory prediction, which perhaps can be attributed to the imperfect screening in a small cluster.
The same physics is responsible for the blue shift of the excitation in small clusters. Although the sharp states are strongly screened, the integrated strength below 6 eV is 3.9 electrons, about 50% of the $`s`$-electron sum. The integrated strength data are compared with theory in Fig. 8, showing the trend with increasing cluster size. The integrated strength per $`s`$-electron decreases moderately with increasing cluster size; no trend is discernible in the experimental data. Beyond N=1, the experimentally measured strength is substantially larger than theory predicts. The data of ref. are about a factor of two larger than theory, as may also be seen in Fig. 7. However, it is difficult to assess the errors in that measurement, and the data of ref. are not in serious disagreement in view of their assigned error bars. From a theoretical point of view, it is difficult to avoid the $`d`$-electron screening and the resulting strong reduction of the strength. We present in the next section a semianalytic argument on this point. ## VI Interpretation In this section we will analyze the $`d`$-electron contribution to the TDLDA response from an atomic point of view. In the TDLDA, the bound electrons can be treated separately because they only interact through the common mean field. In particular, there are no Pauli exclusion corrections when combining $`sp`$ and $`dp`$ transition strength. To describe the response from an atomic point of view, it is convenient to express it in terms of the dynamic polarizability $`\alpha (\omega )`$. We remind the reader that it is related to the strength function $`S(E)=df_E/dE`$ by $$\alpha (\omega )=\frac{e^2\hbar ^2}{m}\underset{0}{\overset{\infty }{\int }}𝑑E\frac{S(E)}{E^2-\omega ^2}.$$ (2) The data in Table I may be used to estimate the $`dp`$ polarizability function, but this would not include higher-energy contributions and the continuum $`df`$ transitions. Instead, we recomputed the atomic silver response freezing the $`s`$-electron. That procedure yielded a polarizability function with values $`\alpha (0\mathrm{eV})=1.8`$ Å³ and $`\alpha (4\mathrm{eV})=2.1`$ Å³. We then fit this to a convenient single-state resonance form, $$\alpha _d=\frac{e^2\hbar ^2}{m}\frac{f_d}{E_d^2-\omega ^2},$$ (3) with fit parameters $`f_d=1.89`$ and $`E_d=10.7`$ eV, from which we can analytically calculate the effects on the $`s`$-electron response. Except for minor interaction terms the TDLDA response is equivalent to the RPA, which we apply using the response formalism as in App. A of ref. . Note that the dipole response function $`\mathrm{\Pi }`$ is related to the polarizability $`\alpha `$ by $`\mathrm{\Pi }=\alpha /e^2`$. Alternatively, the same physics can be derived using dielectric functions, as was done in ref. . The formulations are equivalent provided the dielectric function and the polarizability satisfy the Clausius-Mossotti relation. In the dipole response formalism, it is convenient to represent the uncoupled response function as a $`2\times 2`$ matrix, separating the free-electron and the polarizability contributions. The RPA response function is written as $$\mathrm{\Pi }^{RPA}=(1,1)(1+𝚷^0𝐕)^{-1}𝚷^0(1,1)^t$$ (4) where $`𝚷^0`$ and $`𝐕`$ are the following $`2\times 2`$ matrices: $$𝚷^0=\left(\begin{array}{cc}\mathrm{\Pi }_{free}^0& 0\\ 0& N\alpha _d/e^2\end{array}\right)$$ (5) $$𝐕=\frac{e^2}{R^3}\left(\begin{array}{cc}1& 1\\ 1& 0\end{array}\right)$$ (6) Here $`N`$ is the number of atoms in the cluster, and $`R`$ is the radius of the cluster.
The form for $`𝚷^0`$ is obvious, with the free-electron response given by $`\mathrm{\Pi }_{free}^0=-\hbar ^2N/m\omega ^2`$. The $`𝐕`$ is more subtle. The Coulomb interaction, represented by the long-range dipole-dipole coupling $`e^2\vec{r}_1\vec{r}_2/R^3`$, acts among the free electrons and between the free electrons and the polarization charge, but not within the polarization charges: separated dipoles have zero interaction after averaging over angular orientations. The algebra in Eq. (4) is easily carried out to give $$\mathrm{\Pi }^{RPA}=\frac{N\hbar ^2/m\left(1-\alpha _d/r_s^3(1+\omega ^2/\omega _M^2)\right)}{-\omega ^2+\omega _M^2(1-\alpha _d/r_s^3)}$$ (7) where $`r_s=(3V/4\pi N)^{1/3}`$ and $`\omega _M`$ is the free-electron resonance frequency defined in the introduction. The pole position of the response gives the frequency with the polarization, $$\omega _M^{\prime }=\sqrt{1-\alpha _d/r_s^3}\omega _M$$ (8) Taking $`r_s=3.09`$ and $`\alpha _d`$ from the atomic calculation, we find the resonance shifted from 5.18 to 3.6 eV, i.e., exactly the value for the empirical Mie theory. The strength is calculated from the energy times the residue of the pole, which yields $$f=N\left(1-\frac{\alpha _d}{r_s^3}\right)^2$$ (9) Numerically, Eq. (9) gives a factor of 4 reduction in the strength, consistent with the full TDLDA calculation for Ag₈ with the $`s+d`$ valence space. We thus conclude that the $`d`$-polarization effects can be quite simply understood in atomic terms. ## VII Acknowledgment We acknowledge very helpful discussions with P.G. Reinhard, particularly in formulating Sect. 4. This work is supported in part by the Department of Energy under Grant DE-FG-06-90ER40561, and by the Grant-in-Aid for Scientific Research from the Ministry of Education, Science and Culture (Japan), No. 09740236. Numerical calculations were performed on the FACOM VPP-500 supercomputer at the Institute for Solid State Physics, University of Tokyo, and on the NEC SX4 supercomputer at the Research Center for Nuclear Physics (RCNP), Osaka University.
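As a numerical check of the interpretation in Sec. VI, the sketch below first verifies that the fit of Eq. (3) reproduces the quoted static and 4 eV polarizabilities, and then iterates Eq. (8) to self-consistency, evaluating $`\alpha _d`$ at the shifted frequency. With $`r_s=3.09`$ bohr it yields a resonance near 3.7-3.8 eV and a strength reduction of roughly a factor of 4 via Eq. (9); the small difference from the quoted 3.6 eV presumably reflects rounding of the inputs:

```python
import numpy as np

BOHR_A = 0.529177
E2H2_OVER_M = 14.40 * 7.62   # e^2 [eV*A] times hbar^2/m [eV*A^2] ~ 109.7 eV^2*A^3
f_d, E_d = 1.89, 10.7        # single-resonance fit parameters from Eq. (3)
omega_M = 5.18               # free s-electron Mie frequency [eV]
rs3 = (3.09 * BOHR_A)**3     # r_s = 3.09 bohr, converted to A^3

def alpha_d(w):
    """Single-resonance polarizability of Eq. (3), in A^3 (w in eV)."""
    return E2H2_OVER_M * f_d / (E_d**2 - w**2)

print(alpha_d(0.0), alpha_d(4.0))   # ~1.81 and ~2.10 A^3: reproduces 1.8 and 2.1

w = omega_M
for _ in range(100):                # fixed-point iteration of Eq. (8)
    w = omega_M * np.sqrt(1.0 - alpha_d(w) / rs3)

print(f"shifted resonance: {w:.2f} eV")                        # ~3.8 eV
print(f"strength ratio f/N: {(1 - alpha_d(w)/rs3)**2:.2f}")    # ~0.28: factor ~4 quench
```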
The concept of finite-size scaling plays a fundamental role in the theory of finite-size effects near phase transitions [1-4] and is indispensable for the analysis of numerical studies of critical phenomena in small systems . Consider, for example, the susceptibility $`\chi (T,L)`$ of a ferromagnetic system for $`T\ge T_c`$ in a finite geometry of size $`L`$. Finite-size scaling is expected to be valid for large $`L`$ and large correlation length $`\xi \propto (T-T_c)^{-\nu }`$, with a scaling form $`\chi (T,L)=L^{\gamma /\nu }f(L/\xi )`$ where $`\gamma `$ and $`\nu `$ are bulk critical exponents and where the scaling function $`f`$ depends on the geometry and boundary conditions but not on any other length scale. In this paper we shall consider only periodic boundary conditions and cubic geometry, $`V=L^d`$. Finite-size scaling functions have been calculated within the $`O(n)`$ symmetric $`\phi ^4`$ field theory in $`2<d<4`$ dimensions [6-9] and quantitative agreement with Monte-Carlo (MC) data has been found [8-10]. It is the purpose of the present Rapid Note to call attention to a remarkable feature that has not been explained by the field-theoretic calculations. This is the exponential (rather than power-law) approach $$\mathrm{\Delta }\chi \equiv \chi (T,\infty )-\chi (T,L)\sim \mathrm{exp}[-\mathrm{\Gamma }(T)L]$$ (1) towards the asymptotic bulk critical behavior $`\chi (T,\infty )\propto \xi ^{\gamma /\nu }`$ above $`T_c`$, as has been found in several exactly solvable model systems [1,2,11-14]. By contrast, field theory [6-9] implies a non-exponential behavior $`\mathrm{\Delta }\chi \sim O((L/\xi )^{-d})`$ in one-loop order above $`T_c`$ for $`d<4`$. We are not aware of numerical tests of this property, e.g., by MC simulations. We shall analyze this problem on the basis of the exact result for $`\chi `$ in the large-$`n`$ limit of the $`\phi ^4`$ model. In particular we shall study the effect of a finite cutoff $`\mathrm{\Lambda }`$ and of a finite lattice spacing in the field-theoretic and lattice versions of the $`\phi ^4`$ theory. We find that field theory at finite cutoff predicts the leading nonuniversal deviation $`\mathrm{\Delta }\chi \sim (\mathrm{\Lambda }L)^{-2}`$ from bulk critical behavior that violates finite-size scaling for $`d>2`$ and differs from Eq. (1). This is in contrast to the general belief [3-10, 15-23] (and corrects our recent statement ) that the finite-size scaling functions of the $`\phi ^4`$ field theory are universal for $`2<d<4`$ (for cubic geometry and periodic boundary conditions). We shall show that the $`\phi ^4`$ lattice theory with a finite lattice spacing accounts for the exponential size-dependence of Eq. (1). We shall argue that a loop expansion destroys this exponential form and that a non-perturbative treatment of the $`\phi ^4`$ theory is required. The $`\phi ^4`$ field theory is based on the statistical weight $`\mathrm{exp}(-H)`$ with the Landau-Ginzburg-Wilson continuum Hamiltonian $$H=\underset{V}{\int }d^dx\left[\frac{r_0}{2}\phi _0^2+\frac{1}{2}(\nabla \phi _0)^2+u_0(\phi _0^2)^2\right],$$ (2) with $`r_0=r_{0c}+a_0t,t=(T-T_c)/T_c`$ where the $`n`$-component field $`\phi _0(x)`$ has spatial variations on length scales larger than a microscopic length $`\stackrel{~}{a}`$ corresponding to a finite cutoff $`\mathrm{\Lambda }=\pi /\stackrel{~}{a}`$.
Since we wish to perform a convincing comparison with the finite-size effects of lattice systems which have a finite lattice constant $`\stackrel{~}{a}`$ we must keep $`\mathrm{\Lambda }`$ finite even if a well defined limit $`\mathrm{\Lambda }\to \infty `$ can formally be performed at fixed $`r_0-r_{0c}`$ for $`2<d<4`$ . It is well known that this limit is justified for bulk systems where finite-cutoff effects are only subleading corrections to the leading bulk critical temperature dependence. Here we raise the question what kind of finite-size effects exist at finite $`\mathrm{\Lambda }`$. This question was left unanswered in the renormalization-group arguments of Brézin and in the explicit field-theoretic calculations of Refs. [6-10, 17-22] which were performed only in the limit $`\mathrm{\Lambda }\to \infty `$ and where it was tacitly assumed that finite-cutoff effects are negligible for $`d<4`$. We shall prove for $`d>2`$ and $`n\to \infty `$ that this assumption is not generally justified for the field-theoretic $`\phi ^4`$ model for finite systems. We shall first examine the susceptibility of the field-theoretic model $$\chi =(1/n)\underset{V}{\int }d^dx<\phi _0(x)\phi _0(0)>$$ (3) in the large-$`n`$ limit at fixed $`u_0n`$. For cubic geometry, $`V=L^d`$, the exact result for $`d>2`$ is determined by the implicit equation $$\chi ^{-1}=r_0-r_{0c}-4u_0n\stackrel{~}{\mathrm{\Delta }}_1+4u_0n\left\{\chi L^{-d}-\chi ^{-1}\underset{𝐤}{\int }\left[𝐤^2(\chi ^{-1}+𝐤^2)\right]^{-1}\right\},$$ (4) $$\stackrel{~}{\mathrm{\Delta }}_1=\underset{𝐤}{\int }(\chi ^{-1}+𝐤^2)^{-1}-L^{-d}\underset{𝐤\ne \mathrm{𝟎}}{}(\chi ^{-1}+𝐤^2)^{-1},$$ (5) where $`r_{0c}=-4u_0n\underset{𝐤}{\int }𝐤^{-2}`$. Here $`\underset{𝐤}{\int }`$ stands for $`(2\pi )^{-d}\int d^dk`$ with $`|k_j|\le \mathrm{\Lambda }`$, and the summation $`\underset{𝐤\ne \mathrm{𝟎}}{}`$ runs over discrete $`𝐤`$ vectors with components $`k_j=2\pi m_j/L,m_j=\pm 1,\pm 2,\mathrm{},j=1,2,\mathrm{},d,`$ in the range $`-\mathrm{\Lambda }\le k_j<\mathrm{\Lambda }`$. For large $`L`$ at finite $`\mathrm{\Lambda }`$ we have found for $`d>2`$ $$\stackrel{~}{\mathrm{\Delta }}_1=I_1(\chi ^{-1}L^2)L^{2-d}+\mathrm{\Lambda }^{d-2}\left\{a_1(d,\chi ^{-1}\mathrm{\Lambda }^{-2})(\mathrm{\Lambda }L)^{-2}+O\left[(\mathrm{\Lambda }L)^{-4}\right]\right\},$$ (6) $$I_1(x)=-(2\pi )^{-2}\underset{0}{\overset{\infty }{\int }}𝑑ye^{-xy/4\pi ^2}\left[K(y)^d-(\pi /y)^{d/2}-1\right],$$ (7) $$a_1(d,\chi ^{-1}\mathrm{\Lambda }^{-2})=\frac{d}{3(2\pi )^{d-2}}\underset{0}{\overset{\infty }{\int }}𝑑xx\left[\underset{-1}{\overset{1}{\int }}𝑑ye^{-y^2x}\right]^{d-1}\mathrm{exp}\left[-(1+\chi ^{-1}\mathrm{\Lambda }^{-2})x\right]$$ (8) with $`K(y)=\underset{m=-\infty }{\overset{\infty }{}}e^{-ym^2}`$. Near $`T_c`$, i.e., for small $`\chi ^{-1}\mathrm{\Lambda }^{-2}`$ at finite $`\mathrm{\Lambda }`$, the bulk integral in Eq. (4) yields for $`2<d<4`$ $$\underset{𝐤}{\int }\left[𝐤^2(\chi ^{-1}+𝐤^2)\right]^{-1}=A_d\chi ^{ϵ/2}ϵ^{-1}\left\{1+O\left[(\chi ^{-1}\mathrm{\Lambda }^{-2})^{ϵ/2}\right]\right\}$$ (9) with $`ϵ=4-d`$ and $`A_d=2^{2-d}\pi ^{-d/2}(d-2)^{-1}\mathrm{\Gamma }(3-d/2).`$ This leads to the large-$`L`$ and small-$`t`$ representation at finite $`\mathrm{\Lambda }`$ $$\chi =L^{\gamma /\nu }P(t(L/\xi _0)^{1/\nu },\mathrm{\Lambda }L)$$ (10) where the function $`P`$ is determined implicitly by $`P^{-1/\gamma }=t(L/\xi _0)^{1/\nu }+ϵA_d^{-1}\left[P-I_1(P^{-1})-a_1(d,0)(\mathrm{\Lambda }L)^{d-4}\right],`$ (11) apart from $`O\left[(\mathrm{\Lambda }L)^{d-6}\right]`$ corrections, with the critical exponents $`\nu =(d-2)^{-1}`$ and $`\gamma =2/(d-2)`$, and with the bulk correlation-length amplitude $`\xi _0`$ .
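The function $`I_1(x)`$ defined in Eq. (7) is straightforward to evaluate numerically. The following sketch (Python with SciPy) uses the Jacobi transformation $`K(y)=\sqrt{\pi /y}K(\pi ^2/y)`$ for numerical stability at small $`y`$; the overall sign follows our reconstruction of Eq. (7). It also exhibits the limiting behavior $`I_1(P^{-1})\to P`$ quoted in Eq. (12) below:

```python
import numpy as np
from scipy.integrate import quad

d = 3  # spatial dimension

def K(y):
    """K(y) = sum_m exp(-y m^2); Jacobi transform used for small arguments."""
    if y < 1.0:
        return np.sqrt(np.pi / y) * K(np.pi**2 / y)
    m = np.arange(-30, 31)
    return float(np.exp(-y * m * m).sum())

def bracket(y):
    """K(y)^d - (pi/y)^(d/2) - 1, evaluated stably for small y."""
    if y < 0.5:
        corr = K(np.pi**2 / y)**d - 1.0   # exponentially small as y -> 0
        return (np.pi / y)**(d / 2) * corr - 1.0 if corr else -1.0
    return K(y)**d - (np.pi / y)**(d / 2) - 1.0

def I1(x):
    """Eq. (7) as reconstructed above."""
    val, _ = quad(lambda y: np.exp(-x * y / (4 * np.pi**2)) * bracket(y),
                  0.0, np.inf, limit=400)
    return -val / (2 * np.pi)**2

for P in (0.05, 0.02, 0.01):
    print(P, I1(1.0 / P))   # I1(1/P) -> P, with an exp(-P**-0.5) deviation
```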
We note that the term $`I_1(P^{-1})`$ is a $`𝐤\ne \mathrm{𝟎}`$ contribution whereas the term $`P`$ on the r.h.s. of Eq. (11) comes from the $`𝐤=\mathrm{𝟎}`$ mode. At first sight, the $`\mathrm{\Lambda }`$ dependent term in Eq. (11) seems to be a subleading correction and appears to be negligible for large $`L`$. This is asymptotically correct as long as $`P-I_1(P^{-1})>0`$ does not vanish in the large-$`L`$ limit. This is indeed the case for $`t(L/\xi _0)^{1/\nu }<\infty `$, i.e., as long as the critical point is approached at finite ratio $`L/\xi `$ . This corresponds to paths in the $`L^{-1}`$–$`\xi ^{-1}`$ plane (Fig. 1) that approach the origin $`L^{-1}=0,\xi ^{-1}=0`$ along curves with a non-vanishing asymptotic slope $`\xi /L>0`$. Along these paths the function $`P`$ remains finite and hence $`P-I_1(P^{-1})`$ remains non-zero (positive), which was tacitly assumed previously where the $`\mathrm{\Lambda }`$-dependent terms in Eq. (11) were dropped (see Eq. (62) of Ref. ). There exist, however, significant paths in the $`L^{-1}`$–$`\xi ^{-1}`$ plane where $`t(L/\xi _0)^{1/\nu }`$ becomes arbitrarily large. This includes paths at constant $`t>0`$ or $`\xi <\infty `$ with increasing $`L`$ corresponding to an approach towards the asymptotic bulk value $`\chi _b`$ (arrow in Fig. 1). We emphasize that these paths lie entirely in the asymptotic region $`\xi \gg \mathrm{\Lambda }^{-1},L\gg \mathrm{\Lambda }^{-1},\chi _b=\xi ^2\gg \mathrm{\Lambda }^{-2}`$. In such limits the quantity $`P\sim (\xi /L)^{\gamma /\nu }`$ approaches zero. As a remarkable feature we find that in Eq. (11) the function $`I_1(P^{-1})`$ (which originates from the $`𝐤\ne \mathrm{𝟎}`$ modes) completely cancels the term $`P`$ (which comes from the $`𝐤=\mathrm{𝟎}`$ mode) according to the small-$`P`$ representation $$I_1(P^{-1})=P+O\left[P^{(3-d)/4}\mathrm{exp}(-P^{-1/2})\right].$$ (12) In other words, the higher-mode contribution $`I_1(P^{-1})`$ does not represent a ”correction” to the lowest-mode term $`P`$ but becomes as large as the lowest-mode term itself. This result is quite plausible because above $`T_c`$, at fixed temperature $`T-T_c>0`$, the lowest mode does not play a significant role and does not become dangerous in the bulk limit, unlike the case $`T\approx T_c`$ where the separation of the lowest mode is an important concept . The crucial consequence of Eq. (12) is that the term $`P-I_1(P^{-1})`$ in Eq. (11) is reduced to the exponentially small contribution $`\mathrm{exp}(-P^{-1/2})\sim \mathrm{exp}(-L/\xi )`$. This implies that the leading finite-size deviation from bulk critical behavior is now governed by the cutoff-dependent power-law term $`(\mathrm{\Lambda }L)^{d-4}`$ in Eq. (11) which was dropped in Ref. . This leads to the explicitly $`\mathrm{\Lambda }`$ dependent result, at finite $`t\ll 1`$ and finite $`\mathrm{\Lambda }L\gg 1`$, $$P(t(L/\xi _0)^{1/\nu },\mathrm{\Lambda }L)=\left[t(L/\xi _0)^{1/\nu }-ϵA_d^{-1}a_1(d,0)(\mathrm{\Lambda }L)^{d-4}\right]^{-\gamma },$$ (13) $$\chi =\chi _b\left[1+\frac{ϵ2^{d-1}\pi ^{d/2}}{\mathrm{\Gamma }(3-d/2)}a_1(d,0)(\mathrm{\Lambda }\xi )^{d-2}(\mathrm{\Lambda }L)^{-2}\right],$$ (14) apart from $`O[(\mathrm{\Lambda }L)^{-4},e^{-L/\xi }]`$ corrections. Eq. (14) is valid for $`2<d<4`$ and is applicable to the region below the dotted line in Fig. 1. This line is representative of a smooth crossover region and may be defined by requiring that the cutoff dependent term in Eq. (11) is as large as the term $`P-I_1(P^{-1})`$.
In the latter term, $`P`$ can be approximated by $`(L/\xi )^{-\gamma /\nu }`$, i.e., the dotted line is determined by $$(L/\xi )^{-\gamma /\nu }-I_1\left((L/\xi )^{\gamma /\nu }\right)=a_1(d,0)(\mathrm{\Lambda }L)^{d-4}.$$ (15) Eq. (15) represents a line in a crossover region separating the scaling region (where cutoff effects can be considered as small corrections) from the nonscaling region (where cutoff effects are dominant) close to the bulk limit (a numerical estimate of this line is given at the end of this Note). The power law $`(\mathrm{\Lambda }L)^{-2}`$ in Eq. (14) disagrees with the exact result for the spherical model on a lattice where an exponential $`L`$ dependence, analogous to Eq. (1), has been found for general $`d>2`$. This proves that $`\phi ^4`$ field theory at finite cutoff does not correctly describe the leading finite-size deviations from bulk critical behavior of spin systems on a lattice above $`T_c`$, not only for $`d>4`$, as stated in Ref. , but more generally for $`d>2`$, at least in the large-$`n`$ limit. Furthermore, the result in Eq. (14) violates finite-size scaling in the asymptotic region where $`L^{-\gamma /\nu }\chi `$ should only depend on $`L/\xi `$, not on $`\mathrm{\Lambda }L`$. Thus $`\phi ^4`$ field theory at finite cutoff is inconsistent with usual finite-size scaling not only for $`d>4`$, but more generally for $`d>2`$, at least in the large-$`n`$ limit. This is not in conflict with the renormalization-group arguments of Brézin who considered only the limit of infinite cutoff in which the non-scaling region (Fig. 1) shrinks to zero. The existence of the non-scaling region for the field-theoretic $`\phi ^4`$ model below four dimensions has been overlooked in Sect. 4.1 of our recent work . In the following we briefly analyze the corresponding properties in the $`\phi ^4`$ lattice model for $`d>2`$. The $`\phi ^4`$ lattice Hamiltonian reads $$\widehat{H}(\phi _i)=\stackrel{~}{a}^d\left\{\underset{i}{}\left[\frac{\widehat{r}_0}{2}\phi _i^2+\widehat{u}_0(\phi _i^2)^2\right]+\underset{ij}{}\frac{1}{2\stackrel{~}{a}^2}J_{ij}(\phi _i-\phi _j)^2\right\}$$ (16) where $`\stackrel{~}{a}`$ is the lattice constant. As noted recently , the susceptibility $`\widehat{\chi }`$ of the lattice model is obtained from $`\chi `$ of the field-theoretic model by the replacement $`𝐤^2\to \widehat{J}_𝐤`$ in the sums and integrals in Eqs. (4) and (5), where $$\widehat{J}_𝐤=\frac{2}{\stackrel{~}{a}^2}\left[J(0)-J(𝐤)\right]=J_0𝐤^2+O(k_i^2k_j^2),$$ (17) $$J(𝐤)=(\stackrel{~}{a}/L)^d\underset{ij}{}J_{ij}e^{i𝐤(𝐱_i-𝐱_j)},$$ (18) $$J_0=\frac{1}{d}(\stackrel{~}{a}/L)^d\underset{ij}{}(J_{ij}/\stackrel{~}{a}^2)(𝐱_i-𝐱_j)^2.$$ (19) The crucial difference between the field-theoretic and lattice versions of the $`\phi ^4`$ model comes from the large-$`L`$ behavior of the lattice version of the quantity $`\stackrel{~}{\mathrm{\Delta }}_1`$ in Eq. (5). Instead of Eq. (6) we now obtain for $`L\gg \stackrel{~}{a}`$ $$\underset{𝐤}{\int }(\widehat{\chi }^{-1}+\widehat{J}_𝐤)^{-1}-L^{-d}\underset{𝐤\ne \mathrm{𝟎}}{}(\widehat{\chi }^{-1}+\widehat{J}_𝐤)^{-1}=J_0^{-1}I_1(J_0^{-1}\widehat{\chi }^{-1}L^2)L^{2-d},$$ (20) apart from more rapidly vanishing terms. We have found that such terms are only exponential (rather than power-law) corrections in the regime $`L\gg \xi `$. This implies that, for the lattice model in the regime $`L\gg \xi `$, Eq. (11) is reduced to $$\widehat{P}^{-1/\gamma }=t(L/\widehat{\xi }_0)^{1/\nu }+ϵA_d^{-1}\left[\widehat{P}-I_1(\widehat{P}^{-1})\right]$$ (21) without power-law corrections. This corresponds to Eq. (77) of Ref. .
Here $`\widehat{\xi }_0`$ is the bulk correlation-length amplitude of the lattice model and $`\widehat{P}=\widehat{\chi }L^{-\gamma /\nu }J_0`$ . Because of the exponential behavior of $`\widehat{P}-I_1(\widehat{P}^{-1})`$ according to Eq. (12) and because of the exponential corrections to Eq. (20) we see that the lattice $`\phi ^4`$ model indeed predicts an exponential size dependence for $`\mathrm{\Delta }\widehat{\chi }`$. The detailed form of the ($`L`$-dependent) amplitude of this exponential size-dependence is nontrivial and will be analyzed elsewhere. In the following we extend our analysis to the case $`n=1`$ of the field-theoretic model for $`2<d<4`$. The bare perturbative expressions for the effective parameters given in Eqs. (68) - (71) of Ref. for the field-theoretic $`\phi ^4`$ model are valid for general $`d>2`$. Application to the critical region for $`2<d<4`$ requires renormalizing these expressions by the $`L`$-independent $`Z`$-factors of the bulk theory. We recall that the bulk renormalizations can well be performed at finite $`\mathrm{\Lambda }`$ . This does not eliminate the cutoff dependent term $`(\mathrm{\Lambda }L)^{-2}`$ in $`r_0^{eff}`$ for the field-theoretic model and implies that $`\mathrm{\Delta }\chi `$ will exhibit the leading size dependence $`(\mathrm{\Lambda }L)^{-2}`$ above $`T_c`$ also for $`n=1`$, $`2<d<4`$. A grave consequence of these results is that universal finite-size scaling near the critical point of a finite system with periodic boundary conditions is less generally valid than believed previously [1-24]. Finite-size scaling is not valid in the region below the dotted line of the $`L^{-1}`$–$`\xi ^{-1}`$ plane (Fig. 1), at least in the large-$`n`$ limit, for the field-theoretic $`\phi ^4`$ model at finite cutoff for $`d>2`$. This region is of significant interest as it describes the leading finite-size deviations from asymptotic bulk critical behavior. The violation of finite-size scaling in this region originates from the $`(\nabla \phi )^2`$ term in Eq. (2) that approximates the interaction term $`J_{ij}(\phi _i-\phi _j)^2`$ of the $`\phi ^4`$ lattice Hamiltonian, Eq. (16). The serious defect of this approximation at finite $`\mathrm{\Lambda }`$ becomes more and more significant as $`L/\xi \gg 1`$ increases (arrow in Fig. 1) whereas it is negligible for $`\xi /L>1`$. This defect does not show up in the $`\mathrm{\Lambda }\to \infty `$ version (or dimensionally regularized version) of renormalized field theory. For a discussion of the case $`d>4`$ we refer to . From the one-loop finite-size scaling functions of Ref. we find the nonexponential behavior $`\mathrm{\Delta }\chi \sim O((L/\xi )^{-d})`$ for $`n=1`$ and $`2<d<4`$ above $`T_c`$. The same behavior exists already in the lowest-mode approximation. The question arises whether higher-loop calculations would change this $`L`$ dependence. The corresponding question for $`n\to \infty `$ can be answered on the basis of our exact solution for $`\widehat{\chi }`$ of the $`\phi ^4`$ lattice model . Approximating this solution by a one-loop type expansion around the lowest-mode structure leads to the large-$`L`$ behavior $`\mathrm{\Delta }\widehat{\chi }\sim O((L/\xi )^{-d})`$ above $`T_c`$ rather than $`e^{-cL}`$. Thus, at least for $`n\to \infty `$, the exponential size dependence is a non-perturbative feature. We expect, therefore, that a conclusive answer to our question requires a non-perturbative treatment of the $`\phi ^4`$ lattice theory.
Our previous nonperturbative order-parameter distribution function is an appropriate basis for analyzing this problem, which will presumably lead to an exponential size dependence for $`\mathrm{\Delta }\widehat{\chi }`$ within the $`\phi ^4`$ lattice model for general $`n`$ above two dimensions. It would be interesting to test the leading finite-size deviations from bulk critical behavior by Monte-Carlo simulations. The absence of terms $`(\mathrm{\Lambda }L)^{-2}`$ would provide evidence for the failure of the continuum approximation $`(\nabla \phi )^2`$ of the $`\phi ^4`$ field theory at finite $`\mathrm{\Lambda }`$ for confined lattice systems with periodic boundary conditions. Acknowledgment Support by Sonderforschungsbereich 341 der Deutschen Forschungsgemeinschaft and by NASA under contract numbers 960838 and 100G7E094 is acknowledged. One of the authors (X.S.C.) thanks the National Natural Science Foundation of China for support under Grant No. 19704005. Figure Caption Fig. 1. Asymptotic $`L^{-1}`$–$`\xi ^{-1}`$ plane (schematic plot) above $`T_c`$ for the $`\phi ^4`$ field-theoretic model at finite cutoff $`\mathrm{\Lambda }`$ in the large-$`n`$ limit in three dimensions, where $`L`$ is the system size and $`\xi `$ is the bulk correlation length. Finite-cutoff effects become non-negligible in the non-scaling region below the dotted line. This crossover line has a vanishing slope at the origin and is determined by Eq. (15) with $`\gamma /\nu =2,\gamma =2`$ and $`a_1(3,0)=0.226`$ for $`d=3`$. Well above this line the cutoff dependence is negligible in Eq. (11). The arrow indicates an approach towards bulk critical behavior at constant $`0<t\ll 1`$ through the non-scaling region where Eq. (14) is valid.
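For orientation, the location of the crossover line described in the caption of Fig. 1 can be estimated numerically. The sketch below solves Eq. (15) for $`d=3`$ with $`a_1(3,0)=0.226`$, using the nearest-image approximation $`P-I_1(P^{-1})\approx (6/4\pi )e^{-L/\xi }`$; this approximation is our own, valid for $`L\gg \xi `$, and is not taken from the text:

```python
import numpy as np
from scipy.optimize import brentq

a1 = 0.226   # a_1(3,0), the value quoted in the caption of Fig. 1

def crossover_L_over_xi(lambda_L):
    """Solve (6/4pi) exp(-u) = a1/(Lambda*L) for u = L/xi (nearest images only)."""
    f = lambda u: (6.0 / (4.0 * np.pi)) * np.exp(-u) - a1 / lambda_L
    return brentq(f, 1e-9, 60.0)

for lambda_L in (10.0, 100.0, 1000.0):
    print(lambda_L, round(crossover_L_over_xi(lambda_L), 2))
# L/xi on the crossover line grows only logarithmically with Lambda*L,
# consistent with the vanishing slope of the dotted line at the origin of Fig. 1.
```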
# CHAOTIZATION OF THE SUPERCRITICAL ATOM ## Abstract Chaotization of a supercritical ($`Z>137`$) hydrogenlike atom in a monochromatic field is investigated. A theoretical analysis of the chaotic dynamics of the relativistic electron, based on the Chirikov criterion, is given. The critical value of the external field at which chaotization occurs is evaluated analytically. The diffusion coefficient is also calculated. PACS numbers: 32.80.Rm, 05.45+b, 03.20+i Study and synthesis of superheavy elements is becoming one of the topical problems of modern physics . Fast-growing interest in the physics and chemistry of actinides and transactinides stimulates extensive study of superheavy elements. One of the main features which leads to additional difficulties in the study of superheavy atoms is the fact that the motion of the atomic electrons must be described by relativistic equations of motion, due to the large charge of the atomic nucleus. In this Brief Report we will study the classical chaotic dynamics of a relativistic hydrogenlike atom with nuclear charge $`Z>137`$, interacting with a monochromatic field. Such an atom is called an overcritical atom . Quantum mechanical properties of such an atom were investigated by a number of authors . Quasiclassical dynamics of the supercritical atom was investigated by V.S. Popov and co-workers . The experimental way of creating the overcritical states is through collision experiments of slow heavy ions with resulting charge $`Z_1+Z_2>137`$ . To treat the chaotic dynamics of this atom we need to write the unperturbed Hamiltonian in terms of action-angle variables. As is well known , for a relativistic electron moving in the field of a charge $`Z>137`$ the point-charge approximation cannot be applied to describe its motion, i.e., the problem needs to be regularized. Such a regularization can be performed by taking into account the finite size of the nucleus, i.e., by cutting off the Coulomb potential at small distances: $$V\left(r\right)=\{\begin{array}{cc}-\frac{Z\alpha }{r},\hfill & \mathrm{for}\;r>R\hfill \\ & \\ -\frac{Z\alpha }{R}f\left(\frac{r}{R}\right),\hfill & \mathrm{for}\;0<r<R\hfill \end{array}$$ where $`f\left(\frac{r}{R}\right)`$ is the cut-off function, $`R`$ is the radius of the nucleus, and $`\alpha =1/137`$ (the system of units $`m_e=\hbar =c=1`$ is used here and below). Further we will take $`f(r/R)=1`$ (surface distribution of the charge).
Then the relativistic momentum, defined by $$p=\sqrt{\left(\epsilon -V\right)^2-\frac{M^2}{r^2}-1},$$ where $`\epsilon `$ is the energy of the electron and $`M`$ its angular momentum, can be rewritten as follows: $$p=\{\begin{array}{cc}\sqrt{\left(\epsilon +\frac{Z\alpha }{r}\right)^2-\frac{M^2}{r^2}-1}\hfill & \mathrm{for}\;r>R\hfill \\ & \\ \sqrt{\left(\epsilon +\frac{Z\alpha }{R}\right)^2-\frac{M^2}{r^2}-1}\hfill & \mathrm{for}\;0<r<R\hfill \end{array}$$ (here and in the following formulas $`Z`$ stands for $`Z\alpha `$). One of the turning points of the electron (the turning points are defined as zeros of the momentum) lies inside the nucleus and is given by $$r_1=M\left[\left(\epsilon +\frac{Z}{R}\right)^2-1\right]^{-1/2}$$ The turning point lying outside the nucleus is given by the expression $$r_2=\frac{\epsilon Z-\sqrt{\epsilon ^2Z^2-\left(\epsilon ^2-1\right)\left(Z^2-M^2\right)}}{\epsilon ^2-1}$$ Thus one can write for the action (for $`Z>M`$) $$\pi n=I_1+I_2,$$ (1) where $$I_1=\underset{r_1}{\overset{R}{\int }}\sqrt{\left(\epsilon +\frac{Z}{R}\right)^2-\frac{M^2}{r^2}-1}𝑑r$$ $$I_2=\underset{R}{\overset{r_2}{\int }}\sqrt{\left(\epsilon +\frac{Z}{r}\right)^2-\frac{M^2}{r^2}-1}𝑑r$$ From (1) one can find the Hamiltonian of the relativistic electron in the field of an overcritical nucleus ($`Z>137`$) in terms of action-angle variables: $$H_0=\epsilon \simeq -\frac{g}{Z}c(R,g)\mathrm{exp}\left\{-\frac{\pi n}{g}\right\},$$ (2) where $`g=\sqrt{Z^2-M^2}`$ and $`c(R,g)=\mathrm{exp}(gR)`$. In the derivation of (2) we have used that $`Z\gg M`$ and $`\epsilon \approx 0`$. The Kepler frequency is $$\omega _0=\frac{dH_0}{dn}=\frac{\pi }{Z}c(R,g)\mathrm{exp}\left\{-\frac{\pi n}{g}\right\}$$ (3) so the levels accumulate exponentially towards $`\epsilon =0`$ (a numerical illustration is given at the end of this Report). The trajectory equation for $`Z>M`$ (for $`r>R`$) has the form $$\frac{Z^2-M^2}{r}=\sqrt{M^2\epsilon ^2+\left(Z^2-M^2\right)}\mathrm{ch}\left(\varphi \sqrt{\frac{Z^2}{M^2}-1}\right)+\epsilon Z$$ (4) For $`Z\gg M`$ ($`\epsilon \approx 0`$) we have $$\frac{g^2}{r}\simeq \sqrt{M^2\epsilon ^2+g^2}\left(1+\frac{\varphi ^2}{2}\left(\frac{Z^2}{M^2}-1\right)\right)+\epsilon Z$$ or $$\frac{g}{r}\simeq 1+\frac{g^2\varphi ^2}{2M^2}+\frac{\epsilon Z}{g}$$ The trajectory equation for $`r<R`$ is $$\frac{1}{r}=a_0\mathrm{cos}\varphi ,$$ (5) where $$a_0=M^{-1}\left[\left(\epsilon +\frac{Z}{R}\right)^2-1\right]^{1/2}$$ To investigate the chaotic dynamics of this atom we will consider the angular momentum as fixed ($`Z\gg M`$). Consider now the interaction of the supercritical atom with a linearly polarized monochromatic field $$V=ϵ\mathrm{cos}\omega t\mathrm{sin}\theta \left[x\mathrm{sin}\psi +y\mathrm{cos}\psi \right],$$ (6) where $`\theta `$ and $`\psi `$ are the Euler angles.
The full Hamiltonian of the system can be written as $`H=-\frac{\sqrt{Z^2-M^2}}{Z}\mathrm{exp}\left\{-\frac{\pi n}{\sqrt{Z^2-M^2}}\right\}+`$ $`ϵ\mathrm{cos}\omega t\mathrm{sin}\theta {\displaystyle \sum _k}\left(x_k\mathrm{sin}\psi \mathrm{cos}k\lambda +y_k\mathrm{cos}\psi \mathrm{sin}k\lambda \right),`$ (7) where $`x_k`$ and $`y_k`$ are the Fourier components of the electron dipole moment: $`x_k={\displaystyle \frac{i}{k\omega T}}{\displaystyle \int _0^T}e^{ik\omega t}dx={\displaystyle \frac{i}{k\omega T}}{\displaystyle \int _0^T}e^{ik\omega \left(2\epsilon Z\xi -\mathrm{sin}\xi \right)}`$ $`\mathrm{sin}\xi \left\{\mathrm{cos}{\displaystyle \frac{2g^{-1}}{a\mathrm{cos}\xi }}-{\displaystyle \frac{1}{a\mathrm{cos}\xi }}\mathrm{sin}{\displaystyle \frac{2g^{-1}}{a\mathrm{cos}\xi }}\right\}d\xi `$ (8) and $`y_k={\displaystyle \frac{iMb^{5/2}}{k\omega T}}{\displaystyle \int _0^T}{\displaystyle \frac{e^{ik\omega t}\mathrm{sin}2\xi d\xi }{\sqrt{M^2\mathrm{cos}^2\xi -b^2}}}+{\displaystyle \frac{i}{k\omega T}}{\displaystyle \int _0^T}e^{ik\omega \left(2\epsilon Z\xi -\mathrm{sin}\xi \right)}`$ $`\mathrm{sin}\xi \left\{\mathrm{sin}{\displaystyle \frac{2g^{-1}}{a\mathrm{cos}\xi }}-{\displaystyle \frac{1}{a\mathrm{cos}\xi }}\mathrm{cos}{\displaystyle \frac{2g^{-1}}{a\mathrm{cos}\xi }}\right\}d\xi `$ (9) here $`a=\sqrt{Z^2-M^2}\mathrm{exp}\{-\pi n/\sqrt{Z^2-M^2}\},`$ $`T=2\pi /\omega _0`$,
$$b=\left(\epsilon +\frac{Z}{R}\right)^2-1,$$
$$T_1=\frac{\left(R-r_0\right)}{2M},\quad T_2=T-T_1$$
Calculating the integrals (8) and (9) using the stationary phase method we have
$$x_k=0,\quad y_k=\frac{R^2\mathrm{exp}\left\{-\frac{\pi n}{\sqrt{Z^2-M^2}}\right\}}{\pi k^2}$$ (10)
For the further treatment of the chaotic dynamics of the system one should find, as was done in , the resonance width:
$$\mathrm{\Delta }\nu _k=\left(8\omega _0^{\prime }r_kϵ\right)^{\frac{1}{2}},$$
where $`r_k=\sqrt{x_k^2+y_k^2}`$. Application of the Chirikov criterion to the Hamiltonian (7) gives us the critical value of the external field at which an electron moving in the supercritical Kepler field enters the chaotic regime of motion:
$$ϵ_{cr}=\frac{gc(R,g)\mathrm{exp}\left\{-\pi n/g\right\}}{20Zk\left(k+1\right)^2\left(\sqrt{r_k}+\sqrt{r_{k+1}}\right)^2}$$ (11)
Taking into account (10), for the critical field we have
$$ϵ_{cr}=\pi k\frac{gc(R,g)\mathrm{exp}\left\{-2\pi n/g\right\}}{20Z\left(2k^2+2k+1\right)}$$ (12)
One can also calculate the diffusion coefficient:
$$D=\frac{\pi }{2}\frac{ϵ^2R^2}{c(R,g)Z^3}\mathrm{exp}\left\{-2\pi n/g\right\}$$ (13)
Thus we have obtained the critical value of the external monochromatic field strength at which chaotization of the motion of an electron moving in the supercritical Kepler field occurs. As is seen from (12), this critical value is rather small, i.e., in the supercritical case ($`Z>137`$) the electron is more chaotic than in the undercritical ($`Z<137`$) case. This can be explained by the fact that the level density of the $`Z>137`$ atom is considerably higher (see (2)) than that of the undercritical atom (see ). The above results may be useful for slow collision experiments with heavy ions (with combined charge $`Z_1+Z_2>137`$) in the presence of a laser field.
## 1 Introduction The ultimate source of electroweak symmetry breaking (EWSB) is still mysterious. So far, progress on solving this mystery has been confined to ruling out ideas rather than confirming one. Nevertheless, an explanation exists and it is the primary purpose of the next generation colliders to find it. We do know what the results of EWSB must be: the $`W`$ and $`Z`$ bosons must get mass, and the chiral fermions must get mass. The simplest explanation within the Standard Model (SM) is a scalar $`SU(2)`$ doublet which couples to the vector bosons via the covariant derivative, and to the fermions via Yukawa couplings. After spontaneous symmetry breaking, one physical scalar degree of freedom remains – the Higgs boson. Given all the other measurements that have already been made (gauge couplings and masses of the gauge bosons and fermions), the couplings of the Higgs boson to all SM particles are fixed, and the collider phenomenology is completely determined as a function of only one parameter, the Higgs boson mass. The correct theory may be much different from our simplest notion. Low-energy supersymmetry, for example, is a rather mild deviation from the Standard Model EWSB idea. Nevertheless, supersymmetry requires at least two Higgs doublets that contribute to EWSB and complicate the phenomenology by having more parameters and more physical states in the spectrum. Furthermore, some theories, including supersymmetry, may allow other states with substantial couplings to exist which are light enough for a Higgs boson to decay into. EWSB burden sharing, or Higgs boson interactions with other light states, are in principle just as likely as the SM solution to EWSB. In this letter we would like to add to the discussion of EWSB possibilities at a high luminosity Tevatron collider by considering a Higgs boson which decays invisibly. The SM Higgs boson case has been studied in great detail recently for $`\sqrt{s}=2\text{ TeV}`$ with high luminosity, and the prospects for discovering a light Higgs boson ($`m_h\lesssim 130\text{ GeV}`$) with $`30\text{ fb}^{-1}`$ are promising . Reaching beyond $`130\text{ GeV}`$ will be more of a challenge, but studies in this direction appear tantalizing . For an invisibly-decaying Higgs boson, no studies have been performed to our knowledge. However, we believe it is interesting for many reasons. The reason non-SM Higgs phenomena are especially relevant for the Tevatron is that at the Tevatron a Higgs boson is copiously produced only if its mass is less than 150 GeV or so. Such a light SM Higgs boson couples only very weakly to all on-shell decay-mode states, and has a narrow decay width in this range . For example, $`h\to f\overline{f}`$ decays depend on the squared coupling $`m_f^2/v^2`$, where $`v=175\text{ GeV}`$. The largest mass fermion for a light Higgs boson to decay into is the $`b`$ quark with $`m_b\approx 4.5\text{ GeV}`$, leading to a squared coupling of order $`m_b^2/v^2\lesssim 10^{-3}`$. As $`m_H`$ is increased above 135 GeV, the decays $`H\to WW^{(*)}`$ begin to become more important than $`H\to b\overline{b}`$. However, even for $`m_H=(140,150)`$ GeV, the total width of a SM Higgs boson is only about $`(8,17)`$ MeV. Therefore, if the light Higgs boson interacts with any new particle(s) in addition to the SM particles, the resulting impact on Higgs boson decay branching fractions could be dramatic.
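As a quick numerical check of the coupling estimate above (a two-line computation, using only the quoted values $`m_b\approx 4.5`$ GeV and $`v=175`$ GeV):

```python
# Squared h -> b bbar coupling, which sets the scale of all SM partial
# widths of a light Higgs boson.
m_b, v = 4.5, 175.0                        # GeV, as quoted in the text
print(f"m_b^2/v^2 = {m_b**2 / v**2:.1e}")  # ~6.6e-04, i.e. below 10^-3
```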
For example, an $`𝒪(1)`$ coupling of the Higgs boson to other light particles means that the Higgs boson will decay to these new states essentially $`100\%`$ of the time. If the new states happen to not be detectable, none of the standard analyses for Higgs boson discovery would directly apply. (In contrast, a heavy Higgs boson which is near or above the $`WW`$ threshold has a guaranteed decay mode with electroweak-strength coupling, and other unsuppressed decay modes enter at the $`ZZ`$ and $`t\overline{t}`$ thresholds. Therefore any new states which the Higgs boson may be allowed to decay into will likely not completely overwhelm the SM decay modes, so standard analyses will still be relevant for discovery at post-Tevatron colliders.) Therefore, a comprehensive assessment of EWSB phenomenology at the Tevatron must include considering the possibility of a Higgs boson decaying invisibly. ## 2 Motivation It follows from the above discussion that any theoretical idea which allows the light Higgs boson to interact with light invisible particles with $`𝒪(1)`$ couplings will result in $`B(H\to \mathrm{invisible})\approx 100\%`$. Many possibilities for this exist. In the following paragraphs we list a small subset of interesting theoretical ideas which could lead to a SM-like Higgs boson that decays invisibly. Higgs boson decays to neutralinos As a first example, the lightest supersymmetric partner (LSP), $`\chi `$, in supersymmetry may be a small mixture of higgsino and bino (superpartners of the Higgs boson and hypercharge gauge boson), and so decays of the lightest Higgs boson into LSPs, $`h\to \chi \chi `$, may have sizeable probability. Or, the LSP might be very nearly degenerate with other charged states which the Higgs boson decays into, so that the decay products of the charged states are too soft to detect. If R-parity is conserved, the $`\chi `$ does not decay and escapes detection. Therefore, the Higgs boson is invisible. This possibility, however, is almost excluded for minimal supersymmetry based upon gauge coupling unification, gaugino mass unification, and scalar mass universality. In this case, the lightest neutralino is mostly a bino, and has mass approximately half that of the lightest chargino. The present bounds on the chargino are above about $`90\text{ GeV}`$ , which in turn implies that the mass of the neutralino is at least $`45\text{ GeV}`$ or so in minimal supersymmetry. Although it is possible to still have $`h\to \chi \chi `$, the parameter space remaining for such decays has decreased and may continue to decrease if there is no discovery as the CERN LEP II $`e^+e^{-}`$ collider runs proceed. Furthermore, in minimal supergravity parameter space the coupling of $`\chi \chi h`$ is often not significantly above that of $`\overline{b}bh`$ , and so $`B(h\to \chi \chi )\gg B(h\to \overline{b}b)`$ is not necessarily expected. However, as we stray from the naive assumptions of gaugino mass unification and scalar mass universality, $`h\to \chi \chi `$ is not as constrained by searches for chargino pair production at LEP II. Then the motivation is strengthened to consider the case where this branching ratio is high, leading to an invisibly-decaying light Higgs boson. In supersymmetry with minimal particle content, the lightest Higgs boson is not expected to be above $`125\text{ GeV}`$ . We shall see later that the invisible Higgs boson can be probed with 3$`\sigma `$ significance up to about $`125\text{ GeV}`$ at the Tevatron with $`30\text{ fb}^{-1}`$.
Higgs boson decays to neutrinos in extra dimensions Another interesting motivation is related to neutrino mass generation in theories with extra dimensions opening up at the TeV scale . In this approach, which we will call “TeV gravity”, no fundamental mass scale in field theory should exist above a few TeV. Therefore, electroweak symmetry breaking, fermion masses, flavor dynamics, and neutrino masses all must occur near the TeV scale. The standard approach to neutrino mass generation is to introduce a right-handed neutrino, and to apply a see-saw between a heavy Majorana mass of the neutrino ($`m_M`$) and a rather light Dirac mass ($`m_D`$). The lightest eigenvalue is then $`m_\nu =m_D^2/m_M`$. Typically, models prefer $`m_M\gtrsim 10^{12}\text{ GeV}`$ either because of naturalness, or some considerations in $`SO(10)`$ model building, etc. In TeV gravity such high mass scales are not available. It is mainly theoretical prejudice that has paradoxically made us consider extremely high mass scales to explain such low scales. One should not forget that there are many orders of magnitude between the neutrino mass and the weak scale in which nature could develop the right twist to explain itself. If TeV gravity is the correct approach to nature, then we must find the explanation and identify the phenomenology that can help us discern it. How this is related to the invisible Higgs boson will become apparent shortly. If the right-handed neutrino is restricted to the SM 3-brane along with the other SM particles, neutrino masses would then need to be generated by dynamics near or below the TeV scale. There are viable alternatives for this, which may even lead to Higgs boson invisible decays . However, one is enticed to postulate that the right-handed neutrino is free to propagate also in the extra dimensions where gravity propagates . This is natural since $`\overline{\nu }_R`$ can be interpreted as a singlet that has no quantum numbers to restrict it to the SM brane. In this scenario, the $`\overline{\nu }_R`$ not only has its zero mode but also Kaluza-Klein (KK) modes $`\overline{\nu }_R^{(i)}`$ separated in mass by $`1/R`$, where $`R`$ is the linear dimension of the compact $`\delta `$ dimensions, determined from $$M_{\mathrm{Pl}}^2=R^\delta M_D^{2+\delta }.$$ (1) Here $`\delta `$ is the number of extra dimensions, $`M_{\mathrm{Pl}}`$ is the familiar Planck mass of the effective four-dimensional theory, and $`M_D\sim 1`$ TeV is the fundamental $`D=4+\delta `$ dimensional gravity scale. The absence of experimental deviations from Newtonian gravity at distances greater than a millimeter implies that $`R\lesssim 10^{13}`$ $`\mathrm{GeV}^{-1}`$. For $`\delta =1`$, this implies $`M_D\gtrsim 10^9`$ GeV, but for $`\delta \ge 2`$ this does not impose any constraint stronger than $`M_D\gtrsim 1`$ TeV. Suppose that Dirac neutrino masses arise from Yukawa couplings $`y_\nu H\overline{\nu }_R\nu _L`$, so that $`m_\nu =y_\nu v`$ where $`v=175`$ GeV is the Higgs vacuum expectation value. Although the decay of $`H`$ to any given final state $`\nu _L\overline{\nu }_R^{(i)}`$ is proportional to $`y_\nu ^2`$ and extremely small, the multiplicity of gauge-singlet right-handed neutrino KK states below $`m_H`$ can be very large.
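A short numerical sketch of the size of the compact dimensions implied by Eq. (1), and of the resulting KK multiplicity below $`m_H`$ (which scales as $`(m_HR)^\delta `$, as quantified in Eq. (2) below), is given here. The values of $`M_D`$ and $`m_H`$ are illustrative assumptions.

```python
M_PL = 1.22e19   # four-dimensional Planck mass, GeV
M_D  = 1.0e3     # fundamental gravity scale, assumed to be 1 TeV
m_H  = 120.0     # illustrative Higgs boson mass, GeV

for delta in (2, 3, 4):
    # Eq. (1): M_Pl^2 = R^delta * M_D^(2+delta)  =>  R in GeV^-1
    R = (M_PL**2 / M_D**(2 + delta)) ** (1.0 / delta)
    multiplicity = (m_H * R) ** delta   # number of KK modes below m_H
    print(f"delta={delta}:  R = {R:.2e} GeV^-1,  (m_H R)^delta = {multiplicity:.2e}")
```

For $`\delta =2`$ this reproduces $`R\sim 10^{13}`$ $`\mathrm{GeV}^{-1}`$, right at the millimeter bound quoted above.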
It is proportional to the volume $`R^\delta `$ of the $`\delta `$-dimensional space, with a momentum-space factor of order $`m_H^\delta `$: $$\underset{i}{\sum }\sim (m_HR)^\delta .$$ (2) The total partial width of $`H`$ into KK excitations involving neutrinos is then of order: $$\underset{i}{\sum }\mathrm{\Gamma }(H\to \nu _L\overline{\nu }_R^{(i)})\sim \frac{m_H}{16\pi }y_\nu ^2(m_HR)^\delta .$$ (3) Therefore the ratio of $`_iB(H\to \nu _L\overline{\nu }_R^{(i)})`$ to $`B(H\to b\overline{b})`$ can be estimated to be roughly $$x\equiv \frac{_iB(H\to \nu _L\overline{\nu }_R^{(i)})}{B(H\to \overline{b}b)}\sim \frac{m_\nu ^2}{3m_b^2}\left(\frac{m_H}{M_D}\right)^\delta \left(\frac{M_{\mathrm{Pl}}}{M_D}\right)^2$$ (4) Now for $`\delta =1`$, the aforementioned constraint $`M_D\gtrsim 10^9`$ GeV tells us that $`x`$ is negligibly small. For $`\delta \ge 2`$, there is no corresponding relevant constraint on $`M_D`$ and one can estimate $$x\sim 10^{11-\delta }\left(\frac{m_\nu }{1\mathrm{eV}}\right)^2\left(\frac{m_H}{100\mathrm{GeV}}\right)^\delta \left(\frac{1\mathrm{TeV}}{M_D}\right)^{2+\delta }.$$ (5) The case $`\delta =2`$ may run into difficulties with nucleosynthesis, but for $`\delta =3`$, the decays into invisible states can dominate . For example, with $`m_H\gtrsim 100\text{ GeV}`$ one can have $`x\gtrsim 100`$ even for $`m_\nu ^2=10^{-6}`$ eV<sup>2</sup> and $`M_D=1`$ TeV, or for $`m_\nu ^2=10^{-1}`$ eV<sup>2</sup> and $`M_D=10`$ TeV. Larger values of $`\delta `$ can also give dominant invisible decays, although the estimate is increasingly sensitive to $`M_D`$. In any case, there is a strong possibility that the Higgs to KK neutrinos partial width may greatly exceed the partial widths into SM states. Note also that no additional Higgs bosons are necessary in this framework, allowing $`\sigma (HZ)`$ to occur at the same rate as $`\sigma (H_{\mathrm{SM}}Z)`$ in the SM. Higgs boson decays to Majorons Another approach is to assume that the traditional see-saw mechanism applies with a Majorana mass scale not much larger than 1 TeV. In this case, $`m_M`$ cannot be much bigger than about $`1\text{ TeV}`$. If $`m_D\sim 1\mathrm{MeV}`$ and $`m_M\sim 1\text{ TeV}`$, then the neutrino mass is naturally $`m_\nu \sim 1\mathrm{eV}`$. We leave it to model builders to decide why the Dirac mass of neutrinos may be near or below about $`1\text{ MeV}`$. However, we remark that this is approximately the electron mass, and so there is precedence in nature for a Dirac mass of a SM field near $`1\text{ MeV}`$. That is, no extraordinary mass scales are required in the see-saw numerology of neutrino masses. The question then centers on the origin of the Majorana mass. For us, the important consideration is whether the Majorana mass results from a spontaneously broken global symmetry. If $`\eta `$ is a singlet scalar field charged under a global lepton number, and if $`\eta `$ couples to the neutrinos via the operator (in 2-component Weyl fermion notation) $`\lambda \eta \overline{\nu }_R\overline{\nu }_R`$, then a vacuum expectation value of $`\eta `$ will spontaneously break the global lepton number and generate a Majorana mass equal to $`\lambda \langle \eta \rangle `$. We can then identify $`J=\mathrm{Im}\eta `$ as the Nambu-Goldstone boson of the symmetry breaking . It is easy to write down a potential between the SM Higgs doublet $`\varphi `$ and the singlet scalar $`\eta `$, and to construct the interactions among mass eigenstates .
The two CP-even mass eigenstates are $`H`$ $`=`$ $`\mathrm{cos}\theta \mathrm{Re}\varphi ^0-\mathrm{sin}\theta \mathrm{Re}\eta `$ (6) $`S`$ $`=`$ $`\mathrm{sin}\theta \mathrm{Re}\varphi ^0+\mathrm{cos}\theta \mathrm{Re}\eta .`$ (7) The partial widths of $`H\to JJ`$ and $`H\to b\overline{b}`$ can be calculated in an arbitrary potential $`V=V(\varphi ^{}\varphi ,\eta ^{}\eta )`$ consistent with gauge invariance and global lepton number invariance. The ratio of these partial widths (branching fractions) can then be expressed as $$x\equiv \frac{B(H\to JJ)}{B(H\to \overline{b}b)}\approx \frac{\mathrm{tan}^2\theta }{12}\left(\frac{m_H}{m_b}\right)^2\frac{\langle \varphi \rangle ^2}{\langle \eta \rangle ^2}.$$ (8) There are several consequences to notice from Eq. (8). First, if $`\langle \eta \rangle \gg \langle \varphi \rangle `$, or equivalently, if $`m_M\gg m_Z`$, the Higgs boson decays into $`JJ`$ would not happen very often. In the usual discussion of the Majoron model approach to neutrino masses the prospect of $`m_M\sim m_Z`$ is just one possibility over a very wide range of choices for $`m_M`$. However, in TeV gravity, for example, it is required that $`m_M`$ cannot be higher than the weak scale, leading to a potentially large branching fraction of $`H\to JJ`$. The second point to notice in Eq. (8) is implicit. If $`\mathrm{tan}\theta \gg 1`$ then $`B(H\to JJ)\approx 100\%`$. However, in this case $`\sigma (HZ)`$ is proportional to $`\mathrm{cos}^2\theta \approx 0`$, because the $`HZZ`$ coupling scales with $`\mathrm{cos}\theta `$. In reality the invisible Higgs rate in this model for $`m_Z\lesssim m_H\lesssim 150\text{ GeV}`$ is $$\frac{\sigma (ZH\to Z+JJ)}{\sigma (ZH_{\mathrm{SM}})}=\xi (x,\mathrm{cos}\theta )\frac{x}{1+x}\mathrm{cos}^2\theta ,$$ (9) where $`\xi \approx 1`$ represents a small correction from $`H\to WW^{*},\tau \tau `$ decays for $`m_H\lesssim 130\text{ GeV}`$, and a more sizeable $`\xi \lesssim 1`$ correction for larger Higgs boson mass values. Therefore, it is impossible in this approach to have $`\sigma (Z+JJ)>\sigma (ZH_{\mathrm{SM}})`$. Nevertheless, it is quite possible and natural for $`\sigma (ZH\to Z+JJ)`$ to be the dominant production and decay mode of $`H`$, and to have a production cross-section close to the value of a SM Higgs boson of the same mass. Standard Model with an extra singlet There are many variations on the above themes which will have impact on the production rate of the relevant Higgs boson and its decays into invisible particles. Rather than trying to parameterize all the possibilities with complicated formulae, we instead choose to study an equally motivated but simpler model such that one can scale the results to any other particular idea. In this model there exists one gauge-singlet scalar boson and one doublet Higgs boson whose vacuum expectation value constitutes all of EWSB symmetry breaking, and which therefore couples to the $`W`$ and $`Z`$ bosons with the same strength as the SM Higgs boson. This minimal extension, as we will see, has a strong impact on the invisible width of the Higgs boson . When one adds a SM singlet to the spectrum, the full Lagrangian becomes $$\mathrm{}=\mathrm{}_{\mathrm{SM}}-m_S^2|S|^2-\lambda ^{\prime }|S|^2|H|^2-\lambda ^{\prime \prime }|S|^4,$$ (10) where $`H`$ is the SM doublet Higgs boson and $`S=S^0+iA_S^0`$ is the complex singlet Higgs boson. In writing Eq. (10), we have assumed only that $`S`$ is charged under a $`U(1)_S`$ global symmetry, and that the Lagrangian respects this symmetry. Without this symmetry one could write down more terms, such as $`(S^2+S^{*2})|H|^2`$, but these do not qualitatively change the discussion below.
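Before moving on, a minimal numerical illustration of the Majoron-model rate formulas, Eqs. (8) and (9), may be useful; every input value below (the mixing, the vevs, and $`m_H`$) is an assumption chosen only for illustration.

```python
m_H, m_b = 120.0, 4.5            # GeV; m_H is an assumed example value
vev_phi, vev_eta = 175.0, 500.0  # <phi>, <eta> in GeV (assumptions)
tan_theta = 0.3                  # illustrative doublet-singlet mixing

# Eq. (8): x = B(H -> JJ) / B(H -> b bbar)
x = (tan_theta**2 / 12.0) * (m_H / m_b)**2 * (vev_phi / vev_eta)**2

# Eq. (9) with xi ~ 1 (valid for m_H below ~130 GeV)
cos2_theta = 1.0 / (1.0 + tan_theta**2)
rate_ratio = (x / (1.0 + x)) * cos2_theta

print(f"x = {x:.2f},  sigma(ZH -> Z+JJ)/sigma(Z H_SM) = {rate_ratio:.2f}")
```

Even a modest mixing can thus give an invisible rate comparable to the SM production rate, as argued in the text.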
Now if $`\langle S\rangle \ne 0`$, the model is the same as the Majoron model discussed earlier, with $`U(1)_S`$ playing the role of lepton number. If $`\langle S\rangle =0`$ there is no mixing between the $`S`$ and the $`H`$, and if $`m_S<m_H/2`$ then the decays $`H\to S^0S^0,A_S^0A_S^0`$ are allowed to proceed with coupling $`\mu _S=2\lambda ^{\prime }\langle H\rangle `$. If $`\mu _S\gg m_b`$ these decays will be near 100% for a light Higgs boson with mass below about $`150\text{ GeV}`$. Since the $`S^0`$ does not mix with the $`H`$ there will be no suppression of $`ZH`$ production. Also, since $`S`$ has no couplings to SM gauge bosons or fermions, it will be stable and non-interacting (invisible) in the detectors. For the remainder of this paper we assume this model, in which $`\sigma (ZH)`$ is unsuppressed compared to the SM and $`H\to \mathrm{invisible}`$ with 100% branching fraction. One can then scale the results to other, more complicated models which may have suppressions in the total production cross-section or in the invisible decay width. One should keep in mind that the optimal experimental analysis will be to combine search results over all channels, including invisible decay products, $`b\overline{b}`$, $`\tau ^+\tau ^{-}`$, etc., to search for evidence in the data of a scalar Higgs boson that may decay to several final states with similar probabilities. ## 3 Detecting an invisibly-decaying Higgs boson with leptons The process we have found most significant in the search for an invisibly-decaying Higgs boson in $`p\overline{p}`$ collisions at the Tevatron with $`\sqrt{s}=2`$ TeV is $$p\overline{p}\to Z^{*}\to (Z\to l^+l^{-})(H_{\mathrm{inv}}\to \mathrm{invisible}).$$ (11) The signal is therefore two oppositely-charged same-flavor leptons with invariant mass near $`m_Z`$, accompanied by missing transverse energy from the invisibly-decaying Higgs boson. By $`l^+l^{-}`$ we mean $`e^+e^{-}`$ or $`\mu ^+\mu ^{-}`$ and not $`\tau ^+\tau ^{-}`$. The $`\tau ^+\tau ^{-}`$ final states may be used to gain in significance slightly, but the uncertainties in $`\tau `$ identification and invariant mass resolution lead us to ignore this final state in the present analysis. Again, we are assuming a theory which is identical to the SM except that a light singlet scalar exists that the Higgs boson can decay into. As discussed in the previous section, this model then implies that $`\sigma (H_{\mathrm{inv}}Z)=\sigma (H_{\mathrm{SM}}Z)`$ and $`B(H_{\mathrm{inv}}\to \mathrm{invisible})\approx 100\%`$. The most important background is $`ZZ`$ production where one $`Z`$ boson decays leptonically and the other $`Z`$ boson decays into neutrinos. Since $`ZZ`$ is produced by $`t`$-channel processes, it is expected that the $`E_T`$ distribution of the $`Z`$ bosons will be softer (lower energy) than that of the $`Z`$ boson accompanying $`H_{\mathrm{inv}}`$ in $`s`$-channel $`ZH_{\mathrm{inv}}`$ production. An equivalent statement at leading order (also NLO with a jet veto) is that the missing transverse energy in the $`ZZ`$ background will typically be smaller than the missing energy distribution in $`ZH_{\mathrm{inv}}`$ events for Higgs bosons with mass near $`m_Z`$. The next most significant background is from $`W^+W^{-}`$ production with each $`W`$ decaying leptonically. (We have included here contributions from $`W\to \tau \nu `$ followed by a leptonic $`\tau `$ decay.) This background has a considerably softer transverse energy distribution.
As we will see in the plots and discussion below, the fact that both of the leading backgrounds have softer transverse energy profiles than the signal makes it possible to gain significance by choosing a high cut on $`E\text{/}_T`$ . Finite detector resolution and smearing effects may also favor choosing a higher $`E\text{/}_T`$ cut. However, if the lower bound on $`E\text{/}_T`$ is chosen to be too high, then one will simply run out of signal. Therefore some intermediate choice of cut for $`E\text{/}_T`$ is required. Other important backgrounds to consider arise from $`WZ`$, $`Wj`$, and $`Z^{(*)}\to \tau ^+\tau ^{-}\to l^+l^{-}+E\text{/}_T`$. The $`Z^{(*)}\to \tau ^+\tau ^{-}`$ background is made completely negligible by requiring that $`m_{l^+l^{-}}\approx m_Z`$, $`E\text{/}_T>50\text{ GeV}`$, and $`\mathrm{cos}(\varphi _{l^+l^{-}})>-0.9`$. The angle $`\varphi _{l^+l^{-}}`$ is the angle between the two leptons in the transverse plane. The $`WZ`$ background requires that a lepton from $`W\to l\nu `$ is not detected. This has a rather low probability, and our analysis requires that the pseudo-rapidity of the missed lepton satisfy $`|\eta |>2`$. The $`Wj`$ background can mimic the signal final state if the jet registers in the detector as a lepton of the right flavor and charge to partner with the lepton from $`W\to l\nu `$. We liberally put this fake rate of $`j\to l`$ at $`10^{-4}`$. Other backgrounds from grossly mismeasured jet energies, $`WZ`$ production with $`W\to \tau \nu `$, and $`t\overline{t}`$ production can be eliminated by vetoing events with a jet with transverse energy greater than 10 GeV and $`|\eta |<2.5`$. We now summarize all the kinematic cuts applied in this analysis: $`p_T(l^+),p_T(l^{-})>12\text{ GeV}`$ (12) $`|\eta (l^+)|<2,|\eta (l^{-})|<2`$ (13) $`|m_{l^+l^{-}}-m_Z|<7\text{ GeV}`$ (14) $`\mathrm{cos}(\varphi _{l^+l^{-}})>-0.9`$ (15) $`E\text{/}_T>50\text{ GeV}.`$ (16) The actual analysis of signals and backgrounds was carried out at the parton level using the CompHEP program , except for the $`WW\to \tau \ell \nu \overline{\nu }\to \ell \ell +E\text{/}_T`$ background, which was included using the ISAJET Monte Carlo program. We also summarize some relevant detector parameters that we assume: $`\mathrm{Probability}(j\to l)=10^{-4}`$ (17) $`\text{Lost lepton has }|\eta (l)|>2`$ (18) $`\text{Dilepton id efficiency in }Z\to l^+l^{-}=0.7`$ (19) $`\text{NLO K factor}\times \text{jet veto}\approx \mathrm{LO}.`$ (20) The dilepton identification rate is taken from . The last line refers to the fact that NLO calculations of EW gauge boson pair production $`VV^{\prime }`$ and gauge boson with Higgs boson production $`VH`$ have a $`K`$ factor of slightly less than $`1.4`$ at the Tevatron. The jet veto efficiency, assuming that jets must have $`p_T>10\text{ GeV}`$ and $`|\eta _j|<2.5`$, is approximately $`70\%`$ . Multiplying these two numbers together gives $`1.4\times 0.7\approx 1`$, which is what we assume for the analysis. This is equivalent to simulating background and signal at leading order (LO). Loosening the jet veto requirement somewhat might lead to a slightly larger significance. In Figs. 1 and 2 we plot the dilepton $`E_T`$ (equivalent to $`E\text{/}_T`$ ) spectrum for the background and signal for various Higgs boson masses. As expected, the $`ZZ`$ and $`WW`$ backgrounds are the most significant, and the other backgrounds are down significantly from them. Moreover, the $`WW`$ background is reduced quite significantly by choosing a higher $`E_T`$ cut.
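The cut optimization just described can be illustrated with a short Python sketch. The cross sections below are placeholders chosen only to show the procedure; they are not the values of Table 1.

```python
import math

met_cuts = [50, 60, 70, 80, 90]          # missing-E_T thresholds, GeV
sig      = [2.0, 1.8, 1.5, 1.2, 0.9]     # hypothetical signal after cuts, fb
bkg      = [20.0, 12.0, 8.0, 5.5, 4.0]   # hypothetical background, fb
lumi     = 30.0                          # integrated luminosity, fb^-1

for cut, s, b in zip(met_cuts, sig, bkg):
    # S/sqrt(B) computed from the expected event counts s*lumi and b*lumi
    z = s * lumi / math.sqrt(b * lumi)
    print(f"E_T > {cut} GeV:  S/sqrt(B) = {z:.2f}")
```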
Results for the cross-sections after cuts and efficiencies are given for the $`m_H=100`$ and $`130`$ GeV signals and the total background, for different choices of the $`E_T`$ cut, in Table 1. Using the definition $$\mathrm{Significance}=S/\sqrt{B}$$ (21) where $`S`$ and $`B`$ are the signal and background rates in fb (so that the statistical significance for an integrated luminosity $`L`$ is $`(S/\sqrt{B})\sqrt{L}`$), we plot the significance of the signal compared to background in Fig. 3 as a function of $`E_T`$. The peak of the significance curve occurs at different $`E_T`$ depending on the mass of the Higgs boson. For larger masses the significance peak is at larger $`E_T`$. This is expected, since heavier Higgs bosons will tend to carry away more missing energy and be accompanied by more boosted $`Z`$ bosons, and because the $`WW`$ component of the background has a much softer $`E_T`$ distribution. In our analysis we choose the $`E_T`$ cut for each Higgs mass in order to maximize the significance, although the significance is a rather flat function of this cut. We are now in a position to predict how much luminosity is required at the Tevatron to produce a $`95\%`$ ($`1.96\sigma `$) exclusion limit, a $`3\sigma `$ observation, and a $`5\sigma `$ discovery . The results are shown in Fig. 4 and Table 2. If we assume that the Tevatron will accumulate a total of $`30\text{ fb}^{-1}`$ of integrated luminosity, the invisible Higgs boson could be excluded at the 95% confidence level for $`m_H`$ up to nearly $`150`$ GeV. (Note, however, that the theoretical motivation for an invisibly-decaying Higgs boson is reduced anyway as $`m_H`$ increases above 150 GeV and the $`H\to WW^{(*)}`$ mode opens up.) A $`3\sigma `$ observation is possible for masses up to approximately $`125`$ GeV, and a $`5\sigma `$ discovery is not possible for $`m_{H_{\mathrm{inv}}}>100`$ GeV. This should be compared with LEP II at $`\sqrt{s}=205`$ GeV, which should be able to discover $`H_{\mathrm{inv}}`$ if its mass is below $`95`$ GeV . The current limit on $`m_{H_{\mathrm{inv}}}`$ from $`\sqrt{s}=184`$ GeV data at LEP II is $`80`$ GeV . Our results have been based only on counting events with $`E_T`$ larger than some cut. After detector responses have been more firmly established, it may also be worth investigating whether the shape of the $`E_T`$ distribution, compared to the expected background profile, can be employed to exclude or substantiate a signal. In effect, the plentiful $`\ell ^+\ell ^{-}+E\text{/}_T`$ events with smaller $`E\text{/}_T`$ (even less than 50 GeV) could be used to get a handle on background levels, which can then be tested with the higher $`E\text{/}_T`$ events where the signal has its main support. This could be done, for example, using an optimized neural net procedure. ## 4 Detecting the invisibly-decaying Higgs boson with $`b`$-quarks Another signal that is potentially useful for discovering an invisibly-decaying Higgs boson is $$p\overline{p}\to (Z\to b\overline{b})(H_{\mathrm{inv}}\to \mathrm{invisible})=b\overline{b}+E\text{/}_T.$$ (22) (Other possible signals involving $`p\overline{p}\to ZH`$ followed by $`Z\to jj`$ without tagged $`b`$-jets suffer from large backgrounds due to multiple partonic contributions to $`p\overline{p}\to jjZ\to jj\nu \overline{\nu }`$.) The advantage of this signal is the increased branching fraction of $`Z\to b\overline{b}`$ compared to $`Z\to l^+l^{-}`$. The disadvantages are the lower efficiency for identifying $`b\overline{b}`$ final states compared to leptonic final states, the reduced invariant mass resolution for $`Z\to b\overline{b}`$, and more difficult background sources. The signal of Eq.
(22) is very similar to a $`b\overline{b}+E\text{/}_T`$ signal accessible in the SM : $$p\overline{p}\to (Z\to \nu \overline{\nu })(H_{\mathrm{SM}}\to b\overline{b})=b\overline{b}+E\text{/}_T.$$ (23) Therefore, we can directly apply the background studies of this complementary signal to the invisibly-decaying Higgs boson signal. In ref. the signal and backgrounds for $`b\overline{b}+E\text{/}_T`$ were studied using the following cuts and efficiency parameters: $`p_T(b_1)>20\text{ GeV},p_T(b_2)>15\text{ GeV}`$ (24) $`|\eta (b_{1,2})|<2`$ (25) $`\varphi (b_1,E\text{/}_T),\varphi (b_2,E\text{/}_T)>0.5\text{ radians}`$ (26) $`H_T\equiv \sum E_T(j)<175\text{ GeV}`$ (27) $`E\text{/}_T>35\text{ GeV}`$ (28) $`70\text{ GeV}<m_{bb}<110\text{ GeV}\text{ (loose cut)}`$ (29) $`80\text{ GeV}<m_{bb}<100\text{ GeV}\text{ (tight cut)}`$ (30) $`Z\to b\overline{b}\text{ efficiency}=0.49\text{ (70\% for each }b)`$ (31) The cut on $`\varphi (b,E\text{/}_T)`$ ensures that the missing energy does not originate from a grossly mismeasured $`b`$-jet, which may, for example, have neutrino(s) carrying away much of its energy. We will also present results based on the assumption of “loose” $`m_{bb}`$ invariant mass resolution, and of “tight” $`m_{bb}`$ invariant mass resolution, as indicated above. The $`b\overline{b}+E\text{/}_T`$ total background after all cuts are applied is $`51.1\text{ fb}`$ for the “loose” $`m_{bb}`$ resolution, and $`32.3\text{ fb}`$ for the “tight” $`m_{bb}`$ resolution . These background totals include contributions from $`ZZ`$, $`WZ`$, $`Zb\overline{b}`$, $`Wb\overline{b}`$, single top, and $`t\overline{t}`$ production. To apply these background studies to the present invisibly-decaying Higgs boson situation, we simulate the signal with the same kinematic cuts and efficiency parameters. Our simulation is at the parton level, and so we must further take into account realistic $`b`$-jet energy corrections and jet reconstruction. Also, $`H_T`$ is simply the sum of the two $`b`$-jet energies in our parton-level computations, but in the analysis of ref. it includes a sum over other jets as well. To take these factors into account, we can take advantage of the fact that for $`m_H=m_Z`$ the two signals are exactly the same except for the known effects of the branching fractions $`H,Z\to b\overline{b}`$ and $`Z\to \ell ^+\ell ^{-}`$. Therefore we normalize our total efficiency for the $`m_{H_{\mathrm{inv}}}=m_Z`$ case to be equal to the efficiency found in for the $`m_{H_{\mathrm{SM}}}=m_Z`$ case. Since our signal always has $`Z\to b\overline{b}`$, we can apply this overall normalization efficiency factor for all values of $`m_{H_{\mathrm{inv}}}`$ with little error. A dedicated analysis of $`b\overline{b}`$ efficiencies as a function of $`m_{H_{\mathrm{inv}}}`$ would likely indicate a slight increase in efficiency, since the $`Z`$ boson $`p_T`$, and therefore the average $`b`$-jet $`p_T`$ values, increase as $`m_{H_{\mathrm{inv}}}`$ increases. Furthermore, the missing transverse energy will systematically increase with $`m_{H_{\mathrm{inv}}}`$, allowing events to pass the missing energy cut with less sensitivity to $`b`$-jet energy fluctuations around their intrinsic parton values. It is quite possible that the significance can be increased somewhat by raising the $`E\text{/}_T`$ cut to take advantage of this. We therefore conclude that our approach is justified, and perhaps yields slightly too pessimistic results.
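The exclusion luminosities quoted below follow from inverting the significance definition; here is a sketch assuming pure counting statistics. The signal cross section used is a placeholder; only the 32.3 fb “tight” background is taken from the text.

```python
def lumi_required(s_fb, b_fb, z):
    """Integrated luminosity (fb^-1) at which S/sqrt(B) reaches z,
    given signal and background cross sections in fb."""
    return z**2 * b_fb / s_fb**2

s, b = 3.0, 32.3   # fb; s is assumed, b is the quoted "tight" background
for z, label in [(1.96, "95% CL exclusion"), (3.0, "3 sigma"), (5.0, "5 sigma")]:
    print(f"{label}: {lumi_required(s, b, z):5.1f} fb^-1")
```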
In Table 3 we list the signal cross-section after cuts and efficiencies, and the significance, for the $`b\overline{b}+E\text{/}_T`$ signal. The last column is the required luminosity to make a 95% exclusion of the invisibly-decaying Higgs boson based upon the $`b\overline{b}+E\text{/}_T`$ final state and the “tight” $`m_{bb}`$ invariant mass resolution. With $`30\text{ fb}^{-1}`$, $`m_{H_{\mathrm{inv}}}\lesssim 115`$ GeV could be excluded. With the same luminosity, a $`3\sigma `$ observation could be made for $`m_{H_{\mathrm{inv}}}\lesssim 100`$ GeV; however, most or all of this region will likely be probed earlier by the CERN LEP II $`e^+e^{-}`$ collider. We can clearly see that at the present time the significance of this channel in discovering the light invisible Higgs boson is not as high as in the $`l^+l^{-}+E\text{/}_T`$ channel. Nevertheless, $`b\overline{b}+E\text{/}_T`$ could be a useful channel to combine with $`l^+l^{-}+E\text{/}_T`$ to investigate exclusion ranges, and also to obtain confirmation of an observed signal if an excess were to develop. ## 5 Conclusion In summary, there are many reasonable theoretical ideas which lead to a light Higgs boson that most often decays invisibly. Several of these ideas, including Higgs decays to Majorons or right-handed neutrinos, are made possible by mechanisms which generate neutrino masses. Thus, our ignorance of neutrino mass generation is correlated with our ignorance of how likely Higgs bosons are to decay invisibly. Experimentally, no theoretical prejudices should prevent the search for this possibility. This is especially important at the Tevatron, since low mass Higgs bosons have very weak SM couplings, and so any non-standard coupling of the Higgs boson to other particles will likely garner a significant branching fraction, perhaps even near $`100\%`$. The experimental search for an invisible Higgs boson at the Tevatron requires the non-SM search strategies outlined in the previous sections. With $`30\text{ fb}^{-1}`$ one could observe (at $`3\sigma `$) an invisible Higgs boson with mass up to approximately $`125`$ GeV in the $`l^+l^{-}+E\text{/}_T`$ channel and up to $`100`$ GeV in the $`b\overline{b}+E\text{/}_T`$ channel. It should be noted that the presence or absence of an excess in these channels will require a knowledge of backgrounds which come primarily from $`ZZ`$ and $`WW`$. The total rates for these processes will be difficult to model with great accuracy. However, they can be measured directly by observation of other final states, e.g. $`p\overline{p}\to ZZ\to \ell ^+\ell ^{-}b\overline{b}`$ and the rarer but clean $`p\overline{p}\to ZZ\to \ell ^+\ell ^{-}\ell ^+\ell ^{-}`$, as well as $`\ell ^+\ell ^{-}+E\text{/}_T`$ events with lower $`E\text{/}_T`$ . The fact that these backgrounds will need to be well-understood is a general feature of Higgs boson searches, and is not strictly limited to the invisibly-decaying Higgs boson search. The current bounds on an invisibly-decaying Higgs allow for a very interesting window to be explored at the Tevatron. At LEP II with $`\sqrt{s}=205`$ GeV, discovery should reach up to a mass of at least $`95`$ GeV . At the LHC, the discovery reach may be as high as $`150`$ GeV in the gauge process $`pp\to Z^{*}\to ZH_{\mathrm{inv}}`$ , or $`250`$ GeV in the Yukawa process $`pp\to t\overline{t}H_{\mathrm{inv}}`$ . The current published limit is $`80`$ GeV from the $`\sqrt{s}=184`$ GeV run at LEP II.
Higgs bosons with mass much above about $`150`$ GeV are not likely to be completely invisible since SM couplings to the EWSB Higgs boson exist which are $`𝒪(1)`$ in strength, and thus lead to visible decay modes. Therefore, an opportunity exists for a high-luminosity Tevatron to discover or exclude the invisibly-decaying Higgs boson in the low mass region, which is the most likely place where an invisible Higgs boson would reside. Acknowledgements: We thank D. Hedin and A. Pilaftsis for helpful discussions.
Recently at LEP the L3 and OPAL Collaborations have produced new results on the total cross section $`\sigma (e^+e^{-}\to e^+e^{-}\mathrm{hadrons})`$ for several values of the $`e^+e^{-}`$ center of mass energy, up to $`\sqrt{s}=183`$ GeV \[1-3\]. The analysis of these data makes it possible to isolate the two-photon cross section $`\sigma (\gamma \gamma \to \mathrm{hadrons})`$, which has been obtained in the range $`5\le W_{\gamma \gamma }\le 145`$ GeV. The lowest energy data , which correspond to $`5\le W_{\gamma \gamma }\le 75`$ GeV, revealed experimentally for the first time a rise of the two-photon cross section. This increasing energy dependence has been confirmed over the full energy range, as seen in Fig. 1, showing that the photon behaves pretty much like a hadron. Increasing total cross sections were first predicted nearly thirty years ago entirely on theoretical grounds based on quantum field theory. Indeed, one of the starting points of that theoretical work was $`\gamma \gamma `$ scattering; see for example . One of the results of this theory, sometimes referred to as the impact picture, is that there is a universal increase of all total cross sections at very high energies ; see Eq.(1) below. It is the purpose of this letter to return to the root of the theory and apply it for comparison with the experimental data on the $`\gamma \gamma `$ total cross section $`\sigma _{tot}^{\gamma \gamma }`$. In view of the recent experimental results \[1-3\], several theoretical attempts have been made to explain the behavior of $`\sigma _{tot}^{\gamma \gamma }`$. This energy behavior can be described by a Regge-type parametrization based on the exchange of Regge trajectories and the Pomeron in the $`t`$-channel, which was used for all hadron total cross sections . In early impact-picture predictions , a simple $`s`$ dependence $`s^{0.08}`$ was first obtained and later extensively used by several authors, e.g. Ref. . However, a best fit to the LEP data by the L3 collaboration gives a higher power value . The Dual Parton Model with a unitarization constraint gives a faster rise of the cross section, as does the model of $`\gamma \gamma `$ scattering in which the total cross section receives contributions from three event classes: VDM, direct and anomalous processes . In the framework of eikonalized amplitudes, a mini-jet model explains the increase of the cross section through the rise of jet cross sections, while a model using vector dominance and an eikonalized form of the quark and gluon interactions reproduces the energy dependence of the total cross sections for $`pp`$, $`\overline{p}p`$, $`\gamma p`$ and $`\gamma \gamma `$ . Returning to the impact picture, which is the basis of the present considerations, we have learned that the effective interaction strength increases with energy in the form $$\frac{s^{1+c}}{(\mathrm{ln}s)^{c^{\prime }}},$$ (1) a simple expression in terms of two key parameters $`c`$ and $`c^{\prime }`$. It should be emphasized that these two parameters are independent of the scattering process under consideration, i.e., the increase in the total cross section, for example, is universal. The scattering amplitude from the impact picture is given by $$a^N(s,t)=is\int _0^{\mathrm{\infty }}J_0(b\sqrt{-t})(1-e^{-\mathrm{\Omega }(s,b)})bdb,$$ (2) where $$\mathrm{\Omega }(s,b)=S_0(s)F(b^2)+\mathrm{\Omega }_R(s,b),$$ (3) $`\mathrm{\Omega }_R(s,b)`$ being a Regge background which allows one to use the model at rather low energy.
The energy dependence is given by the crossing symmetric version of Eq.(1), $$S_0(s)=\frac{s^c}{(\mathrm{ln}s)^{c^{\prime }}}+\frac{u^c}{(\mathrm{ln}u)^{c^{\prime }}},$$ (4) where $`s`$ and $`u`$ are the Mandelstam variables. The $`t`$-dependence of $`a^N(s,t)`$ is controlled by $`F(b^2)`$, whose Fourier transform is taken to be $$\stackrel{~}{F}(t)=f[G(t)]^2[(a^2+t)/(a^2-t)],$$ (5) where $`G(t)`$ is given by $$G(t)=\frac{1}{(1-t/m_1^2)(1-t/m_2^2)}.$$ (6) It describes successfully $`\overline{p}p`$ and $`pp`$ elastic scattering up to ISR energies, including the total and differential cross sections, the polarization, and the forward real part of the amplitude, and a systematic study of the experimental data available up to 1979 led to the values $$c=0.167,\quad c^{\prime }=0.748.$$ (7) Its predictions at very high energy are in excellent agreement with the data from the CERN SPS collider and the FNAL Tevatron, as we recall for total cross sections in the bottom part of Fig. 1; some other predictions, at several $`\mathrm{TeV}`$, remain to be checked at the Large Hadron Collider under construction at CERN. For hadron-hadron processes at high energies, the physical picture is such that each hadron appears as a black disk with a gray fringe, where the black disk radius increases as $`\mathrm{ln}s`$. So far we have encountered the expanding proton in $`\overline{p}p`$ and $`pp`$ scattering, but also recently in a very different experimental situation, namely in $`\gamma p`$ scattering at HERA . It can be shown that the energy rise observed in the $`\gamma p`$ total cross section, for center of mass energies up to $`\sqrt{s}=180`$ GeV, is entirely consistent with the theory of expanding protons , and this is also shown in the middle part of Fig. 1. Similarly, since the parameters $`c`$ and $`c^{\prime }`$, which control the increase of the total cross sections, are given by Eq.(7) and are universal for all scattering processes, we expect the high-energy behavior of $`\sigma _{tot}^{\gamma \gamma }`$ to follow approximately what we have obtained for $`\sigma _{tot}^{pp}`$, the $`pp`$ total cross section. Accordingly, a simple way to obtain $`\sigma _{tot}^{\gamma \gamma }`$ is to use the following approximate relationship $$\sigma _{tot}^{\gamma \gamma }(W_{\gamma \gamma })=A\sigma _{tot}^{pp}(\sqrt{s}),$$ (8) where A is a normalization constant and $`\sqrt{s}`$ is the $`pp`$ center of mass energy. Since there seems to be a normalization discrepancy between L3 and OPAL, an accurate determination of this constant is not needed for comparison with experiments. We have found that to get the best agreement with the L3 data, $`A_L=8.5\times 10^{-6}`$ is required, and the use of Eq. (2) (see also ) yields the $`\sigma _{tot}^{\gamma \gamma }`$ shown as the solid curve in the top part of Fig. 1. Another possible choice, which agrees with the OPAL data, is $`A_O=10^{-5}`$; the corresponding dotted curve is also shown in Fig. 1. We also display in Fig. 1 our predictions over a much higher energy range. These predictions may be checked at a future $`e^+e^{-}`$ linear collider. The success of our previous predictions for the $`pp`$ total cross section gives confidence in the present one for the $`\gamma \gamma `$ total cross section. Therefore this universal energy rise, presented in Fig. 1 for three different reactions, is one of the properties of the impact-picture approach which is once again verified by experiment. We expect that more accurate data and possible access to higher energy domains will strengthen the validity of these predictions.
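To show how the eikonal formulas above translate into a rising total cross section, here is a toy Python evaluation. Only the universal parameters $`c`$ and $`c^{\prime }`$ of Eq. (7) are taken from the text; the Gaussian profile used in place of the $`F(b^2)`$ of Eqs. (5)-(6), its slope, the overall strength, and the standard eikonal normalization $`\sigma _{tot}=4\pi \int _0^{\mathrm{\infty }}(1-e^{-\mathrm{\Omega }})bdb`$ are all assumptions of this sketch.

```python
import numpy as np
from scipy.integrate import quad

c, cp = 0.167, 0.748   # universal parameters of Eq. (7)
norm = 0.05            # assumed overall strength of S0 (not a fitted value)
beta = 1.0             # assumed slope (GeV^-2) of a Gaussian stand-in profile

def S0(s):
    # High-energy part of Eq. (4); the u-channel term is dropped since we
    # only want the energy trend at t = 0.
    return norm * s**c / np.log(s)**cp

def sigma_tot_mb(s):
    # Standard eikonal formula with Omega(s,b) = S0(s) * exp(-b^2/(4*beta))
    # standing in for the profile F(b^2) of Eq. (5).
    integrand = lambda b: (1.0 - np.exp(-S0(s) * np.exp(-b**2 / (4*beta)))) * b
    val, _ = quad(integrand, 0.0, 50.0)
    return 4.0 * np.pi * val * 0.3894   # convert GeV^-2 to mb

for rts in (20.0, 200.0, 2000.0):
    print(f"sqrt(s) = {rts:6.0f} GeV:  sigma_tot ~ {sigma_tot_mb(rts**2):.3f} mb (toy)")
```

Scaling such a curve by the constant $`A`$ of Eq. (8) then gives the corresponding $`\gamma \gamma `$ prediction.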
We obtained useful information from Guy Coignet and Maria Kienzle on the L3 data and from Stefan Söldner-Rembold on the OPAL data, and we thank them all. One of us (TTW) is very grateful for hospitality at the CERN Theory Division. This work was supported in part by the US Department of Energy under Grant DE-FG02-84ER40158.
# Measurement of the Spectroscopy of Orbitally Excited B Mesons with the L3 detector ## I Introduction Detailed understanding of the resonant structure of orbitally excited B mesons provides important information regarding the underlying theory. A symmetry (Heavy Quark Symmetry) arises from the fact that the mass of the $`b`$ quark is large relative to $`\mathrm{\Lambda }_{\mathrm{QCD}}`$. In this approximation, the spin of the heavy quark ($`\vec{s}_Q`$) is conserved independently of the total angular momentum ($`\vec{j}_q=\vec{s}_q+\vec{l}`$) of the light quark. Corrections to this symmetry form a series expansion in powers of $`1/m_Q`$, calculable in Heavy Quark Effective Theory (HQET). The $`L=0`$ mesons, for which $`j_q=1/2`$, have two possible spin states: a pseudo-scalar $`P`$ ($`J^P=0^{}`$) and a vector $`V`$ ($`J^P=1^{}`$). If the spin of the heavy quark is conserved independently, the relative production rate of these states is $`V/(V+P)=0.75`$. (Corrections due to the decay of higher excited states are predicted to be small.) Recent measurements of this rate for the $`\mathrm{B}`$ system agree well with this ratio. In the case of orbitally excited $`L=1`$ mesons, two sets of degenerate doublets are expected: one corresponding to $`j_q=1/2`$ and the other to $`j_q=3/2`$. Their relative production rates follow from spin state counting ($`2J+1`$ states). Rules for the decay of these states to the $`1S`$ states are determined by spin-parity conservation . For the dominant two-body decays, the $`j_q=1/2`$ states can decay via an $`L=0`$ transition (S-wave), and their decay widths are expected to be broad in comparison to those of the $`j_q=3/2`$ states, which must decay via an $`L=2`$ transition (D-wave). Table I presents the nomenclature of the various spin states for $`L=1`$ $`\mathrm{B}`$ mesons containing either a $`u`$ or $`d`$ quark, with the predicted production rates and two-body decay modes. Several models, based on HQET and on the charmed $`L=1`$ meson data, have made predictions for the masses and widths of orbitally excited $`\mathrm{B}`$ mesons. Some of these models place the average mass of the $`j_q=3/2`$ states above that of the $`j_q=1/2`$ states, while others predict the opposite (“spin-orbit inversion”). Recent analyses at LEP combining a charged pion produced at the primary event vertex with an inclusively reconstructed $`\mathrm{B}`$ meson have measured an average mass of $`M_{\mathrm{B}^{**}}=5700–5730\mathrm{MeV}`$, where $`\mathrm{B}^{**}`$ indicates a mixture of all $`L=1`$ spin states. An analysis combining a primary charged pion with a fully reconstructed $`\mathrm{B}`$ meson measures $`M_{\mathrm{B}_2^{*}}=(5739_{-11}^{+8}(\mathrm{stat})_{-4}^{+6}(\mathrm{syst}))\mathrm{MeV}`$ by performing a fit to the mass spectrum which fixes the mass differences, widths and relative rates of all spin states according to the predictions of Eichten et al. . The analysis presented here is based on the combination of primary charged pions with inclusively reconstructed $`\mathrm{B}`$ mesons. Several new analysis techniques make it possible to improve the resolution of the $`\mathrm{B}\pi `$ mass spectrum and to unfold this resolution from the signal components.
As a result, measurements are obtained for the masses and widths of D-wave $`\mathrm{B}_2^{*}`$ decays and of S-wave $`\mathrm{B}_1^{*}`$ decays. ## II Event Selection ### A Selection of $`\mathrm{Z}\to b\overline{b}`$ decays The analysis is performed on data collected by the L3 detector in 1994 and 1995, corresponding to an integrated luminosity of $`90\mathrm{pb}^{-1}`$ with LEP operating at the Z mass. Hadronic Z decays are selected which have an event thrust direction satisfying $`|\mathrm{cos}\theta |<0.74`$, where $`\theta `$ is the polar angle. The events are also required to contain an event primary vertex reconstructed in three dimensions, at least two calorimetric jets, each with energy greater than $`10\mathrm{GeV}`$, and to pass stringent detector quality criteria for the vertexing, tracking and calorimetry. A total of $`1,248,350`$ events are selected. A cut on a $`\mathrm{Z}\to b\overline{b}`$ event discriminant based on track DCA significances yields a $`b`$-enriched sample of $`176,980`$ events. To study the content of the selected data, a sample of 6 million hadronic Z decays has been generated with JETSET 7.4 and passed through a GEANT-based simulation of the L3 detector. From this sample, the $`\mathrm{Z}\to b\overline{b}`$ event purity is determined to be $`\pi _{b\overline{b}}=0.828`$. ### B Selection of $`\mathrm{B}^{**}\to \mathrm{B}^{(*)}\pi `$ decays Secondary decay vertices and primary event vertices are reconstructed in three dimensions by an iterative procedure such that a track can be a constituent of no more than one of the vertices. A calorimetric jet is selected as a $`\mathrm{B}`$ candidate if it is one of the two most energetic jets in the event, if a secondary decay vertex has been reconstructed from tracks associated with that jet, and if the decay length of that vertex with respect to the event primary vertex is greater than $`3\sigma `$, where $`\sigma `$ is the estimated error of the measurement. The decay of a $`\mathrm{B}^{**}`$ to a $`\mathrm{B}^{(*)}`$ meson and a pion proceeds via the strong interaction and thus occurs at the primary event vertex. In addition, the predicted masses for the $`L=1`$ states correspond to relatively small $`Q`$ values, so that the decay pion ($`\pi ^{**}`$) direction is forward with respect to the $`\mathrm{B}`$ meson direction. We take advantage of these decay kinematics by requiring that, for each $`\mathrm{B}`$ meson candidate, there is at least one track which is a constituent of the event primary vertex and which is located within 90 degrees of the jet axis. A total of $`60,205`$ track-jet pairs satisfy these criteria. To decrease background, typically due to charged fragmentation particles, only the track with the largest component of momentum in the direction of the jet is selected. This choice has been found to improve the purity of the signal. The track is further required to have a transverse momentum with respect to the jet axis larger than $`100\mathrm{MeV}`$, to reduce the background from charged pions in $`\mathrm{D}^{*}\to \mathrm{D}\pi `$ decays. These selection criteria are satisfied by $`48,022`$ $`\mathrm{B}\pi `$ pairs with a $`b`$ hadron purity of $`\pi _\mathrm{B}=0.942`$. #### 1 B meson direction reconstruction The direction of the $`\mathrm{B}`$ candidate is estimated by taking a weighted average in the $`\theta `$ (polar) and $`\varphi `$ (azimuthal) coordinates of directions defined by the vertices and by particles with a high rapidity relative to the jet axis.
A numerical error-propagation method makes it possible to obtain accurate estimates for the uncertainty of the angular coordinates measured from vertex pairs. These errors, as well as the error for the decay length measurement used in the secondary vertex selection, are calculated for each pair of vertices from the associated error matrices. Particles coming from the decay of $`b`$ hadrons produced in Z decays have a characteristically high rapidity relative to the original direction of the hadron when compared to that of particles coming from fragmentation. A cut on the particle rapidity distribution is thus a powerful tool for selecting the $`\mathrm{B}`$ meson decay constituents. A second estimate for the direction of the $`\mathrm{B}`$ is obtained by summing the momenta of all charged and neutral particles (excluding the $`\pi ^{**}`$ candidate) with rapidity $`y>1.6`$ relative to the original jet axis. Estimates for the uncertainty of the coordinates obtained by this method are determined from simulated $`\mathrm{B}`$ meson decays as an average value for all events. The final $`\mathrm{B}`$ direction coordinates are taken as the error-weighted averages of these two sets of coordinates. The resolution for each coordinate is parametrized by a two-Gaussian fit to the difference between the reconstructed and generated values. For $`\theta `$, the two widths are $`\sigma _1=18\mathrm{mrad}`$ and $`\sigma _2=34\mathrm{mrad}`$ with $`68\%`$ of the $`\mathrm{B}`$ mesons in the first Gaussian. For $`\varphi `$, the two widths are $`\sigma _1=12\mathrm{mrad}`$ and $`\sigma _2=34\mathrm{mrad}`$ with $`62\%`$ of the $`\mathrm{B}`$ mesons in the first Gaussian. #### 2 B meson energy reconstruction The energy of the $`\mathrm{B}`$ meson candidate is estimated by taking advantage of the known center of mass energy at LEP to constrain the measured value. The energy of the $`\mathrm{B}`$ meson from this method can be expressed as $$E_\mathrm{B}=\frac{M_\mathrm{Z}^2+M_\mathrm{B}^2-M_{\mathrm{recoil}}^2}{2M_\mathrm{Z}},$$ (1) where $`M_\mathrm{Z}`$ is the mass of the Z boson and $`M_{\mathrm{recoil}}`$ is the mass of all particles in the event other than the $`\mathrm{B}`$. To determine $`M_{\mathrm{recoil}}`$, the energy and momenta of all particles in the event with rapidity $`y<1.6`$, including the $`\pi ^{**}`$ candidate (regardless of its rapidity), are summed and $`M_{\mathrm{recoil}}^2=E_{y<1.6}^2-p_{y<1.6}^2`$. Fitting the difference between reconstructed and generated values for the B meson energy with an asymmetric Gaussian yields a maximum width of $`2.8\mathrm{GeV}`$. ## III Analysis of the $`𝐁\pi `$ Mass Spectrum The combined $`\mathrm{B}\pi `$ mass is defined as $$M_{\mathrm{B}\pi }=\sqrt{M_\mathrm{B}^2+m_\pi ^2+2E_\mathrm{B}E_\pi -2p_\mathrm{B}p_\pi \mathrm{cos}\alpha },$$ (2) where $`M_\mathrm{B}`$ and $`m_\pi `$ are set to $`5279\mathrm{MeV}`$ and $`139.6\mathrm{MeV}`$, respectively, and $`\alpha `$ is the measured angle between the $`\mathrm{B}`$ meson and the $`\pi ^{**}`$ candidate. The data mass spectrum is shown in Figure 1.a together with the expected Monte Carlo background.
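The two reconstruction formulas above, Eqs. (1) and (2), can be combined as in the following sketch; the kinematic input values are invented for illustration and are not taken from the data.

```python
import math

M_Z, M_B, M_PI = 91.187, 5.279, 0.1396   # masses in GeV

def e_b_from_recoil(m_recoil):
    """Eq. (1): beam-energy-constrained B energy from the recoil mass (GeV)."""
    return (M_Z**2 + M_B**2 - m_recoil**2) / (2.0 * M_Z)

def m_b_pi(e_b, e_pi, cos_alpha):
    """Eq. (2): combined B-pi mass, with momenta fixed by the assumed masses."""
    p_b = math.sqrt(e_b**2 - M_B**2)
    p_pi = math.sqrt(e_pi**2 - M_PI**2)
    return math.sqrt(M_B**2 + M_PI**2 + 2*e_b*e_pi - 2*p_b*p_pi*cos_alpha)

e_b = e_b_from_recoil(55.0)   # an assumed recoil mass of 55 GeV
print(f"E_B = {e_b:.2f} GeV,  M(B pi) = {1e3*m_b_pi(e_b, 2.0, 0.995):.0f} MeV")
```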
### A Background function The background distribution is estimated from the Monte Carlo data sample, excluding $`\mathrm{B}^{**}\to \mathrm{B}^{(*)}\pi `$ decays, and fitted with a six-parameter threshold function given by $$p_1\times (x-p_2)^{p_3}\times e^{(p_4\times (x-p_2)+p_5\times (x-p_2)^2+p_6\times (x-p_2)^3)}.$$ (3) Parameters $`p_2`$ through $`p_6`$ are fixed to the shape of the simulated background, while the overall normalization factor $`p_1`$ is allowed to float freely in order to obtain a correct estimate of the contribution of the background to the statistical error of the signal. ### B Signal function To examine the underlying structure of the signal, it is necessary to unfold effects due to detector resolution. The $`\pi ^{**}`$ candidates are expected to have typical momenta of a few $`\mathrm{GeV}`$. In this range, the single track momentum resolution is no more than a few percent, with an angular resolution better than $`2\mathrm{mrad}`$. The dominant sources of uncertainty for the mass measurement are thus the B meson angular and energy resolutions. Monte Carlo studies confirm that these two components are dominant and roughly equal in magnitude. This analysis thus concentrates on unfolding the effects of these components by parametrizing and removing their contribution to the mass resolution. #### 1 Signal resolution and efficiency The dependence of the $`\mathrm{B}\pi `$ mass resolution and selection efficiency on the $`Q`$ value is studied by generating signal events at several different values of the $`\mathrm{B}^{**}`$ mass and Breit-Wigner width. The simulated events are passed through the same event reconstruction and selection as the data. The resulting $`\mathrm{B}\pi `$ mass distributions are each fitted with a Breit-Wigner function convoluted with a Gaussian resolution (Voigt function), and the detector resolution is extracted by fixing the Breit-Wigner width to the generated value. The Gaussian width is found to increase linearly from $`20\mathrm{MeV}`$ to $`60\mathrm{MeV}`$ in the $`\mathrm{B}^{**}`$ mass range $`5.6–5.8\mathrm{GeV}`$. This increase with $`Q`$ value is mainly due to the angular component of the uncertainty, which increases as a function of the opening angle $`\alpha `$. The resolution is parametrized as a linear function of the $`\mathrm{B}^{**}`$ mass from a fit to the extracted widths. Similarly, the selection efficiency is found to increase slightly with $`Q`$ value, and the dependence is parametrized with a linear function. Agreement between data and Monte Carlo for the $`\mathrm{B}`$ meson energy and angular resolution is confirmed by analyzing $`\mathrm{B}^{*}\to \mathrm{B}\gamma `$ decays selected from the same sample of $`\mathrm{B}`$ mesons. The photon selection for this test is the same as that described in reference . A $`\mathrm{B}^{*}`$ meson decays electromagnetically and hence has a negligible decay width compared to the detector resolution. As in the case of the $`\mathrm{B}\pi `$ mass resolution, the $`\mathrm{B}`$ meson energy and angular resolution are the dominant components of the reconstructed $`\mathrm{B}\gamma `$ mass resolution. Fits to the $`M_{\mathrm{B}\gamma }-M_\mathrm{B}`$ spectra are performed with the combination of a Gaussian signal and the background function described above. For the Monte Carlo, the Gaussian mean value is found to be $`M_{\mathrm{B}\gamma }-M_\mathrm{B}=(46.5\pm 0.6(\mathrm{stat}))\mathrm{MeV}`$ with a width of $`\sigma =(11.1\pm 0.7(\mathrm{stat}))\mathrm{MeV}`$. The input generator mass difference is $`46.0\mathrm{MeV}`$.
For the data, the Gaussian mean value is found to be $`M_{\mathrm{B}\gamma }-M_\mathrm{B}=(45.1\pm 0.6(\mathrm{stat}))\mathrm{MeV}`$ with a width of $`\sigma =(10.7\pm 0.6(\mathrm{stat}))\mathrm{MeV}`$. Good agreement between the widths of the data and Monte Carlo signals provides confidence that the $`\mathrm{B}`$ energy and angular resolution are well understood and simulated.

#### 2 Combined signal

According to spin-parity rules, five mass resonances are expected, corresponding to five possible $`\mathrm{B}^{**}`$ decay modes: $`\mathrm{B}_2^*\rightarrow \mathrm{B}\pi `$, $`\mathrm{B}_2^*\rightarrow \mathrm{B}^*\pi `$, $`\mathrm{B}_1\rightarrow \mathrm{B}^*\pi `$, $`\mathrm{B}_1^*\rightarrow \mathrm{B}^*\pi `$ and $`\mathrm{B}_0^*\rightarrow \mathrm{B}\pi `$. No attempt is made to tag subsequent $`\mathrm{B}^*\rightarrow \mathrm{B}\gamma `$ decays, as the efficiency for selecting the soft photon is relatively low. As a result, the effective $`\mathrm{B}\pi `$ mass for a decay to a $`\mathrm{B}^*`$ meson is shifted down by the $`46\mathrm{MeV}`$ $`\mathrm{B}^*-\mathrm{B}`$ mass difference. The five resonances are fitted with five Voigt functions, with the relative production fractions determined by spin counting rules. The Gaussian convolutions to the widths are determined by the resolution function. Additional physical constraints are applied to the mass differences and relative widths in order to obtain the most information possible from the data sample. Predictions for the mass differences $`M_{\mathrm{B}_2^*}-M_{\mathrm{B}_1}`$ and $`M_{\mathrm{B}_1^*}-M_{\mathrm{B}_0^*}`$ depend on several factors, including the $`b`$ and $`c`$ quark masses and, in some cases, input from experimental data of the D meson system. The values are predicted to be roughly equal and in the range $`5`$–$`20\mathrm{MeV}`$ . We constrain both of the mass differences to $`12\mathrm{MeV}`$. Predictions for the Breit-Wigner widths of the $`j_q=3/2`$ states are extrapolated from measurements in the D meson system and are expected to be roughly equal and about $`20`$–$`25\mathrm{MeV}`$. No precise predictions exist for the $`j_q=1/2`$ states as there are no corresponding measurements in the D system. In general, however, they are also expected to be roughly equal, although broader than those of the $`j_q=3/2`$ states. We constrain $`\mathrm{\Gamma }_{\mathrm{B}_1}=\mathrm{\Gamma }_{\mathrm{B}_2^*}`$ and $`\mathrm{\Gamma }_{\mathrm{B}_0^*}=\mathrm{\Gamma }_{\mathrm{B}_1^*}`$, but allow the widths of the $`\mathrm{B}_2^*`$ and $`\mathrm{B}_1^*`$ to float freely in the fit.

### C Fit results

Monte Carlo events for each of the expected $`\mathrm{B}^{**}`$ decays are generated and passed through the simulation and reconstruction programs and the $`\mathrm{B}\pi `$ event selection. The resulting mass spectra are combined with background and fitted with the signal and background functions under the constraints described above. Mass values and decay widths for the $`\mathrm{B}_2^*`$ and $`\mathrm{B}_1^*`$ resonances and the overall normalization are extracted from the fit and found to agree well with the generated values. All differences lie within the statistical error and have no systematic trend. The data $`\mathrm{B}\pi `$ mass spectrum is fitted with the combined signal and background functions, allowing the normalization parameters to float freely. The resulting fit, shown in Figure 1, has a $`\chi ^2`$ of $`39`$ for $`74`$ degrees of freedom.
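The combined signal can be prototyped as a weighted sum of five Voigt shapes. In the sketch below, the naive 2J+1 weights and the equal split of the B2* rate between its two modes are illustrative assumptions of ours, not the constraint set used in the fit; the 46 MeV downward shift for modes with an untagged B* follows the description above.

```python
import numpy as np
from scipy.special import voigt_profile

DM_BSTAR = 46.0  # MeV, B* - B mass difference for untagged B* -> B gamma decays

def combined_signal(x, masses, widths, sigma):
    """Sum of five Voigt resonances; masses and widths are dicts keyed by
    state name.  Weights are hypothetical 2J+1 spin-counting values."""
    xa = np.asarray(x, dtype=float)
    modes = [  # (state, weight, decays to B*?)
        ("B2*", 2.5, False),  # B2* -> B pi
        ("B2*", 2.5, True),   # B2* -> B* pi
        ("B1",  3.0, True),   # B1  -> B* pi
        ("B1*", 3.0, True),   # B1* -> B* pi
        ("B0*", 1.0, False),  # B0* -> B pi
    ]
    total = np.zeros_like(xa)
    for state, weight, to_bstar in modes:
        shift = DM_BSTAR if to_bstar else 0.0
        total += weight * voigt_profile(xa - (masses[state] - shift),
                                        sigma, widths[state] / 2.0)
    return total
```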
A total of $`2652`$ events occupy the signal region corresponding to a relative $`\mathrm{B}_{u,d}^{**}`$ production rate of $`\sigma (\mathrm{B}_{u,d}^{**})/\sigma (\mathrm{B}_{u,d})=0.39\pm 0.05(\mathrm{stat})`$. The mass and width of the $`\mathrm{B}_2^*`$ are found to be $`M_{\mathrm{B}_2^*}=(5770\pm 6(\mathrm{stat}))\mathrm{MeV}`$ and $`\mathrm{\Gamma }_{\mathrm{B}_2^*}=(21\pm 24(\mathrm{stat}))\mathrm{MeV}`$ and the mass and width of the $`\mathrm{B}_1^*`$ are found to be $`M_{\mathrm{B}_1^*}=(5675\pm 12(\mathrm{stat}))\mathrm{MeV}`$ and $`\mathrm{\Gamma }_{\mathrm{B}_1^*}=(75\pm 28(\mathrm{stat}))\mathrm{MeV}`$.

### D Systematic uncertainty

Sources of systematic uncertainty and their estimated contributions to the errors of the measured values are summarized in Table II. The $`b`$ hadron purity of the sample is varied from $`91\%`$ to $`96\%`$. The fraction of $`b`$ quarks hadronizing to $`\mathrm{B}_{u,d}`$ mesons is taken to be $`79\%`$ and is varied between $`74\%`$ and $`83\%`$ in accordance with the recommendations of the LEP $`\mathrm{B}`$ Oscillation Working Group . These variations affect only the overall $`\mathrm{B}^{**}`$ production fraction. Systematic effects due to background modelling are studied by varying the shape parameters of the background function and by performing the fit with other background functions to study the effect on the measured values. Contributions to the error due to modelling of the signal are estimated for the mass and width constraints: the $`M_{\mathrm{B}_2^*}-M_{\mathrm{B}_1}`$ and $`M_{\mathrm{B}_1^*}-M_{\mathrm{B}_0^*}`$ mass differences are varied in the range $`6`$–$`18\mathrm{MeV}`$ and the $`\mathrm{\Gamma }_{\mathrm{B}_1}/\mathrm{\Gamma }_{\mathrm{B}_2^*}`$ and $`\mathrm{\Gamma }_{\mathrm{B}_0^*}/\mathrm{\Gamma }_{\mathrm{B}_1^*}`$ ratios are varied between $`0.8`$ and $`1`$. Effects due to uncertainty in the resolution and efficiency functions are estimated by varying the slopes and offsets of the linear parametrizations. Three-body decays of the type $`\mathrm{B}_2^*\rightarrow \mathrm{B}\pi \pi `$ have been generated and passed through the simulation and reconstruction programs and the $`\mathrm{B}\pi `$ event selection. $`\mathrm{B}\pi `$ pairs, for which only one of the pions is tagged, are studied as a possible source of resonant background. The resulting reflection is found to contribute insignificantly to the background in regions of small $`Q`$ value. Similarly, generated $`\mathrm{B}_s^{**}\rightarrow \mathrm{B}\mathrm{K}`$ decays, where the $`\mathrm{K}`$ is mistaken for a $`\pi `$, are found to contribute only slightly to the low $`Q`$ value region and their effects are included in the background modelling uncertainty contribution.

## IV Conclusion

We measure for the first time the masses and decay widths of the $`\mathrm{B}_2^*`$ ($`j_q=3/2`$) and $`\mathrm{B}_1^*`$ ($`j_q=1/2`$) mesons.
From a constrained fit to the $`\mathrm{B}\pi `$ mass spectrum, we find
$`M_{\mathrm{B}_2^*}`$ $`=`$ $`(5770\pm 6(\mathrm{stat})\pm 4(\mathrm{syst}))\mathrm{MeV}`$
$`\mathrm{\Gamma }_{\mathrm{B}_2^*}`$ $`=`$ $`(23\pm 26(\mathrm{stat})\pm 15(\mathrm{syst}))\mathrm{MeV}`$
$`M_{\mathrm{B}_1^*}`$ $`=`$ $`(5675\pm 12(\mathrm{stat})\pm 4(\mathrm{syst}))\mathrm{MeV}`$
$`\mathrm{\Gamma }_{\mathrm{B}_1^*}`$ $`=`$ $`(76\pm 28(\mathrm{stat})\pm 15(\mathrm{syst}))\mathrm{MeV}.`$
The relative $`\mathrm{B}_{u,d}^{**}`$ production rate, including all $`L=1`$ spin states, is measured to be
$`{\displaystyle \frac{\mathrm{Br}(b\rightarrow \mathrm{B}_{u,d}^{**}\rightarrow \mathrm{B}^{(*)}\pi )}{\mathrm{Br}(b\rightarrow \mathrm{B}_{u,d})}}=0.39\pm 0.05(\mathrm{stat})\pm 0.06(\mathrm{syst})`$
where isospin symmetry is employed to account for decays to neutral pions.

## Acknowledgments

I wish to thank my colleagues Steven Goldfarb and Franz Muheim for their help in preparing this talk.
# Thermostating by Deterministic Scattering: Heat and Shear Flow ## I Introduction Driving macroscopic systems out of equilibrium requires external forces. Now, the very existence of a nonequilibrium steady state implies that the temperature of the system must remain time independent. One way to prevent the system from heating up indefinitely in nonequilibrium is the introduction of a thermostating algorithm . Starting from molecular dynamics simulations Evans, Hoover, Nosé and others proposed deterministic thermostats to model equilibrium and nonequilibrium fluids . In this formalism the (average) internal energy of the dynamical system is kept constant by subjecting the particles to fictitious frictional forces, thus leading to microcanonical or canonical distributions in phase space . The major feature of this mechanism is its deterministic and time-reversible character, which is in contrast to stochastic thermostats . This allows to elaborate on the connection between microscopic reversibility and macroscopic irreversibility and has led to interesting new links between statistical physics and dynamical systems theory , especially to relations between transport coefficients and Lyapunov exponents , and between entropy production and phase-space contraction . Although used almost exclusively in the context of nonequilibrium systems, the abovementioned thermostating mechanism presents the drawback that the dynamical equations themselves are altered, even in equilibrium. This raises the question of whether some of the results are due to the special nature of this thermostating formalism or are of general validity . Recently, an alternative mechanism in which thermalization is achieved in a deterministic and time-reversible way has been put forward by Klages et al. and has been applied to a periodic Lorentz gas under an external field. In the present paper we apply this thermostating method to an interacting many-particle system subjected to nonequilibrium boundary conditions giving rise to thermal conduction and to shear flow. The model is closely related to that of Chernov and Lebowitz , who study a hard disk fluid driven out of equilibrium into a steady state shear flow by applying special scattering rules at the boundaries in which the particle velocity is kept constant. Our model is introduced in Section II where the thermalization mechanism is tested under equilibrium conditions. In Section III we move on to the case of an imposed temperature gradient and a velocity field by adapting the scattering rules, and we compute the respective transport coefficients. Having a deterministic and time-reversible system at hand we proceed in Section IV to investigate the relation between thermodynamic entropy production and phase-space contraction rate in nonequilibrium stationary states. The main conclusions are drawn in Section V. ## II Equilibrium state Consider a two-dimensional system of hard discs confined in a square box of length $`L`$ with periodic boundary conditions along the x-axis, i.e., the left and right sides at $`x=\pm L/2`$ are identified. At the top and bottom sides of the box, $`y=\pm L/2`$, we introduce rigid walls where the discs are reflected according to certain rules to be defined later. The discs interact among themselves via impulsive hard collisions so that the bulk dynamics is purely conservative. 
In the following and in all the numerical computations we use reduced units by setting the particle mass $`m`$, the disk diameter $`\sigma `$ and the Boltzmann constant $`k_B`$ equal to one. Before proceeding to the nonequilibrium case we define the disc-wall collision rules in equilibrium and check whether the system is well-behaved. Now, in equilibrium the bulk distribution is Gaussian with a temperature $`T`$, and the in- and outgoing fluxes at the top and bottom wall have the form (see )
$$\mathrm{\Phi }(v_x,v_y)=(2\pi T^3)^{-1/2}|v_y|\mathrm{exp}\left(-\frac{v_x^2+v_y^2}{2T}\right),$$ (1)
with $`v_y<0`$ for the bottom wall and $`v_y>0`$ for the top wall. Imposing stochastic boundary conditions on the system in this setting would mean that for every incoming particle the outgoing velocities are chosen randomly according to Eq. (1). In practice, this is usually done by drawing numbers from two independent uniformly distributed random generators $`\zeta ,\xi \in [0,1]`$ and then transforming these numbers with the invertible map $`𝓣^{-1}:[0,1]\times [0,1]\rightarrow [0,\mathrm{\infty })\times [0,\mathrm{\infty })`$ as
$$(v_x,v_y)=𝓣^{-1}(\zeta ,\xi )=\sqrt{2T}(\text{erf}^{-1}(\zeta ),\sqrt{-\mathrm{ln}(\xi )}),$$ (2)
which amounts to transforming the uniform densities $`\rho (\zeta )=1`$ and $`\rho (\xi )=1`$ onto $`\mathrm{\Phi }(v_x,v_y)`$ according to
$$\rho (\zeta )\rho (\xi )\left|\frac{d\zeta d\xi }{dv_xdv_y}\right|=\left|\frac{\partial 𝓣(v_x,v_y)}{\partial (v_x,v_y)}\right|=(2/\pi T^3)^{1/2}|v_y|\mathrm{exp}\left(-\frac{v_x^2+v_y^2}{2T}\right).$$ (3)
Note that so far we have restricted Eqs. (2),(3) to positive velocities $`v_x,v_y\in [0,\mathrm{\infty })`$, which implies a normalization factor in Eq. (3) being different to the one of Eq. (1). In analogy to stochastic boundaries we now define the deterministic scattering at the walls as follows. First, take the incoming velocities $`v_x`$, $`v_y`$ and transform them via $`𝓣(v_x,v_y)=(\zeta ,\xi )`$ onto the unit square. Second, use a two-dimensional, invertible, phase-space conserving chaotic map $`ℳ:[0,1]\times [0,1]\rightarrow [0,1]\times [0,1]`$ to obtain $`(\zeta ',\xi ')=ℳ(\zeta ,\xi )`$. Finally, transform back to the outgoing velocities via $`(v_x',v_y')=𝓣^{-1}(\zeta ',\xi ')`$. In order to render the collision process time-reversible, we also have to distinguish between particles with positive and negative tangential velocities by using $`ℳ`$ and $`ℳ^{-1}`$, respectively. Thus, particles going in with positive (negative) velocities have to go out with positive (negative) velocities and the full collision rules read
$`(v_x',v_y')`$ $`=`$ $`𝓣^{-1}ℳ𝓣(v_x,v_y),v_x\ge 0`$ (4)
$`(v_x',v_y')`$ $`=`$ $`𝓣^{-1}ℳ^{-1}𝓣(v_x,v_y),v_x<0,`$ (5)
where $`𝓣`$ is meant to be applied to the modulus of the velocities . Since both the positive and the negative side of the tangential velocity distribution of Eq. (1) are normalized to $`1/2`$, this normalization factor has to be incorporated in Eq. (3) to render the full desired flux $`\mathrm{\Phi }`$ equivalent to the one of Eq. (1). Rewriting Eq. (3) in polar coordinates yields precisely the transformation used in , in the limiting case where it mimics a reservoir with infinitely many degrees of freedom. It should also be realized that for obtaining the transformation $`𝓣`$ the total number of degrees of freedom of the reservoir has been projected out onto a single velocity variable, which couples the bulk to the reservoir. Eckmann et al.
used a similar idea to go from a Hamiltonian reservoir with infinitely many degrees of freedom to a reduced description when modeling heat transfer via a finite chain of nonlinear oscillators. It remains to assign the form of the chaotic map $`ℳ`$ and we shall first adopt the choice of a baker map, as in Refs. ,
$$(\zeta ',\xi ')=ℳ(\zeta ,\xi )=\{\begin{array}{cc}(2\zeta ,\xi /2);\hfill & 0\le \zeta \le 1/2\hfill \\ (2\zeta -1,(\xi +1)/2);\hfill & 1/2<\zeta \le 1\hfill \end{array}.$$ (6)
Later on we will investigate the consequences of choosing other mappings like the standard map (see, e.g., ). Since in equilibrium the in- and outgoing fluxes have the same form as in Eq. (1), the baker map yields a uniform density and our scattering prescription in Eq. (4) can be viewed as a deterministic and time-reversible counterpart of stochastic boundary conditions. $`ℳ`$ being chaotic, the initial and final momentum and energy of any single particle are certainly different, but both quantities should be conserved on the average. The latter is confirmed by numerical experiments in equilibrium, where, as usual in hard disk simulations, we follow a collision-to-collision approach . Keeping the volume fraction occupied by $`N=100`$ hard disks equal to $`\rho =0.1`$ sets the length of the box equal to $`L=28.0`$. After some transient behavior which depends on the temperature of the initial configuration the bulk distribution is Gaussian with zero mean and mean kinetic energy $`T/2`$ in each direction. The in- and outgoing fluxes at the walls are correctly equipartitioned with $`T`$ as well and have the desired form of Eq. (1), so the system reproduces the correct statistical properties. We close this section by a remark on how we measure the temperature of a flux to or from the boundaries. As the temperature of the tangential component we use the variance of the velocity distribution, $`T_x:=\langle (v_x-\langle v_x\rangle _x)^2\rangle _x`$, where $`\langle \rangle _x`$ denotes an average over the density $`\rho (v_x)`$. On the other hand, since in the normal direction we actually measure a flux, the appropriate prescription to measure the temperature of this component is $`T_y:=\left[v_y\right]_y/\left[v_y^{-1}\right]_y`$, where $`[]_y`$ represents an average over the flux $`\mathrm{\Phi }`$ and the denominator serves as a normalization. The temperatures of the in- and outgoing fluxes at the wall are then defined as $`T_{i/o}:=(T_x+T_y)/2`$, and $`T_w:=(T_i+T_o)/2`$.

## III Nonequilibrium steady state

### A Heat flow

#### 1 The Model

In the following, we explicitly indicate the dependence of the transformation $`𝓣`$ on the parameter $`T`$ by writing $`𝓣_T`$. This immediately indicates how we may drive our system to thermal nonequilibrium: We just have to use different values of this parameter for the upper ($`T^u`$) and the lower wall ($`T^d`$). We deliberately avoid using the word 'temperature' for this parameter, since in contrast to stochastic boundary conditions we have generally no idea how a different $`T`$ affects the actual temperature of the wall in the sense of the definition given above. In a nonequilibrium situation the temperature of the ingoing flux $`\mathrm{\Phi }_i`$ generally does not match exactly the parameter $`T`$. Therefore, we do not transform onto the uniform invariant density of the baker map anymore, and consequently, the outgoing flux might have all kinds of shapes or temperatures.
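Putting Eqs. (2)–(6) together, both the stochastic boundary condition and one deterministic wall collision are only a few lines of code. The sketch below is ours, under stated assumptions: the symmetrization of the tangential sign in the sampler, the handling of the $`v_x=0`$ case, and the restoration of the normal-velocity sign by the caller are our own conventions, not the authors' simulation code.

```python
import numpy as np
from scipy.special import erf, erfinv

def sample_outgoing(T, rng):
    """Stochastic boundary: inverse-transform sampling of Eq. (1) via Eq. (2)."""
    zeta = rng.random()
    xi = 1.0 - rng.random()                  # in (0, 1], keeps log(xi) finite
    vx = np.sqrt(2.0 * T) * erfinv(zeta)     # positive quadrant of Eq. (2)
    vy = np.sqrt(2.0 * T) * np.sqrt(-np.log(xi))
    if rng.random() < 0.5:                   # assumed: symmetrize the tangential sign
        vx = -vx
    return vx, vy

def T_map(vx, vy, T):
    """Eq. (3): map speeds (vx, vy >= 0) onto the unit square."""
    return erf(vx / np.sqrt(2.0 * T)), np.exp(-vy**2 / (2.0 * T))

def T_inv(zeta, xi, T):
    """Eq. (2): inverse transformation back to velocities."""
    return np.sqrt(2.0 * T) * erfinv(zeta), np.sqrt(2.0 * T) * np.sqrt(-np.log(xi))

def baker(zeta, xi):
    """Baker map, Eq. (6)."""
    if zeta <= 0.5:
        return 2.0 * zeta, xi / 2.0
    return 2.0 * zeta - 1.0, (xi + 1.0) / 2.0

def baker_inv(zeta, xi):
    if xi <= 0.5:
        return zeta / 2.0, 2.0 * xi
    return (zeta + 1.0) / 2.0, 2.0 * xi - 1.0

def wall_collision(vx, vy, T):
    """Deterministic scattering, Eqs. (4)-(5), applied to the moduli;
    treating vx == 0 as positive is an arbitrary choice of ours."""
    s = 1.0 if vx >= 0.0 else -1.0
    zeta, xi = T_map(abs(vx), abs(vy), T)
    zeta, xi = baker(zeta, xi) if s > 0 else baker_inv(zeta, xi)
    vx_out, vy_out = T_inv(zeta, xi, T)
    return s * vx_out, vy_out  # caller restores the sign of vy toward the bulk
```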
Nevertheless, the hope is that the mapping $`ℳ`$ will be chaotic enough to smooth out most of the differences and to produce a reasonable outgoing flux $`\mathrm{\Phi }_o`$ such that the system is correctly thermostated. And this is indeed what we find in the numerical experiments.

#### 2 Numerical Results

We set $`T^u=2`$, $`T^d=1`$ and $`\rho =0.1`$ and average over about 40000 particle-particle collisions per particle and about 6000 particle-wall collisions per particle. We divide the available vertical height $`L-1`$ into 20 equally spaced horizontal layers and calculate the time averages of the number density $`n(y)`$ of the particles, the mean velocities $`u_x(y)=\langle v_x\rangle `$, $`u_y(y)=\langle v_y\rangle `$, and the variances $`\langle (v_x-u_x)^2\rangle `$, $`\langle (v_y-u_y)^2\rangle `$. Furthermore, we record the time average of the kinetic energy transfer and measure the temperatures of the in- and outgoing fluxes of both walls as described in the preceding section. The temperatures at the walls are then defined as the mean value of the in- and outgoing temperatures, $`T_w^{u/d}:=(T_i^{u/d}+T_o^{u/d})/2`$. Time series plots of these quantities confirm the existence of a nonequilibrium stationary state (NSS) induced by the temperature gradient. Fig. 1 shows the temperature profile between the upper and the lower wall. Apart from boundary effects it is approximately linear, and the respective kinetic energy is equipartitioned between the two degrees of freedom. The parameters $`T^u`$ and $`T^d`$ are represented as (\*), and we find a 'temperature' jump, whereas the measured temperatures $`T_w^{u/d}`$ (+) at the walls seem to continue the bulk profile reasonably well. The profile of the number density $`n=(4/\pi )\rho `$ is depicted in Fig. 2. Note again the boundary effects. The densities of the in-coming particles at the upper wall are Gaussian shaped (Figs. 3a,c), whereas the outgoing densities (Figs. 3b,d) show cusps due to the folding property of the baker map. Nevertheless, the baker map produces a reasonable outgoing flux which generates a NSS. In order to examine the bulk behavior we now compute the thermal conductivity in our computer experiment and compare it to the theoretical value. For this purpose, we measure the heat flux $`Q`$ across the boundaries and estimate the temperature gradient $`dT(y)/dy`$ by a linear least square fit to the experimental profile. To discard boundary effects we use only data in the bulk of the system, namely from layer 3 to layer 18, i.e., excluding the top two and the bottom two layers.
The experimental heat conductivity is then defined as
$$\lambda _{exp}=Q\left(\frac{dy}{dT}\right),$$ (7)
whereas the theoretical expression for the conductivity of a gas of hard disks with unit mass and unit diameter as predicted by Enskog's theory reads
$$\lambda _l=1.0292\sqrt{\frac{T}{\pi }}\left[\frac{1}{\chi }+\frac{3}{2}bn+0.8718(bn)^2\chi \right].$$ (8)
Here, $`b`$ is the second virial coefficient, $`b=\pi /2`$, and $`\chi `$ is the Enskog scaling factor, which is just the pair correlation function in contact,
$$\chi =\frac{1-\frac{7}{16}\frac{\pi }{4}n}{(1-\frac{\pi }{4}n)^2}.$$ (9)
Since (8) depends on local values of $`T`$ and $`n`$ we define the theoretical effective conductivity $`\lambda _{th}`$ as the harmonic mean over the layers ,
$$\lambda _{th}=\left(\frac{1}{N_{layers}}\underset{l=1}{\overset{N_{layers}}{\sum }}1/\lambda _l\right)^{-1}.$$ (10)
Table I compares $`\lambda _{exp}`$ to $`\lambda _{th}`$ by showing the ratio of the experimental to the theoretical conductivity for different particle numbers and temperature differences. The agreement is quite good, so our thermostating mechanism produces a NSS which is in agreement with hydrodynamics. Furthermore, going into the hydrodynamic limit by increasing the number of particles we observe that the discontinuity in the outgoing flux of Figs. 3(b,d) diminishes, as expected, since both the in- and outgoing flux come closer to local equilibrium.

### B Shear Flow

Inspired by the recently proposed model of Chernov and Lebowitz for a boundary driven planar Couette-flow in a nonequilibrium steady state, we now proceed to check whether it is possible to combine our thermostating mechanism with a positive (negative) drift imposed onto the upper (lower) wall, respectively. Chernov and Lebowitz chose a purely Hamiltonian bulk and simulated the drift at the boundaries by rotating the angle of the particle velocity at the moment of the scattering event with the wall while keeping the absolute value of the velocity constant. This setting could be formulated in a time-reversible way and keeps the total energy of the system generically constant. Here we separate the thermostating mechanism and the drift of the walls by introducing the map
$$𝒮_d(v_x,v_y)=(v_x+d,v_y),$$ (11)
and by applying this shift to the 'thermostated' velocities. Time-reversibility forces us to do the same before thermostating. Thus, the full particle-wall interaction reads
(Model I) (12)
$`(v_x',v_y')`$ $`=`$ $`𝒮_d𝓣_T^{-1}ℳ𝓣_T𝒮_d(v_x,v_y),v_x\ge d`$ (13)
$`(v_x',v_y')`$ $`=`$ $`𝒮_d𝓣_T^{-1}ℳ^{-1}𝓣_T𝒮_d(v_x,v_y),v_x<d,`$ (14)
where shifts of different sign are used for the upper (lower) wall to let the walls move into opposite directions. Other prescriptions to impose a shear will be investigated in more detail in the following section. In the simulations we set $`d=\pm 0.05`$, $`T=T^u=T^d=1.0`$, $`N=100`$ and $`\rho =0.1`$. As we expected, we find a NSS with a linear shear profile along the x-direction (Fig. 4), where the drift velocity of the wall $`u_w`$ (\*) is defined as the average between the in- and outgoing tangential velocities. The temperature profile is shown in Fig. 5, with the wall temperatures $`T_w`$ (+) defined as above. As can be seen in the plots, none of these values correspond to the parameters $`T`$ (\*) or $`d`$. Nevertheless, we obtain a linear shear profile $`u_x(y)`$ and an almost quadratic temperature profile $`T(y)`$, as predicted by hydrodynamics .
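Equations (8)–(10) are straightforward to evaluate; a minimal sketch in the reduced units of the paper:

```python
import numpy as np

def chi(n):
    """Enskog pair correlation function at contact for hard disks, Eq. (9)."""
    eta = np.pi * n / 4.0
    return (1.0 - 7.0 * eta / 16.0) / (1.0 - eta) ** 2

def lambda_layer(T, n):
    """Enskog thermal conductivity of one layer, Eq. (8)."""
    b = np.pi / 2.0
    x = chi(n)
    return 1.0292 * np.sqrt(T / np.pi) * (1.0 / x + 1.5 * b * n
                                          + 0.8718 * (b * n) ** 2 * x)

def lambda_th(temps, densities):
    """Effective conductivity as the harmonic mean over layers, Eq. (10)."""
    vals = [lambda_layer(T, n) for T, n in zip(temps, densities)]
    return len(vals) / sum(1.0 / v for v in vals)
```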
For a comparison of experimental and theoretical viscosity we follow the same procedure as above, i.e., we estimate the experimental shear rate $`du_x(y)/dy`$ by a linear least square fit $`u_x(y)=\gamma y`$, again discarding the outermost layers. Through the measured momentum transfer from wall to wall $`\mathrm{\Pi }`$ the experimental viscosity is given as
$$\eta _{exp}=\mathrm{\Pi }/\gamma ,$$ (15)
and the theoretical value as calculated by the Enskog theory has the form
$$\eta _l=1.022\frac{1}{2}\sqrt{\frac{T}{\pi }}\left[\frac{1}{\chi }+bn+0.8729(bn)^2\chi \right].$$ (16)
Again $`b=\pi /2`$ denotes the second virial coefficient, $`\chi `$ is given by Eq. (9) and we use the arithmetic mean of the viscosity over layers 3 - 18 to compute the theoretical viscosity, i.e., $`\eta _{th}=(1/N_{layers})\sum _l\eta _l`$. Table II shows good agreement between the experimental and the theoretical values for different numbers of particles, so again the thermostating mechanism leads to the correct macroscopic behavior. However, the discontinuities in the $`v_x`$-velocity distribution of the scattered particles should be noticed (see Fig. 6). This time these discontinuities do not diminish or disappear in the hydrodynamic limit.

## IV Entropy production and Phase-Space Contraction

Having a deterministic and reversible dynamics at hand we can now turn to properties beyond the usual hydrodynamic ones and, in particular, investigate the conjectured identity between phase space contraction rate and thermodynamic entropy production in the light of our formalism . In an isolated macroscopic system the entropy is a thermodynamic potential and therefore plays the central role in determining the time evolution and the final equilibrium state. Yet, its microscopic interpretation out of equilibrium remains controversial (see, e.g., ) and the situation is even much less clear in NSS. For the class of models where thermostating is ensured by friction coefficients an exact equality between entropy production and phase space contraction rate in NSS has been inferred on the basis of a global balance between the system and the reservoir . For the Chernov-Lebowitz model an approximate equality has also been found . Still, it is not clear under which circumstances this relation holds in general .

### A Equilibrium State

We begin with the simplest case of equilibrium described in section II, where the thermodynamic entropy production $`\overline{R}_{eq}`$ vanishes. The bulk dynamics being Hamiltonian, phase-space contraction can only occur during collisions with a wall. Since these collisions take place 'instantaneously', we ignore the bulk particles and restrict ourselves to the compression related to a single collision during the time interval $`dt`$. The phase space contraction is then given by the ratio of the one-particle phase-space volume after the collision $`(dx'dy'dv_x'dv_y')`$ to the one before the collision, $`(dxdydv_xdv_y)`$, and can thus be obtained from the Jacobi determinant of the scattering process. One easily sees that $`dx'=dx`$ and $`|dy'/dy|=|v_y'dt/v_ydt|`$ .
Furthermore,
$`\left|{\displaystyle \frac{dv_x'dv_y'}{dv_xdv_y}}\right|`$ $`=`$ $`\left|{\displaystyle \frac{\partial 𝓣}{\partial (v_x,v_y)}}{\displaystyle \frac{\partial ℳ^{(\pm 1)}}{\partial (\zeta ,\xi )}}{\displaystyle \frac{\partial 𝓣^{-1}}{\partial (v_x',v_y')}}\right|=\left|{\displaystyle \frac{\partial 𝓣}{\partial (v_x,v_y)}}\left[{\displaystyle \frac{\partial 𝓣}{\partial (v_x',v_y')}}\right]^{-1}\right|`$ (17)
$`=`$ $`\left|{\displaystyle \frac{v_y}{v_y'}}\right|\mathrm{exp}\left({\displaystyle \frac{v_x'^2+v_y'^2-v_x^2-v_y^2}{2T}}\right),`$ (18)
where step two follows from the phase-space conservation of $`ℳ/ℳ^{-1}`$, and the last line is obtained from Eqs. (2),(3). Hence, in a particle-wall collision the phase-space volume is changed by a factor of
$$\left|\frac{dv_x'dv_y'dx'dy'}{dv_xdv_ydxdy}\right|=\mathrm{exp}\left(\frac{v_x'^2+v_y'^2-v_x^2-v_y^2}{2T}\right).$$ (19)
The mean exponential rate of compression of the phase space volume per unit time is thus given by
$$\overline{P}=-<\mathrm{ln}\left|\frac{dv_x'dv_y'dx'dy'}{dv_xdv_ydxdy}\right|>=<v_x^2+v_y^2-v_x'^2-v_y'^2>/2T,$$ (20)
where the brackets $`<>`$ denote a time average over all collisions at the top and the bottom walls. In equilibrium the in- and outgoing fluxes associated to these collisions have the same statistical properties, so $`\overline{P}_{eq}`$ sums up to zero and
$$\overline{P}_{eq}=\overline{R}_{eq}.$$ (21)
This is fully confirmed by the simulations.

### B NSS

The thermodynamic entropy production $`\sigma `$ per unit volume of our system in NSS is given by the Onsager form
$$\sigma (y)=\frac{\mathrm{\Pi }}{T}\frac{du_x}{dy}+J(y)\frac{d}{dy}\left(\frac{1}{T}\right),$$ (22)
where $`\mathrm{\Pi }`$ is the x-momentum flux in the negative y-direction, and $`J(y)`$ is the heat flux in the positive y-direction.

#### 1 Heat Flow

Imposing only a temperature gradient on our system like in section III A, the first term in Eq. (22) is identical to zero and the total entropy production $`\overline{R}`$ in the steady state is then
$$\overline{R}=\int _{Volume}\sigma d𝐫=\int _{Surface}J/Tds=J_w^u/T_w^u+J_w^d/T_w^d$$ (23)
The right hand side of Eq. (23) is the outward entropy flux $`J_w/T_w`$ across the walls of the container. Note that there is no temperature slip at the walls with respect to the correctly defined temperature values, as indicated by the simulation results in Figs. 1 and 5. On the other hand, the exponential phase-space contraction rate now reads
$$\overline{P}=<(v_x^2+v_y^2-v_x'^2-v_y'^2)/2T_u>_u+<(v_x^2+v_y^2-v_x'^2-v_y'^2)/2T_d>_d,$$ (24)
where we averaged over the upper and the lower wall separately. Since in NSS $`J_w^u=<(v_x^2+v_y^2-v_x'^2-v_y'^2)/2>_u=-J_w^d=-<(v_x^2+v_y^2-v_x'^2-v_y'^2)/2>_d`$, the ratios of entropy production to exponential phase space contraction rate reduce to
$$\frac{\overline{R}^{u/d}}{\overline{P}^{u/d}}=\frac{T^{u/d}}{T_w^{u/d}}.$$ (25)
In the hydrodynamic limit the in- and the outgoing fluxes approach local equilibrium, implying $`T_i^{u/d}\approx T_o^{u/d}\approx T_w^{u/d}\approx T^{u/d}`$ for both walls. Therefore, the ratios in Eq. (25) should go to unity. The numerical results in Table III confirm this expectation, leading to a good agreement between entropy production and exponential phase-space contraction rate.

#### 2 Shear Flow

We follow the same procedure as in the preceding section. For a stationary shear flow the hydrodynamic entropy production $`\sigma `$ per unit volume in Eq.
(22) can be written as
$$\sigma (y)=\frac{\mathrm{\Pi }}{T}\frac{du_x}{dy}+J(y)\frac{d}{dy}\left(\frac{1}{T}\right)=\mathrm{\Pi }\frac{d}{dy}\left(\frac{u_x}{T}\right).$$ (26)
The second step in Eq. (26) follows from the fact that in NSS $`-\lambda dT/dy=J(y)=\mathrm{\Pi }u_x(y)`$. The total entropy production $`\overline{R}`$ in the steady shear flow state is then
$`\overline{R}={\displaystyle \int _{Volume}}\sigma d𝐫={\displaystyle \int _{Surface}}\mathrm{\Pi }u/Tds`$ (27)
$`=J_w/T_w=2L^2\mathrm{\Pi }\left(u_w/L\right)/T_w=L^2\mathrm{\Pi }\gamma /T_w.`$ (28)
In the macroscopic formulation of irreversible thermodynamics Eq. (27) is interpreted as an equality, in the stationary state, between the entropy produced in the interior and the entropy flow carried across the walls. Our shift map $`𝒮_d`$ mimics moving walls with drift velocities $`\pm u_w`$. The work performed at these walls is converted by the viscous bulk into heat and then again absorbed by the walls which now act as infinite thermal reservoirs. By imagining that the walls act as an 'equilibrium' thermal bath at temperature $`T_w`$, $`\overline{R}`$ can be interpreted as their entropy increase rate. For Model I (Eqs. (13),(14)) the mean exponential phase-space contraction rate takes the form ($`|\partial 𝒮_d/\partial (v_x,v_y)|\equiv 1`$)
$$\overline{P}=<v_x^2+v_y^2-v_x'^2-v_y'^2-2d(v_x'+v_x)>/2T,$$ (29)
whereas the entropy production is given by
$$J_w/T_w=[<v_x^2+v_y^2-v_x'^2-v_y'^2>-<v_x'>^2+<v_x>^2]/2T_w.$$ (30)
In Table IV the ratios of $`L^2\mathrm{\Pi }\gamma /J_w`$ as obtained from the simulations are reported and the relation of phase-space contraction rate to entropy production is subsequently checked. Whereas the equality between entropy production and entropy flow via heat transfer is confirmed, we observe a significant difference between entropy production and phase-space contraction which subsists in the hydrodynamic limit. This mismatch is perhaps not so unexpected in view of the distorted outgoing fluxes (see Fig. 6). Nevertheless, one could argue that the fine structure of these distributions may depend on the specific characteristics of the baker map chosen to model the collision process. We therefore also considered the standard map
$$\stackrel{~}{ℳ}:\{\begin{array}{c}\xi '=\xi -\frac{k}{2\pi }\mathrm{sin}(2\pi \zeta ),\hfill \\ \zeta '=\zeta +\xi ',\hfill \end{array}$$ (31)
with the parameter $`k=100`$ to ensure that we are in the hyperbolic regime . We found that the discrepancy between entropy production and phase-space contraction rate remains: although the deterministic map now seems to be chaotic enough to smooth out the fine structure of the outgoing densities, the discontinuity at $`d`$ survives. Actually, as long as Model I is adopted it becomes clear that in NSS there will always be more out- than ingoing particles with $`v_x\ge d`$ at the upper wall (and with $`v_x\le -d`$ at the lower wall). Thus, the Gaussian halves in Fig. 7 (b) will never match to a full Gaussian even in the hydrodynamic limit, and $`\mathrm{\Phi }_i`$ and $`\mathrm{\Phi }_o`$ will never come close to a local equilibrium.
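For bookkeeping in a simulation, the per-collision contribution to $`\overline{P}`$ is just an energy difference; the sketch below uses the signs as written in Eq. (29) above (our reconstruction), with $`d`$ carrying the wall's own sign, and reduces to the equilibrium expression of Eq. (20) for $`d=0`$.

```python
def compression_model_I(v_in, v_out, d, T):
    """Per-collision contribution to P-bar for Model I, Eq. (29);
    d is the drift parameter of the wall where the collision occurs."""
    vx, vy = v_in
    vxp, vyp = v_out
    return (vx**2 + vy**2 - vxp**2 - vyp**2 - 2.0 * d * (vxp + vx)) / (2.0 * T)
```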
To circumvent this problem we modify the map $`𝓣`$ and investigate the following model:
(Model II) (32)
$`𝓣_\pm (v_x,v_y)`$ $`=`$ $`\left({\displaystyle \frac{\text{erf}\left[(|v_x|\mp d)/\sqrt{2T}\right]\pm \text{erf}(d/\sqrt{2T})}{1\pm \text{erf}(d/\sqrt{2T})}},\mathrm{exp}(-v_y^2/2T)\right)`$ (33)
$`(v_x',v_y')`$ $`=`$ $`𝓣_+^{-1}\stackrel{~}{ℳ}𝓣_-(v_x,v_y),v_x\ge 0`$ (35)
$`(v_x',v_y')`$ $`=`$ $`𝓣_-^{-1}\stackrel{~}{ℳ}^{-1}𝓣_+(v_x,v_y),v_x<0.`$ (36)
This model is also time-reversible, but in contrast to the former one no particle changes its tangential direction during the scattering. There is still a gap in the outgoing distribution of Fig. (8) (b), however, simulations show that this gap disappears in the hydrodynamic limit thus bringing the in- and the outgoing distributions close to local equilibrium. Furthermore, we note that whereas we were not able to give a relation between the parameter $`d`$ and the actual wall velocity $`u_w`$ for Model I, in case of Model II $`u_w`$ converges to $`d`$ in the hydrodynamic limit. For this reason we chose $`d=0.5`$ in the following, since this value yields the same order of the wall velocity as $`d=0.1`$ for Model I. Proceeding now to the phase-space contraction rate we find that it takes the form
$$\overline{P}^{u/d}=(n_+-n_-)\mathrm{ln}\frac{1\pm \text{erf}(d/\sqrt{2T})}{1\mp \text{erf}(d/\sqrt{2T})}+<v_x^2+v_y^2-v_x'^2-v_y'^2-2d(v_x'+v_x)>/2T|_{u/d},$$ (37)
where $`n_\pm `$ are the collision rates for positive and negative tangential velocities, and the additional term (cf. Eq. (29)) results from the different denominators in Eq. (33). Note that one has to average over the upper and the lower wall separately. Again we compare $`\overline{R}`$ and $`\overline{P}`$ (Table V), but although the outgoing flux approaches now a Gaussian in the hydrodynamic limit the two quantities still do not match. This result can be understood in more detail by rearranging the terms in Eq. (37) as
$`\overline{P}^{u/d}`$ $`=`$ $`[<v_x^2+v_y^2-v_x'^2-v_y'^2>-<v_x'>^2+<v_x>^2]/2T|_{u/d}`$ (39)
$`+[<v_x'>^2-<v_x>^2-2d<v_x'+v_x>]/2T|_{u/d}+(n_+-n_-)\mathrm{ln}{\displaystyle \frac{1\pm \text{erf}(d/\sqrt{2T})}{1\mp \text{erf}(d/\sqrt{2T})}}.`$
Since $`T\rightarrow T_w`$ in the hydrodynamic limit, the first term clearly corresponds to the entropy production Eq. (30). However, the second and the third terms provide additional contributions. For $`u_w\approx d`$ and $`d\rightarrow 0`$ they are both of order $`d^2`$ and can be interpreted as a phase space contraction due to a friction parallel to the walls . These two terms apparently depend on the specific modeling of the collision process at the wall. They may physically be interpreted as representing certain properties of a wall, like a roughness, or an anisotropy. Actually, the second term already appeared in Model I, see Eq. (29). The price we had to pay in Model II for the fluxes getting close to local equilibrium is the additional third term in Eq. (39), which does not compensate the second one. The foregoing analysis shows clearly what to do to get rid of the additional term in Eq. (37): we have to use the same forward and backward transformations $`𝓣_\pm `$ in Eqs. (33,35). If one still wants to transform onto a full Gaussian in the hydrodynamic limit time-reversibility has to be given up.
This leads us to propose the model
(Model III) (40)
$`𝓣_d(v_x,v_y)`$ $`=`$ $`\left({\displaystyle \frac{\text{erf}\left[(v_x-d)/\sqrt{2T}\right]+1}{2}},\mathrm{exp}(-v_y^2/2T)\right)`$ (41)
$`(v_x',v_y')`$ $`=`$ $`𝓣_d^{-1}\stackrel{~}{ℳ}𝓣_d(v_x,v_y),`$ (43)
which is still deterministic, but no longer time-reversible. The phase-space contraction is now given as
$$\overline{P}=<v_x^2+v_y^2-v_x'^2-v_y'^2-2d(v_x-v_x')>/2T.$$ (44)
Fig. 9 shows that the in- and the outgoing fluxes are getting close to local equilibrium, implying that the velocity of the wall $`u_w`$ goes to $`d`$ and the wall temperature $`T_w`$ goes to $`T`$. Consequently, Eq. (44) should converge to the correct thermodynamic entropy production of Eq. (30) in the hydrodynamic limit, and this is indeed what we observe in Table V. This implies that time-reversibility does not appear to be an essential ingredient for having a relation between phase space contraction and entropy production, as was already stated in Refs. . We remark that we consider the lack of time-reversibility in Model III rather as a technical difficulty of how we define our scattering rules than a fundamental property of this model.

## V Conclusion

We have applied a novel thermostating mechanism to an interacting many-particle system. Under this formalism the system is thermalized through scattering at the boundaries while the bulk is left Hamiltonian. We have shown how this deterministic and time-reversible thermostating mechanism is related to conventional stochastic boundary conditions. For a two-dimensional system of hard disks, this thermostat yields a stationary nonequilibrium heat or shear flow state. Transport coefficients obtained from computer simulations, such as thermal conductivity and viscosity, agree with the values obtained from Enskog's theory. Having a time-reversible and deterministic system we also examined the relation between microscopic reversibility and macroscopic irreversibility in terms of entropy production. We find that entropy production and exponential phase-space contraction rate do in general not agree. When the NSS is created by a temperature gradient both quantities converge in the hydrodynamic limit. By subjecting the system to a shear we examined three different versions of scattering rules, of which one (Model III) produced an agreement. Our results indicate that neither time-reversibility nor the existence of a local thermodynamic equilibrium at the walls is a sufficient condition for obtaining an identity between phase space contraction and entropy production. A class of systems where such an identity is guaranteed by default are the ones thermostated by velocity-dependent friction coefficients . We suggest that, in general, for other ways of deterministic and time-reversible thermostating such an identity may not necessarily exist. We would expect the same to hold for any system where the interaction between bulk and reservoir depends on the details of the microscopic scattering rules. As a next step it would be important to compute the spectrum of Lyapunov exponents for the models presented in this paper. This would enable one to check, for example, the validity of formulas which express transport coefficients in terms of sums of Lyapunov exponents , and the existence of a so-called conjugate pairing rule of Lyapunov exponents . Moreover, it would be interesting to verify the fluctuation theorem, as it has been done recently for the Chernov-Lebowitz model .
Acknowledgments Helpful discussions with P.Gaspard, M.Mareschal and K. Rateitschak are gratefully acknowledged. R.K. wants to thank the Deutsche Forschungsgemeinschaft (DFG) for financial support. This work is supported, in part, by the Interuniversity Attraction Pole program of the Belgian Federal Office of Scientific, Technical and Cultural Affairs and by the Training and Mobility Program of the European Commission.
# The Future of Cherenkov Astronomy

## I Introduction

Cherenkov telescopes indirectly detect $`\gamma `$-rays by observing the flashes of Cherenkov light emitted by particle cascades initiated when the $`\gamma `$-rays interact with nuclei in the atmosphere. These telescopes have effective areas of 10,000 m$`^2`$ to 100,000 m$`^2`$, making them efficient at detecting very short time-scale variations. Telescopes with pixellated cameras that "image" the shower utilize the differences in the Cherenkov images from $`\gamma `$-ray and cosmic-ray primaries to reject the dominant cosmic-ray background with $`>`$99.7% efficiency. These telescopes are used singly or as arrays that stereoscopically image the Cherenkov flash and they currently detect 250 GeV to 20 TeV $`\gamma `$-rays. Primary particle directions are reconstructed with accuracies of about 0.15° and 0.1° and the energy resolution is about 35% (RMS) and 20% for current single telescopes and arrays, respectively. The first clear detection of a $`\gamma `$-ray source by a Cherenkov telescope was the Crab Nebula by the Whipple collaboration in 1989 . At present, seven other objects have been detected with high statistical significance: one shell-type supernova remnant (SNR), two pulsar-powered nebulae, and four active galactic nuclei (AGN). Thorough reviews of the current status of the field of very high energy (VHE, E$`\gtrsim `$100 GeV) astrophysics can be found elsewhere . The observations of AGN reveal extremely large amplitude and rapid flux variations which correlate with variations at longer wavelengths (see Figure 2). These provide estimates of the magnetic field in the AGN jets and the amount of relativistic Doppler boosting of the emission and are most easily explained if the $`\gamma `$-rays are produced through inverse Compton scattering of low energy photons and electrons. However, the energy spectra of the AGN extend to $`>`$10 TeV which is more easily explained by proton models because the electron inverse Compton process becomes inefficient above a few TeV. In travelling to Earth, the TeV $`\gamma `$-rays emitted by AGN are attenuated by pair-production with optical/IR photons . While this eliminates $`\gamma `$-rays from very distant sources, the effect on the TeV spectra from nearby AGN can be used to estimate the density of the extragalactic background light (EBL) . With the spectra from Mrk 421 and Mrk 501, upper limits on the IR background are already, at some wavelengths, more than 10 times better than those achieved with direct measurements . VHE $`\gamma `$-ray measurements of the spectrum of the Crab Nebula are consistent with the emission being produced by inverse-Compton scattering of electrons and photons in the synchrotron nebula (see Figure 2) and provide estimates of the nebular magnetic field . Similarly, TeV $`\gamma `$-rays from the shell-type SNR, SN 1006 , provide estimates of the magnetic field and the acceleration time of the electrons in the SNR, both previously unknown variables in modelling the emission from this object. Despite these exciting results, the current generation of Cherenkov telescopes only scratches the surface of the science to which the field can contribute. The fact that EGRET detected over 250 objects above 100 MeV while only eight objects have been detected above 300 GeV indicates that much can be gained by lowering the energy range covered by Cherenkov telescopes.
New instruments also need to improve flux sensitivity and estimates of the $`\gamma `$-ray energy and direction in order to detect more sources and better test emission models. Here I discuss how proposed Cherenkov telescopes will accomplish these goals and what we hope to learn from the data they will collect.

## II New Cherenkov telescope projects

### A Imaging telescopes

Proposed imaging arrays have good sensitivity from 50 GeV to 50 TeV. The energy threshold is lowered by increasing the mirror area and using a multiple telescope trigger to eliminate the background triggers from local penetrating muons and fluctuations of the night sky background light. Also, because arrays measure a shower in several telescopes, less light need be recorded in individual telescopes to reconstruct the shower - further reducing the achievable energy threshold. With multiple images of the shower, its geometry and development is better characterized, improving the angular resolution, the ability to identify $`\gamma `$-ray induced showers, and determination of the primary $`\gamma `$-ray energy. The Very Energetic Radiation Imaging Telescope Array System (VERITAS) is one such proposed array of seven 10 m telescopes (Figure 4) to be located at the base of Mt. Hopkins in the Whipple Observatory in Arizona. Six of the telescopes will be arranged at the corners of a hexagon with 80 m sides and the seventh telescope will sit at the center. Each telescope will have an imaging camera of 499 photomultiplier tubes (PMTs) viewing a 3.5° diameter area of the sky. An energy threshold of 75 GeV will be achieved and the sensitivity will be approximately 20 times better than the current Whipple telescope. VERITAS will have an angular resolution of 0.09° at 100 GeV which improves to 0.03° at 1 TeV and its RMS energy resolution will be $`<`$15%. The High Energy Stereoscopic System (HESS) is a proposed array with similar performance to VERITAS, planned for operation in the southern hemisphere, likely Namibia. HESS could eventually consist of sixteen 10 m diameter telescopes on a square grid with 100 m spacing. The Major Atmospheric Gamma Imaging Cherenkov (MAGIC) telescope attempts to maximize the performance of a single imaging telescope. The baseline proposal for MAGIC is a 17 m diameter mirror, equipped with a camera of approximately 530 pixels viewing a 3.6° diameter field of view. If high quantum efficiency ($`\sim `$45%) hybrid PMTs become economically viable, MAGIC is predicted to achieve an energy threshold of 30 GeV.

### B Solar arrays

Heliostat arrays have mirror areas of several thousand square meters, so they can efficiently detect 20 GeV to 300 GeV $`\gamma `$-ray induced Cherenkov flashes. Secondary mirrors at a central tower focus the light from individual heliostats onto PMTs (each PMT views one heliostat) to sample the Cherenkov wavefront rather than image its development. Cosmic-ray background rejection is achieved by measuring the lateral distribution of the Cherenkov light. Two groups (CELESTE and STACEE ) have begun operation of prototypes of these solar arrays. During 1999, they should become fully operational and achieve an energy threshold of $`\sim `$50 GeV. The sensitivities to point sources of the different types of telescope operating in this energy range are shown in Figure 4. Clearly, imaging arrays will have the greatest sensitivity in their energy range, but the solar arrays and MAGIC can achieve lower energy thresholds.
## III Scientific Motivation

### A Extragalactic astrophysics

#### 1 Active galactic nuclei

Outstanding questions about AGN include the particle which dominates the production of $`\gamma `$-rays (protons or electrons), the mechanism by which $`\gamma `$-rays are produced, and the acceleration mechanism for the particles. Variability studies are important to understanding the physics of the central source of AGN because the core regions cannot be resolved with existing interferometers. The large effective area of Cherenkov telescopes enables accurate measurements of extremely short variations in the $`\gamma `$-ray flux as indicated in Figure 5. The left part of the figure shows Whipple observations of the fastest flare ever recorded at $`\gamma `$-ray energies . While the flare is clearly detected, the structure of the flare is not resolved. The dashed curve is a hypothetical flux variation which matches the Whipple data. The right part of the figure shows a simulation of how an imaging array would clearly resolve all features of the flare. Because blazars are extremely variable at all wavelengths, the best way to understand the physical processes at work in them is to conduct detailed observations spanning as wide an energy range as possible. The new Cherenkov instruments and space-based telescopes will make measurements spanning 6 orders of magnitude in $`\gamma `$-ray energies. In addition, the arrays of telescopes will have significantly improved energy resolution to better measure the AGN spectra which is crucial to understanding the emission and flaring mechanisms. The new Cherenkov telescopes should also significantly increase the number of sources detected at VHE energies. A lower energy threshold will permit viewing objects further from Earth (the optical depth for pair production with low energy photons decreases rapidly with decreasing energy) and those objects which have spectral cut-offs below the sensitive range of existing telescopes (e.g., EGRET sources). The improved flux sensitivity of the imaging arrays will permit the detection of more of the AGN already detected with the Cherenkov telescopes. Measurements of the ends of the spectra for a wide range of AGN types at different redshifts can help determine what particles produce the $`\gamma `$-ray emission and refine or eliminate unification models of blazar-type AGN .

#### 2 Infrared background radiation

The current limits on the IR density derived from measurements of the TeV spectra of AGN are approximately 5 to 10 times higher than predicted from galaxy evolution . However, they place substantial restrictions on several proposed particle physics and cosmological models which would contribute to the IR background . The new Cherenkov telescopes should substantially improve these limits to the EBL. With a large ensemble of sources, the energy resolution of the imaging telescope arrays may resolve the intrinsic spectra of the AGN from the external absorption features so that it may even be possible to detect the EBL itself. Because the EBL is predominantly the result of galaxy formation, these measurements will add to our understanding of that process as well.

#### 3 Gamma-ray bursts

X-ray and optical afterglows confirm that $`\gamma `$-ray bursts are extragalactic but the sources and mechanism for producing the $`\gamma `$-ray bursts remain unknown.
The delayed GeV photons from $`\gamma `$-ray bursts demonstrate that high energy $`\gamma `$-rays play an important role in $`\gamma `$-ray bursts that can be pursued with rapid follow-up observations. With low energy thresholds, new Cherenkov telescopes will be able to see bursts out to z$`\sim `$1 or more. Because of the difficulty in producing VHE $`\gamma `$-rays and in getting them out of the region where the burst originates, the detection of a VHE component would place stringent limits on the viable models for $`\gamma `$-ray bursts. Attenuation from interaction with the EBL can also provide an independent distance estimate if optical follow-up observations do not reveal spectral lines.

### B Galactic astrophysics

#### 1 Shell-type supernova remnants and cosmic rays

SNRs are widely believed to be the sources of hadronic cosmic rays up to energies of approximately $`Z\times 10^{14}`$ eV, where $`Z`$ is the nuclear charge of the particle. The existence of energetic electrons in SNRs is well-known from observations of synchrotron emission at radio and X-ray wavelengths and TeV $`\gamma `$-rays from SN 1006 , most likely generated by electrons through inverse Compton scattering. However, a clear indication for the acceleration of hadronic particles in SNR is lacking. The evidence for such particles would be a characteristic spectrum of $`\gamma `$-rays produced mostly via $`\pi ^0`$ decay subsequent to nuclear interactions in the SNR. While EGRET has detected signals from several regions of the sky that are consistent with the positions of shell-type SNRs , upper limits from the Whipple collaboration at E$`>`$300 GeV are well below the extension of the EGRET spectra . As shown in Figure 7, there are predictions for strong $`\gamma `$-ray emission from shell-type SNRs by hadron and electron interactions. Model fits to EGRET and Whipple data indicate that if the emission detected by EGRET is from the SNR, inverse Compton and bremsstrahlung scattering of electrons contribute to the flux and the hadronic spectrum is steeper than the $`E^{-2.1}`$ expected from direct cosmic-ray measurements. The new Cherenkov telescopes, particularly the imaging arrays, and GLAST will provide excellent sensitivity and energy reconstruction for resolving the various emission components in these objects. In addition, the imaging arrays will provide detailed mapping of the emission regions in the SNRs. For a typical SNR luminosity and angular extent, an imaging array should be able to detect approximately 20 objects within 4 kpc of Earth according to one popular model of $`\gamma `$-ray production by hadronic interactions , permitting investigation of which characteristics in SNR are necessary for particle acceleration.

#### 2 Compact Galactic Objects

VHE emission from the Crab, PSR 1706-44 and Vela suggest that they may be the most prominent members of a large galactic population of sources. An accurate VHE spectrum is crucial to understanding the production mechanism of $`\gamma `$-rays from these pulsar-powered nebulae. The new imaging arrays should be sensitive to Crab-like objects anywhere within the Galaxy. The energy resolution of the arrays and the broad energy coverage available by combining the data with GLAST measurements will significantly improve tests of $`\gamma `$-ray emission models. Finally, the imaging arrays may even be able to resolve the VHE emission region of nearby objects like the Crab Nebula.
VHE $`\gamma `$-rays produced near a pulsar will pair produce with the intense magnetic fields there, leading to a sharp spectral cut-off. Thus, VHE observations constrain the location of the pulsar particle acceleration region. The high energy emission of the six pulsars detected at EGRET energies is already seriously constrained by the VHE upper limits . The energy threshold of the new telescopes should permit the detection of these bright GeV sources. Of the EGRET sources, 170 have no known counterpart at longer wavelengths , mostly due to their positional uncertainty. With their sensitivity and energy threshold, Cherenkov telescopes should detect many of these objects and source locations from imaging arrays could lead to identifications with objects at longer wavelengths. A survey is an efficient means of observing a large sample of sources and the only way to efficiently detect new types of sources. Imaging arrays will be able to survey the sky in the 100 GeV – 10 TeV energy range. An 80-night survey of the Galactic plane region $`0^{\circ }<l<85^{\circ }`$ with VERITAS will be sensitive to fluxes down to $`\sim `$0.02 Crab above 300 GeV and encompass more than 40 potential VHE sources, and so should significantly increase the VHE catalog.

### C Fundamental Physics

#### 1 Neutralino annihilation in the Galactic center

Current astrophysical data indicate the need for a cold dark matter component with $`\mathrm{\Omega }\sim 0.3`$. A good candidate for this component is the neutralino, the lightest stable supersymmetric particle. If neutralinos do comprise the dark matter and are concentrated near the center of our galaxy, their direct annihilation to $`\gamma `$-rays should produce a monoenergetic annihilation line with mean energy equal to the neutralino mass. Cosmological constraints and limits from accelerator experiments restrict the neutralino mass to the range 30 GeV - 3 TeV. Thus, the new Cherenkov telescopes and GLAST together will allow a sensitive search over the entire allowed neutralino mass range. Recent estimates of the annihilation line flux for neutralinos at the galactic center predict a $`\gamma `$-ray signal which may be of sufficient intensity to be detected with an imaging array (Figure 7) and GLAST.

#### 2 Quantum gravity

Quantum gravity can manifest itself as an effective energy-dependence to the velocity of light in vacuum caused by propagation through a gravitational medium containing quantum fluctuations. In some formulations , this time dispersion can have a first-order dependence on photon energy:
$$\mathrm{\Delta }t\simeq \xi \frac{E}{E_{QG}}\frac{L}{c}$$ (1)
where $`\mathrm{\Delta }t`$ is the time delay relative to the energy-independent speed of light, $`c`$; $`\xi `$ is a model-dependent factor of order 1; $`E`$ is the energy of the observed radiation; $`E_{QG}`$ is the energy scale at which quantum gravity couples to electromagnetic radiation; and $`L`$ is the distance over which the radiation has propagated. Recent work within the context of string theory indicates that quantum gravity may begin to manifest itself at a much lower energy scale than the Planck mass, perhaps as low as $`10^{16}`$ GeV . VHE observations of variable emission from distant objects provide an excellent means of searching for the effects of quantum gravity. For example, the Whipple Collaboration has recently used data from a rapid TeV flare of the AGN Mrk 421 to constrain $`E_{QG}/\xi `$ to be $`>4\times 10^{16}`$ GeV, the highest convincing limit determined to date .
This limit can be vastly improved with the new Cherenkov telescopes because they will be more sensitive to short time-scale variability and able to detect more distant objects. In addition to AGN flares, $`\gamma `$-ray bursts and pulsed emission from Galactic sources may provide avenues for investigating the effects of quantum gravity.
no-problem/9903/math9903019.html
ar5iv
text
# Proof of a partition identity conjectured by Lassalle ## Abstract. We prove a partition identity conjectured by Lassalle (Adv. in Appl. Math. 21 (1998), 457–472). The purpose of this note is to prove the theorem below which was conjectured by Lassalle . In order to state the theorem, we introduce the following notations. Let $`(a)_n=a(a+1)\mathrm{}(a+n1)`$. For a partition $`\mu `$ of $`n`$ let the length $`l(\mu )`$ be the number of the parts of $`\mu `$, $`m_i`$ the number of parts $`i`$, $`z_\mu =_{i1}i^{m_i(\mu )}m_i(\mu )!`$ and $`\genfrac{}{}{0.0pt}{}{\mu }{r}`$ the number of ways to choose $`r`$ different cells from the diagram of the partition $`\mu `$ taking at least one cell from each row. Then the following theorem holds for $`n1`$. ###### Theorem 1. $$\begin{array}{c}\underset{|\mu |=n}{}\genfrac{}{}{0.0pt}{}{\mu }{r}\frac{X^{l(\mu )1}}{z_\mu }\underset{i=1}{\overset{l(\mu )}{}}(\mu _i)_s\hfill \\ \hfill =(s1)!\left(\genfrac{}{}{0pt}{}{n+s1}{nr}\right)\left[\left(\genfrac{}{}{0pt}{}{X+r+s1}{r}\right)\left(\genfrac{}{}{0pt}{}{X+r1}{r}\right)\right]\end{array}$$ (1) ###### Proof. We first observe that $`_{i1}i^{m_i(\mu )}=_{i=1}^{l(\mu )}\mu _i`$ and that $`\frac{l(\mu )!}{m_1!\mathrm{}m_n!}`$ is the number of compositions of $`n`$ which are permutations of the parts of $`\mu `$. Let us denote this number by $`C(\mu )`$. After division by $`s!`$ the left-hand side can be rewritten as $`{\displaystyle \frac{\mathrm{LHS}}{s!}}`$ $`={\displaystyle \underset{|\mu |=n}{}}C(\mu ){\displaystyle \genfrac{}{}{0.0pt}{}{\mu }{r}}{\displaystyle \frac{X^{l(\mu )1}}{l(\mu )!_{i=1}^{l(\mu )}\mu _i}}{\displaystyle \underset{i=1}{\overset{l(\mu )}{}}}\left({\displaystyle \genfrac{}{}{0pt}{}{\mu _i+s1}{s}}\right)`$ $`={\displaystyle \underset{l=1}{\overset{\mathrm{}}{}}}{\displaystyle \underset{\underset{\mu _j1}{\mu _1+\mathrm{}+\mu _l=n}}{}}{\displaystyle \frac{X^{l1}}{l!\mu _1\mathrm{}\mu _l}}{\displaystyle \genfrac{}{}{0.0pt}{}{\mu }{r}}{\displaystyle \underset{i=1}{\overset{l}{}}}\left({\displaystyle \genfrac{}{}{0pt}{}{\mu _i+s1}{s}}\right)`$ For the composition $`\mu `$, $`\genfrac{}{}{0.0pt}{}{\mu }{r}`$ counts the ways of choosing $`r`$ points in the diagram of the composition. If we choose $`r_i`$ points from part $`\mu _i`$, there are $`_{i=1}^l\left(\genfrac{}{}{0pt}{}{\mu _i}{r_i}\right)`$ possible choices. Summing over all possible compositions $`r=r_1+\mathrm{}+r_l`$, where every part is $`1`$ gives $`\genfrac{}{}{0.0pt}{}{\mu }{r}`$. Thus we get for the left-hand side of (1) $$\begin{array}{c}\frac{\mathrm{LHS}}{s!}=\underset{l=1}{\overset{\mathrm{}}{}}\underset{\underset{\mu _j1}{\mu _1+\mathrm{}+\mu _l=n}}{}\frac{X^{l1}}{l!}\underset{\underset{r_j1}{r_1+\mathrm{}+r_l=r}}{}\frac{1}{r_1\mathrm{}r_l}\left(\genfrac{}{}{0pt}{}{\mu _11}{r_11}\right)\mathrm{}\left(\genfrac{}{}{0pt}{}{\mu _l1}{r_l1}\right)\underset{i=1}{\overset{l}{}}\left(\genfrac{}{}{0pt}{}{\mu _i+s1}{s}\right)\hfill \end{array}$$ It is easy to see that $`\left(\genfrac{}{}{0pt}{}{\mu _i+s1}{\mu _i1}\right)\left(\genfrac{}{}{0pt}{}{\mu _i1}{r_i1}\right)=(1)^{r_i1}\left(\genfrac{}{}{0pt}{}{s1}{r_i1}\right)\left(\genfrac{}{}{0pt}{}{\mu _i+s1}{r_i+s1}\right)`$. 
Now we can evaluate the sum over the $`\mu _j`$ by repeated application of the Chu-Vandermonde summation formula: $$\underset{\mu _1+\mathrm{}+\mu _l=n}{}\left(\genfrac{}{}{0pt}{}{\mu _11}{r_11}\right)\mathrm{}\left(\genfrac{}{}{0pt}{}{\mu _l1}{r_l1}\right)\left(\genfrac{}{}{0pt}{}{\mu _i+s1}{s}\right)=(1)^{r_i1}\left(\genfrac{}{}{0pt}{}{s1}{r_i1}\right)\left(\genfrac{}{}{0pt}{}{n+s1}{r+s1}\right).$$ Thus, we get for the left-hand side of (1) $$\frac{\mathrm{LHS}}{s!}=\underset{l=1}{\overset{\mathrm{}}{}}\frac{X^{l1}}{l!}\underset{\underset{r_j1}{r_1+\mathrm{}+r_l=r}}{}\frac{1}{r_1\mathrm{}r_l}\underset{i=1}{\overset{l}{}}(1)^{r_i1}\left(\genfrac{}{}{0pt}{}{s1}{r_i1}\right)\left(\genfrac{}{}{0pt}{}{n+s1}{r+s1}\right).$$ (2) The factor $`\left(\genfrac{}{}{0pt}{}{n+s1}{r+s1}\right)=\left(\genfrac{}{}{0pt}{}{n+s1}{nr}\right)`$ can be taken outside of all the sums. By comparison of (1) and (2), we see that it remains to prove $$\begin{array}{c}\underset{l=1}{\overset{\mathrm{}}{}}\frac{X^{l1}}{l!}\underset{\underset{r_j1}{r_1+\mathrm{}+r_l=r}}{}\frac{1}{r_1\mathrm{}r_l}\underset{i=1}{\overset{l}{}}(1)^{r_i1}\left(\genfrac{}{}{0pt}{}{s1}{r_i1}\right)\hfill \\ \hfill =\frac{1}{s}\left[\left(\genfrac{}{}{0pt}{}{X+r+s1}{r}\right)\left(\genfrac{}{}{0pt}{}{X+r1}{r}\right)\right].\end{array}$$ (3) This can be done by using generating functions. We multiply both sides of the equation by $`\mathrm{\Phi }^r`$ and sum over all $`r0`$. The right-hand side can be evaluated by the binomial theorem and gives $$\frac{1}{s}\left((1\mathrm{\Phi })^{Xs}(1\mathrm{\Phi })^X\right).$$ (4) For the left-hand side we need the power series expansion of the logarithm and the equation $$\underset{r_i=1}{\overset{\mathrm{}}{}}\left(\genfrac{}{}{0pt}{}{r_i+s1}{s}\right)\frac{\mathrm{\Phi }^{r_i}}{r_i}=\frac{1}{s}((1\mathrm{\Phi })^s1),$$ which can be derived from the binomial theorem. So the generating function corresponding to the left-hand side of (4) evaluates as follows: $$\begin{array}{c}\underset{l=1}{\overset{\mathrm{}}{}}\frac{X^{l1}}{l!}\underset{r_1=1}{\overset{\mathrm{}}{}}\frac{\mathrm{\Phi }^{r_1}}{r_1}\underset{r_2=1}{\overset{\mathrm{}}{}}\frac{\mathrm{\Phi }^{r_2}}{r_2}\mathrm{}\underset{r_l=1}{\overset{\mathrm{}}{}}\frac{\mathrm{\Phi }^{r_l}}{r_l}\underset{i=1}{\overset{l}{}}\left(\genfrac{}{}{0pt}{}{r_i+s1}{s}\right)\hfill \\ \hfill \begin{array}{cc}& =\underset{l=1}{\overset{\mathrm{}}{}}\frac{X^{l1}}{l!}\underset{i=1}{\overset{l}{}}\left(\mathrm{log}\frac{1}{1\mathrm{\Phi }}\right)^{l1}\frac{1}{s}\left(\left(1\mathrm{\Phi }\right)^s1\right)\hfill \\ & =\frac{1}{s}((1\mathrm{\Phi })^s1)\underset{l=1}{\overset{\mathrm{}}{}}\frac{\left(X\mathrm{log}\frac{1}{1\mathrm{\Phi }}\right)^{l1}}{(l1)!}\hfill \\ & =\frac{1}{s}((1\mathrm{\Phi })^s1)e^{X\mathrm{log}\frac{1}{1\mathrm{\Phi }}}\hfill \\ & =\frac{1}{s}((1\mathrm{\Phi })^s1)(1\mathrm{\Phi })^X\hfill \\ & =\frac{1}{s}((1\mathrm{\Phi })^{Xs}(1\mathrm{\Phi })^X).\hfill \end{array}\end{array}$$ This is equal to (4), so the theorem is proved. ∎
no-problem/9903/hep-ex9903030.html
ar5iv
text
# Measurement of the Neutrino-induced Semi-contained Events in MACRO ## I Introduction Recent measurements of atmospheric neutrino flux , as well as some older measurements , give absolute values and ratios of various quantities that are inconsistent with expectations for massless (non-oscillating) neutrinos, although some older measurements , , , reported no evidence for oscillations. The ongoing MACRO measurement of neutrino-induced upgoing muons that traverse the entire detector (so-called “throughgoing muons”) also suggests oscillations with parameters ($`\mathrm{\Delta }m^2`$ a few times $`10^3`$ and $`sin^22\theta =1`$) roughly consistent with and . The MACRO analysis has been extended to event topologies that probe lower neutrino energies, and the preliminary results are presented here. The MACRO detector located at a depth of 3700 mwe at the Gran Sasso laboratory in Italy is a large (77 m $`\times `$ 12 m $`\times `$ 9.3 m) detector of penetrating radiation. Although it was designed primarily to search for exotic slow-moving supermassive particles such as GUT monopoles, it is also capable of measuring neutrino-induced muon fluxes. The bottom half of the detector is filled with crushed rock absorber and planes of limited streamer tubes (with wire and strip views) with a pitch of 3 cm. Each outer face of the detector and a central layer are filled with boxes of liquid scintillator that provide sub-nanosecond timing resolution. The outer faces also contain additional streamer tube layers. The interior of the upper portion of the detector is hollow. (See Figure 1.) Almost all neutrino interactions in the detector take place in the lower half, where most of the mass is. Because of its large granularity, MACRO has very poor efficiency to detect neutral current events or neutrino-induced electrons. It is primarily sensitive to muons (from $`\nu _\mu `$ charged current interactions) which travel a few meters or more. There are three event topologies of neutrino-induced muons analyzed in MACRO (again, refer to Figure 1). Throughgoing muons, labeled TG, are from neutrino interactions below the detector which send a muon through the entire detector. The TG analysis is described elsewhere . Upward contained-vertex events, labeled CT-Up, in which the muon strikes the Center (“C”) layer of scintillator as well as a higher layer (“T” for Top) constitute the first topology in the present analysis. Of course, there are also neutrino-induced downgoing muons, both throughgoing and stopping in the detector. However, they are indistinguishable from primary atmospheric muons from cosmic ray showers, which even at MACRO’s depth outnumber neutrino-induced muons by 100,000 to 1. Therefore for these topologies only upward muons are considered, as determined by the time of flight between two or three scintillator layers. The final topology, labeled B-Up-Down, consists of two classes of events: downward muons from contained-vertex events and upward muons from external events which stop in the detector. Because only one layer of scintillator is hit (“B” for Bottom) and because the streamer tubes do not provide accurate timing information, it cannot be determined if the particles are moving up or down, so the B-Up-Down analysis sums over both classes of events (upward and downward). Figure 2 shows the distribution of parent neutrino energies contributing to the different topologies, from a no-oscillations Monte Carlo calculation (described below). 
The current analyses utilize neutrinos more than an order of magnitude less energetic than the TG analysis. ## II The CT-Up Analysis The first analysis, CT-Up, looks for upward muons from neutrino interactions in the lower part of the detector. The muon must strike a scintillator tank in the center layer, as well as a higher layer (either the top layer or the upper portion of the side walls). The analysis requires these two scintillator hits, as well as enough colinear streamer tube hits in two different views to reconstruct a track in space. Typically this is 4 hits per view, but varies depending on which planes the track traverses. To ensure that the vertex is truly internal, a projection of the track below the lowest streamer tube hit is required to pass through several streamer tube or scintillator planes which did not fire. Thus, muons that enter from below through cracks in the detector are rejected. The vast majority of events passing these geometry cuts are downgoing atmospheric muons. The final cut relies on the time of flight measured between the two scintillator boxes to select only upgoing events. Defining $`\beta `$ by $`v=\beta c`$, with the convention that upgoing particles have negative $`\beta `$, Figure 3 shows the observed $`1/\beta `$ distribution for all events passing the geometry cuts. A substantial peak of upgoing events is well-separated from the large background of downgoing events, although the events between the peaks show a residual background due to mistimed events. ## III The B-Up-Down Analysis The B-Up-Down analysis searches for two types of events, both of which have the same topological signature in MACRO: downward contained vertex events and externally-produced upward muons that stop in the detector. In both cases the signature is a hit in the bottom scintillator layer and a few associated colinear streamer hits. In the absence of oscillations, we expect about equal numbers from the two classes because for every downgoing neutrino that makes a track starting in the detector and ending below, there is a corresponding upgoing neutrino that could make a track of the same length in the opposite direction. The analysis requires a B-layer scintillator hit associated with streamer tube tracks in two views (wire and strip). At least 3 colinear hits are required to define a track, which implies at least $`100g/cm^2`$ of scintillator and crushed rock must be traversed. To reduce the background attributable to the huge number of downgoing primary atmospheric muons, all hits are required to be more than $`1m`$ from the side walls, and a projection of the track above the highest streamer tube hit must pass through several streamer planes and/or scintillator tanks which did not fire. Experience has shown that many events passing these simple cuts do not appear to be clean neutrino-induced events. Therefore, a final hand scanning procedure is implemented to reject events where the reconstructed track appears wrong or hits outside the fiducial volume appear to be correlated with the track. Fully half the candidates are rejected by the hand scan. However, it should be noted that two different people performed the scan and made the same judgment on more than 95% of events. Simulated events passed the hand scan with greater than 95% efficiency. A small systematic uncertainty is included in the final results due to these effects. 
In addition to neutrino-induced events, upgoing particles (typically soft pions) are induced by downward atmospheric muons, through a photonuclear process such as $`\mu N\mu \pi X`$ Figure 4 shows an event probably produced by this mechanism. A downgoing muon traversed the detector, and a few nanoseconds later an upgoing particle hit the detector nearby. In this case, because we saw the muon we would reject this event in the neutrino analysis. However, if the muon missed the detector and we saw only the soft pion, it would be indistinguishable from the neutrino-induced muons we are searching for. We have made a study of 243 events similar to that shown in Figure 4 to characterize the spectrum of particles produced by downgoing muons. We estimate such events give an irreducible background which amounts to about 4% of the number of events we observe. MACRO’s non-compact geometry is well-suited for measuring this background, and the results in can be adapted for use in other underground experiments. ## IV The Monte Carlo Calculation An approximate event rate could be determined by a semi-analytic calculation; however, it is most precise to use a Monte Carlo calculation that takes into account the detector geometry and all relevant energy loss mechanisms. The basic ingredients of the Monte Carlo calculation are a calculation of the atmospheric neutrino flux, a model of the neutrino interaction cross section, and detector simulation. We use the Bartol flux calculation including geomagnetic effects. The cross section model is due to Lipari, et al which computes the total cross section as the sum of three component processes – quasielastic, resonant and deep inelastic. For the deep inelastic cross section, the current work uses parton distribution function set S1 of Morfin and Tung , but future work will utilize a more modern distribution function. The detector simulation is the standard Geant-based MACRO simulation program, GMACRO, which has been tuned to match the copious downgoing muon data. Events output by GMACRO are in the same format as real data, and are analyzed by the same software chain as real data. The calculation assumes that atmospheric neutrinos are the only relevant source of neutrinos. In this regard, we may point out that, in the absence of oscillations, the atmospheric neutrino flux alone overpredicts the number of observed events. Also, MACRO searches for two exotic sources of neutrino flux (WIMP annihilation in the center of the earth or the sun , and astrophysical point sources ) have been negative. In the case of the B-Up-Down analysis, after cuts Monte Carlo events are randomly merged with real events before the hand scan, so that the people performing the scan do not know if they are looking at a real or a Monte Carlo event. ## V Results The B-Up-Down analysis had an effective livetime of 2.81 years, occurring between July, 1994 and November, 1997. 125 events passed the scan, of which 5 are estimated to be background due to soft pion production. The zenith angle distribution of the background-subtracted sample is shown in Figure 5a. In this case, we do not know if any individual event is upgoing or downgoing, so the horizontal axis is $`|cos\theta |`$, with vertical to the left and horizontal to the right. The CT-Up analysis utilized a total effective livetime of 3.16 years, accumulated between April, 1994 and November, 1997. 88 events are identified in the upgoing peak of Figure 3, of which an estimated 3 are due to the mistimed background. 
The zenith angle distribution of the background-subtracted sample is shown in Figure 5b. All of these events are upgoing. Also shown in Figure 5 in a solid line is the result of the Monte Carlo prediction, assuming no oscillations. Integrating over all zenith bins, the Monte Carlo predicts, for B-Up-Down, $`159\pm 40_{theor}`$ events and for CT-Up, $`144\pm 36_{theor}`$. Forming the ratio of observed to expected, we get our preliminary results, $`R_{BUpDown}=0.75\pm 0.07_{stat}\pm 0.08_{sys}\pm 0.19_{theor}`$ $`R_{CTUp}=0.59\pm 0.06_{stat}\pm 0.06_{sys}\pm 0.15_{theor}`$ Both ratios, especially CT-Up, differ significantly from unity, though the significance is greatly degraded by the theoretical uncertainty, of order 25%, on the neutrino flux and neutrino cross section. We have not done a complete analysis of oscillations pending a more careful calculation of the uncertainties. However, the zenith distributions are highly suggestive of oscillations with maximal mixing and $`\mathrm{\Delta }m^2`$ of a few times $`10^3`$. With these parameters, we expect neutrinos from below, which travel thousands of kilometers through the earth, to be fully oscillated (50% deficit) while neutrinos from above or from the horizontal, which travel only tens of kilometers, are hardly suppressed at all. For the B-Up-Down analysis, in which the vertical bins sum over upward and downward neutrinos, we expect half the upward and none of the downward neutrinos to disappear, giving a deficit of 25%. In fact, we have shown on Figure 5 in the dashed lines the expectation for a test point of $`sin^22\theta =1`$ and $`\mathrm{\Delta }m^2=2.5\times 10^3`$ which seems to fit the data quite well (but the remarkable fit should not be taken at face value due to the large theoretical errors in the prediction, not shown in the figure). ## VI Conclusions The CT-Up and B-Up-Down analyses, presented in preliminary form here, provide a measurement of the atmospheric neutrino flux that is independent of, and complementary to, the MACRO TG analysis, although with smaller statistics. Both preliminary analyses seem inconsistent with no oscillations, although the theoretical uncertainties are large. Both analyses agree well with oscillations with parameters ($`sin^22\theta =1`$ and $`\mathrm{\Delta }m^2=2.5\times 10^3`$) consistent with those suggested by the MACRO TG analysis, as well as other experiments. In the near future, we will analyze an additional 1.25 years of data already in the can, and do further work to quantify systematic and theoretical uncertainties. We will also compute double ratios to try to cancel some uncertainties. A ratio of the present low-energy events to the standard throughgoing events is of limited value in canceling errors, because the primary cosmic ray flux and the cross sections at the different energies may have different systematics. A more promising ratio is that of the two low-energy analyses, $`\frac{R_{BUpDown}}{R_{CTUp}}`$. Referring again to Figure 2, these events come from the same energy range, so a great cancellation of uncertainties will occur. This should strengthen the conclusions outlined in rough form in Section V.
no-problem/9903/hep-ph9903316.html
ar5iv
text
# Symmetry Conserving Dynamical Mappings ## 1 INTRODUCTION Solving a theory for interacting fields is one of the major challenges in quantum field theory (QFT). It is in fact amazing to realize that, besides the coupling constant perturbation (CCP), the analytical (approximate) solutions at our disposal are of a semi-classical nature. These are based on expansions in the number of colors $`(N_c)`$, flavors $`(N_f)`$, or simply charges $`(N)`$, depending on the problem at hand. I will refer to these generically by the $`1/N`$-expansion. The symmetries are known to be preserved by these approaches since the dynamics, like in the case of the CCP, is sorted out according to an expansion in an arbitrary parameter. It is well known that the usual CCP is supported by a Fock space made of an ”uncorrelated” vacuum $`|0`$ and excited states $`|\nu `$ build by the action of creation operators of the quantized canonical fields. Going to higer order in the loops one builds correlations perturbatively which leads to a gradual redefinition of the vaccum at each order. An interesting question to raise is: can one envisage a similar construction for the Fock space in the case of a nonperturbative approach e.g. the semi-classical ones? In the following I will explore such a possibility and show that the idea is fertile and can even help guessing a promising ”new” nonperturbative approach. The latter, in contrast to the semi-classical one, is an approximate solution with a full quantum character. It transcends the $`1/N`$-expansion and contains the Gaussian functional approach (GFA). In fact, it was the need for such kind of solutions of QFT which triggered the interest into the GFA. Unfortunately, it was quickly realized that the latter, being order mixing, is not in general able to treat the symmetries correctly . Here I would like to argue that the second nonperturbative symmetry conserving approximation, next to the $`1/N`$-expansion, needs in fact much more vaccum-correlations than what the GFA offers. We will see that these correlations are of a RPA-type, selected carefully by dynamically mapping the canonical fields into the currents with the corresponding quantum numbers. The idea of substituting the currents for the canonical fields is in fact not new. It was used in the late sixties by Callan, Dashen, Sharp, Sommerfield, and Sugawara in an attempt to build a QFT with currents as dynamical variables giving up the concept of describing the fields with canonical variables. In the following, I would like to draw a slightly different picture. Although I will substitute for the asymptotic fields the corresponding currents, I will not renounce the use of the canonical fields as building blocks of QFT. Mapping these canonical fields into the currents will then help in gathering the dynamics which dress the asymptotic fields while preserving the symmetry. In section 3, we will see how this can be put to work in building a nonperturbative pion by mapping its canonical field into the axial current. However, I will first, in section 2, revisit the semi-classical $`1/N`$-expansion approach using the very concept of dynamical mappings. As an example, consider the toy model of QFT, the $`\mathrm{\Phi }^4`$-theory with a continuous $`O(N+1)`$ symmetry. 
The lagrangian density with the appropriate scaling reads $$=\frac{1}{2}\left[\left(_\mu \stackrel{}{\pi }\right)^2+\left(_\mu \sigma \right)^2\right]\frac{\mu ^2}{2}\left[\stackrel{}{\pi }^2+\sigma ^2\right]\frac{\lambda }{4N}\left[\stackrel{}{\pi }^2+\sigma ^2\right]^2+\sqrt{N}c\sigma ,$$ (1) where $`\stackrel{}{\pi }(x)`$ stands for a $`N`$-components pion field and $`\sigma (x)`$ its chiral partner. ## 2 HOLSTEIN-PRIMAKOFF MAPPING First let us see how the concept of symmetry conserving dynamical mapping (SCDM) can be used to retrieve the well known $`1/N`$-expansion. It is clear that, due to the Bose statistics, the pion wave function induces direct and exchange contributions (Hartree and Fock terms) which are of two distinct orders in the $`1/N`$ counting. Therefore to hinder any order mixing in the $`1/N`$-expansion one should, whatever the procedure used, only allow the Hartree terms as leading contributions and relegate the Fock terms to the sub-leading orders. This is, however, only possible if the pion wave-function is severely truncated, leading to a particle which doesn’t fully enjoy the quantum statistics or, in other words, to a Hartree particle. Sorting out the dynamics according to this scheme can be achieved by means of a pion-pair bosonization via the so-called Holstein-Primakoff mapping (HPM). The latter appeared first in the early fourties as a realization of the $`SU(2)`$ algebra for quasi-spins. It was forgotten ever since and reappeared in the sixties in the nuclear many-body problem where it was used for bosonizing fermion-pairs (see for a review). In the present case the HPM for pion-pairs reads (see for details): $$\stackrel{}{a}_q^+\stackrel{}{a}_p^+\left(A^+\sqrt{N+A^+A}\right)_{q,p},\stackrel{}{a}_q\stackrel{}{a}_p\left(\stackrel{}{a}_p^+\stackrel{}{a}_q^+\right)^+,\stackrel{}{a}_q^+\stackrel{}{a}_p\left(A^+A\right)_{q,p},$$ (2) where $`\stackrel{}{a}_q^+,\stackrel{}{a}_q`$ stand for the pion creation and annihilation operators, while $`A_{q,p}`$ and $`A_{q,p}^+`$ are real boson operators obeying the Heisenberg-Weyl algebra. This mapping is made in such a way that the original algebra, obeyed by the pairs of operators at the l.h.s of eq.(2), is also realized by the ansatz at the r.h.s. The square root is to be understood as a formal power series in the operators. Thus the Hamiltonian of the vector model derived from eq.(1) will naturally inherit a formal expansion of the form: $`H=^{(0)}+^{(1)}+^{(2)}+^{(3)}+^{(4)}+..`$, where the superscripts indicate the powers of the operators $`A`$ and $`A^+`$ and also $`b`$ and $`b^+`$ for the sigma field. At this stage, the content in $`N`$ of each order $`^{(p)}`$ is not yet specified. Also the formal expansion is in reality not unique since the operators are not in normal order. Therefore a definition of a vacuum is mandatory if one wishes to make any use of the above expansion. By defining a vacuum for the $`A`$ and $`b`$ operators one makes the HPM dynamical. To meet the desired $`1/N`$-expansion approach this step is indeed decisive. In other words not any vacuum and thus not any Fock space is able to support this approach. It was shown in that the vacuum of the $`1/N`$-expansion is a coherent state $$|\psi >=exp\left[\underset{q}{}d_{\pi \pi }(q)A_{q,q}^++b_0b_0^+\right]|0>,$$ (3) which can accommodate condensates of the sigma field, denoted here by $`b_0`$, as well as pairs of Hartree pions, denoted by $`d_{\pi \pi }(q)`$. 
Assuming this, the Hamiltonian displays then a parallel (and unambiguous) expansion in the powers of $`N`$, such that $$H=NH^{(0)}+\sqrt{N}H^{(1)}+H^{(2)}+\frac{1}{\sqrt{N}}H^{(3)}+\frac{1}{N}H^{(4)}+\mathrm{}$$ (4) Here the terms $`H^{(p)}`$ have no content in $`N`$. To gather the dynamics, one has to diagonalise $`H`$. Odd powers in $`\sqrt{N}`$ are completely off-diagonal and therefore ought to disappear. The leading order dynamics is contained in the three lowest terms, thus we disregard here all higher terms. The term $`H^{(1)}`$ is washed out by performing a variational Hartree-Bogoliubov (HB) calculation, using $`b_0`$ and $`d_{\pi \pi }(q)`$ as variational parameters in minimizing the ground state energy. The bilinear $`H^{(2)}`$ is diagonalized by applying a canonical Bogoliubov rotation which mixes the operators $`b`$, $`b^+`$ and $`A`$, $`A^+`$ such that: $$Q_\stackrel{}{p}^+=X_\stackrel{}{p}b_\stackrel{}{p}^+Y_\stackrel{}{p}b_\stackrel{}{p}+\underset{q}{}\left[U_{\stackrel{}{q},\stackrel{}{p}}A_{\stackrel{}{q},\stackrel{}{p}\stackrel{}{q}}^+V_{\stackrel{}{q},\stackrel{}{p}}A_{\stackrel{}{q},\stackrel{}{p}+\stackrel{}{q}}\right].$$ (5) This is nothing but a $`\pi \pi `$ RPA-scattering equation coupled to a Dyson equation for the sigma mode. The vacuum of the theory is accordingly modified. The latter, denoted by $`|RPA`$, is implicitly defined by $`Q_\stackrel{}{p}|RPA=0`$ and explicitly obtained via a unitary transformation<sup>1</sup><sup>1</sup>1 This transformations is constructed as a product of three unitary inequivalent transformations. The first is a unitary squeezing transformation in the $`(b,b^+)`$-sector, the second is a similar one in the $`(A,A^+)`$-sector and the third is a unitary transformation which mixes both sectors . of the coherent state: $`|RPA=U_{unitary}|\psi `$. This exhausts the leading order dynamics. In a cutoff theory, the $`|RPA`$-vacuum, obtained so far, possesses a broken phase with a finite sigma-condensate $`(\sigma 0)`$ and two curvatures; one is the Goldstone boson mass , obtained in the HB mean-field, and the second is the sigma mass, obtained in the RPA. These are given by $`m_\pi ^2`$ $`=`$ $`\mu ^2+\lambda \left[I_\pi +\sigma ^2\right],{\displaystyle \frac{c}{\sigma }}=\mu ^2+\lambda \left[I_\pi +\sigma ^2\right],`$ (6) $`m_\sigma ^2`$ $`=`$ $`\mu ^2+\lambda \left[I_\pi +3\sigma ^2\right]+{\displaystyle \frac{2\lambda ^4\sigma ^2\mathrm{\Sigma }_{\pi \pi }(m_\sigma ^2)}{1\lambda ^2\mathrm{\Sigma }_{\pi \pi }(m_\sigma ^2)}}.`$ (7) Here $`I_\pi `$ is the tadpole of the HB-pion and $`\mathrm{\Sigma }_{\pi \pi }(p^2)`$ stands for the convoluted two HB-pion propagator (RPA bubble of HB-pions). Naturally, this approach preserves the whole hierarchy of Ward identities. In particular, the lowest one which expresses the current conservation (in PCAC sense), $`D_\pi ^1(0)=\frac{c}{\sigma }`$, holds. Figures 1.a and 1.b show the summed class of diagrams. Fig. 1.a. BCS solution in a Hartree-Bogoliubov (HB) approximation. The pion mass and the condensate are given by two coupled self-consistent equations. There is no dynamical mass generation therefore the pion is a Goldstone mode Fig. 1.b. Dyson equation for the $`\sigma `$ mode coupled to a RPA equation for $`\pi \pi `$-scattering. According to this scheme, the sigma mass is build perturbatively (in contrast to the self-consistent building of the pion and the condensate in Figure 1.a.) 
## 3 PIONIC-QRPA MAPPING The approach presented in the previous section is very appealing from many aspects and particularly from its symmetry conserving character. However, it has a serious drawback. The pion, constructed so far, is a Hartree particle thus a semi-quantum (semi-classical) ”object”. It is also the building block for all higher n-point functions, as suggested by the whole hierarchy of Ward identities. Attempts made with the GFA failed so far to correct for this . Indeed, assuming the full wave function of the pion (instead of truncating it) induces an uncontrollable order mixing which inevitably ”destroys the symmetry”. As stated in the introduction, one way out is to map the canonical pion field into the axial current. This idea is supported by the exact (Goldstone) statement: $`Q_5^a|vac|\pi ^a`$ , which allows to build a pion state by acting with the symmetry generator on the full correlated vacuum of the theory. In an effective model, the generator $`Q_5^a`$ is simply given by Noether’s theorem. Therefore one can use the field structure of $`Q_5^a`$ to model an excitation operator for the asymptotic pion field. In the present case, the creation operator of the iso-vector pion takes the form $$\stackrel{}{Q}_\pi ^+=X_\pi ^{(1)}\stackrel{}{a}_0^+Y_\pi ^{(1)}\stackrel{}{a}_0+\underset{q}{}\left[X_\pi ^{(2)}(q)b_q^+\stackrel{}{a}_q^+Y_\pi ^{(2)}(q)b_q\stackrel{}{a}_q+X_\pi ^{(3)}(q)b_q^+\stackrel{}{a}_qY_\pi ^{(3)}(q)b_q\stackrel{}{a}_q^+\right].$$ (8) Here the operators $`\stackrel{}{a}_q`$ and $`b_q`$ represent the canonical pion- and sigma- fields. The $`X`$ and $`Y`$ amplitudes are fixed dynamically. Using these as variational variables to minimize the the pion-state energy $`(\delta \frac{\pi |H|\pi }{\pi |\pi }=0)`$ leads to the so-called Rowe equation of motion $$RPA|[\delta \stackrel{}{Q}_\pi ,[H,\stackrel{}{Q}_\pi ^+]]|RPA=m_\pi RPA|[\delta \stackrel{}{Q}_\pi ,\stackrel{}{Q}_\pi ^+]|RPA,$$ (9) where H is the Hamiltonian of the model, $`m_\pi `$ is the pion mass (excitation energy to create a pion at rest) and $`|RPA`$ is an approximate ansatz to the full correlated vacuum $`|vac`$, defined implicitly by : $`\stackrel{}{Q}_\pi |RPA=0`$. The eigenvalue problem in eq.(9), in its present variational form, is known as the self-consistent RPA which can not be solved in practice. Therefore one uses in general the quasi-boson assumption which approximates the bilinear $`\stackrel{}{Q}_\pi `$ by a boson. Thus eq.(9) is linearized. In the exact chiral limit $`(c=0)`$, one of its solutions, if successfully normalized, has zero energy $`(m_\pi =0)`$. The normalization of this Goldstone solution can in fact be achieved by optimizing the RPA basis. This is done by dynamically mapping the original canonical pion $`(\stackrel{}{a},\stackrel{}{a}^+)`$ and sigma $`(b,b^+)`$ fields into Hartree-Fock-Bogoliubov (HFB) fields<sup>2</sup><sup>2</sup>2This is in fact the minimal procedure to achieve the normalization of the Goldstone solution. Other normalization procedures, based on Higher-RPA, and which allow to gather more dynamics than the present approach, are also possible. Further discussion on this point is deferred to a coming work. $$\stackrel{}{\alpha }_q^+=u_q\stackrel{}{a}_q^+v_q\stackrel{}{a}_q,\beta _q^+=x_qb_q^+y_qb_qw^{}\delta _{q0}.$$ (10) Here $`u,v,x,y,w`$ are variational functions chosen to minimize the energy of the vacuum of the theory. 
The latter is given, up to an unimportant factor, by the squeezed state $$|\mathrm{\Phi }=\mathrm{exp}\left[\underset{q}{}\frac{v_q}{2u_q}\stackrel{}{a}_q^+\stackrel{}{a}_q^++\frac{y_q}{2x_q}b_q^+b_q^++\frac{w}{2x_0}b_0^+\right]|0.$$ (11) The dynamics gathered at this HFB mean-field appear in the following set of equations: $`_\pi ^2`$ $`=`$ $`\mu ^2+\lambda \left[{\displaystyle \frac{N+2}{N}}I_\pi +{\displaystyle \frac{1}{N}}I_\sigma +\sigma ^2\right],_\sigma ^2=\mu ^2+\lambda \left[I_\pi +{\displaystyle \frac{3}{N}}I_\sigma +3\sigma ^2\right],`$ $`{\displaystyle \frac{c}{\sigma }}`$ $`=`$ $`\mu ^2+\lambda \left[I_\pi +{\displaystyle \frac{3}{N}}I_\sigma +\sigma ^2\right].`$ (12) which consist of three coupled self-consistent BCS gap equations that give the condensate $`(\sigma )`$ and the two curvatures $`(_\pi ,_\sigma )`$ of the squeezed vacuum $`|\mathrm{\Phi }`$. Here $`I_\pi `$ and $`I_\sigma `$ are, respectively, the tadpoles for the pion and sigma quasi-particles (with the Hartree and Fock terms considered together). This is precisely the dynamics generated by the GFA where $`_\pi `$ stands for the asymptotic pion mass. This is, however, clearly wrong. Indeed, in the exact chiral limit ($`c=0`$) and for a finite condensate, the curvature $`_\pi `$ does not vanish (see also ). Therefore the squeezed state and equally the Gaussian functional can not be regarded as a viable vacuum for the theory since the Goldstone theorem is violated. However, the RPA ground state, as implicitly defined by $`\stackrel{}{Q}_\pi |RPA=0`$, is a good candidate for a vacuum with broken symmetry. The latter, in the case of the quasi-boson assumption (used here), is explicitly obtained by an unitary transformation of the squeezed state: $`|RPA=U_{unitary}|\mathrm{\Phi }`$ <sup>3</sup><sup>3</sup>3In the case of the QBA assumption, the way of building the unitary transformation is similar to the one sketched in footnote 1. Because of the infinite degrees of freedom, both vacuums are in fact inequivalent. . The curvature along the valley of this ground-state is given by the RPA eigenvalue $`m_\pi `$ and reads: $$m_\pi ^2=\frac{c}{\sigma }+\frac{2\lambda ^2}{N}\frac{\left[_\pi ^2_\sigma ^2\right]\left[\mathrm{\Sigma }_{\pi \sigma }(0)\mathrm{\Sigma }_{\pi \sigma }(m_\pi ^2)\right]}{1\frac{2\lambda ^2}{N}\mathrm{\Sigma }_{\pi \sigma }(m_\pi ^2)}$$ (13) where $`\mathrm{\Sigma }_{\pi \sigma }(p^2)`$ is the convoluted quasi-pion and quasi-sigma propagators (see figure 2). It is clear, from eq.(13), that the asymptotic pion is not only highly nonperturbative in the coupling $`\lambda `$ but also has a non-trivial content in $`N`$, in contrast to the HB-pion of section 2. It is, however, still a Goldstone mode, since for $`c=0`$ and $`\sigma 0`$ a zero pion mass exists. Furthermore it is easily verified that the Ward identity, $`D_\pi ^1(0)=\frac{c}{\sigma }`$, holds here too. Fig.2. Diagrammatic representation of the collected dynamics. In step I, the optimized quasi-particle basis is build as a BCS solution in the HFB-approximation. In step II, the quasi-particle states are scattered in a Lippmann-Schwinger equation. In step III, a mass operator is build out of the full vertex $`T_{\pi \sigma }`$ and inserted in a Dyson equation to generate the asymptotic Goldstone pion. ## 4 CONCLUSION There is obviously an urgent need for developing symmetry conserving nonperturbative approaches with tractable analytical solutions to QFT. 
I exposed here the concept of SCDM which is a promising tool that gives a helpful insight on the structure of the Fock space. Besides the HPM which leads to the $`1/N`$-expansion, I presented a second SCDM that relies on a systematic scheme which consists of mapping the canonical fields into the corresponding currents. The latter mapping was made dynamical in the quasi-particle RPA. The vacuum of the theory was then found to have more correlations than the vacuums of the $`1/N`$-expansion and the GFA alike. Extensions to higher RPA, finite temperature and baryon density as well as to richer dynamics are possible . Acknowledgements : I would like to thank G. Chanfray, P. Schuck and J. Wambach for their interest in this work and for their continuous support.
no-problem/9903/cond-mat9903390.html
ar5iv
text
# Thermal noise can facilitate energy conversion by a ratchet system ## Abstract Molecular motors in biological systems are expected to use ambient fluctuation. In a recent Letter \[Phys. Rev. Lett. 80, 5251 (1998)\], it was showed that the following question was unsolved, “Can thermal noise facilitate energy conversion by ratchet system?” We consider it using stochastic energetics, and show that there exist systems where thermal noise helps the energy conversion. Molecular motors in biological systems are known to operate efficiently. They convert molecular scale chemical energy into macroscopic mechanical work with high efficiency in water at room temperature, where the effect of thermal fluctuation is unavoidable. These experimental facts lead us to expect the existence of the system where thermal noise helps the operation. To find out the mechanism of these motors is interesting not only to biology but also to statistical and thermal physics. Recently inspired by observations on the molecular motors, many studies have been performed from the viewpoint of statistical physics. Much has been studied in ratchet models to consider how the directed motion appears from non-equilibrium fluctuation. One of the best known works among these ratchet models was by Magnasco. He studied “forced thermal ratchet,” and claimed that “there is a region of the operating regime where the efficiency is optimized at finite temperatures.” His claim is interesting because thermal noise is usually known to disturb the operation of machines. However, recently it was revealed that this claim was made incorrectly, because it was not based on the analysis of the energetic efficiency but only on that of the probability current, as most of the studies of ratchet systems were. The insufficient analysis was attributed to the lack of systematic method of energetics in systems described by Langevin equation. Recently a method what is called stochastic energetics was formalized, where the heat was described quantitatively in the frame of Langevin equation. Using this method, some attempts to discuss the energetics of these systems have been made. By the energetic formulation of the forced thermal ratchet using this stochastic energetics, the following was showed: The behavior of the probability current is qualitatively different than that of energetic efficiency. Thermal noise does not help the energy conversion by the ratchet at least on the condition where the claim was made. Therefore it was revealed that the following question had not yet been solved, “Can thermal noise facilitate operation of the ratchet?” In this Letter, we will show that the thermal noise certainly can facilitate the operation of the ratchet. Let us consider an over-dumped particle in an “oscillating ratchet”, where the amplitude of the 1-D ratchet potential is constant, but the degree of the symmetry breaking oscillates at frequency $`\omega `$ (Fig. 1). Langevin equation is as follows: $`{\displaystyle \frac{dx}{dt}}`$ $`=`$ $`{\displaystyle \frac{V(x,t)}{x}}+\xi (t),`$ (1) $`V(x,t)`$ $`=`$ $`V_p(x,t)+\mathrm{}x,`$ (2) where $`x`$, $`\mathrm{}`$ and $`V_p(x,t)`$ represent the state of the system, the load and the ratchet potential respectively (Fig. 2) . The white and Gaussian random force $`\xi (t)`$ satisfies $`\xi (t)=0`$ and $`\xi (t)\xi (t^{})=2ϵ\delta (tt^{})`$, where the angular bracket $``$ denotes the ensemble average. We use the unit $`m=\gamma =1`$. 
We assume that the potential $`V(x,t)`$ always has basins, and thus a particle cannot move over the potential peak without thermal noise. The ratchet $`V_p(x,t)`$ is assumed to satisfy the temporally and spatially periodic conditions, $`V_p(x,t+T)`$ $`=`$ $`V_p(x,t),`$ (3) $`V_p(x+L,t)`$ $`=`$ $`V_p(x,t),`$ (4) where $`L`$ is a spatial period of the ratchet potential, and $`T\left(\frac{2\pi }{\omega }\right)`$ is a temporal period of the potential modulation. By potential modulation, energy is introduced into the system, and the system converts it into work against the load. The Fokker-Planck equation corresponding to Eq. (1) is written $`{\displaystyle \frac{P(x,t)}{t}}`$ $`=`$ $`{\displaystyle \frac{J(x,t)}{x}},`$ (5) $`=`$ $`{\displaystyle \frac{}{x}}\left({\displaystyle \frac{V(x,t)}{x}}P(x,t)\right)+ϵ{\displaystyle \frac{^2P(x,t)}{x^2}},`$ (6) where $`P(x,t)`$ and $`J(x,t)`$ are a probability density and a probability current respectively. We apply the periodic boundary conditions on $`P(x,t)`$ and $`J(x,t)`$, $`P(x+L,t)`$ $`=`$ $`P(x,t),`$ (7) $`J(x+L,t)`$ $`=`$ $`J(x,t),`$ (8) where $`P(x,t)`$ is normalized in the spatial period $`L`$. Except for transient time, $`P(x,t)`$ and $`J(x,t)`$ satisfy the temporally periodic conditions $`P(x,t+T)`$ $`=`$ $`P(x,t),`$ (9) $`J(x,t+T)`$ $`=`$ $`J(x,t).`$ (10) According to the stochastic energetics , the heat $`\stackrel{~}{Q}`$ released to the heat bath during the period $`T`$ is given as, $$\stackrel{~}{Q}=_{x(0)}^{x(T)}\left\{\left(\frac{dx(t)}{dt}+\xi (t)\right)\right\}𝑑x(t).$$ (11) Inserting Eq. (1) into Eq. (11), we obtain the energy balance equation, $$\stackrel{~}{Q}=_0^T\frac{V(x(t),t)}{t}𝑑t_{V(0)}^{V(T)}𝑑V(x(t),t).$$ (12) The first term of RHS is the energy $`\stackrel{~}{E_{in}}`$ that the system obtain through the potential modulation, and the second term, $`_{V(0)}^{V(T)}𝑑V(x(t),t)`$, is the work $`\stackrel{~}{W}`$ that the system extracts from the input energy $`\stackrel{~}{E_{in}}`$, during the period $`T`$. The ensemble average of $`\stackrel{~}{W}`$ is given using Eqs. (2), (3) and (9) as, $`\stackrel{~}{W}`$ $`=`$ $`{\displaystyle _{V(0)}^{V(T)}}𝑑V(x(t),t)`$ (13) $`=`$ $`\mathrm{}{\displaystyle _0^T}𝑑t{\displaystyle _0^L}𝑑xJ(x,t)W,`$ (14) where one can find that $`W`$ represents the work against the load. Also, using Eqs. (2), (6) and the periodic conditions (Eqs. (3), (4), (8) and (9)), the ensemble average of $`E_{in}`$ is given as $`\stackrel{~}{E_{in}}`$ $`=`$ $`{\displaystyle _0^T}{\displaystyle \frac{V(x(t),t)}{t}}𝑑t`$ (15) $`=`$ $`{\displaystyle _0^T}𝑑t{\displaystyle _0^L}𝑑x\left({\displaystyle \frac{V_p(x,t)}{x}}\right)J(x,t)E_{in}.`$ (16) Taking an ensemble average, Eq. (12) yields, $`Q`$ $`=`$ $`E_{in}W,`$ (17) $`=`$ $`{\displaystyle _0^T}𝑑t{\displaystyle _0^L}𝑑x\left({\displaystyle \frac{V_p(x,t)}{x}}\right)J(x,t)`$ (19) $`\mathrm{}{\displaystyle _0^T}𝑑t{\displaystyle _0^L}𝑑xJ(x,t),`$ where $`Q\stackrel{~}{Q}`$. Therefore we obtain the efficiency $`\eta `$ of the energy conversion from the input energy $`E_{in}`$ into the work $`W`$, as follows, $$\eta =\frac{W}{E_{in}}=\frac{\mathrm{}_0^T𝑑t_0^L𝑑xJ(x,t)}{_0^T𝑑t_0^L𝑑x\left(\frac{V_p(x,t)}{x}\right)J(x,t)}.$$ (20) This expression can be estimated simply by solving the Fokker-Planck equation (Eq. (6)). We solve Eq. (6) numerically with the following ratchet potential as an example. It satisfies Eqs. (3), (4) and the condition that the degree of the asymmetry oscillates but the amplitude of the ratchet is constant. 
It will turn out that the results does not depend on the detailed shape of the potential. The ratchet potential is $$V_p(x,t)=\frac{1}{2}V_0\left(\mathrm{sin}\left(\frac{2\pi x}{L}+A(t)\mathrm{sin}\left(\frac{2\pi x}{L}+C_1\mathrm{sin}\left(\frac{2\pi x}{L}\right)\right)\right)+1\right),$$ (21) where $`A(t)=C_2+C_3\mathrm{sin}(\omega t)`$, and $`V_0`$, $`C_1`$, $`C_2`$, $`C_3`$ are constant. The results are shown in Fig. 3. We find that the efficiency is maximized at finite intensity of thermal noise (Fig. 3(a)). This shows that thermal noise can certainly facilitate the energy conversion. What is the reason for the behavior of the efficiency $`\eta `$? Let us see the work $`W`$ and the input energy $`E_{in}`$ as a function of the intensity of thermal noise. The work $`W`$, the numerator of Eq. (20), has a peak at finite intensity of thermal noise (Fig. 3(b)), because of the behavior of the flow during the period $`T`$, $`\overline{J}_0^T𝑑t_0^L𝑑xJ`$. In the absence of thermal noise ($`ϵ=0`$), the particle cannot move over the potential peak (which results in $`\overline{J}=0`$). As the intensity of thermal noise increases, the effect of non-equilibility emerges and it induces finite asymmetric flow against the load through the asymmetry of the ratchet. When thermal noise is large enough ($`ϵ\mathrm{}`$), the flow against load is no longer positive, because the effect of the ratchet disappears in this limit. Therefore the flow, and also the work, behave like Fig. 3(b) as a function of thermal noise intensity. The input energy $`E_{in}`$, the denominator of Eq. (20), remains finite at the limit $`ϵ0`$ (Fig. 3(c)), where all input energy dissipates because the oscillation of the local potential minimum makes finite local current even in the absence of thermal noise. Therefore the efficiency starts with $`\eta =0`$ at $`ϵ=0`$, and grows up as the intensity of thermal noise increases, then disappears as $`ϵ\mathrm{}`$. The efficiency has its peak at finite $`ϵ`$. As we have stated above, noise-induced flow and finite dissipation in the absence of thermal noise are the cause for the noise-induced energy conversion. Thus our finding will not depend on the detail of the shape of $`V_p(x,t)`$. We expect that thermal noise can facilitate the energy conversion in a variety of ratchet systems. Finally we discuss the forced thermal ratchet. The forced thermal ratchet is a system where a dissipative particle in a ratchet is subjected both to zero-mean external force and to thermal noise. The previous Letter was the first trial that discussed the energetics in the ratchet. For the analytical estimate, the discussion in that Letter was only on the quasi-static limit where the change of the external force is slow enough. In that case, thermal noise cannot facilitate operation of the ratchet. The energetic efficiency is monotonically decreasing function of thermal noise intensity, in contrast to the oscillating ratchet discussed above. However one notices that the external force of the forced thermal ratchet can also be written by oscillatory modulating potential, when the external force is periodic as in the literature, It is likely that the difference between the two cases, the oscillating ratchet and the forced thermal ratchet discussed in that Letter, is attributed to the condition of the system, namely, quasi-static or not. We suppose that thermal noise may facilitate the energy conversion in the forced thermal ratchet when the ratchet is not quasi-static. 
Langevin equation of the forced thermal ratchet is the same as Eq. (1), except for the potential $`V`$. In this case, the potential is $$V(x,t)=V_p(x)+\mathrm{}xF_{ex}(t)x,$$ (22) where $`V_p(x)`$, $`\mathrm{}`$ and $`F_{ex}`$ represent the ratchet potential, load and an external force respectively. The periodic external force $`F_{ex}(t)`$ satisfies $`F_{ex}(t+T)=F_{ex}(t)`$ and $`_0^T𝑑tF_{ex}(t)=0`$. The work $`W`$ is the same as Eq. (14), and the input energy $`E_{in}`$ is, $$E_{in}=_0^T𝑑t_0^L𝑑xF_{ex}(t)J(x,t).$$ (23) In quasi-static limit , the probability current $`J`$ does not depend on the coordinate $`x`$. Thus, when the current over the potential peak (that causes $`W`$) vanishes, the local current vanishes anywhere ($`J(x,t)=J(t)=0`$). However, if the system is not quasi-static, the behavior changes qualitatively. In this case, even when the current over the potential peak vanishes at $`ϵ=0`$, local current around the local potential minimum still remains finite. Thus there exists finite energy dissipation even in the limit $`ϵ0`$, which means that the input energy $`E_{in}`$ still remains finite value at this limit (Fig. 4(c)). Therefore, the efficiency is found to be zero at $`ϵ=0`$, and has a peak at finite $`ϵ`$ (Fig. 4(a)). The result is the same as that of the oscillating ratchet. It must be noted that the energetics can distinguish the behavior of the efficiency in the non-quasistatic case from that in quasi-static case, although the dependences of the flow $`\overline{J}`$ are the same between the two. We have discussed energetics of the ratchet system using the method of the stochastic energetics, and estimated the efficiency of energy conversion. We found that thermal noise can facilitate the operation of the ratchet system. The mechanism was briefly summarized as follows: Through the ratchet, potential modulation causes noise-induced flow against the load that results in the work. On the other hand, potential modulation with finite speed causes local current around the local potential minimum that makes finite dissipation even in the absence of thermal noise. Thus the efficiency is maximized at finite intensity of thermal noise. The result must be robust and independent of the detail of the potential, because only two factors are essential for the energy conversion activated by thermal noise: One is the noise-induced flow, and the other is the finite dissipation in the absence of thermal noise. Also in the two-state model that is an other type of ratchet systems, it was reported quite recently that the efficiency could be maximized at finite temperature. We expect it to be examined by experiment whether and how the real molecular motors use thermal noise. We would like to thank K. Sekimoto, J. Prost, A. Parmeggiani, F. Jülicher, S. Sasa, T. Fujieda and T. Tsuzuki for helpful comments. This work is supported by the Japanese Grant-in-Aid for Science Research Fund from the Ministry of Education, Science and Culture (No. 09740301) and Inoue Foundation for Science.
no-problem/9903/astro-ph9903280.html
ar5iv
text
# 1 Introduction ## 1 Introduction During the last decade there has been a drastic change in our picture of high energy emission from Active Galactic Nuclei (AGNs). The 2nd CGRO catalogue lists over 40 AGNs as strong sources of GeV gamma rays (Thompson et al 1995) while two AGNs, Mkn 421 and Mkn 501, have been detected in the TeV regime (Punch et al. 1992, Quinn et al. 1996). All of these AGNs belong to the category of blazars, which include OVV, flat radio sources, many of which exhibit superluminal motion. The fact that up to date not a single radio quiet AGN has been detected in the GeV/TeV regime (Lin et al. 1992) has put a first strong constraint on the theories of gamma-ray emission from AGNs, while, at the same time, has given arguments in favour of certain theories for AGN unification (Urry & Padovani 1995). On the theoretical front it became quickly apparent that the gamma-ray emission was connected with processes in the jet rather than in the core. While this general picture remains more or less undisputed, many models have been proposed for the high energy emission itself; these can be roughly divided in leptonic or hadronic in origin, depending on whether it is electrons or protons which are responsible for the gamma-ray emission. Thus while there are models which invoke protons as the ultimate source of high energy emission (Mannheim 1993 , Bednarek & Protheroe 1997a), the majority of the proposed models assume that the gamma-rays come from inverse Compton scattering of relativistic electrons on some soft photon targets. The source of these targets is still an open question and many possible origins have been proposed such as accretion disk photons (Dermer, Schlickeiser & Mastichiadis 1992), diffuse isotropic photons coming from regions such as the broad line clouds (Sikora, Begelman & Rees 1994), internally produced synchrotron photons (Maraschi, Ghisellini & Celotti et al. 1992; Marscher & Travis 1996, Inoue & Takahara 1996) or combinations thereof (Dermer, Sturner, Schlickeiser 1997) with each model giving rather similar spectral features and characteristics. A very interesting aspect which emerged from the intense gamma-ray monitoring of the sources was the discovery of fast variability. So in addition to the already known variability in the X-ray regime, Mkn 421 was discovered to exhibit TeV flares, the fastest of which had a duration of about 15 minutes (Gaidos et al. 1996). More powerful sources, such as 3C279, have shown variability in the GeV regime of the order of an hour (Hartman et al. 1996). These observations put new, interesting constraints on the theoretical models of high energy emission from AGNs since one expects the particle cooling times to be of the order of the flare itself. The imposed constraints become even tighter from recent results of multiwavelength campaigns which show certain trends in the evolution of flares along the EM spectrum. Thus Mkn 421 was discovered to exhibit quasisimultaneous variation in the keV and TeV regime (Macomb et al. 1995), while other energy regimes (most notably the GeV regime) remained virtually unaffected. The other AGN detected in TeV, Mkn 501, has shown similar trends (Catanese et al. 1997, Pian et al. 1998). The aforementioned observations provoked a flurry of models which addressed explicitly either the fast variability (Salvati, Spada & Pacini 1998), the multiwavelength spectrum (Ghisellini, Maraschi & Dondi 1997) or both (Mastichiadis & Kirk 1997). 
In §2 of the present article we will review the basic features of such models especially in the context of the so-called homogeneous synchrotron self-Compton models (SSC). In §3 we will address explicitly the problem of particle acceleration and we will present a simple way one can explain certain observations with the picture of accelerating/radiating particles. ## 2 Homogeneous Synchrotron-Self Compton Models ### 2.1 Spherical Models This class of models, based on the ideas first put forward by Jones, O’Dell & Stein (1974), has extensively been discussed elsewhere (Kirk & Mastichiadis 1997, Mastichiadis & Kirk 1997–henceforth MK97, and Ghisellini, Maraschi & Dondi 1997), however for the sake of completeness we give a brief overview here. As the above authors have shown, a homogeneous region containing magnetic fields and relativistic electrons can reproduce the observed spectrum of the blazar Mkn 421 whilst allowing for time variations on the scale of roughly 1 day. In order to address explicitly the temporal behaviour of the spectrum, MK97 used a set of time-dependent, spatially averaged kinetic equations for the electrons and photons adopting the approach outlined in Mastichiadis & Kirk (1995). The electrons are assumed to have a power-law uniform injection in a spherical source (blob) of radius $`R`$; the blob itself is supposed to move at some small angle $`\theta `$ to our line of sight with a bulk Lorentz factor $`\mathrm{\Gamma }`$. The electrons lose energy from synchrotron radiation on a magnetic field of strength $`B`$ and from inverse Compton radiation on the produced synchrotron photons. The so obtained electron distribution function is then convolved with the single electron synchrotron and inverse Compton emissivities and the overall photon spectrum is obtained after allowing for the possibility of photon-photon pair production –a process which turns out to be negligible for the parameters used. Seven independent parameters are needed to determine a stationary spectrum in this model. They are the Doppler-boosting factor $`\delta =[\mathrm{\Gamma }(1B_\mathrm{b}\mathrm{cos}\theta )]^1`$ (with $`B_\mathrm{b}c`$ the bulk velocity of the source), the size of the source $`R`$, its magnetic field $`B`$, the mean time during which particles are confined in the source $`t_{\mathrm{esc}}`$, and three parameters determining the injected relativistic electron distribution: its luminosity, or compactness $`\mathrm{}_\mathrm{e}`$, the spectral index $`s`$ and the maximum Lorentz factor of the electron distribution $`\gamma _{\mathrm{max}}`$. The inclusion of the particle escape time $`t_{\mathrm{esc}}`$ becomes necessary from the fact that the photon spectrum is rather flat between the radio and the infra-red region, implying that the radiating particles do not have time to cool significantly. It turns out that this fit leaves a free parameter which can be suitably chosen as either the Doppler factor of the blob $`\delta `$ or the timescale over which variability can be observed $`t_{\mathrm{var}}`$ (in sec). These two quantities are related by the scaling relation $`\delta =267t_{\mathrm{var}}^{1/4}`$ ($`t_{\mathrm{var}}`$ expressed in sec). For the reported variability of about one day ($`t_{\mathrm{var}}=10^5`$ sec–Macomb et al 1995) one readily finds $`\delta =15`$, which is close to the usually assumed values of the Doppler factor. The spectrum of the flare can be fitted in a time-dependent fashion (i.e. 
before complete cooling can be achieved) by changing $`\gamma _{\mathrm{max}}`$ by a factor of a few (MK97). ### 2.2 Slab models Recently, very rapid variations in the TeV flux of Mkn 421 have been reported (Gaidos et al 1996). As it was stated above, in the framework of the homogeneous SSC spherical models this implies that acceptable fits can be found only by increasing the Doppler factor $`\delta `$. Thus, as the above scaling formula between $`\delta `$ and $`t_{\mathrm{var}}`$ suggests, a choice of $`t_{\mathrm{var}}=1000`$ sec (so as to agree with the observed flare timescale) would mean $`\delta `$ of the order of 50, a value which is above those indicated by observations of apparent superluminal motion (Vermeulen & Cohen 1994). An alternative way of approach can be understood as follows: Assuming that the electrons cool due to synchrotron radiation and that their cooling time $`t_{\mathrm{c},3}`$ is given in units of $`10^3`$ sec we can write $`t_{\mathrm{c},3}10^6\gamma ^1B^2\delta ^1`$ sec where $`\gamma `$ is the Lorentz factor of the particle in the rest frame of the emitting plasma and $`B`$ is the magnetic field in gauss. The highest energy photons (in units of 10 keV) emitted by these particles are $`\nu _{10}10^{12}\gamma ^2B\delta `$. These relations imply $`B1t_{\mathrm{c},3}^{2/3}\nu _{10}^{1/3}\delta ^{1/3}`$ gauss and $`\gamma 10^6t_{\mathrm{c},3}^{1/3}\nu _{10}^{2/3}\delta ^{1/3}`$. Therefore the maximum photon energy radiated by such electrons is $`\nu _{\mathrm{max}}.5t_{\mathrm{c},3}^{1/3}\nu _{10}^{2/3}\delta ^{2/3}`$ TeV. Assuming furthermore that the source has equal amounts of energies in magnetic fields and photons (as seems to be implied by the source’s equal amounts of luminosities in the radio to X-ray and soft to very high energy gamma-rays regimes), we obtain an expression for the aspect ratio of the source region $`\eta =d/R`$ where $`d=310^{13}\delta t_{\mathrm{c},3}`$ cm is a thickness measured in the rest frame of the source and $`R`$ is defined such that $`\pi R^2`$ is the area of the source when projected onto the plane of the sky. Fig.1 shows a fit to the multiwavelength spectrum of Mkn 421 as this was given in Macomb (1995, 1996). A fit to the same data can be obtained as well by assuming a spherical source but with either a long variability timescale $`t_{\mathrm{var}}`$ and a ‘canonical’ Doppler factor $`\delta `$ or with a short $`t_{\mathrm{var}}`$ and a high $`\delta `$ (see MK97). The present fit for the low state was obtained for $`t_{\mathrm{var}}=500s`$, $`\delta =20`$, $`B=0.4`$ G, $`\gamma _{\mathrm{max}}=1.410^5`$, $`s=1.7`$, $`\mathrm{}_\mathrm{e}=1.510^5`$ and $`t_{\mathrm{esc}}=50t_{\mathrm{cr}}`$. The high state was obtained by increasing $`\gamma _{\mathrm{max}}`$ by a factor of 4 while leaving the other parameters unchanged. The corresponding values of $`d`$ and $`R`$ are $`9.210^{13}`$ cm and $`2.810^{15}`$ cm respectively, implying an aspect ratio of $`\eta .03`$. As it was pointed in MK97 (and can also be seen from Fig. 1) changes in $`\gamma _{\mathrm{max}}`$ result in large variations in the X and TeV regime but these changes are not especially prominent at lower frequencies. An alternative way of producing a flare is to consider an increase in the amplitude $`Q`$ of the injected relativistic electrons while leaving the other parameters unchanged. Figure 2 shows the TeV flare produced by increasing $`Q`$ by a factor of 12 within one crossing time and consequently decreasing it to its original value. 
The produced flare can fit quite well the flare reported by Gaidos et al. (1996). Figure 3 shows the corresponding low and high states of Mkn 501. The parameters used are $`t_{\mathrm{var}}=500s`$, $`\delta =15`$, $`B=0.5`$ G, $`\gamma _{\mathrm{max}}=1.610^5`$, $`s=1.8`$, $`\mathrm{}_\mathrm{e}=.710^5`$ and $`t_{\mathrm{esc}}=100t_{\mathrm{cr}}`$. The high state was obtained by increasing $`\gamma _{\mathrm{max}}`$ by a factor of 40 (to accommodate the fact that Mkn 501 during one flare in 1997 was observed by OSSE up to energies of 200 keV– Pian et al. 1998). The corresponding values of $`d`$ and $`R`$ are $`7.110^{13}`$ cm and $`1.110^{16}`$ cm respectively, implying an aspect ratio of $`\eta .007`$. From the above it is evident that fast variability can be accommodated in the homogeneous self-Compton models only in the case where the emitting source is a thin structure with a crossing time comparable to the cooling time of the highest energy particles. This can lead naturally to the shock-in-jet model (Marscher & Gear 1985) which we present in the next section. ## 3 Particle Acceleration in Blazar Jets Let us consider a thin shock wave moving down a cylindrically symmetric jet (Marscher & Gear 1985, Kirk, Rieger, & Mastichiadis 1998–henceforth KRM) with a velocity $`u_\mathrm{s}`$ in the rest frame of the jet. Let also particles be accelerated by the shock through a first order Fermi scheme and subsequently escape downstream where they radiate. Following KRM we will restrict the present analysis only to synchrotron losses and radiation. The equation that governs the number of particles $`N(\gamma )`$ with Lorentz factors between $`\gamma `$ and $`\gamma +d\gamma `$ in the acceleration zone can be written $`{\displaystyle \frac{N}{t}}+{\displaystyle \frac{}{\gamma }}\left[\left({\displaystyle \frac{\gamma }{t_{\mathrm{acc}}}}\beta _\mathrm{s}\gamma ^2\right)N\right]+{\displaystyle \frac{N}{t_{\mathrm{esc}}}}`$ $`=`$ $`Q\delta (\gamma \gamma _0)`$ (1) (Kirk, Melrose & Priest 1994), where $`\beta _\mathrm{s}`$ $`=`$ $`{\displaystyle \frac{4}{3}}{\displaystyle \frac{\sigma _\mathrm{T}}{m_\mathrm{e}c^2}}\left({\displaystyle \frac{B^2}{8\pi }}\right).`$ (2) with $`\sigma _\mathrm{T}`$ the Thomson cross section. The first term in brackets in the above equation describes acceleration at the rate $`t_{\mathrm{acc}}^1`$, the second describes the rate of energy loss due to synchrotron radiation averaged over pitch-angle (because of the assumed isotropy of the distribution) in a magnetic field $`B`$. Particles are assumed to escape from this region at an energy independent rate $`t_{\mathrm{esc}}^1`$, and to be injected into the acceleration process with a (low) Lorentz factor $`\gamma _0`$ at a rate $`Q`$ particles per second. Note that the concept of this ‘acceleration zone’ differs from the emission region in the homogeneous model discussed in the previous section in two important respects: a) particles are injected at low energy and continuously accelerated and b) very little radiation is emitted by a particle whilst in the acceleration zone. A further difference comes from the fact that the high energy cut-off of the electron distribution is given now by a detailed balance between the acceleration and loss rates at the Lorentz factor $`\gamma _{\mathrm{max}}=1/(\beta _\mathrm{s}t_{\mathrm{acc}})`$. The variability features therefore do not depend only on the electron cooling timescale but on the interplay between acceleration and loss timescales. 
For $`\gamma <\gamma _{\mathrm{max}}`$ the acceleration rate exceeds the synchrotron loss rate while for $`\gamma >\gamma _{\mathrm{max}}`$ the distribution vanishes. To describe the kinetic equation in the radiation zone we follow Ball & Kirk (1992) and use a coordinate system at rest in the radiating plasma. The shock front then provides a moving source of electrons, which subsequently suffer energy losses, but are assumed not to be transported in space. The kinetic equation governing the differential density $`\mathrm{d}n(x,\gamma ,t)`$ of particles in the range $`\mathrm{d}x`$, $`\mathrm{d}\gamma `$ is then $`{\displaystyle \frac{n}{t}}{\displaystyle \frac{}{\gamma }}(\beta _\mathrm{s}\gamma ^2n)`$ $`=`$ $`{\displaystyle \frac{N(\gamma ,t)}{t_{\mathrm{esc}}}}\delta (xx_\mathrm{s}(t))`$ (3) where $`x_\mathrm{s}(t)`$ is the position of the shock front at time $`t`$. For a shock which starts to accelerate (and therefore ‘inject’) particles at time $`t=0`$ and position $`x=0`$ and moves at constant speed $`u_\mathrm{s}`$, the solution of Eq. (3) for $`\gamma >\gamma _0`$ is $`n(x,\gamma ,t)`$ $`=`$ $`{\displaystyle \frac{a}{u_\mathrm{s}t_{\mathrm{esc}}\gamma ^2}}`$ (4) $`\left[{\displaystyle \frac{1}{\gamma }}\beta _\mathrm{s}\left(t{\displaystyle \frac{x}{u_\mathrm{s}}}\right){\displaystyle \frac{1}{\gamma _{\mathrm{max}}}}\right]^{(t_{\mathrm{acc}}t_{\mathrm{esc}})/t_{\mathrm{esc}}}`$ $`\mathrm{\Theta }\left[\gamma _1(x/u_\mathrm{s})(1/\gamma \beta _\mathrm{s}t+\beta _\mathrm{s}x/u_\mathrm{s})^1\right],`$ where $`\gamma _1(t)`$ is given by $`\gamma _1(t)`$ $`=`$ $`\left({\displaystyle \frac{1}{\gamma _{\mathrm{max}}}}+\left[{\displaystyle \frac{1}{\gamma _0}}{\displaystyle \frac{1}{\gamma _{\mathrm{max}}}}\right]e^{t/t_{\mathrm{acc}}}\right)^1.`$ (5) To obtain the synchrotron emissivity as a function of position, time and frequency we convolve the density $`n`$ with the synchrotron Green’s function $`P(\nu ,\gamma )`$. At a point $`x=X`$ ($`>u_\mathrm{s}t`$) on the symmetry axis of the source at time $`t`$ the specific intensity of radiation in the $`\stackrel{}{x}`$ direction depends on the retarded time $`\overline{t}=tX/c`$ and is given by $`I(\nu ,\overline{t})={\displaystyle d\gamma P(\nu ,\gamma )dxn(x,\gamma ,\overline{t}+x/c)}`$ (6) At this point we stress that in this model one needs to integrate the differential electron density over the spatial coordinate since, in contrast to the homogeneous models, the acceleration region is distinct from the cooling region. ### 3.1 Spectral Signatures of Acceleration As in the case of the homogeneous models we first seek parameters that could fit specific blazar spectra in a steady state and then we try to induce a flare by changing some parameter of the fit. As an example, we show in Fig. 4 observations of the object Mkn 501. The gamma-ray emission of this object is not included in this figure, since, according to §2, it is not thought to arise as synchrotron radiation. The form of the spectrum is very close to that given by Meisenheimer & Heavens (1987), who used an analytic solution to the stationary diffusion/advection equation, including synchrotron losses. Four free parameters are used to produce this fit: 1. the low frequency spectral index $`\alpha =0.25`$, which corresponds to taking $`t_{\mathrm{acc}}=t_{\mathrm{esc}}/2`$ 2. the characteristic synchrotron frequency emitted by an electron of the maximum Lorentz factor as seen in the observers frame (taken to be $`1.3\times 10^{18}`$Hz) 3. 
the spatial extent of the emitting region, which determines the position of the spectral break at roughly $`5\times 10^{12}`$Hz 4. the absolute flux level. Since we restrict our model to the synchrotron emission of the accelerated particles, it is not possible independently to constrain quantities such as the Doppler boosting factor, or the magnetic field. Similarly, the frequency below which synchrotron self-absorption modifies the optically thin spectrum is not constrained. Nevertheless, this model of the synchrotron emission makes predictions concerning the spectral variability in each of the three characteristic frequency ranges which can be identified in Fig. 4. These ranges are generic features of any synchrotron model, so that the predicted variability can easily be applied to the synchrotron emission of other blazars. They are a) the low frequency region, where the particles have not had time to cool before leaving the source (this is the region with $`\alpha =0.25`$ in Fig. 4, below the break at $`5\times 10^{12}`$Hz) b) the region between the break and the maximum flux, where the particles have had time to cool, but where the cooling rate is always much slower than the acceleration rate and the spectrum is close to $`\alpha =0.75`$, and c) the region around and above the flux maximum at roughly $`10^{17}`$Hz, where the acceleration rate is comparable to the cooling rate. Variability or flaring behaviour can arise for a number of reasons. When the shock front overruns a region in the jet in which the local plasma density is enhanced, the number of particles picked up and injected into the acceleration process might be expected to increase. In addition, if the density change is associated with a change in the magnetic field strength, the acceleration timescale might also change, and, hence, the maximum frequency of the emitted synchrotron radiation. Considering the case in which the acceleration timescale remains constant, we can compute the emission in a straightforward manner. An increase of the injection rate by a factor $`1+\eta _\mathrm{f}`$ for a time $`t_\mathrm{f}`$ is found by setting $`Q(t)`$ $`=`$ $`Q_0\mathrm{for}t<0\mathrm{and}t>t_\mathrm{f}`$ (7) $`Q(t)`$ $`=`$ $`(1+\eta _\mathrm{f})Q_0\mathrm{for}0<t<t_\mathrm{f}`$ (8) Using $`\eta _\mathrm{f}=1`$, $`t_\mathrm{f}=10t_{\mathrm{acc}}`$ and $`u_\mathrm{s}=c/10`$, we show the resulting emission at a frequency $`\nu =\nu _{\mathrm{max}}/100`$ in Fig. 5. In the case of Mkn 501, this corresponds to a frequency of about $`10^{16}`$ Hz, which lies between the infra-red and X-ray regions where the spectral index is close to $`\alpha =0.75`$. Also shown in this figure is the temporal behaviour of the spectral index, as determined from the ratio of fluxes at $`0.01\nu _{\mathrm{max}}`$ and $`0.05\nu _{\mathrm{max}}`$, through the flare. When plotted against the flux at the lower frequency, the spectral index exhibits a characteristic loop-like pattern, which is tracked in the clockwise sense by the system. This type of behaviour is well-known and has been observed at different wavelengths in several sources e.g. OJ287 (Gear, Robson & Brown 1986), PKS 2155-304 (Sembay et al. 1993) and Mkn 421 (Takahashi et al. 1996). It arises whenever the slope is controlled by synchrotron cooling so that information about injection propagates from high to low energy (Tashiro et al. 1995). If the system is observed closer to the maximum frequency, where the cooling and acceleration times are equal, the picture changes. 
Here information about the occurrence of a flare propagates from lower to higher energy, as particles are gradually accelerated into the radiating window. Such behaviour is depicted in Fig.6, where the same flare is shown at frequencies which are an order of magnitude higher than in Fig.5. In the case of Mkn 501, the frequency range is close to $`10^{18}`$ Hz. This time the loop is traced anticlockwise. Such behaviour, although not as common, has occasionally been observed in the case of PKS 2155-304 (Sembay et al. 1993). ## 4 Summary In this paper we have presented a selective account of recent results on AGN variability within the context of a) the homogeneous syncro-self-Compton and b) the diffusive particle acceleration. We have shown that the SSC models give good overall fits to the multiwavelength Mkn 421 and Mkn 501 spectra and can explain the major flares of these objects such as the ones reported by Macomb et al (1995) and Pian et al. (1998) respectively by increasing only one parameter of the fit, namely the high energy cutoff of the injected electron distribution. This type of flare is especially prominent at the high end of the photon distribution, i.e. in the X- and TeV regime, leaving other energy regimes (most notably the GeV gamma-rays) practically unaffected, giving thus an explanation of why EGRET detected neither of these two major outbursts. The very fast variation of the source Mkn 421, however, as reported by Gaidos et al. (1996) poses a problem for the homogeneous SSC models: in order for the models to satisfy simultaneously i) the high total luminosity, ii) the very fast variability and iii) the transparency to TeV radiation (Bednarek & Protheroe 1997b) one needs either to invoke a high value of the Doppler boosting factor or to abandon the assumptions about a spherical source in favour of a laminar source geometry. As shown above, this new manifestation of the SSC model can provide us with good fits to the AGN observations both in spectral and temporal behaviour (see, for examples, Figures 1-3). This picture can lead naturally to the shock-in-jet model, i.e. to the picture of a shock advancing down a jet, accelerating, at the same time, particles. Approaching the acceleration by a first order Fermi scheme we have shown that one can get once again remarkably good fits to the multiwavelength spectra of AGN at least from the radio to the X-ray regime (since we have restricted our analysis only to the synchrotron spectra–Fig.4). This approach improves upon the assumptions of homogeneous SSC model, as presented in § 2, mainly by replacing the instantaneous electron injection with the concept of an acceleration timescale. It is therefore the interplay between the acceleration and energy loss timescales that provides us with the different flare behaviours shown in Figures 5 and 6 (for a more examples of this the reader is referred to Kirk, Rieger & Mastichiadis 1999). ### Acknowledgments AM would like to thank the organisers of the Workshop for their hospitality. This work was supported by the European Commission under the TMR program, contract number FMRX-CT98-0168. ## 5 References Ball L.T., Kirk J.G. 1992, ApJ 396, L39 Bednarek, W., Protheroe, R.J. 1997a, MNRAS 287, L9 Bednarek, W., Protheroe, R.J. 1997b, MNRAS 292, 646 Catanese, M. et al. 1997, ApJ 487, L143 Dermer, C.D., Schlickeiser, R., Mastichiadis, A. 1992, A&A 256, L27 Dermer, C.D., Sturner, S.J., Schlickeiser, R. 1997, ApJS 109, 103 Gaidos, J.A. et al. 
1996, Nature 383, 318 Gear, W.K., Robson, E.I., Brown, L.M.J. 1986, Nature 324, 546 Ghisellini, G., Maraschi, L., Dondi, L. 1996, A&A Suppl 120C, 503 Hartman, R.C. et al. 1996, ApJ 461, 698 Inoue,S., Takahara, F. 1996, ApJ 463, 555 Jones, T.W., O’Dell, S.L., Stein, W.A. 1974, ApJ 188, 353 Kirk, J.G., Mastichiadis, A. 1997, in ‘Frontier Objects in Astrophysics and Particle Physics’ eds.: F. Giovannelli, G. Mannocchi, Conference Proceedings Vol. 57, page 263 Italian Physical Society (Bologna) Kirk, J.G., Melrose, D.B., Priest, E.R. Plasma astrophysics, eds. A.O. Benz, T.J.-L. Courvoisier, Springer, Berlin Kirk, J.G., Rieger, F.M., Mastichiadis, A. 1998, A&A 333, 452 (KRM) Kirk, J.G., Rieger, F.M., Mastichiadis, A. 1999, to appear in proceedings of the conference ‘BL Lac Phenomenon’, eds. L.O. Takalo, A. Sillanpää, Turku 1998 Lin, Y.C. et al. 1992, ApJ 416, L53 Macomb, D.J. et al. 1995 ApJ 449, L99 Macomb, D.J. et al. 1996 ApJ 459, L111 (Erratum) Mannheim, K. 1993, A&A 269, 67 Maraschi L., Ghisellini G., Celotti A. 1992, ApJ 397, L5 Marscher, A.P., Gear, W.K. 1985, ApJ 298, 114 Marscher, A.P., Travis, J.P., 1996 A&A Suppl 120C, 537 Mastichiadis A., Kirk, J.G., 1995 A&A 295, 613 Mastichiadis A., Kirk, J.G., 1997 A&A 320, 19 Meisenheimer, K., Heavens, A.F. 1987, MNRAS 225, 335 Pian, E. et al. 1998, ApJ 492, L17 Punch, M. et al. 1992, Nature 358, 477 Quinn, J. et al. 1996, ApJ 456, 83 Salvati, M., Spada, M., Pacini, F. 1998, ApJ 495, L19 Sembay, S. et al. 1993, ApJ 404, 112 Sikora, M., Begelman, M.C., Rees, M.J. 1994, ApJ 421, 153 Takahashi, T. et al. 1996, ApJ 470, L89 Tashiro, M. et al. 1995, PASJ 47, 131 Thompson, D.J. et al. 1995, ApJS 101, 259 Urry, C.M. & Padovani, P. 1995, PASP 107, 803 Vermeulen, R.C. & Cohen, M.H. 1994, ApJ 430, 467
no-problem/9903/solv-int9903006.html
ar5iv
text
# 1 Introduction ## 1 Introduction The quantum version of the inverse scattering method paved the way for the discovery and solution of new exactly solvable two-dimensional lattice models. Of particular interest are non-homogeneous vertex models whose transfer matrix are constructed by mixing different local vertex operators. These operators, known as Lax $``$-operators, define the local structure of Boltzmann weights of the system. A sufficient condition for integrability is that all the $``$-operators should satisfy the Yang-Baxter equation with the same invertible $`R`$-matrix. More precisely we have , $$R(\lambda \mu )_{𝒜i}(\lambda )_{𝒜i}(\mu )=_{𝒜i}(\mu )_{𝒜i}(\lambda )R(\lambda \mu )$$ (1) The auxiliary space $`𝒜`$ corresponds to the horizontal degrees of freedom of the vertex model in the square lattice. The operator $`_{𝒜i}(\lambda )`$ is a matrix in the auxiliary space $`𝒜`$ and its matrix elements are operators on the quantum space $`{\displaystyle \underset{i=1}{\overset{L}{}}}V_i`$, where $`V_i`$ represents the vertical space of states and $`i`$ the sites of a one-dimensional lattice of size $`L`$. The tensor product in formula (1) is taken with respect the auxiliary space and $`\lambda `$ is a spectral parameter. One way of producing a mixed vertex model is by choosing $``$-operators intertwining between different representations $`V_i`$ of a given underlying algebra. This approach has first been used by Andrei and Johannesson to study the Heisenberg model in the presence of an impurity of spin-S and by De Vega and Woynarovich to construct alternating Heisenberg spin chains. Subsequently, several papers have discussed physical properties of the latter models as well as considered generalizations to include other Lie algebras and superalgebras . In this paper, which we are pleased to dedicate to James McGuire, we construct and solve mixed vertex models whose $``$-operators can be expressed in terms of the generators of the braid-monoid algebra . In particular, we show that the recent results by Abad and Rios and by Links and Foerster for the $`SU(3)`$ and $`gl(2|1)`$ algebras can be reobtained from such algebraic approach. The novel feature as compared to these works is that we are able to diagonalize the corresponding transfer matrix by using a standard variant of the nested Bethe ansatz approach. Furthermore, this algebraic approach allows us to derive extensions to the $`SU(N)`$ and $`Sl(N|M)`$ algebras in a more direct way. This paper is organized as follows. In section 2 we recall the basics of the braid-monoid algebra and show how it produces two different $``$-operators satisfying the Yang-Baxter equation (1). In section 3 a mixed vertex model based on the $`SU(N)`$ algebra is diagonalized by the algebraic Bethe ansatz method. ## 2 Braid-monoid L-operators It is well known that the braid algebra produces the simplest rational $`R`$-matrix solution of the Yang-Baxter equation. In this case the braid operator becomes a generator of the symmetric group and the $`R`$-matrix is given by $$R(\lambda )=I_i+\lambda b_i$$ (2) where $`I_i`$ is the identity and $`b_i`$ is the braid operator acting on the sites $`i`$ and $`i+1`$ of a one-dimensional chain. 
Here we choose the braid operator as the graded permutation between $`N`$ bosonic and $`M`$ fermionic degrees of freedom $$b_i=\underset{a,b=1}{\overset{N+M}{}}(1)^{p(a)p(b)}e_{ab}^ie_{ba}^{i+1}$$ (3) where $`p(a)`$ is the Grassmann parity of the $`a`$-th degree of freedom, assuming values $`p(a)=0`$ for bosons and $`p(a)=1`$ for fermions. The $`R`$-matrix (2,3) has a null Grassmann parity, and consequently can produce a vertex operator $`_{𝒜i}(\lambda )`$ satisfying either the Yang-Baxter equation or its graded version . In the latter case, the tensor product in formula (1) should be taken in the graded sense(supertensor product) . In the latter case the associated $``$-operator is $$_{𝒜i}(\lambda )=\lambda I_i+b_i$$ (4) The next step is to search for extra $``$-operators which should satisfy equation (1) with the $`R`$-matrix (2,3). As we shall see below, this is possible when we enlarge the braid algebra by including a Temperley-Lieb operator $`E_i`$. This operator satisfies the following relations $$\begin{array}{c}E_i^2=qE_i,E_iE_{i\pm 1}E_i=E_i,E_iE_j=E_jE_i|ij|2\hfill \end{array}$$ (5) where $`q`$ is a $`c`$-number. It turns out that the braid operator (3) together with the monoid $`E_i`$ close the braid-monoid algebra at its degenerated point ($`b_i^2=I_i`$). The extra relations between $`b_i`$ and $`E_i`$ closing the degenerated braid-monoid algebra are (see e.g ) $$\begin{array}{c}b_iE_i=E_ib_i=\widehat{t}E_i\hfill \\ E_ib_{i\pm 1}b_i=b_{i\pm 1}b_iE_{i\pm 1}=E_iE_{i\pm 1}\hfill \end{array}$$ (6) where the constant $`\widehat{t}`$ assumes the values $`\pm 1`$. Now we have to solve the Yang-Baxter equation (1) with the $`R`$-matrix (2,3) assuming the following general ansatz for the $``$-operator $$_{𝒜i}(\lambda )=f(\lambda )I_i+g(\lambda )b_i+h(\lambda )E_i$$ (7) where $`f(\lambda )`$, $`g(\lambda )`$ and $`h(\lambda )`$ are functions to be determined as follows. Substituting this ansatz in equation (1), and taking into account the braid-monoid relations, we find two classes of solutions. The first one has $`h(\lambda )=0`$ and clearly corresponds to the standard solution already given in equation (4). The second one is new and is giving by $`g(\lambda )=0`$. We find that the new $``$-operator, after normalizing the solution by function $`h(\lambda )`$, is given by $$\stackrel{~}{}_{𝒜i}(\lambda )=\widehat{t}(\lambda +\eta )I_iE_i$$ (8) where $`\eta `$ is an arbitrary constant. This constant can be fixed imposing unitary property, i.e $`\stackrel{~}{}_{𝒜i}(\lambda )\stackrel{~}{}_{𝒜i}(\lambda )I_i`$. By using this property and the first equation (5) we find $$\eta =\frac{q}{2\widehat{t}}$$ (9) After having found two distincts $``$-operators which satisfy the Yang-Baxter algebra with the same $`R`$-matrix, the construction of an integrable mixed vertex model becomes standard . The monodromy matrix of a vertex model mixing $`L_1`$ operators of type $`_{𝒜i}(\lambda )`$ and $`L_2`$ operators of type $`\stackrel{~}{}_{𝒜i}(\lambda )`$ is written as $$𝒯^{L_1,L_2}(\lambda )=\overline{}_{𝒜L_1+L_2}(\lambda )\overline{}_{𝒜L_1+L_21}(\lambda )\mathrm{}\overline{}_{𝒜1}(\lambda )$$ (10) where $`\overline{}_{𝒜i}(\lambda )`$ is defined by $$\overline{}_{𝒜i}(\lambda )=\{\begin{array}{cc}_{𝒜i}(\lambda )\hfill & \text{if }i\{\beta _1,\mathrm{},\beta _{L_1}\}\text{ }\hfill \\ \stackrel{~}{}_{𝒜i}(\lambda )\hfill & \text{ otherwise}\hfill \end{array}$$ (11) and the partition $`\{\beta _1,\mathrm{},\beta _{L_1}\}`$ denotes a set of integer indices assuming values in the interval $`1\alpha _iL_1+L_2`$. 
Although the integrability does not depend on how we choose such partition, the construction of $`local`$ conserved charges commuting with the respective transfer matrix does. One interesting case is when the number of operators $`_{𝒜i}(\lambda )`$ and $`\stackrel{~}{}_{𝒜i}(\lambda )`$ are equally distributed ($`L_1=L_2=L`$) in an alternating way in the monodromy matrix . In this case, the first non-trivial charge, known as Hamiltonian, is given in terms of nearest neighbor and next-to-nearest neighbor interactions. More specifically, the expression for the Hamiltonian in the absence of fermionic degrees of freedom($`M=0`$) is $$H=\underset{modd}{\overset{2L}{}}\stackrel{~}{}_{m1,m}(0)+\underset{ieven}{\overset{2L}{}}\stackrel{~}{}_{n2,n1}(0)\stackrel{~}{}_{n,n1}(0)P_{n2,n}$$ (12) where in the computations it was essential to use the unitary property $`\stackrel{~}{}_{𝒜i}(0)\stackrel{~}{}_{𝒜i}(0)=\frac{q^2}{4}I_i`$ at the regular point $`\lambda =0`$. We also recall that $`P_{ij}`$ denotes permutation between sites $`i`$ and $`j`$(equation (3) with $`M=0`$). We close this section discussing explicit representations for the monoid $`E_i`$. Such representations can be found in terms of the invariants of the superalgebra $`Osp(N|2M)`$, where $`N`$ and $`2M`$ are the number of bosonic and fermionic degrees of freedom, respectively. Here we shall consider a representation that respects the $`U(1)`$ invariance, which will be very useful in Bethe ansatz analysis. Following ref. the monoid is written as $$E_i=\underset{a,b,c,d=1}{\overset{N+2M}{}}\alpha _{ab}\alpha _{cd}^1e_{ac}^ie_{bd}^{i+1}$$ (13) and the matrix $`\alpha `$ has the following block anti-diagonal structure $$\alpha =\left(\begin{array}{ccc}O_{N\times M}& O_{N\times M}& _{N\times N}\\ O_{M\times M}& _{M\times M}& O_{M\times N}\\ _{M\times M}& O_{M\times M}& O_{M\times N}\end{array}\right)$$ (14) where $`_{k_1\times k_2}`$ and $`O_{k_1\times k_2}`$ are the anti-diagonal and the null $`k_1\times k_2`$ matrices, respectively. We also recall that for $`\widehat{t}=1`$ the sequence of grading is $`f_1\mathrm{}f_Mb_1\mathrm{}b_Nf_{M+1}\mathrm{}f_{2M}`$ and the Temperley-Lieb parameter $`q`$ is the difference between the number of bosonic and fermionic degrees of freedom $$q=N2M$$ (15) Finally, we remark that new $`\stackrel{~}{}_{𝒜i}(\lambda )`$ operators are obtained only when $`N+2M3`$. Indeed, for the special cases $`N=2`$, $`M=0`$ and $`N=0`$, $`M=1`$ it is possible to verify that such operator has the structure of the 6-vertex model, which is precisely the same of $`_{𝒜i}(\lambda )`$, modulo trivial phases and scaling. In the cases $`N=3`$,$`M=0`$ and $`N=1`$,$`M=1`$ we reproduce, after a canonical transformation, the $``$-operators used recently in the literature to construct mixed $`SU(3)`$ and $`tJ`$ models. As we shall see in next section, however, an important advantage of our approach is that representation (14) is the appropriate one to allows us to perform standard nested Bethe Ansatz diagonalization. ## 3 Bethe ansatz diagonalization In this section we look at the problem of diagonalization of the transfer matrix $`T^{L_1,L_2}(\lambda )=Tr_𝒜[𝒯^{L_1,L_2}(\lambda )]`$, namely $$T^{L_1,L_2}(\lambda )|\mathrm{\Phi }=\mathrm{\Lambda }(\lambda )|\mathrm{\Phi }$$ (16) by means of the quantum inverse scattering method. For sake of simplicity we restrict ourselves to the case of mixed vertex models in the absence of fermionic degrees of freedom. 
In this case the $``$-operators $`_{𝒜i}(\lambda )`$ and $`\stackrel{~}{}_{𝒜i}(\lambda )`$ are given by formulae (4) and (8) with $`M=0`$, $`\widehat{t}=1`$ and $`\eta =N/2`$. An important object in this framework is the reference state $`|0`$ we should start with in order to construct the full Hilbert space $`|\mathrm{\Phi }`$. The structure of the $``$-operators suggests us to take the standard ferromagnetic pseudovacuum as our reference state, i.e $$|0=\underset{i=1}{\overset{L_1+L_2}{}}|0_i,|0_i=\left(\begin{array}{c}1\\ 0\\ \mathrm{}\\ 0\end{array}\right)_N$$ (17) where the index $`N`$ represents the length of the vectors $`|0_i`$. It turns out that this state is an exact eigenvector of the transfer matrix, since both operators $`_{𝒜i}(\lambda )`$ and $`\stackrel{~}{}_{𝒜i}(\lambda )`$ satisfy the following important triangular properties $$_{𝒜i}(\lambda )|0_i=\left(\begin{array}{ccccc}a(\lambda )|0_i& & & \mathrm{}& \\ 0& b(\lambda )|0_i& 0& \mathrm{}& 0\\ \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}\\ 0& 0& 0& \mathrm{}& b(\lambda )|0_i\end{array}\right)_{N\times N}$$ (18) and $$\stackrel{~}{}_{𝒜i}(\lambda )|0_i=\left(\begin{array}{ccccc}\stackrel{~}{b}(\lambda )|0_i& 0& 0& \mathrm{}& \\ 0& \stackrel{~}{b}(\lambda )|0_i& 0& \mathrm{}& \\ \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}\\ 0& 0& 0& \mathrm{}& \stackrel{~}{a}(\lambda )|0_i\end{array}\right)_{N\times N}$$ (19) where the symbol $``$ stands for non-null values that are not necessary to evaluate in this algebraic approach. The functions $`a(\lambda )`$, $`b(\lambda )`$, $`\stackrel{~}{a}(\lambda )`$ and $`\stackrel{~}{b}(\lambda )`$ are obtained directly from expressions (8,9), and they are given by $$a(\lambda )=\lambda +1,b(\lambda )=\lambda ,\stackrel{~}{a}(\lambda )=\lambda +\frac{N}{2}1,\stackrel{~}{b}(\lambda )=\lambda +\frac{N}{2}$$ (20) To make further progress we have to write an appropriate ansatz for the monodromy matrix $`𝒯^{L_1,L_2}(\lambda )`$ in the auxiliary space $`𝒜`$. The triangular properties (18,19) suggest us to seek for standard structure used in nested Bethe ansatz diagonalization of $`SU(N)`$ vertex models , $$𝒯^{L_1,L_2}(\lambda )=\left(\begin{array}{cc}A(\lambda )& B_i(\lambda )\\ C_i(\lambda )& D_{ij}(\lambda )\end{array}\right)_{N\times N}$$ (21) where $`i,j=1,\mathrm{},N1`$. As a consequence of properties (18,19) we derive how the monodromy matrix elements act on the reference state. The fields $`B_i(\lambda )`$ play the role of creation operators while $`C_i(\lambda )`$ are annihilation fields, i.e $`C_i(\lambda )|0=0`$. Furthermore, the action of the “diagonal” operators $`A(\lambda )`$ and $`D_{ij}(\lambda )`$ are given by $$A(\lambda )|0=[a(\lambda )]^{L_1}[\stackrel{~}{b}(\lambda )]^{L_2}|0$$ (22) $$D_{ii}(\lambda )|0=[b(\lambda )]^{L_1}[\stackrel{~}{b}(\lambda )]^{L_2}|0\text{ for }iN1,D_{N1,N1}(\lambda )|0=[b(\lambda )]^{L_1}[\stackrel{~}{a}(\lambda )]^{L_2}|0$$ (23) $$D_{i,j}(\lambda )|0=0\text{ for }ij\text{ and }jN1,D_{i,N1}(\lambda )|00\text{ for }iN1$$ (24) We observe that although matrix $`D_{ij}(\lambda )|0`$ is non-diagonal it is up triangular, which is an important property to carry on higher level Bethe ansatz analysis. In order to construct other eigenvectors we need to use the commutation relations between the monodromy matrix elements which are obtained by extending the Yang-Baxter relation to the monodromy matrix ansatz (21). 
Due to the structure of the $`R`$-matrix, the commutation rules are the same of that already known for isotropic $`SU(N)`$ models , and the most useful relations for subsequent derivations are $$A(\lambda )B_i(\mu )=\frac{a(\mu \lambda )}{b(\mu \lambda )}B_i(\mu )A(\lambda )\frac{1}{b(\mu \lambda )}B_i(\lambda )A(\mu )$$ (25) $$D_{ij}(\lambda )B_k(\mu )=\frac{1}{b(\lambda \mu )}B_p(\mu )D_{iq}(\lambda )r^{(1)}(\lambda \mu )_{pq}^{jk}\frac{1}{b(\lambda \mu )}B_j(\lambda )D_{ik}(\mu )$$ (26) $$B_i(\lambda )B_j(\mu )=B_p(\mu )B_q(\lambda )r^{(1)}(\lambda \mu )_{pq}^{ij}$$ (27) where $`r^{(1)}(\lambda )_{pq}^{ij}`$ are the elements of the $`R`$-matrix $`I_i+\lambda b_i`$ on the subspace $`(N1)\times (N1)`$. The eigenvectors are given in terms of the following linear combination $$|\mathrm{\Phi }_{m_1}(\lambda _1^{(1)},\mathrm{},\lambda _{m_1}^{(1)})=B_{a_1}(\lambda _1^{(1)})\mathrm{}B_{a_{m_1}}(\lambda _{m_1}^{(1)})^{a_{m_1}\mathrm{}a_1}$$ (28) where the components $`^{a_{m_1}\mathrm{}a_1}`$ are going to be determined a posteriori. By carring on the diagonal fields $`A(\lambda )`$ and $`D_{ii}(\lambda )`$ over the above $`m_1`$-particle state we generate the so-called wanted and unwanted terms. The wanted terms are those proportional to $`|\mathrm{\Phi }_{m_1}(\lambda _1^{(1)},\mathrm{},\lambda _{m_1}^{(1)})`$ and they contribute directly to the eigenvalue $`\mathrm{\Lambda }^{L_1,L_2}(\lambda ,\{\lambda _i^{(1)}\})`$. These terms are easily obtained by keeping only the first term of the commutation rules (25,26) each time we turn $`A(\lambda )`$ and $`D_{ii}(\lambda )`$ over one of the $`B_{a_i}(\lambda _i^{(1)})`$ component. The result of this computations leads us to the following expression $`T^{L_1,L_2}(\lambda )|\mathrm{\Phi }_{m_1}(\lambda _1^{(1)},\mathrm{},\lambda _{m_1}^{(1)})`$ $`=`$ $`[a(\lambda )]^{L_1}[\stackrel{~}{b}(\lambda )]^{L_2}{\displaystyle \underset{i=1}{\overset{m_1}{}}}{\displaystyle \frac{a(\lambda _i^{(1)}\lambda )}{b(\lambda _i^{(1)}\lambda )}}|\mathrm{\Phi }_{m_1}(\lambda _1^{(1)},\mathrm{},\lambda _{m_1}^{(1)})`$ (29) $`+{\displaystyle \underset{i=1}{\overset{m_1}{}}}{\displaystyle \frac{1}{b(\lambda \lambda _i^{(1)})}}B_{b_1}(\lambda _1^{(1)})\mathrm{}B_{b_{m_1}}(\lambda _{m_1}^{(1)})T^{(1)}(\lambda ,\{\lambda _j^{(1)}\})_{b_1\mathrm{}b_{m_1}}^{a_1\mathrm{}a_{m_1}}^{a_{m_1}\mathrm{}a_1}`$ $`+\mathrm{unwanted}\mathrm{terms}`$ where $`T^{(1)}(\lambda ,\{\lambda _j^{(1)}\})`$ is the transfer matrix of the following inhomogeneous auxiliary vertex model $$T^{(1)}(\lambda ,\{\lambda _i^{(1)}\})_{b_1\mathrm{}b_{m_1}}^{a_1\mathrm{}a_{m_1}}=r^{(1)}(\lambda \lambda _1^{(1)})_{b_1d_1}^{aa_1}r^{(1)}(\lambda \lambda _2^{(1)})_{b_2d_2}^{d_1a_2}\mathrm{}r^{(1)}(\lambda \lambda _{m_1}^{(1)})_{b_{m_1}d_{m_1}}^{d_{m_11}a_{m_1}}D_{ad_{m_1}}(\lambda )|0$$ (30) The unwanted terms arise when one of the variables $`\lambda _i^{(1)}`$ of the $`m_1`$-particle state is exchanged with the spectral parameter $`\lambda `$. It is known how to collect these in a close form, thanks to the commutation rule (27) which makes possible to relate different ordered multiparticle states. 
We find that the unwanted terms of kind $`B_{a_1}(\lambda _1^{(1)})\mathrm{}B_{a_i}(\lambda )\mathrm{}B_{a_{m_1}}(\lambda _{m_1}^{(1)})`$ are cancelled out provided we impose further restriction to the $`m_1`$-particle state rapidities $`\lambda _i^{(1)}`$, namely $`\left[a(\lambda _i^{(1)})\right]^{L_1}\left[\stackrel{~}{b}(\lambda _i^{(1)})\right]^{L_2}{\displaystyle \underset{j=1ji}{\overset{m_1}{}}}b(\lambda _i^{(1)}\lambda _j^{(1)}){\displaystyle \frac{a(\lambda _j^{(1)}\lambda _i^{(1)})}{b(\lambda _j^{(1)}\lambda _i^{(1)})}}^{a_{m_1}\mathrm{}a_1}=`$ $`T^{(1)}(\lambda =\lambda _i^{(1)},\{\lambda _j^{(1)}\})_{a_1\mathrm{}a_{m_1}}^{b_1\mathrm{}b_{m_1}}^{b_{m_1}\mathrm{}b_1},i=1,\mathrm{},m_1`$ (31) Now it becomes necessary to introduce a second Bethe ansatz in order to diagonalize the auxiliary transfer matrix $`T^{(1)}(\lambda ,\{\lambda _i^{(1)}\})`$. The only difference as compare to standard cases is the presence of the “gauge” matrix $`g_{ab}=D_{ab}(\lambda )|0`$. It turns out that this problem is still integrable since the tensor product $`gg`$ commutes with the auxiliary $`R`$-matrix $`r^{(1)}(\lambda )`$ <sup>1</sup><sup>1</sup>1 This occurs because the off-diagonal elements $`D_{i,N1}(\lambda )`$ belongs to a commutative ring.(see e.g ref. ). From equations (23,24) we also note that this gauge does not spoil the triangular form of the monodromy matrix associated to $`T^{(1)}(\lambda ,\{\lambda _i^{(1)}\})`$ when it acts on the usual ferromagnetic state, $$|0^{(1)}=\underset{i=1}{\overset{m_1}{}}\left(\begin{array}{c}1\\ 0\\ \mathrm{}\\ 0\end{array}\right)_{N1}$$ (32) By defining $`\mathrm{\Lambda }^{(1)}(\lambda ,\{\lambda _i^{(1)}\})`$ as the eigenvalue of the auxiliary transfer matrix $`T^{(1)}(\lambda ,\{\lambda _i^{(1)}\})`$, i.e $$T^{(1)}(\lambda ,\{\lambda _i^{(1)}\})_{a_1\mathrm{}a_{m_1}}^{b_1\mathrm{}b_{m_1}}^{b_{m_1}\mathrm{}b_1}=\mathrm{\Lambda }^{(1)}(\lambda ,\{\lambda _i^{(1)}\})^{a_{m_1}\mathrm{}a_1}$$ (33) we derive from equation (29) that the eigenvalue of $`T^{L_1,L_2}(\lambda )`$ is given by $$\mathrm{\Lambda }^{L_1,L_2}(\lambda ,\{\lambda _i^{(1)}\})=[a(\lambda )]^{L_1}[\stackrel{~}{b}(\lambda )]^{L_2}\underset{i=1}{\overset{m_1}{}}\frac{a(\lambda _i^{(1)}\lambda )}{b(\lambda _i^{(1)}\lambda )}+\underset{i=1}{\overset{m_1}{}}\frac{1}{b(\lambda \lambda _i^{(1)})}\mathrm{\Lambda }^{(1)}(\lambda ,\{\lambda _i^{(1)}\})$$ and the nested Bethe ansatz equations (31) become $$\left[a(\lambda _i^{(1)})\right]^{L_1}\left[\stackrel{~}{b}(\lambda _i^{(1)})\right]^{L_2}\underset{j=1ji}{\overset{m_1}{}}b(\lambda _i^{(1)}\lambda _j^{(1)})\frac{a(\lambda _j^{(1)}\lambda _i^{(1)})}{b(\lambda _j^{(1)}\lambda _i^{(1)})}=\mathrm{\Lambda }^{(1)}(\lambda =\lambda _i^{(1)},\{\lambda _j^{(1)}\}),i=1,\mathrm{},m_1$$ (34) In order to find the auxiliary eigenvalue $`\mathrm{\Lambda }^{(1)}(\lambda =\lambda _i^{(1)},\{\lambda _j^{(1)}\})`$ we have to introduce a new set of variables $`\{\lambda _1^{(2)},\mathrm{},\lambda _{m_2}^{(2)}\}`$ which parametrize the eigenvectors of $`T^{(1)}(\lambda ,\{\lambda _i^{(1)}\})`$. The structure of the commutations rules as well as the eigenvector ansatz (28) remains basically the same, and the expression for $`\mathrm{\Lambda }^{(1)}(\lambda =\lambda _i^{(1)},\{\lambda _j^{(1)}\})`$ will again depend on another auxiliary inhomogeneous vertex model having $`(N2)`$ states per link. We repeat this procedure until we reach the $`(N2)`$th step, where the auxiliary problem becomes of $`6`$-vertex type. 
Since this nesting approach is well known in the literature , we here only present our final results. The eigenvalue of the transfer matrix $`T^{L_1,L_2}(\lambda )`$ is given by $`\mathrm{\Lambda }^{L_1,L_2}(\lambda ;\{\lambda _j^{(1)}\},\mathrm{},\{\lambda _j^{(N1)}\})`$ $`=`$ $`[a(\lambda )]^{L_1}[\stackrel{~}{b}(\lambda )]^{L_2}{\displaystyle \underset{j=1}{\overset{m_1}{}}}{\displaystyle \frac{a(\lambda _j^{(1)}\lambda )}{b(\lambda _j^{(1)}\lambda )}}`$ (35) $`+[b(\lambda )]^{L_1}[\stackrel{~}{b}(\lambda )]^{L_2}{\displaystyle \underset{l=1}{\overset{N2}{}}}{\displaystyle \underset{j=1}{\overset{m_l}{}}}{\displaystyle \frac{a(\lambda \lambda _j^{(l)})}{b(\lambda \lambda _j^{(l)})}}{\displaystyle \underset{j=1}{\overset{m_{l+1}}{}}}{\displaystyle \frac{a(\lambda _j^{(l+1)}\lambda )}{b(\lambda _j^{(l+1)}\lambda )}}`$ $`+[b(\lambda )]^{L_1}[\stackrel{~}{a}(\lambda )]^{L_2}{\displaystyle \underset{j=1}{\overset{m_{N1}}{}}}{\displaystyle \frac{a(\lambda \lambda _j^{(N1)})}{b(\lambda \lambda _j^{(N1)})}}`$ while the nested Bethe ansatz equations are given by $`\left[{\displaystyle \frac{a(\lambda _i^{(1)})}{b(\lambda _i^{(1)})}}\right]^{L_1}={\displaystyle \underset{j=1,ji}{\overset{m_1}{}}}{\displaystyle \frac{a(\lambda _i^{(1)}\lambda _j^{(1)})}{a(\lambda _j^{(1)}\lambda _i^{(1)})}}{\displaystyle \underset{j=1}{\overset{m_2}{}}}{\displaystyle \frac{a(\lambda _j^{(2)}\lambda _i^{(1)})}{b(\lambda _j^{(2)}\lambda _i^{(1)})}}`$ $`{\displaystyle \underset{j=1,ji}{\overset{m_l}{}}}{\displaystyle \frac{a(\lambda _i^{(l)}\lambda _j^{(l)})}{a(\lambda _j^{(l)}\lambda _i^{(l)})}}={\displaystyle \underset{j=1}{\overset{m_{l1}}{}}}{\displaystyle \frac{a(\lambda _i^{(l)}\lambda _j^{(l1)})}{b(\lambda _i^{(l)}\lambda _j^{(l1)})}}{\displaystyle \underset{j=1}{\overset{m_{l+1}}{}}}{\displaystyle \frac{b(\lambda _j^{(l+1)}\lambda _i^{(l)})}{a(\lambda _j^{(l+1)}\lambda _i^{(l)})}},l=2,\mathrm{},N2`$ $`\left[{\displaystyle \frac{\stackrel{~}{a}(\lambda _i^{(N1)})}{\stackrel{~}{b}(\lambda _i^{(N1)})}}\right]^{L_2}={\displaystyle \underset{j=1,ji}{\overset{m_{N1}}{}}}{\displaystyle \frac{a(\lambda _j^{(N1)}\lambda _i^{(N1)})}{a(\lambda _i^{(N1)}\lambda _j^{(N1)})}}{\displaystyle \underset{j=1}{\overset{m_{N2}}{}}}{\displaystyle \frac{a(\lambda _i^{(N1)}\lambda _j^{(N2)})}{b(\lambda _i^{(N1)}\lambda _j^{(N2)})}}`$ (38) We would like to close this paper with the following remarks. First we note that it is possible to perform convenient shifts in the Bethe ansatz rapidities, $`\{\lambda _i^{(p)}\}\{\lambda _i^{(p)}\}\frac{p}{2}`$, in order to present the results (36-39) in a more symmetrical form. For instance, after these shifts, the nested Bethe ansatz equations can be compactly written as $$\left[\frac{\lambda _i^{(a)}\frac{\delta _{a,w}}{2}}{\lambda _i^{(a)}+\frac{\delta _{a,w}}{2}}\right]^{L_w}=\underset{b=1}{\overset{r}{}}\underset{k=1,ki}{\overset{m_b}{}}\frac{\lambda _i^{(a)}\lambda _k^{(b)}\frac{C_{a,b}}{2}}{\lambda _i^{(a)}\lambda _k^{(b)}+\frac{C_{a,b}}{2}},i=1,\mathrm{},m_a;a=1,\mathrm{},N1$$ (39) where $`C_{ab}`$ is the Cartan matrix of the $`A_N`$ Lie algebra and $`w=1,N1`$. We note that for $`N=3`$ we recover the results by Abad and Rios . It is also straightforward to extend the above Bethe ansatz results to the superalgebra $`Sl(N|M)`$. The $`N=0`$ case is the simplest one, since the only modification is the addition of minus signs in functions $`\stackrel{~}{a}(\lambda )`$ and $`\stackrel{~}{b}(\lambda )`$. 
Next remark concerns the physical meaning of the $``$-operators as scattering $`S`$-matrices. It is known that $`_{𝒜i}(\lambda )`$ might represent the scattering matrix of particles belonging to the fundamental representation of $`SU(N)`$. The extra solution $`\stackrel{~}{}_{𝒜i}(\lambda )`$, however, should be seen as the forward scattering amplitude between a particle and an antiparticle. In fact, it is possible to show that the whole particle-antiparticle scattering(even backward amplitudes) can be closed in terms of the braid-monoid algebra. We leave a detailed analysis of this possibility, their Bethe ansatz properties as well as generalizations to include trigonometric solutions for a forthcoming paper . ## Acknowledgements This work was supported by Fapesp ( Fundação de Amparo à Pesquisa do Estado de S. Paulo) and Cnpq (Conselho Nacional de Desenvolvimento Científico e Tecnológico).
no-problem/9903/astro-ph9903425.html
ar5iv
text
# The asymptotic collapsed fraction in an eternal universe ## 1 INTRODUCTION Explaining the origin and evolution of galaxies and large-scale structure and determining the fundamental properties of the background universe are the primary goals of modern cosmology. The most common assumption is that the structure we observe today (density structures such as galaxies, clusters, and voids, as well as velocity structures such as the Virgocentric Infall or that associated with the Great Attractor), results from the growth, by gravitational instability, of small-amplitude, primordial density fluctuations present in the universe at early times. These fluctuations are normally assumed to originate from a Gaussian random process. In this case, they can be described as a superposition of plane-wave density fluctuations with random phases. One important property of these initial conditions is that overdense and underdense regions occupy equal volumes (in other words, their filling factors are 1/2). Since the density is nearly uniform at early times, overdense and underdense regions also contain the same mass. The gravitational instability scenario makes the following predictions: overdense regions, because of their larger gravitational field, will decelerate faster than the background universe, resulting in an increase of their density contrast relative to the background. If this deceleration is large enough, these regions will turn back and recollapse on themselves, resulting in the formation of positive density structures such as galaxies and clusters. The opposite phenomenon occurs in underdense regions. These regions decelerate more slowly than the background universe, thus getting more underdense, and eventually become the cosmic voids we observe today. In this paper, we investigate the asymptotic collapsed fraction, defined as the fraction of the matter in the universe that will eventually end up inside collapsed objects. Obviously, this makes sense only in an unbound universe. Naively, we might think that the asymptotic collapsed fraction will be equal to 1/2, since half the matter is located in overdense regions at early times. This ignores two important effects. First, some overdense regions might be unbound, and second, matter located inside underdense regions could be accreted by collapsed objects. The importance of these effects depends upon the particular background universe in which these structures form. Consider, for instance, an Einstein-de Sitter universe. In this case, the background density is exactly equal to the critical density, and therefore all overdense regions are bound, and will eventually collapse. Furthermore, it can easily be shown that any mass element located inside an underdense region is gravitationally bound to at least one overdense region. Consequently, all the matter inside underdense regions will eventually be accreted by collapsed objects, and the asymptotic collapsed fraction is unity. This is not true, however, for a background universe with mean density below that of an Einstein-de Sitter universe. Interest in models of the background universe in which the matter density is less than the critical value for a flat, matter-dominated universe is now particularly strong, on the basis of several lines of evidence which can be reconciled most economically if $`\mathrm{\Omega }_0<1`$, where $`\mathrm{\Omega }_0`$ is the present mean matter density in units of the critical value. 
(For reviews and references, see, e.g., Ostriker & Steinhardt 1995; Turner 1998; Krauss 1998; Bahcall 1999). Arguments in favor of a flat universe with $`\mathrm{\Omega }_0<1`$ in which a nonzero cosmological constant makes up the difference between the matter density and the critical density have been significantly strengthened recently by measurements of the redshifts and distances of Type Ia SNe, which are best explained if the universe is expanding at an accelerating rate, consistent with $`\mathrm{\Omega }_0=0.3`$ and $`\lambda _0=0.7`$, where $`\lambda _0`$ is the vacuum energy density in units of the critical density at present (Garnavich et al. 1998a; Perlmutter et al. 1998). When combined with measurements of the angular power spectrum of the cosmic microwave background (CMB) anisotropy, these Type Ia SN results can be used to restrict further the range of models for the mass-energy content of the universe. In particular, while the SN data alone are better fit by a flat model with $`\mathrm{\Omega }_0<1`$ and a positive cosmological constant than by an open, matter-dominated model with no cosmological constant (e.g. Perlmutter et al. 1998), the combined information from Type Ia SNe and the CMB significantly strengthens the case for a flat model with cosmological constant over that for an open, matter-dominated model (e.g. Garnavich et al. 1998b). Exotic alternatives to the well-known cosmological constant which might also contribute positively to the total cosmic energy density and thereby similarly affect the mean expansion rate have also been discussed, sometimes referred to as “quintessence” models (e.g. Turner & White 1997; Caldwell, Dave, & Steinhardt 1998). Such models can also explain the presently accelerating expansion rate indicated by the Type Ia SNe, while satisfying several other constraints which suggest that $`\mathrm{\Omega }_0<1`$. The results from Type I SNe and CMB anisotropy combined can be used to constrain the range of equations of state allowed for this other component of energy density $`\rho _x`$, with pressure $`p_x=w_x\rho _xc^2`$. The current results favor a flat universe with $`\mathrm{\Omega }_0<1`$ and an equation of state for the second component with a value of $`w_x1`$ (where $`w_x=1`$ for a cosmological constant) favored over larger values of $`w_x`$ (such as would describe topological defects like domain walls, strings, or textures), although the restriction of the range allowed for $`w_x`$ is not yet very precise (Garnavich et al. 1998b). Consider now an unbound universe with a matter density parameter $`\mathrm{\Omega }`$ with present value $`\mathrm{\Omega }_0<1`$. In such a universe, the critical density exceeds the mean density, and therefore some overdense regions are unbound. The asymptotic collapsed fraction could still be unity if all the matter in overdense, unbound regions plus all the matter in underdense regions is accreted. This will never be the case, however. In such a universe, the density parameter $`\mathrm{\Omega }`$ is near unity at early times, and structures can grow. Eventually $`\mathrm{\Omega }`$ drops significantly below unity, and a phenomenon known as “freeze-out” occurs. In this regime, density fluctuations do not grow unless their density is already significantly larger than the background density. After freeze-out, accretion by collapsed objects will be very slow, and most of the unaccreted matter will remain unaccreted. The asymptotic collapsed fraction will therefore be less than unity. 
The asymptotic collapsed fraction is a quantity which is relevant to modern attempts to interpret observations of cosmic structure in at least two ways. For one, anthropic reasoning can be used to calculate a probability distribution for the observed values of some fundamental property of the universe, such as the cosmological constant, in models in which that property takes a variety of values with varying probabilities (Efstathiou 1995; Vilenkin 1995; Weinberg 1996; Martel, Shapiro, & Weinberg 1998, hereafter MSW). Examples of such models include those in which a state vector is derived for the universe which is a superposition of terms with different values of the fundamental property (e.g. Hawking 1983, 1984; Coleman 1988) and chaotic inflation in which the observed big bang is just one of an infinite number of expanding regions in each of which the fundamental property takes a different value (Linde 1986, 1987, 1988). In models like these, the probability of observing any particular value of the property is conditioned by the existence of observers in those “subuniverses” in which the property takes that value. This probability is proportional to the fraction of matter which is destined to condense out of the background into mass concentrations large enough to form observers – i.e. the asymptotic collapsed fraction for collapse into objects of this mass or greater. MSW used this approach to offer a possible resolution of the infamous “cosmological constant problem,” one of the most serious crises of quantum cosmology. Estimates of the size of a relic vacuum energy density $`\rho _V`$ from quantum fluctuations in the early universe suggest a value which is many orders of magnitude larger than the cosmic mass density today, and no cancellation mechanism has yet been identified which would reduce this to zero, let alone one so finely tuned as to leave the small but nonzero value suggested by recent astronomical observations (i.e. where the net $`\rho _V`$ is the sum of a contribution from quantum fluctuations and a term $`\mathrm{\Lambda }/8\pi G`$, where $`\mathrm{\Lambda }`$ is the cosmological constant which appears in Einstein’s field equations) (Weinberg 1989; Carroll, Press, & Turner 1992). MSW calculated the relative likelihood of observing any given value of $`\rho _V`$ within the context of the flat CDM model with nonzero cosmological constant, with the amplitude and shape of the primordial power spectrum in accordance with current data on the CMB anisotropy. Underlying this calculation was the notion that values of $`\rho _V`$ which are large are unlikely to be observed since such values of $`\rho _V`$ tend to suppress gravitational instability and prevent galaxy formation. MSW found that a small, positive cosmological constant in the range suggested by astronomical evidence is actually a reasonably likely value to observe, even if the a priori probability distribution that a given subuniverse has some value of the cosmological constant does not favor such small values. Similar reasoning can, in principle, be used to assess the probability of our observing some range of values for other properties of the universe, too, in the absence of a theory which uniquely determines their values (e.g. the value of $`\mathrm{\Omega }_0`$; Garriga, Tanaka, & Vilenkin 1998). In such calculations, the asymptotic collapsed fraction is a fundamental ingredient. 
Aside from its importance in anthropic probability calculations like these, in which one needs to know the state of the universe in the infinite future, the asymptotic collapsed fraction is also relevant as an approximation to the present universe, for the following reason. In an Einstein-de Sitter universe, in which there is no freeze-out, the asymptotic collapsed fraction is unity. In any other unbound universe, there will be a freeze-out at some epoch. If we live in such a universe, the freeze-out epoch could be either in the future or in the past. However, if recent attempts to reconcile a number of the observed properties of our universe with theoretical models of the background universe and of structure formation by invoking an unbound universe with $`\mathrm{\Omega }_0<1`$ are correct, then the freeze-out epoch is much more likely to be in the past. If it were in the future, then the matter density parameter today would still be close to unity, e.g. $`\mathrm{\Omega }_0>0.9`$ or $`\mathrm{\Omega }_0>0.99`$.<sup>1</sup><sup>1</sup>1This is a subjective notion, since there is no precise definition of the freeze-out epoch. For a flat, universe with positive cosmological constant, for example, spherical density fluctuations must have fractional overdensity $`\delta =(\rho \overline{\rho })/\overline{\rho }(729\rho _V/500\overline{\rho })^{1/3}`$ in order to undergo gravitational collapse, where $`\rho _V`$ is the vacuum energy density and $`\overline{\rho }`$ is the mean matter density (Weinberg 1987). In this case, once $`\overline{\rho }`$ drops to a value of the order of $`\rho _V`$ or less, only density enhancements which are already nonlinear will remain gravitationally bound. As such, the “freeze-out” epoch corresponds roughly to the time when $`\overline{\rho }\rho _V`$. Recent estimates from measurements of distant Type Ia SNe, however, suggest values which, if interpreted in terms of this model, are closer to $`\overline{\rho }\rho _V/2`$ (Garnavich et al. 1998a; Perlmutter et al. 1998), so “freeze-out” began in the past for this model. If so, then the observable consequences of the eventual departure of the background model from Einstein-de Sitter would be largely in the future, as well. As such, the strong motivation for considering models with $`\mathrm{\Omega }_0<1`$ in order to explain a number of the observed properties of our universe as described above would vanish. In short, the current interest in a universe with $`\mathrm{\Omega }_0<1`$ is consistent with a value of $`\mathrm{\Omega }_0`$ small enough that the epoch of freeze-out is largely in the past. In that case, the asymptotic collapsed fraction should be a good approximation to the present collapsed fraction. This quantity is of interest, for example, since, by combining it with the observed luminosity density of the universe, we can get a handle on the average mass-to-light ratio of the universe, and the amount of dark matter. The complementary quantity, the uncollapsed fraction, is of interest, too, since it determines the amount of matter left behind as the intergalactic medium, observable in absorption and by its possible contributions to background radiation. A knowledge of the amount of matter left uncollapsed is also necessary in order to interpret observations of gravitational lensing of distant sources by large-scale structure. 
In addition, as we shall see, the dependence of the asymptotic collapsed fraction on the equation of state of the background universe will imply that theoretical tools, such as the Press-Schechter approximation, require adjustment in order to take proper account of the effect of “freeze-out” on the rate of cosmic structure formation. In this paper, we compute the asymptotic collapsed fraction for unbound universes, using an analytical model involving spherical top-hat density perturbations surrounded by shells of compensating underdensity, applied statistically to the case of Gaussian random noise density fluctuations, a model introduced by MSW for the particular case of a flat universe with a cosmological constant. We consider a generic cosmological model with 2 components, a nonrelativistic component whose mean energy density varies as $`\overline{\rho }\propto a^{-3}`$, where $`a`$ is the FRW scale factor, and a uniform, nonclumping component whose energy density varies as $`\rho _\mathrm{X}\propto a^{-n}`$, where $`n`$ is non-negative. In terms of the equations of state for these two components, we can write this as $`p_i=w_i\rho _ic^2`$, where $`\rho _i`$ and $`p_i`$ are the mean energy density and pressure contributed by component $`i`$. For the nonrelativistic matter component, $`w=0`$, while for component X, $`-1\le w\le 0`$ is the physically allowed range in models in which the universe had a big bang in its past and the energy of component X was not more important in the past than that of matter, which corresponds to $`n=3(1+w)`$ and the range $`0\le n\le 3`$. The latter condition is necessary in order to be consistent with observations of cosmic structure and the CMB anisotropy today. Special cases of this model include models with a cosmological constant ($`n=0`$), domain walls ($`n=1`$), infinite strings ($`n=2`$), massive neutrinos ($`n=3`$), and radiation background ($`n=4`$) (although, as explained above, we shall exclude values of $`n>3`$ in our treatment here). This generic model, or similar ones, have been discussed previously by many authors (e.g. Fry 1985; Charlton & Turner 1987; Silveira & Waga 1994; Martel 1995; Dodelson, Gates, & Turner 1996; Turner & White 1997; Martel & Shapiro 1998). Recently, such models have been referred to as “quintessence” models (e.g. Caldwell, Dave, & Steinhardt 1998) or as models involving “dark energy.”<sup>2</sup><sup>2</sup>2We note that in some models, the X component is not entirely nonclumping: For massive neutrinos, for example, the assumption that the X-component is nonclumping is a very good approximation only for fluctuations of wavelength smaller than the “free-streaming,” or “damping,” length of the neutrinos and for epochs such that longer wavelength fluctuations are still in the linear amplitude phase. The Friedmann equation for this model is $$\left(\frac{\dot{a}}{a}\right)^2=H_0^2\left[(1-\mathrm{\Omega }_0-\mathrm{\Omega }_{\mathrm{X0}})\left(\frac{a}{a_0}\right)^{-2}+\mathrm{\Omega }_0\left(\frac{a}{a_0}\right)^{-3}+\mathrm{\Omega }_{\mathrm{X0}}\left(\frac{a}{a_0}\right)^{-n}\right],$$ (1) where $`H`$ is the Hubble parameter, $`\mathrm{\Omega }=\overline{\rho }/\rho _c`$, $`\mathrm{\Omega }_\mathrm{X}=\rho _\mathrm{X}/\rho _c`$, $`\rho _c=3H^2/8\pi G`$, and subscripts zero indicate present values of time-varying quantities. In §2, we derive the conditions that the cosmological parameters must satisfy in order for the background universe to qualify as an eternal, unbound universe. 
In §3, we compute the critical density contrast $`\delta _c`$, defined as the minimum density contrast a spherical perturbation must have in order to be bound. In §4, we derive the asymptotic collapsed fraction $`f_{c,\mathrm{\infty }}`$ in an unbound universe, using the model introduced by MSW involving compensated spherical top-hat density fluctuations. In §5, we compute $`f_{c,\mathrm{\infty }}`$ using the Press-Schechter approximation, instead. In §6, we compare the predictions of the two models. As we shall see, this comparison points up a fundamental limitation to the validity of the ad hoc, over-all correction factor of 2 by which the Press-Schechter integral over positive initial density fluctuations is traditionally multiplied so as to recover a total collapsed fraction which takes account of the accretion of mass initially in underdense regions. In particular, we shall derive this factor of 2 for the Einstein-de Sitter case, but show that the same factor of 2 in the Press-Schechter formula overestimates the asymptotic collapsed fraction for an unbound universe. To illustrate the importance of these results for currently viable models of cosmic structure formation, we apply our model in §6 to two examples of the Cold Dark Matter (CDM) model, with $`\mathrm{\Omega }_0=0.3`$ and $`H_0=70\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$: the open, matter-dominated model and the flat model with cosmological constant. ## 2 CRITERIA FOR AN UNBOUND UNIVERSE The Friedmann equation (1) describes the time-evolution of the scale factor $`a(t)`$. The solutions of this equation can be grouped into four categories, according to their asymptotic behavior at late times. If the derivative $`\dot{a}`$, which is initially positive, remains positive at all times, never dropping to zero, then the universe is unbound.<sup>3</sup><sup>3</sup>3The asymptotic value of $`\dot{a}`$ in the limit $`a\rightarrow \mathrm{\infty }`$ can be either finite or infinite. This is the case, for instance, in a matter-dominated universe with $`\mathrm{\Omega }_0<1`$. If, instead, $`\dot{a}`$ drops to zero as $`a\rightarrow \mathrm{\infty }`$, then the universe is marginally bound. This is the case for the Einstein-de Sitter universe ($`\mathrm{\Omega }_0=1`$, $`\mathrm{\Omega }_{\mathrm{X0}}=0`$). If $`\dot{a}`$ drops to zero at a finite value $`a=a_t`$, then two situations can occur: If the second derivative $`\ddot{a}`$ is negative at $`a=a_t`$, the universe will turn back and recollapse. This is the case for a matter-dominated universe with $`\mathrm{\Omega }_0>1`$. However, if both $`\dot{a}`$ and $`\ddot{a}`$ are zero at $`a=a_t`$, then the universe asymptotically approaches an equilibrium state with $`a=a_t`$ at late times. This is the case of a universe with a positive cosmological constant which initially expands and asymptotically approaches an Einstein static universe. To determine in which category a particular model falls, we need to study the properties of the Friedmann equation (1). For convenience, we rewrite this equation as $$g(y)\equiv H_0^{-2}y\dot{y}^2=(1-\mathrm{\Omega }_0-\mathrm{\Omega }_{\mathrm{X0}})y+\mathrm{\Omega }_0+\mathrm{\Omega }_{\mathrm{X0}}y^{3-n},$$ (2) where $`y\equiv a/a_0=1/(1+z)`$. Only non-negative values of $`g`$ are physically allowed. Since $`y>0`$ after the big bang, the condition $`g=0`$ is equivalent to $`\dot{y}=0`$ (or $`\dot{a}=0`$). 
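The case analysis that follows can be reproduced numerically. As a rough sanity check (the code and names below are ours, not from the paper), one can simply scan $`g(y)`$ for a sign change; a grid search is crude compared to proper root bracketing, but it suffices for illustration:

```python
import numpy as np

def g(y, omega_0, omega_x0, n):
    """g(y) of eq. (2); g = 0 is equivalent to adot = 0."""
    return (1.0 - omega_0 - omega_x0) * y + omega_0 + omega_x0 * y**(3.0 - n)

def stays_expanding(omega_0, omega_x0, n, y_max=1e4, num=200000):
    """True if g(y) > 0 on (0, y_max], i.e. no turnaround up to y_max."""
    y = np.linspace(1e-6, y_max, num)
    return bool(np.all(g(y, omega_0, omega_x0, n) > 0.0))

print(stays_expanding(0.3, 0.0, 0))   # open matter model: True (unbound)
print(stays_expanding(2.0, 0.0, 0))   # overdense matter model: False (recollapses)
print(stays_expanding(0.3, 0.7, 0))   # flat Lambda model: True (unbound)
```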
The first term on the right-hand side of equation (2) can be either positive or negative, while the last two terms cannot be negative.<sup>4</sup><sup>4</sup>4We are ignoring the possibility that $`\mathrm{\Omega }_{\mathrm{X0}}<0`$, as would be the case, for instance, in a universe with a negative cosmological constant. If $`n=2`$ or $`\mathrm{\Omega }_{\mathrm{X0}}=0`$, then the quantity $`\mathrm{\Omega }_{\mathrm{X0}}`$ cancels out in equation (2). This merely illustrates the fact that a universe with a uniform component whose density varies as $`a(t)^{-2}`$ (cf. a universe with infinite strings) behaves exactly like a matter-dominated universe. Such a universe is bound, marginally bound, or unbound if $`\mathrm{\Omega }_0>1`$, $`\mathrm{\Omega }_0=1`$, or $`\mathrm{\Omega }_0<1`$, respectively. The case in which $`\mathrm{\Omega }_{\mathrm{X0}}\ne 0`$ and $`n=3`$ is exactly the same as that with $`\mathrm{\Omega }_{\mathrm{X0}}=0`$, except that $`\mathrm{\Omega }_0`$ is everywhere replaced by $`\mathrm{\Omega }_0+\mathrm{\Omega }_{\mathrm{X0}}`$. In that case, the universe is bound, marginally bound, or unbound according to whether $`\mathrm{\Omega }_0+\mathrm{\Omega }_{\mathrm{X0}}>1`$, $`\mathrm{\Omega }_0+\mathrm{\Omega }_{\mathrm{X0}}=1`$, or $`\mathrm{\Omega }_0+\mathrm{\Omega }_{\mathrm{X0}}<1`$, respectively. Let us now consider cases with $`\mathrm{\Omega }_{\mathrm{X0}}\ne 0`$ for which $`n\ne 2`$ and $`n\ne 3`$. For $`1-\mathrm{\Omega }_0-\mathrm{\Omega }_{\mathrm{X0}}\ge 0`$, the universe cannot be bound. Clearly, if $`1-\mathrm{\Omega }_0-\mathrm{\Omega }_{\mathrm{X0}}>0`$, then $`g(y)>0`$ for all $`y`$, and the universe is unbound for any value of $`n`$. If $`1-\mathrm{\Omega }_0-\mathrm{\Omega }_{\mathrm{X0}}=0`$, then the last term in equation (2) will eventually dominate (since we assume $`n<3`$). Two situations can then occur. If $`n>2`$, then $`g(y)`$ grows more slowly than $`y`$, implying that $`\dot{y}^2=H_0^2g(y)/y`$ decreases as $`y`$ increases, reaching zero as $`y\rightarrow \mathrm{\infty }`$. This is the case of a marginally bound universe. If $`n<2`$, $`\dot{y}^2`$ will eventually increase with $`y`$. The universe is then unbound. Let us now focus on the case $`1-\mathrm{\Omega }_0-\mathrm{\Omega }_{\mathrm{X0}}<0`$. If $`n>2`$, then at small $`y`$, $`g(y)>0`$, but as $`y`$ increases, the first term in equation (2) will eventually dominate the other terms, giving $`g(y)<0`$. There will therefore be a change of sign of $`g(y)`$ at some finite value $`y=y_t`$ where $`g(y_t)=0`$. That corresponds to a bound universe. This leaves the interesting case of a universe with $`1-\mathrm{\Omega }_0-\mathrm{\Omega }_{\mathrm{X0}}<0`$ and $`n<2`$. Since the slope of $`g(y)`$ at early times for any $`n<2`$ is $`(1-\mathrm{\Omega }_0-\mathrm{\Omega }_{\mathrm{X0}})<0`$, while at late times it is $`(3-n)\mathrm{\Omega }_{\mathrm{X0}}y^{2-n}>0`$, $`g(y)`$ has a minimum at some intermediate value of $`y`$. When $`g(y)`$ is zero at that intermediate value, this corresponds to the case of a marginally bound universe. The various possibilities for the cases with $`1-\mathrm{\Omega }_0-\mathrm{\Omega }_{\mathrm{X0}}<0`$ and $`n<2`$ are shown in Figure 1. The top curve shows a case for which $`g(y)>0`$ for all $`y`$, that is, an unbound universe. The bottom curve shows a case for which $`g(y)`$ drops to zero at a finite value of $`y`$. In this case, the universe turns back and recollapses. 
It is therefore bound.<sup>5</sup><sup>5</sup>5At large $`y`$, the function $`g(y)`$ becomes positive again, indicating that there are possible solutions for $`y`$ large. These are “catenary universes,” sometimes referred to as “no big bang solutions.” In such models, the universe contracts from an infinite radius, turns back, and reexpands forever. These solutions are not considered to be physically interesting. The transition between these two cases, a marginally bound universe, is illustrated by the middle curve in Figure 1, which is tangent to the $`y`$-axis. At $`y=y_t`$, both the function $`g(y)`$ and its first derivative $`dg/dy`$ vanish. The condition for having a marginally bound universe is, therefore, given by the following simultaneous equations, $`(1-\mathrm{\Omega }_0-\mathrm{\Omega }_{\mathrm{X0}})y_t+\mathrm{\Omega }_0+\mathrm{\Omega }_{\mathrm{X0}}y_t^{3-n}=0,`$ (3) $`(1-\mathrm{\Omega }_0-\mathrm{\Omega }_{\mathrm{X0}})+(3-n)\mathrm{\Omega }_{\mathrm{X0}}y_t^{2-n}=0.`$ (4) We can solve equation (4) for $`y_t`$, and substitute this $`y_t`$ into equation (3). We get, after some algebra, $$\left(\frac{\mathrm{\Omega }_0+\mathrm{\Omega }_{\mathrm{X0}}-1}{3-n}\right)^{3-n}=\left(\frac{\mathrm{\Omega }_0}{2-n}\right)^{2-n}\mathrm{\Omega }_{\mathrm{X0}}.$$ (5) We can easily check some limiting cases. For a matter-dominated universe ($`\mathrm{\Omega }_{\mathrm{X0}}=0`$), equation (5) gives $`\mathrm{\Omega }_0=1`$ as the condition for a marginally bound universe, as expected. For a universe with a nonzero cosmological constant ($`n=0`$), equation (5) reduces to $$(\mathrm{\Omega }_0+\lambda _0-1)^3=\frac{27}{4}\lambda _0\mathrm{\Omega }_0^2,$$ (6) where we have replaced $`\mathrm{\Omega }_{\mathrm{X0}}`$ by $`\lambda _0`$. This is actually a well-known result (see, for instance, Glanfield 1966; Felten & Isaacman 1986; Martel 1990). ## 3 THE CRITICAL DENSITY CONTRAST Consider, at some initial redshift $`z_i\gg 1`$, a spherical perturbation of density $`\rho _i=\overline{\rho }_i(1+\delta _i)`$ in an otherwise uniform background of density $`\overline{\rho }_i`$. Let us focus on positive density perturbations ($`\delta _i>0`$). Clearly, if the background universe is bound or marginally bound, then the perturbation is bound. However, if the background universe is unbound, then the perturbation can be either bound or unbound depending upon the value of the initial density contrast $`\delta _i`$. Our goal in this section is to derive the critical density contrast $`\delta _{i,c}`$, which is defined as the minimum value of $`\delta _i`$ for which the perturbation is bound. To compute $`\delta _{i,c}`$, we make use of the Birkhoff theorem, which implies that a uniform, spherically symmetric perturbation in an otherwise smooth Friedmann universe evolves like a separate Friedmann universe with the same mean energy density and equation of state as the perturbation.<sup>6</sup><sup>6</sup>6Note: For a nonuniform spherically symmetric perturbation, every spherical mass shell evolves as it would in a universe with the same mean energy density and equation of state as that of the average of the sphere bounded by that shell. Pursuing this analogy, a perturbation with $`\delta _i>\delta _{i,c}`$ behaves like a bound universe, a perturbation with $`\delta _i<\delta _{i,c}`$ behaves like an unbound universe, and a perturbation with $`\delta _i=\delta _{i,c}`$ behaves like a marginally bound universe. We can then use the results of the previous section to compute $`\delta _{i,c}`$. 
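As an aside, the marginal-bound condition is easy to verify numerically. The following sketch (our own code; scipy is assumed) solves equation (5) for the critical $`\mathrm{\Omega }_{\mathrm{X0}}`$ of a closed matter model with a cosmological constant and checks the result against the classical cubic of equation (6):

```python
from scipy.optimize import brentq

def marginal(omega_x0, omega_0, n):
    """LHS minus RHS of eq. (5); zero for a marginally bound universe."""
    lhs = ((omega_0 + omega_x0 - 1.0) / (3.0 - n)) ** (3.0 - n)
    rhs = (omega_0 / (2.0 - n)) ** (2.0 - n) * omega_x0
    return lhs - rhs

omega_0 = 2.0                                          # closed matter content
lam = brentq(marginal, 1e-6, 1.0, args=(omega_0, 0))   # n = 0: cosmological constant
print(lam)                                             # ~0.042; below this, recollapse

# Cross-check against eq. (6):
print((omega_0 + lam - 1.0)**3, 27.0 / 4.0 * lam * omega_0**2)
```

For $`\mathrm{\Omega }_0=2`$ this returns $`\lambda _0\approx 0.042`$, consistent with the standard recollapse boundary for closed models with a cosmological constant.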
First, we need to derive expressions for the “effective cosmological parameters” of the perturbation. Notice first that an overdense perturbation has been decelerating relative to the background between the big bang and the initial redshift $`z_i`$. Hence, at $`z=z_i`$, the perturbation is expanding with an “effective Hubble constant” $`H_i^{\prime }`$ which is smaller than the Hubble constant $`H_i`$ of the background universe. Assuming that the redshift $`z_i`$ is large enough for linear theory to be accurate and for the universe to resemble an Einstein-de Sitter universe ($`\mathrm{\Omega }_i\simeq 1`$, $`\mathrm{\Omega }_{\mathrm{X}i}\ll 1`$), but late enough to allow us to neglect the linear decaying mode, we can easily compute the relationship between $`H_i^{\prime }`$ and $`\delta _i`$, $$H_i^{\prime }=H_i\left(1-\frac{\delta _i}{3}\right),$$ (7) (see, for instance, Lahav et al. 1991). The effective density parameters of the perturbation are then given by $`\mathrm{\Omega }_i^{\prime }={\displaystyle \frac{8\pi G\rho _i}{3H_i^{\prime 2}}}={\displaystyle \frac{8\pi G\overline{\rho }_i(1+\delta _i)}{3H_i^2(1-\delta _i/3)^2}}={\displaystyle \frac{\mathrm{\Omega }_i(1+\delta _i)}{(1-\delta _i/3)^2}},`$ (8) $`\mathrm{\Omega }_{\mathrm{X}i}^{\prime }={\displaystyle \frac{8\pi G\rho _{\mathrm{X}i}}{3H_i^{\prime 2}}}={\displaystyle \frac{8\pi G\rho _{\mathrm{X}i}}{3H_i^2(1-\delta _i/3)^2}}={\displaystyle \frac{\mathrm{\Omega }_{\mathrm{X}i}}{(1-\delta _i/3)^2}}.`$ (9) Next, we need to find combinations of $`\mathrm{\Omega }_i^{\prime }`$ and $`\mathrm{\Omega }_{\mathrm{X}i}^{\prime }`$ that correspond to “effective” marginally bound universes. For the cases for which $`n<2`$, this condition is given by equation (5). We now replace $`\mathrm{\Omega }_0`$ and $`\mathrm{\Omega }_{\mathrm{X0}}`$ by $`\mathrm{\Omega }_i^{\prime }`$ and $`\mathrm{\Omega }_{\mathrm{X}i}^{\prime }`$ in equation (5)<sup>7</sup><sup>7</sup>7That equation was derived using the present values of the density parameters, but it is of course valid at any epoch. and replace $`\delta _i`$ by $`\delta _{i,c}`$. This equation becomes $$\left[\frac{\mathrm{\Omega }_i(1+\delta _{i,c})+\mathrm{\Omega }_{\mathrm{X}i}-(1-\delta _{i,c}/3)^2}{3-n}\right]^{3-n}=\left[\frac{\mathrm{\Omega }_i(1+\delta _{i,c})}{2-n}\right]^{2-n}\mathrm{\Omega }_{\mathrm{X}i}.$$ (10) Since $`\delta _{i,c}\ll 1`$, we can expand this expression in powers of $`\delta _{i,c}`$ and keep only leading terms. We can then simplify this expression further by using the approximation $`\mathrm{\Omega }_i\simeq 1`$. Equation (10) reduces to $$\left[\frac{\mathrm{\Omega }_i+\mathrm{\Omega }_{\mathrm{X}i}-1+5\delta _{i,c}/3}{3-n}\right]^{3-n}=\frac{\mathrm{\Omega }_{\mathrm{X}i}}{(2-n)^{2-n}}.$$ (11) Notice that we had to keep the term $`\mathrm{\Omega }_i`$ in the left hand side because of the presence of the term $`-1`$, and that we cannot expand the left hand side in powers of $`\delta _{i,c}`$ because the quantity $`\mathrm{\Omega }_i+\mathrm{\Omega }_{\mathrm{X}i}-1`$ might be as small as $`\delta _{i,c}`$. We now solve this equation for $`\delta _{i,c}`$, and get $$\delta _{i,c}=\frac{3}{5}\left[\frac{(3-n)\mathrm{\Omega }_{\mathrm{X}i}^{1/(3-n)}}{(2-n)^{(2-n)/(3-n)}}+1-\mathrm{\Omega }_i-\mathrm{\Omega }_{\mathrm{X}i}\right].$$ (12) This gives the critical density contrast as a function of the initial density parameters $`\mathrm{\Omega }_i`$ and $`\mathrm{\Omega }_{\mathrm{X}i}`$. 
We can reexpress it as a function of the present density parameters $`\mathrm{\Omega }_0`$ and $`\mathrm{\Omega }_{\mathrm{X0}}`$ and the initial redshift, as follows: The initial density parameters are given by $`\mathrm{\Omega }_i=8\pi G\overline{\rho }_i/3H_i^2=8\pi G\overline{\rho }_0(1+z_i)^3(H_0/H_i)^2/3H_0^2=\mathrm{\Omega }_0(1+z_i)^3(H_0/H_i)^2`$ and $`\mathrm{\Omega }_{\mathrm{X}i}=8\pi G\rho _{\mathrm{X}i}/3H_i^2=8\pi G\rho _{\mathrm{X0}}(1+z_i)^n(H_0/H_i)^2/3H_0^2=\mathrm{\Omega }_{\mathrm{X0}}(1+z_i)^n(H_0/H_i)^2`$. The ratio $`(H_0/H_i)^2`$ is given directly by equation (1) (with $`a_0/a_i=1+z_i`$). We substitute these expressions into equation (12), and, using the fact that $`z_i\gg 1`$, we keep only the leading terms in $`(1+z_i)^{-1}`$. Equation (12) reduces to $$\delta _{i,c}=\frac{3}{5(1+z_i)}\left[\frac{(3-n)}{(2-n)^{(2-n)/(3-n)}}\left(\frac{\mathrm{\Omega }_{\mathrm{X0}}}{\mathrm{\Omega }_0}\right)^{1/(3-n)}+\frac{1-\mathrm{\Omega }_0-\mathrm{\Omega }_{\mathrm{X0}}}{\mathrm{\Omega }_0}\right].$$ (13) For the particular cases of a matter-dominated universe ($`\mathrm{\Omega }_{\mathrm{X0}}=0`$) or a flat universe with a nonzero cosmological constant ($`n=0`$, $`1-\mathrm{\Omega }_0-\mathrm{\Omega }_{\mathrm{X0}}=0`$), we recover the results derived by Weinberg (1987) and Martel (1994). For cases in the range $`2\le n<3`$, the condition for a marginally bound universe is $`1-\mathrm{\Omega }_0-\mathrm{\Omega }_{\mathrm{X0}}=0`$. We substitute equations (8) and (9) into this expression, make the same approximations as above, and get $$\delta _{i,c}=\frac{3}{5}(1-\mathrm{\Omega }_i-\mathrm{\Omega }_{\mathrm{X}i}).$$ (14) In terms of the present density parameters and the initial redshift, this expression reduces to $$\delta _{i,c}=\frac{3(1-\mathrm{\Omega }_0-\mathrm{\Omega }_{\mathrm{X0}})}{5\mathrm{\Omega }_0(1+z_i)}.$$ (15) The case $`n=3`$ differs from all others in that the energy density of the X component does not diminish relative to that of the ordinary matter component as we go back in time. As such, we are never free to assume that the early behavior of the top-hat is the same as it would be in the absence of the X component. We shall, therefore, for simplicity, exclude this case $`n=3`$ from further consideration here. ## 4 THE ASYMPTOTIC COLLAPSED FRACTION Our goal is to compute the fraction of the matter in the universe that will eventually end up inside collapsed objects (the asymptotic collapsed fraction). Clearly, this question only makes sense in unbound or marginally bound universes. In general, the answer depends upon the mass scale of the collapsed objects being considered. For cosmological models with Gaussian random noise initial conditions (the usual assumption), the density contrast $`\delta (\lambda )`$ for fluctuations of comoving length scale $`\lambda `$ is of order $`[k^3P(k)]^{1/2}`$, where $`k=2\pi /\lambda `$ is the wavenumber, and $`P(k)`$ is the power spectrum. For a model such as Cold Dark Matter, for instance, the power spectrum decreases more slowly than $`k^{-3}`$ at large $`k`$. Thus the density contrast diverges at small scale. Normally, we eliminate small-scale perturbations from the calculation by filtering the power spectrum at the mass scale of interest, typically the mass required to form a galaxy. 
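For later use, equations (13) and (15) are simple enough to wrap in a single helper; a minimal sketch (our own naming, with no claim that it reproduces the authors' actual code):

```python
def delta_ic(z_i, omega_0, omega_x0, n):
    """Critical initial density contrast: eq. (13) for n < 2, eq. (15) for 2 <= n < 3."""
    if n < 2:
        term = ((3.0 - n) / (2.0 - n) ** ((2.0 - n) / (3.0 - n))
                * (omega_x0 / omega_0) ** (1.0 / (3.0 - n)))
        return 3.0 / (5.0 * (1.0 + z_i)) * (term + (1.0 - omega_0 - omega_x0) / omega_0)
    return 3.0 * (1.0 - omega_0 - omega_x0) / (5.0 * omega_0 * (1.0 + z_i))

# Flat Lambda model and open matter model, both with Omega_0 = 0.3, at z_i = 1000:
print(delta_ic(1000.0, 0.3, 0.7, 0))   # flat, nonzero cosmological constant
print(delta_ic(1000.0, 0.3, 0.0, 0))   # open, matter-dominated (Omega_X0 = 0)
```

We return now to the statistics of the density fluctuations at the filter mass scale just defined.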
The density fluctuations at that scale have a variance $`\sigma ^2`$ given by $$\sigma ^2=\frac{1}{2\pi ^2}\int _0^{\mathrm{\infty }}P(k)\widehat{W}^2(kR)k^2dk,$$ (16) where $`\widehat{W}`$ is a window function, and $`R`$ is the comoving radius of a sphere enclosing a mass in the unperturbed density field which is equal to the mass scale of interest. Assuming that the initial conditions are Gaussian, the fluctuation distribution for positive values of $`\delta `$ is given by $$𝒩(\delta )=\frac{2^{1/2}}{\pi ^{1/2}\sigma }e^{-\delta ^2/2\sigma ^2}.$$ (17) Our problem consists of computing the asymptotic collapsed fraction involving initially positive density fluctuations of mass equal to that contained on average by a sphere of comoving radius $`R`$, together with the additional mass which eventually accretes onto these positive density fluctuations from initially underdense regions, starting from initial conditions described by equations (16) and (17). In this section, we consider the analytical model introduced by MSW. In the next section, we will consider the well-known Press-Schechter approximation, instead. Consider, at some early time $`t_i`$, a spherical, top-hat matter-density fluctuation of volume $`V`$ and density contrast $`\delta _i`$, surrounded by a compensating shell of volume $`U`$ and negative density contrast, such that the average density contrast of the system top-hat + shell vanishes. This model is parametrized by the shape parameter $`s\equiv V/U`$. If $`\delta _i\ge \delta _{i,c}`$, the top-hat core will collapse. Furthermore, a fraction of the matter located outside the top-hat, inside the shell, initially occupying a volume $`U^{\prime }\le U`$, will be accreted by the top-hat. Since the density is nearly uniform at early times, the asymptotic collapsed mass fraction of this system is simply $`(V+U^{\prime })/(V+U)`$. We now approximate the initial conditions for the whole universe as an ensemble of these compensated top-hat perturbations, with a distribution of top-hat core positive density fluctuations given by equation (17), and we neglect the interaction between perturbations. As discussed in MSW, the value of $`s=0`$ corresponds to the limit in which each positive fluctuation is isolated, surrounded by an infinite volume of compensating underdensity (at a total density infinitesimally below the mean value $`\overline{\rho }`$). For a flat universe with nonzero cosmological constant, this case was treated by Weinberg (1996). The case $`s=\mathrm{\infty }`$ corresponds to the limit of “no infall” in which the additional mass associated with the compensating underdense volume $`U`$ is negligible compared with that of the initial top-hat. This case was considered for the flat universe with $`\lambda _0\ne 0`$ by Weinberg (1987). If $`s=1`$, however, the volume occupied by every positive fluctuation is surrounded by an equal volume of compensating negative density fluctuation. This is the case most relevant to the problem at hand, involving a Gaussian-random distribution of linear density fluctuations, since the latter ensures that the volumes initially occupied by positive and negative density fluctuations of equal amplitude are exactly equal. The full range of values of $`s`$, $`0\le s\le \mathrm{\infty }`$, was treated by MSW for the flat universe with $`\lambda _0\ne 0`$, with a special focus on $`s=1`$ as the case corresponding to Gaussian-random noise initial conditions. 
The insensitivity of the results for the anthropic probability calculations presented there to the value assumed for $`s`$ suggests that the relative amount of total collapsed fraction in universes with different values of $`\rho _V`$ may not be sensitive to the crudeness of the treatment of the effect of one fluctuation on another. However, we will also present results here for the full range of values of $`s`$, while noting that the value $`s=1`$ is the most relevant to the case at hand of Gaussian random density fluctuations. Under these assumptions, the asymptotic collapsed fraction for the whole universe is given by $$f_{c,\mathrm{\infty }}=\frac{2^{1/2}s}{\pi ^{1/2}\sigma _i}\int _{\delta _{i,c}}^{\mathrm{\infty }}\frac{\delta e^{-\delta ^2/2\sigma _i^2}d\delta }{\delta _{i,c}+s\delta },$$ (18) where $`\sigma _i`$ is the value of $`\sigma `$ at time $`t_i`$ (MSW). For bound and marginally bound universes (including, in particular, the Einstein-de Sitter universe), $`\delta _{i,c}=0`$, and equation (18) reduces trivially to $`f_{c,\mathrm{\infty }}=1`$ for all values of $`s`$. Hence, the MSW model predicts that, in an Einstein-de Sitter universe, all the matter will eventually end up in collapsed objects. For unbound universes, we change variables from $`\delta `$ to $`x\equiv \delta ^2/2\sigma _i^2`$. Equation (18) reduces to $$f_{c,\mathrm{\infty }}=\frac{s}{\pi ^{1/2}}\int _\beta ^{\mathrm{\infty }}\frac{e^{-x}dx}{sx^{1/2}+\beta ^{1/2}},$$ (19) where $$\beta \equiv \frac{\delta _{i,c}^2}{2\sigma _i^2}.$$ (20) This equation shows that the collapsed fraction $`f_{c,\mathrm{\infty }}`$ is unity only when $`\beta =0`$, which requires $`\delta _{i,c}=0`$. However, in an unbound universe, $`\delta _{i,c}`$ is always positive. Hence, according to the MSW model, the collapsed fraction in an unbound universe is always less than unity. Notice that the dependence upon the cosmological parameters is entirely contained in the parameter $`\beta `$. For any cosmological model, we can compute $`\sigma _i`$ using equation (16) and $`\delta _{i,c}`$ using either equation (13) or (15). Since $`\sigma _i\propto (1+z_i)^{-1}`$ at large $`z_i`$ for any universe with $`n<3`$, the dependence on $`z_i`$ cancels out in the calculation of $`\beta `$, as it should: The asymptotic collapsed fraction should not depend upon the initial epoch chosen for the calculation. The size of the asymptotic collapse parameter $`\beta `$ determines not only how large or small the collapsed fraction is but also how important the increase of collapsed fraction is due to accretion from the surrounding underdense regions. For small values of $`\beta `$, the asymptotic collapsed fraction is close to unity because both the typical positive initial density fluctuation and its fair share of the matter in surrounding regions of compensating underdensity collapse out before the effects of “freeze-out” suppress fluctuation growth. Hence, in this limit of small $`\beta `$, “freeze-out” is unimportant and the results resemble those for an Einstein-de Sitter universe. For values of $`\beta \gtrsim 1`$, however, the typical collapse occurs after “freeze-out” has begun to limit the growth of density fluctuations. The large $`\beta `$ limit, in fact, is that in which only a rare, much-higher-than-average, positive density fluctuation is able to collapse out of the background before “freeze-out” prevents it, and very little of the compensating underdense matter condenses out along with it. 
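These limiting behaviors can be seen directly by evaluating equation (19) numerically; a short sketch (our own code, with scipy assumed):

```python
import numpy as np
from scipy.integrate import quad

def f_collapsed(beta, s=1.0):
    """Asymptotic collapsed fraction, eq. (19), in the MSW model."""
    if beta == 0.0:
        return 1.0  # bound or marginally bound background, e.g. Einstein-de Sitter
    integrand = lambda x: np.exp(-x) / (s * np.sqrt(x) + np.sqrt(beta))
    return s / np.sqrt(np.pi) * quad(integrand, beta, np.inf)[0]

for beta in (0.01, 0.1, 1.0, 5.0):
    print(beta, f_collapsed(beta))  # close to 1 at small beta, exponentially small at large beta
```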
For this large $`\beta `$ limit, equation (19) can be shown to reduce to the following simple formula (see Appendix A), $$f_{c,\mathrm{\infty }}(\beta \gg 1)=\left(\frac{s}{s+1}\right)\frac{e^{-\beta }}{(\pi \beta )^{1/2}}.$$ (21) ## 5 THE ASYMPTOTIC LIMIT OF THE PRESS-SCHECHTER APPROXIMATION In the Press-Schechter approximation (Press & Schechter 1974; henceforth, “PS”), the collapsed fraction at time $`t`$ is estimated as follows: Consider a spherical top-hat perturbation with an initial linear density contrast $`\delta _i`$ chosen such that this perturbation collapses precisely at time $`t`$. The density contrast of that perturbation is infinite at time $`t`$. However, if we estimate the density contrast at that epoch using linear perturbation theory, we obtain instead a finite value $`\delta =\mathrm{\Delta }_c`$, because linear theory underestimates the growth of positive fluctuations. The value of $`\mathrm{\Delta }_c`$ is usually taken to be $`(3/5)(3\pi /2)^{2/3}=1.6865`$, though this result is strictly correct only for the Einstein-de Sitter universe (cf. Shapiro, Martel, & Iliev 1999, and references therein). A larger perturbation would collapse earlier, and linear theory would predict that its density contrast exceeds $`\mathrm{\Delta }_c`$ at time $`t`$. To compute the collapsed fraction at time $`t`$, we simply need to integrate over all perturbations whose density contrast predicted by linear theory would exceed $`\mathrm{\Delta }_c`$ at time $`t`$, using the distribution given by equation (17). The resulting expression, after multiplication by a factor of “2” to correct for the fact that half the mass was initially in underdense regions outside the positive density fluctuations, is $$f_c^{\mathrm{PS}}=\frac{2^{1/2}}{\pi ^{1/2}\sigma (t)}\int _{\mathrm{\Delta }_c(t)}^{\mathrm{\infty }}e^{-\delta ^2/2\sigma (t)^2}d\delta .$$ (22) The introduction of this ad hoc correction factor of “2” in equation (22) is based on some assumption about the amount of matter located in unbound regions, either underdense or overdense, which is destined to be accreted onto collapsed perturbations. Consider, for instance, the case of an Einstein-de Sitter universe at late time. The critical density contrast distinguishing a bound from an unbound density fluctuation is zero, and therefore all overdense perturbations are bound and will eventually collapse. Since, for Gaussian perturbations, the overdense regions initially contain only half the mass of the universe, the asymptotic collapsed fraction, without taking accretion into account, would be $`f_{c,\mathrm{\infty }}=1/2`$. However, it can easily be shown that in an Einstein-de Sitter universe, all matter in the universe will eventually end up inside bound objects. Hence, for this particular case, the proper way to handle accretion is to multiply the collapsed fraction by a factor of 2. Equation (22) is derived by assuming that this factor of 2 is valid, not only for the asymptotic limit of the Einstein-de Sitter universe, but for all universes and at all epochs. Hence, the PS approximation assumes that the total mass accreted by collapsed positive density fluctuations is instantaneously equal to the total mass of these collapsed objects themselves. What is the asymptotic collapsed fraction according to this PS approximation? We now change variables from $`\delta `$ to $`x=\delta ^2/2\sigma ^2`$. 
Equation (22) reduces to $$f_c^{\mathrm{PS}}=\frac{1}{\pi ^{1/2}}\int _{\beta _{\mathrm{PS}}}^{\mathrm{\infty }}\frac{e^{-x}dx}{x^{1/2}},$$ (23) where $$\beta _{\mathrm{PS}}(t)\equiv \frac{\mathrm{\Delta }_c(t)^2}{2\sigma (t)^2}.$$ (24) To compute the asymptotic collapsed fraction, $`f_{c,\mathrm{\infty }}^{\mathrm{PS}}`$, we need to take the limit of equations (23) and (24) as $`t\rightarrow \mathrm{\infty }`$. Consider a bound spherical perturbation, with the value of its initial density contrast $`\delta _i`$ at initial time $`t_i`$ chosen so that it collapses at $`t=\mathrm{\infty }`$. Call this value of $`\delta _i`$, $`\delta _{i,\mathrm{\infty }}`$. By definition, the quantity $`\mathrm{\Delta }_c`$ at $`t=\mathrm{\infty }`$ is given by $$\mathrm{\Delta }_c(\mathrm{\infty })=\delta _{i,\mathrm{\infty }}\frac{\delta _+(\mathrm{\infty })}{\delta _+(t_i)},$$ (25) where $`\delta _+(t)`$ is the linear growing mode. Since this spherical perturbation collapses at $`t=\mathrm{\infty }`$, the initial density contrast $`\delta _{i,\mathrm{\infty }}`$ must be equal to the critical density contrast $`\delta _{i,c}`$. If $`\delta _{i,\mathrm{\infty }}`$ were less than $`\delta _{i,c}`$ the perturbation would not collapse at all, while if it were greater, the perturbation would collapse at a finite time. We can therefore replace $`\delta _{i,\mathrm{\infty }}`$ by $`\delta _{i,c}`$ in equation (25). Finally, we notice that the quantity $`\sigma `$ also evolves according to linear theory, $$\sigma (\mathrm{\infty })=\sigma _i\frac{\delta _+(\mathrm{\infty })}{\delta _+(t_i)}.$$ (26) Combining these results, we get $$\beta _{\mathrm{PS}}(\mathrm{\infty })=\frac{\left[\delta _{i,c}\delta _+(\mathrm{\infty })/\delta _+(t_i)\right]^2}{2\left[\sigma _i\delta _+(\mathrm{\infty })/\delta _+(t_i)\right]^2}=\frac{\delta _{i,c}^2}{2\sigma _i^2}=\beta ,$$ (27) (see eq. \[20\]). Hence, the PS $`\beta _{\mathrm{PS}}`$ parameter reduces to the MSW $`\beta `$ parameter in the limit $`t\rightarrow \mathrm{\infty }`$. Now, comparing equations (19) and (23), we see immediately that these equations are identical in the limit $`s\rightarrow \mathrm{\infty }`$. Notice that the product $`\delta _{i,c}\delta _+(\mathrm{\infty })`$ takes the undetermined form $`0\times \mathrm{\infty }`$ in the case of an Einstein-de Sitter universe. In this case, the quantity $`\delta _{i,c}\delta _+(\mathrm{\infty })/\delta _+(t_i)`$ is equal to $`(3/5)(3\pi /2)^{2/3}`$, or 1.6865, at all times. In the Einstein-de Sitter case, $`\beta =\beta _{\mathrm{PS}}(\mathrm{\infty })=0`$, and equations (19) and (23) are the same for all values of $`s`$; the asymptotic collapsed fractions in that case are all equal to unity. In the limit of large $`\beta `$, $`f_{c,\mathrm{\infty }}^{\mathrm{PS}}`$ in equation (23), with $`\beta _{\mathrm{PS}}`$ replaced by $`\beta `$, according to equation (27), can be shown to reduce to the following simple formula (see Appendix A): $$f_{c,\mathrm{\infty }}^{\mathrm{PS}}(\beta \gg 1)=\frac{e^{-\beta }}{(\pi \beta )^{1/2}}.$$ (28) A comparison of equations (21) and (28) reveals that the asymptotic collapsed fraction $`f_{c,\mathrm{\infty }}^{\mathrm{PS}}`$ according to the PS approximation is just a factor of $`(s+1)/s`$ times $`f_{c,\mathrm{\infty }}`$ according to the MSW model, in the limit of large $`\beta `$. ## 6 DISCUSSION AND CONCLUSION Our analytical result in equations (19) and (20) for the asymptotic collapsed fraction in an eternal universe can be evaluated for any background universe which satisfies the conditions given in §2 which identify it as an unbound universe. We need only specify the background universe and the power spectrum of primordial density fluctuations, in order to evaluate $`\beta `$. 
Before we do this for a few illustrative cases, however, it is instructive to evaluate the asymptotic collapsed fraction $`f_{c,\mathrm{\infty }}`$ in general as a function of $`\beta `$ and $`s`$, and compare $`f_{c,\mathrm{\infty }}`$ to the prediction of the PS approximation, $`f_{c,\mathrm{\infty }}^{\mathrm{PS}}`$, according to equations (23) and (27). We have shown above that the asymptotic collapsed fraction predicted for an eternal universe by the PS approximation differs from that predicted here by the spherical model of MSW (as generalized to other background universe cases) for $`s=1`$, the value of the shape parameter appropriate for Gaussian random initial density fluctuations, with the exception of the Einstein-de Sitter universe, for which $`f_{c,\mathrm{\infty }}^{\mathrm{PS}}=f_{c,\mathrm{\infty }}=1`$. For any eternal universe other than Einstein-de Sitter, in fact, the two approaches predict the same asymptotic collapsed fraction only if $`s=\mathrm{\infty }`$, instead. The fact that the two approaches generally predict different asymptotic collapsed fractions for $`s=1`$ is not surprising, since the PS approximation never concerns itself with the fraction of matter which is inside some gravitationally bound region and is, hence, fated to collapse out, as the MSW model explicitly does. Instead, the PS approximation assumes that, as long as the matter is located within a region of average density which is high enough to make it collapse according to the spherical top-hat model, it will not only collapse but will also take with it an equal share of the matter outside this region which was not initially overdense. This latter assumption is not correct if the underdense matter is not all gravitationally bound to some overdense matter. What is perhaps more surprising than this disagreement between the two approaches for $`s=1`$ is the fact that they do agree for all models if $`s=\mathrm{\infty }`$. The fact that in the limit $`s\rightarrow \mathrm{\infty }`$ the MSW model reduces to the asymptotic limit of the PS approximation is significant, because the two models are based on different assumptions. In the case of the PS approximation, a factor of 2 is introduced to take accretion into account. In the MSW model, the limit $`s\rightarrow \mathrm{\infty }`$ corresponds to perturbations surrounded by underdense shells of negligible volume and mass. In this limit, there is essentially no accretion. However, the volume filling factor of overdense regions, which is 1/2 in the PS approximation, approaches unity in the limit $`s\rightarrow \mathrm{\infty }`$ for the MSW model, resulting once again in a factor of 2 in the expression for the collapsed fraction, but for a different reason. We have computed the collapsed fraction predicted by the MSW model as a function of the parameter $`\beta `$, for various values of $`s`$, by numerically evaluating equation (19). The results are plotted in Figure 2. In addition, the analytical expression in equation (21) which is valid in the large $`\beta `$ limit is plotted in Figure 2 for the case $`s=1`$. The analytical expression provides an excellent fit to the exact results for the important case of $`s=1`$, not only for large $`\beta `$, but for all $`\beta \gtrsim 1`$. The error even at $`\beta =1`$, for example, is only 15%, while at $`\beta =5`$, the error is reduced to 4.5%. For comparison, we also show the prediction of the PS approximation, according to equations (23) and (27). This curve is identical to the curve for $`f_{c,\mathrm{\infty }}`$ for the case $`s=\mathrm{\infty }`$. 
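The comparison just described is easy to reproduce; the sketch below (our own code) evaluates the ratio $`f_{c,\mathrm{\infty }}^{\mathrm{PS}}/f_{c,\mathrm{\infty }}`$ at $`s=1`$, which should approach $`(s+1)/s=2`$ as $`\beta `$ grows, per equations (21) and (28):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def f_msw(beta, s):
    """MSW asymptotic collapsed fraction, eq. (19)."""
    integrand = lambda x: np.exp(-x) / (s * np.sqrt(x) + np.sqrt(beta))
    return s / np.sqrt(np.pi) * quad(integrand, beta, np.inf)[0]

def f_ps(beta):
    """PS asymptotic collapsed fraction, eqs. (23) and (27): erfc(sqrt(beta))."""
    return erfc(np.sqrt(beta))

for beta in (0.5, 1.0, 5.0, 20.0):
    print(beta, f_ps(beta) / f_msw(beta, s=1.0))  # ratio climbs toward 2
```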
The point $`\beta =0`$, $`f_{c,\mathrm{\infty }}=1`$ corresponds to the Einstein-de Sitter universe. As we see, all curves go through this point, indicating that the MSW model predicts the correct asymptotic limit in this case, for any value of $`s`$. The $`s=1`$ case is particularly important, since it is the only one which gives equal filling factors to overdense and underdense perturbations, a requirement for describing realistic Gaussian random initial conditions. Figure 2 shows that for finite values of $`s`$ and for $`\beta >0`$, the asymptotic collapsed fraction $`f_{c,\mathrm{\infty }}`$ predicted by the MSW model is always less than the asymptotic collapsed fraction $`f_{c,\mathrm{\infty }}^{\mathrm{PS}}`$ predicted by the PS approximation. For $`s<1`$, this is not surprising, since in this limit the filling factor of the overdense regions is below the value of 1/2 assumed by the PS approximation. However, if $`s>1`$, then the filling factor of overdense regions exceeds 1/2, indicating that the bound perturbations contain more mass in the MSW model than in the PS approximation. In spite of this, we still have $`f_{c,\mathrm{\infty }}<f_{c,\mathrm{\infty }}^{\mathrm{PS}}`$. This is caused by their different treatments of accretion. The MSW model includes a detailed calculation of the amount of matter accreted by a spherical top-hat, while the PS approximation simply assumes that the accreted mass equals the initially overdense mass, for all cosmological models. Figure 2 suggests that this approximation can be quite crude in some situations and greatly overestimate the amount of matter actually accreted. To estimate this effect, we have computed, for the MSW model, the “accretion factor,” $`F_{\mathrm{acc}}`$, defined as the ratio of the total asymptotic collapsed fraction $`f_{c,\mathrm{\infty }}`$ divided by the asymptotic collapsed fraction $`f_{c,\mathrm{\infty }}^{\prime }`$ that we would obtain if accretion were neglected. (This factor $`F_{\mathrm{acc}}`$ is 2 for the PS approximation.) We can easily compute $`f_{c,\mathrm{\infty }}^{\prime }`$ by going back to the derivation of MSW and dropping the term in equation (19) which represents the accreted matter. The resulting expression is $$f_{c,\mathrm{\infty }}^{\prime }(s,\beta )=\frac{1}{\pi ^{1/2}}\left(\frac{s}{s+1}\right)\int _\beta ^{\mathrm{\infty }}\frac{e^{-x}dx}{x^{1/2}}=\left(\frac{s}{s+1}\right)f_{c,\mathrm{\infty }}(s=\mathrm{\infty },\beta ).$$ (29) Hence, the accretion factor is $$F_{\mathrm{acc}}(s,\beta )=\left(\frac{s+1}{s}\right)\frac{f_{c,\mathrm{\infty }}(s,\beta )}{f_{c,\mathrm{\infty }}(\mathrm{\infty },\beta )}.$$ (30) In the large $`\beta `$ limit, equations (21) and (30) indicate that $`F_{\mathrm{acc}}(s,\beta \gg 1)=1`$; in this limit, none of the matter in the compensating underdense regions is able to condense out. In Figure 3, we plot this accretion factor $`F_{\mathrm{acc}}`$ as a function of $`\beta `$, for various values of $`s`$. The factor $`F_{\mathrm{acc}}^{\mathrm{PS}}=2`$ for the PS approximation is indicated by the dashed line. For the MSW model, the accretion factor depends mostly on the amount of matter available in the shell surrounding the top-hat, which goes to zero in the limit $`s\rightarrow \mathrm{\infty }`$ and to infinity in the limit $`s\rightarrow 0`$. For the interesting case $`s=1`$ (underdense and overdense regions with equal filling factors), we recover the PS limit $`F_{\mathrm{acc}}=2`$ at small $`\beta `$, but the value departs rapidly from 2 at larger $`\beta `$. At $`\beta =1`$, for example, the accretion factor drops to 1.125, indicating that the PS approximation overestimates the amount of matter being accreted by a factor of 8! 
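Equations (29) and (30) lend themselves to the same treatment; a sketch (our own code) that recovers $`F_{\mathrm{acc}}\approx 2`$ at small $`\beta `$ and its decline toward 1 at larger $`\beta `$ for $`s=1`$:

```python
import numpy as np
from scipy.integrate import quad

def f_msw(beta, s=None):
    """MSW collapsed fraction, eq. (19); s=None means the s -> infinity limit."""
    if s is None:
        integrand = lambda x: np.exp(-x) / np.sqrt(x)
    else:
        integrand = lambda x: s * np.exp(-x) / (s * np.sqrt(x) + np.sqrt(beta))
    return quad(integrand, beta, np.inf)[0] / np.sqrt(np.pi)

def accretion_factor(beta, s):
    """F_acc of eq. (30)."""
    return (s + 1.0) / s * f_msw(beta, s) / f_msw(beta)

for beta in (0.001, 0.1, 1.0, 10.0):
    print(beta, accretion_factor(beta, s=1.0))  # ~2 at small beta, -> 1 at large beta
```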
To demonstrate the importance of this effect for actual cosmological models, we consider two variations of the Cold Dark Matter (CDM) model: (a) open, matter-dominated CDM ($`\mathrm{\Omega }_{\mathrm{X0}}=0`$), and (b) flat CDM with nonzero cosmological constant ($`\mathrm{\Omega }_{\mathrm{X0}}=\lambda _0=1-\mathrm{\Omega }_0`$, $`n=0`$), both with an untilted primordial Harrison-Zel’dovich power spectrum<sup>8</sup><sup>8</sup>8The exponent of the primordial power spectrum, which is unity in the absence of tilt, is usually designated by the letter $`n`$. It should not be confused with the exponent $`n`$ used in this paper, which is introduced in equation (1). The primordial density fluctuation power spectrum for this model, consistent with the standard inflationary cosmology and the measured anisotropy of the cosmic microwave background according to the COBE DMR experiment, is described in great detail in Bunn & White (1997, and references therein). In the absence of tilt, this power spectrum (extrapolated to the present according to linear theory) is given by $$P(k)=2\pi ^2\left(\frac{c}{H_0}\right)^4\delta _H^2k^nT_{\mathrm{CDM}}^2(k),$$ (31) where $`c`$ is the speed of light and $`T_{\mathrm{CDM}}`$ is the transfer function, given by $$T_{\mathrm{CDM}}(q)=\frac{\mathrm{ln}(1+2.34q)}{2.34q}\left[1+3.89q+(16.1q)^2+(5.46q)^3+(6.71q)^4\right]^{-1/4}$$ (32) (Bardeen et al. 1986), with $`q`$ defined by $`q`$ $`=`$ $`\left(\frac{k}{\mathrm{Mpc}^{-1}}\right)\alpha ^{-1/2}(\mathrm{\Omega }_0h^2)^{-1}\mathrm{\Theta }_{2.7}^2,`$ (33) $`\alpha `$ $`=`$ $`a_1^{-\mathrm{\Omega }_b/\mathrm{\Omega }_0}a_2^{-(\mathrm{\Omega }_b/\mathrm{\Omega }_0)^3},`$ (34) $`a_1`$ $`=`$ $`(46.9\mathrm{\Omega }_0h^2)^{0.670}\left[1+(32.1\mathrm{\Omega }_0h^2)^{-0.532}\right],`$ (35) $`a_2`$ $`=`$ $`(12.0\mathrm{\Omega }_0h^2)^{0.424}\left[1+(45.0\mathrm{\Omega }_0h^2)^{-0.582}\right]`$ (36) (Hu & Sugiyama 1996, eqs. \[D-28\] and \[E-12\]), where $`\mathrm{\Omega }_b`$ is the density parameter of the baryons, and $`\mathrm{\Theta }_{2.7}`$ is the temperature of the cosmic microwave background in units of 2.7K. The quantity $`\delta _H`$ is given by $$\delta _H=\{\begin{array}{cc}1.95\times 10^{-5}\mathrm{\Omega }_0^{-0.35-0.19\mathrm{ln}\mathrm{\Omega }_0},\hfill & \lambda _0=0\text{, no tilt;}\hfill \\ 1.94\times 10^{-5}\mathrm{\Omega }_0^{-0.785-0.05\mathrm{ln}\mathrm{\Omega }_0},\hfill & \lambda _0=1-\mathrm{\Omega }_0\text{, no tilt.}\hfill \end{array}$$ (37) Once the power spectrum is specified, we can compute the variance $`\sigma ^2`$ of the present density contrast (i.e. as extrapolated to the present using linear theory) as a function of the comoving length scale or, equivalently, mass scale over which the density field is smoothed, using equation (16). We then compute the parameter $`\beta `$ using equation (20), where $`\delta _{i,c}`$ is given by either equation (13) or (15), and $`\sigma _i=\sigma \delta _+(z_i)/\delta _+(0)`$. After some algebra, we get $$\beta =\frac{9}{50\sigma ^2\eta ^2(\mathrm{\Omega }_0,\lambda _0,z_i)}\left[3\left(\frac{\lambda _0}{4\mathrm{\Omega }_0}\right)^{1/3}+\frac{1-\mathrm{\Omega }_0-\lambda _0}{\mathrm{\Omega }_0}\right]^2,$$ (38) where the function $`\eta (\mathrm{\Omega }_0,\lambda _0,z)`$ is defined by $$\eta (\mathrm{\Omega }_0,\lambda _0,z)=(1+z)\frac{\delta _+(z)}{\delta _+(0)}$$ (39) (MSW). In the limit $`z\gg 1`$, which we assume here, the function $`\eta `$ becomes independent of $`z`$. 
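Before turning to the $`\eta `$ functions for specific models, we note that equations (31)-(37) translate directly into code. The sketch below is our own implementation (with $`h=0.7`$ and the values of $`\mathrm{\Omega }_b`$ and $`\mathrm{\Theta }_{2.7}`$ quoted in the next paragraph assumed as defaults):

```python
import numpy as np

def cdm_power(k, omega_0, lam_0, h=0.7, omega_b=None, theta=1.0, n_prim=1.0):
    """Linear CDM power spectrum of eq. (31); k in comoving Mpc^-1, P in Mpc^3."""
    if omega_b is None:
        omega_b = 0.015 / h**2                 # Copi, Schramm, & Turner (1995)
    o_h2 = omega_0 * h**2
    a1 = (46.9 * o_h2)**0.670 * (1.0 + (32.1 * o_h2)**-0.532)   # eq. (35)
    a2 = (12.0 * o_h2)**0.424 * (1.0 + (45.0 * o_h2)**-0.582)   # eq. (36)
    alpha = a1**(-omega_b / omega_0) * a2**(-(omega_b / omega_0)**3)  # eq. (34)
    q = k / (np.sqrt(alpha) * o_h2) * theta**2                  # eq. (33)
    t_cdm = (np.log(1.0 + 2.34 * q) / (2.34 * q)
             * (1.0 + 3.89*q + (16.1*q)**2 + (5.46*q)**3 + (6.71*q)**4)**-0.25)
    if lam_0 == 0.0:                                            # eq. (37), no tilt
        delta_h = 1.95e-5 * omega_0**(-0.35 - 0.19 * np.log(omega_0))
    else:
        delta_h = 1.94e-5 * omega_0**(-0.785 - 0.05 * np.log(omega_0))
    c_over_h0 = 2997.9 / h                                      # c/H_0 in Mpc
    return 2.0 * np.pi**2 * c_over_h0**4 * delta_h**2 * k**n_prim * t_cdm**2

print(cdm_power(0.1, 0.3, 0.7))   # P(k) at k = 0.1 Mpc^-1, flat Lambda model
```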
For flat models ($`\mathrm{\Omega }_0+\lambda _0=1`$), MSW derived the following expression: $$\eta (\mathrm{\Omega }_0,1-\mathrm{\Omega }_0,z\gg 1)=\frac{6\lambda _0^{5/6}}{5\mathrm{\Omega }_0^{1/3}}\left[\int _0^{\lambda _0/\mathrm{\Omega }_0}\frac{dw}{w^{1/6}(1+w)^{3/2}}\right]^{-1}.$$ (40) For matter-dominated models, we can easily compute the function $`\eta `$ using the expressions given in Peebles (1980). For open models, we get $$\eta (\mathrm{\Omega }_0,0,z\gg 1)=\frac{2(1-\mathrm{\Omega }_0)}{5\mathrm{\Omega }_0}\left\{1+\frac{3\mathrm{\Omega }_0}{1-\mathrm{\Omega }_0}+\frac{3\mathrm{\Omega }_0}{(1-\mathrm{\Omega }_0)^{3/2}}\mathrm{ln}\left[\frac{1-(1-\mathrm{\Omega }_0)^{1/2}}{\mathrm{\Omega }_0^{1/2}}\right]\right\}^{-1}.$$ (41) The fraction of matter eventually collapsed into objects created by positive density fluctuations of mass greater than or equal to some mass $`M`$ is entirely specified by the parameter $`\beta `$ evaluated for this mass scale as the density field filter mass. Once $`\beta `$ is known, we can compute the asymptotic collapsed mass fractions $`f_{c,\mathrm{\infty }}`$ and $`f_{c,\mathrm{\infty }}^{\mathrm{PS}}`$ using equations (19) and (23), respectively. We consider models with $`H_0=70\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ ($`h=0.7`$), $`\mathrm{\Omega }_b=0.015h^{-2}`$ (Copi, Schramm, & Turner 1995), and $`\mathrm{\Theta }_{2.7}=1`$. We have computed $`\sigma ^2`$ using equation (16) with a top-hat window function, $$\widehat{W}(kR)=\frac{3}{(kR)^3}(\mathrm{sin}kR-kR\mathrm{cos}kR).$$ (42) In Figure 4, we plot the variation of the asymptotic collapse parameter $`\beta `$ with the filter mass $`M`$ \[which corresponds to the length scale $`R`$ in equation (42) according to $`M=4\pi R^3\rho _c\mathrm{\Omega }_0/3`$, or $`M/M_{\odot }=1.163\times 10^{12}R_{\mathrm{Mpc}}^3h^2\mathrm{\Omega }_0`$\] for two cases of interest: (a) open, matter-dominated, $`\mathrm{\Omega }_0=0.3`$, and (b) flat with cosmological constant, $`\mathrm{\Omega }_0=0.3=1-\lambda _0`$. The value $`\beta =1`$ for these two cases corresponds to the mass scales $`M/M_{\odot }=3.651\times 10^{14}`$ (open) and $`5.778\times 10^{14}`$ (flat), respectively. For $`H_0=70\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ and density parameter $`\mathrm{\Omega }_0=0.3`$ (assuming the shape parameter $`s=1`$, as required for Gaussian random noise density fluctuations), the open, matter-dominated CDM model and the flat CDM model with nonzero cosmological constant yield mass fractions asymptotically collapsed into objects created by positive density fluctuations of mass greater than or equal to the galaxy cluster mass-scale $`10^{15}M_{\odot }`$ of 0.0361 and 0.0562, respectively. These values of the asymptotic collapsed fraction are only 55% of the values determined by the Press-Schechter approximation. These results have implications for the use of the latter approximation to compare the observed space density of X-ray clusters today with that predicted by cosmological models. We have also calculated the asymptotic collapsed fractions $`f_{c,\mathrm{\infty }}`$ and $`f_{c,\mathrm{\infty }}^{\mathrm{PS}}`$ as a function of $`\mathrm{\Omega }_0`$ (assuming $`s=1`$), for four different filter mass scales $`M/M_{\odot }=10^6`$, $`10^9`$, $`10^{12}`$, and $`10^{15}`$ (notice that the length scale $`R`$ corresponding to a given mass scale varies with $`\mathrm{\Omega }_0`$). The results are shown in Figure 5. 
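The two growth-factor expressions just quoted, equations (40) and (41), can be checked numerically as well; a sketch (our own code, scipy assumed) for the $`\mathrm{\Omega }_0=0.3`$ cases considered here:

```python
import numpy as np
from scipy.integrate import quad

def eta_flat(omega_0):
    """eq. (40): flat model with lambda_0 = 1 - omega_0, in the z >> 1 limit."""
    lam_0 = 1.0 - omega_0
    integral = quad(lambda w: w**(-1.0/6.0) * (1.0 + w)**-1.5,
                    0.0, lam_0 / omega_0)[0]
    return 6.0 * lam_0**(5.0/6.0) / (5.0 * omega_0**(1.0/3.0)) / integral

def eta_open(omega_0):
    """eq. (41): open, matter-dominated model, in the z >> 1 limit."""
    x = 1.0 - omega_0
    brace = (1.0 + 3.0 * omega_0 / x
             + 3.0 * omega_0 / x**1.5 * np.log((1.0 - np.sqrt(x)) / np.sqrt(omega_0)))
    return 2.0 * x / (5.0 * omega_0) / brace

print(eta_flat(0.3), eta_open(0.3))   # both exceed 1, reflecting suppressed late growth
```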
In Figure 5a, $`f_{c,\mathrm{\infty }}`$ and $`f_{c,\mathrm{\infty }}^{\mathrm{PS}}`$ are each plotted separately, while in Figure 5b, we plot the ratio $`f_{c,\mathrm{\infty }}^{\mathrm{PS}}/f_{c,\mathrm{\infty }}`$ to demonstrate the extent to which the Press-Schechter approximation overestimates the collapsed fraction, especially for cluster mass objects and above. ## ACKNOWLEDGMENTS This work benefited greatly from our stimulating collaboration with Steven Weinberg. We are pleased to acknowledge the support of NASA Astrophysical Theory Program Grants NAG5-2785, NAG5-7363, and NAG5-7812, NSF Grant ASC 9504046, and a TICAM Fellowship in the summer of 1998 for HM from the Texas Institute of Computational and Applied Mathematics. ## Appendix A The Large $`\beta `$ Limit ### A.1 The MSW Model The asymptotic collapsed fraction according to the MSW model is given by $$f_{c,\mathrm{\infty }}=\frac{s}{\pi ^{1/2}}\int _\beta ^{\mathrm{\infty }}\frac{e^{-x}dx}{sx^{1/2}+\beta ^{1/2}}.$$ (A1) If we change variables using $`x=\beta (1+w)`$, then equation (A1) reduces to $$f_{c,\mathrm{\infty }}=\frac{s\beta ^{1/2}e^{-\beta }}{\pi ^{1/2}}\int _0^{\mathrm{\infty }}\frac{e^{-\beta w}dw}{s(w+1)^{1/2}+1}.$$ (A2) In the limit $`1/\beta \ll 1`$, we can always find a number $`\alpha `$ such that $`1/\beta \ll \alpha \ll 1`$. Since $`\alpha \beta \gg 1`$, we can truncate the integral in equation (A2) at $`w=\alpha `$, because the exponential $`e^{-\beta w}`$ is negligible for larger values of $`w`$. Hence $$f_{c,\mathrm{\infty }}\approx \frac{s\beta ^{1/2}e^{-\beta }}{\pi ^{1/2}}\int _0^\alpha \frac{e^{-\beta w}dw}{s(w+1)^{1/2}+1}.$$ (A3) Since $`\alpha \ll 1`$, the integration variable $`w`$ is always much smaller than unity, and we can replace $`w+1`$ by 1 in the denominator. The resulting integral yields $$f_{c,\mathrm{\infty }}\approx \left(\frac{s}{s+1}\right)\frac{e^{-\beta }(1-e^{-\beta \alpha })}{(\pi \beta )^{1/2}}.$$ (A4) Since $`\beta \alpha \gg 1`$, the term $`e^{-\beta \alpha }`$ is negligible. The final expression is $$f_{c,\mathrm{\infty }}(\beta \gg 1)\approx \left(\frac{s}{s+1}\right)\frac{e^{-\beta }}{(\pi \beta )^{1/2}}.$$ (A5) ### A.2 The PS Approximation The asymptotic collapsed fraction according to the PS approximation is given by $$f_{c,\mathrm{\infty }}^{\mathrm{PS}}=\frac{1}{\pi ^{1/2}}\int _\beta ^{\mathrm{\infty }}\frac{e^{-x}dx}{x^{1/2}}.$$ (A6) A change of variables to $`w=x^{1/2}`$ allows us to rewrite equation (A6) as follows: $$f_{c,\mathrm{\infty }}^{\mathrm{PS}}=\frac{2}{\pi ^{1/2}}\int _{\beta ^{1/2}}^{\mathrm{\infty }}e^{-w^2}dw=1-\mathrm{erf}(\beta ^{1/2}).$$ (A7) For large $`\beta `$, $$\mathrm{erf}(\beta ^{1/2})\approx 1-\frac{e^{-\beta }}{(\pi \beta )^{1/2}}.$$ (A8) Combining equations (A7) and (A8), we find $$f_{c,\mathrm{\infty }}^{\mathrm{PS}}(\beta \gg 1)\approx \frac{e^{-\beta }}{(\pi \beta )^{1/2}}.$$ (A9)
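As a closing numerical check on Appendix A, the exact integral (A1) can be compared with the large-$`\beta `$ formula (A5) and with the error-function form (A7); a short sketch (our own code):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def f_msw_exact(beta, s):
    """Exact MSW integral, eq. (A1)."""
    integrand = lambda x: np.exp(-x) / (s * np.sqrt(x) + np.sqrt(beta))
    return s / np.sqrt(np.pi) * quad(integrand, beta, np.inf)[0]

def f_msw_large_beta(beta, s):
    """Large-beta limit, eq. (A5)."""
    return s / (s + 1.0) * np.exp(-beta) / np.sqrt(np.pi * beta)

for beta in (1.0, 5.0, 20.0):
    exact = f_msw_exact(beta, 1.0)
    asym = f_msw_large_beta(beta, 1.0)
    print(beta, exact, asym, erfc(np.sqrt(beta)))  # last column: PS result, eq. (A7)
```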
# Non linear flux flow in TiN superconducting thin film ## 1 Introduction Vortex motion has been intensively studied over the past years, especially in high $`T_c`$ type II superconductors, which show new exciting features. In a thin film geometry, both low $`T_c`$ and high $`T_c`$ superconducting films often show similar behaviors (glass transition, vortex liquid state) and are therefore commonly used to investigate the mixed state. To this end, transport properties give some insight into the motion of vortices on a macroscopic level and in some cases can be analyzed in terms of quasiparticle dynamics. Indeed, some 20 years ago, Larkin and Ovchinnikov (LO) predicted an electronic instability in the voltage-current characteristics at large vortex velocities. This instability is related to the energy relaxation time of the quasiparticles $`\tau _ϵ`$. When an electrical current passes through a sample, it creates a force on the vortices which is maximum when the direction of the current is perpendicular to the magnetic field. Above a certain value (the critical current), the vortices are depinned and start to move with a velocity proportional to the electric field strength E that develops along the sample. When the velocity is such that the time for a vortex to move over a distance of its size ($`2\xi `$) is of the order of the quasiparticle relaxation time, the number of quasiparticles decreases inside the vortex core and increases outside. Then, the diameter of the vortex core shrinks and the viscous coefficient is reduced, leading to an even higher velocity. In this regime, the I-V curves are no longer linear and, at a critical velocity $`v^{*}`$, the differential flux flow resistivity becomes negative. For current-biased experiments, an instability takes place and the system switches to another state with a measured resistivity close to the normal one. LO showed that the velocity $`v^{*}`$ at which the voltage jumps is related to the relaxation time $`\tau _ϵ`$ and should be independent of the magnetic field. However, a field dependence is often observed experimentally. There can be different reasons for this field dependence. First, in the LO theory, the diffusion length of the quasiparticles $`l_ϵ=\sqrt{D\tau _ϵ}`$ must be greater than the distance between vortices $`a_0\simeq \sqrt{\varphi _0/B}`$, because the nonequilibrium distribution of the quasiparticles is assumed to be uniform over the whole superconductor volume. If this is not the case, local effects can be included by introducing explicitly the intervortex spacing, leading to a $`1/\sqrt{B}`$ dependence of the critical velocity $`v^{*}`$. In a second approach, Bezuglyj and Shklovskij (BS) refined the LO description by taking into account the temperature $`T^{*}`$ of the quasiparticles, which can be different from that of the crystal lattice $`T_0`$ because of the finite rate of removal of the power dissipated in the sample. Again, for the BS results to be valid and used to describe experiments, the diffusion length of the quasiparticles must be larger than $`a_0`$. In this paper, we report transport measurements obtained in a $`100nm`$ Titanium Nitride (TiN) thin film. In order to characterize the film, we first measured the temperature dependence of the resistivity for various magnetic fields applied perpendicularly to the surface of the film. This allows us to draw the phase diagram in the H-T plane and to obtain the coherence length $`\xi (0)\simeq 10nm`$. 
In a second part, we measured both the differential resistance and the voltage drop across a patterned microbridge, as a function of a DC (or slowly varying) current, for different values of the magnetic field. From the differential resistance we can deduce the flux flow resistance and compare it to the LO model. From the I-V curves, we have measured the voltage instability $`V^{*}`$ and we show that, in the field range we have investigated, the critical vortex velocity is field dependent, with a $`1/\sqrt{B}`$ behavior. We then extracted the relaxation time $`\tau _ϵ`$ and checked that the diffusion length $`l_ϵ`$ is indeed comparable to the intervortex distance. ## 2 Sample The sample is a thin film of Titanium Nitride. It is a material that has been used for many years in microelectronics as a diffusion barrier. At low temperature, TiN can be a superconductor with a critical temperature up to $`6K`$, depending on the conditions of deposition. The film we have used is $`100nm`$ thick and shows a zero field critical temperature of $`4.6K`$. The film has been synthesized from a Titanium target in a mixed Argon and Nitrogen atmosphere, using a collimator. The deposition has been made on an 8-inch $`Si/SiO_2`$ wafer at a temperature of $`350^{\circ }C`$ with zero bias voltage. The room temperature resistivity is $`85\mu \mathrm{\Omega }.cm`$ and is almost constant with the film thickness, from $`d=100nm`$ down to $`8nm`$. Therefore, an upper bound for the mean free path is estimated to be a few nanometers, which seems reasonable for this kind of granular material. Some preliminary x-ray diffraction results show that the films are textured with two preferred orientations $`<200>`$ and $`<111>`$. Further investigations should allow us to measure the average grain size. By use of UV photolithography, we patterned two kinds of bridges. Sample A was $`200\mu m`$ long and $`10\mu m`$ wide and sample B was $`17\mu m`$ long and $`7.5\mu m`$ wide. Sample A has been essentially used to measure the temperature dependence of the resistance, whereas sample B (which has a lower resistance) has been used for transport measurements at $`4.2K`$. We have checked that sample A gives I-V curves similar to those of sample B at 4.2 K. ## 3 Magnetic field - Temperature diagram Figure 1 shows the resistance of Sample A as a function of temperature for different magnetic fields. In zero field, the transition width is about $`0.1K`$. We see that the transition becomes broader as the magnetic field increases, especially at the foot of the transition. This broadening is due to thermally activated flux flow, as can be seen in figure 2. Indeed, over a certain range of temperature, the resistance decrease can be described by a thermally activated behavior with a field dependent energy scale (see inset of figure 2). We did not investigate this behavior any further, but it is similar to what has been observed by Hsu et al. Since the transition temperature is not very well defined at high field, we took as a criterion for $`T_c`$ the temperature at which the resistance drops to $`90\%`$ of its value in the normal state. This will overestimate the transition temperature but will not change the slope in the H-T plane. Indeed, contrary to the foot, the shape of the beginning of the transition does not change much with the field. Figure 3 shows the critical magnetic field $`H_{c2}`$ versus the critical temperature $`T_c`$. 
The slope $`\left(\frac{\partial H_{c2}}{\partial T}\right)_{H=0}`$ is $`0.745T/K`$ and gives $`\xi (0)\simeq 10nm`$, using $`\xi (T)=\xi (0)/\sqrt{1-T/T_c}`$ and $`H_{c2}=\mathrm{\Phi }_0/2\pi \xi ^2`$. ## 4 Transport measurements For these experiments, the magnetic field was generated by a 650-turn NbTi superconducting coil. Both sample and coil were immersed in the Helium bath of a magnetically unshielded dewar. The currents (DC and AC) were generated by a voltage-driven current source following the electronic scheme of Payet-Burin et al. . The voltage was amplified by a low noise differential preamplifier. ### 4.1 Critical current We have measured the onset of dissipation by recording I-V curves at very low voltage. In Figure 4, we have plotted the current at which the voltage exceeds $`1\mu V`$ across sample B, and identified it with the critical current. On a log-log plot, we see that below a certain field of a few gauss, the critical current $`I_c=1.9mA`$ is constant in field over two decades. For such a current flowing in the film, the self-magnetic field generated by the current at the edge of the bridge is $`B_{sf}=\frac{\mu _0I}{2\pi w}\ln \left(\frac{2w}{d}\right)\simeq 2G`$ for a width $`w=10\mu m`$ and a thickness $`d=100nm`$. Therefore, for fields less than $`2G`$, the main "applied" field is the self-field. For higher fields, the observed behavior is consistent with a regime of collective pinning, and the critical current decreases as a power law of the field, $`I_c\propto B^{-n}`$. Two regimes can be distinguished, with $`n\simeq 0.5`$ and $`n\simeq 1`$, but their origins are not clear. This behavior can be related to that observed by De Brion et al. in $`Mo_{77}Ge_{23}`$. ### 4.2 Core resistivity When measuring the transport behavior in the dissipative regime ($`I>I_c`$), one can distinguish different behaviors. These appear more clearly when recording the differential resistance $`R_{diff}=\frac{dV}{dI}`$ versus the DC current. For these measurements, the amplitude of the small AC current added to the DC one was $`4\mu A`$ peak to peak, and its frequency was around 2 kHz (see figure 5). Just above the critical current, the differential resistance increases over a short DC current range. Then $`R_{diff}`$ crosses a plateau before increasing again. We attribute these different regimes to flux creep, flux flow and non-linear flux flow, respectively. We did not study the flux creep regime, but we checked the flux flow regime by recording the value of the differential resistance at the plateau as a function of the applied magnetic field. According to LO and , the flux flow resistivity is given by: $$\rho _{ff}=\left(\frac{dV}{dI}\right)_{V=0}=\frac{\tilde{\rho }_N}{\alpha (B)+1}$$ (1) with $$\alpha (B)=\frac{1}{\sqrt{1-\frac{T}{T_c}}}\frac{B_{c2}}{B}f\left(\frac{B}{B_{c2}}\right)$$ (2) and $$f\left(\frac{B}{B_{c2}}\right)=4.04\left(\frac{B}{B_{c2}}\right)^{1/4}\left(3.96+2.38\frac{B}{B_{c2}}\right)\quad \text{when}\quad \frac{B}{B_{c2}}\le 0.315$$ (3) where $`\tilde{\rho }_N`$ is the normal core resistivity and $`B_{c2}`$ the critical field at 4.2 K ($`B_{c2}=3kG`$ from figure 3). The above expressions are valid for temperatures close to $`T_c`$ and are therefore well suited to describe our experiments, since a temperature of $`4.2K`$ corresponds to $`0.9T_c`$. In figure 6, we have plotted the normal core resistivity deduced from the measurements of the resistivity at the plateau, following equation 1. 
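Since equation 1 is inverted to extract the core resistivity, a minimal numerical sketch may be useful; it implements Eqs. (1)–(3) directly, with $`B_{c2}=0.3T`$ and $`T/T_c=0.9`$ as above (the function names and the sample numbers are ours, chosen purely for illustration):

```python
import numpy as np

def alpha(B, Bc2=0.3, T_over_Tc=0.9):
    """alpha(B) of Eq. (2), with f(B/Bc2) as given in Eq. (3);
    valid for B/Bc2 <= 0.315 and temperatures close to Tc."""
    b = B / Bc2
    f = 4.04 * b**0.25 * (3.96 + 2.38 * b)
    return f / (b * np.sqrt(1.0 - T_over_Tc))

def core_resistivity(rho_plateau, B, Bc2=0.3, T_over_Tc=0.9):
    """Invert Eq. (1): rho_N = rho_ff * (alpha(B) + 1)."""
    return rho_plateau * (alpha(B, Bc2, T_over_Tc) + 1.0)

# Consistency check with hypothetical numbers: if the plateau resistivity
# follows Eq. (1), a field-independent core value is recovered at every field.
B = np.array([0.01, 0.02, 0.05])            # tesla (100 G to 500 G)
rho_true = 90e-8                            # ohm*m, hypothetical core value
rho_plateau = rho_true / (alpha(B) + 1.0)   # simulated plateau resistivities
print(core_resistivity(rho_plateau, B))     # ~9.0e-07 at each field
```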
One can see that the core resistivity is indeed independent of the field, but shows a value slightly above the normal resistivity of the sample at $`T>T_c`$. This kind of discrepancy has already been reported by Doettinger et al. and can be due to some uncertainties in the coefficients of the above formula, or to some differences between the resistivity of the vortex core and the normal resistivity of the whole sample. In the flux flow regime, the differential resistivity should be constant as a function of the DC current. Looking at our experimental results, this is clearly not the case (see figure 5): the system enters the non-linear flux flow regime. ### 4.3 Non linear flux flow and electronic instability In order to analyse the regime of non-linear I-V curves, we have studied the voltage response to a slowly varying current (figure 7). To avoid overheating of the sample by Joule effect, we have used triangular current pulses of a few ms with a duty cycle of about 10 (i.e. the pulse was repeated every 100 ms with zero current in between). The voltage response was recorded on a digital oscilloscope and averaged more than 30 times. Figure 7 shows several I-V characteristics that have been recorded in this way. At a voltage $`V^{}`$ and current $`I^{}`$, an instability takes place. We have checked that the rapid jump of the voltage at $`V^{}`$ is not due to some overheating of the sample, by recording the power $`P=V^{}I^{}`$ as a function of the field. We found that $`P`$ increases smoothly from $`50nW`$ at $`20G`$ to $`150nW`$ at $`600G`$. As mentioned by Xiao et al. , this power would have been constant if the main mechanism for the observed instability were heating. From the LO model, the voltage-current characteristics behave as: $$I-I_c\simeq \frac{V}{\tilde{R}_N}\left(\frac{\alpha (B)}{1+\left(\frac{V}{V^{}}\right)^2}+1\right)$$ (4) where $`V^{}`$ ($`I^{}`$) is the voltage (current) at which the electronic instability takes place. Equation 4 gives the correct asymptotic behaviors, observed experimentally: as $`V\to 0`$ the characteristic is linear with a constant flux flow resistance, and for $`V\gg V^{}`$ the I-V response is again linear with a resistance close to the normal resistance. According to LO, the voltage $`V^{}`$ should be such that the vortex velocity $`v^{}=V^{}/BL`$ is constant with respect to the applied magnetic field. If one plots the critical velocity, its behavior as a function of the field is not constant, as shown in figure 8, but is a decreasing function of the magnetic field. According to Doettinger et al. , the vortex velocity increases at low field in order to keep the distance $`v^{}\tau _ϵ`$ large enough to ensure spatial homogeneity of the quasiparticle distribution. This condition, which is essential for the use of the LO description, is achieved when $`v^{}\tau _ϵ`$ is comparable to the vortex spacing $`a_0`$. Therefore, the velocity $`v^{}`$ is related to the magnetic field by: $$v^{}(B)=a_0\frac{f(T)}{\tau _ϵ}=\sqrt{\frac{2}{\sqrt{3}}\frac{\varphi _0}{B}}\frac{f(T)}{\tau _ϵ}$$ (5) with $$f(T)\simeq 1.14\left(1-\frac{T}{T_c}\right)^{1/4}$$ (6) At large field, $`v^{}`$ reaches the field-independent LO value: $$v_{LO}^{}=\sqrt{\frac{D}{\tau _ϵ}}\,1.14\left(1-\frac{T}{T_c}\right)^{1/4}$$ (7) where D is the quasiparticle diffusion constant ($`D=\frac{1}{3}\nu _Fl`$, with $`\nu _F`$ the Fermi velocity and $`l`$ the electron mean free path). 
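Before turning to the fit, the two asymptotic regimes can be made concrete with a short numerical sketch (ours, purely illustrative; it borrows the values of $`D`$ and $`\tau _ϵ`$ that are extracted below):

```python
import numpy as np

phi0 = 2.07e-15                  # flux quantum (T m^2)
D    = 1.26e-4                   # quasiparticle diffusion constant (m^2/s)
tau  = 5e-10                     # energy relaxation time (s), extracted below
fT   = 1.14 * (1.0 - 0.9)**0.25  # f(T) of Eq. (6) at T = 0.9 Tc

def a0(B):
    """Inter-vortex spacing of a triangular lattice (m), as in Eq. (5)."""
    return np.sqrt(2.0 / np.sqrt(3.0) * phi0 / B)

def v_low_field(B):
    """Eq. (5): v* = a0(B) f(T) / tau, i.e. a 1/sqrt(B) law."""
    return a0(B) * fT / tau

v_LO = np.sqrt(D / tau) * fT     # Eq. (7): field-independent LO value

for B in [10e-4, 100e-4, 600e-4]:   # 10 G, 100 G, 600 G, in tesla
    print(f"B = {B * 1e4:5.0f} G   a0 = {a0(B) * 1e6:.2f} um   "
          f"v*(Eq. 5) = {v_low_field(B):7.0f} m/s   v*_LO = {v_LO:.0f} m/s")
```

The low-field expression dominates as long as $`a_0`$ exceeds $`l_ϵ=\sqrt{D\tau _ϵ}`$, which is the case over most of the field range studied here.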
In figure 8, the solid line is a fit using: $$v^{}=v_{LO}^{}\left(1+\frac{a_0}{\sqrt{D\tau _ϵ}}\right)$$ (8) The diffusion constant $`D`$ is known from the slope $`\frac{\partial H_{c2}}{\partial T}`$ through: $$D=\frac{4k_B}{\pi e}\left|\frac{\partial H_{c2}}{\partial T}\right|^{-1}=1.26\times 10^{-4}\,m^2/s$$ (9) From this analysis, we find $`\tau _ϵ\simeq 5\times 10^{-10}s`$. This value is in the same range of magnitude as those obtained in other thin film materials at $`0.9T_c`$: $`\tau _ϵ\simeq 10^{-11}s`$ in $`YBa_2Cu_3O_{7-\delta }`$ , $`\tau _ϵ\simeq 10^{-10}s`$ in $`Bi_2Sr_2CaCu_2O_{8-\delta }`$ and $`\tau _ϵ\simeq 10^{-9}s`$ in $`Mo_3Si`$ . From our results, we can get the quasiparticle diffusion length $`l_ϵ=\sqrt{D\tau _ϵ}=250nm`$. We then check that the quasiparticles indeed diffuse over a distance which is not large compared to the inter-vortex spacing $`a_0`$, which ranges from $`1.5\mu m`$ at $`10G`$ to $`0.2\mu m`$ at $`600G`$. ## 5 Conclusion In this paper, we have presented some properties of the superconducting phase of a Titanium Nitride (TiN) thin film ($`100nm`$). We have drawn the transition line in the H-T plane with the magnetic field perpendicular to the film. From the slope $`\frac{\partial H_{c2}}{\partial T}`$ between $`T_c(0)=4.6K`$ and $`2.5K`$, we estimate the coherence length $`\xi (0)\simeq 10nm`$. At $`4.2K`$, we performed transport measurements by recording both the differential resistance versus DC current and I-V characteristics for various field amplitudes. At low current, flux flow theory applies, whereas at higher current the behavior is non-linear. At a certain current, an electronic instability takes place, which corresponds to a critical vortex velocity. From its behavior with respect to the field, we get an estimate of the energy relaxation time: $`\tau _ϵ\simeq 5\times 10^{-10}s`$. Further measurements, especially at different temperatures and film thicknesses, are needed to give a more precise description of the microscopic dynamics in this material, in particular concerning electron-electron and electron-phonon relaxation processes. ## 6 Acknowledgment We would like to thank J. C. Villégier and R. Calemczuk for clarifying discussions.
# Quantum gravity as a dissipative deterministic system SPIN-1999/07, gr-qc/9903084. Gerard ’t Hooft, Institute for Theoretical Physics, University of Utrecht, Princetonplein 5, 3584 CC Utrecht, the Netherlands, and Spinoza Institute, Postbox 80.195, 3508 TD Utrecht, the Netherlands. e-mail: g.thooft@phys.uu.nl internet: http://www.phys.uu.nl/~thooft/ Abstract It is argued that the so-called holographic principle will obstruct attempts to produce physically realistic models for the unification of general relativity with quantum mechanics, unless determinism in the latter is restored. The notion of time in GR is so different from the usual one in elementary particle physics that we believe that certain versions of hidden variable theories can – and must – be revived. A completely natural procedure is proposed, in which the dissipation of information plays an essential role. Unlike earlier attempts, it allows us to use strictly continuous and differentiable classical field theories as a starting point (although discrete variables, leading to fermionic degrees of freedom, are also welcome), and we show how an effective Hilbert space of quantum states naturally emerges when one attempts to describe the solutions statistically. Our theory removes some of the mysteries of the holographic principle; apparently non-local features are to be expected when the quantum degrees of freedom of the world are projected onto a lower-dimensional black hole horizon. Various examples and models illustrate the points we wish to make, notably a model showing that massless, non-interacting neutrinos are deterministic. 1. Introduction. At present, many elementary particle physicists appear to agree that superstring theory$`^\text{1}`$ and its descendants, such as “M-theory”$`^\text{2}`$, are the only candidates for a completely unified theory that incorporates the gravitational force into elementary particle physics. This consensus is based on the very rich mathematical structure of these theories, which shows some resemblance to the observed mathematical structure of the Standard Model as well as to that of General Relativity. It also appears to be a satisfactory feature$`^\text{3}`$ of these theories that they manage to reproduce the so-called ‘holographic principle’$`^\text{4}`$. This principle states that any complete theory combining quantum mechanics with gravity should exhibit an upper limit to the total number of independent quantum states that is quite different from what might be expected in a quantum field theory: it should increase exponentially with the surface area of a system, rather than its volume. Yet this only adds to the suspicion that these theories are far removed from a description of what one might call ‘reality’. One would have expected that the quantum degrees of freedom can be localised, as in a quantum field theory, but this cannot really be the case if theories with different dimensionalities are being mapped one onto the other. How can notions such as causality, unitarity, and local Lorentz invariance make sense if there is no trace of ‘locality’ left? In this paper, a theory is developed that will not postulate quantum states as its central starting point, but rather classical, deterministic degrees of freedom. Quantum states, being mere mathematical devices enabling physicists to make statistical predictions, will turn out to be derived concepts, with a definition that is not strictly local. 
At the time this is written, the quantum mechanical doctrine, according to which all physical states form a Hilbert space and are controlled by non-commuting operators, is fully taken for granted in string theory. No return to a more deterministic description of “reality” is being foreseen; to the contrary, string theorists often give air to their suspicion that the real world is even crazier than quantum mechanics. Consequently, the description of what really constitutes concepts such as space, time, matter, causality, and the like, is becoming increasingly and uncomfortably obscure. By many, this is regarded as an inescapable course of events, with which we shall have to learn to live. But there are also other difficulties associated with such starting points, for instance when space-time curvature is being used to close an entire universe. We get “quantum cosmology”. An extremely important example of a quantum cosmological model is a model of gravitating particles in 1 time, 2 space dimensions$`^\text{5}`$. Here, a complete formalism for the quantum version at first sight seems to be straightforward$`^\text{6}`$, but when it comes to specifying exact details, one discovers that we cannot rigorously define what quantum mechanical amplitudes are, what it means when it is claimed that “the universe will collapse with such-and-such probability”, what and where the observers are, what they are made of, and so on. Yet such questions are of extreme importance if one wants to check a theory for its self-consistency, by studying unitarity, causality, etc. Since the entire hamiltonian of the universe is exactly conserved, the “wave function of the universe” is in an exact eigenstate of the hamiltonian, and therefore, the usual Schrödinger equation is less appropriate than the description of the evolution in the so-called Heisenberg representation. Quantum states are space-time independent, but operators may depend on space-time points – although only if the location of these space-time points can be defined in a coordinate-free manner! Note that, besides energy, also the total momentum and angular momentum of the universe must be conserved (and they too must be zero). We have learned to live with the curious phenomenon that our wave functions can be eigenstates of operators which at different space-time points usually do not commute. A “physical state” can be an eigenstate of an arbitrary set of mutually commuting operators, but then other operators are not diagonalized, and so, these observables tend to be smeared, becoming “uncertain”. The idea that such uncertainties may be due to nothing other than our limited understanding of what really is going on, has become unpopular, for very good reasons. Attempts at lifting these uncertainties by constructing theories with ‘hidden variables’ have failed. It is the author’s suspicion, however, that these hidden variable theories failed because they were based far too much upon notions from everyday life and ‘ordinary’ physics, and in particular because general relativistic effects have not been taken into account properly. The interpretation adhered to by most investigators at present is still not quite correct, and a correct interpretation is crucial for making further progress at very technical levels in quantum gravity. Earlier attempts by this author to obtain further insights led to the idea that space, time, and matter all had to be discrete$`^\text{7}`$. 
If this were the case, it would seem to be easy to set up a deterministic model of the universe, and a mathematically rigorous procedure to handle probabilities by introducing an auxiliary Hilbert space, spanned by all possible states, whose evolution is accurately described by an evolution operator, leading to Schrödinger’s equation in the continuum limit. Indeed, some models constructed along these lines look very much like genuine quantum field theories. They showed, however, one very important shortcoming. This is the fact that the hamiltonian, emerging naturally from the basic equations, invariably fails to have a lower bound, and so it appeared to be impossible to construct the vacuum state. One possible exception is a model of (second quantized) non-interacting massless fermions. They can be viewed exactly as a continuum limit of a discrete, deterministic theory, see Appendix A. Here it is shown that massless non-interacting neutrinos are deterministic. Unfortunately, however, we have been unable to generalize this system into something more interesting. Since quantum mechanics is described by a unitary evolution operator, it was natural to expect that, in a cellular automaton model, only time-reversible evolution laws would be acceptable. However, a little bit of thought suffices to realise that this is not the case. If an evolution law is not time reversible, it just means that some states will be absolutely forbidden (their amplitudes will vanish), and others will evolve into states that, after a while, become indistinguishable from states with a different past. If a pair of states evolve in such a way that their futures are identical, then these states should be called physically identical from the very start. To be precise, we must introduce equivalence classes of states, defined by collecting all states which, at some time in the future, after a given lapse of time, will have become identical to one another. The evolution from one equivalence class to a different equivalence class is then again unitary, by construction. An early attempt to construct a deterministic model with built-in information loss appeared to bring improvement: in a certain approximation, the hamiltonian did appear to develop a lower bound$`^\text{8}`$. Nevertheless, there were shortcomings, as in more precise calculations the lower bound disappears again, and in any case the model was unattractive. On the other hand, once it is realized that, at the classical level, information loss is permitted, we can return to strictly continuous underlying deterministic equations. All that is needed is that the equivalence classes are discrete. At later stages of the theory, one might reconsider the option of regarding the continuous theories as the continuum limit of some discrete system. The advantages of returning to continuum theories are numerous. One is that it becomes much easier to account for the many observed continuous symmetries such as rotational and Lorentz invariance. Even more important is the fact that a strictly continuous time coordinate implies that the hamiltonian is unbounded, so that realistic models may be easier to achieve. But making information dissipate is not easy in continuum theories. It may well be that discrete degrees of freedom must be added. This would be no real problem. Discrete degrees of freedom often manifest themselves as fermions in the quantum formalism. 
It is also conceivable that the continuum theories at the basis of our considerations will have to include string- and $`D`$-brane degrees of freedom, and it would be beautiful if we could make more than casual contact with the mathematics of string- and $`M`$-theories. In Section 2, we expand on the definition of physical states as being equivalence classes of deterministic states, first illustrated for the discrete case, but it has a sensible continuum limit, so that a continuous time parameter can be employed. It is shown how dissipation of information leads to a reduction in the number of quantum levels, but in terms of these reduced states, unitarity is restored. In Section 3, we treat one of the continuum versions of a model with information loss, and show how they lead to discrete quantization even if the original degrees of freedom form a continuous multi- (or infinite-) dimensional space. In Section 4, we show how to couple different degrees of freedom gravitationally. Gravity theory naturally exhibits information loss when black holes are considered, and thus we argue that incorporating the gravitational force will actually help us to understand quantum mechanics. An example of a non-quantum theory that could be considered for use as an input is a liquid obeying the Navier-Stokes equation and developing turbulent behaviour. Viscosity induces information loss. The ultraviolet structure of Navier-Stokes fluids, however, does not quite meet the requirements of our theories. These matters are discussed in Section 5. The reader will criticize our arguments on the basis of the well-known Einstein-Podolsky-Rosen paradox$`^\text{9}`$. We elucidate our viewpoints on this matter in Section 6. Here we also discuss another fundamental quantum feature of our world that may appear to be irreconcilable with a non-quantum or pre-quantum interpretation: the ‘quantum computer’. Indeed, we formulate a conjecture concerning the practical limits of a quantum computer. Will the Copenhagen interpretation survive the 21st century? This is discussed in Section 7. Here, we also define the notions of beables and changeables. Dropping the requirement that information is preserved at the deterministic level settles the problem of how to treat quantum mechanical black holes. We explain how to handle them in our theory, and what now to think of the ‘holographic principle’, in Sect. 8. Conclusions are formulated in Sect. 9. In Appendix A, we discuss the massless neutrino model, and explain why massless neutrinos may be called quantum-deterministic objects. 2. Quantum States Consider a discrete system that can be in any one of the states $`e_1`$, $`e_2`$, $`e_3`$ or $`e_4`$. We shall call these states primordial states. Let there be an evolution law such that after every time step, $$e_1\to e_2,\quad e_2\to e_1,\quad e_3\to e_3,\quad e_4\to e_1.$$ $`(2.1)`$ This evolution is entirely deterministic, but it will still be useful to introduce the Hilbert space spanned by all four states, in order to handle the evolution statistically. Now, in this space, the one-time-step evolution operator would be $$U=\left(\begin{array}{cccc}0& 1& 0& 1\\ 1& 0& 0& 0\\ 0& 0& 1& 0\\ 0& 0& 0& 0\end{array}\right),$$ $`(2.2)`$ and this would not be a unitary operator. Of course, the reason why the operator is not unitary is that the evolution rule (2.1) is not time reversible. After a short lapse of time, only the states $`e_1`$, $`e_2`$ and $`e_3`$ can be reached. 
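As a quick numerical illustration (the code is ours and not part of the original argument), one can build this matrix, verify that it fails to be unitary, and recover the equivalence classes introduced above by comparing futures:

```python
import numpy as np

step = {1: 2, 2: 1, 3: 3, 4: 1}           # the evolution law (2.1)

# One-time-step evolution matrix of Eq. (2.2): column j carries a 1 in row step(j).
U = np.zeros((4, 4))
for j, k in step.items():
    U[k - 1, j - 1] = 1.0
print(np.allclose(U.T @ U, np.eye(4)))    # False: U is not unitary

# Two states are equivalent if they coincide after the same lapse of time;
# four steps are more than enough for this small system.
def future(s, n=4):
    for _ in range(n):
        s = step[s]
    return s

classes = {}
for s in step:
    classes.setdefault(future(s), []).append(s)
print(sorted(classes.values()))           # [[1], [2, 4], [3]]
```

On the three classes the evolution acts as a simple permutation, which is unitary again (cf. Eq. (2.4) below).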
In this simple example, it is clear that one should simply erase state $`e_4`$, and treat the upper $`3\times 3`$ part of Eq. (2.2) as the unitary evolution matrix. Thus, the quantum system corresponding to the evolution law (2.1) is three-dimensional, not four-dimensional. Fig. 1. The transitions of Eq. (2.1). In more complicated non-time-reversible evolving systems, however, the ‘genuine’ quantum states and the false ones (the ones that cannot be reached from the far past) are actually quite difficult to distinguish, so it is more fruitful to talk of equivalence classes. Two states are called equivalent if, after some finite time interval, they evolve into the same state. The system described above has three equivalence classes, $$E_1=\left\{e_1\right\},E_2=\{e_2,e_4\},E_3=\left\{e_3\right\}.$$ $`\left(2.3\right)`$ and the evolution operator in terms of the states $`E_1,E_2,E_3`$ is $$U=\left(\begin{array}{ccc}0& 1& 0\\ 1& 0& 0\\ 0& 0& 1\end{array}\right).$$ $`\left(2.4\right)`$ One may consider constructing a hamiltonian operator $`H`$ such that $`U=e^{iH}`$. Our model universe (2.1) would be in an eigenstate of this hamiltonian. Since the phases of the states $`|e_i\rangle `$ are arbitrary anyway, our universe can be assumed to be either in the state $`|E_3\rangle `$, or in $$|\psi ^+\rangle =\frac{1}{\sqrt{2}}\left(|E_1\rangle +|E_2\rangle \right).$$ $`\left(2.5\right)`$ Global time is not a directly observable quantity (time translations can be regarded as being gauge transformations), which is why only $`|\psi ^+\rangle `$ is a physical state, together with $`|E_3\rangle `$. So, the physical Hilbert space is only two dimensional: $`\{|\psi ^+\rangle ,|E_3\rangle \}`$. In a system with discrete time coordinates, the hamiltonian has a periodic energy spectrum, and it is impossible to identify any of the energy states as the true ‘ground state’, or vacuum. This was found to be a major obstacle impeding the construction of physically more interesting models. On the other hand, discretization of the states is imperative for any statistical analysis. The above model now shows that, if information is allowed to dissipate, we have to treat the equivalence classes of states as the basis of a quantum Hilbert space, and we observe that these equivalence classes can form a smaller set than the complete set of primordial states that one starts off with. In the Heisenberg picture, the dimensionality of a limit cycle does not change if we replace the time variable by one with smaller time steps, or even a continuous time. Working with a continuous time variable has the advantage that the associated operator, the hamiltonian, is unambiguous in that case. The hamiltonian will play a very important role in what follows. 3. A continuum model with information loss. In this section, it will be shown that even in theories containing many continuous degrees of freedom, the equivalence classes will tend to form discrete, ‘quantum’ sets, much like the situation in the real world, only if one allows information to dissipate. The simplest model goes as follows. If there is a single limit cycle, we have one periodic degree of freedom $`q(t)\in [0,1)`$, evolving according to $$\dot{q}\left(t\right)=v,$$ $`\left(3.1\right)`$ so that the period is $`T=1/v`$. In the Schrödinger picture, the dimensionality of Hilbert space is infinite, but if this model represents an entire universe, then only the state $`E=0`$ is physically acceptable. 
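Spelling this out (an elementary check of ours, not in the original text): with $`p=-i\partial /\partial q`$ on the circle, the eigenfunctions of $`H=vp`$ are the plane waves $$\psi _n(q)=e^{2\pi inq},\qquad E_n=2\pi nv,\qquad n\in \mathbb{Z},$$ so the spectrum is discrete but unbounded in both directions, and the only state surviving the constraint $`E=0`$ is the constant wave function $`\psi _0(q)=1`$.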
Also the fact that time is not a gauge-invariant notion implies that only the single state $`\langle q|\psi \rangle =1`$ is physical. Therefore, in the Heisenberg picture, the dimensionality of Hilbert space is just one. Now imagine two such degrees of freedom: $$q_1,q_2\in [0,1);\quad \dot{q}_1\left(t\right)=v_1,\quad \dot{q}_2\left(t\right)=v_2.$$ $`\left(3.2\right)`$ First, take $`v_1`$ and $`v_2`$ to be constants. The Schrödinger Hilbert space is spanned by the states $`|q_1,q_2\rangle `$, and our formal hamiltonian is $$H=v_1p_1+v_2p_2;\quad p_j=-i\partial /\partial q_j.$$ $`\left(3.3\right)`$ In this case, even the zero-energy states span an infinite Hilbert space, so, in the Heisenberg picture, there is an infinity of possible states. Information loss is now introduced by adding a tiny perturbation that turns the flow equations into non-Jacobian ones: $$v_1\to v_1^0+\epsilon f(q_1,q_2);\quad v_2\to v_2^0+\epsilon g(q_1,q_2).$$ $`\left(3.5\right)`$ The effect of these extra terms can vary a lot, but in the generic case, one expects the following (assuming $`\epsilon `$ to be a sufficiently tiny number): Let the ratio $`v_1^0/v_2^0`$ be sufficiently close to a rational number $`N_1/N_2`$. Then, for specially chosen initial conditions there may be periodic orbits, with period $$P=N_1/v_1^0=N_2/v_2^0,$$ $`\left(3.6\right)`$ where now $`v_1^0`$ and $`v_2^0`$ have been tuned to exactly match the rational ratio – possible deviations are absorbed into the perturbation terms. Near these stable orbits, there are non-periodic orbits, which in general will converge into any one of the stable ones, see Fig. 2. After a sufficiently large lapse of time, we will always be in one of the stable orbits, and all information concerning the extent to which the initial state departed from the stable orbit is washed out. Of course, this only happens if the Jacobian of the evolution, controlled by the quantity $`\sum _i(\partial /\partial q_i)\dot{q}_i`$, departs from unity. Information loss of this sort normally does not occur in ordinary particle physics, although of course it is commonplace in macroscopic physics, such as the flow of liquids with viscosity (see Sect. 5). Fig. 2. Flow chart of a continuum model with two periodic variables, $`q_1`$ and $`q_2`$. In this example, there are two stable limit cycles, $`A`$ and $`B`$, representing the two ‘quantum states’ of this ‘universe’. In between, there are two orbits that would be stable in the time-reversed model. The stable orbits now represent our equivalence classes (note that, under time reversal, there are new stable orbits in between the previous ones). Most importantly, we find that the equivalence classes will form a discrete set in a model of this sort, most often just a finite set, so that, in the Heisenberg picture, our ‘universe’ will be in just a finite number of distinct quantum states. Generalizing this model to the case of more than two periodic degrees of freedom is straightforward. We see that, if the flow equations are allowed to be sufficiently generic (no constraints anywhere on the values of the Jacobians), then distinct stable limit orbits will arise. There is only one parameter that remains continuous, which is the global time coordinate. If we impose $`H|\psi \rangle =0`$ for the entire universe, then the global time coordinate is no longer physically meaningful, as it obtains the status of an unobservable gauge degree of freedom. Observe that, in the above models, what we call ‘quantum states’ coincides with Poincaré limit cycles of the universe. 
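The convergence to a discrete set of limit cycles is easy to simulate. The following sketch (with our own toy choice for the perturbations $`f`$ and $`g`$, purely illustrative) evolves a few random initial conditions of the system (3.2), (3.5):

```python
import numpy as np

eps = 0.05

def flow_step(q, dt=1e-3):
    """One Euler step of Eqs. (3.2), (3.5) with the illustrative choice
    f = -g = sin(4*pi*(q2 - q1)), which makes the flow non-Jacobian."""
    q1, q2 = q
    d = np.sin(4.0 * np.pi * (q2 - q1))
    return np.array([q1 + dt * (1.0 + eps * d),
                     q2 + dt * (1.0 - eps * d)]) % 1.0

rng = np.random.default_rng(1)
for _ in range(5):
    q = rng.random(2)                 # random initial condition
    for _ in range(200_000):          # evolve long enough to converge
        q = flow_step(q)
    delta = (q[1] - q[0]) % 1.0       # 0.0 and 1.0 label the same cycle
    print(f"q2 - q1 converged to {delta:.3f}")
```

The phase difference $`q_2-q_1`$ always settles at $`0`$ or $`1/2`$: two stable limit cycles, as in Fig. 2, while $`1/4`$ and $`3/4`$ are the orbits that would be stable in the time-reversed model. Every initial condition ends up on one of these discrete limit cycles.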
It is only because our model universes are so small that we were able to identify these. When we glue tiny universes together to obtain larger and hence more interesting models, we get much longer Poincaré cycles, but also many more of them. Eventually, in practice, one has to abandon the hope of describing complete Poincaré cycles, and replace them by the more practical definitions of equivalence classes. At that point, when one combines mutually weakly interacting universes, the effective quantum states are just multiplied into product Hilbert spaces. 4. Gravity. Models describing only a small number of distinct quantum states, such as all of the above, do not very clearly show the most salient difficulty encountered when one attempts to construct realistic models. This is the fact that our universe is known to be thermodynamically stable. A system in thermodynamic equilibrium is governed by Boltzmann factors $`e^{-\beta E_i}`$, where $`\beta `$ is the inverse temperature. Stability is guaranteed only if the hamiltonian has a ground state. In the models above, only the zero eigenvalue of the hamiltonian plays a role, so we have to be more careful in our use of the notion of a hamiltonian. A thermodynamic treatment applies only to a hamiltonian describing some small subsystem of the universe. Apparently, one first must address the notion of locality before being able to formulate the exact meaning of thermodynamic stability. Our definition of locality will be that the hamiltonian of the universe can be written as $$H=\int \mathrm{d}^3𝐱\,\mathcal{H}\left(𝐱\right),$$ $`\left(4.1\right)`$ where $`\mathcal{H}(𝐱)`$ is a hamiltonian density, obeying $$[\mathcal{H}\left(𝐱\right),\mathcal{H}\left(𝐱^{}\right)]=0\quad \text{if}\quad \left|𝐱-𝐱^{}\right|>\epsilon ,$$ $`\left(4.2\right)`$ for some $`\epsilon >0`$. Positivity then means that $`\mathcal{H}(𝐱)`$ is bounded from below: $$\mathcal{H}\left(𝐱\right)>-\delta ,$$ $`\left(4.3\right)`$ for some number $`\delta `$. The value of $`\delta `$ may diverge as $`\epsilon `$ is sent to zero, but we do not really need to send $`\epsilon `$ all the way to zero; probably a small finite value will be good enough for us. At first sight, it seems to be easy to realise (4.1)–(4.3) in a deterministic cellular automaton model$`^\text{7}`$. In such a model, $`\mathcal{H}(𝐱)`$ depends on only a finite number of states, and there is some freedom in defining $`\mathcal{H}`$, since the time variable is discrete. If the evolution law of the automaton is local, one naturally expects that the hamiltonian will be local as well, but unfortunately, the hamiltonian generated by a cellular automaton is not so simple. The point is that the hamiltonian is the logarithm of the evolution operator, and it can only be written as an infinite perturbation series in terms of the interactions. In calculating the outcome, one discovers divergences that contradict (4.1)–(4.3), and this is the only reason why our automaton models failed to serve as realistic models for a quantum field theory.$`^\text{7}`$ We propose to circumvent these problems by first returning to continuous time variables. As we saw in the model of the previous section, it relies, in principle, on strictly continuous time, so there is a unique hamiltonian. Finiteness of the number of quantum states is then guaranteed by the mechanism explained above: information loss reduces the dimensionality of Hilbert space to a finite one. The model one starts off with may have continuous degrees of freedom, such as a classical field theory, but it must have information loss. 
We now propose that one takes a continuous, classical field theory with general coordinate invariance. Such models differ in various essential ways from the more naive cellular automaton models studied previously. First, one observes that the time variable has now become a local gauge degree of freedom. The velocity of time evolution in various regions of the universe may differ, just as in the model of Sect. 3, and this difference is controlled by the gravitational field. The ratio of the speed of evolution at $`𝐱`$ and $`𝐱^{}`$ is $`\sqrt{g_{00}(𝐱)/g_{00}(𝐱^{})}`$, and since $`\sqrt{g_{00}}`$ plays the role of a gravitational potential, this relative speed depends on the gravitational flux from $`𝐱`$ to $`𝐱^{}`$. Indeed, the coupling introduced in Sect. 3 may be regarded as a ‘gravitational’ coupling. Secondly, the notion of locality is also made more complicated, as the coordinates $`𝐱`$ themselves have become gauge degrees of freedom. This makes our study of the positivity constraint on the hamiltonian density much more difficult than before. A third complication is the exact definition of what a hamiltonian actually is. We should distinguish the matter part of the hamiltonian from the gravitational part. It is the matter part which we want to be positive. The gravitational part, controlling the value of the gravitational fields, must be regarded separately, since in total they add up to zero: the hamiltonian density generates local time translations, which however are pure gauge transformations, under which the wave function does not change, hence $$\mathcal{H}_{\mathrm{matter}}\left(𝐱\right)+\mathcal{H}_{\mathrm{grav}}\left(𝐱\right)=0.$$ $`\left(4.6\right)`$ But “matter” should also include gravitons. The correct way to introduce the hamiltonian here is first to define Cauchy surfaces of equal time, and then to define the operator $`H`$ that generates time evolution, that is, a mapping from one Cauchy surface to the next. This requires an external clock to be defined; we take as our clock the measurements made by observers far from the region studied. It is important, however, that this clock is part of the universe studied, and not external to it. It is the hamiltonian with this more subtle definition that has to be split into hamiltonian density functions $`\mathcal{H}(𝐱)`$. All of these aspects make gravity so different from ordinary cellular automaton models that we have good hopes that the naive difficulties encountered with the cellular automata can now be resolved. The most important distinction between gravitational and non-gravitational models is that, in gravitational models, information loss naturally occurs, since black holes may be formed. Indeed, it will be hard to avoid the development of coordinate singularities, but quite generally, one expects such singularities to be hidden behind horizons. So we have black holes. One may wonder whether black holes in our deterministic gravity models can emit any Hawking radiation$`^{\text{10}}`$, since the latter is considered to be a typical quantum effect. The answer is that we do expect Hawking radiation, and the argument for this is that the usual derivation of this effect is still valid. One then may ask how it can be that a black hole can lose weight, since in classical theories black holes can only grow. The answer here is, presumably, as follows. In gravity, the hamiltonian not only generates the evolution through the hamilton equations, but is also the source of the gravitational field. 
In our deterministic model, the gravitational field already exists, whereas the logarithm of the evolution operator, at first sight, may have little to do with curvature. In writing the hamiltonian as an integral over hamiltonian densities, however, these two notions of energy get intertwined, and we end up with only one notion of energy. When a black hole loses energy, it is primarily because it absorbs negative amounts of “curvature energy”. Clearly then, our primordial model must allow for the presence of negative amounts of energy. Actually, this is obviously true for the quantum mechanical energy, because, after diagonalization, the total hamiltonian has a zero eigenvalue. Prior to diagonalization of the total $`H`$, the hamiltonian density $`\mathcal{H}(𝐱)`$ must have negative eigenvalues. We now see that, since the black hole must lose weight, the primordial model must also have local fluctuations with negative “curvature energy”. Black holes absorb negative amounts of energy, allowing positive energy to escape to infinity. It is due to the postulated thermodynamic stability that the fluctuations surviving at spatial infinity may only have positive energy. Since the total energy balances out, the black hole will therefore receive only net amounts of negative energy falling in. Hence it loses weight and decays. 5. Viscosity. To obtain some insight into continuum models with information loss, it is tempting to consider an example from macroscopic physics. Consider the Navier-Stokes equations for a fluid with viscosity$`^{\text{11}}`$. For simplicity, we take a pure, incompressible fluid with density $`\varrho `$ equal to one, and viscosity $`\eta `$. As is well known, such fluids may develop turbulence, and turbulence occurs when the Reynolds number, $$R=\varrho u\ell /\eta ,$$ $`\left(5.1\right)`$ where $`u`$ represents the velocities involved and $`\ell `$ the typical sizes, becomes larger than a certain critical value, $`R_{\mathrm{cr}}`$. This is a dimensionless number, ranging from a few tens to something of the order of $`10^3`$. Turbulence could be a nice example of the kind of chaotic behaviour to which one could apply our quantum mechanical philosophy. We see from the expression (5.1) for the Reynolds number that only if the viscosity $`\eta `$ is sufficiently small compared to the dimensions of the system do instabilities arise that cause turbulence. Viscosity, for incompressible fluids, can be expressed in terms of $`\mathrm{cm}^2/\mathrm{sec}`$, so the distance scale at which turbulence can take place can be arbitrarily small, provided that the time scale decreases accordingly. This is why turbulence can cascade down to very tiny dimensions, until finally the molecular scale is reached, at which point the fluid equations no longer apply. Because of this divergence into infinitesimally small scales, a viscous fluid cannot be treated with our Hilbert space methods. In a relativistic classical field theory, the situation is likely to be very different. First of all, it is very difficult to introduce viscosity in a relativistically invariant way, since first derivatives in time must be linked to second derivatives in space. 
But, assuming that in sufficiently complicated systems viscous yet Lorentz invariant terms can be introduced, one notices that there must be another distinction as well: if turbulence cascades down to smaller dimensions, it cannot be that the square of the distance scale divided by the time scale stays constant, because the ratio of the distance scale itself to the time scale is limited by the speed of light. Therefore, one may imagine that there is a lower limit to vortex size, and hence a natural smallest distance limit. It is necessary to have a smallest scale limit so as to have a workable cut-off leading to an effective quantization. Unfortunately, realistic relativistic classical field theories with viscosity have not (yet?) been found, which is why, perhaps, information loss via black holes must be called upon. 6. The EPR paradox. A falsifiable prediction The most serious objection usually raised against ideas of the kind discussed in this paper is that deterministic theories underlying quantum mechanics appear to imply Bell’s inequalities for stochastic phenomena$`^{\text{12}}`$, whereas it is well-known that many of these inequalities are violated in quantum mechanics. Clearly, we have to address these objections. Bell’s inequalities follow if one assumes deterministic equations of motion to be responsible for the behaviour of quantum mechanical particles at large scales. If one assumes that the $`x`$-component of an electron’s spin exists, having some (unknown) value even while the $`z`$-component is measured, then the usual clashes are found. In our theory, however, the wave function has exactly the meaning and interpretation as in usual quantum mechanics; it describes the probability that something will or will not happen, given all other information of the system available to us. “Reality”, as we perceive it, does not refer to the question whether an electron went through one slit or another. It is our belief that the true degrees of freedom are not describing electrons or any other particles at all, but microscopic variables at scales comparable to the Planck scale. Their fluctuations are chaotic, and no deterministic equation exists at all that describes the effects of these fluctuations at large scales. Thus, the behaviour of the things we call electrons and photons is essentially entirely unpredictable. It so happens, however, that some regularities occur within all these stochastic oscillations, and the only way to describe these regularities is by making use of Hilbert space techniques. When we measure the spin of a photon, or the detection rate of particles by a counter, our measuring device is as much a chaotic object as the phenomena measured, and only at macroscopic scales can we detect statistical regularities that can in no other way be linked to microscopic behaviour than by assuming Schrödinger’s equation. The idea that there might exist a deterministic law of physics underlying all of this essentially amounts to nothing more than the suggestion that there exists a ‘primordial basis’, a preferred basis of states in Hilbert space with the property that any operator that happens to be diagonal in this basis will continue to be diagonal during the evolution of the system. None of the operators describing present-day atomic and subatomic physics will be completely diagonal in this basis. This enables us to accept both quantum mechanics with its usual interpretation and to assume that there is a deterministic physical theory lying underneath it. 
Apparently, we are forced to deny the existence of electrons, and other microscopic objects, even if they appear to be obvious explanations of observed phenomena. Only macroscopic oscillations, such as the movements of planets and people, are undeniable realities (that is, approximately diagonal in the primordial basis), and it must be possible to recognise these ‘realities’ in terms of the microscopic, deterministic variables. This leads once again to a very serious objection, which is the following. Quantum mechanics, as we know it, leads to many more phenomena that are at odds with classical deterministic descriptions. An example of this is the so-called quantum computer$`^{\text{13}}`$. Using quantum mechanics, a device can be built that can handle information in a way no classical machine will ever be able to reproduce, such as the determination of the prime factors of very large numbers in an amount of time not much more than what is needed to do multiplications and other basic arithmetic with these large numbers. If our theory is right, it should be possible to mimic such a device using a classical theory. This gives us a falsifiable prediction: It will never be possible to construct a ‘quantum computer’ that can factor a large number faster, and within a smaller region of space, than a classical machine would do, if the latter could be built out of parts at least as large and as slow as the Planckian dimensions. A somewhat stronger version of this prediction, based on the entropy formula for a black hole, would be: “The classical machine may be thought of as being built of parts each of which occupies an area of at least one Planck length squared.” If this were true, it would not be the total volume but the total area that needs to be compared. We are less confident, however, of this latter version of our prediction, which is based on the holographic principle. The reason to doubt it is that the holographic principle follows from quantum mechanical arguments, and hence refers to the number of equivalence classes, not the number of actual possible states, see Sect. 8. Therefore, a classical computer that is able to erase information may have to use sites of Planckian dimension in a volume, not just on an area. Quantum computers are known to suffer from problems such as ‘decoherence’. Often, it is claimed that decoherence is nothing but an annoying technical problem. In our view, it will be one of the essential obstacles that will forever stand in the way of constructing super powerful quantum computers, and unless mathematicians find deterministic algorithms that are much faster than the existing ones, factorization of numbers with millions of digits will never be possible. 7. Beables and changeables. Will the Copenhagen interpretation survive the 21st century? In a way, our present approach does not really attack the Copenhagen interpretation. We attach to the wave function $`|\psi \rangle `$ exactly the same interpretation as the one taught at our universities. However, the Copenhagen interpretation also carries a certain amount of agnosticism: we will never be able to determine what actually happened during a physical experiment, and it is asserted that a deterministic theory is impossible. It is this agnosticism that we disagree with. There is a single ‘reality’, and physicists may be able to identify some of it. 
Of course, our physical universe is far too complex ever to be able to pinpoint in detail the actual sequence of events at tiny distance scales, but this situation is in no way different from our inability to follow individual atoms and molecules in a classical theory for gases and liquids. In a classical theory, we know that the atoms and molecules are there, we know their dynamics, but we are unable to trace individual entities, nor are we even interested in doing so; what we do want is to unravel the laws. Thus, we add the following to the Copenhagen interpretation. In our theory, the operators used for describing a system will be divided into two types. If the representation of our Hilbert space is chosen to be such that the equivalence classes of the primordial states form its basis elements, then we have beables, which, if expressed in this basis, multiply a state by a real or complex number referring to properties of the equivalence class our state is in, and changeables, which may replace a state by a different state, in a different equivalence class, possibly multiplying it also by a complex, state dependent, amplitude. Beables are operators which, in the Heisenberg representation, all commute with one another at all times. Changeables, of course, do not commute in general. Operators that act non-trivially on the different states within one equivalence class are not physically meaningful, but could be used for mathematical purposes. We propose to refer to these as ghost operators. Conventional quantum mechanics results from the remarkable feature that, in describing systems of atomic sizes, we have become unable to distinguish the beables from the changeables. All operators known in the Standard Model of elementary particles are changeables. Beables may refer to features at the Planck scale, or to features at macroscopic scales, but in general they are not suitable for describing single particles at the atomic scale. Only if we manage to demonstrate that, under several restrictive conditions, diagonalizing a beable at the macroscopic scale corresponds to diagonalizing a changeable at the atomic scale, can we do a quantum experiment. This, the author believes, does not contradict any of the usual findings concerning hidden variable theories and Bell’s inequalities. Those findings were based on the assertion that a theory describing the fluctuations at atomic scales should ‘explain’ these fluctuations in terms of laws at the atomic scale that go beyond ordinary quantum mechanics. In contrast, we now require such laws to exist only at the Planck scale. It will be the physicists’ task in the next century to identify the beables that can be used at the Planck scale. They can clearly not include operators resembling the ones we are used to at present, such as spin, positions or momenta. At the Planck scale, the introduction of the wave function will be nothing other than a mathematical trick enabling us to handle the equations statistically. Due to the powerful mathematics of linear algebras, this trick will allow us to perform renormalization group transformations towards the much lower energy scales and much larger distance scales of atomic physics. As a result, conventional quantum mechanics is the only way to describe the correlations at atomic scales. Our theory does profoundly disagree with the so-called ‘many worlds interpretation’. The unobserved outcomes of experiments are not realized in ‘parallel universes’ or anything of the sort. 
Every experiment has a single outcome that is true, and all other outcomes are not realized anywhere. The wave function only means something when it is used as a tool helping us to make statistical predictions. At atomic and molecular scales, it is the only tool we have; there will never be a better way to make predictions, but this does not mean that there will not be a better underlying theory. We have no idea whether the Copenhagen interpretation, and in particular its agnostic elements, will survive the new century or not. This depends on human ingenuity, which is impossible to predict. String theories and related approaches at present do not address at all the possibility of deterministic underlying equations. This does not mean that they would be wrong. It is quite conceivable that string theory is the only way to analyse our world in such detail that the underlying dynamical equations can be identified. Our paper is a plea not to give up common sense while doing so. 8. Black holes and holography. Dropping the requirement that information is preserved at the deterministic level also settles another vexing problem: the treatment of quantum mechanical black holes. The problem encountered in studying the theory of black holes is as follows. Any sensible theory of matter and gravitation inevitably predicts that, given a sufficiently large amount of matter, gravitational collapse may occur and a black hole may form (to see this, it is sufficient to study the Chandrasekhar limit). Consider now a large black hole. Its properties at moderately small scales can be deduced unambiguously from invariance under general coordinate transformations. An elementary outcome of these considerations is that black holes emit particles of all kinds, in the form of thermal radiation. This result allows us to estimate the total number of possible quantum states of a black hole, and one finds that this number is essentially governed by the total area $`A`$ of the black hole horizon$`^{\text{10}}`$. On the other hand, one can try to make a model of the black hole horizon, in order to attempt to describe these quantum states, in a statistical manner, in terms of local degrees of freedom residing at this horizon$`^{\text{14}}`$. If quantum field theory is applied – any quantum field theory for particles in the background metric defined by the black hole – one finds the number of quantum states near the horizon to be strictly infinite. The difficulty is that the quantum states of a field theory reside in a volume, not on a surface, and furthermore, the number of quantum states in a field theory is unlimited because of the freedom to perform unlimited Lorentz transformations in the extreme vicinity of a horizon. It could be observed that what is needed is a ‘holographic principle’$`^\text{4}`$. This principle states that the number of quantum states of the quantum field theory describing our world should not at all be as large as in conventional, non-gravitational systems; this number should, in fact, be bounded by an expression involving the total area of the boundary. This situation resembles what one gets if a holographic picture is taken of a scene in three spacelike dimensions, using a two-dimensional photographic plate. We give the photographic plate a resolution limited by one pixel per Planck length squared, approximately. This causes a slight blurring of our three-dimensional view, but, since it is Planckian dimensions that are involved, such a blurring is unobservable in ordinary physics. 
However, it appeared to be extremely difficult to construct a theory with ‘holography’ from first principles. At this point, string theory and $`D`$-brane theory appear to come to the rescue$`^{\text{15}}`$: beautiful studies provide descriptions of black holes where, indeed, the quantum states are identified, counted, and their number is found to depend on the horizon area in the way expected from Hawking radiation. There appears to be just a small price to be paid. These theories do not tell us exactly how to handle the space-time transformations that relate the behaviour at a horizon to theories in the nearby volume. There are conjectures that describe the nature of these relations, but the physical implications of these conjectures are difficult to grasp. String theory now asserts that a theory in $`3+1`$ dimensions must be equivalent to a conformal theory in lower dimensions$`^\text{3}`$; this has to be the case if black holes are to be adequately described by these theories. However, it does raise all sorts of questions. In the real physical world, the number of space-time dimensions can be determined ‘experimentally’, and the outcome of such experiments should be either $`3+1`$ or $`2+1`$ or something else, but not two or more conflicting answers, except in the uninteresting case where inhabitants of this world cannot do their experiments because there are no usable inter-particle interactions, or because the interactions in their world are severely non-local (note that we are not referring to Kaluza-Klein compactification at this point, which of course would be an acceptable way to link theories with different dimensionalities). How can we get around these problems? The theory in this paper gives a way out that is quite acceptable from a physical point of view. In our theory, quantum states are not the primary degrees of freedom. The primary degrees of freedom are deterministic states. Since, at a local level, information in these states is not preserved, the states combine into equivalence classes. By construction then, the information that distinguishes the different equivalence classes is absolutely preserved. Quantum states are equivalence classes, but in order to identify equivalence classes, the evolution of a system must be followed for a certain length of time, and this turns the definition of an equivalence class into a non-local one. Black holes are nothing but extreme situations where information gets lost. Their equivalence classes comprise large sets of states that do look quite different to a local, ‘infalling’ observer, and this is why a black hole contains far fewer quantum states than the world seen by someone going in. But, as black holes are now truly large scale, composite objects, they cease to present elementary problems; they will take care of themselves in a natural manner; what remains to be done is the determination of the microscopic laws. Like all other structures in our theory, black holes will have to be described in terms of equivalence classes of states. States that have a different past, but an identical future, will be joined in a single equivalence class. By construction, the evolution of these equivalence classes will be unitary, so the emerging description of black hole evolution will be as in standard quantum mechanics, but the exact formulation of the Rindler space transformation can only be given after the set of fundamental, primordial states for the vacuum fluctuations has been identified. 
The so-called ‘holographic principle’ will then turn out to be a feature of the effective quantum mechanical description of black holes, but is no longer needed for the description of the fundamental (deterministic) degrees of freedom of the world. What the holographic principle tells us is that the number of equivalence classes of the deterministic theory will grow proportionally to the area of a black hole. The fact that the number of equivalence classes depends only on the surface of the boundary may seem to be something quite natural. At the boundary, information can pour in and out. If we would keep the boundary fixed (including the vacuum fluctuations there), the finite system at the inside may eventually lose all of its information and turn into a single Poincaré cycle (or into one of a small set of options). At closer inspection, however, this argument turns out to be insufficient. More investigation is needed of the mechanism that reduces the number of classes to an expression depending only on the area. 9. Conclusions and remarks. In spite of the failure of macroscopic hidden variable theories, it may still be possible that the quantum mechanical nature of the phenomenological laws of nature at the atomic scale can be attributed to an underlying law that is deterministic at the Planck scale but with chaotic effects at all larger scales. In this picture, what is presently perceived as a wave function must be regarded as a mathematical device for computing probabilities for correlations in the chaos. This wave function does retain its usual Copenhagen interpretation, but identifying quantum states at the Planck scale will be impeded by the phenomenon of information loss at that scale. Due to information loss, Planck scale degrees of freedom must be combined into equivalence classes, and it is these classes that will form a special basis for Hilbert space, which we refer to as the ‘primordial basis’. These considerations are of special importance for the description of black holes. The general coordinate transformation that underlies the definition of Rindler space maps local degrees of freedom into local degrees of freedom. However, the fact that all information that disappeared into black holes must be considered as being lost implies that the Rindler space transformation does not transform equivalence classes into equivalence classes, and therefore, this transformation is not a transformation of quantum states into quantum states. Let us stress again that information loss in black holes only occurs at the classical level. Since, according to our philosophy, quantum states are identified with equivalence classes, quantum information is preserved, by construction. In our theory, however, we reestablish the fundamental nature of the classical states, and deprive the quantum states of their fundamental status of primary degrees of freedom. This way, the black hole information paradox may be resolved. The well-known black hole entropy formula tells us that the number of equivalence classes for a black hole will grow as the exponential of the area in Planck units. It is of interest to observe that, in constructing models with a deterministic interpretation for quantum states, the restriction to $`1+1`$ dimensions is usually quite helpful. This is a reason to suspect that a deterministic interpretation of string theory is possible. In Appendix A, a construction is shown. Here, we succeeded in producing a model in $`3+1`$ dimensions, but its ultraviolet cut-off is fairly artificial.
In $`1+1`$ dimensions, the cut-off is straightforward. Appendix A. Massless neutrinos are deterministic. There is one system, actually realised to some extent in the real world, for which a primordial basis can be constructed. A primordial basis is a complete set of basis elements of Hilbert space that is such that any operator that happens to be diagonal now will continue to be diagonal in the future. Only if there is no information loss is the evolution of these elements determined by local equations. The model of this Appendix is first set up in such a way that no information loss appears to occur, but it also appears to be not quite local. Then we restore locality (at the cost of a violation of Lorentz invariance) by introducing information loss (whether Lorentz invariance has to be broken in the real world remains to be seen). Consider massless, non-interacting chiral fermions in four space-time dimensions. We can think of neutrinos, although of course real neutrinos deviate slightly from the ideal model described here. First, take the first-quantized theory. The hamiltonian for a Dirac particle is $$H=\stackrel{}{\alpha }\stackrel{}{p}+\beta m,\{\alpha _i,\alpha _j\}=2\delta _{ij},\{\alpha _i,\beta \}=0,\beta ^2=1.$$ $`\left(A.1\right)`$ Taking $`m=0`$, we can limit ourselves to the subspace projected out by the operator $`\frac{1}{2}(1+\gamma _5)`$, at which point the Dirac matrices become two-dimensional. The Dirac equation then reads $$H=\stackrel{}{\sigma }\stackrel{}{p},$$ $`\left(A.2\right)`$ where $`\sigma _{1,2,3}`$ are the Pauli matrices. We now consider the basis in which the following ‘primordial observables’ are diagonal: $$\{\widehat{p},\widehat{p}\stackrel{}{\sigma },\widehat{p}\stackrel{}{x}\},$$ $`\left(A.3\right)`$ where $`\widehat{p}`$ stands for $`\pm \stackrel{}{p}/|p|`$, with the sign such that $`\widehat{p}_x>0`$. We do not directly specify the sign of $`\stackrel{}{p}`$. Writing $`p_j=-i\partial /\partial x_j`$, one readily checks that these three operators commute, and that they continue to do so at all times. Indeed, the first two are constants of the motion, whereas the last one evolves into $$\widehat{p}\stackrel{}{x}\left(t\right)=\widehat{p}\stackrel{}{x}\left(0\right)+\widehat{p}\stackrel{}{\sigma }t.$$ $`\left(A.4\right)`$ The fact that these operators are complete is also easy to verify: in momentum space, $`\widehat{p}`$ determines the orientation; let us take this to be the $`z`$ direction. Then, in momentum space, the absolute value of $`p`$, as well as its sign, are identified with its $`z`$-component, which is governed by the conjugate operator $`i\partial /\partial p_z=x_z=\widehat{p}\stackrel{}{x}`$. The spin is defined in the $`z`$-direction by $`\widehat{p}\stackrel{}{\sigma }`$. Mathematically, these equations appear to describe a plane, or a flat membrane, moving in the orthogonal direction with the speed of light. Given the orientation (without its sign) $`\widehat{p}`$, the coordinate $`\widehat{p}\stackrel{}{x}`$ describes its distance from the origin, and the variable $`\widehat{p}\stackrel{}{\sigma }`$ specifies in which of the two possible orthogonal directions the membrane is moving. Note that, indeed, this operator flips sign under $`180^{}`$ rotations, as required for a spin $`\frac{1}{2}`$ representation. This, one could argue, is what a neutrino really is: a flat membrane moving in the orthogonal direction with the speed of light. But we’ll return to that later: the theory can be further improved.
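As a one-line consistency check (our own addition, using only the canonical commutator $`[x_j,p_k]=i\delta _{jk}`$ and the fact that $`[H,\widehat{p}_j]=0`$, both operators being functions of $`\stackrel{}{p}`$ alone), Eq. (A.4) is the integrated Heisenberg equation of motion: $$\frac{d}{dt}\left(\widehat{p}\stackrel{}{x}\right)=i\left[H,\widehat{p}\stackrel{}{x}\right]=i\widehat{p}_j\left[\stackrel{}{\sigma }\stackrel{}{p},x_j\right]=i\widehat{p}_j\left(-i\sigma _j\right)=\widehat{p}\stackrel{}{\sigma },$$ and since $`(\widehat{p}\stackrel{}{\sigma })^2=1`$, the membrane indeed moves with the speed of light.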
We do note, of course, that in the description of a single neutrino, the Hamiltonian is not bounded from below, as one would require. In this very special model, there is a remedy to this, and it is precisely Dirac’s second quantization procedure. We consider a space with an infinite number of these membranes, running in all of the infinitely many possible directions $`\widehat{p}`$, with both signs of $`\widehat{p}\stackrel{}{\sigma }`$. In order to get the situation under control, we introduce a temporary cut-off: in each of the infinitely many possible directions $`\widehat{p}`$, we assume that the membranes sit in a discrete lattice of possible positions. The lattice length $`a`$ may be as small as we please. Furthermore, consider a box with length $`L`$, being as large as we please. The first-quantized neutrino then has a finite number of energy levels, between $`-\pi /a`$ and $`+\pi /a`$. The state we call ‘vacuum state’ has all negative energy levels filled and all positive energy levels empty. All excited states now have positive energy. Since the Dirac particles do not interact, their numbers are exactly conserved, and the collection of all observables (A.3) for all Dirac particles still corresponds to mutually commuting operators. In this very special model we thus succeed in producing a complete set of primordial observables, i.e., operators that commute with one another at all times, whereas the hamiltonian is bounded from below. We consider this to be an existence proof, but it would be more satisfying if we could have produced a less trivial model. Unfortunately, our representation of neutrinos as infinite, strictly flat membranes appears to be impossible to generalise so as to introduce mass terms and/or interactions. Also, the flat membranes appear to be irreconcilable with space-time curvature in a gravity theory. Quite likely, one has to introduce information loss. Suppose we may drop Lorentz invariance for the deterministic underlying theory (at a later stage, this could lead to a tiny, in principle detectable, violation of Lorentz invariance for the quantum system). We then may add as physical variables also transverse coordinates $`\stackrel{~}{x}`$, orthogonal to $`\widehat{p}`$. Particles are now described in terms of all three space coordinates $`\stackrel{}{x}`$, and a direction operator $`\widehat{p}`$ (reabsorbing $`\widehat{p}\stackrel{}{\sigma }`$ to indicate its sign). In the direction of $`\widehat{p}`$, the propagation is rigid. But in the orthogonal direction, the propagation is haphazard, such that information concerning the initial value of $`\stackrel{~}{x}`$ is lost. All states with the same $`\widehat{p}`$ and $`\widehat{p}\stackrel{}{x}`$, but with different $`\stackrel{~}{x}`$, will have to be put in the same equivalence class. Thus, it is the equivalence classes that form flat membranes, while the deterministic theory may be strictly local. References 1. M.B. Green, J.H. Schwarz and E. Witten, “Superstring Theory”, Cambridge Univ. Press. 2. P.K. Townsend, in Frontiers in Quantum Physics, Kuala Lumpur 1997, S.C. Lim et al., Eds., Springer 1998, p. 15. 3. E. Witten, “Anti de Sitter Space and Holography”, hep-th/9802150; J. Maldacena, “The Large $`N`$ Limit of Superconformal Field Theories and Supergravity”, hep-th/9711200; T. Banks et al., “Schwarzschild Black Holes from Matrix Theory”, hep-th/9709091; K. Skenderis, “Black Holes and Branes in String Theory”, SPIN-1998/17, hep-th/9901050. 4. G. ’t Hooft, “Dimensional Reduction in Quantum Gravity”.
In Salamfestschrift: a collection of talks, World Scientific Series in 20th Century Physics, vol. 4, Eds. A. Ali, J. Ellis and S. Randjbar-Daemi (World Scientific, 1993), THU-93/26, gr-qc/9310026; “Black Holes and the Dimensionality of Space-Time”, in Proceedings of the Symposium “The Oskar Klein Centenary”, 19–21 Sept. 1994, Stockholm, Sweden, Ed. U. Lindström, World Scientific 1995, p. 122; L. Susskind, J. Math. Phys. 36 (1995) 6377, hep-th/9409089. 5. A. Staruszkiewicz, Acta Phys. Polon. 24 (1963) 734; S. Giddings, J. Abbott and K. Kuchar, Gen. Rel. and Grav. 16 (1984) 751; S. Deser, R. Jackiw and G. ’t Hooft, Ann. Phys. 152 (1984) 220; J.R. Gott and M. Alpert, Gen. Rel. Grav. 16 (1984) 243; J.R. Gott, Phys. Rev. Lett. 66 (1991) 1126; S. Deser, R. Jackiw and G. ’t Hooft, Phys. Rev. Lett. 68 (1992) 267; S.M. Carroll, E. Farhi and A.H. Guth, Phys. Rev. Lett. 68 (1992) 263; G. ’t Hooft, Class. Quantum Grav. 9 (1992) 1335. 6. A. Achucarro and P.K. Townsend, Phys. Lett. B180 (1986) 89; E. Witten, Nucl. Phys. B311 (1988) 46; S. Carlip, Nucl. Phys. B324 (1989) 106, and in: “Physics, Geometry and Topology”, NATO ASI series B, Physics, Vol. 238, H.C. Lee ed., Plenum 1990, p. 541; J.E. Nelson and T. Regge, “Quantisation of 2+1 Gravity for Genus 2”, Torino preprint DFTT 54/93, gr-qc/9311029; G. ’t Hooft, Class. Quantum Grav. 10 (1993) 1023, ibid. 10 (1993) S79; Nucl. Phys. B30 (Proc. Suppl.) (1993) 200; S. Carlip, “Six Ways to Quantize (2+1)-Dimensional Gravity”, Davis preprint UCD-93-15, gr-qc/9305020; G. Grignani, “2+1-Dimensional Gravity as a Gauge Theory of the Poincaré Group”, Scuola Normale Superiore, Perugia, Thesis 1992–1993; G. ’t Hooft, Commun. Math. Phys. 117 (1988) 685; Nucl. Phys. B342 (1990) 471; Class. Quantum Grav. 13, 1023; S. Deser and R. Jackiw, Comm. Math. Phys. 118 (1988) 495. 7. G. ’t Hooft, J. Stat. Phys. 53 (1988) 323; Nucl. Phys. B342 (1990) 471; G. ’t Hooft, K. Isler and S. Kalitzin, Nucl. Phys. B386 (1992) 495. 8. G. ’t Hooft, “Quantummechanical Behaviour in a Deterministic Model”, Foundations of Physics Letters 10, no. 4, April 1997, quant-ph/9612018. 9. A. Einstein, B. Podolsky and N. Rosen, Phys. Rev. 47 (1935) 777. 10. S.W. Hawking, Commun. Math. Phys. 43 (1975) 199; J.B. Hartle and S.W. Hawking, Phys. Rev. D13 (1976) 2188. 11. See e.g. L.D. Landau and E.M. Lifshitz, Course of Theoretical Physics, Vol. 6, Fluid Mechanics, Pergamon Press, Oxford 1959. 12. J.S. Bell, Physics 1 (1964) 195. 13. A. Ekert and R. Jozsa, Rev. Mod. Phys. 68 (1996) 733. 14. G. ’t Hooft, Nucl. Phys. B256 (1985) 727. 15. J. Polchinski, Phys. Rev. Lett. 75 (1995) 4724, hep-th/9510017.
# A Comment on Nonsupersymmetric Fixed Points and Duality at large N hep-th/9903195, IASSNS-HEP-99/31. Micha Berkooz (berkooz@sns.ias.edu) and Anton Kapustin (kapustin@sns.ias.edu), School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08540, USA. We review some of the problems associated with deriving field theoretic results from nonsupersymmetric AdS, focusing on how to control the behavior of the field theory along the flat directions. We discuss an example in which the origin of the moduli space remains a stable vacuum at finite $`N`$, and argue that it corresponds to an interacting CFT in three dimensions. Associated to this fixed point is a statement of nonsupersymmetric duality. Because $`1/N`$ corrections may change the global picture of the RG flow, the statement of duality is much weaker than in the supersymmetric case. 1. Introduction The AdS/CFT correspondence is a powerful tool for studying the large $`N`$ limit of field theories. By now a significant number of matches has been made between the dynamics of gauge theories and the dynamics of supergravity in the corresponding backgrounds. For the most part this analysis has been carried out in a supersymmetric setting. An interesting question is whether one can use gravity to understand the dynamics of nonsupersymmetric conformal field theories at large N. To answer this question one is led to study string theory/M-theory backgrounds of the form $`AdS_p\times M_q`$, where $`M_q`$ is a compact manifold which breaks supersymmetry (either via orbifolding a supersymmetric manifold, or by other means). Another approach (related to the previous one) uses type 0 string theory. When discussing nonsupersymmetric theories one usually appeals to classical 11D supergravity (i.e., the leading term in the momentum expansion) or to classical string theory, both of which correspond to $`N=\infty `$. In trying to extend the discussion to large but finite N one generically runs into problems. Two such problems have been discussed in the literature: 1. If for $`N=\infty `$ there are fields whose masses are at the Breitenlohner-Freedman unitarity bound, then these masses might be pushed below the bound by $`1/N`$ corrections. 2. If there are massless fields (i.e. fields that correspond to marginal operators at $`N=\infty `$) which are invariant under all the symmetries, then $`1/N`$ corrections may shift their VEVs significantly, and there may not be a stable vacuum for finite $`N`$; or, if such a vacuum exists, it may be qualitatively different from the $`N=\infty `$ starting point. It was shown, however, that it is easy to construct models in which these problems do not arise. Another problem, which we will discuss in this paper, is the fate of the flat directions present at $`N=\infty `$. Many nonsupersymmetric gauge theories converge, in some formal sense at least, to a theory with sixteen supercharges as $`N\to \infty `$, so in this limit the scalar potential has flat directions. These flat directions are typically lifted by $`1/N`$ effects, as a result of which the fields are either driven away from the origin or attracted to the origin (or a combination of both in different directions). In the former case the vacuum at the origin is destabilized (in fact, the theory may not have any stable vacuum at all), while in the latter case the origin is at least perturbatively stable.
In the latter case there is generically a mass gap, explaining why it is so hard to construct nonsupersymmetric CFTs when scalars are present (there are, however, examples of nonsupersymmetric fixed points with fermions in the weak coupling regime). In this paper we will discuss a 2+1 dimensional example in which the flat directions are lifted in a way which drives the fields to the origin; nevertheless, the theory does not become massive and trivial there. Before proceeding it is worth mentioning some open problems. The main open problem is that it is not clear whether the expansion around $`N=\infty `$ is only formal, or whether it can be used to really approximate the physics at finite $`N`$. In backgrounds that correspond to weakly coupled string theory there is a genus expansion which is an expansion in $`1/N`$. If the contribution of each genus is finite then there is a valid $`1/N`$ expansion. However, models in the perturbative stringy regime, for example those based on D3-branes, run into problem 2 (the dilaton is always a dangerous massless field). In the strong coupling regime (M-theory or type IIB string theory near its self-dual points) it is not clear whether quantum corrections are small. More on this point will appear in forthcoming work. Another open problem is the issue of nonperturbative instabilities which describe tunneling in the bulk. Presumably these effects are exponentially small at large $`N`$. Not much is known about such instabilities, and we will not change this situation here. 2. The Example The example that we will focus on is that of M-theory on $`AdS_4\times 𝐒^7/\text{ZZ}_2`$ (the spectrum is related to that of $`AdS_4\times 𝐒^7`$, which has been computed and compared to field theory expectations in the literature). This background is obtained by probing different kinds of $`\mathrm{IR}^8/\text{ZZ}_2`$ orbifolds of M-theory with either M2-branes or anti-M2-branes. The two kinds of $`\mathrm{IR}^8/\text{ZZ}_2`$ orbifolds differ by the charge of the singularity. The first one, which we call the A-orbifold, has membrane charge $`-1/16`$, while the other one, which we will call the B-orbifold, has charge $`3/16`$. Both orbifolds preserve sixteen supercharges, the same supercharges as those preserved by an M2-brane parallel to the orbifold plane. Hence probing the orbifold singularities by M2-branes yields $`𝒩=8`$ field theories in three dimensions. Supersymmetry implies that when the charge of the orbifold singularity is positive (relative to that of the M2-brane) the long range gravitational field of the singularity is as if it had a positive mass; contrariwise, if the charge is negative, then the mass is negative (this, for example, can be deduced from the cancellation of forces between the M2-brane and the singularity). For both singularities the near-horizon geometry in the limit of a large number of probes $`N`$ is $`AdS_4\times \mathrm{IRP}^7`$. The only difference between the two backgrounds is the torsion class in $`H^4(\mathrm{IRP}^7,\text{ZZ})=\text{ZZ}_2`$ which specifies how a membrane propagating in this background is to be quantized \[11,12,13\]. The A-singularity corresponds to a trivial torsion class, while the B-singularity corresponds to a nontrivial one. In the large $`N`$ limit the curvature is small, and M-theory on $`AdS_4\times \mathrm{IRP}^7`$ reduces to supergravity on the same background. Since supergravity is insensitive to the torsion, the supergravity spectrum will be exactly the same for the two backgrounds.
In this limit, the difference in the torsion class becomes visible only if one considers solitonic objects (M2-branes and M5-branes) wrapping nontrivial cycles of $`AdS_4\times \mathrm{IRP}^7`$. Similarly we can probe the A and B singularities with anti-M2-branes. This yields models without any supersymmetry. The near horizon geometry in this case is the “skew-whiffed” $`AdS_4\times \mathrm{IRP}^7`$. The usual logic of the AdS/CFT correspondence leads to the conclusion that M-theory on a “skew-whiffed” $`AdS_4\times \mathrm{IRP}^7`$ describes a nonsupersymmetric CFT on the boundary. The backgrounds obtained from the A and B singularities differ only by a torsion class which does not affect the Kaluza-Klein spectrum. Both A and B singularities can be regarded as a strong-coupling limit of certain orientifold backgrounds in IIA string theory. An $`O2^{-}`$ plane lifts to an M-theory background of the form $`(\mathrm{IR}^7\times 𝐒^1)/\text{ZZ}_2`$ which has two orbifold singularities of type A. An $`O2^+`$ plane lifts to the same orbifold, except that one singularity is of type A, and the other one is of type B. Finally, an $`\stackrel{~}{O2}^+`$ plane (which is an $`O2^{-}`$ plane with a half-D2-brane stuck to it) lifts to a pair of B-singularities. These IIA backgrounds can be probed with (anti-)D2 branes, which lift to (anti-)M2-branes of M-theory. Thus the $`𝒩=8`$ CFTs described by M-theory on $`AdS_4\times \mathrm{IRP}^7`$ are related to $`𝒩=8`$ gauge theories on D2-branes, while the $`𝒩=0`$ CFTs described by M-theory on the “skew-whiffed” $`AdS_4\times \mathrm{IRP}^7`$ are related to the gauge theories on anti-D2-branes. The precise nature of this relation will be discussed in section 4. In this paper we will focus on the $`𝒩=0`$ case. Earlier work discusses some aspects of supergravity on the “skew-whiffed” $`AdS_4\times \mathrm{IRP}^7`$. It was shown there that the Kaluza-Klein spectrum has neither massless charged scalars, nor modes saturating the Breitenlohner-Freedman bound. As explained in the introduction, this implies that the “skew-whiffed” $`AdS_4\times \mathrm{IRP}^7`$ avoids some immediate problems of nonsupersymmetric compactifications. In the next section we will address another potential problem, associated with the presence of flat directions at infinite $`N`$. We will argue that for the B-singularity the potential generated along the flat directions at large but finite $`N`$ does not change the vacuum significantly. The model corresponding to the A-singularity is apparently destabilized by $`1/N`$ corrections. 3. Lifting of the flat direction We are therefore interested in discussing anti-M2-branes probing an $`A`$ or $`B`$ $`\mathrm{IR}^8/\text{ZZ}_2`$ singularity. Equivalently one may consider M2-branes probing the charge-conjugated singularities, which we will call $`\overline{A}`$ and $`\overline{B}`$. In this section we will use the latter viewpoint. At leading order in $`N`$ there are flat directions which correspond to moving the branes away from the singularity and away from each other (we are referring to the flat directions of the fixed point theory in the IR rather than to those of the UV theory which flows to it). This can be seen in several ways, but in general one expects that at $`N=\infty `$ the structure of the flat directions is the same as in the corresponding $`𝒩=8`$ theory.
To obtain some information about the potential along the flat directions one can do a long distance M-theory computation: one can place the branes at a distance $`r\gg l_p`$ from the singularity and determine, based on the charge and mass of the singularity, whether there is an attractive or repulsive force between the branes and the singularity. This computation has little to do with field theory, since the branes are in the asymptotically flat region. However, because this computation depends on the mass and charge of the singularity in the same way as the correct near horizon computation, it distinguishes correctly between attractive and repulsive potential. Using this approach one can also see that the potential is subleading in $`1/N`$. The leading term in the long distance computation ($`r\gg l_p`$) is nominally of order $`N^2`$ (coming from all pairwise interactions between the branes), but because this is the same as in the $`𝒩=8`$ theory it is $`N^2\times 0=0`$. On the other hand, the interaction between the singularity and the branes is of order $`N`$, because there is only one singularity. The computation that we would like to do is to check the stability of the AdS to fragmentation along the flat directions in the near horizon geometry. The idea is to separate the branes into several clusters and compute the potential as a function of separation. For simplicity we will focus on the case of a single cluster away from the singularity (i.e., two clusters which are the images of each other). 3.1. The approximate solution along the flat directions We will start with the supergravity solution representing two clusters of M2-branes in flat space and then orbifold this solution. The metric for several parallel D3-branes in flat space was written down previously, and it is straightforward to generalize the ansatz to M2-branes: $$ds^2=f^{-2/3}dx^2+f^{1/3}\left(dr^2+r^2d\mathrm{\Omega }^2\right)$$ $`\left(3.1\right)`$ $$G_{x^0x^1x^2r^i}\propto \partial _{r^i}f^{-1}\left(r\right),$$ where $`G`$ is the 4-form field strength and $`f`$ is a harmonic function of the 8-vector $`r`$. To obtain the situation with two clusters each containing $`N`$ M2-branes we set $$f\left(r\right)=\frac{Nl_p^6}{\left|r-a\right|^6}+\frac{Nl_p^6}{\left|r+a\right|^6},$$ where the 8-vector $`a`$ is the position of the cluster. From the field theory point of view it is convenient to do a rescaling $`u^i=r^i/l_p^{\frac{3}{2}}`$. Next we want to orbifold this background. Orbifolding introduces an $`\mathrm{IR}^8/\text{ZZ}_2`$ singularity at $`r=0`$. To facilitate the analysis of this background it is convenient to further rescale the coordinates so that the metric near the origin is the canonical flat metric on $`\mathrm{IR}^{11}`$: $$y^i=\left(\frac{2Nl_p^6}{a^6}\right)^{-\frac{1}{3}}x^i,z^i=\left(\frac{2Nl_p^6}{a^6}\right)^{\frac{1}{6}}r^i,$$ $`\left(3.2\right)`$ after which the metric and the 4-form are given by the same ansatz but with the following harmonic function $`\widehat{f}`$: $$\widehat{f}=\frac{1/2}{\left|n-\frac{z}{\left(2N\right)^{1/6}l_p}\right|^6}+\frac{1/2}{\left|n+\frac{z}{\left(2N\right)^{1/6}l_p}\right|^6},$$ where $`n`$ is a unit 8-vector in the direction of $`a`$. Since the metric near the origin is the canonical one, and for large $`N`$ all curvatures and field strengths are small there, it is easy to insert the fields of the $`\text{ZZ}_2`$ singularity at $`z=0`$. One can identify the following regions in the orbifolded background: 1.
$`z^2<l_p^2`$: inside this region the curvature and the field strength produced by the singularity are large. Our knowledge of this region is no better or worse than that of the $`\mathrm{IR}^8/\text{ZZ}_2`$ singularity in flat space. The fields due to the clusters of M2-branes (the curvature and the 4-form) are of order $`1/N^{\frac{1}{6}}`$ there. 2. The fields produced by the singularity and the fields produced by the branes are comparable when $$\frac{1}{z^7}\sim \frac{1}{N^{\frac{1}{6}}}$$ (with $`z`$ measured in units of $`l_p`$). At this point both are weak and can be treated using perturbation theory around flat space (locally). 3. At $`z\sim N^{\frac{1}{6}}n`$ (again in units of $`l_p`$) we approach the cluster of M2-branes, around which the space looks like $`AdS_4\times 𝐒^7`$. This describes an $`𝒩=8`$ IR fixed point to which our theory flows along this flat direction. In the region $`z>l_p`$, the fields produced by the singularity are small, and so are the fields of the original background. The gravity background is therefore under control, and furthermore, the corrections to the background due to the introduction of the singularity are small as well. In the following subsection we will extract the influence of this small correction on the potential along the flat directions. 3.2. The potential along the flat directions We would like to know whether, upon the introduction of the singularity, there is a potential which drives the center of the cluster to the origin or repels it. This potential is subleading in the $`1/N`$ expansion and can be easily computed if one neglects the back-reaction of the singularity on the rest of the geometry. We saw above that this approximation is valid for $`z>l_p`$. Within this approximation the computation is straightforward. If we were allowed to choose the mass ($`m`$) and charge ($`Q`$) of the singularity arbitrarily (the charge is measured relative to the charge of the M2-branes), then there would be a line in the $`Q`$–$`m`$ plane, $`Q=m`$ in appropriate units, on which supersymmetry is preserved. The A and B singularities correspond to two points on this line (A has negative charge, while B has positive charge). The points corresponding to the $`\overline{A}`$ and $`\overline{B}`$ singularities, which break supersymmetry, also have charges of opposite sign and lie on the line $`Q=-m`$. Clearly the sign of the potential will change when going from one side of the line $`Q=m`$ to the other. Hence one of the SUSY-breaking singularities will attract the two clusters of branes, and the other will repel them. In more detail, the computation goes as follows. When we take into account the singularity the action is $$\mathcal{L}=\mathcal{L}_0+m\int _{r=0}d^3x\sqrt{g_{ind}}+Q\int _{r=0}C^{\left(3\right)},$$ $`\left(3.3\right)`$ where $`\mathcal{L}_0`$ is the usual action of 11D supergravity and $`g_{ind}`$ is the determinant of the induced metric on the plane $`r=0`$. The fields in $`\mathcal{L}_0`$ are the same as in the supersymmetric case, except for a two-fold identification due to orbifolding. The terms localized at $`r=0`$ are due to the mass and charge of the singularity. To compute the leading contribution to the potential in the no-back-reaction approximation one has to insert the ansatz (3.1) for the two symmetrically separated clusters into this action. The terms that we are interested in are the kinetic terms for $`a^i(x^\mu )`$ (we allow $`a`$ to depend slowly on $`x^\mu `$) and the terms that encode the interaction of clusters with the singularity.
The latter are proportional to $`\int _{r=0}C^{\left(3\right)}`$ (the gravitational term gives an equal contribution, as can be seen by comparison with the supersymmetric case). This gives a term in the effective Lagrangian for $`a`$ of the form $$\frac{1}{N}\int d^3x\left(U^i\right)^6,$$ where $`U`$ is the field theory quantity with dimension $`1/2`$ ($`U^i=a^i/l_p^{3/2}`$). The kinetic term is also easy to evaluate. The functional dependence is determined by spontaneously broken scale invariance to be proportional to $$\int d^3x\left(\partial _\mu U^i\right)^2.$$ The coefficient in front of this term is of order $`N`$. This can be seen by rescaling the coordinates $`x`$ so that the entire metric in the new coordinates is proportional to $`N^{\frac{1}{3}}`$. In this setup it is easy to obtain the $`N`$-scaling of $`\mathcal{L}_0`$ and therefore the $`N`$-scaling of the kinetic term. The result of this computation is that for a singularity with negative charge ($`\overline{B}`$) there is an attractive potential along the flat directions, while for $`\overline{A}`$ the potential is repulsive. Furthermore, since the potential is suppressed by powers of $`N`$, it is small at large $`N`$, and the no-back-reaction approximation is self-consistent. 4. Nonsupersymmetric Duality 4.1. Weakness of nonsupersymmetric duality The statement that we are after is that of IR duality, i.e., we would like to exhibit two distinct (weakly coupled) theories in the UV which flow in the IR to the fixed point described above. However, the duality that we obtain here will be considerably weaker than the one obtained in cases with higher supersymmetry. Field theory considerations The reason that the duality is weaker is the following. Let us first consider the case $`N=\infty `$. In this case the theory is a projection of the $`𝒩=8`$ theory, in the sense that its dynamics is the same as in the latter, except that we restrict our attention to a subset of operators. The dynamics of the $`𝒩=8`$ theory is well understood, and it is known that at the origin of its moduli space it flows from a free UV fixed point to an interacting superconformal IR fixed point. Consider now the $`1/N`$ corrections to the RG flow. They are present everywhere along the RG trajectory. Such corrections, even though they are small at each point in the field theory parameter space, can change the global picture of the RG flow. Therefore they may change the statement that the theory flows from the gaussian fixed point in the UV to the interacting IR. Nevertheless, even with $`1/N`$ corrections taken into account, there exists an RG trajectory which ends at the IR fixed point and passes at a distance of order $`1/N`$ from the gaussian fixed point. Therefore, if one wishes to “land” at the IR fixed point, one needs to fix a cutoff and add, besides the relevant perturbation that already exists in the $`N=\infty `$ theory, other operators with fine-tuned coefficients suppressed by powers of $`1/N`$. In principle, at each order in the $`1/N`$ expansion one will have to tune the coefficients of all operators allowed by symmetries, including nonrenormalizable ones (of course, we do not need to tune this infinite number of coefficients independently, since there is an entire submanifold of trajectories which passes close to the gaussian UV and ends in the interacting IR). Note that at large $`N`$ we are still close to the free fixed point at the cutoff scale, but we do not start from it in the UV. Duality is thus a weaker concept, since we do not know precisely the Lagrangian at the cutoff.
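A deliberately crude two-coupling caricature of this tuning can be integrated numerically (our own toy model, not derived from the brane construction; the beta functions below are invented purely for illustration). At leading order a coupling $`g`$ flows from the gaussian point to the IR fixed point, while an $`O(1/N)`$ shift in the beta function of a second, relevant coupling $`\lambda `$ means that the trajectory reaching the IR fixed point starts a distance of order $`1/N`$ away from the gaussian point:

```python
from scipy.integrate import solve_ivp

N, c = 100.0, 1.0      # "large N" and an O(1) constant, both arbitrary here

def beta(t, y):
    g, lam = y
    return [g * (1.0 - g),   # leading order: g flows from 0 (UV) to 1 (IR)
            lam + c / N]     # relevant coupling; its fixed-point value is shifted to -c/N

for lam0 in (0.0, -c / N):   # untuned start vs. start tuned to O(1/N) accuracy
    sol = solve_ivp(beta, (0.0, 15.0), [1e-3, lam0], rtol=1e-10, atol=1e-12)
    print(f"lam(0) = {lam0:+.3f}:  g -> {sol.y[0, -1]:.4f},  lam -> {sol.y[1, -1]:+.3e}")
```

Starting exactly at the gaussian point ($`\lambda (0)=0`$) the trajectory runs away and misses the IR fixed point, while tuning $`\lambda (0)=-c/N`$ lands on it; this is the finite-$`N`$ fine-tuning described above.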
An example (not necessarily the specific theory we have discussed in the paper so far) of how small subleading $`1/N`$ effects may change the global structure of the flow, and of the need to fine tune at the UV, is shown in fig. 1. Figure 1: Global aspects of the flow. Black arrows are the leading N contribution. Dashed/white arrows are the subleading N correction. Line a is the modified flow from the UV fixed point. Line b is the fine tuned trajectory needed to hit the IR fixed point (we have neglected the fact that the IR fixed point moves a bit once $`1/N`$ corrections are included). M-theory considerations In the $`AdS/CFT`$ correspondence, the statement that for $`N=\infty `$ the RG flow is the same as in the $`𝒩=8`$ theory is mimicked by the fact that the orbifold of the entire $`𝒩=8`$ solution at all scales is still a solution of the classical equations of motion. Consider now $`1/N`$ corrections. These corrections are present at each value of $`U`$ (where $`U`$ is the additional coordinate in the $`AdS`$, which contains information about the RG flow). The zeroth order solution is no longer a solution, and we need to correct it. When correcting it we may either keep the boundary conditions at $`U=\infty `$ fixed or keep the behavior at $`U=0`$ fixed. In the first case we keep the UV of the theory fixed, but then the corrections at $`U=0`$ may be significant and the solution there may no longer approach $`AdS`$. Instead we would like to keep the $`AdS`$ near $`U=0`$, but we can do so only at the price of possibly changing the $`U=\infty `$ behavior. One may ask whether from the supergravity description one can argue that the field theory becomes a gaussian theory in the UV. It would seem that the answer is no. The reason is that in the supergravity solution all that one sees near the boundary of the space-time are large curvatures. Without independent means of computing at large curvature, all one can say is that this is consistent with the field theory becoming weakly coupled in the UV. One may perhaps also deduce the number of degrees of freedom from black hole entropy counting, or other dominant effects, but one cannot argue that one knows exactly the Lagrangian of this weakly coupled theory at some given cutoff. 4.2. An example of a nonsupersymmetric dual pair We need to exhibit two distinct theories which flow in the IR to the theory of anti-M2 branes near the B-singularity. For example, we may consider $`(\mathrm{IR}^7\times 𝐒^1)/\text{ZZ}_2`$ orbifolds of M-theory of types AB and BB and probe them with anti-M2-branes. At weak coupling (i.e. when the radius of $`𝐒^1`$ is small) the M-theory orbifold of type BB becomes an $`\stackrel{~}{O2}^+`$ plane in IIA, while the orbifold of type AB becomes an $`O2^+`$ plane. Anti-M2 branes become anti-D2 branes in this limit. Naively, one expects the theories of anti-D2 branes probing the $`\stackrel{~}{O2}^+`$ and $`O2^+`$ planes to be IR dual. As explained above, this is only literally true for $`N=\infty `$, and for finite $`N`$ one may need to add renormalizable and nonrenormalizable operators with fine-tuned coefficients in order to preserve duality. An analogous supersymmetric duality was suggested previously. The difference is that in the supersymmetric case the theories have a moduli space of vacua, and to see the duality one needs to go to a specific place in the moduli space.
We have argued above that in the nonsupersymmetric case the moduli space is lifted at subleading order in the $`1/N`$ expansion, so both theories have a unique vacuum and no tuning of the moduli is necessary. The theories on anti-D2 branes are of course gauge theories. They are closely related to the $`𝒩=8`$ theories on D2 branes probing the same backgrounds; in fact, the bosonic fields are identical. To obtain the spectrum of fermions, recall that the field theory on $`N`$ (anti-)D2 branes near an orientifold 2-plane is obtained by orientifolding the spectrum of the $`𝒩=8`$ $`U(2N)`$ theory. In the supersymmetric case the projection is identical for fermions and bosons, while in the nonsupersymmetric case the projection for the fermions has an extra minus sign compared to that for the bosons. It follows that the spectrum of the gauge theory of $`N`$ anti-D2 branes near an $`\stackrel{~}{O2}^+`$ (resp. $`O2^+`$) orientifold contains gauge bosons and seven real scalars in the adjoint of $`SO(2N+1)`$ (resp. $`Sp(2N)`$) and eight Majorana fermions in the symmetric tensor representation of $`SO(2N+1)`$ (resp. antisymmetric tensor representation of $`Sp(2N)`$). We do not know the precise Lagrangian, for reasons explained above. At leading order in $`1/N`$ the Lagrangian can be obtained by taking the corresponding $`𝒩=8`$ Lagrangian describing D2 branes and replacing fermions in the adjoint by fermions in the appropriate tensor representation of the gauge group. This Lagrangian is superrenormalizable. We expect that all terms allowed by symmetries, including nonrenormalizable ones, would have to be included at next-to-leading order if one wants to flow to the CFT described by the “skew-whiffed” $`AdS_4\times \mathrm{IRP}^7`$. Acknowledgments We would like to thank O. Aharony, S. Kachru, E. Silverstein, and M. Strassler for useful discussions. The work of MB is supported by NSF grant PHY-9513835. The work of AK is supported by DOE grant DE-FG02-90ER40542. References 1. J. Maldacena, “The Large N Limit of Superconformal Field Theories and Supergravity”, hep-th/9711200, Adv. Theor. Math. Phys. 2, 231, 1998. 2. E. Witten, “Anti-de-Sitter Space and Holography”, hep-th/9802150, Adv. Theor. Math. Phys. 2, 253, 1998; S.S. Gubser, I.R. Klebanov and A.M. Polyakov, “Gauge Theory Correlators from Noncritical String Theory”, hep-th/9802109, Phys. Lett. B428, 105, 1998. 3. S. Kachru and E. Silverstein, “4-D Conformal Theories and Strings on Orbifolds”, hep-th/9802183, Phys. Rev. Lett. 80, 4855, 1998. 4. M. Berkooz and S.-J. Rey, “Nonsupersymmetric Stable Vacua of M-theory”, hep-th/9807200, JHEP 9901:014, 1999. 5. I.R. Klebanov and A.A. Tseytlin, “D-Branes and Dual Gauge Theories in Type 0 Strings”, hep-th/9811035; I.R. Klebanov and A.A. Tseytlin, “Asymptotic Freedom and Infrared Behavior in the Type 0 String Approach to Gauge Theory”, hep-th/9812089; I.R. Klebanov and A.A. Tseytlin, “A Nonsupersymmetric Large N CFT from Type 0 String Theory”, hep-th/9901101; A.A. Tseytlin and K. Zarembo, “Effective Potential in Non-Supersymmetric $`SU(N)\times SU(N)`$ Gauge Theory and Interactions of Type 0 D3-Branes”, hep-th/9902095. 6. N. Nekrasov and S.L. Shatashvili, “On Nonsupersymmetric CFT in Four Dimensions”, hep-th/9902110. 7. M. Bershadsky, Z. Kakushadze and C. Vafa, “String Expansion as Large N Expansion of Gauge Theory”, hep-th/9803076, Nucl. Phys. B523, 59, 1998; Z. Kakushadze, “Gauge Theories from Orientifolds and Large N Limit”, hep-th/9803214, Nucl. Phys. B529, 157, 1998; M. Bershadsky and A.
Johansen, “Large N Limit of Orbifold Field Theories”, hep-th/9803249, Nucl. Phys. B536, 141, 1998. 8. T. Banks and A. Zaks, “On the Phase Structure of Vector-Like Gauge Theories with Massless Fermions”, Nucl. Phys. B196, 189, 1982. 9. J. Maldacena, J. Michelson and A. Strominger, “Anti-de-Sitter Fragmentation”, hep-th/9812073. 10. M. Berkooz and A. Kapustin, work in progress. 11. E. Witten, “Baryons and Branes in AdS”, hep-th/9804001, JHEP 9807:006, 1998. 12. S. Sethi, “A Relation between N=8 Gauge Theories in Three Dimensions”, hep-th/9809162, JHEP 9811:003, 1998. 13. M. Berkooz and A. Kapustin, “New IR Dualities in Supersymmetric Gauge Theories”, hep-th/9810257, to be published in JHEP. 14. M.J. Duff, B.E.W. Nilsson and C.N. Pope, Phys. Rep. 130, 1, 1986; M.J. Duff, B.E.W. Nilsson and C.N. Pope, “Spontaneous Supersymmetry Breaking by the Squashed Seven-Sphere”, Phys. Rev. Lett. 50, 2043, 1983, Erratum 51, 846, 1983. 15. N. Itzhaki, J. Maldacena, J. Sonnenschein and S. Yankielowicz, “Supergravity and the Large N Limit of Theories with Sixteen Supercharges”, hep-th/9802042, Phys. Rev. D58, 046004, 1998. 16. A. Fayyazuddin and M. Spalinski, “Large N Superconformal Gauge Theories and Supergravity Orientifolds”, hep-th/9805096, Nucl. Phys. B535, 219, 1998; O. Aharony, A. Fayyazuddin and J. Maldacena, “The Large N Limit of N=2,1 Field Theories from Threebranes in F-Theory”, hep-th/9806159, JHEP 9807:013, 1998; A. Kehagias, “New Type IIB Vacua and Their F-Theory Interpretation”, hep-th/9805131, Phys. Lett. B435, 337, 1998. 17. N. Seiberg, “Notes on Theories with Sixteen Supercharges”, hep-th/9705117, Nucl. Phys. Proc. Suppl. 67, 158–171, 1998. 18. B. Biran, A. Casher, F. Englert, M. Rooman and P. Spindel, “The Fluctuating Seven Sphere in Eleven Dimensional Supergravity”, Phys. Lett. 134B, 179, 1984; L. Castellani, R. D’Auria, P. Fre, K. Pilch and P. van Nieuwenhuizen, “The Bosonic Mass Formula for Freund-Rubin Solutions of d=11 Supergravity on General Coset Manifolds”, Class. Quant. Grav. 1, 229, 1984. 19. O. Aharony, Y. Oz and Z. Yin, “M-Theory on $`AdS_p\times S^{11-p}`$ and Superconformal Field Theories”, hep-th/9803051, Phys. Lett. B430, 87, 1998.
# Particle Identification with BELLE ## I Introduction The new KEK $`B`$ asymmetric $`e^+e^{-}`$ collider is scheduled to be fully operational in spring 1999. The BELLE experiment at KEK $`B`$ is in a preparatory stage of data taking and aims either to confirm or to refute the present hypothesis of the Standard Model of CP violation, in terms of phases in the Cabibbo-Kobayashi-Maskawa (CKM) matrix, through a variety of CP asymmetries in neutral and charged $`B`$ decays. One of the most important tasks of the BELLE detector is the identification of charged hadrons, which is relevant for the reconstruction of many beauty and charm decay channels. This facilitates (1) $`B`$ flavor tagging, which relies on the correlation between the charge of the kaon and the flavor of the decaying $`B^0`$ from which it originated, and (2) identification of exclusive final states such as $`B^0\to \pi ^+\pi ^{-}`$, which will provide a measurement of the angle $`\alpha `$ of the unitarity triangle. Since the relative abundance of pions to kaons in $`B`$ decays is approximately 8:1, $`B`$ flavor tagging requires good kaon identification with minimal pion contamination. Also, the $`b\to u`$ type suppressed mode requires good separation from the penguin type decay such as $`B^0\to K^+\pi ^{-}`$, which is of a similar magnitude. Physics requirements divide the momentum coverage of charged hadron identification into two approximately nonoverlapping regions. Flavor tagging of the opposite $`B^0`$ through the detection of charged kaons produced in the $`b\to c\to s`$ cascade requires kaon identification in the low momentum region, i.e. between 0.2 GeV/c and 1.5 GeV/c (Fig 1). Due to the boost of the $`B\overline{B}`$ system, the $`\pi `$ momenta range from about 1.5 GeV/c to 4.0 GeV/c for two body decays such as $`B^0\to \pi ^+\pi ^{-}`$ (Fig 2). In addition to the above requirements, the particle identification (PID) system should have a minimum of inactive material in front of the CsI(Tl) crystal calorimeter to preserve the good energy resolution and detection efficiency for soft photons. Sufficient signal-to-noise sensitivity over the full angular and momentum range would be an added feature, and the system must be able to operate efficiently in a 1.5 T magnetic field. Considering the above requirements, a hybrid system consisting of an array of scintillator Time of Flight (ToF) counters and an array of silica Aerogel Cherenkov Counters (ACC) has been chosen as the BELLE particle identification device, where the ToF counters cover the momentum region below 1.2 GeV/c and the ACC provides identification at higher momenta. This approach has the advantage of simplicity. ## II The Time of Flight Counters The subsystem consists of 128 Bicron BC408 scintillator counters and 64 BC412 Trigger Scintillation (TSC) counters, with fine-mesh photomultiplier tubes (FMPMT) to read out the signals. By using the FMPMTs one eliminates the need for light guides, which substantially improves the time resolution. These modules are individually mounted to the inner wall of the CsI container at a radius of 117.5 cm from the beam axis. The angular coverage of the ToF is $`33.7^{}<\theta <120.8^{}`$. They are used to start a clock and stop counting at a precise time (with less than 100 psec time resolution) after a beam crossing takes place, thereby allowing the determination of the time it takes a particle to travel from the interaction point to the ToF layer.
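As a rough cross-check of this coverage, here is a back-of-envelope sketch (our own addition; the 1.2 m flight path is taken as the approximate ToF radius quoted above) comparing the π/K flight-time difference with the quoted timing resolution:

```python
import math

# Back-of-envelope pi/K time-of-flight separation (illustrative, not from
# the paper): t = (L/c) / beta, with beta = p / sqrt(p^2 + m^2).
L = 1.2                      # assumed flight path in m (~ToF radius)
p = 1.2                      # momentum in GeV/c
m_pi, m_K = 0.1396, 0.4937   # masses in GeV/c^2
c = 0.299792458              # speed of light in m/ns

def tof_ns(m):
    return (L / c) * math.sqrt(1.0 + (m / p) ** 2)

dt_ps = (tof_ns(m_K) - tof_ns(m_pi)) * 1e3
print(f"Delta t = {dt_ps:.0f} ps, i.e. ~{dt_ps / 100:.1f} sigma at 100 ps")
```

The result, roughly 300 ps (about 3σ at 100 ps resolution) at 1.2 GeV/c, is consistent with the ToF subsystem taking the momentum region below about 1.2 GeV/c, as stated above.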
This measured flight time, together with the knowledge of the particle’s momentum from the Central Drift Chamber (CDC), allows an estimate of the particle’s mass and thus its identity. In addition to the $`B`$-flavor tagging capability, the ToF provides an event timing signal used by the trigger to provide a gate for the readout of other sub-detectors such as the Electromagnetic Calorimeter (ECL) and the CDC. Fig 3 shows the schematic diagram for the (a) fast trigger and (b) ToF readout. The FMPMT signals are read out into fast leading edge discriminators. A high level threshold discriminator is used to gate the low level timing signal. The signals are also read into MQT300A charge-to-time conversion chips. The BELLE standard readout boards, LeCroy 1877s multi-hit TDCs, are used to read out the signals from the MQT300A chip. These TDCs have a least significant bit (LSB) of 500 ps. In collaboration with LeCroy, “Time Stretcher” modules have been developed. The time stretcher expands a 25 ps LSB into a 500 ps LSB. By reading the timing information into the time stretcher and 1877s combination and performing the time walk correction, we have achieved 25 ps resolution in the readout electronics, which gives us a 100 ps overall timing resolution. The forward and backward FMPMT timing signals of the ToF modules are mean-timed. The TSC timing signal is used to gate this mean time signal. The first mean time in the event is used as an on-line event timing signal for the CDC and for fast reconstruction on the on-line farm. Xilinx pipelines are also used to calculate the event multiplicity and event shape (in $`\varphi `$) for background reduction in the trigger. ### A Results from Beam Tests Full size prototype ToF counters (BC408 scintillators with attenuation length about 2.5 m) were tested using a $`\pi `$ beam by placing the counter on a movable stage that could be rotated around a pivot point. Fig 4 shows the time resolution as a function of beam position z. A time walk correction has been applied over all beam positions, and the time jitter of the start counter ($`\sim `$ 35 ps) was subtracted quadratically. An intrinsic time resolution of 85 ps is obtained with a discriminator threshold set at 100 mV. Fig 5 shows the $`\pi ^+`$/$`p`$ separation for a 2 GeV/c unseparated beam. The observed 6$`\sigma `$ separation between $`\pi ^+`$ and $`p`$ corresponds to what could be expected for $`\pi `$/$`K`$ separation in a 1 GeV/c beam. The separation is improved near the FMPMT owing to the longer path length and better timing resolution. ## III The Aerogel Cherenkov Counters The ACC sub-system consists of a 960-element array (16 elements in $`z`$ and 60 elements in $`\varphi `$) in the barrel and a 268-element array in the forward endcap. The barrel ACC (BACC) system is located between the CDC and the CsI, starting at an inner radius of 88.5 cm, with a z coverage of -85 $`<z<`$ 162 cm. The forward endcap ACC (EACC) is located between the forward endcap CsI and the CDC endplate, occupying the region bounded by 42 $`<r<`$ 114 cm and 116 $`<z<`$ 194 cm (Fig 7). Fig 6 shows the schematic of a single aerogel block in the array. Each aerogel block is made up of 5 layers of silica aerogel slabs housed in lightweight, light-tight aluminum boxes. Since aerogel is a Rayleigh scatterer, mean photon path lengths in it are larger than in any non-scattering medium. Therefore absorption in the aerogel and on the container walls is minimized by covering the walls with a highly efficient white diffuse reflector (Goretex Teflon).
Since the detector sits in a 1.5 T magnetic field, FMPMTs are used to detect the Cherenkov radiation. With a proper choice of refractive index, charged pions emit Cherenkov light in the aerogel. The refractive indices for the BACC vary with $`\theta `$ (n = 1.01, 1.013, 1.015, 1.020, 1.028) in a way that takes into account the general softening of the hadron momentum spectrum with increasing lab polar angle. All the EACC counters use n = 1.03 aerogel, appropriate for flavor tagging in the momentum region between 0.8 and 2.5 GeV/c. Table 1 summarizes the $`\pi `$, $`K`$ and $`p`$ thresholds for the different refractive indices used in the design. Above about 3.5 GeV/c, $`K`$ also produces Cherenkov light, so separation of $`K`$ and $`\pi `$ becomes difficult in the forward direction when the particle momentum exceeds 3.5 GeV/c. One can partially get around this problem by using the pulse height information of the Cherenkov light, since $`\pi `$ mesons tend to have larger pulse heights. Besides having low refractive indices, the aerogels are hydrophobic, which ensures the long term stability of the detector. In a separate test, aerogels were found to be radiation hard up to a 10 MRad equivalent dose. Since the gain of an FMPMT drops sharply in high magnetic fields, one needs further amplification of the Cherenkov signal. Depending on the threshold setting, the PMT signal from either $`\pi `$’s or $`K`$’s is amplified about 10 times before it goes to the MQT300 chips. The output from the chips is fed to the LeCroy 1877s TDCs, whose leading edge gives the timing of the pulse while the width is proportional to the pulse amplitude. ### A Test Beam Results Fig 8 demonstrates the performance of an ACC prototype achieved during test beam operation in a 1.5 T magnetic field. A pulse height distribution separating two 3.5 GeV/c charged particles, pions and protons, with n = 1.05 aerogel is shown in Fig 8(a). More than 4 $`\sigma `$ separation with efficiency better than 98 % can be achieved with less than 2 % background contamination, which comes mainly from proton-induced knock-on electrons produced in the 1 mm thick aluminum box and in the aerogel material itself. It should be noted that in practice, unlike in the beam test, the fake rate is dominated by hits on the glass windows of the FMPMTs and showering in the FMPMTs. Fig 8(b) shows the efficiency and background contamination as a function of the threshold on the pulse height. The average number of photoelectrons obtained for 3.5 GeV/c pions incident at the center of the counter was found to be 20.3 when viewed by two 2.5 inch FMPMTs. Considering the above results and the momentum dependence of the Cherenkov light yield, we expect that more than 3 $`\sigma `$ $`\pi /K`$ separation is possible in the momentum region of 1.2 to 3.5 GeV/c for the BACC and 0.8 to 2.2 GeV/c for the EACC. ## IV Simulation Results To demonstrate that the designed PID system fulfills the various physics requirements, a number of simulation studies (both parametrized and detector based) have been done. Figure 8 shows an example of good separation between signal events $`B^+\to \overline{D^0}K^+`$, when (a) $`\overline{D^0}\to K^+\pi ^{-}`$ and (b) $`\overline{D^0}\to K^+\pi ^{-}\pi ^+\pi ^{-}`$, and backgrounds coming from the misidentification of kaons, after the data are normalized to the same integrated luminosity. Similar simulation studies have been done for other $`B`$ decay modes that contain $`\pi `$’s and $`K`$’s in the decay products, and the results are found satisfactory.
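The threshold pattern behind these design choices follows from the standard Cherenkov condition $`\beta n>1`$; the small script below (our own addition) tabulates $`p_{th}=m/\sqrt{n^2-1}`$ for the indices listed above, cf. Table 1:

```python
import math

# Cherenkov threshold momenta p_th = m / sqrt(n^2 - 1) for the aerogel
# indices quoted in the text (standard kinematics; cf. Table 1).
masses = {"pi": 0.1396, "K": 0.4937, "p": 0.9383}   # GeV/c^2
for n in (1.01, 1.013, 1.015, 1.020, 1.028, 1.030):
    row = "  ".join(f"{name}: {m / math.sqrt(n**2 - 1):5.2f}"
                    for name, m in masses.items())
    print(f"n = {n:.3f}  ->  {row}  (GeV/c)")
```

For n = 1.03 the pion threshold is about 0.57 GeV/c while the kaon threshold is about 2.0 GeV/c, matching the quoted 0.8–2.5 GeV/c tagging window; for the lowest barrel index n = 1.01 the kaon threshold is about 3.5 GeV/c, which is why K/π separation degrades in the forward direction above that momentum.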
## V Conclusion A PID system based on a hybrid of ToF and silica aerogel counters is a simple, robust and powerful device that provides excellent particle identification over the entire solid angle and momentum range. The detector has been constructed at KEK and is being calibrated and tuned with cosmic ray events in the roll-out position before data taking with beam starts in spring 1999.
# The Point Spectrum of the Dirac Operator on Noncompact Symmetric Spaces ## 0. Introduction We investigate the existence of point spectrum of the Dirac operator $`D`$ acting on spinors over a Riemannian symmetric space $`M=G/K`$ of noncompact type. Following Seifarth’s approach in \[S\], we look at those discrete series representations of $`G`$ that appear in $`L^2(𝒮)`$, where $`𝒮`$ is the spinor bundle over $`M`$. We find that the existence of point spectrum of $`D`$ is equivalent to a regularity condition for the half sum $`\rho _𝔨`$ of positive roots of $`K`$, which in turn is equivalent to the nonvanishing of the $`\widehat{A}`$-genus of the compact dual $`M^{}`$ of $`M`$. Using the classification of compact symmetric spaces, we finally determine all noncompact symmetric spaces on which the Dirac operator $`D`$ has point spectrum. We summarize our results: ###### Theorem 0.1 Let $`M`$ be a Riemannian symmetric space of noncompact type, and let $`D`$ be the Dirac operator acting on spinors over $`M`$. Then the following statements are equivalent: (1) the Dirac operator $`D`$ has nonempty point spectrum; (2) the half sum $`\rho _𝔨`$ of positive roots of $`K`$ is $`𝔤`$-regular; (3) the $`\widehat{A}`$-genus of the compact dual $`M^{}`$ of $`M`$ does not vanish. Our present note is motivated by the work of several authors on the spectra of Dirac operators on noncompact Riemannian symmetric spaces. Using the Plancherel theorem, Bunke computed the whole spectrum of the untwisted Dirac operator $`D`$ on the real hyperbolic spaces in \[Bu\] (note the incorrect statement concerning the eigenvalue $`0`$). Seifarth showed the nonexistence of point spectrum on the real and quaternion hyperbolic spaces in \[S\] (the treatment of the complex hyperbolic space is incomplete). Another computation of the spectrum of $`D`$ on $`H^n`$, by Camporesi and Higuchi, uses polar coordinates and separation of variables (\[CH\]). Using a similar approach, Baier proved in \[Ba\] that the Dirac operator on $`H^n`$ has no eigenvalue $`\lambda `$ with $`\left|\lambda \right|\ge \frac{n-1}{4}`$. Let us also mention the results of Galina and Vargas on the eigenvalues of twisted Dirac operators: In \[GV\], they compute the spectrum of Dirac operators on the real and complex hyperbolic spaces, twisted with a homogeneous vector bundle. They consider only the case where the inducing $`K`$-representation has a sufficiently nonsingular highest weight. The rest of this paper is organized as follows: In chapter 1, we recall the relation between point spectrum of homogeneous selfadjoint elliptic operators on $`M=G/K`$ and discrete series representations of $`G`$. In chapter 2, we show that the existence of point spectrum of $`D`$ on $`M`$ is equivalent to the nonvanishing of the $`\widehat{A}`$-genus of the compact dual of $`M`$. Finally, in chapter 3, we classify the compact symmetric spaces $`M^{}`$ with $`\widehat{A}(M^{})[M^{}]\ne 0`$. This work was written while the second named author enjoyed the hospitality and support of the IHES (Bures-sur-Yvette). The first named author would like to thank the Université de Paris-Sud (Orsay) for its hospitality. We are grateful to C. Bär, J.-M. Bismut and W. Müller for helpful discussions. We wish to thank M. Olbrich for carefully reading the manuscript, pointing out a few inaccuracies, and suggesting an alternative proof of Corollary 2.12. ## 1. The Point Spectrum and the Discrete Series Let $`M=G/K`$ be a Riemannian symmetric space of noncompact type. Here, $`G`$ is a noncompact connected semisimple Lie group, and $`K`$ is a maximal compact subgroup. We fix a $`G`$-invariant metric on $`M`$. Then $`M`$ is a Hadamard manifold, i.e.
the Riemannian exponential map $`\mathrm{exp}:T_pM\to M`$ is a diffeomorphism at each point $`p`$ of $`M`$. In particular, $`M`$ is contractible, and thus possesses a unique spin structure.

Let $`𝔤=𝔨\oplus 𝔭`$ be the Cartan decomposition of $`𝔤`$, where $`𝔨`$ is the Lie algebra of $`K`$. A homogeneous spin structure can be described by a lift $`\stackrel{~}{\alpha }`$ of the adjoint representation $`\alpha :K\to \mathrm{SO}(𝔭)`$ to $`\mathrm{Spin}(𝔭)`$. We can assume the existence of such a lift (if necessary, we replace $`G`$ and $`K`$ by suitable double covers). The (complex) spin representation $`(\rho ,S)`$ of $`\mathrm{Spin}(𝔭)`$ gives rise to a $`K`$-representation $`(\sigma ,S)`$, with $`\sigma :=\rho \circ \stackrel{~}{\alpha }`$. The spinor bundle is then isomorphic to the homogeneous vector bundle $$𝒮:=G\times _\sigma S$$ $`1.1`$ induced by $`\sigma `$. The Levi-Civita connection on $`M`$ induces a connection on $`𝒮`$. Let $`\mathrm{\Gamma }_c(𝒮)`$ be the space of compactly supported smooth sections of $`𝒮`$, and let $`L^2(𝒮)`$ be its Hilbert space completion. The Dirac operator acts on $`\mathrm{\Gamma }_c(𝒮)`$ as the composition of covariant derivative and Clifford multiplication. Since $`M=G/K`$ is a complete manifold, the Dirac operator is essentially selfadjoint (cf. \[W\]). Hence, its minimal and maximal closed extension coincide. Let $`D`$ be the unique selfadjoint extension to a closed operator. It commutes with the natural action of $`G`$ on $`L^2(𝒮)`$. More generally, one can consider Dirac operators on $`L^2(𝒮\otimes 𝒲)`$, where $`𝒲`$ is a homogeneous Hermitian vector bundle over $`M`$ which is equipped with an equivariant unitary connection.

Because $`D`$ is selfadjoint, its spectrum consists only of point spectrum and continuous spectrum. Moreover, it is completely contained in $`ℝ`$. The point spectrum $`\mathrm{Spec}_p(D)`$, i.e. the set of eigenvalues, is defined as $$\mathrm{Spec}_p(D):=\left\{\lambda \in ℝ\mid \mathrm{ker}(D-\lambda )\ne \{0\}\right\}.$$ If $`\lambda `$ is an eigenvalue of $`D`$, the dimension of the eigenspace $`\mathrm{ker}(D-\lambda )`$ is called the multiplicity of $`\lambda `$. Clearly, $`G`$ acts on the eigenspaces of $`D`$. It turns out that the eigenspaces are direct sums of irreducible $`G`$-representations belonging to the discrete series:

###### Definition 1.2.

An irreducible representation $`(\pi ,H)`$ of $`G`$ is called a discrete series representation iff the matrix coefficients $`g\mapsto \langle \pi (g)v,w\rangle `$ for all $`v`$, $`w\in H`$ are square integrable on $`G`$ with respect to the Haar measure.

Let $`\widehat{G}_d`$ be the set of equivalence classes of discrete series representations. The main tool of our investigation of $`\mathrm{Spec}_p(D)`$ is the following

###### Theorem 1.3. (cf. \[AS\], \[CM\])

Let $`D`$ be a homogeneous selfadjoint elliptic differential operator on $`ℰ:=G\times _\epsilon E`$ for some $`K`$-representation $`(\epsilon ,E)`$. Then the direct sum of all eigenspaces of $`D`$ is isomorphic to $$\underset{\pi \in \widehat{G}_d}{\bigoplus }\pi \otimes \mathrm{Hom}_K(\pi |_K,\epsilon ).$$ In particular, a discrete series representation $`\pi `$ of $`G`$ is isomorphic to a subrepresentation of $`L^2(ℰ)`$ iff $`\pi |_K`$ has an irreducible $`K`$-subrepresentation in common with $`\epsilon `$. In this case, we say that $`\pi \in \widehat{G}_d`$ contributes to $`\mathrm{Spec}_p(D)`$.
###### Demonstration Proof of Theorem 1.3

Since $`D`$ is a $`G`$-invariant elliptic differential operator, we can apply a theorem of Connes and Moscovici (\[CM\], Theorem 6.1). It follows that each eigenspace of $`D`$ is isomorphic to a finite sum of discrete series representations of $`G`$. On the other hand, by the Plancherel Theorem and Frobenius reciprocity (cf. \[AS\], chapter 2), we have $$\mathrm{Hom}_G(\pi ,L^2(ℰ))\cong \mathrm{Hom}_K(\pi |_K,\epsilon )$$ for each discrete series representation $`\pi `$. Moreover, $`D`$ is $`G`$-invariant and $`\mathrm{Hom}_K(\pi |_K,\epsilon )`$ is finite dimensional by results of Harish-Chandra (Theorem 8.1 in \[K\]). Hence, it is easy to check that $$\pi \otimes \mathrm{Hom}_G(\pi ,L^2(ℰ))\subset L^2(ℰ)$$ decomposes as a finite sum of $`D`$-eigenspaces. ∎

Each eigenvalue has infinite multiplicity, since all nontrivial unitary representations of a noncompact connected semisimple Lie group are infinite dimensional. Moreover, if $`D`$ has nonempty point spectrum then $`G`$ has discrete series representations. Due to a theorem of Harish-Chandra (cf. \[K\], Theorem 12.20, \[AS\], Proposition 6.11), this is the case iff $`\mathrm{rk}(G)=\mathrm{rk}(K)`$. Hence, we have the following

###### Remark 1.4.

On a noncompact symmetric space $`G/K`$ with $`\mathrm{rk}(G)>\mathrm{rk}(K)`$, the point spectrum of the Dirac operator $`D`$ is empty.

## 2. Minimal $`K`$-Types and Point Spectrum

In this chapter, we recall a few facts from the theory of discrete series representations of $`G`$. We will show that at most one irreducible subrepresentation of $`\sigma `$ can occur as a $`K`$-type of a discrete series representation of $`G`$. This happens iff the half sum $`\rho _𝔨`$ of positive roots of $`K`$ is $`𝔤`$-regular. Using this, we prove the equivalence of statements (1) – (3) of Theorem 0.1. We remark that our arguments in this chapter are also valid for nonirreducible symmetric spaces.

### a) Discrete Series Representations and their $`K`$-Types

By Remark 1.4, we may and will assume from now on that $`\mathrm{rk}(G)=\mathrm{rk}(K)`$. Then we fix a common maximal torus $`H\subset K\subset G`$ with Lie algebra $`𝔥`$ and weight lattice $$\mathrm{\Gamma }:=\left\{\gamma \in i𝔥^{*}\mid \gamma (X)\in 2\pi iℤ\text{ for all }X\in 𝔥\text{ with }e^X=e\right\}.$$ Let $`\mathrm{\Delta }_𝔤=\mathrm{\Delta }_𝔨\cup \mathrm{\Delta }_𝔭`$ be the root system of $`G`$ with respect to $`𝔥`$, decomposed into the root system of $`K`$ and the set of noncompact roots. Choose systems of positive roots $`\mathrm{\Delta }_𝔤^+\supset \mathrm{\Delta }_𝔨^+`$, and let $`P_𝔤\subset P_𝔨\subset i𝔥^{*}`$ be the Weyl chambers associated to $`\mathrm{\Delta }_𝔤^+`$ and $`\mathrm{\Delta }_𝔨^+`$. Let $`W_𝔤`$, $`W_𝔨`$ be the Weyl groups of $`G`$ and $`K`$, and set $$W^1:=\{w\in W_𝔤\mid w(P_𝔤)\subset P_𝔨\}.$$ Let $`\rho _𝔤`$ and $`\rho _𝔨`$ be the half sums of positive roots of $`G`$ and $`K`$. We fix an $`\mathrm{Ad}_K^{*}`$-invariant scalar product on $`𝔨^{*}`$. We call a weight $`\lambda \in i𝔥^{*}`$ $`𝔤`$-regular if $`\langle \lambda ,\gamma \rangle \ne 0`$ for all $`\gamma \in \mathrm{\Delta }_𝔤`$, and $`𝔤`$-singular otherwise. If $`\langle \lambda ,\gamma \rangle \ne 0`$ holds only for $`\gamma \in \mathrm{\Delta }_𝔨`$, then $`\lambda `$ is called $`𝔨`$-regular. Clearly, $`𝔤`$-regularity implies $`𝔨`$-regularity. An element $`\kappa \in i𝔥^{*}`$ is called $`𝔨`$-algebraically integral iff $$2\frac{\langle \alpha ,\kappa \rangle }{\langle \alpha ,\alpha \rangle }\in ℤ$$ for all $`\alpha \in \mathrm{\Delta }_𝔨^+`$. Note that all weights of $`K`$, i.e. all elements of $`\mathrm{\Gamma }`$, are automatically $`𝔨`$-algebraically integral.
Note also that $`\rho _𝔨`$ and $`\rho _𝔤`$ are $`𝔨`$-algebraically integral (for $`\rho _𝔨`$ this is well known, for $`\rho _𝔤`$ it follows because $`\rho _𝔤`$ is $`𝔤`$-algebraically integral). Furthermore, $`\rho _𝔨`$ uniquely minimizes $`\left|\kappa \right|`$ among all $`𝔨`$-algebraically integral $`𝔨`$-regular elements $`\kappa `$ of $`P_𝔨`$ (for semisimple $`K`$, this is well known, in the general case it follows because the center of $`K`$ is orthogonal to its semisimple part).

Let us now turn to some facts about the discrete series of $`G`$, in particular about the possible $`K`$-types.

###### Definition 2.1.

Let $`\pi \in \widehat{G}_d`$ be a discrete series representation of $`G`$, and let $`\phi _\kappa `$ be an irreducible representation of $`K`$ with highest weight $`\kappa \in \mathrm{\Gamma }\cap P_𝔨`$. Then $`\kappa `$ is called a $`K`$-type of $`\pi `$, if $`\pi |_K`$ contains an irreducible subrepresentation isomorphic to $`\phi _\kappa `$. The dimension of $`\mathrm{Hom}_K(\pi ,\phi _\kappa )`$ is called the multiplicity of $`\kappa `$. If $`\kappa `$ minimizes $`\left|\kappa +2\rho _𝔨\right|`$ among all $`K`$-types, then $`\kappa `$ is called a minimal $`K`$-type of $`\pi `$.

Our argument is based upon the following fundamental result of Harish-Chandra:

###### Theorem 2.2. (\[AS\], Theorems 8.1 and 8.5; \[K\], Theorems 9.20 and 12.21)

The discrete series representations of $`G`$ are parametrized by $`\lambda \in P_𝔨`$ with $`\lambda \in w(P_𝔤)`$ for some $`w\in W^1`$ such that $`\lambda `$ is regular and $`\lambda -w\rho _𝔤\in \mathrm{\Gamma }`$. For such a $`\lambda `$, the discrete series representation $`\pi _\lambda `$ corresponding to $`\lambda `$ has a unique minimal $`K`$-type $$\kappa :=\lambda +w\rho _𝔤-2\rho _𝔨,$$ which occurs with multiplicity $`1`$. Finally, each $`K`$-type $`\kappa ^{\prime }`$ of $`\pi _\lambda `$ is of the form $$\kappa ^{\prime }=\kappa +\underset{{\scriptscriptstyle \genfrac{}{}{0pt}{}{\alpha \in \mathrm{\Delta }_𝔤}{\langle w\rho _𝔤,\alpha \rangle >0}}}{\sum }n_\alpha \alpha $$ $`2.3`$ where the $`n_\alpha `$ are nonnegative integers. In the literature, $`\lambda `$ is called the Harish-Chandra parameter for $`\pi _\lambda `$, while $`\kappa `$ is called the Blattner parameter.

### b) The Dirac Operator on Spinors

Let $`𝒮=G\times _\sigma S`$ be the spinor bundle on $`M`$ as in (1.1). Since we assume that $`\mathrm{rk}(G)=\mathrm{rk}(K)`$, the symmetric space $`M`$ is even dimensional. In particular, the spinor representation and the spinor bundle split into a positive and a negative part: $$S=S^+\oplus S^-,\text{ and }𝒮=𝒮^+\oplus 𝒮^-.$$ The $`K`$-action on $`S^+`$ and $`S^-`$ is described by a formula of Parthasarathy:

###### Lemma 2.4. (\[P\], Lemma 2.2)

For $`w\in W^1`$, let $`\sigma ^{w\rho _𝔤-\rho _𝔨}`$ be the $`K`$-representation with highest weight $`w\rho _𝔤-\rho _𝔨\in P_𝔨`$. Then for a suitable orientation of $`M`$, $`\sigma `$ decomposes as $$\sigma =\sigma ^+\oplus \sigma ^-:=\underset{{\scriptscriptstyle \genfrac{}{}{0pt}{}{w\in W^1}{\mathrm{sign}(w)=1}}}{\bigoplus }\sigma ^{w\rho _𝔤-\rho _𝔨}\oplus \underset{{\scriptscriptstyle \genfrac{}{}{0pt}{}{w\in W^1}{\mathrm{sign}(w)=-1}}}{\bigoplus }\sigma ^{w\rho _𝔤-\rho _𝔨}.$$

###### Remark 2.5.

This implies in particular that $`\rho _𝔨-w\rho _𝔤\in \mathrm{\Gamma }`$, because we have assumed that $`\sigma `$ is a representation of $`K`$.
Note that $`w\in W_𝔤`$ may be arbitrary, because different $`W_𝔤`$-translates differ by linear combinations of roots of $`G`$, which are clearly in $`\mathrm{\Gamma }`$.

###### Remark 2.6.

We recall that the operator $`D`$ splits as $$D^\pm :=D|_{\mathrm{\Gamma }(𝒮^\pm )}:\mathrm{\Gamma }(𝒮^\pm )\to \mathrm{\Gamma }(𝒮^{\mp }).$$ If $`E_\mu `$ is an eigenspace corresponding to an eigenvalue $`\mu `$ of $`D`$, then $`E_\mu `$ splits into $`E_\mu ^+\oplus E_\mu ^-`$, with $`E_\mu ^\pm :=E_\mu \cap \mathrm{\Gamma }(𝒮^\pm )`$. If moreover, $`\mu \ne 0`$, then $`D^\pm |_{E_\mu ^\pm }:E_\mu ^\pm \to E_\mu ^{\mp }`$ is an isomorphism.

We will now establish an algebraic criterion for the existence of point spectrum for untwisted Dirac operators.

###### Theorem 2.7.

Let $`D`$ be the untwisted Dirac operator on $`M=G/K`$. If $`\rho _𝔨`$ is $`𝔤`$-regular, then $`\mathrm{Spec}_p(D)=\{0\}`$, and $`\mathrm{ker}(D)`$ is isomorphic to the discrete series representation with Harish-Chandra parameter $`\rho _𝔨`$. If $`\rho _𝔨`$ is $`𝔤`$-singular, then there is no point spectrum.

###### Remark.

By \[AS\], Theorem 9.3, we already know that $`\mathrm{ker}(D)\ne 0`$ iff $`\rho _𝔨`$ is $`𝔤`$-regular. It would thus be enough to check that $`\mathrm{Spec}_p(D)`$ contains no nonzero eigenvalues.

###### Demonstration Proof

First of all, if $`\rho _𝔨`$ is $`𝔤`$-regular, then there exists a discrete series representation with Harish-Chandra parameter $`\rho _𝔨`$ because of Theorem 2.2 and Remark 2.5. Let $`w\in W^1`$ such that $`\rho _𝔨\in w(P_𝔤)`$. The minimal $`K`$-type of $`\pi _{\rho _𝔨}`$ is $`w\rho _𝔤-\rho _𝔨`$, which is a highest weight of $`\sigma `$ by Lemma 2.4. Hence by Theorem 1.3, $`D`$ has point spectrum.

On the other hand, let us assume that $`\pi _\lambda `$ is a discrete series representation of $`G`$ that contributes to $`\mathrm{Spec}_p(D)`$. We will show that then necessarily $`\lambda =\rho _𝔨`$. Let $`w\in W^1`$ be such that $`\lambda \in w(P_𝔤)`$. By Theorem 1.3 and Lemma 2.4, for some $`w_0\in W^1`$, the weight $`w_0\rho _𝔤-\rho _𝔨`$ is a $`K`$-type of $`\pi _\lambda `$. Then by Theorem 2.2, there exist nonnegative integers $`n_\alpha `$ such that $$w_0\rho _𝔤+\rho _𝔨=\lambda +w\rho _𝔤+\underset{{\scriptscriptstyle \genfrac{}{}{0pt}{}{\alpha \in \mathrm{\Delta }_𝔤^+}{\langle w\rho _𝔤,\alpha \rangle >0}}}{\sum }n_\alpha \alpha .$$ $`2.8`$ We establish a few inequalities: By construction, $`\lambda `$ and $`w\rho _𝔤`$ are both $`𝔤`$-regular and lie in the same Weyl chamber $`w(P_𝔤)`$ of $`𝔤`$. This has two consequences: First, the weight $`w\rho _𝔤`$ uniquely minimizes the distance to $`\lambda `$ among all $`W_𝔤`$-translates of $`\rho _𝔤`$. Thus $$\langle \lambda ,w_0\rho _𝔤\rangle \le \langle \lambda ,w\rho _𝔤\rangle ,$$ $`2.9`$ with equality iff $`w_0=w`$. Second, $`\langle \alpha ,w\rho _𝔤\rangle >0`$ iff $`\langle \alpha ,\lambda \rangle >0`$. This implies $$\langle \lambda ,\underset{\alpha }{\sum }n_\alpha \alpha \rangle \ge 0,$$ $`2.10`$ since the $`n_\alpha `$ have to be nonnegative. Moreover, we have equality iff all the $`n_\alpha `$ are zero. If (2.8) holds for $`\lambda `$, then $`\lambda `$ must clearly be $`𝔨`$-algebraically integral, because the same holds for $`\rho _𝔨`$, $`\rho _𝔤`$ and all $`\alpha \in \mathrm{\Delta }_𝔤^+`$. Now the weight $`\rho _𝔨`$ uniquely minimizes $`\left|\kappa \right|`$ among all $`𝔨`$-regular $`𝔨`$-algebraically integral $`\kappa \in P_𝔨`$. Because $`\lambda `$ is $`𝔤`$-regular, it is also $`𝔨`$-regular, and we have $$\langle \rho _𝔨,\lambda \rangle \le \left|\lambda \right|\left|\rho _𝔨\right|\le \left|\lambda \right|^2$$ $`2.11`$ with equality iff $`\lambda =\rho _𝔨`$.
In order to show that $`\lambda =\rho _𝔨`$, we multiply (2.8) by $`\lambda `$ and apply (2.9) and (2.10): $$\langle w_0\rho _𝔤+\rho _𝔨,\lambda \rangle =\langle \lambda +w\rho _𝔤+{\textstyle \sum }n_\alpha \alpha ,\lambda \rangle ,\text{ hence }\langle \rho _𝔨,\lambda \rangle \ge \left|\lambda \right|^2.$$ So by (2.11), we have equality, which means that $`\lambda =\rho _𝔨`$, that $`w_0=w`$, and that all $`n_\alpha `$ are zero.

Now, $`\rho _𝔨`$ can only be a Harish-Chandra parameter for a discrete series representation of $`G`$ if $`\rho _𝔨`$ is $`𝔤`$-regular. Thus, there is no point spectrum if $`\rho _𝔨`$ is $`𝔤`$-singular. Let us assume that $`\rho _𝔨`$ is $`𝔤`$-regular. Then, among the highest weights of $`\sigma `$, only $`w\rho _𝔤-\rho _𝔨`$ can appear as a $`K`$-type of a discrete series representation of $`G`$. This implies that the eigenspaces of $`D`$ are contained either in $`L^2(𝒮^+)`$ or in $`L^2(𝒮^-)`$. In particular, $`\mathrm{Spec}_p(D)\subset \{0\}`$, because by Remark 2.6, any nonzero eigenvalue $`\mu `$ would lead to an eigenspace $`E_\mu =E_\mu ^+\oplus E_\mu ^-`$ with $`E_\mu ^+\cong E_\mu ^-\ne \{0\}`$. Finally, $`w\rho _𝔤-\rho _𝔨`$ is the minimal $`K`$-type of $`\pi _{\rho _𝔨}`$. Hence, it has multiplicity $`1`$, and $`\mathrm{ker}(D)`$ is irreducible as a $`G`$-module. ∎

###### Remark.

Another way to check that $`D`$ vanishes on $`\pi _{\rho _𝔨}\subset L^2(𝒮)`$ is to express $`D^2`$ in terms of the Casimir operator $`\mathrm{\Omega }`$ of $`G`$ (\[P\], Proposition 3.1, \[K\], Lemma 12.12), and using the explicit formula for $`\pi _\lambda (\mathrm{\Omega })`$ (\[K\], Lemma 12.28).

We will now reformulate the theorem above in terms of the compact dual of $`M`$. Therefore, let $`𝔤^ℂ`$ be the complexification of $`𝔤`$. Recall that there exists a compact, connected, simply connected Lie group $`G^{*}`$ with Lie algebra $`𝔤^{*}:=𝔨\oplus i𝔭`$. Let $`K^{*}\subset G^{*}`$ be its Lie subgroup with Lie algebra $`𝔨`$, then $`K^{*}`$ is closed, and $`M^{*}:=G^{*}/K^{*}`$ is called the compact dual of $`M`$. Note that $`𝔥`$ is a common Cartan subalgebra of $`𝔤`$, $`𝔤^{*}`$ and $`𝔨`$, and that $`𝔤`$ and $`𝔤^{*}`$ have the same roots, Weyl chambers etc. with respect to $`𝔥`$. With these definitions, we can give an equivalent criterion for the existence of point spectrum:

###### Corollary 2.12.

Let $`D`$ be the untwisted Dirac operator on $`M=G/K`$. Then $`D`$ has point spectrum iff the $`\widehat{A}`$-genus of the compact dual $`M^{*}=G^{*}/K^{*}`$ of $`M`$ is nonzero.

###### Demonstration Proof

By \[BH\], the $`\widehat{A}`$-genus of $`M^{*}`$ is given by the formula $$\widehat{A}(M^{*})[M^{*}]=\underset{\alpha \in \mathrm{\Delta }_𝔤^+}{\prod }\frac{\langle \alpha ,\rho _𝔨\rangle }{\langle \alpha ,\rho _𝔤\rangle }$$ $`2.13`$ for a suitable orientation of $`M^{*}`$. In particular, $`\widehat{A}(M^{*})[M^{*}]\ne 0`$ iff $`\rho _𝔨`$ is $`𝔤`$-regular, cf. Theorem 23.3 in \[BH\]. Thus, our claim follows from Theorem 2.7. ∎

###### Remark.

The following alternative proof motivates the appearance of the $`\widehat{A}`$-genus: By Theorem 2.7, $`D`$ has point spectrum iff its $`L^2`$-index is nonzero. Using Hirzebruch proportionality, one then concludes that this is the case iff $`\widehat{A}(M^{*})[M^{*}]\ne 0`$ (cf. \[AS\], chapter 3 and erratum). This was suggested by M. Olbrich.

Let us state some consequences of our criterion.

###### Corollary 2.14.

Let $`D`$ be the untwisted Dirac operator on $`M=G/K`$. If $`D`$ has point spectrum, then

(1) $`M`$ is a Hermitian symmetric space;

(2) the compact dual $`M^{*}`$ of $`M`$ is not spin;

(3) $`\mathrm{dim}M`$ is divisible by $`4`$.

###### Demonstration Proof

By \[BH\], Theorem 23.3, $`M^{*}`$ is Hermitian symmetric if its $`\widehat{A}`$-genus is nonzero.
Then $`M`$ is also Hermitian symmetric. Next, $`M^{*}`$ has positive scalar curvature. Thus if $`M^{*}`$ was spin, its $`\widehat{A}`$-genus would vanish by Lichnerowicz’ theorem (\[LM\], Corollary 8.9). Finally, the $`\widehat{A}`$-genus of $`M^{*}`$ can be nonzero only if $`\mathrm{dim}M^{*}`$ is divisible by $`4`$. Hence, the claims follow from Corollary 2.12. ∎

###### Remark.

The conditions listed in Corollary 2.14 are not sufficient for the existence of point spectrum: In the next section, we will see that for $`M^{*}:=\mathrm{Sp}(n)/\mathrm{U}(n)`$ with $`n\in 4ℕ`$, conditions (1) – (3) above are satisfied. Nevertheless, the $`\widehat{A}`$-genus of $`M^{*}`$ vanishes.

## 3. Compact Symmetric Spaces with Nonvanishing $`\widehat{A}`$-Genus

In this section we want to determine the compact Riemannian symmetric spaces $`M^{*}=G^{*}/K^{*}`$ with non-vanishing $`\widehat{A}`$-genus. By \[Bo\], we may again assume that $`\mathrm{rk}(G^{*})=\mathrm{rk}(K^{*})`$. Because the $`\widehat{A}`$-genus is multiplicative on products of manifolds, we restrict our attention to irreducible symmetric spaces. By Corollary 2.14, we only have to investigate compact Hermitian symmetric spaces $`M^{*}`$ with $`\mathrm{dim}M^{*}`$ divisible by $`4`$ which are not spin. The simply connected symmetric spaces that admit a spin structure are known (cf. \[CG\] or \[HS\]). On the other hand there are four families of Hermitian symmetric spaces and two exceptional ones (cf. \[H\]). Combining these lists, we see that the irreducible Hermitian symmetric spaces which have no spin structure form the following three families:

$`(1)`$ $`\mathrm{SO}(n+2)`$ $`/\mathrm{SO}(2)\times \mathrm{SO}(n)`$ for $`n`$ odd,

$`(2)`$ $`\mathrm{Sp}(n)`$ $`/\mathrm{U}(n)`$ for $`n`$ even, and

$`(3)`$ $`\mathrm{U}(p+q)`$ $`/\mathrm{U}(p)\times \mathrm{U}(q)`$ for $`p+q`$ odd

(in the last case, $`M`$ should actually be represented as a quotient of a finite cover of $`\mathrm{SU}(p,q)`$, rather than of $`\mathrm{U}(p,q)`$). We will show that all manifolds of the families (1) and (2) have vanishing $`\widehat{A}`$-genus, while the $`\widehat{A}`$-genus of $`\mathrm{U}(p+q)/\mathrm{U}(p)\times \mathrm{U}(q)`$ is different from zero if $`p+q`$ is odd.

The manifolds of family (1) have dimension $`2n`$. Since we assume that $`n`$ is odd, the dimension is not divisible by $`4`$, and the $`\widehat{A}`$-genus vanishes. For the two other families, we have to use formula (2.13) and to compute the scalar products $`\langle \alpha ,\rho _𝔨\rangle `$ for all positive roots $`\alpha \in \mathrm{\Delta }_𝔤^+`$.

Let us investigate family (2). The Lie algebra of $`\mathrm{Sp}(n)`$ is the Lie algebra of skew-Hermitian quaternionic matrices of order $`n`$. The Lie algebra of $`\mathrm{U}(n)`$ is realized as the sub-algebra of skew-Hermitian complex matrices of order $`n`$. A common Cartan sub-algebra $`𝔥`$ is the Lie algebra of the matrices of the form $$\lambda =(\lambda _1,\dots ,\lambda _n):=\mathrm{diag}(\lambda _1,\dots ,\lambda _n),$$ with $`\lambda _j\in iℝ`$. As a system of positive roots of $`\mathrm{Sp}(n)`$, we take $$\mathrm{\Delta }_𝔤^+=\{\lambda _i\pm \lambda _j\mid 1\le i<j\le n\}\cup \{\mathrm{\hspace{0.17em}2}\lambda _i\mid i=1,\dots ,n\}.$$ The positive roots in $`K^{*}=\mathrm{U}(n)`$ are $`\mathrm{\Delta }_𝔨^+:=\{\lambda _i-\lambda _j\mid i<j\}`$. Hence, $`\rho _𝔨=(n-1,n-3,\dots ,1-n)`$. Clearly, the standard scalar product on $`𝔥^{*}\cong ℝ^n`$ extends to an $`\mathrm{Ad}_{G^{*}}^{*}`$-invariant scalar product on $`𝔤^{*}`$. In particular, for $`\alpha =(\lambda _1+\lambda _n)`$ we have $`\langle \alpha ,\rho _𝔨\rangle =0`$.
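Indeed, with $`\rho _𝔨=(n-1,n-3,\dots ,1-n)`$ as computed above, only the first and last components enter, and $$\langle \lambda _1+\lambda _n,\rho _𝔨\rangle =(n-1)+(1-n)=0.$$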
Hence, according to formula (2.13), the $`\widehat{A}`$-genus of all manifolds $`\mathrm{Sp}(n)/\mathrm{U}(n)`$ is zero.

The computation for family (3) is similar. As a system of positive roots we take $$\mathrm{\Delta }_𝔤^+=\{\lambda _i-\lambda _j\mid 1\le i<j\le p+q\}.$$ The positive roots in $`K^{*}=\mathrm{U}(p)\times \mathrm{U}(q)`$ are $$\mathrm{\Delta }_𝔨^+=\{\lambda _i-\lambda _j\mid 1\le i<j\le p\}\cup \{\lambda _i-\lambda _j\mid p+1\le i<j\le p+q\}.$$ This yields $`\rho _𝔨=(p-1,p-3,\dots ,1-p,q-1,q-3,\dots ,1-q)`$. Again we take as a scalar product for the roots the canonical scalar product of vectors in $`ℝ^{p+q}`$. Since $`p+q`$ is odd, we can assume $`p`$ to be even and $`q`$ to be odd. Hence, all numbers $`p-1`$, $`p-3`$, …, $`1-p`$ are odd and all numbers $`q-1`$, $`q-3`$, …, $`1-q`$ are even. From this it follows that the scalar product $`\langle \alpha ,\rho _𝔨\rangle `$ for any positive root $`\alpha \in \mathrm{\Delta }_𝔤^+`$ is different from zero. Using once again formula (2.13), we obtain that the $`\widehat{A}`$-genus of $`\mathrm{U}(p+q)/\mathrm{U}(p)\times \mathrm{U}(q)`$ is nonzero. In particular, the $`\widehat{A}`$-genus of the complex projective space $`ℂP^{2n}`$ does not vanish. Here a simple computation gives $`\widehat{A}(ℂP^{2n})[ℂP^{2n}]=(-4)^{-n}\prod _{i=1}^n\frac{2i-1}{2i}=(-16)^{-n}\left(\genfrac{}{}{0pt}{}{2n}{n}\right)`$.

Finally, we have

###### Theorem 3.1.

Let $`M^{*}=G^{*}/K^{*}`$ be an irreducible Riemannian symmetric space of compact type. Then $`M^{*}`$ has nonvanishing $`\widehat{A}`$-genus iff $`M^{*}`$ is isometric to $$\mathrm{U}(p+q)/\mathrm{U}(p)\times \mathrm{U}(q),\text{with }p+q\text{ odd.}$$ Together with Theorem 2.7, Corollary 2.12, and the multiplicativity of $`\widehat{A}`$, this proves Theorem 0.1. ∎
no-problem/9903/cond-mat9903178.html
ar5iv
text
# The effect of substrate induced strain on the charge-ordering transition in Nd0.5Sr0.5MnO3 thin films

## ACKNOWLEDGMENTS

We acknowledge R. Ramesh and Y. Zheng for help in low temperature XRD measurements. This work was partly supported by the MRSEC program of the NSF (Grant # DMR 96-32521).
no-problem/9903/cond-mat9903415.html
ar5iv
text
# A thermal model for adaptive competition in a market

## Abstract

New continuous and stochastic extensions of the minority game, devised as a fundamental model for a market of competitive agents, are introduced and studied in the context of statistical physics. The new formulation reproduces the key features of the original model, without the need for some of its special assumptions and, most importantly, it demonstrates the crucial role of stochastic decision-making. Furthermore, this formulation provides the exact but novel non-linear equations for the dynamics of the system.

There is currently much interest in the statistical physics of non-equilibrium frustrated and disordered many-body systems . Even relatively simple microscopic dynamical equations have been shown to lead to complex co-operative behaviour. Although several of the interesting examples are in areas traditionally viewed as physics, it is increasingly apparent that many further challenges for statistical physics have their origins in other fields like biology and economics . In this letter we discuss a simple model whose origin lies in a market scenario and show that not only does it exhibit interesting behaviour in its own right but also it yields an intriguingly unusual type of stochastic micro-dynamics of potentially more general interest. The model we will introduce is based on the minority game (MG) , which is a simple and intuitive model for the behaviour of a group of agents subject to the economic law of supply and demand, which ensures that in a market the profitable group of buyers or sellers of a commodity is the minority one . From the perspective of statistical physics, these problems are novel examples of frustrated and disordered many-body systems. Agents do not interact directly but with their collective action determine a ‘price’ which in turn affects their future behaviour, so that minority reward implies frustration. Quenched disorder enters in that different agents respond in different ways to the same stimuli. There are effective random interactions between agents via the common stimuli and the cooperative behaviour is reminiscent of that of spin-glasses , but there are important conceptual and technical differences compared with the problems of conventional statistical physics. The setup of the MG in its original formulation is the following: $`N`$ agents choose at each time step whether to ‘buy’ ($`0`$) or ‘sell’ ($`1`$). Those agents who have made the minority choice win, the others lose. In order to decide what to do agents use strategies, which prescribe an action given the set of winning outcomes in the last $`m`$ time steps. At the beginning of the game each agent draws $`s`$ strategies randomly and keeps them forever. As they play, the agents give points to all their strategies according to their potential success in the past, and at each time step they employ their currently most successful one (i.e. the one with the highest number of points). The most interesting macroscopic observable in the MG is the fluctuation $`\sigma `$ of the excess of buyers to sellers. This quantity is equivalent to the price volatility in a financial context and it is a measure of the global waste of resources by the community of the agents. We therefore want $`\sigma `$ to be as low as possible.
An important feature of the MG, observed in simulations , is that there is a regime of the parameters where $`\sigma `$ is smaller than the value $`\sigma _r`$ which corresponds to the case where each agent is buying or selling randomly. Previous studies have considered this feature from a geometrical and phenomenological point of view . Our aim, however, is to enable a full analytic solution. One of the major obstacles to an analytic study of the MG in its original formulation is the presence of an explicit time feedback via the memory $`m`$. Indeed, when the information processed at each time step by the agents is the true history, that is the result of the choices of the agents in the $`m`$ previous steps, the dynamical evolution of the system is non-Markovian and an analytic approach to the problem is very difficult. A step forward in the simplification of the model has been made in , where it has been shown that the explicit memory of the agents is actually irrelevant for the global behaviour of the system: when the information processed by the agents at each time step is just invented randomly, having nothing to do with the true time series, the relevant macroscopic observables do not change. The significance of this result is the following: the crucial ingredient for the volatility to be reduced below the random value appears to be that the agents must all react to the same piece of information, irrespective of whether this information is true or false . This result has an important technical consequence, since the explicit time feedback introduced by the memory disappears: the agents respond now to an instantaneous random piece of information, i.e. a noise, so that the process has become stochastic and Markovian. The model can be usefully simplified even further and at the same time generalized and made more realistic. Let us first consider the binary nature of the original MG. It is clear that from a simulational point of view a binary setup offers advantages of computational efficiency, but unfortunately it is less ideally suited for an analytic approach . More specifically, if we are interested in the analysis of time evolution, integer variables are usually harder to handle. Moreover, the geometrical considerations that have been made on a hypercube of strategies of dimension $`2^m`$ for the binary setup , become more natural and general if the strategy space is continuous. Finally, in the original binary formulation of the MG there is no possibility for the agents to fine tune their bids: each agent can choose to buy or sell, but they cannot choose by how much. As a consequence, the win or loss of the agents is also unrelated to the size of their bids. This is another unrealistic feature of the model, which can be improved. For all these reasons, we shall now introduce a continuous formulation of the MG. Let us define a strategy $`\stackrel{}{R}`$ as a vector in the real space $`ℝ^D`$, subject to the constraint, $`\left|\stackrel{}{R}\right|=\sqrt{D}.`$ In this way the space of strategies $`\mathrm{\Gamma }`$ is just a sphere and strategies can be thought of as points on it. The next ingredient we need is the information processed by strategies. To this aim we introduce a random noise $`\stackrel{}{\eta }(t)`$, defined as a unit-length vector in $`ℝ^D`$, which is $`\delta `$-correlated in time and uniformly distributed on the unit sphere.
Finally, we define the response $`b(\stackrel{}{R})`$ of a strategy $`\stackrel{}{R}`$ to the information $`\stackrel{}{\eta }(t)`$, as the projection of the strategy on the information itself, $$b(\stackrel{}{R})\equiv \stackrel{}{R}\cdot \stackrel{}{\eta }(t).$$ (1) This response is nothing else than the bid prescribed by the particular strategy $`\stackrel{}{R}`$. The bid is now a continuous quantity, which can be positive (buy) or negative (sell). At the beginning of the game each agent draws $`s`$ strategies randomly from $`\mathrm{\Gamma }`$, with a flat distribution. All the strategies initially have zero points and in operation the points are updated in a manner discussed below. At each time step the agent uses his/her strategy with the highest number of points. The total bid is: $$A(t)\equiv \underset{i=1}{\overset{N}{\sum }}b_i(t)=\underset{i=1}{\overset{N}{\sum }}\stackrel{}{R}_i^{*}(t)\cdot \stackrel{}{\eta }(t),$$ (2) where $`\stackrel{}{R}_i^{*}(t)`$ is the best strategy (that with the highest number of points) of agent $`i`$ at time $`t`$. We now have to update the points. This is particularly simple in the present continuous formulation. Let us introduce a time dependent function $`P(\stackrel{}{R},t)`$ defined on $`\mathrm{\Gamma }`$, which represents the points $`P`$ of strategy $`\stackrel{}{R}`$ at time $`t`$. We can write a very simple and intuitive time evolution equation for $`P`$, $$P(\stackrel{}{R},t+1)=P(\stackrel{}{R},t)-A(t)b(\stackrel{}{R})/N,$$ (3) where $`A(t)`$ is given by eq.(2). A strategy $`\stackrel{}{R}`$ is thus rewarded (penalized) if its bid has an opposite (equal) sign to the total bid $`A(t)`$, as the supply-demand dynamics requires. Now the win or the loss is proportional to the bid. It is important to check whether the results obtained with this continuous formulation of the MG are the same as in the original binary model. The main observable of interest is the variance (or volatility) $`\sigma `$ in the fluctuation of $`A`$, $`\sigma ^2=\underset{t\to \mathrm{\infty }}{\mathrm{lim}}\frac{1}{t}\int _{t_0}^{t_0+t}𝑑t^{\prime }A(t^{\prime })^2`$. Indeed, we shall not consider any quantity related to individual agents. We prefer to concentrate on the global behaviour of the system, taking the role of the market regulator rather than that of a trading agent. The main features of the MG are reproduced: first, we have checked that the relevant scaling parameter is the reduced dimension of the strategy space $`d=D/N`$; second, there is a regime of $`d`$ where the variance $`\sigma `$ is smaller than the random value $`\sigma _r`$, showing a minimum at $`d=d_c(s)`$, and, moreover, the minimum of $`\sigma (d)`$ is shallower the higher $`s`$ is ; see Fig.1. It can be shown that all the other standard features of the binary model are reproduced in the continuous formulation. An interesting observation is that there is no need for $`\stackrel{}{\eta }(t)`$ to be random at all. Indeed, the only requirement is that it must be ergodic, spanning the whole space $`\mathrm{\Gamma }`$, even in a deterministic way. Moreover, if $`\stackrel{}{\eta }(t)`$ visits just a sub-space of $`\mathrm{\Gamma }`$ of dimension $`D^{\prime }<D`$ everything in the system proceeds as if the actual dimension was $`D^{\prime }`$: the effective dimension of the strategy space is fixed by the dimension of the space spanned by the information.
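The continuous game defined by eqs. (1)–(3) is straightforward to simulate. The following sketch is ours and purely illustrative; the sizes $`N`$, $`s`$, $`D`$ and the number of steps are arbitrary choices, not the values used for Fig. 1:

```python
import numpy as np

rng = np.random.default_rng(0)
N, s, D, steps = 101, 2, 32, 5000     # illustrative: agents, strategies per agent, dimension

# Quenched strategies: s vectors per agent, uniform on the sphere of radius sqrt(D)
R = rng.standard_normal((N, s, D))
R *= np.sqrt(D) / np.linalg.norm(R, axis=2, keepdims=True)

P = np.zeros((N, s))                  # strategy points, all zero initially
A_hist = []

for t in range(steps):
    eta = rng.standard_normal(D)
    eta /= np.linalg.norm(eta)        # information: unit vector, delta-correlated in time

    b = R @ eta                       # bids of all strategies, shape (N, s), eq. (1)
    best = P.argmax(axis=1)           # 'best-strategy' rule
    A = b[np.arange(N), best].sum()   # total bid, eq. (2)

    P -= A * b / N                    # point update for all strategies, eq. (3)
    A_hist.append(A)

print("d =", D / N, " sigma^2 =", np.mean(np.square(A_hist)))
```

Ties in the `argmax` (which occur at $`t=0`$, when all points vanish) are broken deterministically here; any unbiased tie-breaking rule would serve equally well.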
Relations (2) and (3) constitute a closed set of equations for the dynamical evolution of $`P(\stackrel{}{R},t)`$, whose solution, once averaged over $`\stackrel{}{\eta }`$ and over the initial distribution of the strategies, gives in principle an exact determination of the behaviour of the system. In practice, the presence of the ‘best-strategy’ rule, i.e. the fact that each agent uses the strategy with the highest points, makes the handling of these equations still difficult. From the perspective of statistical physics it is natural to modify the deterministic nature of the above procedure by introducing a thermal description which progressively allows stochastic deviations from the ‘best-strategy’ rule, as a temperature is raised. We shall see that this generalization is also advantageous, both for the performance of the system in certain regimes and for the development of convenient analytical equations for the dynamics. In this context the ‘best-strategy’ original formulation of the MG can be viewed as a zero temperature limit of a more general model. Hence we introduce the Thermal Minority Game (TMG), defined in the following way. We allow each agent a certain degree of stochasticity in the choice of the strategy to use at any time step. For each agent $`i`$ the probabilities of employing his/her strategy $`a=1,\dots ,s`$ are given by, $$\pi _i^a(t)\equiv \frac{e^{\beta P(\stackrel{}{R}_i^a,t)}}{Z_i},\qquad Z_i\equiv \underset{b=1}{\overset{s}{\sum }}e^{\beta P(\stackrel{}{R}_i^b,t)},$$ (4) where $`P`$ are the points, evolving with eq.(3). The inverse temperature $`\beta =1/T`$ is a measure of the power of resolution of the agents: when $`\beta \to \mathrm{\infty }`$ they are perfectly able to distinguish which is their best strategy, while for decreasing $`\beta `$ they are more and more confused, until for $`\beta =0`$ they choose their strategy completely at random. What we have defined is therefore a model which interpolates between the original ‘best-strategy’ MG ($`T=0`$, $`\beta =\mathrm{\infty }`$) and the random case ($`T=\mathrm{\infty }`$, $`\beta =0`$). In the language of Game Theory, when $`T=0`$ agents play ‘pure’ strategies, while at $`T>0`$ they play ‘mixed’ ones . We now consider the consequences of having introduced the temperature. Let us fix a value of $`d`$ belonging to the worse-than-random phase of the MG (see Fig.1) and see what happens to the variance $`\sigma `$ when we switch on the temperature. We do know that for $`T=0`$ we must recover the same value as in the ordinary MG, while for $`T\to \mathrm{\infty }`$ we must obtain the value $`\sigma _r`$ of the random case. But in the middle a very interesting thing occurs: $`\sigma (T)`$ is not a monotonically decreasing function of $`T`$, but there is a large intermediate temperature regime where $`\sigma `$ is smaller than the random value $`\sigma _r`$. This behaviour is shown in Fig.2. The meaning of this result is the following: even if the system is in a MG phase which is worse than random, there is a way to significantly decrease the volatility $`\sigma `$ below the random value $`\sigma _r`$ by not always using the best strategy, but rather allowing a certain degree of individual error. Note from Fig.2 that the temperature range where the variance is smaller than the random one spans more than two orders of magnitude, meaning that almost every kind of individual stochasticity of the agents improves the global behaviour of the system.
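In simulational terms the thermal rule only changes the strategy-selection step: instead of taking the `argmax` of the points, each agent samples a strategy from the Gibbs weights of eq.(4). A minimal sketch, with names carried over from the previous snippet (the sampling helper is our own construction, not from the original work):

```python
def choose_strategies(P, beta, rng):
    """Sample one strategy index per agent from the probabilities of eq. (4).

    P    : (N, s) array of strategy points
    beta : inverse temperature 1/T
    """
    logits = beta * P
    logits -= logits.max(axis=1, keepdims=True)   # stabilize the exponentials
    pi = np.exp(logits)
    pi /= pi.sum(axis=1, keepdims=True)           # eq. (4)
    # one categorical draw per agent via inverse-CDF sampling
    return (rng.random(P.shape[0])[:, None] < pi.cumsum(axis=1)).argmax(axis=1)
```

For $`\beta \to \mathrm{\infty }`$ this reduces to the ‘best-strategy’ rule of the previous sketch, while $`\beta =0`$ gives the purely random case.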
Furthermore, as we show in the inset of Fig.2, if we fix $`d`$ at a value belonging to the better-than-random phase, but with $`d<d_c`$, a similar range of temperature still improves the behaviour of the system, decreasing the volatility even below the MG value. These features can be seen also in Fig.3, where we plot $`\sigma `$ as a function of $`d`$ at various values of the temperature. In addition this figure shows further effects: (i) the improvement due to thermal noise occurs only for $`d<d_c`$; (ii) there is a cross-over temperature $`T_1\sim 1`$, below which temperature has very little effect for $`d>d_c`$; (iii) above $`T_1`$ the optimal $`d_c(T)`$ moves continuously towards zero and $`\sigma (d_c)`$ increases; (iv) there is a higher critical temperature $`T_2\sim 10^2`$ at which $`d_c`$ vanishes, and for $`T>T_2`$ the volatility becomes monotonically increasing with $`d`$. We turn now to a more formal description of the TMG. Once we have introduced the probabilities $`\pi _i^a`$ in eq.(4) we can write a dynamical equation for them. Indeed, from eq.(3), after taking the continuous-time limit, we have, $$\dot{\pi }_i^a(t)=-\beta \pi _i^a(t)a(t)\left(\stackrel{}{R}_i^a-\underset{b=1}{\overset{s}{\sum }}\pi _i^b(t)\stackrel{}{R}_i^b\right)\cdot \stackrel{}{\eta }(t),$$ (5) where the normalized total bid $`a(t)`$ is given by, $$a(t)=N^{-1}\underset{i=1}{\overset{N}{\sum }}\stackrel{}{r}_i(t)\cdot \stackrel{}{\eta }(t).$$ (6) Now $`\stackrel{}{r}_i(t)`$ is a stochastic variable, drawn at each time $`t`$ with the time dependent probabilities set $`[\pi _i^1,\dots ,\pi _i^s]`$. Note the different notation: $`\stackrel{}{R}_i^a`$ are the quenched strategies, while $`\stackrel{}{r}_i(t)`$ is the particular strategy drawn at time $`t`$ from the set $`[\stackrel{}{R}_i^1,\dots ,\stackrel{}{R}_i^s]`$ by agent $`i`$ with instantaneous probabilities $`[\pi _i^1(t),\dots ,\pi _i^s(t)]`$. In order to better understand equation (5), we recall that $`b_i^a(t)=\stackrel{}{R}_i^a\cdot \stackrel{}{\eta }(t)`$ is the bid of strategy $`\stackrel{}{R}_i^a`$ at time $`t`$ (eq.(1)) and therefore the quantity $`w_i^a(t)\equiv -a(t)b_i^a(t)`$ can be considered as the win of this strategy (cf. eq.(3)). Hence, we can rewrite eq. (5) in the following more intuitive form, $$\dot{\pi }_i^a(t)=\beta \pi _i^a(t)[w_i^a(t)-\overline{w}_i],$$ (7) where $`\overline{w}_i\equiv \underset{b=1}{\overset{s}{\sum }}\pi _i^b(t)w_i^b(t)`$. The meaning of equation (7) is clear: the probability $`\pi _i^a`$ of a particular strategy $`\stackrel{}{R}_i^a`$ increases only if the performance of that strategy is better than the instantaneous average performance of all the strategies belonging to the same agent $`i`$ with the same actual total bid . Relations (5) and (6) are the exact dynamical equations for the TMG. They do not involve points nor memory, but just stochastic noise and quenched disorder, and they are local in time. From the perspective of statistical mechanics, this is satisfying and encouraging. However, these equations differ fundamentally from conventional replicator and Langevin dynamics. First, the Markov-propagating variables are themselves probabilities. Second, there are two sorts of stochastic noises, as well as quenched randomness. Third, and more importantly, the stochastic noises enter non-linearly, one independently for each agent via probabilistic dependence on the $`\pi `$ themselves, the other globally and quadratically.
They thus provide interesting challenges for fundamental transfer from microscopic to macroscopic dynamics, including an identification of the complete set of necessary order parameters . We shall address the problem of finding a solution of the TMG equations in a future work. Finally, let us note that the TMG (as well as the MG) is not only suitable for the description of market dynamics. Indeed, any natural system where a population of individuals must organize itself in order to optimize the utilization of some limited resources is qualitatively well described by such a model. We hope that the thermal model we have introduced in this Letter will give more insight into such natural phenomena. We thank L. Billi, P. Gillin and P. Love for many suggestions, and N.F. Johnson and I. Kogan for useful discussions. This work was supported by EPSRC Grant GR/M04426 and EC Grant ARG/B7-3011/94/27.
no-problem/9903/cond-mat9903132.html
ar5iv
text
## 1 Introduction

Compact polymers, the continuum limit of random walks that are constrained to visit every site of some lattice $`ℒ`$, are intriguing in so far as their critical exponents depend explicitly on $`ℒ`$. Whilst first observed numerically , this curious lack of universality was firmly established through the exact solution of the compact polymer problem on the honeycomb and, very recently, the square lattice . However, not every lattice can support a compact polymer phase. To see this, consider more generally an O($`n`$)-type loop model defined on $`ℒ`$, in which each closed loop is weighed by $`n`$, and each vertex not visited by a loop carries a factor of $`t`$. It is well-known that for $`|n|\le 2`$ this model possesses a branch of low-temperature ($`t`$ being the temperature) attractive critical fixed points with critical exponents that do not depend on $`ℒ`$, even when $`ℒ`$ is not a regular lattice but an arbitrary network . On the other hand, whenever the model is invariant under $`t\to -t`$, as is the case if $`ℒ`$ can only accommodate loops of even length, this symmetry allows for a distinct zero-temperature branch of repulsive fixed points , with the $`n\to 0`$ limit representing the compact polymer problem. That the critical behaviour of this class of fully-packed loop (FPL) models depends on $`ℒ`$ is readily seen from the solutions of the honeycomb and the square case given in Refs. . Namely, the continuum limit of these models can be described by a conformal field theory (CFT) for a fluctuating interface, where the fully-packing constraint forces the height variable to be a vector, with a number of components that depends on the coordination number of the lattice at hand. This $`t\to -t`$ symmetry argument, originally put forward by Blöte and Nienhuis , prompts us to conjecture its inverse: Whenever $`ℒ`$ allows for loops of odd length, so that the symmetry is destroyed, the renormalisation group flow can be expected to take us to non-zero $`t`$, eventually terminating in the dense, universal O($`n`$) phase. Support for this conjecture so far comes from numerics in the case of the triangular lattice , and recently for a class of decorated lattices interpolating between the square and the triangular lattices .<sup>2</sup><sup>2</sup>2Although belonging to the universality class of the square lattice FPL model the FPL model on the square-diagonal lattice does not constitute a very good counterexample, since the fully-packing constraint actually prevents the loops from occupying the diagonal edges. (Note that the proof given in Ref. is also valid for $`n\ne 0`$.) Accepting for the moment the validity of this conjecture however leaves us with an infinite set of bipartite lattices, each one being a potential candidate for a novel universality class of compact polymers. This perspective is especially appealing in the light of the constructive point of view taken in Refs. . In these papers new CFTs were explicitly constructed, based on purely geometrical considerations applied to the FPL model in question. On the other hand, if the bipartite lattices generate an entire family of distinct CFTs, this gives rise to important classification issues. In particular, the applicability of compact polymer models to the protein folding problem implies that one would like to understand on which microscopic parameters (bending angles, coordination number, steric constraints) the resulting conformational exponents do depend.
In this paper we examine FPL models on a class of bipartite lattices, in which every vertex of a regular (square or honeycomb) lattice has been decorated. A renormalisation group (RG) argument, essentially amounting to a summation over the decoration, reveals that the Liouville field theory construction should really be based on the undecorated lattice, but with bare vertex weights that depend on the loop fugacity $`n`$. This leads to a novel scenario in which, depending on $`n`$, the model may either renormalise towards the dense phase of the O($`n`$) model or flow off to a non-critical phase, even for $`n<2`$! The case of the square-octagon lattice, shown in Fig. 1, is investigated in detail. This lattice can be thought of as a square lattice in which each vertex has been decorated with a tilted square. Our interest in the square-octagon lattice stems from the fact that it is bipartite and has the same coordination number as the honeycomb lattice, but enjoys the symmetry of the square lattice. In particular it will enable us to assess whether the critical behaviour of compact polymers on a lattice $`ℒ`$ depends only on its coordination number<sup>3</sup><sup>3</sup>3In the protein folding language this determines the number of close contacts per monomer of the folded chain., only on the bond angles, or on a combination of both these parameters. Our analysis suggests that the corresponding FPL model belongs to the dense O($`n`$) phase for $`n<1.88`$, whilst for $`n>1.88`$ a finite correlation length is generated. For $`n=2`$ we show rigorously that the model is equivalent to the (non-critical) 9-state Potts model. The analytical results are confirmed by numerical transfer matrix calculations on strips of width up to $`L_{\mathrm{max}}=18`$ loop segments. Having introduced the models in Section 2, we present the analytical results in Section 3 and the numerics in Section 4. Our results are discussed in Section 5.

## 2 The models

A fully packed loop (FPL) model on a lattice $`ℒ`$ is defined by the partition function $$Z_{\mathrm{FPL}}=\underset{𝒢_{\mathrm{FPL}}}{\sum }n^N,$$ (2.1) where the sum runs over all configurations $`𝒢_{\mathrm{FPL}}`$ of closed loops drawn along the edges of $`ℒ`$ so that every vertex is visited by a loop. Within a given configuration a weight $`n`$ is given to each of its $`N`$ loops. An FPL model on $`ℒ`$ can be generalised to an O($`n`$) model by lifting the fully packing constraint and further weighing each empty vertex by a factor of $`t`$. Physically $`t`$ corresponds to a temperature, the FPL model thus being the zero-temperature limit of the O($`n`$) model. When $`ℒ`$ is the honeycomb lattice, the resulting phase diagram is as shown in Fig. 2 . For $`|n|\le 2`$ three branches, or phases, of critical behaviour exist. Since $`ℒ`$ is bipartite, the resulting $`t\to -t`$ symmetry allows for a compact phase at $`t=0`$ , as discussed at length in the Introduction. For $`t>0`$ Nienhuis has found the exact parametrisation of a dense and a dilute phase, and determined the critical exponents as a function of $`n`$ . For our discussion of the square-octagon FPL model we shall need the corresponding parametrisation for the O($`n`$) model on the square lattice. The definition of the partition function is now slightly more complicated, since each vertex can be visited by the loops in several ways that are unrelated by rotational symmetry.
An appropriate choice is $$Z_{\mathrm{O}(n)}=\underset{𝒢}{\sum }t^{N_t}u^{N_u}v^{N_v}w^{N_w}n^N,$$ (2.2) where $`N_t`$, $`N_u`$, $`N_v`$ and $`N_w`$ are the number of vertices visited by respectively zero, one turning, one straight, and two mutually avoiding loop segments. It is convenient to redefine the units of temperature so that $`t=1`$. Nienhuis has identified five branches of critical behaviour for the model (2.2). The first four are parametrised by

$`w_\mathrm{c}`$ $`=`$ $`\left\{2\left[1-2\mathrm{sin}\left(\frac{\theta }{2}\right)\right]\left[1+2\mathrm{sin}\left(\frac{\theta }{2}\right)\right]^2\right\}^{-1},`$

$`u_\mathrm{c}`$ $`=`$ $`4w_\mathrm{c}\mathrm{sin}\left(\frac{\theta }{2}\right)\mathrm{cos}\left(\frac{\pi }{4}-\frac{\theta }{4}\right),`$

$`v_\mathrm{c}`$ $`=`$ $`w_\mathrm{c}\left[1+2\mathrm{sin}\left(\frac{\theta }{2}\right)\right],`$

$`n`$ $`=`$ $`2\mathrm{cos}(2\theta ),`$ (2.3)

where $`\theta \in [(2-b)\pi /2,(3-b)\pi /2]`$ corresponds to branch $`b=1,2,3,4`$. It has recently been noticed that the edges not covered by the original (‘black’) loops form a second species of closed (‘grey’) loops, each one occurring with unit weight . Lifting the fully-packing constraint implies that the two loop flavours decouple, and each of them can independently reside in either of the two critical phases (dense or dilute) discussed above. The black (resp. grey) loops are dense on branches 2 and 4 (resp. 1 and 2), and dilute on branches 1 and 3 (resp. 3 and 4). On branches 1 and 2 the grey loops contribute neither to the central charge, nor to the geometrical (string) scaling dimensions, and in the scaling limit these two branches are thus completely analogous to the dilute and the dense branches of the O($`n`$) model on the honeycomb lattice . The last critical branch, known as branch 0, has weights $$u_\mathrm{c}=w_\mathrm{c}=\frac{1}{2},\qquad v_\mathrm{c}=0,\qquad -3\le n\le 1,$$ (2.4) and can be exactly mapped onto the dense phase of the O($`n+1`$) model , or equivalently to the selfdual $`(n+1)^2`$-state Potts model .

## 3 Renormalisation group analysis and an exact mapping

At first sight it would seem that the continuum limit of the FPL model (2.1) on the square-octagon lattice should be described by a Liouville field theory for a two-dimensional height field, since the lattice has the same coordination number as the honeycomb lattice . However, we shall presently see that only one height component survives when applying the appropriate coarse graining procedure to the two-dimensional microscopic heights defined on the lattice plaquettes. Consider performing the first step of a real-space renormalisation group (RG) transformation of Eq. (2.1), by summing over the degrees of freedom residing at the decorating squares. In this way the decorated vertices transform into weighted undecorated vertices, as shown on Fig. 3. The renormalised model is then simply the O($`n`$) model on the square lattice (2.2), but with some particular ‘bare’ values of the vertex weights. Defining again the empty vertex to have unit weight, these bare weights read $$u=\frac{1}{n},v=0,w=\frac{1}{n}.$$ (3.1) Following the standard procedure, microscopic heights can be defined on the lattice plaquettes by orienting the loops and assigning a vector, $`𝐀`$, $`𝐁`$ or $`𝐂`$, to each of the three possible bond states: $`𝐀`$ ($`𝐁`$) if the bond is covered by a loop directed towards (away from) a site of the even sublattice, and $`𝐂`$ if the bond is empty.
When encircling an even (odd) site in the (counter)clockwise direction the microscopic height increases by the corresponding vector whenever a bond is crossed. As was first pointed out in Ref. the fully-packing constraint leads to the condition $`𝐀+𝐁+𝐂=\mathrm{𝟎}`$, whence the height must a priori be two-dimensional. However, the RG transformation that we have just applied lifts the fully-packing constraint, due to the appearance of the bottom left vertex of Fig. 3. Defining now the sublattices with respect to the renormalised (square) lattice we have the additional constraint $`4𝐂=\mathrm{𝟎}`$, whence the coarse grained height field should really be one-dimensional<sup>4</sup><sup>4</sup>4See Ref. for similar examples of such a reduction of the dimensionality of the height field., and O($`n`$)-like behaviour is to be expected. Also note that it clearly suffices to define the microscopic heights on the octagonal plaquettes in order to obtain a continuous height field defined everywhere in $`\mathrm{𝖨𝖱}^\mathrm{𝟤}`$ by the usual coarse graining procedure . The reason that the renormalised FPL model is still interesting is that the bare vertex weights (3.1) are now some fixed functions of the loop fugacity $`n`$, rather than arbitrary parameters that can be tuned to their critical values. This constitutes an interesting situation which has not been encountered before. We shall soon see that it implies that the FPL model (2.1), unlike any other loop model studied thus far, is only critical within a part of the interval $`|n|\le 2`$. In Fig. 4 we show $`1/u_\mathrm{c}`$, the weight of the empty vertex relative to that of a turning loop segment, as a function of $`n`$ for the critical branches 1 (dilute phase) and 2 (dense phase) of the O($`n`$) model on the square lattice; cf. Eq. (2.3). In analogy with the honeycomb case the dense and dilute branches again consist of respectively attractive and repulsive fixed points. With the bare value $`1/u`$ given by Eq. (3.1) the subsequent RG flow must therefore be as schematically indicated on the figure. For $`n\simeq 1.88`$ there is an intersection between the bare value and that of the dilute branch, and for $`n>1.88`$ we can therefore expect the flow to be directed towards the high-temperature disordered phase of the O($`n`$) model. In other words, a finite correlation length (roughly the size of the largest loop in a typical configuration) is generated and the model is no longer critical. Of course we should be a little more careful, since $`u`$ is not the only parameter in the model. Whenever the bare weights (3.1) do not intersect one of the five branches of fixed points, $`v`$ and $`w`$ will flow as well. In particular, $`v`$ will in general flow towards non-zero values, since the turning loop segments always occur with finite weight, and these are clearly capable of generating straight loop segments on larger length scales. The essential point is that for $`1.88<n<2`$ empty vertices will begin to proliferate, and there is no physical mechanism for halting the flow towards the disordered phase.<sup>5</sup><sup>5</sup>5The flow cannot be towards branch 0 since this is a repulsive fixed point. The point $`n=2`$ merits special attention. Here the bare weights are $$u=w=\frac{1}{2},v=0,$$ (3.2) which coincides with the fixed point values on branch 0; see Eq. (2.4). Invoking Nienhuis’ mapping the $`n=2`$ FPL model is therefore exactly equivalent to the selfdual 9-state Potts model, which is of course again non-critical .
## 4 Transfer matrix results

In order to confirm the analytical predictions given in Section 3 we have numerically calculated effective values of the central charge $`c`$ and the thermal scaling dimension $`x_t`$ on strips of width $`L=4,6,\dots ,18`$ loop segments. To this end we adapted the connectivity basis transfer matrices described in Refs. to the square-octagon lattice. The working principle of these transfer matrices is illustrated in Fig. 1: To determine the number of loop closures induced by the addition of a new row of vertices it suffices to know the pairwise connections amongst the $`L`$ dangling ends of the top row. For $`L`$ even, the number of such connections is $$a_L=\underset{i=0}{\overset{L/2}{\sum }}\left(\genfrac{}{}{0pt}{}{L}{2i}\right)c_{L/2-i},$$ (4.1) where $`c_m=\frac{(2m)!}{m!(m+1)!}`$ are the Catalan numbers. Thus, the transfer matrix for a strip of width $`L`$ has dimensions $`a_L\times a_L`$, and a sparse matrix decomposition can be made by adding one site of the lattice at a time, rather than an entire row. The size of the largest matrix employed is given by $`a_{18}=6,536,382`$. The effective central charge $`c(L,L+4)`$ has been estimated by three-point fits of the form $$f_0(L)=f_0(\mathrm{\infty })-\frac{\pi c}{6L^2}+\frac{A}{L^4}+\mathrm{\dots }$$ (4.2) applied to the free energy per site $`f_0(L^{\prime })`$ with $`L^{\prime }=L,L+2,L+4`$. Similarly, effective values $`x_t(L,L+2)`$ of the thermal scaling dimension were found from two-point fits of the form $$f_1(L)-f_0(L)=\frac{2\pi x_t}{L^2}+\frac{B}{L^4}+\mathrm{\dots },$$ (4.3) where $`f_0(L)`$ and $`f_1(L)`$ are related to the ground state and the first excited state of the transfer matrix spectra in the usual way. The numerical results are given in Tables 1 and 2. For $`n\le 1.5`$ we see the expected convergence towards the exact values of the O($`n`$) model in the dense phase, which read $$c=1-\frac{6e^2}{1-e},\qquad x_t=\frac{2e+1}{2(1-e)}$$ (4.4) with $`e\equiv \frac{1}{\pi }\mathrm{arccos}(n/2)`$. For $`n=1.5`$ the convergence is rather slow, especially in the case of $`c`$, reflecting a large crossover length. As predicted by theory, the FPL model is no longer critical at $`n=2`$. This is particularly visible from the monotonic decrease of the $`x_t`$ estimates, which are well below the exact O($`n`$) value $`x_t=1/2`$. For a system with a finite correlation length, $`\xi <\mathrm{\infty }`$, the effective values for $`c`$ should eventually tend to zero. The fact that we observe rather large effective values is in agreement with Ref. , and rather predictable since $`\xi `$ is much greater than the largest strip width used in the simulations . For comparison we performed similar computations for the 9-state Potts model in its loop representation , finding again effective values of $`c`$ in the range 1.3–1.4.

## 5 Discussion

Having seen that two of the simplest two-dimensional lattices (square and honeycomb) give rise to distinct compact universality classes, it would be tempting to conjecture that an FPL model defined on any new lattice leads to different critical exponents and has a new CFT describing its continuum limit. In the present paper we have demonstrated that this is far from being the case. Even within the very restricted class of bipartite lattices, fulfilling the $`t\to -t`$ symmetry requirement, any lattice that can be viewed as a decorated square or honeycomb lattice is likely to flow away from the compact phase by virtue of an RG transformation analogous to the one presented in Section 3.
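As an aside, the transfer-matrix bookkeeping of Section 4 is easy to cross-check numerically: both the connectivity count of Eq. (4.1) and the exact dense-phase predictions of Eq. (4.4) can be tabulated in a few lines. The sketch below is ours; the function names are illustrative and not part of the original computation.

```python
from math import comb, acos, pi

def catalan(m):
    return comb(2 * m, m) // (m + 1)

def a(L):
    """Number of pairwise connections among L dangling ends, Eq. (4.1)."""
    assert L % 2 == 0
    return sum(comb(L, 2 * i) * catalan(L // 2 - i) for i in range(L // 2 + 1))

def dense_On(n):
    """Exact central charge and thermal dimension in the dense O(n) phase, Eq. (4.4)."""
    e = acos(n / 2) / pi
    return 1 - 6 * e**2 / (1 - e), (2 * e + 1) / (2 * (1 - e))

assert a(18) == 6536382            # dimension of the largest matrix quoted in the text

for n in (0.5, 1.0, 1.5):
    c, xt = dense_On(n)
    print(f"n = {n}:  c = {c:.4f},  x_t = {xt:.4f}")
```

For instance, $`n=1`$ gives $`e=1/3`$ and hence $`c=0`$, $`x_t=5/4`$, against which the finite-size estimates can be compared.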
Despite the curious lattice dependence of the compact phases, it thus appears that the number of distinct universality classes is very restricted. We recall that the continuum limit of all loop models solved to date can be constructed by perturbing an $`\mathrm{SU}(N)_{k=1}`$ Wess-Zumino-Witten model by exactly marginal operators and introducing an appropriate background charge . It would be most interesting to pursue the physical reason why only the cases $`N=2`$ (the O($`n`$) , Potts and six-vertex models), $`N=3`$ (the FPL model on the honeycomb lattice ), and $`N=4`$ (the two-flavoured FPL model on the square lattice ) seem to occur in practice.

The square-octagon lattice FPL model studied here turned out to be interesting in several respects. First, it provides the first example of a non-oriented , bipartite lattice for which the scaling properties of compact and dense polymers are identical. In particular, the exact value of the conformational exponent is $`\gamma =19/16`$ , indicating a rather strong entropic repulsion between the chain ends. Second, the square-octagon model presents a novel scenario in which the same fully packed loop model may renormalise towards different conformal field theories, or even flow off to a non-critical regime, depending on the value of the loop fugacity $`|n|\le 2`$. In particular, one might be able to ‘design’ a decorated lattice with bare vertex weights that simultaneously intersect those of the dilute O($`n`$) phase for some value of $`n`$. This could be a starting point for gaining a microscopic, geometrical understanding of the Coulomb gas charge asymmetry which was shown in Ref. to distinguish between the dense and dilute phases of the O($`n`$) model. Finally, our model proves that the scaling properties of compact polymers do not depend exclusively on either bond angles or coordination number, but rather on a combination of these two parameters.

Acknowledgments

The author is greatly indebted to J. Kondev for many valuable comments and suggestions, and would like to thank Saint Maclou for inspiration during the initial stages of this project.
# High Rayleigh number turbulent convection in a gas near the gas-liquid critical point

## Abstract

$`SF_6`$ in the vicinity of its critical point was used to study turbulent convection up to exceptionally high Rayleigh numbers, $`Ra`$ (up to $`5\times 10^{14}`$), and to verify for the first time the generalized scaling laws for the heat transport and the large-scale circulation velocity as functions of $`Ra`$ and the Prandtl number, $`Pr`$, in a very wide range of these parameters. Both scaling laws obtained are consistent with the theoretical predictions of B. Shraiman and E. Siggia, Phys. Rev. A 42, 3650 (1990).

Turbulent convection has recently attracted a lot of attention due to the possibility of tuning and controlling the relevant parameters with unmatched precision. The fundamental aspect of turbulent convection is the competition between buoyancy and shear . An important issue in our understanding of convective turbulence is the experimental test of theoretical predictions for the Prandtl number, $`Pr`$, dependence of global characteristics, such as the nondimensional heat transport, $`Nu(Ra,Pr)`$, and the Reynolds number of the large-scale circulation flow, $`Re(Ra,Pr)`$. The difficulty of measuring the $`Pr`$ dependence in turbulent convection arises from the fact that in conventional fluids $`Pr`$ cannot be varied substantially except by changing the fluid, which is not always a simple task. The verification of these scaling relations is a crucial test of the theory. In this Letter we present results on the $`Pr`$ dependence of global properties of the flow, namely the global heat transport, characterized by $`Nu`$, and the large-scale circulation velocity, characterized by $`Re`$.

There are two ways to reach the high-$`Ra`$ convection regime: either to increase the temperature difference across the cell, or to vary the physical parameters which appear in the expression $`Ra=\frac{g\alpha \mathrm{\Delta }TL^3}{\kappa \nu }`$, where $`\mathrm{\Delta }T`$ is the temperature difference across the cell, $`g`$ is the gravitational acceleration, $`\alpha `$ is the thermal expansion coefficient, and $`\kappa `$ and $`\nu `$ are the thermal diffusivity and kinematic viscosity, respectively. The latter possibility was used successfully in compressed gases ($`He`$ and $`SF_6`$) at an almost constant value of $`Pr`$ of about one. Another system in which this method of varying $`Ra`$ over a wide range can be used is a gas near its gas-liquid critical point (CP). It was realized a long time ago that heat transport is enhanced dramatically near $`T_c`$ . But it was only recently shown experimentally that the critical temperature difference for the convection onset decreases drastically as $`T_c`$ is approached, due to strong variations of the thermodynamic and kinetic properties of the gas in the vicinity of the CP . Singular behaviour of the thermodynamic and kinetic properties of the fluid near $`T_c`$ provides the opportunity both to reach extreme values of the control parameter $`Ra`$ and to scan $`Pr`$ over an extremely wide range . All these features make the system unique in this respect. However, the most exciting aspect of the system is the possibility of performing Laser Doppler velocimetry (LDV) measurements, which we recently demonstrated . The small temperature differences used to reach high $`Ra`$ lead to rather small refractive-index fluctuations in the flow, which makes it possible to use a standard LDV technique.
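For orientation, here is a minimal sketch of the control-parameter bookkeeping. The property values are illustrative placeholders for a near-critical gas, not the calibrated values used in the experiment; only the definitions of $`Ra`$ and $`Pr`$ are taken from the text:

```python
# Minimal sketch: Rayleigh and Prandtl numbers from fluid properties.
g = 9.81        # m/s^2, gravitational acceleration
L = 0.105       # m, cell height (from the apparatus description below)

def rayleigh(alpha, dT, kappa, nu):
    """Ra = g * alpha * dT * L^3 / (kappa * nu)."""
    return g * alpha * dT * L**3 / (kappa * nu)

def prandtl(nu, kappa):
    """Pr = nu / kappa."""
    return nu / kappa

# PLACEHOLDER magnitudes (near the CP, alpha diverges and kappa vanishes):
alpha = 1.0e-1  # 1/K, thermal expansion coefficient
kappa = 1.0e-9  # m^2/s, thermal diffusivity
nu = 3.0e-8     # m^2/s, kinematic viscosity
dT = 0.05       # K, applied temperature difference

print(f"Ra = {rayleigh(alpha, dT, kappa, nu):.2e}")  # ~1.9e12
print(f"Pr = {prandtl(nu, kappa):.0f}")              # ~30
```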
On the other hand, the problem of seeding particles in a closed flow of gas, where they coagulate and sediment rather quickly, turns out to be unsolvable at present. Fortunately, we discovered that the existence of the critical density fluctuations makes it possible to perform LDV measurements of the velocity field in a rather wide range of closeness to the CP, between $`3\times 10^{-4}`$ and $`10^{-2}`$ in the reduced temperature $`\tau =(\overline{T}-T_c)/T_c`$, where $`\overline{T}`$ is the mean cell temperature. The upper limit is set by the small scattering amplitude, and the lower limit by multiple scattering in the large convection cell . At the same time, these obvious advantages come together with limitations and new features which have to be taken into account and studied. The strong dependence of the gas properties on the closeness to $`T_c`$ manifests itself in a nonuniform density distribution in the gravitational field and in the variation of the coefficients of the Navier-Stokes equation with temperature and density. The former, the well-known gravity effect, can be significant even at $`\tau \simeq 10^{-3}`$ in a 10-cm-high cell: the density difference across the cell reaches 1%, which leads to rather significant variations in the fluid properties. Fortunately, a temperature gradient compensates gravity when heating from below, and strong convection reduces the characteristic size of the nonuniformity to about the boundary-layer height. This reduces the density nonuniformity to a tolerable level, well below 0.1%.

In the Boussinesq approximation the fluid properties are assumed to be constant despite the temperature gradient across the cell, except in the buoyancy term. According to , the degree of deviation from the Boussinesq approximation is adequately described by the ratio of the temperature drop across the top boundary layer to the temperature drop across the bottom boundary layer. As the experiment shows , significant deviations in the measurements of the global transport properties of turbulent convection occur if this ratio falls below 0.5. According to our estimates, measurements at $`Ra`$ up to $`10^{15}`$ can be made while non-Boussinesq effects are still relatively small. In the data presented here the temperature-drop ratio was above 0.7, so only small deviations from the scaling behaviour were observed, at the highest $`Pr`$ and $`Ra`$. There are other aspects related to the proximity to $`T_c`$, such as compressibility (besides the adiabatic gradient) and the breakdown of the hydrodynamic description due to the macroscopic size of the thermodynamic fluctuations. Simple estimates show that neither of these factors plays a significant role in the range of parameters under study .

In contrast to compressed-gas convection, here one cannot cover a wide range of $`Ra`$ at a single value of $`Pr`$. On the other hand, the data presented cover the range of $`Ra`$ from $`10^{10}`$ at low values of $`Pr`$ ($`Pr=8`$), far from $`T_c`$ and at $`\mathrm{\Delta }T=12mK`$, up to $`Ra=5\times 10^{14}`$, the largest attainable in a laboratory, at high values of $`Pr`$ close to $`T_c`$. The whole range of the reduced mean temperature was $`4\times 10^{-2}>\tau >2\times 10^{-4}`$. Finally, as a result of the large compressibility and the relatively large cell height, an adiabatic temperature gradient was observed. This effect is discussed below.

The experiment presented here was done with high-purity $`SF_6`$ gas (99.998%) in the vicinity of $`T_c`$ and at the critical density ($`\rho _c=730kg/m^3`$).
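A trivial sketch of the non-Boussinesq bookkeeping just described; the threshold values are the ones quoted above, the function names are ours:

```python
def boussinesq_ratio(dT_top: float, dT_bottom: float) -> float:
    """Ratio of the temperature drop across the top boundary layer to that
    across the bottom one; 1.0 for a perfectly Boussinesq flow."""
    return dT_top / dT_bottom

def acceptable(ratio: float, threshold: float = 0.5) -> bool:
    # Significant deviations in global transport are reported once the
    # ratio falls below ~0.5; the data kept here had ratio > 0.7.
    return ratio > threshold

print(acceptable(boussinesq_ratio(0.8, 1.0)))  # True: mild asymmetry
print(acceptable(boussinesq_ratio(0.4, 1.0)))  # False: strongly non-Boussinesq
```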
This fluid was chosen for its relatively low critical temperature ($`T_c=318.73`$ K) and pressure ($`P_c=37.7`$ bar) and for its well-known thermodynamic and kinetic properties, both far from and in the vicinity of the CP. This gas has been widely used to study equilibrium critical phenomena. The cell is a box of cross-section $`76\times 76`$ $`mm^2`$ formed by 4 mm plexiglass walls, which are sandwiched between a Ni-plated, mirror-polished copper bottom plate and a 19 mm thick sapphire top plate, $`L=105`$ mm apart. The cell is placed inside a pressure vessel with two thick plastic side windows that withstand pressure differences up to 100 bar. The cell thus has optical access from above, through the sapphire window, and from the sides; both were used for shadowgraph flow visualization and for LDV. The pressure vessel was placed inside a water bath stabilized to rms temperature fluctuations of 0.4 $`mK`$. The gas pressure was continuously measured with 1 $`mbar`$ resolution by an absolute pressure gauge. Together with a calibrated 100 $`\mathrm{\Omega }`$ platinum resistance thermometer, this provided the thermodynamic scale used to define the critical parameters of the fluid and then to apply a parametric equation of state developed recently for $`SF_6`$.

We should point out that it is not at all obvious that the critical parameters, defined under equilibrium conditions, can be used to define the closeness to the CP in a strongly non-equilibrium state. On the contrary, as suggested by theory and experiment, turbulent flow in a binary mixture near its consolute CP can suppress the critical concentration fluctuations. At $`Re`$ comparable with that achieved in our experiment, Pine et al. observed a depression of the critical temperature of up to 50 $`mK`$. We used the enhancement of the heat transport (or the decrease in the temperature difference across the cell at fixed heat flux) near the CP as an indicator of the closeness to the CP. The critical pressure and temperature obtained by this procedure agree within $`\pm 10mK`$ with those determined independently by light scattering under equilibrium conditions.

The heat transport measurements were conducted at fixed heat flux and fixed temperature of the top sapphire plate, while the bottom temperature was measured by a calibrated thermistor epoxied into it. Local temperature measurements in the gas were made by three $`125\mu m`$ thermistors suspended on glass fibers in the interior of the cell (one at the center, and two about halfway from the wall). Local measurements of the vertical velocity component, at about $`L/4`$ from the bottom plate, were conducted using LDV on the critical density fluctuations. Shadowgraph visualization was used mainly to obtain qualitative information about structures and characteristic time and length scales in the flow, mostly in the top and bottom boundary layers.

From the measurements of the heat transport and, particularly, of the velocity we realized that at small but finite temperature differences, much larger than that defined by the critical $`Ra`$ for the convection onset, there exists a mechanically stable state. This temperature difference $`\mathrm{\Delta }T_{ad}`$ is set by the adiabatic temperature gradient, which results from the fluid compressibility and is defined as $`\mathrm{\Delta }T_{ad}/L=g\alpha TC_p^{-1}`$, where $`C_p`$ is the heat capacity at constant pressure.
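The saturation of this gradient near the CP (used below) follows from standard thermodynamics; a brief sketch, assuming only the identities $`C_p-C_v=TV\alpha (\partial P/\partial T)_V`$ and $`\alpha =\kappa _T(\partial P/\partial T)_V`$ together with $`C_p\gg C_v`$ close to the critical point ($`V`$ is the specific volume, $`\rho =1/V`$):
$$\frac{g\alpha T}{C_p}\simeq \frac{g\alpha T}{TV\alpha \left(\partial P/\partial T\right)_V}=\frac{g\rho }{\left(\partial P/\partial T\right)_V},$$
which is the finite limiting value quoted in the next paragraph.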
As follows from simple thermodynamics and the critical divergences of the thermodynamic parameters as $`T`$ approaches $`T_c`$, the adiabatic temperature gradient saturates at a finite value $`\mathrm{\Delta }T_{ad}/L=g\rho \left(\partial P/\partial T\right)_V^{-1}`$, which for $`SF_6`$ and $`L=10.5`$ cm gives $`\mathrm{\Delta }T_{ad}=9.5mK`$ for $`\tau <10^{-2}`$. So with the available temperature stability and resolution it can be measured. Then, as shown in , $`Ra`$ should be modified as $`Ra=\frac{gL^3\alpha (\mathrm{\Delta }T-\mathrm{\Delta }T_{ad})}{\nu \kappa }`$.

The most sensitive probe for detecting the convection onset in our case was the local velocity measurements. The results of the measurements of the mean vertical velocity $`V_m`$ and the rms of the vertical velocity fluctuations $`V_s`$ at $`\tau =8\times 10^{-4}`$ are presented in Fig. 1. Both signals show a clear transition at $`\mathrm{\Delta }T_{ad}=9.5\pm 0.5mK`$, which agrees well with the theoretical value . These measurements were done for several values of the reduced temperature in the range $`8\times 10^{-3}>\tau >2\times 10^{-4}`$. It was found that $`\mathrm{\Delta }T_{ad}(\tau )`$ is constant and independent of $`\tau `$, in agreement with the theory. This value of $`\mathrm{\Delta }T_{ad}`$ was used further to correct the $`Ra`$ values. As a result of this correction, for each $`Pr`$ both the $`Nu`$ vs $`Ra`$ (Fig. 2) and the $`V_s`$ vs $`Ra`$ (Fig. 3) dependences show scaling laws in a rather wide range of $`Ra`$.

The heat transport measurements were done in the range $`1.6\times 10^{-2}>\tau >2\times 10^{-4}`$. In order to keep $`Pr`$ constant, $`\tau `$ was kept constant while $`\mathrm{\Delta }T`$ was changed during the measurements. The range of $`\tau `$ was limited at the high-temperature end by the mechanical stability of the construction materials (mostly plexiglass) and at the low end by the temperature stabilization of the system. Heat transport measurements were also conducted far from the CP at $`P=20`$ bar, $`\overline{T}=303`$ K and density $`\rho =0.18g/cm^3`$, which corresponds to $`Pr=0.9`$, and at $`P=50`$ bar, $`\overline{T}=323`$ K and $`\rho =1.07g/cm^3`$, which corresponds to $`Pr=1.5`$. The data far from the CP cover the range of $`Ra`$ between $`10^9`$ and $`5\times 10^{12}`$, with temperature differences across the cell from about $`0.1K`$ to $`10K`$. By making all the appropriate corrections for heat losses through the lateral walls, the gas and insulation outside the cell, and for the temperature drop across the top sapphire plate, we found that the heat transport data far from the CP are in good agreement with the 2/7 law for $`10^9\le Ra\le 5\times 10^{12}`$. These data, combined with the heat transport measurements in the same cell in the vicinity of the CP for higher values of $`Pr`$, can be scaled and presented in the power-law form $`Nu=0.22Ra^{0.3\pm 0.03}Pr^{-0.2\pm 0.04}`$ (Fig. 4). Just a few points at the highest $`Pr`$ and $`Ra`$ deviate from this scaling, due to the non-Boussinesq effect. This scaling is consistent with the predictions of Ref. .

We visualized the structure of the large-scale flow using the shadowgraph technique through the top and side windows. Large-diameter imaging optics with a narrow focal depth enabled us to visualize narrow slices across the cell and to scan it in both the vertical and horizontal directions. The flow was recorded at various $`Ra`$ and $`Pr`$. This visualization confirmed the picture of an up- and down-going circulating jet flow along the cell diagonal, which forms the top and bottom turbulent boundary layers. The bulk of the cell appears homogeneous.
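A short numerical sketch of the adiabatic correction described above. The value of $`(\partial P/\partial T)_V`$ is an assumption chosen to reproduce the quoted saturated gradient; it is not given in the text:

```python
# Sketch of the adiabatic-gradient correction near the critical point.
g = 9.81        # m/s^2, gravitational acceleration
L = 0.105       # m, cell height
rho_c = 730.0   # kg/m^3, critical density of SF6 (quoted above)

# ASSUMPTION: slope of the critical isochore, ~0.8 bar/K, chosen to
# reproduce the measured saturated value; not quoted in the text.
dPdT_V = 0.8e5  # Pa/K

dT_ad = g * rho_c * L / dPdT_V
print(f"saturated dT_ad = {dT_ad * 1e3:.1f} mK")  # ~9.4 mK vs measured 9.5 mK

def corrected_Ra(alpha, dT, nu, kappa):
    # Ra built on the temperature difference in excess of the adiabatic one
    return g * L**3 * alpha * (dT - dT_ad) / (nu * kappa)
```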
The turbulent character of the flow in the boundary layers is seen as rapid and “violent” horizontal motions . The circulation frequency, which corresponds to the travel time of a fluid element through one cycle of the large eddy, is observed as a peak in the power spectra of both the velocity and the temperature fluctuations (see the inset in Fig. 5). We also measured the large-scale circulation velocity directly by LDV. However, the best results were obtained by extracting the peak frequency from the velocity power spectra at various $`Ra`$ and $`Pr`$, which can be presented in the power-law form $`Re=2.6Ra^{0.43\pm 0.02}Pr^{-0.75\pm 0.02}`$ (Fig. 5). Here $`Re=4f_pL^2/\nu `$, and $`f_p`$ is the peak frequency. The data from the low-frequency peak in the temperature spectra, both far from and close to the CP, and from the LDV measurements are consistent with this scaling law. We would like to point out that $`Re`$ reaches values up to $`10^5`$ at the highest values of $`Ra`$. The dependence of $`Re`$ on $`Pr`$ has almost the same exponent (within the measurement uncertainty) as the one found recently in convection in helium in a much narrower range of $`Pr`$, and agrees rather well with the theoretical prediction $`-5/7`$ of Ref. . However, the scaling of $`Re`$ with $`Ra`$ differs significantly: the exponent is close to 3/7 rather than 0.5.

We can also verify the consistency of the exponents of the two scaling laws by using one of them, e.g. that for $`Re`$, together with the exact relation for the dissipation in a bulk turbulent regime, $`PrRa(Nu-1)\simeq Re^3Pr^3`$. Indeed, inserting $`Re\sim Ra^{0.43}Pr^{-0.75}`$ gives $`Nu-1\sim Re^3Pr^2/Ra\sim Ra^{0.29}Pr^{-0.25}`$. The resulting scaling relation $`Nu\sim Ra^{0.29}Pr^{-0.25}`$ is consistent with that found experimentally. Together with the unified scaling law for both transport mechanisms in the whole range of $`Ra`$ and $`Pr`$ under study, this suggests that a single mechanism is responsible for these scaling laws. Moreover, our data did not indicate any signature of the transition to the asymptotic regime in the heat transport, discovered recently at relatively low $`Pr`$ and $`Ra>10^{11}`$. There are two possible explanations for this fact: either at higher $`Pr`$ the transition occurs at higher $`Ra`$, or the scatter in the data for different $`Pr`$ does not allow us to observe the transition.

In conclusion, we have presented for the first time the generalized scaling laws for the heat transport and the large-scale flow with respect to $`Ra`$ and $`Pr`$, over a wide range of these parameters. Both scaling laws are consistent with the theoretical predictions of . We believe that, in the light of the recent theoretical reconsiderations of these scaling laws , these data provide new insight into turbulent convection.

This work was partially supported by the Minerva Foundation and the Minerva Center for Nonlinear Physics of Complex Systems. VS is grateful for the support of the Alexander von Humboldt Foundation.