no-problem/9905/quant-ph9905080.html | ar5iv | text
# Finite Precision Measurement Nullifies the Kochen-Specker Theorem
25 March 1999, revised 19 September 1999, quant-ph/9905080
David A. Meyer
Project in Geometry and Physics
Department of Mathematics
University of California/San Diego
La Jolla, CA 92093-0112
dmeyer@chonji.ucsd.edu
and Institute for Physical Sciences
Los Alamos, NM
ABSTRACT
Only finite precision measurements are experimentally reasonable, and they cannot distinguish a dense subset from its closure. We show that the rational vectors, which are dense in $`S^2`$, can be colored so that the contradiction with hidden variable theories provided by Kochen-Specker constructions does not obtain. Thus, in contrast to violation of the Bell inequalities, no quantum-over-classical advantage for information processing can be derived from the Kochen-Specker theorem alone.
1999 Physics and Astronomy Classification Scheme: 03.65.Bz, 03.67.Hk, 03.67.Lx.
American Mathematical Society Subject Classification: 81P10, 03G12, 68Q15.
Key Words: quantum computation, Kochen-Specker theorem, hidden variable theories, quantum communication complexity.
Recent theoretical and experimental work on quantum computation and quantum information theory has inspired renewed interest in fundamental results of quantum mechanics. The Horodeccy have shown, for example, that a spin-$`\frac{1}{2}`$ state can be teleported with greater than classical fidelity using any mixed two spin-$`\frac{1}{2}`$ state which violates some generalized Bell-CHSH inequality . Quantum teleportation was first demonstrated experimentally in quantum optics systems ; the parametric down-conversion techniques crucial for these experiments have also been used to verify violation of Bell’s inequality directly . Although the Bell-CHSH inequalities were originally derived in the context of EPR-B experiments and (local) hidden variable theories , the present concern is with the differences in information processing capability between classical and quantum systems.
See, for example, the recent discussion of separability in NMR experiments ; at issue is whether these perform or merely simulate quantum computations.
Analyses of EPR-B experiments from the very first have been concerned with limitations in, for example, detector efficiency : The observed violations of Bell-CHSH inequalities are consequently reduced; so, too, is teleportation fidelity .
Logically, if not entirely chronologically, prior contradictions with (noncontextual) hidden variable theories were derived by Bell from a theorem of Gleason and by Kochen and Specker . The GHZ-Mermin three spin-$`\frac{1}{2}`$ state exhibits a similar incompatibility with (noncontextual) hidden variable theories and reduces the communication complexity of certain problems . While no quantum improvement in information processing power has yet been derived directly from the Kochen-Specker theorem, it is natural to ask about the consequences of experimental limitations on measurement in this setting.
The Kochen-Specker theorem concerns the results of a (counterfactual) set of measurements on a quantum system described by a vector in a three dimensional Hilbert space. Kochen and Specker consider, for example, measurement of the squares of the three angular momentum components of a spin-1 state . The corresponding operators commute and can be measured simultaneously, providing one ‘yes’ and two ‘no’s to the three questions, “Does the spin component along $`\widehat{a}`$, $`\widehat{b}`$, $`\widehat{a}\times \widehat{b}`$ vanish?” for any $`\widehat{a}\perp \widehat{b}\in S^2`$, the unit sphere in $`\mathbb{R}^3`$. Specker and Bell observed that Gleason’s theorem implies that there can be no assignment of ‘yes’s and ‘no’s to the vectors of $`S^2`$ consistent with this requirement:
each triad is ‘colored’ with one ‘yes’ and two ‘no’s $`(1)`$
(where triad means three mutually orthogonal vectors) and concluded that there could be no theory with hidden variables assigned independently of the measurement context.
A compactness argument implies that there must be a finite set of triads for which there is no coloring satisfying (1). Kochen and Specker gave the first explicit construction of such a finite set . Their construction requires 117 vectors; subsequently examples with 33 and 31 vectors in $`S^2`$ have been constructed.
For our present purposes the details of these constructions are unimportant; all that matters is that the vectors forming the set of triads are precisely specified. But, as Birkhoff and von Neumann remark in their seminal study of the lattice of projections in quantum mechanics, “it seems best to assume that it is the Lebesgue-measurable subsets … which correspond to experimental propositions, two subsets being identified, if their difference has Lebesgue-measure 0.” \[19, p.825\] That is, no experimental arrangement could be aligned to measure spin projections along coordinate axes specified with more than finite precision. The triads of a Kochen-Specker construction should therefore be constrained only to lie within some (small) neighborhoods of their ideal positions. This is sufficient to nullify the Kochen-Specker theorem because, as we will show presently, there is a coloring of the vectors of a set of triads, dense in the space of triads, which respects (1). More complicated colorings satisfying (1) ‘almost everywhere’ have been constructed by Pitowsky using the axiom of choice and the continuum hypothesis ; our results here support a conjecture of his that many dense subsets—in particular, the rational vectors—have colorings which satisfy (1) , but we will need no more than constructive set theory.
The finite constructions violating (1) provide the clue we use: in each case the components of some of the vectors forming triads are irrational numbers. So let us consider only the vectors with rational components, $`S^2\cap \mathbb{Q}^3`$. This is a familiar subset: the usual requirement of separability for Hilbert space and for the lattice of measurement propositions depends upon such a countable dense subset . (To avoid possible confusion, we remark that this is a concept distinct from that of separability of density matrices .) The fact that it is dense means that it is indistinguishable from its closure by finite precision measurements. As Jauch puts it, while the rationals must already be defined with infinite precision, completing them to include the irrationals requires that “we transcend the proximably observable facts and … introduce ideal elements into the description of physical systems.” \[23, p.75\] Surely the meaning of quantum mechanics should not rest upon such non-experimental entities. But, at least in the three dimensional arena for the Kochen-Specker theorem, it does, as we will be able to conclude from the following three lemmas:
LEMMA 1. The rational vectors $`S^2\cap \mathbb{Q}^3`$ can be colored to satisfy (1).
Proof. This is an immediate consequence of a result of Godsil and Zaks , which is in turn based upon a theorem of Hales and Straus . It suffices here to give an explicit coloring using their results. The rational projective plane $`\mathbb{Q}P^2`$ consists of triples of integers $`(x,y,z)`$ with no common factor other than 1 (every integer is taken to divide 0). Since at least one of $`x`$, $`y`$ and $`z`$ must therefore be odd, and since odd (even) numbers square to 1 (0) modulo 4, exactly one must be odd if $`x^2+y^2+z^2`$ is to be a square. In this case, and only in this case, $`(x,y,z)\in \mathbb{Q}P^2`$ can be identified as a vector in $`S^2\cap \mathbb{Q}^3`$. Consider a triad of such vectors. For any two, $`(x,y,z)`$ and $`(x^{\prime },y^{\prime },z^{\prime })`$, $`x^{\prime }x+y^{\prime }y+z^{\prime }z=0`$ implies that they must differ in which component is odd. Thus exactly one vector of any triad has an odd $`z`$ component. Color this one ‘yes’ and the other two ‘no’. This defines a $`z`$-parity coloring of $`S^2\cap \mathbb{Q}^3`$ satisfying (1).
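The parity argument is easy to check computationally. The following is a minimal sketch of ours (not from the paper), assuming Python; the two example triads, given as primitive integer triples, are our own. It verifies mutual orthogonality and that the $`z`$-parity coloring assigns exactly one ‘yes’ per triad.

```python
from math import gcd, isqrt

def is_rational_unit(v):
    """Primitive integer triple (x, y, z) whose squared length is a perfect
    square; then v / |v| is a rational point of S^2."""
    x, y, z = v
    n = x * x + y * y + z * z
    return gcd(gcd(abs(x), abs(y)), abs(z)) == 1 and isqrt(n) ** 2 == n

def color(v):
    """z-parity coloring: 'yes' iff the z component is odd."""
    return "yes" if v[2] % 2 == 1 else "no"

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def check_triad(triad):
    """Orthogonal rational triad with exactly one vector colored 'yes'."""
    a, b, c = triad
    assert all(is_rational_unit(v) for v in triad)
    assert dot(a, b) == dot(a, c) == dot(b, c) == 0
    return [color(v) for v in triad].count("yes") == 1

# example triads: (1,2,2)/3, (2,1,-2)/3, (2,-2,1)/3 and (1,0,0), (0,3,4)/5, (0,4,-3)/5
print(check_triad([(1, 2, 2), (2, 1, -2), (2, -2, 1)]))  # True
print(check_triad([(1, 0, 0), (0, 3, 4), (0, 4, -3)]))   # True
```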
The rational vectors are dense in $`S^2`$ since $`\mathbb{Q}^2`$ is dense in $`\mathbb{R}^2`$ and rational vectors in $`S^2`$ map bijectively to rational points in the affine plane (stereographic projection being a birational map). Furthermore:
LEMMA 2. The rational vectors $`z`$-parity colored ‘yes’ are dense in $`S^2`$.
Proof. Again we follow the argument of Godsil and Zaks : Rotation by angle $`\alpha =\mathrm{arccos}\frac{3}{5}`$ about the $`x`$-axis takes each rational vector $`(0,y,z)`$ with odd $`z`$ (i.e., colored ‘yes’) to another rational vector colored ‘yes’. Since $`\alpha `$ is not a rational multiple of $`\pi `$, iterated rotation takes $`(0,0,1)`$ to a dense set of vectors in the $`x=0`$ great circle of $`S^2`$. Similarly, iterated rotation by angle $`\alpha `$ around the $`z`$-axis takes this set of vectors dense in $`S^1`$ to a set of vectors dense in $`S^2`$, each of which is colored ‘yes’ since it has odd $`z`$-component.
Repeating this argument with $`(x,y,z)`$ permuted to $`(y,z,x)`$ shows that the rational vectors $`z`$-parity colored ‘no’ are also dense in $`S^2`$.
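A quick numerical illustration of the density mechanism in the proof of Lemma 2 (a sketch of ours, assuming Python's exact rational arithmetic): the rotation about the $`x`$-axis with $`\mathrm{cos}\alpha =3/5`$, $`\mathrm{sin}\alpha =4/5`$ maps rational vectors to rational vectors, and every iterate of $`(0,0,1)`$ keeps an odd $`z`$ component, i.e. stays colored ‘yes’.

```python
from fractions import Fraction as F
from math import gcd
from functools import reduce

def rotate_x(v):
    """Rotation about the x-axis by alpha = arccos(3/5): exact over the rationals."""
    x, y, z = v
    return (x, F(3, 5) * y - F(4, 5) * z, F(4, 5) * y + F(3, 5) * z)

def primitive(v):
    """Reduce a rational vector to its primitive integer triple in QP^2."""
    d = reduce(lambda a, b: a * b // gcd(a, b), (c.denominator for c in v), 1)
    ints = [int(c * d) for c in v]
    g = reduce(gcd, (abs(i) for i in ints))
    return tuple(i // g for i in ints)

v = (F(0), F(0), F(1))
for _ in range(5):
    v = rotate_x(v)
    p = primitive(v)
    print(p, "yes" if p[2] % 2 == 1 else "no")
# every iterate has an odd z component, so it remains colored 'yes'
```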
LEMMA 3. The rational triads are dense in the space of triads.
Proof. By the proof of Lemma 2, for any $`\epsilon >0`$, within a $`\frac{1}{2}\epsilon `$-neighborhood of a specified vector $`\widehat{a}`$ of a triad $`\widehat{a}`$, $`\widehat{b}`$, $`\widehat{a}\times \widehat{b}`$, there is a rational vector $`\widehat{u}`$ to which $`(0,0,1)`$ is mapped by an SO$`(3,\mathbb{Q})`$ rotation. This rotation maps the rational vectors $`(x,y,0)`$ on the equator to the rational vectors in a great circle passing through the $`\frac{1}{2}\epsilon `$-neighborhoods of $`\widehat{b}`$ and $`\widehat{a}\times \widehat{b}`$. Since the rational points are dense in the equator (also a consequence of the proof of Lemma 2), there is a rational vector $`\widehat{v}\perp \widehat{u}`$ in the $`\frac{1}{2}\epsilon `$-neighborhood of $`\widehat{b}`$, and thus $`\widehat{u}\times \widehat{v}`$ is a rational vector in the $`\epsilon `$-neighborhood of $`\widehat{a}\times \widehat{b}`$.
Suppose we measure some triad in a three dimensional Kochen-Specker construction. By Lemma 3 the unavoidable finite precision of this measurement cannot distinguish it from the (many) rational triads within some neighborhood of the intended triad. By Lemmas 1 and 2 the results of a (counterfactual) set of such measurements cannot conflict with (1), and so cannot rule out a noncontextual hidden variable theory defined over the rationals. Thus finite precision measurement nullifies the Kochen-Specker theorem. The $`z`$-parity coloring of $`S^2\cap \mathbb{Q}^3`$ shows that arguments such as Bell’s , based on Gleason’s theorem in three dimensions, also fail when the finite precision of measurement is taken into account.
Kent has recently generalized the results of this paper to show that similar constructions produce ‘colorings’ of dense subsets satisfying the analogue of (1) in all higher dimensional real or complex Hilbert spaces as well. Although our explicit construction involves the rational vectors, we emphasize that they are incidental to the interpretation of this result. Any dense subset is indistinguishable by finite precision measurement from its completion, so any colorable dense subset would do equally well. Our results, together with Pitowsky’s earlier and Kent’s subsequent constructions indicate that there are many such subsets.
We conclude by remarking that one might object that, since the counterfactual measurements specified by a Kochen-Specker construction are not (simultaneously) experimentally realizable, it is unreasonable to impose the experimental limitation of finite precision on such a theoretical edifice. But theoretical analyses of the power of algorithms must address the possibility that it resides in infinite precision specification of the computational states or the operations on them. Schönhage showed, for example, that classical computation with infinite precision real numbers would solve NP-complete problems efficiently . And, as Freedman has emphasized, even classical statistical mechanics models would solve #P-hard problems were infinite precision measurement possible . The promise of quantum computation, in contrast, is efficient algorithms—which require only poly(log) bits of precision—for problems not known to have polynomial time classical solutions . Thus, despite the relation noted earlier with the GHZ-Mermin state, which can reduce communication complexity, the elementary argument presented here shows that, given the finite precision of any experimental measurement, the Kochen-Specker theorem alone cannot separate quantum from classical information processing in three dimensional Hilbert space. We have not, of course, constructed even a static (much less a dynamic) hidden variable theory for a spin-1 particle, so we have not proved that no separation result is possible—only that the Kochen-Specker theorem does not imply one, as we might have expected. Our results, and Pitowsky’s deterministic model , however, make it seem unlikely that any separation exists.
Confirming this conjecture, Clifton and Kent have recently extended the results of this paper to construct a noncontextual hidden variable model for finite precision measurements in any finite dimensional Hilbert space.
Acknowledgements
I thank Peter Doyle, Michael Freedman, Chris Fuchs, Asher Peres, Jeff Rabin and especially Rafael Sorkin for useful discussions, Chris Godsil and Joseph Zaks for providing reference , and Philippe Eberhard for suggestions which improved the exposition. This work was partially supported by ARO grant DAAG55-98-1-0376.
References
R. Horodecki, M. Horodecki and P. Horodecki, “Teleportation, Bell’s inequalities and inseparability”, Phys. Lett. A 222 (1996) 21–25.
D. Bouwmeester, J.-W. Pan, K. Mattle, M. Eibl, H. Weinfurter and A. Zeilinger, “Experimental quantum teleportation”, Nature 390 (1997) 575–579; D. Boschi, S. Branca, F. De Martini, L. Hardy and S. Popescu, “Experimental realization of teleporting an unknown pure quantum state via dual classical and Einstein-Podolsky-Rosen channels”, Phys. Rev. Lett. 80 (1998) 1121–1125.
Z. Y. Ou and L. Mandel, “Violation of Bell’s inequality and classical probability in a two-photon correlation experiment”, Phys. Rev. Lett. 61 (1988) 50–53; Y. H. Shih and C. O. Alley, “New type of Einstein-Podolsky-Rosen-Bohm experiment using pairs of light quanta produced by optical parametric down conversion”, Phys. Rev. Lett. 61 (1988) 2921–2924; T. E. Kreiss, Y. H. Shih, A. V. Sergienko and C. O. Alley, “Einstein-Podolsky-Rosen-Bohm experiment using pairs of light quanta produced by type-II parametric down-conversion”, Phys. Rev. Lett. 71 (1993) 3893–3897.
A. Einstein, B. Podolsky and N. Rosen, “Can quantum-mechanical description of physical reality be considered complete?”, Phys. Rev. 47 (1935) 777–780; D. Bohm, Quantum Theory (New York: Prentice-Hall 1951).
J. S. Bell, “On the Einstein-Podolsky-Rosen paradox”, Physics 1 (1964) 195–200.
J. F. Clauser, M. A. Horne, A. Shimony and R. A. Holt, “Proposed experiment to test local hidden-variable theories”, Phys. Rev. Lett. 23 (1969) 880–884.
S. L. Braunstein, C. M. Caves, R. Jozsa, N. Linden, S. Popescu and R. Schack, “Separability of very noisy mixed states and implications for NMR quantum computing”, Phys. Rev. Lett. 83 (1999) 1054–1057; R. Laflamme, in Quick Reviews in Quantum Computation and Information, http://quantum-computing.lanl.gov/qcreviews/qc/, 15 January 1999; R. Schack and C. M. Caves, “Classical model for bulk-ensemble NMR quantum computation”, quant-ph/9903101.
C. S. Wu and I. Shaknov, “The angular correlation of scattered annihilation radiation”, Phys. Rev. 77 (1950) 136; C. A. Kocher and E. D. Commins, “Polarization correlations of photons emitted in an atomic cascade”, Phys. Rev. Lett. 18 (1967) 575–577; A. Aspect, P. Grangier and G. Roger, “Experimental tests of realistic local theories via Bell’s theorem”, Phys. Rev. Lett. 47 (1981) 460–463.
P. Kok and S. L. Braunstein, “On quantum teleportation using parametric down-conversion”, quant-ph/9903074.
J. S. Bell, “On the problem of hidden variables in quantum mechanics”, Rev. Mod. Phys. 38 (1966) 447–452.
A. M. Gleason, “Measures on the closed subspaces of a Hilbert space”, J. Math. Mech. 6 (1957) 885–893.
S. Kochen and E. P. Specker, “The problem of hidden variables in quantum mechanics”, J. Math. Mech. 17 (1967) 59–87.
D. M. Greenberger, M. A. Horne and A. Zeilinger, “Going beyond Bell’s theorem”, in M. Kafatos, ed., Bell’s Theorem, Quantum Theory and Conceptions of the Universe (Boston: Kluwer 1989) 69–72; N. D. Mermin, “What’s wrong with these elements of reality”, Phys. Today 43 (June 1990) 9–11.
R. Cleve and H. Buhrman, “Substituting quantum entanglement for communication”, Phys. Rev. A 56 (1997) 1201–1204; H. Buhrman, R. Cleve and W. van Dam, “Quantum entanglement and communication complexity”, quant-ph/9705033.
E. Specker, “Die Logik nicht gleichzeitig entscheidbarer Aussagen”, Dialectica 14 (1960) 239–246.
N. G. de Bruijn and P. Erdös, “A color problem for infinite graphs and a problem in the theory of relations”, Proc. Nederl. Akad. Wetensch., Ser. A 54 (1951) 371–373.
A. Peres, “Two simple proofs of the Kochen-Specker theorem”, J. Phys. A: Math. Gen. 24 (1991) L175–L178.
A. Peres, Quantum Theory: Concepts and Methods (Boston: Kluwer 1995) p.197.
G. Birkhoff and J. von Neumann, “The logic of quantum mechanics”, Ann. Math. 37 (1936) 823–843.
I. Pitowsky, “Deterministic model of spin and statistics”, Phys. Rev. D 27 (1983) 2316–2326; I. Pitowsky, “Quantum mechanics and value definiteness”, Philos. Sci. 52 (1985) 154–156.
I. Pitowsky, email responding to a question from C. A. Fuchs (1998).
A. Peres, “Separability criterion for density matrices”, Phys. Rev. Lett. 77 (1996) 1413–1415; M. Horodecki, P. Horodecki and R. Horodecki, “Separability of mixed states: necessary and sufficient conditions”, Phys. Lett. A 223 (1996) 1–8.
J. M. Jauch, Foundations of Quantum Mechanics (Menlo Park, CA: Addison-Wesley 1968).
C. D. Godsil and J. Zaks, “Colouring the sphere”, University of Waterloo research report CORR 88-12 (1988).
A. W. Hales and E. G. Straus, “Projective colorings”, Pacific J. Math. 99 (1982) 31–43.
A. Kent, “Non-contextual hidden variables and physical measurements”, quant-ph/9906006.
A. Schönhage, “On the power of random access machines”, Automata, Languages and Programming (Sixth Colloquium, Graz, 1979), Lecture Notes in Computer Science, Vol. 71 (New York: Springer 1979) 520–529.
M. H. Freedman, “Topological views on computational complexity”, Proceedings of the International Congress of Mathematicians, Vol. II (Berlin, 1998), Doc. Math. J. DMV, Extra Vol. ICM II (1998) 453–464.
P. W. Shor, “Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer”, SIAM J. Comput. 26 (1997) 1484–1509.
R. Clifton and A. Kent, “Simulating quantum mechanics by non-contextual hidden variables”, quant-ph/9908031.
no-problem/9905/hep-lat9905020.html | ar5iv | text
# Predictions of chiral random matrix theory and lattice QCD results
## 1 Introduction
We are interested in learning as much as we can about the spectrum of the Euclidean QCD Dirac operator, $`\not{D}=\not{\partial }+ig\not{A}`$. One of the reasons for this interest is the Banks-Casher relation, $`\mathrm{\Sigma }\equiv |\langle \overline{\psi }\psi \rangle |=\pi \rho (0)/V`$, which relates the chiral condensate to the spectral density, $`\rho (\lambda )=\langle \sum _n\delta (\lambda -\lambda _n)\rangle `$, of the Dirac operator at zero virtuality. Thus, the spontaneous breaking of chiral symmetry, a nonperturbative phenomenon with profound consequences for the hadron spectrum, is encoded in an accumulation of the small Dirac eigenvalues.
It has been realized that chiral random matrix theory (RMT) is a suitable tool to compute the distribution and the correlations of the small Dirac eigenvalues, see the talk by Verbaarschot in these proceedings and references therein. The first numerical evidence in support of this statement came from instanton liquid simulations. The purpose of this talk is to verify the RMT-predictions by comparing them to lattice data. We shall see that the agreement between theory and numerical experiment is remarkable.
Recently, it has been shown that the results obtained in RMT can also be derived directly from field theory (partially quenched chiral perturbation theory). Conceptually, this is a big step forward, while for actual calculations RMT seems to be the simpler alternative. It is comforting to know that the results of the two approaches agree with each other and with lattice data. Related topics are covered in the talks by Damgaard, Akemann, Papp, Markum, Stephanov, and Halasz (in order of appearance) in these proceedings.
## 2 Symmetries of the Dirac operator
Because of $`\{i\not{D},\gamma _5\}=0`$, the Dirac operator falls into one of three symmetry classes corresponding to the three chiral ensembles of RMT: chiral orthogonal (chOE), unitary (chUE), and symplectic (chSE) ensemble, respectively. In most cases, the Dirac operator does not have additional symmetries and is described by the chUE. Exceptions are as follows ($`N_c`$ = number of colors).
– continuum, $`N_c=2`$, fermions in fundamental representation: chOE
– continuum, fermions in adjoint representation: chSE
– lattice, $`N_c=2`$, staggered fermions in fundamental representation: chSE
– lattice, staggered fermions in adjoint representation: chOE
The overlap Dirac operator on the lattice has the symmetries of the continuum operator. There are a few exceptions where the Dirac operator does not have the usual chiral symmetries and is, therefore, described by the non-chiral RMT ensembles: QCD in three dimensions (UE), and the Wilson Dirac operator on the lattice (OE for $`N_c=2`$, UE for $`N_c\ge 3`$, SE for the adjoint representation).
## 3 Correlations in the bulk of the spectrum
The full Dirac spectrum can be computed numerically using, e.g., a special version of the Lanczos algorithm. Which features of the spectrum are described by RMT in the bulk of the spectrum, i.e., away from the edges? The global spectral density is certainly not given by the RMT result since it is sensitive to the details of the dynamics. However, one can separate the average spectral density from the spectral fluctuations on the scale of the local mean level spacing. This process is called unfolding. After unfolding, the spectral correlation functions are, up to a certain energy, given by RMT. This limiting energy is called the Thouless energy and will be addressed in Sec. 5.
The short- and long-range correlations of the eigenvalues are measured by quantities such as the distribution, $`P(s)`$, of spacings, $`s`$, between adjacent eigenvalues and the number variance, $`\mathrm{\Sigma }^2(L)=\langle (n(L)-\langle n(L)\rangle )^2\rangle `$, where $`n(L)`$ is the number of eigenvalues in an interval of length $`L`$. In the bulk of the spectrum, the predictions of the chiral RMT ensembles for these quantities are identical to those of the corresponding non-chiral ensemble. They can be tested against lattice data. The predictions of RMT were confirmed with very high accuracy in the following cases: staggered fermions in SU(2), in SU(3), and in compact U(1), respectively, Wilson fermions in SU(2), and the overlap Dirac operator in SU(2), in SU(3), and in the adjoint representation of SU(2), respectively. Furthermore, the lattice data continue to agree with the RMT predictions in the deconfinement phase. In all cases, the agreement was perfect, see $`P(s)`$ in Fig. 1 for an example.
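As an illustration of the unfolding procedure described above, here is a generic sketch of ours (not the analysis code behind Fig. 1), assuming Python with NumPy: the eigenvalues are mapped through a smooth fit to the cumulative spectral function, the nearest-neighbor spacings are computed, and the histogram is compared with the Wigner surmise for the unitary ensemble, $`P(s)=\frac{32}{\pi ^2}s^2e^{-4s^2/\pi }`$.

```python
import numpy as np

def unfolded_spacings(eigenvalues, poly_degree=7):
    """Unfold a spectrum by fitting the cumulative spectral function with a
    low-order polynomial, then return nearest-neighbor spacings with unit mean."""
    lam = np.sort(np.asarray(eigenvalues))
    n = np.arange(1, lam.size + 1)              # spectral staircase N(lambda)
    smooth = np.polynomial.Polynomial.fit(lam, n, poly_degree)
    unfolded = smooth(lam)                      # eigenvalues on the unfolded scale
    s = np.diff(unfolded)
    return s / s.mean()

def wigner_surmise_ue(s):
    """Wigner surmise for the unitary ensemble (the bulk chUE prediction)."""
    return 32.0 / np.pi**2 * s**2 * np.exp(-4.0 * s**2 / np.pi)

# toy check with a random GUE matrix standing in for lattice Dirac eigenvalues
rng = np.random.default_rng(0)
a = rng.normal(size=(400, 400)) + 1j * rng.normal(size=(400, 400))
h = (a + a.conj().T) / 2
s = unfolded_spacings(np.linalg.eigvalsh(h))
hist, edges = np.histogram(s, bins=30, range=(0.0, 3.0), density=True)
centers = (edges[:-1] + edges[1:]) / 2
for c, hval in zip(centers[::6], hist[::6]):
    print(f"s={c:.2f}  data={hval:.2f}  surmise={wigner_surmise_ue(c):.2f}")
```

For a staggered or overlap Dirac spectrum the same routine would simply be applied to the lattice eigenvalues in place of the toy GUE matrix.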
There is a very interesting question concerning staggered fermions: for $`N_c=2`$, they have the symmetries of the chSE whereas the continuum symmetries are those of the chOE (and the other way around in the adjoint representation). While one should eventually see a transition from chSE to chOE behavior as the lattice spacing goes to zero, it is unlikely that this can be observed on present day lattices. The overlap Dirac operator, on the other hand, has the correct symmetries in all cases.
If a chemical potential is added to the problem, the Dirac eigenvalues are scattered in the complex plane. In this case, $`P(s)`$ can also be constructed, see the talk by Markum.
## 4 Correlations at the spectrum edge
Because the nonzero eigenvalues of $`i\not{D}`$ come in pairs $`\pm \lambda _n`$, the Dirac spectrum has a “hard edge” at $`\lambda =0`$. As mentioned in the introduction, the Dirac eigenvalues in this region are of great interest because of their connection to chiral symmetry breaking. The distribution of the smallest eigenvalues is encoded in the microscopic spectral density, $`\rho _s(z)=\lim _{V\to \infty }\rho (z/V\mathrm{\Sigma })/V\mathrm{\Sigma }`$, which is universal and can be computed in RMT. One can also construct the distribution of the smallest eigenvalue, $`P(\lambda _{\mathrm{min}})`$, and higher-order spectral correlation functions. Analytical RMT results for these quantities are available. To compare these to lattice data, the only input one needs is the energy scale $`V\mathrm{\Sigma }=\pi \rho (0)`$, which is obtained from the data by extracting $`\rho (0)`$.
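For orientation, the quenched chUE prediction in the sector of topological charge $`\nu `$ takes the closed form $`\rho _s(z)=\frac{z}{2}\left[J_\nu ^2(z)-J_{\nu +1}(z)J_{\nu -1}(z)\right]`$, and for $`\nu =0`$ the smallest-eigenvalue distribution is $`P(\lambda _{\mathrm{min}})=\frac{z}{2}e^{-z^2/4}`$ in rescaled units. These are standard RMT results quoted here from memory, so they should be checked against the references; a short sketch evaluating them, assuming Python with SciPy:

```python
import numpy as np
from scipy.special import jv

def rho_s_quenched_chue(z, nu=0):
    """Microscopic spectral density of the quenched chUE, topological charge nu."""
    z = np.asarray(z, dtype=float)
    return 0.5 * z * (jv(nu, z) ** 2 - jv(nu + 1, z) * jv(nu - 1, z))

def p_lambda_min_quenched_chue_nu0(z):
    """Distribution of the smallest rescaled eigenvalue, quenched chUE, nu = 0."""
    z = np.asarray(z, dtype=float)
    return 0.5 * z * np.exp(-z ** 2 / 4.0)

z = np.linspace(0.0, 10.0, 6)
print(rho_s_quenched_chue(z, nu=0))
print(p_lambda_min_quenched_chue_nu0(z))
# rho_s oscillates around its large-z value 1/pi, reflecting the eigenvalue
# repulsion from the hard edge at z = 0
```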
The spectral correlations in the vicinity of $`\lambda =0`$, in contrast to those in the bulk, are sensitive to the number of massless (or very light) quarks and to the topological charge $`\nu `$. This is another reason why the hard edge of the spectrum is more interesting than the bulk.
### 4.1 Quenched approximation
The first evidence from the lattice for the universality of the microscopic spectral density came from an analysis by Verbaarschot of Columbia data for the valence quark mass dependence of the chiral condensate in SU(3) with staggered fermions. Plotting all the data computed for various values of $`\beta =6/g^2`$ in a rescaled form suggested by RMT, it was observed that in the region of very small masses they all fell on the same universal curve given by RMT. The simulations were done with dynamical quarks, but the sea quarks were much heavier than the valence quark so that one effectively had $`N_f=0`$.
The first direct lattice computation of the microscopic spectral quantities was done in quenched SU(2) with staggered fermions, and the data were subsequently analyzed in more detail. The agreement between lattice data and RMT predictions was remarkably good, see $`\rho _s(z)`$ in Fig. 1 for an example, but the analysis immediately raised a number of questions. First, why were the data consistent with the RMT predictions for topological charge $`\nu =0`$ even in weak coupling? Second, at what energies does RMT cease to be applicable? The answers will be given in Secs. 4.3 and 5.
Meanwhile, several groups have performed lattice simulations of the hard edge of the spectrum for almost all interesting cases: SU(3) with staggered fermions in three and four dimensions, the Schwinger model using both a fixed point and the Neuberger Dirac operator, SU(2) and SU(3) with staggered fermions in the adjoint representation, and the overlap Dirac operator in SU(2), in SU(3), and in the adjoint representation of SU(2), respectively. The theoretical predictions of all three chiral ensembles (and of the UE) have thus been verified with high accuracy.
### 4.2 Light dynamical fermions
In the chiral limit, the microscopic spectral correlations predicted by RMT depend on the number, $`N_f`$, of massless dynamical quarks. If the dynamical quarks are sufficiently heavy, the microscopic spectral correlations are given by the quenched result. There is an intermediate regime for quark masses of order $`1/V\mathrm{\Sigma }`$, i.e., of the same order as the low-lying Dirac eigenvalues. In this “double-scaling” regime, the RMT results depend on the quark masses. So far, analytical results are only known for the chUE. Lattice simulations with very light dynamical quarks have been performed in SU(2) with staggered fermions, corresponding to the chSE for which the corresponding RMT predictions were computed numerically. Again, very good agreement was found for $`\rho _s(z)`$ and $`P(\lambda _{\mathrm{min}})`$.
### 4.3 Topology
Lattice simulations with staggered fermions show that the microscopic spectral quantities agree with the RMT predictions in the sector of zero topological charge, even in weak coupling. Presumably, the reason is that the would-be zero modes related to topology are shifted away from zero due to discretization errors of order $`a^2`$, where $`a`$ is the lattice spacing. Thus, it is essential to use “better” Dirac operators which obey the Ginsparg-Wilson relation and for which the Atiyah-Singer index theorem holds. This has recently been done, in the Schwinger model using both a fixed point and the Neuberger Dirac operator, and using the overlap Dirac operator in SU(2), in SU(3), and in the adjoint representation of SU(2), respectively. (“Neuberger” and “overlap” refer to the same operator.) The lattice data agree very well with the RMT predictions also in sectors with $`\nu \ne 0`$, see $`P(\lambda _{\mathrm{min}})`$ in Fig. 1.
## 5 Thouless energy
Since RMT predictions are derived without any knowledge of the details of the QCD dynamics, they are only valid up to a certain energy which is called the Thouless energy, $`E_c`$. To identify $`E_c`$ for QCD, one needs two ingredients: (1) the fact that the random matrix model is applicable only if the kinetic terms in the chiral Lagrangian can be neglected, which is the case for $`L\ll 1/m_\pi `$ (with $`L`$ the linear extent of the four-volume and $`m_\pi `$ the pion mass), and (2) the Gell-Mann–Oakes–Renner relation, connecting $`m_\pi `$ to the quark mass and the chiral condensate. This gives rise to the theoretical prediction $`E_c\sim f_\pi ^2/\mathrm{\Sigma }L^2`$, where $`f_\pi `$ is the pion decay constant. This prediction was verified numerically in instanton liquid simulations and in lattice simulations for SU(2) and SU(3) with staggered fermions. For lattice simulations, this roughly means that the domain of validity of the RMT approach is larger for larger lattice size and stronger coupling.
## 6 Summary
We now have a wealth of analytical and numerical evidence in support of the claim that the low-lying spectrum of the QCD Dirac operator is described by universal functions which can be computed, e.g., in RMT. This applies to the quenched approximation, to the full theory with dynamical quarks, and to trivial and nontrivial topological sectors. Moreover, we know the energy where the universality breaks down. Apart from a better analytical understanding of the Dirac spectrum, this also offers a variety of practical applications, such as better extrapolations to the thermodynamic limit, the extraction of chiral logarithms, and perhaps the construction of hybrid Monte Carlo algorithms which would use the distribution of the small eigenvalues as input.
Acknowledgments. I thank the organizers for the invitation to this very stimulating workshop, and M.E. Berbenni-Bitsch, P.H. Damgaard, M. Göckeler, T. Guhr, H. Hehl, A.D. Jackson, J.-Z. Ma, H. Markum, S. Meyer, S. Nishigaki, R. Pullirsch, P.E.L. Rakow, A. Schäfer, B. Seif, J.J.M. Verbaarschot, H.A. Weidenmüller, and T. Wilke for very enjoyable collaborations.
no-problem/9905/astro-ph9905376.html | ar5iv | text
# A 6.4-hr positive superhump period in TV Col
## 1 Introduction
So far four periods have been discovered in the light curve of TV Col (Hellier 1993). They were interpreted as follows: the 32-min – the spin period; the 5.5-hr period – the orbital binary revolution; the 4-day period – the nodal precession of the accretion disc, and the 5.2-hr period – the beat between the two longer periods (a negative superhump). This interpretation makes TV Col the permanent superhump system with the largest orbital period. Since light curves of many permanent superhumpers show both types of superhumps (Patterson 1999), we decided to search for positive superhumps in the light curve of TV Col as well. Extrapolating the Stolz & Schoembs (1984) relation we predicted that the superhump period should be around 6.4 hr. Indeed we found such a periodicity in the data.
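The arithmetic behind this interpretation is easy to verify (a small sketch of ours, assuming Python): the 5.2-hr negative superhump is the beat between the 5.5-hr orbital period and the 4-day nodal precession, and a roughly 16% positive-superhump excess over the orbital period lands at the predicted 6.4 hr.

```python
orbital_hr = 5.5
precession_hr = 4.0 * 24.0

# negative superhump: beat between orbital motion and the nodal precession of the disc
negative_superhump_hr = 1.0 / (1.0 / orbital_hr + 1.0 / precession_hr)
print(f"negative superhump ~ {negative_superhump_hr:.2f} hr")   # ~5.2 hr

# positive superhump: orbital period plus the period excess
excess = 0.16        # assumed value, from the Stolz & Schoembs extrapolation quoted above
positive_superhump_hr = orbital_hr * (1.0 + excess)
print(f"positive superhump ~ {positive_superhump_hr:.2f} hr")   # ~6.4 hr
```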
## 2 Discussion
We re-examined four sets of photometric data of TV Col (Hellier 1993). Three sets show a similar pattern. In the upper panel of Fig. 1 we present the power spectrum of the 1989 January run. There is a triple alias structure around the three marked peaks. Two of them are the known 5.2-hr and 5.5-hr periods. The third peak corresponds to the period 6.4 hr. As a first test, we fitted the two known periods to the data, subtracted them and performed a new power spectrum on the residuals, which is shown in the lower panel of Fig. 1. The third peak didn’t disappear from the synthetic power spectrum, but even gained power.
The third period is 0.265 ± 0.005 day – about 16 percent longer than the orbital period. It obeys the relation between superhump-period excess and orbital period (Stolz & Schoembs 1984), which is plotted in Fig. 2. A positive superhump interpretation is inevitable.
As a confirmed permanent superhumper, the accretion disc of TV Col is naturally thermally stable. Therefore, our result supports the idea of Hellier & Buckley (1993) that the short-term outbursts seen in its light curve are mass transfer events rather than thermal instabilities in the disc.
At 5.5 hr, TV Col has a longer orbital period than any other known superhumper, and thus a mass ratio which is probably outside the range at which superhumps can occur according to the current theory.
no-problem/9905/astro-ph9905154.html | ar5iv | text
# High-Resolution 𝐾' Imaging of the 𝑧=1.786 Radio Galaxy 3C 294
## 1 Introduction
High-redshift radio galaxies probably mark out regions of higher than average density in the early Universe, and they give us a window into formation processes for at least one kind of massive galaxy. Recent Hubble Space Telescope (HST) imaging of radio galaxies with redshifts $`>2`$ in the rest-frame ultraviolet show that most comprise several components and that the ultraviolet flux from these components is apparently dominated by recent star formation (van Breugel et al. 1998; Pentericci et al. 1998, 1999).
3C 294, at $`z=1.786`$, is one of the dozen radio galaxies in the 3CR catalog with $`z>1.5`$, so it is one of the most powerful radio sources in the observed Universe as well as a likely example of a massive galaxy in its youth. It has strong Ly-$`\alpha `$ emission extending over $`10`$″, roughly aligned with the radio structure (McCarthy et al. 1990). It also has a $`V=12`$ star $`<10`$″ to the west (Kristian, Sandage, & Katem 1974), which initially hampered optical and IR observations, but which now can be put to good use as an adaptive optics (AO) reference. In this Letter we describe the results of an initial AO imaging investigation of 3C 294.
## 2 Observations and Data Reduction
The AO observations were obtained with the University of Hawaii AO system Hokupa‘a (Graves et al. 1998) mounted at the f/36 focus of the Canada-France-Hawaii Telescope (CFHT). This system uses the curvature-sensing approach pioneered by François Roddier (Roddier, Northcott, & Graves 1991). Briefly, an image of the telescope primary is formed on a 36-element deformable mirror. Light from this mirror shortwards of $`1\mu `$m is sent by a beamsplitter to a membrane mirror, which is driven at 2.6 kHz to image extrafocal images on both sides of focus onto a 36-element avalanche-photodiode array. Corrections for the wavefront errors derived from the difference of these extrafocal images are sent back to the deformable bimorph mirror, which is updated at 1.3 kHz. Under typical seeing conditions at CFHT, and for sufficiently bright ($`R12`$) stars, diffraction-limited imaging can be achieved as short as $`I`$ band (Graves et al. 1998). Our imaging of 3C 294 was in the $`K^{}`$ band (Wainscoat & Cowie 1992), so the correction was excellent and quite stable over the course of the observations. The detector system was the University of Hawaii Quick Infrared Camera (QUIRC), which uses a $`1024\times 1024`$ HAWAII array (Hodapp et al. 1996). We obtained 8 300 s exposures on 1998 July 3 UT and 14 300 s exposures on 1998 July 4 UT. Unusually rapid variation of the airglow emission compromised the reduction of data from the first night, and we use here only the data from the second night. The individual exposures were dithered in a pattern that kept the bright guide star off the edge of the detector for all exposures. The images were reduced using our standard iterative procedure (Stockton, Canalizo, & Close 1998). Briefly, we make a bad-pixel mask from a combination of hot pixels (from dark frames) and low-response pixels (from flat-field frames); these pixels are excluded from all sums and averages. For each dark-subtracted frame, 3 or 4 time-adjacent dithered frames were median averaged to make a sky frame, which was normalized to the sky value of the frame in question and subtracted; then the residual was divided by a flat-field frame. These first-pass images were registered to the nearest pixel and median averaged. After a slight smoothing, this rough combined frame was used to generate object masks for each frame, which were then combined with the bad-pixel mask. This process was repeated to give better sky frames, better offsets for alignment, and a new combined image. This new image was used to replace bad pixels in each flattened frame with the median from the other frames, so that bad pixels would not affect the centering algorithm used to calculate the offsets. The final combined image was a straight average of the corrected, subpixel-registered frames, using a sigma-clipping algorithm in addition to the bad-pixel mask to eliminate deviant pixel values. Normally, we use field stars for registration, but in this case we had to use the ghost image of the guide star, since there were no other objects visible on individual frames. This ghost image was produced by a secondary reflection in the AO system optics; it was about 10 mag fainter than the guide star, and it was positioned 0$`\stackrel{}{\mathrm{.}}`$114 south and 7$`\stackrel{}{\mathrm{.}}`$85 east of the star. 
The image scale with the $`K^{\prime }`$ filter was determined to be 0″.03537 ± 0″.00005 pixel<sup>-1</sup>, based on an accurate measurement of the scale in the $`J`$ filter by Claude Roddier and a determination of the ratio of the $`K^{\prime }`$ to $`J`$ scales for the QUIRC camera from contemporaneous imaging data obtained with the University of Hawaii 2.2 m telescope.
## 3 Results
Our $`K^{\prime }`$ band image of 3C 294 samples the rest-frame spectrum from 6850–8300 Å, a region unlikely to be dominated by emission lines (the strongest expected lines in this bandpass are \[Ar III\] $`\lambda \lambda 7136`$,7751 and \[O II\] $`\lambda \lambda 7320`$,7330). This region is also close to the peak of the spectral-energy distribution (SED) for a stellar population with an age of 2 Gyr. If the luminosity at the center of this galaxy is dominated by a central bulge of relatively old stars, we should be able to see it. What we actually do see is shown in Fig. 1. Scattered light and a diffraction spike from the guide star extend from the lower right across the lower middle of the frame, and the ghost image of the guide star lies just above the diffraction spike. 3C 294 is at the middle of the frame. The structure is knotty and filamentary, comprising several distinct components within a roughly triangular region about 1″.4 (about 10 kpc, for $`H_0=75`$, $`q_0=0.3`$, $`\mathrm{\Lambda }=0`$, which we assume throughout this Letter) across, no one of which is clearly dominant. The data from the first night, although of lower quality, confirm the main features seen in Fig. 1.
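The angular-to-physical scale quoted above can be reproduced with the Mattig relation for a matter-only ($`\mathrm{\Lambda }=0`$) cosmology; the following is a small sketch of ours, assuming Python, not the authors' calculation.

```python
import math

def kpc_per_arcsec(z, h0=75.0, q0=0.3):
    """Angular-diameter scale from the Mattig relation (Lambda = 0, pressureless)."""
    c = 2.998e5  # km/s
    # luminosity distance in Mpc, then D_A = D_L / (1 + z)^2
    d_l = (c / (h0 * q0**2)) * (q0 * z + (q0 - 1.0) * (math.sqrt(1.0 + 2.0 * q0 * z) - 1.0))
    d_a = d_l / (1.0 + z) ** 2
    return d_a * 1000.0 * math.radians(1.0 / 3600.0)  # kpc per arcsec

scale = kpc_per_arcsec(1.786)
print(f"{scale:.1f} kpc/arcsec -> 1.4 arcsec ~ {1.4 * scale:.0f} kpc")
```

This gives roughly 6.3 kpc per arcsecond at z = 1.786, so the 1″.4 structure spans roughly 9 kpc, consistent with the ~10 kpc quoted above.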
Since, at larger scales, both the inner radio structure and the extended Ly-$`\alpha `$ emission are aligned along PA 20° (McCarthy et al. 1990), one possible interpretation is that we are seeing a dust scattering nebula centered on this same axis (Chambers 1999a,b). If this is the case, the hidden nucleus must lie somewhere near the faint southern tip of the “triangle”. However, it is also possible that we are seeing an assemblage of small merging components, and that the active nucleus is coincident with one of the peaks in our image. Thus, the interpretation of the observed structure is critically dependent on the location of the nucleus.
McCarthy et al. (1990) found a flat-spectrum central compact radio component, presumably to be identified with the nucleus, and they determined its position to a precision of about 50 mas. Because we have determined the position of the ghost image and the image scale quite accurately, we can relate any point in our field to the position of the guide star to a similar precision. However, in order to relate our image to the radio position, we need to have high-quality astrometric positions for both the guide star and the radio nucleus in the same reference frame. There are two problems with accomplishing this task: (1) the McCarthy et al. (1990) radio data is referenced to the equinox B1950, epoch 1979.9 VLA calibrator reference frame, and converting to the equinox J2000, epoch 2000.0 FK5 frame is not completely straightforward; and (2) as McCarthy et al. (1990) point out, the positions for our guide star given by Veron (1966), Kristian et al. (1974), and Riley et al. (1980) do not agree very well. Our separate AO imaging of the guide star itself shows an added minor complication: the star is a double with a separation of 0″.13 (see inset in Fig. 1).
We have converted the position of the central radio component to the FK5 frame by an empirical mapping of the B1950 VLA calibrator reference frame in the region of the source to the J2000 VLA calibrator frame. We first do a standard precession of B1950 calibrator positions from equinox B1950 to nominal equinox J2000, for calibrators within 15° of 3C 294. We then take the difference between these precessed positions and the cataloged J2000 VLA calibrator positions, and we fit a low-order reference surface to these differences (which were always 0″.5 or less). The values of the corrections at the position of 3C 294 are then applied to the precessed B1950 VLA position for the central compact radio source. We obtain a position 14<sup>h</sup>06<sup>m</sup>44<sup>s</sup>.077 ± 0<sup>s</sup>.002, +34°11′24″.956 ± 0″.025 (J2000), where the quoted errors are the standard errors of the fit of the correction surface to the coordinate residuals. In addition, there is an uncertainty due to errors in the J2000 VLA calibrator positions, which is likely to be 0″.1 or less.
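A minimal sketch of this two-step conversion (ours; it uses Astropy for the precession step, and the calibrator coordinates, offsets, and fitted surface order are placeholders, not the values used in the paper):

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord, FK5

# hypothetical B1950 calibrator positions near the source (degrees)
b1950_ra = np.array([211.2, 210.0, 212.5, 209.5])
b1950_dec = np.array([34.5, 33.0, 35.5, 34.0])

# step 1: standard precession of the calibrators from B1950 to J2000
prec = SkyCoord(b1950_ra * u.deg, b1950_dec * u.deg,
                frame="fk4", equinox="B1950").transform_to(FK5(equinox="J2000"))

# stand-in for the cataloged J2000 calibrator positions: precessed values plus
# small sub-arcsecond offsets playing the role of the frame differences
cat_ra = prec.ra.deg + np.array([0.30, 0.20, 0.40, 0.25]) / 3600.0
cat_dec = prec.dec.deg - np.array([0.10, 0.15, 0.05, 0.12]) / 3600.0

# step 2: fit a low-order (here first-order) correction surface to the differences
design = np.column_stack([np.ones_like(cat_ra), prec.ra.deg, prec.dec.deg])
c_ra, *_ = np.linalg.lstsq(design, cat_ra - prec.ra.deg, rcond=None)
c_dec, *_ = np.linalg.lstsq(design, cat_dec - prec.dec.deg, rcond=None)

# apply the correction surface evaluated at the precessed position of the target
tgt = SkyCoord(211.68 * u.deg, 34.19 * u.deg,
               frame="fk4", equinox="B1950").transform_to(FK5(equinox="J2000"))
x = np.array([1.0, tgt.ra.deg, tgt.dec.deg])
print(tgt.ra.deg + x @ c_ra, tgt.dec.deg + x @ c_dec)
```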
Unfortunately, the AO guide star is too faint to appear in the Hipparcos/Tycho databases. In order to obtain a more accurate position for it, we have obtained large-image-scale, short-exposure CCD frames covering an 8′ field including the star and have derived a plate solution from 12 USNO-A2.0 stars (after deleting 3 additional stars with outlying residuals). The residuals for the solution give a standard error of 0″.24 in RA and 0″.34 in declination, so the 2$`\sigma `$ uncertainty in the position of an individual star is over 0″.5. For the mean position of the components of the AO guide star, we obtain 14<sup>h</sup>06<sup>m</sup>43<sup>s</sup>.35, +34°11′24″.2 (J2000).
The USNO-A2.0 catalog is in the FK5 system tied to the Hipparcos reference frame. With the AO guide star and 3C 294 radio core positions, and our knowledge of the position of the AO guide star relative to its ghost image, we can locate the position of the radio core on our image. This position, with $`2\sigma `$ error bars (determined mostly by the uncertainty of the position of the guide star), is shown in the lower-left inset in Fig. 1, along with radio-core positions based on positions for the guide star given previously by Veron (1966), Kristian et al. (1974), and Riley et al. (1980). The position we derive for the radio core is consistent with a location at or near the southern tip of the $`K^{}`$ structure, and apparently inconsistent with association with any of the three brightest blobs along the northern edge of the structure.
In a 3″-diameter synthetic aperture, we obtain a total $`K^{\prime }`$ magnitude for 3C 294 of 18.3 ± 0.3. Lower-resolution imaging at the NASA Infrared Telescope Facility gives $`K^{\prime }=18.3\pm 0.1`$. Both of these measurements are consistent with the $`K=18.0\pm 0.3`$ found by McCarthy et al. (1990) for a similar aperture.
## 4 Discussion
Radio galaxies at high redshifts often show multiple components, and these are often found to be aligned with the radio structure (e.g., Pentericci et al. 1999, and references therein). This conclusion is based mostly on optical data, which corresponds to rest-frame UV for these objects, although recently some deep ground-based IR imaging has been presented by van Breugel et al. (1998), and programs involving HST NICMOS imaging of high-redshift radio galaxies are in progress (e.g., Fosbury et al. 1999). One reason that 3C 294 is especially of interest in this context is that its redshift places the $`K^{}`$ band in a region of the spectrum that is unlikely to be dominated by emission-line radiation or nebular thermal continuum. Since there is also no evidence in the VLA map of McCarthy et al. (1990) of radio counterparts to the rest-frame optical structure we see, the emission is most likely due either to stars or to scattering from a hidden quasar. The position we obtain for the flat-spectrum radio component of 3C 294 favors a location at or near the southern apex of the observed distribution of bright knots, and it therefore supports (though it cannot prove) the interpretation of the observed structure as an illumination cone, most likely due to dust scattering of radiation from a quasar nucleus. Figure 1 can be compared with Fig. 2 of McCarthy et al. (1990), which shows contours of the Ly-$`\alpha `$ and radio emission (the short arrows in our Fig. 1 are at the positions of the radio knots K<sub>N</sub> and K<sub>S</sub> shown in their figure). The extended Ly-$`\alpha `$ emission also shows a well-defined triangular structure extending to the north, which McCarthy et al. (1990) suggest is due to anisotropic emission of ionizing radiation from a central non-thermal source. While the inferred Ly-$`\alpha `$ and $`K^{}`$ cones are fairly well aligned, the Ly-$`\alpha `$ material extends $`4`$″, or roughly 4 times as far as does the continuum emission in our $`K^{}`$ AO image. The Ly-$`\alpha `$ emission also extends to the other side of the radio nucleus, in a weak and somewhat poorly-defined “counter cone.” We do not see a corresponding feature at $`K^{}`$, but our dynamic range is not sufficient to put very strong limits on the presence of such material. If the illumination is truly biconical and intrinsically fairly symmetric, the southern cone must suffer significant extinction along our line of sight.
However, granting that the morphology we see in our AO image is likely to be at least partly determined by illumination effects, the material being illuminated also does seem to have an intrinsic distribution in small (0″.2 ≈ 1.5 kpc) coherent bodies, which (on the not unreasonable assumption that they contain stars as well as dust) must shortly merge together. In fact, the tendency of these objects to show elongation roughly aligned in the direction of the apex may well be due to tidal stretching. Thus, the alternatives of illumination effects and merging subunits need not be starkly opposed to each other in this case, although the presence of an illumination cone, if confirmed (say, by polarization measurements), means that we are likely seeing only part of the action.
In summary, 3C 294 appears to be a particularly good example of several aspects of an emerging picture of high-redshift radio galaxies. The observations suggest the following scenario: A quasar nucleus, hidden along our line of sight, is responsible for the jets that power the radio source as well as for the illumination of material in the immediate environment of the radio galaxy that falls within a biconical region. This illumination is made evident to us by scattering by dust and by emission from the large Ly-$`\alpha `$ nebula that is aligned with the radio axis (McCarthy et al. 1990). The bulge is apparently still in the process of being assembled from small ($`1.5`$ kpc = 0$`\stackrel{}{\mathrm{.}}`$2), merging, dusty objects (e.g., Baron & White 1987; Pascaralle et al. 1996), which, at the depth of our current images, are visible only by scattered light, when they happen to fall within the illumination cone of the central source. Deeper high-resolution imaging might pick up intrinsic emission from similar objects in regions not illuminated by the quasar.
We are grateful for the support of the University of Hawaii Adaptive Optics group: François Roddier, Claude Roddier, Buzz Graves, Malcolm Northcott, and Laird Close, without which these observations would not have been possible. We also thank Laird Close and Claude Roddier for helpful discussions and for information on the image scale, Dave Monet and Dave Tholen for discussions on astrometric matters, and Rob Whitely for obtaining short exposures on the 3C 294 field. This research was supported in part by NSF grant AST 95-29078.
no-problem/9905/hep-ph9905485.html | ar5iv | text
## Acknowledgments
The author thanks K.G. Chetyrkin for discussion, encouragement and correspondence. The work is supported in part by the Volkswagen Foundation under contract No. I/73611 and by the Russian Fund for Basic Research under contracts Nos. 97-02-17065 and 99-01-00091.
no-problem/9905/physics9905027.html | ar5iv | text
# Measurability of side chain rotational isomer populations: NMR and molecular mechanics of cobalt glycyl-leucine dipeptide model system
## I INTRODUCTION
Comparisons of NMR structures with X-ray structures show that vicinal coupling constants accurately measure the backbone torsion angles of proteins Cornilescu et al. (2000); Sprangers et al. (2000); Wang and Bax (1996). In the best X-ray structures multiple conformations of a particular side chain can be confidently identified Esposito et al. (2000); Jelsch et al. (2000); Yamano et al. (1997); Burling et al. (1996); Teeter et al. (1993), but population estimates may not be very accurate. The populations of the three $`\chi ^1`$ rotational isomers of protein side chains can be determined from vicinal coupling constants if the rotational isomers are assumed to have ideal staggered conformations Hennig et al. (1999); Karimi-Nejad et al. (1994). If the side chain rotational isomers are not assumed to have ideal conformations and the amplitudes of the torsion angle fluctuations of each rotational isomer are also unknown, then in general it is only possible to measure rotational isomer populations as statistical averages over many side chains West and Smith (1998). Even with a very complete set of vicinal coupling constants and NOESY cross relaxation rates it is difficult to measure the populations of more than two rotational isomers of a single side chain Džakula et al. (1992). Protein NMR and crystallographic data available now or in the foreseeable future simply do not have the resolution to measure the populations of all possible side chain rotational isomers at anywhere near the 1% level of accuracy. Understanding protein properties such as fluorescence intensity decay spectra Sillen et al. (2000); Moncrieffe et al. (2000, 1999); Antonini et al. (1997); Verma et al. (1996); Haydock (1993), hydrogen ion association constants Onufriev et al. (2000); Luo et al. (1998); You and Bashford (1995), or global stability Vohník et al. (1998); Wilson et al. (1995) often requires that an NMR or X-ray structure be supplemented with molecular mechanics structure calculations of the conformations of a particular side chain. An ideal structure determination method would incorporate these supplemental molecular mechanics calculations and simultaneously fit NMR or crystallographic data. The consistency of the data and the incorporated molecular mechanics could be judged by existing methods Kleywegt and Jones (1995); Brünger et al. (1993) for assessing when a model over-fits the data. In the case of measuring side chain rotational isomer populations it might be possible to measure the population of one or two prominent rotational isomers and assess the measurability of other molecular mechanically plausible rotational isomers. In this work we take a simple first step in this direction with an analysis of the cobalt glycyl-leucine dipeptide model system. This model system has the advantages that the vicinal coupling constants and NOESY cross relaxation rates can be accurately measured on samples with natural isotope abundance and that the cobalt dipeptide ring system restrains the dipeptide backbone in a single conformation.
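To make the ideal-staggered-rotamer analysis mentioned above concrete, here is a small sketch of ours (assuming Python with NumPy); the Karplus coefficients, the dihedral assignments, and the "measured" couplings are placeholder values for illustration, not the cobalt dipeptide data reported below.

```python
import numpy as np

def karplus(theta_deg, a=9.5, b=-1.6, c=1.8):
    """Karplus equation 3J(theta) = A cos^2(theta) + B cos(theta) + C.
    The coefficients here are generic placeholder values."""
    t = np.radians(theta_deg)
    return a * np.cos(t) ** 2 + b * np.cos(t) + c

# assumed H(alpha)-C(alpha)-C(beta)-H(beta) dihedrals (degrees) of the two beta
# protons in the three ideal staggered chi1 rotamers (trans, gauche+, gauche-)
dihedrals_beta2 = [180.0, -60.0, 60.0]
dihedrals_beta3 = [60.0, 180.0, -60.0]

# observed vicinal couplings (Hz) -- illustrative numbers only
j_beta2_obs, j_beta3_obs = 10.5, 4.0

# population-weighted averages plus normalization give a 3x3 linear system
a = np.array([
    [karplus(t) for t in dihedrals_beta2],
    [karplus(t) for t in dihedrals_beta3],
    [1.0, 1.0, 1.0],
])
b = np.array([j_beta2_obs, j_beta3_obs, 1.0])
populations = np.linalg.solve(a, b)
print(dict(zip(["trans", "gauche+", "gauche-"], populations.round(2))))
```

With only two couplings and the normalization constraint the system is exactly determined, which is why relaxing the ideal-geometry assumption, or allowing unknown torsional fluctuations, makes the populations underdetermined, as discussed above.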
The two background sections give essential information about the conformational analysis of leucine side chain rotamers and about the accuracy of the Karplus equation coefficients. The experimental section gives the vicinal coupling constant and NOESY cross relaxation rate data for the cobalt glycyl-leucine dipeptide. Simple two and three rotational isomer models suggested by conformational analysis are compared and fit to some of this data. The computational results and discussion section examines the molecular mechanics energy map, the effect of intramolecular thermal motions on calculated NMR coupling constants and cross relaxation rates, and the Monte Carlo probability density functions of the rotational isomer populations. For the simple cobalt dipeptide model system these probability density functions confirm the preliminary analysis in the experimental section and additionally suggest that the populations of the remaining rotational isomers are unmeasurable at present. Our analysis shows that, while the NMR data are not fit too well by the simplest models, neither do these data give any guidance in selecting among the multitude of models with improved fits.
## II BACKGROUND
### II.1 Leucine side chain conformational analysis
The leucine side chains of crystallographic oligopeptide and protein structures strongly prefer the trans gauche<sup>+</sup> and gauche<sup>-</sup> trans rotational isomers Benedetti et al. (1983); Schrauber et al. (1993). Conformational analysis predicts the backbone-dependent stability of protein side chain conformations and explains the rotamer preferences observed in high resolution crystallographic structures of proteins Dunbrack Jr and Karplus (1994, 1993). This analysis is equally applicable to the leucine side chain of the cobalt glycyl-leucine dipeptide and leads to the same conclusions found for leucine side chains in proteins. In either case the predominant rotational isomers of the leucine side chain are trans gauche<sup>+</sup> and gauche<sup>-</sup> trans. The butane and syn-pentane effects are known from the conformational analysis of the simple hydrocarbons $`n`$-butane and $`n`$-pentane. The syn-pentane conformations, which are gauche<sup>+</sup> gauche<sup>-</sup> or gauche<sup>-</sup> gauche<sup>+</sup>, are about 3.3 kcal/mol higher in energy than the extended trans trans conformation. A molecular conformation is said to be destabilized by the syn-pentane effect when one (or more) five atom fragments of the molecule are in syn-pentane like conformations. The conformational analysis of a peptide or protein identifies unfavorable side chain conformations primarily by searching for syn-pentane effects among all possible $`n`$-pentane fragments with the C<sup>α</sup>–C<sup>β</sup> bond in the second or third fragment position. When the C<sup>α</sup>–C<sup>β</sup> and C<sup>β</sup>–C<sup>γ</sup> bonds are in the second and third fragment positions, the analysis gives backbone-independent rotamer preferences because the fragment conformation does not depend on the backbone $`\varphi `$ and $`\psi `$ angles. When the backbone N–C<sup>α</sup> or C–C<sup>α</sup> bond is in the second and the C<sup>α</sup>–C<sup>β</sup> bond is in the third fragment position, the analysis gives backbone-dependent rotamer preferences. For a leucine residue there are eight pentane fragments to consider: four backbone-independent fragments of the pattern (N,C)–C<sup>α</sup>–C<sup>β</sup>–C<sup>γ</sup>–(C<sup>δ1</sup>,C<sup>δ2</sup>), and four backbone-dependent fragments of the patterns (C<sub>i-1</sub>,O- - -HN<sub>i</sub>)–N<sub>i</sub>–C<sup>α</sup><sub>i</sub>–C<sup>β</sup><sub>i</sub>–C<sup>γ</sup><sub>i</sub> or (N<sub>i+1</sub>,O<sub>i</sub>)–C<sub>i</sub>–C<sup>α</sup><sub>i</sub>–C<sup>β</sup><sub>i</sub>–C<sup>γ</sup><sub>i</sub>, where leucine is the $`i`$th residue and O- - -HN<sub>i</sub> is an assumed hydrogen bond acceptor of HN<sub>i</sub>. A very clear Newman projection diagram of these eight fragments is shown in Fig. 2 of Ref. Dunbrack Jr and Karplus, 1994. Of the nine possible leucine rotamers only two, trans gauche<sup>+</sup> and gauche<sup>-</sup> trans, have no backbone-independent syn-pentane interactions, six have one interaction, and one, gauche<sup>+</sup> gauche<sup>-</sup>, has two such interactions. Because the identical backbone-independent fragments are present in the cobalt dipeptide and proteins, the backbone-independent conformational analysis for proteins applies equally to the cobalt dipeptide.
The conformational analysis of backbone-dependent rotamer preferences is important because the cobalt dipeptide backbone forms two approximately planar chelate rings with $`\varphi `$ and $`\psi `$ angles that differ somewhat from the angles most commonly found in protein structures. To apply backbone-dependent conformational analysis to the leucine side chain of the cobalt dipeptide (Fig. 1) simply note that leucine is the second residue and substitute the atom names Co, O<sup>t1</sup>, and O<sup>t2</sup> for the names O- - -HN<sub>i</sub>, O<sub>i</sub>, and N<sub>i+1</sub> in the above five atom fragments, where O<sup>t1</sup> is the terminal carboxyl oxygen bonded to cobalt and O<sup>t2</sup> is the uncomplexed carboxyl oxygen. (The reverse substitution of O<sup>t1</sup> and O<sup>t2</sup> inconveniently generates the dipeptide from a protein with a very unlikely backbone conformation that has colliding amide groups.) With this identification the leucine backbone torsion angles $`\varphi _2`$ and $`\psi _2`$ are both near 180 degrees. If $`\varphi _2=174.74`$ degrees and leucine $`\chi ^1`$ is gauche<sup>-</sup>, there is a syn-pentane interaction between the glycine carbonyl carbon and the leucine C<sup>γ</sup> (Fig. 1 middle). If $`\psi _2=174.74`$ degrees and leucine $`\chi ^1`$ is trans, there is a syn-pentane interaction between the uncomplexed terminal oxygen and the leucine C<sup>γ</sup> (Fig. 1 top). Because $`\varphi _2`$ and $`\psi _2`$ of the cobalt dipeptide are both near these critical angles, syn-pentane effects could destabilize both of the leucine side chain trans gauche<sup>+</sup> and gauche<sup>-</sup> trans rotamers, which are observed in most crystallographic structures and preferred by the backbone-independent conformational analysis.
Crystallographic studies of copper and cobalt dipeptides and the vicinal coupling constants about both N–C<sup>α</sup> bonds of the cobalt glycyl-leucine dipeptide suggest that the $`\varphi _2`$ and $`\psi _2`$ torsion angles of the cobalt glycyl-leucine dipeptide depart from 180 degrees by as much as 10 or 20 degrees. In the crystallographic structures of cobalt glycyl-glycine dipeptides Prelesnik and Herak (1984) and copper dipeptides Freeman et al. (1977) the chelate ring conformations vary over a wide range. The peptide backbone atoms are typically displaced by up to 0.1 or 0.2 angstrom from the mean plane of the chelate rings and the angles between 3-atom segments of the chelate rings are typically 5 or 10 degrees, or perhaps as large as 20 degrees. Both these measures of chelate ring puckering imply backbone torsion angles of 180$`\pm `$10 degrees with maximum deviations from 180 degrees of no more than about 20 degrees. The variability of the chelate ring conformations is thought to arise from intermolecular contacts within the crystal. Even if a crystallographic structure of the cobalt dipeptide were available, no reliable predictions about the solution conformation of the chelate rings could be made because the conformational distortions caused by intermolecular contacts are not well understood. In various DMSO plus D<sub>2</sub>O mixtures, measurements of the four H–N–C<sup>α</sup>–H vicinal coupling constants about the glycine N–C<sup>α</sup> bond of the cobalt glycyl-leucine dipeptide show a rotation about this bond of $`-10`$ to $`-20`$ degrees Juranić et al. (1993). This rotation angle is relative to an eclipsed substituent atom geometry about the N–C<sup>α</sup> bond and implies a puckered amino-peptidato chelate ring. Though this puckering gives no direct information about the leucine $`\varphi _2`$ and $`\psi _2`$ torsion angles, the presence of puckering in solution shows that the intermolecular contacts within a crystal are not the only cause of puckering. The C–N–C<sup>α</sup>–H vicinal coupling constant about the N–C<sup>α</sup> bond of the cobalt glycyl-leucine dipeptide is about 0.3 Hz larger than the same coupling constant of the cobalt glycyl-glycine dipeptide Juranić et al. (1993). (In Ref. Juranić et al., 1993 the first atom of this coupling constant is mistakenly labeled C<sup>α</sup> rather than C.) This coupling constant difference implies that the cobalt dipeptide $`\varphi _2`$ torsion angle is about $`170\pm 10`$ degrees.
Simple inspection of backbone-dependent rotamer libraries suggests that a 20 degree departure from the backbone torsion angles of planar chelate rings can diminish or even eliminate the gauche<sup>+</sup> $`\chi ^1`$ rotamer preference of the leucine side chain. The proportions of gauche<sup>-</sup>, gauche<sup>+</sup>, and trans $`\chi ^1`$ side chain rotamers change fairly dramatically for both 30 and 20 degree backbone angle increments within the backbone-dependent rotamer libraries of Dunbrack and Karplus Dunbrack Jr and Karplus (1994, 1993). The rotamer libraries seem to show that backbone-independent interactions are slightly more important than backbone-dependent interactions. In the backbone-dependent rotamer library for straight side chains Dunbrack Jr and Karplus (1994), which does not include leucine, the gauche<sup>-</sup> and trans $`\chi ^1`$ rotamers appear with a combined frequency of 20 to 30% in the two 30 by 30 degree square cells adjacent to the point $`\varphi ,\psi =180`$ degrees, even though backbone-dependent syn-pentane interactions favor gauche<sup>+</sup> $`\chi ^1`$ rotamers. A similar trend seems to hold throughout the backbone-dependent rotamer library: $`\chi ^1`$ rotamers excluded by syn-pentane interactions appear with diminished probability rather than being completely excluded. In contrast, the backbone-independent syn-pentane interactions of the leucine side chain exclude the gauche<sup>+</sup> $`\chi ^1`$ rotamer with striking completeness. In the crystallographic structures on which the rotamer library is based, less than 2% of the leucines are gauche<sup>+</sup> $`\chi ^1`$ rotamers Dunbrack Jr and Karplus (1994). The exclusion of leucine gauche<sup>+</sup> $`\chi ^1`$ rotamers as well as gauche<sup>-</sup> $`\chi ^2`$ rotamers seems to hold over all known classes of protein backbone structures Schrauber et al. (1993). The crystallographic and NMR evidence cited in the last paragraph suggests that the $`\varphi _2`$ and $`\psi _2`$ torsion angles of the cobalt dipeptide both depart from 180 degrees by as much as 10 or 20 degrees. This is more than enough to relieve the backbone-dependent syn-pentane interactions with the leucine C<sup>γ</sup> and diminish the backbone-dependent preference for gauche<sup>+</sup> $`\chi ^1`$ rotamers of the leucine side chain.
### II.2 Accuracy of Karplus equation calibration
Theoretical calculations show that the H–C–C–H vicinal proton coupling constant depends on the torsion angle $`\varphi `$(H,C,C,H) around the C–C bond, the electronegativity and orientation of substituent groups on the two central carbon atoms, the bond angles $`\theta `$(H,C,C) and $`\theta `$(C,C,H), and the length of the C–C bond Karplus (1963). The same symbol $`\varphi `$ serves here for the torsion angle between the vicinally coupled spins and elsewhere for the protein backbone torsion angle, but the meaning should always be clear from the context. The theoretical dependence of the coupling constant on the torsion angle is approximated by a Karplus equation, which is often written in the form
$${}_{}{}^{3}J(\varphi )=A\mathrm{cos}^2\varphi -B\mathrm{cos}\varphi +C,$$
(1)
where $`\varphi `$ is the torsion angle around the C–C bond. For peptide side chain vicinal coupling constants the torsion angle $`\varphi `$ is equal to the side chain torsion angle to within a phase shift that is approximately an integer multiple of 120 degrees. The Karplus equation for a H–C–C–C<sup>′′</sup> heteronuclear vicinal coupling constant also has the same form, where the torsion angle is now $`\varphi `$(H,C,C,C<sup>′′</sup>) around the same C–C bond. The accuracy of theoretical coupling constant calculations or of the Karplus equation fit to such calculations is at best around $`\pm `$1 Hz. A more accurate Karplus equation is obtained by adjusting the coefficients to fit experimental coupling constant data. The greatest improvement occurs when the Karplus equation coefficients are calibrated for a four atom fragment with specific functional groups substituted in a specific orientation on the central two atoms. For example, the error in the coupling constants predicted by the Karplus equation calibrated for the protein H–N–C<sup>α</sup>–C<sup>β</sup> heteronuclear coupling constant is $`\pm `$0.25 Hz, as estimated by the RMS difference between the Karplus curve and the fit experimental data Wang and Bax (1996). If a Karplus equation that is calibrated for a four atom fragment with specific substituent groups is applied to the same four atom fragment with different substituent group chemistry or orientation, then the errors in the predicted coupling constants are dramatically increased. Similar large errors in the predicted coupling constants result if the Karplus equation is fit to a collection of experimental coupling constants that are all measured for the same fixed four atom fragment, but with a variety of functional groups substituted in a variety of orientations on the two central atoms. For one data set of over 300 experimental measurements of the H–C–C–H coupling constant in about 100 conformationally rigid compounds, which are largely 6-membered rings with holding groups, the RMS difference between the fit Karplus curve and experimental data is 1.2 Hz Haasnoot et al. (1980). In this data set carbon and oxygen are the most frequent substituent atoms bonded to the central two carbon atoms of the H–C–C–H fragment, while nitrogen, sulfur, halogen, silicon, and selenium substituent atoms occur in smaller numbers. These two examples span the accuracy range of most calibrated Karplus equations, that is, errors in predicted coupling constants are in the range $`\pm `$0.25 to $`\pm `$1 Hz. For a specific substituent group chemistry and orientation the error may be around $`\pm `$0.5 Hz or even as low as $`\pm `$0.25 Hz in favorable cases. For small to moderate variations in substituent group chemistry or orientation the error probably is in the range $`\pm `$0.5 Hz to $`\pm `$1 Hz.
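As a rough illustration of how the calibration errors quoted above propagate into predicted couplings, the sketch below evaluates Eq. (1) at ideal synclinal and antiperiplanar torsion angles. The coefficient values are placeholders of roughly the magnitude used for H<sup>α</sup>–H<sup>β</sup> couplings, not a calibration taken from any of the references cited here.

```python
import math

def karplus(phi_deg, A, B, C):
    """Eq. (1): 3J(phi) = A cos^2(phi) - B cos(phi) + C, with phi in degrees."""
    c = math.cos(math.radians(phi_deg))
    return A * c * c - B * c + C

# Placeholder coefficients (Hz), roughly the magnitude of Halpha-Hbeta calibrations.
A, B, C = 9.5, 1.6, 1.8

for phi in (60.0, -60.0, 180.0):   # synclinal (+/-60 deg) and antiperiplanar (180 deg)
    print(f"phi = {phi:6.1f} deg   3J = {karplus(phi, A, B, C):5.2f} Hz")

# A +/-1 Hz error in the calibrated coefficients translates directly into a
# comparable error in the coupling predicted at any fixed torsion angle.
```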
Studies of the vicinal coupling constants of peptides and closely related compounds suggest that the above generalizations about the accuracy of calibrated Karplus equations apply to the vicinal coupling constants about the C<sup>α</sup>–C<sup>β</sup> and C<sup>β</sup>–C<sup>γ</sup> bonds of the cobalt dipeptide leucine side chain. Simple information about the effect of substituent group chemistry comes from alanine and its analogues, which have a single H–C<sup>α</sup>–C<sup>β</sup>–H coupling constant because of the three-fold symmetry of the methyl side chain. The experimental H–C–C–H coupling constant of ethane is 8.0 Hz Bothner-By (1965), the H–C<sup>α</sup>–C<sup>β</sup>–H coupling constant of the alanine dipeptide is 7.3 Hz Kopple et al. (1973), and the H–C<sup>α</sup>–C<sup>β</sup>–H coupling constant of the amino acid alanine remains almost exactly 7.3 Hz over the pH range of 0.5 to 12.5 Roberts and Jardetzky (1970). Replacing two protons on one ethane carbon atom with one carbon and one nitrogen substituent group drops the coupling constant by 0.7 Hz; changing the nitrogen substituent group electronegativities through the range ammonium $`>`$ acetamide $`>`$ amide and changing the carbon substituent group through the range carboxyl $`>`$ N-methylamide $`>`$ carboxylate does not change the coupling constant at all. But there are many counter examples to this seeming insensitivity to substituent group chemistry. The H–C–C–H coupling constant of propane is 7.3 Hz and of isopropylamine is 6.3 Hz Bothner-By (1965). Replacing one ethane proton with one carbon substituent group already drops the coupling constant to the value observed for alanine, which has an additional nitrogen substituent group, and replacing a second proton on the same carbon atom with a nitrogen substituent group, making the substituted carbon equivalent to the alanine $`\alpha `$-carbon, drops the coupling constant by an additional 1.0 Hz. The H–C<sup>α</sup>–C<sup>β</sup>–H coupling constants of various alanine dipeptide derivatives are in the range 6.9 to 7.3 Hz Kopple et al. (1973), which shows that even substituent group changes one peptide bond removed from the $`\alpha `$-carbon can change the coupling constant about the C<sup>α</sup>–C<sup>β</sup> bond by at least 0.4 Hz.
The $`\beta `$-carbon atoms of all the other amino acids lack the three-fold symmetry of the alanine $`\beta `$-carbon. The above examples of alanine methyl proton coupling across the C<sup>α</sup>–C<sup>β</sup> bond probably underestimate the coupling constant variation due to $`\alpha `$-carbon substituent group chemistry and totally ignore the effect of $`\beta `$-carbon substitution. For leucine, two H–C<sup>α</sup>–C<sup>β</sup>–H coupling constants between the $`\alpha `$ and $`\beta `$-protons and four heteronuclear coupling constants between the amide nitrogen and carbonyl carbon and $`\beta `$-protons are usually measurable. The simplest models for these vicinal coupling constants about the C<sup>α</sup>–C<sup>β</sup> bond assume ideal gauche or trans torsion angles between the coupled spins and have four parameters: the populations of two of the three $`\chi ^1`$ rotational isomers and the gauche and trans coupling constants. The heteronuclear N–C<sup>α</sup>–C<sup>β</sup>–H trans coupling constant of the leucine cation apparently decreases by 0.6 Hz when the cation is converted into the anion Fischman et al. (1978). The effect of $`\alpha `$-carbon substituent chemistry on the coupling constants about the C<sup>α</sup>–C<sup>β</sup> bond is also seen in the 1-substituted derivatives of 3,3-dimethylbutane, which are analogues of the amino acid leucine with the $`\alpha `$-carbon and side chain intact and with various replacements for the amine and carboxylate groups. Both the gauche and trans coupling constants of these analogues vary over a range of 0.7 Hz Whitesides et al. (1967). Furthermore, this same study found a 1 Hz difference in the average gauche coupling constant depending on whether the 1-substituent was gauche or trans to the coupled proton on the second carbon. This suggests that two separate Karplus equations are required for the two $`\beta `$-protons of leucine. Substituent orientation effects require two different Karplus equations for predicting the $`\beta `$-proton coupling constants of proline Mádi et al. (1990). In a similar way the electronegativity corrections to the $`{}_{}{}^{3}J_{\text{H}\alpha \text{ H}\beta }^{}(\varphi )`$ Karplus equation for leucine probably depend on the orientation of the substituent groups with respect to both the H<sup>α</sup> and H<sup>β</sup> protons Bystrov (1976).
The existing calibrations of the Karplus equations for vicinal coupling about the C<sup>α</sup>–C<sup>β</sup> bond suffer from several sources of error. Most assume that for each $`\alpha `$-carbon bonded atom one Karplus equation predicts the coupling of this atom to both $`\beta `$-protons. The calibrations are done with sets of model compounds that have normal or sometimes rather far from normal peptide backbone chemistries and that have a range of standard and nonstandard amino acid side chains. Errors arise because the set of model compounds is too varied or because none of the model compounds closely match the molecule of interest, whether it be a protein or as in this study the cobalt dipeptide. Model compounds such as 2,3-substituted bicyclo\[2.2.2\]octanes Fischman et al. (1980); Kopple et al. (1973), gallichrome DeMarco et al. (1978), $`\alpha `$-amide-$`\gamma `$-butyrolactones Cung and Marraud (1982), differ from standard proteins in both backbone and side chain structure. The match to the molecule of interest may depend on a choice of coupling constants about several different bonds of the model compound. Because the gallichrome backbone and ornithyl side chains out to the $`\beta `$-carbon have essentially the same structure as standard proteins, the C<sup>α</sup>–C<sup>β</sup> bonds of gallichrome are fairly well matched to the C<sup>α</sup>–C<sup>β</sup> bonds of proteins. The substituent groups on the $`\beta `$ and $`\gamma `$-carbons are obviously quite different from those on the $`\alpha `$-carbon and coupling about the C<sup>β</sup>–C<sup>γ</sup> and C<sup>γ</sup>–C<sup>δ</sup> bonds is somewhat different from that about the C<sup>α</sup>–C<sup>β</sup> bond. If coupling constants about all three bonds are chosen to calibrate the Karplus equation for coupling about the C<sup>α</sup>–C<sup>β</sup> bond, then the gallichrome model compound is not a very good match to proteins. Model compounds such as cyclo(triprolyl) peptide Kopple et al. (1973), an asparaginamide dipeptide, oxytocin cystine-1 and 6, alumicrocin Cung and Marraud (1982), match the backbone of standard proteins, but the side chains may differ from a specific amino acid side chain of interest.
The residuals between the Karplus curve and the calibration data set are perhaps the best available indication of the accuracy of a Karplus equation calibration. However, these residuals generally underestimate the errors that occur when the Karplus equation is then applied to predict the vicinal coupling constants of a particular side chain of interest. Fischman et al. calibrate the $`{}_{}{}^{3}J_{\text{C}\text{}\text{ H}\beta }^{}(\varphi )`$ Karplus equation by fitting the three Karplus coefficients to 4 coupling constants measured on two bicyclo-octanes Fischman et al. (1978). For this fit the RMS residual per degree of freedom is 0.13 Hz. Due to the small calibration data set this tiny observed residual is a completely unreliable estimate of the true residual. Kopple et al. calibrate the $`{}_{}{}^{3}J_{\text{H}\alpha \text{ H}\beta }^{}(\varphi )`$ Karplus equation by fitting to 10 coupling constants measured on seven model compounds Kopple et al. (1973). For this fit the RMS residual per degree of freedom is 0.47 Hz. The data set is just large enough to give a reliable residual estimate, but as discussed in the previous paragraph the model compounds may not be a very good match to proteins. DeMarco et al. calibrate the $`{}_{}{}^{3}J_{\text{H}\alpha \text{ H}\beta }^{}(\varphi )`$ Karplus equation by fitting 30 coupling constants measured about the C<sup>α</sup>–C<sup>β</sup>, C<sup>β</sup>–C<sup>γ</sup> and C<sup>γ</sup>–C<sup>δ</sup> bonds of the ornithyl side chains of gallichrome DeMarco et al. (1978). For this fit the RMS residual per degree of freedom is 0.92 Hz. This fairly large residual is apparently the result of fitting the coupling constants about all three side chain bonds. The errors in this calibration may be even larger than $`\pm `$1 Hz because fitting the coupling constants about all three side chain bonds makes the gallichrome model compound a poor match to the C<sup>α</sup>–C<sup>β</sup> bond of proteins and may add additional bias error to that suggested by the residuals. Cung and Marraud calibrate the $`{}_{}{}^{3}J_{\text{H}\alpha \text{ H}\beta }^{}(\varphi )`$ Karplus equation by fitting the three Karplus coefficients and eight angle parameters to 16 coupling constants measured on five model compounds Cung and Marraud (1982). For this fit the RMS residual per degree of freedom is 0.49 Hz. Note Cung and Marraud arrive at a standard deviation of half this value by computing a straight RMS average of the 16 residuals rather than by averaging over the 5 degrees of freedom actually present. Though the five model compounds used for this calibration are well matched to the C<sup>α</sup>–C<sup>β</sup> bond of proteins, the errors of this calibration are likely to be substantially larger than suggested by the residuals because the model compound torsion angles are not estimated from crystallographic structures or by molecular mechanics calculations.
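The distinction drawn above between a straight RMS of the residuals and an RMS residual per degree of freedom can be written out explicitly. The sketch below fits the three Karplus coefficients to a small invented calibration set by linear least squares and reports both quantities; the data points are illustrative only and do not reproduce any of the cited calibrations.

```python
import numpy as np

# Synthetic calibration set: (torsion angle in degrees, measured 3J in Hz).
data = [(55, 3.6), (65, 2.9), (175, 12.4), (185, 12.8),
        (-60, 3.1), (60, 3.4), (170, 11.9), (-65, 2.7),
        (180, 12.6), (58, 3.2)]

phi = np.radians([d[0] for d in data])
J   = np.array([d[1] for d in data])

# Design matrix for 3J = A cos^2(phi) - B cos(phi) + C.
X = np.column_stack([np.cos(phi) ** 2, -np.cos(phi), np.ones_like(phi)])
coef, *_ = np.linalg.lstsq(X, J, rcond=None)
residuals = J - X @ coef

n, p = len(J), 3
rms_naive = np.sqrt(np.mean(residuals ** 2))              # divides by n
rms_per_dof = np.sqrt(np.sum(residuals ** 2) / (n - p))   # divides by n - p

print("A, B, C =", np.round(coef, 2))
print(f"naive RMS residual   : {rms_naive:.2f} Hz")
print(f"RMS residual per dof : {rms_per_dof:.2f} Hz")
```

With only a handful of calibration points the two estimates can differ appreciably, which is the point made above about the Cung and Marraud calibration.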
Considering the wide variety of model compounds, the single calibration for both $`\beta `$-protons, the observed residuals, and uncertainties in the model compound structures, it seems extremely unlikely that the errors in the calibration of the Karplus equations for vicinal coupling about the C<sup>α</sup>–C<sup>β</sup> and C<sup>β</sup>–C<sup>γ</sup> bonds are significantly less than $`\pm `$1 Hz, whether the molecule of interest is a standard protein or peptide or as here the glycyl-leucine dipeptide complexed to cobalt.
## III EXPERIMENTAL RESULTS
The proton assignments in Table 1 are model dependent. These assignments depend on our assumption that the population of the leucine side chain rotational isomers with a gauche<sup>+</sup> $`\chi ^1`$ torsion angle is small compared to the population of rotational isomers with gauche<sup>-</sup> and trans $`\chi ^1`$ torsion angles. Without this assumption an unambiguous assignment is not possible. The conventional approach to assigning the $`\beta `$-protons examines the vicinal coupling constants about the C<sup>α</sup>–C<sup>β</sup> bond and exploits the fact that a weak coupling is expected for synclinal spins and a strong coupling for antiperiplanar spins. When the leucine side chain $`\chi ^1`$ torsion is gauche<sup>-</sup> the atoms H<sup>α</sup> and H<sup>β1</sup> are antiperiplanar and the atoms H<sup>α</sup> and H<sup>β2</sup> are synclinal, and when $`\chi ^1`$ is trans these angular magnitudes are reversed, that is, H<sup>β1</sup> is synclinal and H<sup>β2</sup> is antiperiplanar Kessler et al. (1987). Thus the $`{}_{}{}^{3}J_{\text{H}\alpha \text{ H}\beta }^{}`$ coupling constant does not help with H<sup>β</sup> assignment, but is very helpful in determining the ratio of gauche<sup>-</sup> to trans populations once this assignment is known. The alternating synclinal antiperiplanar geometries of the gauche<sup>-</sup> and trans rotational isomers produce a conjugate $`{}_{}{}^{3}J_{\text{H}\alpha \text{ H}\beta }^{}`$ coupling pattern that is diagnostic of the absence of gauche<sup>+</sup> $`\chi ^1`$ rotational isomers. The average of the two $`{}_{}{}^{3}J_{\text{H}\alpha \text{ H}\beta }^{}`$ couplings is independent of the rotational isomer populations and the coupling ratio $`(J_{\text{H}\alpha \text{ H}\beta \text{1}}-J_{\text{sc}})/(J_{\text{H}\alpha \text{ H}\beta \text{2}}-J_{\text{sc}})`$ is equal to the gauche<sup>-</sup> over trans population ratio. For ideal geometries the Karplus equation predicts that the synclinal coupling $`J_{\text{sc}}=A/4-B/2+C`$ and that the average coupling is $`5A/8+B/4+C`$. A very similar situation occurs with the $`{}_{}{}^{3}J_{\text{H}\beta \text{ H}\gamma }^{}`$ coupling constant. Leucine side chain rotational isomers with a gauche<sup>-</sup> $`\chi ^2`$ torsion angle are virtually excluded by backbone-independent syn-pentane interactions Dunbrack Jr and Karplus (1994). When the $`\chi ^2`$ torsion angle is gauche<sup>+</sup> the atoms H<sup>β1</sup> and H<sup>γ</sup> are antiperiplanar and the atoms H<sup>β2</sup> and H<sup>γ</sup> are synclinal, and when $`\chi ^2`$ is trans these angular magnitudes are reversed. This coupling is again helpful with populations but not assignments, and the same conjugate coupling pattern is now diagnostic of the absence of gauche<sup>-</sup> $`\chi ^2`$ rotational isomers. As noted in the introduction the $`\chi ^1`$ and $`\chi ^2`$ torsion angles are highly correlated. With only the trans gauche<sup>+</sup> and gauche<sup>-</sup> trans isomers populated, trans $`\chi ^1`$ implies gauche<sup>+</sup> $`\chi ^2`$ and gauche<sup>-</sup> $`\chi ^1`$ implies trans $`\chi ^2`$. This correlation produces a doubly conjugate $`{}_{}{}^{3}J_{\text{H}\alpha \text{ H}\beta }^{}`$ and $`{}_{}{}^{3}J_{\text{H}\beta \text{ H}\gamma }^{}`$ correlation pattern.
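A minimal sketch of the two-isomer population estimate implied by the conjugate coupling pattern is given below, assuming ideal ±60 and 180 degree torsions. The Karplus coefficients and coupling values are placeholders, not the Table 2 or Table 4 values.

```python
# Two-isomer (gauche- trans / trans gauche+) population estimate from the
# conjugate 3J(Halpha-Hbeta) pattern under ideal geometry.
A, B, C = 9.5, 1.6, 1.8                  # hypothetical Karplus coefficients (Hz)
J_ab1, J_ab2 = 7.4, 4.6                  # hypothetical experimental couplings (Hz)

J_sc = A / 4 - B / 2 + C                 # synclinal coupling
J_avg_pred = 5 * A / 8 + B / 4 + C       # population-independent average

ratio = (J_ab1 - J_sc) / (J_ab2 - J_sc)  # p(gauche- trans) / p(trans gauche+)
p_gm_t = ratio / (1 + ratio)

print(f"predicted synclinal coupling : {J_sc:.2f} Hz")
print(f"predicted average coupling   : {J_avg_pred:.2f} Hz "
      f"(observed average {(J_ab1 + J_ab2) / 2:.2f} Hz)")
print(f"gauche- trans population     : {p_gm_t:.2f}")
```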
When the trans gauche<sup>+</sup> rotational isomer predominates the $`\beta `$1-proton couples weakly to the $`\alpha `$-proton and strongly to the $`\gamma `$-proton and the $`\beta `$2-proton couples strongly to the $`\alpha `$-proton and weakly to the $`\gamma `$-proton. When the gauche<sup>-</sup> trans isomer predominates these coupling strengths are all reversed. If either one of these leucine side chain rotational isomers predominates then the $`\beta `$-proton assignment can be made by inspection of the $`{}_{}{}^{3}J_{\text{C}\text{}\text{ H}\beta }^{}`$ heteronuclear coupling constants Kessler et al. (1987) because the leucine carboxyl carbon and both $`\beta `$-protons are synclinal when the leucine side chain $`\chi ^1`$ torsion is gauche<sup>-</sup> and only the carboxyl carbon and $`\beta `$2-proton are synclinal when $`\chi ^1`$ is trans.
The experimental $`{}_{}{}^{3}J_{\text{H}\alpha \text{ H}\beta }^{}`$ and $`{}_{}{}^{3}J_{\text{H}\beta \text{ H}\gamma }^{}`$ vicinal coupling constants (Table 2) show the doubly conjugate pattern expected for leucine side chains, that is, strong – weak and weak – strong. The approximately 3 Hz difference between the weak and strong couplings indicates that both the trans gauche<sup>+</sup> and gauche<sup>-</sup> trans leucine side chain isomers are significantly populated and that the $`\beta `$-proton assignment is best determined by comparing the goodness-of-fit of the two alternative assignments. For the preliminary $`\beta `$-proton assignment in this section it is adequate to fit only the $`{}_{}{}^{3}J_{\text{H}\alpha \text{ H}\beta }^{}`$ and $`{}_{}{}^{3}J_{\text{C}\text{}\text{ H}\beta }^{}`$ coupling constants. The resulting two rotational isomer model, see methods, does not have any dependence on the $`\chi ^2`$ torsion angle; nevertheless, throughout this paragraph we maintain the assumption of highly correlated $`\chi ^1`$ and $`\chi ^2`$ torsion angles and continue to refer to these two rotational isomers as trans gauche<sup>+</sup> and gauche<sup>-</sup> trans. The goodness-of-fit of the simple two rotational isomer model is $`2\times 10^{-2}`$ for the $`\beta `$-proton assignment in Table 1 and $`2\times 10^{-3}`$ for the alternative assignment. The better fit gives population estimates of 39% trans gauche<sup>+</sup> and 61% gauche<sup>-</sup> trans with an uncertainty of $`\pm `$9%. These population estimates fall in the gray area between predominantly gauche<sup>-</sup> trans and an approximately equal mixture of both conformations. On either side of this gray area the assignment made by inspection agrees with that obtained by fitting the experimental $`{}_{}{}^{3}J_{\text{H}\alpha \text{ H}\beta }^{}`$ and $`{}_{}{}^{3}J_{\text{C}\text{}\text{ H}\beta }^{}`$ coupling constants. Suppose gauche<sup>-</sup> trans predominates. Then the $`\chi ^1`$ torsion angle is gauche<sup>-</sup>, H<sup>α</sup> and H<sup>β1</sup> are antiperiplanar, the carboxyl carbon and both $`\beta `$-protons are synclinal, and the assignment in Table 1 is correct because the $`{}_{}{}^{3}J_{\text{H}\alpha \text{ H}\beta \text{1}}^{}`$ coupling is stronger than the $`{}_{}{}^{3}J_{\text{H}\alpha \text{ H}\beta \text{2}}^{}`$ coupling and both $`{}_{}{}^{3}J_{\text{C}\text{}\text{ H}\beta }^{}`$ couplings are fairly weak. On the other hand suppose the two conformations are approximately equally mixed. Then the carboxyl carbon and the $`\beta `$1-proton are synclinal in one conformation and antiperiplanar in the other, but the carboxyl carbon and the $`\beta `$2-proton are synclinal in both conformations, and the assignment in Table 1 is again correct because the $`{}_{}{}^{3}J_{\text{C}\text{}\text{ H}\beta \text{1}}^{}`$ coupling is stronger than the $`{}_{}{}^{3}J_{\text{C}\text{}\text{ H}\beta \text{2}}^{}`$ coupling.
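The assignment comparison described in this paragraph amounts to a one-parameter least-squares fit followed by a chi-square goodness-of-fit test. The sketch below shows one way such a fit could be organized; the predicted and observed couplings and the ±1 Hz observation errors are placeholders, not the values actually used in this work.

```python
import numpy as np
from scipy.stats import chi2

# Rows: observables; columns: predicted values in the gauche- trans and
# trans gauche+ isomers under ideal geometry.  All numbers are placeholders.
pred = np.array([[11.5, 3.4],    # J(Ha-Hb1): antiperiplanar in g- t, synclinal in t g+
                 [ 3.4, 11.5],   # J(Ha-Hb2): the reverse
                 [ 1.4,  1.4],   # J(C'-Hb2): synclinal in both isomers
                 [ 1.4,  8.5]])  # J(C'-Hb1): synclinal in g- t, antiperiplanar in t g+
obs   = np.array([8.0, 5.2, 4.2, 3.9])
sigma = np.full(4, 1.0)          # assumed +/-1 Hz observation errors

def fit(pred, obs, sigma):
    """Grid-search least-squares fit of p = (p_gt, 1 - p_gt); returns p_gt and Q."""
    best = None
    for p_gt in np.linspace(0.0, 1.0, 1001):
        model = pred @ np.array([p_gt, 1.0 - p_gt])
        chisq = np.sum(((obs - model) / sigma) ** 2)
        if best is None or chisq < best[1]:
            best = (p_gt, chisq)
    p_gt, chisq = best
    dof = len(obs) - 1
    return p_gt, chi2.sf(chisq, dof)   # Q = probability of a worse fit by chance

p_gt, Q = fit(pred, obs, sigma)
print(f"gauche- trans population {p_gt:.2f}, goodness-of-fit Q = {Q:.3f}")
# Swapping the beta-proton assignment amounts to exchanging the first two
# entries of `obs`; the assignment giving the larger Q is preferred.
```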
The goodness-of-fit of the simple two rotational isomer model is only $`2\times 10^{-2}`$ because this model predicts too high an average $`{}_{}{}^{3}J_{\text{H}\alpha \text{ H}\beta }^{}`$ coupling constant and too low a $`{}_{}{}^{3}J_{\text{C}\text{}\text{ H}\beta \text{2}}^{}`$ coupling constant. As noted above the average of the two $`{}_{}{}^{3}J_{\text{H}\alpha \text{ H}\beta }^{}`$ coupling constants is $`5A/8+B/4+C`$ for ideal geometry. Karplus coefficients for this coupling Cung and Marraud (1982); DeMarco et al. (1978); Kessler et al. (1987) give average values ranging from 8.1 to 8.7 Hz. These predicted values must be compared with 6.0 Hz, which is the average of the two experimental $`{}_{}{}^{3}J_{\text{H}\alpha \text{ H}\beta }^{}`$ couplings in Table 2. One explanation for this difference is that there is a small population of rotational isomers with gauche<sup>+</sup> $`\chi ^1`$ torsion angle. This would reduce the predicted average coupling constant because both $`\beta `$-protons are synclinal to the $`\alpha `$-proton when $`\chi ^1`$ is gauche<sup>+</sup>. What is important is not the magnitude in hertz of the difference between the average predicted and experimental $`{}_{}{}^{3}J_{\text{H}\alpha \text{ H}\beta }^{}`$ couplings, but this difference expressed in standard deviations, that is, the ratio of the difference to the estimated error. The one and one-half standard deviation difference found here is not improbably large and reflects our estimate of the errors in the Karplus equation calibration, see background section, and of the errors due to the assumption of ideal geometry. In view of the known improbability of leucine gauche<sup>+</sup> $`\chi ^1`$ rotational isomers, again see background section, these last two sources of error are a more likely explanation of the difference.
The above explanation of the difference between the average predicted and experimental $`{}_{}{}^{3}J_{\text{H}\alpha \text{ H}\beta }^{}`$ coupling constants is even more plausible in view of a similar difference found between the average predicted and experimental $`{}_{}{}^{3}J_{\text{H}\beta \text{ H}\gamma }^{}`$ coupling constants. Though the dipeptide backbone conformation leaves some room for doubt about the complete absence of the gauche<sup>+</sup> $`\chi ^1`$ conformation, the evidence from crystallographic studies and conformational analysis is very good that there is at most a very small population of rotational isomers with a gauche<sup>-</sup> $`\chi ^2`$ torsion angle. Also the cobalt complex with the dipeptide backbone should have relatively little effect on the $`{}_{}{}^{3}J_{\text{H}\beta \text{ H}\gamma }^{}`$ coupling constant. For ideal geometry the average of the two $`{}_{}{}^{3}J_{\text{H}\beta \text{ H}\gamma }^{}`$ coupling constants is given by the same expression as for the $`{}_{}{}^{3}J_{\text{H}\alpha \text{ H}\beta }^{}`$ average. Karplus coefficients for the sec-butyl coupling Bothner-By (1965) give an average value of 8.5 Hz and coefficients corrected for substituent electronegativity as suggested by Pachler (Ref. Pachler, 1972, Eq. 2 and Table 4) give an average value of 8.1 Hz. The average of the two experimental couplings in Table 2 is 6.5 Hz. Error in the Karplus coefficient calibration and perhaps some departure from ideal geometry seem to be the only plausible explanations of this difference. This supports our view that overall errors of one to two hertz are entirely possible. The two rotational isomer model also predicts that the $`{}_{}{}^{3}J_{\text{C}\text{}\text{ H}\beta \text{2}}^{}`$ coupling constant is $`A/4-B/2+C`$ because the $`\beta `$2-proton is always synclinal to the leucine carboxyl carbon when rotational isomers with a gauche<sup>+</sup> $`\chi ^1`$ torsion angle are excluded. The predicted coupling Fischman et al. (1980) is 1.4 Hz and the observed value is 4.2 Hz (Table 2). A small gauche<sup>+</sup> $`\chi ^1`$ population could make up much of this two standard deviation difference, but we again favor the explanation that the Karplus calibration is not very accurate.
The $`\delta `$-proton assignments in Table 1 follow from the pattern of R$`_{\text{H}\beta \text{H}\delta }`$ cross relaxation rates in Table 2. These assignments are also model dependent. For several models the $`\delta `$-proton assignment is unambiguous once the $`\beta `$-proton assignment is selected (results not presented); however, to keep things simple we again assume that only the trans gauche<sup>+</sup> and gauche<sup>-</sup> trans leucine side chain rotational isomers are significantly populated. For both these rotational isomers the H<sup>β1</sup> to H<sup>δ1</sup> and H<sup>β2</sup> to H<sup>δ2</sup> distances are 2.8 to 2.9 angstroms. The H<sup>β1</sup> to H<sup>δ2</sup> and H<sup>β2</sup> to H<sup>δ1</sup> distances are 2.9 and 4.0 angstroms for the trans gauche<sup>+</sup> rotational isomer and reverse order to 4.0 and 2.9 angstroms for the gauche<sup>-</sup> trans isomer. The $`\delta `$-proton assignments in Table 1 produce the strong – weak – weak – strong pattern observed in the experimental relaxation rates in Table 2.
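The strong – weak – weak – strong pattern follows from the approximate r<sup>-6</sup> dependence of the cross relaxation rate on interproton distance. The sketch below, which assumes the isolated spin-pair limit and equal populations of the two rotamers, shows the expected relative rates for the distances quoted above.

```python
# Qualitative strong/weak NOE pattern from the r^-6 distance dependence of
# cross relaxation (isolated spin-pair approximation, equal isomer populations).
short, cross_near, cross_far = 2.85, 2.9, 4.0   # angstroms

rate = lambda r: r ** -6
always_short = rate(short)                              # Hb1-Hd1 and Hb2-Hd2
mixed = 0.5 * (rate(cross_near) + rate(cross_far))      # Hb1-Hd2 and Hb2-Hd1

print(f"relative rate, fixed short pairs : {always_short / always_short:.2f}")
print(f"relative rate, mixed pairs       : {mixed / always_short:.2f}")
```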
An unambiguous $`\beta `$-proton assignment is not possible if an arbitrarily large population of gauche<sup>+</sup> $`\chi ^1`$ rotational isomers is allowed. We have repeated the above least squares fit to the experimental $`{}_{}{}^{3}J_{\text{H}\alpha \text{ H}\beta }^{}`$ and $`{}_{}{}^{3}J_{\text{C}\text{}\text{ H}\beta \text{2}}^{}`$ coupling constants, while allowing rotational isomers with gauche<sup>-</sup>, gauche<sup>+</sup>, and trans $`\chi ^1`$ torsion angles. For the $`\beta `$-proton assignment in Table 1 the goodness-of-fit of this three rotational isomer model, see methods, rises to 29% and that of the alternative $`\beta `$-proton assignment rises to 94%. By this criterion either assignment is now acceptable. The population of rotational isomers with gauche<sup>+</sup> $`\chi ^1`$ torsion angles is 36% for the selected assignment and 51% for the alternative. Either of these gauche<sup>+</sup> populations seems unacceptably high. In any event the presently available models are not able to meaningfully predict the gauche<sup>+</sup> $`\chi ^1`$ population. The experimental data can be satisfactorily explained by a two rotational isomer model, which excludes gauche<sup>+</sup> $`\chi ^1`$ rotational isomers. The unambiguous assignment of the $`\beta `$ and $`\delta `$-protons probably must await the preparation of cobalt dipeptide with stereoselectively deuterated leucine side chains Ostler et al. (1993).
## IV COMPUTATIONAL RESULTS AND DISCUSSION
### IV.1 Chelate ring conformation
As discussed in the background section, conformational analysis predicts that gauche<sup>+</sup> $`\chi ^1`$ rotational isomers of the leucine side chain are favored in the absence of chelate ring puckering. Crystallographic and NMR evidence shows that the cobalt dipeptide chelate rings do pucker and that the $`\varphi _2`$ and $`\psi _2`$ torsion angles depart from 180 degrees by as much as 10 or 20 degrees. Simple inspection of backbone-dependent rotamer libraries suggests that this departure is large enough to reduce the population of gauche<sup>+</sup> $`\chi ^1`$ rotational isomers to fairly low levels. To obtain a sharper picture of the dependence of rotamer preferences on backbone conformation, we have constructed a rotamer library for a region-of-interest around the special point in $`\varphi \times \psi `$ space where gauche<sup>+</sup> $`\chi ^1`$ rotamers are most favored, that is, the point with coordinates $`\varphi =175`$ and $`\psi =175`$ degrees. This region-of-interest rotamer library differs from previous backbone-dependent rotamer libraries Dunbrack Jr and Karplus (1994, 1993) only in that a limited region of $`\varphi \times \psi `$ space is divided into annular disks around the special point instead of dividing the entire $`\varphi \times \psi `$ space into a grid of square cells. Our region-of-interest rotamer library is constructed from a list of backbone and side chain torsion angles of 7085 leucine residues on 445 nonhomologous (that is, with less than 50% sequence identity) protein chains from a recent release of the Brookhaven Protein Data Bank, restricted to structures with a resolution of 2.0 angstroms or better. Backbone angles in the region-of-interest are not very common in protein structures. Only 0, 13, 28, 72, 112, and 251 of the leucine residues have backbone $`\varphi `$ and $`\psi `$ angles in the six 10-degree-wide annular shells with outer radii of 10, 20, 30, 40, 50, and 60 degrees. There are 11 residues with a gauche<sup>+</sup> $`\chi ^1`$ torsion angle out of the 13 (92%) with backbone torsion angles in the 10 to 20 degree annulus, 21 of 28 (75%) in the 20 to 30 annulus, 29 of 72 (40%) in the 30 to 40 annulus, 10 of 112 (9%) in the 40 to 50 annulus, and 7 of 251 (3%) in the 50 to 60 annulus. The region-of-interest rotamer library shows a dramatic drop-off of gauche<sup>+</sup> $`\chi ^1`$ leucine side chain rotational isomers when the backbone torsion angles lie beyond the 20 to 30 degree annulus. Note that the $`\chi ^2`$ torsion angles of most of the gauche<sup>+</sup> $`\chi ^1`$ rotational isomers in this region-of-interest are also gauche<sup>+</sup>.
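A sketch of how such a region-of-interest tabulation could be carried out is given below, assuming a list of (φ, ψ, χ<sup>1</sup>) values for leucine residues is available; the three example residues are invented, while the 10 degree annulus width and the center at (175, 175) degrees follow the description above.

```python
import math

def annulus_index(phi, psi, center=(175.0, 175.0), width=10.0):
    """0-based index of the 10-degree annulus around `center` containing (phi, psi),
    measuring angular differences with wraparound at +/-180 degrees."""
    def wrap(d):
        return (d + 180.0) % 360.0 - 180.0
    r = math.hypot(wrap(phi - center[0]), wrap(psi - center[1]))
    return int(r // width)

def tabulate(residues, n_shells=6):
    """residues: iterable of (phi, psi, chi1_rotamer) with chi1_rotamer in {'g+', 'g-', 't'}."""
    counts = [[0, 0] for _ in range(n_shells)]     # [gauche+ count, total] per shell
    for phi, psi, chi1 in residues:
        k = annulus_index(phi, psi)
        if k < n_shells:
            counts[k][1] += 1
            counts[k][0] += (chi1 == 'g+')
    return counts

# Invented example residues, for illustration only.
example = [(172.0, 178.0, 'g+'), (160.0, 150.0, 't'), (140.0, 130.0, 'g-')]
for k, (gp, total) in enumerate(tabulate(example)):
    print(f"annulus {10 * k:3d}-{10 * (k + 1):3d} deg: {gp} gauche+ of {total}")
```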
The conformational statistics of leucines in protein database structures show clearly that side chain rotamer preferences are highly sensitive to the backbone conformation, especially near the $`\varphi `$ and $`\psi `$ angles of the cobalt dipeptide backbone. Analysis of the molecular mechanics energy map over $`\chi ^1\times \chi ^2`$ torsion space of the leucine side chain (Fig. 2) and of the backbone conformation of the energy minimized dipeptide structures suggests that the cobalt dipeptide chelate rings do indeed pucker enough to allow gauche<sup>-</sup> and trans $`\chi ^1`$ rotational isomers to predominate. The gauche<sup>-</sup> trans energy well has the lowest energy minimum, which we assign the value exactly 0 kcal/mol, and the trans gauche<sup>+</sup> well is only 0.1 kcal/mol higher. The energies of the three energy well minima with the $`\chi ^1`$ torsion angle gauche<sup>+</sup> are 6.4 kcal/mol for gauche<sup>+</sup> gauche<sup>-</sup>, 2.1 kcal/mol for gauche<sup>+</sup> gauche<sup>+</sup>, and 2.6 kcal/mol for gauche<sup>+</sup> trans. All other well minima have energies higher than 2.9 kcal/mol. The molecular mechanics energy map prediction matches the backbone-independent conformational analysis result that the trans gauche<sup>+</sup> and gauche<sup>-</sup> trans leucine side chain rotational isomers are the most stable. The backbone torsion angles of the minimized structures at the energy minimum grid point of the gauche<sup>-</sup> trans energy well are $`\varphi =162`$ and $`\psi =167`$ degrees and of the trans gauche<sup>+</sup> well are $`\varphi =166`$ and $`\psi =168`$ degrees. These leucine backbone torsion angles specify points in $`\varphi \times \psi `$ space that are 15 and 11 degrees from the point where gauche<sup>+</sup> $`\chi ^1`$ rotamers are most favored, that is, the point with coordinates $`\varphi =175`$ and $`\psi =175`$ degrees. The backbone torsion angles at the energy minimum grid point of the three gauche<sup>+</sup> $`\chi ^1`$ rotational isomers are all in the ranges $`-176\le \varphi \le -174`$ and $`178\le \psi \le 180`$ degrees and are all within 4 to 6 degrees of the point with both $`\varphi `$ and $`\psi =180`$ degrees. This seems to confirm that the backbone torsion angles of the trans gauche<sup>+</sup> and gauche<sup>-</sup> trans rotational isomers are indeed adjustments from cobalt dipeptide chelate ring planar geometry that accommodate unfavorable backbone-dependent syn-pentane interactions. To eliminate these backbone-dependent interactions the backbone conformation apparently needs to adjust only by an angle of 10 to 15 degrees in $`\varphi \times \psi `$ torsion space, which is about half that suggested by the region-of-interest rotamer library.
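The populations in this work are obtained by fitting the NMR data, not from the energy map, but a rough Boltzmann weighting of the quoted well-minimum energies shows why only the two lowest wells, and marginally gauche<sup>+</sup> gauche<sup>+</sup>, matter. The sketch below ignores well widths and entropy, so it is only an order-of-magnitude guide.

```python
import math

# Rough Boltzmann weighting of the quoted energy-well minima (kcal/mol).
wells = {"gauche- trans": 0.0, "trans gauche+": 0.1,
         "gauche+ gauche+": 2.1, "gauche+ trans": 2.6,
         "gauche+ gauche-": 6.4}
RT = 0.593   # kcal/mol at about 298 K

weights = {k: math.exp(-e / RT) for k, e in wells.items()}
Z = sum(weights.values())
for k, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{k:16s} {w / Z:6.3f}")
```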
To definitively establish the amount of chelate ring puckering and its influence on leucine side chain populations will require more extensive molecular mechanics calculations and more reliable energy map error estimates than presented here. Such molecular mechanics studies are important because, as we have already emphasized in the experimental section, the NMR data by itself does not give an unambiguous assignment of the $`\beta `$ and $`\delta `$-protons. Further molecular mechanics studies are also needed to validate our analysis of the measurability of rotational isomer populations, which is presented in the concluding subsection of this results and discussion section. There we analyze the measurability of the gauche<sup>+</sup> gauche<sup>+</sup> rotational isomer population based on the assumption that the ratio of gauche<sup>+</sup> to trans or gauche<sup>-</sup> $`\chi ^1`$ rotational isomer populations is small. The assignments presented in the experimental section also rely on this assumption.
The accuracy of molecular mechanics predictions of the relative populations of $`\chi ^1`$ rotational isomers depends on achieving the correct balance of at least three energy terms: the steric energy of syn-pentane interactions, the energy of 10 to 15 degree compensatory rotations of the leucine $`\chi ^1`$ and $`\chi ^2`$ side chain dihedral angles, and the energy of ring puckering associated with the rotation of the leucine $`\varphi `$ and $`\psi `$ backbone dihedral angles. Two general considerations suggest that these three energy terms are correctly balanced in the present molecular mechanics calculations. First, syn-pentane interaction accounting correctly predicts the observed rotamer preferences of protein side chains Dunbrack Jr and Karplus (1994). This implies that compensatory rotations of the side chain dihedral angles do not significantly diminish the importance of the syn-pentane effect in predicting rotamer preferences and that the balance of the first two of the three above energy terms is at least qualitatively correct in our calculations. Second, energy minimization of pentane structures with CHARMM parameters MacKerell Jr et al. (1998), which are very similar to those we employ, quantitatively reproduces the conformational energies observed experimentally or predicted by ab initio calculations Dunbrack Jr and Karplus (1994). This suggests that the balance of the first two of the three above energy terms is also quantitatively correct in our calculations. It remains to establish the accuracy of the last of the above three energy terms, the energy of ring puckering associated with the rotation of the leucine $`\varphi `$ and $`\psi `$ backbone dihedral angles. The molecular mechanics parameters of the cobalt chelate ring complex are expected to play an important role in determining the ring puckering energy. Most of the bond length and bond angle molecular mechanics parameters are known from previous crystallographic Prelesnik and Herak (1984) or molecular mechanics Buckingham et al. (1974) studies of cobalt complexes. At the other extreme, the torsion angle and improper torsion angle force constants with cobalt in one of the four angle-defining positions, and the charges of the nitro and cobalt atoms, are not much better than order-of-magnitude guesses. Furthermore the $`-2`$ charge distributed over the cobalt complex may introduce a substantial solvation effect Schaefer et al. (1998); Gresh et al. (1998) into the ring puckering energy. Indeed the solvation effects may be viewed as a fourth energy term that affects the accuracy of molecular mechanics predictions of the relative populations of $`\chi ^1`$ rotational isomers.
More extensive molecular mechanics studies are certainly needed to establish the effect that the molecular mechanics parameters and solvation have on the relative isomer populations. However, some simple tests of the present molecular mechanics suggest that the present parameterization is not too far off. The molecular mechanics energy map over $`\chi ^1\times \chi ^2`$ torsion space of the leucine side chain is not very sensitive to the values of the uncertain parameters. The relative energy well depths of the leucine side chain rotational isomers vary by less than about one-half kcal/mol when the torsion angle and improper torsion angle force constants involving cobalt are scaled down to zero as a group or when the distance-independent dielectric constant equal to one is replaced by a distance-dependent dielectric constant equal to the inverse atomic separation in angstroms.
### IV.2 Effect of intramolecular motions
For most leucine side chain rotational isomers the effect of thermal motions on the NOESY cross relaxation rates is several times smaller than the typical accuracy of these measurements and the effect on vicinal coupling constants is perhaps several times bigger than the accuracy of the best homonuclear coupling measurements. The effect of thermal motions on side chain vicinal coupling constants is similar in magnitude to the previously reported effect on backbone coupling constants Wang and Bax (1996). The magnitude of the thermal motion effect is estimated by comparing the calculated average NMR observables to those values calculated at the average $`\chi ^1`$ and $`\chi ^2`$ torsion angles. These averages are taken over the individual energy well regions, see methods, and thus the comparison looks at the effects of fast thermal motions within the energy wells as opposed to the effects of slower interconversion of rotational isomers. The trans gauche<sup>+</sup> and gauche<sup>-</sup> trans rotational isomers, which are the predominantly populated isomers, are typical. For these two rotational isomers the RMS differences between the average observables and the observables at the average are 0.16 Hz for the vicinal coupling constants and 0.0011 s<sup>-1</sup> for the NOESY cross relaxation rates, where these RMS differences are averaged over these two rotational isomers and over the 10 NOESY cross relaxation rates and 8 vicinal coupling constants listed in Table 2. These differences are substantially increased for rotational isomers with more anharmonic $`\chi ^1\times \chi ^2`$ torsion space energy wells. The difference between the energy well minimum position and the average $`\chi ^1`$ and $`\chi ^2`$ torsion angles gives a rough measure of the anharmonicity of a rotational isomer energy well. By this measure the trans gauche<sup>-</sup> energy well is the most anharmonic (compare Fig. 2) with a difference of 12 degrees between the minimum and average positions. For the trans gauche<sup>+</sup> and gauche<sup>-</sup> trans energy wells these differences are only 3 and 4 degrees. For the anharmonic energy well of the trans gauche<sup>-</sup> rotational isomer the RMS differences between the average observables and the observables at the average are 0.31 Hz for the vicinal coupling constants and 0.0021 s<sup>-1</sup> for the NOESY cross relaxation rates, which are both almost twice the values for the more nearly harmonic energy wells of the trans gauche<sup>+</sup> and gauche<sup>-</sup> trans rotational isomers.
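A one-dimensional toy version of the comparison made in this subsection is sketched below: a Karplus coupling is Boltzmann averaged over a slightly anharmonic torsional well and compared with the coupling evaluated at the average torsion angle and at the well minimum. The well parameters and Karplus coefficients are illustrative, not the molecular mechanics values used here.

```python
import numpy as np

def karplus(chi_deg, A=9.5, B=1.6, C=1.8):
    c = np.cos(np.radians(chi_deg))
    return A * c * c - B * c + C

# Toy one-dimensional torsional well (kcal/mol): harmonic plus a small cubic
# term to mimic anharmonicity.  All parameters are illustrative.
chi0, k_tor, cubic = -65.0, 0.004, 2.0e-5
chi = np.linspace(chi0 - 60, chi0 + 60, 2001)
energy = k_tor * (chi - chi0) ** 2 + cubic * (chi - chi0) ** 3
weight = np.exp(-energy / 0.593)              # Boltzmann factor, RT at ~298 K
weight /= np.trapz(weight, chi)

chi_avg = np.trapz(weight * chi, chi)          # thermally averaged torsion
J_avg   = np.trapz(weight * karplus(chi), chi) # <J(chi)>
print(f"<chi> = {chi_avg:7.2f} deg (minimum at {chi0} deg)")
print(f"<J(chi)> = {J_avg:.2f} Hz vs J(<chi>) = {karplus(chi_avg):.2f} Hz "
      f"vs J(chi_min) = {karplus(chi0):.2f} Hz")
```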
The difference between the average observables and the observables at the average $`\chi ^1`$ and $`\chi ^2`$ torsion angles only accounts for about half the effect of thermal motions on the NMR observables. Thermal shifting of the average $`\chi ^1`$ and $`\chi ^2`$ torsion angles of a rotational isomer also significantly changes the NMR observables. This thermal motion effect is measured by the difference between the NMR observables at the $`\chi ^1\times \chi ^2`$ energy well minimum and at the average $`\chi ^1`$ and $`\chi ^2`$ torsion angles. As already noted in the last paragraph the difference between the minimum and average positions of the trans gauche<sup>+</sup> and gauche<sup>-</sup> trans energy wells is 3 and 4 degrees. For these rotational isomers the RMS differences between the observables at the minimum and average positions are 0.17 Hz for the vicinal coupling constants and 0.0013 s<sup>-1</sup> for the NOESY cross relaxation rates. These differences are about the same as the values of 0.16 Hz and 0.0011 s<sup>-1</sup> reported in the previous paragraph for the differences between the average observables and the observables at the average. In the case of the anharmonic energy well of the trans gauche<sup>-</sup> rotational isomer the thermal shifting effect is more dramatic. The RMS differences between the observables at the minimum and average positions are 0.75 Hz and 0.0041 s<sup>-1</sup>, which are about twice the differences between the average observables and the observables at the average for the trans gauche<sup>-</sup> rotational isomer.
All these thermal motion effects can be put into perspective by comparing them to the differences in the NMR observables at the $`\chi ^1\times \chi ^2`$ energy well minimum and at ideal geometry $`\chi ^1`$ and $`\chi ^2`$ torsion angles. For the trans gauche<sup>+</sup> and gauche<sup>-</sup> trans rotational isomers the RMS differences between the minimum energy and ideal geometry observables are 0.50 Hz for the vicinal coupling constants and 0.0039 s<sup>-1</sup> for the NOESY cross relaxation rates, which is about three times the size of the thermal motion effect. This is just about what might be expected because the energy minima of the trans gauche<sup>+</sup> and gauche<sup>-</sup> trans rotational isomers differ by 9 and 12 degrees from the ideal geometry positions in $`\chi ^1\times \chi ^2`$ torsion space and these differences are three times the differences between the average $`\chi ^1`$ and $`\chi ^2`$ torsion angles and the positions of the energy well minima.
### IV.3 Necessity of molecular mechanics energy estimates
At the present accuracy of Karplus equation calibrations it is not possible to calculate the populations of all 9 rotational isomers of the leucine side chain from only the NMR data in Table 2. Before fitting the NMR data a small set of rotational isomers with nonzero population must be selected with molecular mechanics calculations or conformational analysis. If we calculate the populations of all 9 rotational isomers by fitting an 8 parameter model (Table 3 row 1), then the population estimates range from 0.0 to 0.3, though most of them are near 0.1, and the population error estimates from the moment matrix range from about $`\pm `$0.2 to $`\pm `$0.3. The standard deviations of the Monte Carlo probability density functions are all close to $`\pm `$0.1. These errors are somewhat smaller than suggested by the moment matrix because the moment matrix estimates do not take into account the nonnegativity constraints on the isomer populations. The errors given by either set of error estimates are larger than or at least nearly as large as the population estimates. This indicates that fitting the 9 rotational isomer model gives meaningless population estimates.
There are $`2^9-1=511`$ possible nonempty subsets of the set of 9 rotational isomers. As we will detail shortly, these rotational isomer subsets generate a large number of distinct solutions to the problem of fitting the experimental NMR data. As an alternative to fitting all the rotational isomer populations, we might hope to find among this large set of solutions one best solution that includes only a small number of rotational isomers and that has a uniquely high goodness-of-fit to the NMR data. A solution is initially generated from each subset of rotational isomers by fitting the experimental NMR data with the model that includes only the isomers in that subset. In general the populations of these included isomers are not all positive because active nonnegativity constraints force some populations to be exactly zero. A different set of experimental data would yield a different set of active constraints and thus a different set of positive populations. In the next subsection we fit multiple Monte Carlo simulated NMR data sets to calculate population probability distributions. But here we fit only the actual experimental data and generate one single solution from each subset of rotational isomers. Two different subsets of isomers may generate the same solution with the same positive isomer populations on some common subset of the two original subsets of isomers. Thus the number of unique solutions is substantially smaller than the number of subsets of 9 rotational isomers. A single unique solution is conveniently identified by the positive isomer populations and the isomers with positive populations are referred to as the populated isomers.
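One way to organize the subset-by-subset fitting described in this paragraph is sketched below: every nonempty subset of rotamers is fit by nonnegative least squares with an approximate unit-sum constraint, and solutions are deduplicated by the set of positively populated isomers. The design matrix and observations are random placeholders standing in for the quantities derived from Tables 2 and 4 and the energy-map geometries.

```python
import itertools
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Placeholder problem: 18 observables predicted for each of the 9 rotamers.
n_obs, n_rot = 18, 9
design = rng.uniform(1.0, 12.0, size=(n_obs, n_rot))
true_p = np.array([0.0, 0.6, 0.0, 0.0, 0.0, 0.0, 0.4, 0.0, 0.0])
sigma = 0.5
obs = design @ true_p + rng.normal(0.0, sigma, n_obs)

# Enforce sum(populations) = 1 approximately with a heavily weighted extra row.
w = 1.0e3
D = np.vstack([design, w * np.ones(n_rot)])
y = np.append(obs, w)

solutions = {}          # keyed on the tuple of positively populated isomers
for size in range(1, n_rot + 1):
    for subset in itertools.combinations(range(n_rot), size):
        cols = list(subset)
        p_sub, _ = nnls(D[:, cols], y)              # nonnegative least squares
        populated = tuple(i for i, p in zip(cols, p_sub) if p > 1e-6)
        if populated and populated not in solutions:
            resid = obs - design[:, cols] @ p_sub
            solutions[populated] = float(np.sum((resid / sigma) ** 2))

print(f"{2 ** n_rot - 1} subsets generated {len(solutions)} unique solutions")
best = min(solutions, key=solutions.get)
print("lowest chi^2 support:", best, "chi^2 =", round(solutions[best], 1))
```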
For the assignments in Table 1, experimental data in Table 2, and with the NMR observables calculated at the $`\chi ^1\times \chi ^2`$ energy map minimum positions the 511 isomer subsets generate 278 unique solutions. One half of these solutions have a goodness-of-fit better than 10% and two thirds of them have a goodness-of-fit better than 1%. There are 5 solutions that have only two populated isomers and have a goodness-of-fit between 10% and 1% and 15 solutions that have three populated isomers and have a goodness-of-fit better than 10% (Table 3 rows 2 through 21). Apparently many good solutions exist even with the restriction to solutions that have only a small number of populated isomers. Worse, the solutions with two or three populated isomers are inconsistent and over-fit the experimental data. As a group, the 5 good solutions with two populated isomers give 5 predictions of the population of each of the 9 rotational isomers. The only consistent predictions are for the gauche<sup>-</sup> gauche<sup>-</sup> and trans trans rotational isomer populations, which all 5 solutions predict are zero. The gauche<sup>-</sup> trans rotational isomer population is predicted 4 times in the range 0.5 to 0.6 and once at zero. The other 6 rotational isomer populations are each predicted once in the range 0.4 to 0.6 and 4 times at zero. These positive and zero population predictions are inconsistent because the positive population errors of the 5 solutions are all around $`\pm `$0.05. The 15 good solutions with three populated isomers give a similar picture. The gauche<sup>-</sup> trans rotational isomer population is predicted 11 times in the range 0.3 to 0.5 and 4 times at zero. The other 8 rotational isomer populations are each predicted in the range of 0.3 or 0.4 from 2 to 7 times and otherwise at zero. Again these population predictions are inconsistent because the positive population errors of the 15 solutions are all around $`\pm `$0.08. The experimental NMR data is over-fit in the sense that the predicted isomer populations depend on the model and the discrepancies between these predictions are much larger than the errors estimated from the fit of a single model.
The number of solutions with two or three populated isomers, the goodness-of-fits, population predictions, and error estimates reported in the last paragraph change very little if the average NMR observables are fit instead of those at the energy map minimum positions, compare rows 2 through 6 and 22 through 26 of Table 3. Inconsistent and over-fit solutions also result if the $`\beta `$ and $`\delta `$-proton assignments in Table 1 are reversed. For example, for the reverse assignments the 511 isomer subsets generate 285 unique solutions with an overall pattern of goodness-of-fits similar to the assignments in Table 1. There are 4 solutions that have only two populated isomers and have a goodness-of-fit better than 10% (Table 3 rows 27 through 30). The gauche<sup>+</sup> gauche<sup>+</sup> and trans gauche<sup>+</sup> rotational isomer populations are predicted 2 times in the range 0.4 to 0.6 and once at zero. Four other rotational isomer populations are each predicted once in the range 0.4 to 0.6 and 3 times at zero. All 4 solutions predict that the populations of the remaining 3 rotational isomers are zero. These population predictions are inconsistent because the positive population errors of the 4 solutions are all around $`\pm `$0.05. Again there is no best solution and the solutions that have only a small number of populated isomers are over-fit. At present the only hope for obtaining a reasonable solution is to narrow down the number of possible isomers with the help of molecular mechanics energies or conformational analysis.
### IV.4 Measurability of rotational isomer populations
An analysis of all the NOESY cross relaxation rates and vicinal coupling constants listed in Table 2 and the Karplus coefficients listed in Table 4 confirms the preliminary analysis in the experimental results section that the cobalt dipeptide leucine side chain predominantly populates the trans gauche<sup>+</sup> and gauche<sup>-</sup> trans rotational isomers in approximately equal proportions. The goodness-of-fit of this simple two rotational isomer model with the NMR observables calculated at the $`\chi ^1\times \chi ^2`$ energy map minimum positions is 0.020 for the assignments given in Table 1 and $`2.2\times 10^{-3}`$ if both the $`\beta `$ and $`\delta `$-proton assignments are reversed (Table 3 rows 31 and 32). These goodness-of-fits increase to 0.063 and 0.010 if the average NMR observables are fit instead of those at the energy map minimum positions (Table 3 rows 33 and 34). For the assignments in Table 1 the gauche<sup>-</sup> trans rotational isomer predominates with a population of $`0.625\pm 0.043`$. Switching to average NMR observables has no effect on this population or uncertainty; both increase by only $`0.001`$. The reverse assignment approximately reverses the populations, but again does not change the uncertainty.
Analysis of both the protein data bank and the molecular mechanics energy map suggests that the third most populated rotational isomer after trans gauche<sup>+</sup> and gauche<sup>-</sup> trans is gauche<sup>+</sup> gauche<sup>+</sup>; see the section on chelate ring conformation at the beginning of this results and discussion section. The gauche<sup>+</sup> gauche<sup>+</sup> rotational isomer population is expected to be small, perhaps less than 5 or 10%. Because the 4.3% population standard deviation that is given by fitting the two rotational isomer model is as large as, if not larger than, the probable population, it is unlikely that the gauche<sup>+</sup> gauche<sup>+</sup> population can be measured by fitting the NMR data. If a three rotational isomer model that includes gauche<sup>+</sup> gauche<sup>+</sup> is fit to the NMR data, then the gauche<sup>+</sup> gauche<sup>+</sup> population is $`0.245\pm 0.078`$ with a 0.18 goodness-of-fit (Table 3 row 35). Though this gauche<sup>+</sup> gauche<sup>+</sup> population mean is high, the population distribution is not inconsistent with the expected small population. The distribution gives about a 5% probability that the population is smaller than 10% and about a 1% probability it is smaller than 5%. Note that these probability estimates must be taken with caution because, as pointed out in the previous section, even models with three populated rotational isomers over-fit the experimental data. Indeed the high population mean seems to further suggest that the gauche<sup>+</sup> gauche<sup>+</sup> population is poorly measured by fitting the NMR data with the three rotational isomer model. For the reverse assignments the gauche<sup>+</sup> gauche<sup>+</sup> population reaches the extremely implausible level of $`0.385\pm 0.078`$ with a 0.62 goodness-of-fit (Table 3 row 36). The high goodness-of-fit for the three rotamer model with and without the assignments reversed again suggests that the assignments in Table 1 must be taken with caution.
The prominent populations of the trans gauche<sup>+</sup> and gauche<sup>-</sup> trans rotational isomers are best estimated by fitting experimental data. The minuscule populations of the other seven rotational isomers, except for gauche<sup>+</sup> gauche<sup>+</sup>, are best estimated from the molecular mechanics energy map. This leaves the gauche<sup>+</sup> gauche<sup>+</sup> rotational isomer on the awkward borderline of experimental measurability. The standard Monte Carlo procedure Press et al. (1989) can be altered to estimate the gauche<sup>+</sup> gauche<sup>+</sup> population and its probability distribution (Table 3 row 37). Initially the gauche<sup>+</sup> gauche<sup>+</sup> population is fixed at zero, the simple two rotational isomer model is fit, and Monte Carlo NMR observables are generated, but then these Monte Carlo observables are fit to a three rotational isomer model that includes gauche<sup>+</sup> gauche<sup>+</sup>. This differs from the standard procedure because one model generates the Monte Carlo observables and a second different model is fit to these Monte Carlo observables. The resulting Monte Carlo probability density functions (Fig. 3) give population estimates of $`0.355\pm 0.053`$ and $`0.612\pm 0.045`$ for the prominent trans gauche<sup>+</sup> and gauche<sup>-</sup> trans rotational isomers and $`0.033\pm 0.046`$ for the gauche<sup>+</sup> gauche<sup>+</sup> rotational isomer. The prominent rotational isomer population estimates have slightly smaller means and larger standard deviations than the estimates given in the last paragraph for the fit of the simple two rotational isomer model. The Monte Carlo probability density function of the gauche<sup>+</sup> gauche<sup>+</sup> rotational isomer is actually the density function of a mixed discrete and continuous distribution Feller (1971). The probability of having a population of zero is 0.471, which corresponds to the fraction of least-squares fits with an active nonnegativity constraint on the gauche<sup>+</sup> gauche<sup>+</sup> population. The continuous part of the probability density function has a population mean of 0.062 and has a roughly exponential distribution. The overall gauche<sup>+</sup> gauche<sup>+</sup> population mean is, as expected, near zero because the two rotational isomer model, which generates the Monte Carlo NMR observables, lacks the gauche<sup>+</sup> gauche<sup>+</sup> rotational isomer. The mean of the continuous part of the probability distribution is greater than 5% and suggests, perhaps more directly than the fit of the three rotational isomer model discussed in the last paragraph, that gauche<sup>+</sup> gauche<sup>+</sup> populations in the 5% range cannot be measured by fitting the NMR data with currently available models. The extent of the continuous part of the gauche<sup>+</sup> gauche<sup>+</sup> probability density function is determined by the observation errors incorporated in the least-squares design matrix and observation vector. Though this extent has nothing to do with the 2.1 kcal/mol relative energy of the gauche<sup>+</sup> gauche<sup>+</sup> rotational isomer determined by molecular mechanics or the accuracy of this energy, it places the gauche<sup>+</sup> gauche<sup>+</sup> population in a molecular mechanically realistic range and approximately captures the uncertainty in the molecular mechanics energy well depths. In this sense the Monte Carlo procedure described here blends together molecular mechanics and experimental NMR data.
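The altered Monte Carlo procedure is easy to mimic in a few lines. The sketch below is purely illustrative and is written in Python rather than the MATLAB used for the actual analysis (see the methods section); the design matrices, error levels and populations are invented placeholders, and the crude clip-and-renormalize fit stands in for the properly constrained least-squares fit described in the methods.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_populations(A, y):
    """Crude stand-in for the constrained least-squares fit: solve, clip
    negative populations to zero, and renormalize so the sum is one."""
    p, *_ = np.linalg.lstsq(A, y, rcond=None)
    p = np.clip(p, 0.0, None)
    s = p.sum()
    return p / s if s > 0 else np.full(p.size, 1.0 / p.size)

# Hypothetical error-weighted design matrices (rows = NMR observables).
A2 = rng.normal(size=(20, 2))                        # trans g+, g- trans
A3 = np.column_stack([A2, rng.normal(size=20)])      # adds a g+ g+ column
y  = A2 @ np.array([0.37, 0.63]) + rng.normal(scale=0.1, size=20)

# Generate observables from the two-isomer fit, but refit with three isomers.
y_fit = A2 @ fit_populations(A2, y)
gpgp = np.array([fit_populations(A3, y_fit + rng.normal(scale=0.1, size=20))[2]
                 for _ in range(10000)])
print("fraction of fits pinned at zero g+ g+ population:", np.mean(gpgp < 1e-6))
print("mean of the continuous part:", gpgp[gpgp > 1e-6].mean())
```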
The observation errors of the least-squares fit are dominated by the errors in the predicted NMR observables due to uncertainty in the Karplus coefficients and molecular mechanics geometries. What would happen if the errors in the predicted NMR observables could be reduced below the level of the experimental measurement errors? Given that the errors in the predicted NMR observables are about an order of magnitude larger than the experimental measurement errors, the population error estimates should also be reduced by about an order of magnitude into the 0.5% range. A population of 0.5% corresponds to a relative rotational isomer energy of around 3 kcal/mol. Excepting the prominent trans gauche<sup>+</sup> and gauche<sup>-</sup> trans and the forbidden gauche<sup>+</sup> gauche<sup>-</sup> rotational isomers, the remaining rotational isomers have energy map minima ranging 2.1 kcal/mol for gauche<sup>+</sup> gauche<sup>+</sup> to 3.9 kcal/mol for gauche<sup>-</sup> gauche<sup>-</sup>. A 0.5% population accuracy potentially places all rotational isomer populations except that of the forbidden gauche<sup>+</sup> gauche<sup>-</sup> rotational isomer within reach of experimental measurement. The measurability of the populations of all these rotational isomers can be assessed by the same Monte Carlo procedure applied in the previous paragraph to assess gauche<sup>+</sup> gauche<sup>+</sup> measurability (Table 3 row 38). Again we fit the simple two rotational isomer model with all observation errors included. The Monte Carlo NMR observables are generated with only the experimental measurement errors and an eight rotational isomer model that excludes only the forbidden gauche<sup>+</sup> gauche<sup>-</sup> rotational isomer is then fit to the Monte Carlo observables. In the initial model all rotational isomers except the prominent trans gauche<sup>+</sup> and gauche<sup>-</sup> trans rotational isomers are fixed at zero population. As a result the Monte Carlo probability densities (Fig. 4) of these rotational isomers are the density functions of mixed discrete and continuous distributions with zero population probabilities ranging from 0.52 to 0.89 and population means ranging from 0.0005 to 0.0023. The Monte Carlo density functions give population estimates of $`0.3712\pm 0.0038`$ and $`0.6203\pm 0.0051`$ for the trans gauche<sup>+</sup> and gauche<sup>-</sup> trans rotational isomers. The continuous parts of the probability density functions have population means ranging from 0.0035 to 0.0067 and have roughly exponential distributions. This seems to confirm that if experimental measurement was the only source of error, then rotational isomer populations as small as about 0.5% could be measured.
## V CONCLUSIONS
This study of the cobalt glycyl-leucine dipeptide side chain rotational isomer populations gives a realistic picture of their measurability and in particular suggests that the population of the gauche<sup>+</sup> gauche<sup>+</sup> rotational isomer is less than 5 or 10%, which is below the limit of measurability at the present accuracy of Karplus equation calibration. Better calibrations of the Karplus equations with model systems such as cobalt dipeptides promise to push the limit of measurability of side chain populations down into the 1% range. To calibrate the Karplus equations to an accuracy substantially better than $`\pm `$1 Hz probably requires a separate Karplus equation for each vicinal coupling constant, for example, the $`{}_{}{}^{3}J_{\text{H}\alpha \text{ H}\beta }^{}`$ couplings of the leucine side chain would require calibrating two Karplus equations, one for the coupling to the $`\beta 1`$-proton and a second for that to the $`\beta 2`$-proton. Each Karplus equation has three coefficients, so the calibration of the two $`{}_{}{}^{3}J_{\text{H}\alpha \text{ H}\beta }^{}`$ Karplus equations could require that as many as six coefficients be determined. As many as four of these six coefficients could be determined from the temperature dependence of the two vicinal coupling constants Whitesides et al. (1967). However, it is probably best to attempt to determine only three of these coefficients by adopting the following compromise. Because the torsion angle $`\varphi `$ between the $`\alpha `$ and either $`\beta `$-proton is always synclinal or antiperiplanar, the Karplus equations only need to be really accurate within these angular ranges. Reasonable accuracy in these ranges can probably be achieved with only two adjustable parameters per Karplus equation, say by adjusting the $`A`$ and $`C`$ coefficients and fixing the coefficient $`B`$ at the best literature value. Note that these coefficients are defined in the background section and that $`B=(J_{\text{ap}}-J_{\text{pp}})/2`$, where $`J_{\text{pp}}`$ and $`J_{\text{ap}}`$ are the coupling constants at $`\varphi =0`$ and 180 degrees. A further reduction by one in the number of coefficients could be achieved by assuming that $`J_{\text{pp}}`$ is the same for both $`\beta `$-protons. With these compromises there are only three coefficients to be determined from four measured quantities, that is, two coupling constants and their temperature variations. By measuring a half-dozen or so coupling constants about the C<sup>α</sup>–C<sup>β</sup> and C<sup>β</sup>–C<sup>γ</sup> bonds of the leucine side chain there would be more than enough measured values to calibrate all the Karplus equations and determine the unknown rotational isomer populations. Fitting the temperature dependence of the coupling constants implies a search not only for the rotational isomer populations, but also for a breakdown of the free energy differences into enthalpic and entropic contributions. The entropic contribution to the population differences can probably be estimated more accurately from molecular mechanics Haydock (1993) than by fitting the temperature dependence of the vicinal coupling constants. Before attempting to calibrate Karplus equations with the cobalt glycyl-leucine dipeptide, the critical issue of assigning the $`\beta `$ and $`\delta `$-protons must be addressed by preparing samples with stereoselectively deuterated leucine side chains Ostler et al. (1993).
(The L-amino acid leucine stereoselectively deuterated at the C<sup>δ</sup> position is available from Cambridge Isotope Laboratories, Inc., 50 Frontage Road, Andover MA 01810.) It is also important to measure the cobalt glycyl-leucine dipeptide side chain rotational isomer populations with experiments that are completely independent of any Karplus equation calibration. Triple quantum filtered nuclear Overhauser effect spectroscopy (3QF-NOESY) or tilted rotating frame Overhauser effect spectroscopy (3QF T-ROESY) experiments give torsion restraints from cross correlated relaxation Brüschweiler et al. (1989) and could supply this independent corroboration of the population estimates.
In addition to calibrating Karplus equations with the cobalt glycyl-leucine dipeptide and other cobalt dipeptides we suggest parallel studies of $`N`$-acetyl-L-leucine $`N^{}`$-methylamide (leucine dipeptide), which is the leucine analogue of the alanine dipeptide Gresh et al. (1998); Brooks III and Case (1993). The cobalt dipeptides achieve a single backbone conformation by forming two approximately planar chelate rings. This backbone conformation is uncommon in proteins and destablizes the normal side chain conformational preferences of leucine. The leucine dipeptide should eliminate this disadvantage of the cobalt glycyl-leucine dipeptide without sacrificing the significant advantage of a single backbone conformation. Indeed the alanine dipeptide C$`_{7\text{eq}}`$ backbone conformation predominates in weakly polar solvents Madison and Kopple (1980). This strongly suggests that the leucine dipeptide backbone would adopt the C$`_{7\text{eq}}`$ conformation in solvents such as acetonitrile or chloroform, which would in turn strongly favor the gauche<sup>-</sup> trans conformation of the leucine side chain. Parallel studies of cobalt dipeptides and alanine dipeptide analogues would also reveal the extent to which cobalt chelation influences the vicinal coupling constants about the C<sup>α</sup>–C<sup>β</sup> bond.
A molecular graphic with multiple superimposed structures is perhaps the most common format for reporting the presence of multiple conformations in an NMR or crystallographic protein structure. It is very tempting to suppose that the structures displayed in these molecular graphics are correct in every detail and that structures or side chain conformations not present in these molecular graphics are extremely unlikely. Gel graphics such as those presented here for the leucine side chain of the cobalt dipeptide could supplement common molecular graphics and give a more realistic picture of protein conformational distributions.
## VI METHODS
We adopt the convention that a side chain torsion angle is gauche<sup>-</sup> in the range $`-120`$ to 0 degrees, gauche<sup>+</sup> in the range 0 to 120 degrees, and trans in the range $`-180`$ to $`-120`$ or 120 to 180 degrees. Torsion angle definitions and atom names are specified by the IUPAC-IUB conventions and nomenclature IUPAC-IUB Commission on Biochemical Nomenclature (1970) (CBN). The $`\beta `$ and $`\delta `$-protons are also identified according to the Cahn–Ingold–Prelog nomenclature scheme Voet and Voet (1995) for substituents on the C<sup>β</sup> and C<sup>γ</sup> prochiral centers. The torsion angle between two atoms across a single bond is synclinal in the range $`-90`$ to $`-30`$ or 30 to 90 degrees and antiperiplanar in the range 150 to 180 or $`-180`$ to $`-150`$ degrees Klyne and Prelog (1960). These last terms are very convenient for describing the geometry of vicinally coupled spin systems.
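These angular conventions translate directly into code; the following small Python function is only an illustration of the ranges quoted above (boundary angles are assigned arbitrarily to one side).

```python
def rotamer(chi_deg):
    """Classify a side chain torsion angle (degrees) as gauche-, gauche+ or trans,
    following the convention stated above."""
    chi = ((chi_deg + 180.0) % 360.0) - 180.0   # wrap into [-180, 180)
    if -120.0 <= chi < 0.0:
        return "gauche-"
    if 0.0 <= chi < 120.0:
        return "gauche+"
    return "trans"                              # [-180, -120) or [120, 180)

assert rotamer(-60) == "gauche-" and rotamer(65) == "gauche+" and rotamer(175) == "trans"
```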
### VI.1 NMR experiments
Barium\[glycyl-L-leucinatonitrocobalt(III)\] was prepared as previously described Juranić et al. (1993). NMR spectra were recorded at 500 MHz on a Bruker AMX-500 spectrometer. Pure absorption 2D NOESY spectra were obtained by time-proportional phase incrementation Marion and Wüthrich (1983) at mixing times of 50, 100, 200, 400, and 800 ms and the cross relaxation rates were determined by one parameter linear least-squares fits of the initial build-up rates of the peak volumes Fejzo et al. (1989). The standard deviations of the cross relaxation rates were estimated from the one element least-squares moment matrices. Cross relaxation rates within one standard deviation of zero were set equal to zero. Both homonuclear and heteronuclear vicinal coupling constants were derived from 1D NMR spectra. The homonuclear couplings were analyzed Castellano and Bothner-By (1964); Ferguson and Marquardt (1964) with the LAOCN-5 program (QCPE #458).
### VI.2 Simple models for preliminary $`\beta `$-proton assignment
The goodness-of-fits of the two alternative assignments were compared for two simple models for the experimental coupling constants across the C<sup>α</sup>–C<sup>β</sup> bond. The two rotational isomer model included only the gauche<sup>-</sup> and trans $`\chi ^1`$ rotational isomers and the three rotational isomer model included all three $`\chi ^1`$ rotational isomers. For both models the experimental data were fit by linear least squares with the population sum constrained to one, so that the first model had one population parameter and the second two parameters. The torsion angles between the coupled spins were assumed to have ideal synclinal or antiperiplanar values of magnitude 60 or 180 degrees. The Karplus coefficients for the $`{}_{}{}^{3}J_{\text{H}\alpha \text{ H}\beta }^{}`$ and $`{}_{}{}^{3}J_{\text{C}^{\prime }\text{ H}\beta }^{}`$ coupling constants (Table 4) were specifically calibrated Cung and Marraud (1982); Fischman et al. (1980) for these couplings in peptides without any correction for cobalt chelation effects. The predicted coupling constants were assumed to have errors of 1.5 Hz to accommodate uncertainty in both the Karplus coefficients and geometry.
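For reference, a Karplus curve of the usual three-coefficient form can be written as below. The sketch assumes the common convention J = A cos²φ − B cos φ + C, which is consistent with the relation B = (J<sub>ap</sub> − J<sub>pp</sub>)/2 quoted in the conclusions; the coefficients shown are placeholders, not the calibrated values of Table 4.

```python
import numpy as np

def karplus(phi_deg, A, B, C):
    """Vicinal coupling constant (Hz) from the torsion angle between the spins,
    using the form J = A cos^2(phi) - B cos(phi) + C, so that
    B = (J_ap - J_pp)/2 with J_pp = J(0) and J_ap = J(180)."""
    phi = np.radians(phi_deg)
    return A * np.cos(phi) ** 2 - B * np.cos(phi) + C

A, B, C = 9.5, 1.6, 1.8   # placeholder coefficients, not those of Table 4
print(f"synclinal (60 deg): {karplus(60, A, B, C):.1f} Hz, "
      f"antiperiplanar (180 deg): {karplus(180, A, B, C):.1f} Hz")
```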
### VI.3 Molecular mechanics
Energy maps on $`\chi ^1\times \chi ^2`$ torsion space and cobalt glycyl-leucine structure coordinates were calculated with the CHARMM molecular mechanics program Brooks et al. (1983). The CHARMM structure file internal data structure was generated from custom made topology and parameter CHARMM input files. The custom topology file contained nonstandard glycine and leucine residues to generate the peptide portion of the cobalt dipeptide complex and a patch to add the cobalt and three nitro groups. The molecular mechanics atomic charges were assigned by a simple scheme. First the side chain and backbone atomic charges were set to those of a glycyl-leucine zwitterion. The net dipeptide charge was then $`-0.6`$ because one N-terminal amine proton and the backbone amide proton were removed to make cobalt bonds. On the assumption that the nitro group charge magnitudes should be somewhat less than unity and the cobalt charge should be slightly positive, we then assigned a charge of $`-0.6`$ to each nitro group and 0.4 to cobalt to give the correct total charge of $`-2.0`$ to the complete cobalt dipeptide anion. Electrostatic interactions were computed at a dielectric constant of 1.0 and without any distance cutoff, except where otherwise noted. Specialized force field parameters for the cobalt dipeptide ring system were introduced by giving nonstandard atom type codes to all the peptide backbone heavy atoms except for the leucine $`\alpha `$-carbon atom. Bond lengths and bond angles were taken from a glycyl-glycine cobalt crystallographic structure Prelesnik and Herak (1984). Bond length and bond angle force constants were taken from the force field of a polyamine cobalt complex Buckingham et al. (1974). The force constants of torsion angles with cobalt in one of the four angle defining positions were guessed by adopting the torsion angle force constants of standard peptide backbone atoms with roughly matching orbital hybridization and bonding. A glycyl-glycine cobalt crystallographic structure Herak et al. (1982) gave the initial Cartesian coordinates of the cobalt dipeptide ring system. Other ligand heavy atoms of this same structure gave initial coordinates for the three nitro nitrogens. The nitro oxygens were then built so that the plane of each nitro group was in a staggered orientation with respect to the other cobalt ligand bonds as viewed down each nitro cobalt bond. An internal coordinate representation of the leucine side chain was set up during generation of the CHARMM structure file. The torsion angle internal coordinates were set to ideal values defined in the topology file and the bond lengths and angles were filled in from the parameter file. The molecular mechanics energy map over $`\chi ^1`$ and $`\chi ^2`$ torsion space was computed by editing these two torsion angle internal coordinates, building the side chain Cartesian coordinates from internal coordinates, restraining the $`\chi ^1`$ and $`\chi ^2`$ torsion angles with an energy constant of 400 kcal/mole$`\cdot `$rad<sup>2</sup>, and energy minimizing by the steepest descent method for 20 steps followed by the adopted basis Newton-Raphson Brooks et al. (1983) method for 200 steps. This sequence of edit, build, restrain, and minimize was repeated for all 5184 points on a 5 degree grid in $`\chi ^1`$ and $`\chi ^2`$ torsion space. The map was output from CHARMM as a list with each line containing the $`\chi ^1`$ and $`\chi ^2`$ coordinates and energy at one grid point.
The Cartesian coordinates of the energy minimized dipeptide were temporarily output as a trajectory file with one trajectory file coordinate set for each grid point. Interatomic distances and torsion angles required for modeling cross relaxation rates and vicinal coupling constants were extracted from this temporary trajectory file with the CHARMM correlation and time series analysis command. To account for rotational averaging of the cross relaxation rate to a methyl group, the three interatomic distances from the methyl protons to the other cross relaxing atom were inverse sixth root mean inverse sixth power averaged with CHARMM time series manipulation commands. Each interatomic distance, averaged interatomic distance, and vicinal spin torsion angle was output from CHARMM as a separate file with one distance or angle on each line and one line for each grid point.
Compared to the CHARMM 22 developmental topology and parameter files for proteins with all hydrogens our custom topology file has similar backbone charges and side chain charges of about one half the magnitude. Our peptide parameters are generally similar to those in the CHARMM 22 developmental parameter file, except that we only define a single tetrahedral carbon atom type that does not depend on the number of bonded hydrogen atoms and our bond angle potential does not have Urey-Bradley Brooks III et al. (1988) interactions.
A separate FORTRAN program Haydock (1993) partitioned the molecular mechanics $`\chi ^1\times \chi ^2`$ energy map into energy well regions. The program employed a cellular automaton that adjusted the regions so that the boundaries passed through the energy map saddle points and followed along the ridges leading up to the tops of high energy peaks. The program named each well, assigned an index to each well, arranged the indices in a conventional order, and output a new energy map such that each output line specified the energy and well index of one grid point. This cellular automaton program is available from the first author of this paper.
### VI.4 NMR observables and Monte Carlo simulations
We calculated NMR observables (both NOESY cross relaxation rates and vicinal coupling constants), fit rotational isomer probabilities, ran Monte Carlo simulation of probability distributions, and generated graphics with the MATLAB software package. To accomplish these tasks we carefully designed and wrote a library of 36 function files containing about 1600 lines of MATLAB script. These functions passed all variables explicitly through input and output argument lists and made no references to global variables except in one minor instance of a function passed through an input argument list. Important information, such as spin assignments, isomer names, NMR measurement names, Karplus coefficient selections, and $`\chi ^1\times \chi ^2`$ torsion space grid point coordinates, was passed explicitly from low level definition routines back to high level I/O routines to minimize the possibility of mixing up array index definitions.
The NMR observables were calculated by a function file that input a list of NOESY cross relaxing protons and vicinally coupled spins, opened appropriate CHARMM distance or angle data files, which are described in the mechanics methods subsection, and output a matrix of cross relaxation rates and vicinal coupling constants, where the matrix had one column for each NMR observable and one row for each distance or angle listed in the CHARMM files. To calculate a cross relaxation rate the molecular mechanics interproton distance read from the CHARMM data file was raised to the inverse sixth power and multiplied by the average of glycine geminal $`\alpha `$-proton and leucine geminal $`\beta `$-proton scale factors, where each scale factor was equal to the experimental geminal proton cross relaxation rate times the sixth power of the average molecular mechanics distance between the geminal protons. The glycine and leucine geminal proton relaxation rates were $`0.39\pm 0.01`$ s<sup>-1</sup> and $`0.42\pm 0.01`$ s<sup>-1</sup> respectively. The geminal proton scale factor varied from 13.5 to 13.7 Å<sup>6</sup>s<sup>-1</sup> depending on whether the geminal proton distances were averaged over structures at all energy map grid points or just over the structures of the nine energy minimized rotational isomers. To calculate a vicinal coupling constant the function file selected the Karplus equation coefficients based on the names in the input list of vicinally coupled spins and inserted the molecular mechanics vicinal proton torsion angle read from the CHARMM data file into a Karplus equation with the selected coefficients. The Karplus coefficients for the $`{}_{}{}^{3}J_{\text{H}\alpha \text{ H}\beta }^{}`$ and $`{}_{}{}^{3}J_{\text{C}^{\prime }\text{ H}\beta }^{}`$ coupling constants were specifically calibrated Cung and Marraud (1982); Fischman et al. (1980) for these couplings in peptides without any correction for cobalt chelation effects. The coefficients for the $`{}_{}{}^{3}J_{\text{H}\beta \text{ H}\gamma }^{}`$ coupling constant were those suggested Pachler (1972); Bothner-By (1965) for the sec-butyl fragment without correction for the extra carbon substitution on the $`\gamma `$-carbon. The coefficients for the H<sup>α</sup>–C<sup>α</sup>–C<sup>β</sup>–C<sup>γ</sup> and C<sup>α</sup>–C<sup>β</sup>–C<sup>γ</sup>–H<sup>γ</sup> heteronuclear coupling constants were taken from a fit to theoretical coupling constants calculated for propane Breitmaier and Voelter (1990); Wasylishen and Schaefer (1973, 1972). These Karplus coefficients are summarized in Table 4.
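The distance-to-rate conversion amounts to a single 1/r⁶ scaling calibrated on the geminal proton pairs. The Python fragment below sketches that step only; the geminal distance is a placeholder, so the scale factor comes out near, but not exactly at, the 13.5 to 13.7 Å<sup>6</sup>s<sup>-1</sup> quoted above.

```python
import numpy as np

sigma_geminal = np.array([0.39, 0.42])   # s^-1, glycine and leucine geminal rates
r_geminal = 1.79                         # Angstrom; placeholder geminal H-H distance
scale = np.mean(sigma_geminal * r_geminal ** 6)   # Angstrom^6 s^-1

def cross_relaxation_rate(r):
    """NOESY cross relaxation rate predicted from an interproton distance (Angstrom)."""
    return scale / r ** 6

print(f"scale factor: {scale:.1f} A^6/s, sigma at 2.5 A: {cross_relaxation_rate(2.5):.3f} 1/s")
```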
The molecular mechanics energy, interproton distances, vicinal proton torsion angles, and NMR observables were all computed on a 5 degree $`\chi ^1\times \chi ^2`$ torsion space grid. The average $`\chi ^1`$ and $`\chi ^2`$ torsion angles and average NMR observables of each rotational isomer were computed by Boltzmann weighted summation over each energy well region in $`\chi ^1\times \chi ^2`$ torsion space. To average the $`\chi ^1`$ and $`\chi ^2`$ torsion angles over an energy well region these angles were referenced to the minimum energy grid point so that they varied continuously in the range $``$180 to 180 degrees in this energy well region. We assessed the effect of thermal motions by comparing the average NMR observables of each rotational isomer with the observables at the average $`\chi ^1\times \chi ^2`$ torsion angles, at the $`\chi ^1\times \chi ^2`$ energy map minimum position, and at the ideal geometry $`\chi ^1`$ and $`\chi ^2`$ torsion angles. The NMR observables at the average torsion angles were computed by interpolating the observables between $`\chi ^1\times \chi ^2`$ torsion space grid points with a bicubic spline. The energy map minima positions were approximated by the minima of the interpolating function. To assess the accuracy of this approximation we repeated the molecular mechanics energy minimization at the minimum energy grid point in each of the nine energy wells, except that during the last 100 steps of adopted basis Newton-Raphson minimization the $`\chi ^1`$ and $`\chi ^2`$ torsion angle restraints were released. The $`\chi ^1`$ and $`\chi ^2`$ torsion angles of these minimized unrestrained structures differed from the interpolated energy map minimum positions by less than about one half degree for all the energy wells except for the highly anharmonic trans gauche<sup>-</sup> and trans trans energy wells, where the torsion angles differed from the interpolated positions by about one and one half degrees. Given these small differences in $`\chi ^1`$ and $`\chi ^2`$ torsion angles we assumed that the differences between the distances, angles, and NMR observables interpolated to the energy map minima positions and those calculated from the minimized unrestrained structures would also be small. We arbitrarily decided to calculate the NMR observables at the $`\chi ^1\times \chi ^2`$ energy map minimum positions from the minimized unrestrained structures rather than calculate them by interpolating the observables between $`\chi ^1\times \chi ^2`$ torsion space grid points.
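The Boltzmann-weighted well averages can be sketched as follows; the arrays are assumed to be flat lists over the 5 degree grid, the well index array comes from the partitioning program described above, and the temperature is an assumed value rather than one taken from the original scripts.

```python
import numpy as np

def well_averages(energy_kcal, observable, well_index, T=298.15):
    """Boltzmann-weighted average of one observable over each chi1 x chi2 energy well."""
    kT = 0.0019872 * T                                   # Boltzmann constant in kcal/(mol K) times T
    weight = np.exp(-(energy_kcal - energy_kcal.min()) / kT)
    result = {}
    for well in np.unique(well_index):
        sel = well_index == well
        result[well] = np.sum(weight[sel] * observable[sel]) / np.sum(weight[sel])
    return result
```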
To find the rotational isomer probabilities we minimized the difference between the experimentally measured and predicted NOESY cross relaxation rates and vicinal coupling constants subject to the constraints that the probabilities were nonnegative and that their sum was one. The design matrix was formed by calculating the NMR observables for each rotational isomer as described in the last two paragraphs, arranging these observables in a matrix with one row for each observable and one column for each rotational isomer, and dividing each matrix element in each row by the observation error for that row. The observation vector was formed by dividing element-wise the column vector of experimental measurements by the column vector of observation errors. The observation errors were the RMS average of the experimental measurement errors (Table 2) and errors in the predicted NMR observables due to uncertainty in the Karplus coefficients and molecular mechanics geometries. The predicted NOESY cross relaxation rates were assumed to have uncorrelated errors of 0.01 s<sup>-1</sup> and the predicted vicinal coupling constants were assumed to have uncorrelated errors of 1.0 Hz. The linear least-squares with linear constraints problem was converted to the equivalent Gill et al. (1981) quadratic programming problem and solved by an active set strategy Gill et al. (1984).
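The constrained fit itself can be reproduced with any quadratic programming or constrained least-squares routine. The sketch below uses a generic SLSQP minimiser as a stand-in for the active set quadratic programming solution actually used; A is the error-weighted design matrix and y the error-weighted observation vector defined above.

```python
import numpy as np
from scipy.optimize import minimize

def fit_isomer_probabilities(A, y):
    """Least-squares fit of rotational isomer probabilities subject to
    p_i >= 0 and sum(p) = 1."""
    n = A.shape[1]
    result = minimize(lambda p: np.sum((A @ p - y) ** 2),
                      x0=np.full(n, 1.0 / n),
                      method="SLSQP",
                      bounds=[(0.0, None)] * n,
                      constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}])
    return result.x
```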
The accuracies of the fit rotational isomer probabilities were estimated from Monte Carlo probability density functions and from the diagonal elements of a moment matrix. We set up an unconstrained linear least-squares subproblem with the number of rotational isomer probability parameters equal to one less than the number of inactive nonnegative probability constraints. The desired moment matrix was obtained by transformation Cramér (1946) of the subproblem moment matrix to generate matrix elements for the probability parameter that was removed to enforce the probability sum constraint. The probability density functions of the fit rotational isomer probabilities (parameters) were computed by the standard Monte Carlo recipe Press et al. (1989): the experimental NMR observables were fit to yield fit parameters and fit NMR observables, the fit parameters and fit NMR observables were assumed to be the true parameters and the error free experimental NMR observables, random errors were added to the fit NMR observables to give simulated NMR observables, these simulated NMR observables were fit to give simulated fit parameters, the previous two steps were repeated many times, and the resulting large set of simulated fit parameters was histogrammed to form the Monte Carlo probability density functions of the fit parameters (rotational isomer probabilities). To keep the simulated NMR observables nonnegative, the random errors were drawn from appropriately truncated Gaussian distributions. These distributions were generated by a simple acceptance-rejection method, that is, if one sample drawn from a standard Gaussian distribution would have given a negative value to a particular NMR observable, then that sample was discarded and a new sample was drawn and tested in the same way. When the standard deviations of the Monte Carlo probability density functions were significantly smaller than the fit rotational isomer probabilities, that is, when none of the fit rotational isomer probabilities were near zero, the Monte Carlo standard deviations were almost exactly equal to the moment matrix standard deviations.
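Only the noise generation departs from a textbook Monte Carlo loop; a minimal sketch of the acceptance-rejection step, with the observable values and errors as plain arrays, is given below (again illustrative Python, not the original MATLAB).

```python
import numpy as np

rng = np.random.default_rng(1)

def simulated_observables(y_fit, sigma):
    """Add Gaussian noise to the fitted observables, redrawing any sample that
    would make an observable negative (simple acceptance-rejection truncation)."""
    y_fit = np.asarray(y_fit, dtype=float)
    sigma = np.array(np.broadcast_to(sigma, y_fit.shape), dtype=float)
    y_sim = y_fit + sigma * rng.standard_normal(y_fit.shape)
    bad = y_sim < 0.0
    while np.any(bad):
        y_sim[bad] = y_fit[bad] + sigma[bad] * rng.standard_normal(bad.sum())
        bad = y_sim < 0.0
    return y_sim
```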
### VI.5 Gel graphics
Monte Carlo probability density functions were displayed as gel graphics, which were designed to visually indicate both the discrete probability fraction at zero population and shape of the continuous probability density over the range of population from zero and one. This was accomplished by a simulated photographic process where the degree of film overexposure indicates the probability fraction at zero population and continuous gray tones represent the continuous part of the probability density. The continuous part of the probability density was prefiltered to reduce the noise from the Monte Carlo sampling. The prefilter consisted of two cycles of alternating extrapolation and Gaussian smoothing. The first cycle estimated the probability density at zero population by evenly extrapolating the probability density about zero population and then smoothing. In the second cycle the original probability density was oddly extrapolated about the just determined probability density at zero population and then smoothed. The standard deviation of the smoothing in the first cycle was half that in the second cycle. We speak of this second cycle standard deviation as the prefilter standard deviation. This somewhat cumbersome prefilter procedure smoothed the continuous part of the probability density without introducing distortion near zero population, where the probability density typically has a positive value and a nonzero slope. Note that this positive value is distinct from the discrete probability fraction at zero population, which is not prefiltered. The prefiltering eliminated noise from the Monte Carlo sampling, which would otherwise show up as distracting transverse stripes across the gel lanes. We examined the convergence of the probability density functions by repeating Monte Carlo simulations with increasing numbers of steps and comparing conventional probability density plots and gel graphics. The prefilter standard deviation was adjusted to remove Monte Carlo sampling noise without visibly obscuring the shape of the probability density. For simulations of length $`10^3`$, $`10^4`$, and $`10^5`$ steps the appropriate standard deviation expressed as the full width at half max (FWHM) was 32, 16, and 8 histogram bins, where 1024 bins covered the population range from zero to one. At $`10^3`$ steps substantial fluctuations in the shape of the conventional plots were observed, but the gel graphics had at least qualitatively converged to their final appearance. At $`10^5`$ steps only tiny differences were observed in the conventional plots and the differences in the gel graphics were imperceptible. The gel graphics presented here were created from Monte Carlo simulations of $`10^5`$ steps, even though simulations as short as $`10^3`$ steps would be adequate for many purposes.
After prefiltering an initial image was formed with the probability density functions of the rotational isomers displayed in lanes 1024 rows long by 64 columns wide. In this initial image the values across each row were constant and equal to the probability of finding the population in bins of width $`1/1024`$ covering the range zero to one, except that the values across the bottom row of each lane contained the discrete probability fraction at zero population. The probability of finding the population in a given bin is proportional to the bin width and inversely proportional to the total number of bins, but the probability fraction at zero population is independent of the bin width. Because of the relatively high resolution of the image the probability fraction at zero population is about an order of magnitude larger than the probability of finding the population in another bin. For this reason the probability fraction at zero population can be effectively displayed as a film overexposure.
To simulate film overexposure at zero population and smooth the lane edges along the continuous part of the probability distribution a Gaussian blur filter with a FWHM of 16 pixels was applied to the initial image. With this amount of blurring the typical probability density at zero population was still considerably greater than that along the continuous part of the probability distribution. The pixel values of the blurred image were treated like scene luminances Zalewski (1995) and converted into photographic print densities. First, the maximum printable luminance $`L_m`$ was set equal to the maximum probability in the continuous part of the probability distributions, that is, excluding the probability fractions at zero population. The logarithm of the luminance $`L`$ was converted to density with a characteristic curve Hunt (1995) given by $`(3x^22x^3)10\mathrm{log}2`$, where $`x=1+(\mathrm{log}L\mathrm{log}L_m)/(10\mathrm{log}2)`$ and $`0\le x\le 1`$. Note that the maximum point-gamma of our characteristic curve is 1.5. Then the printable densities were linearly mapped into gray scale values. A stepwedge bar of the 11 zones in the Zone System Adams (1995) was added to the gel graphic as an aid to calibrating the probability densities.
### VI.6 Supporting information available
Molecular mechanics, data analysis, and gel graphics input files and additional data tables, which are sufficient to reproduce all the results reported here, are included in this electronic preprint’s source file. (The supporting information includes CHARMM input files to generate energy minimized rotational isomers without and with torsion restraints and to extract the vicinal coupling torsion angles and NOESY interatomic distances, crystallographic coordinates of cobalt dipeptide chelate rings with added nitro groups, cobalt glycyl-leucine dipeptide topology and parameter files, MATLAB version 4 M-files to calculate NMR observables, fit rotational isomer probabilities, simulate Monte Carlo probability distributions, and to generate gel graphics, and tables of torsion angles and interatomic distances of the energy minimized rotational isomers without torsion restraints and of vicinal coupling constants and NOESY cross relaxation rates calculated at these angles and distances.)
## ACKNOWLEDGMENTS
We thank Martin Karplus for access to the c22g2 release of the CHARMM 22 program system and thank Roland L. Dunbrack, Jr. for providing the torsion angle list for constructing the region-of-interest rotamer library. (The torsion angle list can be downloaded from the backbone-dependent rotamer library Web page through a hyperlink to the file of dihedral angles for all chains at http://www.fccc.edu/research/labs/dunbrack/sidechain.html.) We are grateful to the National Institutes of Health (GM 34847) for financial support of this study.
# Efficiency of Symmetric Targeting for Finite-𝑇 DMRG
## I Introduction
The density matrix renormalization group (DMRG) established by White has been successfully applied to various problems in condensed matter physics. A recent technical progress in DMRG is its applications to non-Hermitian problems, such as asymmetric exclusion process, reaction-diffusion process, and quantum Hall effect.
For these non-Hermitian problems, left eigenvectors of the Hamiltonian are not equal to the complex conjugates of the right eigenvectors. Two different targeting schemes have been used for DMRG under the situation. One is to use an asymmetric density matrix, which is a partial trace between the left and the right eigenvectors. This scheme has been used for the DMRG applied to classical systems and the finite temperature (finite-$`T`$) DMRG. The other scheme is to use a symmetric density matrix, which is created by targeting both left and right eigenvectors as two individual vectors.
The purpose of this paper is to compare the numerical efficiencies of these two schemes, by observing the cut-off error of the renormalization group (RG) transformation applied to the finite temperature Heisenberg spin chain. In the next section we define the cut-off errors as a function of projection operator, that represents the freedom restriction by the RG transformation. The two targeting schemes are briefly reviewed in §3, and these schemes are compared by calculating the cut-off error numerically. Conclusions are summarized in §4.
## II Cut-off Error in RG Transformation
The finite-$`T`$ DMRG estimates the free energy of one-dimensional quantum systems by way of a precise approximation for the largest eigenvalue $`\lambda `$ of the quantum transfer matrix (QTM) $`𝒯`$. As an example of a QTM, we consider that of the $`S=1/2`$ Heisenberg chain. (See Fig.1.) We express the matrix element of the QTM as $`𝒯_{i^{\prime }j^{\prime },ij}`$, where $`i`$ $`(i^{\prime })`$ and $`j`$ $`(j^{\prime })`$ represent the upper part (U-part) and the lower part (D-part) of the column spin, respectively. It is known that the partition function of an $`N`$-site system can be approximated by that of the Trotter decomposed two-dimensional classical system; $`Z=\mathrm{Tr}𝒯^N`$. When $`N`$ is sufficiently large, $`Z`$ is well approximated as
$$𝒯^N\approx 𝐕^\mathrm{R}\lambda ^N\left(𝐕^\mathrm{L}\right)^T,$$
(1)
where $`𝐕^\mathrm{L}`$ and $`𝐕^\mathrm{R}`$ are, respectively, the left and the right eigenvector of $`𝒯`$, which satisfy the eigenvalue relations
$`{\displaystyle \sum _{i^{\prime }j^{\prime }}}V_{i^{\prime }j^{\prime }}^\mathrm{L}𝒯_{i^{\prime }j^{\prime },ij}`$ $`=`$ $`V_{ij}^\mathrm{L}\lambda `$ (2)
$`{\displaystyle \sum _{ij}}𝒯_{i^{\prime }j^{\prime },ij}V_{ij}^\mathrm{R}`$ $`=`$ $`\lambda V_{i^{\prime }j^{\prime }}^\mathrm{R}.`$ (3)
We have used the normalization $`(𝐕^\mathrm{L},𝐕^\mathrm{R})=\sum _{ij}V_{ij}^\mathrm{L}V_{ij}^\mathrm{R}=1`$ in Eq.1. It should be noted that $`𝐕^\mathrm{L}`$ is not equal to $`𝐕^\mathrm{R}`$ in general, because of the asymmetry $`𝒯\ne 𝒯^T`$.
Let us consider a formal decomposition of these vectors into the products of matrices
$`V_{ij}^\mathrm{L}`$ $`=`$ $`{\displaystyle \sum _{\xi \eta }}O_{i\xi }^\mathrm{L}v_{\xi \eta }^\mathrm{L}Q_{j\eta }^\mathrm{L}`$ (4)
$`V_{ij}^\mathrm{R}`$ $`=`$ $`{\displaystyle \sum _{\xi \eta }}O_{i\xi }^\mathrm{R}v_{\xi \eta }^\mathrm{R}Q_{j\eta }^\mathrm{R}`$ (5)
according to the convention in DMRG, where $`O^{\mathrm{L}/\mathrm{R}}`$ and $`Q^{\mathrm{L}/\mathrm{R}}`$ satisfy the orthogonality (or duality) relations
$`{\displaystyle \sum _i}O_{i\xi }^\mathrm{L}O_{i\xi ^{\prime }}^\mathrm{R}`$ $`=`$ $`\delta _{\xi \xi ^{\prime }}`$ (6)
$`{\displaystyle \sum _j}Q_{j\eta }^\mathrm{L}Q_{j\eta ^{\prime }}^\mathrm{R}`$ $`=`$ $`\delta _{\eta \eta ^{\prime }},`$ (7)
and $`\xi `$ and $`\eta `$ represent the block spin variables. The matrices $`O^{\mathrm{L}/\mathrm{R}}`$ and $`Q^{\mathrm{L}/\mathrm{R}}`$ play the role of renormalization group (RG) transformations when the freedom restriction $`1\le \xi ,\eta \le m`$ is considered for both $`𝐕^\mathrm{L}`$ and $`𝐕^\mathrm{R}`$ in Eq.3. Under the restriction, $`v_{\xi \eta }^\mathrm{L}`$ and $`v_{\xi \eta }^\mathrm{R}`$ are $`m`$-dimensional matrices that represent the renormalized states.
In DMRG applied to classical systems or finite temperature quantum systems, the RG transformations $`O^{\mathrm{L}/\mathrm{R}}`$ and $`Q^{\mathrm{L}/\mathrm{R}}`$ are determined so that the cut-off error of the partition function
$$\delta Z=\mathrm{Tr}(1-P)𝒯^N=(𝐕^\mathrm{L},(1-P)𝐕^\mathrm{R})\lambda ^N$$
(8)
is suppressed, where $`P`$ is the projection operator
$`P_{i^{\prime }j^{\prime },ij}`$ $`=`$ $`P_{i^{\prime }i}^\mathrm{U}P_{j^{\prime }j}^\mathrm{D}`$ (9)
$`=`$ $`{\displaystyle \sum _\xi ^m}O_{i^{\prime }\xi }^\mathrm{R}O_{i\xi }^\mathrm{L}{\displaystyle \sum _\eta ^m}Q_{j^{\prime }\eta }^\mathrm{R}Q_{j\eta }^\mathrm{L},`$ (10)
that represents the Hilbert space restriction by the RG transformation; $`P^\mathrm{U}`$ and $`P^\mathrm{D}`$ are the projection operators for the U- and D-part, respectively. Since the operator $`1-P=1-P^\mathrm{U}P^\mathrm{D}`$ in Eq.5 can be factorized as
$$(1-P^\mathrm{U})+(1-P^\mathrm{D})-(1-P^\mathrm{U})(1-P^\mathrm{D}),$$
(11)
and the third term is negligible when it is applied to $`𝐕^\mathrm{L}`$ and $`𝐕^\mathrm{R}`$, we can precisely estimate the relative cut-off error $`\delta Z/Z`$ by calculating the inner product
$`(𝐕^\mathrm{L},(1-P^\mathrm{U})𝐕^\mathrm{R})+(𝐕^\mathrm{L},(1-P^\mathrm{D})𝐕^\mathrm{R})`$ (12)
$`=\left(1-\mathrm{Tr}P^\mathrm{U}\rho ^\mathrm{U}\right)+\left(1-\mathrm{Tr}P^\mathrm{D}\rho ^\mathrm{D}\right),`$ (13)
where $`\rho ^\mathrm{U}`$ and $`\rho ^\mathrm{D}`$ are the asymmetric density matrices
$`\rho _{i^{\prime }i}^\mathrm{U}`$ $`=`$ $`{\displaystyle \sum _j}V_{i^{\prime }j}^\mathrm{L}V_{ij}^\mathrm{R}`$ (14)
$`\rho _{j^{\prime }j}^\mathrm{D}`$ $`=`$ $`{\displaystyle \sum _i}V_{ij^{\prime }}^\mathrm{L}V_{ij}^\mathrm{R}`$ (15)
that satisfy the normalization $`\mathrm{Tr}\rho ^\mathrm{U}=\mathrm{Tr}\rho ^\mathrm{D}=(𝐕^\mathrm{L},𝐕^\mathrm{R})=1`$. Let us keep in mind that $`\mathrm{Tr}P^\mathrm{U}\rho ^\mathrm{U}`$ and $`\mathrm{Tr}P^\mathrm{D}\rho ^\mathrm{D}`$ are essential for the cut-off error of the RG transformation by $`P=P^\mathrm{U}P^\mathrm{D}`$.
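In matrix notation the two density matrices are simple contractions of the eigenvectors. The following numpy sketch (not taken from the original work) assumes that $`𝐕^\mathrm{L}`$ and $`𝐕^\mathrm{R}`$ are stored as matrices V[i, j] over the U-part index i and the D-part index j.

```python
import numpy as np

def asymmetric_density_matrices(VL, VR):
    """rho^U_{i'i} = sum_j VL[i',j] VR[i,j] and rho^D_{j'j} = sum_i VL[i,j'] VR[i,j],
    with the eigenvectors normalised so that (V^L, V^R) = 1."""
    VL = VL / np.sum(VL * VR)
    rho_U = VL @ VR.T
    rho_D = VL.T @ VR
    assert np.isclose(np.trace(rho_U), 1.0) and np.isclose(np.trace(rho_D), 1.0)
    return rho_U, rho_D
```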
## III Asymmetric and Symmetric Targeting
Two different targeting schemes have been used to determine the RG transformation matrices $`O^{\mathrm{L}/\mathrm{R}}`$ and $`Q^{\mathrm{L}/\mathrm{R}}`$. One is to obtain them by diagonalizing the asymmetric density matrices in Eq.9
$`\rho _{i^{\prime }i}^\mathrm{U}`$ $`=`$ $`{\displaystyle \sum _\xi }O_{i^{\prime }\xi }^\mathrm{R}w_\xi O_{i\xi }^\mathrm{L}`$ (16)
$`\rho _{j^{\prime }j}^\mathrm{D}`$ $`=`$ $`{\displaystyle \sum _\eta }Q_{j^{\prime }\eta }^\mathrm{R}w_\eta Q_{j\eta }^\mathrm{L},`$ (17)
where $`w_\xi `$ is the common eigenvalue for both $`\rho ^\mathrm{U}`$ and $`\rho ^\mathrm{D}`$ in the order of decreasing absolute value; normally all the $`w_\xi `$ are positive. The projection operators $`P^\mathrm{U}`$, $`P^\mathrm{D}`$, and $`P`$ created from $`O^{\mathrm{L}/\mathrm{R}}`$ and $`Q^{\mathrm{L}/\mathrm{R}}`$ in Eq.10 are asymmetric, as $`\rho ^\mathrm{U}`$ and $`\rho ^\mathrm{D}`$ are. Let us call such a construction of $`P^{\mathrm{U}/\mathrm{D}}`$ asymmetric targeting. In this case, the relative cut-off error in Eq.8 can be calculated from the eigenvalues of the asymmetric density matrix as
$$2-\mathrm{Tr}P^\mathrm{U}\rho ^\mathrm{U}-\mathrm{Tr}P^\mathrm{D}\rho ^\mathrm{D}=2(1-\sum _\xi ^mw_\xi ).$$
(18)
It is possible to choose $`O_{i\xi }^{\mathrm{L}/\mathrm{R}}`$ and $`Q_{i\eta }^{\mathrm{L}/\mathrm{R}}`$ so that $`v_{\xi \eta }^\mathrm{L}`$ and $`v_{\xi \eta }^\mathrm{R}`$ become simultaneously diagonal: $`v_{\xi \eta }^\mathrm{L}=v_{\xi \eta }^\mathrm{R}=\delta _{\xi \eta }\omega _\xi `$ where $`\omega _\xi ^2=w_\xi `$. Thus we can interpret the decomposition in Eq.3 as an extension of the singular value decomposition for the dual vectors $`𝐕^\mathrm{L}`$ and $`𝐕^\mathrm{R}`$.
The other targeting scheme is to treat $`𝐕_{}^\mathrm{L}`$ and $`𝐕_{}^\mathrm{R}`$ as individual vectors, as they simultaneously target ground and excited states in DMRG applied to Hermitian quantum systems. In this case, the RG transformations are obtained by first creating the symmetric density matrices
$`\overline{\rho }_{i^{\prime }i}^\mathrm{U}`$ $`=`$ $`{\displaystyle \frac{1}{2}}{\displaystyle \sum _j}V_{i^{\prime }j}^\mathrm{L}V_{ij}^\mathrm{L}+{\displaystyle \frac{1}{2}}{\displaystyle \sum _j}V_{i^{\prime }j}^\mathrm{R}V_{ij}^\mathrm{R}`$ (19)
$`\overline{\rho }_{j^{\prime }j}^\mathrm{D}`$ $`=`$ $`{\displaystyle \frac{1}{2}}{\displaystyle \sum _i}V_{ij^{\prime }}^\mathrm{L}V_{ij}^\mathrm{L}+{\displaystyle \frac{1}{2}}{\displaystyle \sum _i}V_{ij^{\prime }}^\mathrm{R}V_{ij}^\mathrm{R}`$ (20)
and then by diagonalizing them
$`\overline{\rho }_{i^{\prime }i}^\mathrm{U}`$ $`=`$ $`{\displaystyle \sum _\xi }O_{i^{\prime }\xi }\overline{w}_\xi ^\mathrm{U}O_{i\xi }`$ (21)
$`\overline{\rho }_{j^{\prime }j}^\mathrm{D}`$ $`=`$ $`{\displaystyle \sum _\eta }Q_{j^{\prime }\eta }\overline{w}_\eta ^\mathrm{D}Q_{j\eta },`$ (22)
where we have dropped the label $`\mathrm{L}`$ and $`\mathrm{R}`$ from $`O_{}^{\mathrm{L}/\mathrm{R}}`$ and $`Q_{}^{\mathrm{L}/\mathrm{R}}`$, because $`O_{}^\mathrm{L}=O_{}^\mathrm{R}`$ and $`Q_{}^\mathrm{L}=Q_{}^\mathrm{R}`$. In this case, the projection operators
$`\overline{P}_{i^{\prime }i}^\mathrm{U}`$ $`=`$ $`{\displaystyle \sum _\xi ^m}O_{i^{\prime }\xi }O_{i\xi }`$ (23)
$`\overline{P}_{j^{\prime }j}^\mathrm{D}`$ $`=`$ $`{\displaystyle \sum _\eta ^m}Q_{j^{\prime }\eta }Q_{j\eta }`$ (24)
are symmetric. Let us call such a construction of $`\overline{P}^{\mathrm{U}/\mathrm{D}}`$ symmetric targeting. Unlike the asymmetric targeting, it is impossible to make the $`m`$-dimensional matrices $`v_{\xi \eta }^\mathrm{L}`$ and $`v_{\xi \eta }^\mathrm{R}`$ simultaneously diagonal. This targeting scheme is often used because there is no need to diagonalize the asymmetric density matrix, which requires special numerical care. It should be noted that the relative cut-off error
$$\delta Z/Z=1-\mathrm{Tr}\overline{P}^\mathrm{U}\rho ^\mathrm{U}+1-\mathrm{Tr}\overline{P}^\mathrm{D}\rho ^\mathrm{D}$$
(25)
in the symmetric targeting is not directly related to the eigenvalues of the symmetric density matrices in Eq. 12.
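Given the exact eigenvectors, both cut-off errors are straightforward to evaluate. The numpy sketch below assumes identical U- and D-parts (as in the comparison that follows) and is an illustration only, not the code used to produce Figs. 2 and 3.

```python
import numpy as np

def cutoff_errors(VL, VR, m):
    """Relative cut-off error delta Z / Z for asymmetric (Eq. 18) and symmetric
    (Eq. 25) targeting, with VL, VR the eigenvectors as matrices V[i, j]."""
    VL = VL / np.sum(VL * VR)                   # enforce (V^L, V^R) = 1
    rho = VL @ VR.T                             # asymmetric density matrix rho^U

    # Asymmetric targeting: keep the m eigenvalues of largest magnitude.
    w = np.linalg.eigvals(rho)
    kept = w[np.argsort(-np.abs(w))][:m]
    err_asym = 2.0 * (1.0 - np.sum(kept.real))

    # Symmetric targeting: project with the top m eigenvectors of rho-bar^U.
    rho_bar = 0.5 * (VL @ VL.T + VR @ VR.T)
    vals, vecs = np.linalg.eigh(rho_bar)
    O = vecs[:, np.argsort(-vals)[:m]]
    err_sym = 2.0 * (1.0 - np.trace(O @ O.T @ rho))
    return err_asym, err_sym
```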
Now let us compare the relative cut-off errors for both asymmetric and symmetric targeting. We choose the $`S=1/2`$ isotropic Heisenberg spin chain as the reference system, and fix the Trotter number $`M=7`$ so that we can obtain the eigenvectors of $`𝒯`$ exactly; we have to know $`𝐕^\mathrm{L}`$ and $`𝐕^\mathrm{R}`$ exactly in order to evaluate $`\delta Z/Z`$. We consider the case where the U-part contains the same number of spin variables as the D-part; this U-D division is normally used for the infinite system algorithm. Since the U-part is identical to the D-part, $`1-\mathrm{Tr}P^\mathrm{U}\rho ^\mathrm{U}=1-\mathrm{Tr}P^\mathrm{D}\rho ^\mathrm{D}`$ holds for Eq.11 and $`1-\mathrm{Tr}\overline{P}^\mathrm{U}\rho ^\mathrm{U}=1-\mathrm{Tr}\overline{P}^\mathrm{D}\rho ^\mathrm{D}`$ for Eq.15. Figure 2 shows the relative cut-off errors when the imaginary time step $`J\mathrm{\Delta }\tau `$ is equal to $`1/7`$. As is seen, $`1-\mathrm{Tr}P^\mathrm{U}\rho ^\mathrm{U}`$ decreases monotonically with respect to $`m`$, and is always positive. On the other hand, the damping of $`1-\mathrm{Tr}\overline{P}^\mathrm{U}\rho ^\mathrm{U}`$ with respect to $`m`$ is oscillatory; $`1-\mathrm{Tr}\overline{P}^\mathrm{U}\rho ^\mathrm{U}`$ is not always positive, and the calculated partition function is not a variational lower bound. For most values of $`m`$ the error $`1-\mathrm{Tr}P^\mathrm{U}\rho ^\mathrm{U}`$ is smaller than $`|1-\mathrm{Tr}\overline{P}^\mathrm{U}\rho ^\mathrm{U}|`$, which shows the superiority of asymmetric targeting for the finite-$`T`$ DMRG.
Figure 3 shows the cut-off errors for a relatively high temperature, $`J\mathrm{\Delta }\tau =1/14`$. In both Figs.2 and 3, we have to keep twice as large a value of $`m`$ for the symmetric targeting in order to reach the same cut-off error as the asymmetric targeting. This may be explained by the fact that the asymmetric projection operator $`P^{\mathrm{U}/\mathrm{D}}`$ is created from $`2m`$ linearly independent vectors, while the symmetric projection operator $`\overline{P}^{\mathrm{U}/\mathrm{D}}`$ is created from only $`m`$ orthogonal vectors.
## IV Conclusion
We have compared the numerical efficiency of the symmetric and asymmetric targeting schemes when they are applied to the finite temperature DMRG. It is shown that the cut-off error calculated by the symmetric targeting is larger than that of the asymmetric targeting; as far as the cut-off error is concerned, the asymmetric targeting is superior to the symmetric targeting.
If we keep twice as large a value of $`m`$ for the symmetric targeting, we can recover the numerical precision of the asymmetric targeting. Therefore, for problems that do not require large $`m`$ in the DMRG calculations, the symmetric targeting is of use, in the sense that it does not require the diagonalization of an asymmetric density matrix, and is free from complex eigenvalue problems.
T.N. thanks G. Sierra, M. A. Martín-Delgado, and S. R. White for discussions about multi-state targeting. The present work is partially supported by a Grant-in-Aid from the Ministry of Education, Science and Culture of Japan.
# Classifying LEP Data with Support Vector Algorithms
## 1 Classification algorithms
Artificial Neural Networks (ANN) are a useful tool to solve multi-dimensional classification problems in high energy physics, in cases where one-dimensional cut techniques are not sufficient. They are used both as hard-coded chips for very fast low-level pattern recognition in on-line triggering and as a statistical tool for particle and event classifications in offline data analysis. In offline data analysis, a Monte Carlo simulation of the physics process and the detector response is necessary to train an ANN by supervised learning. ANN algorithms have been applied successfully in classification problems such as gluon-jet tagging and b-quark tagging. The ANN classifiers constructed in this paper have sigmoid nodes. Design and tuning issues were solved by applying practical experience rules.
This paper presents the application of a recently proposed machine-learning method, called Support Vector Machines (SVM) , to high energy physics data. The underlying idea is to map the patterns, i.e., the $`n`$-dimensional vectors $`𝐱`$ of $`n`$ input variables, from the input space to a higher dimensional feature space with a non-linear transformation (Fig. 1). Gaussian radial basis functions are used as a kernel for the mapping. After this mapping the problem becomes linearly separable by hyperplanes. The hyperplane which maximises the margin is defined by the support vectors which are the patterns lying closest to the hyperplane. This hyperplane which is determined with a training set is expected to ensure an optimal separation of the different classes in the data.
In many problems a complete linear separation of the patterns is not possible and additional slack variables for patterns not lying on the correct side of the hyperplane are therefore introduced. The training of an SVM is a convex optimisation problem. This guarantees that the global minimum can be found, which is not the case when minimising the error function for an ANN with back-propagation. The CPU time needed to find the hyperplane scales approximately with the cube of the number of patterns.
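As an illustration only (the study itself predates today's machine-learning libraries), an SVM with a Gaussian radial basis function kernel can be set up along the following lines; the toy data, labels and the hyperparameters C and gamma are placeholders.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 14))                 # 14 input variables per hemisphere
y = (X[:, 0] + X[:, 1] ** 2 > 1.0).astype(int)  # toy "charm / not charm" labels

clf = SVC(kernel="rbf", C=1.0, gamma=0.1)       # Gaussian radial basis function kernel
clf.fit(X, y)
print("support vectors:", clf.n_support_.sum())
print("tagged fraction:", clf.predict(X).mean())
```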
## 2 Data sets and the OPAL experiment
Two distinctly different problems were chosen for comparing ANN and SVM classifiers, charm tagging and muon identification.
The first problem is to classify (tag) $`\text{e}^+\text{e}^{-}\rightarrow \text{q}\overline{\text{q}}`$ events according to the flavour of the produced quarks, separating c-quark events from light quark (uds) and b-quark events. Flavour tagging is necessary for precision measurements of electroweak parameters of the Standard Model. The events are divided into two hemispheres by a plane perpendicular to the thrust axis. The flavour tag is applied separately to both hemispheres, which contain the jets from the two produced quarks. For a signal (s) with background (bg) the efficiency $`\epsilon `$ is defined as
$$\epsilon =N_{\mathrm{tag}}^\mathrm{s}/N_{\mathrm{tot}}^\mathrm{s},$$
where $`N_{\mathrm{tag}}^\mathrm{s}`$ are the number of correctly tagged hemispheres and $`N_{\mathrm{tot}}^\mathrm{s}`$ are all signal hemispheres in the sample. The purity $`\pi `$ is given by
$$\pi =N_{\mathrm{tag}}^\mathrm{s}/\left(N_{\mathrm{tag}}^\mathrm{s}+N_{\mathrm{tag}}^{\mathrm{bg}}\right)$$
with $`N_{\mathrm{tag}}^{\mathrm{bg}}`$ being the number of tagged background hemispheres.
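For concreteness, the two definitions above translate into the following minimal sketch (array names are assumptions), applied to a test sample with known true labels and a boolean tag decision:

```python
import numpy as np

def efficiency_and_purity(is_signal, is_tagged):
    """Boolean arrays over all hemispheres of the test sample."""
    n_sig_tot = np.sum(is_signal)
    n_sig_tag = np.sum(is_signal & is_tagged)
    n_bkg_tag = np.sum(~is_signal & is_tagged)
    eff = n_sig_tag / n_sig_tot                # epsilon = N_tag^s / N_tot^s
    pur = n_sig_tag / (n_sig_tag + n_bkg_tag)  # pi = N_tag^s / (N_tag^s + N_tag^bg)
    return eff, pur
```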
Due to the high mass ($`5`$ GeV) and long lifetime ($`1.5`$ ps) of b hadrons, hemispheres containing b-quarks can be tagged using an ANN with typical efficiencies of 25% and purities of about 92% .
However, for the lighter charm quark, the measured fragmentation properties and secondary vertex quantities are very similar in charm events and uds events (Fig. 2).
High purity charm tags with low efficiency are possible using D<sup>∗±</sup> mesons or leptons with high transverse momentum from semi-leptonic decays. Applying an ANN or SVM charm tag to kinematic variables defined for all charm events is expected to increase the charm tagging efficiency at the cost of lower purities.
The second problem is the identification of muons which are produced in the fragmentation of $`\text{e}^+\text{e}^{}\text{q}\overline{\text{q}}`$ events. Muons are usually not absorbed in the calorimeters. They are measured in muon chambers which build the outer layer of a typical collider detector. A signal in these chambers which is matched to a track in the central tracking chamber is already a good muon discriminator.
The OPAL detector at LEP has been extensively described elsewhere . The event generator JETSET 7.4 is used to simulate $`\text{e}^+\text{e}^{}`$ annihilation events ($`\text{e}^+\text{e}^{}\text{q}\overline{\text{q}}`$), including the fragmentation of quarks into hadrons measured in the detector. The fragmentation model has to be tuned empirically to match kinematic distributions as measured with the detector . The response of the detector is also simulated. A data set of simulated $`\text{e}^+\text{e}^{}`$ collisions at a centre-of-mass energy $`\sqrt{s}=m_{\mathrm{Z}^0}`$ is used for the charm identification problem. A second Monte Carlo data set at $`\sqrt{s}=189`$ GeV has been used for the muon identification problem.
## 3 Problem 1: Charm quark tagging
The Monte Carlo events had to fulfil preselection cuts which ensure that an event is well reconstructed in the detector. These cuts result in a first overall reduction of the reconstruction efficiency. The input variables for the machine-learning algorithms were chosen from a larger dataset of 27 variables, containing various jet shape variables, e.g. Fox-Wolfram moments and eigenvalues of the sphericity tensor, plus several secondary vertex variables and lepton information. A jet finding algorithm clusters the tracks and the calorimeter clusters into jets. Only the highest energy jet per hemisphere is used. The variables containing information about high transverse momentum leptons were removed in order to avoid a high correlation of this charm tag with the charm tag using leptons. The 14 variables with the largest influence on the performance of an ANN (27-27-1 architecture), trained to classify charm versus not-charm, were picked from the larger set. The variable selection method used is equivalent to a method which selects the variables with the largest connecting weights between the input and the first layer. This method has been shown to perform a fairly good variable selection . It would be interesting to try Hessian based selection methods in comparison . A ANN classifier with a 14-14-1 architecture was found to have the best performance. More complex architectures did not improve the classification.
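A minimal sketch of the weight-based selection criterion mentioned above (purely illustrative: the data here are random, and the details of the original procedure may differ) ranks the candidate variables by the summed absolute weights connecting each input to the first hidden layer:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n_all = 27
X = rng.normal(size=(2000, n_all))
# toy target in which only the first few variables carry information
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=2000)) > 0

net = MLPClassifier(hidden_layer_sizes=(n_all,), activation="logistic",
                    max_iter=500).fit(X, y)

# net.coefs_[0] has shape (n_inputs, n_hidden): sum |weights| per input variable
importance = np.abs(net.coefs_[0]).sum(axis=1)
selected = np.sort(np.argsort(importance)[::-1][:14])
print("selected variable indices:", selected)
```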
At generator level five different quark types are distinguished. Due to their very similar decay and jet properties, Monte Carlo events coming from u,d and s quarks are put into one single class (uds). The data set consists of 10<sup>5</sup> hemispheres per uds, c and b class. However, the efficiencies and purities are calculated assuming a mixture of quark flavours according to the Standard Model prediction. This set is divided into training, validation and test sets of equal size. The learning machines were trained on equal number of events from the three classes. The supervision during the learning phase consisted of a charm versus not-charm label (udsb), thus distinguishing only two classes.
The outputs of both learning machines are shown in Fig. 3. The two classes c and udsb are separated by requiring a minimum value of the output. This cut defines the efficiency and purity of the tagged sample. The purity $`\pi `$ as a function of the efficiency $`\epsilon `$ for the two charm tags is shown in Figure 3 with the statistical errors. The performance of the SVM is comparable to that of the ANN, with a slightly higher purity $`\pi `$ for the ANN at larger efficiencies $`\epsilon `$.
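The purity-versus-efficiency curves are obtained by scanning the cut on the classifier output; a minimal sketch (with assumed arrays `scores` for the outputs and `is_charm` for the true labels of the test hemispheres) is:

```python
import numpy as np

def purity_vs_efficiency(scores, is_charm, n_cuts=50):
    eff, pur = [], []
    for cut in np.linspace(scores.min(), scores.max(), n_cuts):
        tagged = scores > cut
        n_tag = np.sum(tagged)
        if n_tag == 0:
            continue
        n_sig_tag = np.sum(tagged & is_charm)
        eff.append(n_sig_tag / np.sum(is_charm))
        pur.append(n_sig_tag / n_tag)
    return np.array(eff), np.array(pur)
```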
## 4 Problem 2: Muon identification
The muon candidates are preselected by requiring a minimum track momentum of 2 GeV and by choosing the best match in the event between muon chamber track segment and the extrapolated central detector track. Ten discriminating variables containing muon matching, hadron calorimeter and central detector information on the specific energy loss, $`\mathrm{d}E/\mathrm{d}x`$, of charged particles were chosen from a larger set of variables.
The Monte Carlo data set consists of $`310^4`$ muons and $`310^4`$ fake muon candidates. This set was divided into training, validation and test sets of equal size. After training, the tag is defined by requiring a certain value for the output of the learning machines. The resulting purity as a function of efficiency for muon identification is shown in Fig. 4. For high efficiency the performance of the SVM is very similar to the ANN.
## 5 Conclusion
We have compared the performance of Support Vector Machines and Artificial Neural Networks in the classification of two distinctly different problems in high energy collider physics: charm-tagging and muon identification. The constructed SVM and ANN classifiers give consistent results for efficiencies and purities of tagged samples.
## Acknowledgements
This work has been supported by the Deutsche Forschungsgemeinschaft with the grants SO 404/1-1, JA 379/5-2, JA 379/7-1. We also would like to thank Frank Fiedler and Andreas Sittler of DESY for their help with the preparation of the OPAL data. We thank Gunnar Rätsch and Sebastian Mika of GMD-FIRST for helping to analyse the data.
|
no-problem/9905/hep-ph9905250.html
|
ar5iv
|
text
|
# THE 𝐽^{𝑃𝐶}=0⁺⁺ SCALAR MESON NONET AND GLUEBALL OF LOWEST MASS
## 1 Introduction
This session of the workshop is devoted to the study of the “sigma” particle, which is related to the large $`S`$-wave $`\pi \pi `$ scattering amplitude; it peaks around 800 MeV and again near 1300 MeV. The nature of this S-wave enhancement has been under discussion since the very beginning of $`\pi \pi `$ interaction studies (for a summary of the early phase of studies in the seventies, see ), and the interpretation is still developing. Its role in S-matrix and Regge theory, in chiral theories and in $`q\overline{q}`$ spectroscopy has been considered ever since; after the advent of QCD the possibility of glueball spectroscopy has opened up as well, which is the focus of our attention. In order to obtain the proper interpretation of the “sigma”, a classification of all low lying $`J^{PC}=0^{++}`$ states into the $`q\overline{q}`$ nonet and glueball states appears necessary. To this end we first discuss the evidence for the low mass scalar states ($`\lesssim `$ 1600 MeV) and then proceed with an attempt at their classification as quarkonium or glueball states from their properties in production and decay. We will argue that the “sigma” is actually the lightest glueball. The main arguments for our classification will be presented; further details of this study can be found in the recent publication .
## 2 Evidence for light $`0^{++}`$ states with $`I=0`$
The Particle Data Group lists the following $`I=0`$ scalar states: $`f_0(4001200)`$ which is related to the “sigma”, $`f_0(980)`$, $`f_0(1370)`$ and $`f_0(1500)`$, not all being firmly established. The existence of a resonance is not only signaled by a peak in the mass spectrum but it requires in addition that the scattering amplitude moves along a full circle in the complex plane (“Argand diagram”).
The first two states have been studied in detail in the phase shift analysis of elastic $`\pi ^+\pi ^{}`$ scattering. As discussed by K. Rybicki, the results from high statistics experiments with unpolarized and polarized target have led to an almost unique solution up to 1400 MeV out of the total of four. On the other hand, recent data on the $`\pi ^0\pi ^0`$ final state from GAMS show a different behaviour of the S-D wave phase differences above 1200 MeV. A complete phase shift analysis would provide an important consistency check with the previous $`\pi ^+\pi ^{}`$ results. Another experiment on $`\pi ^0\pi ^0`$ pair production is in progress (BNL–E852), the preliminary mass spectrum is shown in fig.1a.
One can see a broad spectrum with two or three peaks (which we refer to as the “red dragon”). There is no question about the existence of $`f_0(980)`$ which causes the first dip near 1 GeV by its interference with the smooth “background”.
More controversial is the interpretation of the second peak which appears in the region 1200-1400 MeV in different experiments. If we remove the $`f_0(980)`$ from a global resonance fit of the spectrum the remaining amplitude phase shift moves slowly through $`90^\mathrm{o}`$ near 1000 MeV and continues rising up to 1400 MeV where it has largely completed a full resonance circle (see also). A local Breit-Wigner approximation to these phase shifts yields
$$\text{“sigma”:}m1000\text{MeV},\mathrm{\Gamma }1000\text{MeV}.$$
(1)
In this interpretation the second peak does not correspond to a second resonance – $`f_0(1370)`$ – but is another signal from the broad object. A second resonance would require a second circle which is not seen. Therefore, a complete phase shift analysis of the $`\pi ^0\pi ^0`$ data in terms of resonances is important for consolidation.
We also investigated whether the state $`f_0(1370)`$, instead, appears with sizable coupling in the inelastic channels $`\pi \pi K\overline{K},\eta \eta `$ where peaks in the considered mass region occur as well, although not all at the same position, see fig.1b,c. To this end we constructed the Argand diagrams for these channels in fig.2. A similar result for $`K\overline{K}`$ has been found already from earlier data.
The movement of the amplitudes in the complex plane (fig.2) can be interpreted in terms of a superposition of a resonance and a slowly varying background. We identify the circles with the $`f_0(1500)`$ state which has been studied in great detail by Crystal Barrel. This resonance can be seen to interfere with opposite sign in the two channels in figs.2a,b with the background and this also explains the shift of the peak positions in fig.1b,c. Thus, the structures in the 1300 MeV region do not correspond to additional circles, therefore no additional Breit-Wigner resonance $`f_0(1370)`$ is associated with the respective peaks.
## 3 The $`J^{PC}=0^{++}`$ nonet of lowest mass
As members of the nonet we take the two isoscalars $`f_0(980)`$ and $`f_0(1500)`$ which are mixtures of flavor singlet and octet states. Furthermore we include the isovector $`a_0(980)`$ and the strange $`K^{}(1430)`$. Then the only scalar states with mass below $``$ 1600 MeV left out up to now are the broad “sigma” to which we come back later and the $`a_0(1450)`$, which could be a radially excited state.
We find the mixing of the $`f_0`$ states like the one of the pseudoscalars, namely, with flavour amplitudes $`(u\overline{u},d\overline{d},s\overline{s})`$, approximately as
$$\begin{array}{llllll}f_0(980)\hfill & \leftrightarrow \hfill & \eta ^{}(958)\hfill & \leftrightarrow \hfill & \frac{1}{\sqrt{6}}(1,1,2)\hfill & \text{(near singlet)}\hfill \\ f_0(1500)\hfill & \leftrightarrow \hfill & \eta (547)\hfill & \leftrightarrow \hfill & \frac{1}{\sqrt{3}}(1,1,-1)\hfill & \text{(near octet)}\hfill \end{array}$$
(2)
We have been lead to this classification and mixing by a number of observations:
1. $`J/\psi \omega ,\phi +X`$ decays
The branching ratios of $`J/\psi `$ into $`\phi \eta ^{}(958)`$ and $`\phi f_0(980)`$ are of similar size and about twice as large as $`\omega \eta ^{}(958)`$ and $`\omega f_0(980)`$ which is reproduced by the above flavor composition.
2. Gell-Mann-Okubo mass formula
This formula predicts the mass of the octet member $`f_0^{(8)}`$. With our octet members $`a_0`$ and $`K_0^{}`$ as input one finds $`m(f_0^{(8)})=1550`$ MeV, or, with the $`\eta `$-$`\eta ^{^{}}`$ type mixing included $`m(f_0^{(8)})=1600`$ MeV. The small deviation of $``$10% in $`m^2`$ from the mass of the $`f_0(1500)`$ is tolerable and can be attributed to strange quark mass effects.
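Assuming the standard quadratic form of the relation, $`3m^2(f_0^{(8)})=4m^2(K_0^{})-m^2(a_0)`$, the quoted number is easily reproduced (a small numerical check, not taken from the original analysis):

```python
# masses in MeV; inputs are the octet members used in the text
m_a0, m_K0star = 980.0, 1430.0
m_f8 = ((4 * m_K0star**2 - m_a0**2) / 3) ** 0.5
print(round(m_f8), "MeV")   # -> 1551 MeV, i.e. about 1550 MeV
```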
3. Two body decays of scalars
Given the flavor composition eq.(2) we can derive the decay amplitudes into pairs of pseudoscalars whereby we allow for a $`s\overline{s}`$ relative amplitude $`S`$ (for a similar analysis, see). In particular, the branching ratios
$$f_0(980)\to \pi \pi ,K\overline{K};\qquad f_0(1500)\to \pi \pi ,K\overline{K},\eta \eta ,\eta \eta ^{};\qquad a_0(980),f_0(980)\to \gamma \gamma $$
are found in satisfactory agreement with the data for values $`S`$ around 0.5.
4. Relative signs of decay amplitudes
A striking prediction is the relative sign of the decay amplitudes of the $`f_0(1500)`$ into pairs of pseudoscalars: because of the negative sign in the $`s\overline{s}`$ component, see eq.(2), the sign of the $`K\overline{K}`$ decay amplitude is negative with respect to $`\eta \eta `$ decay and also to the respective $`f_2(1270)`$ and glueball decay amplitudes. This prediction is indeed confirmed by the amplitudes in fig.2a,b which show circles pointing in upward and downward directions, respectively. If $`f_0(1500)`$ were a glueball, then both circles should have positive sign as in fig.2b, but the experimental results are rather orthogonal to such an expectation.
Further tests of our classification are provided by the predictions on the decays $`J/\psi \phi /\omega +f_0(1500)`$ and the $`\gamma \gamma `$ decay modes of the scalars.
## 4 The lightest $`0^{++}`$ glueball
In the previous analysis we have classified the scalar mesons in the PDG tables below 1600 MeV with the exception of $`f_0(4001200)`$ and also of $`f_0(1370)`$ which we did not accept as standard Breit-Wigner resonance. We consider the broad spectrum in fig.1a with its two or three peaks as a single very broad object which interferes with the $`f_0`$ resonances. This “background” with slowly moving phase appears also in the inelastic channels (see fig.2). It is our hypothesis that this very broad object with parameters eq.(1) is the lightest glueball. We do not exclude some mixing with the scalar nonet states but it should be sufficiently small such as to preserve the main characteristics outlined before. We discuss next, how this glueball assignment fits with phenomenological expectations.
1. The large width
The unique feature of this state is its large width. There are two qualitative arguments why this is natural for a light glueball:
a) For a heavy glueball one expects a small width as the perturbative analysis involves a small coupling constant $`\alpha _s`$ at high masses (“gluonic Zweig rule). For a light glueball around 1 GeV this argument doesn’t hold any more and a large $`\alpha _s`$ could yield a large width.
b) The light $`0^{++}`$ states are coupled mainly to pairs of pseudoscalar particles. Then, for a scattering process through a $`0^{++}`$ channel the external particles are in an S-wave state; an intermediate $`q\overline{q}`$ resonance will be in a P-wave state but an intermediate $`gg`$ system in an S-wave again. Therefore the overlap of wave functions in the glueball case is larger and we expect
$$\mathrm{\Gamma }_{gb_0}\mathrm{\Gamma }_{q\overline{q}hadron}.$$
(3)
2. Reactions favorable for glueball production
a) The “red dragon” shows up also in the centrally produced systems in high energy $`pp`$ collisions which are dominated by double Pomeron exchange, with new results presented by A. Kirk. Because of the gluonic nature of the Pomeron, this strong production coincides with the expectations.
b) The broad low mass $`\pi \pi `$ spectrum is also observed in decays of radially excited states $`\psi ^{}\psi (\pi \pi )_s`$ and $`Y^{},Y^{\prime \prime }Y(\pi \pi )_s`$ which are expected to be mediated by gluonic exchanges.
c) The hadrons in the decay $`J/\psi \gamma +\text{hadrons}`$ are expected to be produced through 2-gluon intermediate states which could form a scalar glueball. However, in the low mass region $`m<`$ 1 GeV only little S-wave in the $`\pi \pi `$ channel is observed.
3. Flavour properties
The branching ratios of the $`f_0(1370)`$ – which we consider as part of the glueball – into $`K\overline{K}`$ and $`\eta \eta `$ compare favorably with expectations.
4. Suppression in $`\gamma \gamma `$ collisions
If the mixing of the glueball with charged particles is small it should be weakly produced in $`\gamma \gamma `$ collisions. In the process $`\gamma \gamma \to \pi ^0\pi ^0`$ there is a dominant peak related to $`f_2(1270)`$ but, in comparison, a very small cross section in the low mass region around 600 MeV. This could be partly due to hadronic rescattering and absorption, partly due to the smallness of the two-photon coupling of the intermediate states. Unfortunately, the data in the $`f_2`$ region leave a large uncertainty on the S-wave fraction ($`<19`$%). In a fit to the data which takes into account the one-pion-exchange Born terms and $`\pi \pi `$ rescattering, the two-photon widths of the states $`f_2(1270)`$ and $`f_0(400-1200)`$ have been determined as 2.84$`\pm `$0.35 and 3.8$`\pm `$ 1.5 keV, respectively. If the $`f_0`$ were a light quark state like the $`f_2`$ we might expect comparable ratios of $`\gamma \gamma `$ and $`\pi \pi `$ decay widths, but we find (in units of $`10^{-6}`$)
$$R_2=\frac{\mathrm{\Gamma }(f_2(1270)\to \gamma \gamma )}{\mathrm{\Gamma }(f_2(1270)\to \pi \pi )}\approx 15;\qquad R_0=\frac{\mathrm{\Gamma }(f_0(400-1200)\to \gamma \gamma )}{\mathrm{\Gamma }(f_0(400-1200)\to \pi \pi )}\approx 4-6,$$
(4)
thus, for the scalar state, this ratio is 3-4 times smaller, and it could be smaller by another factor 3 at about the 2$`\sigma `$ level.<sup>2</sup><sup>2</sup>2 We thank Mike Pennington for the discussions about their analysis. A more precise measurement of the S-wave cross section in the $`f_2`$ region would be very important for this discussion.
At present, we conclude that the $`2\gamma `$ width of the scalar state is indeed surprisingly small. In this model an intermediate glueball would couple to photons through the intermediate $`\pi ^+\pi ^{}`$ channel.
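For orientation, the ratios of Eq. (4) can be reproduced from the two-photon widths quoted above once $`\pi \pi `$ partial widths are assumed; the values below (a PDG-like $`f_2(1270)`$ width with an 85% $`\pi \pi `$ fraction, and a width of order 1 GeV for the broad $`f_0(400-1200)`$, cf. Eq. (1)) are assumptions of this sketch, not inputs of the original fit.

```python
gamma_f2_gg   = 2.84e-3        # MeV (2.84 keV, quoted above)
gamma_f0_gg   = 3.8e-3         # MeV (3.8 keV, quoted above)
gamma_f2_pipi = 0.85 * 185.0   # MeV, assumed ~85% of the f_2 total width
gamma_f0_pipi = 800.0          # MeV, assumed order of the "sigma" width

R2 = gamma_f2_gg / gamma_f2_pipi
R0 = gamma_f0_gg / gamma_f0_pipi
print(R2 / 1e-6, R0 / 1e-6)    # roughly 18 and 5, the ballpark of Eq. (4)
```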
5. Quark-antiquark and gluonic components in $`\pi \pi `$ scattering
In the dual Regge picture the $`22`$ scattering amplitude is built either from the sequence of s-channel resonances or from the sequence of t-channel Regge poles. There is a second component (“two component duality”) which corresponds to the Pomeron in the t-channel and is dual to a “background” in the direct s-channel. If the Pomeron is related to glueballs, then one should have, by crossing, a third component with a glueball in the direct s-channel, dual to exotic exchange.
The existence of the “background” process can be demonstrated by constructing the amplitudes for definite t-channel isospin $`I_t`$. Such an analysis has been carried out by Quigg for $`\pi \pi `$ scattering and is shown in fig.3. Similar to what has been found in $`\pi N`$ scattering there are essentially background-free resonance circles for $`I_t0`$, but in the $`I_t=0`$ amplitude (Pomeron exchange) the background rises with energy and is sizable already below 1 GeV. We take this result as a further hint that low energy $`\pi \pi `$ scattering is not dominated by $`q\overline{q}`$ resonances alone.
## 5 Theoretical expectations
### 5.1 QCD results on glueballs
1. Lattice QCD
In the calculation without sea-quarks (“quenched approximation”) one finds the lightest glueball in the $`0^{++}`$ channel at masses 1500-1700 MeV (recent review). These results have motivated various recent searches and scenarios for the lightest glueball. The identification with the well established $`f_0(1500)`$ state, either with or without mixing with other states, has some phenomenological difficulties, especially the negative amplitude sign into $`K\overline{K}`$ (fig.2a).
Some changes of these QCD predictions may occur if the full unquenched calculation is carried out. The first results by Bali et al. indicate a decrease of the glueball mass with the quark masses; the latter are still rather large and correspond to $`m_\pi 700\mathrm{}1000`$ MeV. For the moment we conclude that our light glueball hypothesis is not necessarily in conflict with the lattice QCD results.
2. QCD sum rules
The saturation of the sum rules for the $`0^{++}`$ glueball was found impossible with a single state near 1500 MeV alone in a recent analysis. Rather, the inclusion of a light glueball component was required and assumed to be coupled to states $`\sigma _B(1000)`$ and $`\sigma _B^{}(1370)`$. Already before, a sum rule solution with a light glueball $`500`$ MeV was proposed.
3. Bag model
In a model which considers quarks and gluons to be confined in a bag of comparable size, with radiative QCD corrections included, the lightest glueball was suggested to be a $`0^{++}`$ state at a mass around 1 GeV.
### 5.2 Scalar nonet and effective Sigma variables
An important precondition for the assignment of glueball states is the understanding of the low mass $`q\overline{q}`$ spectroscopy.
1. Renormalizable linear sigma models
These models realize the spontaneous chiral symmetry breakdown and represent an attractive theoretical approach to the scalar and pseudoscalar mesons. An example is the approach by Törnqvist which starts from a “bare” nonet respecting the OZI rule while the observed hadron spectrum is strongly distorted by unitarization effects.
In an alternative approach, one starts from a 3-flavor Nambu-Jona-Lasinio model but includes a renormalizable effective action for the sigma fields with an instanton induced axial $`U(1)`$ symmetry-breaking term along the suggestion by t’Hooft. In this model $`f_0(1500)`$ is near the octet and the light isoscalar near the singlet state; different options are pursued for $`f_0(980)`$ and $`a_0(980)`$, at least one of them should be a non-$`q\overline{q}`$ state. This suggestion of a large singlet-octet mixing and the classification of the $`f_0(1500)`$ is close to our phenomenological findings in sect.3.
2. General effective QCD potential
In our approach we do not restrict ourselves to renormalizable interaction terms. In this way the consequences of chiral symmetry in different limits for the quark masses can be explored in a general QCD framework. In particular, it is possible to keep both $`f_0(980)`$ and $`a_0(980)`$ as $`q\overline{q}`$ states. Their degeneracy in mass can be obtained, although not predicted. An expansion to first order in the strange quark mass is investigated. The Gell-Mann-Okubo formula is obtained in this approximation; with an $`\eta `$-$`\eta ^{}`$ type mixing the observed states discussed in sect.3, with $`f_0(1500)`$ as the heaviest member of the nonet near the octet state, can be realized.
## 6 Conclusions
We found a classification of the low lying $`J^{PC}=0^{++}`$ states which explains a large body of experimental and phenomenological results. The $`q\overline{q}`$ nonet includes $`f_0(980)`$ and $`f_0(1500)`$ with mixing similar to the pseudoscalars $`\eta ^{}`$ and $`\eta `$, furthermore $`a_0(980)`$ and $`K^{}(1430)`$; $`\eta ^{}`$ and $`f_0(980)`$ appear as genuine parity doublet.
The lightest glueball is identified with the broad “sigma” corresponding to $`f_0(4001200)`$ and $`f_0(1370)`$ of the PDG. The basic triplet of light binary glueballs is completed by the states $`\eta (1440)`$ with $`0^+`$ and $`f_J(1710)`$ with $`2^{++}`$, not discussed here.
It will be important to further study production and decay of the states under discussion. Some particular questions we came across here include: a) unique phase shift solution for $`\pi \pi `$ scattering above 1 GeV for both charge modes ($`+`$ and $`00`$), b) production of $`f_0(1500)`$ in $`J/\psi `$ decays, c) S-waves in radiative $`J/\psi `$ decays and d) $`\gamma \gamma `$ widths of the scalar particles.
It remains an open question in this approach, though, what the physical origin of the $`a_0f_0`$ mass degeneracy is and where the mirror symmetry of the mass patterns in the scalar and pseudoscalar nonets comes from. A possible explanation for the latter structure is suggested by a renormalizable model with an instanton induced $`U_A(1)`$-breaking interaction.
|
no-problem/9905/hep-ph9905388.html
|
ar5iv
|
text
|
# Sterile Neutrinos in 𝐸₆ and a Natural Understanding of Vacuum Oscillation Solution to the Solar Neutrino Puzzle
## I Introduction
As is widely known by now, the Super-Kamiokande data has provided conclusive evidence for the existence of oscillations of the muon neutrinos from cosmic rays . While it is yet to be determined what final state the cosmic ray $`\nu _\mu `$s oscillate to ($`\nu _\tau `$ or $`\nu _{\mu s}`$), it is known that the mixing angle is near maximal and that $`\mathrm{\Delta }m_{atmos}^2\simeq 10^{-3}`$ eV<sup>2</sup>. Similarly, the solar neutrino data from Super-Kamiokande and other experiments make a very convincing case for oscillations of the electron neutrinos emitted by the Sun as the explanation of the observed deficit of solar neutrinos. Again, in this case it is not clear what final state the $`\nu _e`$ oscillates to on its way from the Sun to Earth; it could be either $`\nu _\mu `$ or $`\nu _{es}`$. There are, however, several mixing angle and mass difference possibilities in the solar case. One of the possibilities is a vacuum oscillation of the $`\nu _e\to \nu _\mu `$ or $`\nu _e\to \nu _{es}`$ type. In order to explain the observations one needs in this case $`\mathrm{\Delta }m_{\nu _e\nu _X}^2\simeq 10^{-10}`$ eV<sup>2</sup> and a maximal mixing, like in the atmospheric neutrino case. The recent indications of a seasonal dependence of the solar neutrino events in the 708 day Super-Kamiokande data would seem to support this explanation, although it is by no means the only way to understand it. If the vacuum oscillation explanation finally wins, then a serious theoretical challenge is to understand the unusually small mass difference squared between the neutrinos needed for the purpose. It is the goal of this letter to propose a way to answer this challenge within a gauge theory framework.
The first observation that motivates our final scenario is the symmetry between the solution to the atmospheric neutrino data and the vacuum oscillation solution to the solar neutrino data in that the mixing angles are maximal. This might suggest a generation independence of the neutrino mixings patterns. An implementation of such an idea would naturally require that in each case i.e. solar as well as atmospheric the active neutrinos (i.e. $`\nu _e`$ and $`\nu _\mu `$) oscillate into the sterile neutrinos to be denoted by $`\nu _{es}`$ and $`\nu _{\mu s}`$ respectively. The complete three family picture would then require that there be one sterile neutrino per family. One class of models that lead to such a scenario is the mirror universe picture where the particles and the forces in the standard model are duplicated in a mirror symmetric manner. There is no simple way to understand the ultra small $`\mathrm{\Delta }m^2`$ needed for the vacuum oscillation solution in this case. In this letter we focus on an alternative scheme based on the grand unification group $`[SU(3)]^3`$ or its parent group $`E_6`$.
We find it convenient to use the $`E_6`$ notation. As is well-known, under the SO(10) group, the 27-dimensional representation of $`E_6`$ decomposes to $`\mathrm{𝟏𝟔}_{+1}\mathrm{𝟏𝟎}_2\mathrm{𝟏}_{+4}`$ where the subscripts represent the U(1) charges. The 16 is well known to contain the left and the right handed neutrinos (to be denoted by us as $`\nu _i`$ and $`\nu _i^c`$, $`i`$ being the family index). The 10 contains two neutral colorless fermions which behave like neutrinos but are $`SU(2)_L`$ doublets and the last neutral colorless fermion in the 27, which we identify as the sterile neutrino is the one contained in 1 (denoted by $`\nu _{is}`$). In general in this model, we will have for each generation a $`5\times 5`$ “neutrino” mass matrix and we will show how the small masses for the sterile neutrino and the known neutrino come out as a consequence of a generalized seesaw mechanism. Furthermore, we will see how as a consequence of the smallness of the Yukawa couplings of the standard model, we will not only get maximal mixing between the active and the sterile neutrinos of each generation but also the necessary ultra-small $`\mathrm{\Delta }m^2`$ needed in the vacuum oscillation solution without fine tuning of parameters<sup>*</sup><sup>*</sup>*An $`E_6`$ model for the neutrino puzzles was first discussed by Ma, where his goal was to understand the smallness of the sterile neutrino masses. Our model is different in many respects and addresses the question of maximal mixing, small $`\mathrm{\Delta }m^2`$’s as well as the small neutrino masses. Our picture also differs from other recently proposed models. The way this comes about in our model is that to the lowest order in the Yukawa couplings, the $`\nu _i`$ and $`\nu _{is}`$ form a Dirac neutrino with a mass proportional to the generational Yukawa coupling $`\lambda _i`$ of the corresponding generation. However they become pseudo-Dirac to order $`\lambda _i^3`$ leading to nearly degenerate neutrinos with a mass splitting $`\mathrm{\Delta }m_i^2\lambda _i^3`$. Therefore fixing the $`\mathrm{\Delta }m_{atmos}^2`$ gives the right value for the $`\mathrm{\Delta }m_e^2`$ needed for the vacuum oscillation solution.
Let us now present the basic idea of the model for one generation of neutrinos consisting of $`\nu _i,\nu _{is}`$. Suppose that their mass matrix is given by the following $`2\times 2`$ matrix:
$`M_i=m_{0i}\left(\begin{array}{cc}\lambda _i^2& \lambda _i\overline{f}_i\\ \lambda _i\overline{f}_i& \lambda _i^2\overline{ϵ}_i\end{array}\right)`$ (3)
Since $`\lambda _i\ll 1`$, it is clear that the two neutrinos are maximally mixed, with a mass $`m_i\simeq \lambda _i\overline{f}_im_{0i}`$ and $`\mathrm{\Delta }m_i^2\simeq \lambda _i^3\overline{f}_im_{0i}^2`$, provided $`\overline{ϵ}\sim 1`$ and $`\overline{f}\sim 1`$. These relations are true generation by generation. We shall show in the next section that a mass matrix of this form emerges naturally from $`E_6`$ and its subgroup $`[SU(3)]^3`$ with $`\overline{f}_i\sim 1`$. The main difference between these two groups arises from the fact that for the simplest models based on $`E_6`$ the Yukawa coupling $`\lambda _i`$ is necessarily related to the Yukawa coupling of the corresponding up type quark. This is not true for $`[SU(3)]^3`$, for which $`\lambda _i`$ is expected to be related to the Yukawa coupling of the corresponding lepton.
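These statements are easy to check numerically; the sketch below diagonalizes the 2×2 matrix above for illustrative parameter values (the values themselves are assumptions, not fits).

```python
import numpy as np

m0, lam, fbar, epsbar = 1.0, 1e-2, 1.0, 1.0   # illustrative values only
M = m0 * np.array([[lam**2,      lam * fbar],
                   [lam * fbar,  lam**2 * epsbar]])

eigval, eigvec = np.linalg.eigh(M)
masses = np.abs(eigval)

print(masses / (lam * fbar * m0))      # both close to 1: m_i ~ lam*fbar*m0
theta = np.degrees(np.arctan2(eigvec[1, 0], eigvec[0, 0]))
print(theta)                           # close to +-45 degrees: maximal mixing
dm2 = masses.max()**2 - masses.min()**2
print(dm2 / (lam**3 * fbar * m0**2))   # = 2*(1+epsbar) = 4 here: dm^2 ~ lam^3
```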
Now let us look at the atmospheric neutrino data. Since $`\mathrm{\Delta }m_2^2\simeq (2-5)\times 10^{-3}`$ eV<sup>2</sup>, if we choose the $`\nu _\mu `$ mass to be of order $`0.2-0.5`$ eV (anticipating that we want to accommodate the LSND data), then we get $`\lambda _2\sim 10^{-2}`$, which is a typical second generation Yukawa coupling. Note that this is a plausible value even for $`[SU(3)]^3`$ since in supersymmetric models $`m_\mu \simeq \lambda _2v_d`$ and $`v_d`$ can be considerably less (e.g. 10 GeV for $`\mathrm{tan}\beta \simeq 24`$) than the standard model value of 246 GeV for the symmetry breaking parameter. Since the same formula applies to the $`\nu _e`$ and $`\nu _{es}`$ sector, assuming no large flavour dependence in the coefficients $`f_i`$ and $`ϵ_i`$, all we need in order to predict their masses and mass differences is the value of $`\lambda _1/\lambda _2`$. Irrespective of whether we consider $`E_6`$ or $`[SU(3)]^3`$ we find $`\lambda _1/\lambda _2\simeq 5\times 10^{-3}`$. This leads to a value $`\mathrm{\Delta }m_1^2\simeq 2\times 10^{-10}`$ eV<sup>2</sup>, which is clearly of the right order of magnitude. Our main point is not to insist on precise numbers but rather to illustrate the idea that a cubic dependence of the neutrino mass difference squared on the generational Yukawa couplings to leptons of the standard model can lead to an understanding of the extreme smallness of the $`\mathrm{\Delta }m^2`$ value needed in the vacuum oscillation solution to the solar neutrino puzzle.
Extending our idea to the third generation, we find that the value of the $`\nu _\tau `$ mass is $`(\lambda _3/\lambda _2)\times 0.2`$ eV. This implies $`m_{\nu _\tau }\simeq 2-3`$ eV for $`[SU(3)]^3`$, which is interesting for cosmology since this would mean that about 10-15% of the mass of the universe could come from neutrinos. This expectation can eventually be tested when the finer measurements of the angular power spectrum are carried out in the MAP and PLANCK experiments in the next few years. However, for minimal versions of $`E_6`$, $`\lambda _3/\lambda _2\simeq m_t/m_c`$, yielding a value $`m_{\nu _\tau }\simeq 20-30`$ eV, which is unacceptable for a realistic cosmology. This means that the simplest $`E_6`$ model that can accommodate our scenario is one where the quark-lepton symmetry is broken. The other possibility is to have some flavour dependence in the coefficients $`\overline{f}_i`$ and $`m_{0i}`$. The extent of required flavour dependence is certainly not extreme and we consider models based on both groups as realistic candidates for a complete theory of neutrino masses.
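A rough numerical version of this scaling argument, approximating the Yukawa ratios by fermion mass ratios (running-mass and $`\mathrm{tan}\beta `$ subtleties are ignored, so the numbers are only indicative):

```python
m_numu = 0.2                        # eV, the nu_mu mass chosen in the text
m_tau, m_mu    = 1777.0, 105.7      # MeV, for the [SU(3)]^3 case
m_top, m_charm = 174e3, 1.3e3       # MeV, for the minimal E_6 case

print("[SU(3)]^3 :", m_numu * m_tau / m_mu, "eV")    # ~3 eV
print("minimal E6:", m_numu * m_top / m_charm, "eV") # ~27 eV
```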
## II The Model
Let us now proceed to construct the mass matrix in Eq. (1) in the context of an $`E_6`$ model. As usual, we will assign matter to the 27 dimensional representation of the group and we have already noted that there are five neutrino-like fields in the model which will mix among each other subsequent to symmetry breaking. It is therefore necessary to describe the symmetry breaking of $`E_6`$. To implement the symmetry breaking we use three pairs of $`\mathrm{𝟐𝟕}+\overline{\mathrm{𝟐𝟕}}`$ representations and one 78-dim. field. The pattern of symmetry breaking is as follows:
1) $`<27_1>`$ and $`<\overline{27}_1>`$ have GUT scale vevs in the SO(10) singlet direction.
2) $`<27_{16}>`$ and $`<\overline{27}_{16}>`$ have GUT scale vevs in the $`\nu `$ and $`\nu ^c`$ directions respectively. They break SO(10) down to SU(5).
3) The $`<78_{[1,45]}>`$ completes the breaking of SU(5) to the standard model gauge group at the GUT scale. We assume the VEVs reside both in the adjoint and in the singlet of SO(10).
4) $`<27_{10}>`$ and $`<\overline{27}_{10}>`$ contain the Higgs doublets of the MSSM. It is assumed that $`H_u`$ and $`H_d`$ are both linear combinations arising partially from the $`<27_{10}>`$ and partially from the $`<\overline{27}_{10}>`$
In addition to the above there is another field labelled by $`27^{}`$ whose $`\nu ^c`$ component mixes with a singlet S and one linear combination of this pair (denoted by $`S^{}`$) remains light below the GUT scale. As a consequence of radiative symmetry breaking this picks up a VEV at the electroweak scale. We will show later how this can occur. The remaining components of $`27^{}`$ have GUT scale mass.
Let us now write down the relevant terms in the superpotential that lead to a $`5\times 5`$ “neutrino” mass matrix of the form we desire. To keep matters simple let us ignore generation mixings, which can be incorporated very trivially.
$`W=\lambda _i\psi _i\psi _i27_{10}+f_i\psi _i\psi _i27^{}+{\displaystyle \frac{\alpha _i}{M_P\mathrm{}}}\psi _i\psi _i27_178_{[1,45]}+{\displaystyle \frac{\gamma _i}{M_P\mathrm{}}}\psi _i\psi _i\overline{27}_{16}\overline{27}_{16}`$ (4)
We have chosen only a subset of allowed terms in the theory and believe that it is reasonable to assume a discrete symmetry (perhaps in the context of a string model) that would allow only this subset. In any case since we are dealing with a supersymmetric theory, radiative corrections will not generate any new terms in the superpotential.
Note that in Eq. (2), since it is the first term that leads to lepton and quark masses of various generations, it carries a generation label and obeys a hierarchical pattern, whereas the $`f_i`$’s not being connected to known fermion masses need not obey a hierarchical pattern. We will from now on assume that each $`f_i1`$, and see where it leads us.
After substituting the VEVs for the Higgs fields in the above equation, we find a $`5\times 5`$ mass matrixAlthough the form of this mass matrix is same as in , the results of our paper are different. of the following form for the neutral lepton fields of each generation in the basis $`(\nu ,\nu _s,\nu ^c,E_u^0,E_d^0)`$:
$`M=\left(\begin{array}{ccccc}0& 0& \lambda _iv_u& f_iv^{}& 0\\ 0& 0& 0& \lambda _iv_d& \lambda _iv_u\\ \lambda _iv_u& 0& M_{\nu ^c,i}& 0& 0\\ f_iv^{}& \lambda _iv_d& 0& 0& M_{10,i}\\ 0& \lambda _iv_u& 0& M_{10,i}& 0\end{array}\right)`$ (10)
Here $`M_{\nu ^c,i}`$ is the mass of the right handed neutrino and $`M_{10,i}`$ is the mass of the entire 10-plet in the 27 matter multiplet. Since 10 contains two full SU(5) multiplets, gauge coupling unification will not be effected even though its mass is below the GUT scale.
Note that the $`3\times 3`$ mass matrix involving the $`(\nu ^c,E_u^0,E_d^0)`$ have superheavy entries and will therefore decouple at low energies. Their effects on the spectrum of the light neutrinos will be dictated by the seesaw mechanism. The light neutrino mass matrix involving $`\nu _i,\nu _{is}`$ can be written down as:
$`M_{light}{\displaystyle \frac{1}{M_{\nu ^c,i}}}\left(\begin{array}{ccc}\lambda _iv_u& f_iv^{}& 0\\ 0& \lambda _iv_d& \lambda _iv_u\end{array}\right)\left(\begin{array}{ccc}1& 0& 0\\ 0& 0& ϵ\\ 0& ϵ& 0\end{array}\right)\left(\begin{array}{cc}\lambda _iv_u& 0\\ f_iv^{}& \lambda _iv_d\\ 0& \lambda _iv_u\end{array}\right)`$ (19)
where $`ϵ_i=M_{10,i}/M_{\nu ^c,i}`$. Note that $`ϵ_i`$ is expected to be of order one. This leads to the $`2\times 2`$ mass matrix for the $`(\nu ,\nu _c)`$ fields of each generation which is of the form in Eq. (1),
$`M_i=m_{0i}\left(\begin{array}{cc}\lambda _i^2& \lambda _i\overline{f}_i\\ \lambda _i\overline{f}_i& \lambda _i^2\overline{ϵ_i}\end{array}\right)`$ (22)
Here $`m_{0i}=\frac{v_u^2}{M_{\nu ^c,i}}`$, $`\overline{f}_i=f_iϵ_iv^{}/v_u`$, and $`\overline{ϵ_i}=2ϵ_i\mathrm{cot}\beta `$. Taking $`M_{Pl}\simeq 10^{19}`$ GeV, $`M_{GUT}\simeq 10^{16}`$ GeV and reasonable values of the unknown parameters, e.g. $`\alpha _i\simeq 0.1`$, $`\gamma _i\simeq 0.1`$, $`f_i\simeq 1`$, $`v^{}\simeq v_u`$, we get $`m_{0i}\simeq 20`$ eV and $`ϵ\simeq 1`$, which leads us to the desired pattern of masses and mass differences outlined in the introduction.
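As an order-of-magnitude check (a sketch only; it assumes that the right-handed neutrino mass is generated by the last term of the superpotential above, $`M_{\nu ^c}\sim \gamma M_{GUT}^2/M_{Pl}`$):

```python
gamma, M_GUT, M_Pl, v_u = 0.1, 1e16, 1e19, 174.0   # masses in GeV

M_nuc = gamma * M_GUT**2 / M_Pl        # ~1e12 GeV
m0_eV = v_u**2 / M_nuc * 1e9           # GeV -> eV
print(M_nuc, m0_eV)                    # ~1e12 GeV and a few tens of eV
```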
A crucial assumption in our analysis is that one of the Higgs fields has a vev along the $`\nu ^c`$ direction with a low scale (the $`v^{}`$ above). We will now demonstrate what kind of a superpotential can lead to such a situation.
Consider
$`W=M27^{}\overline{27}^{}+S\overline{27}^{}27_{16}`$ (23)
Since $`27_{16}`$ has a VEV, this implies that one linear combination of S and the $`\nu ^c`$ component of $`27^{}`$ (denoted by $`S^{}`$) remains light while everything else in $`27^{}`$ and $`\overline{27}^{}`$ become heavy. If in addition the superpotential contains the couplings
$`W=S27_{10}\overline{27}_{10}+S^3`$ (24)
since $`27_{10}`$ and $`\overline{27}_{10}`$ have electroweak scale VEVs the light combination of S and $`27^{}`$ ($`S^{}`$) also picks up an electroweak scale VEV from the trilinear soft supersymmetry breaking terms. Note that this is inevitable as long as electroweak symmetry is broken because such a trilinear term then becomes a linear term in the potential for $`S^{}`$ and hence $`S^{}`$ must pick up a VEV. We thus see that it is possible to get vev for the singlet field $`\nu ^c`$ in the desired 27-plet of the order of the electroweak scale.
Let us next address the question of the generation mixing. We will assume that it parallels that in the quark sector, i.e. the mixing angles are small to start with. Since the neutrino mixings have an additional contribution coming from their seesaw mechanism, we can easily have them be smaller than the corresponding quark mixings. This is, for instance, what one would like in order to fit the LSND data. We do not get into the details of this since clearly it does not affect the main point of the paper.
Let us end with a few comments on the phenomenological and cosmological implications of the model. The most severe test of this model will come from the understanding of big bang nucleosynthesis. Our model within the standard assumptions that go into the discussion of BBN would imply $`N_\nu =6`$ i.e. three extra neutrinos. However, in models with sterile neutrinos, possibilities of large lepton asymmetry at the BBN era has been discussed.
The second point that needs emphasizing is that in our model, both the solar and atmospheric neutrinos involve separate sterile neutrinos in the final state. There are well known tests of such models for the atmospheric neutrino oscillations where one looks for neutral pion production. For solar neutrinos, our model is testable by the neutral current measurement planned for the SNO experiment.
In conclusion, in this paper we have pointed out a simple way to understand the theoretically challenging possibility of a tiny mass difference squared that may arise if the solar neutrino puzzle is to be solved via the vacuum oscillation solution. We exploit an apparent symmetry between the solar and the atmospheric case arising from the maximality of mixing angles to suggest that the ultra small $`\mathrm{\Delta }m_{solar}^2`$ may be understandable in models of $`E_6`$ type, which automatically contain a sterile neutrino in each 27 that also contains the other known particles of each generation, and where the generational neutrino mass difference squared may be proportional to the cube of the lepton Yukawa couplings. In this model we can also accommodate the indication for neutrino oscillations from LSND.
This work has been supported by the National Science Foundation grant under no. PHY-9802551 .
|
no-problem/9905/cond-mat9905167.html
|
ar5iv
|
text
|
# Electronic correlations on a metallic nanosphere.
## Abstract
We consider the correlation functions in a gas of electrons moving within a thin layer on the surface of nanosize sphere. A closed form of expressions for the RKKY indirect exchange, superconducting Cooper loop and ‘density-density’ correlation function is obtained. The systematic comparison with planar results is made, the effects of spherical geometry are outlined. The quantum coherence of electrons leads to the enhancement of all correlations for the points–antipodes on the sphere. This effect is lost when the radius of the sphere exceeds the temperature coherence length.
In recent years considerable theoretical interest has been attracted to the electronic properties of cylindrical and spherical nanosize objects. This interest is mostly related to the physics of carbon macromolecules, where the theoretical works are devoted to band structure calculations and to the effects of topology on the transport and mechanical properties. The discussion of the topological aspects of electronic motion is primarily connected with carbon nanotubes and is based on the analysis of effective models incorporating the particular geometry of the object.
Other branches of the condensed matter physics, where the spherical objects appear, concern the studies of the nonlinear optical response in composite materials and the photonic crystals on the base of synthetic opals. In case of opals, the SiO<sub>2</sub> nanosize balls could be coated by metal films as well. The theoretical studies here are focussed on the nonlinear optical properties and the metal coating is characterized by an effective dielectric function. This approach should be revised when the coating has a width of a few monolayers, a limit allowed by modern technologies.
Thus the investigation of the electronic properties pertinent to the spherical geometry can provide an interesting link between the carbon macromolecules and other spherical structures. On the theoretical side of the problem, one meets a unique possibility to establish a bridge between the methods of microscopic many-body theory and the intrinsic spherical geometry of the atomic physics. To be consistent, these theoretical efforts should comprise the systematic comparison of the quantities found for the planar and the spherical geometries of the electronic motion. In a recent paper we investigated the electronic gas on a sphere in a uniform magnetic field. The exact solution of the problem was found, two physical effects were predicted. First is the jumps in the magnetic susceptibility at half-integer numbers of flux quanta, piercing the sphere. Second is the localization of the electronic states near the poles of the sphere at high fields.
In this paper we study the correlations in the electron gas moving on the surface of the sphere. We obtain closed-form expressions for the indirect RKKY exchange, the superconducting Cooper loop and the density-density correlator. The effects peculiar to this geometry are elucidated, and their meaning for subsequent theoretical investigations is discussed. It is shown, in particular, that the coherence of the electronic motion on the sphere results in an enhancement of the correlations between points that are antipodes on the sphere. This effect is lost with increasing temperature $`T`$, when the temperature coherence length becomes smaller than the radius of the sphere.
We consider electrons moving on the surface of a sphere of radius $`r_0`$. The Hamiltonian of the system is given by $`\mathcal{H}=-\frac{\nabla ^2}{2m_e}+U(r)`$, where $`m_e`$ is the (effective) mass of an electron and we have set $`\hbar =c=1`$. The total number of electrons $`N`$ (with one projection of spin) is fixed and defines the value of the chemical potential $`\mu `$ and the areal density $`\nu =N/(4\pi r_0^2)`$. We assume that the potential $`U(r)`$ confines the electrons within a thin spherical layer $`\delta r\ll r_0`$, i.e. $`U(r)=0`$ at $`r_0<r<r_0+\delta r`$ and $`U(r)\to \mathrm{\infty }`$ otherwise. The radial component $`R(r)`$ of the wave function is a solution of the Schrödinger equation with the quantum well potential. We adopt that the chemical potential $`\mu `$ lies below the first excited level of $`R(r)`$, which means $`\delta r\ll \nu ^{-1/2}`$. It is then possible to ignore the radial component and put $`r=r_0`$ in the remaining angular part of the Hamiltonian $`\mathcal{H}_\mathrm{\Omega }`$. The eigenfunctions of the Schrödinger equation $`\mathcal{H}_\mathrm{\Omega }\mathrm{\Psi }=E\mathrm{\Psi }`$ are the spherical harmonics $`Y_{lm}`$ and the spectrum is that of a free rotator model :
$$\mathrm{\Psi }(\theta ,\varphi )=r_0^{-1}Y_{lm}(\theta ,\varphi ),\qquad E_l=(2m_er_0^2)^{-1}l(l+1).$$
(1)
According to (1), $`\mathrm{\Psi }(\theta ,\varphi )`$ is normalized as $`r_0^2\int |\mathrm{\Psi }|^2\mathrm{sin}\theta d\theta d\varphi =1`$, which facilitates the comparison of our results with the case of planar geometry.
For the two points $`𝐫(\theta ,0)`$ and $`𝐫^{}(\theta ^{},\varphi )`$ we define the distance $`\mathrm{\Omega }`$ on the sphere as $`\mathrm{cos}\mathrm{\Omega }=\mathrm{cos}\theta \mathrm{cos}\theta ^{}+\mathrm{sin}\theta \mathrm{sin}\theta ^{}\mathrm{cos}\varphi `$. One can find an exact representation of the electron Green’s function through the Legendre function
$$G(\mathrm{\Omega },i\omega _n)=\frac{m_e}{2\mathrm{cos}\pi a}P_{-1/2+a}(-\mathrm{cos}\mathrm{\Omega }),$$
(2)
where $`a=\sqrt{2m_er_0^2(\mu +i\omega _n)+1/4}`$ and Matsubara frequency $`\omega _n=\pi T(2n+1)`$.
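The closed form above follows from the partial-wave sum over spherical harmonics. Up to the overall sign convention chosen for $`G`$, the underlying identity, which expresses $`{\sum }_l(2l+1)P_l(\mathrm{cos}\mathrm{\Omega })/[a^2-(l+1/2)^2]`$ through $`P_{a-1/2}(-\mathrm{cos}\mathrm{\Omega })/\mathrm{cos}\pi a`$, can be checked numerically; the minimal sketch below does this at zero frequency, i.e. for real, non-half-integer $`a`$.

```python
import numpy as np
from scipy.special import eval_legendre, lpmv

a, Omega = 10.3, 1.2                 # arbitrary test values, a not half-integer
x = np.cos(Omega)

l = np.arange(0, 5000)
lhs = np.sum((2 * l + 1) * eval_legendre(l, x) / (a**2 - (l + 0.5)**2))
rhs = -np.pi * lpmv(0, a - 0.5, -x) / np.cos(np.pi * a)

print(lhs, rhs)   # the two agree to several significant figures
```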
In the limit $`a1`$ and for $`a\mathrm{sin}\mathrm{\Omega }1`$ one has in the main order of $`a^1`$ :
$$G(\mathrm{\Omega },\omega )\simeq \frac{m_e}{\sqrt{2\pi a\mathrm{sin}\mathrm{\Omega }}}\frac{\mathrm{cos}(a\pi -a\mathrm{\Omega }-\pi /4)}{\mathrm{cos}\pi a},$$
(3)
At the same time, if $`a\gg 1`$ and $`a(\pi -\mathrm{\Omega })\lesssim 1`$ we get
$$G(\mathrm{\Omega }\to \pi ,\omega )\simeq \frac{m_e}{2\mathrm{cos}\pi a}J_0\left(a(\pi -\mathrm{\Omega })\right),$$
(4)
with the Bessel function $`J_0(x)`$. When $`\mathrm{\Omega }\to 0`$, the Green’s function (2) diverges logarithmically, as it should.
It is convenient to introduce here the concept of the angular Fermi momentum $`L=\sqrt{2m_er_0^2\mu +1/4}\simeq k_Fr_0`$. For simplicity of subsequent calculations we consider the case of integer $`L`$, so that at low temperatures the level $`l=L-1`$ is completely filled and $`l=L`$ is empty. It is worth noting that the number of electrons $`N=L^2`$ and the energy level spacing at the Fermi level $`\mathrm{\Delta }E\simeq 4\pi \nu /(m_eL)\simeq 2\mu /L`$. For densities $`\nu \simeq 10^{14}`$cm<sup>-2</sup> and $`r_0=100`$Å we have $`L\simeq 30`$ and $`\mathrm{\Delta }E\simeq 10`$ meV.
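These estimates can be reproduced with a short numerical sketch (assuming the free-electron mass; the output is of the same order of magnitude as the values quoted above):

```python
import numpy as np

hbar, m_e, eV = 1.055e-34, 9.11e-31, 1.602e-19   # SI units
nu = 1e14 * 1e4           # areal density, m^-2 (one spin projection)
r0 = 100e-10              # sphere radius, m

N  = 4 * np.pi * r0**2 * nu          # number of electrons, N = L^2
L  = np.sqrt(N)
dE = hbar**2 * L / (m_e * r0**2)     # E_L - E_{L-1} ~ 2*mu/L

print(L, dE / eV * 1e3)   # L of a few tens, Delta E of a few tens of meV
```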
The oscillating factors $`\mathrm{exp}[\pm i(a\pi -a\mathrm{\Omega }-\pi /4)]`$ in (3) correspond to the coherence of two waves. One propagates along the shortest way between two points and another wave goes along the longest way, turning around the sphere. One can show that this coherence is destroyed by a finite quasiparticle lifetime or by the temperature. The latter takes place at $`T\gtrsim \mathrm{\Delta }E`$, in which case the Green’s function acquires the form ($`\omega _n>0`$) :
$$G(\mathrm{\Omega },\pm i\omega _n)\simeq \frac{m_e}{\sqrt{2\pi L\mathrm{\Omega }}}e^{\pm i(L\mathrm{\Omega }+\frac{\pi }{4})-\omega _n\mathrm{\Omega }/\mathrm{\Delta }E}$$
(5)
This equation, after the natural substitution $`L\mathrm{\Omega }\to k_Fr`$, corresponds to the usual expression for the planar geometry.
Let us now discuss the magnetic correlation function on the sphere. This quantity defines the indirect RKKY exchange interaction between localized moments and is thus related to the transverse NMR relaxation rate. It is also responsible for a magnetic instability in the electronic system (spin-density wave state). The range function of the RKKY exchange interaction is given by :
$$\chi (\mathrm{\Omega })=T\underset{n}{}G^2(\mathrm{\Omega },i\omega _n).$$
(6)
We discuss first the case $`T0`$ and use the limiting relation $`T_n_{\mathrm{}}^{\mathrm{}}𝑑\omega /(2\pi )`$. Using (2) we write
$$\chi (\mathrm{\Omega })=\frac{m_e}{8\pi ir_0^2}\int _Cda\frac{a}{\mathrm{cos}^2\pi a}\left[P_{a-1/2}(\mathrm{cos}\mathrm{\Omega })\right]^2$$
(7)
where the contour $`C`$ in the complex plane starts at $`e^{i\pi /4}\mathrm{}`$, passes through the point $`L`$ on the real axis and goes to $`e^{i\pi /4}\mathrm{}`$. We can shift the integration contour to the imaginary axis of $`a`$, where the integral is zero due to the oddness of the integrand in (7). Upon this shift we intersect the double poles at half-integer real $`a`$. Therefore the above integral reduces to the finite sum of residues and is equal to
$$\chi (\mathrm{\Omega })=\frac{m_e}{4\pi ^2r_0^2}\frac{d}{da}\underset{l=0}{\overset{L-1}{\sum }}\left(a+1/2\right)\left[P_a(\mathrm{cos}\mathrm{\Omega })\right]^2|_{a=l}.$$
(8)
This expression was first obtained by Larsen in a different way. Being formally exact, the expression (8) is not particularly useful at $`L1`$. In the case of large $`L`$ and at $`\mathrm{\Omega }\pi `$ it was suggested in to approximate the sum (8) by an integral, thus obtaining
$$\chi (\mathrm{\Omega })L\left[P_L(\mathrm{cos}\mathrm{\Omega })\right]^2$$
(9)
This estimate is however unsatisfactory at $`\mathrm{\Omega }\ll 1`$, since at large $`\nu `$ one has $`P_\nu (\mathrm{cos}\mathrm{\Omega })\sim \nu ^{-1/2}\mathrm{cos}(\nu \mathrm{\Omega }-\pi /4)`$. Then the leading terms in the sum (8) acquire the form $`\sim {\sum }_l\mathrm{cos}2l\mathrm{\Omega }`$ and the consideration of the next-order terms is needed.
To find $`\chi (\mathrm{\Omega })`$ in closed form at $`L\gg 1`$, we return to (6) and use the asymptotic expressions (3), (4) for the Green’s function. Next we rescale $`\omega \to 2\mu \omega /L`$ (it corresponds to measuring the energy in units of the spacing $`\mathrm{\Delta }E`$) and neglect the terms $`\sim 1/L`$ in the expansion $`a=\sqrt{L^2+2iL\omega }\simeq L+i\omega +\omega ^2/2L`$. After that the integration over $`\omega `$ in (7) is easily done and we come to the improved estimate for (8) in the form :
$`\chi (\mathrm{\Omega })`$ $`\simeq `$ $`{\displaystyle \frac{m}{4\pi ^2r_0^2}}LJ_0^2[L(\pi -\mathrm{\Omega })],\qquad (\pi -\mathrm{\Omega })\lesssim 1/L`$ (10)
$``$ $`\simeq `$ $`{\displaystyle \frac{m}{4\pi ^3r_0^2\mathrm{sin}\mathrm{\Omega }}}\left[1+{\displaystyle \frac{\mathrm{\Omega }-\pi }{\mathrm{sin}\mathrm{\Omega }}}\mathrm{sin}(2L\mathrm{\Omega })\right],\qquad \mathrm{\Omega }\sim 1,`$ (11)
with the Bessel function $`J_0(x)`$. One can verify that Eqs. (10) and (11) match smoothly at $`(\pi -\mathrm{\Omega })\sim 1/L`$. It is convenient to rewrite (11) further as $`\chi (\mathrm{\Omega })=\chi ^{(1)}(\mathrm{\Omega })+\chi ^{(2)}(\mathrm{\Omega })`$ where
$`\chi ^{(1)}(\mathrm{\Omega })`$ $`=`$ $`{\displaystyle \frac{m}{4\pi ^2r_0^2}}{\displaystyle \frac{\mathrm{sin}(2L\mathrm{\Omega })}{\mathrm{sin}^2\mathrm{\Omega }}}\mathrm{\Theta }\left[{\displaystyle \frac{\pi }{2}}-\mathrm{\Omega }\right],`$ (12)
$`\chi ^{(2)}(\mathrm{\Omega })`$ $`=`$ $`{\displaystyle \frac{m}{4\pi ^3r_0^2\mathrm{sin}\mathrm{\Omega }}}\left[1+{\displaystyle \frac{\mathrm{sin}(2L\mathrm{\Omega })}{\mathrm{sin}\mathrm{\Omega }}}\left[\mathrm{\Omega }-\pi \mathrm{\Theta }\left[\mathrm{\Omega }-{\displaystyle \frac{\pi }{2}}\right]\right]\right],`$ (13)
and the step function $`\mathrm{\Theta }(x)=1`$ at $`x>0`$. For the small angles $`1/L\ll \mathrm{\Omega }\ll 1`$ the $`\chi ^{(1)}`$ term dominates and we recover the result of the planar geometry
$$\chi (\mathrm{\Omega })\simeq \frac{m}{(2\pi r_0\mathrm{\Omega })^2}\mathrm{sin}(2L\mathrm{\Omega })\simeq \frac{m}{(2\pi r)^2}\mathrm{sin}(2k_Fr)$$
(14)
Discussing the analogue of the $`q`$representation of (11), we note that the usual formula for the inhomogeneous static susceptibility $`\chi (𝐪)=\int d𝐫e^{i\mathrm{𝐪𝐫}}\chi (𝐫)`$ is replaced by $`\chi _{lm}=r_0^2\int \chi (\mathrm{\Omega })Y_{lm}^{}(\mathrm{\Omega },\phi )\mathrm{sin}\mathrm{\Omega }d\mathrm{\Omega }d\phi `$. In the absence of $`\phi `$ in $`\chi (\mathrm{\Omega })`$, we have :
$$\chi _l=2\pi r_0^2\int _0^\pi d\mathrm{\Omega }\mathrm{sin}\mathrm{\Omega }P_l(\mathrm{cos}\mathrm{\Omega })\chi (\mathrm{\Omega })$$
(15)
Recalling the property $`P_l(-x)=(-1)^lP_l(x)`$ and noting that $`\chi ^{(2)}(\pi -\mathrm{\Omega })=\chi ^{(2)}(\mathrm{\Omega })`$, we conclude that the term $`\chi ^{(1)}(\mathrm{\Omega })`$ contributes to all harmonics $`\chi _l`$, whereas the “coherent” part $`\chi ^{(2)}(\mathrm{\Omega })`$ contributes only to $`\chi _l`$ with even $`l`$. Below we show that $`\chi ^{(2)}`$ disappears at high enough temperatures, $`T\gtrsim \mathrm{\Delta }E`$, or in the presence of scattering.
Calculating the contribution of the term $`\chi ^{(1)}`$, we encounter the integral $`_0^{\pi /2}𝑑\mathrm{\Omega }P_l(\mathrm{cos}\mathrm{\Omega })\mathrm{sin}(2L\mathrm{\Omega })/\mathrm{sin}(\mathrm{\Omega })`$ which is determined by small $`\mathrm{\Omega }`$. Thus one can use $`P_l(\mathrm{cos}\mathrm{\Omega })J_0[(l+1/2)\mathrm{\Omega }]`$ and extend the integration to the infinite upper limit.
Further, the first term in $`\chi ^{(2)}`$ yields the integral $`_0^\pi 𝑑\mathrm{\Omega }P_{2l}(\mathrm{cos}\mathrm{\Omega })=\mathrm{\Gamma }^2(l+1/2)/\mathrm{\Gamma }^2(l+1)(l+\frac{1}{2})^1`$. The second part of $`\chi ^{(2)}(\mathrm{\Omega })`$ proportional to $`\mathrm{\Omega }\mathrm{sin}(2L\mathrm{\Omega })/\mathrm{sin}^2\mathrm{\Omega }`$ is also important at $`\mathrm{\Omega }1`$; we can approximate the corresponding integral by $`_0^{\mathrm{}}𝑑\mathrm{\Omega }J_0((l+\frac{1}{2})\mathrm{\Omega })\mathrm{sin}(2L\mathrm{\Omega })`$.
Combining all the contributions, we have
$`\chi _l`$ $`\simeq `$ $`{\displaystyle \frac{m}{2\pi }}\left[f_1\left({\displaystyle \frac{l}{2L}}\right)-{\displaystyle \frac{1}{\pi L}}f_2\left({\displaystyle \frac{l+1/2}{2L}}\right)\right],`$ (17)
$`\chi _l`$ $`\simeq `$ $`{\displaystyle \frac{m}{2\pi }}f_1\left({\displaystyle \frac{l}{2L}}\right)`$ (18)
for even and odd $`l`$, respectively. Here the function
$`f_1(x)`$ $`=`$ $`\pi /2,x<1`$ (19)
$`=`$ $`\mathrm{sin}^1(1/x),x>1`$ (20)
corresponds to $`\chi ^{(1)}(\mathrm{\Omega })`$. It defines a cusp in $`\chi _l`$ at $`l=2L2k_F`$ and thus resembles the usual planar expression. The function
$`f_2(x)`$ $`=`$ $`x^1+(1x^2)^{1/2},x<1`$ (21)
$`=`$ $`x^1,x>1`$ (22)
stems from the part $`\chi ^{(2)}(\mathrm{\Omega })`$ and is present in the even harmonics $`\chi _l`$. This coherent term has a prefactor $`1/L`$ in (Electronic correlations on a metallic nanosphere.) and formally vanishes in the limit $`Lr_0\mathrm{}`$. However, $`f_2(x)`$ is singular at $`x=0`$ and at $`x=1`$, which points deserve more attention. At $`l2L`$ one estimates the contribution of $`\chi ^{(2)}L^{1/2}`$, i.e. much less than $`\chi ^{(1)}1`$. At $`l1`$ both contributions are comparable. Moreover, using (8) and relevant formulas in one can find the exact identity for the uniform static susceptibility $`\chi _{l=0}=0`$. The vanishing value of $`\chi _{l=0}`$ is connected with our choice of the chemical potential lying between the discrete energy levels $`L1`$ and $`L`$, so that the excited electronic states are separated from the vacuum ones by the energy $`\mathrm{\Delta }E`$. It resembles the situation with vanishing static susceptibility in superconductors. Exploring further this analogy, it is interesting to observe that in the clean metal and at $`T0`$ the coherent term $`f_2`$ provides a maximum of $`\chi _l`$ at $`l=\sqrt{2}L`$. This point, similarly to the case of ferromagnetic superconductors , could be labeled as the momentum of helikoidal magnetic ordering.
Next, we discuss the superconducting correlation function, which reflects the tendency towards superconducting pairing and, at the mean-field level, defines the superconducting temperature $`T_c`$. The basic object here (found also in localization theory) is the static Cooper loop, which can be written in the $`r`$-representation as
$$\mathrm{\Pi }(\mathrm{\Omega })=T\underset{n}{\sum }G(\mathrm{\Omega },i\omega _n)G(\mathrm{\Omega },-i\omega _n),$$
(23)
Unlike the above case of the magnetic loop $`\chi (\mathrm{\Omega })`$, we could not arrive at a closed expression for $`\mathrm{\Pi }(\mathrm{\Omega })`$ using the exact formula (2) for $`G(\mathrm{\Omega },\omega )`$. However, one can find an approximate form of $`\mathrm{\Pi }(\mathrm{\Omega })`$ using the large-$`L`$ asymptotic form (3) of the Green’s function. A calculation similar to the above one gives the result (cf. (11))
$`\mathrm{\Pi }(\mathrm{\Omega })`$ $`\simeq `$ $`{\displaystyle \frac{mr_0^2}{4\pi ^2}}LJ_0^2[L(\pi -\mathrm{\Omega })],\quad (\pi -\mathrm{\Omega })\lesssim 1/L`$ (25)
$``$ $`\simeq `$ $`{\displaystyle \frac{mr_0^2}{4\pi ^3\mathrm{sin}\mathrm{\Omega }}}\left[{\displaystyle \frac{\pi -\mathrm{\Omega }}{\mathrm{sin}\mathrm{\Omega }}}-\mathrm{sin}(2L\mathrm{\Omega })\right],\quad \mathrm{\Omega }\lesssim 1`$ (26)
Again, Eqs. (25) and (26) smoothly match at $`(\pi -\mathrm{\Omega })\sim 1/L`$. As above, we decompose $`\mathrm{\Pi }(\mathrm{\Omega })`$ into the sum $`\mathrm{\Pi }^{(1)}(\mathrm{\Omega })+\mathrm{\Pi }^{(2)}(\mathrm{\Omega })`$ with
$`\mathrm{\Pi }^{(1)}(\mathrm{\Omega })`$ $`=`$ $`{\displaystyle \frac{mr_0^2}{4\pi ^2\mathrm{sin}^2\mathrm{\Omega }}}\mathrm{\Theta }\left[{\displaystyle \frac{\pi }{2}}-\mathrm{\Omega }\right],`$ (27)
$`\mathrm{\Pi }^{(2)}(\mathrm{\Omega })`$ $`=`$ $`{\displaystyle \frac{mr_0^2}{4\pi ^3\mathrm{sin}\mathrm{\Omega }}}\left[{\displaystyle \frac{\pi \mathrm{\Theta }\left[\mathrm{\Omega }-\pi /2\right]-\mathrm{\Omega }}{\mathrm{sin}\mathrm{\Omega }}}-\mathrm{sin}(2L\mathrm{\Omega })\right]`$ (28)
The property $`\mathrm{\Pi }^{(2)}(\pi -\mathrm{\Omega })=-\mathrm{\Pi }^{(2)}(\mathrm{\Omega })`$ should be noted here. At $`1/L\ll \mathrm{\Omega }\ll 1`$ the term $`\mathrm{\Pi }^{(1)}(\mathrm{\Omega })`$ is the principal one and the planar result is restored
$$\mathrm{\Pi }(\mathrm{\Omega })\simeq \frac{mr_0^2}{4\pi ^2\mathrm{sin}^2\mathrm{\Omega }}\simeq \frac{m}{(2\pi r)^2}$$
(29)
Finding the momentum representation of (26) according to (15), we encounter a logarithmic singularity at $`\mathrm{\Omega }\to 0`$ for all $`l`$, which originates from the term $`\mathrm{\Pi }^{(1)}(\mathrm{\Omega })`$. This divergence should be cut off at the lowest allowable angles $`\mathrm{\Omega }_0\sim a_0/r_0`$, with $`a_0`$ the interatomic spacing.
At the same time, the part $`\mathrm{\Pi }^{(2)}(\mathrm{\Omega })`$ provides a convergent integral which contributes only to odd harmonics, in view of the abovementioned property. Further, it saturates at small $`\mathrm{\Omega }`$, where $`\mathrm{\Pi }^{(2)}(\mathrm{\Omega })`$ coincides with $`\chi ^{(2)}(\mathrm{\Omega })`$.
The whole result is then given by :
$`\mathrm{\Pi }_l`$ $`\simeq `$ $`{\displaystyle \frac{m}{2\pi }}\mathrm{ln}\left({\displaystyle \frac{r_0}{a_0l}}\right),`$ (31)
$`\mathrm{\Pi }_l`$ $`\simeq `$ $`{\displaystyle \frac{m}{2\pi }}\left[\mathrm{ln}\left({\displaystyle \frac{r_0}{a_0l}}\right)-{\displaystyle \frac{1}{\pi L}}f_2\left({\displaystyle \frac{l+1/2}{2L}}\right)\right],`$ (32)
for even and odd $`l`$, respectively. Comparing this to (17) we see that the same coherent term $`f_2(l/2L)`$ contributes to the harmonics of different parity. A simple analysis shows that the presence of $`f_2(l/2L)`$ in (32) does not modify the overall character of $`\mathrm{\Pi }_l`$ for all $`L\gg 1`$, and it is the logarithmic term that defines the tendency to superconducting pairing at $`l\to 0`$. It should also be stressed that the static Cooper loop has a singularity at an analog of $`2k_F`$, a feature absent in the planar geometry.
Let us discuss now the ‘density-density’ correlation function of the electronic gas on the sphere. This function describes the variation in the electronic density, caused by impurities, and can be written as
$`𝒞(𝐫,𝐫^{})`$ $`=`$ $`{\displaystyle \frac{1}{\nu }}\langle (n(𝐫)-\nu )(n(𝐫^{})-\nu )\rangle -\delta (𝐫-𝐫^{})`$ (33)
$``$ $`=`$ $`-{\displaystyle \frac{1}{\nu }}\left(T{\displaystyle \underset{n}{\sum }}G(𝐫,𝐫^{},i\omega _n)e^{i\omega _n\tau }\right)^2,`$ (34)
where $`\tau \to +0`$ in the last line.
In the planar geometry we have at $`T=0`$ :
$`𝒞(𝐫,𝐫^{})`$ $`=`$ $`-{\displaystyle \frac{1}{\pi r^2}}J_1^2(k_Fr)\simeq {\displaystyle \frac{\mathrm{sin}(2k_Fr)-1}{\pi ^2k_Fr^3}},`$ (35)
with $`r=|𝐫-𝐫^{}|`$. We see that at large $`r`$ Friedel oscillations with a period $`(2k_F)^{-1}`$ take place.
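The second form in (35) follows from the standard large-argument asymptotics of the Bessel function (a textbook expansion, not spelled out in the text):
$$J_1(z)\simeq \sqrt{\frac{2}{\pi z}}\mathrm{cos}\left(z-\frac{3\pi }{4}\right)\;\;\Rightarrow \;\;J_1^2(k_Fr)\simeq \frac{1-\mathrm{sin}(2k_Fr)}{\pi k_Fr},$$
which, inserted into the first form, reproduces the $`\left(\mathrm{sin}(2k_Fr)-1\right)/(\pi ^2k_Fr^3)`$ tail.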
For the spherical geometry and large $`L`$ we use (3), (34), to obtain in the limit $`T=0`$ :
$`𝒞(𝐫,𝐫^{})`$ $`=`$ $`{\displaystyle \frac{1}{4\pi ^2r_0^2L}}{\displaystyle \frac{\mathrm{sin}(2L\mathrm{\Omega })-1}{\mathrm{sin}\mathrm{\Omega }\mathrm{sin}^2(\mathrm{\Omega }/2)}}`$ (36)
This expression holds at $`L\mathrm{sin}\mathrm{\Omega }\gg 1`$ and has the obvious correspondence with (35) for $`\mathrm{\Omega }\ll 1`$. At large $`\mathrm{\Omega }\to \pi `$ the amplitude of the correlations grows, in accordance with the previous formulas (11), (25).
Now let us discuss the case of finite temperatures. First, we note that for the considered case of a fixed number of particles, the position of the chemical potential depends on $`T`$ and is determined from the equation $`N=\sum _l(2l+1)n_F(E_l)`$, with $`n_F(x)`$ the Fermi function. Some analysis shows, however, that the value of $`\mu `$ at finite $`T`$ does not deviate much from its zero-temperature value $`\mu _0`$. Specifically, this deviation satisfies the inequalities $`|\mu -\mu _0|/T\ll 1`$ and $`|\mu -\mu _0|<\mathrm{\Delta }E/2`$ and is ignored below.
The above correlation functions are not essentially modified at $`T\ll \mathrm{\Delta }E`$. The changes occur when $`T\gtrsim \mathrm{\Delta }E`$, in which case we use Eq. (5) and perform the Matsubara sums in (6), (23), (34). As a result, we arrive at the following expressions:
$`\chi (\mathrm{\Omega },T)`$ $`\simeq `$ $`{\displaystyle \frac{m\mathrm{sin}(2L\mathrm{\Omega })}{(2\pi r_0\mathrm{\Omega })^2}}\,\mathcal{F}\left({\displaystyle \frac{2\pi T\mathrm{\Omega }}{\mathrm{\Delta }E}}\right)`$ (37)
$`\mathrm{\Pi }(\mathrm{\Omega },T)`$ $`\simeq `$ $`{\displaystyle \frac{mr_0^2}{4\pi ^2\mathrm{\Omega }^2}}\,\mathcal{F}\left({\displaystyle \frac{2\pi T\mathrm{\Omega }}{\mathrm{\Delta }E}}\right)`$ (38)
$`𝒞(𝐫,𝐫^{},T)`$ $`\simeq `$ $`{\displaystyle \frac{\mathrm{sin}(2L\mathrm{\Omega })-1}{\pi ^2r_0^2L\mathrm{\Omega }^3}}\,\mathcal{F}^2\left({\displaystyle \frac{\pi T\mathrm{\Omega }}{\mathrm{\Delta }E}}\right)`$ (39)
with $`\mathcal{F}(x)=x/\mathrm{sinh}x`$. Therefore temperatures $`T\gtrsim \mathrm{\Delta }E`$ lead to an exponential decrease of the correlations at $`\mathrm{\Omega }\gtrsim \mathrm{\Delta }E/(\pi T)=\xi _T/r_0`$, with the temperature coherence length $`\xi _T=2\mu /(\pi k_FT)`$. One should also note the disappearance of the coherent terms $`\chi ^{(2)}(\mathrm{\Omega })`$ and $`\mathrm{\Pi }^{(2)}(\mathrm{\Omega })`$, present at $`\mathrm{\Omega }\sim 1`$ in the magnetic and superconducting correlators, (12) and (27), respectively.
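The exponential suppression follows directly from the elementary asymptotics of the thermal factor,
$$\mathcal{F}(x)=\frac{x}{\mathrm{sinh}x}\simeq 2x\,e^{-x}\quad (x\gg 1),$$
so correlations at angles with $`\pi T\mathrm{\Omega }/\mathrm{\Delta }E\gg 1`$, i.e. $`\mathrm{\Omega }\gg \xi _T/r_0`$, are exponentially small.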
Summarizing, we have found closed forms for the correlation functions of the electron gas on a sphere, analyzed the effects peculiar to this geometry, and elucidated the role of finite temperatures for our results.
I thank S.L. Ginzburg, S.V. Maleyev, A.G. Yashenkin, A.V. Lazuta for stimulating discussions. The partial financial support from the Russian State Program for Statistical Physics (Grant VIII-2), grant FTNS 99-1134 and grant INTAS 97-1342 is gratefully acknowledged.
# What we do understand of colour confinement
## 1 Introduction
Not much progress has been made on the subject in the last year. This is a good time to assess what we have learned, and to attempt an outlook. The question in the title is ambitious. The answer will be: not as much as we think, but we have good handles.
This paper is the sum of two talks: one presented by the first author on the general statement of the problem, the other presented by the second author on specific lattice results. All this is based on results already presented in ref.’s 1. The topics which will be addressed are:
a) lattice vs. continuum formulation of QCD;
b) confinement;
c) duality and disorder parameter;
d) monopoles;
e) monopole condensation, abelian dominance and monopole dominance;
f) what next.
## 2 Lattice vs. continuum
A popular prejudice is that continuum QCD is based on logical and mathematical arguments, contrary to lattice, which is based on numerical simulations, and therefore does not help in understanding. In reality
1. The continuum quantization is perturbative, has the Fock vacuum as ground state, and consists of computing scattering processes between gluons and quarks. The Fock vacuum is certainly not the ground state, and this instability is signalled by the presence of renormalons, i.e. by the fact that the renormalized perturbative expansion is not even an asymptotic series. However, for some reason, which would be interesting to understand, perturbation theory works at short distances.
2. The lattice formulation is a sensible approximation to the functional Feynman integral which defines the theory. Most probably QCD exists as a self-consistent field theory, and is defined constructively on a lattice. Gauge invariance is built in. Since, in addition, objects with non-trivial topology (instantons, monopoles) play an important rôle in QCD dynamics, a formulation in terms of parallel transport, like the lattice, is superior.
3. Numerical results are like experiments: what is understood from them depends on the question they address. Experiments testing a symmetry, like Michelson-Morley experiment, can be more important to understand than the computation of 3-loop radiative corrections.
## 3 Confinement
Quarks and gluons have never been observed. The ratio of quark to nucleon abundance in the universe is bounded by the experimental limit
$`{\displaystyle \frac{n_q}{n_p}}\lesssim 10^{-27}`$ (1)
coming from a Millikan-like analysis of $`1g`$ of matter. In a cosmological standard model one would expect for the same ratio $`n_q/n_p\sim 10^{-12}`$, which is bigger by 15 orders of magnitude.
Lattice numerical evidences exist that QCD confines colour. The Wilson loops obey the area law
$`W(R,T)\underset{RT\to \mathrm{\infty }}{\sim }\mathrm{exp}(-\sigma RT).`$ (2)
Since general arguments imply that
$`W(R,T)\underset{RT\to \mathrm{\infty }}{\sim }\mathrm{exp}(-V(R)T),`$ (3)
$`V(R)`$ being the static $`Q\overline{Q}`$ potential, it follows that $`V(R)=\sigma R`$ ($`\sigma `$ is the string tension), which means confinement of quarks.
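In practice the string tension is usually extracted from ratios of Wilson loops; one standard estimator (not discussed explicitly here, and stated under the usual assumption that perimeter and constant contributions cancel, with $`a`$ the lattice spacing) is the Creutz ratio
$$\chi (R,T)=-\mathrm{ln}\frac{W(R,T)\,W(R-1,T-1)}{W(R,T-1)\,W(R-1,T)}\underset{R,T\to \mathrm{\infty }}{\longrightarrow }\sigma a^2.$$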
A guiding principle in our analysis of confinement will be that such an absolute property like confinement can only be explained in terms of symmetry. A similar situation exists e.g. in ordinary superconductivity: the resistivity is observed to be consistent with zero with very great precision. This is not due to the smallness of a tunable parameter, but to a symmetry.
## 4 Duality
In a finite temperature formulation of QCD the confined phase corresponds to the strong coupling region (low values of $`\beta =\frac{2N_c}{g^2}`$). Above some $`\beta _C`$ deconfinement takes place. Apparently the confined phase is disordered: it should, however, have a nontrivial order, if confinement has to be explained in terms of symmetry.
A wide class of systems exist in statistical mechanics and in field theory, in which a similar situation occurs. All those systems have topologically non trivial configurations, carrying a conserved topological charge, and admit two equivalent descriptions (duality). The “ordered” phase (low values of $`\beta `$) is described in terms of the usual local fields, and its symmetry is discussed in terms of vacuum expectation values (vev) of local fields (order parameters): in this description topological excitations are extended objects (kinks, vortices, $`\mathrm{}`$). In the dual description the extended topological objects are described in terms of dual local fields, the coupling constant is $`1/g`$ and the disordered phase looks ordered, while the phase that originally was ordered looks disordered. The order parameter of the dual description is called a disorder parameter. Understanding confinement consists then in understanding the symmetry of the system dual to QCD. In fact the explicit construction of the dual system is not required: once the dual symmetry is understood the disorder parameter can be constructed in terms of the original local fields, of course as a highly non local operator.
A suggestive possibility in this direction is that QCD vacuum could behave as a dual superconductor. Confinement would be a consequence of the dual Meissner effect, squeezing the chromoelectric field of a $`Q\overline{Q}`$ pair into an Abrikosov flux tube with energy proportional to the length, or
$`V(R)=\sigma R.`$ (4)
In this mechanism monopoles are the topological structures which are expected to condense.
Chromoelectric flux tubes are indeed observed on lattice configurations and also their collective modes have been detected. The main results of a more detailed analysis of this mechanism will be presented below.
## 5 Monopoles in non abelian gauge theories
We shall refer to $`SU(2)`$ for simplicity: the extension to $`SU(3)`$ only involves some additional formal complications.
Let $`\mathrm{\Phi }=\stackrel{}{\mathrm{\Phi }}\cdot \stackrel{}{\sigma }`$ be any operator in the adjoint representation. A unit colour vector $`\widehat{\mathrm{\Phi }}(x)`$ can be defined:
$`\widehat{\mathrm{\Phi }}(x)={\displaystyle \frac{\stackrel{}{\mathrm{\Phi }}(x)}{|\stackrel{}{\mathrm{\Phi }}(x)|}}`$ (5)
everywhere except at sites where $`\stackrel{}{\mathrm{\Phi }}(x)=0`$. The field configuration $`\stackrel{}{\mathrm{\Phi }}(x)`$ can present a non trivial topology. If we adopt a “local” reference frame for colour, with 3 orthonormal unit vectors $`\stackrel{}{\xi }_i(x)`$, $`\stackrel{}{\xi }_i(x)\cdot \stackrel{}{\xi }_j(x)=\delta _{ij}`$, $`\stackrel{}{\xi }_i(x)\wedge \stackrel{}{\xi }_j(x)=ϵ_{ijk}\stackrel{}{\xi }_k(x)`$, with $`\stackrel{}{\xi }_3(x)=\widehat{\mathrm{\Phi }}(x)`$, instead of the usual $`x`$ independent unit vectors $`\stackrel{}{\xi }_i^o`$, a rotation $`R(x)`$ will exist such that
$`\stackrel{}{\xi }_i(x)=R(x)\stackrel{}{\xi }_i^o.`$ (6)
Since $`\stackrel{}{\xi }_i(x)^2=1`$,
$`\partial _\mu \stackrel{}{\xi }_i(x)=\stackrel{}{\omega }_\mu \wedge \stackrel{}{\xi }_i(x)`$ (7)
or
$`D_\mu \stackrel{}{\xi }_i(x)=\left(\partial _\mu -\stackrel{}{\omega }_\mu \wedge \right)\stackrel{}{\xi }_i(x)=0.`$ (8)
The symbol $`\wedge `$ indicates cross product; the $`SO(3)`$ generators are in the fundamental representation $`T_{ij}^a=iϵ_{iaj}`$.
Eq. (8) implies $`[D_\mu ,D_\nu ]=0`$, or
$`\stackrel{}{F}_{\mu \nu }=\partial _\mu \stackrel{}{\omega }_\nu -\partial _\nu \stackrel{}{\omega }_\mu +\stackrel{}{\omega }_\mu \wedge \stackrel{}{\omega }_\nu =0.`$ (9)
The rotation (6) is a parallel transport, and as such it is a pure gauge.
The solution of eq. (8) is
$`\stackrel{}{\xi }_i(x)=\text{Pexp}\left(i{\displaystyle \int _{\mathrm{\infty },C}^x}\stackrel{}{\omega }_\mu \cdot \stackrel{}{T}dx^\mu \right)\stackrel{}{\xi }_i^o,`$ (10)
and eq. (9) implies that the path integral (10) is independent of the choice of the line $`C`$. In fact eq. (9) is not valid at the singularities occurring at the zeros of $`\stackrel{}{\mathrm{\Phi }}(x)`$, where $`R(x)`$ is not defined, and as a consequence $`\stackrel{}{\xi }_i(x)`$ is not independent of the path $`C`$.
The inverse rotation $`R^{-1}(x)`$ acts on $`\widehat{\mathrm{\Phi }}(x)=\stackrel{}{\xi }_3`$ as
$`R^{-1}(x)\widehat{\mathrm{\Phi }}(x)=\stackrel{}{\xi }_3^o.`$ (11)
$`R^{-1}(x)`$ is called abelian projection.
Under abelian projection the field strength tensor $`\stackrel{}{G}_{\mu \nu }=\partial _\mu \stackrel{}{A}_\nu -\partial _\nu \stackrel{}{A}_\mu +g\stackrel{}{A}_\mu \wedge \stackrel{}{A}_\nu `$ has the usual covariant transformation out of the singularities. Where $`R(x)`$ is not defined it can acquire a singular term
$`\stackrel{}{G}_{\mu \nu }\underset{R^{-1}(x)}{\to }\stackrel{}{G}_{\mu \nu }+\widehat{\mathrm{\Phi }}(x)\left(\partial _\mu A_\nu ^{sing}-\partial _\nu A_\mu ^{sing}\right).`$ (12)
The quantity
$`F_{\mu \nu }=\widehat{\mathrm{\Phi }}(x)\cdot \stackrel{}{G}_{\mu \nu }-{\displaystyle \frac{1}{g}}\left(D_\mu \widehat{\mathrm{\Phi }}(x)\wedge D_\nu \widehat{\mathrm{\Phi }}(x)\right)\cdot \widehat{\mathrm{\Phi }}(x)`$ (13)
can be identically put in the form
$`F_{\mu \nu }=\partial _\mu A_\nu -\partial _\nu A_\mu -{\displaystyle \frac{1}{g}}\left(\partial _\mu \widehat{\mathrm{\Phi }}(x)\wedge \partial _\nu \widehat{\mathrm{\Phi }}(x)\right)\cdot \widehat{\mathrm{\Phi }}(x).`$ (14)
In the abelian projected form the second term disappears, since $`\widehat{\mathrm{\Phi }}(x)\to \stackrel{}{\xi }_3^o`$, which is $`x`$ independent, and
$`F_{\mu \nu }=\partial _\mu A_\nu -\partial _\nu A_\mu +\text{ singular term}.`$ (15)
$`F_{\mu \nu }`$ is an abelian field.
$`F_{\mu \nu }^{*}=\frac{1}{2}ϵ_{\mu \nu \rho \sigma }F^{\rho \sigma }`$ defines a magnetic current
$`j_\mu ^M`$ $`=`$ $`\partial ^\nu F_{\mu \nu }^{*},`$ (16)
which is identically conserved. The system has a magnetic $`U(1)`$ symmetry. If there are no singularities $`j_\mu ^M`$ itself vanishes (Bianchi identities). It can be shown that the singularities of the abelian projection are nothing but pointlike $`U(1)`$ magnetic charges: the singular term in eq. (12) is a Dirac string taking care of flux conservation.
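The conservation stated here is purely kinematical: since $`F_{\mu \nu }^{*}`$ is antisymmetric,
$$\partial ^\mu j_\mu ^M=\partial ^\mu \partial ^\nu F_{\mu \nu }^{*}\equiv 0,$$
independently of the equations of motion; a non-zero $`j_\mu ^M`$ can only arise from the singular (Dirac-string) contributions where the Bianchi identities fail.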
A magnetic $`U(1)`$ symmetry exists for each of the functionally-infinite choices of $`\stackrel{}{\mathrm{\Phi }}(x)`$: one can construct a disorder parameter as the vev of an operator carrying non zero magnetic charge, and investigate dual superconductivity. The disorder parameter can be then measured on the lattice, and the symmetry of the confined vacuum can be determined. This has been done for a number of different choices of the operator $`\stackrel{}{\mathrm{\Phi }}(x)`$, and for all of them the transition to confined phase is a transition from dual normal conductor to dual superconductor.
## 6 Monopole condensation: the disorder parameter
The disorder parameter is the vev of a non local operator. The logical procedure to define this operator merely consists in shifting the field variables at a given time $`t`$ by a configuration describing a static monopole sitting at $`\stackrel{}{y}`$, $`\stackrel{}{A}(\stackrel{}{x},\stackrel{}{y})`$, in the same way as for a particle moving in one dimension
$`e^{ipa}|x\rangle =|x+a\rangle .`$ (17)
For a field configuration $`\stackrel{}{A}(\stackrel{}{x},t)`$, adding the field $`\stackrel{}{A}(\stackrel{}{x},\stackrel{}{y})`$ amounts to
$`\mu |\stackrel{}{A}(\stackrel{}{x},t)\rangle =e^{i{\scriptscriptstyle \int \stackrel{}{\pi }(\stackrel{}{x},t)\cdot \stackrel{}{A}(\stackrel{}{x},\stackrel{}{y})d\stackrel{}{x}}}|\stackrel{}{A}(\stackrel{}{x},t)\rangle =|\stackrel{}{A}(\stackrel{}{x},t)+\stackrel{}{A}(\stackrel{}{x},\stackrel{}{y})\rangle .`$ (18)
In doing that care has to be taken of the compactness of the theory, and of the fact that we want to add a monopole to the abelian part of the abelian projected field. All this can be done exactly and gives
$`\mu ={\displaystyle \frac{Z[S+\mathrm{\Delta }S]}{Z[S]}},`$ (19)
where $`Z`$ is the usual partition function of the theory and $`\mathrm{\Delta }S`$ consists in a modification to the action $`S`$ in the slice $`x_0=t`$, in all points of space (non local operator).
The recipe is to change the temporal plaquette $`\mathrm{\Pi }_{i0}(\stackrel{}{n},t)`$ to $`\mathrm{\Pi }_{i0}^{\prime }(\stackrel{}{n},t)`$:
$`\mathrm{\Pi }_{i0}(\stackrel{}{n},t)=U_i(\stackrel{}{n},t)U_0(\stackrel{}{n}+\widehat{ı},t)\left(U_i(\stackrel{}{n},t+1)\right)^{\dagger }\left(U_0(\stackrel{}{n},t)\right)^{\dagger }`$ (20)
$`\mathrm{\Pi }_{i0}^{\prime }(\stackrel{}{n},t)=U_i^{\prime }(\stackrel{}{n},t)U_0(\stackrel{}{n}+\widehat{ı},t)\left(U_i(\stackrel{}{n},t+1)\right)^{\dagger }\left(U_0(\stackrel{}{n},t)\right)^{\dagger },`$ (21)
$`U_i^{\prime }(\stackrel{}{n},t)`$ $`=`$ $`e^{i\mathrm{\Lambda }(\stackrel{}{n},\stackrel{}{y})\widehat{\mathrm{\Phi }}(\stackrel{}{n},t)\cdot \stackrel{}{\sigma }}U_i(\stackrel{}{n},t)e^{iA_i^M(\stackrel{}{n}+\widehat{ı}/2,\stackrel{}{y})\widehat{\mathrm{\Phi }}(\stackrel{}{n}+\widehat{ı},t)\cdot \stackrel{}{\sigma }}`$ (23)
$`e^{-i\mathrm{\Lambda }(\stackrel{}{n}+\widehat{ı},\stackrel{}{y})\widehat{\mathrm{\Phi }}(\stackrel{}{n}+\widehat{ı},t)\cdot \stackrel{}{\sigma }},`$
$`\stackrel{}{A}_{\perp }^M(\stackrel{}{x},\stackrel{}{y})`$ and $`\mathrm{\Lambda }(\stackrel{}{x},\stackrel{}{y})`$ being respectively the transverse part ($`\stackrel{}{\nabla }\cdot \stackrel{}{A}_{\perp }^M(\stackrel{}{x},\stackrel{}{y})=0`$) and the pure gauge part ($`\stackrel{}{\nabla }\mathrm{\Lambda }(\stackrel{}{x},\stackrel{}{y})=\stackrel{}{A}_{\parallel }(\stackrel{}{x},\stackrel{}{y})`$) of $`\stackrel{}{A}(\stackrel{}{x},\stackrel{}{y})`$.
The operator is in fact $`\mu =e^{-\beta \mathrm{\Delta }S}`$, with $`\mathrm{\Delta }S\propto N_s^3`$, $`N_s`$ being the spatial extension of the system. The fluctuations of $`\mu `$ are then $`\sim \mathrm{exp}(N_s^{3/2})`$. Instead of $`\mu `$, which is a widely fluctuating quantity, it proves convenient to define
$`\rho ={\displaystyle \frac{\text{d}}{\text{d}\beta }}\mathrm{log}\mu .`$ (24)
Eq. (19) gives
$`\rho =\langle S\rangle _S-\langle S+\mathrm{\Delta }S\rangle _{S+\mathrm{\Delta }S}.`$ (25)
The subscript denotes the action used in weighting the average.
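Eq. (25) follows in one line from the definitions (19) and (24): with $`Z[S]=\int \mathcal{D}U\,e^{-\beta S}`$ one has
$$\rho =\frac{d}{d\beta }\mathrm{ln}\frac{Z[S+\mathrm{\Delta }S]}{Z[S]}=-\langle S+\mathrm{\Delta }S\rangle _{S+\mathrm{\Delta }S}+\langle S\rangle _S,$$
which is the expression used in the simulations.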
## 7 Monopole condensation in $`SU(2)`$ and $`SU(3)`$
We studied numerically on the lattice the deconfining phase transition at finite temperature for $`SU(2)`$ and $`SU(3)`$ by means of the quantity $`\rho `$, eq. (25). Finite temperature means that our lattice is $`N_s^3\times N_t`$, with $`N_s\gg N_t`$. $`N_s^3`$ is the physical volume, while $`N_t`$ is related to the temperature $`T`$ by the relationship $`T=1/\left[N_ta(\beta )\right]`$. At finite temperature, $`C^{*}`$ boundary conditions have to be used in the $`t`$ direction.
We investigated condensation for the monopoles defined by the following operators $`\mathrm{\Phi }`$:
* $`\mathrm{\Phi }`$ is related to the Polyakov line $`L(\stackrel{}{n},t)=\mathrm{\Pi }_{t^{}=t}^{N_t-1}U_0(\stackrel{}{n},t^{})\mathrm{\Pi }_{t^{}=0}^{t-1}U_0(\stackrel{}{n},t^{})`$ in the following way:
$`\mathrm{\Phi }(n)\equiv \mathrm{\Phi }(\stackrel{}{n},t)=L(\stackrel{}{n},t)L^{*}(\stackrel{}{n},t)`$ (26)
(Polyakov projection on a $`C^{*}`$ periodic lattice<sup>3</sup><sup>3</sup>3the symbol $`*`$ in eq. (26) indicates the complex conjugation operation.);
* $`\mathrm{\Phi }`$ is an open plaquette, i.e. a parallel transport on an elementary square of the lattice
$`\mathrm{\Phi }(n)=\mathrm{\Pi }_{ij}(\stackrel{}{n},t)=U_i(n)U_j(n+\widehat{ı})\left(U_i(n+\widehat{ȷ})\right)^{\dagger }\left(U_j(n)\right)^{\dagger };`$ (27)
* $`\mathrm{\Phi }`$ is the “butterfly” (topological charge density) operator:
$`\mathrm{\Phi }(n)=F(\stackrel{}{n},t)=U_x(n)U_y(n+\widehat{x})\left(U_x(n+\widehat{y})\right)^{\dagger }\left(U_y(n)\right)^{\dagger }`$ (28)
$`U_z(n)U_t(n+\widehat{z})\left(U_z(n+\widehat{t})\right)^{\dagger }\left(U_t(n)\right)^{\dagger }.`$
Our main results are
1. $`\rho `$ shows a sharp negative peak in the critical region independently of the abelian projection chosen (fig. 1). $`\mu `$ has an abrupt decline in the critical region. $`\rho `$ does not depend on the abelian projection used.
2. The peak position follows the displacement of the critical temperature when the size in the $`t`$ direction is changed.
3. In the $`SU(3)`$ case for a given abelian projection we have two possible choices for defining monopoles, the residual gauge group being $`[U(1)]^2`$. The corresponding $`\rho `$’s show a similar behaviour.
4. For both $`SU(2)`$ and $`SU(3)`$ at strong coupling (low $`\beta `$’s) $`\rho `$ seems to reach a finite limit when $`N_s\to \mathrm{\infty }`$. From the condition
$`\mu =\mathrm{exp}\left({\displaystyle \int _0^\beta }\rho (\beta ^{\prime })\text{d}\beta ^{\prime }\right)`$ (29)
it follows that $`\mu \ne 0`$ and the magnetic $`[U(1)]^{N-1}`$ symmetry is broken in the confined phase.
5. At weak coupling (large $`\beta `$’s) numerical data are consistent with a linear behaviour of $`\rho `$ as a function of $`N_s`$. For $`\mu `$ we get
$`\mu \underset{N_s\to \mathrm{\infty }}{\sim }Ae^{-(cN_s+d)\beta },`$ (30)
with $`c\simeq 0.6`$ and $`d\simeq 12`$ for $`SU(2)`$ and $`c\simeq 2`$ and $`d\simeq 12`$ for $`SU(3)`$. $`\mu =0`$ in the deconfined phase.
6. A finite size scaling analysis shows that the disorder parameter reproduces the correct critical indices and the correct $`\beta _C`$ in both cases. We have indeed
$`{\displaystyle \frac{\rho }{N_s^{1/\nu }}}=f\left(N_s^{1/\nu }\left(\beta _C-\beta \right)\right),`$ (31)
i.e. $`\rho /N_s^{1/\nu }`$ is a function of the scaling variable $`x=N_s^{1/\nu }\left(\beta _C-\beta \right)`$. In eq. (31) $`\nu `$ is the critical index associated to the divergence of the correlation length (pseudo-divergence for a first order phase transition) and $`\beta _C`$ is the critical value of $`\beta `$. We parameterize $`\rho /N_s^{1/\nu }`$ as
$`{\displaystyle \frac{\rho }{N_s^{1/\nu }}}=-{\displaystyle \frac{\delta }{x}}-c+{\displaystyle \frac{a}{N_s^3}},`$ (32)
where $`\delta `$ is the exponent associated to the drop of $`\mu `$ at the transition ($`\mu \sim \left(\beta _C-\beta \right)^\delta `$, $`N_s\to \mathrm{\infty }`$), $`c`$ is a constant and $`a`$ measures scaling violations. We find, independently of the abelian projection and (for $`SU(3)`$) of the abelian generator, $`\beta _C=2.29(3)`$, $`\nu =0.63(5)`$, $`\delta =0.20(8)`$, $`a\simeq 0`$ for $`SU(2)`$ and $`\beta _C=5.69(3)`$, $`\nu =0.33(5)`$, $`\delta =0.54(4)`$, $`a\simeq 210`$ for $`SU(3)`$ (fig. 2). $`\beta _C`$ and $`\nu `$ agree with ref.’s 14.
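A minimal sketch of the scaling analysis described in item 6, assuming the measured values of $`\rho `$ are available as arrays (the grid of sizes, the couplings and the `None` placeholders below are illustrative only, not data from the simulations):

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder inputs: for each spatial size Ns, arrays of beta and measured rho(beta).
# In a real analysis these would come from the lattice simulations.
data = {
    12: (np.linspace(2.20, 2.40, 21), None),
    16: (np.linspace(2.20, 2.40, 21), None),
    24: (np.linspace(2.20, 2.40, 21), None),
}
beta_c, nu = 2.29, 0.63   # SU(2) values quoted in the text

for Ns, (beta, rho) in data.items():
    if rho is None:
        continue  # skip entries without measurements
    x = Ns**(1.0 / nu) * (beta_c - beta)   # scaling variable
    y = rho / Ns**(1.0 / nu)               # rescaled disorder-parameter derivative
    plt.plot(x, y, 'o', label=f'$N_s={Ns}$')

plt.xlabel(r'$N_s^{1/\nu}(\beta_C-\beta)$')
plt.ylabel(r'$\rho/N_s^{1/\nu}$')
plt.legend()
plt.show()
```

If the ansatz (31) holds, the points for different $`N_s`$ fall on a single curve.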
## 8 Discussion
The firm point we have is that confinement is an order disorder transition. Whatever the dual symmetry of the disordered phase is, the operators defining dual order have non zero magnetic charge in different abelian projections.
For sure the statement that only one abelian projection is at work, e.g. the maximal abelian, is inconsistent with this symmetry. Maybe the maximal abelian projection is more convenient than other abelian projections for building effective Lagrangians, but this point is not directly relevant to the symmetry.
In addition, if one single abelian projection were involved in monopole condensation, then flux tubes observed on lattice in $`Q\overline{Q}`$ configurations should have the electric field belonging to the confining $`U(1)`$, and this is not the case. Moreover there exist coloured states which are neutral with respect to that $`U(1)`$, and they would not be confined.
The symmetry of the dual field theory is more clever than we think. It is in any case related to condensation of monopoles in different abelian projections. As guessed in ref. 16, all abelian projections are physically equivalent.
# Noise-induced flow in quasigeostrophic turbulence with bottom friction
## Abstract
Randomly-forced fluid flow in the presence of scale-unselective dissipation develops mean currents following topographic contours. Known mechanisms based on the scale-selective action of damping processes are not at work in this situation. Coarse-graining reveals that the phenomenon is a kind of noise-rectification mechanism, in which lack of detailed balance and the symmetry-breaking provided by topography play an important role.
In the last decades much effort has been devoted towards understanding fluid motion in situations in which vertical velocities are small and slaved to the horizontal motion. Under these circumstances the flow can be described in terms of two horizontal coordinates, the vertical depth of the fluid becoming a dependent variable. The fluid displays many of the unique properties of two-dimensional turbulence, but some of the aspects of three-dimensional dynamics are still essential. Among the several quasi two-dimensional dynamics considered, barotropic quasigeostrophic turbulence has attracted most of the interest. The reason for such interest lies in the relevance of this dynamics as a model to understand planetary atmospheres and ocean general circulations as well as to describe some plasma physics phenomena. In the first case the two-dimensionality of the flow is induced by the Coriolis force, whereas in the second it may arise from the action of magnetic fields. In the geophysical context, the topography over which the layer of fluid flows is the ingredient introducing a difference with respect to purely two-dimensional turbulence.
The first works on topographic turbulence focused on its statistical properties in the absence of dissipation and forcing . These studies established a tendency of the flow to reach a maximum entropy (Gibbs) state characterized by the existence of stationary mean currents following isobaths. Mean currents associated with topography are however not restricted to this inviscid and unforced case, as shown by , where viscosity was included. Statistical-mechanics equilibrium arguments can not be invoked in this situation of decaying turbulence to explain the appearance of mean currents correlated to topographic features. The explanation put forward was that viscous damping, due to its stronger action at the smallest scales, dissipates enstrophy much faster than energy, a quantity more concentrated at large scales in twodimensional turbulence settings. As a consequence of this scale-selective behavior, the decaying turbulent state would be the one with the smaller enstrophy compatible with the (slowly decreasing in time) instantaneous value of the energy. It turns out that these minimum enstrophy states are closely related to the equilibrium maximum entropy ones and, as them, present distinct mean currents following isobaths. The natural tendency of topographic turbulence to generate currents along isobaths, under different situations, aroused a significant interest in the geophysical community. Specifically, studies were addressed to investigate the role played by these currents on the general circulation of the world’s ocean .
It has been recently shown numerically that mean currents following isobaths appear also in situations in which a scale-unselective dissipation, Rayleigh friction modeling an Ekman bottom drag, is used. This fact was observed in numerical simulations in which random forcing (also of a scale-unselective nature) was present. Thus the state obtained is not one of decaying turbulence but a nonequilibrium statistically steady state. Statistical properties are not far from those of a generalized canonical equilibrium, but this is just a numerical coincidence, since neither the maximum entropy nor the minimum enstrophy mechanisms, the ones invoked in the previous situations, are here at work. Instead, these currents were interpreted as a noise rectification phenomenon: the mean currents were generated by nonlinearity and sustained by noise, with topography providing the symmetry-breaking ingredient giving a particular sense to the flow. Later it was theoretically shown and numerically confirmed that noise rectification also appears in randomly forced topographic turbulence when a scale-selective damping (viscosity) is considered . However, the detailed understanding of the noise-rectification mechanism when a scale-unselective damping is present is still an open question. Addressing this question, as well as providing a rigorous theoretical framework to explain the numerical results mentioned above, is the main motivation of this Letter.
In the quasigeostrophic approximation, the motion of a single layer of fluid in a rotating frame or planet can be described in terms of the horizontal components of the velocity $`(u(𝐱),v(𝐱))`$ that can also be written in terms of a streamfunction verifying:
$$u=-\frac{\partial \psi }{\partial y},v=\frac{\partial \psi }{\partial x}$$
(1)
The streamfunction $`\psi (𝐱,t)`$, with $`𝐱(x,y)`$, is governed by the dynamics :
$$\frac{\partial \nabla ^2\psi }{\partial t}+\lambda [\psi ,\nabla ^2\psi +h]=-ϵ\nabla ^2\psi +F.$$
(2)
$`ϵ`$ is the friction parameter, modelling bottom friction, $`F(𝐱,t)`$ is any kind of relative-vorticity external forcing, and $`h=f\mathrm{\Delta }H/H_0`$, with $`f`$ the Coriolis parameter, $`H_0`$ the mean depth, and $`\mathrm{\Delta }H(𝐱)`$ the local deviation from the mean depth. $`\lambda `$ is a bookkeeping parameter introduced to allow perturbative expansions in the interaction term. The physical case corresponds to $`\lambda =1`$. The Poisson bracket or Jacobian is defined as
$$[A,B]=\frac{\partial A}{\partial x}\frac{\partial B}{\partial y}-\frac{\partial B}{\partial x}\frac{\partial A}{\partial y}.$$
(3)
Equation (2) gives the time evolution of relative vorticity subjected to forcing and dissipation. The dissipation term used here acts with the same strength at all spatial scales, in contrast with viscosity terms of the form $`\nabla ^4\psi `$, excluded from our model (2), which would dissipate the smaller scales more efficiently. A convenient choice of $`F`$, able to model a variety of processes, is to assume it to be a Gaussian stochastic process with zero mean, and with a Fourier transform $`\widehat{F}_𝐤(\omega )`$ having the following two-point correlation function:
$$\langle \widehat{F}_𝐤(\omega )\widehat{F}_{𝐤^{\prime }}(\omega ^{\prime })\rangle =Dk^{-y}\delta (𝐤+𝐤^{\prime })\delta (\omega +\omega ^{\prime }).$$
(4)
Here, $`𝐤=(k_x,k_y)`$, and $`k=|𝐤|`$. Thus the process is white in time but has power-law correlations in space. Stochastic forcing has been used in fluid dynamics problems to model stirring forces, short scale instabilities, thermal noise , or processes below the resolution of computer models , among others .
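As an illustration of the forcing defined in (4), such a field can be generated spectrally; the sketch below (a minimal example, with grid size, $`D`$ and $`y`$ chosen arbitrarily, and assuming the $`k^{-y}`$ spectral form written above) draws one spatial realisation of $`F`$ at a fixed time:

```python
import numpy as np

N, D, y = 128, 1.0, 2.0          # grid points per side, noise amplitude, spectral exponent
kx = np.fft.fftfreq(N) * 2 * np.pi
KX, KY = np.meshgrid(kx, kx, indexing='ij')
k = np.sqrt(KX**2 + KY**2)
k[0, 0] = np.inf                  # remove the k=0 mode (no mean forcing)

# Gaussian white noise in real space, coloured in Fourier space with spectrum D k^{-y}
white = np.fft.fft2(np.random.normal(size=(N, N)))
F_hat = np.sqrt(D * k**(-y)) * white
F = np.real(np.fft.ifft2(F_hat))  # one snapshot of the random vorticity forcing
```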
Guided by the results in , we attribute the average currents to a large-scale rectification of the small-scale fluctuations introduced by the noise term. In consequence we resort to a coarse-graining procedure to investigate how the dynamics of long-wavelength modes in (2) is affected by the small scales. For our problem it is convenient to use the Fourier components of the streamfunction $`\widehat{\psi }_{𝐤\omega }`$ or equivalently the relative vorticity $`\zeta _{𝐤\omega }=-k^2\widehat{\psi }_{𝐤\omega }`$. This variable satisfies:
$`\zeta _{𝐤\omega }`$ $`=`$ $`G_{𝐤\omega }^0F_{𝐤\omega }+`$ (6)
$`\lambda G_{𝐤\omega }^0{\displaystyle \underset{𝐩,𝐪,\mathrm{\Omega },\mathrm{\Omega }^{}}{}}A_{\mathrm{𝐤𝐩𝐪}}\left(\zeta _{𝐩\mathrm{\Omega }}\zeta _{𝐪\mathrm{\Omega }^{}}+\zeta _{𝐩\mathrm{\Omega }}h_𝐪\right),`$
where the interaction coefficient is:
$$A_{\mathrm{𝐤𝐩𝐪}}=(p_xq_y-p_yq_x)p^{-2}\delta _{𝐤,𝐩+𝐪},$$
(7)
the bare propagator is:
$$G_{𝐤\omega }^0=(i\omega +ϵ)^{-1},$$
(8)
and the sum is restricted by $`𝐤=𝐩+𝐪`$ and $`\omega =\mathrm{\Omega }+\mathrm{\Omega }^{\prime }`$. $`𝐩=(p_x,p_y)`$, $`p=|𝐩|`$, and similar expressions hold for $`𝐪`$. The wavenumbers are restricted to $`0<k<k_0`$, with $`k_0`$ an upper cut-off. Following the method in Ref. , one can eliminate the modes $`\zeta _{𝐤\omega }^>`$ with $`k`$ in the shell $`k_0e^{-\delta }<k<k_0`$ and substitute their expressions into the equations for the remaining low-wavenumber modes $`\zeta _{𝐤\omega }^<`$ with $`0<k<k_0e^{-\delta }`$. The parameter $`\delta `$ thus measures the width of the band of eliminated wavenumbers. To second order in $`\lambda `$, the resulting equation of motion for the modes $`\zeta _{𝐤\omega }^<`$, written in terms of the large-scale streamfunction $`\psi ^<(𝐱,t)`$ is:
$`{\displaystyle \frac{\partial \nabla ^2\psi ^<}{\partial t}}`$ $`+`$ $`\lambda [\psi ^<,\nabla ^2\psi ^<+h^<]=`$ (9)
$``$ $``$ $`-ϵ\nabla ^2\psi ^<-g\nabla ^4h^<+\nu ^{\prime }\nabla ^4\psi ^<+F^{\prime },`$ (10)
where
$$\nu ^{\prime }=\frac{\lambda ^2S_2Dy\delta }{16(2\pi )^2ϵ^2},$$
(11)
$$g(\lambda ,D,\delta ,ϵ,y)=\frac{\lambda ^2DS_2(y+2)\delta }{32(2\pi )^2ϵ^2}.$$
(12)
$`F^{\prime }(𝐱,t)`$ is an effective noise which turns out to be also a Gaussian process, but with mean value and correlations given by:
$$<F^{\prime }(𝐱,t)>=-\frac{\lambda ^2DS_2(2+y)\delta }{32(2\pi )^2ϵ^2}\nabla ^4h^<,$$
(13)
$`\left\langle \left(\widehat{F}_k^{\prime }(\omega )-\langle \widehat{F}_k^{\prime }(\omega )\rangle \right)\left(\widehat{F}_{k^{\prime }}^{\prime }(\omega ^{\prime })-\langle \widehat{F}_{k^{\prime }}^{\prime }(\omega ^{\prime })\rangle \right)\right\rangle =`$ (14)
$`Dk^{-y}\delta (k+k^{\prime })\delta (\omega +\omega ^{\prime })`$ (15)
$`S_2`$ is the length of the unit circle: $`2\pi `$. Equations (9)-(14) are the main result in this Letter. They give the dynamics of long wavelength modes $`\psi _{𝐤\omega }^<`$ with $`0<k<k_0e^{-\delta }`$. They are valid for small $`\lambda `$ or, when $`\lambda \simeq 1`$, for small width $`\delta `$ of the elimination band.
The elimination of the small scales leads, as expected on physical grounds, to the appearance of an effective viscosity $`\nu ^{\prime }`$ at large scales. Depending on the sign of $`y`$, this viscous term can be destabilizing. One should keep in mind however that Eq.(9) is only valid at large scales, so that such small-wavelength instabilities, when formally present, would be avoided by proper choice of the wavenumber cut-off. More importantly, a new term depending on the topography, $`g\nabla ^4h^<`$, has been generated by the small scales. Another similar term is contained in the mean value of the effective large-scale noise $`F^{\prime }`$. Both terms have the effect of pushing the large-scale motion towards a state of flow following the isolevels of bottom perturbations $`h^<`$. The energy in this preferred state is determined by the function $`g(\lambda ,D,\delta ,ϵ,y)`$ which measures the influence of the different terms of the dynamics (nonlinearity, noise, friction). From relation (12) it is clear that while nonlinearities and noise increase the intensity of the mean currents in the preferred state, high values of the friction parameter would reduce them.
As in the situation with scale-selective damping , (12) and (13) exhibit a characteristic dependence on the exponent $`y`$ of the random-forcing spectral power-law. In the present case, the dependence on $`y+2`$ implies that the directed currents reverse sign as $`y`$ crosses the value $`-2`$, and that they vanish if $`y=-2`$. It is easy to check that the conditions for detailed balance between the fluctuation input and dissipation are satisfied precisely for $`y=-2`$ (in general, the condition is $`y=-n`$, where $`n`$ is the exponent of $`\nabla `$ in the dissipation term). In this situation the steady-state probability distribution for $`\psi `$ can be found exactly and is independent of topography. Thus the generation of flow along isobaths is a consequence of lack of detailed balance, a general feature of noise-rectifying mechanisms.
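One compact way to read this statement (grouping the mean part of $`F^{\prime }`$ with the explicit topographic term, and assuming $`\nu ^{\prime }>0`$) is to combine the deterministic contributions in (9) and (13) with the eddy-viscosity term,
$$\nu ^{\prime }\nabla ^4\psi ^<-2g\nabla ^4h^<=\nu ^{\prime }\nabla ^4\left(\psi ^<-\frac{2g}{\nu ^{\prime }}h^<\right),$$
so the slow dynamics is driven towards $`\psi ^<\propto h^<`$, i.e. towards currents along the isolevels of the topography, with an amplitude set by $`g/\nu ^{\prime }`$.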
Concluding, relations (9)-(14) show that the origin of average circulation patterns in quasigeostrophic turbulence over topography is related to nonlinearity and lack of detailed balance. The present approach provides a theoretical basis for the numerical observations presented in , for which mechanisms based on the scale-selective action of damping can not be applied. Nonlinear terms couple the dynamics of small scales to the large ones and provide the mechanism for energy transfer from the fluctuating component of the spectrum to the mean one. This mean spectral component, absent in purely two-dimensional turbulence , is controlled by the shape of the bottom boundary, which breaks the isotropy of the system, and characterizes the structure of the flow pattern.
Financial support from CICYT (AMB95-0901-C02-01-CP and MAR98-0840), DGICYT (PB94-1167), and from the MAST program MATTER MAS3-CT96-0051 (EU) is greatly acknowledged.
# Transitions between metastable states in silica clusters
## I Introduction
The theoretical investigation of the properties of disordered systems is a very difficult task, because the lack of simmetry usually prevents analytical approaches. Many authors have found it convenient to rely on the concept of potential energy landscape in the configuration space (see e.g. Frauenfelder et al. 1991, Berry 1993, Heuer and Silbey 1993, Mohanty et al. 1994, Heuer 1997, Angelani et al. 1998, Mousseau and Barkema 1998, Wales et al 1998).
From a purely qualitative point of view, we may imagine the multidimensional analogue of a surface rich in climbing points (first order saddles) and valleys (the various minima). A detailed analysis of the energy landscape topology requires numerical simulation.
In this work we analyze the potential energy landscape of SiO<sub>2</sub> clusters, namely \[SiO<sub>2</sub>\]<sub>20</sub>, \[SiO<sub>2</sub>\]<sub>30</sub>, and \[SiO<sub>2</sub>\]<sub>50</sub>, which are comparable in size to (or larger than) monoatomic and binary systems so far investigated. After a short description of the numerical methods used to move up and down the multidimensional hypersurface, we report our results regarding:
1. the structure of the clusters (resembling that of the bulk solid, although affected by surface effects);
2. the energy distribution of the stationary points of the potential energy function;
3. a physical interpretation of geometrical quantities uselful to characterize transitions between different minima;
4. the identification of tunneling systems.
## II Numerical methods
We simulate the interaction among Si and O atoms with the recently developed pair potential by Van Beest et al. (1990), modified with a very short range contribution (Guissani and Guillot 1996), necessary to avoid unphysical divergencies. It is given by
$$\mathrm{\Phi }_{ij}=\frac{q_iq_j}{r_{ij}}+a_{ij}\mathrm{exp}(-b_{ij}r_{ij})-\frac{c_{ij}}{r_{ij}^6}+4ϵ_{ij}\left[\left(\frac{\sigma _{ij}}{r_{ij}}\right)^{24}-\left(\frac{\sigma _{ij}}{r_{ij}}\right)^6\right],$$
(1)
where both i and j indexes run on all Si and O atoms, and the values of the parameters were determined by Van Beest et al. (1990) and Guissani and Guillot (1996). This potential has already been widely used (with slight modifications) in molecular dynamics studies of the liquid and glassy phases of silica (see e.g. Vollmair et al. 1996, Horbach et al. 1996, Taraskin and Elliott 1997). As our purpose is to simulate clusters, we used free boundary conditions. We adopted the procedure described in detail in Daldoss et al. (1998, 1999) to locate minimum-saddle-minimum triplets, that are the key ingredients to describe jump processes. Schematically the steps are:
* descent towards a minimum by the conjugate gradient method (Press et al. 1986), starting from a randomly chosen configuration;
* ascent towards the vicinity of a saddle, following the eigenvector corresponding to the lowest eigenvalue;
* once the potential energy along the path of the previous item starts decreasing, we take the corresponding configuration (hill-climbing point, Berry 1993) as the starting point for a Newton-Raphson search (Press et al. 1986) of the first order saddle point;
* a descent on both sides of the saddle following again the eigenvector of the minimum eigenvalue.
This technique provides approximate adjacent minima that are subsequently fed into the Newton algorithm for accurate location. With this numerical procedure we found many thousand minimum-saddle-minimum triplets (hereafter double well potentials, DWP).
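A schematic implementation of this search loop, assuming a user-supplied potential energy, gradient and Hessian (the step sizes, iteration counts and method choices below are placeholders; this is a sketch of the procedure described above, not the authors' code):

```python
import numpy as np
from scipy.optimize import minimize

def find_triplet(x0, energy, grad, hess, step=1e-2, n_uphill=500):
    """Locate an approximate minimum-saddle-minimum triplet starting from x0."""
    # 1) descend to a minimum (conjugate gradient)
    m1 = minimize(energy, x0, jac=grad, method='CG').x

    # 2) climb along the lowest-eigenvalue eigenvector until the energy starts decreasing
    x, e_prev = m1.copy(), energy(m1)
    for _ in range(n_uphill):
        w, v = np.linalg.eigh(hess(x))
        x = x + step * v[:, 0]             # follow the softest mode uphill
        e = energy(x)
        if e < e_prev:                      # hill-climbing point reached
            break
        e_prev = e

    # 3) refine the first-order saddle with Newton-Raphson on the gradient
    s = x.copy()
    for _ in range(50):
        s = s - np.linalg.solve(hess(s), grad(s))

    # 4) descend on both sides of the saddle along the softest mode
    w, v = np.linalg.eigh(hess(s))
    m_a = minimize(energy, s + step * v[:, 0], jac=grad, method='CG').x
    m_b = minimize(energy, s - step * v[:, 0], jac=grad, method='CG').x
    return m_a, s, m_b
```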
## III Results
### A The Structure of the clusters
We obtain information on the structure of the cluster by the radial distribution function $`g(r)`$, defined in the same way as for extended systems: it shows the bond lengths, the short range order, and also allows for the calculation of the co-ordination number. The results are reported in Fig.1, which was obtained by averaging over the various configurations obtained (both minima and saddles) to have good statistics. We have plotted the partial $`g(r)`$ for the different bonds (Si-Si, Si-O, O-O) and cluster sizes. The resulting bond lengths obtained by the main peaks are in good agreement with the experimental data on bulk structures (see table I) and with simulations with periodic boundary conditions (Taraskin and Elliott 1997). Nevertheless we observe smaller peaks on the low-distance side of the nearest-neighbour peak in the Si-Si and O-O bonds. In our opinion this is due to surface effects: in fact it appears from the figure that these anomalies tend to disappear with increasing system size. Very satisfactory is also the result concerning the co-ordination number: it is 4 for Si-O bond, in agreement with the tetrahedral structure typical of SiO<sub>2</sub>. This is also confirmed by the analysis of Si-Si, also equal to 4, and O-O, equal to 6.
### B Energy distributions of stationary points
We now analyze the topological features of the potential energy hypersurface; at first we present the energy distribution of the stationary points (saddles and minima) that form double wells. In Fig. 2 we report the three energy distributions for saddles, lower minima and upper minima of the various DWP, respectively; only the result for 150 atoms is presented, the situation being quite analogous in the other two cases. We note that the distributions are very similar, with defined peaks superimposed onto a broad background; the peaks are due to the fact that certain configurations are favoured for given values of the binding energy, and so they act like attraction basins during the descent towards the minima. The presence of these structures in the distribution is more evident with increasing system size: in \[SiO<sub>2</sub>\]<sub>20</sub> the curve looks smoother (Brangian 1998). It should be remarked that Fig. 2 does not refer to the total distribution of stationary points, because their number increases exponentially with the system size: we sampled the configuration space only partially, and we are quite confident that we have not introduced systematic errors in this sampling. The only thing we can note is that maybe our numerical investigation is in some way prevented from reaching very low-lying (that is, crystalline-like) configurations: since we are not interested in a careful thermodynamical analysis (Doye and Wales 1998), but only in transitions between stable disordered states, we think this fact does not constitute a serious drawback.
### C Topological features of double well potentials
Double well potentials can be characterized by many quantities: the first one is the asymmetry $`\mathrm{\Delta }`$, that is the energy difference between the two connected minima of a DWP. This parameter is essential for the identification of candidate two level systems (TLS), which require a value of $`\mathrm{\Delta }`$ of the order of $`\sim `$1 K or less (however, as we will see, this condition is not sufficient to identify a TLS). Our results indicate that the asymmetries are distributed exponentially, most of them being lower than 5000 K; the distribution is not sensitive to the system size. Equally important are the energy barriers $`V`$, i.e. the energy differences between a minimum and the corresponding saddle. Of course for every DWP there are two values of $`V`$, one for the relaxation process and one for the activation. The distribution of the relaxation barriers is similar in shape to that of the $`\mathrm{\Delta }`$’s, but on average the values of $`V`$ are smaller, being significantly present only up to 1000 K. There seems to be little correlation between the asymmetry and the barrier (both for the activation and relaxation).
We have evaluated also the (mass weighted) euclidean distance between pairs of minima, defined as
$$dist(a,b)=\left[\underset{i}{}\frac{m_i}{\stackrel{~}{m}}|𝐫_{i,a}𝐫_{i,b}|^2\right]^{1/2},$$
(2)
where $`a`$ and $`b`$ are the two minima, $`m_i`$ the single atom mass, $`𝐫_{i,\alpha }`$ its position in the $`\alpha `$ configuration, and $`\stackrel{~}{m}=_im_i/N`$. In Fig. 3 we report the relative distributions. The distributions do not extend very much beyond $``$10$`\sigma `$ and present a maximum at $`2\sigma `$.
Another very interesting parameter to consider is the participation number defined as
$$N_{\mathrm{part}}=\underset{i}{\sum }\frac{d_i^2}{d_{\mathrm{max}}^2}.$$
(3)
Here $`d_i`$ refers to the atom-atom distances in (2) and $`d_{max}`$ is the distance of the atom that moves most during the transition; the participation number gives an idea of the number of atoms involved in a transition. In Fig. 4(a) we report this quantity for the three cases studied; the same quantity normalized to the number of particles constituting the clusters is reported in Fig. 4(b). We see that $`N_{\mathrm{part}}`$ is a nearly scaling quantity with $`N`$, indicating that at least an appreciable part of the atoms that move in the transitions belong to the bulk.
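For concreteness, the two geometric quantities (2) and (3) can be evaluated from a pair of configurations as in the short sketch below (positions and masses are user-supplied placeholders):

```python
import numpy as np

def distance_and_participation(r_a, r_b, masses):
    """r_a, r_b: (N, 3) arrays of atomic positions; masses: length-N array."""
    m_mean = masses.mean()                                 # average mass, Eq. (2)
    d = np.linalg.norm(r_a - r_b, axis=1)                  # per-atom displacements d_i
    dist = np.sqrt(np.sum(masses / m_mean * d**2))         # mass-weighted euclidean distance
    n_part = np.sum(d**2 / d.max()**2)                     # participation number, Eq. (3)
    return dist, n_part
```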
We can make use of multidimensional transition state theory (for a review see Hanggi 1985) to estabilsh a link between the potential energy landscape and the (classical) relaxation dynamics of the system. Under appropriate conditions (Hanggi 1985) the classical, thermally activated transition probability between two metastable minima is given by
$`\mathrm{\Gamma }=\nu ^{*}\mathrm{exp}\left(-{\displaystyle \frac{E_b}{\kappa _BT}}\right)`$ (4)
$`\nu ^{*}={\displaystyle \frac{\underset{i=1}{\overset{N}{\prod }}\nu _i^M}{\prod _{j=1}^{N-1}\nu _j^S}},`$ (5)
where $`\nu ^{*}`$ is an effective frequency that takes into account the effect of all the degrees of freedom on the one which forces the transition. This frequency can be rewritten as
$$\nu ^{*}=\frac{\nu _0}{R}$$
(6)
with $`\nu _0`$ the lowest frequency of the dynamical matrix in the starting minimum, while $`R`$ is the product of all the $`N-1`$ positive eigenvalues of the dynamical matrix at the saddle point, divided by the corresponding product at the minimum. In Fig. 5 we show the value of this entropic factor $`R`$ (which enhances or reduces the transition probability), versus the energy barrier value, both for the relaxation and the activation transitions. This plot evidences a rather marked correlation between $`R`$ and the barrier height, and in most cases $`R<1`$ or even $`R\ll 1`$.
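The rate (4)–(5) is a simple ratio of products of normal-mode frequencies once these are known at the minimum and at the saddle; a minimal sketch (units, the value of the Boltzmann constant and the input arrays are left to the user):

```python
import numpy as np

def transition_rate(nu_min, nu_saddle, E_b, T, kB=1.0):
    """nu_min: the N mode frequencies at the starting minimum;
    nu_saddle: the N-1 positive frequencies at the saddle; E_b: barrier height."""
    # effective attempt frequency nu* of Eq. (5), computed with logs for numerical safety
    log_nu_eff = np.sum(np.log(nu_min)) - np.sum(np.log(nu_saddle))
    return np.exp(log_nu_eff) * np.exp(-E_b / (kB * T))
```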
### D Two-level systems (TLS)
We describe now the most difficult part of our work, that is the selection of two-level systems. Following the energy landscape paradigm, these are DWP that imply purely quantum mechanical relaxation processes. The calculation has been carried out by assuming the validity of the 1D Wentzel-Kramers-Brillouin (WKB) approximation (Froman and Froman 1965, Landau and Lifchitz 1967; Schiff 1968; a discussion on the validity of this approximation in the case of Ar clusters, as well as on the effects to be expected when it is released, can be found in Daldoss et al. 1999): the splitting of the ground state is given in terms of the action integral between the two minima. In principle, in order to apply the WKB procedure it would be necessary to find the least action path, i.e. the classical path that takes from one minimum to the other and minimizes the action integral; this involves rather heavy numerical calculations. As a starting point, we decided to use the path that takes from one minimum to the other, and that is defined at each point by the direction of the minimum eigenvector, as described in Section II. Work on the minimization of the action integral is in progress (Brangian et al. 1999). It should be stressed that as cumbersome as this procedure may look, it is probably the simplest way to get a quantitative estimate of the tunneling splitting. It should also be mentioned that, in agreement with previous works (Heuer and Silbey 1993, Heuer 1997, Daldoss et al. 1998), we find roughly one TLS for 1000 DWP’s, which implies a very extensive search strategy.
We have performed an a posteriori test on the reliability of the use of the 1D WKB scheme. The main hypothesis is that, along the chosen 1D path (the least-action or a very close one), the degrees of freedom other than the considered one are decoupled from it: in this case it is reasonable to expect that the Schroedinger equation may be (nearly) factorized into 3$`N`$ mutually independent equations, a condition for the applicability of the WKB scheme in many dimensions (Schiff 1968). If this factorization should actually take place, the eigenvalues of the dynamical matrix (which are proportional to the curvatures of the hypersurface) other than the lowest one should remain constant along the chosen path. The departure of these eigenvalues from constancy gives a measure (though a qualitative one) of the invalidity of the 1D scheme. In Fig. 6 we show, as an example, the results for two TLS belonging to clusters of 90 atoms. In one case (Fig. 6(b)) the approximation is satisfied in a very good way; on the contrary, in the other case (Fig. 6(a)) there is appreciable mixing of the low energy eigenvectors. These results are in qualitative agreement with those relative to Ar clusters (Daldoss et al, 1999); they imply that the actual structure of TLS in disordered systems is probably much more complex than expected, since many-dimensional effects seem to play important roles.
## IV Conclusions
We have reported preliminary results of an extensive investigation on the properties of the potential energy landscape in SiO<sub>2</sub> clusters of three different sizes (60,90, and 150 atoms); the aim of this research is to find connections between the properties of the landscape and the high- and low-temperature relaxation dynamics. By analyzing the structure of the clusters and the topological features of their energy landscape (and in particular of its minima and first-order saddle points), we have identified tunneling centres and studied the conditions of validity of the WKB approximation, which allows a quantitative estimate of the tunneling splitting.
ACKNOWLEDGMENTS
This work was supported in part by the Parallel Computing Initiative of the INFM.
REFERENCES
Angelani, L., Parisi, G., Ruocco, G., and Viliani, G., 1998, Phys. Rev. Lett., 81, 4648.
Berry, R.S., 1993, Chem. Rev., 93, 2379.
Brangian, C., 1998, Thesis (University of Trento).
Brangian, C., Pilla, O., and Viliani, G., 1999, to be published.
Daldoss, G., Pilla, O., and Viliani, G., 1998, Phil. Mag. B, 77, 689.
Daldoss, G., Pilla, O., Viliani, G., Brangian, C., and Ruocco, G., 1999, to appear on Phys. Rev. B.
Doye, J.P.K., and Wales D.J., 1998 Phys. Rev. Lett., 80, 1357.
Frauenfelder, H., Sligar, S.G., and Wolynes, P.G., 1991, Science, 254, 1594.
Froman, N., and Froman, O.O., 1965, JWKB Approximation (North-Holland: Amsterdam).
Guissani, Y., and Guillott, B., 1996, J. Chem. Phys., 104, 7633.
Hanggi, P., 1986, J. Stat. Phys., 42, 105.
Heuer, A., 1997, Phys. Rev. Lett., 78, 4051.
Heuer, A., and Silbey, R.J., 1993, Phys. Rev. Lett., 70, 3911.
Horbach, J., Kob, W., Binder, K., and Angell, C.A., 1996, Phys. Rev. E, 54, R5827.
Landau, L., and Lifchitz, E., 1967, Mécanique Quantique, chapter 7 (MIR: Moscow).
Mohanty, U., Oppenheim, I., and Taubes, C.H., 1994, Science, 266, 425.
Mousseau, N., and Barkema, G.T., 1998, Phys. Rev. E, 57, 2419.
Press, W.H., Flannery, B.P., Teukolsky, S.A., and Vetterling, W.T., 1986, Numerical Recipes (Cambridge University Press: Cambridge).
Schiff, L.J., 1968, Quantum mechanics (McGraw-Hill: New York).
Taraskin, S.N., and Elliott, S.R., 1997, Phys. Rev. B, 56, 8605.
Van Beest, B.W.H., Kramer, G.J., and Van Santen, R.A., 1990, Phys. Rev. Lett., 64, 1955.
Vollmair, K., Kob, W., and Binder, K., 1996 Phys. Rev. B 54, 15808.
Wales, D.J., Miller, M.A., and Walsh, T.R., 1998, Nature, 394, 758.
FIGURE CAPTIONS
Fig. 1. Radial distribution function $`g(r)`$ in clusters of three different sizes. Besides the three main peaks, which are also found in bulk systems, we notice other peaks for the O-O and Si-Si bonds, probably due to the presence of surface co-ordination defects.
Fig. 2. Energy distribution for saddles, upper and lower minima in clusters of 150 atoms.
Fig. 3. Distribution of euclidean distances between connected minima, in units of $`\AA `$ and $`\sigma =1.6\AA `$, i.e. the Si-O bond length.
Fig. 4. Participation number: the $`y`$-axis units are chosen such that the integral of each curve is equal to 1. The bottom plot refers to the participation ratio normalized to the number of atoms.
Fig. 5. Entropic ratio $`R`$ as a function of the barrier height for $`N=`$60, 90, and 150.
Fig. 6. Variation of the 10 lowest eigenvalues of the dynamical matrix along the minimum eigenvalue path (see text) in two TLS of clusters of 90 atoms. The minima are arbitrarily assigned the $`\pm `$1 values of the coordinate along the path.
|
no-problem/9905/adap-org9905001.html
|
ar5iv
|
text
|
# Extremal Coupled Map Lattices
\[
## Abstract
We propose a model for co-evolving ecosystems that takes into account two levels of description of an organism, for instance genotype and phenotype. Performance at the macroscopic level forces mutations at the microscopic level. These, in turn, affect the dynamics of the macroscopic variables. In some regions of parameter space, the system self-organises into a state with localised activity and power law distributions.
PACS: 05.45.+b, 64.60.Lx, 87.10.+e
\]
Complex extended systems showing a lack of scale in their features appear to be widespread in nature, being as diverse as earthquakes , creep phenomena , material fracturing , fluid displacement in porous media , interface growth , river networks and biological evolution . At variance with equilibrium statistical mechanics, these systems do not need any fine tuning of a parameter to be in a critical state. To explain this behaviour, Bak, Tang and Wiesenfeld introduced the concept of self-organised criticality (SOC) through the simple sand-pile model . In recent years, several models with extremal dynamics have been shown to exhibit SOC in the presence of a sufficiently decorrelated signal .
Several years ago, Bak and Sneppen (BS) proposed a SOC model for the co-evolution of species. In the BS model each species occupies a site on a lattice. Each site $`j`$ is assigned a fitness, namely a random number between $`0`$ and $`1`$. At each time step in the simulation the smallest fitness is found. Then the fitnesses of the minimum and of its nearest neighbours are updated according to the rule
$$f_{n+1}=F(f_n)$$
(1)
that assigns a new fitness $`f_{n+1}`$ at time $`n+1`$ to the chosen lattice site. This corresponds to the extinction of the least fit species and the impact of that extinction on the ecosystem. Indeed, in the original BS model, the function $`F`$ is just a random function with a uniform distribution between 0 and 1. The system reaches a stationary critical state in which the distribution of fitnesses is zero below a certain threshold and uniform above it (the actual value of the threshold depends on the updating rule). It has been shown that the exact nature of the updating rule is not relevant. Indeed, the use of a chaotic map instead of a random update preserves the universality class (even if the final distribution may be altered).
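For concreteness, a minimal numerical sketch of the BS rule just described might look as follows (the lattice size, number of steps and seed are illustrative choices, not parameters taken from the original work):

```python
import numpy as np

def bak_sneppen(n_sites=200, n_steps=100_000, seed=0):
    """Minimal Bak-Sneppen sketch: at each step the smallest fitness and its two
    nearest neighbours (periodic boundaries) are replaced by fresh random numbers."""
    rng = np.random.default_rng(seed)
    f = rng.random(n_sites)
    active = np.empty(n_steps, dtype=int)
    for t in range(n_steps):
        i = int(np.argmin(f))              # extremal rule: locate the least fit species
        active[t] = i
        for j in (i - 1, i, (i + 1) % n_sites):
            f[j] = rng.random()            # random update, F(f_n) uniform in (0, 1)
    return f, active

fitness, active_sites = bak_sneppen()
# after the transient the fitnesses accumulate above a threshold (close to 0.67 for this rule)
print("fraction of sites above 0.6:", float(np.mean(fitness > 0.6)))
```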
As a result of the dynamical rules, the BS system exhibits sequences of causally connected evolutionary events called avalanches . The number of avalanches $`N`$ follows a power law distribution $`N(s)\sim s^{-\tau }`$ where $`s`$ is the size of the avalanche and $`\tau \simeq 1.07`$ . Other quantities also exhibit a power law behaviour with their own critical exponents. Prominent among them are the first and all return time distributions of activity (a site is defined as active when its fitness is the minimum one),
$$P_f(t)\sim t^{-\tau _f},P_a(t)\sim t^{-\tau _a},$$
(2)
where $`\tau _f\simeq 1.58`$ and $`\tau _a\simeq 0.42`$ .
In the BS model, two basic ingredients are needed for SOC to occur :
i) Order from extremal dynamics (minimum rule).
ii) Disorder from the updating rule (stochastic or otherwise).
An oversimplification in the BS model is apparent: each species is described by a single variable. The minimum rule is applied to this variable, and so is the effect of mutations. In natural evolving systems, however, at least two—interacting—levels of organisation are present, and both play a role in the evolution. Mutations occur at a microscopic level, namely at the molecular level of the genotype. This affects a macroscopic level, defining the phenotype. Natural selection acts on the phenotype, allowing the survival of some and the extinction of others, according to the observed power law distributions.
In this paper we propose a new model, namely an Extremal Coupled Map Lattice (ECML), that takes into account this two-level structure. The ecosystem is represented as an ensemble of $`N`$ species arranged on a one-dimensional lattice. Each species is described by means of a macroscopic variable $`x_i`$ subject to a nonlinear dynamics and coupled to its nearest neighbours. In general, $`x_i`$ can be identified with a population, or some other function of the phenotype. The control parameter $`\lambda _i`$ of the nonlinear dynamics is identified with the microscopic level (genotype). To find the exact function connecting these two levels of description is beyond the scope of this simplified model. For this reason, we chose a logistic map for the independent evolution of $`x_i`$ with $`\lambda _i`$ acting as the nonlinear parameter in the map . With this in mind, the evolution of each species is given by:
$$x_{n+1}^i=(1-ϵ)f(x_n^i)+\frac{ϵ}{2}\left[f(x_n^{i-1})+f(x_n^{i+1})\right]$$
(3)
where $`f(x)`$ is the logistic map $`f(x)=\lambda x(1-x)`$ (in general any chaotic function should do). Each site has its own $`\lambda _i`$, extracted from a fixed distribution $`g(\lambda )`$ . The local coupling of strength $`ϵ_i`$ emulates the ecological interaction between neighbouring species. A general description should include inhomogeneities in $`ϵ`$. Here, for simplicity we limit our analysis to the homogeneous case, $`ϵ_i\equiv ϵ`$.
We consider this CML as the substrate on which evolution takes place. The parameter $`\lambda _i`$, which determines the behaviour of $`x_i`$, is regarded as the microscopic level, the genotype, and is subject to mutation. We suppose that, through mutation, a species is able to alter this parameter to adapt to the environment defined by the collective behaviour. For this we propose an extremal mechanism, akin to the BS model . The species that, at each time step, has the minimum value of $`x`$ is considered the candidate for mutation. Its $`\lambda `$ is replaced by a new value drawn from the distribution $`g(\lambda )`$.
We can summarise these simple dynamical rules as follows (a minimal numerical sketch is given after the list):
1. Evaluate expression (3).
2. Find the site with the absolute minimum fitness on the lattice (this site will be called the “active” site).
3. Change the value of $`\lambda `$ of the active site.
4. Go to step 1.
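A minimal sketch of these rules (one iteration of Eq. (3) followed by the extremal mutation), assuming periodic boundary conditions and a uniform $`g(\lambda )`$ on $`(\lambda _0,\lambda _0+\mathrm{\Delta }\lambda )`$; the lattice size, number of steps and seed below are illustrative choices:

```python
import numpy as np

def ecml_step(x, lam, eps):
    """One iteration of Eq. (3) with periodic boundaries; returns the new field
    and the index of the "active" site (the absolute minimum)."""
    fx = lam * x * (1.0 - x)                     # local logistic map f(x)
    x_new = (1.0 - eps) * fx + 0.5 * eps * (np.roll(fx, 1) + np.roll(fx, -1))
    return x_new, int(np.argmin(x_new))

def run_ecml(n=256, n_steps=50_000, eps=0.1, lam0=3.8, dlam=0.1, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.random(n)                            # macroscopic variables (phenotypes)
    lam = lam0 + dlam * rng.random(n)            # control parameters (genotypes) from g(lambda)
    activity = np.empty(n_steps, dtype=int)
    for t in range(n_steps):
        x, i = ecml_step(x, lam, eps)            # steps 1 and 2 of the rules above
        activity[t] = i
        lam[i] = lam0 + dlam * rng.random()      # step 3: mutate the genotype of the active site
    return x, lam, activity
```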
In Fig. 1 we show the space-time picture of the active sites for four different sets of parameters. In all cases we take $`g(\lambda )`$ uniform within the range $`(\lambda _0,\lambda _0+\mathrm{\Delta }\lambda )`$ and zero outside, with $`\mathrm{\Delta }\lambda =0.1`$. These four cases can be considered paradigmatic of the behaviours observed in a wide region of parameter space. Fig. 1(a) and (b) correspond to a rather high value of the coupling, $`ϵ=0.5`$. Fig. 1(c) and (d) have, instead, $`ϵ=0.1`$. The value of $`\lambda _0`$ is $`3.9`$ in (b) and (d), $`3.7`$ in (a), and $`3.8`$ in (c). Depending on the value of the parameters, the behaviour of the successive minima can be classified in one of the following four categories.
* discontinuous lines at $`i=\text{constant}`$: The activity is concentrated in a few sites of the system, and in first approximation the same sites remain active as time goes by.
* uniform: The activity is spread all over the system without any apparent order.
* “worms”: The activity is mostly concentrated in a few sites of the system. The activity, however, wanders as time goes by following a random pattern.
* clusterized: The activity is spread all over the system, but in clusters. At variance with (b), at any given time, one sees regions in space where there is no activity.
Let us discuss in more detail Fig. 1(c). The worms are created during the transient. Once created, each worm wanders in space till it encounters a second worm. At that moment they annihilate each other. We have observed that the inactive regions correspond to a periodic pattern in space . The worms, in turn, correspond to defects in the periodicity. Eventually, the last two surviving worms merge, and from there on the activity remains concentrated on a single “fat” worm.
For each one of these cases, we have computed the first return time distribution. This corresponds to the distribution of times between consecutive activity in the same site. We observe that when the successive minima are distributed uniformly in space, the first return time distribution is close to an exponential (see Fig. 2(b)). As soon as the activity is not uniformly distributed, be it lines or irregular clusters, the first return time distribution exhibits a power law decay (asymptotically in case (d)).
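One possible way of accumulating the first return time distribution from the record of active sites (for instance the `activity` array of the sketch above) is the following:

```python
import numpy as np

def first_return_histogram(activity, n_sites):
    """hist[tau] = number of first returns of the activity to the same site after tau steps."""
    last_visit = np.full(n_sites, -1)
    waits = []
    for t, i in enumerate(activity):
        if last_visit[i] >= 0:
            waits.append(t - last_visit[i])
        last_visit[i] = t
    return np.bincount(waits)

# A power law P(tau) ~ tau^(-alpha) shows up as a straight line when the
# normalised histogram is plotted on doubly logarithmic axes.
```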
To make this picture more quantitative, we present in Fig. 3 a phase diagram, in $`\lambda `$-$`ϵ`$ space ($`\mathrm{\Delta }\lambda `$ is held constant). Phase I is characterised by the presence of power law behaviour in the first return time distribution, $`P(\tau )\sim \tau ^{-\alpha }`$. The value of the exponent of this distribution is not constant within the region. Phase II, on the other hand, is characterised by a non power law behaviour (close to exponential) in the first return time distribution (see Fig. 2(b)). This is tantamount to saying that in phase I there is clusterization of the activity, which is absent in phase II. In Fig. 3 we also indicate a third phase (III), which corresponds to an asymptotic power law (see Fig. 2(d)), with an exponent that is different from that in region I. The border of this region (that we show only schematically) is not as well defined as the border between regions I and II.
This picture holds, qualitatively, even in the absence of the evolutionary dynamics. Indeed, if in every time step we track the position of the minimum, but do not change the value of its $`\lambda `$ (this corresponds, effectively, to setting $`\mathrm{\Delta }\lambda =0`$), we still obtain the four abovementioned regimes and the corresponding first return time distributions. As $`\mathrm{\Delta }\lambda `$ changes, the value of the exponent in the first return time distribution (in region I) changes as well. Moreover, the area of region II increases with increasing $`\mathrm{\Delta }\lambda `$.
The presence of evolutionary dynamics (i.e. $`\mathrm{\Delta }\lambda >0`$) has yet another effect. The distribution of $`\lambda `$ evolves in time, from an originally uniform to a stationary non-uniform one. Indeed, this stationary distribution is peaked close to $`\lambda _0`$ and monotonically decreases for larger values of $`\lambda `$: the system has “self-organized”. Since the extremal dynamics favors higher values of $`x`$, and the higher the value of $`\lambda `$ the more likely small values of $`x`$ are , large values of $`\lambda `$ are more likely to be updated. The exact shape of this distribution depends on the values of the parameters .
Summarizing, the dynamics of extremal coupled maps exhibits a behavior usually associated with criticality:
* the first return time distribution is a power law (even in the absence of extremal dynamics)
* there is self-organization in the $`\lambda `$ space
* the active regions are localized in space, very much like the avalanches in SOC models (BS).
It is worth emphasizing that this behavior (except the self-organization in $`\lambda `$ space) remains even in the absence of extremal dynamics. In particular, the first return time distribution exhibits a power law behavior. This implies that power law distributions are compatible with non-extremal systems, as has been previously observed by Newman and coworkers , in the context of noise-driven systems.
Both SOC and CML models have been used separately in the past to describe evolving ecosystems. To our knowledge, the model presented here is the first synthesis of both ideas that includes the level structure (genotype/phenotype) of real organisms.
# Relativistic quantum field inertia and vacuum field noise spectra
## 1 Introduction
The legendary gedanken discovery of classical free fall universality by Galilei in the first instants of modern science is now, for everybody, an exciting early textbook story (the first actual experiments were arranged by Newton in June 1710 at St. Paul’s in London). Starting with the neutron beam experiments of Dabbs et al in 1965, non-relativistic quantum free falls have also been of much interest. As is known, ‘free falls’ of quantum wavefunctions (wavepackets), i.e. Schroedinger solutions in a homogeneous gravitational field, are mass dependent and therefore closer to Aristotle’s fall. Thus, a renewed quest for the universality features of free fall type phenomena in the quantum realm has emerged in recent epochs. Moreover, at the present time, there are interesting insights into the problem of relativistic quantum field inertia, which have been gained as a consequence of the Hawking effect and the Unruh effect . This has substantially helped to display the ‘imprints’ of gravitation in relativistic quantum physics . Natural questions in this context, which I will address rather informally in this work, are: (i) What does ‘free fall’ really mean in relativistic quantum field theories? (ii) How should one formulate EPs (equivalence principles) for quantum field states? (iii) What are the restrictions on quantum field states imposed by the EPs?
The method of quantum detectors proved to be very useful for the understanding of the quantum field inertial features. New ways of thinking of quantum fluctuations have been promoted and new pictures of the vacuum states have been provided, of which the landmark one is the heat bath interpretation of the Minkowski vacuum state from the point of view of a uniformly accelerating non-inertial quantum detector. Essentially, simple, not to say toy, model particles (just two energy levels separated by $`E`$ and monopole form factor) commonly known as Unruh-DeWitt (UDW) quantum detectors of uniform, one-dimensional proper acceleration $`a`$ in Minkowski vacuum are immersed in a scalar quantum field ‘heat’ bath of temperature
$$T_a=\frac{\hbar }{2\pi ck}a,$$
(1)
where $`\hbar `$ is the reduced Planck constant, $`c`$ is the speed of light in vacuum, and $`k`$ is Boltzmann’s constant. A formula of this type was first obtained by Hawking in a London Nature Letter of March 1974 on black hole explosions , then in 1975 by Davies in a moving mirror model , and finally settled by Unruh in 1976 . For first order corrections to this formula one can see works by Reznik . This Unruh temperature is proportional to the lineal uniform acceleration, and the ratio of such noninertial quantum field ‘heat’ effects to the acceleration is fixed by the numerical values of the universal constants to the very low value of $`4\times 10^{-23}`$ K per $`\mathrm{cm}/\mathrm{s}^2`$ (in cgs units). In other words, the huge acceleration of $`2.5\times 10^{22}`$ $`\mathrm{cm}/\mathrm{s}^2`$ can produce a black body spectrum of only 1 K. In the (radial) case of Schwarzschild black holes, using the surface gravity $`\kappa =c^4/4GM`$ instead of $`a`$, one immediately gets the formula for their Hawking temperature, $`T_\kappa `$.
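As a quick numerical check of Eq. (1) and of the order of magnitude quoted above (SI units; the constant values are standard numbers, not taken from this paper):

```python
import math

hbar = 1.054571817e-34     # J s
c = 2.99792458e8           # m / s
k_B = 1.380649e-23         # J / K

def unruh_temperature(a):
    """Unruh temperature (in K) of Eq. (1) for a proper acceleration a given in m/s^2."""
    return hbar * a / (2.0 * math.pi * c * k_B)

# T/a is about 4e-21 K per (m/s^2), i.e. 4e-23 K per (cm/s^2) in cgs units, so an
# acceleration of 2.5e22 cm/s^2 = 2.5e20 m/s^2 corresponds to roughly 1 K.
print(unruh_temperature(2.5e20))
```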
I also recall that, according to an idea popularized by Smolin , one can think of zero-point fluctuations, gravitation and inertia as the only three universal phenomena of nature. However, one may also think of inertia as related to those peculiar collective, quantum degrees of freedom which are the vacuum expectation values (vev’s) of Higgs fields. As we know, these vev’s do not follow from the fundamentals of quantum theory. On the other hand, one can find papers claiming that inertia can be assigned to a Lorentz type force generated by electromagnetic zero-point fields . Moreover, the Rindler condensate concept of Gerlach is quite well known . Amazingly, one can claim that there exist completely coherent zero-point condensates, like the Rindler–Gerlach one, which entirely mimic the Planck spectrum, without any renormalization, as is the case for the Casimir effect.
In this work, I will stick to the standpoint based on the well-known concept of vacuum field noise (VFN) (or vacuum excitation spectrum from the point of view of quantum UDW detectors), because in my opinion this not only provides a clear origin of the relativistic thermal effects, it also avoids uncertain generalizations, and it serves one of my purposes herein. This is to shed more light on the connection between the stationary VFNs and the equivalence principle statements for scalar field theories.
## 2 Survey of quantum detector EPs
The Unruh picture can be used for interpreting Hawking radiation in Minkowski space . In order to do that, one has to consider the generalization(s) of the EP to quantum field processes. A number of authors have discussed this important issue with varying degrees of detail and meaning, and with some debate . Nikishov and Ritus raised the following objection to the heat bath concept. Since absorption and emission processes occur in finite space time regions, the application of the local principle of equivalence requires a constant acceleration over those regions. However, the space-time extension of the quantum processes is in general of the order of the inverse acceleration. In Minkowski space it is not possible to create homogeneous and uniform gravitational fields having accelerations of the order of $`a`$ in spacetime domains of the order of the inverse of $`a`$.
Grishchuk, Zel’dovich, and Rozhanskii, and also Ginzburg and Frolov wrote extensive reviews on the formulations of the quantum field equivalence principle (QFEP) . One should focus on the response functions of quantum detectors, in particular the UDW two-level monopole detector in stationary motion. In the asymptotic limit this response function is the integral of the quantum noise power spectrum. Or, since the derivative of the response function is the quantum transition rate, the latter is just the measure of the vacuum power spectrum along the chosen trajectory (worldline) and in the chosen initial (vacuum) state. This is valid only in the asymptotic limit and more realistic cases require calculations in finite time intervals . Denoting by $`R_{M,I}`$, $`R_{R,A}`$, and $`R_{M,A}`$ the detection rates with the first subscript corresponding to the vacuum (either Minkowski or Rindler) and the second subscript corresponding to either inertial or accelerating worldline, one can find for the UDW detector in a scalar vacuum that $`R_{M,I}=R_{R,A}`$ expressing the dissipationless character of the vacuum fluctuations in this case, and a thermal factor for $`R_{M,A}`$ leading to the Unruh heat bath concept. In the case of a uniform gravitational field, the candidates for the vacuum state are the Hartle-Hawking ($`HH`$) and the Boulware ($`B`$) vacua. The $`HH`$ vacuum is defined by choosing incoming modes to be those of positive frequency with respect to the null coordinate on the future horizon and outgoing modes as positive frequency ones with respect to the null coordinate on the past horizon, whereas the $`B`$ vacuum has the positive frequency modes with respect to the Killing vector which makes the exterior region static. For an ideal, uniform gravitational field the $`HH`$ vacuum can be thought of as the counterpart of the Minkowski vacuum, while the $`B`$ vacuum is the equivalent of the Rindler vacuum. Then, the QFEP can be formulated in one of the following ways
Quantum detector-QFEP: $`HHM`$ equivalence
i) The detection rate of a free-falling UDW detector in the HH vacuum is the same as that of an inertial UDW detector in the M vacuum.
ii) A UDW detector at rest in the HH vacuum has the same detection rate as a uniformly accelerated detector in the M vacuum.
Quantum detector-QFEP: $`BR`$ equivalence
iii) A UDW detector at rest in the B vacuum has the same detection rate as a uniformly accelerated detector in the R vacuum.
iv) A free-falling UDW detector in the B vacuum has the same detection rate as an inertial detector in the R vacuum.
Let us record one more formulation due to Kolbenstvedt
Quantum detector-QFEP: Kolbenstvedt
A detector in a gravitational field and an accelerated detector will behave in the same manner if they feel equal forces and perceive radiation baths of identical temperature.
In principle, since the Planck spectrum is Lorentz invariant (and even conformally invariant), its presence in equivalence statements is easy to accept if one recalls that the Einstein EP requires local Lorentz invariance. The linear connection between ‘thermodynamic’ temperature and one-dimensional, uniform, proper acceleration, which is also valid in some important gravitational contexts (Schwarzschild black holes, de Sitter cosmology), is indeed a fundamental relationship, because it allows for an absolute meaning of quantum field effects in such ideal noninertial frames, as soon as one recognizes thermodynamic temperature as the only absolute, i.e., fully universal, energy-type physical concept.
## 3 The six types of stationary scalar VFNs
In general the scalar quantum field vacua are not stationary stochastic processes for all types of classical trajectories on which the UDW detector moves; when they are, the detector registers a stationary vacuum excitation spectrum (SVES). Nevertheless, lineal acceleration is not the only case with that property, as was shown by Letaw, who extended Unruh’s considerations, obtaining six types of worldlines with SVES for UDW detectors (SVES-1 to SVES-6, see below). These worldlines are solutions of some generalized Frenet equations on which the condition of constant curvature invariants is imposed, i.e., constant curvature $`\kappa `$, torsion $`\tau `$, and hypertorsion $`\nu `$, respectively. Notice that one can employ other frames, such as the Newman-Penrose spinor formalism as Unruh did recently , but the Serret-Frenet one is in overwhelming use throughout physics. The six stationary cases are the following
1. $`\kappa =\tau =\nu =0`$, (inertial, uncurved worldlines). SVES-1 is a trivial cubic spectrum
$$S_1(E)=\frac{E^3}{4\pi ^2}$$
(2)
i.e., as given by a vacuum of zero point energy per mode $`E/2`$ and density of states $`E^2/2\pi ^2`$.
2. $`\kappa \ne 0`$, $`\tau =\nu =0`$, (hyperbolic worldlines). SVES-2 is Planckian allowing the interpretation of $`\kappa /2\pi `$ as ‘thermodynamic’ temperature. In the dimensionless variable $`ϵ_\kappa =E/\kappa `$ the vacuum spectrum reads
$$S_2(ϵ_\kappa )=\frac{ϵ_\kappa ^3}{2\pi ^2(e^{2\pi ϵ_\kappa }-1)}$$
(3)
3. $`|\kappa |<|\tau |`$, $`\nu =0`$, $`\rho ^2=\tau ^2-\kappa ^2`$, (helical worldlines). SVES-3 is an analytic function corresponding to case 4 below only in the limit $`\kappa \gg \rho `$
$$S_3(ϵ_\rho )\stackrel{\kappa /\rho \to \infty }{\longrightarrow }S_4(ϵ_\kappa )$$
(4)
Letaw plotted the numerical integral $`S_3(ϵ_\rho )`$, where $`ϵ_\rho =E/\rho `$ for various values of $`\kappa /\rho `$.
4. $`\kappa =\tau `$, $`\nu =0`$, (the spatially projected worldlines are the semicubical parabolas $`y=\frac{\sqrt{2}}{3}\kappa x^{3/2}`$ containing a cusp where the direction of motion is reversed). SVES-4 is analytic, and since there are two equal curvature invariants one can use the dimensionless energy variable $`ϵ_\kappa `$.
$$S_4(ϵ_\kappa )=\frac{ϵ_\kappa ^2}{8\pi ^2\sqrt{3}}e^{-2\sqrt{3}ϵ_\kappa }$$
(5)
It is worth noting that $`S_4`$ is rather close to the Wien-type spectrum $`S_W\sim ϵ^3e^{-\mathrm{const}.ϵ}`$ (a small numerical evaluation of $`S_2`$ and $`S_4`$ is sketched after this list).
5. $`|\kappa |>|\tau |`$, $`\nu =0`$, $`\sigma ^2=\kappa ^2-\tau ^2`$, (the spatially projected worldlines are catenaries, i.e., curves of the type $`x=\kappa \mathrm{cosh}(y/\tau )`$). In general, SVES-5 cannot be found analytically. It is an intermediate case, which for $`\tau /\sigma \to 0`$ tends to SVES-2, whereas for $`\tau /\sigma \to \infty `$ tends toward SVES-4
$$S_2(ϵ_\kappa )\stackrel{\tau /\sigma \to 0}{\longleftarrow }S_5(ϵ_\sigma )\stackrel{\tau /\sigma \to \infty }{\longrightarrow }S_4(ϵ_\kappa )$$
(6)
6. $`\nu \ne 0`$, (rotating worldlines uniformly accelerated normal to their plane of rotation). SVES-6 forms a two-parameter set of curves. These trajectories are a superposition of the constant linearly accelerated motion and uniform circular motion. The corresponding vacuum spectra have not been calculated by Letaw even numerically.
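For orientation, the two closed-form spectra quoted above, Eqs. (3) and (5), can be evaluated directly; a minimal sketch in the dimensionless energy variable is:

```python
import numpy as np

def sves2(e):
    """Planckian spectrum of Eq. (3), e = E/kappa."""
    return e**3 / (2.0 * np.pi**2 * np.expm1(2.0 * np.pi * e))

def sves4(e):
    """Spectrum of Eq. (5) for the kappa = tau, nu = 0 worldlines."""
    return e**2 * np.exp(-2.0 * np.sqrt(3.0) * e) / (8.0 * np.pi**2 * np.sqrt(3.0))

e = np.linspace(0.01, 2.0, 400)
s2, s4 = sves2(e), sves4(e)
# Normalising both curves to unit area, e.g. s2 /= np.trapz(s2, e), makes the
# Wien-like character of S_4 relative to the Planckian S_2 easy to visualise.
```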
Thus, only the hyperbolic worldlines having just one nonzero curvature invariant allow for a Planckian SVES and for a strictly one-to-one mapping between the curvature invariant $`\kappa `$ and the ‘thermodynamic’ temperature. On the other hand, in the stationary cases it is possible to determine at least approximately the curvature invariants, that is the classical worldline on which a quantum particle moves, from measurements of the vacuum noise spectrum.
## 4 Preferred vacua and/or high energy radiometric standards
There is much interest in considering the magnetobremsstrahlung (i.e., not only synchrotron) radiation patterns at accelerators in the aforementioned perspective at least since the works of Bell and collaborators . It is in this sense that a sufficiently general and acceptable statement on the universal nature of the kinematical parameters occurring in a few important quantum field model problems can be formulated as follows
There exist accelerating classical trajectories (worldlines) on which moving ideal (two-level) quantum systems can detect the scalar vacuum environment as a stationary quantum field vacuum noise with a spectrum directly related to the curvature invariants of the worldline, thus allowing for a radiometric meaning of those invariants.
Although this may look like an extremely idealized (unrealistic) formulation for accelerator radiometry, where the spectral photon flux formula of Schwinger is very effective, I recall that Hacyan and Sarmiento developed a formalism similar to the scalar case to calculate the vacuum stress-energy tensor of the electromagnetic field in an arbitrarily moving frame and applied it to a system in uniform rotation, providing formulas for the energy density, Poynting flux and stress of zero-point oscillations in such a frame. Moreover, Mane has suggested the Poynting flux of Hacyan and Sarmiento to be in fact synchrotron radiation when it is coupled to an electron.
Another important byproduct, and actually one of the proposals I put forth in this essay, is the possibility of choosing a class of preferred vacua of the quantum world as all those having stationary vacuum noises with respect to the classical (geometric) worldlines of constant curvature invariants, because in this case one may find some necessary attributes of universality in the more general quantum field radiometric sense, in which the Planckian Unruh thermal spectrum is included as a particularly important case. Of course, much work remains to be done for a more “experimental” picture of highly academic calculations in quantum field theory, but a careful look at the literature shows that there are already definite steps in this direction . One should notice that all the aforementioned scalar quantum field vacua look extremely ideal from the experimental standpoint. Indeed, it is known that only strong external fields can make the quantum electrodynamical vacuum react and show its physical properties, becoming similar to a magnetized and polarized medium, and only by such means one can learn about the physical structure of the QED vacuum. Important results on the relationship between the Schwinger mechanism and the Unruh effect have been reported in recent works .
## 5 Nonstationary VFNs
Though the nonstationary VFNs do not enter statements of equivalence type, they are equally important. Since such noises have a time-dependent spectral content one needs joint time and frequency information, i.e. generalizations of the power spectrum analysis such as tomographical processing and wavelet transform analysis . Alternatively, since in the quantum detector method the vacuum autocorrelation functions are the essential physical quantities, and since according to fluctuation-dissipation theorem(s) (FDT) they are related to the linear (equilibrium) response functions to an initial condition/vacuum, more FDT type work, especially its generalization to the out of equilibrium case, will be useful in this framework. One can hope that effective temperature concepts can be introduced following the reasoning already developed for systems with slow dynamics (glasses) . In fact, there is some progress due to Hu and Matacz in making more definite use of the FDT for vacuum fluctuations. Very recently, Gour and Sriramkumar questioned whether small particles exhibit Brownian motion in the quantum vacuum and concluded that, even though the answer is in principle positive, the effect is extremely small and thus very difficult to detect experimentally.
## 6 Axiomatic QFEPs
At the rigorous, axiomatic level, Hessling published further results on the algebraic quantum field equivalence principle (AQFEP) due to Haag and collaborators. Hessling’s formulation is too technical to be reproduced here. The difficulties are related to the rigorous formulation of local position invariance, a requisite of equivalence, for the singular short-distance behavior of quantum fields, and to the generalization to interacting field theories. Various general statements of locality for linear quantum fields are important steps toward proper formulations of AQFEP. These are nice but technical results coming out mainly from clear mathematical exposition involving algebraic-thermal states, namely the Kubo-Martin-Schwinger states of Hadamard type. Hessling’s AQFEP formulation is based on the notion of quantum states constant up to first order at an arbitrary spacetime point, and means that for these states a certain scaling limit should exist, and moreover a null-derivative condition with respect to a local inertial system around that arbitrary point is to be fulfilled for all n-point functions. For example, the vacuum state of the Klein-Gordon field in Minkowski space with a suitable scaling function fulfills Hessling’s AQFEP. Using as a toy model the asymptotically free $`\varphi ^3`$ theory in six-dimensional Minkowski space, Hessling showed that the derivative condition of his AQFEP is not satisfied by this interacting quantum field theory, which perturbatively is similar to quantum chromodynamics. This failure is due to the running coupling constant, which does not go smoothly to zero in the short-distance limit. If one takes AQFEP or generalizations thereof as a sine qua non criterion for physically acceptable quantum field vacuum states then one has at hand a useful selection guide for even more complex vacua such as the Yang-Mills one or those of quantum gravity .
Since the time-thermodynamics relation in general covariant theories and the connection with Unruh’s temperature and Hawking radiation are an active area of research due to the remarkable correspondence between causality and the modular Tomita-Takesaki theory it would be interesting to formulate in this context some sort of AQFEP statement beyond that of Hessling.
Finally, the work of Faraggi and Matone is to be noted, where a sort of mathematical equivalence postulate is introduced stating that all physical systems can be connected by a coordinate transformation to the free system with vanishing energy, uniquely leading to the quantum analogue of the Hamilton-Jacobi equation, which is a third-order non-linear differential equation. The interesting feature of their approach, which they carry on in both nonrelativistic and relativistic domains, is the derivation of a trajectory representation of quantum mechanics depending on the Planck length.
## 7 Conclusions
The first conclusion of this work is that considerations of equivalence type in quantum field theories may well guide the abstract research in this area towards the highly required feature of universality, which, being an important form of unification, is among the ultimate purposes of meaningful theoretical research. This may extend even to the act of measuring generic field operators, as was argued by D’Ariano for the homodyne tomography technique in quantum optics.
The second conclusion refers to the hope that the Hawking and Unruh effects are not only mathematical idealizations. Especially their vacuum excitation spectrum interpretation can be used for what one may call high energy kinematical radiometry, at least as a guiding principle in establishing rigorous high energy and astrophysical radiometric standards. Whether or not Unruh’s and Hawking’s effects really occur, they can be employed as a sort of standard in relativistic quantum field radiometry.
# Electroweak Physics program of the SLD Experiment
## 1 Electroweak Physics program of the SLD Experiment
The SLD Experiment began its physics program at the SLAC Linear Collider (SLC) in 1992, and has accumulated a total data sample of approximately 550K hadronic $`Z^0`$ decays between 1992 and 1998. This data sample is a factor 30 smaller than the $`Z^0`$ sample available from the combined data of the 4 LEP experiments, ALEPH, DELPHI, L3 and OPAL. Yet the SLD physics results in many areas are competitive with the combined LEP result, and for some measurements SLD has the world’s most precise results.
There are 3 features that distinguish the SLD experiment at the SLC: a small, stable interaction volume; a precision vertex detector; and a highly polarized electron beam. SLD is the first experiment at an electron linear collider. The collision volume is small and stable, measuring 1.5 microns by 0.7 microns in the transverse dimensions by 700 microns longitudinally.
These key features for the SLD experiment result in the world’s best measurement of the weak mixing angle, a precise direct measurement of parity violation at the $`Zb\overline{b}`$ vertex, $`A_b`$, and a good measurement of the $`Zb\overline{b}`$ coupling strength, $`R_b`$. The weak mixing angle measurement provides an excellent means to search for new physics that may enter through oblique (or loop) corrections, while the $`A_b`$ and $`R_b`$ measurements are excellent means to search for new physics that may enter through a correction at the $`Zb\overline{b}`$ vertex.
In its near (analysis) future, SLD is also exploiting its capabilities to search for $`B_s`$ mixing. The analysis for this is evolving to take full advantage of the precise vertexing information, and by the time of the summer 1999 conferences SLD should have a measurement of $`B_s`$ mixing comparable in sensitivity with the combined LEP result. SLD estimates it should have a reach for $`\mathrm{\Delta }m_s`$ of 12–15 $`\mathrm{ps}^{-1}`$, in the region where it is predicted in the SM.
## 2 $`Z^0`$ Coupling Parameters
At the $`Zf\overline{f}`$ vertex, the SM gives the vector and axial vector couplings to be $`v_f=I_f^3-2Q_f\mathrm{sin}^2(\theta _W^{eff})`$, and $`a_f=I_f^3`$, where $`I_f`$ is the fermion isospin and $`Q_f`$ is the fermion charge. Radiative corrections are significant and are treated as follows. First, vacuum polarization and vertex corrections are included in the coupling constants, and an effective weak mixing angle is defined to be $`\mathrm{sin}^2(\theta _W^{eff})\equiv \frac{1}{4}(1-v_e/a_e)`$. Second, experimental measurements need to be corrected for initial state radiation and for $`Z\gamma `$ interference to extract the $`Z`$-pole contribution.
One can define a parity-violating fermion asymmetry parameter, $`A_f=\frac{2v_fa_f}{v_f^2+a_f^2}`$. The cross-section for $`e^+e^{}Z^0f\overline{f}`$ can be expressed by
$$\frac{d\sigma ^f}{d\mathrm{\Omega }}\propto [v_f^2+a_f^2]\left\{\begin{array}{c}(1+\mathrm{cos}^2\theta )(1+PA_e)+\\ 2\mathrm{cos}\theta A_f(P+A_e)\end{array}\right\}$$
(1)
where $`\theta `$ is the angle of the outgoing fermion with respect to the incident electron, and $`P`$ is the polarization of the electron beam (the positron beam is assumed to be unpolarized). We can then define forward, backward, and left, right cross-sections as follows: $`\sigma _F=\int _0^1\frac{d\sigma }{d\mathrm{\Omega }}d(\mathrm{cos}\theta )`$; $`\sigma _B=\int _{-1}^0\frac{d\sigma }{d\mathrm{\Omega }}d(\mathrm{cos}\theta )`$; $`\sigma _L=\int _{-1}^1\frac{d\sigma _L}{d\mathrm{\Omega }}d(\mathrm{cos}\theta )`$; $`\sigma _R=\int _{-1}^1\frac{d\sigma _R}{d\mathrm{\Omega }}d(\mathrm{cos}\theta )`$. Here, $`\sigma _L`$ ($`\sigma _R`$) is the cross-section for left (right) polarized electrons colliding with unpolarized positrons.
At the SLC, the availability of a highly polarized electron beam allows for direct determinations of the $`A_f`$ parameters via measurements of the left-right forward-backward asymmetry, $`A_{LR}^{FB}`$, defined by
$$A_{LR}^{FB}=\frac{(\sigma _F^L-\sigma _F^R)-(\sigma _B^L-\sigma _B^R)}{\sigma _F^L+\sigma _F^R+\sigma _B^L+\sigma _B^R}=\frac{3}{4}P_eA_f$$
Additionally, a very precise determination of $`A_e`$ is achieved from the measurement of the left-right asymmetry, $`A_{LR}`$, which is defined as
$$A_{LR}=\frac{1}{P_e}\frac{\sigma _L-\sigma _R}{\sigma _L+\sigma _R}=A_e$$
All $`Z`$ decay modes can be used, and this allows for a simple analysis with good statistical power for a precise determination of $`\mathrm{sin}^2(\theta _W^{eff})`$.
## 3 The SLAC Linear Collider
LEP200 is the last of the large electron storage rings, and a new technology is needed to push to higher center-of-mass energies. Electron linear collider technology provides a means to achieve this, and the SLC is a successful prototype for this. It has reached a peak luminosity of $`3\times 10^{30}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$, which is within a factor two of the design luminosity . The spotsizes at the Interaction Point (IP) are actually significantly smaller than design, and Figure 1 indicates how the spotsizes have improved with time. With the small spotsizes, there is an additional luminosity enhancement from the “pinch effect” the two beams have on each other. At the higher luminosities achieved in the last SLC run, the pinch effect enhanced the luminosity by a factor of two.
The luminosity is limited due to the maximum charge achievable in single bunches because of instabilities in the Damping Rings.
## 4 SLD’s Vertex Detector
SLD’s vertex detector consists of 3 layers, with the inner layer located at a radius of 2.7cm. Angular coverage extends out to $`\mathrm{cos}(\theta )=0.90`$. It has 307 million pixels, with a single hit resolution of 4.5 microns. There are 0.4$`\%`$ radiation lengths per layer. The capability of SLD’s vertex detector is illustrated in Figure 2, which is a histogram of the reconstructed jet mass. With a mass cut of $`2.0GeV/c^2`$, SLD can identify b jets with 50$`\%`$ efficiency and 98$`\%`$ purity.
## 5 SLD’s Compton Polarimeter
This polarimeter, shown in Figure 3, detects both Compton-scattered electrons and Compton-scattered gammas from the collision of the longitudinally polarized 45.6 GeV electron beam with a circularly polarized photon beam. The photon beam is produced from a pulsed Nd:YAG laser with a wavelength of 532 nm. After the Compton Interaction Point (CIP), the electrons and backscattered gammas pass through a dipole spectrometer. A nine-channel threshold Cherenkov detector (CKV) measures electrons in the range 17 to 30 GeV. Two detectors, a single-channel Polarized Gamma Counter (PGC) and a multi-channel Quartz Fiber Calorimeter (QFC), measure the counting rates of Compton-scattered gammas.
Due to beamstrahlung backgrounds produced during luminosity running, only the CKV detector can make polarization measurements during beam collisions. Hence it is the primary detector and the most carefully analyzed. Its systematic error is estimated to be $`0.7\%`$. Dedicated electron-only runs are used to compare electron polarization measurements between the CKV, PGC and QFC detectors. The PGC and QFC results are consistent with the CKV result at the level of 0.5$`\%`$. Typical beam polarizations for the SLD experiment have been in the range 73–78$`\%`$.
## 6 Measurements of $`\mathrm{sin}^2(\theta _W^{eff})`$, and testing oblique corrections
For the $`A_{LR}`$ analysis, all Z decay modes can be used, though in practice the leptonic modes are excluded. They are analyzed separately in the measurements of $`A_{LR}^{FB}`$ described below. The $`A_{LR}`$ event selection requires at least 4 charged tracks originating from the IP and greater than 22 GeV energy deposition in the calorimeter. Energy flow in the event is required to be balanced by requiring the normalized energy vector sum be less than 0.6. These criteria have an efficiency of 92$`\%`$ for hadronic events, with a residual background of 0.1$`\%`$.
SLD’s 1998 running yielded 225K hadronic Z decays, with $`N_L=124,404`$ produced from the left-polarized beam and $`N_R=100,558`$ produced from the right-polarized beam. For the measured beam polarization of 73.1$`\%`$, this yielded $`A_{LR}^{meas}=0.1450\pm 0.0030(stat)`$. Correcting for initial state radiation and $`Z\gamma `$ interference effects, gives $`A_{LR}^0=0.1487\pm 0.0031(stat)\pm 0.0017(syst)`$. The systematic error includes a contribution of 0.0015 from uncertainties in the polarization scale and 0.0007 from uncertainties in the energy scale. This result determines the weak mixing angle to be $`\mathrm{sin}^2(\theta _W^{eff})=0.23130\pm 0.00039\pm 0.00022`$. Combining all of SLD’s $`A_{LR}`$ results from 1992-98, gives $`\mathrm{sin}^2(\theta _W^{eff})=0.23101\pm 0.00031`$.
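As a simple arithmetic cross-check of the raw asymmetry quoted above (ignoring the small background and efficiency corrections applied in the full analysis):

```python
N_L, N_R = 124_404, 100_558      # hadronic Z decays for left/right polarised beams
P_e = 0.731                      # measured beam polarisation

A_raw = (N_L - N_R) / (N_L + N_R)
A_LR_meas = A_raw / P_e
stat_err = (1.0 - A_raw**2) ** 0.5 / (P_e * (N_L + N_R) ** 0.5)

print(f"A_LR(meas) = {A_LR_meas:.4f} +- {stat_err:.4f}")
# gives about 0.1450 +- 0.0029, consistent with the measured value quoted in the text
```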
For the $`A_{LR}^{FB}`$ analysis, leptonic Z decay events are selected as follows. The number of charged tracks must be between 2 and 8. One hemisphere must have a charge of -1 and the other hemisphere a charge of +1. The polar angle is required to have $`\mathrm{cos}(\theta )<0.8`$. For ee final state events, the only additional requirement is a deposition of greater than 45 GeV in the calorimeter. The $`\mu \mu `$ final state events must reconstruct with a large invariant mass and have less than 10 GeV per track deposited in the calorimeter. The $`\tau \tau `$ final state events must reconstruct with an invariant mass less than 70 GeV, and deposit less than 27.5 GeV per track in the calorimeter. One stiff track is required ($`>3`$ GeV), the acollinearity angle must be greater than $`160^{}`$ and the invariant mass in each hemisphere must be less than 1.8 GeV. Event selection efficiencies are 87.3$`\%`$ for $`ee`$, 85.5$`\%`$ for $`\mu \mu `$, and 78.1$`\%`$ for $`\tau \tau `$. Backgrounds are estimated to be 1.2$`\%`$ for $`ee`$ (predominantly $`\tau \tau `$), 0.2$`\%`$ for $`\mu \mu `$ (predominantly $`\tau \tau `$), and 5.2$`\%`$ for $`\tau \tau `$ (predominantly $`\mu \mu `$ and $`2\gamma `$).
We use Equation 1 in a maximum likelihood analysis (which also allows for photon exchange and for $`Z\gamma `$ interference) to determine $`A_e`$, $`A_\mu `$ and $`A_\tau `$. The results are $`A_e=0.1504\pm 0.0072`$, $`A_\mu =0.120\pm 0.019`$, and $`A_\tau =0.142\pm 0.019`$. These results are consistent with universality and can be combined, giving $`A_{e,\mu ,\tau }=0.1459\pm 0.0063`$. This determines the weak mixing angle to be $`\mathrm{sin}^2(\theta _W^{eff})=0.2317\pm 0.0008`$.
Combining the $`A_{LR}`$ measurements that use hadronic final states and the $`A_{LR}^{FB}`$ measurements that use leptonic final states, we determine the weak mixing angle to be $`\mathrm{sin}^2(\theta _W^{eff})=0.23110\pm 0.00029`$. This is a preliminary result.
A comparison of SLD’s result with leptonic asymmetry measurements at LEP is given in Figure 5. These results are compared by technique, where $`A_l`$ is SLD’s combined result from $`A_{LR}`$ and $`A_{LR}^{FB}(leptons)`$ described above; $`A_{FB}^l`$ is the LEP result using the forward-backward asymmetry with leptonic final states; $`A_\tau `$ and $`A_e`$ are the LEP results from analyzing the $`\tau `$ polarization for the $`\tau \tau `$ final state. We do not include in this comparison the LEP results using hadronic final states. These results are discussed below, when we examine SLD’s $`A_b`$ measurement and tests of b vertex corrections. The SLD and LEP data in Figure 5 are replotted in Figure 5 by experiment rather than by technique. The data are consistent and can be combined to give a world average $`\mathrm{sin}^2(\theta _W^{eff})=0.23128\pm 0.00022`$.
A convenient framework for analyzing the consistency of the $`\mathrm{sin}^2(\theta _W^{eff})`$ measurement with the SM and with other electroweak measurements is given by the Peskin-Takeuchi parametrization for probing extensions to the SM. This parametrization assumes that vacuum polarization effects dominate and expresses new physics in terms of the parameters S and T, which are defined in terms of the self-energies of the gauge bosons. In S-T space, a measurement of an electroweak observable corresponds to a band with a given slope. Figure 6 shows the S-T plot for measurements of the weak mixing angle ($`\mathrm{sin}^2(\theta _W^{eff})`$), the Z width ($`\mathrm{\Gamma }_Z`$), and the W mass ($`M_W`$), . The experimental bands shown correspond to one sigma contours. The elliptical contours are the error ellipses (68$`\%`$ confidence and 95$`\%`$ confidence) for a combined fit to the data. The SM allowed region is the small parallelogram, with arrows indicating the dependence on $`m_t`$ and $`m_H`$. The Higgs mass is allowed to vary from 100 GeV to 1000 GeV and $`m_t`$ from 165 GeV to 185 GeV. The measurements are in reasonable agreement with the SM and favour a light Higgs mass. A comparison is also given to a prediction for the parameter space of the Minimal Supersymmetric Model (region of dots in figure). The combined SLD and LEP measurement for the weak mixing angle gives the narrowest band in S-T space, and provides the best test of the SM for oblique corrections. Improved measurements of $`M_W`$ from LEP and FNAL are eagerly awaited to further constrain and test the SM in this regard.
## 7 Measurements of $`A_b`$, and <br>testing vertex corrections
The measurement technique for determining $`A_b`$ is similar to that for determining $`A_e`$, $`A_\mu `$ and $`A_\tau `$. For this analysis, the capabilities of SLD’s vertex detector are critical and good use is also made of SLD’s particle identification system to identify kaons with the Cherenkov Ring Imaging Detector (CRID). Three different analyses are employed with different techniques for determining the b-quark charge. The Jet Charge analysis uses a momentum-weighted jet charge to identify the b-quark charge, and it requires a secondary vertex mass greater than 2.0 GeV. The Kaon Tag analysis uses the kaon sign in the cascade decay ($`b\to c\to s`$) to identify the b-quark charge, and it requires a secondary vertex mass greater than 1.8 GeV. The Lepton Tag analysis uses the lepton charge in semileptonic decays to identify the b-quark charge; it has no secondary vertex mass requirement.
The three analyses yield the following results: $`A_b`$ (Jet Charge) = $`0.882\pm 0.020\pm 0.029`$; $`A_b`$ (Kaon Tag) = $`0.855\pm 0.088\pm 0.102`$; and $`A_b`$ (Lepton Tag) = $`0.924\pm 0.032\pm 0.026`$. These results can be combined, giving $`A_b=0.898\pm 0.029`$. This is a preliminary result.
Similar to the S-T analysis for testing oblique corrections, one can utilize an extended parameter space for testing vertex corrections. This is done in Figure 7, where we plot the deviation in $`A_b`$ from the SM prediction versus the deviation in $`\mathrm{sin}^2(\theta _W^{eff})`$ from the SM prediction. The three bands plotted are SLD’s $`A_b`$ measurement, the combined leptonic $`\mathrm{sin}^2(\theta _W^{eff})`$ measurement from SLD and LEP, and LEP’s forward-backward b asymmetry. The elliptical contours are the error ellipses (68$`\%`$ confidence and 95$`\%`$ confidence) for a combined fit to the data. The horizontal line is the SM prediction. We note that the data are in excellent agreement, but differ from the SM prediction by 2.6$`\sigma `$. Unfortunately, there will be no new data to indicate whether this deviation results from a statistical fluctuation, a problem in the b physics analysis, or new physics. We also note the discrepancy of where the SLD-LEP $`\mathrm{sin}^2(\theta _W^{eff})`$ and the LEP $`A_{FB}^b`$ measurement bands intersect the SM line. This reflects their 2.2$`\sigma `$ discrepancy in determining $`\mathrm{sin}^2(\theta _W^{eff})`$ within the SM framework.
## 8 Conclusions
The SLD experiment has been the first experiment at an electron linear collider. The viability of a linear collider has been demonstrated and this technology is now being proposed for future $`e^+e^{}`$ colliders with center-of-mass energies up to 1 TeV. The SLD has made many important contributions to precision electroweak physics. SLD has made the best measurement of the weak mixing angle, $`\mathrm{sin}^2(\theta _W^{eff})=0.23110\pm 0.00029`$ (preliminary). This provides a stringent test of oblique corrections; our measurement is consistent with SM predictions and favours a light Higgs mass. SLD makes the only direct measurement of $`A_b`$, which we determine to be $`A_b=0.898\pm 0.029`$ (preliminary). This measurement, together with measurements by SLD and LEP of $`\mathrm{sin}^2(\theta _W^{eff})`$ and LEP’s measurement of $`A_{FB}^b`$, can be used to test b vertex corrections. The data are consistent, but indicate a 2.6$`\sigma `$ discrepancy with the SM prediction.
# Percolation properties of the 2D Heisenberg model
## Abstract
We analyze the percolation properties of certain clusters defined on configurations of the 2–dimensional Heisenberg model. We find that, given any direction $`\stackrel{}{n}`$ in $`O(3)`$ space, the spins almost perpendicular to $`\stackrel{}{n}`$ form a percolating cluster. This result gives indications of how the model can avoid a previously conjectured Kosterlitz–Thouless phase transition at finite temperature $`T`$.
64.60.Cn; 05.50.+q; 75.10.Hk
The classical Heisenberg model in 2 dimensions (2D) describes the behaviour of a system of classical spins with short range ferromagnetic interactions . The spins are placed at the sites of a 2D lattice. The physics of the model, which has been studied both through analytical calculations and Monte Carlo simulations, is defined by equations that display a continuous $`O(3)`$ symmetry and it is subject to the Mermin and Wagner theorem , i.e.: there are no equilibrium states with broken symmetry.
Perturbation theory (PT) indicates that the 2D Heisenberg model has a critical point at zero temperature . Moreover, from the field–theoretical point of view, the spin field carries a particle of non–zero mass $`m`$. This mass has been calculated by applying a Bethe ansatz technique and by making a partial use of PT. However, PT is constructed by studying small oscillations around the trivial configuration (all spins parallel), a state which obviously violates the Mermin and Wagner theorem. Therefore a problem arises concerning the validity of the above–mentioned analytical results and of any other result which relies on PT.
The model has also been studied by numerical simulations. Among other quantities (see for example ), the mass $`m`$ and the magnetic susceptibility $`\chi `$ have been recently measured with good precision. The mass is evaluated from the exponential decay of the 2–point correlation function $`G(x-x^{\prime })`$. The magnetic susceptibility is extracted from the sum $`\chi \propto \sum _xG(x-x^{\prime })`$. Some of the most recent numerical calculations are the following. In $`m`$ has been determined by extrapolating the result from small lattice sizes and large temperatures to smaller temperatures by using finite–size scaling. In it has been extracted from the 2–point function by using improved actions and very high statistics. In both cases agreement with the analytical calculation of $`m`$ is found within 2–4%.
Another scenario for the 2D Heisenberg model has been put forward in . The model would undergo a Kosterlitz–Thouless (KT) phase transition at a finite temperature $`T_{KT}`$ and no massive particle would be carried by the spin field in the low temperature phase. This scenario mimics what is known to happen in the 2D $`O(2)`$ or XY model . If $`T_{KT}`$ is low enough, a numerical simulation is not able to detect it (in it was argued that $`T_{KT}`$ is indeed much smaller than the typical temperatures utilized for thermalization in present–day simulations). In a finite–size analysis of the helicity modulus is used to rule out such a KT transition for $`T>0.1`$. In it was also shown that the data for the correlation length and the magnetic susceptibility for temperatures $`T>0.53`$ do not scale as the KT scenario predicts.
In the present paper, we want to tackle directly the arguments of where the percolation properties of certain clusters are analyzed and, after assuming a set of hypotheses, it is concluded that the magnetic susceptibility diverges which is sufficient to prove that the mass $`m`$ vanishes.
We realize the classical spins of the Heisenberg model by 3–component scalar fields of modulus 1 placed at each site $`x`$ of a square lattice, $`\stackrel{}{\varphi }_x`$ with $`\left(\stackrel{}{\varphi }_x\right)^2=1`$. These fields interact through a nearest neighbour (n.n.) coupling and the hamiltonian can be written as ($`xy`$ stands for two generic n.n. sites)
$$H=-\underset{xy}{\sum }\stackrel{}{\varphi }_x\cdot \stackrel{}{\varphi }_y.$$
(1)
The partition function at a temperature $`T`$ is $`Z=\sum _{\{\stackrel{}{\varphi }_x\}}\mathrm{exp}\left(-H/T\right)`$. The hamiltonian (1) and the partition function are invariant under $`O(3)`$ rotations.
In the following we recall the arguments of . Let us consider a configuration for this model thermalized at a given temperature $`T`$. Let $`\stackrel{}{n}`$ be an arbitrary unit vector in the internal space of the $`O(3)`$ symmetry. To any such a vector we can associate various types of clusters on the configuration. If $`𝒜`$ is one such cluster then its size, defined as the number of sites contained in it, shall be denoted by $`C_𝒜`$. On the other hand, its perimeter, defined as the number of sites along the border of the cluster, will be called $`B_𝒜`$. If a set of clusters completely cover the whole lattice volume with no overlap then we say that we have a “cluster system”.
The Fortuin–Kasteleyn clusters (hereafter called $``$) are made of sites connected by the bonds which survive the deletion process performed with the probability
$$P_{xy}=\mathrm{exp}\left(\mathrm{min}\{0,-\frac{2}{T}\left(\stackrel{}{\varphi }_x\cdot \stackrel{}{n}\right)\left(\stackrel{}{\varphi }_y\cdot \stackrel{}{n}\right)\}\right).$$
(2)
The average size of the $``$ clusters satisfies $`\left\langle C_{}\right\rangle =\kappa \chi `$ where $`\chi `$ is the magnetic susceptibility and $`\kappa `$ is a constant $`\kappa <1`$. The brackets $`\langle \cdot \rangle `$ indicate average over configurations or equivalently the average calculated with the Boltzmann weight of the partition function. In physical terms, $``$ clusters are regions of correlated spins. The set of $``$ clusters form a cluster system.
Other clusters associated to an arbitrary unit vector $`\stackrel{}{n}`$ are the $`^+`$, $`^{}`$ and $`𝒮`$ clusters. All n.n. spins on a thermalized configuration at a temperature $`T`$ typically satisfy $`\left|\stackrel{}{\varphi }_x-\stackrel{}{\varphi }_y\right|\le \epsilon `$ with a parameter $`\epsilon `$ of order $`O\left(\sqrt{2T}\right)`$. Then for any site $`x`$ the scalar product $`\left(\stackrel{}{\varphi }_x\stackrel{}{n}\right)`$ can be either a) $`\left(\stackrel{}{\varphi }_x\stackrel{}{n}\right)>\epsilon /2`$, and $`x`$ belongs to an $`^+`$ cluster, or b) $`\left(\stackrel{}{\varphi }_x\stackrel{}{n}\right)<-\epsilon /2`$, so $`x`$ belongs to an $`^{}`$ cluster, or c) $`|\stackrel{}{\varphi }_x\stackrel{}{n}|\le \epsilon /2`$ and consequently $`x`$ belongs to an $`𝒮`$ cluster. In simple words, the $`𝒮`$ clusters are constituted by sites whose spins almost lie in the plane perpendicular to $`\stackrel{}{n}`$; on the other hand the $`^+`$ ($`^{}`$) clusters contain sites whose spins are almost parallel (antiparallel) to $`\stackrel{}{n}`$. The set of $`^\pm `$ and $`𝒮`$ clusters form a cluster system.
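A minimal sketch of this classification and of the subsequent cluster search, assuming the configuration is stored as an array `phi` of shape (L, L, 3) of unit spins with periodic boundaries (the implementation details here are ours, not those of the original analysis):

```python
import numpy as np

def classify_sites(phi, n_vec, eps):
    """Label each site +1, -1 or 0 according to whether the projection of the spin
    on n_vec is above eps/2, below -eps/2, or in between (the S-type sites)."""
    proj = np.tensordot(phi, n_vec, axes=([-1], [0]))
    labels = np.zeros(proj.shape, dtype=int)
    labels[proj > eps / 2.0] = 1
    labels[proj < -eps / 2.0] = -1
    return labels

def largest_cluster_sizes(labels):
    """Largest cluster for each label, where clusters join nearest neighbours
    carrying the same label (periodic boundaries), via a simple union-find."""
    L = labels.shape[0]
    parent = np.arange(L * L)

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]          # path halving
            a = parent[a]
        return a

    for x in range(L):
        for y in range(L):
            for dx, dy in ((1, 0), (0, 1)):
                xn, yn = (x + dx) % L, (y + dy) % L
                if labels[x, y] == labels[xn, yn]:
                    ra, rb = find(x * L + y), find(xn * L + yn)
                    if ra != rb:
                        parent[ra] = rb
    roots = np.array([find(i) for i in range(L * L)])
    flat = labels.ravel()
    return {lab: int(np.bincount(roots[flat == lab]).max())
            for lab in (-1, 0, 1) if (flat == lab).any()}
```

A candidate percolating cluster shows up as a cluster whose size remains a finite fraction of $`L^2`$ as $`L`$ is increased.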
By using a variant of the $`O(3)`$ model that includes a Lipschitz continuity condition, $`\left|\stackrel{}{\varphi }_x-\stackrel{}{\varphi }_y\right|\le \delta `$ for all n.n. $`x`$, $`y`$ (this condition does not change the physical properties of the model as long as $`\delta >\epsilon `$), it is possible to prove that the $`^\pm `$ clusters lie entirely inside the $``$ clusters . Moreover no $`^+`$ cluster has a common frontier with any $`^{}`$ cluster; they are always separated by spins belonging to $`𝒮`$ type clusters.
Following Ref. , three hypotheses are now necessary to prove that the magnetic susceptibility diverges. The first one states the impossibility of having simultaneous percolation of two or more disjoint clusters in a cluster system defined on the 2D Heisenberg model. This fact is known to be true in several models . The second hypothesis extends the validity of the Mermin and Wagner theorem to hamiltonians which, like the one including the Lipschitz condition, are not differentiable in the fields. These two conditions together prevent the $`^\pm `$ clusters from percolating.
The third hypothesis is to assume that the $`𝒮`$ type clusters do not percolate either. Consequently none of the $`^\pm `$ or $`𝒮`$ clusters percolate. When a cluster system does not contain a percolating cluster then at least two kinds of clusters have a divergent average size . In the cluster system of $`^\pm `$ and $`𝒮`$, this result means that either $`C_^+=\mathrm{}`$ or $`C_{^{}}=\mathrm{}`$ or both (actually both, to avoid the breaking of the second hypothesis). Moreover, as the $`^\pm `$ clusters are entirely included inside the $``$ clusters, these clusters must also have a divergent average size. Therefore the magnetic susceptibility $`\chi `$ diverges and the theory is massless in clear contrast to the calculation of .
The above conclusion can be avoided if at least one of the three hypotheses fails. In particular we have checked the third one, finding that there does exist an $`𝒮`$ cluster which percolates. In the present paper we report the results of a numerical simulation of the Heisenberg model at $`T=0.5`$ on very large lattices to give evidence that this is indeed the case at that temperature. We describe how this percolation takes place in the system, showing that a percolating $`𝒮`$ cluster is found for every direction $`\stackrel{}{n}`$ in the symmetry group space $`O(3)`$. Moreover all (percolating or not) $`𝒮`$ and $`^\pm `$ clusters present a high degree of roughness. More specifically, every spin in a cluster has on average one n.n. lying on the border of the cluster (on a 2D square lattice each spin has four n.n.). In the following we describe the technical details of our simulation and give the results obtained.
We have simulated the system described by the Hamiltonian in Eq. (1) on several lattice sizes $`L^2`$ with $`L=1024`$, 1250, 1550 and 2048 at a temperature $`T=0.5`$. The measurements and cluster analysis have been performed on a sample of between 1000 and 2000 configurations, separated by 200 overheatbath updating steps (this updating method applies a usual heatbath step to the dynamical variable, the angle formed by the spin field $`\stackrel{}{\varphi }_x`$ and the sum of its n.n. spins, and fixes the remaining variables by maximizing the distance between the old and the new updated fields $`\stackrel{}{\varphi }_x`$ in order to hasten the decorrelation).
The percolation of clusters is a well–defined concept on infinite lattices. Obvious computer limitations force us to work on finite volume systems. We have overcome this difficulty by working with a large ratio $`L/\xi `$ where $`\xi `$ is a typical correlation length of the system. In this way we expect that the would–be percolating cluster will show up as a very large cluster, much larger than the others. We have chosen our simulation parameters in such a way that $`L/\xi \gtrsim 4.5`$, see Ref. .
The first task is to decide the value for $`\epsilon `$ to be used in the construction of clusters. $`\epsilon `$ should not be too large in order to avoid a certain (but not instructive for our problem) percolation of the $`𝒮`$ clusters, the estimate $`\epsilon \approx \sqrt{2T}`$ being an appropriate value. In Fig. 1 we show the probability distribution of values of $`|\stackrel{}{\varphi }_x-\stackrel{}{\varphi }_y|`$ for all n.n. $`x`$, $`y`$ on a lattice of size $`L=1024`$ at $`T=0.5`$. The error bars in this and in the subsequent figures are very small: we show only the error bar of the data point on the top of Fig. 1. Motivated by this distribution we have taken $`\epsilon =1.05`$, which satisfies the estimate $`\epsilon \approx \sqrt{2T}=1`$. We have checked that, by using this value of $`\epsilon `$, the $`^+`$ and $`^{}`$ clusters touch each other in only $`0.5`$% of all bonds, and this is shown in the inset of Fig. 1 where we give the fraction of bonds which join $`^+`$ and $`^{}`$ clusters, $`F_H`$, as a function of $`\epsilon `$. At $`\epsilon =1.05`$ we will already find evidence for the existence of percolating $`𝒮`$ clusters. For smaller values of $`\epsilon `$ one can no longer prove that the $`^\pm `$ clusters lie entirely inside the $``$ clusters, which is an essential ingredient for the arguments given in . We could choose smaller values of $`\epsilon `$ only at lower temperatures.
We have taken several choices for the vector $`\stackrel{}{n}`$. In all cases we observed exactly the same results. In this paper we report the figures obtained with $`\stackrel{}{n}=(1,0,0)`$.
In Fig. 2 we show the distribution of sizes for the three types of clusters, $`𝒮`$ (circles) and $`^\pm `$ (triangles). We present the data for each cluster size $`C`$ up to sizes $`C=100`$; for larger values of $`C`$ we show the results averaged over bins $`[\mathrm{ln}C-\eta /2,\mathrm{ln}C+\eta /2]`$ with $`\eta \approx 0.5`$. The quantity $`P(C)\mathrm{d}C`$ is proportional to the probability of finding a cluster with size between $`C`$ and $`C+\mathrm{d}C`$. The line with triangles is continuous and it ends at $`\mathrm{ln}C=10.2`$. On the other hand the curve with circles, which represents the $`𝒮`$ clusters, displays two parts: a continuous line ending at $`\mathrm{ln}C=9.2`$ and a separate point at $`\mathrm{ln}C\approx 13.1`$. This isolated point is the one determined by the percolating cluster. Although not explicitly depicted, this point has a horizontal error bar which results in the slight horizontal spreading of circles. This error bar is a remnant of the fact that on truly infinite lattices the size of the percolating cluster must be infinite. Notice that the continuous part of the curve with circles becomes steeper just before ending at $`\mathrm{ln}C=9.2`$: this is another indication of the existence of a percolating cluster, because it means that all clusters beyond some size prefer to be absorbed by the percolating $`𝒮`$ cluster. This figure has been obtained by working on a lattice size $`L=1024`$ and choosing $`\stackrel{}{n}=(1,0,0)`$. Completely analogous results are obtained with other sizes $`L`$ and any other unit vector $`\stackrel{}{n}`$. We then conclude that there always exists a percolating cluster of type $`𝒮`$ for any versor $`\stackrel{}{n}`$.
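The binning just described can be reproduced with a short routine of the following kind. It is only a sketch of the procedure stated in the text (raw counts up to $`C=100`$, logarithmic bins of width $`\eta \approx 0.5`$ above that); the normalisation per unit $`C`$ is our own choice.

```python
import numpy as np

def size_distribution(sizes, c_raw=100, eta=0.5):
    """P(C): raw counts for C <= c_raw, log-binned averages above."""
    sizes = np.asarray(sizes, dtype=float)
    small = np.arange(1, c_raw + 1)
    p_small = np.array([(sizes == c).sum() for c in small], dtype=float)
    big = np.log(sizes[sizes > c_raw])                 # large clusters, in ln C
    edges = np.arange(np.log(c_raw), big.max() + eta, eta)
    counts, _ = np.histogram(big, bins=edges)
    centres = np.exp(0.5 * (edges[:-1] + edges[1:]))   # bin centres in C
    widths = np.diff(np.exp(edges))                    # bin widths in C
    return (small, p_small), (centres, counts / widths)
```

A lone point far beyond the end of the continuous part of the resulting histogram, as in Fig. 2, is the signature of the percolating cluster.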
In Fig. 3 an example of a type $`𝒮`$ percolating cluster taken from a single thermalized configuration is shown. The percolating sites are coloured in black. To render the fine details of the clusters as clear as possible we show only a piece of size $`512\times 512`$ taken out from a configuration on a lattice with $`L=1024`$. The figure corresponds to the percolating cluster found in the plane perpendicular to $`\stackrel{}{n}=(1,0,0)`$. It is clear from this figure that the clusters present a high degree of roughness.
The property of roughness is made explicit in the inset of Fig. 2. The ratio “perimeter of cluster”/“size of cluster”, $`B/C`$, is displayed against $`\mathrm{ln}C`$ for all $`^\pm `$ (triangles) and $`𝒮`$ (circles) clusters (irrespective of whether they percolate or not). For bidimensional compact objects $`B/C\sim 1/\sqrt{C}`$ when $`C\to \infty `$. In our case this ratio tends to a constant (which is almost 1) when $`C\to \infty `$, and this is an indication that our clusters present a rough border, suggesting that they have a fractal structure.
In the inset of Fig. 2 we see that the ratio $`B/C`$ goes asymptotically to a constant which seems to be a bit larger for the $`𝒮`$ clusters than for $`^\pm `$. We think that this is because the $`𝒮`$ clusters surround the $`^\pm `$ clusters (see the introduction) and consequently they need to have an additional boundary.
We have also studied the ratio between the size of the largest (always percolating) $`𝒮`$ cluster, $`M_𝒮\equiv \mathrm{max}\{C_𝒮\}`$, and the size of the largest $`^\pm `$ cluster, $`M_{}\equiv \mathrm{max}\{C_^\pm \}`$, as a function of the lattice size $`L`$. Let us denote this ratio by $`R`$,
$$R\equiv \frac{M_𝒮}{M_{}}.$$
(3)
If $`R`$ increases with $`L`$ at fixed temperature, we can reasonably infer that the above-described percolation properties survive in the thermodynamic limit. The data for $`R`$ and the maximum sizes of the $`𝒮`$ and $`^\pm `$ clusters are shown in Table 1 for the four lattice sizes we have used. They indicate that the largest $`𝒮`$ cluster indeed keeps percolating as $`L`$ increases. Notice that the fraction of the lattice volume covered by the percolating cluster does not vary appreciably with the lattice size $`L`$.
TABLE 1. Ratio $`R`$ and maximum sizes of clusters as a function of the lattice size $`L`$.
| $`L`$ | 1024 | 1250 | 1550 | 2048 |
| --- | --- | --- | --- | --- |
| $`M_𝒮/L^2`$ | 0.437(10) | 0.429(4) | 0.433(7) | 0.424(4) |
| $`M_{}/L^2`$ | 0.019(3) | 0.015(2) | 0.011(1) | 0.0072(7) |
| $`R`$ | 23(4) | 29(4) | 40(4) | 59(6) |
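As a quick consistency check on the entries of Table 1, the last row is simply the quotient of the two rows above it,

$$R(L=1024)=\frac{0.437}{0.019}\approx 23,\qquad R(L=2048)=\frac{0.424}{0.0072}\approx 59,$$

in agreement with the quoted values 23(4) and 59(6). The growth of $`R`$ with $`L`$ reflects the fact that $`M_𝒮/L^2`$ stays essentially constant (a fixed fraction of the lattice, as expected for a percolating cluster), while the largest non-percolating cluster occupies a decreasing fraction of the volume.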
In conclusion, we have studied the percolation properties of several clusters which can be defined on configurations of the 2D Heisenberg model described by the Hamiltonian in Eq. (1). Of particular interest are the clusters called $`𝒮`$. To define them one has to introduce an arbitrary unit vector $`\stackrel{}{n}`$ in the symmetry group space of the system, $`O(3)`$. When the spin $`\stackrel{}{\varphi }_x`$ at the site $`x`$ is almost perpendicular to $`\stackrel{}{n}`$, we say that this site belongs to some $`𝒮`$ cluster. The term “almost perpendicular” depends on the temperature $`T`$ of the system (see above). For $`T=0.5`$ and working on rather large lattice sizes, $`L\ge 1024`$, we have given strong evidence that for every $`\stackrel{}{n}`$ one of these $`𝒮`$ clusters percolates on each thermalized configuration of the system (see Fig. 2). These percolation properties seem to survive the thermodynamic limit (see Table 1).
This is an important conclusion because if such clusters do not percolate then one can prove that the system carries no massive particle, contradicting the exact calculation of the mass gap performed for this theory in Ref. . Our results exclude this massless phase for $`T>0.5`$.
We have also shown that, at least at $`T=0.5`$, all clusters present a high degree of roughness, recalling a fractal structure (see inset in Fig. 2 and Fig. 3). It seems unlikely that, as suggested in , such a dilute set of spins, almost lying in a plane, can force the system to behave like an effective $`O(2)`$ model (see ).
We wish to thank Andrea Pelissetto for discussions and Julio Fernández for a careful reading of the manuscript. B. A. also thanks the Departamento de Física Aplicada I of the University of Málaga for its warm hospitality during the completion of the paper.
# Klein paradox and antiparticle
## Abstract
The Klein paradox of the Klein-Gordon (KG) equation is discussed to show that the KG equation is self-consistent even at the one-particle level, and that the wave function of the antiparticle is uniquely determined by a reasonable explanation of the Klein paradox. No concept of “hole” is needed.
The Klein paradox of the Dirac equation (see also , ) is of great historical importance for recognizing the existence of the antiparticle of the electron (the positron) and for explaining qualitatively the pair-creation process in the collision of a particle beam with a strongly repulsive electric field. However, the explanation of this Klein paradox usually resorted to the concept of a “hole” in the “negative-energy electron sea”. It was therefore difficult to generalize to the case of the Klein-Gordon (KG) equation, where it is hopeless to fill the doubly infinite set of negative-energy states. Of course, one may say that this problem has been solved in quantum field theory (QFT). But it is still interesting to see that it can also be understood within quantum mechanics (QM).
Consider a one-dimensional problem for KG equation:
$$[i\hbar \frac{\partial }{\partial t}-V(x)]^2\psi =-c^2\hbar ^2\frac{\partial ^2}{\partial x^2}\psi +m^2c^4\psi $$
(1)
with
$$V(x)=\{\begin{array}{cc}0\hfill & x<0\hfill \\ V_0\hfill & x>0\hfill \end{array}$$
(2)
The boundary condition is fixed by incident wave function:
$$\psi _i=a\mathrm{exp}[\frac{i}{\hbar }(px-Et)](x<0)$$
(3)
with $`p>0`$, $`E=\sqrt{p^2c^2+m^2c^4}>0`$.
We expect that the particle wave will be partially reflected at $`x=0`$, forming a reflected wave $`\psi _r`$ together with a transmitted wave $`\psi _t`$ as follows:
$`\psi _r`$ $`=b\mathrm{exp}[{\displaystyle \frac{i}{\hbar }}(-px-Et)](x<0)`$ (4)
$`\psi _t`$ $`=b^{}\mathrm{exp}[{\displaystyle \frac{i}{\hbar }}(p^{}x-Et)](x>0)`$ (5)
where $`p^{\prime 2}=(E-V_0)^2/c^2-m^2c^2`$.
The continuation condition of wave function ($`\psi _i+\psi _r`$) with $`\psi _t`$ at $`x=0`$ leads to
$$\{\begin{array}{cc}\frac{b}{a}\hfill & =\frac{p-p^{}}{p+p^{}}\hfill \\ \frac{b^{}}{a}\hfill & =\frac{2p}{p+p^{}}\hfill \end{array}$$
(6)
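A brief intermediate step may be worth making explicit: Eq. (6) follows from requiring that both $`\psi `$ and its spatial derivative be continuous at $`x=0`$ (the common factors of $`i/\hbar `$ cancel),

$$a+b=b^{\prime},\qquad p(a-b)=p^{\prime}b^{\prime}\;\;\Rightarrow \;\;\frac{b}{a}=\frac{p-p^{\prime}}{p+p^{\prime}},\qquad \frac{b^{\prime}}{a}=\frac{2p}{p+p^{\prime}}.$$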
There are two cases to be discussed:
(1)$`E+mc^2>V_0>E`$
Since $`p^{}=\sqrt{(V_0-E)^2/c^2-m^2c^2}=iq`$ becomes purely imaginary, the transmitted wave
$$\psi _t=b^{}\mathrm{exp}[-qx-iEt/\hbar ](x>0)$$
(7)
decays exponentially along the $`x`$ axis, while the reflectivity of the incident wave equals $`1`$:
$$R\equiv \left|\frac{b}{a}\right|^2=\frac{|p-iq|^2}{|p+iq|^2}=1$$
(8)
(2) $`V_0>E+mc^2`$
Since now $`p^{}=\pm \sqrt{(V_0-E)^2/c^2-m^2c^2}`$ remains real, the transmitted wave is oscillating, while the reflectivity of the incident wave reads
$$R=\left|\frac{b}{a}\right|^2=\frac{|p-p^{}|^2}{|p+p^{}|^2}=\{\begin{array}{cc}<1\hfill & \text{if }p^{}>0\hfill \\ >1\hfill & \text{if }p^{}<0\hfill \end{array}$$
(9)
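A concrete illustration of Eq. (9) may be useful. In units $`\hbar =c=m=1`$ and with illustrative numbers of our own choosing (they are not taken from the text), the following few lines evaluate $`R`$ for both sign choices of $`p^{\prime}`$:

```python
import math

E, V0 = 1.25, 3.0                      # incident energy and step height, V0 > E + 1
p = math.sqrt(E**2 - 1.0)              # incident momentum, p = 0.75
pp = math.sqrt((E - V0)**2 - 1.0)      # |p'| = 1.436...

for sign in (+1, -1):
    R = abs((p - sign * pp) / (p + sign * pp))**2
    print(f"p' = {sign * pp:+.3f}  ->  R = {R:.2f}")
# p' = +1.436  ->  R = 0.10
# p' = -1.436  ->  R = 10.15
```

Only the second choice gives the super-unitary reflectivity $`R>1`$ discussed below.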
While the result of (7) with (8) is as expected, a satisfying explanation of the prediction (9) is needed. In particular, we need to know the criterion for the choice of the sign of $`p^{}`$, and what happens when $`p^{}<0`$?
For this purpose, we should learn from the important observation by Feshbach and Villars, who recast the KG equation, Eq. (1), into two coupled Schrödinger equations:
$$\{\begin{array}{cc}(i\hbar \frac{\partial }{\partial t}-V)\phi \hfill & =mc^2\phi -\frac{\hbar ^2}{2m}\nabla ^2(\phi +\chi )\hfill \\ (i\hbar \frac{\partial }{\partial t}-V)\chi \hfill & =-mc^2\chi +\frac{\hbar ^2}{2m}\nabla ^2(\chi +\phi )\hfill \end{array}$$
(10)
with
$$\{\begin{array}{cc}\phi \hfill & =\frac{1}{2}[(1-\frac{V}{mc^2})\psi +i\frac{\hbar }{mc^2}\dot{\psi }]\hfill \\ \chi \hfill & =\frac{1}{2}[(1+\frac{V}{mc^2})\psi -i\frac{\hbar }{mc^2}\dot{\psi }]\hfill \end{array}$$
(11)
Correspondingly, the continuity equation takes the following form,
$$\frac{\partial \rho }{\partial t}+\nabla \cdot \stackrel{}{j}=0$$
(12)
$$\rho =\frac{i\hbar }{2mc^2}(\psi ^{}\dot{\psi }-\psi \dot{\psi }^{})-\frac{V}{mc^2}\psi ^{}\psi =\phi ^{}\phi -\chi ^{}\chi $$
(13)
$`\stackrel{}{j}`$ $`=`$ $`{\displaystyle \frac{i\hbar }{2m}}(\psi \nabla \psi ^{}-\psi ^{}\nabla \psi )`$ (14)
$`=`$ $`{\displaystyle \frac{i\hbar }{2m}}[(\phi \nabla \phi ^{}-\phi ^{}\nabla \phi )+(\chi \nabla \chi ^{}-\chi ^{}\nabla \chi )+(\phi \nabla \chi ^{}-\chi ^{}\nabla \phi )+(\chi \nabla \phi ^{}-\phi ^{}\nabla \chi )]`$ (15)
In the example here, we find for the incident wave ($`c=1`$):
$$\{\begin{array}{cc}\phi _i\hfill & =\frac{1}{2}(1+\frac{E}{m})\psi _i\hfill \\ \chi _i\hfill & =\frac{1}{2}(1-\frac{E}{m})\psi _i\hfill \end{array}(x<0)$$
(16)
$$\rho _i=|\phi _i|^2-|\chi _i|^2=\frac{E}{m}|a|^2>0$$
(17)
$$j_i=\frac{p}{m}|a|^2>0$$
(18)
For the reflected wave, one has
$$\rho _r=\frac{E}{m}|b|^2>0$$
(19)
$$j_r=-\frac{p}{m}|b|^2<0$$
(20)
The situation for the transmitted wave is more interesting:
$$\{\begin{array}{cc}\phi _t\hfill & =\frac{1}{2}(1+\frac{(E-V_0)}{m})\psi _t\hfill \\ \chi _t\hfill & =\frac{1}{2}(1-\frac{(E-V_0)}{m})\psi _t\hfill \end{array}(x>0)$$
(21)
$$\rho _t=|\phi _t|^2-|\chi _t|^2=\frac{(E-V_0)}{m}|b^{}|^2<0$$
(22)
$$j_t=\frac{p^{}}{m}|b^{}|^2$$
(23)
It seems quite attractive that we should demand $`p^{}<0`$ to get $`j_t<0`$ in conformity with $`\rho _t<0`$ and to meet the requirement of Eq. (12) so that
$$j_i+j_r=j_t$$
(24)
with $`|j_r|>j_i`$ ($`|b|>|a|`$, $`R>1`$).
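That Eq. (24) is compatible with the amplitudes (6) can be verified directly, for either sign of $`p^{\prime}`$:

$$j_i+j_r=\frac{p}{m}|a|^2\left[1-\left(\frac{p-p^{\prime}}{p+p^{\prime}}\right)^2\right]=\frac{p}{m}|a|^2\,\frac{4pp^{\prime}}{(p+p^{\prime})^2}=\frac{p^{\prime}}{m}\left|\frac{2p}{p+p^{\prime}}\right|^2|a|^2=\frac{p^{\prime}}{m}|b^{\prime}|^2=j_t .$$

The choice $`p^{\prime}<0`$ is then precisely the one for which $`j_t<0`$, in conformity with $`\rho _t<0`$, and for which $`|j_r|>j_i`$, i.e. $`R>1`$.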
The reason is clear. For an observer located at $`x>0`$, the energy of the particle in the transmitted wave should be measured with respect to the local potential $`V_0`$. In other words, the particle locally has energy $`E^{}=E-V_0`$. Hence the wave function should be redefined as:
$$\psi _t\to \stackrel{~}{\psi _t}=b^{}\mathrm{exp}[\frac{i}{\hbar }(p^{}x-E^{}t)](x>0)$$
(25)
However, since $`E^{}<0`$, from the experimental point of view, the particle with negative energy behaves as an antiparticle. We should express its wave function as:
$$\stackrel{~}{\psi _t}=b^{}\mathrm{exp}[-\frac{i}{\hbar }(|p^{}|x-|E^{}|t)]$$
(26)
and claim that the energy and momentum of this antiparticle are $`|E^{}|>0`$ and $`|p^{}|>0`$, respectively. It moves to the right even though $`p^{}<0`$ and $`j_t<0`$.
So the above analysis of the Klein paradox reveals that the KG equation is reasonable, or self-consistent, even at the one-particle level. The crucial point is to regard its wave function as a coherent superposition of two parts, as shown by Eq. (11):
$$\psi =\phi +\chi $$
(27)
When $`|\phi |>|\chi |`$, it describes a particle like Eq. (3) with the energy and momentum operators:
$$\widehat{E}=i\hbar \frac{\partial }{\partial t},\widehat{\stackrel{}{p}}=-i\hbar \nabla $$
(28)
When $`|\chi |>|\phi |`$, it describes an antiparticle like:
$$\psi _c\sim \mathrm{exp}[-\frac{i}{\hbar }(\stackrel{}{p_c}\cdot \stackrel{}{x}-E_ct)]$$
(29)
with the corresponding operators for antiparticle:
$$\widehat{E_c}=-i\hbar \frac{\partial }{\partial t},\widehat{\stackrel{}{p_c}}=i\hbar \nabla $$
(30)
which give $`E_c>0`$ and $`\stackrel{}{p_c}`$ for the $`\psi _c`$ shown in Eq. (29). In any case, no concept of “hole” is needed.
It is interesting to notice that Eqs. (29) and (30) were pointed out long ago by Schwinger , and by Konopinski and Mahmoud , and essentially even earlier (in the Green function, or propagator, of QFT) by Stückelberg and Feynman .
However, if we accept the above point of view, it will have far-reaching consequences. A particle is never pure: it always comprises two components, $`\phi `$ and $`\chi `$. In the equation governing its motion, $`\phi `$ and $`\chi `$ are always coupled together, with the symmetry under the transformation ($`\stackrel{}{x}\to -\stackrel{}{x},\;t\to -t`$) and
$`\phi (\stackrel{}{x},t)\to \chi (-\stackrel{}{x},-t)`$ (31)
$`V(\stackrel{}{x},t)\to -V(-\stackrel{}{x},-t)`$ (32)
as shown in Eq. (10) as a special case.
But we wish to stress that Eq. (32) is a basic symmetry, which should be raised to a general postulate in relativistic quantum mechanics as well as in QFT . It can also serve as a starting point for understanding the essence of special relativity (, ).
Finally, it is interesting to add that another paradox in physics, the original version of the EPR paradox, also raised a very acute question in quantum mechanics. Its reasonable explanation leads precisely to the same conclusion as that of this paper, i.e., the necessity of the existence of the antiparticle, with its wave function given by Eq. (29).
This work was supported in part by the NSF of China.
### 0.1 Gravitational Lenses
Refsdael (1964, 1966) noted that the arrival times for the light from two gravitationally lensed images of a background point source are dependent on the path lengths and the gravitational potential traversed in each case. Hence, a measurement of the time delay and the angular separation for different images of a variable quasar can be used to provide a measurement of $`\mathrm{H}_0`$. This method offers tremendous potential because it can be applied at great distances and it is based on very solid physical principles (Blandford & Kundić 1997).
There are of course difficulties with this method as there are with any other. Astronomical lenses are galaxies whose underlying (luminous or dark) mass distributions are not independently known, and furthermore they may be sitting in more complicated group or cluster potentials. A degeneracy exists between the mass distribution of the lens and the value of H<sub>0</sub> (e.g., Keeton and Kochanek 1997; Schechter et al. 1997). Ideally velocity dispersion measurements as a function of position are needed (to constrain the mass distribution of the lens). Such measurements are very difficult (and generally have not been available). Perhaps worse yet, the distribution of the dark matter in these systems is unknown.
Unfortunately, to date, there are very few systems known which have both a favorable geometry (for providing constraints on the lens mass distribution) and a variable background source (so that a time delay can be measured). The two systems to date that have been well-studied yield values of H<sub>0</sub> in the approximate range of 40-70 km/sec/Mpc (Schechter et al. 1997; Impey et al. 1998) with an uncertainty of $`\sim `$20-30%. These values assume a value of $`\mathrm{\Omega }`$ = 1, and rise by 10% for low $`\mathrm{\Omega }`$. Tonry & Franx (1998) have recently reported an accurate new velocity dispersion of $`\sigma `$ = 288 $`\pm `$ 9 km/sec for the lensing galaxy in 0957+561, based on data obtained at the Keck 10m telescope. Adopting $`\mathrm{\Omega }_m`$ = 0.25 and the model of Grogin & Narayan (1996) for the mass distribution of the lens yields a value of H<sub>0</sub> = 72 $`\pm `$ 7 (1-$`\sigma `$ statistical) $`\pm `$ 15% (systematic).
As the number of favorable lens systems increases (as further lenses are discovered that have measurable time delays), the prospects for measuring H<sub>0</sub> and its uncertainty using this technique are excellent. Schechter (private communication) reports that there are now 6 lenses with measured time delays, but perhaps only half of these will be useful for H<sub>0</sub> determinations due to the difficulty of modelling the galaxies.
### 0.2 Sunyaev Zel’dovich Effect and X-Ray Measurements
The inverse-Compton scattering of photons from the cosmic microwave background off of hot electrons in the X-ray gas of rich clusters results in a measurable decrement in the microwave background spectrum known as the Sunyaev-Zel’dovich (SZ) effect (Sunyaev & Zel’dovich 1969). Given a spatial distribution of the SZ effect and a high-resolution X-ray map, the density and temperature distributions of the hot gas can be obtained; the mean electron temperature can be obtained from an X-ray spectrum. The method makes use of the fact that the X-ray flux is distance-dependent, whereas the Sunyaev-Zel’dovich decrement in the temperature is not.
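Schematically (leaving aside the detailed modelling of the gas profiles used in the actual analyses), the way the distance enters can be seen from the scalings of the two observables: the SZ decrement measures $`\mathrm{\Delta }T/T\propto n_eT_eL`$ along the line of sight, while the X-ray surface brightness measures $`S_X\propto n_e^2\mathrm{\Lambda }(T_e)L`$. Eliminating the unknown electron density $`n_e`$ gives the physical depth of the cluster,

$$L\propto \frac{(\mathrm{\Delta }T/T)^2\,\mathrm{\Lambda }(T_e)}{T_e^2\,S_X},$$

and, assuming the cluster is about as deep as it is wide, the angular diameter distance follows from $`D_A\approx L/\theta `$; at low redshift $`\mathrm{H}_0\approx cz/D_A`$. This is only the schematic scaling, not the full analysis used in the literature.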
Once again, the advantages of this method are that it can be applied at large distances and, in principle, it has a straightforward physical basis. Some of the main uncertainties result from potential clumpiness of the gas (which would result in reducing $`\mathrm{H}_0`$), projection effects (if the clusters observed are prolate, $`\mathrm{H}_0`$ could be larger), the assumption of hydrostatic equilibrium, details of the models for the gas and electron densities, and potential contamination from point sources.
To date, the values of $`\mathrm{H}_0`$ published on the basis of this method range from $`\sim `$40 to 80 km/sec/Mpc (e.g., Birkinshaw 1998). The uncertainties are still large, but as more and more clusters are observed, and as higher-resolution (2D) maps of the decrement, X-ray maps and spectra become available, the prospects for this method are improving enormously.
Carlstrom (this meeting) presented exquisite new measurements of the Sunyaev-Zel’dovich decrement, measured in two dimensions with an interferometer, for a number of nearby clusters. He also highlighted the imminent advances on the horizon for X-ray imaging using NASA’s soon-to-be launched Chandra X-ray Observatory (the satellite formerly known as AXAF). There is promise of a significant increase in accuracy for this method in the near future.
### 0.3 The Extragalactic Distance Scale
The launch of the Hubble Space Telescope (HST) in 1990 has provided the opportunity to undertake a major program to calibrate the extragalactic distance scale. The resolution of HST is an order of magnitude higher than can be generally obtained through the Earth’s atmosphere, and moreover it is stable; as a result, the volume of space made accessible by HST increased by 3 orders of magnitude. The HST Key Project on the extragalactic distance scale was designed to measure the Hubble constant to an accuracy of $`\pm `$10% $`rms`$ (Freedman et al. 1994; Mould et al. 1995; Kennicutt et al. 1995). Since the dominant sources of error are systematic in nature, the approach we have taken in the Key Project is to measure H<sub>0</sub> by intercomparing several different methods. This approach allows us to assess and quantify explicitly the systematic errors. The HST Key Project will be completed in 1999; since new results will be available shortly, this discussion will be kept very brief. Results based on half of the available data yield H<sub>0</sub> = 72 $`\pm `$ 5 (random) $`\pm `$ 7 (systematic) km/sec/Mpc (Madore et al. 1998, 1999; Freedman et al. 1998; Mould et al. 1997). In Figure 4, the results for various H<sub>0</sub> methods are combined using both a Frequentist and a Bayesian approach (from Madore et al.).
The largest remaining sources of uncertainty in the extragalactic distance route to H<sub>0</sub> can be traced to uncertainty in the distance to the Large Magellanic Cloud (the galaxy which provides the fiducial comparison for more distant galaxies), and to the potential effects of differing amounts of elements heavier than helium (or metallicity). The importance of the latter effect has been difficult to establish. The recently-installed infrared (NICMOS) camera on HST is being used to address this, and may help to resolve the issue shortly.
A histogram of the distribution of distances to the LMC from the literature is shown in Figure 5. The distances in this histogram are based on Cepheids, RR Lyraes, SN 1987A, red giants, the “red clump”, and long-period variables. Values prior to 1996 come from the published compilation of Westerlund (1997), but only the latest revision published by a given author is plotted for a given data set. Despite decades of effort in measuring the distance to this nearby neighboring galaxy, and the number of independent methods available, the dispersion in measured distance modulus remains very high. Moreover, the distribution is not Gaussian. There has been much recent activity on the red clump which contributes many of the values around 18.3 mag, and gives rise to the bimodal nature of the distribution. There is as yet no understanding of why there is a systematic difference between the Cepheid and the red clump distance scale. This histogram illustrates that the uncertainty in the distance to the LMC is still large. Without assuming that the distribution is Gaussian, the 95% confidence limits are +/- 0.28 mag, and the 68% confidence limits amount to +/- 0.13 mag, or 7% in distance. Unfortunately, the distance to the LMC remains as one of the largest systematic uncertainties in the current extragalactic distance scale.
## Determination of t<sub>0</sub>
Age-dating of the oldest known objects in the Universe has been carried out in a number of ways. The most reliable ages are generally believed to come from the application of theoretical models of stellar evolution to observations of the oldest clusters in the Milky Way, the globular clusters. For about 30 years, the ages of globular clusters have remained reasonably stable, at about 15 billion years (e.g., Demarque et al. 1990; Vandenberg et al. 1996); however, recently these ages have been revised downward, as described below. Ages can also be obtained from radioactive dating or nucleocosmochronology (e.g., Schramm 1989), and a further lower limit can be estimated from the cooling rates for white dwarfs (e.g., Oswalt et al. 1996). Generally, these ages have ranged from about 10 to 20 billion years; the largest sources of uncertainty in each of these estimates are again systematic in nature.
During the 1980’s and 1990’s, the globular cluster age estimates have improved as both new observations of globular clusters have been made with CCD’s, and as refinements to stellar evolution models, including updated opacities, consideration of mixing, and different chemical abundances, have been incorporated (e.g., Vandenberg et al. 1996; Chaboyer et al. 1996, 1998). The latter authors have undertaken an extensive analysis of the uncertainties in the determination of globular cluster ages. From the theory side, uncertainties in globular cluster ages result, for example, from uncertainties in how convection is treated, the opacities, and nuclear reaction rates. From the measurement side uncertainties arise due to corrections for dust and chemical composition; however, the dominant source of overall uncertainty in the globular cluster ages is the uncertainty in the cluster distances.
In fact, the impact of distance uncertainty on the ages of globular clusters is twice as large as its effect on the determination of H<sub>0</sub>. That is, a 0.2 mag difference in zero point corresponds to a 10% difference in the distance (or correspondingly, H<sub>0</sub>), but it results in a 20% difference in the age of a cluster (e.g., Renzini 1991).
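The factor of two can be made explicit with the distance modulus, $`\mu =5\mathrm{log}_{10}(d/10\,\mathrm{pc})`$:

$$\frac{\delta d}{d}=\frac{\mathrm{ln}10}{5}\,\delta \mu \approx 0.46\,\delta \mu ,$$

so that $`\delta \mu =0.2`$ mag corresponds to $`\delta d/d\approx 10`$%. Since luminosities inferred from observed fluxes scale as $`d^2`$, this becomes an error of roughly 20% in the turnoff luminosity, and hence a comparable (about 20%) error in the derived age if one takes, as a rough approximation, $`t\propto 1/L_{\mathrm{TO}}`$; the precise coefficient depends on the stellar models.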
The Hipparcos satellite has recently provided geometric parallax measurements for 118,000 nearby stars (Kovalevsky 1998). Relevant for the calibration of globular cluster distances are the relatively nearby old stars of low metal composition, the so-called subdwarf stars, presumably the nearby analogs of the old, metal-poor stars in globular clusters. Accurate distances to these stars provide a fiducial calibration from which the absolute luminosities of equivalent stars in globular clusters can be determined and compared with those from stellar evolution models. The new Hipparcos calibration has led to a downward revision of the globular cluster ages from $`\sim `$15 billion years to 11-14 billion years (e.g., Reid 1997; Pont et al. 1998; Chaboyer et al. 1998).
However, as emphasized by Chaboyer et al., there are only 8 stars in the Hipparcos catalog having both small parallax errors and low metal abundance, [Fe/H] $`<`$ -1.1 (i.e., less than one tenth the iron-to-hydrogen abundance relative to the solar value). In fact, Gratton et al. (1998) note that there are no stars with parallax errors $`<`$10% with [Fe/H] $`<`$ -2, corresponding to the oldest, metal-poor globular clusters. Hence, the calibration of globular cluster ages based on parallax measurements of old, metal-poor stars remains an area where an increase in sample size will be critical to beat down the statistical uncertainties. A decade or so from now, new parallax satellites such as NASA’s SIM (the Space Interferometry Mission) and the European Space Agency’s mission (named GAIA) will be instrumental in improving these calibrations, not only for subdwarfs, but for many other classes of stars (for example, Cepheids and the lower-mass variable RR Lyrae stars). These interferometers will be capable of delivering 2–3 orders of magnitude more accurate parallaxes than Hipparcos, down to fainter magnitude limits for several orders of magnitude more stars. Until larger samples of accurate parallaxes are available, however, distance errors are likely to continue to contribute the largest source of systematic uncertainty to the globular cluster ages.
## The Cosmic Microwave Background Radiation and Cosmological Parameters
As discussed at this meeting by Silk, Wilkinson and Spergel, over the next few years, increasingly more accurate measurements will be made of the fluctuations in the cosmic microwave background (CMB) radiation. The underlying physics governing the shape of the CMB anisotropy spectrum can be described by the interaction of a very tightly coupled fluid composed of electrons and photons before recombination (e.g., Hu & White 1996; Sunyaev & Zel’dovich 1970). Figure 6 shows a plot of the predicted angular power spectrum for CMB anisotropies from Hu, Sugiyama & Silk (1997), computed under the assumption that the fluctuations are Gaussian and adiabatic. The position of the first angular peak is very sensitive to $`\mathrm{\Omega }_0`$ ($`\mathrm{\Omega }_m`$ + $`\mathrm{\Omega }_\mathrm{\Lambda }`$ + $`\mathrm{\Omega }_k`$).
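Using the approximate scaling quoted in the caption to Figure 6, $`l_{\mathrm{peak}}\sim 220\,\mathrm{\Omega }_{TOT}^{-1/2}`$, this sensitivity is easy to quantify:

$$\mathrm{\Omega }_{TOT}=1\;\Rightarrow \;l_{\mathrm{peak}}\approx 220,\qquad \mathrm{\Omega }_{TOT}=0.3\;\Rightarrow \;l_{\mathrm{peak}}\approx \frac{220}{\sqrt{0.3}}\approx 400,$$

i.e. an open universe moves the first peak to roughly half the angular scale of the flat case.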
For information on cosmological parameters to be extracted from the CMB anisotropies, the following must be true: first, the physical source of these fluctuations must be understood, and second, the sources of systematic error must be eliminated or minimized so that they do not dominate the uncertainties.
Recently it has become clear that almost exact degeneracies exist between various cosmological parameters (e.g., Efstathiou & Bond 1998; Eisenstein, Hu & Tegmark 1998) such that, for example, cosmological models with the same matter density can have the same CMB anisotropies, while having very different geometries. As a result, measurement of CMB anisotropies will, in principle, be able to yield strong constraints on the products $`\mathrm{\Omega }_m`$h<sup>2</sup> and $`\mathrm{\Omega }_b`$h<sup>2</sup>, but not on the individual values of h (= H<sub>0</sub>/100) and $`\mathrm{\Omega }_m`$ directly. Hence, earlier suggestions that such cosmological parameters could be measured from CMB anisotropies to precisions of 1% or better (e.g., Bond, Efstathiou & Tegmark 1997) will unfortunately not be realized. However, breaking these degeneracies can be accomplished by using the CMB data in combination with other data, for example, the Sloan survey and type Ia supernovae (e.g. White 1998).
Currently the estimates of the precision with which cosmological parameters can be extracted from measurements of anisotropies in the CMB are based on models in which the primordial fluctuations are Gaussian and adiabatic, and for which there is no preferred scale. Very detailed predictions can be made for this model, more so than for competing models such as isocurvature baryons or cosmic strings or textures. In the next few years, as the data improve, all of these models will be scrutinized in greater detail.
Important additional constraints may eventually come from polarization measurements (e.g., Zaldarriaga, Spergel & Seljak 1997; Kamionkowski et al. 1997), but these may require the next generation of experiments beyond MAP and Planck. The polarization data may provide a means of breaking some of the degeneracies amongst the cosmological parameters that are present in the temperature data alone. Furthermore, they are sensitive to the presence of a tensor (gravity wave) contribution, and hence can allow a very sensitive test of competing models.
Although it is not yet certain how accurately the cosmological parameters can be extracted from measurements of CMB anisotropies, what is clear is that upcoming, scheduled balloon and space experiments offer an opportunity to probe detailed physics of the early Universe. If current models are correct, the first acoustic peak will be confirmed very shortly and its position accurately measured by balloon experiments even before the launch of MAP. These balloon experiments will soon be followed with the total sky and multi-frequency coverage provided by MAP and Planck. This new era now being entered, of precision CMB anisotropy experiments, is extremely exciting.
## Discussion and Summary
In the past year, a radical shift has begun to occur. Until recently, a majority of the theoretical community viewed the (standard) Einstein-de Sitter model ($`\mathrm{\Omega }_0`$ = 1, $`\mathrm{\Omega }_\mathrm{\Lambda }`$ = 0) as the most likely case (with h = 0.5, t = 13 Gyr). With accumulating evidence for a low matter density, difficulty in fitting the galaxy power spectrum with the standard model, the conflict in ages for the Einstein-de Sitter case, and now, most recently, the evidence from type Ia supernovae for an accelerating universe, a new “standard” model is emerging, a model with $`\mathrm{\Omega }_m\sim `$ 0.3, $`\mathrm{\Omega }_\mathrm{\Lambda }\sim `$ 0.7, h = 0.65, and t = 13 Gyr. This model preserves a flat universe and is still consistent with inflation.
In Figure 7, the bounds on several cosmological parameters are summarized in a plot of the matter density as a function of the Hubble constant. What do these current limits on cosmological parameters imply about the contribution of non-baryonic dark matter to the total matter density? As can be seen from the figure, for H<sub>0</sub> = 70 km/sec/Mpc, current limits to the deuterium abundance (Burles & Tytler 1998; Hogan 1998) yield baryon densities in the range of $`\mathrm{\Omega }_b`$ = 0.02 to 0.04, or 2-4% of the critical density. Given current estimates of the matter density ($`\mathrm{\Omega }_m\sim 0.3`$), non-baryonic matter would thus contribute just over 25% of the total energy density needed for a flat, $`\mathrm{\Omega }`$ = 1 universe.
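The arithmetic behind these numbers is straightforward. The quoted range $`\mathrm{\Omega }_b`$ = 0.02 to 0.04 at H<sub>0</sub> = 70 km/sec/Mpc (h = 0.7) corresponds to $`\mathrm{\Omega }_bh^2\approx `$ 0.01-0.02, and then

$$\mathrm{\Omega }_{\mathrm{non}\text{-}\mathrm{baryonic}}\simeq \mathrm{\Omega }_m-\mathrm{\Omega }_b\approx 0.3-0.03\approx 0.27,$$

so non-baryonic matter makes up roughly 90% of the matter density, or just over a quarter of the critical density, as stated above.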
One might ask, is non-baryonic dark matter still required if $`\mathrm{\Lambda }`$ is non-zero? Allowance for $`\mathrm{\Lambda }\ne 0`$ does not provide the missing energy to simultaneously yield $`\mathrm{\Omega }`$ = 1, while doing away with the necessity of non-baryonic dark matter, at least for the current limits from Big Bang nucleosynthesis. As can be seen from Figure 7, for the current deuterium limits, having all baryonic mass plus $`\mathrm{\Lambda }`$ would require both H<sub>0</sub> $`\sim `$ 30 km/sec/Mpc and an age for the Universe of $`\sim `$ 30 Gyr. These H<sub>0</sub> and age values are outside the range of currently measured values for both of these parameters. Although it might be appealing to do away simultaneously with one type of unknown (non-baryonic dark matter) while introducing another parameter ($`\mathrm{\Lambda }`$), a non-zero value for the cosmological constant does not remove the requirement for non-baryonic dark matter.
The question of the nature of dark matter (or energy) remains with us. In this sense, the situation has not changed very much over the past few decades, although the motivation for requiring a critical-density universe has evolved from considerations of fine-tuning and causal arguments to the development of inflation. But searches for dark matter since the 1970’s have not uncovered sufficient matter to result in a critical-density universe. This year has offered exciting new (and therefore still tentative) results that a non-zero value of the cosmological constant, or perhaps an evolving scalar field like quintessence (Steinhardt, this volume; Steinhardt & Caldwell 1998) could provide the missing energy to result in a critical-density universe. Still, the nature of the dark matter, whether it contributes 25% or 95% of the total energy density, is unknown, and remains as one of the most fundamental unsolved problems in cosmology.
The progress in measuring cosmological parameters has been impressive; still, however, the accurate measurement of cosmological parameters remains a challenging task. It is therefore encouraging to note the wealth of new data that will appear over the next few years, covering very diverse areas of parameter space. For example, measurement of CMB anisotropies, (from balloons and space with MAP, and Planck), the Sloan Digital Sky Survey, Hubble Space Telescope, Chandra X-ray Observatory, radio interferometry, gravitational lensing studies, weakly interacting massive particle (WIMP) cryogenic detectors, neutrino experiments, the Large Hadron Collider (LHC), and a host of others, inspire optimism that the noose on cosmological parameters is indeed tightening. At the very least, the next few years should continue to be interesting!
## Acknowledgments
It is a great pleasure to thank the organizers of this Nobel Symposium for a particularly enjoyable and stimulating conference, and for their immense hospitality. I also thank Brad Gibson for his help with the LMC distance literature search.
## References
W. Baade, IAU Trans. 8, 397, 1954.
N. A. Bahcall, L. M. Lubin & V. Dorman, Astrophys. J. Lett., 447, 81, 1995.
N. A. Bahcall & Fan, Publ. Nat. Acad. Sci., 95, 5956, 1998.
M. Birkinshaw Phys. Rep., 1999, in press.
R. Blandford & T. Kundić, in The Extragalactic Distance Scale, eds. M. Donahue & M. Livio (Cambridge University Press, 1997), pp. 60–75.
A. Bosma, Astron. J.,86, 1791, 1981.
S. Burles & D. Tytler, Astrophys. J., 507, 732, 1998.
Carlberg, R. G., et al., Astrophys. J., 462, 32, 1996.
Carroll, Press, & Turner, Astron. Rev. Astron. Astrophys.,30, 499, 1992.
B. Chaboyer, P. Demarque, P. J. Kernan & L. M. Krauss, Science, 271, 957, 1996.
B. Chaboyer, P. Demarque, P. J. Kernan & L. M. Krauss, Astrophys. J., 494, 96, 1998.
Y.-C. N. Cheng & L. M. Krauss Mon. Not. Royal Astr. Soc., 1999. astro-ph 9810393, in press.
A. Dekel et al., Astrophys. J., 412, 1, 1993.
A. Dekel, D. Burstein & S. D. M. White, in Critical Dialogs in Cosmology, ed. N. Turok (World Scientific, 1997).
P. Demarque, C. P. Deliyannis & A. Sarajedini, in Observational Tests of Inflation, ed. T. Shanks et al. (Dordrecht, Kluwer, 1991).
M. Donahue & M. Livio, eds., The Extragalactic Distance Scale (Cambridge University Press, 1997).
G. Efstathiou & J. R. Bond, Mon. Not. Royal Astr. Soc., 1998,astro-ph/9807103.
A. Einstein, Sitz. Preuss. Akad. d. Wiss. Phys.-Math, 142, (1917).
D. Eisenstein, W. Hu & M. Tegmark, Astrophys. J., 504, L57, 1998.
W. L. Freedman, in Proceedings of the 18th Texas Symposium, eds. A. Olinto, J. Frieman, & D. Schramm (World Scientific Press, 1997a).
W. L. Freedman, in Critical Dialogs in Cosmology, ed. N. Turok (World Scientific, 1997b), p. 92.
W. L. Freedman, B. F. Madore & R. C. Kennicutt, in The Extragalactic Distance Scale, eds. M. Donahue & M. Livio (Cambridge University Press, 1997), pp. 171-185.
W. L. Freedman, J. R. Mould, R. C. Kennicutt & B. F. Madore, in IAU Symposium No. 183, Cosmological Parameters and the Evolution of the Universe, in press, 1998, astro-ph/9801080.
A. Friedmann, Z. Phys., 10, 377, (1922).
M. Fukugita, T. Tutamase & M. Kasai, Mon. Not. Royal Astr. Soc.,246, 24p, 1990.
M. Fukugita & E. Turner, Mon. Not. Royal Astr. Soc.,253, 99, 1991.
P. Garnavich et al., Astrophys. J., 493, L3, 1998.
A. Goobar & S. Perlmutter, Astrophys. J., 450, 14, 1995.
N. A. Grogin & R. Narayan, Astrophys. J., 464, 92, 1996.
A. H. Guth, Phys. Rev., 23, 347, 1981.
C. Hogan, Space Sci. Review, 84, 127 (1998).
W. Hu & M. White Astrophys. J., 441, 30, 1996.
E. Hubble, Publ. Nat. Acad. Sci., 15, 168, 1929.
E. Hubble & M. L. Humason Astrophys. J., 74, 43, 1931.
C. Impey, Astrophys. J., 509, 551, 1998.
N. Kaiser & G. Squires, Astrophys. J., 404, 441, 1993.
N. Kaiser et al., Astrophys. J., 1999, astro-ph 9809269, in press.
M. Kamionkowski, A. Kosowsky & A. Stebbins, astro-ph/9609132.
R. C. Kennicutt, W. L. Freedman & J. R. Mould, Astron. J., 110, 1476, 1995.
C. S. Kochanek, Astrophys. J., 466, 638, 1996.
J. Kovalevsky, Astron. Rev. Astron. Astrophys., 36, 99, 1998.
L. Krauss & M. S. Turner, Gen. Rel. Grav., 27, 1137, 1995.
C.-P. Ma, T. Small & W. Sargent 1998, astro-ph/9808034.
B. F. Madore et al., Nature, 395, 47, 1998.
B. F. Madore et al., Astrophys. J., 1999, in press, Apr. 10 issue, astro-ph/9812157.
J. R. Mould, S. Sakai & S. Hughes, in The Extragalactic Distance Scale, eds. M. Donahue & M. Livio (Cambridge University Press, 1997), pp. 158–170.
J. P. Ostriker & P. Steinhardt, Nature, 377, 600, 1995.
J. P. Ostriker, P. J. E. Peebles & A. Yahil Astrophys. J. Lett., 193, L1, 1974.
T. D. Oswalt, J. A. Smith, M. A. Wood & P. Hintzen, Nature, 382, 692, 1996.
S. Perlmutter et al., Astrophys. J., 483, 565, 1997.
S. Perlmutter et al., Nature, 391, 51, 1998a.
S. Perlmutter et al., Astrophys. J., 1999, astro-ph/9812133.
V. Petrosian, E. Salpeter & P. Szekeres, Astrophys. J., 147, 1222, 1967.
F. Pont, M. Mayor, C. Turon & D. A. Vandenberg, Astron. Astrophys., 1999, in press.
S. Refsdael, Mon. Not. Royal Astr. Soc., 128, 295, 1964.
S. Refsdael, Mon. Not. Royal Astr. Soc., 132, 101, 1966.
N. Reid, Astron. J., 114, 161, 1997.
A. Reiss, W. Press & R. Kirshner Astrophys. J., 473, 88, 1996.
A. Reiss et al. Astron. J., 116, 1009, 1998.
A. Renzini, in Observational Tests of Inflation, ed. T. Shanks et al. (Dordrecht, Kluwer, 1991), pp. 131-146.
D. H. Rogstad & G. S. Shostak, Astrophys. J., 176, 315, 1972.
M. Rowan-Robinson, The Cosmological Distance Ladder, (New York, Freeman, 1985).
V. C. Rubin, N. Thonnard & W. K. Ford, Astrophys. J. Lett., 225, L107, 1978.
A. Sandage et al., Astrophys. J. Lett., 460, 15, 1996.
A. Sandage, Astrophys. J., 127, 513, 1958.
P. Schechter et al., Astrophys. J. Lett., 475, 85, 1997.
D. N. Schramm, in Astrophysical Ages and Dating Methods, eds. E. Vangioni-Flam et al. (Edition Frontieres: Paris, 1989).
I. Smail, R. S. Ellis, M. J. Fitchett & A. C. Edge, Mon. Not. Royal Astr. Soc., 273, 277, 1995.
P. Steinhardt & J. Caldwell, in The Cosmic Microwave Background and Large Scale Structure in the Universe, eds. Y. I. Byan & K. W. Ng (ASP Conf. Series 151, 1998), p. 13.
P. Stetson Publ. Astr. Soc. Pac., 110, 1448, 1998.
R. A. Sunyaev & Y. B. Zel’dovich, Astrophys. & SS, 4, 301, 1969.
R. A. Sunyaev & Y. B. Zel’dovich, Astrophys. & SS, 7, 3, 1970.
J. Tonry & M. Franx, astro-ph 9809064.
V. Trimble, Astron. Rev. Astron. Astrophys., 25, 425, 1987.
S. van den Bergh, in The Extragalactic Distance Scale, eds. M. Donahue & M. Livio (Cambridge University Press, 1997), pp. 1–5.
D. A. Vandenberg, M. Bolte, & P. B. Stetson, Astron. Rev. Astron. Astrophys., 34, 461, 1996.
S. Weinberg, Rev. Mod. Phys., 61, 1, 1989.
B. E. Westerlund, The Magellanic Clouds (Cambridge: Cambridge Univ. Press, 1997).
S. D. M. White, J. F. Navarro, A. E. Evrard & C. S. Frenk, Nature, 366, 429, 1993.
M. White, Astrophys. J., 1999.
M. Zaldarriaga, D. N. Spergel & U. Seljak, Astrophys. J., 488, 1, 1997.
F. Zwicky, Helv. Phys. Acta, 6, 110, 1933.
## Figure Captions
Figures 1a-b: The trend with time for measurements of H<sub>0</sub> and $`\mathrm{\Omega }_m`$. See text for details.
Figures 2a-b: The trend with time for $`\mathrm{\Lambda }`$, and t<sub>0</sub>. Note the arbitrary units for $`\mathrm{\Lambda }`$. See text for details.
Figure 3 (top panel): The Hubble diagram for type Ia supernovae from Hamuy et al. (1996) and Reiss et al. (1998). Plotted is the distance modulus in magnitudes versus the logarithm of the redshift. Curves for various cosmological models are indicated. (bottom panel): Following Reiss et al. (1998), the difference in magnitude between the observed data points compared to an open ($`\mathrm{\Omega }_m`$ = 0.2) model is shown. The distant supernovae are fainter by 0.25 magnitudes, on average, than the nearby supernovae.
Figure 4: Plot of various H<sub>0</sub> determinations and the adopted values from Madore et al. (1998). In the left panel, each value of $`H_0`$ and its statistical uncertainty is represented by a Gaussian of unit area (linked dotted line) centered on its determined value and having a dispersion equal to the quoted random error. Superposed immediately above each Gaussian is a horizontal bar representing the one sigma limits of the calculated systematic errors derived for that determination. The adopted average value and its probability distribution function (continuous solid line) is the arithmetic sum of the individual Gaussians. This Frequentist representation treats each determination as independent, and assumes no a priori reason to prefer one solution over another. A Bayesian representation of the products of the various probability density distributions is shown in the right panel. Because of the close proximity and strong overlap in the various independent solutions the Bayesian estimator is very similar to, while more sharply defined than, the Frequentist solution.
Figure 5: A histogram of distance moduli determinations for the Large Magellanic cloud. Values prior to 1996 are from a published compilation by Westerlund (1997).
Figure 6: The angular power spectrum of cosmic microwave background anisotropies assuming adiabatic, nearly scale-invariant models for a range of values of $`\mathrm{\Omega }_0`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ (Hu, Sugiyama, and Silk 1997; their Figure 4). The C<sub>l</sub> values correspond to the squares of the spherical harmonics coefficients. Low $`l`$ values correspond to large angular scales ($`l\sim \frac{200\mathrm{deg}}{\theta }`$). The position of the first acoustic peak is predicted to be at $`l\sim 220\mathrm{\Omega }_{TOT}^{-1/2}`$, and hence, shifts to smaller angular scales for open universes.
Figure 7: Plot of $`\mathrm{\Omega }_\mathrm{m}`$ versus $`\mathrm{H}_0`$ showing current observational limits on cosmological parameters. The shaded box is defined by values of H<sub>0</sub> in the range of 40 to 90 km/sec/Mpc and 0.15 $`<\mathrm{\Omega }_\mathrm{m}<`$ 0.4. The thick solid lines denote expansion ages for an open ($`\mathrm{\Omega }_\mathrm{\Lambda }`$ = 0) Universe for 10, 15, and 20 Gyr and the thick dashed lines denote expansion ages in the case of a flat ($`\mathrm{\Omega }_\mathrm{m}+\mathrm{\Omega }_\mathrm{\Lambda }`$ =1) Universe. The light dashed lines denote current limits for $`\mathrm{\Omega }_b`$ based on low and high values for the deuterium-to-hydrogen ratio.
# Observation of Quantum Asymmetry in an Aharonov-Bohm Ring
## I Introduction
The Aharonov-Bohm effect, first proposed in 1957 , was experimentally realized in mesoscopic physics in 1987 . Soon after, the Aharonov-Bohm effect became a very fruitful research area in mesoscopic physics . These early investigations, all except one , focused their attention on the Aharonov-Bohm effect at relatively high magnetic fields ($`\omega _c\tau >1`$).
Recently, due to improvements in device fabrication, the Aharonov-Bohm effect has gained renewed interest. Aharonov-Bohm rings are now used to perform phase-sensitive measurements on, e.g., quantum dots or on rings where a local gate only affects the properties of one of the arms of the ring . The technique in these reports uses the idea that, by locally changing the properties of one of the arms of the ring and studying the Aharonov-Bohm effect as a function of this perturbation, information about the changes in the phase can be extracted from the measurements. Recently, a realisation of the electronic double-slit interference experiment also presented surprising results . In particular, the observation of a period halving from $`h/e`$ to $`h/2e`$ and of phase shifts of $`\pi `$ has attracted considerable interest.
All these recent investigations are, in contrast to the earlier ones, performed at relatively low magnetic fields, and the perturbation applied to the ring is regarded as local. Furthermore, they are all performed in the multi-mode regime. Hence we find it important to study the Aharonov-Bohm effect in the single-mode regime at low magnetic fields and as a function of a global perturbation.
## II Experiment
Our starting point in the fabrication of the Aharonov-Bohm structures is a standard two dimensional electron gas (2DEG) realized in a GaAs/GaAlAs heterostructure. The two dimensional electron density is $`\mathrm{n}=2.0\times 10^{15}\mathrm{m}^{-2}`$ and the mobility of the heterostructure is $`\mu =90\mathrm{T}^{-1}`$. This corresponds to a mean free path of approximately $`6\mu \mathrm{m}`$. The 2DEG is made by conventional molecular beam epitaxy (MBE) and is situated 90nm below the surface of the heterostructure. For further details regarding the wafer, contacts etc. we refer to .
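The quoted mean free path follows from the standard two-dimensional Drude estimate $`l=(\hbar k_F/e)\mu `$ with $`k_F=\sqrt{2\pi \mathrm{n}}`$:

$$k_F=\sqrt{2\pi \times 2.0\times 10^{15}\mathrm{m}^{-2}}\approx 1.1\times 10^8\mathrm{m}^{-1},\qquad l=\frac{\hbar k_F}{e}\mu \approx 6.6\mu \mathrm{m},$$

consistent with the value of approximately $`6\mu \mathrm{m}`$ quoted above.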
Using a rebuilt field-emission scanning electron microscope (SEM) operated at an acceleration voltage of $`30\mathrm{k}\mathrm{e}\mathrm{V}`$, a $`100\mathrm{n}\mathrm{m}`$ thick PMMA etch mask is defined by standard e-beam lithography (EBL) on the surface of the heterostructure. The pattern written in the PMMA was then transferred to the heterostructure by a $`50\mathrm{n}\mathrm{m}`$ shallow etch in $`\mathrm{H}_3\mathrm{PO}_4:\mathrm{H}_2\mathrm{O}_2:\mathrm{H}_2\mathrm{O}`$. The dimensions of the etched Aharonov-Bohm structure are given by a ring radius $`r=0.65\mu \mathrm{m}`$ and an arm width $`w=200\mathrm{n}\mathrm{m}`$, as can be seen in Fig. 1.
In a second EBL step we define a PMMA lift-off mask for a $`50\mathrm{n}\mathrm{m}`$ thick and $`30\mu \mathrm{m}`$ wide gold gate which covers the entire Aharonov-Bohm ring. This allows us to globally control the electron density in the Aharonov-Bohm ring during the measurements. Due to depletion from the edges, the structure is initially pinched off. By applying a positive voltage $`V_g`$ on the global gate, electrons are accumulated in the structure and the structure begins to conduct.
The sample was mounted in a <sup>3</sup>He cryostat equipped with a copper electromagnet. All measurements were performed at $`0.3\mathrm{K}`$ unless otherwise stated. The measurements were performed with a conventional voltage-biased lock-in technique with an excitation voltage of $`V_{\mathrm{pp}}=7.7\mu \mathrm{V}`$ at a frequency of $`131\mathrm{H}\mathrm{z}`$. In this report we focus on measurements performed on one device; almost identical results have been obtained with another device in a total of six different cool-downs.
## III Results and Discussion
Fig. 1 presents a measurement of the magnetoconductance of the device displayed in the left inset. As expected, the magnetoconductance is dominated by the Aharonov-Bohm oscillations. The measurement is, due to the long distance between the voltage probes, effectively a two-terminal measurement; hence the Aharonov-Bohm magnetoconductance is, as observed, forced to be symmetric by the Onsager relations.
A Fourier transform of the magnetoresistance displays a very large peak corresponding to a period of 33Gs. This is in full agreement with the dimensions of the ring obtained from the SEM picture.
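This agreement can be checked directly: one $`h/e`$ oscillation corresponds to one flux quantum through the ring area,

$$\mathrm{\Delta }B=\frac{h/e}{\pi r^2}=\frac{4.14\times 10^{-15}\mathrm{Tm}^2}{\pi (0.65\mu \mathrm{m})^2}\approx 31\mathrm{Gs},$$

close to the observed period of 33 Gs; the small difference corresponds to an effective ring radius of about $`0.63\mu \mathrm{m}`$, well within the $`200\mathrm{nm}`$ width of the arms.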
The right inset in Fig. 1 displays the conductance as a function of gate voltage at $`4.2\mathrm{K}`$; steps are observed at integer values of $`e^2/h`$. Five steps are seen as the voltage is increased by $`0.18\mathrm{V}`$ from pinch-off. Such steps have previously been reported in Aharonov-Bohm rings , and can be interpreted as if the device, at these relatively high temperatures, behaves as two quantum point contacts (QPC) in series with two QPCs in parallel. In any case, this indicates that our system, in the gate voltage regime used here, only has a few propagating modes. When the temperature is lowered, the conductance curve changes into a fluctuating signal, and finally the steps are completely washed out by the fluctuations. These fluctuations, which we ascribe to resonances, appear simultaneously with the Aharonov-Bohm oscillations and are the signature of a fully phase-coherent device.
Fig.2 shows two contour plots; the left one displays the measured conductance $`G(B,V_g)`$ as a function of gate voltage and magnetic field. The fluctuating zero magnetic field conductance $`G(0,V_g)`$ has been subtracted from the measurements to enhance the contours of the Aharonov-Bohm signal. This figure clearly displays the Aharonov-Bohm oscillations as a periodic pattern in the horizontal direction. In the vertical direction the gate voltage is changed between $`0.43\mathrm{V}`$ and $`0.50\mathrm{V}`$.
The studied system begins to conduct at $`0.33\mathrm{V}`$. The data clearly show that, by changing the gate voltage, viz. changing the density, it is possible to change the sign of the magnetoconductance, or, stated differently, to change the phase of the Aharonov-Bohm signal by $`\pi `$ . This is quite surprising, since a negative magnetoconductance is always expected in symmetrical Aharonov-Bohm structures . However, for an asymmetrical structure this is no longer true .
In order to compare our measurements with theory , we need to estimate the electron density $`\mathrm{n}`$ in the device. From a simple capacitor-based estimate, a voltage of $`0.5\mathrm{V}`$ corresponds to an electron density of $`\mathrm{n}=ϵ(0.50\mathrm{V}-0.33\mathrm{V})/ae=1.36\times 10^{15}\mathrm{m}^{-2}`$. Here $`a=90\mathrm{n}\mathrm{m}`$ is the distance from the wafer surface to the 2DEG. At high magnetic fields we observe the so-called camel-back structure . An analysis of this structure yields a density of $`\mathrm{n}=0.95\times 10^{15}\mathrm{m}^{-2}`$ at the gate voltage $`V_g=0.50\mathrm{V}`$. We therefore estimate the electron density at $`0.50\mathrm{V}`$ to be $`1\times 10^{15}\mathrm{m}^{-2}`$ and to be zero at $`0.33\mathrm{V}`$. The characteristic dimensionless number $`\mathrm{k}_\mathrm{f}\mathrm{L}`$ is found to be approximately $`160`$ at $`V_g=0.50\mathrm{V}`$, where $`\mathrm{L}`$ is half the circumference of the ring and $`\mathrm{k}_\mathrm{f}=\sqrt{2\pi \mathrm{n}}`$ is the Fermi wavevector.
As a first order approximation we can use a linear relation between the Fermi wavevector and the gate voltage, viz.
$$\mathrm{k}_\mathrm{f}\mathrm{L}=160\frac{\mathrm{V}_\mathrm{g}-0.33\mathrm{V}}{0.50\mathrm{V}-0.33\mathrm{V}}$$
(1)
In the case of asymmetrical structures the conductance is given by
$`\mathrm{G}(\theta ,\varphi ,\delta )=`$ $`{\displaystyle \frac{2e^2}{h}}2ϵ\mathrm{g}(\theta ,\varphi )`$ (3)
$`(\mathrm{sin}^2\varphi \mathrm{cos}^2\theta +\mathrm{sin}^2\theta \mathrm{sin}^2\delta -\mathrm{sin}^2\varphi \mathrm{sin}^2\delta ),`$
where $`\theta =\pi \mathrm{\Phi }/\mathrm{\Phi }_0`$ is the phase originating from the magnetic flux, $`\varphi =k_fL`$ is the average phase due to spatial propagation, $`\delta =\mathrm{\Delta }(k_fL)`$ is the phase difference between the two ways of traversing the ring. The coupling parameter $`ϵ`$ can vary between $`1/2`$ for a fully transparent system and zero for a totally reflecting system. The function $`\mathrm{g}(\theta ,\varphi )`$ is given by
$`\mathrm{g}(\theta ,\varphi )=`$ (4)
$`{\displaystyle \frac{2ϵ}{(a_{-}^2\mathrm{cos}2\delta +a_+^2\mathrm{cos}2\theta -(1-ϵ)\mathrm{cos}2\varphi )^2+ϵ^2\mathrm{sin}^22\varphi }},`$ (5)
where $`a_\pm =(1/2)(\sqrt{1-2ϵ}\pm 1)`$.
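Equations (3)-(5) are straightforward to evaluate numerically. The sketch below, in units of $`2e^2/h`$, is our own illustration (function name and parameter choices included) of how theoretical traces such as those shown in the right part of Fig. 2 can be generated; it is not code from the original analysis.

```python
import numpy as np

def conductance(theta, phi, delta, eps=0.5):
    """Eqs. (3)-(5) in units of 2e^2/h; valid for 0 <= eps <= 1/2.

    theta = pi*Phi/Phi_0 (flux phase), phi = k_f*L (average propagation
    phase), delta = Delta(k_f*L) (phase difference between the two arms).
    """
    a_plus  = 0.5 * (np.sqrt(1.0 - 2.0 * eps) + 1.0)
    a_minus = 0.5 * (np.sqrt(1.0 - 2.0 * eps) - 1.0)
    g = 2.0 * eps / ((a_minus**2 * np.cos(2 * delta) + a_plus**2 * np.cos(2 * theta)
                      - (1.0 - eps) * np.cos(2 * phi))**2
                     + eps**2 * np.sin(2 * phi)**2)
    num = (np.sin(phi)**2 * np.cos(theta)**2 + np.sin(theta)**2 * np.sin(delta)**2
           - np.sin(phi)**2 * np.sin(delta)**2)
    return 2.0 * eps * g * num

# One theoretical magnetoconductance trace, with delta = 0.15 k_f L as used below
kfL = 160.0
theta = np.linspace(-np.pi, np.pi, 401)     # one flux quantum on either side
G = conductance(theta, phi=kfL, delta=0.15 * kfL)
```

Sweeping $`\mathrm{k}_\mathrm{f}\mathrm{L}`$ according to Eq. (1) while scanning $`\theta `$ then generates the model contour plot, including the sign changes of the magnetoconductance discussed above.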
The right part of Fig.2 shows a plot of equation (3), where the value of the phase due to asymmetry is set to vary as $`\delta =0.15\mathrm{k}_\mathrm{f}\mathrm{L}`$ and the coupling parameter $`ϵ`$ is set to $`1/2`$. The scale on the $`\mathrm{k}_\mathrm{f}\mathrm{L}`$-axis is determined by using equation (1), hence when the voltage is changed between $`0.50\mathrm{V}`$ and $`0.43\mathrm{V}`$ the value of $`\mathrm{k}_\mathrm{f}\mathrm{L}`$ changes between 160 and 95. It is seen in Fig.2 that even though a perfect fit is not possible, it is indeed possible to reproduce the general features of the measurement by the theoretical expression (3).
Fig. 3 shows traces taken from the contour plots of Fig. 2. The figure on the left shows nine equidistantly spaced measurements $`(\mathrm{\Delta }V_g=2.2\mathrm{mV})`$. According to equation (1) this corresponds to an equidistant spacing of $`2.0`$ in units of $`\mathrm{k}_\mathrm{f}\mathrm{L}`$. On the figure to the right nine successive theoretical magnetoconductance curves are presented; the spacing in $`\mathrm{k}_\mathrm{f}\mathrm{L}`$ is also $`2.0`$. The amplitude of all nine curves has been scaled by a factor of $`0.2`$. We ascribe this decrease in amplitude to the fact that the experiment is performed at finite temperatures whereas the theoretical expression is an effective zero temperature result.
The comparison of the theoretical expression with the measured magnetoconductance curves shows that a direct comparison between single traces is possible in a limited voltage regime. Such a comparison is not possible over the whole gate voltage range from $`V_g=0.43\mathrm{V}`$ to $`V_g=0.50\mathrm{V}`$ under the assumption, made here, of a linear relation between gate voltage and $`k_f`$; see equation (1). One should also note that when $`V_g`$ is changed by $`0.07\mathrm{V}`$, a new sublevel starts to become populated, as can be seen in the inset of Fig. 1. However, all the complicated features such as period halving and changes of $`\pi `$ in the phase of the Aharonov-Bohm signal are observed for both theory and experiment.
## IV Conclusion
We have measured the Aharonov-Bohm effect in a one-dimensional GaAlAs/GaAs ring. The effect was studied as a function of the electron density in the ring. We find that the standard theoretical expressions for a symmetrical ring are not applicable to the rings in question. To reproduce essential features of the measurements, i.e. the phase shifts, it is necessary to introduce a built-in asymmetry in the ring, e.g. a different average density in the two arms of the ring.
These results show for the first time the influence of asymmetry in a ring, and they give insight into the recent observations of phase shifts and period halving seen in other related systems.
## V Acknowledgements
This work was financially supported by the Velux Fonden, the Ib Henriksen Foundation, the Novo Nordisk Foundation, the Danish Research Council (grants 9502937, 9601677 and 9800243) and the Danish Technical Research Council (grant 9701490).
# Directed rigidity and bootstrap percolation in (1+1) dimensions
## I Introduction
Central-force rigidity percolation (RP) is the mechanical equivalent of the usual percolation problem . In RP forces (vectors) must be transmitted instead of scalars. This problem has received increased attention recently, following the development of mean-field theories as well as of powerful combinatorial algorithms for its numerical study . As a result of these efforts, a deeper understanding of the rigidity transition has emerged, although some open questions remain.
Bethe lattice calculations for RP with an adjustable number $`g`$ of degrees of freedom at each site have been used to obtain the behavior of the spanning cluster density $`P_{\infty }(p)`$ as a function of $`p`$, the dilution (bond or site) parameter. For $`g=1`$ one has usual (scalar) percolation, displaying a continuous transition with $`\beta ^{MF}=1`$. But for any $`g>1`$, the order parameter $`P_{\infty }`$ has a discontinuity at a finite critical value $`p_c`$. Thus the rigidity transition is discontinuous for $`d\to \infty `$. Other MF approximations also predict a first-order RP transition.
On triangular lattices, on the other hand, there is a divergent correlation length and the RP transition is *second order*, but in a different universality class than usual percolation. Some of the numerical evidence in 2d is consistent with a small discontinuity in the order parameter $`P_{\infty }`$, or a very small value for $`\beta `$, but the precise interpretation of this evidence is still a matter of debate. In three dimensions the rigidity transition is undoubtedly second-order. It is at present unclear in which fashion the RP transition becomes discontinuous as the dimensionality increases. Is there something like an upper critical dimension for RP, beyond which it is first-order? Or does it get increasingly “first-order” (i.e. $`\beta \to 0`$) as $`d\to \infty `$? This analysis is further complicated by the fact that the character of the transition is *lattice* dependent. Hypercubic lattices in which sites have $`d`$ degrees of freedom each cannot be rigid if they are diluted, but are rigid if undiluted and if they have appropriate boundary conditions. Thus on hypercubic lattices the RP transition is “trivially first-order” at $`p_c=1`$, in any dimension.
Similar considerations apply to directed lattices. Bethe lattices are directed by construction, since there is only one path between any two given sites. On directed lattices, rigid connectivity takes a particularly simple form. Imagine a rigid boundary to which a site with $`g`$ degrees of freedom must be rigidly attached by means of rotatable springs (central forces). Each spring, or bond, restricts one degree of freedom. Thus the minimum number of bonds required to completely fix this site is $`g`$. Propagation of rigidity on directed lattices is then defined in the following terms: a site (with $`g`$ degrees of freedom) at “time” $`t`$ is rigidly connected to a boundary at $`t=0`$ if and only if it has $`g`$ or more neighbors at *earlier* times who in turn are rigidly connected to the boundary. Thus, in contrast to *undirected* rigidity, which requires complex algorithms that presently limit the maximum sizes to approximately $`1.6\times 10^7`$ sites , directed rigidity percolation (DRP) can be studied by means of a simple numerical procedure, and on much larger systems.
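The propagation rule just described is simple enough to state directly in code. The sketch below is a generic, unoptimized illustration; the mapping `predecessors`, which lists the earlier-time neighbors of each site, depends on the particular lattice and is assumed to be given.

```python
def drp_step(active_prev, predecessors, present, g=2):
    """One time step of directed rigidity (bootstrap) percolation.

    active_prev  : mapping from site to bool, rigidity at earlier times
    predecessors : for each site x of the new layer, its earlier-time neighbors
                   (lattice dependent; assumed supplied by the caller)
    present      : mapping from site to bool, False if removed by dilution
    g            : degrees of freedom per site; a site is rigid only if at
                   least g of its earlier-time neighbors are rigid
    """
    new_active = {}
    for x, nbrs in predecessors.items():
        n_rigid = sum(active_prev[y] for y in nbrs)
        new_active[x] = present[x] and (n_rigid >= g)
    return new_active
```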
It is interesting to notice that, on any *directed* lattice, rigidity percolation is equivalent to *bootstrap percolation* (BP), a modified percolation problem in which a site belongs to a cluster if at least $`m`$ of its neighbors also do . Bootstrap percolation on undirected lattices attempts to describe certain systems in which atoms behave magnetically only if they are surrounded by a “large enough” number of magnetic neighbors. A second reason for interest in BP is the search for novel critical behaviors in percolation , but the present understanding of this problem indicates that BP is either “trivially first-order” with $`p_c=1`$ or second-order and in the universality class of scalar percolation .
Early studies of semi-directed $`m=2`$ BP on square lattices seemed to indicate a transition at a non-trivial $`p`$ , but rigorous arguments later showed that $`p_c=1`$ in this case. To our knowledge there are no published studies of directed bootstrap percolation (DBP) displaying a second-order transition.
It has been recently conjectured that any continuous transition in a nonequilibrium process with a scalar order-parameter, and a non-fluctuating, non-degenerate absorbing state must be in the same universality class as directed percolation (DP). According to this, DRP-DBP would belong to the DP universality class in all dimensions for which it has a continuous transition.
It is thus interesting to study RP on finite-dimensional directed lattices of increasing dimensionality, both to test this conjecture and to understand in which fashion DRP becomes discontinuous as $`d\mathrm{}`$. In this article we report on our results for directed rigidity percolation (DRP, equivalent to DBP) on several (1$`+`$1)-dimensional lattices displaying first- and second-order phase transitions.
When the DRP transition is second-order, we pay particular attention to the determination of critical indices associated with the spreading of rigidity, as described in the following. As usual in the study of directed processes, we define $`D(p)`$ to be the asymptotic density of “active” (rigid) sites, which is equivalent to the probability $`P(p)`$ that, at large times $`t`$, a randomly chosen point is rigidly connected to a totally rigid boundary at $`t=0`$. If the dilution parameter $`p`$ is lower than a critical value $`p_c`$, rigidity does not propagate and $`P(p)=0`$. If the transition is second-order, immediately above $`p_c`$ one has $`P(p)\sim (p-p_c)^{\beta ^{dens}}`$.
If the evolution starts from a *finite* rigid cluster or “seed” at $`t=0`$ instead of a rigid boundary, one defines $`P_a^{seed}(t,p)`$ as the probability that the cluster grown from this seed is still “active” at time $`t`$. If the transition is second-order, $`P_a^{seed}(t,p)\sim (p-p_c)^{\beta ^{seed}}`$ for $`p\to p_c^+`$ and $`t\to \infty `$. At $`p_c`$ this quantity decays as
$$P_a^{seed}(t,p_c)\sim t^{-\delta },$$
(1)
with $`\delta =\beta ^{seed}/\nu _{\parallel }`$ and $`\nu _{\parallel }`$ the temporal (or parallel) *correlation length* exponent: $`\xi _{\parallel }\sim |p-p_c|^{-\nu _{\parallel }}`$.
The typical width $`w`$ of a cluster grown from a finite seed at $`p_c`$ behaves as
$$w(t)\sim t^\chi ,$$
(2)
where $`\chi =\nu _{\perp }/\nu _{\parallel }`$ and $`\nu _{\perp }`$ is the critical index associated with the decay length $`\xi _{\perp }`$ of perpendicular or “space” correlations: $`\xi _{\perp }\sim |p-p_c|^{-\nu _{\perp }}`$. Averages are taken only over clusters still alive at time $`t`$. Finally, the average mass of a cluster grown from a finite seed at $`p_c`$ behaves as
$$M_{seed}(t)\sim t^{\stackrel{~}{\eta }},$$
(3)
where $`\stackrel{~}{\eta }=(\nu _{\parallel }+\nu _{\perp }-\beta ^{dens})/\nu _{\parallel }`$.
For comparison we also simulate numerically usual directed percolation (DP, which corresponds to $`g=1`$). In the DP case a simple argument shows that $`\beta ^{dens}=\beta ^{seed}`$ because of time-reversal symmetry: consider for simplicity bond dilution and choose an arbitrary point $`x`$ at time $`t`$. Any configuration of occupied bonds connecting $`(x,t)`$ to the boundary at $`t=0`$, and thus contributing to $`P(p,t)`$, when reflected in the time direction, contributes to $`P_a^{seed}(t,p)`$ if now a point-like seed is located at $`x`$. Since both the original and the time-inverted configuration have the same probability, $`P(p,t)=P_a^{seed}(p,t)`$ exactly for bond-diluted DP and therefore $`\beta ^{dens}=\beta ^{seed}`$. Notice that this equality implies $`\stackrel{~}{\eta }+\delta -\chi =1`$.
Although no such time-reversal symmetry exists for DRP, we find that $`\beta ^{dens}=\beta ^{seed}`$ also in this case. Furthermore we find that DRP belongs to the DP universality class, i.e. has exactly the same critical indices. Thus, there is no separate universality class for directed rigidity percolation as there is for *undirected* rigidity percolation. This is consistent with a recent conjecture according to which any nonequilibrium process with a single absorbing state will belong to the same universality class as DP.
We also studied the surface critical behavior of DRP, by means of simulations in the presence of an absorbing boundary. In the DP case, the presence of the absorbing wall is known to only modify the survival exponent $`\beta ^{seed}`$. We find that this is also the case for DRP, and the new exponent is also consistent with the one obtained for DP with a wall.
In section II we present our numerical results for DRP on directed lattices, with and without absorbing walls, and estimate the relevant critical indices associated with the second-order transitions. Section III describes DRP on a directed triangular lattice. This case has a first-order transition at $`p=1`$, and can be solved exactly for $`p\to 1`$.
## II Numerical simulations
In order to simulate DRP we store a binary variable per site, indicating whether the given site is or is not rigidly connected to the boundary at $`t=0`$. We use the by now standard techniques of multispin coding (MSC), which allow us to store 64 binary variables in an integer word and to update all of them simultaneously. Since interactions are short-ranged in the time direction, we only need to keep in memory a maximum of three consecutive lines of the system, and we do this by means of three linear arrays which are reused periodically.
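A sketch of how the multispin-coded update can be organized for the rule “rigid if at least two earlier-time neighbors are rigid”: each 64-bit word carries 64 independent samples, and a saturating two-bit counter is built from bitwise operations. The specific neighbor offsets used for the 5n-lattice below are illustrative assumptions, not taken from Fig. 1c.

```python
import numpy as np

def at_least_two(neighbor_words):
    """Bitwise 'at least 2 of the listed neighbors are rigid', lane by lane.

    Each element of neighbor_words is a uint64 array of length L; bit i of a
    word refers to the i-th of 64 independent disorder samples.
    """
    ones = np.zeros_like(neighbor_words[0])   # lanes with >= 1 rigid neighbor
    twos = np.zeros_like(neighbor_words[0])   # lanes with >= 2 rigid neighbors
    for w in neighbor_words:
        twos |= ones & w
        ones |= w
    return twos

def step_5n(row_tm1, row_tm2, occupied):
    """One schematic 5n-lattice time step; 'occupied' packs the dilution masks."""
    nbrs = [np.roll(row_tm1, -1), row_tm1, np.roll(row_tm1, 1),  # bonds to t-1 (assumed offsets)
            np.roll(row_tm2, -1), np.roll(row_tm2, 1)]           # extra bonds to t-2 (assumed offsets)
    return at_least_two(nbrs) & occupied
```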
We have considered three different oriented lattices: square, triangular and the 5n-lattice (defined below). The first two display trivial behavior (i.e. they are rigid only at $`p=1`$) and the third presents a continuous DRP transition at a non-trivial $`p_c`$. First we discuss DRP on a square lattice as depicted in Fig. 1a. Each site at time $`t`$ has two neighbors at time $`t-1`$. This is the minimum number of neighbors needed for rigidity with $`g=2`$ and thus any amount of dilution is enough to impede propagation of rigidity. Therefore square lattices are not rigid at any $`p<1`$. If $`p=1`$, rigidity propagates only if boundary conditions are appropriate (e.g. rigid, or periodic, but not open). Any finite rigid cluster of size $`l`$ shrinks to zero in $`l`$ time-steps, as shown in Fig. 2a. The same would happen on directed $`d`$-dimensional hypercubic lattices if $`g=d`$.
We next consider the triangular lattice, oriented as shown in Fig. 1b. Each site has three neighbors at earlier times. Despite the number of neighbors being larger than the minimum required (two), this lattice is also unable to propagate rigidity if diluted by any amount. To see why this is so, consider Fig. 2b, where one starts from a finite cluster of rigid sites (black sites) at $`t=0`$. If the lattice is undiluted ($`p=1`$), this cluster would just propagate unchanged in “time”. If the lattice is diluted by any amount, this rigid cluster would gradually shrink and eventually disappear. Thus for this lattice $`p_c=1`$, the same as for the square lattice. In contrast, finite-size effects are expected to be quite strong on the triangular lattice, since the lifetime of a finite rigid cluster diverges as $`p\to 1`$, no matter its original size. Also boundary effects are different since now propagation of rigidity can exist without periodic boundary conditions. We discuss this case in detail in Sec. III.
In order to have a nontrivial $`p_c`$ for DRP, we use triangular lattices augmented with two further bonds per site. These extra bonds connect layers $`t`$ and $`t-2`$, as shown in Fig. 1c. This makes a total of five neighbors per site and we call this the 5n-lattice for simplicity. Now consider what happens when starting from a finite cluster of rigid sites on an undiluted 5n-lattice. As shown in Fig. 2c, the size of the rigid cluster *expands* in time with a constant angle if $`p=1`$. Thus there will be a nontrivial value $`p_c`$, above which rigidity propagates forever. We find that the DRP transition is second-order on this lattice. For comparison we also simulate DP on the square lattice.
We next discuss our numerical results for DRP on the 5n-lattice, and compare them to DP on the square lattice. We typically start our simulations from a finite seed of contiguous rigid sites and let the system evolve for $`10^5`$ timesteps (or until all activity dies out) and measure the survival probability $`P_a^{seed}`$, cluster width $`w`$ and average mass $`M_{seed}`$ as a function of time.
In a first set of simulations we estimate the critical density $`p_c`$ for DRP on site-diluted 5n-lattices, by measuring $`P_a^{seed}(t)`$ at different values of $`p`$ and identifying the one for which the asymptotic behavior is closest to a straight line in a log-log plot (Figure 3). From these data we estimate $`p_c^{DRP}=0.70505\pm 0.00005`$. In contrast to $`P_a^{seed}`$, which shows appreciable curvature for off-critical values of $`p`$, the slopes of the cluster mass $`M_{seed}`$ and the meandering width $`w(t)`$ in a similar log-log graph show little variation for $`p`$ near $`p_c`$. For DP on site-diluted square lattices, we use the estimate $`p_c=0.64470`$.
Figure 4a shows $`P_{seed}(t)`$ for DRP (5n-lattice) and DP (square lattice) at their respective critical values. Assuming power-law corrections to (1), we fit $`P_{seed}(p_c,t)=at^{-\delta }(1+bt^{-\omega })`$ and find $`\delta ^{DRP}=0.15\pm 0.01`$ and $`\delta ^{DP}=0.16\pm 0.02`$.
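A fit of this form can be performed, for example, by least squares in log-log coordinates; the sketch below assumes the measured survival probabilities are available as arrays `t` and `P`, and the initial guesses are arbitrary.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_survival(log_t, log_a, delta, b, omega):
    """log P_a^seed(t, p_c) for the form P = a t^(-delta) (1 + b t^(-omega))."""
    t = np.exp(log_t)
    return log_a - delta * log_t + np.log1p(b * t**(-omega))

def fit_delta(t, P):
    """Return delta and its standard error from measured (t, P) at p_c."""
    p0 = [0.0, 0.16, 0.1, 0.5]                      # rough initial guesses
    popt, pcov = curve_fit(log_survival, np.log(t), np.log(P), p0=p0)
    return popt[1], np.sqrt(pcov[1, 1])
```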
The cluster mass $`M(t)`$ and the meandering width $`w(t)`$ behave as shown in Figures 4b and c respectively. From these data we estimate $`\stackrel{~}{\eta }^{DRP}=1.47\pm 0.01`$, $`\stackrel{~}{\eta }^{DP}=1.47\pm 0.01`$, $`\chi ^{DRP}=0.633\pm 0.005`$ and $`\chi ^{DP}=0.631\pm 0.005`$.
These estimates are consistent with the more precise values $`\delta ^{DP}=0.1594`$, $`\stackrel{~}{\eta }^{DP}=1.4732`$ and $`\chi ^{DP}=0.6327`$, suggesting that DRP and DP are in the same universality class.
In order to further test whether DRP has the same critical behavior as DP, we also studied DRP in the presence of an absorbing wall (DRPW). For DP with an absorbing wall (DPW) it is known that the survival exponent $`\beta ^{seed}`$ is replaced by $`\beta _1^{seed}`$, while $`\nu _{}`$ and $`\nu _{}`$ remain unchanged. Therefore only $`\delta `$ is expected to change due to the presence of the absorbing wall. Our results are displayed in Fig. 5, and from them we obtain $`\delta ^{DRPW}=0.423\pm 0.003`$, $`\stackrel{~}{\eta }^{DRPW}=1.48\pm 0.01`$ and $`\chi ^{DRPW}=0.62\pm 0.02`$. Notice that $`\delta ^{DRPW}`$, $`\stackrel{~}{\eta }^{DRPW}`$ and $`\chi ^{DRPW}`$ no longer satisfy the relation $`\stackrel{~}{\eta }+\delta -\chi =1`$, since $`\beta ^{dens}`$ and $`\beta _1^{seed}`$ are independent exponents. These results are entirely consistent with the values obtained for DPW by other authors.
## III DRP on the triangular lattice
This case is marginal, as already noted: any amount of dilution will destroy rigidity and thus $`p_c=1`$, but on the other hand the lifetime of finite clusters is not finite as on the square lattice, but diverges as $`p\to 1`$. As we show now, it is possible to obtain finite-size effects analytically for $`q=(1-p)\ll 1`$.
Assume one starts from a completely rigid boundary at $`t=0`$, on a triangular lattice of infinite width (Fig. 6a). Let $`q=(1-p)\ll 1`$ be the dilution parameter. We do not need to specify for the moment whether we are dealing with bond or site dilution. For short times all sites are rigidly connected to the lower boundary, but soon some non-rigid sites, or “defects”, will appear in the presence of either bond or site dilution. The smallest possible defect is a single non-rigid site, which happens with probability $`q`$ per site on site-diluted lattices (one missing site) and with probability $`3q^2p+q^3\approx 3q^2`$ per site on bond-diluted lattices (two or three missing bonds). This single defect “heals” immediately since each site above this one has three predecessors, but needs only two rigid ones in order to be itself rigid.
A non-healing defect (in the following simply a defect) is created if two sites connected by a diagonal bond are simultaneously non-rigid, as in Fig. 6a. All sites directly above these will have only one rigid neighbor and thus fail to be rigid, creating a “defect wall”. Assume that these paired defects are nucleated with density $`\rho (p)`$ per unit length (we calculate $`\rho `$ later), and consider now the time evolution of the resulting defect wall.
In the absence of dilution ($`q=0`$), the boundaries of a non-rigid region stay unchanged in time (Fig. 6a). Rigid sites directly on this boundary have only two bonds (the minimum required number since $`g=2`$) to rigid sites at earlier times. If one of these boundary sites fails to be rigid, all sites above it will also not be rigid. In this case the rigid boundary is displaced by one unit, as shown in Fig. 6b. Therefore, for small but nonzero $`q`$, the rigid wall in Fig. 6b moves rightwards with an average velocity $`v=x/t`$ which equals the probability for a boundary site to fail to be rigid.
Neglecting fluctuations, we have a picture in which defects appear at a rate $`\rho (p)`$ per unit length, giving rise to non-rigid regions which widen in time with constant velocity $`v(p)`$. The system will become completely non-rigid when all defect regions have coalesced, as depicted in Fig. 7. This picture of the rigid-non-rigid transition is related to the Polynuclear Growth model (PNG), which has been extensively studied in the area of crystal growth. For our discussion of DRP we only need a few results which can be derived by means of simple arguments.
Assuming one knows the cone angle $`v(p)`$ and the defect density $`\rho (p)`$, it is easy to calculate the density of rigid points $`P(p,t)`$ after $`t`$ timesteps on a system of infinite width. A point $`(x,t)`$ will be rigidly connected to the rigid boundary located at $`t=0`$ if it has not suffered the effect of any defect. In other words, if no defect has nucleated inside a “cone” with downwards opening angle $`v`$ and whose vertex sits at $`(x,t)`$. Let $`\mathrm{\Omega }=vt^2`$ be the area of this cone. Since defects nucleate randomly in space-time with density $`\rho (p)`$, their number inside any given area $`\mathrm{\Omega }`$ is a Poisson-distributed random variable with average $`\mathrm{\Omega }\rho `$. Thus
$$P(p,t)=e^{-\left(t/t^{*}\right)^2},$$
(4)
where
$$t^{*}=(v\rho )^{-1/2}$$
(5)
is a characteristic time for the disappearance of rigidity on an infinitely wide system. Using similarly simple arguments it is easy to see that the mean lifetime of a *finite* rigid cluster diverges as $`v^{-1}`$ as $`p\to 1`$.
We now calculate $`v(p)`$ and $`\rho (p)`$, and compare the resulting prediction for $`t^{*}`$ with our numerical results. Under site-dilution, the probability per unit time for a rigid wall to be displaced by one unit is simply $`v_{site}=q`$. If bonds are diluted instead, one gets $`v_{bond}=1-p^2\approx 2q`$. In order to calculate $`\rho `$, we notice that a pair of contiguous absent sites appears with probability $`\rho _{site}=2q^2`$ per site and per unit time on site-diluted lattices. On bond-diluted lattices on the other hand, creating such a pair requires at least three missing bonds. Thus $`\rho _{bond}\sim q^3`$. Finally one has (Eq. (5)) $`t_{site}^{*}\sim q^{-3/2}`$ and $`t_{bond}^{*}\sim q^{-2}`$. Figure 8 shows $`t^{*}`$ as measured on site-diluted lattices. These values are obtained by integrating in time the density of rigid sites $`P(p,t)`$. For an intermediate range of $`q`$, it is found that $`t^{*}\sim q^{-3/2}`$ as predicted for infinitely wide systems.
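The predicted divergence $`t_{site}^{*}\sim q^{-3/2}`$ can be checked with a short direct simulation. The sketch below assumes an orientation of the triangular lattice in which site $`x`$ at time $`t`$ has predecessors $`x-1`$, $`x`$ and $`x+1`$ at time $`t-1`$ (periodic in $`x`$), which is one convenient realization of Fig. 1b.

```python
import numpy as np

rng = np.random.default_rng(1)

def t_star_site(q, width=4096, t_max=200000):
    """Integrate the rigid-site density P(p, t) over time for site dilution q."""
    active = np.ones(width, dtype=bool)            # fully rigid boundary at t = 0
    t_star = 0.0
    for _ in range(t_max):
        nbr_count = (active.astype(np.int8)
                     + np.roll(active, 1) + np.roll(active, -1))
        present = rng.random(width) >= q           # site dilution of the new row
        active = present & (nbr_count >= 2)        # g = 2 rigidity rule
        frac = active.mean()
        t_star += frac                             # t* ~ sum over t of P(p, t)
        if frac == 0.0:
            break
    return t_star

# log(t*) versus log(q) should have slope close to -3/2 at intermediate q,
# provided the width is large enough to avoid the finite-width crossover.
for q in (0.02, 0.04, 0.08):
    print(q, t_star_site(q))
```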
On a system of finite width $`w`$ with periodic boundary conditions, a crossover to a width-dominated behavior is expected if $`w\ll t^{*}v`$, or equivalently if $`w\ll (v/\rho )^{1/2}`$, whereupon
$$P(p,t)_{finite}=e^{-t/t_{finite}^{*}},$$
(6)
and
$$t_{finite}^{*}=(w\rho )^{-1}.$$
(7)
This regime corresponds to the defect-free area $`\mathrm{\Omega }`$ becoming essentially a rectangle of height $`t`$ and width $`w`$ instead of a triangle of height $`t`$ and base $`vt`$. According to equation (7), one should expect $`t^{*}\sim q^{-2}`$ for small $`q`$.
This regime is observed for $`w=128`$, but is less clear for larger values of $`w`$. Observation of this crossover for wider systems is numerically difficult, since it requires one to simulate very small $`q`$ values, which makes the mean rigid times too large.
## IV Conclusions
We considered directed rigidity percolation (DRP) with two degrees of freedom per site on three different (1$`+`$1)-dimensional lattices. This problem is equivalent to directed bootstrap percolation (DBP) with $`m=2`$. On the square lattice, the system is only rigid at long times if $`p=1`$. On triangular lattices a similar situation happens, but this case has a non-trivial behavior for $`p\to 1`$, which we calculate analytically and confirm by numerical simulation. The mean lifetime of rigidity on infinitely wide systems is found to diverge when $`p\to 1`$ as $`(1-p)^{-3/2}`$ for site dilution, and as $`(1-p)^{-2}`$ for bond dilution. The mean lifetime of a finite rigid cluster diverges on the other hand as $`(1-p)^{-1}`$ in both cases.
By augmenting the triangular lattice with two further bonds we define the 5n-lattice, which has a continuous transition at $`p_c^{DRP}=0.70505\pm 0.00005`$ for site-dilution. We measure the critical indices associated with the spreading of rigidity and find that the DRP transition belongs to the directed percolation (DP) universality class, as a recent conjecture would indicate. A similar numerical study of DRP with an absorbing wall gives exponents equally consistent with those of DP. Thus, while (undirected) rigidity percolation does not belong to the same universality class as usual percolation, the introduction of directedness makes these two problems essentially equivalent at their respective critical point, i.e. on large scales. On (d$`+`$1) directed lattices, Bethe lattice calculations indicate that the DRP transition becomes first-order for large $`d`$, while DP is always second-order (with mean-field exponents above its upper critical dimension $`d_c=5`$). We are presently extending this study to larger values of $`d`$.
###### Acknowledgements.
One of us (CM) wishes to thank K. Lauritsen and P. Grassberger for useful discussions on DP. C. M. is supported by FAPERJ, and M. A. M by CAPES, Brazil.
# Simulation of the Zero Temperature Behavior of a 3-Dimensional Elastic Medium
## Abstract
We have performed numerical simulation of a 3-dimensional elastic medium, with scalar displacements, subject to quenched disorder. In the absence of topological defects this system is equivalent to a $`(3+1)`$-dimensional interface subject to a periodic pinning potential. We have applied an efficient combinatorial optimization algorithm to generate exact ground states for this interface representation. Our results indicate that this Bragg glass is characterized by power law divergences in the structure factor $`S(k)\sim Ak^{-3}`$. We have found numerically consistent values of the coefficient $`A`$ for two lattice discretizations of the medium, supporting universality for $`A`$ in the isotropic systems considered here. We also examine the response of the ground state to the change in boundary conditions that corresponds to introducing a single dislocation loop encircling the system. The rearrangement of the ground state caused by this change is equivalent to the domain wall of elastic deformations which span the dislocation loop. Our results indicate that these domain walls are highly convoluted, with a fractal dimension $`d_f=2.60(5)`$. We also discuss the implications of the domain wall energetics for the stability of the Bragg glass phase. Elastic excitations similar to these domain walls arise when the pinning potential is slightly perturbed. As in other disordered systems, perturbations of relative strength $`\delta `$ introduce a new length scale $`L^{*}\sim \delta ^{-1/\zeta }`$ beyond which the perturbed ground state becomes uncorrelated with the reference (unperturbed) ground state. We have performed scaling analysis of the response of the ground state to the perturbations and obtain $`\zeta =0.385(40)`$. This value is consistent with the scaling relation $`\zeta =d_f/2-\theta `$, where $`\theta `$ characterizes the scaling of the energy fluctuations of low energy excitations.
Observation of glassy behavior in flux line arrays in high $`T_c`$ superconductors calls for a thorough theoretical description of such behavior. In this system the collective pinning of the flux line array, rather than the interactions of a single flux line with the disorder, can dominate the physics. For weak pinning, where dislocations are believed to be unimportant at large length scales, the entire flux line array can be modeled as a single medium subject to a pinning potential. Analytic calculations carried out using the approximation of linear elasticity and including the effects of the short range order in this system indicate that quasi-long range order exists in 3 dimensions. The elastic medium assumption was justified a posteriori and is further supported by an approximate domain wall renormalization calculation. The structure factor of a topologically ordered system was predicted by these calculations to have power law divergences of the form $`S(k)\sim k^{-3}`$. We will consider only the case of scalar displacements, which also models a charge density wave pinned by charge impurities.
We have numerically generated ground states for an elastic medium subject to quenched point disorder in the topologically ordered phase. Our results for the coefficient of the divergence of $`S(k)`$ lie between the renormalization group and Gaussian variational method results obtained by Giamarchi and Le Doussal. In addition to supporting their analysis, we are able to examine the response of the system to changes in boundary conditions and pinning potential. By a suitable choice of the boundary conditions we can simulate the domain wall of elastic deformations induced by a dislocation loop. The energy of the domain wall dominates the random part of the energy cost of introducing a single topological defect. Our results on the energetics of the domain walls thereby indirectly support the analysis carried out by Fisher, which indicated that this system is marginally stable with respect to the introduction of dislocations. The numerically generated domain walls were found to have a fractal dimension $`d_f=2.60(5)`$. At large length scales the ground state is highly sensitive to small perturbations in the disorder potential, as in spin glasses and other disordered systems. Perturbations of relative strength $`\delta `$ in the disorder decorrelate the ground state on length scales $`L^{*}\sim \delta ^{-1/\zeta }`$, with $`\zeta =0.385(40)`$. We are able to relate this response to disorder perturbations to the properties of the domain walls.
We have generated exact ground states for a discrete model whose energy in the continuum limit is given by
$$H=\int d^3x\left\{\frac{c}{2}[\nabla u(\stackrel{}{x})]^2+V(u(\stackrel{}{x}),\stackrel{}{x})\right\}$$
(1)
with distortions of the medium represented by $`u(\stackrel{}{x})`$, which is assumed to be slowly varying over the system. The coefficient $`c`$ is the elastic constant. The potential felt by the medium due to the randomly placed impurities is represented by $`V(u(\stackrel{}{x}),\stackrel{}{x})`$. In microscopic descriptions of an elastic system subject to weak disorder there is a length scale $`\xi _p`$ below which the elastic energy dominates and the medium is ordered. This short range order manifests itself here as correlations in the disorder potential of the form $`V(u,\stackrel{}{x})=V(u+a,\stackrel{}{x})`$, with $`a`$ the intrinsic period of the medium. The period of the potential, $`a`$, is the lattice spacing in a flux line array or the wavelength of a charge density wave. Although this Hamiltonian is insufficient to describe the core of a dislocation loop, it can serve to describe the sheet of elastic deformations which span the loop, since the approximation of linear elasticity breaks down only in the region near the dislocation core. Comparing the ground state of Eq. (1) in a system of size $`L`$ subject to periodic boundary conditions to that for the same disorder realization with twisted boundary conditions in one direction (i.e., $`u(x,y,0)=u(x,y,L)+a`$) allows for the identification of the domain wall which would be caused by a single dislocation loop encircling the system .
The elastic Hamiltonian, Eq. (1), describes a $`(3+1)`$-dimensional interface subject to a disorder potential. In this picture, the displacement variable $`u(\stackrel{}{x})`$ maps to the height of the directed interface. This interface model has a natural discrete representation in which the configuration of the interface is specified by the set of bonds it cuts in a 4-dimensional lattice. The bonds of this lattice are assigned weights which directly correspond to the disorder potential. The sum of the weights of the bonds which the interface cuts gives the energy of the configuration corresponding to the disorder energy $`\int d^3xV(u,\stackrel{}{x})`$. In these discrete models an effective elastic constant arises from a dependence of the number of configurations on the average gradient of the interface. Maximal flow algorithms, a subclass of combinatorial optimization algorithms, allow for the generation of ground states of this discrete representation of the interface.
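To illustrate the structure of this mapping (without reproducing the custom push-relabel code or the SHC/ZHC lattices described below), the following sketch builds the ground-state problem for a toy simple-cubic column model with a $`|\mathrm{\Delta }u|\le 1`$ constraint between neighboring columns and solves it with an off-the-shelf minimum-cut routine. It is a simplified illustration of the interface-as-minimum-cut idea, not the construction actually used in this work.

```python
import numpy as np
import networkx as nx

def ground_state(V, U):
    """Minimal interface u(x) for column disorder V[x, y, z, h] via a min cut.

    Cutting the arc (c, h) -> (c, h+1) means the interface in column c sits at
    height h and pays V[c][h]. Uncapacitated arcs (treated as infinite by
    networkx) force the source/sink attachment and the |u_c - u_n| <= 1
    constraint between neighboring columns (periodic boundaries assumed).
    """
    L = V.shape[:3]
    G = nx.DiGraph()
    cols = list(np.ndindex(L))
    for c in cols:
        G.add_edge('s', (c, 0))                    # no capacity attr => infinite
        G.add_edge((c, U), 't')
        for h in range(U):
            G.add_edge((c, h), (c, h + 1), capacity=float(V[c][h]))
    for c in cols:
        for d in range(3):
            n = list(c)
            n[d] = (n[d] + 1) % L[d]
            n = tuple(n)
            for h in range(1, U + 1):              # enforce |u_c - u_n| <= 1
                G.add_edge((c, h), (n, h - 1))
                G.add_edge((n, h), (c, h - 1))
    cut_value, (S, _) = nx.minimum_cut(G, 's', 't')
    # Generic disorder gives a column-wise monotone cut; count source-side nodes.
    u = {c: sum(1 for h in range(U + 1) if (c, h) in S) - 1 for c in cols}
    return cut_value, u

# Example: a 4x4x4 medium with 6 height levels and random column costs.
rng = np.random.default_rng(0)
E0, u = ground_state(rng.random((4, 4, 4, 6)), U=6)
```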
The lattices numerically studied here are composed of $`L^3\times U`$ nodes, where $`L`$ is the linear size of the elastic medium, and $`U`$ is the extent of the lattice in the $`\widehat{u}`$ direction in which the displacement variable fluctuates. Unlike the simulation of elastic manifolds, where the bond weights are non-periodic, there are long range correlations in the disorder in the $`\widehat{u}`$ direction. We generated random integer weights, chosen from a uniform distribution over $`[0,V_{max}]`$, independently for each of the forward bonds in a layer of unit cells at constant height. The data we will present were obtained with $`V_{max}=5000`$, but we have verified that our results on the structure of the interface are not significantly altered for $`V_{max}`$ as low as $`100`$. Throughout the following discussion, we have normalized the energy of the system, so that the effective range of the bond weights is in $`[0,1]`$. This set of bond weights on one layer is sufficient to fix the value of the disorder on all of the bonds because we require that bonds which differ only by translations along $`\widehat{u}`$ have the same weight. In order to study the universality of the coefficient of the divergence in $`S(\stackrel{}{k})`$, we have simulated interfaces on the simple hypercubic lattice (SHC) as well as the Z-centered hypercubic lattice (ZHC). For the SHC lattice $`\widehat{u}`$ was chosen to be along the (1111) crystallographic axis. The elementary bonds in this lattice are: $`(x,y,z,u)=`$ $`\pm (0,\sqrt{2}/2,1/2,1/2)`$, $`\pm (0,-\sqrt{2}/2,1/2,1/2)`$, $`\pm (\sqrt{2}/2,0,-1/2,1/2)`$, $`\pm (-\sqrt{2}/2,0,-1/2,1/2)`$. In the ZHC lattice we considered the following 12 bonds extending from each node: $`(x,y,z,u)=`$ $`(\pm \sqrt{2}/2,0,0,\pm \sqrt{2}/2)`$, $`(0,\pm \sqrt{2}/2,0,\pm \sqrt{2}/2)`$, $`(0,0,\pm \sqrt{2}/2,\pm \sqrt{2}/2)`$. These lattices are the natural extensions of the two types of lattices used in simulations of the ground state of a 2-dimensional elastic medium. Both types of lattices were simulated using periodic boundary conditions in the transverse directions. In addition, in the ZHC lattice the ground state was computed for each realization of disorder with twisted boundary conditions in one direction.
We have developed a custom implementation of the push-relabel maximal flow algorithm optimized for application to the regular lattices considered here. Our modifications to the push-relabel algorithm reduced the memory requirements by nearly a factor of ten, allowing for simulation of systems composed of up to approximately $`6\times 10^6`$ nodes using less than $`512`$MB. The primary modification involves computing nearest neighbor relations as needed rather than storing this information. Overhangs in the interface are precluded by assigning a large weight to backwards arcs. Since these backwards arcs have an effectively infinite weight, they cannot be part of the minimal cut. Thus our algorithm can operate without storing their weight, provided that flow is always allowed to move along the backwards arcs. These modifications increased the running time of the algorithm, but obtaining the ground state for each realization of disorder still took less than one hour of processor time on a single 400 MHz Pentium II CPU for the largest system sizes studied. The memory requirement is linear in the number of nodes $`N`$; the processor time was found to scale approximately as $`N^{1.3}`$, compared with the worst case bound of $`N^2`$.
The modest computational requirements of this algorithm have allowed us to average the properties of the ground states for a variety of system sizes over a large number of disorder realizations. In addition to generating the value of the minimal energy, the algorithm produces the configuration of the interface. The interface can then be represented by $`u(\stackrel{}{x})`$, which is defined on the 3-dimensional lattice formed by projecting the interface along the $`\widehat{u}`$ direction. Due to the periodicity of the disorder, the energy is invariant under global translations of integer multiples of $`a`$ in the displacement variable, $`u(\stackrel{}{x})\to u(\stackrel{}{x})+na`$. Considering the set of the forward bonds cut at each location provides a characterization of the configuration equivalent to measuring the gradient of the interface. This representation of the interface is useful when comparing different ground states since it is insensitive to global shifts of $`u`$. For the SHC lattice, we have generated at least $`10^3`$ realizations of disorder for systems of size $`L=8,16,24,32,40,48,60,80`$. We chose the extent of the lattice in the displacement direction $`\widehat{u}`$ to ensure that the boundaries of the system do not affect the ground state. For the SHC lattice $`U=20`$ was sufficient. Our simulations for the ZHC lattice were more extensive, with at least $`10^4`$ realizations for systems of size $`L=8,16,32,48`$, subject to both periodic and twisted boundary conditions, and at least $`10^3`$ realizations for systems of size $`L=64,80`$. The largest systems here required $`U=12`$ to prevent the configuration from being affected by the boundaries of the lattice in the $`\widehat{u}`$ direction.
We have examined the displacement correlations of the minimal energy configurations by computing the disorder averaged structure factor $`\overline{S(\stackrel{}{k})}`$ of the displacement variables. This allows us to more clearly distinguish the large length scale behavior; direct measurement of the width is more difficult to analyze due to finite size effects. The orientationally averaged structure factor $`\overline{S(k)}`$ has been obtained by averaging the value of $`\overline{S(\stackrel{}{k})}`$ over radial bins of size $`\mathrm{\Delta }k=0.025`$, and is presented in Fig. 1. The error bars represent the fluctuations of $`\overline{S(\stackrel{}{k})}`$ within each spherical shell. These fluctuations, which measure the anisotropy of the structure factor, generally decrease with decreasing $`k`$; for $`k<0.75`$, these fluctuations saturate at a value comparable to the statistical fluctuations in $`S(\stackrel{}{k})`$ indicating the range in $`k`$ where the system is isotropic within the statistical fluctuations. To extract the coefficient of the leading order divergence we have fit $`k^3\overline{S(k)}`$ with the form
$$k^3\overline{S(k)}=A+Bk$$
(2)
over the region $`k<0.5`$. The leading order term of $`\overline{S(k)}\sim k^{-3}`$ indicates the quasi-long range order of the ground state. For the ZHC lattice we obtain $`k^3\overline{S(k)}=1.08(5)+1.02(6)k`$; for the SHC lattice we obtain $`k^3\overline{S(k)}=1.01(4)+0.46(4)k`$. The error estimates on the parameters in these fits represent the statistical uncertainty over the given fit range and systematic errors arising from the choice of the cutoff in $`k`$. Giamarchi and Le Doussal have applied a renormalization group technique to order $`ϵ=4-d`$ to this system and obtained $`A=1.0`$. They have also carried out a Gaussian variational calculation and obtained $`A=1.1`$. In both of these approximations the value of the coefficient is universal.
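The orientationally averaged structure factor and the fit of Eq. (2) can be computed along the following lines. The sketch assumes a single ground-state displacement field is available as a 3-d array `u` (the disorder average is taken over many such fields) and uses the FFT estimator $`S(k)=|\stackrel{~}{u}(k)|^2/N`$; normalization conventions may differ from those used in the paper.

```python
import numpy as np

def radial_structure_factor(u, dk=0.025, kmax=0.5):
    """Orientationally averaged S(k) from a displacement field u[x, y, z]."""
    L = u.shape[0]
    S = np.abs(np.fft.fftn(u))**2 / u.size          # assumed normalization
    k = 2 * np.pi * np.fft.fftfreq(L)
    kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
    kr = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    S = S.ravel()
    bins = np.arange(dk, kmax + dk, dk)
    idx = np.digitize(kr, bins)
    kk, Sk = [], []
    for i in range(1, len(bins)):                   # skip the k = 0 bin
        sel = (idx == i)
        if sel.any():
            kk.append(kr[sel].mean())
            Sk.append(S[sel].mean())
    return np.array(kk), np.array(Sk)

def fit_A_B(kk, Sk):
    """Fit k^3 S(k) = A + B k over the supplied small-k window, as in Eq. (2)."""
    B, A = np.polyfit(kk, kk**3 * Sk, 1)            # slope, intercept
    return A, B
```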
The form $`Bk^{-2}`$ of the leading order corrections to $`\overline{S(k)}`$ can be obtained by a renormalization group calculation. Upon renormalization, the periodic pinning potential $`V(u(\stackrel{}{x}),\stackrel{}{x})`$ introduces a new term into the Hamiltonian of the form $`\stackrel{}{\mu }(\stackrel{}{x})\cdot \nabla u(\stackrel{}{x})`$ as when $`d=2`$. This random tilting field is short range correlated with $`\overline{\stackrel{}{\mu }(\stackrel{}{x})\cdot \stackrel{}{\mu }(\stackrel{}{y})}=g^2\delta ^3(\stackrel{}{x}-\stackrel{}{y})`$. Unlike the case where $`d=2`$, the strength of this field $`g^2`$ undergoes only a finite renormalization for $`d=3`$. The effects of this term can be determined from the effective small length scale Hamiltonian
$$H_{eff}=\int d^3x\left\{\frac{c}{2}[\nabla u(\stackrel{}{x})]^2+\stackrel{}{\mu }(\stackrel{}{x})\cdot \nabla u(\stackrel{}{x})\right\},$$
(3)
which ignores the periodic pinning potential. Solving the equations of motion and averaging over realizations of the tilting field predicts corrections of the form $`Bk^{-2}`$ with $`B=g^2/c^2`$ at $`T=0`$. In order to consider the effects of the pinning potential and the tilting field separately we relied on the fact that the renormalization group flow of $`V(u(\stackrel{}{x}),\stackrel{}{x})`$ is unaffected by the presence of a non-zero $`g`$. The results of this analysis can be confirmed by examining the stability of the functional renormalization group fixed point obtained by Giamarchi and Le Doussal in an $`ϵ=4-d`$ expansion. The most slowly decaying perturbation to the fixed point decays as $`L^{-ϵ}`$, implying corrections to the structure factor of the form $`k^{ϵ-3}`$. This order $`ϵ`$ calculation also predicts the form of the corrections which are observed in our simulation data.
Naturally, real-space measurements of the width must be consistent with the structure factor. We measured the disorder averaged squared width
$$w^2=\overline{<u^2>-<u>^2},$$
(4)
where $`<>`$ denotes a spatial average over the system. These data, shown in Fig. 2, have been fit using the real space version of Eq. (2): $`w^2=a+b\mathrm{ln}(L)+c/L`$. The constant term arises from the short wavelength fluctuations, while the second and third terms arise from the $`k^{-3}`$ and $`k^{-2}`$ terms in the structure factor respectively. The real space coefficient $`b`$ is related to the leading order behavior of $`S(k)`$ by $`b=A/4\pi ^2`$. This three parameter fit gives estimates for the values of $`A`$ and $`B`$, the coefficients describing the long wavelength form of the structure factor, which are consistent with those obtained by fitting $`S(k)`$ at a single system size. If the form of $`S(k)`$ depended on $`L`$, then this consistency would not be maintained. Direct comparison of the structure factor for various system sizes also demonstrates that $`\overline{S(k)}`$ exhibits negligible system size effects, except for the change in $`k_{min}=2\pi /L`$.
Other measures of the displacements provide us with additional information on the structure of the ground state. The disorder averaged extremal displacement difference, $`\mathrm{\Delta }H=\overline{u_{max}-u_{min}}`$, Fig. 3, was found to grow logarithmically with system size for both lattice types. We computed least squares fits of the form $`\mathrm{\Delta }H=\stackrel{~}{a}+\stackrel{~}{b}\mathrm{ln}(L)`$ to obtain $`\stackrel{~}{b}_{ZHC}=0.76(1)`$ and $`\stackrel{~}{b}_{SHC}=0.70(1)`$. The coefficients of the logarithmic term differ by less than $`10`$% for the two lattices studied here, suggesting that this measure of the system is weakly, if at all, dependent on the lattice discretization of the medium. This logarithmic growth is consistent with the following picture of the ground state structure developed by Fisher. At each length scale $`R=b,b^2,b^3,\ldots `$ the displacement undergoes one shift of amount $`\pm a`$. Furthermore, the sign of the displacement shift is random at each scale, leading to the logarithmic growth of the squared width. When traversing from the minimum to the maximum, the signs of the displacements are strongly correlated, leading to a coherent sum and hence to a logarithmic dependence of the extremal differences on the system size.
We have also determined the effect of coarse-graining the displacement variable. The coarse grained displacement is defined as the average of $`u`$,
$$u_R(\stackrel{}{y})=\frac{1}{R^3}\int _{\mathrm{\Omega }_R(\stackrel{}{y})}d^3x\,u(\stackrel{}{x}),$$
(5)
over $`\mathrm{\Omega }_R(\stackrel{}{y})`$, a cube of size R centered at the point $`\stackrel{}{y}`$. We measured the fluctuations in these coarse grained height variables $`|\mathrm{\Delta }u_R|^2=\overline{(u_R(\stackrel{}{y})-u_R(\stackrel{}{y}+\stackrel{}{b}))^2}`$, with $`\stackrel{}{b}`$ the vector between the centers of the cubes which touch at one corner. This spatial averaging procedure is similar to a real space renormalization transformation. Villain and Fernandez have explicitly carried out a real space renormalization calculation for a 3-dimensional elastic medium with cubic symmetry. Their calculations indicate that $`|\mathrm{\Delta }u_R|^2`$ has a finite limit as $`R,L\to \infty `$. We have directly measured $`|\mathrm{\Delta }u_R|^2`$ for the ZHC lattice (Fig. 4). The coarse grained height fluctuations are related to the structure factor by:
$$|\mathrm{\Delta }u_R|^2=\int _{BZ}\frac{d^3k}{(2\pi )^3}|G(\stackrel{}{k})|^2e^{i\stackrel{}{k}\cdot \stackrel{}{b}}S(\stackrel{}{k})$$
(6)
with $`G(\stackrel{}{k})=\int _{\mathrm{\Omega }_R}d^3x\,e^{i\stackrel{}{k}\cdot \stackrel{}{x}}`$. In the infinite volume limit $`|\mathrm{\Delta }u_R|^2`$ depends only on the leading order behavior, $`Ak^{-3}`$, of the structure factor and the limit of the ratio $`R/L`$. We have numerically evaluated the right hand side of Eq. (6) in this limit to obtain the infinite size limit presented in Fig. 4. For finite sized systems the sub-leading order corrections to $`S(k)`$ contribute to the coarse grained height fluctuations. These terms lead to the divergence of $`|\mathrm{\Delta }u_R|^2`$ as $`R/L\to 0`$. The dominant corrections arise from the $`Bk^{-2}`$ corrections to $`S(k)`$ seen in the structure factor data, but decay as $`L^{-1}`$. The data are consistent with convergence to a finite limit as $`L\to \infty `$.
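The block averaging of Eq. (5) and the corner-to-corner fluctuation are straightforward to implement; a minimal sketch, assuming $`L`$ is a multiple of $`R`$ and periodic boundary conditions:

```python
import numpy as np

def coarse_grain(u, R):
    """Block-average u over cubes of side R, as in Eq. (5)."""
    L = u.shape[0]
    return u.reshape(L // R, R, L // R, R, L // R, R).mean(axis=(1, 3, 5))

def corner_fluctuation(u, R):
    """|Delta u_R|^2 between blocks whose centers differ by (R, R, R)."""
    uR = coarse_grain(u, R)
    diff = uR - np.roll(uR, shift=(-1, -1, -1), axis=(0, 1, 2))
    return np.mean(diff**2)
```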
For the ZHC lattice, we have also investigated the behavior of the system subject to the twisted boundary conditions defined previously. For each realization of disorder we have compared the ground state energy with periodic boundary conditions, $`E_p`$, to that obtained with twisted boundary conditions along one of the lattice directions, $`E_t`$. This allows us to investigate the properties of the excitations induced by the change in boundary conditions. We identify the energy difference $`E_{DW}=E_t-E_p`$ with the energy of the domain wall. The domain wall is identified by the set of bonds that the interface intersects in one set of boundary conditions but not the other. Even though the domain wall could be identified by examining the values of $`u_p(\stackrel{}{x})`$ and $`u_t(\stackrel{}{x})`$, it is more efficient to identify the domain wall by examining the sets of cut bonds for each boundary condition. The cut bonds are projected along the $`\widehat{u}`$ direction for each boundary condition. The domain wall is then the symmetric difference between these two sets of bonds. Without directly simulating a dislocation loop itself, we are able to investigate the properties of the domain wall induced by the introduction of a single loop encircling the system.
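The domain wall extraction described above reduces to a symmetric set difference. A minimal sketch, assuming each projected cut bond is represented by a hashable identifier:

```python
def domain_wall(cut_bonds_periodic, cut_bonds_twisted):
    """Bonds cut under exactly one of the two boundary conditions.

    Both arguments are iterables of hashable bond identifiers (for instance,
    tuples of projected endpoint coordinates); the returned set is the wall.
    """
    return set(cut_bonds_periodic) ^ set(cut_bonds_twisted)

# The wall energy is measured separately as E_DW = E_t - E_p from the two
# ground-state energies; the wall area is simply len(domain_wall(...)).
```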
The energetics of these domain walls dominate the random part of the energy cost of introducing a dislocation loop into an elastic medium. The mean energy difference $`\overline{E}_{DW}`$ grows linearly with the (linear) size $`L`$ of the domain wall (Fig. 5), a result consistent with the scaling of the elastic contribution to the energy and the statistical symmetry of the disorder potential of the continuum model. We have also analyzed the variance $`\sigma ^2(E_{DW})`$ of the distribution of domain wall energies (Fig. 6). No single power law fit of the form $`\sigma ^2(E_{DW})\sim L^{2\theta }`$ can adequately fit our data over the range of sizes simulated. However the data are well described by the empirically determined form $`\sigma ^2(E_{DW})=0.031L^2+0.24L`$ displayed as the solid line in Fig. 6. The finite size correction leads to a size dependent effective value of the exponent, $`\theta _{eff}`$, characterizing the scaling of the sample to sample fluctuations in the energy of low energy excitations. Depending on the lower limit imposed on the fit, single power-law fits $`\sigma ^2(E_{DW})\sim L^{2\theta }`$ give $`0.85<\theta _{eff}<0.92`$ over this range of sizes. The distribution of domain wall energies depends on $`\overline{E}_{DW}`$ and $`\sigma (E_{DW})`$ only through the combination $`ϵ=(E_{DW}-\overline{E_{DW}})/\sigma (E_{DW})`$, as can be seen in Fig. 7. This collapsed distribution has more highly weighted tails than a unit normal distribution. The frequency with which negative energy domain walls were observed in the simulations is significantly higher than what one expects for a Gaussian distribution with the measured mean and standard deviation at each system size (Fig. 8). This behavior also occurs for high energy domain walls; to within the statistical uncertainty the distribution is symmetric about its mean value. Our data are consistent with both $`\overline{E}_{DW}`$ and $`\sigma (E_{DW})`$ increasing linearly with $`L`$ for large systems. Fisher’s argument assumed this behavior of the domain wall energetics in his domain wall renormalization calculation indicating the marginal stability of the Bragg glass phase.
Because of the balance between the elastic energy scale and the scale of the energy fluctuations due to the disorder potential, the domain walls are highly convoluted and expected to have a fractal dimension $`d_f`$ between 2 and 3. Similar to the approach used for a 2-dimensional elastic medium, we have measured the size of the domain wall by counting the number of bonds $`N_b`$ in the wall. The data for the area of the wall as a function of system size, averaged over disorder, can be fit by a simple power law, $`N_b\sim L^{d_f}`$, with the fractal dimension of the domain wall $`d_f\approx 2.60`$, shown in Fig. 9(a). This fit has been taken after excluding the smallest system size $`L=8`$. However, the form of the residuals, see Fig. 9(b), indicate the presence of sub-leading order corrections, suggesting that this may underestimate the value of $`d_f`$. In order to verify this, we have also fit the whole range of data using the form $`N_b=aL^{d_f}+bL^2`$. The second term arises from the effectively two dimensional nature of the domain walls at small length scales. This three parameter fit gives $`d_f=2.65`$; thus we conclude that our systematic errors in estimating the fractal dimension are approximately $`0.05`$.
In order to investigate other low energy excitations of this system, we have examined the sensitivity of the ground state to perturbations in the disorder potential. Similar studies have been carried out for disordered systems such as spin glasses, $`(1+1)`$-dimensional directed polymers in random media, and 2-dimensional elastic media. In all of these disorder dominated systems, perturbations of relative strength $`\delta `$ in the disorder potential introduce a length scale $`L^{*}\sim \delta ^{-1/\zeta }`$, $`\zeta =d_f/2-\theta `$, beyond which the ground state becomes uncorrelated with the reference ground state. The exponent $`\zeta `$ characterizing the sensitivity of the ground state is referred to as the chaos exponent. These studies have been done by comparing the ground state for two correlated choices of the disorder potential. In our simulations, we have obtained the ground states for the two pinning potentials $`V^\pm (u(\stackrel{}{x}),\stackrel{}{x})=b(u(\stackrel{}{x}),\stackrel{}{x})\pm d(u(\stackrel{}{x}),\stackrel{}{x})`$, with both terms periodic in the $`\widehat{u}`$ direction. The constant part of the potential $`b(u(\stackrel{}{x}),\stackrel{}{x})`$ was an integer chosen uniformly from $`[1000,2000]`$. The term which generates differences between the realizations, $`d(u(\stackrel{}{x}),\stackrel{}{x})`$, was chosen uniformly from $`[-d_{max}/2,d_{max}/2]`$. The parameter $`\delta =d_{max}/2000`$ characterizes the relative strength of the perturbations. This prescription was chosen to ensure that for a fixed value of $`\delta `$ the distribution of the bond weights is the same for both realizations of disorder. Our simulations include values of $`\delta `$ ranging from 0.01 to 0.75, with at least $`500`$ independent realizations of disorder at each $`\delta `$ and $`L`$. By performing scaling analysis of both the energetic and structural correlations between the ground states for these realizations of disorder we can extract the value of the chaos exponent.
We have found that both the energetic and structural correlations are governed by the same length scale. First we calculated the domain wall energy $`E_{DW}^\pm `$ for the two disorder realizations $`V^\pm `$, which can then be used to compute the domain wall energy correlation function
$$G=\frac{\overline{(E_{DW}^{+}-\overline{E_{DW}^{+}})(E_{DW}^{-}-\overline{E_{DW}^{-}})}}{\sigma (E_{DW}^{+})\sigma (E_{DW}^{-})}.$$
(7)
The simple scaling form $`G=f(\delta L^\zeta )`$ describes our data well (Fig. 10). We found reasonable data collapse for $`\zeta =0.38(4)`$, taking into account the statistical errors. The value of the chaos exponent can be related to the domain wall fractal dimension and the energy fluctuation exponent by a simple scaling argument as in the case of spin glasses. The perturbations introduce a random change in the energy of the domain wall of order $`\delta L^{d_f/2}`$ because the perturbations are uncorrelated with the location of the domain wall. The typical fluctuations in the domain wall energy scale as $`L^\theta `$. When these energy scales become comparable, at a length scale $`L^{*}\sim \delta ^{-1/\zeta }`$, the domain wall energies become uncorrelated. When using the effective value of the energy fluctuation exponent, $`\theta _{eff}\approx 0.9`$, this scaling relation holds to within 5% accuracy.
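Given the paired domain-wall energies for the two potentials $`V^\pm `$, the correlation function of Eq. (7) and the scaling collapse can be assembled as follows (the array names and data layout are assumptions):

```python
import numpy as np

def energy_correlation(E_plus, E_minus):
    """Normalized domain-wall energy correlation G of Eq. (7)."""
    dp = E_plus - E_plus.mean()
    dm = E_minus - E_minus.mean()
    return np.mean(dp * dm) / (E_plus.std() * E_minus.std())

def collapse(data, zeta=0.385):
    """Return (delta * L**zeta, G) pairs for a scaling plot G = f(delta L^zeta).

    `data` is an iterable of (L, delta, E_plus, E_minus), one entry per
    combination of system size and perturbation strength.
    """
    x, y = [], []
    for L, delta, Ep, Em in data:
        x.append(delta * L**zeta)
        y.append(energy_correlation(Ep, Em))
    return np.array(x), np.array(y)
```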
We can understand the structural deformations induced by the bond perturbations by reasoning similar to that for domain wall correlations. Here we consider the differences in the ground states of the system (with periodic boundary conditions) due to the changes in the pinning potential. Again, the ratio of the energy change due to the random perturbations to that of the fluctuations in the energy landscape, $`\delta L^{d_f/2}/L^\theta `$, determines the behavior of the system. In response to Zhang’s simulation of directed polymers in random media, Feigel’man and Vinokur had argued that the probability of a positional excitation grows linearly with $`\delta L^\zeta `$ for small values of the perturbation strength. Following their argument, we expect the probability of a change in the displacement variable at a single location to grow linearly with $`\delta L^\zeta `$. Unlike the case for the directed polymer, the magnitude of the differences is bounded for periodic pinning since excitations with $`|\mathrm{\Delta }u|>a`$ probe the same energy landscape as those with $`|\mathrm{\Delta }u|<a`$. Thus, the spatially and disorder averaged mean squared displacement difference $`\chi =\overline{<(u^+(\stackrel{}{x})-u^{-}(\stackrel{}{x}))^2>}\sim \delta L^\zeta `$ for small perturbations. For large values of $`\delta `$ the ground states are completely decorrelated, and we expect that $`\chi \sim \mathrm{ln}(L)`$. We have found that the data for $`\chi `$ collapse according to the scaling form $`\chi =f(\delta L^\zeta )`$, with $`\zeta =0.39(2)`$ (Fig. 11). Our data are consistent with the results obtained by the scaling argument in both limiting cases. Before computing $`\chi `$, we made the transformation $`u^+\to u^++na`$, where $`n`$ is the integer which maximizes the number of locations at which $`u^+(\stackrel{}{x})-u^{-}(\stackrel{}{x})=0`$ for each realization. This transformation minimizes $`\chi `$ over the discrete set of global translations which leave the energy of the medium invariant. Our scaling ansatz is significantly different from that proposed for the 2-dimensional elastic medium, $`\chi =\delta ^{1/\zeta }f(L)`$. This form cannot adequately collapse our data over the range of parameters we have simulated.
Implicit in this discussion is the assumption that both the perturbation induced deformations and the boundary condition induced domain walls are characterized by the same fractal dimension. Even for relatively small perturbations in the disorder, the deformations are typically composed of a set of disconnected clusters. We have directly measured these clusters’ fractal dimension. The size of a cluster $`R`$ is defined as the average of the sides of the bounding box which encloses the cluster and is measured in units of the lattice spacing. Our algorithm identifies the sets of nodes on which $`u^+(\stackrel{}{x})-u^{-}(\stackrel{}{x})\ne 0`$ after performing the translation which minimizes $`\chi `$. The surface area of a cluster $`s`$ is the number of nodes with neighbors not in the cluster. We have collapsed the data for $`\delta =0.05`$ using the finite size scaling form $`s=L^{d_f}f(R/L)`$ (Fig. 12). We expect that the scaling function $`f(R/L)`$ should have the form $`f(R/L)\sim (R/L)^{d_f}`$ in the region $`R/L\ll 1`$, $`R\gg 1`$, but this regime is not clearly visible in Fig. 12 due to lattice and finite size effects. Despite this, the best collapse of the data, for $`0.5<R/L<1.0`$, provides an estimate $`d_f=2.65(10)`$ which is consistent with the estimate for the domain wall fractal dimension. Similar analysis at other values of $`\delta `$ provides equivalent values for the cluster fractal dimension. The anomalous data at $`R/L\approx 1`$ arise from the rare clusters which span the system in all directions. This scaling also breaks down for clusters with $`R<8`$, where lattice effects make the surface effectively 2-dimensional (Fig. 13). The equality of the fractal dimension of the boundary condition induced domain walls and bond perturbation induced deformations can be justified by a simple argument. For small values of the disorder perturbation parameter, the cluster boundaries lie in regions where there is a small energy cost to deforming the medium. If one considers only a small volume containing a portion of the cluster boundary, the structural difference is the same as would be caused by the change from periodic to twisted boundary conditions on that volume. Thus both the deformations induced by small changes in the disorder potential and those caused by a change in boundary conditions should be characterized by the same fractal dimension. Despite the fact that the scaling regime is inaccessible due to the limits on the size of systems studied, our data are consistent with the conclusion that the fractal dimension of the clusters which compose the deformations is the same as that of the boundary condition induced domain walls.
We have performed extensive numerical simulations of a model 3-dimensional elastic medium with scalar discrete displacements subject to quenched disorder. Our results for the structure in the Bragg glass phase indicate that the structure factor has divergences of the form $`S(k)\simeq Ak^{-3}`$. Our results for the coefficient $`A`$ fall between the approximate values $`A=1.0`$ and $`A=1.1`$ obtained via a renormalization group and a replica approach . The observed energetics of the boundary condition induced domain walls indirectly support arguments for the stability of the Bragg glass phase. These domain walls correspond to the elastic deformations due to the introduction of a single dislocation loop winding around the system. Our data are consistent with the hypothesis that the mean energy and the energy fluctuations of a section of domain wall both scale linearly with the linear size of the section for large sizes. This balance is a crucial element of the analysis carried out by Fisher indicating the marginal stability of the Bragg glass phase to the introduction of dislocations . We are also able to measure the spatial structure of these domain walls and obtain their fractal dimension $`d_f=2.60(5)`$. We have observed that random changes in the disorder potential of relative strength $`\delta `$ decorrelate the ground state on length scales larger than $`L^{*}\sim \delta ^{-1/\zeta }`$ with $`\zeta =0.385(40)`$. The properties of the domain walls and this sensitivity to disorder perturbations can be related to each other by the scaling relation $`\zeta =d_f/2-\theta `$, where $`\theta `$ characterizes the fluctuations in the low energy excitations.
# Fitting Formulae for Cross Sections of Tidal Capture Binary Formation
## 1 INTRODUCTION
A relatively large number of X-ray sources in globular clusters was first pointed out by Katz (K75 (1975)). These objects are thought to be close binary systems. As a mechanism for the formation of these binaries, the tidal capture of a normal star by a degenerate star was suggested by Clark (C75 (1975)) and Fabian et al. (FPR75 (1975)). In addition, tidally captured binaries can play an important role in globular cluster dynamics (e.g., Ostriker O85 (1985), Kim et al. KLG98 (1998)).
A precise mechanism for the dissipational tidal capture process was introduced by Fabian et al. (FPR75 (1975)), and a detailed computation for the amount of energy deposited into oscillatory modes during a close encounter was performed by Press & Teukolsky (PT77 (1977)) for an $`n=3`$ polytropic model. This work was extended by various authors to other polytropes (Lee & Ostriker LO86 (1986); abbreviated as LO hereafter, Ray et al. RKA87 (1987)), and to realistic stellar models (McMillan et al. MMT87 (1987)). Some further considerations to the subsequent dynamical evolution of tidal capture binaries are presented by Kochanek (K92 (1992)) and Mardling (1995a , 1995b ). Many numerical simulations for close stellar encounters, whose products include tidal capture binaries, have been performed (e.g. Benz & Hills BH87 (1987); Davies et al. DBH91 (1991)).
Interest in the tidal capture process has been concentrated mainly on old stellar systems such as globular clusters and galactic nuclei, whose densities are known to be very high. Therefore, the cross sections were calculated only for the cases applicable to those systems. The range of stellar masses in these systems is considerably smaller than that in young stellar systems.
Recent advances in infrared astronomy have led to the discovery of compact young clusters near the Galactic Center (Okuda et al. Oe90 (1990); Nagata et al. Ne95 (1995)), which have also been observed in detail using the HST (Figer et al. Fe99 (1999)). These clusters are found to be as dense as some globular clusters. The sharp image quality of the HST has also made it possible to find star clusters, whose ages can be deduced with the population synthesis technique, in galaxies at considerable distances (e.g., Östlin et al. Oe98 (1998)).
The evolution of young clusters near the Galactic center region becomes an important issue in understanding the evolutionary history of the bulge of our galaxy, because these clusters would have rather short evaporation times. The main reason for fast dissolution is the strong tidal field environment. The dynamical process is further influenced by close encounters between stars, including tidal captures. The young clusters found in external galaxies are also of great interest in view of the general evolution of the galaxies. Again, the dynamical evolution and evaporation process are the key in these problems. Two-body relaxation drives the dynamical evolution and close interactions between two stars modify the course of evolution significantly (see, for example, Meylan & Heggie MH97 (1997)).
The effect of tidal interaction during stellar encounter can be incorporated in different ways depending on the methods of the study of dynamical evolution of stellar systems. In direct N-body calculations, one should know exactly when the tidal capture takes place. In statistical methods such as Fokker-Planck models, the capture cross section as a function of other kinematic parameters (usually velocity dispersion) is necessary.
In the present Research Note, we present convenient formulae for the cross sections for tidal capture between two stars as a function of the relative velocity at infinity. These formulae can be easily incorporated into statistical methods such as Fokker-Planck models or gaseous models for studies of the dynamics of star clusters. They will also be useful in making estimates of the interaction rates for given cluster parameters.
## 2 CROSS SECTIONS
Previous studies presented tidal cross sections for limited ranges of stellar parameters. For example, the mass ratios between encountering stars considered in LO ranged only from 1 to 8 for normal-normal star pairs, and 0.5 to 1.5 for normal-degenerate pairs. Also, $`R_1/M_1=R_2/M_2`$ was assumed and only encounters between the same polytropes were considered in LO, which may not be appropriate for encounters between stars with large mass ratios. While an $`n=1.5`$ polytrope may well represent the structure of low-mass stars, the outer structure of intermediate to massive stars ($`M\gtrsim 1M_{\odot }`$) may be better represented by an $`n=3`$ polytrope. Thus a consideration of encounters between $`n=1.5`$ and 3 polytropes is necessary for encounters with a large mass difference. Here we extend the work of LO to obtain the cross sections for i) encounters between stars with a very large mass ratio, ii) encounters with a mass-radius relation that deviates from the conventional $`R\propto M`$ relation (i.e., $`[R_2/M_2]/[R_1/M_1]`$ values other than 1), and iii) encounters between $`n=1.5`$ and 3 polytropes, and present the results in the form of convenient fitting formulae for cross sections and critical periastron distances.
The amount of orbital energy deposited into the stellar envelope is a very steep function of the distance between the stars. The relative orbit can be described as a function of energy and angular momentum. However, the relative orbit near the periastron passage can be approximated by a parabolic orbit, which can be specified by only one parameter: the periastron distance $`R_{\mathrm{min}}`$. There exists a critical $`R_{\mathrm{min}}`$ below which the tidal interaction transforms the initially unbound system into a bound one, for a given pair of masses and radii and a given relative velocity at infinity, $`v_{\infty }`$.
Assuming a parabolic relative orbit for the encounter, Press & Teukolsky (PT77 (1977)) expressed the deposition of kinetic energy into stellar oscillations of a star with $`M_1`$ and $`R_1`$ due to a perturbing star with $`M_2`$ and $`R_2`$ by
$$\mathrm{\Delta }E_1=\frac{GM_1^2}{R_1}\left(\frac{M_2}{M_1}\right)^2\sum _{l=2}^{\infty }\left(\frac{R_1}{R_{\mathrm{min}}}\right)^{2l+2}T_l(\eta _1),$$
(1)
where $`l`$ is the spherical harmonic index, $`R_{\mathrm{min}}`$ the apocenter radius, and the contribution of the summation behind $`l=3`$ to $`\mathrm{\Delta }E`$ is negligible. The dimensionless parameter $`\eta `$ is defined by
$$\eta _1\equiv \left(\frac{M_1}{M_1+M_2}\right)^{1/2}\left(\frac{R_{\mathrm{min}}}{R_1}\right)^{1.5}.$$
(2)
Expression for $`\mathrm{\Delta }E_2`$ may be obtained by exchanging subscripts 1 and 2 in Eqs. (1) and (2). For $`T_2(\eta )`$ and $`T_3(\eta )`$ values, we use Portegies Zwart & Meinen’s (PZM93 (1993)) fifth order fitting polynomials to the numerical calculations by LO.
Tidal capture takes place when the deposition of kinetic energy during the encounter, $`\mathrm{\Delta }E=\mathrm{\Delta }E_1+\mathrm{\Delta }E_2`$, is larger than the initially positive orbital energy $`E_{\mathrm{orb}}=\frac{1}{2}\mu v_{\infty }^2`$, where $`\mu `$ is the reduced mass of the $`M_1`$ and $`M_2`$ pair. The critical $`R_{\mathrm{min}}`$ below which the tidal energy exceeds the orbital energy depends on $`v_{\infty }`$ via $`\eta `$.
After obtaining the critical $`R_{\mathrm{min}}`$ by requiring $`\mathrm{\Delta }E=\frac{1}{2}\mu v_{\infty }^2`$, we can compute the critical impact parameter $`R_0`$ that leads to the critical $`R_{\mathrm{min}}`$ (assuming the orbit does not change in the presence of tidal interaction), as a function of $`R_{\mathrm{min}}`$ and $`v_{\infty }`$:
$$R_0(v_{\infty })=R_{\mathrm{min}}\left(1+\frac{2GM_\mathrm{T}}{R_{\mathrm{min}}v_{\infty }^2}\right)^{1/2},$$
(3)
where $`M_T=M_1+M_2`$. The capture cross section, $`\sigma (v_{\infty })`$, is simply the area of the target, $`\pi R_0^2`$. The velocity dependence of the cross section is expected to be close to $`v_{\infty }^{-2}`$, since the second term in the parentheses of Eq. (3) is much greater than 1 and $`R_{\mathrm{min}}`$ is only weakly dependent on $`v_{\infty }`$.
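The numerical procedure of this section can be sketched as follows. This is an illustrative sketch only: the coupling functions $`T_2(\eta )`$ and $`T_3(\eta )`$ must be supplied by the user (e.g., the PZM93 polynomial fits, whose coefficients are not reproduced here), the same polytropic $`T_l`$ is used for both stars, and the root-finding bracket is an arbitrary assumption.

```python
import numpy as np
from scipy.optimize import brentq

G = 6.674e-8  # cgs

def delta_E(M1, R1, M2, Rmin, T2, T3):
    """Tidal energy deposited in star 1 by a perturber of mass M2, eqs. (1)-(2)."""
    eta = (M1 / (M1 + M2)) ** 0.5 * (Rmin / R1) ** 1.5
    return G * M1 ** 2 / R1 * (M2 / M1) ** 2 * (
        (R1 / Rmin) ** 6 * T2(eta) + (R1 / Rmin) ** 8 * T3(eta))

def capture_cross_section(M1, R1, M2, R2, v_inf, T2, T3, degenerate2=True):
    """Critical periastron distance and capture cross section (cgs units)."""
    mu = M1 * M2 / (M1 + M2)

    def f(Rmin):                      # total tidal energy minus orbital energy
        dE = delta_E(M1, R1, M2, Rmin, T2, T3)
        if not degenerate2:           # normal-normal: add the tide on star 2
            dE += delta_E(M2, R2, M1, Rmin, T2, T3)
        return dE - 0.5 * mu * v_inf ** 2

    # the bracket below is an assumption; it must enclose the sign change
    Rmin_crit = brentq(f, 1.01 * R1, 100.0 * R1)
    R0 = Rmin_crit * (1.0 + 2.0 * G * (M1 + M2) / (Rmin_crit * v_inf ** 2)) ** 0.5
    return Rmin_crit, np.pi * R0 ** 2   # eq. (3)
```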
## 3 FITTING FORMULAE
The cross section depends on $`v_{\infty }`$ in a somewhat complex way. However, over a limited range of $`v_{\infty }`$, it can be approximated as a power law. Following LO, we express $`\sigma `$ in terms of $`R_1`$ and the escape velocity at the surface of star 1, $`v_1\equiv (2GM_1/R_1)^{1/2}`$:
$$\sigma =a\left(\frac{v_{\infty }}{v_1}\right)^{-\beta }R_1^2.$$
(4)
We obtain the constants $`a`$ and $`\beta `$ by fitting the power law curve to $`\sigma (v_{\infty })`$ at $`v_{\infty }=10\mathrm{km}\mathrm{s}^{-1}`$, which is a typical velocity in globular clusters and compact young clusters. Galactic nuclei are also thought to be dense enough for tidal interactions, but there direct collisions are more probable because of the high velocity dispersion (Lee & Nelson LN88 (1988)).
We assumed $`R_1/M_1=R_{\odot }/M_{\odot }`$ for the calculation of $`v_1`$, but the values of $`a`$ and $`\beta `$ are nearly insensitive to the choice of $`R_1/M_1`$ because the above power law holds for a wide range of $`v_{\infty }`$ near $`10\mathrm{km}\mathrm{s}^{-1}`$.
### 3.1 Normal-Degenerate Encounters
For encounters between a normal and a degenerate star, we obtain $`a`$ and $`\beta `$ for $`0.01\le M_2/M_1\le 100`$. In this subsection, subscript 1 is for the normal star and 2 for the degenerate star. While $`a`$ is a steep function of $`M_2/M_1`$ (see Fig. 1), $`\beta `$ ranges only from 2.24 ($`M_2/M_1=0.01`$) to 2.13 ($`M_2/M_1=100`$) for $`n=1.5`$, and from 2.24 to 2.19 for $`n=3`$. Thus $`\beta =2.2`$ would be a good choice. We find that $`a`$ is well fit by a sum of two power law curves:
$$\begin{array}{ll}a_{\mathrm{ND}}=6.60\left(\frac{M_2}{M_1}\right)^{0.242}+5.06\left(\frac{M_2}{M_1}\right)^{1.33}& \mathrm{for}\ n=1.5;\\ a_{\mathrm{ND}}=3.66\left(\frac{M_2}{M_1}\right)^{0.200}+2.94\left(\frac{M_2}{M_1}\right)^{1.32}& \mathrm{for}\ n=3,\end{array}$$
(5)
where the subscript ND denotes normal-degenerate star encounters. Eq. (5) fits our calculated $`a_{\mathrm{ND}}`$ values with a relative error ($`|`$fit-data$`|`$/data) better than 4 %.
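For reference, a direct transcription of Eqs. (4) and (5) reads as follows, with the velocity dependence written explicitly as $`(v_{\infty }/v_1)^{-\beta }`$ and $`\beta \simeq 2.2`$; this is a convenience sketch, not code distributed with this Research Note.

```python
def a_ND(q, n=1.5):
    """Coefficient a of eq. (5) for mass ratio q = M2/M1 (star 2 degenerate)."""
    if n == 1.5:
        return 6.60 * q ** 0.242 + 5.06 * q ** 1.33
    return 3.66 * q ** 0.200 + 2.94 * q ** 1.32       # n = 3

def sigma_ND(q, v_inf, v1, R1, n=1.5, beta=2.2):
    """Tidal capture cross section, eq. (4), in the units of R1**2."""
    return a_ND(q, n) * (v_inf / v1) ** (-beta) * R1 ** 2
```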
### 3.2 Normal-Normal Encounters
For encounters between normal stars, we consider $`M_2/M_1`$ of 1 through 100. In this subsection, subscript 1 is for the less massive star. Encounters between normal stars involve two more parameters, $`R_2`$ and $`M_2`$, but only one additional parameter, $`\gamma \equiv \mathrm{log}(R_2/R_1)/\mathrm{log}(M_2/M_1)`$, is needed to describe the second normal star. For $`0.5\le \gamma \le 1`$, it is found that $`\beta `$ ranges from 2.12 ($`M_2/M_1=1`$) to 2.24 ($`M_2/M_1=100`$) for $`n=1.5:1.5`$ and $`n=1.5:3`$, and from 2.18 to 2.24 for $`n=3`$. Again, $`\beta =2.2`$ would be a good approximation. Fig. 2 shows $`a`$ for three different $`\gamma `$ values. We find that a simple modification to the form of Eq. (5) can fit $`a`$ as a function of both $`M_2/M_1`$ and $`\gamma `$:
$$\begin{array}{ll}a_{\mathrm{NN}}=6.05\left(\frac{M_2}{M_1}\right)^{0.835\mathrm{ln}\gamma +0.468}+6.50\left(\frac{M_2}{M_1}\right)^{0.563\mathrm{ln}\gamma +1.75}& \mathrm{for}\ n=1.5:1.5;\\ a_{\mathrm{NN}}=3.50\left(\frac{M_2}{M_1}\right)^{0.814\mathrm{ln}\gamma +0.551}+3.53\left(\frac{M_2}{M_1}\right)^{0.598\mathrm{ln}\gamma +1.80}& \mathrm{for}\ n=3:3;\\ a_{\mathrm{NN}}=7.98\left(\frac{M_2}{M_1}\right)^{1.23\mathrm{ln}\gamma -0.232}+3.57\left(\frac{M_2}{M_1}\right)^{0.625\mathrm{ln}\gamma +1.81}& \mathrm{for}\ n=1.5:3,\end{array}$$
(6)
where the subscript NN denotes normal-normal star encounters, and star 1 has $`n=1.5`$ in the case of $`n`$=1.5:3 encounters. Eq. (6) fits our calculated $`a_{\mathrm{NN}}`$ values with a relative error better than 10 %.
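Similarly, Eq. (6) can be transcribed directly; the $`-0.232`$ exponent follows the reconstruction of Eq. (6) above, and the sketch is again only illustrative.

```python
import numpy as np

_NN = {                      # (c1, p1, q1, c2, p2, q2) for each polytrope pair
    "1.5:1.5": (6.05, 0.835, 0.468, 6.50, 0.563, 1.75),
    "3:3":     (3.50, 0.814, 0.551, 3.53, 0.598, 1.80),
    "1.5:3":   (7.98, 1.23, -0.232, 3.57, 0.625, 1.81),
}

def a_NN(q, gamma, pair="1.5:1.5"):
    """Coefficient a of eq. (6); q = M2/M1 >= 1, gamma = log(R2/R1)/log(M2/M1)."""
    c1, p1, q1, c2, p2, q2 = _NN[pair]
    lg = np.log(gamma)
    return c1 * q ** (p1 * lg + q1) + c2 * q ** (p2 * lg + q2)

def sigma_NN(q, gamma, v_inf, v1, R1, pair="1.5:1.5", beta=2.2):
    return a_NN(q, gamma, pair) * (v_inf / v1) ** (-beta) * R1 ** 2
```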
## 4 DISCUSSION
We expressed $`\sigma `$ in terms of $`v_1`$ and $`R_1`$ following LO in Sect. 3, but when $`\sigma _{\mathrm{NN}}`$ is expressed in terms of $`v_2`$ and $`R_2`$ such that
$$\sigma _{\mathrm{NN}}=a_{\mathrm{NN}}^{\prime }\left(\frac{v_{\infty }}{v_2}\right)^{-\beta }R_2^2,$$
(7)
the $`\gamma `$ dependence of $`a_{\mathrm{NN}}^{\prime }`$ becomes smaller than that of $`a_{\mathrm{NN}}`$. Also note that $`a_{\mathrm{NN}}^{\prime }(M_2/M_1)`$ for $`\gamma =1`$ is nearly the same as $`a_{\mathrm{ND}}(M_1/M_2)`$ with only a slight difference near $`M_2/M_1\simeq 1`$.
The critical $`R_{\mathrm{min}}`$ for tidal capture is also frequently useful, for example in studies based on N-body methods. One finds the approximate critical $`R_{\mathrm{min}}`$ to be
$$R_{\mathrm{min}}\simeq \frac{a}{\pi }\left(\frac{R_1}{1+M_2/M_1}\right)\left(\frac{v_{\infty }}{v_1}\right)^{2-\beta }$$
(8)
for the velocities that satisfy
$$\left(\frac{v_{\infty }}{v_1}\right)^{\beta -4}\gg \frac{4a}{\pi }\left(\frac{1}{1+M_2/M_1}\right)^2,$$
(9)
where the right-hand-side does not vary much from 10.
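A small helper corresponding to Eqs. (8) and (9) might look as follows; note that the form of the validity condition, in particular the direction of the inequality in Eq. (9), is taken from the reconstruction above.

```python
import numpy as np

def approx_Rmin(q, v_inf, v1, R1, a, beta=2.2):
    """Approximate critical periastron distance, eq. (8), for q = M2/M1.

    Valid only when gravitational focusing dominates, i.e. when the
    left-hand side of eq. (9) greatly exceeds the right-hand side."""
    lhs = (v_inf / v1) ** (beta - 4.0)
    rhs = (4.0 * a / np.pi) / (1.0 + q) ** 2
    if lhs < 10.0 * rhs:                       # crude check of eq. (9)
        raise ValueError("eq. (9) is not well satisfied; use eq. (3) instead")
    return (a / np.pi) * R1 / (1.0 + q) * (v_inf / v1) ** (2.0 - beta)
```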
Heating by tidal capture binaries is incorporated in Fokker-Planck models by calculating $`\langle \sigma v\rangle `$, where the brackets indicate an average over velocity. With a Maxwellian velocity distribution for the stars, we obtain $`\langle \sigma v\rangle \propto \langle v^{-1.2}\rangle \approx 1.5v_{\mathrm{rms}}^{-1.2}`$, where $`v_{\mathrm{rms}}`$ is the root-mean-square relative velocity between two stellar mass groups. The $`\sigma `$ presented here also includes encounters that will lead to a merger before the binary encounters a third star. We will not, however, attempt to go into this issue because it is beyond the scope of this Research Note. We find that the maximum velocity, beyond which $`R_0<R_1`$, is significantly larger than 100 $`\mathrm{km}\mathrm{s}^{-1}`$ for our parameter regime.
In this Research Note, we have merely given the cross sections. The subsequent evolution of tidally captured systems is a very important subject, but it is rather difficult to follow. If the energy deposited in the stellar envelopes can be radiated away quickly before the two stars become close, the final orbit of the binary will be a circle whose radius is twice the initial $`R_{\mathrm{min}}`$ (LO). The stellar rotation induced by the tidal interactions during the circularization process can reduce the separation of the circularized orbits. Therefore, the tidal products are usually stellar mergers or tight binaries.
However, there is the possibility of resonant interactions between the tides and the stellar orbits. In such a case, the tidal energy can also be transferred back to the orbital energy. The final product is then a rather wide binary, although it is still a “hard” binary in terms of cluster dynamics (Kochanek K92 (1992), Mardling 1995a , 1995b ). Therefore, tidal capture could produce dynamically and observationally interesting objects in dense stellar environments. The cross sections presented here can be useful in estimating the frequencies of such interactions.
###### Acknowledgements.
This work was supported in part by the International Cooperative Research Project of Korea Research Foundation to Pusan National University, in 1997.
## 1 Introduction
Neutral hydrogen in the intergalactic medium (IGM) gives rise to a forest of Ly$`\alpha `$ absorption lines blueward of the Ly$`\alpha `$ emission line in quasar spectra. Hydrodynamic simulations of structure formation in a universe dominated by cold dark matter and including an ionizing background, show that the low column density ($`N\lesssim 10^{14.5}\mathrm{cm}^{-2}`$) Ly$`\alpha `$ forest at redshift $`z\gtrsim 2`$ is produced by a smoothly varying IGM. For the low-density gas responsible for the Ly$`\alpha `$ forest, shock heating is not important and the gas follows a tight temperature-density relation. The competition between photoionization heating and adiabatic cooling results in a power-law ‘equation of state’ $`T=T_0(\rho /\overline{\rho })^{\gamma -1}`$, which depends on cosmology and reionization history.
The smoothly varying IGM gives rise to a fluctuating optical depth in redshift space. Many of the optical depth maxima can be fitted quite accurately with Voigt profiles. The distribution of line widths depends on the initial power spectrum, the peculiar velocity gradients around the density peaks and on the temperature of the IGM. However, there is a lower limit to how narrow the absorption lines can be. Indeed, the optical depth will be smoothed on a scale determined by three processes: thermal broadening, baryon (Jeans) smoothing and possibly instrumental, or in the case of simulations, numerical resolution. The first two depend on the thermal state of the gas. While for high-resolution observations (echelle spectroscopy) the effective smoothing scale is not determined by the instrumental resolution, numerical resolution has in fact been the limiting factor in many simulations (see for a discussion).
Scatter plots of the $`b(N)`$-distribution have been published for many observed QSO spectra . These plots show a clear cutoff at low $`b`$-parameters, which increases slightly with column density. However, this cutoff is not absolute, there are some narrow lines, especially at low column densities. Lu et al. and Kirkman & Tytler use Monte Carlo simulations to show that many of these lines are caused by line blending and noise in the data. Some contamination from unidentified metal lines is also expected. A lower envelope which increases with column density has also been seen in numerical simulations .
In this contribution we shall demonstrate that the cutoff in the $`b(N)`$-distribution is determined by the equation of state of the low-density gas and can therefore be used to measure the equation of state of the IGM. This work will be more fully described and discussed in a forthcoming publication .
## 2 Results
In Fig. 1a we plot the $`b(N)`$-distribution for 800 random absorption lines taken from multicomponent Voigt profile fits of spectra at redshift $`z=3`$, generated from one of our simulated models. A cutoff at low $`b`$-values, which increases with column density, can clearly be seen. As in the observations, there are some very narrow lines, which occur in blends. These unphysically narrow lines make determining the cutoff in an objective manner nontrivial. We developed an iterative procedure for fitting a power-law, $`b=b_0(N/N_0)^{\mathrm{\Gamma }-1}`$, to the $`b(N)`$-cutoff over a certain column density range ($`10^{12.5}\mathrm{cm}^{-2}\le N\le 10^{14.5}\mathrm{cm}^{-2}`$ at $`z=3`$) which is insensitive to these narrow lines.
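The details of the iterative fit are not spelled out here, so the following sketch shows one plausible implementation of a lower-envelope power-law fit; the pivot column density, the rejection criterion and the convergence control are illustrative assumptions and not the actual procedure used for the analysis.

```python
import numpy as np

def fit_cutoff(logN, logb, logN0=13.6, n_iter=20):
    """Fit log b = log b0 + (Gamma-1)*(logN - logN0) to the lower envelope.

    Strategy (one possible choice): fit all points, then iteratively refit
    using only points lying below the current fit plus one mean absolute
    deviation, so that the fit settles onto the narrow-line cut-off while
    ignoring broad lines and blended/noisy narrow lines."""
    x, y = np.asarray(logN) - logN0, np.asarray(logb)
    keep = np.ones(x.size, dtype=bool)
    slope, intercept = np.polyfit(x, y, 1)
    for _ in range(n_iter):
        resid = y - (intercept + slope * x)
        mad = np.mean(np.abs(resid[keep]))
        keep = resid < mad
        if keep.sum() < 10:
            break
        slope, intercept = np.polyfit(x[keep], y[keep], 1)
    return 10.0 ** intercept, slope + 1.0      # b0 [km/s], Gamma
```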
The $`b(N)`$-distribution for a colder ($`\mathrm{log}T_0=3.83`$ vs. $`\mathrm{log}T_0=4.20`$) model is plotted in Fig. 1b. Clearly, the distribution cuts off at lower $`b`$-values. Let us assume that the absence of lines with low $`b`$-values is due to the fact that there is a minimum line width set by the thermal state of the gas through the thermal broadening and/or baryon smoothing scales. Since the temperature of the low-density gas responsible for the Ly$`\alpha `$ forest increases with density, we expect the minimum $`b`$-value to increase with column density, provided that the column density correlates with the density of the absorber.
To see whether this picture is correct, we need to investigate the relation between the Voigt profile parameters $`N`$ and $`b`$, and the density and temperature of the absorbing gas respectively. From Fig. 2, which shows the gas density as a function of column density for the absorption lines plotted in Fig. 1a, it can be seen that these two quantities are tightly correlated.
The temperature is plotted against the $`b`$-parameter in Fig. 3a. The result is a scatter plot with no apparent correlation. This is not surprising since many absorbers will be intrinsically broader than the local thermal broadening scale. In order to test whether the cutoff in the $`b(N)`$-distribution is a consequence of the existence of a minimum line width set by the thermal state of the gas, we need to look for a correlation between the temperature and $`b`$-parameters of the lines near the cutoff. Fig. 3b shows that these lines do indeed display a tight correlation. The dashed line corresponds to the thermal width, $`b=(2k_BT/m_p)^{1/2}`$, where $`m_p`$ is the mass of a proton and $`k_B`$ is the Boltzmann constant. Lines corresponding to density peaks whose width in velocity space is much smaller than the thermal broadening width, have Voigt profiles with this $`b`$-parameter.
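As a quick numerical check of the thermal width quoted above, $`b=(2k_BT/m_p)^{1/2}\approx 12.8\mathrm{km}\mathrm{s}^{-1}`$ for $`T=10^4`$ K:

```python
import numpy as np
k_B, m_p = 1.380649e-16, 1.6726e-24          # erg/K, g
def b_thermal(T):
    return np.sqrt(2.0 * k_B * T / m_p) / 1.0e5   # km/s
print(b_thermal(1.0e4))                      # ~12.8 km/s
```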
Figs. 2 and 3b suggest that the cutoff in the $`b(N)`$-distribution should be strongly correlated with the equation of state of the absorbing gas. The objective is to establish the relations between the cutoff parameters and the equation of state using simulations. These relations turn out to be unaffected by systematics like changes in cosmology (for a fixed equation of state) and can thus be used to measure the equation of state of the IGM using the cutoff in the observed $`b(N)`$-distribution.
The amplitudes of the power-law fits to the cutoff and the equation of state are plotted against each other in the left panel of Fig. 4. The error bars, which indicate the dispersion in the cutoff of sets of 300 lines (typical for $`z=3`$), are small compared to the differences between the models. This means that measuring the cutoff in a single QSO spectrum can provide significant constraints on theoretical models (at $`z=3`$, physically reasonable ranges for the parameters of the equation of state are $`10^{3.0}\mathrm{K}<\mathrm{T}_0<10^{4.5}\mathrm{K}`$ and $`1.2<\gamma <1.7`$ ). The slope of the cutoff, $`\mathrm{\Gamma }1`$, is plotted against $`\gamma `$ in the right panel of Fig. 4. The dispersion in the slope of the cutoff for a fixed equation of state is comparable to the difference between the models. The weak dependence of $`\mathrm{\Gamma }`$ on $`\gamma `$ and the large spread in the measured $`\mathrm{\Gamma }`$ will make it difficult to put tight constraints on the slope of the equation of state.
# SOURCE SIZE LIMITATION FROM VARIABILITIES OF A LENSED QUASAR
## 1 INTRODUCTION
Since Liebes (1964) and Refsdal (1964) reported meaningful aspects of the gravitational lensing phenomenon, many researchers have rushed into the field of gravitational-lensing studies and presented many interesting results. This situation has not changed to this day.
One of the most interesting gravitational-lens phenomena is quasar lensing. This is caused by a lensing galaxy (or galaxies) intervening between the observer and the quasar. In the context of cosmology, it is possible to estimate the Hubble constant from the time delay of the quasar variations between the gravitationally-lensed, split images. The most successful study is by Kundić et al. (1997, hereafter K97). They monitored Q0957+561 for a long time and performed a robust determination of the time delay. From their own result, they evaluated the Hubble constant ($`H_0`$) as $`64_{-13}^{+12}\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ based on the lens model constructed by Grogin and Narayan (1996, hereafter GN).
On the other hand, concerning the structure of quasars, the effect of a finite source size allows us to discriminate between models of the central engine. Recently, Yonehara et al. (1998, 1999) performed realistic simulations of quasar microlensing, and showed that multi-wavelength observations can reveal the structure of the accretion disk believed to be situated at the center of quasars. Furthermore, using a precise astrometric technique, Lewis and Ibata (1998) indicated that it is also possible to probe the structure of quasars from the image-centroid shift caused by microlensing. Observationally, in the case of Q2237+0305, Mediavilla et al. (1998) detected a difference between the extent of the continuum source and that of the emission-line source by two-dimensional spectroscopy, and limited the sizes of these regions. Thus, quasar-lensing phenomena are a useful tool not only for cosmology but also for probing the structure of quasars.
Following these interesting studies, we propose in this Letter a method to estimate the effect of a finite source size on the time delays of the observed quasar variations between the gravitationally-lensed, split images, to judge whether this effect is negligibly small or not, and to limit the overall size of the source of quasar variability. This is important because no such limit has been obtained yet, although the size of each individual variation, “one shot”, has already been constrained to be of order light-days by assuming causality within the individual source of variations.
In section 2, I describe the basic concept of this work, and simply estimate the time delay difference. Next, I present some results of calculation for the case of Q0957+561 in section 3. Finally, section 4 is devoted to discussion.
## 2 BASIC CONCEPT
The basic idea that we wish to present in this Letter is schematically illustrated in figure 1. Suppose that a quasar is macrolensed by lensing objects so that its image is split into two (or more) images. The angular separation between these images is large enough that they can be observed individually, say an apparent angular separation of $`\gtrsim 1\mathrm{arcsec}`$. If we observe such quasar images, we will see the intrinsic variability of the quasar in each image, as in the case of an ordinary, not gravitationally lensed quasar (e.g., recent optical monitoring results are shown in Sirola et al. 1998). Because of the macrolensing, the variations are generally not observed in both images at the same time. There is a time delay between the quasar images, caused by the difference of each light path from the light path without lensing objects, which originates from the gravitational lens effect (e.g., see Schneider, Ehlers, & Falco 1992, hereafter SEF). These facts are nicely demonstrated by K97.
However, previous studies related to the time delay caused by gravitational lensing were not much concerned with the source of the variability, and the source was treated as a point source. This treatment is reasonable if the whole source size is negligibly small compared with the typical scale length over which the time delay changes. Actually, however, we only know that the source of quasar variability is smaller than the limit of the observational resolution, say $`\lesssim 1\mathrm{arcsec}`$ (e.g., in the case of HST observations, $`0.1\mathrm{arcsec}`$), and we do not know whether the whole source size is small or large compared with the scale length over which the time delay changes. Therefore, we should first consider the effect of a finite source size on the expected observed light curves of the quasar images.
Then, if we include such an effect, what do we expect to see? The answer is easily understood from figure 1. For simplicity, I consider only two images (images A and B) of the lensed quasar, and a source that exhibits only two bursts (“burst 1” and “burst 2”, occurring in this order on the source plane) separated by some time interval ($`\mathrm{\Delta }t_{\mathrm{burst}}`$). The origin and separation of the bursts are not specified; we assume that the two bursts are not physically correlated, in other words that they appear randomly. Additionally, we denote the time delay difference between the positions of “burst 1” and “burst 2” on image A as $`\mathrm{\Delta }t_\mathrm{A}`$ and that on image B as $`\mathrm{\Delta }t_\mathrm{B}`$.
In the case of $`\mathrm{\Delta }t_{\mathrm{burst}}\gg |\mathrm{\Delta }t_\mathrm{A}-\mathrm{\Delta }t_\mathrm{B}|`$, the light curves of the two images show apparently very similar features, apart from the time delay between the image centers. Although the shape of the light curves is altered from the intrinsic one by the effect of the finite source size, as depicted in the lower left part of figure 1, we can easily identify the two light curves as being intrinsically the same. Thus, we are able to obtain a robust time delay between the two images.
On the other hand, in the case of $`\mathrm{\Delta }t_{\mathrm{burst}}\lesssim |\mathrm{\Delta }t_\mathrm{A}-\mathrm{\Delta }t_\mathrm{B}|`$, the previous statement no longer holds. In this case, the time interval between the two bursts is significantly modified by the apparently large time delay difference ($`|\mathrm{\Delta }t_\mathrm{A}-\mathrm{\Delta }t_\mathrm{B}|`$). In such a situation, we can no longer conclude that the light curves of the two images have the same origin, even if we include the effect of the time delay for the case of a point source. We may mistakenly attribute the discrepancy to microlensing or something exotic. In other words, there will be no good correlation between the light curves of the two images. This is a serious problem not only for determining the time delay or $`H_0`$, but also for constraining the quasar structure, determining the origin of the variability, and other problems.
Here, I will make a simple estimate of the time delay difference between different parts of the source, i.e., of the effect of a finite source. In this estimate, I define $`\beta `$, $`\theta _\mathrm{A}`$, $`\theta _\mathrm{B}`$ as the angular position of the source center and those of the centers of the two images. Therefore, $`\theta _\mathrm{A}`$ and $`\theta _\mathrm{B}`$ are the solutions of the well-known lens equation (e.g., SEF),
$$\beta =\theta -\alpha (\theta ),$$
(1)
where $`\alpha `$ is the bending angle caused by the intervening lens object(s), i.e., the gravitational lens effect. Furthermore, the time delay $`\tau `$ relative to the un-lensed light path, for image position $`\theta `$ and source position $`\beta `$, is written as
$$\tau =\frac{(1+z_{\mathrm{ol}})}{2c}𝒟\left|\theta -\beta \right|^2-\frac{(1+z_{\mathrm{ol}})}{c^3}\mathrm{\Psi }(\theta ).$$
(2)
Here, $`z_{\mathrm{ol}}`$ is the redshift of the lens, $`𝒟`$ is the effective lens distance, which is written in terms of the angular diameter distances from observer to lens ($`D_{\mathrm{ol}}`$), from observer to source ($`D_{\mathrm{os}}`$), and from lens to source ($`D_{\mathrm{ls}}`$) as $`𝒟=D_{\mathrm{ol}}D_{\mathrm{os}}/D_{\mathrm{ls}}`$, and $`\mathrm{\Psi }`$ is the so-called “effective lens potential” (e.g., SEF).
$$\mathrm{\Delta }\tau _{\mathrm{AB}}(\beta )=\frac{(1+z_{\mathrm{ol}})}{2c}𝒟\left(\left|\theta _\mathrm{A}\beta \right|^2\left|\theta _\mathrm{B}\beta \right|^2\right)\frac{(1+z_{\mathrm{ol}})}{c^3}\{\mathrm{\Psi }(\theta _\mathrm{A})\mathrm{\Psi }(\theta _\mathrm{B})\}.$$
(3)
Additionally, if we assume the position that is offset by $`d\beta `$ from the center of the source and write $`d\theta _\mathrm{A}`$ and $`d\theta _\mathrm{B}`$ as image positions from the center of the image, these variables should fulfill the lens equation (1) again, i.e.,
$$(\beta +d\beta )=(\theta _i+d\theta _i)\alpha (\theta _i+d\theta _i),(i=\mathrm{A}\mathrm{or}\mathrm{B})$$
(4)
or, subtracting this from equation (1) and adopt Taylor expansion to $`\alpha `$, we obtain another expression of equation (4), $`d\beta =d\theta _i_\theta \alpha (\theta _i)d\theta _i+\mathrm{}`$.
Subtracting $`\mathrm{\Delta }\tau _{\mathrm{AB}}(\beta )`$ from $`\mathrm{\Delta }\tau _{\mathrm{AB}}(\beta +d\beta )`$, c.f., equation (3), and using equation (1) and equation (4), we are able to obtain the time delay difference between the center of source and the other position offset by $`d\beta `$ from the center of the source ($`\delta \tau _{\mathrm{AB}}=\mathrm{\Delta }\tau _{\mathrm{AB}}(\beta +d\beta )\mathrm{\Delta }\tau _{\mathrm{AB}}(\beta )`$) on the source plane.
Moreover, by definition of effective lens potential, bending angle $`\alpha `$ is related to the $`\theta `$ through the derivative of effective lens potential $`\mathrm{\Psi }(\theta )`$ as $`_\theta \mathrm{\Psi }(\theta )/𝒟c^2=\alpha (\theta )`$. Since we are considering the origin of quasar variabilities, the source size is $`\mathrm{kpc}`$ at most and the distance from observer is typical cosmological scale $`1\mathrm{Gpc}`$. Thus, its apparent angular size is $`\mathrm{kpc}/1\mathrm{G}\mathrm{p}\mathrm{c}=10^6\mathrm{rad}`$. This seems to be small compared with image separation and the scale of bending angles which is typically a few arcsec. For $`\delta \tau _{\mathrm{AB}}`$, accordingly, we can adopt a Taylor expansion to $`\alpha `$ and $`\mathrm{\Psi }`$, neglect the higher terms than first order assuming $`d\beta \beta `$ and $`d\theta _i\theta _i`$. After using some algebra and putting $`R`$ as actual off-centered distance on the source plane, i.e., $`R=|d\beta |D_{\mathrm{os}}`$, we are able to evaluate time delay difference as follows,
$$\begin{array}{cl}\delta \tau _{\mathrm{AB}}& \simeq \frac{(1+z_{\mathrm{ol}})}{2c}𝒟\{2\alpha (\theta _\mathrm{A})\nabla _\theta \alpha (\theta _\mathrm{A})d\theta _\mathrm{A}-2\alpha (\theta _\mathrm{B})\nabla _\theta \alpha (\theta _\mathrm{B})d\theta _\mathrm{B}\}\\ & -\frac{(1+z_{\mathrm{ol}})}{c^3}\{\nabla _\theta \mathrm{\Psi }(\theta _\mathrm{A})d\theta _\mathrm{A}-\nabla _\theta \mathrm{\Psi }(\theta _\mathrm{B})d\theta _\mathrm{B}\}\\ & =\frac{(1+z_{\mathrm{ol}})}{c}𝒟\left[\{\nabla _\theta \alpha (\theta _\mathrm{A})-1\}\alpha (\theta _\mathrm{A})d\theta _\mathrm{A}-\{\nabla _\theta \alpha (\theta _\mathrm{B})-1\}\alpha (\theta _\mathrm{B})d\theta _\mathrm{B}\right]\\ & \simeq \frac{(1+z_{\mathrm{ol}})}{c}𝒟\left|\left(\theta _\mathrm{B}-\theta _\mathrm{A}\right)d\beta \right|\end{array}$$
(5)
$$\delta \tau _{\mathrm{AB}}\simeq 12\left(\frac{1+z_{\mathrm{ol}}}{2}\right)\left(\frac{D_{\mathrm{ol}}}{D_{\mathrm{ls}}}\right)\left(\frac{|\theta _\mathrm{B}-\theta _\mathrm{A}|}{1\mathrm{arcsec}}\right)\left(\frac{R}{1\mathrm{kpc}}\right)\mathrm{day}.$$
(6)
This one-dimensional evaluation is somewhat of an overestimate; however, the calculations above do not rely on any restriction on the lens model, and equation (6) should be appropriate for any lens model and lensed system except in some special situations, e.g., in the vicinity of caustics (or critical curves).
Consequently, considering the fact that the intrinsic optical variability of quasars has timescales of $`\mathrm{days}`$ to $`\mathrm{months}`$, equation (6) indicates that the correlation between the light curves of the two images will, in the worst case, disappear if the origin of the quasar variability is extended over $`\sim 1\mathrm{kpc}`$, i.e., if the maximally off-centered burst occurs at $`\sim 1\mathrm{kpc}`$ from the center of the quasar.
## 3 EXAMPLES OF Q0957+561
Finally, we show some results for the case of Q0957+561, the first detected lensed quasar (Walsh, Carswell, & Weymann 1979).
To demonstrate how the extended-source effect acts on the time delay determination in an actual lensed quasar, I present here simulation results for Q0957+561 as an example. Using equation (6), we are able to estimate the time delay difference between the same source position as seen in the different lensed images. In this case, as is well known, if we use $`z_{\mathrm{ol}}\simeq 0.36`$, $`z_{\mathrm{os}}\simeq 1.41`$, $`|\theta _\mathrm{B}-\theta _\mathrm{A}|\simeq 6\mathrm{arcsec}`$ (e.g., GN) and assume $`H_0\simeq 60\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, we obtain $`\delta \tau _{\mathrm{AB}}\simeq 50\mathrm{day}`$ for a source with a size of $`\sim 1\mathrm{kpc}`$ !
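As a back-of-the-envelope check of this number, equation (6) can be evaluated directly; the Einstein-de Sitter ($`\mathrm{\Omega }=1`$) angular diameter distances used below are an assumption made for this sketch (the text quotes only the redshifts, the image separation and $`H_0`$).

```python
import numpy as np

def D_ang_EdS(z1, z2, H0=60.0):
    """Angular diameter distance between z1 < z2 for Omega = 1 [Mpc]."""
    c = 2.998e5                                      # km/s
    return (2.0 * c / H0) / (1.0 + z2) * (1.0 / np.sqrt(1.0 + z1)
                                          - 1.0 / np.sqrt(1.0 + z2))

z_ol, z_os, sep_arcsec, R_kpc = 0.36, 1.41, 6.0, 1.0
ratio = D_ang_EdS(0.0, z_ol) / D_ang_EdS(z_ol, z_os)     # D_ol / D_ls
dtau = 12.0 * (1.0 + z_ol) / 2.0 * ratio * sep_arcsec * R_kpc   # eq. (6), days
print(dtau)   # ~58 days, of order the ~50 day figure quoted above
```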
Furthermore, to obtain more realistic results, we used the isothermal SPLS galaxy with a compact core as the lens model for Q0957+561 (details are given in GN), adopted the parameters listed in table 7 of GN for the “isothermal” SPLS, and calculated the time delay difference between the image centers and off-centered parts of the images ($`\delta \tau _{\mathrm{AB}}`$). For this calculation, we set $`\mathrm{\Omega }=1.0`$ for simplicity and adopted a convergence $`\kappa \simeq 0.22`$ and $`H_0\simeq 64\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ to reproduce the observed time delay following K97. The resultant time delay contours relative to the image centers on the source plane are depicted in figure 2. On image A (left panel), the gradient of the contours is almost in the negative y-direction, while that of image B (right panel; the time delay between the images is $`\sim 417\mathrm{days}`$) is almost in the positive y-direction. Additionally, the time delay difference between the same position on the source reaches of order $`\mathrm{months}`$ for a source of $`\sim \mathrm{kpc}`$ size; therefore, we expect the correlation between the light curves of image A and image B to disappear. Here, from equation (5), we can easily understand why the contour lines are almost straight and perpendicular to the y-direction. The product of $`(\theta _\mathrm{A}-\theta _\mathrm{B})`$ and $`d\beta `$ in equation (5) means that the time delay difference is determined mostly by the component of the displacement $`d\beta `$ parallel to $`(\theta _\mathrm{A}-\theta _\mathrm{B})`$. Therefore, the time delay difference varies significantly along the $`(\theta _\mathrm{A}-\theta _\mathrm{B})`$ direction and is almost constant along the perpendicular direction.
Moreover, we simulated the expected light curves of the variability in both quasar images using a superposition of simple bursts with a triangular shape and a duration of $`\sim 10`$ days, distributed randomly in time, in space, and in amplitude. For the whole source size, I consider three cases: $`1\mathrm{kpc}`$, $`100\mathrm{pc}`$ and $`10\mathrm{pc}`$. Using the same procedure used to produce figure 2, we calculated the time delay from the center of image A over both images, randomly produced bursts, summed up all the bursts, and finally obtained the expected light curves presented in figure 3. “Residual light curves”, produced by subtracting the properly-shifted light curve of image B from that of image A, are also shown in the figure. In the case of the smallest source, $`R=10\mathrm{pc}`$, and still in the case of the intermediate source size, $`R=100\mathrm{pc}`$, we can easily recognize the coherent pattern in the light curves of images A and B, with the light curve of image B shifted by the time delay of $`\sim 417\mathrm{days}`$. The time delay between the two image centers can be determined fairly well. However, in the case of the largest source, $`R=1\mathrm{kpc}`$, there seems to be no correlation between the two light curves even if we already know the time delay between the two image centers, and we might wrongly conclude that the variability did not originate in the source itself! This behavior is far from the observed properties, for which the time delay between the two images is determined easily, even by fitting the light curves by eye. Therefore, I conclude that the size of the source that is the origin of the quasar variability should be smaller than $`\sim 100\mathrm{pc}`$; namely, the maximum acceptable size is of order $`\sim 10\mathrm{pc}`$ from this simple simulation.
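A toy version of this experiment is sketched below; the linear time-delay gradients across the source (equal and opposite on the two images), the burst rate and amplitudes, and the gradient normalization of $`\sim 50`$ days per kpc (chosen to mimic the estimate above) are all illustrative assumptions rather than values taken from the GN lens model.

```python
import numpy as np

rng = np.random.default_rng(0)

def triangle(t, t0, width=10.0, amp=1.0):
    """Triangular burst of total duration `width` days centred on t0."""
    return amp * np.clip(1.0 - np.abs(t - t0) / (0.5 * width), 0.0, None)

def light_curves(R_pc, n_bursts=40, span=2000.0, delay_AB=417.0, grad=50.0):
    """Toy light curves of images A and B for a source of size R_pc.

    Each burst sits at a random transverse offset y (in kpc) within the
    source; its arrival time is shifted by +grad*y days on image A and by
    -grad*y days on image B, mimicking opposite delay gradients."""
    t = np.arange(0.0, span, 1.0)
    A, B = np.zeros_like(t), np.zeros_like(t)
    for _ in range(n_bursts):
        t0, amp = rng.uniform(0.0, span), rng.uniform(0.5, 1.5)
        y = rng.uniform(-0.5, 0.5) * R_pc / 1000.0
        A += triangle(t, t0 + grad * y, amp=amp)
        B += triangle(t, t0 + delay_AB - grad * y, amp=amp)
    return t, A, B

# the correlation of A with the 417-day-shifted B degrades with source size
for R in (10.0, 100.0, 1000.0):
    t, A, B = light_curves(R)
    r = np.corrcoef(A[:-417], B[417:])[0, 1]
    print(R, round(r, 2))
```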
## 4 DISCUSSIONS AND COMMENTS
As we have examined, if we include the finite source-size effect in the time delay determination from quasar variability, the correlation between the expected light curves of the lensed images will disappear if the source size is sufficiently large, say $`\sim 1\mathrm{kpc}`$. Using this fact, we can limit the size of the region where the quasar variability is produced from the correlation between the light curves of multiple lensed quasar images. Furthermore, since the size of the region producing the intrinsic quasar variability reflects the physical origin of the variability, we can also constrain that origin, e.g., whether it is a disk instability (Kawaguchi et al. 1998) or a starburst (Aretxaga, Cid Fernandes, & Terlevich, 1996). In particular, in the case of Q0957+561, the origin of the variability has a size smaller than $`\sim 100\mathrm{pc}`$. This value is consistent with the disk instability model, because of its small size ($`\sim 0.01\mathrm{pc}`$ for a $`\sim 1000`$ Schwarzschild radius accretion disk surrounding a $`10^8M_{\odot }`$ supermassive black hole). The starburst model can be rejected, since the starburst region is $`\sim 100\mathrm{pc}`$–$`1\mathrm{kpc}`$. Hence, for the origin of the intrinsic quasar variability, the disk instability model is preferable, as was already indicated by Kawaguchi et al. (1998). To draw this conclusion more firmly, we should carry out this study more precisely in the future.
Additionally, the fact that a larger source size tends to destroy the correlation between the light curves of the images provides an answer to the question of why time delay determinations from radio fluxes gave incorrect answers, except in recent works, e.g., Haarsma et al. (1999). Generally, the radio emitting region is believed to be larger than the optical emitting region because of the existence of large radio lobes and/or jet components, so the effect we have shown in this Letter may be significant. Thus, a robust determination of the time delay appears to be difficult.
If such an effect were significant in the well-known lensed quasar Q2237+0305, the microlensing interpretation of the individual variations (e.g., see Irwin et al. 1989) would be rejected. Fortunately, however, this may not be the case, because for this source, owing to the nice symmetry of the lensed images, the effect seems not to be significant, and the intrinsic variations are expected to appear in every image with good correlations.
If we develop this technique further and apply it to other multiply-imaged lensed quasars, we will be able to determine the size of the most interesting part of the quasar.
The author would like to express his thanks to Toshihiro Kawaguchi and Jun Fukue for their numerous comments, and to the referee for his/her meaningful suggestions. The author is also grateful to Shin Mineshige for his valuable discussions and kind reading of the previous draft. This work was supported in part by the Japan Society for the Promotion of Science (9852).
# Untitled Document
1. Introduction.
The modern era was created probably as much by Descartes’ conceptual separation of mind from matter as by any other event. This move freed science from the religious dogmas and constraints of earlier times, and allowed scientists to delve into the important mathematical regularities of the observed physical world. Descartes himself allowed interaction between mind and matter to occur within the confines of a human brain, but the deterministic character of the physical world specified later by Newtonian mechanics seemed to rule out completely, even within our brains, any interference of mind with the workings of matter. Thus the notion of a completely mechanical universe, controlled by universal physical laws, became the new dogma of science.
It can readily be imagined that within the milieu dominated by such thinking there would be stout opposition to the radical claims of the founders of quantum theory that our conscious human knowings should be taken as the basis of our fundamental theory of nature. Yet the opposition to this profound shift in scientific thinking was less fierce than one might suppose. For, in the end, no one could dispute that science rests on what we can know, and quantum theory was formulated in practical human terms that rested squarely on that fact. Hence the momentous philosophical shift was achieved by some subtle linguistic reformulations that were inculcated into the minds of the students and practitioners of quantum theory. The new thought patterns, and the calculations they engendered, worked beautifully, insofar as one kept to the specified practical issues, and refrained, as one was instructed to do, from asking certain “meaningless” metaphysical questions.
Of course, there are a few physicists who are dissatisfied with purely practical success, and want to understand what the practical success of these computational rules is telling us about ourselves and the nature of the world in which we live. Efforts to achieve such an understanding are proliferating, and the present work is of that genre. Historically, efforts to achieve increasingly coherent and comprehensive understandings of the clues we extract from Nature have occasionally led to scientific progress.
The outline of the present work is as follows. In section 2, I document the claim made above that the orthodox Copenhagen interpretation of quantum theory is based squarely and explicitly on human knowings. The aim of the paper is to imbed this orthodox pragmatic epistemological theory in a rationally coherent naturalistic ontology in a minimalistic way that causes no disruption of anything that orthodox quantum theory says, but merely supplies a natural ontological underpinning. In the special case of processes occurring in human body/brains this ontological structure involves human conscious knowings that enter into the brain dynamics in a manner that accounts for the way that these knowings enter into the orthodox interpretation of quantum theory.
In section 3 I discuss another interpretation, which is probably the common contemporary interpretation of the Copenhagen interpretation. It is coarse in that it is imprecise on essential theoretical points. Because it is common and coarse I call it the Vulgar Copenhagen Interpretation.
In section 4 the unusual causal structure of quantum theory is discussed, and is used to justify, in the context of trying to understand the role of mind in nature: 1) the rejection of the classical ontology, 2) the reasonableness of attempting to ontologicalize the orthodox interpretation of quantum theory, and 3) the expectation that our knowings involve non-local aspects.
Section 5 is entitled “All roads lead to Solvay 1927”. The 1927 Solvay conference, seventy years ago, marked the birth of the orthodox Copenhagen interpretation of quantum theory. In this section I review this Symposium from a certain point of view, namely the viewpoint that many of the highlights of the Symposium confirm the basic message of the orthodox interpretation, namely that the only reasonable way to make rational sense out of the empirical data is to regard nature as being built out of knowings. I argue that the experience of the last seventy years suggests the reasonableness of taking this interpretation seriously: more seriously than the founders of quantum theory took it. Basically, they said, cautiously, that the mathematical formalism is a useful tool for forming expectations about our future knowings on the basis of our past ones. That claim has been now been abundantly confirmed, also in fields far beyond the narrow confines of atomic physics. But the founders scrupulously avoided any suggestion that this mathematical formalism corresponded to reality. They either discouraged us from asking questions about what is really happening, or, if pressed, looked for reality not in their own knowledge-based formalism, but in terms of more conventional physical terms. This reluctance to take their own formalism seriously was, I think, the result partly of an inertial carry-over from classical physics, which shunned and excluded any serious consideration of mind in physics, and partly of a carry-over of an idea from the special theory of relativity. This is the idea that no influence or signal could propagate faster than light. However, in quantum theory there is a sharp distinction between signal and influence, because it can be proved both that no signal can be transmitted faster than light, and that this property cannot be imagined to hold for influences. The distinction between signal and influence has to do with the difference between the causal structure of the deterministic evolution of the statistical predictions of the theory and the causal structure of something that has no analog in classical mechanics, namely the selection process that acts within the deterministic structure that is the analog of the classical deterministic structure, but that is not fully determined by that structure.
In cosmological solutions in general relativity there is usually a preferred set of advancing spacelike surfaces that provide a natural definition of instantaneousness. Also, there is the empirical cosmological preferred frame defined by the background black-body radiation. So the idea of special relativity that there is no preferred frame for the universe, although it may indeed hold for the formulation of the general local-deterministic laws, is not as compelling now as it was in 1905, or even 1927: that idea could very well break down in our particular universe at the level of the selection of particular individual results (knowings). Indeed, I believe it must break down at that level. (Stapp, 1997)
So I propose to take seriously the message of Solvay 1927, that nature be understood as built out of knowings. But we must then learn how better to understand knowings, within the mathematical framework provided by the quantum formalism.
In section 6 I distinguish the two different components of the quantum mechanical evolutionary process, the unitary/local part and the nonunitary/ nonlocal part, and note that our conscious knowings, as they occur in the quantum description, enter only into the latter part. But that part is eliminated when one takes the classical approximation to the quantum dynamics. Thus from the perspective of quantum mechanics it would be irrational to try to find consciousness in a classical conception of nature, because that conception corresponds to an approximation to the basic dynamics from which the process associated with consciousness has been eradicated.
I note there also that the ontologicalization of the quantum mechanical description dissolves, or at least radically transforms the mind-matter dualism. The reason is this: in the classical theory one specifies at the outset that the mathematical quantities of the theory represent the physical configuration of matter, and hence one needs to explain later how something so seemingly different from matter as our conscious knowings fit in. But in the quantum case one specifies from the outset that the mathematical quantities of the theory describe properties of knowings, so there is no duality that needs explaining: no reality resembling the substantive matter of classical physics ever enters at all. One has, instead, a sequence of events that are associated from the outset with experiences, and that evolve within a mathematically specified framework.
Section 7 lays out more explicitly the two kinds of processes by showing how they can be considered to be evolutions in two different time variables, called process time and mathematical time.
Section 8 goes into the question of the ontological nature of the “quantum stuff” of the universe.
In the sections 9 and 10 I describe the proposed ontology. It brings conscious knowings efficaciously into quantum brain dynamics. The basic point is that in a theory with objectively real quantum jumps, some of which are identifiable with the quantum jumps that occur in the orthodox epistemological interpretation, one needs three things that lie beyond what orthodox quantum theory provides:
1. A process that defines the conditions under which these jumps occur, and the possibilities for what that jump might be.
2. A process that selects which one of the possibilities actually occurs.
3. A process that brings the entire universe into concordance with the selected outcome.
Nothing in the normal quantum description of nature in terms of vectors in Hilbert space accomplishes either 1 or 2. And 3 is simply put in by hand. So there is a huge logical gap in the orthodox quantum description, if considered from an ontological point of view. Some extra process, or set of processes, not described in the orthodox physical theory, is needed.
I take a minimalistic and naturalistic stance, admitting only the least needed to account for the structure of the orthodox quantum mechanical rules.
In appendix A I show why the quantum character of certain synaptic processes makes it virtually certain that the quantum collapse process will exercise dominant control over the course of conscious mind/brain processes.
2. The subjective character of the orthodox interpretation of quantum mechanics.
In the introduction to his book “Quantum theory and reality” the philosopher of science Mario Bunge (1967) said: “The physicist of the latest generation is operationalist all right, but usually he does not know, and refuses to believe, that the original Copenhagen interpretation — which he thinks he supports — was squarely subjectivist, i.e., nonphysical.”
Let there be no doubt about this.
Heisenberg (1958a): “The conception of objective reality of the elementary particles has thus evaporated not into the cloud of some obscure new reality concept but into the transparent clarity of a mathematics that represents no longer the behavior of particles but rather our knowledge of this behaviour.”
Heisenberg (1958b): “…the act of registration of the result in the mind of the observer. The discontinuous change in the probability function… takes place with the act of registration, because it is the discontinuous change in our knowledge in the instant of registration that has its image in the discontinuous change of the probability function.”
Heisenberg (1958b): “When the old adage ‘Natura non facit saltus’ is used as a basis of a criticism of quantum theory, we can reply that certainly our knowledge can change suddenly, and that this fact justifies the use of the term ‘quantum jump’.”
Wigner (1961): “the laws of quantum mechanics cannot be formulated…without recourse to the concept of consciousness.”
Bohr (1934): “In our description of nature the purpose is not to disclose the real essence of phenomena but only to track down as far as possible relations between the multifold aspects of our experience.”
In his book “The creation of quantum mechanics and the Bohr-Pauli dialogue” (Hendry, 1984) the historian John Hendry gives a detailed account of the fierce struggles by such eminent thinkers as Hilbert, Jordan, Weyl, von Neumann, Born, Einstein, Sommerfeld, Pauli, Heisenberg, Schroedinger, Dirac, Bohr and others, to come up with a rational way of comprehending the data from atomic experiments. Each man had his own bias and intuitions, but in spite of intense effort no rational comprehension was forthcoming. Finally, at the 1927 Solvay conference a group including Bohr, Heisenberg, Pauli, Dirac, and Born came into concordance on a solution that came to be called “The Copenhagen Interpretation”. Hendry says: “Dirac, in discussion, insisted on the restriction of the theory’s application to our knowledge of a system, and on its lack of ontological content.” Hendry summarized the concordance by saying: “On this interpretation it was agreed that, as Dirac explained, the wave function represented our knowledge of the system, and the reduced wave packets our more precise knowledge after measurement.”
Certainly this profound shift in physicists’ conception of the basic nature of their endeavour, and the meanings of their formulas, was not a frivolous move: it was a last resort. The very idea that in order to comprehend atomic phenomena one must abandon physical ontology, and construe the mathematical formulas to be directly about the knowledge of human observers, rather than about the external real events themselves, is so seemingly preposterous that no group of eminent and renowned scientists would ever embrace it except as an extreme last measure. Consequently, it would be frivolous of us simply to ignore a conclusion so hard won and profound, and of such apparent direct bearing on our effort to understand the connection of our knowings to our bodies.
Einstein never accepted the Copenhagen interpretation. He said: “What does not satisfy me, from the standpoint of principle, is its attitude toward what seems to me to be the programmatic aim of all physics: the complete description of any (individual) real situation (as it supposedly exists irrespective of any act of observation of substantiation).” (Einstein, 1951, p.667) and “What I dislike in this kind of argumentation is the basic positivistic attitude, which from my view is untenable, and which seems to me to come to the same thing as Berkeley’s principle, esse est percipi. (Einstein, 1951, p. 669). Einstein struggled until the end of his life to get the observer’s knowledge back out of physics. But he did not succeed! Rather he admitted that: “It is my opinion that the contemporary quantum theory…constitutes an optimum formulation of the \[statistical\] connections.” (ibid. p. 87). He referred to: “the most successful physical theory of our period, viz., the statistical quantum theory which, about twenty-five years ago took on a logically consistent form. … This is the only theory at present which permits a unitary grasp of experiences concerning the quantum character of micro-mechanical events.” (ibid p. 81).
One can adopt the cavalier attitude that these profound difficulties with the classical conception of nature are just some temporary retrograde aberration in the forward march of science. Or one can imagine that there is simply some strange confusion that has confounded our best minds for seven decades, and that their absurd findings should be ignored because they do not fit our intuitions. Or one can try to say that these problems concern only atoms and molecules, and not things built out of them. In this connection Einstein said: “But the ‘macroscopic’ and ‘microscopic’ are so inter-related that it appears impracticable to give up this program \[of basing physics on the ‘real’\] in the ‘microscopic’ alone.” (ibid, p.674).
The examination of the “locality” properties entailed by the validity of the predictions of quantum theory that was begun by Einstein, Podolsky, and Rosen, and was pursued by J.S. Bell, has led to a strong conclusion (Stapp, 1997) that bears out this insight that the profound deficiencies the classical conception of nature are not confinable to the micro-level. This key result will be discussed in section 4. But first I discuss the reason why, as Mario Bunge said: “The physicist of the latest generation is operationalist all right, but usually he does not know, and refuses to believe, that the original Copenhagen interpretation — which he thinks he supports — was squarely subjectivist, i.e., nonphysical.”
3. The Vulgar Copenhagen Interpretation.
Let me call the original subjectivist, knowledge-based Copenhagen interpretation the “strict” Copenhagen interpretation. It is pragmatic in the sense that it is a practical viewpoint based on human experience, including sensations, thoughts, and ideas. These encompass both the empirical foundation of our physical theories and the carrier of these theories, and perhaps all that really matters to us, since anything that will never influence any human experience is, at least from an anthropocentric viewpoint, of no value to us, and of uncertain realness.
Nevertheless, the prejudice of many physicists, including Einstein, is that the proper task of scientists is to try to construct a rational theory of nature that is not centered on such a small part of the natural world as human experience.
The stalwarts of the Copenhagen interpretation were not unaware of the appeal of that idea to some of their colleagues, and they had to deal with it in some way. Thus one finds Bohr(1949) saying, in his contribution ‘Discussion with Einstein’ to the Schilpp(1951) volume on Einstein:
“In particular, it must be realized that—besides in the account of the placing and timing on the instruments forming the experimental arrangement—all unambiguous use of space-time concepts in the description of atomic phenomena is confined to the recording of observations which refer to marks on a photographic plate or similar practically irreversible amplification effects like the building of a water drop around an ion in a cloud-chamber.”
and,
“On the lines of objective description, it is indeed more appropriate to use the word phenomenon to refer only to observations obtained under circumstances whose description includes an account of the whole experimental arrangement. In such terminology, the observational problem in quantum physics is deprived of any special intricacy and we are, moreover, directly reminded that every atomic phenomena is closed in the sense that its observation is based on registrations obtained by means of suitable amplification devices with irreversible functioning such as, for example, permanent marks on a photographic plate, caused by the penetration of electrons into the emulsion. In this connection, it is important to realize that the quantum mechanical formalism permits well-defined applications referring only to such closed phenomena.”
These are carefully crafted statements. If read carefully they do not contradict the basic thesis of the strict Copenhagen interpretation that the quantum formalism is about our observations described in plain language that allows us to “tell others what we have done and what we have learned.”
On the other hand, it seems also to be admitting that there really are events occurring ‘out there’, which are we are observing, but which do not derive their realness from our observations of them.
Heisenberg (1958) says something quite similar:
“The observation, on the other hand, enforces the description in space and time but breaks the determined continuity of the probability function by changing our knowledge of the system.”
”Since through the observation our knowledge of the system has changed discontinuously, its mathematical representation also has undergone the quantum jump, and we speak of a ‘quantum jump’ .”
“A real difficulty in understanding the interpretation occurs when one asks the famous question: But what happens ‘really’ in an atomic event?”
“If we want to describe what happens in an atomic event, we have to realize that the word ‘happens’ can apply only to the observation, not to the state of affairs between the two observations. It \[ the word ‘happens’ \] applies to the physical, not the psychical act of observation, and we may say that the transition from the ‘possible’ to the ‘actual’ takes place as soon as the interaction of the object with the measuring device, and therefore with the rest of the world, has come into play; it is not connected with the act of registration of the result in the mind of the observer. The discontinuous change in the probability function, however, occurs with the act of registration, because it is the discontinuous change in our knowledge in the instant of recognition that has its image in the discontinuous change in the probability function.”
All of this is very reasonable. But it draws a sharp distinction between the quantum formalism, which is about knowledge, and a world of real events that are actually occurring ‘out there’, and that can be understood as transitions from the ‘possible’ to the ‘actual’, closed by irreversible processes when the interaction between the object and the measuring device, and hence the rest of the world, comes into play.
Yet the extreme accuracy of detailed theoretical calculations \[one part in a hundred million in one case\] seems to make it clear that the mathematical formalism must be closely connected not merely to our knowledge but also to what is really happening ‘out there’: it must be much more than a mere representation of our human knowledge and expectations.
I call this natural idea—that the events in the formalism correspond closely to real “physical” events out there at the devices—the Vulgar Copenhagen Interpretation: vulgar in the sense of common and coarse.
This vulgar interpretation is I think the common interpretation among practicing quantum physicists: at this symposium some important experimentalists were, as Mario Bunge suggested, unwilling to believe that the quantum mechanical formalism was about ‘our knowledge’. And it is coarse: the idea of what constitutes an ‘irreversible’ process is not carefully specified, nor is the precise meaning of ‘as soon as the interaction with the object with the measuring device comes into play’.
My aim in this paper is to reconcile the strict and vulgar interpretations: i.e., to reconcile the insight of the founders of quantum theory that the mathematical formalism of quantum is about knowledge with the demand of Einstein that our basic physical theory be a theory of nature herself.
The main obstacle to a rational understanding of these matters is the faster-than-light action that the quantum formalism seems to entail, if interpreted at a physical level. If one takes literally the idea that the quantum event at the device constitutes a real transition from some physical state of ‘possibility’ or ‘propensity’ to a state of ‘actuality’ then—in the ‘entangled states’ of the kind studied by Schroedinger, by Einstein, Podolsky, and Rosen, and by Bell and others—it would seem that the mere act of making a measurement in one region would, in certain cases, instantly produce a change in the physical propensities in some far-away region. This apparent faster-than-light effect is dealt with in the strict Copenhagen interpretation by denying that the probability function in the formalism represents anything physical: the formalism is asserted to represent only our knowledge, and our knowledge of far-away situations can be instantly changed—in systems with correlations—by merely acquiring information locally.
This fact that the strict Copenhagen interpretation “explains away” the apparent violations of the prohibition \[suggested by the theory of relativity\] of faster-than-light actions is a main prop of that interpretation.
So the essential first question in any attempt to describe nature herself is the logical status of the claimed incompatibility of quantum theory with the idea—from the theory of relativity in classical physics—that no influence can act backward in time in any frame of reference.
It is of utmost importance to progress in this field that we get this matter straight.
4. Causality, Locality, and Ontology.
David Hume cast the notion of causality into disrepute. However, when one is considering the character of a putative law of evolution of a physical system it is possible to formulate in a mathematically clean way a concept of causality that is important in contemporary physical theory.
In relativistic physics, both classical and quantum mechanical, the idea of causality is introduced in the following way:
We begin with some putative law of evolution of a physical system. This law is specified by picking a certain function called the Lagragian. A key feature of the possible Lagrangians is that one can modify them by adding a term that corresponds to putting in an extra-force that acts only in some small spacetime region R.
The evolution is specified by the “law” specified by the chosen Lagrangian, plus boundary conditions. Let us suppose that boundary condition is specified as the complete description of “everything” before some “initial time” $`T_{in}`$. The laws then determine, in principle, “everything” for all later times.
In classical mechanics “everything” means the values of all of the physical variables that are supposed to describe the physical system that is being considered, which might be the entire physical universe.
In quantum mechanics “everything” means all of the “expectation values” of all of the conceivable possible physical observables, where “expectation value” means a predicted average value over an (in principle) infinite set of instances.
To bring in the notion of causality one proceeds as follows. It is possible, both in classical and quantum theory, to imagine changing incrementally the Lagrangian that specifies the law of evolution. The change might correspond to adding extra terms to the forces acting on certain kinds of particles if they are in some small spacetime region R that lies later than time $`T_{in}`$. Such a change might be regarded as being introduced whimsically by some outside agent. But, in any case, one can compare the values of “everything” at times later than time $`T_{in}`$ in the new modified world (i.e., the world controlled by the new modified Lagrangian) to the values generated from the laws specified by the original Lagrangian. If one is dealing with an idealized world without gravity, or at least without any distortion of the ‘flat’ Minkowsky spacetime, then it is a mathematical property of relativistic field theories, both classical and quantum mechanical, that “nothing” will be changed outside the forward light cone of the region R in which the Lagrangian was changed!
In other word, “everything” will be exactly the same in the two cases at all points that cannot be reached from the spacetime region R without moving faster than the speed of light.
This property of relativistic field theories is called a causality property. The intuition is that this change in the Lagrangian can be regarded, or identified, as a “cause”, because it can be imposed whimsically from outside the physical system. The mathematical property just described says that the effects of this “cause” are confined to its forward light cone; i.e., to spacetime points that can be reached from the spacetime region R of the cause without ever traveling at a speed greater than the speed of light.
Relativistic field theories are formulated mathematically in such a way that this causality property holds. This means that insofar as it is legitimate to imagine that human beings can “freely choose” \[i.e., can act or not act upon a physical system without there being any cause from within that physical system of this act\] to do one thing or another in a region $`R`$ \[e.g., to exert or not exert a force on some physical particles of the system in region $`R`$\] then “everything” outside the forward light cone of $`R`$ will be independent of this choice: i.e., “everything” outside this forward light cone will be left unaltered by any change in this choice.
This relativistic causality property is a key feature of relativistic field theories in flat Minkowsky spacetime: it is all the causality that the orthodox pragmatic quantum philosophy calls for.
But notice that by “everything” one means, in the quantum case, merely the “expectation values”, which are averages over an (in principle) infinite ensemble of instances.
Now, one might think that since this relativistic causality property holds for these averages it ought to be at least conceivably possible that it could hold also for the individual instances.
But the amazing thing is that this is not true! It is not logically possible to impose the no-faster-than-light condition in the individual instances, and maintain also the validity of certain simple predictions of quantum theory.
The point is this. Suppose one considers an experimental situation involving two experimental regions that are spacelike separated from each other. This means that no point in either region can be reached from any point in the other without traveling faster than the speed of light. In the first region there is an experimenter who can freely choose to do one experiment or another. Each of these two alternative possible experiments has two alternative possible outcomes. There is a similar set up in the second region. Each possible outcome is confined to the associated experimental region, so that no outcome of an experiment in one region should be able to be influenced by the free choice made by the experimenter in the other region.
One single instance is considered, but with the two free choices of the two experimenters being treated as two free variables. Thus the one single instance under consideration, uniquely fixed at all times earlier than the earliest time in either of the two experimental regions, will go into one or another of altogether $`(2\times 2)=`$ four alternative possible evolutions of this system, depending on which of the two alternative possible choices is made by each of the two experimenters. There can then be further branchings that are specified by which of the possible outcomes nature selects for whichever experiments are performed.
The particular experimental details can be arranged so that the assumed validity of the predictions of quantum theory for that particular arrangement entails the nonvalidity of at least one of the three following locality conditions:
LOC1: It is possible to impose the following condition: If in each of the two regions the first of the two possible experiments were to be performed, and a certain result $`r`$ appeared in the first region then if this very same experiment were to be performed in the first region then this same result $`r`$ would appear there even if the experimenter in the second region were to elect at the last moment to do the other measurement.
The rationale for this locality condition is that a free choice of what to do in one place cannot—relativity theory leads us to believe— affect, at a speed faster than the speed of light, what occurs elsewhere: making a different choice in one region should not be able to force what appears (at the macroscopic, observable level) in the other region to be different. Indeed, in some frame of reference the outcome in the first region has already occurred before the experimenter in the second region makes his free choice of which experiment he will perform. But, according to ideas from relativity theory, what someone has already seen and recorded here at some earlier time cannot be disturbed by what a faraway experimenter freely chooses to do at some later time.
Notice that LOC1 requires only that it be possible to impose this condition. The point is that only one of the two possible experiments can actually be performed in the second region, and hence nature herself will make only one choice. So what would actually appear in the first region if the experimenter in the other (far away) region were (at some future time) to make a different choice in not physically well defined. Thus this is a theoretical investigation: the question is whether the predictions of QT are compatible with the notion that nature evolves in such a way that what one observer sees and records in the past can be imagined to be fixed independently of what another person will freely choose to do in the future.
LOC2: Suppose, under the condition that the first of the two possible measurements were to be performed in the first region (with no condition imposed on what the outcome there is) that one can prove from LOC1 and the predictions of quantum theory, the truth of a statement $`S`$ that pertains exclusively to what experimenters can observe under various possible conditions of their own making in the second region. Then this locality condition asserts that it is logically possible to demand that $`S`$ remain true under the condition that the experimenter in the first region freely chooses (say in the future) to perform there, instead, the second possible measurement.
The rationale is that, according to certain ideas from the theory of relativity, the truth of a statement that pertains to macroscopic conditions that refer exclusively to one space-time region should not depend on what someone far away freely chooses to do later.
LOC3 This is another form of LOC1: Altering the free choice in R leaves any outcome in L undisturbed. \[See Stapp, 1997\]
The validity of the predictions of quantum theory in correlation situations like this are being regularly borne out. (…Most recently in a highly publicized experiment using the Swiss telephone company optical fibers to connect experimental regions that were 14 km apart, with the intent of making important practical applications.) Thus it can, I believe, be confidently assumed that the pertinent quantum predictions are valid. But in that case one of the “locality conditions” described above must fail.
Before drawing any conclusions one must consider the impact or significance of the assumption that the experimenters’ choices can be treated as “free variables”.
It is part of the orthodox quantum philosophy that the experimenters’ choices can and should be considered to stand outside the physical system that is being examined. Bohr and Heisenberg argued that biological systems in general lie outside the domain covered by the pragmatic framework. But in any case, one thing is certain: the beautiful and elegant quantum formalism is naturally suited to the idea that it represents a system that is part of a bigger system that can extract information from it, where the nature of the information being extracted from the subsystem is controlled by things outside that subsystem, namely the observer and his instruments of observation.
But even at a more intuititive level it seems that the decision-making process of human experimenters are so complex and delicate, and so insulate-able in principle, prior to the time of the examination, from the system that they are about to examine, as to render their choices as to what to look for effectively free, under appropriate conditions of isolation, from any influence upon them by the system they are about to examine. So it would seem to be safe, under appropriate conditions of prior isolation, to treat these choices as if they were free from such influences even in a strictly deterministic universe.
In a quantum universe this move is even more reasonable, because the choices could be governed by a quantum process, such as the decay of a radio-active nucleus. Within the quantum theoretical framework each such decay appears as a spontaneous random event. It is free of any “physical” cause, where “physical” means something that is part of the physical world as that world is described by the physical theory. Thus within both the deterministic and stochastic contexts it seems reasonable to treat the choices to be made by the experimenters as if they were free, in the sense of not being influenced by the physical properties of the system that is about to be examined.
One caveat. The arguments require that meaning be given to a condition such as: “If the experimenter in region one performs experiment one, and the outcome that occurs there is outcome one”. This condition is nonsensical in the Everett many-minds interpretation, because every outcome occurs. I have excluded that interpretation from consideration on other grounds, which are described in section 5.
The apparent failure of the locality condition has three important consequences:
1. It gives a solid basis for the conclusion of the founders of quantum theory that no return to the notions of classical mechanics (relativistic field theory) is possible: the invalid locality property certainly holds in relativistic classical mechanics.
2. It makes reasonable the attempt to ontologicalize the orthodox interpretation. It had formerly been believed that this was a nonsensical thing to try, because ontologicalization immediately entails faster-than-light transfer of information on the individual-instance level. Such transfers had seemed unacceptable, but are now seen to be unavoidable even in a very general framework that maintains merely the validity of the predictions of quantum theory, and the idea that the experimenters’ choices can be considered to be “free”, in the weak sense discussed above.
3. Because the nonlocal effects enter into orthodox quantum theory specifically in connection with the entry of our knowings into the dynamics there is prima facie evidence that our knowings may be associated with the nonlocal aspect of nature. It is worth noting that these effects are not confined to a microscopic scale: in the Swiss experiment the effect in question extended over a separation of 14km. And, according to quantum theory, the effect does not fall off at all with distance. In my proposal each of our knowings is associated with a brain event that involves, as a unit, a pattern of brain (e.g., neuronal) activity that may extend over a large part of the brain. The collapse actualizes this whole pattern, and the associated knowing is an expression of the functional properties of this pattern.
Once the reality is recognized to be knowledge, rather than substantive matter, the nonlocal connections seem less problemmatic: nothing but knowledge about far-away knowings is changed by nearby knowings.
5. All Roads Lead to Solvay 1927.
The Solvay conference of 1927 marks the birth of (coherently formulated) quantum theory. Two of the many important papers delivered there stand out.
Born and Heisenberg presented a paper on the mathematical formalism and proclaimed that the essential features of the formalism were complete and not subject to further revision.
Dirac gave a paper on the interpretation, and claimed that “the wave function represents our knowledge of the system, and the reduced wave packets our more precise knowledge after measurement.”
These two parts, the mathematical formalism and its interpretation in terms of knowledge, meshed perfectly: that was the logical basis of the Copenhagen interpretation.
This was an epic event in the history of human thought. Since the time of the ancient Greeks the central problem in understanding the nature of reality, and our role in it, had been the puzzling separation of nature into two seemingly very different parts, mind and matter. This had led to the divergent approaches of idealism and materialism. According to the precepts of idealism our ideas, thought, sensations, and other experiential realities should be taken as basic. But then the mathematical structure carried by matter was difficult to fathom in any natural way. Materialism, on the other hand, claimed that matter was basic. But, if one started with matter then it was difficult to understand how something like your experience of the redness of a red apple could be constructed out of it, or why the experiential aspect of reality should exist at all if, as classical mechanics avers, the material aspect is dynamically complete by itself. There seemed to be no rationally coherent way to comprehend the relationship between our experiences of the reality that exists outside our thoughts, and the nonexperiential-type material substance that the external reality was claimed to be made of.
Yet at the Solvay meeting, physicists, of all people, had come up with a perfect blending, based on empirical evidence, in which the mathematical structure needed to account for all of the empirical regularities formerly ascribed to substantive matter, was present without there being anything like substantive matter: the mathematical structure was a property of knowings!
What an exhilerating moment it must have been. Driven simply by the need to understand in a rational way the empirical facts that nature had presented to us, scientists had been led to a marvelous resolution of this most fundamental of all philosophical problems. It was a tremendous achievement. Now, seventy years later, we are able to gather here at the X-th Max Born Symposium to celebrate the unbroken record of successes of that profound discovery, and to hear about its important new triumphs.
So now, the end of our Symposium, I take this opportunity to review briefly some of its highlights from the perspective of the Solvay breakthough.
Probably the most exciting reports were from experimentalists who are now performing experiments that could only be imagined seventy years ago. Yet the thinking of the founders of quantum theory did involve “gedanken” experiments designed to confirm the rational coherency of the framework. Today these “thought” experiments involving preparations and measurements on small numbers of individual atoms are being carried out, and the results invariably confirm all of the “quantum weirdness” that the Copenhagen interpretation predicted.
But do these successes really confirm the radical ideas of Solvay 1927? Time has eroded the message of Solvay to the extent that the scientist performing the experiments hardly recognize the Solvay insights in the interpretation of their work, though they give lip service to it. One must probe into the rational foundations of the subject to see the import of their results on this deep question.
I cite first the report of Omnes. There had been hope that some way around the Copenhagen interpretation would emerge from the studies of decoherence and consistent histories that have been so vigorously pursued of late. No one has pursued these ideas more deeply than Omnes. His verdict is that these methods amount to “the Copenhagen interpretation ‘done right’ ”. He said similar things in his book (Omnes, 1994). And such prominent proponents of “decoherence” as Zurek(1986) and Joos(1986) have said similar things: Zurek concluded that the study of decoherence “constitutes a useful addition to the Copenhagen …a clue pointing at a still more satisfactory resolution of the measurement problem…a hint about how to proceed rather than the means to settle the matter quickly.” Joos asks at the beginning of his article “Is there some way, at least a hint, how to understand… ” and at the end says “one may hope that these superselection rules can be helpful in developing new ideas ..\[about\].. measurement processes.” So they both stressed that decoherence effects do not resolve the deep problems.
Indeed, decoherence is rather the cause of the problem: decoherence effects make it virtually impossible to empirically determine whether quantum collapses are occurring outside our brains or not. It is precisely because of decoherence effects that we cannot tell, empirically, whether or not collapses actually do occur “when the interaction of the object with the measuring device, and hence the rest of the world, comes into play”.
The decoherence-consistent-histories approach had originally been pursued within the Everett framework, and indeed was sometimes called the ‘post-Everett’ approach to stress that it was being pursued within that framework, rather than the Copenhagen framework, which it sought to unseat. But Omnes put his finger on the fatal flaw in the Everett approach when he said that it did not explain the transition from “and” to “or”. In the evolving wave function of Everett the various branches do evolve independently, and hence might naturally be imagined to have different “minds” associated with them, as Everett suggests. But these branches, and the minds that are imagined to be properties of these branches, are all simultaneously present. Hence there is no way to give meaning to the notion that one mind is far more likely to be present at some finite time than the others. It is like waves on a pond: the big waves and the small ones are all present simultaneously. So one needs something else, perhaps like a surfer that will be pushed into one branch or the other, to define the “or” that is logically needed to define the notion of the probabilities of the different “alternatives”. Yet the Everett interpretation allows nothing else besides the wave function and its properties. So all the minds are simultaneously present because all the corresponding properties of the various branches are simultaneously present.
The idea of the surfer being pushed by the wave is exactly the idea behind the model of David Bohm that was so ably expounded here by D. Duerr and by F. Faisal. But the model has not been consistently extended to the relativistic case of quantum electrodynamics, or to quantum chromodynamics, which are our premiere quantum theories.
The model has other unpleasant features. One is the problem of the empty branches. Each time a “good measurement” is performed the wave function must separate into different “branches”. These branches are parts of the wave function such that the full wave function is a sum (i.e., superposition) of these branches, and each branch is nonzero only in a region (of the 3n-dimensional space in which these wave functions live) that overlaps none of the other regions. Here n is the number of particles in the universe.
If two branches separate then the ‘surfer’ (which in the Bohm model would be the entire classically described physical world) must end up in just one of these branches. But all the other branches (which are regarded as physically real) must continue to evolve for all eternity without ever having any effect upon the ‘surfer’, which is the only part of reality that is directly connected to human experience, according to the model. This seems wildly extravagant! If the surfer is the important thing then the effort of nature to continue to evolve these ineffectual branches for all eternity seems to be a gigantic waste of effort. But if the surfer is not important then why is this tiny part of reality there at all? It does nothing but get pushed around.
There is a perhaps bigger problem with the initial conditions. The model is predicated on the premise that the single real classical world is a random element in a statistical ensemble of possibilities. The idea of a statistical ensemble makes good sense when we have the possibility of repeated preparations of similar situations. But when we are speaking about the entire universe it does not seem to make sense to speak of a particular statistical ensemble of universes with some particular density (weight) function if only one of them is ever created. Or are we supposed to think that a whole ensemble of real classical worlds is created, and that “our” real world is just one of them? That would seem to be the more natural interpretation. But I asked David Bohm about that, many years ago, and he insisted that there was, according to his thinking, only one universe.
Bohm was stimulated to construct his model by conversations with Einstein. Yet Einstein rejected the model, calling it “too cheap”.
I asked Bohm what he thought about Einstein’s evaluation, and he said he completely agreed.
Indeed, at the end of his book with Hiley about his model, after finishing the part describing the model, he added two chapters about going beyond the model. He motivated those chapters by references to the efforts that I was making, and that Gell-mann and Hartle were making, to go beyond the Copenhagen interpretation. Gell-mann and Hartle were pursuing the decoherence-consistent-histories approach mentioned above, which has led back to Solvay, and I had proposed a theory of events. The events were real collapses of a wave function that was considered to be ontologically real.
This brings me to the talk of Rudolf Haag. Haag described his theory of events, and mentioned that it still needed twenty years of work. In his written account Haag(1996) mentions that I had proposed essentially the same theory in the seventies, some twenty years ago (Stapp, 1975, 1977, 1979). My twenty years of work on this idea has lead back to Solvay 1927. The problem is always the same: if one wants to make natural use of what nature has told us, namely that the beautiful mathematical formalism works to high precision, then one is led to ascribe to that formalism some ontological reality. But then the condition for the collapses must be spelled out in detail.
It is natural for physicists to try to find purely physical conditions. But in the end there are no adequate natural conditions of this kind: the possibilities are all unnatural and ad hoc. Von Neumann said it all when he showed, back in the thirties, that one could push the boundary between the world described by the quantum formalism and the world described in terms our classical concepts all the way to the boundary between brain and mind without disrupting the predictions of quantum theory, and noted that there is no other natural place to put the boundary, without disrupting the integrity of the theory. In fact, it is, in principle, only if one pushes the boundary all way to the brain-mind interface that one obtains, strictly, the prediction of orthodox quantum theory: otherwise there are rogue collapses that are not associated with knowings.
Of course, pushing the boundary all the way to mind brings mind into our theory of nature. But why on earth should we try to keep mind out— bottled up, ignored, and isolated from the physical world—when we know it is present, and seemingly efficacious, particularly when the intense struggle of physicists to find a rational way of accounting for the observed phenomena led them to the conclusion that the theory of physical reality has the form of a theory about knowings, not the form of a theory about matter. Our aim should be not to bring back moribund matter, which we are well rid of, but to learn how better to understand knowings, within the mathmatical framework provided for them by the quantum formalism.
6. The Two Quantum Processes.
There have been many attempts by physicists to ‘get mind back out of physics’: i.e., to reverse the contamination of physics brought in by Bohr, Heisenberg, Dirac, Pauli and company in 1927. I believe those decontamination efforts have failed, even though I myself have worked hard to achieve it. So I am taking here the other tack, and trying to build a coherent ontology around the orthodox ideas. In particular, I am accepting as basic the idea that there are knowings, and that each such knowing occurs in conjunction with a collapse of the wave function that reduces it to a form concordant with that knowing. I assume that knowings are not associated exclusively with human body/brains. But I shall focus here on these particular kinds of knowings because these are the ones we know most about.
A fundamental fact of orthodox quantum theory is that the evolution of the state of the physical system between the collapse events is mathematically very different from the evolution of this state associated with the collapses: the former are “unitary” and “local”, whereas the latter are neither.
The “unitarity” property means several things. On the one hand, it means that the evolution is in some sense no change at all: the internal or intrinsic structure of the state is unaltered. One can imagine that only the ‘mode of description’ of the state is changed, not the state itself. Indeed, that point of view is very often adopted in quantum theory, and is the one I shall adopt here. (See the next section.)
The “unitarity” property also means that the transformation operator that changes the state at an earlier time to the state at a later time does not depend on that initial (or final) state: there is, in this sense, in connection with the unitary part of the process of evolution, no self reference!
According to the orthodox interpretation, there is no experiential reality associated with the unitary part of the evolution, which is the part between the observations: there is no essential change, and no self reference, and hence, reasonably enough, no experience.
Experiences are associated only with the nonunitary parts of the evolution: the part associated with observations. For that part there is essential change, and the transformation operator analogous to the one defined for the unitary case would depend on the state upon which it acts. Thus there would be, in this sense, self-reference. This self reference (nonlinearity) plays a key role in the dynamics associated with observation. It is a special kind of self reference that has no counterpart in classical mechanics.
In the classical approximation to the quantum dynamics only the unitary part of the dynamical evolution survives. So from a quantum mechanical point of view it would be nonsensical to look for mind in a system described by classical physics. For classical physics is the result of an approximation to the full dynamical process of nature that eliminates the part of that process that orthodox quantum theory says is associated with our experiences.
7. The Two Times: Process Time and Mathematical Time
The distinctions between the two processes described above is central to this work. It can be clarified, and made more vivid, by explaining how these two processes can be considered to take place in two different times.
In quantum theory there are two very different kinds of mathematical objects: vectors and operators. Operators operate on vectors: the action of an operator on a vector changes it to another (generally different) vector.
Given an operator, and a vector that represents a state of a physical system (perhaps the entire universe), a number is formed by first letting the operator act on the vector, and then multipling the resulting vector by the (complex conjugate of the) original vector. This number is called the “expectation value of the operator in the state represented by that vector”.
Modern field theories are generally expressed in the so-called Heisenberg picture (rather than the so-called Schroedinger picture). I shall follow that practice.
In ordinary relativistic quantum field theory each spacetime point has a collection of associated operators. (I gloss over some technicalities that are not important in the present context.)
Consider the collection of operators $`C(t)`$ formed by taking all of the operator associated with all of the spacetime points that lie at fixed time $`t`$.
This set $`C(t)`$ is “complete” in the sense that the expectation values of all the operators of $`C(t)`$ in a state $`S`$ determine all the expectation values of the all the operators in $`C(t^{})`$ in the state $`S`$, for every time $`t^{}`$. The operators in $`C(t)`$ are related to those in $`C(t^{})`$ by a unitary transformation. Whether one represents the state $`S`$ by giving the expectation values in this state of all the operators in $`C(t)`$, or of all the operators in $`C(t^{})`$, is very much like choosing to use one coordinate system or another to describe a given situation: it is just a matter of viewpoint. The unitary transformation that relates the collection of operators $`C(t)`$ to the collection of operators $`C(t^{})`$ is essentially the unitary transformation associated with the Schroedinger-directed temporal evolution. It is in this sense that the unitary transformation that generates evolution in the “mathematical time” $`t`$ is relatively trivial. It is deterministic, continuous, invertible, and independent of the state $`S`$ of the physical system upon which the operators act.
But giving the complete set of all the operators associated with all the points in spacetime says nothing at all about the evolution of the state! Saying everything that can be said about the operators themselves, and about evolution via the unitary part of the transformation has merely fixed the mode of description, and the connections between different modes of description. It has not said anything about the all-important evolution of the state.
The state undergoes a sequence of abrupt jumps:
$$\mathrm{}S_iS_{i+1}S_{i+2}\mathrm{}.$$
The situation can be displayed graphically by imagining that $`i`$ is the imaginary part of the complex time $`t`$: the evolution proceeds at constant imaginary part of $`t`$ equal $`i`$, and at constant $`S_i`$, with the real part of $`t`$ increasing until it reaches a certain ‘jump time’ $`t_i`$, whereupon there is an abrupt quantum jump to a new constant state $`S_{i+1}`$, and a new constant imaginary part of $`t`$ equal to $`i+1`$, and the evolution then again proceeds with increasing real part of $`t`$ until the next ‘jump value’ $`t_{i+1}`$ is reached, and then there is another jump up to a new value, $`i+2`$, of the imaginary part of $`t`$. Thus the full process is represented in complex time as a line having the shape of a flight of steps. The horizontal segments where the real part of time is increasing represent the trivial unitary parts of the process, which correspond merely to changing the viewpoint, or mode of description, with the state remaining fixed, and with no associated experience. The vertical segments correspond to increases in ‘process time’. These are the parts associated with experience. (This identification of the vertical axis with imaginary time is purely pedagogical)
The present endeavour is to begin to fill in the details of the process associated with the increases in the vertical coordinate, process time, which is the time associated with the nontrivial part of the evolutionary process, and with experience. The final phase of each vertical segment is the fixing of a new knowing. But some process in Nature must bring about this particular fixing: this process is represented by motion along the associated vertical segment.
8. Quantum Ontology
What is the connection between the our experiences and the physicists’ theoretical description of the physical world?
The materialist position is that each experience is some aspect of the matter from which the physicists say the world is built.
But the physical world certainly is not built out of the substantive matter that was postulate to exist by classical mechanics. Such stuff simply does not exist, hence our experiences cannot be built out of it.
The quantum analog of physical reality, namely the quantum state S of the universe, is more like information and ideas than like the matter of classical physics: it consist of accumulated knowledge. It changes when human knowledge changes, and is tied to intentionality, as I shall explain presently.
Orthodox classical mechanics is naturally complete in itself: the physical world represented in it is dynamically complete, and there is no hint within its structure of the existence of anything else.
Orthodox quantum mechanics is just the opposite: the physical world represented by it is not dynamically complete. There is a manifest need for a process that is not represented within the orthodox description.
In orthodox quantum mechanics the basic realities are our knowings. The dynamics of the physical world represented in the orthodox quantum formalism is not internally complete because there is, in connection with each knowing, a collapse process that appears in the orthodox theory as a “random choice” between alternative possibilities: contemporary quantum theory provides no description of the process that selects the particular knowing that actually occurs.
This collapse process, which is implemented by a nonunitary/nonlocal transformation, must specify two things that the contemporary machinery of quantum theory does specify:
1. It must specify an experience E, associated with a corresponding projection operator P(E), such that the question is put to Nature: “Does E occur?”
2. It must then select either the answer ‘yes’, and accordingly change the current state (i.e., density matrix) S to the state PSP, or select the answer ‘no’, and accordingly replace S by (1-P)S(1-P). The probability of answering ‘yes’ is Trace PSP/TraceS; the probability of answering ‘no’ is Trace (1-P)S(1-P)/Trace S.
In the orthodox pragmatic interpretation the step 1 is achieved by a human experimenter’s putting in place a device whose observed response will determine whether the system that is being examined has a certain property specified by P(E): the occurrence of experience E will confirm, basically on the basis of past experience, that future experiences will be likely to conform to the answer “Yes, the system has property P(E).”
According to the orthodox viewpoint, the experimenter stands outside the quantum system being examined, and the device is regarded as an extension of himself.
Step 2 is then achieved by appeal to a random selection process that picks the answer ‘Yes’ or ‘No’ in accordance with a statistical rule. This selection process (also) is not represented within the orthodox Hilbert space description.
How can these two steps be comprehended in a rational, minimalistic, naturalistic way?
9. Von Neumann’s Process I.
The first step in the nonunitary process is what von Neumann called Process I, in contrast to his Process II, which is the normal unitary evolution.
Process I consists of “posing the next question”. We can suppose that the possible answers are $`Yes`$ or $`No`$. Nature will then answer the question. The crucial requirement is that the answer $`Yes`$ must be recognizably different from the answer $`No`$, which includes no recognizable answer at all.
In practice a human being creates the conditions for Process I, and it is he who recognizes the positive response: this recognition is a knowing.
For example, the observer may know that he is seeing the pointer on the device—that he himself has set in plac—resting definitely between the numbers 6 and 7 on the dial. This is a complex thing that he knows. But knowings can be known, at least in part, by later knowings. This is the sort of knowing that science is built upon. Of course, all one can really know is that one’s experiences are of a certain kind, not that there really is a pointer out there. So we expect the knowings to correspond in some way to a brain activity of some sort, which under normal circumstances would be an effect of something going on outside the brain.
Von Neumann accepts the statistical character of the theory, and his Process I is statistical in character: his Process I covers merely the posing of the question, and the assignment of a statistical weight to each of the recognizably different alternative possible answers. It does not cover the subsequent process whereby Nature delivers an answer.
My basic commitment here is to accept the quantum principles as they are, rather than to invent new principles that would allow us to exclude mind from Nature’s dynamics. So I accept here, ontologically as well as pragmatically, that the possibilities singled out in Process I are defined by different ‘possible knowings’.
Two important features of the von Neumann Process I are:
1) It produces an abrupt increase in entropy. If the state of the universe prior to the process is well defined, so that the entropy (with no coarse graining) is zero, then if, for example, the Process I gives a statistical mixture with $`50\%Yes`$ and $`50\%No`$, the entropy will jump to $`ln2`$.
2) It is quasi-local. There will be nonlocal aspects extending over the size of the examined system, but no long-range nonlocal effects of the kind mentioned in section 3. That is, there will be, for the Process I associated with a human knowings, brain-sized nonlocal effects associated with defining the question, but no nonlocal effects extending outside the body/brain. Thus Process I is, for human knowings, a human process, not a global one. \[Technically, the reason that there is no effect on far-away systems is that such an effect is computed by performing a ‘trace’ over the degrees of freedom of the nearby system (e.g., the brain/body), but von Neumann’s Process I is achieved by dropping out interference terms between the alternative possible answers, and that operation leaves this trace unaltered.\]
Process I lies at the root of measurement and mind-body problems. In approaches that try to explain Process I in purely physical terms, with knowings not mentioned, but rather forced to follow from physically characterized processes, the answers tend to assert either that:
1), the wave function of a particle occasionally just spontaneously reduces to a wave function that is essentially zero except over a small region, or that
2), what is not measurable in practice (i.e., via some practicable procedure) does not exist in principle: if it is impractical to detect an interference term them it does not exist.
This latter sort of rule is certainly justified in a pragmatic approach. But most physicists have been reluctant to accept such rules at the ontological level. Hence the pragmatic approach has won by default.
From the present standpoint, however, the basic principle is that Nature responds only to questions that are first posed, and whose answers are possible knowings, or are things of the same general ontological type as possible knowings. \[The needed generalization will be discussed later, after the knowings themselves have been discussed.\]
But the important immediate point is that the quantum dynamics is organized so as to put knowings, and their possible generalizations, into the central position.
All such knowings contribute to the general self knowledge of the universe, which is represented by the (Hilbert-space) state $`S`$ of the universe.
10. Origin of the Statisical Rules
Without loss of generality we can suppose that each posed question is a single question answered with a single reply, Yes or No. Then the usual (density matrix) formalism allows the reduction process to be formalized in the following way. The state of the universe is represented by the density matrix (operator) $`S`$. The question is represented by the projection operator $`P`$: $`P^2=P`$. Then the von Neumann Process I is represented by
$$S[PSP+(1P)S(1P)+PS(1P)+(1P)SP]PSP+(1P)S(1P).$$
The subsequent completion of the reduction is then represented by
$$[PSP+(1P)S(1P)]PSP\text{ or }(1P)S(1P)$$
where the fractions of the instances giving the two results are:
$$(TrPSP)/(TrPSP+Tr(1P)S(1P))\text{ for }PSP$$
and
$$(Tr(1P)S(1P))/(TrPSP+Tr(1P)S(1P))\text{ for }(1P)S(1P).$$
Here Tr represents the trace operation, which instructs one to sum up the diagonal elements $`<i|M|i>`$ of the matrix $`<j|M|i>`$ that represents the operator, for some complete orthonormal set of states $`|i>`$. \[The value of the trace does not depend upon which complete orthonormal set is used, and, for any two (bounded) operators $`A`$ and $`B`$, $`TrAB=TrBA`$. Using this property, and $`P^2=P`$, one sees that the denominator in the two equations just given reduces to $`TrS`$. A partial trace is given by the same formula, but with the vectors $`|i>`$ now forming a complete orthonormal basis for part of the full system\]
I believe it is perfectly acceptable to introduce an unexplained random choice or selection in a pragmatically formulated theory. But in a rational ontological approach there must be some sufficient cause or reason for a selection to pick out $`Yes`$ rather than $`No`$, or vice versa. In view of the manifestly nonlocal character of the reduction process, there is, however, no reason for this selection to be determined locally.
Quantum theory does not specify what this selection process is, and I do not try to do so. But given our ignorance of what this process is, it is highly plausible that it should give statistical results in accord with the rules specified above. The reason is this.
If the selection process depends in some unknown way on things outside the system being examined then the fractions ought to be invariant under a huge class of unitary transformations $`U`$ of the state $`S`$ that leave $`P`$ invariant, for these transformations are essentially the trivial rearrangements of the distant features of the universe:
$$SUSU^1U^1PU=P.$$
Since the statistical description after the Process I has occurred is essentially similar to the classical statistical description one should expect $`S`$ and $`P`$ (or $`(1P)`$) to enter linearly. But the trace formulas are the only possibilities that satisfy these conditions, for all $`U`$ that leave $`P`$ invariant.
The point here is only that if the actual selection process depends in a complicated and unknown way on distant uncontrolled properties of $`S`$ then the long-term averages should not be sensitive to basically trivial rearrangements made far away.
This assumption is quite analogous to the assumption made in classical statistical analysis—which has a deterministic underpinning—that in the absence of information about the full details one should integrate over phase space without any weighting factor other than precisely one in those degrees of freedom about which one has no information. Thus the quantum statistical rules need not be regarded as some mysterious property of nature to have unanalysable tendencies to make sudden random jumps: it is rational to suppose, within an ontological setting, that there is a causal, though certainly nonlocal, underpinning to these choices, but that we do not yet know anything about it, and hence our ignorance must be expressed by the uniquely appropriate averaging over the degrees of freedom about which we have no knowledge.
The effective randomness of Nature’s answers does not render the our knowings nonefficacious. Our knowings can enter the dynamics in a strongly controlling way through the choice of the questions, even though the answers to these questions are effectively random. The formation of the questions, in Process I, is human based, even though the selection of the answers is presumably global. This will be discussed presently.
The theory is naturalistic in that, although there are knowings, there are no soul-like experiencers: each human stream of consciousness belongs to a human body/brain, which provides the structure that links the experiences of that stream tightly together.
11. Brains and Experiences.
The dynamics of the theory is organized around the collection of operators P(E) that connect experiences E to their effects on the state S of the universe. I describe here my conception of this connection, and of the dynamical differences between the quantum version of this connection and its classical analog.
Each experience is supposed to be one gestalt that, like a percept, “comes totally or not at all”, in the words of Wm. James (1987, p. 1061). This experience is part of a sequence whose elements are, according to James, linked together in two ways: each consists of a fringe that changes only very slowly from one experience to the next, and a focal part that changes more rapidly. The fringe provides the stable contextual framework: it is the background within which the focal part is set, and it carries the experience of a persisting historical self together with the longer-term motivations. The focal part has a sequence of temporally displaced components that, like the rows of a marching band currently in front of the viewing stand, include some that are just coming into consciousness, some that are at the center, and some that are fading out. The occurrence together, in each instantaneous experience, of this sequence of temporal components is what allows comparisons to be made within a conscious experience. Judgments about courses of events can be parts of an experience. The experiences are organized, in the first instance, around experiences of the person’s body in the context of his environment, and later also around abstractions from those primitive elements. These matters are discussed in more detail in chapter VI of my book (Stapp, 1993).
Each experience normally has a feel that includes a prolongation of the current sequence of temporal components: a prolongation that, on the basis of past experience, is likely to be embedded in the “current sequence of temporal components” of some later experience in the linked sequence of experiences.
Each experience E induces a change of the state of the universe S–$`>`$ PSP. This change will, I believe, for reasons I will describe presently, be a specification of the classical part (see below) of the electro-magnetic field within the brain of the person. This specification will fix the activities of the brain in such a way as to produce a coordinated activity that will generally produce, via a causal chain in the physical world (i.e., via the causal evolution specified by the Schroedinger or Heisenberg equations of motion), the potentialities for the next experience, $`E^{\prime}`$. That causal chain may pass, via the motor cortex, to muscle action, to effects on the environment, to effects on sensors, to effects on the brain, and finally to a set of potentialities for various possible prolongations of the current sequence of temporal components.
Then a selection must be made: one of the potential experiences will become actual.
But this description glosses over an essential basic problem: How do the possible experiences E and the associations E–$`>`$ P(E) get characterized and created in the first place? There is an infinite continuum of projection operators P such that S–$`>`$ PSP would generate a new state. Why are some particular P’s given favored status, and why are these favored P’s associated with “experiences”?
The favored status is this: some one of these favored P’s will be picked out from the continuum of possibilities, in conjunction with the next phase of the dynamical process. This next phase is the putting to Nature of the question: Does the current state S jump to PSP or not?
To provide some basis for getting the universe going in a way that tends to produce stable or enduring structure, instead of mere chaotic random activity, I assume that a basic characteristic of the underlying dynamics is to select only projectors P that impose a certain repetitiveness on the dynamical structure. These qualities of repetitiveness are assumed to be fundamental qualities of the projectors. But each such quality is a characteristic that is more general in its nature than any particular realization of it. These general qualities I call “feels”: they encompass all human experiences, but extend far beyond.
Thus the basic assumption is that certain projectors P have “feels”, but most do not, where a “feel” is a generalized version of a human experience. Each feel is characterized by a quality of repetitiveness, and the actualization of this feel entails the actualization of some particular realization of that quality or pattern of repetitiveness within the dynamical structure that constitutes the universe. This actualization is expressed by the transformation S–$`>`$ PSP where P = P(E), and E is the feel: it is the quality of the repetitiveness that is being actualized.
This general tendency to produce repetitive spatio-temporal patterns carries over to human experience, and will, I believe, be greatly enhanced by natural selection within the biological sphere. Thus the selection, from among the proffered potential experiences, of the next $`E^{\prime}`$, will be such as to favor sequences E–$`>`$ P(E)–$`>`$ $`E^{\prime}`$ such that $`E^{\prime}`$ is either the same as E, or at least the same as E in some essential way. Thus experiences, and their more general ontological cousins, feels, are tied to the generation of self-reproducing structures. This generation of regenerating/reverberating stable structures underlies quantum dynamics, in the form of the creation by the dynamics of stable and quasi-stable particles, and extends beyond human beings, both to biological systems in general, and even to the overall organization of the universe, according to the ideas being developed here.
As regards this repetitiveness, it is undoubtedly pertinent that classical mechanics is formulated basically in space-time, with lawfulness expressed essentially by a static or quasi-static quality of momentum-energy. But the essence of the transition to quantum theory is precisely that this static quality of momentum-energy is replaced by a repetitive quality, by a characteristic oscillatory behavior: quantum theory is basically about repetitive regeneration.
In line with all this, I assume that the projection operators P act by specifying the (expectation values of the) quantum electromagnetic field. There are many reasons for believing that this is the way nature operates:
1. The EM fields naturally integrate the effects of the motions of the billions of ions and electrons that are responsible for our neural processes. Thus examining the EM fields provides a natural way of examining the state of the brain, and selecting a state of the EM field of the brain provides a natural way of controlling the behavior of the brain.
2. The EM field has marvelous properties as regards connections to classical physics. The bulk of the low-energy EM state automatically organizes itself into a superposition of “coherent states”, each of which is described by a classical electromagnetic field, and which enjoys many properties of this classical electromagnetic field. These “classical” states are brought into the dynamical structure in a natural way: the condition that each actually realized state will correspond to essentially a single one of these classically describable coherent states is what is needed to deal effectively, in a physically realistic way, with the infra-red divergence problem in quantum electro-dynamics. \[See Stapp (1983), and Kawai and Stapp (1995)\]
3. These “classical” states (coherent states) of the quantum EM field are robust (not easily disrupted by the thermal and random noises in a warm wet brain): they are ideal for use in generating self-reproducing effects in a warm, wet, noisy environment. \[See Stapp (1987), (1993, p.130), and Zurek (1993)\]
4. These classical states are described by giving the amplitudes in each of the oscillatory modes of the field: spacetime structure arises from phase relationships among the different oscillatory modes.
Although the theory being developed here maintains a close connection to classical physics, its logical and ontological structure is very different. In classical physics the dynamics is governed entirely by myopic local rules: i.e., by rules that specify the evolution of everything in the universe by making each local variable respond only to the physical variables in its immediate neighborhood. Human experiences are thus epiphenomenal in the sense that they do not need to be recognized as entities that play any dynamical role: the local microscopic description, and the local laws, are sufficient to specify completely the evolution of the state of the physical universe. Experiential gestalts can be regarded as mere effects of local dynamical causes, not as essential elements in the causal progression.
But the most profound lesson about nature learned in the twentieth century is that the empirically revealed structure of natural phenomena cannot be comprehended in terms of any local dynamics: natural phenomena are strictly incompatible with the idea that the underlying dynamics is local.
The second most profound lesson is that the known observed regularities of natural phenomena can be comprehended in terms of a mathematical model built on a structure that behaves like representations of knowledge, rather than representations of matter of the kind postulated to exist in classical mechanics: the carrier of the structure that accounts for the regularities in nature that were formerly explained by classical physical theory is, according to contemporary theory, more idealike than matterlike, although it does exhibit a precise mathematical structure.
The third essential lesson is that this new description, although complete in important practical or pragmatic ways, is, as an ontological description, incomplete: there is room for additional specifications, and indeed an absolute need for additional specifications if answers are to be given to questions about how our experiences come to be what they are. The presently known rules simply do not fix this aspect of the dynamics. The purpose of this work is to make a first stab at filling this lacuna.
One key point, here, is that brains are so highly interconnected that it will generally be only large macroscopic structures that have a good chance of initiating a causal sequence that will be self-reproductive. So each possible experience E should correspond to a P(E) that creates a macroscopic repetitiveness in the states of a brain.
A second key point is that our knowings/experiences can be efficacious not only in the sense that they select, in each individual case, what actually happens in that case, but also in the statistical sense that the rules that determine which questions are put to Nature can skew the statistical properties, even if the answers to the posed questions follow the quantum statistical rules exactly. I turn now to a discussion of this point and its important consequences.
12. Measurements, Observations, and Experiences.
A key question is whether, in a warm wet brain, collapses associated with knowings would have any effects that are different from what would be predicted by classical theory, or more precisely, by a Bohm-type theory. Bohm’s theory yields all the predictions of quantum theory in a way that, like classical mechanics, makes consciousness epiphenomenal: the flow of consciousness is governed deterministically (but nonlocally) by a state of the universe that evolves, without regard to consciousness, in accordance with local deterministic equations of motion. Bohm’s theory, like classical physics, tacitly assumes a connection between consciousness and brain activity, but the details of this connection are not specified.
The aim of the present work is to specify this connection, starting from the premise that the quantum state of the universe is essentially a compendium of knowledge, of some general sort, which includes all human knowledge, as contrasted to something that is basically mechanical, and independent of human knowledge, like the quantum state in Bohmian mechanics.
I distinguish a “Heisenberg collapse”, S–$`>`$ PSP or S–$`>`$ (1-P)S(1-P), from a “von Neumann collapse” S–$`>`$\[PSP + (1-P)S(1-P)\]. The latter can be regarded as either a precursor to the former, or a representation of the statistical effect of the collapse: i.e., the effect if one averages, with the appropriate weighting, over the possible outcomes.
This latter sort of averaging would be pertinent if one wanted to examine the observable consequences of assuming that a certain physical system is, or alternatively is not, the locus of collapses.
This issue is a key question: Are there possible empirical distinctions between the behaviors of systems that are—or alternatively are not—controlled by high-level collapses of the kind that this theory associates with consciousness? Can one empirically distinguish, on the basis of theoretical principles, whether collapses of this kind are occurring within some system that is purported to be conscious? This question is pertinent both to the issue of whether some computer that we have built could, according to this theory, be conscious, and also to the issue of whether our own behavior, as viewed from the outside, has aspects that reveal the presence of the sort of quantum collapses that this theory associates with consciousness.
This question about differences in behaviour at the statistical level feeds also into the issue of whether being conscious has survival value. If behaviour has, on the average, no dependence on whether or not collapses occur in the system then the naturalistic idea that consciousness develops within biological systems due to the enhancement of survival rates that the associated collapses provide would become nonsense. Indeed, that idea is nonsense within classical physics, for exactly this reason: whether conscious thoughts occur in association with certain physical activities makes absolutely no difference to the microlocally determined physical behavior of the system.
There are certain cases in which a von Neumann collapse, S–$`>`$ \[PSP + (1-P)S(1-P)\], would produce no observable effects on subsequent behavior. To understand these conditions let us examine the process of measurement/observation.
If one separates the degrees of freedom of the universe into those of “the system being measured/observed”, and those of the rest of the universe, and writes the state of the universe as
$$S=|\mathrm{\Psi }><\mathrm{\Psi }|$$
with
$$|\mathrm{\Psi }>=\sum _i\varphi _i\chi _i,$$
where the $`\varphi _i`$ are states of “the system being measured/observed”, and the $`\chi _i`$ are states of the rest of the universe, then since we observers are parts of the rest of the universe it is reasonable to demand that if someone can have an experience E then there should be a basis of orthonormal states $`\chi _i`$ such that the corresponding projector P(E) is defined by
$$P(E)\varphi _i=\varphi _i$$
for all $`i`$,
$$P(E)\chi _i=\chi _i$$
for $`i`$ in $`I(E)`$, but
$$P(E)\chi _i=0,$$
otherwise, where $`I(E)`$ is the set of indices $`i`$ that label those states $`\chi _i`$ that are compatible with experience E.
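A minimal numerical sketch of this definition may be helpful; the dimensions and the index set I(E) below are assumed purely for illustration. The projector acts as the identity on the measured/observed system and keeps only those states $`\chi _i`$ of the rest of the universe that are compatible with E:

```python
import numpy as np

# Illustrative construction of P(E) = 1_sys (x) sum_{i in I(E)} |chi_i><chi_i|.
d_sys, d_env = 2, 4
I_E = [0, 1]                                   # hypothetical index set I(E)

rng = np.random.default_rng(2)
phis = rng.normal(size=(d_env, d_sys))         # branch states phi_i of the measured system
chis = np.eye(d_env)                           # orthonormal states chi_i of "the rest"

psi = sum(np.kron(phis[i], chis[i]) for i in range(d_env))   # |Psi> = sum_i phi_i chi_i

Pi_env = sum(np.outer(chis[i], chis[i]) for i in I_E)
P_E = np.kron(np.eye(d_sys), Pi_env)           # identity on the system, projector on "the rest"

assert np.allclose(P_E @ P_E, P_E)             # P(E) is a projector
kept = sum(np.kron(phis[i], chis[i]) for i in I_E)
assert np.allclose(P_E @ psi, kept)            # only the branches with i in I(E) survive
print("P(E) keeps exactly the branches compatible with E")
```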
A “good measurement” is defined to be an interaction between the system being measured and the rest of the universe such that the set of states $`\varphi _i`$ defined above with $`i`$ in $`I(E)`$ span a proper subspace of the space corresponding to the measured system. In this case the knowledge that $`i`$ is in the set $`I(E)`$ ensures that the state of the measured system lies in the subspace spanned by the set of states $`\varphi _i`$ with $`i`$ in $`I(E)`$. That is, experience E would provide knowledge about the measured system.
Let P_ be the projector that projects onto the subspace spanned by the set of states $`\varphi _i`$ with $`i`$ in $`I(E)`$. Then a von Neumann collapse with P_ in place of P would be identical to the von Neumann collapse S–$`>`$ \[PSP + (1-P)S(1-P)\]. But then the observer would be unable to determine whether a collapse associated with P_ occurred in the system, unbeknownst to him, or whether, on the contrary, the definiteness of the observed outcome was brought about by the collapse associated with his own experience. This is essentially von Neumann’s conclusion.

But why should an actual collapse associated with the measured/observed system correspond in this special way to a subsequent experience of some human being? Why should an actually occurring P_ be such as to ensure an equivalence between P_ and a P(E)?
Von Neumann’s approach to the measurement problem suggests that such a connection would exist.
In both the von Neumann and Copenhagen approaches the measuring device plays a central role. Different perceptually distinguishable locations of some “pointer” on the device are supposed to become correlated, during an interaction between the measured system and the measuring device, to different orthogonal subspaces of the Hilbert space of the measured system. This perceptual distinctness of the possible pointer positions means that there is a correlation between pointer locations and experiences. That connection must be explained by the theory of consciousness, which is what is being developed here. But why, ontologically, as opposed to epistemologically, should the projector P_ in the space of the measured/observed system project onto a state that is tied in this way to something outside itself, namely the location of a pointer on a measuring device with which it might have briefly interacted at some earlier time?
Von Neumann did not try to answer this question ontologically. If the real collapse were in the brain, and it corresponded to seeing the pointer at some one of the distinguishable locations, then from an epistemological point of view the effect of this collapse would be equivalent to applying P_ to the state of the measured/observed system.
If one works out from experiences and brains, in this way, one can formulate the collapses in terms of collapses out in the world, instead of inside the brain, and largely circumvent (rather than resolve) the mind-brain problem. Then the equivalence of the experience to the collapse at the level of the measured/observed system would become true essentially by construction: one defines the projectors at the level of the measured/observed system in a way such that they correspond to the distinct perceptual possibilities.
But from a non-subjectivist viewpoint, one would like to have a characterization of the conditions for the external collapse that do not refer in any way to the observers.
One way to circumvent the observers is to use the fact that the pointer interacts not only with observers but also with “the environment”, which is imagined to be described by degrees of freedom that will never be measured or observed. The representation of S given above will again hold, with the $`\varphi _i`$ now representing the states of the system being measured plus the measuring device, and the $`\chi _i`$ corresponding to states of the environment.
The interaction between the pointer and the environment should quickly cause all the $`\chi _i`$ that correspond to different distinct locations of the pointer to become orthogonal.
All observable projectors P are supposed to act nontrivially only on the states $`\varphi _i`$: they leave unchanged all of the environmental states $`\chi _i`$. But then all observable aspects of the state S reside in tr S, where tr stands for the trace over the environmental degrees of freedom.
Let $`P_i`$ be a projector onto an eigenstate of tr S. Suppose one postulates that each of the allowed projectors P_ is a sum over some subset of the $`P_i`$, or, equivalently, that each possible P_ commutes with tr S, and is unity in the space of the degrees of freedom of the environment.
This rule makes each allowed P project onto a statistical mixture of pointer locations, in cases where these locations are distinct. So it gives the sort of P’s that would correspond to what observers can observe, without mentioning observers.
The P’s defined in this way commute with S. But then the effect of any von Neumann reduction is to transform S into S: the von Neumann reduction has no effect at all. The collapse would have no effect at all on the average over the alternative possible answers to the question of whether or not the collapse occurs. This nondependence of the average is of course an automatic feature of classical statistical mechanics.
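For completeness, the one-line algebra behind this statement, using $`P^2=P`$ and the assumed commutativity $`PS=SP`$ (so that $`PSP=P^2S=PS`$), is:

$$PSP+(1-P)S(1-P)=S-PS-SP+2PSP=S-PS-SP+2PS=S.$$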
The theory being described here is a development of von Neumann’s approach in the sense that it gives more ontological reality to the quantum state than the Copenhagen approach, and also in the sense that it follows von Neumann’s suggestion (or what Wigner describes as von Neumann’s suggestion) of bringing consciousness into the theory as a real player. But it differs from the models discussed above that are based on his theory of measurement, for it does not associate collapses with things like positions of pointers on measuring devices. The projectors P(E) associated with experiences E are defined in terms of classical aspects of the electromagnetic fields in the brains of observers. That would be in line with von Neumann’s general idea, but he did not go into details about which aspects of the brain were the pertinent ones. Rather he circumvented the issue of the mind-brain connection by centering his attention on the external devices and their pointer-type variables.
The classical aspects of the EM field are technically different from pointers because their interaction with the environment is mainly their interaction with the ions and electrons of the brain, and these are the very interactions that both create these aspects of these fields, and that are in part responsible for the causal effects of the experiences E through the action of the projectors P(E). So what was formerly an uncontrolled and unobservable environment that disturbed the causal connections is now the very thing that creates the coherent oscillatory structure through which our experiences control our brains.
The effects of this switch will be examined in the next section.
13. Efficacy of Knowings.
A formalism for dealing with the classical part of the electro-magnetic field, within quantum electrodynamics (QED), has been developed in Stapp (1983) and Kawai and Stapp (1995), where it was shown that this part dominates low-energy aspects, and is exactly expressed in terms of a unitary operator that contains in a finite way the terms that, if not treated with sufficient precision, lead to the famous infrared divergence problem in QED. This classical part is a special kind of quantum state that has been studied extensively. It is a so-called coherent state of the photon field. Essentially all of the low-energy contributions are contained within it, and the effects of emission and re-absorption are all included. However, different classically conceived current sources produce different “classical fields”, and hence the full low-energy field is a quantum superposition of these classical states.
Each such classical state is a combination (a product) of components each of which has a definite frequency. All of the electrons and ions in the brain contribute to each of these fixed frequency components, with an appropriate weighting determined by that frequency. Thus the description is naturally in the frequency domain, rather than in spacetime directly: spatial information is encoded in quantum phases of the various fixed frequency components. Each value is represented, actually, by a gaussian wave packet centered at that value, in a certain space, and hence neighboring values are represented by overlapping gaussian wave packets.
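For readers who want a concrete handle on these coherent states, the following small sketch (with a truncated Fock space and an arbitrarily chosen amplitude, both assumed purely for illustration) constructs a coherent state of a single mode and verifies that it is an approximate eigenstate of the annihilation operator, which is the sense in which it behaves like a classical field amplitude:

```python
import numpy as np
from math import factorial

# Illustrative coherent state of one field mode in a truncated Fock basis.
n_max = 40
alpha = 1.5 + 0.5j

a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)    # annihilation operator, Fock basis

coeffs = np.array([alpha**k / np.sqrt(float(factorial(k))) for k in range(n_max)])
psi = np.exp(-abs(alpha)**2 / 2) * coeffs          # |alpha> = e^{-|a|^2/2} sum_n a^n/sqrt(n!) |n>

print(np.linalg.norm(psi))                         # ~ 1 (truncation error is tiny)
print(np.linalg.norm(a @ psi - alpha * psi))       # ~ 0: a|alpha> = alpha|alpha>
```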
To exhibit a basic feature I consider a system of just three of these states. Suppose state 2 has all of the correct timings to elicit some coordinated actions. It represents in this simple model the state singled out by the projector P = P(E). Suppose it is dynamically linked to some motor state, represented by state 3: the dynamical evolution carries 2 to 3. Let state 1 be a neighbor of state 2 such that the dynamical evolution mixes 1 and 2. (I use here the Schroedinger picture, for convenience.)
The transition from 2 to 3 will tend to depopulate the coupled pair 1 and 2. This depopulation of the system 1 and 2 will occur naturally whether or not any von Neumann collapse associated with P occurs. The question is: Can a von Neumann collapse associated with P affect in a systematic way the rate of depopulation from the coupled pair 1 and 2? The answer is “Yes”: it can speed up the emptying of the amplitude in the system 1 and 2 into the system 3 that represents the motor action. This means that the effect of repeatedly putting to nature the question associated with P can have the effect of producing the motor action more quickly than what the dynamics would do if no question was put: putting the question repeatedly can affect the probabilities, compared to Bohm’s model, in which there are no collapses. The quantum rules regarding the probability of receiving a ‘Yes’, or alternatively a ‘No’, are strictly observed.
To implement the dynamical conditions suppose the initial state is represented, in the basis consisting of our three states 1, 2, and 3, by the Hermitian matrix S with $`S_{1,1}=x`$, $`S_{2,2}=y`$, $`S_{1,2}=z`$, $`S_{2,1}=z^{*}`$, and all other elements zero. Suppose the coupling between states 2 and 3 is represented by the unitary matrix U with elements $`U_{1,1}=1`$, and
$$U_{2,2}=U_{2,3}=U_{3,3}=-U_{3,2}=r=(2)^{-1/2},$$
with all other elements zero.
The mixing between the states 1 and 2 is represented by the unitary matrix M with
$$M_{1,1}=c,M_{1,2}=s,M_{2,1}=-s^{*},M_{2,2}=c^{*},M_{3,3}=1,$$
with all other elements zero. Here $`c^{*}c+s^{*}s=1`$.
The initial probability to be in the state 2 is given by Trace PS = y, where P projects onto state 2. The action of U depopulates state 2: $`TracePUSU^{-1}=y/2`$.
Then the action of the mixing of 1 and 2 generated by M brings the probability of state 2 to
$$TracePMUSU^{-1}M^{-1}=(xs^{*}s)+(yc^{*}c/2)-zcs^{*}r-z^{*}c^{*}sr,$$
where $`r`$ is one divided by the square root of 2.
For the case c = s = r this gives for the probability of state 2:
$$(xs^{*}s)+(yc^{*}c/2)-zcs^{*}r-z^{*}c^{*}sr=x/2+y/4-zr/2-z^{*}r/2.$$
Since states 1 and 2 are supposed to be neighbors the most natural initial condition would be that the feeding into these two states would be nearly the same: the initial state would be a superposition of the two states with almost equal amplitudes. This would make x = y = z = $`z^{*}`$. Then the probability of state 2 becomes
$$prob=y/2+y/4-yr.$$
Thus the effect of the mixing M is to decrease below y/2 the probability in state 2, the state that feeds the motor action.
If the question E, with P(E)= P, is put to nature before U acts, then the effect of the corresponding von Neumann reduction is to set z to zero. Hence in this case
$$prob=y/2+y/4,$$
and the probability is now increased from y/2.
Thus putting the question to Nature speeds up the motor response, on the average, relative to what that speed would be if the question were not asked.
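The whole calculation is short enough to check numerically. The sketch below, with the illustrative choices x = y = z and c = s = r assumed as in the text, reproduces the two probabilities: y/2 + y/4 − yr without the prior question, and y/2 + y/4 with it:

```python
import numpy as np

# Numerical check of the three-state model above, with assumed illustrative
# values x = y = z = 0.2 and c = s = r = 1/sqrt(2).
r = 1.0 / np.sqrt(2.0)
x = y = z = 0.2

S = np.array([[x, z, 0.0],      # initial state in the basis {1, 2, 3}
              [z, y, 0.0],
              [0.0, 0.0, 0.0]])
U = np.array([[1.0, 0.0, 0.0],  # coupling of state 2 to the motor state 3
              [0.0,   r,   r],
              [0.0,  -r,   r]])
M = np.array([[  r,   r, 0.0],  # mixing of the neighboring states 1 and 2 (c = s = r)
              [ -r,   r, 0.0],
              [0.0, 0.0, 1.0]])
P = np.diag([0.0, 1.0, 0.0])    # P = P(E), the projector onto state 2
I = np.eye(3)

def prob_state2(state):
    return np.trace(P @ state).real

# No question put to Nature: S evolves directly under U and then M.
S_free = M @ U @ S @ U.conj().T @ M.conj().T

# Question put to Nature first: the von Neumann reduction PSP + (1-P)S(1-P)
# erases the off-diagonal element z before U and M act.
S_asked = M @ U @ (P @ S @ P + (I - P) @ S @ (I - P)) @ U.conj().T @ M.conj().T

print("without the question:", prob_state2(S_free),  "expected:", y/2 + y/4 - y*r)
print("with the question   :", prob_state2(S_asked), "expected:", y/2 + y/4)
```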
The point of this calculation is to establish that this theory allows experiences to exercise real control over brain activity, not only by making the individual choices between possibilities whose probabilities are fixed by the quantum rules, but also at a deeper level by shaping, through the choices of which questions are put to nature, those statistical probabilities themselves. This opens the door both to possible empirical tests of the presence of collapses of the kind predicated in this theory, and to a natural-selection-driven co-evolution of brains and their associated minds.
14. Natural Selection and the Evolution of Consciousness.
In a naturalistic theory one would not expect consciousness to be present in association with a biological system unless it had a function: nothing as complex and refined as consciousness should be present unless it enhances the survival prospects of the system in some way.
This requirement poses a problem for a classically described system because there consciousness is causally non-efficacious: it is epiphenomenal. Its existence is not, under any boundary conditions, implied by the principles of classical physics in the way that what we call “a tornado” is, under appropriate boundary conditions, implied by the principles of classical physics. Consciousness could therefore be stripped away without affecting the behavior of the system in any way. Hence it could have no survival value.
Consider two species, generally on a par, but such that in the first the survival-enhancing templates for action are linked to knowings, in the way described above, but in the second there is no such linkage. Due to the enhancement effects described in the preceding section the members of the first species will actualize their survival-enhancing templates for action faster and more often than the members of the second species, and hence be more likely to survive. And over the course of generations one would expect the organism to evolve in such a way that the possible experiences E associated with it, and their consequences specified by the associated projection operators P(E), will become ever better suited to the survival needs of the organism.
15. What is Consciousness?
When scientists who study consciousness are asked to define what it is they study, they are reduced either to defining it in other words that mean the same thing, or to defining it ostensively by directing the listener’s attention to what the word stands for in his own life. In some sense that is all one can do for any word: our language is a web of connections between our experiences of various kinds, including sensations, ideas, thoughts, and theories.
If we were to ask a physicist of the last century what an “electron” is, he could tell us about its “charge”, and its “mass”, and maybe some things about its “size”, and how it is related to “atoms”. But this could all be some crazy abstract theoretical idea, unless a tie-in to experiences is made. However, he could give a lengthy description of this connection, as it was spelled out by classical physical theory. Thus the reason that a rational physicist or philosopher of the nineteenth century could believe that “electrons” were real, and perhaps even “more real” than our thoughts about them, is that they were understandable as parts of a well-defined mathematical framework that accounted—perhaps not directly for our experiences themselves, but at least—for how the contents of our experiences hang together in the way they do.
Now, however, in the debate between materialists and idealists, the tables are turned: the concepts of classical physics, including the classical conception of tiny electrons responding only to aspects of their local environment, absolutely cannot account for the macroscopic phenomena that we see before our eyes. On the contrary: the only known theory that does account for all the empirical phenomena, and that is not burdened with extravagant needless ontological excesses, is a theory that is neatly formulated directly in terms of our knowings. So the former reason for being satisfied with the idea of an electron, namely that it is part of a parsimonious mathematical framework that accounts quantitatively for the contents of our experiences, and gives us a mathematical representation of what persists during the intervals between our experiences, has dissolved insofar as it applies to the classical idea of an electron: it applies now, instead, to our knowings, and the stored compendium of all knowings, the Hilbert space state of the universe.
To elicit intuitions, the classical physicist might have resorted to a demonstration of tiny “pith balls” that attract or repel each other due to (unseen) electric fields, and then asked the viewer to imagine much smaller versions of what he sees before his eyes. This would give the viewer a direct intuitive basis for thinking he understood what an electron is.
This intuitive reason for the viewer’s being satisfied with the notion of an electron as an element of reality is that it was a generalization of something very familiar: a generalization of the tiny grains of sand that are so common in our ordinary experience, or of the tiny pith balls.
No things are more familiar to us than our own experiences. Yet they are elusive: each of them disappears almost as soon as it appears, and leaves behind only a fading impression, and fallible memories.
However, I shall try in this section to nail down a more solid idea of what a conscious experience is: it unifies the theoretical and intuitive aspects described above.
The metaphor is the experienced sound of a musical chord.
We have all experienced how a periodic beat will, when the frequency is increased, first be heard as a closely spaced sequence of individual pulses, then as a buzz, then as a low tone, and then as tones of higher and higher pitch. A tone of high pitch, say a high C, is not experienced by most listeners as a sequence of finely spaced individual pulses, but as something experientially unique.
The same goes for major and minor chords: they are experienced differently, as different gestalts. Each chord, as normally experienced, has its own unique total quality, although an experienced listener can attend to it in a way that may reveal the component elements.
One can generalize still further to the complex experience of a moment of sound in a Beethoven symphony.
These examples show that a state that can be described physically as a particular combination of vibratory motions is experienced as a particular experiential quality: what we cannot follow in time, due to the rapidity of the variations, is experienced as a gestalt-type impression that is a quality of the entire distribution of energy among the sensed frequencies.
According to the theory proposed here, the aspect of brain dynamics that corresponds to a conscious experience is a complex pattern of reverberating patterns of EM excitations that has reached a stable steady state and become a template for immediate further brain action. Its actualization by a quantum event initiates that action: it selects, out of an infinity of alternative competing and conflicting patterns of neural excitations, a single coherent energetic combination of reverberating patterns that initiates, guides, and monitors an ongoing coordinated evolution of neural activities. The experience that accompanies this suddenly-picked-out “chord” of reverberations is, I suggest, the “quality” of this complex pattern of reverberations. Because the sensed combinations of EM reverberations that constitute the template for action are far more complex than those that represent auditory sounds, the quality of the former chord must be far more complex than that of the latter.
But the most important quality of our experiences is that they have meanings. These meanings arise from their intentionalities, which encompass both intentions and attentions. The latter are intentions to attend to—and thereby to update the brain’s representation of—what is attended to.
These aspects of the experience arise from their self-reproducing quality: their quality of re-creating themselves. In the case of our human thoughts this self-reproductive feature has evolved to the point that the present thought contains a representation of what will be part of a subsequent thought: the present experience E contains an image of a certain prolongation (projection into the future) of the current Jamesian sequence of temporal components that is likely, by virtue of the causal effect of E, namely S–$`>`$ PSP, with P = P(E), to be the current Jamesian sequence of a subsequent experience $`E^{\prime}`$.
Thus the meaning of the experience, though physically embedded in the present state of the brain that it engenders, consists of the image of the future that it is likely to generate, within the context of its fringe.
Acknowledgements
This article is essentially a reply to detailed questions about earlier works of mine raised by Aaron Sloman, Pat Hayes, Stan Klein, David Chalmers, William Robinson, and Peter Mutnick. I thank them for communicating to me their dissatisfactions. I also thank Gregg Rosenberg and John Range for general support.
References.
Bohr, N (1934), Atomic Theory and the Description of Nature (Cambridge: Cambridge University Press).
Bunge, M. (1967), Quantum Theory and Reality (Berlin: Springer).
Einstein, A. (1951) Albert Einstein: Philosopher-Scientist ed. P.A. Schilpp (New York: Tudor).
Fogelson, A.L. & Zucker, R.S. (1985),‘Presynaptic calcium diffusion from various arrays of single channels: Implications for transmitter release and synaptic facilitation’, Biophys. J., 48, pp. 1003-1017.
Feynman, R., Leighton, R., and Sands, M. (1965) The Feynman Lectures on Physics (Vol. III, Chapter 21). (New York: Addison-Wesley).
Haag, R. (1996) Local Quantum Physics (Berlin: Springer). p. 321.
Heisenberg, W. (1958a) ‘The representation of nature in contemporary physics’, Daedalus, 87, 95-108.
Heisenberg, W. (1958b) Physics and Philosophy (New York: Harper and Row).
Hendry, J. (1984) The Creation of Quantum Theory and the Bohr-Pauli Dialogue (Dordrecht: Reidel).
Kawai, T and Stapp, H.P. (1995) ‘Quantum Electrodynamics at large distance I, II, III’, Physical Review, D 52 3484-2532.
Joos, E. (1986) ‘Quantum Theory and the Appearance of a Classical World’, Annals of the New York Academy of Science 480 6-13.
Omnes, R. (1994) The Interpretation of Quantum Theory, (Princeton: Princeton U.P.) p. 498.
Stapp, H.P. (1975) ‘Bell’s Theorem and World Process’, Nuovo Cimento 29, 270-276.
Stapp, H.P. (1977) ‘Theory of Reality’, Foundations of Physics 7, 313-323.
Stapp, H.P. (1979) ‘Whiteheadian Approach to Quantum Theory’, Foundations of Physics 9, 1-25.
Stapp, H.P. (1983) ‘Exact solution of the infrared problem’, Physical Review D, 28, 1386-1418.
Stapp, H.P. (1993) Mind, Matter, and Quantum Mechanics (Berlin: Springer), Chapter 6.
http://www-physics.lbl.gov/~stapp/stappfiles.html
Stapp, H.P. (1996) ‘The Hard Problem: A Quantum Approach’, Journal of Consciousness Studies, 3 194-210.
Stapp, H.P. (1997) ‘Nonlocal character of quantum theory’, American Journal of Physics, 65, 300-304.
For commentaries on this paper see:
http://www-physics.lbl.gov/~stapp/stappfiles.html
The paper quant-ph/9905053 cited there can be accessed at
quant-ph@xxx.lanl.gov by putting in the subject field the command:
get 9905053
Stapp, H.P. (1997a) ‘Science of Consciousness and the Hard Problem’, J. of Mind and Brain, vol 18, spring and summer.
Stapp, H.P. (1997b) ‘The Evolution of Consciousness’,
http://www-physics.lbl.gov/~stapp/stappfiles.html
Wigner, E. (1961) ‘The probability of the existence of a self-reproducing unit’, in The Logic of Personal Knowledge ed. M. Polyani (London: Routledge & Paul) pp. 231-238.
Zucker, R.S. & Fogelson, A.L. (1986), ‘Relationship between transmitter release and presynaptic calcium influx when calcium enters through disrete channels’, Proc. Nat. Acad. Sci. USA, 83, 3032-3036.
Zurek, W.H. (1986) ‘Reduction of the Wave Packet and Environment-Induced Superselection’, Annals of the New York Academy of Science 480, 89-97
Zurek, W.H., S. Habib, J.P. Paz, (1993) ‘Coherent States via Decoherence’, Phys. Rev. Lett. 70 1187-90.
Appendix A. Quantum Effect of Presynaptic Calcium Ion Diffusion.
Let me assume here, in order to focus attention on a particular easily analyzable source of an important quantum effect, that the propagation of the action potential along nerve fibers is well represented by the classical Hodgkin-Huxley equation, and that indeed all of brain dynamics is well represented by the classical approximation apart from one aspect, namely the motions of the pre-synaptic calcium ions from the exit of the micro-channels (through which they have entered the nerve terminal) to their target sites. The capture of the ion at the target site releases a vesicle of neurotransmitter into the synaptic cleft.
The purpose of the brain activity is to process clues about the outside world coming from the sensors, within the context of a current internal state representing the individual’s state of readiness, in order to produce an appropriate “template for action”, which can then direct the ensuing action (Stapp, 1993). Let it be supposed that the classically described evolution of the brain, governed by the complex nonlinear equations of neurodynamics, will cause the brain state to move into the vicinity of one member of a set of attractors. The various attractors represent the various possible templates for action: starting from this vicinity, the state of the classically described body/brain will evolve through a sequence of states that represent the macroscopic course of action specified by that template for action.
Within this classically described setting there are nerve terminals containing the presynaptic calcium ions. The centers of mass of these ions must be treated as quantum mechanical variables. To first approximation this means that each of these individual calcium ions is represented as if it were a statistical ensemble of classically conceived calcium ions: each individual (quantum) calcium ion is represented as a cloud or swarm of virtual classical calcium ions all existing together, superposed. This cloud of superposed virtual copies is called the wave packet. Our immediate interest is in the motion of this wave packet as it moves from the exit of a microchannel of diameter 1 nanometer to a target trigger site for the release of a vesicle of neurotransmitter into the synaptic cleft.
The irreducible Heisenberg uncertainty in the velocity of the ion as it exits the microchannel is about $`1.5`$ m/sec, which is smaller than its thermal velocity by a factor of about $`4\times 10^3`$. The distance to the target trigger site is about $`50`$ nanometers (Fogelson, 1985; Zucker, 1986). Hence the spreading of the wave packet is of the order of $`0.2`$ nanometers, which is of the order of the size of the ion itself, and of the target trigger site. Thus the decision as to whether the vesicle is released or not, in an individual instance, will have a large uncertainty due to the large Heisenberg quantum uncertainty in the position of the calcium ion relative to the trigger site: the ion may hit the trigger site and release the vesicle, or it may miss the trigger site and fail to release the vesicle. These two possibilities, yes or no, for the release of this vesicle by this ion continue to exist, in a superposed state, until a “reduction of the wave packet” occurs.
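The velocity-uncertainty and spreading figures can be checked with a back-of-the-envelope calculation; the sketch below assumes a calcium-ion mass of about 40 atomic mass units, a channel diameter of about 1 nanometer, body temperature, and the 50 nanometer transit distance quoted above:

```python
import numpy as np

# Back-of-the-envelope check of the estimates above (assumed inputs: Ca ion
# mass ~ 40 u, microchannel diameter ~ 1 nm, T ~ 310 K, 50 nm transit).
hbar = 1.055e-34            # J s
u = 1.66e-27                # kg per atomic mass unit
kB = 1.38e-23               # J / K

m = 40.0 * u                # calcium ion mass
dx = 1e-9                   # microchannel diameter ~ 1 nm
T = 310.0                   # roughly body temperature
d = 50e-9                   # distance to the target trigger site

dv = hbar / (m * dx)                 # Heisenberg velocity uncertainty
v_thermal = np.sqrt(3 * kB * T / m)  # rms thermal speed
spread = dv * (d / v_thermal)        # wave-packet spreading during the transit

print(f"velocity uncertainty ~ {dv:.1f} m/s")          # ~ 1.6 m/s
print(f"spreading over 50 nm ~ {spread*1e9:.2f} nm")   # ~ 0.2 nm
```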
If there is a situation in which a certain particular set of vesicles is released, due to the relevant calcium ions having been captured at the appropriate sites, then there will be other nearby parts of the (multi-particle) wave function of the brain in which some or all of the relevant captures do not take place—simply because, for those nearby parts of the wave function, the pertinent calcium ions miss their targets—and hence the corresponding vesicles are not released.
More generally, this means, in a situation that corresponds to a very large number N of synaptic firings, that, until a reduction occurs, all of the $`2^N`$ possible combinations of firings and no firings will be represented with comparable statistical weight in the wave function of the brain/body and its environment. Different combinations of these firings and no firings can lead to different attractors, and thence to very different macroscopic behaviours of the body that is being controlled by this brain.
The important thing, here, is that there is, on top of the nonlinear classically described neurodynamics, a quantum mechanical statistical effect arising from the spreading out of the wave functions of the centers of mass of the various presynaptic calcium ions relative to their target trigger sites. The spreading out of the wave packet is unavoidable, because it is a consequence of the Heisenberg uncertainty principle. This spreading is extremely important, because it entails that every vesicle release will be accompanied by a superposed alternative situation of comparable statistical weight in which that vesicle is not released. This means that the wave function of the entire brain must, as a direct consequence of the Heisenberg uncertainty principle, disperse into a shower of superposed possibilities arising from all the different possible combinations of vesicle releases or non-releases. Each possibility can be expected to evolve into the neighborhood of some one of the many different attractors. These different attractors will be brain states that will evolve, in turn, if no reduction occurs, into different possible macroscopic behaviors of the brain and body.
Thus the effect of the spreadings of the wave functions of the centers of the presynaptic calcium ions is enormous: it will cause the wave function of the person’s body in its environment to disperse, if no reduction occurs, into a profusion of branches that represent all of the possible actions that the person is at all likely to take in the circumstance at hand. The eventual reduction of the wave packet becomes, then, the decisive controlling factor: in any given individual situation the reduction selects—from among all of the possible macroscopically different large-scale bodily actions generated by the nonlinear (and, we have supposed, classically describable) neurodynamics—the single action that actually occurs.
In this discussion I have generated the superposed macroscopically different possibilities by considering only the spreading out of the wave packets of the centers-of-mass of the pertinent presynaptic calcium ions relative to the target trigger sites, imagining the rest of the brain neurodynamics to be adequately approximated by the nonlinear classically describable neurodynamics of the brain. Improving upon this approximation would tend only to increase the quantum effect I have described.
It should be emphasized that this effect is generated simply by the Heisenberg uncertainty principle, and hence cannot be simply dismissed or ignored within a rational scientific approach. The effect is in no way dependent upon macroscopic quantum coherence, and is neither wiped out nor diminished by thermal noise. The shower of different macroscopic possibilities created by this effect can be reduced to the single actual macroscopic reality that we observe only by a reduction of the wave packet.
Appendix B. Knowings, Knowledge, and Causality.
I shall flesh out here the idea that Nature is built out of knowings, not matter.
A typical knowing of the kind that quantum theory is built upon is a knowing that the pointer on the measuring device appears to lie between the numbers 6 and 7 on the dial. This is the sort of fact that all (or at least most) of science is built upon. It is quite complex. The idea that the appearance pertains to a dial on something that acts as a measuring device has a tremendous amount of education and training built into it. Yet somehow this knowing has this background idea built into it: that idea is a part of the experience.
William James says about perceptions:
“Your acquaintance with reality grows literally by buds or drops of perception. Intellectually and upon reflection you can divide these into components, but as immediately given they come totally or not at all.”
This fits perfectly with Copenhagen quantum theory, which takes these gestalts as the basic elements of the theory. In the von Neumann/ Wigner type ontology adopted here there is, in association with this knowing, a collapse of the state vector of the universe. It is specified by acting on this state with a projection operator that acts on the degrees of freedom associated with the brain of the perceiver, and that reduces the state of the body/brain of the observer, and consequently also the state of the whole universe, to the part of that state that is compatible with this knowing.
So a knowing is a complex experiential type of event that, however, according to the theory, occurs in conjunction with a correspondingly complex “physical” event that reduces the state of the brain/body of the person to whom the experience belongs to the part of that state that is compatible with the knowing. \[I shall use the word “physical” to denote the aspect of nature that is represented in the Hilbert-space description used in quantum theory: this aspect is the quantum analog of the physical description of classical physics.\]
That “person” is a system consisting of a sequence of knowings bound together by a set of tendencies that are specified by the state of the universe. This state is essentially a compendium of prior knowings. However, these knowings are not merely human knowings, but more general events of which human knowings are a special case.
In the strict Copenhagen interpretation, quantum theory is regarded as merely a set of rules for making predictions about human knowledge on the basis of human knowledge: horses and pigs do not make theoretical calculations using these ideas about operators in Hilbert space, and their “knowings” are not included in “our knowledge”.
But in a science-based ontology it would be unreasonable to posit that human knowledge plays a singular role: human knowings must be assumed to be particular examples of a general kind of “knowings” that would include “horse knowings” and “pig knowings”. These could be degraded in many ways compared to human knowings, and perhaps richer in some other dimensions, but they should still be of the same general ontological type. And there should have been some sort of things of this general ontological kind even before the emergence of life. \[In the section, “What is Consciousness”, I have tried to provide an intuition about what a knowing associated with a nonbiological system might be like.\]
Science is an ongoing endeavor that is expected to develop ever more adequate (for human needs) ideas about the nature of ourselves and of the world in which we find ourselves. Newton himself seemed to understand this, although some of his successors did not. But the present stage of theoretical physics makes it clear that we certainly do not now know all the answers to even the most basic questions: physics is still very much in a groping stage when it comes to the details of the basic underlying structure. So it would be folly, from a scientific perspective, to say that we must give specific answers now to all questions, in the way that classical physics once presumed to do.
This lack of certainty is highlighted by the fact that the Copenhagen school could claim to give practical rules that worked in the realm of human knowledge without paying any attention to the question of how nonhuman knowings entered into nature. And no evidence contrary to Copenhagen quantum theory has been established. This lack of data about nonhuman knowledge would make it presumptuous, in a science-based approach, to try to spell out at this time details of the nature of nonhuman knowings, beyond the reasonable presumption that animals with bodies structurally similar to the bodies of human beings ought, to the extent they also behave like human beings, to have similar experiences. But knowings cannot be assumed to be always exactly the kinds of experiences that we human beings have, and they could be quite different.
The knowings that I mentioned at the outset were percepts: knowings that appear to be knowings about things lying outside the person’s body. But, according to the von Neumann/Wigner interpretation, each such knowing is actually connected directly to the state of the person’s body/brain, after that event has occurred. This state of the body/brain will, in the case of percepts of the external world, normally be correlated to aspects of the state of the universe that are not part of the body/brain. But experienced feelings, such as the feelings of warmth, joy, depression, devotion, patriotism, mathematical understandings, etc., are not essentially different from percepts: all are experiences that are associated with collapse events that reduce the state of the body/brain to the part of it that is compatible with the experience.
I have spoken here of a body/brain, and its connection to an experience. But what is this body/brain? It seems to be something different from the knowing that it is connected to. And what is the nature of this connection?
The body/brain is an aspect of the quantum mechanically described state of the universe. This Hilbert-space state (sometimes called density matrix) is expressed as a complex-valued function of two vectors, each of which is defined over a product of spaces, each of which corresponds to a degree of freedom of the universe. Any system is characterized by a certain set of degrees of freedom, and the state of that system is defined by taking the trace of the state of the universe over all other degrees of freedom, thereby eliminating from this state any explicit reference to those other degrees of freedom.
In this way the state of each system is separately definable, and dependent only on its own degrees of freedom, even though the system itself is basically only an aspect of the whole universe. Each part (i.e., system) is separately definable, yet basically ontologically inseparable from the whole: that is the inescapable basic message of quantum theory. Each system has a state that depends only on its own degrees of freedom, and this system, as specified by its state, is causally pertinent, because each knowing is associated with some system, and the probabilities for its alternative possible knowings are specified by its own state, in spite of the fact that the system itself is fundamentally an inseparable part of the entire universe. It is the properties of the trace operation that reconcile these disparate requirements.
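A minimal sketch of this construction (with small, purely illustrative dimensions) is the partial trace familiar from quantum mechanics: the state of a subsystem is obtained by tracing the state of the universe over all other degrees of freedom:

```python
import numpy as np

# Minimal sketch (dimensions assumed for illustration) of defining the state of
# a system by the trace over all other degrees of freedom of the universe.
d_sys, d_rest = 2, 3
rng = np.random.default_rng(3)

psi = rng.normal(size=d_sys * d_rest) + 1j * rng.normal(size=d_sys * d_rest)
psi /= np.linalg.norm(psi)
S_universe = np.outer(psi, psi.conj())                 # state of the "universe"

# Reshape into (system, rest, system, rest) and trace over the "rest" indices.
S_system = np.trace(S_universe.reshape(d_sys, d_rest, d_sys, d_rest),
                    axis1=1, axis2=3)

print(S_system.shape, np.trace(S_system).real)         # (2, 2) and unit trace
```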
The state of the universe specifies only the probabilities for knowings to occur, and it generally undergoes an instantaneous global jump when a new knowing occurs. But this probability, by virtue of the way it jumps when a new knowing occurs, and suddenly changes in regions far away from the system associated with the new knowing, and the fact that it is formulated in terms of infinite sets of possibilities that may never occur, is more like an idea or a thought than a material reality. Indeed, these properties of the state are exactly why the founders of quantum theory were led to the conclusion that the mathematical formalism that they created was about knowledge.
The state of the universe is the preserved compendium of all knowings. More precisely, it is an aspect of that compendium that expresses certain statistical properties pertaining to the next knowing. There is presumably some deeper structure, not captured by the properties expressed in the Hilbert-space mathematical structure, that fixes what actually happens.
The knowings that constitute our experiences are the comings into being of bits of knowledge, which join to form the knowledge that is represented by the state of the universe. This gives an ontology based on knowings, with nothing resembling matter present. But the statistical causal structure of the sequence of knowings is expressed in terms of equations that are analogs of the mathematical laws that governed the matter postulated to exist by the principles of classical mechanics. This connection to classical mechanics is enough to ensure a close similarity between the predictions of classical mechanics and those of quantum mechanics in many cases of interest, even though the two theories are based on very different mathematical structures.
If one starts from the ontological framework suggested by classical mechanics the questions naturally arise: Why should experiences exist at all? And, given that they do exist, why should they be composed of such qualities as sensations of (experiential) colors and (experiential) sounds, and feelings of warmth and coldness, and perceptions of simple geometric forms that correspond more directly to the shapes of structures outside the body/brain than to structures (such as patterns of neural excitations that are presumably representing these various features) inside the body/brain? How do these experiential types of qualities arise in a world that is composed exclusively of tiny material particles and waves? The experiential qualities are not constructible from their physical underpinnings in the way that all the physical properties of a tornado are, according to classical mechanics, constructible from its physical constituents.
Quantum theory allows one to get around these questions by eliminating that entire classical ontology that did not seem to mesh naturally with experiential realities, and replacing that classical ontology with one built around experiential realities. These latter realities are embedded in a specified way, which is fixed by the pragmatic rules, into a mathematical structure that allows the theory to account for all the successes of classical mechanics without being burdened with its awkward ontological baggage.
A discussion of this appendix with cognitive scientist Pat Hayes can be found on my website:
(http://www-physics.lbl.gov/~stapp/stappfiles.html).
Appendix C. Quantum Wholism and Consciousness.
One reason touted for the need to use quantum theory in order to accommodate consciousness in our scientific understanding of brain dynamics is the seeming pertinence of quantum wholism to the unitary or wholistic character of the conscious experience.
I shall here spell out that reason within the framework of a computer simulation of brain dynamics.
Suppose we consider a field theory of the brain, with several kinds of interacting fields, say, for example, the electric and magnetic fields, and a field representing some mass- and charge-carrying matter. Suppose the equations of motion are local and deterministic. This means that the evolution in time of each field value at each spacetime point is completely determined by the values of the various fields in the immediate neighborhood of that spacetime point. Suppose we can, with good accuracy, simulate this evolution with a huge collection of computers, one for each point of a cubic lattice of finely spaced spatial points, where each computer puts out a new set of values for each of the fields, evaluated at its own spatial point, at each of a sequence of finely spaced times. Each computer has inputs only from the outputs of its nearest few neighbors, over a few earlier times in the sequence of times. The outputs are digital, and the equations of motion are presumed to reduce to finite-difference equations that can be readily solved by the stripped-down computers, which can do only that. Thus, given some appropriate initial conditions at some early times, this battery of simple digital computers will grind out the evolution of the simulated brain.
Merely for definiteness I assume that the spatial lattice has a thousand points along each edge, so the entire lattice has a billion points. Thus our simulator has a billion simple computers.
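For concreteness, the kind of locally coupled, finite-difference update described here can be sketched numerically. The toy below evolves a single scalar field on a tiny periodic lattice; the field content, lattice size, and update rule are illustrative stand-ins, not the three-field brain model assumed above, and each site is updated only from its own past values and those of its nearest neighbors.

```python
import numpy as np

# Toy lattice simulator: one scalar field obeying a discretized wave equation,
# updated by a purely local (nearest-neighbor) finite-difference rule.
N = 16                 # lattice points per edge (the text imagines one thousand)
dt, dx = 0.1, 1.0      # time step and lattice spacing

phi = np.random.rand(N, N, N)      # field values at time t
phi_prev = phi.copy()              # field values at time t - dt

def laplacian(f):
    # Discrete Laplacian built only from the six nearest neighbors (periodic boundaries).
    return sum(np.roll(f, s, axis=a) for a in range(3) for s in (-1, 1)) - 6.0 * f

for step in range(100):
    # Leapfrog update: each site's new value depends only on its neighborhood's past.
    phi_next = 2.0 * phi - phi_prev + (dt / dx) ** 2 * laplacian(phi)
    phi_prev, phi = phi, phi_next
```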
Now suppose after some long time the field values should come to spell out a gigantic letter “M”: i.e., the fields all vanish except on a set of lattice points that have the shape of a letter “M” on one of the faces of the lattice. If the outputs are printed out at the location of the corresponding grid point then you or I, observing the lattice, would know that the letter “M” had been formed.
But would the battery of dynamically linked but ontologically distinct computers itself contain that information explicitly? None of the computers has any information in its memory except information about numbers pertaining to its immediate neighborhood: each computer “knows” nothing except what its immediate environment is. So nowhere in the battery of computers, B, has the higher-level information about the global structure been assessed and recorded: the fact that an “M” has been formed is not “known” to the battery of computers. Some other computer C, appropriately constructed, could examine the outputs of the various elements of B, and issue a correct statement about this global property of B, but that global information is not explicitly expressed in the numbers that are recorded in B itself: some extra processing would be needed for that.
Of course, brains examine themselves. So B itself might be able to do the job that C did above, and issue the statement about its own global property, and also record that information in some way in the configuration of values in the various simple computers: the existence of this configuration can be supposed to have been caused by the presence of the “M”, and can be supposed to cause, under appropriate conditions, the battery of computers B to display on some lattice face the message: “I did contain an ‘M’ ”.
So the information about the global structure is now properly contained in the structure of B, as far as causal functioning is concerned. But even though the configuration of values that carries the information about the “M” is correctly linked causally to past and future, this configuration itself is no more than any such configuration was before, namely a collection of tiny bits of information about tiny regions in space. There is nothing in this classical conception that corresponds ontologically to the entire gestalt, “M”, as a whole. The structure of classical physics is such that the present reality is specified by values located within an infinitesimal interval centered on the present instant, without any need to refer to any more distant times. To bring relationships to the past and future events into the present evolving ontological reality would be alien to the ideas of classical physics. There is simply no need to expand the idea of reality in this way: it adds only superfluities to the ontologically and dynamically closed structure of classical physics.
The situation changes completely when one quantizes the system. To make a computer simulation of the quantum dynamics one generalizes the spatial points of the classical theory to super-points. Each possible entire classical state is a super-point. In our case, each super-point is defined by specifying at each of the points in the lattice a possible value of each of the several (in our case three) fields. To each super-point we assign a super-computer. If the number of discrete allowed values for our original simple computers was, say, one thousand possible values for each of the three fields, and hence $`10^9`$ possible output values in all for each simple computer, then the number of allowed classical states would be $`10^9`$ raised to the power $`10^9`$: each of the $`10^9`$ simple computers can have $`10^9`$ possible values. Thus the number of needed super-computers would be $`10^9`$ raised to the power $`10^9`$. In the dynamical evolution each of these super-computers generates, in succession, one complex number (two real numbers) at each of the times in the finely spaced sequence of times.
One can imagine that a collapse event at some time might make all of these complex numbers, except one, equal to zero, and make the remaining one equal to $`1`$. Then the state would be precisely one of the $`10^9`$ to the power $`10^9`$ classical states. It would then evolve into a superposition of possible classical states until the next collapse occurs. But the collapse takes the state to a “whole” classical world. That is, each super-computer is associated not just with some tiny region, but with the whole system, and the collapses can be to states in which some whole region of spacetime has a fixed configuration of values. Thus, for example, there would be a super-computer such that its output’s being unity would mean that “M” appeared on one face. And the collapse to that single state would actualize that gestalt “M”. The sudden selective creation of this gestalt is more similar to someone’s experiencing this gestalt than any occurrence or happening in the classical dynamics, because in both the experience and the quantum event the whole body of information (the whole “M”) suddenly appears.
This intuitive similarity of collapse events to conscious events is a reason why many quantum theorists are attracted to the idea that conscious events are quantum events. Orthodox quantum theory rests on that idea.
There is in the quantum ontology a tie-in to past and future, because if one asks what the present reality is, the answer can be either knowledge of the past, or potentialities for the future: the present is an abrupt transition from fixed past to open future, not a slice of a self-sufficient continuous reality.
Appendix D. The Dilemma of Free Will.
The two horns of this dilemma are ‘determinism’ and ‘chance’. If determinism holds then a person seems reduced to a mechanical device, no more responsible for his acts than a clock is responsible for telling the wrong time. But if determinism fails then his actions are controlled in part by “chance”, rendering him even less responsible for his acts.
This argument can powerfully affect our lives: it allows us to rationalize our own moral failings, and it influences the way we, and our institutions, deal with the failings of others.
It might appear that there is no way out: either the world is deterministic or it’s not, and the second possibility involves chance. So we get hung on one horn or the other.
Quantum ontology evades both horns.
The point is that determinism does not imply mechanism. The reason we say we are not responsible if determinism holds is that “determinism” evokes the idea of “mechanism”; it evokes the idea of a clock. And, indeed, that’s exactly what is entailed by the determinism of classical mechanics. According to the principles of classical mechanics everything you will do in your life was fixed and settled before you were born by local ‘myopic’ mechanical laws: i.e., by essentially the same sort of local mechanical linkages that control the workings of a clock. If your thoughts and ideas enter causally into the physical proceedings at all, it is only to the extent that they are themselves completely controlled by these local mechanical processes. Hence the causes of your actions can be reduced to a huge assembly of thoughtless microscopic processes.
But in quantum dynamics our knowings enter as the central dynamical units. What we have is a dynamics of knowings that evolve according to the rules of quantum dynamics. To be sure these dynamical rules do involve elements of chance, but these are no more problematic than the thermal and environmental noise that occurred in the classical case: our high-level structures cannot maintain total fine control over every detail. But there is, in spite of that important similarity, a huge difference because in the classical case everything was determined from the bottom up, by thoughtless micro processes, whereas in the quantum case everything is determined from the top down, by a dynamics that connects earlier knowings to later knowings.
And these knowings are doing what we feel they are doing: initiating complex actions, both physical and mental, that pave the way to future knowings.
No reduction to knowingless process is possible because each step in the dynamical processes is the actualization of a knowing that is represented mathematically as the grasping, as a whole, of a structural complex that is equivalent to the structure of the knowing.
|
no-problem/9905/chao-dyn9905008.html
|
ar5iv
|
text
|
# Quantization of a billiard model for interacting particles
## Abstract
We consider a billiard model of a self-bound, interacting three-body system in two spatial dimensions. Numerical studies show that the classical dynamics is chaotic. The corresponding quantum system displays spectral fluctuations that exhibit small deviations from random matrix theory predictions. These can be understood in terms of scarring caused by a 1-parameter family of orbits inside the collinear manifold.
Billiards are interesting and useful models to study the quantum mechanics of classically chaotic systems . In particular, the study of the completely chaotic Sinai billiard and Bunimovich stadium showed that the quantum spectra and wave functions of classically chaotic systems exhibit universal properties (e.g. spectral fluctuations) as well as deviations (e.g. scars of periodic orbits) when compared to random matrix theory (RMT) predictions of the Gaussian orthogonal ensemble (GOE) .
While the theory of wave function scarring has reached a mature state in two dimensional systems there is a richer structure in more than two dimensions. In particular, invariant manifolds in billiards and systems of identical particles may lead to an enhancement in the amplitude of wave functions provided classical motion is not too unstable in their vicinities.
The purpose of this letter is twofold. First we want to quantize a billiard model for three interacting particles and study a new type of wave function scarring. Second, our numerical results strongly suggest that the system under consideration is chaotic and ergodic. This is interesting in view of recent efforts to construct higher-dimensional chaotic billiards .
This letter is organized as follows. First we introduce a billiard model of an interacting three-body system and study its classical dynamics. Second we compute highly excited eigenstates of the corresponding quantum system and compare the results with RMT predictions.
Recently, a self-bound many-body system realized as a billiard has been studied in the framework of nuclear physics . We want to consider the corresponding three-body system with the Hamiltonian
$$H=\underset{i=1}{\overset{3}{\sum }}\frac{\stackrel{}{p}_i^2}{2m}+\underset{i<j}{\sum }V(|\stackrel{}{r}_i-\stackrel{}{r}_j|),$$
(1)
where $`\stackrel{}{r}_i`$ is a two–dimensional position vector of the $`i`$-th particle and $`\stackrel{}{p}_i`$ is its conjugate momentum. The two-body potential is
$`V(r)=\{\begin{array}{cc}0\hfill & \text{for }r<a,\hfill \\ \mathrm{\infty }\hfill & \text{for }r\ge a.\hfill \end{array}`$ (4)
The particles thus move freely within a convex billiard in six-dimensional configuration space and undergo elastic reflections at the walls. Besides the energy $`E`$, the total momentum $`\stackrel{}{P}`$ and angular momentum $`L`$ are conserved quantities which leaves us with three degrees of freedom. In what follows we consider the case $`\stackrel{}{P}=0,L=0`$.
To study the classical dynamics it is convenient to fix the velocity $`\stackrel{}{v}_1^2+\stackrel{}{v}_2^2+\stackrel{}{v}_3^2=1`$ and perform computations with the full Hamiltonian (1) without transforming to the subspace $`\stackrel{}{P}=0,L=0`$. We want to compute the Lyapunov exponents of several trajectories. To this purpose we draw initial conditions at random and compute the tangent map while following their time evolution. To ensure good statistics and good convergence we follow an ensemble of $`7\times 10^4`$ trajectories for $`10^5`$ bounces off the boundary. All followed trajectories have positive Lyapunov exponents. The ensemble averaged value of the maximal Lyapunov exponent and its RMS deviation are $`\lambda a=0.3933\pm 0.0015`$, while the second Lyapunov exponent is also always positive. Thus, the system is chaotic for practical purposes. However, we have no general proof that no stable orbits exist. The reliability of the numerical computation was checked by (i) comparing forward with backward evolution, (ii) observing that energy, total momentum and angular momentum are conserved to high accuracy during the evolution and (iii) using an alternative method to determine the Lyapunov exponent.
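As a rough illustration of the procedure only (not the actual tangent-map code used for the computation), the bookkeeping of a Benettin-type estimate of the maximal Lyapunov exponent can be sketched as follows; the function `step`, which would have to implement the free flight plus elastic reflections of the billiard, is assumed to be supplied by the caller.

```python
import numpy as np

def max_lyapunov(step, x0, n_steps, d0=1e-8, renorm_every=10):
    """Benettin-style estimate of the maximal Lyapunov exponent.

    `step(x)` advances the full phase-space state by one time unit and is a
    placeholder for the billiard's flow-plus-reflection map.
    """
    x = np.array(x0, dtype=float)
    v = np.random.randn(x.size)
    y = x + d0 * v / np.linalg.norm(v)        # nearby trajectory at distance d0
    log_sum = 0.0
    for i in range(1, n_steps + 1):
        x, y = step(x), step(y)
        if i % renorm_every == 0:
            d = np.linalg.norm(y - x)
            log_sum += np.log(d / d0)
            y = x + (d0 / d) * (y - x)        # rescale the separation back to d0
    return log_sum / n_steps
```

Averaging the returned value over an ensemble of random initial conditions, as done in the text, then gives the quoted ensemble mean and its RMS deviation.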
The considered billiard possesses two low-dimensional invariant manifolds that correspond to symmetry planes. The first “collinear” manifold is defined by configurations where all three particles move on a line. The dynamics inside this manifold is governed by the one-dimensional analogue of Hamiltonian (1). After separation of the center-of-mass motion one obtains a two-dimensional billiard with the shape of a regular hexagon. This system is known to be pseudo-integrable . To study the motion in the vicinity of the collinear manifold we compute the full phase space stability matrix for several periodic orbits inside the collinear manifold which come in 1-dim families and can be systematically enumerated using the tiling property of the hexagon. All considered types of orbits except two are unstable in the transverse direction: (i) The family of bouncing ball orbits (i.e. two particles bouncing, the third one at rest in between) is marginally stable (parabolic) in full phase space. (ii) The family of orbits depicted in Fig. 1 is stable (elliptic) in two transversal directions and marginally stable (parabolic) in the other 10 directions of 12-dim phase space. Though this behavior does not spoil the ergodicity of the billiard, one may expect that it causes the quantum system to display deviations from RMT predictions. Note that this family of periodic orbits differs from the bouncing ball orbits which have been extensively studied in two- and three-dimensional billiards since (i) it is restricted to a lower dimensional invariant manifold, and (ii) it is elliptic (complex unimodular pair of eigenvalues) in one conjugate pair of directions.
The second invariant manifold is defined by those configurations where two particles are mirror images of each other while the third particle is restricted to the motion on the (arbitrarily chosen) symmetry line. Inside this manifold one finds mixed (i.e. partly regular and partly chaotic) dynamics. However, the motion is infinitely unstable in the transverse direction due to non-regularizable three-body collisions.
The quantum mechanics is done using the coordinates
$`\stackrel{}{x}`$ $`=`$ $`\left(\stackrel{}{r_1}+\stackrel{}{r_2}+\stackrel{}{r_3}\right)/\sqrt{3},`$ (5)
$`\rho \mathrm{cos}{\displaystyle \frac{\theta ^{\prime }}{2}}\left(\begin{array}{c}\mathrm{cos}(\varphi -\phi ^{\prime }/2)\\ \mathrm{sin}(\varphi -\phi ^{\prime }/2)\end{array}\right)`$ $`=`$ $`\left(\stackrel{}{r_1}-\stackrel{}{r_2}\right)/\sqrt{2},`$ (8)
$`\rho \mathrm{sin}{\displaystyle \frac{\theta ^{\prime }}{2}}\left(\begin{array}{c}\mathrm{cos}(\varphi +\phi ^{\prime }/2)\\ \mathrm{sin}(\varphi +\phi ^{\prime }/2)\end{array}\right)`$ $`=`$ $`\left(\stackrel{}{r_1}+\stackrel{}{r_2}-2\stackrel{}{r_3}\right)/\sqrt{6}.`$ (11)
Here $`\rho ,\theta ^{\prime }`$ and $`\phi ^{\prime }`$ describe the intrinsic motion of the three-body system while $`\stackrel{}{x}`$ and $`\varphi `$ are the center of mass and the global orientation, respectively. In a second transformation we apply a rotation of $`\pi /2`$ around the abscissa corresponding to spherical coordinates $`(\rho ,\theta ^{\prime },\phi ^{\prime })`$, namely $`\mathrm{tan}\phi =\mathrm{cot}\theta ^{\prime }/\mathrm{cos}\phi ^{\prime }`$ and $`\mathrm{cos}\theta =\mathrm{sin}\theta ^{\prime }\mathrm{sin}\phi ^{\prime }`$ and obtain for the Laplacian in the subspace $`\stackrel{}{P}=0,L=0`$
$$\mathrm{\Delta }=\frac{\partial ^2}{\partial \rho ^2}+\frac{3}{\rho }\frac{\partial }{\partial \rho }+\frac{4}{\rho ^2}\left(\frac{\partial ^2}{\partial \theta ^2}+\mathrm{cot}\theta \frac{\partial }{\partial \theta }+\frac{1}{\mathrm{sin}^2\theta }\frac{\partial ^2}{\partial \phi ^2}\right).$$
(12)
Products of Bessel functions and spherical harmonics
$$\psi _{k,l,l_z}(\rho ,\theta ,\phi )=(k\rho )^{-1}J_{2l+1}(k\rho )Y_l^{l_z}(\theta ,\phi )$$
(13)
are eigenfunctions of the Laplacian (12)
$$\mathrm{\Delta }\psi _{k,l,l_z}(\rho ,\theta ,\phi )=-k^2\psi _{k,l,l_z}(\rho ,\theta ,\phi ),$$
(14)
with the usual relation between wavevector and energy $`k=\hbar ^{-1}(2mE)^{1/2}`$. Fig. 2 shows a picture of the billiard taking $`(\rho ,\theta ,\phi )`$ as spherical coordinates. The billiard possesses a $`D_{3h}`$ symmetry. In the fundamental domain $`(\theta ,\phi )\in (0,\pi /2)\times (-\pi /6,\pi /6)`$ the boundary is given by
$$\rho _B(\theta ,\phi )=\frac{a}{\sqrt{1+\mathrm{sin}\theta \mathrm{sin}(\phi +\pi /3)}}.$$
(15)
The collinear manifold is the equatorial plane $`\theta =\pi /2`$. The second invariant manifold is given by the vertical symmetry planes $`\varphi =\pm \pi /6`$. Note that in this representation, classical geodesics of the billiard between two successive collisions are not straight lines since the centrifugal potential is stronger than in the Euclidean case. In what follows we restrict ourselves to the fundamental domain and choose basis functions that fulfill Dirichlet boundary conditions. These are bosonic states.
We are interested in highly excited eigenstates. These may be accurately computed numerically by using the scaling method developed in ref. and applied to a three-dimensional billiard by one of the authors . This method works efficiently only when a suitable positive weight function is introduced in a boundary integral. To this purpose we note that the radial part of (12) looks like a 4-dim Laplacian. Extending the results of refs. to four dimensions yields the appropriate weight function, which has a remarkably simple form in our coordinates, namely we minimize the following functional
$$f[\mathrm{\Psi }_k]=\int _0^1d\mathrm{cos}\theta \int _{-\pi /6}^{\pi /6}d\phi \rho _B^4(\theta ,\phi )|\mathrm{\Psi }_k(\rho _B(\theta ,\phi ),\theta ,\phi )|^2$$
where the wave-function is expressed in terms of scaling functions (13), $`\mathrm{\Psi }_k=\sum _lc_{l,l_z}\psi _{k,l,l_z}`$. Due to our particular choice of boundary conditions we consider only the terms for which $`l+l_z`$ is odd and $`l_z=3m`$, and truncate at $`l=l_{\mathrm{max}}=ka/2+\mathrm{\Delta }l\approx ka/2`$.
We have computed three stretches of highly excited states. They consist of $`7430`$, $`1813`$ and $`2362`$ consecutive eigenstates with $`120<ka<235`$, $`290<ka<300`$ and $`393<ka<400`$, respectively. The last two stretches comprise levels with sequential quantum numbers around $`20000`$ and $`45000`$, respectively. The completeness of the series was checked by comparing the number of obtained eigenstates with the leading order prediction from the Weyl-formula $`\overline{d}(k)=c(24\pi ^2)^{-1}(ak)^2`$, $`c\approx 0.51349`$.
Fig. 3 shows that the nearest neighbor spacing distribution agrees very well with RMT predictions already for the lower energy spectral stretch $`120\le ka\le 235`$. The other series show good agreement, too. As for the long-range spectral correlations, the number variance $`\mathrm{\Sigma }^2(L)`$ deviates from RMT predictions for interval lengths of more than ten mean level spacings, which we believe is due to the parabolic-elliptic family of periodic orbits in the collinear manifold (Fig. 1). The deviation from RMT decreases with increasing $`k`$. For the highest spectral stretch ($`ka\approx 400`$) the number variance increases linearly, $`\mathrm{\Sigma }^2(L)\approx \mathrm{\Sigma }_{\mathrm{goe}}^2(L)+\epsilon L`$ up to $`L\approx 250`$, with $`\epsilon \approx 0.04`$. This finding is consistent with the model of a statistically independent fraction $`\epsilon `$ of strongly scarred states . $`\mathrm{\Sigma }^2(L)`$ reaches its maximum and begins to oscillate at the saturation length $`L^{*}`$ which scales as $`L^{*}\propto k^2`$ in agreement with the prediction of ref. .
The length spectrum $`D(r)`$, i.e. the cosine transform of the oscillatory part of the spectral density $`d_{\mathrm{osc}}(k)=\sum _n\delta (k-k_n)-\overline{d}(k)`$, gives further information about long-range spectral fluctuations. For finite stretches of consecutive levels in the interval $`[k_1,k_2]`$ one uses a Welch window function $`w(k;k_1,k_2)=(k_2-k)(k-k_1)/(6(k_2-k_1)^3)`$ in the actual computation and obtains $`D(r)=\int _{k_1}^{k_2}dk\,w(k;k_1,k_2)\mathrm{cos}(kr)d_{\mathrm{osc}}(k)`$ (see e.g. ). Fig. 4 shows that orbits of length $`r=\sqrt{2}a`$ and its integer multiples cause dominant peaks in the length spectrum.
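A minimal numerical sketch of this transform, with hypothetical argument names (the level list `k_levels` and the smooth Weyl density `mean_density` would come from the computation described above), might read:

```python
import numpy as np

def length_spectrum(k_levels, r_grid, k1, k2, mean_density):
    """Windowed cosine transform D(r) of the oscillatory level density.

    `k_levels` are the computed eigenvalues k_n, `mean_density(k)` the smooth
    Weyl term, and the quadratic window mirrors the weight quoted above.
    """
    k = np.asarray(k_levels, dtype=float)
    k = k[(k > k1) & (k < k2)]
    r_grid = np.asarray(r_grid, dtype=float)
    w = (k2 - k) * (k - k1) / (6.0 * (k2 - k1) ** 3)
    kk = np.linspace(k1, k2, 4000)                 # grid for the smooth part
    dk = kk[1] - kk[0]
    w_smooth = (k2 - kk) * (kk - k1) / (6.0 * (k2 - k1) ** 3)
    D = np.empty_like(r_grid)
    for i, r in enumerate(r_grid):
        delta_part = np.sum(w * np.cos(k * r))                            # sum over delta peaks
        smooth_part = np.sum(w_smooth * np.cos(kk * r) * mean_density(kk)) * dk
        D[i] = delta_part - smooth_part
    return D
```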
To investigate the observed deviations from RMT predictions in more detail it is useful to compute the inverse participation ratio (IPR) of the wave functions in some basis . In the case of billiards the use of the angular momentum basis (13) is particularly convenient and suitable since periodic orbits correspond to sets of isolated points within this representation. Let $`c_{l,l_z}^{(n)}`$ denote the expansion coefficients of the $`n`$th eigenstate $`\mathrm{\Psi }_{k_n}`$. We compute the IPR over a set of $`N`$ consecutive eigenstates as $`\mathrm{IPR}(l,l_z)=N\sum _n|c_{l,l_z}^{(n)}|^4/(\sum _n|c_{l,l_z}^{(n)}|^2)^2`$. The predicted RMT value for ideally quantum ergodic states is $`\mathrm{IPR}_{\mathrm{goe}}=3`$. Fig. 5 shows the IPR for the two sets of eigenstates with $`170<k<200`$ and $`290<k<300`$, respectively. The agreement with RMT predictions is rather good in both cases. This confirms that the billiard under consideration is dominantly chaotic and ergodic. However, the IPR is slightly enhanced in the region around $`l=l_z\approx ka/3`$. This is a robust phenomenon (present at all energy ranges), although the region of enhancement shrinks with increasing $`k`$. This finding is compatible with the expectation of uniform quantum ergodicity in the semi-classical limit. Note that the region $`l\approx l_z`$ corresponds to the vicinity of the collinear manifold. Note further that the orbits belonging to the parabolic-elliptic family depicted in Fig. 1 have length $`\sqrt{2}a`$ and angular momenta in the region $`l/ka=l_z/ka\in (1/(2\sqrt{6}),1/\sqrt{6})`$. This is precisely the region where the IPR exhibits its enhancement while the orbits’ lengths coincide with the prominent peaks of the length spectrum in Fig. 4.
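The IPR itself is a one-line reduction over the coefficient matrix; the sketch below assumes the coefficients have already been collected into a two-dimensional array with the basis indices flattened into a single axis.

```python
import numpy as np

def inverse_participation_ratio(c):
    """IPR of each basis state over a set of eigenstates.

    `c` is an (N_states, N_basis) array of expansion coefficients c^{(n)}_{l,lz}.
    RMT (GOE) predicts IPR = 3 for ideally ergodic states.
    """
    c = np.asarray(c)
    p2 = np.abs(c) ** 2
    p4 = np.abs(c) ** 4
    n_states = c.shape[0]
    return n_states * p4.sum(axis=0) / p2.sum(axis=0) ** 2
```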
Thus, the deviations from RMT predictions observed for the spectrum and for the wave functions are associated with the family of parabolic-elliptic periodic orbits inside the collinear manifold. The special stability properties of this family lead to scars in the wave functions of the quantum system. This is an exciting new type of scarring by invariant manifolds and complements results previously found in a three-dimensional billiard and in interacting few-body systems . Note that the family of parabolic bouncing ball orbits inside the collinear manifold does not cause statistically detectable scarring. The orbits of this family correspond to points in angular momentum space with $`l=l_z<ka/(2\sqrt{6})`$ and do not exhibit an enhancement in the IPR since the classical motion is too unstable in their vicinity.
In summary we have investigated an interacting three-body system realized as a billiard. Numerical results show that the classical dynamics is dominantly chaotic and no deviation from ergodic behavior is found. The spectral fluctuations of the quantum system agree well with random matrix theory predictions on energy scales of a few mean level spacings. However, wave function intensities and long ranged spectral fluctuations display deviations. These can be explained in terms of scars of a family of periodic orbits inside the collinear manifold.
We thank L. Kaplan for stimulating discussions. The hospitality of the Centro Internacional de Ciencias, Cuernavaca, Mexico, is gratefully acknowledged.
|
no-problem/9905/astro-ph9905336.html
|
ar5iv
|
text
|
# Modeling age and metallicity in globular clusters: a comparison of theoretical giant branch isochrones
## 1. Introduction
Evolutionary population synthesis (EPS) is now a traditional approach to study distant galaxies from their integrated light. This technique has its foundation in stellar evolution theory which, with an appropriate library of stellar spectra, allows one to predict the spectral evolution of a given stellar population. In this context, the globular clusters as single stellar populations (SSP) of old metal-deficient stars put stringent constraints on evolutionary synthesis models, which are particularly useful in investigating the effects of age and metallicity in the integrated spectro-photometric properties. In view of the important contributions of red giant branch (RGB) stars to the integrated light (up to 40–60 %) in evolutionary synthesis calculations of old stellar populations in clusters and galaxies, it is therefore important to compare the predictions of different sets of theoretical isochrones frequently used in EPS studies, such as those computed from the Padova (P), Geneva (G), and Yale (Y) evolutionary tracks.
## 2. Theoretical color–absolute magnitude diagrams
In this work, we used the Padova (P) isochrones ($`Z=0.0001`$ to $`0.10`$) as calculated from the Isochrone Synthesis program of Bruzual & Charlot (see Bruzual et al. 1997), the Yale (Y) isochrones ($`Z=0.0002`$ to $`0.10`$) from Demarque et al. (1996) and the Geneva (G) isochrones ($`Z=0.001`$ to $`0.10`$) calculated by Schaerer & Lejeune (1998). All the isochrones have been converted to the observational color–magnitude (c-m) diagrams, ($`M(T_1)`$, $`C-T_1`$) and ($`M(I)`$, $`V-I`$), in a uniform manner, i.e., by employing the same stellar library to transform the theoretical quantities ($`M_{\mathrm{bol}}`$, $`T_{\mathrm{eff}}`$) into absolute magnitudes and colors. For this purpose, we used the most recent version of the Basel stellar library (BaSeL) of Lejeune et al. (1999) which provides color-calibrated theoretical flux distributions for the largest possible range of fundamental stellar parameters, $`T_{\mathrm{eff}}`$ (2000 K to 50,000 K), $`\mathrm{log}g`$ (-1.0 to 5.5), and $`[Fe/H]`$ (-5.0 to +1.0) (for details, see Lejeune et al. 1997, 1998, 1999, and Westera et al. 1999 – these proceedings).
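As an illustration of this conversion step only (the grid values, array shapes, and function names below are placeholders, not the actual BaSeL interface or data), one might interpolate tabulated bolometric corrections and colors at each isochrone point:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder grids standing in for library tables of bolometric corrections and
# colors on a (log Teff, log g) mesh at fixed [M/H]; in practice they would be
# read from the stellar library files.
log_teff_grid = np.linspace(3.4, 4.7, 27)
logg_grid = np.linspace(-1.0, 5.5, 14)
bc_v_grid = np.zeros((27, 14))          # BC_V(log Teff, log g)
vi_grid = np.zeros((27, 14))            # (V - I)(log Teff, log g)

bc_v = RegularGridInterpolator((log_teff_grid, logg_grid), bc_v_grid)
v_minus_i = RegularGridInterpolator((log_teff_grid, logg_grid), vi_grid)

def to_observational(m_bol, log_teff, logg):
    # Convert theoretical isochrone points (M_bol, Teff, log g) into (M_I, V - I):
    # M_V = M_bol - BC_V, then M_I = M_V - (V - I).
    pts = np.column_stack([log_teff, logg])
    m_v = m_bol - bc_v(pts)
    color = v_minus_i(pts)
    return m_v - color, color
```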
In Figure 1, we provide a comparison of the c-m diagrams computed from the Y-isochrones in the Washington and in the Johnson-Cousins systems, ($`M(T_1)`$, $`C-T_1`$) and ($`M(I)`$, $`V-I`$), respectively. Both the position and the curvature of the RGB are very sensitive to the metallicity, as already noticed in the empirical studies by Da Costa & Armandroff (1990, hereafter DCA90) for $`V-I`$, and by Geisler & Sarajedini (1999, \[GS99\]) for $`C-T_1`$. The larger separation of the isochrones in $`C-T_1`$ confirms the superior metallicity sensitivity of the Washington system, in particular at low metallicities. Note also that the observed near-constancy of the absolute magnitude, $`M(I)`$ or $`M(T_1)`$, of the RGB-tip, is confirmed by theory for $`Z\lesssim 0.04`$ ($`[M/H]\lesssim 0.5`$) to within a dispersion $`\sigma (M)\lesssim 1`$ mag – supporting the use of this property in Galactic and extragalactic distance determinations.
## 3. Comparisons with observed giant branches
In Figure 2, theoretical RGBs from the P-, the Y- and the G-isochrones at a typical age of 14 Gyr are compared with the observed standard giant branches of the globular clusters NGC 6397 and 47 Tuc defined by DCA90 in $`V-I`$ and by GS99 in $`C-T_1`$. Systematic differences exist between the theoretical isochrones and the observed giant branches, and between the models themselves. While the P-isochrones reproduce very well the RGB of 47 Tuc, large discrepancies are found for NGC 6397 ($`[Fe/H]=-1.76`$), both in $`V-I`$ and $`C-T_1`$. More generally, our tests with other clusters (Lejeune & Buser 1999) indicate that the P-models provide reliable theoretical RGBs for metal-rich and intermediate-metallicity globular clusters, while they become systematically too blue for $`[M/H]\lesssim -1.4`$. This is particularly true in Washington photometry with color differences, $`\mathrm{\Delta }(C-T_1)`$, at the RGB-tip increasing up to $`0.4`$ mag with decreasing $`[M/H]`$. In contrast, the Geneva models appear slightly too red when compared to the different cluster branches. Our comparisons show in particular that, over the whole range of metallicities $`(-2<[M/H]<0)`$ and within an age range between 8 and 17 Gyr, the best agreement is provided by the Y-isochrones, which reproduce very well the position and the curvature of the observed standard giant branches of DCA90 and GS99, along with the empirical metallicity scales derived from the color of the RGB at a fixed absolute magnitude (Lejeune & Buser 1999).
## 4. Conclusions
In view of their use in old stellar population studies, we have compared the predictions of the Padova, the Yale and the Geneva sets of theoretical isochrones with recent observations of globular cluster giant branches. We find in particular that, at a given typical old age of 14 Gyr, the 3 different isochrones of the RGB provide significantly different slopes and curvatures, with differences increasing for metallicities decreasing below $`[M/H]\lesssim -1`$. The Y-isochrones are the only ones which show excellent agreement throughout the full range of metallicities with observed globular cluster giant-branch templates in both photometric systems. The P-isochrones agree well with observations only at the higher metallicities, while for the lower metallicities they become generally too blue, by up to $`0.4`$ mag at the tip of the RGB. Finally, the G-isochrones are systematically redder than the observations. As all the theoretical isochrones have been converted in a uniform manner by employing the same stellar library, such discrepancies are most likely attributable to differences in the physics employed in the different calculations of the stellar evolutionary tracks.
The influence of such differences on the integrated colors predicted by stellar population models, which can lead in particular to systematic uncertainties in age ($`\sim `$ 2–3 Gyr) and/or in metallicity ($`\sim `$ 0.2–0.3 dex), will be discussed in a forthcoming paper (Lejeune & Buser 1999).
### Acknowledgments.
T. L. gratefully acknowledges financial support from the Swiss National Science Foundation (grant 20-53660.98 to Prof. Buser), and from the “Société Suisse d’Astronomie et d’Astrophysique” (SSAA).
## References
Bruzual, G., Barbuy, B., Ortolani, S., Bica, E., Cuisinier, F., Lejeune, T. & Schiavon, R., 1997, AJ, 114, 1531
Caretta, E. & Gratton, R. G., 1997, A&AS, 121, 95
Demarque, P., Chaboyer, B., Guenther, D., Pinsonneault, M., Pinsonneault, L., & Yi, S. 1996, Yale Isochrones 1996 in “Sukyoung Yi’s WWW Homepage”
Da Costa, G. S. & Armandroff, T. E., 1990, AJ, 100, 162 (DCA90)
Geisler, D. & Sarajedini, A., 1999, AJ, 117, 308 (GS99)
Lejeune, T. & Buser, R., 1999, in prep.
Lejeune, T., Cuisinier, F. & Buser, R., 1997, A&AS, 125, 229
Lejeune, T., Cuisinier, F. & Buser, R., 1998, A&AS, 130, 65
Lejeune, T., Westera, P. & Buser, R., 1999, in prep.
Salaris, M., Chieffi, A. & Straniero, O., 1993, ApJ, 414, 530
Schaerer, D. & Lejeune, T., 1998, in “Unsolved problems in stellar evolution”, STScI Symp., Baltimore, Maryland, USA
Westera, P., Lejeune, T. & Buser, R., 1999, in prep.
|
no-problem/9905/cond-mat9905090.html
|
ar5iv
|
text
|
# Aging as dynamics in configuration space
## Abstract
The relaxation dynamics of many disordered systems, such as structural glasses, proteins, granular materials or spin glasses, is not completely frozen even at very low temperatures . This residual motion leads to a change of the properties of the material, a process commonly called aging. Despite recent advances in the theoretical description of such aging processes, the microscopic mechanisms leading to the aging dynamics are still a matter of dispute . In this letter we investigate the aging dynamics of a simple glass former by means of molecular dynamics computer simulation. Using the concept of the inherent structure we give evidence that aging dynamics can be understood as a decrease of the effective configurational temperature $`T`$ of the system. From our results we conclude that the equilibration process is faster when the system is quenched to $`T_c`$, the critical $`T`$ of mode-coupling theory , and that thermodynamic concepts are useful to describe the out-of-equilibrium aging process.
Despite their constitutive differences, many complex disordered materials show a strikingly similar dynamical behavior. In such systems, the characteristic relaxation time increases by many decades upon a small variation of the external control parameters such as $`T`$ or density. Correlation functions show power-law and stretched exponential behavior as opposed to a simple exponential decay. Also the equilibration process is frequently of non-exponential nature, and is often so slow to give rise to strong out-of-equilibrium phenomena, commonly named aging. It has recently been recognized that the similarity in the equilibrium dynamics for many systems might be related to a similarity in the structure of their configuration space (often called energy-landscape) and that this structure can be studied best in the out-of-equilibrium situation. In this letter we investigate the energy-landscape of an aging system and show that by means of this landscape it is indeed possible to establish a close connection between the equilibrium and out-of-equilibrium properties of the same system.
For the case of glass-forming liquids, whose characteristic relaxation time increases by more than 13 decades when $`T`$ is decreased by a modest amount, the configuration space is given by the $`3N`$ dimensional space spanned by the spatial coordinates of the $`N`$ atoms. In 1985, Stillinger and Weber introduced the concept of the inherent structure (IS), which can be defined as follows: for any configuration of particles the IS is given by that point which is reached by a steepest descent procedure in the potential energy if one uses the particle configuration as the starting point of the minimization, i.e. the IS is the location of the nearest local potential-energy minimum in configuration space. By this method configuration space can be decomposed in a unique way into the basins of attraction of all IS of the system and the time-evolution of a system in configuration space can be described as a progressive exploration of different IS. Here we determine the properties of the IS in equilibrium as well as in the out-of-equilibrium situation. From the comparison of the IS in these different situations we elucidate the dynamics of the system during the aging and learn about the structure of configuration space and the physics of the glassy dynamics.
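A minimal sketch of such a quench for a toy system (a small one-component Lennard-Jones cluster rather than the binary mixture studied below, with an off-the-shelf conjugate-gradient minimizer standing in for the steepest-descent procedure) could look like this:

```python
import numpy as np
from scipy.optimize import minimize

def lj_energy(x, n_atoms, eps=1.0, sigma=1.0):
    # Total Lennard-Jones energy of a small free cluster (x = flattened coordinates).
    r = x.reshape(n_atoms, 3)
    e = 0.0
    for i in range(n_atoms - 1):
        d = np.linalg.norm(r[i + 1:] - r[i], axis=1)
        sr6 = (sigma / d) ** 6
        e += np.sum(4.0 * eps * (sr6 ** 2 - sr6))
    return e

def inherent_structure(config):
    # Map an instantaneous configuration onto its inherent structure by a local
    # minimization of the potential energy (conjugate gradients here, as a
    # practical stand-in for the steepest-descent quench described above).
    n_atoms = config.shape[0]
    res = minimize(lj_energy, config.ravel(), args=(n_atoms,), method="CG")
    return res.x.reshape(n_atoms, 3), res.fun / n_atoms   # IS coordinates, e_IS per particle

# Example: quench a slightly disordered 13-atom cluster.
rng = np.random.default_rng(0)
sites = np.array([[i, j, k] for i in range(3) for j in range(3) for k in range(2)], float)[:13]
r0 = 1.1 * sites + 0.05 * rng.standard_normal((13, 3))
r_is, e_is = inherent_structure(r0)
print("e_IS per particle:", e_is)
```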
The system we study is a binary mixture of Lennard-Jones particles whose equilibrium dynamics has been investigated in great detail . It has been found that this dynamics can be well described by the so-called mode-coupling theory with a critical temperature $`T_c=0.435`$. To study the non-equilibrium dynamics we quench at time zero the equilibrated system from an initial temperature $`T_i=5.0`$ to a final temperature $`T_f\in \{0.1,0.2,0.3,0.4,0.435\}`$. In Fig. 1 we show $`e_{\mathrm{IS}}`$, the average energy per particle in the IS, as a function of $`T`$ (equilibrium case — panel a) and as a function of time (out-of-equilibrium case — panel b), respectively. In agreement with the results of Ref. we find that in equilibrium $`e_{\mathrm{IS}}`$ is almost independent of $`T`$ for $`T\gtrsim 1.0`$, i.e. when the thermal energy $`k_\mathrm{B}T`$ is larger than the depth of the Lennard-Jones pair potential. At lower $`T`$, $`e_{\mathrm{IS}}`$ shows a significant $`T`$ dependence confirming that on decreasing $`T`$ the system is resident in deeper minima. The relation $`e_{\mathrm{IS}}(T)`$ can be inverted, $`T=T(e_{\mathrm{IS}})`$, and we propose to use this relation to associate in the non-equilibrium case to each value of $`e_{\mathrm{IS}}(t)`$ an effective temperature $`T_e(e_{\mathrm{IS}}(t))`$ \[see Fig. 1\]. By associating an equilibrium $`T`$ value to an $`e_{\mathrm{IS}}(t)`$ value we can describe the (out-of-equilibrium) time dependence of $`e_{\mathrm{IS}}`$ during the aging process as a progressive exploration of configuration space valleys with lower and lower energy or, equivalently, as a progressive thermalization of the configurational potential energy. We find that, for all studied $`T_f`$, the equilibration process is composed of three regimes (Fig. 1b). An early-time regime, during which the equilibrating system explores basins with high IS energy and in which $`e_{\mathrm{IS}}(t)`$ shows little $`t`$ dependence. This regime is followed by one at intermediate times in which $`e_{\mathrm{IS}}(t)`$ decreases as a power law with an exponent $`0.13\pm 0.02`$, independent of $`T_f`$. This scale-free $`t`$-dependence is evidence that the aging process is a self-similar process. At even longer $`t`$ a third regime is observed for the lowest $`T_f`$, characterized by a slower thermalization rate. The cross-over between these three time regimes is controlled by the value of $`T_f`$ .
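The inversion itself is a simple one-dimensional interpolation; in the sketch below both the equilibrium table and the aging values are placeholder numbers, not the simulation data of the figure.

```python
import numpy as np

# Equilibrium relation e_IS(T) sampled at a few temperatures (placeholder numbers),
# tabulated with e_IS increasing so that np.interp can be applied directly.
T_eq    = np.array([0.446, 0.5, 0.6, 0.8, 1.0, 2.0, 5.0])
e_is_eq = np.array([-7.62, -7.60, -7.57, -7.53, -7.51, -7.49, -7.48])

def effective_temperature(e_is):
    # Invert e_IS(T): assign to an out-of-equilibrium e_IS(t) the equilibrium T
    # at which the same inherent-structure energy is visited.
    return np.interp(e_is, e_is_eq, T_eq)

# Example: a mock aging trajectory e_IS(t).
print(effective_temperature(np.array([-7.49, -7.53, -7.57, -7.60])))
```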
We show next that during the equilibration process the system visits the same type of minima as the one visited in equilibrium. We evaluate the curvature of the potential energy at the IS as a function of $`T`$ (for the equilibrium case) and as a function of time $`t`$ (for the out-of-equilibrium case) by calculating the density of states $`P(\nu )`$, i.e. the distribution of normal modes with frequency $`\nu `$. Before we discuss the frequency dependence of $`P(\nu )`$ we first look at its first moment, $`\overline{\nu }`$, a quantity which can be calculated with higher accuracy than the distribution itself. The $`T`$-dependence of $`\overline{\nu }`$ in equilibrium and its $`t`$-dependence in non-equilibrium are shown in Figs. 2a and 2b, respectively. Note that Fig. 1 and Fig. 2 are quite similar. This demonstrates that during the progressive thermalization, the aging system explores IS with the same $`e_{\mathrm{IS}}`$ and with the same curvature as the one visited in equilibrium, i.e. the same type of minima. The similarity of Fig. 1 and Fig. 2 also demonstrates that $`T_e(e_{\mathrm{IS}})`$ can indeed serve as a $`T`$ which characterizes the typical configuration occupied by the system. We also consider the frequency dependence of $`P(\nu )`$, which is shown for the equilibrium case in Fig. 3. In the inset we show $`P(\nu )`$ at the highest and lowest $`T`$ investigated and we find that the dependence of $`P(\nu )`$ on $`T`$ is rather weak. To better visualize the weak $`T`$-dependence of $`P(\nu )`$ we discuss in the following $`P(\nu )-P_0(\nu )`$, where $`P_0(\nu )`$ is the (equilibrium) distribution at $`T=0.446`$, the lowest $`T`$ at which we were able to equilibrate the system. In Fig. 3 we show $`P(\nu )-P_0(\nu )`$ and from it we see that the main effect of a change in $`T`$ is that with decreasing $`T`$ the modes at high $`\nu `$ disappear and that more modes in the vicinity of the peak appear. We also find that if an analogous plot is made for the out-of-equilibrium data the same pattern is observed, i.e. that with increasing $`t`$ $`P(\nu )`$ becomes narrower and more peaked.
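Schematically, the curvatures entering $`P(\nu )`$ follow from diagonalizing the Hessian of the potential energy at each IS; the helper below builds the Hessian by finite differences for any energy function (for instance the toy `lj_energy` of the earlier sketch, with its extra arguments bound) and is meant as an illustration rather than the production method.

```python
import numpy as np

def normal_mode_frequencies(energy, x_min, mass=1.0, h=1e-4):
    """Harmonic frequencies at a potential-energy minimum.

    `energy(x)` is any potential energy function of the flattened coordinates
    and `x_min` an inherent-structure configuration.  The Hessian is built by
    central finite differences, symmetrized and diagonalized; the zero modes
    (global translations and rotations of a free cluster) are discarded.
    """
    n = x_min.size
    hess = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            def e(si, sj):
                x = x_min.astype(float).copy()
                x[i] += si * h
                x[j] += sj * h
                return energy(x)
            hess[i, j] = (e(1, 1) - e(1, -1) - e(-1, 1) + e(-1, -1)) / (4.0 * h * h)
    evals = np.linalg.eigvalsh(0.5 * (hess + hess.T))
    evals = evals[evals > 1e-6]                      # keep the genuine curvatures
    return np.sqrt(evals / mass) / (2.0 * np.pi)     # frequencies nu
```

A histogram of the returned frequencies then approximates $`P(\nu )`$, and their mean gives $`\overline{\nu }`$.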
We next show that $`T_e(e_{\mathrm{IS}})`$ completely determines $`P(\nu )`$ during the aging process. For this we see from Fig. 1 that $`T_e=0.6`$ corresponds to $`t\approx 1600`$ for $`T_f=0.435`$, $`0.4`$, and $`0.3`$, and to $`t\approx 4000`$ and $`t\approx 25000`$ for $`T_f=0.2`$ and $`0.1`$, respectively (see dashed lines in Fig. 1). If $`T_e`$ has thermodynamic meaning the non-equilibrium $`P(\nu )`$ at these times should be the same as the equilibrium $`P(\nu )`$ at $`T=0.6`$. These functions are plotted in Fig. 4 (curves with filled symbols). We find that the different distribution functions are superimposed, demonstrating the validity of the proposed interpretation of $`T_e`$ as effective $`T`$ . That this collapse of the curves is not a coincidence can be recognized by the second set of curves which is shown in Fig. 4 (curves with open symbols). These curves correspond to $`T_e=0.5`$ for which the corresponding times from Fig. 1 are $`t\approx 16000`$ for $`T_f=0.435`$ and $`0.4`$, and $`t\approx 25000`$ for $`T_f=0.3`$ . Also for $`T_e=0.5`$ the different $`P(\nu )`$ can be considered to be identical within the accuracy of the data. Note that the two sets of curves corresponding to the two values of $`T_e`$ are clearly different, showing that our data has a sufficiently high precision to distinguish also values of $`T_e`$ which are quite close together. From the present data we conclude that minima with the same value of $`e_{\mathrm{IS}}`$ do indeed have the same distribution of curvature.
To discuss the results of this paper it is useful to recall insight gained from the analysis of instantaneous normal modes (INM) of supercooled liquids. The INM studies have demonstrated that the slowing down of the dynamics in supercooled liquids is accompanied by a progressive decrease in the number of so-called double-well modes, i.e. the number of directions in configuration space where the potential energy surface has a saddle leading to a new minimum, a condition stronger than the local concavity. It has been demonstrated that the number of double-well modes vanishes on approach to $`T_c`$. Thus for $`T>T_c`$ the system is always located on a potential energy landscape which, in at least one direction, is concave (i.e. the system sits on a saddle point) whereas at $`T<T_c`$ the system is located in the vicinity of the local minima, i.e. the landscape is convex. This result can be rephrased by saying that $`T_c`$ is the $`T`$ at which the thermal energy $`k_BT_c`$ is of the same order as the height of the lowest-lying saddle point above the nearest IS, i.e. above $`e_{\mathrm{IS}}`$. The energy difference between $`k_BT+e_{\mathrm{IS}}`$ and the lowest-lying saddle point energy can be chosen as an effective ($`T`$-dependent) barrier height. This observation, which should hold true also for the Lennard-Jones system studied here, and the results presented here lead us to the following view of the energy landscape and the aging process. For high $`T`$, $`k_BT`$ is significantly higher than the lowest lying saddle energy and the effective barriers between two minima are basically zero, i.e. the system can explore the whole configuration space. At $`T\approx 1.0`$ the system starts to become trapped in that part of configuration space which has a value of $`e_{\mathrm{IS}}`$ which is less than the one at high $`T`$ (Fig. 1 and Ref. ) and the properties of the IS start to become relevant. This point of view of the structure of phase space can now be used to understand the aging dynamics. At the beginning of the quench the system is still in the large part of configuration space which corresponds to (high) $`T_i`$. Although $`k_BT_f`$ is now relatively small, the effective barriers are still zero and the system can move around unhindered and it moves to minima which have a lower energy. The rate of this exploration is related to the number of double-well directions accessible within $`k_BT_f`$, which explains why in Fig. 1b the curves with small $`T_f`$ stay at the beginning longer on the plateau than the ones with larger $`T_f`$. With increasing $`t`$ the system starts to find IS which have a lower and lower energy and $`e_{\mathrm{IS}}(t)`$ starts to decrease. Note that apart from the $`T_f`$ dependence of the rate of exploration just discussed, this search seems to be independent of $`T_f`$, since in Fig. 1b the slope of the curves at intermediate times does not depend on $`T_f`$.
With increasing $`t`$ the system finds IS with lower and lower energies and decreases its $`T_e`$. From the above discussion on the INM we know that with decreasing $`T`$ the height of the effective barriers also increases and it can be expected that the search of the system becomes inefficient once it has reached a $`T_e`$ at which the energy difference between the lowest-lying saddle and $`e_{\mathrm{IS}}`$ becomes of the order of $`k_\mathrm{B}T_f`$. Therefore we expect that once this stage has been reached the $`t`$-dependence of $`e_{\mathrm{IS}}`$ will change and this is what we find, as shown in Fig. 1b in the curves for $`T_f=0.2`$ and 0.1 at $`t\approx 10^4`$. We also note that the $`T_e`$ at which this crossover occurs will increase with decreasing $`T_f`$. For times larger than this crossover $`t`$ the system no longer explores the configuration space by moving along unstable modes but rather by means of a hopping mechanism in which barriers are surmounted. This hopping mechanism, although not efficient for moving the system through configuration space, still allows the system to decrease its configurational energy and its $`T_e`$ further. Thus the cross-over from the self-similar process to the activated dynamics, which in equilibrium is located close to $`T_c`$, is in the non-equilibrium case $`T_f`$ dependent. We conclude that in order to obtain configurations which are relaxed as well as possible (within a given time span) one should quench the system to $`T_c`$ in order to exploit as much as possible the low-lying saddle points.
The presented picture implies that, if hopping processes were not present, $`T_e`$ for the system would always be above $`T_c`$. Although hopping processes are always present, they might be so inefficient that even for long times the value of $`T_e`$ is above $`T_c`$. From Fig. 1 we recognize that this is the case for the present study. We note that theoretical mean-field predictions derived for $`p`$-spin models and recent extensions of the ideal MCT equations to non-equilibrium processes conclude that systems quenched below $`T_c`$ always remain in that part of configuration space corresponding to $`T>T_c`$.
Finally we stress that the present analysis of the aging process strongly supports the use of the IS as an appropriate tool for the identification of an effective $`T`$ at which the configurational potential energy is at equilibrium. This opens the way for detailed comparisons with recent out-of-equilibrium thermodynamics approaches and for detailed estimates of the configurational entropy.
|
no-problem/9905/hep-ph9905494.html
|
ar5iv
|
text
|
Figure 1: The nucleon-nucleon potential, from Ref. []. The $`\sigma `$ is believed to be a two-pion resonance, although it may be a real, but very broad, physical state.
WM-99-108
The Triple-Alpha Process and the Anthropically
Allowed Values of the Weak Scale
Tesla E. Jeltema and Marc Sher
Nuclear and Particle Theory Group
Physics Department
College of William and Mary, Williamsburg, VA 23187, USA
ABSTRACT
In multiple-universe models, the constants of nature may have different values in different universes. Agrawal, Barr, Donoghue and Seckel have pointed out that the Higgs mass parameter, as the only dimensionful parameter of the standard model, is of particular interest. By considering a range of values of this parameter, they showed that the Higgs vacuum expectation value must have a magnitude less than $`5.0`$ times its observed value, in order for complex elements, and thus life, to form. In this report, we look at the effects of the Higgs mass parameter on the triple-alpha process in stars. This process, which is greatly enhanced by a resonance in Carbon-12, is responsible for virtually all of the carbon production in the universe. We find that the Higgs vacuum expectation value must have a magnitude greater than $`0.90`$ times its observed value in order for an appreciable amount of carbon to form, thus significantly narrowing the allowed region of Agrawal et al.
The anthropic principle states that the parameters of our Universe must have values which allow intelligent life to exist. It is a principle which has existed in some form or another since the beginning of human history. It has countless formulations, many of which have religious overtones. In recent years, however, the anthropic principle has been revived as a method of explaining some fine-tuning problems. For example, Weinberg has considered whether the principle can address the relative smallness of the cosmological constant.
In its weak form, the anthropic principle states that, because we are here to observe them, the observed properties of the universe must have values which allow life to exist. This may seem somewhat obvious or circular, but it becomes significant in some physical theories which support the existence of domains in the universe in which different parameters are applicable. In chaotic inflation models, for example, different domains may have different Higgs vacuum expectation values. These domains can be regarded as different universes. Alternatively, in regions of high gravitational curvature, new universes may, in some models, “pop” out of the vacuum; these new universes may have different values of the parameters. Thus, considering how our universe (and the life therein) would evolve if the parameters of the standard model were changed may be physically relevant.
The standard model has (including neutrino masses and mixing) some 24 parameters. Thus, any complete study of the anthropic principle would involve study of a complex 24-dimensional parameter space. In two recent papers, Agrawal, Barr, Donoghue and Seckel (ABDS) noted that the Higgs mass-squared parameter is of special interest. It is the only dimensionful parameter in the model, and multiple-universe models may be more likely to have varying dimensionful couplings than varying dimensionless ones. The Higgs mass-squared parameter is also unnaturally small compared with the parameters of more general theories, such as grand unified theories.
ABDS considered the range of anthropically allowed values of the Higgs mass-squared parameter, $`\mu ^2`$. They considered values of this parameter ranging from $`-M_{Pl}^2`$ to $`M_{Pl}^2`$, where $`M_{Pl}`$ is the Planck scale, and we define the sign of $`\mu ^2`$ to be negative in the standard model. ABDS considered both the cases $`\mu ^2<0`$ and $`\mu ^2>0`$. In the latter case, the electroweak gauge symmetry is still broken by quark condensation ($`\langle \overline{q}q\rangle \ne 0`$). For the $`\mu ^2<0`$ case, they found that as one increases the Higgs vacuum expectation value $`v\equiv \langle \varphi \rangle =\sqrt{-\frac{\mu ^2}{\lambda }}`$ from its standard model value, $`v_0`$, the first major effect occurs when the deuteron becomes unbound. This occurs when $`v/v_0`$ reaches a value of $`1.4`$–$`2.7`$, depending on the nuclear physics model, and is due to the increasing neutron-proton mass difference. When $`v/v_0`$ is greater than about $`5.0`$, all nuclei become unstable. They argue, therefore, that one must have $`v/v_0<5.0`$ (and possibly less than $`2.7`$) in order for complex elements to form, and thus life. They also note that for $`v/v_0>10^3`$, the $`\mathrm{\Delta }^{++}`$ becomes stable relative to the proton, leading to a very unusual universe indeed. For $`\mu ^2>0`$, the weak scale becomes of the order of magnitude of the QCD scale, and chemical and stellar evolution become much more complicated.
One process not considered by ABDS is the triple-alpha process in stars. This process occurs when two alpha particles first fuse into beryllium ($`{}_{}{}^{4}He+{}_{}{}^{4}He\rightarrow {}_{}{}^{8}Be`$). The beryllium has a very short lifetime (of order $`10^{-16}`$ seconds), but lives long enough for further interaction with a third alpha particle ($`{}_{}{}^{4}He+{}_{}{}^{8}Be\rightarrow {}_{}{}^{12}C^{*}`$) to produce carbon. Virtually all of the carbon in the universe is produced through this process. This process is anthropically significant because it depends very precisely on the existence of a $`0^+`$ resonance $`7.6`$ MeV above the ground state in $`{}_{}{}^{12}C`$. The existence of this resonance was one of the first major successful predictions of astrophysics, having been predicted by Hoyle long before its experimental discovery. Without this resonance, little carbon will be produced. Without carbon, it is difficult to see how life could spontaneously develop. Life, as we generally define it, requires the existence of a molecule capable of storing large amounts of information, and it is impossible for hydrogen and helium to form such molecules. Since the existence of the resonance is a very sensitive function of the parameters of the model, one might expect it to give much more stringent bounds on $`v/v_0`$ than those obtained by ABDS. In this Brief Report, we examine the dependence of this process on $`\mu ^2`$, and significantly narrow the range found by ABDS.
There have been several calculations concerning the anthropic significance of the triple-alpha process. Livio, et al. calculated the sensitivity of the amount of carbon production to changes in the location of the $`0^+`$ resonance, but did not address the underlying physics behind the location of the resonance. Oberhummer, et al. then did a detailed nuclear physics calculation of the sensitivity of the location of the resonance to the strength of the nucleon-nucleon potential. This required considering several different models for the nuclear reaction rates. They found that a change of only a part in a thousand in the strength of the nucleon-nucleon interaction will change the reaction rate of the triple-alpha process by roughly a factor of $`20`$, and a change of two parts in a thousand changes it by roughly a factor of $`400`$.
The strength of the nucleon-nucleon interaction, however, is a very complicated function of the many parameters of the standard model. Our objective is to relate this strength to changes in the vacuum expectation value of the Higgs boson, $`v`$. Changing $`v`$ will change the quark masses, and will also change the value of the QCD scale. Both of these are addressed by ABDS. The quark masses change in a very predictable way: $`m_q\propto (v/v_0)`$. The QCD scale, $`\mathrm{\Lambda }`$, which is sensitive to the quark masses through threshold effects (it is assumed that the high energy value is unchanged), is found by ABDS to scale as $`(v/v_0)^\zeta `$, where $`\zeta `$ varies between $`0.25`$ and $`0.3`$—we will take it to be $`0.25`$ in this work. From these variations, one can calculate the variation of the relevant baryon and meson masses, and convert that into an effect on the strength of the nucleon-nucleon interaction.
The phrase “strength of the nucleon-nucleon interaction” is, of course, somewhat ambiguous. Oberhummer, et al., simply multiplied the interaction by a constant. When the meson and baryon masses change, however, the entire shape of the potential changes. A precise analysis would necessitate using this full potential in the calculation of the triple-alpha process. However, these calculations use “phenomenological” parameters, which are experimentally determined, and the variation of these parameters with $`v`$ is unknown. We therefore estimate the size of the effect by finding an “average” value of the potential, defined as
$$\langle V\rangle =\frac{\int _0^{\infty }V(r)|\psi (r)|^2\,d^3r}{\int _0^{\infty }|\psi (r)|^2\,d^3r}$$
(1)
where $`\psi `$ is the two-nucleon wavefunction, obtained by solving the Schrödinger equation; we then compare the resulting change in $`\langle V\rangle `$ with the overall rescaling of the potential considered by Oberhummer, et al.
We now have to determine the dependence of the potential on $`v/v_0`$.
The nucleon-nucleon potential has three main features, shown in Figure 1. There is a repulsive core, an attractive minimum and a long-range tail from one-pion exchange. We will look at two different models for the nucleon-nucleon potential. The first considers the repulsive core to be due to the exchange of the $`\omega `$ vector meson, and the attractive minimum to be due to the exchange of the hypothetical sigma meson. Controversy exists as to whether the sigma meson is an actual particle with a large width, or simply a correlated two-pion exchange. We will assume the latter for the moment, but will show that the results will not change significantly in either case. The potential can then be written as
$$V(r)=\frac{g_\omega e^{-m_\omega r}}{r}-\frac{g_\sigma e^{-m_\sigma r}}{r}-\frac{g_\pi e^{-m_\pi r}}{r}$$
(2)
where the $`g_i`$, arising from the strong interaction van der Waals forces, are assumed to be independent of the weak scale. To find the dependence of $`V(r)`$ on $`v/v_0`$, we now need to ascertain the dependence of $`m_\omega `$, $`m_\sigma `$ and $`m_\pi `$ on $`v/v_0`$ (as well as the dependence of the nucleon mass, due to the input into the Schrödinger equation).
The dependence of the pion mass on the weak scale is easily determined from the chiral symmetry breaking relation, which gives $`m_\pi ^2\propto f_\pi (m_u+m_d)`$. Since $`f_\pi `$ varies as $`\mathrm{\Lambda }_{QCD}`$, which varies as $`(v/v_0)^\zeta `$, and $`m_u+m_d`$ varies as $`v/v_0`$, one can see that $`m_\pi \propto (v/v_0)^{\frac{1+\zeta }{2}}`$. The nucleon and the $`\omega `$ get their masses primarily from QCD, and these scale as $`\mathrm{\Lambda }_{QCD}`$, with small contributions from the current quark masses. In MeV, the masses are given by $`m_{nucleon}=921(v/v_0)^\zeta +18(v/v_0)`$ and $`m_\omega =768(v/v_0)^\zeta +14(v/v_0)`$, where we have taken the up and down current quark masses to be 4 and 7 MeV, respectively.
The mass of the sigma is a different matter, since it is a two-pion correlated state. We follow the work of Lin and Serot, who derive the mass of the $`\sigma `$ in terms of the pion mass, the nucleon mass and the pion-nucleon coupling constant. By varying the masses of the pion and nucleon in their expressions, we find that $`m_\sigma \propto (v/v_0)^{0.26}`$. This is not a surprising result: the $`\sigma `$ mass turns out to be very insensitive to the pion mass, and thus it can only scale as the nucleon mass, which scales as $`(v/v_0)^{0.25}`$. It also indicates that the result would not change significantly if one regards the $`\sigma `$ as a real particle, since one would expect such a particle to scale as the QCD scale, and $`\mathrm{\Lambda }_{QCD}\propto (v/v_0)^{0.25}`$.
With these mass dependences in hand, we now determine how the strength of the nucleon-nucleon potential changes as $`v`$ is varied. We find that a 1% change in $`v`$ changes the strength of the potential by 0.4% (in the same direction); a 10% change in $`v`$ changes it by 4%. To see how robust this result is, we also considered a completely different nucleon-nucleon potential, due to Maltman and Isgur, based on six-quark states. There are two parts to the potential: a modified one-pion exchange part and a part due to residual quark-quark interactions. The latter, which is the most relevant for this analysis, is entirely due to QCD, and thus its variation with $`v`$ enters only through $`\mathrm{\Lambda }_{QCD}`$ and is determined dimensionally. The result is similar: a 1% decrease in $`v`$ decreases the strength of the potential by 0.6%.
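To make the scaling argument concrete, the following sketch (not the calculation actually performed in this work) propagates the mass scalings above through Eqs. (1) and (2). The coupling strengths g_omega, g_sigma, g_pi, the sigma base mass, and the exponential trial wavefunction are illustrative assumptions standing in for the phenomenological couplings and the actual Schrödinger-equation solution, so only the general size of the sensitivity, not its precise value or sign, should be read from the output.

```python
import numpy as np

ZETA = 0.25  # Lambda_QCD ~ (v/v0)^zeta, the value adopted in the text

def hadron_masses(x):
    """Meson/nucleon masses in MeV as functions of x = v/v0, using the scalings above."""
    m_pi = 138.0 * x**((1.0 + ZETA) / 2.0)      # m_pi ~ (v/v0)^{(1+zeta)/2}
    m_sigma = 550.0 * x**0.26                    # m_sigma ~ (v/v0)^{0.26}; base value assumed
    m_omega = 768.0 * x**ZETA + 14.0 * x         # from the text
    m_nucleon = 921.0 * x**ZETA + 18.0 * x       # from the text
    return m_pi, m_sigma, m_omega, m_nucleon

def avg_potential(x, g_omega=10.0, g_sigma=8.0, g_pi=1.0, b_fm=1.5):
    """<V> of Eq. (1) for the Yukawa-type potential of Eq. (2).

    The couplings g_i and the trial wavefunction psi ~ exp(-r/b) are illustrative
    placeholders for the phenomenological couplings and the Schrodinger-equation
    solution used in the actual calculation.
    """
    hbarc = 197.327                              # MeV fm
    m_pi, m_sigma, m_omega, _ = hadron_masses(x)
    r = np.linspace(0.3, 10.0, 4000)             # fm; avoid the 1/r singularity at r=0
    V = (g_omega * np.exp(-m_omega * r / hbarc) / r
         - g_sigma * np.exp(-m_sigma * r / hbarc) / r
         - g_pi * np.exp(-m_pi * r / hbarc) / r)
    weight = np.exp(-2.0 * r / b_fm) * r**2      # |psi|^2 d^3r -> 4*pi*r^2 |psi|^2 dr
    return np.trapz(V * weight, r) / np.trapz(weight, r)

# Fractional response of <V> to a 1% shift in v, for these illustrative inputs
V0, V1 = avg_potential(1.00), avg_potential(1.01)
print("d<V>/<V> for a 1% change in v:", (V1 - V0) / V0)
```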
Having related the strength of the nucleon-nucleon potential to changes in $`v`$, we can now use the work of Oberhummer et al., who relate that strength to the rate of carbon production. Oberhummer et al. found that a decrease of 2–4% in the strength of the nucleon-nucleon potential leads to the virtual elimination of carbon production (Livio, et al. analyzed both 5 and 20 solar mass stars, although the result is insensitive to the precise stellar mass). Comparing with our result from the previous paragraph, we find that (conservatively taking a 4% decrease as our limit, as well as the first potential model) one must have $`v/v_0`$ greater than 0.90. This substantially narrows the region found by ABDS, which had no effective lower bound on $`v/v_0`$, but only an upper bound of between 1.4 and 5.
How accurate is this result? As noted earlier, a precise determination of the effects of changing $`v`$ on the rate of carbon production in stars would require solving the twelve-body problem with a varying nucleon-nucleon potential (not to mention three-body forces). Oberhummer et al. just varied the overall strength of the two-body potential. A full analysis does not seem possible at this time. We have related the change in the potential caused by the variation of $`v`$ to an “average” potential strength. This “mean-field” approach is not particularly precise, but is probably the best that can be done at this time, given our lack of understanding of nuclear dynamics. The fact that two very different models of the potential give a similar bound is encouraging. Thus, our bound should be taken as a reasonable approximation to the bound that could be obtained with a full understanding of the nuclear physics involved.
We thank Dirk Walecka and Nathan Isgur for many useful conversations about the nucleon-nucleon potential, and Eric Dawnkaski for help with the computational aspects of this work. This work was supported by the National Science Foundation PHY-9900657.
# Is SAX J1808.4-3658 A Strange Star?
## Abstract
One of the most important questions in the study of compact objects is the nature of pulsars, including whether they are composed of $`\beta `$-stable nuclear matter or strange quark matter. Observations of the newly discovered millisecond X-ray pulsar SAX J1808.4-3658 with the Rossi X-Ray Timing Explorer place a firm constraint on the radius of the compact star. Comparing the mass–radius relation of SAX J1808.4-3658 with the theoretical mass–radius relations for neutron stars and for strange stars, we find that a strange star model is more consistent with SAX J1808.4-3658, and suggest that it is a likely strange star candidate.
The transient X-ray burst source SAX J1808.4-3658 was discovered in September 1996 with the Wide Field Camera (WFC) on board BeppoSAX. Two bright type I X-ray bursts were detected, each lasting less than 30 seconds. Such bursts are generally accepted to be due to thermonuclear flashes on the surface of a neutron star, suggesting that this source is a member of the low-mass X-ray binaries (LMXBs), consisting of a low ($`\lesssim 10^{10}`$ G) magnetic field neutron star accreting from a companion star of less than one solar mass. Analysis of the bursts in SAX J1808.4-3658 indicates that it is 4 kpc distant and has a peak X-ray luminosity of $`6\times 10^{36}`$ erg s<sup>-1</sup> in its bright state, and $`<10^{35}`$ erg s<sup>-1</sup> in quiescence.
Recently a transient X-ray source designated XTE J1808-369 was detected with the Proportional Counter Array (PCA) on board the Rossi X-ray Timing Explorer (RXTE). The source is positionally coincident within a few arcminutes with SAX J1808.4-3658, implying that both sources are the same object. Coherent pulsations at a period of 2.49 milliseconds were discovered. The star’s surface dipolar magnetic moment was derived to be $`\lesssim 10^{26}`$ G cm<sup>3</sup> from the detection of X-ray pulsations at a luminosity of $`10^{36}`$ erg s<sup>-1</sup>, consistent with the weak fields expected for type I X-ray bursters and millisecond radio pulsars (MS PSRs). The binary nature of SAX J1808.4-3658 was firmly established with the detection of a 2 hour orbital period, as well as with the optical identification of the companion star. SAX J1808.4-3658 is the first pulsar to show both coherent pulsations in its persistent emission and thermonuclear bursts, and by far the fastest-rotating, lowest-field accretion-driven pulsar known. It presents direct evidence for the evolutionary link between LMXBs and MS PSRs.
The discovery of SAX J1808.4-3658 also allows a direct test of the compactness of pulsars. Detection of X-ray pulsations requires that the inner radius $`R_0`$ of the accretion flow (generally in the form of a Keplerian accretion disk in LMXBs) should be larger than the stellar radius $`R`$ (viz. the stellar magnetic field must be strong enough to disrupt the disk flow above the stellar surface), and less than the so-called corotation radius $`R_\mathrm{c}=[GMP^2/(4\pi ^2)]^{1/3}`$ (viz. the stellar magnetic field must be weak enough that accretion is not centrifugally inhibited). Here $`G`$ is the gravitation constant, $`M`$ is the mass of the star, and $`P`$ is the pulse period. The inner disk radius $`R_0`$ is generally evaluated in terms of the Alfvén radius $`R_\mathrm{A}`$, at which the magnetic and material stresses balance: $`R_0=\xi R_\mathrm{A}=\xi [B^2R^6/(\dot{M}(2GM)^{1/2})]^{2/7}`$, where $`B`$ and $`\dot{M}`$ are respectively the surface magnetic field and the mass accretion rate of the pulsar, and $`\xi `$ is a parameter of order unity, almost independent of $`\dot{M}`$. Since X-ray pulsations in SAX J1808.4-3658 were detected over a wide range of mass accretion rate, say from $`\dot{M}_{\mathrm{min}}`$ to $`\dot{M}_{\mathrm{max}}`$, a firm upper limit on the stellar radius can then be obtained from the condition $`R<R_0(\dot{M}_{\mathrm{max}})<R_0(\dot{M}_{\mathrm{min}})<R_\mathrm{c}`$, i.e.,
$$R<27.6\left(\frac{F_{\mathrm{max}}}{F_{\mathrm{min}}}\right)^{-2/7}\left(\frac{P}{2.49\,\mathrm{ms}}\right)^{2/3}\left(\frac{M}{M_{\odot }}\right)^{1/3}\mathrm{km},$$
(1)
where $`F_{\mathrm{max}}`$ and $`F_{\mathrm{min}}`$ denote the X-ray fluxes measured during the X-ray high and low states, respectively, and $`M_{\odot }`$ is the solar mass. Here we have assumed that the mass accretion rate $`\dot{M}`$ is proportional to the X-ray flux observed with RXTE. This is guaranteed by the fact that the X-ray spectrum of SAX J1808.4-3658 was remarkably stable and there was only a slight increase in the pulse amplitude when the X-ray luminosity varied by a factor of $`\sim 100`$ during the 1998 April/May outburst.
Given the range of X-ray flux over which coherent pulsations were detected, inequality (1) defines a limiting curve in the mass–radius ($`M`$–$`R`$) parameter space for SAX J1808.4-3658, plotted as the dashed curve in Fig. 1. During the 1998 April/May outburst, the maximum observed 2–30 keV flux of SAX J1808.4-3658 at the peak of the outburst was $`F_{\mathrm{max}}\simeq 3\times 10^{-9}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, while the pulse signal became barely detectable when the flux dropped below $`F_{\mathrm{min}}\simeq 2\times 10^{-11}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. Here we adopt $`F_{\mathrm{max}}/F_{\mathrm{min}}\simeq 100`$. The dotted curve represents the Schwarzschild radius $`R=2GM/c^2`$ (where $`c`$ is the speed of light), the lower limit on the stellar radius required to prevent the star from collapsing into a black hole. Thus the allowed range of the mass and radius of SAX J1808.4-3658 is the region confined by the dashed and dotted curves in Fig. 1.
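The limiting curve defined by inequality (1), together with the Schwarzschild lower limit, is straightforward to evaluate. The short sketch below does so for the flux ratio and pulse period quoted above; it reproduces only the dashed and dotted boundaries of Fig. 1 schematically, not the EOS curves.

```python
G = 6.674e-8          # gravitational constant, cgs
c = 2.998e10          # speed of light, cm/s
Msun = 1.989e33       # solar mass, g
P = 2.49e-3           # pulse period, s
flux_ratio = 100.0    # F_max / F_min adopted in the text

def R_upper_km(M_over_Msun):
    """Upper limit on the stellar radius from inequality (1), in km."""
    return (27.6 * flux_ratio**(-2.0 / 7.0)
            * (P / 2.49e-3)**(2.0 / 3.0)
            * M_over_Msun**(1.0 / 3.0))

def R_schwarzschild_km(M_over_Msun):
    """Schwarzschild radius 2GM/c^2 in km (lower limit)."""
    return 2.0 * G * M_over_Msun * Msun / c**2 / 1.0e5

for m in (0.5, 1.0, 1.4, 2.0):
    print(f"M = {m:3.1f} Msun:  R < {R_upper_km(m):4.1f} km,  "
          f"R_s = {R_schwarzschild_km(m):3.1f} km")
```

For a 1 solar-mass star this gives an upper limit of roughly 7 km, which is the compactness that drives the comparison with the EOS curves in Fig. 1.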
Figure 1 compares the theoretical $`M`$–$`R`$ relations (solid curves) for nonrotating neutron stars given by six recent realistic models for the equation of state (EOS) of dense matter. In models UU, BBB1 and BBB2 the neutron star core is assumed to be composed of an uncharged mixture of neutrons, protons, electrons and muons in equilibrium with respect to the weak interaction ($`\beta `$-stable nuclear matter). Equations of state UU, BBB1 and BBB2 are based on microscopic calculations of asymmetric nuclear matter using realistic nuclear forces which fit experimental nucleon-nucleon scattering data and deuteron properties. In model Hyp, hyperons are considered in addition to nucleons as hadronic constituents of the neutron star core. Next, we consider, as a limiting case, a very soft EOS for $`\beta `$-stable nuclear matter, namely the BPAL12 model, which is still able to sustain the measured mass of 1.442 $`M_{\odot }`$ of the pulsar PSR 1913+16. In general, a soft EOS is expected to give a lower limiting mass and a smaller radius with respect to a stiff EOS. Finally, we consider the possibility that neutron stars may possess a core with a Bose–Einstein condensate of negative kaons. The main physical effect of the onset of $`K^{-}`$ condensation is a softening of the EOS, with a consequent lowering of the neutron star maximum mass and possibly of the radius. Indeed, neutron stars with radii of 7–9 km were obtained for some EOS with $`K^{-}`$ condensation. However, in those models the kaon condensation phase transition was implemented using the Maxwell construction, which is inadequate in stellar matter, where one has two conserved charges: baryon number and electric charge. When the kaon condensation phase transition is implemented properly, one obtains neutron stars with “large” radii, as shown by the curve labeled $`K^{-}`$ in Fig. 1. Moreover, kaon-nucleon and nucleon-nucleon correlations raise the threshold density for the onset of kaon condensation, possibly to densities higher than those found in the centre of stable neutron stars. It is clearly seen in Fig. 1 that none of the neutron star $`M`$–$`R`$ curves is consistent with SAX J1808.4-3658 (including rotational effects would shift the $`M`$–$`R`$ curves up and to the right in Fig. 1, and does not improve the consistency between the theoretical neutron star models and the observations of SAX J1808.4-3658). Moreover, it is unlikely that the actual mass and radius of SAX J1808.4-3658 lie very close to the dashed curve, since the minimum flux $`F_{\mathrm{min}}`$ at which X-ray pulsations were detected by RXTE was determined by the instrumental sensitivity, and the actual value could be even lower, while the presence of the slight X-ray dips observed in SAX J1808.4-3658 suggests that the companion mass is most likely less than $`0.1M_{\odot }`$, and the pulsar mass more than $`1M_{\odot }`$. Therefore it seems that SAX J1808.4-3658 is not well described by a neutron star model. As shown below, a strange star model seems to be more compatible with SAX J1808.4-3658.
Note that in writing inequality (1) we have implicitly assumed that the pulsar magnetic field is basically dipolar, even when the accretion disk is close to the stellar surface. This is partly supported by the agreement between the dipolar spin-up line and the location of MS PSRs in the spin period–spin period derivative diagram, which implies that the multipole moments in LMXBs are no more than 40% of the dipole moments if the quadrupole component is comparable to or larger than higher order anomalies. However, the $`R_0(\dot{M})`$ relation will be changed if the star’s field has a more complicated structure. For example, there may exist regions on the surface of the star where the magnetic field strength is much greater than that from a central dipole, affecting the channelling of the accretion flow and the pulsed emission; the induced current flow in the boundary layer of the disk could also increase the field strength when $`R_0`$ reaches $`R`$. If SAX J1808.4-3658 possessed such an anomalous field, there should be two kinds of observational effects: the pulse profile should show a dependence on energy (because in strongly magnetized plasma, photons with different energies have different scattering cross-sections), and the X-ray spectrum should change with the mass accretion rate (due to a change in the configuration of the accretion pattern and in the X-ray emitting region). These are in contrast with the observations of SAX J1808.4-3658, which show a single, nearly sinusoidal pulse profile with little energy dependence, and stable X-ray spectra when the X-ray luminosity varied by a factor of $`\sim 100`$. Note also that in the $`R_0(\dot{M})`$ formula the parameter $`\xi `$ depends on the ratio $`R_0/h`$, where $`h`$ is the scale height of the disk. The effect of the increase in $`B_r`$ with $`\dot{M}`$, due to the induced current, may be largely counteracted by the decrease of $`\xi `$ with $`\dot{M}`$, and by a steeper $`\dot{M}`$-dependence of $`R_0`$ when the effect of general relativity is included. So we conclude that the accretion flow around SAX J1808.4-3658 may still be dominated by a central dipole field even when the disk reaches the star.
Strange stars are astrophysical compact objects which are entirely made of deconfined u,d,s quark matter (strange matter). The possible existence of strange stars is a direct consequence of the conjecture that strange matter may be the absolute ground state of strongly interacting matter. Detailed studies have shown that the existence of strange matter is allowable within uncertainties inherent in a strong interaction calculation ; thus strange stars may exist in the universe. Apart from the fact that strange stars may be relics from the cosmic separation of phases as suggested by Witten , a seed of strange matter may convert a neutron star to a strange one . Conversion from protoneutron stars during the collapse of supernova cores is also possible . Recent studies have shown that the compact objects associated with the X-ray pulsar Her X-1 , and with the X-ray burster 4U 1820-30 , are good strange star candidates.
Most of the previous calculations of strange star properties used an EOS for strange matter based on the phenomenological nucleonic bag model, in which the basic features of quantum chromodynamics, such as quark confinement and asymptotic freedom, are postulated from the beginning. The deconfinement of quarks at high density is, however, not obvious in the bag model. To find a star of small mass and radius, one has to postulate a large bag constant, whereas one would imagine that in a high density system the bag constant should be lower.
Recently, Dey et al. derived an EOS for strange matter which has asymptotic freedom built in, shows confinement at zero baryon density and deconfinement at high density, and gives a stable configuration for chargeless, $`\beta `$-stable strange matter. In this model the quark interaction is described by an interquark vector potential originating from gluon exchange, and by a density dependent scalar potential which restores the chiral symmetry at high density. This EOS was then used to calculate the structure of strange stars. Using the same model (but with different values of the parameters from those employed in that reference), we calculated the $`M`$–$`R`$ relations, which are also shown as the solid curves labeled ss1 and ss2 in Fig. 1, corresponding to strange stars with maximum masses of $`1.44M_{\odot }`$ and $`1.32M_{\odot }`$ and radii of 7.07 km and 6.53 km, respectively. It is seen that the region confined by the dashed and dotted curves in Fig. 1 is in remarkable accord with the strange star models. Figure 1 clearly demonstrates that a strange star model is more compatible with SAX J1808.4-3658 than a neutron star one.
If SAX J1808.4-3658 is a strange star, then the thermonuclear flash model cannot be invoked to explain the observed X-ray bursts. However, a different mechanism has recently been proposed, in which the X-ray burst is powered basically by the conversion of the accreted normal matter to strange quark matter on the surface of a strange star.
As both the spin rate and the magnetic moment of SAX J1808.4-3658 resemble those inferred for other, non-pulsing LMXBs, an interesting and important question is: why is SAX J1808.4-3658 the only known LMXB with an MS PSR? The most straightforward explanation seems to be that the magnetic field of SAX J1808.4-3658 is considerably stronger than that of other systems of similar X-ray luminosity. We point out that a strange star is more liable to radiate pulsed emission than a neutron star because of its compactness. As seen in Fig. 1, the radius of a $`1M_{\odot }`$ strange star is generally 1.5–2 times smaller than that of a neutron star of similar mass, implying that, with the same magnetic moment (the observable quantity), the surface field strength of the strange star is 3–8 times higher than that of the neutron star, and that the size of the polar caps in the strange star for field-aligned flow, $`4\pi R^2(1-(1-R/R_0)^{1/2})`$, is up to 10 times smaller than in the neutron star. The more efficient magnetic channelling of the accreting matter close to the strange star surface could then lead to higher pulsation amplitudes, making the pulsations easier to detect. A strange star model for SAX J1808.4-3658 may also help to explain the unusually hard X-ray spectrum, if it has a low-mass ($`10^{-20}`$–$`10^{-19}M_{\odot }`$) atmosphere.
Strange stars have been proposed to model $`\gamma `$-ray bursters, soft $`\gamma `$-ray repeaters and the bursting X-ray pulsar GRO J1744-28, but these models are generally speculative. In this work, we have suggested that SAX J1808.4-3658 is a likely strange star candidate, by comparing its $`M`$–$`R`$ relation determined from X-ray observations with the theoretical models of a neutron star and of a strange star. If so, there will be very deep consequences for both the physics of strong interactions and astrophysics. We point out, however, that the available observational data are not sufficient or accurate enough to exclude the possibility that SAX J1808.4-3658 could be a neutron star with anomalous magnetic fields. It has been suggested that strange stars could become unstable to the $`m=2`$ bar mode; further observations of this signature in the case of SAX J1808.4-3658 will be of great interest.
We are grateful to Dr. G. B. Cook for providing the tabulated data for EOS UU of neutron stars, and the referees for critical comments. I. B. thanks Prof. B. Datta for valuable discussions and helpful suggestions. X. L. was supported by National Natural Science Foundation of China and by the Netherlands Organization for Scientific Research (NWO). J. D. and M. D. acknowledge partial support from Government of India (SP/S2/K18/96) and FAPESP.
# Cluster Formation and the ISM
## 1. Introduction
Star formation and the state of the interstellar medium involve strongly coupled physical processes. This is because the primary energy inputs into the ISM derive from the products of massive star formation in OB associations, namely, correlated supernova explosions (Type II), stellar winds, and photoionizing flux. The resulting pressure, shock waves, FUV, and cosmic ray content of the diffuse medium determine the surface pressures and ionization of molecular clouds. These mechanisms are the means by which the ISM regulates the surface density of molecular clouds, as well as the coupling of their internal magnetic fields, and thereby the process of star formation.
This review highlights some of the central processes in star formation and their links to ISM physics. We first give an overview of the basic physical processes in star formation (§2), where we also review the basic physics of molecular clouds and their cluster-forming, more massive cores. We then highlight some current work on the physics of filamentary molecular clouds (§3). Finally, we review some of the basic ideas of star formation theory with an eye to understanding cluster formation (§4). The reader may consult recent related reviews by McKee et al. (1993), McKee (1995), and Elmegreen et al. (1999).
## 2. Overview: From the ISM to Protostars
### 2.1. Diffuse ISM
The physical scales of the diffuse ISM span the range $`10^2`$–$`10^3`$ pc. The diffuse medium consists of cooler clouds that are in pressure balance with a hot surrounding medium, following the ideas of Field (1965) and McKee and Ostriker (1977, henceforth MO). The existence of a multiphase medium hinges upon the structure of the cooling function of atomic gas. The diffuse ISM has three phases in its inventory (see review; McKee 1995):
- a cold neutral medium (CNM) of clouds, with temperature $`T\simeq 50`$ K and density $`n\simeq 40`$ cm<sup>-3</sup>;
- a warm neutral medium (WNM) of temperature $`T\simeq 8,000`$ K, which surrounds the CNM and pervades much of space, and a warm ionized medium (WIM) at $`T\simeq 8,000`$ K and $`n\simeq 0.26`$ cm<sup>-3</sup>, which consists of highly ionized warm hydrogen;
- a hot ionized medium (HIM) with $`T\simeq 5\times 10^5`$ K and $`n\simeq 10^{-2.5}`$ cm<sup>-3</sup>.
The ISM contains other important components, however. Of greater significance, energetically, is the pressure associated with the magnetic field that pervades the ISM; it has a strength $`B`$ of several $`\mu `$G. Cosmic rays have comparable energy densities and are thought to be accelerated in supernova remnants. Thus, the bulk of the cosmic ray component of the ISM can be viewed as a consequence of star formation (see Duric, this volume). Finally, the ISM has non-thermal gas motions whose energy density is comparable to the magnetic and cosmic ray contributions as well. The fact that the energy density in the non-thermal component dominates the thermal one is reminiscent of the physics of molecular gas, which we address below.
The total pressure in the ISM can be ascertained by applying the condition of hydrostatic balance: the pressure in the galactic plane must be sufficient to support the weight of a hot galactic corona, detected through X-ray emission. The result (see Boulares and Cox, 1990) is that $`\overline{P}_{ISM}\simeq (3.9\pm 0.6)\times 10^{-12}`$ dyne cm<sup>-2</sup>. The constituent pressures that add up to this value are, in units of $`10^{-12}`$ dyne cm<sup>-2</sup> (from Boulares and Cox):
- thermal pressure, $`0.3`$
- cosmic rays, $`1.0`$
- magnetic fields, $`1.0`$ (field strengths locally 2-3 $`\mu `$G)
- kinetic pressure, $`1.8`$.
The thermal energy contributes little to the overall budget because the gas cools so efficiently, the primary coolant being the CII line. Thus the total energy budget for the diffuse ISM may be written as,
$$E_{therm}\ll E_{nontherm}\simeq E_{CR}\simeq E_{mag}\gg E_{grav}$$
(1)
The MO theory predicts that the hot phase of the ISM is heated by supernova explosions. The SN are assumed to occur at random throughout the Galactic disk. Uncorrelated Type I supernovae as well as a distributed Type II population are the drivers of ISM energetics in this picture. The HI survey of the Milky Way by Colomb, Poppel, and Heiles (1980) revealed, however, that vast atomic clouds are organized in filamentary structures. Here and there, large cavities and shells are obviously discerned. Heiles (1984) catalogued a large number of large HI shells in the ISM which he suggested are produced by the combined energy input from winds and supernova explosions that accompany OB associations. Note that OB stars formed in associations account for 90 % of the supernovae in the galaxy (van den Bergh and Tammann 1991).
Theoretical models of the energy input from massive stars in the HI of the galactic disk show that the correlated massive stellar winds and supernovae in OB associations create large superbubbles in the galactic HI (Bruhweiler et. al. 1980). The swept up gas forms dense, compressed supershells. Filaments could be produced as these shells cool and undergo thermal instabilities. Sufficiently intense wind and supernova activity can also lead to break-out into the galactic halo. The plumes of hot gas flowing vertically out of the disk are akin to ”chimneys” located above the OB associations (eg. Norman and Ikeuchi 1989). A beautiful example of this process in action is the CGPS 21 cm map of the chimney in the W3 region (Normandeau et. al. 1996).
Large scale interstellar magnetic fields can strongly affect the evolution of superbubbles. Recent numerical calculations by Tomisaka (1998), as an example, show that blow-out depends upon the structure of the magnetic field in the ISM. For a uniform field parallel to the galactic plane, expansion perpendicular to the plane is suppressed and a strongly magnetized shell is swept up. On the other hand, if the field strength scales with the gas density as $`B\propto \rho ^{1/2}`$, superbubble expansion and blow-out occur more readily, the results being similar to the unmagnetized case. These calculations also show that the field is stretched in the walls of the expanding shell and appears to be well ordered.
We now turn to the question of how the ISM affects the physical properties of molecular clouds. There are at least two important influences:
- ISM Pressure $`\rightarrow `$ Molecular Cloud Properties
- Cosmic Ray production $`\rightarrow `$ Molecular Cloud Heating/Ionization
The ISM pressure upon the surfaces of molecular clouds truncates them and, in so doing, determines their size. Thus, the ISM pressure essentially determines the surface density of a cloud. The second point is that the ionization of molecular clouds is determined by two types of radiation: FUV photons and high energy cosmic rays. The cosmic rays penetrate into the heart of high column density clumps in the cloud and partially ionize the gas, while FUV photoionization dominates in gas with visual extinctions $`A_V\lesssim 4`$ mag (McKee 1989). By contributing to the ionization rate in a cloud in this way, ISM activity helps to control the star formation rate.
### 2.2. Molecular Clouds
Stars form in self-gravitating, molecular clouds. These are highly inhomogeneous structures that are filamentary (eg. Johnstone and Bally, 1999) and have a rich sub-structure. Molecular clouds are host to a spectrum of dense, high pressure sub-regions known as molecular cloud clumps and smaller cores. These are the actual sites of star formation within molecular clouds.
Most of the self-gravitating gas in the Milky Way is gathered in the distribution of Giant Molecular Clouds (GMCs). Surveys show (see eg. Scoville and Sanders, 1987) that GMCs range in mass from $`10^5`$ to $`10^6M_{\odot }`$, although smaller molecular clouds may be as low as several $`10^2M_{\odot }`$. GMCs have a mass spectrum $`dN/dM_{GMC}\propto M_{GMC}^{-1.6}`$. The range in physical scales is about 10–100 pc, with the median cloud having a mass of $`3.3\times 10^5M_{\odot }`$ and a median radius of 20 pc. The median cloud density is then 180 cm<sup>-3</sup>, and the column density $`1.4\times 10^{22}`$ cm<sup>-2</sup>, or equivalently, $`260M_{\odot }`$ pc<sup>-2</sup> (see Harris and Pudritz 1994). The pressures within molecular clouds are much higher than the ISM pressure surrounding them; typically $`P_{GMC}/P_{ISM}\simeq 20`$–30. Thus, molecular clouds are not another phase of the interstellar medium, because self-gravity, not pressure, dominates their internal physics. Direct Zeeman measurements of molecular clouds also show that they are pervaded by strong magnetic fields, $`B\simeq 30\mu `$G. The corresponding magnetic pressure is comparable with the gravitational energy density of a cloud.
The line-profiles of molecular clouds are dominated by non-thermal motions. Larson (1981) was able to establish two important empirical cloud properties. If $`r`$ represents the physical scale of a CO map, and one measures the average column density $`\mathrm{\Sigma }`$ of gas within this scale, Larson found that $`\mathrm{\Sigma }\propto \rho r\simeq \mathrm{const}`$. Moreover, if $`\sigma `$ is the velocity dispersion of the gas on this scale, he also determined that $`\sigma \propto r^{1/2}`$. The physical explanation of these relations came with the work of Chieze (1987) and Elmegreen (1989). The theory of self-gravitating spheres embedded within an external medium of pressure $`P_s`$ was worked out in classic papers by Ebert (1955) and Bonnor (1956). They showed that an isothermal cloud of mass $`M`$ and average velocity dispersion $`\sigma _{ave}`$ has a critically stable state, described by
$$M_{crit}=1.18\frac{\sigma _{ave}^4}{(G^3P_s)^{1/2}}$$
(2)
$$\mathrm{\Sigma }_{crit}=1.60(P_s/G)^{1/2}$$
(3)
We note that these relations may also be determined from the virial theorem (eg. McCrea 1957). They give a natural explanation for Larson’s empirical laws. The surface density of a molecular cloud, so interpreted, depends only upon the ISM pressure: $`\mathrm{\Sigma }\propto P_{ISM}^{1/2}`$. It is the relative constancy of the ISM surface pressure over significant portions of the disk, then, that accounts for the near constancy of molecular cloud surface densities. Similarly, by noting that $`M=\pi r^2\mathrm{\Sigma }`$, one sees that the first equation yields Larson’s size-line width relation. Exactly the same type of scalings, as well as coefficients of the same order of magnitude, pertain to the description of a filamentary cloud, as Fiege and Pudritz (1999, FP) show.
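As an order-of-magnitude check of Eqs. (2) and (3), the short sketch below evaluates the critical mass and surface density for the mean ISM pressure quoted in §2.1. The 2 km/s velocity dispersion and the factor of ~25 enhancement of the boundary pressure (meant to represent the $`P_{GMC}/P_{ISM}\simeq 20`$–30 ratio quoted above) are illustrative choices, not values taken from the text.

```python
import numpy as np

G = 6.674e-8                  # cgs
Msun = 1.989e33               # g
pc = 3.086e18                 # cm
P_ISM = 3.9e-12               # dyne cm^-2, mean ISM pressure from Sec. 2.1
sigma = 2.0e5                 # cm/s -- illustrative 2 km/s velocity dispersion

for P_s, label in [(P_ISM, "mean ISM pressure"),
                   (25.0 * P_ISM, "boundary pressure ~25x higher")]:
    M_crit = 1.18 * sigma**4 / np.sqrt(G**3 * P_s)      # Eq. (2)
    Sigma_crit = 1.60 * np.sqrt(P_s / G)                 # Eq. (3)
    print("%-30s  M_crit ~ %.1e Msun,  Sigma_crit ~ %3.0f Msun/pc^2"
          % (label, M_crit / Msun, Sigma_crit / (Msun / pc**2)))
```

With the higher boundary pressure, the critical surface density comes out near a few hundred solar masses per square parsec, close to the median GMC surface density quoted above; with the mean ISM pressure alone it is several times smaller.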
The energetics of a molecular cloud may be summarized as follows;
$$E_{therm}\ll E_{nontherm}\simeq E_{mag}\simeq (1/2)E_{grav}$$
(4)
where the last relation follows from the fact that ordered fields and non-thermal (MHD) motions contribute equally to GMC cloud support (eg. review McKee et al., 1993). The explanation of the insignificant contribution of thermal support to this balance is again related (as in the ISM) to the efficiency of cooling, the predominant coolant in molecular gas now being the CO rotational lines ($`J=1\rightarrow 0`$, etc.). By balancing the heating rate of molecular clouds due to cosmic ray bombardment with the molecular cooling rates, one can show that clouds are maintained at a temperature of 10–20 K over a wide range of densities, $`10^2`$–$`10^4`$ cm<sup>-3</sup> (eg. Goldsmith and Langer 1978). It turns out that the column density at which HI clouds become self-gravitating (several $`\times 10^{21}`$ cm<sup>-2</sup>) is also the column at which self-shielding from the galactic FUV field takes place, allowing the gas to become molecular.
The onset of molecular cooling in sufficiently high-column gas drops the cloud to low temperatures where it is thermostatically controlled. Molecular clouds also have low rotational energies compared with gravity. Thus, it is left to magnetic pressure as well as the non-thermal motions (MHD waves or turbulence) to provide support of the cloud against gravitational collapse. Molecular clouds are exotic structures in which magnetism plays out a slow, and ultimately losing battle against self-gravity.
The literature contains four basic mechanisms for cloud formation that may all be active in the galaxy:
- Cloud-cloud agglomeration in spiral waves (eg. Kwan & Valdes 1983)
- Supershell fragmentation (eg. McCray and Kafatos 1987)
- Large-scale gravitational instability: the $`Q`$ criterion (Kennicutt, 1989),
$$Q=\frac{c_s\kappa }{\pi G\mathrm{\Sigma }}<1\quad \mathrm{(gas)}$$
- Parker instability (eg. Blitz & Shu 1980, Elmegreen 1982)
The evidence for the first comes from CO observations of spiral galaxies. One invariably finds that GMCs inhabit the spiral arms of galaxies (eg. Rand 1993). The idea is that the passage of an arm leads to a focussing of the orbits of more diffuse clouds, which collide and agglomerate in the potential well of the arm. The form of the power law mass spectrum of clouds, given above, is in good support of this mechanism, as Kwan and Valdes showed. The second mechanism also appears to operate within the walls of supershells; the fragmentation mass is of order $`5\times 10^4M_{\odot }`$ by the time that $`3\times 10^7`$ years have elapsed (see McCray and Kafatos, 1987). The third mechanism, that of gravitational instability in a self-gravitating gas layer, employs the well-known Toomre ’Q’ criterion, where $`c_s`$ is the effective sound speed of the medium and $`\kappa `$ is the epicyclic frequency. Kennicutt (1989) showed that significant star formation in spiral galaxies only occurs in gas whose column density exceeds the limit given by the Toomre criterion, and is related, therefore, to gravitational instability of the gas. Finally, the Parker instability arises from the intrinsic buoyancy that a magnetic field, oriented parallel to the galactic plane, has within a self-gravitating gas layer. As the field rises and bubbles out of a gas layer, the gas flows back to the disk along magnetic field lines and gathers in magnetic ”valleys”. The predicted scale length of this instability is about 1 kpc along spiral arms (eg. Elmegreen 1982), which is precisely the kind of distance one sees between GMCs and giant HII regions in galaxies. The complete picture of GMC formation probably involves all of these processes. Gas clouds agglomerate within spiral arms and finally reach the critical column density at which gravitational instability and magnetic buoyancy become important. The subsequent formation of OB associations and supershells leads to further gas compression, and also to the possible formation of chimneys that channel some of the energy release into the galactic halo.
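For orientation, the Toomre criterion quoted above is evaluated below for illustrative, solar-neighbourhood-like disk parameters; the effective sound speed, epicyclic frequency and gas surface density are assumptions, not values from the text.

```python
import numpy as np

G = 6.674e-8      # cgs
Msun = 1.989e33   # g
pc = 3.086e18     # cm
km = 1.0e5        # cm

# Illustrative disk parameters (assumptions):
c_s = 6.0 * km                        # effective (turbulent) sound speed, cm/s
kappa = 36.0 * km / (1.0e3 * pc)      # epicyclic frequency ~36 km/s/kpc, in s^-1
Sigma_gas = 10.0 * Msun / pc**2       # gas surface density, g/cm^2

Q = c_s * kappa / (np.pi * G * Sigma_gas)
print("Toomre Q for the gas:", round(Q, 2))   # Q < 1 indicates gravitational instability
```

With these inputs Q comes out somewhat above unity, consistent with the idea that the gas must be compressed (for example within spiral arms) before large-scale gravitational instability sets in.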
### 2.3. Molecular Cloud Clumps: Cluster Formation
The clump mass spectrum within molecular clouds is, interestingly enough, similar to the GMC mass spectrum. Thus, Blitz (review, 1993) reported that the spectrum $`dN/dM\propto M^{-1.6}`$, with a 10% error in the index, fits the data for a number of different clouds rather well. The masses of such clumps range from 1 to $`1000M_{\odot }`$, so one sees that the median clump mass is only a thousandth of the median GMC cloud mass. The fact that only a few percent of the mass of a molecular cloud is tied up in these star forming clumps is, empirically, the reason why star formation is a rather inefficient process.
The overall energetics of clumps are more dominated by gravity than are the diffuse inter-clump regions of the cloud. These clumps have strong fields, and their internal kinematics are dominated by non-thermal motions (eg. Caselli and Myers, 1995). Maps of the magnetic structures associated with clumps are now becoming available through the technique of submillimetre polarimetry. Schleuning’s (1998) map of a clump in the Orion cloud, as an example, shows that there is a well ordered, hour-glass shaped magnetic field structure present on these scales. Not all clumps are expected to lie near the critical threshold for gravitational collapse. As an example, low mass clumps in the logatropic model for cloud cores developed by McLaughlin and Pudritz (1996) are pressure dominated structures. In that model, a critical magnetized clump has a mass of $`250M_{\odot }`$.
Infrared camera observations of clumps in molecular clouds such as Orion reveal that the most massive clumps are forming star clusters. The most massive clump in the Orion cloud, as an example, has a mass of $`500M_{\odot }`$ and a star formation efficiency approaching 40% (E. Lada 1992; see also the reviews by Zinnecker et al. 1993 and Elmegreen et al. 1999). Stars typically form as members of a group or cluster, and not in isolation. This may be explained by noting that clump mass spectra are rather different from the Salpeter IMF for the stars. Beyond $`0.3M_{\odot }`$ or so, the IMF has a much steeper power law than that describing molecular clumps: $`dN_{\ast }/dM_{\ast }\propto M_{\ast }^{-2.35}`$. By weighting the clump mass spectrum with the mass and integrating over mass, one finds that the total mass of the clumps scales as $`M_{clump,tot}\propto M_{max}^{0.4}`$. Thus, most of the mass resides in the massive end of the spectrum of clumps. For stars, however, this same procedure shows that it is the lower mass end of the IMF that contains most of the stellar mass. Therefore, cluster formation must be the preferred mode of star formation (Patel and Pudritz 1994).
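This mass-budget argument can be made explicit with a two-line integration. The sketch below (with illustrative mass limits, only the 1–1000 and 0.3 solar mass values being taken from the text) shows that the total clump mass keeps growing with the upper mass cutoff, roughly as $`M_{max}^{0.4}`$, whereas the Salpeter-weighted stellar mass budget is nearly insensitive to the upper cutoff and is set by the low-mass end.

```python
def total_mass(alpha, m_lo, m_hi):
    """Integral of M * dN/dM for dN/dM ~ M^{-alpha}, between m_lo and m_hi.

    The antiderivative of M^(1-alpha) is M^(2-alpha)/(2-alpha).
    """
    p = 2.0 - alpha
    return (m_hi**p - m_lo**p) / p

# Clump spectrum (alpha = 1.6): total mass is dominated by the most massive clumps
for m_max in (100.0, 1000.0):
    print("clumps, M_max =", m_max, " -> relative total mass", round(total_mass(1.6, 1.0, m_max), 1))

# Salpeter IMF (alpha = 2.35): total mass barely changes with M_max,
# so the stellar mass budget is set by the low-mass end of the IMF
for m_max in (10.0, 100.0):
    print("stars,  M_max =", m_max, " -> relative total mass", round(total_mass(2.35, 0.3, m_max), 2))
```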
How do the clumps form? Processes that have been proposed include:
- Clump-clump agglomeration (Carlberg and Pudritz 1990, McLaughlin and Pudritz 1996)
- Structures in turbulent flow (see Pouquet’s contribution).
The fact that clumps and their parental clouds share similar mass spectral forms suggests a common formation mechanism. Thus clumps could be built up by the same type of agglomerative and gravitational instability process that produced molecular clouds. Turbulence has also been proposed as the origin of a wide range of structure and substructure (Elmegreen and Efremov 1997).
The finite amplitude Alfvén waves within molecular clouds and their clumps contribute not only to the support of the gas, but also to the process of clump and structure formation within them. Dewar (1970) showed that a collection of Alfvén waves exerts a nearly isotropic gas pressure that can support gas in a direction parallel to the magnetic field lines. Arons and Max (1975) later championed the idea that such waves might account for non-thermal motions in clouds. Alfvén waves of finite amplitude are different from linear waves, however, in that they produce density fluctuations. Carlberg and Pudritz (1990) suggested that such non-linear waves could produce the small density fluctuations within clouds that subsequently agglomerate to produce the observed clump mass spectrum.
This behaviour of non-linear Alfvén waves is akin to that of pressure (magneto-acoustic) waves, which damp out quickly. Numerical simulations of the behaviour of such waves find that they damp very quickly, with $`\tau _{damp}\simeq 2t_{ff}`$ (see Vasquez-Semadeni, Passot and Pouquet 1995, Maclow et al. 1998, Ostriker et al. 1998). Thus, unless MHD waves and turbulence are replenished very quickly, the turbulent support of clumps would quickly vanish. It has often been suggested that bipolar outflows, examined below, can solve this problem. One caveat here is that turbulence is also observed in starless clumps and cores.
### 2.4. Molecular Cloud Cores: Individual Star Formation
On scales of 0.01-0.1 pc one encounters dense cores ($`n_{core}\simeq 10^4`$ cm<sup>-3</sup>) of several solar masses that may be associated with the formation of individual protostars (Benson and Myers, 1989). Their internal velocity dispersions are less dominated by non-thermal motion (see Caselli and Myers 1995).
We have arrived at perhaps the most intensively studied level of star formation. Individual star formation is an incredibly rich process involving the simultaneous presence of:
- Accretion disks
- Bipolar outflows and jets
- Gravitational collapse
Jet activity and accretion disks are strongly correlated: whenever one sees an outflow, there is good evidence that an accretion disk is also present. Bipolar molecular outflows are the most obvious and ubiquitous sign-post of star formation. There are now more than 200 molecular outflows known (eg. reviews; Bachiller 1996, Padman, Bence and Richer 1997). They persist for at least several $`10^5`$ years, which is a good fraction of the pre-main-sequence evolution timescale. The outflows consist of material in the natal molecular cloud core of a protostar that is swept up and put into motion by a faster, underlying jet from the central source. Outflows are associated with stars of all masses, from the common low-mass T-Tauri stars to high mass stars in the process of formation. A model for the underlying jets that appears to fit all of the observed facts predicts that jets are centrifugally accelerated winds driven from the surfaces of magnetized protostellar accretion disks (eg. review, Königl and Pudritz 1999). Another suggestion is that jets originate at the interface between the magnetosphere surrounding an active young protostar and the surrounding disk (eg. review Shu et al. 1999).
How do cores form? Several ideas have been suggested in the literature, including:
- Ambipolar diffusion (eg. Mouschovias 1993)
- MHD wave damping (Langer 1978, Mouschovias 1987, Pudritz 1990)
The first mechanism arises from the fact that the magnetic field within a partially ionized gas (typically one ion per $`10^7`$ neutrals in a molecular cloud) is fairly diffusive. A field will slowly slip out of overdense regions because, in a magnetic field gradient, the Lorentz force pushes outwards on the ions to which the field is directly coupled. Collisions of the slowly outward-moving ions with the neutrals communicate the magnetic force to the overwhelmingly neutral gas. This magnetic pressure support cannot be maintained forever because the ions are gradually pushed out of the dense region, dragging the field with them. The neutral density, on the other hand, slowly increases. The drift speed of the field, $`v_D`$, is thus proportional to the gravitational acceleration $`g`$ in the clump times the time-scale for a neutral to collide with any ion, $`\nu _{ni}^{-1}`$. The ambipolar diffusion time-scale is therefore (eg. McKee et al. 1993) $`\tau _D=R/v_D=(R/g)\nu _{ni}\propto (\zeta /\rho )^{1/2}`$, where $`\zeta `$ is the ionization rate per molecule. Compare this with the free-fall time, $`t_{ff}=(3\pi /32G\rho )^{1/2}=1.4\times 10^6(n/10^3\mathrm{cm}^{-3})^{-1/2}\mathrm{yr}`$. Both of these time scales have the same power-law dependence upon the cloud density. Thus, their ratio $`\tau _D/t_{ff}\simeq 10`$ for standard ionizing flux rates. This result is independent of the cloud density. We see that magnetic pressure can stave off gravitational collapse for a significant number of dynamical time-scales, but that the battle is finally lost. The time-scale required to form a core from the background molecular cloud (of density $`10^2`$ cm<sup>-3</sup>) has been calculated by taking into account a host of complicating factors, such as the effect of grains on the ionization state of the gas. The results (Ciolek and Mouschovias 1995) suggest that it would take nearly 20 million years to form a core. This may be too long in comparison with the lifetime of a molecular cloud, because it suggests that most cores should appear to be starless.
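For reference, the free-fall time quoted above, and the corresponding ambipolar diffusion time under the standard estimate that it is roughly ten times longer, are trivial to evaluate; the mean molecular mass used below is an assumption.

```python
import numpy as np

G = 6.674e-8             # cgs
m_H = 1.673e-24          # hydrogen mass, g
mu = 2.33                # mean molecular mass per particle (assumed)
yr = 3.156e7             # s

def t_ff_yr(n_cm3):
    """Free-fall time (3*pi / 32*G*rho)^{1/2} in years, for a number density n in cm^-3."""
    rho = mu * m_H * n_cm3
    return np.sqrt(3.0 * np.pi / (32.0 * G * rho)) / yr

for n in (1e2, 1e3, 1e4):
    tff = t_ff_yr(n)
    print(f"n = {n:7.0f} cm^-3: t_ff ~ {tff:.1e} yr,  tau_D ~ 10 t_ff ~ {10*tff:.1e} yr")
```

At the background cloud density of $`10^2`$ cm<sup>-3</sup> this gives an ambipolar diffusion time of a few $`\times 10^7`$ yr, of the same order as the ~20 Myr core-formation time quoted above.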
The process of ambipolar diffusion, with the gradual loss of magnetic support leading to gravitational collapse, is the current paradigm for star formation in a molecular core (eg. review Shu et al. 1987). By taking into account the angular momentum of the parent core, the collapse gives rise to the formation of a magnetized disk. While attractive, one may well wonder whether or not this vision of individual star formation is able to account for the formation of star clusters. Of particular concern is the question of what determines the mass of a star in this scenario. We address this concern in §4.
The second mechanism for core formation begins with the fact that MHD waves in a partially ionized medium are damped on a small enough scale. The physical idea is that a wave in the combined ion-neutral fluid can only be sustained if a neutral particle collides with any ion within the period of an Alfvén wave. It is easily shown that on scales of approximately the size of cores, this condition breaks down and that waves must damp (eg. Mouschovias 1987, Pudritz, 1990). Without this wave support, gas may be more prone to increase its density, and to eventually collapse.
## 3. Molecular Cloud Structure: Topology and Helical Fields
Let us now consider models for filamentary clouds based upon magnetized, pressure-truncated cylinders. An analytic solution for the radial structure of a self-gravitating, isothermal filament (without pressure truncation) was found by Ostriker (1964):
$$\rho =\frac{\rho _c}{(1+(r^2/8r_0^2))^2}.$$
(5)
The point to note about this solution is the steep fall-off of the gas density at large scales: $`\rho \propto r^{-4}`$. Such a steep density gradient has never been seen in any study of molecular clouds; molecular cloud core density profiles are never steeper than $`r^{-2}`$. Recent infrared studies of molecular filaments (eg. Lada et al., 1998) find radial profiles that fall off as $`r^{-2}`$. These results raise the basic question: what is a self-consistent model for filamentary clouds?
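The contrast between the Ostriker profile of Eq. (5) and the observed $`r^{-2}`$ behaviour is easy to see numerically; the sketch below evaluates the local logarithmic slope of Eq. (5) at a few radii (the central density and the scale radius drop out of the slope).

```python
import numpy as np

def rho_ostriker(r, rho_c=1.0, r0=1.0):
    """Isothermal filament solution of Eq. (5), in units of rho_c and r0."""
    return rho_c / (1.0 + r**2 / (8.0 * r0**2))**2

def log_slope(r, eps=1e-4):
    """Local power-law index d ln(rho) / d ln(r), by centered finite difference."""
    return (np.log(rho_ostriker(r * (1.0 + eps))) -
            np.log(rho_ostriker(r * (1.0 - eps)))) / (2.0 * eps)

for r in (1.0, 3.0, 10.0, 100.0):
    print(f"r/r0 = {r:6.1f}: local slope = {log_slope(r):.2f}")
# The slope tends to -4 at large radii, far steeper than the observed ~ -2 profiles.
```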
The MHD models in the literature posit predominantly poloidal magnetic fields (eg. Nagasawa 1987, Gehman et al. 1996). Such a field contributes its pressure to the support against gravitational collapse. What is often ignored in such treatments is the effect of external pressure, as well as the possibility that filaments also have a toroidal component of the field. The possibility that filaments are associated with helical magnetic fields was raised in the observations by Heiles (1987), and discussed by Bally (1987).
How would helical fields arise? All that is required is to twist one end of a filament containing a poloidal field, with respect to the other end. This could easily occur in the interstellar medium. We (Fiege and Pudritz 1999, FP; see Fiege and Pudritz, this volume) have developed a rather general model in which we idealize molecular filaments as infinitely long cylinders of self-gravitating gas, that are truncated at a finite radius by the pressure $`P_s`$ of the external medium. The general internal field consists of both poloidal $`B_z`$ and toroidal $`B_\varphi `$ field components. We also include an internal, non-thermal gas pressure. One then applies the tensor virial theorem (FP) to derive a new form of the scalar virial theorem that is appropriate to the radial equilibrium of an infinite cylinder.
In the absence of an ordered magnetic field, the virial theorem for pressure-truncated cylinders takes the simple form
$$\frac{m}{m_{vir}}=1-\frac{P_s}{<P>},$$
(6)
where $`m`$ is the mass per unit length of a self-gravitating cylinder whose average internal pressure is $`<P>`$, and $`m_{vir}=2<\sigma ^2>/G`$ is a critical mass per unit length (see eg. McCrea 1957). This solution demonstrates that the stability of cylinders is different from that of spheres. In the latter case, a cloud of fixed mass, when squeezed with ever higher external pressure, reaches a critical radius below which the solution is unstable and spherical collapse ensues. For the cylinder, on the other hand, as long as its mass per unit length is small enough that $`m<m_{vir}`$, it does not matter how much you squeeze; the filament will remain stable against radial collapse for any external pressure. Solutions to this virial equation depend upon observationally measurable quantities such as the ratios $`P_s/<P>`$ and $`m/m_{vir}`$. We compare our models with the available data by plotting these two virial parameters for a number of observed filaments.
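A short numerical illustration of Eq. (6) is given below; the velocity dispersion is an illustrative value, and the point is simply that an equilibrium value of $`<P>`$ exists for any external pressure $`P_s`$ provided $`m<m_{vir}`$.

```python
G = 6.674e-8      # cgs
Msun = 1.989e33   # g
pc = 3.086e18     # cm
km = 1.0e5        # cm

sigma = 0.5 * km                 # illustrative mean velocity dispersion, cm/s
m_vir = 2.0 * sigma**2 / G       # critical mass per unit length, g/cm

print("m_vir ~ %.0f Msun/pc" % (m_vir / (Msun / pc)))

# For m < m_vir, Eq. (6) gives the mean internal pressure needed for equilibrium:
# m/m_vir = 1 - P_s/<P>  =>  <P>/P_s = 1/(1 - m/m_vir)
for f in (0.2, 0.5, 0.9, 0.99):
    print("m/m_vir = %.2f  ->  <P>/P_s = %.1f" % (f, 1.0 / (1.0 - f)))
```

As the mass per unit length approaches the critical value, the required internal pressure grows without bound, but no finite external pressure can by itself force radial collapse.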
Let us now turn to magnetic field effects (FP). A poloidal field adds magnetic pressure support to the filament, so that its critical mass per unit length increases. For a purely toroidal field, however, the opposite trend occurs, because the hoop stress associated with this component squeezes the filament. A toroidal field also produces a much shallower density gradient. By including these two field components in the virial theorem, one can draw curves through the two-dimensional observational plane discussed above. We plot, in Figure 1, the predictions of the magnetic version of the virial theorem. The observer is not required to have any field strengths available, but must have enough data on hand to ascribe to any filament the two ratios (involving the pressure and mass per unit length) discussed above. Each curve represents the solution for a different relative contribution of poloidal and toroidal field components to the virial equation, as measured by a magnetic virial parameter. For the limited amount of data available, it appears that filaments lie in a magnetic field regime wherein the pinch of the toroidal field dominates the support of the poloidal field component (see FP). Helical fields, it appears, may explain the data. A very important vindication of this result is that the radial density profiles for filaments with helical fields follow $`\rho (r)\propto r^{-2}`$, or shallower, in agreement with the current data.
## 4. The Clustered Mode of Star Formation
We have seen that OB associations provide the dominant input into heating and sculpting the properties of the ISM. It also appears that star formation is dominated by the process of cluster formation. Thus, from the point of view of both star formation and ISM theory, it is imperative that we understand how stellar clusters are formed. The crucial question is: what processes determine the masses of stars and hence the form of the IMF?
As is well known, there are two physical pictures of individual star formation that have dominated the discussion in the last decades (eg. review; Pudritz et al. 1996). The first view, based on the gravitational stability of B-E spheres, posits that the mass of a star is essentially a Jeans mass. At a temperature of 10 K, and for reasonable core pressures, the critical B-E mass is indeed around $`1M_{\odot }`$ (eg. Larson 1992). The basic problem with this approach, however, is that in a highly inhomogeneous medium the Jeans mass becomes an ambiguous quantity. In order to explain the IMF, therefore, one would have to posit that it is ”laid-in” in the mass spectrum of clumps. We have noted that this is unlikely for larger mass scales. However, recent submm observations by Motte et al. (1998) of clumps in the $`\rho `$ Oph cloud claim to have found a core spectrum much like the Salpeter IMF.
The second basic model for star formation is the idea that it is not the mass, but rather the mass accretion rate $`\dot{M}`$, that is determined by a molecular cloud. The self-similar collapse picture (eg. Shu 1977) requires that some process turn off the infall. The observed outflows have been hypothesized to play this role. Note, however, that jets in Class 0 objects are highly collimated and will not intercept most of the infalling envelope. In addition, there is a strong positive relation between disk accretion and jet outflow, which seems to indicate that plenty of matter can collapse onto a disk and accrete through it onto the central protostar.
Both of the previous pictures appear to be inadequate to provide a theory for the IMF. A promising new approach, however, posits that stellar mass is acquired through a process of competitive accretion for the gas that is available in a clump. In the specific numerical simulation of Bonnell et al. (1997), as an example, the accretion rate is modelled as a Bondi-Hoyle process onto a distributed set of initial objects of mass $`M_o`$. The stellar velocity is $`v_{\mathrm{\infty }}`$, and the gas density of the clump is $`\rho _{\mathrm{\infty }}`$. The simulations showed that seed objects in the higher, central density region became more massive than the outliers. This can be understood analytically, since the time dependence of accretion at the Bondi accretion rate $`\dot{M}_{BH}=\gamma M^2`$, where $`\gamma \equiv 4\pi \rho _{\mathrm{\infty }}G^2/(v_{\mathrm{\infty }}^2+c_s^2)^{3/2}`$, is readily found. It is easy to show that the accretion time scale in this picture is $`t_{accr}=1/(\gamma M_o)\propto 1/(\rho _{\mathrm{\infty }}M_o)`$. Shorter accretion times occur in higher density regions of a clump, which is where one might expect more massive stars to be formed. These results appear to be strongly dependent on the mass of the input objects, which is not determined by the theory. Nevertheless, this type of approach holds considerable promise for a theory of cluster formation.
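The runaway character of this competitive accretion is easy to see from the closed-form solution of $`\dot{M}=\gamma M^2`$; a short sketch, with purely illustrative seed masses and $`\gamma `$ values (in arbitrary units), is given below.

```python
def mass_vs_time(M0, gamma, t):
    """Closed-form solution of dM/dt = gamma*M^2: M(t) = M0 / (1 - gamma*M0*t)."""
    return M0 / (1.0 - gamma * M0 * t)

# Two seed objects sitting in regions of different gas density, hence different gamma.
# Values are illustrative; in these units the runaway time is t_accr = 1/(gamma*M0).
gamma_dense, gamma_sparse = 2.0, 1.0
M0 = 0.1

for t in (0.5, 2.0, 4.0, 4.9):
    print("t = %.1f:  M_dense = %6.3f   M_sparse = %6.3f"
          % (t, mass_vs_time(M0, gamma_dense, t), mass_vs_time(M0, gamma_sparse, t)))
# The seed in the denser region runs away first, at t_accr = 1/(gamma*M0),
# illustrating why objects near the dense clump centre end up the most massive.
```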
#### Acknowledgments.
We are grateful to the organizers of this meeting for the invitation to participate in this most stimulating work shop. We thank Chris McKee and Dean McLaughlin for interesting discussions about these topics. REP’s research is supported through a grant from the Natural Sciences and Engineering Research Council of Canada.
## References
Bachiller, R. 1996, ARA&A, 34, 111
Bally J., 1987, Ap.J., 312, L45
Benson P.J., & Myers P.C., 1989, ApJSupp, 71, 89
Blitz, L. 1993, in Protostars & Planets III, E. H. Levy & J. I. Lunine (Tucson: Univ. of Arizona Press), 125
Blitz L., & Shu F.H., 1980, ApJ, 238, 148
Bonnell, I. A., Bate, M. R., Clarke, C. J., & Pringle, J. E. 1997, MNRAS, 285, 201
Bonnor, W. B. 1956, MNRAS, 116, 351
Boulares A., & Cox, D.P., 1990, ApJ, 365, 544
Bruhweiler, F. C., Gull, T. R., Kafatos, M., & Sofia, S. 1980, ApJ, 238, L27
Carlberg R.G., & Pudritz R.E., 1990, MNRAS, 247, 353
Caselli, P., & Myers, P. C. 1995, ApJ, 446, 665
Chièze J.P., 1987, A&A, 171, 225
Ciolek G.E., & Mouschovias T.Ch., 1995, ApJ, 454, 194
Colomb F.R., Poppel W.G.L., & Heiles C., 1980, A&A Supp., 40, 47
Dewar, R. L. 1970, Phys. Fluids, 13, 2710
Ebert, R. 1955, Z. Astrophys., 37, 217
Elmegreen, B. G. 1982, ApJ, 253, 655
Elmegreen, B. G., 1989, ApJ, 338, 178
Elmegreen, B. G., & Efremov, Y. N. 1997, ApJ, 480, 235
Elmegreen, B. G., Efremov, Y. N., Pudritz, R. E., & Zinnecker, H. 1999, in Protostars and Planets IV, A. P. Boss, S. S. Russell, & V. Mannings (Tucson, Univ. Arizona Press), in press
Fiege, J., & Pudritz, R. E. 1999, MNRAS, astro-ph/9901096
Field, G. B. 1965, ApJ, 142, 531
Gehman C.S., Adams F.C., & Watkins R., 1996, Ap.J., 472, 673
Goldsmith, P. F., & Langer, W. D. 1978, ApJ, 222, 881
Harris, W. E., & Pudritz, R. E. 1994, ApJ, 429, 177
Heiles C., 1987, Ap.J., 315, 555
Heiles C., 1984, ApJSupp., 55, 585
Johnstone, D., & Bally, J. 1999, ApJ, 510, L49
Kennicutt, R. C. 1989, ApJ, 344, 685
Königl, A., & Pudritz, R. E. 1999, in Protostars and Planets IV, A. P. Boss, S. S. Russell, & V. Mannings (Tucson, Univ. Arizona Press), in press
Kwan J., & Valdes F., 1983, ApJ, 271, 604
Lada C.J., Alves J., & Lada E.A., 1998, ApJ, in press
Lada E., 1992, ApJ, 393, 25
Langer, W. D. 1978, ApJ, 225, 95
Larson, R. B., 1981, MNRAS, 194, 809
Larson, R. B. 1992, MNRAS, 256, 641
Mac Low, M.-M., Klessen, R. S., Burkert, A., & Smith, M. D. 1998, Phys. Rev. Lett., 80, 2754
McCray R., & Kafatos M., 1987, ApJ, 317, 190
McCrea W.H., 1957, MNRAS, 117, 562
McKee, C. F. 1995, in ASP Conf. Series 80, The Physics of the Interstellar Medium, A. Ferrara, C. F. McKee, C. Heiles, & R. R. Shapiro (San Francisco: ASP), 292
McKee, C. F., Zweibel, E. G., Goodman, A. A., & Heiles, C. 1993, in Protostars & Planets III, E. H. Levy & J. I. Lunine (Tucson: Univ. of Arizona Press), 327
McKee C.F., 1989, ApJ, 345, 782
McKee C.F., & Ostriker J.P., 1977, ApJ, 218, 148
McLaughlin D.E., & Pudritz R.E, 1996, ApJ, 469, 194
Motte, F., André, P., & Neri, R. 1998, A&A, 336, 150
Mouschovias, T. 1987, in Physical Processes in Interstellar Clouds, G. E. Morfill & M. Scholer (Dordrecht: D. Reidel), 491
Nagasawa M., 1987, Prog. Theor. Phys., 77, 635
Norman C.A., & Ikeuchi S., 1989, ApJ, 345, 372
Normandeau M., Taylor A.R., & Dewdney P.E., 1996, Nature, 380, 687
Ostriker, E. C., Gammie, C. F., & Stone, J. M. 1998, ApJ, in press
Ostriker J., 1964, Ap.J., 140, 1056
Padman, R. Bence, S., & Richer, J. 1997 in IAU Symposium 182: Herbig-Haro Flows and the Birth of Low Mass Stars, B. Reipurth & C. Bertout, Dordrecht: Kluwer, 123
Patel K., & Pudritz R.E., 1994, ApJ, 424, 688
Pudritz, R. E. 1990, ApJ, 350, 195
Pudritz, R. E., McLaughlin, D. E., & Ouyed, R. 1996, in Computational Astrophysics: Proceedings of the 12th Kingston Meeting, D. A. Clarke and M. J. West (San Francisco: ASP), 117
Rand, R. J. 1993, ApJ, 410, 68
Scoville, N. Z., & Sanders, D. B. 1987, in Interstellar Processes, D. J. Hollenbach & H. A. Thronson Jr. (Dordrecht: Kluwer), 21
Schleuning, D. A. 1998, ApJ, 493, 811
Shu, F. H. 1977, ApJ, 214, 488
Shu, F. H., Adams, F. C., & Lizano, S. 1987, ARA&A, 25, 23
Shu, F. H., et al. 1999, in Protostars and Planets IV, A. P. Boss, S. S. Russell, & V. Mannings (Tucson, Univ. Arizona Press), in press
Tomisaka K., 1998, MNRAS, 298, 797
van den Bergh S., & Tammann G.A., 1991, ARA&A, 29, 363
Vázquez-Semadeni, E., Passot, T., & Pouquet, A. 1995, ApJ, 441, 702
Zinnecker, H., McCaughrean, M. J., & Wilking, B. A. 1993, in Protostars & Planets III, E. H. Levy & J. I. Lunine (Tucson: Univ. of Arizona Press), 429
# HIGH $`k_T`$ MESON PRODUCTION: A DIFFERENT WAY TO PROBE HADRON STRUCTURE
Invited talk given at the Electron Polarized Ion Collider Workshop (EPIC99), at the Indiana University Cyclotron Facility, Bloomington, Indiana, USA, 8–11 April 1999.
## 1 Semi-Exclusive Processes as Probes of Hadron Structure
Information about hadron structure, in the form of distribution functions or quark wave functions, classically comes from deep inelastic scattering, Drell-Yan processes, or coincident electroproduction. What we shall study here is the possibility of getting the same kind of information from photoproduction of hard (which means high transverse momentum) pions, or from non-coincident electroproduction of the same. The production can proceed by a number of processes, including direct pion production, direct photon interactions followed by parton fragmentation, resolved photon processes, and soft processes. The first of these, direct pion production, can also be called short-distance or isolated pion production. In the following, we will discuss how the first two of these processes can give hadronic distribution function and wave function information. Information can also come from resolved photon processes in the right circumstances, but we shall not pursue them here. Soft processes are from the present viewpoint an annoyance, but one we need to discuss and estimate the size of. All the processes will be defined and discussed below.
To begin being more explicit, the semi-exclusive process we will discuss is
$$\gamma +A\to M+X,$$
(1)
where $`A`$ is the target and $`M`$ is a meson, here the pion. The process is perturbative because of the high transverse momentum of the pion, not because of the high $`Q^2`$ of the photon. Our considerations will also apply to electroproduction,
$$e+A\to M+X$$
(2)
if the final electron is not seen. In such a case, the exchanged photon is nearly on shell, and we use the Weizsäcker-Williams equivalent photon approximation to relate the electron and photon cross sections,
$$d\sigma (eA\to MX)=\int dE_\gamma N(E_\gamma )d\sigma (\gamma A\to MX),$$
(3)
where the number distribution of photons accompanying the electron is a well known function.
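For orientation, one commonly quoted leading-logarithm form of this photon number distribution (quoted here only as a reminder, not necessarily the exact form used in the calculations reported here, and with the $`Q^2`$ limits fixed by the experimental acceptance) is
$$N(E_\gamma )dE_\gamma \approx \frac{\alpha }{2\pi }\frac{1+(1-y)^2}{y}\mathrm{ln}\frac{Q_{\mathrm{max}}^2}{Q_{\mathrm{min}}^2}dy,\qquad y=\frac{E_\gamma }{E_e}.$$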
In the following section, we will describe the subprocesses that contribute to hard pion production, and in the subsequent section display some results. Section 4 will be a short summary.
## 2 The Subprocesses
### 2.1 At the Highest $`k_T`$
At the highest possible transverse momenta, observed pions are directly produced at short range via a perturbative QCD (pQCD) calculable process. Two out of the four lowest order diagrams are shown in Fig. 1. The pion produced this way is kinematically isolated rather than part of a jet, and may be seen either by making an isolated pion cut or by having some faith in the calculation and going to a kinematic region where this process dominates the others. Although this process is higher twist, at the highest transverse momenta its cross section falls less quickly than that of the competition, and we will show plots indicating the kinematics where it can be observed.
The subprocess cross section for direct or short-distance pion production is
$$\frac{d\widehat{\sigma }}{dt}(\gamma q\to \pi ^\pm q^{\prime })=\frac{128\pi ^2\alpha \alpha _s^2}{27(-t)\widehat{s}^2}I_\pi ^2\left(\frac{e_q}{\widehat{s}}+\frac{e_q^{\prime }}{\widehat{u}}\right)^2\left[\widehat{s}^2+\widehat{u}^2+\lambda h(\widehat{s}^2-\widehat{u}^2)\right],$$
(4)
where $`\widehat{s}`$, $`\widehat{t}=t`$, and $`\widehat{u}`$ are the subprocess Mandelstam variables; $`\lambda `$ and $`h`$ are the helicities of the photon and target quark, respectively; and $`I_\pi `$ is an integral related to the pion wave function at the origin in coordinate space,
$$I_\pi =\int \frac{dy_1}{y_1}\varphi _\pi (y_1,\mu ^2).$$
(5)
In the last equation, $`\varphi _\pi `$ is the distribution amplitude of the pion, and describes the quark-antiquark part of the pion as a parallel moving pair with momentum fractions $`y_i`$. It is normalized through the rate for $`\pi ^\pm \to \mu \nu `$, and for example,
$$\varphi _\pi =\frac{f_\pi }{2\sqrt{3}}6y_1(1-y_1)$$
(6)
for the distribution amplitude called “asymptotic” and for $`f_\pi \approx 93`$ MeV. Overall, of course,
$$\frac{d\sigma }{dxdt}(\gamma A\to \pi X)=\underset{q}{\sum }G_{q/A}(x,\mu ^2)\frac{d\widehat{\sigma }}{dt}(\gamma q\to \pi ^\pm q^{\prime }),$$
(7)
where $`G_{q/A}(x,\mu ^2)`$ is the number distribution for quarks of flavor $`q`$ in target $`A`$ with momentum fraction $`x`$ at renormalization scale $`\mu `$.
Let us note a number of points about direct pion production.
$`\bullet `$ For the photoproduction case, at least, the momentum fraction of the struck quark is determined from experimental observables. This is like the situation in deep inelastic scattering, where the experimenter can measure $`x\equiv Q^2/2m_N\nu `$ and the theorist can prove that this $`x`$ is the same as the momentum fraction of the struck quark, for high $`Q`$ and $`\nu `$. For the present case, define the momenta
$$\gamma (q)+A(p)\to \pi (k)+X,$$
(8)
and then the Mandelstam variables for the overall process,
$$s=(p+q)^2;\quad t=(q-k)^2;\quad \mathrm{and}\quad u=(p-k)^2.$$
(9)
Each of the Mandelstam variables is an observable, and the ratio
$$x=\frac{-t}{s+u}$$
(10)
is the momentum fraction of the struck quark. We will let the reader prove this.
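A sketch of the promised argument, using massless parton kinematics with the struck quark carrying momentum $`xp`$, the real photon having $`q^2=0`$, and the target mass neglected: the subprocess invariants are
$$\widehat{s}=(xp+q)^2\approx 2xp\cdot q\approx xs,\qquad \widehat{u}=(xp-k)^2\approx -2xp\cdot k\approx xu,\qquad \widehat{t}=(q-k)^2=t,$$
and $`\widehat{s}+\widehat{t}+\widehat{u}=0`$ for massless $`2\to 2`$ kinematics, so $`xs+t+xu=0`$ and $`x=-t/(s+u)`$ as claimed.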
$`\bullet `$ The gluon involved in direct pion production is well off shell. We will illustrate this by comparison to the pion electromagnetic form factor. For the gluon in Fig. 1(right),
$$q_G^2=(xp-y_1k)^2=y_1xu.$$
(11)
(The gluon in Fig. 1(left) is farther off shell.) To get a number, take $`E_\gamma =100`$ GeV and $`90^{\circ }`$ in the center of mass. Then $`u\approx -95`$ GeV<sup>2</sup>, and using $`y_1\approx 1/3`$ and $`x\approx 1/2`$ gives
$$q_G^2\approx -15\mathrm{GeV}^2.$$
(12)
In a calculation of $`F_\pi `$,
$$q_G^2=y_1y_2^{\prime }q^2\approx \frac{1}{9}q^2,$$
(13)
where $`y_1`$ and $`y_2^{\prime }`$ are the momentum fractions, in the incoming and outgoing pion, of the quark that does not absorb the photon. Hence to match the above direct pion production kinematics requires measuring $`F_\pi `$ at a momentum transfer $`|q^2|=135`$ GeV<sup>2</sup>. We come much closer to asymptopia in direct pion production than in a thinkable pion form factor measurement!
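The arithmetic behind this comparison is easy to reproduce; the following sketch (with the representative values $`y_1\approx 1/3`$ and $`x\approx 1/2`$ taken from the text) is included only as a numerical check:

```python
# Arithmetic check of the gluon-virtuality comparison quoted above.
m_N = 0.938                        # nucleon mass, GeV
E_gamma = 100.0                    # fixed-target photon energy, GeV
s = m_N**2 + 2.0 * m_N * E_gamma   # overall invariant s, GeV^2 (about 189)
u = -(s - m_N**2) / 2.0            # 90 deg in the c.m., final masses neglected (about -94)
y1, x = 1.0 / 3.0, 0.5
q_G2 = y1 * x * u                  # gluon virtuality in direct pion production (about -16)
q2_needed = 9.0 * abs(q_G2)        # |q^2| giving the same virtuality via q_G^2 ~ q^2/9 (about 140)
print(round(s, 1), round(u, 1), round(q_G2, 1), round(q2_needed, 1))
```

The small difference from the 135 GeV<sup>2</sup> quoted above is just the rounding of the intermediate 15 GeV<sup>2</sup>.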
$`\bullet `$ Without polarization, we can measure $`I_\pi `$, given enough trust in the other parts of the calculation. This $`I_\pi `$ is precisely the same as the $`I_\pi `$ in both $`\gamma ^{\ast }\gamma \to \pi ^0`$ (which is measured in $`ee\to ee\pi ^0`$) and $`e\pi ^\pm \to e\pi ^\pm `$ (which gives the pion form factor $`F_\pi `$). Presently, the experimental results for $`\gamma ^{\ast }\gamma \to \pi ^0`$ agree with the theoretical results using the asymptotic distribution amplitude mentioned earlier, but the results for $`F_\pi `$ disagree with the same. Thus there is room for a third process in measuring $`I_\pi `$.
$`\bullet `$ We also have polarization sensitivity in direct pion production. For $`\pi ^+`$ production at high $`x`$,
$$A_{LL}\equiv \frac{\sigma _{R+}-\sigma _{L+}}{\sigma _{R+}+\sigma _{L+}}=\frac{s^2-u^2}{s^2+u^2}\frac{\mathrm{\Delta }u(x)}{u(x)}$$
(14)
where $`R`$ and $`L`$ refer to the polarization of the photon, and $`+`$ refers to the target, say a proton, polarization. Also, inside a $`+`$ helicity proton the quarks could have either helicity, and
$$\mathrm{\Delta }u(x)\equiv u_+(x)-u_{-}(x).$$
(15)
The large $`x`$ behavior of both $`d(x)/u(x)`$ and $`\mathrm{\Delta }d(x)/\mathrm{\Delta }u(x)`$ is of current interest. Most fits to the data have the down quarks disappearing relative to the up quarks at high $`x`$, in contrast to pQCD, which has definite non-zero predictions for both of the ratios in the previous sentence. Recent improved work on extracting neutron data from deuteron targets has tended to support the pQCD predictions.
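The structure of Eq. (14) can be read off from Eq. (4): in the limit where a single quark flavor dominates (the up quark for $`\pi ^+`$ at high $`x`$), the charge factor is common to numerator and denominator and cancels, leaving (a sketch, with $`h=\pm 1`$ the helicity of the struck quark inside the $`+`$ helicity proton)
$$A_{LL}=\frac{(\widehat{s}^2-\widehat{u}^2)\left[u_+(x)-u_{-}(x)\right]}{(\widehat{s}^2+\widehat{u}^2)\left[u_+(x)+u_{-}(x)\right]}=\frac{s^2-u^2}{s^2+u^2}\frac{\mathrm{\Delta }u(x)}{u(x)},$$
where the last step uses $`\widehat{s}=xs`$ and $`\widehat{u}=xu`$, so that the factors of $`x^2`$ cancel at fixed $`x`$.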
There is some data already on $`A_{LL}`$, from SLAC End Station A, which we shall show when we have discussed some of the other processes that can produce pions. However, the reader who has gotten this far should get to see plots that show, at least by calculation, that there is a non-empty region where direct or short-range pion production can be seen. To this end, Fig. 2 shows the differential cross section for high transverse momentum $`\pi ^+`$ electroproduction for two different kinematics. The leftmost figure is for a SLAC energy, 50 GeV incoming electrons, with the pion emerging at 5.5° in the lab. It shows that above about 27 GeV total pion momentum or 2.6 GeV transverse momentum, direct (short distance, isolated) pion production exceeds its competition. The rightmost plot is tuned to what I understand is a discussed possibility for EPIC, namely 4 GeV electrons colliding with 40 GeV protons, with the pions emerging at 90° in the lab. Again, the direct pion process dominates at high enough momentum, although this time the crossover point is higher and the crossover cross section lower.
### 2.2 Moderate $`k_T`$
At moderate transverse momentum, the generally dominant process is still a direct interaction, in the sense that the photon interacts directly with constituents of the target, but the pion is produced not directly at short range but rather at long distance by the fragmentation of some parton. Many authors refer to this as the direct process; others of us are in the habit of calling it the fragmentation process. The main subprocesses are called the Compton process and photon-gluon fusion, and one example of each is shown in Fig. 3.
A key feature of hard pion photoproduction via the fragmentation process is that the target gluons are involved in lowest order. This stands in contrast to deep inelastic scattering, Drell-Yan processes, or coincident electroproduction, where target gluons affect the cross sections only in next-to-leading order (NLO). These NLO effects can be significant enough with precise data to give a good determination of the gluon distribution, and indeed the unpolarized gluon distribution $`g(x)`$ has been determined this way. However, for the polarized distribution $`\mathrm{\Delta }g(x)`$, the situation is unsettled. Fig. 4 shows a number of different $`\mathrm{\Delta }g(x)`$, normalized to a common $`g(x)`$, that have been derived from analyses of NLO effects and that all purport to fit the data. There is clearly a need for additional information, and photon-gluon fusion could supply it. Photon-gluon fusion often gives 30–50% of the cross section for the fragmentation process, and the polarization asymmetry is as large as can be in magnitude,
$$\widehat{A}_{LL}(\gamma g\to q\overline{q})=-100\%.$$
(16)
Typically for the Compton process, $`\widehat{A}_{LL}(\gamma q\to gq)\approx 1/2`$. To use the fragmentation process we need to have a significant region where that process dominates, and we need to know the sensitivity of the measured polarization asymmetry to the different plausible models for $`\mathrm{\Delta }g(x)`$. EPIC is the right energy to give a significant region where the fragmentation process dominates, as may be seen from the right hand part of Fig. 2. The sensitivity is also good, but we shall put off showing the $`A_{LL}`$ plots until we discuss the soft processes.
We should also note that the NLO calculations for the fragmentation process have been done also for the polarized case, though our plots are based on LO. For direct pion production, NLO calculations are not presently completed.
### 2.3 Resolved Photon Processes
The photon may split into hadronic matter before interacting with the target. If it splits into a quark anti-quark pair that are close together, the splitting can be modeled perturbatively or quasi-perturbatively, and we call it a resolved photon process. Perturbative QCD calculations of the entire process can ensue, and a typical diagram is shown in the left hand part of Fig. 5. Though resolved photon processes are crucial at HERA energies, they are never dominant at the energies under discussion here, and we say no more about them.
### 2.4 Soft Processes
This is the totally non-perturbative part of the calculation, whose size can be estimated by connecting it to hadronic cross sections. The photon may turn into hadronic matter, such as $`\gamma \to q\overline{q}+\dots `$ with a wide spatial separation. It can be represented as photons turning into vector mesons. A picture is shown on the right of Fig. 5.
We want a reliable approximation to the non-perturbative cross section so we can say where perturbative contributions dominate and where they do not. Briefly, what we have done to get such an approximation is to start with the cross section, given as
$$d\sigma (\gamma A\to \pi X)=\underset{V}{\sum }\frac{\alpha }{\alpha _V}d\sigma (V+A\to \pi X)+\mathrm{non}\mathrm{-}\mathrm{VMD},$$
(17)
where the sum is over vector mesons $`V`$, $`\alpha =e^2/4\pi `$, and $`\alpha _V=f_V^2/4\pi `$, with the photon-vector meson vertex in Fig. 5 (right side) given as $`em_V^2/f_V`$. We can get, for example, $`f_\rho `$ from the decay $`\rho \to e^+e^{-}`$.
Contributions from the $`\rho ^{\prime }`$ and other excited $`\rho `$ mesons are compensated by changing $`\alpha _V`$ into $`\alpha _V^{eff}`$, which is about 20% higher. Including the $`\omega `$ and $`\varphi `$ increases the result by 33%, according to SU(3). Now we “just” need the cross section for $`\rho ^0A\to \pi ^+X`$. Lacking direct data, we approximate it by
$$d\sigma (\rho ^0p\to \pi ^+X)\approx d\sigma (\pi ^+p\to \pi ^0X)\approx 1.3d\sigma (\pi ^+p\to \pi ^{-}X).$$
(18)
Part of this sequence is backed up by data of O’Neill et al. Then we used data and data reductions of Bosetti et al. and Beier et al. to get a parameterized fit to the last cross section. One can compare what we did to the Regge fit for soft processes of T. Sjöstrand et al., which is implemented in PYTHIA.
We took the soft processes to be polarization insensitive. This agrees with a recent Regge analysis of Manayenkov.
## 3 Results
Results for the unpolarized cross section have already been displayed in Fig. 2. The soft VMD process is the most important out to transverse momenta of about 2 GeV. Above this, at SLAC energies, one almost immediately enters a region where direct pion production is the main process. At planned EPIC energies, however, there is a long region where the fragmentation process dominates, and this can be of use in studying $`\mathrm{\Delta }g`$.
Most interesting may be the calculations of $`E`$ or $`A_{LL}`$, together with the recent data from SLAC. (Here $`E`$ is just old notation for $`A_{LL}`$. Barker et al. in 1975 listed all measurable asymmetries in pion photoproduction, and what we call $`A_{LL}`$ was the fifth on their list; $`E`$ is the fifth letter of the alphabet.) Fig. 6 shows the calculated $`A_{LL}`$ for both $`\pi ^{-}`$ and $`\pi ^+`$ off proton targets for three different parton distribution models. Although the fragmentation process is not the crucial one here, we should mention that mostly we used our own fragmentation functions, and that the results using BKK are not very different. Neither set of fragmentation functions agrees well with the most recent HERMES data for unfavored vs. favored fragmentation functions, and the one curve labeled “newfrag” is calculated with fragmentation functions that agree better with that data (assuming that data should be explained by simple fragmentation alone).
Below about 20 GeV total pion momentum, the soft process dominates and the data is indeed well described by supposing the soft processes have no polarization asymmetry. Above that, the asymmetry is calculated in pQCD, and the difference among the results for the different sets of parton distributions is quite large for the $`\pi ^{-}`$.
The data of Anthony et al. is also shown. Presently most, though not all, of the data is in the region where the soft processes dominate. The data is already interesting. Further data at even higher pion momenta would be even more interesting, especially for the $`\pi ^{-}`$. Large momentum corresponds to $`x\to 1`$ for the struck quark, and pQCD predicts that the quarks are 100% polarized in this limit. Only the parton distributions labeled “BBS” are in tune with the pQCD prediction, and for large momentum they even predict a different sign for $`A_{LL}`$ for the $`\pi ^{-}`$. The experiment also has data for deuteron targets, and the calculated results plotted with the data for that case may be examined as well.
Regarding EPIC, there is the long region where the fragmentation process dominates, and we would like to know how sensitive the possible measurements of $`A_{LL}`$ are to the different models for $`\mathrm{\Delta }g`$. To this end, we present in Fig. 7 the results for $`A_{LL}`$ for one set of quark distributions and 5 different distributions for $`\mathrm{\Delta }g`$. The quark distributions and unpolarized gluon distribution in each case are those of GRSV. There are 6 curves on each figure. One of them is a benchmark, which was calculated with $`\mathrm{\Delta }g`$ set to zero. The other curves use the $`\mathrm{\Delta }g`$ from the indicated distribution. There is a fair spread in the results, especially for the $`\pi ^{-}`$, where photon-gluon fusion gives a larger fraction of the cross section. Thus, one could adjudicate among the polarized gluon distribution models.
## 4 Summary
Hard, meaning high transverse momentum, semiexclusive processes such as $`\gamma p\to \pi X`$ provide a different way to probe parton distributions.
There are several perturbative processes that contribute, which we have called the direct (or isolated or short distance) pion production process, the fragmentation process, and the resolved photon process. All are calculable. They give us new ways to measure aspects of the pion wave function, and quark and gluon distributions, especially $`\mathrm{\Delta }q`$ and $`\mathrm{\Delta }g`$. The soft processes can be estimated and avoided if the transverse momentum is greater than about 2 GeV. EPIC is projected to have a wide window in which the fragmentation process dominates, which should make it excellent, in particular, for measuring $`\mathrm{\Delta }g`$.
## Acknowledgments
My work on this subject has been done with Andrei Afanasev, Chris Wahlquist, and A. B. Wakely and I thank them for pleasant collaborations. I have also benefited from talking to and reading the work of many authors and apologize to those I have not explicitly cited. I thank the NSF for support under grants PHY-9600415 and PHY-9900657.
# Experimental Generation and Observation of Intrinsic Localized Spin Wave Modes in an Antiferromagnet
## Abstract
By driving the lowest frequency antiferromagnetic resonance of the quasi-1-D biaxial antiferromagnet $`(\mathrm{C}_2\mathrm{H}_5\mathrm{NH}_3)_2\mathrm{CuCl}_4`$ into an unstable region with a microwave pulse, intrinsic localized spin waves have been generated and detected in the spin wave gap. These findings are consistent with the prediction that nonlinearity plus lattice discreteness can lead to localized excitations with dimensions comparable to the lattice constant.
Although solitons continue to play an important role in condensed matter physics, in the last decade it was recognized that nonlinearity plus lattice discreteness can lead to a different class of excitations with dimensions comparable to the lattice constant. Such intrinsic localized modes (ILMs) have been identified in a variety of classical molecular dynamics simulations and with macroscopic mechanical and electrical models, all of which ignore the possible role of quantum mechanics. Some effort has gone into identifying specific condensed matter signatures as evidence of ILM production, but all require intricate arguments: these include far infrared absorption, radiation ionization tracks, the temperature dependent Mössbauer effect and resonant Raman scattering. As yet, evidence of externally generated ILMs in a lattice of atomic dimensions is missing. The large amplitude modulational instability of an antiferromagnetic resonance (AFMR) for some antiferromagnets has been suggested as a mechanism for the generation of intrinsic localized spin wave modes (ILSMs), and in this letter we describe the experimental observation and control of such nanoscale excitations.
Because the driving field necessary to create ILMs via the modulational instability scales with the frequency of the antiferromagnetic resonance, $`\omega _{\mathrm{AFMR}}`$, $`(\mathrm{C}_2\mathrm{H}_5\mathrm{NH}_3)_2\mathrm{CuCl}_4`$ with the lowest frequency resonance at $`\omega _{\mathrm{AFMR}}=1.5\mathrm{GHz}`$ was chosen for this first study. This antiferromagnet, often referred to as $`\mathrm{C}(2)\mathrm{CuCl}_4`$, is a layered organic material with a strong ferromagnetic coupling of the magnetic $`\mathrm{Cu}^{2+}`$ ions within a layer and with a weak antiferromagnetic coupling between these layers. Because of this weak interlayer coupling the total spin in each layer can be represented by a classical one with respect to describing the lowest frequency mode dynamics. This spin system with its biaxial anisotropy is described in more detail in Ref. .
The $`\mathrm{C}(2)\mathrm{CuCl}_4`$ single crystals were grown from aqueous solution of ethylamine hydrochloride and copper (II) chloride in a closed vessel by slowly decreasing the temperature . For the measurement, platelets with well defined surfaces and a typical dimension of $`3\times 3\times 0.5\mathrm{mm}^3`$ were chosen.
The experimental setup is shown schematically in Fig. 1(a). The first oscillator provides the high power pulse and the second oscillator the probe beam. Because the expected experimental signature of ILSM generation after a strong microwave pulse near $`\omega _{\mathrm{AFMR}}`$ is the breakup of the AFMR into a broad band, the generation/detection system is tailored to produce a large AC field at the driving frequency $`\omega _{\mathrm{excite}}`$ and a high sensitivity over a broad band below but close to $`\omega _{\mathrm{excite}}`$. A single $`3\mathrm{mm}`$ diameter loop of copper wire is used as a non-resonant antenna for the excitation and pickup of the broad band signal in reflection. A pump signal of up to $`100\mathrm{W}`$ can be obtained from the oscillator and solid state amplifier. To create short pulses a fast GaAs switch is used in front of the amplifier. The signal of a second, tunable oscillator is overlaid by means of a directional coupler. With a second fast switch, synchronized by a digital delay generator, the reflected signal from the pump pulse is suppressed. The weak reflected signal from the second oscillator passes through two low noise amplifiers and is detected with a spectrum analyzer which was used as a variable bandwidth detector locked to the frequency of the probe oscillator. The minimum noise level of the system is $`-120\mathrm{dBm}`$ at $`100\mathrm{kHz}`$ bandwidth. In this measurement setup the absorption can be obtained as a function of time and frequency by first measuring the time dependent absorption at the frequency of the second oscillator, and then scanning this oscillator frequency during subsequent pump pulses. The time resolution is limited to $`20\mu \mathrm{s}`$ by the spectrum analyzer, and the sensitivity, by the flatness of the exciting loop impedance. Care was taken not to saturate any electronic circuits, as this could produce an artificial nonlinear response. The exciting coil and sample are immersed in pumped liquid helium at $`1.2\mathrm{K}`$.
To provide a road map to the experimental data-taking procedure a schematic view of the power and time dependence of the AFMR is shown in Fig. 1 (b). The AFMR frequency before the pulse, $`t<0`$, is represented by the horizontal line. During the high power driving pulse (shaded area) the detection electronics is blocked. After the trailing edge ($`t=0`$) of the driving pulse the spin wave is highly excited and its frequency is decreased due to the intrinsic nonlinearity of the spin Hamiltonian. From this state the system relaxes back to equilibrium with the longitudinal relaxation time T<sub>1</sub>. The absorption by the spin system can be examined for different power levels of the driving pulse \[lines labeled A … F in Fig. 1(b)\] or for different time delays at a fixed power \[dotted lines in Fig. 1(b)\] or as a function of the driving pulse length. Measurements varying all of these experimental parameters have been carried out and are described below.
Figure 2 shows the absorption spectra $`20\mu \mathrm{s}`$ ($`\gg \mathrm{T}_2`$ of the AFMR) after the trailing end of a pump pulse for six different powers. The power of the $`400\mu \mathrm{s}`$ long driving pulse varied from $`50\mathrm{mW}`$ to $`1.6\mathrm{W}`$. On the low frequency side of each spectrum are two magnetostatic volume modes, while the shoulder on the high frequency side is a surface mode. With increasing power levels the AFMR first collapses into a broad asymmetric shape and then at still higher powers returns to a sharp AFMR. Note that the magnetostatic modes do not show the same behavior as the AFMR. These experimental results demonstrate that the AFMR is indeed unstable with increasing amplitude but only up to a specific amplitude, while for still larger amplitudes the AFMR uniform mode again becomes stable.
The results in Fig. 2 illustrate some of the criteria for the creation of ILSMs. First, there is a minimum transverse amplitude above which the extended mode becomes unstable, but then, surprisingly, there is also a maximum amplitude associated with this instability of the extended mode. This was not expected from our molecular dynamics simulations. Next, the frequency interval of the instability region is observed to become larger as the frequency shift $`\omega _{\mathrm{excite}}-\omega _{\mathrm{AFMR}}`$ becomes larger. For optimum conditions, localization could be observed for powers as low as $`50\mathrm{mW}`$ in a $`400\mu \mathrm{s}`$ driving pulse. Varying the pulse width from $`50\mu \mathrm{s}`$ to $`400\mu \mathrm{s}`$ while keeping the energy in the pulse fixed does not change these criteria.
The narrow linewidth of curve F shown in Fig. 2 after the strongest driving pulse is a handy test to exclude any temperature related effect, such as the heating of the crystal by the intense microwave pulses. From low power linear measurements both the temperature dependence of the AFMR frequency, which goes to zero at the Néel temperature $`\mathrm{T}_\mathrm{N}=10.2\mathrm{K}`$, and its linewidth are known. If the frequency shift of $`75\mathrm{MHz}`$ between the low power trace (dashed line in Fig. 2) and high power trace were caused by an increased temperature in the crystal, the line width should nearly double. This is not observed so temperature effects are not the source of the unusual lineshape results.
The frequency response at several delay times for $`200\mathrm{mW}`$ excitation in a $`200\mu \mathrm{s}`$ pulse is shown in Fig. 3. The five traces at $`20\mu \mathrm{s}`$, $`220\mu \mathrm{s}`$, $`420\mu \mathrm{s}`$, $`620\mu \mathrm{s}`$ and $`6\mathrm{ms}`$ show the evolution of this absorption feature from a broad resonance back to an AFMR at times smaller than $`\mathrm{T}_1=1.50\mathrm{ms}`$. For times larger than $`2\mathrm{T}_1`$ only the AFMR survives. The radical difference between the line shapes at $`t=20\mu \mathrm{s}`$ and $`t=420\mu \mathrm{s}`$ is consistent with the idea that the breakup into ILSMs is only possible when the effects of nonlinearity and dispersion are much stronger than the dissipation effect .
MD simulations can be used to demonstrate that the extended spin wave is unstable against breakup into localized modes for the specific amplitude of the extended wave produced in our experiments. Details on molecular dynamics procedures for simulating localized spin waves in $`\mathrm{C}(2)\mathrm{CuCl}_4`$ can be found in Ref. . The solid line in Fig. 4(a) shows the calculated frequency dependence of the AFMR as a function of its transverse amplitude $`S_y`$ in the hard axis direction. To compare this simulation to our experiment, we extract from the shift of $`75\mathrm{MHz}`$ or $`0.05\omega _{\mathrm{AFMR}}`$ of trace (F) in Fig. 2 the spin wave amplitude $`S_y=0.06`$ as marked with (F) in Fig. 4(a). Correspondingly the points A to F superimposed on this curve identify different AFMR frequency shifts observed in the high power measurements of Fig. 2. To test for the instability threshold the time dependent evolution of the energy density for a simulation of a $`250`$ spin antiferromagnetic chain is calculated, starting from an extended wave with fixed transverse amplitude plus random noise $`\delta S_n=0.0025`$. For the whole range (A) to (F) covered by the experiment the extended spin wave is unstable and breaks up into localized modes with widths extending over about $`10`$ lattice constants. Thus for these microwave powers the AFMR is driven into the interesting nonlinear region.
To compare the absorption spectrum $`A(\omega )\propto \omega \chi ^{\prime \prime }`$ measured in the experiment with results from molecular dynamics simulations, the imaginary part of the dynamic magnetic susceptibility is calculated using the Kubo expression, where the absorption is proportional to the Fourier transform of the auto-correlation function of the net magnetic moment, $`M_y(t)`$:
$$\chi _y^{\prime \prime }(\omega )\propto \omega \int _0^{\infty }dt\langle M_y(t_0+t)M_y(t)\rangle \mathrm{exp}(i\omega t).$$
(1)
For our special case the simulations are started with an extended wave of a given amplitude plus some small amount of random noise. The extended wave is unstable against breakup into localized excitations and transforms after approximately $`100`$ periods of the antiferromagnetic resonance into a broad spectrum of ILMs. From $`200`$ to $`1000\tau _{\mathrm{AFMR}}`$ the evolution of the net magnetic moment is recorded and then via Eq. 1 the imaginary part of the dynamic magnetic susceptibility is found.
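As an illustration of how Eq. (1) is used in practice, a minimal numerical sketch follows (our own schematic version, assuming a uniformly sampled record of $`M_y(t)`$ and using the Wiener-Khinchin relation; the published spectra are additionally averaged over several runs):

```python
import numpy as np

def absorption_spectrum(M_y, dt):
    """Schematic estimate of A(omega) ~ omega * chi''(omega) from a recorded
    net transverse moment M_y(t), following Eq. (1)."""
    M = np.asarray(M_y) - np.mean(M_y)
    n = len(M)
    # autocorrelation <M_y(t0 + t) M_y(t0)> via FFT, zero-padded to avoid wrap-around
    F = np.fft.rfft(M, 2 * n)
    acf = np.fft.irfft(F * np.conj(F))[:n] / n
    # one-sided Fourier transform of the autocorrelation (cosine transform)
    omega = 2.0 * np.pi * np.fft.rfftfreq(n, dt)
    chi_imag = np.fft.rfft(acf).real * dt
    return omega, omega * chi_imag
```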
The resulting calculated absorption spectra for a chain of $`1000`$ spins at three different powers are shown in Fig. 4(b). To generate these spectra the starting transverse amplitudes of the AFMR are $`S_y=0.01`$, $`0.03`$, and $`0.06`$, respectively. Each curve is averaged over 6 simulation runs to remove arbitrary spikes associated with single long-living ILSMs. The broad wing of the asymmetric band is due to the statistical distribution of the ILMs with different degrees of localization and thus different frequencies. To compare the simulations on the long chain with the spectra obtained in the experiment, the frequency axis in Fig. 4(b) was scaled to the same width relative to $`\omega _{\mathrm{AFMR}}`$ as given in Figs. 2 and 3. Both the shape and width of the experimental onset results are in good agreement with the theoretical simulations. Moreover, the absorption spectra shown in Fig. 4(b) are obtained starting from amplitudes of the extended wave which are in the right range for traces A to F.
The disappearance of the instability at large amplitudes is an experimental feature not represented by curve $`\gamma `$ in these model calculations. Since the values of the relaxation times taken from the experiment are $`\mathrm{T}_1=1.50\mathrm{ms}`$ and $`\mathrm{T}_2\approx 0.1\mu \mathrm{s}`$, with the latter depending on the surface quality of the samples, T<sub>2</sub> is much smaller than the pulse widths used in our experiments, so that energy is transferred to other degenerate spin waves during the microwave pulse. Our experimental results indicate that at the powers where the uniform mode instability occurs this transfer does not influence the positive curvature of the dispersion curve which is required for the instability. At the highest powers shown in Fig. 2 the uniform mode again becomes stable, indicating that the finite wavevector spin waves have become so heavily populated during the microwave pulse that the dispersion curve now has negative curvature at the uniform mode, a condition for stability of the mode at large amplitude. Because it takes a finite time for the uniform mode to break up into ILSMs, both features can appear at intermediate powers such as displayed in trace D in Fig. 2. Here ILSMs are formed during the early part of the pulse while the population in the degenerate modes is still small, and they remain isolated in frequency space above the dispersion curve after the degenerate mode population becomes large, so both features are seen.
In this series of microwave experiments the instability that appears when the lowest lying AFMR of $`\mathrm{C}(2)\mathrm{CuCl}_4`$ is driven to larger amplitudes has been used to generate nonlinear excitations which are localized on a nano-length scale. To distinguish between the ILSMs of classical simulations and the excitations observed in experiment, the latter will be identified as ‘anons’. The hallmark experimental feature of the uniform mode breakup into anons is a broad and asymmetric spectral band below the AFMR. Classical MD simulations on a long 1-D antiferromagnetic spin chain show that both the amplitude of the extended spin wave at which it becomes unstable to breakup into localized modes and the resulting spectral shape of the ILSMs are in good agreement with the microwave results. At still higher experimental powers the AFMR is observed to become stable again, a feature related to the fact that the pump pulse is longer than T<sub>2</sub>.
Simulations have played an important role in identifying the most economical experimental pathway for the detection of anons in real solids. This interplay may be expected to continue since additional MD studies on a $`\mathrm{C}(2)\mathrm{CuCl}_4`$ spin chain in the presence of a magnetic field gradient indicate directed anon transport should be possible.
The authors thank H. Padamsee who provided the microwave amplifier for these experiments and also R. Lai and R. H. Silsbee for helpful discussions. This work is supported by NSF-DMR-9631298. One of the authors (U. T. S.) is supported in part as a Feodor-Lynen scholar by the Alexander von Humboldt-Foundation.
# Perception games, the image understanding and interpretational geometries
## I. Interactive games and their verbalization
### 1.1. Interactive systems and intention fields
###### Definition Definition 1
An interactive system (with $`n`$ interactive controls) is a control system with $`n`$ independent controls coupled with unknown or incompletely known feedbacks (the feedbacks, as well as their couplings with the controls, are of such a complicated nature that they cannot be described completely). An interactive game is a game with interactive controls of each player.
Below we shall consider only deterministic and differential interactive systems. In this case the general interactive system may be written in the form:
$$\dot{\phi }=\mathrm{\Phi }(\phi ,u_1,u_2,\mathrm{},u_n),$$
$`1`$
where $`\phi `$ characterizes the state of the system and $`u_i`$ are the interactive controls:
$$u_i(t)=u_i(u_i^{}(t),[\phi (\tau )]|_{\tau \le t}),$$
i.e. the independent controls $`u_i^{}(t)`$ coupled with the feedbacks on $`[\phi (\tau )]|_{\tau \le t}`$.
###### Proposition
Each interactive system (1) may be transformed to the form (2) below (which is not, however, unique):
$$\dot{\phi }=\stackrel{~}{\mathrm{\Phi }}(\phi ,\xi ),$$
$`2`$
where the magnitude $`\xi `$ (with infinite degrees of freedom as a rule) obeys the equation
$$\dot{\xi }=\mathrm{\Xi }(\xi ,\phi ,\stackrel{~}{u}_1,\stackrel{~}{u}_2,\mathrm{},\stackrel{~}{u}_n),$$
$`3`$
where $`\stackrel{~}{u}_i`$ are the interactive controls of the form $`\stackrel{~}{u}_i(t)=\stackrel{~}{u}_i(u_i^{}(t);\phi (t),\xi (t))`$ (here the dependence of $`\stackrel{~}{u}_i`$ on $`\xi (t)`$ and $`\phi (t)`$ is differential on $`t`$, i.e. the feedbacks are precisely of the form $`\stackrel{~}{u}_i(t)=\stackrel{~}{u}_i(u_i^{}(t);\phi (t),\xi (t),\dot{\phi }(t),\dot{\xi }(t),\ddot{\phi }(t),\ddot{\xi }(t),\mathrm{},\phi ^{(k)}(t),\xi ^{(k)}(t))`$).
###### Remark Remark 1
One may exclude $`\phi (t)`$ from the feedbacks in the interactive controls $`\stackrel{~}{u}_i(t)`$. One may also exclude the derivatives of $`\xi `$ and $`\phi `$ on $`t`$ from the feedbacks.
###### Definition Definition 2
The magnitude $`\xi `$ with its dynamical equations (3) and its contribution into the interactive controls $`\stackrel{~}{u}_i`$ will be called the intention field.
Note that the proposition holds true for interactive games. In practice, the intention fields may often be considered as a field-theoretic description of subconscious individual and collective behavioral reactions. However, they may also be used to account for unknown or incompletely known external influences. Therefore, such an approach is applicable to problems of computer science (e.g. semi-automatically controlled resource distribution) or mathematical economics (e.g. financial games with unknown factors). Interactive games with a differential dependence of the feedbacks are called differential. Thus, the proposition states the possibility of reducing any interactive game to a differential interactive game by introducing additional parameters, the intention fields.
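As a toy illustration of the reduced form (2)–(3), consider the following sketch (our own; the linear dynamics and the tanh feedback are purely illustrative assumptions, not part of the general theory):

```python
import math

def simulate(T=10.0, dt=0.01):
    """One player (n = 1): the observed state phi is driven by an intention
    field xi, and xi is driven by an interactive control u that couples the
    pure control u0(t) with a feedback on the current state."""
    phi, xi, history = 0.0, 0.0, []
    for k in range(int(T / dt)):
        t = k * dt
        u0 = math.sin(t)                    # the player's pure control
        u = u0 + 0.5 * math.tanh(phi - xi)  # interactive control = pure control + feedback
        dxi = -xi + u                       # Eq. (3): intention-field dynamics
        dphi = -0.2 * phi + xi              # Eq. (2): state dynamics driven by xi
        xi, phi = xi + dt * dxi, phi + dt * dphi
        history.append((t, phi, xi, u))
    return history
```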
### 1.2. Some generalizations
The interactive games introduced above may be generalized in the following ways.
The first way, which leads to the indeterminate interactive games, is based on the idea that the pure controls $`u_i^{}(t)`$ and the interactive controls $`u_i(t)`$ need not be related in the way considered above. More generally, one should only postulate that there are some time-independent quantities $`F_\alpha (u_i(t),u_i^{}(t),\phi (t),\mathrm{},\phi ^{(k)}(t))`$ for the independent magnitudes $`u_i(t)`$ and $`u_i^{}(t)`$. Such a claim is evidently weaker than that of Def. 1. For instance, one may consider the inverse dependence of the pure and interactive controls: $`u_i^{}(t)=u_i^{}(u_i(t),\phi (t),\mathrm{},\phi ^{(k)}(t))`$.
The second way, which leads to the coalition interactive games, is based on the idea to consider the games with coalitions of actions and to claim that the interactive controls belong to such coalitions. In this case the evolution equations have the form
$$\dot{\phi }=\mathrm{\Phi }(\phi ,v_1,\mathrm{},v_m),$$
where $`v_i`$ is the interactive control of the $`i`$-th coalition. If the $`i`$-th coalition is defined by the subset $`I_i`$ of all players then
$$v_i=v_i(\phi (t),\mathrm{},\phi ^{(k)}(t),u_j^{}|jI_i).$$
Certainly, the intersections of different sets $`I_i`$ may be non-empty so that any player may belong to several coalitions of actions. Def.1 gives the particular case when $`I_i=\{i\}`$.
The coalition interactive games may be an effective tool for an analysis of the collective decision making in the real coalition games that spread the applicability of the elaborating interactive game theory to the diverse problems of sociology.
### 1.3. Differential interactive games and their $`\epsilon `$–representations
###### Definition Definition 3
The $`\epsilon `$–representation of differential interactive game is a representation of the differential feedbacks in the form
$$u_i(t)=u_i(u_i^{},\phi (t),\mathrm{},\phi ^{(k)}(t);\epsilon _i(t))$$
$`4`$
with the known function $`u_i`$ of all its arguments, where the magnitudes $`\epsilon _i(t)`$ are unknown functions of $`u_i^{}`$ and $`\phi (t)`$ with its higher derivatives:
$$\epsilon _i(t)=\epsilon _i(u_i^{}(t),\phi (t),\dot{\phi }(t),\mathrm{},\phi ^{(k)}(t)).$$
It is interesting to consider several different $`\epsilon `$-representations simultaneously. For such simultaneous $`\epsilon `$-representations with $`\epsilon `$-parameters $`\epsilon _i^{(\alpha )}`$ a crucial role is played by the time-independent relations between them:
$$F_\beta (\epsilon _i^{(1)},\mathrm{},\epsilon _i^{(\alpha )},\mathrm{},\epsilon _i^{(N)};u_i^{},\phi ,\mathrm{},\phi ^{(k)})0,$$
which are called the correlation integrals. Certainly, in practice the correlation integrals are determined a posteriori and, thus they contain an important information on the interactive game. Using the sufficient number of correlation integrals one is able to construct various algebraic structures in analogy to the correlation functions in statistical physics and quantum field theory.
### 1.4. Dialogues as interactive games. The verbalization
Dialogues as psycholinguistic phenomena can be formalized in terms of interactive games. First of all, note that one is able to consider interactive games of discrete time as well as interactive games of continuous time above.
###### Definition Defintion 4A (the naïve definition of dialogues)
The dialogue is a 2-person interactive game of discrete time with intention fields of continuous time.
The states and the controls of a dialogue correspond to the speech whereas the intention fields describe the understanding.
Let us give the formal mathematical definition of dialogues now.
###### Definition Definition 4B (the formal definition of dialogues)
The dialogue is a 2-person interactive game of discrete time of the form
$$\phi _n=\mathrm{\Phi }(\phi _{n1},\stackrel{}{v}_n,\xi (\tau )|t_{n1}\tau t_n).$$
$`5`$
Here $`\phi _n=\phi (t_n)`$ are the states of the system at the moments $`t_n`$ ($`t_0<t_1<t_2<\mathrm{}<t_n<\mathrm{}`$), $`\stackrel{}{v}_n=\stackrel{}{v}(t_n)=(v_1(t_n),v_2(t_n))`$ are the interactive controls at the same moments; $`\xi (\tau )`$ are the intention fields of continuous time with evolution equations
$$\dot{\xi }(t)=\mathrm{\Xi }(\xi (t),\stackrel{}{u}(t)),$$
$`6`$
where $`\stackrel{}{u}(t)=(u_1(t),u_2(t))`$ are continuous interactive controls with $`\epsilon `$–represented couplings of feedbacks:
$$u_i(t)=u_i(u_i^{}(t),\xi (t);\epsilon _i(t)).$$
The states $`\phi _n`$ and the interactive controls $`\stackrel{}{v}_n`$ are certain known functions of the form
$`\phi _n=`$ $`\phi _n(\stackrel{}{\epsilon }(\tau ),\xi (\tau )|t_{n-1}\le \tau \le t_n),`$ $`7`$
$`\stackrel{}{v}_n=`$ $`\stackrel{}{v}_n(\stackrel{}{u}^{}(\tau ),\xi (\tau )|t_{n-1}\le \tau \le t_n).`$
Note that the most nontrivial part of mathematical formalization of dialogues is the claim that the states of the dialogue (which describe a speech) are certain “mean values” of the $`\epsilon `$–parameters of the intention fields (which describe the understanding).
###### Remark Important
The definition of dialogue may be generalized to an arbitrary number of players, and below we shall consider any number $`n`$ of them, e.g. $`n=1`$ or $`n=3`$, though this slightly contradicts the common meaning of the word “dialogue”.
An embedding of dialogues into the interactive game theoretical picture generates the reciprocal problem: how to interpret an arbitrary differential interactive game as a dialogue. Such interpretation will be called the verbalization.
###### Definition Definition 5
A differential interactive game of the form
$$\dot{\phi }(t)=\mathrm{\Phi }(\phi (t),\stackrel{}{u}(t))$$
with $`\epsilon `$–represented couplings of feedbacks
$$u_i(t)=u_i(u_i^{}(t),\phi (t),\dot{\phi }(t),\ddot{\phi }(t),\mathrm{}\phi ^{(k)}(t);\epsilon _i(t))$$
is called verbalizable if there exist an a posteriori partition $`t_0<t_1<t_2<\dots <t_n<\dots `$ and integrodifferential functionals
$`\omega _n`$ $`(\stackrel{}{\epsilon }(\tau ),\phi (\tau )|t_{n-1}\le \tau \le t_n),`$ $`8`$
$`\stackrel{}{v}_n`$ $`(\stackrel{}{u}^{}(\tau ),\phi (\tau )|t_{n-1}\le \tau \le t_n)`$
such that
$$\omega _n=\mathrm{\Omega }(\omega _{n1},v_n;\phi (\tau )|t_{n1}\tau t_n).$$
$`9`$
The verbalizable differential interactive games realize a dialogue in sense of Def.4.
The main heuristic hypothesis is that all differential interactive games “which appear in practice” are verbalizable. The verbalization means that the states of a differential interactive game are interpreted as intention fields of a hidden dialogue and the problem is to describe such dialogue completely. If a differential interactive game is verbalizable one is able to consider many linguistic (e.g. the formal grammar of a related hidden dialogue) or psycholinguistic (e.g. the dynamical correlation of various implications) aspects of it.
During the verbalization it is a problem to determine the moments $`t_i`$. A way to the solution lies in the structure of $`\epsilon `$-representation. Let the space $`E`$ of all admissible values of $`\epsilon `$-parameters be a CW-complex. Then $`t_i`$ are just the moments of transition of the $`\epsilon `$-parameters to a new cell.
## II. Perception games and the image understanding
Let us consider a verbalizable interactive game. We shall suppose for simplicity that a concrete set is finished when some quantity $`F(\omega _n,\phi (t))`$ reaches some critical value $`F_0`$. The game will be called a perception game iff the moments $`t_i`$ are just the moments of finishing of the concrete sets, so the multistage perception game realizes a sequence of sets with initial states coinciding with the final state of the preceding set. Such a construction is not senseless, in contrast to most ordinary games, because the quantity $`F`$ should be recalculated with the new $`\omega `$. Thus, we have the following general definition.
###### Definition Definition 6
The perception game is a multistage verbalizable game (no matter whether finite or infinite) for which the intervals $`[t_i,t_{i+1}]`$ are just the sets. The conditions for their finishing depend only on the current value of $`\phi `$ and the state of $`\omega `$ at the beginning of the set. The initial position of each set is the final position of the preceding one.
Practically, the definition describes the discrete character of the perception and the image understanding. For example, the goal of a concrete set may be to perceive or to understand certain detail of the whole image. Another example is a continuous perception of the moving or changing object.
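A schematic illustration of this multistage structure follows (an illustrative sketch only: the particular update rules, the functional $`F`$, and the threshold in the usage example are hypothetical placeholders, not part of the definition):

```python
def perception_game(phi0, omega0, step_phi, update_omega, F, F0, dt=0.01, max_sets=10):
    """Run successive sets: a set ends when F(omega, phi) reaches the critical
    value F0; the final state of one set is the initial state of the next,
    and omega (hence F) is recalculated at each set boundary."""
    phi, omega, sets = phi0, omega0, []
    for _ in range(max_sets):
        trajectory = [phi]
        while F(omega, phi) < F0:
            phi = step_phi(phi, omega, dt)       # continuous dynamics within the set
            trajectory.append(phi)
        omega = update_omega(omega, trajectory)  # Eq. (9)-type update at the boundary
        sets.append((omega, trajectory))
    return sets

# toy usage: phi drifts toward omega; a set ends when it gets close enough,
# after which omega moves on to the "next detail"
sets = perception_game(
    phi0=0.0, omega0=1.0,
    step_phi=lambda phi, omega, dt: phi + dt * (omega - phi),
    update_omega=lambda omega, traj: omega + 1.0,
    F=lambda omega, phi: -abs(omega - phi), F0=-0.05)
```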
Note that the definition of perception games is applicable to various forms of perception. However, the most interesting one is visual perception. Besides the numerous problems of human visual perception of reality (as well as of computer vision), there exists a scope of numerous questions about human behaviour in computer modelled worlds, e.g. those constructed by use of the so-called “virtual reality” (VR) technology. There is no evident boundary between them, because we can always interpret the internal space of our representations as some sort of natural “virtual reality” and apply the analysis of perception in VR to real image understanding as well as to the activity of imagination. So one may argue that it is impossible to explain all phenomena of our visual perception of reality without a deep analysis of its peculiarities in computer modelled worlds. An especially crucial role is played by the so-called integrated realities (IR), in which only the channels of some kinds of perception are virtual (e.g. visual) whereas others are real (e.g. tactile, kinesthetic).
The proposed definition allows one to take into account the dialogical character of image understanding and to consider visual perception, image understanding and verbal (and nonverbal) dialogues together. It may be extremely useful for the analysis of collective perception, understanding and controlling processes in dynamical environments: sports, dancing, martial arts, the collective controlling of moving objects, etc.
On the other hand this definition explicates the self-organizing features of human perception, which may be unraveled by the game theoretical analysis.
And, finally, the definition puts a basis for a systematic application of linguistic (e.g. formal grammars) and psycholinguistic methods to image understanding, treated as a verbalizable interactive game, with mathematical rigor.
Also, interpreting perception processes and image understanding as verbalizable interactive games, we obtain an opportunity to adapt some procedures of image understanding to verbalizable interactive games of a different nature, e.g. to verbal dialogues. This may shed light on the processes of generation of subjective figurative representations, which is important for the analysis of the understanding of speech in dialogues.
Traditionally the problems of visual perception are related to geometry (both descriptive and abstract), so it is reasonable to pay attention to the geometrical background for perception videogames and then to combine both geometrical and interactive game theoretical approaches to visual perception and image understanding.
## III. Interpretational geometries, intentional anomalous virtual realities and their interactive game theoretical aspects
### 3.1. Interpretational figures
Geometry described below is related to a class of interactive information systems. Let us call an interactive information system computer graphic (or an interactive information videosystem) if the information stream “computer–user” is organized as a stream of geometric graphical data on a monitor screen; an interactive information system will be called psychoinformation if the information transmitted by the channel “user–computer” is (completely or partially) subconscious. In general, an investigation of interactively controlled (psychoinformation) systems for an experimental and a theoretical explication of possibilities contained in them, which are interesting for mathematical sciences themselves, and of “hidden” abstract mathematical objects, whose observation and analysis are actually and potentially realizable by these possibilities, is an important problem in itself. So below there will be defined the notions of an interpretational figure and its symbolic drawing, which undoubtedly play a key role in the description of a computer–geometric representation of mathematical data in interactive information systems. Below, however, the accents will be focused a bit more on applications to informatics, preserving a general experimental-mathematical view; the interpretational figures (see below) will be used as pointers to droems, and interactive real-time psychoinformation videosystems will be regarded as components of integrated interactive videocognitive systems for accelerated nonverbal cognitive communications.
In interactive information systems mathematical data exist in the form of an interrelation between the geometric internal image (figure) in the subjective space of the observer and the computer-graphic external representation. The latter includes visible (drawings of the figure) and invisible (analytic expressions and algorithms for constructing these images) elements. Identifying geometric images (figures) in the internal space of the observer with computer-graphic representations (visible and invisible elements) is called a translation; in this way the visible object may not be identical with the figure, so that separate visible elements may be considered as modules whose translation is realized independently. The translation is called an interpretation if the translation of separate modules is performed depending on the results of the translation of preceding ones.
###### Definition Definition 7
The figure obtained as a result of interpretation is called an interpretational figure.
Note that the interpretational figure may have no usual formal definition; namely, only if the process of interpretation admits an equivalent process of compilation is the definition of the figure reduced to definitions of its drawings, which is not true in general. So the drawing of an interpretational figure defines only a dynamical “technology of visual perception” but not its “image”; such drawings will be called symbolic.
The computer-geometric description of mathematical data in interactive information systems is closely connected with the concept of anomalous virtual reality.
### 3.2. Intentional anomalous virtual realities
###### Definition Definition 8
(A). Anomalous virtual reality (AVR) in a narrow sense means some system of rules of a nonstandard descriptive geometry adapted for realization on videocomputers (or multisensorial systems of “virtual reality”). Anomalous virtual reality in a wide sense also involves an image in cyberspace formed in accordance with said system of rules. We shall use the term in its narrow sense. (B). Naturalization is the constructing of an AVR from some abstract geometry or physical model. We say that anomalous virtual reality naturalizes the abstract model and the model transcendizes the naturalizing anomalous virtual reality. (C). Visualization is the constructing of certain image or visual dynamics in some anomalous virtual reality (realized by hardware and software of a computer-grafic interface of the concrete videosystem) from the objects of an abstract geometry or processes in a physical model. (D). Anomalous virtual reality, whose objects depend on the observer, is called an intentional anomalous virtual reality (IAVR). The generalized perspective laws for IAVR contain the interactive dynamical equations for the observed objects in addition to standard (geometric) perspective laws. In IAVR the observation process consists of a physical process of observation and a virtual process of intentional governing of the evolution of images in accordance with the dynamical perspective laws.
In intentional anomalous virtual reality (IAVR) that is realized by hardware and software of the computer-graphic interface of the interactive videosystem being geometrically modelled by this IAVR (on the level of descriptive geometry whereas the model transcendizing this IAVR realizes the same on the level of abstract geometry) respectively, the observed objects are demonstrated as connected with the observer who acts on them and determines, or fixes, their observed states so that the objects are thought only as a potentiality of states from the given spectrum whose realization depends also on the observer. The symbolic drawings of interpretational figures may be considered as states of some IAVR.
Note that the mathematical theory of anomalous virtual realities (AVR), including the basic procedures of naturalization and transcending that connect an AVR with abstract geometry, is a specific branch of modern nonclassical descriptive (computer) geometry.
###### Definition Definition 8E
The set of all continuously distributed visual characteristics of the image in an anomalous virtual reality is called an anomalous color space; the elements of the anomalous color space of noncolor nature are called overcolors, and the quantities transcendizing them in an abstract model are called “latent lights”. The set of the generalized perspective laws in a fixed anomalous color space is called a color-perspective system.
### 3.3. Remarks on the interactive game theoretical aspects
Certainly, interpretational geometries may be considered as perception games. An interesting geometrical consequence of such an approach has been proposed.
###### Proposition
There exist models of interpretational geometries in which there are interpretational figures observed only in a multi-user mode.
It seems that this proposition may be regarded as a starting point for future interactions between geometry and interactive game theory, in the sphere of mathematical foundations for the collective perception and image understanding of real objects as well as of objects in computer VR or IR systems.
## IV. Conclusions
Thus, an interactive game-theoretical approach to the description of perception processes has been proposed. A new class of multistage verbalizable interactive games, the perception games, has been introduced. The interactive game-theoretical aspects of interpretational geometries have been clarified, and perspectives for further work have been sketched.
|
no-problem/9905/gr-qc9905063.html
|
ar5iv
|
text
|
# COSMIC ACCELERATION: INHOMOGENEITY VERSUS VACUUM ENERGY<sup>1</sup>

<sup>1</sup> This essay received an “Honorable Mention” in the 1999 Essay Competition of the Gravity Research Foundation.
## 1 INTRODUCTION
Last year, two independent groups , by using Type Ia supernovae as standard candles without evolution effects, were able to extend the Hubble diagram of luminosity distance versus redshift out to a redshift of $`z\lesssim 1`$, implementing a generalized K-correction .
The main conclusion of these works is that the deceleration parameter at present cosmic time, $`q_0`$, is negative, i.e., that the cosmic expansion is accelerating. Moreover, they interpret this conclusion in the framework of the Friedmann (FLRW) models with cosmological constant $`\mathrm{\Lambda }`$ , in which a necessary and sufficient condition for cosmic acceleration (regardless of the value and sign of the intrinsic spatial curvature of the cosmic spaces, provided the null energy condition (NEC) holds) is that $`\mathrm{\Lambda }`$ is positive.
The cosmological constant $`\mathrm{\Lambda }`$ was reinterpreted as a vacuum energy and was used by Guth in the inflationary models. An estimate of this “quantum-mechanical” vacuum energy is at least 120 orders of magnitude higher than the vacuum energy associated with $`\mathrm{\Lambda }`$ as determined by the interpretation of the supernova (SNe Ia) data in the background of FLRW models with $`\mathrm{\Lambda }`$. Nobody knows what the suppression mechanism is, if one exists.
In this essay, I propose an alternative explanation for the measured cosmic acceleration in which, from the beginning, $`\mathrm{\Lambda }`$ and hence the vacuum energy are set to zero. Our starting point will be the relaxation of the essential assumption of the FLRW models, the Cosmological Principle, followed by the consideration of barotropic, locally rotationally symmetric inhomogeneous models without $`\mathrm{\Lambda }`$, in which the acceleration of the congruence of cosmic matter is a sufficient condition for the cosmic acceleration measured with SNe Ia.
Our main “a priori” argument for discarding the Cosmological Principle is Ockham’s razor applied to observational cosmology. Observationally, we can only assert that there is isotropy about our worldline, and this has been tested in different ways, the most important being the measured high degree of isotropy of the cosmic background radiation, CBR. This local isotropy about our worldline, when combined with the Copernican Principle, leads to isotropy about all worldlines (at late times, of different clusters of galaxies and, at early times, of the average motion of a mixture of gas and radiation), and thus to the homogeneity of the 3-dim spacelike hypersurfaces of constant cosmic time and finally to the FLRW models.
However, as Ellis et al. pointed out , if we suspend the Copernican assumption in favour of a direct observational approach, then it turns out that the local isotropy of the CBR is insufficient to force isotropy onto the spacetime geometry, and hence spatial homogeneity onto the 3-dim cosmic hypersurfaces, i.e., to enforce the Cosmological Principle.
Homogeneity of the 3-dim spacelike hypersurfaces has poor observational support. At the large-scale level, we only have data from our past light cone, and testing homogeneity of the 3-dim hypersurfaces at constant cosmic time requires us to know about conditions at great distances at the present cosmic time, whereas what we can observe at great distances is what happened a long time ago. So to test homogeneity of the spacelike cosmic hypersurfaces, we first have to understand how both the spacetime geometry and its matter-energy content evolve. For other critiques of the Cosmological Principle and the FLRW models, see for instance .
## 2 OUR MODEL: BAROTROPIC INHOMOGENEOUS LRS
There is one family of spacetimes in which the Cosmological Principle is relaxed while the observed local isotropy is preserved: the locally rotationally symmetric (LRS) inhomogeneous models. In the family of inhomogeneous LRS spacetimes, the symmetry group is 3-dimensional, just half that of the FLRW models.
In our model, I will assume that the matter part of the Einstein equations has a perfect fluid form. However, we will not consider the dust case, i.e., the Lemaitre-Tolman-Bondi (LTB) models, because then the congruence of matter worldlines would necessarily be geodesic. Instead, I will consider a barotropic equation of state, $`p=p(\varrho )`$ with $`\varrho +p>0`$ (the NEC condition), which allows for an accelerating congruence.
Geometrically, in these LRS models, the coefficients of the spacetime metric depend on two independent variables (a cosmic time and a radial coordinate); if one chooses a comoving system, the metric depends on three non-negative coefficients and reads
$$ds^2=-A^2(r,t)dt^2+B^2(r,t)dr^2+R^2(r,t)d\mathrm{\Omega }^2.$$
(1)
If, moreover, one supposes spherical symmetry (SS), the congruence of the matter fluid is initially irrotational and then, by the assumed barotropic equation of state, the vorticity is zero at all times; by SS, $`\varrho =\varrho (r,t)`$ and $`p=p(r,t)`$. However, the other kinematical quantities of the congruence of matter worldlines, i.e., acceleration, shear and expansion, are non-zero in this spacetime. Note that in the FLRW models all of them are zero except the expansion.
As the vorticity is always zero, the fluid matter flow is always hypersurface orthogonal and there exist: 1) a cosmic time $`t`$ and 2) a 3-metric of the spacelike hypersurfaces. As far as I know, this spacetime was used by Mashhoon and Partovi to describe the gravitational collapse of a charged fluid sphere , and to study large-scale observational relations .
From the Einstein equations without $`\mathrm{\Lambda }`$, one obtains the conservation of energy-momentum $`T^{ab}`$:
$$\nabla _aT^{ab}=0.$$
(2)
From (2), one obtains (see ) for a perfect fluid, the energy conservation equation:
$$\frac{\partial \varrho }{\partial t}+(\varrho +p)\theta =0,$$
(3)
where $`\theta `$ is the expansion of the matter fluid, and the Euler equation
$$\frac{\partial p}{\partial r}+(\varrho +p)\mathbf{a}=\mathbf{0},$$
(4)
where $`\mathbf{a}`$ is the acceleration of the fluid congruence. Note that in FLRW models equation (4) is a tautology, because both terms on the LHS are independently zero. However, the consequences of the Euler equation (4) are very important in our model. As the fluid is barotropic and the NEC holds, the acceleration is always directed away from a high-pressure region towards a neighbouring low-pressure one. In other words, the radial pressure gradient is negative and gives rise to an acceleration of the matter flow which opposes the gravitational attraction. This could also be important for evading the classical singularity theorems, but in this essay I will only show that this fluid acceleration can explain the SNe Ia evidence for a negative $`q_0`$ parameter.
## 3 LUMINOSITY DISTANCE-REDSHIFT RELATION AND DECELERATION PARAMETER
To relate our model to the SNe Ia data, we need to know how the luminosity distance-redshift relation and the deceleration parameter are modified by the inhomogeneity. By using conservation of light flux (see ), it follows from the metric (1) that
$$D_L=(1+z)^2R(t_s,r_s),$$
(5)
where $`D_L`$ is the luminosity distance and $`t_s,r_s`$ are the cosmic time and radial coordinate at emission. At the present time, $`t_0`$, this relation reads
$$D_L(t_0,z)=(1+z)^2R[t_s(t_0,z),r_s(t_0,z)].$$
(6)
Expanding $`D_L`$ to second order in $`z`$, after expanding $`t_s(t_0,z)`$ and $`r_s(t_0,z)`$ to first order in $`z`$, one finds :
$$D_L(t_0,z)\simeq \frac{1}{H_0}[z+\frac{1}{2}(1-Q_0)z^2],$$
(7)
where $`Q_0`$ is a generalized deceleration parameter at present cosmic time. On the other hand, if one expands the metric coefficients of (1) and the mass-energy density and pressure in power series of the radial coordinate, then, after imposing the Einstein equations and a scale change in the radial coordinate, one obtains :
$$ds^2\simeq -\left(1+\frac{1}{2}\alpha (t)r^2\right)dt^2+S^2(t)\left[\left(1+\frac{1}{2}\beta (t)r^2\right)dr^2+r^2\left(1+\frac{1}{2}\gamma (t)r^2\right)^2d\mathrm{\Omega }^2\right],$$
(8)
where $`S(t)`$ is the customary scale factor, $`\alpha (t)`$ is a non-negative function related to the acceleration of the cosmic fluid and a combination of $`\beta `$ and $`\gamma `$ gives the intrinsic spatial curvature of the 3-dim spacelike cosmic spaces.
On the basis of equations (7) and (8), one finds  that
$$Q_0=q_0-II_0,$$
(9)
thus, the luminosity distance-redshift relation at the present time reads
$$D_L(t_0,z)\simeq \frac{1}{H_0}[z+\frac{1}{2}(1-q_0+II_0)z^2],$$
(10)
where $`H_0`$ and $`q_0`$ are the customary Hubble and deceleration parameters
$$H_0:=\frac{\dot{S_0}}{S_0},$$
$$q_0:=-\frac{S_0\ddot{S_0}}{\dot{S_0}^2},$$
and $`II_0`$ is a new inhomogeneity parameter which reads
$$II_0=\frac{\alpha (t_0)}{(S_0H_0)^2}.$$
(11)
Note that $`II_0`$ is related to the congruence acceleration through the metric coefficient $`\alpha `$. In our model the deceleration parameter at present time is:
$$q_0=\frac{1}{2}\mathrm{\Omega }_0\left(1+\frac{3p_0}{\varrho _0}\right)-II_0,$$
(12)
Since at the present time $`{\displaystyle \frac{3p_0}{\varrho _0}}\ll 1`$, one finally obtains
$$q_0\simeq \frac{1}{2}\mathrm{\Omega }_0-II_0,$$
(13)
where $`\mathrm{\Omega }_0`$ is the present matter density in units of the critical density.
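For concreteness, the content of Eqs. (10) and (13) can be checked with a short numerical sketch. This is only an illustration, not part of the original derivation; the sample values of $`\mathrm{\Omega }_0`$ and $`II_0`$ below are assumed, and the luminosity distance is evaluated only through the second-order expansion in redshift.

```python
# Illustrative evaluation of Eqs. (10) and (13); sample parameter values are
# assumptions, not fits to data.

def q0_approx(omega0, ii0):
    """Eq. (13): q0 ~ Omega_0/2 - II_0 (valid when 3*p0/rho0 << 1)."""
    return 0.5 * omega0 - ii0

def dl_hubble_units(z, q0):
    """Eqs. (7)/(10): D_L in units of the Hubble distance c/H0,
    D_L * H0 / c ~ z + (1 - q0) * z**2 / 2, to second order in z."""
    return z + 0.5 * (1.0 - q0) * z**2

omega0 = 0.3                        # assumed present matter density parameter
for ii0 in (0.0, 0.3, 0.5):         # assumed inhomogeneity parameter values
    q0 = q0_approx(omega0, ii0)
    dl = dl_hubble_units(0.5, q0)   # distance at z = 0.5 in units of c/H0
    regime = "acceleration" if q0 < 0 else "deceleration"
    print(f"II_0 = {ii0:.1f} -> q0 = {q0:+.2f} ({regime}), "
          f"D_L(z=0.5) = {dl:.3f} c/H0")
```

A positive $`II_0`$ larger than $`\mathrm{\Omega }_0/2`$ makes $`q_0`$ negative and slightly increases the second-order term of $`D_L`$, which is the sense of the effect discussed in the Conclusion below.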
## 4 CONCLUSION
From formula (13), we see that one can obtain a negative deceleration parameter, i.e., cosmic acceleration, in agreement with the recent SNe Ia data, through a positive inhomogeneity parameter related to the kinematic acceleration or, equivalently, to a negative pressure gradient of a cosmic barotropic fluid. In this way, it is not necessary to explain the supernova data by the presence of $`\mathrm{\Lambda }`$, a vacuum energy, or some other exotic form of matter. Although in our model without $`\mathrm{\Lambda }`$ the Cosmological Principle is relaxed, the model remains in perfect agreement with the local isotropy about our worldline measured by the CBR experiments.
## 5 ACKNOWLEDGEMENTS
I am grateful to P. Ruiz-Lapuente for explaining to me some observational insights and to A. San Miguel and F. Vicente for discussions and TeX help. This work has been partially supported by the Spanish research projects VA61/98 and VA34/99 of Junta de Castilla y León and C.I.C.Y.T. PB97-0487.
## 6 REFERENCES
Perlmutter, S. et al., Nature, 391 (1998) 51.
Riess, A.G. et al., preprint astro-ph/9805201, (1998).
Ruiz-Lapuente, P., private communication (1999).
Felten, J.E., and Isaacman, R., Rev. Mod. Phys., 58 (1986) 689.
Guth, A., Phys. Rev. D, 23 (1981) 347.
Carroll, S.M., Press, W.H., Turner, E.L., Ann. Rev. Astron. & Astrophys., 30 (1992) 499.
Ellis, G.F.R. et al., Phys. Rep., 124 (1985) 315.
Krasiński, A., preprint gr-qc/9806039, (1998).
Ribeiro, M.B., Astrophys. J., 441 (1995) 477.
Mashhoon, B., Partovi, M.H., Phys. Rev. D, 20 (1979) 2455.
Partovi M.H., Mashhoon, B., Astrophys. J., 276 (1984) 4.
Ellis, G.F.R., (1971) in General Relativity and Cosmology, ed. R.K. Sachs (N.Y.: Academic Press).
Pascual-Sánchez, J.F., preprint MAF-UVA-01, (1999).
Kristian, J., Sachs, R.K. Astrophys. J., 143 (1966) 379.
|
no-problem/9905/astro-ph9905054.html
|
ar5iv
|
text
|
# HOW SATURATED ARE ABSORPTION LINES IN THE BROAD ABSORPTION LINE QUASAR PG 1411+442 ?
## 1 INTRODUCTION
About 10% of quasars (QSO) found in the optical flux-limited samples exhibit broad absorption lines (BAL), resonance line absorption troughs extending $`\sim `$0.1c to the blue of the emission line centers (Weymann et al. 1991). However, Goodrich (1997) and Krolik & Voit (1998) argued that the true fraction of BAL QSO can be as high as $`>30`$% considering attenuation of the light or a non-spherical distribution of continuum emission. Based on the similarity of emission line properties, it is generally thought that BAL regions exist in all quasars, but occupy only a fraction of the solid angle (e.g., Weymann et al. 1991). Thus a BAL region is an important part of every QSO’s structure.
The absorption line properties of BAL QSO have been intensively studied; however, results on the nature of the absorbing material from the UV absorption lines alone are rather controversial. All earlier results indicated low column densities, an equivalent $`N_H`$ of around 10<sup>20∼21</sup> cm<sup>-2</sup>, on the assumption that the absorption is not too optically thick, but then metal abundances far higher than the solar values are required (Korista et al. 1996; Turnshek et al. 1996; Hamann 1996). However, recent results show that the structure of the absorbing material is rather complicated and that the absorption is seriously saturated in some velocity ranges in PG 0946+301 (Arav et al. 1998). The total column density could thus be much larger. One major difficulty, however, is the lack of an independent measurement of the covering factor.
In the soft X-ray band, BAL QSO are notorious for their weakness (Green et al. 1995, Green & Mathur 1996). It was argued that the BAL QSO are not intrinsically X-ray weak, but heavily absorbed in the soft band. The great similarity of the emission line spectra in BAL and non-BAL QSO indicates that the BLR clouds have seen a similar ionizing (UV to X-ray) continuum in both types of QSO (Weymann et al. 1991).
Heavy absorption has been found in one of the two ROSAT-detected BAL QSO. Green & Mathur (1996) showed that the spectrum of 1246-057 is absorbed by an intrinsic column of $`\sim `$1.2$`\times `$10<sup>23</sup> cm<sup>-2</sup>. A similar column density was found in the ASCA spectrum of PHL 5200 (Mathur et al. 1996). More BAL QSO still remain to be discovered in the X-ray band. From the ROSAT non-detections, Green & Mathur (1996) derived a lower limit of $`\sim `$10<sup>23</sup> cm<sup>-2</sup> for the absorption column densities of these objects if their intrinsic X-ray emission is similar to that of other quasars. A similar conclusion has been reached by Gallagher et al. (1999). Brinkmann et al. (1999) analyzed the ASCA spectra of three BAL QSO; only the brightest, low-redshift (z=0.0896, Marziani et al. 1996) BAL QSO PG 1411+442 was detected, with a column density of $`\sim `$2$`\times `$10<sup>23</sup> cm<sup>-2</sup>.
At these column densities, photons below 1 keV are completely absorbed. Any soft X-ray emission from a BAL QSO must be either scattered light or light leaking through a partially covering absorber. Therefore, measuring the unabsorbed component in soft X-rays and the absorbed one in the hard X-ray band will enable us to determine the fraction of light scattered or leaked, thus providing an independent measurement of the effective covering factor of the absorbing material. Both scattered light and absorbed hard X-rays were detected in the nearby bright BAL QSO PG 1411+442 (Brinkmann et al. 1999).
In this Letter, we make a further analysis of the broad band X-ray spectrum and the UV absorption line spectrum from HST. We will show that the UV absorption lines are completely saturated in the deepest part of the absorption trough.
## 2 X-RAY ABSORPTION AND SCATTERING
Brinkmann et al. (1999) found that the combined ROSAT and ASCA spectrum can be well fitted by a heavily absorbed power law at high energies plus an unabsorbed power law at low energy with a much steeper spectrum. They noticed that the steepness of the power law in the soft band is consistent with the H$`\beta `$ width versus soft X-ray spectral index correlation found for a sample of QSOs (Laor et al. 1997, Wang et al. 1996), and interpreted this component as scattered or leaked nuclear light. By comparing the normalizations at 1 keV, they estimated the ratio of this soft component to the primary component to be $`\sim `$5 per cent. Here we re-fit the broad band X-ray spectrum to determine more consistently the fraction of the scattered or leaked light.
As the ASCA and ROSAT spectra in the overlapping energy range are consistent with each other, they were fitted simultaneously (see Brinkmann et al. 1999). Since the X-ray spectra at low energies are usually much steeper due to a soft excess, the X-ray spectrum is modeled as a broken power law with a low energy index $`\mathrm{\Gamma }_{sx}`$ and a high energy index $`\mathrm{\Gamma }_{hx}`$, with partially covering absorption. The index $`\mathrm{\Gamma }_{hx}`$ is fixed to 2.0, which is typical for radio-quiet QSOs. The model can fit the data very well ($`\chi ^2=74`$ for 79 degrees of freedom, see also Figure 1a). The results are presented in Table 1. Figure 2 shows the 68% and 90% confidence contours for $`N_H^{(intrin)}`$ versus the covering factor. Both covering factor and the absorption column density are dependent on the value of $`\mathrm{\Gamma }_{hx}`$ and the break energy. For a reasonable range $`1.8<\mathrm{\Gamma }_{hx}<2.2`$, the best fit covering factor is around 0.95 to 0.97. The fitted covering factor also depends on the break energy. A lower break energy yields a slightly larger covering factor and a higher break energy results in a somewhat lower covering factor. Considering these uncertainties, it is likely that the covering factor is in the range 0.94-0.97.
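To make the role of the covering factor more concrete, the following sketch evaluates a partially covered power-law model of the kind fitted above. It is an illustration only, not the fitting code used for the data: a single photon index replaces the broken power law, and the photoelectric cross section is approximated by a crude $`E^{-3}`$ scaling normalized to an assumed 2$`\times `$10<sup>-22</sup> cm<sup>2</sup> per hydrogen atom at 1 keV rather than a proper absorption model. The column density and covering factor are the representative values quoted above.

```python
# Schematic partially covered power law: a fraction cov of the source is seen
# through a column nh, the remaining (1 - cov) escapes unabsorbed.
import math

def partial_covering_ratio(e_kev, nh, cov):
    """Observed/unabsorbed flux ratio at energy e_kev (power-law shape cancels)."""
    sigma = 2e-22 * e_kev ** -3          # crude cross section per H atom (cm^2)
    return cov * math.exp(-nh * sigma) + (1.0 - cov)

nh, cov = 2e23, 0.96                     # column (cm^-2) and covering factor
for e in (0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"E = {e:4.1f} keV : observed/unabsorbed ~ "
          f"{partial_covering_ratio(e, nh, cov):.3f}")
```

Below 1-2 keV essentially only the uncovered few per cent survives, while above several keV most of the direct flux is transmitted, which is why the soft-band normalization measures the scattered or leaked fraction.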
Since PG 1411+442 is the least luminous BAL QSO, there is some concern that the soft X-ray emission might be due to thermal emission from hot gas in the host galaxy. A model consisting of an absorbed power-law plus thermal emission (Raymond-Smith model) was fitted to the spectra. The fit is statistically poor ( $`\chi ^2`$=132 for 78 d.o.f.) and shows systematic deviations in the residuals (Figure 1b). Furthermore, the fit yields a very flat photon index $`\mathrm{\Gamma }=0.06_{0.15}^{+0.20}`$. Therefore, thermal emission cannot be the source for the soft excess.
As warm absorption can also produce a structure similar to a soft excess, we fit the spectrum with a power law absorbed by ionized material (Zdziarski et al. 1995). The fit is only acceptable at a probability of 3% ($`\chi ^2`$=104 for 78 d.o.f.), with systematic deviations between 0.3 and 1.2 keV (see Figure 1c). Thus a warm absorber cannot successfully reproduce the observed spectrum either. In fact, when a partially covered warm absorption model is applied, the soft X-ray emission comes almost entirely from the leaked component, and the X-ray ionization parameter ($`U_x`$, see Netzer 1996 for the definition) is constrained to be $`0.05_{-0.05}^{+0.04}`$.
## 3 THE UV ABSORPTION LINE TROUGHS
PG 1411+442 was observed by HST with GHRS centered at 130 nm and 190 nm with a spectral resolution $`\mathrm{\Delta }\lambda /\lambda =3000`$ (Corbin & Boroson 1996). We retrieved the calibrated files from the HST data archive to obtain the wavelength, absolute flux, error flag, and photon noise vector, and we averaged all spectra. Since the blue wings of the emission lines Ly$`\alpha `$, CIV and NV are strongly absorbed, it is impossible to determine their emission line profiles directly. CIII\]$`\lambda `$1909 is the only moderately strong line in the HST spectrum which is not affected by absorption, and it is used to construct a template model for the emission line profiles. We fitted this line with two Gaussians over a range avoiding the SiIII\] contamination and assumed that the other lines consist of the same two Gaussian components but with different normalizations. To test this, we fitted the model to HeII and to the NV and CIV red wings. Reasonably good fits were obtained, and normalizations were determined for each line. We have taken into account the doublet nature of the CIV and NV lines; the doublet ratios were fixed to the ratio of their statistical weights. Since the red wing of Ly$`\alpha `$ is also affected by the NV absorption line, its emission line profile is modeled using a scaled version of the CIV line profile.
Figure 3 shows the ratio of the data to a model consisting of emission lines and a continuum. The absorption line profiles are similar for CIV, Ly$`\alpha `$ and NV. The difference in the apparent broadness of component B in NV, CIV, and Ly$`\alpha `$ is due to the doublet nature of CIV and NV and their different doublet separations. A narrow (FWHM $`\sim `$300 km s<sup>-1</sup>) absorption line at zero velocity is seen most strongly in Ly$`\alpha `$ and is also visible in CIV and SiIV as doublets. The exact profile of component A in NV is uncertain because the Ly$`\alpha `$ profile cannot be well defined. A more detailed analysis of the absorption line structures is beyond the scope of this paper.
The absorption troughs of component B in CIV and NV are box-shaped and have a non-zero flux at the bottom of $`\sim `$5-6 per cent of the unabsorbed flux. This value is higher than the expected grating-scattered light, at the 1-2% level in the GHRS spectrum (Crenshaw et al. 1998). However, the fraction of light at the bottom of the troughs coincides with the 3-5 per cent of scattered or leaked light in the soft X-ray band.
## 4 DISCUSSION
We have shown that in PG 1411+442 unabsorbed X-rays occur at the 3-5 per cent level, which coincides with the fraction of residual flux at the bottom of the absorption line troughs. This can be described by either a partial covering or a simple scattering model. In either case, the absorption lines at the bottom of the troughs may well be black, i.e., completely saturated.
The idea that scattered light fills in the absorption line troughs in BAL QSOs was actually proposed some years ago based on the results of spectropolarimetric observations. Cohen et al. (1995) and Ogle (1997) showed that the polarization at the bottom of the troughs is much higher than the continuum level polarization. Their finding demonstrates that a considerable fraction of the flux at the bottom of the troughs is from scattered photons. As the polarization degree is strongly dependent on the detailed geometry of the scatterer as well as on the continuum emission pattern, it is impossible to determine the amount of total scattered light from the polarized light itself and the degree of saturation of the line cannot be estimated. Here we have illustrated that because of the completely scattered or leaked nature of the soft X-rays, broad band X-ray observations can provide a way to estimate this fraction. A combination of polarization and X-ray measurements will allow to constrain the geometry of the scatterer.
Ogle et al. (1997) showed that, in general, the absorbing medium also partially covers the scattering medium. This implies that the scattered light is also partially absorbed. We wish to point out that the upper limit (3.3$`\times 10^{20}`$ cm<sup>-2</sup>) on the column density derived for the intrinsic soft X-ray absorption might still be consistent with their results. Future UV spectro-polarimetric observations of PG 1411+442 will make it possible to address this question. The remarkably good agreement between the ROSAT and ASCA spectra in the overlapping region suggests that the soft X-ray emission did not change between the two epochs, separated by 6 years. This can be naturally explained in a scattering model.
On the other hand, partial covering is strongly suggested by optical and UV observations. For intrinsic absorbers with relatively narrow UV line profiles, there is good evidence that the coverage of the continuum source is partial, velocity dependent, and possibly also a function of ionization stage (Barlow et al. 1997). There is suspicion that the same complications apply to the BAL lines (Arav 1997). A constant soft X-ray flux is possibly a drawback for the partial covering model, as the continuum of PG 1411+442 is variable (Giveon et al. 1999). Because of the lack of a simultaneous hard X-ray observation during the ROSAT observation, we are unable to rule out the partial covering model. Future observations of this object covering both soft and hard X-rays will make it possible to discriminate between the two models and to estimate precisely the fraction of unabsorbed X-ray light.
The X-ray properties of objects with intrinsically relatively narrow absorption lines are completely different from those of BAL QSOs. Warm absorption is seen in the former, while X-rays are extremely weak in the BAL QSOs, perhaps due to strong absorption. Recently, Wang et al. (1999) found that the soft X-ray emission is very weak in the luminous Seyfert 1 galaxy PG 1126-041, which shows UV absorption lines only slightly narrower than typical BAL lines. They further demonstrated that the X-ray weakness can be fully explained by warm absorption with a lower ionization parameter and a larger column density than found in typical Seyfert 1 galaxies. The absorbing column density deduced for PG 1411+442 is several times larger than that for PG 1126-041, while the ionization parameter also seems lower for the former than for the latter (see section 2). These results indicate a continuous change of the physical parameters with the width of the absorption lines, i.e., an increase in column density, as well as a decrease in the ionization parameter, with increasing line width. In fact, the two ASCA-detected BAL QSOs, PHL 5200 (Mathur et al. 1996) and PG 1411+442, show line widths narrower than those of more typical BAL QSOs. Future observations with more sensitive instruments will allow us to address this issue.
We are grateful to the referee for useful suggestions which improved the presentation of this paper. TW acknowledges support at RIKEN by a Science and Technology Agency fellowship. WB thanks the Cosmic Radiation Laboratory for hospitality where part of the research was done in the framework of the MPG-RIKEN exchange program. This work is partly supported by Pandeng Program of CSC and Chinese NSF.
|
no-problem/9905/astro-ph9905088.html
|
ar5iv
|
text
|
# Small-scale Interaction of Turbulence with Thermonuclear Flames in Type Ia Supernovae
## 1 Introduction
The thermonuclear explosion of a Chandrasekhar mass C+O white dwarf is presently the most promising candidate to explain the majority of Type Ia Supernova (SN Ia) events (Höflich et al. 1996). However, the complex phenomenology of turbulent thermonuclear flames and deflagration-detonation-transitions (DDTs) renders a self-consistent description of the explosion mechanism extremely difficult (Khokhlov 1995, Niemeyer et al. 1996, Niemeyer & Woosley 1997). The open questions can be broadly classified as macroscopic ones, pertaining to the global structure of the flame front and the buoyancy-driven production of turbulence, and microscopic ones including turbulence-flame interactions on scales of the flame thickness and pre-conditioning for DDT. In this work, first results of an investigation of the latter will be presented, obtained from direct numerical simulations of a simplified flame model coupled to a three-dimensional incompressible turbulent flow.
Based on the observational evidence of intermediate elements in SN Ia spectra, detonations can be ruled out as the initial combustion mode after onset of the thermonuclear runaway, as they would predict the complete incineration of the white dwarf to iron group nuclei. Deflagrations, on the other hand, are hydrodynamically unstable to both flame intrinsic (Landau-Darrieus) and buoyancy-driven (Rayleigh-Taylor, RT) instabilities. While the former is stabilized in the nonlinear regime, the latter produces a growing, fully turbulent RT-mixing region of hot burning products and cold “fuel”, separated by the thin thermonuclear flame. Driven predominantly by the shear flow surrounding buoyant large-scale bubbles, turbulent velocity fluctuations cascade down to the Kolmogorov scale $`l_\mathrm{k}`$, which may, under certain conditions, be smaller than the laminar flame thickness (Section 2). This regime is unknown territory for flame modeling; although it has been speculated in the supernova literature that the effect of turbulence on the laminar flame structure is negligible as long as the velocity fluctuations are sufficiently weak, the existence of turbulent eddies on scales smaller than the flame thickness – regardless of their velocity – is in conflict with the definition of the “flamelet regime” in the flamelet theory of turbulent combustion (Peters 1984). No numerical or experimental evidence to confirm and quantify this speculation has been available so far.
As the explosion proceeds, the turbulence intensity grows while the flame slows down and thickens as a consequence of the decreasing material density of the expanding star. After some time, small scale turbulence must be expected to significantly alter the flame structure and its local propagation velocity with respect to the laminar solution. On the other hand, most subgrid-scale models for the turbulent thermonuclear flame brush in numerical simulations of supernovae depend crucially on the assumption of a (nearly) laminar flame structure on small scales (Niemeyer & Hillebrandt 1995, Khokhlov 1995, Niemeyer et al. 1996). The intent of this work is to present a first approach to study the regions of validity and the possible breakdown of this “thermonuclear flamelet” assumption. Specifically, a modification of Peters’ (1984) flamelet definition suggested by Niemeyer & Kerstein (1997) will be tested.
In addition to the verification of subgrid-scale models, this inquiry is relevant in the context of DDTs which were suggested to occur in SN Ia explosions after an initial turbulent deflagration phase (Khokhlov 1991, Woosley & Weaver 1994). A specific mechanism for DDT in SN Ia explosions based on strong turbulent straining of the flame front and transition to the distributed burning regime has been proposed (Niemeyer & Woosley 1997). The ratio of laminar flame speed to turbulence velocity on the scale of the flame thickness, $`S_\mathrm{L}/u(\delta )`$, where $`\delta `$ is the laminar thermal flame thickness, has been suggested as a control parameter indicating the transition to distributed burning when $`S_\mathrm{L}/u(\delta )\lesssim O(1)`$ (Niemeyer & Kerstein 1997). One of the main results presented below is that the transition to distributed burning was not observed in the parameter range ($`S_\mathrm{L}/u(\delta )\gtrsim 0.95`$) that we were able to probe.
Thermonuclear burning fronts are similar in many ways to premixed chemical flames. The issues addressed in this work are motivated in the framework of supernova research, but our results apply equally well to premixed chemical flames with low Prandtl numbers and small thermal expansion rates. In order to facilitate numerical computations, we modeled the flame with a single scalar reaction-diffusion equation that is advected in a three-dimensional, driven incompressible turbulent flow. The arguments justifying these simplifications are outlined in Section 2.
This paper is organized as follows: We shall summarize the most important parameters and dimensional relations of thermonuclear flames and buoyancy-driven turbulence in Section 2, followed by a brief description of the numerical methods employed for this work (Section 3). In Section 4, the results of a series of direct simulations of a highly simplified flame propagating through a turbulent medium are discussed and interpreted in the framework of SN Ia modeling.
## 2 Flame properties and model formulation
The laminar properties of thermonuclear flames in white dwarfs were investigated in detail by Timmes & Woosley (1992), including all relevant nuclear reactions and microscopic transport mechanisms. The authors found that the laminar flame speed, $`S_\mathrm{L}`$, varies between $`10^7`$ and $`10^4`$ cm s<sup>-1</sup> as the density declines from $`3\times 10^9`$ to $`10^7`$ g cm<sup>-3</sup>. The thermal flame thickness, $`\delta `$, grows from $`10^{-5}`$ to $`1`$ cm for the same density variation. Microscopic transport is dominated entirely by electrons close to the Fermi energy by virtue of their near-luminal velocity distribution and large mean-free-paths. As a consequence, ionic diffusion of nuclei is negligibly small compared with heat transport and viscosity. Comparing the latter two, one finds typical values for the Prandtl number of $`Pr=\nu /\kappa \approx 10^{-5}`$–$`10^{-4}`$, where $`\kappa `$ and $`\nu `$ are the thermal diffusivity and viscosity, respectively (Nandkumar & Pethick 1984). Further, partial electron degeneracy in the burning products limits the density contrast, $`\mu =\mathrm{\Delta }\rho /\rho `$, between burned and unburned material to very small values, $`\mu \approx 0.1`$–$`0.5`$.
To within reasonable accuracy, one may estimate the magnitude of large-scale turbulent velocity fluctuations, $`u(L)`$, from the rise velocity of buoyant bubbles with diameter $`L`$, $`u_{\mathrm{rise}}\approx (0.5\mu gL)^{1/2}`$, where $`g`$ is the gravitational acceleration. Inserting typical values, $`L\sim 10^7`$ cm, $`g\sim 10^8`$ cm s<sup>-2</sup>, and $`\mu \sim 0.3`$, one finds $`u(L)\sim 10^7`$ cm s<sup>-1</sup>. For a viscosity of $`\nu \sim 1`$ cm<sup>2</sup> s<sup>-1</sup> (Nandkumar & Pethick 1984), this yields the integral-scale Reynolds number $`Re\sim 10^{14}`$ and a characteristic Kolmogorov scale $`l_\mathrm{k}\sim LRe^{-3/4}\sim 10^{-4}`$ cm. Hence, it is clear that soon after the onset of the explosion, turbulent eddies are present on scales smaller than the laminar flame thickness. In conventional flamelet theory (Peters 1984), the “flamelet regime” is defined based on length-scale arguments alone; that is, if the characteristic length-scale of the flame is smaller than the Kolmogorov length, the turbulent flame is said to be in the flamelet regime. Thus, according to conventional flamelet theory, the scaling arguments offered here would clearly indicate that these thermonuclear flames are not in the flamelet regime. Therefore, flamelet-based models such as those used in almost all multidimensional SN Ia simulations would not appear to be applicable for these flames.
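These order-of-magnitude relations are easy to re-evaluate; the lines below simply plug in the representative numbers quoted above and are not taken from the original paper.

```python
# Buoyancy-driven velocity, Reynolds number, and Kolmogorov scale estimates.
import math

L, g, mu, nu = 1e7, 1e8, 0.3, 1.0        # cm, cm/s^2, density contrast, cm^2/s

u_L = math.sqrt(0.5 * mu * g * L)        # buoyant rise speed, ~1e7 cm/s
Re = u_L * L / nu                        # integral-scale Reynolds number
l_k = L * Re ** (-0.75)                  # Kolmogorov scale, l_k ~ L Re^(-3/4)
print(f"u(L) ~ {u_L:.1e} cm/s,  Re ~ {Re:.1e},  l_k ~ {l_k:.1e} cm")
```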
However, the low Prandtl number of degenerate matter allows a situation in which the Kolmogorov time scale, $`\tau _\mathrm{k}\sim l_\mathrm{k}/u(l_\mathrm{k})\sim l_\mathrm{k}^2/\nu `$, is larger than the reaction time scale $`\tau _\mathrm{r}\sim \dot{w}^{-1}`$, where $`\dot{w}`$ is the fuel consumption rate (Niemeyer & Kerstein 1997). This is readily seen by setting $`\tau _\mathrm{r}`$ equal to the diffusion time scale $`\tau _\mathrm{d}\sim \delta ^2/\kappa `$ for stationary flames (where $`\kappa `$ is the microscopic thermal diffusivity), yielding
$$\frac{\tau _\mathrm{k}}{\tau _\mathrm{r}}=Pr^{-1}\left(\frac{l_\mathrm{k}}{\delta }\right)^2.$$
(1)
Even if the length scale ratio on the $`rhs`$ is less than unity, the $`lhs`$ can be large for a sufficiently small $`Pr`$. In this case, small eddies are burned before their motion can appreciably affect the flame structure.
An alternative, $`Pr`$-independent, criterion for flamelet breakdown has been proposed (Niemeyer & Kerstein 1997), based on the relative importance of the eddy diffusivity, $`\kappa _\mathrm{e}\sim u(l)l`$, and the microscopic heat conductivity on scales $`l\lesssim \delta `$. As $`\kappa _\mathrm{e}`$ is, in general, a growing function of scale, the condition $`\kappa _\mathrm{e}(\delta )\lesssim \kappa `$ is sufficient and can be invoked to define the flamelet burning regime. Using the relation $`S_\mathrm{L}\sim \delta /\tau _\mathrm{d}`$, one finds the more intuitive formulation $`u(\delta )\lesssim S_\mathrm{L}`$. In other words, the flame structure on scales $`\delta `$ and below is dominated by heat diffusion as long as the characteristic velocity associated with eddies of a length scale of the same order as the laminar flame thickness is smaller than the laminar flame speed. If heat diffusion is the only relevant microscopic transport process, the local flame speed is expected to remain comparable to $`S_\mathrm{L}`$ despite the presence of eddies within the flame.
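As an illustration of this criterion (not part of the original analysis), one can combine the flame parameters of Timmes & Woosley (1992) quoted above with an assumed Kolmogorov scaling $`u(l)\sim u(L)(l/L)^{1/3}`$ for the velocity cascade; the pairing of $`S_\mathrm{L}`$ and $`\delta `$ at the two density extremes is only approximate.

```python
# Rough check of the flamelet criterion u(delta) <~ S_L at the density extremes.

def u_of_l(l, u_large=1e7, l_large=1e7):
    """Eddy velocity at scale l (cm) under an assumed Kolmogorov cascade."""
    return u_large * (l / l_large) ** (1.0 / 3.0)

cases = [
    ("rho ~ 3e9 g/cm^3", 1e7, 1e-5),   # (label, S_L in cm/s, delta in cm)
    ("rho ~ 1e7 g/cm^3", 1e4, 1.0),
]
for label, s_l, delta in cases:
    ratio = s_l / u_of_l(delta)
    verdict = "flamelet" if ratio > 1.0 else "flamelet assumption questionable"
    print(f"{label}: S_L/u(delta) ~ {ratio:.2g} -> {verdict}")
```

With these numbers the criterion is satisfied by a wide margin early in the explosion and approaches unity only at the lowest densities, consistent with the discussion in the Introduction.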
This paper attempts to establish whether or not the newly proposed scaling relationship of Niemeyer & Kerstein is an appropriate definition of the flamelet regime for thermonuclear flames, and, more generally, whether or not these thermonuclear flames can be treated as flamelets in numerical simulations. In order to be able to efficiently address this question, we make three assumptions that greatly simplify the problem without violating the underlying physics. First, we note that nuclear energy generation is dominated by carbon burning, which has a very strong dependence on temperature ($`\dot{w}\propto T^{21}`$). Therefore, the flame dynamics can be well approximated by a single, diffusive progress variable $`c`$ that is advected by the fluid and coupled to a strongly nonlinear source term that mimics nuclear burning. Second, the small value of $`\mu `$ suggests that dilatation effects do not play a significant role and may be neglected for the purpose of this study. This, together with the small Mach number of turbulent fluctuations on very small scales, justifies the use of the incompressible Navier-Stokes equations. Finally, we assume that the effect of the turbulent cascade from large scales can be adequately modeled by forcing the flow field on the lowest wavenumbers of the simulation.
## 3 Numerical technique
The code used to simulate the thermonuclear flame is based on the pseudo-spectral approach, where derivatives are taken in Fourier space but non-linear terms are evaluated in real space (Ruetsch & Maxey 1991). The diffusive term is evaluated implicitly, so that the code provides stable, accurate solutions even for very small Prandtl numbers. All boundary conditions were periodic, and energy was added at every time step to the lowest wavenumbers by solving a Langevin equation as described in Eswaran and Pope (1988a, 1988b). All of the simulations were carried out in a $`64^3`$ domain and were run for several eddy-turnover times so as to obtain statistical stationarity.
As was mentioned in the previous section, the temperature dependence of the main reaction participating in the thermonuclear flame is roughly $`T^{21}`$. It was found that a source term $`\dot{w}=kc^{21}(1-c)`$ (where the $`(1-c)`$ arises from the dependence of the reaction on reactant concentration) produced too narrow a reaction zone to be easily resolved in space in a three-dimensional simulation. Instead, it was decided to use a source term of $`\dot{w}=kc^4(1-c)`$, which is still strongly non-linear, but produces a reaction zone that can be resolved in a practical three-dimensional simulation.
One difficulty that arises in using a pseudo-spectral code to simulate premixed combustion is that the scalar field – in this case, the progress variable – must be periodic. This was achieved by separating the scalar field into two components – a uniform gradient in the direction of propagation of the flame was subtracted such that the remaining field was zero at each end of the periodic box in that direction. Thus, where
$$\frac{\partial c}{\partial t}+u_i\frac{\partial c}{\partial x_i}=\mathcal{D}\frac{\partial ^2c}{\partial x_i\partial x_i}+\dot{w}$$
(2)
is the transport equation for the progress variable with constant properties, if a uniform gradient $`\beta `$ in the $`x_3`$ direction (the direction of propagation of the flame) is subtracted,
$$c=\beta x_3+\theta $$
(3)
then the transport equation for the periodic fluctuating component $`\theta `$ is:
$$\frac{\partial \theta }{\partial t}+u_i\frac{\partial \theta }{\partial x_i}+\beta u_3=\mathcal{D}\frac{\partial ^2\theta }{\partial x_i\partial x_i}+\dot{w}.$$
(4)
So long as the reaction zone remained relatively thin and did not approach the boundaries, $`c`$ remained bounded between 0 and 1. In order to keep the reaction away from the boundaries, the mean velocity in the direction of propagation was set to the propagation speed of the flame. This propagation speed was determined at each time step from a volume integral of the source term. The need to keep the reaction away from the boundaries was found to restrict the simulation to a limited ratio of Prandtl number to $`k`$ – the flame speed could not be significantly lower than $`u^{}`$ or wrinkles in the flame would become too large to be contained in the domain.
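For readers who wish to experiment with the model, the following one-dimensional sketch reproduces its basic ingredients: a single progress variable obeying a reaction-diffusion equation with the source term $`\dot{w}=kc^4(1-c)`$, and a propagation speed obtained from the spatial integral of the source term. It is only a schematic analogue of the simulations described above: it uses an explicit finite-difference scheme instead of the pseudo-spectral method, omits advection and turbulent forcing, and all parameter values are arbitrary code-unit choices.

```python
# 1-D laminar analogue of the progress-variable model: dc/dt = D c_xx + w(c),
# with w(c) = k c^4 (1 - c); the front speed is the spatial integral of w.
import numpy as np

D, k = 1.0, 1.0                        # diffusivity and rate constant (code units)
nx, dx = 2000, 0.1
dt = 0.2 * dx**2 / D                   # explicit step, well below the diffusive limit
x = np.arange(nx) * dx
c = np.where(x < 20.0, 1.0, 0.0)       # burned (c = 1) to the left, fuel to the right

speeds = []
for _ in range(40000):
    lap = (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2
    lap[0] = lap[-1] = 0.0             # crude decoupling of the two ends
    wdot = k * c**4 * (1.0 - c)
    speeds.append(np.sum(wdot) * dx)   # instantaneous front-speed estimate
    c = c + dt * (D * lap + wdot)

print(f"late-time flame speed ~ {np.mean(speeds[-5000:]):.3f} (code units)")
```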
## 4 Discussion of the results
The results of three simulations with varying laminar flame speeds and Prandtl numbers are illustrated in figures (1) – (6) (see figure captions for the model parameters). Note that $`S_\mathrm{L}/u^{}`$, with the root-mean-square velocity fluctuation $`u^{}`$ dominated in the simulation by eddies on the scale of the laminar flame thickness, corresponds roughly to the parameter $`S_\mathrm{L}/u(\delta )`$ employed in Section (2) to describe the validity of the flamelet assumption based on dimensional analysis. Therefore, one may expect noticeable deviations from locally laminar flame propagation for $`S_\mathrm{L}/u^{}<1`$. Conversely, the dimensional argument predicts that changes of the total burning rate are exclusively due to the growth of the flame surface area by turbulent wrinkling as long as $`S_\mathrm{L}/u^{}\gtrsim 1`$.
We define the turbulent flame speed in terms of the volume integral of the source term, $`S_\mathrm{T}\equiv \mathrm{\Lambda }^{-2}\int _V\dot{w}\mathrm{d}^3\lambda `$, where $`\mathrm{\Lambda }`$ is the grid length. The wrinkled flame surface area, $`A_\mathrm{T}`$, is measured by triangular discretization of the $`c=0.5`$ isosurface. For the three cases with $`S_\mathrm{L}/u^{}=11.5`$, $`1.15`$, and $`0.95`$ we find $`S_\mathrm{T}/S_\mathrm{L}`$ ($`A_\mathrm{T}/\mathrm{\Lambda }^2`$) of 1.008 (1.008), 1.31 (1.27), and 1.51 (1.56), respectively. Hence, to within 5 % accuracy the ratio of turbulent and laminar flame speeds is identical to the increase of the flame surface area with respect to the laminar surface, implying that the local flame speed is, on average, equal to $`S_\mathrm{L}`$ in all cases.
In conclusion, we confirmed – within the limitations of our simplified flame description – that the local propagation speed of turbulent low-$`Pr`$ premixed flames remains equal to $`S_\mathrm{L}`$ if $`S_\mathrm{L}\gtrsim v(\delta )`$, even if eddies exist on scales smaller than the flame thickness. Our results show no indication of a breakdown of the flamelet burning regime in the parameter range $`S_\mathrm{L}/v(\delta )\gtrsim 0.95`$ that was studied. Lower values of $`S_\mathrm{L}/v(\delta )`$ were unattainable because large scale flame wrinkling forced regions with nonvanishing $`\dot{w}`$ over the streamwise grid boundaries, violating the requirement of periodicity of the non-linear component of the progress variable. This outcome suggests that the conventional definition of the flamelet regime (Peters 1984) which is based on a length-scale argument alone should be generalized to a time-scale dependent definition in the sense of Niemeyer & Kerstein (1997).
In the framework of supernova modeling, this result helps to formulate a subgrid-scale model for the turbulent thermonuclear flame brush in large-scale hydrodynamical simulations. Specifically, it is possible to estimate $`S_\mathrm{L}/v(\delta )`$ from the filtered density and velocity strain, using an assumed spectrum for the turbulent velocity cascade. If $`S_\mathrm{L}/v(\delta )\gtrsim 1`$, a subgrid-scale model based purely on the surface increase by turbulent wrinkling can be employed (Niemeyer & Hillebrandt 1995). In practice, this is possible for densities above $`10^7`$ g cm<sup>-3</sup>, where most of the explosion energy is released. For lower densities (in the late stages of the explosion), relevant for the nucleosynthesis of intermediate mass elements and a possible deflagration-detonation-transition (Niemeyer & Woosley 1997), a more detailed model accounting for small-scale turbulence flame interactions needs to be developed.
All the currently discussed scenarios for deflagration-detonation-transitions (DDT) in the late stage of SN Ia explosions require an earlier transition to distributed or well-stirred burning in order to allow pre-conditioning of unburned material. Our results indicate that the flamelet structure of thermonuclear flames is more robust than previously anticipated, hence delaying or even preventing the formation of favorable conditions for DDT during the first expansion phase. A more detailed investigation of this question, extending the parameter range to lower $`S_\mathrm{L}/v(\delta )`$, is underway (Young, Niemeyer & Rosner 1999).
We would like to thank Joel Ferziger, Nigel Smith, and Dan Haworth for interesting discussions. JCN wishes to acknowledge the hospitality of the Center for Turbulence Research where most of this research was carried out, supported in part by the ASCI Center on Astrophysical Thermonuclear Flashes at the University of Chicago (DOE contract no. B34149).
|
no-problem/9905/astro-ph9905195.html
|
ar5iv
|
text
|
# The Activity of the Soft Gamma Repeater SGR 1900+14 in 1998 from Konus-Wind Observations: 1. Short Recurrent Bursts.
## 1 INTRODUCTION
Recurrent short gamma-ray bursts with soft spectra have been known for over 20 years. The first two sources of such bursts were discovered and localized in March 1979 by the Konus experiment on Venera 11 and 12 (Mazets et al., 1981). The extraordinary superintense gamma-ray outburst on March 5, 1979 (Mazets et al., 1979a) was followed by a series of 16 weaker short bursts from the FXP 0526-66 source, which were observed during the next few years (Golenetskii et al., 1984). Also in March 1979, three short soft bursts arriving from the source B1900+14 were detected (Mazets et al., 1979b). In 1983, Prognoz-9 and ICE observed a series of soft recurrent bursts from a third source, 1806-20 (Atteia et al., 1987, Laros et al., 1987). The sources of recurrent soft bursts were given the name soft gamma repeaters, SGRs. Interestingly, a retrospective analysis of Venera 11 and Prognoz-7 data shows that the short gamma-ray burst of 07.01.1979 (Mazets et al., 1981) also belonged to SGR 1806-20 (Atteia et al., 1987). Thus bursts from the first three soft gamma repeaters were detected within a three month period. This is remarkable, because the fourth soft gamma repeater, the SGR 1627-41, was detected and localized only 19 years later, in 1998 (Hurley et al., 1999a, Woods et al., 1999). A fifth SGR has also been observed (Hurley et al., 1997), but it still awaits a good localization.
Important new results came from studies aimed at association of the recurrent bursters with astrophysical objects visible in other wavelengths. The giant narrow initial pulse of the 1979 March 5 event was detected on a dozen different spacecraft. Triangulation yielded a very small source-localization box, about 0.1 square-arcmin, which projected on the outer edge of the N49 supernova remnant in the Large Magellanic Cloud (Cline et al., 1982). Later, ROSAT found a persistent X-ray source in this region (Rothschild et al., 1994).
The association of the SGR 0526-66 with N49 was sometimes questioned because of energy considerations (Mazets and Golenetskii, 1981). For a distance of 55 kpc to N49, the energy release in the March 5 event is $`5\times 10^{44}`$ erg, and in the recurrent bursts, up to $`8\times 10^{42}`$ erg, giving luminosities which exceed the Eddington limit for a neutron star by a factor of $`10^4`$–$`10^6`$ (Mazets and Golenetskii, 1981). However, arguments for large distances to the SGRs and, accordingly, for large energy releases continued to accumulate. Kulkarni and Frail (1993) established an association of the SGR 1806-20 with the supernova remnant G10.0-0.3, about 14 kpc distant (Corbel et al., 1997). Murakami et al. (1994) used ASCA to localize one of the events from SGR 1806-20 and discovered a soft X-ray source coinciding with its position. Subsequently, the observations of Kouveliotou et al. (1998) from RXTE revealed regular pulsations in the emission of this source with a period $`P=7.47`$ s. A parallel analysis of ASCA archived data for 1992 confirmed this period and permitted determination of its derivative $`\dot{P}=2.6\times 10^{-3}`$ s yr<sup>-1</sup>.
SGR 1900+14 is located close to the G48.2+0.6 supernova remnant and is believed to be associated with it (Kouveliotou et al., 1994). Coinciding with this repeater in position is a soft X-ray source reliably localized from ROSAT (Hurley et al., 1996). ASCA observations of this source made in April 1998 revealed a 5.16-s periodicity of the emission (Hurley et al., 1999b). When the SGR 1900+14 resumed its activity in June and August 1998 (Hurley et al., 1999c), RXTE observations confirmed this period and established the spin-down rate of the neutron star $`\dot{P}=3.5\times 10^{-3}`$ s yr<sup>-1</sup> (Kouveliotou et al., 1999).
RXTE observations also yielded some evidence for a possible 6.7-s periodicity of the new SGR 1627-41 (Dieters et al., 1998). Thus the known soft gamma repeaters exhibit an association with young ($`<10^4`$ years) supernova remnants, a periodicity of 5-8 s, and a secular spin-down by a few ms per year. Thompson and Duncan (1995, 1996) suggested that the soft gamma repeaters are young neutron stars with superstrong, up to $`10^{15}`$ G, magnetic fields and high spin-down rates because of high losses due to magnetic dipole radiation – the so-called magnetars. The fractures produced by magnetic stress in a neutron star’s crust give rise to the release and transformation of magnetic energy into the energy carried away by particles and hard photons.
In this paper, we present the results of observations of recurrent bursts from SGR 1900+14 made in 1998 with a gamma-ray burst spectrometer onboard the Wind spacecraft (Aptekar et al., 1995).
## 2 OBSERVATIONS
Until 1998, recurrent bursts from the SGR 1900+14 were observed during two intervals: three events in 1979 (Mazets et al., 1979b) and another three in 1992 (Kouveliotou et al., 1993). SGR 1900+14 resumed burst emission in May 1998 (Hurley et al., 1998; Hurley et al., 1999d), which continued up to January 1999.
This time the frequency of recurrent bursts was found to be high and very irregular. Figure 1 shows the distribution within this time interval of the recurrent bursts with measured fluences $`S`$. Three subintervals with a distinctly higher source activity stand out. On August 27, 1998, SGR 1900+14 emitted a superintense outburst with a complex and spectacular time structure (Cline et al., 1998; Hurley et al., 1999d). This event is not shown in Fig. 1 because it will be considered in a separate paper (Mazets et al., 1999a). On May 30, 1998, an intense train of recurrent bursts occurred. Several tens of bursts varying in duration from 0.05 to 0.7 s arrived during as short a time as three minutes. The intervals between the bursts decreased at times to such an extent as to become comparable to the duration of the bursts themselves, and the radiation intensity between them did not drop down to the background level. Figure 2 displays the most crowded part of the train. In Fig. 1 it is represented by the feature with total flux $`S=5.8\times 10^{-5}`$ erg cm<sup>-2</sup>. The high burst occurrence frequency may cause losses in the information obtained. Readout of the information on a trigger event takes up about one hour. If other events arrive within this interval, only a very limited amount of the relevant information will be recorded in the housekeeping channel. Such cases did occur, and quite possibly they comprised two or three weaker trains of recurrent bursts, in particular, on September 1, 1998, 61232–61585 s UT, with a total flux $`S\sim 2\times 10^{-5}`$ erg cm<sup>-2</sup>, and on October 24, 1998, 4921–5348 s UT, with $`S\sim 10^{-5}`$ erg cm<sup>-2</sup>.
All recurrent bursts are short events with a fairly complex time structure and soft energy spectra, which, when fitted with a $`dN/dE\propto E^{-1}\mathrm{exp}(-E/kT)`$ relation, are characterized by $`kT\sim 20`$–30 keV. Figure 3 presents time histories of several events recorded before August 27. Their energy spectra are very similar. Figure 4 shows the spectrum of a burst on June 7. After the August 27 event, the second interval of increased activity began (see Fig. 1), but most of the bursts did not change their characteristics. Shown in Fig. 5 are time structures of a few events, and Fig. 6 displays a typical energy spectrum. The only pronounced difference in the period after August 27 was the onset of several long, up to 4 s, bursts with a correspondingly high total energy flux, up to $`5\times 10^{-5}`$ erg cm<sup>-2</sup>. Figure 7 presents time histories of two such bursts, with a typical energy spectrum shown in Fig. 8. Such long recurrent bursts were observed to be produced by other SGRs as well (Golenetskii et al., 1984).
As can be seen from these data, recurrent bursts exhibit a complex time structure, which cannot be described by a model of a single pulse with standard characteristic rise and decay times. The burst intensity rises in 15-20 ms. By contrast, long bursts take a substantially longer time to rise, up to $`\sim `$150 ms (Fig. 7). In many cases the main rise is preceded by an interval with a weaker growth in intensity or even by a single weak pulse (Figs. 3 and 7). The intensity decay extends practically through the whole event. At the end of a burst one frequently observes a strong steepening of the falloff (Figs. 3 and 7). Large-scale details in the time structure may indicate that the bursts consist of several structurally simpler but closely related events (Figs. 3, 5, and 7).
The value of $`kT`$ for the photon spectra of different bursts lies in the 18-30 keV region. There is practically no spectral evolution within any one event, which is readily seen from Fig. 7. The maximum fluxes in a burst vary from $`2\times 10^{-6}`$ to $`5\times 10^{-5}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. However, for 80% of the events they lie within the narrow range (1–3)$`\times 10^{-5}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. Fluences vary within broader ranges, from $`10^{-7}`$ to $`5\times 10^{-5}`$ erg cm<sup>-2</sup>. This implies that the energy release is partially determined by the duration of the emission process in the source. Figure 9 presents a fluence vs duration distribution of bursts ($`\mathrm{lg}S`$ vs $`\mathrm{lg}\mathrm{\Delta }T_{0.25}`$). The measure of burst duration $`\mathrm{\Delta }T_{0.25}`$ is the time interval within which the radiation intensity is in excess of the 25% level of the maximum flux $`F_{\mathrm{max}}`$. The graph demonstrates a strong correlation between these quantities ($`\rho =0.8`$).
As follows from the data presented here, for a 10 kpc distance of the SGR 1900+14 source (Case and Bhattacharya, 1998; Vasisht et al., 1994), and assuming the emission to be isotropic, the maximum source luminosity in recurrent bursts lies within the range (1–4)$`\times 10^{41}`$ erg s<sup>-1</sup>, and the energy liberated in a recurrent burst is $`2\times 10^{39}`$ to $`8\times 10^{41}`$ erg.
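The arithmetic behind these luminosities and energies is simply $`4\pi d^2`$ times the measured fluxes and fluences; the lines below (illustrative only, not from the original analysis) reproduce it for the adopted 10 kpc distance.

```python
# Isotropic luminosity and energy for an assumed distance of 10 kpc.
import math

d_cm = 10.0 * 3.086e21                  # 10 kpc in cm
area = 4.0 * math.pi * d_cm**2          # ~1.2e46 cm^2

for flux in (1e-5, 3e-5):               # typical peak fluxes (erg cm^-2 s^-1)
    print(f"F = {flux:.0e} erg/cm^2/s -> L ~ {flux * area:.1e} erg/s")
for fluence in (1e-7, 5e-5):            # fluence range (erg cm^-2)
    print(f"S = {fluence:.0e} erg/cm^2  -> E ~ {fluence * area:.1e} erg")
```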
## 3 CONCLUSION
The observations of this period of high activity of SGR 1900+14 have substantially broadened our ideas concerning such sources. The giant outburst on August 27 has come as a real surprise. Among other remarkable events is the intense train of bursts on May 30, 1998, when the frequency of recurrent bursts increased within a few minutes by at least a factor of $`10^4`$ compared to that usually observed during the reactivation periods of known SGRs. On the other hand, the characteristics of the bursts themselves, their time histories, spectra, and intensity do not suggest radical differences from those of recurrent events observed in other soft gamma repeaters (Kouveliotou et al., 1987; Frederiks et al., 1997; Mazets et al., 1999b), which argues for the fundamental similarity between the emission processes occurring in different sources. It appears significant that the giant outburst with an energy release thousands of times larger than that typical of a single recurrent event did not noticeably affect the behavior and individual characteristics of recurrent bursts.
Partial support of the RSA and RFBR (Grant 99-02-17031) is gratefully acknowledged.
## FIGURE CAPTIONS
# Physical Conditions in Regions of Star Formation
## 1 INTRODUCTION
Long after their parent spiral galaxies have formed, stars continue to form by repeated condensation from the interstellar medium. In the process, parts of the interstellar medium pass through a cool, relatively dense phase with a great deal of complexity– molecular clouds. While both the diffuse interstellar medium and stars can be supported by thermal pressure, most molecular clouds cannot be thermally supported (Goldreich & Kwan 1974). Simple consideration would suggest that molecular clouds would be a very transient phase in the conversion of diffuse gas to stars, but in fact they persist much longer than expected. During this extended life, they produce an intricate physical and chemical system that provides the substrate for the formation of planets and life, as well as stars. Comparison of cloud masses to the total mass of stars that they produce indicates that most of the matter in a molecular cloud is sterile; stars form only in a small fraction of the mass of the cloud (Leisawitz et al 1989).
The physical conditions in the bulk of a molecular cloud provide the key to understanding why molecular clouds form an essentially metastable state along the path from diffuse gas to stars. Most of the mass of most molecular clouds in our Galaxy is contained in regions of modest extinction, allowing photons from the interstellar radiation field to maintain sufficient ionization for magnetic fields to resist collapse (McKee 1989); most of the molecular gas is in fact in a photon-dominated region, or PDR (Hollenbach & Tielens 1997). In addition, most molecular gas has supersonic turbulence (Zuckerman & Evans 1974). The persistence of such turbulence over the inferred lifetimes of clouds in the face of rapid damping mechanisms (Goldreich & Kwan 1974) suggests constant replenishment, most likely in a process of self-regulated star formation (Norman & Silk 1980, Bertoldi & McKee 1996), since star formation is accompanied by energetic outflows, jets, and winds (Bachiller 1996).
For this review, the focus will be on the physical conditions in regions that are forming stars and likely precursors of such regions. While gravitational collapse explains the formation of stars, the details of how it happens depend critically on the physical conditions in the star-forming region. The details determine the mass of the resulting star and the amount of mass that winds up in a disk around the star, in turn controlling the possibilities for planet formation. The physical conditions also control the chemical conditions. With the recognition that much interstellar chemistry is preserved in comets (Crovisier 1999, van Dishoeck & Blake 1998), and that interstellar chemistry may also affect planet formation and the possibilities for life (e.g. Pendleton 1997, Pendleton & Tielens 1997, Chyba & Sagan 1992), the knowledge of physical conditions in star-forming regions has taken on additional significance.
Thinking more globally, different physical conditions in different regions determine whether a few, lightly clustered stars form (the isolated mode) or a tight grouping of stars form (the clustered mode) (Lada 1992; Lada et al 1993). The star formation rates per unit mass of molecular gas vary by a factor $`>10^2`$ in clouds within our own Galaxy (Evans 1991, Mead et al 1990), and starburst galaxies achieve even higher rates than are seen anywhere in our Galaxy (e.g. Sanders et al 1991). Ultimately, a description of galaxy formation must incorporate an understanding of how star formation depends on physical conditions, gleaned from careful study of our Galaxy and nearby galaxies.
Within the space limitations of this review, it is not possible to address all the issues raised by the preceding overview. I will generally avoid topics that have been recently reviewed, such as circumstellar disks (Sargent 1996, Papaloizou & Lin 1995, Lin & Papaloizou 1996, Bodenheimer 1995), as well as bipolar outflows (Bachiller 1996), and dense PDRs (Hollenbach & Tielens 1997). I will also discuss the sterile parts of molecular clouds only as relevant to the process that leads some parts of the cloud to be suitable for star formation. While the chemistry and physics of star-forming regions are coupled, chemistry has been recently reviewed (van Dishoeck & Blake 1998). Astronomical masers are being concurrently reviewed (Menten 1999); as with HII regions, they will be discussed only as signposts for regions of star formation.
I will focus on star formation in our Galaxy. Nearby regions of isolated, low-mass star formation will receive considerable attention (§4) because we have made the most progress in studying them. Their conditions will be compared to those in regions forming clusters of stars, including massive stars (§5). These regions of clustered star formation are poorly understood, but they probably form the majority of stars in our Galaxy (Elmegreen 1985), and they are the regions relevant for comparisons to other galaxies.
Even with such a restricted topic, the literature is vast. I make no attempt at completeness in referencing. On relatively non-controversial topics, I will tend to give an early reference and a recent review; for more unsettled topics, more references, with different points of view, will be given. Recent or upcoming publications with significant overlap include Hartmann (1998), Lada & Kylafis (1999), and Mannings et al (2000).
## 2 PHYSICAL CONDITIONS
The motivation for studying physical conditions can be found in a few simple theoretical considerations. Our goal is to know when and how molecular gas collapses to form stars. In the simplest situation, a cloud with only thermal support, collapse should occur if the mass exceeds the Jeans (1928) mass,
$$M_J=\left(\frac{\pi kT_K}{\mu m_HG}\right)^{1.5}\rho ^{-0.5}=18\,\text{M}_{\odot }\,T_K^{1.5}n^{-0.5},$$
(1)
where $`T_K`$ is the kinetic temperature (kelvins), $`\rho `$ is the mass density (gm cm<sup>-3</sup>), and $`n`$ is the total particle density (cm<sup>-3</sup>). In a molecular cloud, H nuclei are almost exclusively in H<sub>2</sub> molecules, and $`n\simeq n(\text{H}_2)+n(\mathrm{He})`$. Then $`\rho =\mu _nm_Hn`$, where $`m_H`$ is the mass of a hydrogen atom and $`\mu _n`$ is the mean mass per particle (2.29 in a fully molecular cloud with 25% by mass helium). Discrepancies between coefficients in the equations presented here and those in other references usually are traceable to a different definition of $`n`$.
$$t_{ff}=\left(\frac{3\pi }{32G\rho }\right)^{0.5}=3.4\times 10^7n^{-0.5}\,\mathrm{years}.$$
(2)
If $`T_K=10`$ K and $`n\approx 50`$ cm<sup>-3</sup>, typical conditions in the sterile regions (e.g. Blitz 1993), $`M_J\approx 80\,\text{M}_{\odot }`$, and $`t_{ff}\approx 5\times 10^6`$ years. Our Galaxy contains about 1–3$`\times 10^9`$ M<sub>⊙</sub> of molecular gas (Bronfman et al 1988, Clemens et al 1988, Combes 1991). The majority of this gas is probably contained in clouds with $`M>10^4`$ M<sub>⊙</sub> (Elmegreen 1985). It would be highly unstable on these grounds, and free-fall collapse would lead to a star formation rate, $`\dot{M}_{\ast }\approx 200`$ M<sub>⊙</sub> yr<sup>-1</sup>, far in excess of the recent Galactic average of 3 M<sub>⊙</sub> yr<sup>-1</sup> (Scalo 1986). This argument, first made by Zuckerman & Palmer (1974), shows that most clouds cannot be collapsing at free fall (see also Zuckerman & Evans 1974). Together with evidence of cloud lifetimes of about $`4\times 10^7`$ yr (Bash et al 1977; Leisawitz et al 1989), this discrepancy motivates an examination of other support mechanisms.
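The numbers in this argument are straightforward to reproduce. The following minimal Python sketch evaluates equations 1 and 2 in cgs units, using the mean particle mass adopted above; it is an illustration of the scalings, not part of any published analysis:

```python
import math

# Physical constants (cgs)
G    = 6.674e-8     # gravitational constant [cm^3 g^-1 s^-2]
K_B  = 1.381e-16    # Boltzmann constant [erg K^-1]
M_H  = 1.674e-24    # hydrogen atom mass [g]
MSUN = 1.989e33     # solar mass [g]
YR   = 3.156e7      # seconds per year
MU   = 2.29         # mean mass per particle (fully molecular, 25% He by mass)

def jeans_mass(T_K, n):
    """Jeans mass in solar masses for temperature T_K [K] and density n [cm^-3] (Eq. 1)."""
    rho = MU * M_H * n
    return (math.pi * K_B * T_K / (MU * M_H * G))**1.5 * rho**-0.5 / MSUN

def free_fall_time(n):
    """Free-fall time in years for total particle density n [cm^-3] (Eq. 2)."""
    rho = MU * M_H * n
    return math.sqrt(3.0 * math.pi / (32.0 * G * rho)) / YR

T_K, n = 10.0, 50.0
print(f"M_J  ~ {jeans_mass(T_K, n):.0f} Msun")                 # ~80 Msun
print(f"t_ff ~ {free_fall_time(n):.1e} yr")                    # ~5e6 yr

# Zuckerman-Palmer argument: free-fall collapse of ~1e9 Msun of molecular gas
print(f"implied SFR ~ {1e9 / free_fall_time(n):.0f} Msun/yr")  # ~200 Msun/yr
```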
Two possibilities have been considered, magnetic fields and turbulence. Calculations of the stability of magnetized clouds (Mestel & Spitzer 1956, Mestel 1965) led to the concept of a magnetic critical mass ($`M_B`$). For highly flattened clouds,
$$M_B=(2\pi )^{-1}G^{-0.5}\mathrm{\Phi },$$
(3)
(Li & Shu 1996), where $`\mathrm{\Phi }`$ is the magnetic flux threading the cloud,
$$\mathrm{\Phi }\equiv \int B\,da.$$
(4)
Numerical calculations (Mouschovias & Spitzer 1976) indicate a similar coefficient (0.13). If turbulence can be thought of as causing pressure, it may be able to stabilize clouds on large scales (e.g. Bonazzola et al 1987). It is not at all clear that turbulence can be treated so simply. In both cases, the cloud can only be metastable. Gas can move along field lines and ambipolar diffusion will allow neutral gas to move across field lines with a timescale of (McKee et al 1993)
$$t_{AD}=\frac{3}{4\pi G\rho \tau _{ni}}\approx 7.3\times 10^{13}x_e\,\mathrm{years},$$
(5)
where $`\tau _{ni}`$ is the ion-neutral collision time. The ionization fraction ($`x_e`$) depends on ionization by photons and cosmic rays, balanced by recombination. It thus depends on the abundances of other species ($`X(x)\equiv n(x)/n`$).
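To see why $`x_e`$ is the controlling quantity, it helps to compare equation 5 with the free-fall time of equation 2. A short, self-contained Python sketch using only the coefficients quoted in those equations:

```python
def ambipolar_diffusion_time(x_e):
    """Ambipolar diffusion timescale in years, t_AD ~ 7.3e13 * x_e (Eq. 5)."""
    return 7.3e13 * x_e

def free_fall_time(n):
    """Free-fall time in years, t_ff ~ 3.4e7 * n**-0.5 (Eq. 2)."""
    return 3.4e7 * n**-0.5

for x_e in (1e-4, 1e-7, 1e-8):
    ratio = ambipolar_diffusion_time(x_e) / free_fall_time(1e4)
    print(f"x_e = {x_e:.0e}: t_AD ~ {ambipolar_diffusion_time(x_e):.1e} yr, "
          f"~{ratio:.0f} free-fall times at n = 1e4 cm^-3")
```

In round numbers, the ionization fractions inferred for shielded cores (§3.4) delay collapse by a few to tens of free-fall times, while near cloud edges, where photons keep $`x_e`$ high, the delay exceeds plausible cloud lifetimes.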
These two suggested mechanisms of cloud support (magnetic and turbulent) are not entirely compatible because turbulence should tangle the magnetic field (compare the reviews by Mouschovias 1999 and McKee 1999). A happy marriage between magnetic fields and turbulence was long hoped for; Arons & Max (1975) suggested that magnetic fields would slow the decay of turbulence if the turbulence was sub-Alfvénic. Simulations of MHD turbulence in systems with high degrees of symmetry supported this suggestion and indicated that the pressure from magnetic waves could stabilize clouds (Gammie & Ostriker 1996). However, more recent 3-D simulations indicate that MHD turbulence decays rapidly and that replenishment is still needed (Mac Low et al 1998, Stone et al 1999). The usual suggestion is that outflows generate turbulence, but Zweibel (1998) has suggested that an instability induced by ambipolar diffusion may convert magnetic energy into turbulence. Finally, the issue of supporting clouds assumes a certain stability and cloud integrity that may be misleading in a dynamic interstellar medium (e.g. Ballesteros-Paredes et al 1999). For a current review of this field, see Vázquez-Semadeni et al (2000).
With this brief and simplistic review of the issues of cloud stability and evolution, we have motivated the study of the basic physical conditions:
$$T_K,n,\vec{v},\vec{B},X.$$
(6)
All these are local variables; in principle, they can have different values at each point ($`\vec{r}`$) in space and can vary with time ($`t`$). In practice, we usually can measure only one component of vector quantities, integrated through the cloud. For example, we measure only line-of-sight velocities, usually characterized by the linewidth ($`\mathrm{\Delta }v`$) or higher moments, and the line-of-sight magnetic field ($`B_z`$) through the Zeeman effect, or the projected direction (but not the strength) of the field in the plane of the sky ($`B_{\perp }`$) by polarization studies. In addition, our observations always average over finite regions, so we attempt to simplify the dependence on $`\vec{r}`$ by assumptions or models of cloud structure. In §3, I will describe the methods used to probe these quantities and some overall results. Abundances have been reviewed recently (van Dishoeck & Blake 1998), so only relevant results will be mentioned, most notably the ionization fraction, $`x_e`$.
In addition to the local variables, quantities that explicitly integrate over one or more dimensions are often measured. Foremost is the column density,
$$N\equiv \int n\,dl.$$
(7)
The extinction or optical depth of dust at some wavelength is a common surrogate measure of $`N`$. If the column density is integrated over an area, one measure of the cloud mass within that area is obtained:
$$M_N\equiv \mu _nm_H\int N\,da.$$
(8)
Another commonly used measure of mass is obtained by simplification of the virial theorem. If external pressure and magnetic fields are ignored,
$$M_V=C_vG^{-1}R\mathrm{\Delta }v^2=210\,\text{M}_{\odot }\,C_vR(\mathrm{pc})(\mathrm{\Delta }v(\text{km s}^{-1}))^2,$$
(9)
where $`R`$ is the radius of the region, $`\mathrm{\Delta }v`$ is the FWHM linewidth, and the constant ($`C_v`$) depends on geometry and cloud structure, but is of order unity (e.g. McKee & Zweibel 1992, Bertoldi & McKee 1992). A third mass estimate can be obtained by integrating the density over the volume,
$$M_n\equiv \mu _nm_H\int n\,dV.$$
(10)
$`M_n`$ is commonly used to estimate $`f_v`$, the volume filling factor of gas at density $`n`$, by dividing $`M_n`$ by another mass estimate, typically $`M_V`$. Of the three methods of mass determination, the virial mass is the least sensitive to uncertainties in distance and size, but care must be taken to exclude unbound motions, such as outflows.
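As a concrete illustration of how these estimates compare, the sketch below evaluates $`M_V`$ (equation 9) and $`M_N`$ for a hypothetical core; the radius, linewidth, and mean column density plugged in are assumed round numbers, chosen only to show the arithmetic:

```python
import math

MSUN, M_H, MU, PC = 1.989e33, 1.674e-24, 2.29, 3.086e18   # cgs constants

def virial_mass(R_pc, dv_kms, C_v=1.0):
    """Virial mass in solar masses from radius [pc] and FWHM linewidth [km/s] (Eq. 9)."""
    return 210.0 * C_v * R_pc * dv_kms**2

def column_density_mass(N_mean, R_pc):
    """Mass in solar masses from a mean particle column density [cm^-2]
    over a circular area of radius R [pc] (cf. Eq. 8)."""
    area = math.pi * (R_pc * PC)**2
    return MU * M_H * N_mean * area / MSUN

# Hypothetical dense core: R = 0.1 pc, FWHM linewidth = 0.3 km/s, mean N = 1e22 cm^-2
print(f"M_V ~ {virial_mass(0.1, 0.3):.1f} Msun")
print(f"M_N ~ {column_density_mass(1e22, 0.1):.1f} Msun")
```

Comparing the two in this way is a common first check on whether a structure is roughly self-gravitating, subject to the uncertainties in distance, geometry, and unbound motions noted above.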
Parallel to the physical conditions in the gas, the dust can be characterized by a set of conditions:
$$T_D,n_D,\kappa (\nu ),$$
(11)
where $`T_D`$ is the dust temperature, $`n_D`$ is the density of dust grains, and $`\kappa (\nu )`$ is the opacity at a given frequency, $`\nu `$. If we look in more detail, grains have a range of sizes (Mathis et al 1977) and compositions. For smaller grains, $`T_D`$ is a function of grain size. Thus, we would have to characterize the temperature distribution as a function of size, the composition of grains, both core and mantle, and many more optical constants to capture the full range of grain properties. For our purposes, $`T_D`$ and $`\kappa (\nu )`$ are the most important properties, because $`T_D`$ affects gas energetics and $`T_D`$ and $`\kappa (\nu )`$ control the observed continuum emission of molecular clouds. The detailed nature of the dust grains may come into play in several situations, but the primary observational manifestation of the dust is its ability to absorb and emit radiation. For this review, a host of details can be ignored. The optical depth is set by
$$\tau _D(\nu )=\kappa (\nu )N,$$
(12)
where it is convenient to define $`\kappa (\nu )`$ so that $`N`$ is the gas column density rather than the dust column density. Away from resonances, the opacity is usually approximated by $`\kappa (\nu )\propto \nu ^\beta `$.
## 3 PROBES OF PHYSICAL CONDITIONS
The most fundamental fact about molecular clouds is that most of their contents are invisible. Neither the H<sub>2</sub> nor the He in the bulk of the clouds are excited sufficiently to emit. While fluorescent emission from H<sub>2</sub> can be mapped over the face of clouds (Luhman & Jaffe 1996), the ultraviolet radiation needed to excite this emission does not penetrate the bulk of the cloud. In shocked regions, H<sub>2</sub> emits rovibrational lines that are useful probes of $`T_K`$ and $`\stackrel{}{v}(\stackrel{}{r})`$ (e.g. Draine & McKee 1993). Absorption by H<sub>2</sub> of background stars is also difficult: the dust in molecular clouds obnubilates the ultraviolet that would reveal electronic transitions; the rotational transitions are so weak that only huge $`N`$ would produce absorption, and the dust again obnubilates background sources; only the vibrational transitions in the near-infrared have been seen in absorption in only a few molecular clouds (Lacy et al 1994). Gamma rays resulting from cosmic ray interactions with atomic nuclei do probe all the material in molecular clouds (Bloemen 1989, Strong et al 1994). So far, gamma-ray studies have suffered from low spatial resolution and uncertainties in the cosmic ray flux; they have been used mostly to check consistency with other tracers on large scales.
In the following subsections, I will discuss probes of different physical quantities, including some general results, concluding with a discussion of the observational and analytical tools. Genzel (1992) has presented a detailed discussion of probes of physical conditions.
### 3.1 Tracers of Column Density, Size, and Mass
Given the reticence of the bulk of the H<sub>2</sub>, essentially all probes of physical conditions rely on trace constituents, such as dust particles and molecules other than H<sub>2</sub>. Dust particles (Mathis 1990, Pendleton & Tielens 1997) attenuate light at short wavelengths (ultraviolet to near-infrared) and emit at longer wavelengths (far-infrared to millimeter). Assuming that the ratio of dust extinction at a fixed wavelength to gas column density is constant, one can use extinction to map $`N`$ in molecular clouds, and early work at visible wavelengths revealed locations and sizes of many molecular clouds before they were known to contain molecules (Barnard 1927, Bok & Reilly 1947, Lynds 1962). There have been more recent surveys for small clouds (Clemens & Barvainis 1988) and for clouds at high latitude (Blitz et al 1984). More recently, near-infrared surveys have been used to probe much more deeply; in particular, the $`H-K`$ color excess can trace $`N`$ to an equivalent visual extinction, $`A_V\approx 30`$ mag (Lada et al 1994, Alves et al 1998a). This method provides many pencil beam measurements through a cloud toward background stars. The very high resolution, but very undersampled, data require careful analysis but can reveal information on mass, large-scale structure in $`N`$ and unresolved structure (Alves et al 1998a). Padoan et al (1997) interpret the data of Lada et al (1994) in terms of a lognormal distribution of density, but Lada et al (1999) show that a cylinder with a density gradient, $`n(r)\propto r^{-2}`$, also matches the observations.
Continuum emission from dust at long wavelengths is complementary to absorption studies (e.g. Chandler & Sargent 1997). Because the dust opacity decreases with increasing wavelength ($`\kappa (\nu )\propto \nu ^\beta `$, with $`\beta \approx 1`$–2), emission at long wavelengths can trace large column densities and provide independent mass estimates (Hildebrand 1983). The data can be fully sampled and have reasonably high resolution. The dust emission depends on the dust temperature ($`T_D`$), linearly if the observations are firmly in the Rayleigh-Jeans limit, but exponentially on the Wien side of the blackbody curve. Observations on both sides of the emission peak can constrain $`T_D`$. Opacities have been calculated for a variety of scenarios including grain mantle formation and collisional concrescence (e.g. Ossenkopf & Henning 1994). For grain sizes much less than the wavelength, $`\beta \approx 2`$ is expected from simple grain models, but observations of dense regions often indicate lower values. By observing at a sufficiently long wavelength, one can trace $`N`$ to very high values. Recent results indicate that $`\tau _D(1.2\mathrm{mm})=1`$ only for $`A_V\approx 4\times 10^4`$ mag (Kramer et al 1998a) in the less dense regions of molecular clouds. In dense regions, there is considerable evidence for increased grain opacity at long wavelengths (Zhou et al 1990, van der Tak et al 1999), suggesting grain growth through collisional concrescence in addition to the formation of icy mantles. Further growth of grains in disks is also likely (Chandler & Sargent 1997).
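A minimal sketch of the standard optically thin mass estimate from a (sub)millimeter flux density, $`M=S_\nu d^2/(\kappa _\nu B_\nu (T_D))`$ (cf. Hildebrand 1983); the opacity per gram of gas adopted here (0.01 cm<sup>2</sup> g<sup>-1</sup> near 1.3 mm) and the input flux, distance, and temperature are assumed, illustrative values only:

```python
import math

MSUN, PC = 1.989e33, 3.086e18                  # cgs
H_PL, C_L, K_B = 6.626e-27, 2.998e10, 1.381e-16

def planck_nu(nu, T):
    """Planck function B_nu [erg s^-1 cm^-2 Hz^-1 sr^-1]."""
    return 2.0 * H_PL * nu**3 / C_L**2 / (math.exp(H_PL * nu / (K_B * T)) - 1.0)

def dust_continuum_mass(S_jy, d_pc, T_D, nu=230e9, kappa_gas=0.01):
    """
    Optically thin gas+dust mass in solar masses, M = S_nu d^2 / (kappa_nu B_nu(T_D)).
    kappa_gas is the opacity per gram of gas at frequency nu (assumed value here).
    """
    S_cgs = S_jy * 1e-23                       # Jy -> erg s^-1 cm^-2 Hz^-1
    d_cm = d_pc * PC
    return S_cgs * d_cm**2 / (kappa_gas * planck_nu(nu, T_D)) / MSUN

# e.g. a 1 Jy source at 1.3 mm (230 GHz), d = 140 pc, T_D = 20 K
print(f"M ~ {dust_continuum_mass(1.0, 140.0, 20.0):.2f} Msun")
```

The exponential dependence of $`B_\nu (T_D)`$ on the Wien side is what makes such estimates sensitive to $`T_D`$ at shorter wavelengths, as noted above.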
The other choice is to use a trace constituent of the gas, typically molecules that emit in their rotational transitions at millimeter or submillimeter wavelengths. By using the appropriate transitions of the appropriate molecule, one can tune the probe to study the physical quantity of interest and the target region along the line of sight. This technique was first used with OH (Barrett et al 1964), and it has been pursued intensively in the 30 years since the discovery of polyatomic interstellar molecules (Cheung et al 1968; van Dishoeck & Blake 1998).
The most abundant molecule after H<sub>2</sub> is carbon monoxide; the main isotopomer (<sup>12</sup>C<sup>16</sup>O) is usually written simply as CO. It is the most common tracer of molecular gas. On the largest scales, CO correlates well with the gamma-ray data (Strong et al 1994), suggesting that the overall mass of a cloud can be measured even when the line is quite opaque. This stroke of good fortune can be understood if the clouds are clumpy and macroturbulent, with little radiative coupling between clumps (Wolfire et al 1993); in this case, the CO luminosity is proportional to the number of clumps, hence total mass. Most of the mass estimates for the larger clouds and for the total molecular mass in the Galaxy and in other galaxies are in fact based on CO. On smaller scales, and in regions of high column density, CO fails to trace column density, and progressively rarer isotopomers are used to trace progressively higher values of $`N`$. Dickman (1978) established a strong correlation of visual extinction $`A_V`$ with <sup>13</sup>CO emission for $`1.5\le A_V\le 5`$. Subsequent studies have used C<sup>18</sup>O and C<sup>17</sup>O to trace still higher $`N`$ (Frerking et al 1982). These rarer isotopomers will not trace the outer parts of the cloud, where photodissociation affects them more strongly than the common ones, but we are concerned with the more opaque regions in this review.
When comparing $`N`$ measured by dust emission with $`N`$ traced by CO isotopomers, it is important to correct for the fact that emission from low-$`J`$ transitions of optically thin isotopomers of CO decreases with $`T_K`$, while dust emission increases with $`T_D`$ (Jaffe et al 1984). Observations of many transitions can avoid this problem but are rarely done. To convert $`N(\mathrm{CO})`$ to $`N`$ requires knowledge of the abundance, $`X(\mathrm{CO})`$. The only direct measure gave $`X(\mathrm{CO})=2.7\times 10^{-4}`$ (Lacy et al 1994), three times greater than inferred from indirect means (e.g. Frerking et al 1982). Clearly, this area needs increased attention, but at least a factor of three uncertainty must be admitted. Studies of some particularly opaque regions in molecular clouds indicate severe depletion (Kuiper et al 1996, Bergin & Langer 1997), raising the concern that even the rare CO isotopomers may fail to trace $`N`$. Indeed, Alves et al (1999) find that C<sup>18</sup>O fails to trace column density above $`A_V=10`$ in some regions, and Kramer et al (1999) argue that this failure is best explained by depletion of C<sup>18</sup>O.
Sizes of clouds, characterized by either a radius ($`R`$) or diameter ($`l`$), are measured by mapping the cloud in a particular tracer; for non-spherical clouds, these are often the geometric mean of two dimensions, and the aspect ratio ($`a/b`$) characterizes the ratio of long and short axes. The size along the line of sight (depth) can only be constrained by making geometrical assumptions (usually of a spherical or cylindrical cloud). One possible probe of the depth is H<sub>3</sub><sup>+</sup>, which is unusual in having a calculable, constant density in molecular clouds. Thus, a measurement of $`N(\mathrm{H}_3^+)`$ can yield a measure of cloud depth (Geballe & Oka 1996).
With a measure of size and a measure of column density, the mass ($`M_N`$) may be estimated (equation 8); with a size and a linewidth ($`\mathrm{\Delta }v`$), the virial mass ($`M_V`$) can be estimated (equation 9). On the largest scales, the mass is often estimated from integrating the CO emission over the cloud and using an empirical relation between mass and the CO luminosity, $`L(\mathrm{CO})`$. The mass distribution has been estimated for both clouds and clumps within clouds, primarily from CO, <sup>13</sup>CO, or C<sup>18</sup>O, using a variety of techniques to define clumps and estimate masses (Blitz 1993, Kramer et al 1998b, Heyer & Terebey 1998, Heithausen et al 1998). These studies have covered a wide range of masses, with Kramer et al extending the range down to $`M=10^{-4}\,\text{M}_{\odot }`$. The result is fairly well agreed on: $`dN(M)\propto M^{-\alpha }dM`$, with $`1.5\le \alpha \le 1.9`$. Elmegreen & Falgarone (1996) have argued that the mass spectrum is a result of the fractal nature of the interstellar gas, with a fractal dimension, $`D=2.3\pm 0.3`$. There is disagreement over whether clouds are truly fractal or have a preferred scale (Blitz & Williams 1997). The latter authors suggest a scale of 0.25–0.5 pc in Taurus based on <sup>13</sup>CO. On the other hand, Falgarone et al (1998), analyzing an extensive data set, find evidence for continued structure down to 200 AU in gas that is not forming stars. The initial mass function (IMF) of stars is steeper than the cloud mass distribution for $`M_{\ast }>1\,\text{M}_{\odot }`$ but is flatter than the cloud mass function for $`M_{\ast }<1\,\text{M}_{\odot }`$ (e.g. Scalo 1998). Understanding the origin of the differences is a major issue (see Williams et al 2000, Meyer et al 2000).
### 3.2 Probes of Temperature and Density
The abundances of other molecules are so poorly constrained that CO isotopomers and dust are used almost exclusively to constrain $`N`$ and $`M_N`$. Of what use are the over 100 other molecules? While many are of interest only for astrochemistry, some are very useful probes of physical conditions like $`T_K,n,v,B_z,`$ and $`x_e`$.
Density ($`n`$) and gas temperature ($`T_K`$) are both measured by determining the populations in molecular energy levels and comparing the results to calculations of molecular excitation. A useful concept is the excitation temperature ($`T_{\mathrm{ex}}`$) of a pair of levels, defined to be the temperature that gives the actual ratio of populations in those levels, when substituted into the Boltzmann equation. In general, collisions and radiative processes compete to establish level populations; when lines are optically thick, trapping of line photons enhances the effects of collisions. For some levels in some molecules, radiative rates are unusually low, collisions dominate, $`T_{\mathrm{ex}}=T_K`$, and observational determination of these “thermalized” level populations yields $`T_K`$. Un-thermalized level populations depend on both $`n`$ and $`T_K`$; with a knowledge of $`T_K`$, observational determination of these populations yields $`n`$, though trapping usually needs to be accounted for. While molecular excitation probes the local $`n`$ and $`T_K`$ in principle, the observations themselves always involve some average over the finite beam and along the line of sight. Consequently, a model of the cloud is needed to interpret the observations. The simplest model is of course a homogeneous cloud, and most early work adopted this model, either explicitly or implicitly.
Tracers of temperature include CO, with its unusually low dipole moment, and molecules in which transitions between certain levels are forbidden by selection rules. The latter include different $`K`$ ladders of symmetric tops like NH<sub>3</sub>, CH<sub>3</sub>CN, etc. (Ho & Townes 1983, Loren & Mundy 1984). Different $`K_{-1}`$ ladders in H<sub>2</sub>CO also probe $`T_K`$ in dense, warm regions (Mangum & Wootten 1993). A useful feature of CO is that its low-$`J`$ transitions are both opaque and thermalized in most parts of molecular clouds. In this case, observations of a single line provide the temperature, after correction for the cosmic background radiation and departures from the Rayleigh-Jeans approximation (Penzias et al 1972, Evans 1980). Early work on CO (Dickman 1975) and NH<sub>3</sub> (Martin & Barrett 1978) established that $`T_K\approx 10`$ K far from regions of star formation and that sites of massive star formation are marked by elevated $`T_K`$, revealed by peaks in maps of CO (e.g. Blair et al 1975).
The value of $`T_K`$ far from local heating sources can be understood by balancing cosmic ray heating and molecular cooling (Goldsmith & Langer 1978), while elevated values of $`T_K`$ in star forming regions have a more intricate explanation. Stellar photons, even when degraded to the infrared, do not couple well to molecular gas, so the heating goes via the dust. The dust is heated by photons and the gas is heated by collisions with the dust (Goldreich & Kwan 1974); above a density of about $`10^4`$ cm<sup>-3</sup>, $`T_K`$ becomes well coupled to $`T_D`$ (Takahashi et al 1983). Observational comparison of $`T_K`$ to $`T_D`$, determined from far-infrared observations, supports this picture (e.g. Evans et al 1977, Wu & Evans 1989).
In regions where photons in the range of 6 to 13.6 eV impinge directly on molecular material, photoelectrons ejected from dust grains can heat the gas very effectively and $`T_K`$ may exceed $`T_D`$. These PDRs (Hollenbach & Tielens 1997) form the surfaces of all clouds, but the regions affected by these photons are limited by dust extinction to about $`A_V\approx 8`$ mag (McKee 1989). However, the CO lines often do form in the PDR regions, raising the question of why they indicate that $`T_K\approx 10`$ K. Wolfire et al (1993) explain that the optical depth in the lower $`J`$ levels usually observed reaches unity at a place where the $`T_K`$ and $`n`$ combine to produce an excitation temperature ($`T_{\mathrm{ex}}`$) of about 10 K. Thus, the agreement of $`T_K`$ derived from CO with the predictions of energetics calculations for cosmic ray heating may be fortuitous. The $`T_K`$ derived from NH<sub>3</sub> refer to more opaque regions and are more relevant to cosmic ray heating. Finally, in localized regions, shocks can heat the gas to very high $`T_K`$; values of 2000 K are observed in H<sub>2</sub> ro-vibrational emission lines (Beckwith et al 1978). It is clear that characterizing clouds by a single $`T_K`$, which is often done for simplicity, obscures a great deal of complexity.
Density determination requires observations of several transitions that are not in local thermodynamic equilibrium (LTE). Then the ratio of populations, or equivalently $`T_{\mathrm{ex}}`$, can be used to constrain density. A useful concept is the critical density for a transition from level $`j`$ to level $`k`$,
$$n_c(jk)=A_{jk}/\gamma _{jk},$$
(13)
where $`A_{jk}`$ is the Einstein A coefficient and $`n\gamma _{jk}`$ is the collisional deexcitation rate per molecule in level $`j`$. In general, both H<sub>2</sub> and He are effective collision partners, with comparable collision rates, so that excitation techniques measure the total density of collision partners, $`nn(\text{H}\text{2})+n(\mathrm{He})`$. In some regions of high $`x_e`$, collisions with electrons may also be significant. Detection of a particular transition is often taken to imply that $`nn_c(jk)`$, but this statement is too simplistic. Lines can be seen over a wide range of $`n`$, depending on observational sensitivity, the frequency of the line, and the optical depth (e.g. Evans 1989). Observing high frequency transitions, multilevel excitation effects, and trapping all tend to lower the effective density needed to detect a line. Table 1 contains information for some commonly observed lines, including the frequency, energy in K above the effective ground state ($`E_{up}(K)`$), and the critical densities at $`T_K=10`$ K and 100 K. For comparison, the Table also has $`n_{eff}`$, the density needed to produce a line of 1 K, easily observable in most cases. The values of $`n_{eff}`$ were calculated with a Large Velocity Gradient (LVG) code (§3.5) to account for trapping, assuming log$`(N/\mathrm{\Delta }v)=13.5`$ for all species but NH<sub>3</sub>, for which log$`(N/\mathrm{\Delta }v)=15`$ was used. $`N/\mathrm{\Delta }v`$ has units of cm<sup>-2</sup> (km s<sup>-1</sup>)<sup>-1</sup>. These column densities are typical and produce modest optical depths. Note that $`n_{eff}`$ can be as much as a factor of 1000 less than the critical density, especially for high excitation lines and high $`T_K`$. Clearly, the critical density should be used as a guideline only; more sophisticated analysis is necessary to infer densities.
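The definition in equation 13 is easy to evaluate; the sketch below does so for two commonly observed lines, using order-of-magnitude Einstein A coefficients and downward collision rates (illustrative values only; accurate rates should be taken from a molecular database, and, as Table 1 emphasizes, trapping and multilevel effects can push the effective density well below $`n_c`$):

```python
def critical_density(A_jk, gamma_jk):
    """Critical density n_c = A_jk / gamma_jk [cm^-3] (Eq. 13)."""
    return A_jk / gamma_jk

# Order-of-magnitude inputs near 10 K (illustrative only)
lines = {
    "CO   J=1-0": (7.2e-8, 3e-11),   # (A [s^-1], gamma [cm^3 s^-1])
    "HCO+ J=1-0": (4e-5, 2.5e-10),
}
for name, (A, gamma) in lines.items():
    print(f"{name}: n_c ~ {critical_density(A, gamma):.1e} cm^-3")
```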
Assuming knowledge of $`T_K`$, at least two transitions with different $`n_c(jk)`$ are needed to determine both $`n`$ and the line optical depth, $`\tau _{jk}\propto N_k/\mathrm{\Delta }v`$, which determines the amount of trapping, and more transitions are desirable. Because $`A_{J,J-1}\propto J^3`$, where $`J`$ is the quantum number for total angular momentum, observing many transitions up a rotational energy ladder provides a wide range of $`n_c(jk)`$. Linear molecules, like HCN and HCO<sup>+</sup>, have been used in this way, but higher levels often occur at wavelengths with poor atmospheric transmission. Relatively heavy species, like CS, have many accessible transitions, and up to five transitions ranging up to $`J=10`$ have been used to constrain density (e.g. Carr et al 1995, van der Tak et al 1999). More complex species provide more accessible energy levels; transitions within a single $`K_{-1}`$ ladder of H<sub>2</sub>CO provide a valuable density probe (Mangum & Wootten 1993). Transitions of H<sub>2</sub>CO with $`\mathrm{\Delta }J=0`$ are accessible to large arrays operating at centimeter wavelengths (e.g. Evans et al 1987). The lowest few of these H<sub>2</sub>CO transitions have the interesting property of absorbing the cosmic background radiation (Palmer et al 1969), $`T_{\mathrm{ex}}`$ being cooled by collisional pumping (Townes & Cheung 1969).
Application of these techniques to the homogeneous cloud model generally produces estimates of density exceeding $`10^4`$ cm<sup>-3</sup> in regions forming stars, while the sterile regions of the cloud are thought to have typical $`n\approx 10^2`$–$`10^3`$ cm<sup>-3</sup>, though these are less well constrained. Theoretical simulations of turbulence have predicted lognormal (Vázquez-Semadeni 1994) or power-law (Scalo et al 1998) probability density functions. Studies of multiple transitions of different molecules with a wide range of critical densities often reveal evidence for density inhomogeneities; in particular, pairs of transitions with higher critical densities tend to indicate higher densities (e.g. Evans 1980, Plume et al 1997). Both density gradients and clumpy structure have been invoked to explain these results (see §4 and §5 for detailed discussion). Since lines with high $`n_c(jk)`$ are excited primarily at higher $`n`$, one can avoid to some extent the averaging over the line of sight by tuning the probe.
### 3.3 Kinematics
In principle, information on $`\stackrel{}{v}(\stackrel{}{r})`$ is contained in maps of the line profile over the cloud. In practice, this message has been difficult to decode. Only motions along the line of sight produce Doppler shifts, and the line profiles average over the beam and along the line of sight. Maps of the line center velocity generally indicate that the typical cloud is experiencing neither overall collapse (Zuckerman & Evans 1974) nor rapid rotation (Arquilla & Goldsmith 1986, Goodman et al 1993). Instead, most clouds appear to have velocity fields dominated by turbulence, because the linewidths are usually much greater than expected from thermal broadening. While such turbulence can explain the breadth of the lines, the line profile is not easily matched. Even a homogeneous cloud will tend to develop an excitation gradient in unthermalized lines because of trapping, and gradients in $`T_K`$ or $`n`$ toward embedded sources should exacerbate this tendency. Simple microturbulent models with decreasing $`T_{\mathrm{ex}}(r)`$ predict that self-reversed line profiles should be seen more commonly than they are. Models with many small clumps and macroturbulence have had some success in avoiding self-reversed line profiles (Martin et al 1984, Wolfire et al 1993, Falgarone et al 1994, Park & Hong 1995).
The average linewidths of clouds are larger for larger clouds, the linewidth-size relation: $`\mathrm{\Delta }v\propto R^\gamma `$ (Larson 1981). For clouds with the same $`N`$, the virial theorem would predict $`\gamma =0.5`$, consistent with the results of many studies of clouds as a whole (e.g. Solomon et al 1987). Myers (1985) summarized the different relations and distinguished between those comparing clouds as a whole and those studying trends within a single cloud. The status of linewidth-size relations within clouds, particularly in star-forming regions, will be discussed in later sections.
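For reference, the exponent $`\gamma `$ in such relations is usually obtained from a least-squares fit in log-log space. A minimal sketch, applied here to fabricated data that roughly follow $`\mathrm{\Delta }v\propto R^{0.5}`$ (the numbers are invented for illustration, not observations):

```python
import math

def linewidth_size_exponent(sizes_pc, linewidths_kms):
    """Least-squares slope gamma of log(dv) versus log(R)."""
    xs = [math.log10(r) for r in sizes_pc]
    ys = [math.log10(dv) for dv in linewidths_kms]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Invented data roughly following dv ~ R^0.5 with a little scatter
R_pc  = [0.1, 0.3, 1.0, 3.0, 10.0]
dv_km = [0.35, 0.55, 1.1, 1.7, 3.3]
print(f"gamma ~ {linewidth_size_exponent(R_pc, dv_km):.2f}")   # ~0.5
```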
### 3.4 Magnetic Field and Ionization
The magnetic field strength and direction are important but difficult to measure. Heiles et al (1993) review the observations and McKee et al (1993) review theoretical issues. The only useful measure of the strength is the Zeeman effect, which probes the line-of-sight field, $`B_z`$. Observations of HI can provide some useful probes of $`B_z`$ in PDRs (e.g. Brogan et al 1999), but cannot probe the bulk of molecular gas. Molecules suitable for Zeeman effect measurements have unpaired electrons and their resulting reactivity tends to decrease their abundance in the denser regions (e.g. Sternberg et al 1997). Almost all work has been done with OH, along with some work with CN (Crutcher et al 1996), but future prospects include CCS and excited states of OH and CH (Crutcher 1998). Measurements of $`B_z`$ have been made with thermal emission or absorption by OH (e.g. Crutcher et al 1993), mostly probing regions with $`n\approx 10^3`$ cm<sup>-3</sup>, where $`B_z\approx 20`$ $`\mu `$G, or with OH maser emission, probing much denser gas, but with less certain conditions. As reviewed by Crutcher (1999a), the results for 14 clouds of widely varying mass with good Zeeman detections indicate that $`M_B`$ is usually within a factor of 2 of the cloud mass. Given uncertainties, this result suggests that clouds with measured $`B_z`$ lie close to the critical-subcritical boundary (Shu et al 1999). The observations can be fit with $`B\propto n^{0.47}`$, remarkably consistent with predictions of ambipolar diffusion calculations (e.g. Fiedler & Mouschovias 1993). However, if turbulent motions in clouds are constrained to be comparable to the Alfvén velocity, $`v_A\propto Bn^{-0.5}`$, this result is also expected (Myers & Goodman 1988; Bertoldi & McKee 1992).
The magnetic field direction, projected on the plane of the sky, can be measured because spinning, aspherical grains tend to align their spin axes with the magnetic field direction (see Lazarian et al 1997 for a list of mechanisms). Then the dust grains absorb and emit preferentially in the plane perpendicular to the field. Consequently, background star light will be preferentially polarized along $`B_{\perp }`$ and thermal emission from the grains will be polarized perpendicular to $`B_{\perp }`$ (Hildebrand 1988). Goodman (1996) has shown that the grains that polarize background starlight do not trace the field very deeply into the cloud, but maps of polarized emission at far-infrared, submillimeter (Schleuning 1998) and millimeter (Akeson & Carlstrom 1997, Rao et al 1998) wavelengths are beginning to provide maps of field direction deep into clouds. Line emission may also be weakly polarized under some conditions (Goldreich & Kylafis 1981), providing a potential probe of $`B_{\perp }`$ with velocity information. After many attempts, this effect has been detected recently (Greaves et al 1999).
The ionization fraction ($`x_e`$) is determined by chemical analysis and has been discussed by van Dishoeck & Blake (1998). Theoretically, $`x_e`$ should drop from about $`10^{-4}`$ near the outer edge of the cloud to about $`10^{-8}`$ in interiors shielded from ultraviolet radiation. Observational estimates of $`x_e`$ are converging on values around $`10^{-8}`$ to $`10^{-7}`$ in cores (de Boisanger et al 1996, Caselli et al 1998, Williams et al 1998, Bergin et al 1999).
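The order of magnitude of these core values can be recovered from the simplest steady-state balance between cosmic-ray ionization and recombination of molecular ions, $`x_e\approx \sqrt{\zeta /(\alpha n)}`$. This back-of-the-envelope relation is not the chemical analysis referred to above (it ignores metals, grains, and depletion), and the rate coefficients in the sketch below are assumed, typical values:

```python
import math

def ionization_fraction(n, zeta=3e-17, alpha=1e-6):
    """
    Rough steady-state ionization fraction, x_e ~ sqrt(zeta / (alpha * n)).
    zeta  : cosmic-ray ionization rate per H2 [s^-1]        (assumed value)
    alpha : effective recombination coefficient [cm^3 s^-1] (assumed value)
    """
    return math.sqrt(zeta / (alpha * n))

for n in (1e3, 1e4, 1e5):
    print(f"n = {n:.0e} cm^-3: x_e ~ {ionization_fraction(n):.1e}")
```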
### 3.5 Observational and Analytical Tools
Having discussed how different physical conditions are probed, I will end this section with a brief summary of the observational and analytical tools that are used. Clearly, most information on physical conditions comes from observations of molecular lines. Most of these lie at millimeter or submillimeter wavelengths, and progress in this field has been driven by the development of large single-dish telescopes operating at submillimeter wavelengths and by arrays of antennas operating interferometrically at millimeter wavelengths (Sargent & Welch 1993). The submillimeter capability has allowed the study of high-$`J`$ levels for excitation analysis and increased sensitivity to dust continuum emission, which rises with frequency ($`S_\nu \propto \nu ^2`$ or faster). Studies of millimeter and submillimeter emission from dust have been greatly enhanced recently with the development of cameras on single dishes, both at millimeter wavelengths (Kreysa 1992) and at submillimeter wavelengths, with SHARC (Hunter et al 1996) and SCUBA (Cunningham et al 1994). Examples of the maps that these cameras are producing are the color plates showing the 1.3 mm emission from the $`\rho `$ Ophiuchi region (Motte et al 1998, Figure 1) and the 850 $`\mu `$m and 450 $`\mu `$m emission from the ridge in Orion (L1641) (Johnstone & Bally 1999, Figure 2).
Interferometric arrays, operating at millimeter wavelengths, have provided unprecedented angular resolution (now better than 1<sup>′′</sup>) maps of both molecular line and continuum emission. They are particularly critical for separating the continuum emission from a disk and the envelope and for studying deeply embedded binaries (e.g. Looney et al 1997, Figure 3). Complementary information has been provided in the infrared, with near-infrared star-counting (Lada et al 1994, 1999), near-infrared and mid-infrared spectroscopy of rovibrational transitions (e.g. Mitchell et al 1990, Evans et al 1991, Carr et al 1995, van Dishoeck et al 1998), and far-infrared continuum and spectral line studies (e.g. Haas et al 1995). Early results from the Infrared Space Observatory can be found in Yun & Liseau (1998).
The analytical tools for molecular cloud studies have grown gradually in sophistication. Early studies assumed LTE excitation, an approximation that is still used in some studies of CO isotopomers, but it is clearly invalid for other species. Studies of excitation require solution of the statistical equilibrium equations (Goldsmith 1972). Goldreich & Kwan (1974) pointed out that photon trapping will increase the average $`T_{\mathrm{ex}}`$ and provided a way of including its effects that was manageable with the limited computer resources of that time: the Large Velocity Gradient (LVG) approximation. Tied originally to their picture of collapsing clouds, this approximation allowed one to treat the radiative transport locally. Long after the overall collapse scenario had been discarded, the LVG method has remained in use, providing a quick way to include trapping, at least approximately. In parallel, more computationally intensive codes were developed for microturbulent clouds, in which photons emitted anywhere in the cloud could affect excitation anywhere else (e.g. Lucas 1974).
The microturbulent and LVG assumptions are the two extremes, and real clouds probably lie between. For modest optical depths, the conclusions of the two methods differ by factors of about 3, comparable to uncertainties caused by uncertain geometry (White 1977, Snell 1981). These methods are still useful in some situations, but they are gradually being supplanted by more flexible radiative transport codes, using either the Monte Carlo technique (Bernes 1979, Choi et al 1995, Park & Hong 1998, Juvela 1997) or $`\mathrm{\Lambda }`$-iteration (Dickel & Auer 1994, Yates et al 1997, Wiesemeyer 1999). Some of these codes allow variations in the velocity, density, and temperature fields, non-spherical geometries, clumps, etc. Of course, increased flexibility means more free parameters and the need for more extensive observations to constrain them.
Similar developments have occurred in the area of dust continuum emission. Since stellar photons are primarily at wavelengths where dust is quite opaque, a radiative transport code is needed to compute dust temperatures as a function of distance from a stellar heat source (Egan et al 1988, Manske & Henning 1998). For clouds without embedded stars or protostars, only the interstellar radiation field heats the dust; $`T_D`$ can get very low (5–10 K) in centers of opaque clouds (Leung 1975). Embedded sources heat clouds internally; in clouds opaque to the stellar radiation, it is absorbed close to the source and reradiated at longer wavelengths. Once the energy is carried primarily by photons at wavelengths where the dust is less opaque, the temperature distribution relaxes to the optically thin limit (Doty & Leung 1994):
$$T_D(r)\propto L^{q/2}r^{-q}$$
(14)
where $`L`$ is the luminosity of the source and $`q=2/(\beta +4)`$, assuming $`\kappa (\nu )\propto \nu ^\beta `$.
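A short sketch of this power-law limit follows; the normalization (an assumed 40 K at 100 AU for a 1 L<sub>⊙</sub> source) is a hypothetical reference point chosen only to make the scaling concrete, not a value taken from the references above:

```python
def dust_temperature(r_au, L_lsun, beta=1.0, T_ref=40.0, r_ref_au=100.0, L_ref_lsun=1.0):
    """
    Optically thin power-law dust temperature (Eq. 14):
    T_D(r) = T_ref * (r / r_ref)**(-q) * (L / L_ref)**(q / 2), with q = 2 / (beta + 4).
    T_ref at r_ref for L_ref is an assumed normalization, not a fitted value.
    """
    q = 2.0 / (beta + 4.0)
    return T_ref * (r_au / r_ref_au) ** (-q) * (L_lsun / L_ref_lsun) ** (q / 2.0)

for r in (100.0, 1000.0, 10000.0):
    print(f"r = {r:7.0f} AU: T_D ~ {dust_temperature(r, L_lsun=1.0):.1f} K")
```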
## 4 FORMATION OF ISOLATED LOW-MASS STARS
Many low-mass stars actually form in regions of high-mass star formation, where clustered formation is the rule (Elmegreen 1985, Lada 1992, McCaughrean & Stauffer 1994). The focus here is on regions where we can isolate the individual star-forming events and these are almost inevitably forming low-mass stars.
### 4.1 Theoretical Issues
The theory of isolated star formation has been developed in some detail. It relies on the existence of relatively isolated regions of enhanced density that can collapse toward a single center, though processes at smaller scales may cause binaries or multiples to form. One issue then is whether isolated regions suitable for forming individual, low-mass stars are clearly identifiable. The ability to separate these from the rest of the cloud underlies the distinction between sterile and fertile parts of clouds (§1).
Because of the enormous compression needed, gravitational collapse plays a key role in all star formation theories. In most cases, only a part of the cloud collapses, and theories differ on how this part is distinguished from the larger cloud. Is it brought to the verge of collapse by an impulsive event, like a shock wave (Elmegreen & Lada 1977) or a collision between clouds or clumps (Loren 1976), or is the process gradual? Among gradual processes, the decay of turbulence and ambipolar diffusion are leading contenders. If the decay of turbulence leaves the cloud in a subcritical state ($`M<M_B`$), then a relatively long period of ambipolar diffusion is needed before dynamical collapse can proceed (§2). If the cloud is supercritical ($`M>M_B`$), then the magnetic field alone cannot stop the collapse (e.g. Mestel 1985). If turbulence does not prevent it, a rapid collapse ensues, and fragmentation is likely. Shu et al (1987a, 1987b) suggested that the subcritical case describes isolated low mass star formation, while the supercritical case describes high-mass and clustered star formation. Recently, Nakano (1998) has argued that star formation in subcritical cores via ambipolar diffusion is implausible; instead he favors dissipation of turbulence as the controlling factor. For the present section, the questions are whether there is evidence that isolated, low-mass stars form in sub-critical regions and what is the status of turbulence in these cores.
Rotation could in principle support clouds against collapse, except along the rotation axis (Field 1978). Even if rotation does not prevent the collapse, it is likely to be amplified during collapse, leading at some point to rotation speeds able to affect the collapse. In particular, rotation is usually invoked to produce binaries or multiple systems on small scales. What do we know about rotation rates on large scales and how the rotation is amplified during collapse? Is there any correlation between rotation and the formation of binaries observationally? If rotation controls whether binaries form, can we understand why collapse leads to binary formation roughly half the time? It is clear that both magnetic flux and angular momentum must be redistributed during collapse to produce stars with reasonable fields and rotation rates, and these processes will affect the formation of binaries and protoplanetary disks. Some of these questions will be addressed in the next sections on globules and cores, while others will be discussed in the context of testing specific theories.
### 4.2 Globules and Cores: Overall Properties
Nearby small dark clouds, or globules, are natural places to look for isolated star formation (Bok & Reilly 1947). A catalog of 248 globules (Clemens & Barvainis 1988) has provided the basis for many studies. Yun & Clemens (1990) found that 23% of the CB globules appear to contain embedded infrared sources, with spectral energy distributions typical of star forming regions (Yun 1993). About one-third of the globules with embedded sources have evidence of outflows (Yun & Clemens 1992; Henning & Launhardt 1998). Clearly, star formation does occur in isolated globules.
Within the larger dark clouds, one can identify numerous regions of high opacity (e.g. Myers et al 1983), commonly called cores (Myers 1985). Surveys of such regions in low-excitation lines of NH<sub>3</sub> (e.g. Benson & Myers 1989) led to the picture of an isolated core within a larger cloud, which then might pursue its course toward star formation in relative isolation from the rest of the cloud. Most intriguing was the fact that the NH<sub>3</sub> linewidths in many of these cores indicated that the turbulence was subsonic (Myers 1983); in some cores, thermal broadening of NH<sub>3</sub> lines even dominated over turbulent broadening (Myers & Benson 1983, Fuller & Myers 1993). Although later studies in other lines indicated a more complex dynamical situation (Zhou et al 1989, Butner et al 1995), the NH<sub>3</sub> data provided observational support for theories describing the collapse of isothermal spheres (Shu 1977). The discovery of IRAS sources in half of these cores (Beichman et al 1986) indicated that they were indeed sites of star formation. The observational and theoretical developments were synthesized into an influential paradigm for low mass star formation (Shu et al 1987a, §4.5).
Globules would appear to be an ideal sample for measuring sizes since the effects of the environment are minimized by their isolation, but distances are uncertain. Based on the angular size of the optical images and an assumed average distance of 600 pc (Clemens & Barvainis 1988), the mean size, $`l=0.7`$ pc. A subsample of these with distance estimates was mapped in molecular lines, yielding much smaller average sizes: $`l=0.33\pm 0.15`$ pc for a sample of 6 “typical” globules mapped in CS (Launhardt et al 1998); maps of the same globules in C<sup>18</sup>O $`J=2\rightarrow 1`$ (Wang et al 1995) give sizes smaller by a factor of $`2.9\pm 1.6`$. A sample of 11 globules in the southern sky, mapped in NH<sub>3</sub> (Bourke et al 1995), has $`l=0.21\pm 0.08`$ pc.
Cores in nearby dark clouds have the advantage of having well-determined distances; the main issue is how clearly they stand out from the bulk of the molecular cloud. Gregersen (1998) found that some known cores are barely visible above the general cloud emission in the C<sup>18</sup>O $`J=1\rightarrow 0`$ line. The mean size of a sample of 16 cores mapped in NH<sub>3</sub> is 0.15 pc, whereas CS $`J=2\rightarrow 1`$ gives 0.27 pc, and C<sup>18</sup>O $`J=1\rightarrow 0`$ gives 0.36 pc (Myers et al 1991). These differences may reflect the effects of opacity, chemistry, and density structure.
Globules are generally not spherical. By fitting the opaque cores with ellipses, Clemens & Barvainis (1988) found a mean aspect ratio ($`a/b`$) of 2.0. Measurements of aspect ratio in tracers of reasonably dense gas toward globules give $`a/b\approx 1.5`$–2 (Wang et al 1995, Bourke et al 1995), and cores in larger clouds have $`a/b\approx 2`$ (Myers et al 1991). Myers et al and Ryden (1996) have argued that the underlying 3-D shapes were more likely to be prolate, with axial ratios around 2, than oblate, where axial ratios of 3–10 were needed. However, toroids may also be able to match the data because of their central density minima (Li & Shu 1996).
The uncertainties in size are reflected directly into uncertainties in mass. For the sample of globules mapped by Bourke et al (1995) in NH<sub>3</sub>, $`M_N=4\pm 1`$ M<sub>⊙</sub>, compared to $`10\pm 2`$ M<sub>⊙</sub> for cores. Larger masses are obtained from other tracers. For the sample of globules studied by Launhardt et al (1998), $`M_V=26\pm 12`$ based on CS $`J=2\rightarrow 1`$ and $`10\pm 6`$, based on C<sup>18</sup>O $`J=2\rightarrow 1`$. Studies of the cores in larger clouds found similar ranges and differences among tracers (e.g. Fuller 1989, Zhou et al 1994a). A series of studies of the Taurus cloud complex has provided an unbiased survey of cores with known distance identified by C<sup>18</sup>O $`J=1\rightarrow 0`$ maps. Starting from a large-scale map of <sup>13</sup>CO $`J=1\rightarrow 0`$ (Mizuno et al 1995), Onishi et al (1996) covered 90% of the area with $`N>3.5\times 10^{21}`$ with a map of C<sup>18</sup>O $`J=1\rightarrow 0`$ with 0.1 pc resolution. They identified 40 cores with $`l=0.46`$ pc, $`a/b=1.8`$, and $`M_N=23`$ M<sub>⊙</sub>. The sizes extended over a range of a factor of 6 and masses over a factor of 80. Comparing these cores to the distribution of T Tauri stars, infrared sources, and H<sup>13</sup>CO<sup>+</sup> emission, Onishi et al (1998) found that all cores with $`N>8\times 10^{21}`$ cm<sup>-2</sup> are associated with H<sup>13</sup>CO<sup>+</sup> emission and/or cold IRAS sources. In addition, the larger cores always contained multiple objects. They concluded that the core mass per star-forming event is relatively constant at 11 M<sub>⊙</sub>.
It is clear that characterizing globules and cores by a typical size and mass is an oversimplification. First, they come in a range of sizes that is probably just the low end of the general distribution of cloud sizes. Second, the size and mass depend strongly on the tracer and method used to measure them. If we ignore these caveats, it is probably fair to say that most of these regions have sizes measured in tracers of reasonably dense gas in the range of a few tenths of a pc and masses of less than 100 M<sub>⊙</sub>, with more small, low mass cores than massive ones. The larger cores tend to be fragmented, so that the mass of gas with $`n\approx 10^4`$ cm<sup>-3</sup> tends toward 10 M<sub>⊙</sub>, within a factor of 2. Star formation may occur when the column density exceeds 8$`\times 10^{21}`$ cm<sup>-2</sup>, corresponding in this case to $`n\approx 10^4`$ cm<sup>-3</sup>.
### 4.3 Globules and Cores: Internal Conditions
If the sizes and masses of globules and cores are poorly defined, at least the temperatures seem well understood. Early CO observations (Dickman 1975) showed that the darker globules are cold ($`T_K\approx 10`$ K), as expected for regions with only cosmic ray heating. Similar results were found from NH<sub>3</sub> (Martin & Barrett 1978, Myers & Benson 1983; Bourke et al 1995). Clemens et al (1991) showed the distribution of CO temperatures for a large sample of globules; the main peak corresponded to $`T_K=8.5`$ K, with a small tail to higher $`T_K`$. Determination of the dust temperature was more difficult, but Keene (1981) measured $`T_D=13`$–16 K in B133, a starless core. More recently, Ward-Thompson et al (1998) used ISO data to measure $`T_D=13`$ K in another starless core. These results are similar to predictions for cores heated by the interstellar radiation field (Leung 1975), though $`T_D`$ is expected to be lower in the deep interiors.
A sample of starless globules with $`A_V\approx 1`$–2 mag produced no detections of NH<sub>3</sub> (Kane et al 1994), but surveys of H<sub>2</sub>CO (Wang et al 1995) and CS (Launhardt et al 1998, Henning & Launhardt 1998) toward more opaque globules indicate that dense gas is present in some. The detection rate of CS $`J=2\rightarrow 1`$ emission was much higher in globules with infrared sources. Butner et al (1995) analyzed multiple transitions of DCO<sup>+</sup> in 18 low-mass cores, finding $`\mathrm{log}n(\text{cm}^{-3})\approx 5`$, with a tendency to slightly higher values in the cores with infrared sources. Thus, gas still denser than the $`\mathrm{log}n(\text{cm}^{-3})\approx 4`$ gas traced by NH<sub>3</sub> exists in these cores.
Discussion of the kinematics of globules and cores has often focused on the relationship between linewidth ($`\mathrm{\Delta }v`$) and size ($`l`$ or $`R`$). This relation is much less clearly established for cores within a single cloud than is the relation for clouds as a whole (§3.3), and it may have a different origin (Myers 1985). Goodman et al (1998) have distinguished four types of linewidth-size relationships. Most studies have employed Goodman Type 1 relations (multi-tracer, multi-core), but it is difficult to distinguish different causes in such relations. Goodman Type 2 (single-tracer, multi-core) relations within a single cloud would reveal whether the virial masses of cores are reliable. Interestingly, the most systematic study of cores in a single cloud found no correlation in 24 cores in Taurus mapped in C<sup>18</sup>O $`J=1\rightarrow 0`$ ($`\gamma =0.0\pm 0.2`$), but the range of sizes (0.13 to 0.4 pc) may have been insufficient (Onishi et al 1996).
To study the kinematics of individual cores, the most useful relations are Goodman Types 3 (multi-tracer, single-core) and 4 (single-tracer, single-core). In these, a central position is defined, either by an infrared source or a line peak. A Type 3 relationship using NH<sub>3</sub>, C<sup>18</sup>O $`J=1\rightarrow 0`$ and CS $`J=2\rightarrow 1`$ lines was explored by Fuller & Myers (1992), who found $`\mathrm{\Delta }v\propto R^\gamma `$, with $`R`$ the radius of the half-power contour. Caselli & Myers (1995) added <sup>13</sup>CO $`J=1\rightarrow 0`$ data and constructed a Type 1 relation for 8 starless cores, after removing the thermal broadening. The mean $`\gamma `$ was $`0.53\pm 0.07`$ with a correlation coefficient of 0.81. However, both these relationships depend strongly on the fact that the NH<sub>3</sub> lines have small $`\mathrm{\Delta }v`$ and $`R`$. Some other species (e.g. HC<sub>3</sub>N, Fuller & Myers 1993) also have narrow lines and small sizes, but DCO<sup>+</sup> emission has much wider lines over a similar map size (Butner et al 1995), raising the possibility that chemical effects cause different molecules to trace different kinematic regimes within the same overall core. Goodman et al (1998) suggest that the DCO<sup>+</sup> is excited in a region outside the NH<sub>3</sub> region and that the size of the DCO<sup>+</sup> region is underestimated. On the other hand, some chemical simulations indicate that NH<sub>3</sub> will deplete in dense cores while ions like DCO<sup>+</sup> will not (Rawlings et al 1992), suggesting the opposite solution.
A way to avoid such effects is to use a Type 4 relation, searching for a correlation between $`\mathrm{\Delta }v_{NT}`$ and ring radius as spectra are averaged in rings of increasing size, though line-of-sight confusion cannot be avoided and one gives up the ability to tune the density sensitivity. This method has almost never been applied to tracers of dense gas, but Goodman et al (1998) use an indirect method to obtain a Type 4 relation for three clouds in OH, C<sup>18</sup>O, and NH<sub>3</sub>. The relations are very flat ($`\gamma =0.1`$ to 0.3), and $`\gamma \ne 0`$ has statistical significance only for OH, which traces the least dense gas. In particular, NH<sub>3</sub> shows no significant Type 4 relation, having narrow lines on every scale (see also Barranco & Goodman 1998). Goodman et al (1998) interpret these results in terms of a “transition to coherence” at the scale of 0.1–0.2 pc from the center of a dense core. Inside that radius, the turbulence becomes subsonic and no longer decreases with size (Barranco & Goodman 1998). While this picture accords nicely with the idea that cores can be distinguished from the surroundings and treated as “units” in low-mass star formation, the discrepancy between values of $`\mathrm{\Delta }v`$ measured in different tracers of the dense core (cf Butner et al 1995) indicates that caution is required in interpreting the NH<sub>3</sub> data.
Rotation can be detected in some low-mass cores, but the ratio of rotational to gravitational energy has a typical value of 0.02 on scales of 0.1 pc (Goodman et al 1993). The inferred rotation axes are not correlated with the orientation of cloud elongation, again suggesting that rotation is not dynamically important on the scale of 0.1 pc. Ohashi et al (1997a) find evidence in several star-forming cores for a transition at $`r\sim 0.03`$ pc, inside of which the specific angular momentum appears to be constant at about $`10^{-3}`$ km s<sup>-1</sup> pc down to scales of about 200 AU.
Knowledge of the magnetic field strength in cores and globules would be extremely valuable in assessing whether they are subcritical or supercritical. Unfortunately, Zeeman measurements of these regions are extremely difficult because they must be done with emission, unless there is a chance alignment with a background radio source. Crutcher et al (1993) detected Zeeman splitting in OH in only one of 12 positions in nearby clouds. Statistical analysis of the detection and the upper limits, including the effects of random orientation of the field, led to the conclusion that the data could not falsify the hypothesis that the clouds were subcritical. Another problem is that the OH emission probes relatively low densities ($`n\sim 10^3`$ cm<sup>-3</sup>). Attempts to use CN to probe denser gas produced upper limits that were less than expected for subcritical clouds, but the small sample size and other uncertainties again prevented a definitive conclusion (Crutcher et al 1996). Improved sensitivity and larger samples are crucial to progress in this field. At present, no clear examples of subcritical cores have been found (Crutcher 1999a), but uncertainties are sufficient to allow this possibility.
Onishi et al (1996) found that the major axis of the cores they identified in Taurus tended to be perpendicular to the optical polarization vectors and hence $`B_{}`$. Counterexamples are known in other regions (e.g. Vrba et al 1976) and the fact that optical polarization does not trace the dense portions of clouds (Goodman 1996) suggests that this result be treated cautiously. Further studies of $`B_{}`$ using dust emission at long wavelengths are clearly needed.
The median ionization fraction of 23 low-mass cores is $`9\times 10^{-8}`$, with a range of log$`x_e`$ of $`-7.5`$ to $`-6.5`$ and typical uncertainties of about 0.5 in the log (Williams et al 1998). Cores with stars do not differ significantly in $`x_e`$ from cores without stars, consistent with cosmic ray ionization. For a cloud with $`n=10^4`$ cm<sup>-3</sup>, the ambipolar diffusion timescale is $`t_{AD}\approx 7\times 10^6`$ yr $`\approx 20t_{ff}`$. If the cores are subcritical, they will evolve much more slowly than a free fall time. Recent comparisons of the line profiles of ionized and neutral species have been interpreted as setting an upper limit on the ion-neutral drift velocity of $`0.03`$ km s<sup>-1</sup>, consistent with that expected from ambipolar diffusion (Benson et al 1998).
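To make the timescale comparison explicit, the sketch below evaluates the free-fall time for $`n=10^4`$ cm<sup>-3</sup> and an ambipolar diffusion time scaled linearly with $`x_e`$. The coefficient of $`7\times 10^{13}`$ yr is an assumption chosen only to reproduce the numbers quoted in this review; the true value depends on the adopted ion-neutral coupling.

```python
import numpy as np

# Free-fall vs. ambipolar-diffusion timescales for the numbers quoted above.
# The linear scaling t_AD ~ 7e13 * x_e yr is an assumption chosen to reproduce
# the values cited in this review (~7e6 yr at x_e ~ 1e-7, ~7e5 yr at 1e-8);
# the true coefficient depends on the adopted ion-neutral coupling.
G = 6.674e-8                  # cm^3 g^-1 s^-2
m_H = 1.67e-24                # g
MU = 2.8                      # mean mass per particle in units of m_H (assumed)
SEC_PER_YR = 3.156e7

def t_freefall_yr(n_H2):
    rho = MU * m_H * n_H2
    return np.sqrt(3.0 * np.pi / (32.0 * G * rho)) / SEC_PER_YR

def t_ad_yr(x_e):
    return 7e13 * x_e         # assumed scaling (see comment above)

n, x_e = 1e4, 9e-8            # cm^-3; median x_e from Williams et al (1998)
print(f"t_ff ~ {t_freefall_yr(n):.1e} yr")
print(f"t_AD ~ {t_ad_yr(x_e):.1e} yr  (~{t_ad_yr(x_e)/t_freefall_yr(n):.0f} t_ff)")
```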
To summarize the last two subsections, there is considerable evidence that distinct cores can be identified, both as isolated globules and as cores within larger clouds. While there is a substantial range of properties, scales of 0.1 pc in size and 10 M<sub>⊙</sub> in mass seem common. The cores are cold ($`T_K\approx T_D\approx 10`$ K) and contain gas with $`n\sim 10^4`$ cm<sup>-3</sup>, extending in some cases up to $`n\sim 10^5`$ cm<sup>-3</sup>. While different molecules disagree about the magnitude of the effect, these cores seem to be regions of decreased turbulence compared to the surroundings. While no clear cases of subcritical cores have been found, the hypothesis that low-mass stars form in subcritical cores cannot be ruled out observationally. How these cores form is beyond the scope of this review, but again one can imagine two distinct scenarios: ambipolar diffusion brings an initially subcritical core to a supercritical state; or dissipation of turbulence plays a similar role in a core originally supported by turbulence (Myers & Lazarian 1998). In the latter picture, cores may build up from accretion of smaller diffuse elements (Kuiper et al 1996), perhaps the structures inferred by Falgarone et al (1998).
### 4.4 Classification of Sources and Evolutionary Scenarios
The IRAS survey provided spectral energy distributions over a wide wavelength range for many cores (e.g. Beichman et al 1986), leading to a classification scheme for infrared sources. In the original scheme (Lada & Wilking 1984, Lada 1987), the spectral index between 2 $`\mu `$m and the longest observed wavelength was used to divide sources into three Classes, designated by roman numerals, with Class I indicating the most emission at long wavelengths. These classes rapidly became identified with stages in the emerging theoretical paradigm (Shu et al 1987a): Class I sources are believed to be undergoing infall with simultaneous bipolar outflow, Class II sources are typically visible T Tauri stars with disks and winds, and Class III sources have accreted or dissipated most of the material, leaving a pre-main-sequence star, possibly with planets (Adams et al 1987; Lada 1991).
More recently, submillimeter continuum observations have revealed a large number of sources with emission peaking at still longer wavelengths. Some of these new sources also have infrared sources and powerful bipolar outflows, indicating that a central object has formed; these have been designated Class 0 (André et al 1993). André & Montmerle (1994) argued that Class 0 sources represent the primary infall stage, in which there is still more circumstellar than stellar matter. Outflows appear to be most intense in the earliest stages, declining later (Bontemps et al 1996). Other cores with submillimeter emission have no IRAS sources and probably precede the formation of a central object. These were found among the “starless cores” of Benson & Myers (1989), and Ward-Thompson et al (1994) referred to them as “pre-protostellar cores.” The predestination implicit in this name has made it controversial, and I will use the less descriptive (and somewhat tongue-in-cheek) term, Class $`-1`$. There has also been some controversy over whether Class 0 sources are really distinct from Class I sources or just more extreme versions. The case for Class 0 sources as a distinct stage can be found in André et al (2000).
While classification has an honored history in astronomy, serious tests of theory are facilitated by continuous variables. Myers & Ladd (1993) suggested that we characterize the spectral energy distribution by the flux-weighted mean frequency, or more suggestively, by the temperature of a black body with the same mean frequency. The latter ($`T_{bol}`$) was calculated by Chen et al (1995) for many sources and the following boundary lines in $`T_{bol}`$ were found to coincide with the traditional classes: $`T_{bol}<70`$ K for Class 0, $`70\le T_{bol}\le 650`$ K for Class I, and $`650<T_{bol}\le 2800`$ K for Class II. In a crude sense, $`T_{bol}`$ captures the “coolness” of the spectral energy distribution, which is related to how opaque the dust is, but it can be affected strongly by how much mid-infrared and near-infrared emission escapes, and thus by geometry. Other measures, such as the ratio of emission at a submillimeter wavelength to the bolometric luminosity ($`L_{smm}/L_{bol}`$), may also be useful. One of the problems with all these measures is that the bulk of the energy for classes earlier than II emerges at far-infrared wavelengths, where resolution has been poor. Maps of submillimeter emission are showing that many near-infrared sources are displaced from the submillimeter peaks and may have been falsely identified as Class I sources (Motte et al 1998). Ultimately, higher spatial resolution in the far-infrared will be needed to sort out this confusion.
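As a concrete illustration of the $`T_{bol}`$ measure, the sketch below computes the flux-weighted mean frequency of a crude, hypothetical SED and converts it to a bolometric temperature using the blackbody relation $`\nu =3.832kT/h`$ (the numerical factor follows from the moments of the Planck function); the photometry values are invented and the sparse sampling is a simplification.

```python
import numpy as np

# Sketch: bolometric temperature T_bol = temperature of a blackbody with the
# same flux-weighted mean frequency as the observed SED.  For a blackbody,
# <nu> = 3.832 kT/h, so T_bol ~ 1.25e-11 K/Hz * <nu>.  The SED points below
# are hypothetical, not real photometry of any source.
h, k_B = 6.626e-27, 1.381e-16            # erg s, erg/K

wavelengths_um = np.array([2.2, 12, 25, 60, 100, 450, 850, 1300])
flux_jy        = np.array([0.01, 0.05, 0.3, 8.0, 15.0, 6.0, 1.5, 0.4])

nu = 3.0e14 / wavelengths_um             # Hz (c = 3e14 micron/s)
idx = np.argsort(nu)
nu, s = nu[idx], flux_jy[idx]

def trapz(y, x):                         # simple trapezoidal integral
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

nu_mean = trapz(nu * s, nu) / trapz(s, nu)
T_bol = h * nu_mean / (3.832 * k_B)
print(f"<nu> = {nu_mean:.2e} Hz  ->  T_bol = {T_bol:.0f} K")
```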
### 4.5 Detailed Theories
The recent focus on the formation of isolated, low-mass stars is at least partly due to the fact that it is more tractable theoretically than the formation of massive stars in clusters. Shu (1977) argued that collapse begins in a centrally condensed configuration and propagates outward at the sound speed ($`a`$); matter inside $`r_{inf}=at`$ is infalling after a time $`t`$. Because the collapse in this model is self-similar, the structure can be specified at any time from a single solution. This situation is called inside-out collapse. In Shu’s picture, the pre-collapse configuration is an isothermal sphere, with $`n(r)\propto r^{-p}`$ and $`p=2`$. Calculations of core formation via ambipolar diffusion gradually approach a configuration with an envelope that is close to a power law, but with a core of ever-shrinking size and mass where $`p\sim 0`$ (e.g. Mouschovias 1991). It is natural to identify the ambipolar diffusion stage with the Class $`-1`$ stage, and the inside-out collapse with Class 0 to Class I. As collapse proceeds, the material inside $`r_{inf}`$ becomes less dense, with a power law approaching $`p=1.5`$ in the inner regions, after a transition region where the density is not a power law. In addition, $`v(r)\propto r^{-0.5}`$ at small $`r`$.
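The mass accretion rate in this picture is set by the sound speed alone. A minimal sketch follows, using the standard coefficient $`\dot{M}\approx 0.975a^3/G`$ for the Shu (1977) solution (the coefficient is quoted from the literature, not from the text above), evaluated for a 10 K core with an assumed mean molecular weight.

```python
import numpy as np

# Accretion rate of the inside-out collapse: Mdot ~ 0.975 a^3 / G, the standard
# coefficient for the Shu (1977) solution (quoted from the literature, not from
# the text above), evaluated for a 10 K core with an assumed mean molecular
# weight of 2.33.
G, k_B, m_H = 6.674e-8, 1.381e-16, 1.67e-24   # cgs
M_SUN, YR = 1.989e33, 3.156e7

T, mu = 10.0, 2.33
a = np.sqrt(k_B * T / (mu * m_H))             # isothermal sound speed, cm/s
mdot = 0.975 * a**3 / G                       # g/s
print(f"a = {a/1e5:.2f} km/s, Mdot ~ {mdot * YR / M_SUN:.1e} M_sun/yr")  # ~1.5e-6
```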
There are many other solutions to the collapse problem (Hunter 1977, Foster and Chevalier 1993) ranging from the inside-out collapse to overall collapse (Larson 1969, Penston 1969). Henriksen et al (1997) have argued that collapse begins before the isothermal sphere is established; the inner ($`p=0`$) core undergoes a rapid collapse that they identify with Class 0 sources. They suggest that Class I sources represent the inside-out collapse phase, which appears only when the wave of infall has reached the $`p=2`$ envelope.
Either rotation or magnetic fields will break spherical symmetry. Terebey et al (1984) added slow rotation of the original cloud at an angular velocity of $`\mathrm{\Omega }`$ to the inside-out collapse picture, resulting in another characteristic radius, the point where the rotation speed equals the infall speed. A rotationally supported disk should form somewhere inside this centrifugal radius (Shu et al 1987a)
$$r_c=\frac{G^3M^3\mathrm{\Omega }^2}{16a^8},$$
(15)
where $`M`$ is the mass already in the star and disk. Since disks are implicated in all models of the ultimate source of the outflows, the formation of the disk may also signal the start of the outflow. Once material close to the rotation axis has accreted (or been blown out) all further accretion onto the star should occur through the disk.
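A minimal sketch evaluating equation (15) for representative numbers; the particular sound speed, rotation rate, and accreted mass below are illustrative assumptions, not values taken from the works cited above.

```python
import numpy as np

# Equation (15) evaluated for representative numbers; a, Omega, and M below are
# illustrative assumptions, not values taken from the works cited above.
G = 6.674e-8          # cm^3 g^-1 s^-2
M_SUN = 1.989e33      # g
AU = 1.496e13         # cm

a = 0.2e5             # sound speed, cm/s (0.2 km/s, appropriate for ~10 K gas)
Omega = 2e-14         # initial angular velocity, s^-1 (hypothetical)
M = 1.0 * M_SUN       # mass already in star + disk

r_c = G**3 * M**3 * Omega**2 / (16.0 * a**8)
print(f"r_c ~ {r_c / AU:.0f} AU")    # ~150 AU for these inputs
```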
Core formation in a magnetic field should produce a flattened structure (e.g. Fiedler & Mouschovias 1993) and Li and Shu (1996) argue that the equilibrium structure equivalent to the isothermal sphere is the isothermal toroid. Useful insights into the collapse of a magnetized cloud have resulted from calculations in spherical geometry (Safier et al 1997, Li 1998) or for thin disks (Ciolek & Königl 1998). Some calculations of collapse in two dimensions have been done (e.g. Fiedler & Mouschovias 1993). A magnetically-channeled, flattened structure may appear; this has been called a pseudo-disk (Galli & Shu 1993a, 1993b) to distinguish it from the rotationally supported disk. In this picture, material would flow into a pseudo-disk at a scale of $`1000`$ AU before becoming rotationally supported on the scale of $`r_c`$. The breaking of spherical symmetry on scales of 1000 AU may explain some of the larger structures seen in some regions (see Mundy et al 2000 for a review).
Ultimately, theory should be able to predict the conditions that lead to binary formation, but this is not presently possible. Steps toward this goal can be seen in numerical calculations of collapse with rotation (e.g. Bonnell & Bastien 1993, Truelove et al 1998, Boss 1998). Rotation and magnetic fields have been combined in a series of calculations by Basu & Mouschovias (1995 and references therein); Basu (1998) has considered the effects of magnetic fields on the formation of rotating disks.
Theoretical models make predictions that are testable, at least in principle. In the simplest picture of the inside-out collapse of the rotationless, non-magnetic isothermal sphere, the theory predicts all the density and velocity structure with only the sound speed as a free parameter. Rotation adds $`\mathrm{\Omega }`$ and magnetic fields add a reference field or equivalent as additional parameters. Departure from spherical symmetry adds the additional observational parameter of viewing angle. Consequently, observational tests have focused primarily on testing the simplest models.
### 4.6 Tests of Evolutionary Hypotheses and Theory
Both the empirical evolutionary sequence based on the class system and the detailed theoretical predictions can be tested by detailed observations. One can compute the expected changes in the continuum emission as a function of time for a particular model. Examples include plots of $`L`$ versus $`A_V`$ (Adams 1990), $`L`$ versus $`T_{bol}`$ (Myers et al 1998), and $`L_{mm}`$ versus $`L`$ (Saraceno et al 1996). For example, as time goes on, $`A_V`$ decreases, $`T_{bol}`$ increases, and $`L`$ reaches a peak and declines. At present the models are somewhat idealized and simplified, and different dust opacities need to be considered. Comparison of the number of objects observed in various parts of these diagrams with those expected from lifetime considerations can provide an overall check on evolutionary scenarios and provide age estimates for objects in different classes (e.g. André et al 2000). One can also test the models against observations of particular objects. However, models of source structure constrained only by the spectral energy distribution are not unique (Butner et al 1991, Men’shchikov & Henning 1997). Observational determination of $`n(r)`$ and $`v(r)`$ can apply more stringent tests.
Maps of continuum emission from dust can trace the column density as a function of radius quite effectively, if the temperature distribution is known. With an assumption about geometry, this information can be related to $`n(r)`$. For the Class $`-1`$ sources, with no central object, the core should be isothermal ($`T_D\sim 5`$–10 K) or warmer on the outside if exposed to the interstellar radiation field (Leung 1975, Spencer & Leung 1978). New results from ISO will put tighter limits on possible internal energy sources (e.g. Ward-Thompson et al 1998, André et al 2000). Based on small maps of submillimeter emission, Ward-Thompson et al (1994) found that Class $`-1`$ sources were not characterized by single power laws in column density; they fit the distributions with broken power laws, indicating a shallower distribution closer to the center. This distribution appears consistent with the interpretation that these are cores still forming by ambipolar diffusion. Maps at 1.3 mm (André et al 1996, Ward-Thompson et al 1999) confirm the early results: the column density is quite constant in an inner region ($`r<3000`$ to 4000 AU), with $`M\sim 0.7`$ M<sub>⊙</sub>. SCUBA maps of these sources are just becoming available, but they suggest similar conclusions (D Ward-Thompson, personal communication, Shirley et al 1998). In addition, some of the cores are filamentary and fragmented (Ward-Thompson et al 1999). Citing these observed properties, together with a statistical argument regarding lifetimes, Ward-Thompson et al (1999) now argue that ambipolar diffusion in subcritical cores does not match the data (see also André et al 2000).
For cores with central sources (Class $`>-1`$), $`T_D(r)`$ will decline with radius from the source. If the emission is in the Rayleigh-Jeans limit, $`I_\nu \propto T_D\kappa (\nu )N`$. If $`\kappa (\nu )`$ is not a function of $`r`$, $`I_\nu (\theta )\propto \int T_D(r)n(r)𝑑l`$, where the integration is performed along the line of sight ($`dl`$). If one avoids the central beam and any outer cut-off, assumes the optically thin expression for $`T_D(r)\propto r^{-q}`$, and fits a power law to $`I_\nu (\theta )\propto \theta ^{-m}`$, then $`p=m+1-q`$ (Adams 1991, Ladd et al 1991). With a proper calculation of $`T_D(r)`$ and convolution with the beam, one can include all the information (Adams 1991). This technique has been applied in the far-infrared (Butner et al 1991), but it is most useful at longer wavelengths. Ladd et al (1991) found $`p=1.7\pm 0.3`$ in two cores, assuming $`q=0.4`$, as expected for $`\beta =1`$ (equation 14). Results so far are roughly consistent with theoretical models, but results in this area from the new submillimeter cameras will explode about the time this review goes to press.
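The bookkeeping of the simplest version of this technique is short enough to show explicitly; the fitted intensity slope below is hypothetical, while $`q=0.4`$ is the value quoted above for $`\beta =1`$.

```python
# The power-law bookkeeping described above: with I_nu(theta) ∝ theta^-m fitted
# to the map, T_D(r) ∝ r^-q assumed, and optically thin emission, p = m + 1 - q.
# The fitted slope m below is hypothetical; q = 0.4 is the value quoted above
# for beta = 1 (equation 14).
m = 1.1
q = 0.4
p = m + 1 - q
print(f"p = {p:.1f}")   # 1.7, the scale of the Ladd et al (1991) result quoted above
```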
Another issue arises for cores with central objects. If a circumstellar disk contributes significantly to the emission in the central beam, it will increase the fitted value of $`m`$. Disks contribute more importantly at longer wavelengths, so the disk contribution is more important at millimeter wavelengths (e.g. Chandler et al 1995). Luckily, interferometers are available at those wavelengths, and observations with a wide range of antenna spacings can separate the contributions of a disk and envelope. Application of this technique by Looney et al (1997) to L1551 IRS5, a Class I source, reveals very complex structure (Figure 3): binary circumstellar disks (cf Rodríguez et al 1998), a circumbinary structure (perhaps a pseudo-disk), and an envelope with a density distribution consistent with a power law ($`p=1.5`$ to 2). This technique promises to be very fruitful in tracing the flow of matter from envelope to disk. Early results indicated that disks are more prominent in more evolved (Class II) systems (Ohashi et al 1996), but compact structures are detectable in some younger systems (Terebey et al 1993). Higher resolution observations and careful analysis will be needed to distinguish envelopes, pseudo-disks, and Keplerian disks (see Mundy et al 2000 for a review). At the moment, one can only say that disks in the Class 0 stage are not significantly more massive than disks in later classes (Mundy et al 2000). Meanwhile, the interferometric data confirm the tendency of envelope mass to decrease with class number inferred from single-dish data (Mundy et al 2000).
Similar techniques have been used for maps of molecular line emission. For example, <sup>13</sup>CO emission has been used to trace column density in the outer regions of dark clouds. With an assumption of spherical symmetry, the results favor $`p\approx 2`$ in most clouds (Snell 1981, Arquilla & Goldsmith 1985). The <sup>13</sup>CO lines become optically thick in the inner regions; studies with higher spatial resolution in rarer isotopomers, like C<sup>18</sup>O or C<sup>17</sup>O, tend to show somewhat shallower density distributions than expected from the standard model (Zhou et al 1994b, Wang et al 1995). Depletion in the dense, cold cores may still confuse matters (e.g. Kuiper et al 1996, §3.1). Addressing the question of evolution, Ladd et al (1998) used two transitions of C<sup>18</sup>O and C<sup>17</sup>O to show that $`N`$ toward the central source declines with $`T_{bol}`$, with a power between $`0.4`$ and $`1.0`$. To reproduce the inferred rapid decrease in mass with time, they suggest higher early mass loss than predicted by the standard model.
By observing a series of lines of different critical density, modeling those lines with a particular cloud model and appropriate radiative transport, and predicting the emission into the beams used for the observations, one can constrain the run of density more directly. Studies using two transitions of H<sub>2</sub>CO have again supported $`p=2\pm 0.5`$ on relatively large scales (Loren et al 1983, Fulkerson & Clark 1984). When interferometry of H<sub>2</sub>CO was used to improve the resolution on one core, $`p`$ appeared to decrease at small $`r`$ (Zhou et al 1990), in agreement with the model of Shu (1977). Much of the recent work on this topic has involved testing of detailed collapse models, including velocity fields and the complete density law, rather than a single power law, as described in the next section.
### 4.7 Collapse
The calculation of line profiles as a function of time (Zhou 1992) for the collapse models of Shu (1977) and Larson (1969) and Penston (1969), along with claims of collapse in a low-mass star forming region (Walker et al 1986), reinvigorated the study of protostellar collapse. Collapsing clouds will depart from the linewidth-size relation (§4.3), having systematically larger linewidths for a given size (Zhou 1992). Other simulations of line profiles range from a simple two-layer model (Myers et al 1996) to detailed calculations of radiative transport (Choi et al 1995, Walker et al 1994, Wiesemeyer 1997, 1999).
Zhou et al (1993) showed that several lines of CS and H<sub>2</sub>CO observed towards B335, a globule with a Class 0 source, could be fitted very well using the exact $`n(r)`$ and $`v(r)`$ of the inside-out collapse model. Using a more self-consistent radiative transport code, Choi et al (1995) found slightly different best-fit parameters. Using a sound speed determined from lines away from the collapse region, the only free parameters were the time since collapse began and the abundance of each molecule. With several lines of each molecule, the problem is quite constrained (Figure 4). This work was important in gaining acceptance for the idea that collapse had finally been seen.
Examination of the line profiles in Figure 4 reveals that most are strongly self-absorbed. Recall that the overall collapse idea of Goldreich and Kwan (1974) was designed to avoid self-absorbed profiles. The difference is that Goldreich and Kwan assumed that $`v(r)\propto r`$, so that every velocity corresponded to a single point along the line of sight. In contrast, the inside-out collapse model predicts $`v(r)\propto r^{-0.5}`$ inside a static envelope. If the line has substantial opacity in the static envelope, it will produce a narrow self-absorption at the velocity centroid of the core (Figure 5). The other striking feature of the spectra in Figure 4 is that the blue-shifted peak is stronger than the red-shifted peak. This “blue” profile occurs because the $`v(r)\propto r^{-0.5}`$ velocity field has two points along any line of sight with the same Doppler shift (Figure 6). For a centrally peaked temperature and density distribution, lines with high critical densities will have higher $`T_{\mathrm{ex}}`$ at the point closer to the center. If the line has sufficient opacity at the relevant point in the cloud, the high $`T_{\mathrm{ex}}`$ point in the red peak will be obscured by the lower $`T_{\mathrm{ex}}`$ one, making the red peak weaker than the blue peak (Figure 6). Thus a collapsing cloud with a velocity and density gradient similar to those in the inside-out collapse model will produce blue profiles in lines with suitable excitation and opacity properties. A double-peaked profile with a stronger blue peak or a blue-skewed profile relative to an optically thin line then becomes a signature of collapse. These features were discussed by Zhou & Evans (1994) and, in a more limited context, by Snell & Loren (1977) and Leung & Brown (1977).
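A toy calculation can make the origin of the blue asymmetry concrete. The sketch below is a deliberately simplified two-slab model in the spirit of the simple models mentioned above (e.g. Myers et al 1996), not their actual parameterization: a warmer rear layer falls toward the observer (blueshifted), a cooler front layer falls away (redshifted) and absorbs the red side of the emission from behind, with the temperature contrast standing in for the inward rise of $`T_{\mathrm{ex}}`$. All numbers are invented.

```python
import numpy as np

# Toy two-slab illustration of the blue asymmetry.  A warmer rear layer falls
# toward the observer (blueshifted); a cooler front layer falls away from the
# observer (redshifted) and absorbs the red side of the emission from behind.
# The temperature contrast stands in for the inward rise of T_ex.  All numbers
# are invented.
v = np.linspace(-1.0, 1.0, 400)                 # km/s relative to systemic
v_in, sigma = 0.15, 0.20                        # infall speed, velocity dispersion
tau_front = 2.0 * np.exp(-0.5 * ((v - v_in) / sigma) ** 2)   # redshifted opacity
tau_rear  = 2.0 * np.exp(-0.5 * ((v + v_in) / sigma) ** 2)   # blueshifted opacity
T_front, T_rear = 4.0, 12.0                     # excitation temperatures (K)

T_B = (T_rear * (1.0 - np.exp(-tau_rear)) * np.exp(-tau_front)
       + T_front * (1.0 - np.exp(-tau_front)))

print(f"T_B at -v_in: {np.interp(-v_in, v, T_B):.1f} K")   # brighter (blue) side
print(f"T_B at +v_in: {np.interp(+v_in, v, T_B):.1f} K")   # dimmer (red) side
```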
Of course, the collapse interpretation of a blue profile is not unique. Such profiles can be produced in a variety of ways. To be a plausible candidate for collapse, a core must also show these features: an optically thin line must peak between the two peaks of the opaque line; the strength and skewness should peak on the central source; and the two peaks should not be caused by clumps in an outflow. The optically thin line is particularly crucial, since two cloud components, for example, colliding fragments, could produce the double-peaked blue profile, but they would also produce a double-peaked profile in the optically thin line.
Rotation, combined with self-absorption, can create a line profile like that of collapse (Menten et al 1987, Adelson & Leung 1988), but toward the center of rotation, the line would be symmetric (Zhou 1995). Rotating collapse can cause the line profiles to shift from blue to red-skewed on either side of the rotation axis, with the sign of the effect depending on how the rotation varies with radius (Zhou 1995). Maps of the line centroid can be used to separate rotation from collapse (Adelson & Leung 1988, Walker et al 1994).
To turn a collapse candidate into a believable case of collapse, one has to map the line profiles, account for the effects of outflows, model rotation if present, and show that a collapse model fits the line profiles. To date this has been done only for a few sources: B335 (Zhou et al 1993, Choi et al 1995), L1527 (Myers et al 1995, Zhou et al 1996, Gregersen et al 1997), and IRAS 16293 (Zhou 1995, Narayanan et al 1998). Of this group, only IRAS 16293, rotating about 20 times faster than B335, is known to be a binary (Wootten 1989, Mundy et al 1992), supporting the idea that faster rotation is more likely to produce a binary. Mathieu (1994) reviews binarity in the pre-main-sequence stage, and Mundy et al (2000) discuss recent evidence on the earlier stages.
Interferometric observations have also revealed infall motions and rotational motions on scales of $`\sim 1000`$ AU in several sources (e.g. Ohashi et al 1997b; Momose et al 1998). Such studies can reveal how matter makes the transition from infall to a rotating disk. Inevitably, irregularities in the density and velocity fields will confuse matters in real sources, and these may be more noticeable with interferometers. Outflows are particularly troublesome (Hogerheijde et al 1998). Extreme blue/red ratios are seen in interferometric observations of HCO<sup>+</sup> and HCN $`J=1\rightarrow 0`$ lines, which are difficult to reproduce with standard models (Choi et al 1999). Even in B335, the best case for collapse, Velusamy et al (1995) found evidence for clumpy structure within the overall gradients. In addition, very high resolution observations of CS $`J=5\rightarrow 4`$ emission toward B335 are not consistent with predicted line profiles very close to the forming star (Wilner et al 1999); either CS is highly depleted in the infalling gas, or the velocity or density fields depart from the model.
If, for the sake of argument, we accept a blue profile as a collapsing core, can we see any evolutionary trends? A number of surveys for blue profiles have been undertaken recently. Gregersen et al (1997) found 9 blue profiles in 23 Class 0 sources, using the $`J=3\rightarrow 2`$ and $`J=4\rightarrow 3`$ lines of HCO<sup>+</sup> and H<sup>13</sup>CO<sup>+</sup>. After consideration of possible confusion by outflows, etc., they identified 6 sources as good candidates for collapse. Mardones et al (1997) extended the search to Class I sources with $`T_{bol}<200`$ K, using CS and H<sub>2</sub>CO lines. They introduced the line asymmetry as a collapse indicator:
$$\delta V=(V_{thick}V_{thin})/\mathrm{\Delta }V_{thin},$$
(16)
where $`V_{thick}`$ is the velocity of the peak of the opaque line, $`V_{thin}`$ is the velocity of the peak of the optically thin line, and $`\mathrm{\Delta }V_{thin}`$ is the linewidth of the thin line. They confirmed many of the collapse candidates found by Gregersen and identified 6 more, but they found very few collapse candidates among the Class I sources. The difference could be caused by using different tracers, since the CS and H<sub>2</sub>CO lines are less opaque than the HCO<sup>+</sup> lines. To remove this uncertainty, Gregersen (1998) surveyed the Class I sources of Mardones et al (1997) in HCO<sup>+</sup>. Using $`\delta V`$ as the measure, the fraction of blue profiles did not decrease substantially between Class 0 and Class I. Most of these line profiles need further observations before they become bona fide candidates.
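Equation (16) is trivial to evaluate; the sketch below does so for a hypothetical pair of lines. The velocities and linewidth are invented, and the significance threshold mentioned in the comment is recalled from the literature rather than stated in the text above.

```python
def delta_v(v_thick, v_thin, dv_thin):
    """Line asymmetry of equation (16): (V_thick - V_thin) / Delta V_thin."""
    return (v_thick - v_thin) / dv_thin

# Hypothetical example: the opaque line peaks 0.2 km/s blueward of the thin line.
# Values of |delta V| larger than roughly 0.25 are often taken as significant
# skewness (an assumption quoted from memory, not from the text above).
dV = delta_v(v_thick=6.05, v_thin=6.25, dv_thin=0.55)
print(f"delta V = {dV:.2f}")   # -0.36: blue-skewed, a collapse-candidate signature
```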
When does collapse begin? Surveys of the Class $`-1`$ cores might reveal very early stages of collapse. In the inside-out collapse picture, blue profiles should appear, if at all, only toward the center. In fact, blue profiles have been seen in a substantial number of these cores (Gregersen 1998, Lee et al 1999) and maps of one, L1544, show that the blue profiles are very extended spatially (Tafalla et al 1998). Clearly, extended blue profiles in Class $`-1`$ cores do not fit expectations for early collapse, and Tafalla et al argue that the velocities are higher than expected for ambipolar diffusion. If the regions producing the blue and red peaks are indeed approaching one another, they are forming a core more rapidly than in standard models, suggesting that some new ideas may be necessary (e.g. Nakano 1998, Myers & Lazarian 1998). For a current review of this field, see Myers et al (2000).
### 4.8 Summary of Isolated Star Formation
Distinct cores can be identified in tracers of dense ($`n\sim 10^4`$ cm<sup>-3</sup>) gas; these cores are frequently associated with star formation. There is no clear evidence that they are magnetically subcritical, and some kinematic evidence suggests that the decay of turbulence, rather than ambipolar diffusion, is the critical feature. An empirical evolutionary sequence, based on the spectral appearance of dust emission, and detailed theoretical models are now being tested by observations. The spatial distribution of dust emission is providing important tests by probing $`n(r)`$. Predictions of the evolution of spectral lines during collapse are available for the simplest theory, and observations of some sources match the predictions quite well. Evidence of collapse is now strong in a few cases, and surveys for distinctive line profiles have revealed many more possible cases.
## 5 CLUSTERED STAR FORMATION AND MASSIVE STARS
In this section, we will address the issues of clustered formation, regardless of mass, and high-mass star formation, which seems to occur exclusively in clusters. In this review, the term cluster refers to a group of forming stars, whether or not the eventual outcome is a bound cluster. Since high mass stars are rare, the nearest examples are more distant than is the case for low mass star formation. Together with the fact that they form in clusters, the greater distance makes it difficult to isolate individual events of star formation. On the other side of the balance, massive stars are more easily detectable at large distances because luminosity is such a strong function of mass. Heating of the surroundings makes them strong emitters in both dust and many spectral lines; the spectra of regions forming massive stars are often very rich, reflecting both a high excitation state and, in many cases, enhanced abundances, as complex chemistry is driven by elevated temperatures (van Dishoeck & Blake 1998). These features led early studies, when sensitivity was poor, to concentrate on regions forming massive stars. However, most of these advantages arise because the star is strongly influencing its surroundings; if we wish to know the preconditions, we will be misled. This problem is aggravated by the fast evolution of massive stars to the main sequence. Reviews of the topic of clustered star formation can be found in Elmegreen (1985), Lada & Lada (1991), and Lada (1999). Reviews focusing on the formation of massive stars include Churchwell (1993, 1999), Walmsley (1995), Stahler et al (2000), and Kurtz et al (2000).
### 5.1 Theoretical Issues
Some of the primary theoretical issues regarding the formation of massive stars have been reviewed by Stahler et al (2000). First, what is the relevant dividing line between low-mass and high-mass stars? For the star formation problem, the question is how far in mass the scenario for low-mass stars can be extended. Theoretically, the limit is probably about 10 M<sub>⊙</sub>, where stars reach the main sequence before the surrounding envelope is dissipated. Observations of the physical conditions in regions forming intermediate mass stars (Herbig Ae/Be stars and their more embedded precursors) can reveal whether modifications are needed at even lower masses. Since accretion through disks plays a crucial role in the standard model, it is important to know the frequency and properties of disks around more massive stars.
Stars as massive as 100 M<sub>⊙</sub> seem to exist (Kudritzki et al 1992), but radiation pressure from the rapidly evolving stellar core should stop accretion before such masses can be built (e.g. Wolfire & Cassinelli 1987). In addition, massive stars produce very strong outflows (Shepherd & Churchwell 1996, Bachiller 1996). Since standard accretion theory cannot produce rates of mass accretion high enough to overwhelm these dispersive effects, some new effects must become important.
Related questions concern the formation of clusters. To what extent can the ideas of isolated core collapse be applied if there are competing centers of collapse nearby? If the collapse to form massive stars is supercritical, a whole region may collapse and fragment to form many stars.
The fact that the most massive stars are found near the centers of forming clusters has led to the suggestion that massive stars are built by collisional coalescence of stars or protostars (Bonnell et al 1997, 1998). This scheme requires high densities ($`n_{*}\sim 10^4`$ stars pc<sup>-3</sup>); for a mean stellar mass of 1 M<sub>⊙</sub>, this corresponds to $`n\approx 2\times 10^5`$ cm<sup>-3</sup>. Even higher stellar densities are seen in the core of the Orion Nebula Cluster (Hillenbrand & Hartmann 1998).
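The conversion behind the quoted figure is simple and worth making explicit; the sketch below assumes a mean mass per particle of 2.8 m<sub>H</sub> (H<sub>2</sub> plus helium), which is why it returns a value slightly below the $`2\times 10^5`$ cm<sup>-3</sup> quoted above.

```python
# The unit conversion behind the figure quoted above: 1e4 stars pc^-3 with a
# mean stellar mass of 1 M_sun, expressed as an equivalent H2 number density.
# A mean mass per particle of 2.8 m_H (H2 plus helium) is assumed, which is why
# the result comes out slightly below the ~2e5 cm^-3 quoted in the text.
M_SUN = 1.989e33      # g
PC = 3.086e18         # cm
m_H = 1.67e-24        # g

rho = 1e4 * M_SUN / PC**3          # g cm^-3
n = rho / (2.8 * m_H)
print(f"n ~ {n:.1e} cm^-3")        # ~1.4e5
```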
The special problems of making the most massive stars are a subset of the larger question of explaining the mass distribution of all stars. There may be variations in the IMF between clusters (Scalo 1998), which can test theories. Hipparcos observations of nearby OB associations have extended the membership to lower mass stars (de Zeeuw et al 1999), suggesting total masses of a few $`\times 10^3`$ M<sub>⊙</sub>.
The main questions these issues raise for observations are whether the mass and density of cores are sufficient for forming clusters and massive stars, and whether the mass distribution of clumps in a star forming region can be related to the mass distribution of stars ultimately formed.
### 5.2 Overall Cloud and Core Properties
What do we know about the general properties of the galactic clouds? The broadest picture is provided by surveys of CO and <sup>13</sup>CO. Surveys of significant areas of the Galaxy indicate that the power-law distribution in mass seen for small clouds ($`dN(M)\propto M^{-\alpha }dM`$, §3.1) continues up to a cutoff at $`M\sim 6\times 10^6`$ M<sub>⊙</sub> (Williams & McKee 1997). Studies by Casoli et al (1984), Solomon & Rivolo (1989), and Brand & Wouterloot (1995) find $`1.4\lesssim \alpha \lesssim 1.8`$ over both inner and outer Galaxy. Extinction surveys find flatter slopes (Scalo 1985), but Scalo & Lazarian (1996) suggest that cloud overlap affects the extinction surveys. The fact that $`\alpha <2`$ implies that most of the mass is in the largest structures, although there are issues of how to separate clouds at the largest scales. Since the star formation rate per unit mass, measured by CO, appears not to depend on cloud mass (Mead et al 1990, Evans 1991), the mass distribution supports the idea that most stars form in massive clouds (Elmegreen 1985). However, the enormous spread ($`>10^2`$) in star formation rate per unit mass at any given mass, together with the fact that most of the molecular gas is sterile, suggests that comparisons to overall cloud masses, measured by CO, are not particularly relevant.
Surveys in molecular lines indicative of denser gas have generally been biased towards signposts of star formation. Exceptions are the CS $`J=2\rightarrow 1`$ survey of L1630 (Lada et al 1991) and the CS $`J=1\rightarrow 0`$ and $`J=2\rightarrow 1`$ surveys of L1641 (Tatematsu et al 1993, 1998). These two clouds, also called Orion B and Orion A, are adjacent. In both cases, the CS $`J=2\rightarrow 1`$ maps showed more contrast than maps of <sup>13</sup>CO (Bally et al 1987), with CS $`J=1\rightarrow 0`$ being somewhat intermediate. Maps of higher-$`J`$ transitions have been less complete, but show still less area covered by emission. The CS surveys detected less than 20% of the total mass in both clouds (Lada et al 1991, Tatematsu, personal communication). The $`J=2\rightarrow 1`$ emission in L1641 is somewhat smoother than the $`J=2\rightarrow 1`$ emission from L1630 (Tatematsu et al 1998), and this difference may be reflected in the distribution of star formation. Star formation in L1641 appears to include a distributed component (Strom et al 1993); in contrast, star formation in L1630 is tightly concentrated in clusters associated with massive cores of dense gas (Lada 1992, Li et al 1997).
Excitation analysis of CS lines with higher critical density in L1630 shows that the star-forming regions all contain gas with $`n\sim 10^5`$ cm<sup>-3</sup> (Lada et al 1997). These results suggest that surveys in lines of high $`n_c(j\rightarrow k)`$ are relevant for characterizing star formation regions. Of the possible tracers, CS and NH<sub>3</sub> have been most widely surveyed. In comparison to cores in Taurus, where only low-mass stars are forming, cores in the Orion clouds tend to be more massive and to have larger linewidths when observed with the same tracer (CS: Tatematsu et al 1993, NH<sub>3</sub>: Harju et al 1993). The differences are factors of 2–4 for the majority of the cores, but larger for the cores near the Orion Nebula and those forming clusters in L1630 (Lada et al 1991).
CS transitions have been surveyed toward Ultra-Compact (UC) HII regions, H<sub>2</sub>O masers, or luminous IRAS sources (see Kurtz et al 2000). Since the IRAS survey became available, most samples are drawn from the IRAS catalog with various color selection criteria applied. The most complete survey (Bronfman et al 1996) was toward IRAS sources with colors characteristic of UC HII regions (Wood & Churchwell 1989) over the entire Galactic plane. Bronfman et al found CS $`J=2\rightarrow 1`$ emission (see Table 1 for density sensitivity) in 59% of 1427 IRAS sources, and the undetected sources were either weak in the far-infrared or had peculiar colors. Searches toward H<sub>2</sub>O masers have used the catalogs of Braz & Epchtein (1983) and Cesaroni et al (1988). Surveys of CS $`J=2\rightarrow 1`$ (Zinchenko et al 1995, Juvela 1996) toward southern H<sub>2</sub>O masers found detection rates close to 100%, suggesting that dense, thermally excited gas surrounds the compact, ultra-dense regions needed to produce H<sub>2</sub>O masers. The detection rate drops in higher $`J`$ transitions of CS (Plume et al 1992, 1997) but is still 58% in the CS $`J=7\rightarrow 6`$ line, which probes higher densities (Table 1). An LVG, multitransition study of CS lines found $`\mathrm{log}n(\mathrm{cm}^{-3})=5.9`$ for 71 sources and a similar result from a smaller sample using C<sup>34</sup>S data (Plume et al 1997). Densities derived assuming LVG fall between the average and maximum densities in clumpy models with a range of densities (Juvela 1997, 1998).
Maps of the cores provide size and mass information. Based on the sizes and masses of 28 cores mapped in the CS $`J=2\rightarrow 1`$ line (Juvela 1996), one can compute a mean size, $`l=1.2\pm 0.5`$ pc, mean virial mass, $`M_V\approx 5500`$ M<sub>⊙</sub>, and $`M_N\approx 4900`$ M<sub>⊙</sub>. While the two mass estimates agree on average, there can be large differences in individual cases. Remarkably, cloud structure does not introduce a big uncertainty into the cloud masses: using a clumpy cloud model (see §5.4), Juvela (1998) found that $`M_N`$ increased by a factor of 2 on average compared to homogeneous models and agreed with $`M_V`$ to within a factor of 2. Plume et al (1997) obtained similar results from strip maps of CS $`J=5\rightarrow 4`$: $`l=1.0\pm 0.7`$ pc (average over 25 cores); $`M_V=3800`$ M<sub>⊙</sub> (16 cores). As usual, mean values must be regarded with caution; there is a size distribution. As cores with weaker emission are mapped, the mean size decreases; an average of 30 cores with full maps of $`J=5\rightarrow 4`$ emission gives $`l=0.74\pm 0.56`$ pc, with a range of 0.2 to 2.8 pc (Y Shirley, unpublished results).
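For orientation, the sketch below shows the kind of virial estimate underlying $`M_V`$, using the uniform-sphere coefficient $`M_V=5\sigma ^2R/G`$; the coefficient and the representative linewidth are assumptions (the cited works may use somewhat different conventions), but with a 6.5 km s<sup>-1</sup> linewidth and $`l\approx 1.2`$ pc it reproduces the few-thousand-M<sub>⊙</sub> scale quoted above.

```python
import numpy as np

# Sketch of a virial mass estimate of the kind quoted above, using the
# uniform-sphere coefficient M_V = 5 sigma^2 R / G.  Both the coefficient and
# the representative linewidth are assumptions (the cited works may use
# somewhat different conventions).
G, M_SUN, PC = 6.674e-8, 1.989e33, 3.086e18

fwhm = 6.5e5                             # cm/s; representative linewidth, 6.5 km/s
sigma = fwhm / np.sqrt(8.0 * np.log(2))  # 1-D velocity dispersion
R = 0.6 * PC                             # radius for a mean size l ~ 1.2 pc
M_V = 5.0 * sigma**2 * R / G
print(f"M_V ~ {M_V / M_SUN:.0f} M_sun")  # ~5e3 M_sun, the scale quoted above
```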
Churchwell et al (1992) surveyed 11 UC HII regions for CS $`J=2\rightarrow 1`$ and $`J=5\rightarrow 4`$ emission, leading to estimates of $`n\sim 10^5`$ cm<sup>-3</sup>. Cesaroni et al (1991) surveyed 8 UC HII regions in three transitions of CS and C<sup>34</sup>S and estimated typical sizes of 0.4 pc, masses of 2000 M<sub>⊙</sub>, and densities of $`10^6`$ cm<sup>-3</sup>. More extensive surveys have been made in NH<sub>3</sub>; Churchwell et al (1990) found NH<sub>3</sub> $`(J,K)=(1,1)`$ and (2,2) emission from 70% of a sample of 84 UC HII regions and IRAS sources with similar colors. They derived $`T_K`$, finding a peak in the distribution around 20 K, but a significant tail to higher values. Further studies in the (4,4) and (5,5) lines toward 16 UC HII regions with strong (2,2) emission (Cesaroni et al 1992) detected a high fraction. Estimates for $`T_K`$ ranged from 64 to 136 K, with sizes of about 0.5 pc. Two sources indicated much higher densities and NH<sub>3</sub> abundances. Follow-up studies with the VLA (Cesaroni et al 1994, 1998) revealed small ($`\sim 0.1`$ pc), hot ($`T_K\sim 50`$–200 K), dense ($`n=10^7`$ cm<sup>-3</sup>) regions with enhanced NH<sub>3</sub> abundances. These hot cores (discussed below) are slightly displaced from the UC HII regions, but coincide with H<sub>2</sub>O masers.
Magnetic field strengths have been measured with the Zeeman effect toward about 10 regions of massive star formation. The fields are substantially stronger than the fields seen in isolated, low-mass cores, but the masses are also much higher. In most cases the mass to flux ratio is comparable to the critical ratio, once geometrical effects are considered (Crutcher 1999b). Given the uncertainties and sample size, it is too early to decide if regions forming massive stars are more likely to be supercritical than regions forming only low-mass stars.
The ionization fraction in the somewhat more massive Orion cores appears to be very similar to that in low-mass cores: $`-7.3<\mathrm{log}x_e<-6.9`$ (Bergin et al 1999). The most massive cores in their sample have $`x_e\sim 10^{-8}`$, as do some of the massive cores studied by de Boisanger et al (1996). Expressed in terms of column density, the decline in $`x_e`$ appears around $`N\approx 3\times 10^{22}`$ cm<sup>-2</sup>. At $`x_e=10^{-8}`$, $`t_{AD}=7\times 10^5`$ yr, about $`0.1t_{AD}`$ in isolated, low-mass cores. Even if the massive cores are subcritical, their evolution should be faster than that of low-mass cores.
To summarize, the existing surveys show ample evidence that massive star formation usually takes place in massive ($`M>10^3`$ M<sub>⊙</sub>), dense ($`n\sim 10^6`$ cm<sup>-3</sup>) cores, consistent with the requirements inferred from the study of young clusters and associations, and with conditions needed to form the most massive stars by mergers. Cores with measured $`B_z`$ seem to be near the boundary between subcritical and supercritical cores.
### 5.3 Evolutionary Scenarios and Detailed Theories
To what extent can an evolutionary scenario analogous to the class system be constructed for massive star formation? Explicit attempts to fit massive cores into the class system have relied on surveys of IRAS sources. Candidates for massive Class 0 objects, with $`L>10^3`$ L<sub>⊙</sub>, have been found (Wilner et al 1995, Molinari et al 1998). One difficulty with using the shape of the spectral energy distribution for massive star formation is that dense, obscuring material usually surrounds objects even after they have formed, and a single star may be quite evolved but still have enough dust in the vicinity to have the same spectral energy distribution as a much younger object. The basic problem is the difficulty in isolating single objects. Also, the role of disks in massive regions is less clear, and they are unlikely to dominate the spectrum, as they do in low mass Class II sources. Other markers, such as the detection of radio continuum emission, must be used as age indicators. Hot cores provide obvious candidates to be precursors of UC HII regions, but some have embedded UC HII regions and may be transitional (Kurtz et al 2000). The chemical state of massive cores may also provide an evolutionary sequence; Helmich et al (1994) suggested an evolutionary ordering of three sources in the W3 region based on their molecular spectra and chemical models (see van Dishoeck & Blake 1998).
While theories for clustered and massive star formation are much less developed, some steps have been taken (e.g. Bonnell et al 1998, Myers 1998). The larger $`\mathrm{\Delta }v`$ in regions forming massive stars implies that turbulence must be incorporated into the models. Myers & Fuller (1992) suggested a “thermal-non-thermal” (TNT) model, in which $`n(r)`$ is represented by the sum of two power-laws and the term with $`p=1`$ dominates outside the radius ($`r_{TNT}`$) at which turbulent motions dominate thermal motions. McLaughlin & Pudritz (1997) develop the theory of a logatropic sphere, which has a density distribution approximated by $`p=1`$. Collapse in such a configuration leads to power laws in the collapsing region with similar form to those in the collapsing isothermal sphere, but with higher densities and lower velocities. These ideas lead to accretion rates that increase with time and timescales for forming massive stars that are much weaker functions of the final stellar mass than is the case for the isothermal sphere (§4.5). Recent simulations of unmagnetized fragmentation that follow the interaction of clumps find that the mass spectrum of fragments steepens from $`\alpha =1.5`$ to a lognormal distribution of the objects likely to form stars (e.g. Klessen et al 1998). To avoid an excessive global star formation rate (§2) and distortion of the clump mass spectrum in the bulk of the cloud, this process must be confined to star-forming regions in clouds.
### 5.4 Filaments, Clumps, Gradients, and Disks
An important issue is whether the dense cores have overall density gradients or internal structures (clumps) that are likely to form individual stars and whether the mass distribution is like that of stars. Unfortunately, the terms “clumps” and “cores” have no standard usage; I will generally use “cores” to refer to regions that appear in maps of high-excitation lines and “clumps” to mean structures inside cores, except where a different usage is well established (e.g. “hot cores”). Myers (1998) has suggested the term “kernels” to describe clumps within cores. Cores themselves are usually embedded in structures traced by lines with lower critical density, and these are also called clumps by those who map in these lines. Many of these low-density structures are quite filamentary in appearance: examples include the <sup>13</sup>CO maps of L1641 (Orion A) of Bally et al (1987). In some cases, this filamentary structure is seen on smaller scales in tracers of high density or column density (e.g. Johnstone & Bally 1999, Figure 2).
The clumpy structure of molecular clouds measured in low-excitation lines suggests that dense cores will be clumpy as well. Suggestions of clumpy structure came from early work comparing densities derived from excitation analysis in different tracers, but smooth density gradients provided an alternative (e.g. Evans 1980). Multitransition studies of three cores in CS (Snell et al 1984), C<sup>34</sup>S (Mundy et al 1986) and H<sub>2</sub>CO (Mundy et al 1987) found no evidence for overall density gradients; the same high densities were derived over the face of the core, while the strength of the emission varied substantially. This was explained in a clumpy model with clump filling factors of the dense gas $`f_v\approx 0.03`$ to 0.3, based on a comparison of $`M_V`$ with $`M_n`$ (Snell et al 1984). This comparison forms the basis for most claims of unresolved clumps.
Observations with higher resolution support the idea of clumps postulated by Snell et al (1984). For example, Stutzki and Güsten (1990) deconvolved 179 clumps from a map of C<sup>18</sup>O $`J=2\rightarrow 1`$ emission near M17. Because of overlap, far fewer clumps are apparent to the eye; assumptions about the clump shape and structure may affect the deconvolution. Maps of the same source in several CS and C<sup>34</sup>S lines (Wang et al 1993) could be reproduced with the clump catalog of Stutzki & Güsten, but only with densities about 5 times higher than they found. Thus the clumps themselves must have structure, either a continuation of clumpiness or smooth gradients. Since the inferred clumps are now similar in size to the cores forming low mass stars, a natural question is whether massive cores are fragmented into many clumps which can be modeled as if they were isolated cores. In favor of this view, Stutzki and Güsten noted that the Jeans length was similar to the size of their clumps.
A significant constraint on this picture is provided by the close confinement of the clumps; unlike the picture of isolated core formation, the sphere of influence of each clump will be limited by its neighbors. A striking example is provided by the dust continuum maps of the $`\rho `$ Ophiuchi cloud (Figure 1), our nearest example of cluster formation, albeit with no very massive stars. Within about six cores of size 0.2 pc, Motte et al (1998) find about 100 structures with sizes of 1000-4000 AU. They deduce a fragmentation scale of 6000 AU, five times smaller than isolated cores in Taurus. Thus the “feeding zone” of an individual clump is considerably less and the evolution must be more dynamic, with frequent clump-clump interactions, than is the case for isolated star formation. This picture probably applies even more strongly to the more massive cores. In $`\rho `$ Ophiuchi, the clump mass spectrum above 0.5 M<sub>⊙</sub> steepens to $`\alpha =2.5`$, close to the value for stars (Motte et al 1998), in agreement with predictions of Klessen et al (1998). A similar result ($`\alpha =2.1`$) is found in Serpens, using millimeter interferometry (Testi & Sargent 1998). While more such studies are needed, these results are suggesting that dust continuum maps do trace structures that are likely precursors of stars, opening the study of the origin of the IMF to direct observational study.
Some of the less massive, relatively isolated cores, such as NGC2071, S140, and GL2591, have been modeled with smooth density and temperature gradients (Zhou et al 1991, 1994b, Carr et al 1995, van der Tak et al 1999). Models with gradients can match the relative strengths of a range of transitions with different excitation requirements, improving on homogeneous models. Zhou et al (1994) summarized attempts to deduce gradients and found preliminary evidence that, as core mass increases, the tendency is first toward smaller values of $`p`$. The most massive cores showed little evidence for any overall gradient and more tendency toward clumpy substructure. This trend needs further testing, but it is sensible if more massive cores form clusters. Lada et al (1997) found that the L1630 cores forming rich embedded clusters with high efficiency tended to have larger masses of dense ($`n>10^5`$ cm<sup>-3</sup>) gas, but a lower volume filling factor of such gas, indicating more fragmentation.
However, the line profiles of optically thick lines predicted by models with gradients are usually self-absorbed, while the observations rarely show this feature in massive cores. Clumps within the overall gradients are a likely solution. The current state of the art in modeling line profiles in massive cores is the work of Juvela (1997, 1998), who has constructed clumpy clouds from both structure tree and fractal models, performed 3-D radiative transport and excitation, and compared the model line profiles to observations of multiple CS and C<sup>34</sup>S transitions in massive cores. He finds that the clumpy models match the line profiles much better than non-clumpy models, especially if macroturbulence dominates microturbulence. Structure trees (Houlahan & Scalo 1992) match the data better than fractal models, but overall density and/or temperature gradients with $`p+q\approx 2`$ are needed in addition to clumps.
The study of gradients versus clumps in regions forming intermediate mass stars could help to determine whether conditions change qualitatively for star formation above some particular mass and how this change is related to the outcome. Using near-infrared observations of regions with Herbig Ae/Be stars, Testi et al (1997) found that the cluster mode of star formation becomes dominant when the most massive star has a spectral type earlier than B7. Studies of the far-infrared emission from dust remaining in envelopes around Herbig Ae/Be stars found values of $`p`$ ranging from 0.5 to 2 (Natta et al 1993). Maps of dust continuum emission illustrate the difficulties: the emission may not peak on the visible star, but on nearby, more embedded objects (Henning et al 1998, Di Francesco et al 1998). Detailed models of several suitable sources yield $`p=0.75`$ to 1.5 (Henning et al 1998, Colomé et al 1996). Further work is needed to determine whether a change in physical conditions can be tied to the change to cluster mode.
Many Herbig Ae stars have direct evidence for disks from interferometric studies of dust emission (Mannings & Sargent 1997), though fewer Herbig Be stars have such direct evidence (Di Francesco et al 1997). For a review, see Natta et al (2000). Disks may be more common during more embedded phases of B star formation; Shepherd and Kurtz (1999) have found a large (1000 AU) disk around an embedded B2 star. The statistics of UC HII regions may provide indirect evidence for disks around more massive stars. Since such regions should expand rapidly unless confined, the large number of such regions posed a puzzle (Wood & Churchwell 1989). Photoevaporating disks have been suggested as a solution (Hollenbach et al 1994). Such disks have also been used to explain very broad recombination lines (Jaffe & Martín-Pintado 1999). Kinematic evidence for disks will be discussed in §5.5.
A particular group of clumps (or cores) deserves special mention: hot cores (e.g. Ohishi 1997). First identified in the Orion cloud (Genzel & Stutzki 1989), about 20 are now known (see Kurtz et al 2000). They are small regions ($`l\sim 0.1`$ pc), characterized by $`T_K>100`$ K, $`n>10^7`$ cm<sup>-3</sup>, and rich spectra, probably reflecting enhanced abundances, as well as high excitation (van Dishoeck & Blake 1998). Theoretical issues have been reviewed by Millar (1997) and Kaufman et al (1998), who argue that they are likely to be heated internally, but they often lack radio continuum emission. Since dynamical timescales for gas at such densities are short, these may plausibly be precursors to the UC HII regions.
The evidence for flatter density distributions in regions of intermediate mass supports the relevance of models like the TNT or logatropic sphere models in massive regions (§5.3), but it will be important to study this trend with the same methods now being applied to regions forming low mass stars, with due regard for the greater distance to most regions forming massive stars. The increasingly fragmented structure in more massive cores and the increased frequency of clusters above a certain mass are consistent with a switch to a qualitatively different mode of star formation, for which different theories are needed. Finally, the common appearance of filaments may support a continuing role for turbulence in dense regions, since simulations of turbulence often produce filamentary structure (e.g. Scalo et al 1998).
### 5.5 Kinematics
Lada et al (1991) found only a weak correlation between linewidth and size, disappearing entirely for a different clump definition, in the L1630 cores (Goodman Type 2 relation, see §4.3). Caselli & Myers (1995) also found that the Type 1 linewidth-size relation (with only non-thermal motions included, $`\mathrm{\Delta }v_{NT}\propto R^\gamma `$) is flatter in massive cloud cores ($`\gamma =0.21\pm 0.03`$) than in low mass cores ($`\gamma =0.53\pm 0.07`$); in addition, the correlation is poor (correlation coefficient of 0.56) though the correlation is better for individual cores (Type 3 relations). They also noted that non-thermal (turbulent) motions are much more dominant in more massive cores and find good agreement with predictions of the TNT model. The “massive” cores in the Caselli & Myers study are mostly the cores in Orion with masses between 10 and 100 M<sub>⊙</sub>. The much more massive ($`M_V=3800`$ M<sub>⊙</sub>) cores studied by Plume et al (1997) exhibit no statistically significant linewidth-size relation at all (correlation coefficient is 0.26) and the linewidths are systematically higher (by factors of 4–5) for a given size than would be predicted by the relationships derived for low and intermediate mass cores (Caselli & Myers 1995). In addition, the densities of these cores exceed by factors of 100 the predictions of density-size relations found for less massive cores (Myers 1985). The regions forming truly massive stars are much more dynamic, as well as much denser, than would be expected from scaling relations found in less massive regions. The typical linewidth in massive cores is 6–8 km s<sup>-1</sup>, corresponding to a 1-D velocity dispersion of 2.5–3.4 km s<sup>-1</sup>, similar to that of the stars in the Orion Nebula Cluster (Hillenbrand & Hartmann 1998). Larson (1981) noted that regions of massive star formation, like Orion and M17, did not follow his original linewidth-size relation, suggesting that gravitational contraction would decrease size while keeping $`\mathrm{\Delta }v`$ roughly constant or increasing it.
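The quoted velocity dispersions follow directly from the linewidths for Gaussian profiles, $`\sigma =\mathrm{\Delta }v/\sqrt{8\mathrm{ln}2}`$; a one-line check:

```python
import numpy as np

# The 1-D dispersions quoted above follow from the FWHM linewidths for Gaussian
# profiles: sigma = Delta v / sqrt(8 ln 2).
for dv in (6.0, 8.0):   # km/s
    print(f"Delta v = {dv} km/s  ->  sigma = {dv / np.sqrt(8.0 * np.log(2)):.1f} km/s")
# 2.5 and 3.4 km/s, matching the values in the text
```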
Searching for collapse in massive cores is complicated by the turbulent, clumpy structure, with many possible centers of collapse, outflow, etc. A collapse signature may indicate an overall collapse of the core, with accompanying fragmentation. In fact, self-absorbed line profiles from regions forming massive stars are rather rare (Plume et al 1997). A possible collapse signature has been seen in CS emission toward NGC 2264 IRS, a Class I source with $`L\sim 2000`$ L<sub>⊙</sub> (Wolf-Chase & Gregersen 1997). If an HII region lies at the center of a collapsing core, absorption lines should trace only the gas in front and should be redshifted relative to the emission lines. Failure to see this effect, comparing H<sub>2</sub>CO absorption to CO emission, supported an early argument against the idea that all clouds were collapsing (Zuckerman & Evans 1974). A more recent application of this technique to dense cores, using high-excitation lines of NH<sub>3</sub>, showed no preference for red-shifted absorption (Olmi et al 1993) overall, but a few dense cores do show this kind of effect. These sources include W49 (Welch et al 1988, Dickel & Auer 1994), G10.6–0.4 (Keto et al 1988), and W51 (Zhang & Ho 1997, Zhang et al 1998a).
Dickel & Auer (1994) tested different collapse scenarios against observations of HCO<sup>+</sup> and favored free-fall collapse, with $`n\propto r^{-1.5}`$ and $`v\propto r^{-0.5}`$ throughout W49A North; they noted that more complex motions are present on small scales. Keto et al (1988) used NH<sub>3</sub> observations with 0.3<sup>′′</sup> resolution to separate infall from rotational motions toward the UC HII region, G10.6–0.4. Zhang & Ho (1997) used NH<sub>3</sub> absorption and Zhang et al (1998a) added CS $`J=3\to 2`$ and CH<sub>3</sub>CN observations to identify collapse onto two UC HII regions, W51e2 and W51e8, inferring infall velocities of about 3.5 km s<sup>-1</sup> on scales of 0.06 pc. Young et al (1998) have tested various collapse models against the data on W51e2 and favor a nearly constant collapse velocity ($`v\approx 5`$ km s<sup>-1</sup>) and $`n(r)\propto r^{-2}`$. Mass infall rates of about 6$`\times 10^{-3}`$ (Zhang et al 1998a) to 5$`\times 10^{-2}`$ (Young et al 1998) M<sub>⊙</sub> yr<sup>-1</sup> were inferred for W51e2. Similar results were found in G10.6–0.4 (Keto et al 1988), and even more extreme mass infall rates ($`10^{-2}`$ to 1 M<sub>⊙</sub> yr<sup>-1</sup>) have been suggested for W49A, a distant source with enormous mass ($`10^6`$ M<sub>⊙</sub>) and luminosity ($`L\sim 10^7`$ L<sub>⊙</sub>) (Welch et al 1988). These high infall rates may facilitate formation of very massive stars (§5.1) and help confine UC HII regions (Walmsley 1995).
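The quoted infall rates can be checked to order of magnitude with the simple spherical estimate $`\dot{M}=4\pi r^2\rho v_{in}`$; the sketch below uses the W51e2 radius and infall velocity quoted above, while the assumed H<sub>2</sub> density is only a placeholder within the range quoted for such cores.

```python
import numpy as np

# Order-of-magnitude check of the quoted infall rates, Mdot = 4 pi r^2 rho v_in.
# The density n_H2 is an assumed placeholder; r and v_in follow the W51e2
# numbers quoted in the text.
pc    = 3.086e18          # cm
msun  = 1.989e33          # g
yr    = 3.156e7           # s
mu_mH = 2.34 * 1.674e-24  # approximate mean mass per H2 molecule, g

r     = 0.06 * pc         # cm
v_in  = 3.5e5             # cm/s
n_H2  = 1.0e6             # cm^-3  (assumption)

mdot = 4.0 * np.pi * r**2 * (mu_mH * n_H2) * v_in   # g/s
print(f"Mdot ~ {mdot * yr / msun:.1e} Msun/yr")      # ~1e-2 Msun/yr
```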
Large transverse velocity gradients have been seen in some hot cores, including G10.6–0.4 (Keto et al 1988), W51 (Zhang & Ho 1997), and IRAS 20126+4104 (Cesaroni et al 1997, Zhang et al 1998b). For example, gradients reach 80 km s<sup>-1</sup> pc<sup>-1</sup> in the NH<sub>3</sub> $`(J,K)=(4,4)`$ line and 400 km s<sup>-1</sup>pc<sup>-1</sup> in the CH<sub>3</sub>CN $`J=6\to 5`$ line toward G29.96–0.02 and G31.41+0.31 (Cesaroni et al 1994b, 1998). Cesaroni et al (1998) interpret the gradients in terms of rotating disks extending to about $`10^4`$ AU. Using the dust continuum emission at 3 mm, they deduce disk masses up to 4200 M<sub>⊙</sub>, in the case of G31.41+0.31.
### 5.6 Implications for Larger Scales
Since most stars form in the massive cores discussed in this section (Elmegreen 1985), they are most relevant to issues of star formation on a galactic scale. If the luminosity is used to trace star formation rate (e.g. Rowan-Robinson et al 1997, Kennicutt 1998), the star formation rate per unit mass is proportional to $`L/M`$. Considering clouds as a whole, $`L/M\approx 4`$ in solar units (e.g. Mooney & Solomon 1988), with a spread exceeding a factor of $`10^2`$ (Evans 1991). Using CS $`J=5\to 4`$ emission to measure $`M`$ in the dense cores, Plume et al (1997) found $`L/M=190`$ with a spread of a factor of 15. The star formation rate per unit mass is much higher and less variable if one avoids confusion by the sterile gas. The average $`L/M`$ seen in dense cores in our Galaxy is similar to the highest value seen in ultra-luminous infrared galaxies, where $`M`$ is measured by CO (Sanders et al 1991, Sanders & Mirabel 1998). The most dramatic starburst galaxies behave as if their entire interstellar medium has conditions like those in the most active, massive dense cores in our Galaxy. Some studies (e.g. Mauersberger & Henkel 1989) indeed found strongly enhanced CS $`J=5\to 4`$ emission from starburst galaxies. It will be interesting to observe high-$`J`$ CS lines in the most luminous galaxies to compare to the conditions in massive cores in our Galaxy. Perhaps the large scatter in $`L/M`$ seen in galaxies will be reduced if CS, rather than CO, is used as a measure of the gas, ultimately leading to a better understanding of what controls galactic star formation rates (see Kennicutt 1998).
### 5.7 Summary of Clustered Star Formation
The cloud mass distribution found for lower mass objects continues to massive clouds, but less is known about the distribution for dense cores. Very massive cores clearly exist, with sufficient mass ($`M>10^3`$ M<sub>⊙</sub>) to make the most massive clusters and associations. These cores are denser and much more dynamic than cores involved in isolated star formation, with typical $`n\sim 10^6`$ cm<sup>-3</sup> and linewidths about 4–5 times larger than predicted from the linewidth-size relation. Pressures in massive cores (both thermal and turbulent) are substantially higher than in lower mass cores. The densities match those needed to form the densest clusters and the most massive stars by coalescence. There are some cores with evidence of overall collapse, but most do not show a clear pattern. There is some evidence that more massive regions have flatter density profiles, and that fragmentation increases with mass, but more studies are needed. High resolution studies of nearby regions of cluster formation are finding many clumps, limiting the feeding zone of a particular star-forming event to $`l\lesssim 6000`$ AU, much smaller than the reservoirs available in the isolated mode. In some cases, the clump mass distribution approaches the slope of the IMF, suggesting that the units of star formation have been identified. Studies of intermediate mass stars indicate that a transition to clustered mode occurs at least by a spectral type of B7.
## 6 CONCLUSIONS AND FUTURE PROSPECTS
The probes of physical conditions have been developed and are now fairly well understood. Some physical conditions have been hard to measure, such as the magnetic field strength, or hard to understand, notably the kinematics. Star-forming structures, or cores, within primarily sterile molecular clouds can be identified by thresholds in column density or density. Stars form in distinct modes, isolated and clustered, with massive stars forming almost exclusively in a clustered mode. The limited number of measurements of magnetic field leave open the question of whether cores are subcritical or supercritical, and whether this differs between the isolated and clustered mode. Cores involved in isolated star formation may be distinguished from their surroundings by a decrease in turbulence to subsonic levels, but clustered star formation occurs in regions of enhanced turbulence and higher density, compared to isolated star formation.
Evolutionary scenarios and detailed theories exist for the isolated mode. The theories assume cores with extended, power-law density gradients, leading to the mass accretion rate as the fundamental parameter. Detailed tests of these ideas are providing overall support for the picture, but also raising questions about the detailed models. Notably, kinematic evidence of gravitational collapse has finally been identified in a few cases. The roles of turbulence, the magnetic field, and rotation must be understood, and the factors that bifurcate the process into single or multiple star formation must be identified. Prospects for the future include a less biased census for cores in early stages and improved information on density gradients, both facilitated by the appearance of cameras at millimeter and submillimeter wavelengths on large telescopes. Antenna arrays operating at these wavelengths will provide more detailed information on the transition region between envelope and disk and study early disk evolution and binary fraction. Future, larger, antenna arrays will probe disk structure to scales of a few AU. Finally, a closer coupling of physical and chemical studies with theoretical models will provide more pointed tests of theory.
Theories and evolutionary scenarios are less developed for the clustered mode, and our understanding of the transition between isolated and clustered mode is still primitive. Current knowledge suggests that more massive cores have flatter density distributions and greater tendency to show substructure. At some point, the substructure dominates and multiple centers of collapse develop. With restricted feeding zones, the mass accretion rate gives way to the mass of clump as the controlling parameter, and some studies of clump mass spectra suggest that the stellar IMF is emerging in the clump mass distribution. The most massive stars form in very turbulent regions of very high density. The masses and densities are sufficient to form the most massive clusters and to explain the high stellar density at the centers of young clusters. They are also high enough to match the needs of coalescence theories for the formation of the most massive stars. As with isolated regions, more and better measurements of magnetic fields are needed, along with a less biased census, particularly for cool cores that might represent earlier stages. Larger antenna arrays will be able to separate clumps in distant cores and determine mass distributions for comparison to the IMF. Larger airborne telescopes will provide complementary information on luminosity sources in crowded regions. Observations with high spatial resolution and sensitivity in the mid-infrared will provide clearer pictures of the deeply embedded populations, and mid-infrared spectroscopy with high spectral resolution could trace kinematics close to the forming star. Deeper understanding of clustered star formation in our Galaxy will provide a foundation for understanding the origin and evolution of galaxies.
ACKNOWLEDGEMENTS
I am grateful to P André, J Bally, M Choi, F Motte, D Johnstone, and L Looney for supplying figures. Many colleagues sent papers in advance of publication and/or allowed me to discuss results in press or in progress. A partial list includes P André, R Cesaroni, R Crutcher, D Jaffe, L Mundy, E Ostriker, Y Shirley, J Stone, D Ward-Thompson, and D Wilner. I would like to thank R Cesaroni, Z Li, P Myers, F Shu, F van der Tak, and M Walmsley for detailed, helpful comments on an earlier version. This work has been supported by the State of Texas and NASA, through grants NAG5-7203 and NAG5-3348.
# Soliton vacuum energies and the CP(1) model.
## I INTRODUCTION
Solitons arise in many field theories and their particle-like behaviour has prompted discussions of their quantum properties . Recently, some old results of Schwinger have been developed into a useful numerical scheme for calculating the vacuum energy of a soliton at one loop order . The aim of this paper is to develop these ideas further using heat kernel methods and discuss some applications.
The one loop corrections to the soliton energies are likely to be most significant in situations where different classical soliton solutions have the same energy. This happens in an important class of field theories, where the classical solutions fall into topologically separate families and the solutions in each family saturate an energy bound. The prototype for this behaviour was the BPS monopole solution , but similar solutions play an important role in superstring theories .
The $`CP(1)`$ model in $`2+1`$ dimensions is one of the simplest models of this type . The calculation of the vacuum energy of the single soliton solution provides an instructive example of the general technique. A useful comparison can also be made with the $`CP(1)`$ model in two dimensions, where the one loop contribution to the path integral from instanton solutions has been obtained analytically .
## II HEAT KERNEL METHOD
The one loop quantum properties of a soliton solution depend on the normal mode frequencies of classical perturbations about the solution. It will prove convenient to impose boundary conditions at a fixed radius $`R`$ on the perturbations in order to obtain a discrete spectrum and then take the limit $`R\to \infty `$.
The heat kernel is defined by
$$K(t)=\sum _ke^{-k^2t},$$
(1)
where the sum extends over the discrete spectrum with values $`k^2`$. For large $`R`$, the normal modes with real $`k`$ approach trigonometric functions of $`kR+\varphi `$, where $`\varphi `$ is a constant phase depending on $`k`$ and a set of other parameters $`\gamma `$. If no solitons are present, then the normal mode frequencies (denoted by a superscript zero) are given by
$$k^{(0)}R+\varphi ^{(0)}=n\pi +\beta ,$$
(2)
where $`\beta `$ is a phase that depends on the boundary conditions. In the presence of the soliton, the real frequencies are given by
$$kR+\varphi =n\pi +\beta .$$
(3)
In the limit $`R\to \infty `$, $`k\to k^{(0)}`$, and the heat kernel can be expressed in terms of the phase shift by
$$K(t)-K^{(0)}(t)=\frac{2}{\pi }\int _0^{\infty }dk\,e^{-k^2t}\,kt\underset{\gamma }{\sum }\delta _\gamma (k)$$
(4)
where $`\delta _\gamma =\varphi -\varphi ^{(0)}`$.
In fact, the $`n=1`$ states can disappear from the continuum in the $`R\to \infty `$ limit if $`\delta _\gamma (0)\ne 0`$ and the derivative $`\delta _\gamma ^{\prime }(0)=0`$. In this case the phase shift must be displaced onto a new branch. Furthermore, bound states have their own asymptotic behaviour and need to be added as a separate contribution to equation (4). If there are $`n_0`$ bound states with $`k=0`$, equation (4) becomes
$$K(t)-K^{(0)}(t)=n_0+\frac{2}{\pi }\int _0^{\infty }dk\,e^{-k^2t}\,kt\underset{\gamma }{\sum }(\delta _\gamma (k)-\delta _\gamma (0)).$$
(5)
Levinson’s theorem implies that the two expressions (4) and (5) are usually identical. However, in some situations (including the $`CP(1)`$ soliton), Levinson’s theorem does not apply and equation (5) is the one that must be used.
An important feature of the heat kernel is the well-understood behaviour of the $`t\to 0`$ limit in $`d`$ dimensions ,
$$K(t)\sim t^{-d/2}\underset{n=0}{\overset{\infty }{\sum }}B_nt^n$$
(6)
where the heat kernel coefficients $`B_n`$ determine the one loop ultra-violet divergencies of the theory. Finite parts of physical quantities can be obtained from a regulated heat kernel,
$$K_{\mathrm{reg}}(t)=K(t)-t^{-d/2}\underset{n=0}{\overset{(d+1)/2}{\sum }}B_nt^n$$
(7)
The heat kernel coefficients are integrals of local polynomials of the background fields and their derivatives. Explicit results are known for the first few coefficients .
The first term in the asymptotic expansion is equivalent to the free heat kernel $`K^{(0)}`$. A simple comparison shows that inserting the first Born approximation to the phase shifts $`\delta _\gamma ^{(1)}`$ into equation (4) produces next term in the asymptotic expansion with the coefficient $`B_1`$. However, the second order Born approximation $`\delta _\gamma ^{(2)}`$ involves a double integral which contains contributions to $`B_2`$ and higher terms. The part responsible for $`B_2`$ can be isolated by taking a derivative expansion and keeping only the leading term. This allows us to express the regulated heat kernel in terms of phase shifts,
$$K_{\mathrm{reg}}(t)=n_0+\frac{2}{\pi }\int _0^{\infty }dk\,kt\,e^{-k^2t}\underset{\gamma }{\sum }(\overline{\delta }_\gamma (k)-\overline{\delta }_\gamma (0))$$
(8)
where $`\overline{\delta }_\gamma =\delta _\gamma -\delta _\gamma ^{(1)}-\mathrm{\cdots }`$, subtracting the $`n`$’th order Born approximations to the phase shift keeping only local terms with $`m`$ derivatives and $`2n+4m\le d+1`$.
The heat kernel can be used to find the vacuum energy of a soliton in $`d+1`$ dimensions or the effective action of an instanton in $`d`$ dimensions. The vacuum energy of a soliton in $`d+1`$ dimensions is given by $`\frac{1}{2}\zeta (-\frac{1}{2})`$ , where $`\zeta (s)`$ is the zeta-function
$$\zeta (s)=\frac{1}{\mathrm{\Gamma }(s)}\int _0^{\infty }dt\,t^{s-1}(K(t)-n_0)$$
(9)
The effective action of an instanton in $`d`$ dimensions is given by $`-\frac{1}{2}\zeta ^{\prime }(0)`$. A proper time cutoff $`ϵ`$ on the lower limit of the integral can be introduced to regularise the theory . Inserting equation (8) gives
$`\zeta (-\frac{1}{2})`$ $`=`$ $`-{\displaystyle \frac{1}{\pi }}{\displaystyle \int _0^{\infty }}dk{\displaystyle \underset{\gamma }{\sum }}(\overline{\delta }_\gamma (k)-\overline{\delta }_\gamma (0))+\text{poles}`$ (10)
$`\zeta ^{\prime }(0)`$ $`=`$ $`{\displaystyle \frac{2}{\pi }}{\displaystyle \int _0^{\infty }}dk\,k^{-1}{\displaystyle \underset{\gamma }{\sum }}(\overline{\delta }_\gamma (k)-\overline{\delta }_\gamma (0))+\text{poles}`$ (11)
The finite part is now in a form that can be evaluated numerically and the pole terms, which depend on the heat kernel coefficients, can be absorbed by renormalisation.
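As an illustration of how the finite parts in equations (10) and (11) can be handled numerically, the sketch below integrates a toy subtracted phase shift; the functional form of $`\overline{\delta }`$ is an arbitrary test function, not the result of an actual scattering calculation.

```python
import numpy as np
from scipy.integrate import quad

# Toy evaluation of the finite part of zeta(-1/2) from Eq. (10): the subtracted
# phase shift below is an arbitrary, sufficiently fast-decaying test function;
# in practice delta_bar comes from the scattering problem minus its (local)
# Born terms.
def delta_bar(k, a=0.7, b=1.3):
    return -a * k / (1.0 + (b * k) ** 2) ** 1.5   # decays faster than 1/k

finite_part, err = quad(lambda k: delta_bar(k) - delta_bar(0.0), 0.0, np.inf)
zeta_minus_half = -finite_part / np.pi
print(f"finite part of zeta(-1/2) ~ {zeta_minus_half:.4f}  (quad error {err:.1e})")
# The corresponding one-loop vacuum-energy contribution is 0.5 * zeta(-1/2).
```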
Farhi et al. have obtained expressions for the vacuum energy in $`3+1`$ dimensions by subtracting the full first and second order phase shifts. This corresponds to subtracting an expansion of the heat kernel with nonlocal factors. Such an expansion does indeed exist , although the local version is sufficient for removing the divergencies.
## III CP(1) SOLITON
The $`CP(1)`$ model in $`2+1`$ dimensions $`x^\mu `$, $`\mu =0\mathrm{}2`$, has a complex field $`u`$ with Lagrangian density
$$L=\frac{1}{g^2}\frac{\partial _\mu u\partial ^\mu \overline{u}}{(1+u\overline{u})^2}$$
(12)
where $`g^2`$ is an energy scale. The solutions to the field equations fall into families characterised by a topological index . The single soliton solution centered at the origin can be written $`u_1=\alpha /z`$, where $`\alpha `$ is a constant and $`z`$ is the complex coordinate $`x^1+ix^2`$.
Perturbations $`\xi `$ about the soliton satisfy the equation
$$-4D_zD_{\overline{z}}\xi =k^2\xi $$
(13)
where $`D_{\overline{z}}=\partial _{\overline{z}}`$ and $`D_z=\partial _z-2\overline{u}_1(\partial _zu_1)/(1+u_1\overline{u}_1)`$. There is one normalisable zero mode $`\xi =z^{-2}`$ and the remaining spectrum is positive. The first heat kernel coefficient for this operator can be evaluated from standard formulae or by the Atiyah-Singer index theorem to be $`B_1=1`$.
The perturbation equation separates, and with the decomposition $`\xi =(1+\alpha ^2/r^2)u_me^{im\theta }`$,
$$u_m^{\prime \prime }+\frac{1}{r}u_m^{\prime }-\frac{(m+2)^2}{r^2}u_m+k^2u_m=U(r)u_m$$
(14)
where the potential
$$U(r)=\frac{4(m+1)}{r^2+\alpha ^2}-\frac{8\alpha ^2}{(r^2+\alpha ^2)^2}$$
(15)
The boundary conditions are
$$u_m\to \{\begin{array}{cc}\pm r^{|m+2|}\hfill & r\to 0\hfill \\ A(J_m(kr)-Y_m(kr)\mathrm{tan}\delta _m)\hfill & r\to \infty \hfill \end{array}$$
(16)
The potential does not meet the integrability requirements of Levinson’s theorem and the numerical solution shows the unusual behaviour that $`\delta _m(0)=\pi `$ for $`m\ge -1`$ and $`\delta _m(0)=-\pi `$ for $`m<-2`$.
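A minimal sketch of the numerical procedure is given below: the radial equation (14) with the potential (15) is integrated outward and matched to free Bessel functions at a large radius. The Bessel index $`\nu =|m+2|`$ (read off from the centrifugal term) and the arctangent branch are conventions assumed for this illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import jv, yv, jvp, yvp

# Phase-shift extraction for Eq. (14)/(15): integrate the regular solution
# outwards and match its logarithmic derivative to Bessel functions at R.
def phase_shift(m, k, alpha=1.0, r0=1e-3, R=60.0):
    nu = abs(m + 2)

    def U(r):
        return 4.0 * (m + 1) / (r**2 + alpha**2) - 8.0 * alpha**2 / (r**2 + alpha**2) ** 2

    def rhs(r, y):
        u, up = y
        return [up, -up / r + (nu**2 / r**2 - k**2 + U(r)) * u]

    # regular solution, u ~ r^nu near the origin
    y0 = [r0**nu, nu * r0 ** (nu - 1) if nu > 0 else 0.0]
    sol = solve_ivp(rhs, (r0, R), y0, rtol=1e-9, atol=1e-12)
    u, up = sol.y[0, -1], sol.y[1, -1]
    logd = up / u                                   # logarithmic derivative at R
    num = logd * jv(nu, k * R) - k * jvp(nu, k * R)
    den = logd * yv(nu, k * R) - k * yvp(nu, k * R)
    return np.arctan2(num, den)                     # delta_m(k), modulo branch choice

for m in (-1, 0, 1, 2):
    print(f"m = {m:+d}:  delta(k=0.5) = {phase_shift(m, 0.5):+.4f}")
```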
The first Born approximation to the phase shifts can be evaluated in terms of Bessel functions, $`\delta _m^{(1)}(k)=\pi (2m-\alpha \partial _\alpha )I_m(k\alpha )K_m(k\alpha )`$. The sum over $`m`$ is not formally convergent, however the limit
$$\underset{M\to \infty }{lim}\underset{m=-M}{\overset{M}{\sum }}\delta _m^{(1)}(k)=\pi $$
(17)
is well defined and when substituted into equation (5) gives the correct conclusion that $`B_1=n_0`$. This definition of the $`m\to \infty `$ limit should also be used for actual phase shifts.
The vacuum energy $`E`$ can be obtained from equation (10), with the observation that the first Born approximation cancels due to (17),
$$E=\frac{2\pi }{g^2}-\frac{1}{2\pi }\int _0^{\infty }dk\underset{m=-\infty }{\overset{\infty }{\sum }}(\delta _m(k)-\delta _m(0))$$
(18)
where $`2\pi /g^2`$ is the classical contribution. The integrand is plotted in figure 1. After performing the integral numerically
$$E=\frac{2\pi }{g^2}-0.248\alpha ^{-1}$$
(19)
The energy decreases as the width of the soliton decreases. However, if the width of the soliton changed with time, there would be an adiabatic solution of the form $`u=\alpha (t)/z`$, but this solution has infinite action. The value of $`\alpha `$ must therefore remain fixed asymptotically, although it may decrease locally.
The loop expansion parameter for the single soliton is effectively $`g^2/\alpha `$, and so continuing the analysis to more than one loop would introduce further inverse powers of $`\alpha `$ . This raises the possibility that the energy has a minimum at some value of $`\alpha `$.
# Defect formation in inhomogeneous 2-nd order phase transition: theory and experiment.
## I Introduction
To produce a new vortex line in the vortex-free state of superfluid liquid is not an easy job. If the container is devoid of the remnant vorticity, which can be pinned by rough surface, the vortices are created only when a threshold $`v_c`$ for the hydrodynamic instability of the superflow is reached . The thermal activation or quantum tunneling can assist the nucleation only in the narrow vicinity of the instability threshold, where the external perturbations, however, are more effective. In superfluid <sup>3</sup>He-B, because of the large size $`r_c`$ of the vortex core, the region near the threshold, where thermal activation or quantum tunneling can be important, is particularly small, $`v_c-v_s\sim 10^{-6}v_c`$. In a typical cylindrical container with radius $`R=2.5`$ mm and height $`L=7`$ mm, rotating with angular velocity $`\mathrm{\Omega }=3`$ rad/s, the vortex-free state stores a huge amount of kinetic energy $`(1/2)\int dV\rho _s(v_s-v_n)^2\sim 10`$ GeV. This energy cannot be released, since the intrinsic half period of the decay of this metastable state is essentially larger than the proton life time.
That cosmic rays can assist in releasing this energy by producing vortex rings, I first heard from my supervisor, professor Iordanskii, in 1972. The natural scenario for that was thought as depicted in Fig.1. The energetic particle heats a region above the superfluid transition temperature $`T_c`$. During the cooling the normal liquid in this region can continuously evolve to form the core of a vortex loop, which starts growing if the radius $`R_b`$ of the heated region is larger than the radius of the ring sustained by counterflow, i.e. if $`|v_s-v_n|>v_{c1}=(\kappa /4\pi R_b)\mathrm{ln}(R_b/r_c)`$, where $`\kappa `$ is the quantum of circulation around the vortex.
If the counterflow essentially exceeds this threshold, the evolution, which is most favourable for vortex production, leads to the closely packed vortex rings of the critical size, which can further develop. This gives the following estimation for maximal number of vortex loops, which can grow further: $`N\sim (|v_s-v_n|/v_{c1})^3`$.
Experiments with irradiated superfluid <sup>3</sup>He were started in 1992 in Stanford, where it was found that the irradiation assists the transition of supercooled <sup>3</sup>He-A to <sup>3</sup>He-B. In 1994 the neutron irradiation of <sup>3</sup>He-B was found to produce a shower of quasiparticles in Lancaster and vortices in rotating <sup>3</sup>He-B in Helsinki . Energy deficit found in low-$`T`$ Grenoble experiments indicated possible formation of vortices in <sup>3</sup>He-B even without rotation . In Helsinki the observed number of vortices produced per one event showed both the threshold behavior and the cubic dependence at large rotation velocity: Above the threshold it was well approximated by $`N\sim (|v_s-v_n|/v_{c1})^3-1`$. This indicated that the nature has chosen some scenario, which produces the maximal possible number of vortices. What is the reason for that?
The decay products from the neutron absorption reaction generate ionization tracks, the details of which are not well known in liquid <sup>3</sup>He. At the moment we have two working scenaria of thermalization of the energetic particles:
(i) The mean free path is long and increases with decreasing of the energy. This can lead to a “Baked Alaska” effect, as has been described by Leggett . A thin shell of the radiated high energy particles expands with the Fermi velocity $`v_F`$, leaving behind a region at reduced $`T`$. In this region, which is isolated from the outside world by a warmer shell, a new phase can be formed. Such Baked Alaska mechanism for generation of new phase has also been discussed in high energy physics, where it describes the result of a hadron-hadron collision. In this relativistic case the thin shell of energetic particles expands with the speed of light. In the region separated from the exterior vacuum by the hot shell a false vacuum with a chiral condensate can be formed . This scenario provides possible explanation of formation of the B-phase in the supercooled A-phase .
(ii) During thermalization the mean free path is less than the dimension of the region where the energy is deposited and the temperature is well determined during the phase transition through $`T_c`$. In this case there is no Baked-Alaska effect: no hot shell separating the interior region from the exterior. So the exterior region can effectively fix the phase in the cooled bubble, suppressing the formation of the vacuum states, which would be different from that in the bulk liquid. Due to this proximity effect the formation of vortices can be also suppressed.
In both cases of monotonic and nonmonotonic temperature profile, two mechanisms of the vortex formation are important:
(a) The Kibble-Zurek (KZ) mechanism of the defect formation during the quench. For the scenario (ii), where the interior region is not separated from the exterior by the warmer shell, the KZ mechanism is to be modified to include spatial inhomogeneity, which leads to the moving transition front. The proximity effect of the exterior region is not effective if the phase transition front moves sufficiently rapidly . The modified KZ mechanism is not sensitive to the existence of the external counterflow, whose only role is to extract the formed vortices from the bubble. The same KZ mechanism could be responsible for the formation of the A-B interfaces, which provides another scenario of the B-phase nucleation in the supercooled A-phase .
(b) Instability of the normal-superfluid interface, which occurs in the presence of the counterflow .
Here we discuss these two mechanisms (a) and (b) of vortex formation during inhomogeneous quench as manifested in numerical simulations .
## II KZ scenario in presence of planar front.
For a rough understanding of the KZ scenario of vortex formation let us consider the time-dependent Ginzburg-Landau (TDGL) equation for the one-component order parameter (OP) $`\mathrm{\Psi }=\mathrm{\Delta }/\mathrm{\Delta }_0`$:
$$\tau _0\frac{\partial \mathrm{\Psi }}{\partial t}=\left(1-\frac{T(𝐫,t)}{T_c}\right)\mathrm{\Psi }-\mathrm{\Psi }|\mathrm{\Psi }|^2+\xi _0^2\nabla ^2\mathrm{\Psi }.$$
(1)
Here $`\tau _0\sim 1/\mathrm{\Delta }_0`$ and $`\xi _0`$ are correspondingly the relaxation time of the OP and the coherence length far from $`T_c`$.
If the quench occurs homogeneously in the whole space $`𝐫`$, the temperature depends only on one parameter, the quench time $`\tau _\mathrm{Q}`$:
$$T(t)\approx \left(1-\frac{t}{\tau _\mathrm{Q}}\right)T_c.$$
(2)
In the presence of a temperature gradient, say, along $`x`$, a new parameter appears:
$$T(x-ut)\approx \left(1-\frac{t-x/u}{\tau _\mathrm{Q}}\right)T_c.$$
(3)
Here $`u`$ is the velocity of the temperature front which is related to the temperature gradient
$$\nabla _xT=\frac{T_c}{u\tau _\mathrm{Q}}.$$
(4)
There exists a characteristic critical velocity $`u_c`$ of the propagating temperature front. At $`u\gg u_c`$ the vortices are formed, while at $`u\ll u_c`$ the defect formation is either strongly suppressed or completely stops .
At slow velocities, $`u\to 0`$, the order parameter almost follows the transition temperature front:
$$|\mathrm{\Psi }(x,t)|^2=\left(1-\frac{T(x-ut)}{T_c}\right),T<T_c.$$
(5)
In this case the phase coherence is preserved behind the transition front and thus no defect formation is possible.
The extreme case of large velocity of the temperature front, $`u\to \infty `$, corresponds to the homogeneous quench. As was found by Kopnin and Thuneberg , if $`u`$ is large enough, the phase transition front cannot follow the temperature front: it lags behind (see Fig. 2). In the space between these two boundaries the temperature is already below the phase transition temperature, $`T<T_c`$, but the phase transition did not yet happen, and the OP is still not formed, $`\mathrm{\Psi }=0`$. This situation is unstable towards the formation of bubbles of the new phase with $`\mathrm{\Psi }\ne 0`$. This occurs independently in different regions of the space, leading to vortex formation according to the KZ mechanism. At a given point of space $`𝐫`$ the development of the instability can be found from the linearized TDGL equation, since during the initial growth of the OP $`\mathrm{\Psi }`$ the cubic term can be neglected:
$$\tau _0\frac{\partial \mathrm{\Psi }}{\partial t}=\frac{t}{\tau _\mathrm{Q}}\mathrm{\Psi }.$$
(6)
This gives an exponentially growing OP, which starts from some seed $`\mathrm{\Psi }_{\mathrm{fluc}}`$, caused by fluctuations:
$$\mathrm{\Psi }(𝐫,t)=\mathrm{\Psi }_{\mathrm{fluc}}(𝐫)\mathrm{exp}\frac{t^2}{2\tau _\mathrm{Q}\tau _0}.$$
(7)
Because of the exponential growth, even if the seed is small, the modulus of the OP reaches its equilibrium value $`|\mathrm{\Psi }_{\mathrm{eq}}|=\sqrt{1-T/T_c}`$ after the Zurek time $`t_\mathrm{Z}`$
$$t_\mathrm{Z}=\sqrt{\tau _\mathrm{Q}\tau _0}.$$
(8)
This occurs independently in different regions of space and thus the phases of the OP in each bubble are not correlated. The spatial correlation between the phases becomes important at distances $`\xi _\mathrm{v}`$ where the gradient term in Eq. (1) becomes comparable to the other terms at $`t=t_\mathrm{Z}`$. Equating the gradient term $`\xi _0^2\nabla ^2\mathrm{\Psi }\sim (\xi _0^2/\xi _\mathrm{v}^2)\mathrm{\Psi }`$ to, say, the term $`\tau _0\partial \mathrm{\Psi }/\partial t|_{t_{\mathrm{Zurek}}}=\sqrt{\tau _0/\tau _\mathrm{Q}}\mathrm{\Psi }`$, one obtains the characteristic Zurek length scale which determines the initial distance between the defects in homogeneous quench:
$$\xi _\mathrm{v}=\xi _0(\tau _\mathrm{Q}/\tau _0)^{1/4}.$$
(9)
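For orientation, the Zurek scales (8), (9) and the critical front velocity (11) below are easily evaluated; the parameter values in the sketch are illustrative placeholders, not measured <sup>3</sup>He-B numbers.

```python
import numpy as np

# Quick evaluation of the Zurek scales and the critical front velocity for
# illustrative Ginzburg-Landau parameters (all values below are assumptions).
tau_0 = 1.0e-9    # s   (order-parameter relaxation time, assumed)
tau_Q = 1.0e-6    # s   (quench time, assumed)
xi_0  = 50e-9     # m   (coherence length far from T_c, assumed)

t_Z  = np.sqrt(tau_Q * tau_0)                     # Eq. (8)
xi_v = xi_0 * (tau_Q / tau_0) ** 0.25             # Eq. (9)
u_c  = (xi_0 / tau_0) * (tau_0 / tau_Q) ** 0.25   # Eq. (11)

print(f"t_Z  = {t_Z:.2e} s")
print(f"xi_v = {xi_v:.2e} m")
print(f"u_c  = {u_c:.2f} m/s")
```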
We can estimate the lower limit of the characteristic value of the fluctuations $`\mathrm{\Psi }_{\mathrm{fluc}}=\mathrm{\Delta }_{\mathrm{fluc}}/\mathrm{\Delta }_0`$, which serve as a seed for the vortex formation. If there is no other source of fluctuations, caused, say, by external noise, the initial seed is provided by thermal fluctuations of the order parameter in the volume $`\xi _\mathrm{v}^3`$. The energy of such fluctuation is $`\xi _\mathrm{v}^3\mathrm{\Delta }_{\mathrm{fluc}}^2N_F/E_F`$, where $`E_F`$ is the Fermi energy and $`N_F`$ the fermionic density of states in the normal Fermi liquid. Equating this energy to the temperature $`T\approx T_c`$ one obtains the magnitude of the thermal fluctuations of the OP
$$\frac{|\mathrm{\Psi }_{\mathrm{fluc}}|}{|\mathrm{\Psi }_{\mathrm{eq}}|}\sim \left(\frac{\tau _0}{\tau _\mathrm{Q}}\right)^{1/8}\frac{T_c}{E_F}.$$
(10)
Since the fluctuations are initially rather small their growth time exceeds the Zurek time by the factor $`\sqrt{\mathrm{ln}|\mathrm{\Psi }_{\mathrm{eq}}|/|\mathrm{\Psi }_{\mathrm{fluc}}|}`$.
The criterion for the defect formation is that the time of growth of fluctuations, $`t_\mathrm{Z}=\sqrt{\tau _\mathrm{Q}\tau _0}`$, is shorter than the time $`t_{\mathrm{sw}}=x_0(u)/u`$ in which the transition front sweeps the space between the two boundaries. Here $`x_0(u)`$ is the lag between the transition temperature front and the OP front (see Fig. 2). Thus the equation $`t_\mathrm{Z}=x_0(u_c)/u_c`$ gives an estimate for the critical value $`u_c`$ of the velocity of the temperature front, at which the laminar propagation becomes unstable. At large $`u`$ one has $`x_0(u)\approx u^3\tau _Q\tau _0^2/4\xi _0^2`$ and thus
$$u_c\sim \frac{\xi _0}{\tau _0}\left(\frac{\tau _0}{\tau _\mathrm{Q}}\right)^{1/4},$$
(11)
which agrees with estimation $`u_c=\xi _\mathrm{v}/t_\mathrm{Z}`$ in .
In the case of the neutron bubble the velocity of the temperature front is $`u\sim R_\mathrm{b}/\tau _\mathrm{Q}`$, which makes $`u\sim 10`$ m/s. The critical velocity $`u_\mathrm{c}`$ we can estimate to be of the same order of magnitude. This estimation suggests that the thermal gradient should be sufficiently steep in the neutron bubble such that defect formation can be expected. The further fate of the vortex tangle formed under the KZ mechanism is the phase ordering process: the intervortex distance continuously increases until it reaches the critical size, when the vortex loops are expanded by the counterflow. This reproduces the most favourable scenario of the vortex formation with the cubic law.
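A one-dimensional TDGL integration illustrates the front dynamics described in this section. It is only a sketch, with arbitrary grid, noise and quench parameters; in one dimension it can show the lag of the order-parameter front behind the $`T=T_c`$ front and the growth from fluctuations, but of course not vortex formation itself.

```python
import numpy as np

# Minimal 1-D sketch of Eq. (1) with the temperature profile (3), in units
# tau_0 = xi_0 = 1.  All parameters below are illustrative choices.
rng = np.random.default_rng(0)
L, N, dt, steps = 400.0, 2000, 0.01, 8000
tau_Q, u, noise = 50.0, 3.0, 1e-6            # u = 3 >> u_c ~ (1/tau_Q)**0.25

x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
psi = noise * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

for n in range(steps):
    t = n * dt
    # 1 - T/T_c behind the moving front (clipped so T does not go negative)
    eps = np.minimum((t - (x - 0.25 * L) / u) / tau_Q, 1.0)
    lap = (np.roll(psi, 1) - 2.0 * psi + np.roll(psi, -1)) / dx**2
    psi = psi + dt * (eps * psi - np.abs(psi) ** 2 * psi + lap) \
              + noise * np.sqrt(dt) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

front_T = 0.25 * L + u * steps * dt                 # where T = T_c now sits
grown = np.where(np.abs(psi) > 0.3)[0]              # where |psi| has become O(1)
front_OP = x[grown].max() if grown.size else 0.0
print(f"T_c front at x = {front_T:.0f},  order-parameter front at x = {front_OP:.0f}")
```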
## III Instability of normal/superfluid interface.
Another mechanism of the vortex formation has been recently found in 3D numerical simulations in Ref.. It is related to the instability of the normal-superfluid interface in the presence of the superflow. Let us consider a simple hand-waving interpretation of such an instability. The process can be roughly split into two stages (see Fig. 3).
At first stage the heated region of the normal liquid surrounded by the superflow undergoes a superfluid transition. The transition should occur into the state with the lowest energy, which corresponds to the superfluid at rest, i.e. with $`𝐯_s=0`$. Thus there appears the superfluid-superfluid interface, which separates the state with superflow (outside) from the state without superflow (inside). Such a superfluid-superfluid interface with tangential discontinuity of the superfluid velocity represents a vortex sheet by definition. Such vortex sheet, on which the phase of the OP is not determined, was suggested by Landau and Lifshitz for He-II to describe the superfluid state of <sup>4</sup>He under rotation (see also and ).
The vortex sheet is unstable towards breaking up into a chain of quantized vortex lines. The development of this instability represents the second stage of the process. In numerical simulation the resulting chain of vortices is clearly seen (see Figs. 4 and 5).
The evolution in Fig. 1 is thus caused by the hydrodynamic instability of the normal/superfluid interface in the presence of the tangential flow. Since vorticity is quantized, such instability leads to the formation of the vortex chain only above the threshold required to achieve the circulation quantum from the tangential superflow. If the counterflow is large the number of vortices in this chain $`N\sim |v_s-v_n|R_b/\kappa `$, i.e. one has a linear law instead of cubic.
Nucleation of the KZ vortices due to the motion of the superfluid/normal interface is also observed in numerical simulations in Ref. . It occurs during shrinking of the interior region with normal fluid.
## IV Discussion.
Two mechanisms of the vortex formation have been identified in numerical simulations : (a) vortices are formed behind the propagating front due to KZ mechanism, as discussed in Refs.; and in addition (b) vortices are formed due to the corrugation instability (vortex sheet instability) of the front in the presence of external superflow. Each of these mechanisms can be derived either analytically for a simple geometry, or understood qualitatively with simple physical picture in mind. The AKV calculations actually showed that each mechanism is fundamental: it does not depend much on the geometry and on parameters of the TDGL equation. Probably both mechanisms hold even if TDGL theory cannot be applied.
The interplay of the two mechanisms must depend on details of the microscopic physics. In their calculations based on TDGL model, AKV found that the chain of vortices formed in the process (b) screens the external superflow very effectively. The KZ vortices formed in the process (a) cannot grow: they decay before the screening chain escapes to the bulk liquid. Thus in the AKV scenario only the chain of vortices survives. This gives the linear dependence of the vortex number $`N`$ on the counterflow $`v_sv_n`$ instead of the observed cubic law.
This does not exclude the possibility of another regime, where KZ vortices have enough time to escape to the bulk. This is probably what the cubic law found in Helsinki experiments tells us. Maybe the latter regime cannot be obtained in the TDGL scheme and one must discuss the combined dynamics of the OP and quasiparticles.
In conclusion, in the period between LT-21 and LT-22 the principles of defect formation in inhomogeneous phase transition have been developed.
I am indebted to N. Kopnin and M. Krusius for discussions and I. Aranson and E. Thuneberg who kindly provided me with Figures from their papers.
UT-KOMABA 99-7, hep-th/9905230
# Brane Configurations for Three-dimensional Nonabelian Orbifolds
## 1 Introduction
D-branes at orbifold singularities have been intensively studied for recent years (see - for example). One of the purposes is to investigate geometries of spacetime. D-branes serve as tools to study ultrashort structure of spacetime. So it is interesting to study geometries by using D-branes as probes and compare them with geometries probed by point particles or fundamental strings. Orbifolds provide nontrivial but relatively simple examples for such a purpose.
Another motivation to study D-branes on orbifolds is to construct large families of gauge theories. Various choices of orbifolds lead to gauge theories with various supersymmetries and dimensions. It is well-known that gauge theories can be constructed by using brane configurations following the work of . Investigations from different approaches and comparison between them are useful to clarify various aspects of gauge theories. Investigations along this line have been made in for example.
In this paper, we study D-branes on three-dimensional nonabelian orbifolds $`𝐂^3/\mathrm{\Gamma }`$ with $`\mathrm{\Gamma }SU(3)`$. For these cases, the gauge theories have $`1/8`$ supersymmetry compared with type II string theory. It is known that finite subgroups of $`SU(3)`$ are classified into $`ADE`$-like series . We are concerned with ”D”-type subgroups $`\mathrm{\Delta }(3n^2)`$ and $`\mathrm{\Delta }(6n^2)`$, where $`n`$ is a positive integer and the number in braces represents the order of the group.
In studying D-brane gauge theory on $`𝐂^3/\mathrm{\Gamma }`$, the quiver diagram of $`\mathrm{\Gamma }`$ plays an important role. The quiver diagram of $`\mathrm{\Gamma }`$ is a diagram which represents algebraic structure of $`\mathrm{\Gamma }`$. It consists of nodes and arrows connecting the nodes. The nodes represent irreducible representations of $`\mathrm{\Gamma }`$, while the arrows represent structure of tensor products between a certain faithful three-dimensional representation and the irreducible representations. In the gauge theory, the nodes represent gauge groups and the arrows represent matter contents. The quiver diagrams of the groups $`\mathrm{\Delta }(3n^2)`$ and $`\mathrm{\Delta }(6n^2)`$ were calculated in . The quiver diagram of $`\mathrm{\Delta }(3n^2)`$ can be considered as a $`𝐙_3`$ quotient of that of $`𝐙_n\times 𝐙_n`$, and the quiver diagram of $`\mathrm{\Delta }(6n^2)`$ is obtained by a further $`𝐙_2`$ quotient. In , it was also pointed out that these quiver diagrams have similar structure to a web of $`(p,q)`$5-branes of type IIB string theory . So it is natural to expect that the D-brane gauge theories can be realized by brane configurations involving such a web of $`(p,q)`$ branes. The purpose of the present paper is to construct brane configurations corresponding to D-branes on $`𝐂^3/\mathrm{\Delta }(3n^2)`$ and $`𝐂^3/\mathrm{\Delta }(6n^2)`$.
In constructing brane configurations, the brane box model provides useful information. The brane box model is a realization of D-brane gauge theory on $`𝐂^3/𝐙_n\times 𝐙_n`$ via brane configurations. What is remarkable on the brane box model is that there is a correspondence between the quiver diagram of $`𝐙_n\times 𝐙_n`$ and the brane configuration for $`𝐂^3/𝐙_n\times 𝐙_n`$. We use such a correspondence as a guideline to construct brane configurations for $`𝐂^3/\mathrm{\Gamma }`$. As stated above, the quiver diagram of $`\mathrm{\Delta }(3n^2)`$ is a $`𝐙_3`$ quotient of that of $`𝐙_n\times 𝐙_n`$. Therefore, combining with the correspondence between quiver diagrams and brane configurations, we expect that the brane configuration for $`𝐂^3/\mathrm{\Delta }(3n^2)`$ is obtained from that for $`𝐂^3/𝐙_n\times 𝐙_n`$ by a $`𝐙_3`$ quotient. However, we can not define a $`𝐙_3`$ quotient on the brane box configuration since the $`𝐙_3`$ is not a symmetry of the configuration. So we can not use the brane box configuration itself as a configuration for $`𝐂^3/𝐙_n\times 𝐙_n`$. Instead, we construct a brane configuration with a $`𝐙_3`$ symmetry maintaining the correspondence with the quiver diagram of $`𝐙_n\times 𝐙_n`$. It consists of D3-branes and a web of $`(p,q)`$ 5-branes of type IIB string theory. The brane configuration for $`𝐂^3/\mathrm{\Delta }(3n^2)`$ is obtained from such a configuration by a $`𝐙_3`$ quotient. We can see that the configuration naturally reproduces the structure of the quiver diagram of $`\mathrm{\Delta }(3n^2)`$. In other words, the brane configuration can be considered as a physical realization of the quiver diagram. The brane configuration for $`𝐂^3/\mathrm{\Delta }(6n^2)`$ is obtained by a further $`𝐙_2`$ quotient.
The quiver diagram of $`\mathrm{\Gamma }`$ is also a key ingredient in the discussion of what is called McKay correspondence-. The McKay correspondence is a relation between the representation theory of a finite group $`\mathrm{\Gamma }`$ and the geometries of $`𝐂^d/\mathrm{\Gamma }`$. It was originally found for $`d=2`$ and $`\mathrm{\Gamma }SL(2,𝐂)`$. The McKay correspondence states that the quiver diagram of $`\mathrm{\Gamma }`$ coincides with a diagram which represent intersections among exceptional divisors of $`\stackrel{~}{𝐂^2/\mathrm{\Gamma }}`$. ($`\stackrel{~}{X}`$ represents the minimal resolution of $`X`$.) We are concerned with its generalization to three dimensions. If $`\mathrm{\Gamma }`$ is abelian, $`𝐂^3/\mathrm{\Gamma }`$ becomes a toric variety, and its resolution can be discussed by using toric method. In , relations between brane configurations and toric diagrams were discussed. Applying such arguments to the case $`\mathrm{\Gamma }=𝐙_n\times 𝐙_n`$, we can see that the brane configuration and the orbifolds are related by T-duality, and the brane configuration is graphically dual to the corresponding toric diagram. Combining with an observation that the brane configuration for $`𝐂^3/\mathrm{\Gamma }`$ can be regarded as the quiver diagram of $`\mathrm{\Gamma }`$, we can say that the quiver diagram and the toric diagram are related by T-duality. Since the toric diagram represents geometric information of the orbifold such as intersections among exceptional divisors, it leads to an interpretation that the three-dimensional McKay correspondence can be understood as T-duality.
The organization of this paper is as follows. In Section 2, we review a prescription to obtain worldvolume gauge theory of D-branes on an orbifold $`𝐂^3/\mathrm{\Gamma }`$. We also present quiver diagrams for $`\mathrm{\Gamma }=\mathrm{\Delta }(3n^2)`$ and $`\mathrm{\Delta }(6n^2)`$ calculated in . In Section 3, we first review the brane box model. Then we construct a brane configuration for $`𝐂^3/𝐙_n\times 𝐙_n`$ which is appropriate to construct brane configuraions for nonabelian orbifolds. By taking its quotient, we construct brane configurations for $`𝐂^3/\mathrm{\Delta }(3n^2)`$ and $`𝐂^3/\mathrm{\Delta }(6n^2)`$. We argue that the structure of the quiver diagrams of $`\mathrm{\Delta }(3n^2)`$ and $`\mathrm{\Delta }(6n^2)`$ can be explained by the brane configurations. In Section 4, we discuss relations between brane configurations and toric diagrams. Based on the discussion of which relates brane configurations and toric diagrams, we present evidence that the three dimensional McKay correspondence may be interpreted as T-duality. Section 5 is devoted to discussions.
## 2 Quiver diagrams for D-branes on nonabelian orbifolds
In this section, we review the results of , in which the quiver diagrams for D-branes on orbifolds $`𝐂^3/\mathrm{\Gamma }`$ with $`\mathrm{\Gamma }=\mathrm{\Delta }(3n^2)`$ and $`\mathrm{\Delta }(6n^2)`$ were obtained. We start with $`N`$ parallel D1-branes on $`𝐂^3`$ where $`N=|\mathrm{\Gamma }|`$ is the order of $`\mathrm{\Gamma }`$. We will discuss the reason to take D1-branes in Sections 3 and 4. The effective action of the D-branes is given by the dimensional reduction of ten-dimensional $`U(N)`$ super Yang-Mills theory to two-dimensions. The bosonic field contents are a $`U(N)`$ gauge field $`A`$ and four complex adjoint scalars $`X^\mu `$ ($`\mu =1,2,3`$) and $`Y`$. Fermionic field contents are determined via unbroken supersymmetry, so we do not touch upon them. We take $`\mathrm{\Gamma }`$ to act on complex three-dimensional space corresponding to $`X^\mu `$. Since $`Y`$ is irrelevant in the following discussions, we set $`Y=0`$. Next we project this theory onto $`\mathrm{\Gamma }`$ invariant space. The condition is expressed as
$`R_{reg}(g)AR_{reg}(g)^{-1}`$ $`=`$ $`A,`$ (2.1)
$`R_3(g)_\nu ^\mu R_{reg}(g)X^\nu R_{reg}(g)^{-1}`$ $`=`$ $`X^\mu `$ (2.2)
where $`g\in \mathrm{\Gamma }`$, $`R_3`$ is a three-dimensional representation of $`\mathrm{\Gamma }`$ which acts on spacetime indices $`\mu `$ and $`R_{reg}`$ is the $`N\times N`$ regular representation of $`\mathrm{\Gamma }`$ which acts on Chan-Paton indices. $`R_3`$ defines how $`\mathrm{\Gamma }`$ acts on $`𝐂^3`$ to form the quotient singularity. The condition (2.1) implies that gauge fields surviving the projection are $`\mathrm{\Gamma }`$-invariant parts of $`R_{reg}\otimes R_{reg}^{*}`$. Since the regular representation $`R_{reg}`$ has the following decomposition
$$R_{reg}=\oplus _{a=1}^rN_aR^a,N_a=\mathrm{dim}R^a$$
(2.3)
where $`R^a`$ denotes an irreducible representation of $`\mathrm{\Gamma }`$ and $`r`$ is the number of irreducible representations of $`\mathrm{\Gamma }`$, we obtain the following expression,
$`R_{reg}\otimes R_{reg}^{*}|_{\mathrm{\Gamma }\mathrm{inv}}`$ $`=`$ $`\oplus _{ab}N_a\overline{N_b}R^a\otimes R^{b*}|_{\mathrm{\Gamma }\mathrm{inv}}`$ (2.4)
$`=`$ $`\oplus _aN_a\overline{N_a}.`$
Here we used Schur’s lemma. The equation means that the gauge symmetry of the resulting theory is
$$\underset{a=1}{\overset{r}{\prod }}U(N_a).$$
(2.5)
If we consider $`n`$ D-branes on the orbifold, the gauge group becomes
$$\underset{a=1}{\overset{r}{\prod }}U(nN_a).$$
(2.6)
One can see the matter contents after the projection (2.2) in a similar way. They are $`\mathrm{\Gamma }`$-invariant parts of $`R_3\otimes R_{reg}\otimes R_{reg}^{*}`$. By defining the tensor product of the three-dimensional representation $`R_3`$ and an irreducible representation $`R^a`$ as
$$R_3\otimes R^a=\oplus _{b=1}^rn_{ab}R^b,$$
(2.7)
one obtains the following expression,
$`R_3\otimes R_{reg}\otimes R_{reg}^{*}|_{\mathrm{\Gamma }\mathrm{inv}}`$ $`=`$ $`\oplus _{abc}n_{ab}N_a\overline{N_c}R^b\otimes R^{c*}|_{\mathrm{\Gamma }\mathrm{inv}}`$ (2.8)
$`=`$ $`\oplus _{ab}n_{ab}N_a\overline{N_b}.`$
It means that $`n_{ab}`$ is the number of bifundamental fields which transform as $`N_a\times \overline{N}_b`$ under $`U(N_a)\times U(N_b)`$.
The gauge group and the spectrum are summarized in a quiver diagram. A quiver diagram consists of $`r`$ nodes corresponding to irreducible representations and arrows which connect these nodes corresponding to bifundamental matters. Outgoing arrows represent fundamentals and ingoing arrows represent anti-fundamentals. $`n_{ab}`$ is the number of arrows from the $`a`$-th node to the $`b`$-th node.
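The multiplicities $`n_{ab}`$ can be computed directly from the character table, $`n_{ab}=\frac{1}{|\mathrm{\Gamma }|}\sum _g\chi _3(g)\chi _a(g)\chi _b(g)^{*}`$. The sketch below implements this and checks it on the simplest abelian example $`\mathrm{\Gamma }=𝐙_3`$ acting as $`\mathrm{diag}(\omega ,\omega ,\omega )`$; the quivers of the following subsections follow from the same formula with the corresponding character tables.

```python
import numpy as np

# Quiver adjacency from characters:
#   n_ab = (1/|Gamma|) * sum_g chi_3(g) chi_a(g) conj(chi_b(g)).
# Checked here on Gamma = Z_3 acting on C^3 as diag(w, w, w), w = exp(2 pi i/3),
# whose quiver has three arrows from node a to node a+1.
def quiver_adjacency(chi3, chars):
    """chi3[g]: character of the spacetime 3d rep; chars[a][g]: irrep characters."""
    order, r = len(chi3), len(chars)
    n = np.zeros((r, r), dtype=int)
    for a in range(r):
        for b in range(r):
            s = sum(chi3[g] * chars[a][g] * np.conj(chars[b][g]) for g in range(order))
            n[a, b] = int(round((s / order).real))
    return n

w = np.exp(2j * np.pi / 3)
chars = [[w ** (a * j) for j in range(3)] for a in range(3)]   # Z_3 irreps
chi3  = [3 * w ** j for j in range(3)]                         # diag(w,w,w)^j
print(quiver_adjacency(chi3, chars))
# [[0 3 0]
#  [0 0 3]
#  [3 0 0]]
```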
### 2.1 $`\mathrm{\Delta }(3n^2)`$ case
The group $`\mathrm{\Delta }(3n^2)`$ consists of the following elements
$$A_{i,j}=\left(\begin{array}{ccc}\omega _n^i& 0& 0\\ 0& \omega _n^j& 0\\ 0& 0& \omega _n^{-i-j}\end{array}\right),C_{i,j}=\left(\begin{array}{ccc}0& 0& \omega _n^i\\ \omega _n^j& 0& 0\\ 0& \omega _n^{-i-j}& 0\end{array}\right),E_{i,j}=\left(\begin{array}{ccc}0& \omega _n^i& 0\\ 0& 0& \omega _n^j\\ \omega _n^{-i-j}& 0& 0\end{array}\right)$$
(2.9)
where $`\omega _n=e^{2\pi i/n}`$ and $`0\le i,j<n`$. The $`n^2`$ elements {$`A_{i,j}`$} correspond to a three-dimensional reducible representation of the abelian group $`𝐙_n\times 𝐙_n`$. The elements of the group $`\mathrm{\Delta }(3n^2)`$ are obtained by multiplying the following matrices
$$\left(\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right),\left(\begin{array}{ccc}0& 0& 1\\ 1& 0& 0\\ 0& 1& 0\end{array}\right),\left(\begin{array}{ccc}0& 1& 0\\ 0& 0& 1\\ 1& 0& 0\end{array}\right)$$
(2.10)
to the elements {$`A_{i,j}`$}. The matrices (2.10) are the elements of the group $`𝐙_3`$, so the group $`\mathrm{\Delta }(3n^2)`$ is isomorphic to the semidirect product of $`𝐙_n\times 𝐙_n`$ and $`𝐙_3`$.
We first consider the cases with $`n\notin 3𝐙`$. The irreducible representations of $`\mathrm{\Delta }(3n^2)`$ consist of one-dimensional representations and three-dimensional representations. There are 3 one-dimensional representations which we denote as $`R_1^\alpha `$ $`(\alpha =0,1,2)`$. The three-dimensional representations are labeled by two integers $`(l_1,l_2)\in 𝐙_n\times 𝐙_n`$ with $`(l_1,l_2)\ne (0,0)`$. Furthermore there are equivalence relations among the representations $`R_3^{(l_1,l_2)}`$,
$$R_3^{(l_1,l_2)}\simeq R_3^{(-l_1+l_2,-l_1)}\simeq R_3^{(-l_2,l_1-l_2)}.$$
(2.11)
Thus the three-dimensional representations are labeled by the lattice points $`(𝐙_n\times 𝐙_n-(0,0))/𝐙_3`$ where the action of $`𝐙_3`$ is defined by the relation (2.11). By counting the number of the lattice points in $`(𝐙_n\times 𝐙_n-(0,0))/𝐙_3`$ one can see that there are $`(n^2-1)/3`$ three-dimensional irreducible representations. Thus the gauge group of the D-brane worldvolume theory on $`𝐂^3/\mathrm{\Delta }(3n^2)`$ is $`U(1)^3\times U(3)^{(n^2-1)/3}`$.
Now we choose the three-dimensional representation acting on the spacetime indices to be $`R_3^{(m_1,m_2)}`$. By calculating tensor products between $`R_3^{(m_1,m_2)}`$ and irreducible representations of $`\mathrm{\Delta }(3n^2)`$, we can depict the quiver diagram of $`\mathrm{\Delta }(3n^2)`$ in a closed form. An example will be given in Section 3. However, it becomes very complicated as $`n`$ increases, so it is not useful to analyze structure of the gauge theories. Therefore we would like to depict the quiver diagram of $`\mathrm{\Delta }(3n^2)`$ in a form such that its structure becomes simple. To this end, it is useful to define $`R_3^{(0,0)}`$ as
$$R_3^{(0,0)}\equiv R_1^0\oplus R_1^1\oplus R_1^2.$$
(2.12)
Then the tensor products of $`R_3^{(m_1,m_2)}`$ and the irreducible representations are given by a compact form,
$$R_3^{(m_1,m_2)}\otimes R_3^{(l_1,l_2)}=R_3^{(l_1+m_1,l_2+m_2)}\oplus R_3^{(l_1-m_2,l_2+m_1-m_2)}\oplus R_3^{(l_1-m_1+m_2,l_2-m_1)}.$$
(2.13)
Although $`R_3^{(0,0)}`$ is not an irreducible representation, the expression (2.13) is so simple that we tentatively forget the structure (2.12) and treat it as if it were an irreducible representation.
As stated above, a quiver diagram consists of nodes associated with the irreducible representations and arrows associated with the structure of the tensor products. In the present case, the irreducible representations are labeled by two integers modulo $`n`$, so we put the nodes on the lattice points of $`𝐙_n\times 𝐙_n`$. The equation (2.13) indicates that there are three arrows which start from each node $`(l_1,l_2)`$. The end points are $`(l_1+m_1,l_2+m_2)`$, $`(l_1-m_2,l_2+m_1-m_2)`$ and $`(l_1-m_1+m_2,l_2-m_1)`$. The quiver diagram is obtained by putting such arrows to each node in $`𝐙_n\times 𝐙_n`$ and identifying the nodes according to the $`𝐙_3`$ equivalence relations (2.11). Thus the quiver diagram of $`\mathrm{\Delta }(3n^2)`$ is basically a $`𝐙_3`$ quotient of that of $`𝐙_n\times 𝐙_n`$. It is depicted in Figure 2. Only one set of three arrows is depicted for simplicity although such a set of arrows starts from every node. Due to the $`𝐙_3`$ identification, the fundamental region is restricted to the parallelogram surrounded by dashed lines. To be precise, however, we must take the equation (2.12) into consideration. Except for the fixed points, $`𝐙_3`$ quotient means that three one-dimensional nodes related by $`𝐙_3`$ must be identified. Such nodes become three-dimensional after the $`𝐙_3`$ quotient. For the fixed points such as the node at the origin, the $`𝐙_3`$ quotient must be understood as a splitting into three one-dimensional nodes as indicated by the equation (2.12). The rule of the $`𝐙_3`$ quotient on arrows is defined according to the rule on nodes since each arrow is determined by specifying the starting node and the ending node.
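The bookkeeping described above is easy to automate. The sketch below groups the $`𝐙_n\times 𝐙_n`$ nodes into orbits of the identification (2.11) and counts the resulting one- and three-dimensional irreducible representations; the explicit form of the $`𝐙_3`$ map used in the code is the one read off from (2.11).

```python
from itertools import product

# Delta(3n^2) bookkeeping: Z_n x Z_n nodes grouped into orbits of the Z_3
# identification (2.11), (l1, l2) -> (-l2, l1-l2) mod n.  Free orbits give 3d
# irreps; fixed points split into three 1d irreps (cf. (2.12), (2.14)-(2.16)).
def z3(p, n):
    l1, l2 = p
    return ((-l2) % n, (l1 - l2) % n)

def delta_3n2_spectrum(n):
    seen, orbits = set(), []
    for p in product(range(n), repeat=2):
        if p in seen:
            continue
        orbit = {p, z3(p, n), z3(z3(p, n), n)}
        seen |= orbit
        orbits.append(orbit)
    fixed = [o for o in orbits if len(o) == 1]
    free  = [o for o in orbits if len(o) == 3]
    return 3 * len(fixed), len(free)      # (# of 1d irreps, # of 3d irreps)

for n in (2, 4, 5, 6, 9):
    ones, threes = delta_3n2_spectrum(n)
    print(f"n={n}: {ones} one-dim and {threes} three-dim irreps "
          f"(dimension check: {ones * 1 + threes * 9} = {3 * n * n})")
```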
For the cases with $`n\in 3𝐙`$, there are 9 one-dimensional representations which we denote as $`R_1^\alpha `$ $`(\alpha =0,1,\mathrm{},8)`$ and three-dimensional representations labeled by $`(l_1,l_2)\in 𝐙_n\times 𝐙_n-F`$ where $`F=\{(0,0),(n/3,2n/3),(2n/3,n/3)\}`$. As in the $`n\notin 3𝐙`$ cases, there are equivalence relations (2.11). By counting the number of the lattice points in $`(𝐙_n\times 𝐙_n-F)/𝐙_3`$ one can see that there are $`(n^2-3)/3`$ three-dimensional irreducible representations when $`n/3`$ is an integer. So the gauge group of the D-brane theory is $`U(1)^9\times U(3)^{(n^2-3)/3}`$.
By defining
$`R_3^{(0,0)}\equiv R_1^0\oplus R_1^1\oplus R_1^2,`$ (2.14)
$`R_3^{(n/3,2n/3)}\equiv R_1^3\oplus R_1^4\oplus R_1^5,`$ (2.15)
$`R_3^{(2n/3,n/3)}\equiv R_1^6\oplus R_1^7\oplus R_1^8,`$ (2.16)
tensor products of $`R_3^{(m_1,m_2)}`$ and the irreducible representations are given as follows,
$$R_3^{(m_1,m_2)}\otimes R_3^{(l_1,l_2)}=R_3^{(l_1+m_1,l_2+m_2)}\oplus R_3^{(l_1-m_2,l_2+m_1-m_2)}\oplus R_3^{(l_1-m_1+m_2,l_2-m_1)}.$$
(2.17)
The quiver diagram is shown in Figure 2. It is essentially the same as in the $`n\notin 3𝐙`$ cases except that the nodes $`(n/3,2n/3)`$ and $`(2n/3,n/3)`$ lie at fixed points of the $`𝐙_3`$ action.
### 2.2 $`\mathrm{\Delta }(6n^2)`$ case
We now turn to the group $`\mathrm{\Delta }(6n^2)`$. The group $`\mathrm{\Delta }(6n^2)`$ with $`n`$ a positive integer consists of {$`A_{i,j}`$, $`C_{i,j}`$, $`E_{i,j}`$} in (2.9) and the following matrices.
$$B_{i,j}=\left(\begin{array}{ccc}\omega _n^i& 0& 0\\ 0& 0& \omega _n^j\\ 0& \omega _n^{\frac{n}{2}-i-j}& 0\end{array}\right),D_{i,j}=\left(\begin{array}{ccc}0& \omega _n^i& 0\\ \omega _n^j& 0& 0\\ 0& 0& \omega _n^{\frac{n}{2}-i-j}\end{array}\right),F_{i,j}=\left(\begin{array}{ccc}0& 0& \omega _n^i\\ 0& \omega _n^j& 0\\ \omega _n^{\frac{n}{2}-i-j}& 0& 0\end{array}\right).$$
As pointed out in , even if one chooses $`n`$ to be an odd integer, the matrices generate elements of $`\mathrm{\Delta }(6(2n)^2)`$ by multiplication. So we restrict the value of $`n`$ to even integers. The elements of the group $`\mathrm{\Delta }(6n^2)`$ are obtained by multiplying the matrices (2.10) and
$$\left(\begin{array}{ccc}1& 0& 0\\ 0& 0& 1\\ 0& 1& 0\end{array}\right),\left(\begin{array}{ccc}0& 1& 0\\ 1& 0& 0\\ 0& 0& 1\end{array}\right),\left(\begin{array}{ccc}0& 0& 1\\ 0& 1& 0\\ 1& 0& 0\end{array}\right)$$
(2.18)
to the elements {$`A_{i,j}`$} in (2.9). Note that the six matrices (2.10) and (2.18) are the elements of the symmetric group $`𝐒_3`$ of order six. Thus the group $`\mathrm{\Delta }(6n^2)`$ is isomorphic to the semidirect product of $`𝐙_n\times 𝐙_n`$ and $`𝐒_3`$.
When $`n/3`$ is not an integer, the irreducible representations consist of 2 one-dimensional representations, 1 two-dimensional representation, $`2(n-1)`$ three-dimensional representations and $`(n^2-3n+2)/6`$ six-dimensional representations. Thus the gauge group is $`U(1)^2\times U(2)\times U(3)^{2(n-1)}\times U(6)^{(n^2-3n+2)/6}`$. We denote the one-dimensional representations as $`R_1^t`$ $`(t\in 𝐙_2)`$, the two-dimensional representation as $`R_2`$ and the three-dimensional representations as $`R_3^{(l,l)t}`$, where $`l=1,2,\mathrm{},n-1`$ and $`t`$ takes values in $`𝐙_2`$. The six-dimensional representations are labeled by $`(l_1,l_2)\in 𝐙_n\times 𝐙_n-F^{\prime }`$ where $`F^{\prime }=\{(0,0),(l,l),(l,0),(0,l)\}`$. As in the $`\mathrm{\Delta }(3n^2)`$ cases, there are equivalence relations among the representations $`R_6^{(l_1,l_2)}`$,
$$R_6^{(l_1,l_2)}\simeq R_6^{(-l_1+l_2,-l_1)}\simeq R_6^{(-l_2,l_1-l_2)},$$
(2.19)
$$R_6^{(l_1,l_2)}\simeq R_6^{(l_2,l_1)}.$$
(2.20)
Hence the six-dimensional representations are labeled by the lattice points $`(𝐙_n\times 𝐙_n-F^{\prime })/𝐒_3`$. Here $`𝐒_3`$ is the semidirect product of groups $`𝐙_3`$ and $`𝐙_2`$ whose actions are defined by the relations (2.19) and (2.20) respectively. $`(n^2-3n+2)/6`$ is the number of the lattice points in $`(𝐙_n\times 𝐙_n-F^{\prime })/𝐒_3`$ when $`n/3`$ is not an integer.
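The same counting can be done for $`\mathrm{\Delta }(6n^2)`$ by enumerating $`𝐒_3`$ orbits generated by (2.19) and (2.20); the sketch below verifies the numbers of six-dimensional representations quoted in the text for both $`n/3`$ integer and non-integer.

```python
from itertools import product

# Delta(6n^2) bookkeeping: Z_n x Z_n nodes grouped into orbits of the S_3
# generated by the Z_3 action (2.19) and the swap (2.20).  A size-6 orbit
# corresponds to one 6d irrep; smaller orbits correspond to the splittings
# in Eqs. (2.21)-(2.26).
def orbit(p, n):
    z3   = lambda q: ((-q[1]) % n, (q[0] - q[1]) % n)
    swap = lambda q: (q[1], q[0])
    todo, out = [p], set()
    while todo:
        q = todo.pop()
        if q not in out:
            out.add(q)
            todo += [z3(q), swap(q)]
    return frozenset(out)

def count_6d_irreps(n):
    orbits = {orbit(p, n) for p in product(range(n), repeat=2)}
    return sum(1 for o in orbits if len(o) == 6)

for n in (4, 6, 8, 12):
    expected = (n * n - 3 * n) // 6 if n % 3 == 0 else (n * n - 3 * n + 2) // 6
    print(n, count_6d_irreps(n), expected)
```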
By defining
$`R_6^{(0,0)}\equiv R_1^0\oplus R_1^1\oplus 2R_2,`$ (2.21)
$`R_6^{(l,l)}\equiv R_3^{(l,l)0}\oplus R_3^{(l,l)1},(l\ne 0)`$ (2.22)
tensor products of $`R_3^{(m,m)t}`$ and the irreducible representations are given by,
$$R_3^{(m,m)t}\otimes R_6^{(l_1,l_2)}=R_6^{(l_1+m,l_2+m)}\oplus R_6^{(l_1,l_2-m)}\oplus R_6^{(l_1-m,l_2)}.$$
(2.23)
The quiver diagram is depicted in Figure 4. Due to the additional $`𝐙_2`$ identification of (2.20), the fundamental region is restricted to, say, the lower triangle surrounded by dashed lines in Figure 4. We note that the nodes at $`(l,l)`$ correspond to fixed points of the $`𝐙_2`$ action defined by (2.20). In addition, the node at the origin corresponds to a fixed point of the $`𝐙_3`$ action defined by (2.19).
When $`n/3`$ is an integer, the irreducible representations consist of 2 one-dimensional representations, 4 two-dimensional representations, $`2(n-1)`$ three-dimensional representations and $`(n^2-3n)/6`$ six-dimensional representations. So the gauge group is $`U(1)^2\times U(2)^4\times U(3)^{2(n-1)}\times U(6)^{(n^2-3n)/6}`$. We denote the two-dimensional representations as $`R_2^\alpha `$ $`(\alpha =0,\mathrm{\dots },3)`$ and use the same notations as in the $`n\notin 3𝐙`$ case for the other representations. As in the $`n\notin 3𝐙`$ case, there are equivalence relations (2.19) and (2.20), so the six-dimensional representations are labeled by the lattice points $`(𝐙_n\times 𝐙_n-F^{\prime \prime })/𝐒_3`$ where $`F^{\prime \prime }=F\cup F^{\prime }`$. $`(n^2-3n)/6`$ is the number of lattice points in $`(𝐙_n\times 𝐙_n-F^{\prime \prime })/𝐒_3`$ when $`n/3`$ is an integer.
By defining
$`R_6^{(0,0)}\equiv R_1^0\oplus R_1^1\oplus 2R_2^0,`$ (2.24)
$`R_6^{(l,l)}\equiv R_3^{(l,l)0}\oplus R_3^{(l,l)1},(l\ne 0)`$ (2.25)
$`R_6^{(2n/3,n/3)}\equiv R_2^1\oplus R_2^2\oplus R_2^3,`$ (2.26)
tensor products of $`R_3^{(m,m)t}`$ and the irreducible representations are given as follows,
$$R_3^{(m,m)t}\otimes R_6^{(l_1,l_2)}=R_6^{(l_1+m,l_2+m)}\oplus R_6^{(l_1,l_2-m)}\oplus R_6^{(l_1-m,l_2)}.$$
(2.27)
The quiver diagram is depicted in Figure 4. It is essentially the same as in the $`n\notin 3𝐙`$ cases except that the node $`(2n/3,n/3)`$ lies at a fixed point of the $`𝐙_3`$ action.
## 3 Brane configurations for nonabelian orbifolds
In the last section, we have presented the quiver diagrams for D-branes on orbifolds $`𝐂^3/\mathrm{\Gamma }`$ with $`\mathrm{\Gamma }=\mathrm{\Delta }(3n^2)`$ and $`\mathrm{\Delta }(6n^2)`$. Unexpectedly the quiver diagrams have a resemblance to the structure of string junctions or webs of $`(p,q)`$ 5-branes . Three $`(p,q)`$ strings of type IIB theory are permitted to form a prong if the $`(p,q)`$ charge is conserved,
$$\underset{i=1}{\overset{3}{\sum }}p_i=\underset{i=1}{\overset{3}{\sum }}q_i=0.$$
(3.1)
In order to have a quarter supersymmetry, a $`(p,q)`$ string is constrained to have a slope $`p+\tau q`$ on a plane, where $`\tau =\frac{i}{g_s}+\frac{\chi }{2\pi }`$, $`g_s`$ is the coupling constant and $`\chi `$ is the axion of type IIB string theory. Similar conditions must be satisfied to form a prong of $`(p,q)`$ 5-branes. The arrows in the quiver diagrams representing the matter contents have the same structure. It is likely that the gauge theory represented by the quiver diagram has something to do with string junctions or a web of $`(p,q)`$ 5-branes. In this section, we consider realizations of the gauge theories of D-branes on the nonabelian orbifolds by using such brane configurations.
To investigate the brane configurations for nonabelian orbifolds $`𝐂^3/\mathrm{\Delta }`$, we first review the brane box model, which is a brane configuration corresponding to D-branes on an abelian orbifold $`𝐂^3/𝐙_n\times 𝐙_n`$ . As noted in the last section, the finite groups $`\mathrm{\Delta }(3n^2)`$ and $`\mathrm{\Delta }(6n^2)`$ are closely related to $`𝐙_n\times 𝐙_n`$. We study the brane configurations for $`𝐂^3/\mathrm{\Delta }`$ based on the brane box model by using relations between the groups $`\mathrm{\Delta }`$ and $`𝐙_n\times 𝐙_n`$.
Irreducible representations of the finite group $`𝐙_n\times 𝐙_n`$ consist of $`n^2`$ one-dimensional representations $`R_1^{(l_1,l_2)}`$ with $`(l_1,l_2)𝐙_n\times 𝐙_n`$. We choose the three-dimensional representation which defines the geometry of the orbifold to be
$$R_3=R_1^{(1,1)}\oplus R_1^{(-1,0)}\oplus R_1^{(0,-1)},$$
(3.2)
then the decomposition of the product $`R_3\otimes R_1^{(l_1,l_2)}`$ becomes
$$R_3\otimes R_1^{(l_1,l_2)}=R_1^{(l_1+1,l_2+1)}\oplus R_1^{(l_1-1,l_2)}\oplus R_1^{(l_1,l_2-1)}.$$
(3.3)
The quiver diagram is given in Figure 6.
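As a minimal illustration of how this quiver is assembled from the decomposition (3.3), the sketch below builds the directed adjacency list: each node $`(l_1,l_2)`$ emits three arrows, to $`(l_1+1,l_2+1)`$, $`(l_1-1,l_2)`$ and $`(l_1,l_2-1)`$, all modulo $`n`$.

```python
# Directed quiver for C^3/(Z_n x Z_n): one node per irrep R_1^{(l1,l2)},
# three outgoing (bifundamental) arrows per node as dictated by (3.3).
def znxzn_quiver(n):
    arrows = {}
    for l1 in range(n):
        for l2 in range(n):
            arrows[(l1, l2)] = [((l1 + 1) % n, (l2 + 1) % n),
                                ((l1 - 1) % n, l2),
                                (l1, (l2 - 1) % n)]
    return arrows

q = znxzn_quiver(3)
print(len(q), q[(0, 0)])   # 9 nodes, each with 3 outgoing arrows
```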
The point we would like to emphasize is that the quiver diagram for the orbifold $`𝐂^3/\mathrm{\Delta }(3n^2)`$ ($`𝐂^3/\mathrm{\Delta }(6n^2)`$) is essentially that for the orbifold $`𝐂^3/𝐙_n\times 𝐙_n`$ with the $`𝐙_3`$ equivalence relation (2.11) (the $`S_3`$ equivalence relation (2.19) and (2.20)).
The brane box is a model that provides the same gauge theory as the D-brane worldvolume theory on $`𝐂^3/𝐙_n\times 𝐙_n`$. It consists of the following set of branes: NS5-branes along the 012345 directions, NS’5-branes along the 012367 directions and D5-branes along the 012346 directions. The brane box configuration is shown in Figure 6. It represents the brane configuration on the 46-plane. NS and NS’ 5-branes are indicated by horizontal and vertical lines respectively. D5-branes lie on each box bounded by NS and NS’ 5-branes. In the brane box model, the first and the (n+1)-th NS5(NS’5)-branes must be identified. The interest focuses on the gauge theory on the world-volume of the D5-branes. Being finite in the 46 directions, the D5-branes are macroscopically (3+1)-dimensional. Each box provides a gauge group $`U(N_a)`$ where $`N_a`$ denotes the number of D-branes on the $`a`$-th box. Open strings connecting D-branes on neighboring boxes provide matter fields. In the absence of NS5 and NS’5-branes, two possible orientations are allowed for the strings. The orientation of the NS5 and NS’5-branes induces a particular orientation for the strings. One possible choice of orientations of strings is shown in Figure 6 . Only one set of three open strings is drawn although such a set of strings stretches from every box. The oriented open strings stretching from D-branes on the $`a`$-th box to D-branes on the $`b`$-th box provide bifundamental matter $`N_a\times \overline{N_b}`$. Thus the matter contents of the brane box model are just those determined by the quiver diagram in Figure 6. We make a list of the correspondence among representation theory, gauge theory, quiver diagram and brane box model in Table 1.
We regard the correspondence between the quiver diagram and the brane configuration as a guideline in constructing brane configurations. As discussed in Section 2, the quiver diagram of $`\mathrm{\Delta }(3n^2)`$ is obtained from that of $`𝐙_n\times 𝐙_n`$ by the $`𝐙_3`$ quotient. Thus a naive guess leads to the idea that the brane configuration for the orbifold $`𝐂^3/\mathrm{\Delta }(3n^2)`$ is obtained from the brane box configuration by the $`𝐙_3`$ quotient. However, we cannot define a $`𝐙_3`$ quotient on the brane box configuration since $`𝐙_3`$ is not a symmetry of the configuration of Figure 6. Instead, we construct a brane configuration for $`𝐂^3/𝐙_n\times 𝐙_n`$ which has the $`𝐙_3`$ symmetry while maintaining the correspondence of Table 1. The global structure is determined by the requirement that it provides the appropriate gauge groups and matter contents given in Table 1. The requirement of the $`𝐙_3`$ symmetry leads to a condition on the local structure. Naively it leads to a prong which consists of three branes with directions $`(m_1,m_2)`$, $`(-m_2,m_1-m_2)`$ and $`(m_2-m_1,-m_1)`$ on a plane. However, the $`𝐙_3`$ equivalence relation (2.11) is defined on the lattice of irreducible representations. It means that the $`𝐙_3`$ action on the brane configuration is defined in reference to the ”lattice of boxes”. Thus it is necessary to take the directions of alignment of the boxes into account. There are infinitely many possibilities which satisfy these criteria. Two examples are depicted in Figure 7(a) and (b).
These figures represent brane configurations on the 56-plane. The lines indicate $`(p,q)`$ 5-branes which extend along the 01234 directions and the $`p+\tau q`$ direction in the 56-plane. Here we take $`\tau =i`$. The difference between Figure 7(a) and Figure 7(b) is the charges $`(p,q)`$ of the 5-branes. The three types of 5-branes in Figure 7(a) have $`(p,q)`$ charges $`(2,1)`$, $`(1,1)`$ and $`(1,2)`$, while the three types of 5-branes in Figure 7(b) have $`(p,q)`$ charges $`(1,1)`$, $`(1,0)`$ and $`(0,1)`$. Although we do not have a way to judge which is the right one at present, we will discuss in Section 4 that Figure 7(b) is the right one for the brane configuration for $`𝐂^3/𝐙_n\times 𝐙_n`$. Each hexagon plays the role of each square of the brane box model. In contrast to the brane box model, however, D-branes on each box must be D3-branes stretching along the 0156 directions to maintain 1/8 supersymmetry. Since the D3-branes are bounded by $`(p,q)`$ 5-branes in the 56-directions, their effective action is two-dimensional. This is why we started with D1-branes on orbifolds.
To realize the correspondence given in Table 1 for $`\mathrm{\Gamma }=\mathrm{\Delta }(3n^2)`$, we must identify the first row and the $`n`$-th row and the first column and the $`n`$-th column. As in the brane box model, each box gives a gauge group $`U(N_a)`$ where $`N_a`$ is the number of D-branes on the $`a`$-th box. We let the number of D1-branes on $`𝐂^3/𝐙_n\times 𝐙_n`$ be one, so that $`N_a`$ becomes one for every box. Matter fields come from open strings which connect D-branes on neighboring boxes. The presence of $`(p,q)`$ 5-branes induces a particular orientation for the strings. The relative orientations of the strings are restricted by the requirement of invariance under the $`𝐙_3`$ action. There are two possibilities; one is shown in Figure 7. The arrows indicate the orientations of strings stretching from one box. Again only one set of three open strings is drawn although such a set of strings stretches from every box. These open strings reproduce the matter contents determined by the quiver diagram in Figure 2 for the case $`R_3^{(m_1,m_2)}=R_3^{(1,1)}`$. Another set of orientations is obtained by reversing the orientations of all arrows, which is essentially equivalent to the case shown in Figure 7. Although we determined the orientations of the open strings by the requirement of $`𝐙_3`$ invariance, it is necessary to find a reason, based on string theory, why only such orientations are allowed.
Now that we have a brane configuration for $`𝐂^3/𝐙_n\times 𝐙_n`$ with a $`𝐙_3`$ symmetry, we make a $`𝐙_3`$ quotient to obtain a brane configuration for $`𝐂^3/\mathrm{\Delta }(3n^2)`$. The $`𝐙_3`$ quotient on the brane configuration is defined through that on the quiver diagram. By the $`𝐙_3`$ quotient, fundamental region becomes the small parallelogram bounded by dashed lines in Figure 7. We would like to show that the brane configuration precisely reproduces the structure of the quiver diagram of $`\mathrm{\Delta }(3n^2)`$. We take $`n=4`$ as an example. The quiver diagram of $`\mathrm{\Delta }(3n^2)`$ with $`n=4`$ can be depicted in a closed form as in Figure 8(a). The brane configuration for $`𝐂^3/\mathrm{\Delta }(3n^2)`$ can also be drawn in a closed form as in Figure 8(b).
It is basically a $`𝐙_3`$ orbifold of a two-torus $`T^2`$. The three apexes in Figure 8(b) correspond to the fixed points of $`𝐙_3`$. There are three D-branes on each of the five boxes which do not include the fixed points, and hence each such box gives a gauge group $`U(3)`$. On the other hand, the D-brane on the box which includes the fixed point has a spiral structure as illustrated in Figure 8(b) due to the orbifolding procedure. In this case, an oriented open string stretching from the $`i`$-th D-brane to the $`j`$-th D-brane is the same as an open string stretching from the $`(i+1)`$-th D-brane to the $`(j+1)`$-th D-brane (modulo 3). Since the $`(i,j)`$ component of the gauge field comes from the oriented open string from the $`i`$-th D-brane to the $`j`$-th D-brane, the $`3\times 3`$ components of the gauge field take the following form,
$$A\left(\begin{array}{ccc}a_1& a_2& a_3\\ a_3& a_1& a_2\\ a_2& a_3& a_1\end{array}\right).$$
(3.4)
It means that the box including the fixed point gives the gauge group $`U(1)^3`$ instead of $`U(3)`$. As for the matter fields, they come from open strings which connect D-branes on neighboring boxes. Locally, every boundary between two boxes has the same structure; there are three D-branes on both sides, so each boundary gives a matter field with $`3\times 3`$ components. The $`(i,j)`$ component comes from an oriented open string stretching from the $`i`$-th D-brane on one side to the $`j`$-th D-brane on the other side. The difference comes from the global structure of the D-branes meeting at each boundary, which determines how the matter fields transform under the gauge groups. If the boxes on both sides do not include fixed points of $`𝐙_3`$, the matter transforms as a bifundamental $`3\times \overline{3}`$ of $`U(3)\times U(3)`$. If a box on one side of the boundary includes a fixed point, the corresponding representation $`3`$ splits into $`1\oplus 1\oplus 1`$<sup>1</sup><sup>1</sup>1If we consider $`N`$ D-branes on an orbifold, the gauge group coming from the box including a fixed point is $`U(N)^3`$ instead of $`U(3N)`$. Accordingly, the $`3N`$-dimensional representation splits into $`N\oplus N\oplus N`$; each $`N`$ represents the $`N`$-dimensional representation of one $`U(N)`$ factor.. Thus the $`3\times 3`$ components of the matter coming from such a boundary split into three $`3\times \overline{1}`$ or three $`1\times \overline{3}`$. They coincide with the spectrum indicated by the arrows in the quiver diagram. Collecting these results, one can see that the brane configuration in Figure 8(b) precisely reproduces the structure of the quiver diagram in Figure 8(a).
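The reduction from $`U(3)`$ to $`U(1)^3`$ at the fixed-point box can be made explicit: any matrix of the circulant form (3.4) is diagonalized by the discrete Fourier transform, leaving three decoupled phases. A minimal numerical check (the entries $`a_1,a_2,a_3`$ below are arbitrary illustrative numbers):

```python
import numpy as np

# A 3x3 circulant matrix of the form (3.4); the entries are arbitrary.
a1, a2, a3 = 0.3, 1.1, -0.7
A = np.array([[a1, a2, a3],
              [a3, a1, a2],
              [a2, a3, a1]])

# Every circulant matrix is diagonalized by the discrete Fourier transform,
# so only three independent (U(1)) eigenmodes survive.
w = np.exp(2j * np.pi / 3)
F = np.array([[w ** (j * k) for k in range(3)] for j in range(3)]) / np.sqrt(3)
D = F.conj().T @ A @ F
print(np.round(D, 10))   # diagonal up to numerical precision
```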
In the $`n\notin 3𝐙`$ case, only one of the three apexes is included in a box and the other two lie at boundaries of some boxes, as shown in Figure 8(b). For $`n\in 3𝐙`$, on the other hand, the three apexes lie inside three boxes respectively. Therefore each box at an apex gives a gauge group factor $`U(1)^3`$ in accordance with the structure of the quiver diagram.
Now we would like to comment on the correspondence between the brane configuration and the quiver diagram. In the definition of the $`𝐙_3`$ action on the quiver diagram, a rather elaborate rule was necessary in relation to the fixed points of $`𝐙_3`$. In contrast, the rules of the $`𝐙_3`$ quotient are automatically reproduced via the brane configurations, although the quotienting procedure on the brane configurations is simple. In this respect, we can say that the brane configuration inherently possesses the necessary properties which the quiver diagrams should have. In other words, it seems reasonable to regard the brane configuration as a realization of the quiver diagram in the language of physics.
The brane configuration for $`𝐂^3/\mathrm{\Delta }(6n^2)`$ is obtained by making a $`𝐙_2`$ quotient in addition to the $`𝐙_3`$ quotient for $`𝐂^3/\mathrm{\Delta }(3n^2)`$. The $`𝐙_2`$ quotient is determined through the relation (2.20) which defines the $`𝐙_2`$ action on the quiver diagram. One can see that the brane configuration reproduces the structure of the quiver diagram of $`𝐂^3/\mathrm{\Delta }(6n^2)`$ by a similar argument to the $`𝐂^3/\mathrm{\Delta }(3n^2)`$ cases.
## 4 Physical interpretation of McKay correspondence
In this section, we would like to discuss relations between the brane configurations obtained in the last section and geometries of $`𝐂^3/\mathrm{\Gamma }`$. As we mentioned, the brane configuration for $`𝐂^3/\mathrm{\Gamma }`$ plays the role of the quiver diagram of $`\mathrm{\Gamma }`$, hence the relation can be considered as a relation between the tensor product structure of irreducible representations of $`\mathrm{\Gamma }`$ and the geometry of $`𝐂^3/\mathrm{\Gamma }`$. Such a relation has been discussed in mathematics as the McKay correspondence. We will present evidence that the McKay correspondence may be understood as T-duality.
Before we come to the discussion on duality, we make it clear what we mean by the term ”McKay correspondence”. The McKay correspondence was originally observed for two-dimensional orbifolds. It is a relation between the quiver diagram of $`\mathrm{\Gamma }`$ and a diagram representing intersections among exceptional divisors of $`\stackrel{~}{𝐂^2/\mathrm{\Gamma }}`$. Both of them coincide with a certain Dynkin diagram of $`ADE`$ Lie algebra. We consider its straightforward generalization as a three-dimensional McKay correspondence. That is, we use the term McKay correspondence to refer to a relation between the quiver diagram of $`\mathrm{\Gamma }`$ and a diagram which represents intersections among exceptional divisors of $`\stackrel{~}{𝐂^3/\mathrm{\Gamma }}`$. If $`\mathrm{\Gamma }`$ is abelian, $`𝐂^3/\mathrm{\Gamma }`$ becomes a toric variety and its resolution is discussed by using toric method. In this case, information on intersections among exceptional divisors is represented by a toric diagram. In summary, we use the term McKay correspondence in dimension three as a relation between the quiver diagram of $`\mathrm{\Gamma }`$ and the toric diagram of $`\stackrel{~}{𝐂^3/\mathrm{\Gamma }}`$.
Fortunately, relations between brane configurations and toric varieties were generally discussed in . To explain the idea of , we briefly review toric varieties. A complex $`d`$-dimensional toric variety involves a real $`d`$-dimensional torus $`T^d`$. Certain cycles of the torus $`T^d`$ shrink on some loci of the toric variety. A toric variety is determined by specifying which cycles shrink where. Such information is represented by a diagram in real $`d`$-dimensional space, which we call a toric diagram. Information on the (co)homology of the toric variety is encoded in the toric diagram. For toric varieties with vanishing first Chern class, toric diagrams are reduced to diagrams in ($`d-1`$)-dimensional space. As we are considering complex three-dimensional toric varieties with vanishing first Chern class, their toric diagrams represent how a torus $`T^2`$ of the toric variety shrinks. More precisely, what is identified with the loci where $`T^2`$ shrinks is a dual diagram of the toric diagram. A line in the dual diagram with a slope $`q/p`$ represents a locus at which a $`(p,q)`$ cycle of $`T^2`$ shrinks. Keeping these properties of toric varieties in mind, we turn to the explanation of which relates brane configurations and toric diagrams. It is known that M-theory on $`T^2`$ and type IIB string theory on a circle $`S^1`$ are related by a kind of T-duality . Vanishing $`(p,q)`$ cycles of the $`T^2`$ of M-theory correspond to $`(p,q)`$ 5-branes of type IIB string theory. If we consider M-theory on a toric variety and dualize in this sense along the 2-torus $`T^2`$ of the toric variety, we obtain type IIB string theory on a circle with $`(p,q)`$ 5-branes; the skeleton of the dual diagram is identified with a web of $`(p,q)`$ 5-branes.
We would like to apply this argument to the case $`\mathrm{\Gamma }=𝐙_n\times 𝐙_n`$. The toric diagram of a certain resolution of the orbifold $`𝐂^3/𝐙_n\times 𝐙_n`$ is depicted in Figure 10. Its dual diagram depicted in Figure 10 represents a web of $`(p,q)`$ 5-branes in the sense described above.
The brane configuration in Figure 10 has the same structure as the configuration for $`𝐂^3/𝐙_n\times 𝐙_n`$ in Figure 7(b), at least locally. In this sense, the brane configuration for $`𝐂^3/𝐙_n\times 𝐙_n`$, which can be interpreted as the quiver diagram of $`𝐙_n\times 𝐙_n`$, and the toric diagram of $`𝐂^3/𝐙_n\times 𝐙_n`$ are related by T-duality. The global structure of Figure 10, however, is different from that of Figure 7(b); to obtain the brane configuration in Figure 7(b), we must combine two brane configurations in Figure 10 and make a certain identification. The difference stems from the non-compactness of the orbifold. The models discussed in Section 3 are so-called elliptic models; they are compactified along the 56 directions to $`T^2`$. On the other hand, the toric diagram in Figure 10 represents an open variety which does not have such compactness. In fact, such a manifold is not appropriate for performing T-duality since the radii of the circles along which the T-duality is performed become infinite as the distance from the singularity becomes infinite. In order to make the T-duality argument precise, we must replace the toric variety by a manifold which has the same local structure but different asymptotics, so that the radii of the circles are finite at infinity. It is analogous to the replacement of $`𝐂^2/𝐙_n`$ with a Taub-NUT space. We hope that the correspondence between the brane configuration and the toric diagram can be made definite when such a replacement is properly taken into account.
Thus further study is needed to make the T-duality argument precise, but there is some evidence to support such an argument. First, we would like to point out a nontrivial coincidence between the physical argument presented above and the mathematical argument on the McKay correspondence. In resolving the singularity of $`𝐂^3/𝐙_n\times 𝐙_n`$, there are many possibilities related by the topology-changing process called a flop. Among such resolutions, only the resolution represented by the toric diagram in Figure 10 has a relation to the brane configuration for $`𝐂^3/𝐙_n\times 𝐙_n`$. In the mathematical argument , the McKay correspondence is proved only for a particular kind of resolution. It is the so-called Hilbert scheme of $`𝐂^3/𝐙_n\times 𝐙_n`$, whose toric diagram is nothing but Figure 10. Although the formulation of the McKay correspondence in is different from the naive one we are considering, it seems reasonable to regard the coincidence as supporting evidence for the T-duality interpretation of the McKay correspondence.
Now we turn to the case $`\mathrm{\Gamma }=\mathrm{\Delta }(3n^2)`$. Since the orbifold $`𝐂^3/\mathrm{\Delta }(3n^2)`$ is not a toric variety, we cannot directly apply the above argument. However, the brane configuration for $`𝐂^3/\mathrm{\Delta }(3n^2)`$ is obtained from that for $`𝐂^3/𝐙_n\times 𝐙_n`$ by the $`𝐙_3`$ quotient. So we use this relation to discuss the geometry of $`𝐂^3/\mathrm{\Delta }(3n^2)`$. The $`𝐙_3`$ action on the brane configuration can be translated to an action on the toric diagram through the relation between the brane configuration and the toric diagram described above. For example, $`𝐙_3`$ acts cyclically on the three apexes of the triangle which lies at the center of the toric diagram. Since each lattice point inside the toric diagram represents an exceptional divisor, the $`𝐙_3`$ quotient means that the three divisors must be identified. In this way, we obtain a $`𝐙_3`$ quotient of the toric variety $`𝐂^3/𝐙_n\times 𝐙_n`$. The $`𝐙_3`$ action on the quiver diagram is illustrated in Figure 11.
In fact, resolutions of the singularity of $`𝐂^3/\mathrm{\Delta }(3n^2)`$ were discussed in . Broadly speaking, the singularity is resolved by the following procedure. First, consider toric resolutions of the abelian orbifold $`𝐂^3/𝐙_n\times 𝐙_n`$. Then perform a certain $`𝐙_3`$ quotient, and finally resolve the singularities which occur due to the $`𝐙_3`$ quotienting procedure. The group $`𝐙_3`$ permutes exceptional divisors of the toric resolution of $`𝐂^3/𝐙_n\times 𝐙_n`$. In fact, the $`𝐙_3`$ quotient is equivalent to that represented in Figure 11 if we consider Figure 10 as a resolution of $`𝐂^3/𝐙_n\times 𝐙_n`$. Although the $`𝐙_3`$ actions are equivalent, their origins are quite different. The $`𝐙_3`$ action in Figure 11 was determined by the discussion of the quiver diagrams, so it originates from the representation theory of $`\mathrm{\Delta }(3n^2)`$. On the other hand, the $`𝐙_3`$ action in the process of the resolution of the singularity comes from a geometrical argument. The coincidence between the two $`𝐙_3`$ actions can be considered as one aspect of the McKay correspondence for $`\mathrm{\Gamma }=\mathrm{\Delta }(3n^2)`$, and again it serves as supporting evidence for the T-duality interpretation of the McKay correspondence.
Now we comment on the difference between the brane configuration of Figure 7(a) and that of Figure 7(b). They differ only in the $`(p,q)`$ charges of the 5-branes. From the toric geometry point of view, however, there is an essential difference . The toric diagram corresponding to the configuration of Figure 7(a) before the $`𝐙_3`$ identification is depicted in Figure 12(a). In the figure, there is a lattice point in each triangle, in contrast to Figure 10. It means that the orbifold singularity is not fully resolved. To fully resolve the orbifold singularity, we must subdivide the toric diagram as shown in Figure 12(b). Then each junction of $`(p,q)`$ 5-branes is replaced by Figure 12(c)<sup>2</sup><sup>2</sup>2This kind of configuration is considered in in the context of brane box models.. On the other hand, each junction in Figure 7(b) does not admit such a replacement. Therefore we can determine the right $`(p,q)`$ charges of the 5-branes by requiring that each junction cannot be replaced by a set of junctions as in Figure 12(c).
## 5 Discussions
In this paper, we have studied brane configurations corresponding to D-branes on the nonabelian orbifolds $`𝐂^3/\mathrm{\Delta }(3n^2)`$ and $`𝐂^3/\mathrm{\Delta }(6n^2)`$ based on analyses of the quiver diagrams. We first constructed the brane configuration for $`𝐂^3/𝐙_n\times 𝐙_n`$ by requiring that it have a $`𝐙_3`$ symmetry. It consists of a web of $`(p,q)`$ 5-branes and D3-branes. By making a $`𝐙_3`$ ($`S_3`$) quotient, we have obtained the brane configurations for $`𝐂^3/\mathrm{\Delta }(3n^2)`$ ($`𝐂^3/\mathrm{\Delta }(6n^2)`$). The structure of the quiver diagrams of $`\mathrm{\Delta }(3n^2)`$ and $`\mathrm{\Delta }(6n^2)`$ can be naturally explained via the brane configurations. We have also discussed relations between the brane configurations and toric diagrams. Based on the argument which relates branes and toric geometry, we have pointed out that the three-dimensional McKay correspondence may be understood as T-duality.
In , it was shown that D3-branes on orbifolds and the brane box model are related by T-duality. Now we would like to make a rough argument to relate the brane configurations and D-branes on orbifolds. We start with type IIB string theory with a web of $`(p,q)`$ 5-branes along 01234 directions and one direction of the 56 plane and D3-branes along 0156 directions. We first T-dualize along direction 9 and decompactify along direction 10. Then we obtain M-theory on an orbifold $`𝐂^3/\mathrm{\Delta }(3n^2)`$ with M5-branes along 01569(10) directions. Here $`𝐂^3/\mathrm{\Delta }(3n^2)`$ extends to 56789(10) directions. To obtain type IIB string theory on orbifolds, we take a limit that the direction 1 shrinks. Then we obtain type IIA string theory on $`𝐂^3/\mathrm{\Delta }(3n^2)`$ with D4-branes along 0569(10). Next we perform T-dualities along 256 directions. Then we obtain type IIB string theory on $`𝐂^3/\mathrm{\Delta }(3n^2)`$ with D3-branes. Here $`𝐂^3/\mathrm{\Delta }(3n^2)`$ extends to 56789(10) directions, while D3-branes extend to 029(10) directions. If such an argument on duality is true, the right interpretation of the D1-branes discussed in Section 2 seems to be D3-branes wrapped around two directions of the orbifold. D-branes wrapping on each cycle of an orbifold stick to the singularity . However, since we are considering a set of D-branes which corresponds to the regular representation of $`\mathrm{\Delta }(3n^2)`$, they are allowed to move away from the singularity . Anyway it needs further investigation on the argument of duality.
One of the important problems which should be clarified is the structure of the Kahler moduli space of nonabelian orbifolds. For three-dimensional abelian orbifolds, the Kahler moduli spaces were examined in by using a toric method combined with the fact that toric varieties can be realized as vacuum moduli spaces of two-dimensional gauged linear sigma models . On the other hand, this method cannot be applied to nonabelian orbifolds since they are not toric varieties. The nonabelian orbifolds, however, are related to toric varieties through a quotienting procedure. It would be interesting to investigate whether the quotienting procedure can be incorporated into the toric method to analyze the structure of the Kahler moduli space of nonabelian orbifolds.
It is also interesting to generalize the work to other cases. For the E-type subgroups of $`SU(3)`$, a catalogue of quiver diagrams is given in . It is so complicated that the construction of brane configurations seems to be difficult. However, unbroken supersymmetry may provide us with a guide to the construction. For $`\mathrm{\Gamma }\subset SU(4)`$, the classification of finite subgroups was summarized in . Since there is no supersymmetry in these cases, the quiver diagrams representing bosonic matter contents and those representing fermionic matter contents are different. It would be interesting to find a mechanism that provides such asymmetric matter contents from brane configurations.
Acknowledgements
I would like to thank Y.Ito, S. Hosono, T. Kitao, Y. Sekino, T. Hara and S. Sugimoto for valuable discussions and comments. I wish to express my special thanks to T. Tani for helpful suggestions on several points in the paper. This work is supported in part by Japan Society for the Promotion of Science(No. 10-3815).
# A Search for Fluctuation-Dissipation Theorem Violations in Spin Glasses from Susceptibility Data
## Abstract
We propose an indirect way of studying the fluctuation-dissipation relation in spin-glasses that only uses available susceptibility data. It is based on a dynamic extension of the Parisi-Toulouse approximation and a Curie-Weiss treatment of the average magnetic couplings. We present the results of the analysis of several sets of experimental data obtained from various samples.
Introduction.— The fluctuation-dissipation theorem (FDT) relates the response of a magnetic system to the magnetization correlation function at equilibrium. In its integrated form FDT states that
$$\chi (t,t_w)=\frac{1}{T}(q_d-C(t,t_w)),$$
(1)
where the response to an applied field $`h`$ held constant from a waiting-time $`t_w`$ up to $`t`$ and the correlation are defined as
$`\chi (t,t_w)\equiv \delta \langle m(t)\rangle /\delta h|_{h=0},\qquad C(t,t_w)\equiv \langle m(t)m(t_w)\rangle ,`$
and $`q_d`$ is the long-time limit of $`C(t,t)`$.
Glassy systems are out of equilibrium and FDT does not apply, as shown in particular in solvable models with infinite range interactions. Otherwise stated, there are effective temperatures (different from the bath-temperature) at play in aging systems . A simple modification of Eq. (1) consists in proposing that, for large $`t_w`$, the susceptibility still depends on $`t`$ and $`t_w`$ only through $`C`$, i.e.
$$\chi (t,t_w)=\chi (C(t,t_w))$$
(2)
where $`\chi (C)`$ is a system-dependent function . The latter may be obtained from a plot of $`\chi (t,t_w)`$ against $`C(t,t_w)`$ using $`tt_w`$ as a parameter. This curve is known analytically for several mean-field spin-glasses such as the Sherrington-Kirkpatrick (SK) model. Models with finite range interactions have been studied numerically by several groups who obtained results that are qualitatively similar to the mean-field ones. These numerical results must however be taken with caution since the times that can be reached in simulations are relatively short.
In principle, $`\chi `$ and $`C`$ should be determined experimentally by measuring independently noise correlations and susceptibilities. This procedure has been recently used to investigate FDT violation in structural glasses . For spin-glasses, noise measurements along the lines of the early work of Ref. are under way: their outcome will provide a most stringent test for spin-glass theories.
In this paper, we shall assume that violations of FDT do occur below $`T_g`$, and propose an indirect method for the determination of the $`\chi `$ vs $`C`$ curve using available experimental data. This construction gives a first glimpse of the form of this curve. It is based on some assumptions, notably a dynamic extension of the Parisi-Toulouse (PaT) approximation , that hold for some spin-glasses.
We start by noticing that, in all solvable spin-glass models, below $`T_g`$, $`\chi (C)`$ is a piecewise function :
$`\chi (C)`$ $`=`$ $`\{\begin{array}{ccc}\hfill \frac{1}{T}\left(q_d-C\right)& \text{if}& q\le C<q_d\text{ (FDT regime)},\hfill \\ \hfill \chi _{\mathrm{ag}}(C)+\frac{1}{T}\left(q_d-q\right)& \text{if}& q_0<C<q\text{ (aging regime)},\hfill \end{array}`$ (5)
where the dynamical Edwards-Anderson parameter, $`q`$, and the minimal correlation in an applied magnetic field $`H`$, $`q_0`$, are defined as
$$q\equiv \underset{t-t_w\rightarrow \infty }{lim}\underset{t_w\rightarrow \infty }{lim}C(t,t_w),\qquad q_0\equiv \underset{t\rightarrow \infty }{lim}C(t,t_w),$$
(6)
the latter vanishing in zero applied field. In the SK model, $`\chi _{\mathrm{ag}}(C)`$ is decreasing and has a downwards curvature. Numerical results indicate that, at least within the simulation times, the shape of the curve for the 3D Edwards-Anderson (EA) model is similar.
The dynamical version of the PaT hypothesis (see also Ref. ) consists of the following two assertions: (i) $`\chi (C)`$ is independent of $`T`$ and $`H`$ in the aging regime and (ii) $`q`$ and $`q_0`$ only depend on $`T`$ and $`H`$, respectively. We shall moreover assume that this approximation is good even at finite times (see Discussion).
The near temperature-independence of $`\chi (C)`$ in the aging regime has been checked numerically for the 4D EA model in Ref. . No checks are available for the 3D case. The validity of this approximation for the experimental systems will be discussed below. It will be seen that the PaT approximation allows us to estimate the $`C`$-dependence of the susceptibility using exclusively response results, thus circumventing the difficulties inherent to noise measurements.
Our strategy is to use data taken under $`T`$ and $`H`$ conditions such that the system is at the limit of validity of FDT, i.e. $`C(t,t_w)=q`$. The point $`\{q,\chi (q)\}`$ is the intersection between the straight part (FDT regime) and the curved part (aging regime) of $`\chi (C)`$ (cf. Eq. 5). The locus of the points obtained varying $`T`$ and $`H`$ spans a master curve $`\stackrel{~}{\chi }(C)`$ which, by the PaT hypothesis, is field and temperature independent. The method of construction is explained below and illustrated in Fig 2.
The susceptibility at the limit of the FDT regime corresponds to:
$`\chi (q)=\underset{t-t_w\rightarrow \infty }{lim}\underset{t_w\rightarrow \infty }{lim}\chi (t,t_w)={\displaystyle \frac{1}{T}}(q_d-q).`$ (7)
We have approximated this limit by using susceptibility data of three different types taken from the literature.
Frequency dependent measurements.
In ac-susceptibility measurements, a small ac-field of fixed frequency $`\omega `$ is applied. The in-phase susceptibility $`\chi ^{\prime }(\omega ,t_w)`$ is recorded as a function of temperature. For frequencies in the range $`\omega \gtrsim 1`$ Hz, the long waiting-time limit $`\omega t_w\gg 1`$ is approached within the measurement time. Then we can estimate the zero-frequency limit by extrapolation
$$\chi ^{\prime }(0,\infty )\equiv \underset{\omega \rightarrow 0}{lim}\underset{t_w\rightarrow \infty }{lim}\chi ^{\prime }(\omega ,t_w)=\frac{1}{T}(q_d-q(T)).$$
(8)
The master curve $`\stackrel{~}{\chi }(C)`$ is obtained by joining the points $`\left\{C=q_d-T\chi ^{\prime }(0,\infty );\stackrel{~}{\chi }=\chi ^{\prime }(0,\infty )\right\}`$ using $`T`$ as a parameter.
Field cooled measurements. $`M_{\mathrm{fc}}`$ is measured by cooling the sample in a constant magnetic field. Below $`T_g`$, $`\chi _{\mathrm{fc}}=dM_{\mathrm{fc}}/dH`$ rapidly reaches an asymptotic value (see however ).
In some spin-glasses like CuMn $`\chi _{\mathrm{fc}}`$ is nearly temperature-independent below $`T_g`$, $`\chi _{\mathrm{fc}}(T,H)\simeq \chi _{\mathrm{fc}}(H)`$, as required by PaT. However, this does not hold in most systems for small fields and near $`T_g`$, where a cusp in $`\chi _{\mathrm{fc}}`$ appears. The $`\chi _{\mathrm{fc}}`$ data may be used where both FDT and PaT hold: on the critical line (assuming there is at least a transient one, an issue discussed below). The parameter is now $`H`$ and the master curve is spanned by the points $`\left\{C=q_d-T_g(H)\chi _{\mathrm{fc}}(H);\stackrel{~}{\chi }=\chi _{\mathrm{fc}}(H)\right\}`$.
Zero-field cooled measurements.
In the absence of a sufficient amount of published ac or $`\mathrm{fc}`$ data, we have resorted to including in the analysis data obtained with a zero-field cooled procedure. The sample is quenched in the absence of a field from above $`T_g`$ down to some low temperature. After a time $`t_w`$ (necessary for stabilization of the temperature), a weak magnetic field $`H`$ is applied and $`M_{\mathrm{zfc}}`$ is immediately measured. Under typical experimental conditions, the measurement time $`t`$ is significantly shorter than $`t_w`$. We shall thus consider that $`\chi _{\mathrm{zfc}}\equiv M_{\mathrm{zfc}}/H`$ is a good approximation to Eq. (7).
Ideally, the same procedure should be repeated for each measurement temperature. In practice, however, the magnetization $`M_{\mathrm{zfc}}`$ is measured by increasing the temperature in steps from its initial value. Although the two methods are not strictly equivalent, we believe that the possible differences are of little consequence for our conclusions. Therefore we shall consider that the whole experimental $`M_{\mathrm{zfc}}(T)`$ curve yields an acceptable approximation to Eq. (7). Support for this point of view is given by a comparison of both ac and $`\mathrm{zfc}`$ procedures on one sample (see below).
The field-independence of $`\chi _{\mathrm{zfc}}`$ implied by PaT is in general well verified experimentally except for the largest fields (see, for example, Fig. 1 of the first of Refs. ). The master curve is determined as in the accase.
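All three procedures reduce to the same change of variables on a susceptibility data set: each measured pair $`(T,\chi )`$ is mapped to the point $`(C=q_d-T\chi ,\stackrel{~}{\chi }=\chi )`$. A schematic implementation is shown below; the arrays stand in for a measured data set and are placeholders, not actual measurements.

```python
import numpy as np

# Schematic construction of the master curve chi~(C) from zfc-type data.
# Placeholder data: temperatures below T_g and a stand-in for M_zfc/H,
# in units where the Curie constant (and hence q_d) equals 1.
q_d = 1.0
T = np.linspace(0.1, 1.0, 10)
chi_zfc = 0.8 * T / (1.0 + T)

C = q_d - T * chi_zfc          # abscissa of the point {q, chi(q)}
chi_tilde = chi_zfc            # ordinate: the measured susceptibility itself
for c, x in sorted(zip(C, chi_tilde)):
    print(f"C = {c:.3f}   chi~ = {x:.3f}")
```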
Before turning to the analysis of the data, we notice that Eq. (5) cannot be applied to experimental data as it stands. Indeed, this equation is only valid for systems in which the exchange coupling averages to zero. In real spin-glasses, however, this average is, in general, finite. This is reflected in the behavior of the high temperature susceptibility, which often obeys a Curie-Weiss law. The Curie-Weiss temperature $`\theta `$ may be a sizeable fraction of $`T_g`$. Within a mean-field approximation of the average coupling, however, Eq. (5) still holds for the response to the total field (applied plus internal). This amounts to the replacement $`\chi \rightarrow \chi _{\mathrm{meas}}/(1+(\theta /𝒞)\chi _{\mathrm{meas}})`$, with $`\chi _{\mathrm{meas}}`$ the measured susceptibility and $`𝒞`$ the Curie constant. In some metallic spin-glasses, like CuMn at low concentration of the magnetic impurity , as well as AuFe and AgMn , the Curie-Weiss law holds down to the transition temperature. This is not the case in other spin-glasses where, due to progressive clustering, the paramagnetic behaviour deviates from a simple Curie-Weiss law close to the transition. When the Curie-Weiss law holds, a plot of the inverse susceptibility as a function of temperature yields the values of $`𝒞`$ and $`\theta `$, and it is easy to show that $`𝒞=q_d`$.
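In practice the internal-field correction is a one-line map applied to each measured point before the construction; a sketch with placeholder numbers (not data from the samples analysed here):

```python
import numpy as np

# Mean-field (Curie-Weiss) correction for a nonzero average coupling:
# chi = chi_meas / (1 + (theta / C_curie) * chi_meas), with q_d = C_curie.
theta = 5.0                               # Curie-Weiss temperature, placeholder
C_curie = 1.0                             # Curie constant, placeholder; q_d = C_curie
chi_meas = np.array([0.10, 0.15, 0.18])   # placeholder measured susceptibilities

chi_corrected = chi_meas / (1.0 + (theta / C_curie) * chi_meas)
print(chi_corrected)
```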
The analysis. — We have analysed data obtained with the methods described above for three metallic systems: CuMn and AuFe for several concentrations, AgMn at 2.6% , and two insulating samples CdCr<sub>1.7</sub>In<sub>0.3</sub>S<sub>4</sub> and Fe<sub>0.5</sub> Mn<sub>0.5</sub>TiO<sub>3</sub> .
For CdCr<sub>1.7</sub>In<sub>0.3</sub>S<sub>4</sub> we used the $`\chi ^{\prime }(\omega ,t_w)`$ data , and extrapolated to zero frequency assuming a power-law decay $`\chi ^{\prime }(\omega ,\infty )\simeq \chi ^{\prime }(0,\infty )+c_1\omega ^a`$. In the cases of CdCr<sub>1.7</sub>In<sub>0.3</sub>S<sub>4</sub> and Fe<sub>0.5</sub> Mn<sub>0.5</sub>TiO<sub>3</sub>, we also used the $`\mathrm{fc}`$ data to check the consistency of our determination. The susceptibilities $`\chi _{\mathrm{zfc}}`$ were obtained for CuMn, AgMn, AuFe, CdCr<sub>1.7</sub>In<sub>0.3</sub>S<sub>4</sub> and Fe<sub>0.5</sub> Mn<sub>0.5</sub>TiO<sub>3</sub> using $`\chi _{\mathrm{zfc}}\equiv M_{\mathrm{zfc}}/H`$.
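The zero-frequency extrapolation is a standard nonlinear least-squares fit of the assumed power law; a sketch on synthetic data follows (the numbers are illustrative, not the measured values):

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit chi'(omega) = chi0 + c1 * omega**a and read off the omega -> 0 limit chi0.
def power_law(omega, chi0, c1, a):
    return chi0 + c1 * omega**a

omega = np.array([0.04, 0.1, 1.0, 10.0, 100.0, 1000.0])   # Hz, illustrative
chi_p = 0.120 + 0.015 * omega**0.10 + 0.0005 * np.random.randn(omega.size)

popt, _ = curve_fit(power_law, omega, chi_p, p0=[0.1, 0.01, 0.2])
chi0, c1, a = popt
print(f"chi'(0, inf) = {chi0:.4f},  exponent a = {a:.3f}")
```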
In order to analyse the field-cooled data, it is necessary to identify the transition temperature in the presence of an applied field. This is done by studying the onset of irreversibilities in the magnetization curves: we defined $`T_g`$ as the temperature where $`M_{\mathrm{irr}}\equiv M_{\mathrm{fc}}-M_{\mathrm{zfc}}=0`$. At high fields, when the relationship between $`\chi _{\mathrm{fc}}`$ and $`M_{\mathrm{fc}}`$ is non-linear, we have made polynomial fits of $`M_{\mathrm{fc}}(H)`$ to compute $`\chi _{\mathrm{fc}}(H)=dM_{\mathrm{fc}}/dH`$. For high enough fields, $`M_{\mathrm{fc}}`$ is independent of temperature, supporting the PaT approximation. However, for some samples, a cusp in the low-field susceptibility may appear near the transition temperature and the determination of a $`T`$-independent $`\chi _{\mathrm{fc}}`$ becomes ambiguous. In these cases we have used the value of $`\chi _{\mathrm{fc}}`$ at the critical temperature in order to ensure consistency with the alternative determination of $`\stackrel{~}{\chi }(C)`$ based on $`M_{\mathrm{zfc}}`$ measurements. On the whole, the construction using $`\mathrm{fc}`$ data is more ambiguous than that using $`\mathrm{zfc}`$ or ac data.
Our analysis is most reliable for CuMn, a system in which the Curie-Weiss law as well as the PaT approximation are very well verified. Figure 2 shows the $`\stackrel{~}{\chi }(C)`$ curve determined using the $`\mathrm{zfc}`$ data of Ref. for two concentrations, 1.08% and 2.02%. There are no experimental points for $`C/q_d>0.8`$, which would correspond to rather low temperatures. We know however that $`\stackrel{~}{\chi }(C)`$ tends to zero as $`C\rightarrow q_d`$ since $`\chi _{\mathrm{zfc}}(T=0)=0`$. In addition, the slope $`d\stackrel{~}{\chi }/dC`$ should be infinite at $`C=q_d`$ so that $`q=q_d`$ only at $`T=0`$. The validity of the hypotheses can be judged by the inset of Fig. 2 where we show the temperature dependence of the inverse susceptibility for the 1.08% compound. A Curie-Weiss law with $`\theta \simeq 0`$ holds accurately for all $`T\ge T_g`$. The $`T`$-independence of $`\chi _{\mathrm{fc}}`$ required by the PaT approximation is also well verified below the transition. The same is true for the 2.02% sample.
For comparison, we also show in Fig. 2 the curve $`\stackrel{~}{\chi }(C)`$ for the 3D EA model, at $`T=0.7(<T_g)`$ and $`H=0`$, obtained numerically in Ref. . The agreement between the numerical results and the experimental data for the 1.08% sample is remarkable. It may be fortuitous, however, since the results for the 2.02% sample deviate from it. In fact, one must note that $`\stackrel{~}{\chi }(C)`$ is not a universal function. For example, it depends on the details of the Hamiltonian (Heisenberg, Ising and, in general, the level of anisotropy) even at the mean-field level. Thus, there is no reason to expect universality in real systems.
The data for CdCr<sub>1.7</sub>In<sub>0.3</sub>S<sub>4</sub> obtained using the three different techniques described above are shown in Fig. 4. It can be noticed that the $`\mathrm{fc}`$ and ac results are very similar. The $`\mathrm{zfc}`$ data are somewhat higher (probably due to the fact that the large $`t_w`$ condition is not as well fulfilled as in the other cases), but the agreement with the other determinations remains acceptable. Notice that the $`\stackrel{~}{\chi }(C)`$ curve obtained for this compound is quite different from that corresponding to CuMn.
In Fig. 4 we collect results for all the other samples. As expected, the curves do not fall on a universal curve, but their shape is similar.
Discussion.— Finally, let us clarify an important point. In several situations, such as 2D Ising and 3D Heisenberg and, perhaps, 3D Ising spin glasses under a magnetic field, no true spin-glass phase is expected. However, for still relatively long times the system remains below a slowly time-dependent pseudo de Almeida-Thouless (AT) line : it ages and behaves as a true (out of equilibrium) glass with a non-trivial $`\chi (C)`$ that would eventually become a straight line with slope $`-1/T`$. In this paper we explored the consequences of the stronger assumption that PaT is a good approximation below the pseudo AT-line if all quantities involved belong to the same epochs.
Another important issue is the asymptotic ($`t_w\mathrm{}`$) form of the $`\chi _{\mathrm{ag}}(C)`$ curve. Even if the system never equilibrates, the $`\chi _{\mathrm{ag}}(C)`$ curve may still be a very slowly varying function of $`t_w`$, eventually reaching a form different from that observed experimentally. We are not in a position to discard this possibility.
It has been recently shown that, under certain hypotheses, the slope of the dynamic $`\chi (C)`$, for an infinite system in the large-$`t_w`$ limit coincides with the static $`x(q)`$ as defined by the probability of overlaps of configurations taken with the Gibbs measure . Since we do not address here the issue as to whether the AT-line and the PaT approximation survive beyond experimental times we cannot make any statements concerning the relation of this dynamical function to the corresponding equilibrium Parisi function.
In conclusion we have presented an approximate determination of the $`\stackrel{~}{\chi }(C)`$ curve characterising the deviations from equilibrium of spin-glasses through the violations of FDT, assuming they exist. This construction does not replace a true determination via simultaneous measurements of susceptibility and noise correlation, as the ones in Ref. , but it yields some insight into how this curve might look in reality.
\***
We wish to thank D. Sherrington for early discussions on this subject as well as H. Aruga-Katori, N. Bontemps, A. Ito, H. Takayama, A. Tobo and H. Yoshino. We also thank J. Hammann and M. Ocio for a critical reading of the manuscript. LFC, DRG and EV thank the Monsbusho contract for partial financial support. JK was partially supported by the ‘Programme Thématique Matériaux’, Région Rhône-Alpes.
# The transition phase in Nova Muscae 1998, the dust formation in classical novae and a possible link to intermediate polar systems.
## 1 Introduction
Nova Muscae 1998 was discovered on 1998 December 29 (Liller 1998) at $`m_V`$=8.5. The light curve of the nova during the first 100 days following the outburst, which was compiled from various sources, is shown in Fig. 1. We estimate that the time to decline by two magnitudes is $`t_{2(V)}=3.5\pm 0.3`$ days. The object is therefore classified as a very fast nova.
Fig. 1 also displays oscillations with time scales of several days, and full amplitude of about one magnitude in the light curve of the nova during days 20-60 after maximum light. This phase of the nova is known as the transition phase. It is unclear why certain novae experience this behaviour in their light curves (Leibowitz 1993).
We observed Nova Mus 1998 during 17 nights in 1999. The observations were carried out using the 20-cm telescope in Vina Del Mar, Chile, and the 45-cm telescope in Loomberah, Australia. CCDs and broad band filters were used. Most of the observations were obtained during the transition phase of the nova. The highest peak in the power spectrum (not shown) corresponds to a period of $`0.16930\pm 0.00015`$ day.
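For unevenly sampled photometry of this kind, a period is normally extracted with a Lomb-Scargle periodogram. The sketch below runs such a search with a tool like astropy on synthetic data containing an injected signal at the quoted period; the time stamps and magnitudes are placeholders, not our measurements.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Schematic period search on synthetic, irregularly sampled photometry.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 17.0, 400))            # days, uneven sampling
true_period = 0.16930                               # days (the period quoted above)
mag = 0.05 * np.sin(2 * np.pi * t / true_period) + 0.01 * rng.standard_normal(t.size)

frequency, power = LombScargle(t, mag).autopower(minimum_frequency=1.0,
                                                 maximum_frequency=20.0)
best = frequency[np.argmax(power)]
print(f"best period = {1.0 / best:.5f} day")
```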
## 2 Discussion
In the power spectrum there are a few more peaks with possible inter-connections, however they are below the significance level. The nova might thus be classified as an intermediate polar candidate.
In Table 1 we summarize the properties of all novae which are intermediate polar systems or intermediate polar candidates. Although the information is scarce, the data suggest that there might be a connection between the two groups. This link may be related to the evidence for the early presence of the accretion disc in young novae (Retter 1999).
# The Detection of Gravitational Waves with LIGO
## I Introduction
Einstein first predicted gravitational waves in 1916 as a consequence of the general theory of relativity. In this theory, concentrations of mass (or energy) warp space-time, and changes in the shape or position of such objects cause a distortion that propagates through the Universe at the speed of light (i.e., a gravitational wave).
It is tempting to draw the analogy between gravitational waves and electromagnetic waves. However, the nature of the waves is quite different in these two cases. Electromagnetic waves are oscillating electromagnetic fields propagating through space-time, while gravitational waves are the propagation of distortions of space-time, itself. The emission mechanisms are also quite different. Electromagnetic wave emission results from an incoherent superposition of waves from molecules, atoms and particles, while gravitational waves are coherent emission from bulk motions of energy. The characteristics of the waves are also quite different in that electromagnetic waves experience strong absorption and scattering in interaction with matter, while gravitational waves have essentially no absorption or scattering. Finally, the typical frequency of detection of electromagnetic waves is $`f>10^7`$ Hz, while gravitational waves are expected to be detectable at much lower frequency, $`f<10^4`$ Hz.
By making these comparisons it becomes clear that most sources of gravitational waves will not be seen as sources of electromagnetic waves and vice versa. This means there is great potential for surprises! However, it also means that (since most of what we know about the Universe comes from electromagnetic waves) there is much uncertainty in the types and characteristics of sources, as well as the strengths and rate.
The characteristics of gravitational radiation can be seen from the perturbation to flat space-time, which in the weak field approximation is expressed by $`g_{\mu \nu }=\eta _{\mu \nu }+h_{\mu \nu },`$ where $`h_{\mu \nu }`$ is the perturbation from Minkowski space. There is freedom in the choice of gauge, but in the transverse traceless gauge and weak field limit, the field equations become a wave equation
$$\left(\nabla ^2-\frac{1}{c^2}\frac{\partial ^2}{\partial t^2}\right)h_{\mu \nu }=0,$$
with the solution being plane waves having two polarizations for the gravitational wave,
$$h_{\mu \nu }=a\widehat{h}_+\left(t-\frac{z}{c}\right)+b\widehat{h}_\times \left(t-\frac{z}{c}\right),$$
with the two components at $`45^{}`$ from each other (Figure 1), rather than $`90^{}`$ as for electromagnetic waves.
Interestingly, this is a consequence of the spin 2 nature of gravity. The experiments discussed below, although classical experiments analogous to the Hertz experiment that demonstrated electromagnetic waves, are capable of decomposing the two components of the wave, thereby establishing empirically that gravity is spin 2. They also have the capability to measure the speed of the gravitational waves, and can establish that they move with velocity $`c`$.
For gravitational radiation there is no monopole term or dipole term, so the first term is quadrupolar and the strength of the radiation depends on the magnitude of this non axisymmetric moment. The largest term for gravitational radiation is
$$h_{\mu \nu }=\frac{2G}{Rc^4}\ddot{I}_{\mu \nu },$$
where $`G`$ is Newton’s constant, $`R`$ is the distance to the source, and $`I_{\mu \nu }`$ is the reduced quadrupole moment tensor. This yields a strain at the surface of the earth for the inspiral of a binary system of two neutron stars at a distance of the Virgo Cluster ($`\sim `$15 Mpc) of $`h\sim 10^{-21}`$. The new generation of gravitational wave detectors promises to have a resolution capable of measuring such a small strain.
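A rough order-of-magnitude check of this number can be made with the standard quadrupole-order amplitude for a circular binary written in terms of the chirp mass; the formula and the parameter values in the sketch below are textbook assumptions on our part, not quantities taken from the text.

```python
import math

# Quadrupole-order strain for an optimally oriented circular binary:
#   h ~ (4/R) * (G*Mc/c**2)**(5/3) * (pi*f/c)**(2/3),  Mc = chirp mass.
# Illustrative parameters: two 1.4 solar-mass neutron stars at the Virgo distance.
G, c, Msun, Mpc = 6.674e-11, 2.998e8, 1.989e30, 3.086e22

m1 = m2 = 1.4 * Msun
Mc = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2    # chirp mass
R = 15.0 * Mpc                              # distance of the Virgo Cluster
f = 100.0                                   # gravitational-wave frequency in band (Hz)

h = 4.0 / R * (G * Mc / c**2) ** (5.0 / 3.0) * (math.pi * f / c) ** (2.0 / 3.0)
print(f"h ~ {h:.1e}")   # a few times 1e-22 at 100 Hz, growing as f**(2/3) toward merger
```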
Until now, gravitational waves have not been observed directly; however, strong indirect evidence resulted from the beautiful experiment of Hulse and Taylor. They studied the neutron star binary system PSR1913+16 and observed, by using pulsar timing, the gradual speed-up of the $`\sim `$8 hour orbital period of this system. This speed-up, a cumulative shift of about 10 seconds, was tracked accurately over about 14 years, and the result (Figure 2) is in very good quantitative agreement with the predictions of general relativity.
Of course, the motivation for direct detection of gravitational waves is based on the empirical desire to “‘see these waves”. However, such studies also have enormous potential both to study the nature of gravity in a new regime and to probe the Universe in a fundamentally new way. It is tempting to draw an analogy with the neutrino, where it was “indirectly observed” by Pauli and Fermi in the 1930’s as the explanation for the apparent non-conservation of energy and angular momentum in nuclear beta decay. Decades of rich physics have followed, which were first focussed on the goal of direct detection. Since the direct detection by Reines and Cowan, neutrino physics has been a rich subject, both for studies of the properties of neutrinos themselves (e.g., the question of neutrino mass remains an important topic) and as an important tool to probe the constituent nature of nucleons.
LIGO is designed to directly detect gravitational waves using the technique of laser interferometry. The arms of the interferometer are arranged in an L-shaped pattern that will measure changes in distance between suspended test masses at the ends of each arm. The basic principle is illustrated in Figure 3. A gravitational wave produces a distortion of the local metric such that one axis of the interferometer is stretched while the orthogonal direction shrinks. This effect oscillates between the two arms with the frequency of the gravitational wave. Thus,
$$\mathrm{\Delta }L=\mathrm{\Delta }L_1-\mathrm{\Delta }L_2=hL,$$
where $`h`$ is the gravitational strain or amplitude of the gravitational wave. Since the effect is linearly proportional to $`L`$, the interferometer should have an arm length as long as is practical, and for LIGO that is 4 km, to yield the target strain sensitivity of $`h\sim 10^{-21}`$ for the initial interferometers now being installed.
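Putting in the quoted numbers gives the displacement sensitivity that the interferometer must reach (a trivial arithmetic check):

```python
# Displacement corresponding to the target strain over a 4 km arm.
h = 1e-21          # target strain sensitivity of the initial interferometers
L = 4000.0         # arm length in meters
dL = h * L
print(f"Delta L = {dL:.1e} m")   # 4e-18 m, far smaller than a proton diameter
```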
## II Sources of Gravitational Waves
Construction of LIGO is well underway at the two observatory sites: Hanford, Washington, and Livingston, Louisiana. The commissioning of the detectors will begin in 2000. The first data run is expected to begin in 2002 at a sensitivity of $`h\sim 10^{-21}`$. Incremental technical improvements that will lead to a better sensitivity of $`h\sim 10^{-22}`$ are expected to follow shortly, and the facility will allow further improved second generation interferometers when they are developed, with a sensitivity of $`h\sim 10^{-23}`$. It is also important to note that all the detectors in the world of comparable sensitivity will be used in a worldwide network to make the most sensitive and reliable detection. A comparably long baseline detector (VIRGO) is being built by a French-Italian collaboration near Pisa, and there are smaller interferometers being built in Japan (TAMA) and in Germany (Geo-600). Finally, an Australian group is working toward a detector in the Southern Hemisphere.
There are a large number of processes in the Universe that could emit detectable gravitational waves. Interferometers like LIGO will search for gravitational waves in the frequency range $`f\sim 10`$ Hz to 10 kHz. It is worth noting that there are proposals to put interferometers in space which would be complementary to the terrestrial experiments, as they are sensitive to much lower frequencies ($`f<0.1`$ Hz), where there are known sources like neutron star binaries or rotating black holes. For LIGO, characteristic signals from astrophysical sources will be sought by recording time-frequency data. Examples of such signals include the following:
Chirp Signals: The inspiral of compact objects such as a pair of neutron stars or black holes will give radiation that increases in amplitude and frequency as they move toward the final coalescence of the system. This characteristic chirp signal can be characterized in detail, depending on the masses, separation, ellipticity of the orbits, etc.. Figure 4 illustrates the “chirp” signal where the amplitude and frequency are determined by the masses of the neutron stars (the chirp mass), the distance to the sources and the orbital inclination. A variety of search techniques, including comparisons with an array of templates will be used for this type of search. The Newtonian (quadrupole) approximation is accurate at a level that allows a set of specific templates to be used.
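As an illustration, a minimal Newtonian chirp template can be generated by integrating the standard quadrupole-order frequency evolution, $`df/dt=(96/5)\pi ^{8/3}(GM_c/c^3)^{5/3}f^{11/3}`$ with $`M_c`$ the chirp mass; the expression and the parameters below are our own illustrative choices, not specifications of the LIGO template bank.

```python
import numpy as np

# Newtonian chirp sketch: frequency evolves as df/dt = k * f**(11/3), and the
# (un-normalized) strain amplitude grows as f**(2/3).  Illustrative parameters only.
G, c, Msun = 6.674e-11, 2.998e8, 1.989e30
m1 = m2 = 1.4 * Msun
Mc = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2                     # chirp mass
k = (96.0 / 5.0) * np.pi ** (8.0 / 3.0) * (G * Mc / c**3) ** (5.0 / 3.0)

dt = 1.0 / 4096.0                                            # sampling step (s)
f, phase, t = 40.0, 0.0, 0.0                                 # start the template at 40 Hz
times, strain = [], []
while f < 1000.0:                                            # stop well before merger
    times.append(t)
    strain.append(f ** (2.0 / 3.0) * np.cos(phase))
    phase += 2.0 * np.pi * f * dt
    f += k * f ** (11.0 / 3.0) * dt
    t += dt

print(f"template duration from 40 Hz to 1 kHz: {times[-1]:.1f} s")
```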
Relativistic corrections to the time-frequency behavior are typically $`<`$10% of the Newtonian contribution and can be extracted from the signal to high accuracy. A great deal of phenomenological work has been done to determine the number and range of templates required, the efficiency for extraction of signals in background noise, etc. The results indicate that the range of anticipated neutron star parameters can be covered with a manageable number of templates. The final coalescence of NS/NS systems will yield information sensitive to the equation of state of nuclear matter; however, this part of the spectrum is typically at frequencies of 1 kHz or higher, where the shot noise in the interferometers is a serious limitation. If such sources are observed, however, future configurations for the LIGO interferometer promise to yield improved sensitivity in a narrower bandwidth which can be used for these studies.
The rate of such events is expected to be a few per year from neutron star pairs within about 200 Mpc. The rate is more uncertain for black hole pairs, but because of their heavier masses they produce a larger signal, which allows a deeper search into the Universe for a given LIGO sensitivity.
Burst Signals: The gravitational collapse of stars (e.g. supernovae) will lead to emission of gravitational radiation. Type I supernovae involve white dwarf stars and are not expected to yield substantial emission. However, Type II collapses can lead to strong radiation, if the core collapse is sufficiently non-axisymmetric. Estimates of the strengths indicate detection might be possible out to the Virgo Cluster, which would yield rates of one or more per year. However, the gravitational wave signal depends on the non-axisymmetric component of the collapse and this is not well determined by calculation. Detection within or near our galaxy, however, seems assured even if the collapse is highly symmetrical. Calculations indicate that the signal from a convectively unstable neutron star during the first second or so of its life should be detectable in the frequency band of LIGO from sources throughout our galaxy.
The detection of signals from supernovae is a challenge because the waveforms are not well determined. The duration and general characteristics should nevertheless allow such signals to be identified with a generic search for bursts, and assurance of detection will require identifying burst-like signals in coincidence from multiple interferometers. In addition, steps are underway to correlate signals with the large neutrino detectors, so that the gravitational wave signal can be sought in coincidence with a neutrino signal.
Periodic Signals: Radiation from rotating non-axisymmetric neutron stars will produce periodic signals in the detectors. The gravitational wave frequency is twice the rotation frequency, which is typically within the LIGO sensitivity band for known neutron stars. Neutron stars spin down partially due to the emission of gravitational waves. Searches for signals from identified neutron stars will involve tracking the system for many cycles, taking into account the Doppler shift from the motion of the Earth around the Sun and the effects of spin-down of the pulsar. Both targeted searches for known pulsars and general sky searches are anticipated.
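The following back-of-the-envelope sketch shows why the Doppler correction mentioned above is essential; the pulsar frequency and observation time are assumed values chosen only for illustration.

```python
# Order-of-magnitude size of the annual Doppler modulation that a targeted
# pulsar search must remove (illustrative numbers, not LIGO pipeline values).
c = 2.998e8
v_orb = 2.98e4           # Earth's orbital speed, m/s
f_gw = 1.0e3             # assume a 500 Hz pulsar -> 1 kHz gravitational-wave frequency
T_obs = 3.15e7           # one year of coherent integration, s

delta_f = f_gw * v_orb / c        # scale of the Doppler shift
bin_width = 1.0 / T_obs           # frequency resolution of a coherent search
print(f"Doppler shift ~ {delta_f:.2f} Hz, spread over ~ {delta_f/bin_width:.1e} frequency bins")
```

Without the correction, the signal power would be smeared over millions of frequency bins and lost.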
Stochastic Signals: Signals from gravitational waves emitted in the first instants of the early universe ($`t\sim 10^{-43}`$ sec) can be detected through correlation of the background signals from two or more detectors. Some models of the early Universe can result in detectable signals. Observations of this early Universe gravitational radiation would provide an exciting new cosmological probe.
## III The LIGO Facilities
The LIGO facilities at Hanford, WA and Livingston, LA each have a 4 km “L” shaped vacuum enclosure, which is 1 meter in diameter. Vacuum is required to reduce scattering off residual gas molecules; light scattered from the walls would otherwise be modulated by the small shaking of the vacuum enclosure and contribute a background. In addition, we have installed baffles on the walls to reduce scattering. The large diameter of the tube serves both to minimize scattering and to provide the ability to house multiple interferometers within the same facility. Each facility has a 4 km interferometer with test masses housed in vacuum chambers at the vertex and the ends of the L shaped arms. At Hanford, there will also be a 2 km interferometer implemented in the same vacuum system, allowing a triple coincidence requirement. The overall vacuum system is capable of achieving pressures of $`10^{-9}`$ torr. We presently have both arms at each site installed and they are vacuum tight. We are beginning to bake the tube to reach the desired high vacuum. We expect to have the entire vacuum system complete, all control systems operational and at high vacuum before the end of 1999.
The initial detector for LIGO is a Michelson interferometer with a couple of special features:
The arms are Fabry-Perot cavities to increase the sensitivity by containing multiple bounces and effectively lengthening the interferometer arms. The number of bounces is set so that the effective optical path length does not exceed half the gravitational wave wavelength ($`\sim `$30 bounces). The interferometers are arranged such that the light from the two arms destructively interferes in the direction of the photodetector, thus producing a dark port. However, the light constructively interferes in the direction of the laser, and this light is “re-used” by placing a recycling mirror between the laser and the beam splitter. This mirror forms an additional resonant cavity by reflecting this light back into the interferometer, effectively increasing the laser power and thereby the sensitivity of the detector.
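A rough sketch of the trade-off behind the bounce number is given below; it simply relates the light storage time to the gravitational-wave frequency above which the stored signal starts to cancel. The ~30 bounces is the value quoted in the text, while the resulting frequency is only an order-of-magnitude estimate.

```python
# Relation between the number of bounces and the highest gravitational-wave
# frequency for which the stored light still adds coherently (rough sketch).
c = 2.998e8
L = 4.0e3          # arm length, m
N_bounce = 30

storage_path = 2 * N_bounce * L          # effective optical path, m
tau_storage = storage_path / c           # light storage time, s
f_max = 1.0 / (2.0 * tau_storage)        # frequency whose half period equals the storage time
print(f"storage time ~ {tau_storage*1e3:.1f} ms, coherent up to f ~ {f_max:.0f} Hz")
```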
Much work has been done over the past decade to demonstrate this arrangement, and the detailed techniques and required sensitivities, in smaller scale laboratory prototypes. This includes experiments on a 40 m prototype interferometer at Caltech, a scale model of LIGO, which has provided an excellent test bed to study sensitivity, optics, controls and even some early work on data analysis, noise characterization, etc. We also have built a special interferometer (PNI) at MIT, which has successfully demonstrated our required phase sensitivity, the limitation on the sensitivity at high frequencies.
LIGO is limited in practice by three noise sources, as illustrated in Figure 5:
At low frequencies ($`\sim `$10 Hz to 50 Hz), the limitation in sensitivity is set by the level of seismic noise in the system. We employ a seismic isolation system to control this noise; it consists of a four-layer, passive vibration isolation stack having stainless steel plates separated by constrained-layer damped springs. This system is contained within large vacuum chambers. The stack supports an optical platform from which the test mass is suspended. The combination of the seismic isolation stack and the test mass suspension gives an isolation from ground motion at the relevant frequencies of about 10 orders of magnitude. The possibility of a more elaborate isolation system and/or the addition of active isolation exists for the future, as do improved suspension systems. Improvements in this area are planned early in the future improvement program of LIGO.
In the middle range of frequencies ($`\sim `$50 Hz to 200 Hz) the principal effect limiting the sensitivity is thermal noise. This noise comes partially from the suspension system, where there are violin resonances of the steel suspension fibers. However, the principal noise source is the vibrational modes of the test masses. This noise is reduced by choosing test masses, presently fused silica, with a very high Q, thereby confining most of the thermal noise to narrow lines outside our frequency band. The test masses will also improve in the future by using higher Q fused silica, better bonding techniques for the wires, and perhaps even new materials for the test masses, like sapphire.
At the higher frequencies (200 Hz to 5 kHz) the main limitation comes from shot noise, and the sensitivity is limited by the power of the laser (or the effective photon statistics). The initial laser (Nd:YAG) is designed and produced for LIGO, using a master oscillator/power amplifier configuration, which yields a 10 watt high quality output beam. We also have developed a system to pre-stabilize this laser in power and frequency. Again, we expect to incorporate higher power lasers in the future as they become available.
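The photon-counting origin of the shot-noise limit can be sketched with round numbers; the integration time, bounce number, and the neglect of recycling gain are assumptions of this estimate rather than parameters of the actual LIGO noise model.

```python
import math

# Back-of-the-envelope shot-noise-limited strain from photon counting statistics.
h_planck, c = 6.626e-34, 2.998e8
P = 10.0            # laser power, W (recycling gain ignored)
lam = 1.064e-6      # Nd:YAG wavelength, m
L, N_bounce = 4.0e3, 30
tau = 0.01          # integration time, s (~100 Hz bandwidth)

n_photons = P * tau / (h_planck * c / lam)
dphi = 1.0 / math.sqrt(n_photons)               # phase resolution from shot noise
dL = lam * dphi / (4.0 * math.pi * N_bounce)    # arm-length resolution with N bounces
print(f"h_shot ~ {dL / L:.1e}")                 # ~1e-21, comparable to the initial target
```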
The expected sensitivity of the initial LIGO detectors and the advanced LIGO II detectors is shown in Figure 6, in comparison with signal strengths for various sources.
## IV Conclusions and Prospects
The LIGO interferometer parameters have been chosen such that our initial sensitivity will be consistent with estimates needed for possible detection of known sources. Although the rates for these sources have large uncertainties, improvements in sensitivity linearly increase the distance out to which sources can be detected, which increases the rate by the cube of the improvement in sensitivity. So, anticipated future improvements will greatly enhance the physics reach of LIGO, and for that reason a vigorous program for implementing improved sensitivities is integral to the design and plans for LIGO.
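The cube law is simple enough to state numerically:

```python
# Event-rate gain from a sensitivity improvement: the search distance grows
# linearly with sensitivity, so the surveyed volume (and hence the rate for
# uniformly distributed sources) grows as its cube.
for improvement in (2, 10):
    print(f"{improvement}x better sensitivity -> ~{improvement**3}x the event rate")
```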
We are now entering the final year of the construction of the LIGO facilities and initial detectors. We have formed the scientific collaboration that will organize the scientific research on LIGO. This collaboration already consists of more than 200 collaborators from 22 institutions. Early in the next millennium we will turn on and begin the commissioning of these detectors. We anticipate that we will reach a sensitivity of $`h\sim 10^{-21}`$ by the year 2002. At that point, we plan to enter into the first physics data run ($`\sim `$2 years) to search for sources. This will be the first search for gravitational waves with a sensitivity at which we might expect signals from known sources. Following this run, in 2004 we will begin incremental improvements to the detector interleaved with further data runs. We expect to reach a sensitivity of $`h\sim 10^{-22}`$ within the next 10 years, making direct detection of gravitational waves within that time frame reasonably likely.
# Coulomb Driven New Bound States at the Integer Quantum Hall States in GaAs/Al0.3Ga0.7As Single Heterojunctions
## Abstract
Coulomb driven, magneto-optically induced electron and hole bound states from a series of heavily doped GaAs/Al<sub>0.3</sub>Ga<sub>0.7</sub>As single heterojunctions (SHJ) are revealed in high magnetic fields. At low magnetic fields ($`\nu >`$2), the photoluminescence spectra display Shubnikov de-Haas type oscillations associated with the empty second subband transition. In the regime of the Landau filling factor $`\nu <`$1 and 1$`<\nu <2`$, we found strong bound states due to Mott type localizations. Since a SHJ has an open valence band structure, these bound states are a unique property of the dynamic movement of the valence holes in strong magnetic fields.
In the last several years, many optical studies have focused on the regime of the integer and fractional quantum Hall states of semiconductor quantum wells (QWs), where electrons and holes have confined energy levels. Since a SHJ has only one interface, it is easier to fabricate high quality devices with ultrahigh mobilities. In a SHJ, the conduction band electrons are confined in a wedge-shaped quantum well near the interface, whereas the photocreated valence holes are not confined and tend to move to the GaAs flat band region. This has often been considered to be a disadvantage in optical experiments, since the dynamic movement of the valence holes in the open structure makes it difficult to judge their location. To avoid this problem, intentional acceptor doping techniques have been employed to study optical transitions from SHJs. For example, magnetophotoluminescence (MPL) experiments have been carried out on acceptor doped SHJs to investigate transitions associated with the integer quantum Hall effect (IQHE) and fractional quantum Hall effect (FQHE).
In this Letter, we report the observation of strong discontinuous transitions at the $`\nu `$=2 and $`\nu `$=1 integer quantum Hall states, and we believe these new transitions are the consequence of the formation of Coulomb-driven strong bound states. In a heavily doped single heterojunction, there are two factors that enhance the second subband (E1) exciton transition. The first is the close proximity between the Fermi energy and the E1 subband; the second is that, due to the spatially indirect nature of the conduction and valence band structure, the wavefunction overlap between the E1 subband and the valence hole is much larger than that of the first subband (E0) and the valence hole. As the magnetic field changes, the 2DEG modifies the valence hole self-energy. This in turn causes the E1 exciton transition to display strong oscillatory behavior in its transition energy and peak intensity at magnetic fields smaller than the $`\nu `$=2 integer quantum Hall state. Near $`\nu `$=2, however, the E1 exciton transition loses its intensity and a new feature emerges at a lower energy. This transition rapidly increases its intensity for 2$`>\nu >`$1 but then diminishes in strength and disappears as the magnetic field approaches the $`\nu `$=1 quantum Hall state. In its place, another new peak appears at lower energy which swiftly grows in intensity for $`\nu <`$1. These two red-shifted transitions are attributed to the formation of new bound states due to electron and hole localization and can still be observed at temperatures as high as 40K. In addition, we found that the strong bound states at $`\nu `$=2 and $`\nu `$=1 were not observed for the low electron density samples (samples 1 and 2). These samples have a relatively large separation between the Fermi energy and the E1 subband, and only the E0-hole free carrier transitions were observed.
A series of samples grown by different growth techniques was used for this study. One set was grown using a metal-organic chemical vapor deposition (MOCVD) technique and the other group was fabricated using a molecular beam epitaxy (MBE) technique. Table 1 shows the various parameters for the six samples. The high magnetic fields were generated using a recently commissioned 60T quasi-continuous (QC) magnet, which has a 2-second field duration. A pumped <sup>3</sup>He cryostat was used to achieve temperatures of 0.4-70K. For the MPL experiments, a 630nm low power diode laser ($`<`$1.5mW/cm<sup>2</sup> at the samples) was used as the excitation source, and a single optical fiber (600$`\mu `$m diameter; 0.16 numerical aperture) provided both the input excitation light onto the sample and the output PL signal to the spectrometer. The spectroscopic system consisted of a 300 mm focal length f/4 spectrometer and a charge coupled device (CCD) detector, which has a fast refresh rate (476Hz) and high quantum efficiency (90% at 500nm). This fast detection system allowed us to collect approximately 500 PL spectra per second during the magnetic field pulse.
The $`\sigma `$+ polarization MPL spectra for sample 3 as a function of magnetic field, taken in the QC magnet, are displayed in Fig. 1. The corresponding MPL intensities for both the $`\sigma `$+ and $`\sigma `$- polarizations are shown in Fig. 2. Two dramatic intensity changes at high magnetic fields, near the $`\nu `$=2 and $`\nu `$=1 integer quantum Hall states, were observed for these heavily doped samples. Our self-consistent calculations of the conduction band energies (Fig. 3) indicate that the Fermi energy lies in close proximity to the E1 subband for heavily doped samples and that the discontinuous MPL transitions are associated with the Fermi edge enhanced E1 exciton transitions. The energy shifts of the peaks are displayed in Fig. 4a as a function of magnetic field B. As the magnetic field increases further, the transition (P2) that occurred at $`\nu `$=2 disappears near $`\nu `$=1 and another transition (P1) emerges at the $`\nu `$=1 quantum Hall state. Fig. 4b shows the oscillatory behavior of the MPL transitions at low magnetic fields.
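For orientation, the filling factor quoted throughout is related to the field by $`\nu =n_sh/eB`$. The sketch below uses an assumed sheet density chosen to be representative of a heavily doped SHJ (the actual sample densities are those in Table 1, which is not reproduced here); with this choice $`\nu `$=1/3 falls near the ~69 T quoted later for sample 3.

```python
# Landau filling factor vs. magnetic field, nu = n_s h / (e B).
# The 2DEG density is an assumed, representative value; see Table 1 for actual samples.
h, e = 6.626e-34, 1.602e-19
n_s = 5.5e11 * 1e4          # assumed sheet density, converted from cm^-2 to m^-2

def field_at_filling(nu):
    return n_s * h / (e * nu)

for nu in (2, 1, 1.0 / 3.0):
    print(f"nu = {nu:.3g}: B ~ {field_at_filling(nu):.1f} T")
```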
As indicated in Fig. 3, for heavily doped SHJs the Fermi energy lies close to the E1 subband ($`<`$1meV), which induces the E1 exciton transition. The emergence of the P2 transition at the $`\nu `$=2 quantum Hall state can be explained as follows: At low fields ($`\nu >`$2), valence holes are unbound and are free to move to the GaAs flat band region due to the open structure of the valence band (see Fig. 2 inset). As the magnetic field varies, the Fermi energy sweeps continuously from localized states to extended states, which changes the dielectric screening of the valence holes. When this happens, there are two factors to be considered in an optical transition. One is the hole self-energy and the other is the vertex correction arising from the exciton effect. The hole self-energy gives rise to a blue shift, whereas the vertex correction term gives a red-shift due to the exciton binding energy. Since the hole self-energy correction is bigger than the exciton effect, the free carrier transition shows a blue shift at even Landau filling factors. For the E1 exciton transition measured in these heavily doped samples, the valence hole self-energy is already modified by the E0 free carriers and hence it shows a blue shift at even filling factors for $`\nu >`$2 (see Fig. 4b). Unlike transitions at even filling factors for $`\nu >`$2, the P2 transition at $`\nu `$=2 shows a large red-shift. This means that the vertex correction at $`\nu `$=2 in a heavily doped SHJ is much larger than the other effects which give a blue shift in the transition energy. At the $`\nu `$=2 quantum Hall state, electrons are strongly localized and the screening within the 2DEG becomes negligible. In the absence of screening effects, holes migrate back towards the interface because of the strong Coulomb attraction between the electrons and holes. As a result, holes are localized near the interface and form a new bound state (P2) at the $`\nu `$=2 quantum Hall state. This Coulomb-driven hole localization induces a large vertex correction that gives rise to a giant red-shift in the transition energy. For sample 3, the P2 transition has a binding energy of about 8.2meV, which decreases to 5meV at $`\nu `$=1, where it disappears. The amount of the red-shift corresponds to the binding energy of the new bound state. The red-shift in this field regime is analogous to a Mott type transition, since it is similar to the metal-to-insulator transition in the hydrogen system originally suggested by Mott, for which the transition is induced when the screening length exceeds a critical value set by the expanding lattice constant. The relative binding energy ($`\mathrm{\Delta }`$$`E2`$) of the new bound state (P2) increases with increasing 2DEG density. This is consistent with the reduction in the screening at the $`\nu `$=2 QHS. When the screening is turned off, the higher density sample has the stronger Coulomb attraction between electrons and holes. Hence, a higher 2DEG density sample has a larger binding energy than a lower electron density sample.
Near $`\nu `$=1, the P2 transition disappears and a new peak, designated P1, emerges on the lower energy side of the P2 transition. Like the P2 transition, the P1 transition also rapidly increases in intensity with increasing magnetic field, as seen in Fig. 2. Near $`\nu `$=1, the screening strength within the 2DEG in the well is greatly reduced, and once again the localization of the valence hole induces the discontinuous transitions and intensity changes at $`\nu <`$1. As indicated in Table 1, the values of $`\mathrm{\Delta }`$$`E1`$ and $`\mathrm{\Delta }`$$`E2`$, measured at $`\nu `$=1 and $`\nu `$=2 respectively, are almost the same for a given 2DEG density.
There are numerous magneto-optical studies near the $`\nu `$=1 filling state where a red-shift in transition energy has been observed as a ‘shake-up’ process at the Fermi energy. However, our circularly polarized MPL measurements show completely different results. In our experiments, we found that both the P1 and P2 transitions are strongly left circularly polarized ($`\sigma `$-), as seen in Fig. 2. The intensity of the P2 $`\sigma `$- transition is about three times that of the $`\sigma `$+ transition, whereas P1 has about a 5:1 ratio for $`\sigma `$-/$`\sigma `$+. The E1 transition, on the other hand, shows no appreciable intensity difference between the two spin polarizations. This means that for an unbound exciton state (E1), both spins are almost equally populated, whereas for the strongly bound exciton states (P1 and P2), they are strongly polarized into the excitonic ground state ($`\sigma `$-). Though not shown here, the temperature dependence of the MPL experiments shows that the P2 transition disappears at 40K, whereas the P1 transition disappears at 10K. This is because the thermal broadening of the 2DEG density of states closes the Zeeman gap of the Landau levels above 10K, which then no longer contributes to the reduction of the screening at the odd integer filling states.
It has been suggested that within the quantum Hall phase diagram, the mobility of the sample strongly affects the quantum Hall liquid (QHL) to quantum Hall insulator (QHI) phase transition, as the induced electron localization can take place at different quantum Hall states. For example, for a highly disordered structure the phase transition occurs near $`\nu `$=2, but for a moderately disordered one the phase transition is near $`\nu `$=1. As the samples used in this study have high mobilities ($`>`$10<sup>6</sup>cm<sup>2</sup>/Vs), we may expect to see other bound states near the $`\nu `$=1/3 fractional quantum Hall state caused by the quantum Hall phase transition. This would manifest as another minimum in the intensity for sample 3 at about 69T for $`\nu `$=1/3. Unfortunately, this is just beyond the current high field limit of 60T for the QC magnet at NHMFL-LANL, but there is some evidence that this may be about to take place, as the intensity of the P1 transition shows a rapid decrease between 50 and 60T (see Fig. 2).
We have presented MPL studies on a series of high mobility SHJs in high magnetic fields to 60T using a QC magnet at NHMFL-LANL. At low magnetic fields ($`\nu >`$2), the photoluminescence spectra display Shubnikov de-Haas type oscillations associated with the empty second subband transition. In the high field regime, we observe the formation of Coulomb driven, magneto-optically induced electron and hole bound states near Landau filling factors $`\nu `$=2 and $`\nu `$=1 (with some evidence that there may be another at $`\nu `$=1/3). A discrete phase transformation from a dynamic hole to a bound hole state due to a Mott-type transition is thought to be responsible for the large red-shift that occurs near the $`\nu `$=2 and $`\nu `$=1 Landau filling states. Both bound state transitions are strongly spin polarized ($`\sigma `$-) states. The appearance of these bound states appears to be a unique property of heavily doped SHJs and the associated dynamic movement of the holes in strong magnetic fields due to their open valence band structure.
The authors gratefully acknowledge the engineers and technicians at NHMFL-LANL for their efforts in operating the 60T QC magnet. Work at NHMFL-LANL is supported by NSF Cooperative Agreement DMR 9527035 and US DOE. Work at Sandia National Laboratories and UCLA is supported by DOE under Contract DE-AC04-94AL85000 and by NSF Cooperative Agreement DMR 9705439, respectively.
# Why General Theory of Relativity Allows the Existence of Only Extremal Black Holes
## Abstract
Supersymmetric String theories find occurrences of extremal Black Holes with gravitational mass $`M=Q`$, where $`Q`$ is the charge ($`G=c=1`$). Thus, for the chargeless case, they predict $`M=0`$. We show that General Theory of Relativity, too, demands a unique BH mass $`M=0`$. Physically, this means that continued gravitational collapse indeed continues for infinite proper time as the system hopelessly tries to radiate its entire original mass energy to attain the lowest energy state $`M=0`$.
PACS: 04.70. Bw
The concept of Black Holes (BHs) is now important not only for astrophysicists but also for elementary particle physicists. In particular, one of the promising candidates for Quantum Gravity is Supersymmetric String theory (or M-theory). In the low energy limit, such theories are naturally expected to be consistent with the classical General Theory of Relativity (GTR). However, the supersymmetric theories find occurrences of BHs with mass $`M=0`$ for the chargeless Schwarzschild case. We show that GTR, too, actually yields the same result. In the context of GTR, the concept of BHs first arose with the discovery of the famous vacuum spherically symmetric Schwarzschild solution:
$$ds^2=\left(1-\frac{r_g}{r}\right)dt^2-\left(1-\frac{r_g}{r}\right)^{-1}dr^2-r^2(d\theta ^2+\mathrm{sin}^2\theta d\varphi ^2)$$
(1)
Here we shall call “$`r`$” the “Schwarzschild radial coordinate”; $`\theta `$ and $`\varphi `$ are the usual polar angles and $`r_g=2M`$ ($`G=c=1`$). Because of the importance of this solution, we would like to remind the reader of the salient points behind it, as discussed by Landau & Lifshitz. Bearing in mind the fact that there cannot be any spacetime cross term in the metric describing isotropic cases, the most general form of the metric (not necessarily for vacuum) is
$$ds^2=e^{\nu (r)}dt^2-e^{\lambda (r)}dr^2-e^{\mu (r)}(d\theta ^2+\mathrm{sin}^2\theta d\varphi ^2)$$
(2)
But for a spherically symmetric finite system, the origin is unique, and out of the infinite possible choices for the value of $`r`$, only one is physically meaningful. This uniqueness comes from the function $`e^{\mu (r)}`$. For fixed $`r`$ on a $`t=constant`$ hypersurface, and at $`\theta =\pi /2`$, the invariant circumference is
$$\oint ds=2\pi e^{\mu /2}$$
(3)
And therefore only the unique and invariant choice $`e^{\mu /2}=r`$ can claim to be the physically meaningful radial coordinate. Thus $`r`$ naturally retains its essential spacelike character in every situation. Starting from these general premises, one can derive the vacuum Schwarzschild metric (1) without imposing a single assumption or extraneous condition, such as whether $`r>2M`$ or $`r\le 2M`$. One important point is that this metric is naturally asymptotically flat, as can be seen from $`-g_{rr}=g_{tt}=1`$ at $`r=\infty `$. Then, by demanding that at large $`r`$ the metric assumes the Newtonian form, one interprets $`M`$, appearing in Eq. (1), as the “gravitational” mass. Simultaneously $`t`$, as defined by Eq. (1), assumes a clear physical meaning as the proper time of $`S_{\infty }`$, the distant inertial observer. And since there is no assumption or precondition in the derivation of Eq. (1), Schwarzschild was naturally unhappy with the occurrence of this singularity in his solution and consoled himself with the fact that no physical body in hydrostatic equilibrium can be squeezed within $`r<2M`$. If a body were static and could be squeezed to $`r=2M`$, its surface gravitational red-shift would be $`z=\infty `$:
$$z_s=(1-2M/r)^{-1/2}-1$$
(4)
On the other hand, Schwarzschild found that there is an absolute upper limit $`z_s\le 2`$ for any static body.
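A quick numerical look at Eq. (4) makes the bound concrete; the radii below are illustrative, and the last value is the radius $`r=2.25M`$ (i.e. 9/8 of $`r_g`$) at which the static limit $`z_s=2`$ is reached.

```python
# Surface red-shift of a static sphere, z_s = (1 - 2M/r)^(-1/2) - 1  (G = c = 1).
def z_surface(r_over_2M):
    return (1.0 - 1.0 / r_over_2M) ** -0.5 - 1.0

for x in (5.0, 2.0, 1.5, 1.125):     # r expressed in units of r_g = 2M
    print(f"r = {x:.3f} r_g : z_s = {z_surface(x):.3f}")
```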
Here recall that for a freely falling particle the radial speed as measured by a local static Schwarzschild observer is $`V_{Sch}=1`$ at $`r=2M`$. Then no amount of Lorentz boost can hold the Schwarzschild observer static, and this fact is turned around to say that the Schwarzschild coordinates break down at $`r=2M`$. Physically this would mean “the system of reference for $`r<r_g`$ can be realized only by means of moving bodies, whose motion is directed toward the center”. In practical terms this would mean that if we try to model a relativistic static star having $`r\le 2M`$, we will not succeed, and, on the other hand, that the star must be collapsing; this in turn means that, when one describes the collapse of a dust ball, one is free to match the internal solutions with the external vacuum Schwarzschild solution. And indeed, Oppenheimer and Snyder (OS) did so in their famous work. Nonetheless, such explanations hardly appear satisfactory: if $`r`$ has a clear physical significance as the invariant radius and $`t`$ too has a clear physical significance as the time recorded by $`S_{\infty }`$, why should the Schwarzschild coordinates break down at $`r=2M`$ (for the static case), and why should the metric coefficients be singular at $`r=2M`$? Rather than ever trying to face such physical questions head on, traditionally all authors have, inadvertently, pushed them below the carpet of mathematics, by using the standard refrain that it is like the singularity at the origin of the polar coordinate system ($`g_{\varphi \varphi }=0`$ at $`\theta =0`$). And the freedom of choice of coordinates in GTR has come in very handy in running away from such poignant physical questions. For example, it was generally agreed upon by the community in 1960 that the Kruskal coordinates, rather than the $`r,t`$ coordinates, correctly describe the spacetime both inside and outside of a Schwarzschild BH. For the external region (Sector I)
$$u=f_1(r)\mathrm{cosh}\frac{t}{4M};v=f_1(r)\mathrm{sinh}\frac{t}{4M};r\ge 2M$$
(5)
where
$$f_1(r)=\left(\frac{r}{2M}-1\right)^{1/2}e^{r/4M}$$
(6)
However, if one sticks to this definition of $`u`$ and $`v`$, they would be imaginary for $`r<2M`$. But since $`u`$ and $`v`$ are believed to be the real physical coordinates (rather than $`r,t`$), they cannot be allowed to be imaginary. Therefore, by hand, the definitions of $`u`$ and $`v`$ are altered for the region interior to the supposed event horizon (Sector II):
$$u=f_2(r)\mathrm{sinh}\frac{t}{4M};v=f_2(r)\mathrm{cosh}\frac{t}{4M};r\le 2M$$
(7)
where
$$f_2(r)=\left(1-\frac{r}{2M}\right)^{1/2}e^{r/4M}=\sqrt{-1}f_1(r)$$
(8)
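A minimal numerical sketch of the transformation in Eqs. (5)-(8) is given below (in $`G=c=1`$ units, with $`M=1`$ chosen only for illustration); in both sectors the combination $`u^2-v^2=(r/2M-1)e^{r/2M}`$ is reproduced, which is the usual implicit definition of $`r(u,v)`$.

```python
import math

# Kruskal coordinates following Eqs. (5)-(8); M = 1 for illustration.
def kruskal(r, t, M=1.0):
    if r >= 2 * M:
        f = math.sqrt(r / (2 * M) - 1.0) * math.exp(r / (4 * M))
        return f * math.cosh(t / (4 * M)), f * math.sinh(t / (4 * M))
    f = math.sqrt(1.0 - r / (2 * M)) * math.exp(r / (4 * M))
    return f * math.sinh(t / (4 * M)), f * math.cosh(t / (4 * M))

for r, t in ((4.0, 1.0), (3.0, 5.0), (1.0, 2.0)):
    u, v = kruskal(r, t)
    print(f"r={r}, t={t}: u={u:.3f}, v={v:.3f}, u^2-v^2={u*u - v*v:.3f}, "
          f"(r/2M-1)e^(r/2M)={(r/2 - 1)*math.exp(r/2):.3f}")
```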
But note that, even now, to ensure that $`u`$ and $`v`$ are definable at all, $`r`$ and $`t`$ must first be definable over the entire spacetime. But how can $`r`$ and $`t`$ be meaningfully defined for $`r<2M`$ if either of them ceases to be definable? Note that, if we stick to the original relationship between $`r`$ and $`t`$ (like that of OS), $`t`$ would become undefinable or imaginary for $`r<2M`$
$$\frac{t}{2M}=\mathrm{ln}\frac{(r_{\infty }/2M-1)^{1/2}+\mathrm{tan}(\eta /2)}{(r_{\infty }/2M-1)^{1/2}-\mathrm{tan}(\eta /2)}+\left(\frac{r_{\infty }}{2M}-1\right)^{1/2}\left[\eta +\left(\frac{r_{\infty }}{4M}\right)(\eta +\mathrm{sin}\eta )\right]$$
(9)
Here the test particle is assumed to be at rest at $`r=r_{\infty }`$ at $`t=0`$ (or, for dust collapse, the starting point) and the “cyclic coordinate” $`\eta `$ is defined by
$$r=\frac{r_{\infty }}{2}(1+\mathrm{cos}\eta )$$
(10)
Since $`\mathrm{tan}(\eta /2)=(r_{\infty }/r-1)^{1/2}`$, we may rewrite Eq. (9) in terms of a new variable
$$x=\left(\frac{r_{\infty }/2M-1}{r_{\infty }/r_b-1}\right)^{1/2}$$
(11)
as
$$\frac{t}{2M}=\mathrm{ln}\frac{x+1}{x-1}+\left(\frac{r_{\infty }}{2M}-1\right)\left[\eta +\left(\frac{r_{\infty }}{4M}\right)(\eta +\mathrm{sin}\eta )\right]$$
(12)
It can easily be found that, irrespective of $`M`$ being finite or zero, $`t\to \infty `$ as $`x\to 1`$ or $`r\to 2M`$. For the time being, we assume that $`M\ne 0`$. Then, from Eq. (11), note that
$$x<1;\quad \text{for }r<2M$$
(13)
Thus for $`r<2M`$, $`t`$ is not definable at all because the argument of the logarithmic function cannot be negative. However, nobody seems to have pondered whether the situation leading to an imaginary $`t`$ is unphysical or not. How can $`t`$ be imaginary if $`r`$ remains real in its glory as the physically measurable “invariant radius”? Note that even when one purports to describe the EH or the central singularity, one does so in terms of $`r`$, i.e., whether $`r=2M`$ or whether $`r=0`$. And, as far as $`S_{\infty }`$ is concerned, he is either able to watch an event ($`t=`$ finite) or unable to do so ($`t=\infty `$). Probably later authors realized that $`t`$ cannot be allowed to be imaginary, not because $`t`$ is still the proper time of a Galilean observer, a measurable quantity, but because otherwise esoteric new coordinates, like $`u`$ and $`v`$, would not be definable. Thus we find that a modulus sign was introduced in the $`t`$–$`r`$ relationship:
$$\frac{t}{2M}=\mathrm{ln}\left|\frac{x+1}{x-1}\right|+\left(\frac{r_{\infty }}{2M}-1\right)\left[\eta +\left(\frac{r_{\infty }}{4M}\right)(\eta +\mathrm{sin}\eta )\right]$$
(14)
But the conceptual catastrophe actually becomes worse with this tailoring. Of course, as the particle reaches the EH, still $`t\to \infty `$. Note that $`t`$, having become infinite, definitely cannot decrease, and, more importantly, inside the EH $`t`$ cannot be finite because otherwise it would appear that although the distant observer cannot witness the exact formation of the EH (in a finite time), he can nevertheless witness the collapse of the fluid inside the EH. This would mean a violation of causality and the existence of some sort of a “time machine”. And we know that it is only the comoving observer who is supposed to witness both the formation of the EH and the collapse beyond it. But the foregoing equation tells us that if $`M>0`$ (finite), the value of $`t`$ not only starts decreasing but also suddenly becomes finite as the boundary enters the EH ($`r<2M`$)!!
$$x=0;\quad \text{if }M>0;\quad \text{at }r=0$$
(15)
the corresponding value of $`t=T`$ required by the distant observer to see the collapse to the central singularity within the EH is simply
$$t=T=2M\left(\frac{r_{\infty }}{2M}-1\right)\left[\eta +\left(\frac{r_{\infty }}{4M}\right)(\eta +\mathrm{sin}\eta )\right]$$
(16)
By using Eq. (5), this can be rewritten as
$$T=\pi (r_{\infty }-2M)\left(1+\frac{r_{\infty }}{4M}\right)$$
(17)
And note that, if we really insist that $`T=\infty `$ as per the original agenda, we must conclude that $`M=0`$! This would mean (i) there is no additional spacetime between the EH and the central singularity, i.e., they are synonymous, and the Schwarzschild singularity is a genuine singularity provided one realizes that by the time one would have $`r=2M`$, the value of $`M\to 0`$, and (ii) the proper time for the formation of this singularity is $`\tau \propto M^{-1/2}=\infty `$. The latter means that it is not formed at any finite proper time, the collapse process continues indefinitely, and there is no incompleteness of the timelike geodesics.
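A direct numerical evaluation of Eq. (17) (in $`G=c=1`$ units, with an illustrative starting radius) shows the behaviour the text relies on: $`T`$ stays finite for any $`M>0`$ and grows without bound only in the limit $`M\to 0`$.

```python
import math

# Evaluate T = pi (r_inf - 2M)(1 + r_inf/(4M)) of Eq. (17); r_inf is illustrative.
def T_collapse(r_inf, M):
    return math.pi * (r_inf - 2.0 * M) * (1.0 + r_inf / (4.0 * M))

r_inf = 10.0
for M in (1.0, 0.1, 0.01, 0.001):
    print(f"M = {M:g}: T = {T_collapse(r_inf, M):.3e}")
```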
Einstein, too, was worried about this singularity, and his intuition (correctly) told him that it cannot occur in actual physical cases. In fact he (unsuccessfully) struggled in 1938 to show that it was indeed so. But his proof was not convincing and was ignored. On the other hand, in 1939, it was convincingly shown by Oppenheimer and Volkoff that, given a certain equation of state (EOS), there is an upper limit on the gravitational mass up to which it is possible to have static configurations. Coupled with the fact that $`z_s\le 2`$, it definitely meant that sufficiently massive bodies will undergo continued gravitational collapse. But the most important problem left to be answered was where the collapse process stops, a question actually burning ever since Newton discovered gravitation as a universal property. In the same year, 1939, OS set out to find an answer to this question by solving the Einstein equations for an idealized homogeneous “dust”. They attempted to explicitly find the asymptotic behaviour of the metric coefficients for $`r\to 2M`$ as a function of $`t`$ because, as explained earlier, for a collapsing body $`r,t`$ remains a valid coordinate system even when one (incorrectly) considers $`M>0`$. For the internal solutions, for the sake of convenience, they first worked with “comoving coordinates” $`R`$ and $`\tau `$ and then transformed the results to the $`r,t`$ system by matching the solution with Eq. (1) at the boundary. And their solution (apparently) gave the impression that gravitational collapse ends in the formation of a BH of unspecified gravitational mass $`M`$. Since then, it has not been possible to find any other exact solution of gravitational collapse even for the spherically symmetric case. Inspired by the OS work, in the sixties and seventies a large number of relativists formulated, by trial and error, the so-called “singularity theorems” which showed that, under a set of (apparently) reasonable assumptions, generic gravitational collapse should result in the formation of “singularities”. These singularities could be both “naked” and BH type. And the Cosmic Censorship Conjecture asserts that the resultant singularity should be of BH type. One of the important assumptions behind the singularity theorems is that of the existence of “trapped surfaces”, which for the spherical case implies the occurrence of a surface with
$$\frac{2M(r)}{r}>1$$
(18)
However, in 1998 it was shown by us that, independent of the details of the collapse (or expansion), such as the EOS and radiation transport properties, trapped surfaces do not form:
$$\frac{2M(r)}{r}\le 1$$
(19)
Consequently, for continued collapse, the final gravitational mass
$$M(r)\to 0;\quad r\to 0$$
(20)
if we rule out the occurrence of negative mass (repulsive gravity). Physically, the $`M=0`$ state corresponds to the lowest energy state, which the system naturally strives to attain by radiating its entire original mass energy. We have also found that the physically meaningful collapse ends with the $`2M/r=1`$ state rather than a $`2M/r<1`$ state. Thus it is as if the system tries to attain a zero mass BH state. Eventually, it is found that the proper time for the formation of this state is $`\tau =\infty `$, which implies that GTR is singularity free (at least for isolated bodies). Since our results are most general, they are trivially applicable to the OS case too. And naturally this result must be imprinted in the explicit OS solutions. Unfortunately, probably because of the excitement of having found BH solutions, OS completely overlooked this telltale imprint in their Eq. (36). They instead attempted to simplify Eq. (36) in such a manner that this explicit imprint got obliterated. Consequently they obtained physically inconsistent solutions under the assumption of a finite value of $`M`$. In fact they partially admitted the inconsistent internal solutions: for an internal point ($`R<R_b`$), $`e^\lambda `$ refused to become infinite even at $`r=0`$, i.e., when the collapse was complete! Unfortunately, they did not care to pursue this matter further (maybe because of the Second World War) and sent their paper to Physical Review, where it got accepted. And the rest is, of course, history. It is only recently that we have brought out all such issues in a most transparent manner to show that the OS work itself demanded $`M=0`$ even in the absence of our general result above.
Yet to be doubly sure about the non-existence of finite mass BHs, in another recent work, we first assumed the existence of a finite mass Schwarzschild BH. The radial part of the Kruskal metric has the form
$$ds^2=g_{vv}dv^2g_{uu}du^2$$
(21)
On the positive side, all that the Kruskal transformations achieved was to ensure that $`g_{uu}=g_{vv}`$ is definable over the entire spacetime (provided of course $`r`$ and $`t`$ are definable in the first place), that the metric coefficients retain their respective original algebraic signs, and that they do not blow up at $`r=2M`$ (provided $`M>0`$). On the flip side, it created a Pandora’s box as far as physical concepts are concerned. For instance, it demanded that (i) although the central singularity is still described by $`r=0`$, its actual structure is that of a pair of hyperbolas: $`u=\pm (1+v^2)^{1/2}`$, (ii) the negative branch of the hyperbola corresponds to a “White Hole” which can spew out mass energy at its will, (iii) inside the BH, there are two universes connected by a spacetime “throat”, and (iv) even the spacetime at $`r=\infty `$, which is naturally seen to be flat and Newtonian from the original and exact Schwarzschild solution, has a complex structure corresponding to $`u>\pm v`$ or $`u<\pm v`$.
To the knowledge of the present author, nobody raised the question here: when there are expected to be $`N`$ BHs, how complex will the structure of the spacetime far away from all the BHs be? And when astronomical observations firmly supported the view that far from massive objects the structure of the spacetime is well described by the mundane $`r,t`$ coordinates, there was no introspection as to whether the complex structure of spacetime at $`r=\infty `$, suggested by the Kruskal view, was acceptable or not. Note that if a finite mass BH were a physical object, the radial geodesic of a physical particle would have to remain timelike at the EH. And we have directly shown that it is not so; it becomes null just as in the Schwarzschild case! Any reader can verify this by noting that, since $`u\to \pm v`$ as $`r\to 2M`$, the Kruskal derivative assumes the form
$$\frac{du}{dv}\to \frac{f(r,t,dt/dr)}{\pm f(r,t,dt/dr)};\quad r\to 2M$$
(22)
And the corresponding limit becomes unity irrespective of whether $`f\to 0,\infty `$ or anything else. Therefore, one has $`du^2=dv^2`$ at $`r=2M`$ in Eq. (21), so that $`ds^2=0`$!
Physically this means that the free fall speed at the EH is $`V=1`$, and this is not allowed by GTR unless $`R=M=0`$. Now we explain why $`V=1`$ at the EH in any coordinate system, Kruskal or Lemaitre or anything else. Let the speed of the other static observer be $`V_{SchO}`$ with respect to the Schwarzschild observer. By the principle of equivalence, we can invoke the special theory of relativity locally. Then the free fall speed of the material particle with respect to the other static observer will be
$$V=\frac{V_{Sch}\pm V_{SchO}}{1\pm V_{Sch}V_{SchO}}$$
(23)
But since $`V_{Sch}=1`$ at the EH, we would always obtain $`V=1`$ too. And hence there cannot be any finite mass BH. The value of $`V`$ can change between coordinate systems only as long as it is subluminal for all observers. Thus, all that the Kruskal and several other transformations tried to do was to arrive at a metric whose coefficients do not (appear to) blow up at $`r=2M`$. But as we saw, this was a purely cosmetic approach, because no independent effort was ever made to verify whether the actual value of $`ds^2=dt^2(1-2M/r)`$, which becomes zero at $`r=2M`$ for a Schwarzschild observer, becomes timelike, $`ds^2>0`$, in the new coordinates.
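The velocity-addition argument of Eq. (23) is easy to verify numerically; the sketch below simply composes speeds and shows that as $`V_{Sch}\to 1`$ the result approaches 1 for any boost $`V_{SchO}<1`$ (the particular values are arbitrary).

```python
# Relativistic velocity addition, Eq. (23): as V_Sch -> 1 the composed speed -> 1
# for any static-observer boost V_SchO < 1.
def compose(v1, v2):
    return (v1 + v2) / (1.0 + v1 * v2)

for v_sch in (0.9, 0.99, 0.999999, 1.0):
    print(v_sch, [round(compose(v_sch, v_o), 6) for v_o in (0.0, 0.5, 0.9)])
```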
And since, after all, $`ds^2`$ is an invariant, its value cannot change unless, in the new coordinates, the location of the EH materially changes! The latter statement means that the location of the Schwarzschild singularity or the central singularity would have to be described independently of the clutches of the Schwarzschild system. In other words, the new coordinates must not be obtained by transforming the $`r,t`$ coordinates, if that were possible. It is really inexplicable how almost all authors overlooked this simple point while taking the existence of finite mass BHs for granted. We still hope that the reader will not reject this paper in order to uphold the same inexplicable legacy. Finally, we conclude that, as far as the value of the mass of BHs is concerned, the results obtained by Supersymmetric String theories completely agree with the corresponding GTR results. This result is also, in a certain way, in agreement with the result that naked singularities could be of zero gravitational mass.
# T-Duality Can Fail
## 1 Introduction
$`R\leftrightarrow 1/R`$ symmetry, the statement that a string theory compactified on a circle of radius $`R`$ is isomorphic to a string compactified on a circle of radius $`1/R`$, is the most basic instance of T-duality. It is not only a prime example of inherently “stringy” physics but also an extremely useful tool in studying the theory. It has been used to argue for the existence of “minimal distances” in string theory and, applied in more complicated situations, to derive various properties of the theory. In these applications it is often necessary to assume that T-duality is an exact symmetry of string theory. The justification of $`R\leftrightarrow 1/R`$ symmetry comes from conformal field theory. Probably the neatest argument is that if one looks at a conformal field theory corresponding to a string on a circle of radius $`R`$, then at $`R=1`$ (in suitable units) one gets an extra $`\mathrm{SU}(2)`$ symmetry which may be used to identify the marginal operator which decreases $`R`$ with the marginal operator which increases $`R`$ (see, e.g., ). In compactification on $`T^2`$ the T-duality is extended to an $`\mathrm{SL}(2,\mathbb{Z})`$ symmetry acting on the volume and background $`B`$-field.
The problem with the argument is that it ignores nonperturbative effects in the string coupling constant. In this paper we will show how nonperturbative effects can modify the conclusion when the string coupling is nonzero. Our general conclusion is that T-duality-like arguments work generally only when there is a lot of supersymmetry. Here we will look at the heterotic string compactified on a product of a K3 surface and a 2-torus leading to an $`N=2`$ theory in four dimensions. Heterotic strings compactified on $`T^2`$ exhibit T-duality at any value of the string coupling, as we will show. At the level of conformal field theory the correlation functions factorize into products of correlators associated to one or the other factor of the compactification space, so the presence of K3 is irrelevant to the issue of T-duality for $`T^2`$. We will show that this is not true nonperturbatively. The 2-torus in this context appears to be the most supersymmetric case for a failure of T-duality and so will be the easiest to analyze.
In many ways the essential facts of this paper have been known to many people for a few years now. That is, something peculiar happens to T-duality when a heterotic string is compactified on K3$`\times T^2`$. This was studied in particular in for example. Our purpose here is to emphasize that this implies a basic failure of T-duality in a general setting — a fact which does not appear to have been widely appreciated.
We should immediately clarify precisely what we mean when we say that T-duality is “broken”. What we do not mean is that we consider the string theory on a circle of radius $`R`$ and then the same string theory on a circle of radius $`1/R`$ and discover that they are different. The situation is more awkward than this because it is difficult to say unambiguously how to measure the radius of a circle in this context.
What we can describe unambiguously is the moduli space of heterotic string theories compactified on a given spacetime. We can then follow the philosophy developed in studying moduli spaces of Calabi–Yau threefolds . We attempt to assign a “geometrical” interpretation in terms of parameters such as the sizes of circles to points in the moduli space. One begins with some large radius (and/or weak coupling) region where one understands the system classically and then one integrates along paths in the moduli space assigning parameters to all the points in the moduli space. The problem with this method is that monodromies within the moduli space force branch cuts to be made. In general one finds that this process requires choices, so that the parameters are not uniquely determined by the moduli. One may try to remove these monodromies by forming a Teichmüller space which will cover this moduli space. If all goes well then the new copies of the fundamental region will tessellate “naturally” to form a cover giving some parameter space. An example where this fails was first recognized for measuring the volume of the quintic threefold in (see figure 5.2 of that paper in particular).
It is important to note that one may always form the Teichmüller space and obtain an associated (usually very large) modular group. Indeed this was the approach taken in for example and one may also obtain useful information about the theory if this path is taken (see for example). The point we wish to emphasize here however is that this Teichmüller space is often physically meaningless in the usual sense.
What conditions need to hold in order for a physical interpretation to survive? In this paper we propose the following. The parameter space must contain a limiting “semiclassical” open region corresponding to weakly coupled strings and large radii. This region should be unique up to the action of duality transformations which can be determined from the semiclassical approximation to the system. In the case at hand this would mean that the large-radius, weak-coupling limit of a $`T^2`$ compactification must be unique up to $`B\to B+1`$. This requirement is quite restrictive as we shall see.
We believe this is a very reasonable definition for T-duality to respect. One could not seriously propose that there is a symmetry in the universe which identifies a circle of radius 1cm with a circle of radius 2cm — we may use a ruler to measure the difference! An important aspect of T-dualities is that large distances are unique. It is precisely this uniqueness which is lost if we allow an arbitrary cover of the moduli space to act as our Teichmüller space.
We will endeavour to show that the moduli space for the 2-torus in the context above does not admit a covering with this property and so there is no T-duality group. Note that $`R1/R`$ symmetry is not literally “broken”. Instead what is true is that for $`R`$ of order one or smaller, there is no natural way to associate a “size” $`R`$ to a given point in moduli space. In particular, the statement of T-duality thus loses its content.
The nonperturbative corrections responsible for modifying the structure of the moduli space and rendering it inconsistent with T-duality are presumably represented by fivebrane instantons wrapping the K3 as well as one of the cycles of the torus. We do not know at present how to compute these corrections explicitly. Instead, we use a dual type IIA description of the model. In this description, the heterotic dilaton is mapped to a Kähler modulus and the corrections in question are generated by world-sheet instantons whose effect is readily computed by mirror symmetry.
In the context of this dual model the issue of T-duality breaking becomes that of fibre-wise duality. The idea is that one has a Calabi–Yau space, $`X`$, which is an $`F`$-fibration for some fibre $`F`$ which is also a Calabi–Yau space. If some duality statement is true for each $`F`$ can it be extended to a duality of $`X`$? In section 2 we give an example where it can and in section 3 we give an example where it cannot. By heterotic/type II string duality the latter case implies a failure of T-duality. The essential difference between sections 2 and 3 is that in former we show that we are required to fit a finite number of fundamental domains of the moduli space into a part of the Teichmüller space with infinite area while in the later case we would be required to fit an infinite number of fundamental domains of the moduli space into a part of the Teichmüller space with finite area.
In section 4 we will discuss the consequences of broken dualities. In particular we will set out a general description of how dualities are natural only when there is a large amount of supersymmetry. We propose that the clearest interpretation of the results in this paper is that whether a circle respects T-duality in string theory depends on the context of the circle. We will also note the possibly alarming conclusion that the classical $`\mathrm{SL}(2,\mathbb{Z})`$ modular group of the moduli space of complex structures on a torus is also probably broken (in the same sense).
## 2 Unbroken T-Duality
### 2.1 A Two Parameter K3 Surface
In this section we will deal with a type IIA string compactified on a K3 surface which is an elliptic fibration with section. We wish to see that the $`\mathrm{SL}(2,\mathbb{Z})`$ symmetry of the elliptic fibre can be seen in the moduli space of the K3 surface.
An F-theory argument shows that this is equivalent to saying that the $`\mathrm{SL}(2,\mathbb{Z})`$ T-duality acting on the moduli space of complexified Kähler forms is unaffected by the value of the complex structure modulus (or, equally, the mirror of this statement). Note that the heterotic dilaton in eight dimensions lives in a separate $`\mathbb{R}`$ factor of the moduli space and will certainly not affect any T-duality statements.
We know from Narain that the moduli space required for a heterotic string on $`T^2`$ (with the Wilson lines switched off) is
$$\mathrm{M}_0=\mathrm{O}(\mathrm{\Gamma }_{2,2})\backslash \mathrm{O}(2,2)/(\mathrm{O}(2)\times \mathrm{O}(2)),$$
(1)
where $`\mathrm{\Gamma }_{2,2}=U\oplus U`$ and $`U`$ is the usual hyperbolic lattice of signature $`(1,1)`$. This space can be thought of as the Grassmannian of space-like 2-planes in $`\mathbb{R}^{2,2}`$ where we care only about the orientation of the 2-planes relative to the lattice $`\mathrm{\Gamma }_{2,2}`$.
One may explicitly map $`\mathrm{SL}(2,\mathbb{R})\times \mathrm{SL}(2,\mathbb{R})`$ to $`\mathrm{O}(2,2)`$ by writing a vector $`(x_1,x_2,x_3,x_4)`$ in $`\mathbb{R}^{2,2}`$ as
$$M=\left(\begin{array}{cc}x_1& x_3\\ x_4& x_2\end{array}\right),$$
(2)
and then let $`(A,B)\in \mathrm{SL}(2,\mathbb{R})\times \mathrm{SL}(2,\mathbb{R})`$ act on this vector as
$$(A,B):M\mapsto AMB^{-1}.$$
(3)
(Note that $`(-1,-1)`$ maps to the identity.)
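The point of the construction is that the action (3) preserves the determinant of $`M`$, which is the $`\mathrm{O}(2,2)`$ quadratic form on $`(x_1,x_2,x_3,x_4)`$. A quick numerical check (a sketch; the random $`\mathrm{SL}(2,\mathbb{R})`$ elements are built ad hoc) is:

```python
import numpy as np

# The action M -> A M B^{-1} of Eq. (3) preserves det M = x1 x2 - x3 x4.
rng = np.random.default_rng(0)

def random_sl2():
    a, b, c = rng.normal(size=3)
    a = a if abs(a) > 0.1 else 1.0                       # keep the pivot away from zero
    return np.array([[a, b], [c, (1.0 + b * c) / a]])    # det = 1 by construction

A, B = random_sl2(), random_sl2()
M = rng.normal(size=(2, 2))                    # encodes (x1, x2, x3, x4) as in Eq. (2)
M_new = A @ M @ np.linalg.inv(B)
print(np.linalg.det(M), np.linalg.det(M_new))  # the two determinants agree
```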
This allows us to rewrite $`\mathrm{M}_0`$ in (1) in terms of two copies of the upper-half plane $`\mathrm{SL}(2,\mathbb{R})/\mathrm{U}(1)`$. The modular group $`\mathrm{O}(\mathrm{\Gamma }_{2,2})`$ then generates two copies of $`\mathrm{SL}(2,\mathbb{Z})`$ — each acting on its upper-half plane, a “mirror” $`\mathbb{Z}_2`$ which exchanges the upper-half planes, and a “conjugation” $`\mathbb{Z}_2`$ which acts as (minus) complex conjugation on both upper-half planes simultaneously. One can then interpret one upper-half plane as representing the complex structure, $`\tau `$, of the 2-torus and the other upper-half plane as representing the $`B`$-field and Kähler form, $`\sigma =B+iJ`$, of the torus . Equivalently we may describe this moduli space as two copies of the $`j`$-line divided by exchange and complex conjugation. We denote coordinates on the two $`j`$-lines by $`j_1`$ and $`j_2`$.
Via the usual F-theory argument the heterotic string on $`T^2`$ is dual to F-theory (or the type IIA string in some limit) on a K3 surface, $`S`$.<sup>1</sup><sup>1</sup>1There is a very awkward $`\mathbb{Z}_2`$ identification problem which we will ignore for the sake of exposition. The complex conjugation symmetry of $`\mathrm{M}_0`$ must be treated carefully in the type IIA picture. We will ignore this subtlety for the sake of exposition. We will map the moduli space explicitly for the case of an algebraic K3 surface written as a hypersurface in a toric variety. Let us consider the surface $`\stackrel{~}{S}`$ given by
$$x_0^2+x_1^3+x_2^{12}+x_3^{12}+\psi x_0x_1x_2x_3+\varphi x_2^6x_3^6,$$
(4)
in $`\mathbb{P}_{\{6,4,1,1\}}^3/\mathbb{Z}_6`$, where we divide by the $`\mathbb{Z}_6`$ action generated by $`(x_0,x_1,x_2,x_3)\mapsto (x_0,x_1,e^{\pi i/6}x_2,e^{-\pi i/6}x_3)`$.<sup>2</sup><sup>2</sup>2Although this action looks like it generates $`\mathbb{Z}_{12}`$, don’t forget that the weighted projective identification means that $`(x_0,x_1,x_2,x_3)\sim (x_0,x_1,-x_2,-x_3)`$. This K3 surface is mirror via the usual arguments to an elliptic K3 surface with a section. That is, $`S`$ and $`\stackrel{~}{S}`$ are a mirror pair in the sense of algebraic K3 surfaces . We should now be able to map out the desired moduli space, $`\mathrm{M}_0`$, by varying $`\psi `$ and $`\varphi `$ in (4), i.e., by varying the complex structure of $`\stackrel{~}{S}`$.
We want to explicitly map the coordinates $`(\psi ,\varphi )`$ to the two copies of the $`j`$-line we saw above. This has already been done in (see also ). Let us review this construction so that we may compare some details to that of the Calabi–Yau threefold in the next section. The structure is most easily seen by finding the “interesting” points in the moduli space. As is well-known, at certain points in the moduli space we obtain enhanced gauge symmetries. In the language of the Grassmannian (1) this occurs when lattice elements of length squared $`-2`$ are orthogonal to the space-like 2-plane. Let us suppose that we force the 2-plane to be orthogonal to such a lattice element. The result is that the 2-plane now varies in the space $`\mathrm{\Gamma }_{2,1}\otimes \mathbb{R}`$, where $`\mathrm{\Gamma }_{2,1}=U\oplus L(-2)`$ and $`L(-2)`$ is a one-dimensional lattice whose generator has length squared $`-2`$. That is, the moduli space within $`\mathrm{M}_0`$ where the theory has a gauge symmetry of at least $`\mathrm{SU}(2)`$ is given by
$$\mathrm{M}_{\mathrm{SU}(2)}=\mathrm{O}(\mathrm{\Gamma }_{2,1})\backslash \mathrm{O}(2,1)/(\mathrm{O}(2)\times \mathrm{O}(1)).$$
(5)
In the same way that $`\mathrm{O}(2,2)`$ is mapped to $`\mathrm{SL}(2,\mathbb{R})\times \mathrm{SL}(2,\mathbb{R})`$, we may map $`\mathrm{O}(2,1)`$ to $`\mathrm{SL}(2,\mathbb{R})`$ by writing
$$M=\left(\begin{array}{cc}x_1& x_2\\ x_3& -x_1\end{array}\right),$$
(6)
and letting $`A\in \mathrm{SL}(2,\mathbb{R})`$ act as
$$A:M\mapsto AMA^{-1}.$$
(7)
The reader may also check that (up to a $`\mathbb{Z}_2`$ corresponding to complex conjugation) $`\mathrm{O}(\mathrm{\Gamma }_{2,1})\cong \mathrm{SL}(2,\mathbb{Z})`$.
Thus we may embed $`\mathrm{M}_{\mathrm{SU}(2)}`$ into $`\mathrm{M}_0`$ by setting $`A=B`$. In other words $`j_1=j_2`$ in terms of the two $`j`$-lines. Actually the fact that $`\mathrm{O}(\mathrm{\Gamma }_{2,2})`$ acts transitively on vectors of length squared $`-2`$ shows that we have an enhanced gauge symmetry (of at least $`\mathrm{SU}(2)`$) if and only if $`j_1=j_2`$.
In terms of $`\stackrel{~}{S}`$ in the form (4) we have an enhanced gauge symmetry when $`\varphi `$ and $`\psi `$ are tuned to produce a canonical singularity. This happens when the discriminant, $`\mathrm{\Delta }`$, of the equation vanishes. This is given by<sup>3</sup><sup>3</sup>3Note that there are some identifications to be made within the $`(\psi ,\varphi )`$ plane. This discriminant does not really have four components.
$$\mathrm{\Delta }=(\varphi -2)(\varphi +2)(432\varphi -\psi ^6+864)(432\varphi -\psi ^6-864).$$
(8)
One may also argue that $`\mathrm{\Delta }=k(j_1-j_2)^2`$, where $`k`$ is some constant, as follows. As we said above we only get enhanced gauge symmetry when $`j_1=j_2`$. One argues that the power is 2 by saying that any odd power would violate the $`j_1\leftrightarrow j_2`$ symmetry and any power higher than 2 would imply that the degeneration of $`\stackrel{~}{S}`$ when $`\mathrm{\Delta }=0`$ could never be suitably generic.
We also have further enhanced gauge symmetry. Namely there is a single point $`j_1=j_2=0`$ where we have $`\mathrm{SU}(3)`$ and a single point $`j_1=j_2=1728`$ where we have $`\mathrm{SU}(2)\times \mathrm{SU}(2)`$. With a bit of algebra we may find the corresponding values for $`\varphi `$ and $`\psi `$. All said, our equations are satisfied with $`k=1`$ by making $`j_1`$ and $`j_2`$ the roots of
$$j^2-(\varphi \psi ^6-432\varphi ^2+1728)j+\psi ^{12}=0.$$
(9)
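As a consistency check, the expressions in (8) and (9) do indeed give $`k=1`$:

$$(j_1-j_2)^2=(j_1+j_2)^2-4j_1j_2=(\varphi \psi ^6-432\varphi ^2+1728)^2-4\psi ^{12}=(\varphi ^2-4)\left[(432\varphi -\psi ^6)^2-864^2\right]=\mathrm{\Delta }.$$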
If the reader is not fully convinced of this map between $`(\psi ,\varphi )`$ and the $`j`$-lines, one may check that the “flatness” condition is satisfied for the “special coordinates” $`\sigma `$ and $`\tau `$. We will do this next. For a nice account of what is meant by “special coordinates” we refer to .
The special coordinates are of particular interest to us because these specify “length” and should parameterize any natural Teichmüller space. Let us perform the usual analysis of converting “algebraic coordinates” $`(\psi ,\varphi )`$ into special, or flat, coordinates as in . This is a rather technical process but it is thoroughly treated in the literature. Our approach is closest to . Introducing variables
$$z=\frac{\varphi }{\psi ^6},y=\frac{1}{\varphi ^2},$$
(10)
we obtain the hypergeometric partial differential operators
$$\begin{array}{cc}\hfill \mathrm{\Box }_z& =z\frac{\partial }{\partial z}\left(z\frac{\partial }{\partial z}-2y\frac{\partial }{\partial y}\right)-432z\left(z\frac{\partial }{\partial z}+\frac{1}{6}\right)\left(z\frac{\partial }{\partial z}+\frac{5}{6}\right)\hfill \\ \hfill \mathrm{\Box }_y& =\left(y\frac{\partial }{\partial y}\right)^2-y\left(z\frac{\partial }{\partial z}-2y\frac{\partial }{\partial y}\right)\left(z\frac{\partial }{\partial z}-2y\frac{\partial }{\partial y}-1\right)\hfill \end{array}$$
(11)
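Concretely, acting with (11) on a power series $`\mathrm{\Phi }=\sum _{n,m}a_{n,m}z^ny^m`$ gives the two-term recursions

$$n(n-2m)\,a_{n,m}=12(6n-5)(6n-1)\,a_{n-1,m},\qquad m^2\,a_{n,m}=(n-2m+2)(n-2m+1)\,a_{n,m-1},$$

which, with $`a_{0,0}=1`$, determine a unique power series solution normalized to 1 at the origin; the logarithmically growing solutions follow from the same recursions by the standard Frobenius procedure.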
The monomial-divisor mirror map of tells us that $`S`$ is an elliptic fibration with a section and that the section goes to infinite size as $`y\to 0`$ and the fibre goes to infinite size as $`z\to 0`$. There is a unique solution of $`\mathrm{\Box }_z\mathrm{\Phi }(z,y)=\mathrm{\Box }_y\mathrm{\Phi }(z,y)=0`$ which is equal to 1 at the origin. Call this $`\mathrm{\Phi }_0`$. We also have solutions $`\mathrm{\Phi }_z`$ and $`\mathrm{\Phi }_y`$ which behave asymptotically as $`\mathrm{log}(z)`$ and $`\mathrm{log}(y)`$ as we approach the origin. The flat coordinates are then ratios of these solutions in the usual way. Let us put
$$q_z=\mathrm{exp}\left(\frac{\mathrm{\Phi }_z}{\mathrm{\Phi }_0}\right),q_y=\mathrm{exp}\left(\frac{\mathrm{\Phi }_y}{\mathrm{\Phi }_0}\right).$$
(12)
(Just as $`q`$ is used conventionally to denote $`\mathrm{exp}(2\pi i\tau )`$.) We then have a power series solution from the hypergeometric system:
$$\begin{array}{cc}\hfill q_z& =z-zy+312z^2-zy^2-192z^2y+107604z^3+\mathrm{\dots }\hfill \\ \hfill q_y& =y+120zy+2y^2+41580yz^2+5y^3+\mathrm{\dots }\hfill \end{array}$$
(13)
Explicit computation then shows order by order that this is consistent with (9) with
$$\begin{array}{cc}\hfill j_1& =j(q_z)\hfill \\ \hfill j_2& =j(q_zq_y),\hfill \end{array}$$
(14)
where $`j(q)`$ is the usual $`j`$-function expansion given by $`q^{-1}+744+196884q+O(q^2)`$. This verifies that (9) is correct. See for an explicit proof of this.
### 2.2 Fibre-wise T-Duality
Now we wish to analyze the same two parameter model from the point of view of the elliptic fibration of the K3 surface $`S`$. We want to know if the $`\mathrm{SL}(2,)`$ modular group of the elliptic fibre can be seen in the moduli space of $`S`$. First let us take the area of the section to be huge, $`y\to 0`$. In this limit, the moduli properties of $`S`$ are basically those of its fibre. Putting $`y=0`$ we obtain
$$j_1=j(q_z)=\frac{1}{z(1-432z)},$$
(15)
and $`j_2=\mathrm{\infty }`$. It is very important to note that varying $`z`$ gives us a double cover of the $`j_1`$-line. In particular, if we plot the shape of the fundamental region in terms of flat coordinates (as discussed in ) we obtain the shaded area of figure 1. This region is $`H/\mathrm{\Gamma }_0(2)`$, where $`H`$ is the upper-half plane. This means that there is a non-toric $`_2`$ symmetry acting on $`\stackrel{~}{S}`$ which will identify the two fundamental regions of $`H/\mathrm{SL}(2,)`$ in figure 1.
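As a quick sanity check of (15), one can verify the relation order by order using only the $`y=0`$ coefficients quoted in (13) and the expansion of $`j(q)`$ quoted after (14). The following is a small self-contained sketch (plain Python, not part of the original analysis); it confirms that $`1/j(q_z)=z-432z^2`$ through order $`z^3`$.

```python
# Order-by-order check of eq. (15): with the y = 0 part of (13),
#   q_z = z + 312 z^2 + 107604 z^3 + ...,
# and j(q) = 1/q + 744 + 196884 q + O(q^2), one should find
#   1/j(q_z) = z - 432 z^2 + O(z^4),  i.e.  j(q_z) = 1/(z(1 - 432 z)).

N = 4  # keep powers z^0 .. z^3

def mul(a, b):
    """Product of two power series truncated at order z^(N-1)."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

def inv(a):
    """Inverse of a truncated power series with leading coefficient 1."""
    b = [1] + [0] * (N - 1)
    for n in range(1, N):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

q = [0, 1, 312, 107604]    # q_z as a series in z at y = 0
q2 = mul(q, q)
jq = [1, 0, 0, 0]          # j(q) * q = 1 + 744 q + 196884 q^2 + ...
for k in range(N):         # (higher terms of j do not affect order z^3)
    jq[k] += 744 * q[k] + 196884 * q2[k]

print(mul(q, inv(jq)))     # -> [0, 1, -432, 0], i.e. 1/j(q_z) = z - 432 z^2
```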
In terms of the mirror $`\stackrel{~}{S}`$, taking $`y\to 0`$ takes $`\varphi \to \mathrm{\infty }`$. $`\stackrel{~}{S}`$ is an elliptic K3 surface with a section and in this limit the complex structure of the fibre becomes constant. The remaining variation of complex structure of $`\stackrel{~}{S}`$ is therefore the variation of the complex structure of this constant fibre. A similar story was discussed in .
Finding the moduli space of an elliptic curve via toric varieties is always going to be plagued by finding multiple covers of the $`j`$-line because of non-toric symmetries. Most simple families of elliptic curves that one tries to write down tend to have fixed rational points. See for a discussion of such problems. This “level structure” forces the naïve moduli space to be a multiple cover of the $`j`$-line. This helps us out in the toric case as we now explain.
In the general approach of Candelas et al. in analyzing a one parameter case, the moduli space will always look like a rational curve with 3 special points. To use the language of the quintic threefold, one point will be “large complex structure”, another will be an orbifold-like point and a third, which marks the division between the first two phases, is a “conifold”-like point. The problem is that an elliptic curve doesn’t have a conifold-like point — any singular complex structure must be the large complex structure. Thus one must have a non-toric symmetry identifying the putative conifold point with the large complex structure point. In our example this must identify the “conifold” $`\tau =0`$ with $`\tau =i\mathrm{\infty }`$, i.e., it is the $`\tau \to -1/\tau `$ symmetry. It is this non-toric discrete symmetry which produces the multiple cover of the $`j`$-line above.
Now the main question we wish to ask is what happens to figure 1 if we allow $`y`$ to be nonzero? From (15) the point given by $`\tau =0`$ in figure 1 corresponds to $`z=1/432`$. This is on the discriminant as it corresponds to $`j_1=j_2=\mathrm{}`$. For general $`y`$, the relevant part of the discriminant becomes
$$(432z-1)^2-864^2yz^2=0.$$
(16)
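For small $`y`$ the two roots of (16) behave as

$$432z-1=\pm 864\sqrt{y}\,z\quad \Rightarrow \quad z=\frac{1}{432\mp 864\sqrt{y}}=\frac{1}{432}\left(1\pm 2\sqrt{y}+O(y)\right).$$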
Thus, as $`y`$ is switched on, the point $`z=1/432`$ splits into two nearby points (up to extra identifications). How can we understand this in terms of flat coordinates?
The question of what a slice of constant $`y`$ in the moduli space corresponds to in terms of the elliptic curve is awkward. Recall that setting $`y=0`$ put $`j_2=\mathrm{\infty }`$ and so $`\sigma =i\mathrm{\infty }`$. Suppose instead we fix $`\sigma `$ to be $`iC`$ for some large but finite $`C`$. What does the fundamental region for $`\tau `$ then look like? Because of the mirror map $`\sigma \leftrightarrow \tau `$, we are free to fix $`\mathrm{Im}(\tau )\le \mathrm{Im}(\sigma )`$. This “chops off” the top of the fundamental region. This will turn figure 1 into figure 2.
Now the domain for $`y`$ constant, as opposed to $`j_2`$ constant, won’t quite look like figure 2 but the coarse structure should be about right. Thus we see how the point at the end of the cusp at $`\tau =0`$ in figure 1 gets split and moves up a little in figure 2. (We should also point out that these supposed two points are actually identified by the $`\mathrm{SL}(2,)`$ action in figure 2.) When $`y`$ is nonzero, a solution of $`z`$ satisfying (16) gives a $`_2`$-orbifold point on $`S`$. Thus this corresponds to an $`\mathrm{SU}(2)`$ gauge symmetry (in the eight-dimensional theory in question) and this point in the moduli space is a $`_2`$ orbifold point.
The important point is that nothing catastrophic happens to the $`\mathrm{SL}(2,)`$ action as we switch on $`y`$. We just pick up an extra $`_2`$ into the modular group. We know that our moduli space is of the form of a global quotient (1) and so no matter what kind of complex slice of the moduli space we take, we will always see some T-duality group acting nicely on some slice of the Teichmüller space. In the heterotic string point of view, the T-duality group of the torus persists.
## 3 Broken T-Duality
We now wish to discuss along parallel lines the case of a heterotic string compactified on K3$`\times T^2`$. To this end, we need to replace the K3 surface $`S`$ with a Calabi–Yau threefold $`X`$. The space $`X`$ is now a K3-fibration. The fibre has a T-duality which can be shown to be $`\mathrm{SL}(2,)`$ (which up to $`_2`$ factors is $`\mathrm{O}(\mathrm{\Gamma }_{1,2})\subset \mathrm{O}(\mathrm{\Gamma }_{4,20})`$ — the full T-duality group of a K3 surface). This $`\mathrm{SL}(2,)`$ will not survive to the full Calabi–Yau threefold moduli space. This shows that the T-duality of the heterotic string is destroyed at nonzero string coupling.
The toric data for $`X`$ will be very similar to that of $`S`$, and $`X`$ will again have two deformations of the $`B`$-field and Kähler form. Let $`\stackrel{~}{X}`$ be given by
$$x_0^2+x_1^6+x_2^6+x_3^{12}+x_4^{12}+\psi x_0x_1x_2x_3x_4+\varphi x_3^6x_4^6,$$
(17)
in $`_{\{6,2,2,1,1\}}^4/G`$, where we divide by the $`G=_6\times _6\times _2`$ action generated by
$$\begin{array}{cc}\hfill (x_0,x_1,x_2,x_3,x_4)& \mapsto (x_0,x_1,x_2,e^{\pi i/6}x_3,e^{-\pi i/6}x_4)\hfill \\ \hfill (x_0,x_1,x_2,x_3,x_4)& \mapsto (x_0,x_1,e^{\pi i/3}x_2,x_3,e^{-\pi i/3}x_4)\hfill \\ \hfill (x_0,x_1,x_2,x_3,x_4)& \mapsto (x_0,e^{\pi i/3}x_1,x_2,x_3,e^{-\pi i/3}x_4).\hfill \end{array}$$
(18)
This Calabi–Yau threefold and its mirror $`X`$ have been studied extensively . In particular Kachru and Vafa conjectured that the type IIA string on $`X`$ is dual to the heterotic string on $`S_H\times E_H`$ where $`S_H`$ is a K3 surface and $`E_H`$ is an elliptic curve. The string theory on $`E_H`$ is frozen along $`\tau =\sigma `$ by Higgsing the enhanced $`\mathrm{SU}(2)`$ symmetry we saw in the last section. Thus the moduli space of the string on $`E_H`$ should be just a single copy of the $`j`$-line. There is considerable evidence to support this conjecture (for example ).
Now as before we may introduce
$$z=\frac{\varphi }{\psi ^6},y=\frac{1}{\varphi ^2},$$
(19)
and we obtain the hypergeometric partial differential operators
$$\begin{array}{cc}\hfill \mathrm{\Box }_z& =\left(z\frac{\partial }{\partial z}\right)^2\left(z\frac{\partial }{\partial z}-2y\frac{\partial }{\partial y}\right)-1728z\left(z\frac{\partial }{\partial z}+\frac{1}{6}\right)\left(z\frac{\partial }{\partial z}+\frac{1}{2}\right)\left(z\frac{\partial }{\partial z}+\frac{5}{6}\right)\hfill \\ \hfill \mathrm{\Box }_y& =\left(y\frac{\partial }{\partial y}\right)^2-y\left(z\frac{\partial }{\partial z}-2y\frac{\partial }{\partial y}\right)\left(z\frac{\partial }{\partial z}-2y\frac{\partial }{\partial y}-1\right)\hfill \end{array}$$
(20)
Again the monomial-divisor mirror map of tells us that $`X`$ is a K3 fibration with a section and that the section goes to infinite size as $`y\to 0`$ and the K3 fibre goes to infinite size as $`z\to 0`$. Note that this hypergeometric system is remarkably similar to that of the elliptic fibration in the previous section, (11).
We may find the flat coordinates as before, the exponentials of which we will call $`q_z`$ and $`q_y`$ again. Now when we put $`y=0`$ one finds
$$j(q_z)=\frac{1}{z}.$$
(21)
That is, the moduli space of the K3 fibre gives one cover of the $`j`$-line. The fact that the $`j`$-line appears here can be argued directly as follows. With a suitable change of coordinates, the constant K3 fibre of (17) as $`y\to 0`$ may be written as
$$x_0^2+x_1^6+x_2^6+x_3^6+\xi x_1^2x_2^2x_3^2,$$
(22)
in $`_{\{3,1,1,1\}}^3`$ divided by $`_2\times _6`$. This is manifestly a double cover of $`^2`$ branched over a sextic curve. We may write $`^2`$ as a $`_2\times _2`$-cover of $`^2`$ by mapping $`[y_1,y_2,y_3]=[x_1^2,x_2^2,x_3^2]`$. This $`_2\times _2`$ is a subgroup of the $`_2\times _6`$ by which we are going to orbifold. That is, the sextic curve is mapped to a cubic — i.e., an elliptic curve. Thus the moduli space of the K3 surface can be mapped to the moduli space of an elliptic curve. A little work is involved in showing that there is no level structure implicit on this elliptic curve and so one indeed gets one copy of the $`j`$-line as desired.
The heterotic interpretations of the flat coordinates corresponding to $`q_z`$ and $`q_y`$ are the single complex modulus of the torus and the dilaton-axion respectively . Indeed, the appearance of the $`j`$-line above as $`y\to 0`$ shows that in this limit we do indeed get the correct moduli space for the torus — $`H/\mathrm{SL}(2,)`$. In this way we see that the T-duality of the heterotic string on a torus at zero coupling is reproduced in the type-IIA dual.
Now what happens when we allow $`y`$ to be nonzero? Although the hypergeometric system (20) is similar at first sight to that of the previous section (11), there is a drastic difference. Let us consider the behaviour of the discriminant locus in a constant $`y`$ slice as $`y`$ is near zero. As discussed above, a component near $`z=1/432`$ gave us two points (before identifications are made) close to where a cusp of the fundamental region was in the $`y=0`$ limit. This cusp corresponded to $`j=\mathrm{}`$.
In the case in question here, the corresponding component of the discriminant locus is given by
$$(1-1728z)^2-3456^2z^2y=0.$$
(23)
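Just as before, for small $`y`$ the two roots of (23) behave as

$$1-1728z=\pm 3456\sqrt{y}\,z\quad \Rightarrow \quad z=\frac{1}{1728}\left(1\mp 2\sqrt{y}+O(y)\right).$$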
At $`y=0`$, the point $`z=1/1728`$ is not a cusp on the $`j`$-line but rather the $`_2`$-orbifold point at $`j=1728`$. In the language of the K3 fibre of $`X`$, the K3 surface acquires a $`_2`$-orbifold point here (along with $`B=0`$) to give an enhanced $`\mathrm{SU}(2)`$ gauge symmetry.
Now as we turn on $`y`$ we have two solutions for $`z`$. This is exactly the situation that was analyzed thoroughly by Kachru et al where it was shown that the region of moduli space near $`z=1/1728`$ and $`y=0`$ maps to Seiberg-Witten theory for an $`\mathrm{SU}(2)`$ theory . The single point $`y=0`$, $`z=1/1728`$ is the classical limit corresponding to an $`\mathrm{SU}(2)`$ enhanced gauge symmetry and the two points for constant $`y>0`$ are the points giving the massless solitons. The important result we wish to borrow from Seiberg-Witten theory is that in flat coordinates the monodromy around either of these two points alone is infinite. The monodromy within the curve $`y=0`$ around both points together is $`_2`$ — as we see from the $`y=0`$ limit.<sup>4</sup><sup>4</sup>4This is not to say that the monodromy around the two points in the Seiberg–Witten plane is $`_2`$. See .
The situation here contrasted with that of the previous section can be summed up as follows
* For the moduli space of $`S`$ we have a point at $`y=0`$ around which the monodromy is infinite. As $`y`$ is switched on this splits into two points around which the monodromy is $`_2`$.
* For the moduli space of $`X`$ we have a point at $`y=0`$ around which the monodromy is $`_2`$. As $`y`$ is switched on this splits into two points around which the monodromy is infinite.
This can all be traced back to the fact that for $`S`$, $`y=0`$ gave a double cover of the $`j`$-line while for $`X`$ it gives a single cover.
We now wish to claim that this behaviour for $`X`$ makes it completely impossible to maintain any notion of T-duality when $`y`$ is switched on. That is, T-duality for the heterotic string is “broken” as soon as the dilaton is switched on.
Suppose we let $`y`$ be small but nonzero. This should correspond to the weakly-coupled heterotic string. The moduli space for the $`T^2`$ part must look “almost” like the classical fundamental region for $`H/\mathrm{SL}(2,)`$. By “almost” we mean that no points in the moduli space for $`T^2`$ can have moved a large distance with respect to the special Kähler metric relative to the $`y=0`$ slice. We should now ask whether we can apply $`\mathrm{SL}(2,)`$ to this new moduli space to form a nice tessellated cover of the upper half plane just as we could for $`y=0`$. Note in particular that the angle of any corner of the moduli space must correspond to the monodromy in the Levi-Civita connection of the special Kähler metric around that point in order that such a tessellation works.
In the case of $`X`$ this correspondence fails. We show schematically<sup>5</sup><sup>5</sup>5We do not claim that this picture represents the geometry very accurately. The only fact we really need is that there is infinite monodromy at a point in the moduli space which is at finite distance. what happens as $`y`$ is switched on in figure 3. Near the edge of the moduli space labelled by $`L`$ we need to continue into a new fundamental region. This region must be distinct from that obtained by $`\tau \tau +1`$ (otherwise there would be no monodromy around the dots labelling the discriminant). Clearly this is impossible without moving off onto an infinitely-sheeted cover of the upper-half plane.
Note that one might want to extend the idea of duality by following the route of building an infinite-sheet covering. Indeed this was the idea behind such duality groups as those proposed in for example. One is certainly free to do this mathematically and such a model might have many uses. What is unclear however is the physical significance of the huge Teichmüller space one builds by this process. It is certainly a considerable departure from the usual meaning of T-duality or U-duality. If we want to have a meaningful Teichmüller space then we should insist that we have a unique idea of an object at large radius and weak string coupling. This forces us to cover our moduli space with only one copy of a plane. We will demand this for our notion of duality.
The essential difference between this section where T-duality is broken and the previous section where T-duality was exact is as follows. In the unbroken case we introduced new copies of the fundamental region close to the horizontal axis “$`\mathrm{Im}(\tau )=0`$” when we switched on the coupling in figure 2. In this section we had to attempt to add new regions near the point $`\tau =i`$. The metric diverges near $`\mathrm{Im}(\tau )=0`$ allowing room for new regions. New regions cannot be fitted in near $`\tau =i`$ where the metric is finite.
We have also seen explicitly how the argument for T-duality using the enhanced gauge symmetry fails in this case. The Weyl symmetry of $`\mathrm{SU}(2)`$ was used to relate the deformations increasing and decreasing $`R`$ from its value at the point of enhanced symmetry. In fact, the symmetry is never restored, and near the origin the monodromies about the two singularities that do appear render any attempt to consistently define the Weyl-noninvariant coordinate ($`a`$ in ) futile.
## 4 Discussion
### 4.1 Holonomy Arguments
One may try to forge a general setting for T-duality (and S-duality and U-duality) as follows. One wants to begin with a general smooth Teichmüller space, $`\mathrm{T}`$, which parameterizes familiar concepts such as length, coupling constants, etc. One then wants the moduli space to look like $`\mathrm{M}=\mathrm{T}/G`$ for some discrete duality group $`G`$. In particular this means that any orbifold point in $`\mathrm{M}`$ is a global orbifold point. That is, there is a global cover which removes this singularity.
Orbifold points in a moduli space need not be global. We would like to consider the appearance of local orbifold points as an obstruction to duality groups. Of course one may want to declare in such a case that the covering $`\mathrm{T}`$ itself has the orbifold point. One is free to do this and one ends up with a weaker notion of duality groups. In a typical case however this weak duality group will probably be trivial and so this is not a particularly useful notion. For duality we will insist that all orbifold points are global and $`\mathrm{T}`$ is smooth.
Thanks to the Berger-Simons theorem (see for example chapter 10 of ) knowing the holonomy of a manifold can lead to considerable knowledge of its structure. If the holonomy (of the Levi-Civita connection) and its representation on the tangent bundle are of a certain type then the manifold must be a symmetric space. We will call such a holonomy “rigid”. If the holonomy is “non-rigid” then the manifold is of type general Riemannian, complex Kähler, hyperkähler, etc. By using the homogeneous structure of the symmetric spaces one may argue the following by tracing geodesics.<sup>6</sup><sup>6</sup>6We thank R. Bryant for discussions on this.
###### Proposition 1
If a geodesically complete orbifold has a rigid holonomy then it is globally a quotient of a symmetric space.
Note that in lower dimensions, a sufficiently extended supersymmetry forces a rigid holonomy group. The proposition tells us immediately that we should expect duality groups to appear for a large number of supersymmetries. For example for a type IIA string on a K3 surface we have $`N=2`$ supersymmetry in six dimensions giving an $`R`$ symmetry of $`\mathrm{Sp}(1)\times \mathrm{Sp}(1)\cong \mathrm{SO}(4)`$ (up to discrete groups). Given the dimension of the moduli space this has rigid holonomy and so we see a Teichmüller space in the form of a symmetric space and a T-duality group (or more generally a U-duality group). The same is not true for an $`N=2`$ theory in four dimensions however. Here the holonomy of the moduli space only dictates that it be the product of a Kähler manifold and a quaternionic Kähler manifold. This non-rigid case should not be expected to have a duality group. This is the case of a heterotic string on K3$`\times T^2`$.
This classification breaks down a little in higher dimensions. This happens because symmetric spaces can sometimes have a holonomy (and representation) which appears to be non-rigid. For example, the heterotic string on a 2-torus alone is an $`N=1`$ theory in eight dimensions which has an $`R`$ symmetry $`\mathrm{U}(1)`$. This tells us only that the moduli space is Kähler and so would not in itself predict a T-duality group. Nevertheless the moduli space here is a quotient of a symmetric space. This shows that the holonomy can only tell us when we must have a duality group rather than when we cannot. It seems reasonable however in lower dimensions where this ambiguity does not exist that a generic compactification leading to non-rigid holonomy will have no T-duality. The computation of section 3 shows this is the case at least for one example.
### 4.2 Consequences
We have shown that when one considers a heterotic string compactified on K3$`\times T^2`$ in a particular way then the factor of the moduli space relevant to the torus is not a quotient of a physically meaningful smooth space of parameters by some T-duality group. In particular it is a completely meaningless statement to say that there is some kind of $`R\to 1/R`$ symmetry on the circles within this torus. For a nonzero value of the string coupling constant the torus (and therefore circle) obeys a T-duality relationship no more than a generic Calabi–Yau threefold. The meaninglessness of T-duality for Calabi–Yau’s was discussed in .
All one may do is to begin labelling points in the moduli space with size values near the large radius limit and then extend this process over the entire moduli space. This will usually lead to a minimum “size”. No real meaning can be attached to sizes below this value.
The question that immediately raises itself is the following. If T-duality fails for the heterotic string on K3$`\times T^2`$ then does it also fail for a circle itself? One might argue that if the string on a circle of radius $`R`$ is exactly the same thing as a string on a circle of radius $`1/R`$ then further compactification on K3$`\times S^1`$ should also yield identical physics. This leads to three possibilities:
1. T-Duality fails for the string on a circle. That is, $`R\to 1/R`$ symmetry is totally wrong when all nonperturbative corrections are taken into account. This would seem to be unlikely. For a string on a single circle at its self-dual radius we really see an unbroken $`\mathrm{SU}(2)`$ gauge symmetry signaling an identification in the moduli space. This $`\mathrm{SU}(2)`$ is broken by quantum effects when we consider K3$`\times T^2`$. The holonomy arguments above also show that a lot of supersymmetry implies T-duality. We suggest therefore that T-duality is a symmetry for a string on a circle by itself.
2. Heterotic/Type II duality is wrong. If we really want T-duality to be respected exactly by all strings then we need to sacrifice the duality we used in this paper. One is free to do this. We do not know enough about string theory to “prove” heterotic/type II duality. If one really is wedded to the idea of T-duality then this is a sacrifice one might be willing to make.
3. A string doesn’t know about its compactifications. When one says that one understands some string theory compactified on some space, does this understanding ever include further compactification? Such a further compactification includes putting different boundary conditions on fields which had lived in flat space. In particular one may break supersymmetry. The above reasoning that $`R\to 1/R`$ symmetry on a circle implies T-duality for further compactification is wrong.
The last possibility would appear to be the most reasonable. It implies that $`R\to 1/R`$ symmetry of a circle depends on the context of the circle. Any argument which uses T-duality for a circle or torus embedded in some more complicated situation cannot necessarily be considered rigorous.
In this paper we have considered the Kachru–Vafa example of the heterotic string compactified on a torus stuck at “$`\sigma =\tau `$”. This allowed us to consider just a two-parameter family of Calabi–Yau threefolds — one remaining parameter for the torus and the string coupling. Our failure to see an $`\mathrm{SL}(2,)`$ modular group can therefore be viewed just as much as a failure of the classical $`\mathrm{SL}(2,)`$ modular group as of T-duality. What one should do is to go to the three parameter example of so that we may disentangle the $`\sigma `$ and $`\tau `$ parameters of the 2-torus. The essential computation was again done in . The coarse result of this is that the moduli space does not respect any $`\mathrm{SL}(2,)`$ symmetry generically.
This is perhaps a good deal more shocking at first sight than breaking T-duality. Are we really saying that string theory breaks the classical modular invariance of a torus? In other words string theory does not respect diffeomorphism invariance of the target space!
This is unavoidable if one wants to maintain heterotic/type II duality. Actually, upon reflection, this possibility is not as bad as it might first appear.
One is used to the idea from conformal field theory at weak string coupling that quantum geometry may affect the structure of the moduli space of complexified Kähler form but that the complex structure moduli space is unchanged. Now we claim that string coupling effects may also lead to quantum corrections to the moduli space of complex structures. This is quite reasonable in the current context as once string corrections have been taken into account the moduli space of the 2-torus does not even factorize exactly into Kähler form and complex structure parts. Note also that the parts of the moduli space whose shape is most affected by the corrections are those close to where the enhanced gauge symmetry appeared in the weak coupling limit. These must involve circles in the torus approaching the string scale. Thus it is still distances at the string scale in some sense which are most affected by quantum effects.
Again we should emphasize that we are not really saying that a string on a certain torus is different to a string on the torus having undergone a modular transform. Rather we are saying that once the string coupling is not zero, the moduli space of the torus is no longer of the form $`H/\mathrm{SL}(2,)`$. The meaning of what it is to go outside the fundamental region then becomes unclear.
Finally we note that all of the results in this paper have been tied to dualities of the heterotic string. One might wish the type II string to escape such effects. This appears to be unlikely. Our holonomy argument of section 4.1 would seem to imply that a generic compactification with little supersymmetry should not obey any T-duality or U-duality laws. What is true however is that the type IIA string has more supersymmetry to begin with. Thus we know that T-duality for the type IIA string on K3$`\times T^2`$ is exact (see, for example, for a discussion of this). What one might question however is whether the same is true for the type II string on $`Z\times S^1`$ where $`Z`$ is a generic Calabi–Yau threefold.
## Acknowledgements
It is a pleasure to thank P. Argyres, R. Bryant, S. Kachru, J. Harvey, G. Moore, D. Morrison, E. Sharpe and E. Silverstein for useful discussions.
## 1 Riddle of the $`\sigma `$ : what is the $`\sigma `$ ?
In this talk, I will start to answer some of the questions that Lucien Montanet listed in his introduction and will not attempt to survey all the known scalars, as this will be covered in the many other talks in this session. I will try to differentiate clearly between those statements that are matters of fact and those that are model-dependent and so might, at the moment, be regarded as matters of opinion.
Let me begin with the first fact. As is very well-known, nuclear forces are dominated by one pion exchange. The pion propagator is accurately described by $`1/(m_\pi ^2t)`$, where the pion mass $`m_\pi `$ is very nearly a real number, since pions are stable as far as the strong interactions are concerned. The next most important contributors to nuclear forces are two pion exchange, where the pions are correlated in either an $`I=1`$ $`P`$–wave or an $`I=0`$ $`S`$–wave. The former we know as $`\rho `$–exchange, the propagator for which is described simply by $`_\rho ^2t`$, where now $`_\rho `$ is a complex number, reflecting the fact that the $`\rho `$ is an unstable particle. Indeed, typically we may write $`_\rho ^2m_\rho ^2im_\rho \mathrm{\Gamma }_\rho `$ with the mass $`m_\rho `$ and width $`\mathrm{\Gamma }_\rho `$ real numbers. Two pions correlated in an $`S`$–wave we call the $`\sigma `$. However, it is an open question, whether this can be described by a simple Breit-Wigner propagator $`1/(m_\sigma ^2tim_\sigma \mathrm{\Gamma }_\sigma )`$.
A hint, that the situation is not so straightforward, is given by looking in the direct channel at the $`I=0`$ $`\pi \pi `$ $`S`$–wave cross-section. This is sketched in Fig. 1. One sees immediately that there are no simple Breit-Wigner-like structures. The only narrow features are the dips that correspond to the $`f_0(980)`$ and the $`f_0(1510)`$. Otherwise we only see broad enhancements. One might then think that there really is no sign of the $`\sigma `$ as a short-lived particle. Indeed, that there is no $`\sigma `$ has been argued by noting that this cross-section can be largely explained by $`\rho `$–exchange in the cross-channel. Though this is a fact, it does not immediately imply that there is no $`\sigma `$ in the direct channel. Thirty years ago we learnt that Regge exchanges in the $`t`$ and $`u`$–channels not only provide an economical description of hadron scattering cross-sections above a few GeV, but that their extrapolation to low energies averages the resonance (and background) contributions in a way specified by (finite energy sum-rule) duality. In the case of the $`\pi ^+\pi ^{}\pi ^0\pi ^0`$ channel (studied recently by the BNL E852 experiment <sup>2)</sup>), one has just $`I=1`$ and $`I=2`$ exchanges in the cross-channel. These Regge contributions are dominated by the $`\rho `$. This exchange not only averages the direct channel cross-section, but because there are no narrow structures near threshold, it almost equals it. If resonances are not narrow, global duality becomes local. What we learn from this duality is that $`t`$–channel exchange equates in some sense to $`s`$–channel resonances. Both are true. Thus, an $`s`$–channel $`\sigma `$–resonance may well occur. (See a further comment on this in Sect. 5.)
Why are we worried about this ? Is the $`\sigma `$ not just another particle in the hadron zoo ? The reason the $`\sigma `$ is important is because of its key role in chiral symmetry breaking <sup>3)</sup>. We believe as a fact that QCD is the underlying theory of the strong interaction. The light quark sector of this theory has a chiral symmetry. The current masses of the up and down quarks are very much less than $`\mathrm{\Lambda }_{QCD}`$ and to first (or zeroth approximation) are zero. It is the masses in the QCD Lagrangian that couple left and right-handed fields, so that if there are no masses, the theory has a left–right symmetry. This chiral symmetry is however not apparent at the hadron level : pseudoscalar and scalar particles are not degenerate. This we understand as being due to the breakdown of this chiral symmetry in the Goldstone mode <sup>4)</sup>, in which the scalar field acquires a non-zero vacuum expectation value, while the pseudoscalar fields remain massless. We regard pions as these Goldstone bosons, and the $`\sigma `$ or $`f_0`$ as the Higgs of the strong interaction. It is this particle that reflects the dynamical generation of constituent masses for the up and down quarks, and so is responsible for the mass of all light flavoured hadrons. Thus the $`\sigma `$ or $`f_0(4001200)`$ is a fundamental feature of the QCD vacuum. Moreover, the Goldstone nature of pions is reflected in the fact that though pions interact strongly, their interaction is weak close to threshold and so amenable to a Taylor series expansion in the low energy region — this underlies Chiral Perturbation Theory <sup>5)</sup>.
## 2 Where is the $`\sigma `$ ?
If the $`\sigma `$ is so fundamental, how can we tell whether it exists ? First we recall the key aspect of a short-lived particle. At its basic, such a resonance gives rise to a peak in a cross-section for scattering with the appropriate quantum numbers. Importantly, this is described in an essential way by a Breit-Wigner amplitude, which has a pole on the nearby unphysical sheet (or sheets). It is in fact this pole that is the fundamental definition of a state in the spectrum of hadrons, regardless of how the state appears in experiment. In the case of a narrow, isolated resonance, there is a close connection between the position of the pole on the unphysical sheet and the peak we observe in experiments at real values of the energy. However, when a resonance is broad, and overlaps with other resonances, then this close connection is lost. It is the position of the pole that provides the fundamental, model-independent, process-independent parameters. While a relatively long-lived state like the $`J/\psi `$ appears almost the same in every channel, the $`\rho `$, for example, has somewhat different mass and width in different channels. This problem was recognised by the PDG long ago.
In 1971, the $`\mathrm{\Delta }(1236)`$, as it was then called, had been seen in many different channels. Different Breit-Wigner parameters were noted and the PDG tables <sup>6)</sup> stated: We conclude that mass and width of $`\mathrm{\Delta }(1236)`$ are in a state of flux; therefore we do not quote any errors in the table. A year later <sup>7)</sup>, it was recognised that this problem is removed if we take the mass and width to be given by the actual pole position of the $`\mathrm{\Delta }(1236)`$ in the complex energy plane. By analytically continuing into the complex plane to the nearby pole, it was found that the pole’s position was essentially process-independent and parameterization-independent, as $`S`$–matrix principles require. Though this was known more than 25 years ago, it has often been forgotten. For the $`\rho `$, the 1998 PDG tables <sup>8)</sup> quote a mass and width determined as Breit-Wigner parameters on the real axis. These are displayed in the complex energy plane as $`E=Mi\mathrm{\Gamma }/2`$. By expanding the relevant region, we can plot these real axis parameters as shown in Fig. 2. The points are scattered about. However, if one now analytically continues into the complex plane, one finds that these correspond <sup>9)</sup> to the pole mass and width plotted as $``$. One sees that these concentrate together 10–12 MeV lower than the real axis parameters. It is these pole parameters that are the closest present data gets to the true parameters of the $`\rho `$–resonance.
Now let us turn to the $`\sigma `$. In Fig. 3 are shown the mass and width from the determinations given in the 1998 PDG Tables <sup>8)</sup>. The labels correspond to the initials of the authors given there. Only the circles are from attempts to determine the pole positions; the triangles are Breit-Wigner-like modellings. One sees that where the $`\sigma `$ is is quite unclear. Its mass is anything from 400 MeV to 1200 MeV and its width from 200 MeV to 1 GeV. The reason for this is not hard to see. The parameters only become model-independent when close to the pole, as we illustrate below. In a very hand-waving sense, the accuracy with which one can continue into the complex plane is governed by the range and precision with which one knows the amplitude along the real axis. Even for the $`I=1`$ $`P`$–wave, where the precision is good and the pole not far (some 70 MeV) from the real axis, there is a shift of 10–12 MeV. For the $`I=0`$ $`S`$–wave, any pole may be 200–500 MeV away and the precision, with which this component of $`\pi \pi `$ scattering, is known is not very good, Consequently, for the $`\sigma `$ (the $`f_0(4001200)`$), any pole is so far in the complex plane that a continuation is quite unreliable without the aid of detailed modelling of the continuation. Indeed, we need to differentiate strongly between the form of the amplitude on the real axis and far away near the pole. Let us do that with a simple illustration.
Consider some scattering amplitude, $`𝒯(s)`$, for the process $`12`$. In the neighbourhood of the pole in the complex energy plane, where $`s=E^2`$, we can write
$$𝒯(s)=\frac{g_1^Rg_2^R}{m_R^2sim_R\mathrm{\Gamma }_R}+B(s),$$
$`(1)`$
where the residue factors $`g_1^R`$ and $`g_2^R`$ give the coupling of the resonance to the initiating formation or production channel and to the decay channel, respectively. Just as the mass, $`m_Ri\mathrm{\Gamma }_R/2`$ is complex, so these couplings will, in general, also be complex. It is the pole position defined here, and the residue factors, that will be model and process-independent. Now we, of course, observe scattering only for real values of the energy. There we represent the amplitude by
$$𝒯(s)=\frac{g_1(s)g_2(s)}{m^2(s)sim(s)\mathrm{\Gamma }(s)}+b(s).$$
$`(2)`$
This corresponds to a generalised Breit-Wigner representation, in which not only, the “width” will be a function of $`s`$, but the “mass” too. The Breit-Wigner mass and width are then just
$$M_{BW}=m\left(s=M_{BW}^2\right),\mathrm{\Gamma }_{BW}=\mathrm{\Gamma }\left(s=M_{BW}^2\right).$$
$`(3)`$
Importantly, the parameters $`m(s)`$, $`\mathrm{\Gamma }(s)`$ and the $`g_i(s)`$ will not only be process-dependent, but also depend on the way the background $`b(s)`$ is parametrized. However, when a pole is very close to the real axis, as in the case of the $`J/\psi `$, there is essentially no difference between the pole and real axis parameters. This is, of course, not the case for poles that are further away, even for the relatively nearby $`\rho `$. The parameters of Eq. (2) are connected to those of Eq. (1) by an analytic continuation. The functions must have the correct cut-structure to do this in a meaningful way. For the $`f_0(4001200)`$ the connection is wild and unstable, without a detailed modelling of this continuation. An example of such modelling will be given later.
It is important to realise that the unitarity of the $`S`$–matrix means that the pole-positions, given by Eq. (1), transmit universally from one process to another, independently of $`B(s)`$. This does not hold for the real axis parameters of Eq. (2). Indeed, the parameters of the Breit-Wigner and background, $`b(s)`$, are correlated. Thus, for instance in elastic scattering, unitarity requires that it is the sum of the phases of the Breit-Wigner and background component that transmits universally from one process to another and not the Breit-Wigner component separately from the background. This fact is most important and one often forgotten in determining resonance parameters from Eq. (2) and not Eq. (1). This is beautifully illustrated by the fits of the Ishidas and their collaborators <sup>10)</sup>.
Recognising that Watson’s theorem requires that the total phase, $`\delta `$, must equal the sum of the Breit-Wigner phase, $`\delta _{BW}`$, and background, $`\delta _{bkgd}`$, they choose the background phase for isoscalar scalar $`\pi \pi `$ scattering to have a particular momentum dependence, shown in Fig. 4a. They then deduce the Breit-Wigner component and infer that the parameters of Eqs. (2,3) give
$$M_{BW}=\mathrm{\hspace{0.17em}440}\mathrm{MeV},\mathrm{\Gamma }_{BW}=\mathrm{\hspace{0.17em}385}\mathrm{MeV},$$
from the fit of Fig. 4a, to the standard Ochs-Wagner phase-shifts <sup>11)</sup> from the classic CERN-Munich experiment <sup>12)</sup>. Since it is only the total phase, $`\delta `$, that matters, one can equally choose some different background, Fig. 4b, and then deduce that
$$M_{BW}\mathrm{\hspace{0.17em}500}\mathrm{MeV},\mathrm{\Gamma }_{BW}\mathrm{\hspace{0.17em}560}\mathrm{MeV},$$
for the Breit-Wigner-like component. Of course, any other choice of background is just as good. This shows that from the real axis, one can obtain more or less any set of Breit-Wigner parameters one likes for the $`\sigma `$ and yet describe exactly the same experimental data. Indeed, from the analysis by Kaminski et al. <sup>13)</sup> of the polarized scattering results, described here by Rybicki <sup>14)</sup>, one sees that the uncertainties on the starting phase-shifts may be presently far greater than those indicated in Fig. 4. This just makes the matter worse. The pole parameters are the only meaningful ones, but determining these directly from data on $`\pi \pi `$ scattering lacks any precision.
Because of the ubiquity of $`\pi \pi `$ final states in almost any hadronic process, it is useful (if not crucial) to include data from other initiating channels too. What unifies all these is unitarity. Consider $`I=0`$ $`J=0`$ interactions for definiteness. Let $`𝒯_{ij}`$ be the amplitude for initial state $`i`$ to go to final state $`j`$, then the conservation of probability requires that
$$\mathrm{Im}𝒯_{ij}(s)=\underset{n}{}\rho _n𝒯_{in}^{}(s)𝒯_{nj}(s),$$
$`(4)`$
where the sum is over all channels $`n`$ physically accessible at the energy $`\sqrt{s}`$ and $`\rho _n`$ is the appropriate phase-space for channel $`n`$. Most importantly, any channel with the same final state $`j`$, for instance $`\pi \pi `$, but initiated by a non-hadronic process, e.g. $`\gamma \gamma \pi \pi `$, has an amplitude $`_j`$ closely related to the hadronic scattering amplitudes, $`𝒯_{ij}`$, again by the conservation of probability. Unitarity then requires that
$$\mathrm{Im}_j(s)=\underset{n}{}\rho _n_n^{}(s)𝒯_{nj}(s).$$
$`(5)`$
In the elastic region, where $`i=j=n`$, this relation becomes the well-known final state interaction theorem due to Watson <sup>15)</sup>, that requires the phase of $`_i`$ to be the same as the phase of the hadronic amplitude $`𝒯_{ii}`$. It is the elastic phase-shift that transmits universally from one process to another. Unitarity knows of no separation into Breit-Wigner and background components, only the sum transmits. It is the nature of final state interactions that when, for instance, a pion pair is produced in $`\gamma \gamma `$ or in $`e^+e^{}`$ collisions, they continue to interact independently of the way they have been produced — only quantum numbers matter.
In the multi-channel case, the solution to Eqs. (4,5) is easily deduced <sup>16)</sup> to be
$$_j(s)=\underset{i}{}\alpha _i(s)𝒯_{ij}(s),$$
$`(6)`$
where the functions $`\alpha _i(s)`$ must be real. These determine the relative strengths of the coupling of the non-hadronic production channel to that for hadronic scattering, and are referred to as coupling functions. Eq. (6) is an exact statement of the content of unitarity and its universality. It ensures that any resonance in one channel couples universally to all processes that access the same quantum numbers. Of course, it does not mean that all processes are alike!
We may treat central production of pion pairs as a quasi-non-hadronic reaction, at least in certain kinematic regimes, like high energies and very small momentum transfers and big rapidity separations. Then the final state protons do not interact directly with the centrally produced mesons. Similarly, the decay $`J/\psi \varphi (\pi \pi )`$ is not expected to have any sizeable strong interaction between the $`\varphi `$ and the pions. Consequently, their amplitudes for $`I=0`$ $`S`$–wave $`\pi \pi `$ production satisfy Eq. (6). In Fig. 5 are shown the cross-sections for $`\pi \pi \pi \pi `$ scattering <sup>11,17)</sup>, $`pppp\pi \pi `$ <sup>18)</sup> and the $`J/\psi \varphi \pi \pi `$ decay distribution <sup>19)</sup>. The difference between these is reflected in differences in the coupling functions $`\alpha _i(s)`$. We see that apart from an Adler zero, near threshold, that suppresses $`\pi \pi `$ elastic scattering at low energies, central production has a very similarly shaped cross-section. In particular, the $`f_0(980)`$ produces a drop or shoulder in each of them. This is consistent with the notion that Pomerons, that supposedly control this central production process, couple to configurations of up and down quarks, in a similar way to $`\pi \pi `$ scattering. In contrast, in the $`J/\psi `$ decay, the final state $`\varphi `$ picks out hidden strangeness and so the $`f_0(980)`$ appears as a peak, reflecting its strong coupling to $`K\overline{K}`$. It is the coupling functions that shape the characteristics of these processes with the general unitarity relation of Eq. (6) as the underlying principle.
Inexplicably, the authors of Refs. 20 have taken this universality as implying that the coupling functions are constants. Then all the three processes in Fig. 5 would look alike, which of course, they do not — for very good reason. It is the difference in the coupling functions that reveals the nature of any resonance that couples to these channels. The fact that the $`f_0(980)`$ appears as a peak in $`J/\psi \varphi (\pi \pi )`$, in $`D_s\pi (\pi \pi )`$ and $`\varphi \gamma (\pi \pi )`$ is what teaches us <sup>16,21)</sup> that the $`f_0(980)`$ couples strongly to $`K\overline{K}\pi \pi `$ and less to $`\pi \pi \pi \pi `$ and reflects its underlying $`s\overline{s}`$ or $`K\overline{K}`$ make-up. The functions $`\alpha _i(s)`$ are not meaningless and unphysical as claimed in Refs. 20. In their language, their parameter $`\overline{\xi }_f/\overline{g}_f`$ is just $`\alpha (s=m_f^2)`$, for instance.
## 3 What is the $`\sigma `$ ? $`q\overline{q}`$ or glueball ?
Perhaps we can use such processes to build an understanding of the $`f_0(4001200)`$ too, just as for the $`f_0(980)`$. An ideal reaction in this regard is the two photon process. As photons couple to the charged constituents of hadrons, their two photon width measures the square of their average charge squared. We have data on both the $`\pi ^+\pi ^{}`$ and $`\pi ^0\pi ^0`$ final states from Mark II <sup>22)</sup>, CELLO <sup>23)</sup> and Crystal Ball <sup>24)</sup>. The underlying physics of the cross-sections shown in Fig. 6 is reviewed in detail in Refs. 25,26.
Suffice it to say that the $`f_2(1270)`$ is most evident, but where is the $`\sigma `$ ? Now it is often argued in the literature <sup>27)</sup> that since the charged cross-section at low energies is dominated by the one-pion exchange Born term, the neutral one provides a ready measure of the $`\sigma `$’s contribution. Looking at Fig. 6, this must be very small below 900 MeV. Consequently, the $`\sigma `$ must have a very small $`\gamma \gamma `$ width and so have little of charged constituents — perhaps it’s a glueball ? This is to misunderstand the nature of final state interactions. These affect the charged cross-section much more dramatically than the neutral one and this must be explained within the same modelling. How to handle this is fully described in Refs. 25,26, so let us just deal with an essential point here.
Imagine constructing the two photon amplitude from Feynman diagrams and simplistically assume the contributions are just the Born term, or rather its $`S`$–wave component, we call $`\sqrt{2/3}B_S`$, and the $`\sigma `$ contribution, $`\mathrm{\Sigma }`$, incorporating the direct $`\gamma \gamma `$ couplings of the $`\sigma `$. As there is no Born contribution to the $`\pi ^0\pi ^0`$ cross-section, it is assumed to be given wholly by this $`\sigma `$ component. Thus from the measured $`\gamma \gamma \pi ^0\pi ^0`$ cross-section, Fig. 6, we know $`\mathrm{\Sigma }`$.
Similarly, by taking the measured $`\gamma \gamma \pi ^+\pi ^{}`$ cross-section (of course, taking into account the limited angular range of such data, Fig. 6), and subtracting the contribution from $`L2`$ partial waves given by the Born amplitude, one obtains the modulus of the “charged” $`S`$–wave. At 600 MeV, for instance, we have $`\mathrm{\Sigma }=0.35`$ and $`B_S+\mathrm{\Sigma }=0.16`$, where the amplitudes are conveniently normalized (cf. Ref. 25) so that the $`B_S=\sqrt{3/2}`$ at threshold.<sup>1</sup><sup>1</sup>1This normalization has been chosen to avoid square roots in the labelling of Fig. 8. These constraints are displayed in Fig. 8a. Their intersection fixes the vector $`\mathrm{\Sigma }=\mathrm{\Sigma }_1`$.
However, final state interactions are specified by Watson’s theorem, which requires that the phase of the $`\gamma \gamma \pi \pi `$ $`S`$–wave amplitude must be the same as that for the corresponding $`\pi \pi `$ partial wave. For the $`I=0`$ $`\gamma \gamma \pi \pi `$ amplitude, this means
$$\mathrm{tan}\delta _0^0=\frac{\mathrm{Im}\mathrm{\Sigma }}{\frac{2}{3}B_S+\mathrm{Re}\mathrm{\Sigma }},$$
which fixes the $`I=0`$ $`S`$–wave vector to lie along the dashed line in Fig. 8b running from the point X. This constraint combined with the $`\pi ^0\pi ^0`$ cross-sections means $`\mathrm{\Sigma }=\mathrm{\Sigma }_2`$, whereas the $`\pi ^+\pi ^{}`$ cross-section gives $`\mathrm{\Sigma }=\mathrm{\Sigma }_3`$. Which is the right $`\mathrm{\Sigma }`$–vector dramatically affects the size of the $`I=0`$ $`S`$–wave amplitude. Clearly, $`\mathrm{\Sigma }_1,\mathrm{\Sigma }_2,\mathrm{\Sigma }_3`$ should all be equal! This inconsistency is a sign of the inadequacy of such a simplistic model. Indeed, in terms of Feynman diagrams we must add to the graphs of Fig. 7
as well as all the corrections to the Born term. Without such terms, the magnitude of the direct $`\sigma `$–component is meaningless. The dispersive framework sums all such terms exactly. This allows the nearest one can presently get to a model-independent separation of the individual spin components with $`I=0`$ and 2. This reveals a quite different $`S`$–wave amplitude, see Fig. 10 for the dip solution <sup>28)</sup>. It is the strong interference between the contributions of Figs. 7,9 that makes the structure of the $`\gamma \gamma \pi \pi `$ $`I=0`$ $`S`$–wave quite different from that of any other process, cf. Figs. 1,10.
Recalling that two photon widths are a measure of the square of the mean squared charge squared of the constituents of a hadron times the probability that these constituents annihilate, these widths tells us about the constitution of resonances. Thus, $`\mathrm{\Gamma }(f_2(1270)\gamma \gamma )=(2.84\pm 0.35)`$ keV we find <sup>28)</sup> is just what is expected of a $`(u\overline{u}+d\overline{d})`$ tensor. While $`\mathrm{\Gamma }(f_0(980)\gamma \gamma )=(0.28_{0.13}^{+0.09})`$ keV is not only consistent with the radiative width of an $`s\overline{s}`$ scalar, it also agrees with the prediction for a $`K\overline{K}`$ system <sup>29)</sup>. Indeed, the $`f_0(980)`$ is likely to be a mixture of both of these. For the $`\sigma `$ we find $`\mathrm{\Gamma }(f_0(4001200)\gamma \gamma )=(3.8\pm 1.5)`$ keV, which is quite consistent with the width expected for a $`(u\overline{u}+d\overline{d})`$ scalar, according to Li, Barnes and Close <sup>30)</sup>.
Another model-dependent way to test the composition of the $`f_0(4001200)`$ is by the use of QCD sum-rules. These connect the low energy hadron world to the high energy regime of asymptotic freedom, where the predictions of QCD are calculable <sup>31)</sup>. By applying sum-rule techniques to the non-strange scalar current, Cherry, Maltman and myself <sup>32)</sup> have found that the $`f_0(4001200)`$, as given by experiment, cf. Fig. 1, saturates the sum-rules. This is in contradistinction to the conclusions of Elias et al. <sup>33)</sup>, presented at this meeting by Steele, who find the sum-rules are not saturated, but where they describe the $`\sigma `$ by a broad Breit-Wigner-like structure.
## 4 Modelling the unknown — the $`\sigma `$ pole
Earlier, we have stressed the importance of the pole in determining the only truly unambiguous parameters of any short-lived state. However, the $`\sigma `$ is so short-lived that continuing experimental information to the actual pole is highly unreliable without modelling. A possible way to proceed is to approximate experiment (at real values of the energy) by known analytic functions, one can then readily continue to the pole. Let me illustrate this with an example. Let us consider, the scalar form-factor, $`F(s)`$. Though this is not a directly observable quantity, it has the advantage of only having a right hand cut and so its continuation is particularly straightforward. Let us assume that, as $`s\mathrm{}`$,
$$s^2<F(s)<s^2.$$
Then both $`F(s)`$ and its inverse satisfy twice-subtracted dispersion relations.
$`F(s)`$ $`=`$ $`1+bs+{\displaystyle \frac{s^2}{\pi }}{\displaystyle _{4m_\pi ^2}^{\mathrm{}}}𝑑s^{}{\displaystyle \frac{\mathrm{Im}F(s^{})}{s^2(s^{}s)}},`$ (7)
$`{\displaystyle \frac{1}{F(s)}}`$ $`=`$ $`1bs+{\displaystyle \frac{s^2}{\pi }}{\displaystyle _{4m_\pi ^2}^{\mathrm{}}}𝑑s^{}{\displaystyle \frac{\mathrm{Im}1/F(s^{})}{s^2(s^{}s)}}.`$ (8)
Along the real positive axis we simply model $`\beta \mathrm{Im}F(s)`$ by a polynomial in $`\beta ^2=14m_\pi ^2/s`$. The parameters are arranged to fulfill elastic unitarity at low energies, by fixing the phase of $`F(s)`$ to be the experimental $`I=J=0`$ $`\pi \pi `$ phase-shift, $`\delta _0^0`$ of Fig. 4. We can then use the dispersion relations of Eqs. (7,8) to determine $`F(s)`$ everywhere in the complex energy plane $`E`$, where $`s=E^2`$. Whether one approximates the imaginary part of the form-factor, or its inverse, makes very little difference, to its continuation on the first sheet. In any event, there are no poles on this sheet.
We then continue to the second sheet. This is achieved by taking the other sign of the square root branch-point, i.e.
$$\sqrt{s\mathrm{\hspace{0.17em}4}m_\pi ^2}\sqrt{s\mathrm{\hspace{0.17em}4}m_\pi ^2}.$$
If one considers the continuation given by Eq. (7), there are still no poles and $`F(s)`$ is smooth. However, if instead one uses Eq. (8), poles emerge rather spectacularly, Fig. 11. This alternative approximation to the imaginary part of the inverse form-factor may be regarded as some Padé approximant to the exact imaginary part. In this simple exercise, we find the $`\sigma `$ pole gives, Eq. (1),
$$M_R\mathrm{\hspace{0.17em}457}\mathrm{MeV},\mathrm{\Gamma }_R\mathrm{\hspace{0.17em}219}\mathrm{MeV}.$$
If instead, one models the low energy form-factor by using Chiral Perturbation Theory ($`\chi PT`$), as has been done by by Hannah <sup>34)</sup>, then one finds
$$M_R\mathrm{\hspace{0.17em}463}\mathrm{MeV},\mathrm{\Gamma }_R\mathrm{\hspace{0.17em}393}\mathrm{MeV}$$
at one loop $`\chi PT`$, and
$$M_R\simeq 445\,\mathrm{MeV},\qquad \mathrm{\Gamma }_R\simeq 470\,\mathrm{MeV}$$
at two loops, making three subtractions, to emphasise still further the low energy constraints from $`\chi PT`$ <sup>35)</sup>. While the real part of the pole position is reasonably stable, the imaginary part depends rather more sensitively on the modelling of the real axis information. Dobado and Pelaez <sup>35)</sup>, and then Oller and Oset <sup>35)</sup>, discussed similar calculations first, but for the full $`\pi \pi `$ scattering amplitude in $`\chi PT`$. These examples illustrate how precision data on $`\pi \pi `$ observables can determine the parameters of the $`\sigma `$. The precision is input by a specific modelling that allows a continuation giving poles on the nearby unphysical sheet. Remember that whatever the pole parameters actually are, it is still the same $`\pi \pi `$ amplitudes and phases, Figs. 1,4, that are being described.
## 5 Summary — facts
We now summarise the key facts discussed above:
* The $`I=J=0`$ $`\pi \pi `$ interaction is strong above 400 MeV, or so. This very short-lived correlation between pion pairs is what we call the $`\sigma `$.
* Such a $`\sigma `$ is expected to be the field whose non-zero vacuum expectation value breaks chiral symmetry. The details of this are, however, model-dependent — see Refs. 3,4,5,36.
* The low mass $`\pi \pi `$ enhancement may be describable in terms of $`t`$–channel exchanges, in particular the $`\rho `$, but this does not mean that the $`\sigma `$ does not exist as an $`s`$–channel pole. That there are these alternative descriptions is just hadron duality.<sup>2</sup><sup>2</sup>2While an infinite number of crossed-channel exchanges are needed to generate a pole in the $`s`$–channel, the $`\sigma `$–pole is so far from the real axis, that the absorptive part of the $`I=0`$ $`\pi \pi `$ amplitude can on the real axis be readily described by a few crossed-channel (Regge) exchanges, which is all that matters for (finite energy sum-rule) duality.
* It is the pole in the complex energy plane that defines the existence of a state in the spectrum of hadrons. It is only the pole position (and residues) that are model-independent. Within models, the position of $`K`$–matrix poles may be imbued with significance as indicating the underlying or precursor state <sup>37,38)</sup>. However, these only have meaning within models and within a particular parametrization of the $`K`$–matrix. In contrast, poles of the $`S`$–matrix are both process and model-independent.
* Fitting $`I=J=0`$ $`\pi \pi `$ data on the real axis in the energy plane with Breit-Wigner forms determines $`M_{BW}`$ and $`\mathrm{\Gamma }_{BW}`$, Eqs. (2,3). However, these are parametrization-dependent, process-dependent and a poor guide to the true pole position, Eq. (1).
* The pole position is determined by analytic continuation. Since for the $`\sigma `$ this continuation is far from the real axis, the mass function $`m(s)`$, width function $`\mathrm{\Gamma }(s)`$, and couplings $`g(s)`$ of Eq. (2) will all be functions of energy and not simply constants. Tornqvist has illustrated the energy dependence of such scalar propagators within a model of hadron dressing <sup>38)</sup>.
* While the shape of the $`\pi \pi `$ spectrum is process-dependent (see Fig. 5), the phase of the corresponding amplitudes, in a given spin and isospin configuration, is process-independent below 1 GeV.
## 6 Summary — model-dependent statements
* The link between the almost model-independent experimentally determined radiative width and the composition of the $`\sigma `$ does require modelling. Analysis of the $`f_0(400-1200)`$ in two photon processes indicates that it has a $`(u\overline{u}+d\overline{d})`$ composition.
* Preliminary results from a new QCD sum-rule analysis <sup>32)</sup> of the scalar $`(u\overline{u}+d\overline{d})`$ current suggest that this is saturated by the $`f_0(400-1200)`$, just as expected from above.
* The pole position of the $`\sigma `$ can be found by modelling the analytic continuation, starting from experimental (or theoretical) information on the real axis.
* The relation that this pole has to the underlying undressed or bare state is model-dependent. The model of Tornqvist <sup>38)</sup>, for instance, provides a possible connection with the lightest underlying ideally mixed $`q\overline{q}`$ multiplet <sup>38,39)</sup>.
To go further, we need precision data on understood processes. $`\pi \pi `$ final states with vacuum quantum numbers appear in a multitude of reactions. It is only by the collective analysis of all of these that we can hope to solve the riddle of the $`\sigma `$. It is a puzzle worth solving, since the nature and properties of the $`\sigma `$ lie at the heart of the QCD vacuum.
Acknowledgements
It is a pleasure to thank the organisers, particularly Tullio Bressani and Alessandra Filippi, for having generated this meeting and Lucien Montanet for devoting a day to the $`\sigma `$ and stimulating renewed discussion of its existence, nature and properties. I acknowledge travel support from the EEC-TMR Programme, Contract No. CT98-0169, EuroDA$`\mathrm{\Phi }`$NE. I am especially grateful to Tony Thomas and the Special Research Centre for the Subatomic Structure of Matter at the University of Adelaide, where this talk was prepared, for their generous hospitality.
References
* B.S. Zou, hep-ph/9611235, Talk presented at 34th Course of International School of Subnuclear Physics, Erice, Sicily, July 1996;
V.V. Anisovich, D.V. Bugg, A.V. Sarantsev, B.S. Zou, Phys.Rev. D50 (1994) 972;
D.V. Bugg, B.S. Zou, A.V. Sarantsev, Nucl. Phys. B471 (1996) 59.
* J. Gunter et al., hep-ex/9609010, APS Minneapolis 1996, Particles and fields vol.1 pp. 387-391.
* M. Gell-Mann, M. Levy, Nuovo Cim. 16, 705 (1960);
Y. Nambu, G. Jona-Lasinio, Phys. Rev. 122, 345 (1961);
see R. Delbourgo, M.D. Scadron, Phys. Rev. Lett. 48, 379 (1982) in the context of QCD.
* J. Goldstone, Nuovo Cim. 19, 154 (1961).
* S. Weinberg, Physica 96A, 327 (1979);
* J. Gasser, H. Leutwyler, Ann. Phys. (NY) 158, 142 (1984), Nucl. Phys. B250 465 (1985).
* A. Rittenberg et al., Review of Particle Properties, Rev. Mod. Phys. 43 S1 (1971) — see p. S115.
* P. Söding et al., Review of Particle Properties, Phys. Lett. 39B, no. 1 (1972) — see p. 104.
* C. Caso et al., Review of Particle Physics, Eur. Phys. J. C3, 1 (1998).
* M. Benayoun, H.B. O’Connell, A.G. Williams, Phys.Rev. D59, 074020 (1999).
* S. Ishida et al., Prog. Theor. Phys. 95, 745 (1996),
* W. Ochs, thesis submitted to the University of Munich (1974).
* B. Hyams et al., Nucl. Phys. B64, 134 (1973);
* G. Grayer et al., Nucl. Phys. B75, 189 (1974);
* see also, D. Morgan, M.R. Pennington, Second DA$`\mathrm{\Phi }`$NE Physics Handbook, ed. L. Maiani, G. Pancheri and N. Paver, pp. 193-213 (INFN, Frascati, 1995).
* R. Kaminski, L. Lesniak, K. Rybicki, Z. Phys. C74, 79 (1997).
* K. Rybicki, these proceedings.
* K.M. Watson, Phys. Rev. 95, 228 (1954).
* K.L. Au, D. Morgan, M.R. Pennington, Phys. Rev. D35, 1633 (1987).
* M.R. Pennington, Proc. BNL Workshop on Glueballs, Hybrids and Exotic Hadrons, ed. S.U. Chung, AIP Conf. Proc. No. 185, Particles & Fields Series 36, p. 145 (AIP, New York, 1989).
* T. Åkesson et al., Nucl. Phys. B264, 154 (1986).
* U. Mallik, Strong Interactions and Gauge Theories, Proc. XXIth Recontre de Moriond, ed. J. Tran Thanh Van, Vol 2, p. 431 (Editions Frontiéres, Gif-sur-Yvette, 1986);
see also, A. Falvard et al. (DM2), Phys. Rev. D38, 2706 (1988);
W. Lockman (Mark III), Proc. 3rd Int. Conf. on Hadron Spectroscopy, Ajaccio, 1989, p. 109 (Editions Frontières, Gif-sur-Yvette, 1989).
* M. Ishida, S. Ishida, T. Ishida, hep-ph/9805319, Prog. Theor. Phys. 99, 1031 (1998).
* S. Ishida, these proceedings.
* D. Morgan, M.R. Pennington, Phys. Rev. D48, 1185 (1993).
* J. Boyer et al., Phys. Rev. D42, 1350 (1990).
* H.J. Behrend et al., Z. Phys. C56, 381 (1992).
* H. Marsiske et al., Phys. Rev. D41, 3324 (1990);
* J.K. Bienlein, Proc. IXth Int. Workshop on Photon-Photon Collisions (San Diego 1992) ed. D. Caldwell and H.P. Paar, p. 241 (World Scientific).
* D. Morgan, M.R. Pennington, Phys. Lett. 192B, 207 (1987), Z. Phys. C37, 431 (1988); C39, 590 (1988); C48, 623 (1990).
* M.R. Pennington, DA$`\mathrm{\Phi }`$NE Physics Handbook, ed. L. Maiani, G. Pancheri and N. Paver, pp. 379-418 (INFN, Frascati, 1992); Second DA$`\mathrm{\Phi }`$NE Physics Handbook, ed. L. Maiani, G. Pancheri and N. Paver, pp. 531-558 (INFN, Frascati, 1995).
* S. Narison, Nucl. Phys. B509, 312 (1998);
P. Minkowski, W. Ochs, hep-ph/9811518, EPJC (in press).
* M. Boglione, M.R. Pennington, hep-ph/9812258, EPJC (in press).
* T. Barnes, Phys. Lett. 165B, 434 (1985); Proc. IXth Int. Workshop on Photon-Photon Collisions (San Diego 1992) ed. D. Caldwell and H.P. Paar, p. 263 (World Scientific).
* Z.P. Li, F.E. Close, T. Barnes, Phys. Rev. D43 2161 (1991).
* M.A. Shifman, A.I. Vainshtein, V.I. Zakharov, Nucl. Phys. B147 385, 448 (1979).
* S.N. Cherry, K. Maltman, M.R. Pennington, in preparation.
* V. Elias, A.H. Fariborz, F. Shi, T.G. Steele, Nucl. Phys. A633, 279 (1998);
* T.G. Steele, these proceedings.
* T. Hannah, Phys. Rev. (in press).
* A. Dobado, J.R. Peláez, Phys. Rev. D56, 3057 (1997);
J.A. Oller and E. Oset, Nucl. Phys. A620, 438 (1997).
* M.R. Pennington, hep-ph/9612417, Proc. of DAPHCE Workshop, Frascati, Nov. 1996, Nucl. Phys. (Proc. Supp.) A623 189 (1997).
* V.V. Anisovich, A.V. Sarantsev, Phys. Lett. B382, 429 (1996).
* N. Tornqvist, Z. Phys. C68, 647 (1995).
* E. van Beveren et al., Z. Phys. C30, 615 (1986);
* M. Boglione, M.R. Pennington, Phys. Rev. Lett. 79, 1633 (1997).
# Infinite Characteristic Length in Small-World Systems
## Abstract
It was recently claimed that on $`d`$-dimensional small-world networks with a density $`p`$ of shortcuts, the typical separation $`s(p)\sim p^{-1/d}`$ between shortcut-ends is a characteristic length for shortest-paths (M. E. J. Newman and D. J. Watts, “Scaling and percolation in the small-world network model”, cond-mat/9904419). This contradicts an earlier argument suggesting that no finite characteristic length can be defined for bilocal observables on these systems (M. Argollo de Menezes, C. Moukarzel and T. J. P. Penna, “First-order transition in small-world networks”, cond-mat/9903426). We show analytically, and confirm by numerical simulation, that shortest-path lengths $`\ell (r)`$ behave as $`\ell (r)\simeq r`$ for $`r<r_c`$ and as $`\ell (r)\simeq r_c`$ for $`r>r_c`$, where $`r`$ is the Euclidean separation between two points and $`r_c(p,L)\sim p^{-1/d}\mathrm{log}(L^dp)`$. This shows that the mean separation $`s`$ between shortcut-ends is *not* a relevant length-scale for shortest-paths. The true characteristic length $`r_c(p,L)`$ *diverges* with system size $`L`$ no matter the value of $`p`$. Therefore no finite characteristic length can be defined for small-world networks in the thermodynamic limit.
The recent observation that a non-extensive amount of long-range bonds, or “shortcuts”, on a regular $`d`$-dimensional lattice suffices to drastically shorten shortest-path distances, has triggered appreciable interest in these so-called “small-world” networks.
Small-world networks consist of a regular $`d`$-dimensional lattice of linear size $`L`$ to which $`L^dp`$ long-ranged bonds have been added, connecting randomly chosen pairs of sites. Several recent studies have concentrated on the behavior of the average shortest-path distance $`\overline{\ell }=1/N^2\sum _{<ij>}<\ell _{ij}>`$ (with $`N=L^d`$ the number of sites), where $`\ell _{ij}`$ is the minimum number of links that must be traversed to join sites $`i`$ and $`j`$, and $`<>`$ means average over disorder realizations. On a regular lattice one has $`\overline{\ell }\sim L`$, while on a random graph of $`L^d`$ sites, $`\overline{\ell }\sim \mathrm{log}(L)`$. On small-world networks, which interpolate between these two limits, one finds $`\overline{\ell }\sim L`$ if $`L^dp<<1`$, but $`\overline{\ell }\sim \mathrm{log}L`$ if $`L^dp>>1`$. Thus on a large system a small density of shortcuts is enough to have shortest-path distances which are characteristic of random graphs.
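For readers who want to reproduce this crossover, here is a minimal sketch (1-d ring, breadth-first-search shortest paths); the sizes, shortcut density and number of realizations are illustrative only, and a shortcut is simply an extra edge that counts as one step.

```python
# Minimal sketch: average shortest-path distance on 1-d small-world networks.
import random
from collections import deque

def small_world(L, p, rng):
    adj = {i: {(i - 1) % L, (i + 1) % L} for i in range(L)}   # ring lattice
    for _ in range(int(p * L)):                               # ~ pL shortcuts
        a, b = rng.randrange(L), rng.randrange(L)
        if a != b:
            adj[a].add(b); adj[b].add(a)
    return adj

def bfs_distances(adj, source):
    dist, queue = {source: 0}, deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def mean_ell(L, p, realizations=5, seed=0):
    rng = random.Random(seed)
    tot = n = 0
    for _ in range(realizations):
        d = bfs_distances(small_world(L, p, rng), 0)   # one source: rough estimate
        tot += sum(d.values()); n += len(d)
    return tot / n

for L in (200, 400, 800, 1600):
    print(L, mean_ell(L, p=0.05))      # grows only ~log L once L*p >> 1
```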
It has been suggested that for any fixed density $`p`$ of long-ranged bonds, a *crossover-size* $`L^{*}(p)`$ exists, above which the average shortest-path distance $`\overline{\ell }`$ increases only logarithmically with $`L`$. Renormalization Group arguments as well as numerical measurements indicate that this crossover length diverges as $`p\to 0`$ as $`L^{*}(p)\sim p^{-1/d}`$. While some authors see this result as evidence of a continuous phase transition at $`p=0`$, it has also been argued that no finite correlation length can be defined on these systems, and thus that the small-world transition is first-order.
Clearly, the normalized shortest-path distance $`\ell (p)=\overline{\ell }/L`$ undergoes a discontinuity at $`p=0`$ in the thermodynamic limit. Thus, the “small-world” transition is technically a *first-order*, or discontinuous, transition. A subtler and still controversial point is whether it is possible to identify a correlation length $`\xi (p)`$ diverging at $`p=0`$, i.e. if the small-world transition is associated with some sort of *critical behavior*. This would mean that $`p=0`$ is a *first-order critical point* (FOCP). As discussed by Fisher and Berker, a FOCP is a critical point where the thermal eigen-exponent attains its maximum possible value $`y_t=d`$, thus allowing the *coexistence* of two different critical phases. Consequently at a FOCP one has $`\nu =1/y_t=1/d`$, and some finite-size corrections will be dictated by this exponent.
A FOCP appears for example in one-dimensional percolation, where the order parameter $`P_{\infty }`$ (the density of the spanning cluster) is discontinuous at $`p_c=1`$, while the correlation length $`\xi `$ diverges as $`p\to 1`$ as $`\xi \sim (1-p)^{-1}`$. Finite-size effects such as e.g. the width of the critical region are of order $`L^{-1}`$. Because of this, $`P_{\infty }`$ is non-zero even for $`p<1`$, if $`L<\xi \sim (1-p)^{-1}`$. Another example of a FOCP is the Ising model in one dimension, where the magnetization density $`<m>(T)`$ is discontinuous at $`T=0`$ while the correlation length $`\xi `$ diverges as $`\xi \sim e^{J/T}`$.
At a normal first-order point, on the other hand, no critical behavior exists (i.e. all relevant $`L`$-independent length-scales remain finite), but the eigen-exponent $`d`$ also shows up in finite-size corrections. In order to illustrate this, let us consider a field-driven first-order transition at $`h=0`$ (like e.g. in the two-dimensional Ising model below $`T_c`$). Assume that the central spin in a system of linear size $`L`$ is pinned in the direction opposite to the external field $`h`$, and let us take for simplicity $`T\to 0`$. If $`L`$ is large enough, the bulk energy associated with $`h`$ will be predominant, and $`m(L,h)`$ will have the same sign as $`h`$. On the other hand if $`L`$ is small, it will be energetically favorable not to break any bonds and thus all spins will point in the $`-h`$ direction. The crossover length $`L^{*}`$ at which magnetization inversion happens can be estimated by equating the total field contribution $`L^dh`$ to the energy cost associated with breaking a finite number of bonds around the pinned spin, and one obtains $`L^{*}\sim h^{-1/d}`$. So we see that there may be a diverging crossover size in a first-order transition, although there is no diverging correlation length in this case.
Thus the fact that finite-size corrections in the *global* observable $`\overline{\ell }`$ of small-world networks behave as $`L^{-d}`$ is compatible both with a first-order transition and with a first-order critical point at $`p=0`$. Clearly it is still necessary to prove or disprove the existence of critical behavior in small-world networks, and for this purpose we suggest looking at a bilocal observable (from which the analog of a correlation function $`g(r)`$ could eventually be defined), instead of the global observable $`\overline{\ell }`$. We study in this work the behavior of $`\ell (r)`$, the average shortest-path distance between two points separated by an Euclidean distance $`r`$.
The basic question we wish to answer is whether it is possible to identify a characteristic length $`\xi (p)`$ on small-world networks, such that: *i)* $`\xi (p)`$ is a length scale dictating the behavior of $`\ell (r)`$, *ii)* $`\xi (p)`$ only depends on $`p`$ and not on system size $`L`$, when $`L\to \infty `$, and *iii)* $`\xi `$ diverges at the critical point $`p_c`$.
Argollo et al. have recently given a simple asymptotic argument to suggest that, when $`L\to \infty `$, no *finite characteristic length* can be defined for bilocal observables such as shortest-path lengths on small-world networks. Newman and Watts, on the other hand, recently claimed that the characteristic length for shortest-paths is the mean separation $`s(p)\sim p^{-1/d}`$ between shortcut-ends. This quantity clearly satisfies criteria *ii)* and *iii)* above, but we will show that it does not satisfy *i)*, i.e. it is *not* relevant for shortest-path lengths on large systems.
In order to test the ideas above, consider the spread of a disease in a $`d`$-dimensional small-world system of linear size $`L`$, as depicted in Fig. 1. For simplicity we work on the continuum. Assume that, at $`t=0`$, an infectious disease starts to spread radially with constant velocity $`v=1`$ from A. Let the sphere of radius $`t`$ grown from A be called “primary sphere” of infected sites, or “individuals”. Because of the existence of shortcuts, which we assume are traversed in zero time, other (secondary) sources of infection will appear at times $`t_k`$ at *random locations* in the available non-infected space. These times $`t_k`$ are the times at which an infection sphere (primary or not) hits one end of a shortcut. We call the spheres born at the other end of these shortcuts, “secondary spheres”. In this setting, the shortest-path distance $`\ell (A,x)`$ from $`A`$ to $`x`$ is exactly the *time* $`t`$ at which a point $`x`$ is first infected. If a point at $`x`$ is hit by the primary sphere first (before being hit by a secondary sphere), this will necessarily happen at time $`t=d_E(A,x)`$, where $`d_E(A,x)`$ is the Euclidean distance from $`A`$ to $`x`$. In this case the shortest-path distance $`\ell (A,x)`$ is simply $`d_E(A,x)`$. If on the other hand a secondary sphere hits $`x`$ at an *earlier* time $`t^{\prime }`$ then $`\ell (A,x)=t^{\prime }<d_E(A,x)`$.
In order to better uncover the physical meaning of a characteristic length $`r_c`$ for shortest-path distances $`\ell (r)`$, let us consider a slightly modified disease-spreading problem. Let us assume that the primary version of the disease (the one that starts at $`A`$) is lethal, while secondary versions are modified when traversing shortcuts, and become of a milder, non-lethal form, which protects individuals who are infected by it against the lethal form. Therefore non-infected individuals who are hit by the primary sphere will be killed, while those being hit first by a secondary sphere become immune. Now assume that a characteristic length $`r_c`$ exists such that $`\ell (r)\simeq r`$ for $`r<r_c`$ and $`\ell (r)<<r`$ for $`r>r_c`$. Because of the relationship between disease-spreading and shortest-path distances described above, one would conclude that *a)* a person at $`x`$ is killed by the disease with a high probability if $`d_E(A,x)<r_c`$, and *b)* a person at $`x`$ is relatively safe if $`d_E(A,x)>r_c`$. Therefore $`r_c`$, the characteristic length of shortest-path distances $`\ell (r)`$, plays the role of a “safety distance” from the primary infection site $`A`$ in this modified disease-spreading model.
Newman and Watts recently provided a clever recursive equation for the total volume $`V(t)`$ of infected sites as a function of time , and solved it exactly in one dimension. Their result reads
$$V(t)=\frac{1}{2}s\left(e^{4t/s}-1\right).$$
(1)
where $`s\sim p^{-1}`$ is the average separation between ends of different shortcuts. Notice that $`V(t)\sim t`$ for $`t<<s`$ (only the primary sphere is important) while later $`V(t)\sim e^{t/s}`$ when $`t>s`$ (proliferation of secondary spheres). Because of this change in the behavior of $`V(t)`$ at $`t\sim s`$, Newman and Watts suggest that $`s`$ is the equivalent of a correlation length.
The total infected volume $`V(t)`$ in (1) is the sum of the primary volume $`V_1(t)`$ plus the secondary volume $`V_2(t)`$. The primary volume is simply $`V_1(t)=t^d\mathrm{\Omega }_d`$, where $`\mathrm{\Omega }_d`$ is the volume of a $`d`$-dimensional hypersphere of unit radius. According to our discussion above, we are interested in calculating the probability $`P_S(r)`$ for an individual located at a distance $`r`$ from $`A`$ to become infected by a secondary sphere before being hit by the primary sphere. Because the secondary volume $`V_2(t)`$ is *randomly distributed* in the total available space $`L^d`$, it is easy to calculate the probability $`p_2(t)`$ for an individual to become infected by the secondary version of the disease at time $`t`$ *or earlier*. This is simply the fraction of the total volume that is covered by the secondary volume: $`p_2(t)=V_2(t)/L^d`$, and does not depend on the Euclidean distance from the primary source at $`A`$. If the individual does not become immunized by the secondary disease first, it will certainly be killed by the primary infection at time $`t=d_E(A,x)`$, when the primary sphere reaches him. Thus the relevant quantity is $`p_2(t)`$ evaluated at $`t=d_E(A,x)`$, which measures the probability $`P_S(d_E(A,x))`$ for an individual at $`x`$ to survive. In one dimension one obtains, using (1) and $`V_1(t)=2t`$,
$$P_S(d_E(A,x))=\frac{s}{2L}\left(e^{4d_E(A,x)/s}-\frac{4d_E(A,x)}{s}-1\right),$$
(2)
which goes to zero when $`L\to \infty `$ no matter how large $`d_E(A,x)`$ is. Thus there is no *finite* safety distance in the thermodynamic limit, meaning that $`r_c`$ diverges with system size. The size-dependence of the “safety distance” $`r_c`$ can be estimated from (2) by setting $`P_S`$ equal to a constant, from which one obtains $`r_c\sim s\mathrm{ln}(L/s)`$.
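This estimate is easy to check numerically: solving $`P_S(r_c)=`$ const. from Eq. (2) for increasing $`L`$ shows $`r_c`$ growing proportionally to $`s\mathrm{ln}(L/s)`$. In the sketch below the threshold value 1/2, the value of $`s`$ and the list of sizes are arbitrary illustrative choices.

```python
# Solve P_S(r_c) = 1/2 from Eq. (2) for several system sizes.
import numpy as np
from scipy.optimize import brentq

s = 20.0                                  # mean separation between shortcut ends

def P_S(r, L):
    return (s / (2 * L)) * (np.exp(4 * r / s) - 4 * r / s - 1)

for L in (1e3, 1e4, 1e5, 1e6):
    rc = brentq(lambda r: P_S(r, L) - 0.5, 0.0, 50 * s)
    print("L = %8.0f   r_c = %6.1f   (s/4)*ln(L/s) = %6.1f"
          % (L, rc, s * np.log(L / s) / 4))
```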
These heuristic arguments are confirmed by an exact calculation of $`\ell (r)`$, as described in the following. Newman and Watts’ equation for $`V(t)`$ can be exactly solved in all dimensions, and using this solution it is possible to calculate the average shortest-path distance $`\ell (r)`$ between two points separated by an Euclidean distance $`r`$. The result is:
$$\ell (r)\simeq \{\begin{array}{ccc}r\hfill & \text{for}\hfill & r<r_c=p^{-1/d}\mathrm{log}(p^{1/d}L)\hfill \\ r_c\hfill & \text{for}\hfill & r>r_c\hfill \end{array}$$
(3)
We confirmed this result numerically in one dimension, by measuring the average shortest path distance $`\ell (r)`$ as a function of Euclidean separation $`r`$, for several values of $`L`$ and $`p`$. Equation (3) then means that the rescaled shortest-path distance $`\stackrel{~}{\ell }(r)=\ell (r)/r_c`$ should be a universal function of the rescaled Euclidean distance $`\stackrel{~}{r}=r/r_c`$. This is confirmed by the fact that all data displayed in Fig. 2 fall on a single curve.
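A sketch of the kind of measurement behind such a collapse plot is given below (ring sizes, shortcut densities and the single disorder realization per curve are illustrative; shortcut edges count one step, which does not affect the scaling, and averaging over realizations would smooth the curves):

```python
# Sketch of the scaling test: ell(r)/r_c versus r/r_c on 1-d small-world rings,
# with r_c = (1/p) * log(L*p) as in Eq. (3) for d = 1.  Parameters illustrative.
import math, random
from collections import deque
import matplotlib.pyplot as plt

def ell_vs_r(L, p, rng):
    adj = {i: {(i - 1) % L, (i + 1) % L} for i in range(L)}
    for _ in range(int(p * L)):
        a, b = rng.randrange(L), rng.randrange(L)
        if a != b:
            adj[a].add(b); adj[b].add(a)
    dist, queue = {0: 0}, deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    # Euclidean (ring) distance vs shortest-path distance from site 0
    return [(min(i, L - i), dist[i]) for i in range(L)]

rng = random.Random(1)
for L, p in ((2000, 0.02), (4000, 0.01), (8000, 0.005)):
    rc = (1.0 / p) * math.log(L * p)
    pts = ell_vs_r(L, p, rng)
    plt.plot([r / rc for r, _ in pts], [l / rc for _, l in pts],
             ".", ms=2, label=f"L={L}, p={p}")
plt.xlabel("r / r_c"); plt.ylabel("ell(r) / r_c"); plt.legend(); plt.show()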
Notice that the characteristic length $`r_c\sim s\mathrm{log}(L/s)`$ is much larger than $`s`$, the mean separation between shortcuts. The reason why $`s`$ is not a relevant length scale for shortest-path distances is not difficult to understand. Consider the shortest-path distance $`\ell (A,x)`$ between $`A`$ and some point $`x`$ as in Fig. 1. While the average distance one has to travel from $`A`$ in order to find a shortcut end is $`s`$, this shortcut has a typical length $`L`$ and thus will take us too far away from $`A`$ to be useful in shortening the distance to $`x`$.
According to (3), only when $`d_E(A,x)`$ is larger than $`r_c=s\mathrm{log}(L^dp)`$ is it more “convenient” to go from $`A`$ to $`x`$ through shortcuts than directly through the lattice. This result means that the typical shortest path in the regime of long distances $`r>>r_c`$ contains $`𝒪(\mathrm{log}(L^dp))`$ shortcuts, since the typical “price” paid for each shortcut traversed is the average distance between shortcut ends, i.e. $`s`$.
Thus $`s`$ is *not* a relevant scale for shortest-path lengths $`\ell (r)`$ and therefore it does not play the role of a correlation length as suggested recently. A physically more appealing interpretation of $`s`$ is to say that it is a diverging *timescale* for the dynamic process of disease spreading.
As might be clear by now, the impossibility of defining a finite correlation length is a consequence of the fact that the locations of the secondary spheres are uncorrelated with the location of the primary infection. In other words, while the typical separation between ends of different shortcuts $`s(p)`$ is finite for $`L\to \infty `$, the typical separation between both ends of the same shortcut, which is an important quantity for shortest-paths, scales as $`L`$. As a consequence of this, the number of shortcuts with both ends inside a region of any finite size $`\xi `$ goes to zero when $`L\to \infty `$, which already implies the impossibility of defining a finite correlation length. Thus, although our demonstration here that no finite correlation length can be defined for shortest-path lengths is more rigorous, the same conclusion can be reached by using simple arguments.
A different situation would certainly arise if shortcuts had a length-dependent distribution, for example if each site $`i`$ is connected with probability $`p`$ to another site $`j`$ chosen with probability $`\propto r^{-\alpha }`$, where $`r`$ is the distance between $`i`$ and $`j`$ and $`\alpha `$ is a free parameter. For $`\alpha \to 0`$, this model is the same as discussed here, while for $`\alpha `$ large enough one would only have short-range connections and thus there would be no logarithmic regime in $`\overline{\ell }`$, even for $`p=1`$. We speculate that the transition in $`p`$ could be shifted to a finite $`p_c`$ for some intermediate $`\alpha `$ values. We are presently working on this extended model.
###### Acknowledgements.
We acknowledge useful discussions with P. M. C. de Oliveira, T. J. P. Penna, D. J. Watts, and M. Newman. This work is supported by CAPES and FAPERJ.
# Dual geometries and spacetime singularities
## I Introduction
To our knowledge, Dicke was the first to raise questions about the physical significance of Riemannian geometry in relativity, due to the arbitrariness in the metric tensor resulting from the indefiniteness in the choice of units of measure. Actually, Brans-Dicke (BD) theory with a changing dimensionless gravitational coupling constant $`Gm^2\sim \varphi ^{-1}`$ ($`m`$ is the inertial mass of some elementary particle and $`\varphi `$ is the scalar BD field, $`\hbar =c=1`$), can be formulated in two equivalent ways, for either $`m`$ or $`G`$ could vary with position in spacetime<sup>*</sup><sup>*</sup>*Generally BD theory can be formulated in an infinite number of equivalent ways since $`m`$ and $`G`$ can both vary with position in an infinite number of ways such as to keep $`Gm^2\sim \varphi ^{-1}`$.. The choice $`G\sim \varphi ^{-1}`$, $`m=const.`$, leads to the Jordan frame (JF) BD formalism, which is based on the Lagrangian:
$$L^{BD}[g,\varphi ]=\frac{\sqrt{-g}}{16\pi }(\varphi R-\frac{\omega }{\varphi }g^{nm}\nabla _n\varphi \nabla _m\varphi )+L_{matter}[g],$$
(1)
where $`R`$ is the curvature scalar, $`\omega `$ is the BD coupling constant, and $`L_{matter}[g]`$ is the Lagrangian density for ordinary matter minimally coupled to the scalar field.
On the other hand, the choice $`m\sim \varphi ^{-\frac{1}{2}}`$, $`G=const.`$, leads to the Einstein frame (EF) BD theory based on the Lagrangian:
$$L^{BD}[\widehat{g},\widehat{\varphi }]=\frac{\sqrt{-\widehat{g}}}{16\pi }(\widehat{R}-(\omega +\frac{3}{2})\widehat{g}^{nm}\widehat{\nabla }_n\widehat{\varphi }\widehat{\nabla }_m\widehat{\varphi })+\widehat{L}_{matter}[\widehat{g},\widehat{\varphi }],$$
(2)
where now, in the EF metric $`\widehat{𝐠}`$, the ordinary matter is nonminimally coupled to the scalar field $`\widehat{\varphi }\equiv \mathrm{ln}\varphi `$ through the Lagrangian density $`\widehat{L}_{matter}[\widehat{g},\widehat{\varphi }]`$.
Both JF and EF formulations of BD gravity are equivalent representations of the same physical situation since they belong to the same conformal class. The EF Lagrangian (1.2) is equivalent to the JF Lagrangian (1.1) in respect to the conformal rescaling of the spacetime metric $`𝐠\widehat{𝐠}=\varphi 𝐠`$. In the coordinate basis this transformation can be written as:
$$\widehat{g}_{ab}=\varphi g_{ab},$$
(3)
where $`\varphi `$ is a nonvanishing smooth function.
The conformal rescaling (1.3) can be interpreted geometrically as a particular transformation of the physical units (a scalar factor applied to the units of time, length and reciprocal mass). Any dimensionless number (for example $`Gm^2`$) is invariant under (1.3). Experimental observations are unchanged too under these transformations since spacetime coincidences are not affected by them, i.e. spacetime measurements are not sensitive to the conformal rescalings of the metric<sup>†</sup><sup>†</sup>†Another way of looking at this is realizing that the experimental measurements deal always with dimensionless numbers and these are unchanged under the transformations of the physical units. For a readable discussion on the dimensionless nature of measurements we recommend section II of reference. This means that, concerning experimental observations, both formulations based on varying $`G`$ (JFBD) and varying $`m`$ (EFBD) respectively are indistinguishable. These are physically equivalent representations of the same physical situation.
The same line of reasoning can be applied to the case suggested by Magnano and Sokolowski, involving the conformally related Lagrangians:
$$L^{GR}[g,\varphi ]=\frac{\sqrt{-g}}{16\pi }(\varphi R-\frac{\omega }{\varphi }g^{nm}\nabla _n\varphi \nabla _m\varphi )+L_{matter}[g,\varphi ],$$
(4)
and
$$L^{GR}[\widehat{g},\widehat{\varphi }]=\frac{\sqrt{-\widehat{g}}}{16\pi }(\widehat{R}-(\omega +\frac{3}{2})\widehat{g}^{nm}\widehat{\nabla }_n\widehat{\varphi }\widehat{\nabla }_m\widehat{\varphi })+\widehat{L}_{matter}[\widehat{g}],$$
(5)
where now, unlike the situation we encountered in usual BD gravity, ordinary matter is minimally coupled in the Einstein frame (magnitudes with hat), while it is nonminimally coupled in the JF. Both Lagrangians (1.4) and (1.5) represent equivalent pictures of the same theory: general relativity (GR). Actually, it can be seen that the theory linked with the Lagrangian (1.5) is just GR with a scalar field as an additional source of gravity. In particular, it can be verified that both the weak equivalence principle (WEP) and the strong equivalence principle (SEP) hold in this case. We shall call the theory derivable from (1.5) Einstein frame general relativity (EFGR), while its conformally equivalent representation based on the Lagrangian (1.4) we call Jordan frame general relativity (JFGR).
The field equations derivable from the Lagrangian (1.5) are:
$$\widehat{G}_{ab}=8\pi \widehat{T}_{ab}+(\omega +\frac{3}{2})(\widehat{\nabla }_a\widehat{\varphi }\widehat{\nabla }_b\widehat{\varphi }-\frac{1}{2}\widehat{g}_{ab}\widehat{g}^{nm}\widehat{\nabla }_n\widehat{\varphi }\widehat{\nabla }_m\widehat{\varphi }),$$
(6)
$$\widehat{\Box }\widehat{\varphi }=0,$$
(7)
and the conservation equation:
$$\widehat{\nabla }_n\widehat{T}^{na}=0,$$
(8)
where $`\widehat{G}_{ab}\equiv \widehat{R}_{ab}-\frac{1}{2}\widehat{g}_{ab}\widehat{R}`$, $`\widehat{\Box }\equiv \widehat{g}^{nm}\widehat{\nabla }_n\widehat{\nabla }_m`$, and $`\widehat{T}_{ab}=\frac{2}{\sqrt{-\widehat{g}}}\frac{\delta }{\delta \widehat{g}^{ab}}(\sqrt{-\widehat{g}}\widehat{L}_{matter})`$ are the components of the stress-energy tensor for ordinary matter.
Now we shall list some features of the JFGR theory that constitute its main disadvantages. The BD scalar field is nonminimally coupled both to scalar curvature and to ordinary matter so the gravitational constant $`G`$ varies from point to point ($`G\sim \varphi ^{-1}`$). At the same time the material test particles don’t follow the geodesics of the geometry since they are acted on by both the metric field and the scalar field. This means that the test particles’ inertial masses vary from point to point in spacetime in such a way as to preserve the constant character of the dimensionless gravitational coupling constant $`Gm^2`$, i.e. $`m\sim \varphi ^{\frac{1}{2}}`$. The most serious objection to the Jordan frame formulation, however, is associated with the fact that the kinetic energy of the scalar field is not positive definite in this frame. This is usually linked with the formulation of the theory in unphysical variables. In section III we shall show that the indefiniteness in the sign of the energy density in the Jordan frame is only apparent. On the contrary, once the scalar field energy density is positive definite in the Einstein frame it is so in the Jordan one.
In the present paper we shall focus on those aspects of the Jordan frame formulation of general relativity with an extra scalar field that represent some advantage of this formulation in respect to its conformal EF formulation. It is respecting the transformation properties of the Lagrangian (1.4) under particular transformations of units and the issue of spacetime singularities. In this frame (JF) $`R_{mn}k^nk^m`$ is negative definite for any non-spacelike vector $`𝐤`$. This means that the relevant singularity theorems may not hold. This is in contradiction with the Einstein frame formulation of GR where $`\widehat{R}_{mn}k^nk^m`$ is non-negative and then the occurence of spacetime singularities is inevitable. Then the singularities that can be present in the EFGR may be smoothed out and, in some cases, avoided in the Jordan frame.
To the best of our knowledge, only the Einstein frame formulation of general relativity (canonical GR and, consequently, the singular Riemann manifolds it leads to) has received attention in the literature. This historical omission is the main motivation for the present work.
The paper has been organized as follows. In Sec. II we present the notion of geometrical duality in BD gravity and GR theory. In Sec. III the JF formulation of general relativity is presented in detail. Secs. IV and V are aimed at the study of particular solutions to GR theory that serve as illustrations of the notion of geometrical duality. For simplicity we shall focus mainly on the value $`\omega =-\frac{3}{2}`$ for the BD coupling constant. In this case EFGR reduces to canonical Einstein’s theory<sup>‡</sup><sup>‡</sup>‡For $`\omega =-\frac{3}{2}`$ in the EF the scalar field is unphysical and doesn’t influence the physics in this frame. In particular the Schwarzschild solution is studied in Sec. IV, while flat Friedman-Robertson-Walker (FRW) cosmology for perfect fluid ordinary matter with a barotropic equation of state is studied in Sec. V. Finally a physical discussion on the meaning of geometrical duality is given in section VI.
## II Geometrical duality
Usually the JF formulation of BD gravity is linked with Riemann geometry. It is directly related to the fact that, in the JFBD formalism, ordinary matter is minimally coupled to the scalar BD field through $`L_{matter}[g]`$ in (1.1). This means that point particles follow the geodesics of the Riemann geometry. This geometry is based upon the parallel transport law $`d\xi ^a=-\gamma _{mn}^a\xi ^mdx^n`$, and the length preservation requirement $`dg(\xi ,\xi )=0`$ where, in the coordinate basis, $`g(\xi ,\xi )=g_{nm}\xi ^n\xi ^m`$, $`\gamma _{bc}^a`$ are the affine connections of the manifold, and $`\xi ^a`$ are the components of an arbitrary vector $`\xi `$.
The above postulates of parallel transport and length preservation in Riemann geometry imply that the affine connections of the manifold coincide with the Christoffel symbols of the metric $`𝐠`$: $`\gamma _{bc}^a=\mathrm{\Gamma }_{bc}^a=\frac{1}{2}g^{an}(g_{nb,c}+g_{nc,b}-g_{bc,n})`$. Under the rescaling (1.3) the above parallel transport law is mapped into:
$$d\xi ^a=-\widehat{\gamma }_{mn}^a\xi ^mdx^n,$$
(9)
where $`\widehat{\gamma }_{bc}^a=\widehat{\mathrm{\Gamma }}_{bc}^a-\frac{1}{2}(\widehat{\nabla }_b\widehat{\varphi }\delta _c^a+\widehat{\nabla }_c\widehat{\varphi }\delta _b^a-\widehat{\nabla }^a\widehat{\varphi }\widehat{g}_{bc})`$ are the affine connections of a Weyl-type manifold given by the length transport law:
$$d\widehat{g}(\xi ,\xi )=dx^n\widehat{\nabla }_n\widehat{\varphi }\,\widehat{g}(\xi ,\xi ).$$
(10)
In this case the affine connections of the manifold don’t coincide with the Christoffel symbols of the metric and, consequently, one can define metric and affine magnitudes and operators on the Weyl-type manifold.
Summing up. Under the rescaling (1.3) Riemann geometry with normal behaviour of the units of measure is mapped into a more general Weyl-type geometry with units of measure varying length in spacetime according to (2.2). At the same time, as shown in section I, JF and EF Lagrangians (of both BD and GR theories) are connected too by the conformal rescaling of the metric (1.3) (together with the scalar field redefinition $`\varphi \widehat{\varphi }=\mathrm{ln}\varphi `$). This means that, respecting conformal transformation (1.3) JF and EF formulations of the theory on the one hand, and Riemann and Weyl-type geometries on the other, form classes of conformal equivalence. These classes of conformal gravity theories on the one hand, and conformal geometries on the other, can be uniquely linked only after the coupling of the matter fields to the metric has been specified.
In BD theory, for example, matter minimally couples in the JF so the test particles follow the geodesics of the Riemann geometry in this frame, i.e. JFBD theory is naturally linked with Riemann geometry. This means that EFBD theory (conformal to JF one) should be linked with the geometry that is conformal to the Riemann one (the Weyl-type geometry). For general relativity with an extra scalar field just the contrary is true. In this case matter minimally couples in the Einstein frame and then test particles follow the geodesics of the Riemann geometry precisely in this frame, i.e. EFGR is naturally linked with Riemann geometry and, consequently Jordan frame GR (conformal to EFGR) is linked with Weyl-type geometry<sup>§</sup><sup>§</sup>§When the matter part of the Lagrangian is not present both BD and GR theories can be interpreted on the grounds of either Riemann or Weyl-type geometry indistinctly. We then reach the conclusion that, in this case, both BD theory and general relativity with an extra scalar field coincide. This degeneration of the geometrical interpretation of gravity can be removed only after setting of the matter content of the theory.
The choice of the unit of length of the geometry is not an experimental issue (for a classical discussion on this subject we refer the reader to ). Moreover, the choice of the spacetime geometry itself is not an experimental issue. We can explain this fact by using a simple argument. The experimental measurements (that always deal with dimensionless numbers) are invariant under the rescaling (1.3) that can be interpreted as a particular units transformation. Then physical experiment is insensitive to the rescaling (1.3). The fact that both Riemann and Weyl-type geometries belong to the same equivalence class in respect to the transformation (1.3) completes this explanation. Actually, this line of reasoning leads that the members in one conformal class are experimentally indistinguishable.
The same is true for the Jordan frame and Einstein frame formulations of the given classical theory of gravity. The choice of one or another representation for the description of the given physical situation is not an experimental issue. Then a statement such like: ’the JF formulation (or any other formulation) of the given theory (BD or GR theory) is the physical one’ is devoid of any physical, i.e. experimentally testable meaning. Such a statement can be taken only as an independent postulate of the theory. This means that the discussion about which conformal frame is the physical one is devoid of interest. It is a non-well-posed question.
An alternative approach can be based on the following postulate. Conformal representations of a given classical theory of gravity are physically equivalent. This postulate leads that the geometrical representation of a given physical situation through general relativity (or BD and Scalar-Tensor(ST) theories in general) produces not just one unique picture of the physical situation but it generates a whole equivalence class of all conformally related pictures. This fact we call as ’geometrical duality’. In this sense Riemann and Weyl-type geometries, for instance, are dual to each other. They provide different geometrical pictures originating from the same physical situation. These different geometrical representations are equally consistent with the observational evidence since they are experimentally indistinguishable. The choice of one or the other picture for the interpretation of the given physical effect is a matter of philosophical prejudice or, may be, mathematical convenience. The word duality is used here in the same context as in , i.e. it has only a semantic meaning and has nothing to do with the notion of duality in string theory.
The rest of this paper is based, precisely, upon the validity of the postulate on the physical equivalence of conformal representations of a given classical theory of gravity. In what follows we shall illustrate the notion of geometrical duality in the context of general relativity with an extra scalar field.
## III Jordan frame general relativity
The formulation of general relativity to be developed in the present section is not a complete geometrical theory. Gravitational effects are described here by a scalar field in a Weyl-type manifold, i.e. the gravitational field shows both tensor (spin-2) and scalar (spin-0) modes. In this representation of the theory the redshift effect, for instance, should be interpreted as due in part to a change of the gravitational potential (the metric coefficients) from point to point in spacetime and, in part, as due to a real change in the energy levels of an atom over the manifoldIt is due to the fact that, in Jordan frame GR the inertial mass of a given particle varies from point to point in spacetime according to: $`m=m_0\varphi ^{\frac{1}{2}}`$, where $`m_0`$ is some constant..
The field equations of the Jordan frame GR theory can be derived, either directly from the Lagrangian (1.4) by taking its variational derivatives respect to the dynamical variables or by conformally mapping eqs.(1.6-1.8) back to the JF metric according to (1.3), to obtain:
$$G_{ab}=\frac{8\pi }{\varphi }T_{ab}+\frac{\omega }{\varphi ^2}(\nabla _a\varphi \nabla _b\varphi -\frac{1}{2}g_{ab}g^{nm}\nabla _n\varphi \nabla _m\varphi )+\frac{1}{\varphi }(\nabla _a\nabla _b\varphi -g_{ab}\Box \varphi ),$$
(11)
and
$$\Box \varphi =0,$$
(12)
where $`T_{ab}=\frac{2}{\sqrt{g}}\frac{}{g^{ab}}(\sqrt{g}L_{matter})`$ is the stress-energy tensor for ordinary matter in the Jordan frame. The energy is not conserved because the scalar field $`\varphi `$ exchanges energy with the metric and with the matter fields. The corresponding dynamic equation is:
$$\nabla _nT^{na}=\frac{1}{2}\varphi ^{-1}\nabla ^a\varphi \,T,$$
(13)
The equation of motion of an uncharged, spinless mass point that is acted on by both the JF metric field $`𝐠`$ and the scalar field $`\varphi `$,
$$\frac{d^2x^a}{ds^2}=-\mathrm{\Gamma }_{mn}^a\frac{dx^m}{ds}\frac{dx^n}{ds}-\frac{1}{2}\varphi ^{-1}\nabla _n\varphi (\frac{dx^n}{ds}\frac{dx^a}{ds}-g^{an}),$$
(14)
does not coincide with the geodesic equation of the JF metric. This (together with the more complex structure of the equation (3.1) for the metric field in respect to the corresponding equation (1.6)) introduces additional complications in the dynamics of the matter fields.
We shall point out that the different solutions to the wave equation (3.2) generate different Weyl-type geometrical pictures that are dual to the Einstein frame one.
One of the most salient features of the Jordan frame GR theory is that, in general, the energy conditions do not hold due, on the one hand to the term with the second covariant derivative of the scalar field in the righthand side (r.h.s.) of eq.(3.1) and, on the other to the constant factor in the second term that can take negative values. This way the r.h.s. of eq.(3.1) may be negative definite leading that some singularity theorems may not hold and, as a consequence, spacetime singularities that can be present in canonical Riemannian GR (given by eqs.(1.5-1.8)), in Weyl-type GR (JFGR) spacetimes may become avoidable.
In the following sections we shall illustrate this feature of GR theory in some typical situations where the BD coupling constant is taken to be $`\omega =-\frac{3}{2}`$.<sup>¶</sup><sup>¶</sup>¶In BD gravity the case with $`\omega =-\frac{3}{2}`$ can’t be studied because it renders the field equations of the theory undefined. In this case, in the EF the scalar field stress-energy tensor ($`\frac{\varphi }{8\pi }`$ times the second term in the right-hand side of eq.(1.6)) vanishes so we recover the canonical Einstein’s GR theory with ordinary matter as the only source of gravity <sup>\**</sup><sup>\**</sup>\**Usual Einstein’s GR theory can be approached as well if we set $`\varphi =const.`$.. Then the EF scalar field $`\widehat{\varphi }`$ (fulfilling the field equation (1.7)) is a non-interacting (neither with matter nor with curvature), massless, uncharged, and spinless ’ghost’ field (it is an unphysical field). Nevertheless it influences the physics in the JF. Then its functional form in the EF must be taken into account. For $`\omega >-\frac{3}{2}`$, $`\widehat{\varphi }`$ is a physical field in the EF.
The fact that the Jordan frame formulation does not lead to a well defined energy-momentum tensor for the scalar field is the most serious objection to this representation of the theory. For this reason we shall briefly discuss this point. The kinetic energy of the JF scalar field is negative definite or indefinite, unlike the Einstein frame where for $`\omega >-\frac{3}{2}`$ it is positive definite. This implies that the theory does not have a stable ground state (which is necessary for a viable theory of classical gravity), implying that it is formulated in unphysical variables.
We shall point out that, although in this frame the r.h.s. of eq.(3.1) does not have a definite sign (implying that some singularity theorems may not hold), the scalar field stress-energy tensor can be given the canonical form. In fact, as pointed out in reference , the terms with the second covariant derivatives of the scalar field contain the connection, and hence a part of the dynamical description of gravity. For instance, a new connection was presented in that leads to a canonical form of the scalar field stress-energy tensor in the JF.
We can obtain the same result as in ref. if we rewrite equation (3.1) in terms of affine magnitudes in the Weyl-type manifold (see section II). In this case the affine connections of the JF (Weyl-type) manifold $`\gamma _{bc}^a`$ are related to the Christoffel symbols of the JF metric through: $`\gamma _{bc}^a=\mathrm{\Gamma }_{bc}^a+\frac{1}{2}\varphi ^{-1}(\nabla _b\varphi \delta _c^a+\nabla _c\varphi \delta _b^a-\nabla ^a\varphi g_{bc})`$. We can define the ’affine’ Einstein tensor $`{}^{\gamma }G_{ab}`$ given in terms of the affine connections of the manifold $`\gamma _{bc}^a`$ instead of the Christoffel symbols of the Jordan frame metric $`\mathrm{\Gamma }_{bc}^a`$. Equation (3.1) can then be rewritten as:
$${}^{\gamma }G_{ab}=\frac{8\pi }{\varphi }T_{ab}+\frac{(\omega +\frac{3}{2})}{\varphi ^2}(\nabla _a\varphi \nabla _b\varphi -\frac{1}{2}g_{ab}g^{nm}\nabla _n\varphi \nabla _m\varphi ),$$
(15)
where now $`\frac{\varphi }{8\pi }`$ times the second term in the r.h.s. of this equation has the canonical form for the scalar field stress-energy tensor. We shall call this as the ’true’ stress-energy tensor for $`\varphi `$, while $`\frac{\varphi }{8\pi }`$ times the sum of the 2nd and 3rd terms in the r.h.s. of eq.(3.1) we call as the ’effective’ stress-energy tensor for the BD scalar field $`\varphi `$. Then once the scalar field energy density is positive definite in the Einstein frame it is so in the Jordan frame. This way the main physical objection against this formulation of general relativity is removed.
Another remarkable feature of the Jordan frame GR theory is that it is invariant in form under the following conformal transformations (as pointed out in ref. these can be interpreted as transformations of physical units):
$$\stackrel{~}{g}_{ab}=\varphi ^2g_{ab},$$
(16)
$$\stackrel{~}{\varphi }=\varphi ^{-1},$$
(17)
and
$$\stackrel{~}{g}_{ab}=fg_{ab},$$
(18)
$$\stackrel{~}{\varphi }=f^{-1}\varphi ,$$
(19)
where $`f`$ is some smooth function given on the manifold. In both cases the invariance in form of the equations (3.1-3.4) can be verified by direct substitution of (3.6) and (3.7) or (3.8) and (3.9) in these equations. Also Jordan frame GR based on the Lagrangian (1.4) is invariant in respect to the more general rescaling (first presented in ):
$$\stackrel{~}{g}_{ab}=\varphi ^{2\alpha }g_{ab},$$
(20)
and the scalar field redefinition:
$$\stackrel{~}{\varphi }=\varphi ^{1-2\alpha }.$$
(21)
This transformation is accompanied by a redefinition of the BD coupling constant:
$$\stackrel{~}{\omega }=\frac{\omega -6\alpha (\alpha -1)}{(1-2\alpha )^2},$$
(22)
with $`\alpha \ne \frac{1}{2}`$. The case $`\alpha =\frac{1}{2}`$ constitutes a singularity in the transformations (3.10-3.12).<sup>††</sup><sup>††</sup>††It should be stressed that the full Jordan frame general relativity theory is invariant with respect to transformations (3.10-3.12). In Jordan frame Brans-Dicke gravity, on the contrary, the presence of ordinary matter with $`T\equiv T_n^n\ne 0`$ breaks this symmetry.
The conformal invariance of a given theory of gravitation (i.e. its invariance under a particular transformation of physical units) is a very desirable feature of any classical gravity theory that would correctly describe our real world in the large scale. As it was pointed out by Dicke, it is obvious that the particular values of the units of mass, length and time employed are arbitrary so the physical laws must be invariant under these transformations. This simple argument suggests that the Jordan frame formulation of general relativity with an extra scalar field that is based on the Lagrangian (1.4) is a better candidate for such ultimate classical theory of gravitation than the other classical theories that are given by the Lagrangians (1.1), (1.2) and (1.5) respectively. In fact, the Lagrangian (1.4) is invariant in respect to the particular transformations of the physical units studied here ((3.6,3.7), (3.8,3.9) and (3.10-3.12)) while the Lagrangians (1.1), (1.2) and (1.5) are not invariant under these transformations.
In the following section we shall discuss on geometrical duality among singular Schwarzschild (EF) vacuum solution and the corresponding non singular JF solution and, in section V, we shall illustrate this kind of duality for flat, perfect fluid Friedman - Robertson - Walker (FRW) cosmologies. A similar discussion on conformal transformations between singular and non singular spacetimes in the low-energy limit of string theory can be found in for axion - dilaton black hole solutions in $`D=4`$ and in for classical FRW axion - dilaton cosmologies<sup>‡‡</sup><sup>‡‡</sup>‡‡References were pointed out to us by D. Wands. For spurious black hole in the classical approximation see .
## IV Geometrical duality and Schwarzschild black hole
In this section, for simplicity, we shall be interested in the static, spherically symmetric solution to Riemannian general relativity (Einstein frame GR) with $`\omega =-\frac{3}{2}`$ for material vacuum, and in its dual Weyl-type picture (Jordan frame GR). In the EF the field equations (1.6-1.8) can be written, in this case, as:
$`\widehat{R}_{ab}=0,`$ (23)
$`\widehat{\Box }\widehat{\varphi }=0.`$ (24)
The corresponding solution, in Schwarzschild coordinates, looks like ($`d\mathrm{\Omega }^2=d\theta ^2+\mathrm{sin}^2\theta d\phi ^2`$):
$$d\widehat{s}^2=(1-\frac{2m}{r})dt^2-(1-\frac{2m}{r})^{-1}dr^2-r^2d\mathrm{\Omega }^2,$$
(25)
and
$$\widehat{\varphi }=q\mathrm{ln}(1-\frac{2m}{r}),$$
(26)
where $`m`$ is the mass of the point, located at the coordinate beginning, that generates the gravitational field and $`q`$ is an arbitrary real parameter. As seen from eq.(4.2) the static, spherically symmetric solution to eq.(4.1) is just the typical Schwarzschild black hole solution for vacuum. The corresponding solution for JFGR can be found with the help of the conformal rescaling of the metric (1.3) and the scalar field redefinition $`\varphi =e^{\widehat{\varphi }}=(1\frac{2m}{r})^q`$:
$$ds^2=(1-\frac{2m}{r})^{1-q}dt^2-(1-\frac{2m}{r})^{-1-q}dr^2-\rho ^2d\mathrm{\Omega }^2,$$
(27)
where we have defined the proper radial coordinate $`\rho =r(1\frac{2m}{r})^{\frac{q}{2}}`$. In this case the curvature scalar is given by:
$$R=-\frac{3}{2}\varphi ^{-2}g^{nm}\nabla _n\varphi \nabla _m\varphi =\frac{6m^2q^2}{r^4}(1-\frac{2m}{r})^{q-1}.$$
(28)
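The second equality in Eq. (4.5) is a purely algebraic statement and can be checked symbolically. The sketch below assumes the (+,−,−,−) conventions used in this section, in which the only inverse metric component that enters (since $`\varphi `$ depends only on $`r`$) is $`g^{rr}=-(1-2m/r)^{1+q}`$; it verifies only this identity, not the field equations themselves.

```python
# Symbolic check of the second equality in Eq. (4.5) for the metric (4.4).
import sympy as sp

r, m, q = sp.symbols('r m q', positive=True)
f = 1 - 2*m/r
phi = f**q
g_rr_inv = -f**(1 + q)                 # g^{rr} for the metric (4.4), (+,-,-,-)
lhs = -sp.Rational(3, 2) * phi**-2 * g_rr_inv * sp.diff(phi, r)**2
rhs = 6*m**2*q**2 / r**4 * f**(q - 1)
print(sp.simplify(lhs - rhs))          # -> 0
```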
The real parameter $`q`$ labels different spacetimes ($`M,g_{ab}^{(q)},\varphi ^{(q)}`$), so we obtained a class of spacetimes {$`(M,g_{ab}^{(q)},\varphi ^{(q)})/q\mathrm{}`$} that belong to a bigger class of known solutions. These known solutions are given, however, for an arbitrary value of the coupling constant $`\omega `$.
We shall outline the more relevant features of the solution given by (4.4). For the range $`-\infty <q<1`$ the Ricci curvature scalar (4.5) shows a curvature singularity at $`r=2m`$. For $`-\infty <q<0`$ this represents a timelike, naked singularity at the origin of the proper radial coordinate $`\rho =0`$. We shall drop these spacetimes for they are not compatible with the cosmic censorship conjecture. The situation with $`q=0`$ is trivial. In this case the conformal transformation (1.3) coincides with the identity transformation that leaves the theory in the same frame. For $`q>0`$, the limiting surface $`r=2m`$ has the topology of a spatial infinity so, in this case, we obtain a class of spacetimes with two asymptotic spatial infinities<sup>\**</sup><sup>\**</sup>\**For $`0<q<1`$ the spatial infinity at $`r=\infty `$ is Ricci flat while the one at $`r=2m`$ is singular. When $`q\ge 1`$ both spatial infinities are Ricci flat, one at $`r=\infty `$ and the other at $`r=2m`$, joined by a wormhole with a throat radius $`r=(2+q)m`$, or the invariant surface determined by $`\rho _{min}=q(1+\frac{2}{q})^{1+\frac{q}{2}}m`$. The wormhole is asymmetric under the interchange of the two asymptotic regions ($`r=\infty `$ and $`r=2m`$).
This way, Weyl-type spacetimes dual to the Riemannian Schwarzschild black hole one (line element (4.2)) are given by the class {$`(M,g_{ab}^{(q)},\varphi ^{(q)})/q>0`$} of wormhole (singularity free)spacetimes.
Although in the present paper we are interested in the particular value $`\omega =\frac{3}{2}`$ of the BD coupling constant, it will interesting however to discuss, briefly, what happen for $`\omega >\frac{3}{2}`$. In this case there is a physical scalar in the Einstein frame (see eq.(1.6)). The corresponding EF solution to eqs.(1.6) and (1.7) is given by:
$$d\widehat{s}^2=(1-\frac{2m}{pr})^pdt^2-(1-\frac{2m}{pr})^{-p}dr^2-\widehat{\rho }^2d\mathrm{\Omega }^2,$$
(29)
and
$$\widehat{\varphi }=q\mathrm{ln}(1-\frac{2m}{pr}),$$
(30)
where $`p^2+(2\omega +3)q^2=1`$, $`p>0`$. For non-exotic scalar matter ($`\omega \ge -\frac{3}{2}`$), $`0<p\le 1`$. In eq.(4.6) we have used the definition $`\widehat{\rho }=(1-\frac{2m}{pr})^{\frac{1-p}{2}}r`$ for the EF proper radial coordinate. There is a time-like curvature singularity at $`r=\frac{2m}{p}`$, so the horizon is shrunk to a point. Then in the EF the validity of the cosmic censorship hypothesis and, correspondingly, the occurrence of a black hole are uncertain.
The JF solution conformally equivalent to (4.6) is given by:
$$ds^2=(1-\frac{2m}{pr})^{p-q}dt^2-(1-\frac{2m}{pr})^{-p-q}dr^2-\rho ^2d\mathrm{\Omega }^2,$$
(31)
where the JF proper radial coordinate $`\rho =r(1-\frac{2m}{pr})^{\frac{1-p-q}{2}}`$ was used. In this case, when $`\omega `$ is in the range $`0<\omega +3<\frac{1+p}{2(1-p)}`$, the Weyl-type JF geometry shows again two asymptotic spatial infinities joined by a wormhole. The particular value $`p=1`$ corresponds to the case of interest $`\omega =-\frac{3}{2}`$.
The singularity-free character of the Weyl-type geometry should be tested with the help of a test particle that is acted on by the JF metric in eq.(4.8) and by the scalar field $`\varphi =(1-\frac{2m}{pr})^q`$. Consider the radial motion of a time-like test particle ($`d\mathrm{\Omega }^2=0`$). In this case the time-component of the motion equation (3.4) can be integrated to give:
$$\dot{t}^2=C_1^2(1-\frac{2m}{pr})^{q-2p},$$
(32)
where $`C_1^2`$ is an integration constant and the overdot means derivative with respect to the JF proper time $`\tau `$ ($`d\tau ^2=-ds^2`$). The integration constant can be obtained with the help of the following initial conditions: $`r(0)=r_0`$, $`\dot{r}(0)=0`$, meaning that the test particle starts from rest at $`r=r_0`$. We obtain $`C_1^2=(1-\frac{2m}{pr_0})^p`$. Then the proper time taken for the particle to go from $`r=r_0`$ to a point with Schwarzschild radial coordinate $`r`$ ($`\frac{2m}{p}<r<r_0`$) is given by:
$$\tau =\int _r^{r_0}\frac{r^{\frac{q}{2}}dr}{\sqrt{(1-\frac{2m}{pr_0})^p-(1-\frac{2m}{pr})^p}\,(r-\frac{2m}{p})^{\frac{q}{2}}}.$$
(33)
While deriving this equation we have used eq.(4.8) written as: $`1=(1-\frac{2m}{pr})^{p-q}\dot{t}^2-(1-\frac{2m}{pr})^{-p-q}\dot{r}^2`$. The integral in the r.h.s. of eq.(4.10) can be evaluated to obtain (for $`q\ne 2`$):
$$\tau >\frac{(\frac{2m}{p})^{\frac{q}{2}}(1-\frac{2m}{pr_0})^{-\frac{p}{2}}}{1-\frac{q}{2}}[(r_0-\frac{2m}{p})^{1-\frac{q}{2}}-(r-\frac{2m}{p})^{1-\frac{q}{2}}],$$
(34)
and
$$\tau >\frac{2m}{p}\mathrm{ln}(\frac{r_0-\frac{2m}{p}}{r-\frac{2m}{p}}),$$
(35)
for $`q=2`$. For $`q\ge 2`$ the proper time taken by the test particle to go from $`r=r_0`$ to $`r=\frac{2m}{p}`$ is infinite, showing that the particle can never reach this surface (the second spatial infinity of the wormhole). Then the time-like test particle does not see any singularity.
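A quick numerical check of this statement can be made with the integrand of Eq. (4.10) as reconstructed above; the units $`m=p=1`$ (so the limiting surface sits at $`r=2`$) and the starting radius $`r_0=10`$ are arbitrary choices made only for this sketch.

```python
import numpy as np
from scipy.integrate import quad

m, p, r0 = 1.0, 1.0, 10.0
rs = 2 * m / p                                    # the surface r = 2m/p
f = lambda r: 1 - 2 * m / (p * r)

def tau(r_low, q):
    """Proper time to fall from r0 down to r_low (Eq. (4.10), reconstructed form)."""
    integrand = lambda r: r**(q / 2) / (
        np.sqrt(f(r0)**p - f(r)**p) * (r - rs)**(q / 2))
    return quad(integrand, r_low, r0, limit=200)[0]

for q in (1.0, 2.0, 3.0):
    print("q =", q, [round(tau(rs + eps, q), 1) for eps in (1e-2, 1e-4, 1e-6)])
# q = 1: the values converge as eps -> 0; q = 2: they grow ~logarithmically in 1/eps;
# q = 3: they blow up as a power of 1/eps, so the surface is never reached.
```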
If we consider the scalar field $`\varphi `$ as a perfect fluid then we find that its ’true’ energy density (the (0,0) component of $`\frac{\varphi }{8\pi }`$ times the second term in the r.h.s. of eq.(3.5)) as measured by a comoving observer is given by:
$$\mu ^\varphi =\frac{2m^2q^2(\omega +\frac{3}{2})}{8\pi p^2r^4}(1-\frac{2m}{pr})^{p+2q-2},$$
(36)
while its ’effective’ energy density (the (0,0) component of $`\frac{\varphi }{8\pi }`$ times the sum of the second and third terms in the r.h.s. of eq.(3.1)) is found to be:
$$\mu _{eff}^\varphi =\frac{2m^2q(q(\omega +1)-p)}{8\pi p^2r^4}(1-\frac{2m}{pr})^{p+2q-2}.$$
(37)
These are everywhere non-singular for $`q\ge \frac{2-p}{2}`$ ($`0<p\le 1`$) in the range $`\frac{2m}{p}\le r<\infty `$. The 'true' BD scalar field energy density $`\mu ^\varphi `$ is everywhere positive definite for $`\omega >-\frac{3}{2}`$, for all $`q`$ and $`0<p\le 1`$. This means that the scalar matter is non-exotic and shows a non-singular behaviour everywhere in the given range of the parameters involved. The scalar field 'effective' energy density $`\mu _{eff}^\varphi `$ is everywhere positive definite only for $`q>\frac{p}{\omega +1}`$.
Summing up. With the help of time-like test particles that are acted on by both the metric field and the scalar field we can test the absence of singularities (and black holes) in Weyl-type spacetimes of the class {$`(M,g_{ab}^{(q)},\varphi ^{(q)})/q\ge 2`$}. These are dual to the Riemannian (singular) spacetimes ($`M,\widehat{g}_{ab}`$) given by (4.2). The pictures with and without a singularity are different, but physically equivalent (dual), geometrical representations of the same physical situation. Experimental evidence for the existence of a black hole (enclosing a singularity), obtained when experimental data are interpreted on the grounds of Riemann geometry (naturally linked with Einstein frame GR theory with $`\omega =-\frac{3}{2}`$), can serve, at the same time, as evidence for the existence of a wormhole when the same experimental data are interpreted on the grounds of the Weyl-type geometry (linked with Jordan frame GR) dual to it.
Although the wormhole picture is not simpler than its conformal black hole counterpart, it is more viable because these geometrical objects (Jordan frame wormholes) are invariant under the transformations (3.6)-(3.12), which can be interpreted as particular transformations of physical units. As noted by Dicke, these transformations should not influence the physics if the theory is correct. The Einstein frame Schwarzschild black hole, for its part, does not possess this invariance. More discussion on this point will be given in section VI.
## V Geometrical duality in cosmology
Other illustrations of the notion of geometrical duality come from cosmology. In the Einstein frame the FRW line element for flat space can be written as:
$$d\widehat{s}^2=-dt^2+\widehat{a}(t)^2(dr^2+r^2d\mathrm{\Omega }^2),$$
(38)
where $`\widehat{a}(t)`$ is the EF scale factor. Suppose the universe is filled with a perfect-fluid-type matter with the barotropic equation of state (in the EF): $`\widehat{p}=(\gamma -1)\widehat{\mu }`$, $`0<\gamma <2`$. Taking into account the line element (5.1) and the barotropic equation of state, the field equation (1.6) can be simplified to the following equation for determining the EF scale factor:
$$(\frac{\dot{\widehat{a}}}{\widehat{a}})^2=\frac{8\pi }{3}\frac{(C_2)^2}{\widehat{a}^{3\gamma }},$$
(39)
while, after integrating eq.(1.7) once, we obtain for the EF scalar:
$$\dot{\widehat{\varphi }}=\frac{C_1}{\widehat{a}^3},$$
(40)
where $`C_1`$ and $`C_2`$ are arbitrary integration constants. The solution to eq.(5.2) is found to be:
$$\widehat{a}(t)=(A)^{\frac{2}{3\gamma }}t^{\frac{2}{3\gamma }},$$
(41)
where $`A\equiv \sqrt{6\pi }\gamma C_2`$. Integrating eq.(5.3) gives:
$$\widehat{\varphi }^\pm (t)=\widehat{\varphi }_0\mp Bt^{1-\frac{2}{\gamma }},$$
(42)
where $`B\equiv \frac{\gamma C_1}{(2-\gamma )A^{\frac{2}{\gamma }}}`$.
The JF scale factor $`a^\pm (t)=\widehat{a}(t)\mathrm{exp}[-\frac{1}{2}\widehat{\varphi }^\pm (t)]`$ is given by the following expression:
$$a^\pm (t)=\frac{A^{\frac{2}{3\gamma }}}{\sqrt{\varphi _0}}t^{\frac{2}{3\gamma }}\mathrm{exp}[\pm \frac{B}{2}t^{1-\frac{2}{\gamma }}].$$
(43)
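A minimal numerical check of the bounce discussed below: in the '+' branch the JF scale factor above diverges both for $`t\to 0`$ and $`t\to +\infty `$, so it must pass through a single minimum at finite $`t`$. The parameter values used in this sketch ($`\gamma =1`$, $`C_1=C_2=\varphi _0=1`$) are arbitrary and purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

gamma, C1, C2, phi0 = 1.0, 1.0, 1.0, 1.0
A = np.sqrt(6 * np.pi) * gamma * C2
B = gamma * C1 / ((2 - gamma) * A**(2 / gamma))

def a_plus(t):
    """'+' branch of the JF scale factor."""
    return (A**(2 / (3 * gamma)) / np.sqrt(phi0)) * t**(2 / (3 * gamma)) \
        * np.exp(0.5 * B * t**(1 - 2 / gamma))

res = minimize_scalar(a_plus, bracket=(1e-3, 0.1, 10.0))
print("bounce at t* =", res.x, "with minimum scale factor a* =", res.fun)
```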
The proper time $`t`$ in the EF and $`\tau `$ in the JF are related through:
$$(\tau -\tau _0)^\pm =\frac{1}{\sqrt{\varphi _0}}\int \mathrm{exp}[\pm \frac{B}{2}t^{1-\frac{2}{\gamma }}]dt.$$
(44)
For large $`t`$ ($`t\to +\infty `$) this gives $`(\tau -\tau _0)^\pm \simeq t`$. Then $`t\to +\infty `$ implies $`\tau \to +\infty `$ for both the '+' and '-' branches of our solution, given by the choice of the '+' and '-' signs in eq.(5.5).
For $`t\to 0`$, the r.h.s. of eq.(5.7) can be transformed into:
$$\frac{\gamma }{\sqrt{\varphi _0}(2-\gamma )}\int \frac{\mathrm{exp}[\pm Dx]}{x^{\frac{2}{2-\gamma }}}dx,$$
(45)
where we have defined $`x\equiv t^{1-\frac{2}{\gamma }}`$ and $`D\equiv \frac{\gamma C_1}{2(\sqrt{6\pi }\gamma C_2)^{\frac{2}{\gamma }}(2-\gamma )}`$. If we take the '-' sign in the exponent under the integral (5.8) then, for $`t\to 0`$ ($`x\to \infty `$), $`\tau \to \tau _0`$. If we take the '+' sign, on the other hand, the integral (5.8) diverges for $`t\to 0`$, so $`\tau \to -\infty `$ in this last case.
In the ’-’ branch of our solution the evolution of the universe in the Jordan frame is basically the same as in the Einstein frame. The flat FRW perfect-fluid-filled universe evolves from a cosmological singularity at the beginning of time $`t=0`$ ($`\tau =\tau _0`$ in the JF) into an infinite size universe at the infinite future $`t=+\mathrm{}`$ ($`\tau =+\mathrm{}`$ in the JF). It is the usual picture in canonical general relativity where the cosmological singularity is unavoidable.
However, in the ’+’ branch of the solution the JF flat FRW perfect-fluid-filled universe evolves from an infinite size at the infinite past ($`\tau =\mathrm{}`$) into an infinite size at the infinite future ($`\tau =+\mathrm{}`$) through a bounce at $`t^{}=[\frac{3}{4}\frac{\gamma C_1}{(\sqrt{6\pi }\gamma C_2)^{2\gamma }}]^{\frac{\gamma }{2\gamma }}`$ where it reaches its minimum size $`a^{}=\frac{1}{\sqrt{\varphi _0}}[\sqrt{\frac{3}{32\pi }}\frac{C_1}{C_2}e]^{\frac{2}{3(2\gamma )}}`$. Then the Jordan frame universe is free of the cosmological singularity unlike the Einstein frame universe where the cosmological singularity is unavoidable. The more general case of arbitrary $`\omega >\frac{3}{2}`$ is studied in .
If we model the JF scalar field $`\varphi `$ as a perfect fluid then, in the Jordan frame its ’true’ energy density (as measured by a comoving observer) will be given by the following expression:
$$\mu _\pm ^\varphi =\frac{(\omega +\frac{3}{2})(C_1\varphi _0)^2}{16\pi A^{\frac{4}{\gamma }}t^{\frac{4}{\gamma }}}\mathrm{exp}[\mp 2Bt^{1-\frac{2}{\gamma }}],$$
(46)
while the ’effective’ energy density of $`\varphi `$ as seen by a cosmological observer is given by:
$$\mu _{eff,\pm }^\varphi =\frac{(\omega +3)(C_1\varphi _0)^2}{16\pi A^{\frac{4}{\gamma }}t^{\frac{4}{\gamma }}}\mathrm{exp}[\mp 2Bt^{1-\frac{2}{\gamma }}](1-\frac{4A^{\frac{2}{\gamma }}t^{\frac{2}{\gamma }-1}}{(\omega +3)\gamma C_1}).$$
(47)
In the ’+’ branch of the JF solution both $`\mu ^\varphi `$ and $`\mu _{eff}^\varphi `$ are finite for all times. In this case the ’true’ energy density (equation (5.9)) evolves from zero value at $`t=0`$ ($`\tau =\mathrm{}`$) into a zero value at the infinite future ($`\tau =+\mathrm{}`$), through a maximum (finite) value at some intermediate time. It is positive definite for all times. This means that the scalar matter is non-exhotic and non-singular for all times. The ’effective’ scalar field energy density (5.10) evolves from zero at $`t=0`$ ($`\tau =\mathrm{}`$) into zero at $`t^{}=[(\omega +1)(2\gamma )]^{\frac{\gamma }{2\gamma }}`$ through a maximum (finite value) at some prior time. In this range of times $`\mu _{eff}^\varphi `$ is positive definite. Then it further evolves from a zero value at $`t^{}`$ into a zero value at $`t=+\mathrm{}`$ ($`\tau =+\mathrm{}`$) through a maximum absolute value at some time $`t^{}<t<+\mathrm{}`$. In this range of times $`\mu _{eff}^\varphi `$ is negative definite.
For the perfect fluid barotropic ordinary matter we found that the energy density in the ’plus’ branch of the Jordan frame solution is given by:
$$\mu =(\frac{C_2}{A})^2\frac{\varphi _0}{t^2}\mathrm{exp}[-Bt^{1-\frac{2}{\gamma }}].$$
(48)
It evolves from zero at $`t=0`$ ($`\tau =-\infty `$) into zero density at $`t=+\infty `$ ($`\tau =+\infty `$) through a maximum value $`\mu ^{*}=e^2[A^{6\gamma }(\frac{2(2-\gamma )}{\gamma C_1})^{2-\gamma }]^{\frac{1}{2-\gamma }}`$ at $`t^{*}=(\frac{\gamma C_1}{2(2-\gamma )})^{\frac{\gamma }{2-\gamma }}\frac{1}{A^{\frac{2}{2-\gamma }}}`$, i.e. it is bounded for all times. This means that the energy density as measured by a comoving observer is never singular (this is true for the sum of (5.9) and (5.11) as well as for the sum of (5.10) and (5.11)).
## VI The Jordan frame or the Einstein frame after all?
In this section we discuss the physical implications of the viewpoint developed in the present paper. Our proposal is based on the postulate that the different conformal formulations of general relativity are physically equivalent. Among these conformal representations of GR, the Jordan frame and the Einstein frame formulations are distinguished.
For the purpose of the present discussion we shall take the static, spherically symmetric solution presented in section IV. In the Einstein frame for $`\omega =-\frac{3}{2}`$ this is the typical Schwarzschild black hole solution. A time-like singularity at the origin of the Schwarzschild radial coordinate is enclosed by an event horizon at $`r=2m`$. To a distant observer, an observer falling into the black hole asymptotically approaches the event horizon but never crosses the surface $`r=2m`$. The same is true for a distant observer in the Jordan frame, because spacetime coincidences are not affected by a conformal transformation of the metric. This means that, to a distant observer, the black hole never forms, either in the EF or in the JF. The situation is dramatically different for an observer falling into the black hole (in the JF we have a wormhole instead of a black hole). In the Einstein frame he crosses the event horizon and inevitably hits the singularity at $`r=0`$ in a finite proper time. By contrast, in the Jordan frame this observer never sees any singularity (he never crosses the surface $`r=2m`$). For $`\omega >-\frac{3}{2}`$, in the Einstein frame, to a distant observer, an observer falling into the singular point $`r=\frac{2m}{p}`$ will reach it in a finite time. In this case the singularity at $`\frac{2m}{p}`$ is naked, so it is seen by the distant observer. The same is true for a distant observer in the Jordan frame. He finds that the observer falling into the surface $`r=\frac{2m}{p}`$ (the beginning of the JF proper radial coordinate $`\rho =0`$) will reach it in a finite time. However, in this case, the surface $`r=\frac{2m}{p}`$ is non-singular for $`q\ge 2-p`$ ($`0<p\le 1`$) because the curvature scalar $`R=\frac{6m^2q^2}{p^2r^4}(1-\frac{2m}{pr})^{p+q-2}`$ is finite at $`\frac{2m}{p}`$. In the Einstein frame, the observer falling into the singular point $`r=\frac{2m}{p}`$ hits the singularity in a finite proper time. In the Jordan frame the falling observer never meets any singularity. Moreover, it takes the falling observer an infinite proper time to reach the surface $`r=\frac{2m}{p}`$.
Summing up. The physics as seen by a distant observer is the same in the Einstein and in the Jordan frames, since spacetime coincidences are unchanged under the conformal rescaling of the metric. By contrast, the physics seems dramatically different to the falling observer: in the Einstein frame he hits the singularity in a finite proper time, while in the Jordan frame he never meets any singularity. This is very striking because, according to our proposal, both situations are physically equivalent (they are equally consistent with the observational evidence). However, the falling observer is part of the physical reality, and the physical reality is unique (this is a prime postulate of physics).
We cannot pretend to give a final answer to this paradoxical situation since we feel it is a very deep question. We shall, however, offer some conjectures on this subject. Two explanations of this striking situation are possible. The first is based on the fact that Einstein's theory is a classical theory of spacetime, and near the singular point we need a quantum theory (such a theory has not been well established at present). When a viable quantum gravity theory is worked out, it may be that this singularity is removed. In the Jordan frame no singularity occurs (for $`q\ge 2`$) and, consequently, we do not need any quantum theory for describing gravitation. This explanation is in agreement with a point of view developed in reference . According to this viewpoint, to bring the quantum effects into the classical gravity theory one needs to make only a conformal transformation. If we start with Einstein's classical theory of gravitation, then we can incorporate the quantum effects of matter by simply making a conformal transformation into, say, the Jordan frame. In this sense the Jordan frame formulation of general relativity already contains the quantum effects (Jordan frame GR represents a unified description of both gravity and the quantum effects of matter).
The second possibility is more radical and has already been outlined above in this paper. The Einstein frame formulation is not invariant under the particular transformations of the units of time, length and mass studied in section III. This is very striking, since the physical laws should be invariant under transformations of units. By contrast, the Jordan frame formulation of general relativity is invariant with respect to these transformations. This means that the picture without singularities is more viable than the one with them, i.e. spacetime singularities are not physical. They are fictitious entities due to a wrong choice of the formulation of the theory.
We recall that these are just conjectures, and we hope to discuss this point further in future work.
ACKNOWLEDGEMENTS
We thank A. G. Agnese and D. Wands for helpful comments. We also acknowledge the anonymous referees for recommendations and criticism, and MES of Cuba for financial support.
# Entanglement-assisted local manipulation of pure quantum states
## Abstract
We demonstrate that local transformations on a composite quantum system can be enhanced in the presence of certain entangled states. These extra states act much like catalysts in a chemical reaction: they allow otherwise impossible local transformations to be realised, without being consumed in any way. In particular, we show that this effect can considerably improve the efficiency of entanglement concentration procedures for finite states.
The rapid development of quantum information processing in recent years has led us to view quantum-mechanical entanglement as a useful physical resource . As with any such resource, there arises naturally the question of how it can be quantified and manipulated. Attempts have been made to find meaningful measures of entanglement , and also to uncover the fundamental laws of its behaviour under local quantum operations and classical communication (LQCC) . These laws are fundamentally and also practically important, since many applications of quantum information processing involve spatially separated parties who must manipulate an entangled state without performing joint operations. In this context, it is generally assumed that entanglement may be used to perform useful tasks only if it is consumed in whole or in part. Indeed, this is implicit in the common-sense notion of a “resource”.
In this Letter we demonstrate that entanglement is, in fact, a stranger kind of resource, one that can be used without being consumed at all. More precisely, we show that the mere presence of an entangled state can allow distant parties to realise local transformations that would otherwise be impossible, or less efficient. Our idea is best introduced by the following situation, illustrated in Fig. 1. Imagine that Alice and Bob share a finite-dimensional entangled state $`|\psi _1`$ of two particles, which they would like to convert, using only LQCC, into the state $`|\psi _2`$. For some choices of $`|\psi _1`$ and $`|\psi _2`$ there exists a local protocol that accomplishes this task with certainty , but for others it can only be done probabilistically, with some maximum probability $`p_{max}<1`$ . Assume the latter is the case, as indicated by the crossed arrow in the upper part of Fig. 1. Now suppose that an “entanglement banker”, let us call him Scrooge, agrees to lend Alice and Bob another entangled pair of particles $`|\varphi `$, under the condition that exactly the same state must be returned to him later on. Given this additional state, will Alice and Bob be able to transform $`|\psi _1`$ into $`|\psi _2`$ and still return the state $`|\varphi `$ to Scrooge? We suggest to call a transformation of this kind, which uses intermediate entanglement without consuming it, an entanglement-assisted local transformation, abbreviated by ELQCC. The possible existence of such a class of transformations has been conjectured by Popescu (see also ).
The main result of this letter is the proof that entanglement-assisted local transformations are indeed more powerful than ordinary local transformations. This result is significant in a number of ways. First of all, it provides a concrete mechanism by which Alice and Bob can enhance their entanglement-manipulation ability. For example, we will demonstrate that entanglement concentration is more efficient with ELQCC than with only LQCC. Moreover, the definition of a meaningful new class of entanglement transformations demonstrates that the structure of entanglement, even for pure, bipartite states, is still not completely understood.
Let us begin then with an explicit example of the power of entanglement-assisted transformations. The central tool we will require for this is Nielsen’s theorem .
Theorem (Nielsen): Let $`|\psi _1=\sum _{i=1}^n\sqrt{\alpha _i}|i_A|i_B`$ and $`|\psi _2=\sum _{i=1}^m\sqrt{\alpha _i'}|i_A|i_B`$ be pure bipartite states, with Schmidt coefficients respectively $`\alpha _1\ge \mathrm{\cdots }\ge \alpha _n>0`$ and $`\alpha _1'\ge \mathrm{\cdots }\ge \alpha _m'>0`$ (we can refer to such distributions as “ordered Schmidt coefficients”, or OSCs). Then a transformation $`T`$ that converts $`|\psi _1`$ to $`|\psi _2`$ with $`100\%`$ probability can be realised using LQCC iff the OSCs $`\left\{\alpha _i\right\}`$ are majorized by $`\left\{\alpha _i'\right\}`$, that is, iff for $`1\le l\le n`$
$$\sum _{i=1}^{l}\alpha _i\le \sum _{i=1}^{l}\alpha _i'.$$
(1)
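Numerically, the majorization condition is straightforward to test. The helper below (plain Python/NumPy; the function name and interface are our own illustration, not part of the paper) sorts both distributions, pads the shorter one with zeros and compares partial sums; it returns True exactly when the deterministic LQCC transformation of the theorem is possible.

```python
import numpy as np

def majorized_by(alpha, alpha_prime, tol=1e-12):
    """True if the ordered distribution `alpha` is majorized by `alpha_prime` (Eq. (1))."""
    a = np.sort(np.asarray(alpha, float))[::-1]
    b = np.sort(np.asarray(alpha_prime, float))[::-1]
    n = max(a.size, b.size)
    a = np.pad(a, (0, n - a.size))          # pad the shorter list with zeros
    b = np.pad(b, (0, n - b.size))
    return bool(np.all(np.cumsum(a) <= np.cumsum(b) + tol))
```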
One consequence of Nielsen’s theorem is that there exist pairs $`|\psi _1`$ and $`|\psi _2`$ where neither state is convertible into the other with certainty under LQCC. Such pairs are called incomparable , and can be indicated by $`|\psi _1\nleftrightarrow |\psi _2`$. Examples are the following two states:
$`|\psi _1`$ $`=`$ $`\sqrt{0.4}|00+\sqrt{0.4}|11+\sqrt{0.1}|22+\sqrt{0.1}|33`$ (2)
$`|\psi _2`$ $`=`$ $`\sqrt{0.5}|00+\sqrt{0.25}|11+\sqrt{0.25}|22.`$ (3)
It can easily be checked that $`\alpha _1<\alpha _1'`$ but $`\alpha _1+\alpha _2>\alpha _1'+\alpha _2'`$, so indeed $`|\psi _1\nleftrightarrow |\psi _2`$. If Alice and Bob share one of these states and wish to convert it to the other using LQCC, they must therefore run the risk of failure. Their greatest probability of success is given by
$$p_{\mathrm{max}}\left(|\psi _1\to |\psi _2\right)=\underset{1\le l\le n}{\mathrm{min}}\frac{E_l\left(|\psi _1\right)}{E_l\left(|\psi _2\right)}$$
(4)
where $`E_l\left(|\psi _1\right)=1-\sum _{i=1}^{l-1}\alpha _i`$. For instance, in the case of the pair in Eq. (3), $`p_{\mathrm{max}}`$ is only 80%.
Suppose now that Scrooge lends them the 2-qubit state
$$|\varphi =\sqrt{0.6}|44+\sqrt{0.4}|55.$$
(5)
The Schmidt coefficients $`\gamma _k`$, $`\gamma _k^{}`$ of the product states $`|\psi _1|\varphi ,`$ $`|\psi _2|\varphi `$, given in decreasing order, are
$`|\psi _1|\varphi `$ $`:`$ $`0.24,0.24,0.16,0.16,0.06,0.06,0.04,0.04,`$ (6)
$`|\psi _2|\varphi `$ $`:`$ $`0.30,0.20,0.15,0.15,0.10,0.10,0.00,0.00.`$ (7)
so that $`\sum _{i=1}^k\gamma _i\le \sum _{i=1}^k\gamma _i'`$, $`1\le k\le 8`$. Nielsen’s theorem then implies that the transformation $`|\psi _1|\varphi \to |\psi _2|\varphi `$ can, in fact, be realised with 100% certainty using LQCC. Alice and Bob can complete their task and still return the borrowed state $`|\varphi `$ to Scrooge. This state acts therefore much like a catalyst in a chemical reaction: its presence allows a previously forbidden transformation to be realised, and since it is not consumed it can be reused as many times as desired. This represents a fundamental distinction between the present effect and previous proposals for using entanglement as an enhancing factor, such as entanglement pumping or the activation of bound entanglement , where the extra entanglement was either used up or transformed. We shall thus adopt the “catalysis” metaphor as a convenient way of referring to our novel effect.
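As a concrete check, the short script below (an illustration, not from the original paper) reproduces the numbers above: the pair of Eqs. (2)-(3) is incomparable, Eq. (4) gives an optimal LQCC conversion probability of 80%, and once the catalyst of Eq. (5) is attached every partial-sum condition holds.

```python
import numpy as np

psi1 = np.array([0.4, 0.4, 0.1, 0.1])        # OSCs of |psi_1>, Eq. (2)
psi2 = np.array([0.5, 0.25, 0.25, 0.0])      # OSCs of |psi_2>, Eq. (3), zero-padded
phi  = np.array([0.6, 0.4])                  # catalyst, Eq. (5)

def psums(x):                                # partial sums of the ordered coefficients
    return np.cumsum(np.sort(x)[::-1])

# Neither state majorizes the other -> incomparable under LQCC
print(np.all(psums(psi1) <= psums(psi2)), np.all(psums(psi2) <= psums(psi1)))   # False False

# Optimal LQCC conversion probability, Eq. (4) (l restricted to E_l(psi2) > 0)
E = lambda x, l: 1.0 - np.sort(x)[::-1][:l - 1].sum()
print(min(E(psi1, l) / E(psi2, l) for l in (1, 2, 3)))                           # 0.8

# With the catalyst attached, the majorization condition of Eq. (1) is satisfied
print(np.all(psums(np.kron(psi1, phi)) <= psums(np.kron(psi2, phi)) + 1e-12))    # True
```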
Nielsen’s theorem, along with its generalisation given in , provides a complete answer to the question “which transformations on a pure bipartite state are possible under LQCC?” It would, of course, be desireable to find analogous conditions for ELQCC. For instance, given $`|\psi _1`$,$`|\psi _2`$, we would like to know when there exists an appropriate catalyst state $`|\varphi `$. Mathematically, this means that given the OSCs $`\left\{\alpha _i\right\},\left\{\alpha _i^{}\right\}`$, we have to determine when there exists other OSCs $`\left\{\beta _k\right\}`$, such that $`\left\{\alpha _i\beta _k\right\}`$ is majorized by $`\left\{\alpha _j^{}\beta _k\right\}`$. Unfortunately, this problem seems in general to be hard to solve analytically . The difficulty lies in the fact that, before Nielsen’s theorem can be applied to the tensor products, their Schmidt coefficients must be sorted into descending order. No general analytic way for doing this is known, so we are at present confined to searching numerically for appropriate catalysts. Nevertheless, it is possible to present a few interesting partial results.
Lemma 1: No transformation can be catalysed by a maximally entangled state $`|\phi _p=(1/\sqrt{p})\sum _{i=1}^p|i_A|i_B`$.
Proof: The Schmidt coefficients $`\gamma _j`$ of $`|\psi _1|\phi _p`$ are just $`\frac{\alpha _i}{p}`$, each one being $`p`$-fold degenerate. In this case sorting them is trivial, and we can write that, for any $`l`$, $`\sum _{j=1}^{pl}\gamma _j=\sum _{i=1}^l\alpha _i`$. Now, by Nielsen’s theorem, if $`|\psi _1\nrightarrow |\psi _2`$ under LQCC, then for some $`l=l_0`$ we have $`\sum _{i=1}^{l_0}\alpha _i>\sum _{i=1}^{l_0}\alpha _i'\Rightarrow \sum _{j=1}^{pl_0}\gamma _j>\sum _{j=1}^{pl_0}\gamma _j'\Rightarrow |\psi _1|\phi _p\nrightarrow |\psi _2|\phi _p`$ under LQCC $`\mathrm{}`$
This result shows a surprising property of catalysts: they must be partially entangled. Roughly speaking, if the catalyst has “not enough” entanglement, Alice and Bob will not be able to transform $`|\psi _1`$ into $`|\psi _2`$ with certainty, but if it has “too much” then they will not be able to return it intact to Scrooge.
Lemma 2: Two states are interconvertible (i.e., both $`|\psi _1\to |\psi _2`$ and $`|\psi _2\to |\psi _1`$) under ELQCC iff they are equivalent up to local unitary transformations.
Proof: Suppose that $`|\psi _1\leftrightarrow |\psi _2`$ under ELQCC . Then there exist $`|\eta ,|\varphi `$ such that both $`|\psi _1|\varphi \to |\psi _2|\varphi `$ and $`|\psi _2|\eta \to |\psi _1|\eta `$ are possible under LQCC. This means that $`|\psi _1|\varphi |\eta `$ and $`|\psi _2|\varphi |\eta `$ are interconvertible under LQCC, which happens iff their Schmidt coefficients are identical . This in turn implies that the Schmidt coefficients of $`|\psi _1`$ and $`|\psi _2`$ are also identical, and thus that they are equivalent under local unitary rotations $`\mathrm{}`$
One consequence of Lemma 2 is that if a transition forbidden under LQCC can be catalysed by extra entanglement (i.e. $`|\psi _1\nrightarrow |\psi _2`$ under LQCC but $`|\psi _1\to |\psi _2`$ under ELQCC), then the reverse transition (from $`|\psi _2`$ to $`|\psi _1`$) must be impossible even under ELQCC. In particular, only transitions between incomparable states may be catalysed. Therefore, catalysis is impossible if $`|\psi _1`$ and $`|\psi _2`$ are both $`2\times 2`$ states, for in this case it is always true that either $`|\psi _1\to |\psi _2`$ or $`|\psi _2\to |\psi _1`$ under LQCC.
A somewhat more surprising result is that catalysis is also impossible when $`|\psi _1`$ and $`|\psi _2`$ are both $`3\times 3`$ states. In this case incomparable states do exist , so Lemma 2 does not immediately rule it out. To see that it actually cannot occur, we must look more closely at the relevant Schmidt coefficients.
Lemma 3: Let $`|\psi _1,|\psi _2`$ be $`n\times n`$-level states, with OSCs $`\{\alpha _i\},\{\alpha _i'\},1\le i\le n`$. Then $`|\psi _1\to |\psi _2`$ under ELQCC only if both
$$\alpha _1\le \alpha _1',\alpha _n\ge \alpha _n'.$$
(8)
Proof: Let $`\left\{\beta _j\right\}_{j=1}^m`$ be the OSCs of $`|\varphi `$. Then the largest and smallest Schmidt coefficients of $`|\psi _1|\varphi `$ are, respectively, $`\gamma _1=\alpha _1\beta _1`$ and $`\gamma _{nm}=\alpha _n\beta _m`$ (analogous expressions hold for $`|\psi _2|\varphi `$ ). Nielsen’s theorem now tells us that, if $`|\psi _1|\varphi \to |\psi _2|\varphi `$ under LQCC, then $`\gamma _1\le \gamma _1'`$ and $`\sum _{k=1}^{nm-1}\gamma _k=1-\gamma _{nm}\le \sum _{k=1}^{nm-1}\gamma _k'=1-\gamma _{nm}'`$, from which Eq. (8) follows $`\mathrm{}`$
Suppose now that $`|\psi _1`$ and $`|\psi _2`$ are incomparable $`3\times 3`$ states. Then Nielsen’s theorem implies that one of two possibilities must hold: either
$$\{\begin{array}{c}\alpha _1>\alpha _1^{}\\ \alpha _1+\alpha _2<\alpha _1^{}+\alpha _2^{}\end{array}\text{or }\{\begin{array}{c}\alpha _1<\alpha _1^{}\\ \alpha _1+\alpha _2>\alpha _1^{}+\alpha _2^{}\end{array}.$$
(9)
In either case, Eq. (8) is violated, so $`|\psi _1\nrightarrow |\psi _2`$ under ELQCC. In other words, there are pairs of states which are incomparable even in the presence of extra entanglement.
In the $`4\times 4`$ case, we have seen by example (Eq. (3)) that catalysis is indeed possible. Lemma 3 shows that the only case where it can happen is when the following conditions are all satisfied
$$\alpha _1\le \alpha _1',\alpha _1+\alpha _2>\alpha _1'+\alpha _2',\alpha _4\ge \alpha _4'.$$
(10)
where the second condition ensures that the transformation is not possible under LQCC alone. Indeed, the states $`|\psi _1,|\psi _2`$ in Eq. (3) are of this type.
The concept of entanglement-assisted transformations may be extended in a number of ways. An example is when the presence of a catalyst state does not allow Alice and Bob to transform $`|\psi _1`$ into $`|\psi _2`$ with certainty, but still increases the optimal probability with which this can be done. For instance, consider the incomparable $`3\times 3`$ states $`|\psi _1=\sqrt{0.5}|00+\sqrt{0.4}|11+\sqrt{0.1}|22`$ and $`|\psi _2=\sqrt{0.6}|00+\sqrt{0.2}|11+\sqrt{0.2}|22`$. From eq. (4) the optimal probability of converting $`|\psi _1`$ into $`|\psi _2`$ under LQCC is $`80\%`$, and Lemma 3 tells us that this cannot be increased to 100% by the use of any catalyst. Nevertheless, Eq. (4) also shows that $`p_{\mathrm{max}}\left(|\psi _1|\varphi |\psi _2|\varphi \right)`$ can be as large as $`90.04\%`$ when $`|\varphi =\sqrt{0.65}|33+\sqrt{0.35}|44`$.
Even this limited enhancement is not always possible, as shown by the following result:
Lemma 4: Let $`|\psi _1,|\psi _2`$ be $`n\times n`$ bipartite states with OSCs $`\{\alpha _i\},\{\alpha _i'\}`$, and such that $`p_{\mathrm{max}}\left(|\psi _1\to |\psi _2\right)`$ under LQCC is $`\frac{\alpha _n}{\alpha _n'}`$. Then this probability cannot be increased by the presence of any catalyst state.
Proof: Let $`|\varphi `$ be a bipartite state with OSCs $`\left\{\beta \right\}_{i=1}^m.`$ From Eq. (4), the optimal probability of converting $`|\psi _1|\varphi `$ into $`|\psi _2|\varphi `$ under LQCC is given by
$$p_{max}=\underset{l\le nm}{\mathrm{min}}\frac{E_l\left(|\psi _1|\varphi \right)}{E_l\left(|\psi _2|\varphi \right)}\le \frac{E_{nm}\left(|\psi _1|\varphi \right)}{E_{nm}\left(|\psi _2|\varphi \right)}=\frac{\alpha _n}{\alpha _n'},$$
(11)
where we have used that $`E_{nm}(|\psi _1|\varphi )=\alpha _n\beta _m`$ $`\mathrm{}`$
In particular, Lemma 4 applies when $`|\psi _1`$ has $`n`$ Schmidt coefficients and $`|\psi _2`$ is the maximally entangled state $`|\phi _n`$, for in this case $`p_{\mathrm{max}}\left(|\psi _1\to |\phi _n\right)=n\alpha _n`$ . At first sight, this may seem to indicate that catalytic effects cannot increase the efficiency with which entanglement can be concentrated into maximally-entangled form. It turns out, however, that the opposite is actually the case. To see this, recall first that an entanglement concentration protocol (ECP) can be defined as any sequence of LQCCs that transforms a given partially entangled state $`|\psi _1`$ into a maximally entangled state $`|\phi _m`$ of $`m`$ levels, with probability $`p_m`$ (note that $`|\phi _1`$ is a disentangled state). Among all these protocols, the optimal one is that which yields on average the greatest amount of concentrated entanglement, i.e., which maximises $`E=\sum _{m=1}^np_m\mathrm{ln}m`$ over all possible distributions $`\{p_m\}`$ compatible with LQCC. The maximum value $`E_{max}`$ is the (finite) distillable entanglement of $`|\psi _1`$ . In it was shown that $`E_{max}=\sum _{m=1}^nm\mathrm{ln}m(\alpha _m-\alpha _{m+1})`$, corresponding to the probability distribution $`p_m^{opt}=m(\alpha _m-\alpha _{m+1})`$, where $`\left\{\alpha _i\right\}`$ are the OSCs of $`|\psi _1`$ and $`\alpha _{n+1}=0`$.
A catalysed ECP (Fig. 2) is then any sequence of LQCCs that transforms the product $`|\psi _1|\varphi `$ (for some catalyst state $`|\varphi `$) into one of the states $`|\phi _m|\varphi `$, with probability $`p_m`$. It turns out that in this case the distillable entanglement $`E_{\mathrm{max}}(|\varphi )`$ can be larger than the value given above. To show this, we use a general technique for optimising entanglement transformations, presented in . From the generalised Nielsen’s theorem , a catalysed ECP with probability distribution $`p_m`$ can be realised using LQCCs iff the following constraints are satisfied for $`1\le l\le nm`$
$$\sum _{m=1}^{n}p_mE_l\left(|\phi _m|\varphi \right)\le E_l\left(|\psi _1|\varphi \right),$$
(12)
where $`E_l`$ is the same as in Eq. (4). The optimal protocol can then be found by maximizing $`E(|\varphi )`$ with respect to $`p_m`$, given these constraints. This is a standard linear programming problem , for which an exact solution can always be found in any particular case. In Fig. 3 we plot $`E_{\mathrm{max}}(|\varphi )`$ for the case where $`|\psi _1`$ is a $`3\times 3`$ state with Schmidt coefficients $`\alpha _1=0.5`$, $`\alpha _2=0.3`$, $`\alpha _3=0.2`$, and where $`|\varphi `$ is a a $`2\times 2`$ state. We can see that some catalysts allow a substantial increase in the entanglement yield relative to the one achievable using only LQCC.
How does this happen, even under the constraints implied by Lemma 4? It turns out that, although Lemma 4 forbids $`p_n`$ from increasing in the presence of a catalyst, the same is not true for $`p_{n-1}`$. For instance, in the example above the optimal probability distribution without a catalyst (i.e., with a disentangled catalyst) is $`p_3^{opt}=0.6,p_2^{opt}=p_1^{opt}=0.2`$. On the other hand, in the presence of a catalyst with Schmidt components $`\beta _1=0.5825,\beta _2=0.4175`$, it becomes $`p_3^{opt}=0.6,p_2^{opt}=0.3178,p_1^{opt}=0.0822`$. Effectively, the presence of a catalyst allows us to syphon probability away from the unwanted outcome where all the entanglement is lost and into one where a maximally entangled state is obtained.
How far can this enhancing effect be used to increase the distillable entanglement $`E_{max}(|\varphi )`$ ? Lemma 4 gives us immediately the following upper bound
$$B\equiv n\alpha _n\mathrm{ln}n+(1-n\alpha _n)\mathrm{ln}(n-1)\ge E_{max}(|\varphi )$$
(13)
This simply corresponds to a case where $`p_n`$ is maximum, and all the remaining probability is assigned to obtaining $`|\phi _{n-1}`$. Another upper bound is the asymptotic distillable entanglement per copy, $`S`$ . These bounds are unrelated: there are states like the one in Fig. 3, for which $`B<S`$, and others for which $`S<B`$. It is an open question whether any of these bounds can in general be reached by the use of an appropriate catalyst.
To summarise: we have presented a counter-intuitive effect by means of which local entanglement transformations of finite states may be catalysed in the presence of ‘borrowed’ entanglement. Our results raise many interesting questions. For instance, what are sufficient conditions for the existence of catalysts? Are catalysts always more efficient as their dimension increases? We hope that the intricacy of this effect may convince readers that there is more to pure-state entanglement than just asymptotic properties, and that no single “measure” of entanglement can fully capture it all.
Acknowledgements: We would like to thank I. Olkin, A. Uhlmann and O. Pretzel for helpful comments on majorization and tensor products and especially S. Popescu for inspiring discussions. We acknowledge the support of the Brazilian agency Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPQ), the ORS Award Scheme, the United Kingdom Engineering and Physical Sciences Research Council, the Leverhulme Trust, the European Science Foundation, and the European Union.
# Mid-infrared imaging of the young binary star Hen 3-600: Evidence for a dust disk around the primary
## 1 Introduction
Over the past decade, infrared and millimeter observations have provided strong support for the single star plus accretion disk model of star formation (Shu, Adams, & Lizano 1987; also see Hartmann 1998). The disk contains most of the system’s angular momentum and may help accrete the bulk of the central star’s mass. It is also the likely site of planet formation.
This scenario must be more complicated in most systems because a majority of young, low-mass stars are members of binary systems (Ghez et al. 1993, 1997; Leinert et al. 1993; Reipurth & Zinnecker 1993; Simon et al. 1995). A young binary system may contain up to three distinct disks: circumstellar disks around one or both stars, with disk radii smaller than the binary’s periastron separation, and a circumbinary disk lying entirely outside the binary orbit (cf. Jensen & Mathieu 1997). These disks are remnants of star birth and thus may provide clues to binary formation mechanisms. Furthermore, the stars and the disk(s) in a binary influence each component’s evolution through dynamical interactions (e.g., Lin & Papaloizou 1993; Artymowicz & Lubow 1994, 1996). Therefore, the study of disks in binary systems is crucial to our understanding of both star and planet formation processes.
Here we report high-resolution mid-infrared imaging observations of the nearby, young, late-type binary Hen 3-600. It was first identified by de la Reza et al. (1989) as a T Tauri pair in the vicinity of another isolated young star TW Hydrae based on the presence of H$`\alpha `$ emission and the Li I 6708Å absorption line. Its coincidence with the IRAS source 1108-3715 was interpreted as being due to thermal dust emission associated with one or both stars in the binary. More recently, several groups have suggested that Hen 3-600, TW Hya and a number of other active, young stars in the same region of the sky form a physical association with an age of $``$10 Myr (Gregorio-Hetem et al. 1992; Kastner et al. 1997; Soderblom et al. 1998; Webb et al. 1999; see Jensen, Cohen, & Neuhäuser 1998 for a different viewpoint). At a distance of $``$50 pc, the “TW Hydrae Association” is the nearest known group of young stars, and is ideally suited to studies of circumstellar disk evolution.
The primary goal of our observations was to determine whether the thermal dust emission in the Hen 3-600 system originates from one or both stars in the binary. A secondary motivation was to place useful constraints on the nature of the dust disk(s) by comparing the near- to far-infrared spectral energy distribution (SED) with observations of T Tauri disks in Taurus.
## 2 Observations and Results
Hen 3-600 was observed on February 23, 1999 with the 4-m Blanco telescope at Cerro Tololo Interamerican Observatory using OSCIR, the University of Florida mid-infrared camera. OSCIR uses a 128$`\times `$128 Si:As Blocked Impurity Band (BIB) detector developed by Boeing. On the CTIO 4-m telescope, OSCIR has a plate scale of 0.183”/pixel, which gives a field of view of 23”$`\times `$23”. Our observations were made using the standard chop/nod technique with a chopper throw of 23” in declination. Images of Hen 3-600 were obtained in the K(2.2 $`\mu `$m), M(4.8$`\mu `$m), N(10.8 $`\mu `$m), and IHW18(18.2 $`\mu `$m) bands, and flux calibrated using the standard stars $`\lambda `$ Vel and $`\gamma `$ Cru. On-source integration times for Hen 3-600 were 300 seconds in K, 300 seconds in M, 600 seconds in N, and 600 seconds in IHW18. Additional information on OSCIR is available on the World-Wide Web at www.astro.ufl.edu/iag/. Our final stacked images at each of the four wavelengths are shown in Figure 1.
In the K band, the two components of the binary are clearly resolved, with a peak-to-peak separation of 1.4” and a position angle of $`215^{}\pm 3^{}`$, consistent with previous observations. The flux ratio we measure for the A and B components, $`F_A/F_B=1.6`$, agrees extremely well with that from Webb et al.’s (1999) K magnitudes based on IRTF photometry. In the M-band, the two stars are again well resolved, with a flux ratio of $``$1.9. In the N-band, only the primary (A) is clearly visible. Our flux measurement of 900$`\pm `$90 mJy is consistent with the IRAS 12$`\mu `$m flux of 990 mJy. By subtracting an appropriately scaled PSF star from the position of Hen 3-600A, we are able to place an upper limit of 50 mJy on the flux from B at N. In the IHW18 band, again only A appears on our image, with a total flux of 1500$`\pm `$150 mJy, which is consistent with the IRAS 25$`\mu `$m measurement of 1750 mJy. (The errors are primarily due to uncertainties in flux calibration.)
## 3 Discussion
### 3.1 Location and nature of circumstellar dust
Our mid-infrared images show that the IRAS-detected excess emission originates from Hen 3-600A, and not from Hen 3-600B, implying that the primary has a circumstellar disk while the secondary does not. From Keck LRIS spectra, Webb et al. (1999) measure H$`\alpha `$ equivalent widths of -21.8Å and -7.14Å for A and B respectively. Taken together, our mid-infrared images and Webb et al.’s spectra suggest that A is a classical T Tauri star (CTTS) with an actively accreting disk, while B is a weak-line T Tauri star (WTTS) without an optically thick inner disk.
In a spectroscopic and photometric study of 12 pre-main-sequence close binary systems, Prato & Simon (1997) found that in every case both components exhibited CTTS characteristics. Thus, they suggested that the transition of disk optical depth, from $`\tau >`$1 to $`\tau <`$1, is roughly coeval for both stars in a close binary and that a circumbinary envelope could regulate the common evolution of the inner disks. Our detection of a circumstellar disk around only the primary, and not the secondary, in Hen 3-600 is inconsistent with that scenario. However, it does agree with Prato (1998) who finds evidence for only circumprimary disks in 4 close binaries in a larger sample of 25 systems. We further note that in a study of 39 wide pre-main sequence binary systems, Hartigan, Strom, & Strom (1994) found that 6 systems were composed of CTTS/WTTS pairs, as judged from the H$`\alpha `$ emission (see Figure 8 of Hartigan et al. 1994). The last step in the evolution from the CTTS phase to the WTTS state may take place on relatively short timescales. It is possible that Hen 3-600 is a transition system with one component already evolved into a WTTS while the other is still in the process of doing so.
We plot the composite spectral energy distribution (SED) of Hen 3-600A in Figure 2 and compare it to the median SED for a classical T Tauri star of similar spectral type and to the SED of the HD 98800 quadruple system. The solid line on the plot corresponds to the median CTTS SED for stars in Taurus, normalized at H (D’Alessio et al. 1999); the dashed lines show the quartile fluxes to provide some idea of the range of observed fluxes. The dip at 4.8$`\mu `$m (M filter) in the spectrum of Hen 3-600A is suggestive of a possible central hole in the disk. However, this result needs to be confirmed with higher-precision photometry at wavelengths between 2$`\mu `$m–10$`\mu `$m. At 60$`\mu `$m and 100$`\mu `$m, the emission from Hen 3-600A falls well below that expected for a median $`\sim `$100 AU CTTS disk, implying that the disk may be significantly smaller in radius. Assuming an optically thick disk with a temperature distribution $`T\propto r^{-1/2}`$ (Beckwith et al. 1990) suggests that truncating the disk at an outer radius of $`\sim `$25 AU would help account for the reduction in long-wavelength fluxes. Similar dips at 5–10$`\mu `$m and again at far-infrared wavelengths are also seen in HD 98800, where the dust is apparently associated with one of the two binary pairs (Gehrz et al. 1999).
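To see why a smaller outer radius suppresses the far-infrared flux, one can integrate a blackbody over a $`T\propto r^{-1/2}`$ disk. The sketch below is purely illustrative: the 150 K normalisation at 1 AU, the 0.1 AU inner radius, the face-on geometry and the 50 pc distance are assumptions of this example, not quantities fitted to Hen 3-600A.

```python
import numpy as np
from scipy.integrate import quad

h, c, k = 6.626e-34, 2.998e8, 1.381e-23          # SI constants
AU, d = 1.496e11, 50 * 3.086e16                  # 1 AU and 50 pc in metres

def planck(nu, T):
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def disk_flux_jy(wavelength_um, r_out_au, T_1au=150.0, r_in_au=0.1):
    """Face-on flux density of an optically thick disk with T = T_1au (r/1 AU)^-1/2."""
    nu = c / (wavelength_um * 1e-6)
    integrand = lambda r: planck(nu, T_1au * (r / AU) ** -0.5) * 2 * np.pi * r
    return quad(integrand, r_in_au * AU, r_out_au * AU)[0] / d**2 / 1e-26

for lam in (25, 60, 100):
    print(lam, "um, flux(100 AU)/flux(25 AU) =",
          round(disk_flux_jy(lam, 100) / disk_flux_jy(lam, 25), 2))
# The ratio grows towards long wavelengths: truncation mostly removes cold, far-IR emission.
```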
Theoretical calculations suggest that circumstellar disks will be truncated by the tidal effects of a companion star in circular orbit at approximately 0.9 of the average Roche lobe radius (Artymowicz & Lubow 1994), which for roughly equal mass stars in the Hen 3-600 system would be at about a third of the orbital radius (Papaloizou & Pringle 1977). Given a binary separation of 1.4”, the disk of the primary should then be confined to $`<`$0.5” angular radius. It is interesting to note that in another young close binary, L1551 IRS5 in Taurus, compact disks have been observed around each star with the Very Large Array (VLA) at 7 mm (Rodriguez et al. 1998). In that case, the disk radii appear to be about a quarter of the binary separation distance. (Of course, it is possible that these outer radii are the result of instrument sensitivity limits rather than disk truncation.)
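The same numbers follow from simple small-angle geometry (taking the nominal 50 pc distance assumed above):

```python
d_pc, sep_arcsec = 50.0, 1.4
sep_au = sep_arcsec * d_pc                            # 1 arcsec at 1 pc corresponds to 1 AU
print("projected separation:", sep_au, "AU")          # 70 AU
print("~1/3 of separation:", round(sep_au / 3, 1), "AU")   # ~23 AU, close to the ~25 AU above
print("0.5 arcsec at 50 pc:", 0.5 * d_pc, "AU")       # 25 AU
```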
### 3.2 Age and evolutionary status of Hen 3-600
Hen 3-600 appears to be one of several nearby young stellar systems in the vicinity of TW Hydrae. Based on strong X-ray fluxes from all five systems, Kastner et al. (1997) concluded that the group forms a physical association at a distance of $``$50 pc with an age of 20$`\pm `$10 Myr. Webb et al. (1999) have identified five more T Tauri star systems in the same region of the sky as candidate members of the “TW Hya Association” (TWA), based on the same signatures of youth –namely high X-ray flux, large Li abundance, and strong chromospheric activity– and the same proper motion as the original five members. Furthermore, they suggest that the wide binary HR 4796 is also part of the Association, even though its Hipparcos parallactic distance of 67 pc places it further away than most other members of the group. The three other TWA stars with Hipparcos distances –TW Hya, HD 98800, and TWA 9– are at 56, 47 and 50 pc, respectively.
If all proposed members of the TWA do indeed form a coeval group with a common origin, they would provide an excellent sample for studying disk evolution timescales. In particular, using the $`Hipparcos`$ distance, and the D’Antona & Mazzitelli (1994) evolutionary tracks, it is possible to estimate an age of $`8\pm 3`$ Myr for the M2.5 star HR 4796B (Jayawardhana et al. 1998). This age is consistent with the upper bound provided by the measurement of the strong Li absorption line at 6708 Å (Stauffer et al. 1995); for this mass range, Li is predicted to be rapidly destroyed in the stellar interior at ages of 9–11 Myr (see, e.g., D’Antona & Mazzitelli 1994). We note that HR 4796B and Hen 3-600A have comparable Li equivalent widths and $`L_X/L_{bol}`$ ratios. Based on similar considerations, Soderblom et al. (1998) derive an age of 7-12 Myr for the HD 98800 quadruple system. A lower limit to the age of a few Myr is indicated by the isolated location of HR 4796, HD 98800, Hen 3-600, and the other TWA stars, since most stars of comparable or smaller ages are found in regions of molecular clouds and substantial interstellar dust extinction (Leisawitz, Bash, & Thaddeus 1989).
A spatially-resolved circumstellar disk was recently imaged around the A star HR 4796A at mid-infrared wavelengths (Jayawardhana et al. 1998; Koerner et al. 1998; Telesco et al. 1999). The presence of an inner disk hole in HR 4796A, first inferred by Jura et al. (1993) on the basis of the SED and now confirmed by mid-infrared imaging, may provide evidence for coagulation of dust into larger bodies on a timescale similar to that suggested for planet formation in the solar system (e.g., Strom et al. 1989; Strom, Edwards, & Skrutskie 1993; Podosek & Cassen 1994). If TWA stars are indeed coeval, it would be of significant interest to determine whether similar disk evolution has occurred in other systems.
However, given that the apparent proper motions of the TWA stars are primarily due to solar reflex motion, it is difficult to establish a common origin for the candidate group members (Jensen, Cohen, & Neuhäuser 1998; Soderblom et al. 1998; Hoff, Henning & Pfau 1998). In particular, without a Hipparcos distance measurement, the position of Hen 3-600 on the H-R diagram is highly uncertain. At an assumed distance of 50 pc, Webb et al. (1999) obtain an age of a few Myr for Hen 3-600A and B from PMS evolutionary tracks.
In summary, it is likely that Hen 3-600 is younger than 10 Myr, based on the presence of Li, and older than $``$1 Myr, given the lack of an associated parent molecular cloud. Possible membership of the TWA suggests an age close to the upper end of this range while its CTTS characteristics point to the lower end. A more accurate age estimate may not be possible until its distance is directly measured or a common origin for the TWA stars is firmly established.
### 3.3 Implications for disk formation and evolution in binaries
Nearly half of the known pre-main-sequence stars in Taurus are WTTS, i.e., they have H$`\alpha `$ equivalent widths $`<`$10 Å and no infrared excess, implying a lack of accretion disks. It has been argued that many, if not most, WTTS have binary stellar companions which render disks dynamically unstable (Jensen, Mathieu, & Fuller 1994, 1996; Osterloh & Beckwith 1995). Therefore, “mixed” binary systems like Hen 3-600, where one star is a CTTS and the other a WTTS, present somewhat of a puzzle.
The exact configuration of disks –circumprimary, circumsecondary and/or circumbinary– in a binary system may depend on the details of its formation. Smoothed particle hydrodynamics (SPH) simulations by Bate & Bonnell (1997) show that the fractions of infalling material that are captured by each protostar and the fraction which forms a circumbinary disk depend on the binary’s mass ratio and the parent cloud’s specific angular momentum, $`j_{infall}`$. For accretion with low $`j_{infall}`$, most of the infalling material is captured by the primary. For gas with intermediate $`j_{infall}`$, the fraction captured by the primary decreases and that captured by the secondary increases. For higher $`j_{infall}`$, more and more gas goes into a circumbinary disk instead of circumstellar disks. Thus, it could be that infall from a low $`j_{infall}`$ cloud led to a more massive disk around the primary in Hen 3-600. However, given their roughly equal masses, it is not clear why one star would capture much more material than the other. One possibility is that as protostars, the two components had very different masses, with what is now the primary accumulating most of its mass at the end of accretion. Another possibility is that there is a very low-mass, so far undetected, close companion around Hen 3-600B which accelerated the depletion of its disk.
It is likely that there is not a universal evolutionary timescale for protoplanetary disks, especially when the influence of companion stars is taken into account. Detailed studies of disks around a large sample of young binaries, including other candidate members of the TWA, may provide new insights to resolve this puzzle.
We wish to thank the staff of CTIO for their outstanding support. We are also grateful to Jim DeBuizer for his assistance at the telescope. The research at CfA was supported by NASA grant NAG5-4282 and the Smithsonian Institution. The research at the University of Florida was supported by NASA, NSF, and the University of Florida.
Figure Captions
Figure 1. Final, stacked OSCIR images of Hen 3-600 in K (2.2$`\mu `$m) upper left, M (4.8$`\mu `$m) upper right, N (10.8$`\mu `$m) lower left, and IHW18 (18.2$`\mu `$m) lower right filters, smoothed with a 3-pixel Gaussian. The bars below each panel indicate the intensity scale. The plate scale is 0.183”/pixel.
Figure 2. (a) Composite spectral energy distribution of Hen 3-600A from near- to far-infraed wavelengths based on measurements of Webb et al. (1999), IRAS, and this paper. (b) The SED of HD 98800, for comparison, based on fluxes reported by Sylvester et al. (1996). The solid line in each panel is the median SED for Taurus CTTS, normalized at H (from D’Alessio et al. 1999), and the dashed lines show the quartile fluxes to provide some idea of the range of observed fluxes.
# Physics Letters A 225 (1997) 167-169 Dipole-dipole interaction of Josephson diamagnetic moments
A weak-link structure (both intrinsic and extrinsic) of high-$`T_c`$ superconductors (HTS) is known to play a rather crucial role in understanding many unusual and anomalous physical phenomena in these materials (see, e.g., \[1-7\]). In particular, the ”fishtail” anomaly of magnetization in oxygen-deficient crystals is argued \[4-7\] to originate from intrinsic (atomic scale) weak links, while the spontaneous orbital magnetic moments induced by grain boundary weak links with the so-called ”$`\pi `$ junctions” (see, e.g., ) are believed to be responsible for the ”paramagnetic Meissner effect” in granular superconductors. Furthermore, to probe the symmetry of the pairing mechanism in HTS, Josephson interference experiments based on the assumption that $`\pi `$ junctions are created between $`s`$-wave and $`d`$-wave superconductors or between different grains of the $`d`$-wave superconductor itself have been conducted (see, e.g., for discussion and further references therein).
In the present paper, we would like to draw particular attention to the importance of the dipole-dipole interaction between the above-mentioned orbital magnetic moments. Due to the vector character of this interaction, the sign of the resulting Josephson critical current will depend on the mutual orientation of these diamagnetic moments, giving rise to either $`0`$ or $`\pi `$ type junction behavior. It is shown that for large enough grains and/or small enough distance between clusters, the dipole energy between clusters may overcome the direct Josephson coupling between grains within a single cluster, allowing for the manifestation of long-range correlation effects in granular superconductors.
Let us consider a system of two identical clusters of superconducting grains. Each cluster contains three weakly connected grains (which is the minimal number needed to create a current loop and the corresponding non-zero diamagnetic moment $`\stackrel{}{\mu }`$; see below). Between adjacent grains in each cluster, there is a Josephson-like coupling with the energy $`J_{ij}=J_{ji}`$. The dipole-dipole interaction between these two clusters can be presented in the form
$$\mathcal{H}_{dip}=\sum _{\alpha \beta }V_{\alpha \beta }(\stackrel{}{R})\mu ^\alpha \mu ^\beta ,$$
(1)
where
$$V_{\alpha \beta }(\stackrel{}{R})=\frac{\mu _0}{R^3}\left(\delta _{\alpha \beta }-\frac{R_\alpha R_\beta }{R^2}\right)$$
(2)
and
$$\mu ^\alpha =\frac{2e}{\hbar }\sum _{i=1}^{3}J_{i,i+1}\sigma _{i,i+1}^\alpha \mathrm{sin}\varphi _{i,i+1}.$$
(3)
Here, $`R`$ is the distance between clusters, $`\stackrel{}{\sigma }_{ij}=\stackrel{}{r}_i\times \stackrel{}{r}_j`$ is the (oriented) projected area for each cluster, $`\varphi _{ij}=\varphi _i-\varphi _j`$ is the phase difference between adjacent grains, and $`\mu _0=4\pi \times 10^7H/m`$. Hereafter, $`\{\alpha ,\beta \}=x,y,z`$; and $`\{i,j\}=1,2,3.`$
In view of the obvious constraint, $`\varphi _{12}+\varphi _{23}+\varphi _{31}=0`$, our consideration can be substantially simplified by introducing a ”collective variable” $`\varphi `$, namely
$$\varphi _{12}=-\varphi _{31}\equiv \varphi ,\varphi _{23}=0$$
(4)
As a result, the dipole-dipole interaction energy takes on a simple form
$$\mathcal{H}_{dip}=D\mathrm{sin}^2\varphi ,$$
(5)
where
$$D=\left(\frac{2e}{\hbar }\right)^2\sum _{\alpha \beta }V_{\alpha \beta }(\stackrel{}{R})\left(J\sigma \right)^\alpha \left(J\sigma \right)^\beta ,$$
(6)
with
$$\left(J\sigma \right)^\alpha \equiv J_{12}\sigma _{12}^\alpha +J_{13}\sigma _{13}^\alpha .$$
(7)
To estimate the significance of the above-considered dipole-dipole energy, we have to compare it with the Josephson coupling energy between grains within a single cluster. The latter contribution for three adjacent grains (forming a cluster and allowing for a non-zero current loop) gives
$$\mathcal{H}_J=-\sum _{i=1}^{3}J_{i,i+1}\mathrm{cos}\varphi _{i,i+1},$$
(8)
or equivalently, in terms of the ”collective variables” (see Eq.(4))
$$\mathcal{H}_J=-J_{23}-(J_{12}+J_{13})\mathrm{cos}\varphi .$$
(9)
Thus, the resulting Josephson current in our system of two coupled clusters, defined as $`I(\varphi )=(2e/\hbar )(\partial \mathcal{H}_{tot}/\partial \varphi )`$ with the total energy $`\mathcal{H}_{tot}=2\mathcal{H}_J+\mathcal{H}_{dip}`$ (notice that there is no direct Josephson interaction between clusters, they are coupled only via the dipole-dipole interaction) reads
$$I(\varphi )=2I_J^c\mathrm{sin}\varphi +I_D^c\mathrm{sin}2\varphi ,$$
(10)
where
$$I_J^c=\frac{2e}{\hbar }(J_{12}+J_{13}),$$
(11)
and
$$I_D^c=\frac{2e}{\hbar }D.$$
(12)
It is interesting to mention that a ”nonsinusoidal” current-phase relationship similar to Eq.(10) has recently been discussed by Yip, who investigated the Josephson coupling involving unconventional superconductors beyond the tunnel-junction limit.
Let us now find out when the dipole-dipole interaction between two clusters may become comparable with (or even exceed) the direct Josephson coupling between grains within the same cluster. In view of Eq.(10), this will happen when $`D`$ becomes equal to (or larger than) $`2(J_{12}+J_{13})\equiv 4J`$, which in turn is possible either for a small enough distance between clusters $`R`$ or for a large enough grain size $`r_g\equiv \sqrt{\sigma /\pi }`$. Taking $`J/k_B\simeq 90K`$ for the maximum (zero-temperature) Josephson energy in $`YBCO`$ materials, and assuming (roughly) $`R\simeq r_g`$, we get $`r_g\simeq 10\mu m`$ for the minimal grain size needed to observe the effects due to the dipole-dipole interaction between diamagnetic moments in granular HTS.
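An order-of-magnitude version of this estimate (our own back-of-the-envelope rearrangement, assuming $`\sigma \simeq \pi r_g^2`$, $`R\simeq r_g`$ and an orientation factor of order unity, so that setting $`D=4J`$ gives $`r_g=\hbar ^2/(\mu _0e^2\pi ^2J)`$) can be scripted as follows; it lands within a factor of a few of the quoted $`10\mu m`$ scale.

```python
from math import pi
from scipy.constants import hbar, e, mu_0, k as k_B

J = 90 * k_B                                   # maximum Josephson energy, J/k_B ~ 90 K
r_g = hbar**2 / (mu_0 * e**2 * pi**2 * J)      # from (2e/hbar)^2 mu_0 (J pi r_g^2)^2 / r_g^3 = 4J
print(r_g * 1e6, "micrometres")                # a few tens of micrometres
```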
There is, however, another possibility to observe these effects which does not require the above-mentioned restrictions (small $`R`$ and/or large $`r_g`$). Indeed, in view of Eq.(11), when grains $`1`$ and $`2`$ form a $`0`$ junction (with $`J_{12}=J`$) and at the same time grains $`1`$ and $`3`$ produce a $`\pi `$ junction (with $`J_{13}=-J`$) within the same cluster, the direct Josephson contributions cancel each other so that $`I_J^c=0`$, and the resulting critical current is completely defined by its dipole part only, which in this case reads
$$I_D^c=\left(\frac{2e}{\hbar }\right)^3J^2\sum _{\alpha \beta }V_{\alpha \beta }(\stackrel{}{R})\mathrm{\Delta }\sigma ^\alpha \mathrm{\Delta }\sigma ^\beta ,$$
with $`\mathrm{\Delta }\sigma ^\alpha \equiv \sigma _{12}^\alpha -\sigma _{13}^\alpha `$. Moreover, due to the orientational nature of the dipole-dipole interaction, the induced critical current $`I_D^c`$ may exhibit the properties of either $`0`$ or $`\pi `$ junctions, depending on the sign of the interaction potential $`V_{\alpha \beta }(\stackrel{}{R})`$. In particular, as is seen from Eq.(2), the non-diagonal part of $`V_{\alpha \beta }(\stackrel{}{R})`$ is responsible for the creation of $`\pi `$-type junctions with $`I_D^c<0`$. It would be very interesting to try to observe the predicted behavior experimentally, perhaps using artificially prepared systems of superconducting grains.
In summary, a system of two clusters (of weakly connected superconducting grains) coupled via the dipole-dipole interaction between their diamagnetic moments was considered. For large enough grains (or a small enough distance between clusters), the dipole energy between clusters was found to be comparable with the direct Josephson coupling between grains within a single cluster. The sign of the critical current related to the dipole energy was shown to depend on the mutual orientation of the clusters, varying from $`0`$- to $`\pi `$-type junction behavior.
no-problem/9905/chao-dyn9905007.html
# Collective and chaotic motion in self-bound many-body systems
## Abstract
We investigate the interplay of collective and chaotic motion in a classical self-bound $`N`$-body system with two-body interactions. This system displays a hierarchy of three well separated time scales that govern the onset of chaos, damping of collective motion and equilibration. Comparison with a mean-field problem shows that damping is mainly due to dephasing. The Lyapunov exponent, damping and equilibration rates depend mildly on the system size $`N`$.
preprint: DOE/ER/40561-50
Self-bound many-body systems like the atomic nucleus display regular, collective phenomena as well as chaotic behavior. The giant dipole resonance, for example, constitutes a regular oscillation of the protons against the neutrons, although it is excited at energies where the spectrum displays fluctuations which are typical for chaotic systems. This interesting interplay of collective and chaotic motion and the effects of chaotic dynamics on the damping and dissipation of nuclear excitations is a matter of intense research but is still not fully understood.
Several authors have addressed equilibration and damping of collective motion by coupling a slowly moving collective degree of freedom (typically the wall of a container) to fast moving independent particle degrees of freedom. Within these models, collective degrees of freedom are neither constructed from single-particle coordinates nor the result of a self-consistent mean field. In the case of driven walls this may lead to energy conservation problems. Bauer et al. treated a more realistic model where the collective motion was coupled self-consistently to the single-particle motion and found the coexistence of undamped collective motion and chaotic single-particle dynamics. This observation is interesting with regard to low-lying collective excitations but cannot apply to the case of the giant dipole resonance. Furthermore, the classical models discussed in the literature suffer from the absence of a two-body force and rotational symmetry. In this work we are particularly interested in the many-body aspects of the problem and the role of the two-body interaction as opposed to the mean field. To this end we study a classical self-bound many-body system with the following characteristics: two-body surface interaction; chaotic dynamics; collective mode constructed from single-particle degrees of freedom.
Let us consider the model Hamiltonian
$$H=\sum _{i=1}^{N}\frac{\stackrel{}{p}_i^2}{2m}+\sum _{i<j}V(|\stackrel{}{r}_i-\stackrel{}{r}_j|),$$
(1)
where $`\stackrel{}{r}_i`$ is a two–dimensional position vector of the $`i`$-th nucleon and $`\stackrel{}{p}_i`$ is its conjugate momentum. The interaction is given by
$`V(r)=\{\begin{array}{cc}0\hfill & \text{for }r<a,\hfill \\ \mathrm{\infty }\hfill & \text{for }r\ge a.\hfill \end{array}`$ (4)
The particles thus move freely within a convex billiard in $`2N`$-dimensional configuration space and undergo specular reflection at the wall. This corresponds to the basic picture of nucleon motion in a nucleus: they move in a common flat-bottom potential, and interact mainly at the surface. The Hamiltonian (1,4) achieves these features while retaining the two-body interaction. Total momentum $`\stackrel{}{P}`$, energy $`E`$ and angular momentum $`L`$ are conserved. Classical phase space structure is independent of energy since a scaling $`\stackrel{}{p}_i\to \alpha \stackrel{}{p_i}`$ simply leads to a rescaling of $`E\to \alpha ^2E,\stackrel{}{P}\to \alpha \stackrel{}{P}`$, and $`L\to \alpha L`$. We set $`m=a=\hbar =1`$. Thus, energies, momenta and times are measured in units of $`\hbar ^2/ma^2`$, $`\hbar /a`$ and $`ma^2/\hbar `$, respectively (we introduced $`\hbar `$ to set an energy scale). In what follows we fix the total momentum $`\stackrel{}{P}=0`$, the angular momentum $`L=0`$ and set the energy $`E=N`$.
The $`N=2`$ system is integrable and chaotic dynamics may exist for $`N\ge 3`$. To study the classical dynamics of the $`N`$-body system we compute the Lyapunov exponent using the tangent map. The time evolution of the $`N`$-body system is done efficiently by organizing collision times in a partially ordered tree and requires an effort $`𝒪(N\mathrm{ln}N)`$. We draw several initial conditions for fixed $`N`$ at random from the $`(E=N,\stackrel{}{P}=0,L=0)`$-shell and follow their time evolution for several hundred Lyapunov times. This ensures good convergence of the Lyapunov exponents. The results of several runs are listed in Table I (mean values and RMS deviations). For $`N=3`$ we traced $`7\times 10^4`$ trajectories, all of them being unstable with respect to small initial deviations. Thus, we expect more than 99% of phase space to be chaotic. For larger $`N`$ we traced fewer trajectories and have less statistics. However, all followed trajectories have positive Lyapunov exponents. This suggests that the dynamics of the $`N`$-body system is dominantly chaotic. We note that the $`N`$-body system possesses marginally stable orbits corresponding to configurations where $`N-2`$ particles are at rest. However, these configurations are of zero measure in phase space. The reliability of the numerical evolution was checked by comparing forward with backward evolution. Moreover, total energy, total momentum and angular momentum were conserved to high accuracy. As a further check we computed the Lyapunov exponents using an alternative method and found good agreement with the results displayed in Table I.
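A minimal sketch of the elementary step in such an event-driven simulation is given below (Python, equal masses assumed). For a pair currently at separation $`|\mathrm{\Delta }r|<a`$ it solves the quadratic $`|\mathrm{\Delta }r+\mathrm{\Delta }vt|=a`$ for the next encounter with the pair-potential wall and reverses the radial component of the relative velocity; the bookkeeping of all pair times in a priority queue, which gives the $`𝒪(N\mathrm{ln}N)`$ effort quoted above, is omitted.

```python
import numpy as np

def pair_collision_time(dr, dv, a):
    """Smallest t > 0 with |dr + dv*t| = a, given |dr| < a (2D vectors)."""
    A = np.dot(dv, dv)
    B = np.dot(dr, dv)
    C = np.dot(dr, dr) - a * a          # negative inside the well
    if A == 0.0:
        return np.inf
    disc = B * B - A * C                # always positive for C < 0
    return (-B + np.sqrt(disc)) / A

def reflect(v1, v2, dr):
    """Specular reflection at the pair wall: reverse the radial relative velocity."""
    n = dr / np.linalg.norm(dr)
    dv_rad = np.dot(v1 - v2, n)
    return v1 - dv_rad * n, v2 + dv_rad * n   # equal masses: E and P conserved

# tiny usage example
r1, r2 = np.array([0.0, 0.0]), np.array([0.3, 0.0])
v1, v2 = np.array([0.5, 0.1]), np.array([-0.2, 0.0])
t = pair_collision_time(r1 - r2, v1 - v2, a=1.0)
v1, v2 = reflect(v1, v2, (r1 + v1 * t) - (r2 + v2 * t))
print(f"next pair event after t = {t:.3f}")
```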
We next consider the time evolution of collective motion. To this end we define a set of initial conditions (i.e. phase space points) of the $`N`$-body system that lead to collective motion corresponding to the giant dipole resonance. In passing we mention that one could also use collective coordinates introduced by Zickendraht or consider motion close to invariant manifolds of the rotationally symmetric many-body system. Let us draw the momenta $`\stackrel{}{p}_i`$ at random from uniform probability distributions that vanish outside of the domains $`(p_x,p_y)\in [-\mathrm{\Delta }q_x,\mathrm{\Delta }q_x]\times [\pm q-\mathrm{\Delta }q_y,\pm q+\mathrm{\Delta }q_y]`$, with $`+`$ or $`-`$ sign for particles $`i=1,\dots ,N/2`$ (e.g. protons) and particles $`i=N/2+1,\dots ,N`$ (e.g. neutrons), respectively. We rescale the energy to $`E=N`$. The initial positions $`\stackrel{}{r}_i,i=1,\dots ,N-1`$ are drawn at random to lie inside a circle of radius $`a/2`$; the position of the $`N`$th particle is chosen such that the total angular momentum vanishes; the center of mass is subtracted. For $`\mathrm{\Delta }q_x\sim \mathrm{\Delta }q_y\ll q\simeq \sqrt{2mE/N}`$ one obtains initial conditions that correspond to the motion of all protons against all neutrons and may therefore be identified with the giant dipole resonance (our choice of zero angular momentum is justified in the framework of classical mechanics). In what follows we use $`\mathrm{\Delta }q_x=\mathrm{\Delta }q_y=q/10`$.
We are interested in the time evolution of the dipole-moment
$$\stackrel{}{D}=\frac{2}{N}\sum _{i=1}^{N/2}(\stackrel{}{r}_i-\stackrel{}{r}_{N-i}).$$
(5)
Fig. 1 shows the time dependence of the dipole-moment for a system of size $`N=220`$ and one collective initial condition. The longitudinal component $`D_y`$ exhibits a few damped oscillations that eventually turn into erratic fluctuations of small amplitude. On the same time scale the transverse component $`D_x`$ increases in amplitude from approximately zero and turns into similar erratic fluctuations. These fluctuations decrease with increasing system size $`N`$, indicating that they are of statistical nature. The first two periods of the time evolution can be fitted to an exponentially damped harmonic oscillation yielding the period $`\tau `$ and the damping rate $`\gamma `$. The results of several runs (mean values and RMS deviations) are listed in Table II.
For the equilibration we study the distribution of single-particle energies $`ϵ_j,j=1,\dots ,N`$ during the time evolution of the dipole mode. The moments $`I_\nu \equiv N^{-1}\sum _{j=1}^{N}ϵ_j^\nu `$ of this distribution may be compared with the corresponding equilibrium values obtained from a Boltzmann distribution $`\rho ^{(B)}(ϵ)=1/[kT\mathrm{exp}(ϵ/kT)]`$ at a temperature $`kT=E/N=1`$, i.e. $`I_\nu ^{(B)}=\nu !(kT)^\nu `$. Fig. 2 shows the first nontrivial moment $`I_2`$ averaged over many runs to reduce statistical fluctuations. This moment approaches its equilibrium value exponentially fast at a rate $`\alpha `$ (listed in Table II). The saturation at long times seems to be due to the finite number of particles. Once the system has equilibrated, the single-particle momenta are Maxwell-Boltzmann distributed (see Fig. 3). We also computed the Lyapunov exponent $`\lambda `$ of the many-body trajectory corresponding to the collective dipole motion and listed the results (mean values and RMS deviations) of several runs in Table II. The Lyapunov exponents agree well with the previously computed ones listed in Table I. This shows that the dynamics of the many-body system is also chaotic in the region of phase space that corresponds to the collective motion. Table II displays a hierarchy of well separated rates $`\lambda >\gamma >\alpha `$ with $`\lambda \gg \alpha `$. This hierarchy is particularly pronounced for large $`N`$ since $`\lambda `$ ($`\alpha `$) increases (decreases) with increasing $`N`$. The damping and equilibration rates differ for the following reason. While the dipole mode is damped out once the single-particle momentum distribution exhibits spherical symmetry, equilibration also requires a considerable change in the radial momentum distribution. Note that the dipole moment $`\stackrel{}{D}`$ and the moments $`I_\nu `$ of the single-particle energy distribution effectively test the angular and radial momentum distribution, respectively. Our results show that a generic self-bound many-body system with two-body interactions displays fast equilibration in the angular components as opposed to slow equilibration in the radial components of the single-particle momenta. This observation is also of interest in view of heavy-ion collisions.
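The equilibrium values $`I_\nu ^{(B)}=\nu !(kT)^\nu `$ quoted above are simply the moments of an exponential energy distribution. The short check below (Python) draws two-dimensional Maxwell-Boltzmann momenta at an assumed $`kT=1`$ and compares the sampled moments with these values.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
kT, m, N = 1.0, 1.0, 100000

# 2D Maxwell-Boltzmann momenta; eps = p^2/(2m) is then exponential with mean kT
p = rng.normal(scale=np.sqrt(m * kT), size=(N, 2))
eps = np.sum(p**2, axis=1) / (2.0 * m)

for nu in (1, 2, 3):
    sampled = np.mean(eps**nu)
    print(f"I_{nu}: sampled = {sampled:.3f}, Boltzmann value = {factorial(nu) * kT**nu}")
```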
To examine the role of the two-body interaction more closely, we consider a system of $`N`$ independent particles moving inside a circular billiard of diameter $`a`$. This system is integrable since single-particle energies and angular momenta are conserved quantities. It can be taken as a mean-field approximation of our Hamiltonian and is motivated by the observation that the surface of the interacting many-body system (i.e. the points in two-dimensional configuration space where interactions occur) becomes sharper with increasing numbers of particles and may roughly be approximated by a circle of diameter $`a`$. As before, we take initial conditions at random from the region that corresponds to collective motion and compute the dipole-moment (5) as a function of time. Fig. 4 compares the evolution of the longitudinal dipole-moment $`D_y`$ obtained for the mean-field model with the result obtained for the interacting many-body system. One observes damped oscillations with a period and a damping rate that are slightly larger than for the interacting system. The mean-field Hamiltonian therefore captures the damping of collective motion quite accurately. In the integrable system, damping is due to the dephasing of single-particle oscillations that is induced by the collisions with the surface, i.e. the directions of single-particle momenta get randomized on a time scale $`1/\gamma `$. The validity of the mean-field picture is also confirmed by the observation that the periods and damping rates display no significant $`N`$-dependence. This is, however, different for the Lyapunov exponents and equilibration rates. These do depend on $`N`$ through the presence of the two-body interaction. Note that our observation of $`N`$-independent frequencies $`\omega =2\pi /\tau `$ and damping rates $`\gamma `$ leads to a simple scaling law. Keeping the average single-particle energies $`E/N`$ fixed and scaling the diameter of the $`N`$-body system as $`a\propto N^{1/3}`$ yields $`\omega ,\gamma \propto N^{-1/3}`$. These results are in semi-quantitative agreement with experimental data, i.e. $`\hbar \omega \propto N^{-1/3}`$ in heavy nuclei, and $`\gamma \propto N^{-[1/3\dots 2/3]}`$.
We also considered initial conditions of the interacting many-body system with larger momentum spread $`\mathrm{\Delta }q_x\sim \mathrm{\Delta }q_y\sim q`$, i.e. the initial momentum distribution is closer to an equilibrium distribution. This situation is closer to the perturbative excitation of the nuclear giant resonance. As expected, one finds a collective oscillation of smaller amplitude and comparable period, showing that the observed phenomena are robust. The determination of the damping rate is, however, difficult since the amplitude and the statistical fluctuations are roughly of the same magnitude.
In summary, we have considered damping and equilibration of collective motion in a self-bound $`N`$-body system with two-body (surface) interactions. This system is predominantly chaotic and exhibits damped collective motion that leads to equilibration. There is a hierarchy of three well separated time scales: the onset of classical chaos at short times, damping of the dipole mode at intermediate times, and equilibration at long times. The damping is mainly due to dephasing and may be understood quite accurately in a mean-field picture of non-interacting particles. Consequently, periods and damping rates show no significant $`N`$-dependence. Equilibration, however, requires the randomization of the magnitudes of single-particle momenta and a two-body interaction. Lyapunov exponents and equilibration rates depend mildly on $`N`$. The presented model exhibits a rather homogeneous phase space structure and the main characteristics of self-bound many-body systems like atomic nuclei. Our results show that it captures important features of collective motion in such systems.
I thank G. Bertsch, F. Leyvraz, T. Prosen, S. Reddy, P.-G. Reinhard and T. Seligman for useful discussions. The hospitality of the Centro Internacional de Ciencias, Cuernavaca, Mexico, is gratefully acknowledged. This work was supported by the Department of Energy under grant DOE/ER/40561.
no-problem/9905/astro-ph9905276.html
# Strongly Polarized Optical Afterglows of Gamma-Ray Bursts
## 1 Introduction, observations
Gamma-ray burst (GRB) afterglows were explained by synchrotron radiation from an ultra-relativistic blast wave (Paczynski & Rhoads 1993, Katz 1994, Meszaros & Rees 1997, Vietri 1997, Waxman 1997). A good way to test the synchrotron emission model is to measure the polarization (Loeb & Perna 1998, Gruzinov & Waxman 1999).
Hjorth et al (1999) found an upper limit of 2.3% on the linear polarization of the optical emission from GRB 990123. Covino et al (1999) have detected a 1.7% linear polarization of the optical transient associated with GRB 990510. A few percent polarization relative to the stars in the field can be induced by dust along the line of sight in the host galaxy. To be sure that the polarization is from the GRB afterglow, one needs to look for the time variability of the polarization signal (Gruzinov & Waxman 1999).
While current observations are not in disagreement with the model of Gruzinov & Waxman (1999), we would like to bring attention to the fact that a much stronger polarization of the optical afterglows, tens of percents, is theoretically possible. Polarization of more than a few percent would be a true signature of the synchrotron emission model.
The strong polarization might be achieved if the afterglow is beamed, and the magnetic fields parallel and perpendicular to the jet have different strengths (§2, Medvedev & Loeb 1999). Medvedev & Loeb (1999) believe that the magnetic fields in the synchrotron emitting plasma are strictly parallel to the shock front. We do not think that this is the case (§3), but their idea that parallel and perpendicular magnetic fields can have different strength seems plausible, especially in the jet scenario.
## 2 Strongly polarized afterglows
Synchrotron emission is strongly polarized. The degree of polarization $`\mathrm{\Pi }_0`$ depends on the distribution function and the frequency, a typical value is $`\mathrm{\Pi }_0=60\%`$. In an unresolved source like a GRB afterglow, the polarization will cancel out if the magnetic field is fully mixed in the emitting plasma. If the symmetry is violated, the unresolved image will have a non-zero polarization $`\mathrm{\Pi }`$.
Loeb & Perna (1998) and Gruzinov & Waxman (1999) break the symmetry by small-number statistics: if the number of coherent magnetic patches in the synchrotron emitting plasma is $`N`$, the measured polarization is $`\mathrm{\Pi }\sim \mathrm{\Pi }_0/\sqrt{N}`$. Gruzinov & Waxman (1999) then estimate $`N`$ for the radiation from the self-similar ultra-relativistic blast of Blandford & McKee (1976). Medvedev & Loeb (1999) rely on interstellar scintillations in the radio band to resolve the image.
Medvedev & Loeb (1999) in their paper on radio afterglows note that another way to break the symmetry is to have a beamed blast wave in which magnetic fields are perpendicular to the jet. We do not think that magnetic fields are purely perpendicular, but it seems plausible that magnetic fields are not fully mixed, i.e., the magnetic fields parallel and perpendicular to the jet have significantly different averaged strengths. A highly polarized optical afterglow can result.
For illustrative purposes (Fig. 1), assume that (i) the opening angle of the jet is much smaller than the viewing angle $`\theta `$, (ii) $`\theta \ll 1`$, (iii) at the time of emission, the Lorentz factor of the emitting plasma is $`\gamma =\theta ^{-1}`$. Then the degree of polarization is
$$\mathrm{\Pi }=\mathrm{\Pi }_0\frac{<B_{\parallel }^2>-0.5<B_{\perp }^2>}{<B_{\parallel }^2>+0.5<B_{\perp }^2>},$$
(1)
where the magnetic field is measured in the plasma rest frame, and we have assumed full mixing in the plane parallel to the shock: $`<B_x^2>=<B_y^2>=0.5<B_{\perp }^2>`$.
If $`\theta \ne \gamma ^{-1}`$, the photon makes an angle $`\alpha \ne \pi /2`$ with the z-axis in the plasma frame, and equation (1) should be replaced by
$$\mathrm{\Pi }=\mathrm{\Pi }_0\mathrm{sin}^2\alpha \frac{<B_{\parallel }^2>-0.5<B_{\perp }^2>}{<B_{\parallel }^2>\mathrm{sin}^2\alpha +0.5<B_{\perp }^2>(1+\mathrm{cos}^2\alpha )},$$
(2)
where
$$\mathrm{sin}\alpha =\frac{2\gamma \theta }{1+\gamma ^2\theta ^2}.$$
(3)
As long as the proper strengths of the fields are significantly different, and the viewing angle and the jet opening angle are $`\sim \gamma ^{-1}`$, the emission is strongly, $`\mathrm{\Pi }_0\sim `$ tens of percent, polarized.
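A small numerical sketch of equations (2) and (3) is given below (Python). The anisotropy ratio $`b=<B_{\parallel }^2>/<B_{\perp }^2>`$, the value of $`\mathrm{\Pi }_0`$, and the viewing angles scanned are assumed illustrative values, not quantities fixed by this paper.

```python
import numpy as np

def polarization(gamma_theta, b, pi0=0.6):
    """Eqs. (2)-(3): polarization degree vs gamma*theta, for
    b = <B_par^2>/<B_perp^2> measured in the plasma frame."""
    sin_a = 2.0 * gamma_theta / (1.0 + gamma_theta**2)
    cos2_a = 1.0 - sin_a**2
    num = pi0 * sin_a**2 * (b - 0.5)
    den = b * sin_a**2 + 0.5 * (1.0 + cos2_a)
    return num / den

for gt in (0.3, 1.0, 3.0):            # viewing angle in units of 1/gamma
    print(f"gamma*theta = {gt}: Pi = {polarization(gt, b=3.0):.2%}")
```

For $`\gamma \theta =1`$ this reduces to equation (1), and with a moderate anisotropy the polarization indeed reaches tens of percent.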
## 3 The origin of the magnetic fields
The origin of the magnetic fields remains the biggest problem of the blast synchrotron emission model. To explain the observed afterglows, the magnetic field in the shocked plasma has to have close to equipartition magnitude. The shock compression of the magnetic field of the surrounding medium is insufficient. In order to be in near equipartition after the passage of the ultra-relativistic shock, the magnetic fields in the unshocked medium must be in near equipartition with the rest mass energy density. The equipartition with the rest mass energy density in the cold unshocked medium is impossible - such fields could not be confined by the unshocked medium pressure. Therefore the unshocked medium is effectively unmagnetized, and magnetic fields must be generated by the blast wave itself.
As explained by Gruzinov & Waxman (1999), the collisionless ultra-relativistic blast wave will generate strong magnetic fields by the Weibel instability (electromagnetic instability in a plasma with an anisotropic distribution function, e.g. Krall & Trivelpiece 1973). These fields are generated at the microscopic, skin depth scales ($`\delta \sim c/\omega _p`$, $`\omega _p`$ is the plasma frequency). The skin depth is much smaller than the blast wave proper thickness $`l`$, typically $`\delta /l\sim 10^{-10}`$. Gruzinov & Waxman (1999) suggested that the length scale of the magnetic field will grow after the shock transition by an unidentified mechanism. Medvedev & Loeb (1999) believe that the small-scale Weibel-generated fields solve the problem, but, unfortunately, these small scale fields cannot explain the magnetization of the bulk of the blast wave plasma.
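For orientation, the sketch below (Python, cgs units) evaluates the skin depth for an assumed ambient density of $`1\mathrm{cm}^3`$ and compares it with an assumed blast-wave proper thickness of $`10^{16}`$ cm; both numbers are fiducial values chosen here for illustration only, and they give a ratio of the order of the quoted $`\delta /l\sim 10^{-10}`$.

```python
import numpy as np

# Gaussian (cgs) units
e = 4.803e-10          # esu
m_e = 9.109e-28        # g
c = 2.998e10           # cm/s

n = 1.0                # assumed ambient density, cm^-3
l = 1e16               # assumed blast-wave proper thickness, cm

omega_p = np.sqrt(4.0 * np.pi * n * e**2 / m_e)   # plasma frequency, rad/s
delta = c / omega_p                               # skin depth, cm
print(f"skin depth ~ {delta:.1e} cm, delta/l ~ {delta / l:.1e}")
```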
The skin depth scales are exactly the dissipative scales. After a few inverse plasma frequency times, the Weibel instability should isotropise the plasma, bringing it up to the shock adiabatic (Taub adiabatic). The small-scale fields should be damped. If the Weibel instability were the full story, magnetic fields would exist only in a narrow layer (a few skin depths thick) in the vicinity of the shock front. Most of the blast wave plasma would be free of magnetic fields. The synchrotron emission would be negligible.
In reality, close to equipartition magnetic fields should exist on the large scales. This is what happens in the astrophysical plasmas where we can measure the magnetic fields. But we do not understand how the strong large-scale fields could be generated by the blast wave. It may be that the unshocked plasma is already pre-heated by the jet, and the large-scale fields are close to equipartition even before the blast wave passage.
## 4 Summary
GRB optical afterglows might be strongly polarized, up to tens of percent. We do not understand the origin of the magnetic field in the blast synchrotron emission model of GRB afterglows, but observations of highly polarized afterglows may demonstrate the presence of the magnetic field.
###### Acknowledgements.
I thank John Bahcall, Daniel Eisenstein, and David Hogg for useful discussions. I thank Bruce Draine and Pawan Kumar for useful information. This work was supported by NSF PHY-9513835.
no-problem/9905/astro-ph9905095.html
# An SZ Temperature Decrement - X-ray Luminosity Relation for Galaxy Clusters.
## 1 Introduction
The X-ray emission from galaxy clusters has enabled the study of what is assumed to be the largest virialized systems in the universe. The X-ray electron temperature, $`T_\mathrm{e}`$, measures the depth of the galaxy cluster potential, while the X-ray luminosity, $`L`$, emitted as thermal bremsstrahlung by the intracluster plasma, measures primarily the baryonic number density within this potential. When combined with the fact that the electron temperature is a robust estimator of the galaxy cluster total mass, $`M_{\mathrm{tot}}`$, the X-ray emission observations can be used in cosmological studies to derive constraints on the power spectrum of the initial density perturbations and on cosmological parameters (e.g., Eke et al. 1998; Oukbir & Blanchard 1997; Bahcall, Fan & Cen 1997). However, most of the important conclusions from such studies depend on the accuracy of the scaling relations used between the cluster properties, such as the cluster X-ray luminosity, total mass, gas mass, and electron temperature. Independent of the large scale distribution, the baryonic component of individual clusters has been used to constrain the mass density of the universe, $`\mathrm{\Omega }_m`$ (e.g., Briel et al. 1992; White et al. 1993; White & Fabian 1995; David, Jones & Forman 1995; Evrard 1997), but such studies can be subject to variations in physical properties from one cluster to another, e.g., due to cluster cooling flows (Fabian et al. 1994; White, Jones & Forman 1997; Allen & Fabian 1998; Markevitch 1998).
Apart from X-ray emission observations, another well known probe of the intracluster gas distribution is the Sunyaev-Zeldovich (SZ) effect (Sunyaev & Zeldovich 1970, 1980). The SZ effect is the scattering of the cosmic microwave background (CMB) radiation by the hot electrons of the X-ray emitting gas through an inverse-Compton process, producing a distortion in the CMB spectrum. In recent years, increasingly sensitive observations of galaxy clusters, first with single-dish telescopes and now with interferometers, have produced accurate maps of the CMB temperature change resulting from the SZ effect. Unlike the X-ray emission from galaxy clusters, the magnitude of the CMB temperature decrement due to the SZ effect is independent of redshift, and provides a direct probe of the distant universe. It is likely that the potential of the SZ effect as a cosmological probe of the distant universe is yet to be fully realized, and more work, both in combination with and independent of the X-ray properties of galaxy clusters, is required. The combination of X-ray emission and SZ effect towards a given cluster can be used to determine the distance to that cluster, from which the Hubble constant can be derived. Also, the SZ effect, which measures the gas mass within galaxy clusters, can be used to constrain the total mass density of the universe (see, e.g., Myers et al. 1997). Other than cosmological uses, the SZ effect and the X-ray emission from clusters can be used to study cluster gas physics, since these two observables depend differently on the electron number density and temperature distributions.
Given the ever increasing galaxy cluster SZ data, we have initiated a study to investigate the ways in which the SZ effect can be used both as a cosmological tool, as well as a tool to understand the distribution and physical properties of baryonic content within clusters. Here in this paper, we present the observed relation between the CMB temperature change due to the SZ effect, $`\mathrm{\Delta }T_{\mathrm{SZ}}`$, and the X-ray luminosity, $`L`$, of a sample of galaxy clusters. In Section 2, we present a brief introduction to the SZ effect and the X-ray emission, and formulate the expected relation between $`\mathrm{\Delta }T_{\mathrm{SZ}}`$ and $`L`$. In Section 3, we present the used cluster sample in which SZ effect has been observed and derive the $`\mathrm{\Delta }T_{\mathrm{SZ}}L`$ relation, while in Section 4, we discuss the observed relation in terms of the galaxy cluster properties. In the same section, we use this relation as an application to study the cluster contribution to arcminute scale cosmic microwave background anisotropies.
## 2 Theory
Briefly, the SZ effect is a distortion of the cosmic microwave background (CMB) radiation by inverse-Compton scattering of CMB photons off the thermal electrons within the hot intracluster medium (Sunyaev & Zeldovich 1970, 1980). The observed change in the CMB brightness temperature is:
$$\frac{\mathrm{\Delta }T}{T_{\mathrm{CMB}}}=f(x)\int \left(\frac{k_BT_e}{m_ec^2}\right)n_e\sigma _T𝑑l,$$
(1)
where
$$f(x)=\left[\frac{x(e^x+1)}{e^x-1}-4\right]$$
(2)
is the frequency dependence with $`x=h\nu /k_BT_{\mathrm{CMB}}`$, $`T_{\mathrm{CMB}}=2.728\pm 0.002`$ K (Fixsen et al. 1996), and $`n_e`$, $`T_e`$ and $`\sigma _T`$ are the electron density, the electron temperature and the cross section for Thomson scattering. The integral is performed along the line of sight through the cluster. We refer the reader to a recent review by Birkinshaw (1998) on the SZ effect and its observation. In the Rayleigh-Jeans (RJ) part of the frequency spectrum, $`f(x)=-2`$.
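The frequency dependence $`f(x)`$ can be checked numerically. The snippet below (Python) evaluates it at a few representative frequencies, confirming the RJ limit $`f(x)\to -2`$ at low frequencies and the change of sign of the thermal SZ effect near 217 GHz.

```python
import numpy as np

h = 6.626e-34          # J s
k_B = 1.381e-23        # J/K
T_CMB = 2.728          # K

def f_sz(nu_ghz):
    """Non-relativistic SZ frequency dependence f(x) = x(e^x+1)/(e^x-1) - 4."""
    x = h * nu_ghz * 1e9 / (k_B * T_CMB)
    return x * (np.exp(x) + 1.0) / (np.exp(x) - 1.0) - 4.0

for nu in (15.0, 30.0, 150.0, 217.0, 350.0):
    x = h * nu * 1e9 / (k_B * T_CMB)
    print(f"nu = {nu:5.0f} GHz: x = {x:5.2f}, f(x) = {f_sz(nu):+.2f}")
```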
The other important observable of the hot intracluster gas is the X-ray emission, whose surface brightness $`S_X`$ can be written as:
$$S_X=\frac{1}{4\pi (1+z)^3}\int n_e^2\mathrm{\Lambda }_e𝑑l,$$
(3)
where $`z`$ is the redshift and $`\mathrm{\Lambda }_e(\mathrm{\Delta }E,T_e)`$ is the X-ray spectral emissivity of the cluster gas due to thermal Bremsstrahlung emission within a certain energy band $`\mathrm{\Delta }E`$ ($`\mathrm{\Lambda }_e\propto T_e^{1/2}`$). By combining the SZ intensity change and the X-ray emission observations of a given cluster, the angular diameter distance to the cluster can be derived due to the different dependence of the X-ray emission and the SZ effect on the electron number density, $`n_e`$ (e.g., Cavaliere et al. 1977). Combining the distance measurement with redshift allows a determination of the Hubble constant, H<sub>0</sub>, as a function of certain cosmological parameters (e.g., see, Hughes & Birkinshaw 1998). Other than the derivation of the Hubble constant, SZ effect and X-ray emission can in principle be used to constrain the cosmological parameters based on distance measurements of a large sample of galaxy clusters with a wide range of redshift.
The present paper discusses the relation between $`\mathrm{\Delta }T_{\mathrm{SZ}}`$ and $`L`$, the X-ray luminosity, for a sample of galaxy clusters. We can estimate the expected relation based on the $`n_e`$ and $`T_e`$ dependences of the SZ effect and the X-ray emission. The SZ effect is due to the pressure integrated along the line of sight, i.e. $`\mathrm{\Delta }T_{\mathrm{SZ}}\propto n_\mathrm{e}T_\mathrm{e}`$, while the X-ray emission is due to thermal Bremsstrahlung with $`L\propto n_\mathrm{e}^2T_\mathrm{e}^{1/2}`$. Here, we ignore the contribution from X-ray line emission. By removing the $`n_\mathrm{e}`$ dependence between $`\mathrm{\Delta }T_{\mathrm{SZ}}`$ and $`L`$, one gets $`\mathrm{\Delta }T_{\mathrm{SZ}}\propto L^{1/2}T_\mathrm{e}^{3/4}`$. The relation between $`L`$ and $`T_\mathrm{e}`$ has been well studied based on numerical simulations (e.g., Cavaliere et al. 1997) and observed data (e.g., Mushotzky & Scharf 1997; Allen & Fabian 1998; Arnaud & Evrard 1998). Assuming that $`L\propto T_\mathrm{e}^\alpha `$,
$$\mathrm{\Delta }T_{\mathrm{SZ}}\propto L^{\frac{1}{2}+\frac{3}{4\alpha }}$$
(4)
Based on numerical simulations, Cavaliere et al. (1997) predict that $`\alpha =5`$ at the scale of groups, flattening to $`\alpha =3`$ for rich clusters and saturating at $`\alpha =2`$ for high temperature clusters. Currently, the SZ effect has been detected towards rich clusters with moderate to high electron temperatures, resulting in $`\mathrm{\Delta }T_{\mathrm{SZ}}\propto L^{0.65}`$ to $`L^{0.88}`$ when $`\alpha `$ varies from 5 to 2. The observed data currently suggest that $`\alpha \simeq 2.8`$, less than previous studies which suggested $`\alpha \simeq 3`$ to 3.3 (e.g., David et al. 1993).
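The range of exponents quoted above follows directly from Eq.(4), as the short check below (Python) shows:

```python
# Exponent of the Delta T_SZ - L relation, 1/2 + 3/(4*alpha), from Eq. (4)
for alpha in (2.0, 3.0, 5.0):
    print(f"alpha = {alpha}: Delta T_SZ ~ L^{0.5 + 3.0 / (4.0 * alpha):.2f}")
```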
Also, since $`\mathrm{\Delta }T_{\mathrm{SZ}}`$ is a measurement of the pressure along the line of sight to the cluster, the integral of $`\mathrm{\Delta }T_{\mathrm{SZ}}`$ through a cylindrical cut of the cluster is directly proportional to the gas mass within that cylinder. This assumes that the cluster gas is isobaric. If we expect the gas mass to scale with luminosity according to a relation of the form $`M_{\mathrm{gas}}\propto L^\gamma `$, then,
$$\mathrm{\Delta }T_{\mathrm{SZ}}\propto L^\gamma .$$
(5)
The second estimate for the $`\mathrm{\Delta }T_{\mathrm{SZ}}L`$ relation should be more accurate since the measurement of gas mass accounts for the integrated flux along the line of sight, while the former estimate relies slightly on the assumption of spherical geometry for galaxy clusters and that no cluster gas clumping is present.
## 3 Data
In order to derive the observed relation between $`\mathrm{\Delta }T_{\mathrm{SZ}}`$ and $`L`$, we have compiled a sample of galaxy clusters for which SZ data are available from literature, and for which accurate X-ray results are also available. We list these clusters, X-ray luminosities in the 2 to 10 keV band, electron temperature, SZ temperature decrement and references to each of the X-ray luminosity, temperature and SZ experiments in Table 1.
Given that the tabulated SZ observations were made at different frequencies, we have scaled the SZ temperature decrement to the RJ part of the spectrum so that the comparison between the SZ effect and the X-ray luminosity of galaxy clusters is uniform. Also, some of the SZ experiments ignored the higher order corrections to the inverse-Compton scattering in evaluating the SZ temperature decrement. Using the relativistic corrections presented by Itoh et al. (1997), we have corrected for the SZ temperature decrement, which are mainly important for clusters with high electron temperatures ($`>`$ 10 keV). Such corrections only change the SZ temperature decrement by about 3% to 5%, but the relation between $`\mathrm{\Delta }T_{\mathrm{SZ}}`$ and $`L`$ presented here is not heavily dependent on such corrections.
Another uncertainty associated with the tabulated values is the accuracy of X-ray luminosities. Except for Cl0016, we have used the published luminosity values from Allen & Fabian (1998) and Arnaud & Evrard (1998). These studies have taken into account the variations in the X-ray luminosities due to cluster cooling flows, so the values tabulated here should be unbiased from cooling flow effects. However, we note that these luminosities may have both statistical and systematic uncertainties of the order of 15%, resulting from effects ranging from poor calibration to uncertainties associated with the removal of individual cluster cooling flow regions in calculating the luminosities. Our SZ cluster sample is primarily made of interferometric observations of the SZ effect (Carlstrom et al. 1996; Grainge et al. 1993), SuZIE observations at the Caltech Sub-mm Observatory (Holzapfel et al. 1997), and OVRO single-dish observations of the low redshift clusters (Myers et al. 1997).
The published SZ temperature decrement values in Myers et al. (1997) are not corrected for the beam dilution and switching. However, based on modeling of the cluster gas distribution for individual clusters, the authors have calculated various efficiencies in this process (see, Table 8 in Myers et al. 1997). We used these values to convert the observed SZ temperature decrements to beam-corrected values, which are presented in Table 1.
In Fig. 1, we show the observed SZ temperature decrement vs. the X-ray luminosity in the 2 to 10 keV band, to which we have fitted a relation of the form $`\mathrm{\Delta }T_{\mathrm{SZ}}=AL^\kappa `$. For clusters with multiple SZ measurements, we have used the mean value of the reported temperature decrement but have appropriately scaled the uncertainty so that it covers the whole range suggested by the two separate measurements. Using a maximum-likelihood minimization, the data are best explained by:
$$\mathrm{\Delta }T_{\mathrm{SZ}}=(0.17\pm 0.10)\left(\frac{L}{10^{44}h_{50}^{-2}\mathrm{ergs}\mathrm{s}^{-1}}\right)^{0.61\pm 0.18}\mathrm{mK},$$
(6)
where the uncertainty is the 1$`\sigma `$ statistical error.
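As a quick consistency check of the best-fit relation, the snippet below (Python) evaluates Eq.(6) for a fiducial luminosity and propagates the quoted statistical errors; the fiducial luminosity is an assumed value, and treating the errors on the normalization and slope as independent is an approximation since the fit parameters are correlated.

```python
import numpy as np

A, sig_A = 0.17, 0.10          # normalization, mK
k, sig_k = 0.61, 0.18          # slope
L44 = 10.0                     # assumed luminosity in units of 1e44 h50^-2 erg/s

dT = A * L44**k                # predicted decrement, mK
# crude error propagation, ignoring the (A, k) covariance
sig_dT = dT * np.sqrt((sig_A / A)**2 + (np.log(L44) * sig_k)**2)
print(f"L = {L44}e44 erg/s -> Delta T_SZ = {dT:.2f} +/- {sig_dT:.2f} mK")
```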
In order to test the expected $`\mathrm{\Delta }T_{\mathrm{SZ}}`$ and $`L`$ relation based on the X-ray emission data through measurements of the cluster gas mass, we compiled a list of gas mass estimates from published data. In Fig. 2, we show the gas mass within 0.5 Mpc of the cluster center as derived by White et al. (1997) for a sample of $``$ 150 clusters. The derived relation between gas mass, $`M_{\mathrm{gas}}`$, and the cluster luminosity, $`L`$, is:
$`M_{\mathrm{gas}}=(21.42\pm 3.16)`$ $`\left({\displaystyle \frac{L}{10^{44}h_{50}^{-2}\mathrm{ergs}\mathrm{s}^{-1}}}\right)^{0.66\pm 0.06}`$
$`\times 10^{12}\mathrm{h}_{50}^{-2.5}\mathrm{M}_{\odot },`$
which is very similar to the relation found by Shimasaku (1997) for 40 clusters studied in Jones & Forman (1984) and Arnaud et al. (1992).
## 4 Discussion
For the SZ observational data in Table 1, we derived $`\mathrm{\Delta }T_{\mathrm{SZ}}\propto L^{0.61\pm 0.18}`$. The slope is consistent with the numerically simulated values for $`\alpha `$ ranging from 5 to 3, but slightly inconsistent with an $`LT`$ relation of the form $`L\propto T^2`$, which is expected for clusters under the self-similar model. The current observational data suggest that $`L\propto T^{2.6}`$ (Markevitch 1998) to $`T^{2.8}`$ (Arnaud & Evrard 1998), which is consistent with the present estimate of the $`\mathrm{\Delta }T_{\mathrm{SZ}}L`$ slope.
The $`M_{\mathrm{gas}}L`$ relation, as derived from the observed mass data for a large sample of clusters in White et al. (1997), suggests a slope of $`0.66\pm 0.06`$ for the $`\mathrm{\Delta }T_{\mathrm{SZ}}L`$ relation, which is again consistent with the observed value. Since the SZ temperature decrement is a direct estimate of the cluster gas mass, and assuming that the measured gas mass is essentially similar to the gas mass the SZ experiments would observe based on the geometry, then $`\mathrm{\Delta }T_{\mathrm{SZ}}\propto L^{0.66\pm 0.06}`$. However, effects due to cluster projections and other associated systematic errors (see below) may produce an observed relation slightly different from the one expected based on these simple arguments. We note that the observed $`M_{\mathrm{gas}}L`$ relation is based on the gas masses within the inner 0.5 Mpc from the cluster centers. Depending on the redshift and the instrumental properties (beam size, resolution, etc.), individual SZ experiments may be sensitive to a different radius from the cluster center. The $`\mathrm{\Delta }T_{\mathrm{SZ}}`$ values are derived by modeling the observed flux over a radius larger than 0.5 Mpc. Thus, comparison of the two relations may be slightly problematic; however, since the total gas mass out to a larger radius would scale in a self-similar manner, we do not expect the slope to be affected. However, comparison of the normalizations is not currently possible. Also, we have used two different estimates of luminosities – luminosities in 2 to 10 keV for the $`\mathrm{\Delta }T_{\mathrm{SZ}}L`$ relation and bolometric luminosities for the $`M_{\mathrm{gas}}L`$ relation — but here again, we do not expect the slope to have been affected systematically by these differences.
Other than statistical uncertainties associated with the measurement of SZ decrements and X-ray luminosities, it is likely that there are additional effects contributing to the observed scatter in the $`\mathrm{\Delta }T_{\mathrm{SZ}}L`$ relation. Such contributions may come from projection effects of galaxy clusters, which are mostly modeled using spherical geometries, variations in gas mass fraction from one cluster to another, and effects due to gas clumping and nonisothermality.
For example, if a cluster is prolate and elongated along the line of sight, then the observed SZ temperature decrement would be larger than what is expected based on the X-ray properties of that cluster (see Cooray 1998). In such a case, one may also find a substantially lower value for the Hubble constant. Since we are considering the galaxy cluster sample in a statistical manner, derived relations such as $`\mathrm{\Delta }T_{\mathrm{SZ}}L`$ should be unbiased by effects that may arise due to cluster projections. However, this requires that the clusters in the sample are distributed randomly, so that the whole sample is unbiased. This is not likely to be true for the current sample, as most of the current SZ targets are clusters which are likely to have been previously studied due to certain properties. Such clusters, which may have been selected due to high X-ray luminosities or gravitational lensing effects, are likely to be prolate clusters elongated along the line of sight such that the observed decrement may be slightly higher than the value expected for the luminosity of that cluster. This is more likely to be the case with strong gravitational lensing clusters; the fact that both A1689 and A2163 have similar temperature decrements but different X-ray luminosities clearly suggests this possibility. Also, high luminosity clusters are likely to be strong cooling flows, but the luminosity values used here, except for Cl0016, have been corrected for such cooling flow contributions. For the present sample, if biases exist either at the low or high end of the luminosities, then the derived relation may have been affected.
From the present SZ sample, A2218, A773, and Cl0016 have been observed by multiple observational programs. The differences between these separate measurements are of the order of 15% to 25%. Other than the physical systematic effects discussed above, it is likely that the derived relation has an additional intrinsic dispersion as high as 25%. Such differences are likely to arise when clusters are modeled using different parameters and when observational effects, such as the beam dilution produced by the instrument, are not properly taken into account. It is likely that the $`\mathrm{\Delta }T_{\mathrm{SZ}}L`$ relation will be properly studied using carefully selected samples of galaxy clusters that are planned to be observed with ground-based interferometers and space-based observatories such as PLANCK.
The current cluster sample used to derive the $`\mathrm{\Delta }T_{\mathrm{SZ}}L`$ relation ranges in redshift from 0.02 to 0.54, with only one cluster beyond a redshift of 0.2. Therefore, we are unable to study any redshift evolution of this relation. We note, however, that when using this relation to study clusters at high redshifts, it may be important to consider possible evolutionary effects. If such effects exist in the $`\mathrm{\Delta }T_{\mathrm{SZ}}L`$ relation, they would be primarily due to the evolution of the X-ray luminosity function, but other effects, such as a systematic change in the cluster baryonic gas mass fraction with redshift, can also produce deviations in the $`\mathrm{\Delta }T_{\mathrm{SZ}}L`$ relation.
In order to test the accuracy of the derived relation between the SZ effect and the X-ray luminosity of galaxy clusters, we now consider additional observable cluster properties. Since the X-ray luminosity and the gas mass of a cluster are measurements of the baryonic content, the presented $`\mathrm{\Delta }T_{\mathrm{SZ}}L`$ relation is primarily a probe of the baryonic mass distribution. When the $`\mathrm{\Delta }T_{\mathrm{SZ}}L`$ relation is combined with scaling relations such as $`LT_\mathrm{e}`$ and $`M_{\mathrm{tot}}T_\mathrm{e}`$ (Hjorth, Oukbir & Kampen 1998), one can constrain both the baryonic and dark matter distributions separately. In Fig. 3, we show the observed temperature decrements as a function of the cluster electron temperatures in Table 1. Even though the involved uncertainties are high, we find that the $`\mathrm{\Delta }T_{\mathrm{SZ}}T_\mathrm{e}`$ relation is:
$$\mathrm{\Delta }T_{\mathrm{SZ}}=(0.56\pm 0.51)\times 10^{-3}\left(\frac{T_\mathrm{e}}{\mathrm{keV}}\right)^{2.35\pm 0.85}\mathrm{mK}.$$
(8)
The slope of this relationship is again consistent with simple scaling-law arguments.
The luminosity-gas mass relation observed from the X-ray measurements in Fig. 2, suggests $`\mathrm{\Delta }T_{\mathrm{SZ}}\propto L^{0.64\pm 0.06}`$, which is in good agreement with the relation derived from the SZ observational data. As suggested earlier, this relation is more likely to be accurate since the integrated pressure along the line of sight is simply proportional to the gas mass within the line of sight, which is the gas mass within the cylinder defined by geometry. A direct probe of the total cluster mass along the line of sight is the mass derived based on gravitational lensing observations. Smail et al. (1997) studied lensing properties of a sample of galaxy clusters observed with the Hubble Space Telescope and measured both weak and strong lensing properties of the clusters. They derived a relationship between cluster luminosity, $`L`$, and mean shear, $`<\gamma >`$, of the form:
$$<\gamma >=(0.074\pm 0.017)\times \left(\frac{L}{10^{44}h_{50}^{-2}\mathrm{ergs}\mathrm{s}^{-1}}\right)^{0.58\pm 0.23}.$$
(9)
Here, $`\gamma `$ is a measurement of the average tangential shear strength of galaxy clusters, and is directly proportional to the cluster total mass responsible for gravitationally lensing the background galaxies towards the foreground cluster. Thus, the total mass can be written as $`M_{\mathrm{tot}}\propto L^{0.58\pm 0.23}`$. Since $`M_{\mathrm{gas}}\propto \mathrm{\Delta }T_{\mathrm{SZ}}\propto L^{0.66\pm 0.06}`$, we find that the ratio $`f_{\mathrm{gas}}\equiv M_{\mathrm{gas}}/M_{\mathrm{tot}}`$ scales as $`L^{0.08\pm 0.24}`$. This ratio measures the cluster gas mass fraction, which has been used in the literature to constrain the total mass density of the universe based on the baryonic mass density derived from nucleosynthesis arguments (see, e.g., Evrard 1997). It is likely that this ratio is independent of the cluster luminosity, suggesting that the gas mass fraction within clusters is constant from one cluster to another. Recently, Arnaud & Evrard (1998) studied the changes in the cluster gas mass fraction from one cluster to another and suggested that these changes are likely to be due to heating processes within clusters, such as winds. If such effects exist, then the $`\mathrm{\Delta }T_{\mathrm{SZ}}L`$ relation would also be affected, contributing to the observed dispersion.
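The slope and uncertainty of $`f_{\mathrm{gas}}`$ quoted above follow from subtracting the two power-law indices and adding their errors in quadrature (assuming independent errors), as the short check below (Python) confirms:

```python
import numpy as np

gas_slope, gas_err = 0.66, 0.06      # M_gas ~ L^(0.66 +/- 0.06)
tot_slope, tot_err = 0.58, 0.23      # M_tot ~ L^(0.58 +/- 0.23)

fgas_slope = gas_slope - tot_slope
fgas_err = np.hypot(gas_err, tot_err)
print(f"f_gas ~ L^({fgas_slope:.2f} +/- {fgas_err:.2f})")   # ~ L^(0.08 +/- 0.24)
```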
We use the X-ray luminosity function (XLF) of clusters of galaxies from the ROSAT Brightest Cluster Sample (Ebeling et al. 1997), which is an X-ray selected, flux limited sample of 172 clusters compiled from the ROSAT All-Sky Survey data, to study the local cluster contribution to arcminute-scale cosmic microwave background (CMB) anisotropies. In order to compare with the current limits on the Compton $`y`$-parameter based on FIRAS observations, we convert our $`\mathrm{\Delta }T_{\mathrm{SZ}}L`$ relation to a $`yL`$ relation using:
$$y(L)=\frac{1}{2}\frac{\mathrm{\Delta }T_{\mathrm{SZ}}(L)}{T_{\mathrm{CMB}}}.$$
(10)
The XLF is represented by:
$$\varphi (L)dL=AL^{-\alpha }\mathrm{exp}\left(-\frac{L}{L_{*}}\right)dL,$$
(11)
where $`A=(1.59\pm 0.36)\times 10^{-7}h_{50}^3\mathrm{Mpc}^{-3}\times (10^{44}h_{50}^{-2}\mathrm{ergs}\mathrm{s}^{-1})^{\alpha -1}`$, $`L_{*}=(8.46\pm 2.35)\times 10^{44}h_{50}^{-2}\mathrm{ergs}\mathrm{s}^{-1}`$ and $`\alpha =1.25\pm 0.20`$. Here, $`L`$ is the luminosity in the 2 to 10 keV band. Using the luminosity function and the $`y(L)`$ relation, we can write the average $`<y>`$ parameter due to galaxy clusters as:
$$<y>=\int _{L_1}^{L_2}\int y(L)\varphi (L)𝑑V𝑑L$$
(12)
where $`L_1`$ and $`L_2`$ are the lower and upper limits of XLF, respectively. In order to describe the cluster volume $`dV`$ we adopt a $`\beta `$-model for the cluster gas distribution with a core radius $`R_c`$. We assume a $`\beta `$ of 2/3 to describe all clusters. We allow for the total cluster scale radius $`R`$ to vary with luminosity of the form, $`R\propto L^\kappa `$, where $`\kappa `$ is a free-parameter. According to Mohr & Evrard (1997), the X-ray size scales with temperature as $`R\propto T^{0.93\pm 0.11}`$. With the $`LT`$ relation, the X-ray size is expected to scale with luminosity as $`R\propto L^{0.2}`$ to $`L^{0.5}`$. Using the tabulated data in White et al. (1997), we obtained a relation of the form,
$$R_\mathrm{c}(L)=0.46\left(\frac{L}{10^{44}h_{50}^{-2}\mathrm{ergs}\mathrm{s}^{-1}}\right)^{0.33\pm 0.15}\mathrm{Mpc},$$
(13)
between the tabulated core radii for clusters, $`R_\mathrm{c}`$, and their luminosities, $`L`$. We note that the core-radii from White et al. (1997) do not necessarily represent the true underlying scale-size of the X-ray emission. In their analysis, each cluster core-radius was treated as a parameter which was varied to obtain a flat temperature profile, under their assumption for the form of the gravitational potential. For the purpose of the present calculation, however, we are only interested in core-radii as a representation of the relative distribution of size-scales. Since we also vary most of the parameters, such as the slope in the $`RL`$ relation, and since our final results are presented as a ratio between the cluster radius and the core-radius, the use of a $`RL`$ relation from the data in White et al. (1997) should not affect the final conclusions. For the rest of the paper we assume that $`\kappa `$ ranges from 0 to 1. In order to test the accuracy of the derived $`\mathrm{\Delta }T_{\mathrm{SZ}}L`$ relation, we also vary its slope $`\gamma `$, between 0 and 2.5.
In Fig. 4 we show the derived $`<y>`$ values as a function of $`R/R_c`$. It is likely that cluster sizes, in general, are of the order of 15 to 25 core radii. Each of the individual plots corresponds to a different $`RL`$ relation at steps of 0.25 between 0 and 1, while each of the curves represents a different $`\gamma `$ value at steps of 0.5 between 0 and 2.5. For our preferred values of $`\kappa `$ and $`\gamma `$, $`\kappa \simeq 0.25`$ to 0.5 and $`\gamma \simeq 0.5`$ to 1.0, we find that the cluster contribution to the $`y`$-parameter through the SZ effect is at least two orders of magnitude less than the current upper limit based on FIRAS observations of $`2.5\times 10^{-5}`$ (Mather et al. 1994). If all clusters had a constant size, independent of the X-ray luminosity, then $`<y>`$ could be as high as $`10^{-6}`$, but this possibility is excluded by the observational data, which suggest that cluster size varies with luminosity. In general, we find that the cluster contribution to the $`y`$-parameter is $`\sim 2\times 10^{-7}`$, which is consistent with previous estimates (see, e.g., Ceballos & Barcons 1994). The XLF used here has been calculated for clusters out to redshifts of 0.3, so that the derived results may be valid for clusters out to the same redshift. However, evidence for no evolution in the XLF out to redshifts of 0.8 has recently appeared in the literature (Rosati et al. 1998), and thus our results may be valid to a much higher redshift. As studied in Barbosa et al. (1996), in a low $`\mathrm{\Omega }_m`$ universe, the galaxy cluster contribution to the Compton $`y`$ parameter may be as high as $`10^{-5}`$, with most of the contribution coming from clusters at $`z>1`$.
Other than using the $`\mathrm{\Delta }T_{\mathrm{SZ}}L`$ relation as a probe of cluster physics, the observed $`\mathrm{\Delta }T_{\mathrm{SZ}}L`$ relation can also be used in the study of high redshift clusters, where SZ temperature decrements have been observed but no X-ray emission has been detected. For example, given that we now know the $`\mathrm{\Delta }T_{\mathrm{SZ}}L`$ relation, it is possible to constrain the expected X-ray luminosity of such a detection and then, based on the observed X-ray flux threshold, put a lower limit on the redshift. Detection of such high redshift clusters has strong implications for cosmological world models, and based on the expected redshift and mass of such clusters, one can constrain cosmological parameters with high accuracy (e.g., Bartlett, Blanchard & Barbosa 1998). The $`\mathrm{\Delta }T_{\mathrm{SZ}}L`$ and $`M_{\mathrm{gas}}L`$ relations allow one to directly relate the observed temperature decrements to gas mass and, under the assumption of a constant baryonic fraction, to the total mass of the cluster. It should then be possible to apply the Press-Schechter (PS) formalism to the number density of such cluster masses and constrain the cosmological parameters, with little dependence on the X-ray observations. However, to perform such an analysis one requires data from a complete sample of galaxy clusters, and such SZ samples are currently not available; observations of a luminosity-selected, unbiased sample of galaxy clusters would be useful in the future to constrain the cosmological parameters using scaling relations such as the one presented here.
## 5 Conclusions
Based on observations of the Sunyaev-Zeldovich effect in galaxy clusters and the X-ray luminosity, we have derived a relation between the two observables. We have studied this relation in terms of other cluster properties, and have found it to agree with the $`LT`$, $`M_{\mathrm{gas}}L`$ and $`M_{\mathrm{tot}}L`$ relations. Using the observed $`\mathrm{\Delta }T_{\mathrm{SZ}}L`$ relation and the X-ray luminosity function for galaxy clusters, we have derived the local cluster contribution to the Compton $`y`$-parameter through the SZ effect. These values are at least two orders of magnitude lower than the current upper limit based on FIRAS observations. Given the planned observations of the SZ effect in large samples of galaxy clusters, such as with PLANCK and other ground-based interferometers, it is likely that a relation such as $`\mathrm{\Delta }T_{\mathrm{SZ}}L`$ would be useful both in making predictions and in deriving important cosmological parameters. We leave such studies to be carried out in the future, using complete and unbiased samples of galaxy clusters.
## Acknowledgements
I would like to thank David A. White, the referee, for constructive comments on the manuscript, John Carlstrom and Bill Holzapfel for helpful suggestions on an early draft of the paper, and Dan Reichart for useful discussions.
no-problem/9905/astro-ph9905083.html
# Cosmological Implications of the Fundamental Relations of X-ray Clusters
## 1 Introduction
Galaxy clusters are the largest virialized objects in the universe and provide useful cosmological probes, since several properties of clusters are strongly dependent on the cosmological parameters. For example, the statistics of X-ray clusters can serve as an excellent probe of cosmology. The abundance of clusters and its redshift evolution can be used to determine the cosmological density parameter, $`\mathrm{\Omega }_0`$, and the rms amplitude of density fluctuations on the fiducial scale $`8h^{-1}`$ Mpc, $`\sigma _8`$ (e.g. White, Efstathiou, & Frenk 1993; Eke, Cole, & Frenk 1996; Viana & Liddle 1996; Bahcall, Fan & Cen 1997; Fan, Bahcall, & Cen 1997). Moreover, the temperature and luminosity functions of X-ray clusters are also used as cosmological probes. Taking account of the difference between cluster formation redshift and observed redshift, Kitayama & Suto (1996) computed the temperature and luminosity functions semi-analytically; comparing the predicted temperature ($`T`$) and luminosity ($`L_\mathrm{X}`$) functions with the observed ones, they conclude that $`\mathrm{\Omega }_0\simeq 0.2`$–$`0.5`$ and $`h\simeq 0.7`$. However, one tenacious problem in such investigations is the discrepancy in the $`L_\mathrm{X}T`$ relation between observations and simple theoretical predictions. This relation should also be an important probe of the formation history and cosmology.
In a separate paper (Fujita & Takahara 1999; hereafter Paper I), we have shown that the clusters of galaxies populate a planar distribution in the global parameter space $`(\mathrm{log}\rho _0,\mathrm{log}R,\mathrm{log}T)`$, where $`\rho _0`$ is the central gas density, and $`R`$ is the core radius of clusters of galaxies. This ’fundamental plane’ can be described as
$$X=\rho _0^{0.47}R^{0.65}T^{0.60}=constant.$$
(1)
We thus find that clusters of galaxies form a two-parameter family. The minor and major axes of the distribution are respectively given by
$$Y=\rho _0^{0.39}R^{0.46}T^{0.80},$$
(2)
$$Z=\rho _0^{0.79}R^{0.61}T^{0.039}.$$
(3)
The scatters of observational data in the directions of $`Y`$ and $`Z`$ are $`\mathrm{\Delta }\mathrm{log}Y=0.2`$ and $`\mathrm{\Delta }\mathrm{log}Z=0.5`$, respectively (Paper I). The major axis of this ’fundamental band’ ($`Z`$) is nearly parallel to the $`\mathrm{log}R\mathrm{log}\rho _0`$ plane, and the minor axis ($`Y`$) describes the $`L_\mathrm{X}T`$ relation.
In this Letter, we discuss cosmological implications of the relations we found in Paper I, paying particular attention to the two-parameter family nature of X-ray clusters and to the difference between cluster formation redshift and observed redshift. In §2, we use the spherical collapse model to predict the formation history of clusters of galaxies, and in §3, we predict the observable distribution of X-ray clusters.
## 2 Formation History of Clusters of Galaxies
In order to explain the observed relations between the density, radius, and temperature of clusters of galaxies, we predict them theoretically for a flat and an open universe with the spherical collapse model (Tomita 1969; Gunn & Gott 1972). For simplicity, we do not treat vacuum-dominated models in this paper. For a given initial density contrast, the spherical collapse model predicts the time at which a uniform spherical overdense region, which contains a mass $`M_{\mathrm{vir}}`$, gravitationally collapses. Thus, if we specify the cosmological parameters, we can obtain the collapse or formation redshifts of clusters. Moreover, the model predicts the average density of the collapsed region, $`\rho _{\mathrm{vir}}`$.
In Paper I, we showed that the observed fundamental band is described by the two independent parameters $`M_{\mathrm{vir}}`$ and $`\rho _{\mathrm{vir}}`$. In particular, the variation of $`\rho _{\mathrm{vir}}`$ is basically identified with the scatter of $`Z`$. Since the spherical collapse model treats $`\rho _{\mathrm{vir}}`$ and $`M_{\mathrm{vir}}`$ as two independent variables, it can be directly compared with the observed fundamental band, as long as we assume that the core radius and the mass of the core region are respectively a fixed fraction of the virial radius and of the virial mass at the collapse redshift, as is adopted in this paper. Although the model may be too simple to discuss cosmological parameters quantitatively, it can plainly distinguish the relations between the density, radius, and temperature in a low-density universe from those in a flat universe, as shown below.
For the spherical model, the virial density of a cluster is $`\mathrm{\Delta }_c`$ times the critical density of a universe at the redshift when the cluster collapsed ($`z_{\mathrm{coll}}`$). It is given by
$$\rho _{\mathrm{vir}}=\mathrm{\Delta }_c\rho _{\mathrm{crit}}(z_{\mathrm{coll}})=\mathrm{\Delta }_c\rho _{\mathrm{crit},0}E(z_{\mathrm{coll}})^2=\mathrm{\Delta }_c\frac{\mathrm{\Omega }_0\rho _{\mathrm{crit},0}(1+z_{\mathrm{coll}})^3}{\mathrm{\Omega }(z_{\mathrm{coll}})},$$
(4)
where $`\mathrm{\Omega }(z)`$ is the cosmological density parameter, and $`E(z)^2=\mathrm{\Omega }_0(1+z)^3/\mathrm{\Omega }(z)`$. The index 0 refers to the values at $`z=0`$. Note that the redshift-dependent Hubble constant can be written as $`H(z)=100hE(z)\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$. We fix $`h`$ at 0.5. In practice, we use the fitting formula of Bryan & Norman (1998) for the virial density:
$$\mathrm{\Delta }_c=18\pi ^2+60x-32x^2,$$
(5)
where $`x=\mathrm{\Omega }(z_{\mathrm{coll}})-1`$.
It is convenient to relate the collapse time in the spherical model with the density contrast calculated by the linear theory. We define the critical density contrast $`\delta _c`$ that is the value, extrapolated to the present time ($`t=t_0`$) using linear theory, of the overdensity which collapses at $`t=t_{\mathrm{coll}}`$ in the exact spherical model. It is given by
$`\delta _c(t_{\mathrm{coll}})`$ $`=`$ $`{\displaystyle \frac{3}{2}}D(t_0)\left[1+\left({\displaystyle \frac{t_\mathrm{\Omega }}{t_{\mathrm{coll}}}}\right)^{2/3}\right](\mathrm{\Omega }_0<1)`$ (6)
$`=`$ $`{\displaystyle \frac{3(12\pi )^{2/3}}{20}}\left({\displaystyle \frac{t_0}{t_{\mathrm{coll}}}}\right)^{2/3}(\mathrm{\Omega }_0=1)`$ (7)
(Lacey & Cole 1993), where $`D(t)`$ is the linear growth factor given by equation (A13) of Lacey & Cole (1993) and $`t_\mathrm{\Omega }=\pi H_0^{-1}\mathrm{\Omega }_0(1-\mathrm{\Omega }_0)^{-3/2}`$.
For a power-law initial fluctuation spectrum $`P\propto k^n`$, the rms amplitude of the linear mass fluctuations in a sphere containing an average mass $`M`$ at a given time is $`\delta \propto M^{-(n+3)/6}`$. Thus, the virial mass of clusters which collapse at $`t_{\mathrm{coll}}`$ is related to that at $`t_0`$ as
$$M_{\mathrm{vir}}(t_{\mathrm{coll}})=M_{\mathrm{vir}}(t_0)\left[\frac{\delta _c(t_{\mathrm{coll}})}{\delta _c(t_0)}\right]^{-6/(n+3)}.$$
(8)
Here, $`M_{\mathrm{vir}}(t_0)`$ is regarded as a variable because the actual amplitude of the initial fluctuations has a distribution. We relate $`t=t_{\mathrm{coll}}`$ to the collapse or formation redshift $`z_{\mathrm{coll}}`$, which depends on the cosmological parameters. Thus, $`M_{\mathrm{vir}}`$ is a function of $`z_{\mathrm{coll}}`$ as well as of $`M_{\mathrm{vir}}(t_0)`$. This means that for a given mass scale $`M_{\mathrm{vir}}`$, the amplitude takes a range of values, and thus spheres containing a mass of $`M_{\mathrm{vir}}`$ collapse over a range of redshifts. In the following, the slope of the spectrum is fixed at $`n=-1`$, unless otherwise mentioned; this value is typical of the standard cold dark matter scenario on cluster mass scales.
The virial radius and temperature of a cluster are then calculated by
$$r_{\mathrm{vir}}=\left(\frac{3M_{\mathrm{vir}}}{4\pi \rho _{\mathrm{vir}}}\right)^{1/3},$$
(9)
$$T_{\mathrm{vir}}=\frac{\mu m_\mathrm{H}}{3k_\mathrm{B}}\frac{GM_{\mathrm{vir}}}{r_{\mathrm{vir}}},$$
(10)
where $`\mu (=0.6)`$ is the mean molecular weight, $`m_\mathrm{H}`$ is the hydrogen mass, $`k_\mathrm{B}`$ is the Boltzmann constant, and $`G`$ is the gravitational constant.
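For concreteness, the chain of relations (4)–(10) can be evaluated numerically. The sketch below is only an illustration and not part of the original analysis: it implements the Einstein–de Sitter branch ($`\mathrm{\Omega }_0=1`$, for which $`\mathrm{\Delta }_c=18\pi ^2`$ and $`\delta _c(t_{\mathrm{coll}})/\delta _c(t_0)=1+z_{\mathrm{coll}}`$); the open-universe case would additionally need the Lacey & Cole (1993) growth factor. The default mass $`M_{\mathrm{vir}}(t_0)=10^{15}\mathrm{M}_{\odot }`$, the spectral slope $`n=-1`$ and $`h=0.5`$ are taken from the choices quoted in the text.

```python
# Minimal sketch of the spherical-collapse relations of Sec. 2 for the
# Einstein-de Sitter case (Omega_0 = 1).  All constants in cgs units.
import numpy as np

G    = 6.674e-8          # cm^3 g^-1 s^-2
k_B  = 1.381e-16         # erg K^-1
m_H  = 1.673e-24         # g
Msun = 1.989e33          # g
Mpc  = 3.086e24          # cm
mu   = 0.6               # mean molecular weight
h    = 0.5               # Hubble parameter used in the Letter

H0        = 100.0 * h * 1.0e5 / Mpc            # s^-1
rho_crit0 = 3.0 * H0**2 / (8.0 * np.pi * G)    # g cm^-3

def virial_quantities(z_coll, M_vir_t0=1.0e15 * Msun, n=-1.0):
    """Virial density, mass, radius and temperature of a cluster that
    collapses at z_coll, for Omega_0 = 1 (so Delta_c = 18 pi^2)."""
    Delta_c = 18.0 * np.pi**2
    rho_vir = Delta_c * rho_crit0 * (1.0 + z_coll)**3              # eq. (4)
    # For EdS, delta_c(t_coll)/delta_c(t_0) = (1 + z_coll), so eq. (8) gives
    M_vir = M_vir_t0 * (1.0 + z_coll)**(-6.0 / (n + 3.0))
    r_vir = (3.0 * M_vir / (4.0 * np.pi * rho_vir))**(1.0 / 3.0)   # eq. (9)
    T_vir = mu * m_H * G * M_vir / (3.0 * k_B * r_vir)             # eq. (10)
    return rho_vir, M_vir, r_vir, T_vir

for z in (0.0, 0.5, 1.0):
    rho, M, r, T = virial_quantities(z)
    print(f"z_coll={z:3.1f}  M={M/Msun:.2e} Msun  "
          f"r={r/Mpc:.2f} Mpc  kT={k_B*T/1.602e-9:.1f} keV")
```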
## 3 Results and Discussion
Since equations (4) and (8) show that $`\rho _{\mathrm{vir}}`$ and $`M_{\mathrm{vir}}`$ are functions of $`z_{\mathrm{coll}}`$ for a given $`M_{\mathrm{vir}}(t_0)`$, the virial radius $`r_{\mathrm{vir}}`$ and temperature $`T_{\mathrm{vir}}`$ are also functions of $`z_{\mathrm{coll}}`$ for a given $`M_{\mathrm{vir}}(t_0)`$ (equations 9 and 10). Thus, by eliminating $`z_{\mathrm{coll}}`$, the relations among them can be obtained. Since the observational values reflect mainly the structure of the core region while the theory predicts average values within the virialized region, we must specify the relation between the observed values $`(\rho _0,R,T)`$ and the theoretically predicted values $`(\rho _{\mathrm{vir}},R_{\mathrm{vir}},T_{\mathrm{vir}})`$. Since we assume that the mass distribution of clusters is similar, $`r_{\mathrm{vir}}\propto R`$ and $`T_{\mathrm{vir}}=T`$, emphasizing that $`r_{\mathrm{vir}}`$ is the virial radius when the cluster collapsed (see Salvador-Solé, Solanes, & Manrique 1998). In this case, the typical gas density of clusters obeys $`\rho _0\propto f\rho _{\mathrm{vir}}`$, where $`f`$ is the baryon fraction of the cluster. Since it is difficult to predict $`f`$ theoretically, we assume $`f\propto M_{\mathrm{vir}}^{0.4}`$, which is consistent with the observations and corresponds to $`X=constant`$ (Paper I). For definiteness, we choose
$$f=0.25\left(\frac{M_{\mathrm{vir}}}{10^{15}\mathrm{M}_{\mathrm{}}}\right)^{0.4}.$$
(11)
Figure 1 shows the predicted relations between $`f\rho _{\mathrm{vir}}`$, $`r_{\mathrm{vir}}`$, and $`T_{\mathrm{vir}}`$ for $`\mathrm{\Omega }_0=0.2`$ and $`1`$. Since we are interested only in the slope and extent of the relations, we do not specify $`M_{\mathrm{vir}}(t_0)`$ exactly. Moreover, since $`M_{\mathrm{vir}}(t_0)`$ has a distribution, we calculate for $`M_{\mathrm{vir}}(t_0)=10^{16}\mathrm{M}_{\odot }`$ and $`5\times 10^{14}\mathrm{M}_{\odot }`$. The lines in Figure 1 correspond to the major axis of the fundamental band or $`Z`$ because they are the one-parameter family of $`\rho _{\mathrm{vir}}`$. The width of the distribution of $`M_{\mathrm{vir}}(t_0)`$ represents the width of the band or $`Y`$. The observational data of clusters are expected to lie along these lines according to their formation period. However, when $`\mathrm{\Omega }_0=1`$, most of the observed clusters collapsed at $`z\sim 0`$ because clusters continue growing even at $`z=0`$ (Peebles 1980). Thus, the cluster data are expected to be distributed along the part of the lines close to the point of $`z_{\mathrm{coll}}=0`$ (segment ab). In fact, the Monte Carlo simulations of Lacey & Cole (1993) show that if $`\mathrm{\Omega }_0=1`$, most present clusters ($`M_{\mathrm{vir}}\sim 10^{15}\mathrm{M}_{\odot }`$) should have formed in the range $`z_{\mathrm{coll}}<0.5`$ (parallelogram abcd). When $`\mathrm{\Omega }_0=0.2`$, the growth rate of clusters decreases and cluster formation gradually ceases at $`z\sim 1/\mathrm{\Omega }_0-1`$ (Peebles 1980). Thus, cluster data are expected to be distributed between the point of $`z_{\mathrm{coll}}=0`$ and $`z_{\mathrm{coll}}=1/\mathrm{\Omega }_0-1`$ and should have a two-dimensional distribution (parallelogram ABCD).
The observational data are also plotted in Figure 1. The data are the same as those used in Paper I. For definiteness, we choose $`r_{\mathrm{vir}}=8R`$ and $`f\rho _{\mathrm{vir}}=0.06\rho _0`$. The figure shows that the model of $`\mathrm{\Omega }_0=0.2`$ is generally consistent with the observations. The slopes of lines both in the model of $`\mathrm{\Omega }_0=0.2`$ and $`\mathrm{\Omega }_0=1`$ seem to be consistent with the data, although the model of $`\mathrm{\Omega }_0=0.2`$ is preferable. However, the model of $`\mathrm{\Omega }_0=1`$ is in conflict with the extent of the distribution because this model predicts that the data of clusters should be located only around the point of $`z_{\mathrm{coll}}=0`$ in Figure 1.
Finally, we comment on the case of $`n=-2`$, which is suggested by analyses based on the assumption that clusters have just formed when they are observed (e.g. Markevitch 1998). Figure 2 is the same as Figure 1b, but for $`n=-2`$. The theoretical predictions are inconsistent with the observational results, because the theory predicts rapid evolution of the temperatures; there should be few clusters with high temperatures and small core radii.
The results of this Letter suggest $`\mathrm{\Omega }_0<1`$, and that the clusters of galaxies existing at $`z\sim 0`$ include those formed at various redshifts. We also show that a model with $`n\sim -1`$ is favorable. Importantly, the location of a cluster in these figures tells us its formation redshift. In order to derive the cosmological parameters more quantitatively, we should consider the merging history of clusters and predict the mass function of clusters at each $`z`$.
This work was supported in part by the JSPS Research Fellowship for Young Scientists.
# Statistical Physics of the Jamming Transition: The Search for Simple Models
## 1 Introduction
Many studies in physics concern the onset of instabilities leading to turbulence or some other form of chaotic break-up, the simplest being the transition from laminar to turbulent flow; the collapse of civil engineering structures is also much studied. There are not many studies of the reverse phenomenon, where some chaotic system becomes more regular, or even stops altogether. The jamming phenomenon exists in quantum as well as classical systems, but in this article we will discuss only the latter, and indeed only systems which are readily accessible to intuitive understanding.
The approach of this article will be to list the most common jamming systems, and see if there exist simple physical or mathematical systems which lead to interpretable experiments and soluble theories. In this paper, we consider mostly our own work, or studies that we know of in detail. The word “jamming” derives from systems coming to rest, and an archetypical problem is in the flow of granular or colloidal systems. We will try to answer the following questions:
* What is the jammed state?
* How does the jamming transition occur?
* What are the common features of the jamming transition in granular materials, colloidal suspensions and glasses?
In this paper, we confine ourselves to the discussion of the jamming transition in granular materials, colloids and glasses. We will not comment on other systems that exhibit the jamming transition, which are many: foams, vortices in superconductors, field lines in turbulent plasma etc. All these systems throw light on one another and exhibit universal behaviour. We will demonstrate that the jamming transition in these systems can be characterized by the slowing of response to external perturbations, and the onset of structural heterogeneities.
## 2 Granular Media
The jamming transition in dry granular media is a very common phenomenon which can be observed in everyday life. Stirring jammed sugar with a spoon can be hard but manageable. A jam within a silo can cause its failure. Nonetheless, such ubiquity does not make the problem less difficult. In this section we will not even attempt to analyze the dynamical problem of a granular flow coming to rest, and the onset of jamming. Instead, we will try to characterize the static jammed state of a granular material. The difficulty of a theoretical analysis lies in the fact that dense granular media can expand upon shearing, the well-known Reynolds dilatancy, which can be viewed as a counterpart of jamming . Consider a cylindrical vertical pipe in the following cases (see Figure 1):
* Rough walls, filled with pieces of twisted wire. This wire will entangle and not flow at all, indeed one does not need the pipe.
* Rough walls, with rough, approximately spherical particles. These can flow under certain conditions, but can also jam; it is a well studied problem in chemical engineering.
* Smooth walls, smooth spheres. This is difficult, perhaps impossible, to jam.
If one asks for the simplest granular material in the above list, it must be the third one if we model the granular material as an assembly of discrete rigid particles whose interactions with their neighbours are localized at point contacts. Therefore, the description of the network of intergranular contacts is essential for the understanding of the stress transmission and the onset of the jammed state in granular assemblies. The random geometry of a static granular packing can be visualized as a network of intergranular contacts. For any aggregate of rigid particles the transmission of stress from one point to another can occur only via the intergranular contacts. Therefore, the contact network determines the network of intergranular forces. In general, the contact network can have a coordination number varying within the system and different for every particular packing. It follows then that the network of intergranular forces is indeterminate i.e. the number of unknown forces is larger than the number of Newton’s equations of mechanical equilibrium. Thus, in order for the network of intergranular forces to be perfectly defined, the number of equations must equal the number of unknowns. This can be achieved by choosing the contact network with a certain fixed coordination number. In this case the system of equations for intergranular forces has a unique solution and the complete system of equations for the stress tensor can be derived. This is the simplest model of a granular material. If this cannot be solved, one will be left with empiricism. This is from the point of view of physics; it is not a good philosophy for engineers. The geometric specification of our system is as follows: we will need $`z`$ contact point vectors $`\stackrel{}{}^{\alpha \beta }`$, centroid of a grain $`\alpha `$, $`\stackrel{}{R}^\alpha `$, $`\stackrel{}{r}^{\alpha \beta }`$ the vector from $`\stackrel{}{R}^\alpha `$ to $`\stackrel{}{}^{\alpha \beta }`$, and the distance between grains $`\alpha `$ and $`\beta `$, $`\stackrel{}{R}^{\alpha \beta }`$ (see Figure 2).
Grain $`\alpha `$ exerts a force on grain $`\beta `$ at a point $`\stackrel{}{}^{\alpha \beta }=\stackrel{}{R}^\alpha +\stackrel{}{r}^{\alpha \beta }`$. The contact is a point in a plane whose normal is $`\stackrel{}{n}^{\alpha \beta }`$. The vector $`\stackrel{}{R}^\alpha `$ is defined by:
$$\stackrel{}{R}^\alpha =\frac{\sum _\beta \stackrel{}{\ell }^{\alpha \beta }}{z},$$
(1)
so that $`\stackrel{}{R}^\alpha `$ is the centroid of contacts, and hence the relation: $`\sum _\beta \stackrel{}{r}^{\alpha \beta }=0`$. We note that $`z`$ is the number of contacts per grain, and $`\sum _\beta `$ means summation over the nearest neighbours. We define the distance between grains $`\alpha `$ and $`\beta `$
$$\stackrel{}{R}^{\alpha \beta }=\stackrel{}{r}^{\beta \alpha }-\stackrel{}{r}^{\alpha \beta },$$
(2)
Hence $`\stackrel{}{R}^\alpha `$, $`\stackrel{}{r}^{\alpha \beta }`$ and $`\stackrel{}{n}^{\alpha \beta }`$ are geometrical properties of the aggregate under consideration and the other shape specifications do not enter the equations. In a static array, Newton’s equations of intergranular force and torque balance are satisfied. Balance of force around the grain requires
$$\sum _\beta f_i^{\alpha \beta }=g_i^\alpha ,$$
(3)
$$f_i^{\alpha \beta }+f_i^{\beta \alpha }=0,$$
(4)
where $`\stackrel{}{g}^\alpha `$ is the external force acting on grain $`\alpha `$.
The equation of torque balance is
$$\sum _\beta ϵ_{ikl}f_k^{\alpha \beta }r_l^{\alpha \beta }=C_i^\alpha .$$
(5)
Friction is assumed to be infinite . It can be verified that, for the intergranular forces in the static array to be determined by these equations, the coordination number $`z=3`$ in 2-D and $`z=4`$ in 3-D is required. The microscopic version of stress analysis is to determine all of the intergranular forces, given the applied force and torque loadings on each grain, and the geometric specification of a granular array. The number of unknowns per grain is $`zd/2`$. The required force and torque equations give $`d+\frac{d(d-1)}{2}`$ constraints. The system of equations for the intergranular forces is complete when the coordination number is $`z_m=d+1`$. In addition, the configuration of contacts and normals is only acceptable if all the forces are compressive. If tensile forces are allowed, then we would be studying a sandstone (an interesting problem, but not a subject for a study of jamming). Many investigators do not believe that $`z`$ will be this small, and would invoke the kind of arguments used in bridgework, where the same problem arises when extra spars lead to too few equations for solution. However, the simplest assumption is that if the geometry gives more than $`\frac{z_m}{2}`$ contacts, some will contain no force. At all events we can restrict ourselves to systems where there really are $`\frac{z_m}{2}`$ contacts per grain. The main question is: can one observe the jammed state in this simple model where the static stress state can be determined? We define the tensorial force moment:
$$S_{ij}^\alpha =\sum _\beta f_i^{\alpha \beta }r_j^{\alpha \beta },$$
(6)
which is the microscopic analogue of the stress tensor. With $`C_i^\alpha =0`$, $`S_{ij}^\alpha `$ will be symmetric. To obtain the macroscopic stress tensor from the tensorial force moment, we coarse-grain, i.e. average it over an ensemble of configurations:
$$\sigma _{ij}(\stackrel{}{r})=\sum _{\alpha =1}^NS_{ij}^\alpha \delta (\stackrel{}{r}-\stackrel{}{R}^\alpha ).$$
(7)
The number of equations required equals the number of independent components of a symmetric stress tensor $`\sigma _{ij}=\sigma _{ji}`$, and is $`\frac{d(d+1)}{2}`$. At the same time, the number of equations available is $`d`$. These are the vector equations of stress equilibrium $`\frac{\partial \sigma _{ij}}{\partial x_j}=g_i`$ which have their origin in Newton’s second law. Therefore we have to find $`\frac{d(d-1)}{2}`$ equations, which possess the information from Newton’s third law, to complete and solve the system of equations which governs the transmission of stress in a granular array.
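The counting argument above can be checked with a few lines of arithmetic; the snippet below merely restates the numbers quoted in the text for $`d=2`$ and $`d=3`$, it adds no new theory.

```python
# Count unknown force components per grain versus Newton's constraints, and the
# number of extra "missing" macroscopic equations, for d = 2 and d = 3.
for d in (2, 3):
    z_m = d + 1                                    # coordination number that closes the system
    unknowns_per_grain = z_m * d / 2               # each contact force is shared by two grains
    constraints_per_grain = d + d * (d - 1) / 2    # force balance + torque balance
    stress_components = d * (d + 1) / 2            # independent entries of a symmetric stress tensor
    equilibrium_eqs = d                            # components of the stress-equilibrium equation
    missing = stress_components - equilibrium_eqs  # = d(d-1)/2 extra equations needed
    print(f"d={d}: unknowns/grain={unknowns_per_grain}, "
          f"constraints/grain={constraints_per_grain}, missing macroscopic eqs={missing}")
```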
Given the set of equations (3–5) we can write the probability functional for the intergranular force $`f_i^{\alpha \beta }`$ as
$`P\left\{f_i^{\alpha \beta }\right\}`$ $`=`$ $`𝒩\delta \left({\displaystyle \sum _\beta }f_i^{\alpha \beta }-g_i^\alpha \right)`$ (8)
$`\times \delta \left({\displaystyle \sum _\beta }ϵ_{ikl}f_k^{\alpha \beta }r_l^{\alpha \beta }\right)`$
$`\times \delta \left(f_i^{\alpha \beta }+f_i^{\beta \alpha }\right),`$
where the normalization, $`𝒩`$, which is a function of a configuration, is
$`𝒩^{-1}`$ $`=`$ $`{\displaystyle \int \prod _{\alpha ,\beta }P\left\{f_i^{\alpha \beta }\right\}𝒟f^{\alpha \beta }}.`$ (9)
The probability of finding the tensorial force moment $`S_{ij}^\alpha `$ on grain $`\alpha `$ is
$`P\left\{S_{ij}^\alpha \right\}`$ $`=`$ $`{\displaystyle \int \prod _{\alpha ,\beta }\delta \left(S_{ij}^\alpha -\sum _\beta f_i^{\alpha \beta }r_j^{\alpha \beta }\right)P\left\{f_i^{\alpha \beta }\right\}𝒟f^{\alpha \beta }},`$ (10)
where $`𝒟f^{\alpha \beta }`$ implies integration over all functions $`f^{\alpha \beta }`$, since all the constraints on $`f^{\alpha \beta }`$ have been experienced. We assume that the $`z=d+1`$ condition means that the integral exists.
It has been shown that the fundamental equations of stress equilibrium take the form
$$\nabla _j\sigma _{ij}+\nabla _j\nabla _k\nabla _mK_{ijkl}\sigma _{lm}+\mathrm{\dots }=g_i$$
(11)
$$P_{ijk}\sigma _{jk}+\nabla _jT_{ijkl}\sigma _{kl}+\nabla _j\nabla _lU_{ijkl}\sigma _{km}+\mathrm{\dots }=0.$$
(12)
However, there are difficulties with the averaging procedure, which is highly non-trivial. The simplest mean-field approximation gives the equation $`\sigma _{11}=\sigma _{22}`$ for the case of an isotropic and homogeneous disordered array . Though this equation gives the diagonal stress tensor $`\sigma _{ij}=p\delta _{ij}`$, it is not rotationally invariant. The only linear, algebraic and rotationally invariant equation is $`\text{Tr}\sigma _{ij}=0`$. However, this equation cannot be accepted, for a stable granular aggregate does not support tensile stresses. We believe that, at least in the simplest case, the fundamental equation for the microscopic stress tensor should be linear and algebraic (because of the linearity of Newton’s second law for intergranular forces). In this paper we offer an alternative way, which is considered below.
The leading terms of the system of equations (11,12) arise from the system of discrete linear equations for $`S_{ij}^\alpha `$
$$\sum _\beta \left(S_{ij}^\alpha M_{jk}^\alpha R_k^{\alpha \beta }-S_{ij}^\beta M_{jk}^\beta R_k^{\beta \alpha }\right)=g_i^\alpha $$
(13)
$$S_{11}^\alpha -S_{22}^\alpha =2S_{12}^\alpha \text{tan}\theta ^\alpha $$
(14)
and if $`\text{tan}\theta ^\alpha `$ has an average value $`\text{tan}\varphi `$
$$\sigma _{11}-\sigma _{22}=2\sigma _{12}\text{tan}\varphi $$
(15)
which is known as the Fixed Principal Axes equation , and has been used with notable effect to solve the problem of the stress distribution in sandpiles. For a homogeneous and isotropic system, the averaging process gives the stress tensor $`\sigma _{ij}=p\delta _{ij}`$ which is simply hydrostatic pressure, as is to be expected. Rotating $`S_{ij}^\alpha `$ by some arbitrary angle $`\theta `$ one can easily see that (14) constrains the off-diagonal components of $`S_{ij}^\alpha `$ to be zero. The system of equations (13,14) is solved by Fourier transformation and the macroscopic stress tensor is obtained by averaging over the angle $`\theta `$
$$i\sigma _{11}(\stackrel{}{k})=\langle S_{11}(\stackrel{}{k})\rangle _\theta =\frac{g_1(k_1^3+3k_2^2k_1)+g_2(k_2^3-k_1^2k_2)}{|\stackrel{}{k}|^4}$$
(16)
$$i\sigma _{22}(\stackrel{}{k})=\langle S_{22}(\stackrel{}{k})\rangle _\theta =\frac{g_2(k_2^3+3k_1^2k_2)+g_1(k_1k_2^2-k_1^3)}{|\stackrel{}{k}|^4}$$
(17)
$$i\sigma _{12}(\stackrel{}{k})=\langle S_{12}(\stackrel{}{k})\rangle _\theta =\frac{(g_1k_2-g_2k_1)(k_2^2-k_1^2)}{|\stackrel{}{k}|^4}$$
(18)
where $`|\stackrel{}{k}|^2=k_1^2+k_2^2`$ and $`\sigma _{ij}(\stackrel{}{r})=\int \sigma _{ij}(\stackrel{}{k})e^{i\stackrel{}{k}\cdot \stackrel{}{r}}\text{d}^3\stackrel{}{k}`$. By doing the inverse Fourier transformation one can see that the macroscopic stress tensor is diagonal. There must also be constraints on the permitted configurations (due to the absence of tensile forces) which are not so easily expressed, for they affect each grain in the form
$$S_{ik}^\alpha M_{kl}^\alpha R_l^{\alpha \beta }n_i^{\alpha \beta }>\mathrm{\hspace{0.17em}0}$$
(19)
which has not yet been put into continuum equations other than $`\text{Det}\sigma >0`$ and $`\text{Tr}\sigma >0`$. However, this condition must be crucial, for without it jamming becomes a very ubiquitous phenomenon provided the density of grains is such that they are all in contact. The following example shows the utility of simple models: consider this type of grain in a vertical pipe, where each grain is a thick tile (see Figure 3).
This system is clearly jammed, but the problem is how it can get into this configuration. Another limiting case is to think of a sphere with many spines pointing in the radial direction. These spines mean that spheres can only approach or retreat along the direction joining their centres, and the only other motion permitted to a group of two or more is that of rigid rotation of the group. As soon as a line (in 2-D) or shell (3-D) of these objects occurs, they jam; so they always jam. Again, although this is a possible material, most materials fall into the classes above. So there are trivial jamming problems, but it is proving difficult to produce an effective analytic theory for intermediate materials. It is natural then to use computer simulations, and there is a notable literature in existence . However, rather than comment on this literature for granular materials we move to the related problem of colloidal flow which offers a natural route from grains to glasses.
## 3 Colloidal Suspensions
The simplest model of a colloid which exhibits the jamming transition is the sheared suspension of hard monodisperse spheres, interacting hydrodynamically through a Newtonian solvent of viscosity $`\eta _0`$. Such a system at equilibrium is characterized by its volume fraction $`\varphi _v`$. The behaviour of this system at equilibrium is well known. With increasing $`\varphi _v`$, the system crystallizes, with phase coexistence occurring between $`\varphi _v\simeq 0.50`$ and $`\varphi _v\simeq 0.55`$. If simple shear is applied, we need one other parameter, which is the Peclet number
$$\text{Pe}=\frac{6\pi \dot{\gamma }\eta _0d^3}{k_BT}$$
(20)
where $`d`$ is the particle diameter, $`\dot{\gamma }`$ the shear rate and $`T`$ the temperature. The Peclet number gives the ratio of shear forces to Brownian forces. Pairwise hydrodynamic interactions between the neighbouring particles can be divided into squeeze terms along the line of centres and terms arising from relative shear motion. To leading order the squeeze hydrodynamic force on a particle is given by the well-known Reynolds lubrication formula
$$f_i=\sum _j\frac{3\pi \eta _0}{8h_{ij}}\left\{(v_i-v_j)\cdot n_{ij}\right\}n_{ij}+O\left(\mathrm{ln}\frac{2}{h_{ij}}\right)$$
(21)
where the sum is over nearest-neighbour particles $`j`$, $`h_{ij}`$ is the gap between the neighbouring surfaces with the unit of distance the particle diameter, $`n_{ij}`$ is the unit vector along the line of centres $`i`$ to $`j`$ and $`v_i`$, $`v_j`$ are the particle centre velocities.
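As an illustration of how the squeeze term of eq. (21) is evaluated in practice, the following sketch computes the pairwise lubrication forces for a small set of spheres with prescribed velocities. The unit diameter, the neighbour cutoff on the gap and the sign convention (the force opposes relative approach) are assumptions made for this example, not prescriptions from the text.

```python
# Direct evaluation of the leading squeeze (lubrication) force of eq. (21).
import numpy as np

def squeeze_forces(X, V, eta0=1.0, h_cut=0.25):
    """Return the pairwise squeeze contribution to the force on each particle."""
    N, dim = X.shape
    F = np.zeros((N, dim))
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            rij = X[j] - X[i]
            dist = np.linalg.norm(rij)
            h = dist - 1.0                     # gap between surfaces (diameter = 1)
            if 0.0 < h < h_cut:
                n = rij / dist
                rel = np.dot(V[i] - V[j], n)   # rate of approach along the line of centres
                F[i] -= (3.0 * np.pi * eta0 / (8.0 * h)) * rel * n
    return F

# two spheres approaching head-on: the squeeze force diverges as the gap closes
X = np.array([[0.0, 0.0, 0.0], [1.0 + 1e-2, 0.0, 0.0]])
V = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
print(squeeze_forces(X, V))
```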
In the absence of other interactions, Brownian forces are left to control the approach of particles. If conservative (steric or charge) forces are present, these control the gaps between particles, and dominate over the Brownian forces at high shear rates. When one studies the shear flow of such a system, it exhibits thickening effects: a rise of viscosity with increasing shear rate (for a review, see ). At volume fractions approaching random close packing, $`\varphi _{RCP}\simeq 0.64`$, discontinuous thickening with a large jump in viscosity occurs at a critical Pe. However, at a lower $`\varphi _v`$ and lower Pe a more continuous rise can be observed (see Figure 4 below). Hence, the jamming transition in this simple system can be either continuous or discontinuous, depending sensitively on the volume fraction.
This happens in various experimental systems but in particular, in those whose particles, by polymer coating or surface charges, do not flocculate (if they want to stick together then jamming is not an obvious phenomenon). Colloids with repulsive interactions exhibit shear thinning i.e. the viscosity decreases as the shear rate increases. The presence of aggregating forces can greatly alter the shear thinning. We will discuss the regime of shear thickening (which we call later the jamming transition) in the simplest model, gradually incorporating various interactions. The physical picture can be obtained by combining theory and computer simulations. The simulation technique for particles under quasi-static motion determined by a balance of conservative and dissipative forces has been proposed in . The motion of $`N`$ colloidal particles, immersed within a hydrodynamic medium, is governed by an equation of quasi-static force balance
$$-R(\stackrel{}{X})\stackrel{}{V}+\stackrel{}{F}_C(\stackrel{}{X})+\stackrel{}{F}_B(t)=\stackrel{}{0}$$
(22)
where $`\stackrel{}{X}`$ represents the $`6N`$ particle position coordinates and orientations, $`R`$ is a $`6N\times 6N`$ resistance matrix and $`\stackrel{}{V}=d\stackrel{}{X}/dt`$ is the particle velocity. The terms $`F_C`$ and $`F_B`$ represent conservative and Brownian forces. The effects of inertia are ignored, i.e. the Reynolds number is small
$$\text{Re}=\frac{\rho _s\dot{\gamma }d^2}{\eta _0}\ll 1$$
(23)
Bearing in mind that for a typical colloid $`\frac{\text{Pe}}{\text{Re}}\sim 10^7`$, it follows that it is possible to achieve $`\text{Re}\ll 1`$ and $`\text{Pe}\gg 1`$ simultaneously. Equation (22) can be solved on a computer. Although simulation shows exactly what is happening, in the sense that one can see every sphere following its path (see Figures 5, 6), it is still not agreed whether the phenomenon of thickening is an order-disorder transition or whether it is due to the development of clusters of particles along the compression axis.
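A minimal numerical sketch of how eq. (22) might be advanced in time is given below: a resistance matrix is assembled from an isotropic Stokes-like drag plus the pairwise squeeze couplings of eq. (21), and the linear system is solved for the particle velocities. The diagonal drag, the neglect of Brownian and shear-induced terms, and the 2-D toy configuration are all simplifying assumptions; this is not the full Stokesian-dynamics scheme used in the simulations cited in the text.

```python
# One quasi-static update in the spirit of eq. (22): solve R V = F_C for the
# particle velocities (Brownian forces dropped), then take an Euler step.
import numpy as np

def squeeze_resistance(X, eta0=1.0, d=1.0, cutoff=0.1):
    """Assemble a (2N x 2N) resistance matrix for N discs in 2-D."""
    N = len(X)
    R = np.eye(2 * N) * 3.0 * np.pi * eta0 * d      # isotropic Stokes-like drag (assumed)
    for i in range(N):
        for j in range(i + 1, N):
            rij = X[j] - X[i]
            dist = np.linalg.norm(rij)
            h = dist - d                             # surface-surface gap
            if 0.0 < h < cutoff:
                n = rij / dist
                blk = (3.0 * np.pi * eta0 / (8.0 * h)) * np.outer(n, n)
                # squeeze term resists relative motion along the line of centres
                R[2*i:2*i+2, 2*i:2*i+2] += blk
                R[2*j:2*j+2, 2*j:2*j+2] += blk
                R[2*i:2*i+2, 2*j:2*j+2] -= blk
                R[2*j:2*j+2, 2*i:2*i+2] -= blk
    return R

def quasi_static_step(X, F_C, dt=1e-3):
    """Advance positions by solving R V = F_C (inertia neglected)."""
    R = squeeze_resistance(X)
    V = np.linalg.solve(R, F_C.ravel()).reshape(X.shape)
    return X + dt * V

# toy usage: three nearly touching discs pushed together by constant forces
X = np.array([[0.0, 0.0], [1.05, 0.0], [0.525, 0.95]])
F = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, -0.5]])
for _ in range(5):
    X = quasi_static_step(X, F)
print(X)
```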
A microscopic kinetic theory for the origin of the increased bulk viscosity at high $`\varphi _v`$ and $`\text{Pe}=\mathrm{\infty }`$ has been recently proposed . It attributes the viscous enhancement to the presence of hydrodynamic clustering, i.e. incompressible groups of particles which lie near the compression axis. The rigidity of clusters is provided by divergent lubrication drag coefficients. The theory predicts a critical $`\varphi _v`$ above which jamming occurs. This model gives a flow-jam phase transition at any strain and may be of more general applicability (e.g. colloids with conservative and Brownian forces). Assuming the cluster length to be additive on collision, one can obtain the standard Smoluchowski aggregation equation
$$\frac{\text{d}X_k}{\text{d}t}=\frac{1}{2}\sum _{i,j=1}^{\mathrm{\infty }}\left[K_{ij}X_iX_j-X_{i+j}\right]\left(\delta _{i+j,k}-\delta _{i,k}-\delta _{j,k}\right)$$
(24)
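A direct forward-Euler discretization of eq. (24) is easy to write down. The sketch below uses a constant kernel $`K_{ij}=K_0`$ and truncates the cluster-size distribution at $`N_{\mathrm{max}}`$ monomers; both are illustrative assumptions rather than the hydrodynamic kernel of the theory. It simply tracks the growth of the mean cluster size.

```python
# Minimal forward-Euler integration of eq. (24) with a constant kernel.
import numpy as np

def rhs(X, K0=1.0):
    N = len(X)                          # X[k-1] = concentration of k-mers
    dX = np.zeros(N)
    for i in range(1, N + 1):
        for j in range(1, N + 1):
            if i + j > N:
                continue                # clusters larger than N_max are discarded
            flux = K0 * X[i - 1] * X[j - 1] - X[i + j - 1]
            dX[i + j - 1] += 0.5 * flux   # gain of (i+j)-mers
            dX[i - 1] -= 0.5 * flux       # loss of i-mers
            dX[j - 1] -= 0.5 * flux       # loss of j-mers
    return dX

N_max, dt, steps = 40, 2e-3, 2000
X = np.zeros(N_max)
X[0] = 1.0                               # start from monomers only
for _ in range(steps):
    X = np.maximum(X + dt * rhs(X), 0.0)

sizes = np.arange(1, N_max + 1)
mean_size = np.sum(sizes * X) / np.sum(X)
print(f"mean cluster size after t = {steps * dt}: {mean_size:.2f}")
```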
This equation relies upon a mean field approximation, i.e. it is assumed that each cluster is embedded in an average flow, composed of the other clusters, ”spectator” particles and solvent. Eqn (24) governs the evolution of the concentrations $`X_k`$ of clusters of size $`k`$ monomers as a function of time, given that clusters can collide, aggregate and break up at rates dependent on their sizes and specified by the aggregation kernel $`K_{ij}`$. The mathematical procedure of solving Equation (24) has been reported in . We discuss the generic structure of the theory. The rheology of this system results from a competition between a binary aggregation process and single cluster breakup. The paradigm of hydrodynamic clustering modelled in terms of aggregation-breakup laws provides the qualitative physical picture of the jamming transition. At low $`\varphi _v`$ breakup dominates. The system achieves a steady state with a population of clusters whose average size increases and a viscosity which increases as the volume fraction rises. In the vicinity of some volume fraction $`\varphi _c`$, average cluster size diverges, and so does the viscosity
$$\eta \sim \eta _0\left(1-\frac{\varphi _v}{\varphi _c}\right)^{-2}$$
(25)
Above $`\varphi _c`$ the flow is transient and dominated by aggregation. The average cluster size diverges after a certain strain, which falls as $`\varphi _v`$ is further increased. The system undergoes the jamming transition and can never reach a steady state. This model predicts $`\varphi _c\simeq 0.49`$, although experiments suggest a divergence of bulk viscosity at a higher volume fraction (which is still less than $`\varphi _{RCP}`$). This difference is due to the presence of conservative interactions in the real hard-core colloid. In conclusion, we briefly discuss the jamming behaviour of long flexible chain polymers. A good way to understand this is to plot viscosity against the molecular weight of the polymer in the molten state. The molecular weight is equivalent to the length of the chain. The short chains slide past each other in many ways, giving a linear relation between length and shear viscosity, but an entanglement crisis occurs at a critical molecular weight $`M_c`$, when the polymer can only wriggle in Brownian motion up and down and “out” of a tube formed by its neighbours in reptation. Whether there is a sharp transition is not established (it is a difficult, slow experiment), but all agree that the dependence of the viscosity jumps to $`M^{3.4}`$, a colossal change which at first sight looks just like jamming, although very slowly the melt still flows due to reptation. Polymers very easily form glasses and it only takes a fall in temperature to make the melt solidify, although the specific heat shows that, while reptation (a slow translatory motion) has ceased, there is plenty of other movement taking place, movement which averages over time to zero. A further fall in temperature destroys this motion, and one enters a state like a conventional glass.
## 4 Glasses
Studies of glasses seem to fall into two camps: those taking a continuum field representation of the material and solving under statistical thermodynamics conditions, principally mode-coupling methods, and, alternatively, those translating the problem into comparatively simple equations and simple physical models. The former has given intuitive thinkers a hard time because it is not obvious, in those cases such as the polymeric glasses discussed below where one should have a picture of the motion, what indeed is going on. The modelling approach is weak, for example, in an assembly of packed spheres, where an enormous number of motions is possible, but strong in polymer glasses, where the motion is obvious and the jamming of motion is the cessation of centre-of-mass diffusion. The mode-coupling theory of the glass transition for simple liquids has an ergodicity to non-ergodicity transition. It starts with an equation of motion for the density correlation function, which is an integro-differential Langevin equation with a non-linear memory kernel. This non-linear memory term governs the transition to the non-ergodic state, and can be characterized by measurements of the dynamic structure factor. We refer the reader to the literature for details of this approach. The glass transition is not limited to special types of materials. Every class of material can be transformed into an amorphous solid if the experimental parameters are adjusted to the dynamics of the system. Consider, therefore, two extreme examples. Consider first a system consisting of spherical molecules such as rare gases. The hopping time of the spheres is very short, and the dynamics extremely fast. Nevertheless, it has been shown that such fluids undergo a glass transition from the supercooled melt if one cools the system with a quenching rate of $`q\sim 10^{12}\text{K}\text{s}^{-1}`$, when the molecules jam effectively to rest. On the other hand, consider polystyrene, which consists of very large molecules. The dynamics of such a system is much more complicated in comparison to spheres because there are many degrees of freedom. It is most significant that the centre-of-mass diffusion of a single molecule is small. We want to stress some characteristics of the glass transition in general. The most significant points we want to discuss are:
* The divergence of the transport or inverse transport properties, such as viscosity, inverse diffusion coefficient, and relaxation times.
* The extreme broad relaxation phenomena of the stress, modulus etc.
* The quantitative definition of the term cooperativity.
* Influence of external parameters on $`T_g`$.
It has been recognised that the relaxation time follows the Vogel – Fulcher Law (VF):
$$\tau \propto e^{\frac{A}{T-T_0}}$$
(26)
This law has many names. In polymer physics it is often called the Williams – Landel – Ferry (WLF) law . The law is not valid over the whole temperature range (see Figure 7). The divergence given by (26) comes at $`T_0`$, where $`T_0`$ is a temperature below the freezing temperature $`T_g`$, and the empirical rule is given by:
$$T_0\simeq T_g-(20-30)^{\circ }.$$
(27)
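The statement that the VF law mimics Arrhenius behaviour near $`T_g`$ but with an enormous activation energy can be made concrete: the local slope $`d\mathrm{ln}\tau /d(1/T)=AT^2/(T-T_0)^2`$ grows rapidly as $`T`$ approaches $`T_0`$. The short sketch below evaluates this for illustrative parameter values ($`A=2000`$ K, $`T_0=350`$ K), which are not taken from any particular material.

```python
# Effective (local) activation energy implied by the Vogel-Fulcher law, eq. (26).
import numpy as np

A, T0 = 2000.0, 350.0                      # VF parameters (assumed, illustrative)
T = np.linspace(380.0, 500.0, 7)           # temperatures above T_0

tau = np.exp(A / (T - T0))                 # eq. (26), up to a prefactor
E_eff = A * T**2 / (T - T0) ** 2           # local Arrhenius slope d ln(tau)/d(1/T)

for Ti, Ei in zip(T, E_eff):
    print(f"T = {Ti:5.1f} K   E_eff = {Ei:10.1f} K")
```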
The physical meaning of $`T_g`$ is still unclear, and we want to try to clarify this point. The VF law is much stronger than the critical slowing down in phase transition phenomena where, by scaling arguments, the relaxation time is given by
$$\tau \propto (T-T_0)^{-\nu z}$$
(28)
where $`\nu `$ is the correlation length exponent, i.e. $`\xi \propto (T-T_0)^{-\nu }`$, and $`z`$ the dynamical exponent
$$\tau \propto \xi ^z.$$
(29)
There have been attempts to fit data for freezing transitions (glass transition, spin glass transition) by Eqn.(28), and it turned out that $`\nu z`$ is very high, $`\nu z\simeq 10`$–$`20`$, which seems very unnatural. This again indicates a physical significance to the VF law. Another peculiar point lies in the relaxation properties. An empirical law was found long ago by Kohlrausch and in the early seventies recovered by Williams and Watts in their studies of the broadening of relaxation processes . For example, measurement of the dielectric constant shows a much larger half width in the imaginary part compared to the Debye process. Empirically this is described by the Kohlrausch–Williams–Watts (KWW) law (see Figure 8)
$$\varphi (t)\propto e^{-(\frac{t}{\tau })^\beta }$$
(30)
where $`\varphi (t)`$ can be any quantity which relaxes, i.e.
$$\varphi (t)=\frac{ϵ(t)-ϵ(\mathrm{\infty })}{ϵ(0)-ϵ(\mathrm{\infty })}$$
(31)
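In practice $`\beta `$ is obtained by fitting eq. (30) to a measured relaxation curve. The sketch below does this for synthetic data (generated with $`\beta =0.6`$, $`\tau =1`$ plus noise, purely as an assumed example) using a standard least-squares routine; any measured $`\varphi (t)`$ could be substituted.

```python
# Fitting the KWW stretched exponential of eq. (30) to a relaxation curve.
import numpy as np
from scipy.optimize import curve_fit

def kww(t, tau, beta):
    return np.exp(-(t / tau) ** beta)

rng = np.random.default_rng(0)
t = np.logspace(-2, 2, 60)
phi_data = kww(t, 1.0, 0.6) + 0.01 * rng.normal(size=t.size)   # synthetic "data"

popt, pcov = curve_fit(kww, t, phi_data, p0=(0.5, 0.8))
tau_fit, beta_fit = popt
print(f"fitted tau = {tau_fit:.3f}, beta = {beta_fit:.3f}")
```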
However, there has been no convincing explanation for a unique value for $`\beta `$. We doubt that there is more to the physics of (30) than a wide relaxation spectrum due to different physical processes, and a more relevant question is how both the KWW and VF laws link together in general, and what is the relationship to cooperativity. It is believed that the glass transition exhibits a large amount of cooperative motion as the system is close to $`T_g`$. This might be indicated as well by the VF law, which is the crossover to the Arrhenius behaviour right at $`T_g`$ but with an extremely high activation energy (see Figure 7). This activation energy is so high that it could hardly be attributed to only one molecule, and the phenomenological interpretation is that there are cooperative regions of some linear size diverging at some temperature
$$\xi \propto (T-T_g)^{-\mathrm{power}}.$$
(32)
Another quite general question is the state of time-temperature superposition principle. This says that if one measures a physical quantity, $`D(t)`$, at some temperature $`T`$, and if the measurement is repeated at some temperature $`T_1`$, the quantity $`D(t)`$ can be resolved by a “shift factor” $`a_T`$, i.e. $`D(T,t)`$ is not a function of two variables but only of a combination of both:
$$D(T,t)=D\left(\frac{t}{a_T}\right)$$
(33)
where $`a_T`$ is often given by
$$a_T=\frac{C_1+T_g}{T-T_g+C_2}$$
(34)
near $`T_g`$, and
$$a_T=\frac{\delta E}{T}-\frac{\delta E}{T^{}}$$
(35)
far from $`T_g`$. Hence (34) is of the Vogel–Fulcher form.
Polymeric glasses offer two challenges:
* There is a clear intuitive picture of what is happening. The “tube” closes at its ends, or contracts at “entanglement” points.
* The molecular weight offers, as with viscosity, a new degree of freedom, and hence new laws emerge. Any theory of glass must encompass VF, KWW, and the experiments we now describe.
For polymers, the glass transition temperature depends on the molecular weight, and an empirical rule is given by Flory and Fox:
$$T_g(L)=T_g(\mathrm{\infty })-\frac{\text{constant}}{L},$$
(36)
where $`L`$ is the length of the molecules i.e. the molecular weight. The agreement of (36) with experiments is not particularly good, but it gives an estimate of $`T_g(L)`$. Hence (36) tells us that there is not a very significant dependence on the molecular weight, unless $`L`$ is small. This point should be investigated using the knowledge of polymer dynamics in melts which has recently emerged. Another empirical law we want to discuss is the mixing rule in plastification. Mixing two glass forming polymers together, the new $`T_g`$ is often given by
$$\frac{1}{T_g(\text{mix})}=\frac{\varphi _1}{T_{g_1}}+\frac{\varphi _2}{T_{g_2}}$$
(37)
to zeroth order. Here $`\varphi _i`$ is the volume fraction of the $`i`$’th polymer. We have discussed so far the most important experimental results. Clearly there is a need for a solvable model in the framework of which the VF and KWW laws can be derived. Any fundamental theory of the glass transition should relate the mobility of a molecule within its cage to the mobilities of the molecules forming the cage. These mobilities of the surrounding molecules are coupled to those of their neighbours, etc., and therefore in general there is no small parameter that can justify a decoupling approximation. The small-molecule model of a glass transition involves cages, but polymers are simpler because they have a tube, and the tube of such a model can be modelled as straight; restrictions along the tube can be included, but are not here, for simplicity. The reason why we take rods is the advantage of simple geometry, no internal degrees of freedom, and very slow dynamics, so that we have no problems with high quenching rates and do not have to worry about thermodynamics. It is obvious that the small width-to-length ratio and the sufficiently large number of neighbouring rods justify the decoupling procedure. If the solution of the rods is dense, with the concentration approaching $`c\sim (d^2L)^{-1}`$ ($`d`$ is the diameter of the rod, $`L`$ its length and $`c`$ the concentration), severe constraints act on the rods. For example, they cannot move rotationally, and they can only make progress along their length. Such a solution has been called entangled. Suppose now we have such a solution of highly entangled rods. A rod can slide between the entangling rods until it meets rods which block it (see Figure 9).
The motion of a rod will then be like a particle diffusing along a line but meeting gates which open and close randomly through thermal fluctuations. If no barriers were present, the probability $`P(x,t)`$ of finding the test rod at $`x`$ (which is the coordinate of the rod down the tube) and time $`t`$ would satisfy the simple diffusion equation
$$\left(\frac{\partial }{\partial t}-D_0\frac{\partial ^2}{\partial x^2}\right)P(x,t)=0$$
(38)
This has the solution
$$P(x,t)=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}G_0(x,x^{};t,t^{})P_0(x^{},t^{})\text{d}x^{}$$
(39)
where $`P_0(x^{},t^{})`$ is the initial probability function and $`G_0(x,x^{};t,t^{})`$ is the standard Green function of the diffusion equation. Suppose now that a reflecting barrier is placed at position $`R`$ at time $`t_R`$ and removed at some time $`t_Q`$. Then, using the method of images to find the Green function, it is straightforward to calculate $`P(x,t)`$ . The same method can be applied when there are many barriers appearing and disappearing along the path of the rod. After neglecting correlations between barriers one can see that the rod diffuses with an effective diffusion constant
$$D=D_0(1-\alpha )$$
(40)
where $`\alpha =ϵ(cdL^2)^{\frac{3}{2}}`$, $`cdL^2`$ being the Onsager number. Eqn.(40) gives $`D=0`$ if $`\alpha =1`$, i.e. $`ϵcdL^2=1`$. This is the jammed state. The result can be modified to include cooperativity. The complete solution has been obtained in by mapping the problem onto the self-avoiding walk problem. The VF law appears when summed over $`n`$ (where $`n`$ is the number of rods that loops are made out of)
$$D\sim D_0exp\left(-\frac{\alpha _1^2}{1-\alpha _2}\right)$$
(41)
where the parameter $`\alpha _1`$ contains generalised constants and a minimum loop size, and $`\alpha _2=\alpha -\alpha _1`$. It can be shown that the number of rods moving cooperatively in the loops is given by
$$\overline{n}\sim (1-\alpha _2)^{-2}$$
(42)
and the size of the loop is therefore
$$\xi ^2\sim \overline{n}L^2\sim \frac{L^2}{(1-\alpha _2)^2}$$
(43)
Thus $`\xi `$ diverges if $`\alpha _2\to 1`$, as the phenomenological interpretation requires. So far we have used $`\alpha `$ to denote the expression $`cdL^2`$, a combination of constants well known in liquid crystal theory since the work of Onsager. However, its role in our theory is much more general; for example, $`d`$ can be temperature dependent through the fact that Van der Waals forces appear in the form $`e^{\frac{E}{nT}}`$, whereas hard-core forces do not contain $`T`$. Thus, at the level of a model one can regard $`\alpha =1`$ at $`T=T_g`$. Until one studies a detailed physical case, there is no purpose in false verisimilitude. Thus the above model now says that the VF law is a direct consequence of cooperativity. Turning now to the relaxation behaviour, it can be shown that for the rod model, the stress relaxation follows, to first order, the law
$$\sigma (t)\propto e^{-(\frac{t}{\tau })^{\frac{1}{2}}}$$
(44)
Cooperativity as indicated by (41) does not change (44) drastically, giving rise to logarithms
$$\sigma (t)\propto e^{-(\frac{t}{\tau })^{\frac{1}{2}}}\left(1+\text{const.}\text{log}\frac{t}{\tau }\right)$$
(45)
This leads us to the conclusion that cooperativity is responsible for the VF behaviour, but not for the relaxation phenomena. Further study of this model allows a derivation of (36,37). A final interesting feature of the tube model is that the tube itself must be a random walk of step length $`a`$ (in the rheology literature this is called the primitive path) and the “rod” of the preceding discussion is of length $`a`$ for a very long polymer, and $`L`$ for a long, but not very long molecule. The freezing of large scale motion is the freezing of motion on the scale of the primitive path step length, and the long term reptative motion of the whole chain is not a vital constituent of the glass temperature. Hence $`T_g`$ is a function of $`a`$ and the density of the material, and also additional parameters (e.g. chain stiffness), i.e. $`T_g=T_g(a(c,L),\mathrm{\dots })`$. But $`a`$ is a function of the density itself, so that the effect of a diluent acts on both $`a`$ and $`c`$, and we expect a different concentration dependence of $`T_g`$ above $`M_c`$. The $`T_g`$ dependence on $`L`$ is roughly given by Figure 10.
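The mechanism behind eq. (40), diffusion along the tube interrupted by gates that open and close at random, is easy to caricature numerically. The toy Monte Carlo below places a random walker on a ring of bonds, each of which may be blocked and is switched by "thermal" fluctuations; the gate density, switching rate and run lengths are illustrative choices, and the sketch is a cartoon of the blocking effect rather than the analytic calculation of the text.

```python
# Random walker on a ring whose bonds carry randomly opening/closing gates.
import numpy as np

def effective_D(p_blocked, p_switch=0.02, steps=20000, walkers=200, seed=1):
    rng = np.random.default_rng(seed)
    L = 2001                                     # bonds on a ring (periodic, no edge effects)
    x = np.zeros(walkers, dtype=int)
    x_unwrapped = np.zeros(walkers, dtype=float)
    gates = rng.random(L) < p_blocked            # True = currently closed
    for _ in range(steps):
        flip = rng.random(L) < p_switch          # gates resampled by "thermal" fluctuations
        gates[flip] = rng.random(flip.sum()) < p_blocked
        step = rng.choice((-1, 1), size=walkers)
        bond = (x + (step - 1) // 2) % L         # bond crossed by the attempted move
        allowed = ~gates[bond]
        x = (x + step * allowed) % L
        x_unwrapped += step * allowed
    return np.mean(x_unwrapped**2) / (2.0 * steps)   # 1-D diffusion constant

for p in (0.0, 0.3, 0.6, 0.9):
    D = effective_D(p)
    print(f"closed-gate fraction {p:.1f}:  D = {D:.3f}  (free-walk value 0.5)")
```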
This simple model has been modified for arrays of rigid rods with fixed centres of rotation . The rods attached to the sites of the cubic lattice can rotate freely but cannot cross each other. It is important to note that the glass transition in this system is decoupled from the structural transition (nematic ordering) and the only parameter is the ratio of the length of the rods to the distance between the centres of rotation. The Monte-Carlo study in 2-D shows that, as the parameter of the model increases, a sharp crossover to infinite relaxation times is observed. In 3-D the simulation gives a real transition to a completely frozen state at some critical length $`L_c`$.
### 4.1 An Important Analogy
Recent crucial experiments on granular materials show that external vibrations lead to a slow approach of the packing density to a final steady-state value. Depending on the initial conditions and the magnitude of the vibration acceleration, the system can either reversibly move between steady-state densities, or can become irreversibly trapped into metastable states; that is, the rate of compaction and the final density depend sensitively on the history of vibration intensities that the system experiences (see Figure 11).
The function which has been found to fit the ensemble averaged density $`\rho (t)`$ better than other functional forms, is :
$$\rho (t)=\rho _f-\frac{\mathrm{\Delta }\rho _f}{1+B\mathrm{log}(1+\frac{t}{\tau })}$$
(46)
where the parameters $`\rho _f`$, $`\mathrm{\Delta }\rho _f`$ and $`\tau `$ depend only on $`\mathrm{\Gamma }`$. This is an analogue of a glass forming material where the lower curve corresponds to the situation where the quenching speed inflicted on the glass is faster than the speed at which the glass relaxes back to equilibrium. The special feature of the granular material is that it has no power of its own to proceed to the equilibrium state, so that every aspect can be studied without having to worry about the fact that the true glass is always seeking equilibrium. It is worth considering a simple model for the density response to external vibrations . If we assume that all configurations of a given volume are equally probable, it is possible to develop the formalism analogous to conventional statistical mechanics. We introduce the volume function $`W`$ (the analogue of a Hamiltonian) which depends on the coordinates of the grains, and their orientations. Averaging over all possible configurations of the grains in real space gives us a configurational statistical ensemble, which describes the random packing of grains. An analog of the microcanonical probability distribution is:
$$P=e^{-\frac{S}{\lambda }}\delta (V-W).$$
(47)
We can define the analogue of temperature as:
$$X=\frac{\partial V}{\partial S}.$$
(48)
This fundamental parameter is called compactivity . It characterizes the packing of a granular material, and may be interpreted as being characteristic of the number of ways it is possible to arrange the grains within the system into a volume $`\mathrm{\Delta }V`$, such that the disorder is $`\mathrm{\Delta }S`$. Consequently, the two limits of $`X`$ are $`0`$ and $`\mathrm{}`$, corresponding to the most and least compact stable arrangements. This is clearly a valid parameter for sufficiently dense powders, since one can in principle calculate the configurational entropy of an arrangement of grains, and therefore derive the compactivity from the basic definition. We will use the canonical probability distribution
$$P=e^{\frac{Y-W}{\lambda X}},$$
(49)
where $`\lambda `$ is a constant which gives the entropy the dimension of volume. We call $`Y`$ the effective volume; it is the analogue of the free energy:
$$e^{-\frac{Y}{\lambda X}}=\int e^{-\frac{W(\mu )}{\lambda X}}\text{d (all)},$$
(50)
$$V=Y-X\frac{\partial Y}{\partial X}.$$
(51)
Examples of volume functions for particular systems can be found elsewhere. We consider the rigid grains powder dominated by friction deposited in a container which will be shaken or tapped (in order to consider the simplest case, we ignore other possible interactions, e.g. cohesion, and do not distinguish between the grain-grain interactions in the bulk and those on the boundaries). We assume that most of the particles in the bulk do not acquire any non ephemeral kinetic energy, i.e. the change of a certain configuration occurs due to continuous and cooperative rearrangement of a free volume between the neighbouring grains. The simplest volume function is
$$W=v_0+(v_1-v_0)(\mu _1^2+\mu _2^2)$$
(52)
where two degrees of freedom $`\mu _1`$ and $`\mu _2`$ define the “interaction” of a grain with its nearest neighbours. If we assume that all grains in the bulk experience the external vibration as a random force, with zero correlation time, then the process of compaction can be seen as the Ornstein-Uhlenbeck process for the degrees of freedom $`\mu _i,i=1,2`$ . Therefore we write the Langevin equation:
$$\frac{d\mu _i}{dt}+\frac{1}{\nu }\frac{\partial W}{\partial \mu _i}=\sqrt{D}f_i(t)$$
(53)
where $`\langle f_i(t)f_j(t^{})\rangle =2\delta _{ij}\delta (t-t^{})`$, and $`\nu `$ characterizes the frictional resistance imposed on the grain by its nearest neighbours. The term $`f_i(t)`$ on the RHS of (53) represents the random force generated by a tap. The derivation of this gives the analogue of the Einstein relation that $`\nu =(\lambda X)/D`$.
$$D=\left(\frac{a}{g}\right)^2\frac{\nu \omega ^2}{v},$$
(54)
that is we have a simplest guess for a fluctuation-dissipation relation:
$$\lambda X=\left(\frac{a}{g}\right)^2\frac{\nu ^2\omega ^2}{v}$$
(55)
where $`v`$ is the volume of a grain, $`\omega `$ the frequency of tapping, and $`g`$ the gravitational acceleration. The standard treatment of the Langevin equation (53) is to use it to derive the Fokker-Planck equation:
$$\frac{\partial P}{\partial t}-\left(D_{ij}\frac{\partial ^2}{\partial \mu _i\partial \mu _j}+\gamma _{ij}\frac{\partial }{\partial \mu _i}\mu _j\right)P=0$$
(56)
where $`D_{ij}=D\delta _{ij}`$ and $`\gamma _{ij}=\gamma \delta _{ij}`$. This equation can be solved explicitly . We can calculate the volume of the system as a function of time and compactivity, $`V(X,t)`$. Though this is a simple model, it is too crude to give a quantitative agreement with experimental data; however, it gives a clear physical picture of what is happening. It is possible to imagine an initial state where all the grains are improbably placed, i.e. where each grain has its maximum volume $`v_1`$. So if one could put together a powder where the grains were placed in a high volume configuration, it will just sit there until shaken; when shaken, it will find its way to the distribution (49). It is possible to identify physical states of the powder with characteristic values of volume in our model. The value $`V=Nv_1`$ corresponds to the “deposited” powder, i.e. the powder is put into the most unstable condition possible, but friction holds it. When $`V=Nv_0`$ the powder is shaken into closest packing. The intermediate value of $`V=N(v_0+v_1)/2`$ corresponds to the minimum density of the reversible curve. Thus we can offer an interpretation of the three values of density presented in the experimental data . Our theory gives three points $`\rho (X=0),\rho (X=\mathrm{\infty })`$ and $`\rho (t=0)`$ which are in the ratio $`v_0^{-1},\frac{2}{v_0+v_1},v_1^{-1}`$, and these are in reasonable agreement with experimental data: $`\rho (X=0)=\frac{1}{v_0}\simeq 0.64`$, $`\rho _0=\frac{1}{v_1}\simeq 0.58`$ and $`\rho (X=\mathrm{\infty })=\frac{2}{v_0+v_1}\simeq 0.62`$. Another important issue is the validity of the compactivity concept for a “fluffy”, but still mechanically stable, granular array, e.g. for those composed of spheres with $`\rho \simeq 0.58`$. In our theory, $`\rho (X=\mathrm{\infty })`$ corresponds to the beginning of the reversible branch (see Figure 11). We foresee that granular materials will throw much light onto glassy behaviour in the future, for questions like the pressure fluctuations in the jammed material are accessible in granular materials, and not entirely in glasses.
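The Langevin dynamics (53) with the volume function (52) can also be integrated directly by an Euler–Maruyama scheme. The sketch below does this for illustrative values of $`v_0`$, $`v_1`$, $`\nu `$ and of the tapping strength $`\lambda X`$ (none of which are fitted to the experiments), starting from the “deposited” large-volume state and relaxing towards the steady-state mean grain volume $`v_0+\lambda X`$.

```python
# Euler-Maruyama integration of eq. (53) with the volume function (52).
import numpy as np

v0, v1 = 1.0, 1.5          # most compact / least compact single-grain volumes (assumed)
nu = 1.0                   # friction coefficient (assumed)
lam_X = 0.05               # lambda * X, the "temperature" of the tapping (assumed)
D = lam_X / nu             # Einstein-like relation nu = lambda X / D
dt, steps, Ngrains = 2e-3, 10000, 2000

rng = np.random.default_rng(0)
mu = np.ones((Ngrains, 2)) / np.sqrt(2.0)     # start near the "deposited", large-volume state

for k in range(steps):
    drift = -(2.0 * (v1 - v0) / nu) * mu              # -(1/nu) dW/dmu_i
    mu += drift * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=mu.shape)
    if k % 2500 == 0:
        W = v0 + (v1 - v0) * np.sum(mu**2, axis=1)    # eq. (52) per grain
        print(f"t = {k*dt:6.2f}   mean grain volume = {W.mean():.4f}")

# long-time mean grain volume tends to v0 + lambda*X in this toy model, i.e.
# more vigorous tapping (larger X) leaves the packing less compact
```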
## 5 Discussion
In Section 2 we have given the analysis of stress for the case of granular arrays with fixed values of the coordination number. However, a realistic granular packing can have a fluctuating coordination number which need not necessarily be 3 in 2-D or 4 in 3-D. How can one extend the formalism of Section 2 to make it capable of dealing with arbitrary packings? The simplest (though sensible) idea is to assume that in 3-D the major force is transmitted only through four contacts (we call them active contacts). The rest of the contacts transmit only an infinitesimal stress and can be christened passive. Considerable experimental evidence for this conjecture exists. Photoelastic visualization experiments show that stresses in static granular media concentrate along certain well-defined paths. A disproportionally large amount of force is transmitted via these stress paths within the material. Computer simulations also confirm the existence of well-defined stress-bearing paths. At the macroscopic scale, the most obvious characteristic of a granular packing is its density. As shown in Section 4, it is natural to introduce the volume function $`W`$ and the compactivity $`X=\frac{\partial V}{\partial S}`$. The statistical ensemble now includes the set of topological configurations with different microscopic force patterns. Therefore, in order to have a complete set of physical variables, one should combine the volume probability density functional (47) with the stress probability density functional (10). The joint probability distribution functional can be written in the form
$$P\{V,S_{ij}^\alpha \}=e^{-S/\lambda }\delta (V-W)\underset{\alpha ,\beta }{\prod }\delta \left(S_{ij}^\alpha -\underset{\beta }{\sum }f_i^{\alpha \beta }r_j^{\alpha \beta }\right)\mathrm{\Theta }(\stackrel{}{f}^{\alpha \beta }\cdot \stackrel{}{n}^{\alpha \beta })P\left\{\stackrel{}{f}^{\alpha \beta }\right\}.$$
(57)
We speculate that this mathematical object is necessary for the analysis of the stress distribution in granular aggregates with an arbitrary coordination number. However, this is an extremely difficult problem which involves a mathematical description of force chains, i.e. the mesoscopic clusters of grains carrying disproportionately large stresses and surrounded by a sea of spectator particles. An explicit theoretical analysis of this problem will be a subject of future research.
The aim of this paper has been to show that simple models can capture the basic physics of the jamming transition and provide insight into universal features of this phenomenon. We have used the term “jamming transition” to encompass diverse physical phenomena which take place in granular materials, colloids and glasses. We argue that the existence of similar features, like the slowing of the response and the appearance of heterogeneous correlated structures, gives hope that there are universal physical laws that govern jamming in various systems. Due to the extreme complexity of this problem, the simplest models should be considered and solved first. This would help to capture the basic mechanisms of these phenomena, and to establish a theoretical framework which will incorporate higher complexities and details, and become a predictive tool.
## 6 Acknowledgements
We acknowledge financial support from the Leverhulme Foundation (S. F. E.) and Shell (Amsterdam) (D. V. G.). The authors have benefited from many conversations with Professors Robin Ball, Mike Cates, Joe Goddard, Sidney Nagel and Tom Witten. We are most grateful to Dr John Melrose and Alan Catherall for providing the diagrams and for many illuminating discussions on the jamming transition in colloids. We thank the Institute for Theoretical Physics (University of California at Santa Barbara) for hosting the “Jamming and Rheology” programme, and for their warm hospitality.
# Characterization of the long-time and short-time predictability of low-order models of the atmosphere
## 1 Introduction
The behavior of a system such as the atmosphere cannot be predicted with prescribed spatial detail beyond a certain characteristic time because initially close trajectories diverge with time. Since the first systematic studies of error growth, low–order models of the atmospheric circulation, such as the Lorenz (1963,1980) models, have been used to study the predictability properties of chaotic systems in general and of the atmosphere in particular. In spite of their lack of realism, low order models can still be considered useful tools to study some of the mechanisms that limit predictability. Several different methods have been adopted in studies on the subject. In this paper we will follow the guidelines of methods employed in the theory of dynamical systems (Benzi et al. 1985; Benzi and Carnevale 1989) in order to study the statistics of divergence of initially close forecasts.
As is well-known (Oseledec 1968), for chaotic systems the asymptotic divergence of initially close trajectories behaves exponentially with a rate given by the maximum Lyapunov exponent $`\lambda _1`$ (Benettin et al. 1980). Lyapunov exponents are defined as global averages on the attractor of a chaotic system; however, predictability is a local property in the phase space. Indeed, operational forecasting experience indicates that the skill associated with individual forecasts may vary greatly. Thus, predictability properties of atmospheric models exhibit large variability in both time and phase space (Palmer et al. 1990); that is, predictability is characterized by the presence of fluctuations.
Another important point is the existence of a period of transient growth that can exceed the Lyapunov exponential growth rate (Mukougawa et al. 1991; Nicolis 1992; Trevisan 1993; Trevisan and Legnani 1995). Error amplification on short time scales is controlled by rapidly growing perturbations that are not of normal-mode form. If the normal modes of instability of a stationary solution are not orthogonal, perturbations may for some time grow faster than the most unstable normal mode; the global average growth rate of errors, initially in a random direction, attains the asymptotic value given by the first Lyapunov exponent only after a finite period of time. The concept of transient enhanced exponential growth was introduced by Farrell (1985) and Lacarra and Talagrand (1988). A common application of this concept in numerical weather prediction consists of finding the initial perturbation that grows fastest in a given period of time (ECMWF 1992; Mureau et al. 1993; Molteni et al. 1996).
In this paper we want to address the following questions:
* how can the fluctuations in the predictability properties of atmospheric models be described?
* how can the predictability properties in the transient region of super-exponential growth be described?
* what is the influence of initialization procedures on the predictability fluctuations?
Our major goal will be to give new insights on these three (related) problems. Specifically, we will use, for the first time in atmospheric science, mathematical tools which are well-known in the theory of dynamical systems and in other contexts (as in turbulence theory) and which allow the quantitative characterization of predictability fluctuations at long and short times.
The outline of the paper is as follows: in sec. 2, we discuss the mathematical background needed to describe the statistics of error growth. A quantitative definition of its statistical fluctuations (a characteristic also known as intermittency) is given in terms of an infinite set of characteristic time scales, related to the so-called generalized Lyapunov exponents $`L(q)`$. In the absence of fluctuations, the generalized Lyapunov exponents are linearly dependent on $`\lambda _1`$, thus making the maximum Lyapunov exponent the only relevant parameter defining the predictability. We also discuss how to characterize the predictability fluctuations at short times by means of new techniques recently applied in turbulence theory. In sec. 3, we apply these ideas to the low-order model introduced and studied by Lorenz (1980). In sec. 4 we investigate how predictability properties are modified by the superbalance equations introduced by Lorenz (1980), which do not support gravity wave oscillations. This allows us to make an assessment of the role played by gravity wave oscillations in enhancing predictability fluctuations at short times. Conclusions follow in sec. 5.
## 2 Predictability fluctuations: the general theory
In this section we shall review some basic mathematical definitions and concepts related to sensitive dependence of the trajectory of a generic dynamical system on initial conditions.
Let us consider an N-dimensional dynamical system given by the equations:
$$\dot{𝐱}=𝐟(𝐱),$$
(1)
where $`𝐱R^N`$. The time evolution of an infinitesimal disturbance $`\delta 𝐱(t)`$ is given by the linearized equations:
$$\delta \dot{𝐱}(t)=\mathrm{𝐃𝐟}\delta 𝐱(t),$$
(2)
where $`\delta 𝐱(0)\ne 0`$ and $`Df_{ij}={\displaystyle \frac{\partial f_i}{\partial x_j}}|_{𝐱=𝐱(t)}`$.
We can define the response function $`R(t,0)`$ as
$$R(t,0)=\frac{|\delta 𝐱(t)|}{|\delta 𝐱(0)|}.$$
(3)
The Oseledec theorem (Oseledec 1968) tells us that, for $`t\to \infty `$ and for almost all (in the sense of measure theory) initial conditions $`\delta 𝐱(0)`$, we have:
$$R(t,0)\sim e^{\lambda _1t},$$
(4)
where $`\lambda _1`$ is called the maximum Lyapunov exponent (Benettin et al. 1980). One may venture to guess that this exponential growth is valid even at finite times and that $`\lambda _1^{-1}`$ is the only characteristic time scale of the error growth; however, as is well-known, this is not the case (see, for example, Trevisan and Legnani 1995). Indeed, even if $`t`$ is large enough to allow the observation of an exponential growth rate of $`R(t,0)`$, $`\lambda _1^{-1}`$ is not the only relevant time scale: depending on the initial conditions, dynamical systems can show states or configurations that can be predicted for times longer or shorter than $`\lambda _1^{-1}`$.
The question we want to address is related to the existence of well-defined mathematical quantities able to characterize these predictability fluctuations in chaotic systems (Eckmann and Procaccia 1986; Paladin et al. 1986) and in particular in atmospheric models (Benzi and Carnevale 1989; Trevisan and Legnani 1995). This aspect will be investigated in sec. 2.1.
Another issue to address is that for times not long enough, the dynamical behavior of $`R(t,0)`$ is not necessarily characterized by an exponential growth rate. This question has been investigated by Mukougawa et al. (1991), Nicolis (1992), and Trevisan (1993), Trevisan and Legnani (1995), amongst others, and turns out to be of great importance in realistic applications of predictability theory. We shall come to this question in sec. 2.2.
### 2.1 Characterization of predictability fluctuations at long times
In this section we recall the concept of generalized Lyapunov exponents, already introduced in atmospheric applications by Benzi and Carnevale (1989). We point out that $`\lambda _1`$ is an average quantity and we define the probability $`P_t(\gamma )`$ of having a local exponent $`\gamma `$ different from $`\lambda _1`$. Then we derive the relationships between all these quantities and we give two examples of simple probability density functions (p.d.f.’s). In particular, we show that the log-Poisson p.d.f., proposed here for the first time in a meteorological context, has some interesting features regarding the description of strong fluctuations.
#### 2.1.1 Basic concepts
In this section we review some general statistical properties of the response function $`R(t,0)`$ (see also Benzi and Carnevale 1989). Let us consider times long enough to have an exponential growth rate for the response function $`R(t,0)`$. Let us introduce the quantities:
$$\langle R(t,0)^q\rangle ,$$
(5)
where $`\langle \cdot \rangle `$ is the average over different initial conditions.
If $`\lambda _1`$ were the only relevant time scale characterizing the error growth, then we should expect:
$$\langle R(t,0)^q\rangle \sim e^{\lambda _1qt}.$$
(6)
On the other hand, if many times scales characterize the error growth, we should have the more general behavior:
$$\langle R(t,0)^q\rangle \sim e^{L(q)t},$$
(7)
where the $`L(q)`$’s are the so-called generalized Lyapunov exponents (Fujisaka 1983; Benzi et al. 1985). In general, $`L(q)`$ is not a linear function of $`q`$.
Before explaining in detail the physical meaning of eq. (7), let us give simple examples of dynamical systems that satisfy either eq. (6) or eq. (7).
Let us consider the one dimensional “tent” map (see Fig. 1):
$$x_{n+1}=\{\begin{array}{cc}\frac{x_n}{c}\hfill & (0\le x\le c)\hfill \\ \frac{1-x_n}{1-c}\hfill & (c<x\le 1).\hfill \end{array}$$
(8)
In this case, $`t=n`$ is a discrete time and the corresponding response function $`R(n,0)`$ is given by:
$$R(n,0)=\underset{i=1}{\overset{n}{\prod }}D_i,$$
(9)
where $`D_i`$ is given either by $`1/c`$ or $`1/(1-c)`$, depending on whether $`x_i\in [0,c]`$ or $`x_i\in [c,1]`$, respectively.
In order to compute $`\langle R(n,0)^q\rangle `$, we need to know the p.d.f. of $`x`$. For the tent map the p.d.f. is that of the uniform distribution on the interval $`[0,1]`$ (Frisch 1995). It then follows from (9) that:
$$\langle R(n,0)^q\rangle =\left\{\int _0^c\left[\frac{1}{c}\right]^qdx+\int _c^1\left[\frac{1}{1-c}\right]^qdx\right\}^n=\{c^{1-q}+(1-c)^{1-q}\}^n=e^{L(q)n},$$
(10)
where
$$L(q)=\mathrm{ln}[c^{1-q}+(1-c)^{1-q}].$$
(11)
First of all, notice that for $`c=1/2`$, all points of the dynamical system are characterized by the same slope $`D_n=2`$. In this case we have:
$$L(q)=\mathrm{ln}\left[\left(\frac{1}{2}\right)^{1-q}+\left(\frac{1}{2}\right)^{1-q}\right]=q\mathrm{ln}2.$$
(12)
Secondly, for $`c>1/2`$ (and in particular for $`c`$ very close to 1) the system is characterized by two different slopes, $`1/c<2`$ and $`1/(1-c)>2`$; this means that there are states of the system for which the error growth is ‘slow’ ($`x\in [0,c]`$) and states for which the error growth is ‘fast’ ($`x\in [c,1]`$). In this case, $`L(q)`$ is no longer characterized by a single time scale.
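The closed form (11) can be checked by direct iteration of the tent map. The minimal sketch below draws initial conditions from the uniform invariant measure quoted above, accumulates the product of slopes along each trajectory, and compares the resulting estimate of $`L(q)`$ with eq. (11); the values of $`c`$, the trajectory length and the ensemble size are illustrative choices, not taken from the paper.

```python
import numpy as np

# Monte Carlo check of eq. (11) for the tent map: L(q) = ln[c^(1-q) + (1-c)^(1-q)].
rng = np.random.default_rng(0)

def tent_L_of_q(c, q, n=8, n_samples=500_000):
    x = rng.random(n_samples)          # uniform invariant measure on [0, 1]
    log_R = np.zeros(n_samples)        # log of the response after n steps
    for _ in range(n):
        slope = np.where(x <= c, 1.0 / c, 1.0 / (1.0 - c))
        log_R += np.log(slope)
        x = np.where(x <= c, x / c, (1.0 - x) / (1.0 - c))
    return np.log(np.mean(np.exp(q * log_R))) / n

c = 0.6
for q in (1, 2, 4, 6):
    numeric = tent_L_of_q(c, q)
    analytic = np.log(c**(1 - q) + (1 - c)**(1 - q))
    print(f"q={q}:  L(q) numeric = {numeric:.3f},  analytic = {analytic:.3f}")
```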
Coming back to the interpretation of eq. (7) and assuming that $`R(t,0)`$ already shows an exponential behavior, we can write:
$$R(t,0)\sim e^{\gamma t},$$
(13)
where $`\gamma `$ is the local error growth exponent; the error growth exponent is local in the sense that it depends on the particular initial condition under consideration. We now briefly review how $`\lambda _1`$ can be seen as the average over all possible local exponents of the system. We can rewrite $`R(t,0)`$ as:
$$R(t,0)=\underset{i=1}{\overset{M}{\prod }}R(t_i,t_{i-1}),$$
(14)
where $`0=t_0<t_1<\cdots <t_i<t_{i+1}<\cdots <t_M=t`$, and we are regarding the trajectory as a sequence of $`M`$ trajectories. By introducing the notation:
$$R(t_i,t_{i-1})\equiv e^{\gamma _i(t_i-t_{i-1})},$$
(15)
we have:
$$e^{\gamma t}=e^{\sum _{i=1}^M\gamma _i(t_i-t_{i-1})}.$$
(16)
Therefore, assuming $`t_i-t_{i-1}=\mathrm{\Delta }t`$ for any $`i`$, we have $`t=M\mathrm{\Delta }t`$ and
$$\gamma =\frac{1}{M}\underset{i=1}{\overset{M}{\sum }}\gamma _i.$$
(17)
Equation (17) tells us that $`\gamma `$ is just the average value of $`\gamma _i`$’s along the trajectory. By the Oseledec (1968) theorem we know that, for $`t\to \infty `$ (i.e., for $`M\to \infty `$), $`\gamma \to \lambda _1`$. Thus, $`\lambda _1`$ is the average over all possible local exponents $`\gamma _i`$ of the system. We can now ask what is the probability of having $`\gamma \ne \lambda _1`$ for finite times. In order to answer this question, we introduce the probability $`P_t(\gamma )`$ of the local error growth being equal to $`\gamma `$ at time $`t=M\mathrm{\Delta }t`$. The introduction of a p.d.f. $`P_t(\gamma )`$ is possible under the ergodicity assumption (for a review of the ergodic theory, see for example Halmos (1956)). The p.d.f. is then related to the existence of an attractor set, described by an invariant measure, which can be operatively defined by computing the average fraction of the time spent by the evolving system in any portion of the attractor. The quantity $`\langle R(t,0)^q\rangle `$ can be expressed by the integral:
$$\langle R(t,0)^q\rangle =\int P_t(\gamma )e^{\gamma qM\mathrm{\Delta }t}d\gamma .$$
(18)
As already pointed out, only for $`M\to \infty `$ do we have $`P_t(\gamma )\to \delta (\gamma -\lambda _1)`$. For finite but large $`M`$, the large deviations theory (for a general presentation see Varadhan (1984) and Ellis (1985); for a more physical treatment see Frisch (1995)) characterizes $`P_t(\gamma )`$ as:
$$P_t(\gamma )\sim e^{-S(\gamma )M\mathrm{\Delta }t},$$
(19)
where $`S(\gamma )`$ is a convex function such that $`S(\gamma )\ge 0`$ and $`S(\gamma )=0`$ for $`\gamma =\lambda _1`$. The large deviations theory holds, in its simplest form, for independent random variables. Like the law of large numbers and the central limit theorem, the large deviations theory has extensions to random variables with correlations, when these correlations decrease sufficiently fast. These are the conditions under which $`P_t(\gamma )`$ takes the form (19). In the language of large deviations theory, $`S(\gamma )`$ is called Cramer function or Cramer entropy (Mandelbrot 1991). Different systems will have different p.d.f.’s (i.e., different $`S(\gamma )`$). In the next part of this subsection we will show how the explicit form of the $`L(q)`$ exponents can be obtained from the Cramer function $`S(\gamma )`$ of the system. We will also show the general relation between $`L(q)`$ and $`\lambda _1`$.
Inserting (19) into (18), by saddle-point integration one obtains:
$$\langle R(t,0)^q\rangle \sim \int e^{[\gamma q-S(\gamma )]M\mathrm{\Delta }t}d\gamma \sim e^{L(q)M\mathrm{\Delta }t},$$
(20)
where
$$L(q)=\underset{\gamma }{sup}[q\gamma -S(\gamma )]$$
(21)
and $`sup_\gamma [f(\gamma )]`$ means the smallest upper bound of the function $`f(\gamma )`$.
Since $`S(\gamma )`$ is a convex function, the maximum value of the concave function $`q\gamma -S(\gamma )`$ is unique and attained at the value $`\gamma _q`$ at which
$$\frac{d(q\gamma -S(\gamma ))}{d\gamma }|_{\gamma =\gamma _q}=0\quad \text{or}\quad \frac{dS(\gamma )}{d\gamma }|_{\gamma =\gamma _q}=q.$$
(22)
Thus, from (21) one obtains:
$$L(q)=q\gamma _q-S(\gamma _q)$$
(23)
and
$$\frac{dL(q)}{dq}=\gamma _q+q\frac{d\gamma _q}{dq}-\frac{dS(\gamma _q)}{dq}=\gamma _q.$$
(24)
Since $`L(0)=0`$, it follows from (23) that $`S(\gamma _0)=0`$. Thus, as an immediate consequence of the general properties of $`S(\gamma )`$ , one obtains $`\gamma _0=\lambda _1`$. From (24) the following expression for the maximum Lyapunov exponent can be obtained:
$$\lambda _1=\frac{dL(q)}{dq}|_{q=0}.$$
(25)
The quantities $`\gamma _q`$ are the characteristic exponents describing the predictability fluctuations of the dynamical system: if $`L(q)`$ is a linear function of $`q`$, they reduce to a single exponent, namely $`\lambda _1`$ (see eq. (24)). If $`L(q)`$ does not follow the linear relationship $`\lambda _1q`$, then the complete set of exponents $`\gamma _q`$ is needed to characterize predictability fluctuations. In this case the system is intermittent. The functional form of $`L(q)`$ depends on the system, and specific assumptions on the p.d.f. $`P_t(\gamma )`$ have to be made in order to perform a quantitative analysis.
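The relations (18) and (24)–(25) are easy to verify numerically. The sketch below builds synthetic finite-time exponents $`\gamma `$ (Gaussian for simplicity, anticipating the log-normal case of the next subsection), estimates $`L(q)`$ from the moments of $`R=e^{\gamma t}`$, and recovers $`\lambda _1`$ as the derivative of $`L(q)`$ at $`q=0`$; all parameter values are illustrative.

```python
import numpy as np

# Illustration of eqs. (18) and (24)-(25): from synthetic finite-time exponents
# gamma, estimate L(q) via the moments of R = exp(gamma * t) and recover
# lambda_1 = dL/dq at q = 0 with a finite difference.
rng = np.random.default_rng(1)

lambda1, mu, t = 0.5, 0.2, 10.0
gamma = rng.normal(lambda1, np.sqrt(mu / t), size=1_000_000)  # samples of P_t(gamma)

def L(q):
    # L(q) = (1/t) ln < exp(q * gamma * t) >, cf. eqs. (7) and (18)
    return np.log(np.mean(np.exp(q * gamma * t))) / t

dq = 1e-3
lambda1_est = (L(dq) - L(-dq)) / (2 * dq)          # eq. (25)
print(f"estimated lambda_1 = {lambda1_est:.3f} (input {lambda1})")
print(f"L(2) = {L(2):.3f}, log-normal prediction {2*lambda1 + 2*mu:.3f}")
```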
#### 2.1.2 The log-normal case
There is no general theory about the shape of the $`S(\gamma )`$ function, as different systems may have different predictability fluctuations. If correlations are weak enough, central limit arguments can be applied and a Gaussian law for $`P_t(\gamma )`$ may be assumed (Paladin and Vulpiani 1987). Several numerical studies (see, for example, Benzi et al. 1985; Benzi and Carnevale 1989) show to what degree the log-normal hypothesis for the statistics of the error growth can reproduce the computed $`L(q)`$’s for simple dynamical and atmospheric models. Let us recall here the main points which follow from this hypothesis, assuming a quadratic behavior for $`S(\gamma )`$:
$$S(\gamma )=(\gamma -\lambda _1)^2/2\mu .$$
(26)
From eqs. (19) and (26) it follows that the normalized p.d.f. for $`\gamma `$ reads:
$$P_t(\gamma )=\frac{1}{\sqrt{2\pi \mu /t}}e^{-\frac{(\gamma -\lambda _1)^2}{2\mu /t}}.$$
(27)
From the transformation rule $`P_t(R)=P_t(\gamma (R))\left|\frac{d\gamma }{dR}\right|`$, the log-normal distribution for the response function $`R(t,0)`$ follows as:
$$P_t(R)=\frac{1}{R\sqrt{2\pi \mu t}}e^{-\frac{(\mathrm{ln}R-\lambda _1t)^2}{2\mu t}}.$$
(28)
The probability distribution is thus fully characterized by two parameters only:
$$\begin{array}{ccc}\lambda _1& =& \langle \mathrm{ln}R(t,0)\rangle /t,\\ \mu & =& [\langle (\mathrm{ln}R(t,0))^2\rangle -\langle \mathrm{ln}R(t,0)\rangle ^2]/t,\end{array}$$
(29)
where $`\lambda _1`$ is the maximum Lyapunov exponent and $`\mu `$ is the second cumulant.
In the general case, the parameters $`\lambda _1`$ and $`\mu `$, as defined in (29), continue to be important, although they may not completely represent the p.d.f.: they give respectively the mean value and the variance of the $`\gamma `$-fluctuations. For this reason, the $`\mu `$ parameter is usually referred to as the intermittency of the system. The value $`\mu /\lambda _1=1`$ may be taken as the borderline between weak and strong intermittency. Regarding the latter regime, an important point has to be stressed. Consider the most probable response function value $`\stackrel{~}{R}`$ (obtained by solving $`\frac{dP_t(R)}{dR}=0`$) and the mean value $`\langle R\rangle `$. In the log-normal case their expressions read:
$$\stackrel{~}{R}=e^{\lambda _1t(1-\mu /\lambda _1)}\quad \text{and}\quad \langle R\rangle =e^{\lambda _1t(1+\mu /(2\lambda _1))}.$$
(30)
From eqs. (30) it follows that a rough estimate of the response that, for example, takes the most probable value $`\stackrel{~}{R}`$ as representative of the distribution, fully breaks down for $`\mu /\lambda _1>1`$. This approximation predicts a decreasing error for long times ($`lim_{t\to \infty }\stackrel{~}{R}=0`$) instead of the chaotic error growth characterized by a positive exponent ($`lim_{t\to \infty }\langle R\rangle =\infty `$).
Using $`S(\gamma )`$ given by (26) in the expression (22) we obtain:
$$\gamma _q=\lambda _1+\mu q$$
(31)
which, substituted into (23), gives the generalized Lyapunov exponents for the log-normal case:
$$L(q)=\lambda _1q+\frac{1}{2}\mu q^2.$$
(32)
As a reasonable definition of the characteristic predictability time $`T`$ of the system, one can consider the following, which takes into account the effects of fluctuations:
$$T\sim \frac{1}{L(1)}=\frac{1}{\lambda _1+(1/2)\mu }.$$
(33)
It is reasonable to assume that, in the general case, $`L(q)`$ would be bounded between the linear shape (i.e., absence of fluctuations) and the quadratic shape (i.e., strong fluctuations). In many cases, the log-normal shape is a good approximation for small deviations of $`\gamma `$ around its mean value, i.e., it reproduces quite well the smallest moments of the distribution. On the other hand, for large $`q`$’s, the moments cannot be of the log-normal type (Paladin and Vulpiani 1987), since log-normal moments grow more than exponentially with $`q`$. This implies that $`lim_{q\to \infty }\gamma _q`$ is not finite. This limit is related to the fastest error growth $`R^{*}`$ in the system, which must be finite for all physical systems. The log-normal approximation fails to reproduce the tails of the distribution $`P_t(R)`$, that is, the largest fluctuations.
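The two estimators in eq. (29) and the predictability time (33) are straightforward to compute from an ensemble of response functions. The sketch below applies them to synthetic log-normal data so that the recovered parameters can be checked against the inputs; the ensemble size, time and parameter values are illustrative and not taken from the paper.

```python
import numpy as np

# Estimation of lambda_1 and mu from an ensemble of response functions,
# following eq. (29), together with the predictability time of eq. (33).
# `log_R` holds ln R(t,0) for each ensemble member at a fixed (large) time t;
# here it is filled with synthetic log-normal data, cf. eq. (28).
rng = np.random.default_rng(2)
t = 50.0
lambda1_true, mu_true = 0.4, 0.3
log_R = rng.normal(lambda1_true * t, np.sqrt(mu_true * t), size=100_000)

lambda1 = np.mean(log_R) / t                          # eq. (29), first line
mu = (np.mean(log_R**2) - np.mean(log_R)**2) / t      # eq. (29), second line
T = 1.0 / (lambda1 + 0.5 * mu)                        # eq. (33)

print(f"lambda_1 = {lambda1:.3f}, mu = {mu:.3f}, mu/lambda_1 = {mu/lambda1:.2f}")
print(f"predictability time T = {T:.2f}")
```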
#### 2.1.3 The log-Poisson case
The physical constraint that the fastest error growth must be finite forces one to look for parametrizations of $`P_t(\gamma )`$ other than Gaussian. For this purpose, let us introduce, for the first time to our knowledge in predictability theory, recent ideas developed in statistical treatment of fully developed turbulence (the basic concepts can be found in She and Waymire (1995)).
One way to consider alternative probability distributions is by observing that the quantity:
$$R(t^{},t)=\frac{|\delta 𝐱(t^{})|}{|\delta 𝐱(t)|}$$
(34)
obeys the multiplicative rule:
$$R(t^{},t)=R(t^{},t^{\prime \prime })R(t^{\prime \prime },t),$$
(35)
for any $`t^{\prime \prime }`$. Thus, one is led to consider all possible probability functions $`P_t(\gamma )`$ which are left functionally invariant by the multiplicative transformation (35). We shall call this class of probability functions covariant.
By making the (strong) assumption of weak correlation between $`R(t^{},t^{\prime \prime })`$ and $`R(t^{\prime \prime },t)`$, an important class of distributions turns out to be covariant: this is the class of the infinitely divisible distributions (IDD). The normal and the Poisson distributions are the most popular examples of IDD (for a comprehensive list and demonstrations of the properties of IDD, see for example Doob (1990)).
In the following we shall consider the Poisson distribution as the simplest example of a non-Gaussian IDD that gives a more suitable description of strong fluctuations than the Gaussian distribution. As we shall see, it also allows us to satisfy the physical constraint that the fastest error growth has to be finite.
Notice that this simple (and discrete) stochastic model is not intended to represent the p.d.f. of a generic (continuous) physical system. Our aim here is twofold: on one hand, we show that the Poisson distribution, in spite of its simplicity, shares some important properties with realistic p.d.f.’s; on the other hand, we show that statistical quantities can be successfully fitted by using the Poisson formulation. In this sense, the Poisson assumption gives a simple way to characterize and quantify some predictability properties.
Let us rewrite the response function in terms of a Poisson random variable $`x`$, factorizing its exponential time dependence:
$$R(t,0)=e^{at}\beta ^x,$$
(36)
where $`\beta \le 1`$ and $`x`$ is a random variable that follows the Poisson distribution:
$$P_t(x=n)=\frac{(bt)^ne^{bt}}{n!}.$$
(37)
Using eq. (37), the qth-order moment of the response function $`R`$ can be easily calculated:
$$\langle R^q\rangle =\underset{n}{\sum }\frac{(bt)^ne^{-bt}}{n!}e^{aqt}\beta ^{qn}=e^{(aq-b)t}\underset{n}{\sum }\frac{(bt\beta ^q)^n}{n!}=e^{[aq-b(1-\beta ^q)]t}$$
(38)
and the expression for the generalized Lyapunov exponents becomes:
$$L(q)=aq-b(1-\beta ^q).$$
(39)
From eqs. (24) and (25) we also obtain
$$\gamma _q=a+b\beta ^q\mathrm{ln}\beta \quad \text{and}\quad \lambda _1=a+b\mathrm{ln}\beta .$$
(40)
Notice that for large $`q`$’s the generalized Lyapunov exponents given by (39) behave as $`L(q)\approx aq-b`$. The $`a`$ parameter is associated with the fastest error growth in the system, $`R^{*}=e^{at}`$, which is finite. Thus, the log-Poisson p.d.f. does not show the pathologies typical of the log-normal distribution.
A comparison between $`R^{*}`$ and $`\langle R\rangle `$ can give a measure of intermittency, namely:
$$\frac{\mathrm{ln}R^{*}}{\mathrm{ln}\langle R\rangle }=\frac{a}{L(1)}\equiv \stackrel{~}{a}.$$
(41)
To summarize, the considerations outlined in this section allow us to investigate predictability properties at large $`t`$ (i.e., on the attractor set) by means of quantities such as $`\lambda _1`$, $`\mu `$ and $`L(q)`$. In particular, one can consider the ratio $`\mu /\lambda _1`$ (particularly important if a log-normal hypothesis is made) or the ratio $`\stackrel{~}{a}`$ (in the case of log-Poisson statistics) as measures of predictability fluctuations.
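The difference between the two parametrizations is easy to make concrete numerically. The sketch below evaluates the log-normal form (32) and the log-Poisson form (39) with the same $`\lambda _1`$ and the same curvature at $`q=0`$, and computes the intermittency indicator $`\stackrel{~}{a}`$ of eq. (41); the parameter values are illustrative and are not fitted to any model.

```python
import numpy as np

# Comparison of the log-normal (32) and log-Poisson (39) parametrizations of
# the generalized Lyapunov exponents, and the intermittency indicator (41).

def L_lognormal(q, lambda1, mu):
    return lambda1 * q + 0.5 * mu * q**2

def L_logpoisson(q, a, b, beta):
    return a * q - b * (1.0 - beta**q)

a, b, beta = 1.0, 0.8, 0.4
lambda1 = a + b * np.log(beta)          # eq. (40): same lambda_1 for both forms
mu = b * np.log(beta)**2                # match the curvature L''(0) at q = 0

print("q   log-normal   log-Poisson")
for q in range(1, 11):
    print(f"{q:2d}   {L_lognormal(q, lambda1, mu):9.3f}   "
          f"{L_logpoisson(q, a, b, beta):10.3f}")

a_tilde = a / L_logpoisson(1, a, b, beta)   # eq. (41)
print(f"intermittency indicator a~ = {a_tilde:.2f}")
```

For large `q` the log-normal curve grows quadratically while the log-Poisson one approaches the linear asymptote `a*q - b`, which is the behaviour discussed above.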
### 2.2 Characterization of predictability fluctuations at short times
In this section we show, with the help of a simple example, how it is possible to extract quantitative information about the predictability fluctuations of a system in the super-exponential error growth regime. Here and for the first time in a meteorological context, we use recent techniques employed in turbulence data analysis, which lead to a generalization of the concepts previously reported for the asymptotic regime. Again, the simple Poisson assumption can be made and useful statistical quantities can be evaluated.
We are here concerned with the response function for a finite time interval. A statement about the choice of initial conditions has thus to be made. In the following, we make the familiar assumption of a homogeneous and isotropic probability distribution of the disturbance vector in phase space, which was assumed originally by Lorenz (1965).
#### 2.2.1 An example
In order to discuss the predictability fluctuations at short times, we will consider the Lorenz (1963) system (hereafter L63). For $`r>24.74`$ (the critical value for the fixed points to be unstable) a linear behavior of $`L(q)`$ is found (see Benzi et al. 1985): the system does not show intermittency in the long times regime. The same behavior does not hold for short times, which are relevant in meteorological applications.
This point can be detected in Fig. 2, where the second and sixth order moments of the response function $`R`$ are shown, as an example, as a function of time for $`r=28`$. In both graphs, two temporal ranges are clearly detectable (the noise in the signal is simply due to lack of statistics). For $`t\gtrsim \tau =1.25`$ there is evidence of an exponential error growth, whose exponent, in these log-linear plots, is given by the slope of the straight line fitting the data: the slopes are $`L(2)\simeq 2\lambda _1\simeq 1.6`$ (see Fig. 2(a)) and $`L(6)\simeq 6\lambda _1=4.8`$ (see Fig. 2(b)). These values confirm the linear law $`L(q)=q\lambda _1`$ for the generalized Lyapunov exponents found by Benzi et al. (1985).
At shorter times ($`t\lesssim \tau `$), the error grows more rapidly. Furthermore, a zoom over $`t\lesssim 0.2`$ reveals a non-exponential behavior of the response function. This feature makes it difficult to extract information from the data inside the range $`0\le t\le \tau `$.
Notice that the separation between the two temporal regimes takes place at a time scale $`\tau `$ which can be identified as $`\tau \approx 1/\lambda _1`$. This seems to be a non-trivial result, even if more examples based on other models are necessary to gain confidence that this property is general and not merely a coincidence.
In order to determine the statistical properties of the error growth at small $`t`$, we try to consider different representations of our data. Let us consider, as an example, the log-log plot of the sixth order moment against the second order moment (Fig. 3). As one can see, such a representation reduces the spreading of points with respect to Fig. 2, thus improving the quality of the linear fit. Furthermore, a zoom corresponding to the temporal range $`t\lesssim 0.2`$ (inside box in Fig. 3) reveals the following key point: a linear behavior now takes place from very early times and a measure of intermittency now becomes possible. The best fit for long times ($`t\gtrsim 1/\lambda _1`$) gives the ratio $`L(6)/L(2)\simeq 3`$ (i.e., no intermittency is present in the long-time regime). For $`t<1/\lambda _1`$ this ratio gives the value $`\simeq 4.9`$, indicating a nonlinear behavior of $`L(q)`$ and thus the intermittent nature of the system for short times.
Fig. 3 represents one of the main results obtained in this paper.
From this example we argue that it is possible to achieve a quantitative measure of predictability fluctuations even at short time, when non-exponential growth rates are observed.
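A minimal numerical sketch of this experiment is given below: the L63 system with the standard parameters is integrated together with its tangent-linear equations for an ensemble of random unit-norm initial errors, and the ESS slope of $`\mathrm{ln}\langle R^6\rangle `$ against $`\mathrm{ln}\langle R^2\rangle `$ is fitted separately at short and long times. The integration scheme, time step, ensemble size and fitting windows are illustrative choices (the ensemble is kept small for speed), so the numbers it prints are only indicative.

```python
import numpy as np

# Lorenz (1963) with standard parameters, tangent-linear evolution of a random
# unit-norm initial error, and the ESS ratio ln<R^6>/ln<R^2> at short/long times.
rng = np.random.default_rng(3)
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def f(x):
    return np.array([SIGMA * (x[1] - x[0]),
                     x[0] * (RHO - x[2]) - x[1],
                     x[0] * x[1] - BETA * x[2]])

def jac(x):
    return np.array([[-SIGMA, SIGMA, 0.0],
                     [RHO - x[2], -1.0, -x[0]],
                     [x[1], x[0], -BETA]])

def rk4(func, y, dt):
    k1 = func(y); k2 = func(y + 0.5 * dt * k1)
    k3 = func(y + 0.5 * dt * k2); k4 = func(y + dt * k3)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

dt, n_steps, n_ens = 0.005, 800, 500          # 800 steps = 4 time units
log_R = np.zeros((n_ens, n_steps))

for m in range(n_ens):
    x = rng.normal(size=3)
    for _ in range(1000):                      # spin-up onto the attractor
        x = rk4(f, x, dt)
    dx = rng.normal(size=3)
    dx /= np.linalg.norm(dx)                   # random initial error, |dx| = 1
    for n in range(n_steps):
        x_new = rk4(f, x, dt)
        dx = rk4(lambda v: jac(x) @ v, dx, dt) # tangent-linear step (Jacobian frozen at x)
        x = x_new
        log_R[m, n] = np.log(np.linalg.norm(dx))

t = dt * np.arange(1, n_steps + 1)
ln_R2 = np.log(np.mean(np.exp(2 * log_R), axis=0))
ln_R6 = np.log(np.mean(np.exp(6 * log_R), axis=0))

short = t < 0.2
long_ = t > 1.5
slope_short = np.polyfit(ln_R2[short], ln_R6[short], 1)[0]
slope_long = np.polyfit(ln_R2[long_], ln_R6[long_], 1)[0]
print(f"ESS slope ln<R^6> vs ln<R^2>: short times {slope_short:.1f}, "
      f"long times {slope_long:.1f}")
```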
#### 2.2.2 The ESS analysis at short times
The results of Fig. 3 suggest to try a generalization of the concepts reported in sec. 2.1. Accordingly, one can continue to assume the covariance property (35) to be valid even for a time-dependence of the response $`R(t,0)`$ different from the exponential one.
Thus, the hypothesis of IDD’s governing the error growth statistics continues to hold in the transient regime. As an example, let us consider again the Poisson case. The temporal dependence $`e^t`$ may be replaced by a more general function $`g(t)`$, such that $`g(0)=1`$ and $`g(t)\to e^t`$ for $`t\to \infty `$. Relation (36) becomes:
$$R(t,0)=g(t)^a\beta ^x,$$
(42)
where $`x`$ is a random variable with a Poisson distribution:
$$P_t(x=n)=\frac{[b\mathrm{ln}g(t)]^ne^{-b\mathrm{ln}g(t)}}{n!}.$$
(43)
From (42) and (43), it follows that the statistical moments of the response function at short $`t`$ take the form:
$$\langle R(t,0)^q\rangle =e^{L(q)\mathrm{ln}g(t)},$$
(44)
with $`L(q)`$ given by (39).
The results of Fig. 3 suggest that one may extract information about predictability fluctuations in the system by considering ratios like $`\mathrm{ln}\langle R^q\rangle /\mathrm{ln}\langle R^p\rangle `$, which allow us to eliminate the functional dependence on $`g(t)`$. This is the basic idea of the Extended Self Similarity (ESS) techniques recently employed in fully developed turbulence (Benzi et al. 1993; Benzi et al. 1995): they concern the possibility of observing a scaling behavior of turbulent velocity and energy dissipation fields over an enlarged range of spatial scales and in systems and conditions where these scaling properties are not evident with standard techniques.
The above simple considerations allow extending the notion of generalized Lyapunov exponents in a simple way. Accordingly, we can define the quantities:
$$\frac{\mathrm{ln}\langle R^q\rangle }{\mathrm{ln}\langle R\rangle }=\frac{L(q)\mathrm{ln}g(t)}{L(1)\mathrm{ln}g(t)}\equiv \stackrel{~}{L}(q).$$
(45)
The constraint $`\stackrel{~}{L}(1)=1`$ leaves only two free parameters, $`\stackrel{~}{a}=a/L(1)`$ and $`\stackrel{~}{b}=b/L(1)`$. As already noted (see Eq. (41)), the quantity $`\stackrel{~}{a}`$ has a straightforward meaning and can be considered as a useful intermittency indicator.
By measuring $`\stackrel{~}{L}(q)`$ exponents, it is thus possible to obtain quantitative information about predictability properties even at short times.
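In practice, each $`\stackrel{~}{L}(q)`$ is obtained as the slope of $`\mathrm{ln}\langle R^q\rangle `$ against $`\mathrm{ln}\langle R\rangle `$ over the chosen temporal window. The sketch below shows this fit; the moment time series are synthetic and follow a log-Poisson law by construction (eqs. (39) and (44)), so the fitted exponents should return the input values. The parameters and the range of $`g(t)`$ are illustrative.

```python
import numpy as np

# ESS estimate of the exponents defined in eq. (45): L~(q) is the slope of
# ln<R^q> against ln<R> over a temporal window.  Synthetic data are used here
# as a stand-in for the output of an ensemble integration.
rng = np.random.default_rng(4)

a, b, beta = 2.0, 1.8, 0.5                      # illustrative log-Poisson parameters
g = np.linspace(1.0, 3.0, 200)                  # g(t) sampled over some time window
L = lambda q: a * q - b * (1.0 - beta**q)       # eq. (39)

qs = range(1, 9)
ln_Rq = {q: L(q) * np.log(g) + rng.normal(0, 0.01, g.size) for q in qs}

for q in qs:
    L_tilde = np.polyfit(ln_Rq[1], ln_Rq[q], 1)[0]
    print(f"q={q}:  L~(q) fitted = {L_tilde:.3f},  input L(q)/L(1) = {L(q)/L(1):.3f}")
```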
Let us conclude this section by introducing another new indicator that we consider useful in studying predictability fluctuations at short times. We define the quantity $`\stackrel{~}{P}(t)`$ as:
$$\stackrel{~}{P}(t)=\int _{R\le 1}P_t(R)dR,$$
(46)
where $`P_t(R)`$ is the p.d.f. of the response function $`R(t,0)`$.
The quantity $`\stackrel{~}{P}(t)`$ is the probability of observing an error $`\delta 𝐱(t)`$ smaller than or equal to the initial error $`\delta 𝐱(0)`$. By construction, we have $`\stackrel{~}{P}(0)=1`$, while in the limit of large $`t`$, $`\stackrel{~}{P}(t)\to 0`$. If there are no fluctuations in the predictability exponents, one obtains $`\stackrel{~}{P}(t)=0`$ for every $`t>0`$. This is not the case for intermittent systems.
Let us compare the cases of log-normal and log-Poisson distributions. For a log-normal distribution, $`\stackrel{~}{P}(t)`$ decreases monotonically toward zero. An easy way to see this is the following: after substituting eq. (28) into eq. (46) and defining the variable
$$z=\frac{\mathrm{ln}R-\lambda _1t}{\sqrt{2\mu t}},$$
(47)
one obtains the following expression of eq. (46) for the log-normal case:
$$\stackrel{~}{P}(t)=\int _{-\infty }^{-\frac{\lambda _1}{\sqrt{2\mu }}\sqrt{t}}\frac{e^{-z^2}}{\sqrt{\pi }}dz.$$
(48)
The dependence on $`t`$ is now only in the upper bound of the integral: as time increases, the range of integration decreases, thus making $`\stackrel{~}{P}(t)`$ a monotonically decreasing function of $`t`$.
On the other hand, the short-time behavior in the log-Poisson case is completely different: $`\stackrel{~}{P}(t)`$ grows as $`(1-e^{-b\mathrm{ln}g(t)})`$, up to a time $`\overline{t}`$ such that $`g(\overline{t})=\beta ^{-1/a}`$. Then, for large $`t`$, it decays toward zero, following a typical step behavior induced by the discrete character of the Poisson distribution.
The above considerations imply that in an intermittent system the probability of observing an error smaller than or equal to the initial error can be finite. Moreover, the behavior of $`\stackrel{~}{P}(t)`$ for small $`t`$ is completely different in the two cases of log-Poisson and log-normal distributions. This effect is due to the finite maximum growth rate of the Poisson case: the finiteness of the parameter $`a`$ allows a growth of $`\stackrel{~}{P}(t)`$ for $`t\overline{t}`$.
These considerations illustrate the straightforward physical meaning of the quantity $`\stackrel{~}{P}(t)`$ in predictability studies concerning the short time range. Moreover, a systematic, quantitative investigation may be performed at any time by evaluating the $`\stackrel{~}{L}(q)`$ exponents.
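The qualitative difference between the two cases is easy to reproduce by direct sampling of the two models for $`R(t,0)`$. In the sketch below, $`\mathrm{ln}R`$ is Gaussian with mean $`\lambda _1t`$ and variance $`\mu t`$ in the log-normal case, while $`R=g(t)^a\beta ^x`$ with $`x`$ Poisson of mean $`b\mathrm{ln}g(t)`$ in the log-Poisson case, taking $`g(t)=e^t`$ for simplicity; all parameter values are illustrative.

```python
import numpy as np

# Monte Carlo illustration of the indicator P~(t) of eq. (46): monotone decay
# for the log-normal model versus initial growth followed by decay for the
# log-Poisson model.
rng = np.random.default_rng(5)
n = 200_000

lambda1, mu = 0.5, 0.6                  # log-normal parameters
a, b, beta = 1.5, 0.5, 0.3              # log-Poisson parameters (lambda_1 > 0)

for t in (0.1, 0.5, 0.8, 1.5, 3.0):
    ln_R_ln = rng.normal(lambda1 * t, np.sqrt(mu * t), n)
    x = rng.poisson(b * t, n)           # b * ln g(t) with g(t) = e^t
    ln_R_lp = a * t + x * np.log(beta)
    p_ln = np.mean(ln_R_ln <= 0.0)      # P(R <= 1), log-normal case
    p_lp = np.mean(ln_R_lp <= 0.0)      # P(R <= 1), log-Poisson case
    print(f"t={t:4.1f}:  P~ log-normal = {p_ln:.3f},  P~ log-Poisson = {p_lp:.3f}")
```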
## 3 Application to a low-order primitive equation model
In order to apply the theory outlined in the previous section, we have considered the low-order model for atmospheric flow introduced by Lorenz (1980). This model represents a suitable truncation of simplified primitive equations (hereafter PE) on a $`\beta `$-channel. As in Lorenz (1980), the model equations are:
$$a_i\frac{dx_i}{d\tau }=a_ib_ix_jx_k-c(a_i-a_k)x_jy_k+c(a_i-a_j)y_jx_k-2c^2y_jy_k-\nu _0a_i^2x_i+a_i(y_i-z_i)$$
(49)
$$a_i\frac{dy_i}{d\tau }=-a_kb_kx_jy_k-a_jb_jy_jx_k+c(a_k-a_j)y_jy_k-a_ix_i-\nu _0a_i^2y_i$$
(50)
$$\frac{dz_i}{d\tau }=-b_kx_j(z_k-h_k)-b_j(z_j-h_j)x_k+cy_j(z_k-h_k)-c(z_j-h_j)y_k+g_0a_ix_i-\kappa _0a_iz_i+F_i$$
(51)
where $`x_i`$, $`y_i`$ and $`z_i`$ are the coefficients of the velocity potential, streamfunction and height fields, respectively. Here, $`(i,j,k)`$ stands for any permutation over $`(1,2,3)`$. For our studies, we have used physical parameter values which correspond to large scale motion in the mid-latitude atmosphere, in agreement with previous works (Lorenz 1980; Gent and McWilliams 1982; Vautard and Legras 1986; Lorenz 1986; Curry et al. 1995).
The evolution equations for an infinitesimal error have been obtained by linearization of eqs. (49)-(51). For all simulations presented in this paper, the familiar assumption of a homogeneous and isotropic p.d.f. of the initial disturbance vector in the phase space is made (Lorenz 1965). The number of realizations (i.e., different trajectories) considered is of the order of $`10^4`$. For each realization of the ensemble, the initial error has been randomly selected on the sphere centered at the initial point of the trajectory. The numerical integrations have been performed using a fourth-order Runge-Kutta scheme.
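A generic driver for this kind of ensemble experiment is sketched below: the initial error is drawn isotropically on the unit sphere (a normalized Gaussian vector), and both the trajectory and the linearized error are advanced with a fourth-order Runge-Kutta step. The callables `f` and `jacobian` are placeholders for the model tendencies of eqs. (49)-(51) and their Jacobian; they are not implemented here, and the frozen-Jacobian treatment within each step is an approximation.

```python
import numpy as np

# Generic ensemble protocol for the response function R(t,0) of an
# N-dimensional model: random isotropic unit-norm initial errors and RK4
# integration of both the nonlinear and the tangent-linear equations.
def rk4_step(func, y, dt):
    k1 = func(y); k2 = func(y + 0.5 * dt * k1)
    k3 = func(y + 0.5 * dt * k2); k4 = func(y + dt * k3)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def response_ensemble(f, jacobian, x0_list, dt, n_steps, rng):
    """Return log R(t,0) with shape (len(x0_list), n_steps)."""
    out = np.empty((len(x0_list), n_steps))
    for m, x in enumerate(x0_list):
        dx = rng.normal(size=x.size)
        dx /= np.linalg.norm(dx)                    # isotropic error on the unit sphere
        for n in range(n_steps):
            x_next = rk4_step(f, x, dt)
            dx = rk4_step(lambda v: jacobian(x) @ v, dx, dt)  # tangent equation
            x = x_next
            out[m, n] = np.log(np.linalg.norm(dx))
    return out
```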
In order to show the capability of the parameter $`\mu /\lambda _1`$ to capture the fluctuations in the predictability, let us consider the three different regimes of the PE model discussed by Krishnamurthy (1985), corresponding to three different values of the forcing parameter: $`F_1=0.1`$, $`F_1=0.25`$ and $`F_1=0.3`$. The physical regimes associated with such values are the following (for details, see Krishnamurthy (1985)): for $`F_1=0.1`$ the attractor of the PE system is free from gravity waves; for $`F_1=0.25`$ the trajectories vary chaotically, with an almost periodic behavior interrupted irregularly by short periods characterized by high frequency gravity waves; finally, for $`F_1=0.3`$, gravity waves are present all the time.
To evaluate the two parameters $`\lambda _1`$ and $`\mu `$, defined in (29), each realization (trajectory) has been made to last for a time $`t`$ of the order of a hundred days, large enough to apply all the considerations of sec. 2.1.
In Tab. 1, the values of the positive maximum Lyapunov exponent, the intermittency, the ratio $`\mu /\lambda _1`$, and the predictability time $`T`$ for the PE system in the three aforementioned regimes are shown. As already noted, the value of $`\mu /\lambda _1=1`$ could be considered as the borderline between weak and strong intermittency. We can see from Tab. 1 that three different values of $`\mu /\lambda _1`$ occur in the three regimes. The signature of the gravity wave activity, in the cases $`F_1=0.25`$ and $`F_1=0.3`$, is emphasized by the enhancement of both $`\lambda _1`$ and $`\mu `$, with respect to the case $`F_1=0.1`$. The predictability of the system is thus reduced by persistent gravity waves. It is also worth noticing that strong intermittency (i.e., $`\mu /\lambda _1>1`$) is present only in the intermediate case $`F_1=0.25`$, characterized by the appearance of bursts of chaoticity containing high frequency gravity waves.
For the regime $`F_1=0.1`$, we have also performed a systematic numerical investigation of the generalized Lyapunov exponents at long and short times. The logarithm of the sixth order moment of the response function $`R`$ as a function of time is reported in Fig. 4. Two regions are clearly detectable. For $`t`$ large enough ($`t\gtrsim 1/\lambda _1\simeq 16`$ days) the points can be fitted by a straight line, indicating an exponential error growth: the slope of the fitting line gives the sixth order generalized Lyapunov exponent $`L(6)`$ (the noise in the signal is simply due to lack of statistics). At shorter times ($`t\lesssim 1/\lambda _1`$), the error grows more rapidly; moreover, a zoom over the first 5 days reveals a strongly nonlinear behavior. As we shall see in the next section, this is mainly associated with the presence of transient gravity waves, whose amplitude is significant, especially in the very first stages of integration.
Notice that, as in the L63 model, the characteristic time scale associated with a change in the error growth regime is of the order of $`1/\lambda _1`$.
The use of ESS techniques allows us to improve the quality of the fit in the long-time region and to extract information on generalized Lyapunov exponents even at small $`t`$. As an example, the sixth order moment against the first order moment is shown in Fig. 5 in a log-log plot. We can see from the zoom over the 5 day temporal range that a linear behavior occurs from very early times. We have then evaluated for 14 moments (q=0.2, 0.4, 0.6, 0.8, 1, 2,…, 10) the $`\stackrel{~}{L}(q)`$ exponents at short and long times, by performing linear fits in the two temporal regions $`0<t10`$ days and $`40t300`$ days, respectively. The results are shown in Fig. 6 : the points are well fitted by the log-Poisson formula, with parameters $`\stackrel{~}{a}_l=1.2`$, $`\stackrel{~}{b}_l=0.3`$ (for the long-time $`\stackrel{~}{L}(q)`$ values) and $`\stackrel{~}{a}_s=4.5`$, $`\stackrel{~}{b}_s=4.3`$ (for the short-time $`\stackrel{~}{L}(q)`$ values). In particular, notice that $`\stackrel{~}{a}_s>\stackrel{~}{a}_l`$, as the comparison of the two slopes at high $`q`$’s shows. This is a quantitative indication of the more intermittent nature of the system for short times (see eq. (41)).
Focusing our attention on the short-time regime, we have also evaluated the function $`\stackrel{~}{P}(t)`$ defined in sec. 2.2, by counting the number of realizations (trajectories) having $`R(t,0)<1`$ at a certain time t:
$$\stackrel{~}{P}(t)=\frac{N_{R(t,0)<1}}{N_{\mathrm{tot}}}.$$
(52)
Fig. 7 shows $`\stackrel{~}{P}(t)`$ for the PE model. Due to the appearance of transient gravity waves, a very strong loss of predictability occurs immediately after the beginning of the PE model integration: $`\stackrel{~}{P}(t)`$ is close to $`0`$ and starts to grow until $`\overline{t}\simeq 5`$ days, a behavior in qualitative agreement with a log-Poisson character of the $`R(t,0)`$ p.d.f. After about $`10`$ days, gravity waves have died out and only slow oscillations remain.
From the above results, it appears that the presence of gravity waves (on the attractor set as well as in the initial transient) has strong effects on predictability. In particular, the loss of predictability immediately after the first times of the model integration may be relevant for practical purposes. The final question we want to address is thus the following: may the elimination of gravity waves produce an enhanced short-time predictability? We shall try to answer this question in the next section.
## 4 Predictability fluctuations and their relations with gravity wave activity
Our first aim is to study how the predictability fluctuations are influenced by the disappearance of gravity waves in models not supporting such waves.
We shall focus our attention on Lorenz’s (1980) algorithm exploiting the complete separation between quasi-geostrophic and gravity wave frequencies, and leading to the well-known superbalance equation, defining the slow manifold (Leith 1980; Lorenz 1980).
Defining $`𝐗^G=(X_1^G,X_2^G,X_3^G,X_4^G,X_5^G,X_6^G)`$, where $`X_i^G=x_i`$ and $`X_{i+3}^G=z_i-y_i`$ for $`i=1,2,3`$, while $`𝐗^R=(y_1,y_2,y_3)`$, Lorenz (1980) exploited the scale separation by imposing the condition $`\frac{d^p𝐗^G}{dt^p}=0`$, which can be solved iteratively for each $`p`$ by the Newton method (Lorenz 1980).
The low-order dynamical system obtained from the PE model by calculating the $`𝐗^G`$ components through Lorenz’s algorithm (with $`p=1`$) is called here SuperBalance Equation (SBE) model.
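A minimal sketch of the $`p=1`$ balance step is given below: given the quasi-geostrophic variables $`𝐗^R`$, the gravity-wave variables $`𝐗^G`$ are adjusted by Newton's method until their tendency vanishes. The function `tendency_G` stands for the part of the model tendencies acting on $`𝐗^G`$ and is not implemented here; the Jacobian is approximated by finite differences, and tolerances and step sizes are illustrative.

```python
import numpy as np

# p = 1 superbalance (slow-manifold) initialization: solve dX^G/dt = 0 for the
# gravity-wave variables X^G at fixed quasi-geostrophic variables X^R.
def newton_balance(tendency_G, xG0, xR, tol=1e-10, max_iter=50, eps=1e-7):
    xG = np.asarray(xG0, dtype=float).copy()
    for _ in range(max_iter):
        F = tendency_G(xG, xR)                   # dX^G/dt at the current guess
        if np.linalg.norm(F) < tol:
            break
        J = np.empty((xG.size, xG.size))         # finite-difference Jacobian dF/dX^G
        for j in range(xG.size):
            dx = np.zeros_like(xG); dx[j] = eps
            J[:, j] = (tendency_G(xG + dx, xR) - F) / eps
        xG -= np.linalg.solve(J, F)              # Newton update
    return xG
```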
We apply the notions outlined in sec. 2.1 and 2.2 to the PE and SBE models in order to quantify the effects of gravity wave activity on the predictability fluctuations.
The positive maximum Lyapunov exponent, the intermittency, the ratio $`\mu /\lambda _1`$ and predictability time $`T`$ (see eq. (33)) are shown in Tab. 2 for the two models. Actually, as we can see from this table, these asymptotic quantities do not permit us to discriminate between the PE and SBE models. The very slight differences in these chaotic indicators confirm the capability of SBE to reproduce the asymptotic statistical properties of the PE system. Let us then come back to the original question: in what sense does elimination of transient gravity waves make the SBE model less chaotic than the PE model?
An answer to this question can be given by exploiting the short-time analysis presented in sec. 2.2. In Fig. 8 we report the same plot as Fig. 4, but for the SBE model. As in the PE model, two regions are clearly detectable, with the error growth for short times faster than that for long times. However, the zoom over the first 5 days shows an exponential (that is, linear in the graph) error growth from the very first instants, unlike the PE case. This is due to the projection onto the slow manifold, which causes the gravity waves to die out completely.
In order to improve the quality of the fit and make a comparison possible, we perform for the moments of the error growth the same ESS analysis already presented for the PE case. In particular, $`\stackrel{~}{L}(q)`$ exponents of the SBE model in the long-time regime turn out to be indistinguishable from those of the PE model shown in Fig. 6 (white circles). This fact confirms the success of the SBE model in reproducing the asymptotic statistical properties of the Lorenz model.
Let us focus our attention on the short-time behavior. The $`\stackrel{~}{L}(q)`$ exponents for the SBE model, obtained from a linear fit at short times, are reported in Fig. 9. In order to simplify the comparison, the corresponding curve for the PE model (already reported in Fig. 6 (black circles)) is shown. Several comments are worthwhile. The log-Poisson fit for the SBE model gives $`\stackrel{~}{a}_s=2.9`$ and $`\stackrel{~}{b}_s=2.4`$: in particular, we stress that the value of $`\stackrel{~}{a}_s`$ is smaller than in the PE case. Hence, the disappearance of gravity waves in the SBE model decreases temporal intermittency, damping predictability fluctuations appreciably.
The enhanced predictability during the first days becomes more evident if one considers $`\stackrel{~}{P}(t)`$ behavior for the PE and SBE models, shown in Fig. 10. As we can see, the SBE curve strongly differs from that of the PE system during the first days. Notice that the $`\stackrel{~}{P}(t)`$ starts near $`1`$ for the SBE model. It then follows that the very strong loss of predictability (i.e., $`\stackrel{~}{P}(t)`$ close to $`0`$) immediately after the first times of the PE model integration is certainly due to the transient gravity waves: for the SBE model, where gravity waves are not present, there is only a slight loss of predictability (i.e., $`\stackrel{~}{P}(t)`$ close to $`1`$) during the first few days.
After about $`10`$ days, gravity waves disappear in the PE system and the behavior of $`\stackrel{~}{P}(t)`$ for the two models becomes quite similar.
From the behavior of both $`\stackrel{~}{L}(q)`$ and $`\stackrel{~}{P}(t)`$, we can conclude that the SBE model systematically produces at short times a more predictable flow with respect to the PE model.
To our knowledge, this is the first statistical assessment concerning the effects introduced by initialization procedures on the predictability properties of an atmospheric model.
## 5 Conclusions
In the Introduction we have posed three different questions which we believe are relevant for the predictability problem in atmospheric flows.
The answer to the first question we posed (how to describe predictability fluctuations) can be given in terms of generalized Lyapunov exponents, $`L(q)`$’s, which represent the growth rate of the moments of the response to an initial error. The statistics of $`R(t,0)`$ are characterized by an infinite set of exponents related to $`\langle R(t,0)^q\rangle `$. The nonlinearity of $`L(q)`$ versus $`q`$ reflects the intermittent nature of the temporal evolution of the error growth. In this sense, this infinite set of exponents contains a level of information which is not present in the maximum Lyapunov exponent $`\lambda _1`$, $`\lambda _1`$ being able to quantify only the average predictability properties of a system. The use of $`L(q)`$’s is not completely new. They have been already introduced in the framework of atmospheric models by Benzi and Carnevale (1989). Regarding specific hypotheses on the $`R(t,0)`$ p.d.f., they considered the log-normal case. Here we have proposed the use of a log-Poisson p.d.f. (recently introduced in the statistical theory of turbulence for the treatment of spatial intermittency, see for example She and Waymire (1995)) to overcome some of the deficiencies of the log-normal p.d.f.
We have also focused our attention on the short-time predictability properties, which pose a rather complex problem. The fact that the error growth need not be exponential for short times makes the analysis more difficult, introducing a complex functional dependence of the response function on the time $`t`$, which can vary depending on the system.
Using recent ideas introduced and applied in the context of fully developed turbulence, namely the Extended Self Similarity techniques, we have shown that the concepts of the generalized Lyapunov exponents can be extended to the short-time regime, introducing the $`\stackrel{~}{L}(q)`$ exponents. They are well-defined mathematical quantities that allow us to quantify the intermittency in the system also at short times. This is one of the major results achieved in this paper.
As a further measure of predictability fluctuations at short times, we have also considered the probability of a decrease of the initial error at time $`t`$ (what we called $`\stackrel{~}{P}(t)`$): this is a clear indicator of the possibility of convergence of two trajectories at finite time, due to large fluctuations around the average error growth.
We have applied these concepts to the case of a simple atmospheric model (Lorenz, 1980), which captures some features of synoptic scale motions at mid-latitudes. We chose this model because its numerical study does not need the computational resources necessary in the case of more complex and realistic models. Moreover, in spite of its simplicity, the model has a rich dynamics, characterized by the appearance of the so-called gravity wave modes. This property has allowed us to answer the third question posed in the Introduction, that is, what the connection is, if any, between short-time predictability fluctuations and initialization procedures. To this end, we have considered the initialization procedure proposed by Lorenz (1980) leading to the SBE model.
Our results concerning the PE and SBE models can be summarized as follows:
* the generalized Lyapunov exponents evaluated for the PE and SBE models are well-reproduced by a statistical law of the log-Poisson type and differ both from the non-intermittent linear law $`L(q)q`$ and the quadratic form implied by a log-normal assumption;
* we have observed in the models two different regimes, associated with two temporal ranges: in the short-time regime, the error grows more rapidly and intermittently than in the asymptotic region. This means that standard (asymptotic) chaotic indicators such as $`\lambda _1`$ and $`\mu `$ can be meaningless when considering short-time regimes;
* comparing the asymptotic behaviors of the models, we conclude that the SBE procedure is successful in reproducing the long-time statistics of the original PE model, since all the asymptotic chaotic indicators are practically indistinguishable in the two cases;
* elimination of gravity waves in the SBE model strongly influences predictability for short times, giving a more predictable and less intermittent flow. The initialization procedure is then successful in decreasing the intermittency in the short time region, which is strongly influenced by transient fast oscillations in the PE model;
* strongly non-exponential error growth was observed in the case of the model supporting gravity waves (PE) during the first stages of the model integrations. This is the signature of the strong gravity wave activity. Non-exponential behavior is strongly reduced in the SBE model, which does not support gravity waves.
We would like to conclude with some remarks and suggestions for future work. It is necessary to better understand the physical meaning both of the transition observed in the predictability properties (short-time and long-time regimes) and of the characteristic time at which this transition occurs. This characteristic time seems to be strongly related to the maximum Lyapunov exponent. We think that this feature (which we are currently investigating) is not peculiar only to the models studied in this paper.
Another point regards the study of the growth of non-infinitesimal errors: indeed, in this paper we have considered the error growth in the so-called tangent space, that is, we have studied the behavior of an infinitesimal initial error, governed by the linearized version of the dynamical equations. It would be interesting to perform a similar analysis for finite initial errors, in order to obtain more realistic information on the predictability when the error in the initial conditions cannot be considered infinitesimal.
Finally, we conclude with two remarks concerning the feasibility of our analysis when more complex models involving a large number of variables are considered. First, in our analysis we computed the moments of the response function up to the tenth order just to highlight the effect of predictability fluctuations. Actually, these can be captured also restricting the analysis to lower order moments, thus reducing the related computational effort. Second, since the predictability is characterized by fluctuations, the need for a good statistical ensemble of simulation runs is always crucial. This is independent of the method used to perform the statistical analysis and, in particular, is not peculiar to our method. The reason for this is that for systems characterized by strong intermittency, large fluctuations may occur with a non-vanishing probability and their effects on the predictability have to be properly accounted for in a statistical way. A large amount of statistics has thus to be gathered just in order to sample the tails of the p.d.f., where the rare events are placed. From this point of view, it is clear that over-simplified atmospheric models suggest a possible strategy to tackle the problem of predictability fluctuations.
## Acknowledgements
We are grateful to L. Biferale, R. Festa, C.F. Ratto, O. Reale, M. Vergassola and A. Vulpiani for illuminating discussions, and to Dr. Claudio Paniconi for reviewing the text. We thank the “Meteo-Hydrological Center of Liguria Region” where part of the numerical analysis was done. The author (2) acknowledges support from the Sardinian Regional Authorities.
References
Benettin, G., L. Galgani, A. Giorgilli and J. M. Strelcyn, 1980: Lyapunov characteristic exponents for smooth dynamical systems; A method for computing all of them. Part I: Theory, and Part II: Numerical Application. Meccanica 15, 9–20 and 21–30.
Benzi, R., G. Paladin, G. Parisi and A. Vulpiani, 1985: Characterisation of intermittency in chaotic systems. J. Phys. A, 18, 2157–2165.
Benzi, R., and G. Carnevale, 1989: A possible Measure of Local Predictability. J. Atmos. Sci., 46, 3595–3598.
Benzi, R., S. Ciliberto, R. Tripiccione, C. Baudet, C. Massaioli, and S. Succi, 1993: Extended self–similarity in turbulent flows. Phys. Rev. E, 48, R29–R32.
Benzi, R., S. Ciliberto, C. Baudet, and G. R. Chavarria, 1995: On the scaling of three-dimensional homogeneous and isotropic turbulence. Physica D, 80, 385–398.
Curry, J.H., S. E. Haupt, and M. N. Limber, 1995: Low–order models, initialization and slow manifold. Tellus, 47A, 145–161.
Doob, J.L., 1990: Stochastic Processes. J. Wiley and Sons, 654 pp.
Eckmann, J.P., and I. Procaccia, 1986: Fluctuations of dynamical scaling indices in nonlinear systems. Phys. Rev. A, 34, 659–661.
ECMWF, 1992: New developments in predictability. Workshop Proceedings, 13-15 November 1991.
Ellis, R.S., 1985: Entropy, Large Deviations and Statistical Mechanics. Springer, 364 pp.
Farrell, B., 1985: Transient growth of damped baroclinic waves. J. Atmos. Sci., 42, 2718–2727.
Frisch, U., 1995: Turbulence, Cambridge Univ. Press, 296 pp.
Fujisaka, H., 1983: Statistical dynamics generated by fluctuations of local Lyapunov exponents. Progr. Theor. Phys., 70, 1264–1275.
Gent, P.R., and J. C. McWilliams, 1982: Intermediate Model Solutions to the Lorenz Equations: Strange Attractors and Other Phenomena. J. Atmos. Sci., 39, 3–13.
Halmos, P.R., 1956: Lectures on Ergodic Theory, Chelsea, 302 pp.
Krishnamurthy, V., 1985: The slow–manifold and the persisting gravity waves. PHD Thesis, MIT, 146 pp.
Lacarra, J. and O. Talagrand, 1988: Short range evolution of small perturbations in a barotropic model. Tellus, 40A, 81-95.
Leith, C.E., 1980: Nonlinear Normal Mode Initialization and Quasi–Geostrophic Theory. J. Atmos. Sci., 37, 958–968.
Lorenz, E.N., 1963: Deterministic non–periodic flow. J. Atmos. Sci., 20, 130–141.
Lorenz, E.N., 1965: A study of the predictability of a 28-variable atmospheric model. Tellus 17, 321-333.
Lorenz, E.N., 1980: Attractor sets and Quasi–Geostrophic Equilibrium. J. Atmos. Sci., 37, 1685–1699.
Lorenz, E.N., 1986: On the existence of a slow manifold. J. Atmos. Sci., 43, 1547–1557.
Mandelbrot, B., 1991: Random multifractals: negative dimensions and the resulting limitations of the thermodynamic formalism. Proc. R. Soc. Lond., A 434, 79–88.
Molteni, F., R. Buizza, T.N. Palmer, and T. Petroliagis, 1996: The ECMWF Ensemble Prediction System: Methodology and validation. Q.J.R. Meteor. Soc., 122, 73–119.
Mukougawa, H., M. Kimoto, and S. Yoden, 1991: A relationship between local error growth and quasi–geostrophic states: case study in the Lorenz system. J. Atmos. Sci., 48, 1231–1237.
Mureau, R., F. Molteni, and T.N. Palmer, 1993: Ensemble prediction using dynamically conditioned perturbations. Q.J.R. Meteor. Soc., 119, 299–323.
Nicolis, C., 1992: Probabilistic aspects of error growth in atmospheric dynamics. Quart. J. Roy. Meteor. Soc., 118, 553-568.
Oseledec, V.I., 1968: A multiplicative ergodic theorem. Lyapunov characteristic numbers for dynamical systems. Trans. Moscow Math. Soc., 19, 197–231.
Paladin, G., L. Peliti and A. Vulpiani, 1986: Intermittency as multifractality in history space. J. Phys. A, 19, L991–L996.
Paladin, G., and A. Vulpiani, 1987: Anomalous scaling laws in multifractal object. Phys. Rep., 156, 147–225.
Palmer, T.N., R. Mureau and F. Molteni, 1990: The Montecarlo forecast. Weather, 45, 198-207.
She, Z.S., and E. C. Waymire, 1995: Quantized energy cascade and log–Poisson statistics in fully developed turbulence. Phys. Rev. Lett., 74, 262–265.
Trevisan A., 1993: Impact of transient error growth on global average predictability measures. J. Atmos. Sci., 50, 1016-1028.
Trevisan A., and R. Legnani, 1995: Transient error growth and local predictability: a study in the Lorenz system. Tellus, 47A, 103-117.
Varadhan, S.R.S., 1984: An Introduction to the Theory of Large Deviations. Springer–Verlag, 196 pp.
Vautard, R., and B. Legras, 1986: Invariant Manifold, Quasi–Geostrophy and Initialization. J. Atmos. Sci., 43, 565–584.
# Rigidity of 𝐶² infinitely renormalizable unimodal maps
## 1. Introduction
It was already clear more than 20 years ago, from the work of Coullet-Tresser and Feigenbaum, that the small scale geometric properties of the orbits of some one dimensional dynamical systems were related to the dynamical behavior of a non-linear operator, the renormalization operator, acting on a space of dynamical systems. This conjectural picture was mathematically established for some classes of analytic maps by Sullivan, McMullen and Lyubich. Here we will extend this description to the space of $`C^2`$ maps and prove a rigidity result for a class of unimodal maps of the interval. As it is well-known, a unimodal map is a smooth endomorphism of a compact interval that has a unique critical point which is a turning point. Such a map is renormalizable if there exists an interval neighborhood of the critical point such that the first return map to this interval is again a unimodal map, and the return time is greater than one. The map is infinitely renormalizable if there exist such intervals with arbitrarily high return times. We say that two maps have the same combinatorial type if the map that sends the i-th iterate of the critical point of the first map into the i-th iterate of the critical point of the second map, for all $`i0`$, is order preserving. Finally, we say that the combinatorial type of an infinitely renormalizable map is bounded if the ratio of any two consecutive return times is uniformly bounded.
A unimodal map $`f`$ is $`C^r`$ with a quadratic critical point if $`f=\varphi _f\circ p\circ \psi _f`$, where $`p(x)=x^2`$ and $`\varphi _f`$, $`\psi _f`$ are $`C^r`$ diffeomorphisms. Let $`c_f`$ be the critical point of $`f`$. In this paper we will prove the following rigidity result.
###### Theorem 1.
Let $`f`$ and $`g`$ be $`C^2`$ unimodal maps with a quadratic critical point which are infinitely renormalizable and have the same bounded combinatorial type. Then there exists a $`C^{1+\alpha }`$ diffeomorphism $`h`$ of the real line such that $`h(f^i(c_f))=g^i(h(c_g))`$ for every integer $`i\ge 0`$.
We observe that in Theorem 1 the Hölder exponent $`\alpha >0`$ depends only upon the bound of the combinatorial type of the maps $`f`$ and $`g`$. Furthermore, as we will see in Section 2, the maps $`f`$ and $`g`$ are smoothly conjugated to $`C^2`$ normalized unimodal maps $`F=\varphi _F\circ p`$ and $`G=\varphi _G\circ p`$ with critical value $`1`$, and the Hölder constant for the smooth conjugacy between the normalized maps $`F`$ and $`G`$ depends only upon the combinatorial type of $`F`$ and $`G`$, and upon the norms $`\|\varphi _F\|_{C^2}`$ and $`\|\varphi _G\|_{C^2}`$.
The conclusion of the above rigidity theorem was first obtained by McMullen in under the extra hypothesis that $`f`$ and $`g`$ extend to quadratic-like maps in neighborhoods of the dynamical intervals in the complex plane. Combining this last statement with the complex bounds of Levin and van Strien in , we get the existence of a $`C^{1+\alpha }`$ map $`h`$ which is a conjugacy along the critical orbits for infinitely renormalizable real analytic maps with the same bounded combinatorial type. We extended this result to $`C^2`$ unimodal maps in Theorem 1, by combining many results and ideas of Sullivan in with recent results of McMullen in , in , and of Lyubich in on the hyperbolicity of the renormalization operator $`R`$ (see the definition of $`R`$ in the next section). A main lemma used in the proof of Theorem 1 is the following:
###### Lemma 2.
Let $`f`$ be a $`C^2`$ infinitely renormalizable map with bounded combinatorial type. Then there exist positive constants $`\eta <1`$, $`\mu `$ and $`C`$, and a real quadratic-like map $`f_n`$ with conformal modulus greater than or equal to $`\mu `$, and with the same combinatorial type as the $`n`$-th renormalization $`R^nf`$ of $`f`$ such that
$$\|R^nf-f_n\|_{C^0}<C\eta ^n$$
for every $`n\ge 0`$.
We observe that in this lemma, the positive constants $`\eta <1`$ and $`\mu `$ depend only upon the bound of the combinatorial type of the map $`f`$. For normalized unimodal maps $`f`$, the positive constant $`C`$ depends only upon the bound of the combinatorial type of the map $`f`$ and upon the norm $`\|\varphi _f\|_{C^2}`$.
This lemma generalizes a Theorem of Sullivan (transcribed as Theorem 4 in Section 2) by adding that the map $`f_n`$ has the same combinatorial type as the $`n`$-th renormalization $`R^nf`$ of $`f`$.
Now, let us describe the proof of Theorem 1 which also shows the relevance of Lemma 2: let $`f`$ and $`g`$ be $`C^2`$ infinitely renormalizable unimodal maps with the same bounded combinatorial type. Take $`m`$ to be of the order of a large but fixed fraction of $`n`$, and note that $`n-m`$ is also a fixed fraction of $`n`$. By Lemma 2, we obtain a real quadratic-like map $`f_m`$ exponentially close to $`R^mf`$, and a real quadratic-like map $`g_m`$ exponentially close to $`R^mg`$. Then we use Lemma 6 of Section 2.2 to prove that the $`(n-m)`$-th renormalization iterates $`R^nf`$ of $`R^mf`$, and $`R^ng`$ of $`R^mg`$, stay exponentially close to the $`(n-m)`$-th iterates $`R^{n-m}f_m`$ of $`f_m`$ and $`R^{n-m}g_m`$ of $`g_m`$, respectively. Again, by Lemma 2, the maps $`f_m`$ and $`g_m`$ have conformal modulus universally bounded away from zero, and have the same bounded combinatorial type as $`R^mf`$ and $`R^mg`$. Thus, by the main result of McMullen in , the $`(n-m)`$-th renormalization iterates $`R^{n-m}f_m`$ of $`f_m`$ and $`R^{n-m}g_m`$ of $`g_m`$ are exponentially close. Therefore, $`R^nf`$ is exponentially close to $`R^{n-m}f_m`$, $`R^{n-m}f_m`$ is exponentially close to $`R^{n-m}g_m`$, and $`R^{n-m}g_m`$ is exponentially close to $`R^ng`$, and so, by the triangle inequality, the $`n`$-th iterates $`R^nf`$ of $`f`$ and $`R^ng`$ of $`g`$ converge exponentially fast to each other. Finally, by Theorem 9.4 in the book of de Melo and van Strien, we conclude that $`f`$ and $`g`$ are $`C^{1+\alpha }`$ conjugate along the closure of their critical orbits.
Let us point out the main ideas in the proof of Lemma 2: Sullivan in proves that $`R^nf`$ is exponentially close to a quadratic-like map $`F_n`$ which has conformal modulus universally bounded away from zero. The quadratic-like map $`F_n`$ determines a unique quadratic map $`P_{c(F_n)}(z)=1-c(F_n)z^2`$ which is hybrid conjugated to $`F_n`$ by a $`K`$ quasiconformal homeomorphism, where $`K`$ depends only upon the conformal modulus of $`F_n`$ (see Theorem 1 of Douady-Hubbard in , and Lemma 11 in Section 3.3). In , Lyubich proves the bounded geometry of the Cantor set consisting of all the parameters of the quadratic family $`P_c(z)=1-cz^2`$ corresponding to infinitely renormalizable maps with combinatorial type bounded by $`N`$ (see definition in Section 2 and the proof of Lemma 2). In Lemma 8 of Section 2.2, we prove that $`R^nf`$ and $`F_n`$ have exponentially close renormalization types. Therefore, letting $`c_n`$ be the parameter corresponding to the quadratic map $`P_{c_n}`$ with the same combinatorial type as $`R^nf`$, we have, from the above result of Lyubich, that $`c(F_n)`$ and $`c_n`$ are exponentially close. In Lemma 12 of Section 3.3, we use holomorphic motions to prove the existence of a real quadratic-like map $`f_n`$ which is hybrid conjugated to $`P_{c_n}`$, and has the following essential property: the distance between $`F_n`$ and $`f_n`$ is proportional to the distance between $`c(F_n)`$ and $`c_n`$ raised to some positive power. Therefore, the real quadratic-like map $`f_n`$ has the same combinatorial type as $`R^nf`$, and $`f_n`$ is exponentially close to $`F_n`$. Since the map $`F_n`$ is exponentially close to $`R^nf`$, we obtain that the map $`f_n`$ is also exponentially close to $`R^nf`$.
The example of Faria and de Melo in for critical circle maps can be adapted to prove the existence of a pair of $`C^{\mathrm{}}`$ unimodal maps, with the same unbounded combinatorial type, such that the conjugacy $`h`$ has no $`C^{1+\alpha }`$ extension to the reals for any $`\alpha >0`$.
## 2. Shadowing unimodal maps
A $`C^r`$ unimodal map $`F:I\to I`$ is normalized if $`I=[-1,1]`$, $`F=\varphi _F\circ p`$, $`F(0)=1`$, and $`\varphi _F:[0,1]\to I`$ is a $`C^r`$ diffeomorphism. A $`C^r`$ unimodal map $`f=\varphi _f\circ p\circ \psi _f`$ with quadratic critical point either has trivial dynamics or has an invariant interval where it is $`C^r`$ conjugated to a $`C^r`$ normalized unimodal map $`F`$. Take, for instance, the map
$$\varphi _F(x)=\left(\psi _f^{-1}\circ \varphi _f(0)\right)^{-1}\psi _f^{-1}\circ \varphi _f\left(\left(\psi _f^{-1}\circ \varphi _f(0)\right)^2x\right).$$
Therefore, from now on we will only consider $`C^r`$ normalized unimodal maps $`f`$.
The map $`f`$ is renormalizable if there is a closed interval $`J`$ centered at the origin, strictly contained in $`I`$, and $`l>1`$ such that the intervals $`J,\dots ,f^{l-1}(J)`$ are disjoint, $`f^l(J)`$ is strictly contained in $`J`$ and $`f^l(0)\in J`$. If $`f`$ is renormalizable, we always consider the smallest $`l>1`$ and the minimal interval $`J_f=J`$ with the above properties. The set of all renormalizable maps is an open set in the $`C^0`$ topology. The renormalization operator $`R`$ acts on renormalizable maps $`f`$ by $`Rf=\psi \circ f^l\circ \psi ^{-1}:I\to I`$, where $`\psi :J_f\to I`$ is the restriction of a linear map sending $`f^l(0)`$ into $`1`$. Inductively, the map $`f`$ is $`n`$ times renormalizable if $`R^{n-1}f`$ is renormalizable. If $`f`$ is $`n`$ times renormalizable for every $`n>0`$, then $`f`$ is infinitely renormalizable.
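For concreteness, the following short sketch (not part of the original text) carries out one renormalization step numerically in the simplest period-doubling case $`l=2`$, where the rescaling $`\psi `$ is the linear map determined by $`a=f^2(0)`$; the test parameter value is purely illustrative.

```python
# Minimal sketch of one period-doubling renormalization step (l = 2),
# assuming f is a normalized unimodal map on I = [-1, 1] with f(0) = 1.
def renormalize_period_doubling(f):
    a = f(f(0.0))                      # a = f^2(0); psi(x) = x / a sends a to 1
    return lambda x: f(f(a * x)) / a   # Rf = psi o f^2 o psi^{-1}

# Illustrative test map from the quadratic family f(x) = 1 - c x^2.
f = lambda x: 1.0 - 1.401155 * x * x   # c near the Feigenbaum value (illustrative)
Rf = renormalize_period_doubling(f)
print(Rf(0.0))                         # prints 1.0: Rf is again normalized
```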
Let $`f`$ be a renormalizable map. We label the intervals $`J_f,\mathrm{},f^{l1}(J_f)`$ of $`f`$ by $`1,\mathrm{},l`$ according to their embedding on the real line, from the left to the right. The permutation $`\sigma _f:\{1,\mathrm{},l\}\{1,\mathrm{},l\}`$ is defined by $`\sigma _f(i)=j`$ if the interval labeled by $`i`$ is mapped by $`f`$ to the interval labeled by $`j`$. The renormalization type of an $`n`$ times renormalizable map $`f`$ is given by the sequence $`\sigma _f,\mathrm{},\sigma _{R^nf}`$. An $`n`$ times renormalizable map $`f`$ has renormalization type bounded by $`N>1`$ if the number of elements of the domain of each permutation $`\sigma _{R^mf}`$ is less than or equal to $`N`$ for every $`0mn`$. We have the analogous notions for infinitely renormalizable maps.
Note that if any two maps are $`n`$ times renormalizable and have the same combinatorial type (see definition in the introduction), then they have the same renormalization type. The converse is also true in the case of infinitely renormalizable maps. An infinitely renormalizable map has combinatorial type bounded by $`N>1`$ if the renormalization type is bounded by $`N`$.
If $`f=\varphi \circ p`$ is $`n`$ times renormalizable, and $`\varphi `$ is $`C^2`$, there is a $`C^2`$ diffeomorphism $`\varphi _n`$ satisfying $`R^nf=\varphi _n\circ p`$. The nonlinearity $`\mathrm{nl}(\varphi _n)`$ of $`\varphi _n`$ is defined by
$$\mathrm{nl}(\varphi _n)=\underset{x\in p(I)}{sup}\left|\frac{\varphi _n^{\prime \prime }(x)}{\varphi _n^{\prime }(x)}\right|.$$
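As a side remark (not in the original), $`\mathrm{nl}(\varphi _n)`$ is easy to estimate numerically: since $`R^nf=\varphi _n\circ p`$ with $`p(x)=x^2`$, one has $`\varphi _n(u)=R^nf(\sqrt{u})`$ for $`u\in p(I)=[0,1]`$, and the derivatives can be approximated by finite differences. The grid size and step below are arbitrary illustrative choices.

```python
# Sketch: finite-difference estimate of nl(phi_n) = sup |phi_n''(u) / phi_n'(u)|
# over u in (0, 1), using phi_n(u) = (R^n f)(sqrt(u)).
import math

def nonlinearity_estimate(Rnf, n_grid=400, h=1e-4):
    phi = lambda u: Rnf(math.sqrt(u))
    worst = 0.0
    for k in range(1, n_grid):
        u = k / n_grid
        d1 = (phi(u + h) - phi(u - h)) / (2.0 * h)                # phi_n'(u)
        d2 = (phi(u + h) - 2.0 * phi(u) + phi(u - h)) / (h * h)   # phi_n''(u)
        if d1 != 0.0:
            worst = max(worst, abs(d2 / d1))
    return worst

print(nonlinearity_estimate(lambda x: 1.0 - 1.4 * x * x))  # ~0: phi is affine here
```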
Let $`(N,b)`$ be the set of all $`C^2`$ normalized unimodal maps $`f=\varphi p`$ with the following properties:
* $`f`$ is infinitely renormalizable;
* the combinatorial type of $`f`$ is bounded by $`N`$;
* $`\|\varphi \|_{C^2}\le b`$.
###### Theorem 3.
(Sullivan ) There exist positive constants $`B`$ and $`n_1(b)`$ such that, for every $`f\in (N,b)`$, the $`n`$-th renormalization $`R^nf=\varphi _n\circ p`$ of $`f`$ has the property that $`\mathrm{nl}(\varphi _n)\le B`$ for every $`n\ge n_1`$.
This theorem together with Arzelá-Ascoli’s Theorem implies that, for every $`0\beta <2`$, and for every $`nn_1(b)`$, the renormalization iterates $`R^nf`$ are contained in a compact set of unimodal maps with respect to the $`C^\beta `$ norm. We will use this fact in the proof of Lemma 5 below.
### 2.1. Quadratic-like maps
A quadratic-like map $`f:V\to W`$ is a holomorphic map with the property that $`V`$ and $`W`$ are simply connected domains with the closure of $`V`$ contained in $`W`$, and $`f`$ is a degree two branched covering map. We add an extra condition that $`f`$ has a continuous extension to the boundary of $`V`$. The conformal modulus of a quadratic-like map $`f:V\to W`$ is equal to the conformal modulus of the annulus $`W\setminus \overline{V}`$. A real quadratic-like map is a quadratic-like map which commutes with complex conjugation.
The filled Julia set $`𝒦(f)`$ of $`f`$ is the set $`\{z:f^n(z)\in V,\text{ for all }n\ge 0\}`$. Its boundary is the Julia set $`𝒥(f)`$ of $`f`$. These sets $`𝒥(f)`$ and $`𝒦(f)`$ are connected if the critical point of $`f`$ is contained in $`𝒦(f)`$.
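As an aside (not in the original text), for the model case of a quadratic polynomial, which is quadratic-like on a large disk, membership in the filled Julia set can be tested numerically by iterating until the orbit leaves a large disk; the escape radius and iteration cap below are ad hoc illustrative choices.

```python
# Sketch: numerical membership test for the filled Julia set of P_c(z) = 1 - c z^2,
# the simplest quadratic-like model. Escape radius and iteration cap are ad hoc.
def in_filled_julia(z, c, radius=10.0, max_iter=500):
    for _ in range(max_iter):
        if abs(z) > radius:
            return False               # orbit escaped: z is outside K(P_c)
        z = 1.0 - c * z * z
    return True                        # no escape detected within max_iter iterates

print(in_filled_julia(0.0 + 0.0j, 1.0))   # critical orbit of P_1 stays bounded
print(in_filled_julia(2.5 + 0.0j, 1.0))   # this point escapes quickly
```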
Let $`𝒬(\mu )`$ be the set of all real quadratic-like maps $`f:VW`$ satisfying the following properties:
* the Julia set $`𝒥(f)`$ of $`f`$ is connected;
* the conformal modulus of $`f`$ is greater than or equal to $`\mu `$, and less than or equal to $`2\mu `$;
* $`f`$ is normalized to have the critical point at the origin, and the critical value at one.
By Theorem 5.8 in page 72 of , the set $`𝒬(\mu )`$ is compact in the Carathéodory topology taking the critical point as the base point (see definition in page 67 of ).
###### Theorem 4.
(Sullivan ) There exist positive constants $`\gamma (N)<1`$, $`C(b,N)`$, and $`\mu (N)`$ with the following property: if $`f\in (N,b)`$, then there exists $`f_n\in 𝒬(\mu )`$ such that $`\|R^nf-f_n\|_{C^0}\le C\gamma ^n`$.
In the following sections, we will develop the results that will be used in the last section to prove the generalization of Theorem 4 (as stated in Lemma 2), and to prove Theorem 1.
### 2.2. Maps with close combinatorics
Let $`D(\sigma )`$ be the open set of all $`C^0`$ renormalizable unimodal maps $`f`$ with renormalization type $`\sigma _f=\sigma `$. The open sets $`D(\sigma )`$ are pairwise disjoint. Let $`E(\sigma )`$ be the complement of $`D(\sigma )`$ in the set of all $`C^0`$ unimodal maps $`f`$.
###### Lemma 5.
There exist positive constants $`n_2(b)`$ and $`ϵ(N)`$ with the following property: for every $`f\in (N,b)`$, for every $`n\ge n_2`$, and for every $`g\in E(\sigma _{R^nf})`$, we have $`\|R^nf-g\|_{C^0}>ϵ.`$
###### Proof.
Suppose, by contradiction, that there is a sequence $`R^{m_1}f_1`$, $`R^{m_2}f_2`$, $`\dots `$ with the property that for a chosen $`\sigma `$ there is a sequence $`g_1,g_2,\dots \in E(\sigma )`$ satisfying $`\|R^{m_i}f_i-g_i\|_{C^0}<1/i`$. By Theorem 3, there are $`B>0`$ and $`n_1(b)\ge 1`$ such that the maps $`R^{m_i}f_i`$ have nonlinearity bounded by $`B>0`$ for all $`m_i\ge n_1`$. By the Arzelà-Ascoli Theorem, there is a subsequence $`R^{m_{i_1}}f_{i_1},R^{m_{i_2}}f_{i_2},\dots `$ which converges in the $`C^0`$ topology to a map $`g`$. Hence, the map $`g`$ is contained in the boundary of $`E(\sigma )`$ and is infinitely renormalizable. However, a map contained in the boundary of $`E(\sigma )`$ is not renormalizable, and so we get a contradiction. ∎
###### Lemma 6.
There exist positive constants $`n_3(N,b)`$ and $`L(N)`$ with the following property: for every $`f(N,b)`$, for every $`C^2`$ renormalizable unimodal map $`g`$, and for every $`n>n_3`$, we have
$$\|R^nf-Rg\|_{C^0}\le L\|R^{n-1}f-g\|_{C^0}.$$
###### Proof.
In the proof of this lemma we will use the inequality (1) below. Let $`f_1,\mathrm{},f_m`$ be maps with $`C^1`$ norm bounded by some constant $`d>0`$, and let $`g_1,\mathrm{},g_m`$ be $`C^0`$ maps. By induction on $`m`$, and by the Mean Value Theorem, there is $`c(m,d)>0`$ such that
(1)
$$\|f_1\circ \cdots \circ f_m-g_1\circ \cdots \circ g_m\|_{C^0}\le c\underset{i=1,\dots ,m}{\mathrm{max}}\{\|f_i-g_i\|_{C^0}\}.$$
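To see where the constant comes from (this intermediate step is ours, not spelled out in the text), consider $`m=2`$: by the triangle inequality and the Mean Value Theorem,
$$\|f_1\circ f_2-g_1\circ g_2\|_{C^0}\le \|f_1\circ f_2-f_1\circ g_2\|_{C^0}+\|f_1\circ g_2-g_1\circ g_2\|_{C^0}\le d\|f_2-g_2\|_{C^0}+\|f_1-g_1\|_{C^0},$$
so that $`c(2,d)=d+1`$ works; the general case follows by iterating this estimate.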
Set $`n_3=\mathrm{max}\{n_1,n_2\}`$, where $`n_1(b)`$ is defined as in Theorem 3, and $`n_2(b)`$ is defined as in Lemma 5. Set $`F=R^{n-1}f`$ with $`n\ge n_3`$. We start by considering the simple case (a), where $`F`$ and $`g`$ do not have the same renormalization type, and conclude with the complementary case (b). In case (a), by Lemma 5, there is $`ϵ(N)>0`$ with the property that
$$\|RF-Rg\|_{C^0}\le 2\le 2ϵ^{-1}\|F-g\|_{C^0}.$$
In case (b), there is $`1<m\le N`$ such that $`RF(x)=a_FF^m(a_F^{-1}x)`$, and $`Rg(x)=a_gg^m(a_g^{-1}x)`$, where $`a_F=F^m(0)`$ and $`a_g=g^m(0)`$. By Theorem 3, there is a positive constant $`B(N)`$ bounding the nonlinearity of $`F`$. Since the set of all infinitely renormalizable unimodal maps $`F`$ with nonlinearity bounded by $`B`$ is a compact set with respect to the $`C^0`$ topology, and since $`a_F`$ varies continuously with $`F`$, there is $`S(N)>0`$ with the property that $`|a_F|\ge S`$. Again, by Theorem 3, and by inequality (1), there is $`c_1(N)>0`$ such that
(2)
$$\|F^m-g^m\|_{C^0}\le c_1\|F-g\|_{C^0}.$$
Thus,
(3)
$$|a_F-a_g|\le c_1\|F-g\|_{C^0}.$$
Now, let us consider the cases where (i) $`\|F-g\|\ge S/(2c_1)`$ and (ii) $`\|F-g\|\le S/(2c_1)`$. In case (i), we get
$$\|RF-Rg\|_{C^0}\le 2\le 4c_1S^{-1}\|F-g\|_{C^0}.$$
In case (ii), using that $`|a_F|\ge S`$ and $`(\text{3})`$, we get $`|a_g|\ge |a_F|-S/2\ge S/2`$, and thus, by $`(\text{2})`$, we obtain
$$\left|a_F^{-1}-a_g^{-1}\right|\le |a_F|^{-1}|a_g|^{-1}|a_F-a_g|\le 2S^{-2}c_1\|F-g\|_{C^0}.$$
Hence, again by $`(\text{2})`$ and $`(\text{3})`$, there is $`c_2(N)>0`$ with the property that
$$\|RF-Rg\|_{C^0}\le \|F^m\|_{C^0}|a_F-a_g|+|a_g|\|F^m\|_{C^1}\left|a_F^{-1}-a_g^{-1}\right|+|a_g|\|F^m-g^m\|_{C^0}\le c_2\|F-g\|_{C^0}.$$
Therefore, this lemma is satisfied with $`L(N)=\mathrm{max}\{2ϵ^{-1},4c_1S^{-1},c_2\}`$. ∎
###### Lemma 7.
For all positive constants $`\lambda <1`$ and $`C`$ there exist positive constants $`\alpha (N,\lambda )`$ and $`n_4(b,N,\lambda ,C)`$ with the following property: for every $`f(N,b)`$, and every $`n>n_4`$, if $`f_n`$ is a $`C^2`$ unimodal map such that
$$\|R^nf-f_n\|_{C^0}<C\lambda ^n,$$
then $`f_n`$ is $`[\alpha n+1]`$ times renormalizable with $`\sigma _{R^mf_n}=\sigma _{R^{n+m}f}`$ for every $`m=0,\mathrm{},[\alpha n]`$ (where $`[y]`$ means the integer part of $`y>0`$.)
###### Proof.
Let $`ϵ(N)`$ and $`n_2(b)`$ be as defined in Lemma 5, and let $`L(N)`$ and $`n_3(b)`$ be as defined in Lemma 6. Take $`\alpha >0`$ such that $`L^\alpha \lambda <1`$. Set $`n_4\mathrm{max}\{n_2,n_3\}`$ such that $`C\lambda ^{n_4}<ϵ`$ and $`C\lambda ^{n_4}L^{[\alpha n_4]}<ϵ`$. Then, for every $`n>n_4`$, the values $`C\lambda ^n,C\lambda ^nL,\mathrm{},C\lambda ^nL^{[\alpha n]}`$ are less than $`ϵ`$.
By Lemma 5, if $`\|R^nf-f_n\|_{C^0}<C\lambda ^n<ϵ`$ with $`n>n_4`$, then the map $`f_n`$ is contained in $`D(\sigma _{R^nf})`$. Thus, $`f_n`$ is once renormalizable, and $`\sigma _{f_n}=\sigma _{R^nf}`$. By induction on $`m=1,\dots ,[\alpha n]`$, let us suppose that $`f_n`$ is $`m`$ times renormalizable, and $`\sigma _{R^if_n}=\sigma _{R^{n+i}f}`$ for every $`i=0,\dots ,m-1`$. By Lemma 6, we get that $`\|R^{n+m}f-R^mf_n\|_{C^0}<CL^m\lambda ^n<ϵ`$. Hence, again by Lemma 5, the map $`R^mf_n`$ is once renormalizable, and $`\sigma _{R^mf_n}=\sigma _{R^{n+m}f}`$. ∎
###### Lemma 8.
There exist positive constants $`\gamma (N)<1`$, $`\alpha (N)`$, $`\mu (N)`$, and $`C(b,N)`$ with the following property: for every $`f(N,b)`$, there exists $`f_n𝒬(\mu )`$ such that
* $`\|R^nf-f_n\|_{C^0}\le C\gamma ^n`$;
* $`f_n`$ is $`[\alpha n+1]`$ times renormalizable with $`\sigma _{R^mf_n}=\sigma _{R^{n+m}f}`$ for every $`m=0,\mathrm{},[\alpha n]`$.
###### Proof.
The proof follows from Theorem 4 and Lemma 7. ∎
## 3. Varying quadratic-like maps
We start by introducing some classical results on Beltrami differentials and holomorphic motions, all of which we will apply later in this section to vary the combinatorics of quadratic-like maps.
### 3.1. Beltrami differentials
A homeomorphism $`h:U\to V`$, where $`U`$ and $`V`$ are contained in $`\mathbb{C}`$ or $`\overline{\mathbb{C}}`$, is quasiconformal if it has locally integrable distributional derivatives $`\partial h`$, $`\overline{\partial }h`$, and if there is $`ϵ<1`$ with the property that $`\left|\overline{\partial }h/\partial h\right|\le ϵ`$ almost everywhere. The Beltrami differential $`\mu _h`$ of $`h`$ is given by $`\mu _h=\overline{\partial }h/\partial h`$. A quasiconformal map $`h`$ is $`K`$ quasiconformal if $`K\ge (1+\|\mu _h\|_{\mathrm{\infty }})/(1-\|\mu _h\|_{\mathrm{\infty }})`$.
We denote by $`D_R(c_0)`$ the open disk in $`\mathbb{C}`$ centered at the point $`c_0`$ and with radius $`R>0`$. We also use the notation $`D_R=D_R(0)`$ for the disk centered at the origin.
The following theorem is a slight extension of Theorem 4.3 in page 27 of the book by Lehto.
###### Theorem 9.
Let $`\psi :\mathbb{C}\to \mathbb{C}`$ be a quasiconformal map with the following properties:
* $`\mu _\psi =\overline{\partial }\psi /\partial \psi `$ has support contained in the disk $`D_R`$;
* $`\|\mu _\psi \|_{\mathrm{\infty }}<ϵ<1`$;
* $`lim_{|z|\to \mathrm{\infty }}(\psi (z)-z)=0`$.
Then there exists $`C(ϵ,R)>0`$ such that
$$\|\psi -id\|_{C^0}\le C\|\mu _\psi \|_{\mathrm{\infty }}.$$
###### Proof.
Let us define $`\varphi _1=\mu _\psi `$, and, by induction on $`i1`$, we define $`\varphi _{i+1}=\mu _\psi H\varphi _i`$, where $`H\varphi _i`$ is the Hilbert transform of $`\varphi _i`$ given by the Cauchy Principal Value of
$$\frac{1}{\pi }\int _{\mathbb{C}}\frac{\varphi _i(\xi )}{(\xi -z)^2}\,du\,dv.$$
By Theorem 4.3 in page 27 of , we get $`\psi (z)=z+_{i=1}^{\mathrm{}}T\varphi _i(z)`$, where $`T\varphi _i(z)`$ is given by
$$\frac{1}{\pi }\int _{\mathbb{C}}\frac{\varphi _i(\xi )}{\xi -z}\,du\,dv.$$
By the Calderón-Zigmund inequality (see page 27 of ), for every $`p1`$, the Hilbert operator $`H:L^pL^p`$ is bounded, and its norm $`H_p`$ varies continuously with $`p`$. An elementary integration also shows that $`H_2=1`$ (see page 157 of ). Therefore, given that $`\mu _\psi _{\mathrm{}}<ϵ`$, there is $`p_0(ϵ)>2`$ with the property that
(4)
$$\|H\|_{p_0}\|\mu _\psi \|_{\mathrm{\infty }}<\|H\|_{p_0}ϵ<1.$$
Since $`p_0>2`$, it follows from Hölder’s inequality (see page 141 of ) that there is a positive constant $`c_1(p_0,R)`$ such that
(5)
$$\|T\varphi _i\|_{C^0}\le c_1\|\varphi _i\|_{p_0}.$$
By a simple computation, we get
(6)
$$\|\varphi _i\|_{p_0}\le (\pi R^2)^{\frac{1}{p_0}}\|H\|_{p_0}^{i-1}\|\mu _\psi \|_{\mathrm{\infty }}^i.$$
Thus, by inequalities (4), (5), and (6), there is a positive constant $`c_2(ϵ,R)`$ with the property that
$$\|\psi -id\|_{C^0}\le \sum _{i=1}^{\mathrm{\infty }}\|T\varphi _i\|_{C^0}\le \frac{c_1(\pi R^2)^{\frac{1}{p_0}}\|\mu _\psi \|_{\mathrm{\infty }}}{1-\|H\|_{p_0}\|\mu _\psi \|_{\mathrm{\infty }}}\le c_2\|\mu _\psi \|_{\mathrm{\infty }}.$$
### 3.2. Holomorphic motions
A holomorphic motion of a subset $`X`$ of the Riemann sphere over a disk $`D_R(c_0)`$ is a family of maps $`\psi _c:X\to X_c`$ with the following properties: (i) $`\psi _c`$ is an injection of $`X`$ onto a subset $`X_c`$ of the Riemann sphere; (ii) $`\psi _{c_0}=id`$; (iii) for every $`z\in X`$, $`\psi _c(z)`$ varies holomorphically with $`c\in D_R(c_0)`$.
###### Theorem 10.
(Słodkowski ) Let $`\psi _c:X\to X_c`$ be a holomorphic motion over the disk $`D_R(c_0)`$. Then there is a holomorphic motion $`\mathrm{\Psi }_c:\overline{\mathbb{C}}\to \overline{\mathbb{C}}`$ over the disk $`D_R(c_0)`$ such that
* $`\mathrm{\Psi }_c|X=\psi _c`$;
* $`\mathrm{\Psi }_c`$ is a $`K_c`$ quasiconformal map with
$$K_c=\frac{R+|c-c_0|}{R-|c-c_0|}.$$
See also Douady’s survey .
### 3.3. Varying the combinatorics
Let $``$ be the set of all quadratic-like maps with connected Julia set. Let $`𝒫`$ be the set of all normalized quadratic maps $`P_c:\mathbb{C}\to \mathbb{C}`$ defined by $`P_c(z)=1-cz^2`$, where $`c\in \mathbb{C}\setminus \{0\}`$. Two quadratic-like maps $`f`$ and $`g`$ are hybrid conjugate if there is a quasiconformal conjugacy $`h`$ between $`f`$ and $`g`$ with the property that $`\overline{\partial }h(z)=0`$ for almost every $`z\in 𝒦(f)`$. By Douady-Hubbard’s Theorem 1 in page 296 of , for every $`f`$ there exists a unique quadratic map $`P_{c(f)}`$ which is hybrid conjugated to $`f`$. The map $`\xi :𝒫`$ defined by $`\xi (f)=P_{c(f)}`$ is called the straightening.
Observe that a real quadratic map $`P_c`$ with $`c\notin [1,2]`$ has trivial dynamics. Therefore, we will restrict our study to the set $`𝒬([1,2],\mu )`$ of all $`f\in 𝒬(\mu )`$ satisfying $`\xi (f)=P_{c(f)}`$ for some $`c(f)\in [1,2]`$.
Let us choose a radius $`\mathrm{\Delta }`$ large enough such that, for every $`c\in [1,2]`$, $`P_c(z)=1-cz^2`$ is a quadratic-like map when restricted to $`P_c^{-1}(D_\mathrm{\Delta })`$.
###### Lemma 11.
There exist positive constants $`\mathrm{\Omega }(\mu )`$ and $`K(\mu )`$ with the following property: for every $`f\in 𝒬([1,2],\mu )`$ there exists a topological disk $`V_f\subset D_\mathrm{\Omega }`$ such that $`f`$ restricted to $`f^{-1}(V_f)`$ is a quadratic-like map. Furthermore, there is a $`K`$ quasiconformal homeomorphism $`\mathrm{\Phi }_f:\overline{\mathbb{C}}\to \overline{\mathbb{C}}`$ such that
* $`\mathrm{\Phi }_f|\mathrm{\Phi }_f^{-1}(V_f)`$ is a hybrid conjugacy between $`f`$ and $`P_{c(f)}`$;
* $`\mathrm{\Phi }_f(V_f)=D_\mathrm{\Delta }`$;
* $`\mathrm{\Phi }_f`$ is holomorphic over $`\overline{\mathbb{C}}\setminus \overline{V_f}`$;
* $`\mathrm{\Phi }_f(\overline{z})=\overline{\mathrm{\Phi }_f(z)}`$.
###### Proof.
The main point in this proof is to combine the hybrid conjugacy between $`f`$ and $`P_{c(f)}`$ given by Douady-Hubbard, with Sullivan’s pull-back argument, and with McMullen’s rigidity theorem for real quadratic maps. Using Sullivan’s pull-back argument and the hybrid conjugacy between $`f`$ and $`P_{c(f)}`$, we construct a $`K`$ quasiconformal homeomorphism $`\mathrm{\Phi }_f:\overline{}\overline{}`$ which restricts to a conjugacy between $`f`$ and $`P_{c(f)}`$. Moreover, $`\mathrm{\Phi }_f`$ satisfies properties (ii), (iii) and (iv) of this lemma, and the restriction of $`\mathrm{\Phi }_f`$ to the filled in Julia set of $`f`$ extends to a quasi conformal map that is a hybrid conjugacy between $`f`$ and $`P_{c(f)}`$. By Rickman’s glueing lemma (see Lemma 2 in ) it follows that $`\mathrm{\Phi }_f`$ also satisfies property (i) of this lemma.
Now, we give the details of the proof: let us consider the set of all quadratic-like maps $`f:W_fW_f^{}`$ contained in $`𝒬([1,2],\mu )`$. Using the Koebe Distortion Lemma (see page 84 of ), we can slightly shrink $`f^n(W_f^{})`$ for some $`n0`$ to obtain an open set $`V_f`$ with the following properties:
* $`V_f`$ is symmetric with respect to the real axis;
* the restriction of $`f`$ to $`f^1(V_f)`$ is a quadratic-like map;
* the annulus $`V_f\overline{f^1(V_f)}`$ has conformal modulus between $`\mu /2`$ and $`2\mu `$;
* the boundaries of $`V_f\overline{f^1(V_f)}`$ are analytic $`\gamma (\mu )`$ quasi-circles for some $`\gamma (\mu )>0`$, i. e., they are images of an Euclidean circle by $`\gamma (\mu )`$ quasiconformal maps defined on $`\overline{}`$.
Let $`𝒬^{}`$ be the set of all quadratic-like maps $`f:f^1(V_f)V_f`$ contained in $`𝒬([1,2],\mu /2)𝒬([1,2],\mu )`$ for which $`V_f`$ satisfies properties (i), $`\mathrm{}`$, (iv) of last paragraph. Since for every $`f𝒬^{}`$ the boundaries of $`V_f\overline{f^1(V_f)}`$ are analytic $`\gamma (\mu )`$ quasi-circles, any convergent sequence $`f_n𝒬^{}`$, with limit $`g`$, in the Carathéodory topology has the property that the sets $`V_{f_n}`$ converge to $`V_g`$ in the Hausdorff topology (see Section 4.1 in pages 75-76 of ). Therefore, the set $`𝒬^{}`$ is closed with respect to the Carathéodory topology, and hence is compact. Furthermore, by compactness of $`𝒬^{}`$, and using the Koebe Distortion Lemma, there is an Euclidean disk $`D_\mathrm{\Omega }`$ which contains $`V_f`$ for every $`f𝒬^{}`$.
Now, let us construct $`\mathrm{\Phi }_f:\overline{}\overline{}`$ such that the properties (i), $`\mathrm{}`$, (iv) of this lemma are satisfied.
Since $`V_f`$ is symmetric with respect to the real axis, there is a unique Riemann Mapping $`\varphi :\overline{}\overline{V_f}\overline{}\overline{D_\mathrm{\Delta }}`$ satisfying $`\varphi (\overline{z})=\overline{\varphi (z)}`$, and such that $`\varphi (^+)^+`$. Since the boundaries of $`V_f\overline{f^1(V_f)}`$ are analytic $`\gamma (\mu )`$ quasi-circles, using the Ahlfors-Beurling Theorem (see Theorem 5.2 in page 33 of ) the map $`\varphi `$ has a $`K_1(\mu )`$ quasiconformal homeomorphic extension $`\varphi _1:\overline{}\overline{}`$ which also is symmetric $`\varphi _1(\overline{z})=\overline{\varphi _1(z)}`$.
Let $`\varphi _2:V_f𝒦(f)D_\mathrm{\Delta }𝒦(P_{c(f)})`$ be the unique continuous lift of $`\varphi _1`$ satisfying $`P_{c(f)}\varphi _2(z)=\varphi _1f(z)`$, and such that $`\varphi _2(^+)^+`$. Since $`\varphi _1`$ is a $`K_1(\mu )`$ quasiconformal homeomorphism, so is $`\varphi _2`$.
Using the Ahlfors-Beurling Theorem, we construct a $`K_2(\mu )`$ quasi-conformal homeomorphism $`\varphi _3:\overline{}𝒦(f)\overline{}𝒦(P_{c(f)})`$ interpolating $`\varphi _1`$ and $`\varphi _2`$ with the following properties:
* $`\varphi _3(z)=\varphi _1(z)`$ for every $`z\overline{}V_f`$;
* $`\varphi _3(z)=\varphi _2(z)`$ for every $`z\overline{f^1(V_f)}𝒦(f)`$;
* $`\varphi _3(\overline{z})=\overline{\varphi _3(z)}`$.
Then the map $`\varphi _3`$ conjugates $`f`$ on $`f^1(V_f)`$ with $`P_{c(f)}`$ on $`P_{c(f)}^1(D_\mathrm{\Delta })`$, and is holomorphic over $`\overline{}\overline{V_f}\overline{}\overline{D_\mathrm{\Omega }}`$.
By Theorem 1 in , there is a $`K_f^{}`$ quasiconformal hybrid conjugacy $`\varphi _4:V_f^{}V_{c(f)}^{}`$ between $`f`$ and $`P_{c(f)}`$, where $`V_f^{}`$ is a neigbourhood of $`𝒦(f)`$. Using the Ahlfors-Beurling Theorem, we construct a $`K_f^{\prime \prime }`$ quasiconformal homeomorphism $`\mathrm{\Phi }_0:\overline{}\overline{}`$ interpolating $`\varphi _3`$ and $`\varphi _4`$ such that
* $`\mathrm{\Phi }_0(z)=\varphi _3(z)`$ for every $`z\overline{}f^1(V_f)`$;
* $`\mathrm{\Phi }_0(z)=\varphi _4(z)`$ for every $`z𝒦(f)`$;
* $`\mathrm{\Phi }_0(\overline{z})=\overline{\mathrm{\Phi }_0(z)}`$.
Then the map $`\mathrm{\Phi }_0`$ conjugates $`f`$ on $`𝒦(f)f^1(V_f)`$ with $`P_{c(f)}`$ on $`𝒦(P_{c(f)})P_{c(f)}^1(D_\mathrm{\Delta })`$, and satisfies the properties (ii), (iii) and (iv) as stated in this lemma. Furthermore, $`\mu _{\mathrm{\Phi }_0}(z)=0`$ for every $`z\overline{}V_f`$, $`|\mu _{\mathrm{\Phi }_f}(z)|(K_21)/(K_2+1)`$ for a. e. $`zV_ff^1(V_f)`$, and $`\mu _{\mathrm{\Phi }_f}(z)=0`$ for a. e. $`z𝒦(f)𝒥(f)`$.
For every $`n>0`$, let us inductively define the $`K_f^{\prime \prime }`$ quasiconformal homeomorphism $`\mathrm{\Phi }_n:\overline{}\overline{}`$ as follows:
* $`\mathrm{\Phi }_n(z)=\mathrm{\Phi }_{n1}(z)`$ for every $`z\left(\overline{}f^n(V_f)\right)𝒦(f)`$;
* $`P_{c(f)}\mathrm{\Phi }_n(z)=\mathrm{\Phi }_{n1}f(z)`$ for every $`zf^n(V_f)𝒦(f)`$.
By compactness of the set of all $`K_f^{\prime \prime }`$ quasiconformal homeomorphisms on $`\overline{}`$ fixing three points ($`0`$, $`1`$ and $`\mathrm{}`$), there is a subsequence $`\mathrm{\Phi }_{n_j}`$ which converges to a $`K_f^{\prime \prime }`$ quasiconformal homeomorphism $`\mathrm{\Phi }_f`$. Then $`\mathrm{\Phi }_f`$ satisfies the properties (ii), (iii) and (iv) as stated in this lemma.
The restriction of $`\mathrm{\Phi }_f`$ to the set $`f^1(V_f)`$ has the property of being a quasiconformal conjugacy between $`f`$ and $`P_{c(f)}`$. Furthermore, the Beltrami differential $`\mu _{\mathrm{\Phi }_f}`$ has the following properties:
* $`\mu _{\mathrm{\Phi }_f}(z)=0`$ for every $`z\overline{}V_f`$;
* $`|\mu _{\mathrm{\Phi }_f}(z)|(K_21)/(K_2+1)`$ for a. e. $`zV_f𝒦(f)`$;
* $`\mu _{\mathrm{\Phi }_f}(z)=0`$ for a. e. $`z𝒦(f)𝒥(f)`$.
Therefore, by Rickman’s glueing lemma, $`\mathrm{\Phi }_f:\overline{}\overline{}`$ is a $`K_2(\mu )`$ quasiconformal homeomorphism, and $`\mathrm{\Phi }_f`$ restricted to the set $`f^1(V_f)`$ is a hybrid conjugacy between $`f`$ and $`P_{c(f)}`$. ∎
The lemma below could be proven using the external fibers and the fact that the holonomy of the hybrid foliation is quasi conformal as in . However we will give a more direct proof of it below.
###### Lemma 12.
There exist positive constants $`\beta (\mu )\le 1`$, $`D(\mu )`$, and $`\mu ^{\prime }(\mu )`$ with the following property: for every $`c\in [1,2]`$, and for every $`f\in 𝒬([1,2],\mu )`$, there is $`f_c\in 𝒬([1,2],\mu ^{\prime })`$ satisfying $`\xi (f_c)=P_c`$, and such that
(7)
$$\|f-f_c\|_{C^0(I)}\le D|c(f)-c|^\beta .$$
###### Proof.
The main step of this proof consists of constructing the real quadratic-like maps $`f_c=\psi _cP_c\psi _c^1`$ satisfying $`f_{c(f)}=f`$, and such that the maps $`\omega _c:\overline{}\overline{}`$ defined by $`\omega _c=\psi _c\psi _{c(f)}^1`$ form a holomorphic motion $`\omega _c`$, and have the property of being holomorphic on the complement of a disk centered at the origin. Using Theorem 9 and Theorem 10, we prove that there is a positive constant $`L_3`$ with the property that $`\omega _cid_{C^0}L_3|cc(f)|`$. Finally, we show that this implies the inequality (7) above.
Now, we give the details of the proof: let us choose a small $`ϵ>0`$, and a small open set $`U`$ of $``$ containing the interval $`[1,2]`$ such that, for every $`cU`$, the quadratic map $`P_c(z)=1cz^2`$ has a quadratic-like restriction to $`P_c^1(D_\mathrm{\Delta })`$, and $`P_c^1(D_\mathrm{\Delta })D_{\mathrm{\Delta }ϵ}`$. Let $`\eta :`$ be a $`C^{\mathrm{}}`$ function with the following properties:
* $`\eta (z)=1`$ for every $`zD_\mathrm{\Delta }`$;
* $`\eta (z)=0`$ for every $`zD_{\mathrm{\Delta }ϵ}`$;
* $`\eta (z)=\eta (\overline{z})`$ for every $`z`$.
There is a unique continuous lift $`\alpha _c:P_{c_0}^1(D_\mathrm{\Delta })P_c^1(D_\mathrm{\Delta })`$ of the identity map such that
* $`P_c\alpha _c(z)=P_{c_0}(z)`$;
* $`\alpha _{c_0}=id`$;
* $`\alpha _c(z)`$ varies continuously with $`c`$.
Then the maps $`\alpha _c`$ are holomorphic injections, and, for every $`zP_{c_0}^1(D_\mathrm{\Delta })`$, $`\alpha _c(z)`$ varies holomorphically with $`c`$.
Let $`\beta _c:P_{c_0}^1(D_\mathrm{\Delta })P_c^1(D_\mathrm{\Delta })`$ be the interpolation between the identity map and $`\alpha _c`$ defined by $`\beta _c=\eta id+(1\eta )\alpha _c`$. We choose $`r^{}>0`$ small enough such that, for every $`c_0[1,2]`$, and $`cD_r^{}(c_0)U`$, $`\beta _c`$ is a diffeomorphism. Then $`\beta _c:P_{c_0}^1(D_\mathrm{\Delta })P_c^1(D_\mathrm{\Delta })`$ is a holomorphic motion over $`D_r(c_0)`$ with the following properties:
* the map $`\beta _c`$ is a conjugacy between $`P_{c_0}`$ on $`P_{c_0}^1(D_\mathrm{\Delta })`$ and $`P_c`$ on $`P_c^1(D_\mathrm{\Delta })`$;
* the restriction of $`\beta _c`$ to the set $`D_\mathrm{\Delta }`$ is the identity map;
* if $`c`$ is real then $`\beta _c(\overline{z})=\overline{\beta _c(z)}`$.
By Theorem 10, $`\beta _c`$ extends to a holomorphic motion $`\widehat{\beta }_c:\overline{}\overline{}`$ over $`D_r^{}(c_0)`$, and, by taking $`r=r^{}/2`$, the map $`\widehat{\beta }_c`$ is $`3`$ quasiconformal for every $`cD_r(c_0)`$.
By Lemma 11, there is a $`K(\mu )`$ quasiconformal homeomorphism $`\mathrm{\Phi }_f:\overline{}\overline{}`$, and an open set $`V_f=\mathrm{\Phi }_f^1(D_\mathrm{\Delta })`$ such that (i) $`\mathrm{\Phi }_f`$ restricted to $`f^1(V_f)`$ is a hybrid conjugacy between $`f`$ and $`P_{c(f)}`$; (ii) $`\mathrm{\Phi }_f`$ is holomorphic over $`\overline{}\overline{V_f}`$; and (iii) $`\mathrm{\Phi }_f(\overline{z})=\overline{\mathrm{\Phi }_f(z)}`$. Let $`\mathrm{\Phi }_c:\overline{}\overline{}`$ be defined by $`\mathrm{\Phi }_c=\widehat{\beta }_c\mathrm{\Phi }_f`$. Then, for every $`cD_r(c_0)`$, $`\mathrm{\Phi }_c`$ is a $`3K`$ quasiconformal homeomorphism which conjugates $`f`$ on $`f^1(V_f)`$ with $`P_c`$ on $`P_c^1(D_\mathrm{\Delta })`$.
We define the Beltrami differential $`\mu _c`$ as follows:
* $`\mu _c(z)=0`$ if $`z𝒦(P_c)(D_\mathrm{\Delta })`$;
* $`(\mathrm{\Phi }_c)^{}\mu _c(z)=0`$ if $`zD_\mathrm{\Delta }\overline{P_c^1(D_\mathrm{\Delta })}`$;
* $`\left(P_c^n\right)_{}\mu _c(z)=\mu _c(P_c^n(z))`$ if $`zP_c^n(D_\mathrm{\Delta })\overline{P_c^{(n+1)}(D_\mathrm{\Delta })}`$ and $`n1`$.
Then (i) the Beltrami differential $`\mu _c`$ varies holomorphically with $`c`$; (ii) $`\mu _c_{\mathrm{}}<(3K1)/(3K+1)`$ for every $`cD_r(c(f))`$; and (iii) if $`c`$ is real then $`\mu _c(\overline{z})=\overline{\mu _c(z)}`$ for almost every $`z`$.
By the Ahlfors-Bers Theorem (see ), for every $`cD_r(c(f))`$ there is a normalized $`3K`$ quasiconformal homeomorphism $`\psi _c:\overline{}\overline{}`$ with $`\psi _c(0)=0`$, $`\psi _c(1)=1`$, and $`\psi _c(\mathrm{})=\mathrm{}`$ such that $`\mu _{\psi _c}=\mu _c`$, and $`\psi _c(z)`$ varies holomorphically with $`c`$. Thus, the restriction of $`\psi _c`$ to $`\overline{}\overline{D_\mathrm{\Delta }}`$ is a holomorphic map, and if $`c`$ is real then $`\psi _c(\overline{z})=\overline{\psi _c(z)}`$ for every $`z`$.
The map $`f_c:\psi _c(P_c^1(D_\mathrm{\Delta }))\psi _c(D_\mathrm{\Delta })`$ defined by $`f_c=\psi _cP_c\psi _c^1`$ is $`1`$ quasiconformal, and thus a holomorphic map. Furthermore, the map $`f_c`$ is hybrid conjugated to $`P_c`$, and so $`f_c`$ is a quadratic-like map whose straightening $`\xi (f)`$ is $`P_c`$. Since the conformal modulus of the annulus $`\psi _c(D_\mathrm{\Delta })\overline{\psi _c(P_c^1(D_\mathrm{\Delta }))}`$ depends only on $`3K(\mu )`$, we obtain that there is a positive constant $`\mu ^{}(\mu )`$ such that the conformal modulus of $`f_c`$ is greater than or equal to $`\mu ^{}(\mu )`$. If $`c`$ is real then $`f_c(\overline{z})=\overline{f_c(z)}`$, which implies that $`f_c`$ is a real quadratic-like map.
For the parameter $`c(f)`$, the map $`\psi _{c(f)}\mathrm{\Phi }_f`$ is $`1`$ quasiconformal and fixes three points ($`0`$, $`1`$ and $`\mathrm{}`$). Therefore, $`\psi _{c(f)}\mathrm{\Phi }_f`$ is the identity map, and since the map $`\psi _{c(f)}\mathrm{\Phi }_f`$ conjugates $`f`$ with $`f_{c(f)}`$, we get $`f_{c(f)}=f`$.
Now, let us prove that the quadratic-like map $`f_c`$ satisfies inequality (7). By compactness of the set of all $`3K(\mu )`$ quasiconformal homeomorphisms $`\varphi `$ on $`\overline{}`$ fixing three points ($`0`$, $`1`$ and $`\mathrm{}`$), there are positive constants $`l(s,\mu )sL(s,\mu )`$ for every $`s>0`$ with the property that
(8)
$$D_l\subset \varphi (D_s)\quad \mathrm{and}\quad \overline{\mathbb{C}}\setminus \overline{D_L}\subset \varphi (\overline{\mathbb{C}}\setminus \overline{D_s}).$$
Thus, there is $`\mathrm{\Delta }^{\prime \prime }=L(L(\mathrm{\Delta }))`$ with the property that $`\omega _c=\psi _c\psi _{c(f)}^1`$ is holomorphic in $`\overline{}\overline{D_{\mathrm{\Delta }^{\prime \prime }}}`$ for every $`cD_r(c(f))`$, and $`c(f)[1,2]`$.
Let $`S_{2\mathrm{\Delta }^{\prime \prime }}`$ be the circle centered at the origin and with radius $`2\mathrm{\Delta }^{\prime \prime }`$. By (8), we obtain that $`\omega _c(S_{2\mathrm{\Delta }^{\prime \prime }})`$ is at a uniform distance from $`0`$ and $`\mathrm{}`$ for every $`cD_r(c(f))`$, and $`c(f)[1,2]`$. Hence, by the Cauchy Integral Formula, and since $`\omega _c`$ is a holomorphic motion over $`D_r(c(f))`$, the value $`a_c=\omega _c^{}(\mathrm{})`$ varies holomorphically with $`c`$, and there is a constant $`L_1(\mu )>0`$ with the property that
(9)
$$|a_c-1|<L_1|c-c(f)|.$$
Thus, (i) the map $`a_c\omega _c`$ is holomorphic in $`\overline{}\overline{D_{\mathrm{\Delta }^{\prime \prime }}}`$; (ii) $`\mu _{a_c\omega _c}_{\mathrm{}}`$ is less than or equal to $`(9K^21)/(9K^2+1)`$; and (iii) $`lim_{|z|\mathrm{}}(a_c\omega _c(z)z)=0`$. Hence, by Theorem 9, there is a positive constant $`L_2(\mu )`$ such that, for every $`cD_r(c(f))`$, and for every $`c(f)[1,2]`$, we get
(10)
$$\|a_c\omega _c-id\|_{C^0}\le L_2\|\mu _{a_c\omega _c}\|_{\mathrm{\infty }}.$$
Since $`a_c\omega _c`$ is a holomorphic motion over $`D_r(c(f))`$, and by Theorem 10, we get
(11)
$$\|\mu _{a_c\omega _c}\|_{\mathrm{\infty }}\le \frac{|c-c(f)|}{r}.$$
By inequalities (9), (10), and (11) there is a positive constant $`L_3(\mu )`$ such that, for every $`c(f)[1,2]`$, and for every $`c(c(f)r,c(f)+r)`$, we obtain
(12)
$$\|\omega _c-id\|_{C^0(I)}<L_3|c-c(f)|.$$
This implies that
(13)
$$\|\omega _c^{-1}-id\|_{C^0(I)}<L_3|c-c(f)|.$$
Since $`\omega _c`$ is a $`9K^2`$ quasiconformal homeomorphism, and fixes three points, we obtain from Theorem 4.3 in page 70 of that there are positive constants $`\beta (\mu )1`$ and $`L_4(\mu )`$ with the property that $`\omega _c_{C^\beta (I)}<L_4`$. Then by inequalities (12) and (13) there is a positive constant $`L_5(\mu )`$ such that, for every $`c(f)[1,2]`$, and for every $`c(c(f)r,c(f)+r)`$, we have
$$\|f_c-f_{c(f)}\|_{C^0(I)}\le \|\omega _c-id\|_{C^0(I)}+\|\omega _c\|_{C^\beta (I)}\|P_c-P_{c(f)}\|_{C^0(I)}^\beta +\|\omega _c\|_{C^\beta }\|P_{c(f)}\|_{C^1(I)}^\beta \|\omega _c^{-1}-id\|_{C^0(I)}^\beta \le L_5|c-c(f)|^\beta .$$
Finally, by increasing the constant $`L_5`$ if necessary, we obtain that the last inequality is also satisfied for every $`c(f)`$ and $`c`$ contained in $`[1,2]`$. ∎
## 4. Proofs of the main results
### 4.1. Proof of Lemma 2
Let $`f=\varphi _f\circ p`$ be a $`C^2`$ infinitely renormalizable map with bounded combinatorial type. Let $`N`$ be such that the combinatorial type of $`f`$ is bounded by $`N`$, and set $`b=\|\varphi _f\|_{C^2}`$. By Lemma 8, there are positive constants $`\gamma (N)<1`$, $`\alpha (N)`$, $`\mu (N)`$, and $`c_1(b,N)`$ with the following properties: for every $`n\ge 0`$, there is an $`[\alpha n+1]`$ times renormalizable quadratic-like map $`F_n`$ with renormalization type $`\underset{¯}{\sigma }(n)=\sigma _{R^nf},\dots ,\sigma _{R^{n+[\alpha n]}f}`$, with conformal modulus greater than or equal to $`\mu `$, and satisfying
(14)
$$\|R^nf-F_n\|_{C^0(I)}\le c_1\gamma ^n.$$
By Milnor-Thurston’s topological classification (see and Theorem 4.2a. in page 470 of ), the real values $`c`$ for which the real quadratic maps $`P_c(z)=1cz^2`$ have renormalization type $`\underset{¯}{\sigma }(n)`$ is an interval $`I_{\underset{¯}{\sigma }(n)}`$. Thus, by Sullivan’s pull-back argument (see and Theorem 4.2b. in page 471 of ), there is a unique $`c_nI_{\underset{¯}{\sigma }(n)}`$ such that $`P_{c_n}`$ has the same combinatorial type as $`R^n(f)`$. By Douady-Hubbard’s Theorem 1 in , there is a unique quadratic map $`\xi (F_n)=P_{c(F_n)}`$ which is hybrid conjugated to $`F_n`$. Since $`F_n`$ has renormalization type $`\underset{¯}{\sigma }(n)`$, the parameter $`c(F_n)`$ belongs to $`I_{\underset{¯}{\sigma }(n)}`$. By Lyubich’s Theorem 9.6 in page 79 of , there are positive constants $`\lambda (N)<1`$ and $`c_2(N)`$ such that $`|I_{\underset{¯}{\sigma }(n)}|c_2\lambda ^n`$. Therefore, $`|c_nc(F_n)|c_2\lambda ^n`$.
By Lemma 12, there are positive constants $`\beta (\mu )<1`$, $`D(\mu )`$, and $`\mu ^{}(\mu )`$ with the following properties: for every $`n0`$, there is a real quadratic-like map $`f_n`$ with conformal modulus greater than or equal to $`\mu ^{}`$, satisfying $`\xi (f_n)=P_{c_n}`$, and such that
$$\|f_n-F_n\|_{C^0(I)}\le D|c_n-c(F_n)|^\beta \le Dc_2^\beta \lambda ^{\beta n}.$$
Therefore, the map $`f_n`$ has the same combinatorial type as $`R^n(f)`$, and, by inequality (14), for $`C(b,N)=c_1+Dc_2^\beta `$ and $`\eta (N)=\mathrm{max}\{\gamma ,\lambda ^\beta \}`$, we get
$$\|R^nf-f_n\|_{C^0(I)}\le C\eta ^n.$$
### 4.2. Proof of Theorem 1
Let $`f=\varphi _f\circ p`$ and $`g=\varphi _g\circ p`$ be any two $`C^2`$ infinitely renormalizable unimodal maps with the same bounded combinatorial type. Let $`N`$ be such that the combinatorial type of $`f`$ and $`g`$ are bounded by $`N`$, and set $`b=\mathrm{max}\{\|\varphi _f\|_{C^2},\|\varphi _g\|_{C^2}\}`$. For every $`n\ge 0`$, let $`m=[\alpha n]`$, where $`0<\alpha <1`$ will be fixed later in the proof. By Lemma 2, there are positive constants $`\eta (N)<1`$ and $`c_1(b,N)`$, and there are infinitely renormalizable real quadratic-like maps $`F_m`$ and $`G_m`$ with the following property:
(15)
$$\|R^mf-F_m\|_{C^0(I)}\le c_1\eta ^{\alpha n}\quad \mathrm{and}\quad \|R^mg-G_m\|_{C^0(I)}\le c_1\eta ^{\alpha n}.$$
By Lemma 6, there are positive constants $`n_3(b)`$ and $`L(N)`$ such that, for every $`m>n_3`$, we get
$$\|R^nf-R^{n-m}F_m\|_{C^0(I)}\le L^{n-m}\|R^mf-F_m\|_{C^0(I)}\le c_1\left(L^{1-\alpha }\eta ^\alpha \right)^n,$$
and, similarly,
(17)
$$\|R^ng-R^{n-m}G_m\|_{C^0(I)}\le c_1\left(L^{1-\alpha }\eta ^\alpha \right)^n.$$
Now, we fix $`0<\alpha (N)<1`$ such that $`L^{1-\alpha }\eta ^\alpha <1`$.
Again, by Lemma 2, $`F_m`$ and $`G_m`$ have conformal modulus greater than or equal to $`\mu (N)`$, and the same combinatorial type as $`R^mf`$ and $`R^mg`$. Therefore, by McMullen’s Theorem 9.22 in page 172 of , there are positive constants $`\nu _2(N)<1`$ and $`c_2(\mu ,N)`$ with the property that
(18)
$$\|R^{n-m}F_m-R^{n-m}G_m\|_{C^0(I)}\le c_2\nu _2^{n-m}.$$
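Spelled out (the chain itself is only implicit in the original argument), the triangle inequality combines the three estimates above as
$$\|R^nf-R^ng\|_{C^0(I)}\le \|R^nf-R^{n-m}F_m\|_{C^0(I)}+\|R^{n-m}F_m-R^{n-m}G_m\|_{C^0(I)}+\|R^{n-m}G_m-R^ng\|_{C^0(I)},$$
and each term on the right-hand side decays exponentially in $`n`$ by the bounds just obtained.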
By inequalities $`(\text{4.2})`$, $`(\text{17})`$, and $`(\text{18})`$, there are constants $`c_3(b,N)=2c_1+c_2`$ and $`\nu _3(N)=\mathrm{max}\{L^{1-\alpha }\eta ^\alpha ,\nu _2^{1-\alpha }\}`$ such that
$$\|R^nf-R^ng\|_{C^0(I)}\le c_3\nu _3^n.$$
By Theorem 9.4 in page 552 of , the exponential convergence implies that there is a $`C^{1+\alpha }`$ diffeomorphism which conjugates $`f`$ and $`g`$ along the closure of the corresponding orbits of the critical points for some $`\alpha (N)>0`$. ∎
The exponential convergence of the renormalization operator in the space of real analytic unimodal maps holds for every combinatorial type. Indeed, if $`f`$ and $`g`$ are real analytic infinitely renormalizable maps, by the complex bounds in Theorem A of Levin-van Strien in , there exists an integer $`N`$ such that $`R^N(f)`$ and $`R^N(g)`$ have quadratic like extensions. Then we can use Lyubich’s Theorem 1.1 in to conclude the exponential convergence. However, as we pointed out before, this is not sufficient to give the $`C^{1+\alpha }`$ rigidity. Finally, at the moment, we cannot prove the exponential convergence of the operator for $`C^2`$ mappings with unbounded combinatorics.
Acknowledgments Alberto Adrego Pinto would like to thank IMPA, University of Warwick, and IMS at SUNY Stony Brook for their hospitality. We would like to thank Edson de Faria, and Mikhail Lyubich for useful discussions. This work has been partially supported by the Pronex Project on Dynamical Systems, Fundação para a Ciência, Praxis XXI, Calouste Gulbenkian Foundation, and Centro de Matemática Aplicada, da Universidade do Porto, Portugal.
# Transversity distributions and Drell-Yan spin asymmetries
## 1 Introduction
Since the discovery of a serious spin problem, the internal spin structure of the nucleon has been a popular topic. Using many experimental data on $`g_1`$, we have a rough idea on the longitudinally polarized parton distributions. On the other hand, the transversity distributions $`\mathrm{\Delta }__Tq`$ have not been measured at all because they cannot be measured in the usual inclusive deep inelastic scattering. They are expected to be measured in the transversely polarized Drell-Yan process at RHIC and semi-inclusive process at HERA. We should try to understand the properties of $`\mathrm{\Delta }__Tq`$ before the experimental data are taken.
The $`Q^2`$ evolution of the transversity distributions has been already investigated in detail including the next-to-leading-order (NLO) effects . Using these results, we investigate the antiquark flavor asymmetry in the transversity distributions . Now, it is well known that light antiquark distributions are not flavor symmetric according to the NMC, NA51, E866, and HERMES experimental data. In particular, the recent E866 result revealed the $`x`$ dependence of the ratio $`\overline{d}/\overline{u}`$ by measuring the Drell-Yan proton-deuteron asymmetry. The antiquark flavor asymmetry in the polarized distributions is also expected to exist. However, it is not known at this stage except for a few theoretical predictions . Because the polarized antiquark distributions are measured at RHIC, it is important to investigate a possible asymmetric distribution. In this paper, we calculate the transversity flavor asymmetry by using the nonperturbative models. Then, obtained results are used for calculating the transverse double spin asymmetry $`A_{_{TT}}`$ at a RHIC energy. Furthermore, we discuss the flavor asymmetry effects on the Drell-Yan proton-deuteron asymmetry $`\mathrm{\Delta }__T\sigma ^{pd}/2\mathrm{\Delta }__T\sigma ^{pp}`$.
## 2 Antiquark flavor asymmetry
The contribution from perturbative QCD is rather small as far as the $`Q^2`$ evolution in the range $`Q^2\ge 1`$ GeV<sup>2</sup> is concerned. Therefore, we expect that the antiquark flavor asymmetry comes almost entirely from non-perturbative mechanisms. Among such mechanisms, meson-cloud and Pauli-exclusion-principle models are typical in discussions of the unpolarized $`\overline{u}/\overline{d}`$ asymmetry . We try to apply these mechanisms to the polarized distributions . Since we have no experimental information about the transversity distributions, we first calculate the flavor asymmetry in the longitudinally polarized distributions. Then, we assume that the transversity distributions are equal to the longitudinal ones at small $`Q^2`$, following the prediction of the nonrelativistic quark model.
First, we discuss the meson-cloud model. In this model, we calculate the meson-nucleon-baryon (MNB) process in which the initial nucleon splits into a virtual meson and a baryon, then the virtual photon from lepton interacts with this meson. Because the lightest vector meson is the $`\rho `$ meson, we calculate the $`\rho `$-meson contribution to the flavor asymmetry in the polarized distributions. The contributions from this kind of process can be expressed by the following convolution integral:
$$\mathrm{\Delta }\overline{q}(x,Q^2)=\int _x^1\frac{dy}{y}\mathrm{\Delta }f_{\rho NB}(y)\mathrm{\Delta }\overline{q}_\rho \left(\frac{x}{y},Q^2\right),$$
(1)
where the function $`\mathrm{\Delta }f_{\rho NB}`$ represents the $`\rho `$-meson momentum distribution due to the $`\rho `$NB process and the function $`\mathrm{\Delta }\overline{q}_\rho `$ represents the polarized antiquark distribution in the $`\rho `$ meson. In our analysis, the nucleon and the $`\mathrm{\Delta }`$ are taken into account as final state baryons, and all the possible $`\rho `$NB processes are considered. Since the dominant contribution comes from the $`\rho ^+`$ meson, which has a valence $`\overline{d}`$ quark, the meson cloud contributes a $`\mathrm{\Delta }\overline{d}`$ excess over $`\mathrm{\Delta }\overline{u}`$. Note that the $`\rho `$-meson effects are also studied in Ref. using a slightly different method from ours in the calculation of $`\mathrm{\Delta }f_{\rho NB}`$.
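As a rough numerical illustration of how a convolution of this type is evaluated (the splitting function and the $`\rho `$-meson antiquark distribution below are toy shapes chosen only for demonstration, not the ones used in this analysis; SciPy is assumed to be available), one can write:

```python
# Sketch: evaluating a meson-cloud convolution
#   Delta qbar(x) = int_x^1 (dy / y) Delta f_rhoNB(y) Delta qbar_rho(x / y).
# Both input functions are illustrative toy parametrizations.
from scipy.integrate import quad

def delta_f_rhoNB(y):
    return y * (1.0 - y) ** 3        # toy rho-meson momentum distribution

def delta_qbar_rho(z):
    return (1.0 - z) ** 2            # toy polarized antiquark distribution in the rho

def delta_qbar(x):
    integrand = lambda y: delta_f_rhoNB(y) * delta_qbar_rho(x / y) / y
    value, _ = quad(integrand, x, 1.0)
    return value

print(delta_qbar(0.1))
```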
Next, we discuss the Pauli exclusion model. Because there already exist some studies on this mechanism in the polarized case , we simply use their results. According to the SU(6) quark model, the quark state probabilities in the proton spin-up state are $`u^{\uparrow }=5/3`$, $`u^{\downarrow }=1/3`$, $`d^{\uparrow }=1/3`$, and $`d^{\downarrow }=2/3`$. Since the probability of $`u^{\uparrow }`$ ($`d^{\downarrow }`$) is much larger than that of $`u^{\downarrow }`$ ($`d^{\uparrow }`$), it is more difficult to create $`u^{\uparrow }`$ ($`d^{\downarrow }`$) sea than $`u^{\downarrow }`$ ($`d^{\uparrow }`$) sea because of the Pauli exclusion principle. Then, assuming that the exclusion effect is the same as in the unpolarized case, $`(u_s^{\uparrow }-u_s^{\downarrow })/(u_v^{\uparrow }-u_v^{\downarrow })=(d_s-u_s)/(u_v-d_v)`$ and a similar equation for $`d_s^{\uparrow }-d_s^{\downarrow }`$, we obtain $`\mathrm{\Delta }\overline{u}=-0.13`$ and $`\mathrm{\Delta }\overline{d}=+0.05`$. As a result, a flavor asymmetry also arises from this mechanism.
Both models should be valid only at small $`Q^2`$, so the GRSV parametrization at $`Q^2`$=0.34 GeV<sup>2</sup> is chosen in our calculation. The obtained results for the $`\mathrm{\Delta }_T\overline{u}-\mathrm{\Delta }_T\overline{d}`$ distribution are shown at $`Q^2`$ = 10 GeV<sup>2</sup> in Figure 1. From this figure, we find that both model predictions have a similar tendency, namely a $`\mathrm{\Delta }_T\overline{d}`$ excess over $`\mathrm{\Delta }_T\overline{u}`$. However, the meson-cloud contributions seem to be smaller than those of the exclusion model. Recently, a flavor asymmetric distribution was proposed by analysing deep inelastic semi-inclusive data . Our distributions are consistent with their result because the accuracy of the present semi-inclusive data is insufficient to determine the flavor asymmetry accurately.
## 3 Transverse double spin asymmetry
The transversity distributions will be studied by measuring the Drell-Yan transverse double spin asymmetries $`A_{_{TT}}`$ at RHIC. In this section, we discuss the flavor asymmetry effects on $`A_{_{TT}}`$. We calculate $`A_{_{TT}}`$ with NLO contributions at the RHIC energy $`\sqrt{s}=200`$ GeV by using the flavor symmetric and the flavor asymmetric parton distributions.
The results are shown in Figure 2 as the function of dimuon mass square. The solid, dashed, and dotted curves represent the flavor symmetric, meson-cloud, and Pauli exclusion results, respectively. From this figure, we find that the magnitude of $`A_{_{TT}}`$ is about 1% in the dimuon mass region $`100<M_{\mu \mu }^2<500`$ GeV<sup>2</sup>. Furthermore, we find that the flavor asymmetric results are considerably different from the flavor symmetric one. However, the differences are not enough to find the flavor asymmetry effects because we may change the magnitude of the transversity distributions so as to agree with the experimental data. Although the longitudinal flavor asymmetry $`\mathrm{\Delta }\overline{u}/\mathrm{\Delta }\overline{d}`$ should be investigated by the $`W^\pm `$ production processes, the transversity asymmetry cannot be measured by the $`W^\pm `$. Therefore, we should think about possible measurements in order to investigate the $`\mathrm{\Delta }__T\overline{u}/\mathrm{\Delta }__T\overline{d}`$ asymmetry.
## 4 Drell-Yan proton-deuteron asymmetry
As an alternative candidate for finding the transversity flavor asymmetry $`\mathrm{\Delta }_T\overline{u}/\mathrm{\Delta }_T\overline{d}`$, we propose to use the polarized proton-deuteron Drell-Yan process in combination with the polarized pp process . Recently, a general formalism for the polarized pd reactions was completed . From their parton model analysis, the Drell-Yan proton-deuteron asymmetry $`\mathrm{\Delta }_T\sigma ^{pd}/2\mathrm{\Delta }_T\sigma ^{pp}`$ is expressed in terms of transversity distributions in the proton and the deuteron. If we neglect the nuclear effects in the deuteron and assume isospin symmetry, the Drell-Yan p-d asymmetry in the large $`x_F`$ ($`=x_A-x_B`$) region is approximately given by the following equation:
$$\frac{\mathrm{\Delta }_T\sigma ^{pd}}{2\mathrm{\Delta }_T\sigma ^{pp}}\approx \frac{\left[1+\frac{1}{4}\frac{\mathrm{\Delta }_Td\left(x_A\right)}{\mathrm{\Delta }_Tu\left(x_A\right)}\right]\left[1+\frac{\mathrm{\Delta }_T\overline{d}\left(x_B\right)}{\mathrm{\Delta }_T\overline{u}\left(x_B\right)}\right]}{2\left[1+\frac{1}{4}\frac{\mathrm{\Delta }_Td\left(x_A\right)}{\mathrm{\Delta }_Tu\left(x_A\right)}\frac{\mathrm{\Delta }_T\overline{d}\left(x_B\right)}{\mathrm{\Delta }_T\overline{u}\left(x_B\right)}\right]},$$
(2)
where $`x_A`$ and $`x_B`$ represent the momentum fractions in hadron A (the proton) and hadron B (the deuteron). From this equation, the ratio clearly becomes one if $`\mathrm{\Delta }_T\overline{u}`$ is equal to $`\mathrm{\Delta }_T\overline{d}`$. Therefore, if the measured ratio differs from one, it indicates a flavor asymmetry.
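The limiting behaviour of Eq. (2) is easy to check numerically; the short sketch below (with arbitrary toy values for the distribution ratios, introduced only for illustration) confirms that the ratio collapses to one for a flavor-symmetric sea and deviates from one otherwise.

```python
# Sketch: the large-x_F ratio of Eq. (2) as a function of
#   r_v = Delta_T d(x_A) / Delta_T u(x_A)  and  r_s = Delta_T dbar(x_B) / Delta_T ubar(x_B).
def pd_over_2pp(r_v, r_s):
    return (1.0 + 0.25 * r_v) * (1.0 + r_s) / (2.0 * (1.0 + 0.25 * r_v * r_s))

print(pd_over_2pp(r_v=-0.3, r_s=1.0))   # flavor-symmetric sea: exactly 1
print(pd_over_2pp(r_v=-0.3, r_s=2.0))   # flavor-asymmetric sea: differs from 1
```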
We calculate the ratio $`\mathrm{\Delta }_T\sigma ^{pd}/2\mathrm{\Delta }_T\sigma ^{pp}`$ by using the flavor symmetric and the flavor asymmetric distributions . As a result, we find that the flavor symmetric result becomes almost one in the large $`x_F`$ region, as discussed above, whereas the meson-cloud and the Pauli exclusion results are significantly different from one. From this analysis, we conclude that the Drell-Yan p-d asymmetry is very useful for investigating the light antiquark flavor asymmetry in the transversity distributions. We mention that the p-d asymmetry should also be used for studying the flavor asymmetry in the longitudinally polarized distributions in a similar way. At this stage, there is no project to measure the polarized proton-deuteron Drell-Yan process. However, we expect it will be possible in a future RHIC, FNAL, or HERA experiment.
M.M. was supported by the JSPS Research Fellowships for Young Scientists. This research was partly supported by the Grant-in-Aid from the Japanese Ministry of Education, Science, and Culture.
# May 1999 UM-P-99/11 RCHEP-99/05 Maximal $`\nu _e\rightarrow \nu _s`$ solution to the solar neutrino problem: just-so, MSW or energy independent?
## I Introduction
Five experiments have measured solar neutrino fluxes that are significantly deficient relative to standard solar model expectations. Four out of the five experiments find overall fluxes that are roughly $`50\%`$ of the theoretical predictions. (The Chlorine experiment sees a larger deficit.) Maximal mixing between the electron neutrino and a sterile flavour has been proposed as the underlying explanation for these observations. This oscillation mode produces a $`50\%`$ reduction in the day-time solar neutrino flux for a large range of the relevant $`\mathrm{\Delta }m^2`$ parameter:
$$10^{-3}>\frac{\mathrm{\Delta }m^2}{\mathrm{eV}^2}\gtrsim \text{few}\times 10^{-10}.$$
(1)
The upper bound arises from the lack of $`\overline{\nu }_e`$ disappearance in the CHOOZ experiment (note that this entire range for $`\mathrm{\Delta }m^2`$ does not necessarily lead to any inconsistency with bounds imposed by big bang nucleosynthesis), while the lower bound is a rough estimate of the transition region between the totally averaged oscillation regime and the “just-so” regime.
The very special feature of maximal mixing between the $`\nu _e`$ and a sterile flavour is well motivated by the Exact Parity Model (also known as the mirror matter model). In this theory, the sterile flavour maximally mixing with the $`\nu _e`$ is identified with the mirror electron neutrino. The characteristic maximal mixing feature occurs because of the underlying exact parity symmetry between the ordinary and mirror sectors. The potentially maximal mixing observed for atmospheric muon neutrinos is beautifully in accord with this framework, which sees the atmospheric neutrino problem resolved through ‘$`\nu _\mu \rightarrow `$ mirror partner’ oscillations. (The mirror partners of the three ordinary neutrino flavours are distinct, effectively sterile light neutrino flavours.) The Exact Parity Model therefore provides a unified understanding of the solar and atmospheric neutrino problems: each is due to maximal oscillations of the relevant ordinary neutrino into its mirror partner. Note that the mirror neutrino scenario is phenomenologically similar to the pseudo-Dirac scenario.
The chlorine result is not quantitatively consistent with this view of the origin of the solar neutrino anomaly, it being about $`30\%`$ too low. We await with interest some new experiments that have the capacity to double-check this result.
The purpose of this paper is to make a more detailed analysis of the maximal $`\nu _e\nu _s`$ solution to the solar neutrino problem. We do so because of two recent developments: (i) the clarification from Guth et al. that a day-night asymmetry generically exists even for maximal mixing (due to Earth regeneration which affects the night-time events) and (ii) the observation by SuperKamiokande of an interesting feature in the recoil electron energy spectrum for $`E>13`$ MeV. We will calculate the day-night asymmetry and the recoil electron spectrum as a function of $`\mathrm{\Delta }m^2`$ in the range $`10^{-3}`$ eV<sup>2</sup> to $`10^{-11}`$ eV<sup>2</sup>. We will draw the important conclusion that the maximal $`\nu _e\nu _s`$ scenario has a larger number of characteristic and testable features than realised hitherto. We will summarise the “smoking gun” experimental signatures for this scenario in the concluding section.
## II Day-night asymmetry
Guth et al. have recently provided a very lucid account of the physics of the day-night effect for maximally mixed solar neutrinos. This is important for the maximal $`\nu _e\nu _s`$ scenario for two reasons. First, high statistics experiments such as SuperKamiokande have an on-going programme to measure the solar neutrino day-night asymmetry. It had been previously and erroneously thought that maximally mixed $`\nu _e`$’s would not give rise to a day-night asymmetry. We will calculate this asymmetry for the total, energy-integrated flux relevant for SuperKamiokande as a function of $`\mathrm{\Delta }m^2`$. We will see that the present data already rule out a range of $`\mathrm{\Delta }m^2`$. If a nonzero asymmetry were to be experimentally established in the future, then this would not falsify the maximal $`\nu _e\nu _s`$ scenario, contrary to previous belief. Rather, such an observation would help to pin down the actual value of $`\mathrm{\Delta }m^2`$.
The second consequence of doing the physics correctly is the revelation that the night-time flux reduction is generically not an energy-independent factor of a half as previously advertised. This is simply because the Earth regeneration effect is energy-dependent. Data samples which do not separate day-time and night-time observations would thus be expected to show a (weak) energy dependent suppression if the maximal $`\nu _e\nu _s`$ scenario is correct.
Since the paper by Guth et al. provides a complete account of how the day-night asymmetry is calculated, we will not repeat all of this material here. Suffice it to say that we checked our numerical procedure by recalculating some of the results given by Guth et al. for maximal $`\nu _e\nu _\mu (\text{or}\nu _\tau )`$ oscillations. We found agreement. The new computation we present here is very similar, but involves changing the effective potential for neutrino oscillations in the Earth to the one relevant for $`\nu _e\nu _s`$ oscillations. It is given by
$$V=\sqrt{2}G_F\left(N_e-\frac{N_n}{2}\right)=\frac{G_F}{\sqrt{2}}\frac{\rho }{m_N}(3Y_e-1),$$
(2)
where $`G_F`$ is the Fermi constant, $`N_e`$ ($`N_n`$) is the terrestrial electron (neutron) number density, $`m_N`$ is the nucleon mass, $`\rho `$ is the terrestrial mass density, and $`Y_e`$ is the number density of electrons per nucleon. We used the terrestrial density profile given in Ref.. The core of the computation is the determination of the recoil electron flux $`g(\alpha ,T)`$ at apparent recoil energy $`T`$ and zenith angle $`\alpha `$ (using the notation of Guth et al.). It is given by
$$g(\alpha ,T)=\int _0^{\infty }dE_\nu \,\mathrm{\Phi }(E_\nu )\int _0^{T_{\text{max}}^{\prime }}dT^{\prime }\,R(T,T^{\prime })\frac{d\sigma }{dT^{\prime }}(T^{\prime },E_\nu ,\alpha )$$
(3)
where $`\mathrm{\Phi }(E_\nu )`$ is the boron neutrino spectrum, $`R(T,T^{\prime })`$ is the energy resolution function given by
$$R(T,T^{\prime })=\frac{1}{\mathrm{\Delta }_{T^{\prime }}\sqrt{2\pi }}\mathrm{exp}\left[-\frac{(T^{\prime }-T)^2}{2\mathrm{\Delta }_{T^{\prime }}^2}\right],$$
(4)
and the effective cross-section is given by
$$\frac{d\sigma }{dT^{}}(T^{},E_\nu ,\alpha )=P(E_\nu ,\alpha )\frac{d\sigma _{\nu _e}}{dT^{}}(T^{},E_\nu ).$$
(5)
The function $`P`$ is the electron neutrino survival probability incorporating matter effects in the Earth, and $`d\sigma _{\nu _e}/dT^{\prime }`$ is the $`\nu _ee`$ scattering cross-section. The resolution width is given by $`\mathrm{\Delta }_{T^{\prime }}/\text{MeV}\simeq 0.47\sqrt{T^{\prime }/\text{MeV}}`$.
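As an illustration of how Eqs. (3)–(5) are assembled numerically, the sketch below smears a schematic true recoil-electron spectrum with the Gaussian resolution function of Eq. (4); the input spectrum is a placeholder rather than the boron-neutrino prediction, and the survival probability and cross-section factors are omitted for brevity.

```python
import numpy as np

def resolution_width(T_true):
    # Eq. (4) width: Delta_{T'} / MeV ~= 0.47 * sqrt(T'/MeV)
    return 0.47 * np.sqrt(T_true)

def smear_spectrum(T_apparent, T_true, dN_dTtrue):
    """Convolve a true recoil-electron spectrum with the Gaussian resolution
    function R(T, T') of Eq. (4), by direct quadrature over T'."""
    smeared = np.zeros_like(T_apparent)
    dT = T_true[1] - T_true[0]
    width = resolution_width(T_true)
    for i, T in enumerate(T_apparent):
        R = np.exp(-(T_true - T) ** 2 / (2.0 * width ** 2)) / (width * np.sqrt(2.0 * np.pi))
        smeared[i] = np.sum(R * dN_dTtrue) * dT
    return smeared

# Placeholder "true" spectrum (not the boron-neutrino prediction): a smooth
# shape vanishing above ~15 MeV, just to exercise the smearing.
T_true = np.linspace(0.1, 20.0, 400)          # MeV
dN_dTtrue = np.maximum(0.0, 1.0 - T_true / 15.0)
T_app = np.linspace(5.0, 20.0, 60)            # apparent energies, MeV
print(smear_spectrum(T_app, T_true, dN_dTtrue)[:5])
```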
Our result is presented in Fig.1, where the energy integrated day-night asymmetry $`A_{nd}`$ is plotted as a function of $`\mathrm{\Delta }m^2`$ for maximal $`\nu _e\nu _s`$ oscillations. The asymmetry is defined by
$$A_{nd}\equiv \frac{\int _{6.5\,\text{MeV}}^{\infty }dT\left[N(T)-D(T)\right]}{\int _{6.5\,\text{MeV}}^{\infty }dT\left[N(T)+D(T)\right]},$$
(6)
where $`6.5`$ MeV is the energy threshold relevant for the day-night asymmetry measurement of SuperKamiokande, $`D(T)`$ is the day-time recoil electron flux and $`N(T)`$ is the night-time flux. The day-night asymmetry is computed from $`g(\alpha ,T)`$ via the procedure described in Appendix B of Guth et al. An average is performed over all zenith angles and seasons of the year. Observe that the day-night asymmetry is positive and peaks with a value of about $`20\%`$ when $`\mathrm{\Delta }m^2\simeq 10^{-6}`$ eV<sup>2</sup>. The present SuperKamiokande result,
$$A_{nd}=+0.026\pm 0.021(\text{stat. + sys.}),$$
(7)
yields a $`2\sigma `$ upper bound of roughly $`A_{nd}<0.068`$ which is plotted as a horizontal line in Fig.1. We see that the range
$$2\times 10^{-7}\lesssim \frac{\mathrm{\Delta }m^2}{\text{eV}^2}\lesssim 8\times 10^{-6}$$
(8)
is disfavoured at the $`2\sigma `$ level. The regions immediately to the side of the disfavoured region will obviously be probed as more data are gathered. The asymmetry falls to the $`1\%`$ level at about $`\mathrm{\Delta }m^2=3\times 10^{-8}`$ eV<sup>2</sup> and $`5\times 10^{-5}`$ eV<sup>2</sup>.
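For reference, a minimal sketch of how the energy-integrated asymmetry of Eq. (6) is formed from binned day-time and night-time spectra is given below; the input spectra are arbitrary placeholders rather than our computed fluxes.

```python
import numpy as np

def day_night_asymmetry(T, day_flux, night_flux, threshold=6.5):
    """Energy-integrated asymmetry A_nd of Eq. (6): the integrals are
    approximated by sums over uniformly spaced energy bins above the
    6.5 MeV SuperKamiokande threshold (the bin width cancels in the ratio)."""
    mask = T >= threshold
    num = np.sum(night_flux[mask] - day_flux[mask])
    den = np.sum(night_flux[mask] + day_flux[mask])
    return num / den

# Placeholder spectra: a day-time rate of exactly half the no-oscillation
# rate and a night-time rate a few percent higher (illustrative only).
T = np.linspace(5.0, 20.0, 100)
day = 0.5 * np.ones_like(T)
night = 0.53 * np.ones_like(T)
print(day_night_asymmetry(T, day, night))   # ~0.029
```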
If a positive nonzero value for $`A_{nd}`$ were to be measured, then an ambiguity would remain in the determination of $`\mathrm{\Delta }m^2`$: values on either side of the presently disfavoured region can produce the same $`A_{nd}`$. This ambiguity could in principle be resolved from a determination of the energy dependence of the night-time rate. Figures 2a and 2b depict the energy dependence of the flux reduction for two representative values of $`\mathrm{\Delta }m^2`$ on either side of the disfavoured region. Figure 2a takes $`\mathrm{\Delta }m^2\simeq 10^{-7}`$ eV<sup>2</sup>, with the solid (dotted) line showing the ratio of night-time (day-time) flux per unit energy to the no-oscillation expectation. The dot-dashed curve is the average. Note that the day-time rate is rigorously an energy-independent factor of two less than the no-oscillation rate, while the night-time rate is 5–8% higher than the day-time rate and is weakly energy-dependent. Figure 2b considers the same quantities for $`\mathrm{\Delta }m^2=10^{-5}`$ eV<sup>2</sup>. Note that the slopes of the night-time fluxes have opposite signs on opposite sides of the disfavoured region. The energy dependence becomes unobservably small for $`\mathrm{\Delta }m^2`$ values away from the interval around $`10^{-6}`$ eV<sup>2</sup>.
## III Recoil electron spectrum
An interesting situation exists at present with regard to the recoil electron energy spectrum measured by SuperKamiokande. If the $`hep`$ neutrino flux predictions from standard solar models are taken at face value, then SuperKamiokande has evidence for a distortion in the boron neutrino induced recoil energy spectrum for energies greater than about 13 MeV. We will call this the “spectral anomaly”. More specifically, the spectral anomaly is an excess of events relative to what would be expected on the basis of an energy-independent boron neutrino flux reduction of about $`50\%`$. The observed distortion also disfavours the popular small and large mixing angle MSW solutions to the solar neutrino problem.
One can readily identify four possible interpretations of the spectral anomaly: (i) Standard solar models grossly underestimate the $`hep`$ neutrino flux. (ii) There is an as yet unidentified systematic error in the energy resolution function used by SuperKamiokande, and/or in their energy calibration. (iii) It is a statistical fluctuation. (iv) New physics is the cause. We will not consider (i) in this paper, and instead focus on (iv). Before doing so, we briefly discuss (ii) as a cautionary note.
Figure 3 illustrates the effect of varying the resolution width $`\mathrm{\Delta }_T^{}`$. We fit the data by minimising the $`\chi ^2`$ function
$$\chi ^2=\sum _{i=1}^{18}\left[\frac{N_i^{\text{exp}}-0.5fN_i^{\text{th}}}{\sigma (N_i^{\text{exp}})}\right]^2+\left[\frac{f-1}{\sigma (f)}\right]^2,$$
(9)
where $`N_i^{\text{exp}}`$ is the measured recoil electron flux in energy bin $`i`$, $`N_i^{\text{th}}`$ is the theoretical no-oscillation expectation for same, $`0.5`$ represents an energy-independent $`50\%`$ suppression due (for instance) to averaged maximal $`\nu _e\nu _s`$ oscillations, and $`f`$ is a boron neutrino flux normalisation parameter to take account of the theoretical uncertainty $`\sigma (f)\simeq 0.19`$ in this quantity. We include the two low energy bins, 5.5–6.0 MeV and 6.0–6.5 MeV, as well as the 16 other energy bins used by SuperKamiokande. Seasonal and daily averages are taken. These data are fitted by varying $`f`$ and the quantity $`\mathrm{\Delta }`$ defined through
$$\mathrm{\Delta }_T^{}=(\mathrm{\Delta }\text{MeV})\sqrt{\frac{T^{}}{\text{MeV}}}.$$
(10)
We find the minimum at $`f=0.90`$ and $`\mathrm{\Delta }\simeq 0.51`$, with $`\chi _{\text{min}}^2\simeq 20`$ for $`19-2=17`$ degrees of freedom, which is a good fit. The spectral anomaly then becomes an artifact of the finite resolution of the detector. SuperKamiokande quote a central value for $`\mathrm{\Delta }`$ of about $`0.47`$. We can see from Fig.3 that a 5–10% systematic shift upward of $`\mathrm{\Delta }`$ would remove the anomaly. It is interesting to compare this result with Fig.1 from Ref. which shows the effect of an unidentified systematic error in the energy calibration of the detector. Systematic errors in either or both of the energy resolution and energy calibration can in principle explain the spectral anomaly. Having sounded this cautionary note, we now proceed on the basis that SuperKamiokande have in fact correctly determined their energy resolution and calibration capabilities, and that new physics is behind the spectral anomaly.
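The structure of the fit defined by Eq. (9) is summarised in the sketch below: for a given model spectrum (which depends implicitly on the resolution parameter $`\mathrm{\Delta }`$ through the smearing), the normalisation $`f`$ is varied to minimise $`\chi ^2`$. The data and model arrays used here are placeholders, not the SuperKamiokande measurements.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def chi2(f, data, sigma, model, sigma_f=0.19):
    """Chi-squared of Eq. (9): 18 spectral bins plus a penalty term tying the
    boron-flux normalisation f to its theoretical uncertainty sigma(f)."""
    spectral = np.sum(((data - 0.5 * f * model) / sigma) ** 2)
    penalty = ((f - 1.0) / sigma_f) ** 2
    return spectral + penalty

# Placeholder 18-bin "data" and no-oscillation "model" (illustrative only).
rng = np.random.default_rng(0)
model = np.linspace(2.0, 0.2, 18)
data = 0.5 * 0.9 * model * (1.0 + 0.05 * rng.standard_normal(18))
sigma = 0.05 * model

best = minimize_scalar(chi2, bounds=(0.5, 1.5), method="bounded",
                       args=(data, sigma, model))
print(best.x, best.fun)   # best-fit f and the corresponding chi^2 minimum
```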
It is easy to understand how the spectral anomaly can be explained in a very amusing way through maximal $`\nu _e\nu _s`$ oscillations. It could be that $`\mathrm{\Delta }m^2`$ is such that the transition energy between the averaged oscillation regime and the just-so regime is about 13 MeV! The spectral anomaly might represent the onset of just-so behaviour. This would of course be a great piece of luck, but we probably should not disregard the possibility just out of world-weary pessimism. To determine the $`\mathrm{\Delta }m^2`$ value required to achieve this effect, the appropriate $`\chi ^2`$ function is minimised by varying $`\mathrm{\Delta }m^2`$ and $`f`$. The appropriate $`\chi ^2`$ is similar to Eq.(9), except that $`0.5N_i^{\text{th}}`$ is replaced by a properly computed convolution integral. The $`\mathrm{\Delta }m^2`$ dependence of $`\chi ^2`$ is shown in Fig.4. There are deep local minima in the $`4\times 10^{-10}`$ eV<sup>2</sup> to $`10^{-9}`$ eV<sup>2</sup> region, which then trail off into a flat $`\chi ^2`$ as the averaged oscillation regime is entered. The minimum $`\chi ^2`$ value is about 17 at $`\mathrm{\Delta }m^2\simeq 6\times 10^{-10}`$ eV<sup>2</sup> (and $`f=0.92`$), which for 17 degrees of freedom represents an excellent fit. This $`\mathrm{\Delta }m^2`$ can in principle be further probed through the anomalous seasonal effect. Preliminary calculations employing the $`\mathrm{\Delta }m^2`$ which minimises $`\chi ^2`$ show that an effect of about $`6\%`$ in magnitude can be obtained for the near-far asymmetry for certain of the higher energy bins of SuperKamiokande. We hope to return to an analysis of the seasonal effect in a later paper.
## IV Conclusion
Maximal $`\nu _e\nu _s`$ oscillations have been proposed as a solution to the solar neutrino problem. This idea can be well-motivated by the mirror matter hypothesis, or by the pseudo-Dirac neutrino idea. In this paper we have demonstrated the following points:
1. Across the allowed $`\mathrm{\Delta }m^2`$ spectrum, maximal $`\nu _e\nu _s`$ oscillations lead to three qualitatively different behaviours for the $`\nu _e`$ survival probability: just-so, MSW or approximately energy independent. These occur in the $`\mathrm{\Delta }m^2/\text{eV}^2`$ ranges $`10^{-3}>\text{energy independent}>\text{few}\times 10^{-5}>\text{MSW}>\text{few}\times 10^{-8}>\text{energy independent}>10^{-9}>\text{just-so}>\text{few}\times 10^{-10}`$. The day-night asymmetry observable provides for an experimental probe in the MSW regime. The recoil electron energy spectrum and the anomalous seasonal effect likewise provide a probe in the just-so regime.
2. Day-night asymmetry data rule out the range $`2\times 10^{-7}`$–$`8\times 10^{-6}`$ eV<sup>2</sup> for $`\mathrm{\Delta }m^2`$ at the $`2\sigma `$ level. A positive measurement of the day-night asymmetry would pin down the preferred value to two possibilities on either side of the disfavoured region, and the energy dependence of the night-time rate could in principle resolve the ambiguity.
3. The SuperKamiokande spectral anomaly can be explained in this scenario if $`\mathrm{\Delta }m^2`$ takes any of the three or four values corresponding to the local minima in Fig.4.
4. We have shown that increasing the SuperKamiokande resolution width by 5–10% would also explain the spectral data.
Finally, the $`\mathrm{\Delta }m^2`$ region $`10^{-3}`$–$`10^{-5}`$ eV<sup>2</sup> can be probed through the $`\overline{\nu }_e`$ disappearance experiment KAMLAND as well as through the atmospheric neutrino experiments. Note from the above that the region $`\text{few}\times 10^{-8}`$–$`10^{-9}`$ eV<sup>2</sup> is without a smoking-gun signature. Of course, another important test of the scenario for the entire $`\mathrm{\Delta }m^2`$ range of interest will be the neutral to charged current ratio at SNO: they should obtain the no-oscillation value. In addition, the BOREXINO and iodine experiments will double-check the greater than $`50\%`$ suppression result distilled from the chlorine experiment.
## V Acknowledgements
RMC is supported by the Commonwealth of Australia and the University of Melbourne. RF and RRV are supported by the Australian Research Council.
## VI Figure Captions
Figure 1: Night-day asymmetry versus $`\mathrm{\Delta }m^2/\text{eV}^2`$ (solid line) for maximal $`\nu _e\nu _s`$ oscillations. Also shown is the night-day asymmetry for maximal $`\nu _e\nu _\mu `$ oscillations for comparison (dashed-dotted line). The horizontal dashed line is the $`2\sigma `$ upper limit.
Figure 2: Predicted recoil electron energy spectrum normalized to the no oscillation expectation. Figure 2a (2b) is for $`\mathrm{\Delta }m^2/\text{eV}^2=10^7`$ ($`10^5`$). The solid (dotted) line corresponds to the ratio of night-time (day-time) flux per unit energy to the no-oscillation expectation and the dot-dashed line is the average.
Figure 3: The effect of varying the energy resolution, parameterized by $`\mathrm{\Delta }`$ (see text). The figure shows that the spectral anomaly can be explained if $`\mathrm{\Delta }\simeq 0.50\text{MeV}`$ instead of the assumed SuperKamiokande value of $`\mathrm{\Delta }\simeq 0.47\text{MeV}`$.
Figure 4: Fit to the SuperKamiokande recoil electron energy spectrum using maximal $`\nu _e\nu _s`$ oscillations.
# The Spectral Energy Distributions of Low-Luminosity Active Galactic Nuclei
## 1 Introduction
The spectral energy distributions (SEDs) of active galactic nuclei (AGNs) carry important information on the physical processes of the accretion process. Many aspects of the AGN phenomenon, including the SED, have been successfully interpreted within the accretion disk framework, specifically one in which the disk is assumed to be optically thick and physically thin (Blandford & Rees 1992, and references therein). Previous work, however, has concentrated nearly exclusively on high-luminosity AGNs — mainly bright Seyfert nuclei and QSOs. Very little data exist on the spectral properties of low-luminosity AGNs, such as those commonly found in nearby galaxies (Ho, Filippenko, & Sargent 1997a), because they are difficult to study. Yet, knowledge of the SEDs of AGNs in the low-luminosity regime is fundamental to understanding the physical nature of these objects and their relation to their more luminous counterparts.
The intrinsic weakness of low-luminosity nuclei poses practical challenges on obtaining the data. Aside from the issue of sensitivity, often the main limitation stems from insufficient angular resolution necessary to separate the faint central source from the galaxy background. At virtually all wavelengths of interest the core emission constitutes only a small fraction of the total light, and hence contamination from the host galaxy is severe. To date, only a handful of objects have been adequately studied with multiwavelength observations, and even for these the wavelength coverage is sometimes highly incomplete and the data only approximate (NGC 1316 and NGC 3998: Fabbiano, Fassnacht, & Trinchieri 1994; Sgr A: Narayan, Yi, & Mahadevan 1995; M81: Ho, Filippenko, & Sargent 1996; NGC 4258: Lasota et al. 1996; M87: Reynolds et al. 1996; NGC 4594: Fabbiano & Juda 1998, Nicholson et al. 1998). Nonetheless, these studies already suggest that the SEDs of low-luminosity AGNs look markedly different compared to the SEDs normally seen in luminous AGNs. The spectral peculiarities seen in low-luminosity AGNs hint at possibly significant departures in these objects from the standard AGN accretion disk model.
This paper presents SEDs for a small sample of low-luminosity AGNs for which reasonably secure black hole masses have been determined by dynamical measurements. Data are presented for a total of seven objects (see Table 1 for a summary), four for the first time (NGC 4261, NGC 4579, NGC 6251, and M84); the SEDs for the remaining three (M81, M87, and NGC 4594) have been substantially updated and improved compared to previous publications. The sample consists of four objects spectroscopically classified as low-ionization nuclear emission-line regions or LINERs (Heckman 1980; see Ho 1999a for a review), one Seyfert, and two objects that border on the definition of LINERs and Seyferts. It is worth remarking that, although the sample is admittedly small and heterogeneous, it contains every known low-luminosity object that has both a black hole mass determination and sufficient multiwavelength data to establish the SED. The only two not included, namely Sgr A and NGC 4258, have already been extensively discussed in the literature. Because of the many complexities (§ 2) involved in defining the nuclear SEDs of these weak sources, because so few data of this kind exist in the literature, and because a companion paper (Ho et al. 1999b) relies critically on the details presented here, I devote considerable attention in § 2 to describing the data selection for each object. Section 3 summarizes the most noteworthy trends observed collectively in the sample, and section 4 discusses possible complications introduced by dust extinction; readers not interested in the details of each object may wish to skip directly to sections 3 and 4 for an overview of the main results. Ho et al. (1999b) will present detailed modeling of these data based on accretion-disk calculations.
## 2 Compilation of the Data
As mentioned in § 1, measuring the weak signal from low-luminosity AGNs requires observations that minimize the contamination from the bright background of the host galaxy. High angular resolution, therefore, is indispensable at all wavelengths. This study takes the following strategy for data selection. (1) At radio wavelengths, only interferometric data will be used, preferably measured through VLBI techniques. The radio jet component, if present, potentially can contaminate the core emission even at sub-arcsecond resolution. (2) Data in the infrared (IR) window are at the moment most poorly constrained. There are no useful far-IR data because nearly all the existing measurements have been obtained using the Infrared Astronomical Satellite, which has a beam $`\gtrsim `$ 1<sup>′</sup>. Ground-based measurements in the mid-IR (10–20 $`\mu `$m) and near-IR (1–3 $`\mu `$m) are widely available in the literature, but these data should be regarded strictly as upper limits because of the relatively large apertures employed ($`\sim `$3<sup>′′</sup>–10<sup>′′</sup>). The energy distribution of normal stellar populations implies that the contamination from starlight in the near-IR should greatly exceed that in the mid-IR; thus, the mid-IR points should be more representative of the direct emission from the nucleus, although emission from hot ($`\sim `$100 K) dust grains can substantially boost the luminosity in this band (e.g., Willner et al. 1985). (3) All of the optical and ultraviolet (UV) data are derived from observations made with the Hubble Space Telescope (HST). Photometry points generally pertain to an aperture of $`\sim `$0$`\stackrel{}{\mathrm{.}}`$1, while apertures $`\lesssim `$ 1<sup>′′</sup> have been used for spectroscopic measurements. (4) Finally, I quote only X-ray fluxes that originate from a nuclear source known to be compact at soft X-ray energies (0.5–2.5 keV). The criterion for compactness, unless otherwise noted, is currently limited to the resolution of the High-Resolution Imager (HRI), approximately 5<sup>′′</sup>, on either the Einstein or the ROSAT satellite. Several objects have hard X-ray (2–10 keV) spectra acquired with ASCA, which has an angular resolution of $`\sim `$5<sup>′</sup>; HRI images show that the soft X-ray emission is compact in all these cases.
The following subsections describe the data chosen for each object. Additional details can be found in Tables 2–8, and the individual SEDs are shown in Figures 1–7.
### 2.1 NGC 3031 (M81)
The SED presented in Ho et al. (1996) has been updated (Table 2; Fig. 1) with new optical and UV photometry points from HST, an additional high-frequency radio point, and a more up-to-date hard X-ray spectrum from ASCA. The radio continuum between $`\sim `$1 and 15 GHz shows an inverted spectrum ($`\alpha \simeq `$ –0.3 to –0.6, where $`F_\nu \propto \nu ^{-\alpha }`$). It is difficult to assess the reality of the apparent slight turnover between 5 and 15 GHz because the two data points were not observed simultaneously; the radio core of M81 is highly variable on many timescales (Ho et al. 1999a). The spectroscopically measured optical–UV slope of the featureless continuum is surprisingly steep. Even after accounting for an estimated reddening of $`E(B-V)`$ = 0.094 mag, the slope is still $`\sim `$2 (Ho et al. 1996). Devereux, Ford, & Jacoby (1997) recently obtained an HST WFPC2 image of the nucleus of M81 using a filter centered near 1500 Å. They obtained a flux $`\sim `$4 times higher than that reported in the Faint Object Spectrograph (FOS) UV spectrum of Ho et al. (1996). The two new points based on optical images (Bower et al. 1996; Devereux et al. 1997), on the other hand, are consistent with the spectroscopic measurements. Maoz et al. (1998) have reanalyzed the FOS UV data and concluded that the nucleus may have been miscentered in the aperture during the observations. This may account for the discrepancy between the FOS and WFPC2 fluxes, although it cannot be excluded that the nucleus varied in the UV between the two observing epochs. Adopting the WFPC2 UV flux, the optical-UV slope is now $`\alpha \simeq `$ 1.3–1.4. The nucleus of M81 emits a nonthermal continuum in the hard X-ray band. The spectrum between 2 and 10 keV is well described by a single power law with a slope of $`\alpha `$ = 0.85$`\pm `$0.04 (Ishisaki et al. 1996; see also Serlemitsos, Ptak, & Yaqoob 1996), similar to that seen in luminous Seyfert 1 nuclei (Turner & Pounds 1989; Nandra et al. 1997); the luminosity in this case, however, is just 2$`\times `$10<sup>40</sup> ergs s<sup>-1</sup>. It is noteworthy that M81 emits substantially more energy in the X-rays relative to the UV than in luminous AGNs. The two-point spectral index between 2500 Å and 2 keV, $`\alpha _{\mathrm{ox}}`$, is 1.08, smaller than the average value in quasars (1.4) or in luminous Seyfert 1 nuclei (1.2) (Mushotzky & Wandel 1989).
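Because two-point spectral indices such as $`\alpha _{\mathrm{ox}}`$ recur throughout this paper, a minimal sketch of the computation is given below, using the sign convention $`F_\nu \propto \nu ^{-\alpha }`$ adopted here; the input flux densities are hypothetical values for illustration, not the M81 measurements.

```python
import math

def two_point_index(fnu_1, nu_1, fnu_2, nu_2):
    """Two-point spectral index alpha between frequencies nu_1 and nu_2,
    with the convention F_nu ~ nu^{-alpha}."""
    return -math.log10(fnu_2 / fnu_1) / math.log10(nu_2 / nu_1)

# alpha_ox: rest-frame 2500 A to 2 keV.
NU_2500A = 3.0e18 / 2500.0          # Hz
NU_2KEV = 2.0e3 / 4.1357e-15        # Hz (E/h, with h in eV s)

# Hypothetical flux densities in erg s^-1 cm^-2 Hz^-1 (illustrative only).
f_2500 = 1.0e-26
f_2kev = 1.5e-29
print(two_point_index(f_2500, NU_2500A, f_2kev, NU_2KEV))   # ~1.1
```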
### 2.2 NGC 4261 (3C 270)
Figure 2 (Table 3) displays the SED of the nucleus. The radio core is resolved even on a scale of several milliarcseconds. No obvious point source can be seen in the 1.6 and 8.4 GHz maps of Jones & Wehrle (1997), so I take as a reasonable approximation the peak intensities at these frequencies. The resulting radio spectrum has $`\alpha `$ = 0. The three optical points derived from WFPC2 images (Ferrarese, Ford, & Jaffe 1996) define a slope of $`\alpha \simeq `$ 2.2. The FOS spectrum of Ferrarese et al., taken with a similar-sized aperture (0$`\stackrel{}{\mathrm{.}}`$1), suggests that most of the continuum is nonstellar. The very steep optical continuum probably results mainly from reddening by dust internal to NGC 4261 and most likely associated with the nuclear disk. Patchy extinction can be seen throughout the disk and in the immediate vicinity of the nucleus (Ferrarese et al. 1996). Dust is also the most likely culprit for the complete absence of UV emission: the upper limit to the flux at 2300 Å is 10<sup>-16</sup> ergs s<sup>-1</sup> cm<sup>-2</sup> (Zirbel & Baum 1998), a factor of 80 lower than expected from a simple power-law extrapolation of the observed optical continuum. Estimating the intrinsic optical and UV luminosity of the nucleus, however, is difficult without prior knowledge of the absorbing column, the source geometry, and the extinction law (see § 4). The extinction due to the Galaxy is small. The foreground hydrogen column density is only $`N_\mathrm{H}`$ = 1.6$`\times `$10<sup>20</sup> cm<sup>-2</sup>, which, for the conversions $`E(B-V)=N_\mathrm{H}`$/(5.8$`\times `$10<sup>21</sup> cm<sup>-2</sup>) mag and $`A_V/E(B-V)`$ = 3.1 (Bohlin, Savage, & Drake 1978), translates to $`A_V`$ = 0.084 mag (Table 1). A rough estimate of the magnitude of the internal extinction can be obtained from the Balmer decrement observed through a larger aperture (2<sup>′′</sup>$`\times `$4<sup>′′</sup>) by Ho et al. (1997b), H$`\alpha `$/H$`\beta `$ = 4.9. (Ferrarese et al. 1996 quote a significantly larger value of H$`\alpha `$/H$`\beta `$ = 9.7; comparison of their relative line intensities with those of Ho et al. 1997b suggests that they have underestimated the intensity of H$`\beta `$ by a factor of $`\sim `$2. Ferrarese et al. could not remove the underlying stellar absorption lines from their spectra because of the limited signal-to-noise ratio of their data, an effect that preferentially biases the Balmer line intensities, H$`\beta `$ more so than H$`\alpha `$, to low values.) After removing the Galactic contribution using the extinction law of Cardelli, Clayton, & Mathis (1989), H$`\alpha `$/H$`\beta `$(internal) = 4.8; assuming the Galactic extinction law and a Case B intrinsic H$`\alpha `$/H$`\beta `$ of 3.1, which is thought to be appropriate for the conditions in the narrow-line regions of AGNs (e.g., Gaskell & Ferland 1984), $`A_V`$(internal) = 1.4 mag. This value is much higher than that inferred from the soft X-ray observations of Worrall & Birkinshaw (1994), who placed a limit of $`N_\mathrm{H}<`$ 3.9$`\times `$10<sup>20</sup> cm<sup>-2</sup>, or $`A_V<`$ 0.2 mag for a normal dust-to-gas ratio.
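The extinction bookkeeping used here and in the following subsections can be collected into a short sketch. The $`N_\mathrm{H}`$-to-$`E(B-V)`$ conversion and $`A_V/E(B-V)`$ = 3.1 follow the relations just quoted; the Cardelli-curve values k(Hα) ≈ 2.53 and k(Hβ) ≈ 3.61 adopted below are assumed round numbers, so the outputs should be read as approximate.

```python
import math

def av_from_column(n_h):
    """A_V from a hydrogen column density (cm^-2), assuming a Galactic
    dust-to-gas ratio: E(B-V) = N_H / 5.8e21, A_V = 3.1 E(B-V)."""
    return 3.1 * n_h / 5.8e21

def av_from_balmer(ratio_obs, ratio_int=3.1, k_halpha=2.53, k_hbeta=3.61):
    """A_V from an observed Halpha/Hbeta ratio, assuming a Case-B intrinsic
    ratio of 3.1 and (assumed) Cardelli et al. (1989) curve coefficients."""
    ebv = 2.5 * math.log10(ratio_obs / ratio_int) / (k_hbeta - k_halpha)
    return 3.1 * ebv

print(av_from_column(1.6e20))   # Galactic column toward NGC 4261 -> ~0.085 mag
print(av_from_balmer(4.8))      # internal decrement of 4.8 -> ~1.4 mag
```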
Worral & Birkinshaw (1994) observed NGC 4261 with the ROSAT Position-Sensitive Proportional Counter (PSPC). Despite the coarse angular resolution of the PSPC ($``$25<sup>′′</sup>), they found that $``$50% of the flux comes from a spatially unresolved component whose spectrum is well fitted by a power-law function. The nonthermal component emits $`L`$(0.2–1.9 keV) = 4.7$`\times `$10<sup>40</sup> ergs s<sup>-1</sup>. Inspection of an archival ROSAT HRI image confirms that most of the soft X-ray emission indeed does stem from a compact component. The uncertain impact of dust extinction on the optical-UV continuum renders estimates of $`\alpha _{\mathrm{ox}}`$ highly unreliable. If $`A_V`$(internal) = 0 mag (correcting only for the Galactic contribution), the optical slope is $`\alpha `$ = 2.06, which, when extrapolated to 2500 Å yields $`\alpha _{\mathrm{ox}}`$ = 0.44. Choosing $`A_V`$(internal) = 1.4 mag and the Galactic extinction law, the optical slope becomes $`\alpha `$ = 0.67, and $`\alpha _{\mathrm{ox}}`$ = 0.84. These two extremes probably bracket the true value.
### 2.3 NGC 4374 (M84, 3C 272.1)
Jones, Terzian, & Sramek (1981) mapped the nucleus with a resolution of 4 mas at 1.6 GHz. The morphology of the radio core can be modeled by an elliptical Gaussian with dimensions 1.4 mas $`\times `$ 6.0 mas, presumably because a parsec-scale jet still contributes to the emission on these scales. The radio flux from the accretion flow, therefore, ought to be less than indicated in Figure 3 (Table 4). At optical wavelengths the nucleus appears unresolved in WFPC2 images. Bower et al. (1997) measure $`V`$ = 19.9 mag and $`(V-I)`$ = 1.6 mag. The stellar contribution to the pointlike nucleus is unclear, but judging from the complete absence of stellar absorption features between 6300 Å and 6800 Å in spectra acquired through a 0$`\stackrel{}{\mathrm{.}}`$2 slit (Bower et al. 1998), most of the light is probably nonstellar. The ground-based spectrum in Ho et al. (1995), for instance, shows a very noticeable (equivalent width $`\sim `$1 Å) absorption line at $`\lambda `$ = 6495 Å due to Ca I+Fe I. The $`(V-I)`$ color of the nucleus, which corresponds to $`\alpha `$ = 3.5, again suggests substantial reddening by dust, as does the nondetection of UV emission (Zirbel & Baum 1998); the images in Bower et al. (1997), in fact, clearly show dust patches projected in front of the nucleus. These authors used the $`(V-I)`$ color map and an assumed intrinsic $`(V-I)`$ color for ellipticals to estimate a mean internal extinction of $`A_V`$ = 0.54 mag within a relatively large region of 14$`\stackrel{}{\mathrm{.}}`$5$`\times `$8$`\stackrel{}{\mathrm{.}}`$4. Close to the nucleus the extinction could be higher than this value. The intrinsic Balmer decrement measured through a 2<sup>′′</sup>$`\times `$4<sup>′′</sup> aperture around the nucleus, for instance, is H$`\alpha `$/H$`\beta `$ = 4.7 (Ho et al. 1997b), or $`A_V`$ = 1.3 mag. The optical and UV points were dereddened using this $`A_V`$.
The Einstein HRI image of M84 reveals a relatively complex central morphology with significant extended emission (Fabbiano, Kim, & Trinchieri 1992); the extended structure is also clearly evident in an archival ROSAT HRI image. The 0.5–4 keV luminosity given by Fabbiano et al. (1992), 5.3$`\times `$10<sup>40</sup> ergs s<sup>-1</sup> (assuming a “Raymond-Smith” thermal spectrum with $`kT`$ = 1 keV and a line-of-sight $`N_\mathrm{H}`$ = 1.7$`\times `$10<sup>20</sup> cm<sup>-2</sup>), should be taken only as an upper limit to the luminosity of a pointlike nucleus. Extrapolating the $`\alpha `$ 3.1 optical slope to the UV gives an estimate of $`\alpha _{\mathrm{ox}}`$ 0.75.
### 2.4 NGC 4486 (M87, 3C 274)
Reynolds et al. (1996) presented a sparse SED for the nucleus. Figure 4 (Table 5) gives an updated version that includes additional data from the IR to UV region. As in the objects discusses so far, the radio spectrum is either flat or slightly inverted; the VLBI points (filled symbols) have a mean spectral index $`<\alpha >\mathrm{\hspace{0.17em}0.1}`$. The apparent change in the spectral index from one point to another reflects the nonuniform resolution of the four experiments, which results in varying degrees of contamination from the jet, the nonsimultaneity of the observations, or both. It is instructive to note that the jet in this case introduces very substantial contamination to the radio core emission even on a scale of $``$1<sup>′′</sup>. The open symbols show the three VLA points from Biretta, Stern, & Harris (1991); clearly they lie systematically higher by about a factor of 10 compared to the VLBI flux densities. The optical (6800 Å) to UV (1200 Å) spectrum, observed nearly simultaneously with the HST/FOS through the same (0$`\stackrel{}{\mathrm{.}}`$2) aperture, traces a smooth, continuous curve that can be well fitted with a double power law ($`\alpha `$ = 1.75 for $`\lambda `$$`\genfrac{}{}{0pt}{}{_<}{^{}}`$ 4500 Å, $`\alpha `$ = 1.41 for $`\lambda `$$`\genfrac{}{}{0pt}{}{_>}{^{}}`$ 4500 Å; Tsvetanov et al. 1998). The two $`I`$-band points derived from archival HST images agree very well with the FOS spectrum (within $`\pm `$10%). On the other hand, the point taken from the Faint Object Camera (FOC) measurement of Maoz et al. (1995) is about 50% higher than predicted from the FOS spectrum; variability is a plausible explanation for this discrepancy. The fluxes in the $`J`$ (1.25$`\mu `$m) and $`K`$ (2.2$`\mu `$m) bands (Stiavelli, Peletier, & Carollo 1997), both acquired under sub-arcsecond seeing conditions, also appear to follow the extrapolated optical power law. Taken at face value, the 10$`\mu `$m point deviates quite strongly from the power law at shorter wavelengths, but the relatively large aperture of the observations (6<sup>′′</sup>) allows room for substantial contamination. The soft X-ray flux recorded by the Einstein HRI is $``$50% lower than that seen in the ROSAT HRI \[$`L`$(1 keV) = 5.4$`\times `$10<sup>40</sup> ergs s<sup>-1</sup>, assuming a power-law spectrum with $`\alpha `$ = 0.7 and a line-of-sight $`N_\mathrm{H}`$ = 2.5$`\times `$10<sup>20</sup> cm<sup>-2</sup>; see Reynolds et al. 1996\]. This level of long term X-ray variability is not uncommon in low-luminosity AGNs (Serlemitsos et al. 1996). An ASCA spectrum of M87 is available, but the nucleus could not be clearly detected in the relatively short exposure (Reynolds et al. 1996). Combining the UV spectrum with the average of the two HRI fluxes, $`\alpha _{\mathrm{ox}}`$ = 1.06.
M87 belongs to the minority of LINERs ($``$25%; Maoz et al. 1995; Barth et al. 1998) that shows prominent UV emission. The optical-UV continuum of M87 evidently suffers little or no internal extinction. The best-fitting double power-law model shown in Figure 4 requires only $`A_V`$ = 0.12 mag for a Milky Way extinction law (Tsvetanov et al. 1998), very similar to the Galactic contribution of $`A_V`$ = 0.078 mag. Note that the Balmer decrement measured by Ho et al. (1997b) in a 2<sup>′′</sup>$`\times `$4<sup>′′</sup> aperture yields a substantially larger $`A_V`$ of 1.0 mag. This example underscores the dangers of comparing observations taken with markedly different apertures.
### 2.5 NGC 4579 (M58)
The 2.3-GHz and 8.4-GHz interferometric observations of Sadler et al. (1995), both made with a beam of 0$`\stackrel{}{\mathrm{.}}`$03, define a rising radio spectrum with $`\alpha `$ = –0.19 (Fig. 5; Table 6). The HST/FOS UV spectrum of Barth et al. (1996) extends from $`\sim `$3300 Å to 1150 Å; excluding a broad feature near 3000 Å due to Balmer continuum and Fe II emission, the spectrum can be described by a nearly featureless power-law function, $`F_\nu \propto \nu ^{-1}`$. There is tentative evidence that the nucleus varies in the UV. The 2300 Å flux in the FOS spectrum is a factor of 3 lower than that reported by Maoz et al. (1995) based on an FOC image. In either case, the UV output, again, is quite low with respect to the X-rays: $`\alpha _{\mathrm{ox}}`$ = 0.78 and 1.02 for the FOS and FOC UV flux, respectively. A compact, nonthermal component dominates the ASCA spectrum of NGC 4579. Terashima et al. (1998) find that the 2–10 keV continuum can be modeled as a moderately absorbed (intrinsic $`N_\mathrm{H}`$ = 4$`\times `$10<sup>20</sup> cm<sup>-2</sup>) power law with $`\alpha `$ = 0.72 and luminosity 1.5$`\times `$10<sup>41</sup> ergs s<sup>-1</sup>. The value of $`N_\mathrm{H}`$ derived from the X-ray spectrum roughly matches the intrinsic extinction estimated by Barth et al. (1996) based on the shape of the observed UV continuum.
### 2.6 NGC 4594 (M104, Sombrero)
Fabbiano & Juda (1997) presented an approximate SED for the nucleus of NGC 4594. The optical and UV points used in that study, however, were highly uncertain. The $`B`$-band data, based on the pre-refurbishment FOC image of Crane et al. (1993), may have been underestimated because of the nonlinear behavior of the FOC (see Crane & Vernet 1997). On the other hand, the UV flux taken with the very large aperture (10<sup>′′</sup>$`\times `$20<sup>′′</sup>) of the International Ultraviolet Explorer is undoubtedly far too high. Here the SED of NGC 4594 is reassessed, paying close attention to aperture effects in the difficult optical-UV region; a preliminary version of these data appears in Nicholson et al. (1998).
The high-resolution data between 0.6 and 15 GHz define a compact flat-spectrum core with $`\alpha `$ ranging from 0.2 to –0.4 (Fig. 6; Table 7). The nonstellar component of the optical and UV continuum, on the other hand, is somewhat difficult to specify. The optical FOS spectrum of Kormendy et al. (1997), taken through a 0$`\stackrel{}{\mathrm{.}}`$21 aperture, has a very red continuum ($`\alpha \mathrm{\hspace{0.17em}3.5}`$). The steep optical slope is most likely caused by an increasing contamination of starlight toward longer wavelengths (even within such a small aperture) rather than by dust reddening of a purely featureless continuum. The dilution of the depth of the stellar absorption lines indicated to Kormendy et al. that approximately 50% of the light at $`B`$ within their 0$`\stackrel{}{\mathrm{.}}`$21 aperture comes from the nonstellar continuum. This roughly agrees with the strength of the point source extracted from an archival WFPC2 $`V`$-band image (Table 7). The point-source luminosity similarly measured from an $`I`$-band image, however, falls significantly below (by a factor $``$4) the extrapolated FOS spectrum if one assumes the degree of stellar contamination to be constant between $`B`$ and $`I`$. This suggests that the nonstellar component contributes much less to the red end of the FOS spectrum ($``$6800 Å), probably on the order of 25% or so. The $`I`$-band flux is therefore adopted for the red end of the FOS spectrum. If one considers the two photometry points to be a reliable measure of the nonstellar component, the optical slope decreases to $`\alpha `$ = 1.3. The UV spectrum, taken with an even larger aperture of 0$`\stackrel{}{\mathrm{.}}`$86, likewise suffers from wavelength-dependent starlight contamination (Nicholson et al. 1998). I excluded from the SED the portion of the spectrum longward of $`\lambda `$= 2200 Å, where incipient stellar absorption features and a sharply rising continuum suggest a sizable contribution from stars. The final adopted optical and UV points (shown as filled symbols) are well joined by a power law with $`\alpha `$ 1.5.
The ASCA 2–10 keV spectrum is predominantly nonthermal. Nicholson et al. (1998) obtained a best fit with a power law with $`\alpha `$ = 0.63 and $`L`$(2–10 keV) = 1.1$`\times `$10<sup>41</sup> ergs s<sup>-1</sup>. The morphology of the central region as seen in the ROSAT HRI image (Fabbiano & Juda 1997) indicates that most of the hard X-ray emission is likely to originate from the nucleus. The X-ray band energetically dominates over the UV band, with $`\alpha _{\mathrm{ox}}`$ = 0.89. The average extinction inferred from the Balmer decrement on arcsecond scales appears modest ($`A_V`$ = 0.25 mag; Ho et al. 1997b) and roughly agrees with the intrinsic hydrogen column derived from the ASCA spectrum ($`N_\mathrm{H}`$ = 5.3$`\times `$10<sup>20</sup> cm<sup>-2</sup> or $`A_V`$ = 0.28 mag).
### 2.7 NGC 6251
Good quality VLBI maps are available to isolate the core radio emission (Cohen & Readhead 1979; Jones et al. 1986), but the nonsimultaneous nature of the observations makes definition of the intrinsic radio spectrum ambiguous (Fig. 7; Table 8). The two low-frequency points, for instance, yield a much steeper spectral index than the two high-frequency points ($`\alpha `$ = 0.4 compared to $`\alpha `$ = –1.2). The only conclusion that can be drawn at this stage is that the radio spectrum, as in all the sources studied here, is consistent with that of a self-absorbed synchrotron source. Four HST photometry points have been measured for the pointlike nucleus, corresponding roughly to the $`U,B,V`$, and $`I`$ bands (Crane & Vernet 1997). Taking into consideration the nonlinear effects of the FOC that affected the $`U`$ and $`B`$ fluxes, the optical to near-UV slope is consistent with $`\alpha \simeq `$ 1.7. Combining the extrapolated UV flux with the soft X-ray (PSPC) flux reported by Worrall & Birkinshaw (1994), $`\alpha _{\mathrm{ox}}`$ = 0.83. Birkinshaw & Worrall (1993) performed a detailed analysis of the PSPC data and concluded that nearly all ($`\sim `$90%) of the emission arises from a spatially unresolved, power-law component with a diameter $`\lesssim `$ 4<sup>′′</sup>. More detailed spectral information comes from Turner et al.’s (1997) statistical study of the ASCA spectra of a large sample of Seyfert 2 nuclei, which included NGC 6251. The best-fitting model found by Turner et al. requires a thermal (Raymond-Smith) plasma with a temperature of $`kT`$ = 0.85 keV added to a power-law component characterized by $`\alpha =1.11_{-0.19}^{+0.16}`$, $`L`$(2–10 keV) = 1.3$`\times `$10<sup>42</sup> ergs s<sup>-1</sup>, and an intrinsic $`N_\mathrm{H}`$ = 1.4$`\times `$10<sup>21</sup> cm<sup>-2</sup>. The absorbing column obtained from the X-rays predicts a sizable internal extinction of $`A_V`$ = 0.75 mag. The Balmer decrement given in Shuder & Osterbrock (1981) requires a much larger $`A_V\simeq `$ 5 mag, although the reported H$`\beta `$ intensity appears to be rather uncertain.
## 3 General Properties of the SEDs
Luminous AGNs generally display a fairly “universal” SED (see, e.g., Sanders et al. 1989 and Elvis et al. 1994). The continuum from the IR to the X-rays, roughly flat in log $`\nu F_\nu `$–log $`\nu `$ space, can be represented by an underlying power law with $`\alpha \simeq `$ 1 superposed with several distinct components, the most prominent of which is a broad UV excess. This so-called big blue bump is conventionally interpreted as thermal emission arising from an optically thick, geometrically thin accretion disk (Shields 1978; Malkan & Sargent 1982). The largest spectral difference among AGNs manifests itself in their brightness in the radio band — a factor of nearly $`10^2`$–$`10^3`$ in radio power distinguishes “radio-loud” from “radio-quiet” objects.
The broad-band spectra of the seven low-luminosity AGNs presented here share a number of common traits, and yet they differ markedly from the SEDs of luminous AGNs. To illustrate this point, Figure 8 compares the SEDs of the present sample with the median SED of radio-loud and radio-quiet luminous AGNs taken from Elvis et al. (1994); all the curves have been normalized at 1 keV. Several features of the low-luminosity SEDs are noteworthy:
(1) The optical-UV slope is quite steep. The power-law indices for the seven objects average $`\alpha \simeq `$ 1.8 (range 1.0–3.1; see Table 9), or 1.5 if the possibly highly reddened objects NGC 4261 and M84 are excluded, whereas in luminous AGNs $`\alpha \simeq `$ 0.5–1.0.
(2) The UV band is exceptionally dim relative to the optical and X-ray bands. There is no evidence for a big blue bump component in any of the objects. Indeed, the SED reaches a local minimum somewhere in the far-UV or extreme-UV region. This leads to the above-mentioned steep optical-UV slope and to systematically low values of $`\alpha _{\mathrm{ox}}`$. Table 9 gives $`<\alpha _{\mathrm{ox}}>\simeq `$ 0.9, to be compared with $`<\alpha _{\mathrm{ox}}>`$ = 1.2–1.4 for luminous Seyferts and QSOs (Mushotzky & Wandel 1989). In other words, the low-luminosity AGNs in the present sample, most of which are LINERs, are systematically “X-ray loud” (relative to the UV) compared to AGNs of higher luminosity. This modification of the SED from UV to X-ray energies leads to a harder ionizing spectrum, and it offers an explanation, at least in part, for the characteristically lower ionization state of the emission-line regions (Ho 1999b).
(3) The hard X-ray (2–10 keV) spectra, where available, are well fitted with a power-law function with $`\alpha \simeq `$ 0.6–0.8, very similar to spectra observed in high-luminosity sources.
(4) There is tentative evidence for a maximum in the SED at mid-IR or longer wavelengths. Despite the relatively large apertures employed in the mid-IR observations, the 10-$`\mu `$m point should be largely uncontaminated by starlight, although dust emission could contribute significantly in this band.
(5) The nuclei have radio spectra that are either flat or inverted. The radio brightness temperatures, where available, reach at least 10<sup>9</sup>–10<sup>10</sup> K. The radio cores, therefore, are self-absorbed synchrotron sources.
(6) One usually gauges the degree of radio dominance in AGNs by the ratio of the specific luminosities in the radio to the optical band. For instance, Kellermann et al. (1989) classify the radio strength of QSOs by the parameter $`R\equiv F_\nu (6\mathrm{c}\mathrm{m})/F_\nu (B)`$; radio-quiet members have $`R\simeq `$ 0.1–1, and radio-loud members are distinguished by $`R\gtrsim `$ 100. Adopting the same criterion, all of the objects, including the three spiral galaxies in the sample (M81, NGC 4579, and NGC 4594), qualify as being radio-loud. M81 has the smallest radio-to-optical ratio, but $`R`$ is still $`\sim `$50. This finding runs counter to the usual notion that only elliptical galaxies host radio-loud AGNs. Note that if the total (host galaxy + nucleus) $`B`$ luminosity had been used, which in these sources significantly exceeds the nuclear value alone, all the objects, with the possible exception of NGC 6251, would have been considered radio quiet.
(7) The sample of objects studied here is intrinsically extremely faint. To cast this statement in a more familiar context and to fully appreciate the enormous challenge in detecting these objects, we note that AGNs that occupy the upper end of the luminosity distribution, namely QSOs, typically have nonstellar continua with absolute magnitudes –30 $`<`$ $`M_B^{nuc}`$ $`<`$ –23. Classical Seyfert nuclei, such as those from the Markarian survey, are characterized by –23 $`<`$ $`M_B^{nuc}`$ $`<`$ –18. By contrast, the nonstellar nuclear magnitudes listed in Table 9 lie in the range –14.7 $`<`$ $`M_B`$ $`<`$ –8.9. Excluding the two extreme cases, $`<M_B^{nuc}>`$ = –11.5 mag. The host galaxies themselves, on the other hand, are luminous $`L^{*}`$ systems ($`<M_{B_T}^0>\simeq `$ –21.1 mag; Table 1), and hence the nuclei comprise merely $`\sim `$0.01% of the total optical light of the host galaxies.
(8) The bolometric luminosities of the sources (Table 9), obtained by integrating the power-law segments shown on Figure 8, range from $`L_{\mathrm{bol}}`$ = 2$`\times `$10<sup>41</sup> to 8$`\times `$10<sup>42</sup> ergs s<sup>-1</sup>, or $`10^{-6}`$–$`10^{-3}`$ times the Eddington rate for the black hole masses listed in Table 1. The bolometric luminosities would be lower if the mid-IR peak has been overestimated, but this would not affect the conclusion that the Eddington ratios are very low. The X-ray band, arbitrarily defined here as the region from 0.5 to 10 keV, carries 6%–33% of $`L_{\mathrm{bol}}`$.
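As an illustration of the bookkeeping behind item (8), the sketch below integrates a crude, sparsely sampled SED and forms an Eddington ratio. The SED vertices and black hole mass are placeholders, the integration treats $`\nu L_\nu `$ as piecewise linear in log $`\nu `$ (an approximation to the power-law-segment integration used in the text), and the Eddington luminosity assumes $`L_{\mathrm{Edd}}\simeq 1.26\times 10^{38}`$ ergs s<sup>-1</sup> per solar mass.

```python
import numpy as np

L_EDD_PER_MSUN = 1.26e38   # ergs s^-1 per solar mass

def bolometric_luminosity(log_nu, log_nuLnu):
    """Integrate L_nu over frequency for an SED given as (log10 nu, log10 nuLnu)
    vertices: L_bol = ln(10) * integral of nuLnu d(log10 nu), evaluated with
    the trapezoidal rule."""
    x = np.asarray(log_nu, dtype=float)
    y = 10.0 ** np.asarray(log_nuLnu, dtype=float)
    return np.log(10.0) * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

# Placeholder SED vertices (radio through X-rays); not any object in Table 9.
log_nu = [9.0, 13.0, 15.0, 18.0]
log_nuLnu = [39.0, 41.5, 40.8, 41.3]
L_bol = bolometric_luminosity(log_nu, log_nuLnu)
M_bh = 5.0e7                                   # assumed black hole mass, in solar masses
print(L_bol, L_bol / (L_EDD_PER_MSUN * M_bh))  # L_bol (ergs/s) and Eddington ratio
```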
## 4 Uncertainties due to Dust Extinction
Important aspects of the interpretation of the data depend on the intrinsic luminosity of the UV region, as this impacts conclusions concerning the optical-UV continuum slope, the presence of the big blue bump, and the strength and shape of the ionizing spectrum. The UV bandpass, unfortunately, is strongly affected by dust extinction. Although the effects of extinction can, in principle, be corrected, in practice such a procedure is fraught with a number of uncertainties, which I briefly mention here.
First, it is unclear how to measure the amount of dust affecting the UV continuum source. Two measures of extinction are traditionally used. One is based on comparison of the observed ratio of a pair of emission lines with their intrinsic ratio, after adopting a form for the extinction law. The intrinsic spectrum of the hydrogen recombination lines is well known for conditions prevailing in the low-density narrow-line regions of AGNs, and the derived optical extinction does not depend sensitively on the exact form of the extinction law. A second method uses the neutral hydrogen column density derived from X-ray spectra to calculate the absorbing column under the assumption that the dust-to-gas ratio and the grain properties are the same as those in the Galaxy. The latter assumptions, however, need not hold, especially in the vicinity of an AGN where grains can be destroyed by the harsh radiation field (Voit 1991). Low values of $`N_\mathrm{H}`$ derived from X-ray observations also do not necessarily imply low extinction because some dust can be associated with the ionized medium. In either case, a major source of uncertainty is whether the UV continuum source traverses through the same absorbing medium as probed by the X-rays or by the optical emission lines. The extinction obtained from the X-ray absorbing column in AGN spectra, for instance, often greatly exceeds the extinction inferred from the Balmer decrement (e.g., Reichert et al. 1985).
One can attempt to estimate directly the extinction of the continuum by searching for spectral signatures imprinted by the assumed extinction law. For a Galactic extinction law, the most noticeable feature in the mid-UV is a broad depression centered near 2200 Å. The UV spectra of AGNs, on the other hand, usually do not show this feature (McKee & Petrosian 1974; Neugebauer et al. 1980; Malkan & Oke 1983; Tripp, Bechtold, & Green 1994) despite independent evidence for dust from other indicators (e.g., emission-line ratios). This result implies one or more of the following possibilities: (1) the UV continuum and the line-emitting gas experience different amounts of extinction because the distribution of dust along the line-of-sight is patchy; (2) the strong radiation field destroys dust grains close to the central continuum source; and (3) the extinction law in extragalactic environments differs from that of the Galaxy, specifically in having a much weaker 2200-Å bump. Support for the latter possibility comes from observations of starburst galaxies (Calzetti, Kinney, & Storchi-Bergmann 1994) and of the Small Magellanic Cloud (SMC; Bouchet et al. 1985) whose UV spectra generally show a very weak, if any, 2200-Å bump. Gordon, Calzetti, & Witt (1997) argue that the absence of the 2200-Å feature in these systems is inherent in their extinction law and not merely a consequence of geometry effects. The properties of dust grains, specifically the small graphite grains responsible for the 2200-Å bump (Mathis 1994), evidently can be greatly affected by environmental conditions such as star-formation activity and/or metallicity. It is not difficult to imagine that the same may be the case in the vicinity of an AGN, where the extinction law can be modified dramatically by the intense radiation field (Laor & Draine 1993; Czerny et al. 1995).
Is the apparent faintness of the UV band intrinsic to the SEDs or is it instead simply a consequence of dust extinction? Luminous AGNs have an optical-UV slope of $`\alpha _{\mathrm{ou}}`$ 0.5–1.0, whereas in the present sample $`\alpha _{\mathrm{ou}}`$ 1.5–2.0. Let us estimate how much UV extinction is required to redden the “typical” AGN spectrum to that seen here. Following Maoz et al. (1998), I consider the extinction curve of the SMC (Bouchet et al. 1985), as parameterized by Pei (1992), and the empirical starburst “attenuation curve” of Calzetti et al. (1994), as parameterized by Calzetti (1997). Both curves lack a 2200-Å bump, but the Calzetti et al. curve is greyer. Adopting a fiducial optical and UV wavelength of 6500 Å and 2500 Å, respectively, reddening $`\alpha _{\mathrm{ou}}`$ by a slope of 1 requires $`A_V`$ = 0.6 mag and $`A_{2500}`$ = 1.5 mag for the SMC curve. The shallower Calzetti et al. curve gives $`A_V`$ = 1.1 mag and $`A_{2500}`$ = 2.0 mag. For a more extreme case of $`\mathrm{\Delta }\alpha _{\mathrm{ou}}`$ = 2, $`A_V`$ = 1.3 mag and $`A_{2500}`$ = 3.0 mag for the SMC curve, and $`A_V`$ = 2.3 mag and $`A_{2500}`$ = 4.1 mag for the Calzetti et al. curve. These cases are illustrated in Figure 9. Given the limited accuracy of the available data and the unknown form of the extinction law in AGNs, it is difficult to rule out dust reddening as the principal cause of the observed weakness of the UV band, although one can probably exclude Galactic-type dust grains from the absence of the predicted strong 2200-Å feature (thin solid lines in Fig. 9). In fact, the necessary amount of visual extinction, $`A_V`$ 1–2 mag, is not unreasonable compared to those obtained from the Balmer decrements (Table 1). Note that an average correction of $`A_{2500}`$ = 2.0 mag will change $`\alpha _{\mathrm{ox}}`$ from 0.9 to 1.2, as seen in luminous Seyfert 1s.
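The arithmetic behind these estimates can be packaged in a short sketch; the ratios $`A_{2500}/A_V`$ and $`A_{6500}/A_V`$ adopted below are rough values assumed for an SMC-like law, so the numbers should be read only as reproducing the approximate magnitudes of the estimates quoted above.

```python
import math

def extinction_for_slope_change(delta_alpha, lam_opt=6500.0, lam_uv=2500.0,
                                k_uv=2.5, k_opt=0.77):
    """A_V (and A_UV) needed to steepen the optical-UV two-point index by
    delta_alpha, for an extinction curve with A(lam_uv)/A_V = k_uv and
    A(lam_opt)/A_V = k_opt (assumed, roughly SMC-like values here)."""
    # Slope change: delta_alpha = 0.4 * (A_uv - A_opt) / log10(lam_opt / lam_uv)
    delta_A = delta_alpha * math.log10(lam_opt / lam_uv) / 0.4
    A_V = delta_A / (k_uv - k_opt)
    return A_V, k_uv * A_V

print(extinction_for_slope_change(1.0))   # ~(0.6, 1.5), cf. the SMC numbers above
print(extinction_for_slope_change(2.0))   # ~(1.2, 3.0)
```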
Nonetheless, in order to account for the systematic differences between the SEDs of the two luminosity classes, one would have to postulate that low-luminosity AGNs systematically have either greater levels of extinction or a different dust extinction law compared to high-luminosity AGNs. The first explanation is not supported by the data, at least insofar as Balmer decrements can be used to gauge the continuum extinction; the internal extinctions listed in Table 1 do not appear anomalous compared to those found in more luminous Seyfert galaxies (e.g., Shuder & Osterbrock 1981; Dahari & De Robertis 1988). The second hypothesis is ad hoc and difficult to test experimentally. Moreover, if exposure to the AGN environment indeed does modify the extinction law, one would naïvely expect the effect to be greatest on the most powerful AGNs, exactly the opposite of what is observed. Thus, although dust extinction can in principle be responsible for the spectral peculiarities seen in low-luminosity objects, the alternative view that the nonstandard SEDs are intrinsic to the sources is also tenable and probably more favorable.
## 5 Summary
Broad-band spectra are presented for seven low-luminosity AGNs. Although the sample is still limited and heterogeneous, this is the most extensive effort so far to systematically investigate the SEDs of these weak sources. Contamination by emission from the host galaxy can severely corrupt the faint signal from the nucleus, and careful attention has been paid to selecting only the highest-quality nuclear fluxes in assembling the SEDs. A comparative study of the gross features of the SEDs reveals that the SEDs of low-luminosity AGNs, as a group, differ strikingly from the standard energy spectrum of luminous Seyfert galaxies and QSOs. Most of the differences stem from the exceptional faintness of the UV continuum in the low-luminosity objects. The so-called big blue bump is very weak or altogether absent, thereby making the continuum shape between optical and UV wavelengths steeper than normal and the X-ray band energetically more important. Extinction by dust grains with properties different from those of Galactic composition may be responsible for suppressing the UV emission in low-luminosity AGNs, but an alternative, more intriguing possibility is that the absence of the big blue bump is a property intrinsic to these sources. Another notable property of the SEDs is the prominence of the compact, flat-spectrum radio component relative to the emission in other energy bands. All seven nuclei in the sample, including the three hosted by spiral galaxies, can be considered “radio-loud.” The AGNs investigated here are indeed very quiescent objects. Their bolometric luminosities range from 2$`\times `$10<sup>41</sup> to 8$`\times `$10<sup>42</sup> ergs s<sup>-1</sup>, which correspond to Eddington ratios of $`10^{-6}`$–$`10^{-3}`$.
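As a rough consistency check on the last statement, the Eddington luminosity scales as $`L_{\mathrm{Edd}}`$ ≈ 1.26$`\times `$10<sup>38</sup> (M<sub>BH</sub>/M<sub>⊙</sub>) ergs s<sup>-1</sup>; the sketch below simply evaluates the ratio for a hypothetical black hole mass, which is an assumed placeholder since the individual masses are not repeated in this summary.

```python
def eddington_ratio(L_bol, M_bh_msun):
    """Eddington ratio for a bolometric luminosity L_bol (erg/s) and a black
    hole mass in solar masses, using L_Edd ~ 1.26e38 (M/M_sun) erg/s."""
    L_edd = 1.26e38 * M_bh_msun
    return L_bol / L_edd

# Hypothetical example: L_bol = 2e41 erg/s around an assumed 1e8 M_sun black hole
print(eddington_ratio(2e41, 1e8))   # ~1.6e-5, within the quoted 1e-6 to 1e-3 range
```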
L. C. H. acknowledges financial support from a Harvard-Smithsonian Center for Astrophysics postdoctoral fellowship, from NASA grant NAG 5-3556, and from NASA grants GO-06837.01-95A and AR-07527.02-96A from the Space Telescope Science Institute (operated by AURA, Inc., under NASA contract NAS5-26555). I thank Chien Peng and Edward Moran for assistance in analyzing some of the HST and ROSAT HRI images, respectively, and I am grateful to Marianne Vestergaard for valuable comments on an earlier version of the paper. I thank Ramesh Narayan for discussions on the theoretical interpretation of the data. This work made extensive use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA.
References
Bääth, L. B., et al. 1992, A&A, 257, 31
Bartel, N., et al. 1982, ApJ, 262, 556
Barth, A. J., Ho, L. C., Filippenko, A. V., & Sargent, W. L. W. 1998, ApJ, 496, 133
Barth, A. J., Reichert, G. A., Filippenko, A. V., Ho, L. C., Shields, J. C., Mushotzky, R. F., & Puchnarewicz, E. M. 1996, AJ, 112, 1829
Bash, F. N., & Kaufman, M. 1986, ApJ, 310, 621
Biretta, J., Stern, C. P., & Harris, D. E. 1991, AJ, 101, 1632
Birkinshaw, M., & Worrall, D. M. 1993, ApJ, 412, 568
Blandford, R. D., & Rees, M. J. 1992, in Testing the AGN Paradigm, ed. S. Holt, S. Neff, & M. Urry (New York: AIP), 3
Bohlin, R. C., Savage, B. D., & Drake, J. K. 1978, ApJ, 224, 132
Bouchet, P., Lequeux, J., Maurice, E., Prevot, L., & Prevot-Burnichon, M. L. 1985, A&A, 149, 330
Bower, G. A., et al. 1998, ApJ, 492, L111
Bower, G. A., Heckman, T. M., Wilson, A. S., & Richstone, D. O. 1997, ApJ, 483, L33
Bower, G. A., Wilson, A. S., Heckman, T. M., & Richstone, D. O. 1996, AJ, 111, 1901
Calzetti, D. 1997, in The Ultraviolet Universe at Low and High Redshift, ed. W. H. Waller et al. (New York: AIP), 403
Calzetti, D., Kinney, A. L., & Storchi-Bergmann, T. 1994, ApJ, 429, 582
Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, ApJ, 345, 245
Cohen, M. H., & Readhead, A. C. S. 1979, ApJ, 233, L101
Crane, P., et al. 1993, AJ, 106, 1371
Crane, P., & Vernet, J. 1997, ApJ, 486, L91
Czerny, B., Loska, Z., Szczerba, R., Cukierska, J., & Madejski, G. 1995, AcA, 45, 623
Dahari, O., & De Robertis, M. M. 1988, ApJS, 67, 249
de Vaucouleurs, G., de Vaucouleurs, A., Corwin, H. G., Jr., Buta, R. J., Paturel, G., & Fouqué, R. 1991, Third Reference Catalogue of Bright Galaxies (New York: Springer)
Devereux, N. A., Becklin, E. E., & Scoville, N. 1987, ApJ, 312, 529
Devereux, N. A., Ford, H. C., & Jacoby, G. 1997, ApJ, 481, L71
Elvis, M., et al. 1994, ApJS, 95, 1
Fabbiano, G., Fassnacht, C., & Trinchieri, G. 1994, ApJ, 434, 67
Fabbiano, G., & Juda, J. Z. 1997, ApJ, 476, 666
Fabbiano, G., Kim, D.-W., & Trinchieri, G. 1992, ApJS, 80, 531
Fabian, A. C., Rees, M. J., Stella, L., & White, N. E. 1989, MNRAS, 238, 729
Ferrarese, L., Ford, H. C., & Jaffe, W. 1996, ApJ, 470, 444
Forbes, D. A., Ward, M. J., DePoy, D. L., Boisson, C., & Smith, M. S. 1992, MNRAS, 254, 509
Gaskell, C. M., & Ferland, G. J. 1984, PASP, 96, 393
González-Delgado, R. M., & Pérez, E. 1996, MNRAS, 281, 1105
Gordon, K. D., Calzetti, D., & Witt, A. N. 1997, ApJ, 487
Heckman, T. M. 1980, A&A, 87, 152
Ho, L. C. 1998c, in Observational Evidence for Black Holes in the Universe, ed. S. K. Chakrabarti (Dordrecht: Kluwer), 157
Ho, L. C. 1999a, in The AGN-Galaxy Connection, ed. H. R. Schmitt, L. C. Ho, & A. L. Kinney (Advances in Space Research), in press
Ho, L. C. 1999b, in preparation
Ho, L. C., et al. 1999b, in preparation
Ho, L. C., Filippenko, A. V., & Sargent, W. L. W. 1995, ApJS, 98, 477
Ho, L. C., Filippenko, A. V., & Sargent, W. L. W. 1996, ApJ, 462, 183
Ho, L. C., Filippenko, A. V., & Sargent, W. L. W. 1997a, ApJ, 487, 568
Ho, L. C., Filippenko, A. V., & Sargent, W. L. W. 1997b, ApJS, 112, 315
Ho, L. C., Van Dyk, S. D., Pooley, G. G., Sramek, R. A., & Weiler, K. W. 1999a, AJ, submitted
Hummel, E., van der Hulst, J. M., & Dickey, J. M. 1984, A&A, 134, 207
Impey, C. D., Wynn-Williams, C. G., & Becklin, E. E. 1986, ApJ, 309, 572
Ishisaki, Y., et al. 1996, PASJ, 48, 237
Jones, D. L., et al. 1986, ApJ, 305, 684
Jones, D. L., Terzian, Y., & Sramek, R. A. 1981, ApJ, 246, 28
Jones, D. L., & Wehrle, A. E. 1997, ApJ, 484, 186
Kellermann, K. I., Sramek, R. A., Schmidt, M., Shaffer, D. B., & Green, R. F. 1989, AJ, 98, 1195
Kormendy, J., et al. 1997, ApJ, 473, L91
Laor, A., & Draine, B. 1993, ApJ, 402, 441
Lasota, J.-P., Abramowicz, M. A., Chen, X., Krolik, J., Narayan, R., & Yi, I. 1996, ApJ, 462, 142
Maiolino, R., Ruiz, M., Rieke, G. H., & Keller, L. D. 1995, ApJ, 446, 561
Malkan, M. A., & Oke, J. B. 1983, ApJ, 265, 92
Malkan, M. A., & Sargent, W. L. W. 1982, ApJ, 254, 22
Maoz, D., Filippenko, A. V., Ho, L. C., Rix, H.-W., Bahcall, J. N., Schneider, D. P., & Macchetto, F. D. 1995, ApJ, 440, 91
Maoz, D., Koratkar, A. P., Shields, J. C., Ho, L. C., Filippenko, A. V., & Sternberg, A. 1998, AJ, 116, 55
Mathis, J. S. 1994, ApJ, 422, 176
McKee, C. F., & Petrosian, V. 1974, ApJ, 189, 17
Murphy, E. M., Lockman, F. J., Laor, A., & Elvis, M. 1996, ApJS, 105, 369
Mushotzky, R. F., & Wandel, A. 1989, ApJ, 339, 674
Nandra, K., George, I. M., Mushotzky, R. F., Turner, T. J., & Yaqoob, T. 1997, ApJ, 477, 602
Neugebauer, G., et al. 1980, ApJ, 238, 502
Nicholson, K. L., Reichert, G. A., Mason, K. O., Puchnarewicz, E. M., Ho, L. C., Shields, J. C., & Filippenko, A. V. 1998, MNRAS, 300, 893
Pauliny-Toth, I. I. K., Preuss, E., Witzel, A., Graham, D., Kellermann, K. I., & Ronnang, B. 1981, AJ, 86, 371
Pei, Y. C. 1992, ApJ, 395, 130
Reichert, G. A., Mushotzky, R. F., Petre, R., & Holt, S. S. 1985, ApJ, 296, 69
Reid, M. J., Biretta, J. A., Junor, W., Muxlow, T. W. B., & Spencer, R. E. 1989, ApJ, 336, 112
Reynolds, C. S., Di Matteo, T., Fabian, A. C., Hwang, U., & Canizares, C. R. 1996, MNRAS, 283, L111
Rieke, G. H., & Lebofsky, M. J. 1978, ApJ, 220, L37
Sadler, E. M., Slee, O. B., Reynolds, J. E., & Roy, A. L. 1995, MNRAS, 276, 1373
Sanders, D. B., Phinney, E. S., Neugebauer, G., Soifer, B. T., & Matthews, K. 1989, ApJ, 347, 29
Schilizzi, R. T. 1976, AJ, 81, 946
Serlemitsos, P., Ptak, A., & Yaqoob, T. 1996, in The Physics of LINERs in View of Recent Observations, ed. M. Eracleous et al. (San Francisco: ASP), 70
Shields, G. A. 1978, Nature, 272, 706
Shuder, J. M., & Osterbrock, D. E. 1981, ApJ, 250, 55
Soltan, A. 1982, MNRAS, 200, 115
Spencer, R. E., & Junor, W. 1986, Nature, 321, 753
Stiavelli, M., Peletier, R. F., & Carollo, C. M. 1997, MNRAS, 285, 181
Terashima, Y., Kunieda, H., Misaki, K., Mushotzky, R. F., Ptak, A. F., & Reichert, G. A. 1998, ApJ, 503, 212
Tripp, T. M., Bechtold, J., & Green, R. F. 1994, ApJ, 433, 533
Tsvetanov, Z. I., Hartig, G. F., Ford, H. C., Kriss, G. A., Dopita, M. A., Dressel, L. L., & Harms, R. J. 1998, in Proceedings of the M87 Workshop (Lecture Notes in Physics: Springer Verlag), in press
Turner, J. L., & Ho, P. T. P. 1994, ApJ, 421, 122
Turner, T. J., George, I. M., Nandra, K., & Mushotzky, R. F. 1997, ApJS, 113, 23
Turner, T. J., & Pounds, K. A. 1989, MNRAS, 240, 833
Voit, G. M. 1991, ApJ, 379, 122
Willner, S. P., Elvis, M., Fabbiano, G., Lawrence, A., & Ward, M. J. 1985, ApJ, 299, 443
Worrall, D. M., & Birkinshaw, M. 1994, ApJ, 427, 134
Zirbel, E. L., & Baum, S. A. 1998, ApJS, 114, 177
# NMR studies of the original magnetic properties of the cuprates: effect of impurities and defects
## I Introduction: magnetic properties of pure materials
It is now experimentally well established that the CuO<sub>2</sub> planes display anomalous magnetic properties in the metallic normal state of the cuprates, at least in the underdoped and optimally doped states. The occurrence of magnetic correlations was first shown by the existence of an enhanced non-Korringa nuclear spin relaxation rate $`1/T_1`$ on <sup>63</sup>Cu and not on <sup>17</sup>O and <sup>89</sup>Y takigawa ; alloulohno . In the recent past, considerable interest has been focused on the pseudo-gap in the excitation spectrum of the cuprates. It was detected first in microscopic NMR measurements of the susceptibility $`\chi _p`$ of the CuO<sub>2</sub> planes alloulohno , which exhibit a large reduction in the homogeneous $`𝐪=0`$ excitations at low $`T`$ in underdoped materials, as shown in Fig.1. Similar low $`T`$ reductions of the imaginary part of the susceptibility at the AF wave vector $`𝐪=(\pi ,\pi )`$ were observed in <sup>63</sup>Cu $`1/T_1T`$ data and inelastic neutron scattering experiments berthier . Presently, the pseudo-gap of the underdoped high-$`T_c`$ superconductors is also studied by many techniques such as transport and angle resolved photoemission, which yields its $`𝐤`$ dependence. Various explanations are proposed for the pseudo-gaps which are believed to be essential features of the physics of the normal state (and perhaps the superconducting state) of the cuprates.
Atomic substitutions in the planar Cu site have naturally been found the most detrimental to superconductivity. This has in parallel triggered a large effort, particularly in our research group, to use such impurities to reveal the original normal-state magnetic properties of the cuprates. We shall see that NMR, as a local magnetic probe, is an essential tool which lends weight to this approach. In section II we present the effect of impurities on the phase diagram and the pseudo-gaps. We will distinguish the average effect of impurities on the physical properties far from the impurity site from the local magnetic perturbations. The study of the distribution of the spin polarisation induced by magnetic impurities is shown in sec. III to be a direct probe of the non-local magnetic response $`\chi ^{}(𝐫)`$ of the pure system, a quantity which is hard to access by other experimental approaches. In section IV we consider the case of non-magnetic impurities like Zn ($`3d^{10}`$) which, upon substitution on the Cu site of the CuO<sub>2</sub> plane, strongly decreases the superconducting transition temperature $`T_c`$. It has been anticipated fink , and subsequently shown experimentallyalloul2 , that, although Zn itself is non-magnetic, it induces a modification of the magnetic properties of the correlated spin system of the CuO<sub>2</sub> planes. We shall recall how <sup>89</sup>Y NMR demonstrated mahajan that local magnetic moments are induced on the Cu near neighbour ($`n.n.`$) of the Zn substituent in the CuO<sub>2</sub> plane. Experiments on La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> have confirmed ishida2 that the occurrence of local moments induced by non-magnetic impurities on the Cu sites is a general property of cuprates. Recent measurements of the variation with hole doping of the effective magnetic moment associated with non magnetic impurities will be reviewed. Finally, we shall discuss briefly in sec.V the influence of impurities on transport properties.
## II Impurities and phase diagrams
The main effect of impurities is to depress superconductivity, usually much faster in the underdoped regime than in the optimally doped case. Similarly the increase of resistivity is larger for underdoped materials. This results in a shift of the Insulator-Metal transition towards higher hole concentration in the presence of impurities. So what happens then to the crossover pseudo-gap lines? There is a controversy which has arisen because of the differences between macroscopic and microscopic measurements of the pseudo-gap. NMR has the advantage that the sites in the vicinity of the impurity usually display well-shifted resonance lines. Therefore, the main NMR line corresponds to sites far from the impurity. Its broadening, to be studied in section III, is associated with the oscillatory induced polarisation of the host. Its position then measures the average $`\chi _p`$ far from the impurities, which reflects their influence on the homogeneous magnetic properties. The $`T`$ dependence of the shifts $`\mathrm{\Delta }K(T)`$ of the <sup>89</sup>Y or <sup>17</sup>O NMR main lines alloul2 ; mahajan ; bobroff is found to be unmodified by impurity substitutions, as can be seen in Fig. 2.
This demonstrates that, contrary to the Metal-Insulator transition, the pseudo-gap is unaffected by impurity substitutions at large distance from the impurities. Incidentally, these data also ensure that the hole doping is not significantly modified. Conversely, if the hole doping is changed, e.g. by Pr substitution on the Y site macfarlane , NMR shift data can be used to estimate its variation by comparison with calibrated curves for $`\chi _p`$ in the pure material (Fig. 1).
Other experiments, which very often probe the macroscopic behaviour of the sample, have sometimes been interpreted differently, as they do not directly distinguish local from large distance properties. For instance, it was initially suggested kakurai on the basis of neutron scattering experiments, that the pseudo-gap vanishes at $`𝐪=(\pi ,\pi )`$ upon Zn substitution. However, the careful neutron data of Sidis et al. sidis indicate that the opening of the pseudo-gap still occurs at the same $`T^{}`$ although new states appear in the pseudo-gap. Such states may be associated with local magnetic modifications induced around the Zn, which will be described in section IV. In any case, far from the impurities the pseudo-gap is quite robust upon impurity substitution. These results are quite natural if the pseudo-gaps are only associated with the occurrence of AF correlation effects. However, in the scenario in which the pseudo-gaps are associated with the formation of local pairs below $`T^{}`$, they indicate that impurities do not prevent the formation of local pairs except possibly in their vicinity.
## III Extended response to a local magnetic excitation
In noble metal hosts, any local charge perturbation is known to induce long distance charge density oscillations (also called Friedel oscillations). Similarly a local magnetic moment induces a long distance oscillatory spin polarisation (RKKY) whose amplitude scales with the magnetization of the local moment and with its coupling $`J_{ex}`$ with the conduction electrons. This oscillatory spin polarisation gives a contribution to the NMR shift of the nuclei which decreases with increasing distance from the impurity. In very dilute samples, if the experimental sensitivity is sufficient, the resonances of the different shells of neighbours of the impurity can be resolved alloul1 . These resonances merge together if the impurity concentration is too large, resulting in a net broadening $`\mathrm{\Delta }\nu _{imp}`$ of the host nuclear resonance. For some impurities, like rare earths substituted on the Y site, the hybridisation and therefore the exchange coupling $`J_{ex}`$ are extremely weak. The plane nuclear spins only sense the moment through its dipolar field. But for moments located in the planes such as Ni, the induced spin polarisation dominates, especially for the <sup>17</sup>O and <sup>63</sup>Cu nuclei. The large <sup>17</sup>O NMR broadening induced by Ni has therefore been studied in great detail by Bobroff et al. bobroff . It has been found that in underdoped YBCO, the linewidth increases much faster than $`1/T`$ at low temperature, contrary to what one might expect in a non-correlated metallic host (Fig. 3). This fast increase is a signature of the anomalous magnetic response of the host, which displays a peak in $`\chi ^{}(𝐪)`$ near the AF wavevector ($`\pi `$, $`\pi `$), which can be characterized by a correlation length $`\xi `$. Therefore the $`T`$ variation of the NMR spectra yields a method to study the $`T`$ dependence of $`\chi ^{}(𝐪)`$ and $`\xi `$. The analysis of such data depends on the phenomenological shape given to the $`𝐪`$ dependence of $`\chi ^{}(𝐪)`$, especially as the O site probes $`\chi ^{}(𝐫)`$ on its two neighbours, and therefore is governed somewhat by the gradient of $`\left|\chi ^{}(𝐫)\right|`$. Assuming a Gaussian shape for $`\chi ^{}(𝐪)`$, it is found that the linewidth is nearly insensitive to the $`T`$ dependence of $`\xi `$, in contrast to the case of a Lorentzian shape slichter . The experiment on a single nuclear site does not by itself allow deduction of $`\xi (T)`$. However, comparison with spin-spin relaxation data, which also measure an integral quantity involving $`\chi ^{}(𝐪)`$, yields some complementary information. With a Gaussian shape we find that $`\xi `$ increases with $`T`$, while for a Lorentzian it would decrease with increasing $`T`$. Similarly a comparison of the respective broadenings of the <sup>17</sup>O, <sup>89</sup>Y and <sup>63</sup>Cu spectra, which probe $`\chi ^{}(𝐫)`$ differently, leads us to conclude that the Lorentzian model is somewhat better, implying that $`\xi `$ increases at low $`T`$, as one might expect bobroff2 . More accurate studies of the shape of the spectra as well as their concentration dependence are required to arrive at quantitative conclusions on the variation of $`\xi (T)`$. We should point out here that, for YBCO<sub>7</sub>, the <sup>17</sup>O width varies roughly like $`1/T`$, while the Ni moment still displays a Curie law. This indicates that the $`T`$ dependence of $`\chi ^{}(𝐪)`$ is not as large in optimally doped systems.
In such cases the detailed shape of $`\chi ^{}(𝐪)`$ has not yet been analysed.
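To make concrete how the assumed shape of $`\chi ^{}(𝐪)`$ translates into a real-space polarisation, the toy calculation below Fourier transforms a Lorentzian or a Gaussian peak centred at ($`\pi `$, $`\pi `$) on a small square lattice. It is a minimal numerical sketch with arbitrary, assumed values of $`\xi `$ and of the lattice size, not the analysis actually applied to the <sup>17</sup>O data.

```python
import numpy as np

def chi_r(xi, shape="lorentzian", L=64):
    """Toy staggered polarisation chi'(r) from a chi'(q) peaked at the AF
    wavevector (pi, pi) on an L x L square lattice with periodic boundaries.
    xi is the correlation length in units of the lattice constant."""
    q = 2.0 * np.pi * np.fft.fftfreq(L)            # allowed wavevectors
    qx, qy = np.meshgrid(q, q, indexing="ij")
    # distance from (pi, pi), folded back into the first Brillouin zone
    dqx = np.angle(np.exp(1j * (qx - np.pi)))
    dqy = np.angle(np.exp(1j * (qy - np.pi)))
    dq2 = dqx ** 2 + dqy ** 2
    if shape == "lorentzian":
        chi_q = 1.0 / (1.0 + xi ** 2 * dq2)
    else:                                          # "gaussian"
        chi_q = np.exp(-0.5 * xi ** 2 * dq2)
    chi = np.real(np.fft.ifft2(chi_q))
    return chi / chi[0, 0]                         # normalise to the on-site value

for shape in ("lorentzian", "gaussian"):
    for xi in (2.0, 4.0):                          # assumed correlation lengths
        c = chi_r(xi, shape)
        print(shape, xi, round(float(c[3, 0]), 4)) # polarisation 3 sites away
```

The Lorentzian form leaves a heavier tail a few sites away from the impurity than a Gaussian of the same $`\xi `$, which is why the two shapes convert an observed linewidth into different $`\xi (T)`$ behaviours.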
## IV Local magnetism induced by non-magnetic Zn
Surprisingly, it has been found that, as for Ni substitution, a $`1/T`$ broadening of the <sup>89</sup>Y line occurs alloul2 in YBCO<sub>7</sub>:Zn, even though Zn is expected to be in a non-magnetic 3d<sup>10</sup> state. This was the first experimental evidence for the occurrence of “local moment like” behaviour induced by Zn. A more refined picture of the response of the host to a non-magnetic substituent has been obtained in the case of underdoped YBCO<sub>6.64</sub>, as distinct resonances of <sup>89</sup>Y were observed and could be attributed to Y $`n.n.`$ sites of the substituted Zn mahajan . The measured Curie-like contribution to the NMR shift of the first $`n.n.`$ line (Fig. 4) and the shortening of its T<sub>1</sub> at low $`T`$ are striking evidence which justifies the denomination “local moment” that we have been using throughout.<sup>1</sup>

<sup>1</sup> The validity of these observations has been periodically put into question, for instance because similar $`n.n.`$ resonances were not detected janossy in ESR experiments on Gd (substituted on Y). From the T<sub>1</sub> data for <sup>89</sup>Y NMR, we have shown that the large expected relaxation rate for Gd corresponds to a significant line broadening of the Gd ESR $`n.n.`$ lines, which prohibits their detection nmahajan . It has also been conjectured that substitution of Zn on Cu and of Ca on Y yield similar disorder effects on the NMR tallon . This is not true, as the line broadening does not exhibit a Curie-like T variation for Ca-substituted samples.
The Zn induced local moments are quite clearly located in the vicinity of the Zn. As is shown by analysis of the $`n.n.`$ <sup>89</sup>Y NMR intensity data, the Zn substitutes essentially on the plane copper site. Therefore, the Zn contribution $`\chi _c`$ to the macroscopic susceptibility could be inferred from SQUID data taken on samples free of parasitic impurity phases mendels ; mendels1 . The hyperfine couplings deduced from the comparison of the <sup>89</sup>Y NMR shift data to $`\chi _c`$ have the correct order of magnitude to demonstrate that the local moment resides mainly on the Cu $`n.n.`$ to the Zn. Assuming that they are not modified with respect to pure YBCO, the data can be analysed consistently with a locally AF state extending over a few lattice sites. This might also explain the existence of a line corresponding to Y second $`n.n.`$ to the Zn nmahajan .
In the superconducting state, <sup>63</sup>Cu NQR relaxation ishida1 and Mössbauer experiments hodges indicate the existence of states in the gap. In neutron scattering experiments sidis , the local states induced by the Zn, both in the pseudo-gap and in the spin-gap detected below $`T_c`$, are found at the ($`\pi `$, $`\pi `$) scattering vector, and correspond to a real-space extent of about 7 Å. These thus constitute direct evidence for the persistence of AF correlations in the vicinity of the impurities comment .
It is clear that the observed local moment behaviour is original inasmuch as it is the magnetic response of the correlated electron system to the presence of a spinless site, as has been proposed from various theoretical arguments fink ; nagaosa ; poilblanc ; khaliullin ; nagaosalee . As complete understanding of the magnetic properties of pure cuprates is far from being achieved, it is no surprise that present theoretical descriptions of impurity induced magnetism are rather crude, and for example, do not address its microscopic extent. Our results also fit well in the context of recent theoretical work on undoped quantum spin systems. For instance Martins dagotto predicts static local moments induced by doping $`S=1/2`$ Heisenberg AF chains or ladders with non-magnetic impurities. NMR experiments on the $`S=1/2`$ Heisenberg chain system Sr<sub>2</sub>CuO<sub>3</sub> are consistent with the prediction of an induced local moment with a large spatial extent along the chain takigawa2 . In this undoped insulating quantum liquid, the response is purely magnetic. Since AF correlations persist in the metallic cuprates, the appearance of a local moment near the Zn might be anticipated, but its properties could depend strongly on the density of charge carriers. Magnetisation data by Mendels et al. mendels ; mendels1 , shown in Fig. 5, demonstrate that the Curie constant decreases steadily from YBCO<sub>6.64</sub>:Zn to YBCO<sub>7</sub>:Zn. However existing experiments do not, at present, distinguish the respective roles of the AF correlation length and screening by the conduction holes in defining the local moment magnitude and spatial extent.
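For orientation, a Curie constant such as those of Fig. 5 converts into an effective moment per impurity through the standard relation $`C=N_A\mu _{\mathrm{eff}}^2\mu _B^2/3k_B`$ (per mole of impurity, cgs units). The sketch below performs only this unit conversion; the input value is the free spin-1/2 benchmark, not a number read from the figure, which is not reproduced here.

```python
import numpy as np

k_B  = 1.380649e-16        # erg/K
mu_B = 9.2740100783e-21    # erg/G
N_A  = 6.02214076e23       # 1/mol

def mu_eff(curie_constant):
    """Effective moment (in units of mu_B) per impurity from a molar Curie
    constant C in emu K / (mol impurity), assuming chi = C / T."""
    return np.sqrt(3.0 * k_B * curie_constant / (N_A * mu_B ** 2))

# Free spin-1/2 (g = 2) benchmark: C ~ 0.375 emu K/mol gives mu_eff ~ 1.73 mu_B
print(mu_eff(0.375))
```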
In the slightly overdoped YBCO<sub>7</sub>, the occurrence of a local moment was confirmed from <sup>17</sup>O NMR linewidth data YY . The fact that we could not resolve the <sup>89</sup>Y $`n.n.`$ signal in YBCO<sub>7</sub> is consistent with the weak magnitude found for the Curie-like contribution to the local susceptibility. Furthermore, Ishida et al. ishida2 showed that our observation extends to another cuprate family, as non-magnetic Al exhibits a local moment behaviour in optimally doped La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>. A local signature of this fact was found in the shift of the <sup>27</sup>Al NMR which exhibits a Curie-Weiss T dependence of $`\chi _c`$, with a sizable Weiss temperature ($`\theta `$ of about 50 K). It is not very clear from these data whether such a high value of $`\theta `$ corresponds to a genuine single impurity effect or if it varies with Al content, thereby revealing a strong coupling between the local moments. By analogy with our results, this observation is supposed to result from a local moment residing on the $`n.n.`$ copper orbitals, which are coupled to the <sup>27</sup>Al nuclear spin via transferred hyperfine couplings.
## V Magnetism and transport properties
Let us now consider briefly the influence of impurities on transport properties. Most analyses of the resistivity data ong ; mizuhashi suggest a large magnitude for the Zn scattering, not far from the unitary limit. Such results are generally observed for most local defects induced in the CuO<sub>2</sub> planes, such as irradiation defects legris . It has occasionally been assumed tallon2 that strong scattering is due to potential scattering. In the present case, for which no charge difference occurs between Zn<sup>2+</sup> and Cu<sup>2+</sup>, the scattering cannot be associated with charge difference, but rather is due indirectly to the fact that Zn is a spin-less defect. So, as for Kondo impurities in normal metal hosts, unitary scattering is associated with a magnetic effect. These remarks are of course included in most analyses of impurity scattering done in the strong correlation approaches fink ; nagaosa ; poilblanc ; khaliullin ; nagaosalee . Different behaviour of spinons and holons is at the root of the anomalous impurity scattering in such theories. However none is at present sufficiently advanced to provide quantitative results which could be compared to experimental data.
As for the reduction of T<sub>c</sub> induced by “non-magnetic” impurities, the situation has evolved since our first report mahajan , as the d-wave symmetry of the order parameter is now well established in most cuprates. In this case, any type of impurity scattering depresses $`T_c`$. This is exemplified by recent resistivity measurements performed on electron-irradiated samples albenque . It is found that a universal law applies between the respective variations $`\mathrm{\Delta }T_c`$ and $`\mathrm{\Delta }\rho `$ of the superconducting temperature and of the resistivity induced by defects. The most remarkable feature detected in these experiments is that the universal relation extends to the overdoped regime, suggesting that the d-wave symmetry of the order parameter is valid for the entire phase diagram, even in the doping range for which Fermi liquid behaviour seems to apply.
## VI Conclusions
The studies presented above have allowed us to demonstrate that valuable information on the magnetic properties of the cuprates is obtained from the local study of their response to substitutional impurities in the CuO<sub>2</sub> planes. It could be anticipated, by analogy with RKKY effects in simple metals, that the large distance modifications of the host properties would directly reflect the non-local magnetic response of the host. This is indeed apparent in the NMR studies of the modifications of the central <sup>89</sup>Y or <sup>17</sup>O NMR lines, which give us some insight into the $`𝐪`$ and $`T`$ dependence of the static susceptibility, i.e. of the AF correlation length. These studies also show that the pseudo-gap is unaffected by impurities and is therefore either a purely magnetic effect associated with AF fluctuations, or due to a local pairing which, in contrast with the macroscopic superconducting condensate, is not disrupted by impurities.
More surprising are the actual properties associated with spin-less impurities. The existence of magnetic correlations is responsible for the occurrence of local moment behaviour induced by non-magnetic impurities on the neighbouring Cu sites. While one might expect that the spin 1/2 moment which is released by such a defect in a magnetic quantum spin system should extend over a distance comparable to the AF correlation length, the measured decrease of the effective moment with increasing hole content is more likely to be associated with an interaction with charge carriers. As transport properties indicate that isovalent impurities produce a large scattering of the carriers, one can anticipate that this is due to the occurrence of a resonant state. But the actual Curie dependence of the susceptibility would then not be expected to extend to low T. However, the occurrence of superconductivity limits somewhat our experimental capability to investigate the magnetic properties of the metallic ground state. From SQUID data mendels1 it was concluded that in underdoped YBCO<sub>6.6</sub>:Zn4%, for which $`T_c`$ is reduced to zero, the susceptibility follows a Curie-Weiss law down to 10 K, with $`\theta `$ of about 4 K. Does this correspond to a Kondo-like energy scale characteristic of the width of the impurity resonant state? Such a Kondo-like effect is a candidate mechanism nagaosalee for the reduction of the magnitude of the local moment in YBCO<sub>7</sub>:Zn. However, in these systems one does not a priori expect the physical properties to mimic those obtained for the Kondo effect in noble metals within a classical s-d exchange model. Such difficulties have already been pointed out by Hirschfeld hirschfeld in view of our preliminary data mahajan . Obviously further efforts are required to complete this approach both experimentally and theoretically in the context of a correlated electron system.
We should like to acknowledge A. MacFarlane for helpful discussions and careful reading of the manuscript.
# Balloon Measurements of Cosmic Ray Muon Spectra in the Atmosphere along with those of Primary Protons and Helium Nuclei over Mid-Latitude
## I Introduction
Data on the muon spectrum as a function of the atmospheric depth in the momentum interval 0.3–40 GeV/$`c`$ have been published earlier by this collaboration . We report in this paper a new measurement of muon spectra in the atmosphere as well as of the spectra of protons and helium nuclei, which were measured at the float altitude with the same apparatus during the same balloon flight. The measurements were performed with the MASS (Matter Antimatter Spectrometer System) apparatus on September 23, 1991, starting from Ft. Sumner, NM, at 1270 m above sea level. The coordinates of this location are 34°N and 104°W, corresponding to an effective vertical cut-off rigidity of about 4.3 GV. The balloon ascent lasted for almost 3 hours, during which about 240,000 triggers were collected. The muon measurements cover the altitude range from ground level to 36 km, which corresponds to about 5 g/cm<sup>2</sup> of atmospheric depth. The ascent curve of the apparatus, based on the pressure measurements taken by the payload sensors, is shown in Fig. 1. The float data analyzed for this work cover an exposure time of about 10 hours. These data were taken at atmospheric depths between 4 and 7 g/cm<sup>2</sup>, with an average value of 5.8 g/cm<sup>2</sup>.
Primary cosmic ray particles, while entering the Earth's atmosphere, interact with the atmospheric nuclei and produce secondary particles (see for an excellent introduction). Among the primary cosmic rays, protons and helium nuclei are the major components, and, as a consequence, a large fraction of these secondary particles are produced by them. Most of the secondary particles decay and some of the decay products are muons and neutrinos. Muons and muon neutrinos are the decay products of mesons, and both muon and electron neutrinos are the result of muon decays. Both these kinds of neutrinos are detected by underground detectors.
Due to this close relationship, atmospheric muons have been often considered as a powerful tool to calibrate the calculations of atmospheric propagation, in particular for the neutrino flux evaluation (e.g., ). This situation appears to be most interesting in the context of the increasing evidence of the atmospheric neutrino anomaly (for a recent discussion, see ). The anomaly is based on the discrepancy between the observed ratio of the number of neutrino interactions due to $`\mu `$\- type and that due to $`e`$-type, as measured by some underground detectors, and the robust theoretical expectation at low energy. While the evidence for the anomaly will not be discussed here, it is important to note that any interpretation of the phenomenon depends crucially on the absolute value of the expected fluxes of neutrinos.
In order to take into account the details of particle propagation and interactions in the calculations of atmospheric cascades, both analytic and Monte Carlo approaches have been successfully undertaken in the past. Extensive work has investigated the differences between the recent neutrino calculations , indicating that the parametrization of the cross-sections for meson creation in proton collisions with the atmospheric nuclei is one of the major reasons for this discrepancy. It is well known that at low transverse momentum $`p_T`$ the perturbative quark model does not work and, moreover, the data available from accelerator measurements are not enough to discriminate between different interaction models in the central collision region (Feynman $`x_L`$ below about 0.1). Contributions from this experimentally unexplored region are important for meson production. An additional factor of inaccuracy may come from the kinematics of the particle propagation and decay. Although these processes are well known, their description in the atmospheric simulation codes requires some approximations. In fact, most of the calculations published so far are performed under the approximation of one-dimensional propagation of the secondaries, and the effect of this approximation on the low-energy neutrino results is still under study.
Another important input to the atmospheric propagation calculations, which may introduce a further degree of uncertainty, is the primary cosmic ray composition and flux. The direct measurements of the primary components sometimes show significant discrepancies with respect to one another (see for a compilation). The differences in the experimental results are to some extent due to the specific conditions of the measurements, namely, the geomagnetic suppression and the solar modulation, and in part may be due to experimental inaccuracies. Both the geomagnetic and solar cycle effects on the primary cosmic rays need to be taken into account to evaluate the neutrino fluxes, since the underground experiments collect events coming from a large interval of geomagnetic locations over significant fractions of the solar activity cycles. While the geomagnetic suppression is a well-understood mechanism and significant improvements in its description have been introduced recently , the solar modulation of cosmic rays is not exactly periodic and shows some peculiarities (e.g., the so-called “Forbush events”) that are hard to describe in a model.
A comparison of the expected muon fluxes to measurements of muons in the atmosphere may help in reducing the uncertainty in the neutrino calculations due to the above factors, namely the primary spectra and the interaction cross-sections; both affect the muon and neutrino flux calculations to a similar extent. An obvious limitation of this approach is that muon measurements are not always available from the experiments which measure the primary particle spectra, so that calculations are often carried out using primary spectra measured at a time and location which may not correspond to those of the muon measurements. The approach adopted in this investigation, in which the primary spectra of protons and helium nuclei are measured along with the atmospheric muons by the same experiment, allows the following possibilities. (i) The measured primary spectra can be used as input to the propagation calculations whose results have to be compared to the muon measurements, thus automatically taking into account the specific levels of geomagnetic suppression and solar modulation of the experiment. (ii) Possible systematics on the global normalization of the experiment (e.g., geometric factor, acquisition efficiency, etc.) will also be compensated in such calculations.
While muon measurements at sea-level are widely reported in the literature, there have been very few attempts to measure the muon flux as a function of altitude. The early experiments were performed either with airplane-borne apparatus or at mountain sites . Counter telescopes were used for detecting charged particles and muons were usually selected by requiring them to traverse large amounts of matter without interacting. The main difficulty in such experiments was to properly identify muons while rejecting the other components of the “hard” radiation. This problem was of course more complex for positive muon measurements, since the proton flux rapidly increases with increasing altitude. A thorough review of these earlier results is presented in . The deployment of balloon-borne detectors allows the investigation to be extended to momentum and depth ranges much larger than in previous experiments .
Preliminary results for the muon measurements from this study were reported earlier , as well as preliminary proton results at float level . The measurement of the muon flux and charge ratio at the float level from this experiment has already been published .
## II Detector Setup
The apparatus used in the 1991 experiment was a modified version of the MASS spectrometer flown by the same collaboration in 1989 . It consisted of a superconducting magnet spectrometer, a time of flight device (T.O.F.), a gas threshold Cherenkov detector and an imaging calorimeter, as shown in Fig. 2.
The magnet spectrometer consisted of the NMSU single coil superconducting magnet and of a hybrid tracking device. The magnet, with 11,161 turns and a current of 120 A, gave rise to a field strength of 0.1–2 T in the region of the tracking device. The latter consisted of three groups of multiwire proportional chambers interleaved with two drift chambers, for a total height of 110 cm. Each drift chamber was equipped with ten sensitive layers, each with 16 independent cells. The drift tubes were filled with CO<sub>2</sub>. The multiwire proportional chambers were filled with “magic gas”, and were read by means of the cathode-coupled delay line technique . A total number of 19 measurements along the direction of maximum curvature and 8 measurements along the perpendicular direction were performed. The maximum detectable rigidity for this configuration of the spectrometer was estimated to be about 210 GV for singly charged particles .
The time of flight detector consisted of two planes of scintillator separated by a distance of 2.36 m. The upper plane was located at the top of the apparatus. It consisted of two layers of scintillator, segmented into 5 paddles of 20 cm width and variable length in order to match the round section of the payload’s shell. The bottom plane, consisting of a single scintillator layer segmented into two paddles, was located below the tracker system and above the calorimeter. A coincidence between the signals from the two planes produced the trigger for data acquisition. The signals from each paddle of scintillator were independently digitized for time of flight measurements as well as for pulse height analyses.
The Cherenkov detector consisted of a 1 m tall cylinder of Freon 22 at the pressure of 1 atm. A four-segment spherical mirror focussed the light onto four photomultipliers. The threshold Lorentz factor for Cherenkov emission was $`\gamma _{th}`$ 25.
The calorimeter consisted of 40 layers, each having 64 brass streamer tubes. Tubes from adjacent layers were arranged perpendicular to one another. The total depth of the calorimeter was 40 cm, equivalent to 7.3 radiation lengths and 0.7 interaction lengths for protons.
## III Data Analysis
The general features of the data analysis procedures were the same for the three studies illustrated here. Nevertheless, we used different sets of criteria for selecting different particles, due to the different kinds of background events to be eliminated and the rigidity range over which the analysis was carried out in each of these cases. Additional difficulties for the ascent analysis arise because of the possible shocks during the launch and of the rapidly changing environmental conditions with altitude, namely, atmospheric pressure and temperature. We accurately monitored the instrumental conditions continuously during the ascent in order to make sure that the detector performance did not change significantly during the data acquisition. Further, in the case of the ascent analysis, the relative intensities of different particles change with altitude, and this might mimic instrumental drifts. For these reasons, we used a stringent selection for ascent muons, in such a way as to make use of the full information recorded for each event. A great deal of effort was put into checking the consistency of the ascent selection with the muon analysis at float, which has been illustrated separately .
The proton and helium events from the float file were identified by selecting charge 1 and 2 particles by means of the scintillator signals. The selection of muon events from the ascent file was mainly obtained by identifying singly charged particles which did not interact in the calorimeter. The track reconstruction in the spectrometer allowed the sign of charge of the particles to be determined. Low-energy muons were discriminated from protons by means of the time of flight measurement. Details of the event selection and analyses are described in the following sections.
### A Event Reconstruction
The criteria imposed for the selection of good reconstructed tracks were based on the experience gained with this spectrometer in this and in other flights . Although the spectrometer had some multiple track capabilities, only single track events were selected for analysis. The criteria used for the reconstruction of events in the spectrometer are summarized in Table I. This set of criteria was sufficient to select clean samples of good events for the track reconstruction from both the ascent and the float data. Among the tests shown in this Table, Tests 1–6 were introduced in order to select only good-quality reconstructed tracks. In addition, the required consistency between the track extrapolation to the scintillator plane and the position obtained from the scintillator information (Test 7), the requirement that the extrapolated track pass through the calorimeter (Test 8) and the rejection of tracks intersecting the lift bar of the payload (Test 9) removed multiple tracks and events generated in interactions in the payload. Finally, Test 10 on the particle velocity as determined with the time of flight measurement rejected albedo events.
### B Proton and Helium Selection
The identification of protons and helium events in the float sample was performed by analyzing the pulse heights of the two independent signals, $`I_1`$ and $`I_2`$, from the top layers of scintillator.
The selection for charge 1 particles (protons) was:
$$0.7I_0<\frac{I_1+I_2}{2}<1.8I_0,$$
(1)
where $`I_0`$ is the mean signal from a singly charged minimum ionizing particle. The selection for charge 2 particles (helium) was:
$$3.5I_0<\frac{I_1+I_2}{2}<6I_0.$$
(2)
Such selection criteria are illustrated in Fig. 3. The lower cut in the helium selection, as given by the above equation, is necessary in order to reduce the proton contamination in the helium sample. For the same reason, consistency between the amplitudes of the two signals $`I_1`$ and $`I_2`$ was also required for helium selection:
$$\frac{|I_1-I_2|}{\sqrt{2}}<0.4I_0.$$
(3)
These selection criteria are appropriate for rigidities above a few GV, relevant to this work. The results concerning the proton and deuterium components in the atmosphere below the geomagnetic cut-off will be presented separately (see for a preliminary report).
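Expressed as code, cuts (1)–(3) reduce to a window on the mean pulse height plus a consistency requirement for helium. The sketch below is purely illustrative: the pulse heights and the calibration constant $`I_0`$ would come from the actual detector calibration, and the example values are hypothetical.

```python
def select_charge(I1, I2, I0):
    """Apply the pulse-height cuts (1)-(3): return 'proton', 'helium' or None
    for the two top-scintillator signals I1, I2, given the mean signal I0
    of a singly charged minimum ionizing particle."""
    mean = 0.5 * (I1 + I2)
    if 0.7 * I0 < mean < 1.8 * I0:                   # cut (1): Z = 1
        return "proton"
    if 3.5 * I0 < mean < 6.0 * I0:                   # cut (2): Z = 2
        if abs(I1 - I2) / 2 ** 0.5 < 0.4 * I0:       # cut (3): consistency
            return "helium"
    return None

# Hypothetical pulse heights, in units where I0 = 1
print(select_charge(1.1, 0.9, I0=1.0))   # 'proton'
print(select_charge(4.3, 4.0, I0=1.0))   # 'helium'
```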
### C Muon Selection
The criteria for the identification of muons of either charge are shown in Table II. The scintillator selection (Test 1) for identifying singly charged particles was the same as for the float protons (1). For the muon selection, the number of hits detected in the calorimeter was counted separately for each view, in order to account for the different streamer tube efficiency. Both the minimum number of signals and the number of multiple hits refer to the hits contained in a cylinder of radius of 5 streamer tubes along the track extrapolation in the calorimeter, corresponding to about 3 Molière radii. In particular, Test 4 is a powerful means of rejecting electrons . An event identified as a negative muon by means of such selection is shown in Fig. 4.
The Cherenkov signal and the time of flight information were used for background rejection. Test 5 was imposed to remove the low-energy electrons and positrons misidentified in the calorimeter. Test 6 rejects low-energy protons from the positive muon sample by a test of the squared mass $`m^2`$ which, once the charge $`Ze`$ is known, can be estimated from the magnetic deflection $`\eta `$ and the velocity $`\beta `$ as:
$$m^2=\frac{\frac{1}{\beta ^2}-1}{\eta ^2}\times \frac{Z^2e^2}{c^2}.$$
(4)
No time of flight test was required below 0.65 GV, since low-energy protons are efficiently rejected by the scintillator pulse height discrimination.
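In practical units, Eq. (4) amounts to $`m^2=p^2(1/\beta ^2-1)`$ with $`p=Z/|\eta |`$ in GeV/$`c`$ when the deflection $`\eta `$ is expressed in GV<sup>-1</sup>. The snippet below is a minimal sketch of this arithmetic; the unit convention is an assumption made for illustration and is not necessarily the one used internally by the analysis.

```python
def mass_squared(eta, beta, Z=1):
    """Squared mass in (GeV/c^2)^2 from the magnetic deflection eta = 1/R
    (R = rigidity in GV, sign giving the charge sign) and the velocity beta,
    following Eq. (4) with p = Z * |R| in GeV/c."""
    p = Z / abs(eta)                      # momentum in GeV/c
    return p * p * (1.0 / beta ** 2 - 1.0)

# A 1 GV particle with beta ~ 0.73 has m ~ 0.94 GeV/c^2 (proton-like),
# whereas a muon of the same rigidity would have beta ~ 0.99.
print(mass_squared(eta=1.0, beta=0.73) ** 0.5)
```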
### D Background Estimates and Corrections
#### 1 Proton and Helium Analysis
Protons are the main component of primary cosmic rays. As a consequence, the possible background from light particles, namely, positrons, muons and pions, is expected to be small above the geomagnetic cut-off and is not customarily subtracted from the measurement. Therefore, no correction for such background events has been performed on the proton measurements in this investigation. The contamination from helium events in the selected proton sample is negligible. Further, no attempt was made to separate the isotopes of protons and helium events, even at low energies. In the case of helium selection there could be a small proton contamination due to the Landau fluctuations of the energy released in the scintillator layers from the large flux of protons. This background was evaluated by studying a sample of protons selected by means of the pulse height signals in the bottom scintillator layer. The contamination, in the whole energy range, was found to be less than 2% and for each energy bin the number of the estimated background protons in the helium sample was subtracted.
#### 2 Muon Analysis
The possible sources of background events, which might simulate muon-like events, are listed in Table III. Also shown in this table are the most efficient rejection criteria to eliminate the background events and the estimated levels of residual contamination.
Albedo events are upward-going particles which simulate a curvature of opposite sign in the spectrometer. They are either produced as large angle secondaries in interactions by hadrons incident at large zenith angles or by hard scatterings. However, we found only 9 upward-going events in the whole ascent sample. They were easily removed by means of the time of flight measurement (Test 10 of Table I).
The degree of possible electron contamination varies with altitude and energy because of the different development of the electron and muon fluxes in the atmosphere. For energies below about 1 GeV, the worst conditions for the ratio of muon to electron flux are expected at depths of less than 100 g/cm<sup>2</sup>, where the muon flux is still increasing with atmospheric depth and the electron flux has already reached its maximum . Low-energy electrons and positrons misidentified as muons in the calorimeter were rejected from the muon sample by means of the Cherenkov selection shown in Test 5 of Table II. We used the number of Cherenkov-identified electron events to estimate the upper limit to the altitude-dependent residual contamination as given in Table III.
Spillover events are particles whose charge sign is misinterpreted in the magnet spectrometer. This source of background needs to be considered for the negative muon sample because of the large number of protons at high altitudes. As a consequence of the high performance of the magnet spectrometer, spillover is expected to be only a negligible source of background in the momentum range of this investigation. In fact, we carried out a simulation , which takes into account the details of the magnetic field in the spectrometer and of the detector response. We found that the spillover background cannot be more than 1% of the negative muon events even near the float altitude and in the highest rigidity bin, where one might expect some contribution.
Background due to pions and kaons is the major concern for the muon measurements, because it is not possible to identify mesons that do not interact in the calorimeter. From theoretical expectations for the pion and kaon fluxes in the atmosphere , we estimated that for muon momenta less than 10 GeV/$`c`$ pions do not significantly contaminate the muon measurements at depths larger than 200 g/cm<sup>2</sup>, while an altitude-dependent pion contamination of the order of 1–2% cannot be excluded at smaller depths. The fraction of contaminating pions may be larger at larger particle momenta. The kaon contamination is negligible everywhere. There is also the possibility that locally produced particles, namely secondaries produced by hadrons interacting in the shell or in the lift bar above the payload, may be detected as single muon-like events. In order to reject such events, we excluded from the analysis all the tracks whose extrapolation intersected the lift bar. In addition, we placed severe requirements on the reconstructed tracks, as illustrated previously, by which multiple particles from an interaction that are incident within the instrument can be rejected. From an analysis of simulated events, we estimated that the possible residual contamination from locally produced particles is negligible, except at very low energies and at small atmospheric depths. In order to evaluate the possible extent of contamination in this region, we checked the number of negative events which were selected as muons in the rest of the apparatus and passed a pion selection criterion in the calorimeter. From this fraction and from the estimated efficiency for such a test to detect pions, we estimated that a contamination by locally produced particles at a level of up to 20% cannot be excluded for muons below 1 GeV/$`c`$ at small atmospheric depths. The fraction of such events decreases rapidly with increasing atmospheric depth and we found that it may not exceed 5% at depths larger than 50 g/cm<sup>2</sup>. It should be emphasized that this procedure only allows us to set an upper limit on the contamination due to this source of background. Therefore no further correction was made to the data.
Finally, the proton background is important for the positive muon measurements, since the proton flux rapidly increases with increasing altitude. Primary protons exponentially attenuate in the atmosphere with an absorption length $`\mathrm{\Lambda }`$ of about 120 g/cm<sup>2</sup> . This occurrence places a serious constraint on the range of atmospheric depth over which positive muon measurements are possible. However, the situation is different at low energy because of the geomagnetic suppression of primaries, as can be seen from the helium spectrum shown in Fig. 5. The low-energy proton component therefore has to be of a secondary nature. This can also be seen in the altitude distribution of such events . The geomagnetic suppression allows us to perform a low-energy proton rejection by means of the squared mass tests listed in Table II.
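The strength of this constraint follows directly from the exponential attenuation just quoted; the trivial sketch below evaluates the surviving fraction of primary protons at a few example depths (the depths chosen are illustrative, not analysis values).

```python
import math

def proton_survival(depth, attenuation_length=120.0):
    """Fraction of primary protons surviving to an atmospheric depth
    (both in g/cm^2), assuming simple exponential attenuation."""
    return math.exp(-depth / attenuation_length)

for depth in (5.8, 50.0, 100.0, 200.0):      # example depths in g/cm^2
    print(depth, round(proton_survival(depth), 3))
```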
### E Geometric Factor and Efficiencies
#### 1 Geometric Factor and Global Efficiencies
The geometric factor of the apparatus was estimated by means of two independent codes for the containment conditions listed in Table I. The accuracy of such calculations was estimated to be better than 1%.
Particles generated at the top of the apparatus were followed down to the bottom of the calorimeter and then traced up to the level of the lift bar. By requiring that the track should not intersect the suspension bar, about 10% of the events above 1 GV that cross the whole detector were eliminated. The deflection dependence of the geometric factor for the different cases is shown in Fig. 6. The small difference between positive and negative particles at high deflection is due to a mechanical asymmetry of the magnet with respect to the detector stack.
The following global efficiencies were introduced in each analysis: (a) a trigger efficiency of 0.825$`\pm `$0.010 measured in a ground test before the launch; (b) a time-dependent livetime fraction, which varied during the ascent as shown in Fig. 7 and reached a value of 0.66$`\pm `$0.01 at float; (c) a rigidity dependent reconstruction efficiency, shown in Fig. 8 for muons. While the reconstruction efficiency, at high energy, is the same for protons and muons, it is significantly lower for helium nuclei. Above the geomagnetic cut-off the reconstruction efficiencies were nearly constant; they were 0.959$`\pm `$0.012 for protons and muons and 0.917$`\pm `$0.032 for helium. No dependence was found on the sign of charge for muons.
#### 2 Proton and Helium Selection Efficiencies
The scintillator efficiencies for charge 1 (protons and muons) and charge 2 (helium) particle selection were determined using samples of events tagged by the bottom scintillator detector; this information was not used in the analysis for the event selection. This technique allows a reliable evaluation of the selection efficiencies. We estimated a selection efficiency of 0.945$`\pm `$0.001 for protons and muons and 0.882$`\pm `$0.022 for helium nuclei.
#### 3 Muon Selection Efficiencies
In addition to the above efficiencies, the following selection efficiencies were considered in estimating the muon fluxes: (a) a calorimeter efficiency of 0.888 $`\pm `$ 0.008; (b) the requirement of the presence of the calorimeter information introduced a further efficiency of 0.851 $`\pm `$ 0.005, because of some acquisition failures; (c) the Cherenkov test at low energy was passed by muons with an efficiency of 0.998 $`\pm `$ 0.001; (d) the efficiency for the time of flight selection of low-energy positive muons was found to be 0.992 $`\pm `$ 0.004 and 0.908 $`\pm `$ 0.014, respectively in the 0.65–1.25 GV and 1.25–1.5 GV rigidity ranges.
The overall efficiency for muon selection was therefore a function of time and energy. It ranged for negative muons from a minimum of 0.298 $`\pm `$ 0.006 at 0.3 GeV/$`c`$ at maximum deadtime to the value of 0.539 $`\pm `$ 0.011 above 4 GeV/$`c`$ and at minimum deadtime. The detection efficiency for positive muons was slightly lower because of the additional squared mass selection criterion.
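The overall efficiency quoted above is simply the product of the individual factors listed in the preceding subsections. The sketch below reproduces this bookkeeping; the livetime fraction at minimum deadtime and the reconstruction efficiency at 0.3 GeV/$`c`$ are entered as assumed values, since in the text they are read off Figs. 7 and 8 rather than quoted as numbers.

```python
def muon_efficiency(livetime, reco, trigger=0.825, scint=0.945,
                    calo=0.888, calo_readout=0.851, cherenkov=0.998, tof=1.0):
    """Overall muon selection efficiency as the product of the independent
    factors quoted in the text (trigger, livetime, reconstruction,
    scintillator, calorimeter, calorimeter readout, Cherenkov, TOF)."""
    return trigger * livetime * reco * scint * calo * calo_readout * cherenkov * tof

# Above 4 GeV/c at minimum deadtime (livetime fraction assumed ~0.95):
print(round(muon_efficiency(livetime=0.95, reco=0.959), 3))   # ~0.54
# At 0.3 GeV/c and maximum deadtime (livetime and reco values assumed):
print(round(muon_efficiency(livetime=0.66, reco=0.80), 3))    # ~0.31
```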
## IV Results
### A Muon Results
With the selection described in the previous sections, we selected a sample of 4,471 negative and 2,856 positive muons distributed in the atmospheric depth range of 5-886 g/cm<sup>2</sup>. As previously mentioned, the momentum range investigated for negative muons was from 0.3 to 40 GeV/$`c`$, while positive muons were selected in the 0.3–1.5 GeV/$`c`$ interval.
We followed the same procedure developed for our previous analysis for the reconstruction of the flux growth curves in the atmosphere. In particular, a parametrization of the form:
$$\mathrm{\Phi }(X)=kXe^{-X/\mathrm{\Lambda }}$$
(5)
was adopted in order to describe the dependence of the muon flux in the different momentum intervals upon the atmospheric depth $`X`$, where $`k`$ and $`\mathrm{\Lambda }`$ are varied to fit the data. Results on the depth dependence of muons of either charge are shown in Fig. 9, and are also given in Table IV: it may be noted that the positive and negative curve shapes do not show any noticeable difference.
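A least-squares fit of Eq. (5) to binned fluxes is straightforward; the sketch below shows the procedure with invented placeholder numbers, which are not the values of Table IV.

```python
import numpy as np
from scipy.optimize import curve_fit

def growth_curve(X, k, lam):
    """Muon flux versus atmospheric depth X (g/cm^2), Eq. (5)."""
    return k * X * np.exp(-X / lam)

# Placeholder data points (not the Table IV values)
X_data   = np.array([25., 75., 150., 250., 400., 600., 886.])
flux     = np.array([3.0, 7.5, 10.0, 9.5, 7.0, 4.0, 2.0])
flux_err = 0.1 * flux

popt, pcov = curve_fit(growth_curve, X_data, flux, sigma=flux_err, p0=(0.1, 300.0))
k_fit, lam_fit = popt
print(k_fit, lam_fit)    # lam_fit is the fitted attenuation length in g/cm^2
```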
Fig. 10 shows the muon charge ratio in the atmosphere in two different energy intervals. It can be noticed that our results do not show any definite trend of the charge ratio changing with atmospheric depth. On the other hand, it may be pointed out that the depth-averaged value of the $`\mu ^+/\mu ^{}`$ ratio increases with increasing momentum of the particles, being 1.12 $`\pm `$ 0.04 and 1.23 $`\pm `$ 0.05 respectively in the 0.3–0.9 and 0.9–1.5 GeV/$`c`$ momentum bins. These values are consistent with the ratio measured at float in the same experiment . Fig. 10 also shows that, while there is a general agreement among results in the low energy bin at large atmospheric depths, there is noticeable difference at low altitudes below 100 g/cm<sup>2</sup>. In addition to the results shown in Fig. 10, results are also available at very small atmospheric depths. The CAPRICE experiment reported an average value of 1.64$`\pm `$0.08 between 0.2 and 2.3 GeV/$`c`$ at 3.9 g/cm<sup>2</sup> of residual atmosphere , while a ratio of 1.26$`\pm `$0.12 for 0.3–1.3 GeV/$`c`$ muons was previously found at 11 g/cm<sup>2</sup> . It is not clear from the literature how much of the differences in the observed ratio could be ascribed to the different experimental conditions.
The measured spectra of negative muons at different depths between 25 and 255 g/cm<sup>2</sup> are shown in Fig. 11 and also in Table V. The results in Table IV show that, in spite of the differences in the growth pattern of the muon flux for different momentum intervals, the estimated values of the effective atmospheric depth (FAD) do not differ by more than 1% at all depths, except at the largest depth interval. Above 1.5 GeV/$`c`$, the negative muon spectra may be parametrized as power-laws with a power index of 2.45$`\pm `$0.05, almost independent of the atmospheric depth, and in close agreement with our previous observations in . A comparison between these two measurements shows that the normalizations of the two sets of results are in good agreement in the 1–8 GeV/$`c`$ interval. A comparison between these two experiments at lower energy is less straightforward, due to the different conditions of solar modulation and geomagnetic cut-off of the two experiments. As shown in Fig. 12, we measured a significant deficit of low-energy muons in the 1991 flight with respect to the 1989 experiment over a large range of atmospheric depth.
### B Proton and Helium Results
From the events recorded at the float, we have selected 118,637 proton events and 15,207 helium events for the analysis. These events were collected over a period of 35,330 s. After subtracting the estimated background, the number of events were corrected for the selection efficiencies. The flux for each selected energy bin at the spectrometer level was estimated using the time of observation and the calculated geometric factor. In the case of protons, we have chosen the rigidity range between 3.3 and 100 GV, where the contribution from the atmospheric secondaries is small. The helium spectrum was investigated between 3 and 100 GV.
The estimated flux values at the spectrometer level were corrected to the top of the payload by taking into account inelastic interactions and ionization energy loss in the detectors above the spectrometer (namely, the plastic scintillator counters and the gas Cherenkov detector) and in the aluminum dome of the payload. The proton flux was then extrapolated to the top of the atmosphere by making use of the procedure described by Papini et al. , which includes the ionization and interaction losses as well as the secondary production in the residual atmosphere above the apparatus. In the case of the helium nuclei, in addition to the ionization and interaction losses, the production by heavy nucleus spallation was taken into account by considering the appropriate helium attenuation length instead of the helium interaction length.
The proton and helium fluxes at the top of the atmosphere are given in Tables VI and VII. The spectral index $`\gamma `$ is $`2.708\pm 0.037`$ for protons above 30 GeV and $`2.65\pm 0.19`$ for the helium flux above 15 GeV/n. The measured spectra are shown in Fig. 13, where the geomagnetic effect is evident below 3.5 GeV for protons and below 1.5 GeV/n for helium. The spectral shapes of the data, above the geomagnetic cut-off, show that the solar modulation effect is noticeable despite the high value of geomagnetic cut-off for this experiment.
Because of the penumbral bands associated with the geomagnetic cut-off rigidities at mid-latitudes, primary cosmic rays are partially transmitted through the earth’s magnetic field near the cut-off. In the following, we attempt to determine the geomagnetic transmission function, which is defined as the fraction of cosmic rays of given energy to reach the Earth after the interaction with the geomagnetic field, from our observation. For this purpose, we make use of the observed helium spectrum rather than the proton spectrum, because at low energies the secondary production of protons in the atmosphere influences the measured proton spectrum.
In Fig. 14(a) the helium flux is shown as a function of rigidity together with the curve of Fig. 13 corresponding to the maximum of solar modulation . The ratio between the experimental points and the curve is shown in Fig. 14(b). This ratio can be taken as representative of the transmission function. The dashed curve is the best-fit parametrization of the data with a simple curve:
$$GF(R)=\left\{\left[\left(0.920\pm 0.010\right)\times \left(R/R_c\right)\right]^{-\left(23.2\pm 2.0\right)}+1\right\}^{-\left(0.385\pm 0.040\right)},$$
(6)
where $`R_c`$=4.1 GV represents the average value of the effective vertical cut-off rigidity over the flight trajectory. This average value has been estimated using the vertical cut-off map by Shea and Smart . The position of the payload changed between $`34^{\circ }43^{\prime }`$ and $`35^{\circ }29^{\prime }`$ of N-latitude and between $`103^{\circ }38^{\prime }`$ and $`104^{\circ }25^{\prime }`$ of W-longitude during the flight, with a small variation in the value of vertical cut-off.
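For reference, a minimal sketch evaluating the transmission function of Eq. (6) with the central values of the fitted parameters and $`R_c`$ = 4.1 GV:

```python
# Geomagnetic transmission function of Eq. (6), evaluated with the central values.
def transmission(R, Rc=4.1, a=0.920, p=23.2, q=0.385):
    """Fraction of primaries of rigidity R [GV] transmitted through the geomagnetic field."""
    return ((a * R / Rc) ** (-p) + 1.0) ** (-q)

for R in [3.0, 4.0, 4.5, 6.0, 10.0]:
    print(f"R = {R:5.1f} GV  ->  GF = {transmission(R):.3f}")
```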
It can be useful to have an analytical representation of the measured primary fluxes . For this purpose, it has been found that a simple function of the form
$$J(E)=a\left(E+b\text{e}^{-cE}\right)^{-\gamma }\times GF(E)$$
(7)
can fit the data both for proton and helium spectra. In (7) $`a`$, $`b`$, $`c`$ are free parameters, $`\gamma `$ is the slope of the spectrum at high energy, $`GF(E)`$ is the geomagnetic transmission function and $`E`$ is the kinetic energy per nucleon. The parameter values obtained for protons are: $`a=11169\pm 121`$, $`b=2.682\pm 0.046`$, $`c=0.0950\pm 0.0059`$ with a reduced $`\chi ^2=1.12`$; the corresponding values for helium are $`a=406\pm 14`$, $`b=1.416\pm 0.068`$, $`c=0.203\pm 0.039`$ with a reduced $`\chi ^2=0.51`$. We found that a parametrization (7) can represent, with the same accuracy and in the same energy range explored in this work, the observed spectra of all recent measurements by using different values for the constants.
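The sketch below evaluates the parametrization of Eq. (7) with the quoted central values; the conversion from kinetic energy per nucleon to rigidity inside the transmission function is a rough assumption of ours, not taken from the text.

```python
# Sketch of the analytical primary-flux parametrization of Eq. (7).
import numpy as np

def rigidity(E, A_over_Z=1.0, m=0.938):
    """Approximate rigidity [GV] for kinetic energy per nucleon E [GeV/n] (assumption)."""
    return A_over_Z * np.sqrt(E * (E + 2.0 * m))

def transmission(R, Rc=4.1):
    return ((0.920 * R / Rc) ** (-23.2) + 1.0) ** (-0.385)

def primary_flux(E, a, b, c, gamma, A_over_Z=1.0):
    """J(E) = a (E + b exp(-cE))^(-gamma) * GF(E), Eq. (7)."""
    return a * (E + b * np.exp(-c * E)) ** (-gamma) * transmission(rigidity(E, A_over_Z))

print("protons at 10 GeV:  ", primary_flux(10.0, 11169., 2.682, 0.0950, 2.708))
print("helium  at 10 GeV/n:", primary_flux(10.0, 406., 1.416, 0.203, 2.65, A_over_Z=2.0))
```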
The comparison of the results from this experiment with data from other experiments is shown in Fig. 15 for protons and in Fig. 16 for helium. In general, it seems that there are several inconsistencies among the different experiments. Such discrepancies cannot be ascribed completely to the solar modulation effect, since they are noticed even at high energies where the solar modulation effect is very small. If we compare only the most recent data, as shown in Fig. 17 for energies above 10 GeV/n, we see that the discrepancies are reduced. However, the differences between different data in some cases are of the order of 20–30%, considerably larger than the estimated errors. It is difficult to establish a priori what systematics affect the different experiments. Therefore, in order to avoid the effect of such systematic errors in the comparison between atmospheric and primary cosmic ray fluxes, the approach proposed in this paper is to use the same apparatus to measure both the atmospheric muons and their parent primary particle fluxes.
## V Conclusions
We have reported on simultaneous measurements of atmospheric muons and of primary cosmic rays taken with the same apparatus in a balloon experiment. The muon measurements cover the atmospheric depth range between 5 and 886 g/cm<sup>2</sup>. Negative muon spectra were measured in the momentum range 0.3–40 GeV/$`c`$, while positive muons between 0.3 and 1.5 GeV/$`c`$. The proton and helium measurements were carried out at 5.8 g/cm<sup>2</sup>, in the 3–100 GV rigidity range. Corrections were applied in order to calculate the expected primary fluxes at the top of the atmosphere. The geomagnetic transmission function at mid–latitude has been determined. The data analysis procedures for primary nuclei and muon fluxes were similar. Nevertheless, some differences in the selection criteria for different particles were used. For this reason we can estimate a normalization uncertainty of 1% between proton and negative muon fluxes, and of 2% between proton and positive muon fluxes. The availability of results of muons and primaries taken with the same detector in the same experiment may help decrease the uncertainties in the atmospheric neutrino calculations.
###### Acknowledgements.
We acknowledge very useful discussions with T. Stanev, T. K. Gaisser and also with V. A. Naumov. We thank the National Scientific Balloon Facility (Palestine, Texas), which operated the flight. This work was supported by NASA Grant NAG-110, DARA and DFG, Germany, the Istituto Nazionale di Fisica Nucleare, Italy, and the Agenzia Spaziale Italiana, as part of the research activities of the WIZARD collaboration. Special thanks to our technical support staff from NMSU and INFN.
# Experimental test for extra dimensions in Kaluza-Klein gravity
## I Introduction
Most modern theories which attempt to unify gravity with the Standard Model gauge theory have extra dimensions. These extra dimensions make it possible to geometrize the gauge fields (gauge bosons) according to the following theorem:
Let $`G`$ be the group fibre of the principal bundle. Then there is a one-to-one correspondence between the $`G`$-invariant metrics
$$ds^2=h^2(x^\mu )(\sigma ^a+A_\mu ^adx^\mu )^2+g_{\mu \nu }dx^\mu dx^\nu $$
(1)
on the total space $`𝒳`$ and the triples $`(g_{\mu \nu },A_\mu ^a,h)`$. Here $`g_{\mu \nu }`$ is Einstein’s pseudo - Riemannian metric on the base; $`A_\mu ^a`$ are the gauge fields of the group $`G`$ ( the nondiagonal components of the multidimensional metric); $`h\gamma _{ab}`$ is the symmetric metric on the fibre.
The off-diagonal components of multidimensional (MD) metric act as Yang-Mills fields. One distinction between such MD theories and 4D theories with Yang-Mills fields is that the MD theories have a scalar field connected with the extra dimension(s). This scalar field describes the linear size of the extra dimensions or equivalently the volume of gauge group. Thus one possible experimental test for any MD gravity theory is the observation of effects arising from this scalar field. For example, it is possible to show that in 5D Kaluza - Klein theory the presence of variations of the 5<sup>th</sup> coordinate leads to changes in the ratio of the electrical charge to the mass of an elementary particle. This effect is very small since no experiment has found such a change.
In this paper we offer a new possible experimental signal for probing the extra dimensions of MD gravity based on the existence of a certain type of spherically symmetric nonasymptotically flat solutions .
## II Wormhole and flux tube solutions in 5D gravity.
Before discussing the 5D solutions we will briefly recall a couple of spherically symmetric 4D electrogravity solutions which will be used for comparison with the 5D solutions. First there is the well known, asymptotically flat generalized Reissner-Nordström solution which gives the gravitational and electromagnetic fields for a point mass with both electric and magnetic charges (the time-time component of the metric has the form $`g_{tt}=(1-\frac{m}{r}+\frac{q^2+Q^2}{r^2})`$ where $`m`$ is the mass and $`q,Q`$ are the electric and magnetic charges respectively). Second, there is the nonasymptotically flat, spherically symmetric Levi-Civita flux tube solution with the metric
$`ds^2`$ $`=`$ $`a^2\left(\mathrm{cosh}^2\zeta dt^2-d\zeta ^2-d\theta ^2-\mathrm{sin}^2\theta d\phi ^2\right),`$ (2)
$`F_{01}`$ $`=`$ $`\rho ^{1/2}\mathrm{cos}\alpha ,F_{23}=\rho ^{1/2}\mathrm{sin}\alpha ,`$ (3)
where $`G^{1/2}a\rho ^{1/2}=1`$ ; $`\alpha `$ is an arbitrary constant angle; $`a`$ and $`\rho `$ are constants defined by Eq. (2) - (3); $`G`$ is Newton’s constant ($`c=1`$, is the speed of light); $`F_{\mu \nu }`$ is the electromagnetic field tensor. Both the generalized Reissner-Nordström solution and the Levi-Civita flux tube solution place no restrictions on the relative values of the electric and magnetic charges.
In 5D Kaluza - Klein theory there are intriguing wormhole (WH) and flux tube solutions . Here we give a brief summary of these solutions. The general form of the metric is:
$`ds^2`$ $`=`$ $`e^{2\nu (r)}dt^2-r_0^2e^{2\psi (r)-2\nu (r)}\left[d\chi +\omega (r)dt+n\mathrm{cos}\theta d\phi \right]^2`$ (4)
$``$ $`-dr^2-a(r)(d\theta ^2+\mathrm{sin}^2\theta d\phi ^2),`$ (5)
where $`\chi `$ is the 5<sup>th</sup> coordinate; $`\omega =A_t`$ and $`n\mathrm{cos}\theta =A_\varphi `$ are the 4D electromagnetic potentials; $`n`$ is an integer; $`r,\theta ,\phi `$ are “polar” coordinates. The 5D spacetime is the total space of the U(1) principal bundle, where the fibre is the U(1) gauge group and the base is ordinary 4D spacetime.
A detailed analytical and numerical investigation of the metric in Eq. (5) gives the following spacetime configurations, whose global structure depends on the relationship between the electric and magnetic fields :
1. $`0\le H_{KK}<E_{KK}`$. The corresponding solution is a WH-like object located between two surfaces at $`\pm r_0`$ where the reduction from 5D to 4D spacetime breaks down. The cross-sectional size of this solution (given by $`a(r)`$) increases as $`r`$ goes from $`0`$ to $`\pm r_0`$. The throat between the $`\pm r_0`$ surfaces is filled with electric and/or magnetic flux. As the strength of the magnetic field increases the longitudinal distance between the surfaces at $`\pm r_0`$ increases. This can be seen diagrammatically from the first two pictures in Fig.1.
2. $`H_{KK}=E_{KK}`$. In this case the solution is an infinite flux tube filled with constant electrical and magnetic fields, with the charges disposed at $`\pm \mathrm{}`$. The cross-sectional size of this solution is constant ($`a=const.`$). In Refs. an exact, analytical form of this solution was given in terms of hyperbolic functions. This solution if almost identical to the 4D Levi-Civita flux tube solution except the strength of the magnetic and electric fields are equal, while in the Levi-Civita solution the two fields can take on any relative value with respect to one another. The restriction that the electric charge equals the magnetic charge is reminiscent of other higher dimensional soliton solutions. In Ref. non-Abelian, Kaluza-Klein dyon solutions were found which obeyed the same restriction that the “electric” charge equal the “magnetic” charge. The present flux tube solution can be viewed as two connected or bound Kaluza-Klein dyons. The form of this infinite flux tube configuration also has similarities to the Anti-de Sitter (AdS) “throat region” that one finds by stacking a large number of D3-branes . Both the spacetime around the D3-branes and the electric/magnetic flux tube have indefinitely long cylindrical “throats” which can be thought of as ending either on the horizon of a black hole (for the D3-branes solution), or on an electric/magnetic charged object (for the flux tube solution).
3. $`0<E_{KK}<H_{KK}`$. In this case we have a finite flux tube between two (+) and (-) magnetic and/or electric charges, which are located at $`\pm r_0`$. The longitudinal size of this flux tube is finite, but now the cross sectional size decreases as $`r\to \pm r_0`$. At $`r=\pm r_0`$ this solution has real singularities which we interpret as the locations of the magnetic and/or electric charges. The behavior of this flux tube solution as $`E_{KK}`$ decreases can be seen diagrammatically from the last two pictures in Fig. 1.
## III The basic idea
In Ref. the idea was advanced that a piecewise compactification mechanism can exist in Nature. Piecewise compactification implies that some parts of the Universe are regions where one has full MD gravity (5D in our case), while other parts of the Universe are ordinary 4D regions where gravity does not act on the extra dimensions. For this mechanism to be viable it is necessary that on the boundary between these regions a quantum splitting off of the 5<sup>th</sup> dimension occurs. In regions where gravity propagates in all the dimensions the Universe will appear as a true 5D spacetime. <sup>*</sup><sup>*</sup>*in this case the fifteen 5D Einstein vacuum equations = 4D gravity + Maxwell electrodynamic + scalar field. In the regions where gravity does not propagate into the extra dimension one has ordinary 4D spacetime plus the gauge fields of the fibre. in this case there are fourteen 5D Einstein vacuum equations = 4D gravity + Maxwell electrodynamic, where $`G_{55}`$ = scalar field does not vary.. The boundary between these regions should be Lorentz invariant surfaces for the 4D observer at infinity it will appear as an event horizon.. An example of such a construction is the composite WH of Ref. which consists of two 4D Reissner - Nordström black holes attached to either end of a 5D WH solution (see Fig.2).
The proposed experimental signal for the extra dimensions in MD gravity relies on these postulated composite WH structures. The basic idea is the following: Composite WHs can act as quantum handles (quantum WHs) in the spacetime foam. These quantum structures can be “blown up” or “inflated” from a quantum state to a classical state by embedding them in parallel $`\mathrm{E}`$ and $`\mathrm{H}`$ fields with $`\mathrm{E}>\mathrm{H}`$. These quantum handles are taken as quantum fluctuations in the spacetime foam which the externally imposed $`E`$ and $`H`$ fields can then promote to classical states with some probability. This process is envisioned as taking place inside a solenoid which has an additional electric field, $`E`$, parallel to the magnetic field $`H`$.
The above process has some similarity to the pair production of electric or magnetic charged black holes in an external electric or magnetic field . In Ref. it was shown that, despite the fact that the Maxwell action ($`F_{\mu \nu }F^{\mu \nu }=𝐇^2-𝐄^2`$) changes sign under a dual transformation of $`𝐇`$ and $`𝐄`$, the pair production of electric black holes and magnetic black holes is identical and suppressed. In the next section we consider an approximate flux tube solution which has $`𝐄\approx 𝐇`$ and therefore has a Maxwell action which is approximately zero.
## IV A more detailed description
The first three pictures in Fig. 1 represent solutions where the charges are unconfined and separated by some finite, longitudinal distance. For an external observer these composite WHs will appear as two oppositely charged electric/magnetic objects, with the charges located on the surfaces where the 4D and 5D spacetimes are matched. Since one would like these electric/magnetic charged objects to be well separated, we will consider the case $`E\approx H`$ $`(E>H)`$. (This leads to the Maxwell action, $`F^2=H^2-E^2`$, being approximately zero). Under these conditions the solution to Einstein’s MD vacuum equations for the metric ansatz given in Eq. (5) is
$`q\approx Q,`$ (6)
$`a\approx {\displaystyle \frac{q^2}{2}}=const,`$ (7)
$`e^\psi \approx e^\nu \approx \mathrm{cosh}\left({\displaystyle \frac{r\sqrt{2}}{q}}\right),`$ (8)
$`\omega \approx {\displaystyle \frac{\sqrt{2}}{r_0}}\mathrm{sinh}\left({\displaystyle \frac{r\sqrt{2}}{q}}\right)`$ (9)
here $`q`$ is the electrical charge and $`Q`$ the magnetic charge. Both Kaluza - Klein fields are:
$$E\approx H\approx \frac{q}{a}\approx \frac{2}{q}\approx \sqrt{\frac{2}{a}}$$
(10)
The cross sectional size of the WH is proportional to $`q^2`$. According to our scenario the external, parallel electric and magnetic fields should fill the virtual WH. Changing Eqs. (6)-(9) into cgs units, the electric and magnetic fields necessary for forming a composite WH with a cross sectional size $`a`$ are
$$E\approx H\approx \frac{c^2}{\sqrt{G}}\sqrt{\frac{2}{a}}$$
(11)
From Eq. (11) it can be seen that the larger the cross sectional size, $`a`$, of the WH the smaller the $`E`$ and $`H`$ fields. However, in order for the charged surfaces of the WH to appear as well separated electric/magnetic charged objects we need to require that the longitudinal distance, $`l`$, between these surfaces be much larger than the cross sectional size of the WH, $`l\gg \sqrt{a}`$. Also in order to be able to separate the two ends of the WH as distinct electric/magnetic charged objects one needs the external force to be much larger than the interaction force between the oppositely charged ends. This leads to the following condition which is illustrated in Fig.3.
$`F_{ext}`$ $`=`$ $`qE+QH\gg {\displaystyle \frac{q^2+Q^2}{l^2}}=F_{int}`$ (12)
$``$ $`\approx 2qE\gg {\displaystyle \frac{2q^2}{l^2}}\Rightarrow l\gg \sqrt{a}`$ (13)
If this condition holds then the oppositely charged ends will move apart. Otherwise the ends will come back together and annihilate back into the spacetime foam.
The average value of $`a`$ for spacetime foam is given by the Planck size $`\sqrt{a}\approx L_{Pl}\approx 10^{-33}`$ cm. Thus the relevant field $`E`$ should be $`E\approx \sqrt{2c^7}/(G\sqrt{\mathrm{\hbar }})\approx 3.1\times 10^{57}V/cm`$. This field strength is in the Planck region, and is well beyond experimental capabilities to create. Hence one must consider quantum WHs whose linear size satisfies $`\sqrt{a}\gg L_{Pl}`$. The larger $`\sqrt{a}`$ the smaller the field strength needed. But such large quantum WHs are most likely very rare. If $`f(a)`$ is the probability density for the distribution for a WH of cross section $`a`$ then $`f(a)da`$ gives the probability for the appearance of a quantum WH with cross section $`a`$. The bigger $`a`$ the smaller the probability, $`f(a)da`$. Also the larger the value of $`E`$ and $`H`$ the smaller is the cross sectional size $`a`$ of the WH that can be inflated from the spacetime foam. Thus depending on the unknown probability $`f(a)da`$ one can set up some spatial region with parallel $`E`$ and $`H`$ fields whose magnitudes are as large as technologically feasible, and look for electric/magnetic charged objects whose charges are of similar magnitude. Finally, it has been proposed that the Planck scale may occur at a much lower energy scale ($`10^3`$ GeV) than is normally thought ($`1\times 10^{19}`$ GeV) due to the presence of large, extra dimensions. In such a scenario $`\sqrt{a}\approx L_{Pl}\approx 10^{-18}`$ cm, and the field strength would decrease by fifteen orders of magnitude so that $`E\approx 3.1\times 10^{42}`$ V/cm from above. This is still beyond experimental capabilities, however now one can consider quantum WHs that are of a smaller size, $`a`$, as compared to the standard case when the Planck size is $`10^{-33}`$ cm. Combining the large extra dimension scenario with the inflation of the electric/magnetic infinite flux tube solution by external fields tends to increase the probability of observing such an event.
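As a rough numerical check of these estimates, the following sketch evaluates Eq. (11) in Gaussian cgs units for two illustrative wormhole sizes (the ordinary Planck length and the shorter length scale of the large-extra-dimension scenario); the choice of sizes is ours.

```python
# Rough check of Eq. (11), E ~ (c^2/sqrt(G)) sqrt(2/a), in Gaussian cgs units;
# for sqrt(a) equal to the Planck length this reproduces the ~3x10^57 field
# strength quoted in the text.
import math

c = 2.998e10        # cm/s
G = 6.674e-8        # cm^3 g^-1 s^-2
L_Pl = 1.616e-33    # cm

def field_strength(sqrt_a_cm):
    """Field (statvolt/cm) needed to inflate a wormhole of linear size sqrt(a)."""
    return (c ** 2 / math.sqrt(G)) * math.sqrt(2.0) / sqrt_a_cm

for size in (L_Pl, 1e-18):
    print(f"sqrt(a) = {size:.2e} cm  ->  E ~ {field_strength(size):.1e} statvolt/cm")
```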
The energy density $`u`$ of electrical and magnetic fields stored in such an inflated WH is
$$u=\frac{E^2}{8\pi }+\frac{H^2}{8\pi }\approx \frac{E^2}{4\pi }=\frac{1}{4\pi }\frac{c^4}{G}\frac{2}{a}.$$
(14)
In this case the energy $`U`$ is
$$U\approx \pi alu=\frac{c^4}{2G}l$$
(15)
$`U`$ increases linearly with $`l`$ as one would expect for two objects connected by a flux tube. This places a restriction on $`l`$, since as $`l`$ increases beyond a certain point the energy will be large enough to favor creating another electric/magnetic charged pair.
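For orientation, the linear energy density $`c^4/(2G)`$ appearing in Eq. (15) can be evaluated directly; this is a back-of-the-envelope check of ours, not a number quoted in the text.

```python
# "String tension" of the flux tube, c^4/(2G), i.e. the stored energy per unit length.
c = 2.998e10   # cm/s
G = 6.674e-8   # cm^3 g^-1 s^-2
tension = c ** 4 / (2.0 * G)
print(f"U/l = c^4/(2G) ~ {tension:.2e} erg/cm")
```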
## V Conclusion
We have presented a possible experimental scheme to test the presence of the higher dimensions in MD gravity through the use of certain WH-like solutions, and an assumption about piecewise compactification on the surface where the reduction from 5D to 4D breaks down.
The difference between the present solutions and the 4D Levi-Civita flux tube solution is that the 5D solution requires the electric and magnetic charges to be of the same magnitude. This restriction on the charges is similar to that for certain non-Abelian, Kaluza-Klein dyon solutions . In the 4D case any relative strength between the charges is allowed. For both the 4D and 5D solutions it can be asked how the Dirac condition between electric/magnetic charges fits into all of this. Recently an investigation into closely related cosmic magnetic flux tube solutions was carried out . It was found that in the context of these GR solutions the Dirac condition is modified so that (magnetic flux) + (dual electric charge) is the quantized object rather than just the magnetic flux.
## VI Acknowledgments
VD is supported by a Georg Forster Research Fellowship from the Alexander von Humboldt Foundation and thanks H.-J. Schmidt for the invitation to Potsdam Universität for research.
# Sensitivity of wide band detectors to quintessential gravitons
## I Introduction
Gravitational wave astronomy, experimental cosmology and high energy physics will soon experience a boost thanks to the forthcoming interferometric detectors. From a theoretical point of view it is then interesting to compare our theoretical expectations/speculations with the foreseen sensitivities of the various devices in a frequency range which complements and greatly extends the information we can derive from the analysis of the microwave sky and of its temperature fluctuations.
By focusing our attention on relic gravitons of primordial origin we can say that virtually every variation in the time evolution of the curvature scale can imprint important information on the stochastic gravitational wave background . The problem is that the precise evolution of the curvature scale is not known. Different cosmological scenarios, based on different physical models of the early Universe, may lead to different energy spectra of relic gravitons and this crucial theoretical indetermination can affect the expected signal.
Of particular interest seems to be the case where the logarithmic energy density of the relic gravitons (in critical units) grows in the frequency region explored by the interferometric detectors (i.e., approximately between few Hz and $`10`$ kHz) . In this range we can parametrize the energy density of the relic gravitons $`\rho _{\mathrm{GW}}`$ at the present time $`\eta _0`$ as
$$\mathrm{\Omega }_{\mathrm{GW}}(f,\eta _0)=\frac{1}{\rho _c}\frac{d\rho _{\mathrm{GW}}}{d\mathrm{ln}f}=\overline{\mathrm{\Omega }}(\eta _0)q(f,\eta _0)$$
(1.1)
where $`\overline{\mathrm{\Omega }}(\eta _0)`$ denotes the typical amplitude of the spectrum and $`q(f,\eta _0)`$ is a monotonic function of the frequency at least in the interval $`1\mathrm{Hz}<f<\mathrm{\hspace{0.33em}10}\mathrm{kHz}`$. Both $`\overline{\mathrm{\Omega }}(\eta _0)`$ and $`q(f,\eta _0)`$ can depend on the parameters of the particular model. The assumption that $`q(f,\eta _0)`$ is monotonic can certainly be seen as a restriction of our analysis, but, at the same time, we can notice that the models with growing logarithmic energy spectra which were discussed up to now in the litterature fit in our choice for $`q(f,\eta _0)`$. Within the parametrization defined in Eq. (1.1) we will be discussing the cases where the spectral slope $`\alpha `$ (i.e., $`\alpha =dq(f,\eta _0)/df`$) is either blue (i.e., $`0<\alpha <\mathrm{\hspace{0.33em}1}`$) or violet (i.e., $`\alpha >1`$). In general we could have also the case $`\alpha <0`$ (red spectra) and $`\alpha =0`$ (flat spectrum). Flat spectra have been extensively studied in the context of ordinary inflationary models and in relation to cosmic string models .
Blue and violet spectra are physically peculiar since they are typically produced in models which are different from the ones leading to flat spectra. In quintessential inflationary models the logarithmic energy spectra are typically blue . This is due to the fact that in this class of models an ordinary inflationary phase is followed by an expanding phase whose dynamics is driven by an effective equation of state which is stiffer than radiation . Since the equation of state (after the end of inflation) is stiffer than the one of radiation, then the Universe will expand slower than in a radiation dominated phase and, therefore, $`\alpha `$ turns out to be at most one (up to logarithmic corrections).
In string cosmological models the graviton spectra can be either blue (if the physical scale corresponding to a present frequency of $`100`$ Hz went out of the horizon during the string phase) or violet (if the relevant scale crossed the horizon during the dilaton driven phase).
The purpose of this paper is to analyze the sensitivity of pairs of interferometric detectors to blue and violet spectra of relic quintessential gravitons. The reason for such an exercise is twofold. On one hand violet and blue spectra, owing to their growth in frequency, might provide signals which are larger than in the case of flat inflationary spectra. On the other hand the sensitivity to blue spectra from quintessential inflation can be different from the one computed in the case of flat spectra from ordinary inflationary models. Indeed, it is sometimes common practice to compare the theoretical energy density of the produced gravitons with the sensitivity of various interferometers to a flat spectrum. This is, strictly speaking, arbitrary even if, sometimes this procedure might lead to correct order of magnitude estimates.
In order to illustrate qualitatively this point let us consider the general expression of the signal-to-noise ratio (SNR) in the case of correlation of two detectors of arbitrary geometry for an observation time $`T`$. By assuming that the intrinsic noises of the detectors are stationary, gaussian, uncorrelated, much larger in amplitude than the gravitational strain, and statistically independent of the strain itself, one has Eq. (1.2) below. Notice that, with this definition, the SNR turns out to be the square root of the one used in refs. ; the reason for our definition lies in the remark that the cross-correlation between the outputs $`s_{1,2}(t)`$ of the detectors is defined as:
$$S=\int _{-T/2}^{T/2}dt\int _{-T/2}^{T/2}dt^{\prime }s_1(t)s_2(t^{\prime })Q(t,t^{\prime }),$$
where $`Q`$ is a filter function. Since $`S`$ is quadratic in the signals, with the usual definitions, it contributes to the SNR squared.
$$\mathrm{SNR}^2=\frac{3H_0^2}{2\sqrt{2}\pi ^2}F\sqrt{T}\left\{\int _0^{\mathrm{\infty }}df\frac{\gamma ^2(f)\mathrm{\Omega }_{\mathrm{GW}}^2(f)}{f^6S_n^{(1)}(f)S_n^{(2)}(f)}\right\}^{1/2},$$
(1.2)
($`H_0`$ is the present value of the Hubble parameter and $`F`$ depends upon the geometry of the two detectors; in the case of the correlation between two interferometers $`F=2/5`$). In Eq. (1.2), $`S_n^{(k)}(f)`$ is the (one-sided) noise power spectrum of the $`k`$-th $`(k=1,2)`$ detector, while $`\gamma (f)`$ is the overlap reduction function which is determined by the relative locations and orientations of the two detectors. This function cuts off (effectively) the integrand at a frequency $`f1/2d`$, where $`d`$ is the separation between the two detectors.
From eq. (1.2) we can see that the frequency dependence of the signal directly enters in the determination of the SNR and, therefore, we can expect different values of the integral depending upon the relative frequency dependence of the signal and of the noise power spectra associated with the detectors. Hence, in order to get precise information on the sensitivities of various detectors to blue and violet spectra we have to evaluate the SNR for each specific model at hand.
The analysis of the SNR is certainly compelling if we want to confront quantitatively our theoretical conclusions with the forthcoming data. Owing to the differences among the various logarithmic energy spectra of the relic gravitons we can wonder whether different detector pairs are more or less sensitive to a specific theoretical model. We will try, when possible, to state our conclusions in such a way that our results could be used also beyond the specific cases discussed in the present paper.
The plan of the paper is the following. In Section II we will review the basic features of blue spectra arising in quintessential inflationary models. In Section III we will set up the basic definitions and conventions concerning the evaluation of the SNR. In Section IV we will be mainly concerned with the analysis of the achievable sensitivities to some specific theoretical model. Section V contains our concluding remarks.
## II Blue and violet graviton spectra
### A Basic bounds
Blue and violet logarithmic energy spectra of relic gravitons are phenomenologically allowed . At low frequencies the most constraining bound comes from the COBE observations of the first (thirty) multipole moments of the temperature fluctuations in the microwave sky which implies that $`h_0^2\mathrm{\Omega }_{\mathrm{GW}}(f_0,\eta _0)`$ has to be smaller than $`6.9\times 10^{-11}`$ for frequencies of the order of $`H_0`$. At intermediate frequencies (i.e., $`f_\mathrm{p}\approx 10^{-8}`$ Hz) the pulsar timing measurements imply that $`\mathrm{\Omega }_{\mathrm{GW}}(f_\mathrm{p},\eta _0)`$ should not exceed $`10^{-8}`$. In order to be compatible with the homogeneous and isotropic nucleosynthesis scenario we should require that
$$\int h_0^2\mathrm{\Omega }_{\mathrm{GW}}(f,\eta _0)\mathrm{d}\mathrm{ln}f<0.2\mathrm{\Omega }_\gamma (\eta _0)h_0^2\simeq 5\times 10^{-6},$$
(2.1)
where $`\mathrm{\Omega }_\gamma (\eta _0)=2.6\times 10^{-5}h_0^{-2}`$ is the fraction of critical energy density in the form of radiation at the present observation time. In Eq. (2.1) the integral extends over all the modes present inside the horizon at the nucleosynthesis time. In the case of blue and violet logarithmic energy spectra the COBE and pulsar bounds are less relevant than the nucleosynthesis one and it is certainly allowed to have growing spectra without conflicting with any of the bounds.<sup>§</sup>Notice that the nucleosynthesis bound refers to the case where the underlying nucleosynthesis model is homogeneous and isotropic. The presence of magnetic fields and/or matter–antimatter fluctuations can slightly alter the picture .
### B Quintessential Spectra
Recent measurements of the red-shift luminosity relation in type Ia supernovae suggest the presence of an effective cosmological term whose energy density can be as large as $`0.8`$ in critical units. Needless to say that this energy density is huge if compared with cosmological constant one would guess, for instance, from electroweak (spontaneous) symmetry breaking, i.e., $`\rho _\mathrm{\Lambda }(250\mathrm{GeV})^4`$. In order to cope with this problem various models have been proposed and some of them rely on the existence of some scalar field (the quintessence field) whose effective potential has no minimum . Therefore, according to this proposal the evolution of the quintessence field is dominated today by the potential providing then the wanted (time-dependent) cosmological term. In the past the evolution of the quintessence field is in general not dominated by the potential. The crucial idea behind quintessential inflationary models is the identification of the inflaton $`\varphi `$ with the quintessence field . Therefore, the inflaton/quintessence potential $`V(\varphi )`$ will lead to a slow-rolling phase of de Sitter type for $`\varphi <0`$ and it will have no minimum for $`\varphi >0`$. Hence, after the inflationary epoch (but prior to nucleosynthesis ) the Universe will be dominated by $`\dot{\varphi }^2`$. This means, physically, that the effective speed of sound of the sources driving the background geometry during the post-inflationary phase will be drastically different from the one of radiation (i.e., $`c_s=1/\sqrt{3}`$ in natual units) and it will have a typical stiff form (i.e., $`c_s=1`$). The fact that in the post-inflationary phase the effective speed of sound equals the speed of light has important implications for the gravitational wave spectra as it was investigated in the past for a broad range of equations of state stiffer than radiation (i.e., $`1/\sqrt{3}<c_s<1`$) . The conclusion is that if an inflationary phase is followed by a phase whose effective equation of state is stiffer than radiation, then, the high frequency branch of the graviton spectra will grow in frequency. The tilt depends upon the speed of sound and it is, in our notations,
$$\alpha =\frac{6c_s^2-2}{3c_s^2+1}.$$
(2.2)
We can immediately see that for all the range of stiff equations of state (i.e., $`1/\sqrt{3}<c_s<1`$) we will have that $`0<\alpha <1`$ . The case $`\alpha =0`$ corresponds to $`c_s=1/\sqrt{3}`$. This simply means that if the inflationary phase is immediately followed by the ordinary radiation-dominated phase the spectrum will be (as we know very well) flat. The case $`c_s=1`$ is the most interesting for the case of quintessential inflation. In this case the tilt is maximal (i.e., $`\alpha =1`$). Moreover, a more precise calculation shows that the graviton spectrum is indeed logarithmically corrected as
$$q(f,\eta _0)=\frac{f}{f_1}\mathrm{ln}^2\left(\frac{f}{f_1}\right).$$
(2.3)
It is amusing to notice that this logarithmic correction occurs only in the case $`c_s=1`$ but not in the case of the other stiff (post-inflationary) background. The typical frequency $`f_1(\eta _0)`$ appearing in Eq. (2.3) is given, today, by
$$f_1(\eta _0)=1132N_s^{-\frac{1}{4}}\left(\frac{g_{\mathrm{dec}}}{g_{\mathrm{th}}}\right)^{\frac{1}{3}}\mathrm{GHz}.$$
(2.4)
Apart from the dependence upon the number of relativistic degrees of freedom (i.e., $`g_{\mathrm{dec}}=3.36`$ and $`g_{\mathrm{th}}=106.75`$ ) which is a trivial consequence of the red-sfhit, $`f_1(\eta _0)`$ does also depend upon $`N_s`$ which is the number of (minimally coupled) scalar degrees of freedom present during the inflationary phase. The amplitude of the spectrum depends upon $`N_s`$ as
$$\overline{\mathrm{\Omega }}(\eta _0)=\frac{1.64\times 10^{-5}}{N_s}.$$
(2.5)
The reason for the presence of $`N_s`$ is that all the minimally coupled scalar degrees of freedom present during the inflationary phase will be amplified sharing approximately the same spectrum of the two polarizations of the gravitons . The main physical difference is that the $`N_s`$ scalars are directly coupled to fermions and, therefore, they will decay and thermalize thanks to gauge interactions . If minimally coupled scalars would not be present (i.e., $`N_s=0`$) the model would not be consistent since the Universe will be dominated by gravitons with (non-thermal) spectrum given by Eq. (2.3). The energy density of the quanta associated with the minimally coupled scalars, amplified thanks to the background transition from the inflationary phase to the stiff phase, will decrease with the Universe expansion as $`a^4`$ whereas the energy density of the background will decrease as $`a^6`$. The moment at which the energy density of the background becomes sub-leading marks the beginning of the radiation dominated phase and it takes place at a (present) frequency of the order of the mHz . Notice that this frequency has been obtained by requiring the reheating mechanism to be only gravitational . This assumption migh be relaxed by considering different reheating mechanisms (see also ). In order to satisfy the nucleosynthesis constraint in the framework of a quintessential model with gravitational reheating we have to demand that
$$\frac{3}{N_s}\left(\frac{g_\mathrm{n}}{g_{\mathrm{th}}}\right)^{1/3}<0.07,$$
(2.6)
where the factor of $`3`$ counts the two polarizations of the gravitons but also the quanta associated with the inflaton and $`g_\mathrm{n}=10.75`$ is the number of spin degrees of freedom at $`t_\mathrm{n}`$. For frequencies $`f(\eta _0)>f_1(\eta _0)`$ the spectra of the produced gravitons are exponentially suppressed as $`\mathrm{exp}[f/f_1]`$. This is a general feature of the spectra of massless particles produced thanks to the pumping action of the background geometry Quintessential graviton spectra have, in general, three branches: a soft branch (for $`10^{18}\mathrm{Hz}<f<\mathrm{\hspace{0.33em}10}^{16}\mathrm{Hz}`$), a semi-hard branch (for $`10^{16}\mathrm{Hz}<f<\mathrm{\hspace{0.33em}10}^3\mathrm{Hz}`$) and a hard branch which is the one mainly discussed in the present paper. The reason for this choice is obvious since the noise power spectra of the interferometric detectors are defined in a band which falls in the region of the hard branch of the theoretical spectrum.
## III Signal-to-noise ratio for monotonic blue spectra
In order to detect a stochastic gravitational wave background in an optimal way we have to correlate the outputs of two (or more) detectors . The signal received by a single detector can be thought as the sum of two components: the signal (given by the stochastic background itself) and the noise associated with each detector’s measurement. The noise level associated with a single detector is, in general, larger than the expected theoretical signal. This statement holds for most of the single (operating and/or foreseen) gravitational waves detectors (with the possible exception of the LISA space interferometer ). Suppose now that instead of a single detectors we have a couple of detectors or, ideally, a network of detectors. The signal registered at each detector will be
$$s_i=h_i(t)+n_i(t),$$
(3.1)
where the index $`i`$ labels each different detector. If the detectors are sufficiently far apart the ensamble average of the Fourier components of the noises is stochastically distributed which means that
$$\langle n_i^{*}(f)n_j(f^{\prime })\rangle =\frac{1}{2}\delta (f-f^{\prime })\delta _{ij}S_n^{(i)}(|f|),$$
(3.2)
where $`S_n(|f|)`$ is the one-sided noise power spectrum which is usually expressed in seconds. The very same quantity can be defined for the signal. By then assuming the noise levels to be statistically independent of the gravitational strain registered by the detectors we obtain Eq. (1.2).
Consider now the case of two correlated interferometers and define the following rescaled quantities:
* $`\mathrm{\Sigma }_n^{(i)}=S_n^{(i)}/S_0`$ ($`i=\mathrm{\hspace{0.17em}1},2`$);
* $`\nu =f/f_0`$;
* $`\mathrm{\Omega }_{\mathrm{GW}}(f)=\mathrm{\Omega }(f_0)\omega (f)`$.
(in this Section we will not write the explicit dependence of the theoretical quantities upon $`\eta _0`$: they are meant to be considered at the present time). Notice that $`f_0`$ is (approximately) the frequency where the noise power spectra are minimal and $`\mathrm{\Omega }_{\mathrm{GW}}(f_0)`$ is the graviton (logarithmic) energy density at the frequency $`f_0`$. Therefore the signal-to-noise ratio can be expressed as:
$$\mathrm{SNR}^2=\frac{3H_0^2}{5\sqrt{2}\pi ^2}\sqrt{T}\frac{\mathrm{\Omega }_{\mathrm{GW}}(f_0)}{f_0^{5/2}S_0}J,$$
(3.3)
where we defined the (dimension-less) integral
$$J^2=\int _0^{\mathrm{\infty }}d\nu \frac{\gamma ^2(f_0\nu )\omega ^2(f_0\nu )}{\nu ^6\mathrm{\Sigma }_n^{(1)}(f_0\nu )\mathrm{\Sigma }_n^{(2)}(f_0\nu )}.$$
(3.4)
From this last expression we can deduce that the minimum detectable $`h_0^2\mathrm{\Omega }_{\mathrm{GW}}(f_0)`$ is given by (1 yr = $`\pi \times \mathrm{\hspace{0.17em}10}^7`$ s)
$$h_0^2\mathrm{\Omega }_{\mathrm{GW}}(f_0)\simeq 4.0\times 10^{32}\frac{f_0^{5/2}S_0}{J}\left(\frac{1\mathrm{yr}}{T}\right)^{1/2}\mathrm{SNR}^2$$
(3.5)
For example, by taking $`f_0=100`$ Hz and $`S_0=10^{-44}`$ s, we get
$$h_0^2\mathrm{\Omega }_{\mathrm{GW}}(100\mathrm{Hz})\simeq \frac{4.0\times 10^{-7}}{J}\left(\frac{1\mathrm{yr}}{T}\right)^{1/2}\mathrm{SNR}^2.$$
(3.6)
Therefore, the estimate of the sensitivity of cross-correlation measurements between two detectors to a given spectrum $`\mathrm{\Omega }_{\mathrm{GW}}(f)`$ reduces, in our case, to the calculation of the integral $`J`$ defined in Eq. (3.4). Given a specific theoretical spectrum, $`J`$ can be numerically determined for the wanted pair of detectors.
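Schematically, the evaluation of $`J`$ proceeds by direct numerical quadrature of Eq. (3.4); the sketch below uses a unit overlap function and a toy noise shape purely as placeholders for the actual detector curves discussed in the next section.

```python
# Minimal sketch of the numerical evaluation of the integral J of Eq. (3.4).
# The overlap function and noise model are simple placeholders, not the real
# LIGO/VIRGO curves used in the text.
import numpy as np

def J_integral(omega, gamma, sigma1, sigma2, nu_min=0.02, nu_max=100.0, n=20000):
    nu = np.linspace(nu_min, nu_max, n)
    integrand = gamma(nu) ** 2 * omega(nu) ** 2 / (nu ** 6 * sigma1(nu) * sigma2(nu))
    return np.sqrt(np.trapz(integrand, nu))

flat_spectrum = lambda nu: np.ones_like(nu)                  # omega(f) = 1
unit_overlap = lambda nu: np.ones_like(nu)                   # co-located, co-aligned detectors
toy_noise = lambda nu: 1e-2 * (nu ** -4 + 1.0 + nu ** 2)     # rough "bucket" shape

print("J (toy model, flat spectrum):",
      J_integral(flat_spectrum, unit_overlap, toy_noise, toy_noise))
```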
## IV Achievable Sensitivities for quintessential spectra
Consider first the case of the two LIGO detectors (located at Hanford, WA and Livingston, LA) in their “advanced” versions. From the knowledge of the geographical locations and orientations of these detectors , the overlap reduction function can be calculated , and the result is reported in Fig. 2 of ref. . As a function of the frequency, $`\gamma `$ has its first zero at 64 Hz and it falls rapidly at higher frequency. This behavior allows one to restrict the integration domain in Eq. (3.4) to the region $`f\lesssim 10`$ kHz (i.e., $`\nu \lesssim 100`$). We assumed identical construction of the two detectors (i.e., $`S_n^{(1)}=S_n^{(2)}`$). For the rescaled noise power spectrum of each detector we used the analytical fit of ref. , namely (see Fig. 1)
$$\mathrm{\Sigma }_n(f)=\{\begin{array}{cc}\mathrm{}\hfill & f<f_b\\ h_a^2\left(\frac{f_a}{\mathrm{\Gamma }}\right)^3\frac{1}{f^4}\hfill & f_bf<\frac{f_a}{\mathrm{\Gamma }}\\ h_a^2\hfill & \frac{f_a}{\mathrm{\Gamma }}f<\mathrm{\Gamma }f_a\\ \frac{h_a^2}{(\mathrm{\Gamma }f_a)^3}f^2\hfill & f\mathrm{\Gamma }f_a\end{array}$$
(4.1)
with
$$h_a^2=1.96\times 10^{-2},\;\mathrm{\Gamma }=1.6,\;f_a=68\ \mathrm{Hz},\;f_b=10\ \mathrm{Hz}.$$
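For completeness, the piecewise fit of Eq. (4.1) with these parameters can be transcribed directly; the sketch below is only meant to show how such a fit plugs into the computation of $`J`$, not as an authoritative noise curve.

```python
# Piecewise analytical noise fit of Eq. (4.1), transcribed as quoted above.
import numpy as np

H_A2, GAMMA, F_A, F_B = 1.96e-2, 1.6, 68.0, 10.0

def sigma_ligo(f):
    f = np.asarray(f, dtype=float)
    out = np.full_like(f, np.inf)                    # infinite noise below f_b
    low = (f >= F_B) & (f < F_A / GAMMA)
    mid = (f >= F_A / GAMMA) & (f < GAMMA * F_A)
    high = f >= GAMMA * F_A
    out[low] = H_A2 * (F_A / GAMMA) ** 3 / f[low] ** 4
    out[mid] = H_A2
    out[high] = H_A2 * f[high] ** 2 / (GAMMA * F_A) ** 3
    return out

print(sigma_ligo([20.0, 68.0, 500.0]))
```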
In the case of a flat spectrum (i.e., $`\alpha =0`$, $`\omega (f)=1`$) we find $`J\mathrm{\hspace{0.17em}6.1}\times \mathrm{\hspace{0.17em}10}^3`$, which implies
$$h_0^2\mathrm{\Omega }_{\mathrm{GW}}(100\mathrm{Hz})\simeq 6.5\times 10^{-11}\left(\frac{1\mathrm{yr}}{T}\right)^{1/2}\mathrm{SNR}^2$$
(4.2)
in close agreement with the estimate obtained in ref. .
The minimum detectable $`h_0^2\mathrm{\Omega }_{\mathrm{GW}}`$ for quintessential gravitons can be obtained by recalling that
$$\omega (f)=\frac{\nu }{\mathrm{ln}^2\nu _1}\mathrm{ln}^2\left(\frac{\nu }{\nu _1}\right).$$
For $`f_0=100`$ Hz, numerical integration gives:
$$J\frac{10^3}{\mathrm{ln}^2\nu _1}\left\{\mathrm{\hspace{0.17em}6.91}+\mathrm{\hspace{0.17em}21.36}\mathrm{ln}\nu _1+\mathrm{\hspace{0.17em}26.52}\mathrm{ln}^2\nu _1+\mathrm{\hspace{0.17em}15.68}\mathrm{ln}^3\nu _1+\mathrm{\hspace{0.17em}3.78}\mathrm{ln}^4\nu _1\right\}^{1/2},$$
or, taking into account Eq. (2.4), in terms of $`N_s`$:
$$J\simeq \frac{1.6\times 10^7}{(88.0-\mathrm{ln}N_s)^2}P_\mathrm{L}(N_s)$$
(4.3)
with
$`P_\mathrm{L}^2(N_s)`$ $`\simeq `$ $`1.07-4.62\times 10^{-2}\mathrm{ln}N_s+7.52\times 10^{-4}\mathrm{ln}^2N_s`$
$`-5.44\times 10^{-6}\mathrm{ln}^3N_s+1.48\times 10^{-8}\mathrm{ln}^4N_s.`$
By inserting this expression in Eq. (3.6), one has:
$$h_0^2\mathrm{\Omega }_{\mathrm{GW}}(100\mathrm{Hz})\simeq 2.5\times 10^{-14}\frac{(88.0-\mathrm{ln}N_s)^2}{P_\mathrm{L}(N_s)}\left(\frac{1\mathrm{yr}}{T}\right)^{1/2}\mathrm{SNR}^2$$
(4.4)
By assuming for $`N_s`$ the minimum value compatible with Eq. (2.6) (i.e., $`N_s=21`$), we obtain:
$$h_0^2\mathrm{\Omega }_{\mathrm{GW}}(100\mathrm{Hz})\simeq 1.8\times 10^{-10}\left(\frac{1\mathrm{yr}}{T}\right)^{1/2}\mathrm{SNR}^2$$
(4.5)
As we can see by comparing Eq. (4.2) with Eq. (4.5) the minimum detectable $`h_0^2\mathrm{\Omega }_{\mathrm{GW}}(100\mathrm{Hz})`$ is slightly larger for growing spectra. This is a general result that is simply related to the structure of $`J`$. For the special value of $`N_s`$ considered this difference is roughly of a factor of 2. Another important point to stress is that for both the graviton spectra considered, as a consequence of the frequency behavior of $`\gamma (f)`$ and the presence of the weighting factor $`\nu ^{-6}`$ in the integrand, the main contribution to the integral $`J`$ comes from the region $`f<100`$ Hz. The cut-off introduced by the overlap reduction function is not so relevant: by assuming $`\gamma (f)=1`$ over the whole integration domain (i.e., considering the correlation of one of the detectors with itself), the sensitivity increases only by a factor 2.4 in the case of a flat spectrum, and 3.6 in the case of the quintessential one. This means that the only way to get a substantial rise in sensitivity lies in the improvement of the noise characteristics of the detectors in the low-frequency region.
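Using the expressions above, the $`N_s`$ dependence of the minimum detectable signal can be evaluated directly; the following sketch reproduces Eq. (4.4) for a few illustrative values of $`N_s`$ (T = 1 yr, SNR = 1).

```python
# Minimum detectable quintessential signal at 100 Hz for LIGO-WA * LIGO-LA,
# evaluated from Eq. (4.4) and the fitted polynomial P_L quoted above.
import numpy as np

def P_L(N_s):
    x = np.log(N_s)
    return np.sqrt(1.07 - 4.62e-2 * x + 7.52e-4 * x**2 - 5.44e-6 * x**3 + 1.48e-8 * x**4)

def min_detectable_ligo(N_s):
    return 2.5e-14 * (88.0 - np.log(N_s)) ** 2 / P_L(N_s)

for N_s in (21, 50, 100):
    print(f"N_s = {N_s:3d}: h0^2 Omega_GW(100 Hz) >~ {min_detectable_ligo(N_s):.2e}")
```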
As a comparison we considered also the sensitivity that could be obtained at VIRGO in the (purely hypothetical) case in which the detector now under construction at Cascina, near Pisa (Italy), were correlated with a second interferometer located at about 50 km from the first and with the same orientation.For illustrative purposes, we assumed, within our example, 50 km as the minimum distance sufficient to decorrelate local seismic and e.m. noises. This hypothesis might be proven to be correct and it is certainly justified in the spirit of this exercise. However, at the moment, we do not have any indication either against or in favor of our assumption. The overlap reduction function for this correlation has its first zero at a frequency $`f3`$ kHz (see Fig. 2).
Also in this case we assumed that the detectors are identical and for the common rescaled noise power spectrum we used the analytical parametrization given in ref. (see Fig. 1)
$$\mathrm{\Sigma }_n(f)=\{\begin{array}{cc}\mathrm{}\hfill & f<f_b\\ \mathrm{\Sigma }_1\left(\frac{f_\mathrm{a}}{f}\right)^5+\mathrm{\Sigma }_2\left(\frac{f_\mathrm{a}}{f}\right)+\mathrm{\Sigma }_3\left[1+\left(\frac{f}{f_\mathrm{a}}\right)^2\right],\hfill & ff_b\end{array}$$
(4.6)
where
$$f_a=500\ \mathrm{Hz},\;f_b=2\ \mathrm{Hz},\;\begin{array}{c}\mathrm{\Sigma }_1=3.46\times 10^{-6}\\ \mathrm{\Sigma }_2=6.60\times 10^{-2}\\ \mathrm{\Sigma }_3=3.24\times 10^{-2}\end{array}$$
In the case of flat spectrum, limiting the numerical integration to 10 kHz, we obtain $`J\mathrm{\hspace{0.17em}5.5}`$ and, therefore, according to Eq. (3.6)
$$h_0^2\mathrm{\Omega }_{\mathrm{GW}}(100\mathrm{Hz})\simeq 7.2\times 10^{-8}\left(\frac{1\mathrm{yr}}{T}\right)^{1/2}\mathrm{SNR}^2.$$
(4.7)
In the case of quintessential inflation, for $`f_0=100`$ Hz, we have:
$$J\simeq \frac{1}{\mathrm{ln}^2\nu _1}\left\{5.79-0.30\mathrm{ln}\nu _1+31.20\mathrm{ln}^2\nu _1+6.11\mathrm{ln}^3\nu _1+12.91\mathrm{ln}^4\nu _1\right\}^{1/2},$$
or, in terms of $`N_s`$:
$$J\simeq \frac{1.6\times 10^4}{(88.0-\mathrm{ln}N_s)^2}P_\mathrm{V}(N_s)$$
(4.8)
with
$`P_\mathrm{V}^2(N_s)`$ $`\simeq `$ $`3.10-0.14\mathrm{ln}N_s+2.37\times 10^{-3}\mathrm{ln}^2N_s`$
$`-17.84\times 10^{-6}\mathrm{ln}^3N_s+5.04\times 10^{-8}\mathrm{ln}^4N_s.`$
Therefore, from Eq. (3.6) one has:
$$h_0^2\mathrm{\Omega }_{\mathrm{GW}}(100\mathrm{Hz})\simeq 2.5\times 10^{-11}\frac{(88.0-\mathrm{ln}N_s)^2}{P_\mathrm{V}(N_s)}\left(\frac{1\mathrm{yr}}{T}\right)^{1/2}\mathrm{SNR}^2$$
(4.9)
that, for $`N_s=21`$ gives:
$$h_0^2\mathrm{\Omega }_{\mathrm{GW}}(100\mathrm{Hz})\simeq 1.1\times 10^{-7}\left(\frac{1\mathrm{yr}}{T}\right)^{1/2}\mathrm{SNR}^2$$
(4.10)
At a frequency of $`100`$ Hz the theoretical signal can be expressed as
$$h_0^2\mathrm{\Omega }_{\mathrm{GW}}(100\mathrm{Hz})=N_s^{-3/4}\times 10^{-15}\left[2220.07-50.46\mathrm{ln}N_s+0.28\mathrm{ln}^2N_s\right],$$
(4.11)
as a function of $`N_s`$. In Fig. 3 this function (full thin line) is compared with the sensitivity of LIGO-WA\*LIGO-LA and VIRGO\*VIRGO (full thick lines) obtained from, respectively, Eqs. (4.4) and (4.9), assuming $`T=1`$ yr and SNR = 1. We can clearly see that our signal is always below the achievable sensitivities. Notice that, if we assume purely gravitational reheating $`N_s>\mathrm{\hspace{0.33em}21}`$.
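As a quick numerical illustration of this mismatch, the signal of Eq. (4.11) can be compared with the sensitivities of Eqs. (4.5) and (4.10) for $`N_s=21`$, using the expressions as written above.

```python
# Theoretical signal at 100 Hz (Eq. 4.11) versus the correlation sensitivities
# of Eqs. (4.5) and (4.10), for N_s = 21, T = 1 yr and SNR = 1.
import numpy as np

N_s = 21
x = np.log(N_s)
signal_100Hz = N_s ** (-0.75) * 1e-15 * (2220.07 - 50.46 * x + 0.28 * x ** 2)
print(f"signal:       {signal_100Hz:.1e}")
print(f"LIGO*LIGO:    {1.8e-10:.1e}")
print(f"VIRGO*VIRGO:  {1.1e-7:.1e}")
```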
One could think that, thanks to the sharp growth of the spectrum, the signal could be strong enough around $`10`$ kHz, namely at the extreme border of the interferometers band. Indeed around $`f_0=10`$ kHz, the theoretical signal is given by:
$$h_0^2\mathrm{\Omega }_{\mathrm{GW}}(10\mathrm{kHz})=N_s^{-3/4}\times 10^{-13}\left[1387.81-39.89\mathrm{ln}N_s+0.28\mathrm{ln}^2N_s\right],$$
(4.12)
We see that the situation does not change qualitatively. In fact it is certainly true that around 10 kHz the signal is larger but the sensitivity is also smaller. In fact, repeating the calculation for $`f_0=10`$ kHz, in the case $`N_s=21`$ we obtain:
$$h_0^2\mathrm{\Omega }_{\mathrm{GW}}(10\mathrm{kHz})\simeq \{\begin{array}{cc}1.1\times 10^{-8}\left(\frac{1\mathrm{yr}}{T}\right)^{1/2}\mathrm{SNR}^2\hfill & \mathrm{LIGO}\text{-}\mathrm{WA}*\mathrm{LIGO}\text{-}\mathrm{LA}\hfill \\ 6.7\times 10^{-6}\left(\frac{1\mathrm{yr}}{T}\right)^{1/2}\mathrm{SNR}^2\hfill & \mathrm{VIRGO}*\mathrm{VIRGO}\hfill \end{array}$$
(4.13)
If we compare Eqs. (4.13) with Eqs. (4.5) and (4.10) we see that the minimum detectable signal increases with the spectral frequency. Therefore, the mismatch apparent from Fig. 3 between the theoretical signal and the experimental sensitivity will remain practically unchanged.
## V Concluding Remarks
In this paper we precisely computed the sensitivity of pairs of interferometric detectors to blue and mildly violet spectra of relic gravitons. Our investigation can be of general relevance for any model predicting non flat spectra of relic gravitons. We analyzed the correlation of the two LIGO detectors in their “advanced” phase. On a more speculative ground we investigated the theoretical possibility of the correlation of VIRGO with an identical, coaligned, interferometer located very near to it.
As a test for our techniques we first discussed the case of a flat spectrum which has been studied in the past. We then applied our results to the case of quintessential inflationary models whose graviton spectra are, in general, characterized by three “branches”: a soft branch (in the far infra-red of the graviton spectrum around $`10^{-18}`$–$`10^{-16}`$ Hz), a semi-hard branch (between $`10^{-16}`$ Hz and $`10^{-3}`$ Hz) and a truly hard branch ranging, approximately, from $`10^{-3}`$ Hz to $`100`$ GHz. Since the interferometers band is located, roughly, between few Hz and $`10`$ kHz, the relevant signal will come from the hard branch of the spectrum whose associated energy density appears in the signal-to-noise ratio with blue (or mildly violet) slope. In the hard branch the energy density of quintessential gravitons is maximal for frequencies in the range of the GHz. In this region $`h_0^2\mathrm{\Omega }_{\mathrm{GW}}`$ can be as large as $`10^{-6}`$. In spite of the fact that quintessential spectra are growing in frequency the predicted signal is still too small and below the sensitivity achievable by the advanced LIGO detectors. The reason for the smallness of the signal in the region $`f\lesssim 1`$ kHz is twofold. On one hand we have to enforce the nucleosynthesis bound on the spectrum. On the other hand, because of the gravitational reheating mechanism adopted, the number of (minimally coupled) scalar degrees of freedom needs to be large. It might be possible, in principle, that different reheating mechanisms could change the signal for frequencies comparable with the window of the interferometers. Therefore, the analysis presented in this paper seems to suggest that new techniques (possibly based on electromagnetic detectors ) operating in the GHz region should be used in order to directly detect quintessential gravitons.
## Acknowledgments
We would like to thank Alex Vilenkin for very useful comments and conversations.
# Beam Test of Gamma-ray Large Area Space Telescope Components
## 1 Introduction
GLAST is the next high-energy gamma-ray mission, scheduled to be launched by NASA in 2005. This mission will continue and expand the exploration of the upper end of the celestial electromagnetic spectrum uncovered by the highly successful EGRET experiment, which had full sensitivity up to $`10`$ GeV. The design of the GLAST instrument is based on that of EGRET, which is a gamma-ray pair-conversion telescope, with the primary innovation being the use of modern particle detector technologies. GLAST will cover the energy range 20 MeV-300 GeV, with capabilities up to 1 TeV. It will have more than a factor of 30 times the sensitivity of EGRET in the overlapping energy region (20 MeV - 10 GeV). With unattenuated sensitivity to higher than 300 GeV, GLAST will cover one of the most poorly measured regions of the electromagnetic spectrum. These new capabilities will open important new windows on a wide variety of science topics, including some of the most energetic phenomena in Nature: gamma-ray bursts, active galactic nuclei and supermassive black holes, the diffuse high-energy gamma-ray extra-galactic background, pulsars, and the origins of cosmic rays. GLAST will also make possible searches for galactic particle dark matter annihilations and other particle physics phenomena not currently accessible with terrestrial accelerators. In addition, GLAST will provide an important overlap in energy coverage with ground-based air shower detectors, with complementary capabilities. GLAST has been developed by an interdisciplinary collaboration of high-energy particle physicists and high-energy astrophysicists.
The pair-conversion measurement principle allows a relatively precise determination of the direction of incident photons and provides a powerful signature for the rejection of charged particle cosmic ray backgrounds, which have a flux as great as $`10^4`$ times that of cosmic gamma-ray signals. The instrument consists of three subsystems: a plastic scintillation anti-coincidence detector (ACD) to veto incident charged particles, a precision converter-tracker to record gamma conversions and to track the resulting $`e^+e^{}`$ pairs, and a calorimeter to measure the energy in the electromagnetic shower. Particles incident through the instrument’s aperture first encounter the ACD, followed by the converter-tracker, and finally the calorimeter. The technologies selected for the subsystems are in common use in many high-energy particle physics detectors. In GLAST, the scintillation light from the ACD tiles is collected and transported by wavelength-shifting fibers to miniature photomultipliers. The tracking pair converter section (tracker) has layers of thin sheets of lead, which convert the gammas, and the co-ordinates of resulting charged tracks are measured in adjacent silicon strip detectors. The calorimeter is a 10 radiation length stack of segmented, hodoscopically-arranged CsI crystals, read out by photodiodes. While the basic principles of these components are well-understood, adapting them for use in a satellite-based instrument presents challenges particularly in the areas of power and mass. The tracker, calorimeter and associated data acquisition system are modular: the baseline instrument comprises a 5x5 array of 32x32 cm towers. In addition to simplifying the construction of the flight instrument, the modularity also allows detailed testing and characterization of all critical aspects of detector performance in the full, flight-size configuration early in the development program.
A detailed Monte Carlo simulation of GLAST was used to quantify our understanding of these technology choices and to optimize the design. To verify the results obtained by the computer analysis simple versions of all three subsystems were constructed and tested together in an electron and tagged photon beam in End Station A at SLAC. The goals of these tests for each of the subsystems included the following:
ACD
1. Check the efficiency for detecting minimum ionizing particles (MIPs) using fiber readout of scintillating tiles.
2. Investigate the backsplash from showers in the calorimeter, which causes false vetoes, as a function of energy and angle (this self-veto was the primary limitation of the sensitivity of EGRET at high energy).
Tracker
1. Demonstrate the merits of a silicon strip detector (SSD) pair conversion telescope.
2. Validate the computer modeling and optimization studies with respect to converter thickness, detector spacing and SSD pitch.
3. Validate the prototype, low power front end electronics used to read out the SSDs.
CsI Calorimeter
1. Demonstrate the hodoscopic light sharing concept for co-ordinate measurement in transversely mounted CsI logs, and validate the shower imaging performance.
2. Measure the energy resolution.
3. Study leakage corrections using longitudinal shower profile fitting at high energies.
For each of these tests, the presence of the other subsystems proved valuable: for tracker studies (particularly at low energies) the calorimeter provided the measurement of the photon energy; for calorimeter studies the tracker provided a precision telescope to locate the entry point and direction of the beam particle; and for all tests the ACD system was used to discard contaminated events (e.g., accompanying low-energy particles coming down the beam pipe). We report here the results of these studies. In section 2, the experimental setup of the beamline and the detectors is described. The performance of the individual detectors is given in section 3, followed by a compendium of results from the studies for each subsystem in section 4 and a summary in section 5.
## 2 Experimental setup
### 2.1 Beamline and Trigger
The experiment was performed in End Station A (ESA) at the Stanford Linear Accelerator Center (SLAC). A technique was recently developed to produce relatively low-intensity secondary electron and positron beams parasitically from the main LINAC beamline, which delivers beams with energy up to 50 GeV at a 120 Hz repetition rate. A schematic is shown in figure 1. A small fraction of the LINAC electron beam is scraped by collimators, producing bremsstrahlung photons that continue downstream, past bending magnets, producing secondary electrons and positrons when they hit a 0.7 $`X_0`$ target. Electrons within an adjustable range of momentum (typically 1-2%) are transported to ESA. Beamline parameters were adjusted to allow an average of one electron per machine pulse into ESA.
In addition to the electron beam, a tagged photon beam was also generated, as shown in figure 2. A movable target with 2.5%, 5% and 10% $`X_0`$ copper foils produced bremsstrahlung photons from the ESA electron beam (a 25 GeV ESA electron beam was used for most of the photon runs). A large sweeping magnet ($`B0`$) deflected the electron beam toward an 88-channel two-dimensional scintillator hodoscope, followed by a set of four lead-glass block calorimeters.
The data acquisition system collected data from every machine pulse. More than 400 data runs were taken during a four week period, resulting in $`2.1\times 10^8`$ triggers and over 200 GB of data.
The GLAST experimental setup is shown schematically in figure 3. Each of the subsystems is described in the following sections.
### 2.2 ACD
Although an anticoincidence system is essential to distinguish the cosmic gamma-ray flux from the much larger charged particle cosmic ray flux seen by a gamma-ray telescope in orbit, a monolithic scintillator detector such as used by SAS-2, COS-B, and EGRET is neither practical for an instrument the size of GLAST nor desirable. The highest-energy gamma rays (especially with energies above 10 GeV) produce backsplash: low energy photons originating in the calorimeter as the products of the electromagnetic shower. Such backsplash photons can cause a veto pulse in the ACD through Compton scattering. The EGRET detector has a monolithic ACD and suffers a $`\sim `$50% loss of detection efficiency at 10 GeV due to this effect. This self-veto can be reduced by segmenting the GLAST ACD into tiles and vetoing an event only if the pulse appears in the tile through which the reconstructed event trajectory passes. Monte Carlo simulations indicate that this approach reduces the self veto rate at 30 GeV by at least an order of magnitude.
The beam test ACD consisted of two modules, as shown in figure 3. One module contained 9 scintillating paddles (Bicron BC-408) and was placed on the side of the tracker/calorimeter. The front module consisted of two superimposed layers with 3 paddles in each and was placed just upstream of the tracker. Wavelength-shifting fibers (BCF-91A, 1 mm diameter), matching the BC-408 scintillator, were embedded in grooves across the 1 cm-thick paddles to collect and transfer light to Hamamatsu R647 photo-multiplier tubes. Each phototube was packaged in a soft-iron housing for magnetic field shielding and was equipped with a variable resistor to adjust the gain. The signal from each phototube was pulse-height analyzed by a CAMAC 2249A PHA module.
### 2.3 Tracker
The silicon-strip tracker consisted of six modules, each with two detector layers, one oriented to measure the $`x`$ coordinate and the other oriented to measure $`y`$. The detectors were single-sided AC-coupled silicon strip detectors manufactured by Hamamatsu Photonics. They were 6 cm by 6 cm in size and 500 $`\mu `$m thick with an $`n`$-type substrate and $`p`$-type strip implants. The strips were 57 $`\mu `$m in width and 236 $`\mu `$m in pitch, with a total capacitance of about 1.2 pF per centimeter of strip length. The strip implants were biased at about 10 V via punchthrough structures, while the back side was biased at 140 V for full depletion, except during special runs in which the bias voltage was varied in order to study the efficiency as a function of depletion depth. The detectors were mounted on the two sides of a printed-circuit card, along with the readout electronics and cable connectors. To minimize scattering of the beam, each card was cut out under the detector active area, and windows were cut out of the acrylic housing. The entire assembly was wrapped in aluminized mylar for shielding from light and electromagnetic interference. Figure 4 shows the two general configurations used in the beam test. The “pancake” configuration had a 3 cm spacing between modules, similar to the baseline GLAST design, while in the “stretch” configuration that spacing was doubled, except for the space between the last two modules. The configuration was easily changed by sliding the modules in and out of grooves machined in the housing. The lead converter foils were mounted on separate cards that could also be slid in and out of grooves located directly in front of each detector plane. In figure 4 the converter foils are shown installed in the first four modules. The gap between the lead and the first detector was about 2 mm, while the gap between the two detector sides within a module was 1.5 mm.
Each readout channel was connected to a single 6 cm long strip, except for the $`y`$ side of the first module encountered by the beam which had five detectors connected in series to make 30 cm long strips. Only that module was used for studies of the noise performance, since only it had input capacitance close to that of the GLAST baseline design.
Consecutive strips were instrumented in each detector with six 32-channel CMOS chips that were custom designed to match the detector pitch and satisfy the GLAST power and noise requirements. Due to limitations on the number of available readout chips, only 192 of the total 256 strips on each detector were instrumented with readout electronics. Each channel consisted of a charge-sensitive preamplifier, a shaping amplifier with approximately 1 $`\mu `$s peaking time, a comparator, a programmable digital mask, and a latch. In addition, the six chips in each readout section provided a 192-wide logical OR of the comparator outputs (after the mask) to provide the self-triggering capability required for GLAST. In the beam test, however, the system was triggered by the beam timing. About 1 $`\mu `$s after the beam passed through the apparatus the latches were triggered, after which the 192 bits were shifted serially out of each readout section. In addition, the start-time and length of the logical-OR signals were digitized by TDC’s to study the self-triggering capability offline.
The custom readout electronics operated with a power consumption of 140 $`\mu `$W per channel and an rms equivalent noise charge of 1400 electrons (0.22 fC) for the 30-cm long strips. Except for runs in which it was varied to study efficiency, the threshold was generally set at about 1.5 fC, compared with the more than 6 fC of charge deposited by a single minimum ionizing particle at normal incidence. The typical rms variation of the threshold across a 32-channel chip was under 0.12 fC. The tracker readout electronics are described in more detail in reference .
### 2.4 CsI calorimeters
The calorimeter comprised eight layers of six CsI(Tl) crystals read out by PIN photodiodes. Each layer was rotated $`90^{\circ }`$ with respect to its neighboring layers, forming an x-y hodoscopic array. The crystal blocks were $`3\times 3\times 19`$ cm in size and individually wrapped in Tetratek and aluminized mylar. Hamamatsu S3590 PIN photodiodes, with approximately 1 cm<sup>2</sup> active area, were mounted on each end to measure the scintillation light from an energy deposition in the crystal. The difference in light levels seen at the two ends provided a determination of the position of the energy deposition along the crystal block.
Although 48 crystals would be required to form the complete calorimeter, only 32 CsI(Tl) crystals were available for the test. Brass supporting blocks were therefore used to fill the remaining 16 positions to complete the hodoscopic array. Figure 5 shows the general arrangement of the calorimeter and the positions of the passive blocks. In the figure, the brass blocks are shaded and the CsI blocks are light with PIN photodiodes indicated on the ends. The arrangement of the active CsI blocks was designed to study events normally incident near the front center of the calorimeter. Off-axis response could be studied by directing the beam from the front center toward the lower right corner in the figure where the calorimeter was fully populated with active CsI blocks.
The crystal array was mounted in an aluminum frame consisting of four walls with PIN photodiode access holes and a bottom structural plate. In figure 5, two of the walls have been removed. The frame was open on the front where the beam entered the calorimeter. The calorimeter was enclosed in a light-tight aluminum shield and mounted on a precision translation table which permitted both vertical and horizontal adjustment of the beam position on the front of the calorimeter. This translation table was used to study the position resolution by mapping the relative light levels over the entire length of the CsI blocks. In these tests the tracker remained fixed and provided accurate beam positions, while the calorimeter was moved relative to the beam and tracker to map the entire crystal array.
The PIN diodes were biased by 35 volt batteries and attached to eV Products 5092 hybrid preamps. The preamps were mounted on circuit cards adjacent to the PIN diodes. The outputs of the 64 preamps were routed to a CAMAC/NIM data acquisition system consisting of CAEN shaping amplifiers and Phillips or LeCroy 12 bit analog to digital converters. The CAEN shaping amplifiers provided programmable gain adjustments to optimize the electronics for the specific beam energies of each test.
### 2.5 Online data spying, event display, and offline filtering
The online system sampled events from the data stream and made simple data selections in real time. This enabled us to monitor the performance of the individual detectors and to tune various beam parameters while collecting data. The online monitoring system included a single event display with rudimentary track reconstruction and full online histogramming capabilities.
Offline processing reduced the volume of data for storage and distribution. Most of the beam pulses did not result in photon events, due to the thin target radiators we used. To separate real events from empty pulses we applied very loose selection criteria on the raw data, requiring either hits in three consecutive x-y tracker planes or at least 6 MeV of energy deposited in the calorimeter. Event filtering removed approximately 80% of the raw data in photon mode and approximately 30% in electron mode.
## 3 Detector performance
### 3.1 ACD
The overall response and efficiency of the ACD were investigated using a 25 GeV electron beam. Typical pulse height histograms are shown in figure 6 for (a) a tile that was crossed by a direct electron beam, and (b) a tile outside the direct beam. The peak corresponding to one MIP is clearly seen in (a), near channel 100. The backsplash spectrum appears in low channels of histogram (b).
The efficiency was determined using a sample of electron beam events that had hits in all 12 tracker planes within 1 cm of the beam axis by counting the fraction of these events that had a coincident hit in the relevant ACD tile. For thresholds below 0.35 MIP, the inefficiencies were always smaller than $`5\times 10^{-4}`$.
### 3.2 Tracker
The efficiency to detect minimum-ionizing particles and the occupancy due to random noise were measured. The efficiency must be close to 100%: to realize the optimal angular resolution of the device it is crucial not to miss either of the first two $`xy`$ pairs of measurements on a track. The noise occupancy must be low, not only to avoid flooding the data stream but, more importantly, to avoid saturating the readout system with spurious triggers. In GLAST, the tracker will be employed in the first-level trigger, which simply looks for a coincidence among three consecutive $`xy`$ pairs of silicon layers. The rate for this trigger depends very strongly on the occupancy: with a 1 $`\mu `$s coincidence window the single-channel noise occupancy must be less than $`10^{-4}`$ so that spurious triggers do not dominate the overall trigger rate. A major objective of the beam test was to demonstrate that such a low occupancy can be achieved with the prototype electronics without degrading the detection efficiency.
#### 3.2.1 Tracker noise occupancy
Only the first layer of detectors struck by the beam had five detectors ganged in series, so it was the only relevant testbed for studies of the noise occupancy. (Due to poor quality control at the wire bonding vendor, a number of detector strips in the five-detector module were damaged in random locations. These were known prior to the beam test and have been removed from the analysis.) The other single-detector modules had low capacitance and therefore almost unobservably low noise, with the exception of very few damaged strips. The efficiency, however, is not expected to depend significantly on the capacitance, so it could be studied with the single detector modules as well as the first five-detector layer. Figure 7 shows the vertical beam profile. It is well contained within the 4.5 cm instrumented region of the detector.
In the case that random hits are due to electronic noise, the dependence of the threshold-crossing rate, or noise rate, on the threshold level $`V_t`$ is well approximated by
$$f_n=f_0e^{-V_t^2/2\sigma _n^2},$$
(1)
where $`\sigma _n`$ is the rms noise level at the discriminator input. Figure 8 shows the occupancy for four typical channels of the five detector module. For these measurements all channels but one were masked off at the output of the comparator. The rms noise is extracted by fitting the curves to Eqn. 1, with the results plotted as smooth dotted curves in figure 8. The value of $`\sigma _n`$ in those four fits ranges from 1290 electrons to 1390 electrons (0.21 fC to 0.22 fC) equivalent noise charge referenced to the preamplifier input. The channel-to-channel variation in noise occupancy is primarily due to threshold variations. The typical rms variation across a 32-channel chip was 0.05 fC, with a few chips showing rms variations as large as 0.14 fC.
The occupancy increased significantly, however, when the outputs of all channels were enabled. In that condition the logical-OR of all channels (Fast-OR) —which is to be used for triggering— runs much faster, and its signal was observed to feed back to the amplifiers, causing a shift in the effective threshold. Steps have been taken to solve this problem by improving the grounding and power-supply isolation and decoupling of the circuit board onto which the chips are mounted; by changing the CMOS Fast-OR outputs to low-voltage differential signals; and by decreasing the digital power supply from 5 V to 3 V. A prototype chip and circuit board fabricated with these new features does not exhibit this feedback problem—the occupancy no longer depends on the number of enabled channels.
#### 3.2.2 Tracker efficiency
The efficiency was measured for the five-detector module and a single-detector module. The remaining four modules were used as anchor planes to reconstruct the track. A 25 GeV electron beam was used. Single particle events were selected by requiring that the calorimeter signal was consistent with a single-electron shower and that only one track was reconstructed in the tracker. For the detectors under test, a hit was counted if it was found within 4 strips of the position predicted from the track. The bias voltage was varied to change the depletion thickness and, therefore, the amount of ionization deposited. At about 180 V the 500 $`\mu `$m detectors were fully depleted. A 90 V bias voltage yielded a depletion thickness between 360 $`\mu `$m and 390 $`\mu `$m, close to the envisaged GLAST detector thickness of 400 $`\mu `$m. Figure 9 shows the inefficiency versus threshold setting for the two bias voltages. The upper limits reflect the limited number of recorded events (about $`10^4`$). No significant difference in efficiency was observed between the single-detector planes and the five-detector plane.
From figures 8 and 9 it is evident that the tracker can be operated at essentially 100% efficiency with an occupancy well below $`10^{-4}`$ by setting a threshold in the range of 1-1.5 fC. The GLAST signal-to-noise and trigger requirements have been met and exceeded, while the 140 $`\mu `$W per channel consumed by the amplifiers and discriminators satisfies the GLAST power restrictions. More recent tests with prototype chips containing the full GLAST digital readout capability have demonstrated that, even with the digital activity included, the per-channel power can meet the goal of 200 $`\mu `$W.
#### 3.2.3 Fast-OR
The Fast-OR signal was studied in the beam test using multi-hit TDC’s. The distribution of the time of the leading edge is important for understanding the GLAST trigger timing requirements. It was measured with high-energy electrons for a variety of detector bias voltages corresponding to depletion depths ranging between 200 $`\mu `$m and 500 $`\mu `$m. For full depletion, the full width of the peak at half maximum was only 50 ns. The lower bias voltages resulted in larger time fluctuations, but overall the data indicated that a trigger coincidence window of 0.5 $`\mu `$s could be used for minimum ionizing particles with essentially 100% efficiency.
The GLAST experiment will record the time-over-threshold of the Fast-OR from each detector layer, along with the hit pattern. The time-over-threshold gives a rough measurement of the charge deposited by the most highly ionizing track that passed through the layer. That information can be useful for background rejection as well as for possible cosmic ray studies. Figure 10 shows the measured time-over-threshold versus input signal, obtained via charge injection, since the beam test did not provide a controlled, wide range of charge deposition. The relationship, which would be logarithmic for a true RC/CR filter, is actually fairly linear in the range 0.5-25 MIPs, above which it saturates at 95 $`\mu `$s. This is because, for large amplitudes, the shaping amplifier reset rate is limited by a constant current source.
### 3.3 Calorimeter
The number of electrons produced in the photodiodes per MeV deposited energy was measured in several channels of the CsI calorimeter array. To calibrate the yield, a known charge was injected into each channel and the response was compared with the pulse height distribution produced by cosmic-ray muons, which typically deposit $`\sim `$20 MeV in a crystal. The yield was typically $`\sim `$12,000-15,000 electrons per MeV per photodiode in the 19-cm CsI bars with a 3 $`\mu `$s amplifier shaping time.
## 4 Studies
### 4.1 ACD studies
The nine scintillator tiles on the side of the tracker/calorimeter and those on the top that were not directly illuminated by the beam were used to measure the backsplash. Figure 11 shows, as a function of threshold, the fraction of events that were accompanied by a pulse in tile number 9 which, when viewed from the center of the shower in the calorimeter, was approximately 90° from the direction of the incident photon. The self-veto effect is a sensitive function of this threshold. In figure 12, the fraction of events that were accompanied by an ACD pulse of greater than 0.2 MIP is shown as a function of angle with respect to the incident photon direction. To present the result in a manner that is insensitive to geometry, the vertical axis is normalized by the solid angle each tile presents when viewed from the center of the shower in the calorimeter. Only the statistical errors are displayed; the systematic errors, which may be substantial, are being evaluated. The increase at 180° may be due to secondary particles in the beam accompanying the photon. Aside from this feature, and the effects of shower leakage, the backsplash is apparently approximately isotropic.
### 4.2 Tracker studies
#### 4.2.1 Track reconstruction
The incident $`\gamma `$-ray direction is determined from the electron and positron tracks, which are reconstructed from the set of hit strips. In addition to effects of noise hits, missing hits, and spurious or ambiguous tracks, the pointing resolution is ultimately limited by hit position measurement error and by energy-dependent multiple scattering. Furthermore, the $`x`$ and $`y`$ projections of the instrument are read out separately so that, given a track in the $`x`$ projection, the question of which $`y`$ track corresponds to it is ambiguous. Clearly, a good method of finding and fitting electron tracks will be critical for GLAST.
The Kalman filter is an optimal linear method for fitting particle tracks. A practical implementation has been developed by Frühwirth. The problem simplifies in the limit where either one of the resolution-limiting effects is negligible: if the measurement error were negligible compared to effects of multiple scattering, as expected at low energies, the filter would simply “connect the dots,” making a track from one hit to the next; however, if the measurement error were completely dominant and multiple scattering effects were negligible (e.g., at high energy), all hits would have information and one would essentially fit a straight line to the hits. The Kalman filter effectively balances these limits.
The basic algorithm we have adopted is based on the Frühwirth implementation. At each plane the Kalman filter predicts, based on the information from the prior planes, the most likely location of the hit for a projected track. Usually, the hit nearest to that predicted location is then assumed to belong to the track. This simple approach is complicated by opportunities for tracks to leave the tracker or to share a hit with another track. For each event, the algorithm looks for electron tracks in the two instrument projection planes ($`xz`$ and $`yz`$) independently. The fitted tracks are used to calculate the incident photon direction, as described in the following sections.
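A heavily simplified sketch of the filtering step described above is given below for a single projection: the state is (position, slope), straight-line transport is applied between planes, and a multiple-scattering term is added to the covariance at each plane. The plane spacing, strip pitch and scattering angle used here are toy numbers, and the code only illustrates the measurement-error/multiple-scattering balance, not the Frühwirth/GLAST implementation.

```python
import numpy as np

def kalman_track_fit(z_planes, y_hits, sigma_meas, theta0):
    """Minimal Kalman filter for a straight track in one projection (y vs z).
    theta0 is an assumed rms multiple-scattering angle added at each plane."""
    x = np.array([y_hits[0], 0.0])           # state: [y, dy/dz]
    C = np.diag([sigma_meas ** 2, 1.0])      # loose initial covariance
    H = np.array([[1.0, 0.0]])               # we measure y only
    R = np.array([[sigma_meas ** 2]])
    for k in range(1, len(z_planes)):
        dz = z_planes[k] - z_planes[k - 1]
        F = np.array([[1.0, dz], [0.0, 1.0]])                 # straight-line transport
        Q = theta0 ** 2 * np.array([[dz ** 2, dz], [dz, 1.0]])  # scattering process noise
        x = F @ x                            # predict
        C = F @ C @ F.T + Q
        r = y_hits[k] - (H @ x)[0]           # residual of the hit in this plane
        S = H @ C @ H.T + R
        K = C @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K[:, 0] * r                  # update
        C = (np.eye(2) - K @ H) @ C
    return x, C

# toy usage: six modules 3 cm apart, 236 micron pitch, hits smeared by pitch/sqrt(12)
rng = np.random.default_rng(1)
z = np.arange(6) * 3.0                       # cm
pitch = 0.0236                               # cm
hits = 0.02 * z + rng.normal(0.0, pitch / np.sqrt(12), z.size)
state, cov = kalman_track_fit(z, hits, pitch / np.sqrt(12), theta0=0.005)
print("filtered slope at the last plane:", state[1])
```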
#### 4.2.2 Simulations
Simulations of the beam test instrument were made using a version of glastsim specially modified to represent the beam test instrument. glastsim is the code used to simulate the response of the entire GLAST instrument via the detailed interactions of particles with the various instrument and detector components . The Monte Carlo code was modified for the beam test application to include the $`e^{}`$ beam, the Cu conversion foil, and the magnet used for analyzing the tagged photon beam, as well as the beam test instrument. Simulated data were analyzed in the same way as the beam test data.
#### 4.2.3 Cuts on the Data
Each event used in the analysis was required to pass several cuts. First, the Pb glass blocks used for tagging must have indicated that there was only one electron in the bunch. This lowered the probability of having multiple $`\gamma `$-rays produced at the bremsstrahlung target. Second, the Anti-Coincidence tiles through which the $`\gamma `$-ray beam passed were each required to have less than $`1/4`$ MIP of energy registered. This ensured that the $`\gamma `$-ray did not convert inside the ACD tile and that the event was relatively clean of accompanying low-energy particles from the beamline. Depending on run conditions, this left about 30% of the data for further analysis. Three more cuts were imposed based on the parameters of the reconstructed tracks: tracks must have had at least three hits regardless of the energy in the calorimeter, a reduced $`\chi ^2<5`$, and the starting position of the track must have been at least $`4.7`$ mm from the edge of the tracker. This last requirement lowered the probability that a track might escape the tracker, which could bias the reconstructed track directions. These overall track definition cuts further reduced the data by about one third.
In an effort to make the beam test data as directly comparable with Monte Carlo simulations as possible, the Monte Carlo data were subjected to very similar cuts. The Monte Carlo included an anti-coincidence system, and a similar cut was made to reject events which converted in the plastic scintillator. All of the cuts based on track parameters were made in the same way for both the Monte Carlo and the beam test data.
#### 4.2.4 Reconstructing photon directions
Since the average pair conversion results in unequal sharing of the $`\gamma `$-ray energy, and since multiple scattering effects are inversely proportional to the energy of the particle producing the track, the incident $`\gamma `$-ray angles were calculated using a weighted average of the two track directions, with the straighter track receiving 3/4 of the total weight. The projected instrument angular resolution could be measured by examining the distribution of reconstructed incident angles. As this distribution had broader tails than a Gaussian, the 68% and 95% containment radii were used to characterize it. For each instrument configuration, these parameters were measured as a function of energy in ten bands. The same reconstruction code was used to analyze the Monte Carlo simulations, and the distribution widths were compared (figures 13 and 14). The simulated distributions show good agreement with the data out to the 95% containment radius and beyond.
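Because the distribution of reconstructed angles has non-Gaussian tails, containment radii rather than an rms are the natural summary statistic. A minimal sketch of that calculation, applied here to a toy heavy-tailed distribution rather than to the beam test residuals, is:

```python
import numpy as np

def containment_radii(delta_theta, levels=(0.68, 0.95)):
    """Angles containing the requested fractions of |reconstructed - true| residuals."""
    s = np.sort(np.abs(delta_theta))
    return [s[int(np.ceil(f * s.size)) - 1] for f in levels]

# toy usage with a heavy-tailed (Cauchy) distribution of projected residuals, in degrees
rng = np.random.default_rng(0)
residuals = 0.3 * rng.standard_cauchy(10000)
r68, r95 = containment_radii(residuals)
print(f"68% radius = {r68:.2f} deg,  95% radius = {r95:.2f} deg")
```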
The containment radii in each projection fall off with increasing energy somewhat faster than the $`1/E`$ dependence expected purely from multiple scattering, for a number of reasons. The containment radii at low energies are smaller than might be expected because of self-collimation: the finite width of the detector prevents events from being reconstructed with large incident angles. At higher energies, measurement error becomes a significant contributor to the angular resolution. While these effects cause deviations from theoretical estimates of the pointing resolution, they are well-represented by Monte Carlo simulations (see figure 15). Details of the angular resolution determination, including specifics of the track-finding algorithm, methods of dealing with noisy strips, alignment of the instrument planes, and possible systematic biases are discussed elsewhere .
### 4.3 Calorimeter studies
#### 4.3.1 Energy reconstruction
The principal function of the calorimeter is to measure the energy of incident $`\gamma `$-rays. At the lower end of the sensitive range of GLAST, where electromagnetic showers are fully contained within the calorimeter, the best measurement of the incident gamma-ray energy is obtained from the simple sum of all the signals from the CsI crystals. At energies above $`\sim `$1 GeV, an appreciable fraction of the shower escapes out the back of the calorimeter, and this fraction increases with $`\gamma `$-ray energy. At moderate energies ($`\sim `$ few GeV), fluctuations in the shower development thus create a substantial tail to lower energy depositions; at higher energies these fluctuations completely dominate the resolution and the response distribution is again symmetric, but broader.
Figure 16 shows the distributions of energy deposition for 25 GeV electron showers in each of the 8 layers of the beam test calorimeter. A pair of distributions is shown for each layer: the left member of the pair is from the beam test data, with one event producing one point in each layer; the right member of the pair is the same distribution from the Monte Carlo simulation. The centroid and width of the beam test and Monte Carlo distributions in each layer are in good agreement quantitatively (with the exception of layers 7 and 8, where a configuration error in the ADCs blurred the distributions). The broad energy distributions seen in the figure are dominated by shower fluctuations, and the energy depositions are strongly correlated from layer to layer. Using a monoenergetic 160 MeV/nucleon $`{}^{12}\mathrm{C}`$ beam at the National Superconducting Cyclotron Laboratory at Michigan State University, the intrinsic energy resolution of these CsI crystals with PIN readout was measured to be 0.3% (rms) at $`\sim `$2 GeV.
Using the longitudinal shower profile provided by the segmentation of the CsI calorimeter, one can improve the measurement of the incident electron energy by fitting the profile of the captured energy to an analytical description of the energy-dependent mean longitudinal profile. This shower profile is reasonably well-described by a gamma distribution which is a function only of the location of the shower starting point and the incident energy $`E_0`$:
$$\frac{1}{E_0}\frac{dE}{d\xi }=\frac{b(b\xi )^{a-1}\text{e}^{-b\xi }}{\mathrm{\Gamma }(a)}$$
(2)
The parameter $`\xi `$ is the depth into the shower normalized to radiation lengths, $`\xi =x/X_0`$. The parameter $`b`$ scales the shower length and depends weakly on electron energy and the $`Z`$ of the target material; however, a good approximation is simply to set $`b=0.5`$. The parameter $`a`$ is energy-dependent with the form $`a=1+b(\mathrm{ln}(E_0/E_c)-0.5)`$. $`E_c`$ is the critical energy where bremsstrahlung energy loss rate is equal to the ionization loss rate ($`E_c\approx 14`$ MeV in CsI).
The free parameters in the fit were the starting position of the shower relative to the edge of the first layer of the calorimeter and the initial electron energy, $`E_0`$. In the fitting, the shower profile of Eqn. 2 was integrated over the path length in each of the layers. The fitting permitted both early and late starts to the shower. The results of the fitting are shown in Figure 17. Panel (a) of the figure shows the histograms of the measured energy loss in the calorimeter for electron beams of 2, 25, and 40 GeV. The tails to low energy are clearly evident for the beam energies of 25 and 40 GeV. Figure 17b shows the results of the fitting as histograms of the fitted energy for 25 and 40 GeV runs. Fitting was not performed for the 2 GeV run and the slight tailing to low energy is still evident. The resolutions, $`\sigma _E/E`$, as seen in panel (b) are 4, 7 and 7% for these three energies.
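A schematic version of this profile fit is sketched below: Eqn. 2 is a gamma distribution in the shower depth $`\xi `$, so the energy captured in each layer is a difference of regularized incomplete gamma functions, with $`E_0`$ and the shower starting depth left free. The layer thickness in radiation lengths (3 cm of CsI with $`X_0\approx 1.86`$ cm) and the example layer sums are illustrative assumptions, not the beam test data.

```python
import numpy as np
from scipy.special import gammainc          # regularized lower incomplete gamma P(a, x)
from scipy.optimize import curve_fit

X0_CSI = 1.86                # cm, approximate radiation length of CsI
LAYER_T = 3.0 / X0_CSI       # one 3 cm layer, in radiation lengths
EDGES = np.arange(9) * LAYER_T   # 8 layers -> 9 boundaries measured from the front face
E_CRIT = 14.0                # MeV, the value quoted in the text

def layer_energies(_, e0, x_start):
    """Energy captured in each layer from the profile of Eqn. 2; x_start is the
    shower starting depth in radiation lengths (negative = shower starts upstream)."""
    b = 0.5
    a = 1.0 + b * (np.log(e0 / E_CRIT) - 0.5)
    xi = np.clip(EDGES - x_start, 0.0, None)     # depth measured from the shower start
    cdf = gammainc(a, b * xi)                    # fraction of E0 deposited up to each edge
    return e0 * np.diff(cdf)

# hypothetical layer sums (MeV) for a single high-energy event, for illustration only
measured = np.array([600., 2400., 4800., 5600., 4500., 2900., 1700., 900.])
popt, _ = curve_fit(layer_energies, np.arange(8), measured,
                    p0=(measured.sum(), 0.0), bounds=([1e3, -5.0], [1e6, 10.0]))
print(f"fitted E0 = {popt[0] / 1e3:.1f} GeV, shower start = {popt[1]:.2f} X0")
```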
#### 4.3.2 Position reconstruction and imaging calorimetry
The segmentation of the CsI calorimeter allows spatial imaging of the shower and accurate reconstruction of the incident photon direction. Each CsI crystal provides three spatial coordinates for the energy deposited in it, two coordinates from the physical location of the bar in the array and one coordinate along the length of the bar, reconstructed from the difference in the light level measured in the photodiode at each end ($`Left`$ and $`Right`$). To reconstruct this longitudinal position, we calculate a measure of the light asymmetry, $`A=(Left-Right)/(Left+Right)`$, that is independent of the total energy deposited in the crystal. We note that if the light attenuation in the crystal is strictly exponential, the longitudinal position is proportional to the inverse hyperbolic tangent of the light asymmetry, $`x=K\mathrm{tanh}^{-1}A`$.
Figure 18 demonstrates that this relationship does indeed hold in the 32-cm CsI bar, and simple analytic forms can be used to convert light asymmetry to position. Positions were determined by the Si tracker for 2 GeV electrons, which typically deposited $`\sim `$150 MeV in this crystal. The rms error in the position, determined from light asymmetry, is 0.28 cm.
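A one-line version of this mapping is sketched below; the scale constant $`K`$ depends on the attenuation length of the particular bar, and the value used here is purely illustrative.

```python
import numpy as np

def longitudinal_position(left, right, k_scale):
    """Position along the bar from the two-ended light asymmetry, x = K * atanh(A),
    valid when the light attenuation along the crystal is exponential."""
    asym = (left - right) / (left + right)
    return k_scale * np.arctanh(asym)

# hypothetical pulse heights from the two photodiodes and an assumed K of 11 cm
print(f"x = {longitudinal_position(1200.0, 900.0, 11.0):.2f} cm from the bar centre")
```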
The measured rms position error is summarized in the following two figures. Figure 19 shows the position error from three crystals at increasing depth in the eight-layer CsI array at four beam energies: 2, 25, 30, and 40 GeV. The dashed line indicates that the error scales roughly as $`1/\sqrt{E}`$, indicating that the measurement error is dominated by photon statistics. Also shown is the position error deduced from imaging cosmic-ray muons in the array, along with that from a 2 GeV electron run in a 32-cm CsI bar identically instrumented. The muon point falls below those from electron showers because ionization energy-loss tracks do not have the significant transverse spread that EM showers have (the Moliere radius for CsI is 3.8 cm).
The effect of transverse shower development on position determination can be seen in figure 20. The rms position error is shown as a function of energy deposited and depth in the calorimeter (indicated by the ordinal layer numbers on the data points) for three beam energies. We see that the position resolution is best early in the shower, where the radiating particles are few in number and tightly clustered, and at shower maximum, where the energy deposited is greatest and statistically easiest to centroid. The position resolution degrades past shower maximum, where the shower multiplicity falls and the energy deposition is spread over a larger area with variations from shower to shower.
To test the ability of the hodoscopic calorimeter to image showers, we reconstructed the arrival direction of the normally-incident beam electrons from the measured positions of the shower centroids in each layer, without reference to the tracker information. The angular resolution, given by the 68% containment space angle, is shown by the filled circles in figure 21. The open circles indicate the angular resolution derived from a Monte Carlo simulation of a pencil beam normally incident and centered on a 3-cm $`\times `$ 3-cm crossing of crystals. The slightly poorer measured resolution is presumably due to systematic errors in the mapping of light asymmetry to position. Also indicated, in open squares, is the angular resolution expected from a uniform illumination at normal incidence. Here the angular resolution is degraded because of transverse sharing of the shower within crystals in a layer of the array.
## 5 Summary and Conclusion
The basic detector elements for GLAST were assembled and tested together for the first time in an electron and tagged photon beam at SLAC. The performance of each detector subsystem has been evaluated, and the concept of a silicon strip pair conversion telescope has been validated. The critical tracker performance characteristics (efficiency, occupancy, and power) have been investigated in detail with flight-size ladders and meet the requirements necessary for the flight instrument. Most importantly, comparison of the results with Monte Carlo simulations confirmed that the same detailed software tools that were used to design and optimize the full GLAST instrument accurately represent the beam test instrument performance. A follow-up beam test of a full GLAST prototype tower is planned for late 1999.
## 6 Acknowledgments
We thank the SLAC machine group and SLAC directorate for their strong support of this experiment.
# Statistical lensing of faint QSOs by galaxy clusters.
## 1 Introduction
Gravitational lensing by galaxies and clusters produces two different effects in QSO surveys. At bright magnitudes, where QSO counts are steep, a positive correlation of QSOs and foreground galaxies or clusters can be produced, as objects intrinsically fainter than the magnitude limit are amplified and hence artificially added to the sample \[Gott & Gunn 1974\]. At fainter magnitudes, where the QSO number count slope is much flatter, it is the reduction of observed area behind the foreground lenses which dominates, producing a deficit in the background QSO number count \[Wu 1994\].
Here we interpret the faint QSO-galaxy group anti-correlation result of Boyle, Fong & Shanks (1988) (hereafter BFS88) (see also Shanks et al. 1983 and Boyle 1986) in terms of gravitational lensing. This result was first interpreted in terms of dust in foreground galaxies and clusters obscuring background QSOs. However, observations of galaxy groups and clusters do not show significant amounts of dust \[Ferguson 1993\] and the limits are at a level which make the dust hypothesis uncomfortable. Previously Rodrigues-Williams and Hogan (1994) have suggested that the anti-correlation result may be due to lensing and this is the avenue we shall pursue here. The results in this paper are based in part on those of Croom (1997).
If correct, the lensing hypothesis would allow important constraints to be placed on cosmology and large-scale structure. The deficit of QSOs near a group or cluster can be used to weigh that structure. This method has the advantage over other mass estimates in that it allows a measurement of the absolute mass of the cluster, while other estimators such as the measurement of velocity dispersions and the observation of shear due to strong lensing are effectively measuring the gradient of the cluster potential \[Broadhurst et al. 1995\]. Other authors (e.g. Taylor et al. 1998) have looked for a deficit of galaxies behind foreground clusters to measure the lensing magnification. The advantage of using QSOs over galaxies is that they are easier to distinguish as background objects and their redshift distribution is well known. Of course, there is the disadvantage that QSOs are rare objects and so cannot be used to examine the mass distribution of individual clusters, however they can be used to investigate the properties of a distribution of clusters.
Previous searches for QSO-galaxy correlations at brighter magnitudes have produced varying results with most showing the observational evidence for QSO-galaxy associations, (see Table 1 of Wu 1994) but the statistical basis for most of the results was limited. Recently, Williams & Irwin (1998) have found a strong positive correlation between $`60`$ $`B<18`$ LBQS \[Hewett et al. 1995\] $`z>1`$ QSOs and APM galaxies. Below we shall compare their results with ours to check whether these two observations provide a consistent picture for the mass distribution in the Universe.
Section 2 reviews the lensing model we use in this paper. In Section 3 we compare these models to the BFS88 data. In Section 4 we compare the results of Williams & Irwin with those of BFS88. We present our conclusions in Section 5.
## 2 Statistical Gravitational Lensing
We use two analytic mass profiles to fit the observed anti-correlation; the first, and simplest, of these being the single isothermal sphere (SIS), which gives a gravitational lensing amplification of
$$A=\frac{\theta }{\theta -\theta _\mathrm{E}},\qquad \theta >\theta _\mathrm{E},$$
(1)
(e.g. Wu 1994) where $`\theta _\mathrm{E}`$ is the Einstein radius, the radius within which multiple images can occur. For the SIS case this is
$$\theta _\mathrm{E}=4\pi \frac{D_{\mathrm{ls}}}{D_\mathrm{s}}\left(\frac{\sigma }{c}\right)^2,$$
(2)
where $`D_\mathrm{s}`$, $`D_\mathrm{l}`$ and $`D_{\mathrm{ls}}`$ are the angular diameter distances from the observer to the source, the observer to the lens and the lens to the source, respectively. In our second mass profile we add a uniform density plane to the isothermal profile (SIS+plane). This could be a good approximation to the effects of clustering and large scale structure (as pointed out by Wu et al., 1996), because a distribution of isothermal spheres with an auto-correlation function of the form $`\xi (r)r^2`$ produces a uniform mass surface density. The globally measured auto-correlation function slope is $`1.8`$ \[Davis et al. 1988\], which produces a sheet of matter which is uniform to better than $`10\%`$ on the scales of interest. The amplification then becomes
$$A=\frac{\theta }{\theta -\theta _\mathrm{E}/(1-\mathrm{\Sigma }_\mathrm{c}/\mathrm{\Sigma }_{\mathrm{crit}})}\frac{1}{(1-\mathrm{\Sigma }_\mathrm{c}/\mathrm{\Sigma }_{\mathrm{crit}})^2},$$
(3)
(e.g. Wu et al. 1996) where $`\mathrm{\Sigma }_\mathrm{c}`$ is the mass surface density in the plane and the critical surface density, $`\mathrm{\Sigma }_{\mathrm{crit}}`$, is
$$\mathrm{\Sigma }_{\mathrm{crit}}=\frac{D_\mathrm{s}c^2}{D_{\mathrm{ls}}D_\mathrm{l}4\pi G}.$$
(4)
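The following sketch evaluates Eqns. 1-4 numerically. The velocity dispersion is the best-fit SIS value quoted in Section 3.1 below, while the distance ratio $`D_{\mathrm{ls}}/D_\mathrm{s}\approx 0.85`$ is only an illustrative single number standing in for the average over the lens and QSO redshift distributions discussed later.

```python
import numpy as np

C_KM_S = 2.998e5                  # speed of light, km/s
ARCMIN = np.pi / (180 * 60)       # radians per arcminute

def theta_einstein(sigma_kms, dls_over_ds):
    """Einstein radius of a singular isothermal sphere, Eqn. 2 (radians)."""
    return 4.0 * np.pi * dls_over_ds * (sigma_kms / C_KM_S) ** 2

def amplification(theta, theta_e, kappa=0.0):
    """Eqn. 1 (kappa = 0) or Eqn. 3, with kappa = Sigma_c / Sigma_crit the
    convergence of the uniform plane; valid outside the Einstein radius."""
    return theta / (theta - theta_e / (1.0 - kappa)) / (1.0 - kappa) ** 2

# illustrative values only: sigma = 1286 km/s from the SIS fit, D_ls/D_s ~ 0.85
t_e = theta_einstein(1286.0, 0.85)
print(f"theta_E ~ {t_e / ARCMIN:.2f} arcmin")
for t in (1.0, 2.0, 5.0, 10.0):
    print(f"A({t:4.1f} arcmin) = {amplification(t * ARCMIN, t_e):.3f}")
```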
Gravitational lensing can cause an over- or under-density of source objects near to the lens. The ratio of observed surface density to the true surface density (unlensed) is the enhancement factor, $`q`$, given by
$$q=\frac{N(<m+2.5\mathrm{log}(A))}{N(<m)}\frac{1}{A}$$
(5)
\[Narayan 1989\], where $`A`$ is the amplification factor. $`N(<m)`$ is the integrated number count of source objects brighter than magnitude $`m`$. We note that $`q`$ depends on the source counts fainter than the limit of the survey. With a number count of the form $`N(<m)\propto 10^{\alpha m}`$, we then find an angular cross-correlation function $`\omega _{\mathrm{CQ}}(\theta )`$ that is described by
$$\omega _{\mathrm{CQ}}(\theta )=q-1=A^{2.5\alpha -1}-1.$$
(6)
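Continuing in the same spirit, the sketch below turns an SIS amplification into the predicted cross-correlation of Eqn. 6 using the faint-end count slope $`\alpha =0.28`$ adopted later in the paper; the Einstein radius is the illustrative value from the previous sketch, and the dilution by stellar contamination (included in the real modelling) is ignored here.

```python
import numpy as np

ARCMIN = np.pi / (180 * 60)

def omega_cq(theta, theta_e, alpha=0.28):
    """Eqn. 6 for an SIS lens: omega_CQ = A**(2.5*alpha - 1) - 1, with A from Eqn. 1."""
    amp = theta / (theta - theta_e)               # valid for theta > theta_E
    return amp ** (2.5 * alpha - 1.0) - 1.0

theta_e = 0.6 * ARCMIN       # illustrative Einstein radius from the previous sketch
for t in (1.0, 2.0, 5.0, 10.0):
    print(f"theta = {t:4.1f} arcmin  ->  omega_CQ = {omega_cq(t * ARCMIN, theta_e):+.3f}")
```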
## 3 The Correlation of Durham/AAT QSOs and Galaxy Groups
We look at the result from the Durham/AAT UVX Survey \[Boyle 1986, Boyle et al. 1990\] which shows an anti-correlation between UVX QSO candidates and galaxy groups (BFS88). This cross-correlation was carried out within 7 UKST fields, using COSMOS scans of photographic plates. Spectroscopy of the UVX catalogue \[Boyle et al. 1987\] suggested that with a colour limit of $`u-b<-0.4`$ there was $`55`$ per cent contamination by Galactic stars. In the BFS88 analysis the UVX criterion was tightened to $`u-b<-0.5`$, reducing contamination to 25 per cent while keeping 85 per cent of the QSOs. The UVX catalogue was then split into two magnitude limited samples, $`17.9<b<19.9`$ and $`17.9<b<20.65`$. The galaxy catalogue consists of all galaxies to a limit of $`b_\mathrm{J}=20.0`$ and the cluster sample was created using a ‘friends-of-friends’ algorithm \[Gott & Turner 1977, Stevenson et al. 1988\]. Groups of seven or more galaxies with density greater than 8 times the average for the field were classed as clusters, which amounted to 10 per cent of the total number of galaxies. BFS88 performed a cross-correlation analysis between the entire galaxy catalogue and the UVX sample but no significant correlation was found on any scale. Cross-correlation of cluster galaxies with the UVX catalogue resulted in negative correlations on scales $`<10^{\prime }`$ for both samples, the brighter sample showing a marginally more negative clustering signal. This can be interpreted as a decrease in contamination from smaller photometric errors in the brighter sample, but a second effect is that the QSO $`N(m)`$ slope will be steeper at this brighter limit, thus a smaller anti-correlation might be expected. Given that our results are sensitive to the exact shape and position of the break, we restrict our analysis to the fainter sample, with the proviso that in new larger samples the anti-correlation as a function of magnitude will provide an important test of the gravitational lensing hypothesis.
### 3.1 A comparison of lensing models and the data
The effect of gravitational lensing is strongly dependent on the slope of the QSO number-magnitude relation at faint magnitudes. We use the number counts from Boyle, Shanks & Peterson (1988) which give an asymptotic faint end slope of $`0.28`$. We have confirmed that this is a reasonable representation of the integral QSO number count at $`1`$ mag fainter than our magnitude limit, the region from where we expect amplified QSOs to come, by using the deeper data of Boyle, Jones & Shanks (1991). A flatter slope would clearly reduce the lensing mass required. The separation of observer, lens and source also affects the lensing amplification. To take this into account in our model, we integrate the known QSO redshift distribution over the effective range of the Durham/AAT survey ($`0.3<z<2.2`$). This gives us an effective lensing amplification for a particular lens mass. For the galaxies we assume the analytic form of $`N(z)`$ given by Baugh & Efstathiou (1993):
$$\frac{\mathrm{d}N}{\mathrm{d}z}\propto z^2\mathrm{exp}\left[-\left(\frac{z}{z_\mathrm{c}(m)}\right)^{3/2}\right],$$
(7)
where $`z_\mathrm{c}=(0.016(b_\mathrm{J}-17.0)^{1.5}+0.046)/1.412`$. This is shown integrated to $`b_\mathrm{J}=20`$ in Fig. 1 along with a polynomial fit to the QSO $`N(z)`$ \[Shanks & Boyle 1994\]. The two populations occupy almost completely independent volumes, less than $`1\%`$ of the QSOs are at $`z<0.3`$ while less than $`0.5\%`$ of the galaxies are at $`z>0.3`$. We assume an $`\mathrm{\Omega }_0=1`$ cosmology throughout this analysis, but it should be noted that when the lensing mass is at low redshift ($`z\sim 0.1`$) cosmology has a relatively small effect as $`D_\mathrm{s}\simeq D_{\mathrm{ls}}`$.
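The effective lensing geometry entering Eqn. 2 can be illustrated with a crude sketch: for an $`\mathrm{\Omega }_0=1`$ (Einstein-de Sitter) cosmology the angular diameter distances have a closed form, and averaging $`D_{\mathrm{ls}}/D_\mathrm{s}`$ over the source redshifts gives a number of order 0.8-0.9 for a $`z\approx 0.1`$ lens. A flat weighting over $`0.3<z<2.2`$ is used here purely for illustration; the calculation in the text weights by the measured QSO $`N(z)`$.

```python
import numpy as np

def d_comoving_eds(z):
    """Comoving distance in an Einstein-de Sitter universe, in units of c/H0."""
    return 2.0 * (1.0 - 1.0 / np.sqrt(1.0 + z))

def dls_over_ds(z_lens, z_source):
    """D_ls/D_s for a flat cosmology, where comoving distances simply subtract."""
    return (d_comoving_eds(z_source) - d_comoving_eds(z_lens)) / d_comoving_eds(z_source)

z_qso = np.linspace(0.3, 2.2, 400)        # crude flat weighting over the survey range
print(f"effective D_ls/D_s for a z = 0.1 lens: {dls_over_ds(0.1, z_qso).mean():.2f}")
```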
We compare the SIS lens model (from Eqs. 1 and 6) to the cross-correlation result $`\omega _{CQ}(\theta )`$ of the faint sample shown in Fig. 2. We have allowed for 25% contamination of the QSOs by randomly distributed stars. Using a minimum $`\chi ^2`$ fit the velocity dispersion is $`\sigma =1286_{-91}^{+72}\mathrm{km}\mathrm{s}^{-1}`$ (reduced $`\chi ^2=1.44`$). The dotted line in Fig. 2 shows this model. The SIS+plane model (Eqs. 3 and 6) is shown as the dashed line in Fig. 2, the best fit in this case has velocity dispersion $`\sigma =1143_{-153}^{+109}\mathrm{km}\mathrm{s}^{-1}`$ and the surface density in the plane of $`\mathrm{\Sigma }_\mathrm{c}=0.081\pm 0.032h\mathrm{g}\mathrm{cm}^{-2}`$. The reduced $`\chi ^2`$ for this fit is $`1.07`$. The confidence levels for this fit are shown in Fig. 3.
The values found are significantly larger than directly measured velocity dispersions of poor clusters and groups. However, because BFS88 correlate cluster members with UVX objects, they effectively weight each cluster by the number of member galaxies. Stevenson et al., (1988) find the fraction of clusters as a function of the number of members follows an approximate power-law with a slope of $`-2.2`$. From this we can calculate the mean cluster membership, $`\overline{n}`$. Integrating the relationship between $`n=7`$ and $`n=50`$ gives $`\overline{n}=15`$. However, a member weighted mean gives $`\overline{n}=20`$. Thus, the BFS88 result is probing clusters which typically have $`\sim 20`$ members. The density on the sky of these clusters is $`0.8`$ deg<sup>-2</sup>, which can be compared to the density of clusters in the APM Cluster Catalogue \[Dalton et al. 1997\] of $`0.2`$ deg<sup>-2</sup> and the density of richness class 0 or greater Abell clusters which is $`0.1`$ deg<sup>-2</sup> \[Abell et al. 1989\]. Thus an ‘average’ cluster used by BFS88 is significantly poorer than Abell richness clusters. The velocity dispersion that might be expected for clusters of this richness is $`\sigma \approx 400`$–$`600\mathrm{km}\mathrm{s}^{-1}`$ (e.g. Ratcliffe et al. 1998). For comparison, a lensing model corresponding to a velocity dispersion of $`600\mathrm{km}\mathrm{s}^{-1}`$ is shown in Fig. 2; the model is formally rejected at $`>5\sigma `$. It therefore appears that the masses implied for the galaxy groups from lensing are $`\sim 4`$ times bigger than expected from virial analyses.
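The two membership averages quoted above follow directly from the Stevenson et al. power law; a short sketch of the integration (reproducing $`\overline{n}\approx 15`$ and the member-weighted value of $`\approx 20`$) is:

```python
from scipy.integrate import quad

slope = -2.2                      # dN_clus/dn ~ n^slope (Stevenson et al. 1988)
n_lo, n_hi = 7, 50

def moment(k):
    """Integral of n^k times the membership distribution over [n_lo, n_hi]."""
    return quad(lambda n: n ** (k + slope), n_lo, n_hi)[0]

print(f"mean membership: {moment(1) / moment(0):.1f}")
print(f"member-weighted mean: {moment(2) / moment(1):.1f}")
```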
Although the addition of the uniform plane to mimic the effects of clustering of clusters helps to improve the fit to $`\omega _{\mathrm{QC}}`$, Fig. 3 shows that this only reduces the velocity dispersion of the clusters for high values of $`\mathrm{\Sigma }_\mathrm{c}`$. Wu et al., (1996) find that the maximum mass likely to be associated with lenses from large-scale structure is $`0.01`$–$`0.02h\mathrm{g}\mathrm{cm}^{-2}`$, which assumes that the matter density in clusters is near the critical density (i.e. $`\mathrm{\Omega }_{\mathrm{clus}}\sim 1`$). Values of $`\mathrm{\Sigma }_\mathrm{c}`$ in this range are only compatible with group velocity dispersions greater than 1000 $`\mathrm{km}\mathrm{s}^{-1}`$ (see Fig. 3).
We now demonstrate how the lensing estimates of average group mass, via the velocity dispersion $`\sigma `$, could be used to obtain a new estimate of $`\mathrm{\Omega }_0`$. Using the 0.8 deg<sup>-2</sup> sky density of groups from above, we infer an approximate space density of galaxy groups in the range $`\mathrm{n}=2`$–$`4\times 10^{-4}\mathrm{h}^3\mathrm{Mpc}^{-3}`$. The lower value comes from integrating the proper volume to z=0.1 and assuming all groups are detected to this redshift, and on the basis of Fig. 1 that this contains half the group sky density. The higher value comes from using the galaxy n(z) in Fig. 1 to derive the galaxy selection function. This is multiplied by the proper volume and the product is integrated to z=0.7, to give the effective volume from which the group space density is then derived. Multiplying by the estimated mass per group obtained by integrating the isothermal sphere profile out to a radius $`r`$ leads to the estimate of $`\mathrm{\Omega }_0`$. We therefore find
$$\mathrm{\Omega }_0=1.3\left(\frac{n}{0.0003h^3\mathrm{Mpc}^{-3}}\right)\left(\frac{r}{1h^{-1}\mathrm{Mpc}}\right)\left(\frac{\sigma }{1286\mathrm{km}\mathrm{s}^{-1}}\right)^2,$$
(8)
where $`r`$ is now the extent of the anti-correlation, or the effective extent of the isothermal sphere. We note that the dependence on $`h`$ cancels out and there is no dependence on any biasing parameter. The above scaling values represent our best estimate for $`n`$, $`r`$ and $`\sigma `$. The value for r is obtained from consideration of Fig. 2 where $`\theta \approx 10^{\prime }`$ corresponds to 1 $`h^{-1}\mathrm{Mpc}`$. The errors on $`n`$ and $`r`$ are unfortunately likely to be dominated by systematic components, with each potentially varying by a factor of $`\sim 2`$. This is due to the approximate methods used to determine both the surface density of clusters, the space density of clusters, and the limiting scale of the anti-correlation. We also note that the measurement of $`\sigma `$ is dependent on $`\mathrm{\Omega }_0`$ through the angular diameter distance terms in eq. 2, although at the redshifts considered this effect is small (a $`25\%`$ difference in mass between $`\mathrm{\Omega }_0=1`$ and $`\mathrm{\Omega }_0=0`$), and may currently be dominated by more serious systematic effects. More meaningful error estimates must await larger galaxy-QSO datasets, possibly with full redshift information for the galaxies as well as the QSOs, where the extent of the anti-correlation and the group density can be better defined.
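A numerical sketch of Eqn. 8 is given below. The group mass is taken here to be the projected isothermal-sphere mass within the radius $`r`$, $`M=\pi \sigma ^2r/G`$, an assumption on our part which reproduces the quoted prefactor of 1.3; the factors of $`h`$ cancel as stated in the text.

```python
import numpy as np

G_MPC = 4.301e-9        # G in Mpc (km/s)^2 / M_sun
RHO_CRIT = 2.775e11     # critical density in h^2 M_sun / Mpc^3

def omega_0(n_h3_mpc3, r_h1_mpc, sigma_kms):
    """Eqn. 8: group space density times an isothermal-sphere mass, over rho_crit."""
    mass = np.pi * sigma_kms ** 2 * r_h1_mpc / G_MPC     # h^-1 M_sun (projected mass)
    return n_h3_mpc3 * mass / RHO_CRIT                   # h^3 * h^-1 / h^2: h-free

print(f"Omega_0 ~ {omega_0(3e-4, 1.0, 1286.0):.1f}")     # the scaling values of Eqn. 8
```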
A further final problem is that this analysis for $`\mathrm{\Omega }_0`$ also assumes that the groups are physically real and not significantly contaminated by accidental line-of-sight over-densities on the sky, which Stevenson et al (1988) suggested was a possibility. Of course, even accidental over-densities will act as lenses and it is not yet clear how sensitive this estimate of $`\mathrm{\Omega }_0`$ is to this type of contamination. Again, in a survey which also has full galaxy redshift information, such as the forthcoming 2dF QSO/galaxy redshift survey, the effects of spurious groups could be more easily checked.
## 4 Comparison with the LBQS-Galaxy Cross-Correlation Result
We now compare our conclusions with those of Williams & Irwin (1998) (henceforth WI98) who have found a strong positive correlation between APM galaxies and LBQS QSOs which is significant out to scales of $`60^{\prime }`$. These authors find that their positive correlation is an order of magnitude larger than that expected from a model with $`\mathrm{\Omega }_0=0.3`$ and galaxy bias of $`b\sim 1`$, based on a comparison of the galaxy-galaxy angular correlation function and the QSO-galaxy cross-correlation function. WI98 derive the relation:
$$\omega _{\mathrm{QG}}(\theta )\simeq (2\tau /b)(2.5\alpha -1)\omega _{\mathrm{GG}}(\theta ).$$
(9)
Here $`\alpha `$ is the slope of the QSO number counts, $`b`$ is the galaxy bias, assumed to be constant as a function of scale and $`\tau `$ is the optical depth of the lenses:
$$\tau =\rho _{\mathrm{crit}}\mathrm{\Omega }_0\int _0^{z_{\mathrm{max}}}\frac{(c\mathrm{d}t/\mathrm{d}z)(1+z)^3}{\mathrm{\Sigma }_{\mathrm{crit}}(z,z_\mathrm{s})}\mathrm{d}z.$$
(10)
For a given value of $`\mathrm{\Omega }_0`$, a bias value can therefore be found. This analysis can easily be applied to the faint QSO anti-correlation result of BFS88. We fit a power law with a slope of –0.8 to the auto-correlation of clusters measured by Stevenson et al. (1988), finding $`\omega _{\mathrm{CC}}(\theta )=(0.140\pm 0.053)\theta ^{-0.8}`$. We then also fit a –0.8 power law to the anti-correlation between QSOs and clusters, which we find to be $`\omega _{\mathrm{CQ}}(\theta )=(0.0071\pm 0.0059)\theta ^{-0.8}`$. With an assumed number count slope of $`0.28\pm 0.02`$ these values then imply a value of $`\tau /b=0.085\pm 0.077`$. We assume $`\mathrm{\Omega }_0=1`$ and integrate Eq. 10 to $`z_{\mathrm{max}}=0.2`$, the redshift at which the $`N(z)`$ relation described by Eq. 7 falls to half its peak value; this gives $`\tau =0.021`$. We therefore find a bias value of the clusters used in this analysis of $`b_\mathrm{C}=0.25\pm 0.23`$. If we use an $`\mathrm{\Omega }_0=0.3`$ model, as used by WI98, then $`\tau `$ and therefore $`b_\mathrm{C}`$ will fall by a factor of $`\sim 3`$. We should note here that the errors are large, and there is some uncertainty in this procedure; if we integrate $`\tau `$ to where the $`N(z)`$ drops to $`3/4`$ ($`z=0.16`$) or $`1/4`$ ($`z=0.24`$) of its peak value we find $`\tau =0.015`$ and $`0.029`$ respectively. However even if we take the largest reasonable value of $`\tau `$, then $`b_\mathrm{C}=0.34\pm 0.031`$, which is still an order of magnitude lower than expected for clusters. Thus, in rough agreement with $`b_\mathrm{G}\approx 0.07`$ from WI98, we find a bias value estimated from statistical lensing which is an order of magnitude less than predicted by other methods. We also note that the WI98 result is consistent with the QSO-galaxy cross-correlation measured by BFS88, although BFS88 do not find a significant anti-correlation.
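The numbers in this paragraph can be checked with a short sketch of Eqn. 10. For $`\mathrm{\Omega }_0=1`$ the integrand reduces to a dimensionless combination of Einstein-de Sitter angular diameter distances; a single effective source redshift of $`z\approx 1.5`$ is assumed here in place of the full QSO $`N(z)`$, which roughly reproduces the quoted $`\tau \approx 0.021`$ and $`b_\mathrm{C}\approx 0.25`$.

```python
import numpy as np
from scipy.integrate import quad

def d_ang_eds(z1, z2):
    """Angular diameter distance from z1 to z2 in an EdS universe, units of c/H0."""
    return 2.0 / (1.0 + z2) * (1.0 / np.sqrt(1.0 + z1) - 1.0 / np.sqrt(1.0 + z2))

def tau_eds(z_max, z_src, omega0=1.0):
    """Eqn. 10 recast dimensionlessly: tau = 1.5 Omega_0 int (1+z)^0.5 D_l D_ls / D_s dz."""
    f = lambda z: (np.sqrt(1.0 + z) * d_ang_eds(0.0, z) * d_ang_eds(z, z_src)
                   / d_ang_eds(0.0, z_src))
    return 1.5 * omega0 * quad(f, 0.0, z_max)[0]

alpha = 0.28
tau_over_b = (0.0071 / 0.140) / (2.0 * (1.0 - 2.5 * alpha))  # from Eqn. 9 and the fitted amplitudes
t = tau_eds(0.2, 1.5)
print(f"tau = {t:.3f},  b_C = {t / tau_over_b:.2f}")
```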
Although our result appears to be consistent with WI98, they are both clearly significantly out of line with other current estimates of the combination of $`\mathrm{\Omega }_0`$ and bias. The space density of galaxy clusters gives $`\mathrm{\Omega }_0^{0.5}\sigma _8\approx 0.5`$ (Eke et al. 1998) and dynamical estimates such as the measurement of redshift space distortions give similar values (e.g. Ratcliffe et al. 1998). We could possibly appeal to the scale dependence of bias to bring these different results into agreement; however, this would require an order of magnitude change in bias over a scale of $`10h^{-1}\mathrm{Mpc}`$. Taken at face value the above lensing result appears to suggest much more mass is present in the Universe than is detected from the distribution and motion of galaxies.
## 5 Discussion and Conclusions
BFS88 originally interpreted the UVX QSO-cluster anti-correlation as being due to absorption by dust present in clusters, the required amount of absorption being $`A_\mathrm{B}\approx 0.2`$ mag. Ferguson (1993) finds no evidence for any reddening due to dust in clusters, and the 90% upper limit on the reddening is $`E(B-V)=0.06`$. This upper limit is just consistent with the required absorption assuming $`A_\mathrm{B}=4.10E(B-V)`$, and it is therefore still possible that lensing and absorption could both play a part in producing the anti-correlation result. However, it is impossible for dust in groups to also provide an explanation for the strong positive QSO-galaxy correlation found by Williams & Irwin, and if their result is due to lensing, an anti-correlation comparable to that discussed here is expected at faint QSO magnitudes. If both results prove to be real, the simplest interpretation is that both are due to gravitational lensing.
Assuming that the measured anti-correlation is due to gravitational lensing, we find that fitting an isothermal sphere model for the cluster potentials gives a larger than expected velocity dispersion. Adding a uniform density plane to the mass profile does not significantly affect this conclusion. These lensing mass estimates suggest cluster/group masses which are $`\sim 4`$ times larger than expected from virial theorem analyses. We discuss a potential method to determine $`\mathrm{\Omega }_0`$ from this type of mass estimate combined with a cluster/group space density measurement. We demonstrate this method with the current data, although an accurate measure of $`\mathrm{\Omega }_0`$ will have to wait for larger and better controlled galaxy-QSO datasets.
We find consistency between the high $`\mathrm{\Omega }_0/b\approx 3`$–$`4`$ value implied by the strong positive QSO-galaxy cross-correlation seen at bright QSO magnitudes (Williams & Irwin 1998) and the negative QSO-galaxy cross-correlation seen at faint QSO magnitudes (BFS88), if lensing is assumed to cause both effects. Applying the method of Williams & Irwin to both these cross-correlation results gives $`\mathrm{\Omega }_0\sigma _8\approx 3`$–$`4`$ (where $`\sigma _8\sim 1/b`$) and the inferred values of $`\mathrm{\Omega }_0^{0.5}\sigma _8`$ are therefore 6–8 times higher than those inferred from arguments based on the space-density of rich clusters.
Of course, it is still possible that some combination of systematic and random errors have contrived to produce the positive QSO-galaxy correlation seen in LBQS and the anti-correlation detected by BFS88. The importance of the above results suggests that it is vital to make further observational checks as to the reality of the QSO-galaxy cross-correlation signal. Fortunately, extended analyses of the above type will soon be possible with the completion of new large redshift surveys such as the 2dF QSO Redshift Survey and the 2dF Galaxy Redshift Survey. These two samples with 25000 QSOs and 250000 galaxies covering the same areas of sky should allow a definitive measurement of the cross-correlation function between background QSOs and galaxies at both bright and faint QSO magnitudes.
## acknowledgements
SMC acknowledges the support of a Durham University Research Studentship. This paper was prepared using the facilities of the Durham STARLINK node.
# Matter-induced 𝜔→𝜋𝜋 decay
Research supported by PRAXIS grants XXI/BCC/429/94 and PRAXIS/P/FIS/12247/1998, and by the Polish State Committee for Scientific Research grant 2P03B-080-12.
¹ H. Niewodniczański Institute of Nuclear Physics, PL-31342 Kraków, Poland; email: broniows@solaris.ifj.edu.pl, florkows@solaris.ifj.edu.pl
² Center for Theoretical Physics, University of Coimbra, P-3000 Coimbra, Portugal; email: brigitte@malaposta.fis.uc.pt
## Abstract
We calculate the width for the $`\omega \to \pi \pi `$ decay in nuclear matter in a hadronic model including mesons, nucleons and $`\mathrm{\Delta }`$ isobars. We find a substantial width of the longitudinally polarized $`\omega `$ modes, reaching $`100`$ MeV for mesons moving suitably fast with respect to the nuclear medium.
The dilepton measurements in the CERES ceres and HELIOS helios experiments have indicated that the masses and/or widths of light vector mesons undergo large modifications in nuclear matter. Clearly, since the mesons interact strongly with the medium, this fact is not at all surprising. Indeed, in-medium modifications of hadron properties are predicted in a variety of theoretical calculations brscale ; celenza ; hatlee ; jean ; cassing ; li ; hatsuda ; rapp ; pirner ; klingl ; leupold ; eletsky ; friman2 ; bratko (for a recent review see hadrons ; tsukuba ). An interesting factor brought in by the presence of the medium is that processes which are forbidden in the vacuum by symmetry principles are now made possible. The constraints of Lorentz-invariance, $`G`$-parity, or isospin invariance in isospin-asymmetric media dutt ; rhoom , are no longer effective. An example of such an “exotic” phenomenon which becomes possible and significant in the presence of nuclear matter is the decay $`\omega \to \pi \pi `$. This process is, apart from a small isospin-violating effect, forbidden in the vacuum (in the vacuum the partial width for the decay $`\omega \to \pi ^+\pi ^-`$ is only $`0.2`$ MeV, due to the small isospin breaking and the resulting $`\rho `$-$`\omega `$ mixing; in this paper we are not concerned with this negligible effect). In this paper we show that the matter-induced width for this process is large. For $`\omega `$ moving with respect to the medium with a momentum above $`200`$ MeV (such momenta are easily accessible in heavy-ion collisions) the corresponding width, at the nuclear saturation density, is of the order of 100 MeV. In addition, we find very different behavior of the longitudinally and transversely polarized $`\omega `$ mesons, with the former ones being much wider than the latter ones.
Our calculation is made in the framework of an effective hadronic theory. Mesons interact with the nucleons and $`\mathrm{\Delta }`$ isobars, and the interactions are assumed to have the usual form used in many other calculations and fits. We work to the leading order in the nuclear density. Based on the experience of other in-medium calculations we hope that this approximation should be sufficient up to densities of the order of the nuclear saturation density. To this leading order only the diagrams shown in Fig. 1 contribute. In these diagrams the nucleon lines include the propagation of occupied states of the Fermi sea. The “bubble” diagram (a) has been analyzed by Wolf, Friman, and Soyeur in Ref. wolf , where the role of the $`\omega \sigma `$ mixing mechanism has been pointed out. In this process the $`\omega `$ meson is first converted, via interaction with the nucleons, into the scalar-isoscalar $`\sigma `$ meson, which in turn decays into two pions. The relevance of “triangle” diagrams (b) has been shown in Ref. bfh . Note that in any formal counting scheme (low density, chiral limit, large number of colors) the diagrams (a) and (b) are of the same order and consistency requires including both. Our present calculation of $`\omega \to \pi \pi `$ in nuclear matter includes a further contribution of diagrams (c-d) with the $`\mathrm{\Delta }`$(1232) isobar. Among other possible resonances, the $`\mathrm{\Delta }`$ is the most important one due to the large value of the $`\pi N\mathrm{\Delta }`$ vertex and small $`\mathrm{\Delta }N`$ mass splitting.
The solid line in Fig. 1 denotes the in-medium nucleon propagator, which can be conveniently decomposed in the free and density parts chin
$`G(k)\equiv G_F(k)+G_D(k)`$ (1)
$`=(\overline{)}k+M)\left[{\displaystyle \frac{1}{k^2-M^2+i\epsilon }}+{\displaystyle \frac{i\pi }{E_k}}\delta (k_0-E_k)\theta (k_F-|𝐤|)\right],`$
where $`k`$ is the nucleon four-momentum, $`M`$ denotes the nucleon mass, $`E_k=\sqrt{M^2+𝐤^2}`$, and $`k_F`$ is the Fermi momentum. The diagram (a) is non-zero only when one propagator is $`G_D`$, and the other one $`G_F`$. The only non-vanishing contributions in diagram (b) involve one $`G_D`$ propagator and two $`G_F`$ propagators. Diagram (a) involves the intermediate $`\sigma `$-meson propagator, which we take in the form
$$G_\sigma (k)=\frac{1}{k^2-m_\sigma ^2+im_\sigma \mathrm{\Gamma }_\sigma -\frac{1}{4}\mathrm{\Gamma }_\sigma ^2}.$$
(2)
Here the mass and the width of the $`\sigma `$ meson are chosen in such a way that they reproduce effectively the experimental $`\pi \pi `$ scattering length at $`q^2=m_\omega ^2=(780\mathrm{M}\mathrm{e}\mathrm{V})^2`$, which is the relevant kinematic point for the process at hand. From this fit we find $`m_\sigma =789`$MeV and $`\mathrm{\Gamma }_\sigma =237`$MeV. Note that $`m_\omega `$ and $`m_\sigma `$ are very close to each other, which enhances the amplitude obtained from diagram (a) wolf .
The double line in diagrams (c-d) denotes the $`\mathrm{\Delta }`$ propagator
$`G_\mathrm{\Delta }^{\alpha \beta }(k)={\displaystyle \frac{\overline{)}k+M_\mathrm{\Delta }}{k^2-M_\mathrm{\Delta }^2+iM_\mathrm{\Delta }\mathrm{\Gamma }_\mathrm{\Delta }-\frac{1}{4}\mathrm{\Gamma }_\mathrm{\Delta }^2}}`$ (3)
$`\times \left[-g^{\alpha \beta }+{\displaystyle \frac{1}{3}}\gamma ^\alpha \gamma ^\beta +{\displaystyle \frac{2k^\alpha k^\beta }{3M_\mathrm{\Delta }^2}}+{\displaystyle \frac{\gamma ^\alpha k^\beta -\gamma ^\beta k^\alpha }{3M_\mathrm{\Delta }}}\right].`$
This formula corresponds to the usual Rarita-Schwinger definition rarita ; mukho with the denominator modified in order to account for the finite width of the $`\mathrm{\Delta }`$ resonance, $`\mathrm{\Gamma }_\mathrm{\Delta }=120`$MeV.
We assume that the $`\omega NN`$ and $`\omega \mathrm{\Delta }\mathrm{\Delta }`$ vertices have the form which follows from the minimum-substitution prescription and vector-meson dominance applied to the nucleon and the Rarita-Schwinger rarita Lagrangians:
$`V_{\omega NN}^\mu =g_\omega \gamma ^\mu ,`$ (4)
$`V_{\omega \mathrm{\Delta }\mathrm{\Delta }}^{\mu \alpha \beta }=g_\omega \left[\gamma ^\mu g^{\alpha \beta }+g^{\alpha \mu }\gamma ^\beta +g^{\beta \mu }\gamma ^\alpha +\gamma ^\alpha \gamma ^\mu \gamma ^\beta \right].`$
Possible anomalous couplings can be incorporated at the expense of having more parameters. The results presented below do not depend qualitatively on the form of the coupling, as long as it remains strong. The coupling constant $`g_\omega `$ can be estimated from the vector dominance model. We use $`g_\omega =9`$. For the $`\pi NN`$ vertex we use the pseudoscalar coupling, with the coupling constant $`g_{\pi NN}=12.7`$. The same value is used for $`g_{\sigma NN}`$. The $`\sigma \pi \pi `$ coupling constant is taken to be equal to $`g_{\sigma \pi \pi }=12.8m_\pi `$, where $`m_\pi =139.6\mathrm{MeV}`$ is the physical pion mass (this value follows from the fit to $`\pi \pi `$ scattering phase shifts done in Ref. wolf ). The $`\pi N\mathrm{\Delta }`$ vertex has the form $`V_{\pi N\mathrm{\Delta }}^\mu =(f_{\pi N\mathrm{\Delta }}/m_\pi )p^\mu \stackrel{}{T}`$, where $`p^\mu `$ is the pion momentum, $`\stackrel{}{T}`$ is the $`\frac{1}{2}\to \frac{3}{2}`$ isospin transition matrix, and the coupling constant $`f_{\pi N\mathrm{\Delta }}=2.1`$ durso . (There is another possible structure in the $`\pi N\mathrm{\Delta }`$ coupling, of the form $`a\overline{)}p\gamma ^\mu `$; our vertex corresponds to the popular choice of the off-shell parameter $`a`$ set to zero.)
The amplitude, evaluated according to the diagrams depicted in Fig. 1 (a-d) can be uniquely decomposed in the following Lorentz-invariant way:
$`\mathcal{M}=ϵ^\mu (Ap_\mu +Bu_\mu +Cq_\mu ),`$ (5)
where $`p`$ is the four-momentum of one of the pions, $`q`$ is the four-momentum of the $`\omega `$ meson, $`u`$ is the four-velocity of nuclear matter, and $`ϵ`$ specifies the polarization of $`\omega `$. Our calculation is performed in the rest frame of nuclear matter, where $`u=(1,0,0,0)`$. In this reference frame the amplitude $`\mathcal{M}`$ vanishes for vanishing 3-momentum $`𝐪`$, as required by rotational invariance. Hence, the process $`\omega \to \pi \pi `$ occurs only when the $`\omega `$ moves with respect to the medium.
The expression for the decay width reads
$`\mathrm{\Gamma }={\displaystyle \frac{1}{2}}\cdot 3\cdot {\displaystyle \frac{1}{n_s}}{\displaystyle \underset{s}{\sum }}{\displaystyle \frac{1}{2q_0}}{\displaystyle \int \frac{d^3p}{(2\pi )^32p_0}\int \frac{d^3p^{\prime }}{(2\pi )^32p_0^{\prime }}}`$
$`\times |\mathcal{M}|^2(2\pi )^4\delta ^{(4)}(q-p-p^{\prime }),`$ (6)
where the factor $`\frac{1}{2}`$ is the symmetry factor when the decay proceeds into two neutral pions, the factor of $`3`$ accounts for the isospin degeneracy of the final pion states (*i.e.* neutral and charged pions), $`n_s`$ is the number of spin states of the $`\omega `$ meson, and $`\sum _s`$ denotes the sum over these spin states ($`q`$, $`p`$ and $`p^{\prime }=q-p`$ are the four-momenta of the $`\omega `$ meson and the two pions, respectively).
We take care to analyze separately the longitudinally and transversely polarized $`\omega `$, since the presence of the medium results in different behavior of these states. The transversely polarized $`\omega `$ has two helicity states ($`n_s=2`$), with projections $`s=\pm 1`$ on the direction of $`𝐪`$, and the longitudinally polarized $`\omega `$ has one helicity state ($`n_s=1`$), with the corresponding projection $`s=0`$. An explicit calculation yields
$`{\displaystyle \underset{s=\pm 1}{\sum }}\epsilon _{(s)}^\mu \epsilon _{(s)}^\nu \equiv T^{\mu \nu }=`$
$`={\displaystyle \frac{(q^\mu -q\cdot u\,u^\mu )(q^\nu -q\cdot u\,u^\nu )}{q\cdot q-(q\cdot u)^2}}-g^{\mu \nu }+u^\mu u^\nu ,`$
$`\epsilon _{(s=0)}^\mu \epsilon _{(s=0)}^\nu \equiv L^{\mu \nu }=`$
$`=-{\displaystyle \frac{(q^\mu -q\cdot u\,u^\mu )(q^\nu -q\cdot u\,u^\nu )}{q\cdot q-(q\cdot u)^2}}+{\displaystyle \frac{q^\mu q^\nu }{q\cdot q}}-u^\mu u^\nu .`$ (7)
The tensors $`T^{\mu \nu }`$ and $`L^{\mu \nu }`$ are defined in such a way that they are projection operators. In addition, $`T^{\mu \nu }q_\nu =0`$ and $`L^{\mu \nu }q_\nu =0`$, which reflects current conservation, and $`T^{\mu \nu }u_\nu =0`$. From relations (5) and (7) in Eq. (6) we find that
$`|\mathcal{M}_T|^2=|A|^2p_\mu (T^{\mu \nu })p_\nu ,`$ (8)
$`|\mathcal{M}_L|^2=(A^{*}p_\mu +B^{*}u_\mu )(L^{\mu \nu })(Ap_\nu +Bu_\nu ).`$
Note that the value of the coefficient $`C`$ is irrelevant for our calculation. For the diagram (a) we find $`A=0,`$ $`B\ne 0,`$ hence this diagram contributes only to the width of the longitudinal mode. Diagrams (b-d) have $`A\ne 0,`$ $`B\ne 0,`$ and contribute to the width of both the longitudinal and transverse modes.
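The algebraic properties of the tensors in Eq. (7) can be verified numerically in the matter rest frame. The sketch below is only a quick check with an arbitrary illustrative $`\omega `$ momentum (not a value taken from the calculation); it confirms orthogonality to $`q`$, transversality of $`T`$ to $`u`$, and the completeness relation. Note that, with the mostly-minus metric, it is $`-T`$ and $`-L`$ that are idempotent, since spacelike polarization vectors satisfy $`\epsilon \cdot \epsilon =-1`$.

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])            # metric, signature (+,-,-,-)
u = np.array([1.0, 0.0, 0.0, 0.0])              # matter four-velocity (rest frame)
q = np.array([0.88, 0.0, 0.0, 0.40])            # illustrative omega four-momentum (GeV)

dot = lambda a, b: a @ g @ b
qq, qu = dot(q, q), dot(q, u)

w = q - qu * u                                   # vector appearing in Eq. (7)
F = np.outer(w, w) / (qq - qu**2)

T = F - g + np.outer(u, u)                       # transverse polarization sum
L = -F + np.outer(q, q) / qq - np.outer(u, u)    # longitudinal polarization sum

# -T and -L are idempotent (eps.eps = -1 for spacelike polarizations)
assert np.allclose(T @ g @ T, -T)
assert np.allclose(L @ g @ L, -L)
assert np.allclose(T @ g @ q, 0) and np.allclose(L @ g @ q, 0)   # current conservation
assert np.allclose(T @ g @ u, 0)                                  # transversality to u
assert np.allclose(T + L, -g + np.outer(q, q) / qq)               # completeness
print("Eq. (7) polarization sums pass all checks")
```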
As we have said, we evaluate the amplitude $`\mathcal{M}`$ to leading order in the baryon density. This leads to a simplification. The integrals of the form $`\int _0^{k_F}k^2𝑑kf(k)`$ arising in our calculation are replaced by $`f(0)\int _0^{k_F}k^2𝑑k`$, which is proportional to baryon density, $`\rho _B`$. Consequently, the widths $`\mathrm{\Gamma }^{L,T}\propto \rho _B^2`$.
In Fig. 2 we present our numerical results at the nuclear saturation density, $`\rho _B=0.17\mathrm{fm}^{-3}`$. We show $`\mathrm{\Gamma }^L`$ (solid lines) and $`\mathrm{\Gamma }^T`$ (dashed lines) plotted as functions of $`|𝐪|`$. The upper part of the plot is for $`m_\omega ^{*}=m_\omega =780`$ MeV, i.e. the value of the $`\omega `$ mass is not modified by the medium. The lower part is for $`m_\omega ^{*}=0.7m_\omega `$. In both cases we reduce the value of the in-medium nucleon mass to 70 % of its vacuum value, $`M^{*}=0.7M,`$ which is a typical number at the nuclear saturation density. We also reduce by the same factor the mass of the $`\mathrm{\Delta }`$, *i.e.* $`M_\mathrm{\Delta }^{*}=0.7M_\mathrm{\Delta }`$, since it is expected to behave similarly to the nucleon. The labels indicate which diagrams of Fig. 1 have been included. The complete result corresponds to the case (a-d). The case (a) reproduces the result of Ref. wolf . We note that the inclusion of subsequent processes of Fig. 1 substantially increases the result. All the curves start as $`𝐪^2`$ at low $`|𝐪|`$. The longitudinal width reaches a maximum at $`|𝐪|`$ of a few hundred MeV, and the value at the peak is large: 250 MeV for $`m_\omega ^{*}=m_\omega `$ and 100 MeV for $`m_\omega ^{*}=0.7m_\omega `$. The transverse width is strictly zero with diagram (a), less than 1 MeV with diagrams (a-b), and reaches a few MeV when the diagrams with the $`\mathrm{\Delta }`$ are included. Qualitatively similar results follow for other choices of parameters. One should bear in mind that the effect is proportional to $`\rho _B^2`$, hence may be much larger at higher densities.
Our main conclusions are: 1) nuclear matter induces the $`\omega \to \pi \pi `$ transitions with large partial widths, 2) the widths depend strongly on the three-momentum of the $`\omega `$ with respect to the medium, $`|𝐪|`$, 3) the longitudinal mode is much wider than the transverse mode.
The results obtained mean that in a hadron gas, such as created in a heavy-ion collision, the propagation of longitudinally polarized $`\omega `$ meson is inhibited when the momentum $`|𝐪|`$ is nonzero. This will cause a depletion in the population of the $`\omega `$ mesons. Such effects should be included in Monte-Carlo simulations of heavy-ion collisions.
The important question is to what extent the discussed process can influence the shape of the dilepton-production cross-sections in relativistic heavy-ion collisions. Our results can be used to calculate the cross section for the $`\pi \pi `$ annihilation into dileptons occurring in the $`\omega `$ channel. This mechanism has been analyzed for the first time in Ref. wolf . The calculation of the annihilation cross section requires the knowledge of the same amplitude that has been used in the calculation of the $`\omega `$ width, see Fig. 3. In Ref. wolf only the $`\omega \sigma `$ mixing term was included in this amplitude (diagram (a) of Fig. 1). In our present calculation we take into account additional diagrams shown in Figs. 1 (b) - (d). Moreover, we take into consideration differences in the propagation of the transverse and longitudinal modes, which is important due to the large difference observed in the widths.
The $`\pi \pi `$ annihilation cross section corresponding to Fig. 3, averaged over the incoming pion momenta at fixed total four-momentum $`q=(q^0,𝐪)`$, can be written in the compact form
$$\sigma =\frac{8\pi ^2q^0m_\omega ^4}{9q^2(q^2/4-m_\pi ^2)}\left(\frac{\alpha }{g_\omega }\right)^2\left(2|G_\omega ^T|^2\mathrm{\Gamma }^T+|G_\omega ^L|^2\mathrm{\Gamma }^L\right),$$
(9)
where
$$|G_\omega ^{T,L}|^2=\frac{1}{[q^2-m_\omega ^2-\frac{1}{4}(\mathrm{\Gamma }_0+\mathrm{\Gamma }^{T,L})^2]^2+m_\omega ^2(\mathrm{\Gamma }_0+\mathrm{\Gamma }^{T,L})^2}$$
(10)
is the modulus of the $`\omega `$ propagator squared. The widths $`\mathrm{\Gamma }^T`$ and $`\mathrm{\Gamma }^L`$ should be calculated in the way described above with the only difference that the physical mass of $`\omega `$ is now replaced by the invariant dilepton mass $`\sqrt{q^2}`$. The quantity $`\mathrm{\Gamma }_0`$ denotes the width of the $`\omega `$ at vanishing three-momentum, which is due to other effects, such as $`\omega \to \pi \pi \pi `$. With $`\mathrm{\Gamma }_0\approx 10`$ MeV klingl and our values for $`\mathrm{\Gamma }^{T,L}`$ we obtain the cross section from Eq. (9), which is typically a fraction of a microbarn. This is to be compared to $`3.5\mu \mathrm{b}`$ from the decay via the $`\rho `$ resonance wolf . Note that large widths $`\mathrm{\Gamma }^{T,L}`$ in Eq. (9) do not increase $`\sigma `$, since they also appear in the denominator of Eq. (10). In fact, there are optimum widths $`\mathrm{\Gamma }^{T,L}\simeq \mathrm{\Gamma }_0`$ at which the cross section is the largest. A further increase of the widths $`\mathrm{\Gamma }^{T,L}`$ decreases the cross section. At the point $`q^2=m_\omega ^2`$ and with our numbers from Fig. 2 we find that the contribution of the longitudinal modes to Eq. (9) is negligible, while the contribution from the transverse modes at $`|𝐪|=400\mathrm{M}\mathrm{e}\mathrm{V}`$ equals $`0.4\mu \mathrm{b}`$ for $`m_\omega ^{*}=m_\omega `$ (where $`\mathrm{\Gamma }^T=1.8\mathrm{MeV}`$), and $`1.4\mu \mathrm{b}`$ for $`m_\omega ^{*}=0.7m_\omega `$ (where $`\mathrm{\Gamma }^T=5\mathrm{M}\mathrm{e}\mathrm{V}`$). We stress that the numbers quoted above are almost entirely due to the diagrams with the $`\mathrm{\Delta }`$. Without the processes (b-d) of Fig. 1 the dilepton production via the mechanism of Fig. 3 would be about a factor of 10 smaller. In conclusion, the process of Fig. 3 may be significant for the dilepton production in heavy-ion collisions.
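To get a feel for the size of Eq. (9), it can be evaluated directly at the $`\omega `$ pole. The following sketch is only a rough numerical illustration, assuming $`\alpha =1/137`$ and the parameter values quoted above, with the longitudinal term dropped since it is stated to be negligible; it reproduces the quoted $`\approx 0.4\mu \mathrm{b}`$ for $`m_\omega ^{*}=m_\omega `$ at $`|𝐪|=400`$ MeV.

```python
import math

GEV2_TO_MICROBARN = 389.4       # conversion factor, 1 GeV^-2 in microbarn
alpha_em = 1.0 / 137.036
g_omega  = 9.0
m_omega  = 0.780                # GeV
m_pi     = 0.1396               # GeV
Gamma0   = 0.010                # GeV, omega width at vanishing three-momentum

def sigma_pipi_dileptons(q3, gamma_T, gamma_L=0.0):
    """Eq. (9) evaluated at the pole q^2 = m_omega^2 (longitudinal piece optional)."""
    q2 = m_omega**2
    q0 = math.sqrt(q2 + q3**2)
    def G2(gamma):              # |G_omega|^2 of Eq. (10)
        return 1.0 / ((q2 - m_omega**2 - 0.25 * (Gamma0 + gamma)**2)**2
                      + m_omega**2 * (Gamma0 + gamma)**2)
    pref = 8 * math.pi**2 * q0 * m_omega**4 / (9 * q2 * (q2 / 4 - m_pi**2))
    sig = pref * (alpha_em / g_omega)**2 * (2 * G2(gamma_T) * gamma_T
                                            + G2(gamma_L) * gamma_L)
    return sig * GEV2_TO_MICROBARN

# |q| = 400 MeV and Gamma^T = 1.8 MeV, the m_omega* = m_omega case discussed above
print(f"{sigma_pipi_dileptons(0.400, 0.0018):.2f} microbarn")   # ~0.4
```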
If the dilepton-production experiments measured the three-momentum $`𝐪`$ of the dilepton pair coming from a vector-meson decay, then they should observe different behavior at different values of $`|𝐪|`$. Such measurement would be very helpful for a better understanding of meson dynamics in the nuclear medium.
Our last remark refers to the final state interactions, which can be important durso . The pions emitted in processes of Fig. 1 can interact in the final channel. This will result in an appropriate modification of the $`\omega \to \pi \pi `$ amplitude. The full analysis of the final-state interactions requires a model for the $`\pi \pi `$ scattering amplitude, as well as solving a Lippmann-Schwinger equation. This is beyond the scope of this paper. Note, however, that the diagram (a), which includes the intermediate $`\sigma `$ state, does in fact account for final-state interactions. In this process the pions form a resonance in the $`S`$-channel, which enhances the amplitude. Similar rescattering can also occur for the diagrams (b-d). Thus, the final-state interactions are only partially included in our analysis.
###### Acknowledgements.
We thank Bengt Friman for numerous valuable comments and for the suggestion to include the $`\mathrm{\Delta }`$ in the presented analysis.
# Top-quark physics in six-quark final states at the Next Linear Collider
## 1 Introduction
Many important signals to be studied at future high energy colliders, NLC and LHC , will have a large number of particles in the final state. In particular, the processes with six fermions in the final state will be of great importance for several tests of the Standard Model, such as the studies of top-quark and electroweak gauge bosons, as well as the search for an intermediate-mass Higgs boson. Such processes are already of great interest at the Tevatron collider in connection with top-quark physics. Theoretical studies of six-fermion ($`6f`$) processes by means of complete tree-level calculations have only very recently appeared in the literature , where top-quark physics, Higgs physics and $`WWZ`$ production have been addressed.
All these studies clearly demonstrate that complete calculations are important for a precise determination of the cross-sections, and for the development of reliable event generators, whenever accurate evaluations of interference, off-shellness and background effects, as well as spin correlations, are important.
In view of the precision measurements of the top-quark properties we have to analyse, among the $`6f`$ signatures, the ones containing a $`b\overline{b}`$ pair and two charged currents, as the top-quark decays almost exclusively into a $`W`$ boson and a $`b`$ quark. Semileptonic signatures have already been considered in refs. . It is then of great interest to carefully evaluate the size of the totally hadronic, six-quark ($`6q`$) contributions to integrated cross-sections and distributions as well as to determine their phenomenological features. The aim of the present study is to make a first quantitative analysis in this context, concerning the full set of electroweak contributions to a class of $`e^+e^-`$ annihilation processes related to top-quark physics. Special emphasis will be given to the determination and to the analysis of the topology of the events considered, so as to characterize them, as far as possible, against the QCD backgrounds.
Looking at an experimental situation where the $`b`$-tagging technique can be applied, it is meaningful to distinguish the $`6q`$ final states containing one $`b\overline{b}`$ pair, of the form $`b\overline{b}q\overline{q}^{\prime }q^{\prime \prime }\overline{q}^{\prime \prime \prime }`$, from those containing two or three $`b\overline{b}`$ pairs, respectively of the form $`b\overline{b}b\overline{b}q\overline{q}`$ and $`b\overline{b}b\overline{b}b\overline{b}`$. The last two kinds of processes are not relevant to top-quark production, as they contain no charged currents.
In the present study the signatures with one $`b\overline{b}`$ pair are considered and the full set of purely electroweak contributions is taken into account. These processes can be further divided into three subsets (although in realistic predictions they cannot be treated separately), which are shown in Table 1: concerning the quark flavours other than $`b`$, only charged currents are involved in the first subset, both charged and neutral currents in the second one, and only neutral currents in the third one. The total number of tree-level Feynman diagrams involved in the complete electroweak calculation amounts to several hundreds. Such a complexity is unavoidable, as will be shown, if an accuracy of $`1\%`$ is to be reached. The diagrams with top-quark production, which will be referred to as signal diagrams, are shown in Fig. 1. They contribute to the processes in the first two columns of Table 1, but not to those in the third.
All the processes receive contributions from diagrams of Higgs production, of which the leading ones are illustrated in Fig. 2. The relevance of such contributions depends on the Higgs mass and on the centre-of-mass (c.m.) energy: the dominant decay mode is $`H\to b\overline{b}`$ for low Higgs masses ($`m_H\lesssim 130`$-$`140`$ GeV) and $`H\to VV`$ ($`V=W,Z`$) for high Higgs masses. The predictions can thus be expected to depend on the Higgs mass.
As is well known, the behaviour of the cross-section near the threshold for $`t\overline{t}`$ production is characterized by strong interaction effects that give a sizeable modification with respect to the purely electroweak prediction. Such effects are treated in the literature , and are not included in the calculations presented in this paper. (Theoretical calculations of radiative corrections to $`t\overline{t}`$ production are also present in the literature, as recently reviewed in ref. .) Results at energies around the threshold are shown, so as to give a thorough analysis of the electroweak contribution. Some of the QCD backgrounds to the signatures considered in the present study have been evaluated in ref. , and their topology has been studied by means of event-shape variables. One of the objectives of the present work is to characterize the topology of the complete electroweak contributions in order to help find appropriate selection criteria to reduce as far as possible the QCD backgrounds studied in ref. . The analysis performed in the present work, together with the other studies in the literature so far, should give a complete picture of electroweak contributions to $`6f`$ processes relevant to top-quark physics at NLC.
The paper is organized as follows: in Section 2 the computing procedure is briefly described; in Section 3 the numerical results, including integrated cross-sections and various distributions, are presented and discussed; Section 4 is devoted to our conclusions.
## 2 Calculation
The numerical results have been obtained by means of a procedure analogous to the one adopted in ref. , where the interested reader can find some technical details that will be omitted here. The computer program already used in ref. , which is based on ALPHA for the matrix element calculation and on an evolution of HIGGSPV/WWGENPV for the Monte Carlo integration and event generation, has been adapted, in the multichannel importance sampling, to include some new diagram topologies, such as those in Figs. 1 and 2.
The cross-section is calculated according to the formula
$$\sigma =\int dz_1dz_2D_{BS}(z_1,z_2;s)\int dx_1dx_2D(x_1,s)D(x_2,s)\int d[PS]\frac{d\widehat{\sigma }}{d[PS]},$$
(1)
where initial-state radiation (ISR) and beamstrahlung (BS) are included by means of the structure functions $`D(x,s)`$ and $`D_{BS}(x,s)`$, respectively; $`d\widehat{\sigma }/d[PS]`$ is the differential cross-section at the partonic level, and $`d[PS]`$ is the six-body phase-space measure. The program may be used to generate unweighted events as well.
The input parameters are $`G_\mu `$, $`M_W`$, $`M_Z`$, the top-quark mass $`m_t=175`$ GeV, and the $`b`$-quark mass $`m_b=4.3`$ GeV; all the other fermions are treated as massless. The widths of the $`W`$ and $`Z^0`$ bosons and of the top-quark and all the couplings are calculated at tree level. The Higgs-boson width includes the $`h\to \mu \mu ,\tau \tau ,cc,bb`$, the $`h\to gg`$ and the two-vector-boson channels. The CKM matrix used is exactly diagonal. The propagators of unstable particles have denominators of the form $`p^2-M^2+i\mathrm{\Gamma }M`$ with fixed widths. The validity of this choice for minimizing possible gauge violations has been discussed in ref. , where the final states $`q\overline{q}l^+l^-\nu \overline{\nu }`$ were considered. In that paper, for the $`SU(2)`$ invariance, the fudge-factor method has been used to check the numerical results: apart from the well-known problems of the fudge scheme, i.e. the mistreatment of non-resonant diagrams close to the resonances, no deviation has been found in the total cross-section up to the numerical accuracy considered. In order to check $`U(1)`$ invariance, the matrix element has been calculated with different forms of the photon propagator obtained by varying the gauge parameter; the results were found to be stable up to numerical precision. The same analysis carries over to the present study and gauge-violation effects are estimated to be numerically negligible.
The colour algebra, not implemented in the version of ALPHA that has been employed here, has been performed by summing the different processes with proper weights. As an example of this, the process $`e^+e^-\to b\overline{b}u\overline{d}\overline{u}d`$ may be considered: the colour amplitude, in the case of purely electroweak contributions, can be written in the form
$$A=\left(a_1\delta _{i_1i_2}\delta _{i_3i_4}+a_2\delta _{i_1i_3}\delta _{i_2i_4}\right)\delta _{jk},$$
(2)
where the colour indices $`i_1,i_2,i_3,i_4,j`$ and $`k`$ refer to the $`u,\overline{d},\overline{u},d,b`$ and $`\overline{b}`$ quarks respectively. The squared modulus summed over colours is then
$$\underset{col}{\sum }|A|^2=N_c^3|a_1|^2+N_c^2(a_1a_2^{}+a_1^{}a_2)+N_c^3|a_2|^2.$$
(3)
The amplitude given by ALPHA is instead
$$𝒜=a_1+a_2.$$
(4)
Thus one cannot use an overall factor to obtain eq. (3) from eq. (4). In order to disentangle the various terms in eq. (3), it is useful to notice that, with the quark masses adopted here and with a diagonal CKM matrix, the first term in the right-hand side of eq. (2) is equal to the amplitude $`A^{\prime }`$ of the process $`e^+e^-\to b\overline{b}u\overline{d}\overline{c}s`$, and the second term is equal to the amplitude $`A^{\prime \prime }`$ of the process $`e^+e^-\to b\overline{b}u\overline{s}\overline{u}s`$. Similarly, the two colourless amplitudes $`𝒜^{\prime }`$ and $`𝒜^{\prime \prime }`$ of these processes are equal to the first and to the second term, respectively, in the right-hand side of eq. (4). Thus the following relation is valid:
$`{\displaystyle \underset{col}{\sum }}\left(|A|^2+|A^{\prime }|^2+|A^{\prime \prime }|^2\right)=N_c^2\left(|𝒜|^2+(2N_c-1)(|𝒜^{\prime }|^2+|𝒜^{\prime \prime }|^2)\right).`$ (5)
Other situations are treated in a similar way, and the correct colour weights are thus obtained in the sum over the whole class of processes considered.
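The bookkeeping of Eq. (5) is easy to check numerically. The sketch below is only an illustration, with randomly chosen complex numbers standing in for the partial amplitudes $`a_1`$ and $`a_2`$ (these stand-ins are hypothetical, not outputs of ALPHA); it verifies that the explicit colour sums on the left-hand side equal the right-hand side built from the colourless amplitudes.

```python
import numpy as np

rng = np.random.default_rng(0)
Nc = 3
a1, a2 = rng.normal(size=2) + 1j * rng.normal(size=2)   # stand-in partial amplitudes

# Left-hand side of Eq. (5): explicit colour sums
sum_A  = Nc**3 * abs(a1)**2 + Nc**2 * 2 * (a1 * np.conj(a2)).real + Nc**3 * abs(a2)**2
sum_A1 = Nc**3 * abs(a1)**2      # process with only the first colour structure
sum_A2 = Nc**3 * abs(a2)**2      # process with only the second colour structure
lhs = sum_A + sum_A1 + sum_A2

# Right-hand side of Eq. (5): colourless amplitudes with the weight (2 Nc - 1)
A_full, A_prime, A_dprime = a1 + a2, a1, a2
rhs = Nc**2 * (abs(A_full)**2 + (2 * Nc - 1) * (abs(A_prime)**2 + abs(A_dprime)**2))

assert np.isclose(lhs, rhs)
print("Eq. (5) colour-weight identity holds")
```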
## 3 Numerical results and discussion
In this section the numerical results, including both integrated cross-sections and distributions, are shown. In all the calculations the invariant masses of the $`b\overline{b}`$ pair and of all the pairs of quarks other than $`b`$ and $`\overline{b}`$ are required to be greater than 10 GeV. The results presented below are obtained, unless otherwise stated, by summing over all the processes listed in Table 1.
### 3.1 Integrated cross-sections
As a first step, the total cross-section, resulting from all the tree-level diagrams for the processes in Table 1, has been calculated in the Born approximation at energies from 340 to 800 GeV. Two values of Higgs mass have been considered, $`m_H=100,185`$ GeV, so as to study the dependence of the results on $`m_H`$ in the intermediate range. The numerical errors are always below $`1\%`$ and in particular above the $`t\overline{t}`$ threshold they are kept at the $`0.2`$–$`0.3\%`$ level.
In Fig. 3 the full cross-section for $`m_H=185`$ GeV is compared with the signal, defined as the contribution of the two diagrams of $`t\overline{t}`$ production of Fig. 1, summed over the four processes to which they contribute (see the first two columns of Table 1); the signal is shown both in the Born approximation and with ISR switched on.
The difference between the full and the signal curve is dominated by Higgs-strahlung contributions (diagrams $`(a)`$ and $`(b)`$ of Fig. 2) at low energy, while other backgrounds are important at high energy, coming from all the processes in Table 1 and, to a smaller extent, from the interference of the signal diagrams with the other contributions in the charged-current and mixed processes. The electroweak background effects, which are in the range $`5`$–$`10\%`$ above the threshold, amount to around 30$`\%`$ at threshold; they are much greater below the threshold, where the signal is suppressed with respect to the background (the ratio background/signal is 2.5 at 340 GeV).
The radiative effects strongly suppress the cross-section in the low-energy region, where it grows rapidly, as they reduce the effective c.m. energy; with increasing energy, the curve with ISR comes to cross the one in the Born approximation, as a consequence of the onset of the opposite behaviour of the Born term, which, above the threshold, starts decreasing. It can be observed that at 500 GeV the enhancement due to the background is of the same order as the lowering given by the ISR.
In Fig. 4 the signal cross-section without kinematical cuts is plotted together with the cross-section in the narrow-width approximation (NWA). The latter is calculated as the product of the cross-section for $`e^+e^{}t\overline{t}`$, and of the branching ratios of the decays $`Wq\overline{q}^{}`$, assuming the branching ratio of $`tWb`$ to be exactly unity. The difference between the two calculations is about $`15\%`$ in the region near the threshold, and it decreases, as expected, with increasing c.m. energy: at 500 GeV it is $`3\%`$, while at 800 GeV it is less than $`1\%`$. These results give a measure of the off-shellness effects connected with the top-quark and $`W`$-boson widths.
The cross-sections for the two values of the Higgs mass, $`m_H=100`$ and 185 GeV, have been found to differ by less than $`1\%`$ at energies above the threshold region, while at lower energies, differences of up to $`20`$–$`30\%`$ occur. This is due to the fact that the signal at low energy is not large enough to hide the Higgs-mass effects. Moreover, such effects decrease with increasing energy. In order to make a detailed study of the dependence on the Higgs mass at low energy, the cross-section at the threshold for $`t\overline{t}`$ production, $`\sqrt{s}=350`$ GeV, has been calculated for various Higgs masses in the range from 100 GeV to 185 GeV. The results are shown in Fig. 5, where the cross-section is plotted as a function of the Higgs mass. Variations of the order of $`10\%`$ can be seen in this plot, which shows the importance of complete calculations to keep under control the background effects and uncertainties that come from not knowing the Higgs mass.
### 3.2 Distributions
Two samples of events have been generated at a c.m. energy of 500 GeV and with a Higgs mass of 185 GeV. One sample is in the Born approximation, while the other includes ISR and BS. The numbers of events, of the order of $`10^5`$, have been determined by assuming a luminosity of 500 fb⁻¹, which is the integrated value expected in one year of run.
In the definition of observable distributions for the class of processes considered here, we must take into account the fact that quark flavours other than $`b`$ cannot be identified. As a consequence, two kinds of distributions, labelled “exact” and “reconstructed”, are considered in the following: the “exact” distributions are calculated by identifying all the quarks; the “reconstructed” distributions are calculated by means of the following algorithm. The momenta $`q_1,\mathrm{},q_4`$ of the four quarks other than $`b`$ and $`\overline{b}`$ are first considered and, for every pair $`(q_i,q_j)`$, the invariant mass $`m_{ij}=\sqrt{(q_i+q_j)^2}`$ is calculated; then the two $`W`$ particles, $`W_1`$ and $`W_2`$, are reconstructed as the pairs $`(q_i,q_j)`$ and $`(q_k,q_l)`$ such that the quantity $`|m_{ij}-M_W|+|m_{kl}-M_W|`$ is minimized; the top-quark is then determined by taking the combination $`(b,W_i)`$, $`(\overline{b},W_j)`$, which minimizes the quantity $`|m_{bW_i}-\stackrel{~}{m}_t|+|m_{\overline{b}W_j}-\stackrel{~}{m}_t|`$, where $`\stackrel{~}{m}_t=175`$ GeV is the nominal top mass.
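The pairing algorithm is straightforward to implement. The following sketch is a simplified stand-alone version, not the program used in the paper: four-momenta are taken as plain (E, px, py, pz) arrays, and the value $`M_W=80.4`$ GeV is an assumed nominal input.

```python
import numpy as np
from itertools import combinations

M_W, M_T_NOMINAL = 80.4, 175.0   # GeV, nominal masses used in the pairing

def inv_mass(*momenta):
    p = sum(momenta)                               # four-momenta as (E, px, py, pz)
    return np.sqrt(max(p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2, 0.0))

def reconstruct_tops(light_quarks, b, bbar):
    """Pair the four non-b quarks into two W's, then combine with b and bbar."""
    # choose the partition (i,j)(k,l) minimizing |m_ij - M_W| + |m_kl - M_W|
    best = None
    for i, j in combinations(range(4), 2):
        k, l = [x for x in range(4) if x not in (i, j)]
        score = (abs(inv_mass(light_quarks[i], light_quarks[j]) - M_W)
                 + abs(inv_mass(light_quarks[k], light_quarks[l]) - M_W))
        if best is None or score < best[0]:
            best = (score, (i, j), (k, l))
    _, (i, j), (k, l) = best
    W1 = light_quarks[i] + light_quarks[j]
    W2 = light_quarks[k] + light_quarks[l]

    # assign the W's to b and bbar so that both tops are as close as possible to 175 GeV
    opt1 = abs(inv_mass(b, W1) - M_T_NOMINAL) + abs(inv_mass(bbar, W2) - M_T_NOMINAL)
    opt2 = abs(inv_mass(b, W2) - M_T_NOMINAL) + abs(inv_mass(bbar, W1) - M_T_NOMINAL)
    if opt1 <= opt2:
        return inv_mass(b, W1), inv_mass(bbar, W2)
    return inv_mass(b, W2), inv_mass(bbar, W1)
```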
The invariant mass of the top-quark is studied in Fig. 6. In the plot $`(a)`$ a comparison is made between the exact (dashed line) and the reconstructed (solid line) distribution in the Born approximation and a good agreement can be observed. In order to further check the reconstruction procedure, in particular the dependence on the adopted value of $`\stackrel{~}{m}_t`$, some tests have been made by taking values in the range $`170`$ GeV $`<\stackrel{~}{m}_t<180`$ GeV and fitting the resulting histograms with Breit–Wigner distributions. The values of the physical top mass obtained in the various cases are identical, within the statistical errors.
The radiative effects are shown in the plot $`(b)`$ of Fig. 6, for the reconstructed distribution. They do not apparently give a substantial modification. In the plot $`(c)`$ the role of background diagrams is studied, by comparing the result of the full calculation with the signal alone. The background does not introduce any observable distortion. More quantitative results have been obtained by means of fits to the histograms with Breit–Wigner distributions. All the histograms in Fig. 6 give the same value of $`m_t`$, so that it can be safely concluded that electroweak backgrounds, as well as ISR and BS do not give any bias in the determination of the physical mass via the direct reconstruction method on the scale of precision of 100 MeV.
The angular distribution of the top-quark with respect to the beam axis is a good indicator of the spin nature and of the couplings of the top-quark. This variable is illustrated in Fig. 7. As in the case of the invariant mass, the exact and reconstructed distributions have been checked to be in very good agreement. It should be observed that radiative and background effects are of the same magnitude (in particular the former are dominated by the ISR). The shapes of the histograms are in good qualitative agreement with the angular distribution predicted by the lowest-order analytic calculation for the process of real $`t\overline{t}`$ production.
The most effective way to obtain a separation between the $`t\overline{t}`$ signal and the QCD backgrounds, as already pointed out by some authors , is to analyse event-shape variables, such as thrust , sphericity , spherocity , $`C`$ and $`D`$ parameters , etc. A comparison between pure QCD ($`O(\alpha _{em}^2\alpha _s^4)`$) six-jet events and the $`t\overline{t}`$ signal has been performed in ref. for the thrust and sphericity distributions, and other shape variables have been studied in the same article for the QCD contributions only. In the present work several such variables have been analysed for the electroweak contributions, and the effects of the electroweak backgrounds and of ISR and BS have been studied. The thrust and $`C`$ parameter distributions for the process under consideration are shown in Fig. 8. In the upper row the radiative effects are displayed, while in the plots of the lower row the signal is compared with the full result. In the radiative case, the distributions are calculated after going to the c.m. frame. Remarkable effects due to ISR and BS can be seen in these plots and in particular in the thrust distribution, where the peak is strongly reduced with respect to the Born approximation and the events are shifted towards the lower values of $`T`$, which correspond to spherical events. It is interesting to observe that this phenomenon is of help for the selection of the signal with respect to QCD backgrounds. From the plots in the second row, it can be seen that the presence of the electroweak backgrounds, although visible, is almost negligible for both observables.
The remarkable change of the thrust distribution after inclusion of radiation can be better understood by observing the dependence of this distribution on the c.m. energy, which is analysed in Fig 9, where four samples of 10000 events each, at the energies of 360, 500, 800 and 1500 GeV, are studied. The peak structure that is present at 500 GeV is completely lost at 360 GeV, and this explains the lowering of the peak at 500 GeV in the presence of ISR and BS, as this reduces the available c.m. energy. At 800 and 1500 GeV the peak is shifted towards the collinear region $`T1`$, as a consequence of the Lorentz boost of the $`t`$ and $`\overline{t}`$ quarks.
As a conclusion, we can say that, at 500 GeV, in view of the results of the pure QCD processes, studied in ref. , the thrust variable is the most effective in discriminating pure QCD backgrounds, also in the presence of electroweak backgrounds and of ISR and BS. At higher energies this separation appears to be more and more problematic.
On the other hand, the backgrounds of $`O(\alpha _{em}^4\alpha _s^2)`$, given by $`2\to 4`$ processes with subsequent gluon emission from a quark line, should be considered (a study of contributions of this class for semileptonic signatures is made in ref. , but without an analysis of event-shape variables). A rough estimate of the leading contributions of this kind could be obtained by considering a four-fermion process of the form $`e^+e^-\to W^+W^-\to 4`$ jets, similar to what is done in ref. . A test made by means of the four-fermion program WWGENPV has confirmed the results of ref. for the thrust and has led to similar conclusions for the $`C`$ parameter: such processes appear to be well separated from the top-quark signal and thus appear to be less dangerous than the pure QCD backgrounds.
## 4 Conclusions
The production of $`t\overline{t}`$ pairs has been studied in processes with six quarks in the final state, at the energies of the NLC. The signatures considered contain one $`b\overline{b}`$ pair, and receive contributions from both charged and neutral currents. The top-quark signal is present only in the charged-current terms. The purely electroweak contributions have been considered and complete tree-level calculations have been performed.
The cross-section has been calculated by means of a computer program already used for other phenomenological studies on $`6f`$ processes and adapted here to sample the new diagram topologies. The importance of the electroweak backgrounds and of the off-shellness effects has been examined. Above the threshold for $`t\overline{t}`$ production, the former are of the order of several per cent and the latter are at the per cent level. Near the threshold, both effects are sizeable and, in particular, a study of the dependence of the cross-section on the Higgs mass at threshold shows that variations of the order of 10$`\%`$ occur for Higgs masses between 100 GeV and 185 GeV. A complete calculation is needed to keep such effects under control and to have a $`1\%`$ accuracy.
Some distributions have been studied in a realistic approach, by using a reconstruction algorithm for the top-quark that takes into account the impossibility of identifying quark flavours other than $`b`$. The invariant mass of the top-quark has thus been studied and the presence of electroweak background contributions as well as the initial-state radiative effects have been found not to affect the determination of the mass on the scale of experimental precision expected at NLC.
The angular distribution of the top-quark with respect to the beam axis, which is directly related, in the case of real production, to the quantum numbers of the top-quark, has been calculated and shown to be in qualitative agreement with the expectation suggested by the real production case.
Finally, some event-shape variables have been studied. At a c.m. energy of 500 GeV the thrust distribution turns out to be the most interesting for the aim of discriminating the leading QCD backgrounds, as suggested by other authors who discussed the top-quark signal alone. The effects of electroweak backgrounds and of ISR and BS have been shown here not to alter these conclusions. At higher energies, the Lorentz boost gives to the event a more collinear shape, so that the separation of QCD backgrounds could become more difficult.
The study presented in this paper has been performed by means of a computing program that can equally well deal with semi leptonic signatures and can be switched in a straightforward manner to treat polarized scattering. Moreover, by employing the new version of ALPHA , which embodies also the QCD Lagrangian, complete strong and electroweak results could be obtained.
Acknowledgements
F. Gangemi thanks the INFN, Sezione di Pavia, for the use of computing facilities. The work of M. Moretti is funded by a Marie Curie fellowship (TMR-ERBFMBICT 971934).
Proceedings of the Adriatico Research Conference “Quantum Interferometry III”
Trieste, March 1999 (to be published in Fortschritte der Physik).
Bayesian Analysis of Bell Inequalities
Asher Peres
Department of Physics,
Technion—Israel Institute of Technology,
32 000 Haifa, Israel
Electronic address: peres@photon.technion.ac.il
Abstract
Statistical tests are needed to determine experimentally whether a hypothetical theory based on local realism can be an acceptable alternative to quantum mechanics. It is impossible to rule out local realism by a single test, as often claimed erroneously. The “strength” of a particular Bell inequality is measured by the number of trials that are needed to invalidate local realism at a given confidence level. Various versions of Bell’s inequality are compared from this point of view. It is shown that Mermin’s inequality for Greenberger-Horne-Zeilinger states requires fewer tests than the Clauser-Horne-Shimony-Holt inequality or than its chained variants applied to a singlet state, and also than Hardy’s proof of nonlocality.
I. Formulation of the problem
Bell inequalities are upper bounds on the correlations of results of distant measurements. These inequalities are obeyed by any local realistic theory, namely a theory that uses local variables with objective values. Since Bell’s original discovery , many inequalities of that type have been published, with various claims of superiority. The purpose of this article is to compare their relative strengths for various quantum states.
In actual experimental tests, there are no infinite ensembles for accurate measurements of mean values. Experimental physicists perform a finite number of tests, and then they state that their results violate the inequality at some confidence level. The problem I wish to discuss here is of a different nature. I am a theorist and I trust that quantum mechanics gives a reliable description of nature. However, I have a friend who is a local realist. We have only a finite number of trials at our disposal. How many tests are needed to make my realist friend feel uncomfortable?
The problem is not whether the validity of a Bell inequality can be salvaged by invoking clever loopholes, as some local realists try to trick us into, but whether there can be any local realistic theory that reproduces the experimental results. When these results are analyzed we have to take into account detector inefficiencies, and this should be done honestly in the same way when our analysis is based on quantum theory or on a local realistic theory. To simplify the discussion, I shall assume that there are ideal detectors, and that the rate at which particles are produced by the apparatus is perfectly known. The disagreement is only on the choice of the correct theory.
Consider a yes-no test. Quantum mechanics (QM) predicts that the probability of the “yes” result is $`q`$, and an alternative local realistic (LR) theory predicts a probability $`r`$. An experimental test is performed $`n`$ times and yields $`m`$ “yes” results. What can we infer about the likelihood of the two theories? The answer is given by Bayes’s theorem . Denote by $`p_q^{\prime }`$ and $`p_r^{\prime }`$ the prior probabilities that we assign to the validity of the two theories. These are subjective probabilities, expressing our personal beliefs. For example, if my friend is willing to bet 100 to 1 that LR is correct and QM is wrong, then $`p_r^{\prime }/p_q^{\prime }=100`$. The question is how many experimental tests are needed to change my friend’s opinion to $`p_r^{\prime \prime }/p_q^{\prime \prime }=0.01`$ say, before he is driven to bankruptcy.
It follows from Bayes’s theorem that
$$\frac{p_r^{\prime \prime }}{p_q^{\prime \prime }}=\frac{p_r^{\prime }}{p_q^{\prime }}\frac{E_r}{E_q},$$
(1)
where $`E_r`$ and $`E_q`$ are the probabilities of the experimentally found result (namely $`m`$ successes in $`n`$ trials), according to the two theories. These are, by the binomial theorem,
$$E_r=[n!/m!(nm)!]r^m(1r)^{nm},$$
(2)
$$E_q=[n!/m!(nm)!]q^m(1q)^{nm},$$
(3)
whence
$$E_q/E_r=(q/r)^m[(1-q)/(1-r)]^{n-m}.$$
(4)
I shall call the ratio
$$D=E_q/E_r$$
(5)
the confidence depressing factor for hypothesis LR with respect to hypothesis QM.
II. Greenberger-Horne-Zeilinger state
As a first example, consider the Greenberger-Horne-Zeilinger (GHZ) state for a tripartite system, namely $`(|000|111)/\sqrt{2}`$, where 0 and 1 denote two orthogonal states of each subsystem. This state is experimentally difficult to produce but its theoretical analysis is quite simple. Three distant observers examine the three subsystems. The first observer has a choice of two tests. The first test can give two different results, that we label $`a=\pm 1`$, and likewise the other test yields $`a^{}=\pm 1`$. Symbols $`b,b^{},c`$ and $`c^{}`$ are similarly defined for the two other observers. Any possible values of their results satisfy
$$a^{\prime }bc+ab^{\prime }c+abc^{\prime }-a^{\prime }b^{\prime }c^{\prime }=\pm 2,$$
(6)
whence it follows that
$$-2\le a^{\prime }bc+ab^{\prime }c+abc^{\prime }-a^{\prime }b^{\prime }c^{\prime }\le 2.$$
(7)
This is Mermin’s inequality .
Quantum mechanics happens to make a very simple prediction for the GHZ state: there are well chosen tests that give with certainty
$$a^{\prime }bc=ab^{\prime }c=abc^{\prime }=-a^{\prime }b^{\prime }c^{\prime }=1.$$
(8)
Naturally, performing any such test can verify the value 1 for only one of these products, since each product corresponds to a different experimental setup. Yet, if we take all these results together they manifestly conflict with Eq. (6), and many authors have stated that a single experiment is sufficient to invalidate local realism. This is sheer nonsense: a single experiment can only verify one occurrence of one of the terms in (8).
Let us return to our realist friend. He believes that, in each experimental run, each term in Eq. (8) has a definite value even if that term is not actually measured in that run. Let us therefore ask him to propose just a rule giving the average values of the products $`a^{\prime }bc`$, etc., that appear in Eq. (7). How many tests are needed for depressing his confidence in that rule by a factor $`10^4`$, say?
The most successful LR theory, namely the one that gives the least depressing factor, is to assume that
$$a^{\prime }bc=ab^{\prime }c=abc^{\prime }=-a^{\prime }b^{\prime }c^{\prime }=0.5.$$
(9)
This obviously attains the right hand side of Mermin’s inequality (7). The LR prediction thus is that if we measure $`a^{\prime }bc`$, we shall find the result 1 (i.e., “yes”) in 75% of cases, and the opposite result in 25%; and likewise for the other tests. We thus have, with the notations introduced above, $`q=1`$ and $`r=0.75`$. For $`n`$ tests, with ideal detectors, we have $`m=n`$ (I am assuming here that quantum theory is correct), and the depressing factor in Eq. (4) is $`0.75^{-n}`$. For example, 32 tests give $`D\approx 10^4`$, as required.
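The arithmetic is short enough to check in two lines; the sketch below simply evaluates the per-trial factor $`q/r=4/3`$ of Eq. (4) for 32 tests.

```python
q, r = 1.0, 0.75          # QM and LR probabilities of the "yes" outcome
n = 32
D = (q / r) ** n          # Eq. (4) with m = n
print(f"D after {n} tests: {D:.3g}")   # ~1.0e4
```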
III. The singlet state
The second example involves just two correlated quantum systems far away from each other. An observer, located near one of the systems, has a choice of several yes-no tests, labelled $`A_1`$, $`A_3`$, $`A_5`$, etc. Likewise, another observer, near the second system, has a choice of several yes-no tests, $`B_2`$, $`B_4`$, $`B_6`$ … Let $`p(A_iB_j)`$ denote the probability that tests $`A_i`$ and $`B_j`$ give the same result (both “yes” or both “no”). It was shown long ago by Clauser, Horne, Shimony, and Holt (CHSH) that local realism implies
$$p(A_1B_2)+p(B_2A_3)+p(A_3B_4)\ge p(A_1B_4).$$
(10)
(In the original paper , this equation was written in terms of correlations, namely $`2p-1`$, but it is much simpler to use probabilities, as here.) More generally, Braunstein and Caves derived chained Bell inequalities that can be written
$$p(A_1B_2)+p(B_2A_3)+\mathrm{}+p(A_{2k-1}B_{2k})\ge p(A_1B_{2k}).$$
(11)
There are $`(k!)^2`$ independent inequalities of that type, obtainable by relabelling the various $`A_i`$ and $`B_j`$. Local realism guarantees that all these inequalities are satisfied.
Consider a pair of spin-$`\frac{1}{2}`$ particles in the singlet state (similar results hold for maximally entangled pairs of polarized photons, except that all the angles mentioned below should be halved). Each observer can measure a spin component along one of $`k`$ possible directions, as illustrated in Fig. 1, where the angle between consecutive directions is $`\theta =\pi /2k`$. Quantum theory predicts that each one of the probabilities on the left hand side of Eq. (11) is $`q=(1-\mathrm{cos}\theta )/2`$, and the probability on the right hand side is $`1-q`$. These predictions manifestly violate Eq. (11).
What could be the predictions of an alternative theory, based on local realism? These predictions have to satisfy Eq. (11). The closest they can approach quantum theory is when equality holds in the latter equation. Moreover, it is reasonable to assume that all the terms on the left hand side are equal (this follows from rotational symmetry, and it can also be shown that any deviation from this symmetry would only increase the depression factor $`D`$). Let $`r`$ be the common value of all these terms. Then the right hand side of Eq. (11) has to be $`1-r`$ (again, because of rotational symmetry and because the spin projection along $`\beta _{2k}`$ is opposite to that along $`\beta =0`$). It follows that in a local realistic theory which mimics as closely as possible quantum mechanics and saturates the inequality (11), we have $`(2k-1)r=1-r`$, whence $`r=1/2k=\theta /\pi `$, where $`\theta `$ is the angle between consecutive rays. This is indeed the result obtainable from a crude semi-classical model, where a spinless system splits into two fragments with opposite angular momenta . Quantum theory, on the other hand, predicts for the same angle $`\theta `$ a probability $`q=(1-\mathrm{cos}\theta )/2`$ that both observers obtain the same result.
We thus have now definite predictions, from quantum theory and from an alternative local realistic theory. To distinguish experimentally between these two claims, we test $`n`$ pairs of particles prepared in the singlet state. Let $`m`$ be the number of “yes” answers. If $`m\simeq qn`$ (that is, if quantum theory is experimentally correct), it follows from Eq. (4) that
$$D=\left[\left(\frac{q}{r}\right)^q\left(\frac{1-q}{1-r}\right)^{1-q}\right]^{-n}.$$
(12)
For example, if we wish to have $`D\simeq 10^{-4}`$ as before, we obtain $`n\simeq 287`$ for $`k=2`$ (the case that was investigated by CHSH), and $`n\simeq 200`$ for the more efficient configuration of Braunstein and Caves with $`k=4`$ (higher values of $`k`$ would require a higher number of tests for giving the same depression factor $`D`$).
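The numbers of tests quoted here can be reproduced with a short Python sketch; it assumes the form of Eq. (12) with the exponent $`-n`$, together with the expressions for $`q`$ and $`r`$ derived above.

```python
import numpy as np

def tests_needed(k, D_target=1e-4):
    """Pairs needed to reach depression factor D_target for the chained
    Bell inequality with k measurement directions per observer."""
    theta = np.pi / (2 * k)
    q = (1 - np.cos(theta)) / 2      # QM probability of equal results
    r = 1 / (2 * k)                  # LR value that saturates Eq. (11)
    # per-test logarithm of Eq. (12), D = [(q/r)^q ((1-q)/(1-r))^(1-q)]^(-n)
    per_test = q * np.log(q / r) + (1 - q) * np.log((1 - q) / (1 - r))
    return np.log(1 / D_target) / per_test

print(tests_needed(2))   # ~287 tests (the CHSH case)
print(tests_needed(4))   # ~200 tests (Braunstein-Caves, k = 4)
```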
IV. Hardy’s proof of nonlocality
Finally, let us examine Hardy’s proof of nonlocality “without inequalities” , which was called by Mermin “the best version of Bell’s inequality” . It will be shown that this version is not stronger than the preceding ones. Stripped of all its technical details, Hardy’s paradox can be formulated as follows. There are four alternative setups as in the CHSH case, but each setup requires only one detector. The first observer has a choice of using detectors $`A`$ or $`A^{}`$, the second observer may use $`B`$ or $`B^{}`$. Detector coincidences will be labelled $`C_j`$, with $`j=1,\mathrm{},4`$. Explicitly,
$$C_1=AB,C_2=AB^{},C_3=A^{}B,$$
(13)
and $`C_4`$ means that in the fourth setup neither $`A^{}`$ nor $`B^{}`$ is excited. Other types of coincidences are not relevant in the following discussion.
Local realism implies that the probabilities $`p(C_j)`$ satisfy the Clauser-Horne (CH) inequality ,
$$p(C_1)\leq p(C_2)+p(C_3)+p(C_4).$$
(14)
On the other hand, quantum mechanics predicts that, for well chosen states and tests, these probabilities are $`p(C_2)=p(C_3)=p(C_4)=0`$ and
$$p(C_1)\equiv q=[(\sqrt{5}-1)/2]^5=0.09017,$$
(15)
so that the CH inequality is violated.
As in the preceding cases, let our LR friend propose a new set of probabilities $`r_j`$ that satisfy the CH inequality. For example, a simple possibility is to postulate that all the $`r_j`$ vanish (this assumption is implicit in Hardy’s proof). Then LR and QM agree for setups 2, 3, and 4, and we only have to test experimentally setup 1. According to QM, the probability of finding $`n`$ consecutive “no” results (in agreement with the LR prediction) is $`(1-q)^n`$. This is less than 50% after only 8 trials. The hypothesis that all the $`r_j`$ vanish is obviously untenable, and this is why Hardy’s proof is usually considered as quite strong.
However, there is a more sophisticated way to defend local realism. Let us assume that $`r_2=r_3=r_4=r_1/3`$, so that the CH inequality (14) is saturated, and let us optimize the value of $`r_1\equiv r`$. There are now two types of experimental tests. Those with setup 1 lead to a value of $`D`$ given by Eq. (12). On the other hand, setups 2, 3, and 4 have $`q=0`$ and then Eq. (12) gives, with $`r`$ replaced by $`r/3`$, the result $`D=(1-r)^n`$. To invalidate local realism, we shall obviously choose the setup that minimizes $`n`$, the number of required tests. Therefore the best that a LR theorist can do is to choose $`r`$ so as to equate these two values of $`D`$. A straightforward calculation then gives $`r=0.03358`$ and $`n\simeq 270`$.
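A minimal numerical sketch of this optimization (assuming, as above, that the $`q=0`$ setups contribute a combined depression factor $`(1-r)^n`$) reproduces the quoted values of $`r`$ and $`n`$.

```python
import math

q = ((math.sqrt(5) - 1) / 2) ** 5          # ~0.09017, QM prediction for setup 1

def imbalance(r):
    """Per-test log-depression of setup 1 minus that of the q = 0 setups."""
    setup1 = q * math.log(q / r) + (1 - q) * math.log((1 - q) / (1 - r))
    others = -math.log(1 - r)              # from D = (1 - r)^n
    return setup1 - others

lo, hi = 1e-3, q                           # imbalance changes sign on this interval
for _ in range(60):                        # simple bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if imbalance(mid) > 0 else (lo, mid)

r = 0.5 * (lo + hi)
n = math.log(1e4) / (-math.log(1 - r))     # tests needed for D ~ 1e-4
print(r, n)                                # r ~ 0.0336, n ~ 270
```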
Acknowledgments
I am grateful to Dagmar Bruß for pointing out ambiguities in an earlier version of this article, and to Chris Fuchs for helpful comments. This work was supported by the Gerard Swope Fund and the Fund for Encouragement of Research.
1. J. S. Bell, Physics 1, 195 (1964).
2. L. J. Savage, The Foundations of Statistics, (Dover, New York, 1972).
3. D. M. Greenberger, M. A. Horne, and A. Zeilinger, in Bell’s Theorem, Quantum Theory and Conceptions of the Universe, ed. by M. Kafatos (Kluwer, Dordrecht, 1989).
4. N. D. Mermin, Physics Today 43 (6), 9 (1990); Am. J. Phys. 58, 731 (1990).
5. N. D. Mermin, Phys. Rev. Letters 65, 1838 (1990).
6. The list of authors is too long to be given explicitly, and it would be unfair to give only a partial list.
7. J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, Phys. Rev. Letters 23, 880 (1969).
8. S. L. Braunstein and C. M. Caves, Ann. Phys. (NY) 202, 22 (1990).
9. A. Peres, Quantum Theory: Concepts and Methods (Kluwer, Dordrecht, 1993) p. 161.
10. L. Hardy, Phys. Rev. Letters 71, 1665 (1993).
11. S. Goldstein, Phys. Rev. Letters 72, 1951 (1994).
12. N. D. Mermin, in “Fundamental Problems in Quantum Theory” ed. by D. M. Greenberger and A. Zeilinger, Ann. New York Acad. Sci. 755, 617 (1995).
13. J. F. Clauser and M. A. Horne, Phys. Rev. D 10, 526 (1974).
Fig. 1. The Braunstein-Caves configuration for chained Bell inequalities: there are $`k`$ alternative directions along which each observer can measure a spin projection.
# Quantum Mechanics on a Real Hilbert Space
## 1 Introduction
There are well known mathematical arguments saying that the Hilbert space of quantum mechanics could be real or quaternionic, as alternatives to the standard complex theory \[?–?\]. The real case was dealt with mainly by Stueckelberg and collaborators, who concluded that it is essentially equivalent to the complex case \[?–?\] (see also ). The interest in the quaternionic case is more alive \[?–?\]. A more exotic subject is octonionic quantum theory .
The compromise proposed here is a genuine extension of the complex theory, but is not quite the full quantum theory on a real Hilbert space. The set of physical states is taken to be exactly the same as in the complex theory, but the complex Hilbert space is reinterpreted as a real space, and the set of observables is enlarged from the set of all complex Hermitean matrices to the set of all real symmetric matrices.
There exists some physical motivation for such a generalization in the fact that the time reversal operator $`T`$ is antilinear in the complex theory. All transformations involving time reversal are antilinear, among them the fundamental $`CPT`$ symmetry of quantum field theory. It is true that the effect of time reversal can be described easily enough in standard quantum theory, but strictly speaking, as soon as time reversal is introduced, the step from the complex to the real Hilbert space has already been taken .
As another example, it is shown in Section 4 below how to represent the canonical Poisson bracket relation $`\{x,p\}=1`$ in terms of operators on a finite dimensional real Hilbert space. As is well known, the canonical commutation relation $`[x,p]=\text{i}\hbar I`$ can not be represented on a finite dimensional complex Hilbert space, simply because the commutator on the left hand side must then have zero trace, while the identity operator $`I`$ on the right hand side has nonzero trace. The argument does not apply in the real Hilbert space, because there the operator $`J=\text{i}I`$ has zero trace.
The main argument of Stueckelberg for the equivalence of real and complex quantum mechanics is the need for an uncertainty principle. On the real Hilbert space it is very useful, if not strictly necessary, to have an operator $`J`$ commuting with all observables and having the property that $`J^2=1`$, if one wants to derive a general inequality for the product of the variances of any pair of observables. However, the argument is not compelling, partly because, as Stueckelberg points out, there might in principle be one separate operator $`J`$ for every pair of observables, and partly because quantum mechanics makes sense even without a general uncertainty principle. In the example of Section 4 below, the uncertainty principle for position and momentum holds indeed in all physical states, even though the position and momentum operators both anticommute with the operator $`J`$ defining the complex structure.
Stueckelberg and collaborators also discussed field quantization with fields that are either linear or antilinear, in the sense of commuting or anticommuting with $`J`$. Again their main conclusion is that quantization with antilinear fields is impossible for bosons, and possible for fermions but then essentially equivalent to quantization with linear fields, so that the real case reduces to the complex case. If the conclusion is valid in the present case, it means that the unconventional quantization of the harmonic oscillator does not lead to any interesting new quantum field theory. However, one should perhaps reexamine the arguments, keeping in mind in particular that there might be several different square roots of $`1`$, as discussed briefly in Section 5.
If antilinear field operators are ever going to be useful, it would most likely be in the quantization of the Dirac field. If the Dirac matrices are chosen real, then the Dirac equation is seen to be a real equation, and it is not unnatural to go one step further and formulate the quantum field theory in terms of a real Hilbert space, with linear or (possibly?) antilinear field operators, symmetric with respect to the real scalar product. The standard complex notation, both for the field equation and for the Hilbert space, hides the fact that the theory contains several $`\text{i}=\sqrt{1}`$ that are logically different, although they are identical by the notation.
One i is the generator of electromagnetic gauge transformations in the Dirac equation for a charged particle, it commutes with the mass term in the equation. A second i appears in the massless Weyl equation, it generates chiral gauge transformations, and does not commute with the Majorana mass term. For this reason, the Majorana and Dirac mass terms are claimed to be different, although one might just as naturally have concluded that there are two different i’s and only one kind of mass term. A third i turns up in the Fourier transformation connecting the position and momentum representations of the fields, by definition it commutes with the field operators. The fourth i, acting on the complex Hilbert space where all the fields act as operators, need not in principle be identified with any of the three.
## 2 Complex and real Hilbert spaces
For simplicity, we will consider here mostly finite dimensional Hilbert spaces. The complex Hilbert space $`\text{C}^D`$ of complex dimension $`D`$ corresponds to the real Hilbert space $`\text{R}^{2D}`$ of real dimension $`2D`$. The imaginary unit i on $`\text{C}^D`$ is then a linear operator $`J`$ on $`\text{R}^{2D}`$ with $`J^2=-I`$. Here $`I`$ is the identity operator on $`\text{R}^{2D}`$. We define the correspondence so that we have for $`D=2`$, as an example,
$`\psi =\left(\begin{array}{c}\psi _{1r}+\text{i}\psi _{1i}\\ \psi _{2r}+\text{i}\psi _{2i}\end{array}\right)\in \text{C}^2\leftrightarrow \psi =\left(\begin{array}{c}\psi _{1r}\\ \psi _{1i}\\ \psi _{2r}\\ \psi _{2i}\end{array}\right)\in \text{R}^4.`$ (7)
Then $`J`$ is an antisymmetric matrix,
$`J=\left(\begin{array}{cccc}0& -1& 0& 0\\ 1& 0& 0& 0\\ 0& 0& 0& -1\\ 0& 0& 1& 0\end{array}\right).`$ (12)
The complex scalar product on $`\text{C}^D`$,
$`\varphi ^{}\psi ={\displaystyle \sum _{j=1}^{D}}(\varphi _{jr}\psi _{jr}+\varphi _{ji}\psi _{ji})+\text{i}{\displaystyle \sum _{j=1}^{D}}(\varphi _{jr}\psi _{ji}-\varphi _{ji}\psi _{jr}),`$ (13)
has a real part which is the symmetric scalar product $`\varphi ^T\psi =\psi ^T\varphi `$ on $`\text{R}^{2D}`$, and an imaginary part which is the antisymmetric symplectic scalar product $`\varphi ^TJ\psi =-\psi ^TJ\varphi `$ on $`\text{R}^{2D}`$.
To any complex $`D\times D`$ matrix $`A`$ corresponds a real $`2D\times 2D`$ matrix, which we choose to call by the same name $`A`$. The correspondence is such that, for example,
$`\left(\begin{array}{cc}A_{11r}+\text{i}A_{11i}& A_{12r}+\text{i}A_{12i}\\ A_{21r}+\text{i}A_{21i}& A_{22r}+\text{i}A_{22i}\end{array}\right)\leftrightarrow \left(\begin{array}{cccc}A_{11r}& -A_{11i}& A_{12r}& -A_{12i}\\ A_{11i}& A_{11r}& A_{12i}& A_{12r}\\ A_{21r}& -A_{21i}& A_{22r}& -A_{22i}\\ A_{21i}& A_{21r}& A_{22i}& A_{22r}\end{array}\right).`$ (20)
In tensor product notation we may write
$`A=\left(\begin{array}{cc}1& 0\\ 0& 1\end{array}\right)\otimes \left(\begin{array}{cc}A_{11r}& A_{12r}\\ A_{21r}& A_{22r}\end{array}\right)+\left(\begin{array}{cc}0& -1\\ 1& 0\end{array}\right)\otimes \left(\begin{array}{cc}A_{11i}& A_{12i}\\ A_{21i}& A_{22i}\end{array}\right).`$ (29)
The Hermitean conjugate $`A^{}`$ of the complex matrix $`A`$ corresponds to the transposed $`A^T`$ of the real matrix $`A`$. The distinguishing property of those real matrices that correspond to complex matrices, is that they commute with $`J`$. Any real $`2D\times 2D`$ matrix $`A`$ can be written in a unique way as $`A=A_++A_-`$, where $`A_+J=JA_+`$ and $`A_-J=-JA_-`$, in fact the explicit solution is
$`A_\pm ={\displaystyle \frac{1}{2}}(A\mp JAJ).`$ (30)
$`A_+`$ is complex linear. $`A_-`$ is complex antilinear, thus it is a product of complex conjugation and a complex linear operator. In the $`4D^2`$ dimensional space of all real matrices, the complex linear and the complex antilinear matrices form two complementary subspaces of complex dimension $`D^2`$ and real dimension $`2D^2`$.
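As a small numerical illustration (Python/NumPy; the explicit form of $`J`$ below assumes the sign convention in which $`J`$ acts as multiplication by $`+\text{i}`$ under the correspondence (7)), the decomposition (30) can be verified for an arbitrary real matrix:

```python
import numpy as np

D = 2
j2 = np.array([[0.0, -1.0], [1.0, 0.0]])     # 2x2 block representing i
J = np.kron(np.eye(D), j2)                   # imaginary unit on R^(2D)
assert np.allclose(J @ J, -np.eye(2 * D))

rng = np.random.default_rng(0)
A = rng.standard_normal((2 * D, 2 * D))      # an arbitrary real matrix

A_plus = 0.5 * (A - J @ A @ J)               # complex linear part
A_minus = 0.5 * (A + J @ A @ J)              # complex antilinear part

assert np.allclose(A_plus + A_minus, A)
assert np.allclose(A_plus @ J, J @ A_plus)     # commutes with J
assert np.allclose(A_minus @ J, -J @ A_minus)  # anticommutes with J
```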
The complex $`D\times D`$ matrix $`A`$ is Hermitean if $`A^{}=A`$ and unitary if $`A^{}=A^{-1}`$. The real $`2D\times 2D`$ matrix $`A`$ is symmetric if $`A^T=A`$, antisymmetric if $`A^T=-A`$, orthogonal if $`A^T=A^{-1}`$, and symplectic if $`A^TJ=JA^{-1}`$. An orthogonal matrix is symplectic if and only if it commutes with $`J`$. Thus, the complex Hermitean matrices correspond to those real matrices that are symmetric and commute with $`J`$, whereas the complex unitary matrices correspond to precisely those real matrices that are orthogonal and symplectic.
In other words, an orthogonal matrix is a real linear operator on $`\text{R}^{2D}`$ which is invertible and preserves the ordinary real scalar product $`\varphi ^T\psi `$. A symplectic matrix is invertible and preserves the symplectic scalar product $`\varphi ^TJ\psi `$. And a unitary matrix is a complex linear operator on $`\text{C}^D`$ which is invertible and preserves the complex scalar product $`\varphi ^{}\psi `$. (In the finite dimensional case, but not in the infinite dimensional case, the invertibility is a consequence of the preservation of scalar products.)
An infinitesimal linear transformation $`U=I+ϵA`$ on $`\text{R}^{2D}`$, with $`ϵ`$ infinitesimal, is orthogonal if and only if the generator $`A`$ is antisymmetric, $`A^T=-A`$. It is symplectic if and only if $`A^TJ=-JA`$, which means that the matrix $`B=JA`$ is symmetric. Equivalently, $`A=JB`$ with $`B`$ symmetric. $`U`$ is both orthogonal and symplectic if and only if $`A=JB=BJ`$ with $`B`$ symmetric.
The dimension of the orthogonal group $`\text{O}(2D)`$ is $`2D^2-D`$, the dimension of the symplectic group $`\text{Sp}(2D)`$ is $`2D^2+D`$, and the dimension of the unitary group $`\text{U}(D)`$ is $`D^2`$.
## 3 Quantum Mechanics
### 3.1 Observables and probabilities
In standard quantum mechanics a pure state of a given physical system is represented by a unit vector $`\psi \in \text{C}^D`$, or equivalently by the Hermitean density matrix $`\rho =\psi \psi ^{}`$, which is a projection operator, since $`\rho ^2=\rho `$. An observable of the system is represented by a Hermitean $`D\times D`$ matrix $`A`$, and the theory predicts the expectation value in the pure state $`\psi `$ as
$`\langle A\rangle =\psi ^{}A\psi =\text{Tr}(\rho A).`$ (31)
More generally, any pure or mixed state is represented by a Hermitean density matrix $`\rho `$ which is positive definite, i.e. has nonnegative eigenvalues, and has unit trace, $`\text{Tr}\rho =1`$. The expectation value of the observable $`A`$ in this state is $`\langle A\rangle =\text{Tr}(\rho A)`$. The theory also predicts the variance of $`A`$, $`\text{var}(A)=(\mathrm{\Delta }A)^2`$, as
$`\text{var}(A)=\langle (A-\langle A\rangle )^2\rangle =\langle A^2\rangle -\langle A\rangle ^2.`$ (32)
$`A`$ has a sharp value in the state $`\rho `$ if and only if $`\text{var}(A)=0`$.
This probability interpretation makes sense because of the spectral theorem for Hermitean matrices, which guarantees the existence of the spectral representation
$`A={\displaystyle \sum _{n=1}^{N}}a_nP_n.`$ (33)
Here $`a_1,\ldots ,a_N`$ are distinct real eigenvalues, $`N\leq D`$, and $`P_1,\ldots ,P_N`$ are Hermitean projection operators, with the properties that $`P_mP_n=0`$ for $`m\neq n`$, and
$`I={\displaystyle \sum _{n=1}^{N}}P_n.`$ (34)
This implies that
$`\langle A\rangle ={\displaystyle \sum _{n=1}^{N}}p_na_n,\text{var}(A)={\displaystyle \sum _{n=1}^{N}}p_n(a_n-\langle A\rangle )^2,`$ (35)
where $`p_n=\langle P_n\rangle =\text{Tr}(\rho P_n)`$. According to the probability interpretation, the possible results of a measurement of $`A`$ are the eigenvalues $`a_1,\ldots ,a_N`$, and $`p_n`$ is the probability of the result $`a_n`$ in the state $`\rho `$.
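A short NumPy sketch makes the link between the spectral decomposition and the probabilities $`p_n`$ concrete (one projector per eigenvector here; degenerate eigenvalues would simply be grouped):

```python
import numpy as np

rng = np.random.default_rng(1)
D = 3
A = rng.standard_normal((D, D))
A = 0.5 * (A + A.T)                              # a real symmetric observable

psi = rng.standard_normal(D)
psi /= np.linalg.norm(psi)                       # a pure state
rho = np.outer(psi, psi)                         # its density matrix

a, V = np.linalg.eigh(A)                         # eigenvalues a_n, eigenvectors
p = np.array([V[:, n] @ rho @ V[:, n] for n in range(D)])   # p_n = Tr(rho P_n)

mean = np.trace(rho @ A)
var = np.trace(rho @ A @ A) - mean ** 2
assert np.isclose(mean, np.sum(p * a))           # <A> = sum_n p_n a_n
assert np.isclose(var, np.sum(p * (a - mean) ** 2))
```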
The fact that the spectral theorem for complex Hermitean matrices is valid for all real symmetric matrices, with no more than the obvious changes in wording, allows us to generalize standard quantum mechanics by admitting as observables all the real symmetric matrices.
### 3.2 States
In this generalization we have two options for choosing the set of states. The straightforward choice is to admit all real unit vectors as possible pure states of the system, and all real symmetric and positive definite matrices of unit trace as possible mixed states. This enlarges the set of possible states as compared to standard quantum mechanics, since not all such real density matrices are complex linear. In particular, it doubles the total number of states in the system, and it doubles the degeneracy of the spectrum of all standard observables, i.e. those that are complex linear. It seems that the degeneracy doubling is unphysical, at least if we want to describe systems that are well described by standard quantum theory.
The second option, more interesting from the physical point of view, is to admit exactly the same states as in the complex theory. It means that we enlarge the class of observables, including all real symmetric matrices, but we admit only those density matrices that commute with $`J`$. For example, with $`J`$ as in equation (12) the most general physical density matrix has the form
$`\rho =\left(\begin{array}{cccc}\alpha & 0& \gamma & \delta \\ 0& \alpha & \delta & \gamma \\ \gamma & \delta & \beta & 0\\ \delta & \gamma & 0& \beta \end{array}\right),`$ (40)
with $`2(\alpha +\beta )=1`$, $`\alpha \geq 0`$, $`\beta \geq 0`$, and $`\alpha \beta -\gamma ^2-\delta ^2\geq 0`$.
This has the advantage that the physical degeneracies are unchanged, but it also has the somewhat strange consequence that there are no pure states in the theory. In fact, the pure states in the complex theory correspond to mixed states in the real theory. One way to see this is to observe that the meaning of the trace is different in the real theory as compared to the complex theory, because the number of basis vectors is doubled. Thus, if $`\text{Tr}\rho =1`$ when the density matrix $`\rho `$ is regarded as a complex $`D\times D`$ matrix, we have $`\text{Tr}\rho =2`$ when the same $`\rho `$ is regarded as a real $`2D\times 2D`$ matrix. Therefore the proper correspondence is that the complex density matrix $`\rho `$ must correspond to the real density matrix $`\rho /2`$, which can never represent a pure state since it has no eigenvalues larger than $`1/2`$.
In this generalization of quantum mechanics as a theory defined on $`\text{R}^{2D}`$, every observable will have a complete set of $`2D`$ real eigenvalues and eigenvectors. However, one eigenvector alone does not represent a physical state. If we say that two physical states $`\rho `$ and $`\rho ^{}`$ are orthogonal when $`\rho \rho ^{}=\rho ^{}\rho =0`$, the maximum number of orthogonal physical states is $`D`$. Just by counting we see that if an observable $`A`$ has more than $`D`$ different eigenvalues, not every one of these can possess its own “physical eigenstate”, in which a measurement of $`A`$ gives this particular value with probability one.
More explicitly stated, if an observable $`A`$ does not commute with $`J`$, then it will have at least one eigenvalue which is not sharply realized in any physical state, and conversely, there will exist no complete set of physical states such that $`\text{var}(A)=0`$ in every state belonging to the complete set.
### 3.3 Poisson brackets
To the classical Poisson bracket $`\{A,B\}`$ of two classical observables $`A`$ and $`B`$ corresponds the “quantum Poisson bracket”
$`\{A,B\}={\displaystyle \frac{\text{i}}{\hbar }}[A,B]={\displaystyle \frac{\text{i}}{\hbar }}(AB-BA).`$ (41)
The proper way to write the same quantity in the real formulation is
$`\{A,B\}=A\mathrm{\Omega }B-B\mathrm{\Omega }A,`$ (42)
with $`\mathrm{\Omega }=J/\hbar `$. The antisymmetry, $`\{A,B\}=-\{B,A\}`$, and the Jacobi identity,
$`\{A,\{B,C\}\}+\{B,\{C,A\}\}+\{C,\{A,B\}\}=0,`$ (43)
are easily verified. The most important property of the matrix $`\mathrm{\Omega }`$ is that it is antisymmetric, $`\mathrm{\Omega }^T=-\mathrm{\Omega }`$, because that ensures that $`\{A,B\}`$ is symmetric whenever $`A`$ and $`B`$ are both symmetric. Another way to write the relation $`\{A,B\}=C`$ for symmetric matrices $`A`$, $`B`$ and $`C`$ is as
$`[JA,JB]=\hbar JC.`$ (44)
This is then a commutation relation in the Lie algebra of the symplectic group.
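These algebraic properties are easy to check numerically; the sketch below (NumPy, with the reduced Planck constant set to 1) verifies that the bracket of two symmetric matrices is symmetric, that it is antisymmetric in its arguments, that the Jacobi identity holds, and that Eq. (44) is satisfied.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 3
j2 = np.array([[0.0, -1.0], [1.0, 0.0]])
J = np.kron(np.eye(D), j2)
hbar = 1.0
Omega = J / hbar

def sym(M):
    """Symmetrize a random matrix."""
    return 0.5 * (M + M.T)

def pb(A, B):
    """Quantum Poisson bracket in the real formulation, Eq. (42)."""
    return A @ Omega @ B - B @ Omega @ A

A, B, C = (sym(rng.standard_normal((2 * D, 2 * D))) for _ in range(3))

assert np.allclose(pb(A, B), pb(A, B).T)      # symmetric whenever A, B are
assert np.allclose(pb(A, B), -pb(B, A))       # antisymmetry
jac = pb(A, pb(B, C)) + pb(B, pb(C, A)) + pb(C, pb(A, B))
assert np.allclose(jac, 0.0)                  # Jacobi identity
# Eq. (44): [JA, JB] = hbar J {A, B}
assert np.allclose(J @ A @ J @ B - J @ B @ J @ A, hbar * J @ pb(A, B))
```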
The real density matrix $`\rho `$ must in general be explicitly time dependent and satisfy the Liouville equation
$`{\displaystyle \frac{\text{d}\rho }{\text{d}t}}={\displaystyle \frac{\partial \rho }{\partial t}}+\{\rho ,H\}=0.`$ (45)
Here $`\text{d}\rho /\text{d}t`$ is the absolute time derivative, $`\partial \rho /\partial t`$ is the explicit time derivative, and $`H`$ is the Hamiltonian. Thus the explicit time dependence of $`\rho `$ is given by the equation of motion
$`{\displaystyle \frac{\partial \rho }{\partial t}}=\{H,\rho \}=H\mathrm{\Omega }\rho -\rho \mathrm{\Omega }H.`$ (46)
The equation of motion must preserve $`\text{Tr}\rho `$. A sufficient condition is that either $`H`$ or $`\rho `$ commute with $`\mathrm{\Omega }`$, because then we have either $`\rho \mathrm{\Omega }H=\rho H\mathrm{\Omega }`$ or $`\rho \mathrm{\Omega }H=\mathrm{\Omega }\rho H`$, and in both cases
$`{\displaystyle \frac{\partial (\text{Tr}\rho )}{\partial t}}=\text{Tr}(H\mathrm{\Omega }\rho -\rho \mathrm{\Omega }H)=0.`$ (47)
Thus, if we accept all symmetric and positive definite matrices of unit trace as density matrices, we should impose the condition on the Hamiltonian $`H`$ that it commute with $`J=\hbar \mathrm{\Omega }`$.
We should impose the same condition on $`H`$ even in the case where we accept only density matrices that commute with $`J`$. The point is that the equation of motion must preserve the condition of commutation with $`J`$, that is, the Poisson bracket $`\{H,\rho \}=H\mathrm{\Omega }\rho \rho \mathrm{\Omega }H`$ must commute with $`J`$. A sufficient condition, when $`\rho `$ commutes with $`J`$, is that $`H`$ also commutes with $`J`$.
When $`H`$ commutes with $`J`$, and is not explicitly time dependent, the equation of motion can be integrated explicitly to give
$`\rho (t)=U(t)\rho (0)U(t)^{-1},`$ (48)
where $`U(t)=\text{e}^{\frac{t}{\hbar }JH}`$ is the unitary time development operator.
In conclusion, not every real symmetric matrix is an acceptable Hamiltonian in the generalized quantum mechanics as formulated here. It is necessary, or at least natural, to require the Hamiltonian to be a complex linear matrix. If we also require the density matrices to be complex linear matrices, it would seem that we are back to the point of departure, which was the standard complex quantum mechanics. However, we have enlarged the class of observables, even though we do not accept the new observables to be Hamiltonians governing the time development.
## 4 The harmonic oscillator
Time reversal was mentioned in the introduction as a motivation for the proposed generalization of the complex formalism. A more unconventional example of the generalizations that become possible, is the representation of the canonical Poisson bracket $`\{x,p\}=I`$ in any finite and even dimension. For example, in the four dimensional case considered above, any two positive lengths $`\xi _1,\xi _2`$ define a representation of the form
$`x=\left(\begin{array}{cccc}\xi _1& 0& 0& 0\\ 0& \xi _1& 0& 0\\ 0& 0& \xi _2& 0\\ 0& 0& 0& \xi _2\end{array}\right),p={\displaystyle \frac{\mathrm{}}{2}}\left(\begin{array}{cccc}0& 1/\xi _1& 0& 0\\ 1/\xi _1& 0& 0& 0\\ 0& 0& 0& 1/\xi _2\\ 0& 0& 1/\xi _2& 0\end{array}\right).`$ (57)
The generalization to any even dimension $`2D`$, or to infinite dimension, is obvious. Then both $`x`$ and $`p`$ anticommute with the imaginary unit $`J`$, as opposed to standard quantum mechanics where they commute with $`J`$. With the most general physical density matrix $`\rho `$, equation (40), this gives $`\langle x\rangle =0`$, $`(\mathrm{\Delta }x)^2=\langle x^2\rangle =2(\alpha \xi _1^2+\beta \xi _2^2)`$, and similarly for $`p`$. Thus the Heisenberg uncertainty relation holds in every physical state,
$`\mathrm{\Delta }x\mathrm{\Delta }p`$ $`=`$ $`\hbar \sqrt{(\alpha \xi _1^2+\beta \xi _2^2)\left({\displaystyle \frac{\alpha }{\xi _1^2}}+{\displaystyle \frac{\beta }{\xi _2^2}}\right)}`$ (58)
$`=`$ $`\hbar \sqrt{(\alpha +\beta )^2+\alpha \beta \left({\displaystyle \frac{\xi _1}{\xi _2}}-{\displaystyle \frac{\xi _2}{\xi _1}}\right)^2}\geq {\displaystyle \frac{\hbar }{2}}.`$
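The signs inside the matrices of Eq. (57) depend on the convention chosen for $`J`$; the NumPy sketch below uses one explicit sign assignment (an assumption made for illustration, not necessarily the one intended in Eq. (57)) that realizes $`\{x,p\}=I`$, the anticommutation of $`x`$ and $`p`$ with $`J`$, and the uncertainty bound in a physical state.

```python
import numpy as np

hbar = 1.0
xi1, xi2 = 0.7, 1.3                        # two arbitrary positive lengths
j2 = np.array([[0.0, -1.0], [1.0, 0.0]])
J = np.kron(np.eye(2), j2)
Omega = J / hbar

# One explicit (convention-dependent) sign assignment for x and p:
x = np.diag([-xi1, xi1, -xi2, xi2])
p = (hbar / 2) * np.kron(np.diag([1 / xi1, 1 / xi2]),
                         np.array([[0.0, 1.0], [1.0, 0.0]]))

assert np.allclose(x @ J, -J @ x)          # x anticommutes with J
assert np.allclose(p @ J, -J @ p)          # p anticommutes with J
assert np.allclose(x @ Omega @ p - p @ Omega @ x, np.eye(4))   # {x, p} = I

# Uncertainty product in a physical state commuting with J (gamma = delta = 0):
alpha, beta = 0.3, 0.2                     # satisfies 2*(alpha + beta) = 1
rho = np.diag([alpha, alpha, beta, beta])
dx = np.sqrt(np.trace(rho @ x @ x))
dp = np.sqrt(np.trace(rho @ p @ p))
print(dx * dp >= hbar / 2)                 # True
```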
With these definitions the Hamiltonian of the harmonic oscillator of angular frequency $`\omega `$,
$`H={\displaystyle \frac{p^2}{2m}}+{\displaystyle \frac{1}{2}}m\omega ^2x^2,`$ (59)
is diagonal and has the energy eigenvalues
$`E_i={\displaystyle \frac{\hbar ^2}{8m\xi _i^2}}+{\displaystyle \frac{1}{2}}m\omega ^2\xi _i^2.`$ (60)
For any given value $`E_i\geq \hbar \omega /2`$ this equation has the positive solutions
$`\xi _i=\sqrt{{\displaystyle \frac{2E_i\pm \sqrt{4E_i^2-\hbar ^2\omega ^2}}{2m\omega ^2}}}.`$ (61)
By the obvious generalization, we may assign to the harmonic oscillator any finite number, or an infinite number, of arbitrary energy levels above the lower bound $`\hbar \omega /2`$.
Note that the Hamiltonian $`H`$ does commute with $`J`$, even though $`x`$ and $`p`$ here do not, implying that the time development operator $`\text{e}^{\frac{t}{\hbar }JH}`$ is unitary. On the other hand, the operator $`\text{e}^{\frac{d}{\hbar }Jp}`$, representing translation in space by a distance $`d`$, is not unitary when $`d\neq 0`$, but only symplectic. The problem is that it is not orthogonal, because the matrix $`Jp`$ in the exponent is symmetric rather than antisymmetric. Thus it does not conserve probabilities, and is not a symmetry transformation, as is evident from the fact that the position operator $`x`$ has a discrete spectrum.
The case $`\xi _1=\xi _2=\xi `$ in equation (57) is interesting because it is a realization of the canonical relation $`\{x,p\}=I`$, but is also, in a certain sense, a fermionic quantization of the harmonic oscillator, with
$`x^2=\xi ^2,p^2={\displaystyle \frac{\hbar ^2}{4\xi ^2}},xp+px=0.`$ (62)
In fact, the last relations are just one particular form of the canonical anticommutation relations
$`aa^T+a^Ta=I,a^2=(a^T)^2=0.`$ (63)
To see this in more detail, let us introduce another antisymmetric matrix $`K`$ so that it is an imaginary unit, $`K^2=-I`$, and commutes with both $`x`$ and $`p`$. Then we take
$`a={\displaystyle \frac{1}{2\xi }}x+{\displaystyle \frac{\xi }{\hbar }}Kp,a^T={\displaystyle \frac{1}{2\xi }}x-{\displaystyle \frac{\xi }{\hbar }}Kp.`$ (64)
The Hamiltonian $`H`$ as defined in equation (59) is just a constant, and a more interesting quantity is the usual Hamiltonian of the fermionic oscillator, which is
$`H^{}=\hbar \omega \left(a^Ta-{\displaystyle \frac{1}{2}}\right)={\displaystyle \frac{\omega }{2}}K(xp-px)={\displaystyle \frac{\hbar \omega }{2}}JK.`$ (65)
The time development operator with this alternative Hamiltonian is
$`U^{}(t)=\text{e}^{\frac{t}{\hbar }JH^{}}=\text{e}^{\frac{\omega t}{2}K}.`$ (66)
In some respects this theory, where $`x`$ and $`p`$ anticommute with the imaginary unit $`J`$, is equivalent to another theory where the imaginary unit is $`K`$, which commutes with $`x`$ and $`p`$. An explicit representation for $`K`$ might be
$`K=\left(\begin{array}{cccc}0& 0& 1& 0\\ 0& 0& 0& 1\\ 1& 0& 0& 0\\ 0& 1& 0& 0\end{array}\right)=SJS^1,`$ (71)
with, for example,
$`S=S^1=\left(\begin{array}{cccc}1& 0& 0& 0\\ 0& 0& 1& 0\\ 0& 1& 0& 0\\ 0& 0& 0& 1\end{array}\right).`$ (76)
If we define
$`\stackrel{~}{\rho }=S\rho S^1=\left(\begin{array}{cccc}\alpha & \gamma & 0& \delta \\ \gamma & \beta & \delta & 0\\ 0& \delta & \alpha & \gamma \\ \delta & 0& \gamma & \beta \end{array}\right),`$ (81)
then this is a density matrix which commutes with $`K`$ instead of with $`J`$. If now $`\rho (t)=U^{}(t)\rho (0)U^{}(t)`$, then we have $`\stackrel{~}{\rho }(t)=\stackrel{~}{U}(t)\stackrel{~}{\rho }(0)\stackrel{~}{U}(t)`$, where
$`\stackrel{~}{U}(t)=SU^{}(t)S^1=\text{e}^{\frac{t}{\mathrm{}}KH^{}}=\text{e}^{\frac{\omega t}{2}J},`$ (82)
since $`SH^{}=H^{}S`$. The energy spectrum is the same in the two theories, for example we have
$`\text{Tr}(\rho H^{})=\text{Tr}(\stackrel{~}{\rho }H^{})=2\delta \hbar \omega .`$ (83)
However, the expectation values for $`x`$ and $`p`$ are not the same. For example,
$`\text{Tr}(\rho x)=0,\text{Tr}(\stackrel{~}{\rho }x)=2(\alpha -\beta )\xi .`$ (84)
## 5 More than one degree of freedom
With two degrees of freedom, referred to here by indices $`a`$ and $`b`$, and belonging for example to two fields commuting with each other, the Hilbert space is a tensor product,
$`\mathcal{H}=\mathcal{H}_a\otimes \mathcal{H}_b.`$ (85)
However, the complex and real tensor products are mathematically different constructions. The complex tensor product of spaces of complex dimensions $`D_a`$ and $`D_b`$ has complex dimension $`D_aD_b`$ and real dimension $`2D_aD_b`$, whereas the real tensor product of spaces of real dimensions $`2D_a`$ and $`2D_b`$ has dimension $`4D_aD_b`$.
The relation between the two types of tensor product can be understood as follows. In the complex case the following relations hold for tensor products of vectors,
$`\varphi \otimes \psi =-\text{i}((\text{i}\varphi )\otimes \psi )=-\text{i}(\varphi \otimes (\text{i}\psi ))=-(\text{i}\varphi )\otimes (\text{i}\psi ).`$ (86)
In the real case the four tensor products $`\varphi \otimes \psi `$, $`(J\varphi )\otimes \psi `$, $`\varphi \otimes (J\psi )`$ and $`(J\varphi )\otimes (J\psi )`$ are linearly independent vectors, thus there exist two imaginary units $`J_a=J\otimes I`$ and $`J_b=I\otimes J`$ defined as operators on the real tensor product space $`\mathcal{H}`$. It follows that $`\mathcal{H}`$ is a direct sum, $`\mathcal{H}=\mathcal{H}_+\oplus \mathcal{H}_-`$, where $`\mathcal{H}_\pm =P_\pm \mathcal{H}`$, and the two operators
$`P_\pm ={\displaystyle \frac{1}{2}}(I\mp J_aJ_b)`$ (87)
are complementary orthogonal projection operators. Each of the two subspaces $`\mathcal{H}_\pm `$ has dimension $`2D_aD_b`$. Both $`J_a`$ and $`J_b`$ commute with $`P_\pm `$, and hence act within the subspaces $`\mathcal{H}_\pm `$ separately. On $`\mathcal{H}_+`$ the relation $`J_a=J_b`$ holds, whereas $`J_a=-J_b`$ holds on $`\mathcal{H}_-`$.
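A small NumPy sketch (again assuming the block form of $`J`$ used earlier) confirms these properties of the projection operators $`P_\pm `$:

```python
import numpy as np

Da, Db = 2, 2
j2 = np.array([[0.0, -1.0], [1.0, 0.0]])
Ja_small = np.kron(np.eye(Da), j2)              # J on R^(2 Da)
Jb_small = np.kron(np.eye(Db), j2)              # J on R^(2 Db)

Ja = np.kron(Ja_small, np.eye(2 * Db))          # J_a = J (x) I on the real tensor product
Jb = np.kron(np.eye(2 * Da), Jb_small)          # J_b = I (x) J

I = np.eye(4 * Da * Db)
P_plus = 0.5 * (I - Ja @ Jb)
P_minus = 0.5 * (I + Ja @ Jb)

assert np.allclose(Ja @ Jb, Jb @ Ja)            # the two imaginary units commute
assert np.allclose(P_plus @ P_plus, P_plus)     # projector
assert np.allclose(P_plus + P_minus, I)         # complementary
assert np.allclose(Ja @ P_plus, Jb @ P_plus)    # J_a = J_b on H_+
assert np.allclose(Ja @ P_minus, -Jb @ P_minus) # J_a = -J_b on H_-
print(np.trace(P_plus))                         # 2*Da*Db, the dimension of H_+
```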
The complex tensor product space can be identified in a natural way with $`\mathcal{H}_+`$. Hence a physical density matrix $`\rho `$ on the product space $`\mathcal{H}`$ must be a real $`(4D_aD_b)\times (4D_aD_b)`$ matrix that commutes with both $`J_a`$ and $`J_b`$, and in addition it must have the property that $`\rho =P_+\rho =\rho P_+=P_+\rho P_+`$.
If an operator $`x`$ acts on $`\mathcal{H}_a`$ and anticommutes with $`J`$, then the corresponding operator $`x_a=x\otimes I`$ on $`\mathcal{H}=\mathcal{H}_a\otimes \mathcal{H}_b`$ anticommutes with $`J_a`$, but commutes with $`J_b`$. This means that $`x_a`$ maps $`\mathcal{H}_+`$ into $`\mathcal{H}_-`$, and vice versa. Such an operator could not be constructed at all within the complex quantum theory, because the subspace $`\mathcal{H}_-`$ simply would not exist. It is essential for the construction that there exist two mutually commuting imaginary units $`J_a`$ and $`J_b`$ on the real tensor product Hilbert space. The possibility of having operators that are antilinear with respect to $`J_a`$ and at the same time linear with respect to $`J_b`$, or vice versa, is at least a partial answer to one of the problems with antilinear field operators recognized by Stueckelberg and collaborators.
The operator $`x_a`$ above is “unphysical” in the sense that it maps the physical space $`\mathcal{H}_+`$ out of itself. But similar unphysical operators are well known in physics. For example, a general isospin rotation does not respect the superselection rule for electric charge, it transforms a physical state into an unphysical superposition of states with different values of the charge. Another example is the relative position of two identical particles, which is an operator changing the symmetry properties of the two-particle wave functions.
The generalization to more than two factors in the tensor product is easy, just introduce one new pair of projection operators for each new factor. If for example $`\mathcal{H}=\mathcal{H}_a\otimes \mathcal{H}_b\otimes \mathcal{H}_c`$, then define $`P_\pm `$ as above, and in addition
$`Q_\pm ={\displaystyle \frac{1}{2}}(I\mp J_aJ_c).`$ (88)
Since $`P_ϵ`$ and $`Q_\eta `$ commute, for $`ϵ=\pm `$ and $`\eta =\pm `$, the products $`P_ϵQ_\eta `$ are also projection operators. Decompose $`\mathcal{H}`$ as a direct sum of subspaces $`\mathcal{H}_{ϵ\eta }=P_ϵQ_\eta \mathcal{H}`$, then in each such subspace we have $`J_a=ϵJ_b=\eta J_c`$. The physical subspace is $`\mathcal{H}_{++}`$, it corresponds to the complex tensor product, since the relations $`J_a=J_b=J_c`$ hold there.
# The Compact UV Nucleus of M33
## 1. Introduction
The nearby galaxy M33 hosts the most luminous steady X-ray source in the Local Group, X-8. This source, with $`L_X\sim `$10<sup>39</sup>erg s<sup>-1</sup> (Long et al. (1996)), is coincident to within 5” with the nucleus of the galaxy (Schulman & Bregman (1995)). Different models were invoked for X-8, including a quiescent mini-AGN (Trinchieri et al. (1988); Peres et al. (1989)), a collection of X-ray binaries (Hernquist et al. (1991)) and a new type of X-ray binary (Gottwald, et al. (1987)). Our ROSAT studies (Dubus et al. (1997)) have shown that X-8 is very steady on both short and long time scales, except for low amplitude ($`\sim `$20%) variations which appear modulated on a $`\sim `$106 day period. This strongly favors a single source explanation for X-8.
We have interpreted the modulation in X-8 as “superorbital”, similar to that seen in a number of bright galactic X-ray binaries which were monitored by e.g. the Vela 5B satellite (Smale & Lochner (1992)). X-8 is then likely to be a $`\sim `$10$`M_{\odot }`$ black-hole X-ray binary (the high mass is required to account for the observed luminosity) but with a companion in a much shorter orbital period than 106d. This is supported by the extremely low velocity dispersion of the nucleus which limits the mass of a central black hole in M33 to $`\sim `$5$`\times `$10<sup>4</sup>M<sub>⊙</sub> (Kormendy & McClure (1993), KM93). This and the high central stellar density imply that the nucleus is an extremely relaxed, post core-collapse stellar system (comparable for instance to a galactic globular cluster such as M15). A significant number of stellar collisions/interactions could have taken place, eventually leading to the creation of exotic interacting binaries (e.g. Hut et al. (1992)).
The next step toward unravelling the mystery of X-8 would be to identify its optical counterpart. However, even with optimistic $`L_X/L_{opt}`$ ratios for either X-ray binaries or AGN the counterpart would only have V$`\sim `$21 compared to a core brightness of V$`\sim `$14. But with the optical spectral type of an F supergiant, the dominance of the M33 visual core cannot extend to UV wavelengths where the hot/flat spectrum of X-8’s associated disc ought to be a significant contributor. This is true despite evidence for a color gradient in the nucleus (KM93, Mighell & Rich (1995); Lauer et al. (1998)) suggesting that a period of recent star formation has taken place and/or that collisions have modified the central star population. Here we report an attempt to find the counterpart using the UV imaging capabilities of HST.
## 2. Observations and Reduction
Observations were carried out with the HST WFPC-2 on June 12, 1997 using three different filters. The nucleus was positioned at the center of the Planetary Camera ($`\alpha `$ 1:33:51.1, $`\delta `$ 30:39:39, J2000). During the first orbit two 1200s exposures were made with the F160BW (‘UV’ filter, $`\overline{\lambda }1491`$). In the following orbit, two 800s exposures were made with the F300W filter (‘U’ filter, $`\overline{\lambda }2942`$Å) and one 500s exposure with the F439W filter (‘B’ filter, $`\overline{\lambda }4300`$Å). All these exposures were made with the gain setting at 7.
In addition, we have extracted recalibrated archival data from the Space Telescope European Coordinating Facility (ST-ECF) Archive. These data included two 40s exposures with the F555W at gain 14 (‘V’ filter, $`\overline{\lambda }5397`$Å), two 40s exposures with the F814W at gain 14 (‘I’ filter, $`\overline{\lambda }7924`$Å) and six 300s exposures with the F1042M at gain 7 ($`\overline{\lambda }10190`$Å) filters, all centered on the nucleus and dating from September 26-27, 1994. The archival V and I data were previously discussed by Lauer et al. (1998) (L98). Fig. 1 shows the central region of the reduced UV, U and B images.
The data were reduced using the HST calibration pipeline. The signal-to-noise ratio of the only existing F160BW flat was quite low. Following advice from the WFPC-2 group at STScI, we decided to use the F255W flat field. Effects of cosmic rays were reduced on those images for which we had multiple exposures with the IRAF stsdas routine crrej. Images of the nuclear region as observed through the F160BW, F300W and F439W filter with the PC are shown in Fig. 1.
## 3. Analysis
Our values for the total flux of the nucleus in the different bands agree with those of Gordon et al. (1999). The radial profiles of the B, V and I data are also consistent with KM93 and L98. As is apparent from Fig. 1, the nucleus of M33 appears more concentrated in the F160BW filter than in the longer wavelength filters. Nevertheless, as shown in Fig. 2, the profile in the F160BW image of the nucleus is extended compared to the profile of the star located about 1” NNW from the nucleus and to PSF profiles calculated using the HST PSF generating routine TinyTim (Krist & Hook (1997)). Further investigation of the radial structure of the UV emission calls for deconvolution of the data taking into account the different instrumental effects in the Planetary Camera (L98). Since the UV image does not have a large enough signal-to-noise to allow for a proper deconvolution, we chose instead to fit convolved models to the data.
### 3.1. Fits of extended emission models
Following KM93, we fitted radial profile models of the form:
$$\mathrm{\Sigma }=\mathrm{\Sigma }_\mathrm{o}\left(1+(r/r_\mathrm{o})^2\right)^{-n}$$
(1)
The model and the PSF are oversampled on a 4x4 grid for each PC pixel. For a given set of $`(r_\mathrm{o},n)`$, the convolved model is moved on the 4x4 grid, rebinned to the PC resolution and compared to the data. The comparison with the data is performed in a 64x64 pixel aperture (about 3”x3”) but we have verified that larger and smaller apertures gave similar results. We have assumed an A type spectrum for the PSF but other choices do not affect our conclusions. The parameter $`r_\mathrm{o}`$ is varied between 0.05 and 2 PC pixels and $`n`$ is varied between 0.5 and 2. The results from the $`\chi ^2`$ minimization routine are presented in Tab. 1 and Fig. 3. The quoted errors correspond to 10% higher values than at the minimum of the fit function. In the noisy UV band only lower bounds to the parameters could be extracted. Here we give errors on the FWHM instead of on $`r_\mathrm{o}`$.
The FWHM of the models decreases in accordance with the blue colour gradient. In all cases $`n\sim 1`$ suggesting that the distribution of light at large radii is the same in all the filters. This is consistent with the flat colour profiles observed for $`rr_\mathrm{o}`$. With $`n`$ fixed at 0.75 as in L98 we also find a FWHM for the F555W data of 0$`\stackrel{\prime \prime }{\mathrm{.}}`$07. This is clearly not the best solution when $`n`$ varies (Tab. 1). We find a higher FWHM (0$`\stackrel{\prime \prime }{\mathrm{.}}`$09). We note that KM93 find $`n`$ between 0.8–1.3 and a FWHM below 0$`\stackrel{\prime \prime }{\mathrm{.}}`$1. The similar values found for $`r_\mathrm{o}`$ in the V, I, F1042M and (to a lesser extent) B bands indicate that their radial profiles are comparable (i.e. the colour gradients are much reduced between those bands than when compared to the UV and U so that, for example, on first approximation the V-I colour gradient is negligible when compared to UV-V).
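For reference, the FWHM quoted for a given model follows directly from $`(r_\mathrm{o},n)`$. The short Python sketch below makes the conversion explicit; the WFPC-2 PC pixel scale of 0.0455 arcsec per pixel is assumed here for illustration.

```python
import numpy as np

def profile(r, r_o, n):
    """Radial surface-brightness model of Eq. (1), normalised to 1 at r = 0."""
    return (1.0 + (r / r_o) ** 2) ** (-n)

def fwhm(r_o, n):
    """FWHM implied by (r_o, n): solve profile(r) = 1/2 for r and double it."""
    return 2.0 * r_o * np.sqrt(2.0 ** (1.0 / n) - 1.0)

pc_pixel = 0.0455                        # arcsec per Planetary Camera pixel (assumed)
print(fwhm(1.0 * pc_pixel, 1.0))         # r_o = 1 PC pixel, n = 1: FWHM ~ 0.09 arcsec
```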
### 3.2. Fits with an additional point source
Since the V, I and F1042M show very close light distributions, we investigated whether the compact emission from the UV and U filter could be explained by an underlying extended population having the V band distribution plus a blue point source. We fixed $`r_\mathrm{o}`$=1 PC pixel, $`n`$=1 and superposed at the center of this model a point source of varying relative strength. For the V and redward filters, the best fits were consistently obtained with a nil contribution from the point source. However, a point source of increasing strength was needed in B, U and UV. Only those models with fit values within the errors of the previous extended emission fits (Tab. 1) were kept i.e. the fits here are as good or better than the previous ones (see Fig. 3). The contribution of the point source to the total flux within a 1$`\stackrel{}{\mathrm{.}}`$45 circular aperture is summarized in Tab. 2. The corresponding magnitudes are given in the VegaMAG system using updated tables for the zeropoints (Holtzman et al. (1995)).
The best fits for the B, U and UV are shown in Fig. 3 by dashed lines. They are indistinguishable from the extended emission fits. As a result we cannot, based on the data in hand, distinguish between a model in which emission is extended in the UV but has a smaller core radius than at longer wavelengths and a composite model consisting of a point source and an underlying distribution characterised by the visible light profile.
## 4. Discussion
The nucleus of M33 has a composite spectrum ranging from A7V at $`\lambda `$$``$3800Å to F5V at $`\lambda `$$``$4300Å (O’Connell (1983)). This requires at least a two component population in most models with the blue emission being due to young metal-rich stars. The colour gradient in B-R implies this young population is more centrally condensed. The nucleus could have been the site of episodic starbursts with the youngest stars being about 10 Myr old (O’Connell (1983); van den Bergh (1991); Schmidt et al. (1990) and references therein). Recently Gordon et al. (1999) have argued that a single 70 Myr old starburst reproduces the UV to IR spectral energy distribution within 4$`\stackrel{}{\mathrm{.}}`$5 of the center if dust is correctly taken into account<sup>1</sup><sup>1</sup>1Gordon et al. (1999) propose that X-8 is a high mass X-ray binary with an early B companion. But as had been noted by O’Connell (1983), the high mass tranfer rate needed to power the 10<sup>39</sup> ergs$``$s<sup>-1</sup> luminosity implies an uncomfortably short evolutionary time-scale ($`10^5`$ years).. The nucleus is also very similar to a globular cluster and is likely to have undergone core-collapse (Hernquist et al. (1991), KM93). Blue stars formed in collisions at the center might explain the colour gradient. This model may have difficulties accounting for the UV luminosity (Hernquist et al. (1991), L98, Gordon et al. (1999)).
Massey et al. (1996) detected the nucleus in the UV but with 5” resolution. Hence they had proposed that the blue component of the nucleus could be due to unresolved emission from a few hot stars. However, the HST data shows the UV emission is very compact with no stars of comparable brightness within 5” of the center. The total flux is about 6.3$``$10<sup>-15</sup> ergs$``$s<sup>-1</sup>cm<sup>-2</sup>Å<sup>-1</sup> in a 4$`\stackrel{}{\mathrm{.}}`$55 aperture or about 2.8$``$10<sup>38</sup> ergs$``$s<sup>-1</sup>at 800 kpc. From the best fit model, $``$50% of the UV light comes from the inner 0$`\stackrel{}{\mathrm{.}}`$14 of the nucleus. Any model of the structure of the nucleus has to explain this UV emission from a region only $``$ 0.55 pc across. If a point source is present, this source is responsible for $``$30% of the UV flux within 0$`\stackrel{}{\mathrm{.}}`$14. The contribution from the underlying extended population is subsequently reduced.
The nucleus of NGC 205, at a comparable distance of 720kpc, is in many ways similar to that of M33, but without the X-ray source. It is globular cluster like, has a comparable $`M_V`$ and a low upper limit of 9$``$10<sup>4</sup>$`M_{}`$ on the mass within the central pc (Heath Jones et al. (1996)). These authors find that the nucleus is more extended, with a F555W FWHM of 0$`\stackrel{}{\mathrm{.}}`$2, $`F_{\mathrm{F555W}}`$=1.8$``$10<sup>-15</sup> ergs$``$s<sup>-1</sup>cm<sup>-2</sup>Å<sup>-1</sup> and only $`F_{\mathrm{F160BW}}`$=6.0$``$10<sup>-16</sup> ergs$``$s<sup>-1</sup>cm<sup>-2</sup>Å<sup>-1</sup> (within 0$`\stackrel{}{\mathrm{.}}`$273). Using the same aperture on the M33 data we find for M33 $`F_{\mathrm{F555W}}`$=3.4$``$10<sup>-15</sup> ergs$``$s<sup>-1</sup>cm<sup>-2</sup>Å<sup>-1</sup> and $`F_{\mathrm{F160BW}}`$=2.5$``$10<sup>-15</sup> ergs$``$s<sup>-1</sup>cm<sup>-2</sup>Å<sup>-1</sup>. Excluding the contribution from the point source, the F160BW flux of M33 is about 1.5$``$10<sup>-15</sup> ergs$``$s<sup>-1</sup>cm<sup>-2</sup>Å<sup>-1</sup> and the ratios of the fluxes between the bands become comparable.
If there is in fact a point source at the center of the nucleus of M33, the magnitude estimates in Table 2 are consistent with a Rayleigh-Jeans tail. We estimate the total UV flux of the source would be $``$1.0$``$10<sup>-15</sup> ergs$``$s<sup>-1</sup>cm<sup>-2</sup>Å<sup>-1</sup>, which would suggest $`L_{\mathrm{opt}}`$/$`L_X`$$``$0.05 which is reasonable for the type of X-ray source we have postulated X-8 to be. This source would be responsible for most of the colour gradient in UV-B and U-B and thus the very compact appearance of the nucleus at these wavelengths. As it contributes only $``$18% of the total F160BW flux and $``$8% of the F300W flux, the spectral energy distribution of the nucleus is not changed much and the consequences on population synthesis studies should be minor. However, it does have a major influence in that it relaxes the constraints on e.g. mass segregation to explain the very strong colour gradients in UV-B and U-B.
The star located about 1” from the nucleus in M33 has $`B`$=19.45, $`V`$=19.25, $`M_{\mathrm{F300W}}`$=18.05 and $`M_{\mathrm{F160BW}}`$=17.60. The count rates are compatible with an A0 type spectrum which would make it similar, although fainter, to the two A supergiants detected by Massey et al. (1996). The fluxes in the F300W and F160BW bands, at the distance of M33 ($``$800 kpc), are $``$ $`10^{37}`$ erg$``$s<sup>-1</sup>. The positional accuracy of the ROSAT HRI does not rule out this star as a possible counterpart to the X-ray source X-8. The ratio $`L_X`$/$`L_{\mathrm{opt}}`$ 100 and the observed $`UB`$ and $`BV`$ would agree with what is expected from a low-mass X-ray binary (van Paradijs & McClintock (1995)). But its absolute magnitude ($`M_V`$-5.2) would make it more similar to a high mass X-ray binary. The main argument against this star being the X-ray binary is X-8’s unique character and special location at the nucleus. Given that Massey et al. (1996) found $``$300 analogous UV sources in M33, it would be remarkable that the one near the nucleus is the most luminous X-ray source in the Local Group.
## 5. Conclusion
The UV high resolution images obtained with the HST Planetary Camera show the nucleus of M33 is extremely compact. We have fitted convolved models to the radial profiles in the different bands from which we find the FWHM of the nucleus in UV to be $``$0$`\stackrel{}{\mathrm{.}}`$035 and $``$0$`\stackrel{}{\mathrm{.}}`$090 in V. About half of the UV flux comes from the inner 0$`\stackrel{}{\mathrm{.}}`$14 of the nucleus. The UV and U profiles are also well fitted if one assumes a blue point source superposed on an extended population with the same FWHM as in V. If this is the correct model for the nucleus, then this point source is likely to be the UV counterpart to the very luminous X-ray source X-8. Such a counterpart would be responsible for most of the strong colour gradient seen in UV. Its contribution to the total UV flux of the nucleus would be about 18%. Models for the structure of the nucleus still need to account for the remainder of the UV flux but the constraints on population segregation (more compact blue star population) are reduced. High spatial resolution UV spectroscopy of the nucleus is the obvious next step, which we will undertake shortly.
We thank Sylvia Baggett for help with the F160BW filter reduction. G.D. and K.S.L wish to thank the Oxford Astrophysics Department where part of this work was completed. We acknowledge support by the British-French joint research program Alliance and by NASA grant NAG 5-1539 to the STScI. Based on observations with the NASA/ESA Hubble Space Telescope obtained at the STScI which is operated by the AURA under NASA contract No. NAS5-26555.
# Discovery of metal line emission from the Red star in IP Peg during outburst maximum
## 1 Introduction
Cataclysmic variables are interacting binaries where a white dwarf and a red dwarf orbit each other within a few hours. Line emission from the red star is now regularly detected (Harlaftis and Marsh 1996, and references therein). During outburst of the eclipsing dwarf nova IP Peg, irradiation from the hot central regions of the disc is most likely responsible for the line emission located on the red star (Marsh and Horne 1990). During quiescence, H$`\alpha `$ line emission from the red star of IP Peg is transient and its origin is unresolved (Harlaftis et al. 1994). The fast rotation of the red stars in cataclysmic variables and the regular irradiation of their atmosphere by the hot accretion disc present a physical situation which may affect, in the long term, the atmospheric stratification of the companion star and its subsequent evolution. Techniques for mapping either the surface of cool single stars from the absorption lines (Cameron 1999) or the surface of the red star in cataclysmic variables from the emission lines (Marsh and Horne 1988; Rutten and Dhillon 1994) have been developed. These techniques can be used in probing the ionization structure of the upper atmosphere of the red star. Here, we report on spectrophotometric observations of IP Peg, obtained with the 2.5m INT at La Palma, during maximum of the November 1996 outburst, which were aimed to probe the structure of the, recently discovered, spiral arms in the disc of IP Peg (Harlaftis et al. 1999). As a by-product of the observations, we discover metal lines in emission from the secondary star.
## 2 Doppler tomography
IP Peg is a well-studied, double-eclipsing dwarf nova, which shows semi-periodic outbursts every $``$3 months. For details on the observations and data reduction see Harlaftis et al. (1999). Average spectra, in the range 4354–4747 Å, at four characteristic binary phases are displayed in Fig. 1. In addition to the Bowen blend and He II 4686 lines, weak He I lines at 4388, 4471, 4713 Å Mg II 4481 Å and Ti II 4418 Å are also observed (see also trailed spectra in Fig. 2). These lines display a sharp peak at phase 0.5 indicating a component from the companion star. We adopt the binary ephemeris from Wolf et al. (1993) $`T_o(HJD)=2445615.4156(4)+0.15820616(4)E`$, where $`T_o`$ is the inferior conjunction of the white dwarf.
The trailed spectra over the full wavelength range are shown in Fig. 2 with the aim to display the motion of the weak lines (the intensity scale is adjusted so that He II line appears saturated). The disc and red star emission components are seen in the lines of He I 4388, He I 4472, Mg II 4481, the Bowen blend and the He I 4713. The red star component is the sharp ‘S’-wave moving from red to blue at phase 0.5. It can also be traced in the He II 4542 and the Ti II 4418 lines. Note that the Mg II ‘S’-wave component disappears earlier (binary phase 0.7) than that of the neighbouring He I 4472 (binary phase 0.75).
We reconstruct the Doppler images of the emission lines using the trailed spectra (Marsh and Horne 1988). A Doppler image is the reconstruction of the emission line distribution in velocity space and has been particularly successful in resolving the location of emission components such as the red star (IP Peg; Harlaftis et al. 1994), the gas stream (OY Car in outburst; Harlaftis and Marsh 1996), the bright spot (GS2000+25; Harlaftis et al. 1996) and spiral waves in the outer accretion disc (Steeghs, Harlaftis and Horne 1997; Harlaftis et al. 1999). We built the Doppler images of the emission lines (see above references for the procedure) and, after subtracting the axisymmetric disc emission, we can zoom onto the Roche lobe of the red star (Fig. 3 from left to right, high-ionization to low-ionization lines, He II 4686, He I 4388, He I 4472, He I 4713, Mg II 4481).
## 3 Line emission from the Red star
The line emission, on the Doppler images of Fig. 3, is stronger on the side of the red star facing the white dwarf and further it weakens towards the equator of the red star, indicative of screening by the disc. The He I maps show a relative strength similar to that seen in diffuse nebulae (weaker in 4388Å, stronger in 4472 Å; Kaler 1976). We measured the velocity locations of the peak intensities in the Doppler images using Gaussian fitting. There may be a systematic shift towards the L<sub>1</sub> point with higher-ionization potential (see Table 1). As a consistency test for the properties of the “spots” being realistic, the velocity widths of the irradiated sites are indeed less than the rotational broadening of the companion star, $`\upsilon \mathrm{sin}i=146`$ km s<sup>-1</sup> (1$`\sigma `$), which is obtained from the relation
$$\frac{\upsilon \mathrm{sin}i}{K_\mathrm{c}}=0.462\left[(1+q)^2q\right]^{1/3}.$$
where $`q=0.58\pm 0.10`$, and $`K_\mathrm{c}=280\pm 2`$ km s<sup>-1</sup> (Wolf et al. 1998). The measured relative shifts between the wavelength-dependent irradiated sites are with respect to zero velocity (binary’s centre), therefore any uncertainties in the system parameters do not alter our conclusion.
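The quoted rotational broadening follows directly from this relation; a one-line Python check with the system parameters given above:

```python
def vsini(K_c, q):
    """Rotational broadening (km/s) of the Roche-lobe-filling secondary."""
    return 0.462 * K_c * ((1.0 + q) ** 2 * q) ** (1.0 / 3.0)

print(vsini(280.0, 0.58))   # ~146 km/s, as quoted above
```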
In the past, similar emission has been interpreted as irradiation of the inner side of the red star by soft X-ray photons emitted by the boundary layer (Harlaftis & Marsh 1996 and references therein). The Roche lobe maps may suggest that there is temperature foreshortening or that the shadow cast by the disc on the companion star decreases with higher energy photons (Mg II, He I, He II). Indeed, the disc thickness may hinder efficient irradiation around L<sub>1</sub> relative to the polar regions.
In this way, the red star emission can also be used to measure the thickness of the disc as seen by the soft X-rays which excite the line emission, independently of X-ray data. From the values in Table 1, the L<sub>1</sub> region may mainly be clear of emission around 50 km s<sup>-1</sup> (or 2 pixels) from the L<sub>1</sub>. This corresponds to a Roche lobe height of $`30`$ %, or $`0.10`$ $`\alpha `$, where $`\alpha `$ is the binary separation (assuming q=0.6 and use of Table 3-1 in Kopal 1959). Therefore, there is potential to probe the vertical structure of the disc with higher quality data. Moreover, disc contamination and small-scale blurring caused by the Doppler tomography (which assumes isotropic emission from the orbital plane) are significant at that level and only improvement of the Doppler tomography technique combined with higher-resolution data will clarify better the ionization zones on the Roche lobe and the vertical disc height.
###### Acknowledgements.
The data reduction and analysis was partly carried out at the St. Andrews STARLINK node. Use of software developed by T. Marsh is acknowledged. ETH was supported by the TMR contract RT ERBFMBICT960971 of the European Union. ETH was partially supported by a joint research programme between the University of Athens, the British Council at Athens and the University of St. Andrews.
# Hard X-ray emission from elliptical galaxies
## 1 Introduction
Observations of nearby galaxies provide convincing evidence for the existence of supermassive black holes. Although most such galaxies exhibit little or no nuclear activity, dynamical arguments based on the observed stellar and gas distributions firmly imply the presence of supermassive compact objects in their cores (e.g. Kormendy & Richstone 1995, Magorrian et al. 1998, Ho 1998, van der Marel 1999). Interestingly, these studies show that virtually all early-type galaxies host black holes with masses in the range $`10^8`$ to a few $`\times 10^9`$ $`\mathrm{M}_{\odot }`$. In disk galaxies, however, the (rather sparser) observations indicate that the black hole masses are typically of order $`10^7\mathrm{M}_{\odot }`$ (e.g. Sgr A* in our own Galaxy, M31, and the maser galaxy NGC 4258), consistent with the black hole masses in galaxies being roughly proportional to the masses of the spheroidal components.
Although the central black hole masses in early type galaxies are large enough to be the remnants of QSO phenomena (with luminosities ranging from $`10^{46}`$ to $`10^{48}\mathrm{erg}\mathrm{s}^{-1}`$) and giant ellipticals are known to host radio galaxies and radio-loud quasars at high redshifts, in nearby ellipticals, only low-luminosity radio cores are commonly observed (Sadler, Jenkins & Kotanyi 1989; Wrobel & Heeschen 1991). Spiral galaxies, in contrast, frequently exhibit nuclear activity (e.g. Seyfert nuclei), with optical/UV emission, strong, power-law X-ray continua (with a typical intrinsic photon index, $`\mathrm{\Gamma }\sim 2`$) and complex variability observed. Such phenomena are usually attributed to standard, thin-disk accretion, coupled with a hot coronal plasma with high radiative efficiencies.
Postulating that the early-type systems are simply ‘starved’ of fuel to power their activity appears implausible since these galaxies possess extensive hot, gaseous halos which will inevitably accrete onto their central black holes at rates which may be estimated using the Bondi (1952) formula (a lower limit). This accretion should give rise to far more activity than is observed, if the radiative efficiency is $`\sim 10`$ per cent, as is postulated in standard accretion theory (e.g. Fabian & Canizares 1988). All nearby elliptical galaxies should then host active nuclei with luminosities exceeding $`10^{45}\mathrm{erg}\mathrm{s}^{-1}`$. However, the available data show that the total X-ray luminosities of these galaxies rarely exceed $`10^{42}\mathrm{erg}\mathrm{s}^{-1}`$, with only a small fraction of this flux being due to a central point source.
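To make the scale of this discrepancy concrete, the following order-of-magnitude sketch evaluates the Bondi rate and the implied luminosity for a 10 per cent radiative efficiency. The black hole mass, gas density and temperature used here are illustrative assumptions, not values taken from this paper:

```python
import math

# Constants in cgs units
G, c, m_p, M_sun = 6.674e-8, 2.998e10, 1.673e-24, 1.989e33
yr = 3.156e7

# Illustrative (assumed) input values
M_bh = 3e9 * M_sun          # central black hole mass
n_e  = 0.1                  # cm^-3, electron density of the hot halo
kT   = 1.0 * 1.602e-9       # erg, gas temperature of ~1 keV
mu   = 0.62                 # mean molecular weight

rho = 1.2 * n_e * m_p                            # approximate gas mass density
c_s = math.sqrt(5.0 / 3.0 * kT / (mu * m_p))     # adiabatic sound speed

Mdot = 4.0 * math.pi * (G * M_bh)**2 * rho / c_s**3   # Bondi rate (order-unity prefactor)
L_acc = 0.1 * Mdot * c**2                             # luminosity for 10 per cent efficiency

print(f"Mdot ~ {Mdot * yr / M_sun:.2f} Msun/yr, L ~ {L_acc:.1e} erg/s")
# Gives a few times 1e44 erg/s for these inputs, orders of magnitude above
# the observed total X-ray luminosities of ~1e42 erg/s or less.
```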
Accretion with such high radiative efficiency need not be universal, however. As suggested by several authors (Rees et al. 1982; Begelman, Blandford & Rees 1984; Fabian & Rees 1995) and successfully applied to a number of giant ellipticals in the Virgo cluster (Reynolds et al. 1996; Di Matteo & Fabian 1997a; Mahadevan 1997; Di Matteo et al. 1999a), the final stages of accretion in elliptical galaxies may occur via an advection-dominated accretion flow (ADAF; Narayan & Yi 1995, Abramowicz et al. 1995) at roughly the Bondi rates. Within the context of such an accretion mode, the quiescence of the nuclei in these systems is not surprising; when the accretion rate is low, the radiative efficiency of the accreting (low density) material will also be low. Other factors may also contribute to the low luminosities observed. As discussed by Blandford & Begelman (1999; and emphasized observationally by Di Matteo et al. 1999a), winds may transport energy, angular momentum and mass out of the accretion flows, resulting in only a small fraction of the material supplied at large radii actually accreting onto the central black holes.
If the accretion from the hot interstellar medium in elliptical galaxies (which should have relatively low angular momentum) proceeds directly into the hot, advection-dominated regime, and low-efficiency accretion is coupled with outflows (Di Matteo et al. 1999a), the question arises of whether any of the material entering into the accretion flows at large radii actually reaches the central black holes. The present observational data generally provide little or no evidence for detectable optical, UV or X-ray emission associated with the nuclear regions of these galaxies.
In this paper we present a detailed, multiphase analysis of high-quality ASCA X-ray spectra for six nearby elliptical galaxies; three central cluster galaxies and three giant ellipticals in the Virgo Cluster. We obtain clear detections of hard (mean-weighted photon index, $`\mathrm{\Gamma }=1.2`$) power-law, emission components in the integrated X-ray spectra of all six galaxies. Although our data do not allow us to unambiguously identify these components with the nuclear regions of the galaxies, we show that this emission is likely to be related to the accretion process and the presence of the central, supermassive black holes, which also power radio activity in the galaxies at varying levels.
The presence of hard X-ray emission components in the ASCA spectra of elliptical galaxies has previously been reported by a number of authors (e.g. Matsushita et al. 1994; Matsumoto et al. 1997; Buote & Fabian 1998). In general, however, these studies have associated the observed, hard components with a population of binary X-ray sources, the spectra of which can be approximated by bremsstrahlung models with typical temperatures of $`4`$–$`7`$ keV (e.g. Fabbiano, Trinchieri & Van Speybroeck 1987; White, Stella & Parmar 1988; Makishima et al. 1989; Tanaka et al. 1996; Christian & Swank 1998). The spectra of the hard X-ray emission components detected in the present sample of elliptical galaxies are significantly harder than those of binary X-ray sources or typical AGN, identifying these objects as, potentially, a new class of accreting source. (The ‘canonical’ photon index for AGN is $`\mathrm{\Gamma }\sim 1.8`$–$`2`$; Nandra et al. 1997, Reynolds 1997; recent ASCA studies of low-luminosity AGN indicate photon indices in the range $`\mathrm{\Gamma }\sim 1.6`$–$`1.8`$; e.g. M81; Ishisaki et al. 1996, Turner et al. 1996 or NGC 4258; Makishima et al. 1994.) The X-ray luminosities of the present systems ($`L_{\mathrm{X},1-10}=2\times 10^{40}`$–$`2\times 10^{42}`$ $`\mathrm{erg}\mathrm{s}^{-1}`$) are similar to those of low luminosity AGN (e.g. Ho, Filippenko & Sargent 1997), although the large ($`\sim 10^9`$ $`\mathrm{M}_{\odot }`$) black hole masses in their nuclei identify them as far more inefficient radiators. The spectral energy distributions for the galaxies in our sample indicate that a significant fraction of their luminosities are emitted at X-ray wavelengths, with relatively low levels of optical and UV emission, which are dominant in typical AGN. These data, coupled with the presence of compact (possibly thermal synchrotron) radio cores, suggest a possible, ubiquitous presence of low-level nuclear activity in the nearby universe and provide important new constraints on the dominant accretion mechanisms in elliptical galaxy cores.
## 2 Observations and data reduction
### 2.1 Target selection
The targets selected for investigation are six well-studied, nearby elliptical galaxies, with high-quality ASCA observations available in public archives. The objects include three dominant cluster galaxies; the well known, low-luminosity radio galaxy M87 (NGC 4486) at the centre of the Virgo Cluster; NGC 1399, the giant elliptical at the centre of the Fornax Cluster and NGC 4696, the dominant galaxy of the Centaurus Cluster. These three galaxies, and especially M87, are key examples of quiescent active nuclei, containing nuclear black hole masses of $`3`$–$`5\times 10^9\mathrm{M}_{\odot }`$ (Ford et al. 1995; Harms et al. 1994; Macchetto et al. 1998; Magorrian et al. 1998; although no direct mass measurement for NGC 4696 has been made) and exhibiting low-luminosity relativistic jets and FR-I-type radio emission. However, the luminosities of their nuclei are approximately three orders of magnitude less than would be expected if they were accreting at close to their Bondi rates, with a standard radiative efficiency (e.g. Reynolds et al. 1996).
The sample also includes three other giant ellipticals in the Virgo cluster; NGC 4649, 4472 and 4636. These galaxies contain almost completely inactive black holes, with measured masses ranging from a few $`10^8\mathrm{M}_{\odot }`$ to a few $`10^9`$ $`\mathrm{M}_{\odot }`$ (Magorrian et al. 1998) and predicted X-ray luminosities (if the accretion occurs at the Bondi rate with a standard radiative efficiency) four to five orders of magnitude greater than is observed (Di Matteo et al. 1999a).
The central cluster galaxies provide an extreme illustration of the phenomenon of quiescent black holes. Besides possessing FR-I-type radio sources and very large black hole masses, these galaxies exist in extremely gas-rich environments, i.e. in cooling flows at the centres of clusters, and are therefore ideal sources in which to study the physics of low-radiative efficiency accretion. However, these same reasons also imply that these galaxies are by no means typical. Although they exhibit lower luminosities, the other three ellipticals in our sample, which do not occupy such preferential locations, have properties that may be more easily generalized to other systems. These three galaxies have also been studied at high radio frequencies and sub-mm wavelengths (Di Matteo et al. 1999a) and, when coupled with the reduced jet-radio-activity in these systems (and the ASCA data presented in this paper), provide further crucial constraints on the primary accretion mechanisms in the nuclei of elliptical galaxies.
### 2.2 The ASCA Observations and data reduction
The ASCA (Tanaka, Inoue & Holt 1994) observations were made over a two-and-a-half year period between 1993 June and 1995 December. The ASCA X-ray telescope array (XRT) consists of four nested-foil telescopes, each focussed onto one of four detectors; two X-ray CCD cameras, the Solid-state Imaging Spectrometers (SIS; S0 and S1) and two Gas scintillation Imaging Spectrometers (GIS; G2 and G3). The XRT provides a spatial resolution of $`\sim 3`$ arcmin (Half Power Diameter) in the energy range $`0.5`$–$`10`$ keV. The SIS detectors provide excellent spectral resolution \[$`\mathrm{\Delta }E/E=0.02(E/5.9\mathrm{keV})^{-0.5}`$\] over a $`22\times 22`$ arcmin<sup>2</sup> field of view. The GIS detectors provide poorer energy resolution \[$`\mathrm{\Delta }E/E=0.08(E/5.9\mathrm{keV})^{-0.5}`$\] but cover a larger circular field of view of $`\sim 50`$ arcmin diameter.
For our analysis of the ASCA data we have used the screened event lists from the rev2 processing of the data sets available on the Goddard-Space Flight Center (GSFC) ASCA archive (for details see the ASCA Data Reduction Guide, published by GSFC.) The data were reduced using the FTOOLS software (version 4.1) from within the XSELECT environment (version 1.4). Further data-cleaning procedures as recommended in the ASCA Data Reduction Guide, including appropriate grade selection, gain corrections and manual screening based on the individual instrument light curves, were followed. A full summary of the observations is given in Table 1.
Spectra were extracted from all four ASCA detectors, except for NGC 4696 for which the S0 and G3 data exhibited residual calibration errors and so were excluded from the analysis. The spectra were extracted from circular regions, centred on the peaks of the X-ray emission from the galaxies. For the SIS data, the radii of the regions studied were selected to minimize the number of chip boundaries crossed (thereby minimizing the systematic uncertainties introduced by such crossings) whilst covering as large a region of the galaxies as possible. Data from the regions between the chips were masked out and excluded. The final extraction radii for the SIS data are summarized in Table 2. Also noted are the chip modes used for the observations and the number of chips from which the extracted data were drawn. For the GIS data, a fixed extraction radius of 6 arcmin was adopted.
Background subtraction was carried out using the ‘blank sky’ observations of high Galactic latitude fields compiled during the performance verification stage of the ASCA mission. The background spectra were screened and grade selected in the same manner as the target observations and extracted from the same regions of the detectors. For the systems observed in 4-CCD mode, additional SIS background spectra were extracted from regions of the chips free from significant source counts (we note that this was not possible for the M87 observation due to the extended cluster emission which covers the full area of the detectors). The use of these on-chip backgrounds in place of the blank-sky backgrounds did not significantly alter the results.
For the SIS data, response matrices were generated using the FTOOLS SISRMG software (version 1.1). Where the extracted spectra covered more than a single chip, individual response matrices were created for each chip, which were then combined to form a counts-weighted mean matrix. For the GIS analysis, the response matrices issued by GSFC on 1995 March 6 were used.
## 3 The Analysis of the ASCA data
### 3.1 The spectral models
The modeling of the X-ray spectra has been carried out using the XSPEC spectral fitting package (version 10.0; Arnaud 1996). For the SIS data, only counts in pulse height analyser (PHA) channels corresponding to energies between 0.6 and 10.0 keV were included in the analysis (the energy range over which the calibration of the SIS instruments is best-understood). For the GIS data, only counts in the energy range $`1.0`$–$`10.0`$ keV were used. The spectra were grouped before fitting to ensure that $`\chi ^2`$ statistics could be reliably used (after background subtraction).
The spectra have been modeled using the plasma codes of Kaastra & Mewe (1993; incorporating the Fe L calculations of Liedahl, Osterheld & Goldstein 1995) and the photoelectric absorption models of Balucinska-Church & McCammon (1992). The data from all four detectors were included in the analysis, with the fit parameters linked to take the same values across the data sets. The exceptions to this were the emission measures of the (hot) plasma components, which model the extended X-ray halos of the galaxies and which, due to the different extraction radii used for the different detectors, were maintained as independent fit parameters.
The spectra were examined with a series of spectral models. (We have adopted the naming convention of Allen et al. 1999 from their analysis of nearby cluster spectra). The first model, Model B, consists of an isothermal plasma of temperature, $`kT`$, and metallicity, $`Z`$, in collisional equilibrium, at the optically-determined redshift for the galaxy, and absorbed by a column density $`N_\mathrm{H}`$. Metallicities are measured relative to solar photospheric values of Anders & Grevesse (1989) with the various elements assumed to be present in their solar ratios. The second model, model D, included an additional, cooler plasma component of temperature, $`kT_2`$, the normalization of which was a further free parameter in the fits. (This cooler component, where present, is typically associated with the presence of a cooling flow in a cluster or galaxy). The metallicity of the cooler component was linked to that of the hotter gas. The cooler component was also assumed to be absorbed by an intrinsic column density, $`\mathrm{\Delta }N_\mathrm{H}`$, of cold gas, which was a further free parameter in the fits. The abundances of metals in the absorbing gas were fixed at their solar values (Anders & Grevesse 1989).
Where it provided a significant statistical improvement, a third spectral model, model F, was also examined which was similar to model D but with the abundances of various individual elements also included as free parameters in the fits. This model was statistically preferred over model D for M87, NGC 4696 and NGC 4636.
Finally, a fourth spectral model, Model G, was investigated in which a power-law emission component was introduced into the previous best-fitting two-temperature plasma model (either model D or F, respectively). The power-law emission component was assumed to be absorbed by the same intrinsic column density acting on the cooler plasma emission component (since both components are expected to arise primarily from the central regions of the galaxies and/or clusters).
### 3.2 The requirement for multi-phase models and the metallicity of the X-ray gas
The results from the spectral modeling are summarized in Table 3. We see that the two-temperature model, model D, invariably provides a much better description of the ASCA spectra for the galaxies than the single-temperature model (model B), with a typical reduction in $`\chi ^2`$ of a few hundred for the introduction of only three additional free parameters in the fit. For NGC 1399, 4472 and 4649, the use of the two temperature model results in a significant increase in the inferred metallicity of the X-ray gas in the galaxies, from values of $`0.2`$–$`0.4`$ solar with spectral model B, to values consistent with (or slightly exceeding) the solar value, with spectral model D. This is in agreement with the results previously reported by Buote & Fabian (1998; see also Buote 1999).
For M87, NGC 4696 and NGC 4636, we found that the fits were further significantly improved by allowing the abundances of various individual elements to be included as additional free parameters in the fits. (With spectral model D, all elements are linked to vary in the same ratio, relative to their solar values). Improvements in the measured $`\chi ^2`$ values of several hundred were obtained for these objects by allowing the abundances of Mg, Si and S to be included as free parameters in the fits (with only a single extra degree of freedom being associated with each extra element included as a free fit parameter). The results on the individual element abundances for M87 and NGC 4696 are discussed by Allen et al. (1999). The results for NGC 4636 are detailed in Section 6.
### 3.3 The requirement for the power-law components
The results on the power-law components detected in the ASCA spectra are summarized in Table 4. In all cases the introduction of the power-law component into the two-temperature models (models D and F) leads to a highly significant improvement in the fit. For guidance, a reduction in $`\chi ^2`$ of $`\mathrm{\Delta }\chi ^2\sim 10`$ with the introduction of the power-law component (2 extra fit parameters) is significant at approximately the 99 per cent confidence level (for a fit with 1000 degrees of freedom and a reduced $`\chi ^2`$/DOF value $`\sim 1.0`$). The observed improvements range from $`\mathrm{\Delta }\chi ^2\sim 30`$ to $`\mathrm{\Delta }\chi ^2\sim 600`$.
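For readers who wish to reproduce the quoted significance level, a minimal sketch treats the drop in $`\chi ^2`$ for two extra parameters as $`\chi ^2`$-distributed with two degrees of freedom (the usual nested-model approximation):

```python
from scipy.stats import chi2

# Chance probability of a given drop in chi-squared when adding a component
# with two extra free parameters (nested-model approximation).
for dchi2 in (10, 30, 600):
    p = chi2.sf(dchi2, df=2)
    print(f"delta chi2 = {dchi2:3d}: p ~ {p:.2g}")
# delta chi2 = 10 gives p ~ 0.007, i.e. roughly the 99 per cent confidence
# level quoted above; the observed improvements of 30-600 are far more significant.
```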
In all cases the slopes of the power-law components are significantly flatter ($`\mathrm{\Gamma }=0.6`$–$`1.5`$) than the canonical value of $`\mathrm{\Gamma }\sim 1.8`$ obtained for Seyfert galaxies (e.g. Nandra et al. 1997, Reynolds 1997). The weighted mean best-fit photon index for the sources in our sample is $`\mathrm{\Gamma }=1.22`$. The observed $`2`$–$`10`$ keV fluxes range from $`5.7\times 10^{-13}`$ to $`8.7\times 10^{-12}`$ $`\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$, and the intrinsic $`1`$–$`10`$ keV luminosities of the power-law components (corrected for Galactic and intrinsic absorption as determined with spectral model G, and calculated using the luminosity distances listed in Table 1) range from $`2.6\times 10^{40}`$ (NGC 4649) to $`2.2\times 10^{42}`$ $`\mathrm{erg}\mathrm{s}^{-1}`$ (NGC 4696). These luminosities are also less than those typically associated with Seyfert nuclei. Figure 1 shows the intrinsic $`1`$–$`10`$ keV luminosities as a function of the observed photon index. No clear correlation between the two parameters is observed. Fig. 2 shows the ASCA spectrum and best-fitting model for M87 (spectral model G). For clarity, only the results for the S0 detector are shown.
The detection of hard, power-law emission components from all six galaxies is not easily explained as an artifact of the analysis. Applying the same method to an ASCA observation of NGC 1275, the dominant galaxy of the nearby ($`z=0.0183`$) Perseus Cluster, which is known to contain an active nucleus, we determine a photon index for the nuclear emission of $`\mathrm{\Gamma }=2.05\pm 0.05`$, in good agreement with the typical values determined for such galaxies (Nandra et al. 1997; Reynolds 1997; Turner et al. 1997). The $`2`$–$`10`$ keV flux associated with the nucleus of NGC 1275, $`F_{\mathrm{X},2-10}\sim 1.8\times 10^{-10}`$ $`\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ (implying an intrinsic $`1`$–$`10`$ keV luminosity of $`4\times 10^{44}`$ $`\mathrm{erg}\mathrm{s}^{-1}`$), is also significantly larger than the fluxes associated with the harder power-law components detected in the present sample of objects. Secondly, when we apply the same analysis method (using spectral model D) to an observation of the central regions of the Coma Cluster ($`z=0.0232`$), where the observations are centred on the X-ray centroid of the cluster rather than an individual galaxy (the cluster does not contain a single, optically-dominant galaxy in its core), we find no improvement in the fit ($`\mathrm{\Delta }\chi ^2=0.0`$) on the introduction of a hard, power-law component. The upper (90 per cent confidence) limit to the $`2`$–$`10`$ keV flux of any power-law component with a photon index $`\mathrm{\Gamma }=2.0`$ in the ASCA data for the centre of the Coma Cluster using spectral model D is $`F_{\mathrm{X},2-10}<2.2\times 10^{-12}`$ $`\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$, significantly less than the measured values for M87 and NGC 4696. We also note that the application of a similar analysis procedure to ASCA observations of more distant, luminous cooling-flow clusters does not, generally, indicate the presence of hard ($`\mathrm{\Gamma }\sim 1.2`$) power-law components in these systems. (We do not expect to detect hard, power-law components with luminosities comparable to those reported here in clusters an order of magnitude (or more) more X-ray luminous than the Virgo or Centaurus clusters; c.f. Section 5.3.) This further suggests that the hard, power-law components detected in the present sample of elliptical galaxies are not artifacts due to systematic errors in the modeling of diffuse cluster/galaxy emission in these systems. (A more detailed discussion of the ASCA data for the Perseus and Coma clusters is presented by Allen et al. 1999. For a discussion of more distant, luminous cooling-flow clusters see Allen 1999).
Finally, we note that the data for NGC 4636 exhibit a systematic discrepancy at low energies ($`E<1`$ keV) between the S0 and S1 detectors, which contributes significantly to the measured $`\chi ^2`$. (This discrepancy is also noted by Buote 1999). Ignoring the data in this region and repeating the analysis with spectral model G leads to a lower $`\chi ^2`$ value ($`\chi ^2=284`$ for 201 degrees of freedom) and constraints on the power-law component in good agreement with those listed in Table 4.
### 3.4 A bremsstrahlung model for the hard X-ray emission
We have investigated whether the hard X-ray emission from the elliptical galaxies can also be parameterized by a simple bremsstrahlung model. To do this, the power-law component in spectral model G was replaced with a thermal bremsstrahlung component, with the temperature, $`kT_{\mathrm{brem}}`$, and normalization, $`A_2`$, as fit parameters. The results from the fits with this modified model are summarized in Table 5. The $`\chi ^2`$ values obtained indicate that the bremsstrahlung model provides as good a parameterization for the hard X-ray emission from the galaxies as the power-law model. Generally, the results constrain the temperatures of the bremsstrahlung components to be $`\gtrsim 10`$ keV, with the upper limits on the temperatures unconstrained. (ASCA data in the $`0.6`$–$`10.0`$ keV band cannot reliably constrain the temperatures of plasmas hotter than $`\sim 10`$ keV). The results on the other fit parameters are essentially identical to those listed in Table 3. The fluxes and intrinsic luminosities associated with the bremsstrahlung components are also in good agreement with the values listed in Table 4, using the power-law model. The measured temperatures for the bremsstrahlung components are significantly hotter than those of typical Galactic binary sources (e.g. Christian & Swank 1998) and support the interpretation for the origin of the hard X-ray emission from the galaxies in terms of low radiative efficiency accretion flows, given in Section 4.
We have examined whether the statistical quality of the fits with spectral model G (or the modified model G, incorporating the bremsstrahlung component) can be improved by the addition of further model components. In all cases, the introduction of a further power-law or bremsstrahlung component did not significantly improve the fits. The introduction of a third, thermal plasma component (as described in Section 3.1) also leads to no significant improvement (at the 99 per cent confidence level) for any object, except NGC 4696. For this source, the introduction of a third plasma component (with the element abundances linked to be equal to those in the other two components) leads to a drop in $`\chi ^2`$ of $`\mathrm{\Delta }\chi ^2=37`$, and slightly modified constraints on the power-law emission ($`\mathrm{\Gamma }=1.10_{-0.44}^{+0.36}`$ and $`A_1=5.8_{-3.5}^{+6.2}\times 10^{-4}`$ photon keV<sup>-1</sup>cm<sup>-2</sup>s<sup>-1</sup>). We note that the relatively high temperature ($`kT\sim 3.3`$ keV) for the hotter thermal component in NGC 4696 simply reflects the virial temperature of the host cluster. NGC 4696 is the most distant galaxy included in the present study and the 3.7 arcmin (66 kpc) S1 aperture used includes significant flux from the extended cluster gas.
Finally, we note that several of the $`\chi ^2`$ values listed in Tables $`3`$–$`5`$ (in particular, those for the galaxies with the highest signal-to-noise ratios in their ASCA spectra) indicate that the best-fit models are, formally, statistically unacceptable. However, the high $`\chi ^2`$ values obtained are primarily due to residual systematic errors in the instrument response matrices and plasma emission models (although we do not expect these to significantly affect the main conclusions reported here).
## 4 The origin of the hard X-ray emission
### 4.1 Accretion processes
The presence of hard, power-law X-ray emission from astrophysical sources provides a discriminating signature of accretion processes around black holes. Such emission is usually attributed to the presence of a hot, tenuous coronal plasma (probably magnetically) coupled to an accretion disk or a hot (ADAF-type) accretion flow, but can also arise in the shock sites of a jet. Other astrophysical processes tend to give rise to softer X-ray emission, with a steeper photon index (when this emission is modeled as a simple power-law). The flat slopes of the power-law components detected in our sample of elliptical galaxies, together with the dynamical evidence for supermassive black holes with masses of $`10^8`$–$`10^{10}`$ $`\mathrm{M}_{\odot }`$ in the nuclei of these objects, argue strongly for this emission being intimately related to the accretion process.
The relatively broad point spread function of the ASCA mirrors (Section 2.2) does not allow us to resolve the X-ray emission from the galaxy cores into more than a single integrated spectrum. We therefore cannot separate the X-ray emission associated with the jets in these objects (e.g. knot A in M87, which previous ROSAT High Resolution Imager observations have resolved; Reynolds et al. 1996, Harris, Biretta & Junor 1997, 1998) from the accretion disks themselves. However, the fact that the galaxy with the strongest radio emission, M87, in which the radio and X-ray jet emission is relativistically beamed towards us, exhibits a significantly weaker hard X-ray component than NGC 4696 (which has a twin lobe radio structure and a total 4.85 GHz radio flux $`40`$ times lower than M87) at least suggests that the jet emission does not dominate the detected hard X-ray flux. This is further supported by the detections of hard, power-law X-ray components, with similar characteristic slopes, in objects such as NGC 4636, 4649, and 4472, in which the radio activity is at the level of a mJy or less. Sharp cut-offs observed in the radio spectra of these objects (Di Matteo et al. 1999a) also strongly constrain the populations of non-thermal particles responsible for synchrotron radiation and synchrotron self-Compton emission (in X-rays) from a jet and/or outflow associated with a low radiative-efficiency accretion flow (although contributions from a non-thermal distribution of relativistic electrons might still be important in the central cluster galaxies with more dominant radio activity; Di Matteo et al. 1999b).
#### 4.1.1 The effects of intrinsic absorption
The ASCA spectra alone cannot reliably discriminate between whether the observed power-law components are intrinsically flat or have steeper photon indices which have been modified by the effects of intrinsic absorption over and above that accounted for by spectral model G. (This uncertainty is primarily due to the complex spectrum of the diffuse, thermal emission at soft X-ray energies). In general, the inclusion of an extra $`5\times 10^{22}`$–$`10^{23}`$ atom cm<sup>-2</sup> of intrinsic absorption associated with the power-law components leads to power-law photon indices of $`\mathrm{\Gamma }\sim 2`$, and provides similarly good fits to the ASCA data (although the determination of confidence limits on the fit parameters becomes difficult with such models). Conversely, if the true intrinsic column densities acting on the hard X-ray components are overestimated using spectral model G, then the intrinsic photon indices of these components will actually be flatter than the values listed in Table 4.
An intrinsic photon index of $`\mathrm{\Gamma }\sim 1.8`$–$`2`$ is typically observed in AGN (e.g. Nandra et al. 1997; Reynolds 1997; Turner et al. 1997), where the accretion is thought to occur via a thin disk, and where the radiative efficiency is $`\sim 10`$ per cent. However, as discussed in Section 1, if the accretion in the present sample of elliptical galaxies were to proceed in this manner, the X-ray luminosities associated with their power-law emission components should be $`3`$–$`5`$ orders of magnitude larger than the observed values. There is no simple way in which to modify the observed X-ray fluxes (particularly when one takes into account the joint ASCA and ROSAT results; Section 5.1) and measured photon indices to agree with the predictions from the standard thin-disk accretion models by simply including extra absorption on the power-law components.
IRAS measurements of the 60 and 100$`\mu m`$ fluxes from the galaxies (determined using the IPAC SCANPI software applied to co-added IRAS scans) also constrain the X-ray luminosities that may be absorbed and reprocessed to infrared wavelengths in these systems. For M87, the observed 60 and 100$`\mu m`$ luminosities ($`L_{60\mu \mathrm{m}}\approx 8\pm 1\times 10^{41}`$ $`\mathrm{erg}\mathrm{s}^{-1}`$ and $`L_{100\mu \mathrm{m}}\approx 4\pm 1\times 10^{41}`$ $`\mathrm{erg}\mathrm{s}^{-1}`$, respectively) are comparable to the $`1`$–$`10`$ keV luminosity of the hard X-ray component (Table 4). The infrared luminosities for the other galaxies are summarized in Table 6. In general, the observed 100$`\mu m`$ luminosities do not exceed the $`1`$–$`10`$ keV values by more than an order of magnitude (with the exception of NGC 4649, for which the observed 100$`\mu m`$ luminosity is $`\sim 500`$ times larger than the $`1`$–$`10`$ keV value) and the presence of an absorbed active nucleus with an X-ray luminosity $`4`$–$`5`$ orders of magnitude more luminous than the values listed in Table 4 can be firmly ruled out. We note, however, that although the X-ray and infrared data argue strongly against the observed hard, power-law components being due to intrinsically absorbed, steep ($`\mathrm{\Gamma }\sim 2`$) X-ray spectra with luminosities $`\sim 10^{46}`$ $`\mathrm{erg}\mathrm{s}^{-1}`$, the very flat photon indices observed in some galaxies (with $`\mathrm{\Gamma }<1`$) suggest that intrinsic absorption may play some role. Such flat slopes could result from absorption of a few $`\times 10^{21}`$ atom cm<sup>-2</sup> acting on emission spectra with intrinsic photon indices, $`\mathrm{\Gamma }\sim 1.4`$ (which could be produced by low-radiative efficiency accretion flows with electron temperatures $`<100`$ keV; Section 4.1.2). This is consistent with the observation that M87, which has an X-ray photon index, $`\mathrm{\Gamma }=1.4`$, exhibits a lower $`L_{100\mu \mathrm{m}}/L_{\mathrm{X},1-10}`$ ratio than those galaxies with flatter X-ray slopes.
#### 4.1.2 Low radiative efficiency accretion flows
Although the observed properties of the elliptical nuclei are not easily explained within the context of unified models for Seyfert galaxies, the characteristic hard X-ray, radio and (lack of) optical emission from these sources (i.e. the typically non-thermal character of their spectra) can be accounted for by models of low-radiative efficiency accretion (such as an ADAF; Narayan & Yi 1994, 1995) coupled with outflows/winds (Blandford & Begelman 1999), with the accretion occurring at approximately the Bondi rates.
In an advection dominated flow, which occurs at accretion rates $`\dot{M}<\alpha ^2\dot{M}_{\mathrm{Edd}}`$ (where $`\dot{M}_{\mathrm{Edd}}`$ is the accretion rate at the Eddington limit) the viscosity (parameterized by the constant $`\alpha >0.1`$) is assumed to dissipate most of the energy locally into ions. The ions cannot cool (Coulomb scattering between the ions and electrons is very inefficient in the low density plasma; no other form of electron-ion coupling is assumed and the gas is two-temperature) and flow inwards, carrying an increasing amount of thermal energy. In the absence of an energy sink, which would normally be present in the form of efficient radiation in a thin disk, the gas will be unbound (i.e. have a Bernoulli parameter greater than zero) unless the binding energy is transported radially outward by the viscous torque, in the form of a wind (Blandford & Begelman 1998). Within the context of low-radiative efficiency accretion flows (unavoidably coupled with outflows), most of the accreted mass, angular momentum and energy will be lost at large distances in the flows and the central pressures and densities will be much reduced.
In these more generalized models for low-radiative efficiency accretion flows, the density varies as $`\rho \propto r^{-3/2+p}`$ (where the accretion rate $`\dot{M}\propto r^p`$, with $`p\le 1`$) such that the emission in the central regions is highly suppressed. The thermal electrons, at a temperature of $`\sim 100\mathrm{keV}`$, radiate by synchrotron emission, inverse Compton scattering (of synchrotron photons) and bremsstrahlung emission. Most of the synchrotron emission and its Comptonization will occur in the innermost regions (within a few Schwarzschild radii) of the flow (where the temperature is high enough, $`>10^9\mathrm{K}`$, for these processes to become important) and will therefore be highly suppressed in the presence of winds<sup>1</sup> (as emphasized by the observations and modeling discussed by Di Matteo et al. 1999a,b; see also Quataert & Narayan 1999). (<sup>1</sup>Although the luminosities due to both bremsstrahlung and synchrotron processes vary as the square of the accretion rate (i.e. the density), in the presence of a wind the accretion rate will be much reduced in the inner regions where most of the synchrotron emission is produced. More importantly, in the very low density inner regions the electron temperature profile is much flatter; the synchrotron power $`\propto T^7`$ is highly suppressed (Di Matteo et al. 1999b; Quataert & Narayan 1999). Comptonization of this component is also highly suppressed, its importance also depending on the scattering optical depth $`\tau \propto \dot{M}`$, which always remains low in the presence of a wind.) Bremsstrahlung emission, in contrast, arises from all radii in the flow and should, therefore, dominate the emission from sources in which low radiative efficiency accretion is associated with winds. The X-ray spectrum due to thermal bremsstrahlung in the optically thin gas is very flat up to its cut-off frequency at about $`h\nu \sim kT`$. Most of the bremsstrahlung luminosity, $`\propto \rho ^2T^{1/2}r^3\mathrm{exp}(-h\nu /kT)`$ with $`\rho \propto \alpha ^{-1}M_{\mathrm{BH}}^{-1}\dot{M}r^{-3/2+p}`$, originates at larger radii where the density is the highest. The temperature profile in an ADAF with winds is virtually constant with radius, with $`kT\sim 100`$–$`200\mathrm{keV}`$ for $`r<`$ a few hundred Schwarzschild radii, and tracks the virial temperature in the outer regions. Even in the presence of a very strong wind the bremsstrahlung spectrum would not be highly suppressed up to energies $`kT<10\mathrm{keV}`$, as the minimum temperature in the outer region of the flow is $`10^{12}/r_{\mathrm{out}}`$ K and the outer radius $`r_{\mathrm{out}}`$ is of the order of $`10^3`$–$`10^4`$ Schwarzschild radii. As discussed by Di Matteo et al. (1999b), the rates at which material is required to be fed from the hot interstellar medium to the outer regions of the accretion flows, to explain the luminosities of the observed hard, X-ray components, are consistent with the expected Bondi accretion estimates of $`0.1`$–$`1\mathrm{M}_{\odot }`$ yr<sup>-1</sup>. The emission from such a flow would be dominated by thermal bremsstrahlung at temperatures $`<100\mathrm{keV}`$ (as expected in the outer regions of the flows), resembling a hard ($`\mathrm{\Gamma }\sim 1.4`$) power law in the ($`1`$–$`10`$ keV) ASCA band (as required by the data). The differences in luminosity between the objects in our sample can then be ascribed to differences in the black hole masses and Bondi accretion rates, with the hard, power-law components in the central cluster galaxies being more luminous due to their higher density environments.
This is illustrated in Fig. 3 where we show the intrinsic ($`110`$ keV) luminosities of the hard, power-law components as a function of the bolometric luminosity of the X-ray gas within a radius of 10 kpc in the galaxies. A clear correlation between these quantities is observed, with NGC 4696, the dominant galaxy of the Centaurus Cluster and the galaxy with the most luminous hard, X-ray component, also exhibiting the largest X-ray gas luminosity within a radius of 10 kpc. Detailed modeling and discussion of these issues, and of the effects of non-thermal particle distributions in the winds and/or jets, are presented by Di Matteo et al. (1999b).
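Returning to the spectral point above, the following sketch illustrates why bremsstrahlung from such a flow mimics a hard power law in the ASCA band. It fits an effective photon index to a simplified photon spectrum, $`dN/dE\propto E^{-1}\mathrm{exp}(-E/kT)`$ with the Gaunt factor neglected; this is an illustration only, not the modeling of Di Matteo et al. (1999b):

```python
import numpy as np

# Effective 1-10 keV photon index of a simplified thermal bremsstrahlung
# spectrum, dN/dE ~ E^-1 * exp(-E/kT), from a log-log straight-line fit.
E = np.logspace(0.0, 1.0, 200)          # 1-10 keV

def effective_gamma(kT_keV):
    dn_de = E**-1.0 * np.exp(-E / kT_keV)
    slope = np.polyfit(np.log(E), np.log(dn_de), 1)[0]
    return -slope

for kT in (10.0, 50.0, 100.0):
    print(f"kT = {kT:5.1f} keV -> effective Gamma ~ {effective_gamma(kT):.2f}")
# Gives Gamma ~ 1.0-1.4 for kT ~ 10-100 keV with this crude form; the slowly
# varying Gaunt factor (ignored here) steepens the slope somewhat, towards the
# Gamma ~ 1.4 quoted in the text.
```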
### 4.2 Other sources of X-ray emission
Although the observed properties of the hard X-ray components can be accounted for by models of low-radiative efficiency accretion onto the central supermassive black holes in the galaxies, a variety of other sources (given the large field of view of the ASCA instruments) may contribute to the integrated X-ray spectra.
#### 4.2.1 Binary X-ray sources
Undoubtedly, some contribution towards the harder X-ray emission from the galaxies will be made by binary X-ray sources (e.g. Canizares, Fabbiano & Trinchieri 1987; Fabbiano 1989). Bright Galactic X-ray binaries and black-hole candidates exhibit persistent X-ray luminosities in the range $`10^{36}`$–$`10^{38}`$ $`\mathrm{erg}\mathrm{s}^{-1}`$ (e.g. White et al. 1988). The luminosities associated with the power-law components detected in the Virgo ellipticals could therefore be accounted for by a few $`10^2`$–$`10^4`$ such sources (or $`10^4`$–$`10^6`$ sources for the more luminous central cluster galaxies). The luminosities of the power-law components in NGC 1399, 4472, 4636 and 4649 are consistent with an extrapolation of the $`L_\mathrm{X}/L_B`$ relation determined for irregular and spiral galaxies (Fabbiano & Trinchieri 1985), in which the X-ray emission is dominated by discrete sources (binary X-ray sources, stars and central active nuclei). Thus, binary X-ray sources could contribute at a significant level (comparable to the accretion flows; see below) to the harder X-ray emission from these galaxies. For M87 and NGC 4696, however, the X-ray luminosities of the hard, power-law components are an order of magnitude larger than the values estimated from the $`L_\mathrm{X}/L_B`$ relation, based on their observed blue luminosities. Binary X-ray sources are therefore unlikely to contribute significantly to the hard, power-law components detected in M87 and NGC 4696. (The result for M87 is strongly supported by the ROSAT HRI observations discussed in Section 5.1).
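A back-of-the-envelope version of this counting argument, using the extreme power-law luminosities quoted in Section 3.3 and the quoted range of persistent binary luminosities:

```python
# Number of persistent X-ray binaries (L ~ 1e36-1e38 erg/s each) needed to
# supply the measured power-law luminosities (values from Section 3.3 / Table 4).
for name, L_pl in [("NGC 4649", 2.6e40), ("NGC 4696", 2.2e42)]:
    print(f"{name}: {L_pl / 1e38:.0f} - {L_pl / 1e36:.0f} sources")
# NGC 4649 needs a few hundred to a few 1e4 sources; NGC 4696 needs ~1e4-1e6.
```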
In addition, although individual binary sources can exhibit very hard X-ray spectra, a more typical spectrum in the $`2`$–$`10`$ keV band would have a photon index (using a simple power-law model) in the range $`\mathrm{\Gamma }=1.5`$–$`2.5`$ or, using a thermal bremsstrahlung model, a temperature, $`kT\sim 4`$–$`7`$ keV (e.g. Fabbiano et al. 1987; White et al. 1988; Makishima et al. 1989; Tanaka et al. 1996; Christian & Swank 1997), with little or no associated intrinsic absorption (Fabbiano et al. 1987). These power-law slopes are significantly steeper (or, equivalently, the bremsstrahlung temperatures are lower) than the observed values for the hard emission components detected in the present sample of galaxies. (We recall that the observed photon indices show no clear correlation with the luminosities of the hard, power-law components; Fig. 1). We have examined the effect of including a second power-law component, with a fixed photon index of $`\mathrm{\Gamma }=2.0`$, in the analysis of the ASCA data for NGC 4472 and 4636 with spectral model G. (We assume that the intrinsic absorption acting on both power-law components and the cooler plasma component is the same since the ASCA spectra cannot easily constrain more detailed models.) For both systems, we find that the fits are not significantly improved by the introduction of the steeper power-law components ($`\mathrm{\Delta }\chi ^2=0.0`$; see also Section 3.4), although the maximum (90 per cent confidence) $`2`$–$`10`$ keV fluxes associated with these components ($`2.0\times 10^{-13}`$ and $`3.3\times 10^{-13}`$ $`\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$) are approximately 30 and 60 per cent of the values listed in Table 4, respectively. With the steeper components included at their maximum allowed levels, the photon indices of the harder power-law components in NGC 4472 and 4636 are reduced to $`0.7_{-1.1}^{+0.6}`$ and $`0.9_{-0.6}^{+0.3}`$, respectively.
We note that the relatively low source fluxes and high background count rates at harder energies, together with the broad point spread function of the ASCA mirrors, prevent us from placing useful constraints, from the ASCA imaging data, on the relative contributions to the hard X-ray fluxes from central point sources and extended emission components in the galaxies. (For M87 and NGC 4696, the extended cluster emission dominates the X-ray spectra across essentially the entire ASCA energy band; Section 5.3). Future observations with the Chandra Observatory will provide an answer to this question.
We conclude that binary X-ray sources are unlikely to dominate the hard, power-law emission detected in M87 and NGC 4696, although they will contribute to the measured fluxes in the galaxies with weaker hard, X-ray components.
#### 4.2.2 Cosmic XRB fluctuations
It is important to consider the effects of fluctuations in the Cosmic XRB on the results, particularly given the close match between the weighted mean slope of the detected power-law components ($`\mathrm{\Gamma }\sim 1.2`$) and the XRB ($`\mathrm{\Gamma }=1.4`$). Following Barcons, Fabian & Carrera (1998), we have estimated the confusion noise, $`\sigma _\mathrm{c}`$ (the flux equivalent to a $`1\sigma `$ variation in the XRB intensity histogram) for the ASCA observations. Assuming $`\sigma _\mathrm{c}=2.0\times 10^{-12}\mathrm{\Omega }^{2/3}`$, where $`\mathrm{\Omega }`$ is the beam area of the ASCA SIS spectra in degree<sup>2</sup>, and normalizing to the GINGA observations of Butcher et al. (1997), we obtain the confusion limits listed in column 6 of Table 4. For the strongest sources (M87 and NGC 4696) the detected fluxes are much larger (a factor $`40`$–$`60`$) than the confusion limits. For even the weakest sources (e.g. NGC 4636), the detected fluxes are $`4`$–$`5`$ times larger than the confusion limits. We also recall that the use of an independent on-chip background-subtraction method in the analysis of the ASCA observations made in 4-CCD mode (section 2) provides very similar results on the power-law components.
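Evaluating this prescription for a representative extraction region (the 4-arcmin radius used below is an illustrative assumption, not a value from Table 2):

```python
import math

# Confusion noise sigma_c = 2.0e-12 * Omega^(2/3) erg/cm^2/s, Omega in deg^2.
r_arcmin = 4.0                                  # assumed extraction radius
omega = math.pi * (r_arcmin / 60.0)**2          # beam area in deg^2
sigma_c = 2.0e-12 * omega**(2.0 / 3.0)

print(f"Omega = {omega:.3f} deg^2 -> sigma_c ~ {sigma_c:.1e} erg/cm^2/s")
# ~1e-13 erg/cm^2/s, comfortably below the detected 2-10 keV fluxes of
# ~6e-13 to ~9e-12 erg/cm^2/s quoted in Section 3.3.
```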
#### 4.2.3 Cosmic ray electrons and inverse Compton emission
A third mechanism that could contribute to the detected X-ray fluxes is inverse Compton emission due to primary cosmic ray electrons in the intracluster medium. Detailed models (e.g. Sarazin 1999) show that for steady particle injection (with a given power-law distribution) the inverse Compton spectra relax into a steady-state form. Although these models predict ubiquitous EUV and soft X-ray emission in clusters, due to electrons with $`\gamma \sim 300`$ (which have the longest loss times and could reasonably be injected since $`z<1`$), at harder energies their contribution is negligible. Significant hard X-ray emission ($`150\mathrm{keV}`$) can only be produced by electrons with $`\gamma \sim 10^3`$–$`10^4`$ i.e. by particles with rather short lifetimes (as set by inverse Compton losses; $`t_{\mathrm{loss}}\sim 10^9\mathrm{yr}`$) and would only be present in clusters in which substantial particle injection has occurred since $`z<0.1`$. Assuming that particles are accelerated in shocks in the intracluster medium, one would only expect diffuse hard X-ray emission in clusters undergoing, or having recently experienced, a major merger event. The Virgo cluster does not show evidence for recent merger activity in its central regions (Allen et al. 1999). In addition, even in cases where such particle injection does occur, the expected steady state photon index where inverse Compton losses dominate would be $`\mathrm{\Gamma }\sim 2.1`$, significantly steeper than the observed slopes of the power-law components.
Inverse Compton emission from the radio lobes is a further possible source of power-law X-ray emission from the galaxies. Such emission is normally very weak but has been detected from the hot-spots of Cygnus-A (Harris, Carilli & Perley 1994; Reynolds & Fabian 1996) and Fornax-A (Feigelson et al. 1995; Kaneda et al. 1995). The photon index of the resulting X-ray emission should have a photon index similar to, or steeper than, that of the radio emission which, for the radio lobes, will be significantly steeper than $`\mathrm{\Gamma }1.2`$. Thus, although inverse Compton emission is expected at some level in all of the systems, it is unlikely to contribute significantly to the hard, X-ray components reported here.
#### 4.2.4 X-ray ‘reflection’ from cold material
A final possibility for obtaining a relatively flat photon index in the ASCA band is the situation where the observed X-ray flux is dominated by photons Compton scattered off cold, optically-thick material close to the central X-ray source (e.g. Lightman & White 1988; George & Fabian 1991; Matt, Perola & Piro 1991. This situation also requires that the primary X-ray source is heavily obscured, a possibility argued against in Section 4.1). The scattered X-ray spectrum will include absorption and emission features due to various elements in the scattering medium and in particular should exhibit a strong, fluorescent Fe-K emission line at 6.4 keV. ASCA observations of the Circinus galaxy (Matt et al. 1996) and NGC 6552 (Reynolds et al. 1994) exhibit such ‘reflection-dominated’ spectra with strong (redshifted) 6.4 keV emission lines with equivalent widths of $`2`$ keV. We have searched for the presence of intrinsic 6.4keV emission lines in the ASCA spectra for the galaxies. In all cases we find no improvement to the fits obtained with spectral model G on the introduction of a narrow 6.4 keV emission line ($`\mathrm{\Delta }\chi ^2=0.0`$), and are able to place 90 per cent confidence limits ($`\mathrm{\Delta }\chi ^2=2.71`$) on the maximum equivalent widths (relative to the power-law continua) of any such lines as listed in Table 6. We conclude that the flat, power-law components detected in the elliptical galaxies are unlikely to be due to this process.
### 4.3 Summary of the results on the origin of the hard X-ray components
We have argued that the hard ($`\mathrm{\Gamma }1.2`$) power-law, X-ray components detected in the ASCA spectra are likely to be due to bremsstrahlung emission from low-radiative efficiency accretion flows onto the central supermassive black holes, coupled with winds/outflows. This is a very different situation from Seyfert galaxies, where the power-law X-ray emission is normally attributed to thermal inverse Compton scattering of the soft, disk radiation field.
We have examined a range of other mechanisms that could contribute towards the detected X-ray fluxes. Some contribution towards the harder X-ray emission could be made by the jets in these sources, although the absence of a clear correlation between the 5 GHz radio and hard X-ray luminosities suggests that the jet emission does not dominate the hard, power-law flux. The close agreement between the ASCA and ROSAT (Section 5.1) X-ray fluxes for M87, and the IRAS infrared measurements for the galaxies, argue strongly against the observed flat photon indices and low luminosities being simply due to intrinsic absorption acting on canonical Seyfert-like spectra (due to accretion at the Bondi rates with a standard radiative efficiency). Some level of intrinsic absorption is possible, however, and could explain the very flat power-law components ($`\mathrm{\Gamma }<1`$) observed in some objects (further flattening the spectra of the low-radiative efficiency accretion flows, which should have an intrinsic photon index in the ASCA band of $`\mathrm{\Gamma }1.4`$). Limits on the 6.4 keV emission-line fluxes rule out a significant contribution to the detected hard, power-law components from X-rays Compton scattered off cold, optically-thick matter surrounding the central X-ray sources.
Binary X-ray sources are likely to contribute to the harder X-ray emission from the galaxies with lower-luminosity power-law components (the Virgo ellipticals and NGC 1399). For M87 and NGC 4696, however, the measured power-law luminosities are an order of magnitude greater than the values predicted from the $`L_\mathrm{X}/L_\mathrm{B}`$ relation for spiral and irregular galaxies. Moreover, the typical X-ray spectra of X-ray binaries and black hole candidates are significantly steeper than the observed hard, power-law components. Inverse Compton emission from the radio lobes in the galaxies and primary cosmic-ray electrons in the intracluster medium should not contribute significantly to the detected power-law fluxes and, where present, will typically produce a photon index in the ASCA band significantly steeper than the measured values. Fluctuations in the Cosmic XRB should not significantly effect our results.
Future X-ray observations at high spatial resolution made with the Chandra Observatory will be crucial in establishing the contributions from the various emission mechanisms outlined above and unambiguously identifying the origin of the hard X-ray components. Deep X-ray spectroscopy with XMM and ASTRO-E will allow us to examine variability in the X-ray emission and search for broad, iron emission features associated with the power-law components. If the hard X-ray emission indeed originates from bremsstrahlung processes in the outer regions of low radiative-efficiency accretion flows, then the variability timescales observed should be longer than those associated with typical Seyfert nuclei. The detection of broad iron emission features would argue against the simple, low-radiative efficiency accretion models discussed here (Section 4.1), and require the presence of significant amounts of cold, reflecting material close to the central black holes (as in standard thin-disk accretion models).
## 5 Previous constraints on nuclear X-ray emission from M87
### 5.1 ROSAT HRI observations
Reynolds et al. (1996) and Harris et al. (1997, 1998) discuss ROSAT High Resolution Imager (HRI) observations of M87, and measure a time-averaged HRI count rate associated with the active nucleus of the source of $`0.12`$ count s<sup>-1</sup>. We have used the XSPEC software, incorporating the 1990 December version of the HRI response matrix, to determine the flux of the power-law component required to produce the count rate observed in the $`0.1`$–$`2.4`$ keV ROSAT HRI band. For an assumed power-law emission model with a photon index, $`\mathrm{\Gamma }=1.4`$, absorbed by a Galactic column density, $`N_\mathrm{H}=2.5\times 10^{20}`$ atom cm<sup>-2</sup>, we find that a normalization, $`A_1\sim 1.5\times 10^{-3}`$ photon keV<sup>-1</sup>cm<sup>-2</sup>s<sup>-1</sup>, is required to account for the observed HRI count rate. This is in excellent agreement with the value measured from the ASCA spectra of $`A_1=1.4_{-0.6}^{+1.6}\times 10^{-3}`$ photon keV<sup>-1</sup>cm<sup>-2</sup>s<sup>-1</sup>. (We note that accounting for the intrinsic absorption also detected in the ASCA spectra would imply a larger intrinsic normalization for the power-law component of $`A_1\sim 5.0\times 10^{-3}`$ photon keV<sup>-1</sup>cm<sup>-2</sup>s<sup>-1</sup>. However, this corrected value assumes that the absorption is due to cold gas in a uniform screen in front of the nucleus, which may not be the case e.g. Allen & Fabian 1997; Reynolds 1997).
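The full conversion above requires folding the model through the HRI response; purely for orientation, the sketch below gives the unabsorbed 0.1–2.4 keV energy flux implied by a power law with the quoted index and normalization (this is not the response-folded calculation described in the text):

```python
# Unabsorbed 0.1-2.4 keV flux of a power law dN/dE = A_1 * E^-Gamma
# (A_1 in photons/keV/cm^2/s at 1 keV), integrated analytically.
keV_to_erg = 1.602e-9
A_1, gamma = 1.5e-3, 1.4
E_lo, E_hi = 0.1, 2.4      # keV

flux_keV = A_1 * (E_hi**(2.0 - gamma) - E_lo**(2.0 - gamma)) / (2.0 - gamma)
print(f"F(0.1-2.4 keV) ~ {flux_keV * keV_to_erg:.1e} erg/cm^2/s")   # ~6e-12
```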
### 5.2 Previous analyses of the ASCA observations
Two previous studies of M87 have also reported results based on the same ASCA observations discussed in this paper. Matsumoto et al. (1996) present results from an analysis using a simple two-temperature plasma plus power-law spectral model, with a fixed power-law photon index of $`\mathrm{\Gamma }=1.7`$, from which they determine a flux associated with the nucleus (in the $`0.5`$–$`4.5`$ keV band) of $`1.1\pm 0.2\times 10^{-11}`$ $`\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$. This flux is $`\sim 2.5`$ times larger than the value implied by our best-fitting spectral model (model G). Reynolds et al. (1996) also present results from a study of the same ASCA observations, in which they did not detect significant emission associated with the nucleus, although these authors did not account for the possibility of variable element abundance ratios in their analysis, which provides a crucial step in the modeling (see also Allen et al. 1999). To further illustrate this, we have repeated our analysis of the power-law component in M87 with the element abundances linked to vary in the same ratio relative to their solar values (c.f. spectral model D). The best-fit $`\chi ^2`$ value obtained with this more simple model, $`\chi ^2=1888`$, is substantially worse than the value, $`\chi ^2=1468`$, obtained with spectral model G. The best-fit parameter values ($`\mathrm{\Gamma }=1.70_{-0.31}^{+0.27}`$ and $`A_1=1.9_{-0.9}^{+1.5}\times 10^{-3}`$ photon keV<sup>-1</sup>cm<sup>-2</sup>s<sup>-1</sup>) are also (slightly) offset from the results obtained with model G. Such differences demonstrate the need to fully account for the complex temperature structure of the galaxy/cluster plasma and possible variations in individual element abundance ratios when attempting to constrain the power-law emission from such sources.
### 5.3 Rossi X-ray Timing Explorer observations
Reynolds et al. (1999) present further constraints on power-law emission from M87 using observations made with the Proportional Counter Array (PCA) on the Rossi X-ray Timing Explorer (RXTE). The data presented by these authors cover the $`3`$–$`15`$ keV range and constrain the $`2`$–$`10`$ keV flux of the power-law component to be $`<4.1\times 10^{-12}`$ $`\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$, for an assumed photon index $`\mathrm{\Gamma }=2.0`$. This upper limit is lower than the measured value of $`8.7_{-1.6}^{+1.7}\times 10^{-12}`$ $`\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ from the ASCA data (Table 4). The ASCA data also require a significantly flatter photon index than assumed in the RXTE study.
The best-fitting spectral model for M87 determined from the ASCA analysis and plotted in Fig. 2 shows that the extended cluster emission dominates over the power-law component in the ASCA spectra across virtually the entire energy range of the detectors. Only at energies $`E>8`$–$`9`$ keV does the power-law component dominate the detected flux. The much larger field of view of the PCA collimator ($`\sim 1`$ degree<sup>2</sup> FWHM; Jahoda et al. 1996) results in a larger fraction of the total cluster flux ($`\sim 5`$ times more) being included in the detected spectrum (the cluster emission extends to a radius of at least 4 degrees; Schindler, Binggeli & Böhringer 1999). Thus, the power-law component, as determined from the ASCA data, will not dominate over the cluster emission in the PCA spectrum below an energy of $`12`$–$`13`$ keV.
Modelling the extended plasma emission in a cluster as cool as the Virgo Cluster (Table 3) with instruments like the PCA, restricted to the (relatively hard) $`3-15`$ keV energy range, is difficult. The PCA spectra cannot reliably constrain the multiphase nature of the gas in the cluster core which, as this study has shown, can be crucial in constraining the properties of the power-law emission. However, although such considerations may be relevant in interpreting the RXTE results, it remains plausible that the nuclear emission from M87 may simply have varied between the ASCA observations in 1993 June and the PCA observations made in 1998 January (Harris et al. 1997, 1998; Tsvetanov et al. 1998).
### 5.4 ROSAT HRI flux-limits on other sources in the sample
The other galaxies discussed in this paper have not previously been studied in the same detail as M87 and no detections of point-source X-ray emission associated with their nuclei have been reported. However, Di Matteo et al. (1999a) present limits on possible nuclear X-ray emission for the three Virgo ellipticals, based on ROSAT HRI imaging data. Their limits, which are defined at an energy of 1 keV, are $`\nu F_\nu <6.8\times 10^{-14}`$ $`\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ (NGC 4472), $`\nu F_\nu <7.8\times 10^{-14}`$ $`\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ (NGC 4636) and $`\nu F_\nu <1.5\times 10^{-13}`$ $`\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ (NGC 4649). These limits compare to the measured ASCA fluxes, quoted in the same units, of $`\nu F_\nu \simeq 6.1\times 10^{-14}`$ $`\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ (NGC 4472), $`\nu F_\nu \simeq 1.6\times 10^{-13}`$ $`\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ (NGC 4636), and $`\nu F_\nu \simeq 3.5\times 10^{-14}`$ $`\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ (NGC 4649).
The ASCA measurements for NGC 4472 and 4649 are within the Di Matteo et al. (1999a) limits. For NGC 4636, however, the ASCA measurement is approximately twice the ROSAT limit. The Di Matteo et al. (1999a) limits are determined by fitting an analytic King model to the observed X-ray surface brightness profiles and determining the maximum additional contribution that can be made by a central point source. However, these limits are sensitive to complexities in the observed surface brightness profiles and, especially for NGC 4636, which exhibits a complex X-ray morphology in its central regions, the ROSAT limits should be viewed with caution.
## 6 The element abundances in NGC 4636
In contrast to the ASCA results for M87 and NGC 4696 (Allen et al. 1999), for which the introduction of individual element abundances as free parameters in the fits leads to a more significant improvement in the statistical quality of the fits than the introduction of the power-law component, for NGC 4636 the introduction of the power-law emission component provides by far the most significant improvement over the basic two-temperature model. Thus, in determining our results on the abundances of the individual elements in NGC 4636, we started from the two-temperature model with the power-law component included, which provides a $`\chi ^2`$ of 722.6 for 230 degrees of freedom. We then systematically determined the statistical improvements to the fit obtained by allowing the abundance of each element, in turn, to be a free parameter in the analysis. Having identified the element providing the most significant improvement, the abundance of that element was maintained as a free parameter, and the process repeated to determine the element providing the next most significant improvement.
In agreement with the results for M87 and NGC 4696, we find that the most significant improvements in the fit to the NGC 4636 data are obtained by including the abundances of Si, Mg and S as free fit parameters (the measured $`\chi ^2`$ value is reduced from 722.6 to 475.0, for the introduction of only three extra fit parameters). At this point, including the abundances of further elements as fit parameters did not lead to such significant improvements and, due to the already complex nature of the best-fit model, we forced the abundances of all other elements to vary with the same ratio relative to their solar values (effectively tying the abundances to that of iron, the element to which the ASCA data are most sensitive). We note, however, that including the abundances of Na and O as further free fit parameters did lead to formally significant improvements, with reductions in $`\chi ^2`$ of $`\sim 20`$ and 30, respectively. However, due to the systematic uncertainties in the NGC 4636 spectra at low energies (Section 3.3), which affect the measured O abundance, and the fact that the Na abundance fits to an unphysically high value ($`\sim 5`$ times solar), the metallicities of these elements were not included as free parameters in our final analysis. The measured element abundances for NGC 4636, as a fraction of their solar photospheric values defined by Anders & Grevesse (1989), are $`Z_{\mathrm{Fe}}=0.62_{-0.09}^{+0.13}`$, $`Z_{\mathrm{Mg}}=0.93_{-0.15}^{+0.19}`$, $`Z_{\mathrm{Si}}=0.96_{-0.12}^{+0.18}`$ and $`Z_\mathrm{S}=1.49_{-0.22}^{+0.27}`$. These results are in reasonable agreement with those of Matsushita et al. (1997) and Buote (1999).
Scaling our measured abundance ratios to the meteoritic abundance scale of Anders & Grevesse (1989) we determine \[Mg/Fe\] $`\simeq 0.00`$, \[Si/Fe\] $`\simeq 0.02`$ and \[S/Fe\] $`\simeq 0.16`$. Comparing these values with the supernova yield models of Nagataki & Sato (1998), our observed \[Si/Fe\] ratio implies a mass fraction of the iron enrichment due to type Ia supernovae, $`M_{\mathrm{Fe},\mathrm{SNIa}}/M_{\mathrm{Fe},\mathrm{total}}`$, in the range $`0.55-0.9`$ (where the limits cover the full range of models examined by these authors). For spherical type II supernovae, a mass fraction in the range $`M_{\mathrm{Fe},\mathrm{SNIa}}/M_{\mathrm{Fe},\mathrm{total}}\simeq 0.7-0.85`$ is preferred. Further comparison of our \[Si/Fe\] constraint with the supernova models discussed by Gibson et al. (1997) also requires $`M_{\mathrm{Fe},\mathrm{SNIa}}/M_{\mathrm{Fe},\mathrm{total}}\simeq 0.5-0.7`$. However, the observed \[S/Fe\] and \[Mg/Fe\] ratios favour a mass fraction due to type Ia supernovae of $`<0.5`$ (Gibson et al. 1997).
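The scale conversion behind these ratios can be made explicit with a short sketch (ours, not part of the original analysis). The solar abundance entries below are the Anders & Grevesse (1989) photospheric and meteoritic values as we recall them, and should be checked against that paper before use:

```python
import math

# 12 + log10(n_X/n_H) solar abundances (assumed Anders & Grevesse 1989 values)
phot = {"Fe": 7.67, "Mg": 7.58, "Si": 7.55, "S": 7.21}
met  = {"Fe": 7.51, "Mg": 7.58, "Si": 7.55, "S": 7.27}

# measured NGC 4636 abundances relative to the photospheric scale (Section 6)
Z = {"Fe": 0.62, "Mg": 0.93, "Si": 0.96, "S": 1.49}

def x_over_fe(elem):
    # rescale each abundance to the meteoritic solar values, then form [X/Fe]
    zx = Z[elem] * 10 ** (phot[elem] - met[elem])
    zfe = Z["Fe"] * 10 ** (phot["Fe"] - met["Fe"])
    return math.log10(zx / zfe)

for e in ("Mg", "Si", "S"):
    print(f"[{e}/Fe] = {x_over_fe(e):+.2f}")   # roughly +0.02, +0.03, +0.16
```

With these inputs the ratios come out close to the values quoted above, the shift relative to a naive photospheric-scale ratio being driven almost entirely by the difference between the photospheric and meteoritic solar Fe abundances.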
## 7 Conclusions
We have presented detections of hard X-ray emission components in the spectra of six nearby giant elliptical galaxies observed with ASCA. The galaxies exhibit clear, dynamical evidence for supermassive ($`10^8`$ to a few $`\times 10^9`$ $`\mathrm{M}_{\odot }`$) black holes in their nuclei. The hard X-ray emission can be parameterized by power-law models with photon indices, $`\mathrm{\Gamma }=0.6-1.5`$ (mean value 1.2), and luminosities, $`L_{\mathrm{X},1-10}\simeq 2.6\times 10^{40}-2.1\times 10^{42}`$ $`\mathrm{erg}\mathrm{s}^{-1}`$, or thermal bremsstrahlung models with electron temperatures, $`kT>10`$ keV. Such properties identify these galaxies as a new class of accreting X-ray source, with X-ray spectra significantly harder than those of Seyfert nuclei, typical binary X-ray sources and low-luminosity AGN, and bolometric luminosities comparatively dominated by their X-ray emission. We have argued that the hard X-ray emission is likely to be due to accretion onto the central, supermassive black holes, via low-radiative efficiency flows, coupled with strong outflows. Within such models, the hard X-ray emission originates from bremsstrahlung processes in the radiatively-dominant, outer regions of the accretion flows. (Detailed modeling and discussion of these issues are presented by Di Matteo et al. 1999b).
For the case of M87, the flux of the hard component was shown to be in good agreement with the nuclear X-ray flux determined from earlier ROSAT HRI observations, which were able to resolve knot A in the jet from the nuclear emission component. We have stressed the importance of accounting for the complex temperature structure, intrinsic absorption and variable element abundance ratios in the analysis of the ASCA spectra. We confirmed results showing that the application of such models leads to measurements of approximately solar emission-weighted metallicities for the X-ray gas in the galaxies. We also presented detailed results on the individual element abundances in NGC 4636.
Future observations at high spatial resolution with the Chandra Observatory will be crucial in establishing the contributions from the various X-ray emission mechanisms present in elliptical galaxies and unambiguously identifying the origin of the hard X-ray components. Deep X-ray spectroscopy with XMM and ASTRO-E will allow us to examine variability in the X-ray emission (which should be slower in sources where the X-ray emission originates from the outer regions of low radiative-efficiency accretion flows than in typical Seyfert nuclei) and to search for broad, iron emission features associated with the power-law components. The detection of such broad emission features would argue against the simple, low-radiative efficiency accretion models discussed here, and require the presence of significant amounts of cold, reflecting material close to the central black holes.
The discovery of hard X-ray emission components in the spectra of nearby elliptical galaxies containing supermassive black holes provides important new constraints on the accretion processes in these systems. Our results are relevant to understanding the demise of quasars (which could plausibly be due to a change in the dominant accretion mode in ellipticals over the history of the Universe) and, ultimately, the origin of the hard ($`\mathrm{\Gamma }\simeq 1.4`$) Cosmic X-ray Background (e.g. Di Matteo & Fabian 1997b). These issues, and others, will be explored in future papers (Di Matteo et al. 1999b; Di Matteo & Allen 1999).
## Acknowledgements
We thank Ramesh Narayan, Eliot Quataert and Chris Reynolds for useful discussions. TDM acknowledges support for this work provided by NASA through AXAF Fellowship grant number PF8-10005 awarded by the AXAF Science Center, which is operated by the Smithsonian Astrophysical Observatory for NASA under contract NAS8-39073.
# NICMOS OBSERVATIONS OF THE PRE-MAIN-SEQUENCE PLANETARY DEBRIS SYSTEM HD 98800<sup>1</sup>
<sup>1</sup>Based on observations with the NASA/ESA Hubble Space Telescope obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
## 1 INTRODUCTION
Long known as a visual double with less than 1″ separation, HD 98800 (SAO 179815; IRAS P11195-2430) was found by IRAS to contain the brightest planetary debris system (PDS) in the sky. Now we find that in terms of dimensions, configuration, temperature, and likely origin, this very young PDS bears remarkable similarity to the zodiacal dust bands in our solar system formed and maintained for four billion years by the asteroid families, a phenomenon also discovered by IRAS (Low et al. 1984). Recently identified as one of 11 known members of the nearby TW Hydrae Association (Kastner et al. 1997; Webb et al. 1999), the HD 98800 (TWA 4) system is comprised of two similar K dwarfs that have not yet reached the main sequence. Torres et al. (1995) reported that both stars are spectroscopic binaries with periods of 262 (Aa+Ab) and 315 days (Ba+Bb). At a distance of $`46.7\pm 6`$ pc, measured by the Hipparcos satellite, the two brightest components of HD 98800 are well resolved by HST from 0.4 to 2$`\mu `$m. See Soderblom et al. (1998) for BVI photometry of the two major components using WFPC2 on HST, and for estimates of their masses ($`\sim `$1 M<sub>⊙</sub>) and age ($`<`$10 Myr). From the ground at 4.7 and 9.8$`\mu `$m, Gehrz et al. (1999) showed that the visual companion (component B) is actually at the center of the PDS with only a small amount of dust possible around star A. Star B lies north of star A by about 0.8 arcsec. Separations of Ab from Aa and of Bb from Ba are of order 1 AU ($`\sim `$0.02 arcsec), and the luminosity of Bb is thought to be greater than that of Ab.
Using the sub-arcsecond resolution and dynamic range in excess of one million afforded by the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) onboard HST, we have resolved the two primary components in five bands from 0.95 to 1.9$`\mu `$m, spanning the peak of their spectral energy distributions. Our objective was to obtain precise relative and absolute photometry of the two stellar components and to search for a halo of scattered or reflected light from the PDS, realizing that all other resolved planetary debris systems scatter and emit about equally. However, based on our predicted inner diameter of $`<10`$ AU and small cross-sectional area for the PDS, we cannot be confident of separating the reflected light from direct starlight.
## 2 NICMOS OBSERVATIONS AND REDUCTIONS
Table 1 summarizes key observational parameters of the two HST orbits (GTO 7232) devoted to these measurements. Spanning 306 days, the two orbits were aimed at obtaining high fidelity “sub-stepped” images using four narrow band filters and one centrally located medium band filter. Four images were planned at each epoch and at each wavelength (Table 1) using a spiral dither pattern (Mackenty et al. 1997) to allow for bad pixel replacement and to improve the spatial sampling (from $`2`$ AU to $`1`$ AU). Unfortunately, in the first orbit only half of the images were obtained due to a pointing error. Care was taken to correct for effects of non-linear response in the detectors, including persistence, for cosmic ray events, and for flat fielding of the images. In-flight flat-field images and the HD 98800 raw images were reduced with in-flight dark frames.
Table 2 summarizes flux densities and their ratios derived from the complete set of 18 images. Our absolute calibrations of the final background subtracted images are based on preliminary photometric calibrations kindly provided by M. Rieke (1999) prior to their publication. Flux densities for magnitude zero stars, accurate to $`\pm 3\%`$, are listed in Table 2, column 6, to facilitate conversion to magnitudes and for comparisons with other calibrations. When the images are combined, peak signal-to-noise is high (S/N $`>500`$), and flat-fields good only to $`0.3\%`$ dominate the relative photometric errors, while uncertainties in the absolute flux densities are dominated by calibration errors. For completeness we have included the flux densities derived from the B, V and “I” magnitudes reported by Soderblom et al. (1998), and the ratios (B/A) from our reprocessing of their publicly available images.
Precise NICMOS ratios were determined from each calibrated image using the software package IDP3 (Lytle et al. 1999) as follows: a scaled copy of the image was shifted to center star A precisely over star B and the two images were subtracted after varying the scale factor to minimize the residual. Then the process was repeated in reverse order and, finally, the image of star B was divided by that of star A. The residuals were used to determine the error. Subtraction of a PSF-star gave consistent, but noisier results. A similar procedure was used to derive the listed ratios from the WFPC2 images.
## 3 DISCUSSION
Figure (1) shows the spectral energy distributions of components A and B based on WFPC2 and NICMOS data and the results from Gehrz et al. (1999). For the PDS we include color-corrected Faint Source Catalog flux densities from IRAS and measurements at sub-mm wavelengths (see Table 2). Shown in Figure (1a) are blackbody fits to each stellar component and to the PDS. Figure (1b) shows the percentage deviations of the observations of the two stars from the Planck Law. The corresponding blackbody temperatures (T<sub>BB</sub>) for the two primary stars and for the PDS are included in Table 3, along with errors estimated from the fitting process and from the flux density errors. For comparison, we list effective temperatures for the stars based on the B-V colors from Soderblom et al. (1998) using the Bessell (1979) color-temperature relation, but treating both A and B as single stars. Also, we have attempted to fit these data using model SEDs based on ground-based photometry of K dwarfs of the same spectral types. Unexpectedly, these unpublished models of main sequence K-stars (M. Meyer 1999) do not represent the measured flux densities of A and B short of 2$`\mu `$m as well as do the simple blackbody models. However, beyond 2$`\mu `$m the SEDs of these stars appear to drop below the simple blackbody model. The luminosities listed in Table 3, derived by integration of the blackbody models, may be high by as much as 3$`\%`$ due to this effect. Because the B-V colors are close to normal for stars with spectral types K5 and K7, we do not include an interstellar extinction correction. Neglecting the uncertainty of the distance, we believe the errors of the luminosity determinations are of order $`\pm 5\%`$. Taking our measured luminosities and treating our blackbody temperatures as good approximations to the true effective temperatures, current PMS models (http://www-astro.phast.umass.edu/data/tracks.html) give plausible values for mass, but are in conflict with other age indicators.
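The blackbody fits amount to a least-squares fit of a scaled Planck function to the measured flux densities. The sketch below (ours; the wavelengths and flux densities are placeholders, not the Table 2 values) illustrates the procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck_fnu(lam_um, T, scale):
    """Scaled blackbody flux density (arbitrary units) versus wavelength in microns."""
    nu = c / (lam_um * 1e-6)
    bnu = 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))
    return scale * bnu

# placeholder photometry: wavelength (micron), flux density (arbitrary units)
lam = np.array([0.55, 0.80, 0.95, 1.10, 1.45, 1.90])
fnu = np.array([0.30, 0.85, 1.10, 1.35, 1.50, 1.45])

popt, _ = curve_fit(planck_fnu, lam, fnu, p0=(4000.0, 1e8))
print(f"best-fit T_BB = {popt[0]:.0f} K")   # temperature for the placeholder data
```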
Using the luminosity and temperature of components A and B their radii were calculated. In neglecting their duplicity we note that this oversimplification is more significant for B than for A (see Soderblom et al. 1996). The values in Table 3 show that the cooler star, B, is slightly larger than star A, a result well within the precision of the relative measurements involved. Pre-main sequence stars may well have this property. However, when both components of star B are considered, they each can have radii smaller than that of star Aa (Ab is almost negligible), and as previously noted, some portion of the power emitted by Ba and Bb will be reflected by the PDS adding to the star’s apparent luminosity. Figure 1 shows that there are slight differences between the two primary components when compared in detail relative to their respective blackbodies. Also, star A varies at all wavelengths by as much as a few percent and significant coronal activity has been reported (e.g. Fekel & Bopp 1993; Henry & Hall 1994).
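The radii follow from the Stefan-Boltzmann law, $`R=[L/(4\pi \sigma T^4)]^{1/2}`$. A one-line check (ours; the luminosity and temperature are illustrative round numbers, not the Table 3 entries):

```python
import math

sigma, L_sun, R_sun = 5.670e-8, 3.828e26, 6.957e8   # SI units

def radius(L_over_Lsun, T_K):
    return math.sqrt(L_over_Lsun * L_sun / (4 * math.pi * sigma * T_K**4))

print(radius(0.6, 4200.0) / R_sun)   # roughly 1.5 R_sun for these assumed values
```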
At 0.95$`\mu `$m, where our resolution is highest, we have been unable to detect starlight reflected by the PDS. From the residuals of the flux-scaled PSF subtraction, we place a tentative upper limit of $`<6\%`$ for the reflected light relative to the direct emission from Ba+Bb. These residuals have been compared to quantitative predictions based on a simplified geometrical model of the IR source convolved with star A (which serves as the PSF). From Table 3 we see that the power emitted in our direction by the PDS divided by the total power emitted by the Ba+Bb systemic components and reflected from the PDS is 0.19. Therefore, at 0.95$`\mu `$m we have placed an upper limit on the albedo of the reflecting material of 0.3, well within the range expected for such material. Asteroids have albedos that are generally lower than 0.3 \[see results from IRAS reported by Tedesco et al. (1989)\]. Note also that we have direct observational evidence from the work of Skinner, Barlow & Justtanont (1992) that the emitting material contains silicates. However, in our filters we see no indication of coloration of the stellar emission from component B as might be caused by reflection from the PDS. This suggests that the albedo is low at all bands from 0.4 to 2$`\mu `$m.
Next, we must explain how a circumstellar envelope can form and be maintained over millions of years with geometry, opacity, and stability sufficient to produce the system that we observe. When Zuckerman & Becklin (1993) reported the extraordinarily high value of L<sub>IR</sub>/L<sub>bol</sub> for HD 98800, they pointed out that a circumstellar dust cloud thick enough to absorb 15$`\%`$ or more of the stellar emission would quickly evolve into a thin disk. Consider the case of a failed terrestrial planet of sufficient mass to form dense “dust bands” which overlap and fill in a belt subtending 20$`\%`$ of the celestial sphere around the stars. The required range of orbital inclinations is $`-12^{\circ }`$ to $`+12^{\circ }`$. In the solar system the dust bands lie 10° above and below the zodiacal plane, and are constantly replenished by collisional erosion of asteroids in those same orbits. The much larger quantity of dust in the HD 98800B planetary debris system follows from its recent formation.
Just as we calculated the radii of the stars, we have also calculated an “equivalent” radius for the material orbiting around star B, even though in projection the IR source, clearly, is not circular in shape. The result listed in Table 3, 2 AU, is consistent with our proposed geometry of the PDS, and constrains that portion of the belt that is visible directly. The actual radii of the inner debris orbits, about 4.5 AU, can be predicted because the material must be in radiative equilibrium with the measured emission of the two central stars. In the solar system, where the corresponding temperatures are 200 to 250 K, the dust is optically thin, with an optical depth of order $`10^{-8}`$, and its emission follows a blackbody at least out to 100$`\mu `$m.
In HD 98800B the dust opacity remains high from 7 to at least 1000$`\mu `$m, but we find no indication of attenuation of light from star B. This implies that the orbital plane of the dust system is inclined to our line of sight by more than 12°. The most likely value is near 45° since the emitting area must equal that of the equivalent disk, $`\sim `$ 12 sq. AU. Because most of the 165 K emitting surface is obscured by colder particles in larger orbits, the actual ratio of IR to optical luminosity is probably higher than the observed ratio of 0.19 reported here. Clearly, the IR emission is not isotropic. Using the predicted 4.5 AU inner radius of the belt, an equivalent thickness of 1 mm, and terrestrial density, a rough estimate of the minimum mass is 0.6 M<sub>⊕</sub>.
In conclusion, we now have enough information about the stars in HD 98800 to test, and perhaps improve, models of PMS dwarfs. We can also construct plausible models of recently formed planetary debris systems with interesting similarities to our own planetary system. We note that as yet no other examples of very young systems of this type have been found, indicating that the lifetime of this optically thick phase is probably rather short. Future observations using adaptive optics on the largest telescopes and space missions such as NGST should resolve both the thermal and reflected components of the PDS around HD 98800B. Infrared surveys now in progress and in the planning stage with SIRTF should provide new and better means of locating planetary debris systems, thus providing an answer to the question of how often terrestrial planets are formed.
Valuable assistance from our colleagues, M. Meyer, M. Rieke, D. McCarthy and E. Becklin is much appreciated, and we thank A. Shultz, D. Golombek, P. Stanley, and I. Dashevsky for their help at STScI. This work is supported by NASA grant NAG5-3042 to the NICMOS instrument definition team.
## 1 Introduction
Among the many puzzles resolved by 2D conformal field theory is the appearance of rational critical exponents in models such as the 2D Ising and Potts models. The miracle is repeated when the models are coupled to 2D quantum gravity since, as was shown by KPZ and DDK , the dressing of the conformal weights by the Liouville field of 2D quantum gravity leads to a new set of exponents which are nonetheless still rational. The key formula in establishing the relation between the bare ($`\mathrm{\Delta }`$) and dressed ($`\stackrel{~}{\mathrm{\Delta }}`$) conformal weights
$$\stackrel{~}{\mathrm{\Delta }}=\frac{\sqrt{1-c+24\mathrm{\Delta }}-\sqrt{1-c}}{\sqrt{25-c}-\sqrt{1-c}}$$
(1)
depends only on the central charge $`c`$ of the matter. The net effect of the gravitational dressing for the minimal $`(p,q)`$ conformal models with $`c=1-6(p-q)^2/pq`$, where the primary scaling operators are labelled by two integers $`r,s`$ satisfying $`1\le r\le p-1`$, $`1\le s\le q-1`$, is to transmute the bare weights $`\mathrm{\Delta }_{r,s}=\mathrm{\Delta }_{p-r,q-s}=[(rq-sp)^2-(p-q)^2]/4pq`$ from the Kac table into
$$\stackrel{~}{\mathrm{\Delta }}_{r,s}=\frac{|rq-sp|-|p-q|}{p+q-|p-q|},$$
(2)
where $`|p-q|=1`$ for unitary models. The relation between $`\mathrm{\Delta }`$ and $`\stackrel{~}{\mathrm{\Delta }}`$ may be written as
$$\stackrel{~}{\mathrm{\Delta }}-\mathrm{\Delta }=-\frac{\xi ^2}{2}\stackrel{~}{\mathrm{\Delta }}(\stackrel{~}{\mathrm{\Delta }}-1),$$
(3)
where
$$\xi =\frac{1}{2\sqrt{3}}(\sqrt{25-c}-\sqrt{1-c}),$$
(4)
and is called the KPZ scaling relation.
The effect of coupling various statistical mechanical models to 2D gravity, such as the Ising and $`q\le 4`$ Potts models, can thus be calculated using the KPZ/DDK results. If we denote the critical temperature for the phase transition in these models by $`T_c`$ and the reduced temperature $`|T-T_c|/T_c`$ by $`t`$ then the critical exponents $`\alpha ,\beta `$ are defined in the standard manner as $`t\rightarrow 0`$ by
$`C_{\mathrm{sing}}`$ $`\sim `$ $`t^{-\alpha },`$
$`M`$ $`\sim `$ $`t^{\beta },`$ (5)
where $`C_{\mathrm{sing}}`$ is the singular part of the specific heat and $`M`$ is the magnetization. It is then possible to calculate $`\alpha `$ and $`\beta `$ using the conformal weights of the energy density operator $`\mathrm{\Delta }_ϵ`$ and spin operator $`\mathrm{\Delta }_\sigma `$ in both the dressed and undressed cases,
$`\alpha `$ $`=`$ $`{\displaystyle \frac{1-2\mathrm{\Delta }_ϵ}{1-\mathrm{\Delta }_ϵ}},`$
$`\beta `$ $`=`$ $`{\displaystyle \frac{\mathrm{\Delta }_\sigma }{1-\mathrm{\Delta }_ϵ}}.`$ (6)
The various scaling relations between the critical exponents then allow the determination of the complete set of exponents.
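As a quick numerical check of equations (1) and (6), the short fragment below (a sketch in Python, not part of the original derivation) dresses bare weights with an arbitrary central charge and converts them into exponents; for Ising matter dressed with its own $`c=1/2`$ it returns the familiar KPZ values.

```python
import math

def dressed(delta, c):
    """Gravitationally dressed conformal weight, equation (1)."""
    root = math.sqrt(1 - c)
    return (math.sqrt(1 - c + 24 * delta) - root) / (math.sqrt(25 - c) - root)

def exponents(d_eps, d_sigma):
    """alpha and beta from the dressed energy and spin weights, equation (6)."""
    return (1 - 2 * d_eps) / (1 - d_eps), d_sigma / (1 - d_eps)

d_eps, d_sigma = dressed(1 / 2, 1 / 2), dressed(1 / 16, 1 / 2)
print(d_eps, d_sigma)             # 2/3 and 1/6
print(exponents(d_eps, d_sigma))  # alpha = -1, beta = 1/2
# dressed(1/2, -39/10) -> 3/5, the value used in Section 3
```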
## 2 When the wrong $`c`$ can be right
The preceding derivation is quite natural when considering the models in a gravitational context. Since the matter is interacting with gravity, it is the central charge of the matter itself which gets fed into the KPZ formula and returns the new set of rational dressed conformal weights and consequently a new set of rational critical exponents. There are, however, circumstances in which one could conceive of coupling the conformal matter to graphs with the “wrong” central charge. The first of these is when one considers the matter living on a quenched ensemble of 2D gravity graphs, as was done in . In this case the interaction between the graphs and the matter is switched off and one is in effect looking at an ensemble with quenched connectivity disorder. This ensemble displays several interesting effects, including a softening of first-order phase transitions in $`q>4`$ Potts models to continuous transitions and the possible appearance of a new set of non-rational (but still algebraic) quenched exponents for $`q\le 4`$ Potts models. In these respects it very much resembles the quenched bond disorder models that have attracted much attention recently rather than other quenched connectivity disorder ensembles generated using Poissonian random lattices which retain the characteristics of their regular lattice counterparts.
The relevant relation between the quenched dressed weights and bare weights is given by the $`c=0`$ version of the KPZ formula
$$\stackrel{~}{\mathrm{\Delta }}=\frac{\sqrt{1+24\mathrm{\Delta }}-1}{4}.$$
(7)
It has recently been pointed out by Cardy that one should, in fact, see multi-fractal scaling of local correlators on quenched gravity graphs, just as with quenched bond disorder. The $`n`$th power of a correlator with weight $`\stackrel{~}{\mathrm{\Delta }}`$ averaged over the disorder scales not as $`n\stackrel{~}{\mathrm{\Delta }}`$, but rather
$$\stackrel{~}{\mathrm{\Delta }}_n=\frac{\sqrt{1+24n\mathrm{\Delta }}-1}{4}.$$
(8)
Freed from the bounds of using the “right” value of $`c`$ in the KPZ formula we can consider the effect of coupling conformal matter to other backgrounds, whether quenched or annealed. In the quenched case one is interested in calculating the (reduced) free energy $`F=[\mathrm{ln}Z]_{av}`$ where $`[\mathrm{}]_{av}`$ is a quenched average over an ensemble of graphs characterized by a central charge $`c=d`$. Such graphs can be generated by using the adjacency matrix of the graph $`G`$ since the fixed area (i.e. fixed number of vertices) partition function $`Z_A`$ obtained on integrating out $`d`$ scalar fields with central charge $`d`$ is
$$Z_A=\sum _{G\in 𝒢(A)}(\mathrm{det}C_G)^{-d/2},$$
(9)
where $`𝒢(𝒜)`$ is the class of graph being summed over and $`C_G`$ is the adjacency matrix of the graph $`G`$:
$$C_G=\{\begin{array}{cc}q_i\hfill & \text{if }i=j\text{,}\hfill \\ -n_{ij}\hfill & \text{if }i\text{ and }j\text{ are adjacent,}\hfill \\ 0\hfill & \text{otherwise.}\hfill \end{array}$$
(10)
Since $`d`$ is now a parameter, one can in principle use arbitrary negative or positive values in generating the ensemble of graphs. In the above $`q_i`$ is the order of vertex $`i`$ and $`n_{ij}`$ is the number of edges connecting the adjacent vertices $`i`$ and $`j`$, which can be more than one in certain ensembles $`𝒢(𝒜)`$.
In analytical calculations for quenched ensembles we can use the replica trick to (formally) replace the free energy by the $`n\rightarrow 0`$ limit of an $`n`$-replicated version of our matter action
$`F`$ $`=`$ $`[\underset{n\rightarrow 0}{lim}(Z^n-1)/n]_{av}`$ (11)
$`=`$ $`\underset{n\rightarrow 0}{lim}([Z^n]_{av}-1)/n`$
which relates the quenched ensemble to the annealed problem with $`n`$ copies of the matter fields of interest. The $`[\mathrm{}]_{av}`$ stands as before for a functional integral over surfaces with central charge $`d`$, now dynamically coupled to the matter fields. The total central charge is thus $`c_{total}=d+nc_{matter}`$ and in the quenched $`n\rightarrow 0`$ limit $`c_{total}\rightarrow d`$ is the number which appears in the KPZ formula. A simulation to this effect has already been carried out , in which good numerical agreement was obtained between the measured exponents of the Ising model on a quenched ensemble of $`d=-5`$ graphs and those calculated by substituting $`c=d=-5`$ in the KPZ formula for the dressed energy and spin weights (the interpretation put upon the results there was that annealed and quenched ensembles of graphs gave the same results provided the total central charge, $`c_{total}`$, was the same; the difference is essentially semantic).
Since the central charge of the graphs is decoupled from the matter in quenched simulations such as that described above, the KPZ formula on such backgrounds can be thought of as giving a line of dressed conformal weights, say $`\stackrel{~}{\mathrm{\Delta }}(d)`$, depending on the central charge associated with the graphs $`d`$. If we parameterise $`d`$ in the customary manner
$$d=1-\frac{6(p-q)^2}{pq}$$
(12)
we arrive at the following version of the KPZ formula
$$\stackrel{~}{\mathrm{\Delta }}(d)=\frac{\sqrt{p^2+q^2-2pq(1-2\mathrm{\Delta })}-|p-q|}{p+q-|p-q|}$$
(13)
for the dressing of weights in a gravitational background characterised by central charge $`d`$. The energy and spin weights derived from this formula would then, via equ.(6), give a line of critical exponents $`\alpha (d)`$ and $`\beta (d)`$ which depended on the background central charge $`d`$. In annealed simulations the central charge of the (now non-replicated) matter should be included and $`d`$ is replaced by $`c_{total}=d+c_{matter}`$ in the above considerations.
## 3 Rational Points
On a line of continuously varying critical exponents the rational values are typically the most interesting, a prime example being the 8 vertex model on a square lattice where the Ising model, amongst others, appears at such a point. We might thus enquire whether rational points other than the standard ones (i.e. $`d=1/2`$ for Ising, $`7/10`$ for the tri-critical Ising model, $`4/5`$ for the 3-state Potts model) exist. In the case of the Ising model $`\mathrm{\Delta }_ϵ=1/2`$, $`\mathrm{\Delta }_\sigma =1/16`$ and there are no other operators in the conformal table apart from the unit operator. If we want to obtain rational $`\stackrel{~}{\mathrm{\Delta }}_ϵ(d)`$ and $`\stackrel{~}{\mathrm{\Delta }}_\sigma (d)`$, and hence rational $`\alpha (d)`$ and $`\beta (d)`$, we see from equ.(13) that both $`p^2+q^2`$ and $`p^2+q^2-7pq/4`$ must be perfect squares. The first condition will be satisfied by the first two members of any Pythagorean triple $`(p,q,m)`$ (three integers satisfying $`p^2+q^2=m^2`$), which can be parameterised in general as $`p=u^2-v^2,q=2uv,m=u^2+v^2`$ with $`g.c.d.(u,v)=1`$. Inserting this into the second condition and looking at the possible factorisations shows that only two triples satisfy both conditions, $`(3,4,5)`$ and $`(5,12,13)`$. The first corresponds to $`d=1/2`$ and, as it should, returns the standard KPZ weights for the Ising model coupled to 2D gravity. The second, however, is a background with $`d=-39/10`$ and gives the weights
$`\stackrel{~}{\mathrm{\Delta }}_ϵ\left(-{\displaystyle \frac{39}{10}}\right)`$ $`=`$ $`{\displaystyle \frac{3}{5}}`$
$`\stackrel{~}{\mathrm{\Delta }}_\sigma \left(-{\displaystyle \frac{39}{10}}\right)`$ $`=`$ $`{\displaystyle \frac{1}{10}}`$ (14)
which translates to exponents $`\alpha (-39/10)=-1/2`$, $`\beta (-39/10)=1/4`$. For comparison we show in Table 1 the exponents $`\alpha `$, $`\beta `$ and $`\gamma `$ (with $`\gamma `$ derived from the scaling relation $`\alpha +2\beta +\gamma =2`$) for the flat lattice (Onsager exponents), KPZ ($`d=1/2`$), quenched ($`d=0`$) and $`d=-39/10`$.
| | $`\alpha `$ | $`\beta `$ | $`\gamma `$ |
| --- | --- | --- | --- |
| Onsager | $`0`$ | $`\frac{1}{8}`$ | $`\frac{7}{4}`$ |
| KPZ $`(d=1/2)`$ | $`-1`$ | $`\frac{1}{2}`$ | $`2`$ |
| Quenched $`(d=0)`$ | $`-0.8685169`$ | $`0.4167516`$ | $`2.0350137`$ |
| $`d=-\frac{39}{10}`$ | $`-\frac{1}{2}`$ | $`\frac{1}{4}`$ | $`2`$ |
Table 1: Critical exponents for the Ising model
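The search for such rational points is easy to automate. The scan below (our sketch, restricted to coprime $`p<q`$ below an arbitrary cutoff) tests the two perfect-square conditions for the Ising weights and recovers exactly the pairs $`(3,4)`$ and $`(5,12)`$ discussed above.

```python
import math

def is_square(n):
    r = math.isqrt(n)
    return r * r == n

# Ising: p^2+q^2 and p^2+q^2-7pq/4 must be squares of rationals;
# multiplying the second by 4 keeps everything in integers.
for q in range(2, 200):
    for p in range(1, q):
        if math.gcd(p, q) == 1 \
           and is_square(p * p + q * q) \
           and is_square(4 * (p * p + q * q) - 7 * p * q):
            print(p, q, 1 - 6 * (p - q) ** 2 / (p * q))
# -> (3, 4): d = 1/2   and   (5, 12): d = -39/10
```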
Remarkably, $`\alpha (-39/10)=-1/2`$ is the standard KPZ value for the three-state Potts model which has $`c=4/5`$, but $`\beta (-39/10)=1/4`$ is half the 3-state Potts model exponent. This prompts one to look at the weights of the three-state Potts model in their own right. In this case one has a much larger conformal grid of allowed scaling dimensions, twelve of which actually appear as physical operators. Demanding rationality for all of these turns out to be too restrictive for all but the KPZ values with $`d=4/5`$. However, if we ask for rationality of only the energy and spin operators, which have bare weights $`\mathrm{\Delta }_ϵ=2/5`$ and $`\mathrm{\Delta }_\sigma =1/15`$ respectively, equ.(13) now shows that $`p^2+q^2-(2/5)pq`$ and $`p^2+q^2-(26/15)pq`$ must be perfect squares. We again find two possible solutions, $`d=4/5`$ and $`d=-3886/1155`$. The resulting exponents are tabulated in Table 2 along with the classical (fixed lattice), KPZ and quenched values.
| | $`\alpha `$ | $`\beta `$ | $`\gamma `$ |
| --- | --- | --- | --- |
| Fixed | $`\frac{1}{3}`$ | $`\frac{1}{9}`$ | $`\frac{13}{9}`$ |
| KPZ $`(d=4/5)`$ | $`-\frac{1}{2}`$ | $`\frac{1}{2}`$ | $`\frac{3}{2}`$ |
| Quenched $`(d=0)`$ | $`-0.2932676`$ | $`0.3511286`$ | $`1.5910104`$ |
| $`d=-3886/1155`$ | $`-\frac{1}{27}`$ | $`\frac{6}{27}`$ | $`\frac{43}{27}`$ |
Table 2: Critical exponents for the 3-state Potts model
We have omitted the tricritical Ising model, which strictly speaking comes between the Ising and three-state Potts model in any classification, in our discussion. Once again demanding rationality for the full conformal grid is too restrictive to give any values of $`d`$ other than $`7/10`$, but we can still get rational values by restricting ourselves to rational energy and spin operator weights. In this case the bare weights are $`\mathrm{\Delta }_ϵ=1/10`$ and $`\mathrm{\Delta }_\sigma =3/80`$ and we find a whole series of additional solutions $`d=-1449/400`$, $`-3059/1430`$, $`-133763/156400`$, $`\mathrm{}`$ as well as (unlike the Ising and 3-state Potts models) positive solutions $`d=69/70,44719/81200,\mathrm{}`$.
## 4 Conclusions
In summary, we have seen that treating the central charge in the KPZ formula as a free parameter admits rational exponent values other than the standard ones for Ising, tricritical Ising and three-state Potts models. The numerology is most convincing in the Ising case, since both the operators in its small conformal grid acquire rational weights when $`d=-39/10`$. Although rational weights can be arranged for the energy and spin operators at novel values of the central charge for the other two, the rest of the conformal grid still acquires irrational weights. The three-state Potts model has one other rational value of $`d=-3886/1155`$, and the tricritical Ising many, including positive values.
We have noted that feeding a value of the central charge other than that of the matter into the KPZ formula is precisely what is required when the matter/gravity back reaction is switched off, as in quenched simulations. Given the similarity of spin model behaviour on such ensembles to those with quenched bond disorder and the existence of non-perturbative results for the resulting exponents, investigation of such models may be useful for illuminating some of the murkier properties of ferromagnetic systems with quenched disorder. Formally exponents calculated for the “wrong” central charge values can also apply to annealed ensembles of graphs (i.e. the connectivity in the graphs is fluctuating on the same time-scale as the spins, even if the back reaction of the matter on the graphs is switched off) so long as the appropriate matter central charge is included.
Finally, it is worth noting that the new rational points all appear to be numerically accessible, since the quenched simulations mentioned above investigated $`c=-5`$ and the central charges for the Ising and three-state Potts rational exponents are both in the vicinity of $`-4`$. It would be interesting to investigate either numerically or analytically whether the rational points were simply a numerical accident, or differed in some manner from the rest of the line of exponents, as well as the occurrence of such points in other known models.
## Acknowledgements
DJ was partially supported by a Royal Society of Edinburgh/SOEID Support Research Fellowship. The collaborative work of DJ and WJ was funded by ARC grant 313-ARC-XII-98/41.
# 𝜃 Vacuum in a Random Matrix Model
## Acknowledgments
This work was supported by the US DOE grants DE-FG-88ER40388, by the Polish Government Project (KBN) grant 2P03B00814 and by the Hungarian grant OTKA-F026622.
Figure 1: The heavy quark potential measured on a set of 100 12⁴ Wilson $`\beta =2.4`$ gauge configurations. SU(2) is the full SU(2) potential, Z(2) is measured on the centre projected configurations, and “Lorentz Z(2)” is measured on the ensemble that is first Lorentz gauge fixed, then MCG fixed and then centre projected.
## Acknowledgements
TGK thanks Pierre van Baal and Margarita García Pérez for discussions and Philippe de Forcrand for helpful correspondence.
# The Dynamics of Structural Transitions in Sodium Chloride Clusters
## I Introduction
Understanding the relationship between the potential energy surface (PES), or energy landscape, and the dynamics of a complex system is a major research effort in the chemical physics community. One particular focus has been the dynamics of relaxation from an out-of equilibrium starting configuration down the energy landscape to the state of lowest free energy, which is often also the global minimum of the PES.
The possible relaxation processes involved can be roughly divided into two types. The first is relaxation from a high-energy disordered state to a low-energy ordered state, and examples include the folding of a protein from the denatured state to the native structure, and the formation of a crystal from the liquid.
The second kind of relaxation process, which is the focus of the work here, is relaxation from a low-energy, but metastable, state to the most stable state. This second situation often arises from the first; the initial relaxation from a disordered state can lead to the population of a number of low-energy kinetically accessible configurations. The time scales for this second relaxation process can be particularly long because of the large free energy barriers that can separate the states.
Some proteins provide an instance of this second type of relaxation. Often, as well as a rapid direct path from the denatured state to the native structure, there is also a slower path which passes through a low-energy kinetic trap. As this trapping is a potential problem for protein function, the cell has developed its own biochemical machinery to circumvent it. For example, it has been suggested that the chaperonin, GroEL, aids protein folding by unfolding those protein molecules which get stuck in a trapped state.
There are also a growing number of examples of this second type of relaxation involving clusters. For Lennard-Jones (LJ) clusters there are a number of sizes for which the global minimum is non-icosahedral. For example, for LJ<sub>38</sub> the global minimum is a face-centred-cubic truncated octahedron, but relaxation down the PES almost always leads to a low-energy icosahedral minimum. Similarly, for LJ<sub>75</sub> the icosahedral states act as a trap preventing relaxation to the Marks decahedral global minimum. A similar competition between face-centred-cubic or decahedral and icosahedral structures has recently been observed for metal clusters.
For these clusters unbiased global optimization is difficult because the icosahedral states act as an effective trap. More generally, kinetic traps are one of the major problems for a global optimization algorithm. Therefore, much research has focussed on decreasing the ‘life-time’ of such traps. For example, some methods simulate a non-Boltzmann ensemble that involves increased fluctuations, thus making barrier crossing more likely. Other algorithms transform the energy landscape in a way that increases the temperature range where the global minimum is populated, thus allowing one to choose conditions where the free energy barriers relative to the thermal energy are lower.
Recently, clear examples of trapping associated with structural transitions have emerged from experiments on NaCl clusters. These clusters have only one energetically favourable morphology: the magic numbers that appear in mass spectra correspond to cuboidal fragments of the bulk crystal (rocksalt) lattice, hence the term nanocrystals. Indirect structural information comes from the experiments of Jarrold and coworkers which probe the mobility of size-selected cluster ions. For most (NaCl)<sub>N</sub>Cl<sup>-</sup> with $`N>30`$, multiple isomers were detected which were assigned as nanocrystals with different cuboidal shapes. The populations in the different isomers were not initially equilibrated, but slowly evolved, allowing rates and activation energies for the structural transitions between the nanocrystals to be obtained.
In a previous paper we identified the mechanisms of these structural transitions by extensively searching the low-energy regions of the PES of one of these clusters, (NaCl)<sub>35</sub>Cl<sup>-</sup>, in order to obtain paths linking the different cuboidal morphologies. The key process in these transitions is a highly cooperative rearrangement in which two parts of the nanocrystal slip past one another on a $`\{110\}`$ plane in a $``$11̄0$``$ direction.
Here we continue our examination of the structural transitions by investigating the dynamics of (NaCl)<sub>35</sub>Cl<sup>-</sup>. Given the long time scales for which the clusters reside in metastable forms, it is not feasible to probe the transitions with conventional dynamics simulations. Instead, we use a master equation that describes the probability flow between the minima on the PES. This method has the advantage that we can relate the dynamics to the topography of the PES . In this paper we are particularly concerned with obtaining activation energies for the structural transitions: firstly, in order to compare with experiment, and secondly, to understand how the activation energy for a process that involves a series of rearrangements and a large number of possible paths is related to the features of the energy landscape. In section II we outline our methods, and then in Section III, after a brief examination of the topography of the PES and the thermodynamics, we present our results for the dynamics of the structural transitions.
## II Methods
### A Potential
The potential that we use to describe the sodium chloride clusters is the Tosi-Fumi parameterization of the Coulomb plus Born-Mayer form:
$$E=\sum _{i<j}\left(\frac{q_iq_j}{r_{ij}}+A_{ij}e^{-r_{ij}/\rho }\right),$$
where $`q_i`$ is the charge on ion $`i`$, $`r_{ij}`$ is the distance between ions $`i`$ and $`j`$ and $`A_{ij}`$ and $`\rho `$ are parameters. Although simple, this potential provides a reasonable description of the interactions. For example, in a previous study we compared the global minima of (NaCl)<sub>N</sub>Cl<sup>-</sup> ($`N\le 35`$) for this potential with those for a more complex potential derived by Welch et al. which also includes terms due to polarization. Most of the global minima were the same for both potentials. Given some of the other approximations we use in this study, the small advantages gained by using the Welch potential do not warrant the considerable additional computational expense.
We should also note that a well-known problem associated with the above family of potentials for the alkali halides is that they never predict the CsCl structure to be the most stable. This problem arises because the potentials do not allow the properties of an ion to be dependent on the local ionic environment. This deficiency should not greatly affect the relative energies of the low-lying (NaCl)<sub>35</sub>Cl<sup>-</sup> minima because they all have the same rock-salt structure, but it may affect the barriers for rearrangements where some ions experience a different local environment at the transition state.
### B Searching the potential energy surface
The samples of 3518 minima and 4893 transition states that we use here were obtained in our previous study on the mechanisms of the structural transitions for (NaCl)<sub>35</sub>Cl<sup>-</sup>. This sampling was performed by repeatedly stepping across the PES from minimum to minimum via transition states, thus giving a connected set of stationary points. We biased this search to probe the low-energy regions of the PES either by using a Metropolis criterion to decide whether to accept a step to a new minimum or by systematically performing transition state searches from the lower energy minima in the sample. Thus, although our samples of stationary points only constitute a tiny fraction of the total number on the PES, we have a good representation of the low-energy regions that are relevant to the structural transitions of (NaCl)<sub>35</sub>Cl<sup>-</sup>.
### C Thermodynamics
We used two methods to probe the thermodynamics: first, conventional Monte Carlo simulations and second, the superposition method. The latter is a technique to obtain the thermodynamics of a system from a sample of minima. It is based on the idea that all of configuration space can be divided up into the basins of attraction surrounding each minimum. The density of states or partition function can then be written as a sum over all the minima on the PES, e.g. $`Z=\sum _iZ_i`$, where $`Z_i`$ is the partition function of minimum $`i`$.
The limitations of the superposition method are that the $`Z_i`$ are not known exactly and that, for all but the smallest systems, the total number of minima on the PES is too large for us to characterize them all. However, the harmonic expression for $`Z_i`$ leads to a reasonable description of the thermodynamics. Furthermore, anharmonic forms are available which allow the thermodynamics of larger clusters to be reproduced accurately.
The incompleteness of the sample can be overcome by weighting the contributions from the minima in a representative sample. However, this approach is not necessary in the present study since we are interested in low temperature behaviour where the number of thermodynamically relevant minima is still relatively small. Furthermore, in this temperature regime the superposition method has the advantage that it is unaffected by large free energy barriers between low-energy minima which can hinder the determination of equilibrium thermodynamic properties by conventional simulation.
Here we use the harmonic form of the superposition method, because we later use the harmonic approximation to derive rate constants (reliable anharmonic expressions for the rate constants are not so readily available). The partition function is then
$$Z=\sum _i\frac{n_ie^{-\beta E_i}}{(\beta h\overline{\nu }_i)^\kappa },$$
(1)
where $`\beta =1/kT`$, $`E_i`$ is the energy of minimum $`i`$, $`\overline{\nu }_i`$ is the geometric mean vibrational frequency of $`i`$, $`\kappa =3N-6`$ is the number of vibrational degrees of freedom and $`n_i`$ is the number of permutational isomers of $`i`$. $`n_i`$ is given by $`2N!/h_i`$, where $`h_i`$ is the order of the point group of $`i`$. From this expression thermodynamic quantities such as the heat capacity, $`C_v`$, can be obtained by the application of standard thermodynamic formulae. The superposition method also allows us to examine the contributions of particular regions of configuration space to the thermodynamics. For example, the probability that the system is in region $`A`$ is given by
$$p_A=\sum _{i\in A}\frac{Z_i}{Z},$$
(2)
where the sum runs over those minima that are part of region A.
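The sums in equations (1) and (2) are straightforward to evaluate for a database of minima. The sketch below (ours) uses a handful of placeholder minima, with invented energies, mean frequencies and point-group orders rather than the actual (NaCl)<sub>35</sub>Cl<sup>-</sup> sample, and works with ratios so that factors common to every minimum cancel.

```python
import numpy as np

kB = 8.617333e-5          # Boltzmann constant, eV/K
kappa = 3 * 71 - 6        # vibrational degrees of freedom for 71 ions

# (E_i [eV above global min], nubar_i [cm^-1], point-group order h_i, form)
minima = [(0.000, 105.0, 4, "5x5x3"),
          (0.021, 103.5, 1, "5x5x3"),
          (0.090, 104.0, 2, "6x4x3"),
          (0.145, 102.0, 1, "5x4x4")]

def probabilities(T):
    # ln Z_i up to constants shared by every minimum: the (beta*h)^kappa factor
    # and the overall permutational factor cancel in the ratios; n_i ~ 1/h_i.
    lnz = np.array([-np.log(h) - E / (kB * T) - kappa * np.log(nu)
                    for E, nu, h, _ in minima])
    w = np.exp(lnz - lnz.max())
    return w / w.sum()

p = probabilities(300.0)
p_553 = sum(pi for (_, _, _, form), pi in zip(minima, p) if form == "5x5x3")
print(p, p_553)   # individual p_i and the summed p_A for the 5x5x3 region
```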
### D Dynamics
The master equation approach is increasingly being used to describe the inter-minimum dynamics on a multi-dimensional PES with applications to, for example, clusters, glasses, proteins and idealized model landscapes. The master equation is defined in terms of $`𝐏(𝐭)=\{P_i(t)\}`$, the vector whose components are the ensemble-average probabilities that the system is associated with each minimum at time $`t`$:
$$\frac{dP_i(t)}{dt}=\sum _{j\ne i}^{n_{\mathrm{min}}}[k_{ij}P_j(t)-k_{ji}P_i(t)],$$
(3)
where $`k_{ij}`$ is the rate constant for transitions from minimum $`j`$ to minimum $`i`$. Defining the matrix
$$W_{ij}=k_{ij}-\delta _{ij}\sum _{m=1}^{n_{\mathrm{min}}}k_{mi}$$
(4)
allows Equation (3) to be written in matrix form: $`d𝐏(t)/dt=\mathrm{𝐖𝐏}(t)`$.
If the transition matrix $`𝐖`$ cannot be decomposed into block form, it has a single zero eigenvalue whose corresponding eigenvector is the equilibrium probability distribution, $`𝐏^{eq}`$. As a physically reasonable definition for the rate constants must obey detailed balance at equilibrium, i.e. $`W_{ij}P_j^{\mathrm{eq}}=W_{ji}P_i^{\mathrm{eq}}`$, the solution of the master equation can be expanded in terms of a complete set of eigenfunctions of the symmetric matrix, $`\stackrel{~}{𝐖}`$, defined by $`\stackrel{~}{W}_{ij}=(P_j^{\mathrm{eq}}/P_i^{\mathrm{eq}})^{1/2}W_{ij}`$. The solution is
$$P_i(t)=\sqrt{P_i^{\mathrm{eq}}}\sum _{j=1}^{n_{\mathrm{min}}}\stackrel{~}{u}_i^{(j)}e^{\lambda _jt}\left[\sum _{m=1}^{n_{\mathrm{min}}}\stackrel{~}{u}_m^{(j)}\frac{P_m(0)}{\sqrt{P_m^{\mathrm{eq}}}}\right],$$
(5)
where $`\stackrel{~}{u}_i^{(j)}`$ is component $`i`$ of the $`j^{\mathrm{th}}`$ eigenvector of $`\stackrel{~}{𝐖}`$ and $`\lambda _j`$ is the $`j^{\mathrm{th}}`$ eigenvalue.
The eigenvalues of $`𝐖`$ and $`\stackrel{~}{𝐖}`$ are identical and the eigenvectors are related by $`u_i^{(j)}=\stackrel{~}{u}_i^{(j)}\sqrt{P_i^{eq}}`$. Except for the zero eigenvalue, all $`\lambda _j`$ are negative. Therefore, as $`t\rightarrow \infty `$ the contribution of these modes decays exponentially to zero and $`𝐏\rightarrow 𝐏^{eq}`$.
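The construction of equations (3)-(5) can be followed with a small numerical sketch: a toy three-minimum landscape with rates that satisfy detailed balance, the corresponding transition matrix and its symmetrized form, and the propagated populations. All numbers are placeholders of ours rather than the (NaCl)<sub>35</sub>Cl<sup>-</sup> database.

```python
import numpy as np

kT = 0.025                                  # thermal energy in eV (~300 K)
E = np.array([0.00, 0.05, 0.20])            # minima
edges = [(0, 1, 0.60), (1, 2, 0.45)]        # (i, j, transition-state energy)
nu = 1.0e12                                 # common attempt frequency, s^-1

n = len(E)
k = np.zeros((n, n))                        # k[i, j]: rate for j -> i
for i, j, Ets in edges:                     # Arrhenius-like rates; detailed
    k[i, j] = nu * np.exp(-(Ets - E[j]) / kT)   # balance holds by construction
    k[j, i] = nu * np.exp(-(Ets - E[i]) / kT)

W = k - np.diag(k.sum(axis=0))              # Eq. (4)
peq = np.exp(-E / kT)
peq /= peq.sum()                            # equilibrium distribution

Wt = W * np.sqrt(peq)[None, :] / np.sqrt(peq)[:, None]   # symmetrized matrix
lam, u = np.linalg.eigh(Wt)

P0 = np.array([0.0, 0.0, 1.0])              # start in the highest minimum
def P(t):                                   # Eq. (5)
    coeff = u.T @ (P0 / np.sqrt(peq))
    return np.sqrt(peq) * (u @ (coeff * np.exp(lam * t)))

for t in (0.0, 1e-9, 1e-6, 1e-3):
    print(t, P(t))                          # populations relax towards peq
```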
To apply Equation (5) we must first diagonalize $`\stackrel{~}{𝐖}`$. The computer time required for this procedure scales as the cube of the size of the matrix and the memory requirements scale as the square. Therefore, it is advantageous for the matrix $`\stackrel{~}{𝐖}`$ to be as small as possible. For this reason we recursively removed those minima that are only connected to one other minimum; these ‘dead-end’ minima do not contribute directly to the probability flow between different regions of the PES. After pruning our samples have 1624 minima and 2639 transition states. To test the effect of this pruning, we performed some calculations using both the full and the pruned samples. The effect on the dynamics of the structural transitions was negligible.
As the temperature of a system is decreased the spread of eigenvalues can increase rapidly. When the ratio of the largest to smallest eigenvalues is of the order of the machine precision of the computer, the accuracy of the extreme eigenvalues can become degraded by rounding errors. We encountered these problems below 275K. Without pruning the samples these numerical difficulties are more pronounced.
We model the rate constants, which are needed as input to Equation (3), using RRKM theory in the harmonic approximation. Therefore, the rate constant for a transition from minimum $`i`$ to minimum $`j`$ via a particular transition state (denoted by $`\ddagger `$) is given by
$$k_{ij}^{\ddagger }(T)=\frac{h_i}{h_{ij}^{\ddagger }}\frac{\overline{\nu }_i^\kappa }{(\overline{\nu }_{ij}^{\ddagger })^{\kappa -1}}\mathrm{exp}(-(E_{ij}^{\ddagger }-E_i)/kT).$$
(6)
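Equation (6) is straightforward to evaluate; the sketch below (illustrative numbers only, not fitted values) groups the frequencies as $`\overline{\nu }_i(\overline{\nu }_i/\overline{\nu }_{ij}^{\ddagger })^{\kappa -1}`$ to avoid overflowing $`\overline{\nu }^\kappa `$ for large $`\kappa `$. Rate constants of this form are what fill the rate matrix in the master-equation sketch above.

```python
import numpy as np

kB = 8.617333e-5   # eV/K

def rrkm_rate(E_min, E_ts, nu_min, nu_ts, h_min, h_ts, kappa, T):
    """Harmonic RRKM rate constant of Eq. (6) for escape from one minimum over
    one transition state; energies in eV, geometric-mean frequencies in s^-1."""
    return (h_min / h_ts) * nu_min * (nu_min / nu_ts) ** (kappa - 1) \
           * np.exp(-(E_ts - E_min) / (kB * T))

# illustrative call (not the (NaCl)35Cl- database):
print(rrkm_rate(0.0, 0.6, 3.0e12, 2.9e12, 1, 1, 3 * 71 - 6, 300.0))
```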
## III Results
### A Topography of the (NaCl)<sub>35</sub>Cl<sup>-</sup> PES
In our previous study of (NaCl)<sub>35</sub>Cl<sup>-</sup> we found that the low-energy minima all had rock-salt structures. The different minima have four basic shapes: an incomplete $`5\times 5\times 3`$ cuboid, a $`6\times 4\times 3`$ cuboid with a single vacancy, a $`8\times 3\times 3`$ cuboid with a single vacancy and an incomplete $`5\times 4\times 4`$ cuboid. The lowest-energy minimum for each of these forms is shown in Figure 1.
In the experiments on (NaCl)<sub>35</sub>Cl<sup>-</sup> by Jarrold and coworkers the three peaks that were resolved in the arrival time distribution were assigned on the basis of calculated mobilities as $`5\times 5\times 3`$, $`5\times 4\times 4`$ and $`8\times 3\times 3`$ nanocrystals. However, when the $`6\times 4\times 3`$ nanocrystal is also considered, better agreement between the calculated and observed mobilities can be obtained by assigning the three experimental peaks to the $`6\times 4\times 3`$, $`5\times 5\times 3`$ and $`8\times 3\times 3`$ nanocrystals in order of increasing drift time. This reassignment is also in better agreement with the thermodynamics since the clusters convert to (what is now assigned as) the $`5\times 5\times 3`$ nanocrystal as time progresses, indicating that this structure has the lowest free energy. In our calculations a $`5\times 5\times 3`$ isomer is the global potential energy minimum, and the $`5\times 5\times 3`$ nanocrystal is always more stable than the other nanocrystals (See Section III B).
Disconnectivity graphs provide a way of visualizing an energy landscape that is particularly useful for obtaining insight into dynamics, and have previously been applied to a number of protein models and clusters. The graphs are constructed by performing a ‘superbasin’ analysis at a series of energies. This analysis involves grouping minima into disjoint sets, called superbasins, whose members are connected by pathways that never exceed the specified energy. At each energy a superbasin is represented by a node, and lines join nodes in one level to their daughter nodes in the level below. Every line terminates at a local minimum. The graphs therefore present a visual representation of the hierarchy of barriers between minima.
The disconnectivity graph for (NaCl)<sub>35</sub>Cl<sup>-</sup> is shown in Figure 2. The barriers between minima with the same cuboidal form are generally lower than those between minima that have different shapes. Therefore, the disconnectivity graph splits into funnels corresponding to each cuboidal morphology. (A funnel is a set of downhill pathways that converge on a single low-energy minimum or a set of closely-related low-energy minima. In a disconnectivity graph an ideal funnel is represented by a single tall stem with lines branching directly from it, indicating the progressive exclusion of minima as the energy is decreased.) The separation is least clear for the $`5\times 4\times 4`$ minima because of the large number of different ways that the nine vacant sites can be arranged. For example, these vacancies are organized very differently in the two lowest-energy $`5\times 4\times 4`$ minima (Figure 1), and in fact the barrier between minimum O and the low-energy $`6\times 4\times 3`$ isomers is lower than the barrier between O and minimum L. Therefore, the minima associated with O form a sub-funnel that splits off from the $`6\times 4\times 3`$ funnel, rather than being directly connected to the main $`5\times 4\times 4`$ funnel.
The disconnectivity graph shows that the barriers between the $`5\times 5\times 3`$, $`6\times 4\times 3`$ and $`5\times 4\times 4`$ nanocrystals are of similar magnitude, while the $`8\times 3\times 3`$ minima are separated from the rest by a considerably larger barrier. The values of some of the barrier heights are given in Table I.
The disconnectivity graph is also helpful for interpreting the (NaCl)<sub>35</sub>Cl<sup>-</sup> dynamics observed in experiments. In the formation process it is likely that a high-energy configuration is initially generated. The cluster then relaxes to one of the low-energy nanocrystals. Simulations for potassium chloride clusters indicate that this relaxation is particularly rapid for alkali halides because of the low barriers (relative to the energy difference between the minima) for downhill pathways. However, there is a separation of time scales between this initial relaxation and the conversion of the metastable nanocrystals to the one with the lowest free energy: the large barriers between the different cuboids make them efficient traps.
### B Thermodynamics of (NaCl)<sub>35</sub>Cl<sup>-</sup>
Some thermodynamic properties of (NaCl)<sub>35</sub>Cl<sup>-</sup> are shown in Figures 3 and 4. The caloric curve shows a feature at $`\approx `$700 K which indicates melting (Figure 3a); the melting temperature is depressed relative to the bulk due to the cluster’s finite size. The effects of the barriers between nanocrystals are apparent in the MC simulations. The radius of gyration, $`R_g`$, provides a means of differentiating the nanocrystals. It can be seen from the plot of $`R_g`$ for simulations started in the lowest-energy minima of the four cuboidal forms that each simulation is stuck in the starting structure (Figure 3b) up to temperatures close to melting, implying that there are large free energy barriers between the nanocrystals.
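For reference, the radius of gyration used above to distinguish the cuboidal forms can be computed as in the following sketch; the coordinate array is a hypothetical placeholder and the unweighted (mass-independent) average is an assumption, not a statement about the actual analysis.

```python
import numpy as np

def radius_of_gyration(coords):
    """R_g = sqrt(<|r_i - r_cm|^2>), unweighted average over all ions."""
    coords = np.asarray(coords, dtype=float)
    r_cm = coords.mean(axis=0)
    return np.sqrt(np.mean(np.sum((coords - r_cm) ** 2, axis=1)))

# Hypothetical example: ions on a small cubic grid (arbitrary units).
grid = np.array([[x, y, z] for x in range(3) for y in range(3) for z in range(3)])
print(radius_of_gyration(grid))
```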
These free energy barriers prevent an easy determination of the relative stabilities of the different nanocrystals by conventional simulations. Therefore, we use the superposition method to examine this question. First we assign the fifty lowest-energy minima to one of the cuboidal forms by visual inspection of each structure. Then, using these sets as definitions of the nanocrystals in Equation (2), we calculate the equilibrium probabilities of the cluster being in the different cuboidal morphologies as a function of temperature. It can be seen from Figure 4a that the $`5\times 5\times 3`$ nanocrystal is most stable up until melting. The $`6\times 4\times 3`$ nanocrystal also has a significant probability of being occupied. However, the probabilities for the $`8\times 3\times 3`$ and $`5\times 4\times 4`$ nanocrystals are always small. The onset of the melting transition is indicated by the rise of $`p_{\mathrm{rest}}`$ in Figure 4a and by the peak in the heat capacity (Figure 4b). However, this transition is much too broad and the heat capacity peak occurs at too high a temperature because the incompleteness of our sample of minima leads to an underestimation of the partition function for the liquid-like minima. Given these expected failings of the superposition method at high temperature when the partition functions of the minima are not reweighted, we restrict our dynamics calculations to temperatures below 600K.
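The sketch below illustrates one common harmonic-superposition form for the occupation probability of a group of minima. Since Equation (2) itself is not reproduced above, the precise weight assigned to each minimum here (a Boltzmann factor divided by a power of its mean vibrational frequency, with unit degeneracies) is an assumption, and all input numbers are invented.

```python
import numpy as np

def group_probabilities(E, log_nu_bar, kappa, groups, T, k_B=8.617e-5):
    """Occupation probability of each group of minima from a harmonic
    superposition partition function (assumed form):
        Z_i ~ exp(-E_i / k_B T) / (nu_bar_i)^kappa, constants dropped.
    E: minimum energies (eV); log_nu_bar: log geometric-mean frequency of
    each minimum; kappa: number of vibrational modes; groups: dict
    {name: list of minimum indices}; T in K, k_B in eV/K."""
    E = np.asarray(E, float)
    log_nu_bar = np.asarray(log_nu_bar, float)
    logZ = -E / (k_B * T) - kappa * log_nu_bar
    logZ -= logZ.max()                      # avoid overflow
    Z = np.exp(logZ)
    total = Z.sum()
    return {name: Z[idx].sum() / total for name, idx in groups.items()}

# Hypothetical three minima assigned to two structural families.
E = [0.00, 0.05, 0.02]              # relative energies (eV)
log_nu = [0.00, -0.02, 0.01]        # lower frequencies -> more vibrational entropy
groups = {"5x5x3": [0, 1], "6x4x3": [2]}
print(group_probabilities(E, log_nu, kappa=207, groups=groups, T=300))
```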
Although the probabilities of being in the different morphologies show little variation at low temperature, there are significant changes in the occupation probabilities of specific minima. For example, the small low temperature peak in the heat capacity is a result of a redistribution of probability amongst the low-energy $`5\times 5\times 3`$ minima; the third lowest-energy minimum becomes most populated. It is also interesting to note that the second lowest-energy $`5\times 4\times 4`$ minimum (O) becomes more stable than the lowest-energy $`5\times 4\times 4`$ minimum (L) for temperatures above approximately 220K. Both these changes are driven by differences in vibrational entropy.
### C Dynamics of (NaCl)<sub>35</sub>Cl<sup>-</sup>
Some examples of the interfunnel dynamics that we find on solution of the master equation are depicted in Figure 5. The time scales involved are much longer than those accessible by conventional simulations.
The dynamics of relaxation to equilibrium depend significantly on the starting configuration. When the lowest-energy $`6\times 4\times 3`$ minimum is the initial configuration there is a small transient population in $`5\times 4\times 4`$ minima before the system adopts a $`5\times 5\times 3`$ structure. This is consistent with the lowest-energy pathway that was found between the two nanocrystals; it passes through some intermediate $`5\times 4\times 4`$ minima.
The relaxation to equilibrium is much slower when the cluster starts from the lowest-energy $`8\times 3\times 3`$ minimum. This is a result of the large barrier to escape from this funnel (Figure 2). The probability flow out of the $`8\times 3\times 3`$ funnel leads to a simultaneous rise in the occupation probabilities of both the $`5\times 5\times 3`$ and $`6\times 4\times 3`$ nanocrystals towards their equilibrium values, even though the lowest-energy pathway out of the $`8\times 3\times 3`$ funnel directly connects it to the low-energy $`6\times 4\times 3`$ minima. This occurs because the time scale for interconversion of these latter two nanocrystals is much shorter than that for escape from the $`8\times 3\times 3`$ funnel. However, we do find evidence that the cluster first passes through the $`6\times 4\times 3`$ minima if we examine the probabilities on a log-scale. At times shorter than that required for local equilibrium between the $`5\times 5\times 3`$ and $`6\times 4\times 3`$ minima the occupation probability for the $`6\times 4\times 3`$ minima is larger.
As the two lowest-energy $`5\times 4\times 4`$ minima are well-separated in configuration space we considered relaxation from both these minima. In both cases, there is a large probability flow into the $`6\times 4\times 3`$ minima, which is then transferred to the $`5\times 5\times 3`$ funnel on the same time scale as when initiated in the $`6\times 4\times 3`$ funnel. However, the time scale for the build-up of population in the $`6\times 4\times 3`$ minima depends on the initial configuration. Probability flows more rapidly and directly into the $`6\times 4\times 3`$ minima when initiated from minimum O, reflecting the low barriers between these minima (Figure 2). For relaxation from minimum L there are two active pathways, leading to an increase in the population of both the $`5\times 5\times 3`$ and the $`6\times 4\times 3`$ minima. The direct path into the $`5\times 5\times 3`$ funnel has the lower barrier (Table I) but is long (96.7Å), and so has a smaller rate than the path into the $`6\times 4\times 3`$ funnel, which has a slightly higher barrier (by 0.05 eV). The small shoulder in the occupation probability of the $`5\times 5\times 3`$ minima occurs in the time range when the occupation probability of the $`5\times 4\times 4`$ minima has reached a value close to zero (thus reducing the contribution from the direct path) and when the probability flow out of the $`5\times 4\times 4`$ minima is only just beginning.
The combination of our thermodynamics and dynamics results for (NaCl)<sub>35</sub>Cl<sup>-</sup> enable us to explain why a peak associated with the $`5\times 4\times 4`$ cuboids was not observed experimentally. The $`5\times 4\times 4`$ minima have a shorter lifetime than the other cuboidal forms and have a low equilibrium occupation probability.
Another way to analyse the dynamics is to examine how local equilibration progresses towards the point where global equilibrium has been obtained. To accomplish this we define two minima to be in local equilibrium at the time when
$$\frac{\left|P_i(t)P_j^{\mathrm{eq}}-P_j(t)P_i^{\mathrm{eq}}\right|}{\sqrt{P_i(t)P_j(t)P_i^{\mathrm{eq}}P_j^{\mathrm{eq}}}}\le ϵ$$
(7)
is obeyed for all later times. In the present work we set $`ϵ=0.01`$, i.e. the two minima are within 1% of equilibrium. Using this definition we can construct equilibration graphs, in which nodes occur when two (groups of) minima come into local equilibrium.
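A minimal implementation of this criterion is sketched below; the two-state trajectory used to exercise it is an invented example, not data from the (NaCl)<sub>35</sub>Cl<sup>-</sup> master equation.

```python
import numpy as np

def equilibration_time(t, P_i, P_j, Pi_eq, Pj_eq, eps=0.01):
    """Earliest time after which the local-equilibrium criterion of
    Equation (7) holds at all later sampled times."""
    P_i, P_j = np.asarray(P_i, float), np.asarray(P_j, float)
    lhs = np.abs(P_i * Pj_eq - P_j * Pi_eq) / np.sqrt(P_i * P_j * Pi_eq * Pj_eq)
    ok = lhs <= eps
    for k in range(len(t)):
        if ok[k:].all():
            return t[k]
    return None          # never equilibrated within the sampled window

# Hypothetical relaxation of a pair of minima towards Pi_eq=0.7, Pj_eq=0.3.
t = np.logspace(-6, 0, 200)
P_i = 0.7 * (1.0 - np.exp(-t / 1e-3))
P_j = 1.0 - P_i
print(equilibration_time(t, P_i, P_j, 0.7, 0.3))
```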
We show an example of an equilibration graph in Figure 6. Equilibration first occurs at the bottom of the $`6\times 4\times 3`$ and $`5\times 5\times 3`$ funnels between those minima that are connected by low barrier paths, then progresses to minima with the same cuboidal shape but which are separated by larger barriers, and finally occurs between the funnels. The order of interfunnel equilibrium agrees with the time scales that we observe in the time evolution of the occupation probabilities of the nanocrystals (Figure 5). Minimum O, then minimum L, come into equilibrium with the $`6\times 4\times 3`$ funnel. Then, the $`5\times 5\times 3`$ and $`6\times 4\times 3`$ funnels reach local equilibrium. Finally, the $`8\times 3\times 3`$ funnel reaches equilibrium with the rest of the PES. As one of the major determinants of the time scale required for local equilibrium is the height of the barriers between minima, it is unsurprising that the equilibration graph reflects the structure of the disconnectivity graph (Figure 2).
In the experiments on (NaCl)<sub>35</sub>Cl<sup>-</sup> rate constants and activation energies were obtained for the conversion of the $`6\times 4\times 3`$ and $`8\times 3\times 3`$ nanocrystals into the $`5\times 5\times 3`$ nanocrystal. It would, therefore, be useful if we could extract rate constants for the different interfunnel processes from the master equation dynamics.
For a two-state system, where $`AB`$ and $`k_+`$ and $`k_{}`$ are forward and reverse rate constants, respectively, it can be shown that
$$\mathrm{ln}\left[\frac{P_A(t)-P_A^{\mathrm{eq}}}{P_A(0)-P_A^{\mathrm{eq}}}\right]=-(k_++k_{-})t,$$
(8)
and the equivalent expression for B are obeyed. This is a standard result for a first-order reaction. This expression will also hold for the rate of passage between two funnels in our multi-state system if the interfunnel dynamics are the only processes affecting the occupation probabilities of the relevant funnels, and if the interfunnel dynamics cause the occupation probabilities of the two funnels to converge to their equilibrium values.
In Figure 7 we test the above expression by applying it to the interconversion of $`6\times 4\times 3`$ and $`5\times 5\times 3`$ nanocrystals. The two lines in the graph converge to the same plateau value, before both falling off beyond 0.001 s. This plateau corresponds to the time range for which the interfunnel passage dominates the evolution of the probabilities for the two funnels. At shorter times, when the occupation probabilities for the two funnels are still close to their initial values, there are many other contributing processes. At longer times the probabilities are both very close to their equilibrium values, and the slower equilibration with the $`8\times 3\times 3`$ funnel dominates the probability evolution. From the plateau in Figure 7 we obtain $`k^{+}+k^{-}=5320\mathrm{s}^{-1}`$. The individual rate constants can be obtained by using the detailed balance relation: $`k^{+}P_A^{\mathrm{eq}}=k^{-}P_B^{\mathrm{eq}}`$.
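The procedure for reading off the rate-constant sum from the plateau of Equation (8) can be sketched as follows. The relaxation curve below is a clean, invented two-state example; for the real multi-state probabilities the plateau is bounded by other processes at short times and by slower equilibration at long times, as described above.

```python
import numpy as np

def rate_sum_estimate(t, P_A, PA_eq):
    """-(1/t) * ln[(P_A(t)-P_A^eq)/(P_A(0)-P_A^eq)], which should plateau
    at k+ + k- where a single interfunnel process dominates (Equation (8))."""
    t, P_A = np.asarray(t, float), np.asarray(P_A, float)
    return -np.log((P_A - PA_eq) / (P_A[0] - PA_eq)) / t

# Hypothetical two-state relaxation with k+ + k- = 5000 s^-1.
k_sum, PA_eq, PA_0 = 5000.0, 0.6, 0.9
t = np.logspace(-6, -2, 200)
P_A = PA_eq + (PA_0 - PA_eq) * np.exp(-k_sum * t)
plateau = np.median(rate_sum_estimate(t, P_A, PA_eq))
k_minus = plateau * PA_eq            # detailed balance with P_A^eq + P_B^eq = 1:
k_plus = plateau * (1.0 - PA_eq)     # k+ P_A^eq = k- P_B^eq
print(plateau, k_plus, k_minus)
```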
The application of Equation (8) to escape from the $`8\times 3\times 3`$ funnel also leads to a range of $`t`$ where there is a well-defined plateau. However, this approach works less well for the interconversion of $`5\times 4\times 4`$ and $`6\times 4\times 3`$ minima (Figures 5a and b). This is because the assumption that no other processes contribute to the probability evolution of the two funnels is obeyed less well, and because the occupation probabilities do not converge to their equilibrium values, but near to the equilibrium values that would obtain if the $`5\times 5\times 3`$ minima were excluded. Nevertheless, approximate values of $`k^+`$ and $`k^{}`$ can be obtained.
Diagonalization of the matrix $`\stackrel{~}{𝐖}`$ produces a set of eigenvalues that give the time scales for a set of characteristic probability flows. The dynamical processes to which the eigenvalues correspond can be identified by examining the eigenvectors. Flow occurs between those minima for which the corresponding components of the eigenvector have opposite sign (Equation (5)). This observation forms the basis for Kunz and Berry’s net-flow index which quantifies the contribution of an eigenvector $`i`$ to flow out of a funnel $`A`$. The index is defined by
$$f_i^\mathrm{A}=\sum _{j\in \mathrm{A}}\stackrel{~}{u}_j^{(i)}\sqrt{P_j^{\mathrm{eq}}}.$$
(9)
The index allows the interfunnel modes to be identified; the values of $`f^A`$ and $`f^B`$ for these modes will be large and of opposite sign. For example, at $`T`$=400 K the mode with the most $`5\times 5\times 3\leftrightarrow 6\times 4\times 3`$ character has $`f^{5\times 5\times 3}=0.339`$, $`f^{6\times 4\times 3}=-0.331`$ and $`\lambda =5275\mathrm{s}^{-1}`$. The eigenvalue is in good agreement with the sum of interfunnel rates obtained using Equation (8).
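The construction of the symmetrized transition matrix, its diagonalization and the net-flow index of Equation (9) are sketched below for a hypothetical four-minimum system with two funnels. The energies, barriers and the harmonic-TST-like form of the elementary rate constants are all invented for illustration; only the structure of the calculation reflects the text.

```python
import numpy as np

# Hypothetical 4-minimum system: funnels A = {0, 1} and B = {2, 3}.
E = np.array([0.00, 0.02, 0.05, 0.06])          # minimum energies (eV), invented
kT = 0.03
P_eq = np.exp(-E / kT); P_eq /= P_eq.sum()       # equilibrium probabilities

def pair_rate(i, j, barrier, nu=1e10):
    """Assumed harmonic-TST-like rate for the transition j -> i (s^-1);
    this form automatically satisfies detailed balance."""
    return nu * np.exp(-(barrier - E[j]) / kT)

barriers = {(0, 1): 0.10, (1, 2): 0.20, (2, 3): 0.12}   # transition-state energies
n = len(E)
W = np.zeros((n, n))
for (i, j), b in barriers.items():
    W[i, j] = pair_rate(i, j, b)
    W[j, i] = pair_rate(j, i, b)
W -= np.diag(W.sum(axis=0))                      # conservation of probability

# Symmetrized matrix W~_{ij} = W_{ij} sqrt(P_j^eq / P_i^eq)
Wt = W * np.sqrt(P_eq[None, :] / P_eq[:, None])
lam, U = np.linalg.eigh(Wt)                      # eigenvalues (<= 0) and eigenvectors

A, B = [0, 1], [2, 3]
for k in range(n):
    u = U[:, k]
    fA = np.sum(u[A] * np.sqrt(P_eq[A]))         # net-flow index, Equation (9)
    fB = np.sum(u[B] * np.sqrt(P_eq[B]))
    print(f"lambda = {lam[k]:10.3e} s^-1   f^A = {fA:+.3f}   f^B = {fB:+.3f}")
```

The mode with large $`f^A`$ and $`f^B`$ of opposite sign is the interfunnel mode, and its eigenvalue gives the corresponding rate-constant sum.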
The extraction of the interfunnel rate in this manner is hindered by the fact that the eigenvalues of $`\stackrel{~}{𝐖}`$ cannot cross as a function of temperature. Instead, there are avoided crossings and mixing of modes. For example, the small difference between the two values for the sum of the $`5\times 5\times 3\leftrightarrow 6\times 4\times 3`$ interfunnel rate constants that we obtained above is probably due to mixing. The eigenvector with which the interfunnel mode mixes gains some $`5\times 5\times 3\leftrightarrow 6\times 4\times 3`$ character; it has $`f^{5\times 5\times 3}=0.014`$, $`f^{6\times 4\times 3}=0.017`$ and $`\lambda =6388\mathrm{s}^{-1}`$.
By calculating and diagonalizing $`\stackrel{~}{𝐖}`$ at a series of temperatures we can examine the temperature dependence of the interfunnel rate constants. For most of the interfunnel processes, the rate constants that we obtain fit well to the Arrhenius form, $`k=A\mathrm{exp}(-E_a/kT)`$, where $`E_a`$ is the activation energy and $`A`$ is the prefactor (Figure 8).
In our previous study of the interconversion mechanisms in (NaCl)<sub>35</sub>Cl<sup>-</sup> we estimated the activation energies in order to compare with the experimental values. Using the analogy to a simple one-step reaction, we equated the activation energy with the difference in energy between the highest-energy transition state on the lowest-energy path between the relevant nanocrystals and the lowest-energy minimum of the starting nanocrystal. However, it was not clear how well this analogy would work. In Table I we compare these estimates with the activation energies obtained from our master equation results. There is good agreement, confirming the utility of the approximate approach. Similar agreement has also been found for the interfunnel dynamics of a 38-atom Lennard-Jones cluster.
A simple explanation for this correspondence can be given. We first label the minima along the lowest-barrier path between the two funnels in ascending order and define $`l`$ so that the highest-energy transition state on this path lies between minima $`l-1`$ and $`l`$. If the minima behind the highest-energy transition state (i.e. 1 to $`l-1`$) are in local equilibrium, then the occupation probability of the minimum $`l-1`$ is given by
$$\frac{p_{l-1}}{p_1}\approx \mathrm{exp}\left(-(E_{l-1}-E_1)/kT\right)$$
(10)
Then, if the rate of interfunnel flow, $`k^+p_A`$, is equated to the rate of passage between minima $`l-1`$ and $`l`$,
$`k^+p_A`$ $`\propto `$ $`p_{l-1}\mathrm{exp}(-(E_{l-1,l}^{\ddagger }-E_{l-1})/kT)`$ (11)
$`\propto `$ $`p_1\mathrm{exp}(-(E_{l-1,l}^{\ddagger }-E_1)/kT),`$ (12)
Therefore, if the occupation probability for funnel A is dominated by the occupation probability for the lowest-energy minimum in the funnel, i.e. $`p_A\approx p_1`$ for all $`T`$, then the activation energy is equal to the energy difference between the highest-energy transition state on the lowest-barrier path and the energy of the lowest-energy minimum in the starting funnel.
We should note that in the above derivation the interfunnel probability flow is assumed to all pass through a single transition state. However, if there is competition between two paths, one with a low barrier and a small prefactor and one with a larger barrier and a large prefactor, we expect that the low-barrier path would dominate at low temperature and the high-barrier path at high temperature. This behaviour would give rise to an interfunnel rate constant with a positive curvature in an Arrhenius plot. However, the lines are either straight or have a small amount of negative curvature.
The lack of positive curvature, and the agreement between the estimated and the observed activation energies, probably indicates that the interfunnel probability flow is dominated by paths which pass through the highest-energy transition state on the lowest barrier path. It is interesting that, on a PES with so many minima and transition states, a single transition state can have such a large influence on the dynamics.
At low enough temperature, $`d(p_1/p_A)/dT\approx 0`$, and so the interfunnel barrier height can be measured with respect to the lowest-energy minimum in the starting funnel (Equation (12)). However, as the occupation probabilities of other minima in the funnel become significant relative to that for the lowest-energy minimum in the funnel, the ratio $`p_1/p_A`$ decreases, thus giving the lines in the Arrhenius plot their slight negative curvature. In other words, the apparent activation energy decreases with increasing temperature, because the barrier height should be measured with respect to some kind of average minimum energy for the funnel, perhaps $`E_A=\sum _{i\in A}p_iE_i`$. The negative curvature is most pronounced when the occupation probabilities in a funnel change considerably. For example, the $`5\times 4\times 4(L)\to 6\times 4\times 3`$ rate constant has the most curvature because minimum L has a particularly low vibrational entropy leading to population of other minima within that funnel (Figure 2).
In Table I we also compare our activation energies to the two experimental values. Our values are too large by 0.24 and 0.49 eV. There are a number of possible sources of error. Firstly, the samples of minima and transition states provide only an incomplete characterization of the PES and so it is possible that the nanocrystals are connected by undiscovered lower-barrier paths. However, we believe that our sample of minima is a good representation of the low-energy regions of the PES and consider it improbable that undiscovered pathways could account for all of the discrepancy. Secondly, in our calculations the input rate constants $`k_{ij}`$ were calculated on the basis of the harmonic approximation. Although this is likely to have a significant effect on the absolute values of the interfunnel rate constants and prefactors (the latter are too large compared to the experimental values), it should not have such a significant effect on the activation energies. Instead, we consider the most likely source of the discrepancy between theory and experiment to be inaccuracies in the potential. When polarization is included by using the Welch potential, the estimated barriers become closer to the experimental values (the discrepancies are then 0.16 and 0.34 eV) but significant differences still remain. For better agreement we may need a potential that allows the properties of the ions to depend on the local environment. Unfortunately, although such potentials have been developed for a number of systems, one does not yet exist for sodium chloride.
## IV Conclusion
The (NaCl)<sub>35</sub>Cl<sup>-</sup> potential energy surface has a multiple-funnel topography. Structurally, the different funnels correspond to rocksalt nanocrystals with different cuboidal forms. The large potential energy barriers between the funnels cause the time scales for escape from metastable nanocrystals to be far longer than those accessible by conventional dynamics simulations. Therefore, we examined the interfunnel dynamics by applying the master equation approach to a database of (NaCl)<sub>35</sub>Cl<sup>-</sup> minima and transition states. The slowest rate constant we obtained was $`k_{5\times 5\times 3\to 8\times 3\times 3}=1.46\times 10^{-8}\mathrm{s}^{-1}`$ at $`T=275`$ K.
Using a net flow index we were able to identify the eigenvalues of the transition matrix, $`𝐖`$, which correspond to interfunnel probability flow. Thus, we were able to obtain rate constants and activation energies for the interconversion of the different nanocrystals. One particularly interesting finding is that the activation energies correspond fairly closely to the potential energy differences between the highest-energy transition state on the lowest-energy path between two nanocrystals and the lowest-energy minimum of the starting nanocrystal. This is the result one might expect by a simple extrapolation from the dynamics of a simple molecular reaction. However, it holds despite the multi-step, and potentially multi-path, nature of the interfunnel dynamics. The question of whether this result is generally true for interfunnel dynamics involving large potential energy barriers or reflects some of the particulars of the (NaCl)<sub>35</sub>Cl<sup>-</sup> system is an interesting subject for further research. We already know that this simplification holds for the 38-atom Lennard-Jones cluster.
###### Acknowledgements.
J.P.K.D. is the Sir Alan Wilson Research Fellow at Emmanuel College, Cambridge. D.J.W. is grateful to the Royal Society for financial support. We would like to thank Mark Miller for helpful discussions.
# First-order chiral phase transition may naturally lead to the “quenched” initial condition and strong soft-pion fields
## ACKNOWLEDGMENTS
We thank the Yale Relativistic Heavy Ion Group for kind hospitality and support from grant no. DE-FG02-91ER-40609. Also, we gratefully acknowledge fruitful discussions with A.D. Jackson, I. Mishustin, D.H. Rischke, J. Schaffner and U.A. Wiedemann. We thank D.H. Rischke for reading the manuscript prior to publication.
# Fermi liquid theory of electronic topological transitions and screening anomalies in metals
## Abstract
General expressions for the contributions of the Van Hove singularity (VHS) in the electron density of states to the thermodynamic potential $`\mathrm{\Omega }`$ are obtained in the framework of a microscopic Fermi liquid theory. The renormalization of the singularities in $`\mathrm{\Omega }`$ connected with the Lifshitz electronic topological transition (ETT) is found. Screening anomalies due to virtual transitions between VHS and the Fermi level are considered. It is shown that, in contrast with the one-particle picture of ETT, the singularity in $`\mathrm{\Omega }`$ turns out to be two-sided for interacting electrons.
Electronic topological transitions (ETT) can take place in metals and alloys when the Van Hove singularities (VHS) of the electron density of states move across the Fermi level under the variation of external parameters. Their investigation is now a well-developed branch of solid state physics (see, e.g., the reviews ). Recently the interest in the problem has been revived by the observations of anomalies in the pressure dependence of lattice properties of Cd and Zn, which were explained in terms of ETT . To describe qualitatively the behavior of thermodynamic and transport properties of metals near ETT, correlation effects are to be taken into account since they are by no means small at typical electron densities. Up to now the influence of the interelectron interactions on ETT has been considered only for simplified models. An exact expression for the many-electron renormalization of the most singular contribution to the thermodynamic potential $`\mathrm{\Omega }`$ near ETT has been obtained in the model of nearly free electrons . In comparison with the one-particle expression for the singular contribution to the thermodynamic potential $`\mathrm{\Omega },`$ correlation effects result in the appearance of numerical factors which are expressed in terms of the effective mass of the quasiparticles and the three-leg vertex $`\gamma `$ for the wave vectors connecting VHS points in the Brillouin zone. Dzyaloshinskii demonstrated that in the two-dimensional (2D) case, in contrast with the 3D one, $`\gamma `$ diverges and, moreover, the ground state of the many-electron system can be of a non-Fermi-liquid type provided that VHS are close enough to the Fermi level (see also the recent papers ). In the 3D case the Fermi-liquid description is valid near ETT. However, the Fermi-liquid renormalization factors contain singularities themselves, and some anomalous contributions to the physical properties appear due to virtual transitions between the VHS point in the electron energy spectrum and the Fermi level (screening anomalies ). It was found in that these anomalies make the singularity in $`\mathrm{\Omega }`$ at ETT two-sided, i.e., of order of $`\left(\pm z\right)^{5/2}\theta \left(\pm z\right)-\mathrm{const}\left(\mp z\right)^{7/2}\theta \left(\mp z\right)`$ where $`z=E_F-E_c`$; $`E_F`$ and $`E_c`$ are the Fermi energy and the energy of the VHS, $`\theta \left(x>0\right)=1,`$ $`\theta \left(x<0\right)=0.`$ However, these considerations were based on the calculations of particular diagrams in perturbation theory. In this work we present a general consideration of the singularities in the thermodynamic properties of metals near ETT in the framework of the rigorous microscopic Fermi liquid theory . We restrict ourselves only to the consideration of the 3D case and do not discuss the much more complicated 2D case where the applicability of the Fermi liquid theory is doubtful.
Let $`\lambda =n𝐤\sigma `$ be the set of electron quantum numbers in a metal: band index, quasimomentum and spin projection, respectively. For the normal Fermi liquid we have the following general expression for the Green function near the chemical potential level $`\mu `$
$`G_\lambda \left(E\right)`$ $`=`$ $`{\displaystyle \frac{1}{E-\epsilon _\lambda ^0+\mu -\mathrm{\Sigma }_\lambda (E,\mu )}}`$ (1)
$`=`$ $`{\displaystyle \frac{Z_\lambda }{E+\mu -\epsilon _\lambda }}+G_\lambda ^{reg}\left(E\right),`$ (2)
where $`\epsilon _\lambda ^0`$ is the bare electron energy, $`\epsilon _\lambda `$ is the renormalized one that satisfies the equation
$$\epsilon _\lambda =\epsilon _\lambda ^0+\mathrm{\Sigma }_\lambda (\epsilon _\lambda -\mu ,\mu ),$$
(3)
$`\mathrm{\Sigma }_\lambda (E,\mu )`$ is the self-energy, $`Im\mathrm{\Sigma }_\lambda \left(E=0,\mu \right)=0,`$ $`Z_\lambda `$ is the residue of the Green function in the pole and $`G_\lambda ^{reg}\left(E\right)`$ is the regular (incoherent or nonquasiparticle) part of the Green function. To find the singular contribution to the thermodynamic potential connected with the closeness of VHS in the renormalized spectrum to the Fermi energy $`E_F=\mu `$ it is suitable to start from the Luttinger theorem
$$N=-\frac{\partial \mathrm{\Omega }}{\partial \mu }=\sum _\lambda \theta \left(\mu -\epsilon _\lambda \right)$$
(4)
where $`N`$ is the number of particles,
$`{\displaystyle \sum _\lambda }={\displaystyle \sum _{n\sigma }}{\displaystyle \frac{V_0}{\left(2\pi \right)^3}}{\displaystyle \int _{BZ}}\mathrm{𝐝𝐤}`$
BZ is the Brillouin zone, $`V_0`$ is the unit cell volume. Differentiating (3) with respect to $`\mu `$, taking into account Eq. (1), one has
$`{\displaystyle \frac{\partial N}{\partial \mu }}`$ $`=`$ $`-{\displaystyle \frac{\partial ^2\mathrm{\Omega }}{\partial \mu ^2}}={\displaystyle \sum _\lambda }\delta \left(\mu -\epsilon _\lambda \right){\displaystyle \frac{\partial \left(\mu -\epsilon _\lambda \right)}{\partial \mu }}`$ (5)
$`=`$ $`{\displaystyle \sum _\lambda }\delta \left(\mu -\epsilon _\lambda \right){\displaystyle \frac{\partial G_\lambda ^{-1}\left(E=0,\mu \right)}{\partial \mu }}Z_\lambda `$ (6)
Further, we use the set of well-known identities . First of all, the quantity
$$\frac{\partial G_\lambda ^{-1}\left(E=0,\mu \right)}{\partial \mu }=\gamma _{\lambda \lambda }$$
(7)
is the three-leg vertex describing the response to the uniform static field. It is connected with the “dynamic” response $`\gamma _{\lambda \lambda }^\omega `$ by the equation
$$\gamma _{\lambda \lambda }=\gamma _{\lambda \lambda }^\omega -\sum _\nu \mathrm{\Gamma }_{\lambda \nu }^\omega Z_\nu ^2\delta \left(\mu -\epsilon _\nu \right)\gamma _{\nu \nu }$$
(8)
where $`\mathrm{\Gamma }_{\lambda \nu }^\omega `$ is the “dynamic” limit of the four-leg vertex which is connected with the Landau Fermi-liquid interaction function
$$f_{\lambda \nu }=\frac{\delta ^2E_{tot}}{\delta n_\lambda \delta n_\nu }$$
(9)
($`E_{tot\text{ }}`$ is the total energy, $`n_\lambda `$ is the quasiparticle distribution function) by the relation
$$f_{\lambda \nu }=Z_\lambda Z_\nu \mathrm{\Gamma }_{\lambda \nu }^\omega $$
(10)
At the same time, the Ward identity gives
$$\gamma _{\lambda \lambda }^\omega =\frac{1}{Z_\lambda }$$
(11)
On substituting Eqs. (5-9) into (4) we derive the following exact expression
$$-\frac{\partial ^2\mathrm{\Omega }}{\partial \mu ^2}=\sum _\lambda \delta \left(\mu -\epsilon _\lambda \right)\stackrel{~}{\gamma }_\lambda $$
(12)
where $`\stackrel{~}{\gamma }_\lambda =\gamma _{\lambda \lambda }Z_\lambda `$ satisfies the equation
$$\stackrel{~}{\gamma }_\lambda =1-\sum _\nu f_{\lambda \nu }\delta \left(\mu -\epsilon _\nu \right)\stackrel{~}{\gamma }_\nu $$
(13)
It follows from Eq. (10) that the singular contribution to $`-\partial ^2\mathrm{\Omega }/\partial \mu ^2`$ is proportional to the singularity in the density of states at the Fermi level,
$$\rho \left(\mu \right)=\sum _\lambda \delta \left(\mu -\epsilon _\lambda \right)$$
(14)
The coefficient of the proportionality may be found from the solution of the integral equation (11) for a given quasiparticle spectrum and interaction function.
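A schematic numerical solution of the linear equation for $`\stackrel{~}{\gamma }_\lambda `$ is sketched below: the sum over $`\nu `$ with the $`\delta (\mu -\epsilon _\nu )`$ factor is replaced by a Fermi-surface average with density-of-states weights, and the resulting linear system is solved directly. The isotropic-plus-cosine Landau function, the equal patch weights and all parameter values are invented assumptions, not results for any particular metal.

```python
import numpy as np

# gamma~_a = 1 - sum_b f_ab w_b gamma~_b  on a discretized Fermi surface,
# where w_b is the density-of-states weight of Fermi-surface patch b.

n = 64                                   # patches on a (2D cut of the) Fermi surface
phi = 2 * np.pi * np.arange(n) / n
w = np.full(n, 1.0 / n)                  # equal DOS weight per patch (assumption)
F0, F1 = 0.3, 0.1                        # dimensionless Landau parameters (invented)
f = F0 + F1 * np.cos(phi[:, None] - phi[None, :])

gamma = np.linalg.solve(np.eye(n) + f * w[None, :], np.ones(n))
print(gamma[:4])                         # isotropic limit would give 1/(1+F0)
```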
This form of the result is probably most convenient to separate the singularity of order of $`\left(\pm z\right)^{5/2}\theta \left(\pm z\right)`$. To investigate the screening anomalies it is better to use another expression which may be obtained directly from Eqs. (1), (4)
$$-\frac{\partial ^2\mathrm{\Omega }}{\partial \mu ^2}=\sum _\lambda Z_\lambda \delta \left(\mu -\epsilon _\lambda \right)\left[1-\frac{\partial \mathrm{\Sigma }_\lambda \left(E=0,\mu \right)}{\partial \mu }\right]$$
(15)
One can see from the perturbation expansion that the corresponding contributions to the $`\lambda `$-dependence of $`\mathrm{\Sigma }`$ are weaker than those to its energy dependence and can be neglected when separating the main singularity. Thus the multiplier $`Z_\lambda \delta \left(\mu -\epsilon _\lambda \right)`$ in Eq. (13) turns out to be nonsingular, and we have to consider only the term with $`\partial \mathrm{\Sigma }/\partial \mu `$.
The second-order expression for the self-energy has the form (see e.g. )
$`\mathrm{\Sigma }_\lambda ^{(2)}\left(E=0,\mu \right)`$ $`=`$ $`{\displaystyle \sum _{\left\{\lambda _i\right\}}}U_{\lambda \lambda _1\lambda _3\lambda _2}^dU_{\lambda _1\lambda _2\lambda \lambda _3}`$ (17)
$`\times {\displaystyle \frac{\left[n_{\lambda _3}\left(1-n_{\lambda _1}-n_{\lambda _2}\right)+n_{\lambda _1}n_{\lambda _2}\right]}{\mu +\epsilon _{\lambda _3}-\epsilon _{\lambda _1}-\epsilon _{\lambda _2}}}`$
where
$`U_{\lambda _1\lambda _2\lambda _4\lambda _3}`$ $`=`$ $`\left\langle \lambda _1\lambda _2\left|U\right|\lambda _4\lambda _3\right\rangle `$ (18)
$`U_{\lambda \lambda _1\lambda _3\lambda _2}^d`$ $`=`$ $`2U_{\lambda \lambda _3\lambda _1\lambda _2}-U_{\lambda \lambda _3\lambda _2\lambda _1},`$ (19)
$`U`$ is the interelectron interaction. Here $`\lambda _i`$ are the orbital quantum numbers, and we restrict ourselves to the nonmagnetic case. In particular, in one-band Hubbard model one has
$$\mathrm{\Sigma }_𝐤^{(2)}\left(E=0,\mu \right)=U^2\sum _{𝐤_1𝐤_2}\frac{\left[n_{𝐤_1+𝐤_2-𝐤}\left(1-n_{𝐤_1}-n_{𝐤_2}\right)+n_{𝐤_1}n_{𝐤_2}\right]}{\mu +\epsilon _{𝐤_1+𝐤_2-𝐤}-\epsilon _{𝐤_1}-\epsilon _{𝐤_2}}$$
(20)
Averaging in $`𝐤`$ over the Brillouin zone we find after simple transformations
$$\sum _𝐤\frac{\partial ^2\mathrm{\Sigma }_𝐤^{(2)}\left(E=0,\mu \right)}{\partial \mu ^2}=\frac{3U^2\rho ^2}{8}R\left(\mu \right)$$
(21)
where
$`R(z)=P{\displaystyle \int \frac{d\epsilon \rho \left(\epsilon \right)}{z-\epsilon }}`$
is the real part of the on-site lattice Green function. Thus one obtains for the contribution $`\mathrm{\Omega }_{sa}`$ of screening anomalies to the $`\mathrm{\Omega }`$-potential near ETT $`\partial ^3\mathrm{\Omega }_{sa}/\partial \mu ^3\propto U^2R\left(\mu \right).`$ On taking into account that the singularity of order of $`\left(\pm z\right)^{1/2}\theta \left(\pm z\right)`$ in $`\rho \left(\mu \right)`$ corresponds to the singularity of order of $`\left(\mp z\right)^{1/2}\theta \left(\mp z\right)`$ in $`R\left(\mu \right),`$ we conclude from Eqs. (13), (17) that the singularity in $`\mathrm{\Omega }\left(\mu \right)`$ is two-sided, $`\delta \mathrm{\Omega }\left(\mu \right)\propto \left(\pm z\right)^{5/2}\theta \left(\pm z\right)-\mathrm{const}\left(\mp z\right)^{7/2}\theta \left(\mp z\right),`$ as it was mentioned above.
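The appearance of the square-root anomaly of $`R(\mu )`$ on the side of the transition opposite to the density-of-states edge can be checked with a simple principal-value quadrature, as sketched below. The model density of states is an invented illustration, and the exclusion of the singular bin is only a crude principal-value prescription.

```python
import numpy as np

# R(z) = P * integral of rho(eps) / (z - eps) for a model DOS with a
# square-root Van Hove onset at E_c; the non-analytic part of R appears
# for z on the opposite side of the edge from the onset of rho.

E_c, W = 0.0, 1.0                                  # edge position and bandwidth (arbitrary units)
eps = np.linspace(E_c, E_c + W, 20001)
rho = np.sqrt(eps - E_c) * (1 - (eps - E_c) / W)   # invented model DOS
deps = eps[1] - eps[0]

def R(z):
    """Principal-value integral, evaluated by excluding the singular bin."""
    d = z - eps
    mask = np.abs(d) > 1.5 * deps
    return np.sum(rho[mask] / d[mask]) * deps

for z in (-0.10, -0.01, 0.01, 0.10):
    print(f"z = {z:+.2f}   R(mu) = {R(z):+.4f}")
```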
General equations (10), (11) seem to be formally applicable also for low-dimensional systems with a non-Fermi-liquid ground state. However, in these cases the Landau function $`f_{\lambda \nu }`$ is divergent for the forward-scattering processes . We will not consider here this complicated case.
To conclude, it is worthwhile to note one more consequence of these equations. There are two types of quasiparticles in interacting Fermi systems: dynamical quasiparticles with the spectrum determined by the poles of Green functions and statistical quasiparticles, i.e. quasiparticles in the sense of Landau theory . Generally speaking, their spectra do not coincide (the difference occurs in the third order of the perturbation expansion for the paramagnetic state and in the second order for the ferromagnetic state ). However, we prove that the points of ETT found from both spectra are the same since $`\rho \left(\mu \right)`$ is determined by the dynamical quasiparticles whereas $`\mathrm{\Omega }\left(\mu \right)`$ is determined by the statistical ones. Together with the Luttinger theorem this means that both the volume and topology (but not necessarily the exact shape) of the Fermi surface are the same for these two types of quasiparticles.
# Determination of critical current density in melt-processed HTS bulks from levitation force measurements
## 1 Introduction
Superconducting systems with magnetic levitation have long been known and the discovery of high temperature superconductors (HTS) highly stimulated their investigation but a real interest in them for large scale applications appears only with the successful development of the melt-processed (MP) technology . The use of MP HTS in large scale systems such as flywheels for energy storage, electric motors and generators, permanent magnets, etc. is the most promising HTS application now . In this applied region the levitation force measurements can be considered in two roles: as an information source to know more about levitation systems and as a quick technique to test HTS samples . In many earlier works it has been shown that the forces between a PM and HTS sample are closely related with HTS magnetization curves. Vertical levitation force versus vertical distance $`F_z(z)`$ is the nearest analog to $`M(H)`$ dependencies with their major and minor hysteresis loops but the complexity of a field configuration in such large scale PM-HTS systems makes it very difficult to directly correlate them in general case. The problem can be solved by numerical approaches and some of them have been successfully used . The numerical approaches are undoubtedly useful to evaluate the real system parameters but usually need too much computer resources to be applicable to direct HTS sample investigation. To perform such an investigation an analytical evaluation is more wished for.
Two limiting cases of HTS structure have been considered as analytical to calculate the dynamic parameters of an idealized system, a point magnetic dipole over an infinite flat superconductor. The first one is the case of ‘granular superconductor’ which can be modeled by a set of small isolated superconducting grains . It was shown the usual granular HTS obtained by standard sintering techniques can be described very well within this model . The second one is the case of an ‘ideally hard superconductor’ . It was shown recently that dynamics in a wide variety of levitation systems can be described in terms of surface screening currents which screen alternating magnetic field component due to PM displacements like it would be for an ideally hard superconductor, a superconductor with infinite pinning forces. For an infinite flat superconductor the frozen-image method was introduced as an illustration of simple analytical calculation of forces acting in the system. A perfect agreements with experiments were found by us for small PM resonance oscillations frequencies and, recently, by Hull and Cansiz , for both vertical and lateral force components.
## 2 Approach description
The feasibility of the ‘ideally hard superconductor’ approach rests on the penetration depth $`\delta `$ of the alternating magnetic field being much less than the system dimensions . To calculate the stiffness or resonance frequencies the limit $`\delta \to 0`$ can be used, but it was shown that taking into account the finite values of $`\delta `$ it is possible to calculate the a.c. loss and even recover critical current density profiles within the depth $`\delta `$ from a.c. loss measurements . In this paper we present such an approach to levitation force calculation (including its hysteresis behavior) for a superconductor with finite values of the critical current density $`J_c`$ and simple methods to obtain $`J_c`$ values from levitation force experimental data.
A PM placed over ideally hard superconductor induces at its surface the screening currents $`𝐣=(c/4\pi )𝐧\times 𝐛_r`$, where $`𝐧`$ is the surface normal and $`𝐛_r`$ is the tangential magnetic field component at the surface (the normal component $`b_n`$ at the surface is zero). From the symmetry, for an infinite flat surface
$`𝐛_r=2𝐛_{ar},`$ (1)
where $`𝐛_a`$ is the variation of the PM magnetic field $`𝐁_a`$ due to its displacements with respect to the initial field-cooled (FC) position: $`𝐛_a=𝐁_a-𝐁_{aFC}`$ . For the $`z`$-axial symmetric configuration $`𝐫=(rsin\theta ,rcos\theta ,z)`$ where only the $`j_\theta `$ component is induced, for the vertical force acting on the PM from the screening currents one can write
$`F_{id}={\displaystyle \int _0^{\mathrm{\infty }}}rb_{ar}^2(r)dr.`$ (2)
This is ideal force which can be readily calculated just from known tangential component of PM field. The Eq.(2) is obtained from zero-depth screening currents approach that we will call a zero approximation of real PM-HTS systems. Within this approximation any configuration of such systems can be calculated numerically but to describe hysteresis phenomena next-order approximations have to be considered.
In the second stage (a first approximation) we will examine a model where: (i) $`\delta `$ is finite but still much less than system dimensions $`L`$, (ii) the critical state model is applicable to these samples, and (iii) critical current density is constant. The applicability of the critical state model to melt-processed HTS has been proven in many experiments and is quite acceptable here. The first condition on $`\delta `$ can be written as
$`\delta (r)\ll B_{ar}(r)\left({\displaystyle \frac{dB_{ar}(r)}{dr}}\right)^{-1}\sim L,`$ (3)
and, because of $`\delta \propto b_r`$, can always be satisfied by limiting the minimum distance between the PM and the HTS surface. One can estimate $`L\approx z+d/2`$, where $`z`$ is, here and below, the distance between the PM and the HTS surface and $`d`$ is the PM thickness.
As for condition on $`J_c`$, it is far from reality by itself since the critical current density usually depends on both magnetic field and space coordinate. But as it will be shown below we can accept this condition for levitation force measurements. This just means that in the next relation for $`j`$, the surface density of screening currents,
$`j(r,z)=\delta (r,z)J_c={\displaystyle \frac{c}{2\pi }}b_{ar}(r,z),`$ (4)
$`J_c`$ can be treated as a coefficient between $`j`$ and $`\delta `$, a coarse-grained flux penetration depth averaged over $`L`$ scale. The $`j(r)`$ function does not depend on field history but only on PM position $`z`$ in the same way as $`b_{ar}(r)`$, the distribution at the HTS surface of the PM field variation, that after cooling is a function of $`r`$ and $`z`$. Thus, in the considering approximation, the function $`\delta (r,z)`$ formally does not depend on field history but means the flux penetration depth at the first PM descent only.
Next, if we use a protocol of PM motion according to which it moves between two points: the initial or FC point $`z_{max}`$, which enters the $`b_{ar}(r,z)`$ function through the condition $`b_{ar}(r,z_{max})=0`$ ($`z_{max}=\mathrm{\infty }`$ for the ZFC case), and the lowest point $`z_{min}`$, the current distributions over the depth of the superconductor are the following. After the PM stops for the first time and begins to go up (the first ascent), the depth of the layer where currents flow remains constant and equal to its maximum value $`\delta _{max}\equiv \delta (r,z_{min})`$, but there are two regions with opposite currents. The oppositely flowing current penetrates from the top to the depth $`\delta _{\uparrow }`$, which can be obtained from (4)
$`\delta _{\uparrow }(r,z,z_{min})={\displaystyle \frac{1}{2}}(\delta (r,z_{min})-\delta (r,z)).`$ (5)
Its maximum value is $`\delta _{\uparrow }^{max}\equiv \delta _{\uparrow }(r,z_{max},z_{min})=\delta (r,z_{min})/2=\delta _{max}/2`$, so during the second descent there are three regions with $`+J_c`$ for $`0<\zeta <\delta _{\downarrow }`$ , $`-J_c`$ for $`\delta _{\downarrow }<\zeta <\delta _{max}/2`$ , and $`+J_c`$ for $`\delta _{max}/2<\zeta <\delta _{max}`$ , where $`\delta _{\downarrow }(r,z)=\delta (r,z)/2`$ also does not depend on $`z_{min}`$. If one can neglect flux creep for times greater than the descent-ascent time, any other ascents are equal to the first one and any other descents are equal to the second one. Any other current distributions for other protocols, for example to describe minor hysteresis loops, can also readily be obtained within the scheme above.
Applying this scheme to calculate the vertical forces during the first descent $`F(z)`$, the first and subsequent ascents $`F_{\uparrow }(z,z_{min})`$ and the second and subsequent descents $`F_{\downarrow }(z,z_{min})`$, one can write
$`F(z)={\displaystyle \frac{2\pi }{c}}J_c{\displaystyle \int _0^{\mathrm{\infty }}}rdr{\displaystyle \int _0^{\delta (r,z)}}d\zeta b_{ar}(r,z+\zeta ).`$ (6)
$`F_{\uparrow }(z,z_{min})={\displaystyle \frac{2\pi }{c}}J_c{\displaystyle \int _0^{\mathrm{\infty }}}rdr\left[{\displaystyle \int _{\delta _{\uparrow }}^{\delta _{max}}}d\zeta -{\displaystyle \int _0^{\delta _{\uparrow }}}d\zeta \right]b_{ar}(r,z+\zeta ).`$ (7)
$`F_{\downarrow }(z,z_{min})={\displaystyle \frac{2\pi }{c}}J_c{\displaystyle \int _0^{\mathrm{\infty }}}rdr\left[{\displaystyle \int _{\delta _{max}/2}^{\delta _{max}}}d\zeta -{\displaystyle \int _{\delta /2}^{\delta _{max}/2}}d\zeta +{\displaystyle \int _0^{\delta /2}}d\zeta \right]b_{ar}(r,z+\zeta ).`$ (8)
The functions $`\delta (r,z)`$ depend on $`J_c`$ according to the above equations ((4), (5) and below), and for $`J_c\to \mathrm{\infty }`$ all these forces become equal to $`F_{id}(z)`$.
Remaining within the condition (3), we can approximate the integrals over $`\zeta `$ in the formulas (6)-(8) by multiplying the depth of the layer where current flows by the field $`b_{ar}`$ at its center. It is easy to show that within the above approximation the formula (6), for example, can be rewritten as
$$F(z)={\displaystyle \int _0^{\mathrm{\infty }}}rb_{ar}(r,z)b_{ar}(r,z+{\displaystyle \frac{\delta }{2}})dr,$$
which highly increases the calculation speed.
## 3 Experiment
To check the applicability of the above consideration to real MP HTS we used a standard experimental setup on levitation force measurements . The SmCo<sub>5</sub> disk shape PM was 15 mm in diameter and 8 mm in thickness (the effective thickness with ferromagnetic holder that was evaluated from real PM field configuration was 12.7 mm) with averaged axial magnetization of $`4\pi M`$ = 9236 G (the field measured by Hall probe in its center at the distance of 0.8 mm from its bottom surface was 3350 G). The magnetic field of the PM was calculated as field of a coil with the same dimensions and with lateral surface current density $`J=cM`$. All measured samples were melt-processed HTS of 30$`\pm `$0.5 mm in diameter and 17.5$`\pm `$0.5 mm in thickness. The distance $`z`$ between PM bottom surface and HTS top surface varied from $`z_{max}`$ = 400 mm (that can be considered as ZFC case) to its minimum value $`z_{min}`$ = 0.5 mm. The minimum step of PM motion was 75 mm. The accuracy of force detecting was 15 mN. Within this accuracy the experimental data were reproducible for every sample. Fig.1 represents the first and second hysteresis loops (the first and second descent and ascent) for two samples.
Within the above approximation we have only one parameter, $`J_c`$, for forces (6)-(8) (or (9) and analogous ones) to be fitted to the experimental ones $`F_{exp}(z)`$. To do this, we have to choose one of these functions and one point $`z_i`$, and solve the equation
$`F(J_c,z_i)=F_{exp}(z_i).`$ (10)
The forces calculated from formulas (6)-(8) with the $`J_c`$ values obtained from (10) in $`z_{min}`$ point are also represented in Fig.1 by solid lines. The forces calculated from the formula (9) and from analogous ones practically coincide with the above in the $`F(z)`$ plot scale. A good agreement between the experimental and calculated $`F(z)`$ dependencies demonstrates the above approximation is correct.
Nevertheless, the discrepancy between the experimental and calculated forces still exists and is larger than the experimental accuracy. One of the most likely reasons is a variation of $`J_c`$ with depth and field. Fig.2 shows the values of $`J_c`$ versus the maximum value of $`B_r(z)`$ at the HTS surface for two HTS samples. The data were obtained by solving Eq.(10). Open symbols represent the solution for the function (6), and solid symbols represent the solution for the function (9). The solid line in Fig.2, with respect to the right axis, represents the dependence of $`B_{rmax}(z)`$. For a perfectly uniform sample with the $`c`$-axis exactly perpendicular to the surface such a dependence of $`J_c(B_{rmax})`$ would be uniquely determined by the dependence of $`J_c^{ab}(B_{ab})`$, the critical current density flowing in the $`ab`$-plane versus the magnetic field parallel to this plane. But for real melt processed samples, it is more reasonable to assume that the dependencies of $`J_c(B_{rmax})`$ in Fig.2 are mostly caused by space variations of the critical current density. The steep slope of the curves in Fig.2 at low field, which caused the maximum in the upper curve, is related to the finite diameter of the HTS samples and shows the lower field limit for the given configuration.
## 4 Another simple method
There is a possibility here to introduce a simple visual method to evaluate $`J_c`$. It is clear that, in the spirit of the above consideration, a shift $`\mathrm{\Delta }z`$ (see Fig.1) of the first-descent experimental curve with respect to the ideal one has to be proportional to an average penetration depth. From the condition $`F_{id}(z+\mathrm{\Delta }z)=F(J_c,z)`$ and Eq.(9) one can readily obtain
$`\delta \approx 4\mathrm{\Delta }z,\text{ or }J_c\approx {\displaystyle \frac{c}{8\pi }}{\displaystyle \frac{b_{ar}}{\mathrm{\Delta }z}}.`$ (11)
The values of $`J_c`$ evaluated in such a way are also represented in Fig.2 and show a good agreement with the ones determined before. The experimental error $`\sigma _{J_c}`$ that is shown here was estimated from the formula $`\sigma _{J_c}/J_c=\sigma _F(dF/dz)^{-1}/\mathrm{\Delta }z`$, which assumes that the maximum error is caused by the force measurement: $`\sigma _F\approx `$ 30 mN.
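For completeness, the conversion from a measured shift to a $`J_c`$ estimate via Eq. (11) is sketched below in Gaussian units; the field and shift values are hypothetical inputs, not the measured data of this work.

```python
# J_c estimate from the first-descent shift, Eq. (11); inputs are invented.
import math

c = 2.998e10                 # cm/s
b_ar_max = 1.0e3             # G, tangential field variation at the HTS surface (assumed)
delta_z = 0.03               # cm, measured shift of F(z) relative to F_id(z) (assumed)

J_c_gauss = c * b_ar_max / (8.0 * math.pi * delta_z)   # statA/cm^2
J_c_A = J_c_gauss / 2.998e9                             # -> A/cm^2
print(f"J_c ~ {J_c_A:.2e} A/cm^2")
```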
## 5 Conclusions
In summary, we have considered the approach, which we call the ”first approximation”, to describe levitation force data. The term ”first” implies that we consider a case in which such parameters as the flux penetration depth $`\delta `$ or the normal component of the magnetic field at the HTS surface $`b_n`$ are already not zero, as they are for an ideally hard superconductor , but small enough: $`\delta \ll L`$, $`b_n\ll b_r`$. Within this condition the methods to calculate $`J_c`$, the critical current density, which we have introduced in the paper are exact. Remarkably, the approach works well even beyond this condition, when $`\delta \sim L`$, $`b_n\sim b_r`$. In this region the methods become empirical. The $`J_c`$ value that can be obtained by the methods is the critical current density in the $`ab`$-plane for field parallel to this plane, averaged over the $`L`$ scale: $`J_c=\left\langle J_c^{ab}(𝐁\parallel ab)\right\rangle `$. The $`L`$ scale depends on the size of the magnet we use.
## 6 Acknowledgments
The authors would like to thank J. R. Hull for helpful discussions and T. Strasser for assistance with the experimental setup.
## 7 Figures Captions
Fig. 1. Experimental (symbols) and calculated (solid lines) data on the first and second hysteresis loops of the vertical levitation force vs. distance $`z`$ between PM and HTS surface. Dashed line represents the force for an ideal superconductor.
Fig. 2. The values of averaged critical current density versus maximum value of $`B_r`$, the magnetic field tangential component at the HTS surface, obtained by different methods. The solid line with respect to right axis represents the dependence of $`B_{rmax}(z)`$.
# COSMIC RAYS IN CLUSTERS OF GALAXIES AND RADIO HALOS
## 1 Introduction
Clusters of galaxies have recently revealed themselves as sites of high energy processes, resulting in a multiwavelength emission which extends from the radio to the gamma rays and probably beyond. In this paper we refer to the Coma cluster, because of the wide evidences now accumulated for the presence of these non-thermal phenomena.
The high energy processes which produce the observable radiations are due to the presence of a non thermal population of particles originating most likely from the cosmic ray sources in the cluster. The important role of cosmic ray electrons in Coma and in a few other clusters of galaxies has been known for a long time because of the diffuse radio emission which extends over typical spatial scales of order $`1`$ Mpc. This radiation can be interpreted as synchrotron emission of relativistic electrons in the intracluster magnetic field. However, the combination of energy losses and diffusive propagation of these electrons makes their motion from the sources extremely difficult, so that it becomes a challenge to explain the spatial extent of the diffuse radio emission, unless electrons are continuously reaccelerated in the intracluster medium (ICM) (see for a recent review). To solve this problem the secondary electron model (SEM) was first proposed in : in this model CRs (protons) diffuse on large scales because their energy losses are negligible and can produce electrons in situ as secondary products of $`pp`$ interactions with the production and decay of charged pions.
The production of charged pions is always associated with the production of neutral pions which in turn result into gamma rays mainly with energies above $`100`$ MeV. The flux of gamma radiation and high energy neutrinos due to the cosmological distribution of clusters of galaxies was calculated in and , where the resulting diffuse neutrino background was also evaluated. Fluxes from single clusters were also compared with the upper limits on the gamma ray emission from the EGRET instrument onboard the CGRO satellite.
More recently UV and hard X-ray observations of the Coma cluster have led to the first detection of large fluxes at these wavelengths. Their interpretation based on ICS of relativistic electrons off the photons of the microwave background radiation requires an intracluster magnetic field of $`B\approx 0.1\mu G`$ .
In this paper we calculate the multifrequency spectrum of the Coma cluster in the context of the SEM and compare our predictions with the results of recent observations. We find that, also for secondary electrons, the radio and hard X-ray observations imply an intracluster magnetic field $`B\approx 0.1\mu G`$ and large energy densities in CRs. As a consequence, the flux of gamma rays above $`100`$ MeV exceeds the EGRET upper limit . We also discuss alternative scenarios which do not imply large CR energy requirements.
The plan of the paper is as follows: in section 2 we describe the propagation of CRs in a cluster of galaxies; in section 3 we calculate the fluxes of secondary radio, X and gamma radiation and we present our conclusions with application to the case of the Coma cluster in section 4.
## 2 Cosmic ray propagation in clusters of galaxies
The propagation of CRs in clusters of galaxies was considered in previous works where the effect of diffusive confinement of CRs was investigated. We summarize these results in the following.
The CRs produced in a cluster of galaxies propagate diffusively in the intracluster magnetic field with a diffusion time, over a spatial scale $`R`$, that can be estimated as $`\tau \approx R^2/4D(E)`$, where $`D(E)`$ is the diffusion coefficient. Very little is known about the diffusion coefficient in clusters, but as argued in , for the bulk of CRs the diffusion time at large distances from the cluster center ($`R\sim R_c`$, with $`R_c\approx 1`$ Mpc the radius of the cluster) exceeds the age of the cluster for any reasonable choice of $`D(E)`$. Assuming the following form of the diffusion coefficient, $`D(E)=D_0E^\eta `$, the maximum energy for which CRs are confined in the cluster is $`E_c\approx (R_c^2/D_0t_0)^{1/\eta }`$, where $`t_0`$ is the age of the cluster, comparable with the age of the universe. Despite the large diffusion times in the cluster, CR protons do not suffer appreciable energy losses in the interesting energy range, so that the propagation can be simply described by a transport equation with no energy loss term (see, e.g., ). Let us first consider the case of a single point source in the center of the cluster, injecting CRs with a rate $`Q_p(E)`$. As shown in the equilibrium number density of CRs with energy $`E`$ which solves the transport equation can be written in the form
$$n_p(E,r)=\frac{Q_p(E)}{D(E)}\frac{1}{2\pi ^{3/2}r}\int _{r/r_{max}(E)}^{\mathrm{\infty }}dye^{-y^2},$$
(1)
where $`r`$ is the distance from the source and $`r_{max}(E)=\sqrt{4D(E)t_0}`$. It is interesting to note that for $`r\ll r_{max}(E)`$ the CR distribution tends to the well known time independent form: $`n_p(E,r)=Q_p(E)/(4\pi rD(E))`$ (see for details).
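Since the Gaussian integral in eq. (1) equals $`(\sqrt{\pi }/2)\mathrm{erfc}(r/r_{max})`$, the equilibrium density reduces to a simple closed form, as in the following sketch. The injection rate, diffusion normalization and diffusion index used here are placeholder values, not fits to Coma.

```python
import math

Mpc = 3.086e24      # cm
yr = 3.156e7        # s

def n_p(E_GeV, r_Mpc, Q_p=1.0, D0=1.0e29, eta=1.0 / 3.0, t0=1.0e10 * yr):
    """Equilibrium CR density of eq. (1):
       n_p = Q_p / (4 pi r D(E)) * erfc(r / r_max(E)),
    with D(E) = D0 * E^eta (cm^2/s, E in GeV) and r_max = sqrt(4 D t0).
    Q_p, D0 and eta are assumed placeholder values."""
    D = D0 * E_GeV ** eta
    r = r_Mpc * Mpc
    r_max = math.sqrt(4.0 * D * t0)
    return Q_p / (4.0 * math.pi * r * D) * math.erfc(r / r_max)

for r in (0.1, 0.5, 1.0):
    print(r, n_p(10.0, r) / n_p(10.0, 0.1))     # profile relative to r = 0.1 Mpc
```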
In the case of CRs injected homogeneously in the ICM, the equilibrium distribution of CRs can be written as
$$n_p(E)=K\frac{ϵ_{tot}}{V}p^{-\gamma },$$
(2)
where $`V`$ is the volume of the cluster, $`ϵ_{tot}`$ is the total energy in CRs injected in the cluster and we assumed that the injection spectrum is a power law in the CR momentum $`p`$. The constant $`K`$ is a normalization constant determined by energy conservation. Clearly this solution breaks down close to the boundary of the cluster and at high energies, where CRs are no longer confined in the cluster.
## 3 Cosmic ray interactions and secondary electron emission
The main interaction channel of CR protons in clusters of galaxies is represented by $`pp`$ collisions with pion production. The decay of neutral pions produces gamma rays with energy above $`100`$ MeV, while the decay of charged pions results in electrons and neutrinos. The production of gamma rays and neutrinos from clusters of galaxies was recently investigated in . The secondary electrons produced by charged pions can play a fundamental role in the explanation of not thermal emission in clusters of galaxies, and here we describe this point in a greater detail.
The ‘primary electrons’ models proposed as an explanation of radio halos in Coma-like clusters have serious problems due to the severe energy losses that make difficult the propagation of the relativistic electrons out to Mpc scales, where the diffuse radio halo emission is observed. This problem would be solved if electrons were produced or accelerated in situ. As initially proposed in this is the case for secondary electrons, generated in CR interactions with the thermal gas in the ICM. The same electrons would also produce X-rays by inverse compton scattering (ICS) off the photons of the microwave background.
The calculation of the production spectrum of secondary electrons is explained in detail in , and will be summarized here. For both radio and X-ray emission, the relevant electrons have energies above $`1`$ GeV, and it is important to have a good description of the $`pp`$ interaction in a wide range of energies. For $`pp`$ collisions at laboratory energy less than $`3`$ GeV, the pion production is well described by an isobar model, in which the pions are the result of the decay of the $`\mathrm{\Delta }`$ resonance . For energies larger than $`7-10`$ GeV we use a scaling approach. In the latter the cross section for $`pp`$ collisions depends only on the ratio of the pion energy $`E_\pi `$ and the incident proton energy $`E_p`$ ($`x=E_\pi /E_p`$) and can be written in the following form
$$\frac{d\sigma (E_p,E_\pi )}{dE_\pi }=\frac{\sigma _0}{E_\pi }f_\pi (x),$$
(3)
where $`f_\pi (x)`$ is the scaling function given in for the case of charged and neutral pions. We refer to for a detailed expression of the cross section in the low energy case.
The production electron spectrum at distance $`r`$ from the cluster center, assumed to be spherically symmetric, can be easily calculated according to the expression
$$q_e(E_e,r)=\frac{m_\pi ^2}{m_\pi ^2-m_\mu ^2}n_H(r)c\int _{E_e}^{E_p^{max}}dE_\mu \int _{E_\pi ^{min}}^{E_\pi ^{max}}dE_\pi \int _{E_{th}(E_\pi )}^{E_p^{max}}dE_p\,F_\pi (E_\pi ,E_p)F_e(E_e,E_\mu ,E_\pi )n_p(E_p,r),$$
(4)
where $`F_\pi `$ is the differential cross section for the production of a pion with energy $`E_\pi `$ in a $`pp`$ collision at energy $`E_p`$ (see for details), $`F_e`$ is the spectrum of electrons generated by the decay of a single muon with energy $`E_\mu `$ and $`n_p`$ is the spectrum of CRs. The function $`F_e`$ depends also on the pion energy because the muons produced in the pion decay are fully polarized, and this effect is taken into account here. The gas density at distance $`r`$ is assumed to follow a King profile
$$n_H(r)=n_0\left[1+\left(\frac{r}{r_0}\right)^2\right]^{-3\beta /2},$$
(5)
where, in the case of the Coma cluster, we use $`n_0=3\times 10^{-3}\mathrm{cm}^{-3}`$, $`r_0=400`$ kpc and $`\beta =0.75`$ (we use here $`H_0=60\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$).
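As an illustration, the gas profile with the Coma parameters quoted above can be evaluated directly (a sketch, using only the numbers given in the text):

```python
n0, r0, beta = 3.0e-3, 400.0, 0.75      # cm^-3, kpc, slope, as quoted above for Coma

def n_H(r_kpc):
    """King beta-model gas density of eq. (5), in cm^-3."""
    return n0 * (1.0 + (r_kpc / r0)**2) ** (-1.5 * beta)

for r in (0.0, 400.0, 1000.0):          # center, core radius, ~1 Mpc
    print(r, n_H(r))
```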
The equilibrium electron distribution, $`n_e(E_e,r)`$, is determined mainly by energy losses, dominated at high energy by ICS and synchrotron emission and at low energy by Coulomb scattering. The effect of losses is to steepen the electron spectrum by one power in energy at high energy, and to flatten it by one power in energy at low energy. In the next subsections we outline the calculations of the non-thermal radio, X-ray and gamma-ray emission from a cluster.
### 3.1 The radio halo emission
Electrons with energy $`E_e`$ in a magnetic field $`B`$ radiate by synchrotron emission photons with typical frequency
$$\nu =3.7\times 10^6B_\mu E_e^2Hz,$$
(6)
where $`B_\mu `$ is the value of the magnetic field in $`\mu G`$. The emissivity at frequency $`\nu `$ and at distance $`r`$ from the cluster center can be easily estimated as
$$j(\nu ,r)=n_e(E_e,r)\left(\frac{dE_e}{dt}\right)_{syn}\frac{dE_e}{d\nu },$$
(7)
where $`(dE_e/dt)_{syn}`$ denotes the rate of energy losses due to synchrotron emission and $`dE_e/d\nu `$ is obtained from eq. (6). The total fluence from the cluster is obtained by volume integration.
Some simple comments help in understanding the general features of the radio halo spectrum: for this purpose let us assume that CRs are confined in the cluster and that the density of intracluster gas is spatially constant. If the injected CR spectrum is $`E_p^{-\gamma }`$ then the equilibrium CR spectrum is, within the distance $`r_{max}(E)`$ defined above, a power law $`E_p^{-(\gamma +\eta )}`$. Provided the electron energy is $`E_e\gtrsim 1`$ GeV, the production electron spectrum reproduces the parent CR spectrum, so that the equilibrium electron spectrum in the same energy region is $`E_e^{-(\gamma +\eta +1)}`$. From eq. (7) it is easy to show that $`j_\nu (\nu ,r)\propto \nu ^{-(\gamma +\eta )/2}`$. The volume integration gives the observed spectrum: in the simple assumptions used here (complete confinement) the integration over the distance $`r`$ at each frequency $`\nu `$ must be limited by a maximum value $`r_{max}(\nu )\propto \nu ^{\eta /4}`$, so that the resulting radio halo spectrum is $`\propto \nu ^{-\gamma /2}`$, independent of the diffusion details. Repeating the same argument, it is easy to show that the synchrotron radiation produced by CRs not confined in the cluster volume has a spectrum as steep as $`\nu ^{-(\gamma +\eta )/2}`$. Therefore there is, in principle, a break frequency where the spectrum steepens from a power index $`\gamma /2`$ to a power index $`(\gamma +\eta )/2`$. In any realistic scenario, the steepening of the spectrum is smooth. The density profile of the intracluster gas also contributes an additional small steepening of the spectrum. All these propagation effects are self-consistently included through eq. (1).
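The bookkeeping of spectral indices in this argument can be summarized in a small sketch (purely illustrative; it only encodes the power counting described above, valid for electron energies above a few GeV):

```python
def radio_spectral_indices(gamma, eta):
    """Return (alpha_confined, alpha_unconfined), with flux ~ nu^-alpha.
    injection E^-gamma -> diffused CRs E^-(gamma+eta) -> equilibrium electrons
    E^-(gamma+eta+1) -> local emissivity nu^-(gamma+eta)/2; volume integration
    restores nu^-gamma/2 while the CRs are confined."""
    return gamma / 2.0, (gamma + eta) / 2.0

print(radio_spectral_indices(2.1, 1.0 / 3.0))   # (1.05, ~1.22)
```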
### 3.2 Non thermal X-rays
The peak energy where most of the photons are produced by ICS of electrons with energy $`E_e`$ is
$$E_X=2.7E_e^2(GeV)keV.$$
(8)
The emissivity in the form of X-rays with energy $`E_X`$ at distance $`r`$ from the cluster center is then:
$$\varphi _X(E_X,r)=n_e(E_e,r)\left(\frac{dE_e}{dt}\right)_{ICS}\frac{dE_e}{dE_X}.$$
(9)
Here $`(dE_e/dt)_{ICS}`$ is the rate of energy losses due to ICS and $`dE_e/dE_X`$ is calculated from eq. (8). As for the radio halo emission, the observed fluence is determined by volume integration.
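To make the connection between the observed bands and the electrons involved more explicit, eqs. (6) and (8) can be inverted. The sketch below assumes that $`E_e`$ in eq. (6) is expressed in GeV, as it is in eq. (8) (an assumption, since the unit is not stated there).

```python
import numpy as np

def E_e_radio(nu_Hz, B_mu):
    """Electron energy (GeV) emitting synchrotron photons at nu_Hz, from eq. (6)."""
    return np.sqrt(nu_Hz / (3.7e6 * B_mu))

def E_e_ics(E_X_keV):
    """Electron energy (GeV) upscattering CMB photons to E_X_keV, from eq. (8)."""
    return np.sqrt(E_X_keV / 2.7)

print(E_e_radio(1.4e9, 1.0))   # ~19 GeV for 1.4 GHz emission in a 1 muG field
print(E_e_ics(40.0))           # ~3.8 GeV for ~40 keV hard X-rays
```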
### 3.3 The gamma ray emission
As pointed out above, the production of neutral pions in $`pp`$ collisions results in the generation of gamma rays with typical energy $`E_\gamma \sim 100`$ MeV. The emissivity of gamma rays with energy $`E_\gamma `$ at distance $`r`$ from the center of the cluster is given by
$$j_\gamma (E_\gamma ,r)=2n_H(r)c\int _{E_\pi ^{min}(E_\gamma )}^{E_p^{max}}dE_\pi \int _{E_{th}(E_\pi )}^{E_p^{max}}dE_p\,F_{\pi ^0}(E_\pi ,E_p)\frac{n_p(E_p,r)}{(E_\pi ^2-m_\pi ^2)^{1/2}},$$
(10)
where $`E_\pi ^{min}(E_\gamma )=E_\gamma +m_{\pi ^0}^2/(4E_\gamma )`$. The function $`F_{\pi ^0}`$ is again calculated using the isobar model for proton energy less than $`3`$ GeV and with the scaling model for energies larger than $`7`$ GeV. The observed gamma ray flux is obtained by integration over the cluster volume.
In , the contribution of secondary electrons to the gamma-ray flux through bremsstrahlung was also calculated. Since this contribution is small compared with that from pion decay, we neglect the bremsstrahlung gamma-ray emission here.
## 4 Application to the Coma cluster
In this section we apply our predictions in the SEM framework to the case of the Coma cluster, for which multiwavelength observations are available.
Two extreme scenarios of CR injection are considered here: a point-like CR source in the cluster center and a uniform CR injection distributed over the cluster volume. In both cases we assume that the CR injection spectrum is a power law in momentum with power index $`2.1\le \gamma \le 2.4`$, covering the range of values expected for first order Fermi acceleration at shocks as well as for other CR acceleration mechanisms.
The diffusion of CRs in the cluster is described by a diffusion coefficient derived from a Kolmogorov spectrum of magnetic fluctuations in the cluster, according to the procedure described in . In this case the diffusion coefficient can be written as
$$D(E)=2.3\times 10^{29}E(GeV)^{1/3}B_\mu ^{-1/3}\left(\frac{l_c}{20kpc}\right)^{2/3}cm^2/s$$
(11)
where $`l_c`$ is the size of the largest eddy in the Kolmogorov spectrum.
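For orientation, eq. (11) together with the confinement condition of Sect. 2 gives the maximum energy of confined CRs. The sketch below uses illustrative values ($`R_c\simeq 1`$ Mpc and a cluster age of order the age of the universe), not fitted parameters.

```python
import numpy as np

def D(E_GeV, B_mu=1.0, l_c_kpc=20.0):
    """Kolmogorov diffusion coefficient of eq. (11), in cm^2/s."""
    return 2.3e29 * E_GeV**(1.0/3.0) * B_mu**(-1.0/3.0) * (l_c_kpc / 20.0)**(2.0/3.0)

eta = 1.0 / 3.0
D0  = D(1.0)                      # coefficient at 1 GeV for B = 1 muG, l_c = 20 kpc
R_c = 3.1e24                      # cm, ~1 Mpc
t0  = 4.0e17                      # s, of order the age of the universe
E_c = (R_c**2 / (D0 * t0))**(1.0 / eta)
print(D(10.0), E_c)               # E_c comes out of order 1e6 GeV for these numbers
```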
The procedure used to evaluate the expected fluxes is the following: we first fit the spectrum of the radio halo as given in for $`\gamma =2.1`$ and $`\gamma =2.4`$. This allows us to find the absolute normalization of the injection CR spectrum in terms of an injection luminosity $`L_p`$ as a function of the average intracluster magnetic field $`B_\mu `$. We carried out this calculation for $`B_\mu =0.1,1`$ and $`2`$. After calculating $`L_p`$ in this way, we determine the hard X-ray flux and the gamma ray flux above $`100`$ MeV according to the expressions given in the previous section. Our results for the radio and hard X-ray emission for the case of a single source are shown in Figs. 1 and 2 respectively. The three panels refer to the values $`B_\mu =0.1,1,2`$, as indicated. The normalization constants and the integral fluxes of gamma rays above $`100`$ MeV in the same cases are reported in Table 1, where the gamma ray flux is compared with the upper limit obtained by the EGRET experiment . The last column in Table 1 contains the ratio of the flux of gamma rays from electron bremsstrahlung to the flux of gamma rays from pion decay, as calculated in .
Fig. 2 shows that a joint fit of the radio and hard X-ray fluxes is possible only for low values of the magnetic field ($`B_\mu \sim 0.1`$), which in turn imply large values of $`L_p`$ (see Table 1). As a consequence, the gamma ray fluxes easily exceed the EGRET upper limit from Coma. These results indicate that the SEM cannot explain the multiwavelength observations of the Coma cluster without violating the EGRET limit. This conclusion is not appreciably changed in the case of homogeneous injection of CRs. In this case, the value of $`\gamma `$ is fixed by radio observations and is $`\gamma =2.32`$. In order to fit the hard X-ray data at the same time, an ICM magnetic field $`B_\mu \sim 0.1`$ and a total energy in CRs $`ϵ_{tot}\sim 8\times 10^{63}`$ erg are required. This value, averaged over the age of the cluster, corresponds to a typical CR luminosity of $`L_p\sim 2\times 10^{46}`$ erg/s, which implies a gamma ray flux above $`100`$ MeV in excess of the EGRET upper limit by a factor $`\sim 3`$. Therefore, also for a homogeneous CR injection, the SEM fails to explain the radio and hard X-ray observations simultaneously without exceeding the EGRET limit.
This conclusion also has important consequences for other models. In fact, we checked that the CR energy densities obtained in the present paper are comparable with those obtained assuming equipartition, as done in recent papers (e.g. ) to explain the radio, hard X-ray and UV observations. Therefore, the gamma ray limit applies to other models as well and forces the CR energy density in clusters to be some fraction of the equipartition value. Actually, as shown in , the present gamma ray observations put only weak constraints on this fraction, but future gamma ray observations will definitely do better. We can envision at least two other arguments against equipartition between CRs and the ICM in clusters of galaxies: first of all, the most powerful CR sources typically present in clusters of galaxies (see ) allow a CR energy density of only a small fraction (typically $`\sim 5\%`$) of the equipartition value. Moreover, the magnetic field derived in ($`B\sim 0.1\mu G`$), which is the main reason to invoke equipartition, is appreciably smaller than the equipartition magnetic field in the cluster, and it seems difficult to envision a scenario where CRs are in equipartition but magnetic fields are not, in particular if the origins of CRs and magnetic fields in clusters are related to each other.
At the present stage of the debate, it is therefore necessary to look for possible alternative explanations of the hard X-ray excess observed in Coma with the SAX satellite, since there is no stringent argument in favour of an ICS origin of such a hard X-ray tail. A possible alternative was proposed in , where it was speculated that the presence of a non-thermal tail in the electron distribution could account for the X-ray excess through bremsstrahlung emission. If this turns out to be the explanation of the hard X-ray excess, then the radio and hard X-ray fluxes become unrelated and magnetic fields of order $`1\mu G`$, such as those suggested by Faraday rotation measurements, would still be allowed. As a consequence, the CR energy density required to fit the radio and hard X-ray data would be of the same order as that predicted in . In this case, the SEM still remains a viable option for the origin of the Coma radio halo emission.
# Quantum hexaspherical observables for electrons
## I Introduction
In quantum field theory as well as in classical field theories such as Maxwell theory or general relativity, fields are represented as functions of coordinate parameters on a classical map of space-time. It is now a common idea that such a classical conception of localization in space-time cannot be considered as satisfactory. In particular, the difficulties met when attempting to quantize the gravitational field suggest that sizeless points in space-time have to be replaced by fuzzy spots with a size at least of the order of Planck length. A lot of work has been devoted to this idea in the domains of non commutative geometry or quantum groups .
In fact, the insistence on defining positions in space-time as physical observables rather than points on a map dates back at least to Einstein’s introduction of relativistic concepts . This idea was revived in a quantum context by Schrödinger who noticed that positions in space-time should be described as quantum observables if a proper physical meaning is to be attributed to Lorentz transformations . In non-relativistic quantum mechanics, this requirement is met for space observables but a time operator is lacking . In relativistic quantum field theory, this unacceptable difference between space and time is cleared up at the expense of abandoning the observable character of space variables as well . The absence of a standard solution to this problem has many implications in present physical theory. It makes the implementation of relativistic symmetries in a quantum framework quite unsatisfactory and plagues the attempts to build up a quantum theory including gravity .
In the present paper, we vindicate a recently proposed description of localization in space-time which associates quantum observables with positions of an event in space and time. These observables have been first defined for coincidence events between two light rays, in which case they fit Einstein definitions of clock synchronization and space-time localization while obeying the Lorentz transformation laws of classical relativity . They have canonical commutators with momenta and meet the requirements enounced by Schrödinger. This algebraic ‘quantum relativity’ framework is built on the symmetries of electromagnetic field theory. The latter include not only Lorentz transformations of special relativity but also dilatations and conformal transformations to uniformly accelerated frames .
Invariance under dilatation manifests the insensitivity of light propagation to a conformal metric factor, that is also to a change of space-time scale . Localization observables are defined in terms of Poincaré and dilatation generators. This definition holds for field states containing photons propagating in two different directions which is obviously a preliminary condition for defining a coincidence event. This condition may also be expressed by the property that the mass of the field state differs from zero. Hence, the domain of definition of localization observables does not cover the space of all field states and these observables are not self-adjoint although they are hermitian. This circumvents the common objection against the very possibility of giving a quantum definition of time . Hermitian but not self-adjoint observables are known to allow for a perfectly rigorous treatment which solves the quantum paradoxes of phase and time . Here, localization observables are defined in the enveloping division ring built on symmetry generators through a quantum algebraic calculus well defined as soon as divisions by the mass are carefully dealt with .
The shift of mass under conformal transformations to accelerated frames is then found to fit the classical redshift law but written with the quantum positions. It thus reproduces the gravitational potential arising in accelerated frames according to Einstein’s equivalence principle . Clearly the extension of these results to massive field theories is impossible as long as mass is treated as a classical constant which breaks conformal symmetry, as is the case in Dirac’s electron theory . In modern developments however, electron mass is generated through an interaction with Higgs fields and standard forms of this interaction obey conformal invariance . Mass is no longer a classical constant. It is now a quantum operator which changes under frame transformations. Conformal invariance just means that the mass unit scales as the inverse of the space-time unit so that the Planck constant is preserved . Using this assumption, it is possible to define localization observables for electrons in the same manner as for $`2`$-photon states. The redshift of mass derived from conformal symmetry is again found to fit the expectation of Einstein’s equivalence principle .
Now, it is well known from classical projective geometry that conformal symmetry in an $`n`$-dimensional space is equivalent to rotational symmetry in an $`\left(n+2\right)`$-dimensional space . In particular, conformal symmetry in Minkowski space-time is equivalent to $`\mathrm{SO}(4,2)`$ symmetry on a hyperquadric in a $`6`$-dimensional space . Dirac and Bhabha have proposed a field description of electrons in such a space and a number of connections between electrons and the $`\mathrm{SO}(4,2)`$ dynamical symmetry have been studied . In the present context where quantum localization observables have been defined, the challenge is raised of finding a representation of these observables explicitly displaying conformal $`\mathrm{SO}(4,2)`$ symmetry.
A further challenge immediately follows. According to the classical law of inertia, Newton’s equation of motion is not the same in uniformly accelerated frames as in inertial frames, which makes it incompatible with the symmetries of frame transformations. In classical relativity, this difficulty is solved by writing the law of motion as the geodesic equation which transforms covariantly under frame transformations. But this requires the introduction of a space dependent metric tensor representing a classical gravitational field . In the quantum algebraic framework, the question is raised of writing the law of motion under a form compatible with conformal dynamical symmetry.
In the present paper we will take up these challenges. We will show that localization of electrons in space-time may be written in terms of quantum hexaspherical observables transformed as components of a $`\mathrm{SO}(4,2)`$ vector under $`\mathrm{SO}(4,2)`$ rotations, that is also conformal transformations to accelerated frames. We will exhibit the close connection between hexaspherical variables and mass, thus extending known results of classical projective geometry. We will finally demonstrate that this representation allows one to write a quantum form of the law of free fall which respects conformal symmetry.
The four next sections are mainly devoted to algebraic developments. The physical significance of the results is discussed in the concluding section.
## II Classical hexaspherical coordinates
Before addressing the localization problem in a quantum context, we recall the definition of hexaspherical coordinates in a classical space-time representation. To this aim, we first review the conformal representation of accelerated frames in classical relativity and then introduce hexaspherical coordinates, which constitute a natural extension of space-time coordinates. We also discuss the important role played by the conformal factor.
In classical relativity, uniformly accelerated frames may be identified as flat conformal frames with a metric tensor $`\lambda ^2\left(x\right)\eta _{\mu \nu }`$ proportional to the Minkowski metric
$`\eta _{\mu \nu }`$ $`=`$ $`\mathrm{diag}(1,-1,-1,-1)`$ (1)
$`\mu ,\nu `$ $`=`$ $`0\mathrm{}3`$ (2)
The conformal factor $`\lambda \left(x\right)`$ depends on position in accelerated frames. This dependence is not arbitrary since the metric corresponds to a null curvature. Flat conformal frames are transformed into one another under conformal coordinate transformations generated by Poincaré transformations and inversions or, equivalently, by Poincaré transformations, dilatations and Bateman-Cunningham transformations
$$\overline{x}^\mu =\frac{x^\mu -x^2\alpha ^\mu }{1-2\alpha _\mu x^\mu +\alpha ^2x^2}$$
(3)
The velocity of light is set to unity. In (3), $`x^\mu `$ and $`\overline{x}^\mu `$ represent the coordinates of a point in two maps of classical space-time. The transformations (3) form a group which extends the symmetry principles of special relativity to uniform accelerations. In particular, they describe the change of the conformal factor
$$\overline{\lambda }\left(\overline{x}\right)=\left(1-2\alpha _\mu x^\mu +\alpha ^2x^2\right)\lambda \left(x\right)$$
(4)
It is always possible to bring the conformal factor $`\lambda `$ back to an inertial one, with $`\overline{\lambda }`$ independent of $`\overline{x}`$, by applying a well-chosen conformal transformation. Accordingly, geodesic motion in conformal accelerated frames corresponds exactly to the usual relativistic definition of uniformly accelerated motion .
Hexaspherical coordinates $`y_a`$ can be associated with a point $`x`$ in classical space-time through
$`y_{-}+y_+`$ $`=`$ $`\lambda `$ (5)
$`y_\mu `$ $`=`$ $`\lambda x_\mu `$ (6)
$`y_+-y_{-}`$ $`=`$ $`\lambda x^2`$ (7)
Indices in ordinary $`4d`$ space-time are labelled by Greek letters ($`\mu =0\mathrm{}3`$) and manipulated with the Minkowski metric used throughout the paper to raise or lower indices and to evaluate squared vectors
$$x^2\equiv \eta _{\mu \nu }x^\mu x^\nu $$
(8)
Notice that we keep this convention in accelerated frames in contradistinction with the standard covariance convention. Meanwhile, indices in hexaspherical $`6d`$ space are labelled by Latin letters, with $``$ and $`+`$ denoting additional dimensions, and they are manipulated with the $`6d`$ metric
$`\eta _{ab}`$ $`=`$ $`\mathrm{diag}(1,-1,1,-1,-1,-1)`$ (9)
$`a,b`$ $`=`$ $`,+,0\mathrm{}3`$ (10)
Hexaspherical coordinates $`y_a`$ associated with points $`x_\mu `$ of ordinary space-time $`𝒮`$ lie on a quadric $`𝒬`$
$$y^2\equiv \eta _{ab}y^ay^b=0$$
(11)
Both notations (8) and (11) will be used in the following depending on the context, the first one for points in ordinary space-time and the second one for $`6d`$ coordinates.
The relation (7) between points of ordinary space-time $`𝒮`$ and their hexaspherical representatives is a stereographic projection of $`𝒬`$ onto $`𝒮`$, that is also an inversion. Usually, hexaspherical coordinates $`y_a`$ are projective coordinates so that the definition of the factor $`\lambda `$ is not fixed by equation (7). Choosing for this factor the $`x`$-dependent conformal factor $`\lambda \left(x\right)`$ is however particularly appropriate for different reasons.
First, this choice allows one to write a simple relation between the $`6d`$ distance $`\left(yy^{}\right)^2`$ of two points on $`𝒬`$ and the metric distance of the two points in $`𝒮`$
$$\left(y-y^{\prime }\right)^2=\lambda \left(x\right)\lambda \left(x^{\prime }\right)\left(x-x^{\prime }\right)^2$$
(12)
This implies that two points in $`𝒮`$ with a light-like separation have their hexaspherical representatives on $`𝒬`$ also conjugated with respect to $`𝒬`$
$$\left(x-x^{\prime }\right)^2=0\qquad \Longleftrightarrow \qquad y^2=y^{\prime 2}=y^ay_a^{\prime }=0$$
(13)
Hence, the quadric $`𝒬`$ contains straight lines of points conjugated to each other which are hexaspherical images of ordinary light rays in $`𝒮`$.
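The following symbolic sketch (an illustration only, assuming the signature conventions $`\eta _{\mu \nu }=\mathrm{diag}(1,-1,-1,-1)`$ and $`\eta _{ab}=\mathrm{diag}(1,-1,1,-1,-1,-1)`$ written above) checks that the map (7) indeed lands on the quadric (11) and satisfies the distance relation (12):

```python
import sympy as sp

lam, lamp = sp.symbols('lambda lambda_p', positive=True)
x  = sp.symbols('x0:4')
xp = sp.symbols('xp0:4')
eta4 = [1, -1, -1, -1]

def hexa(l, xv):
    # hexaspherical image of a point: y_- + y_+ = l, y_+ - y_- = l*x^2, y_mu = l*x_mu
    x2 = sum(e * v**2 for e, v in zip(eta4, xv))
    return [l * (1 - x2) / 2, l * (1 + x2) / 2] + [l * v for v in xv]

eta6 = [1, -1, 1, -1, -1, -1]           # order (-, +, 0, 1, 2, 3)

def dot6(u, v):
    return sum(e * a * b for e, a, b in zip(eta6, u, v))

y, yp = hexa(lam, x), hexa(lamp, xp)
print(sp.simplify(dot6(y, y)))          # -> 0 : the image lies on the quadric, eq. (11)
diff = [a - b for a, b in zip(y, yp)]
d4   = sum(e * (a - b)**2 for e, a, b in zip(eta4, x, xp))
print(sp.simplify(dot6(diff, diff) - lam * lamp * d4))   # -> 0 : eq. (12)
```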
Then, conformal coordinate transformations in $`𝒮`$ are given by mere rotations of hexaspherical coordinates on $`𝒬`$. In particular, conformal transformations to accelerated frames (3) correspond to
$`\overline{y}_{-}+\overline{y}_+`$ $`=`$ $`y_{-}+y_+-2\alpha ^\mu y_\mu +\alpha ^2\left(y_+-y_{-}\right)`$ (14)
$`\overline{y}_\mu `$ $`=`$ $`y_\mu -\alpha _\mu \left(y_+-y_{-}\right)`$ (15)
$`\overline{y}_+-\overline{y}_{-}`$ $`=`$ $`y_+-y_{-}`$ (16)
The transformation (4) of the conformal factor is just the first line in the preceding equation.
Finally, a light ray remains a light ray under conformal transformations to accelerated frames. The hexaspherical scalar $`y^ay_a^{\prime }`$ is preserved by rotations (16) so that, as a consequence of (12), $`\lambda \left(x\right)\lambda \left(x^{\prime }\right)\left(x-x^{\prime }\right)^2`$ is preserved under conformal frame transformations. This is exactly the property which is needed to demonstrate the conformal invariance of electromagnetic vacuum .
At the limit of neighbouring points, the invariance of the hexaspherical scalar (12) is read as a metric property
$$\left(\mathrm{d}y\right)^2=\lambda ^2\left(\mathrm{d}x\right)^2$$
(17)
As a matter of fact, $`\sqrt{\left(\mathrm{d}x\right)^2}`$ is the Lorentz interval defined in all frames in terms of the Minkowski tensor $`\eta _{\mu \nu }`$ and its product by the conformal factor $`\lambda `$ is the proper time interval. The invariance of this proper time interval under transformations to accelerated frames is here associated with conformal symmetry.
Up to now we have restricted our attention to hexaspherical points lying on the quadric $`𝒬`$. Points lying outside $`𝒬`$ also have a well known interpretation in classical projective geometry . Any point $`y_a`$ in the $`6d`$ space indeed defines a hyperplane of points $`y_a^{\prime }`$ conjugated to it $`\left(y^ay_a^{\prime }=0\right)`$ with respect to $`𝒬`$. The intersection of this hyperplane with $`𝒬`$ is the hexaspherical image of a hyperboloid $`\mathcal{H}_y`$ in ordinary space-time $`𝒮`$
$$y^ay_a^{\prime }=y^{\prime 2}=0\qquad \Longleftrightarrow \qquad x^{\prime }\in \mathcal{H}_y$$
(18)
where $`x^{\prime }`$ and $`y^{\prime }`$ are related by (7). The characteristic elements of this hyperboloid, namely its center $`\omega `$ and radius or waist size $`\rho `$, are related to the hexaspherical coordinates $`y^a`$
$`x^{\prime }\in \mathcal{H}_y`$ $`\Longleftrightarrow `$ $`\left(x^{\prime }-\omega \right)^2+\rho ^2=0`$ (19)
$`y_{-}+y_+`$ $`=`$ $`\lambda `$ (20)
$`y_\mu `$ $`=`$ $`\lambda \omega _\mu `$ (21)
$`y_+-y_{-}`$ $`=`$ $`\lambda \left(\omega ^2+\rho ^2\right)`$ (22)
This relation is such that
$$y^2=-\lambda ^2\rho ^2$$
(23)
The particular case of a null radius $`\rho =0`$ corresponds to points $`y_a`$ which lie on $`𝒬`$. In this case the hyperboloid is degenerated into the light cone issued from the point $`\omega `$ that is also the set of all light rays which intersect this point. In the general case of a non null radius, the hyperboloid may still be built up as a collection of light rays but these light rays no longer intersect the same point.
As previously, $`y_a`$ are projective coordinates of $`_y`$ so that the choice of $`\lambda `$ is not fixed. We now choose $`\lambda `$ as inversely proportional to the radius $`\rho `$
$$\lambda ^2=\frac{k^2}{\rho ^2}\qquad \Longrightarrow \qquad y^2=-k^2$$
(24)
The factor $`\lambda `$ is a conformal factor now associated with $`_y`$ rather than with a point. A given hyperboloid $`_y`$ is transformed into another hyperboloid $`_y^{}`$ under conformal frame transformations and this transformation is still described by the rotation (16) of hexaspherical coordinates (22). Since the factor $`k`$ is preserved under conformal transformations (16), it may be eliminated from the transformation of the characteristic elements of hyperboloids
$`{\displaystyle \frac{1}{\overline{\rho }}}={\displaystyle \frac{1-2\alpha ^\mu \omega _\mu +\alpha ^2\left(\omega ^2+\rho ^2\right)}{\rho }}`$ (25)
$`{\displaystyle \frac{\overline{\omega }_\mu }{\overline{\rho }}}={\displaystyle \frac{\omega _\mu -\alpha _\mu \left(\omega ^2+\rho ^2\right)}{\rho }}`$ (26)
As (24), these relations show that the radius $`\rho `$ encodes metric information in projective geometry. It is preserved for Poincaré transformations but changed as a conformal factor for dilatations and transformations to accelerated frames. Equations (26) thus generalize the laws of differential geometry in a manner which now depends not only on a position $`\omega _\mu `$ but also on a spot size, the radius $`\rho `$. In the limiting case of an infinitesimal radius $`\rho \rightarrow 0`$, the conformal factor takes just its standard form and the laws of differential geometry are recovered.
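As a consistency check (a sketch, using the sign conventions adopted above), inserting the representation (22) with $`\lambda =k/\rho `$ into the rotation (16) gives

$$\overline{\lambda }=\lambda \left[1-2\alpha ^\mu \omega _\mu +\alpha ^2\left(\omega ^2+\rho ^2\right)\right],\qquad \overline{y}_\mu =\lambda \left[\omega _\mu -\alpha _\mu \left(\omega ^2+\rho ^2\right)\right],$$

and dividing by the invariant $`k=\lambda \rho =\overline{\lambda }\overline{\rho }`$ reproduces eq. (26) term by term.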
We have discussed in some detail these results of classical projective geometry because they announce quantum properties to be obtained in the following where the conformal factor $`\lambda `$ and the projective constant $`k`$ will be replaced respectively by the electron mass and the Planck constant.
## III Quantum localization observables
We come now to the definition of quantum localization observables. This definition will be based upon the algebraic properties obeyed by the generators of the symmetries involved in localization.
We first recall the commutators of Poincaré and dilatation generators
$`(P_\mu ,P_\nu )`$ $`=`$ $`0`$ (27)
$`(J_{\mu \nu },P_\rho )`$ $`=`$ $`\eta _{\nu \rho }P_\mu -\eta _{\mu \rho }P_\nu `$ (28)
$`(J_{\mu \nu },J_{\rho \sigma })`$ $`=`$ $`\eta _{\nu \rho }J_{\mu \sigma }+\eta _{\mu \sigma }J_{\nu \rho }-\eta _{\mu \rho }J_{\nu \sigma }-\eta _{\nu \sigma }J_{\mu \rho }`$ (29)
$`(D,P_\mu )`$ $`=`$ $`P_\mu `$ (30)
$`(D,J_{\mu \nu })`$ $`=`$ $`0`$ (31)
$`P_\mu `$ and $`J_{\mu \nu }`$ are the components of energy-momentum vector and angular momentum tensor. $`D`$ is the generator of dilatations. Algebraic relations (31) represent at the same time quantum relations between observables and actions of relativistic symmetries on these observables. It is convenient to denote commutators as brackets $`(A,B)`$ related to the usual quantum notation $`[A,B]`$
$$(A,B)\equiv \frac{[A,B]}{i\mathrm{}}\equiv \frac{AB-BA}{i\mathrm{}}$$
(32)
Notice that the Planck constant $`\mathrm{}`$ is kept as the characteristic scale of quantum effects. Commutators obey the Jacobi identity
$$((A,B),C)=(A,(B,C))-(B,(A,C))$$
(33)
As discussed in the Introduction, the electron mass should no longer be considered as a classical constant but as a quantum operator. Forthcoming developments will not depend on a particular underlying quantum field theory but only on the hypothesis of conformal symmetry. We will introduce the operator $`M`$ according to the relativistic definition of mass
$`(P_\mu ,M)=(J_{\mu \nu },M)=0`$ (34)
$`(D,M)=M`$ (35)
$`M^2=P^2`$ (36)
Mass is invariant under Poincaré transformations and it has the same conformal weight as energy-momentum.
The definition and properties of localization observables are deduced from conformal algebra. Spin observables are first defined through the Pauli-Lubanski vector and the spin tensor $`S_{\mu \nu }`$
$`S_\mu {\displaystyle \frac{1}{2}}ϵ_{\mu \nu \rho \sigma }J^{\nu \rho }{\displaystyle \frac{P^\sigma }{M}}`$ (37)
$`S_{\mu \nu }=(S_\mu ,S_\nu )`$ (38)
$`ϵ_{\mu \nu \lambda \rho }`$ is the completely antisymmetric Lorentz tensor. The square modulus of the Lorentz vector $`S^\mu `$ is a Lorentz scalar $`S^2`$ with its standard form in terms of a spin number $`s`$ fixed to the value $`\frac{1}{2}`$ in the following
$$S^2=-\mathrm{}^2s\left(s+1\right)=-\frac{3}{4}\mathrm{}^2$$
(39)
Position observables are then defined as
$$X_\mu =\frac{P_\mu }{M^2}D+\frac{P^\rho }{M^2}J_{\rho \mu }$$
(40)
The dot symbol denotes a symmetrized product for non commuting observables
$$A\cdot B\equiv \frac{AB+BA}{2}$$
(41)
It has to be manipulated with care since it is not associative
$$A\cdot \left(B\cdot C\right)-\left(A\cdot B\right)\cdot C=\frac{\mathrm{}^2}{4}(B,(A,C))$$
(42)
We will also use a symmetrized division
$$\frac{A}{B}\equiv A\cdot \frac{1}{B}$$
(43)
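As an illustration of how the symmetrized product fails to be associative, the following numerical sketch (not taken from the text; it sets the Planck constant to 1 and uses random Hermitian matrices) checks identity (42) directly:

```python
import numpy as np

hbar = 1.0
rng = np.random.default_rng(0)

def herm(n=4):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return m + m.conj().T            # random Hermitian matrix

def sym(a, b):                       # symmetrized product A.B, eq. (41)
    return (a @ b + b @ a) / 2

def brk(a, b):                       # quantum bracket (A,B) = [A,B]/(i hbar), eq. (32)
    return (a @ b - b @ a) / (1j * hbar)

A, B, C = herm(), herm(), herm()
lhs = sym(A, sym(B, C)) - sym(sym(A, B), C)
rhs = (hbar**2 / 4) * brk(B, brk(A, C))
print(np.allclose(lhs, rhs))         # True: eq. (42)
```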
Poincaré and dilatation generators take their usual form in terms of localization observables
$`J_{\mu \nu }`$ $`=`$ $`P_\mu \cdot X_\nu -P_\nu \cdot X_\mu +S_{\mu \nu }`$ (44)
$`D`$ $`=`$ $`P^\mu X_\mu `$ (45)
The shifts of positions under translations, dilatation and rotations also have the classical expressions
$`(P_\mu ,X_\nu )`$ $`=`$ $`\eta _{\mu \nu }`$ (46)
$`(D,X_\mu )`$ $`=`$ $`-X_\mu `$ (47)
$`(J_{\mu \nu },X_\rho )`$ $`=`$ $`\eta _{\nu \rho }X_\mu -\eta _{\mu \rho }X_\nu `$ (48)
Positions in space-time are thus defined as conjugate with respect to momentum observables while properly representing Lorentz symmetry. These results meet the requirements enounced by Schrödinger and have to be contrasted with previous studies of the localization problem where only positions in space were introduced . Different position components do not commute in the presence of a non vanishing spin
$$(X_\mu ,X_\nu )=\frac{S_{\mu \nu }}{M^2}$$
(49)
This indicates that quantum objects cannot be treated as sizeless points.
Symmetry generators have to be thought of as integrals built on the quantum stress tensor associated with the electron. The squared mass $`M^2`$ is defined in terms of momenta while position observables are obtained in the division ring built on symmetry algebra (31). Hence these observables are highly non linear expressions built on integrals of electron stress tensor. They are hermitian but not self-adjoint observables . As recalled in the Introduction, this is not a deficiency but rather a mandatory condition for solving difficulties which are otherwise inescapable.
In a quantum algebraic approach, frame transformations of observables are described as conjugations by group elements. Since such conjugations preserve commutation relations as well as products, any algebraic relation valid in a given frame also holds in any other one. As far as inertial frames are concerned, this property constitutes the very essence of the principle of relativity. Here, this principle is extended to dilatations, that is to say to changes of units which preserve the velocity of light and Planck constant $`\mathrm{}`$, and to conformal transformations to accelerated frames. In the following we will focus our attention on the latter which correspond to classical transformations (3) and are obtained here by exponentiating infinitesimal generators $`C_\mu `$
$`\overline{A}`$ $`=`$ $`\mathrm{exp}\left(-{\displaystyle \frac{\alpha ^\mu C_\mu }{i\mathrm{}}}\right)A\mathrm{exp}\left({\displaystyle \frac{\alpha ^\mu C_\mu }{i\mathrm{}}}\right)`$ (50)
$`=`$ $`A+\alpha ^\mu (A,C_\mu )+{\displaystyle \frac{\alpha ^\mu \alpha ^\nu }{2}}((A,C_\mu ),C_\nu )+\mathrm{\dots }`$ (51)
The classical parameters $`\alpha ^\mu `$ are acceleration components along the $`4`$ space-time directions. Positions and momenta transformed according to these relations preserve the canonical commutators since $`\eta _{\mu \nu }`$ is a classical number invariant under conjugations. Quantum algebraic relations are written in all frames in terms of the same Minkowski metric which, as already stated, stands in contradistinction with covariance convention.
The relativistic effects of acceleration are recovered when the results of group conjugations are evaluated. As an important example, the redshifts of an observable under conjugations (51) can be obtained from the definition of this observable and from the commutators of the generators $`C_\mu `$ with other conformal generators
$`(D,C_\mu )`$ $`=`$ $`-C_\mu `$ (52)
$`(P_\mu ,C_\nu )`$ $`=`$ $`2\eta _{\mu \nu }D-2J_{\mu \nu }`$ (53)
$`(C_\mu ,C_\nu )`$ $`=`$ $`0`$ (54)
$`(J_{\mu \nu },C_\rho )`$ $`=`$ $`\eta _{\nu \rho }C_\mu -\eta _{\mu \rho }C_\nu `$ (55)
The general problem of evaluating the shifts of observables under transformations to accelerated frames is greatly simplified when the spin number $`s`$ is preserved. In this case, closed expressions can be derived for the generators $`C_\mu `$ in terms of Poincaré and dilatation generators . We assume that this is the case for electrons which have a spin number $`s=\frac{1}{2}`$ in all frames and we restrict our attention to the simplest form of the expression of $`C_\mu `$
$$C_\mu =2D\cdot X_\mu -P_\mu \cdot \left(X^2+\frac{3}{4}\frac{\mathrm{}^2}{M^2}\right)+2X^\rho \cdot S_{\rho \mu }$$
(56)
Electron spin can only take the two values $`\pm \frac{\mathrm{}}{2}`$ when measured along any direction transverse to momentum. This property is expressed as the following relation between spin and momentum observables
$$S_\mu \cdot S_\nu =-\frac{\mathrm{}^2}{4}\left(\eta _{\mu \nu }-\frac{P_\mu P_\nu }{M^2}\right)$$
(57)
Taken with the general results of the present section, these assumptions are sufficient to build up a theory of electrons in uniformly accelerated as well as inertial frames .
## IV Quantum hexaspherical observables
In classical theory, hexaspherical variables have been built on positions and the conformal factor. We now generalize this definition to the quantum algebraic framework by letting the mass observable play the role of the conformal factor.
To this aim, we consider the shift of mass under transformations (51) to accelerated frames. We first obtain the action of $`C_\mu `$ on mass
$`(C_\mu ,M)=2Y_\mu `$ (58)
$`Y_\mu =MX_\mu `$ (59)
and then iterate this action by making use of (57)
$`(C_\mu ,Y_\nu )=\eta _{\mu \nu }\left(MX^2+{\displaystyle \frac{3}{4}}{\displaystyle \frac{\mathrm{}^2}{M}}\right)`$ (60)
$`(C_\mu ,MX^2+{\displaystyle \frac{3}{4}}{\displaystyle \frac{\mathrm{}^2}{M}})=0`$ (61)
As a consequence, the transformed mass (51) is a second-order polynomial of the acceleration parameters. Moreover, quantum hexaspherical observables may be defined which transform as classical hexaspherical coordinates under frame transformations
$`Y_++Y_{-}=M`$ (62)
$`Y_\mu =M\cdot X_\mu `$ (63)
$`Y_+-Y_{-}=M\cdot X^2+{\displaystyle \frac{3}{4}}{\displaystyle \frac{\mathrm{}^2}{M}}`$ (64)
Precisely, these observables have their shifts under finite transformations to accelerated frames (51) read as the classical laws (16). The shifts are now written in terms of the quantum observables $`Y_a`$ and they have to be dealt with care since they involve operators which do not commute with each other.
With this remark kept in mind, we write the transformation of quantum observables $`Y_a`$ as
$`\overline{M}=M-2\alpha ^\mu Y_\mu +\alpha ^2\left(Y_+-Y_{-}\right)`$ (65)
$`\overline{Y}_\mu =Y_\mu -\alpha _\mu \left(Y_+-Y_{-}\right)`$ (66)
$`\overline{Y}_+-\overline{Y}_{-}=Y_+-Y_{-}`$ (67)
$`\overline{Y}^2=Y^2=-\mathrm{}^2`$ (68)
As for classical variables, $`Y^2`$ is evaluated in $`6d`$ space whereas the notation $`X^2`$ refers to Minkowski space. Relations (64-68) are quantum analogs of the classical expressions (22-26) with the classical conformal factor $`\lambda `$ identified as the quantum mass and the classical projective constant $`k`$ identified as the Planck constant
$`\rho ^2`$ $`=`$ $`s\left(s+1\right){\displaystyle \frac{\mathrm{}^2}{M^2}}={\displaystyle \frac{3}{4}}{\displaystyle \frac{\mathrm{}^2}{M^2}}`$ (69)
$`\lambda ^2`$ $`=`$ $`{\displaystyle \frac{M^2}{s\left(s+1\right)}}`$ (70)
$`k^2`$ $`=`$ $`\mathrm{}^2`$ (71)
The inverse relation of (68) is simply obtained by exchanging the roles of the two frames and changing the sign of acceleration parameters $`\alpha _\mu `$.
We now write the various commutation relations in a form explicitly displaying rotation symmetry in $`6d`$ space. To this aim, the $`15`$ conformal generators are identified as rotation generators $`J_{ab}`$ in a $`6d`$ space which extend the generators $`J_{\mu \nu }`$ of Lorentz transformations in ordinary space-time
$`P_\mu `$ $`=`$ $`J_{+\mu }+J_{-\mu }`$ (72)
$`D`$ $`=`$ $`J_{+-}`$ (73)
$`C_\mu `$ $`=`$ $`J_{+\mu }-J_{-\mu }`$ (74)
The whole set of conformal commutators (31,55) is then collected in a single relation
$$(J_{ab},J_{cd})=\eta _{bc}J_{ad}+\eta _{ad}J_{bc}-\eta _{ac}J_{bd}-\eta _{bd}J_{ac}$$
(75)
which is just the definition of SO$`(4,2)`$ symmetry. Then the commutators (59,61), together with relations (36,48), are gathered in a single relation
$$(J_{ab},Y_c)=\eta _{bc}Y_a-\eta _{ac}Y_b$$
(76)
which means that the variables $`Y_a`$ are transformed as components of a SO$`(4,2)`$ vector under SO$`(4,2)`$ rotations. In particular, shifts (68) under finite transformations to accelerated frames are direct consequences of (76).
We have now written the quantum algebraic description of electrons in terms of relations quite analogous to classical projective geometry. But this description is no longer classical and, in particular, quantum hexaspherical observables do not commute. Their commutators are deduced from previously written results
$`(Y_\mu ,M)=P_\mu `$ (77)
$`(Y_\mu ,Y_\nu )=J_{\mu \nu }`$ (78)
$`(Y_+-Y_{-},M)=2D`$ (79)
$`(Y_+-Y_{-},Y_\mu )=C_\mu `$ (80)
and they may be collected in a single SO$`(4,2)`$ expression
$$(Y_a,Y_b)=J_{ab}$$
(81)
## V The law of free fall
As already emphasized, the mass observable takes the place of the conformal factor in the quantum algebraic framework. We will now show that quantum mass effectively allows one to write the law of free fall in a constant gravity field. To this aim, we will consider an inertial frame with generators $`\overline{J}_{ab}`$ and hexaspherical observables $`\overline{Y}_a`$ as well as a second frame, with generators $`J_{ab}`$ and hexaspherical observables $`Y_a`$, which is accelerated with respect to the inertial one. The trajectories defined as inertial in the inertial frame do appear as accelerated in the accelerated frame. In other words, they are the geodesic trajectories in the constant gravity field associated with this uniform acceleration.
We first remark that the concept of motion may be defined in the quantum algebraic framework as the action of a commutator with the inertial mass observable $`\overline{M}`$
$$F^{\prime }=(F,\overline{M})$$
(82)
As a consequence of Jacobi identity, the Leibniz rule is obeyed by this differentiation operator
$$\left(FG\right)^{\prime }=F^{\prime }G+FG^{\prime }$$
(83)
This would be true for the commutator with any observable but the choice of inertial mass $`\overline{M}`$ as the generator of motion leads to conservation of Poincaré generators in the inertial frame
$$\overline{P}_\mu ^{\prime }=\overline{J}_{\mu \nu }^{\prime }=0$$
(84)
The laws of inertial motion may also be written
$`\overline{Y}_\mu ^{\prime }`$ $`=`$ $`\overline{M}\overline{X}_\mu ^{\prime }=\overline{P}_\mu `$ (85)
$`\overline{Y}_\mu ^{\prime \prime }`$ $`=`$ $`\overline{M}\overline{X}_\mu ^{\prime \prime }=0`$ (86)
The choice of $`\overline{M}`$ for generating motion fixes the definition of inertial frames but motion can as well be written in accelerated frames. The inertial mass $`\overline{M}`$ may indeed be expressed in terms of the mass $`M`$ evaluated in the accelerated frame and of a position dependent conformal factor $`\mathrm{\Lambda }`$
$`\overline{M}`$ $`=`$ $`{\displaystyle \frac{M}{\mathrm{\Lambda }}}`$ (87)
$`{\displaystyle \frac{1}{\mathrm{\Lambda }}}`$ $`=`$ $`1-2\alpha ^\mu X_\mu +\alpha ^2\left(X^2+{\displaystyle \frac{3}{4}}{\displaystyle \frac{\mathrm{}^2}{M^2}}\right)`$ (88)
The latter is now a quantum operator which depends on quantum localization observables $`X_\mu `$ and $`M`$. The position dependence has nearly the same form as in the classical case except for the last term which is proportional to the squared spin. The motion of any observable evaluated in the accelerated frame, say the position $`X_\mu `$, is then obtained as its commutator with $`\frac{M}{\mathrm{\Lambda }}`$. The expressions obtained in this manner are quantum extensions of the laws of geodesic motion of classical relativity. They contain classically looking terms arising from the canonical commutators between momenta and positions and purely quantum terms depending on spin.
At this point, it is worth emphasizing that these spin terms are direct consequences of symmetry considerations. Quantum hexaspherical observables do not commute and their commutators are equal to the rotation generators. For ordinary space-time indices in particular, the commutator $`(Y_\mu ,Y_\nu )`$ is just equal to the ordinary angular momentum $`J_{\mu \nu }`$. It contains an orbital part which corresponds to the canonical commutators (48) between momenta and positions. It also involves a spin part which fits the commutator (49) between different position components. Hence, the fact that position components do not commute and have spin components as their commutators is directly connected with conformal dynamical symmetry. In the present quantum algebraic approach, the equivalence principle is nothing but another expression for this dynamical symmetry and the spin terms appearing in the equations of geodesic motion are consequences of this principle.
Quantum geodesic equations may be laid down in a much simpler manner by using hexaspherical observables. As the observables $`Y_a`$ are linear superpositions of $`\overline{Y}_a`$ (see (68)), the quantum laws of free fall are obtained as
$`Y_\mu ^{\prime \prime }=2\alpha _\mu \overline{M}`$ (89)
$`M^{\prime \prime }=2\alpha ^2\overline{M}`$ (90)
$`Y_+^{\prime \prime }-Y_{-}^{\prime \prime }=2\overline{M}`$ (91)
The first equation describes a force $`Y_\mu ^{\prime \prime }`$ proportional to the constant gravity field $`2\alpha _\mu `$ and to the mass $`\overline{M}`$. The mass entering this law is the inertial mass, that is also the generator of motion $`\overline{M}`$. This inertial mass is a constant of motion whilst the mass $`M`$ evaluated in the accelerated frame varies according to the second equation in (91).
## VI Discussion
In the present paper, we have defined quantum observables $`Y_a`$ which correspond to the hexaspherical coordinates of classical projective geometry. These observables involve not only space-time position observables but also the mass observable. The latter describes metric properties in the quantum algebraic framework, playing the same role as the conformal factor in classical relativity.
Localization observables $`Y_a`$ are associated with an electron localized in space and time. Transformations between various uniformly accelerated frames correspond to SO$`(4,2)`$ rotations of these observables. In summary, quantum as well as relativistic properties of electrons are described by a ‘non commutative conformal geometry’ which is essentially determined by the conformally invariant commutators (75,76,81). These results clearly indicate that the conceptions of space-time inherited from classical relativity have to be revised for quantum objects. In particular localization of electrons can no longer be thought of in terms of sizeless points. The best classical picture for localization of electrons obtained in this paper corresponds to the center of an hyperboloid having a waist size or a radius proportional to spin and inversely proportional to mass. Accordingly, the best classical picture of relativistic transformations of electrons is given by the projective geometry of hyperboloids rather than by the geometry of points. Furthermore, the geometrical elements of the hyperboloids, its center and waist size parameter, have to be considered as non commutative operators. In this context sizeless classical points appear as unobservable entities and this certainly raises questions about the pertinence of classical representations of space-time and infinitesimal geometry when applied to quantum problems.
Problems with classical representations of space-time are usually expected to arise at a typical size of the order of the Planck length, in connection with the difficulties of quantum gravity. Here in contrast, electrons appear as fuzzy spots with a typical size $`\frac{S}{M}`$, where $`S`$ is a spin component and $`M`$ the mass, of the order of the Compton wavelength. We have seen that position components do not commute and have spin components as their commutators, as a direct consequence of conformal dynamical symmetry. Then, dispersions in position have to obey a Heisenberg inequality with a typical length just of the order of the Compton wavelength.
This typical size might appear astonishing when contrasted with the fact that quantum field theory is certainly still efficient at smaller length scales. At this point, it is worth recalling that an equivalent set of observables may be defined for the positions of an electron in space-time . In that representation, position observables commute with each other and, hence, may be considered as quantum algebraic extensions of the position variables of standard Dirac theory. There is however a price to be paid for this simplification. Commuting position components are no longer hermitian and their non hermitian part is related to spin. This means that quantum field theory manages to deal with the non commutativity of localization observables at the price of representing it in terms of internal spin variables. This has certainly permitted impressive achievements, with however the drawback of renouncing the principles of conformal dynamical symmetry which are shown here to lie at the root of the theory of electrons.
We have seen that mass plays the role of a conformal factor, thus determining the space-time scale. At the same time, it allows one to represent the law of free fall by extending geodesic equations to the quantum algebraic framework. According to the equivalence principle, a constant gravity field may be considered as arising from a uniform acceleration with respect to inertial frames. Geodesic motion in the accelerated frame is thus identified with inertial motion in these inertial frames. More precisely, the generator of motion is the mass observable $`\overline{M}`$ evaluated in the inertial frames, that is also $`\frac{M}{\mathrm{\Lambda }}`$ where $`M`$ is the mass in the accelerated frame and $`\mathrm{\Lambda }`$ a quantum conformal factor. Quantum laws of free fall in a constant gravity field are obtained in this manner. These laws have a simpler form when expressed in terms of quantum hexaspherical observables.
These laws depend on the acceleration parameters $`2\alpha _\mu `$, that is also on the gravity field. In this respect, they are not explicitly conformally invariant. It is however possible to write a quantum algebraic Newton’s law which is manifestly invariant under SO$`(4,2)`$ dynamical symmetry. Such an extension is obtained as the double commutators between hexaspherical observables which follow from (76,81)
$$((Y_a,Y_b),Y_c)=\eta _{bc}Y_a-\eta _{ac}Y_b$$
(92)
The specific law of free fall corresponding to a constant gravity field is then recovered by choosing the classical parameters $`\alpha _\mu `$. This amounts to selecting inertial frames or, in other words, to selecting the inertial mass $`\overline{M}`$ among all the possible expressions of mass observables which may be reached by SO$`(4,2)`$ rotations. Then, the law of free fall is obtained through a contraction of the conformally invariant expression (92).
Expression (92) has exactly the same form in any conformal frames including uniformly accelerated as well as inertial frames. No reference to any classical field is needed for writing it. This means that the choice of specific frames as defining inertia cannot be justified from purely algebraic properties. Accelerated frames being included in conformal symmetry, there is no longer any privilege for the case of a null acceleration.
The quantum algebraic framework has the ability to describe not only localization in space-time and the relativistic symmetries associated with frame transformations, but also to accommodate the description of motion. Up to now, this description has been restricted to constant gravity fields, that is also to flat conformal frames but, even with this restriction, it has extended the symmetry principles of special relativity to include the equivalence principle.
Asymmetry of prompt photon production
in $`\stackrel{\rightarrow }{p}\stackrel{\rightarrow }{p}`$ collisions at RHIC
G. P. Škoro (goran@rudjer.ff.bg.ac.yu), Institute of Nuclear Sciences ”Vinča”,
Faculty of Physics, University of Belgrade,
Belgrade, Yugoslavia
M. Zupan (mzupan@rt270.vin.bg.ac.yu), Institute of Nuclear Sciences ”Vinča”,
Belgrade, Yugoslavia
M. V. Tokarev (tokarev@sunhe.jinr.ru), Laboratory of High Energies,
Joint Institute for Nuclear Research,
141980, Dubna, Moscow region, Russia
Summary. – The prompt photon production in $`\stackrel{\rightarrow }{p}\stackrel{\rightarrow }{p}`$ collisions at high energies is studied. The double-spin asymmetry $`A_{LL}`$ of the process is calculated by using the Monte Carlo code SPHINX. A possibility to discriminate between the spin-dependent gluon distributions and to determine the sign of $`\mathrm{\Delta }G`$ is discussed. A detailed study of the expected background, such as $`\pi ^0`$ production and decay, is given. Predictions for the longitudinal asymmetry $`A_{LL}`$ of prompt photon and $`\pi ^0`$-meson production in $`\stackrel{\rightarrow }{p}\stackrel{\rightarrow }{p}`$ collisions at RHIC energies have been made.
PACS: 13.85.Qk; 13.88.+e; 14.70.Bh; 14.70.Dj
1 – Introduction
One of the topical problems of high energy spin physics is the measurement of the spin-dependent gluon distribution $`\mathrm{\Delta }G(x,Q^2)`$. This would allow the determination of the gluon, and thereby the quark, contributions to the spin of the proton. Although spin-dependent quark distributions were obtained from deep-inelastic lepton-nucleon scattering, this information was not sufficient to determine the various parton contributions to the proton spin (see and references therein).
Consequently the preparation of many experiments is well under way. These experiments are to take place on colliders that are presently under construction, such as RHIC , HERA and LHC (see and references therein). These colliders will provide the high-energy polarized proton beams necessary for the study of the asymmetry of jet and/or prompt photon production, which can be used as a measure of $`\mathrm{\Delta }G(x,Q^2)`$. In a recent paper , we analyzed the asymmetry of jet and dijet production at RHIC by means of Monte Carlo simulations and found that the asymmetry $`A_{LL}^{jet}`$ is sensitive to $`\mathrm{\Delta }G`$ and can give us information about the sign and shape of the spin-dependent gluon distribution. In this paper we study prompt photon production and the asymmetry of this process with respect to the parallel and anti-parallel orientations of the longitudinally oriented spins of the colliding protons.
The processes responsible for prompt photon production are the Compton scattering process $`qg\rightarrow q\gamma `$ and the annihilation process $`q\overline{q}\rightarrow g\gamma `$. The asymmetry of prompt photon production will therefore depend on the convolution of the spin-dependent distributions of quarks and gluons. The set of possible distributions is subject to very few constraints. Additional ones must be arrived at theoretically or empirically, or both. For example, the sign of the gluon spin contribution is subject to speculation. Although there are indications that it should be positive, the possibility that it may be negative also exists. Both positive and negative values of the sign of $`\mathrm{\Delta }G(x,Q^2)`$ were considered in . The possibility to draw conclusions on the sign of the spin-dependent gluon distribution, $`\mathrm{\Delta }G(x,Q^2)`$, from existing polarized DIS data has been studied in . Other speculations deal with the relative magnitude of the gluon contribution: models are proposed in which $`\mathrm{\Delta }G`$ is large compared to other contributions, and others in which it is small.
All this points to the necessity of a measurement of $`\mathrm{\Delta }G(x,Q^2)`$, and consequently $`\mathrm{\Delta }G`$. The goal of this paper is to study the asymmetry of prompt photon production in proton-proton collisions for positive and negative $`\mathrm{\Delta }G(x,Q^2)`$ and to predict the magnitude of the effect when observed by the STAR detector at RHIC . Moreover, the paper is intent on presenting some of the considerations involved in the selection of the kinematical region which is optimal for the observation of the effect. Finally, some consideration is given to the expected background, yielding a prediction of the asymmetry signal that would actually be observed in the experiment.
2 – Spin-dependent gluon distribution
Phenomenological spin-dependent distributions were used in the calculation of the prompt photon production asymmetry. The distributions were arrived at empirically, including some constraints on the signs of valence and sea quark distributions, taking into account the axial gluon anomaly and utilizing results on integral quark contributions to the nucleon spin. Based on the analysis of experimental deep inelastic scattering data for the structure function $`g_1`$, parametrizations of spin-dependent parton distributions for both positive and negative sign of $`\mathrm{\Delta }G`$ have been constructed. We would like to note that both sets of distributions describe the experimental data very well. We shall denote by $`\mathrm{\Delta }G^{>0}`$ and $`\mathrm{\Delta }G^{<0}`$ the sets of spin-dependent parton distributions obtained in with positive and negative sign of $`\mathrm{\Delta }G`$, respectively. It was shown in that the constructed spin-dependent parton distributions for positive sign of $`\mathrm{\Delta }G`$ reproduce the main features of the NLO QCD $`Q^2`$-evolution of the proton, deuteron and neutron structure function $`g_1`$.
Figure 1 shows the dependence of the ratio $`\mathrm{\Delta }G(x,Q^2)/G(x,Q^2)`$ on $`x`$ at $`Q^2=100(GeV)^2`$ for the gluon distributions $`\mathrm{\Delta }G^{>0}`$ and $`\mathrm{\Delta }G^{<0}`$ . In the case of positive $`\mathrm{\Delta }G(x,Q^2)`$ we see a monotonically increasing curve corresponding to the behaviour $`\mathrm{\Delta }G/G\sim x^\alpha `$ as $`x\rightarrow 1`$. For $`\mathrm{\Delta }G^{<0}`$ we see a monotonically decreasing curve behaving almost exactly as $`\mathrm{\Delta }G/G\sim -x^{1/2}`$ as $`x\rightarrow 1`$. Figure 2 shows the dependence of the ratio $`\mathrm{\Delta }q(x,Q^2)/q^R(x,Q^2)`$ on $`x`$ at $`Q^2=100(GeV)^2`$ for the u-quark (a) and the d-quark (b). We would like to note that $`q^R(x,Q^2)`$ represents the renormalised parton distribution as given in .
3 $``$ Asymmetry of prompt photon production
There are two principal processes that produce prompt photons: Compton scattering $`qg\rightarrow q\gamma `$ and the annihilation process $`q\overline{q}\rightarrow g\gamma `$. However, for large gluon polarisations the contribution of the annihilation process can be neglected.
The longitudinal double-spin asymmetry is defined in terms of the difference of cross-sections for prompt photon production when the longitudinally oriented spins of the colliding protons are antiparallel ($`\uparrow \downarrow `$) and parallel ($`\uparrow \uparrow `$):
$$A_{LL}=\frac{\sigma ^{\uparrow \downarrow }-\sigma ^{\uparrow \uparrow }}{\sigma ^{\uparrow \downarrow }+\sigma ^{\uparrow \uparrow }}=\frac{1}{P^2}\frac{N_\gamma ^{\uparrow \downarrow }-N_\gamma ^{\uparrow \uparrow }}{N_\gamma ^{\uparrow \downarrow }+N_\gamma ^{\uparrow \uparrow }},$$
(1)
where $`N_\gamma `$ represents the number of prompt photons. The statistical error is calculated as:
$$\delta A_{LL}\approx \frac{1}{P^2}\frac{1}{\sqrt{N_\gamma ^{\uparrow \downarrow }+N_\gamma ^{\uparrow \uparrow }}}.$$
(2)
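As a concrete illustration of Eqs. (1) and (2), the short Python sketch below evaluates the asymmetry and its statistical error from raw photon counts; the yields used are invented placeholders, and only the beam polarization P = 0.7 follows the value assumed in this paper.

```python
import numpy as np

def a_ll(n_anti, n_para, pol):
    """Double-spin asymmetry, Eq. (1), from photon counts in the two spin states."""
    return (n_anti - n_para) / (n_anti + n_para) / pol**2

def delta_a_ll(n_anti, n_para, pol):
    """Statistical error of the asymmetry, Eq. (2)."""
    return 1.0 / (pol**2 * np.sqrt(n_anti + n_para))

# Hypothetical yields in one p_T bin; P = 0.7 as assumed for the RHIC beams.
n_anti, n_para, pol = 52000, 50000, 0.7
print(a_ll(n_anti, n_para, pol), "+/-", delta_a_ll(n_anti, n_para, pol))
```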
The calculation of the prompt photon production asymmetry was done using the Monte Carlo code SPHINX , which is a ’polarized’ version of PYTHIA . Calculations of the asymmetry were made for the center-of-mass energies $`\sqrt{s}=200GeV`$ and $`\sqrt{s}=500GeV`$ that are part of the design specifications of RHIC. The STAR detector is designed to cover the full azimuth and the pseudorapidity region $`-1<\eta <2`$, and this acceptance was taken into account in calculating the asymmetry of prompt photon production.
The rate and asymmetry of prompt photon production were estimated using the assumed RHIC integrated luminosities of $`320pb^{-1}`$ at $`\sqrt{s}=200GeV`$ and $`800pb^{-1}`$ at $`\sqrt{s}=500GeV`$ and a fixed beam polarization of P = 0.7.
4 $``$ Results and discussions
The asymmetry of prompt photon production was calculated from simulations for both the $`\mathrm{\Delta }G^{>0}`$ and $`\mathrm{\Delta }G^{<0}`$ sets of spin-dependent PDFs . Figure 3 shows the dependence of $`A_{LL}`$ on the photon transverse momentum $`p_T`$ at $`\sqrt{s}=200GeV`$. For $`\mathrm{\Delta }G^{>0}`$ the asymmetry increases from about 1.5% at $`p_T=6GeV/c`$ to 13% at $`p_T=24GeV/c`$. For $`\mathrm{\Delta }G^{<0}`$ the asymmetry drops from zero at $`p_T=6GeV/c`$ to -17% at $`p_T=24GeV/c`$. The errors shown are statistical. Figure 4 shows the prompt photon asymmetry as a function of the photon transverse momentum $`p_T`$ at $`\sqrt{s}=500GeV`$. This asymmetry is much smaller than the one at $`\sqrt{s}=200GeV`$. It is practically equal to zero for $`p_T`$ up to about $`14GeV/c`$. Beyond $`p_T=14GeV/c`$ it rises slowly for $`\mathrm{\Delta }G^{>0}`$ and drops for $`\mathrm{\Delta }G^{<0}`$.
This means that the energy $`\sqrt{s}=200GeV`$ is preferable for the determination of $`\mathrm{\Delta }G`$ from prompt photon production. Note that the higher colliding energy $`\sqrt{s}=500GeV`$ is preferable for extracting $`\mathrm{\Delta }G`$ from the jet asymmetry . Figure 5 shows the estimated rates of prompt photon production under the conditions described above at $`\sqrt{s}=200GeV`$ and $`\sqrt{s}=500GeV`$.
The rate of prompt photon production decreases with increasing $`p_T`$, due largely to the decrease in the distribution of gluons at high $`x`$, keeping in mind the approximate relation $`x=p_T/(2\sqrt{s})`$. At the same time the asymmetry effect increases with $`p_T`$. Consequently, a successful measurement of the asymmetry in prompt photon production depends largely on reconciling the magnitude of the effect with the statistical reliability of the measurement. For example, at $`\sqrt{s}=200GeV`$ only the asymmetry values up to $`p_T\approx 25GeV/c`$ will have acceptable statistical significance. The problem of experimental errors also has to be analyzed in the light of the corresponding background processes.
The main source of background in this case is the production of $`\pi ^0`$ mesons, because the dominant decay mode $`\pi ^0\rightarrow 2\gamma `$ significantly affects prompt photon detection. High-energy $`\pi ^0`$’s decay into photons diverging mostly at an angle $`\theta `$, given by $`\mathrm{sin}(\theta /2)=m_{\pi ^0}/E_{\pi ^0}`$, which is small enough for both photons to be collected by a single cell of the Electro-Magnetic Calorimeter and counted as a single high-$`p_T`$ prompt photon. In addition, if one of the decay photons escapes detection, the one which is detected is counted as a prompt photon.
Furthermore, the processes responsible for $`\pi ^0`$-meson production are $`qq`$, $`qg`$ and $`gg`$ scattering, so in polarized $`pp`$ collisions non-zero values of the $`\pi ^0`$ asymmetry can be expected. Figure 6 shows the asymmetry of $`\pi ^0`$-meson production as a function of transverse momentum $`p_T`$ at $`\sqrt{s}=200GeV`$. The asymmetry is positive and increases with $`p_T`$ for $`\mathrm{\Delta }G^{>0}`$ and is practically equal to zero for $`\mathrm{\Delta }G^{<0}`$. Such behaviour will be reflected in the expected experimental prompt photon asymmetry. Moreover, as can be seen in Figure 7, the average cross-section of $`\pi ^0`$-meson production is higher than the average cross-section of prompt photon production by an order of magnitude over the whole kinematical region. To provide an adequate prediction of the asymmetry measured in the experiment, the production of $`\pi ^0`$’s has to be taken into account in the calculation of $`A_{LL}`$. In addition, in order to reduce such a high background, an optimal method of so-called ”$`\pi ^0/\gamma `$ separation” should be applied. A very important part of the EM Calorimeter at STAR is the Shower Maximum Detector, which can be used for this purpose . The method described in Ref. is based on the different shapes of the EM showers induced by gammas and pions. The experimental asymmetry was calculated by substituting:
$$N_{\mathrm{exp}}^{\uparrow \downarrow }=aN_\gamma ^{\uparrow \downarrow }+bN_{\pi ^0}^{\uparrow \downarrow },$$
(3)
for $`N_\gamma ^{\uparrow \downarrow }`$ into (1), and by an analogous substitution for $`N_\gamma ^{\uparrow \uparrow }`$. The quantity $`N_{\pi ^0}^{\uparrow \downarrow }`$ is the number of produced pions, and $`a`$ and $`b`$ are constants determined by the gamma detection efficiency and the fraction of misidentified pions , respectively. These constants were taken to be $`a=0.9`$, reflecting a 90% photon detection efficiency, and $`b=0.35`$, reflecting a 65% efficiency in $`\pi ^0`$-meson rejection from the signal, as shown in .
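To make the background dilution explicit, the following sketch feeds the substitution (3) into Eq. (1); the photon and pion yields are invented for illustration, while a = 0.9 and b = 0.35 are the efficiencies quoted above.

```python
def a_ll_exp(n_g_anti, n_g_para, n_pi_anti, n_pi_para, pol=0.7, a=0.9, b=0.35):
    """'Experimental' asymmetry: Eq. (3) substituted into Eq. (1)."""
    n_anti = a * n_g_anti + b * n_pi_anti   # observed counts, spins antiparallel
    n_para = a * n_g_para + b * n_pi_para   # observed counts, spins parallel
    return (n_anti - n_para) / (n_anti + n_para) / pol**2

# Invented yields: a pi0 background an order of magnitude above the photon signal
# dilutes the measured asymmetry unless the pi0 sample itself carries an asymmetry.
print(a_ll_exp(5200, 5000, 51000, 50000))
```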
Figure 8 shows the ’experimental’ prompt photon asymmetry obtained by taking into account all of the conditions described above. The experimental asymmetry is different for the two signs of $`\mathrm{\Delta }G`$, implying that there will be a clear signal at least for the sign of $`\mathrm{\Delta }G`$.
5 $``$ Conclusions
Monte Carlo simulations of prompt photon production in polarized proton collisions at high energies were made, taking into account the parameters of the STAR detector at RHIC. The dependence of the prompt photon production asymmetry on the transverse momentum of the photons was studied for positive and negative values of $`\mathrm{\Delta }G`$ at colliding energies of $`\sqrt{s}=200GeV`$ and $`\sqrt{s}=500GeV`$. The results show that a stronger asymmetry signal, with a clear indication of the sign of $`\mathrm{\Delta }G`$, will be obtained at $`\sqrt{s}=200GeV`$. At that energy the asymmetry $`A_{LL}`$ ranges from 1.5% to 13% over the $`p_T`$ range of $`6GeV/c`$ to $`24GeV/c`$. The asymmetry effect is larger at higher $`p_T`$ values, but at the same time the number of prompt photons produced at high $`p_T`$ is smaller. The measurement of the asymmetry will therefore need to reconcile the visibility of the effect with the statistical reliability of the measurement. The decay of $`\pi ^0`$ mesons was considered as a source of background. It was found that the number of photons from $`\pi ^0`$ decay can exceed the number of prompt photons by an order of magnitude. Taking into account the results on $`\pi ^0/\gamma `$ separation achievable at STAR, it was shown that the experimental asymmetry will also give a clear signal as to the sign of $`\mathrm{\Delta }G`$.
Figure 1
The ratio of polarized and unpolarized gluon distributions as a function of $`x`$ for two different parametrizations $`\mathrm{\Delta }G^{>0}`$ and $`\mathrm{\Delta }G^{<0}`$ at $`Q^2`$=100 GeV<sup>2</sup>.
Figure 2
The ratio of polarized and unpolarized $`u`$-quark (a) and $`d`$-quark (b) distributions as a function of $`x`$ for two different parametrizations $`\mathrm{\Delta }G^{>0}`$ and $`\mathrm{\Delta }G^{<0}`$ at $`Q^2`$=100 GeV<sup>2</sup>.
Figure 3.
Asymmetry of prompt photon production $`A_{LL}`$ in polarised $`pp`$ collisions at $`\sqrt{s}=200`$ GeV for two different sets of spin-dependent PDFs ( $`\mathrm{\Delta }G^{>0}`$ and $`\mathrm{\Delta }G^{<0}`$ ) as a function of photon transverse momentum $`p_T`$. The errors indicated are statistical only.
Figure 4.
Asymmetry of prompt photon production $`A_{LL}`$ in polarised $`pp`$ collisions at $`\sqrt{s}=500`$ GeV for two different sets of spin-dependent PDFs ( $`\mathrm{\Delta }G^{>0}`$ and $`\mathrm{\Delta }G^{<0}`$ ) as a function of photon transverse momentum $`p_T`$. The errors indicated are statistical only.
Figure 5.
Estimated rates of prompt photon production in polarised $`pp`$ collisions at $`\sqrt{s}=200`$ GeV and $`\sqrt{s}=500`$ GeV as a function of photon transverse momentum $`p_T`$. The rates are based on the expected luminosity of RHIC and the properties of the STAR detector.
Figure 6.
Asymmetry of $`\pi ^0`$ production $`A_{LL}`$ in polarised $`pp`$ collisions at $`\sqrt{s}=200`$ GeV for two different sets of spin-dependent PDFs ( $`\mathrm{\Delta }G^{>0}`$ and $`\mathrm{\Delta }G^{<0}`$ ) as a function of $`\pi ^0`$ transverse momentum $`p_T`$. The errors indicated are statistical only.
Figure 7.
Comparison of prompt photon and $`\pi ^0`$ cross-sections in polarised $`pp`$ collisions at $`\sqrt{s}=200`$ GeV as a function of transverse momentum $`p_T`$.
Figure 8.
Experimental asymmetry of prompt photon production $`A_{LL}`$ in polarised $`pp`$ collisions at $`\sqrt{s}=200`$ GeV for two different sets of spin-dependent PDFs ( $`\mathrm{\Delta }G^{>0}`$ and $`\mathrm{\Delta }G^{<0}`$ ) as a function of photon transverse momentum $`p_T`$.
# Capture of the Lamb: Diffusing Predators Seeking a Diffusing Prey
## Abstract
We study the capture of a diffusing “lamb” by diffusing “lions” in one dimension. The capture dynamics is exactly soluble by probabilistic techniques when the number of lions is very small, and is tractable by extreme statistics considerations when the number of lions is very large. However, the exact solution for the general case of three or more lions is still not known.
I. INTRODUCTION
What is the survival probability of a diffusing lamb which is hunted by $`N`$ hungry lions? Although this capture process is appealingly simple to define (see Fig. 1), its long-time behavior poses a theoretical challenge because of the delicate interplay between the positions of the lamb and the closest lion. This model also illustrates a general feature of nonequilibrium statistical mechanics: life is richer in low dimensions. For spatial dimension $`d>2`$, it is known that the capture process is “unsuccessful” (in the terminology of Ref. 1), as there is a nonzero probability for the lamb to survive to infinite time for any initial spatial distribution of the lions. This result is a consequence of the transience of diffusion for $`d>2`$, which means that two nearby diffusing particles in an unbounded $`d>2`$ domain may never meet. For $`d=2`$, capture is “successful”, as the lamb dies with certainty. However, diffusing lions in $`d=2`$ are such poor predators that the average lifetime of the lamb is infinite! Also, the lions are essentially independent, so that the survival probability of a lamb in the presence of $`N`$ lions in two dimensions is $`S_N(t)\approx S_1(t)^N`$, where $`S_1(t)`$, the survival probability of a lamb in the presence of a single lion, decays as $`(\mathrm{ln}t)^{-1}`$.
Lions are more efficient predators in $`d=1`$ because of the recurrence of diffusion, which means that two diffusing particles are certain to meet eventually. The $`d=1`$ case is also special because there are two distinct generic cases. When the lamb is surrounded by lions, the survival probability at a fixed time decreases rapidly with $`N`$ because the safe zone which remains unvisited by lions at fixed time shrinks rapidly in $`N`$. This article focuses on the more interesting situation of $`N`$ lions all to one side of the lamb (Fig. 1), for which the lamb survival probability decays as a power law in time with an exponent that grows only logarithmically in $`N`$.
We begin by considering a lamb and a single stationary lion in Section II. The survival probability of the lamb $`S_1(t)`$ is closely related to the first-passage probability of one-dimensional diffusion and leads to $`S_1(t)t^{1/2}`$. It is also instructive to consider general lion and lamb diffusivities. We treat this two-particle system by mapping it onto an effective single-particle diffusion problem in two dimensions with an absorbing boundary to account for the death of the lamb when it meets the lion, and then solving the two-dimensional problem by the image method. We apply this approach in Section III by mapping a diffusing lamb and two diffusing lions onto a single diffusing particle within an absorbing wedge whose opening angle depends on the particle diffusivities, and then solving the diffusion problem in this absorbing wedge by classical methods.
In Section IV, we study $`N1`$ diffusing lions. An essential feature of this system is that the motion of the closest (“last”) lion to the lamb is biased towards the lamb, even though each lion diffuses isotropically. The many-particle system can be recast as a two-particle system consisting of the lamb and an absorbing boundary which, from extreme statistics, moves to the right as $`\sqrt{4D_Lt\mathrm{ln}N}`$, where $`D_L`$ is the lion diffusivity. Because this time dependence matches that of the lamb’s diffusion, the survival probability depends intimately on these two motions, with the result that $`S_N(t)t^{\beta _N}`$ and $`\beta _N\mathrm{ln}N`$. The logarithmic dependence of $`\beta _N`$ on $`N`$ reflects the fact that each additional lion poses a progressively smaller marginal peril to the lamb — it matters little whether the lamb is hunted by 99 or 100 lions. Amusingly, the value of $`\beta _N`$ implies an infinite lamb lifetime for $`N3`$ and a finite lifetime otherwise. In the terminology of Ref. 1, the capture process changes from “successful” to “complete” when $`N4`$. We close with some suggestions for additional research on this topic.
II. SURVIVAL IN THE PRESENCE OF ONE LION
A. Stationary Lion and Diffusing Lamb
We begin by treating a lamb which starts at $`x_0>0`$ and a stationary lion at $`x=0`$. In the continuum limit, the probability density $`p(x,t)`$ that the lamb is at any point $`x>0`$ at time $`t`$ satisfies the diffusion equation
$$\frac{\partial p(x,t)}{\partial t}=D_{\mathrm{}}\frac{\partial ^2p(x,t)}{\partial x^2},$$
(1)
where $`D_{\mathrm{}}`$ is the diffusivity (or diffusion coefficient). The probability density satisfies the boundary condition $`p(x=0,t)=0`$ to account for the death of the lamb if it reaches the lion at $`x=0`$, and the initial condition $`p(x,t=0)=\delta (x-x_0)`$. Equation (1) may be easily solved by the familiar image method. For $`x>0`$, $`p(x,t)`$ is the superposition of a Gaussian centered at $`x_0`$ and an “image” anti-Gaussian centered at $`-x_0`$:
$$p(x,t)=\frac{1}{\sqrt{4\pi D_{\mathrm{}}t}}\left[e^{-(x-x_0)^2/4D_{\mathrm{}}t}-e^{-(x+x_0)^2/4D_{\mathrm{}}t}\right].$$
(2)
The image contribution ensures that the boundary condition at $`x=0`$ is automatically satisfied, while the full solution satisfies both the initial condition and the diffusion equation. Thus Eq. (2) gives the probability density of the lamb for $`x>0`$ in the presence of a stationary lion at $`x=0`$.
The probability that the lamb meets the lion at time $`t`$ equals the diffusive flux to $`x=0`$ at time $`t`$. The flux is
$$F(t)=+D_{\mathrm{}}\frac{\partial p(x,t)}{\partial x}|_{x=0}=\frac{x_0}{\sqrt{4\pi D_{\mathrm{}}t^3}}e^{-x_0^2/4D_{\mathrm{}}t}.$$
(3)
The flux $`F(t)`$ is also the first-passage probability to the origin, namely, the probability that a diffusing lamb which starts at $`x_0`$ reaches $`x=0`$ for the first time at time $`t`$. Note that in the long time limit, defined by $`D_{\mathrm{}}t\gg x_0^2`$, the first-passage probability reduces to $`F(t)\sim x_0/t^{3/2}`$. This $`t^{-3/2}`$ time dependence is a characteristic feature of the first-passage probability in one dimension.
The probability that the lamb dies by time $`t`$ is the time integral of $`F(t)`$ up to time $`t`$. The survival probability is just the complementary fraction of these doomed lambs, that is,
$`S_1(t)`$ $`=`$ $`1-{\displaystyle \int _0^t}F(t^{\prime })𝑑t^{\prime },`$ (4)
$`=`$ $`1-{\displaystyle \int _0^t}{\displaystyle \frac{x_0}{\sqrt{4\pi D_{\mathrm{}}t^{\prime 3}}}}e^{-x_0^2/4D_{\mathrm{}}t^{\prime }}𝑑t^{\prime }.`$ (5)
The integral in Eq. (4) can be reduced to a standard form by the substitution $`u=x_0/\sqrt{4D_{\mathrm{}}t^{}}`$ to give
$$S_1(t)=\mathrm{erf}\left(\frac{x_0}{\sqrt{4D_{\mathrm{}}t}}\right)\simeq \frac{x_0}{\sqrt{\pi D_{\mathrm{}}t}}\mathrm{as}t\to \mathrm{\infty },$$
(6)
where $`\mathrm{erf}(z)=(2/\sqrt{\pi })\int _0^ze^{-u^2}𝑑u`$ is the error function. The same expression for $`S_1(t)`$ can be obtained by integrating the spatial probability distribution in Eq. (2) over all $`x>0`$.
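As a numerical sanity check of Eq. (6), the sketch below (with the illustrative choices of unit lamb diffusivity and $`x_0=1`$) propagates an ensemble of Brownian lambs with a small time step, kills those that cross the origin, and compares the surviving fraction with the error-function prediction.

```python
import numpy as np
from scipy.special import erf

def survival_mc(t, x0=1.0, D=1.0, n_walkers=5000, dt=1e-3, seed=1):
    """Monte Carlo estimate of the lamb survival probability with a lion fixed at x = 0."""
    rng = np.random.default_rng(seed)
    x = np.full(n_walkers, x0)
    alive = np.ones(n_walkers, dtype=bool)
    for _ in range(int(round(t / dt))):
        x[alive] += np.sqrt(2.0 * D * dt) * rng.standard_normal(alive.sum())
        alive &= x > 0.0          # the lamb dies on reaching the lion
    return alive.mean()

t, x0, D = 2.0, 1.0, 1.0
print(survival_mc(t), erf(x0 / np.sqrt(4.0 * D * t)))   # simulation vs. Eq. (6)
```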
An amusing feature of the $`t^{-1/2}`$ decay of the lamb survival probability is that although the lamb dies with certainty, its average lifetime, defined as $`\langle t\rangle =\int _0^{\mathrm{\infty }}tF(t)𝑑t=\int _0^{\mathrm{\infty }}S(t)𝑑t\sim \int ^{\mathrm{\infty }}t^{-1/2}𝑑t`$, is infinite. This infinite lifetime arises because the small fraction of lambs which survive tend to move relatively far away from the lion. More precisely, the superposition of the Gaussian and anti-Gaussian in Eq. (2) leads to a lamb probability distribution which is peaked at a distance $`(D_{\mathrm{}}t)^{1/2}`$ from the origin, while its spatial integral decays as $`(D_{\mathrm{}}t)^{-1/2}`$.
B. Both Species Diffusing
What is the survival probability of the lamb when the lion also diffuses? In the rest frame of the lamb, the lion now moves if either a lion or a lamb hopping event occurs, and their separation diffuses with diffusivity equal to $`D_{\mathrm{}}+D_L`$ (see, for example, Ref. REFERENCES), where $`D_L`$ is the lion diffusivity. From the discussion of Section IIA, the lamb survival probability has the asymptotic time dependence $`S_1(t)\simeq x_0/\sqrt{\pi (D_{\mathrm{}}+D_L)t}`$.
It is also instructive to determine the spatial probability distribution of the lamb. This distribution may be found conveniently by mapping the two-particle interacting system of lion at $`x_L`$ and lamb at $`x_{\mathrm{}}`$ in one dimension to an effective single-particle system in two dimensions and then applying the image method to solve the latter (see Fig. 2). To construct this mapping, we introduce the scaled coordinates $`y_1=x_L/\sqrt{D_L}`$ and $`y_2=x_{\mathrm{}}/\sqrt{D_{\mathrm{}}}`$ to render the two-dimensional diffusive trajectory $`(y_1,y_2)`$ isotropic. The probability density in the plane, $`p(y_1,y_2,t)`$, must satisfy an absorbing boundary condition when $`y_2\sqrt{D_{\mathrm{}}}=y_1\sqrt{D_L}`$, corresponding to the death of the lamb when it meets the lion. For simplicity and without loss of generality, we assume that the lion and lamb are initially at $`x_L(0)=0`$ and $`x_{\mathrm{}}(0)=1`$ respectively, that is, $`y_1(0)=0`$ and $`y_2(0)=\sqrt{D_{\mathrm{}}}`$. The probability density is therefore the sum of a Gaussian centered at $`(y_1(0),y_2(0))=(0,\sqrt{D_{\mathrm{}}})`$ and an anti-Gaussian image. From the orientation of the absorbing boundary (Fig. 2), this image is centered at $`(\sqrt{D_{\mathrm{}}}\mathrm{sin}2\theta ,\sqrt{D_{\mathrm{}}}\mathrm{cos}2\theta )`$, where $`\theta =\mathrm{tan}^1\sqrt{D_L/D_{\mathrm{}}}`$.
From this image representation, the probability density in two dimensions is
$$p(y_1,y_2,t)=\frac{1}{4\pi t}\left[e^{[y_1^2+(y_2\sqrt{D_{\mathrm{}}})^2]/4t}e^{[(y_1\sqrt{D_{\mathrm{}}}\mathrm{sin}2\theta )^2+(y_2+\sqrt{D_{\mathrm{}}}\mathrm{cos}2\theta )^2]/4t}\right].$$
(7)
The probability density for the lamb to be at $`y_2`$ is the integral of the two-dimensional density over the accessible range of the lion coordinate $`y_1`$:
$$p(y_2,t)=_{\mathrm{}}^{y_2\mathrm{cot}\theta }p(y_1,y_2,t)𝑑y_1.$$
(8)
If we substitute the result (7) for $`p(y_1,y_2,t)`$, the integral in Eq. (8) can be expressed in terms of the error function. We then transform back to the original lamb coordinate $`x_{\mathrm{}}=y_2\sqrt{D_{\mathrm{}}}`$ by using $`p(x_{\mathrm{}},t)dx_{\mathrm{}}=p(y_2,t)dy_2`$ to obtain
$$p(x_{\mathrm{}},t)=\frac{1}{\sqrt{16\pi D_{\mathrm{}}t}}\left[e^{(x_{\mathrm{}}1)^2/4D_{\mathrm{}}t}\mathrm{erfc}\left(\frac{x_{\mathrm{}}\mathrm{cot}\theta }{\sqrt{4D_{\mathrm{}}t}}\right)e^{(x_{\mathrm{}}+\mathrm{cos}2\theta )^2/4D_{\mathrm{}}t}\mathrm{erfc}\left(\frac{\mathrm{sin}2\theta x_{\mathrm{}}\mathrm{cot}\theta }{\sqrt{4D_{\mathrm{}}t}}\right)\right],$$
(9)
where $`\mathrm{erfc}(z)=1-\mathrm{erf}(z)`$ is the complementary error function. A plot of $`p(x_{\mathrm{}},t)`$ is shown in Fig. 3 for various values of the diffusivity ratio $`r\equiv D_{\mathrm{}}/D_L`$. The figure shows that the survival probability of the lamb rapidly decreases as the lion becomes more mobile. Note that when the lion is stationary, $`\theta =0`$, and Eq. (9) reduces to Eq. (2).
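The curves of Fig. 3 can also be checked by brute-force simulation: the sketch below follows a diffusing lamb (started at $`x_{\mathrm{}}=1`$) and a diffusing lion (started at the origin), discards histories in which the lion catches up with the lamb, and histograms the surviving lamb positions; the time step, ensemble size, diffusivity ratio, and observation time are arbitrary illustrative choices.

```python
import numpy as np

def surviving_lamb_positions(r=1.0, t=1.0, n_pairs=10000, dt=1e-3, seed=2):
    """Positions of lambs still alive at time t with one diffusing lion.
    r = D_lamb / D_lion; the lion starts at 0, the lamb at 1 (units with D_lion = 1)."""
    rng = np.random.default_rng(seed)
    lamb = np.full(n_pairs, 1.0)
    lion = np.zeros(n_pairs)
    alive = np.ones(n_pairs, dtype=bool)
    for _ in range(int(round(t / dt))):
        lamb[alive] += np.sqrt(2.0 * r * dt) * rng.standard_normal(alive.sum())
        lion[alive] += np.sqrt(2.0 * dt) * rng.standard_normal(alive.sum())
        alive &= lamb > lion              # capture when the lion reaches the lamb
    return lamb[alive]

# shape of the survivors' distribution (normalised over the survivors only)
hist, edges = np.histogram(surviving_lamb_positions(r=1.0), bins=30, density=True)
print(hist.max(), edges[np.argmax(hist)])   # height and approximate location of the peak
```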
III. TWO LIONS
To find the lamb survival probability in the presence of two diffusing lions, we generalize the above approach to map the three-particle interacting system in one dimension to an effective single diffusing particle in three dimensions with boundary conditions that reflect the death of the lamb whenever a lion is encountered. Let us label the lions as particles 1 and 2, and the lamb as particle 3, with respective positions $`x_1`$, $`x_2`$, and $`x_3`$, and respective diffusivities $`D_i`$. It is again useful to introduce the scaled coordinates $`y_i=x_i/\sqrt{D_i}`$ which renders the diffusion in the $`y_i`$ coordinates spatially isotropic. In terms of $`y_i`$, lamb survival corresponds to $`y_2\sqrt{D_2}<y_3\sqrt{D_3}`$ and $`y_1\sqrt{D_1}<y_3\sqrt{D_3}`$. These constraints mean that the effective particle in three-space remains behind the plane $`y_2\sqrt{D_2}=y_3\sqrt{D_3}`$ and to the left of the plane $`y_1\sqrt{D_1}=y_3\sqrt{D_3}`$ (Fig. 4(a)); this geometry is a wedge region of opening angle $`\mathrm{\Theta }`$ defined by the intersection of these two planes. If the particle hits one of the planes, then one of the lions has killed the lamb.
This mapping therefore provides the lamb survival probability, since it is known that the survival probability of a diffusing particle within this absorbing wedge asymptotically decays as
$$S_{\mathrm{wedge}}(t)\sim t^{-\pi /2\mathrm{\Theta }}.$$
(10)
For completeness, we derive this asymptotic behavior by mapping the diffusive system onto a corresponding electrostatic system in Appendix A. To determine the value of $`\mathrm{\Theta }`$ which corresponds to our 3-particle system, notice that the unit normals to the planes $`y_1\sqrt{D_1}=y_3\sqrt{D_3}`$ and $`y_2\sqrt{D_2}=y_3\sqrt{D_3}`$ are $`\widehat{𝐧}_{13}=(\sqrt{D_1},0,\sqrt{D_3})/\sqrt{D_1+D_3}`$ and $`\widehat{𝐧}_{23}=(0,\sqrt{D_2},\sqrt{D_3})/\sqrt{D_2+D_3}`$, respectively. Consequently $`\mathrm{cos}\varphi =\widehat{𝐧}_{13}\widehat{𝐧}_{23}`$ (Fig. 4(b)), and the wedge angle is $`\mathrm{\Theta }=\pi \varphi =\pi \mathrm{cos}^1[D_3/\sqrt{(D_1+D_3)(D_2+D_3)}]`$. If we take $`D_1=D_2=D_L`$ for identical lions, and $`D_3=D_{\mathrm{}}`$, the survival exponent for the lamb is
$$\beta _2(r)=\frac{\pi }{2\mathrm{\Theta }}=\left[2-\frac{2}{\pi }\mathrm{cos}^{-1}\frac{r}{1+r}\right]^{-1},$$
(11)
where $`r=D_{\mathrm{}}/D_L`$.
The dependence of $`\beta _2(r)`$ on the diffusivity ratio $`r`$ is shown in Fig. 5. This exponent monotonically decreases from 1 at $`r=0`$ to 1/2 for $`r\mathrm{}`$. The former case corresponds to a stationary lamb, where the two lions are statistically independent and $`S_2(t)=S_1(t)^2`$. On the other hand, when $`r\mathrm{}`$ the lamb diffuses rapidly and the motion of the lions becomes irrelevant. This limit therefore reduces to the diffusion of a lamb and a stationary absorber, for which $`S_2(t)=S_1(t)`$. Finally, for $`D_{\mathrm{}}=D_L`$, $`\beta _2=3/4<2\beta _1`$, and equivalently, $`S_2(t)>S_1(t)^2`$. This inequality reflects the fact that the incremental threat to the lamb from the second lion is less than the first.
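Since Eq. (11) is explicit, it is a one-line function to evaluate; the sketch below simply confirms the limiting values quoted in this paragraph.

```python
import numpy as np

def beta_two(r):
    """Survival exponent for one lamb and two lions, Eq. (11); r = D_lamb / D_lion."""
    return 1.0 / (2.0 - (2.0 / np.pi) * np.arccos(r / (1.0 + r)))

for r in (0.0, 1.0, 1e9):
    # -> 1 (stationary lamb), 3/4 (equal diffusivities), -> 1/2 (very fast lamb)
    print(r, beta_two(r))
```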
IV. MANY LIONS
The above construction can, in principle, be extended by recasting the survival of a lamb in the presence of $`N`$ lions as the survival of a diffusing particle in $`N+1`$ dimensions within an absorbing hyper-wedge defined by the intersection of the $`N`$ half-spaces $`x_i<x_{N+1}`$, $`i=1,2,\mathrm{\dots },N`$. This approach has not led to a tractable analytical solution. On the other hand, numerical simulations indicate that the exponent $`\beta _N`$ grows slowly with $`N`$, with $`\beta _3\approx 0.91`$, $`\beta _4\approx 1.03`$, and $`\beta _{10}\approx 1.4`$. The understanding of the slow dependence of $`\beta _N`$ on $`N`$ is the focus of this section.
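The quoted exponents come from direct simulations of this kind; as a concrete (and deliberately naive) illustration, the sketch below runs the random-walk simulation for a handful of lions and prints the fraction of lambs still alive after a fixed number of steps. Declaring capture when the rightmost lion reaches or passes the lamb is a simplifying assumption, and extracting $`\beta _N`$ would require fitting the decay of this fraction over several decades in time (see also the discussion of simulation methods in Section V).

```python
import numpy as np

def survival_fraction(n_lions, t_max, n_trials=3000, seed=3):
    """Fraction of trials in which the lamb survives t_max discrete steps.
    The lamb starts at +1 and the lions at 0; all hop by +-1 each step.
    Capture is declared when the rightmost lion reaches or passes the lamb."""
    rng = np.random.default_rng(seed)
    survived = 0
    for _ in range(n_trials):
        lamb = 1
        lions = np.zeros(n_lions, dtype=int)
        for _ in range(t_max):
            lamb += rng.choice((-1, 1))
            lions += rng.choice((-1, 1), size=n_lions)
            if lions.max() >= lamb:
                break
        else:
            survived += 1
    return survived / n_trials

for n in (1, 2, 4):
    print(n, survival_fraction(n, t_max=400))
```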
A. LOCATION OF THE LAST LION
One way to understand the behavior of the survival probability is to focus on the lion closest to the lamb, because this last lion ultimately kills the lamb. As was shown in Fig. 1, the individual identity of this last lion can change with time due to the crossing of different lion trajectories. In particular, crossings between the last lion and its left neighbor lead to a systematic rightward bias of the last lion. This bias is stronger for increasing $`N`$, due to the larger number of crossings of the last lion, and this high crossing rate also leads to $`x_{\mathrm{last}}(t)`$ becoming smoother as $`N`$ increases (Fig. 6). This approach of the last lion to the lamb is the mechanism which leads to the survival probability of the lamb decaying as $`t^{\beta _N}`$, with $`\beta _N`$ a slowly increasing function of $`N`$.
To determine the properties of this last lion, suppose that $`N1`$ lions are initially at the origin. If the lions perform nearest-neighbor, discrete-time random walks, then at short times, $`x_{\mathrm{last}}(t)=t`$. This trivial dependence persists as long as the number of lions at the last site in their spatial distribution is much greater than one. In this case there is a large probability that one of these lions will hop to the right, thus maintaining the deterministic growth of $`x_{\mathrm{last}}`$. This growth will continue as long as $`\frac{N}{\sqrt{4\pi D_Lt}}e^{t^2/4D_Lt}1`$, that is, for $`t4D_L\mathrm{ln}N`$. At long times, an estimate for the location of the last lion is provided by the condition
$$\int _{x_{\mathrm{last}}}^{\mathrm{\infty }}\frac{N}{\sqrt{4\pi D_Lt}}e^{-x^2/4D_Lt}𝑑x=1.$$
(12)
Equation (12) specifies that there is one lion out of an initial group of $`N`$ lions which is in the range $`[x_{\mathrm{last}},\mathrm{}]`$. Although the integral in Eq. (12) can be expressed in terms of the complementary error function, it is instructive to evaluate it explicitly by writing $`x=x_{\mathrm{last}}+ϵ`$ and re-expressing the integrand in terms of $`ϵ`$. We find that
$$_{x_{\mathrm{last}}}^{\mathrm{}}\frac{N}{\sqrt{4\pi D_Lt}}e^{x_{\mathrm{last}}^2/4D_Lt}e^{x_{\mathrm{last}}ϵ/2D_Lt}e^{ϵ^2/4D_Lt}𝑑ϵ=1.$$
(13)
Over the range of $`ϵ`$ for which the second exponential factor is non-negligible, the third exponential factor is nearly equal to unity. The integral in Eq. (13) thus reduces to an elementary form, with the result
$$\frac{N}{\sqrt{4\pi D_Lt}}e^{x_{\mathrm{last}}^2/4D_Lt}\frac{2D_Lt}{x_{\mathrm{last}}}=1.$$
(14)
If we define $`y=x_{\mathrm{last}}/\sqrt{4D_Lt}`$ and $`M=N/\sqrt{4\pi }`$, the condition in Eq. (14) can be simplified to
$$ye^{y^2}=M,$$
(15)
with the solution
$$y=\sqrt{\mathrm{ln}M}\left(1-\frac{1}{4}\frac{\mathrm{ln}(\mathrm{ln}M)}{\mathrm{ln}M}+\mathrm{\cdots }\right).$$
(16)
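Equation (15) has no elementary closed form, but it is straightforward to solve numerically; the sketch below compares the exact root with the asymptotic expansion (16) for a few pack sizes. (Equivalently, $`y=\sqrt{W(2M^2)/2}`$ in terms of the Lambert $`W`$ function.)

```python
import numpy as np
from scipy.optimize import brentq

def y_exact(M):
    """Numerical root of y*exp(y^2) = M, Eq. (15)."""
    return brentq(lambda y: y * np.exp(y * y) - M, 1e-9, 10.0)

def y_asymptotic(M):
    """Leading terms of the expansion, Eq. (16)."""
    L = np.log(M)
    return np.sqrt(L) * (1.0 - 0.25 * np.log(L) / L)

for N in (10, 100, 10**4, 10**6):
    M = N / np.sqrt(4.0 * np.pi)
    print(N, y_exact(M), y_asymptotic(M))
```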
In addition to obtaining the mean location of the last lion, extreme statistics can be used to find the spatial probability of the last lion. For completeness, this calculation is presented in Appendix B.
To lowest order, Eq. (16) gives
$$x_{\mathrm{last}}(t)\simeq \sqrt{4D_Lt\mathrm{ln}N}\equiv \sqrt{A_Nt},$$
(17)
for finite $`N`$. For $`N=\mathrm{}`$, $`x_{\mathrm{last}}(t)`$ would always equal $`t`$ if an infinite number of discrete random walk lions were initially at the origin. A more suitable initial condition therefore is a concentration $`c_0`$ of lions uniformly distributed from $`\mathrm{}`$ to 0. In this case, only $`N\sqrt{c_0^2D_Lt}`$ of the lions are “dangerous,” that is, within a diffusion distance from the edge of the pack and thus potential candidates for killing the lamb. Consequently, for $`N\mathrm{}`$, the leading behavior of $`x_{\mathrm{last}}(t)`$ becomes
$$x_{\mathrm{last}}(t)\sqrt{2D_L\mathrm{ln}(c_0^2D_Lt)t}.$$
(18)
As we discuss in Section IVB, the survival probability of the lamb in the presence of many lions is essentially determined by this behavior of $`x_{\mathrm{last}}`$.
B. LAMB SURVIVAL PROBABILITY FOR LARGE $`N`$
An important feature of the time dependence of $`x_{\mathrm{last}}`$ is that fluctuations decrease for large $`N`$ (Fig. 6). Therefore the lamb and $`N`$ diffusing lions can be recast as a two-body system of a lamb and an absorbing boundary which deterministically advances toward the lamb as $`x_{\mathrm{last}}(t)=\sqrt{A_Nt}`$.
To solve this two-body problem, it is convenient to change coordinates from $`[x,t]`$ to $`[x^{}=xx_{\mathrm{last}}(t),t]`$ to fix the absorbing boundary at the origin. By this construction, the diffusion equation for the lamb probability distribution is transformed to the convection-diffusion equation
$$\frac{\partial p(x^{\prime },t)}{\partial t}-\frac{x_{\mathrm{last}}}{2t}\frac{\partial p(x^{\prime },t)}{\partial x^{\prime }}=D\frac{\partial ^2p(x^{\prime },t)}{\partial x^{\prime 2}},(0\le x^{\prime }<\mathrm{\infty })$$
(19)
with the absorbing boundary condition $`p(x^{\prime }=0,t)=0`$. In this reference frame, which is fixed on the average position of the last lion, the second term in Eq. (19) accounts for the bias of the lamb towards the absorber with a “velocity” $`x_{\mathrm{last}}/2t`$. Because $`x_{\mathrm{last}}\simeq \sqrt{A_Nt}`$ and $`x^{\prime }\sim \sqrt{D_{\mathrm{}}t}`$ have the same time dependence, the lamb survival probability acquires a nontrivial dependence on the dimensionless parameter $`A_N/D_{\mathrm{}}`$. Such a dependence is in contrast to the cases $`x_{\mathrm{last}}\ll x^{\prime }`$ or $`x_{\mathrm{last}}\gg x^{\prime }`$, where the asymptotic time dependence of the lamb survival is controlled by the faster of these two co-ordinates. Such a phenomenon occurs whenever there is a coincidence of fundamental length scales in the system (see, for example, Ref. REFERENCES).
Equation (19) can be transformed into the parabolic cylinder equation by the following steps. First introduce the dimensionless length $`\xi =x^{}/x_{\mathrm{last}}`$ and make the following scaling ansatz for the lamb probability density,
$$p(x^{\prime },t)\sim t^{-\beta _N-1/2}F(\xi ).$$
(20)
The power law prefactor in Eq. (20) ensures that the integral of $`p(x^{},t)`$ over all space, namely the survival probability, decays as $`t^{\beta _N}`$, and $`F(\xi )`$ expresses the spatial dependence of the lamb probability distribution in scaled length units. This ansatz codifies the fact that the probability density is not a function of $`x^{}`$ and $`t`$ separately, but is a function only of the dimensionless ratio $`x^{}/x_{\mathrm{last}}(t)`$. The scaling ansatz provides a simple but powerful approach for reducing the complexity of a wide class of systems with a divergent characteristic length scale as $`t\mathrm{}`$.
If we substitute Eq. (20) into Eq. (19), we obtain
$$\frac{D_{\mathrm{}}}{A_N}\frac{d^2F}{d\xi ^2}+\frac{1}{2}(\xi +1)\frac{dF}{d\xi }+\left(\beta _N+\frac{1}{2}\right)F=0.$$
(21)
Now introduce $`\eta =(\xi +1)\sqrt{A_N/2D_{\mathrm{}}}`$ and $`F(\xi )=e^{-\eta ^2/4}𝒟(\eta )`$ in Eq. (21). This substitution leads to the parabolic cylinder equation of order $`2\beta _N`$
$$\frac{d^2𝒟_{2\beta _N}}{d\eta ^2}+\left[2\beta _N+\frac{1}{2}-\frac{\eta ^2}{4}\right]𝒟_{2\beta _N}=0,$$
(22)
subject to the boundary conditions $`𝒟_{2\beta _N}(\eta )=0`$ for both $`\eta =\sqrt{A_N/2D_{\mathrm{}}}`$ and $`\eta =\mathrm{\infty }`$. Equation (22) has the form of a Schrödinger equation for a quantum particle of energy $`2\beta _N+\frac{1}{2}`$ in a harmonic oscillator potential $`\eta ^2/4`$ for $`\eta >\sqrt{A_N/2D_{\mathrm{}}}`$, but with an infinite barrier at $`\eta =\sqrt{A_N/2D_{\mathrm{}}}`$. For the long time behavior, we want the ground state energy in this potential. For $`N\gg 1`$, we may approximate this energy by the potential at the classical turning point, that is, $`2\beta _N+\frac{1}{2}\approx \eta ^2/4`$. We therefore obtain $`\beta _N\approx A_N/16D_{\mathrm{}}`$. Using the value of $`A_N`$ given in Eqs. (17) and (18) gives the decay exponent
$$\beta _N\{\begin{array}{cc}\frac{D_L}{4D_{\mathrm{}}}\mathrm{ln}N,\hfill & \text{if}\text{ }N\text{ finite}\hfill \\ & \\ \frac{D_L}{8D_{\mathrm{}}}\mathrm{ln}t.\hfill & \text{for}\text{ }N=\mathrm{}\hfill \end{array}$$
(23)
The latter dependence of $`\beta _N`$ implies that for $`N\mathrm{}`$, the survival probability has the log-normal form
$$S_{\mathrm{\infty }}(t)\sim \mathrm{exp}\left(-\frac{D_L}{8D_{\mathrm{}}}\mathrm{ln}^2t\right).$$
(24)
Although we obtained the survival probability exponent $`\beta _N`$ for arbitrary diffusivity ratio $`r=D_{\mathrm{}}/D_L`$, simple considerations give different behavior for $`r\ll 1`$ and $`r\gg 1`$. For example, for $`r=0`$ (stationary lamb) the survival probability decays as $`t^{-N/2}`$. Therefore Eq. (23) can no longer apply for $`r<N^{-1}`$, where $`\beta _N(r)`$ becomes of order $`N`$. Conversely, for $`r=\mathrm{\infty }`$ (stationary lions), the survival probability of the lamb decays as $`t^{-1/2}`$. Thus Eq. (23) will again cease to be valid for $`r>\mathrm{ln}N`$, where $`\beta _N(r)`$ becomes of order unity. By accounting for these limits, the dependence of $`\beta _N`$ on the diffusivity ratio $`r`$ is expected to be
$$\beta _N(r)=\{\begin{array}{cc}N/2\hfill & \text{ }r\ll 1/N\hfill \\ (1/4r)\mathrm{ln}(4N)\hfill & \text{ }1/N\ll r\ll \mathrm{ln}N\hfill \\ 1/2\hfill & \text{ }r\gg \mathrm{ln}N\text{.}\hfill \end{array}$$
(25)
The $`r`$ dependence of $`\beta _N`$ in the intermediate regime of $`1/N\ll r\ll \mathrm{ln}N`$ generalizes the exponents given in Eq. (11) for the three-particle system to general $`N`$.
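A compact way to see the crossovers is to tabulate Eq. (25) directly; the sharp regime boundaries used in this sketch are of course only order-of-magnitude statements, and for small $`N`$ the wedge result (11) is the more accurate one.

```python
import numpy as np

def beta_n(N, r):
    """Piecewise estimate of the survival exponent, Eq. (25); r = D_lamb / D_lion."""
    if r < 1.0 / N:
        return N / 2.0
    if r < np.log(N):
        return np.log(4.0 * N) / (4.0 * r)
    return 0.5

for N in (4, 10, 100):
    print(N, [round(beta_n(N, r), 3) for r in (1e-3, 0.1, 1.0, 10.0)])
```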
V. DISCUSSION
We investigated diffusive capture of a lamb in one dimension by using of several essential techniques of nonequilibrium statistical mechanics including first-passage properties of diffusion, extreme value statistics, electrostatic analogies, scaling analysis, and moving boundary value theory. These tools provide an appealing physical description for the survival probability of the lamb in the presence of $`N`$ lions for the cases of very small and very large $`N`$. Nevertheless, the exact solution to capture of the lamb remains elusive for $`N3`$.
We close by suggesting several avenues for further study:
1. Better Simulation Methods. Previous simulations of this system followed the random walk motion of one lamb and $`N`$ lions until the lamb was killed. This type of simulation is simple to construct. One merely places the lamb and lions on a one-dimensional lattice and have them perform independent nearest-neighbor random walks until one lion lands on the same site as the lamb. The survival probability is obtained by averaging over a large number of realizations of this process. However, following the motion of discrete random walks is inefficient, because it is unlikely for the lamb to survive to long times and many realizations of the process must be simulated to obtain accurate long-time data. In principle, a much better approach would be to propagate the exact probability distribution of the particles in the system. Can such an approach be developed for the lamb-lion capture process? Another possibility is to devise a simple discrete random-walk process to simulate the motion of the last lion. Such a construction would permit consideration of just the lamb and the last lion, thus providing significant computational efficiency.
2. The Last Lion. Extreme value statistics provides the spatial probability distribution of the last lion. We may also ask other basic questions: How long is a given lion the “last” one? How many lead changes of the last lion occur up to time $`t`$? How many different lions may be in the lead up to time $`t`$? What is the probability that a particular lion is never in the lead? Methods to investigate some of these issues are also outlined in the article by Schmittmann and Zia in this journal issue.
3. Spatial Probability Distribution of the Lamb. As we have seen for the case of one lion, the spatial distribution of the lamb is a useful characteristic of the capture process. What happens for large $`N`$? In principle, this information is contained in the solution to the parabolic cylinder equation for the scaled probability distribution (Eq. (22)). The most interesting behavior is the form of the distribution close to the absorbing boundary, where the interaction between the lions and the lamb is strongest. For $`N=1`$ lion, this distribution decays linearly to zero as a function of the distance to the lion, while for $`N=2`$, the distribution has a power law decay in the distance to the last lion which depends on the diffusivity ratio $`D_{\mathrm{}}/D_L`$. What happens for general $`N`$ and for general diffusivity ratio? Is there a physical way to determine this behavior?
4. Two-Sided Problem. If $`N`$ lions are located on both sides of the lamb, then the lamb is relatively short-lived because there is no escape route. One can again construct a mapping between the $`N+1`$-particle reacting system and the diffusion of an effective particle in an absorbing wedge-shaped domain in $`N+1`$ dimensions. From this mapping, $`S_N(t)`$ decays as $`t^{\gamma _N}`$, but the dependence of $`\gamma _N`$ on $`N`$ and diffusivity ratio is unknown. It is clear, however, that the optimal strategy for the surrounded lamb is to remain still, in which case the lions are statistically independent and we then recover $`S_N(t)t^{N/2}`$. Is there a simple approach that provides the dependence of $`\gamma _N`$ on $`N`$ for arbitrary diffusivity ratio? Finally, $`S_{\mathrm{}}(t)`$ exhibits a stretched exponential decay in time, $`\mathrm{exp}(t^{1/2})`$. What is that nature of the transition from finite $`N`$ to infinite $`N`$ behavior?
5. Intelligent Predators and Prey. In a more realistic capture process, lions would chase the lamb, while the lamb would attempt to run away. What are physically-reasonable and analytically-tractable rules for such directed motion which would lead to new and interesting kinetic behaviors?
ACKNOWLEDGMENTS
We gratefully acknowledge NSF grant DMR9632059 and ARO grant DAAH04-96-1-0114 for partial support of this research.
APPENDIX A: SURVIVAL PROBABILITY OF A DIFFUSING PARTICLE WITHIN A WEDGE
The survival probability of a diffusing particle within an absorbing wedge can be derived by solving the diffusion equation in this geometry. We provide an alternative derivation of this result by developing a correspondence between diffusion and electrostatics in the same boundary geometry. Although the logic underlying the correspondence is subtle, the result is simple and has wide applicability.
The correspondence rests on the fact that the integral of the diffusion equation over all time reduces to the Poisson equation. This time integral is
$$_0^{\mathrm{}}\left\{D^2p(r,\theta ,t)=\frac{p(r,\theta ,t)}{t}\right\}𝑑t.$$
(26)
If one defines an electrostatic potential by $`\mathrm{\Phi }(r,\theta )=_0^{\mathrm{}}p(r,\theta ,t)𝑑t`$, Eq. (26) can be written as
$$^2\mathrm{\Phi }(r,\theta )=\frac{1}{D}\left[p(r,\theta ,t=\mathrm{})p(r,\theta ,t=0)\right].$$
(27)
For a boundary geometry such that the asymptotic survival probability in the diffusive system is zero, then Eq. (27) is just the Poisson equation, with the initial condition in the diffusive system corresponding to the charge distribution in the electrostatic system, and with absorbing boundaries in the diffusive system corresponding to grounded conductors.
To exploit this analogy, we first note that the electrostatic potential in the wedge decays as $`\mathrm{\Phi }(r,\theta )\sim r^{-\pi /\mathrm{\Theta }}`$ for $`r\to \mathrm{\infty }`$, for any localized charge distribution. Because the survival probability of a diffusing particle in the absorbing wedge is given by $`S(t)=\int p(r,\theta ,t)𝑑A`$, where the integral is over the area of the wedge, we find the following basic relation between $`S(t)`$ and the electrostatic potential in the same boundary geometry
$`{\displaystyle \int _0^t}S(t)𝑑t`$ $`=`$ $`{\displaystyle \int _0^t}𝑑t{\displaystyle \int p(r,\theta ,t)𝑑A}`$ (28)
$`\sim `$ $`{\displaystyle \int _0^{\sqrt{Dt}}}r𝑑r{\displaystyle \int _0^\mathrm{\Theta }}𝑑\theta \mathrm{\Phi }(r,\theta )`$ (29)
$`\sim `$ $`{\displaystyle \int _0^{\sqrt{Dt}}}r^{1-\pi /\mathrm{\Theta }}𝑑r`$ (30)
$`\sim `$ $`t^{1-\pi /2\mathrm{\Theta }}.`$ (31)
In evaluating the time integral of the survival probability, we use the fact that particles have time to diffuse to radial distance $`\sqrt{Dt}`$ but no further. Thus in the second line of Eq. (28), the time integral of the probability distribution reduces to the electrostatic potential for $`r<\sqrt{Dt}`$ and is essentially zero for $`r>\sqrt{Dt}`$. Finally, by differentiating the last equality in Eq. (28) with respect to time we recover Eq. (10).
APPENDIX B: SPATIAL DISTRIBUTION OF THE LAST LION BY EXTREME STATISTICS
It is instructive to apply extreme statistics to determine the probability distribution for the location of the last lion from an ensemble of $`N`$ lions. Let $`p(x)=\frac{1}{\sqrt{4\pi D_Lt}}e^{x^2/4D_Lt}`$ be the (Gaussian) probability distribution of a single lion. Then $`p_>(x)_x^{\mathrm{}}p(x^{})𝑑x^{}`$ is the probability that a diffusing lion is in the range $`[x,\mathrm{}]`$ and similarly $`p_<(x)=1p_>(x)`$ is the probability that the lion is in the range $`[\mathrm{},x]`$. Let $`L_N(x)`$ be the probability that the last lion out of a group of $`N`$ is located at $`x`$. This extremal probability is given by
$$L_N(x)=Np(x)p_<(x)^{N1}.$$
(32)
That is, one of the $`N`$ lions is at $`x`$, while the remaining $`N1`$ lions are in the range $`[\mathrm{},x]`$. If we evaluate the factors in Eq. (32), we obtain a double exponential distribution:
$`L_N(x)`$ $`=`$ $`{\displaystyle \frac{N}{\sqrt{4\pi D_Lt}}}e^{-x^2/4D_Lt}\left[1-{\displaystyle \int _x^{\mathrm{\infty }}}{\displaystyle \frac{1}{\sqrt{4\pi D_Lt}}}e^{-x^{\prime 2}/4D_Lt}𝑑x^{\prime }\right]^{N-1},`$ (33)
$`\approx `$ $`{\displaystyle \frac{N}{\sqrt{4\pi D_Lt}}}e^{-x^2/4D_Lt}\mathrm{exp}\left[-{\displaystyle \frac{N-1}{\sqrt{4\pi D_Lt}}}{\displaystyle \int _x^{\mathrm{\infty }}}e^{-x^{\prime 2}/4D_Lt}𝑑x^{\prime }\right].`$ (34)
When $`N`$ is large, then $`x/\sqrt{4D_Lt}`$ is also large, and we can asymptotically evaluate the integral in the exponential in Eq. (33). Following Eq. (15), it is convenient to express the probability distribution in terms of $`M=N/\sqrt{4\pi }`$ and $`y=x_{\mathrm{last}}/\sqrt{4D_Lt}`$. If we use $`L_N(y)dy=L_N(x)dx`$, we obtain
$$L_N(y)2Me^{y^2}\mathrm{exp}(Me^{y^2}/y).$$
(35)
The most probable value of $`x_{\mathrm{last}}`$ is determined by the requirement that $`L_N^{}(y)=0`$. This condition reproduces $`ye^{y^2}=M`$ given in Eq. (15).
We may also estimate the width of the distribution from its inflection points, that is, when $`L_M^{\prime \prime }(y)=0`$. By straightforward calculation, $`L_N^{\prime \prime }(y)=0`$ at
$$y_\pm \sqrt{\mathrm{ln}(M/k_\pm )}\sqrt{\mathrm{ln}M}\left(1\frac{\mathrm{ln}k_\pm }{\mathrm{ln}M}\right),$$
(36)
where $`k_\pm =(3\pm \sqrt{5})/2`$. Therefore as $`N\mathrm{}`$, the width of $`L_N(y)`$ vanishes as $`1/\sqrt{\mathrm{ln}M}`$. This behavior is qualitatively illustrated in Fig. 6, where the fluctuations in $`x_{\mathrm{last}}(t)`$ decrease dramatically as $`N`$ increases. This decrease can also be understood from the form of the extreme distribution $`L_N(x)`$ in Eq. (33). The large-$`x`$ decay of $`L_N(x)`$ is governed by $`p(x)`$, while the double exponential factor becomes an increasingly sharp step at $`x_{\mathrm{step}}\sqrt{4D_Lt\mathrm{ln}N}`$ as $`N`$ increases. The product of these two factors leads to $`L_N(x)`$ essentially coinciding with $`p(x)`$ for $`x>x_{\mathrm{step}}`$ and $`L_N(x)0`$ for $`x<x_{\mathrm{step}}`$.
# CFNUL - 99/03 Velocity at the Schwarzschild horizon revisited
## Abstract
The question of the physical reality of the black hole interior is a recurrent one. An objection to its existence is the well-known fact that the velocity of a material particle, referred to the stationary frame, tends to the velocity of light as it approaches the horizon.
It is shown, using Kruskal coordinates, that a timelike radial geodesic does not become null at the event horizon.
The interpretation of the maximal analytic extension of the 4 regions Schwarzschild spacetime presents some difficulties. The conventional view is that the only regions relevant to a black hole formed by gravitational collapse are regions I and II .
There is however a long list of literature where the physical reality of the black hole interior (region II) is argued. References can be found elsewhere .
Recently a new case was made to that point . There, the reasoning is mainly based on the result that the velocity of any material particle as measured by a Kruskal observer (as defined below) is equal to 1 at the event horizon. The purpose of this paper is to show that this is not the case.
In Schwarzschild coordinates, the metric of the Schwarzschild spacetime takes the well known form,
$$ds^2=-\left(1-\frac{2m}{r}\right)dt^2+\left(1-\frac{2m}{r}\right)^{-1}dr^2+r^2\left(d\theta ^2+\mathrm{sin}^2\theta d\phi ^2\right).$$
(1)
For $`r>2m`$, the Kruskal coordinates $`(x^{},t^{})`$ relate to these by,
$$\{\begin{array}{ccc}x^{\prime 2}-t^{\prime 2}=\left(\frac{r-2m}{2m}\right)e^{r/2m}\hfill & & \\ & & \\ t^{\prime }=\mathrm{tanh}\left(\frac{t}{4m}\right)x^{\prime }\hfill & & \end{array}$$
(2)
In these coordinates the metric takes the form ,
$$ds^2=\frac{32m^3}{r}e^{-r/2m}(-dt^{\prime 2}+dx^{\prime 2})+r^2\left(d\theta ^2+\mathrm{sin}^2\theta d\phi ^2\right).$$
(3)
A Kruskal observer is one which maintains the space-like coordinate $`x^{}`$ constant and consequentely, from (3), it verifies,
$$\frac{32m^3}{re^{r/2m}}\left(\frac{dt^{}}{d\tau }\right)^2=1.$$
(4)
Differentiating (2) we get,
$`{\displaystyle \frac{dr}{d\tau }}={\displaystyle \frac{8m^2}{e^{r/2m}r}}\left(x^{\prime }{\displaystyle \frac{dx^{\prime }}{d\tau }}-t^{\prime }{\displaystyle \frac{dt^{\prime }}{d\tau }}\right),`$ $`{\displaystyle \frac{dt}{d\tau }}=\left(x^{\prime }{\displaystyle \frac{dt^{\prime }}{d\tau }}-t^{\prime }{\displaystyle \frac{dx^{\prime }}{d\tau }}\right){\displaystyle \frac{8m^2}{e^{r/2m}(r-2m)}}.`$ (5)
Using $`dx^{}=0`$ and (4) we can write the following equation :
$$\left(1-\frac{2m}{r}\right)\left(\frac{dt}{d\tau }\right)^2-\left(1-\frac{2m}{r}\right)^{-1}\left(\frac{dr}{d\tau }\right)^2=\frac{2m(x^{\prime 2}-t^{\prime 2})}{e^{r/2m}(r-2m)}=1,$$
(6)
meaning the Kruskal observer follows a radial trajectory.
Consider now a material particle along a radial ingoing trajectory in region I. Its velocity, measured by a Kruskal observer is simply
$$v=\frac{dx^{}}{dt^{}},$$
(7)
since (3) is diagonal with $`g_{x^{}x^{}}=g_{t^{}t^{}}`$. Dividing one of the equations (5) by the other and solving for $`v`$ we obtain,
$$v=\frac{1+t^{\prime }/x^{\prime }\frac{dt}{dr}(1-2m/r)}{t^{\prime }/x^{\prime }+\frac{dt}{dr}(1-2m/r)},$$
(8)
which is the equation (20) of .
At the event horizon, separating regions I and II, the coordinates take the values: $`r=2m`$, $`t=+\mathrm{\infty }`$, $`t^{\prime }/x^{\prime }=\mathrm{tanh}(t/4m)=1`$ and $`x^{\prime }\ne 0`$. Apparently, setting $`t^{\prime }/x^{\prime }=1`$ shows that when the particle, following any trajectory described by $`dt/dr`$, and the observer intersect at the horizon, the velocity measured by the latter is 1. However, $`v`$ is a function of 2 coordinates ($`r`$ and $`t`$), and both limits must be taken simultaneously.
To illustrate this point, consider the movement of a particle $`p`$ in special relativity, relative to 2 referencials $`S`$ and $`S^{}`$ with all the movements parallel to each other. The well known expression for the addition of velocities is,
$$v_{p/S^{\prime }}^{\prime }=\left(\frac{v_{p/S}-V_{S^{\prime }/S}}{1-v_{p/S}V_{S^{\prime }/S}}\right).$$
(9)
If both particle and $`S^{}`$ are moving at the speed of light with respect to $`S`$, their relative velocity is not necessarily 1, the expression giving $`\frac{0}{0}`$. However, if we take only the limit $`v_{p/S}=1`$ we obtain identical expressions in the numerator and denominator.
So at this point we cannot determine the value of $`v`$ in (8) in general. Let us assume a specific trajectory $`dt/dr`$: a geodesic. In this case there is a conserved quantity for motion ,
$$E=\frac{dt}{d\tau }\left(1-\frac{2m}{r}\right).$$
(10)
Inserting this into (1) we get, for the ingoing geodesic,
$$\frac{dt}{dr}=-E\left(1-\frac{2m}{r}\right)^{-1}\left[E^2-\left(1-\frac{2m}{r}\right)\right]^{-1/2}$$
(11)
and (8) becomes,
$$v=\frac{\sqrt{E^2-(1-2m/r)}-E\mathrm{tanh}(t/4m)}{\sqrt{E^2-(1-2m/r)}\mathrm{tanh}(t/4m)-E}.$$
(12)
If we Taylor expand the square root in the vicinity of the horizon we obtain,
$$v=\frac{1-\mathrm{tanh}(t/4m)-(r-2m)/(2rE^2)}{\mathrm{tanh}(t/4m)-1-(r-2m)/(2rE^2)\mathrm{tanh}(t/4m)}=-\frac{\epsilon -\delta /E^2}{\epsilon +\mathrm{tanh}(t/4m)\delta /E^2},$$
(13)
where,
$`\epsilon =1-\mathrm{tanh}(t/4m),`$ $`\delta =(r-2m)/2r.`$ (14)
In this form we see that in the denominator there is a sum with $`\epsilon `$ while in the numerator there is a subtraction from the same factor. We conclude that the modulus of $`v`$ is less than 1.
This expression also shows that, depending on the relative size of $`\epsilon `$ and $`\delta `$, $`dx^{\prime }/dt^{\prime }`$ can be negative or positive, unlike $`dr/dt`$ which is obviously always negative in an ingoing geodesic. For null geodesics, where $`E=\mathrm{\infty }`$, we must obtain $`v=-\epsilon /\epsilon =-1`$.
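As a numerical illustration of Eqs. (11)–(13), the sketch below (in units $`G=c=1`$, with the illustrative choices $`m=1`$ and $`E=1`$) integrates $`dt/dr`$ along an ingoing radial geodesic whose Schwarzschild time is set to zero as it passes $`r=3m`$, and evaluates the Kruskal-frame velocity as the horizon is approached; the grid, the starting radius, and the choice of time origin (which selects a particular geodesic and hence a particular limiting value of $`v`$) are arbitrary.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

m, E = 1.0, 1.0                                   # G = c = 1; E = 1: released from rest at infinity
# ingoing radial grid, descending from r = 3m toward the horizon (log-spaced in r - 2m)
r = 2.0 * m + np.logspace(0.0, -8.0, 4000) * m
f = 1.0 - 2.0 * m / r
dt_dr = -E / (f * np.sqrt(E**2 - f))              # Eq. (11), ingoing branch
t = cumulative_trapezoid(dt_dr, r, initial=0.0)   # Schwarzschild time, with t = 0 chosen at r = 3m
T = np.tanh(t / (4.0 * m))
s = np.sqrt(E**2 - f)
v = (s - T * E) / (T * s - E)                     # Eq. (12), velocity measured in the Kruskal frame
for k in (0, 2000, 3999):
    print(r[k] - 2.0 * m, v[k])                   # |v| remains below 1 as r -> 2m
```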
This discussion is reminiscent of an equivalent one that took place over 20 years ago . In that case the problem was not posed in terms of Kruskal coordinates but was solved with the use of another set of ingoing coordinates. It is an example of a different observer who measures a sub-luminous velocity at the event horizon.
# The Core Structure of Galaxy Clusters from Gravitational Lensing
## 1 Introduction
The extraordinary lensing power of galaxy systems first put in evidence by the discovery of giant arcs (Lynds & Petrosian 1986, Soucail et al 1987a) provides an invaluable tool for investigating the gravitational potential of galaxy clusters at moderate redshift. For example, early studies of the giant arcs were instrumental in cementing the view that a monolithic dark matter halo dominates the cluster gravitational potential, to which individual galaxies contribute relatively small local perturbations (Soucail et al 1987b). Detailed studies of the location, morphology, and magnification of giant arcs provide further insight into the mass structure within clusters (AbdelSalam et al 1998, Kneib et al 1996).
The most straightforward interpretation of the location of giant tangential arcs is that they occur roughly at the “Einstein radius” of a cluster, allowing accurate estimates of the total projected mass enclosed within the arc if the angular diameter distances to the lensing cluster and to the arc source galaxy can be measured. This method remains to date the most direct estimator of the total mass projected onto the core of a galaxy cluster. Accurate estimates of the core surface mass density within the arc may be used to place strong upper limits on the core radii of isothermal cluster mass models, and early results were puzzling. Narayan, Blandford & Nityananda (1984) noted that the small cores required to explain the properties of giant arcs were at odds with the relatively large core radii derived from X-ray and optical observations. Small, but finite, core radii were also required to account for observations of radial arcs in clusters such as MS2137-23 (Fort et al 1992) and A370 (Smail et al 1995).
A simple explanation for this discrepancy was proposed by Navarro, Frenk & White (1996, hereafter NFW96) on the basis of cosmological N-body simulations of cluster formation. These authors found that isothermal models are in general a poor approximation to the structure of dark halos formed in N-body simulations, and proposed a simple model to describe the structure of dark matter halos. In this model, which we shall refer to as the “NFW” model, the density profile is shallower than a singular isothermal sphere near the center, and steepens gradually outwards to become steeper than isothermal far from the center. This result explains naturally why models based on the isothermal sphere fail to account simultaneously for lensing and X-ray observations. X-ray core radii, which correspond to the radius where the local slope of the mass profile is close to the isothermal value, occur in NFW halos at different radii than giant arcs, whose location trace the radius where the average inner surface density equals the “critical” lensing value (see eq. 9 below).
Subsequent work by Navarro, Frenk & White (1997, hereafter NFW97) showed that the structure of dark halos is approximately independent of mass, power spectrum, and cosmological parameters, and demonstrated how a simple algorithm may be used to calculate the mass profile of halos of arbitrary mass in hierarchical universes. The procedure applies only to halos that are close to dynamical equilibrium and assumes spherical symmetry, but it has no free parameters and can be used to predict the location and magnification of giant tangential arcs (as well as of radial arcs) once the velocity dispersion of the cluster and the angular diameter distances to the source galaxy and the cluster lens are specified. Thus, in principle, lensing observations may be used to test directly the overall applicability of the results of N-body simulations to the structure of dark halos on the scale of galaxy clusters.
In this paper we compile the results of gravitational lensing studies of 24 galaxy clusters to investigate whether the properties of gravitationally lensed arcs are consistent with the NFW dark halo model. We discuss the lensing properties of NFW halos in §2 and summarize the main properties of our dataset in §3. Section 4 presents our main results. In §5 we discuss the implications of our model, and in §6 we summarize our main findings.
## 2 Lensing Properties of NFW halos
### 2.1 The NFW mass profile
As discussed by NFW96 and NFW97, the spherically averaged density profiles of equilibrium dark matter halos formed in cosmological N-body simulations of hierarchically clustering universes are well represented by a simple formula,
$$\frac{\rho (r)}{\rho _{crit}}=\frac{\delta _c}{(r/r_s)(1+r/r_s)^2},$$
$`(1)`$
where $`\rho _{crit}=3H^2/8\pi G`$ is the critical density for closure, $`H(z)`$ is the Hubble parameter,<sup>1</sup><sup>1</sup>1We parameterize the present value of Hubble’s constant by $`H_0=100h`$ km s<sup>-1</sup>Mpc<sup>-1</sup>. $`\delta _c`$ is a dimensionless characteristic density contrast, and $`r_s`$ is a scale radius. If we define the mass of a halo, $`M_{200}`$, as the total mass of a sphere of mean density $`200`$ times critical, eq.(1) above has a single free parameter once the halo mass is specified. (The radius of this sphere, $`r_{200}`$, is sometimes called the “virial” radius.) The free parameter can be taken to be the characteristic density contrast $`\delta _c`$ or the “concentration” parameter, $`c=r_{200}/r_s`$. These two parameters are related by
$$\delta _c=\frac{200}{3}\frac{c^3}{\mathrm{ln}(1+c)-c/(1+c)}.$$
$`(2)`$
The numerical experiments of NFW97 indicate that $`\delta _c`$ is determined by the mean matter density of the universe at the redshift of collapse of each halo, i.e. $`\delta _c(M_{200})\propto (1+z_{coll}(M_{200}))^3`$. Halos of increasing mass collapse later in hierarchical universes, and therefore $`\delta _c`$ and $`c`$ are monotonically decreasing functions of $`M_{200}`$. Collapse redshifts can be easily calculated once the cosmological model is specified, and NFW97 describe a simple algorithm that can be used to calculate the density profile of halos of arbitrary mass in cold dark matter dominated universes (see the Appendix of NFW97 for details).
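As a small illustration of Eq. (2), the sketch below converts between the concentration and the characteristic density contrast, and also inverts the relation numerically; the sample values and the bracketing interval are illustrative only, and this does not reproduce the NFW97 algorithm, which additionally requires the collapse-redshift calculation.

```python
import numpy as np
from scipy.optimize import brentq

def mu(c):
    """Dimensionless NFW mass function appearing in Eq. (2)."""
    return np.log(1.0 + c) - c / (1.0 + c)

def delta_c(c):
    """Characteristic density contrast for a given concentration, Eq. (2)."""
    return (200.0 / 3.0) * c**3 / mu(c)

def concentration(dc):
    """Numerical inverse of Eq. (2): concentration for a given density contrast."""
    return brentq(lambda c: delta_c(c) - dc, 0.1, 100.0)

for c in (5.0, 10.0):
    print(c, delta_c(c), concentration(delta_c(c)))   # round-trip check
```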
### 2.2 Lensing by NFW halos
The lensing properties of axially symmetric lenses is described in detail in Schneider, Ehlers & Falco (1992) and in the many reviews of gravitational lensing (see, e.g., Blandford & Narayan 1992, Narayan & Bartelmann 1995). Provided that the gravitational potential causing the deflection is small, $`|\mathrm{\Phi }|c^2`$, the lens equation describing the mapping of the source plane into the image plane is very simple. In terms of the angular diameter distances to the lens ($`D_l`$), to the source ($`D_s`$), and between lens and source ($`D_{ls}`$), a lens is locally described by the Jacobian matrix of the mapping,
$$A=\left(\delta _{ij}-\frac{\partial ^2\psi }{\partial \theta _i\partial \theta _j}\right),$$
$`(3)`$
where $`\stackrel{}{\theta }=(\theta _i,\theta _j)`$ are angular coordinates relative to the optical axis, and $`\psi `$ is the projected Newtonian potential of the lens,
$$\psi (\stackrel{}{\theta })=\frac{D_{ls}}{D_lD_s}\frac{2}{c^2}\int _{-\infty }^{+\infty }\mathrm{\Phi }(D_l\stackrel{}{\theta },z)\,dz.$$
$`(4)`$
The lensing properties of NFW halo models are fully specified by the radial dependence of the surface mass density in units of the critical surface mass density $`\mathrm{\Sigma }_{cr}`$; the “convergence”,
$$\kappa (x)=\frac{\mathrm{\Sigma }(x)}{\mathrm{\Sigma }_{cr}},$$
$`(5)`$
where $`\mathrm{\Sigma }_{cr}`$ depends on the lens-source configuration through
$$\mathrm{\Sigma }_{cr}=\frac{c^2}{4\pi G}\frac{D_s}{D_lD_{ls}},$$
$`(6)`$
and $`x=r/r_s`$ is the radius in units of the NFW scale radius. Following the derivation of Bartelmann (1996), the mass inside radius $`x`$ can be described by the dimensionless function,
$$m(x)=2\int _0^x\kappa (y)\,y\,dy,$$
$`(7)`$
which can be used to find the eigenvalues of the Jacobian matrix $`A`$,
$$\lambda _r=1-\frac{d}{dx}\frac{m}{x}$$
$`(8)`$
$$\lambda _t=1-\frac{m}{x^2}.$$
$`(9)`$
The tangential and radial critical curves occur at $`x_t`$ and $`x_r`$, where $`\lambda _t=0`$ and $`\lambda _r=0`$, respectively. The surface density associated with an NFW model (eq.1) is
$$\mathrm{\Sigma }(x)=\frac{2\delta _c\rho _{crit}r_s}{x^2-1}f(x),$$
$`(10)`$
with
$$f(x)=\{\begin{array}{cc}1-(2/\sqrt{x^2-1})\mathrm{tan}^{-1}\sqrt{(x-1)/(x+1)}\hfill & \text{if }x>1\text{;}\hfill \\ 1-(2/\sqrt{1-x^2})\mathrm{tanh}^{-1}\sqrt{(1-x)/(1+x)}\hfill & \text{if }x<1\text{;}\hfill \\ 0\hfill & \text{if }x=1\text{.}\hfill \end{array}$$
$`(11)`$
Defining $`\kappa _s=\delta _c\rho _{crit}r_s/\mathrm{\Sigma }_{cr}`$, we can write the convergence as
$$\kappa (x)=\frac{2\kappa _s}{x^2-1}f(x),$$
$`(12)`$
and the dimensionless mass $`m(x)`$ as,
$$m(x)=4\kappa _sg(x),$$
$`(13)`$
with
$$g(x)=\mathrm{ln}\frac{x}{2}+\{\begin{array}{cc}(2/\sqrt{x^2-1})\mathrm{tan}^{-1}\sqrt{(x-1)/(x+1)}\hfill & \text{if }x>1\text{;}\hfill \\ (2/\sqrt{1-x^2})\mathrm{tanh}^{-1}\sqrt{(1-x)/(1+x)}\hfill & \text{if }x<1\text{;}\hfill \\ 1\hfill & \text{if }x=1\text{.}\hfill \end{array}$$
$`(14)`$
Equations (10)-(14), together with the algorithm to compute halo parameters outlined by NFW97, can be used to estimate the location of tangential and radial arcs for halos of arbitrary mass.
Finally, we note that the arc thickness is controlled by the angular size of the source and by the value of the convergence at the critical line. As demonstrated by Kovner (1989) and Hammer (1991), the thin dimension of an arc is magnified by a factor of order $`\mu \approx 1/[2(1-\kappa )]`$ relative to its original angular size. Tangential arcs thinner than the source thus require $`\kappa (x_t)<0.5`$. We use these results below to analyze the constraints posed by observations of giant arcs on the structure of galaxy clusters.
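The following Python sketch collects eqs. (8)–(14) and locates the critical curves by root finding. It is a minimal illustration rather than the code used for the figures; the value of $`\kappa _s`$ is arbitrary and the function names are ours.

```python
import numpy as np
from scipy.optimize import brentq

def f(x):
    """Eq. (11)."""
    if x > 1.0:
        return 1.0 - 2.0 / np.sqrt(x**2 - 1.0) * np.arctan(np.sqrt((x - 1.0) / (x + 1.0)))
    if x < 1.0:
        return 1.0 - 2.0 / np.sqrt(1.0 - x**2) * np.arctanh(np.sqrt((1.0 - x) / (1.0 + x)))
    return 0.0

def g(x):
    """Eq. (14)."""
    if x > 1.0:
        extra = 2.0 / np.sqrt(x**2 - 1.0) * np.arctan(np.sqrt((x - 1.0) / (x + 1.0)))
    elif x < 1.0:
        extra = 2.0 / np.sqrt(1.0 - x**2) * np.arctanh(np.sqrt((1.0 - x) / (1.0 + x)))
    else:
        extra = 1.0
    return np.log(x / 2.0) + extra

def kappa(x, ks):
    """Convergence, eq. (12) (the x -> 1 limit should be handled separately if needed)."""
    return 2.0 * ks * f(x) / (x**2 - 1.0)

def m(x, ks):
    """Dimensionless projected mass, eq. (13)."""
    return 4.0 * ks * g(x)

def lambda_t(x, ks):
    """Tangential eigenvalue, eq. (9)."""
    return 1.0 - m(x, ks) / x**2

def lambda_r(x, ks, h=1.0e-5):
    """Radial eigenvalue, eq. (8), using a numerical derivative of m(x)/x."""
    dmx = (m(x + h, ks) / (x + h) - m(x - h, ks) / (x - h)) / (2.0 * h)
    return 1.0 - dmx

ks = 0.3                                                   # illustrative value of kappa_s
x_t = brentq(lambda x: lambda_t(x, ks), 1.0e-4, 50.0)      # tangential critical curve
x_r = brentq(lambda x: lambda_r(x, ks), 1.0e-4, x_t)       # radial critical curve
mu_r = 1.0 / (2.0 * (1.0 - kappa(x_t, ks)))                # radial magnification at x_t
print(x_t, x_r, kappa(x_t, ks), mu_r)
```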
## 3 The Dataset
The main properties of clusters in our sample are listed in Table 1. The sample includes all systems in the recent compilation by Wu et al (1998) with measured velocity dispersion. The table lists the following information for each cluster: (1) cluster name, (2) redshift, (3) velocity dispersion in km s<sup>-1</sup>, (4) designation of the arc used in the analysis (as labeled in the appropriate reference), (5) the redshift of the arc (if available), (6) the arc distance from the center of the cluster, typically chosen to coincide with that of the brightest cluster galaxy, in arcsec, (7) the radial half-light thickness of the arc, in arcsec (upper limits correspond to the seeing of the observation when the arc is unresolved), (8) telescope and instrument used, and (9) references for the arc width.
Some values in Table 1 differ from those adopted by Wu et al (1998). A370: The giant arc in this strongly bimodal cluster is 10” from the nearest bright galaxy and 26” from the center of mass of the cluster. As a compromise we take the clustercentric distance of the arc to be 18”. AC114: The most prominent arc, A0, is almost certainly a singly imaged source located beyond the cluster’s critical curve. The location of the critical curve is thus not well determined but must lie somewhere inside that radius. There is a multiply imaged system in the same cluster, $`S1/S2`$. We estimate the clustercentric distance of the tangential critical line to be the average of A0 and $`S1/S2`$, or 38”. MS0440: We take the velocity dispersion of this cluster to be 872 km s<sup>-1</sup> (Gioia et al 1998) rather than 606 km s<sup>-1</sup> (Carlberg et al. 1996), because the former value is more consistent with the cluster’s temperature, $`T_X=5.5`$ keV, and its (0.1-2.4) keV X-ray luminosity, $`L_X=7.125\times 10^{43}h^{-2}`$ erg s<sup>-1</sup>. Cl0024: The redshift of the source galaxy has been determined spectroscopically, $`z_s\approx 1.7`$ (Broadhurst et al. 1999). A2124: Data for this cluster have been taken from Blakeslee & Metzger (1998). Cl0016+1609: Data for this cluster have been taken from Lavery (1996).
The column labeled $`\delta \theta _t`$ in Table 1 lists the half-light radii of the giant arcs in the radial direction (or the half-seeing if the arc is unresolved). In clusters where more than one tangential arc has width information, the average of the available widths is listed. Except for Cl0302, where Mathez et al (1992) measure an arc half-width of 0.6” for the A1/A1W pair and Luppino et al (1999) quote $`<0.25`$” for the same arc, width estimates from different authors agree to within the errors in all clusters. We take the average of the two discrepant values for Cl0302.
## 4 Results
### 4.1 Main trends in the dataset
Figures 1 and 2 summarize the main trends in the dataset. The top panel in Figure 1 shows the velocity dispersion of the cluster ($`\sigma `$) versus the clustercentric distance of the giant tangential arc ($`\theta _t`$). Assuming circular symmetry, arc distances can also be expressed in terms of the total projected mass within the arc radius, $`M_{core}=\pi \mathrm{\Sigma }_{cr}(\theta _tD_l)^2`$. This mass estimate depends, through $`\mathrm{\Sigma }_{cr}`$ and $`D_l`$, on the angular diameter distances to source and lens. We have assumed a simple Einstein-de Sitter cosmological model to compute the values of $`M_{core}`$ plotted in the bottom panel of Figure 1. Arcs without measured redshifts are assumed to be at $`z_t=1`$ (we shall hereafter use subscripts $`t`$ and $`r`$ to refer to quantities associated with the sources of tangential and radial arcs, respectively).
The first thing to note from Figure 1 is that core mass and lensing power seem to correlate only weakly with velocity dispersion in clusters with $`\sigma >1000`$ km s<sup>-1</sup>. This is at odds with the scalings expected from simple models. For comparison, the dotted lines in these panels indicate the Einstein radius and the core mass of a singular isothermal sphere at $`z_l=0.3`$ lensing a source galaxy located at $`z_t=1.0`$ in an Einstein-de Sitter universe. The strong dependence on $`\sigma `$ expected in this simple model ($`M_{core}\propto \sigma ^4`$) is clearly absent in the data. We note as well that several clusters are more powerful lenses than singular isothermal spheres, indicating a large central concentration of mass in some of these systems.
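For reference, the singular-isothermal-sphere scalings used for the dotted lines can be sketched as follows; the Einstein–de Sitter distance formula is standard, but the implementation and the sample velocity dispersions are only illustrative.

```python
import numpy as np

C_KMS = 2.998e5     # speed of light [km/s]
H0 = 100.0          # Hubble constant in h km/s/Mpc; distances are then in h^-1 Mpc

def d_ang_eds(z1, z2):
    """Angular diameter distance from z1 to z2 (> z1) in an Einstein-de Sitter universe."""
    return (2.0 * C_KMS / H0) / (1.0 + z2) * (1.0 / np.sqrt(1.0 + z1) - 1.0 / np.sqrt(1.0 + z2))

def theta_E_sis(sigma, zl=0.3, zs=1.0):
    """Einstein radius of a singular isothermal sphere [arcsec]; sigma in km/s."""
    dls, ds = d_ang_eds(zl, zs), d_ang_eds(0.0, zs)
    return 4.0 * np.pi * (sigma / C_KMS) ** 2 * dls / ds * 206265.0

# M_core = pi * Sigma_cr * (theta_E * D_l)^2 then scales as sigma^4 at fixed redshifts.
for s in (1000.0, 1500.0, 2000.0):
    print(s, theta_E_sis(s))    # ~17, 38, 67 arcsec for z_l = 0.3, z_s = 1
```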
The filled circles in Figure 2 indicate the (half-light) radial widths of the giant tangential arcs quoted in the literature (see Table 1 for references). Open circles are upper limits to the width derived from the seeing at the time of observation in cases where the arc is unresolved. A clear trend is observed between width and cluster velocity dispersion: arcs become thicker as $`\sigma `$ increases. The trend is highly significant. Treating upper limits as measurements, $`96.9\%`$ of randomly reshuffled ($`\sigma `$, $`\delta \theta _t`$) samples have a smaller Kendall $`\tau `$ correlation coefficient than the real sample. Neglecting the one deviant point (which corresponds to the arc in Cl0016, deemed unresolved in an HST WFPC2 image by Lavery 1996) increases the significance of the correlation as measured by this test to $`99.1\%`$. Because upper limit points are confined exclusively to the low-$`\sigma `$ section of the diagram, our procedure most likely underestimates the significance of the correlation.
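The reshuffling test quoted above can be reproduced schematically as below; the arrays `sigma` and `width` stand for the corresponding columns of Table 1 (with upper limits treated as measurements), and the function name is ours.

```python
import numpy as np
from scipy.stats import kendalltau

def correlation_significance(sigma, width, n_shuffle=10000, seed=0):
    """Fraction of reshuffled (sigma, width) samples with a smaller Kendall tau than the data."""
    rng = np.random.default_rng(seed)
    tau_real, _ = kendalltau(sigma, width)
    taus = [kendalltau(sigma, rng.permutation(width))[0] for _ in range(n_shuffle)]
    return np.mean(np.array(taus) < tau_real)
```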
As discussed in §2.2, the ratio ($`\mu _r`$) between the radial thickness of tangential arcs and the intrinsic angular extent of the source depends directly on the value of the convergence at the location of the arc, $`\mu _r=1/\{2[1-\kappa (x_t)]\}`$. Within this context, the correlation shown in Figure 2 implies that the convergence at the arc location, $`\kappa (x_t)`$, increases systematically with $`\sigma `$. The actual values of $`\kappa (x_t)`$ depend on the intrinsic angular size and redshift of the source, which we shall now examine. From Table 1, many of the sources with measured redshifts are at $`z_t\sim 0.7`$. We compare in Figure 3 their “lensed half-widths” $`\delta \theta _t`$ with the angular size of field galaxies at various redshifts: (i) the half-light radii of galaxies at intermediate redshifts in the WFPC Medium Deep Survey (Mutz et al 1994), (ii) the half-light radii of $`z\sim 1`$ galaxies in the CFRS sample of Lilly et al (1998), and (iii) the half-light radii of “Ly-break” galaxies at $`z\sim 3`$ (starred symbols in Figure 3, from Giavalisco et al 1996).
The data in Figure 3 imply that the angular size of galaxies decreases monotonically out to $`z\sim 3`$. Taken at face value, intrinsic galaxy sizes also seem to decrease as a function of $`z`$, in agreement with predictions from hierarchical models of galaxy formation (Mo, Mao & White 1998). One crude estimate of the evolution may be made by comparing the observational data with the angular extent of a “standard rod” of fixed proper size equal to the average half-light radius of bright ($`L_*`$) spirals, $`4.4h^{-1}`$ kpc (top set of curves, Mutz et al. 1994), and with the angular radius of sources whose proper sizes scale in inverse proportion to $`(1+z)`$ (bottom set of lines). The actual evolution in source size out to $`z\sim 1`$ is approximately intermediate between these two somewhat extreme examples. A word of caution applies to this conclusion. The three surveys shown in Figure 3 have been conducted in different passbands, are subject to different selection biases, and may sample intrinsically different source populations. For example, the Ly-break galaxies seem to be forming stars preferentially in their central regions and therefore would appear substantially smaller in the rest-frame UV used to estimate their sizes than the more passively evolving galaxies examined by the other groups.
With this caveat, we observe that arcs with measured redshifts (filled circles in Figure 3) seem to be of angular size comparable to, or thinner than, field galaxies at similar redshifts. In particular, half-light radii of galaxies in the CFRS survey exceed the arc widths by about $`50\%`$. This may be due in part to the fact that the magnitude-limited CFRS sample is biased towards the bright, large galaxies present at that redshift, while heavily magnified arc sources may be intrinsically fainter and smaller. We conclude, rather conservatively, that the radial magnification of giant tangential arcs probably does not exceed unity, $`\mu _r<1`$, implying that $`\kappa (x_t)<0.5`$.
### 4.2 Interpretation of the observed trends
The trends highlighted in Figures 1 and 2 and, in particular, the correlation (or lack thereof) between $`\sigma `$ and $`\theta _t`$ suggest that the lensing properties of galaxy clusters differ significantly from those of simple circularly symmetric models such as NFW or the singular isothermal sphere. A number of effects may be responsible for the disagreement; e.g., asphericity in the mass distribution, uncertainties in velocity dispersion estimates, substructure associated with departures from dynamical equilibrium, and the presence of a massive central galaxy.
Let us consider first the effects of asphericity in the mass distribution. N-body models suggest that the distribution of mass in equilibrium galaxy clusters deviates significantly from spherical symmetry, and is well approximated by triaxial shapes maintained by anisotropic velocity dispersion tensors (Thomas et al 1998 and references therein). Estimates of the cluster velocity dispersion and giant arc properties are therefore sensitive to the relative orientation between the principal axes and the line of sight to the cluster. For example, line-of-sight velocity dispersion estimates of cigar-shaped clusters observed end-on would lead to systematic overestimation of the true average $`\sigma `$, but $`\theta _t`$ would be similarly affected by the favorable orientation, in a manner that preserves a firm correlation between $`\sigma `$ and $`\theta _t`$.
Another factor that may affect the observed relation between $`\sigma `$ and $`\theta _t`$ is observational uncertainty in the $`\sigma `$ estimates. Velocity dispersions are notoriously difficult to estimate properly, as the error budget is generally dominated by systematic uncertainties such as cluster membership rather than by strict measurement error (Zabludoff et al 1990, Carlberg et al 1996). The magnitude of the errors required to cause the apparent lack of correlation between $`\sigma `$ and $`\theta _t`$ (of order $`1000`$ km s<sup>-1</sup>) appears, however, excessive. We conclude that projection effects and observational error may contribute significantly to the scatter in correlations between lensing and dynamical properties but are unlikely to be the source of the trends shown in Figures 1 and 2.
A simpler alternative is that the projected surface density profile inside $`\theta _t`$ effectively steepens towards lower $`\sigma `$. Indeed, the effective slope of the lensing potential can be deduced in a simple model-independent way, based entirely on observables. Let us approximate the convergence inside the tangential arc by a single power law, $`\kappa (R)=\kappa _0R^{-\alpha }`$, where $`\kappa _0`$ and $`\alpha `$ are functions of $`\sigma `$ and $`R`$ is the projected radius. Applying the condition that inside the location of the tangential arc the average value of the convergence is unity, and that the value of $`\kappa (R_t)`$ at the location of the arc is known from the arc’s width magnification, we derive the relation $`\alpha =2[1-\kappa (R_t)]`$. In other words, the effective slope of the projected density profile is inversely proportional to the width magnification of the arc. From the data presented in Figures 1 and 2 we see that a $`\sigma \approx 1000`$ km s<sup>-1</sup> cluster has $`\kappa (R_t)\approx 0.35`$ and $`\alpha \approx 1.3`$. On the other hand, $`\sigma _v>1500`$ km s<sup>-1</sup> clusters have much shallower profiles; $`\kappa (R_t)\approx 0.6`$ and $`\alpha \approx 0.8`$. The tangential arc properties of clusters in our sample imply that the effective lensing potential is steeper (shallower) than isothermal in $`\sigma <1250`$ ($`\sigma >1250`$) km s<sup>-1</sup> clusters. (A singular isothermal sphere has $`\alpha =1`$ in this notation.)
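The relation quoted above follows directly from the definition of the tangential critical curve; a brief derivation (valid for $`0<\alpha <2`$) is

$$\overline{\kappa }(R)=\frac{2}{R^2}\int _0^R\kappa _0\,R^{\prime \,1-\alpha }\,dR^{\prime }=\frac{2\,\kappa (R)}{2-\alpha },\qquad \overline{\kappa }(R_t)=1\;\Rightarrow \;\kappa (R_t)=1-\frac{\alpha }{2}\;\Leftrightarrow \;\alpha =2[1-\kappa (R_t)]=\frac{1}{\mu _r}.$$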
As noted in §4.1, the conclusion that the slope of the inner density profile is a strong function of cluster mass is difficult to reconcile with the nearly scalefree structure of cold dark matter halos found in cosmological N-body simulations. This discrepancy afflicts all models where the core structure of dark halos is approximately independent of mass. In particular, slight modifications to the NFW profile that preserve its independence of scale, such as those proposed by Moore et al (1998) and Kravtsov et al (1998), would also fail to reproduce the lensing observations.
We investigate below whether the lensing data may be reconciled with scalefree models such as NFW by assuming that the lensing power of clusters has been significantly boosted by the presence of dynamical substructure and of massive central galaxies, in a way that mimics the correlations between effective slope and $`\sigma `$ pointed out above. We emphasize that qualitatively our conclusion applies to all scalefree models of halo structure but we adopt below the NFW description in order to derive quantitative estimates of the lensing contribution by the central galaxy and by substructure.
### 4.3 Comparison with NFW halo models
As discussed in §2, the lensing properties of NFW halos can be computed as a function of velocity dispersion once the lens-source configuration is specified. This allows us to compare the lensing data directly with the predictions of the model. The comparison depends on the values of the cosmological parameters, since these control both the halo parameters (NFW97) and the angular diameter distances of each lens-source configuration. Qualitatively, however, none of the conclusions we discuss below depend on this choice of cosmological model. For illustration, we explore first the lensing properties of NFW clusters in a CDM-dominated universe with $`\mathrm{\Omega }_0=0.2`$, $`\mathrm{\Lambda }=0.8`$, and $`h=0.7`$. The power spectrum has been normalized by $`\sigma _8=1.13`$ in order to match the present day abundance of galaxy clusters, as prescribed by Eke, Cole & Frenk (1996). As in Figure 1, we assume that arcs without measured redshift are located at $`z_t=1`$.
The top-left panel in Figure 4 shows the ratio between the observed and predicted “Einstein radius” as a function of cluster velocity dispersion. Clearly, CDM halos formed in this cosmology are in general less powerful lenses than actual clusters. This result is not unexpected given our previous discussion: most $`\sigma \approx 1000`$ km s<sup>-1</sup> lenses are more powerful than singular isothermal spheres, let alone models with shallower inner density profiles such as NFW. The magnitude of the discrepancy depends strongly on $`\sigma `$. Clusters with $`\sigma \approx 1000`$ km s<sup>-1</sup> have Einstein radii about a factor of three larger than expected for NFW models. On the other hand, the light-deflecting power of the most massive clusters in the sample (i.e. those with $`\sigma \approx 1500`$–$`2000`$ km s<sup>-1</sup>) agrees well with that of NFW models. (We use for the comparison the halo mean velocity dispersion within the virial radius, $`\sigma _{200}=(GM_{200}/2r_{200})^{1/2}`$.)
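The comparison velocity dispersion can be computed directly from the halo mass; the sketch below (our own illustration, for the fiducial $`\mathrm{\Lambda }`$CDM parameters quoted above) evaluates $`r_{200}`$ from the mean-overdensity definition and then $`\sigma _{200}=(GM_{200}/2r_{200})^{1/2}`$.

```python
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
MSUN = 1.989e30        # kg
MPC = 3.086e22         # m

def sigma_200(m200_msun, h=0.7, z=0.0, omega_m=0.2, omega_l=0.8):
    """sigma_200 = (G M_200 / 2 r_200)^(1/2) in km/s, with M_200 defined at 200x critical."""
    Hz = 100.0 * h * np.sqrt(omega_m * (1.0 + z)**3 + omega_l) * 1.0e3 / MPC   # H(z) in s^-1
    rho_crit = 3.0 * Hz**2 / (8.0 * np.pi * G)
    m200 = m200_msun * MSUN
    r200 = (3.0 * m200 / (800.0 * np.pi * rho_crit)) ** (1.0 / 3.0)
    return np.sqrt(G * m200 / (2.0 * r200)) / 1.0e3

print(sigma_200(1.0e15))   # ~1000 km/s for a 10^15 Msun cluster at z = 0
```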
A compounding problem that afflicts cluster mass models with shallow inner density profiles such as NFW was noted by Bartelmann (1996) and concerns the radial magnification of tangential arcs: the convergence at the tangential critical curve is $`\kappa (x_t)>0.5`$, implying that the radial magnification exceeds unity. This is shown by the empty circles in the top-left panel of Figure 5. A crude estimate of the “true” radial magnification can be made for arcs with measured redshifts by assuming that the actual source half-light radius is $`4.4(1+z_t)^{-1/2}h^{-1}`$ kpc (see Figure 3). Values of $`\mu _r`$ computed this way are shown as thick crosses in Figure 5. The difficulty pointed out by Bartelmann (1996) is confirmed by our analysis: observed tangential arcs are roughly $`2`$-$`3`$ times thinner than expected from NFW model lenses.
### 4.4 Dependence on the cosmological parameters
Strictly speaking, the quantitative estimates of the disagreement between NFW models and observations presented above are valid only for the low-density $`\mathrm{\Lambda }`$CDM model adopted there, but qualitatively the conclusions are independent of the cosmological parameters. For example, assuming CDM universe models normalized to match the present day abundance of galaxy clusters, we find that changing $`\mathrm{\Omega }_0`$ from $`0.2`$ to $`1`$ (in flat or open geometries) modifies $`\theta _t/\theta _{t,pred}`$ and $`\mu _r`$ only by about $`10`$-$`20\%`$, a negligible effect compared to the effects of central galaxy and substructure we explore below.
### 4.5 The role of substructure
As discussed by Bartelmann, Steinmetz & Weiss (1995), the discrepancy shown in Figure 4 between observed and predicted tangential arc clustercentric distances may be ameliorated by considering the effects of substructure. Let us parameterize this effect by the outward displacement of the tangential arc relative to a circularly averaged NFW halo of the same velocity dispersion, $`f_t=\theta _t/\theta _{t,\mathrm{NFW}}`$. The distribution of $`f_t`$ has been determined from N-body simulations (Bartelmann & Steinmetz 1996) and is shown by the solid histogram in Figure 6; on average substructure pushes tangential arcs outwards by $`50\%`$ in radius. Since there is little indication either from observations or numerical simulations that substructure is a strong function of mass on the scales we probe here, we will assume that all clusters are affected equally, regardless of $`\sigma `$. The upper right panels of Figures 4 and 5 indicate the result of increasing $`\theta _{t,pred}`$ by $`50\%`$ in order to account for this effect. The error bars correspond to the first and third quartiles of the distribution of $`f_t`$ (Figure 6). Note that width magnifications are also reduced because the convergence at the tangential critical curve decreases as the arc moves outwards (Figure 5). Including substructure helps to narrow the gap, but it appears insufficient to reconcile fully the predictions of NFW halo models with observations.
### 4.6 The effects of a central massive galaxy
The interpretation advanced in §4.2 ascribes the remainder of the difference to the lensing contribution of a central massive galaxy, an ansatz that allows us to estimate its total projected mass within the tangential arc radius. This mass may be compared directly with the stellar mass in the central galaxy, which we estimate as follows. The typical absolute magnitude of brightest cluster galaxies is $`M_V-5\mathrm{log}h\approx -23.5`$ (Schombert 1986), which combined with rough upper limits to the stellar mass-to-light ratio derived from lensing of QSOs by isolated ellipticals, $`M/L_V\lesssim 15hM_{\odot }/L_{\odot }`$ (Keeton, Kochanek & Falco 1998), implies a total stellar mass of order $`M_g\approx 3\times 10^{11}h^{-1}M_{\odot }`$. Assuming that the galaxy mass profile is well approximated by a de Vaucouleurs law with effective radius $`r_{eff}=15h^{-1}`$ kpc, we recompute the predicted location of tangential arcs including this contribution and report the results in the bottom-left panels of Figures 4 and 5. The results assume that the structure of the dark halo is modified “adiabatically” by the presence of the central galaxy (see details in NFW96) and are insensitive to our choice of effective radius provided that $`r_{eff}\ll \theta _tD_l\approx 60`$-$`120h^{-1}`$ kpc; i.e., provided that most of the galaxy’s mass is contained within the tangential critical line. Most brightest cluster galaxies studied so far easily meet this criterion (Schombert 1986).
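The statement that most of the galaxy’s mass lies within the tangential critical line can be checked with the projected de Vaucouleurs profile; the following sketch is our own check (not part of the original analysis), using the standard $`R^{1/4}`$-law constant $`b\approx 7.67`$, and does not include the adiabatic modification of the halo discussed above.

```python
import numpy as np
from scipy.special import gammainc

def devauc_enclosed_fraction(R, r_eff, b4=7.669):
    """Fraction of the projected de Vaucouleurs (R^{1/4}-law) light/mass inside radius R."""
    return gammainc(8.0, b4 * (R / r_eff) ** 0.25)

# r_eff = 15 h^-1 kpc compared with theta_t * D_l ~ 60-120 h^-1 kpc:
for R in (60.0, 120.0):
    print(R, devauc_enclosed_fraction(R, 15.0))   # ~0.85 and ~0.93
```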
Predicted arc locations in the bottom-left panel of Figure 4 include the combined effects of substructure and of the stellar component of the central cluster galaxy but are still seen to fall short of observations. This result is rather insensitive to the choice of cosmological model. The disagreement actually grows more acute if higher density universes are considered because $`\mathrm{\Sigma }_{cr}`$ and, therefore, lens masses increase monotonically with $`\mathrm{\Omega }_0`$ for the lens-source configuration we assume here. Good agreement with observations requires that the total mass associated with the central galaxy be increased significantly over and above the expected stellar mass of the galaxy. This is shown in the bottom-right panels of Figures 4 and 5, where we have assumed that the total mass associated with the central galaxy is $`M_g=3\times 10^{12}h^{-1}M_{\odot }`$, so that the average ratio of $`\theta _t`$ to $`\theta _{t,pred}`$ is about unity. This galaxy mass corresponds roughly to the mass (projected inside $`\theta _t`$) of an isothermal sphere with velocity dispersion of order $`300`$ km s<sup>-1</sup>. This does not seem extravagant: velocity dispersions measured for the central galaxies in MS1358+62, MS2053-04, and MS2124 are all of order $`300`$ km s<sup>-1</sup> or higher (Kelson et al 1997, Blakeslee & Metzger 1998), while in cluster RXJ1347.5-1145 the velocity dispersion of stars in the central galaxy is $`\sigma =620`$ km s<sup>-1</sup> (Sahu et al. 1998).
Reconciling lensing observations with NFW halo models thus requires that the central galaxy has somehow managed to retain a sizeable fraction of its own dark halo. This may at first seem puzzling, but is consistent with observational estimates of the mass attached to individual galaxies in clusters. For example, based on the smooth structure of the arc in A370, Kneib et al (1993) conclude that as much as $`10^{11}h^{-1}M_{\odot }`$ may be associated with $`M_B-5\mathrm{log}(h)\approx -19.6`$ galaxies in that cluster. Brightest cluster galaxies are about $`30`$ times more luminous, and a simple scaling suggests that halos as massive as $`3\times 10^{12}h^{-1}M_{\odot }`$ may indeed be associated with central cluster galaxies. Our estimate is also consistent with the recent work of Tyson, Fischer & Dell’Antonio (1999), who argue that mass concentrations surrounding individual galaxies in Cl0024 may be as large as $`5\times 10^{12}h^{-1}M_{\odot }`$. A detailed lensing study of A2218 by Kneib et al. (1996) also indicates that individual cluster galaxies with velocity dispersion of order $`\sigma =300`$ km s<sup>-1</sup> must be surrounded by halos of mass $`10^{12}h^{-1}M_{\odot }`$. We note as well that the total mass associated with the central galaxy would actually be lower if, as proposed by Moore et al (1998), our procedure had somehow underestimated the central density concentration of the cluster halo. These authors report that the NFW concentration parameter obtained from the procedure outlined in NFW97 may be as much as $`50\%`$ lower than required to fit their numerical experiments. Our conclusion does not change qualitatively, but the mass of the central galaxy in this case would need to be revised downward by approximately $`50\%`$ in order to fit the data.
## 5 Implications of the model
### 5.1 Radial Arcs
Our conclusion that the mass associated with the luminous central galaxy plays a fundamental role in the lensing properties of the cores of galaxy clusters can be tested directly by considering the formation of radial arcs. These arcs are located at the radial critical lines, $`x_r`$, where the eigenvalue $`\lambda _r`$ vanishes. We see from eqs. 8 and 9 that the location of the radial arc depends on the angular gradient of the projected mass, rather than on the mean enclosed surface density. Their location, therefore, may in principle be used to verify independently our conclusion that the surface density profile steepens systematically towards decreasing $`\sigma `$.
In the absence of a massive central galaxy, the relative location of radial and tangential arcs formed through lensing by NFW halos is straightforward to compute once the angular diameter distances to the sources are known. Conversely, knowledge of the relative location of the arcs and of the tangential arc redshift uniquely determines the redshift of the radial arc. For example, Bartelmann (1996) applies this procedure to the radial/tangential arc system in MS2137 and concludes that the sources of both arcs must be either at very similar redshifts, or else far behind the cluster at $`z\gtrsim 1`$. The dichotomy simply reflects the fact that neither arc has a measured redshift. The tangential arc redshift is known for A370 (Soucail et al 1988), and the same analysis yields a prediction $`z_r\approx 1.5`$ for the radial arc, in reasonable agreement with the $`z_r\approx 1.3`$ prediction from the detailed lens models of Kneib et al (1993) and Smail et al (1995). Radial arcs have now been observed in four clusters: MS0440, MS2137, A370, and AC114. These clusters span the entire range in velocity dispersion of our sample (AC114 has one of the highest velocity dispersions, $`\sigma =1649`$ km s<sup>-1</sup>, and MS0440 has one of the lowest, $`\sigma =872`$ km s<sup>-1</sup>), and therefore we expect that the systematic trends inferred in the previous subsection may have a detectable influence on the properties of the radial arcs.
One simple trend predicted by our interpretation concerns the relative location of radial and tangential arcs as a function of the cluster velocity dispersion. Assuming that the sources are at similar redshifts, the ratio between the clustercentric distances of radial and tangential arcs, $`\theta _r/\theta _t`$, depends strongly on the slope of the surface density profile inside $`\theta _t`$: the steeper the profile, the closer to the center the radial arc moves and the smaller $`\theta _r/\theta _t`$ becomes. This is shown in Figure 7, where the open circles represent the predictions of our model for all clusters in our sample (including the effect of a central galaxy of mass $`M_g=3\times 10^{12}h^{-1}M_{\odot }`$). We assume that the redshifts of both arcs are the same, $`z_r=z_t`$, and adopt the same fiducial cosmology of Figures 4 and 5. The arcs in MS0440, MS2137 and AC114 follow the predicted trend very well, but the radial arc in A370 is much farther from the center than expected in our simple model. This may be because the source of the radial arc is much farther behind the cluster than the tangential arc source; $`z_r\approx 2z_t\approx 1.4`$, again in good agreement with the predictions of Smail et al (1995).
In summary, the relative location of radial and tangential arcs is a useful test of the conclusions reached in the previous subsection regarding the role played by the central galaxy on the lensing properties of clusters. Although there are no measured redshifts for radial arcs, our analysis predicts that the arc sources in MS0440, MS2137, and AC114 are at very similar redshifts. Consistency with our model requires that the radial arc source be far behind the tangential arc source in A370. Spectroscopic redshifts of radial arcs may thus be used to verify or rule out the applicability to actual clusters of the mass modeling we propose here.
### 5.2 Observability of lensed features
A second corollary of the interpretation outlined in §4.2 is that, because lensing features depend so heavily on the mass of the central galaxy, at fixed cluster velocity dispersion tangential arc distances must correlate with the total luminosity of the central galaxy. This could in principle be demonstrated by comparing the residuals of the $`\sigma `$-$`\theta _t`$ correlation shown in Figure 1 with residuals of the correlation between $`\sigma `$ and the total luminosity of the central galaxy. A search of the literature yields, unfortunately, few total absolute magnitudes for brightest cluster galaxies in our sample, and therefore we are unable to utilize this test to verify our interpretation. We intend to use archival images of clusters from different telescopes and revisit this question in a future paper.
We note here that, if our interpretation is correct, it would help to explain the apparent underrepresentation of moderate-$`\sigma `$ clusters in current lensing samples. Estimates based on the Press-Schechter (1974) algorithm predict that clusters with $`\sigma >1000`$ km s<sup>-1</sup> should outnumber clusters with $`\sigma >1500`$ km s<sup>-1</sup> by a factor of $`20`$ or more (Eke et al 1998), a result of the exponential decline of the cluster mass function at the high-mass end. This sharp decrease in the number of clusters with increasing $`\sigma `$ is not readily apparent in lensing samples. For example, in the sample compiled here $`\sigma >1000`$ km s<sup>-1</sup> clusters are only about twice as numerous as $`\sigma >1500`$ km s<sup>-1</sup> systems, suggesting the operation of a mechanism that hinders the observability of lensed features in low-$`\sigma `$ clusters. Our discussion above hints at one possibility: of all low-mass clusters, only those with very massive central galaxies may display easily identifiable lensed features. In these systems, tangential arcs are long and thin and appear at large distances from the center, thereby increasing their visibility to surveys looking preferentially for images with large length-to-width ratios. Without massive central galaxies the lensing power of low-mass clusters would be substantially reduced; arcs would occur closer to the center (where they are hard to distinguish from a central galaxy), would be significantly magnified in both radial and tangential directions (see Figure 5 and the discussion by Williams & Lewis 1998), and may thus readily have escaped detection as “giant arcs”.
## 6 Summary
We compare the lensing properties of galaxy clusters with the predictions of cluster models based on the NFW density profile. We find that clusters are in general more powerful lenses than NFW halos of similar velocity dispersion. The magnitude of the discrepancy is small for the most massive clusters in our sample, $`\sigma \approx 1500`$–$`2000`$ km s<sup>-1</sup>, but increases towards lower cluster velocity dispersions. NFW lenses also yield large radial magnifications at the tangential arc location, at odds with observations, which indicate that tangential arc widths are of the order of (or perhaps smaller than) the typical angular size of possible source galaxies. We use a simple analysis to show that the data are best reproduced by mass models where the inner slope of the projected cluster density profile steepens significantly with decreasing $`\sigma `$. Agreement with the data requires the effective core lensing potential to be steeper than isothermal in $`\sigma \approx 1000`$ km s<sup>-1</sup> clusters, but shallower than isothermal in the most massive clusters in the sample ($`\sigma \approx 1500`$-$`2000`$ km s<sup>-1</sup>).
We interpret the disagreement between NFW models and lensing observations as signaling the contribution to the cluster lensing potential of significant amounts of substructure and of massive central galaxies. Provided that central galaxy mass correlates only weakly with $`\sigma `$, its contribution to lensing is more important in less massive clusters, reproducing the observed trends. We use N-body simulations to calibrate the effects of substructure, and estimate that central galaxies as massive as $`M_g\approx 3\times 10^{12}h^{-1}M_{\odot }`$ are needed to reconcile NFW halo models with observations. This is much larger than estimates of the stellar mass of the galaxy; agreement between lensing data and NFW halo models requires that the central galaxy be surrounded by a dark matter halo which, within the arc radius, contains almost ten times as much mass as is associated with stars. Lower galaxy masses may be acceptable if, as suggested by recent N-body experiments, the NFW model systematically underestimates the central concentration of dark matter halos (Moore et al 1998).
Qualitatively, this conclusion applies to all dark halo models where the inner slope within the Einstein radius is approximately independent of mass, although the quantitative estimates presented above are strictly valid only for NFW models and for the $`\mathrm{\Lambda }`$CDM model we explore. Quantitative estimates, however, are quite insensitive to the values of the cosmological parameters, varying only by $`10`$-$`20\%`$ when $`\mathrm{\Omega }_0`$ is allowed to vary between $`0.2`$ and $`1`$ (in open and flat geometries).
A crucial ingredient of this interpretation is that the less massive the cluster the more conspicuous the lensing role played by the central galaxy. The role of a central galaxy in modifying the cluster’s inner profile can in principle be tested through observations of radial/tangential arc systems. Our modeling predicts that the redshifts of the radial and tangential arcs must be similar in MS0440, MS2137, and AC114, but that the radial arc source is far behind the tangential arc source in A370. Radial arc redshifts are therefore sensitive tests of our model predictions. These observations are within reach of the $`8`$-$`10`$m class telescopes coming into operation, so we should be able to assess the validity of the modeling we propose here very soon indeed.
This work has been supported in part by the Natural Sciences and Engineering Research Council of Canada. JFN acknowledges useful discussions with Mike Hudson, Greg Fahlman, and Ian Smail.
# Thermal Conductivity of Mg-doped CuGeO3
## I Introduction
The discovery of the first inorganic spin-Peierls (SP) compound CuGeO<sub>3</sub> (Ref. ) has triggered subsequent studies of the impurity-substitution effect on the spin-singlet states, and the existence of the disorder-induced transition into a three-dimensional antiferromagnetic (3D-AF) state has been established. Since the relevant exchange energies, i.e., the intra- and interchain couplings $`J`$ and $`J^{\prime }`$, do not change with doping except at the impurity sites, it is difficult to understand the impurity-induced transition from the SP to the 3D-AF state in the framework of the “conventional” competition between dimensionalities (where $`J^{\prime }/J`$ is an essential parameter ), and therefore such an impurity-substitution effect is a matter of current interest.
In CuGeO<sub>3</sub>, a small amount of impurity leads to an exotic low-temperature phase where the lattice dimerization and antiferromagnetic staggered moments simultaneously appear \[dimerized AF (D-AF) phase\]. Moreover, when the impurity concentration $`x`$ exceeds a critical concentration $`x_c`$, the SP transition measured by dc susceptibility disappears and a uniform AF (U-AF) phase appears below the Néel temperature $`T_N`$ $`\sim `$ 4 K. The D-AF ground state can be understood as a state of spatially modulated staggered moments accompanied by the lattice distortion. However, the mechanism of the depression of the SP phase and the establishment of the disorder-induced antiferromagnetism are still to be elucidated. A transport measurement could be a desirable tool in dealing with such a problem, because the mobility of spin excitations or spin diffusivity is often sensitive to impurities. Nevertheless, even a crude estimation of the mobility has not been carried out for the spin excitations so far.
Recently, we reported intriguing behaviors of the thermal conductivity $`\kappa `$ of pure CuGeO<sub>3</sub>. In Ref. , the existence of the spin heat channel ($`\kappa _s`$) was suggested. However, we cannot separate $`\kappa _s`$ and the phonon thermal conductivity $`\kappa _{\mathrm{ph}}`$ with the measurement of the pure sample only. In this work, we have resolved this problem with Mg-doped crystals. It turned out that the spin diffusion length, which can be calculated from $`\kappa _s`$, is much larger than the distance between adjacent spins for the pure CuGeO<sub>3</sub>, indicating coherent heat transport due to the spin excitations. Since the spin heat transport almost disappears for the heavily doped samples in which the long-range SP ordering is absent, it is suggested that the large mobility of the spin excitations plays an important role in the SP ordering.
The spin heat transport is also a probe of the spin gap. It is found that local spin-gap opening is robust against the impurity doping, though the temperature of the long-range ordering is strongly suppressed with $`x`$. Phonon heat channel ($`\kappa _{\mathrm{ph}}`$) provides an unusual thermal conductivity peak in the SP state, which was one of the topics of our previous report. The peak is drastically suppressed with both magnetic field and Mg-doping, indicating that the spin-gap opening has produced the peak in $`\kappa _{\mathrm{ph}}`$ via strong phonon-spin interaction.
## II Experimental
The single crystals of Cu<sub>1-x</sub>Mg<sub>x</sub>GeO<sub>3</sub> were grown with a floating-zone method. The Mg concentration is determined by inductively coupled plasma-atomic emission spectroscopy (ICP-AES). The critical concentration $`x_c`$ was carefully determined by Masuda et al. for the same series of crystals that we have used. They found that the Néel temperature $`T_N`$ jumps at the impurity-driven transition from the D-AF to the U-AF phase and that phase separation occurs in the transition range $`0.023\le x\le 0.027`$, indicating that the transition is of first order. We used six samples with $`x=0`$, $`0<x<x_c`$, and $`x>x_c`$ for the thermal conductivity measurement. They were originally prepared for the neutron and the synchrotron x-ray experiments, and thus the sample homogeneity has been confirmed in detail, as described elsewhere. Typical sample size is 0.2 $`\times `$ 2 $`\times `$ 7 mm<sup>3</sup> ($`a\times b\times c`$). Since our interest is thermal conductivity along the 1D spin chain, we need a sample with a longer dimension along the $`c`$ axis. Thus we paid particular attention to the Mg homogeneity along the $`c`$ axis for this study, and the best way is to use the samples prepared for the neutron experiments. The fundamental magnetic properties of the samples used already appeared in another paper.
The thermal conductivity is measured using a “one heater, two thermometers” method, as in the previous measurement. The base of the sample is anchored to a copper block held at desired temperatures. A strain-gauge heater is used to heat the sample. A matched pair of microchip Cernox thermometers are carefully calibrated and then mounted on the sample. The maximum temperature difference between the sample and the block is 0.2 K. Typical temperature difference between the two thermometers is $`\sim `$0.5 K above 20 K and 0.2 K below 20 K. The temperature-controlled block is covered with a metal shield so that heat loss through radiation becomes negligible. Magnetic field is applied parallel to the $`c`$-axis direction. The measured temperature range is below 30 K, which is well below the temperature scale of the intra-chain coupling $`J_c`$ ($`\sim `$ 120 K according to neutron scattering measurements ), and therefore the one-dimensional quantum spin liquid (1D-QSL) is realized above the transition temperatures.
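In the steady state, the conductivity follows from the measured heater power, temperature difference, and sample geometry, $`\kappa =PL/(A\mathrm{\Delta }T)`$. The numbers in the sketch below are purely illustrative and not those of an actual run.

```python
def thermal_conductivity(power_w, dT_k, length_m, area_m2):
    """kappa = P * L / (A * dT) for a steady-state one-heater, two-thermometer measurement."""
    return power_w * length_m / (area_m2 * dT_k)

# Illustrative numbers: 0.5 mW across a 2 mm thermometer separation on a
# 0.2 x 2 mm^2 cross section with dT = 0.2 K.
kappa_si = thermal_conductivity(0.5e-3, 0.2, 2.0e-3, 0.2e-3 * 2.0e-3)   # W/m K
print(kappa_si * 1.0e-2, "W/cm K")
```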
## III Results
Figure 1 shows the temperature dependence of $`\kappa `$ for samples with and without Mg substitution: 1(a)-1(f) are for $`x=0,0.016`$, 0.0216, 0.0244, 0.032 and 0.040, respectively. $`x=0.016`$ is well below $`x_c`$, $`x=0.0216`$ is close to $`x_c`$, $`x=0.0244`$ is on the border and $`x=0.032`$ and $`x=0.040`$ are in the U-AF region. The transition temperatures $`T_{\mathrm{SP}}^\chi `$ for the $`xx_c`$ samples and $`T_N`$ for the $`x>x_c`$ samples, which were determined by susceptibility measurement, are indicated by arrows in the figures. A sharp drop is observed just below $`T_{\mathrm{SP}}^\chi `$ for the pure CuGeO<sub>3</sub> as presented in the inset of Fig. 1(a). The $`x=0.016`$ sample shows the same behavior at $`T_{\mathrm{SP}}`$, while $`\kappa `$ is enhanced just below $`T_{\mathrm{SP}}`$ for the $`x=0.0216`$ and 0.0244 samples.
In Figs. 1(e) and 1(f), a slight change in the curvature is observed at $`T_N`$, and the field dependence in the Néel state is opposite to that above $`T_N`$; $`\kappa `$ decreases with field below $`T_N`$ while $`\kappa `$ increases with field above $`T_N`$. The three-dimensional magnon excitations, which appear in the Néel state, are responsible for this behavior. The thermal conductivity in the U-AF phase will be further discussed elsewhere. Although the Néel transition (to the D-AF phase) should be present also for the $`x\le x_c`$ samples, we cannot identify the corresponding feature at the transition, because $`T_N`$ ($`2`$–$`2.5`$ K) is too close to the lower temperature limit of our measurement, $`\sim `$ 2 K.
Taking a look at the full scale of each figure, one can notice that $`\kappa `$ is strongly suppressed with Mg substitution in the whole temperature range. Two thermal conductivity peaks are present in the zero-field curve of Fig. 1(a); one is in the SP phase and the other is in the 1D-QSL state. The broad shape of the high-temperature peak is maintained with doping. On the other hand, the low-temperature peak (SP peak) diminishes with increasing $`x`$ for $`x\le x_c`$, and disappears for the $`x>x_c`$ samples.
The SP peak is suppressed also in magnetic fields \[Figs. 1(a)-1(d)\]. In order to see the field dependence in more detail, the thermal conductivity at the SP peak is plotted as a function of magnetic field in Fig. 2 for the $`x=0`$ sample. $`\kappa `$ is suppressed with magnetic field and suddenly drops at the threshold field, $`H_c`$, at which the system makes a transition to the incommensurate phase. The same features were observed in our previous measurement with the crystal grown by the laser-heated pedestal growth technique. Also, the typical value of $`\kappa `$, for example at the high-temperature peak, is almost the same as that of the previous result. We thus believe that the above features are intrinsic in this material regardless of the growth method of the crystal, as long as high quality single crystal is used.
In a temperature range close to $`T_{\mathrm{SP}}^\chi `$, $`\kappa `$ increases with field for the Mg-doped samples, while little field dependence is seen in the pure sample \[Figs. 1(a)-1(d)\]. Note that the field dependence extends even above $`T_{\mathrm{SP}}^\chi `$ for the Mg-doped samples. The variation in the field dependence implies qualitative difference of the SP transition between pure CuGeO<sub>3</sub> and doped CuGeO<sub>3</sub>. This field dependence is apparent also in the $`x=0.032`$ sample, which is above $`x_c`$ \[Fig. 1(e)\], but rapidly diminishes in the heavily doped sample \[Fig. 1(f)\].
## IV Discussion
CuGeO<sub>3</sub> is an insulator and heat conduction by electrons or holes is absent unlike metals. Instead, low-energy spin excitations can carry heat. In the 1D-QSL state, total $`\kappa `$ is given by a sum of $`\kappa _s`$ and $`\kappa _{\mathrm{ph}}`$ as,
$$\kappa =\kappa _s+\kappa _{\mathrm{ph}}\qquad (\text{in 1D-QSL, }T_{\mathrm{SP}}<T\ll J).$$
(1)
In contrast, $`\kappa _s`$ is suppressed at the lowest temperature region in SP phase because few spin excitations are present due to the spin gap, and the total $`\kappa `$ represents a phonon contribution only,
$$\kappa \approx \kappa _{\mathrm{ph}}\qquad (T\ll T_{\mathrm{SP}}).$$
(2)
The field and impurity dependence of $`\kappa _{\mathrm{ph}}`$ below $`T_{\mathrm{SP}}`$ will be discussed in subsection A. The value of $`\kappa _s`$ in 1D-QSL will be crudely estimated in subsection B. Finally, the problem of local spin-gap formation will be dealt with in subsection C, using this $`\kappa _s`$ as a probe.
### A $`\kappa `$ below $`T_{\mathrm{SP}}`$: depression of the SP peak in magnetic fields and with Mg-doping
Since the low-temperature peak is drastically suppressed with the application of field as shown in Figs. 1 and 2, the feature should be related to the spin-excitation spectrum. We have proposed the following explanation for this peak. At temperatures well below $`T_{\mathrm{SP}}`$, where low-energy spin excitations are negligible, heat is carried mostly by phonons and $`\kappa _{\mathrm{ph}}`$ increases with temperature because of the growing population of phonons, as in usual insulating crystals. With increasing temperature, the increasing number of thermally excited spin excitations scatter phonons, and $`\kappa _{\mathrm{ph}}`$ diminishes. As a result, the thermal conductivity shows a peak. Although the thermal spin excitations begin to carry heat, the increase in $`\kappa _s`$ appears only in the temperature region slightly below $`T_{\mathrm{SP}}`$ \[see inset in Fig. 1(a)\].
In order to confirm the above understanding, the impurity substitution is helpful because the spin gap is absent in heavily substituted samples. We normalized the size of the peak for each sample as $`\kappa `$(5 K)/$`\kappa `$(15 K), and plotted it against $`x`$ in Fig. 3. One can see the suppression of the peak with $`x`$, similarly to the case of field application (Fig. 2). Moreover, the peak disappears at concentrations $`x>x_c`$ where the long-range SP order no longer exists, as shown in Figs. 1(e) and 1(f). We have thus confirmed the correlation between the spin gap and the peak also by Mg substitution, giving additional evidence for the scenario described in the previous paragraph.
The $`x=0.0244`$ sample, which is on the border from the SP (D-AF) to the U-AF phase, demonstrates an enhancement of $`\kappa `$ below $`T_{\mathrm{SP}}^\chi `$, as observed in Fig. 1(d). Since the enhancement is suppressed in magnetic fields, the existence of a well-defined spin gap is suggested even at $`x=x_c`$. We can notice the rapid change in $`\kappa _s`$ in the (low-temperature) vicinity of $`T_{\mathrm{SP}}`$, which is another sign of the spin-gap opening, for the $`x=0`$ and 0.016 samples \[Figs. 1(a) and 1(b)\]. However, this feature is absent for the $`x=0.0216`$ and 0.0244 samples \[Figs. 1(c) and 1(d)\], since $`\kappa _s`$ is significantly reduced with $`x`$ because of impurity scattering.
Figure 4(a) shows the zero-field thermal conductivity of all the samples. The thermal conductivity is strongly suppressed with Mg-substitution in the whole measured temperature range. The $`x`$ dependence of $`\kappa `$ below $`\sim `$ 4 K can be understood as the difference in the scattering rate of phonons. Since we showed in Ref. that the scattering by planar defects is dominant for the $`x=0`$ sample, difference in the number of planar defects may cause the variation of $`\kappa _{\mathrm{ph}}`$ in our series of samples. The number of planar defects does not necessarily increase with increasing $`x`$; note that $`\kappa `$ in the $`x=0.032`$ sample is smaller than that in the $`x=0.040`$ sample below $`\sim `$ 5 K.
### B $`\kappa `$ above $`T_{\mathrm{SP}}`$: coherent heat transport in the 1D-QSL
Remembering the origin of the low-$`T`$ thermal conductivity peak, we can notice that scattering by the spin excitations, in turn, becomes dominant in $`\kappa _{\mathrm{ph}}`$ above the peak temperature for the $`x\le x_c`$ samples. Noting that all the $`x\ge 0.016`$ curves look parallel to one another above $`\sim `$ 15 K, $`\kappa _{\mathrm{ph}}`$ in the $`x>x_c`$ samples should probably have a temperature dependence similar to that in the $`x=0.016`$, 0.0216 and 0.0244 samples above $`\sim `$ 15 K. Therefore, it is expected that phonon heat transport is governed by the spin scattering also for the $`x>x_c`$ samples, at least above $`\sim `$ 15 K.
In contrast to the rather complicated temperature dependence below $`T_{\mathrm{SP}}`$, $`\kappa (T)`$ in the 1D-QSL region is simpler, which is an advantage of discussing the $`x`$ dependence in this region. As a crude approximation, it is assumed that the phonon part is not strongly $`x`$ dependent and that most of the $`x`$ dependence comes from $`\kappa _s`$, because the detailed characterization guarantees good homogeneity in both pure and Mg-doped crystals and the direct modification of the phonon modes due to the Mg substitution can be estimated to be negligible. The scattering rate of phonons due to the point defects is more than two orders of magnitude smaller than the total scattering rate. One may notice that the robustness of $`\kappa _{\mathrm{ph}}`$ to the Mg doping is natural, considering that phonons are mainly scattered by spin excitations above $`\sim `$ 15 K, which is the conclusion of the previous paragraph. Since the population of both phonons and spin excitations does not change so much with $`x`$ in the temperature range above $`T_{\mathrm{SP}}`$, as has been reported in specific heat results, the $`x`$ dependence of $`\kappa _{\mathrm{ph}}`$ is expected to be small in the 1D-QSL state.
Figure 4(b) shows the $`x`$ dependence of $`\kappa `$ at 17.5 K. $`\kappa `$ rapidly decreases with $`x`$ up to $`x_c`$, and it saturates when $`x`$ exceeds $`x_c`$. Therefore, it is naturally assumed that $`\kappa _s`$ is close to zero in the heavily doped samples with $`x`$ $`>`$ $`x_c`$. This result suggests some correlation between the mobility of the spin excitations and the SP ordering.
Following the above interpretation, it is possible to estimate $`\kappa _s`$ by subtracting $`\kappa `$ of the $`x=0.040`$ sample from the measured $`\kappa `$ for each sample. The result indicates that most of the $`\kappa `$ of pure CuGeO<sub>3</sub> is due to $`\kappa _s`$, which is approximately 0.4 W/cm K at 17.5 K. This gives a diffusivity of the spin excitations $`D_s`$ of $`\sim `$1.6$`\times `$10<sup>-3</sup> m<sup>2</sup>/s, if we assume $`C_s`$ $`\sim `$1.8 J/mol K. $`D_s`$ in 1D systems can be written as $`v_sl_s`$, where $`v_s`$ is the velocity of the spin excitations and $`l_s`$ is the mean free path, in other words, the spin diffusion length. Assuming the spin-wave-like dispersion given by des Cloizeaux and Pearson, $`ϵ=(\pi /2)J_c|\mathrm{sin}(kc)|`$, and $`J_c`$ $`\sim `$ 120 K, one can evaluate $`v_s`$ $`\approx `$ $`(\pi /2)cJ_c/\hbar `$ = 1.3$`\times `$10<sup>4</sup> m/s and $`l_s`$ $`\approx `$ 130 nm ($`ϵ`$ is the energy of the spin excitations, $`k`$ is the wave vector and $`c`$ ($`\approx `$ 0.3 nm) is the distance between adjacent spins).
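A minimal numerical check of this estimate, using only the values quoted above, is

```python
D_s = 1.6e-3       # spin diffusivity [m^2/s], quoted above
v_s = 1.3e4        # spin-excitation velocity [m/s], quoted above
c = 0.3e-9         # spacing between adjacent spins along the chain [m]

l_s = D_s / v_s    # D_s = v_s * l_s in one dimension
print(l_s * 1.0e9)   # ~120 nm
print(l_s / c)       # ~400, i.e. l_s / c >> 1
```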
Since $`l_s/c\approx 430`$ is much larger than unity, we can conclude that the spin heat transport in pure CuGeO<sub>3</sub> is coherent. If the same estimation is applied also to the Mg-doped samples, $`l_s/c\approx 100`$ for the $`x=0.016`$ sample ($`<x_c`$) and $`l_s/c\approx 50`$ even for $`x=x_c`$ at this temperature. Since $`c/x_c`$ is approximately $`40c`$, the above estimation indicates that the spin diffusion length in the 1D-QSL state exceeds the mean impurity distance $`c/x`$, as long as $`x`$ is less than $`x_c`$. In contrast, the spin excitations are not so mobile when $`x>x_c`$.
### C $`\kappa `$ just above $`T_{\mathrm{SP}}^\chi `$: short-range SP order
Figure 5 shows $`\kappa -\kappa (x=0.040)`$ ($`\equiv \mathrm{\Delta }\kappa `$) for the three Mg-doped samples which have an SP transition. One can find that $`\mathrm{\Delta }\kappa `$ is almost temperature independent above $`T^{*}\sim `$ 15 K and that $`\mathrm{\Delta }\kappa `$ of all the samples deviates from this temperature-independent value below $`T^{*}`$. Since $`T^{*}`$ is close to $`T_{\mathrm{SP}}`$ of the pure CuGeO<sub>3</sub>, the deviation can be attributed to a precursor of the SP transition. Note that similar behavior is observed in Raman scattering measurements on Zn- and Si-doped CuGeO<sub>3</sub>.
Recently, it was shown by a synchrotron x-ray diffraction measurement that the FWHM of the Bragg peak from the lattice dimerization reaches the resolution limit at a temperature $`T_{\mathrm{SP}}^{\mathrm{x}\mathrm{ray}}`$ which is far below $`T_{\mathrm{SP}}^\chi `$. The authors claimed that the long-range order takes place only below $`T_{\mathrm{SP}}^{\mathrm{x}\mathrm{ray}}`$ and that the lattice dimerization is short-ranged just below $`T_{\mathrm{SP}}^\chi `$. The gap-like feature in $`\kappa `$ can also be explained in terms of this short-range order (SRO). However, the data in Fig. 5 suggest that the SRO grows from $`T^{*}\sim `$ 15 K, which is even above $`T_{\mathrm{SP}}^\chi `$.
The field dependence of $`\kappa `$ is a strong indication of the presence of the SRO below $`T^{*}`$. Figure 6 shows the temperature dependence of $`\mathrm{\Delta }\kappa `$ for Mg-doped samples in various magnetic fields. In all cases, we observed the recovery of $`\mathrm{\Delta }\kappa `$ with increasing fields below $`T^{*}`$. The results are consistent with the notion that the reduction of $`\mathrm{\Delta }\kappa `$ below $`T^{*}`$ is due to the development of the local spin gap. Since the magnetic field is thought to reduce the magnitude of the spin gap even above $`T_{\mathrm{SP}}^\chi `$, the number of spin excitations responsible for the heat transport will increase with the field. Moreover, the field dependence only appears below $`T^{*}`$ in our observation. We observe no field dependence above $`T^{*}`$, which is consistent with the notion that even the local spin gap is absent there.
It should be emphasized that the heat transport by spin excitations is very sensitive to the spin-Peierls SRO. We observed a magnetic field dependence in $`\mathrm{\Delta }\kappa `$ below $`T^{*}`$ even for the sample with $`x=0.032`$ ($`>x_c`$), where the spin-Peierls LRO no longer exists in the whole temperature range. According to the synchrotron x-ray measurement, SP-SRO develops below 7.5 K in the $`x=0.032`$ sample. For the $`x=0.040`$ sample, however, there was no evidence that SP-SRO actually develops. In contrast, one can see that the magnetic field dependence of $`\kappa `$ still exists for the $`x=0.040`$ sample (Fig. 1(f)), even though the change is not as obvious as in the $`x=0.032`$ sample. We think that the difference in the length scales probed by the two measurement techniques causes this discrepancy.
The above results tell us that the impurity substitution modifies the ground state and suppresses the spin-Peierls ordering in the CuGeO<sub>3</sub> system through different mechanisms. The importance of the inter-chain coupling and the tendency towards 3D antiferromagnetism increase below $`T_N`$. As a result, $`T_N`$ increases with $`x`$ and the ground state changes from D-AF to U-AF at $`x_c`$. On the other hand, the local spin-gap formation, which occurs at $`T^{*}`$, is governed only by the one-dimensional nature of the spin system coupled with 3D phonons and has nothing to do with the 3D-AF fluctuations. Therefore, the onset temperature of SP-SRO is not modified with $`x`$, as long as the spin-singlet state is energetically favored. (The spin-singlet state will no longer be favored in a heavily disordered system where long-wavelength spin excitations, whose energy is less than the spin-gap energy, are absent.) The spin-diffusion length $`l_s`$, in turn, rapidly diminishes with $`x`$ owing to the growing impurity scattering of the spin excitations. Since the spin correlation length is directly related to $`l_s`$, the length scale of the SP domains is reduced with $`x`$. As a result, the signals of the spin gap, detected with any probe, diminish in size with $`x`$.
## V Summary
The thermal conductivity of Mg-doped CuGeO<sub>3</sub> helps us to understand both the spin-gapped state below the SP transition and the 1D-QSL state above the transition. A large spin heat transport is observed in the 1D-QSL state of pure CuGeO<sub>3</sub>; it rapidly diminishes with Mg doping, accompanied by the suppression of the long-range SP ordering. The spin gap opening suppresses $`\kappa _s`$ and enhances $`\kappa _{\mathrm{ph}}`$. Examining the $`x`$ dependence of the spin-gap features, we find that the local spin gap opens at a temperature ($`T^{*}`$) independent of $`x`$, suggesting that the suppression of SP ordering is to be attributed to the reduction of the spin diffusion due to impurity scattering. We expect that the above analysis of the impurity-substitution effect on the thermal conductivity is applicable to other one-dimensional spin systems and that the transport properties thus obtained may reveal new aspects of such materials.
## VI Acknowledgment
We thank A. Kapitulnik for helpful advice, both on the experimental techniques and in understanding the results, in the early stage of this project.
## Phonon scattering rate due to point defects
We can show that the scattering rate $`1/\tau _p`$ by point defects, introduced by the substitution with an ion of different mass, is very small even for the most heavily doped sample ($`x=0.040`$). The dominant-phonon approximation gives
$$1/\tau _p\approx \frac{na^3(k_BT)^4}{4m\pi v^3\hbar ^4(\mathrm{\Delta }M/M)^2}.$$
(3)
\[$`n`$ ($`=x=0.040`$) is the density ratio of point defects, $`a`$ ($`\approx 4\times 10^{-8}`$ cm) is the lattice constant, $`m`$ ($`=15`$) is the number of phonon modes (3 times the number of atoms per unit cell), $`v`$ ($`\approx 5\times 10^5`$ cm/s) is the phonon velocity and $`\mathrm{\Delta }M/M`$ ($`=0.62`$) is the ratio (mass difference between Mg and Cu atoms)/(mass of Cu atom)\]. $`1/\tau _p`$ can be estimated to be about $`1.0\times 10^8`$ (s<sup>-1</sup>) at 30 K.
In comparison, the total scattering rate $`1/\tau `$ for the $`x=0.040`$ sample can be obtained from the data as
$$1/\tau \approx \frac{C_{\mathrm{ph}}v^2}{3\kappa _{\mathrm{ph}}},$$
(4)
assuming the phonon specific heat $`C_{\mathrm{ph}}`$ to be $`\beta T^3`$ and using the value $`\beta \approx 2.8\times 10^{-6}`$ (J/cm<sup>3</sup> K<sup>4</sup>) given in Ref. . Taking $`\kappa _{\mathrm{ph}}\approx 0.1`$ (W/cm K) from our $`\kappa `$ measurement, $`1/\tau \approx 6.4\times 10^{10}`$ (s<sup>-1</sup>) at 30 K. Since this $`1/\tau `$ is more than 600 times larger than $`1/\tau _p`$ (the ratio is even larger below 30 K), the scattering by the point defects is not important for the three-dimensional phonon heat transport in the measured temperature range.
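A minimal numerical sketch of this comparison (not taken from the original analysis; the parameter values are the ones quoted above, the formulas follow Eqs. (3) and (4) as printed, and CGS units are used for the point-defect rate):

```python
import math

# point-defect scattering rate, Eq. (3), evaluated in CGS units at T = 30 K
hbar, k_B, T = 1.055e-27, 1.381e-16, 30.0        # erg s, erg/K, K
n, a, m, v, dM = 0.040, 4e-8, 15, 5e5, 0.62      # defect fraction, cm, modes, cm/s, dM/M
# (Delta M / M)^2 is kept in the denominator, exactly as Eq. (3) is printed above
tau_p_inv = n * a**3 * (k_B * T)**4 / (4 * m * math.pi * v**3 * hbar**4 * dM**2)

# total scattering rate, Eq. (4), with C_ph = beta T^3 (J-based units cancel consistently)
beta, kappa_ph = 2.8e-6, 0.1                     # J/(cm^3 K^4), W/(cm K)
tau_inv = beta * T**3 * v**2 / (3 * kappa_ph)    # -> 1/s

print(f"1/tau_p ~ {tau_p_inv:.1e} /s, 1/tau ~ {tau_inv:.1e} /s, ratio ~ {tau_inv / tau_p_inv:.0f}")
```

The printed ratio comes out of the same order of magnitude as the factor of several hundred quoted in the text.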
## 1 Introduction
Radiative $`\tau `$ pair production is of great interest, as it is sensitive to anomalous electromagnetic couplings of the $`\tau `$. With the sensitivity afforded by the LEP experiments, this provides an opportunity to search for new physics phenomena \[1–4\]. Any meaningful interpretation of the experimental data requires a Monte Carlo simulation in which Standard Model predictions may be augmented by the contributions from possible anomalous couplings.
Since the LEP collaborations are entering their final years of operation, it is a good time to document the programs that were actually used in data analyses. In this paper we describe a library that has been used to calculate anomalous contributions to $`\tau \tau \gamma `$ couplings . The library is based on the work described in and can be used with any $`e^+e^{-}\to \tau ^+\tau ^{-}(n\gamma )`$ Monte Carlo program and, after minor adaptation, with $`pp\to Z/\gamma +\mathrm{X}`$; $`Z/\gamma \to \tau ^+\tau ^{-}(n\gamma )`$ or $`ep\to Z/\gamma +\mathrm{X}`$; $`Z/\gamma \to \tau ^+\tau ^{-}(n\gamma )`$ programs as well.
In the present paper, we will discuss the interface of our library to KORALZ version 4.04, which is described in detail in . The Fortran code of the library is archived together with KORALZ , in the same tree of directories. Let us note that in the future KORALZ will be replaced by a new program, KK2f , which is based on a more powerful exponentiation at the spin amplitude level; implementation of our library will be straightforward for that program as well.
## 2 Calculation of anomalous couplings
To evaluate the effects of anomalous electromagnetic couplings on radiative $`\tau `$ pair production, a tree–level calculation of the squared matrix element for the process $`e^+e^{-}\to \tau ^+\tau ^{-}\gamma `$ has been carried out , including contributions from the anomalous magnetic dipole moment at $`q^2=0`$, $`F_2(0)`$, and the electric dipole moment $`F_3(0)`$. This calculation is included in our library. When activated, it uses the 4-momenta of the leptons and the photon generated by the host program to compute a weight, $`w`$, for each event according to
$$w=\frac{|\mathcal{M}_{\mathrm{ano}}|^2}{|\mathcal{M}_{\mathrm{SM}}|^2}.$$
(1)
$`\mathcal{M}_{\mathrm{ano}}`$ is the matrix element for $`F_2(0)\ne 0`$ and/or $`F_3(0)\ne 0`$, and $`\mathcal{M}_{\mathrm{SM}}`$ is the matrix element for $`F_2(0)=F_3(0)=0`$.
As this calculation is performed at $`𝒪(\alpha )`$, the case of multiple bremsstrahlung requires special treatment. In this case, a reduction procedure is first applied in which all photons, except the one with the greatest momentum transverse to a lepton, or $`p_T`$, are incorporated into the 4-momenta of effective initial– or final–state leptons. The 4-momenta of the photon with greatest $`p_T`$ and the effective leptons are then used to compute the weight. Cross–checks of the calculation against an independent and slightly simplified analytical calculation as well as checks of the validity of the reduction procedure are described in . The results of the calculation have been used in the measurement of anomalous electromagnetic moments of the $`\tau `$ described in .
## 3 Flags to control anomalous couplings in KORALZ
In KORALZ version 4.04, the calculation in our library is activated by setting the card IFKALIN=2. This is transmitted from the main program via the KORALZ input parameter NPAR(15). Additional input parameters are set in the routine kzphynew(XPAR,NPAR), although there are currently no connections to the KORALZ matrix input parameters XPAR and NPAR. Table 1 summarizes the functions of these input parameters.
In order to provide the user with enough information to retrieve $`w`$ for a given event for any $`F_2(0)`$ or $`F_3(0)`$, we take advantage of the fact that, for each event, one may write $`w`$ as a quadratic function of the anomalous couplings:
$$w=\alpha F_2^2(0)+\beta F_2(0)+\gamma F_3^2(0)+\delta F_3(0)+ϵ.$$
(2)
When the calculation of $`w`$ is completed, the 5 weight parameters $`\alpha ,\beta ,\gamma ,\delta `$ and $`ϵ`$ are stored in the common block common /kalinout/ wtkal(6), with the assignments shown in Table 2.
The user is then free to calculate $`w`$ for whatever combination of $`F_2(0)`$ and $`F_3(0)`$ is desired. In practice we set $`ϵ=1`$, since anomalous terms must vanish for $`F_2(0)=F_3(0)=0`$. We also set $`\delta =0`$, as the interference between Standard Model and anomalous amplitudes vanishes in the case of radiation from an electric dipole moment. These short cuts save substantial CPU time.
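As an illustration of this reweighting step, a short sketch (not part of the library itself; the ordering of the five coefficients is assumed here to follow Table 2, which is not reproduced in this paper):

```python
def anomalous_weight(wtkal, f2, f3):
    """Reconstruct w(F2(0), F3(0)) from the per-event quadratic coefficients of Eq. (2).

    wtkal is assumed to hold (alpha, beta, gamma, delta, eps), i.e. the first five
    entries of the common block /kalinout/ wtkal(6).
    """
    alpha, beta, gamma, delta, eps = wtkal[:5]
    return alpha * f2**2 + beta * f2 + gamma * f3**2 + delta * f3 + eps

# example: reweight one event to F2(0) = 0.01, F3(0) = 0
# (delta = 0 and eps = 1, as explained in the text above)
print(anomalous_weight((2.3, 0.4, 1.9, 0.0, 1.0), 0.01, 0.0))
```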
The code for calculation of the weight $`w`$ is placed in the directory korz\_new/ttglib in the files ttg.f and ttgface.f.
## 4 Demonstration programs
The demonstration program DEMO2.f for the run of KORALZ when our library is activated can be found in the directory korz\_new/february and the output DEMO2.out in the directory korz\_new/february/prod1. The Standard Model DEMO.f for KORALZ and its output DEMO.out are also included in the directories mentioned above. All these files, as well as the library itself, are archived together with KORALZ . The maximum centre-of-mass energy allowed for the runs with anomalous $`\tau \tau \gamma `$ couplings is $`200`$ GeV.
## Acknowledgements
ZW would like to thank the L3 group of ETH Zürich for support while this work was performed.
# A remark on the muonium to antimuonium conversion in a 331 model
## Abstract
Here we analyze the relation between the search for muonium to antimuonium conversion and the 331 model with doubly charged bileptons. We show that the constraint on the mass of the vector bilepton obtained from the experimental data can be evaded even in the minimal version of the model, since there are other contributions to that conversion. We also discuss the conditions under which the experimental constraint is valid.
preprint: IFT-P.040/99 May 1999
Recently a new upper limit for the spontaneous transition of muonium ($`M\equiv \mu ^+e^{-}`$) to antimuonium ($`\overline{M}\equiv \mu ^{-}e^+`$) has been obtained . This implies constraints upon the models that induce the $`M\overline{M}`$ transition. One of them is the 331 model proposed some years ago . Here we would like to discuss the conditions under which this constraint can be evaded even in the context of the minimal version of the model (minimal in the sense that no new symmetries or fields are introduced).
In that model in the lepton sector the charged physical mass eigenstates (unprimed fields) are related to the weak eigenstates (primed fields) through unitary transformations ($`E_{L,R}`$) as follows:
$$l_L^{\prime }=E_Ll_L,\qquad l_R^{\prime }=E_Rl_R,$$
(1)
where $`l=e,\mu ,\tau `$. It means that the doubly charged vector bilepton, $`U_\mu ^{++}`$, interacts with the charged leptons through the current given by
$$J_{U^{++}}^\mu =\frac{g_{3l}}{\sqrt{2}}\overline{l_L^c}\gamma ^\mu 𝒦l_L,$$
(2)
where $`𝒦`$ is the unitary matrix defined as $`𝒦=E_R^TE_L`$ in the basis in which the interactions with the $`W^+`$ are diagonal ($`\nu _L^{\prime }=E_L\nu _L`$).
In the theoretical calculations of the $`M\overline{M}`$ transition induced by a doubly charged vector bilepton, so far only the case $`𝒦=\mathrm{𝟏}`$ has been considered . Although this is a valid simplification, it does not represent the most general case in the minimal 331 model. In fact, in that model all left-handed mixing matrices survive in different places of the Lagrangian density of the quark sector . In the lepton sector both left- and right-handed mixing matrices survive in the interactions with the doubly charged vector bilepton, as in Eq. (2), and also with doubly and singly charged scalars (see below). Hence, these mixing matrices, such as $`𝒦`$ in Eq. (2), have the same status as the Kobayashi-Maskawa mixing matrix in the context of the standard model, in the sense that they must be determined by experiment. In Ref. it is recognized that their bound is valid only for the flavor-diagonal bilepton gauge boson case, i.e., $`𝒦=\mathrm{𝟏}`$. If nondiagonal interactions, like those in Eq. (2), are assumed, the new upper limit on the conversion probability in the $`M\overline{M}`$ system implies
$$M_{U^{++}}>g_{3l}|𝒦_{\mu \mu }||𝒦_{ee}|\mathrm{\hspace{0.17em}2.6}\mathrm{TeV}=850|𝒦_{\mu \mu }||𝒦_{ee}|\mathrm{GeV}.$$
(3)
If $`|𝒦_{\mu \mu }||𝒦_{ee}|\approx 0.70`$ we get a lower bound of 600 GeV for the doubly charged bilepton, which is compatible with the upper bound obtained from theoretical arguments .
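For orientation, the scaling of the bound (3) with the mixing-matrix elements can be evaluated directly; the following short sketch is purely illustrative and not part of the original analysis:

```python
def m_u_lower_bound(k_mumu, k_ee, scale_gev=850.0):
    """Lower bound on the doubly charged vector bilepton mass in GeV, following Eq. (3)."""
    return scale_gev * abs(k_mumu) * abs(k_ee)

print(m_u_lower_bound(1.0, 1.0))    # flavor-diagonal case K = 1  -> 850 GeV
print(m_u_lower_bound(0.84, 0.84))  # |K_mumu||K_ee| ~ 0.70       -> about 600 GeV
```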
The following point is more important. Besides the contribution of the vector bileptons there are also the doubly charged and the neutral scalar ones. Considering only the vector bileptons is also a valid approximation, since all the lepton-scalar couplings can be small if all vacuum expectation values (VEVs), except the one controlling the $`SU(3)`$ breaking, are of the order of the electroweak scale and if there are no flavor changing neutral currents (FCNC) in the leptonic sector. Both conditions may not be natural in the minimal version of the model: the former because the sextet is introduced only to give mass to the leptons, so its VEV may be of the order of a few GeV; the latter because if we want to avoid FCNC in the lepton sector it is necessary to impose a discrete symmetry which does not belong to the minimal 331 model. In the model the $`SU(3)_L\times U(1)_N`$ triplets $`\eta =(\eta ^0\;\eta _1^{-}\;\eta _2^+)^T\sim (\mathrm{𝟑},0)`$ and $`\rho =(\rho ^+\;\rho ^0\;\rho ^{++})^T\sim (\mathrm{𝟑},+1)`$ give mass to the quarks. (The third triplet $`\chi =(\chi ^{-}\;\chi ^{--}\;\chi ^0)^T`$ is of no concern here.) If the first family transforms in a different way from the other two, the quark $`u`$ mass is given by the VEV of $`\eta `$, here denoted by $`v_\eta `$; if it is the third family which transforms differently, it is the quark $`t`$ which gets its mass from $`v_\eta `$. However, since the mixing matrix of the charge-2/3 quarks is not trivial, the general case interpolates between these two cases. Hence, the vacuum expectation values $`v_\eta `$ and $`v_\rho `$ are of the order of the electroweak scale, i.e., $`v_\eta ^2+v_\rho ^2=(246\text{GeV})^2`$. As we said before, the scalar sextet $`S\sim (\mathrm{𝟔},0)`$
$$S=\left(\begin{array}{ccc}\sigma _1^0& \frac{h_2^{-}}{\sqrt{2}}& \frac{h_1^+}{\sqrt{2}}\\ \frac{h_2^{-}}{\sqrt{2}}& H_1^{--}& \frac{\sigma _2^0}{\sqrt{2}}\\ \frac{h_1^+}{\sqrt{2}}& \frac{\sigma _2^0}{\sqrt{2}}& H_2^{++}\end{array}\right)$$
(4)
is necessary in order to give the charged leptons arbitrary masses. We denote by $`v_S`$ the VEV of the neutral component $`\sigma _2^0`$ of the sextet, $`\langle \sigma _2^0\rangle =v_S`$. The other neutral component, $`\sigma _1^0`$, does not acquire a nonzero VEV if the neutrinos are to remain massless.
The Yukawa couplings in the lepton sector are
$$\mathcal{L}_l=\frac{G_{ab}}{\sqrt{2}}\overline{\psi _{aiL}}(\psi _{bjL})^cS_{ij}+\frac{1}{2}ϵ_{ijk}G_{ab}^{\prime }\overline{\psi _{aiL}}(\psi _{bjL})^c\eta _k+H.c.,$$
(5)
where $`\psi =(\nu \;l\;l^c)^T`$ and $`a,b=e,\mu ,\tau `$; $`i,j`$ are $`SU(3)`$ indices; $`G_{ab}`$ and $`G_{ab}^{\prime }`$ are symmetric and anti-symmetric complex matrices, respectively. (The model can have $`CP`$ violation in the leptonic sector .)
We stress once more that this is the minimal 331 model: if we wanted to avoid the coupling with the triplet in Eq. (5) (only the sextet is necessary for giving all charged leptons a mass), we would have to impose a discrete symmetry. (Only in this case is there no FCNC in the lepton sector.) Hence, the mass matrix of the charged leptons has the form
$$M_l=\frac{1}{\sqrt{2}}\left(G_{ab}v_S+G_{ab}^{\prime }v_\eta \right),$$
(6)
and it is diagonalized by the bi-unitary transformation $`E_L^{\dagger }M_lE_R=\mathrm{diag}(m_e,m_\mu ,m_\tau )`$ with $`E_L`$ and $`E_R`$ defined in Eq. (1). However, the bi-unitary transformation does not diagonalize $`G`$ and $`G^{\prime }`$ separately. Thus, we have FCNC and there are Yukawa couplings which are not proportional to the lepton masses.
The model has four singly charged and two doubly charged physical scalars, as well as four $`CP`$-even and two $`CP`$-odd neutral scalars. Let us consider the doubly charged and neutral scalar Yukawa interactions with the sextet in Eq. (4). $`H_1^{++}`$ is part of a complex triplet under $`SU(2)_L\times U(1)_Y`$ whose neutral partner has a vanishing vacuum expectation value (if neutrinos do not get Majorana masses). There is also a doubly charged $`H_2^{++}`$ which is a singlet of $`SU(2)_L`$, and the neutral Higgs $`\sigma _2^0`$ which is part of a doublet of $`SU(2)_L`$. Hence we have the respective Yukawa interactions proportional to
$$\overline{l_L}𝒦_{LL}l_L^cH_1^{--}+\overline{l_L^c}𝒦_{RR}l_RH_2^{++}+[\overline{l_L}𝒦_{LR}l_R+\overline{l_L^c}𝒦_{LR}^Tl_L^c]\sigma _2^0+H.c.,$$
(7)
where we have denoted $`𝒦_{LL}=E_L^TGE_L^{*}`$; $`𝒦_{RR}=E_R^TGE_R`$ and $`𝒦_{LR}=E_L^{\dagger }GE_R`$. These matrices are unitary only when $`G`$ is real.
We see that since the unitary matrices $`E_{L,R}`$ diagonalize $`M_l`$ in Eq. (6), $`𝒦_{LL},\overline{K}_{RL}`$ and $`𝒦_{RL}`$ are not diagonal matrices, thus their matrix elements are arbitrary and only constrained by perturbation theory, by their contributions to the charged lepton masses and by some purely leptonic processes.
As we said before, since there are already two scalar triplets which give the appropriate masses to the $`W^\pm `$ and $`Z^0`$ vector bosons, it is not necessary that $`v_S`$ be of the same order of magnitude as the other vacuum expectation values present in the model, $`v_\eta `$ and $`v_\rho `$. For instance it is possible that $`v_S\approx 10`$ GeV and $`|G|\approx 1`$. In this case, the contributions of the doubly charged scalars to the muonium-antimuonium transition can be as important as the contribution of the vector bilepton. There are also new contributions involving FCNC through neutral scalar exchange, which can give important contributions to the $`M\overline{M}`$ conversion, as has been suggested in Ref. . We see that in the 331 model all the contributions shown in Figs. 1 and 2 do exist. Since there are several contributions to the $`M\overline{M}`$ conversion, it is still possible to have some cancellations among the scalar and vector bilepton contributions.
In order to appreciate the muonium-antimuonium transition in the 331 model a little bit more, let us give a brief review of the theoretical results known so far. Many years ago, Feinberg and Weinberg used a $`(V-A)^2`$ Hamiltonian with the four-fermion effective coupling equal to the usual $`\beta `$-decay coupling constant $`C_V`$ in order to study the $`M\overline{M}`$ conversion. Let us denote it here by $`G_{M\overline{M}}`$. The transition amplitude is proportional to $`\delta =16G_{M\overline{M}}/\sqrt{2}\pi a^3`$ where $`a`$ is the Bohr radius. More recently, the same transition was studied in the context of models with doubly charged Higgs bosons; in this case the effective Hamiltonian is of the $`(V\pm A)^2`$ form . In this case $`G_{M\overline{M}}`$ is given by the product of two Yukawa couplings , so in all these cases the sign of the effective coupling $`G_{M\overline{M}}`$ is undetermined. On the other hand, in models with a doubly charged vector bilepton the respective Hamiltonian is of the $`(V-A)\times (V+A)`$ form with a four-fermion effective coupling given by $`G_{M\overline{M}}/\sqrt{2}=-g^2/8M_U^2`$, where $`M_U`$ is the vector bilepton mass and $`g`$ the $`SU(3)`$ coupling constant. Hence, in this case always $`G_{M\overline{M}}<0`$.
On the other hand, in $`(V\pm A)^2`$ models the transition amplitude is the same for the singlet and triplet muonium states given above, but in $`(V-A)\times (V+A)`$ models we have $`\delta =8G_{M\overline{M}}/\sqrt{2}\pi a^3`$ for the triplet muonium state and $`\delta =24G_{M\overline{M}}/\sqrt{2}\pi a^3`$ for the singlet state .
In the 331 model there are also neutral scalars and pseudoscalars which, as we said before, have flavor changing neutral interactions in the lepton sector. It has been shown that pseudoscalars do not induce conversion for triplet muonium, while both pseudoscalars and scalars contribute for the singlet muonium. We see that a cancellation among the contributions to the $`M\overline{M}`$ transition due to scalars and those due to the doubly charged vector is in fact possible. It means that separate measurements of singlet vs. triplet $`M\overline{M}`$ conversion probabilities can distinguish among neutral scalar, pseudoscalar and doubly charged Higgs induced transitions . Such measurements can also distinguish doubly charged vector bileptons from scalar contributions.
The $`M\overline{M}`$ transition can also be measured in matter. In this case the collisions make the amplitudes add incoherently . However, in matter the conversion is strongly suppressed, mainly due to the loss of symmetry between $`M`$ and $`\overline{M}`$ caused by the possibility of $`\mu ^{-}`$-transfer collisions involving $`\overline{M}`$ . Hence, all those data together, when available, will allow one to constrain models with several sorts of fields inducing the muonium–antimuonium transition.
Whether or not all these effects are present in the 331 model depends on the values of the parameters. We have argued above that this may well be the case, since there are flavor changing neutral interactions in the Higgs-lepton sector and since one of the vacuum expectation values may be of the order of a few GeV.
It is usually considered that the $`\mu \to e\gamma `$ decay imposes stronger constraints on a given model than the $`M\overline{M}`$ transition. So, some model builders consider situations in which $`\mu \to e\gamma `$ is forbidden by a discrete symmetry . However, in the 331 model the interactions which induce the $`\mu \to e\gamma `$ decay are $`\overline{\nu _L}𝒦_{LR}l_Rh_1^+`$, $`\overline{\nu _L}𝒦_{LL}^Tl_L^ch_2^{-}`$, $`[\overline{\nu _L}𝒦_{LR}^{\prime }l_R+\overline{(l^c)_L}𝒦_{LR}^{\prime T}(\nu _L)^c]\eta _1^{-}`$ and $`[\overline{\nu _L}𝒦_{LL}^{\prime }(l_L)^c-\overline{l_L}𝒦_{LL}^{\prime }(\nu _L)^c]\eta _2^+`$, with $`𝒦_{LR}^{\prime }=E_L^{\dagger }G^{\prime }E_R`$ and $`𝒦_{LL}^{\prime }=E_L^TG^{\prime }E_L^{*}`$. The decay $`\mu \to e\gamma `$, as shown in Fig. 3, has contributions from the vector $`U^{--}`$ and scalar $`H_{1,2}^{--}`$ bileptons. The interactions in Eqs. (2) and (7) involve different mixing matrices; hence, if all bosons have masses of the same order of magnitude, as is in fact expected in the 331 model (see below), we can have some cancellations among all the contributions. Notice that those matrices are unitary only when $`G^{\prime }`$ is real, like the matrices in Eq. (7).
Notice also that the $`\mu \to e\gamma `$ decay is dominated by the $`\tau `$ lepton contributions; thus it implies strong constraints on the mixing angles involving this lepton.
Another potential trouble for the model is the $`\mu \to eee`$ decay shown in Fig. 4.
Here the amplitudes for the exchange of $`U^{--}`$, $`H_1^{--}`$ and $`H_2^{--}`$ are proportional to $`𝒦_{\mu e}𝒦_{ee}`$, $`(𝒦_{LL})_{\mu e}(𝒦_{RL})_{ee}`$ and $`(𝒦_{RL})_{\mu e}(𝒦_{RL})_{ee}`$, respectively, as can be seen from Eqs. (2) and (7). Thus in this case again a cancellation among the contributions may occur if all bosons have masses of the same order of magnitude, or the decay may be suppressed by the $`\mu e`$ matrix element.
Summarizing, the bound of Ref. applies only in a certain range of the parameters of the model and only if the sextet is the only Higgs multiplet which couples to leptons. In this case the neutral currents given in Eq. (7) are diagonal, there is no FCNC in the lepton sector, and all the sextet-lepton couplings are proportional to $`𝒦_{RL}=\sqrt{2}m_l/v_S`$ where $`m_l`$ is the lepton mass. If $`v_S`$ is of the order of 100 GeV the main contribution to the $`M\overline{M}`$ conversion comes from the interaction in Eq. (2), and it constrains the mass of the $`U`$-vector boson and the mixing angles of the matrix $`𝒦`$ as discussed earlier. Hence in the minimal 331 model the contributions in Figs. 1(a), 1(c) and 1(d) of Ref. , here summarized in Figs. 1 and 2, do exist, and the experimental bound does not apply to the model in a straightforward way.
Finally we would like to remark that although the model predicts a Landau pole at the energy scale $`\mu `$ where $`\mathrm{sin}^2\theta _W(\mu )=1/4`$, it is not at all clear what the value of $`\mu `$ is. In fact, it has been argued that the upper limit on the vector bilepton masses is 3.5 TeV . In any case, the important point is that in this model the “hierarchy problem”, i.e., the existence of quite different mass scales, is less severe than in the standard model and its extensions, since no arbitrary mass scale (say, the Planck scale) can be introduced in the model. In particular, it is a very well known fact that the masses of fundamental scalars are sensitive to the masses of the heaviest particles which couple directly or indirectly to them. Since in the 331 model the heaviest mass scale is of the order of a few TeV, there is no “hierarchy problem” at all. This feature remains valid when we introduce supersymmetry in the model. Thus, the breaking of supersymmetry is also naturally at the TeV scale in this 331 model.
The $`M\overline{M}`$ transition indeed deserves more experimental studies. On the other hand, since the matrices $`G`$ and $`G^{\prime }`$ can be complex, we can have CP violation in the present model . Hence, experimental difficulties apart, this system in vacuum could be useful for studying $`CP`$ and $`T`$ invariance in the lepton sector, by comparing $`M\to \overline{M}`$ with $`\overline{M}\to M`$ transitions, as has been done recently in the $`K^0\to \overline{K}^0`$ and $`\overline{K}^0\to K^0`$ case .
###### Acknowledgements.
This work was supported by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), Conselho Nacional de Ciência e Tecnologia (CNPq) and by Programa de Apoio a Núcleos de Excelência (PRONEX).
(Figure 1: Next-to-leading order contribution to the ground state energy level shift which gives the leading order decay rate. The filled circle at the vertices denotes the $`C_0`$ coupling.)
Relativistic corrections to the Pionium Lifetime
Xinwei Kong and Finn Ravndal (on leave of absence from the Institute of Physics, University of Oslo, N-0316 Oslo, Norway)
Department of Physics and Institute of Nuclear Theory,
University of Washington, Seattle, WA 98195, U.S.A
Abstract: Next to leading order contributions to the pionium lifetime are considered within non-relativistic effective field theory. A more precise determination of the coupling constants is then needed in order to be consistent with the relativistic $`\pi \pi `$ scattering amplitude which can be obtained from chiral perturbation theory. The relativistic correction is found to be 4.1% and corresponds simply to a more accurate value for the non-relativistic decay momentum.
PACS numbers: 03.65.N, 11.10.S, 12.39.Fe
In the DIRAC experiment which is underway at CERN, one plans to measure the pionium lifetime with an accuracy of 10% or better. The dominant decay proceeds through the strong annihilation $`\pi ^++\pi ^{-}\to \pi ^0+\pi ^0`$. Since the momenta of the final state particles are determined by the mass difference $`\mathrm{\Delta }m=m_+-m_0`$ between the charged and neutral pions, the process is strongly non-relativistic. In lowest order the decay rate follows directly from the corresponding non-relativistic scattering amplitude which can be written as
$`T_{NR}={\displaystyle \frac{8\pi }{3E_+E_0}}(a+bp^2/m_+^2)`$ (1)
in the center-of-mass frame where the energies of the pions are $`E_0=E_+=m_++p^2/2m_+`$, where $`p`$ is the momentum of the charged pions. The S-wave scattering length $`a`$ and the slope parameter $`b`$ include both higher order chiral corrections and isospin-violating effects from the quark mass difference $`m_u-m_d`$ and short-range electromagnetic effects. At threshold the momentum $`p=0`$ and the full scattering amplitude with long-range Coulomb interactions removed is then given by just this scattering length. It can be written in terms of the more conventional isospin-symmetric scattering lengths $`a_0`$ and $`a_2`$ in the isospin $`I=0`$ and isospin $`I=2`$ channels as $`a=a_0-a_2+\mathrm{\Delta }a`$ where $`\mathrm{\Delta }a`$ includes these symmetry-breaking effects.
Since the charged pions in pionium are supposed to be bound in a $`1S`$ Coulomb state $`\mathrm{\Psi }(r)`$ with relative momentum $`\gamma =\alpha m_+/2`$, the annihilation takes place essentially at threshold. In lowest order the transition rate is then given by just the scattering length $`a`$,
$`\mathrm{\Gamma }={\displaystyle \frac{16\pi }{9m_+^4}}|\mathrm{\Psi }(0)|^2m_0\sqrt{2\mathrm{\Delta }mm_0}a^2`$ (2)
where $`|\mathrm{\Psi }(0)|^2=\gamma ^3/\pi `$ gives the probability to find the two particles at the same point. A measurement of the pionium lifetime will thus give an experimental determination of the scattering length. For this to be meaningful, corrections to this lowest order formula must be calculated so that the theoretical lifetime has an uncertainty substantially smaller than the one in the experiment.
In a recent paper we have described this system in the new framework of non-relativistic effective field theory. This framework offers a compact and more systematic way of obtaining the many different higher order corrections to the pionium lifetime than previous treatments, which to a large extent were based on covariant methods. As shown by Holstein , to lowest order in the interactions this approach is equivalent to just using effective couplings in non-relativistic quantum mechanics.
The effective field theory for non-relativistic pions is based upon the free Schrödinger Lagrangian $`\mathcal{L}_0=\pi ^{\dagger }(i\partial _t+\nabla ^2/2m_\pi )\pi `$ which corresponds to the propagator
$`G(E,𝐤)={\displaystyle \frac{1}{E𝐤^2/2m_\pi +iϵ}}`$ (3)
for a particle with energy $`E`$ and momentum $`𝐤`$. The interactions we consider are contained in the Lagrangian<sup>2</sup><sup>2</sup>2We define here $`C_0`$ with opposite sign to what we used in
$`\mathcal{L}_{int}={\displaystyle \frac{1}{2}}C_0(\pi _+^{\dagger }\pi _{-}^{\dagger }\pi _0\pi _0)+{\displaystyle \frac{1}{4}}C_2(\pi _+^{\dagger }\pi _{-}^{\dagger }\pi _0\stackrel{\leftrightarrow }{\nabla }{}^{2}\pi _0+\pi _+^{\dagger }\stackrel{\leftrightarrow }{\nabla }{}^{2}\pi _{-}^{\dagger }\pi _0\pi _0)+\text{h.c.}`$ (4)
where the gradient is defined as $`\stackrel{\leftrightarrow }{\nabla }=1/2(\stackrel{\rightarrow }{\nabla }-\stackrel{\leftarrow }{\nabla })`$. It makes it possible to calculate the scattering amplitude for $`\pi ^++\pi ^{-}\to \pi ^0+\pi ^0`$ to order $`p^2`$ which then must agree with the definition (1) to this order. The result of this matching for the first coupling constant is then
$`C_0={\displaystyle \frac{8\pi }{3m_+^2}}\left[a-(b-a){\displaystyle \frac{\mathrm{\Delta }m}{m_+}}\right]`$ (5)
when we neglect smaller rescattering corrections. Similarly, one finds for the derivative coupling $`C_2=(8\pi /3m_+^4)(b-a)`$. These values are more accurate than the ones used previously .
With the value of the coupling $`C_0`$ now determined, one can calculate the second order correction $`\mathrm{\Delta }E`$ to the ground state energy of pionium from the bound state diagram in Fig.1 as first pointed out by Labelle and Buckley. It is found to be imaginary with a resulting decay rate of $`\mathrm{\Gamma }=2\text{Im}\mathrm{\Delta }E`$. To lowest order in the mass difference $`\mathrm{\Delta }m`$ and ignoring the small binding energy, this gives the zero-order result (2).
Since the pion scattering lengths are set by the natural size $`1/m_\pi `$, the counting rules needed to estimate the magnitudes of contributions appearing in different orders of perturbation theory are simple. The energy $`E`$ in the propagator will be of the order $`Q^2`$ when the characteristic momentum in the process is $`Q`$. As a result, the propagator scales as $`1/Q^2`$. For the same reason the four-dimensional volume integration $`d^4k`$ will scale as $`Q^5`$. The loop diagram in Fig. 1 thus scales as $`Q`$ since it involves two one-particle propagators. This is the leading order contribution to the decay rate.
To next order in the effective theory we must include the contribution from the derivative coupling $`C_2`$ in (4). It will follow from the diagram in Fig.2 and is seen to scale as $`Q^3`$ again using dimensional regularization. The contributions to the decay rate from these two diagrams are thus found to be
$`\mathrm{\Gamma }={\displaystyle \frac{m_0}{4\pi }}|\mathrm{\Psi }(0)|^2\sqrt{2\mathrm{\Delta }mm_0}(C_0^2+2C_0C_2\mathrm{\Delta }mm_0)`$ (6)
With the above values for the two coupling constants we then simply get the lowest order rate (2). The correction due to the slope parameter thus disappears with the more precise values for the matched coupling constants used here. It results from a cancellation between a next-to-leading order $`p^2`$ contribution and a $`\mathrm{\Delta }m`$ contribution which in this particular process are of the same order.
To this order in perturbation theory relativistic effects must also be included. These have previously also been considered in more covariant approaches. Here they will arise from the lowest order relativistic correction to the free Lagrangian $`\mathcal{L}_0`$ which now should be taken to be
$`\mathcal{L}=\pi ^{\dagger }\left(i\partial _t+{\displaystyle \frac{\nabla ^2}{2m_\pi }}+{\displaystyle \frac{\nabla ^4}{8m_\pi ^3}}\right)\pi `$ (7)
This new interaction will modify the propagators in the bubble of Fig.1 as shown in Fig.3.
Since the diagram now involves three propagators, it scales as $`Q^5\,Q^4\,Q^{-6}=Q^3`$. This is also confirmed by an actual evaluation of the diagram which is given by
$`\mathrm{\Delta }E^{(rel)}={\displaystyle \int \frac{d^3p}{(2\pi )^3}\int \frac{d^3k}{(2\pi )^3}\int \frac{d^3q}{(2\pi )^3}\mathrm{\Psi }^{*}(𝐩)C_0\frac{𝐤^4/8m_0^3}{(2\mathrm{\Delta }m-𝐤^2/m_0+iϵ)^2}C_0\mathrm{\Psi }(𝐪)}`$ (8)
Here $`\mathrm{\Psi }(𝐩)`$ is the Fourier transform of the bound state wavefunction. The integrations over momenta $`𝐩`$ and $`𝐪`$ now give just $`|\mathrm{\Psi }(0)|^2`$. Using dimensional regularization for the divergent integral over the loop momentum $`𝐤`$, we obtain the finite result
$`\mathrm{\Delta }E^{(rel)}={\displaystyle \frac{5iC_0^2}{32\pi }}\mathrm{\Delta }m\sqrt{\mathrm{\Delta }mm_0}|\mathrm{\Psi }(0)|^2`$ (9)
In terms of the decay rate, it corresponds to
$`{\displaystyle \frac{\mathrm{\Delta }\mathrm{\Gamma }^{(rel)}}{\mathrm{\Gamma }}}={\displaystyle \frac{5\mathrm{\Delta }m}{4m_+}}`$ (10)
and amounts to 4.1%. By its very nature it can also be derived using more covariant methods. Finally, the same relativistic interaction will also act on the external legs of the diagram in Fig.1. But the resulting correction is then of the order $`\alpha ^2`$ and can thus be neglected here.
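For completeness, Eq. (10) can be evaluated numerically; the small sketch below uses standard charged and neutral pion mass values (not quoted in the text) and is only meant to reproduce the 4.1% figure:

```python
# numerical evaluation of Eq. (10); pion masses are standard PDG-type values in MeV
m_plus = 139.570   # charged pion mass
m_zero = 134.977   # neutral pion mass
delta_m = m_plus - m_zero

rel_correction = 5 * delta_m / (4 * m_plus)
print(f"Delta Gamma / Gamma = {rel_correction:.3f}")   # about 0.041, i.e. 4.1%
```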
The relativistic correction (10) follows also directly from the available phase space for the $`\pi ^0\pi ^0`$ final state since there is no energy dependence in the annihilation amplitude to lowest order. Each $`\pi ^0`$ has the energy $`E_0=m_0+k^2/2m_0-k^4/8m_0^3`$ when we include the next-to-leading order term in the momentum expansion. The decay rate will then involve the integral
$`\mathrm{\Gamma }\propto {\displaystyle \int \frac{d^3k}{(2\pi )^3}\delta (2m_+-2E_0)}`$ (11)
when we ignore the small binding energy. The argument of the delta-function will now have two zeros of which one represents an unphysical, high-momentum state. Keeping only the contribution from the physical state, we recover exactly the additional term (10).
In a recent paper by Gall, Gasser, Lyubovitskij and Rusetsky the pionium lifetime is also calculated from the non-relativistic Lagrangian used above. But instead of using bound state perturbation theory based upon the standard Coulomb wavefunction as done here, they determine the properties of the bound state by calculating the complex pole on the second Riemann sheet of the corresponding T-matrix. In this way they derive higher order corrections which are not considered here. Our results are consistent with their general form of the decay rate where the relativistic correction (10) is seen to be the first term in the expansion of their decay momentum $`p^{*}`$.
Almost at the same time another calculation of the lifetime was completed by Eiras and Soto. This is again based upon the same effective Lagrangian which is now further reduced by integrating out degrees of freedom with momenta of the order $`\sqrt{2\mathrm{\Delta }mm_0}`$. It then becomes an effective theory for only charged pions with contact and Coulomb interactions. Their result for the lifetime is also in agreement with what we have obtained here.
We want to thank P. Labelle for several helpful comments and J. Gasser, P. Labelle, V.E. Lyubovitskij, A. Rusetsky and J. Soto for many clarifying discussions during the HadAtom99 workshop. In addition, we are grateful to the Department of Physics and the INT at the University of Washington in Seattle for generous support and hospitality. Xinwei Kong was supported by the Research Council of Norway.
# Single-vehicle data of highway traffic - a statistical analysis
## I Introduction
Experimental and theoretical investigations of traffic flow have been the focus of extensive research during the past decades . Various theoretical concepts (e.g. ) have been developed and numerous empirical observations have been reported . Despite these enormous scientific efforts, both the theoretical concepts and the experimental findings are still under debate. In particular the empirical analysis turns out to be very subtle because the data strongly depend on several external influences, e.g. weather conditions or the performance of junctions . Therefore, even certain experimental facts are not well established, although considerable progress has been made in the past few years. To date, the following experimental view of highway traffic seems to be common knowledge and generally accepted: It has been found that at least three states with qualitatively different behavior exist, namely free-flow, stop-and-go and synchronized states . In addition to the existence of qualitatively different phases some other interesting phenomena have been observed, e.g. the spontaneous formation of jams , and hysteresis effects .
This work focuses basically on two points. First, we present a direct analysis of single-vehicle data which leads to a more detailed characterization of the different microscopic states of traffic flow, and second, we use standard techniques of time-series analysis in order to establish objective criteria for an identification of the different states.
A more detailed characterization of the microscopic structure of the different traffic states should lead to sensitive checks of the different modeling approaches. In particular the time-headway distributions and the speed-distance relations, which, to our knowledge, have been calculated here for the first time from single-vehicle data, allow for a quantitative comparison with simulation results of microscopic models . Moreover, the objective criteria for an identification of the different traffic states, developed in the framework of this article, allow for an unbiased analysis of the experimental data.
The paper is organized as follows. In section II we present some technical details of the measurements as well as of the given data set. The analysis of single-vehicle data is presented in section III. Explicitly we show results for the time-headway distribution and the speed-distance relations. These results are compared to earlier estimates based on data from Japanese highways . In section IV we show the results for the fundamental diagram. Here we focus on the effect of different time-intervals for the collection of data and discuss different methods for the calculation of the stationary fundamental diagram. Finally the time-series analysis of the single-vehicle data as well as of the aggregated data are presented in section V.
## II Remarks on the data-collection
The data set is provided by 12 counting loops, all located on the German highway A1 near Cologne. On this section of the highway a speed limit of $`100km/h`$ is in force, at least in theory.
In Fig. 1 the section of the highway and the positions of the detectors are sketched. A detector consists of three individual detection devices, one for each lane. By combining the three devices covering the three lanes belonging to one direction (except D2, see Fig. 1) one obtains the cross-sections labeled D1 through D4. The two detector arrangements D1 and D4 are installed at the intersection of two highways (AK Köln-Nord), while D2 and D3 are located close to a junction (AS Köln-Lövenich). These locations are approximately $`9km`$ apart. In between there is a further junction, but with a rather low usage. The most interesting results are obtained at D1, where the number of lanes is reduced from three to two for cars passing the intersection towards Köln-Lövenich. Therefore, this part of the highway effectively acts as a bottleneck. Consequently, congested traffic is most often recorded at detector D1, and the analysis is mainly based on this data set.
The data were collected between June 6, 1996 and June 17, 1996 when a total number of more than $`500,000`$ vehicles passed each cross-section, with a portion of trucks and trailers of about $`16\%`$ on average. During this period the traffic data set was not biased due to road constructions or bad weather conditions.
The distance-headway $`\mathrm{\Delta }x`$ as well as the velocity $`v`$ of the vehicles passing a detector are collected in the data set. The velocity $`v`$ is derived from the time elapsed between the crossing of the first and the second detection device, which are installed in a row at a known distance (usually $`2m`$). The second direct measure is the time elapsed between two consecutive vehicles. For storage capacity reasons it is saved with a rough resolution (only $`1\mathrm{sec}`$), but it is used to determine the distance between the vehicle $`n+1`$ and its predecessor $`n`$ via $`\mathrm{\Delta }x_{n+1}=v_n\mathrm{\Delta }t_{n+1}`$ (note that the error in calculating $`\mathrm{\Delta }x_{n+1}`$ therefore increases with $`\mathrm{\Delta }t_{n+1}`$, since the calculation assumes a constant speed $`v_n`$). This procedure gives correct results as long as the velocity at the detector is constant; here the small spatial extension of the detectors guarantees that this assumption is fulfilled. It is therefore admissible to overcome the restriction of the resolution by applying the reverse procedure to recover $`\mathrm{\Delta }t`$ with higher accuracy, as has been done in the framework of this paper.
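The reconstruction described above amounts to the following elementary relations (a schematic sketch, not the original evaluation code):

```python
def distance_headway(v_leader, dt_follow):
    """Distance-headway of vehicle n+1, assuming the leader's speed v_n stays constant."""
    return v_leader * dt_follow

def time_headway(v_leader, dx_follow):
    """Reverse relation: recover the time-headway from the reconstructed distance-headway."""
    return dx_follow / v_leader

# example: a leader at 25 m/s followed with a coarse 2 s gap gives a 50 m distance-headway
dx = distance_headway(v_leader=25.0, dt_follow=2.0)
print(dx, time_headway(25.0, dx))
```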
For a sensible discussion it is reasonable to split up the data set according to the different traffic states. In Fig. 2 a typical time-series of one-minute aggregates of the speeds at the detectors D1 and D2 is shown. The transition from a free-flow to a congested state is indicated by a sudden drop of the local velocity. This allows for an unambiguous separation of the data set into free-flow and congested regimes. The analysis of the data has then been performed separately for the free-flow and congested states, excluding the transition regime. At D1, the most interesting installation, one obtains eight different periods of both free-flow and congested states. These periods are labeled by numbers I through VIII.
Fig. 2 also shows the bottleneck effect caused by the lane reduction near the intersection. At D1, the cross-section behind the local defect, one observes a sudden drop in the velocity. Downstream of this cross-section, on the other hand, one finds only a weak decay of the velocity, which represents the outflow from a jam.
## III Analysis of single-vehicle data
In this section the results for the time-headway distributions and speed-distance relations calculated from single-vehicle data are presented. For a detailed examination the data set was classified in two ways: as mentioned above, a discrimination between free-flow and congested states was made, followed by a classification according to local densities (how the local density is deduced from the data set is described in Appendix A). Whereas the first classification was done by a simple manual separation by means of the time series of the speed, the second one requires a more detailed explanation: every count belongs to a certain minute, and the local density $`\rho `$ obtained during this minute is the criterion for the classification. It is therefore conceivable that an individual distance-headway $`\mathrm{\Delta }x`$ is much larger than the mean distance-headway $`\rho ^{-1}`$ of the considered period. Moreover, for the analyses made in this section the traffic states recognized as stop-and-go traffic are omitted, since the determination of the corresponding local densities strongly depends on the method used (this behavior of the local density is discussed more precisely in Appendix A; stop-and-go traffic is characterized by a large value ($`\approx 1`$) of the cross-covariance between the local density and the local flow as defined by (3) and displayed in Fig. 11). So only the synchronized states remain.
### A Time-headway distribution
In section II the way of calculating the time-headways $`\mathrm{\Delta }t`$ is described in detail. In principle the accuracy of the measurement would allow for a very fine resolution of the time-headway distribution, but in order to obtain reliable statistics we have chosen time-intervals of length $`0.1\mathrm{sec}`$. In Fig. 3 the time-headway distributions of different traffic states at different local densities $`\rho `$ are displayed.
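A minimal sketch of how such state-resolved headway histograms can be generated from the single-vehicle counts (the array layout and the selection by one-minute density are illustrative assumptions, not the authors' original code):

```python
import numpy as np

def headway_histogram(dt_values, densities, rho_min, rho_max, bin_width=0.1, t_max=10.0):
    """Normalized histogram of time-headways (0.1 s bins) for counts whose
    one-minute local density lies in [rho_min, rho_max)."""
    dt = np.asarray(dt_values)          # time-headway of each count, in seconds
    rho = np.asarray(densities)         # local density of the minute the count belongs to
    selected = dt[(rho >= rho_min) & (rho < rho_max)]
    bins = np.arange(0.0, t_max + bin_width, bin_width)
    hist, edges = np.histogram(selected, bins=bins, density=True)
    return hist, edges
```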
Regardless of the value of the local density all free-flow distributions are dominated by a two-peak structure. The first peak at $`\mathrm{\Delta }t=0.8\mathrm{sec}`$ represents the global maximum of the distribution and is in the range of time a driver typically needs to react to external incidents.
On a microscopic level these short time-headways correspond to platoons of some vehicles traveling very fast: their drivers are taking the risk of driving “bumper-to-bumper” with a rather high speed. These platoons are the reason for the occurrence of high-flow states in free traffic. The corresponding states exhibit metastability, i.e. a perturbation of finite magnitude and duration is able to destroy such a high-flow state . Once such a collapse of the flow or of the speed occurs, the free-flow branch can only be reached again by reducing the local density . In the data base considered here such a sharp fall is not observable, since all detected jams are caused by the bottleneck downstream of the detector D1. Additionally, a second peak emerges at $`\mathrm{\Delta }t=1.8\mathrm{sec}`$ which can be associated with typical driver behavior: it is recommended and safe to drive with a temporal distance of $`2\mathrm{sec}`$, corresponding to a maximum flow of $`1,800veh/h`$.
Surprisingly, the small time-headways have much less weight in congested traffic but the peak at $`\mathrm{\Delta }t=1.8\mathrm{sec}`$ is recovered. Here the background signal is of greater importance. The observed peak corresponds to the typical temporal headway ($`2\mathrm{sec}`$) of two vehicles leaving a jam consecutively.
However, almost every fourth driver falls below the $`1`$-$`\mathrm{sec}`$-threshold, and this is more likely when the traffic is free-flowing. Moreover, our results indicate that the small time-headways are of the highest weight in the transition regime between free-flow and congested flow.
The common structure of the time-headway distributions in all density regimes can be summarized as follows: A background signal covers a wide range of temporal headways, especially for $`\tau <1\mathrm{sec}`$. Additionally, at least one peak is to be noticed.
### B Speed distance-headway characteristics
Probably the most important information for an adjustment of the speed is the accessible distance-headway $`\mathrm{\Delta }x`$. This is captured by several models which use either a stationary fundamental diagram or even more directly a so-called optimal-velocity (OV-) function $`v=v(\mathrm{\Delta }x)`$ as input parameters . Therefore, a detailed analysis of the speed-distance relationship is of great importance for the modeling of traffic flow.
From Fig. 4 it is obvious that the average speed does not only depend on $`\mathrm{\Delta }x`$ itself, but also on the local density. In particular, the average speed for large distances in congested states is significantly lower than in the free-flow states, but it also saturates for sufficiently large $`\mathrm{\Delta }x`$.
Next we also took into account the velocity differences $`\mathrm{\Delta }v=v_n-v_{n+1}`$ between consecutive cars ($`n`$ followed by $`n+1`$). The dependence $`\mathrm{\Delta }x=\mathrm{\Delta }x(\mathrm{\Delta }v)`$ is depicted in Fig. 5, where we also discriminated between the interesting traffic states. The results clearly indicate that $`\mathrm{\Delta }x`$ is minimized if both cars move with the same velocity, irrespective of the microscopic state. Note that this minimal $`\mathrm{\Delta }x`$ is smaller than the mean distance derived from the inverse of the local density. Similar results and comparable conclusions were presented in , where the probability distribution $`P(v_t-v_{t+\tau })`$ was investigated. In this context $`v_t`$ resp. $`v_{t+\tau }`$ are the speeds of two arbitrary (not necessarily consecutive) vehicles crossing the detector with a temporal distance of $`\tau `$ seconds. They also observed a peak at $`v_t-v_{t+\tau }=0`$ for any $`\tau `$.
These observations are the motivation to determine an OV-function using exclusively the data with $`|\mathrm{\Delta }v_n|\le 0.5m/\mathrm{sec}`$, because these should be the relevant measurements if an empirical OV-function is demanded as an input parameter for traffic models. By using this reduced data set a better convergence of the OV-function in the high-density regime is observable (Fig. 6). Nevertheless, the results in the free-flow and congested regimes still differ strongly. This indicates that in congested traffic the drivers do not only react to the distance to the next vehicle ahead, but also take into account the situation at larger distances. It should be noticed that dropping the measurements with $`|\mathrm{\Delta }v_i|>0.5m/\mathrm{sec}`$ leaves only a fifth of the data, but the quality of the OV-diagrams does not suffer very much from this restriction.
In order to give explicit measures for the OV-functions in the different density regimes we used the ansatz:
$$V(\mathrm{\Delta }x)=k\left[\mathrm{tanh}(a(\mathrm{\Delta }x-b))+c\right]$$
(1)
suggested by Bando et al. , where $`a,b,c,k`$ serve as fit parameters.
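A possible implementation of this fit (a sketch only; the starting values are plausible guesses and not the parameters reported in Tab. I):

```python
import numpy as np
from scipy.optimize import curve_fit

def ov_ansatz(dx, k, a, b, c):
    """Optimal-velocity ansatz V(dx) = k [tanh(a(dx - b)) + c] of Eq. (1)."""
    return k * (np.tanh(a * (dx - b)) + c)

def fit_ov(dx_data, v_data):
    """Fit the ansatz to the averaged speed-distance data of one density regime."""
    p0 = (15.0, 0.1, 10.0, 1.0)   # initial guesses for k, a, b, c (illustrative only)
    popt, pcov = curve_fit(ov_ansatz, dx_data, v_data, p0=p0)
    return popt
```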
In Fig. 7 the empirical relations $`v=v(\mathrm{\Delta }x)`$ are displayed, averaging (top) over all states corresponding to free flow, (middle) over all congested states and (bottom) over all empirical data satisfying the restriction $`|\mathrm{\Delta }v|\le 0.5m/s`$. The comparison with an empirical OV-function established by analyses of a car-following experiment on a Japanese highway reveals a higher value of $`v(\infty )`$ and a slower increase of the OV-function.
The characteristic values of the different OV-functions are summarized in Tab. I, where $`D`$ denotes the distance where $`V(D)=0.95V(\infty )`$ holds. The numerical results show that when averaging over both free-flow and congested states (Fig. 7, bottom) the asymptotic regime of the OV-function is reached at much larger distances.
Our results for the OV-functions can be summarized as follows. In the free-flow regime the functions are characterized by a steep increase at small distances, corresponding to the small time-headways discussed in the previous subsection. For synchronized states it is remarkable that the asymptotic velocity takes a rather small value. Furthermore, our results show that it is necessary to distinguish between the traffic states in order to get a more precise description of the speed-headway relation.
## IV The fundamental diagram
In this section we present results on the fundamental diagram based on time-averaged data. The present data set allows for a free choice of the averaging interval and overlaps. Here we compare the results obtained for one- and five-minute intervals. At the end of this section we discuss different methods in order to establish the stationary fundamental diagram.
In Fig. 8 fundamental diagrams for averaging intervals $`\mathrm{\Delta }t`$ of one and five minutes are shown. Beyond the trivial effect that longer averaging intervals lead to a reduction of the fluctuations, one observes that both the extremal values of the density and the flow decrease with growing $`\mathrm{\Delta }t`$. Moreover, the small flow values at very low densities are averaged out if five minute intervals are chosen. One might ask whether longer $`\mathrm{\Delta }t`$’s hide some real structure of traffic states or whether the additional structure in the one minute intervals is a statistical artifact. From our point of view the results for the low density branch, which agree for both averaging intervals, indicate that a one-minute interval is sufficient to establish the systematic density dependence of the flow. Beyond that microscopic states with short lifetimes can be detected using short time-intervals which makes the one-minute intervals preferable.
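For reference, the aggregation underlying Fig. 8 can be sketched as follows (hypothetical data layout; the local density is estimated as $`J/v`$, as discussed below and in Appendix A):

```python
import numpy as np

def aggregate(timestamps, speeds, dt_agg=60.0):
    """Return arrays of flow [veh/h], mean speed [km/h] and density [veh/km] per interval."""
    timestamps = np.asarray(timestamps)          # passage times in seconds
    speeds = np.asarray(speeds)                  # single-vehicle speeds in km/h
    edges = np.arange(timestamps.min(), timestamps.max() + dt_agg, dt_agg)
    flow, mean_v, density = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (timestamps >= lo) & (timestamps < hi)
        if not mask.any():
            continue
        J = mask.sum() * 3600.0 / dt_agg         # vehicles per hour
        v = speeds[mask].mean()                  # km/h
        flow.append(J); mean_v.append(v); density.append(J / v)
    return np.array(flow), np.array(mean_v), np.array(density)
```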
The uncommon structure of the flow-density relation at small speeds, especially the return towards the origin of the coordinate system, must be traced back to the method of determining the local density via $`J/v`$, since the occupancy itself was not accessible in the underlying data set. This behavior will be explained in more detail in Appendix A.
In order to obtain the stationary flow-density relationship we generated histograms from the fundamental diagram. Due to the problems of the density estimation in stop-and-go traffic we omit these states in the further discussion (see the following section for the identification of stop-and-go states). In Fig. 9 the results for two averaging procedures are displayed. The continuous form of the fundamental diagram has been obtained by averaging over all flow values at a given density, while the discontinuous shape has been obtained by discriminating between free-flow and congested traffic. It should be mentioned that the shape of the continuous stationary fundamental diagram also depends on the statistical weight of free-flow and congested states. Therefore, from our point of view, it is necessary to distinguish between the different states in order to obtain reasonable results for the stationary fundamental diagram.
Using the latter method it turns out that for high densities the average flow takes a constant value in a wide range of density. This plateau formation is similar to what is found in driven systems with so-called impurity sites or defects, where in a certain density regime the flow is limited by the capacity of the local defect . Here the bottleneck effect is produced by lane-reduction as well as by the large activity of the on- and off-ramps at the intersection (see Section II).
## V Time series analysis
As already mentioned in the introduction we propose objective criteria for the classification of different traffic states using standard methods of time-series analysis. Beyond that we will show that these methods allow for a further characterization of the different states.
### A Auto-covariance function
The first quantity to consider is the auto-covariance
$$ac_x(\tau )=\frac{\langle x(t)x(t+\tau )\rangle -\langle x(t)\rangle ^2}{\langle x^2(t)\rangle -\langle x(t)\rangle ^2}$$
(2)
of the aggregated quantities $`x(t)`$. The brackets $`\langle \mathrm{}\rangle `$ indicate the average over a complete period of a free-flow or congested state.
In Fig. 10 the auto-covariances of one-minute aggregates of the density, flow and average velocity of a free-flow and a congested state are shown. In the free-flow state the average speeds are only correlated on short time scales whereas long-ranged correlations are present in the time series of local density as well as of the flow. This implies that no systematic deviations of the average velocity from the constant average value are observable, while the density and therefore also the flow vary systematically on much longer time scales up to the order of magnitude of hours.
This behavior of the auto-covariance is clearly contrasted with the behavior found in synchronized traffic, where all temporal correlations are short-ranged irrespective of the chosen observable. Both results show that longer time scales are only apparent in slow variations of the density during a day, while the other time-series reveal a noisy behavior.
Furthermore, the cross-covariance
$$cc_{x,y}(\tau )=\frac{\langle x(t)y(t+\tau )\rangle -\langle x(t)\rangle \langle y(t+\tau )\rangle }{\sqrt{\langle x^2(t)\rangle -\langle x(t)\rangle ^2}\sqrt{\langle y^2(t)\rangle -\langle y(t)\rangle ^2}}$$
(3)
indicates the strong coupling between flow and density in the free-flow regime (Fig. 11). This implies that the variations of the flow are mainly controlled by density fluctuations while the average velocity is almost constant. Again the results for synchronized states differ strongly. Here all combinations of flow, density and average velocity lead to small values of the cross-covariance also supporting the existence of irregular patterns in the fundamental diagram.
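A corresponding sketch for the cross-covariance of Eq. (3), under the same assumptions as the auto-covariance example above, might be:

```python
import numpy as np

def cross_covariance(x, y, max_lag):
    """Normalized cross-covariance cc_{x,y}(tau) of Eq. (3) for lags 0..max_lag-1."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    norm = x.std() * y.std()
    cc = np.empty(max_lag)
    for tau in range(max_lag):
        xs, ys = x[:n - tau], y[tau:]
        cc[tau] = (np.mean(xs * ys) - xs.mean() * ys.mean()) / norm
    return cc
```

Evaluated at $`\tau =0`$ for density and flow, this gives the quantity $`cc_{\rho ,J}(0)`$ used below to separate synchronized traffic (small values) from free-flow and stop-and-go traffic (values close to 1).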
Therefore the covariance analysis of the empirical data is in agreement with the interpretation of empirical data given in , where synchronized states were first identified. The synchronized states can be distinguished from stop-and-go traffic using the same methods.
Similar to free-flow states, stop-and-go traffic is characterized by strong correlations between density and flow ($`cc_{\rho ,J}(0)\approx 1`$). Beyond that, the auto-covariance function also shows an interesting behavior, namely an oscillating structure for all three quantities of interest. The period of these oscillations is approximately $`10`$ min. This result is in accordance with measurements by Kühne , who found oscillating structures in stop-and-go traffic with similar periods.
### B Transitions between the different states
The previous results show that the time-series analysis allows for an identification of different traffic states. Now we focus on the transition regime. Compared to the typical life-time of a free-flow or a congested state the transition is short, of the order of fifteen minutes (see Fig. 13 for a typical time-series of the local speed including a congested state).
Transitions from free-flow to both congested states are observable in the data set. The transitions take place at densities significantly lower than the density of maximum flow, since the transitions are initiated by a reduction of the capacity of the bottleneck and not by a continuous increase of the local density. We also want to mention that the congested states are often composed of stop-and-go and synchronized states, i.e. during a time-series corresponding to congested traffic, transitions between the two congested states occur frequently.
The results of several other empirical investigations suggest that the transition between free and congested flow is accompanied by a peak of the velocity variance at the transition . Our analysis clearly does not support this result. The existence and height of the peak are closely related to the length of the averaging intervals. In our view these peaks are numerical artifacts: they show up because the averaging interval includes two different states, and they do not reflect any further characteristics of the transition.
### C Correlation between different lanes
In addition to the irregular pattern in the fundamental diagram, it has been argued that a characteristic feature of the synchronized states is the strong coupling between different lanes . These interpretations are mainly based on the fact that the average speeds on the different lanes approach each other. Here this effect is not observable because, due to the speed limit, the average velocities on different lanes differ only slightly even in the free-flow regime (the average speed in the free-flow regime is given by $`120km/h`$ on the left lane and by $`100km/h`$ on the two other lanes).
Therefore we calculated $`cc_{x_i,x_j}(\tau )`$ in order to quantify this coupling effect. Here $`x_i`$ denotes the flow, density, or speed on lane $`i`$. In Fig. 14 the cross-covariances of different lanes belonging to the same driving direction are shown. The coupling between flow and density in the synchronized state is comparable to the free-flow state. It is also apparent that the free-flow signal remains correlated on long time-scales, while in synchronized states the correlations decay rapidly with time. Again this result mainly reflects the daily variation of the density.
The synchronization of the different lanes is indicated by a large value ($`cc_{v_i,v_j}(0)\approx 0.9`$) of the cross-covariance of the speed at $`\tau =0`$, while the time series of the speed on different lanes in free-flow are completely decoupled.
### D Time-series of the single-vehicle data
In the previous section it was shown that the relevant time-scales are identifiable by time-series analyses. Here these methods, in particular a generalization of the cross-covariance function, will be used in order to gain further information on the different microscopic states.
By using the single-vehicle data directly it is not possible to evaluate the time-dependence of $`ac_x(t)`$ in realistic units because the time-intervals between consecutive signals strongly fluctuate. Instead of the temporal difference $`\tau `$ now the number of cars $`n`$ passing the detector between vehicle $`i`$ and $`j=i+n`$ is used.
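A hypothetical sketch of this vehicle-indexed variant (our notation; the per-vehicle arrays are assumed to be ordered by detection time):

```python
import numpy as np

def auto_covariance_by_car(x, max_n):
    """ac_x(n) with the lag counted in passing vehicles instead of seconds.

    x is a per-vehicle series in order of detection, e.g. the individual speeds
    v_i or the time-headways dt_i recorded by one detector.
    """
    x = np.asarray(x, dtype=float)
    m = len(x)
    mean, var = x.mean(), x.var()
    return np.array([(np.mean(x[:m - n] * x[n:]) - mean ** 2) / var
                     for n in range(max_n)])
```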
The behavior of the auto-covariance in the free-flow regime can be characterized as follows. For small $`n`$ one observes a steep decrease of $`ac_x(n)`$, while asymptotically a slow decrease is found. The crossover from a fast to a slow decay is observed for a small number of cars ($`n\approx 5`$).
In the free-flow regime the single-vehicle data basically support the results obtained for the aggregated data, namely a strong coupling between the temporal headways (the single-vehicle analogue of the flow: $`J\propto \mathrm{\Delta }t^{-1}`$) and the distances (corresponding to the density: $`\rho \propto \mathrm{\Delta }x^{-1}`$). Moreover, the slow asymptotic decay is mainly due to the daily variation of the density. A different behavior has been found for $`ac_v(n)`$. First, the decay for small $`n`$ is not as fast as for the other signals and, second, the function decays faster asymptotically. The asymptotic behavior, in turn, is in accordance with the result drawn from aggregated data. But from our point of view the slower decrease for short distances is of special interest. It implies that also in the free-flow regime small platoons of a few cars moving with the same speed are formed. These platoons lead to the peak at $`\mathrm{\Delta }t=0.8\mathrm{sec}`$ in the time-headway distribution.
As Fig. 15 shows, $`ac_x(n)`$ of the congested-state quantities behaves similarly, except for two differences: first, no long-ranged signal is present for any of the quantities of interest, and second, the decay of $`ac_v(n)`$ for small $`n`$ is much weaker than in the free-flow regime. This leads to the following picture of the microscopic states in synchronized flow: similar to the free-flow regime, platoons of cars moving at the same speed are formed, but in synchronized flow these platoons are much larger (of the order of some ten vehicles).
## VI Summary and conclusion
In this paper a detailed statistical analysis of single-vehicle data of highway traffic is presented. The data allow us to analyze the microscopic structure of different traffic states as well as to discuss time-averaged quantities.
Using the single-vehicle data directly we calculated the time-headway distribution and the headway-dependence of the velocity. Both quantities are of great interest for modeling of traffic flow, because they can be directly compared with simulation results or are even used as input parameters for several models.
Our analysis of the time-headway distribution has revealed a qualitative difference between free-flow and synchronized states. The time-headway distribution of free-flow states shows a two-peak structure. The first peak is located at very small time-headways ($`\mathrm{\Delta }t\approx 0.8\mathrm{sec}`$) while a second peak shows up at $`\mathrm{\Delta }t\approx 1.8\mathrm{sec}`$. The second peak is also observed in congested flow, but the weight of smaller time-headways is significantly reduced. The small time-headways correspond to very large values of the flow. Therefore the peak at small time-headways can be interpreted as a microscopic verification of meta-stable free-flow states.
Similar results have also been obtained for the speed distance relation, the so-called OV-function . It also turned out that it is necessary to distinguish between free-flow and congested states. In particular the asymptotic velocities in free-flow and congested states differ strongly. Moreover a global average leads to different characteristics at small distances.
For comparison with earlier empirical investigations we have also used aggregated data in order to calculate the fundamental diagram. Our results indicate that one-minute intervals are preferable to five-minute periods (see for comparison), although these shorter intervals lead to larger fluctuations. Nevertheless, from our point of view, the single-vehicle data suggest that these fluctuations are not an artifact of the short averaging procedure but represent the complex structure of the different traffic states.
The data have also been used to calculate a stationary fundamental diagram. Again our results show that it is necessary to distinguish between free-flow and congested states in order to get reliable results for the average flow at a given density. Then one obtains a discontinuous form of the fundamental diagram and a non-unique behavior of the flow at low densities.
Using the auto-covariance and cross-covariance functions of the different time-series we were able to identify three qualitatively different microscopic states of traffic flow, namely free-flow, synchronized and stop-and-go traffic . The free-flow states are characterized by a strong coupling of flow and density and, beyond that, by a slow decay of the related auto-covariance functions. This implies that, as long as a free-flow state is present, the flow solely depends on the density. The time-scale which governs the asymptotic decay of the auto-covariance function is also determined by the daily variation of the density.
As shown in section II one can easily distinguish free-flow and congested states. By contrast, it is much more difficult to separate time-series belonging to stop-and-go and synchronized states by inspection. Therefore an objective criterion is of great interest. It turns out that the time-series analysis provides such a criterion. Synchronized states are indicated by small values of the cross-covariance between flow, speed and density. Moreover, the auto-covariance function is short-ranged for all three quantities. These results reflect the completely irregular pattern in the flow-density plane found for synchronized states. By contrast, in stop-and-go traffic flow and density are strongly correlated, while the auto-covariance function reveals an oscillating structure with a period of the order of ten minutes. In addition it was found that transitions between free-flow and congested flow are rare, but transitions between the different congested states are more frequent.
The auto-covariance functions of the single-vehicle data suggest that in the free-flow regime as well as in synchronized states platoons of cars moving with the same velocity can be observed. Presumably the platoons in the free-flow regime lead to the peak at $`\mathrm{\Delta }t\approx 0.8\mathrm{sec}`$ in the time-headway distribution and therefore to very large values of the flow.
From our point of view our results have important implications for the theoretical description of traffic flow phenomena. The short distance-headways present in free-flow traffic are only possible when drivers anticipate the behavior of the vehicles in front of them . Anticipation is less important in congested traffic. Another important effect is reflected by the gap-dependence of the velocity at high densities. Here we observe a small asymptotic velocity. This implies that drivers tend to hold their speed in dense states, another feature which has to be captured by traffic models. Finally, one has to take into account the reduced outflow from a jam, which has been verified by other authors and is supported by our results.
In conclusion, the analysis of single-vehicle data leads to a much better understanding of the microscopic structure of different traffic states. Although our results give a consistent picture of the experimental facts on highway traffic, an enlarged data set or data from other detector locations would be very helpful in order to consolidate the experimental findings. First of all, a series of counting loops would allow a more detailed analysis of the spatio-temporal structure of highway traffic; in addition, data from on- and off-ramps would help to discriminate between bulk and boundary effects.
## Appendix A:
In principle one could use the single vehicle data directly in order to establish the velocity-flow relationship because the speed and the time-headway (which is proportional to the inverse flow) of individual cars are provided by the detector. Unfortunately, an interpretation of these results is difficult because of the extreme fluctuations of the experimental data. Therefore we used aggregated data in order to determine a fundamental diagram. In particular we show the flow-density relationship of one and five minute aggregates.
While the local flow is directly given in the data set, one has to calculate the temporally averaged local density $`\rho `$ at the detector, because the coverage of a detector (i.e. the fraction of time during which the detector is occupied by vehicles) is not provided here. The local density can be calculated via the relation
$$\rho =J/v,$$
(4)
where $`J\propto N`$ is closely related to the total number of cars $`N`$ crossing the detector during the time interval $`[t,t+\mathrm{\Delta }t]`$, and $`v=\frac{1}{N}\sum _nv_n(t)`$ is the average velocity of the cars. Note that both the velocity $`v_n(t)`$ of the individual cars and the flow $`J`$ are directly accessible. Therefore this method should give the best estimate for the local density $`\rho `$ as long as the velocity $`v_n(t)`$ represents a characteristic value of the local speed.
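A minimal sketch of this aggregation step (illustrative only; function and variable names are ours, and the speeds are assumed to be given in km/h so that Eq. (4) yields veh/km):

```python
import numpy as np

def aggregate_fundamental_diagram(t_pass, v_pass, dt_agg=60.0):
    """Per-interval flow J [veh/h], mean speed v [km/h] and density rho = J/v [veh/km].

    t_pass : passage times of the individual vehicles at the detector [s]
    v_pass : their individual speeds [km/h]
    Intervals without vehicles are returned as NaN.
    """
    t_pass = np.asarray(t_pass, dtype=float)
    v_pass = np.asarray(v_pass, dtype=float)
    edges = np.arange(t_pass.min(), t_pass.max() + dt_agg, dt_agg)
    idx = np.digitize(t_pass, edges) - 1
    n_bins = len(edges) - 1
    J = np.full(n_bins, np.nan)
    v = np.full(n_bins, np.nan)
    for k in range(n_bins):
        sel = idx == k
        if sel.any():
            J[k] = sel.sum() * 3600.0 / dt_agg   # vehicles per hour
            v[k] = v_pass[sel].mean()            # arithmetic mean of the local speeds
    return J, v, J / v                           # Eq. (4)
```

As discussed next, this estimate is reliable only as long as the detected speeds are representative of the local speed, which fails in stop-and-go traffic.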
Problems using this kind of density calculation may arise from the strong fluctuations of the speed, especially in stop-and-go traffic. Then the velocity recorded by the detector gives a measure of the typical velocity of moving vehicles while the periods when cars do not move are not taken into account.
Fig. 16 illustrates the effect of the different measuring procedures using computer simulations. During the simulation of a continuous version of the NaSch-model (see Appendix B for a definition of the model) we used two kinds of detectors. The first detector is located at a link between two lattice sites. At this link we perform measurements of the number of passing cars (i.e. the local flow) and their velocity. Then the local density is calculated via (4). This result is compared with direct measurements of the local density where the average occupation on a short section of the lattice is detected. The figure shows that both estimates of the local density are in good agreement at low densities while at large densities the estimates may strongly differ. The different estimates for the local density lead to different shapes of the fundamental diagram. Estimating the density via the occupation of the detector we get the well known form of a high density branch while the calculation of the local density via (4) leads to a pattern which is similar to free-flow states but with a much smaller average velocity.
Similar patterns have also been found in our data set (see Fig. 8). Therefore the simulation results indicate that these periods correspond to stop-and-go traffic. Data points representing blocked cars are located in the origin of the fundamental diagram. Using a coverage-based density the points belonging to the same period would be shifted to the right. A deadlock situation would approach ($`\rho _{max}`$,0), with $`\rho _{max}`$ the maximum density. We suppose that $`\rho _{max}=140veh/km`$.
Finally we want to mention that this problem cannot be circumvented using the speed-flow relation because one is still left with the problem of overestimating local speeds in stop-and-go traffic.
## Appendix B:
The simulation results have been obtained using a space-continuous version of the Nagel-Schreckenberg (NaSch) model for single-lane traffic. Analogous to the NaSch model the velocity of the $`n`$-th car in the next time step is determined via the following four rules which are applied synchronously to all cars:
Step 1: Acceleration
$`V_n\rightarrow \mathrm{min}(V_n+1,V_{max})`$
Step 2: Deceleration (due to other vehicles)
$`V_n\rightarrow \mathrm{min}(V_n,d_n)`$.
Step 3: Randomization
$`V_n\rightarrow \mathrm{max}(V_n-rand(),0)`$ with $`rand()\in [0,1]`$
Step 4: Movement
$`X_n\rightarrow X_n+V_n`$.
The velocity of the n-th car $`V_n`$ is given in units of $`5m/s`$. $`V_{max}`$ denotes the maximum velocity, $`X_n`$ the position of the cars, $`d_n=X_{n+1}X_n1`$ the distance to the next car ahead. $`X_n`$ and $`d_n`$ are also given in units of $`5m`$ (the length of the cars). $`rand()`$ is a random number between 0 and 1. In our simulation we use $`V_{max}=8`$ which corresponds to $`40m/s`$ in realistic units.
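A minimal sketch of one update step of this space-continuous variant (our implementation on a periodic road; only $`V_{max}`$ and the unit conventions are taken from this appendix, the road length and array layout are assumptions):

```python
import numpy as np

def nasch_continuous_step(x, v, v_max=8.0, road_length=1000.0, rng=None):
    """One parallel update of the space-continuous NaSch variant described above.

    x, v : positions and velocities in units of 5 m and 5 m/s, stored in car
           order on a ring of the given length (car n+1 drives ahead of car n),
           so the cyclic order is preserved and the gap formula stays valid.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = (np.roll(x, -1) - x) % road_length - 1.0   # gap d_n to the car ahead
    v = np.minimum(v + 1.0, v_max)                 # step 1: acceleration
    v = np.minimum(v, d)                           # step 2: deceleration
    v = np.maximum(v - rng.random(len(v)), 0.0)    # step 3: randomization
    x = (x + v) % road_length                      # step 4: movement
    return x, v
```

Because the randomization can only reduce the velocity, cars never overtake, and the car order assumed by the gap calculation is conserved.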
The discrete NaSch model is also able to generate such fundamental diagrams, but with a worse resolution – the line of stop-and-go traffic has a rather steep slope. This is why we decided to use the continuous version of the NaSch model. Note that, besides the better spatial resolution of the continuous version, no qualitative difference between the continuous and discrete versions of the model has been found.
Acknowledgments: It is our pleasure to thank B.S. Kerner and D. Chowdhury for fruitful discussions. The authors are grateful to ”Landschaftsverband Rheinland” (Cologne) for data support, to ”Systemberatung Povse” (Herzogenrath) for technical assistance, to the Ministry of Economy, Technology and Traffic of North-Rhine Westfalia, and to the German Ministry of Education and Research for the financial support within the BMBF project ”SANDY”.
# RXTE OBSERVATIONS OF AN OUTBURST OF RECURRENT X-RAY NOVA GS 1354–644
## 1. Introduction
A modest X-ray outburst from the recurrent transient X1354–644 was detected by the All Sky Monitor (ASM) aboard the Rossi X-ray Timing Explorer (RXTE) around 1 November 1997 (Remillard, Marshall & Takeshima (1998)). The flux from the source as measured by the ASM in the 2-12 keV energy band, rose gradually to 40-50 mCrab in mid-November, stayed at this level for about a month, then started a slow decline. The maximum hard X-ray flux observed by BATSE (Harmon & Robinson (1998)) and HEXTE (Heindl et al. (1997)) appeared to be around 150 mCrab, indicating a hard spectrum. HEXTE detected an exponential spectral cutoff at higher energies.
Castro-Tirado (1997) obtained B- and R-band images following the detection of the source in X-rays. The optical counterpart BW Cir was found to be in a bright state. Radio (Fender et al. (1997)) and infrared (Soria, Bessell & Wood (1998)) emission was also detected during the X-ray outburst. Spectroscopic observations obtained in January 1998, reveal a strong H-alpha line in emission, with a profile that varied from double- to single-peaked over three nights, reminiscent of the black hole microquasar GRO 1655-40 during its 1996 outburst, and strongly suggesting an accretion disk origin (Buxton et al. (1998)).
The X-ray transient source X1354–644 belongs to the class of X-ray binaries known as X-ray Novae (e.g. Sunyaev et al. (1994); Tanaka & Shibazaki (1996)). All sources in this class are assumed to be recurrent, but only a few have been observed in more than one X-ray outburst. In the case of X1354–644, thanks to observations of a common optical counterpart (Pedersen, Ilovaisky & van der Klis (1987); Castro-Tirado et al. (1997)), one can reliably identify the 1997 X-ray event with one observed by Ginga in 1987 as GS 1354–64 (Makino (1987), Kitamoto et al. (1990)). It is quite probable that X-ray emission from the same source, under different names, was also detected in 1967 (Harries et al. (1967), Francey (1971)), in 1971-1972 (Markert et al. (1977)), and in 1974-1976 (Seward et al. (1976); Wood et al. (1984)). Those earlier observations, however, are more disputable because of the relatively poor ($`\sim `$ degrees) localization of the X-ray source and the lack of counterpart detections at other wavelengths.
Different outbursts of the source varied substantially in peak flux. The most prominent outburst from this area of sky was detected in 1967 as Cen X-2 (Harries et al. (1967)). It was the first X-ray transient found, and still one of the brightest Galactic X-ray sources known. Its association with GS 1354–644 is however questionable because of poor position determination for Cen X-2, and because the 1967 outburst luminosity would have been much higher than Eddington. The peak fluxes detected by Ginga in 1987 (Kitamoto et al. (1990)) and by OSO-7 in 1971-1972 (Markert et al. (1977)) were about 2 orders of magnitude weaker than Cen X-2. The maximum flux detected by RXTE in 1997 was about 3 times lower than in 1987 outburst.
The energy spectra were also quite different for the two outbursts (1987 and 1997). The spectrum measured with Ginga in 1987 was composed of a strong soft component below 10 keV and a power law at higher energies, which is typical for black-hole binaries in their “high” spectral state. In contrast, the spectrum detected by RXTE in the current outburst is typical of the “low” spectral state of Galactic X-ray binaries.
In this paper we present an analysis of the observations of GS 1354–644 during its 1997-1998 outburst based on RXTE data. An overview of observations and our data reduction approach are presented in section 2. Our timing analysis is presented in §3, and spectral analysis in §4. We compare our results with other sources and with some theoretical models in section 5. We summarize our results in §6.
## 2. Observations and data reduction
The Rossi X-ray Timing Explorer satellite (Bradt, Rothschild & Swank (1993)) has two coaligned spectrometers, the PCA and HEXTE, with narrow fields of view, as well as an All Sky Monitor (ASM). The PCA and HEXTE together provide broad-band spectral coverage in the energy range from 3 to $`\sim 200`$ keV. The ASM tracks the long-term behavior of sources in the 2-12 keV energy band. The data for our analysis have been obtained from the RXTE Guest Observer Facility at GSFC, which is a part of the High Energy Astrophysics Science Archive Research Center (HEASARC).
The reduction of PCA and HEXTE data was performed with the standard ftools package. To estimate the PCA background we applied the $`L7/240`$ background model, which takes into account various particle monitors and the SAA history, for the 17 November 1998 observation, and the $`VLE`$ (Very Large Events) -based model for other observations.
We used PCA response matrix v.3.3 (Jahoda 1998 a, b). Analysis of the Crab nebula spectra confirmed that the systematic uncertainties of the matrix were less than 1% in the 3-20 keV energy band. Uncertainties in the response, and a sharp decrease of the PCA effective area below 3 keV, make it hard to accurately measure low-energy spectral absorption. We have used PCA data only in the 3-20 keV range, and added a 1% systematic error for each PCA channel to account for residual uncertainties in the spectral response. All spectra were corrected for dead-time as per Zhang & Jahoda (1996).
Response matrix v.2.6 was used to fit the HEXTE spectra (Rothschild et al. (1998)). The background value for each cluster of HEXTE detectors was estimated from adjacent off-source observations. An upper energy limit for the analysis was defined according to the brightness of the source. Typically, we did not consider data at energies higher than 150–200 keV, where background subtraction uncertainties became unacceptably large. A dead time correction was applied to all observations.
The pointed RXTE observations are summarized in Table 1. The 1997 PCA and HEXTE observations were carried out near outburst maximum and adjacent rise and decline phases. During the observation of Nov 1998 the source was much dimmer, probably at or approaching quiescence.
## 3. Variability
### 3.1. Light curve of the outburst
The 1997 light curve showed a triangular profile, possibly with a short plateau near maximum, similar to that observed from other X-ray transients (e.g. Lochner & Roussel-Dupre (1994); Harmon et al. (1994)). For general light curve morphologies see Chen, Shrader & Livio (1997).
The light curve of the source measured by the ASM in the 2-12 keV energy band is presented in Fig. 1. When approximated by an exponential function, the rise time parameter is $`20\pm 2`$ days and the decay time parameter around 40 days. Flux measurements by the PCA in the 2-30 keV energy band and by HEXTE in the 20-100 keV energy band are shown in the same figure. There is some evidence for a secondary maximum or “kick”, typical for outbursts of X-ray Novae (e.g. Sunyaev et al. (1994); review in Tanaka & Shibazaki (1996)). The light curve of the previous outburst of this source, tracked by the Ginga ASM, had a plateau and a decline with time scales around 60 days (Kitamoto et al. (1990)). The peak flux of the 1997-1998 outburst in the 1-10 keV band extrapolated from PCA measurements was $`1.1\times 10^{-9}\mathrm{erg}\mathrm{s}^{-1}\mathrm{cm}^{-2}`$, which is almost three times lower than the maximum flux detected with Ginga in 1987 (Kitamoto et al. (1990)).
### 3.2. Fast variability
The PCA detected a strong flux variability as illustrated by Fig. 2. The source flux varied by a factor of 2-3 in 10 to 20 s. Such variability is often fit by shot noise models (Terrell (1972)), in which the light curve is made up of randomly occurring discrete and identical events, the “shots”. The approach was further developed (Sutherland, Weisskopf & Kahn (1978)), and has been proven to be quite useful in interpreting the archetypical black hole candidate Cyg X-1 (Lochner, Swank & Szymkowiak (1991)) and other black hole binaries in their low state. We shall discuss below the application of this model to GS 1354–644.

Another method to study fast variability is by means of Fourier techniques (see the detailed discussion in van der Klis (1989)), in particular, by the analysis of the power density spectra (PDS). PDSs measured with the PCA for GS 1354–644 can be qualitatively described as the sum of at least two band-limited components, each of which is a constant below its break frequency, and a power-law with a slope $`\sim -2`$ above its break frequency. PDSs for observations #3 and #9 are shown in Fig. 3. The power spectra have been normalized as squared fractional $`rms`$, according to Belloni & Hasinger (1990), and rebinned logarithmically in frequency. The white-noise level, estimated from Poissonian statistics as modified by dead-time effects, was subtracted. We have plotted PDSs as $`f\times \left(\frac{rms}{mean}\right)^2`$ vs. frequency. As was argued, e.g. by Belloni et al. (1997), this convention has important advantages: a plot gives a direct visual idea of the power distribution, and Lorentzian functions representing band-limited noise are symmetric in a log-log plot. This convention was used only for the plot, while all analytic fits (see below) were performed on the original power spectra.

We fitted the PDS by a sum of functions $`\frac{1}{1+(\frac{f}{f_{br}})^2}`$, each of which represents the power spectrum of a single type of exponential shot with a profile $`s(t)\propto \mathrm{exp}\left(-\frac{t-t_0}{\tau }\right)`$, for $`t>t_0`$, $`\tau =1/(2\pi f_{br})`$. The PDSs for observations #5–9 are fitted satisfactorily by two such components, but for the first four observations a third, intermediate, component should be included for a good representation of the data. The fitting parameters are presented in Table 2. The inferred parameters are very similar to those detected for other black holes (e.g. Nowak et al. 1999a ; Nowak, Wilms & Dove (1999); Grove et al. (1998)) and neutron stars (e.g. Olive et al. 1998a ) in their low state.

The frequency of the first break in the PDS evolved significantly with time: it appeared at its lowest frequency near the outburst maximum, and shifted to higher frequencies with time. The frequency of the last break, however, remained fairly stable. The energy-resolved power density analysis showed that the $`f_{break}`$ frequencies do not vary significantly with energy, similar to what is observed for Cyg X-1 (e.g. Nowak, Wilms & Dove (1999)) and other black hole candidates. The dependence of integrated fractional variability on energy in the full analyzed frequency range ($`10^{-3}`$–50 Hz) is presented in Fig. 4a (in $`rms`$ percent). The decline of integrated $`rms`$ with energy will be discussed in more detail in the next section, and compared with data from other sources in section 5.3.
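As an illustration of this procedure (not the authors' actual pipeline), an rms-normalized PDS and the sum of two such band-limited components could be set up in Python along the following lines; segment averaging, dead-time corrections and error bars are omitted:

```python
import numpy as np
from scipy.optimize import curve_fit

def rms_normalized_pds(rate, dt):
    """PDS of an evenly sampled count-rate series in (rms/mean)^2 per Hz units
    (Belloni & Hasinger 1990 convention); the Poisson white-noise level is
    *not* subtracted here."""
    rate = np.asarray(rate, dtype=float)
    n = len(rate)
    freq = np.fft.rfftfreq(n, dt)[1:]
    power = np.abs(np.fft.rfft(rate - rate.mean()))[1:] ** 2
    power *= 2.0 * dt / (n * rate.mean() ** 2)
    return freq, power

def two_component_model(f, a1, f1, a2, f2):
    """Sum of two zero-centred components a_i / (1 + (f/f_i)^2), i.e. the PDS
    of two families of exponential shots with break frequencies f_i."""
    return a1 / (1.0 + (f / f1) ** 2) + a2 / (1.0 + (f / f2) ** 2)

# usage sketch, once freq/power are available and the white noise is removed:
# popt, pcov = curve_fit(two_component_model, freq, power, p0=(0.1, 0.05, 0.01, 2.5))
```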
### 3.3. Shot noise model
Strong chaotic variability has been detected in many Galactic binaries in their hard/low spectral state. Following the approach developed by Terrell (1972), such variability can be modeled in terms of a shot noise model (e.g. Lochner, Swank & Szymkowiak (1991) and references therein). In the shot noise model, the light curve is assumed to be composed of a number of individual shots or microflares. In principle, different shots might influence each other, and might have various shapes and spectra. However the models are usually simplified to reduce the number of free parameters. We typically assume that there are only a few types of shots, and that all shots of a given type are identical. In this simplified model statistical analyses of the observed light curve allow the determination of the shot parameters. For GS 1354–644, the overall shape of the power density spectrum suggests that the light curve is formed by two or three different types of shots with characteristic times corresponding to the breaks in the PDS, $`\tau =1/(2\pi f_{br})`$ (see Table 2). The power density spectrum does not provide a complete description of the shots, because it is not possible to determine independently the shot rates and their intensities. To obtain additional information we analyzed the flux histogram.

The probability histogram for the flux values integrated into 16-sec bins is presented in Fig. 5. We selected 16-sec time bins to study the long shots. The PDS shows that long shots and short shots give almost equal input into the integrated fractional variability of the source, but the contribution of short shots to the source variability is negligible below about 0.1 Hz. The number of short shots in each 16-sec bin is large enough that their contribution is nearly constant. To avoid the influence of Poissonian counting statistics on the distribution, we have chosen the flux bins to be 2 times wider than the Poissonian error associated with each bin. For high shot rates one would expect that the distribution, according to the central limit theorem, would have a symmetrical Gaussian shape. However, this is not the case for our distribution, which has a detectable deficit at lower fluxes. This shape suggests a low shot rate, and can be fitted by a Poissonian distribution, if the duration of any single shot is substantially shorter than the 16-second bin width. For GS 1354–644 the first PDS component corresponds to time scales of 2-5 seconds, while the flux has been integrated into 16-sec bins.

We have assumed that the total flux from the source is a sum of a constant component, which is stable on the time scale of one observation, and a variable component formed by individual shots. As was mentioned above, short shots appear as part of the constant component, so the flux variations are caused by long shots only. We have fitted the flux density distribution as a sum of constant and variable components by applying the maximum likelihood method, which is preferable to chi-square statistics for low event rates. The best fit for the shot rate was $`0.3`$ shots per second, and the best fit for the constant component was in the range of 50–70% of the total flux (in reality this method only puts an upper limit on the value of the constant component: our analysis of the 0.01-0.1 sec light curve showed that there exist points with a flux $`\sim `$10 times lower than the average source flux, which immediately brings the upper limit on the constant component down to 10% of the average flux).
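The following toy simulation (ours, with made-up parameter values) shows how such a flux distribution can be generated: a constant level plus rare exponential long shots, binned into 16-second fluxes as for the histogram of Fig. 5; the short shots are absorbed into the constant level, as assumed in the text.

```python
import numpy as np

def simulate_shot_lightcurve(duration, dt, const_rate, shot_rate, shot_counts,
                             tau, rng=None):
    """Constant level plus randomly occurring exponential shots, with Poisson noise.

    shot_rate   : mean number of long shots per second (about 0.3/s in the text)
    shot_counts : total counts contributed by one shot, spread over exp(-t/tau)
    """
    rng = np.random.default_rng() if rng is None else rng
    t = np.arange(0.0, duration, dt)
    rate = np.full_like(t, float(const_rate))
    for t0 in rng.uniform(0.0, duration, rng.poisson(shot_rate * duration)):
        m = t >= t0
        rate[m] += shot_counts / tau * np.exp(-(t[m] - t0) / tau)
    counts = rng.poisson(rate * dt)
    k = int(round(16.0 / dt))                    # time bins per 16-second interval
    flux16 = counts[:len(counts) // k * k].reshape(-1, k).sum(axis=1)
    return t, counts, flux16
```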
The signal detected from an individual shot was estimated to be $`\sim `$500 PCA counts for the first observation of 1997, and $`\sim `$200 PCA counts for the last outburst observation (#9). The best fit obtained for observation #4 is presented in Fig. 5. We repeated the analysis for a 64-sec integration time and obtained consistent results.

The low shot rate for long shots shows that the variability of the source on time scales of tens of seconds is caused by relatively rare powerful flares. Unfortunately, we were not able to study the contribution of short shots by the same method, because the flux distribution for time scales of tenths of a second depends strongly on both the short and long shots. However, the integrated $`rms`$ variability of the short shots, combined with some simple constraints on the shot amplitude, allows us to estimate the shot overlapping parameter (see Vikhlinin et al. (1995) for the method description), and consequently derive the shot rate. The short shot rate can be estimated to be $`\sim `$10-15 shots/s.

The physical origin of the shots is not very clear. For GS 1354–644, as well as for other black hole systems in their low state, the break frequencies detected in the PDS ($`\sim `$1 Hz) are much lower than would be expected for the environment close to a stellar mass black hole ($`\sim 10^2`$ Hz). The shape of the energy spectrum, which is dominated by the Comptonized emission component, moves us to explore whether the Comptonization might be responsible for the time blurring of intrinsically short shots. (This could be the case if the product of the Compton optical depth and the light crossing time is of order one second.) The detected dependence of fractional variability on energy (Fig. 4a) at first glance seems to support this interpretation, because photons of higher energy undergo on average more interactions and so must have a wider distribution in time and lower integral variability. However, the fractional variability integrated up to the lowest break in the PDS should not be affected by this mechanism and should therefore be independent of the energy. We tried to test this assumption for GS 1354–644, but found that the slope of the dependence of $`rms`$ variation vs energy for the frequencies below 0.02 Hz (see Fig. 4b) cannot be defined accurately. To get a more definitive answer we had to repeat our analysis for a much brighter source, Cyg X–1, which in many ways resembles GS 1354–644. For Cyg X–1 we can clearly see that the fractional variability decreases with energy both when integrated up to 50 Hz, and when integrated for the frequency range below 0.2 Hz. This last result is in direct contradiction with the assumption that Compton up-scattering is responsible for the low-state PDS break frequencies.
### 3.4. Time lags in GS 1354-644
Another way to study rapid fluctuations in the flux is to compute time lags between variations in different energy bands. We calculated frequency-dependent phase lags according to the procedure of Nowak et al. (1999a). Due to the faintness of the source we were forced to sum almost all of the data available – from observations #2 through #9 – to obtain a significant result. The total integrated live time for these PCA observations was $`48`$ ksec. Time lags are presented in Fig. 6. The error values were estimated from the width of the lag distribution histogram. One can see that, at least qualitatively, the dependence of phase lag on Fourier frequency is very similar to low states in other X-ray binaries, both in black holes and neutron stars (e.g. Nowak et al. 1999a, Grove et al. 1999, Ford et al. 1999).
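A schematic version of such a cross-spectral lag computation (our simplified code; the published analysis follows Nowak et al. 1999a and includes error estimates that are skipped here):

```python
import numpy as np

def phase_lags(soft, hard, dt, n_per_seg=4096):
    """Frequency-dependent phase lag of the hard band relative to the soft band.

    The cross spectrum C(f) = S*(f) H(f) is averaged over segments; its argument
    is the phase lag, and phase/(2*pi*f) the corresponding time lag.
    """
    soft = np.asarray(soft, dtype=float)
    hard = np.asarray(hard, dtype=float)
    n_seg = len(soft) // n_per_seg
    freq = np.fft.rfftfreq(n_per_seg, dt)[1:]
    cross = np.zeros(len(freq), dtype=complex)
    for k in range(n_seg):
        s = soft[k * n_per_seg:(k + 1) * n_per_seg]
        h = hard[k * n_per_seg:(k + 1) * n_per_seg]
        cross += np.conj(np.fft.rfft(s - s.mean())[1:]) * np.fft.rfft(h - h.mean())[1:]
    phase = np.angle(cross)
    return freq, phase, phase / (2.0 * np.pi * freq)
```

With this sign convention a positive phase means that variations in the hard band lag those in the soft band.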
## 4. Energy spectrum
The energy spectrum of GS 1354–644 during its 1997 outburst can be roughly represented as a hard power law with slope $`\alpha \sim 1.5`$, and a high-energy cutoff above $`\sim `$50 keV. Such spectra are typical for black hole binaries in the low/hard spectral state. A commonly accepted mechanism for generating the hard radiation in this state is thermal Comptonization (Shapiro, Lightman & Eardley (1976); Sunyaev & Titarchuk (1980)). Applying a thermal Comptonization model to the hard-state BH spectra, one infers a hot cloud surrounding the central object (BH) with typical plasma temperature $`\sim `$50 keV and a Thomson optical depth $`\sim `$1. Such a cloud can Comptonize the soft photons from the central region, most likely from an accretion disk.

We fitted the spectrum with a variety of models, ranging from a simple power law to detailed Comptonization, in search of the underlying physics. The results are presented in Tables 3 and 4. We present results for two groups of observations (2-5 and 6-9), because the spectral parameters did not differ significantly within each group, but summing the groups allows more accurate parameter estimates. The spectra for observations 2-5 and also for observation #10 are presented in Fig. 7. The y-axis is in units of the photon flux multiplied by the energy squared. These units show directly the energy content per decade. The Crab Nebula spectrum plotted the same way would be close to flat, and Galactic black hole binaries typically show a negative slope in their high state, and a positive slope (up to $`\sim `$100 keV) in their low state.

When using the same models for the PCA and HEXTE data considered jointly we found that cross-calibration uncertainties of these instruments played a significant role. Namely, any power law (with high energy cutoff) approximation gave noticeably different photon index values, even if the same energy bands were used. So we used power laws with photon indexes differing by 0.08-0.1 for the PCA and HEXTE spectra (we followed here an approach by Wilms et al. (1999)). In Table 3 we present the PCA photon indices. The fits show a noticeable high energy cut-off at energies above $`\sim `$60 keV. While different models give different cut-off parameters, no fit without a cut-off was satisfactory.

In fact, the spectrum cannot be fully described by a simple power law, with or without a high energy cut-off. The spectral fit is improved significantly by adding a neutral iron fluorescent line and a reflection component, which indicates that some part of the emission is reflected by optically thick cold material, most probably in the outer accretion disk (Basko, Sunyaev & Titarchuk (1974); George & Fabian (1991); Magdziarz & Zdziarski (1995)). This component was represented by the $`pexrav`$ model of the $`XSPEC`$ package, which simulates reflection from a neutral medium. For Comptonization models, we applied the classical $`compST`$ model of Sunyaev & Titarchuk (1980), and the more recent generalized $`compTT`$ model (Titarchuk (1994)). For both cases the parameter $`E_{cut}`$, the cutoff energy for the $`pexrav`$ model (reflected component), has been frozen at the value $`3kT_e`$, where $`kT_e`$ is the temperature of the Comptonizing hot electrons. The value of the optical depth parameter depends on the assumed geometry - for $`compTT`$ we cite the $`\tau `$ parameter for both spherical and disk geometries.
It is well known that the cut-off at energies higher than $`3kT_e`$ is more abrupt in the spectrum of Comptonized emission than a simple exponential cut-off, but we were not able to detect a statistically significant difference between these two cut-off models because of the faintness of the source. A Gaussian component with the central energy 6.4 keV and FWHM equal to 0.1 keV (frozen at these values) was added to account for emission at the iron line. During the observation of 17 November 1998 (# 10) the PCA measured a very low flux. The spectrum was found to be softer, with a power law photon index $`\sim `$2 in comparison with $`\sim `$1.5 the year before. This observation might represent the quiescent state of GS 1354–644 or the transition to quiescence.
### 4.1. Spectral change analysis
To study spectral changes in more detail, we analyzed raw spectral ratios. This method is based on the idea that, if one divides one spectrum by another, subtle differences between the two are evident in the ratio. We applied this method to spectra obtained during different PCA observations and also to spectra collected within each observation, segregated by flux level. In all cases, the spectral ratios could be approximated by a single power law $`E^\beta `$ in spite of the complexity of the initial spectra. We have used the power law index $`\beta `$ for quantitative comparisons of spectral ratios. We labeled this index as $`\beta _1`$ when the method was applied to different observations, and $`\beta _2`$ when we compared high-flux and low-flux spectra for the same observation. In both cases we call $`\beta `$ the spectral ratio slope or differential slope.

For the separate observations, we divided each spectrum by the spectrum of the observation with maximum flux (#4). This method allowed us to exclude from the analysis uncertainties in the PCA response matrix, and hence to study fine differences between spectra. The drift of detector parameters, however, can contribute a significant systematic error, if the interval between observations is too long. To estimate the level of such systematic errors, we applied the same technique to Crab spectra taken before, during and after the outburst of GS 1354–644. The apparent changes in the Crab spectral ratio caused by drifting detector response are much smaller than the changes seen in GS 1354–644 (Fig. 8). The X-ray spectrum of GS 1354–644 became softer with time, then suddenly harder for observation #9. There is no clear correlation with flux level. Instead, observations at the same flux level taken during rise and decline had significantly different spectral ratios (see differential slopes $`\beta _1`$ in Table 5).

The same technique was applied to study the relation between spectrum and flux within each observation. In this case the data were segregated according to the total count rate averaged into 16-sec bins. The range from minimum to maximum flux was divided into three equal parts, and two spectra for “high” and “low” fluxes were obtained. Thanks to the large chaotic variations in the flux, the “high” and “low” average levels differed by a factor of 1.5–2. Finally, for each observation the high-flux spectrum was divided by the low-flux spectrum to obtain a spectral ratio. A typical high-flux/low-flux spectral ratio is shown in Fig. 9 (from observation #4). A negative differential slope was obtained for all other observations within the outburst. It is evident from Table 5 that while spectral ratio slopes $`\beta _1`$ for separate sessions are not correlated with flux, high-to-low spectral ratios of a single observation always have slopes $`\beta _2<`$ 0. For comparison, we present in Fig. 9 the spectral ratio for Cyg X-1 calculated with the same technique. The effect is qualitatively the same for Cyg X-1 as for GS 1354–644, with a differential slope $`\beta _2=(3.9\pm 0.4)\times 10^{-2}`$ (observation 26 June 1997).

We were concerned that selection effects might bias our estimates of $`\beta _2`$. Because most counts fall in the soft part of the spectrum, one might get softer spectra simply by selecting for high flux, because we would be selecting for random excesses in the soft band. To check the importance of this effect, we changed our selection criteria and re-filtered the data according to the flux levels above 10 keV.
The results remained the same, indicating that the observed correlation of higher fluxes with softer spectrum was much stronger than any bias introduced by the selection criteria. The systematic difference between high and low flux spectra can be interpreted in a variety of ways. The softening may be due to an increase in the relative contribution of shots that have a softer intrinsic spectrum than the constant flux. The constant flux, in turn, may be formed by the sum of shorter shots with harder spectra (this possibility is discussed, e.g. by Revnivtsev, Gilfanov & Churazov (1999) and Gilfanov, Churazov & Revnivtsev (1999)). Another interpretation might be that the low energy shot emission changes the temperature of the hot plasma cloud responsible for Comptonization. Alternatively it might be suggested that the shot spectrum evolves with time and is softest near the maximum of the shot. Other explanations can not be excluded either. Our spectral ratio analysis is quite sensitive to subtle changes in parameters of the system. We see that, in the course of the 1997 outburst, the spectrum softens steadily (except for observation #9). This implies that, in this case, the spectrum was not directly correlated with the flux level (or luminosity). We detected a hardness-flux anti-correlation for short-term flux variations within each observation, but not when we compared spectra for different observations. The last observation of the 1997 outburst, which breaks the overall pattern, likely corresponds to the secondary maximum or ’kick’ in the light curve. The observation #10, performed a year after the maximum of 1997 outburst, showed a significantly softer spectrum with flux more than two orders of magnitude below the peak value.
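A minimal sketch of the spectral-ratio measurement itself (illustrative; channel grouping, background subtraction and error propagation are left out):

```python
import numpy as np

def spectral_ratio_slope(energy, counts_a, counts_b):
    """Raw spectral ratio counts_a/counts_b fitted by a single power law E**beta.

    energy   : channel energies [keV], e.g. restricted to the 3-20 keV PCA band
    counts_a : count spectrum of the selection of interest (e.g. high-flux bins)
    counts_b : reference count spectrum (e.g. low-flux bins, or observation #4)
    Both spectra must be positive in every channel used for the fit.
    """
    energy = np.asarray(energy, dtype=float)
    ratio = np.asarray(counts_a, dtype=float) / np.asarray(counts_b, dtype=float)
    beta, _ = np.polyfit(np.log(energy), np.log(ratio), 1)
    return beta, ratio
```

Because both spectra pass through the same detector response, most response-matrix uncertainties cancel in the ratio, which is what makes this simple power-law fit sensitive to subtle spectral changes.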
## 5. Discussion
### 5.1. Spectral state
The high and low spectral states first identified in Cyg X-1 (Tananbaum et al. (1972)) have since been observed in a number of X-ray binaries. The “high” and “low” terminology was originally chosen based on the 2-10 keV X-ray flux. It was later found that the low state corresponds to a hard spectrum, and the high state to a soft spectrum. A typical high state spectrum is the sum of a soft thermal component and a hard power-law tail. The low state spectrum is approximately power-law with an exponential cut-off at energies above $`\sim `$100 keV. More detailed descriptions of the spectral states of X-ray binaries can be found elsewhere (e.g. Tanaka & Shibazaki (1996)). Studies of the aperiodic time variability in black hole candidates provided another dimension to this phenomenology. In particular, a third very-high state has been recognized (Miyamoto et al. 1991, 1994; Ebisawa et al. (1994)). This state is characterized by a 3-10 Hz QPO peak, plus either band-limited noise or a weaker power law noise component (Belloni et al. (1997)).

The hard power-law-like energy spectrum with a slope $`\sim `$1.5 that was detected from GS 1354–644 during its 1997 outburst is a clear indication of the low/hard spectral state. The character of the rapid variability, the absence of QPOs, and the power spectrum of GS 1354–644 are similar to those of other black hole candidates in this state (Miyamoto et al. (1992); Nowak et al. 1999a ; Nowak, Wilms & Dove (1999); Grove et al. (1998)), and provide additional proof for such a characterization. A similar time variability is manifested by neutron star binaries in their low state (Olive et al. 1998a ; Ford et al. (1999)). However, black hole systems in their other - high or very high - states have distinctly different power spectra (van der Klis (1995); Belloni et al. (1997)).

GS 1354–644 was detected with Ginga in 1987 in its high/soft state (Kitamoto et al. (1990)). But in 1997-1998 the same source was in a low/hard state. This confirms the earlier identification of the source as a black hole candidate and also demonstrates that both the high/soft and low/hard states are generic for this group of X-ray sources. In fact, all the properties of GS 1354–644 that we revealed in our analysis are similar to those of other black hole candidates, both persistent and transient.
### 5.2. Geometry of the system
The energy spectrum of GS 1354–644 supports a model of low energy photons that are Comptonized in a hot plasma cloud. An accretion disk reveals itself via fluorescent iron line emission detected at $`\sim `$6.4 keV and a reflected continuum detected at 15-20 keV. The equivalent line width and relative intensity of the reflected component are both consistent with reflection of harder X-ray emission from a cold plasma, which, in the commonly accepted model, is likely to be an optically thick accretion disk around the compact object. It is straightforward to assume that the X-ray source includes a cool disk plus a hot optically thick corona, which Comptonizes the soft photons up to tens of keV. The exact geometrical configuration of the system remains uncertain.

This disk/corona combination is the most popular model for interpreting Galactic black holes in their low/hard spectral state. A slab geometry, where the accretion disk is sandwiched between two flat corona layers, was widely discussed several years ago (Haardt & Maraschi (1991), Haardt (1993)), but cannot fit the spectra in a self-consistent manner (see e.g. Dove, Wilms & Begelman (1997)). Other investigators have suggested a spherical hot corona (Kazanas, Hua & Titarchuk (1997)) or an advection-dominated accretion flow (Narayan (1996)) at radii smaller than the inner edge of the thick accretion disk. Seed soft photons could be generated in the accretion disk (Dove, Wilms & Begelman (1997), Dove et al. (1998)) or inside the hot cloud (suggested by Narayan (1996) to be synchrotron/cyclotron photons). The former models invoke external illumination of the up-scattering region, while the latter invoke a source embedded inside the hot plasma.

The optical depth $`\tau \sim `$4–5 inferred from our spectral fits (for the $`compST`$ and $`compTT`$ models in a spherical geometry) depends both on the geometry and on the model, and hence cannot be considered as a direct measurement of the optical depth. However, the value of $`\tau `$ indicates that the source of seed photons is most likely inside the hot cloud. In the case of external illumination of the Comptonization region by soft photons originating from the accretion disk, the optical depth parameter would be expected to be $`\sim `$1 even for high intrinsic values of $`\tau `$ in the hot cloud. A partial overlap of the disk by the corona may solve the puzzle, but the physical basis for such a configuration has yet to be explored.

If the characteristic frequency of the Compton cloud is the PDS break frequency of about 2–3 Hz, then the cloud should be huge, which causes severe problems with energy balance (see e.g. Nowak et al. 1999a ). More likely, the breaks in the PDS are determined by the intrinsic duration of the seed shots. They might also be related to the geometrical parameters of the system, such as the radius of the accretion disk, and to typical times, such as the plasma diffusion time. In the PDS of GS 1354–644 the first break frequency is anti-correlated with the flux, perhaps reflecting changes in some of the system parameters. For example, the inner edge of the disk is probably moving closer to the compact object when the mass accretion rate grows and the luminosity increases. Simultaneously the break frequency is decreasing, which might mean that the shots become longer.
### 5.3. Dependence of fractional variability on energy
Fig. 4 shows that the integrated fractional variability of GS 1354–644 clearly decreases with energy. This decrease can be approximated by a power law $`rms\propto E^{-0.07}`$. Similar dependences have been found for Nova Per 1992 (Vikhlinin et al. (1995)), Cyg X-1 (Nowak et al. 1999a ), and GX 339–4 (Nowak, Wilms & Dove (1999)), all Galactic black hole binaries, when observed in the low/hard state. In contrast, an increase of fractional variability with energy has been detected for X-ray bursters – e.g. 1E1724-3045 (Olive et al. 1998a ), 4U1608-522 (Yu et al. (1997)) – in their low/hard state. This apparent difference between low-state black hole and neutron star systems is remarkable, because these systems are otherwise so similar when in their low/hard state.

This fact motivated us to perform our own survey of publicly available RXTE data. The X-ray bursters 4U1705-44, SLX 1735-269, SAX J1808-3659, 4U1728-34 (GX 354-0) and 4U0614+091, together with the aforementioned systems 1E1724-3045 and 4U1608-522, were included in our analysis. Power-law fits to the relation between $`rms`$ variation and energy are presented in Table 6. All observations were taken when the objects were in their low/hard state. For GX 339-4 and Terzan 2, the data were taken from Nowak, Wilms & Dove (1999) and Olive et al. 1998a respectively. Although a simple power law was in some cases a poor fit, we used it anyway to quantify the tendency (whether the $`rms`$ variation decreased, increased or stayed constant with energy). Remarkably, the slope is negative for all black hole binaries and positive for all neutron star binaries.

The strongest correlation of fractional variability with energy is at energies below $`\sim `$15 keV. In fact, the fractional variability has a broad maximum at energies 10-20 keV for many X-ray bursters, whereas for black holes the maximum is at the lowest observed energy. This might indicate that the similar spectra in black hole and neutron star systems have fundamentally different origins. Whatever the physical reason, the difference is quite remarkable because of the substantial similarity in the other properties of black hole and neutron star systems in the low/hard state (e.g. Berger & van der Klis (1998); Revnivtsev et al. (1998); Olive et al. 1998a ). If confirmed, this dependence may become an important new tool for deriving the type of compact object in Galactic binaries from X-ray observations.
## 6. Conclusions
We analyzed observations of the recurrent X-ray transient GS 1354-644 by the RXTE satellite. The observations were made during a modest outburst of the source in 1997-1998. The overall light curve was triangular with a possible plateau at the maximum. PCA/HEXTE observations were carried out during both the rise and the decay phase of the outburst.

The dramatic fast variability was studied in terms of a shot noise model. The power density spectrum can be approximated by the sum of 2 or 3 components, each corresponding to a specific type of shot. The most prominent components peak around 0.02–0.09 Hz and 2.3–2.9 Hz respectively. For several observations a third, intermediate component is statistically significant. Our flux density distribution analysis showed that the longest shots occur at a rate of 0.3 $`s^{-1}`$, and contribute 30–50% of the total flux. The short shots are more frequent ($`\sim `$10–15 shots/s) and proportionally weaker, although their contribution to the total flux is comparable to the longer shots. In general, the rapid time variability of the source X-ray flux is very similar to the low/hard state of other Galactic black hole systems, such as Cyg X-1, Nova Persei 1992, and GX 339-4.

The spectrum obtained by the PCA and HEXTE is clearly the hard/low state spectrum observed in many Galactic black hole binaries. The overall power-law-with-high-energy-cutoff shape can be approximated by Comptonization models based on up-scattering of soft photons on energetic electrons in a hot plasma cloud. In order to fit the data, an additional component describing a spectrum reflected from cold material with a strong iron fluorescent line must be included. Both the equivalent width of the line and the intensity of the reflected component are consistent with the assumption of the reflection of hard X-ray emission from a cold, optically thick accretion disk. To examine finer spectral changes we analyzed ratios of the raw spectra. This technique demonstrated an overall softening of the spectrum during the outburst, except for the last observation, which was obtained during the secondary maximum. At shorter time scales, we detect a softening of the spectrum at higher flux levels.

An anti-correlation of fractional variability with energy is typical for Galactic black holes in their low spectral state (e.g. Nowak et al. 1999a , Vikhlinin et al. (1995), and this work), but a positive correlation is typical for neutron star systems in their low state (e.g. Olive et al. 1998a ). Our analysis using RXTE archival data for several sources confirmed this difference between black hole and neutron star binaries. This difference can be very useful for segregating neutron star binaries from black hole systems, which is otherwise difficult with X-ray data only.

The research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center. MR is thankful to Dr. M. Gilfanov for helpful discussions. KB is glad to acknowledge valuable comments by Prof. L. Titarchuk and Dr. S. Brumby. We are grateful to an anonymous referee for his/her careful comments on the manuscript, which helped us to improve the paper significantly.
# Signatures of Energetic Protons in Hot Accretion Flows: Synchrotron Cooling of Protons in Strongly Magnetized Pulsars
## 1 Introduction
It has recently been suggested that the accretion flows around black holes and neutron stars are very hot and two-temperature when their luminosities are low and their spectra are hard (e.g. Narayan & Yi 1995, Rees et al. 1982 and references therein). In such flows, often called advection-dominated accretion flows (ADAFs), most of the viscously dissipated energy is used to heat ions and only a small fraction of the total energy is radiated by electron cooling processes. X-ray and gamma-ray emission properties in Galactic X-ray transients (e.g. Narayan et al. 1998b for a review) and galactic nuclei (e.g. Yi & Boughn 1998 and references therein), including Sgr A* (Manmoto et al. 1997, Narayan et al. 1998a), have been successfully modeled by these hot, two-temperature flows. The apparent success of these models relies critically on the existence of the two-temperature plasma, in which the ion temperature is essentially the virial temperature and hence much higher than the electron temperature (Narayan & Yi 1995). Unless there is an efficient electron-ion energy exchange mechanism, Coulomb exchange alone typically leads to the two-temperature condition (Narayan et al. 1998b).
Although the spectral signatures seen in various radiation components due to electrons at temperatures $`10^9`$–$`10^{10}K`$ are quite plausible, the electron spectral components alone cannot prove the uniqueness of the spectral fits in the X-ray systems studied so far. Any direct observational signature due to energetic protons would be extremely useful to confirm the presence of the protons and hence the presence of the two-temperature accretion flows. However, direct proton radiation signatures are difficult to observe since the radiation efficiencies of proton-related radiation processes are usually very low (Mahadevan 1998). There have been some recent suggestions that energetic ions could provide observable signatures through phenomena such as pion production (Mahadevan et al. 1997, Mahadevan 1998) and nuclear spallation (Yi & Narayan 1997). It is, however, unclear whether the suggested possibilities can be established as unambiguous signatures of the energetic protons in the two-temperature plasma (Yi & Narayan 1997). It is therefore interesting to see whether there are any other signatures of two-temperature accretion flows in which protons have relativistic energies. In this paper, we point out that there could be such a signature produced by proton synchrotron emission in the strong magnetic fields around neutron stars. Since electrons are rapidly cooled near the neutron star by soft photons from the stellar surface, the electron synchrotron signature is not expected (Yi & Narayan 1995).
Within the accretion flow, since the proton gyroradius $`3\gamma _2B_8^{-1}cm`$ is much smaller than the length scale of the accretion flow, the protons are tightly bound to the flow; here $`B_8=B/10^8G`$ is the magnetic field strength and $`\gamma _2=\gamma /10^2`$ is the Lorentz factor of the relativistic protons. The characteristic synchrotron loss time scale is $`t_{sync}\approx 5\gamma _2^{-1}B_8^{-2}s`$. The typical accretion time scale in the hot ADAFs is $`t_{acc}\approx 3\times 10^{-5}\alpha ^{-1}mr^{1/2}s`$, where $`m=M/M_{\odot }`$ is the stellar mass, $`r=R/R_{Sch}`$ is the radius from the star, and $`R_{Sch}=2.95\times 10^5m\ cm`$ is the Schwarzschild radius (e.g. Yi & Narayan 1995, Rees et al. 1982). The energetic protons could transfer their energy to electrons on the electron-ion Coulomb exchange time scale $`t_{ie}\approx 9\times 10^{-5}\theta _e^{3/2}\alpha m\dot{m}^{-1}r^{3/2}s`$, where $`\dot{m}=\dot{M}/\dot{M}_{Edd}`$ is the dimensionless accretion rate, $`\dot{M}_{Edd}=1.39\times 10^{18}m\ g/s`$ is the Eddington accretion rate, $`\theta _e=kT_e/m_ec^2<`$ a few is the dimensionless electron temperature, and $`\alpha \approx 0.1`$ is the dimensionless viscosity parameter (e.g. Frank et al. 1992). If the magnetic field has the equipartition strength, $`t_{sync}/t_{acc}\approx 3\times 10^3\gamma _2^{-1}\alpha ^2\dot{m}^{-1}r^2`$ and $`t_{sync}/t_{ie}\approx 9\times 10^2\gamma _2^{-1}\theta _e^{-3/2}r`$, which indicates that, as expected, only a very small fraction of the proton energy can be radiated by proton synchrotron emission. However, if there exists a strong external magnetic field such as that around a pulsar, the field strength is $`B_8\approx 4\times 10^5\mu _{30}m^{-3}r^{-3}`$, where $`\mu _{30}=\mu /10^{30}Gcm^3`$ is the magnetic moment of the star (Frank et al. 1992). In this case, $`t_{sync}<t_{acc}`$ occurs at $`r<20\gamma _2^{2/11}\mu _{30}^{4/11}\alpha _1^{-2/11}m^{-10/11}`$ and $`t_{sync}<t_{ie}`$ occurs when $`r<50\gamma _2^{2/9}\theta _e^{1/3}\mu _{30}^{4/9}\alpha _1^{2/9}m^{-10/9}\dot{m}_2^{-2/9}`$, where $`\alpha _1=\alpha /0.1`$ and $`\dot{m}_2=\dot{m}/10^{-2}`$. Therefore, in the region close to the neutron star, proton synchrotron cooling could be a significant channel for proton cooling. We suggest an observational signature based on this possible proton synchrotron cooling near strongly magnetized stars. The optical emission near $`5500\AA `$ from 4U 1626-67 (Chakrabarty 1998) appears to be interestingly close to the predicted synchrotron emission.
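As a rough numerical illustration of these time-scale arguments, the sketch below tabulates $`t_{sync}`$ and $`t_{acc}`$ for the two field geometries, using the scalings and numerical coefficients quoted above; the parameter values at the end are illustrative.

```python
# Rough evaluation of the time scales quoted above (all in seconds), using the
# scalings given in the text; gamma_2 = gamma/100, B_8 = B/1e8 G, mdot = Mdot/Mdot_Edd.
import numpy as np

def t_sync(gamma_2, B_8):                 # proton synchrotron cooling time
    return 5.0 * gamma_2**-1 * B_8**-2

def t_acc(alpha, m, r):                   # accretion (inflow) time of the hot flow
    return 3e-5 / alpha * m * np.sqrt(r)

def B8_equip(alpha, m, mdot, r):          # equipartition field of the hot flow, in 1e8 G
    return 8.0 * alpha**-0.5 * m**-0.5 * mdot**0.5 * r**-1.25

def B8_dipole(mu_30, m, r):               # stellar dipole field, in 1e8 G
    return 4e5 * mu_30 * m**-3 * r**-3

alpha, m, mdot, mu_30, gamma_2 = 0.1, 1.4, 0.01, 1.0, 1.0
for r in (3.0, 10.0, 30.0, 100.0):
    ts_eq  = t_sync(gamma_2, B8_equip(alpha, m, mdot, r))
    ts_dip = t_sync(gamma_2, B8_dipole(mu_30, m, r))
    print(f"r={r:6.1f}  t_acc={t_acc(alpha, m, r):.2e}  "
          f"t_sync(equip)={ts_eq:.2e}  t_sync(dipole)={ts_dip:.2e}")
# Near the star the dipole-field synchrotron time drops below the accretion time,
# which is the regime where proton synchrotron cooling becomes significant.
```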
## 2 High Energy Protons and Proton Synchrotron
We assume that the hot accretion flow contains an equipartition strength magnetic field (i.e. the gas pressure is equal to the magnetic pressure). The relevant physical quantities are the equipartition magnetic field $`B\approx 8\times 10^8\alpha ^{-1/2}m^{-1/2}\dot{m}^{1/2}r^{-5/4}G`$ (e.g. Yi & Narayan 1997), the ion temperature $`T_i\approx 1\times 10^{12}r^{-1}K`$, and the proton number density $`n_p\approx 6\times 10^{19}\alpha ^{-1}m^{-1}\dot{m}r^{-3/2}cm^{-3}`$. Although $`T_i`$ and $`n_p`$ quite plausibly represent the mean energy and density of the protons, they do not constrain the possible non-thermal relativistic proton population which could coexist with the non-relativistic thermal protons.
The energy spectrum of the protons in the hot accretion flows is not well understood. Since there is no clear thermalization process operating on the protons, if they are energized by non-thermal processes their energy distribution could remain non-thermal throughout accretion (Narayan et al. 1998b). Non-thermal, power-law energy distributions for relativistic protons have recently been motivated by possible gamma-ray emission and low frequency radio emission signatures from Sgr A* (Mahadevan et al. 1997, Mahadevan 1998). We take the number density of protons $`n_p=\int n(\gamma ,\theta _p)d\gamma `$ where $`\theta _p=kT_i/m_pc^2`$. If a fraction $`f`$ of the protons is in the non-thermal power-law tail while $`1-f`$ is in the thermal, non-relativistic ($`\gamma \approx 1`$) population, then $`f=3(s-2)\theta _p/2`$ if the mean energy of the protons is $`kT_i`$ and the power-law distribution scales as $`\gamma ^{-s}`$. We have assumed that the thermal, non-relativistic protons with $`\gamma \approx 1`$ exist as a separate proton population. Since $`\theta _p\approx 0.1r^{-1}`$, $`f\approx 0.2(s-2)r^{-1}`$. Of the power-law protons, those with $`\gamma >10`$ are energetic enough for synchrotron emission. For $`s=2.5`$ (e.g. Mahadevan et al. 1997), at $`r\approx 1`$, $`f\approx 0.1`$ and the fraction of protons with $`\gamma \gtrsim 10`$ is $`\approx 3\times 10^{-2}`$. At $`r\approx 10^2`$, $`f\approx 10^{-3}`$. Therefore, the fraction of relativistic protons with $`\gamma >10`$, $`ϵ`$, is likely to lie in the range $`3\times 10^{-5}`$–$`3\times 10^{-4}`$, so $`ϵ_4=ϵ/10^{-4}\approx 1`$ could well represent the fraction of relativistic non-thermal protons relevant for synchrotron emission (cf. Mahadevan 1998).
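A short numerical sketch of this bookkeeping, assuming (as above) a power-law tail starting near $`\gamma \approx 1`$ with index $`s`$, is given below; the chosen radii are illustrative.

```python
# Sketch of the non-thermal proton fraction estimates in the text: f is the fraction of
# protons in the power-law tail, eps the fraction with gamma > 10; a tail N(gamma) ~ gamma**-s
# starting at gamma ~ 1 is assumed here.
theta_p = lambda r: 0.1 / r                           # kT_i / (m_p c^2)
f       = lambda r, s: 1.5 * (s - 2.0) * theta_p(r)   # f = 3(s-2) theta_p / 2

def eps(r, s, gamma_min=10.0):
    # fraction of all protons with gamma > gamma_min, for N(gamma) ~ gamma**-s above gamma ~ 1
    return f(r, s) * gamma_min**(-(s - 1.0))

s = 2.5
for r in (1.0, 10.0, 100.0):
    print(f"r={r:6.1f}  f={f(r, s):.1e}  eps(gamma>10)={eps(r, s):.1e}")
# This reproduces the order-of-magnitude range eps ~ 1e-5 - 1e-3 discussed in the text,
# i.e. eps_4 = eps/1e-4 of order unity.
```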
The single particle synchrotron energy loss rate is (Lang 1980) $`\left|dE/dt\right|=3\times 10^{-2}\gamma _2^2B_8^2\ erg/s`$. Using the effective number density of the relativistic protons $`ϵn_p`$, the total synchrotron energy loss rate in the entire hot accretion flow is estimated as
$$L_{sync}\approx \int dR\,4\pi R^2\left|dE/dt\right|ϵn_p\approx 4\times 10^{33}ϵ_4\gamma _2^2\alpha ^{-2}m\dot{m}^2\ erg/s$$
(2-1)
where we have assumed that the magnetic field is the internal equipartition field. (A more detailed luminosity estimate involving integration over $`\gamma `$ follows below.) Since the hot accretion flows are likely to exist only up to $`\dot{m}\lesssim 0.3\alpha ^2`$ (Narayan & Yi 1995), the maximum synchrotron power is $`L_{sync,max}\approx 3\times 10^{30}ϵ_4\gamma _2^2\alpha _1^2m\ erg/s`$, which is a small fraction of the total viscously dissipated energy.
On the other hand, if the magnetic field is the dipole-type field of the neutron star, $`B_8=4\times 10^5\mu _{30}m^{-3}r^{-3}`$ (Frank et al. 1992), then $`L_{sync}\approx 1\times 10^{39}ϵ_4\gamma _2^2\mu _{30}^2\alpha ^{-1}m^{-4}\dot{m}\ erg/s`$. For the maximum accretion rate $`\dot{m}\approx 0.3\alpha ^2`$ of the two-temperature accretion flows (Yi & Narayan 1995), $`L_{sync,max}\approx 3\times 10^{37}ϵ_4\gamma _2^2\mu _{30}^2\alpha _1m^{-4}\ erg/s`$. The external stellar field is more important for synchrotron cooling inside the radius $`r_s\approx 4\times 10^2\mu _{30}^{4/7}\alpha _1^{2/7}m^{-10/7}\dot{m}_2^{-2/7}`$, which should be compared with the magnetospheric radius $`r_o\approx 1\times 10^3\mu _{30}^{4/7}m^{-4/7}\dot{m}_2^{-2/7}`$ (Frank et al. 1992, Yi et al. 1997, Yi & Wheeler 1998). Therefore, the emission inside the magnetospheric radius could be dominated by the stellar field. If the accretion flow is cooled rapidly inside the magnetospheric radius, the synchrotron luminosity could be entirely due to the accretion flow present at $`r>r_o`$, which is estimated to be $`L_{sync}\approx 2\times 10^{30}ϵ_4\gamma _2^2\mu _{30}^2\alpha _1^{-1}m^{-4}\dot{m}_2\ erg/s`$. This is comparable to the synchrotron luminosity due to the internal equipartition field, $`L_{sync}\approx 4\times 10^{31}ϵ_4\gamma _2^2\alpha _1^{-2}m\dot{m}_2^2\ erg/s`$. Since the accretion flow inside the magnetospheric radius is adiabatically heated, much like a spherical accretion flow, we assume that it retains a fraction $`ϵ\approx 10^{-4}`$ of energetic, relativistic protons until it hits the surface of the neutron star.
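The two luminosity estimates can be compared numerically; the sketch below simply evaluates the scalings quoted above for illustrative parameters (a 1.4 $`M_{\odot }`$ star, $`\alpha =0.1`$, $`\dot{m}=0.01`$, $`\mu _{30}=2`$).

```python
# Order-of-magnitude synchrotron luminosities from the scalings quoted above
# (all luminosities in erg/s); eps_4 = eps/1e-4.
def L_sync_equip(eps_4, gamma_2, alpha, m, mdot):
    # internal equipartition field, eq. (2-1)
    return 4e33 * eps_4 * gamma_2**2 * alpha**-2 * m * mdot**2

def L_sync_dipole(eps_4, gamma_2, mu_30, alpha, m, mdot):
    # stellar dipole field dominating the inner flow
    return 1e39 * eps_4 * gamma_2**2 * mu_30**2 * alpha**-1 * m**-4 * mdot

eps_4, gamma_2, alpha, m, mdot, mu_30 = 1.0, 1.0, 0.1, 1.4, 0.01, 2.0
print(f"L_sync (equipartition field) ~ {L_sync_equip(eps_4, gamma_2, alpha, m, mdot):.1e} erg/s")
print(f"L_sync (stellar dipole field) ~ {L_sync_dipole(eps_4, gamma_2, mu_30, alpha, m, mdot):.1e} erg/s")
# The dipole-field case is larger by several orders of magnitude, which is why
# strongly magnetized neutron stars are the promising targets.
```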
The synchrotron power at frequency $`\nu `$ for a single proton with the Lorentz factor $`\gamma `$ is (e.g. Lang 1980)
$$P(\nu ,\gamma )=\frac{3^{1/2}e^3B\mathrm{sin}\varphi }{m_pc^2}\,\frac{\nu }{\nu _c}\int _{\nu /\nu _c}^{\mathrm{\infty }}K_{5/3}(x)\,dx$$
(2-2)
where a major fraction of the synchrotron power is produced at the characteristic frequency $`\nu =\nu _c\equiv \nu _{cyc}\gamma ^2=3eB\gamma ^2/4\pi m_pc`$ and $`\mathrm{sin}\varphi `$ accounts for the pitch angle between the magnetic field and the proton velocity. The emission coefficient is
$$j_\nu =\frac{1}{4\pi }\int d\gamma \,N(\gamma )P(\nu ,\gamma )$$
(2-3)
and the absorption coefficient is
$$\alpha _\nu =-\frac{1}{8\pi m_p\nu ^2}\int d\gamma \,P(\nu ,\gamma )\,\gamma ^2\frac{d}{d\gamma }\left(\frac{N(\gamma )}{\gamma ^2}\right)$$
(2-4)
where $`N(\gamma )`$ is the distribution of protons per unit volume with the Lorentz factor $`\gamma `$. In our simple model for the relativistic protons, $`\int d\gamma N(\gamma )=ϵn_p`$.
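For readers who want to evaluate these expressions directly, the sketch below computes the single-proton power of eq. (2-2) and the emissivity of eq. (2-3) numerically with SciPy. The field strength and relativistic-proton density used at the end are illustrative values only, and the power-law cutoffs $`\gamma _{min}=10`$ and $`\gamma _{max}=10^4`$ are assumptions not specified in the text.

```python
# Numerical sketch of eqs. (2-2)-(2-3): single-proton synchrotron power P(nu, gamma) and
# the emissivity j_nu for a power-law N(gamma) ~ gamma**-s normalized to eps*n_p.
# CGS constants; sin(phi) = 1/2 as in the text.
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

e, m_p, c = 4.803e-10, 1.673e-24, 2.998e10

def nu_c(gamma, B):
    return 3.0 * e * B * gamma**2 / (4.0 * np.pi * m_p * c)

def P_nu(nu, gamma, B, sin_phi=0.5):
    x = nu / nu_c(gamma, B)
    F = x * quad(lambda t: kv(5.0 / 3.0, t), x, np.inf)[0]   # x * int_x^inf K_{5/3}(t) dt
    return np.sqrt(3.0) * e**3 * B * sin_phi / (m_p * c**2) * F

def j_nu(nu, B, eps_np, s=2.5, g_min=10.0, g_max=1e4):
    # eq. (2-3), with N(gamma) = K * gamma**-s and int N dgamma = eps_np
    K = eps_np * (s - 1.0) / (g_min**(1.0 - s) - g_max**(1.0 - s))
    integrand = lambda g: K * g**-s * P_nu(nu, g, B)
    return quad(integrand, g_min, g_max, limit=200)[0] / (4.0 * np.pi)

B, eps_np = 1e8, 1e15   # field (G) and relativistic-proton density (cm^-3); illustrative only
print(f"j_nu(1e15 Hz) ~ {j_nu(1e15, B, eps_np):.2e} erg/s/cm^3/Hz/sr")
```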
Taking a power-law slope $`s=2.5`$ and assuming a constant pitch angle $`\mathrm{sin}\varphi =1/2`$, we get
$$\alpha _\nu \approx 6.1\times 10^5ϵn_pB^{9/4}\nu ^{-13/4}\ cm^{-1}$$
(2-5)
or, for a characteristic absorption length scale comparable to the scale height of the hot accretion flow, $`H\approx R`$ (e.g. Narayan & Yi 1995), the self-absorption depth is $`\tau _\nu \approx \alpha _\nu H\approx 6.1\times 10^5ϵn_pRB^{9/4}\nu ^{-13/4}`$. Using the hot accretion flow solution, $`\tau _\nu \approx 2.2\times 10^{-3}ϵ_4\alpha _1^{-1}\dot{m}r^{-1/2}B_8^{9/4}\nu _{15}^{-13/4}`$ where $`ϵ_4=ϵ/10^{-4}`$ and $`\nu _{15}=\nu /10^{15}Hz`$. The source function for the self-absorbed part of the synchrotron emission spectrum is
$$S_\nu =j_\nu /\alpha _\nu \approx 4\times 10^{-30}B^{-1/2}\nu ^{5/2}\ erg/s/cm^2/Hz.$$
(2-6)
When the magnetic field is the internal equipartition field,
$$\tau _\nu \approx 9\times 10^{-8}ϵ_4\alpha _1^{-17/8}m^{-9/8}\dot{m}_2^{17/8}r_1^{-53/16}\nu _{15}^{-13/4}$$
(2-7)
where $`r_1=r/10`$. The self-absorption occurs when $`\tau _\nu =1`$, i.e. at $`\nu _{abs}\approx 7\times 10^{12}ϵ_4^{4/13}\alpha _1^{-17/26}m^{-9/26}\dot{m}_2^{17/26}r_1^{-53/52}Hz`$. The synchrotron luminosity $`L_{sync}=\nu L_\nu `$ is estimated as
$$L_{sync}\approx 4\times 10^{26}ϵ_4^{0.79}\alpha _1^{-1.43}m^{1.61}\dot{m}_2^{1.43}\nu _{13}^{0.92}\ erg/s$$
(2-8)
which could extend up to $`\nu =\nu _{max}\approx 7\times 10^{13}ϵ_4^{0.31}\alpha _1^{-0.65}m^{-0.35}\dot{m}_2^{0.65}`$ Hz, where $`\nu _{13}=\nu /10^{13}Hz`$ and the exponents have been rounded off for convenience. Therefore, such a proton synchrotron luminosity is several orders of magnitude lower than the luminosity due to electron cooling for all reasonable parameters (Yi & Narayan 1995).
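A compact way to explore these scalings is to code them up directly; the snippet below evaluates the self-absorption frequency and eq. (2-8) for illustrative parameter values.

```python
# Self-absorption frequency and luminosity scalings for the internal equipartition
# field, using the coefficients and exponents quoted in the text.
def nu_abs_equip(eps_4, alpha_1, m, mdot_2, r_1):
    return 7e12 * eps_4**(4/13) * alpha_1**(-17/26) * m**(-9/26) * mdot_2**(17/26) * r_1**(-53/52)

def L_sync_equip(nu_13, eps_4, alpha_1, m, mdot_2):
    return 4e26 * eps_4**0.79 * alpha_1**-1.43 * m**1.61 * mdot_2**1.43 * nu_13**0.92

eps_4, alpha_1, m, mdot_2 = 1.0, 1.0, 1.4, 1.0
print(f"nu_abs(r=10)      ~ {nu_abs_equip(eps_4, alpha_1, m, mdot_2, 1.0):.1e} Hz")
print(f"nu L_nu(1e13 Hz)  ~ {L_sync_equip(1.0, eps_4, alpha_1, m, mdot_2):.1e} erg/s")
# For a quiescent black hole binary (mdot ~ 1e-4, i.e. mdot_2 ~ 0.01) these numbers drop
# further, which is why the equipartition-field signal is essentially undetectable.
```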
However, in strongly magnetized neutron star systems the magnetic field is the stellar dipole field, $`B_8=4\times 10^5\mu _{30}m^{-3}r^{-3}`$ (e.g. Frank et al. 1992), and the synchrotron emission could be much stronger and extend to much higher frequencies. In this case the absorption depth is $`\tau _\nu \approx ϵ_4\mu _{30}^{9/4}\alpha _1^{-1}m^{-29/4}\dot{m}_2r_1^{-8}\nu _{15}^{-13/4}`$ and the self-absorption occurs at
$$\nu _{abs}\approx 1\times 10^{15}ϵ_4^{4/13}\mu _{30}^{9/13}\alpha _1^{-4/13}m^{-29/13}\dot{m}_2^{4/13}r_1^{-32/13}\ Hz.$$
(2-9)
The luminosity is estimated as $`L_{sync}\approx 4\times 10^{33}ϵ_4^{0.44}\mu _{30}^{0.55}\alpha _1^{-0.44}m^{0.33}\dot{m}_2^{0.44}\nu _{15}^{2.08}erg/s`$. The emission could extend up to $`\nu =\nu _{max}\approx 2\times 10^{16}ϵ_4^{0.31}\mu _{30}^{0.69}\alpha _1^{-0.31}m^{-2.23}\dot{m}_2^{0.31}`$ Hz, where the exponents have again been rounded off for convenience. The expected luminosity is significant enough for possible detection (see below).
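The corresponding evaluation for the stellar dipole field, using eq. (2-9) and the luminosity scaling quoted above, is sketched below; the parameter values are again illustrative.

```python
# Self-absorption frequency, luminosity, and maximum frequency for the stellar dipole
# field, using the coefficients and exponents quoted in the text.
def nu_abs_dipole(eps_4, mu_30, alpha_1, m, mdot_2, r_1):
    return (1e15 * eps_4**(4/13) * mu_30**(9/13) * alpha_1**(-4/13)
            * m**(-29/13) * mdot_2**(4/13) * r_1**(-32/13))

def L_sync_dipole(nu_15, eps_4, mu_30, alpha_1, m, mdot_2):
    return 4e33 * eps_4**0.44 * mu_30**0.55 * alpha_1**-0.44 * m**0.33 * mdot_2**0.44 * nu_15**2.08

def nu_max_dipole(eps_4, mu_30, alpha_1, m, mdot_2):
    return 2e16 * eps_4**0.31 * mu_30**0.69 * alpha_1**-0.31 * m**-2.23 * mdot_2**0.31

eps_4, mu_30, alpha_1, m, mdot_2 = 1.0, 2.0, 1.0, 1.4, 2.0
print(f"nu_abs(r=10)     ~ {nu_abs_dipole(eps_4, mu_30, alpha_1, m, mdot_2, 1.0):.1e} Hz")
print(f"nu L_nu(1e15 Hz) ~ {L_sync_dipole(1.0, eps_4, mu_30, alpha_1, m, mdot_2):.1e} erg/s")
print(f"nu_max           ~ {nu_max_dipole(eps_4, mu_30, alpha_1, m, mdot_2):.1e} Hz")
```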
## 3 Possible Detections
We expect roughly $`L_{sync}=\nu L_\nu \propto \nu `$ for black hole systems, where the magnetic field is the internal equipartition field, and $`L_{sync}=\nu L_\nu \propto \nu ^2`$ for neutron star systems with strong magnetic fields. Since the hot accretion flows are most likely present during low luminosity states, the above estimate for the internal field case suggests that in black hole systems detection of the proton synchrotron emission is unlikely. For instance, during quiescence of A0620-00 ($`M\approx 6M_{\odot }`$), $`\dot{m}\approx 2\times 10^{-4}`$ for $`\alpha \approx 0.3`$ (Yi & Narayan 1997 and references therein). Using the above results we immediately get $`\nu L_\nu \approx 1\times 10^{25}ϵ_4^{0.79}\alpha _1^{-1.43}erg/s`$ at $`\nu =10^{13}Hz`$ and $`\nu L_\nu \approx 1\times 10^{26}ϵ_4^{0.79}\alpha _1^{-1.43}erg/s`$ at $`\nu =10^{14}Hz`$. The proton synchrotron emission is too weak for detection. Even for accretion rates as high as $`\dot{m}\approx 0.1`$, the synchrotron emission is unlikely to be detected. In most cases, the electron synchrotron emission is expected to be much stronger, and it could make a significant contribution to the observed emission spectra (Narayan & Yi 1995).
The existence of the hot, two-temperature accretion flows in neutron star systems is hard to prove because the emission from the surface of the neutron star dominates the emission spectrum (e.g. Narayan & Yi 1995, Yi et al. 1996). The luminosity from the stellar surface, $`L_x\approx GM\dot{M}/R_{NS}\approx 3\times 10^{36}\dot{m}_2\ erg/s`$, most likely emerges in the X-ray range (Frank et al. 1992). Therefore, the luminosity from neutron star systems is not expected to reflect the low radiative efficiency which is often attributed to the hot, two-temperature accretion flows (Narayan & Yi 1995, Yi et al. 1996, Narayan et al. 1998b). Using the expression for $`L_x`$, we get
$$L_{sync}\approx 2\times 10^{33}ϵ_4^{0.44}\mu _{30}^{0.55}\alpha _1^{-0.44}m^{0.33}L_{x,36}^{0.44}\nu _{15}^{2.08}\ erg/s$$
(3-1)
where $`L_{x,36}=L_x/(10^{36}erg/s)`$. However, there are a few pulsar systems which show some indication of hot accretion flows: several systems have shown puzzling torque reversals (Yi et al. 1997). The spin-down episodes could be due to the transition of the accretion flow from a cool, thin accretion disk to a hot accretion flow (Yi & Wheeler 1998). In neutron star systems, the electron temperature becomes much lower than in black hole systems because of intense cooling of the electrons by soft photons emitted at the surface of the neutron star where the accretion flow lands. Since the electrons and ions are not strongly coupled (only weakly, through Coulomb collisions), the ion temperature remains nearly unaffected. As a result, the electron temperature falls well below $`10^9K`$ and the electron synchrotron emission is effectively quenched (Narayan & Yi 1995). As long as the protons remain hot, the proton synchrotron emission remains unaffected.
Assuming $`M=1.4M_{\odot }`$ and $`R_{NS}=10km`$, we consider the pulsar systems which showed abrupt torque reversals, and interpret their spin-down episodes as due to the hot, two-temperature accretion flows. First, 4U 1626-67’s torque reversal event could be accounted for by $`\dot{m}\approx 2\times 10^{-2}`$ and $`\mu _{30}\approx 2`$ (Yi et al. 1997), which lead to $`L_{sync}\approx 9\times 10^{33}ϵ_4^{0.44}\alpha _1^{-0.44}erg/s`$ at $`\nu =10^{15}Hz`$ and $`L_{sync}\approx 1\times 10^{36}ϵ_4^{0.44}\alpha _1^{-0.44}erg/s`$ at $`\nu =10^{16}Hz`$. The soft X-ray luminosity is expected to be $`L_x\approx 6\times 10^{36}erg/s`$. Similarly, GX 1+4’s parameters are estimated as $`\dot{m}\approx 5\times 10^{-2}`$ and $`\mu _{30}\approx 50`$ (Yi et al. 1997, Yi & Wheeler 1998), which gives $`L_{sync}\approx 8\times 10^{34}ϵ_4^{0.44}\alpha _1^{-0.44}erg/s`$ at $`\nu =10^{15}Hz`$ and $`L_{sync}\approx 9\times 10^{36}ϵ_4^{0.44}\alpha _1^{-0.44}erg/s`$ at $`\nu =10^{16}Hz`$; the X-ray luminosity is $`L_x\approx 2\times 10^{37}erg/s`$. Finally, OAO 1657-415 (Yi & Wheeler 1998) has also shown an abrupt torque reversal, which suggests $`\dot{m}\approx 5\times 10^{-2}`$ and $`\mu _{30}\approx 20`$, or $`L_{sync}\approx 4\times 10^{34}ϵ_4^{0.44}\alpha _1^{-0.44}erg/s`$ at $`\nu =10^{15}Hz`$ and $`L_{sync}\approx 5\times 10^{36}ϵ_4^{0.44}\alpha _1^{-0.44}erg/s`$ at $`\nu =10^{16}Hz`$. $`L_x`$ is expected to be similar to that of GX 1+4.
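These source-by-source numbers follow directly from the dipole-field luminosity scaling; the sketch below reproduces them for $`ϵ_4=\alpha _1=1`$ and $`m=1.4`$, with $`\mu _{30}`$ and $`\dot{m}`$ taken from the estimates quoted above.

```python
# Predicted proton synchrotron luminosities for the three torque-reversing pulsars,
# using the dipole-field scaling above with eps_4 = alpha_1 = 1 and m = 1.4.
def L_sync(nu_15, mu_30, mdot_2, m=1.4, eps_4=1.0, alpha_1=1.0):
    return 4e33 * eps_4**0.44 * mu_30**0.55 * alpha_1**-0.44 * m**0.33 * mdot_2**0.44 * nu_15**2.08

sources = {           # (mu_30, mdot) as estimated in the text
    "4U 1626-67":   (2.0,  2e-2),
    "GX 1+4":       (50.0, 5e-2),
    "OAO 1657-415": (20.0, 5e-2),
}
for name, (mu_30, mdot) in sources.items():
    mdot_2 = mdot / 0.01
    Lx = 3e36 * mdot_2                     # soft X-ray luminosity from the stellar surface
    print(f"{name:14s}  L_sync(1e15 Hz) ~ {L_sync(1.0, mu_30, mdot_2):.1e}  "
          f"L_sync(1e16 Hz) ~ {L_sync(10.0, mu_30, mdot_2):.1e}  L_x ~ {Lx:.1e} erg/s")
```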
It is particularly interesting that 4U 1626-67 has been detected in the optical with a luminosity $`\approx 5\times 10^{33}erg/s`$ at $`5500\AA `$ (for an assumed distance of 8.5 kpc) and $`\nu L_\nu \propto \nu ^2`$ (Chakrabarty 1998), which is similar to our prediction if the relativistic protons indeed have a power-law energy spectrum with index $`s=2.5`$. Given that $`s=2.5`$ is amply motivated by the Galactic Center source Sgr A*, it is quite interesting that a similar population of relativistic protons may account for the optical emission in 4U 1626-67. For the accretion parameters discussed above, if the observed optical emission is the proton synchrotron emission, we immediately obtain an estimate of the fraction of relativistic protons, $`ϵ\approx 5\times 10^{-4}\alpha _1`$. This fraction is not far from that assumed for the Galactic Center source Sgr A*. Chakrabarty (1998) suggests that the observed optical emission may be accounted for by the X-ray irradiated accretion flow. While this possibility cannot be ruled out, it is difficult for a compact binary such as 4U 1626-67 to have the large outer radius required for the irradiation to be effective in the accretion flow (Chakrabarty 1998). It is also unclear whether the radiation from the magnetic poles can effectively heat the outer accretion flow (e.g. Yi & Vishniac 1998). In the case of GX 1+4, the pulsar is fed by wind accretion from a giant secondary (e.g. Chakrabarty et al. 1998), and it is unclear whether the torque reversing mechanism is similar to that of 4U 1626-67 (Yi et al. 1997).
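The implied proton fraction can be obtained by inverting the same luminosity scaling at the observed frequency; a minimal sketch, assuming the 4U 1626-67 parameters quoted above, is given below.

```python
# Inverting the dipole-field luminosity scaling to estimate the relativistic proton
# fraction eps that would reproduce the observed optical luminosity of 4U 1626-67.
L_obs, nu_obs = 5e33, 3e10 / 5500e-8        # erg/s at 5500 Angstrom
mu_30, mdot_2, m, alpha_1 = 2.0, 2.0, 1.4, 1.0

L_unit = 4e33 * mu_30**0.55 * alpha_1**-0.44 * m**0.33 * mdot_2**0.44 * (nu_obs / 1e15)**2.08
eps_4 = (L_obs / L_unit)**(1.0 / 0.44)      # since L_sync scales as eps_4**0.44
print(f"implied eps ~ {1e-4 * eps_4:.1e} (for alpha_1 = 1)")
# This reproduces the eps ~ 5e-4 * alpha_1 estimate quoted in the text.
```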
If the accretion flow at large radii is in the form of a thin disk, the optical emission from the thin disk could occur at $`\nu \approx 10^{15}Hz`$ with a quasi-blackbody spectrum at a temperature of $`\sim `$ a few $`\times 10^4\dot{m}_2(r_o/10^3)^{-4/3}K`$ (Frank et al. 1992). The proton synchrotron luminosity at $`\nu \approx 10^{15}Hz`$ exceeds the quasi-blackbody emission from the thin disk if $`\dot{m}<2\times 10^{-2}(r_o/10^3)^{1.8}`$. Therefore, we conclude that the proton synchrotron emission should be detectable, with a characteristic spectral index $`\approx 1`$ (i.e. $`I_\nu \propto \nu `$), if the accretion flow is a two-temperature, hot flow.
## 4 Discussion
We have shown that in strongly magnetized neutron star systems, the existence of energetic protons in the hot accretion flows could be confirmed by the detection of synchrotron emission from relativistic protons. Although the required number density of non-thermal protons remains highly uncertain, the detection of such a radiation signature is possible if the relativistic protons are similar to those recently discussed in the context of the gamma-ray emission from the Galactic center source Sgr A* (Mahadevan et al. 1997, Mahadevan 1998). The proton energy distribution index $`s=2.5`$ makes a particularly interesting case for 4U 1626-67; a similar index is motivated by the proton signatures of Sgr A* (Mahadevan et al. 1997, Mahadevan 1998).
The existence of the two-temperature plasma around accreting black holes and neutron stars in their low luminosity states has been shown to be very plausible. However, due to the lack of any direct test of this possibility for relativistic protons, spectral evidence has been the basis of the recent discussions of the low efficiency, two-temperature flows. Therefore, non-detection of the proton synchrotron signature would imply either that the hot protons lack a relativistic component or that the neutron star systems do not host hot accretion flows. If the former is the case, the recently suggested gamma-ray signature of Sgr A* could be questioned (Mahadevan et al. 1997).
In neutron star systems, electrons are cooled much more efficiently than in black hole systems, while protons remain nearly virialized (Narayan & Yi 1995). Since ions are not likely to be thermalized once they are produced by non-thermal acceleration processes, the relativistic protons could remain highly non-thermal. These protons could lose a significant part of their energy via proton synchrotron emission. In black hole systems, in the absence of any strong stellar field, the synchrotron emission is much weaker. Therefore, the two-temperature accretion flows and energetic protons could be "directly" detected more easily in neutron star systems.
So far, there has been no convincing evidence for hot accretion flows in neutron star systems, although the abrupt torque reversal events have been attributed to an accretion flow transition between the cool, geometrically thin accretion disk and the hot, geometrically thick accretion flow. If the reversal is indeed due to such a transition, the proton synchrotron emission would be seen only during spin-down, since the hot accretion flow exists only during spin-down. Interestingly, it has been noted that the torque reversal seen in GX 1+4 is much more gradual than, and different from, the more puzzling 4U 1626-67 event (Yi et al. 1997). The detected optical emission in 4U 1626-67 shows a luminosity and spectral slope interestingly close to those expected for proton synchrotron emission. If proton synchrotron radiation is indeed responsible for the optical emission, a strong polarization signal is expected. If the GX 1+4 event is due to some mechanism other than the accretion flow transition, then GX 1+4’s spin-down phase should lack the proton synchrotron emission signature. The predicted correlation between $`L_{sync}`$ and $`L_x`$ could provide an additional test of the two-temperature accretion flow.
IY thanks R. Mahadevan for related conversations and R. Narayan for informing us of recent work on nuclear spallation. JY acknowledges partial support from Ministry of Education research fund BSRI 97-2427, MOST project 97-N6-01-01-A-9, and KOSEF project 961-0210-061-2. IY acknowledges partial support from KRF grant 1998-001-D00365.